
Changes between Initial Version and Version 1 of Proto/bSoftwareRelease/c0GENIDeployment

Timestamp: Nov 29, 2014, 1:12:15 AM
Author: seskar
= GENI Deployment =

[[TOC(Proto/bSoftwareRelease*, depth=1)]]

== Pre-requisites ==
 * '''GENI account:''' To create a slice and reserve resources on GENI, you must first get an experimenter account, e.g., from the GENI Experimenter Portal. The account is normally associated with a project, so you can either create a new project or join an existing one - e.g., '!MobilityFirst'.

The following steps assume experiments will be set up and run on the GENI slice from a separate control node. The controller must be able to reach at least one interface on each reserved host in order to issue commands over SSH.
== 1. Prepare the Controller ==
Any platform with a Unix-like environment will do, including Cygwin.
 * Install [http://trac.gpolab.bbn.com/gcf/wiki gcf (GENI Control Framework)] - requires Python. This includes the [http://trac.gpolab.bbn.com/gcf/wiki/Omni Omni] tools required to reserve resources.
 * Download the !MobilityFirst software release to access the GENI-specific control scripts found under eval/geni/scripts.
 * Install a recent version of Perl, which is required by some helper scripts.
 * Set environment variables as below:
{{{
export PYTHONPATH=path/to/gcf/src

export PATH=${PATH}:path/to/gcf/src:path/to/gcf/examples
}}}
== 2. Create GENI Slice and Reserve Resources ==
The Omni command-line tool is one way (and the one we prefer) to create slices and reserve resources. You can also use the graphical tools noted on the GENI site and experimenter portal (e.g., Flack) to achieve identical results. The following instructions assume Omni.
 === 2.a. Create a Slice ===
A slice binds together all slivers (resource allocations) created across two or more resource aggregates (RAs). One or more public keys (effectively, users) can be associated with a slice, and these determine who can access its resources.
{{{
omni.py createslice <slicename>
}}}
Note that the Omni tool authenticates the user before executing the operation. Authentication is handled through a clearinghouse (CH) that manages project and user identities; the CH also registers the slices for each user. A '-a URL' option can be used to specify a particular CH, which usually defaults to the CH associated with the GENI portal.
 === 2.b. Create RSPECs ===
An RSPEC is an XML-based definition of the resources you want to request from a particular RA. These can be for hosts, VMs, links (tunnels or VLANs) or any other resource type supported by an aggregate. You can either [http://groups.geni.net/geni/wiki/HowTo/WriteOFv3Rspecs write RSPECs from scratch] or extract them from the resource advertisements returned by an RA when queried.
 === 2.c. Reserve Resources ===
The collection of resources reserved within an RA under a particular slice constitutes a sliver. Due to limitations in the GENI RA implementation, only one sliver can exist at an RA per slice, and the sliver is immutable once created.
Resources described in an RSPEC can be reserved either using Omni directly or using the 'createsliver.sh' helper in the MF scripts dir. Using Omni directly:
{{{
omni.py -a <AR-URL|AR-nickname> createsliver <slicename> <rspec-file>
}}}
Or using the MF helper script, which can handle any number of RSPEC files:
{{{
createsliver.sh <slicename> <rspec_file1> <rspec_file2> .....
}}}
This helper script is handy when all RSPECs are in a single folder and simple shell expansion can be used to specify them all. For this simpler form to work, we cheat a little by adding an AM nickname comment to each RSPEC file at creation time - it can be placed anywhere in the file. The format of the comment is shown in the example below, taken from the RSPEC for the Rutgers InstaGENI AR:
{{{
<!--
AM nickname: ru-ig
-->
}}}
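For illustration, here is one way such a nickname could be pulled out of an RSPEC with standard tools. This is a hedged sketch, not necessarily how createsliver.sh parses it, and it assumes the exact "AM nickname: <name>" comment format shown above:

```shell
# Hypothetical sketch: extract the AM nickname from an RSPEC's comment.
# Create a minimal sample file for demonstration purposes.
cat > /tmp/sample.rspec <<'EOF'
<!--
AM nickname: ru-ig
-->
EOF

# Print whatever follows "AM nickname:" on its line
sed -n 's/^AM nickname:[[:space:]]*//p' /tmp/sample.rspec
```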
== 3. Setup Layer-2 Connectivity Between Routers ==
Since MF introduces a new non-IP layer-3 protocol, the router deployment requires layer-2 connectivity between neighbor nodes. This can be achieved in one of several ways supported within GENI - refer to [http://groups.geni.net/geni/wiki/ConnectivityOverview GENI's Connectivity Overview page].
A separate 'fia-mobilityfirst' VLAN is currently set aside for MF experimentation. It currently connects 7 InstaGENI rack sites - Rutgers, NYU, BBN/GPO, NYSERNet, UIUC, UWisc, and U.Utah - and provides a single layer-2 broadcast domain across these locations. Since this is a LAN, a network topology will need to be enforced to achieve the required neighbor connectivity between routers. This is supported within the Click-based MF software router by passing it a simple topology file that specifies the adjacency.
This VLAN can easily be shared across multiple experimenters, with a little coordination, by using distinct Ethertype values when framing layer-2 packets. The value can be specified in the Click router configuration. If you decide to use this VLAN, please get in touch with Ivan Seskar or Kiran Nagaraja at WINLAB to get an Ethertype value assigned for your experiments so we can avoid conflicts.
Current Ethertype Usage:

|| Ethertype || Group / Univ. || Contact Person || Experiment Description ||
|| 0x27c0 || WINLAB || Kiran Nagaraja || long term deployment ||

== 4. Configure !MobilityFirst Deployment ==
The MF helper scripts rely on the configuration specified in a single file named 'config'. A template, 'sample-config', is provided with the scripts and can be customized to specify the particulars of the deployment.
The key properties to change in the 'config' file are:
=== 4.a. GENI account ===
{{{
# -----------------------
# GENI account properties
# -----------------------

key="/path/to/geni/private_key"
username="mygeniusername"
}}}
=== 4.b. Network ===
The MF deployment assumes a two-level topology of core and edge networks. The helper scripts identify interfaces on the router and host nodes as either edge- or core-facing based on whether the assigned IP belongs to a core- or edge-designated subnet. This is used, for instance, by the router control script that brings up either an edge router (which has additional host services) or a core router on a particular experiment node.
{{{
# ---------------------
# network configuration
# ---------------------

#the IP subnet that will be used for GNRS service plane
Netfilter="10.44.0.0/16"

#the IP subnet that will be used for end-host access
Edgefilter="10.43.0.0/16"
}}}
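The classification rule described above can be sketched as follows. This is a hedged illustration, not the helper scripts' actual code; the function name iface_role is hypothetical, and the two /16 prefixes mirror the sample config above:

```shell
# Hypothetical helper mirroring the config above: classify an interface by
# whether its IPv4 address falls in the core (Netfilter, 10.44.0.0/16) or
# edge (Edgefilter, 10.43.0.0/16) subnet. A simple string-prefix match
# suffices for /16 prefixes aligned on octet boundaries.
iface_role() {
    case "$1" in
        10.44.*) echo core ;;     # GNRS/core subnet per Netfilter
        10.43.*) echo edge ;;     # end-host access subnet per Edgefilter
        *)       echo control ;;  # anything else, e.g. the public control IP
    esac
}

iface_role 10.44.2.1       # core
iface_role 10.43.0.128     # edge
iface_role 192.1.242.158   # control
```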
=== 4.c. Source code ===
These properties specify which MF repository to use as origin during the install, and the specific branch or tag that should be installed.
{{{
# --------------------
# code base properties
# --------------------

#mf git repo; bitbucket
repo_username="myrepousername"
mfgitorigin="https://${repo_username}@bitbucket.org/nkiran/mobilityfirst.git"

#mf branch to install
mfbranch="master"

#click release to install
clickversion="v2.0.1"
}}}
== 5. Build a nodes-file (a handy list of all reserved nodes) ==
Since we'll be using SSH to issue commands at each node, and often issuing the same commands across several or all nodes, a list of the hostnames or their control IPs is needed to address each node. Remember that RSPECs only specify data-plane interfaces; the control interfaces are determined once the resources are allocated and active. So we've built a helper script that uses the RSPECs to query the RAs about the control interfaces and build a list. It also gathers a few more details from the nodes, such as the names of the data-plane interfaces (e.g., eth0, eth1), since these are not uniformly assigned across different RAs, and derives the GUID of each node by passing its OpenSSH host key through a hash function. All of these details can be obtained using the 'identifynodes.sh' helper script:
{{{
identifynodes.sh <slicename> <rspec-file1> [<rspec-file2>] ...  > <nodes-file>
}}}
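The GUID derivation mentioned above can be sketched like this. It is an illustrative guess, not the exact identifynodes.sh implementation: here the host-key text is hashed with SHA-1 and the first 32 bits are kept as a decimal GUID; the real script's hash and truncation may differ.

```shell
# Hypothetical sketch of deriving a numeric GUID from a node's OpenSSH
# host key (the actual hash/truncation used by identifynodes.sh may differ).
guid_from_hostkey() {
    # $1: the node's public host key as a string.
    # sha1sum the key text, take the first 8 hex digits (32 bits),
    # and print them as a decimal number.
    printf '%d\n' "0x$(printf '%s' "$1" | sha1sum | cut -c1-8)"
}

guid_from_hostkey "ssh-rsa AAAAB3Nza... root@pcvm5-44"
```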
By passing all RSPEC files to the script, we can build the list of all deployed nodes at once and capture the output in a nodes-file. Here's a sample nodes-file from the MF long-running deployment across 7 InstaGENI sites:
{{{
#hostname,interface,hwaddr,ipv4addr,guid
pcvm5-44.instageni.gpolab.bbn.com,eth0,02:c8:9a:b2:f9:0c,192.1.242.158,343275562
pcvm5-44.instageni.gpolab.bbn.com,eth1,02:5a:68:40:8a:d7,10.44.2.1,343275562
pcvm5-45.instageni.gpolab.bbn.com,eth0,02:b1:fe:5a:30:12,192.1.242.159,1326973177
pcvm5-45.instageni.gpolab.bbn.com,eth1,02:9b:09:39:61:27,10.44.2.128,1326973177
pcvm3-30.instageni.illinois.edu,eth0,02:27:98:d4:e7:ad,72.36.65.65,1105395882
pcvm3-30.instageni.illinois.edu,eth1,02:06:aa:9e:17:37,10.44.9.1,1105395882
pcvm3-31.instageni.illinois.edu,eth0,02:ad:81:00:57:1b,72.36.65.68,1087418188
pcvm3-31.instageni.illinois.edu,eth1,02:61:12:f6:c7:17,10.44.9.128,1087418188
pcvm3-6.instageni.nysernet.org,eth0,02:2c:68:ee:70:6e,199.109.64.50,1864282817
pcvm3-6.instageni.nysernet.org,eth1,02:1c:54:e4:58:f8,10.44.18.1,1864282817
pcvm3-7.instageni.nysernet.org,eth0,02:a2:63:55:83:31,199.109.64.52,1008227076
pcvm3-7.instageni.nysernet.org,eth1,02:d9:73:3e:30:0b,10.44.18.128,1008227076
pcvm3-1.genirack.nyu.edu,eth0,02:af:aa:76:0f:74,192.86.139.64,743633713
pcvm3-1.genirack.nyu.edu,eth1,02:b9:93:28:39:08,10.43.4.1,743633713
pcvm3-1.genirack.nyu.edu,eth2,02:64:14:d8:86:98,10.44.4.1,743633713
pcvm3-3.genirack.nyu.edu,eth0,02:1a:0b:1f:01:fa,192.86.139.65,1650457279
pcvm3-3.genirack.nyu.edu,eth1,02:64:1a:41:39:0e,10.44.4.128,1650457279
pcvm3-3.genirack.nyu.edu,eth2,02:c5:ea:b3:66:ec,10.43.4.128,1650457279
pcvm3-3.instageni.rutgers.edu,eth0,02:6c:9a:f2:39:99,165.230.161.230,1455426667
pcvm3-3.instageni.rutgers.edu,eth1,02:a9:01:f8:f4:78,10.43.0.1,1455426667
pcvm3-3.instageni.rutgers.edu,eth2,02:c4:01:9e:d9:9f,10.44.0.1,1455426667
pcvm3-4.instageni.rutgers.edu,eth0,02:a5:c7:4d:6d:5b,165.230.161.231,1394255251
pcvm3-4.instageni.rutgers.edu,eth1,02:d4:ce:eb:6c:04,10.43.0.128,1394255251
pcvm3-4.instageni.rutgers.edu,eth2,02:2b:39:c5:4b:9c,10.44.0.128,1394255251
pcvm3-1.utah.geniracks.net,eth0,02:9f:d8:b1:37:34,155.98.34.130,603169490
pcvm3-1.utah.geniracks.net,eth1,02:37:37:70:fc:37,10.44.14.1,603169490
pcvm3-2.utah.geniracks.net,eth0,02:ee:0d:90:2f:df,155.98.34.131,646019851
pcvm3-2.utah.geniracks.net,eth1,02:17:a0:cb:59:c0,10.44.14.128,646019851
pcvm3-24.instageni.wisc.edu,eth0,02:93:6c:e5:bf:39,128.104.159.129,1852534458
pcvm3-24.instageni.wisc.edu,eth1,02:ee:22:cc:d9:3e,10.44.8.1,1852534458
pcvm3-24.instageni.wisc.edu,eth2,02:38:f4:c9:f3:a1,10.43.8.1,1852534458
pcvm3-25.instageni.wisc.edu,eth0,02:9f:f7:47:c3:a8,128.104.159.131,336385182
pcvm3-25.instageni.wisc.edu,eth1,02:2e:ad:b2:a0:df,10.44.8.128,336385182
pcvm3-25.instageni.wisc.edu,eth2,02:a2:e0:55:d7:87,10.43.8.128,336385182
}}}
Note that there is one line per interface on each node. Some nodes have 'core' interfaces, some 'edge', and some have both. Those with both will run edge router configurations.
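The core/edge determination can be sketched from the nodes-file itself. The following is a hedged illustration (not the helper scripts' actual code), run against a trimmed sample of the nodes-file above using the subnet prefixes from the 'config':

```shell
# Hypothetical sketch: classify nodes from a nodes-file. A node with both a
# core (10.44.x.x) and an edge (10.43.x.x) interface runs an edge router;
# a core-only node runs a core router. Column 4 is the IPv4 address.
cat > /tmp/nodes.sample <<'EOF'
#hostname,interface,hwaddr,ipv4addr,guid
pcvm3-1.genirack.nyu.edu,eth1,02:b9:93:28:39:08,10.43.4.1,743633713
pcvm3-1.genirack.nyu.edu,eth2,02:64:14:d8:86:98,10.44.4.1,743633713
pcvm3-1.utah.geniracks.net,eth1,02:37:37:70:fc:37,10.44.14.1,603169490
EOF

awk -F, '!/^#/ {
    if ($4 ~ /^10\.44\./) core[$1] = 1
    if ($4 ~ /^10\.43\./) edge[$1] = 1
} END {
    for (h in core) print h, ((h in edge) ? "edge-router" : "core-router")
}' /tmp/nodes.sample
```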
== 6. Install !MobilityFirst on GENI Nodes ==
The router, naming service (GNRS), host stack, and network API libraries can be installed individually according to the role of a node. Alternatively, the provided helper script can be used to install all of these on all of the nodes in parallel. The 'installmf.sh' script uses the nodes-file assembled in the previous step.
{{{
installmf.sh <nodes-file>
}}}
The install script first copies the configuration file and a local installation script - 'localinstallmf.sh' - to each of the nodes, then executes the local script simultaneously across all of the nodes. Note that the script handles duplicate host entries in the nodes-file (unless they are DNS aliases), and nodes may be left out by commenting out ('#') the corresponding nodes-file entries.
=== Note on Git Authentication: ===
To access non-public git repos (e.g., the current MF repo) during installation on remote nodes, the username/password can be entered in more than one way. For automation, however, it's simplest to install a '.netrc' file in the home directory of each node, with the file containing the following single line:
{{{
machine <hostname> login <username> password <password>
}}}
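For example, the sample config pulls from bitbucket.org, so the entry could be installed like this (the credentials are placeholders; restricting permissions to the owner is good practice for a file holding a password):

```shell
# Illustrative .netrc setup for the bitbucket.org origin used in the sample
# config. The login and password values below are placeholders.
cat > ~/.netrc <<'EOF'
machine bitbucket.org login myrepousername password mysecret
EOF

# Keep the credentials file private to its owner
chmod 600 ~/.netrc
```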
For the wary experimenter, look into the git property 'core.askpass' to enable interactive password entry. You would have to modify the 'localinstallmf.sh' script, where this property can be set as shown below:
{{{
git config --global core.askpass /usr/lib/git-core/git-gui--askpass
}}}
== 7. Bring up Routers and GNRS ==
Helper scripts simplify bringing up and controlling router and GNRS instances on the GENI nodes. For instance, routerctl.sh automates running core and edge routers with the appropriate configurations on the designated GENI nodes. The determination of core vs. edge is presently based on implicit rules about the availability of interfaces on the edge network (i.e., an interface is assigned an IP with the edge-net prefix). A topology file that establishes GUID-based adjacency for each router is a required input. See the section on Topology Control for details on how to compose a deployment topology.
{{{
> routerctl.sh
usage: routerctl.sh <nodes-file> <cmd=list|start|stop> <topologyfile>

> routerctl.sh mynodes start mytopo
}}}
The following brings up the GNRS server instances on each of the router nodes. The provided configuration is customized to each particular instance where needed (e.g., the server listen interface):
{{{
> gnrsctl.sh
Usage: ./gnrsctl.sh <nodes-file> {config|start|stop|clean} [options]
        options:
            if 'config': <template-config-dir>

> gnrsctl.sh nodes.all config mygnrsconfdir

> gnrsctl.sh nodes.all start
}}}
Template configuration files for GNRS servers can be found under eval/geni/conf/gnrs-srvr.
== 8. Bring up Host Stacks ==
The following brings up the host stack on the nodes determined to be clients - currently determined by the implicit rule that client nodes have core interfaces with last octet > 128:
{{{
> hostctl.sh
Usage: hostctl.sh <nodes-file> <cmd=start|stop>

hostctl.sh nodes.all start
}}}