wiki:Internal/SystemPrototyping/Software

Version 23 (modified by nkiran, 12 years ago)

--

MobilityFirst Prototype

1. Overview

2. Components

2.1. Router

Our Linux-based prototype router, shown in Figure 1, follows a two-level setup: a fast data path handled by a forwarding engine, and a control path implemented at user level. The Click modular router will embody the forwarding engine in our primary prototype and will run on commodity x86 hardware. An alternate OpenFlow-based implementation, where routing and network support services may be implemented as modules in a central controller, is also under consideration. Performance may be pushed further by implementing the forwarding engine on a programmable network hardware platform such as the Stanford NetFPGA card (available in 1 and 10 Gbit port versions), enabling line-rate implementation and evaluation of the MobilityFirst protocols.

In our Click-based implementation, C++ data-path elements implement the following base components: 1.) a reliable, hop-by-hop link-level data transport, 2.) a fast store/forward-aware forwarding-table lookup, and 3.) a store manager to temporarily hold packets that cannot be forwarded immediately. The Click-based router supports one or more logical interfaces, which can be either wired or wireless. It also attempts to allocate and isolate memory and processing resources to sustain reasonable data rates across each interface.
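To illustrate the store/forward-aware lookup and store manager described above, the following Python sketch shows the decision a forwarding element might make. The data layout (a forwarding table mapping a destination GUID to a next hop and link status) is hypothetical; the actual implementation consists of Click C++ elements.

```python
def handle_chunk(chunk, dst_guid, forwarding_table, store):
    """Store/forward-aware lookup sketch (hypothetical data model).

    forwarding_table maps a destination GUID to (next_hop, link_up).
    Chunks whose next hop is unreachable are buffered in the local
    store for later delivery rather than dropped, as described above.
    """
    entry = forwarding_table.get(dst_guid)
    if entry and entry[1]:              # known next hop with a live link
        return ("forward", entry[0])
    store.setdefault(dst_guid, []).append(chunk)
    return ("store", None)

# Example: GUID 7 has a live link, GUID 9 does not
table = {7: ("nodeB", True), 9: ("nodeC", False)}
store = {}
print(handle_chunk("chunkA", 7, table, store))  # -> ('forward', 'nodeB')
print(handle_chunk("chunkB", 9, table, store))  # -> ('store', None)
```

The point of the sketch is the third base component: a failed lookup does not discard the chunk but hands it to the store manager, keyed by destination GUID.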

User-level processes, either within a framework such as the extensible open source routing platform (XORP) or as standalone processes, will implement inter- and intra-domain routing protocols, management services, and network-support services such as name resolution. Messages exchanged by these user-level services with other routers are forwarded through the Click engine via host receive and transmit queues. Updates to the forwarding tables computed by the control plane are pushed down to the forwarding engine through exported interfaces.

Component Details

2.2. Global Name Resolution Service (GNRS)

Name resolution is targeted to be a globally distributed service, with GNRS servers envisioned to be co-located with routing elements. These servers are required to provide low-latency lookup (and update) of GUID-to-locator bindings, and the service is expected to scale to the full GUID space that identifies network objects in MF.

In the present implementation, GNRS is a user-level process that runs on each router node to handle queries from the router or host protocol stack. The id-locator mappings for a GUID are 'hosted' on a node determined by a hash function with the GUID as a parameter. Mappings are therefore distributed across the address space of the nodes that participate in this hosting. One implemented scheme places mappings at the AS whose number is derived by hashing the GUID onto the participating ASes in the network. Mappings hosted at each server are persistent, while being served out of in-memory caches for low latency. The implementation also allows the use of multiple hash functions to place a mapping at multiple hosts, providing reliability under partial failures and improved lookup times by going to the nearest replica among the set.
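The hashing scheme described above can be sketched as follows. This is illustrative only: the prototype's actual hash functions and replica-selection policy are not specified here, so salted SHA-1 and the AS list are assumptions.

```python
import hashlib

def host_ases(guid, participating_ases, num_replicas=3):
    """Map a GUID to the ASes that host its id-locator mapping.

    Illustrative sketch: multiple hash functions (modeled here as
    salted SHA-1) yield multiple replica hosts, as described in the
    text, giving reliability under partial failures.
    """
    hosts = []
    for k in range(num_replicas):
        digest = hashlib.sha1(f"{guid}:{k}".encode()).hexdigest()
        index = int(digest, 16) % len(participating_ases)
        as_num = participating_ases[index]
        if as_num not in hosts:  # two salts may pick the same AS
            hosts.append(as_num)
    return hosts

# Example: place the mapping for GUID 42 among five participating ASes
print(host_ases(42, [100, 200, 300, 400, 500]))
```

Because the placement is a pure function of the GUID and the participating-AS set, any router can recompute the replica set locally and query the nearest replica.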

Component Details

2.3. Host Stack and Network API

In our clean slate approach to networking, we propose a new socket-like API to interface with services offered by MF. Services include name resolution, message-oriented communication, intentional data receipt, context and location management, and content retrieval. The API is available to applications as a user-level library, which interacts with the protocol stack, passing control and data messages between the two. A few sample applications exist in the code base and more are being worked on.

The host protocol stack implements a thin transport layer (handling message chunking and message ordering), a network layer (GUID services, temporary storage, ad-hoc routing), and an upper link layer (reliable, hop-by-hop data transport). It also implements an interface manager that can take user policy and network conditions as input to make smart use of interfaces for performance, energy, or other cost-metric optimization. The stack implementation and the network API allow for extensibility and customization of stack functions.
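The interface manager's policy-driven selection can be sketched as a weighted cost minimization. The interface names, metrics, and weights below are hypothetical; the actual manager's policy language and inputs are not specified here.

```python
def pick_interface(interfaces, weights):
    """Score each interface by a weighted cost and pick the cheapest.

    'interfaces' maps an interface name to observed metrics;
    'weights' encodes user policy (a higher weight means that
    metric matters more). All names and metrics are hypothetical.
    """
    def cost(metrics):
        return sum(weights.get(m, 0.0) * v for m, v in metrics.items())
    return min(interfaces, key=lambda name: cost(interfaces[name]))

# Example: an energy-sensitive policy steers traffic to WiFi over LTE
interfaces = {
    "wifi": {"latency_ms": 20.0, "energy_mw": 300.0},
    "lte":  {"latency_ms": 45.0, "energy_mw": 900.0},
}
policy = {"latency_ms": 1.0, "energy_mw": 0.5}
print(pick_interface(interfaces, policy))  # -> wifi
```

Changing the weights changes the choice, which is the sense in which the manager "takes user policy and network conditions as input."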

Component Details

3. Releases

No public releases yet.

4. ORBIT Evaluation

Early versions of the prototype MobilityFirst (MF) network, consisting of (1.) a Click-based router, (2.) a distributed name resolution service, and (3.) a client network API and host protocol stack, are available for evaluation on the ORBIT testbed. The steps involved in evaluating sample configurations are listed below. Before you begin, it is suggested that you familiarize yourself with the OMF framework, which is used to set up and orchestrate these experiments.

4.1. Deploying MobilityFirst on ORBIT nodes

The deployment can be done in one of the following ways:

  1. Installing an MF release (tarball or SVN revision) and dependencies on a base ORBIT image
  2. Imaging ORBIT nodes with a pre-established MF disk image

4.2. MF Disk Images for ORBIT

An MF disk image contains all components (router, gnrs, and client stack and network API library - sources and precompiled binaries) and can be installed on nodes using OMF tools.

All images listed below are stored at repository1.orbit-lab.rutgers.edu:/export/omf/omf-images-5.2.

Images Currently In Use:

||Image Name||Created on||MF Release||Description||
||mf-proto-rc1_0.ndz||4-18-2012||rc1.0||End-to-end integrated layer 2 with MF Network API, MF host stack and an MF Router with GSTAR and GNRS interface; distributed 'in-network' GNRS||
||mf-proto-gec12.ndz||11-1-2011||-||End-to-end integrated layer 2 with MF Network API, MF host stack and an MF Router with GSTAR. Router partially integrated with locally running GNRS server||
||mf-proto-trial3.ndz||9-22-2011||-||MF node with router, gnrs and client modules; OMF script configures node function. GUID assignment and topology choice from OMF script||

Loading a particular image is done using the 'load' command:

> omf load <node-set> <image-name>

e.g., 

> omf-5.X load [[1,1], [1,2], [1,3]] mf-proto-1.0.ndz

'X' in the command above refers to the version of the OMF control framework. Presently, 5.2 and 5.3 are the most used.

4.3. Inside a MobilityFirst Image

4.3.1. Code Base

The image holds the prototype code base under /usr/local/mobilityfirst/code. It has the following top-level directories:

  • click - Router elements implementing storage-aware routing, hop-by-hop reliable link-level data transport, and interface to GNRS service. This also has elements that implement Click-based sender and receiver applications.
  • gnrsd - C++ implementation of a GNRS server, and an interactive GNRS client.
  • client - C implementations of the client API and stack that compile for Linux and Android platforms. Also contains sample sender and receiver applications using the API.
  • eval

4.3.2. Binaries, Configuration Files

  1. bin - compiled binaries go here
  2. conf - config files from across sub projects, incl. click and gnrs configurations
  3. scripts - e.g., to initialize Click execution
  4. topology - definition files used within the MF router to enforce connectivity among nodes

Also installed on this image are the dependencies for the router, gnrs, and client components. A complete list of installed dependencies can be found in the README accompanying the code base.

4.3.3. Boot Script

The image also contains a boot script (/etc/init.d/mf-proto) that can be used to automate the update/compile functions. It updates the local code base to the latest release from MF SVN (TODO: auto-updating is currently disabled, pending the creation of anonymous account access to MF SVN), and then compiles and installs Click and other MF component binaries as described above. The excerpt below from the boot script shows the update and compilation of the Click router:

...

MF_DIR=/usr/local/mobilityfirst
MF_CLICK_ELEMENTS_DIR=$MF_DIR/code/click/elements
CLICK_DIR=/usr/local/src/click

#update mobilityfirst prototype code base 
cd $MF_DIR/code

#auto-update disabled pending anonymous account
#svn update

#Compile user-level click after copying MF's click elements into click codebase
rsync -vt $MF_CLICK_ELEMENTS_DIR/gstar/* $CLICK_DIR/elements/local

cd $CLICK_DIR
./configure --disable-linuxmodule --enable-local
make elemlist
make install

...

The output of the boot script is appended to /var/log/mf-proto-boot.log.

4.4. Updating or Customizing an Existing MF Image

The installed MF source can be updated to the latest release either from SVN or using a release tarball. If updating to the latest SVN version, simply run the update command from the /usr/local/mobilityfirst/code directory. If moving to a particular MF version from SVN or using a newer tarball, first delete the contents of the code directory before installing. Similarly, one can also update third-party components such as Click while creating an updated MF image.

For compiling an updated code base, we currently have a simple 'make' bash script (/usr/local/mobilityfirst/code/boot/mf-proto) that combines several steps. Usually, the compiled MF binaries are installed under /usr/local/mobilityfirst/bin. However, the third-party components we use live in various locations. For example, the source for the Click modular router is under /usr/local/src/click, and its compiled binary ends up under /usr/local/bin. Therefore, before building the Click-based router, the MF Click elements that implement the protocol stack must be installed under Click's designated source directory (/usr/local/src/click/elements/local, to be specific). The bash compilation script performs all such steps for the router, gnrs, host protocol stack, and network API library components, placing the compiled binaries in the proper locations.

Once the source has been updated, the following command compiles and installs the binaries. Alternatively, one can update just individual components, run the local make, and copy the binaries to the proper locations.

> /usr/local/mobilityfirst/code/boot/mf-proto #compilation script

Once all upgrades are ready and you want to create a new MF image, you must run an ORBIT 'prepare' script which, among other things, ensures that the package manager caches are cleaned and the interfaces are configured appropriately for creating a general disk image that can be booted on a variety of hardware nodes.

> cd /root

> ./prepare.sh #common ORBIT script to prepare file systems for creating an image

The prepare.sh script above will also log you out and turn the node off, following which you can save the newly created image by using the OMF save command:

> omf-5.X save [1,1] 

The resulting image can be found at repository1:/export/omf/omf-5.X/ and can be used to image nodes for subsequent experiments.

4.5. Configuring and Running MobilityFirst Experiments

While custom scripting can be used to execute an experiment, OMF has all the necessary functionality to reliably configure and repeat experiments on the ORBIT testbed. Both the configuration details (which nodes run which applications, with which parameters) and the experiment execution control (when to run what) can be specified within a Ruby script using OMF syntax. Refer to the OMF User Guide to get familiar with writing OMF scripts. In the next section, we present several sample scripts to get you started with MF network experimentation.

Once you define an experiment script, and have loaded the MF image on the nodes that will run MF components, you run the experiment using the 'exec' command:

> omf-5.X exec my-mf-expt.rb

The OMF runtime reports any failure to bring up components. For more detailed information, refer to the logs created by the MF components, all located under /var/log. Additionally, MF components (presently only the Click-based router) are instrumented to report key statistics, which are then logged to relational tables hosted on an OML server co-located with the testbed. The section on 'OML-based Monitoring' has more details.

4.6. Sample OMF Scripts for ORBIT

4.6.1. Test Config 1: Sender-Router-Receiver

Below is the simple topology:

         S ---- MFR ---- R

S - Sender Host, MFR - MobilityFirst Router, R - Receiver Host

The topology in these experiments is enforced within the Click implementation by a GUID-based connectivity graph specified in a topology file passed to Click. The following lines in the topology file define the above graph:

#syntax: <node-GUID> <neighbor-count> <neighbor-GUID1> [<neighbor-GUID2>] ...
1 1 2
2 2 1 3
3 1 2

Files: OMF script | topology file
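The topology-file format above is simple enough to parse into an adjacency map. The following sketch follows the syntax comment in the file; the error handling is minimal and the function name is our own, not part of the MF code base.

```python
def parse_topology(lines):
    """Parse MF topology lines ('<node-GUID> <neighbor-count> <GUIDs...>')
    into an adjacency map, following the syntax comment in the file.
    Minimal sketch: blank lines and '#' comments are skipped, and the
    declared neighbor count is checked against the listed neighbors.
    """
    adjacency = {}
    for line in lines:
        line = line.split("#")[0].strip()  # drop comments and blanks
        if not line:
            continue
        fields = [int(f) for f in line.split()]
        node, count, neighbors = fields[0], fields[1], fields[2:]
        assert len(neighbors) == count, f"bad neighbor count for GUID {node}"
        adjacency[node] = neighbors
    return adjacency

# The three-node S - MFR - R graph from Test Config 1
topo = parse_topology([
    "#syntax: <node-GUID> <neighbor-count> <neighbor-GUID1> ...",
    "1 1 2",
    "2 2 1 3",
    "3 1 2",
])
print(topo)  # -> {1: [2], 2: [1, 3], 3: [2]}
```

Note that the graph is specified symmetrically: node 2 lists both 1 and 3 as neighbors, and each of them lists 2 back.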

4.6.2. Test Config 2: Multiple Senders and Receivers

Below is the topology:

                 S2
                 |
                 |
         S1 ---- MFR1 ----- MFR2 ---- MFR3 ---- R1
                                       |
                                       |
                                       R2

S - Sender Host, MFR - MobilityFirst Router, R - Receiver Host

Files: OMF script | topology file

4.7. Steps to run experiment

  1. Choose and reserve a testbed with OMF support and the required number of nodes. ORBIT has a 400-node grid, but also has 9 sandboxed testbeds (sb1-sb9) more suitable for smaller experiments.
  2. Image the nodes with an established (and compatible; see compatibility notes below) MF image from the list of available images, OR image the nodes with a baseline ORBIT image followed by an install of an MF release from SVN or tarball.
  3. Determine the node set you will use for the experiment and ensure they are available, imaged and working.
  4. Modify the provided OMF Ruby script to use the chosen nodes. For example, if you choose nodes node1-1, node1-3, node1-4 (ORBIT hostnames; the domain part depends on the chosen testbed), then modify the statically defined node set in the Ruby script as shown below:
    ...
    
    #static topo with available nodes
    #defTopology('static_universe', [[1,1],[1,2],[1,3]]) - original 
    defTopology('static_universe', [[1,1],[1,3],[1,4]])
    baseTopo = Topology['static_universe']
    
    ...
    
    
  5. Execute the script using the appropriate version of the OMF tools. For example, the node naming convention used above (points in a grid: [1,1]) is valid for OMF version 5.2, but invalid for versions 5.3 and above, which use FQDNs to identify hosts.

4.8. Test Script to MF Image Compatibility

Owing to feature additions and/or interface modifications, older OMF scripts may be incompatible - despite our best efforts - when used against newer MF versions and images. Here is what we understand at present:

|| ||mf-proto-trial3.ndz||mf-proto-gec12.ndz||mf-proto-rc1_0.ndz||
||Test Config 1||No||No||Yes||
||Test Config 2||No||No||Yes||

4.9. Test Script to MF Release Compatibility

No releases yet.

5. GENI Evaluation

GENI, an NSF-funded initiative for a global environment for network innovation, is a multi-group collaborative effort to realize an at-scale experimental network infrastructure that is rich (i.e., with wired and wireless resources, and commercial and experimental platforms) and allows for deep programmability.

ProtoGENI is the prototype implementation and deployment of GENI. ProtoGENI is also the control framework for a number of GENI resources currently deployed on the national backbone and at several participating campuses. It is worth noting, however, that there are several GENI deployments that use other control frameworks and experimentation across ProtoGENI and these deployments is currently set up via personnel coordination/manual configuration.

The following links provide the basic information to learn about ProtoGENI and to get started with experimentation:

  • ProtoGENI Tutorial with basics on
    • Creating an account with one of the clearinghouses (e.g., Utah Emulab or BBN)
    • Setting up certificate (with managers) and key-based (with individual hosts) authentication and authorization
    • Steps and test scripts for finding and reserving resources on ProtoGENI
  • Querying and Reserving Resources can be done using either of the following:

6. MobilityFirst Prototype Demonstrations

  1. Generalized Storage-Aware Routing - GEC-12, Kansas City, MO, Nov 2, 2011
  2. Receiver-Controlled Multi-Homing - HotMobile'12, San Diego, CA, Feb 28, 2012
  3. Robust Content Delivery to Multi-homed Mobile - GEC-13, Los Angeles, CA, Mar 13, 2012
  4. Openflow-controlled Routing ByPass - WINLAB Industrial Advisory Board Meeting (IAB), WINLAB, May 14, 2012