[[TOC(Proto/bSoftwareRelease*, depth=1)]]

= Network Deployment =

== Basic Requirements ==

 * A set of x86/x64 nodes, one per routing element and end-host in the network graph; nodes can be virtual instances
 * Nodes run recent Linux distributions; Ubuntu 12.04 or 14.04 LTS is suggested
 * Layer-2 network connectivity between neighboring nodes in the network graph
 * IPv4 network connectivity between all routing elements, and pairwise IPv4 connectivity between end-hosts and their corresponding access router

== Example Network Graphs ==

{{{
Network Graph 1 (G1)
--------------------

SndrHost1 ---- MFAR1 ---- RcvrHost1


Network Graph 2 (G2)
--------------------

               SndrHost2
                   |
                   |
                   |
SndrHost1 ---- MFAR1 ------ MFR1
                / \         / \
               /   \       /   \
           MFR2 ------ MFR3 ------ MFAR2 ---- RcvrHost2
                                     |
                                     |
                                     |
                                 RcvrHost1

MFR - MobilityFirst Router, MFAR - MobilityFirst Access Router
}}}

Network graphs G1 and G2 show simple topologies with commonly used routing elements and end-hosts. As shown, hosts connect to the network through access routers (MFARs), which handle host association and optional protocol translation in addition to the forwarding operations of a regular MFR. Server instances of the GNRS service, which provides name resolution and dynamic binding of a GUID to one or more addresses, are co-deployed with the routing elements.

== Current Release Notes ==

 1. The inter-network protocol is not integrated; routers run the intra-domain routing protocol only.
 1. Hosts connect to MFARs over any of Ethernet, WiFi, or WiMAX.
 1. Host data/control traffic to and from the MFAR is IPv4 encapsulated.
 1. Between routers, all traffic is MF-over-Ethernet, except for name resolution, which runs over IPv4.

== Deployment Steps ==

 1. Define the network graph and assign a GUID to each routing element and end-host. Current implementations accept a 32-bit integer as the GUID (a sample assignment for G2 is sketched after this list).
 1. Translate the network graph into a corresponding GUID-based topology file that is provided as input to the MFRs to enforce the topology, as shown under topology control in the router deployment instructions. Topology enforcement is required when nodes are deployed in a single broadcast domain, as is often the case on testbeds. Feel free to use other means to enforce the topology, such as ebtables or iptables (see the sketch after this list).
 1. Decide on the number of service instances for the GNRS service and configure the namespace partitioning across these instances as shown under multi-server deployment. Every routing node may run a GNRS instance, or one instance may serve several routers. If your network resembles G1 above, or something similarly simple, a single-server GNRS configuration may suffice.
 1. Install the required prototype components on the nodes as per the instructions provided for the router, GNRS, host stack, and net API library.
 1. [wiki:HostProtocol/StackRunning Bring up the GNRS service] instances on the chosen nodes. [wiki:HostProtocol/StackRunning Test the GNRS service] using the sample command-line clients in the release.
 1. Bring up the appropriate version of the Click-based router (i.e., core router, access router, basic router, etc.) using the configuration files and command lines detailed under [wiki:HostProtocol/StackRunning router configurations]. Each router needs to be told which GNRS instance it should contact, which can be either the local server instance or a remote shared one.
 1. [wiki:HostProtocol/StackRunning Bring up MF protocol stacks] on each end-host.
    Host stacks can determine their access router either automatically (by latching onto a periodic broadcast beacon) or can be forced to associate with a particular one by specifying the router's MAC and IP addresses. See [wiki:HostProtocol/StackRunning host stack configuration] for further details.
 1. Once all components have been brought up, the network can be tested using the sample applications provided in the release, such as the command-line mfping tool in applications/mfping/c.
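
As a concrete illustration of step 1, one possible GUID assignment for the elements of G2 is listed below. The values are arbitrary 32-bit integers chosen purely for this example; any set of unique integers will do, and the exact syntax of the topology file that encodes the graph and its links is specified in the router deployment instructions.

{{{
Element      GUID (illustrative only)
---------    ------------------------
MFAR1        101
MFR1         102
MFR2         103
MFR3         104
MFAR2        105
SndrHost1    201
SndrHost2    202
RcvrHost1    301
RcvrHost2    302
}}}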
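If you choose to enforce the topology with ebtables instead of the router's built-in topology control (step 2), the general idea is to accept Ethernet frames only from a node's neighbors in the graph and drop everything else. Below is a minimal, illustrative sketch for MFR2 in G2, whose neighbors are MFAR1 and MFR3; the MAC addresses are placeholders and eth0 is an assumed interface name, so adapt both to your setup.

{{{
# Illustrative only: restrict MFR2 to its G2 neighbors (MFAR1 and MFR3).
# MAC addresses are placeholders; eth0 is an assumed experiment-facing interface.
ebtables -P INPUT DROP
ebtables -A INPUT -i eth0 -s aa:bb:cc:dd:ee:01 -j ACCEPT    # frames from MFAR1
ebtables -A INPUT -i eth0 -s aa:bb:cc:dd:ee:04 -j ACCEPT    # frames from MFR3
}}}

Equivalent rules on every node emulate the point-to-point links of the graph within a single broadcast domain. Whether ebtables is the right layer depends on how the router captures packets (kernel versus user-level Click), and the IPv4 path used for GNRS and host traffic must remain reachable if it shares the same interface, so consult the topology control documentation before relying on this approach.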