----

Simulation design: [[BR]]

1. Initialization [[BR]]
(1) Topology [[BR]]
--- Grid setting, e.g. 5 by 5; [[BR]]
--- Infrastructure setting, including the GNRS server, base station (BS), and router. Assume each grid has one GNRS server, one BS, and one router; also assume full connectivity. [[BR]]
--- Device deployment: mobile nodes are deployed following the location matrix from SUVNet. [[BR]]
--- Routing strategy for the mobile client - static server and mobile client - mobile client communication patterns. [[BR]]

(2) Parameter list [[BR]]
--- Lookup latency [[BR]]
--- Update latency [[BR]]
--- Routing latency (distribution) [[BR]]
--- Application-related parameters: file size distribution for the multimedia-on-demand and peer-to-peer communication applications, packet size, transmission time [[BR]]
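The topology initialization above can be sketched in a few lines of Python. This is a minimal illustration only: the `Grid` class, the per-grid binding table, and the Manhattan-distance latency model are simplifying assumptions, not part of the design.

```python
# Minimal sketch of the grid topology described above (illustrative only).
GRID_ROWS, GRID_COLS = 5, 5  # grid setting, e.g. 5 by 5

class Grid:
    """One cell of the topology. Each grid is assumed to hold one GNRS
    server, one base station, and one router, with full connectivity."""
    def __init__(self, row, col):
        self.row, self.col = row, col
        self.bindings = {}  # GNRS server state: GUID -> network address (NA)

def grid_distance(a, b):
    # Manhattan distance between two grids; routing latency is assumed
    # proportional to this distance for simplicity.
    return abs(a.row - b.row) + abs(a.col - b.col)

grids = [[Grid(r, c) for c in range(GRID_COLS)] for r in range(GRID_ROWS)]
```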
2. Update [[BR]]
(1) The mobile node updates its binding at the local GNRS server: constant latency. [[BR]]
(2) The local GNRS server hashes K times and sends the binding to K grids: routing latencies from the local GNRS server to the K grids (for simplicity, latency is proportional to the distance between grids). [[BR]]
(3) The K grids reply with acknowledgements: routing latencies from the K grids back to the original local GNRS server (again, proportional to the distance between grids). [[BR]]
Update latency T1 = (1) + biggest in (2) + biggest in (3) [[BR]]
or update latency T1 = (1) + biggest in (2) (since after (2), the GNRS server is already able to answer queries) [[BR]]
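The two T1 formulas above can be sketched as follows. This is a hedged sketch under the stated assumptions: the constant local-hop latency, the per-hop cost, and passing the K hashed replica grids in as plain coordinates (the hashing step itself is abstracted away) are all illustrative choices.

```python
LOCAL_LATENCY = 1.0    # step (1): mobile node -> local GNRS server (constant)
LATENCY_PER_HOP = 1.0  # routing latency assumed proportional to grid distance

def grid_distance(a, b):
    # Manhattan distance between grid coordinates (row, col)
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def update_latency(local_grid, replica_grids, wait_for_acks=True):
    """T1 = (1) + biggest in (2) [+ biggest in (3) if waiting for acks].
    replica_grids are the K grids chosen by hashing the GUID K times."""
    worst = max(LATENCY_PER_HOP * grid_distance(local_grid, g)
                for g in replica_grids)
    # The ack latencies in (3) mirror the send latencies in (2), so waiting
    # for all acknowledgements doubles the worst-case routing term.
    return LOCAL_LATENCY + (2 * worst if wait_for_acks else worst)
```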

3. Query [[BR]]
(1) The node sends a query message to its local GNRS server: constant latency. [[BR]]
(2) Local GNRS server A hashes K times and picks the nearest grid B to send the query message to: assume routing latency is proportional to the distance between A and B. [[BR]]
(3) B returns the binding message to A: routing latency is the same as in (2). [[BR]]
Query latency T2 = (1) + (2) + (3) [[BR]]
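T2 can be sketched the same way (same simplifying assumptions as above: constant local hop, distance-proportional routing latency, replica grids passed in as coordinates):

```python
LOCAL_LATENCY = 1.0    # step (1): node -> local GNRS server A (constant)
LATENCY_PER_HOP = 1.0  # routing latency assumed proportional to grid distance

def grid_distance(a, b):
    # Manhattan distance between grid coordinates (row, col)
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def query_latency(local_grid, replica_grids):
    """T2 = (1) + (2) + (3): a constant local hop plus a round trip from
    server A to the nearest grid B among the K hashed replica grids."""
    nearest = min(grid_distance(local_grid, g) for g in replica_grids)
    return LOCAL_LATENCY + 2 * LATENCY_PER_HOP * nearest
```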

4. Delivery failure due to node movement [[BR]]
Assume traffic flows from the service provider's static server C to mobile node D. [[BR]]
(1) While generating updates, check node D every minute; therefore, in the simulation, the gap between update events is a multiple of one minute. [[BR]]
(2) The routing delay of packet transmission from C to D depends on the routing infrastructure, link quality and status, routing strategy used, etc. For simplicity, assume this delay T3 is a constant. [[BR]]
(3) Packets sent from C to router E (the router of D's original grid) during the update latency period T1 cannot be delivered to D successfully. E holds the packets and queries D's binding every T4 seconds (T4 > T2). The latency in this stage is T5 = T2 + (k-1)T4, where k is the number of queries E issues. [[BR]]
(4) Packets are then rerouted from router E to router F (the router of D's new grid). This delay T6 can simply be set proportional to the distance between E and F. [[BR]]

Latency due to delivery failure <= T1 + T5 + T6 ("<" is possible when D sends out its update and router E sends out a query for D at the same time; in this case T1 and T5 overlap). [[BR]] [[BR]]

Future optimization / analysis: [[BR]]
(1) In the multimedia-on-demand application, mobile node D sends a service request message to service provider server C with its binding information <GUID, NA> encapsulated, so that C can send the media file directly to D's NA without querying the GNRS server. We can analyze the delay of two strategies in this scenario: a) C uses D's NA carried by the service request directly, which removes the query delay; the risk is that D has moved to another grid, which increases the reroute delay; b) after receiving the service request, C still queries the GNRS server for D's NA. [[BR]]

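The tradeoff between the two strategies can be sketched with a simple expected-delay model. This model is entirely an illustrative assumption: `p_move` is the probability that D has moved to another grid by delivery time, and T2, T3, T6 are the latencies defined above.

```python
def strategy_a_delay(t3, t6, p_move):
    # a) C uses the NA carried in the service request directly: no query
    #    delay, but with probability p_move node D has moved and the
    #    reroute delay T6 is paid on top of the transmission delay T3.
    return t3 + p_move * t6

def strategy_b_delay(t2, t3):
    # b) C first queries the GNRS server for D's NA: always pays the
    #    query delay T2 before the transmission delay T3.
    return t2 + t3
```

Under this model, strategy a) wins whenever p_move * T6 < T2, i.e. when nodes move slowly relative to the session length.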