
Source: ftp.deas.harvard.edu/techreports/tr-22-05.pdf (2005-12-20)

Lessons Learned from Implementing
Ad-Hoc Multicast Routing in Sensor Networks

Bor-rong Chen
Kiran-Kumar Muniswamy-Reddy
Matt Welsh

TR-22-05

Computer Science Group
Harvard University
Cambridge, Massachusetts


Lessons Learned from Implementing Ad-Hoc Multicast Routing in Sensor Networks

Bor-rong Chen, Kiran-Kumar Muniswamy-Reddy, and Matt Welsh
Division of Engineering and Applied Sciences

Harvard University

{brchen,kiran,mdw}@eecs.harvard.edu

Abstract

The mobile ad-hoc networking (MANET) community has proposed a wide range of protocols for unicast and multicast routing through mobile wireless devices. Many of these protocols have been studied only under simulation using simplistic radio models, and do not consider issues such as bandwidth or memory limitations. In contrast, the sensor network community has demanded solutions that work on hardware with limited resources, with a focus on routing through stationary nodes to a single base station.

Still, several emerging sensor network applications involve mobile nodes with communication patterns requiring any-to-any routing topologies. We should be able to build upon the MANET work to implement these systems. However, translating these protocols into real implementations on resource-constrained sensor nodes raises a number of challenges. In this paper, we present the lessons learned from implementing one such protocol, Adaptive Demand-Driven Multicast Routing (ADMR), on CC2420-based motes using the TinyOS operating system. ADMR was chosen because it supports multicast communication, a critical requirement for many pervasive and mobile applications. To our knowledge, ours is the first non-simulated implementation of ADMR. Through extensive measurement on a 30-node sensor network testbed, we present the performance of TinyADMR under a wide range of conditions. We highlight the real-world impact of path selection metrics, radio asymmetry, protocol overhead, and limited routing table size.

1 Introduction

To date, much work on routing protocols in sensor networks has focused on forming stable routes to a single aggregation point, such as a base station. Many approaches have been proposed for spanning-tree formation, parent selection, and hop-by-hop data aggregation as data flows up the tree [28, 5, 17, 18]. These protocols are appropriate for networks consisting of stationary nodes that are primarily focused on data collection. However, several emerging applications for sensor networks require more general topologies as well as communication with mobile nodes. Examples include tracking firefighters in a burning building [29], data collection with mobile sensors [12, 13, 15], and monitoring the location and health status of disaster victims [16].

The mobile ad-hoc networking (MANET) community has developed a wide range of protocols for unicast and multicast routing using mobile wireless devices [20, 10, 11, 19]. Many of these protocols have been studied only under simulation using simplistic radio models, and do not consider issues such as bandwidth or memory limitations. In contrast, the sensor network community has demanded solutions that work on real hardware with limited resources. As a result, much of the MANET work has been overlooked by the sensor network community in favor of specially-tailored protocols that focus on energy management [23, 7], reliability [28, 27, 22], and in-network aggregation [17, 18].

Our goal is to bridge the gap between the mobile ad-hoc networking field and the state of the art in sensor networks. By doing so, we hope to tap into the rich body of work in the MANET community, which may require reevaluating and redesigning these protocols as necessary. In particular, we identify several challenges to implementing MANET-based protocols on sensor nodes. The limited memory, computational power, and radio bandwidth deeply impact the implementation strategy. In addition, the realities of radio propagation, such as lossy and asymmetric links, require careful evaluation of path selection metrics.

This paper presents our experience with implementing a particular protocol, Adaptive Demand-Driven Multicast Routing (ADMR) [10], in TinyOS on MicaZ motes using the CC2420 radio. ADMR was chosen because it represents a fairly sophisticated and mature ad hoc multicast routing protocol, and was developed by an independent research group. To our knowledge, ADMR has never been implemented on real hardware, although its design has been well-studied in ns-2 simulations assuming an 802.11 MAC. Our overarching goal is to study the challenges involved in translating this style of protocol into a real implementation on sensor motes.

Several important lessons have emerged from this experience. The first is that communication performance is very sensitive to path selection metrics. The ADMR design attempts to minimize path hop count, but this can result in very lossy paths [4, 28]. We describe a new metric that estimates the overall path delivery ratio with a simple hop-by-hop measurement of the CC2420's Link Quality Indicator (LQI). The second lesson involves the impact of protocol overhead and practical limitations on data rates given the very limited radio bandwidth of IEEE 802.15.4. The third lesson deals with the impact of limited memory on routing protocol state. We evaluate several approaches for selectively dropping reverse-path information when the size of this state exceeds memory availability.

We present a detailed evaluation of our TinyOS implementation of ADMR running on a 30-node indoor sensor network testbed. This testbed exhibits a great deal of variation in link quality and exercises the ADMR protocol in several ways. We present a comparison of several path selection policies, as well as demonstrate the impact of varying data rates, interference from other transmitters, and limitations on routing protocol state.


Figure 1: Example of ADMR forwarding trees for a single group. Two forwarding trees are established to route data to the two receiving nodes, 22 and 28. Nodes 1, 6 and 26 are the senders. Intermediate nodes act as forwarders, rebroadcasting messages for the group until they reach the receivers.

2 Background: Adaptive Demand-Driven Multicast Routing (ADMR)

Adaptive Demand-Driven Multicast Routing (ADMR) [10] is a multicast routing protocol designed for ad hoc networks in which nodes collaborate with each other to deliver packets. Data is multicast by sending packets to group addresses rather than individual node addresses. These packets are then forwarded towards all the receivers belonging to a particular group along a forwarding tree established by the protocol. In this section, we present a brief overview of the ADMR protocol as described in [10].

ADMR was chosen as a representative protocol in the family of MANET protocols such as AODV [20] and DSR [11]. ADMR is fairly sophisticated and as a result we elide many details from this discussion; we refer the reader to [10] for complete details.

2.1 Data packet forwarding

ADMR delivers packets from senders to receivers by routing each packet along a set of forwarding trees that are constructed on demand. Each tree is rooted at a single receiver and has leaves at each sender node for a group. ADMR's route discovery process assigns nodes in the network to be forwarders for a group based on measurements of potential routing paths between senders and receivers. Nodes assigned as forwarders rebroadcast data packets received for the corresponding group.

Forwarders are ignorant of the recipients for any group; rather, they simply rebroadcast group messages, and a single broadcast may be received by multiple nodes (including receivers or other forwarders). ADMR may also cause messages to traverse multiple routes from the sender to receiver. Each ADMR forwarder performs duplicate packet suppression by keeping track of the previously-transmitted sequence number for each 〈sender, group〉 pair; a forwarder will not rebroadcast the same data packet multiple times.
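The per-〈sender, group〉 duplicate suppression described above can be sketched in a few lines. This is a minimal illustration in C rather than NesC; the cache structure, sizes, and names are our assumptions for the sketch, not the TinyADMR code, and a real forwarder would also need an eviction policy for a full cache.

```c
#include <stdint.h>
#include <stdbool.h>
#include <assert.h>

#define CACHE_SLOTS 16   /* illustrative cache size */

typedef struct {
    uint16_t sender;     /* originating node address   */
    uint16_t group;      /* multicast group address    */
    uint16_t last_seq;   /* last sequence number seen  */
    bool     used;
} dup_entry_t;

static dup_entry_t cache[CACHE_SLOTS];

/* Returns true if this packet was already forwarded for the
 * 〈sender, group〉 pair; otherwise records its sequence number
 * and returns false (i.e. the forwarder should rebroadcast it). */
bool is_duplicate(uint16_t sender, uint16_t group, uint16_t seq)
{
    int free_slot = -1;
    for (int i = 0; i < CACHE_SLOTS; i++) {
        if (!cache[i].used) {
            if (free_slot < 0) free_slot = i;
            continue;
        }
        if (cache[i].sender == sender && cache[i].group == group) {
            if (seq == cache[i].last_seq)
                return true;           /* seen before: suppress */
            cache[i].last_seq = seq;   /* newer packet: forward */
            return false;
        }
    }
    if (free_slot >= 0)   /* first packet from this pair */
        cache[free_slot] = (dup_entry_t){ sender, group, seq, true };
    return false;
}
```

Equality on the stored sequence number is the simplest workable test; a windowed comparison would be more robust against reordering.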

Figure 1 illustrates an example of ADMR forwarding trees. In this case, there are 3 senders and 2 receivers in one group in the network. 10 nodes are selected as forwarders. Because both routing trees belong to the same group, there is no need to transmit data multiple times along links shared between trees;

Figure 2: The ADMR route discovery process. The sender broadcasts a ROUTE-DISCOVERY packet containing a group address as a network flood. Receivers for this group respond with a unicast RECEIVER-JOIN packet, which configures nodes along the route from the sender as forwarders for the group.

the link from node 14 to 13 is an example.

2.2 Route discovery

Route discovery is the process of assigning forwarders in the network and therefore is crucial to ADMR. In the original ADMR design, there are two ways of establishing forwarding states: sender-initiated discovery and receiver-initiated discovery. In sender-initiated discovery, senders initiate a network flood to find potential receivers. Receiver-initiated discovery reverses this process and has receivers flooding the network to discover senders.

In the presence of asymmetric radio links, we expect that sender-initiated discovery will outperform receiver-initiated discovery. This is because the information propagated through the network flood is used to measure the routing cost from senders to receivers, as described in detail below. When this process is initiated by sending nodes, the forward path from senders to receivers is measured. Receiver-initiated discovery measures the reverse path, which may suffer severe packet loss when data packets are forwarded along this route in the opposite direction. While we have implemented both discovery techniques, for the sake of brevity we only discuss sender-initiated discovery further.

The route discovery process (Figure 2) begins with senders sending out a ROUTE-DISCOVERY packet as a controlled network flood. Every node receiving this packet rebroadcasts the packet once, allowing the message to propagate throughout the network. Upon receipt of a ROUTE-DISCOVERY message, the node compares the hop count of the ROUTE-DISCOVERY to the lowest stored hop count (if any) from the sender generating the discovery.

If the new hop count is lower, the node stores three pieces of information: the sender address that originated the discovery, the previous hop from which the discovery message was received, and the new hop count of the discovery message. This information is refreshed each time the sender initiates a new discovery process, as indicated by a sequence number in the message header. In this way, each node maintains the lowest hop count path from all sending nodes, as well as the previous hop from this sender.
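The per-sender update rule above (refresh on a new sequence number, otherwise keep the lowest hop count) can be sketched as follows. The table layout and function names are illustrative assumptions, not the TinyADMR source; the sketch is in C rather than NesC.

```c
#include <stdint.h>
#include <stdbool.h>
#include <assert.h>

/* Simplified Node Table entry; field names are illustrative. */
typedef struct {
    uint16_t sender;     /* originator of the ROUTE-DISCOVERY     */
    uint16_t prev_hop;   /* neighbor the best discovery came from */
    uint8_t  hop_count;  /* lowest hop count seen this round      */
    uint16_t seq;        /* discovery round (sequence number)     */
    bool     used;
} node_entry_t;

#define NODE_TABLE_SIZE 32
static node_entry_t node_table[NODE_TABLE_SIZE];

/* Process one ROUTE-DISCOVERY copy. Returns true if the entry was
 * created or improved, i.e. this copy defines the current best path. */
bool on_route_discovery(uint16_t sender, uint16_t from,
                        uint8_t hops, uint16_t seq)
{
    node_entry_t *e = 0, *free_e = 0;
    for (int i = 0; i < NODE_TABLE_SIZE; i++) {
        if (!node_table[i].used) {
            if (!free_e) free_e = &node_table[i];
        } else if (node_table[i].sender == sender) {
            e = &node_table[i];
            break;
        }
    }
    if (!e) {                        /* first discovery from this sender */
        if (!free_e) return false;   /* table full: drop (see Sec. 4.5) */
        e = free_e;
        *e = (node_entry_t){ sender, from, hops, seq, true };
        return true;
    }
    if (seq != e->seq) {             /* new discovery round: refresh */
        e->seq = seq;
        e->prev_hop = from;
        e->hop_count = hops;
        return true;
    }
    if (hops < e->hop_count) {       /* same round, strictly better path */
        e->prev_hop = from;
        e->hop_count = hops;
        return true;
    }
    return false;                    /* duplicate or worse path */
}
```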

When a receiver interested in the group specified in the


discovery message receives ROUTE-DISCOVERY, it sends a RECEIVER-JOIN packet back to the original sender as a unicast message using path reversal. That is, the RECEIVER-JOIN is relayed along the lowest hop-count path back to the sender, using the stored previous hop information. Each intermediate node receiving a RECEIVER-JOIN configures itself as a forwarder for the corresponding 〈sender, group〉 pair. Once a sender receives any RECEIVER-JOIN, it can start broadcasting data packets for this group. The forwarding nodes will relay the messages until they reach the receivers.

Two issues arise with this design. First, the use of minimum hop count paths is generally not ideal in real wireless networks, where link quality can vary greatly. Second, using path reversal to route the RECEIVER-JOIN back to the sender may traverse a lossy path due to link asymmetry. We discuss both of these issues and propose solutions in Section 4.2.

2.3 Forwarding state maintenance

During the course of operation, network conditions may change due to node failures, radio interference, or node mobility. These unpredictable changes cause the forwarding states to become stale and potentially ineffective at maintaining reliable routes. Therefore, forwarding state maintenance procedures are required to repair or expire inappropriate states in the network.

2.3.1 Rediscovery: reestablishing forwarding states

When network connectivity changes over time, periodically initiating the route discovery process is a straightforward but effective way of adapting the network to variations in link quality. The time between rediscovery periods should be set to the longest time that the application can tolerate a broken path. Clearly, correctly estimating this parameter is difficult and depends on node mobility and traffic conditions.

2.3.2 Tree pruning

Tree pruning allows ADMR to deactivate unnecessary forwarders in the network. When a forwarder is no longer effective at delivering packets to downstream receivers, it should stop rebroadcasting messages to avoid wasting bandwidth. Likewise, if a receiver moves away or is no longer interested in the data from a certain group, there is no need to forward packets to that receiver. ADMR performs state expiration using passive acknowledgments. Whenever a forwarder rebroadcasts a packet, it listens for another downstream node to retransmit the packet that it just forwarded. If the packet is retransmitted by other nodes, then the forwarding state will remain valid. Otherwise, the forwarder will deactivate itself after an expiration period.
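The passive-acknowledgment bookkeeping above amounts to a counter that is reset whenever a downstream retransmission is overheard. The following C sketch is our own illustration; the miss threshold and names are assumptions, not values from ADMR or TinyADMR.

```c
#include <stdint.h>
#include <stdbool.h>
#include <assert.h>

#define MISS_LIMIT 5   /* assumed: unheard rebroadcasts before expiry */

typedef struct {
    uint8_t missed;    /* consecutive rebroadcasts not overheard */
    bool    active;    /* still acting as a forwarder?           */
} fwd_state_t;

/* Called after each rebroadcast with whether a downstream node was
 * overheard retransmitting the same packet (the passive ack). */
void on_rebroadcast_result(fwd_state_t *s, bool overheard)
{
    if (!s->active) return;
    if (overheard) {
        s->missed = 0;               /* state reinforced */
    } else if (++s->missed >= MISS_LIMIT) {
        s->active = false;           /* deactivate stale forwarder */
    }
}

/* Helper: feed a pattern of overheard results (LSB first) to a fresh
 * forwarder and report whether it is still active afterwards. */
bool still_active_after(uint32_t pattern, int n)
{
    fwd_state_t s = { 0, true };
    for (int i = 0; i < n; i++)
        on_rebroadcast_result(&s, (pattern >> i) & 1);
    return s.active;
}
```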

In Section 4.4 we discuss the impact of passive acknowledgments versus active route reinforcement, which requires a receiver to periodically refresh forwarders with a RECEIVER-JOIN message.

2.4 Routing state

To support the functions described above, ADMR maintains three tables on each node: the Node Table, Membership Table, and Sender Table.

Node Table: The Node Table is indexed by the sender node address. Each entry stores the previous hop and path cost measured from ROUTE-DISCOVERY messages received from this sender. The previous hop is used for routing RECEIVER-JOIN messages back to the sender to reinforce this route. In the original ADMR protocol, the path cost was the number of radio hops from the sender. The previous hop information in a node table entry is only updated when a new message with a lower path cost is received. The node table essentially determines which intermediate nodes are used as forwarders on a path from a sender to a receiver. The node table also stores the sequence number of the most recent ROUTE-DISCOVERY message, used to suppress duplicate retransmissions. The node table therefore must scale with the size of the network. In Section 4.5 we discuss the impact of limited memory capacity on maintenance of the node table.

Membership Table: The Membership Table is indexed by group address and sender node ID and stores information on whether a node is part of a group for a particular sender ID (as a receiver, forwarder, or both). The forwarder flag is set when a RECEIVER-JOIN is routed through the node for the corresponding 〈sender, group〉 combination. The reason that both the sender and group address are consulted is to prevent messages from being forwarded back towards a sender from another branch of the routing tree; for example, in Figure 1, we wish to prevent messages from node 9 being routed back up the tree to node 8. The receiver flag is set when a node elects to receive messages for a group. If neither of these flags is set, data packets for this group are dropped by the node.

Sender Table: The Sender Table stores a list of group addresses for which a node is a sender. The table is consulted when the protocol needs to perform periodic path rediscovery. In the original ADMR design, this table also keeps track of the activeness of the group address and how often the sender should reinforce the paths associated with it.

3 TinyADMR Implementation

In this section we describe TinyADMR, our implementation of the ADMR protocol for TinyOS-based mote platforms. In developing TinyADMR, we have attempted to be as faithful as possible to the original ADMR design. Note that we did not simply port the original ADMR code (which was implemented in ns-2) to TinyOS; this is a complete reimplementation based on details in [10]. We did consult the ADMR code to understand certain details when necessary. Figure 3 depicts the block diagram of the TinyADMR implementation.

TinyADMR deviates from the original ADMR specification in several respects. First, TinyADMR does not include the sophisticated route repair protocol described in [10]. Instead, we rely on periodic route rediscovery to refresh routes, which is much simpler. As a result, TinyADMR may not be able to respond to a broken route as rapidly as ADMR. We believe that local route repair is important in cases where there is both a high degree of node mobility and a low tolerance for broken routes. In the applications that we are targeting for ADMR, neither of these conditions holds, so we have not yet implemented this technique.

The most substantial change in TinyADMR is the departure from hop count as a path selection metric. As we discuss in Section 4.2, we have explored a range of metrics for picking good


Figure 3: TinyADMR implementation diagram.

interface PubSub {
  command result_t publish(uint16_t chan);
  command result_t subscribe(uint16_t chan);
  command result_t leave(uint16_t chan);

  command result_t send(uint16_t channel, uint8_t length, TOS_Msg* msg);

  event result_t sendDone(TOS_MsgPtr msg, result_t success);

  event TOS_MsgPtr receive(TOS_MsgPtr m, uint16_t channel, uint16_t srcAddr);
}

Figure 4: The TinyADMR software interface.

paths from senders to receivers. In TinyADMR, as ROUTE-DISCOVERY messages are propagated through the network, the code provides a general notion of path cost that is stored in the Node Table.

3.1 Interface

To provide applications with a clean interface for multicast data dissemination, a Publish/Subscribe interface has been defined for TinyADMR. The interface is shown in Figure 4. All communication in TinyADMR is based on group addresses rather than individual node addresses. Each group defines a separate communication channel. Senders send data by publishing to specific group addresses. Receivers interested in the data subscribe to those group addresses. The PubSub interface is intended to be generic, supporting any protocol that implements this model.

With TinyADMR, the application expresses its interest in sending and receiving data on certain group addresses by calling the PubSub.publish() and PubSub.subscribe() commands. Whenever the application decides to stop participating on a group address, it calls PubSub.leave() to cancel its membership. The PubSub.send() command is identical to its TinyOS active message counterpart except that it takes a group, rather than a node, address as a destination. The PubSub.receive() event is signaled whenever a data packet arrives on a group subscribed to by the node. Unlike the regular TinyOS ReceiveMsg.receive() event, PubSub.receive() provides information on both the sender and the group address associated with the packet.

3.2 Protocol implementation

TinyADMR is implemented as a NesC component, TinyADMR.nc, which wires in several modules providing the protocol functionality. This component provides the PubSub and StdControl interfaces, as well as a separate debugging interface. Most of the protocol functionality itself is implemented in a single module, TinyADMRM.nc, consisting of 1793 lines of commented NesC code. When compiled for the MicaZ, TinyADMR requires 3544 bytes of ROM and 1563 bytes of RAM. Memory usage could be significantly reduced by removing debugging and instrumentation from the code.

3.2.1 Packet format

Each ADMR message carries a 13-byte header, not including the 10-byte header used by TinyOS messages with the CC2420 radio stack. The default TinyOS message payload (on MicaZ) is 29 bytes, leaving just 16 bytes for application payload data. However, we have measured TinyADMR performance with payloads of up to 100 bytes, and we believe that it is safe to use larger payloads when necessary.[1]

The header fields are briefly described below:

pktType (one byte): Indicates whether a packet is a DATA, ROUTE-DISCOVERY, or RECEIVER-JOIN message.

seqNo (two bytes): Unique sequence number assigned by the originator of the packet. Nodes that rebroadcast or forward the packet should not change this field.

groupAddr (two bytes): The group address for the packet.

originAddr (two bytes): Address of the node that originated the packet.

senderAddr (two bytes): Address of the node that last forwarded the packet.

destAddr (two bytes): Address of the destination node of this packet for unicast messages, specifically RECEIVER-JOIN. Unused for data and discovery messages.

hopCount (one byte): Number of hops the packet has traversed since the originator.

routeCost (one byte): The accumulated routing cost of a message since it was originated by the sender. The meaning of this field varies with the path selection metric being used.

[1] The original payload limit was chosen for the RFM1000 radio on the Rene motes, which had problems maintaining bit synchronization over long transmissions; this does not seem to be a problem for the CC2420. Memory use for each message buffer is perhaps a more serious concern.
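The header fields listed above can be written down as a packed C struct; the field order follows the listing in the text, though the actual on-the-wire byte layout in TinyADMR is not specified here, so treat this as a sketch. The sizes sum to the stated 13 bytes, leaving 16 bytes of the 29-byte MicaZ payload for application data.

```c
#include <stdint.h>
#include <assert.h>

/* Sketch of the 13-byte ADMR header; field order follows the text,
 * but the exact on-the-wire layout in TinyADMR may differ. */
typedef struct __attribute__((packed)) {
    uint8_t  pktType;     /* DATA, ROUTE-DISCOVERY, or RECEIVER-JOIN */
    uint16_t seqNo;       /* set by originator, never rewritten      */
    uint16_t groupAddr;   /* multicast group address                 */
    uint16_t originAddr;  /* node that originated the packet         */
    uint16_t senderAddr;  /* node that last forwarded the packet     */
    uint16_t destAddr;    /* unicast dest (RECEIVER-JOIN only)       */
    uint8_t  hopCount;    /* hops traversed since the originator     */
    uint8_t  routeCost;   /* accumulated, metric-dependent cost      */
} admr_header_t;

_Static_assert(sizeof(admr_header_t) == 13, "header must be 13 bytes");

/* Payload budget: 29-byte default TinyOS payload minus the header. */
enum { ADMR_PAYLOAD = 29 - sizeof(admr_header_t) };  /* = 16 bytes */
```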


3.2.2 Route establishment

When a node expresses its desire to publish data to a group, it invokes PubSub.publish(), which initiates the periodic route discovery process. A timer is set that will periodically issue a flood of ROUTE-DISCOVERY messages from this node. The route discovery interval is configurable; by default it is set to 15 sec, although for our experiments in Section 4 we decreased the interval to 5 sec to reduce the time to acquire measurements.

Upon receipt of a ROUTE-DISCOVERY message, a node that has expressed interest in subscribing to the associated group must establish a forwarding path from the sender by replying with a unicast RECEIVER-JOIN message. However, reinforcing the path taken by the first ROUTE-DISCOVERY message will not necessarily yield the best path. Therefore, the receiver waits for a short time (1 sec in our prototype) in order to acquire measurements on multiple paths from the sender of the ROUTE-DISCOVERY. After this interval, the path with the lowest routing cost (as indicated by the Node Table entry for the corresponding sender) is used to relay the RECEIVER-JOIN.

Because RECEIVER-JOIN messages traverse the reverse path from sender to receiver, in the presence of asymmetric radio links this message may experience poor links even when the sender-to-receiver path has high reliability. Therefore, the RECEIVER-JOIN uses hop-by-hop acknowledgment and retransmission to ensure that it is routed to the sender. Each node along the path attempts to retransmit the RECEIVER-JOIN up to 5 times before dropping the message. As a result, it is possible that a very lossy link will cause the RECEIVER-JOIN to be lost. A possible solution is to allow RECEIVER-JOIN messages to traverse multiple reverse paths and expire redundant forwarders through tree pruning.

3.2.3 Route pruning

The forwarding tree established for a group during sender and receiver discovery should be pruned when there are no downstream receivers for this group or when the sender stops sending data. For this purpose, every Node Table entry is assigned a lifetime when it is created. The lifetime of a Node Table entry is decremented by one each time an associated timer fires, and the entry is expired when the lifetime becomes zero. The lifetime of each entry is refreshed based on a path reinforcement policy.
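The create/decrement/refresh lifecycle above can be sketched in a few lines of C; the initial lifetime value and function names here are assumptions for illustration, not constants from TinyADMR.

```c
#include <stdint.h>
#include <stdbool.h>
#include <assert.h>

#define ENTRY_LIFETIME 4   /* assumed: timer ticks granted per refresh */

typedef struct {
    uint8_t lifetime;      /* ticks remaining until expiry */
    bool    used;
} table_entry_t;

void entry_create(table_entry_t *e)  { e->lifetime = ENTRY_LIFETIME; e->used = true; }
void entry_refresh(table_entry_t *e) { if (e->used) e->lifetime = ENTRY_LIFETIME; }

/* Called on every timer fire; expires the entry at zero lifetime. */
void entry_tick(table_entry_t *e)
{
    if (e->used && --e->lifetime == 0)
        e->used = false;   /* entry expired: slot reclaimable */
}

/* Helper: create an entry, tick it n times, refreshing once after the
 * r-th tick (r < 0 means no refresh); returns whether it survives. */
bool alive_after(int n, int r)
{
    table_entry_t e;
    entry_create(&e);
    for (int i = 0; i < n; i++) {
        entry_tick(&e);
        if (i == r) entry_refresh(&e);
    }
    return e.used;
}
```

Whether entry_refresh() is driven by RECEIVER-JOINs or by passive acknowledgments is exactly the active-versus-passive reinforcement choice discussed in the next paragraph.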

In TinyADMR, two path reinforcement strategies are implemented: active reinforcement and passive reinforcement. Active reinforcement refreshes a forwarder membership whenever a new RECEIVER-JOIN message is received, that is, when a receiver wishes to keep a node as a forwarder along a path. Passive reinforcement refreshes the forwarder membership based on passive acknowledgment of each transmitted data packet. Details and impact of these two reinforcement methods are discussed in Section 4.4.

3.3 Routing state

As specified in Section 2, each node maintains three tables in order to support the multicast functionality. The size of the Node Table depends on how many nodes in the network are acting as multicast senders and receivers. The size of the Sender and Membership Tables is determined by the number of multicast groups expected to exist in the network. Having enough space in the routing tables, especially the Node Table, is

Figure 6: Link delivery ratio (LDR) asymmetry observed in our testbed.

critical because ADMR will not perform properly without large enough table sizes.

The Node Table entries are 8 bytes each. Membership Table entries are 7 bytes each, while Sender Table entries are 2 bytes each. By default, we configure each table to hold 32 entries, resulting in a total memory use of 544 bytes. In Section 4.5 we present techniques for evicting Node Table entries when memory is limited.
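The memory budget above follows from the entry sizes: 32 × (8 + 7 + 2) = 544 bytes. The packed struct layouts below are hypothetical field assignments chosen only to match the quoted sizes; the real TinyADMR structs may pack different fields.

```c
#include <stdint.h>
#include <assert.h>

/* Hypothetical layouts matching the stated entry sizes (8, 7, 2). */
typedef struct __attribute__((packed)) {
    uint16_t sender;     /* index: sender node address      */
    uint16_t prev_hop;   /* for routing RECEIVER-JOINs back */
    uint16_t seq;        /* last ROUTE-DISCOVERY seen       */
    uint8_t  path_cost;  /* metric-dependent cost           */
    uint8_t  lifetime;   /* ticks until expiry              */
} node_row_t;            /* 8 bytes */

typedef struct __attribute__((packed)) {
    uint16_t group;      /* index: group address            */
    uint16_t sender;     /* index: sender node ID           */
    uint16_t lifetime;
    uint8_t  flags;      /* receiver / forwarder bits       */
} member_row_t;          /* 7 bytes */

typedef struct __attribute__((packed)) {
    uint16_t group;      /* group this node publishes to    */
} sender_row_t;          /* 2 bytes */

#define TABLE_ENTRIES 32
enum { ROUTING_STATE_BYTES = TABLE_ENTRIES *
       (sizeof(node_row_t) + sizeof(member_row_t) + sizeof(sender_row_t)) };
```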

4 Evaluation and Lessons Learned

Implementing ADMR in TinyOS and obtaining good communication performance in a realistic network environment has not been a trivial undertaking. There is a significant disconnect between the original ADMR protocol as published and the conditions encountered in a real sensor network. In particular, ADMR assumes symmetric links, uses hop count as its path selection metric, and ignores memory space issues when maintaining routing tables. In this section we present a detailed evaluation of our TinyOS-based ADMR implementation and present a series of lessons learned in the process of developing and tuning the protocol. We believe these lessons will be useful to other protocol designers working with 802.15.4-based sensor motes.

4.1 Evaluation environment

We have focused exclusively on real implementation and evaluation on a sensor node testbed, rather than simulations, to understand the performance and behavior of TinyADMR. All of our results have been gathered on an indoor testbed of 30 MicaZ motes installed over three floors of our Computer Science building (a map of one floor is shown in Figure 5). This testbed provides facilities for remote reprogramming of each node over an Ethernet backchannel board (the Crossbow MIB600). Each node's serial port is also exposed through a TCP port, permitting detailed instrumentation and debugging. Motes are installed in various offices and labs and are often placed on shelves at a height of 1-2 m.

Because of the relatively sparse node placement, this testbed exhibits a high degree of variation in radio link quality and many asymmetric links. Figure 6 shows the forward and reverse link delivery ratio (LDR) calculated for every pair of nodes in the


Figure 5: Map of one floor of our 30-node sensor network testbed.

Figure 7: Link-level LQI asymmetry observed in our testbed.

testbed. Using a technique similar to the SCALE benchmark [3], the link delivery ratio is measured by having each node broadcast a fixed number of messages in turn, while all other nodes record the number of messages received from each transmitter. The LDR is the ratio of received messages to transmitted messages for each pair of nodes. As the figure shows, the LDR is highly variable and often asymmetric.
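The LDR computation described above is just received-over-transmitted per directed link; the helper below sketches it, along with the forward/reverse asymmetry it exposes. Function names are our own.

```c
#include <stdint.h>
#include <assert.h>

/* Link delivery ratio: received over transmitted, as a percentage. */
uint8_t ldr_percent(uint32_t received, uint32_t transmitted)
{
    if (transmitted == 0) return 0;   /* undefined: report 0 */
    return (uint8_t)((100u * received) / transmitted);
}

/* Asymmetry of a node pair: absolute difference between forward and
 * reverse delivery ratios, in percentage points (both directions are
 * assumed to have transmitted the same number of probe messages). */
uint8_t ldr_asymmetry(uint32_t fwd_rx, uint32_t rev_rx, uint32_t tx)
{
    uint8_t f = ldr_percent(fwd_rx, tx);
    uint8_t r = ldr_percent(rev_rx, tx);
    return (uint8_t)(f > r ? f - r : r - f);
}
```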

The CC2420 radio provides an internal Link Quality Indicator (LQI) for each received message [1]. This value represents the ability of the CC2420 to correlate the first eight 32-chip symbols following the start-of-frame delimiter, and has an effective range from 110 (highest quality) to 50 (lowest quality). As we will discuss in Section 4.2, the LQI is highly correlated with the link delivery ratio. Figure 7 shows the corresponding asymmetry in the pairwise LQI measurement (averaged over 1000 packets per transmitter).

In each of the cases presented below, we measure routes between 28 different sender/receiver pairs. These pairs were selected to present a diverse view of potential routes in our testbed. Four of the 28 pairs were within a single radio hop, 11 were

Figure 8: Relationship between LQI and delivery ratio in our testbed. Also shown is the piecewise linear model used to map LQI observations to LDR estimates.

within two radio hops, and the remaining 13 pairs were chosen so that the endpoints were on opposite ends of the building (shown in Figure 5). In each case, senders generated data at a rate of 5 Hz, each experiment was run for 100 sec, and path discovery messages were generated every 5 sec. Once the benchmark is started, we wait for 30 seconds before collecting packet reception statistics, to avoid measuring warmup effects.

Original ADMR evaluation: It is worth contrasting our environment to that used in the original ADMR paper. In [10], ADMR was measured using ns-2 simulations with a network of 50 nodes roaming in a 1500 m × 300 m area. A 2 Mbps 802.11 radio with a radio range of 250 m was simulated; this implies that most nodes are within a small number of hops of each other. Most importantly, nodes have perfect connectivity to all other nodes within this range and links are always symmetric.


4.2 Impact of path selection metrics

Ad hoc routing protocols use a path selection metric to determine which path to maintain between a given sender and receiver pair. In ADMR, the path cost is updated as discovery messages propagate from senders to receivers. The receiver sends a route reply message along the reverse of the lowest-cost path, which configures nodes along that path as forwarders.

The original ADMR protocol selects paths with the minimum hop count, which we call the MIN-HOP metric. As has been discussed elsewhere [4, 28], this choice of metric is not necessarily ideal, especially when link quality varies considerably. For example, MIN-HOP will prefer a short path over (potentially) very poor radio links rather than a longer path over high-quality links. MIN-HOP was appropriate in the original ADMR work, which did not consider lossy radio links. However, in a realistic environment we expect it to have very poor performance.

4.2.1 MAX-LQI and PATH-DR metrics

We have investigated several alternate path selection metrics in our development of TinyADMR. For brevity we describe only two here. The first is to select the path with the "best worst link." In this metric the receiver selects the path with the highest minimum LQI value over all links in the path. Given a set of potential paths P and a set of links L_p for each p ∈ P, we select the path p*:

p* = argmax_{p ∈ P} min_{l ∈ L_p} LQI(l)

We call this metric MAX-LQI.

MAX-LQI attempts to find the path with the best "bottleneck"; however, it has no way of differentiating between two paths with the same bottleneck link but different link characteristics. Because our goal in ADMR is to maximize the overall path delivery ratio (PDR), directly using the path with the best PDR would be ideal. However, estimating PDR requires measuring the hop-by-hop LDR along the path, which in turn requires multiple rounds of message exchange between neighboring nodes, incurring additional messaging overhead. The ETX [4] and MintRoute [28] protocols use this technique.

Rather than directly measure LDR, we have found that there is a high correlation between the LQI and LDR observed on each link. Figure 8 plots the pairwise LQI and LDR over an extensive set of measurements in our testbed. From this data we can derive a simple model mapping LQI to LDR; a piecewise linear approximation appears to work well for this data set. Because LQI can be observed from a single packet reception, using this information to predict the previous-hop LDR allows us to produce the PATH-DR metric, which selects the path p* such that:

p* = argmax_{p ∈ P} ∏_{l ∈ L_p} EST_LDR(l)

where EST_LDR(l) is the estimated LDR of the link, computed from the LQI of the received discovery message using our empirical model shown in Figure 8.
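As a concrete illustration, both metrics can be computed from per-link LQI observations as sketched below. This is plain C rather than the nesC of our implementation, and the breakpoints in est_ldr are illustrative placeholders, not the fitted parameters of the model in Figure 8.

```c
#include <assert.h>

/* Illustrative piecewise-linear LQI -> link delivery ratio (LDR) model.
 * Hypothetical breakpoints: LQI <= 50 maps to 0% delivery, LQI >= 100
 * to 100%, linear in between.  The real model is fitted to testbed data. */
static double est_ldr(int lqi) {
    if (lqi <= 50)  return 0.0;
    if (lqi >= 100) return 1.0;
    return (lqi - 50) / 50.0;
}

/* MAX-LQI score of a path: the minimum (bottleneck) LQI over its links. */
static int max_lqi_score(const int *lqi, int nlinks) {
    int worst = lqi[0];
    for (int i = 1; i < nlinks; i++)
        if (lqi[i] < worst) worst = lqi[i];
    return worst;
}

/* PATH-DR score of a path: product of the per-link estimated LDRs. */
static double path_dr_score(const int *lqi, int nlinks) {
    double pdr = 1.0;
    for (int i = 0; i < nlinks; i++)
        pdr *= est_ldr(lqi[i]);
    return pdr;
}
```

With these illustrative values, a two-hop path over strong links (LQIs 95 and 92) scores a bottleneck LQI of 92 and an estimated PDR of about 0.76, while a single weak hop (LQI 70) scores 70 and 0.4, so both metrics prefer the longer, higher-quality path that MIN-HOP would avoid.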

4.2.2 Results

Figure 9 shows a CDF of the path delivery ratio for the set of 28 paths. As the figure shows, PATH-DR results in the highest delivery ratio over all paths, with nearly all paths achieving a PDR of over 80% and a median of 92.4%. MAX-LQI also


Figure 9: Comparison of MIN-HOP, MAX-LQI, and PATH-DR routing metrics. This CDF shows the path delivery ratio measured over 28 separate pairs of nodes using each of the three metrics. PATH-DR produces the best results, with 50% of the paths obtaining a delivery ratio of over 92.4%.

performs well, with a median PDR of 79.6%. MIN-HOP is noticeably worse, with a median PDR of just 60.7%.

PATH-DR and MAX-LQI achieve higher robustness at the cost of higher overhead. In addition to taking longer routes, multiple routing paths may be active between a sender and receiver at once. The existence of multipath routes should result in higher path delivery ratios. Figure 10 shows the overhead for each path selection metric in terms of the ratio between the total number of transmitted messages and the number of messages generated by each sender. This ratio is at least as high as the number of routing hops from sender to receiver, and will be higher when multiple routes are involved.

The path length for each receiver join message is shown in Figure 11, while the total number of active forwarders for each node pair is shown in Figure 12. The median path length for MIN-HOP is 2 hops, whereas it is about 4 hops for both PATH-DR and MAX-LQI. The number of forwarders is somewhat higher than the number of hops because PATH-DR and MAX-LQI may activate multiple paths on subsequent route selection phases. The impact of pruning these extra routes is presented in Section 4.4.

These results show that the routing selection metric has a large impact on performance and communication overhead. The PATH-DR metric provides high reliability using a simple model mapping the CC2420's LQI to LDR, making it straightforward to implement without incurring additional measurement overhead. We can imagine a wide range of alternate path selection metrics as part of future work.

4.2.3 Stability of path measurements

Receivers select routes based on a single measurement of each path, which is accumulated as a route discovery message flows from the senders to the receiver. We are concerned with the stability of the path measurements over time. If we were to observe high variability in the routing metric, then selecting a path based on a single measurement of its quality may result in suboptimal paths.



Figure 10: Overhead incurred by each path selection metric. This CDF shows the ratio of the number of data packets transmitted in the network (including forwarded messages) to the number of data packets originated by each sender. For example, for 70% of the 28 paths, MAX-LQI resulted in an overhead of 6 transmissions for every original packet.

Figure 15 shows the value of each of the three routing metrics, measured across 100 contiguous data packets, for a single sender-receiver pair. Over time, ADMR adapts the path chosen for incoming packets; vertical lines in the figure indicate when the node responded with RECEIVER-JOIN messages. The path metric used in this case is MAX-LQI, but we have recorded the measured values for the MIN-HOP and PATH-DR metrics as well. Note that because multiple forwarders are active, the specific path traversed by each incoming data message varies. This can be seen in the MIN-HOP graph, which shows that the number of hops varies on a per-packet basis.

As the figure shows, the PATH-DR metric is very stable, despite the different paths traversed by incoming data packets. MAX-LQI and MIN-HOP are more variable, suggesting that they are less reliable as a predictor of overall routing cost.

4.2.4 The ETX metric

Another metric that has been proposed for choosing good ad hoc routing paths is the minimum expected transmission count metric [28], also called ETX [4]. ETX selects the path

p* = argmin_{p ∈ P} Σ_{l ∈ L_p} 1 / (LDR_f(l) × LDR_r(l))

where LDR_f(l) and LDR_r(l) are the forward and reverse LDRs for link l, respectively. ETX is intended to maximize path throughput, rather than delivery ratio. It is intended primarily for routing protocols that perform link-by-link acknowledgment and retransmission, which is why the reverse LDR factors in.
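For concreteness, the per-path ETX cost defined above can be sketched as follows; the function name and the example delivery ratios are illustrative, not measurements from our testbed.

```c
#include <assert.h>

/* ETX cost of a path: the sum over links of the expected number of
 * transmissions per delivered-and-acknowledged packet, i.e.
 * 1 / (forward LDR x reverse LDR) for each link. */
static double etx_cost(const double *fwd, const double *rev, int nlinks) {
    double cost = 0.0;
    for (int i = 0; i < nlinks; i++)
        cost += 1.0 / (fwd[i] * rev[i]);
    return cost;
}
```

For example, a perfect link contributes a cost of 1 transmission, while a link with a 50% forward LDR and a perfect reverse LDR contributes 2, since on average every packet must be sent twice.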

We chose not to implement ETX for two reasons. First, in a multicast routing protocol such as ADMR, link-layer acknowledgments are unnecessary. A forwarder simply rebroadcasts each message it receives, which may be received by multiple forwarders or receivers. Unless some form of multi-receiver acknowledgment were implemented in the MAC (a feature not currently available in the TinyOS radio stack), it is not possible to use link-layer acknowledgments in any case. Therefore, ADMR


Figure 11: Path lengths for receiver join messages for each path selection metric. This CDF shows the length of the paths selected by receivers for each metric. The MIN-HOP metric minimizes the hop count, while MAX-LQI and PATH-DR incur some path stretch because they focus on higher-quality links.

does not particularly care about the reverse LDR. The one case where the reverse LDR may affect performance is when propagating route reply messages, which traverse the reverse path. In our implementation, link-layer acknowledgment and retransmission are used only for this message type.

The second reason to avoid ETX is that it relies on a direct measurement of each link's LDR. While this information can be gathered by snooping on regular traffic, a link that has never been measured will require multiple rounds of message exchange to yield an LDR estimate. It is far simpler and more efficient to use the LQI-based model to predict LDR as described above.

4.3 Impact of limited bandwidth

Another shortcoming of most MANET protocol designs is that they assume relatively high link bandwidths. The original ADMR protocol was evaluated using a simulated 802.11 network with a raw transmission rate of 2 Mbps; MAC overhead leaves roughly 1 Mbps to applications. In contrast, 802.15.4-based radios provide substantially less capacity. While 802.15.4 operates at a nominal transmission rate of 250 Kbps, our measurements of the CC2420 using the default radio stack in TinyOS yield an application data rate of just 25 Kbps with small packets (28 bytes) and up to 60 Kbps for larger packets (100 bytes). This is 17 times less than 802.11 at 2 Mbps (1 Mbps for applications), or 93 times less than 802.11b operating at 11 Mbps (5.5 Mbps for applications).

For these reasons we expect protocol overheads to have a serious impact on the performance of ADMR on 802.15.4. We have not attempted to minimize these overheads; rather, our goal is to demonstrate the practical implications of limited channel bandwidth.

Protocol overhead in ADMR arises primarily due to route discovery and route reply messages. In sender-initiated discovery, each sender periodically floods the network, which allows node tables to be maintained on each intermediate node. When a receiver wishes to establish a path, it sends a route re-



Figure 12: Number of active forwarders for each path selection metric. This CDF shows the number of forwarders that are active while routing data for each of the 28 paths. The number of forwarders is not identical to the path length in Figure 11 because multiple paths may be active.

ply back to each sender. The periodic per-sender floods induce the highest overhead in ADMR and, unfortunately, scale with network size. For example, in a network of 30 nodes propagating per-node floods every 5 sec, the network-wide protocol overhead is 30²/5 = 180 packets/sec.
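This quadratic scaling follows directly from the flood structure: with N senders each flooding every T seconds, and every node retransmitting each flood once, the network carries N × N / T packets/sec of discovery traffic. A one-line sketch:

```c
#include <assert.h>

/* Network-wide flood overhead: N senders, each flooding every
 * period_s seconds, with each flood retransmitted once by every
 * one of the N nodes, gives N*N/period_s packets/sec in total. */
static double flood_overhead(int nodes, double period_s) {
    return (double)nodes * nodes / period_s;
}
```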

Because the size of our testbed is limited, we cannot directly generate an arbitrary amount of protocol overhead (which might be seen in a much larger network). Instead, we cause all nodes in the network to generate interference packets at a rate that we control. We then show the impact on the achieved delivery ratio for several paths as this interference rate varies.

Figure 16 shows the results of this experiment, with each interfering node's packet rate increasing from 0 to 50 packets/sec. To eliminate effects where a node drops its own transmissions because it is also generating interference messages, whenever a node is configured as a forwarder it generates no interference messages of its own. While this is not entirely realistic (a node will still propagate discovery floods while it is acting as a forwarder), we wanted to avoid losing data transmissions due to queue overflow on the transmitter.

As the figure shows, the path delivery ratio drops rapidly with even a modest amount of background traffic. Keep in mind that this is not because forwarders are dropping outgoing packets (due to MAC backoff or queue overflow). The only explanation is that nodes are unable to receive packets in the presence of interfering traffic. That is, the background traffic "jams" receivers along the ADMR path and prevents them from correctly decoding incoming messages. This is likely due to collisions caused by hidden terminal and capture effects.

We have performed extensive measurements of this phenomenon on our testbed and have observed that the LQI and RSSI for properly received messages are not diminished by the presence of background traffic. However, this may be because RSSI and LQI are sampled at the beginning of an incoming message, while a collision could corrupt the message later during its reception.


Figure 13: Comparison of protocol overhead for each path selection metric. This CDF shows the protocol overhead, the ratio of protocol control packets to the total number of packets sent on the network, for each path selection metric.


Figure 14: Comparison of the normalized overhead for each path selection metric. This CDF shows the normalized overhead, the ratio of the total number of packets sent on the network to the total data packets received by a receiver, for each path selection metric.

4.4 Impact of route pruning

In ADMR, nodes are configured as routers when they receive a route reply message from a receiver. Over time, different routes may be activated as link conditions change. Also, at any given time, multiple routes may exist between a single pair of nodes. By pruning redundant routes from the network over time, communication overheads can be reduced, although this may also have a negative impact on path reliability.

We investigate the impact of two approaches to path pruning in ADMR. The first, active reinforcement, requires that nodes continue to receive route reply messages from a receiver in order to stay active as forwarders. If a node has not received a route reply for 10 sec, it clears its forwarder status. This time is equivalent to 2 discovery cycles. The second approach, passive reinforcement, causes a node to remain active as a forwarder as long as it overhears another node retransmitting its own messages (or it continues to receive route replies).
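The difference between the two schemes can be sketched as the pruning check below. Only the 10-second timeout comes from the text; the structure and function names are our own illustration, not the actual TinyADMR code.

```c
#include <assert.h>

#define REINFORCE_TIMEOUT_S 10   /* two discovery cycles, per the text */

struct forwarder_state {
    int active;            /* 1 while this node acts as a forwarder */
    int last_route_reply;  /* time (s) of last route reply heard */
    int last_overheard;    /* time (s) we last overheard a neighbor
                              retransmitting a packet we forwarded */
};

/* Active reinforcement: only route replies keep a forwarder alive. */
static void prune_active(struct forwarder_state *f, int now) {
    if (now - f->last_route_reply >= REINFORCE_TIMEOUT_S)
        f->active = 0;
}

/* Passive reinforcement: overheard retransmissions also count as
 * evidence that this node is still on a useful path. */
static void prune_passive(struct forwarder_state *f, int now) {
    int last = f->last_route_reply > f->last_overheard
             ? f->last_route_reply : f->last_overheard;
    if (now - last >= REINFORCE_TIMEOUT_S)
        f->active = 0;
}
```

Under passive reinforcement a forwarder whose route replies have lapsed can stay active purely because its transmissions are still being picked up downstream, which is why it keeps more forwarders alive than the active scheme.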



Figure 15: Stability of path selection metrics over time. This figure plots the value of each of the three path selection metrics for 100 contiguous data packets for a single sender-receiver pair. The vertical lines indicate times when RECEIVER-JOIN messages were transmitted by the receiver.

Figure 17 compares the delivery ratio for the active and passive reinforcement schemes. Not surprisingly, passive reinforcement keeps more forwarders active, resulting in much higher reliability than active reinforcement. Figure 18 shows the number of forwarders for each of the 28 node pairs at the end of each 100 sec run. Active reinforcement results in about 2 fewer forwarders than passive reinforcement. This suggests that a small number of additional forwarders can yield a great deal of increased reliability.

4.5 Impact of limited node state

The final lesson that we explore involves the impact of limited node state. Sensor nodes such as the Telos and MicaZ have notoriously small memory sizes: the MSP430 microprocessor has only 10 KB of RAM, while the ATmega128L has just 4 KB. We cannot expect the routing layer to consume an arbitrary amount of memory to store its routing state. This is especially true if there is a substantial application running on top of the routing layer that has its own memory requirements.

The node table maintained by every ADMR node potentially contains one entry for every other node in the network. In our implementation, each entry consumes 9 bytes. In a very large network, it is clear that the number of entries in this table can quickly saturate memory.

The original ADMR paper [10] suggests using an LRU strategy to prune node table entries over time. However, in a network with many active senders it may not be possible to guarantee an upper bound on the node table size. We are concerned with how to deal with overflow in the node table given some fixed limit on


Figure 16: Delivery ratio for three representative paths as the amount of background traffic is increased. Delivery ratio degrades with background traffic despite no drops by transmitters.


Figure 17: Comparison of delivery ratio for active vs. passive route reinforcement. The MAX-LQI metric is used for this experiment. With active reinforcement, forwarders are deactivated more rapidly, which negatively impacts the achieved delivery ratio.

its size.

Upon receipt of a discovery message, ADMR will consult the node table and either update the existing entry for this sender, or attempt to create a new entry. If the table is full, we must either drop the new entry or evict some other entry to make room. Dropping a node table entry has two effects. The first is that the node loses information on the routing cost from the sender. This is only needed when establishing new routes, so these entries are only needed shortly after a discovery message has been received (in case the receiver wishes to reinforce this route).

The second effect is that the last sequence number received from this node, used for duplicate suppression, is lost. This information is required for all senders for which this node is a forwarder. This suggests that ADMR should keep the last sequence number and reverse-path information in separate tables, although this is not the case in the current protocol design.

We explore several different policies for evicting table entries. The most naive policy simply drops the new entry if the



Figure 18: Number of active forwarders for active vs. passive route reinforcement. The number of forwarders is measured at the end of the run, after pruning has occurred. Active reinforcement results in only about 2 fewer forwarders.

table is full, only allowing updates to entries already in the table. Another simple policy is FIFO, which drops the oldest entry (where entries are ordered by time of insertion). FIFO is intended to time out stale entries from the table in favor of new entries. However, if the routing cost for the new entry is very high, it may not be worthwhile maintaining information on this route.

A better approach may be to maintain node table entries for high-quality routes, with the expectation that this node will be called upon to act as a forwarder. In some sense, the impact of dropping discovery messages for low-quality paths should not be too severe.
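The three policies can be sketched as a single slot-selection routine over a fixed-size table. The entry layout and the lower-is-better cost convention below are illustrative; the actual entry is 9 bytes and carries additional routing state.

```c
#include <assert.h>

#define TABLE_SIZE 4   /* the limit used in the experiment below */

struct entry {
    int sender;        /* sender node id; -1 marks a free slot */
    int cost;          /* path cost from the discovery message
                          (lower is better in this sketch) */
    int inserted_at;   /* insertion order, used by FIFO */
};

enum policy { DROP_NEW, FIFO, DROP_WORST };

/* Returns the slot to (re)use for a new sender, or -1 to drop it. */
static int pick_slot(const struct entry *t, int n, enum policy p,
                     int new_cost) {
    int i, victim = 0;
    for (i = 0; i < n; i++)
        if (t[i].sender < 0) return i;          /* free slot available */
    switch (p) {
    case DROP_NEW:                              /* table full: drop new */
        return -1;
    case FIFO:                                  /* evict oldest insertion */
        for (i = 1; i < n; i++)
            if (t[i].inserted_at < t[victim].inserted_at) victim = i;
        return victim;
    case DROP_WORST:                            /* evict the highest-cost
                                                   route, but only if the
                                                   new route is better */
        for (i = 1; i < n; i++)
            if (t[i].cost > t[victim].cost) victim = i;
        return t[victim].cost > new_cost ? victim : -1;
    }
    return -1;
}
```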

Figure 19 shows a comparison of each of these table-management policies with 6 senders and 1 receiver. We emulate the impact of a large network by artificially limiting the node table size to 4 entries on each node. This implies that nodes will not be able to maintain routing state for all 6 senders. As the results show, none of the proposed policies works well in all cases. The naive drop-new-entry policy performs very poorly. The FIFO and drop-worst policies work reasonably for only 3 out of the 6 routes.

Another approach to managing limited memory size is to swap node table entries to external memory (such as flash) on demand. We have not yet explored this approach, although the results presented above suggest it may be necessary in large networks.

5 Related Work

5.1 MANET routing protocols

Researchers in the MANET community have put a great deal of effort into the design of routing protocols for ad hoc networks. For unicast use, DSDV [19], AODV [20], and DSR [11] are three of the most commonly studied routing protocols for MANET applications. DSDV is a proactive distance-vector routing protocol. It maintains a routing table containing shortest hop counts from a node to every other node in the network. The table is built up by periodically exchanging routing tables among neighbors. With DSDV, every possible route in the network is


Figure 19: Comparison of delivery performance with different node table management strategies. This experiment uses 6 senders and 1 receiver. In each case except for Large Table, the node table size is limited to 4 entries to simulate the effect of scaling in a large network. None of the proposed schemes for prioritizing node table entries works well in all cases.

maintained by the protocol. AODV is also based on distance-vector routing, but it works on demand rather than proactively. A flood of route request messages is sent out whenever a node needs to find a route. The destination replies with a route reply packet following the reverse path of the route request message. DSR is another popular on-demand unicast protocol that discovers routes in a similar fashion to AODV. However, in DSR, nodes do not keep next-hop state; rather, source nodes specify the entire route in every data packet they send.

A wide range of ad hoc multicast routing protocols have also been proposed. For brevity we only discuss three examples here: ADMR [10], ODMRP [14], and MAODV [21]. ODMRP and MAODV are both very similar to ADMR. The route establishment mechanism is essentially equivalent in all three protocols, although ODMRP supports only the sender-initiated discovery model. The major differences between ODMRP and ADMR are in the tree-pruning mechanisms and how broken links are repaired. ODMRP prunes forwarders based on a lifetime value associated with each forwarding state. The lifetime is decreased every time a timer fires and is reset every time the node is re-selected as a forwarder. This is essentially equivalent to the active reinforcement technique implemented in TinyADMR, whereas ADMR only reinforces forwarders based on passive acknowledgments. ODMRP also contains a mobility prediction scheme that acquires information about the movement of a node using GPS-derived location information. Based on a fixed-range radio model, it predicts when two nodes will be out of communication range and adapts the re-discovery timer accordingly.

In MAODV, route discovery is also carried out with network flooding. However, MAODV does not differentiate data senders from receivers in the multicast tree. All nodes that are already in a multicast group will answer the discovery message with a unicast packet following the reverse paths of the discovery messages. The node that sent out the discovery message then picks the best route to connect to the tree according to the multiple reply messages it has received. To detect link failures, MAODV


requires every node to broadcast a hello message periodically. If a node stops receiving hello messages from a particular node, it will initiate the route repair process.

Although the above-mentioned MANET multicast protocols were all carefully designed, almost all of them have been evaluated only through simulations with simple radio models. The simulations often ignore link asymmetry and assume a fixed communication range.

5.2 Studies of real-world link quality

Studies of link-level data delivery characteristics of wireless networks have confirmed that the assumption of fixed-range, symmetric links is unrealistic [2, 3, 6]. New routing metrics based on dynamic link quality estimation [4, 28] have been proposed to alleviate this problem by selecting longer but more reliable routes.

Unfortunately, the routing protocols described above all adopt hop count as the path routing cost. This implies that the protocol may favor shorter paths over longer paths with higher reliability. Therefore, it is unclear from the current literature how well the proposed MANET multicast routing protocols will work if they are deployed in a real environment using real radio communication channels.

5.3 Routing in TinyOS-based sensor networks

Studies of routing in sensor networks have primarily focused on building a single spanning tree that routes messages from all the nodes in the network to a single base station [28, 5, 17, 18, 8]. Such a global spanning tree is useful for a broad range of sensor network applications that involve network-wide data collection [25, 26, 24].

Woo et al. [28] carefully studied design strategies for many-to-one spanning trees in sensor networks. They found that maintaining a spanning tree with highly reliable links is non-trivial and requires dynamic link estimation on each sensor node. The dynamic link estimation proposed in their work is performed by passively snooping packets from other nodes and using a window mean EWMA to estimate link quality over time. Their evaluation is based on a simulation model derived from measurements of link quality between two nodes placed at increasing distances. Their work also highlights several eviction policies for neighbor tables on each node.

5.3.1 TinyOS implementation of MANET protocols

Because TinyOS-based sensor networks are still relatively young in comparison to the vast body of literature on MANET protocols, we have been unable to identify many TinyOS-based implementations of MANET protocols. We are only aware of the TinyOS implementations of DSDV and AODV by Yarvis et al. [30, 9]. Of the two protocols, only DSDV was studied and published. In [30], the authors present results for an implementation of DSDV and point out that selecting high-quality links outperforms shortest-hop-count paths. We are not aware of any other experimental studies of MANET-style multicast routing protocols in TinyOS.

6 Future Work and Conclusions

This paper has presented an investigation of the issues that arise when translating MANET-based protocol designs into a sensor network context. As sensor networks become more widespread,

new applications will be developed that present a broad set of communication requirements. Given that the MANET community has invested a great deal of effort into routing protocols for mobile wireless environments, we believe that there is real value in understanding to what extent this work can be reapplied.

Many of the lessons arising from our TinyOS-based implementation of ADMR stem from the enormous differences in the assumed hardware environment. ADMR was developed for relatively high-powered devices with significantly more radio bandwidth and memory than is found on typical motes. It is not surprising, then, that we faced some challenges implementing ADMR on this platform. We feel that it is significant that we were able to produce a working and fairly robust implementation of ADMR despite these challenges.

While our goal was to remain faithful to the original ADMR design as much as possible, the most substantial modification was the introduction of alternate path-selection metrics. Minimizing hop count performs poorly, while a simple LQI-based estimation of the path delivery ratio works well and incurs no additional measurement overhead. We have also investigated techniques for active versus passive route reinforcement, as well as managing limited routing state.

Our results demonstrate that there is still significant work to be done if ADMR is to be effective in large networks. With a large number of nodes, protocol overhead will readily saturate available bandwidth. Most MANET protocols use a fixed transmission rate for protocol packets such as discovery messages. Dynamically tuning these rates based on background traffic or node density would permit overhead to scale with available bandwidth.

Protocol state management under severe memory limitations is another area for future work. We have explored various node table eviction policies, although our results demonstrate that deliberately dropping this state has an adverse effect on performance. More efficient data structure design and swapping state to external flash may be viable solutions.

References[1] CC2420 Product Information.http://www.chipcon.com/index.

cfm?kat_id=2&subkat_id=12&dok_id=115 .

[2] D. Aguayo, J. Bicket, S. Biswas, G. Judd, and R. Morris. Link-level mea-surements from an 802.11b mesh network.SIGCOMM Comput. Commun.Rev., 34(4):121–132, 2004.

[3] A. Cerpa, N. Busek, and D. Estrin. SCALE: a tool for Simple ConnectivityAssessment in Lossy Environments. Technical report, September 2003.

[4] D. S. J. D. Couto, D. Aguayo, J. Bicket, and R. Morris. A high-throughputpath metric for multi-hop wireless routing. InMobiCom ’03: Proceed-ings of the 9th annual international conference on Mobile computing andnetworking, pages 134–146. ACM Press, 2003.

[5] D. Ganesan, D. Estrin, and J. Heidemann. Dimensions: why do we need anew data handling architecture for sensor networks?SIGCOMM Comput.Commun. Rev., 33(1):143–148, 2003.

[6] D. Ganesan, B. Krishnamachari, A. Woo, D. Culler, D. Estrin, andS. Wicker. Complex Behavior at Scale: An Experimental Study of Low-Power Wireless Sensor Networks. Technical report, Feburary 2002.

[7] W. Heinzelman, A. Chandrakasan, and H. Balakrishnan. Energy-efficientcommunication protocol for wireless microsensor networks. InProc. the33rd Hawaii International Conference on System Sciences (HICSS), Jan-uary 2000.

[8] C. Intanagonwiwat, R. Govindan, and D. Estrin. Directed diffusion: Ascalable and robust communication paradigm for sensor networks. InProc.International Conference on Mobile Computing and Networking, Aug.2000.

Page 14: Bor-rong Chen Kiran-Kumar Muniswamy-Reddy and Matt Welsh …ftp.deas.harvard.edu/techreports/tr-22-05.pdf · 2005. 12. 20. · Lessons Learned from Implementing Ad-hoc Multicast Routing

[9] Jasmeet Chhabra, Intel Labs. Tinyaodv for tinyos. http://cvs.sourceforge.net/cgi-bin/viewcvs.cgi/tinyos/tinyos-1.x/contri%b/hsn , 2003.

[10] J. G. Jetcheva and D. B. Johnson. Adaptive Demand-Driven MulticastRouting in Multi-Hop Wireless Ad Hoc Networks. In2001 ACM Interna-tional Symposium on Mobile Ad Hoc Networking and Computing (Mobi-Hoc2001), pages 33–44, October 2001.

[11] D. B. Johnson and D. A. Maltz. Dynamic source routing in ad hoc wireless networks. In Imielinski and Korth, editors, Mobile Computing, volume 353. Kluwer Academic Publishers, 1996.

[12] W. Kaiser, G. Pottie, M. Srivastava, G. Sukhatme, J. Villasenor, and D. Estrin. Networked Infomechanical Systems (NIMS) for Ambient Intelligence. Invited contribution to Ambient Intelligence, Springer-Verlag, 2004.

[13] A. Kansal, A. A. Somasundara, D. D. Jea, M. B. Srivastava, and D. Estrin. Intelligent fluid infrastructure for embedded networks. In MobiSYS '04: Proceedings of the 2nd international conference on Mobile systems, applications, and services, pages 111–124, New York, NY, USA, 2004. ACM Press.

[14] S.-J. Lee, M. Gerla, and C.-C. Chiang. On-demand multicast routing protocol. In IEEE Wireless Communications and Networking Conference, WCNC '99, pages 1298–1304, September 1999.

[15] T. Liu, C. M. Sadler, P. Zhang, and M. Martonosi. Implementing software on resource-constrained mobile sensors: Experiences with Impala and ZebraNet. In MobiSYS '04: Proceedings of the 2nd international conference on Mobile systems, applications, and services, pages 256–269, New York, NY, USA, 2004. ACM Press.

[16] K. Lorincz, D. Malan, T. R. F. Fulford-Jones, A. Nawoj, A. Clavel, V. Shnayder, G. Mainland, S. Moulton, and M. Welsh. Sensor Networks for Emergency Response: Challenges and Opportunities. IEEE Pervasive Computing, Oct-Dec 2004.

[17] S. Madden, M. J. Franklin, J. M. Hellerstein, and W. Hong. TAG: A Tiny AGgregation Service for Ad-Hoc Sensor Networks. In Proc. the 5th OSDI, December 2002.

[18] S. Nath, P. B. Gibbons, S. Seshan, and Z. R. Anderson. Synopsis diffusion for robust aggregation in sensor networks. In SenSys '04: Proceedings of the 2nd international conference on Embedded networked sensor systems, pages 250–262, New York, NY, USA, 2004. ACM Press.

[19] C. Perkins and P. Bhagwat. Highly dynamic destination-sequenced distance-vector routing (DSDV) for mobile computers. In ACM SIGCOMM '94 Conference on Communications Architectures, Protocols and Applications, pages 234–244, 1994.

[20] C. E. Perkins and E. M. Royer. Ad hoc On-Demand Distance Vector Routing. In Proceedings of the 2nd IEEE Workshop on Mobile Computing Systems and Applications, pages 90–100, New Orleans, LA, February 1999.

[21] E. M. Royer and C. E. Perkins. Multicast operation of the ad-hoc on-demand distance vector routing protocol. In Fifth Annual ACM/IEEE International Conference on Mobile Computing and Networking, MobiCom '99, pages 207–218, August 1999.

[22] Y. Sankarasubramaniam, O. B. Akan, and I. F. Akyildiz. ESRT: Event-to-Sink Reliable Transport in Wireless Sensor Networks. In Proceedings of ACM MobiHoc '03, pages 177–188, Annapolis, Maryland, USA, June 2003.

[23] C. Schurgers, V. Tsiatsis, S. Ganeriwal, and M. Srivastava. Topology management for sensor networks: Exploiting latency and density. In MobiHoc '02: Proceedings of the 3rd ACM international symposium on Mobile ad hoc networking & computing, pages 135–145, New York, NY, USA, 2002. ACM Press.

[24] G. Simon, M. Maroti, A. Ledeczi, G. Balogh, B. Kusy, A. Nadas, G. Pap, J. Sallai, and K. Frampton. Sensor network-based countersniper system. In SenSys '04: Proceedings of the 2nd international conference on Embedded networked sensor systems, pages 1–12, New York, NY, USA, 2004. ACM Press.

[25] R. Szewczyk, A. Mainwaring, J. Polastre, and D. Culler. An analysis of a large scale habitat monitoring application. In Proc. the Second ACM Conference on Embedded Networked Sensor Systems (SenSys), November 2004.

[26] R. Szewczyk, E. Osterweil, J. Polastre, M. Hamilton, A. Mainwaring, and D. Estrin. Habitat monitoring with sensor networks. Communications of the ACM, Special Issue: Wireless sensor networks, 47(6):34–40, June 2004.

[27] C.-Y. Wan, A. T. Campbell, and L. Krishnamurthy. PSFQ: A reliable transport protocol for wireless sensor networks. In WSNA, pages 1–11, 2002.

[28] A. Woo, T. Tong, and D. Culler. Taming the underlying challenges of reliable multihop routing in sensor networks. In Proc. the First ACM Conference on Embedded Networked Sensor Systems (SenSys 2003), November 2003.

[29] P. Wright et al. Fire Information & Rescue Equipment. http://fire.me.berkeley.edu/indexmain.php?main=home.php&title=FIRE%20Project:%20Overview.

[30] M. Yarvis, W. S. Conner, L. Krishnamurthy, J. Chhabra, B. Elliott, and A. Mainwaring. Real-world experiences with an interactive ad hoc sensor network. In Proceedings of the International Workshop on Ad Hoc Networking, August 2002.