INTERNET-DRAFT                       Zygmunt J. Haas,  Cornell University
                                     Marc R. Pearlman, Cornell University
                                     Prince Samar,     Cornell University

Expires in six months on January 2003                             July 2002

           The Zone Routing Protocol (ZRP) for Ad Hoc Networks
                  <draft-ietf-manet-zone-zrp-04.txt>

Status of this Memo

   This document is an Internet-Draft and is in full conformance with
   all provisions of Section 10 of RFC 2026, except that the right to
   produce derivative works is not granted.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that other
   groups may also distribute working documents as Internet-Drafts.

   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or obsoleted by other documents at any
   time. It is inappropriate to use Internet-Drafts as reference material
   or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

   Distribution of this memo is unlimited.

Abstract

   This document describes the Zone Routing Protocol (ZRP), a hybrid
   routing framework suitable for a wide variety of mobile ad-hoc
   networks, especially those with large network spans and diverse
   mobility patterns.  Each node proactively maintains routes within a
   local region (referred to as the routing zone).  Knowledge of the
   routing zone topology is leveraged by the ZRP to improve the
   efficiency of a globally reactive route query/reply mechanism. The
   proactive maintenance of routing zones also helps improve the quality
   of discovered routes, by making them more robust to changes in network
   topology. The ZRP can be configured for a particular network by proper
   selection of a single parameter, the routing zone radius.

                                Contents

   Status of this Memo
   Abstract

   Applicability Statement
       A. Networking Context
       B. Protocol Characteristics and Mechanisms

   1.  Introduction

   2.  Overview of the Zone Routing Protocol
       2.1  Routing Zones and the Intrazone Routing Protocol (IARP)
       2.2  The Interzone Routing Protocol (IERP)
            2.2.1  Routing Zone Based Route Discovery
            2.2.2  Route Accumulation Procedure
            2.2.3  Query Control Mechanisms
            2.2.4  Route Maintenance

   3.  The ZRP Architecture

   4.  Implementation Details
       4.1  Intrazone Routing Protocol (IARP) -- Link State
            A.  Packet Format
            B.  Data Structures
            C.  Interfaces
            D.  State Machine
            E.  Pseudocode Implementation
       4.2  Interzone Routing Protocol (IERP)
            A.  Packet Format
            B.  Data Structures
            C.  Interfaces
            D.  State Machine
            E.  Pseudocode Implementation
       4.3  Bordercast Resolution Protocol (BRP)
            A.  Packet Format
            B.  Data Structures
            C.  Interfaces
            D.  State Machine
            E.  Pseudocode Implementations

   5.  Other Considerations
       5.1  Sizing the Routing Zone
       5.2  Hierarchical Routing and the ZRP

   6.  References

   7.  Draft Modifications

   Authors' Information
   MANET Contact Information

Applicability Statement

A.  Networking Context

    The hybrid Zone Routing Protocol (ZRP) framework can adapt to a wide
    variety of network scenarios by adjusting the range of the nodes'
    proactively maintained routing zones.  Large routing zones are
    preferred when demand for routes is high and/or the network consists
    of many slowly moving nodes.  In the extreme case of a network with
    fixed topology, the ideal routing zone radius would be infinitely
    large.  (Consider the purely proactive routing protocols used on the
    fixed Internet).  On the other hand, smaller routing zones are
    appropriate in situations where route demand is low and/or the
    network consists of a small number of nodes that move fast relative
    to one another.  In the "worst case", a routing zone radius of one
    hop is best, and the ZRP defaults to a traditional reactive flooding
    protocol.

    When the ZRP is properly configured for a particular network scenario,
    it can perform at least as well as (and often better than) its purely
    proactive and reactive constituent protocols.  In situations where
    the network behavior varies across different regions, the ZRP
    performance can be fine-tuned by individual adjustment of each node's
    routing zone.

    The global reactive component of the ZRP uses the multicast based
    "bordercast" mechanism to propagate route queries throughout the
    network efficiently, rather than relying on the neighbor-broadcast
    flooding found in traditional reactive protocols.  Consequently, the
    ZRP provides the most benefit in networks where reliable neighbor
    broadcasting is either inefficient or altogether impossible.  In
    particular, the ZRP is well suited for multi-channel, multi-
    technology routing fabrics and networks operating under high load.

B.  Protocol Characteristics and Mechanisms

   * Does the protocol provide support for unidirectional links?
     (if so, how?)
        Yes.  The ZRP provides "local" support for unidirectional links.
        A unidirectional link can be used as long as the link source and
        link destination lie within each other's routing zone.

   * Does the protocol require the use of tunneling? (if so, how?)
        No.

   * Does the protocol require using some form of source routing?
     (if so, how?)

        No.  The ZRP's framework supports global route discovery based
        on source routing, distributed distance vectors, or various
        combinations of both.

   * Does the protocol require the use of periodic messaging?
     (if so, how?)

        Yes.  The ZRP framework proactively maintains local routing
        information (routing zones) based on periodic exchanges of
        neighbor discovery messages.

   * Does the protocol require the use of reliable or sequenced packet
     delivery? (if so, how?)

        The ZRP only assumes that link-layer (neighbor) unicasts are
        delivered reliably and in-sequence.  Reliable and sequenced
        delivery of neighbor broadcasts and IP unicasts is not
        required.

   * Does the protocol provide support for routing through a multi-
     technology routing fabric? (if so, how?)

        Yes.  It is assumed that each node's network interface is
        assigned a unique IP address.

   * Does the protocol provide support for multiple hosts per router?
     (if so, how?)

        Yes.  Multiple hosts may be associated with a router.  These
        hosts can be identified by the neighbor discovery/maintenance
        process.

        By default, it is assumed that a host's association with a router
        is not permanent.  As a result, the ZRP exchanges routing
        information for individual hosts in the same manner as routing
        information for routers.

        In cases where a router is permanently associated with a network
        (subnetwork), the ZRP supports the exchange of network
        (subnetwork) prefixes in place of all aggregated host IP
        addresses.

   * Does the protocol support the IP addressing architecture?
     (if so, how?)

        Yes.  Each node is assumed to have a unique IP address (or
        set of unique IP addresses in the case of multiple interfaces).
        The ZRP references all nodes/interfaces by their IP address.

        This version of the ZRP also supports IP network addressing
        (network prefixes) for routers that provide access to a
        network of non-router hosts.

   * Does the protocol require link or neighbor status sensing
     (if so, how?)

        Yes.  The ZRP proactively maintains local routing information
        (routing zones) based on detected changes in neighbor status.

   * Does the protocol have dependence on a central entity?
     (if so, how?)

        No.  The ZRP is a fully distributed protocol.

   * Does the protocol function reactively? (if so, how?)

        Yes.  The ZRP's GLOBAL route discovery mechanism is reactive.
        A route query is initiated, on demand, when a node requires
        routing information that is not immediately available in its
        routing table.

        The route query propagates through the network, using a special
        packet delivery service called "bordercasting".  Bordercasting
        leverages knowledge of local network topology to direct route
        queries away from the source, thereby reducing redundancy.

   * Does the protocol function proactively? (if so, how?)

        Yes.  The ZRP proactively maintains LOCAL routing information
        (routing zones) for each node.  The routing zone information is
        leveraged, through the bordercasting mechanism, to support
        efficient global propagation of route queries.

   * Does the protocol provide loop-free routing? (if so, how?)

        Yes.  If the reactive Interzone Routing is based on source
        routing, loop-freedom in the route discovery process is ensured
        by inspection of the accumulated source routes.  For distributed
        distance vector approaches, loop-freedom can be ensured by
        labeling queries (replies) with the source (destination) address
        and a locally unique sequence number.  Nodes that relay these
        messages can temporarily cache these identifiers in order to
        identify and terminate loops.

        Intrazone Routing based on link states is inherently loop-free,
        although temporary loops may form while new link state updates
        propagate through the routing zone.

   * Does the protocol provide for sleep period operation? (if so, how?)

        No.  Sleep period operation is not addressed in this draft.
        However, sleep period support can be included as needed.

   * Does the protocol provide some form of security? (if so, how?)

        No.  It is assumed that security issues are adequately addressed
        by the underlying protocols (IPsec, for example).

   * Does the protocol provide support for utilizing multi-channel,
     link-layer technologies? (if so, how?)

        Yes.  It is assumed that each node's network interface is
        assigned a unique IP address.

1. Introduction

   One of the major challenges in designing a routing protocol for
   ad hoc networks stems from the fact that, on one hand, to determine
   a packet route, a node needs to know at least the reachability
   information to its neighbors.  On the other hand, in an ad hoc
   network, the network topology can change quite often.  Furthermore,
   as the number of network nodes can be large, the potential number of
   destinations is also large, requiring large and frequent exchange of
   data (e.g., routes, routes updates, or routing tables) among the
   network nodes. Thus, the amount of update traffic can be quite high.
   This is in contradiction with the fact that all updates in a
   wirelessly interconnected ad hoc network travel over the air and,
   thus, are costly in resources.

   In general, the existing routing protocols can be classified either
   as proactive or as reactive. Proactive protocols attempt to
   continuously evaluate the routes within the network, so that when
   a packet needs to be forwarded, the route is already known and can
   be immediately used.  The family of Distance-Vector protocols is an
   example of a proactive scheme.  Examples of proactive routing
   protocols include the Wireless Routing Protocol (WRP) [11] and
   Destination-Sequenced Distance-Vector (DSDV) routing [16]. Reactive
   protocols, on the other hand, invoke a route determination procedure
   on demand only. Thus, when a route is needed, some sort of global
   search procedure is employed. The family of classical flooding
   algorithms belong to the reactive group. Some other examples of
   reactive (also called on-demand) ad hoc network routing protocols are
   Dynamic Source Routing (DSR) [9], Ad hoc On-Demand Distance Vector
   Routing (AODV) [17] and the Temporally Ordered Routing Algorithm
   (TORA) [13].

   The advantage of the proactive schemes is that, once a route is
   needed, there is little delay until the route is determined. In
   reactive protocols, because route information may not be available
   at the time a datagram is received, the delay to determine a route
   can be quite significant. Furthermore, the global flood-search
   procedure  of the reactive protocols requires significant control
   traffic.  Because of this long delay and excessive control traffic,
   pure reactive routing protocols may not be applicable to real-time
   communication.  However, pure proactive schemes are likewise not
   appropriate for the ad hoc networking environment, as they
   continuously use a large portion of the network capacity to keep the
   routing information current. Since nodes in an ad hoc networks network move
   quite fast, and as the changes may be more frequent than the route
   requests, most of this routing information is never even used! This
   results in a further waste of the wireless network capacity. What is
   needed is a protocol that, on one hand, initiates the route
   determination procedure on-demand, but at limited search cost. The
   protocol described in this draft, termed the "Zone Routing Protocol
   (ZRP)" ([1], [2],[9]), ([1],[2],[14]), is an example of a such a hybrid proactive/
   reactive scheme.

   The ZRP, on one hand, limits the scope of the proactive procedure
   only to the node's local neighborhood. On the other hand, the search
   throughout the network, although global in nature, is done by
   efficiently querying selected nodes in the network, as opposed to
   querying all the network nodes.

   A related issue is that of updates in the network topology. For a
   routing protocol to be efficient, changes in the network topology
   should have only a local effect. In other words, creation of a new
   link at one end of the network is an important local event but, most
   probably, not a significant piece of information at the other end of
   the network.  Globally proactive protocols tend to distribute such
   topological changes widely in the network, incurring large costs. The
   ZRP limits propagation of such information to the neighborhood of the
   change only, thus limiting the cost of topological updates.

2. Overview of the Zone Routing Protocol

   In the Zone Routing framework, a proactive routing protocol provides
   a detailed and fresh view of each node's surrounding local topology
   (routing zone).  The knowledge of local topology is used to support
   services such as proactive route maintenance, unidirectional link
   discovery and guided message distribution.  One particular message
   distribution service, called bordercasting [5], directs queries
   throughout the network across overlapping routing zones.
   Bordercasting is used in place of traditional broadcasting to
   improve the efficiency of a global reactive routing protocol.

   The benefits provided by routing zones, compared with the overhead
   of proactively tracking routing zone topology, determine the optimal
   framework configuration.  As network conditions change, the
   framework can be dynamically reconfigured through adjustment of each
   node's routing zone.

2.1 Routing Zones and the Intrazone Routing Protocol (IARP)

   The Intrazone Routing Protocol (IARP) proactively maintains routes
   to destinations within a local neighborhood, which we refer to as a
   routing zone.  More precisely, a node's routing zone is defined as
   the collection of nodes whose minimum distance in hops from the node
   in question is no greater than a parameter referred to as the zone
   radius.  Note that each node maintains its own routing zone.  An
   important consequence is that the routing zones of neighboring nodes
   overlap.

   An example of a routing zone (for node A) of radius 2 is shown below.

          +-----------------------------------------+
          |                       +---+             |
          |            +---+   ---| F |-------|     |
   +---+  |  +---+   --| C |--/   +---+     +---+   |
   | G |-----| D |--/  +---+                | E |   |  Routing Zone of
   +---+  |  +---+       |        +---+     +---+   |      node A
          |            +---+   ---| B |-------|     |  (radius = 2 hops)
          |            | A |--/   +---+             |
          |            +---+                        |
          +-----------------------------------------+

   Note that in this example nodes B through F are within the routing
   zone of A.  Node G is outside A's routing zone.  Also note that E
   can be reached from A by two paths, one of length 2 hops and one of
   length 3 hops.  Since the minimum distance is no greater than 2, E
   is within A's routing zone.

   Peripheral nodes are nodes whose minimum distance to the node in
   question is exactly equal to the zone radius.  Thus, in the above
   figure, nodes D, F, and E are A's peripheral nodes.
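
   As an illustration only (not part of the protocol specification),
   the routing zone and its peripheral nodes can be computed by a
   breadth-first search over the known topology, bounded by the zone
   radius.  The Python sketch below assumes the bidirectional links of
   the figure above.

      from collections import deque

      # Bidirectional links of the example figure above (assumed for
      # illustration; the protocol does not mandate this representation).
      LINKS = {"A": ["B", "C"], "B": ["A", "E"], "C": ["A", "D", "F"],
               "D": ["C", "G"], "E": ["B", "F"], "F": ["C", "E"],
               "G": ["D"]}

      def routing_zone(node, radius, links):
          """Return (zone members, peripheral nodes) of 'node'."""
          dist = {node: 0}
          queue = deque([node])
          while queue:
              current = queue.popleft()
              if dist[current] == radius:      # do not search beyond the zone
                  continue
              for neighbor in links[current]:
                  if neighbor not in dist:     # first visit = minimum hop count
                      dist[neighbor] = dist[current] + 1
                      queue.append(neighbor)
          zone = set(dist)
          peripheral = {n for n, d in dist.items() if d == radius}
          return zone, peripheral

      # routing_zone("A", 2, LINKS) yields the zone {A, B, C, D, E, F}
      # and the peripheral nodes {D, E, F}, as in the figure.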

   The construction of a routing zone requires a node to first know who
   its neighbors are.  A neighbor is defined as a node with whom direct
   (point-to-point) communication can be established and is, thus, one
   hop away.  Identification of a node's neighbors may be provided
   directly by the media access control (MAC) protocols, as in the case
   of polling-based protocols.  In other cases, neighbor discovery may
   be implemented through a separate Neighbor Discovery Protocol (NDP).
   Such a protocol typically operates through the periodic broadcasting
   of "hello" beacons.  The reception (or quality of reception) of a
   "hello" beacon can be used to indicate the status of the connection
   to the beaconing neighbor.

   An important consequence of the routing zone construction is the
   ability of a node to deliver a packet to its peripheral nodes.  This
   service, which we refer to as "bordercasting", allows for more
   efficient network-wide searching than simple neighbor broadcasting.
   Bordercasting could be implemented through IP multicast (for
   example, the Distance Vector Multicast Routing Protocol (DVMRP)
   [14]) to the peripheral nodes.  However, as will be explained later,
   efficient ZRP operation requires that the bordercast service be
   handled, on a hop-by-hop basis, by the ZRP itself.

   In its basic form, the ZRP's Intrazone Routing Protocol (IARP) is
   responsible for providing each node with a current view of its
   routing zone topology.  With this information, a node can identify
   ITS peripheral nodes AND construct a bordercast (multicast) tree to
   them.  However, in order for the bordercast to be carried out, each
   node in the bordercast tree needs to know the tree's downstream
   structure.  This can be achieved if each node tracks the routing
   zone topology of each of ITS interior routing zone members.  This
   "extended" routing zone consists of all nodes that are at a minimum
   distance (in hops) LESS THAN twice the "basic" routing zone radius.

   Neighbor discovery information is used as the basis for the IARP.
   The IARP may be derived from a variety of existing proactive link
   state routing protocols that provide a complete view of network
   connectivity (for example, the Open Shortest Path First protocol
   (OSPF) [6]).  The base protocol needs to be modified to ensure that
   the scope of the route updates is restricted to the radius of a
   node's extended routing zone.  Because each node needs to
   proactively acquire route information only for the nodes within its
   zone, the total amount of IARP traffic does not depend on the size
   of the network, which may be quite large.  [3] describes the IARP in
   detail and lists the conversion guidelines for adapting a proactive
   link-state routing protocol to serve as an IARP.

2.2 The Interzone Routing Protocol (IERP)

   While the IARP maintains routes within a zone, the IERP* is
   responsible for finding routes between nodes located at distances
   larger than the routing zone radius.  The IERP is distinguished from
   standard flood-search query/response protocols by exploiting the
   routing zone topology.  A node is able to respond positively to any
   queries for destinations within its routing zone.  For large
   networks, relatively few destinations lie within any particular
   node's routing zone.  In this more likely case, the node can
   efficiently continue the propagation of the query, through its
   routing zone based bordercast delivery mechanism.

   Rather than blindly broadcasting a route query from neighbor to
   neighbor, bordercasting allows the query to be directed outward,
   toward regions of the network (specifically, toward peripheral
   nodes) that have not yet been "covered" by the query.  (A covered
   node is one that belongs to the routing zone of a node that has
   received the route query.)  The query control mechanisms reduce
   route query traffic by directing query messages outward from the
   query source and away from covered routing zones.

   A node can determine local query coverage by noting the addresses of
   neighboring nodes that have forwarded the query.  In the case of
   multiple channel networks, a node can only detect query packets that
   have been directly forwarded to it.  For single channel networks, a
   node may be able to detect any query packet forwarded within the
   node's radio range.  When a node identifies a query forwarding
   neighbor, all known members of that neighbor's routing zone (i.e.,
   those members which belong to both the node's and the neighbor's
   routing zones) are marked as covered.

   When a node is called upon to relay a bordercast message, it again
   uses its routing zone topology to construct a bordercast tree that
   is rooted at itself and spans its uncovered peripheral nodes.  The
   message is then forwarded to those neighbors in the bordercast tree.
   By virtue of the fact that this node has forwarded the query, all of
   its routing zone members are marked as covered.  Therefore, a
   bordercasting node will not forward a query more than once.  The
   details of the bordercast message delivery are described in [5].

   Given the availability of an underlying bordercast service, the
   operation of the IERP is quite similar to that of standard reactive
   route discovery protocols.  An IERP route discovery is initiated
   when no route is locally available to the destination of an outgoing
   data packet.  The source generates a route query packet, which is
   uniquely identified by a combination of the source node's address
   and a request number.  The query is then relayed to a subset of
   neighbors, as determined by the bordercast algorithm.  Upon receipt
   of a route query packet, a node checks if the destination lies in
   its routing zone or if a valid route to it is available in its route
   cache.  If the answer is affirmative, a route reply is sent back to
   the source.  If not, the node bordercasts the query again.  The
   operation of the IERP is sufficiently general that many existing
   reactive protocols can be used as an IERP with minimal modification.
   The details of the IERP's operation can be found in [4].

   * The IERP relies on a special sub-protocol, called the Bordercast
     Resolution Protocol (BRP), to provide the bordercast message
     delivery service for route queries.  Sections 3.0 and 4.0 describe
     in detail the relationship between the BRP and the IERP.

2.2.1  Routing Zone Based Route Discovery

   We illustrate the basic operation of routing zone based route
   discovery through a simple (and, as we will see, inefficient)
   implementation.  The source node, in need of a route to a
   destination node, first checks whether the destination lies within
   its routing zone.  (This is possible since every node knows the
   content of its routing zone.)  If a path to the destination is
   known, no further route discovery processing is required.  On the
   other hand, if the destination is not within the source's routing
   zone, the source bordercasts a route query to all of its peripheral
   nodes.  Upon receipt of the route query, each peripheral node
   executes the same algorithm.  If the destination lies within its
   routing zone, a route reply is sent back to the source, indicating
   the route to the destination.  If not, the node forwards the query
   to ITS peripheral nodes.  This process continues until the query has
   spread throughout the network.

   [Figure: an example network used to illustrate routing zone based
    route discovery.  Node A's routing zone of radius 2 contains nodes
    B, C, D, E, F and G, with D, E, F and G as its peripheral nodes.
    The destination L lies beyond A's routing zone and is reachable
    through nodes G and K.]

   In the example illustrated above, node A has a datagram to send to
   node L.  Assuming each node's zone radius is 2 hops, node L does not
   lie within A's routing zone (which does include B,C,D,E,F,G).
   Therefore, A bordercasts a routing query to its peripheral nodes:
   D, F, E and G.  Each of these peripheral nodes checks whether L
   exists in its routing zone.  Since L is not found in any routing
   zones of these nodes, the nodes bordercast the request to their
   peripheral nodes. In particular, G bordercasts to K, which realizes
   that L is in its routing zone and returns the requested route (L-K-G-A)
   to the query source, namely A.

2.2.2 Route Accumulation Procedure

   The query propagation mechanism allows a route query to indirectly
   reach the desired destination (through some node Y, which discovers
   the destination in its routing zone.)   To complete the route
   discovery process, Y needs to send a reply back to the query's source,
   S, providing S with the desired route.  To perform the route reply,
   sufficient information needs to be accumulated during the query phase
   so that Y is provided with a route back to S.  Providing routes from
   discovering node Y to query source S, and from the query source S back
   to the query destination D (through Y), is the role of the Route
   Accumulation procedure.

   In the basic Route Accumulation, a node appends its IP address to a
   received query packet.  The sequence of IP addresses specifies a route
   from the query's source to the current node.  By reversing this
   sequence, a route may also be obtained back to the source.  In this
   way, the route reply can be returned via strict source routing.
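
   As a minimal Python sketch (the field names are illustrative and do
   not correspond to the packet formats defined later in this draft),
   the append-and-reverse behavior of basic Route Accumulation looks
   like this:

      def relay_query(query, my_address):
          # Each relaying node appends its own IP address to the query.
          query["accumulated_route"].append(my_address)

      def build_reply(query, my_address):
          # The discovering node reverses the accumulated route to obtain
          # a strict source route back to the query source.
          route_from_source = query["accumulated_route"] + [my_address]
          return {"reverse_route": list(reversed(route_from_source)),
                  "route_from_source": route_from_source}

      # Example: a query from S relayed by A and B, answered by node Y.
      query = {"accumulated_route": ["S"]}
      relay_query(query, "A")
      relay_query(query, "B")
      reply = build_reply(query, "Y")
      # reply["reverse_route"] == ["Y", "B", "A", "S"]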

   In order to reduce the length of query packets and query response time,
   the query packet's accumulated route can be cached among the path's
   nodes (in the form of the previous hop toward the query source),
   rather than explicitly recorded in the packet itself.  During the
   reply phase, the cached previous hops can direct the reply back to the
   source.  Along the way, information can be accumulated in the reply
   packet, forming a source route.  Alternatively, the discovered route
   may be distributed among its nodes' routing tables, providing a next
   hop routing solution.

   Sending replies along the (reversed) paths explicitly traveled by the
   successful query packets can result in a set of discovered routes that
   exhibit limited diversity.  This anomalous behavior can be
   explained as follows.  Variations in packet forwarding delay often
   result in one query packet reaching the destination region ahead of
   others.  The descendants of this query packet trigger the majority of
   route replies.  The route diversity problem is aggravated when all
   possible paths between the source and destination pass through a
   common node, called a topological bottleneck.  Since only one query
   packet passes through the bottleneck, all discovered routes will be
   identical prior to the topological bottleneck.

   This problem could be addressed by allowing a node to relay a route
   query more than once. Alternatively, we can apply "diversity
   injection" [10] to increase diversity without generating additional
   route replies.  During the route query stage, nodes temporarily cache
   the previous hop information for ALL received query packets (including
   the packets that are discarded).  Later, during the reply phase,
   replies are relayed through the shortest of the least selected cached
   paths that does not produce a routing loop.

2.2.3 Query Control Mechanisms

   Bordercasting has the potential to support global querying schemes
   that are more efficient than flooding.  To achieve this efficiency,
   the protocol should be able to terminate a bordercasted packet before
   it arrives at a recipient peripheral node lying in a previously
   queried region of the network.  The ZRP's ability to provide this
   level of query control capability is significantly limited when
   bordercast messages are forwarded to peripheral nodes by IP.

   To prevent redundant querying, nodes should be able to detect when
   routing zones that they belong to have been queried.  Clearly, a node
   that bordercasts a route query is aware that its own zone has been
   queried.  If bordercast messages are relayed by IP, then the query
   will not be detected again until it reappears at the target peripheral
   nodes.  On the other hand, if bordercast forwarding is performed
   within the routing protocol, then all nodes in the bordercast tree
   will detect the query.  Additional query detection is possible in
   shared channel networks if the underlying IP delivery is neighbor
   broadcast or if promiscuous mode operation is enabled.  In this case,
   nodes may "overhear" a query even if they do not belong to the
   bordercast tree.

   Standard flood search protocols terminate query packets that are
   targeted for (or arrive at) previously queried nodes.  We can extend
   this idea to the ZRP by discarding route queries before they arrive at
   bordercast recipients that belong to a previously queried routing
   zone.  More precisely, a node will not relay a packet down a branch of
   the bordercast tree if each of the branch's downstream "leaves"
   (bordercast recipients) either lies inside the routing zone of a
   previously bordercasting node, or this node has already relayed the
   query to that leaf.  This scheme, which we refer to as Early
   Termination (ET), relies on the aforementioned Query Detection to
   identify which routing zone nodes have already bordercast a query.
   The "extended" routing zone maintained by the IARP is then used to
   determine the members of these previously bordercasting nodes' routing
   zones.
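
   The Early Termination decision can be sketched as follows (a Python
   sketch, illustrative only; the data structures are assumptions, not
   part of the specification).  The node unions the routing zones of
   the members it has detected as previous bordercasters and prunes any
   branch of its bordercast tree whose leaves are all covered or have
   already been sent the query.

      def prune_bordercast_branches(branches, bordercasting_nodes,
                                    already_relayed, zone_members_of):
          # branches: {downstream neighbor: set of peripheral leaf recipients}
          # bordercasting_nodes: zone members detected (via Query Detection)
          #     to have already bordercast this query
          # zone_members_of: zone membership known from the extended routing zone
          covered = set()
          for node in bordercasting_nodes:
              covered |= zone_members_of.get(node, set())
          relay = {}
          for neighbor, leaves in branches.items():
              remaining = {leaf for leaf in leaves
                           if leaf not in covered and leaf not in already_relayed}
              if remaining:                  # drop branches that are fully covered
                  relay[neighbor] = remaining
          return relay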

2.2.4 Route Maintenance

   In addition to initially discovering routes, the IERP may also assume
   responsibility for monitoring the integrity of these routes and
   repairing failed routes as appropriate.

   Route failures can be detected either proactively or reactively.
   Proactive route failure detection is triggered by the IARP, in
   response to a node leaving the routing zone.  Any IERP routes
   containing this node as the first hop can be considered invalid.
   Route failures may also be detected reactively, by IP, when the next
   hop in a datagram's source route is determined to be unreachable
   (i.e. does not appear in the Routing Table).

   Upon local detection of a route failure, a node may choose to attempt
   a route repair by discovering a path to bypass the failed link.  The
   repair discovery process is nearly identical to an initial route
   discovery.  Route repairs should not be substantially longer than the
   original failed connection.  Therefore, the depth of a repair query
   can be limited, through the use of a "time to live" field.  This has
   the advantage of producing much less query traffic than an
   unrestricted initial route query.  After a successful repair, the
   route's source MAY be notified so that routes are properly selected
   for use.  Alternatively, the repair could go unreported without
   compromising the connectivity between source and destination.
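
   A minimal Python sketch of this failure handling (the route table
   and helper names are assumptions made for illustration, not part of
   the specification) is shown below.

      def zone_node_lost(lost_node, ierp_routes):
          # Proactive detection: the IARP's Zone_Node_Lost notification
          # invalidates IERP source routes whose first hop is the lost node.
          failed = [dest for dest, route in ierp_routes.items()
                    if route and route[0] == lost_node]
          for dest in failed:
              del ierp_routes[dest]
          return failed

      def make_repair_query(source, destination, max_depth):
          # The depth of a repair search is bounded by a "time to live",
          # producing far less query traffic than an unrestricted query.
          return {"source": source, "destination": destination,
                  "time_to_live": max_depth}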

3.0 The ZRP Architecture

             .........................................
             :                 Z R P                 :
             :                                       :
+---------+  :      +---------+         +---------+  :      +---------+
|   NDM   |========>|  IARP   |========>|  IERP   |<========|  ICMP   |
+---------+  :      +---------+         |---+-----+  :      +---------+
    /|\      :          /|\             |BRP|  /|\   :          /|\
     |       :           |              +---+   |    :           |
     |       :           |               /|\    |    :           |
     |       :...........|................|.....|....:           |
     |                   |                |     |                |
    \|/                 \|/              \|/   \|/              \|/
+---------+---------+---------+---------+---------+---------+---------+
|                                 IP                                  |
+---------+---------+---------+---------+---------+---------+---------+

Legend:

        A <---> B      exchange of packets between protocols A & B
        A  ===> B      information passed from protocol A to protocol B

        Existing Protocols
        ------------------
                IP              Internet Protocol
                ICMP            Internet Control Message Protocol

        ZRP Entities
        ------------
                IARP            IntrAzone Routing Protocol
                IERP            IntErzone Routing Protocol
                BRP             Bordercast Resolution Protocol

      Additional Protocols
      --------------------
                NDM             Neighbor Discovery/Maintenance Protocol

   Note that it is assumed that basic neighbor discovery operation is
   implemented by the MAC/link-layer protocols. Thus the NDM protocol
   remains unspecified here.

4. Implementation Details

4.1 The IntrAzone Routing Protocol (IARP)  -- Link State Implementation

    The Intrazone Routing Protocol (IARP) is responsible for proactively
    maintaining routes within each node's routing zone.  Many traditional
    proactive protocols can be modified to serve as an IARP by limiting
    the range of route updates to the boundary of (extended) routing
    zones.

    In this draft, we detail implementations for a Link-State IARP.
    Nodes compute intrazone routes based on the link state (neighbor
    connectivity) of each extended routing zone node.  A node may receive
    link state updates either from an IARP link state packet or from an
    interrupt generated by its Neighbor Discovery / Maintenance (NDM)
    process*.  Link states are maintained in a Link State Table.  When
    all pending link state updates have been received (full link state
    updates may contain multiple links and span multiple packets), the
    routing table is recomputed, using a minimum spanning tree algorithm.
    The Link State Table is then updated to remove links that lie outside
    of the extended routing zone.  All new link state updates for non-
    peripheral routing zone nodes are forwarded to all neighbors.  In
    addition, if a new neighbor is discovered, the new neighbor is sent
    the FULL link states of all non-peripheral routing zone nodes.

    In addition to computing routes to all nodes in the extended routing
    zone, the IARP also uses the extended zone topology to construct
    the bordercast tree of each routing zone member.  A node's bordercast
    tree can be constructed by first determining the minimum spanning
    tree for its routing zone, and then pruning interior node leaves.
    For each bordercast tree, the node should record if it's a member,
    and if so, its downstream neighbors and peripheral nodes.

   * the IARP relies on the services of a separate protocol (referred to
     here as the Neighbor Discovery/Maintenance Protocol) to provide
     current information about a node's neighbors.  At the least, this
     information must include the IP addresses of all the neighbors.
     The IARP can be readily configured to support supplemental link
     quality metrics.
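
    The bordercast tree construction described above can be sketched in
    Python as follows (illustrative only; the normative behavior is
    given by the pseudocode in part E).  A spanning tree of a member's
    routing zone is built breadth-first, and only branches that lead to
    peripheral nodes are retained.

       from collections import deque

       def bordercast_tree(root, radius, links):
           """Return {parent: set of children} spanning root's peripheral nodes.

           'links' is the link state topology known from the extended
           routing zone."""
           dist, parent = {root: 0}, {}
           queue = deque([root])
           while queue:
               node = queue.popleft()
               if dist[node] == radius:
                   continue
               for neighbor in links.get(node, []):
                   if neighbor not in dist:
                       dist[neighbor] = dist[node] + 1
                       parent[neighbor] = node
                       queue.append(neighbor)
           peripheral = [n for n, d in dist.items() if d == radius]
           tree = {}
           # Walk each peripheral node back to the root, keeping only the
           # branches that reach peripheral nodes (interior leaves are pruned).
           for leaf in peripheral:
               child = leaf
               while child != root:
                   tree.setdefault(parent[child], set()).add(child)
                   child = parent[child]
           return tree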

    A.  Packet Format

                       1                   2                   3
   0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |                      Link Source Address                      |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |                    Link Destination Address                   |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |                     Packet Source Address                     |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |         Link State ID         |  Zone Radius  |     Flags     |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |            Link Destination Subnet Mask  (Optional)           |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ --+--
   |   RESERVED    |  Metric Type  |          Metric Value         |   |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+   |
   |   RESERVED    |  Metric Type  |          Metric Value         |   |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+   |
                                  | |                                 Link
                                 \| |/                              Metrics
                                  \ /                                  |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+   |
   |   RESERVED    |  Metric Type  |          Metric Value         |   |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ --+--

        Field Description:

        * Link Source Address          (node_id)           (32 bits)
            IP address of the reported link's source node.

        * Link Destination Address     (node_id)           (32 bits)
            IP address of the reported link's destination node.

        * Packet Source Address        (node_id)           (32 bits)
            IP address of the node that sent this packet.
            (Used to account for any outstanding link state information)

        * Link State ID                (unsigned int)      (16 bits)
            Sequence number used to track the link state history of
            the Link Source node.

        * Zone Radius                  (unsigned int)      (8 bits)
            Routing zone radius of the link's source node.  Determines
            the extent that link state information propagates.

        * Flags                        (boolean)           (8 bits)
             Flags(0)   Full Link Information
                 Indicates whether this link state information was sent
                 as:
                          (0) a PARTIAL link state update
                                      OR
                          (1) part of a FULL update  of all the
                              link source's current links
             Flags(1)   Current Link State Update
                 Indicates whether more link state information is pending
                 for THIS link state update {link_source,state_id}
                          (0) INCOMPLETE
                          (1) COMPLETE

             Flags(2)   All Link State Updates
                 Indicates whether more link state updates are pending
                          (0) INCOMPLETE
                          (1) COMPLETE

             Flags(3)   Link Destination Subnet Mask
                          (0) INCLUDED
                          (1) NOT INCLUDED

             Flags(4)   Link Status
                          (0) DOWN
                          (1) UP

             Flags(5..7)  RESERVED for future use.

        * Node/Link Metrics            (metric)            (X * 32 bits)
                This section of the packet is used to report the quality
                of this link (or link source node).

                * Metric Type          (char)              (8 bits)
                      Type of metric being reported
                      (based on pre-defined metric types)

                * Metric Value         (unsigned int)      (16 bits)
                      Value of node / link metric specified by Metric Type
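
    For illustration, the fixed 16-byte portion of this packet could be
    assembled with Python's struct module as follows (a sketch only;
    the bit ordering within the Flags octet is an assumption here, with
    Flags(0) taken as the most significant bit).

        import socket
        import struct

        # Link Source, Link Destination, Packet Source (32 bits each),
        # Link State ID (16 bits), Zone Radius (8 bits), Flags (8 bits).
        IARP_FIXED = struct.Struct("!4s4s4sHBB")

        def pack_iarp_fixed(link_src, link_dst, pkt_src, state_id, radius, flags):
            return IARP_FIXED.pack(socket.inet_aton(link_src),
                                   socket.inet_aton(link_dst),
                                   socket.inet_aton(pkt_src),
                                   state_id, radius, flags)

        def pack_metric(metric_type, metric_value):
            # RESERVED (8 bits) | Metric Type (8 bits) | Metric Value (16 bits)
            return struct.pack("!BBH", 0, metric_type, metric_value)

        # Example: node 192.0.2.1 (zone radius 2) reports an UP link to
        # 192.0.2.2; Flags(4) = UP maps to 0x08 under the assumed ordering.
        packet = pack_iarp_fixed("192.0.2.1", "192.0.2.2", "192.0.2.1",
                                 state_id=7, radius=2, flags=0x08) + pack_metric(1, 10)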

    B.  Data Structures

    B.1  Routing Table

    +-----------------------|--------------------------------+-----------+
    |   Dest    |  Subnet   |     Routes     |     Route     | Intrazone |
    |   Addr    |   Mask    |                |    Metrics    |           |
    | (node_id) | (node_id) | (node_id list) | (metric list) | (boolean) |
    |-----------+-----------|----------------+---------------+-----------+
    |           |           |                |               |           |
    |-----------+-----------|----------------+---------------+-----------+
    |           |           |                |               |           |
    |-----------+-----------|----------------+---------------+-----------+
    |           |           |                |               |           |
    +-----------------------|--------------------------------------------+
     B.2  Link State Table

     +-----------|--------+-------+------------------+
     |    Link   |  Zone  | Link  |   Link State     |
     |   Source  | Radius | State |   Information #  |
     |    Addr   |        |  ID   |                  |
     | (node_id) | (int)  | (int) | (ls_info list)## |
     +-----------|--------+-------+------------------|
     |           |        |       |                  |
     |           |        |-------+------------------|
     |           |        |       |                  |
     |           |        |-------+------------------|
     |           |        |       |                  |
     |-----------|--------+-------+------------------|
     |           |        |       |                  |
     |-----------|--------+-------+------------------|
     |           |        |       |                  |
     |           |        |-------+------------------|
     |           |        |       |                  |
     +-----------|-----------------------------------+

      #  The first link state entry for each link source
         contains the complete link state information
         corresponding to the Link State ID.
         Subsequent entries contain only changes of a single
         link state.

         A FULL link state entry of link state #k and
         a PARTIAL entry of link state #(k+1) can be
         merged into a FULL link state entry of
         link state #(k+1)

      ## detail of (ls_info) data type
          +---+---|---|---+
          |   |   |||||   |
          +---+---|---|---+
           (a) (b) (c) (d)

           (a)  Link Destination Address
           (b)  Link Destination Subnet Mask
           (c)  Link Metrics
           (d)  Forward Flag ( indicates if link state information
                               has been forwarded)
   B.3  Bordercast Tree Table

        +-----------|-----------+----------------+----------------+
        |           |           |   Downstream   |   Downstream   |
        |   Node    |   Member  |   Neighbors    |   Peripheral   |
        |           |           |                |     Nodes      |
        | (node_id) | (boolean) | (node_id list) | (node_id list) |
        +-----------|-----------+----------------+----------------|
        |           |           |                |                |
        +-----------|-----------+----------------+----------------|
        |           |           |                |                |
        +-----------|-----------+----------------+----------------|
        |           |           |                |                |
        +-----------|---------------------------------------------+

   B.4  Pending Link State List
        (list of neighbors in the process of sending link state updates)

        +----------+      +----------+        +----------+
        |          | ---> |          |  . . . |          |  (node_id list)
        +----------+      +----------+        +----------+

   B.5  New Neighbors List
        (list of new neighbors to receive a copy Link State Table,
         upon completion of updates)

        +----------+      +----------+        +----------+
        |          | ---> |          |  . . . |          |  (node_id list)
        +----------+      +----------+        +----------+

   B.6  Former Routing Zone Nodes
        (list of routing zone nodes, prior to routing table updates)

        +----------+      +----------+        +----------+
        |          | ---> |          |  . . . |          |  (node_id list)
        +----------+      +----------+        +----------+

   C.  Interfaces

        C.1  Higher Layer (N/A)
        C.2  Lower Layer (IP)
            C.2.1  Send()       (specified in IP standard)
            C.2.2  Deliver()    (specified in IP standard)
        C.3  NDM
            C.3.1  Neighbor_Lost(node,mask,metric)
                Used by the NDM to notify the IARP that direct contact
                has been lost with "node" (with subnet mask "mask").
                Any node/link quality measurements are reported in
                the "metric" list.
            C.3.2  Neighbor_Found(node,mask,metric)
                Used by the NDM to notify the IARP that direct contact
                has been confirmed with "node" (with subnet mask "mask").
                Any node/link quality measurements are reported in
                the "metric" list.
        C.4  IERP
            C.4.1  Zone_Node_Lost(node)
                Used by IARP to notify the IERP that a node no longer
                exists within the routing zone.

   D.  State Machine

        The IARP protocol consists of only one state (IDLE). Therefore,
        no state transitions need to be specified. The IARP immediately
        acts upon an event and then returns back to the IDLE state.

        Notes:  1) X is used as a label for the node running this state
                   machine.

        D.1
            Event:   An IARP link state packet is received.

            Action:  The link state update is recorded in the Link State
                     Table.  If more link state updates are pending,
                     then the IARP returns to an idle state.  Otherwise,
                     X rebuilds its routing table and the bordercast trees
                     of its routing zone nodes.  Links that lie outside of
                     the routing zone are removed from the Link State
                     Table.  X sends its neighbors all previously
                     unforwarded link state updates from its NON-
                     peripheral nodes.  Finally, all neighbors in the New
                     Neighbor List are sent the link states of X's NON-
                     peripheral nodes, and the New Neighbor List is
                     cleared.

        D.2
            Event:   A "Neighbor Found" interrupt is received, indicating
                     the discovery of a neighbor N.

            Action:  X's new link to N is recorded in the Link State Table
                     and N is added to the New Neighbors List.  If more
                     link state updates are pending, then the IARP returns
                     to an idle state.  Otherwise, X rebuilds its routing
                     table and the bordercast trees of its routing zone
                     nodes.  Links that lie outside of the routing zone
                     are removed from the Link State Table.  X sends its
                     neighbors all previously unforwarded link state
                     updates from its NON-peripheral nodes.  Finally, all
                     neighbors in the New Neighbor List are sent the link
                     states of X's NON-peripheral nodes, and the New
                     Neighbor List is cleared.

        D.3
            Event:   A "Neighbor Lost" interrupt is received, indicating
                     that node N is no longer a neighbor of X.

            Action:  The lost link to neighbor N is removed from the Link
                     State Table and N is removed from the New Neighbor
                     List.  If more link state updates are pending, then
                     the IARP returns to an idle state.  Otherwise, X
                     rebuilds its routing table and the bordercast trees
                     of its routing zone nodes.  Links that lie outside of
                     the routing zone are removed from the Link State
                     Table.  X sends its neighbors all previously
                     unforwarded link state updates from its NON-
                     peripheral nodes.  Finally, all neighbors in the New
                     Neighbor List are sent the link states of X's NON-
                     peripheral nodes, and the New Neighbor List is
                     cleared.

    E.  Pseudocode Implementation

        We define two complementary operations on packets:
        extract(packet) and load(packet)

            extract (packet)
                extracts the fields of the IARP packet to the following
                variables: {link_source, link_dest, pk_source, state_id,
                            radius, flags, mask, link_metric}
                             * full_link_state -> flag(0)
                             * current_update  -> flag(1)
                             * all_updates     -> flag(2)
                             * mask            -> flag(3)
                             * link_status     -> flag(4)

            load (packet)
                loads the values of the aforementioned variables into
                the fields of the IARP packet.

    E.1  Update Intrazone Routing Table
         args:  (packet, intrpt, link_dest, mask, link_metric)

         // IARP may be triggered by either a link state packet update
         // or an interrupt from the NDP

         if (packet)
         {
             extract(packet);
             my_link_changed = FALSE;
         }
         else
         {
            link_source = MY_ID;
            pk_source   = MY_ID;
            state_id    = MY_LINK_STATE_ID;
            radius      = MY_ROUTING_ZONE_RADIUS;

            full_link_state = 0;
            current_update  = COMPLETE;
            all_updates     = COMPLETE;

            if (type(intrpt) == "Neighbor Found")
                link_status = UP;
            else
               link_status = DOWN;

            my_link_changed = TRUE;
        }

        // Record link state information in Link State Table
        add(Pending_Link_State_List, pk_source);
        status = add(Link_State_Table, link_source, link_dest, mask,
                     radius, link_metric, state_id, flags);

        // Determine if received link state information is new
        if (status == NEW_LINK_INFO)
        {
           cum_status = UPDATE_IN_PROGRESS;
           if(my_link_changed)
           {
              if (link_status == UP)
                 add(New_Neighbor_List, link_dest);
              else
                 remove(New_Neighbor_List, link_dest);
              MY_LINK_STATE_ID++;
           }
        }

        // Release hold on pk_source when all link state information
        // from pk_source is received
        if(all_updates == COMPLETE)
            remove(Pending_Link_State_List, pk_source);
        // Once all pending link state information has been received,
        // recompute Routing Table
        if(empty(Pending_Link_State_List) (AND)
                 cum_status == UPDATE_IN_PROGRESS)
        {
            // Record former routing zone nodes in order to determine
            // (later) which nodes have left the routing zone
            for((EACH) node (BELONGING TO) Routing_Table)
            {
               if(Routing_Table[node].Intrazone)
                   add(Former_Routing_Zone_Nodes, node);
            }

            // The Link State Table should not contain links that lie
            // beyond the (extended) routing zone.
            // The Routing Table is recomputed until all outlying links
            // have been removed from the Routing Table
            rebuild = TRUE;
            while (rebuild)
            {
                 for((EACH) node (BELONGING TO) Routing_Table)
                 {
                     if(Routing_Table[node].Intrazone)
                         delete(Routing_Table[node]);
                 }

                 for((EACH) node (BELONGING TO) Link_State_Table)
                 {
                     Routing_Table[node].route =
                           compute_route_from_links(Link_State_Table,node);
                     Routing_Table[node].Intrazone = TRUE;
                 }
                 rebuild = update(Link_State_Table);
            }

            // After recomputing the Routing Table, the bordercast trees
            // of the interior routing zone nodes are recomputed
            for ((EACH) node (BELONGING TO) Routing_Table)
            {
                if(Routing_Table[node].Intrazone)
                {
                    Bordercast_Tree[node] =
                          construct_bordercast_tree(Link_State_Table,node);
                }
            }

            // Broadcast all new link state information that lies
            // inside the extended routing zone
            broadcast_link_state_updates(Link_State_Table);

            // Send all link state information that lies inside the
            // extended routing zone to each new neighbor
            for ((EACH) node (BELONGING TO) New_Neighbor_List)
               send_link_state_table(Link_State_Table, node);
            clear(New_Neighbor_List);

            cum_status = UPDATE_COMPLETE;

            // Alert IERP about any lost routing zone nodes
            for ((EACH) node (BELONGING TO) Former_Routing_Zone_Nodes)
            {
               if(node !(BELONGS TO) Routing_Table)
                  Routing_Zone_Node_Lost(IERP, node);
            }
            clear(Former_Routing_Zone_Nodes);
         }
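
   The route computation invoked above (compute_route_from_links) is not
   specified by this draft; any shortest path algorithm run over the Link
   State Table may be used.  The following non-normative Python sketch
   illustrates one possibility (Dijkstra's algorithm); the dictionary
   layout of the Link State Table and the per-destination calling
   convention are illustrative assumptions only.

       # Non-normative sketch: shortest path computation over a Link State
       # Table represented as {node: [(neighbor, link_metric), ...]}.
       import heapq

       def compute_route_from_links(link_state_table, source, dest):
           # Returns the list of hops from source to dest (excluding
           # source), or None if dest is unreachable.
           dist, prev = {source: 0}, {}
           heap = [(0, source)]
           while heap:
               d, node = heapq.heappop(heap)
               if node == dest:
                   break
               if d > dist.get(node, float("inf")):
                   continue                      # stale heap entry
               for neighbor, metric in link_state_table.get(node, []):
                   nd = d + metric
                   if nd < dist.get(neighbor, float("inf")):
                       dist[neighbor] = nd
                       prev[neighbor] = node
                       heapq.heappush(heap, (nd, neighbor))
           if dest != source and dest not in prev:
               return None
           route, node = [], dest
           while node != source:
               route.append(node)
               node = prev[node]
           return list(reversed(route))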

4.2 IntErzone Routing Protocol (IERP)

   The Interzone Routing Protocol (IERP) is responsible for discovering
   and maintaining routes to nodes which are beyond a node's routing
   zone.  These routes are acquired, on demand, by efficiently probing the
   network with bordercasted route queries.*

   This version of the IERP extends the basic query / reply mechanism
   described in Section 3.  A node issues a ROUTE_QUERY when a route is
   needed for a destination outside of its routing zone.  A ROUTE_QUERY
   for a new route may probe the entire network.  In contrast, a route
   repair will only trigger a limited depth search.  A ROUTE_QUERY is
   guided from a node outward to its peripheral nodes, using the BRP's
   bordercast service (described in greater detail in Section 4.3).
   Because future IERP implementations may not be "routing zone aware",
   the BRP delivers the query up to the IERP at every hop.  When the
   ROUTE_QUERY is received at the IERP, the previous hop node address is
   recorded in the Routing Table and Temporary Query Cache.  The previous
   hop node address is then overwritten in the packet by the current
   node's address.  If the ROUTE_QUERY destination lies within this node's
   routing zone, then a ROUTE_REPLY is sent to the query source and a
   QUERY_EXTENSION is forwarded to the query destination.  Otherwise, the
   ROUTE_QUERY is sent back down to the BRP.

   The QUERY_EXTENSION performs route accumulation in a distributed manner,
   similar to the ROUTE_QUERY.  When the QUERY_EXTENSION arrives at the
   destination, a next hop route is formed back to the query source.

   As the ROUTE_REPLY proceeds back to the query source, each node selects a
   previous hop from its Temporary Query Cache (i.e. based on diversity
   injection criteria) and appends the address to the packet.  When the
   ROUTE_REPLY arrives at the query source, it contains a source route
   to the destination.
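
   The following non-normative Python sketch summarizes the per-hop
   ROUTE_QUERY processing described above.  The dictionary-based packet
   and cache representations are illustrative assumptions; the normative
   behavior is given by the state machine and pseudocode later in this
   section.

       # Non-normative sketch of per-hop ROUTE_QUERY handling.
       def handle_route_query(packet, my_id, routing_zone, temp_query_cache):
           key = (packet["source"], packet["query_id"])
           # Record the previous hop, then overwrite it with this node's
           # own address (distributed route accumulation).
           temp_query_cache.setdefault(key, []).append(packet["prev_hop"])
           packet["prev_hop"] = my_id
           if packet["dest"] in routing_zone:
               # Destination found: reply to the source and extend the
               # query to the destination itself.
               return "SEND_REPLY_AND_EXTENSION"
           return "BORDERCAST_ONWARD"     # hand the query back to the BRP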

   This IERP version offers some additional features.  Multiple
   destination route discoveries (for any OR all destinations) can be
   aggregated into a single ROUTE_QUERY.  In addition, QOS routing is
   supported through the collection of various route quality metrics.

   * The bordercast service is provided by the Bordercast Resolution
     Protocol (BRP).

                       +-----+                    +-----+       +-----+
                       |  S  |      . . . .       |  Y  | . . . |  D  |
                       +-----+                    +-----+       +-----+

  (1)  search for                   ROUTE_QUERY
       route               |-------------------------->

  (2)  establish                    ROUTE_REPLY            EXTENSION
       connectivity        <--------------------------|
                                                      |=============>

               Sequence of the IERP Route Discovery Stages

    A.  Packet Format
                        1                   2                   3
    0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |     Type      |      TTL      |   Hop Count   |     Flags     |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |        Current Hop Ptr        | Num Dests = D | Num Nodes = N |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |            Query ID           |           Reserved            |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |                 Query/Route Source Address                    |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ --+--
    |              Query/Route Destination (1) Address              |   |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+   |
    |              Query/Route Destination (2) Address              |   |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+   |
                                   | |                                  |
                                   | |                                Dests
                                  \| |/                                 |
                                   \ /                                  |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+   |
    |              Query/Route Destination (D) Address              |   |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ --+--
    |                 Intermediate Node (1) Address                 |   |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+   |
    |                 Intermediate Node (2) Address                 |   |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+   |
                                   | |                                  |
                                   | |                           route(1:N)
                                  \| |/                                 |
                                   \ /                                  |
     +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+   |
    |                 Intermediate Node (N) Address                 |   |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ --+--
    |     Node      | Metric Type |D|          Metric Value         |   |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+   |
    |     Node      | Metric Type |D|          Metric Value         |   |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+   |
                                   | |                                Node/
                                   | |                              Segment
                                  \| |/                             Metrics
                                   \ /                                  |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+   |
    |     Node      | Metric Type |D|         Metric Value          |   |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ --+--
        Field Description:

        * Type                         (char)              (8 bits)
                Identifies the type of IERP packet.  The current version
                 of IERP contains three packet types:

                ROUTE_QUERY:
                        request for an Interzone source route to the
                        destination specified by the Destination
                        IP Address

                QUERY_EXTENSION:
                        extension of a QUERY packet sent from the
                        node that discovers the Destination to the
                        Destination itself.

                ROUTE_REPLY:
                        response to a ROUTE_QUERY packet, sent from the
                        node that discovers the Destination back to the
                        Source.  At a minimum, the ROUTE_REPLY contains
                        next-hop route information to the Destination.
                        (In general, it returns the source route up to the
                        first node which has cached routing information.
                        If no nodes have cached routing information, then
                        the complete source route is returned.)

        * TTL (Time to Live)           (unsigned int)      (8 bits)
                Number of hops that a route query may continue to
                propagate.  This field allows a querying node to configure
                the depth of a route search, in order to control the
                amount of IERP traffic generated.

        * Hop Count                    (unsigned int)      (8 bits)
                Hop count from source to current node (ROUTE_QUERY,
                QUERY_EXTENSION) or hop count from source to route
                destination (ROUTE_REPLY).

        * Flags                        (boolean)           (8 bits)
                 Flags(0)        ANY destination (0) / ALL destination (1)
                        In the case of multiple destinations, specifies
                        whether the ROUTE_QUERY should return routes for
                        ANY specified destination, or ALL specified
                        destinations.

                        In the case of a single MULTICAST IP address,
                        specifies whether the ROUTE_QUERY should return
                        routes to ANY member of the multicast group, or
                        ALL members of the multicast group.

                 Flags(1..7)     RESERVED for future use

        * Current Hop Pointer          (unsigned int)      (16 bits)
                INDEX of the route (see below) corresponding to the route
                node that has most recently received this packet.  (The
                first node in the route has an index of 1.)

        * Number of Destinations = D   (unsigned int)      (8 bits)
                Number of destinations included in the route query/reply
                packet.  This allows multiple route discoveries to be
                consolidated into a common route query process.  The
                multiple query destinations feature is particularly useful
                for routing to a multicast group with known members, or
                for locating downstream nodes during the route repair
                phase.

        * Number of Nodes = N          (unsigned int)      (8 bits)
                Number of node IP addresses appearing in the route
                specification.

        * Query ID                     (unsigned int)      (16 bits)
                Sequence number which, along with the Query Source Address
                (see below) uniquely identifies any ROUTE_QUERY in the
                network.

        * Query/Route Source Address   (node_id)           (32 bits)
                IP address of the node that initiates the ROUTE_QUERY.
                In subsequent stages, this corresponds to the IP address
                of the discovered route's source node.

        * Query/Route Destination Addresses  (node_id)     (D * 32 bits)
                List of IP addresses to be located during the ROUTE_QUERY
                phase.  (Either ANY or ALL addresses, depending on the
                setting of Flags(0))  In subsequent stages, this field
                contains the IP address of the discovered route's (single)
                destination node.

        * Route                        (node_id)           (N * 32 bits)
                Variable length field that contains the recorded IP
                addresses of nodes along the path traveled by this
                ROUTE_QUERY packet from the Query Source.  In subsequent
                stages (after a route to a Query Destination has been
                discovered), this set of IP addresses provides a partial
                specification of the route between the Route Source and
                Route Destination.

        * Node/Segment Metrics         (metric)            (X * 32 bits)
                This optional section of the packet can be used to
                record a variety of node and segment quality measurements.
                (In this context, a segment refers to the communication
                path between consecutive nodes in the packet's accumulated
                route.  In the case of neighboring nodes, a segment is
                equivalent to a (one-hop) link).

                * Node                 (char)              (8 bits)
                      Index of the route corresponding to a particular
                      node.

                * Metric Type          (char)              (7 bits)
                      Type of metric being recorded
                      (based on pre-defined metric types)

                * Direction Flag       (boolean)           (1 bit)
                      For segment metrics, specifies either the:
                      upstream segment   (toward Query Source)       (0)
                      downstream segment (toward Query Dest)         (1)

                * Metric Value         (unsigned int)      (16 bits)
                      Value of node / segment metric specified by
                      Metric Type
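
   For illustration only, the fixed 12-octet portion of the header above
   can be serialized as follows (network byte order).  This Python sketch
   is non-normative; field names are taken from the format description.

       # Non-normative sketch: pack/unpack the fixed IERP header fields.
       import struct

       IERP_FIXED_HDR = "!BBBBHBBHH"   # Type, TTL, Hop Count, Flags,
                                       # Current Hop Ptr, Num Dests,
                                       # Num Nodes, Query ID, Reserved

       def pack_ierp_header(ptype, ttl, hops, flags, hop_ptr,
                            num_dests, num_nodes, query_id):
           return struct.pack(IERP_FIXED_HDR, ptype, ttl, hops, flags,
                              hop_ptr, num_dests, num_nodes, query_id, 0)

       def unpack_ierp_header(data):
           names = ("type", "ttl", "hop_count", "flags", "current_hop_ptr",
                    "num_dests", "num_nodes", "query_id", "reserved")
           return dict(zip(names, struct.unpack(IERP_FIXED_HDR, data[:12])))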

    B.  Data Structures
        B.1  Routing Table   (See IARP Description)

        B.2  Temporary Query Cache

        +------------------------------------------------------+  ...
        | Query   |   Query_ID   |   Prev_hop   |  Hop Count   |
        | Source  |              |              |              |
        |(node_id)|(unsigned int)|(node_id list)|(unsigned int)|
        |---------+--------------|--------------+--------------+  ...
        |         |              |              |              |
        |---------+--------------|--------------+--------------+  ...
        |         |              |              |              |
        |---------+--------------|--------------|--------------+  ...
        |         |              |              |              |
        +------------------------------------------------------+  ...

          ... +--------------+
              |  Injection   |
              |   Counter    |
              |(unsigned int)|
          ... +--------------|
              |              |
          ... +--------------|
              |              |
          ... +--------------|
              |              |
          ... +--------------+
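
   A non-normative way to realize this cache is a dictionary keyed by
   (Query Source, Query ID), since these two fields together identify a
   query in the network.  The Python sketch below is illustrative only;
   the field names follow the table above.

       # Non-normative sketch of the Temporary Query Cache.
       temp_query_cache = {}

       def cache_query(query_source, query_id, prev_hops, hop_count):
           entry = temp_query_cache.setdefault((query_source, query_id), {
               "prev_hops": [],         # candidate return paths
               "hop_count": hop_count,
               "injection_counter": 0,  # times a cached path was reused
           })
           entry["prev_hops"].append(prev_hops)
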
    C.  Interfaces

        C.1  Higher Layer (N/A)
        C.2  Lower Layer (BRP)
            C.2.1   Send(packet,source,query_ID,BRP_cache_ID)
                Used by the IERP to request packet transmission
                through the BRP.
            C.2.2  Deliver(packet,BRP_cache_ID)
                Used by the BRP to deliver data packet to the IERP.
        C.3  Lower Layer (IP)
            C.3.1  Send()       (specified in IP standard)
            C.3.2  Deliver()    (specified in IP standard)
        C.4  IARP
            C.4.1  Routing_Zone_Node_Lost(node)
                Used by the IARP to notify the IERP that a node has left
                the routing zone
        C.5  ICMP
            C.5.1  Host_Unreachable()    (specified in ICMP standard)
            C.5.2  Source_Route_Failed() (specified in ICMP standard)

    D.  State Machine

        The IERP protocol consists of only one state (IDLE). Therefore,
        no state transitions need to be specified. The IERP immediately
        acts upon an event and then returns back to the IDLE state.

        Note:  1) X is used as a label for the node running this state
                  machine.

        D.1
            Event:   A routing zone node is reported lost by the IARP,
                     indicating that a node H has moved beyond the
                     routing zone.

            Action:  Remove every route from the Interzone Routing Table
                     which contains H.  If any routes containing H are
                     found AND Proactive Route Repair is enabled, then a
                     route "Repair" (limited depth route discovery) to H
                     is initiated.

        D.2
            Event:   ICMP issues a "Host_Unreachable" message, triggering
                     a route discovery for unreachable host H.

            Action:  A FULL depth route discovery to H is initiated.
                     X's query_id is incremented and assigned to a new
                     ROUTE_QUERY packet.  The route is initialized with
                     the IP addresses of the route's source and
                     destination (X and H).  Finally, the ROUTE_QUERY
                     packet is sent to the BRP to be bordercasted.

        D.3
            Event:   The IERP or ICMP (via source_route_failed()) triggers
                     a "Repair" message, indicating that the next hop
                     H specified in a source route cannot be found locally.

            Action:  A limited depth route discovery to H is initiated.
                     X's query_id is incremented and assigned to a new
                     ROUTE_QUERY packet.  The route is initialized with
                     the IP addresses of the route's source and
                     destination (X and H).  Finally, the ROUTE_QUERY
                     packet is sent to the BRP to be bordercasted.

        D.4
            Event:   A ROUTE_QUERY packet is received with a destination
                     that appears within X's routing zone.

            Action:  The packet's accumulated route information (for the
                     source) is recorded in X's Routing Table and Temporary
                     Query Cache.  The accumulated route is replaced by
                     X's address and any accumulated route metrics are
                     updated and compressed.

                     X copies the ROUTE_QUERY packet's contents to a
                     ROUTE_REPLY.  A loop-free return path is selected
                     from the Temporary Query Cache (i.e. based on
                     "Diversity Injection").  X also forwards a
                     QUERY_EXTENSION to the discovered query destination.
                     the ROUTE_QUERY packet has not traveled too deep into
                     the network (i.e. TTL > 0) AND there are still more
                     destinations to be discovered, then the ROUTE_QUERY
                     is sent down to the BRP to be relayed outward.

        D.5
            Event:   A ROUTE_QUERY packet is received with a destination
                     that DOES NOT appear within X's routing zone.

            Action:  The packet's accumulated route information (for the
                     source) is recorded in X's Routing Table and
                     Temporary Query Cache.  The accumulated route is
                     replaced by X's address and any accumulated route
                     metrics are updated and compressed.  The ROUTE_QUERY
                     packet is sent down to the BRP to be relayed outward.

        D.6
            Event:   A QUERY_EXTENSION packet is received.

            Action:  The packet's accumulated route information (for the
                     source) is recorded in X's Routing Table.  The
                     accumulated route is replaced by X's address and any
                     accumulated route metrics are updated and compressed.
                     If X is not the query destination, then X forwards the
                     message toward the destination (directly through IP).

        D.7
            Event:   A ROUTE_REPLY packet is received.

            Action:  The packet's accumulated route information (for the
                     destination) is recorded in X's Routing Table.  X
                     adds its address to the accumulated route and adds
                     metrics for the downstream (toward the destination)
                     link to the accumulated metrics.  If X is not the
                     query source, then X forwards the message toward the
                     source (directly through IP), along a path selected
                     from the Temporary Query Cache (i.e. based on
                     Diversity Injection).

    E.  Pseudocode Implementation

        We define two complementary operations on packets:
        extract(packet) and load(packet)

            extract(packet)
                extracts the fields of the IERP packet to the following
                variables: {type, TTL, hop_count, flags, current_hop_ptr,
                            query_id, reply_id, source, reply_node,
                            dests, route, metric}

            load(packet)
                loads the values of the aforementioned variables into
                the fields of the IERP packet.
                load(packet) automatically computes the values of:
                              num_dests = |dests|
                              num_nodes = |route|

        E.1  Routing Zone Node Lost
            args: (lost_node)

            // Determine if lost_node belongs to any stored routes.
            // If so, the route should be removed from the
            // Routing Table and a route repair triggered
            // (if proactive route repairs are enabled)
            repair_link = FALSE;
            for((EACH) node (BELONGING TO) Routing_Table)
            {
              for ((EACH) route (BELONGING TO) Routing_Table[node])
              {
                if (lost_node (BELONGS TO) route)
                {
                  if (PROACTIVE_REPAIRS_ENABLED)
                  {
                      removal_timer = ROUTE_QUERY_TIMEOUT;
                      repair_link = TRUE;
                  }
                  else
                  {
                      removal_timer = 0;
                  }
                  schedule(remove(Routing_Table[node,route]), removal_timer);
                }
              }
            }
            if(repair_link)
            {
                dests = lost_node;
                Initiate_Route_Discovery(IERP, {dests, ALL, REPAIR});
            }

        E.2  Initiate Route Discovery
             args:  (dests, flags, type)

             // Route query is initialized and sent down to the BRP
             // to be bordercasted

             if (type == REPAIR)
                TTL = MAX_REPAIR_HOPS;
             else
                TTL = MAX_FULL_QUERY_HOPS;

             source    = MY_ID;
             query_id  = MY_QUERY_ID++;
             type      = ROUTE_QUERY;
             hop_count = 0;
             route     = NULL;
             current_hop_ptr = 0;

             load (packet);
             send ((packet, source, query_id, NULL), BRP);

        E.3  Receive IERP Packet
             args:  (packet, BRP_cache_ID)

             extract(packet);
             switch(type)
             {
                case:  ROUTE_QUERY
                    // Update Time-to-Live and hop count
                    TTL--;
                    hop_count++;

                    // Extract route (and metrics) from packet and record
                    // them in the Routing Table and Temporary Query Cache
                    prev_hops = route(1 : current_hop_ptr);
                    metrics = compress(metrics,
                                  get_metrics(prev_hops(|prev_hops|),MY_ID));
                    add(Routing_Table[source],(reverse(prev_hops),
                                                  reverse(metrics),FALSE));
                    add(Temp_Query_Cache[source,query_id],
                                          (reverse(prev_hops),hop_count));

                    // Determine if routes are known for each destination
                    find_all  =  flags(0);
                    find_any  = !flags(0);
                    found_any = FALSE;
                    found_all = TRUE;

                    // Save current values for dests and metrics, since
                    // we may need them later
                    DESTS     = dests;
                    METRICS   = metrics;
                    for((EACH) dest (BELONGING TO) DESTS)
                    {
                        found_this = FALSE;
                        if (Routing_Table[dest].Intrazone)
                        {
                            // A route to this destination is known
                            found_this      = TRUE;
                            found_any       = TRUE;
                            dests           = dest;
                            remove(DESTS, dest);

                            // Forward query directly to destination, so
                            // that the destination will have some routing
                            // information back to the source
                            type            = QUERY_EXTENSION;
                            prev_hops       = NULL;
                            next_hops       = Routing_Table[dest].route;
                            route           = {prev_hops,MY_ID,next_hops};
                            current_hop_ptr = |prev_hops|+1;
                            load(packet);
                            send((packet,next_hops(1)),IP);
                            // Send reply back to the source
                            type            = ROUTE_REPLY;
                            reply_node      = MY_ID;
                            reply_id        = MY_REPLY_ID++;
                            next_hops       = Routing_Table[dest].route;
                            prev_hops   = inject_loopfree_path(next_hops,
                                        Temp_Query_Cache[source,query_id]);
                            metrics         = NULL;
                            route           = {prev_hops,MY_ID,next_hops};
                            current_hop_ptr = |prev_hops|+1;
                            load(packet);
                            send((packet,prev_hops(|prev_hops|)),IP);
                        }
                        found_all = found_all && found_this;
                    }

                    // If more destinations need to be discovered then
                    // forward QUERY via BRP bordercasting
                    if(((find_all && !found_all)||(find_any && !found_any))
                        && TTL>0)
                    {
                        dests           = DESTS;
                        metrics         = METRICS;
                        prev_hops       = NULL;
                        next_hops       = NULL;
                        route           = {prev_hops, MY_ID, next_hops};
                        current_hop_ptr = |prev_hops|+1;
                        load(packet);
                        send((packet,BRP_cache_ID), BRP);
                    }
                break;

                case:  QUERY_EXTENSION
                    // Extract route (and metrics) from packet and record
                    // them in the Routing Table
                    prev_hops = route(1 : current_hop_ptr);
                    next_hops = route(current_hop_ptr+1 : |route|);
                    metrics   = compress(metrics,
                                  get_metrics(prev_hops(|prev_hops|),MY_ID));
                    add(Routing_Table[source],(reverse(prev_hops),
                                                   reverse(metrics),FALSE));
                    // Forward query extension until it reaches destination
                    if(MY_ID !(BELONGS TO) dests)
                    {
                        prev_hops = NULL;
                        if(|next_hops|==1)
                        {
                            next_hops = Routing_Table[dests].route;
                            route           = {prev_hops, MY_ID, next_hops};
                            current_hop_ptr = |prev_hops|+1;
                        }
                        else
                        {
                            current_hop_ptr++;
                        }
                        load(packet);
                        send((packet,next_hops(1)),IP);
                    }
                break;

                case:   ROUTE_REPLY
                    // Extract route (and metrics) from packet and record
                    // them in the Routing Table
                    prev_hops = route(1 : current_hop_ptr-1);
                    next_hops = route(current_hop_ptr : |route|);
                    metrics = {metrics,get_metrics(MY_ID,next_hops(1))};
                    add(Routing_Table[dests],
                                    (next_hops,compress(metrics),FALSE));

                    // Forward reply until it reaches source
                    if(MY_ID != source)
                    {
                        // Choose a loopfree return path from the Temporary
                        // Query Cache (eg. shortest path, shortest least
                        // "injected" path, etc.)

                        prev_hops  = inject_loopfree_path(next_hops,
                                       Temp_Query_Cache[source,query_id]);
                        route           = {prev_hops, MY_ID, next_hops};
                        current_hop_ptr = |prev_hops|+1;
                        load(packet);
                        send((packet,prev_hops(|prev_hops|)),IP);
                    }
                break;
            }

4.3  Bordercast Resolution Protocol (BRP)

   The BRP provides the "bordercasting" message delivery service, here
   used to forward IERP route queries.  Messages are relayed from a
   bordercasting node outward to its peripheral nodes, along a
   bordercast (multicast) tree.  Although the intended recipients of
   the bordercast are the peripheral nodes, the BRP may deliver the
   bordercast message up to the higher layer application (in this case,
   the IERP) at EVERY hop.  This may be necessary for applications that
   use bordercasting, but are generally not "routing zone aware".

   When a node receives a bordercasted message, it first needs to determine
   whether the message should be forwarded.  Clearly, the node should
   only forward the message if it belongs to the bordercast tree.
   Furthermore, if the node is the (peripheral node) bordercast recipient,
   it would re-bordercast the message, but only if it has not been a bordercast
   recipient before AND it does not lie inside the routing zone of a
   detected "previously bordercasted" node.

   If the node decides not to discard the message, it next determines which
   of the current bordercast tree's downstream branches will be dropped.
   A downstream branch will be dropped if each of the branch's downstream
   peripheral nodes either has had a bordercast message relayed to it by
   this node OR lies inside the routing zone of a detected previously
   bordercasted node.

   The BRP delivers the message up to the higher layer application (IERP).
   After some processing, the application may return an updated message
   back to the BRP.  The BRP will then relay the message on the selected
   outgoing links.
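
   The branch pruning rule described above can be summarized by the
   following non-normative Python sketch.  The helper covered(node, b),
   meaning "node lies inside the routing zone of previous bordercaster b",
   is an illustrative assumption; the normative procedure is given in the
   pseudocode at the end of this section.

       # Non-normative sketch of the downstream branch pruning decision.
       def keep_branch(branch_leaves, already_relayed, prev_bcasters, covered):
           # Drop the branch only if EVERY downstream peripheral node has
           # already been reached or is covered by a previous bordercaster.
           for leaf in branch_leaves:
               reached = (leaf in already_relayed or
                          any(covered(leaf, b) for b in prev_bcasters))
               if not reached:
                   return True    # an uncovered leaf remains: keep branch
           return False           # all leaves covered: drop branch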

   A.  Packet Format

    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |                    Message Source Address                     |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |                          Message ID                           |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |                    Prev Bordercast Address                    |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |                                                               |
    |            E N C A P S U L A T E D     P A C K E T            |
    |                                                               |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
        * Message Source Address       (node_id)           (32 bits)
                IP address of the node that initiates the message.

        * Message ID                     (unsigned int)    (16 bits)
                Sequence number which, along with the Message Source
                Address uniquely identifies any BRP message in the
                network.

        * Prev Bordercast Address      (node_id)           (32 bits)
                IP address of the most recent bordercasting node

        * Encapsulated Packet          (packet)

        *** note:  Within the context of the ZRP, the Message Source
                   Address and Message ID can assume the same values
                   as the corresponding "Query" fields in the
                   encapsulated IERP packet.
                   As such, they can be eliminated AS LONG AS:
                           (a)  The BRP has read access to the contents of
                                the encapsulated IERP packet.
                           (b)  The BRP knows the format of the IERP packet.

                   For this version of the ZRP, both (a) and (b) are true.
                   However, we provide the BRP with its own Message ID and
                   Message Source fields for future support of generic
                   higher layer applications.

    B.  Data Structures

        B.1  Routing Table          (see IARP description)

        B.2  Bordercast Tree Table  (see IARP description)
        B.3  Detected Messages Cache

             +------------------------|--------------------------+ ...
             | Message |  Message ID  | BRP_cache_ID |   Prev    |
             |  Source |              |              |Bordercast |
             |(node_id)|(unsigned int)|(unsigned int)| (node_id) |
             |---------+--------------|--------------+-----------+ ...
             |         |              |              |           |
             |         |              |--------------+-----------+ ...
             |         |              |              |           |
             |---------+--------------|--------------+-----------+ ...
             |         |              |              |           |
             |         |              |--------------+-----------+ ...
             |         |              |              |           |
             +------------------------|--------------+-----------+ ...

                 ... +---------------+
                     | Out_neighbor  |
                     |               |
                     |(node_id list) |
                 ... +---------------|
                     |               |
                 ... +---------------|
                     |               |
                 ... +---------------|
                     |               |
                 ... +---------------|
                     |               |
                 ... +---------------+
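
   A non-normative realization of this cache is sketched below in Python;
   the locally assigned BRP_cache_ID is used as the lookup key when the
   message is handed up to, and later returned by, the IERP.  Field names
   follow the table above; the layout is illustrative only.

       # Non-normative sketch of the Detected Messages Cache.
       detected_messages = {}
       next_cache_id = 1

       def record_detected_message(msg_source, msg_id, prev_bcast,
                                   out_neighbors):
           global next_cache_id
           cache_id = next_cache_id
           next_cache_id += 1
           detected_messages[cache_id] = {
               "source": msg_source, "message_id": msg_id,
               "prev_bcast": prev_bcast,
               "out_neighbors": list(out_neighbors),
           }
           return cache_id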

    C.  Interfaces

        C.1  Higher Layer (IERP)
            C.2.1  Send(packet, BRP_cache_ID)
                Used by the IERP to request packet transmission.
            C.2.2  Deliver(packet, BRP_cache_ID)
                Used by the BRP to deliver a data packet up to IERP.
        C.2  Lower Layer (IP)
            C.2.1  Send()       (specified in IP standard)
            C.2.2  Deliver()    (specified in IP standard)

    D.  State Machine

      The BRP protocol consists of only one state (IDLE).  Therefore,
      no state transitions need to be specified. The BRP immediately
      acts upon an event and then returns back to the IDLE state.

      Notes: 1) X is used as a label for the node running this state
                machine.

      D.1
            Event:   A message is received from the higher layer (IERP).
                     BRP_cache_ID == NULL.

            Action:  The bordercast was initiated by X.  X marks itself as
                     the previous bordercast node and relays the message to
                     all the downstream neighbors in its bordercast tree.

      D.2
            Event:   A message is received from the higher layer (IERP).
                     BRP_cache_ID != NULL.

            Action:  The bordercast was not initiated by X.  X looks up
                     this message's forwarding information recorded in
                     its Detected Messages Cache (and referenced by
                     BRP_cache_ID).  X then relays the message to the
                     designated downstream neighbors.

      D.3
            Event:   A message is received from the IP.

            Action:  The message is terminated under the following
                     conditions:
                     (1) X does not belong to the bordercast tree of
                         prev_bcast.
                     (2) If X is the bordercast recipient and
                             (a) X has already been a bordercast recipient.
                                            OR
                             (b) X is an interior (i.e. non-peripheral)
                                 member of a detected previously
                                 bordercasted node's routing zone.

                     If X is a bordercast recipient and does not terminate
                     the message then it re-bordercasts the message.

                     Next, X decides which of the current bordercast
                     tree's downstream branches (neighbors) will be dropped.
                     A downstream branch will be dropped if, for each of
                     the branch's downstream peripheral nodes D:
                             (a) X has already relayed a message to D.
                                            OR
                             (b) D is an interior (i.e. non-peripheral)
                                 member of a detected previously
                                 bordercasted node's routing zone.

                     Finally, X delivers the message to the higher layer
                     (IERP).

    E.  Pseudocode Implementation

        We define two complementary operations on packets:
        extract(packet) and load(packet)

            extract (packet)
                  extracts the fields of the BRP packet to the following
                  variables: {source, message_id, prev_bcast,
                              message_packet}

            load (packet)
                loads the values of the aforementioned variables into
                the fields of the BRP packet.

        E.1  Receive Packet from Higher Layer (IERP)
             args: (message_packet, BRP_cache_ID)

             // If BRP_cache_ID == NULL then this is a new bordercast
             // issued by this node.  This new bordercast can be relayed
             // to ALL downstream neighbors in the bordercast tree.
             // Otherwise, this is an existing bordercast that should be
             // forwarded to the pre-assigned downstream neighbors.

             if(BRP_cache_ID)
             {
                // Forwarding state for this message was recorded (in E.2,
                // below) under BRP_cache_ID when the message was detected
                source        = Detected_Messages[BRP_cache_ID].source;
                message_id    = Detected_Messages[BRP_cache_ID].message_id;
                prev_bcast    = Detected_Messages[BRP_cache_ID].prev_bcast;
                out_neighbors =
                            Detected_Messages[BRP_cache_ID].out_neighbors;
             }
             else
             {
                // New bordercast issued by this node
                source        = MY_ID;
                message_id    = MY_BORDERCAST_ID++;
                prev_bcast    = MY_ID;
                out_neighbors = Bordercast_Tree[MY_ID].downstream_neighbors;
             }
             load(packet);
             send((packet,out_neighbors),IP);

        E.2  Receive Packet from the Network (IP)
             args:  (packet)

             extract(packet);

             // Discard bordercast message if this node is not a member of
             // prev_bcast's bordercast tree
             discard = !Bordercast_Tree[prev_bcast].member;

             // If this node is the downstream peripheral node of
             // prev_bcast then terminate if this node:
             //     (a) has already been a bordercast recipient
             //                       OR
             //     (b) is inside the routing zone of a node that has
             //         already bordercasted the message

             if(is_peripheral_node(prev_bcast,MY_ID))
             {
                 for((EACH) queried_node (BELONGING TO)
                                  Detected_Messages[source,message_id]
                                  .prev_bcast)
                 {
                    a = MY_ID (BELONGS TO)
                        Bordercast_Tree[queried_node]
                                            .downstream_peripheral_nodes;
                    b = is_interior_node(queried_node,MY_ID);
                    discard = discard || (a || b);
                 }
                 // Since this node is the downstream peripheral recipient,
                 // it will re-bordercast the message (if it isn't
                 // terminated)
                 prev_bcast = MY_ID;
             }
             out_neighbors = {};
             if(!discard)
             {
                 // Don't relay message to a given downstream neighbor if
                 // each covered downstream peripheral node (of prev_bcast):
                 //     (a) has already been a bordercast recipient
                 //                       OR
                 //     (b) is inside the routing zone of a node that has
                 //         already bordercast the message

                 for((EACH) neighbor (BELONGING TO)
                                          Bordercast_Tree[prev_bcast]
                                                   .downstream_neighbors)
                 {
                     drop_neighbor = TRUE;
                     leaves = Bordercast_Tree[prev_bcast,neighbor
                                           ].downstream_peripheral_nodes;
                     for((EACH) leaf_node (BELONGING TO) leaves)
                     {
                         failed_leaf = TRUE;
                         for((EACH) queried_node (BELONGING TO)
                                Detected_Messages[source,message_id]
                                                             .prev_bcast)
                         {
                            a = leaf_node (BELONGS TO)
                                Bordercast_Tree[queried_node]
                                            .downstream_peripheral_nodes;
                            b = is_interior_node(queried_node,leaf_node);
                            failed_leaf = failed_leaf || (a || b);
                         }
                         drop_neighbor = drop_neighbor && failed_leaf;
                     }
                     if(!drop_neighbor)
                         out_neighbors = {out_neighbors, neighbor};
                 }
             }

             // Record BRP "state" in Detected Messages Cache, so that
             // the message can be properly forwarded when returned by
             // the higher layer app (IERP)
             BRP_cache_ID++;
             add(Detected_Messages[source,message_id],(BRP_cache_ID,
                                               prev_bcast,out_neighbors));
             schedule(remove(Detected_Messages[BRP_cache_ID]),
                                                      MAX_QUERY_LIFETIME);

             deliver(message_packet,BRP_cache_ID);

5. Other Considerations

5.1 Sizing the Zone Radius

   The performance of the Zone Routing Protocol is determined by the
   routing zone radius.  In general, dense networks consisting of a few
   fast moving nodes favor smaller routing zones.  On the other hand, a
   sparse network of many slowly moving nodes operates more efficiently
   with a larger zone radius.

   The simplest approach to configuring the routing zone radius is to
   make the assignment once, prior to deploying the network.  This can
   be performed by the network administration, if one exists, or by
   the manufacturer, as a default value.  This may provide acceptable
   performance, especially in situations where network characteristics
   do not vary greatly over space and time.  Alternatively, the ZRP can
   adapt to changes in network behavior, through dynamic configuration
   of the zone radius [6].  In [14], it was shown that a node can
   accurately estimate its optimal zone radius, on-line, based on local
   measurements of ZRP traffic.  The re-sizing of the routing zone can be
   carried out by a protocol that conveys the change in zone radius to
   the members of the routing zone.
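
   As a non-normative illustration, a node could periodically compare its
   locally measured IARP (proactive) and IERP (reactive) traffic and nudge
   its zone radius toward the point where neither component dominates.
   The thresholds, measurement interval and radius bounds in the Python
   sketch below are illustrative assumptions, not part of this
   specification.

       # Non-normative sketch of one possible on-line radius adaptation.
       def adapt_zone_radius(radius, iarp_pkts, ierp_pkts,
                             min_radius=1, max_radius=8):
           # Growing the zone raises proactive (IARP) cost but reduces the
           # number of reactive (IERP) queries, and vice versa.
           if iarp_pkts > ierp_pkts and radius > min_radius:
               return radius - 1   # proactive cost dominates: shrink zone
           if ierp_pkts > iarp_pkts and radius < max_radius:
               return radius + 1   # query cost dominates: grow zone
           return radius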

   With the capability of independently sized routing zones, each node in
   the network can adaptively configure its own optimal zone radius in a
   distributed fashion.  The performance of Zone Routing is further
   improved by the ability to provide fine-tuned adaptation to local and
   temporal variations in network characteristics [19].  The details of
   the Independent Zone Routing (IZR) framework will be included in a
   future version of this draft.

5.2 Hierarchical Routing and the ZRP

   In a hierarchical network architecture, network nodes are organized
   into a smaller number of (often disjoint) clusters.  This routing
   hierarchy is maintained by two component routing protocols.  An intra-
   cluster protocol provides routes between nodes of the same cluster,
   while an inter-cluster protocol operates globally to provide routes
   between clusters.

   The ZRP, with its routing zones and intrazone / interzone routing
   protocol (IARP/IERP) architecture may give the appearance of being a
   hierarchical routing protocol.  In actuality, the ZRP is a flat
   routing protocol.  Each node maintains its own routing zone, which
   heavily overlaps with the routing zones of neighboring nodes. Because
   there is a one-to-one correspondence between nodes and routing zones,
   the routing zones, unlike hierarchical clusters, do not serve to hide
   the details of local network topology.  As a result, the network-wide
   interzone routing protocol (IERP) actually determines routes between
   individual nodes, rather than just between higher level network
   entities.

   For small to moderately sized networks, flat routing protocols, like
   the ZRP, are preferable to hierarchical routing protocols.  In order
   for a node to be located, its address needs to reflect the node's
   location within the network hierarchy (i.e.  {cluster id,node id}).
   Because of node mobility, a node's cluster membership (and thus
   address) is subject to change.  This complicates mobility management,
   for the benefit of more efficient routing.  In many hierarchical
   routing protocols, each cluster designates a single clusterhead node
   to relay inter-cluster traffic.  These clusterhead nodes become
   traffic "hot-spots", potentially resulting in network congestion and
   a single point of failure.  Furthermore, restricting cluster access
   through clusterhead nodes can lead to sub-optimal routes, as potential
   neighbors in different clusters are prohibited from communicating
   directly.   Some hierarchical routing protocols, such as the
   Zone-Based Hierarchical Link-State (ZHLS) [8], avoid these problems by
   distributing routing information to all cluster nodes, rather than
   maintaining a single clusterhead.

   In spite of these disadvantages, hierarchical routing protocols are
   often better suited for very large networks than flat routing
   protocols. Because hierarchical routing protocols provide global
   routes to network clusters, rather than individual nodes, routing
   table storage is more scalable.  Similarly, the amount of route update
   traffic is also more scalable.  To some extent, the reduction in
   routing traffic is offset by extra mobility management overhead (i.e.
   identifying which cluster a node belongs to).  However, it is quite
   common that the environment or existing organizational structure
   causes nodes to naturally cluster together.  In these cases, there may
   be a high degree of intra-cluster mobility, while inter-cluster
   mobility is less common.

   A hierarchical routing protocol can be viewed as a set of flat routing
   protocols, each operating at different levels of granularity.  In a
   two-tier routing protocol, the inter-cluster component is essentially
   a flat routing protocol that computes routes between clusters.
   Likewise, the intra-cluster component is a flat routing protocol that
   computes routes between nodes in each cluster.  For example, the Near
   Term Digital Radio (NTDR) System [18] and ZHLS both employ proactive
   link state protocols to determine inter- and intra-cluster connectivity.

   In place of traditional proactive or reactive protocols, we recommend
   that the intra-cluster and inter-cluster routing protocol components
   be implemented based on the hybrid proactive/reactive ZRP.  As
   described throughout this draft, the ZRP is designed to provide an
   optimal balance between purely proactive and reactive routing. This
   applies equally well to routing between nodes at the intra-cluster
   level and between clusters at the inter-cluster level. Additionally,
   the hybrid ZRP methodology can be applied to the related mobility
   management protocols, which determine complete node addresses based on
   a node's location in the network hierarchy.

6.  Patent Rights Statement

   Cornell University has filed one or more patents protecting the
   inventions described in this submission (the "Patents").  If Cornell
   University's submission, or material portions thereof, is accepted and
   becomes part of the IETF standard (the "Standard"), then Cornell will
   grant any entity wishing to practice the Standard a non-exclusive,
   royalty-free license under the Patents to the extent, but only to the
   extent, that such license is required to implement (a) mandatory
   elements of the Standard; or (b) elements of Cornell's submission that
   are explicitly specified (and not merely incorporated by reference) in
   the Standard.

   Use of any elements of Cornell's submission that are neither required
   in order to implement the Standard nor explicitly specified in the
   Standard will require an additional license from Cornell University
   under terms to be negotiated between the parties.

   While the protocol disclosed in Cornell's submission is being
   evaluated by the IETF, whether on the standards track or otherwise,
   Cornell University will and hereby does grant any party a license to
   utilize, test and evaluate such protocol solely for non-commercial,
   research purposes.

7. References

   [1]   Haas, Z.J., "A New Routing Protocol for the Reconfigurable
              Wireless Networks," ICUPC'97, San Diego, CA, Oct. 12,1997.

   [2]   Haas, Z.J. and Pearlman, M.R., "The Performance of Query
              Control Schemes for the Zone Routing Protocol,"
              SIGCOMM'98, Vancouver, BC, Sept. 2-4, 1998.

   [3]   Haas, Z.J., Pearlman, M.R. and Samar, P., "Intrazone Routing
              Protocol (IARP)," IETF Internet Draft,
              draft-ietf-manet-iarp-02.txt, July 2002.

   [4]   Haas, Z.J., Pearlman, M.R. and Samar, P., "Interzone Routing
              Protocol (IERP)," IETF Internet Draft,
              draft-ietf-manet-ierp-02.txt, July 2002.

   [5]   Haas, Z.J., Pearlman, M.R. and Samar, P., "Bordercasting
              Resolution Protocol (BRP)," IETF Internet Draft,
              draft-ietf-manet-brp-02.txt, July 2002.

   [6]   Haas, Z.J. and Tabrizi, S., "On Some Challenges and Design
              Choices in Ad-Hoc Communications," MILCOM'98, Boston, MA,
              October 18-21, 1998.

   [7]   Jacquet, P., Muhlethaler, P., Qayyum A., Laouiti A., Viennot L.,
              and Clausen T., "Optimized Link State Routing Protocol,"
              IETF Internet Draft, draft-ietf-manet-olsr-03.txt,
              November 2000.

   [8]   Joa-Ng, M. and Lu, I.T., "A Peer-to-Peer Zone-based Two-Level
              Link State Routing for Mobile Ad-Hoc Networks," to appear
              in IEEE JSAC issue on Ad-Hoc Networks, June, 1999.

   [9]   Johnson, D.B., and Maltz, D.A., "Dynamic Source Routing
              in Ad-Hoc Wireless Networks," in Mobile Computing,
              edited by T. Imielinski and H. Korth, chapter 5,
              pp. 153-181, Kluwer, 1996.

   [10]   Moy, J., "OSPF Version 2," INTERNET DRAFT STANDARD,
              RFC 2178, July 1997.

   [11]   Murthy, S., and Garcia-Luna-Aceves, J.J., "An Efficient
              Routing Protocol for Wireless Networks," MONET, vol. 1,
              no. 2, pp. 183-197, October 1996.

   [12]   Ogier, R. "Efficient Routing Protocols for Packet-Radio
              Networks Based on Tree Sharing," MoMUC '99, November 1999.

   [13]   Park, V.D., and Corson, M.S., "A Highly Adaptive Distributed
              Routing Algorithm for Mobile Wireless Networks,"
              IEEE INFOCOM'97, Kobe, Japan, 1997.

   [14]   Pearlman, M.R. and Haas, Z.J., "Determining the Optimal
              Configuration for the Zone Routing Protocol," IEEE JSAC,
              August, 1999.

   [15]  Pearlman, M.R. and Haas, Z.J., "Improving the Performance of
              Query-Based Routing Protocols Through 'Diversity
              Injection'", IEEE WCNC'99, New Orleans, LA, Sept. 1999.

   [16]  Perkins, C.E., and Bhagwat, P., "Highly Dynamic
              Destination-Sequenced Distance-Vector Routing (DSDV) for
              Mobile Computers," ACM SIGCOMM, vol.24, no.4, October 1994.

   [17]  Perkins, C.E. and Royer, E.M., "Ad-Hoc On-Demand Distance
              Vector Routing," IEEE WMCSA'99, New Orleans, LA, Feb. 1999

   [18]  Ruppe, R., Griswald, S., Walsh, P. and Martin, R., "Near Term
              Digital Radio (NTDR) System," IEEE MILCOM'97, Monterey, CA,
              Nov. 1997.

   [19]  Samar, P., Pearlman, M.R. and Haas, Z.J., "Hybrid Routing: The
              Pursuit of an Adaptable and Scalable Routing Framework for
              Ad Hoc Networks," in Handbook of Ad Hoc Wireless Networks,
              M. Ilyas, ed., CRC Press 2002.

   [20]  Waitzman, D., Partridge, C., Deering, S.E., "Distance
              Vector Multicast Routing Protocol," RFC 1075, Nov. 1, 1988.

8.0  Draft Updates

   - The sole function of the Bordercast Resolution Protocol (BRP) is
     to perform application independent bordercasting (all excess
     routing protocol functionality has been moved up to the IERP).
     This means that the BRP can be used outside of the ZRP, for any
     application that requires an efficient network-wide query/probing
     mechanism.  Likewise, a wide variety of globally reactive routing
     protocols may be adopted as alternate IERPs, by substituting the
     query "broadcast()" with "bordercast()".
     In further support of application independent bordercasting, the BRP
     can deliver messages up to the higher layer application at every hop
     (as opposed to only delivering for peripheral nodes).  This is
     important for applications that wish to use the bordercast service,
     but are otherwise "routing zone unaware".

   - The bordercast message delivery service is now implemented via
     multicasting, rather than a series of separate unicasts to individual
     peripheral nodes.  The bordercast control mechanisms have been revised
     to support this transition.

   - The IARP is now required to proactively track the COMPLETE topology of
     an EXTENDED routing zone, consisting of all nodes that are at a
     minimum distance (in hops) LESS THAN twice the "basic" routing zone
     radius.  Nodes use this extended information to construct the
     downstream portion of each routing zone node's bordercast (multicast)
     tree.  Consequently, the Distance Vector IARP implementation has been
     dropped.

   - For convenience, the IARP and IERP now share a single routing table.
     IARP entries are indicated by a special flag.

   - The IERP's optional Route Accumulation and Route Optimization stages
     have been dropped.

   - The route maintenance policy has been simplified.  Local route
     repairs are only reported to the source of the broken link, not to
     the route's source.

   - Support for "diversity injection" [10] has been added to the IERP.
     The diversity injection mechanism allows the global route discovery
     to extract more information about current network connectivity,
     without any additional traffic.

Authors' Information

   Zygmunt J. Haas
   Wireless Networks Laboratory
   323 Frank Rhodes Hall
   School of Electrical & Computer Engineering
   Cornell University
   Ithaca, NY 14853
   United States of America
   tel: (607) 255-3454, fax: (607) 255-9072
   Email: haas@ece.cornell.edu

   Marc R. Pearlman
   389 Frank Rhodes Hall
   School of Electrical & Computer Engineering
   Cornell University
   Ithaca, NY 14853
   United States of America
   tel: (607) 255-0236, fax: (607) 255-9072
   Email: pearlman@ece.cornell.edu

   Prince Samar
   374 Frank Rhodes Hall
   School of Electrical & Computer Engineering
   Cornell University
   Ithaca, NY 14853
   United States of America
   tel: (607) 255-9068, fax: (607) 255-9072
   Email: samar@ece.cornell.edu

   The MANET Working Group can be contacted through its chairs:

   M. Scott Corson
   Institute for Systems Research
   University of Maryland
   College Park, MD 20742
   (301) 405-6630
   corson@isr.umd.edu

   Joseph Macker
   Information Technology Division
   Naval Research Laboratory
   Washington, DC 20375
   (202) 767-2001
   macker@itd.nrl.navy.mil