Internet Draft                                              Charlie Lynn
Nimrod Working Group                                        January 1995
Expires July 1995

                         Nimrod Deployment Plan

Status of this Memo

This document is one of several by the IETF Nimrod Working Group. It describes how Nimrod can be incrementally deployed into the existing Internet. Consideration is also given to IPv6.

This document is an Internet Draft. Internet Drafts are working documents of the Internet Engineering Task Force (IETF), its Areas, and its Working Groups. Note that other groups may also distribute working documents as Internet Drafts.

Internet Drafts are draft documents valid for a maximum of six months. Internet Drafts may be updated, replaced, or obsoleted by other documents at any time. It is not appropriate to use Internet Drafts as reference material or to cite them other than as a "working draft" or "work in progress".

Please check the 1id-abstracts.txt listing contained in the internet-drafts Shadow Directories on ftp.isi.edu, ds.internic.net, nic.nordu.net, munnari.oz.au, or ftp.is.co.za to learn the current status of any Internet Draft.

Distribution of this Internet Draft is unlimited. Please send all comments to nimrod-wg@BBN.Com.

Abstract

We describe how the Nimrod routing system can be deployed incrementally into an internetwork, and in particular into the Internet. We discuss the initial implementation required to obtain Nimrod functionality in some places within an internetwork, and we also discuss the migration path to the full implementation.

Contents

   1 Introduction
      1.1 Nodes
      1.2 Identifiers
      1.3 Maps
      1.4 Agents
   2 Objectives of this Transition Plan
   3 Initial Deployment
      3.1 Hosts
      3.2 IP Versions
         3.2.1 Endpoint Labels
         3.2.2 Agent Discovery
      3.3 Deployment Locations
      3.4 Implementations
         3.4.1 Plug and Play
         3.4.2 Independent Implementation
   4 Full-Scale Deployment
      4.1 Directory Services
         4.1.1 Format
      4.2 Host Changes
      4.3 Boot Server Changes
      4.4 EID Administration
   5 Security Considerations
      5.1 Accountability
         5.1.1 Integrity of Routing Information
         5.1.2 Compromised Components
         5.1.3 Interactions with Other Internetwork Subsystems
         5.1.4 Enhanced Service Considerations
      5.2 Summary of Nimrod Security Services
   6 Utilities
   7 Author's Address
   References

1 Introduction

Nimrod is a scalable routing architecture for large, heterogeneous, and dynamic internetworks, providing service-specific routing in the presence of multiple constraints and admitting incremental deployment throughout an internetwork. This document presents a plan for deploying the Nimrod routing system in an internetwork. We briefly describe the Nimrod architecture to provide context for the deployment plan presented in subsequent sections of this document. Detailed descriptions of the Nimrod architecture and functionality can be found in [1], [2], [3], and [4].

1.1 Nodes

Nimrod aggregates the physical assets of an internetwork into a hierarchy of clusters, called "nodes", and abstracts connectivity and service information (e.g., adjacency relationships, service offerings, and restrictions on the use of these services) associated with the component assets to determine the connectivity and service information for a node. Clustering and information abstraction allow the Nimrod routing system to scale to an internetwork the size of the projected Internet. At the top of the hierarchy, there is a single "universal node" representing the complete internetwork. Other examples of nodes include autonomous systems, networks (including routers, hosts, and links between them), individual routers, and even interfaces on routers. Assets within a (non-partitioned) node must be able to intercommunicate over routes that do not traverse any assets outside of the node.

1.2 Identifiers

The Nimrod architecture makes a fundamental distinction between the name and the location of the endpoints for a traffic session. Three separate label spaces are used: endpoint names, endpoint identifiers, and locators.

"Endpoint names" are variable length, relatively long, globally unique text strings used by humans to name endpoints in an internetwork. They are administratively allocated and are not necessarily related to the topology of the internetwork.

"Endpoint identifiers" (EIDs) are fixed length, relatively short, topologically insignificant, densely packed, globally unique identifiers used by the transport layer to identify the source and destination endpoints of traffic sessions. An EID has the semantics of "who". In most situations, an EID will correspond to a host but may also, for example, correspond to a migratory process or a sub-component of a host. EIDs are administratively allocated and are not necessarily related to the topology of the internetwork. An EID has no internal structure useful to routing.

"Locators" are moderate (and possibly variable) length, topologically significant, structured bit strings used by routing to identify the source and destination nodes containing the endpoints of a traffic session. A locator has the semantics of "where". It has internal structure that consists of a list of "elements", one from each node in the hierarchy that contains the given node. It does not appear that an arbitrary observer need be able to parse an arbitrary locator into its component elements. However, each Nimrod agent must be able to recognize its own locator and prefix, and can thus isolate any remaining portion, e.g., suffix, of locators within itself.
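As an illustration of the prefix structure just described, the following sketch (in Python, purely for illustration) shows how an agent that knows its own locator can recognize it as a prefix of a longer locator and isolate the remaining suffix. The dotted textual form "U.A.B.C" and the class and method names are assumptions made for this example; the architecture does not define a concrete locator syntax here.

   # Minimal sketch of locator handling.  The dotted textual form
   # ("U.A.B.C") and the class/field names are illustrative
   # assumptions, not part of the Nimrod specification.

   class Locator:
       def __init__(self, elements):
           # One element per node in the hierarchy, highest level first.
           self.elements = list(elements)

       @classmethod
       def parse(cls, text):
           return cls(text.split("."))

       def is_prefix_of(self, other):
           # True if this locator identifies a node that contains 'other'.
           return other.elements[:len(self.elements)] == self.elements

       def suffix_within(self, prefix):
           # An agent that recognizes 'prefix' as its own locator can
           # isolate the remaining portion of a longer locator.
           if not prefix.is_prefix_of(self):
               raise ValueError("not within this node")
           return self.elements[len(prefix.elements):]

   # Example: an agent whose node locator is U.A can isolate the suffix
   # of an endpoint locator U.A.B.C without parsing unrelated locators.
   agent = Locator.parse("U.A")
   endpoint = Locator.parse("U.A.B.C")
   assert agent.is_prefix_of(endpoint)
   assert endpoint.suffix_within(agent) == ["B", "C"]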
1.3 Maps

The Nimrod architecture assembles, distributes, and uses routing information in the form of "maps" of contiguous portions of an internetwork, represented by nodes and their associated adjacencies and service attributes. The adjacencies between nodes in a Nimrod map may be visualized as arcs. However, arcs are not part of the Nimrod Architecture, only the adjacencies.

There are several maps associated with a node. A node's "basic map" is built from the maps of its component (next lower-level) nodes. It may contain information that the administrators of the node do not wish to be made available outside of the node. An "internal map" is a map that is made available outside of the node for those interested in the node's internal structure. The contents of an internal map are a filtered version of the basic map. There may be several internal maps, as the filtering applied to the basic map may be a function of the credentials of the requester of the map.

An "abstract map" is a map that is made available outside of the node for those interested in passing traffic through the node. The contents of an abstract map are a filtered, abstracted version of the basic map that contains the node's adjacencies and "inbound", "transit", and "outbound" service offerings and restrictions, but does not contain the node's internal structure. The advertised services apply to node entry and exit points from and to adjacent nodes and need not be uniform over all such points. There may be several abstract maps, as the filtering and abstraction applied to the basic map may be a function of the credentials of the requester of the map.

Filtering results from information hiding considerations and implies that the map distributor trusts the recipient not to redistribute the map arbitrarily. Nimrod version 1 will not rely on such trust relationships. We expect that those who choose to be part of the initial deployment will distribute service restrictions with the maps rather than restrict service information distribution in maps.

1.4 Agents

The Nimrod functions of map construction and distribution, endpoint location, route generation, path establishment, and packet forwarding are performed by Nimrod "agents". These agents may reside within routers, hosts, or special-purpose Nimrod "boxes".

"Node representatives" construct and distribute maps for the given node. When a node's abstract map changes, one of its representatives distributes the new map to a node representative at the next level up in the hierarchy, ensuring that higher-level node representatives can accommodate lower-level map changes in their own maps. Node representatives also respond to requests for specific maps from "route agents", which generate routes on behalf of endpoints.

The Nimrod architecture specifies neither clustering and abstraction algorithms for constructing maps nor route generation algorithms. We will, however, provide in a future document a Nimrod usage guide detailing several example algorithms and their suitability with respect to various internetwork topologies and traffic service requirements. The Nimrod clustering and abstraction algorithms may differ from node to node, and the route generation algorithms may differ from endpoint to endpoint. In fact, these algorithms may be proprietary. We expect that in a full-scale implementation of Nimrod in the Internet, some organizations may specialize in providing map construction or route generation services for their Internet customers.
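To make the map terminology of section 1.3 concrete, the following sketch shows one way a node representative might derive an abstract map from a basic map: the node's internal structure is omitted, and service offerings are filtered by the requester's credentials. The dictionary layout, the credential check, and the locator strings are assumptions made for illustration only; the architecture deliberately leaves the abstraction and filtering algorithms unspecified.

   # Sketch of deriving an abstract map from a basic map.  The data
   # layout and filtering rule are illustrative assumptions.

   def make_abstract_map(basic_map, requester_credentials):
       """Return an abstract map: the node's adjacencies and service
       offerings, with the internal structure removed and any entries
       the requester is not entitled to see filtered out."""
       visible = [svc for svc in basic_map["services"]
                  if requester_credentials in svc["visible_to"]]
       return {
           "node": basic_map["node"],
           "adjacencies": basic_map["adjacencies"],   # neighbors only
           "services": visible,                       # inbound/transit/outbound offerings
           # note: basic_map["components"] is deliberately omitted
       }

   basic = {
       "node": "U.A",
       "components": ["U.A.B", "U.A.C"],              # internal structure, not exported
       "adjacencies": ["U.D", "U.E"],
       "services": [
           {"type": "transit", "offer": "best-effort", "visible_to": {"public", "peer"}},
           {"type": "transit", "offer": "low-delay",   "visible_to": {"peer"}},
       ],
   }
   print(make_abstract_map(basic, "public"))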
There is always a simple route between any two locators: go up the clustering hierarchy to a common node, then down the hierarchy to the destination. An organization providing such route generation services might find better routes by exploring more detailed maps in order to find adjacencies that have been filtered out of higher-level maps. There may also be a market for optimizing multicast routes between a number of pre-specified nodes. Other organizations will contract for these services, either instead of providing this functionality in their own organization, or because it is more cost effective to purchase some routes than it is to generate them locally. As the internal and abstract maps returned to a route agent might depend on the credentials of the requester of the map, a route agent should be prepared to receive different versions of a map for a given node.

"Endpoint representatives" request acceptable routes on behalf of endpoints, providing route agents with endpoint locators, EIDs, and traffic service requirements. They obtain endpoint locators from "association agents" that maintain the database of endpoint names, EIDs, and locators. Using this endpoint information, service requirements, and maps obtained from node representatives, route agents generate routes between source and destination endpoints. Maps requested during, and routes resulting from, route generation may be cached by the route agents for use in fulfilling future route requests.

Endpoint representatives pass the acceptable routes supplied by the route agent back to the endpoint so that it may select the route or routes that best satisfy its requirements. The endpoint initiates path setup, which sets up forwarding information in "forwarding agents" along the path(s) to the destination(s). Other acceptable routes might be cached for later use in case the selected route fails during use. Nimrod supports unicast, multicast, and multiple-path routes.

Forwarding agents forward traffic according to stored state information and forwarding directives contained in packets. "Boundary forwarding agents" reside at points along a node's boundary and are responsible for forwarding traffic and, during path setup, enforcing any local policy applicable to traffic entering or leaving the node.

2 Objectives of this Transition Plan

This transition plan focuses on the deployment issues most relevant in the Internet. It does not attempt to cover the obscure or innovative ways in which the Nimrod architecture and functionality might be used.

The process of moving from the current Internet routing systems (e.g., BGP, OSPF) to the Nimrod routing system must be accomplished in a smooth and incremental way. Both the old and new routing systems will be operating simultaneously, indefinitely. There is no requirement that all portions of the Internet switch over to the Nimrod routing system. Thus, it is incorrect to think that the "transition" will ever be "completed".

The functionality offered by Nimrod must be made available as Nimrod is being incrementally deployed. We expect that organizations will want to see the benefits provided by Nimrod before investing in it. Hence, we plan to make a Nimrod implementation available that is inexpensive and easy to use, to allow prospective Nimrod users to become familiar with Nimrod's functions and benefits. The cost to an organization to "buy in" to a Nimrod deployment should be as low as possible.
A target cost is one box running freely available software that includes all the necessary Nimrod agents. When an organization's demand increases to the point where the one-box solution does not have sufficient resources to meet the demand, additional boxes would be economically justifiable.

Nimrod deployment does not require any changes to host software before an organization is able to take advantage of Nimrod functionality. Moreover, an organization's routers need not be changed for Nimrod support. The plan does not require existing hosts or networks to be renumbered. Changes to hosts and routers to support Nimrod functionality will occur when there are sufficient customers to justify the host and router vendors' development costs and when performance improvements are attractive enough.

These objectives both provide incentive to deploy the Nimrod routing system and permit more widespread implementation experience and interoperability testing to be gained, so that any potential problems with either the protocols or specifications can be identified and fixed, and any areas that may need improvement can be addressed.

3 Initial Deployment

There are several possible scenarios for deploying Nimrod in an internetwork in an incremental fashion. These scenarios differ depending on whether hosts are Nimrod capable, whether the internetwork is using IPv4, IPv6, or some other internetwork layer, whether Nimrod will be deployed in an existing portion of the Internet or will be employed in an entirely new network, and whether the prospective Nimrod users will use an "off-the-shelf" version or choose to implement their own version of Nimrod.

3.1 Hosts

For initial deployment, hosts need not be Nimrod capable nor even Nimrod aware. The consequence is that existing hosts cannot use the full functionality provided by EIDs, or the ability to dynamically specify service requirements. Later, in section 4, we discuss potential benefits of introducing Nimrod functionality into hosts.

3.2 IP Versions

Nimrod functionality can be added to internetworks that use any internetwork packet header and forwarding procedures. We note that this is the same approach used with IDPR [5], enabling that routing system to be used in any internetwork, independent of packet format and forwarding conventions. In particular, Nimrod will work with IPv4 and IPv6. However, Nimrod is not responsible for conversion between IPv4 and IPv6 addresses or formats. That translation functionality is part of the IPv6 transition and not part of the Nimrod deployment.

With either version of IP, Nimrod information will be encapsulated within IP packets and forwarded between Nimrod agents. Currently deployed hosts and routers need not be aware of the presence of Nimrod; they forward packets based solely on the IP header. The encapsulation will appear as a "Nimrod protocol". Packets destined to the box are passed up to the protocol handler, and either decapsulated for delivery to an IP host that is not Nimrod-aware, or re-encapsulated for delivery to the next Nimrod-aware entity. We note that in the case of IPv6, some performance improvements in data packet forwarding may be realized by placing Nimrod forwarding information in a Nimrod-specific IPv6 Routing Header and/or end-to-end EID option.
One drawback of using options is that the IPv4/IPv6 translators would need to be modified to also translate the option, and that there might not be sufficient space in the IPv4 header for the option. Consequently, use of options can be restricted to communication between IPv6-capable systems. The simplest transition, in terms of the amount of implementation required, is to encapsulate all Nimrod information within IP packets, with none of the Nimrod-specific information visible at the IP level.

An alternate viewpoint is that an IPv6 address is a Nimrod locator, and that the IPv6 Routing Header is the Nimrod routing header. It is not clear at this point whether the aggressive IPv6 schedule will permit general consensus to be reached based on the as yet unproved Nimrod routing system. One issue is that the current IPv6 thinking is that intermediate systems may not insert a routing header, as we envision Nimrod endpoint representatives or forwarding agents doing. Another point is that the IPv6 Authentication Header is end-to-end, and appears only once in a packet. There has been no documented consideration of authentication between end systems and routers, as Nimrod is proposing.

3.2.1 Endpoint Labels

For initial Nimrod deployment, we suggest using Domain Name System (DNS) names and/or IP addresses as endpoint names. These are the lookup keys used by the DNS system. There will be no "inverse lookup" on either EID or locator, as Nimrod does not require such a mechanism. (Note that this does not prohibit an EID or locator from being used as a cache key.) This simplifies both the administration and maintenance of those label spaces.

IP currently overloads the IP address with the semantics of both an EID and a locator. Thus, we will make the assumption that if no EID is specified, either explicitly in a packet or implicitly by a flow, the IP address is to be used as the EID. Hosts that have been modified to use EIDs (that are not the same as the IP address) will interoperate with unmodified hosts using the Nimrod mechanism to communicate EID information. Locators will be handled similarly. The endpoint representatives will maintain the mapping between locators and unmodified local hosts. This approach reduces the amount of implementation effort required to get Nimrod running in a portion of the Internet.

We suggest adding additional resource records to the DNS to specify EID and locator information. We further suggest that the EID and locator resource records be returned as additional information when either an IPv4 A or IPv6 AAAA record is requested. Unmodified hosts will not query for EID or locator records, and will not make use of the additional information returned with A or AAAA records. That information will be used by the endpoint representatives. DNS queries from an endpoint will most likely pass through the endpoint's representative, which can relay the request and then cache the reply. If the information is not in the cache when needed by the endpoint representative, it can query for it directly using the IP addresses in the packets it is processing.

Another approach is to use Nimrod to maintain the authoritative endpoint/locator associations. In this case, Nimrod boxes would be placed near hosts to intercept DNS queries, use the DNS names as keys into the association database of endpoint locators, retain the appropriate endpoint/locator associations, and return only the IP addresses to the hosts.
While this approach may be more quickly implemented initially, it has several long-term drawbacks which we feel outweigh its advantages. First, it would be another configuration database that would have to be documented, taught to the system administrators, and maintained. Looking up remote host information would require a mechanism to locate the Nimrod agent responsible for the remote host, essentially setting up a parallel delegation hierarchy. Then, as the long-term method is being introduced, there would need to be tools, documentation, training, and ongoing maintenance provided to maintain synchronization between the short-term and the long-term solutions. Nimrod code would then have to be provided to decide which method to use, and it would be hard to remove the short-term solution once there was any significant installed base. For these reasons we do not advocate using this approach.

Note, however, that using Nimrod to maintain non-authoritative information is logically the same as the caching described above, and that one mechanism to prime the cache would be through a local configuration file. This step on the path to the long-term solution can be used for testing while the DNS solution is being implemented. It would require local configuration of all local and remote peers, but avoids the need for a second delegation and lookup hierarchy.

IP addresses may be used as default EIDs for initial Nimrod deployment. Thus, no EID management mechanism need be in place prior to Nimrod deployment. This means that endpoints are initially confined to be hosts, and not processes within hosts, as no port information is available in IP addresses. Multiple Nimrod traffic sessions may be established between a pair of hosts provided that information in addition to the IP address, such as protocol and ports, is visible in the packet. These restrictions disappear when hosts use EIDs that are independent of IP addresses.

3.2.2 Agent Discovery

Agent discovery will be a key factor in any "plug-and-play" deployment of Nimrod. In IP internetworks, each type of Nimrod agent for a node will be assigned an IP multicast address that uniquely identifies agents of that type for the node. Nimrod agents will use IP multicasting to distribute their information to, and to obtain information from, the other agents for the node. Hence, all agents for a given node must be configured with the special IP multicast address(es) for such agents. Since IP multicast may not be available for the initial Nimrod deployment, configuration files for the Nimrod agents can be used instead. The availability of IP multicast will remove the necessity for this configuration information in a full-scale deployment.

3.3 Deployment Locations

We expect that most prospective Nimrod users will want to introduce Nimrod functionality into existing portions of the Internet where there already exist IP routing and packet forwarding mechanisms. We do not expect Nimrod users to build new internetworks based entirely on Nimrod until it has been proven in existing internetworks, but they may choose to build a Nimrod-only subnetwork within their organization.

For an initial Nimrod deployment, we suggest two different methods, one based on breaking existing nodes into smaller nodes and the other based on building new nodes by clustering existing nodes.
In the first scenario, a large autonomous system (e.g., a campus network or even a transit network) becomes a node. This autonomous system is then decomposed into component nodes, for example, along network boundaries. In the second scenario, neighboring autonomous systems become nodes, and these autonomous systems are in turn clustered into a single larger node. For example, a regional service provider might cluster the nodes of its Nimrod-capable customers into a single larger node.

In any case, we suggest initial deployment of Nimrod in an area of the Internet that is large enough to enable observation of the benefits of Nimrod yet small enough for observers to be able to manage and understand Nimrod performance. Within a Nimrod-capable portion of the Internet, Nimrod functionality will be available among the clustered nodes, but conventional Internet routing (e.g., BGP, OSPF) will be used to route to or from hosts external to the Nimrod portion. Nimrod-capable portions of the Internet can be extended by introducing Nimrod functionality to autonomous systems or networks bordering on autonomous systems or networks that already handle Nimrod. Deployment of Nimrod in such a gradual fashion makes the process manageable and can eventually result in full-scale deployment without having to simultaneously convert large numbers of autonomous systems or networks to Nimrod.

3.4 Implementations

Prospective Nimrod users will want to assess the performance of the Nimrod routing system through limited deployment in their networks. Although some users will build their own Nimrod implementations for this purpose, we expect most users will want to take advantage of existing Nimrod implementations in order to reduce the delay and cost of getting Nimrod up and running. For each case, we describe the deployment process, emphasizing the aspects that minimize the amount of work that must be performed to obtain a functional Nimrod routing system.

3.4.1 Plug and Play

We plan to make available versions of Nimrod software that can execute on ordinary workstations for quick trial deployments of Nimrod. With these implementations, a single Nimrod "box" can provide all of the Nimrod functionality. These boxes must also be capable of performing ordinary IP routing and packet forwarding, but are not likely to provide the throughput of a high-performance router.

Nimrod deployment within a node requires boundary forwarding agents---one for each node entry/exit point---to control traffic passing into and out of the node. It also requires a node representative and at least one endpoint representative if there are endpoints, i.e., hosts that are to use Nimrod routing, within the node. Association agent and route agent functionality need not be present in each node, but there should be good connectivity to these agents. For minimal performance impact on traffic to non-Nimrod destinations, we recommend the following deployment of Nimrod boxes within a node.

Typical topologies for organizations that deploy Nimrod functionality consist of one or more high-performance "border routers" that connect the organization's network(s) to the internetwork. These border routers may contain policy filters or firewall functionality. They also typically either define the boundary of IP Multicast intra-site scope connectivity, or do not pass native IP Multicast traffic.
A provider typically has border routers that connect both to the provider's customers and to other providers.

There needs to be subnetwork connectivity between each Nimrod endpoint and its endpoint representative. There should also be subnetwork connectivity between the Nimrod box(es) providing the boundary forwarding agent functionality and the organization's relevant border routers. Note that an organization may choose to restrict Nimrod traffic to only a subset of its border routers, so not all would be relevant. We require subnetwork connectivity for two reasons: first, so that a default route can be installed in each endpoint pointing to its endpoint representative, and in each boundary forwarding agent pointing to its border router, in order to reduce the amount of required configuration; second, so that endpoints will accept ICMP Redirects when the communication peer(s) are not reachable through the Nimrod routing system.

Boundary Forwarding Agents -- Attach a Nimrod box to each relevant border router for the Nimrod node representing the organization. Each such box will act as a boundary forwarding agent and node representative. It will also act as a route agent and association agent for queries about the given node. Nimrod traffic would enter the border router, be passed to the boundary forwarding agent, be processed, passed back to the border router, and then on to the next hop. Only Nimrod traffic will be forwarded to these boxes; non-Nimrod traffic will pass directly through the border routers. This topology is easily used when the connections to the border router are point-to-point links, as may be the case with service providers. Connecting the boundary forwarding agent to the border router in this manner allows the boundary forwarding agent to have a single default route pointing to the border router. This eliminates the need for Nimrod to exchange routing information with other routing protocols, e.g., BGP or OSPF, that are used by the border router.

There are tradeoffs in the way the boundary forwarding agent is interconnected to the organization's network assets. One way is for the boundary forwarding agent to be connected only to the border router. This topology would require the border router to handle each Nimrod packet twice---once when it arrives at the border router from outside the organization destined to the boundary forwarding agent and once when it is passed from the boundary forwarding agent into the organization, and vice versa; it also requires another interface on the border router.

Another way would be to connect the boundary forwarding agent to a LAN that is directly connected to the border router. Using this topology, each Nimrod packet would be handled once by the border router, but would appear twice on the LAN; no additional interfaces are required on the border router. Note that the LAN may be connected to other routers in addition to the border router.

A third way, which requires two interfaces on the boundary forwarding agent and one on the border router, would be to connect the boundary forwarding agent both to the border router and to a LAN. This topology would cause each Nimrod packet to be handled only once by the border router and boundary forwarding agent, and it would appear only once on each link. Other topologies with slightly different tradeoffs are also possible.
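The per-packet processing at such a boundary forwarding agent follows the encapsulation model of section 3.2: Nimrod traffic arrives encapsulated as the "Nimrod protocol", is decapsulated, and is either delivered to a host that need not be Nimrod aware or re-encapsulated toward the next Nimrod-aware entity via the default route to the border router. The sketch below outlines that decision; the protocol number, the packet fields, and the lookup table are illustrative assumptions, since no encapsulation format is fixed by this plan.

   # Sketch of the forwarding decision at a boundary forwarding agent.
   # NIMROD_PROTO, the field names, and the tables are hypothetical.

   from dataclasses import dataclass

   NIMROD_PROTO = 253   # illustrative "Nimrod protocol" number

   @dataclass
   class IpPacket:
       src: str
       dst: str
       proto: int
       payload: object    # inner IpPacket when proto == NIMROD_PROTO

   def process(packet, next_agent_table, own_address, border_router):
       """Decapsulate, then deliver locally or re-encapsulate and hand
       the packet back to the border router (the agent's default route)."""
       if packet.proto != NIMROD_PROTO:
           return ("not for us", packet)     # non-Nimrod traffic bypasses the box
       inner = packet.payload
       next_agent = next_agent_table.get(inner.dst)
       if next_agent is None:
           # Last Nimrod-aware hop: hand the untouched inner packet to a
           # host that need not be Nimrod aware.
           return ("deliver", inner)
       # Otherwise wrap it again for the next Nimrod-aware entity.
       outer = IpPacket(src=own_address, dst=next_agent,
                        proto=NIMROD_PROTO, payload=inner)
       return ("forward via " + border_router, outer)

   # Example: traffic for 10.1.2.3 still has a Nimrod hop to traverse.
   inner = IpPacket("192.0.2.1", "10.1.2.3", 6, b"data")
   outer = IpPacket("203.0.113.7", "198.51.100.1", NIMROD_PROTO, inner)
   print(process(outer, {"10.1.2.3": "198.51.100.7"},
                 own_address="198.51.100.1", border_router="198.51.100.254"))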
Each Nimrod box located at a border router must be configured with an EID, the types of agents it supports, the node's service attributes (if the node provides transit service), the node's (temporary) locator, the IP multicast address(es) associated with all Nimrod agents for the given node, and a default route to the border router. Note that because these boxes support the complete Nimrod functionality, they are capable of performing the discovery functions, including locator acquisition. Hence, each node will be able to acquire its own locator to replace the configured temporary locator.

Endpoint Representatives -- Attach a Nimrod box to each LAN containing endpoints (hosts) that should take advantage of Nimrod packet forwarding. These boxes will act as endpoint representatives for the designated hosts. Each Nimrod host on the LAN should be configured with a default route pointing to the endpoint representative box, but the configuration of all other hosts should not be changed. Moreover, if the endpoint representative determines that a particular destination cannot be reached via Nimrod, it sends an ICMP Redirect back to the Nimrod host to route traffic for that destination directly to the preexisting LAN router. Hence, non-Nimrod traffic almost always travels directly to the LAN router.

Each of these Nimrod boxes acting as an endpoint representative must be configured with an EID, the IP multicast address(es) associated with all Nimrod agents within the given node, the IP address of (one of) the LAN router(s), the service requirements of those hosts it serves, and the IP addresses of the hosts on whose behalf it acts, if different hosts should be given differing service requirements. Each endpoint representative will acquire the locators for the hosts on whose behalf it acts. Note that for certain topologies the endpoint representative functionality can be provided by the Nimrod boxes located next to the border routers, and in these situations there would not have to be extra Nimrod boxes located near the hosts.

The above deployment scenario has been designed to provide maximum Nimrod functionality with minimal effort. It requires only a small number of Nimrod boxes per node, minimal configuration on the part of the administrators, and no changes to routers or to those hosts not using Nimrod services, and it has no negative performance impact on traffic to or from non-Nimrod hosts. Non-Nimrod traffic need never be processed by the Nimrod boxes.

3.4.2 Independent Implementation

For those seeking to produce an independent implementation of Nimrod, we recommend a phased development of functionality. The phases, in order of implementation, are as follows:

   1. Packet forwarding procedures.
   2. Path setup protocol.
   3. Map and association database maintenance, including the update and query/response protocols.
   4. Route generation procedures.
   5. Discovery protocols.

This functionality may be implemented across a combination of workstations, routers, and hosts, or completely within any one of them. For the sake of this discussion, we assume that all Nimrod functionality is initially implemented in routers.

Packet Forwarding -- A minimal Nimrod routing system provides Nimrod packet forwarding along preconfigured routes. In this case, only a node's boundary forwarding agents require any Nimrod functionality. Each boundary forwarding agent must be configured (e.g., using SNMP) with the routes to be used for specific traffic sessions.
These routes list all the nodes that the packets pass through from source to destination and can be looked up using a key formed from the source and destination IP addresses contained in the data packets. Also, each of these agents must be configured with the IP addresses for the other boundary forwarding agents in the node and the locators for the neighboring nodes to which those boundary forwarding agents connect. This information is used to locate the boundary forwarding agent corresponding to the next hop along a specified route. Each of these boundary forwarding agents must also be able to execute a portion of the Nimrod path setup protocol and packet forwarding procedures. Initially, only the "datagram mode" packet forwarding need be implemented, provided that the route specification contained in each packet includes all intervening nodes. During path setup, forwarding agents normally perform checks on route consistency (e.g., the requested service is available through the node, the node permits traffic from the given source to use this service) and resource availability (e.g., there is sufficient bandwidth on the next hop), but these checks need not be in place initially.

Path Setup -- Once the datagram mode functionality is in place, the full path setup protocol, including "flow mode" packet forwarding, should be implemented. This includes both path setup and teardown, as well as route consistency and resource checks at boundary forwarding agents. With all of this functionality in place, both datagram and flow mode packet forwarding can be supported.

Database Maintenance -- Both maps and endpoint/locator associations may be configured in boundary forwarding agents, e.g., as a pre-loaded cache, for small numbers of nodes. Map distribution and association query functionality may be implemented to use that configured database information. Eventually, the entire database update and query/response protocols and the reliable transaction protocol should be put in place to reduce the amount of configuration required. Once boundary forwarding agents are able to determine the reachability of boundary forwarding agents in neighboring nodes and within the given node, they can then incorporate this information into their maps and can use the information to determine when an existing path is broken. Note that map construction using information from component nodes can be postponed until there are at least three levels in the clustering hierarchy.

Route Generation -- Once it is possible to obtain maps, route generation algorithms can be implemented. We recommend implementing this functionality in routers attached to LANs whose hosts will be using the Nimrod routing system. These routers will act as route agents and as endpoint representatives, and thus must be configured with traffic service information for the hosts on whose behalf they generate routes.

Nimrod Boxes -- Implementing all Nimrod functionality in routers is feasible when the Nimrod-capable portion of an internetwork consists of a small number of nodes. Eventually, the map, association, and route database maintenance (node representatives, association agents, and route agents) should be implemented in separate Nimrod boxes. To begin with, these boxes can be configured with the IP addresses, EIDs, and locators of the other such boxes with which they will communicate.
Eventually, these configurations will become unmanageable, however, and at that point the agent discovery mechanisms should be implemented. Ultimately, when Nimrod becomes popular, we expect vendors of networking products to implement Nimrod forwarding functionality in routers and Nimrod database functionality in specialized high-performance Nimrod boxes.

A phased implementation minimizes the amount of Nimrod protocol implementation necessary to get portions of Nimrod up and running and ranks these phases according to their relative importance in providing Nimrod functionality. The main disadvantage of this approach is the amount of configured information required to compensate for the missing protocols at certain stages. Each organization that produces a Nimrod implementation will have to decide on the tradeoffs between the approach given above and the associated costs, including development, documentation, training/customer support, and product releases.

4 Full-Scale Deployment

Several Nimrod functions can be provided by modified versions of existing Internet services, such as the Domain Name System and the Dynamic Host Configuration Protocol. For full-scale deployment of Nimrod, one should consider enhancing these services to include the functionality needed by Nimrod rather than implementing separate Nimrod-specific services. This will reduce the amount of implementation required to get a Nimrod routing system up and running.

4.1 Directory Services

Existing directory services, such as those currently provided by the Domain Name System (DNS), can be expanded to handle associations between endpoint names and EIDs and locators. Each fully-qualified endpoint name will be associated with one EID and one or more locators. It may be possible to get the necessary DNS server and resolver changes into the more popular code bases at the same time as the IPv6 additions.

The association between endpoint name and EID is expected to change slowly, if at all, over time. The association between endpoint name and locators is expected to become more dynamic as endpoints become mobile and topology changes result in renumbering. Frequent updating of DNS entries to correspond to the locations of rapidly moving endpoints, if in fact it turns out to be needed, is beyond the capabilities of the current DNS, but might be possible with a future version of DNS.

We note that with the mobility approach currently proposed by the Mobile IP working group of the IETF, each mobile endpoint is associated with a "home" that remains stationary. Hence, only the home needs to track an endpoint's new location; dynamic endpoint locator information need not be updated in the DNS, as the DNS entry would contain only the locator of the home. This approach, however, also implies that some portion of the traffic destined for a mobile endpoint must be sent via its home, often resulting in inefficient forwarding. This inefficiency is a property of the technique of using a home, whether or not Nimrod is used.

4.1.1 Format

We now describe two ways of storing endpoint information in a directory. The first format, in which each endpoint name is stored with its complete EID and locator(s), has the advantage of permitting each endpoint to be individually configured. The second is a formatting optimization that reduces the number of directory records that have to be updated when changing the locator of a node that contains many endpoints. With the second format, an endpoint's locator is split into two parts.
The first part, or prefix, is the locator of a node that contains the endpoint, but not necessarily the lowest-level node containing the endpoint. Instead, it may refer to the node of the managing organization responsible for the endpoint, or a node whose locator is likely to change often. The second part, or suffix, is the remaining portion of the endpoint's locator. By storing the prefixes separately from the suffixes, none of the endpoint records have to be updated when the ancestral node's locator changes; only the single entry containing the prefix needs updating. For example, if an organization's provider prefix(es) are stored separately from the locators of the endpoints, then only the separately stored prefix would need to be updated when a provider changed the organization's prefix, e.g., when changing providers or renumbering to reduce costs after topology changes.

When generating a reply to a locator query, the DNS will provide the prefix (there may be more than one prefix) and the suffix. It could concatenate them or leave the concatenation to the resolver and thus reduce the size of the DNS reply. We note that such techniques to simplify management of the directory can be implemented differently for each directory implementation. The issue of interoperability arises only when forming replies---must the locators be expanded in the replies or can they contain the abbreviated (non-concatenated) form---and when contracting for service from secondary directory servers, i.e., when exchanging information between the primary and secondary DNS servers.

4.2 Host Changes

Although changes to hosts are not required for initial deployment of Nimrod, they will eventually be required for hosts that wish to dynamically change their traffic service requirements or to use the full EID functionality. Static configuration of traffic service requirements in endpoint representatives is not sufficient when service requirements change frequently. Instead, such hosts will need to communicate their service requirements to their endpoint representatives. The IPv4 and IPv6 packet headers currently allow a host to communicate coarse-grained traffic service requirements (e.g., low delay, high throughput), but do not permit communication of more specific services (e.g., bounds on maximum delay, minimum throughput, preferred service provider). For full-scale Nimrod deployment, Nimrod will have a protocol that enables hosts to communicate endpoint names (e.g., DNS names) and traffic service requirements directly to Nimrod agents. Hosts with static service requirements need not implement this protocol, however.

Endpoint representatives will be responsible for querying the DNS (using DNS names for hosts that supply this information and IP addresses for hosts that do not supply DNS names) to obtain the EID and locators for each endpoint, and for querying the route agent(s) for routes. Hosts may also be modified to formulate their own DNS queries and to provide EID and locator information to endpoint representatives. This will eliminate a second DNS query by the endpoint representative. Furthermore, some hosts may even be modified to absorb all of the endpoint representative functionality.

The network protocol stack must be modified to use EIDs when demultiplexing at the transport layer, and not to base that operation on the address. Note that some hosts may support applications that require support for more than a single EID.
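The following sketch illustrates the kind of transport-layer demultiplexing just described, keyed on EIDs rather than on addresses, so that a session survives a change of locator and a host can hold more than one EID. The table layout and field names are assumptions made for illustration; they are not part of the Nimrod specification.

   # Sketch of transport-layer demultiplexing keyed on EIDs instead of
   # IP addresses.  The layout and names are illustrative assumptions.

   connections = {}   # (local_eid, remote_eid, proto, local_port, remote_port) -> handler

   def register(local_eid, remote_eid, proto, local_port, remote_port, handler):
       connections[(local_eid, remote_eid, proto, local_port, remote_port)] = handler

   def demux(pkt):
       """pkt carries EIDs (from the EID option or the flow state), not
       locators; the locator may change without disturbing the session."""
       key = (pkt["dst_eid"], pkt["src_eid"], pkt["proto"],
              pkt["dst_port"], pkt["src_port"])
       return connections.get(key)

   # A host with two EIDs keeps both sessions distinct even if the
   # underlying locators are renumbered.
   register("eid-a", "eid-x", "tcp", 80, 1234, handler="session 1")
   register("eid-b", "eid-x", "tcp", 80, 1234, handler="session 2")
   pkt = {"dst_eid": "eid-a", "src_eid": "eid-x", "proto": "tcp",
          "dst_port": 80, "src_port": 1234}
   print(demux(pkt))   # -> 'session 1'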
Changes in the way that information from, e.g., DNS responses, is passed to the appropriate protocol layers will be needed. This may require changes to the current application programming interfaces (APIs), as they do not support the EID concept. Hosts will need to be able to support multiple locators, i.e., addresses, on a single interface. An interface to the configuration server and endpoint representative will be needed to allow the set of locators to be changed, through addition, removal, replacement, or a change in preference. The locator update process to the host must be done securely, as must any associated DNS update.

4.3 Boot Server Changes

As noted above, full-scale deployment will require extensions to the mechanisms used to load configuration information into a host, e.g., the Dynamic Host Configuration Protocol. This issue will be covered in a later version of this document.

4.4 EID Administration

Once EIDs are in common use, there will need to be a mechanism to allocate them for each endpoint. Since the EID space is to be densely packed, handing out large blocks is undesirable. One way to solve this problem and reduce the amount of human labor required would be to provide EID Servers throughout the internet. They might be email based: send a message containing an endpoint name to the server, and receive a reply containing the endpoint name and the assigned EID, possibly in the form of a DNS record to reduce the chance of human error. There could even be a background process to look up the specified endpoint name and verify that the EID was correctly entered into the system. This process can be made robust and still provide quick turnaround.

5 Security Considerations

Nimrod's main functions are the distribution and manipulation of routing-related information contained within its databases. Corruption of this information can lead to communication failures, some affecting large portions of an internetwork. Hence, all Nimrod routing information must be protected from corruption to the extent practical. In particular, source authentication and information integrity checks should be performed on all Nimrod routing information distributed throughout an internetwork. To the extent possible, Nimrod will take advantage of the security and key management protocols developed by the IETF and other standards bodies. If these security mechanisms (e.g., authentication in IPv6) are not available by the time of initial Nimrod deployment, the deployed version of Nimrod will contain its own selectable authentication and integrity mechanisms (as in IDPR), within the overall framework provided for IPv6.

5.1 Accountability

This section provides motivation for the security services that the Nimrod routing system will need. A more detailed discussion of these points will be presented in a Nimrod Security Architecture document.

5.1.1 Integrity of Routing Information

Recall that the basic routing information flow in Nimrod is for information generated at the lowest levels to be gathered, abstracted, and filtered as it is passed up the clustering hierarchy. Endpoints request routes through their endpoint representative, one or more route agents, and node representatives. Any maps cached by a route agent should contain the credentials that were used to obtain the map. The resulting route is returned to the endpoint, or its representative, and used to establish a path through the various forwarding agents.
Corruption within the network or the information processing components can be detected through the use of integrity services.

5.1.2 Compromised Components

One must consider the threat of a compromised component that either advertises services that are not available or does not advertise services that are available. Detecting either of these situations is very difficult, since similar advertisements could be the result of the aggregation and/or filtering algorithms used within a node, e.g., an "optimistic" or a "pessimistic" algorithm. However, if a route agent is suspicious of an advertisement, it can always ask for maps at lower levels of the hierarchy. The threat then becomes whether a route agent can be denied access to the lower-level node representatives within the hierarchy.

It is difficult to provide mechanisms that prevent false information from being advertised. What Nimrod does is make it clear where any piece of information originated, by having it signed. It provides the mechanisms that allow any endpoint or administration to incorporate previous experience when expressing its preferences to a route agent, so as to encourage or prohibit its traffic from traversing portions of the internetwork.

5.1.3 Interactions with Other Internetwork Subsystems

The Nimrod routing system also interacts with other components of the internet infrastructure. Examples of these interactions are changes to an endpoint's locator(s), e.g., renumbering in response to topology changes, and the resulting updates to both the association database, e.g., the DNS, and the set of valid locators within the endpoint itself. There is an IETF working group investigating mechanisms for secure DNS updates as part of the IPv6 effort that we hope to build upon. These interactions must also be considered when analyzing potential threats to the integrity of the routing system.

5.1.4 Enhanced Service Considerations

The inclusion of policy and quality of service information will require additional security services not present in current routing protocols. When charging for special services becomes more dynamic, it is likely that endpoints will want assurance that rates advertised by providers through the Nimrod routing system cannot be repudiated. Providers might also want assurance that customers of their service meet any restrictions the provider has placed on users of their services, and that endpoints cannot repudiate having sent the packets. A non-repudiation security service can be used to satisfy these requirements.

The issue arises of what an endpoint should place into a data packet so that the multiple providers that handle the packet can be assured that the packet was in fact sent by the endpoint and that the endpoint is authorized to use the services requested. The latter function can be validated at path setup time. Including one object per provider in each packet does not scale well, either in terms of packet size or in terms of the processing required to find the appropriate object for a specific provider. A scalable technique would be to include a key in the setup request that is then used to authenticate each packet sent. One problem with this approach is that providers then need to keep track of the key on a flow-by-flow basis. Whatever mechanism is used would need a way, e.g., a timestamp or sequence number, to detect replay attacks.
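The sketch below illustrates the kind of per-flow packet authentication just outlined: a key established at path setup time covers a sequence number and the packet contents, and a simple monotone sequence check rejects replays. The keyed-digest algorithm (HMAC with MD5 here), the field layout, and the class names are assumptions made purely for illustration; the actual mechanisms would come from the IETF security work referenced above.

   # Sketch of per-flow packet authentication with replay detection.
   # The algorithm and field layout are illustrative assumptions.

   import hmac, hashlib

   class FlowAuthState:
       def __init__(self, flow_id, key):
           self.flow_id = flow_id          # installed during path setup
           self.key = key
           self.last_seq = 0               # highest sequence number accepted

       def sign(self, seq, payload):
           msg = self.flow_id + seq.to_bytes(4, "big") + payload
           return hmac.new(self.key, msg, hashlib.md5).digest()

       def verify(self, seq, payload, tag):
           if seq <= self.last_seq:
               return False                # replayed (or reordered beyond the window)
           if not hmac.compare_digest(self.sign(seq, payload), tag):
               return False                # not sent by the endpoint holding the key
           self.last_seq = seq
           return True

   # A provider verifying selected packets of a flow it set up earlier.
   state = FlowAuthState(flow_id=b"flow-42", key=b"setup-time-key")
   tag = state.sign(1, b"data")
   print(state.verify(1, b"data", tag))    # True
   print(state.verify(1, b"data", tag))    # False -- replay detected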
This endpoint-to-intermediate-system authentication is separate from any end-to-end authentication required by the endpoints. This implies that data packets might have, in the context of IPv6, two authentication headers. The endpoint-to-intermediate-system authentication would be processed only by those providers that require it, and those providers might choose to verify only selected packets. Also, whatever verification scheme is used by the providers should not cause undue packet reordering, as likely candidates for enhanced service are real-time applications. When combined with a non-repudiation service, a provider could then prove, for example, that an endpoint did in fact send the packets for which it was being billed.

5.2 Summary of Nimrod Security Services

   o  All Nimrod protocols use authentication and integrity services.

   o  All information is signed by its originator.

   o  Queries to endpoint representatives, route agents, and node representatives contain authentication information for each agent handling the query.

   o  Advertised services and restrictions on their use may not be repudiated.

   o  Endpoints may not repudiate sending packets that use special services.

6 Utilities

There are several tools that should be developed to demonstrate the features of the Nimrod routing system and to assist with deployment and ongoing administration. They are not part of the core Nimrod protocols and can be developed independently of that effort (although their availability would make the protocol work easier).

The internal representation of a map may not be the same as an external representation such as might be stored in a configuration file. A utility that converts between formats will be useful, both to initialize the maps in a node representative and to save maps for subsequent analysis during debugging and ongoing operations.

A utility, perhaps SNMP based, that collects and displays the forwarding information and paths in forwarding agents will be useful during debugging and ongoing operation.

A utility that can query for, combine, and save maps and display them graphically will help administrators visualize the Nimrod representation of their networks. It can also be used to illustrate to prospective Nimrod users how others have organized their networks and thus would provide models for their own organizations.

Similarly, a utility that accepts service requirements and endpoints and requests candidate routes would be useful. It would be similar to the common traceroute tool that has proven so useful in solving operational problems. It could graphically display the resulting routes, highlighting points where service requirements resulted in failure of a candidate route. It could be enhanced so that the user could guide exploration of the topology, and could even return a desirable route to an organization's route agents for caching and subsequent use.

A graphical tool that lets an organization explore alternative clustering hierarchies and evaluate and observe the results of different clustering and abstraction algorithms would help administrators select the best candidate. It might be possible to make the tool accept modifications from the user, learn the rules that were used, load them into the organization's node representatives, and then apply those rules for subsequent abstraction and filtering operations.
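As a small example of the format-conversion utility mentioned earlier in this section, the sketch below reads a line-oriented external map description into an in-memory form and writes it back out for later analysis. The "node" and "adjacency" line format is invented here purely for illustration; it is not a Nimrod map specification.

   # Sketch of a map format-conversion utility.  The external format
   # shown ("node <locator>", "adjacency <locator> <locator>") is an
   # assumption made only for this example.

   def load_map(lines):
       """Parse an external map description into an in-memory map."""
       nodes, adjacencies = set(), set()
       for line in lines:
           fields = line.split()
           if not fields or fields[0].startswith("#"):
               continue
           if fields[0] == "node":
               nodes.add(fields[1])
           elif fields[0] == "adjacency":
               adjacencies.add((fields[1], fields[2]))
       return {"nodes": nodes, "adjacencies": adjacencies}

   def dump_map(nimrod_map):
       """Write the in-memory map back out, e.g., for later analysis."""
       out = ["node " + n for n in sorted(nimrod_map["nodes"])]
       out += ["adjacency %s %s" % a for a in sorted(nimrod_map["adjacencies"])]
       return "\n".join(out)

   example = [
       "# abstract map of node U.A",
       "node U.A.B",
       "node U.A.C",
       "adjacency U.A.B U.A.C",
   ]
   print(dump_map(load_map(example)))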
Currently existing tools, such as DNS lookup utilities and SNMP managers and agents, will need to be extended to handle the new information useful to Nimrod. The EID Server described in section 4.4 is another utility that will simplify the deployment of Nimrod.

7 Author's Address

   Charles Lynn
   BBN Systems and Technologies
   10 Moulton Street
   Cambridge, MA 02138

   Phone: (617) 873-3367
   Email: CLynn@BBN.Com

References

   [1] I. Castineyra, J. N. Chiappa, and M. Steenstrup. "The Nimrod Routing Architecture". June 1994. Working Draft.

   [2] M. Steenstrup. "A Perspective on Nimrod Functionality". July 1994. Working Draft.

   [3] S. Ramanathan. "Multicast Support for Nimrod: Requirements and Approaches". June 1994. Working Draft.

   [4] S. Ramanathan. "Mobility Support for Nimrod: Requirements and Approaches". June 1994. Working Draft.

   [5] M. Steenstrup. "Inter-Domain Policy Routing Protocol Specification: Version 1". July 1993. RFC 1479.