This repository contains the code implementing Bundle Protocol version 7 (BPv7) and supplementary information for the paper "Bundle Protocol version 7 Implementation with Configurable Faulty Network and Evaluation", published at IEEE WiSEE 2023 in Aveiro, Portugal.
Aidan Casey | Ethan Dickey | Jihun Hwang | Sachit Kothari | Raushan Pandey | Wenbo Xie
This is a lightweight, easy-to-understand framework specifically designed to simulate and test BPv7. It was implemented solely based on RFC 9171 (see the disclaimer below), the IETF standardization document that defines and specifies BPv7.
The src folder is the main folder containing the implementation of the BPv7 architecture. It consists largely of three subfolders: DTCP, BPv7, and Configs.
- The src/DTCP folder contains the code for Disruption-TCP (DTCP), the de facto convergence layer that we created for this project. It is our configurable faulty-network CLA, capable of simulating both expected and unexpected disruptions. Please refer to Section II-A [Implementation - DTCP] of our paper for details.
- The main BPv7 code is in the src/BPv7 folder; see RFC 9171 for more details.
- Various parameters (e.g., bundle lifetime, the sending delay range between bundles, etc.) can be set in the src/Configs folder. Moreover, simulation scenarios can be found, and new ones added, in the src/Configs/resources folder.
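As an illustration of the kinds of knobs involved, here is a hypothetical configuration fragment in Python; the parameter names below are invented for this sketch and are not the repository's actual keys.

```python
# Hypothetical configuration sketch -- these field names are invented for
# illustration and are NOT the actual keys used in src/Configs.
sim_config = {
    "bundle_lifetime_ms": 30_000,       # how long a bundle stays valid
    "send_delay_range_ms": (100, 500),  # random gap between consecutive sends
}

low, high = sim_config["send_delay_range_ms"]
valid = 0 <= low <= high and sim_config["bundle_lifetime_ms"] > 0
```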
The mininet folder contains the files for network configurations (e.g., changing the topology of networks).
- The mininet/Makefile allows users to add more hosts.
- The mininet/cfg/netcfg.json file configures the switches/gateways that connect the hosts.
Disclaimer: Due to the use of JSON instead of CBOR, this implementation is not fully RFC 9171-compliant. JSON was chosen over CBOR for understandability and ease of implementation and analysis, as one of our goals was to create an easy-to-understand testbed for BPv7.
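To make the trade-off concrete, here is a minimal sketch of serializing a bundle-like structure as JSON; the field names are hypothetical, not the testbed's actual schema. RFC 9171 mandates CBOR, a compact binary encoding; JSON is larger on the wire but human-readable in logs.

```python
import json

# Illustrative only: field names are hypothetical, not this repository's schema.
bundle = {
    "primary": {"version": 7, "dest": "ipn:2.1", "source": "ipn:1.1",
                "creation_time": 1690000000, "lifetime_ms": 30000},
    "payload": "hello over DTN",
}

wire = json.dumps(bundle).encode("utf-8")  # what would cross the TCP connection
decoded = json.loads(wire)                 # trivially inspectable, unlike CBOR
```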
- Mininet: Software-defined networking (SDN) based network emulator.
- Open Network Operating System (ONOS): Software providing the control plane for an SDN. This is the operating system for the SDN controller inside our Mininet.
The video tutorial (demo video) is available at this link: https://youtu.be/aika4nRm7wM.
- Clone this repository.
- Open four separate terminals.
- In the first terminal, start ONOS: `make controller`
- In the second terminal, start Mininet: `make mininet`
- In the third terminal, start the ONOS command-line interface (CLI) with `make cli` (the password is `rocks`), and activate the routing application `fwd` using this command: `app activate fwd`
- In the fourth terminal, run the ONOS netcfg script: `make netcfg`
- You can try to ping hosts from one another to see if they respond correctly. For example, in the default setting (see the Makefile in the `mininet` folder), there are three hosts: `h1`, `h2`, and `h3`. One can test whether `h2` is reachable from `h1` using the following command: `h1 ping h2`
- One can modify the config files (the Makefile for Mininet and netcfg for ONOS) as needed to simulate more complicated network topologies. The default setup is three hosts (`h1`, `h2`, and `h3`) connected to switch `s1`, which acts as a gateway. However, our DTCP forces a topology of `h1 -- h3 -- h2`, essentially making `h3` act as a forwarding node.
For more details, please refer to our demo video https://youtu.be/aika4nRm7wM.
This is a continuation of Section II-B [Implementation - Mininet] of our paper. Here is how a transmission from Node A to Node B through Node F, the 'Forwarder', would look (also available here).
Explanations for each step are as follows:
- (1) Sending the message: `send(a)`.
- (2) The User API sends the received message `a` to the BPA: `send(a)`.
- (3) The BPA stores the received message `a`, with its ID `a_id=4` as a key, inside the send buffer `send_buffer` if there is space available for it.
- (4) The BPA's `send_buffer` returns the key of `a`, which is `4`.
- (5) The sender thread `SenderThread` is spawned by the BPA. The thread calls the `next_bundle` function to retrieve a message that needs to be sent.
- (6) `send_buffer` returns `a` to `SenderThread`.
- (7) `SenderThread` makes `send_buffer` mark `a` as sent.
- (8) `SenderThread` inquires DTCP via the DTCP API whether the next node, `Node F`, is currently reachable: `canReach(nodeID)`. If the DTCP API returns `No`, run (8) again; if `Yes`, move on to (9).
- (8.5) The DTCP API checks whether `Node F` is reachable or not, then responds back to `SenderThread`.
- (9) Once the DTCP API sends `Yes` in (8), process `a` into a bundle and send it: `Send(Bundle(a))`.
- (10) ANSF: Process `Send(Bundle(a))` into a network-serializable format and send it over TCP.
- (11) Transmit it to the next node via TCP.
- (12-13) ANSF.
- (14) Notify `ListenerThread` that a new packet/bundle has arrived: `YouGotAMessage(Bundle(a))`.
- (15) Send it to another function in the BPA for decoding.
- (16) Decode the bundle and realize it is destined for another node. Send a delivery-confirmation admin record if requested.
- (17) Message `a` gets stored in the sending buffer `SendBuffer` with some key `x` of `Node F`'s choice.
- (18) `SenderThread` asks for the next message to be sent: `getNextMsg`.
- (19) `SenderThread` receives the message `a` from `sendBuffer`.
- (20-29) Same as (8)-(17).
- (30) The application of `Node B` requests a certain number of bytes `n`.
- (31) Read data from the receiving buffer `receiveBuffer`: `a[n] = getPayload(n)`.
- (32) `receiveBuffer` returns `a[n]`.
- (33) Store `a[n]` in a local buffer, just in case.
- (34) Return `a[n]` to the application.
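The sender-side steps (5)-(9) can be sketched as follows. The names echo the diagram (`send_buffer`, `canReach`, `Send(Bundle(a))`), but the bodies are simplified stand-ins, not the repository's actual code; the reachability check uses simulated time instead of real polling.

```python
import queue

class DTCPStub:
    """Stands in for the DTCP API of step (8): reachability flips over time."""
    def __init__(self, down_until):
        self.down_until = down_until      # node unreachable before this tick
    def can_reach(self, node_id, now):
        return now >= self.down_until

def sender_loop(send_buffer, dtcp, next_node, sent_log):
    clock = 0
    while True:
        try:
            msg = send_buffer.get_nowait()   # (5)-(6): next_bundle / return a
        except queue.Empty:
            break
        # (7) mark as sent, then (8): poll DTCP until the next node is reachable
        while not dtcp.can_reach(next_node, clock):
            clock += 1                       # simulated time instead of sleeping
        sent_log.append(("Bundle", msg))     # (9): Send(Bundle(a))

buf = queue.Queue()
buf.put("a")
log = []
sender_loop(buf, DTCPStub(down_until=3), "Node F", log)
```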
This is a continuation of Section III [Analysis] of our paper. Here is the list of figures that appear in our paper:
- Figure 1. Scenario 100, 101, 111: Delay from application layer to application layer between sender and receiver, high density tests without the 110 test (which has high density of packets, high packet size, and low density of expected downs). Packets stopped being sent from the application layer at t = 50s. This is also Figure 1 of our paper.
- Figure 2. Scenario 100, 101, 110, 111: Delay from application layer to application layer between sender and receiver, high density tests. Same as Figure 1 with the 110 test added. Packets stopped being sent from the application layer at t = 50s. This is also Figure 2 of our paper.
- Figure 3. Scenario 000, 001, 010, 011: Delay between application layers of sender and receiver, low density tests. Packets stopped sending from the application layer at t = 50s. This is also Figure 3 of our paper.
The following figures are also graphs of the delay from application layer to application layer between sender and receiver (again, packets stopped being sent from the application layer at t = 50s), but for each individual scenario.
- Figure 4. Scenario 000: Low density, small packet sizes, low expected disruption density.
- Figure 5. Scenario 001: Low density, small packet sizes, high expected disruption density.
- Figure 6. Scenario 010: Low density, large packet sizes, low expected disruption density.
- Figure 7. Scenario 011: Low density, large packet sizes, high expected disruption density.
- Figure 8. Scenario 100: High density, small packet sizes, low expected disruption density.
- Figure 9. Scenario 101: High density, small packet sizes, high expected disruption density.
- Figure 10. Scenario 110: High density, large packet sizes, low expected disruption density.
- Figure 11. Scenario 111: High density, large packet sizes, high expected disruption density.
Figures 1-3 are combinations of Figures 4-11. In particular,
- For Figure 1, see Figure 8, Figure 9, and Figure 11.
- For Figure 2, see Figure 8, Figure 9, Figure 10, and Figure 11.
- For Figure 3, see Figure 4, Figure 5, Figure 6, and Figure 7.
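The three-digit scenario codes above can be read off mechanically: the digits encode, in order, traffic density, packet size, and expected-disruption density (0 = low/small, 1 = high/large).

```python
# Decode a scenario code like "110" into its three test dimensions,
# matching the figure list above.
def decode_scenario(code):
    density, size, disruption = code
    return ("high" if density == "1" else "low",       # traffic density
            "large" if size == "1" else "small",       # packet size
            "high" if disruption == "1" else "low")    # expected disruptions

decode_scenario("110")  # -> ("high", "large", "low"), i.e. Figure 10
```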
A more accurate figure for Figure 10 (Scenario 110) can be found below (or click here):
This plot represents a case of "BPA thrashing": the queue on the forwarding host grew so large that by the time a bundle reached the front of the queue, it had already expired, so the sending host had to extend its lifetime and resend it. Recall that packets stopped being sent from the application layer at t = 50s, and that this thrashing occurred under what we consider normal network loads. This suggests that even a small-scale DTN may experience performance issues under network traffic that is considered normal today.
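A toy queueing sketch (not the paper's simulation; all numbers are made up) shows how this thrashing condition arises: once the forwarder drains its queue more slowly than bundles arrive, queueing delay eventually exceeds the bundle lifetime, and every later bundle expires in the queue.

```python
# Toy FIFO model of the forwarding host. Values are hypothetical.
LIFETIME = 10          # bundle lifetime (time units)
SERVICE_TIME = 3       # forwarder takes 3 units to send one bundle
ARRIVAL_GAP = 1        # a new bundle arrives every unit -> queue grows

expired = 0
dispatch = 0                               # time the forwarder becomes free
for i in range(20):                        # 20 bundles arrive at t = 0, 1, 2, ...
    arrival = i * ARRIVAL_GAP
    dispatch = max(dispatch, arrival)      # wait until the forwarder is free
    if dispatch - arrival > LIFETIME:      # queued longer than its lifetime:
        expired += 1                       # expires in the queue -> must resend
    dispatch += SERVICE_TIME
```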
Similarly, here is a more accurate version of Figure 11 (Scenario 111) (or click here)
and this updates Figure 1 (Scenario 100, 101, 111) as follows (or click here):
For more simulation results, please see mininet/5s_sims_logs_and_graphs. This folder contains the results of the experiments in which packets stopped being sent from the application layer at t = 5s.
This is the continuation of Section IV-C [Architectural Improvements - Missing Critical Features] and Section II-C-1 [Implementation - Configuration - Routing and Name Lookup] of our paper. These points were omitted from our paper because they are deficiencies of the current BPv7 architecture concerning potential deployment issues, which are closer to implementation details than to flaws in RFC 4838/5050/9171.
Disclaimer: As we raised in Section I-A [Introduction - Problem] of our paper, it was unclear (based on the previous implementations available online) whether the "disruption tolerance" and other required services of DTNs are attributable to BP alone, or whether additional protocols/programs are needed to fulfill every requirement of a DTN. Per RFC 4838, a node that implements the bundle layer is called a DTN node, and currently (as of the date our paper was published) the only protocols available for the bundle layer are BP and BPSec (BP Security). Section IV-B [Architectural Improvements - Missing Specifications] and Section IV-C [Architectural Improvements - Missing Critical Features] of our paper claim that BPv7 by itself, solely as defined in RFC 9171, could be insufficient to satisfy the requirements of a DTN as stated in RFC 4838 (hence, "missing" specifications/critical features). They DO NOT undermine the validity of BPv7 as a legitimate network protocol or subvert the entire RFC 9171 document.
As described in Section 8 of RFC 9171, BPv7 makes use of absolute timestamps in many places and includes provisions for nodes with inaccurate clocks. However, while it states that nodes may be unaware that their clocks are inaccurate and may exhibit unexpected behavior, it does not say how to synchronize clocks within a DTN, or how nodes can learn that their clocks are inaccurate. This is a major potential flaw that needs to be addressed in the future. Assuming that a network --- especially a (potentially) large, unstable network with prevailing disconnectivity and asymmetric data rates like a DTN --- is always time-synchronized is a huge, perhaps unrealistic, assumption.
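A minimal sketch of why this matters (all values made up): expiry in BPv7 is judged by comparing the creation timestamp plus lifetime against the local clock, so a node whose clock runs fast will wrongly discard bundles that are still valid, and the protocol gives it no way to find out.

```python
# Expiry check in the style of RFC 9171: a bundle is expired once the local
# clock passes creation timestamp + lifetime. Values below are hypothetical.
def is_expired(creation_ts, lifetime, local_clock):
    return local_clock > creation_ts + lifetime

creation, lifetime = 1000, 60     # bundle truly valid until t = 1060
true_now = 1030                   # 30 units of life actually remaining
skewed_now = true_now + 50        # this node's clock runs 50 units fast

verdict_accurate = is_expired(creation, lifetime, true_now)    # correct: alive
verdict_skewed = is_expired(creation, lifetime, skewed_now)    # wrong: discarded
```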
Information about routing and forwarding is provided in Sections 3.8 and 4.3 of RFC 4838, the first RFC describing the basic architecture of DTN. However, it provides only rough, high-level intuition on how routing in a DTN can be modeled mathematically, and there have been no updates or new versions since RFC 4838 appeared. Although RFC 9171 and RFC 5050 are strictly about a specific protocol, given that they assume the existence of a convergence-layer protocol to handle node-ID name resolution, some practical details about routing should at least be mentioned and described.



