Prototyping TSN-Ready DDS Networks
A convenient and lightweight method of modeling a DDS network for use with TSN.
Introduction
DDS™ (Data Distribution Service) and TSN (Time-Sensitive Networking) are both used in the field of real-time communication and data distribution, but they serve different purposes and operate at different layers of the network stack. DDS is a data-centric middleware standard designed for real-time systems, while TSN is a set of standards developed to provide deterministic real-time communication over standard Ethernet networks. Working together, these two complementary communication standards are extremely well suited to mission-critical, deterministic applications such as Autonomous Driving, Surgical Robotics, and Nuclear Power Generation.
-
What this Example Does
The intention is to simulate a distributed application using RTI Connext in which the user data flowing between the DataWriters and DataReaders travels through a time-critical stream, while the remaining meta-traffic, such as discovery, travels through a non-time-critical stream.
When designing a TSN-capable DDS system, a simple three-step plan is recommended:
1) DDS Architecture: Plan the distribution of Topics, DataReaders and DataWriters.
2) Physical / Virtual Network Architecture: Determine how the system is deployed on physical hardware by identifying hosts and physical or virtual network interfaces that are to be used to send or receive DDS samples.
3) TSN Data Flows: Determine which topics or logical channels need to be deterministic. At this point, give careful consideration to the use of unbounded types and the number of instances per topic in the data model.
Each TSN flow can be modeled as a one-to-one TSN to VLAN mapping, defined by the flow characteristics, such as source and destination addressing and ports. A single flow may have a multicast address as a destination, which greatly reduces the complexity as one-to-many communication reduces the number of individual, point-to-point VLANs required.
Note: TSN does not inherently require VLANs to work, but VLANs can be used as part of a TSN implementation for network segmentation and prioritization of traffic, helping to isolate TSN traffic from other non-real-time traffic on the network.
-
Building the Example
To clone the GitHub repository containing this Case + Code example, run git clone as follows to download both the repository and its submodule dependencies:
git clone --recurse-submodules https://github.com/rticommunity/rticonnextdds-usecases-tsn-sim.git
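If the repository was already cloned without that flag, the submodules can still be fetched afterwards:
cd rticonnextdds-usecases-tsn-sim
git submodule update --init --recursive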
Build Connext Docker Image
Once the initial outline for the system's network architecture is defined, the next stage is to create a network prototype. One of the ways this can be achieved is by using the capabilities built into Docker.
The initial step in this Case + Code is to configure a Docker image, which will serve as a node within the application architecture. During the image build process, the applications associated with this Case + Code will be automatically compiled and included inside the Docker image. These applications will be accessible in the resulting image at the /root/app/ directory.
There is a Dockerfile in the docker subdirectory. To create the Docker image, adjust the line in the Dockerfile that reads ENV TZ=Europe/Madrid to set the correct timezone, then run the following command to build the Connext image for the 7.3.0 release. Note that by setting the RTI_LICENSE_AGREEMENT_ACCEPTED argument to "accepted" you are accepting the RTI license agreement:
docker build -t connext:7.3.0 -f docker/Dockerfile --build-arg RTI_LICENSE_AGREEMENT_ACCEPTED=accepted --build-arg CONNEXT_VERSION=7.3.0 .
If you want to target a different version of Connext, set the following argument to match your requirements:
- CONNEXT_VERSION: supported values are 7.3.0 and 6.1.2
Here's an example of what that would look like for a 6.1.2 release:
docker build -t connext:6.1.2 -f docker/Dockerfile --build-arg RTI_LICENSE_AGREEMENT_ACCEPTED=accepted --build-arg CONNEXT_VERSION=6.1.2 .
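As an optional sanity check, you can confirm that the image was tagged and that the applications were compiled into /root/app (this assumes the image allows its command to be overridden, as the docker run commands later in this example do):
docker images connext
docker run --rm connext:7.3.0 ls /root/app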
Networks
The Docker images will use bridged networking. For that purpose, we are going to create three different networks: stdnet, used for discovery and meta-traffic, and two networks, tsnnet1 and tsnnet2, added to simulate VLANs. To create these Docker networks, run:
docker network create --subnet=10.1.0.0/24 stdnet
docker network create --subnet=10.2.1.0/24 tsnnet1
docker network create --subnet=10.2.2.0/24 tsnnet2
Image 4: Docker bridged networking
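To optionally confirm that the three networks exist with the expected subnets, run:
docker network ls | grep -E 'stdnet|tsnnet'
docker network inspect -f '{{.Name}}: {{(index .IPAM.Config 0).Subnet}}' stdnet tsnnet1 tsnnet2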
Application configuration
The application will consist of three nodes based on the ShapeTypeExtended type used by ShapesDemo. Orange samples will represent video command data, and Cyan samples will represent effector command data. The samples differ in color to permit easy visualization in Wireshark. Below is a summary of the three applications and their network configurations.
Surgeon Console
discovery/meta-traffic: 10.1.0.11
Console/Effector VLAN (tsnnet1): 10.2.1.11
Console/Video VLAN (tsnnet2): 10.2.2.11
The "Surgeon Console" has two DataWriters, one to send commands to the "Video Server" (Orange) and one to send commands to the "Effector Server" (Cyan).
Video Server
discovery/meta-traffic: 10.1.0.12
Console/Video VLAN (tsnnet2): 10.2.2.12
The DataReader in the "Video Server" will receive samples via the tsnnet2 network.
Effector Server
discovery/meta-traffic: 10.1.0.13
Console/Effector VLAN (tsnnet1): 10.2.1.13
The DataReader in the "Effector Server" will receive samples via the tsnnet1 network.
Now that a virtual network of application nodes exists, the next logical step is to exercise the flows. Simple applications created from the data models using Connext tools can be quickly customized to publish and subscribe samples at representative volume, variety, and frequency.
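For instance, rtiddsgen can generate skeleton publisher and subscriber applications directly from an IDL data model. The command below is only an illustrative sketch; the IDL file name and target architecture string are assumptions to be adjusted for your data model and Connext installation:
rtiddsgen -language C++11 -example x64Linux4gcc7.3.0 ShapeType.idl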
-
Running the Example
Starting the Docker Containers
The default configuration used for the development of this simulation is to share the display with an X server to allow ShapesDemo to be executed from within the container and rendered on the host's display.
If this isn't required, remove the -e DISPLAY, -v $XAUTHORITY:/root/.Xauthority and -v /tmp/.X11-unix:/tmp/.X11-unix parameters from the provided docker run commands.
A valid RTI Connext license is required to execute Connext within the containers; a fully functional Connext evaluation license can be downloaded from the RTI website. To set the required RTI_LICENSE_FILE environment variable, run the following in the terminal from which you start the Docker containers:
export RTI_LICENSE_FILE=/path/to/rti_license.dat
Surgeon Console
In a terminal window run:
docker run --rm -it -e DISPLAY --privileged --network stdnet --ip 10.1.0.11 --hostname surgeon_console --name surgeon_console -v $XAUTHORITY:/root/.Xauthority -v /tmp/.X11-unix:/tmp/.X11-unix -v $RTI_LICENSE_FILE:/root/rti_license.dat connext:7.3.0 bash
Here’s what those parameters mean:
run: run container
--rm: Automatically remove the container when it exits
-i: Interactive
-t: Allocate pseudo-TTY
-e: Set environment variables (DISPLAY)
--privileged: Give the container extended privileges (all capabilities); used here primarily so multicast can be enabled/disabled on the container's interfaces
--network: Define which docker network to use
--ip: Define the IP address to use
--hostname: Define a hostname on the network
--name: Define a name, allowing us to address the container easily
-v: Bind mount a volume (used here for the X authority file, the X11 Unix socket directory, and the license file)
If you want to disable multicast on a docker network interface run:
ifconfig <interface> -multicast
And to enable it again:
ifconfig <interface> multicast
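If ifconfig is not available in the container image, the equivalent iproute2 commands are:
ip link set dev <interface> multicast off
ip link set dev <interface> multicast on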
In a separate terminal window run the following commands to connect the "Surgeon Console" container to the tsnnet1 and tsnnet2 networks:
docker network connect tsnnet1 --ip 10.2.1.11 surgeon_console
docker network connect tsnnet2 --ip 10.2.2.11 surgeon_console
Run ip a in the "Surgeon Console" terminal to verify the network configuration:
root@surgeon_console:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
108: eth0@if109: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.1.0.11/24 brd 10.1.0.255 scope global eth0
valid_lft forever preferred_lft forever
110: eth1@if111: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:c0:a8:01:0b brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.2.1.11/24 brd 10.2.1.255 scope global eth1
valid_lft forever preferred_lft forever
112: eth2@if113: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:c0:a8:02:0b brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.2.2.11/24 brd 10.2.2.255 scope global eth2
valid_lft forever preferred_lft forever
Video Server
In a new terminal window run:
docker run --rm -it -e DISPLAY --privileged --network stdnet --ip 10.1.0.12 --hostname video_server --name video_server -v $XAUTHORITY:/root/.Xauthority -v /tmp/.X11-unix:/tmp/.X11-unix -v $RTI_LICENSE_FILE:/root/rti_license.dat connext:7.3.0 bash
In a separate terminal window run the following command to connect the "Video Server" container to the tsnnet2 network:
docker network connect tsnnet2 --ip 10.2.2.12 video_server
Run ip a in the "Video Server" terminal to verify the network configuration:
root@video_server:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
114: eth0@if115: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:11:00:04 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.1.0.12/24 brd 10.1.0.255 scope global eth0
valid_lft forever preferred_lft forever
116: eth1@if117: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:c0:a8:02:0c brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.2.2.12/24 brd 10.2.2.255 scope global eth1
valid_lft forever preferred_lft forever
Effector Server
In a new terminal window run:
docker run --rm -it -e DISPLAY --privileged --network stdnet --ip 10.1.0.13 --hostname effector_server --name effector_server -v $XAUTHORITY:/root/.Xauthority -v /tmp/.X11-unix:/tmp/.X11-unix -v $RTI_LICENSE_FILE:/root/rti_license.dat connext:7.3.0 bash
In a separate terminal window run the following command to connect the "Effector Server" container to the tsnnet1 network:
docker network connect tsnnet1 --ip 10.2.1.13 effector_server
Run ip a in the "Effector Server" terminal to verify the network configuration:
root@effector_server:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
118: eth0@if119: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:11:00:05 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.1.0.13/24 brd 10.1.0.255 scope global eth0
valid_lft forever preferred_lft forever
120: eth1@if121: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:c0:a8:01:0d brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.2.1.13/24 brd 10.2.1.255 scope global eth1
valid_lft forever preferred_lft forever
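With all three containers attached to their networks, an optional sanity check from the "Surgeon Console" container (assuming ping is available in the image) verifies that both simulated VLANs are reachable:
ping -c 3 10.2.1.13   # "Effector Server" via tsnnet1
ping -c 3 10.2.2.12   # "Video Server" via tsnnet2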
Usage
The three applications have the same usage:
-d, --domain <int> Domain ID this application will
subscribe in.
Default: 0
-q, --qos_file <str> XML file containing QoS profiles
for the application.
-s, --sample_count <int> Number of samples to receive before
cleanly shutting down.
Default: infinite
-v, --verbosity <int> How much debugging output to show.
Range: 0-3
Default: 1
Surgeon Console:
/root/app/surgeon_console -q /root/app/surgeon_qos_profiles.xml
Video Server:
/root/app/video_server -q /root/app/video_qos_profiles.xml
Effector Server:
/root/app/effector_server -q /root/app/effector_qos_profiles.xml
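The optional flags can be combined with these commands. For example (the values here are purely illustrative), to run the Video Server with maximum verbosity and shut it down cleanly after 100 samples:
/root/app/video_server -q /root/app/video_qos_profiles.xml -v 3 -s 100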
Quality of Service
The three applications have distinct QoS profiles, but they all follow a common theme. The base tsn_profile sets up and configures UDP transports, which are given the aliases stdnet, tsnnet1 and tsnnet2. All three are defined only in the "Surgeon Console" QoS; the other applications define only those transports that are used in their specific cases.
The transport_builtin QoS is set to MASK_NONE, meaning no built-in transports are enabled, which ensures that traffic only goes through the registered transports.
Discovery is configured to use the transport defined with the alias stdnet; this would usually be the default Docker bridge network, but for consistency we have defined a specific bridge network. The initial peers are configured explicitly, since in a TSN scenario using VLANs the peer addresses would be expected to be known in advance, and accept_unknown_peers is set to false.
Surgeon Console
In the "Surgeon Console" QoS, two additional profiles are defined, both deriving from tsn_profile. The first, video_profile configures the DataWriter for the video_control topic to use the transport with the tsnnet2 alias, while the second effector_profile configures the DataWriter for the effector_control topic to use the transport with the tsnnet1 alias.
Video Server
In the "Video Server" QoS, the discovery configuration differs in that only the "Surgeon Console" node is defined in the peer list. The transport configuration also differs, as the transport with the alias stdnet is using the address docker assigned to the default bridge network (10.1.0.12), and the tsnnet1 transport is absent as it is not used by the "Video Server".
In the video_profile QoS profile, the DataReader is configured to use the tsnnet2 transport, and additionally the unicast endpoint is defined as using the tsnnet2 transport and the specific receive port (2345). (This port can be anything as long as it doesn't conflict with those defined for built-in transports)
Effector Server
In the "Effector Server" QoS, the discovery configuration again differs from that of the "Surgeon Console" in the discovery configuration. The transport configuration again differs, as the transport with the alias stdnet is using the address docker assigned to the default bridge network (10.1.0.13), and the tsnnet2 transport is absent as it is not used by the "Effector Server".
In the effector_profile QoS profile, the DataReader is configured to use the tsnnet1 transport, and additionally the unicast endpoint is defined as using the tsnnet1 transport and the specific receive port (1234).
Image 5: Entities assigned to transports via Quality of Service
Within each application, for each DataReader that will communicate via a specific VLAN, the Quality of Service will define which transport to bind to and a port. The transport configuration determines the destination address, either unicast or multicast, while the port is configured by the DataReader Quality of Service. This determines which VLAN the DataReader will use for operation.
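One way to confirm this binding is to list the bound UDP sockets inside a container (assuming the ss utility from iproute2 is present in the image). For example, in the "Video Server" container the DataReader's unicast endpoint should show up on port 2345:
ss -ulpn | grep 2345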
Running the applications in conjunction with a network monitoring tool such as Wireshark should show that user data between the DataWriters and DataReaders configured to use specific VLAN flows, or routes, is indeed conveyed along the defined routes. It should also be evident that non-VLAN traffic, including discovery and meta-traffic, is handled by the stdnet network.
Image 6: Diagram of the final prototype
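Traffic can also be captured from the host rather than from inside the containers. The host-side bridge interface backing a user-defined Docker network is conventionally named br- followed by the first 12 characters of the network ID (a Docker convention rather than a guarantee), so a capture on tsnnet2 could look like:
BRIDGE="br-$(docker network inspect -f '{{.Id}}' tsnnet2 | cut -c1-12)"
sudo tcpdump -i "$BRIDGE" -n udp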
-
Summary
Both DDS and TSN standards provide the necessary building blocks for complex, advanced communications through an approach combining complementary technologies at the application software and network hardware levels.
Combined with TSN, RTI Connext optimizes system performance by prioritizing data flow throughout the system, based on user-defined QoS criteria within a modern architectural framework.
Docker containers enable the rapid prototyping of DDS dataflows ready for deployment in a TSN-enabled network. Even if TSN isn't used, applying the Connext DDS configuration described here over VLANs allows for some network segmentation and traffic prioritization, depending on the network hardware's capabilities.
Note
RTI is not able to provide specific advice on TSN architecture, configuration or usage. To get the best possible support, RTI suggests contacting your TSN vendor. RTI partners with several TSN vendors; see our Partners page for details.
-
Next Steps
This example can also be followed via the rticommunity GitHub repository.
Post questions on the RTI Community Forum.
Check out more of the Connext product suite and learn how Connext can help you build your distributed system. If you haven't already, download the free trial.