Many of our customers evaluate our products after having built and maintained their own networking middleware. Their middleware efforts typically started as an interesting side project. Over time, as more and more applications needed to communicate, and different types of protocols and applications joined their network, maintaining this software became cumbersome and expensive.
These customers appreciate the technical challenges of building a scalable middleware stack. Many have run into limitations, such as the number of nodes that can be reliably discovered in a given timeframe, the CPU it takes to deliver specific data to a subset of subscribers, or the memory consumed to manage a large set of interested subscribers. When they are introduced to RTI Connext DDS, they wonder: ‘Where’s the magic?’
Let’s be clear: there is no magical new way to network. Underneath it all, we still make a socket call to the networking stack or copy data into a shared memory buffer. There is no one thing, such as multicast, which makes the volume go to eleven. In fact, RTI Connext can run in environments with and without support for multicast.
Many things contribute to a scalable middleware product, starting with the internal data model: how we keep track of the data (“topics”), the publishers and subscribers, and their specific destinations. We spend a lot of time refining that model to optimize how we look up destination information, as well as what metadata (such as filter information) can be cached and how. Specific features, such as batching and our implementation of the OMG RTPS reliability protocol, also improve scalability.
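To see why a feature like batching matters, here is a toy sketch of the idea: many small samples are coalesced into one network-sized send, amortizing per-packet overhead. The class, names, and numbers are invented for illustration; this is not RTI Connext’s implementation.

```python
# Toy sketch: batching amortizes per-packet overhead by coalescing
# small samples into one larger send. Illustrative only.

class BatchingWriter:
    """Collects small samples and flushes them as one 'network send'."""

    def __init__(self, max_batch_bytes=1024):
        self.max_batch_bytes = max_batch_bytes
        self.pending = []
        self.pending_bytes = 0
        self.sends = []  # each entry stands in for one socket send() call

    def write(self, sample: bytes):
        # Flush first if this sample would overflow the current batch.
        if self.pending_bytes + len(sample) > self.max_batch_bytes:
            self.flush()
        self.pending.append(sample)
        self.pending_bytes += len(sample)

    def flush(self):
        if self.pending:
            self.sends.append(b"".join(self.pending))
            self.pending = []
            self.pending_bytes = 0

writer = BatchingWriter(max_batch_bytes=1024)
for i in range(100):
    writer.write(b"x" * 100)   # 100 small samples of 100 bytes each
writer.flush()
print(len(writer.sends))       # 10 sends instead of 100
```

One hundred samples leave the writer in ten coalesced sends rather than one hundred individual ones; the fewer trips through the networking stack, the more samples per second the same CPU can deliver.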
But still, we get a lot of questions about using multicast, and whether we work without it, so allow me to clarify.
Pluggable Transport Architecture
Both RTI Connext DDS and RTI Connext Micro, our small-footprint product for resource-constrained or certifiable applications, provide a pluggable transport architecture. The middleware can be configured to use multiple transports to deliver data. Out of the box, RTI Connext DDS is configured to use UDPv4 and shared memory. Users can enable the built-in UDPv6 transport or install extension transports. RTI provides a variety of other transports, including TCP, secure WAN, and transports optimized for limited-bandwidth environments. Customers can even develop their own transports. We have worked with customers on serial, InfiniBand, PCIe, and StarFabric transports. These transports can be selectively enabled per destination, and some, such as a compression transport, can be combined (stacked).
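The essence of a pluggable transport architecture is that the middleware core is written against an abstract transport contract rather than a concrete socket. The sketch below shows the shape of such a contract; the class and method names are invented for illustration and are not the RTI Connext transport-plugin API.

```python
# Sketch of a pluggable transport interface. Names are hypothetical;
# this is not the RTI Connext transport-plugin API.

from abc import ABC, abstractmethod

class Transport(ABC):
    """Minimal contract a transport plugin might satisfy."""

    @abstractmethod
    def send(self, destination: str, payload: bytes) -> None: ...

    @abstractmethod
    def receive(self, address: str) -> bytes: ...

class InMemoryTransport(Transport):
    """A trivial 'shared memory'-style transport for local use."""

    def __init__(self):
        self.mailboxes = {}

    def send(self, destination, payload):
        self.mailboxes.setdefault(destination, []).append(payload)

    def receive(self, address):
        return self.mailboxes.get(address, []).pop(0)

# The middleware core is written against Transport and can be handed
# any implementation (UDP, TCP, serial, ...) at configuration time.
transport = InMemoryTransport()
transport.send("participant-A", b"hello")
print(transport.receive("participant-A"))  # b'hello'
```

Because the core only sees the abstract interface, swapping UDP for serial or stacking a compression layer on top becomes a configuration choice rather than a code change.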
When using the UDP transports, the middleware can operate using unicast and/or multicast.
Multicast for Simple Discovery
Before exchanging data, the discovery phase lets applications learn about the presence of other applications (the participant discovery phase) and the topics each one publishes or wants to subscribe to (the endpoint discovery phase). The participant discovery phase requires a little bootstrapping help: you need to know where to look for participants. By default, we look for participants using a well-known, pre-configured multicast address and port. This provides a great out-of-the-box experience. If multicast is unavailable, you can configure the middleware to check whether specific applications are available by configuring the initial peer list. In a UDP/IP network, this is simply a list of IP addresses and ports. You can use both methods if you like. Additional discovery mechanisms are available, including statically defined peers and rendezvous server-style discovery.
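For the curious, the “well-known” part of the default discovery address is not arbitrary: the OMG RTPS specification defines a port-mapping formula from the domain ID, using the default constants PB=7400, DG=250, and d0=0 (all of which are configurable in RTI Connext). A small sketch of that computation, alongside the unicast alternative:

```python
# Well-known participant-discovery locator per the OMG RTPS defaults:
# port = PB + DG * domainId + d0, with PB=7400, DG=250, d0=0.
# These constants are configurable in RTI Connext.

PB, DG, D0 = 7400, 250, 0
SPDP_MULTICAST_ADDR = "239.255.0.1"   # RTPS default discovery multicast group

def discovery_multicast_port(domain_id: int) -> int:
    """Port where participants of a given domain announce themselves."""
    return PB + DG * domain_id + D0

print(discovery_multicast_port(0))   # 7400
print(discovery_multicast_port(1))   # 7650

# Without multicast, the same bootstrapping is done with an initial peer
# list: unicast addresses the middleware probes directly.
initial_peers = ["192.168.1.10", "192.168.1.11"]
```

Because every participant in a domain can derive the same address and port from the domain ID alone, no central registry is needed to get discovery started.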
Multicast for Data Delivery
Data delivery can also be based on either unicast or multicast. By default, we send the data using unicast. To optimize data delivery to multiple subscribers, a multicast address can be defined by the subscriber: “I want to receive data for topic “AirPressure” using multicast address:port 239.255.1.1:7451.” RTI Connext DDS also supports a configuration where multicast groups are defined on the publisher side. This is often used to associate a subset of the data with a specific multicast address: for example, all stock symbols starting with the letter A are published using multicast address 239.255.1.2.
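A toy model makes the payoff concrete: destinations that share a multicast group collapse into a single send on the publisher side, while unicast destinations each cost one send. This is purely illustrative bookkeeping, not RTI Connext internals; the locators are example values.

```python
# Toy model: subscriber locators collapse into the set of sends the
# writer must make. Illustrative only; not RTI Connext internals.

def destinations_for(topic, subscribers):
    """Return the distinct locators the writer sends to for one sample."""
    sends = set()
    for sub in subscribers:
        if sub["topic"] == topic:
            # A shared multicast locator appears once in the set;
            # unicast locators are per-reader.
            sends.add(sub["locator"])
    return sends

unicast_subs = [
    {"topic": "AirPressure", "locator": ("10.0.0.%d" % i, 7451)}
    for i in range(1, 6)
]
multicast_subs = [
    {"topic": "AirPressure", "locator": ("239.255.1.1", 7451)}
    for _ in range(5)
]

print(len(destinations_for("AirPressure", unicast_subs)))    # 5 sends
print(len(destinations_for("AirPressure", multicast_subs)))  # 1 send
```

Five unicast readers cost five sends per sample; five readers sharing one multicast group cost one, and that ratio only improves as the subscriber count grows.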
Magic is definitely at work behind the scenes to deliver data using multicast. How do you send data using multicast while simultaneously applying content-based filters? How do you efficiently repair samples for reliable delivery when some subscribers in the multicast group have not yet received a sample? We make it all easy and hide that complexity from the application developer.
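To give a flavor of the repair problem, here is a heavily simplified sketch in the spirit of the RTPS reliability protocol: the writer keeps a history cache keyed by sequence number, a reader detects gaps and asks for (NACKs) the missing samples, and the writer resends exactly those. The classes and names are invented for illustration; real repair involves heartbeats, ACK/NACK timing, and history management that this sketch omits.

```python
# Simplified NACK-based repair in the spirit of the RTPS reliability
# protocol. Illustrative only; real RTPS also uses heartbeats, timing
# rules, and bounded history caches.

class ReliableWriter:
    def __init__(self):
        self.history = {}          # seq -> sample, kept for repairs
        self.next_seq = 1

    def write(self, sample: bytes) -> int:
        seq = self.next_seq
        self.history[seq] = sample
        self.next_seq += 1
        return seq

    def repair(self, missing_seqs):
        """Resend exactly the samples a reader NACKed."""
        return {seq: self.history[seq] for seq in missing_seqs}

class ReliableReader:
    def __init__(self):
        self.received = {}

    def on_sample(self, seq, sample):
        self.received[seq] = sample

    def missing(self, highest_seq):
        """Gap detection against the writer's highest sequence number."""
        return [s for s in range(1, highest_seq + 1)
                if s not in self.received]

writer = ReliableWriter()
reader = ReliableReader()
for i in range(5):
    seq = writer.write(b"sample-%d" % i)
    if seq != 3:                       # simulate one lost multicast datagram
        reader.on_sample(seq, writer.history[seq])

gaps = reader.missing(highest_seq=5)   # reader NACKs [3]
for seq, sample in writer.repair(gaps).items():
    reader.on_sample(seq, sample)

print(reader.missing(5))               # []
```

The interesting engineering lives in the parts the sketch skips: deciding whether a repair goes out over multicast (helping every reader with the same gap) or unicast (sparing readers that already have the sample), and doing so without overwhelming the writer.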
At its core, RTI Connext technology is built on a low-latency, highly scalable distributed data bus that supports large numbers of publishers and subscribers. Networks supporting multicast are just one of the technologies that allow us to reach many subscribers without burdening the publisher with many sends. Even without multicast, RTI Connext is designed to offer superior scalability.
Multicast definitely helps, but RTI Connext products do not rely on it.
See our forum post: Configure RTI Connext DDS not to use Multicast