An Industrial-Grade Connectivity Architecture

The Industrial IoT introduces new requirements for the velocity, variety, and volume of information exchange. Connectivity must be real-time and secure, and it must work over mobile, disconnected, and intermittent links. It must efficiently scale to handle any number of things, each of which may have its own unique requirements for information exchange, such as streaming updates, state replication, alarms, configuration settings, initialization, and commanded intents. These requirements are above and beyond the requirements commonly handled by conventional connectivity solutions designed for static networks.

Designers and standards organizations are fueling the advancement of appropriate connectivity standards like the Data Distribution Service (DDS) that meet these requirements and facilitate a more open, interoperable connectivity architecture for intelligent devices. The benefits include shorter development times, flexible design options, and scalable designs that can evolve with the IoT.

Reduced Integration Times

One of the primary roles of the connectivity architecture is to ensure interoperability of the IoT and thereby reduce integration time for complex devices and subsystems. Ultimately, the goal is to evolve the connectivity architecture to achieve full plug-and-play compatibility.

Levels of interoperability within a connectivity architecture

Currently, industry standards for real-time connectivity are focused on mid-level interoperability, or syntactic-level compatibility, where all endpoints and systems use a common data format and syntax.

Flexible Connectivity Gateways

A connectivity standard that delivers syntactic-level interoperability facilitates the introduction of connectivity gateways to address the diversity of devices in modern systems. These gateways serve multiple purposes, including the support of external systems and devices that rely on other connectivity technologies. Gateways can also be used to create hierarchical architectures and to group various endpoints and devices into subsystems.

Decoupled Apps and Data

Unlike human-driven environments, industrial systems operate autonomously and therefore call for a data-driven architecture. This shift can be compared to the historical development of databases. By decoupling data from applications, databases gave application developers much greater flexibility for evolving modular, independent applications, and they fostered innovation and standards in the application programming interface (API).

Within the Industrial IoT, data-centric communications can similarly promote interoperability, scalability, and ease of integration. The concept of a data bus makes it possible to decouple data from application logic, so that application components interact with the data rather than directly with each other. The data bus can independently optimize the delivery of data in motion, and it can be more effectively managed and scaled separately from the application components.

Fundamental Building Blocks

In conventional enterprise IT environments, the data architecture deals with events, transactions, queries, and jobs. The Industrial IoT, which is made up of a broad range of devices, differs greatly from this human-driven environment. The fundamental building blocks of the Industrial IoT include streams of data, commands, status (or state) information, and configuration changes.

Note that the key activity triggers within conventional environments involve human requests or responses (decisions). In the Industrial IoT, activity is triggered by data or state changes that exist and happen autonomously.

Rapid Data Transformations Are Moments Away!

When I started college, everybody was talking about “Information Technology.” At that point I had been programming for quite a while and it wasn’t clear to me what coding had to do with that fancy terminology. After a few more years of coding, I realized the connection: all I do, day in and day out, is move bytes (information) from one memory location to another. Copying the contents of a struct into the socket buffer and sending it out; getting the bytes from the socket buffer and deserializing them into a structure to pass them to the application logic. Well, that’s part of what communication middleware does for you!
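To make that concrete, here is a minimal hand-rolled sketch of the byte moving that middleware automates for you (my own illustration with made-up types; real middleware also handles endianness, alignment, and type evolution):

#include <cstdint>
#include <cstring>

// An illustrative message type; with DDS you would describe this in IDL.
struct Sample { int32_t id; double value; };

// "Serialize": copy the struct's bytes into an outgoing socket buffer.
size_t serialize(const Sample& s, char* buf) {
    std::memcpy(buf, &s, sizeof s);
    return sizeof s;
}

// "Deserialize": rebuild the struct from the received bytes before
// handing it to the application logic.
Sample deserialize(const char* buf) {
    Sample s;
    std::memcpy(&s, buf, sizeof s);
    return s;
}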

RTI Connext DDS implements the OMG DDS Standard. DDS is data-centric middleware. We call those bytes being transferred data, and we assign a type to them. The type describes each byte in your data, so you (and your application) can make good use of them.

Designing a good distributed system means having a good type system in place. Often though, those types carry a lot of information, and it can become pretty difficult to deal with them. Also, you may not need all the information all the time. This often happens when you have heterogeneous systems with nodes that have different capabilities, such as bandwidth constraints or resource limits. In this case, some of the nodes in the system may not be able to handle a whole sample for certain data types, but they still need to be able to receive part of the information.

Let me present a possible scenario first and then suggest one way to solve it.


Figure 1: Scenario

Let’s say a new testing module for the International Space Station (ISS) has been shipped to space. The module has a device that collects thousands of data points, puts them all in a DDS sample with a specific type, and writes it to the topic ComplexDataTopic. We will call this device the Emitter.

One of the many different data points this device collects is the temperature from 10 different sensors. It puts all the collected temperature values into a sequence.

enum Unit {
    F,
    C,
    K
};

struct Temp {
    long sensorId;
    double value;
    Unit unit;
};

struct Measurements {
    long deviceId;
    // thousands of other fields ...
    sequence<Temp,10> temperatures;
};

The module also comes with a FancyReceiver (what we call a topic subscriber) that runs a UI; it gets all the data contained in each sample and creates statistics and charts so astronauts can easily understand the data. For example, looking at the temperature data, it could calculate the average (mean) or the median, or spot significant differences between sensors.

Let’s now say that back on Earth, NASA scientists need to know what the temperature is on the testing module. They’re not interested in pressure and humidity. Just the temperature. But, since the bandwidth between Earth and the ISS “is what it is”, it’s considered acceptable to receive aggregates of the data. For example, instead of sending all the temperature values, the testing module will send only the average, saving the ISS some bandwidth on the link to ground. Basically, they want something that looks like this:

enum Unit {
    F,
    C,
    K
};

struct SimpleMeasurements {
    long deviceId;
    double avgTmp;
    Unit unit;
};

We have many options to achieve this result:

  1. We can create an ad-hoc application: it will subscribe to the ComplexDataTopic, get the data we need, create another topic with another type, calculate what we need, and send it over.
  2. We can use RTI Routing Service: write a custom transformation library and be done with it.
  3. We can use RTI Connector via the RTI Prototyper with Lua.
  4. We can use RTI Connector via Python or Node.js.

I will now explain option 3: Use RTI Prototyper with Lua to easily transform your types. If you are already an RTI Connext user, you will have what you need in your distribution (under the “bin” directory if you are using RTI Connext 5.2, or under the “scripts” directory for older versions). Otherwise, check out this page to learn how to get your free copy of RTI Prototyper.

We will create a new component, the Transformer. The Transformer will subscribe to ComplexDataTopic (with type Measurements) and will publish to SimpleDataTopic (with type SimpleMeasurements).

The Types

For simplicity, let’s say that the complex type is the following:

<enum name="Unit">
  <enumerator name="F"/>
  <enumerator name="C"/>
  <enumerator name="K"/>
</enum>
<struct name="Temp">
  <member name="sensorId" id="0" type="long"/>
  <member name="value" id="1" type="double"/>
  <member name="unit" id="2" type="nonBasic"
          nonBasicTypeName="Unit"/>
</struct>
<struct name="Measurements">
  <member name="deviceId" id="0" type="long"/>
  <member name="humidity" id="1" type="double"/>
  <member name="pressure" id="2" type="double"/>
  <member name="temperatures" sequenceMaxLength="10" id="3"
          type="nonBasic" nonBasicTypeName="Temp"/>
</struct>

The simple type looks like this:

<enum name="Unit">
  <enumerator name="F"/>
  <enumerator name="C"/>
  <enumerator name="K"/>
</enum>
<struct name="SimpleMeasurements">
  <member name="deviceId" id="0" type="long"/>
  <member name="avgTmp" id="1" type="double"/>
  <member name="unit" id="2" type="nonBasic"
          nonBasicTypeName="Unit"/>
</struct>

The Topics and the Entities

All we need to do now is define an XML application file with a data reader for topic ComplexDataTopic and a writer to SimpleDataTopic:

<!-- Domain Library -->
<domain_library name="MyDomainLibrary">
    <domain name="MyDomain" domain_id="0">
        <register_type name="Measurement" kind="dynamicData"
                       type_ref="Measurements"/>
        <register_type name="SimpleMeasurement" kind="dynamicData"
                       type_ref="SimpleMeasurements"/>
        <topic name="ComplexDataTopic" register_type_ref="Measurement"/>
        <topic name="SimpleDataTopic" register_type_ref="SimpleMeasurement"/>
    </domain>
</domain_library>

<!-- Participant Library -->
<domain_participant_library name="MyParticipantLibrary">
    <domain_participant name="Transform"
                        domain_ref="MyDomainLibrary::MyDomain">
        <participant_qos base_name="QosLibrary::DefaultProfile"/>
        <publisher name="MyPublisher">
            <data_writer name="MySimpleWriter"
                         topic_ref="SimpleDataTopic"/>
        </publisher>
        <subscriber name="MySubscriber">
            <data_reader name="MyComplexReader"
                         topic_ref="ComplexDataTopic"/>
        </subscriber>
    </domain_participant>
</domain_participant_library>

The Actual Logic

Once you have defined what your types are and what your entities are named, you just have to write a simple chunk of Lua code that will be executed by RTI Prototyper. Let’s have a look:

local myComplexReader = CONTAINER.READER['MySubscriber::MyComplexReader']
local mySimpleWriter  = CONTAINER.WRITER['MyPublisher::MySimpleWriter']
local instance        = mySimpleWriter.instance

myComplexReader:take()
for i, sample in ipairs(myComplexReader.samples) do
    if (not myComplexReader.infos[i].valid_data) then
        print("\t invalid data!")
    else
        local deviceId = sample['deviceId']
        local sum = 0
        for j = 1, sample['temperatures#'] do
            sum = sum + sample['temperatures[' .. j .. '].value']
        end
        local avgTmp = sum / sample['temperatures#']

        -- setting the instance
        instance['deviceId'] = deviceId
        instance['avgTmp']   = (avgTmp - 32) * (5/9) -- Fahrenheit to Celsius
        instance['unit']     = 1 -- C

        -- writing the simple instance
        print("Writing sample with avgTmp = " .. instance['avgTmp'])
        mySimpleWriter:write()
    end
end

As you can see, you have to write very little code to transform your complex data type into a simple one using RTI Prototyper.

In the first 3 lines we are just getting the complex reader and the simple writer. We are then assigning the pre-allocated instance of the simple writer to the variable called instance.

Next we do a take and iterate over the received samples; for each valid sample, we extract only the field we care about and aggregate the data. (In this example, we calculate the average of all the temperatures sent in the original sample, and we ignore humidity and pressure.) We can also do some more intelligent transformations, such as converting the incoming temperature from Fahrenheit to Celsius!

Once we have what we need, we assign the values to the writer instance and write the sample. It’s that simple!

For your convenience I uploaded all the code and files to a GitHub repository here.

In the repo you will find the following directories:

  • Transformer: Contains both the XML and the Lua file described in this blog post. To run it just execute:
rtiddsprototyper -cfgFile Transformer.xml -luaFile Transformer.lua
  • EmitterEmu: Contains an XML file and a Lua file to emulate the behavior of your ISS module Emitter. To run it just execute:
rtiddsprototyper -cfgFile EmitterEmu.xml -luaFile EmitterEmu.lua
  • FancyReceiver: Contains an XML file and a Lua file to emulate your fancy dashboard running on board the ISS. To run it, just execute:
rtiddsprototyper -cfgFile FancyReceiver.xml -luaFile FancyReceiver.lua
  • LimitedReceiver: Contains an XML file and a Lua file to emulate your limited subscriber running back on earth. To run it, just execute:
rtiddsprototyper -cfgFile LimitedReceiver.xml -luaFile LimitedReceiver.lua

So, clone the repo, play with the examples, read the few lines of code, and you will immediately understand the power of RTI Prototyper with Lua. If you want more information on RTI Prototyper, check out the Getting Started Guide or the other blog posts about it here.

And if you have some time left, check out the other solution for scripting with RTI Connext DDS using Python and Node.js here.

RTI Services Delivery Partner (SDP) Program


Core values are critical. They shape how and why we do what we do, both personally and professionally. Here at RTI, we do our best to make decisions that are informed by our values, which ensures our strategic goals are consistent with our mission – to enable and realize the potential of smart machines to serve mankind.

Our Services Delivery Partner (SDP) program is no exception. It’s the realization of one of our core values: valuing the importance of working as a team. As with any organization, the value of teamwork is realized internally as well as externally. With SDP, the value of teamwork is embodied in how we work with our partners, who are critical to continued growth. That’s one of the many reasons I’m excited about the recent addition of Tech Mahindra and Remedy IT to our rapidly expanding SDP program.

The Value of Partnerships

I find Wikipedia’s definition of partnerships quite helpful in light of our SDP program vision. Wikipedia defines a partnership as “an arrangement where parties agree to cooperate to advance their mutual interests…to increase the likelihood of each achieving their mission and to amplify their reach.” By leveraging the power of two brands and our market-leading connectivity platform for the IIoT, our SDPs can offer their customers a broader variety of product and service choices for a truly best-in-class IIoT solution. The SDP program gives partners the ability to design, plan, implement, and support RTI-enabled solutions in their customers’ environments.

Needless to say, we see these partnerships as critical to healthy growth, and that is precisely why we’re actively growing our SDP program and driving our strategic partnerships to the next level.

Advancing Our Mutual Interests, Amplifying Our Reach

Our SDP program strategy is specifically designed to focus on:

  • Establishing a growth multiplier
  • Amplifying our reach

Establishing a Growth Multiplier

Companies are continually looking for effective ways to strengthen their brand and expand awareness. We’re no different at RTI, and we plan to leverage the power of our partnerships to increase our market presence and technology adoption rate. In fact, right after our joint press release was issued announcing our new relationship with Tech Mahindra, we saw immediate coverage.

Aligning with our SDPs will create a catalyst for growth that allows us to benefit from our partners’ solutions-focused global presence, significant vertical expertise, and worldwide sales teams. We see this as a perfect complement to RTI’s connectivity platform for the IIoT. These partnerships enable us to be even more aggressive in driving adoption of our technology in more and more vertical solutions. By combining efforts, we can deliver solutions and services for the emerging and dynamic requirements of the IIoT market in key verticals around the world, including energy, automotive, avionics, healthcare, and telecommunications.

Amplifying Our Reach

In addition to expanding the adoption of our connectivity platform, we’re also looking to amplify our services reach. Today, RTI’s professional services organization delivers high-value expertise, but is inelastic given our current composition. We plan to leverage the size and vertical expertise of our partners to offer our existing and emerging customers additional value, and establish a more elastic delivery capability, which is vital to RTI’s continued growth.

The Way Forward

As you can probably tell, we’re quite excited about our SDP program, and we look forward to providing additional updates as the program grows. If you have any questions or you’re interested in becoming a Services Delivery Partner, don’t hesitate to contact us.


Data-Centric Stream Processing in the Fog

It has been almost a year and a half since my last post on reactive stream processing with RTI Connext DDS. A lot has happened since then, and I have a lot of interesting updates to share with you.

For starters, here’s a quick reason why reactive programming, and specifically stream processing, makes a lot of sense in Industrial Internet systems. As you probably know, the Industrial Internet of Things (IIoT) has really taken off, and billions and billions of new devices will be connected to the Internet in this decade. Many of them will belong to national critical infrastructure (e.g., smart power grids, smart roads, smart hospitals, smart cities). Most of the data generated by these devices cannot and will not be sent to the cloud due to bandwidth, latency, and the exorbitant costs of doing so. Therefore, much of the data must be correlated, merged, filtered, and analyzed at the edge so that only summary and exceptional data is sent to the cloud. This is edge computing (a.k.a. fog).

Rx4DDS is an elegant solution to this problem that is productive, composable, concurrency-friendly, and scales well. Rx4DDS is a collection of Reactive Extensions (Rx) adapters for RTI Connext DDS. Rx is a collection of open-source libraries for functional-style, composable, asynchronous data processing. It is available in a number of flavors, including RxJava, RxJS, RxCpp, Rx.NET, RxScala, RxPY, and many more.

The original Rx4DDS work started in C#. Since then, Rx4DDS has been implemented in C++ and JavaScript (Node.js). Recently, I presented Rx4DDS and three demos at CppCon’15. The session recording, all demo videos, and the slides are now available. And yeah, there’s a research paper too.

Connext DDS Modern C++ API in Rx4DDS

The Rx4DDS C++ adapter uses the modern C++ API for DDS (DDS-C++ PSM) at its core. The DDS-C++ PSM has its own reactive flavor. For instance, a DDS ReadCondition accepts a lambda to process data, and the user’s thread of control automatically invokes the lambda provided earlier. As a result, the dispatch logic that programmers have to write is greatly simplified (often just one line). See an example here.
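Here is a rough sketch of that pattern (the reader and the surrounding setup are my assumptions, not code from the original post): a ReadCondition carries its own processing lambda, and dispatching the waitset runs it from the user’s thread.

#include <iostream>
#include <dds/dds.hpp>

// Assume 'reader' is an existing dds::sub::DataReader<ShapeType>,
// created inside some setup function.
dds::sub::cond::ReadCondition read_condition(
    reader,
    dds::sub::status::DataState::any(),
    [&reader]() {
        // Runs in the user's thread when matching data arrives
        for (const auto& sample : reader.take()) {
            if (sample.info().valid()) {
                std::cout << sample.data() << std::endl;
            }
        }
    });

dds::core::cond::WaitSet waitset;
waitset += read_condition;
waitset.dispatch(dds::core::Duration(4)); // waits, then invokes the lambda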

Rx4DDS takes DDS reactive programming to a whole new level. The best way to learn what it can do for you is to watch a demonstration and understand the corresponding code.

So, here is one of the demos that shows transformation of keyed data streams using GroupBy and Map.

The demo includes multiple parallel stateful data pipelines. The transformation pipeline is replicated for each key (i.e., shape color). The resulting topics (“Circle” and “Triangle”) follow the lifecycle of the original instance in the “Square” topic. That is, when a new square arrives, a new circle and triangle orbit around it. Similarly, when a square is disposed, its circle and triangle are disposed with it.

Let’s look at some code.

auto circle_writer   = ... // DataWriter<ShapeType> for the "Circle" topic
auto triangle_writer = ... // DataWriter<ShapeType> for the "Triangle" topic
int circle_degree = 0;
int tri_degree = 0;

using KeyedShapeObservable =
  rx::grouped_observable<string, LoanedSample<ShapeType>>;

rx4dds::TopicSubscription<ShapeType>
  topic_sub(participant, "Square", waitset, worker);

auto grouped_stream =
  topic_sub.create_observable()
           >> rx4dds::group_by_instance([](ShapeType & shape) {
                return shape.color();
              });

auto subscription = grouped_stream
  .flat_map([circle_writer, triangle_writer]
            (KeyedShapeObservable go)
  {
    rx::observable<ShapeType> inner_transformed =
      go >> rx4dds::to_unkeyed()
         >> rx4dds::complete_on_dispose()
         >> rx4dds::error_on_no_alive_writers()
         >> filter([](LoanedSample<ShapeType> s) {
              return s.info().valid();
            })
         >> map([](LoanedSample<ShapeType> valid) {
              return valid.data();
            })
         >> map([circle_degree](ShapeType & square) mutable {
              circle_degree = (circle_degree + 3) % 360;
              return shape_location(square, circle_degree);
            })
         >> rx4dds::publish_over_dds(circle_writer, ShapeType(go.key()))
         >> map([tri_degree](ShapeType & circle) mutable {
              tri_degree = (tri_degree + 9) % 360;
              return shape_location(circle, tri_degree);
            })
         >> rx4dds::publish_over_dds(triangle_writer, ShapeType(go.key()));
    return inner_transformed;
  })
  .subscribe();

Diving Deep

There’s a lot going on here. But the first thing you might have noticed is that there’s not a single loop. You don’t need them here. Rx4DDS lets you write more declarative code. Think Linux command-line pipes and I/O redirection.

The TopicSubscription object is the entry point. As you might have suspected, it hides the underlying DDS DataReader. The create_observable function creates an observable from the TopicSubscription; this observable is a stream of ShapeType objects. Note that whether TopicSubscription uses a waitset or a listener to retrieve DDS data is up to the user; just use the right constructor. Either way, create_observable returns an observable, and the rest of the program structure is identical. This is important.

“Rx4DDS decouples data retrieval from consumption.”

The group_by_instance is an Rx4DDS “operator” that shards the incoming ShapeType objects across “inner” keyed observables (KeyedShapeObservable), each bound to a specific key. It uses the view-state and instance-handle information in SampleInfo for sharding. It also accepts a lambda to retrieve the key for each sample.

The grouped_stream object is a stream-of-streams, a natural mapping for keyed topics. After all, a keyed topic contains many instances, and each instance has its own lifecycle: Create -> Update -> Dispose. This lifecycle of instances maps naturally to observables, which have the same lifecycle expressed as OnNext* [OnError | OnCompleted]. That is, OnNext may be called zero or more times, followed by at most one of OnError or OnCompleted if/when the observable ends.

We call flat_map on the grouped_stream. The flat_map operator has many uses. First, when there’s a stream-of-streams, it flattens it out to just a stream. It does that by simply merging the individual streams together. Of course, sharding an incoming stream and merging the sharded streams right back would be a no-op. So flat_map also lets you map each incoming stream to some other stream. Hence the lambda returns an inner_transformed observable, which is a transformation of every instance-specific observable (go).

So what’s the transformation? It’s just a concatenation of operators that simply describe what they do. First, we drop the “keyed” aspect and get a regular key-less ShapeType observable. We’ll use the key shortly.

The complete_on_dispose and error_on_no_alive_writers operators constitute the data-centric magic sauce. When an instance is disposed, the complete_on_dispose operator recognizes it and propagates an OnCompleted event through the instance-specific observable. Likewise, when an instance runs into some sort of error (e.g., no alive writers), the error_on_no_alive_writers operator recognizes it and propagates an OnError event through the instance-specific observable.

“Data-centric interpretation of the data and SampleInfo is wired into the Rx4DDS programming model.”

The best part is that it is quite extensible. If no alive writers does not mean an “error” in your application, you can either not use the operator or write your own “interpretations” of SampleInfo as you please. The error_on_no_alive_writers operator is very simple and gives you an idea of what’s possible.

The rest of the data pipeline is quite straightforward. filter eliminates invalid samples, and each valid sample is mapped to its data right after filter. At this point the source observable (go) has been transformed into a pure observable<ShapeType>. What comes next is just business logic.

Each ShapeType object, which is a location of the square, is transformed to a circle position using basic geometry (not shown). Each circle position is also a ShapeType. It is then published over DDS via the circle_writer. A similar transformation occurs for triangles.

The publish_over_dds is an interesting operator. It’s one more ingredient in the data-centric magic sauce. Note that it accepts a ShapeType object created from the key. When OnCompleted or OnError is propagated, publish_over_dds disposes the instance selected by the key. For instance, when a blue Square is disposed, the complete_on_dispose operator recognizes it and propagates an OnCompleted event. In response, the publish_over_dds operator disposes both the blue circle and the blue triangle.

In summary,

“The finalization actions are baked into each data pipeline at the time of creation.”

This is declarative programming, as it completely hides the explicit control flow necessary to do cleanup. This is one of the main strengths of Rx4DDS.

Finally, the flat_map operator simply merges the transformed (inner) streams together. The result of the merge is ignored, as there’s no subsequent user-defined stage. The final subscribe call sets the whole machinery going by creating the only user-visible subscription that the user code has to manage.

The CppCon’15 presentation touches upon several concepts discussed here and more.

DDS and Rx Side-by-Side

I hope you are starting to see that Rx and DDS are a great match. More similarities reveal themselves as you look deeper. Here’s how I think Rx and DDS are related architecturally.

Aspect                   | DDS                                                | Rx
-------------------------|----------------------------------------------------|---------------------------------------------------
Design methodology       | Data-centric                                       | Reactive, compositional
Architecture             | Distributed publish-subscribe                      | In-memory Observable-Observer (monadic)
Anonymity/loose coupling | Publisher does not know about subscribers          | Observable does not know about observers
Data streaming           | A Topic<T> is a stream of data samples of type T   | An Observable<T> is a stream of objects of type T
Stream lifecycle         | Instances (keys) have a lifecycle                  | Observables have a lifecycle
Data publication         | A publisher may publish without any subscriber     | “Hot” Observables
Data reception           | A subscriber may read the same data over and over  | “Cold” Observables

Rx4DDS allows you to map many DDS concepts to the declarative Rx programming model. Some of my favorite mappings are listed below. This table is based on the C# implementation of Rx4DDS.


DDS Concept                                   | Rx Concept/Type/Operator
----------------------------------------------|-----------------------------------------------------------
Topic of type T                               | An object implementing IObservable<T>, which internally creates a DataReader<T>
Communication status, discovery event streams | IObservable<SampleLostStatus> and similar status observables
Topic of type T with key type Key             | IObservable<IGroupedObservable<Key, T>>
Detect a new instance                         | Notify observers of a new IGroupedObservable<Key, T> with key == instance, using IObserver<IGroupedObservable<Key, T>>.OnNext()
Dispose an instance                           | Notify observers through IObserver<IGroupedObservable<Key, T>>.OnCompleted()
Take an instance update of type T             | Notify observers of a new value of T using IObserver<T>.OnNext()
Read with history = N                         | IObservable<T>.Replay(N)
Query conditions                              | IObservable<T>.Where(...)
SELECT in a CFT expression                    | IObservable<T>.Select(...)
FROM in a CFT expression                      | DDSObservable.FromTopic("Topic1")
WHERE in a CFT expression                     | IObservable<T>.Where(...)
ORDER BY in a CFT expression                  | IObservable<T>.OrderBy(...)
MultiTopic (INNER JOIN)                       | IObservable<T>.Join().Where().Select()
Joins between DDS and non-DDS data            | Join, CombineLatest, Zip


Phew! This blog post is already much longer than expected. Yet there’s so much more to talk about. I encourage you to look at the other demonstration videos available here.

If JavaScript is your thing, you might also want to find out how we used DDS, RxJS, and Node.js Connector to implement Log Analytics and Anomaly Detection in Remote DataCenters. Check out these slides and a research paper.

Finally, we welcome your comments on this blog, and we would be interested in discussing how this technology may fit your needs.

TCP Scalability Improvements

I’m excited to talk about new stuff we have added to our latest release, RTI Connext 5.2.0. In particular, I’ll talk about scalability improvements we added to the RTI TCP Transport Plugin.

Full Windows IOCP Support

RTI Connext can be used for a wide range of scenarios: from relatively small deployments with dozens of DomainParticipants communicating over a LAN, to WAN scenarios involving hundreds or even thousands of DomainParticipants. Our TCP transport provides a way to communicate over the WAN that is NAT-friendly and easy to configure. In WAN scenarios, the transport can be used in asymmetric mode. In this case, there is one DomainParticipant acting as a TCP server that receives connections from several other DomainParticipants acting as TCP clients. The server DomainParticipant does not need to know the addresses of the client DomainParticipants.

Something to consider when building a TCP server that handles hundreds or thousands of clients is the socket management (or socket monitoring) strategy: how the system reacts to socket-generated I/O events.

While the strategy used by RTI Connext 5.0.0 TCP Transport Plugin (select() monitoring) had good scalability for servers running on Linux systems (up to 1,500 clients), we observed during experiments that scalability on Windows systems dropped to approximately 600 clients.

In order to solve this issue, RTI Connext 5.1.0 added partial support for the MS Windows IOCP (I/O Completion Ports) socket monitoring strategy to the TCP Transport Plugin.

MS Windows IOCP provides an efficient threading model for processing multiple asynchronous I/O requests on a multiprocessor system. During our tests on Windows systems running the RTI TCP Transport Plugin, we observed an increase in scalability from roughly 600 clients to 1,500. However, RTI Connext 5.1.0 did not include Windows IOCP support for TLS over TCP.

I’m glad to announce that the latest release, RTI Connext 5.2.0, includes full support for the Windows IOCP socket monitoring strategy, including TLS over TCP.

You can start using this feature by adding the following XML snippet to the TCP Transport Plugin configuration:
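A minimal sketch of that snippet follows; the property name socket_monitoring_kind is my assumption, so check the TCP Transport Plugin documentation for the exact spelling:

<participant_qos>
    <property>
        <value>
            <element>
                <!-- assumed property name and value -->
                <name>{transport_prefix}.socket_monitoring_kind</name>
                <value>WINDOWS_IOCP</value>
            </element>
        </value>
    </property>
</participant_qos>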

Where {transport_prefix} is the name you assigned to the RTI TCP Transport Plugin (for example, dds.transport.TCPv4.tcp1).

External Load Balancer Support

Windows IOCP support is not the only new scalability improvement included in RTI Connext 5.2.0. Now I’ll talk about another addition that provides support for even more DomainParticipants acting as TCP clients.

When high scalability is required over TCP, the TCP server becomes the bottleneck, and we need to take advantage of external load balancers. A load balancer is a software or hardware device that increases the capacity and/or reliability of a system by transparently distributing the workload among multiple servers. In particular, a TCP load balancer distributes connections from TCP clients among multiple TCP servers; in our scenario, these are DomainParticipants.

The RTI Connext TCP Transport Plugin creates multiple connections when connecting two DDS DomainParticipants. These connections create a common state (a session) between the two DomainParticipants. In previous releases, the RTI TCP Transport Plugin was not friendly to external load balancers, because a load balancer could distribute the connections among multiple DomainParticipants, thus preventing communication.


In order to support external load balancers, RTI Connext 5.2.0 includes a new session-id negotiation feature. When session-id negotiation is enabled (by using the negotiate_session_id property), the TCP Transport Plugin performs an additional message exchange that allows load balancers to assign all the connections belonging to the same session to the same DomainParticipant.


In order to enable the session-id negotiation, you can use the following XML snippet:
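A minimal sketch, using the negotiate_session_id property named above (the boolean value syntax is an assumption):

<participant_qos>
    <property>
        <value>
            <element>
                <!-- enable session-id negotiation -->
                <name>{transport_prefix}.negotiate_session_id</name>
                <value>1</value>
            </element>
        </value>
    </property>
</participant_qos>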

Where {transport_prefix} is the name you assigned to the RTI TCP Transport Plugin (for example, dds.transport.TCPv4.tcp1).

This feature has been tested on a system using F5 Big-IP load balancer. However, it will work with other load balancers as long as:

  • They are able to modify the TCP data stream to include a unique identification of the node serving the first connection on a session.
  • They are able to assign connections to a server depending on the content of the TCP data stream.

I hope you find these new features useful and helpful in deploying highly scalable RTI Connext–based systems. To try it for yourself, download our free 30-day trial today!

Architectural Mapping within the Industrial Internet

Based on location and function, the right connectivity solution must be evaluated and selected for the various scenarios:

  • Smart devices (endpoints)
  • Device-to-cloud connections
  • Connectivity within the cloud
  • User connectivity (cloud-to-user)

Mapping the right technologies into the connectivity architecture

Data Distribution Service Standard

Between the devices and the cloud (WAN connections), DDS provides an ideal solution with:

  • Stateful interactions
    • Intelligent connections/disconnects, and the ability to resend only relevant data upon reconnection
    • Intelligence built into the bus, without application overhead
  • Many data-flow patterns, for meeting current and future requirements
  • Publish-subscribe architecture style that is data-driven
  • Scalability, performance, resilience, and security

Inside the endpoint devices themselves, DDS has been applied broadly for the same reasons listed above for device-to-cloud connections. Additionally, DDS makes it possible to design smart devices that operate very reliably and meet safety and longevity requirements in industries such as healthcare and automotive. DDS also supports a diversity of transports and platforms within a system, as previously discussed in terms of gateway capabilities and routing services.

DDS has also made inroads in the cloud. Here, the requirements span a broader range and give rise to a mixture of connectivity options. DDS can support this connectivity diversity, and it can also promote longevity of cloud solutions.

In contrast, other technologies make more sense for the user-to-cloud WAN connections (see illustration). At this point in the connectivity model, traditional web technologies such as web sockets and HTTP meet the human-centric requirements with:

  • Stateless interactions
  • Single data-flow pattern (query)
  • Request-response architecture style that is human-driven
  • Established scalability and security infrastructure
  • Forgiving performance and resilience (including easy-to-restart connections and applications)
  • Ubiquitous access from mobile devices or thin clients

Deployment Flexibility

DDS domains make it easy to isolate subsystems with individual data communication planes. Besides facilitating security rules with logical separation, domains also make it possible to tailor endpoint discovery rules and activity levels and significantly reduce network bandwidth and CPU/memory overhead over gateway connections. As shown in the previous diagram, for example, DDS domains can be defined with:

  • Domain 0 on the WAN connections. Within Domain 0, discovery can be limited to detection of the gateway endpoints and routing services. (These gateways act as proxies for the endpoint devices in their realms.)
  • Domain 1 encompassing the devices and the cloud. Full device discovery can be carried out on the device and cloud buses.

DDS also supports a choice of transports, including UDP, TCP, shared memory, OpenSSL (TLS/SSL, DTLS), and low-bandwidth connections. For example, in the generic use case, the DDS connectivity between devices and the cloud can utilize DDS over TCP. Typically, transport guidelines are different for:

  • LANs: Use UDP (with multicast, if available). This applies within a cloud or for application-to-application communications.
  • WANs: For device-to-cloud communications, use TCP (with TLS).

DDS is being adopted for this last category to provide remote access to any DDS data bus. DDS can manage state for seamless data-sharing and switching between cellular and Wi-Fi networks. State is managed independently of the network mobility and switching, and DDS Quality of Service (QoS) can introduce resilient rules for distributing and managing state information.

Finally, for cloud-to-human communications (mobile user endpoint devices or thin clients), you can use traditional web sockets and HTTP(s) (over TCP).

For an online demonstration of remote access from web applications, visit the RTI Connext DDS Demo site.

We Love Our New Launcher

For the new RTI Connext DDS 5.2 release, we have re-implemented the RTI Launcher application. We love it! We love the new native OS look and feel, we love the new functionality, and we’re confident you’ll love it, too.


Of course, the basic idea is the same: providing you with an easy way to quickly access all the RTI products. The UI design hasn’t changed much, so the transition should be easy for existing users. As usual, there are three main panels with our tools, infrastructure services, and utilities. Every tool has a context menu, which shows related docs and information:

You can also open a window that shows the command-line help for each service or utility. What’s more, you can now access the history of commands that you launched in the current session for that specific product:


This may come in handy at some point, because it reminds you of the exact parameters you used for the call. You can also copy the contents for later reference.

Apart from the usual tabs for our tools, services and utilities, we’ve incorporated a new tab for third-party products, as part of our pilot project for an RTI marketplace:


This tab contains products and tools from our partners. These products are usually not installed by default, but worry not! You can easily download and install them by clicking on the green download arrow (or if you right-click to show the context menu, you’ll find a menu item to do that as well).

Another cool thing about our tabs is that there is an optional new tab (you just have to enable it) where you can put the stuff you want to keep handy. I have to confess, I use (and love) Eclipse for development, so why not add it to my Launcher?


I can even add tool-tips and context menu entries!


Good stuff. Now I can launch Eclipse from Launcher. Launcher has never been this “launchy”… And all configured via XML!

We’ve also added a Help tab. This tab contains links for information and documentation about RTI and Connext DDS. Some links are web links (you’ll need an Internet connection to access them) and some are links to local PDF documentation.


Lastly, I wanted to mention the Installation tab and the RTI Package Installer. There you can select a license file to use for your Connext installation and inspect the license contents visually. Launcher will notify you when your license is invalid or expired. You can also see which Connext products and components you have installed.

And speaking of installing new (and awesome!) RTI stuff, Launcher has an interface for the new RTI Package Installer application. So if I want to install the new RTI Queuing Service I would just open the RTI Package Installer dialog and click the “+” button next to the package file for that product. (To open the RTI Package Installer, click the icon on the Installation tab or the Utilities tab.)

Then just click Install, and Launcher will call the Package Installer tool for you with all the files specified in the dialog. The new component will then be installed. Some components are shown by Launcher in panels, and they will be ready to use after the installation.

We think the new Launcher is a great addition to Connext 5.2! Go try it! We’re sure you’ll love it as much as we do!

A Taxonomy for the IIoT

Kings Play Chess On Fine Glass Stools.  Anyone remember this?

For most, that is probably gibberish.  But not for me.  This mnemonic helps me remember the taxonomy of life: Kingdom, Phylum, Class, Order, Family, Genus, Species.

The breadth and depth and variety of life on Earth is overwhelming.  A taxonomy logically divides types of systems by their characteristics.  The Science of Biology would be impossible without a good taxonomy.  It allows scientists to classify plants and animals into logical types, identify commonalities, and construct rules for understanding whole classes of living systems.

The breadth and depth and variety of the Industrial Internet of Things (IIoT) is also overwhelming.  The Science of IIoT Systems needs a similar organized taxonomy of application types.  Only then can we proceed to discuss appropriate architectures and technologies to implement systems.

The first problem is to choose top-level divisions.  In the Animal Kingdom, you could label most animals either “land, sea, or air” animals.  However, those environmental descriptions don’t help much in understanding the animal.  The “architecture” of a whale is not much like an octopus, but it is very like a bear.  To be understood, animals must be divided by their characteristics and architecture.

It is also not useful to divide applications by their industries like “medical, transportation, and power.”  While these environments are important, the applicable architectures and technologies simply do not split along industry lines.  Here again, we need a deeper understanding of the characteristics that define the major challenges, and those challenges will determine architecture.

I realize that this is a powerful, even shocking claim.  It implies, for instance, that the bespoke standards, protocols, and architectures in each industry are not useful ways to design the future architectures of IIoT systems.  Nonetheless, it is a clear fact of the systems in the field.  As in the transformation that became the enterprise Internet, generic technologies will replace special-purpose approaches. To grow our understanding and realize the promise of the IIoT, we must abandon our old industry-specific thinking.

So, what can we use for divisions?  What defining characteristics can we use to separate the Mammals from the Reptiles from the Insects of the IIoT?

There are thousands and thousands of requirements, both functional and non-functional, that could be used.  As in animals, we need to find those few requirements that divide the space into useful, major categories.

The task is simplified by the realization that the goal is to divide the space so we can determine system architecture.  Thus, good division criteria are a) unambiguous and b) impactful on the architecture.  That may sound easy, but it is actually very non-trivial.  The only way to do it is through experience.  We are early on our quest.  However, significant progress is within our collective grasp.

From RTI’s extensive experience with nearly 1000 real-world IIoT applications, I suggest a few early divisions below.  To be as crisp as possible, I also chose “metrics” for each division.  The lines, of course, are not that stark.  But the numbers force clarity, and that is critical; without numerical yardsticks (meter sticks?), meaning is too often lost.

IIoT Taxonomy Proposal

Reliability [Metric: Continuous availability must be better than “99.999%”]

We can’t be satisfied with the platitude “highly reliable”.  Almost everything “needs” that.  To be meaningful, we must be more specific about the architectural demands to achieve that reliability.  That requires understanding of how quickly a failure causes problems and how bad those problems are.

We have found that the simplest, most useful way to categorize reliability is to ask: “What are the consequences of unexpected failure for 5 minutes per year?”  (We choose 5 min/yr here only because that is the “5-9s” golden specification for enterprise-class servers; allowing 0.001% downtime out of the 525,600 minutes in a year works out to about 5.3 minutes.  Many industrial systems cannot tolerate even a few milliseconds of unexpected downtime.)

This is an important characteristic because it greatly impacts the system architecture.  A system that cannot fail, even for a short time, must support redundant computing, sensors, networking, and more.  When reliability is truly critical, it quickly becomes a – or perhaps the – key architectural driver.

Real Time [Metric: Response < 100ms]

There are thousands of ways to characterize “real time”.  All systems should be “fast”.  But to be useful, we must specifically understand which speed requirements drive architecture.

An architecture that can satisfy a human user unwilling to wait more than 8 seconds for a website will never satisfy an industrial control that must respond in 2ms.  We find the “knee in the curve” that greatly impacts design occurs when the speed of response is measured in a few tens of milliseconds (ms) or even microseconds (µs).  We will choose 100ms, simply because that is about the potential jitter (maximum latency) imposed by a server or broker in the data path.  Systems that must respond faster than this usually must be peer-to-peer, and that is a huge architectural impact.

Data Set Scale [Metric: Data set size > 10,000 items]

Again, there are thousands of dimensions to scale, including number of “nodes”, number of applications, number of data items, and more.  We cannot divide the space by all these parameters.  In practice, they are related.  For instance, a system with many data items probably has many nodes.

Despite the broad space, we have found that two simple questions correlate with architectural requirements.  The first is “data set size”, and the knee in the curve is about 10k items.  When systems get this big, it is no longer practical to send every data update to every possible receiver.  So, managing the data itself becomes a key architectural need.  These systems need a “data-centric” design that explicitly models the data, thereby allowing selective filtering and delivery, as sketched below.
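As an illustration of that selective delivery (the Measurement type, topic, and filter expression here are hypothetical), the standard DDS API lets a subscriber declare, in terms of the data model itself, which updates it wants:

// Sketch: a content-filtered topic delivers only the updates this
// reader cares about; the middleware can even filter at the writer side.
// Assumes an existing dds::topic::Topic<Measurement> 'topic' and
// dds::sub::Subscriber 'subscriber'.
dds::topic::ContentFilteredTopic<Measurement> hot_only(
    topic,
    "HotMeasurements",                        // name for the filtered topic
    dds::topic::Filter("temperature > 100")); // SQL-like filter expression

dds::sub::DataReader<Measurement> reader(subscriber, hot_only);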

Team or Application Scale [Metric: number of teams or interacting applications > 10]

The second scale parameter we choose is the number of teams or independently-developed applications on the “project”, with a breakpoint around 10.  When many independent groups of developers build applications that must interact, data interface control dominates the interoperability challenge.  Again, this often indicates the need for a data-centric design that explicitly models and manages these interfaces.

Device Data Discovery Challenge [Metric: >20 types of devices with multi-variable data sets]

Some IIoT systems can (or even must) be configured and understood before runtime.  This does not mean that every data source and sink is known, but rather only that this configuration is relatively static.

However, when IIoT systems integrate racks and racks of machines or devices, they must often be configured and understood during operation. For instance, a plant controller HMI may need to discover the device characteristics of an installed device or rack so a user can choose data to monitor.  The choice of “20” different devices is arbitrary.  The point: when there are many different configurations for the devices in a rack, this “introspection” becomes an important architectural need to avoid manual gymnastics.  Most systems with this characteristic have many more than 20 device types.

Distribution Focus [Metric: Fan out > 10]

We define “fan out” as the number of data recipients that must be informed upon change of a single data item.  This impacts architecture because many protocols work through single 1:1 connections.  Most of the enterprise world works this way, often with TCP, a 1:1 session protocol.  Examples include connecting a browser to a web server, a phone app to a backend, or a bank to a credit card company.

However, IIoT systems often need to distribute information to many more destinations.  If a single data item must go to many destinations, the architecture must support efficient multiple updates.  When fan out exceeds 10 or so, it becomes impractical to do this branching by managing a set of 1:1 connections.

Collection Focus [Metric: One-way data flow with fan in > 100]

Systems that are essentially restricted to the collection problem do not share significant data between devices.  They instead transmit copious information to be stored or analyzed in higher-level servers or the cloud.

This has huge architectural impact.  Collection systems can often use a hub-and-spoke “concentrator” or even a cloud-based server design.

Taxonomy Benefits

Defining an IIoT taxonomy will not be trivial.  This blog just scratches the surface.  However, the benefits are enormous.  Resolving these needs will help system architects choose protocols, network topologies, and compute capabilities.  Today, we see designers struggling with issues like server location or configuration, when the right design may not even require servers.  Overloaded terms like “real time” and “thing” cause massive confusion between technologies with no practical use case overlap.

It’s time the Industrial Internet Consortium took on this important challenge.  Its newest Working Group will address this problem, with the goal of clarifying these most basic business and technical imperatives.  I am excited to help kick off this group at the next IIC Members meeting in Barcelona.  If you are interested, contact me, Dirk Slama, or Jacques Durand.  We will begin by sharing our extensive joint experiences across the breadth of the IIoT.

Code Generator: Experiment with New Features


In previous posts we explained how RTI’s new code generator saves you time with faster code generation. It’s now the default code generator in RTI Connext DDS 5.2.0, and it includes a number of other new features we think you will like.

Did you know that our new code generator is much better at detecting syntax errors?

For instance, the code generator will tell you if you forgot to define a type, with a clearer error message. And this is just one example.

Do you want to generate only the type files and your project files without overwriting your publisher/subscriber files?

Try the autoGenFiles option.

Do you want even more control over which files you generate?

The new create and update <typefiles | examplefiles | makefiles> options are your solution.

Are you compiling dynamically?

Generate your project files using the shareLib option. It will link your application with RTI’s core dynamic libraries.

Do you want to share private IDLs without showing their actual content?

Try the new obfuscate option.

Are you using unbounded sequences?

Try the unboundedSupport option. Read this blog post to learn more about this new feature.
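Putting a few of these together, a typical invocation might look like the following sketch (the exact flag spellings may differ across versions; run rtiddsgen -help to confirm):

rtiddsgen -language C++ -create typefiles -create makefiles -unboundedSupport MyTypes.idl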

Customizing the generated code is easier than ever with the Apache Velocity Template Language (VTL) templates. You can check the predefined set of variables available in RTI_rtiddsgen_template_variables.xlsx, located in the Code Generator documentation folder.

As a simple example, imagine you want to generate a new method that prints whether each member of a type is an array or a sequence. It could also print the corresponding dimensions or size. For example, consider this type:

module MyModule {
    struct MyStruct {
        long longMember;
        long arrayMember[2][100];
        sequence<long,2> sequenceMember;
        sequence<long,5> arrayOfSequenceMember[28];
    };
};
Our desired print function would be something like this:

void MyModule_MyStruct_specialPrint() {
    printf(" longMember \n");
    printf(" arrayMember is an array [2, 100] \n");
    printf(" sequenceMember is a sequence <2> \n");
    printf(" arrayOfSequenceMember is an array [28] is a sequence <5> \n");
}

Implementing this is straightforward. You would just need to create a few macros in the code generator templates to generate the above code. We start with the main method, which would be something like this:

void ${node.nativeFQName}_specialPrint(){
#specialPrint($node)
}


This method calls the specialPrint macro. That macro iterates over the struct members and prints whether each one is an array or a sequence:

#macro (specialPrint $node)
#*--*##foreach($member in $node.memberFieldMapList)
    printf(" $member.name #isAnArray($member) #isASeq($member) \n");
#*--*##end
#end


We just need to define two auxiliary macros to handle each case:

#macro (isAnArray $member)
#if($member.dimensionList) is an array $member.dimensionList#end
#end

#macro (isASeq $member)
#if($member.seqSize) is a sequence <$member.seqSize>#end
#end


If you need more information about supported and new features available in Code Generator, check out the Getting Started Guide or the online documentation.

Unbounded Support For Sequences and Strings

When we first designed Connext DDS, deterministic memory allocation was at the top of our priority list. At that time most of our customers used small data types, such as sensor and track data. We decided to go with an approach in which we pre-allocated the samples in the DataWriter and DataReader queues to their maximum size. For example:

struct Position {
    double latitude;
    double longitude;
};

struct VehicleTrack {
    string<64> vehicleID; //@key
    Position position;
};

In the previous example, a sample of type VehicleTrack was pre-allocated to its maximum size. Even if vehicleID did not have a length of 64 bytes, Connext DDS pre-allocated a string of 64 bytes to store the sample value.

As our customer base grew, the set of use cases expanded, and with that came the need to be more flexible in our memory allocation policies. For example, customers may use Connext DDS to publish and subscribe to video and audio. Typically these data types are characterized by having unbounded members. For example:

struct VideoFrame {
    boolean keyFrame;
    /* The customer does not necessarily know the maximum
       size of the data sequence */
    sequence<octet> data;
};

Pre-allocating memory for samples like VideoFrame above may become prohibitively expensive, as the customer is forced to use an artificial, arbitrarily large bound for the variable-size sequence. For example:

struct VideoFrame {
    boolean keyFrame;
    /* Alternatively, the customer can use the code generator
       command-line option -sequenceSize to set an implicit
       limit for the unbounded sequence */
    sequence<octet,1024000> data;
};

In Connext DDS 5.2.0, we have introduced support for unbounded sequences and strings.

To support unbounded sequences and strings, the Code Generator has a new command-line option: -unboundedSupport. This new option may only be used when generating code for .NET, C, and C++ (that is, the -language option must be specified for C++/CLI, C#, C, or C++).
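For example, an invocation might look like this sketch (the IDL file name is illustrative):

rtiddsgen -language C -unboundedSupport VideoFrame.idl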

With this option, Connext DDS will not pre-allocate the unbounded members to their maximum size. For unbounded members, the generated code will de-serialize the samples by dynamically allocating and de-allocating memory to accommodate the actual size of the unbounded member. Unbounded sequences and strings are also supported with DynamicData and Built-in Types and they integrate with the following tools and services:

  • Routing Service
  • Queuing Service
  • Recording Service in serialized and automatic modes (records in serialized format)
  • Replay Service, when the data was recorded in serialized mode
  • Spy
  • Admin Console
  • Toolkit for LabVIEW

The integration with Persistence Service, Database Integration Service, and Spreadsheet Add-in for Microsoft Excel is not available at this point.

For additional information, check Chapter 3 Data Types and DDS Data Samples and Chapter 20 DDS Sample-Data and Instance-Data Memory Management in the RTI Connext DDS Core Libraries User’s Manual as well as the User’s Manuals for services and tools.