We love our new Launcher

For the new RTI Connext DDS 5.2 release, we have re-implemented the RTI Launcher application. We love it! We love the new native OS look and feel, we love the new functionality, and we’re confident you’ll love it, too.


Of course, the basic idea is the same: providing you with an easy way to quickly access all the RTI products. The UI design hasn’t changed much, so the transition should be easy for existing users. As usual, there are three main panels with our tools, infrastructure services and utilities. Every tool has a context menu, which shows related docs and information:

You can also open a window to show the command-line help for the service/utility – and what’s more, you can now access the history of commands that you launched in the current session for that specific product:


This may come in handy at some point, because it reminds you of the exact parameters you used for the call. You can also copy the contents for later reference.

Apart from the usual tabs for our tools, services and utilities, we’ve incorporated a new tab for third-party products, as part of our pilot project for an RTI marketplace:


This tab contains products and tools from our partners. These products are usually not installed by default, but worry not! You can easily download and install them by clicking on the green download arrow (or if you right-click to show the context menu, you’ll find a menu item to do that as well).

Another cool thing about our tabs is that there is an optional new tab (you just have to enable it) where you can put the tools you want to keep handy. I have to confess, I use (and love) Eclipse for development, so why not add it to my Launcher?


I can even add tool-tips and context menu entries!


Good stuff. Now I can launch Eclipse from Launcher. Launcher has never been this “launchy”… And all configured via XML!

We’ve also added a Help tab. This tab contains links for information and documentation about RTI and Connext DDS. Some links are web links (you’ll need an Internet connection to access them) and some are links to local PDF documentation.


Lastly, I wanted to mention the Installation tab and the RTI Package Installer. There you can select a license file to use for your Connext installation. You can browse the license contents in a visual way. Launcher will notify you when your license is invalid or expired. You can also see which Connext products and components you have installed.

And speaking of installing new (and awesome!) RTI stuff, Launcher has an interface for the new RTI Package Installer application. So if I want to install the new RTI Queuing Service I would just open the RTI Package Installer dialog and click the “+” button next to the package file for that product. (To open the RTI Package Installer, click the icon on the Installation tab or the Utilities tab.)

Then just click Install and Launcher will call the Package Installer tool for you with all the files specified in the dialog. The new component will then be installed. Components that Launcher shows in its panels will be ready to use right after installation.

We think the new Launcher is a great addition to Connext 5.2! Go try it! We’re sure you’ll love it as much as we do!

A Taxonomy for the IIoT

Kings Play Chess On Fine Glass Stools.  Anyone remember this?

For most, that is probably gibberish.  But not for me.  This mnemonic helps me remember the taxonomy of life: Kingdom, Phylum, Class, Order, Family, Genus, Species.

The breadth and depth and variety of life on Earth is overwhelming.  A taxonomy logically divides types of systems by their characteristics.  The Science of Biology would be impossible without a good taxonomy.  It allows scientists to classify plants and animals into logical types, identify commonalities, and construct rules for understanding whole classes of living systems.

The breadth and depth and variety of the Industrial Internet of Things (IIoT) is also overwhelming.  The Science of IIoT Systems needs a similar organized taxonomy of application types.  Only then can we proceed to discuss appropriate architectures and technologies to implement systems.

The first problem is to choose top-level divisions.  In the Animal Kingdom, you could label most animals as “land, sea, or air” animals.  However, those environmental descriptions don’t help much in understanding the animal.  The “architecture” of a whale is not much like that of an octopus, but it is very much like that of a bear.  To be understood, animals must be divided by their characteristics and architecture.

It is also not useful to divide applications by their industries like “medical, transportation, and power.”  While these environments are important, the applicable architectures and technologies simply do not split along industry lines.  Here again, we need a deeper understanding of the characteristics that define the major challenges, and those challenges will determine architecture.

I realize that this is a powerful, even shocking claim.  It implies, for instance, that the bespoke standards, protocols, and architectures in each industry are not useful ways to design the future architectures of IIoT systems.  Nonetheless, it is a clear fact of the systems in the field.  As in the transformation that became the enterprise Internet, generic technologies will replace special-purpose approaches. To grow our understanding and realize the promise of the IIoT, we must abandon our old industry-specific thinking.

So, what can we use for divisions?  What defining characteristics can we use to separate the Mammals from the Reptiles from the Insects of the IIoT?

There are thousands and thousands of requirements, both functional and non-functional, that could be used.  As in animals, we need to find those few requirements that divide the space into useful, major categories.

The task is simplified by the realization that the goal is to divide the space so we can determine system architecture.  Thus, good division criteria are a) unambiguous and b) impactful on the architecture.  That may sound easy, but it is actually very non-trivial.  The only way to do it is through experience.  We are early on our quest.  However, significant progress is within our collective grasp.

From RTI’s extensive experience with nearly 1000 real-world IIoT applications, I suggest a few early divisions below.  To be as crisp as possible, I also chose “metrics” for each division.  The lines, of course, are not that stark.  But the numbers force clarity, and that is critical; without numerical yardsticks (meter sticks?), meaning is too often lost.

IIoT Taxonomy Proposal

Reliability [Metric: Continuous availability must be better than “99.999%”]

We can’t be satisfied with the platitude “highly reliable”.  Almost everything “needs” that.  To be meaningful, we must be more specific about the architectural demands to achieve that reliability.  That requires understanding of how quickly a failure causes problems and how bad those problems are.

We have found that the simplest, most useful way to categorize reliability is to ask: “What are the consequences of unexpected failure for 5 minutes per year?”  (We choose 5 min/yr here only because that is the “5-9s” golden specification for enterprise-class servers.  Many industrial systems cannot tolerate even a few milliseconds of unexpected downtime.)
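
As a quick sanity check on that number: 99.999% availability leaves 0.001% of the year for downtime, and 0.001% of the roughly 525,960 minutes in a year works out to about 5.3 minutes.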

This is an important characteristic because it greatly impacts the system architecture.  A system that cannot fail, even for a short time, must support redundant computing, sensors, networking, and more.  When reliability is truly critical, it quickly becomes a – or perhaps the – key architectural driver.

Real Time [Metric: Response < 100ms]

There are thousands of ways to characterize “real time”.  All systems should be “fast”.  But to be useful, we must specifically understand which speed requirements drive architecture.

An architecture that can satisfy a human user unwilling to wait more than 8 seconds for a website will never satisfy an industrial control that must respond in 2ms.  We find the “knee in the curve” that greatly impacts design occurs when the speed of response is measured in a few tens of milliseconds (ms) or even microseconds (µs).  We will choose 100ms, simply because that is about the potential jitter (maximum latency) imposed by a server or broker in the data path.  Systems that must respond faster than this usually have to be peer-to-peer, and that is a huge architectural impact.

Data Set Scale [Metric: Data set size > 10,000 items]

Again, there are thousands of dimensions to scale, including number of “nodes”, number of applications, number of data items, and more.  We cannot divide the space by all these parameters.  In practice, they are related.  For instance, a system with many data items probably has many nodes.

Despite the broad space, we have found that two simple questions correlate with architectural requirements.  The first is “data set size”, and the knee in the curve is about 10k items.  When systems get this big, it is no longer practical to send every data update to every possible receiver.  So, managing the data itself becomes a key architectural need.  These systems need a “data centric” design that explicitly models the data, thereby allowing selective filtering and delivery.

Team or Application Scale [Metric: number of teams or interacting applications >10]

The second scale parameter we choose is the number of teams or independently-developed applications on the “project”, with a breakpoint around 10.  When many independent groups of developers build applications that must interact, data interface control dominates the interoperability challenge.  Again, this often indicates the need for a data-centric design that explicitly models and manages these interfaces.

Device Data Discovery Challenge [Metric: >20 types of devices with multi-variable data sets]

Some IIoT systems can (or even must) be configured and understood before runtime.  This does not mean that every data source and sink is known, but rather only that this configuration is relatively static.

However, when IIoT systems integrate racks and racks of machines or devices, they must often be configured and understood during operation. For instance, a plant controller HMI may need to discover the device characteristics of an installed device or rack so a user can choose data to monitor.  The choice of “20” different devices is arbitrary.  The point: when there are many different configurations for the devices in a rack, this “introspection” becomes an important architectural need to avoid manual gymnastics.  Most systems with this characteristic have many more than 20 device types.

Distribution Focus [Metric: Fan out > 10]

We define “fan out” as the number of data recipients that must be informed upon change of a single data item.  This impacts architecture because many protocols work through single 1:1 connections.  Most of the enterprise world works this way, often with TCP, a 1:1 session protocol.  Examples include connecting a browser to a web server, a phone app to a backend, or a bank to a credit card company.

However, IIoT systems often need to distribute information to many more destinations.  If a single data item must go to many destinations, the architecture must support efficient multiple updates.  When fan out exceeds 10 or so, it becomes impractical to do this branching by managing a set of 1:1 connections.

Collection Focus [Metric: One-way data flow with fan in > 100]

Systems that are essentially restricted to the collection problem do not share significant data between devices.  They instead transmit copious information to be stored or analyzed in higher-level servers or the cloud.

This has huge architectural impact.  Collection systems can often use a hub-and-spoke “concentrator” or even a cloud-based server design.

Taxonomy Benefits

Defining an IIoT taxonomy will not be trivial.  This blog just scratches the surface.  However, the benefits are enormous.  Resolving these needs will help system architects choose protocols, network topologies, and compute capabilities.  Today, we see designers struggling with issues like server location or configuration, when the right design may not even require servers.  Overloaded terms like “real time” and “thing” cause massive confusion between technologies with no practical use case overlap.

It’s time the Industrial Internet Consortium took on this important challenge.  Its newest Working Group will address this problem, with the goal of clarifying these most basic business and technical imperatives.  I am excited to help kick off this group at the next IIC Members meeting in Barcelona.  If you are interested, contact me (stan@rti.com), Dirk Slama (Dirk.Slama@bosch-si.com), or Jacques Durand (JDurand@us.fujitsu.com).  We will begin by sharing our extensive joint experiences across the breadth of the IIoT.

Code Generator: Experiment with New Features


In previous posts we explained how RTI’s new code generator saves you time with faster code generation. It’s now the default code generator in RTI Connext DDS 5.2.0, and it includes a number of other new features we think you will like.

Did you know that our new code generator is much better at detecting syntax errors?

For instance, the code generator will tell you if you forgot to define a type, showing better error messages. And this is just one example.

Do you want to generate only the type files and your project files without overwriting your publisher/subscriber files?

Try the autoGenFiles option.

Do you want even more control over which files you generate?

The new create and update <typefiles | examplefiles | makefiles> options are your solution.
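
As a rough illustration (the IDL file name and target language here are placeholders, and the exact syntax may differ in your version), generating just the type-support files might look like this:

rtiddsgen -language C++ -create typefiles HelloWorld.idl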

Are you compiling dynamically?

Generate your project files using the shareLib option. It will link your application with RTI’s core dynamic libraries.

Do you want to share private IDLs without showing their actual content?

Try the new obfuscate option.

Are you using unbounded sequences?

Try the unboundedSupport option. Read this blog post to learn more about this new feature.

It’s all easier than ever with the Apache Velocity (VTL) templates. You can check the predefined set of variables available in RTI_rtiddsgen_template_variables.xlsx, located in the Code Generator documentation folder.

As a simple example, imagine you want to generate a new method that prints whether each member of a type is an array or a sequence. It could also print the corresponding dimensions or size. For example, consider this type:

module MyModule {
    struct MyStruct {
        long longMember;
        long arrayMember[2][100];
        // element types and bounds below are assumed to match the output shown next
        sequence<long, 2> sequenceMember;
        sequence<long, 5> arrayOfSequenceMember[28];
    };
};

Our desired print function would be something like this:

void MyModule_MyStruct_specialPrint() {
    printf(" longMember \n");
    printf(" arrayMember is an array [2, 100] \n");
    printf(" sequenceMember is a sequence <2> \n");
    printf(" arrayOfSequenceMember is an array [28] is a sequence <5> \n");
}

Implementing this is straightforward. You would just need to create a few macros in the code generator templates to generate the above code. We start with the main method, which would be something like this:

void ${node.nativeFQName}_specialPrint() {
#specialPrint($node)
}


This method calls the specialPrint macro. That macro iterates over the struct members and prints whether each one is an array or a sequence:

#macro (specialPrint $node)
#*--*##foreach($member in $node.memberFieldMapList)
    printf("$member.name #isAnArray($member) #isASeq($member) \n");
#*--*##end
#end


We just need to define two auxiliary macros to check each case.

#macro (isAnArray $member)
#if($member.dimensionList) is an array $member.dimensionList #end
#end


#macro (isASeq $member)
#if($member.seqSize) is a sequence <$member.seqSize> #end
#end


If you need more information about supported and new features available in Code Generator, check out the Getting Started Guide or the online documentation.

Unbounded Support For Sequences and Strings

When we first designed Connext DDS, deterministic memory allocation was on the top of our priority list. At that time most of our customers used small data types such as sensor and track data. We decided to go with an approach in which we pre-allocated the samples in the DataWriter and DataReader queues to their maximum size. For example:

struct Position {
    double latitude;
    double longitude;
};

struct VehicleTrack {
    string<64> vehicleID; //@key
    Position position;
};

In the previous example, a sample of type VehicleTrack was pre-allocated to its maximum size. Even if vehicleID did not have a length of 64 bytes, Connext DDS pre-allocated a string of 64 bytes to store the sample value.

As our customer base increased, the set of use cases expanded, and with that came the need to be more flexible in our memory allocation policies. For example, customers may use Connext DDS to publish and subscribe to video and audio. Typically these data types are characterized by having unbounded members. For example:

struct VideoFrame {
    boolean keyFrame;
    /* The customer does not necessarily know the maximum 
       size of the data sequence */
    sequence<octet> data;
};

Pre-allocating memory for samples like VideoFrame above may become prohibitively expensive, because the customer is forced to use an artificial and arbitrarily large bound for the variable-size sequence. For example:

struct VideoFrame {
    boolean keyFrame;
    /* Alternatively the customer can use the code generator 
       command-line option -sequenceSize to set an implicit 
       limit for the unbounded sequence */
    sequence<octet, 1024000> data;
};

In Connext DDS 5.2.0, we have introduced support for unbounded sequences and strings.

To support unbounded sequences and strings, the Code Generator has a new command-line option: -unboundedSupport. This new option may only be used when generating code for .NET, C, and C++ (that is, the -language option must be specified for C++/CLI, C#, C, or C++).
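
For instance, enabling unbounded support while generating C++ code would look roughly like this (the IDL file name is just a placeholder):

rtiddsgen -language C++ -unboundedSupport VideoFrame.idl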

With this option, Connext DDS will not pre-allocate the unbounded members to their maximum size. For unbounded members, the generated code will de-serialize the samples by dynamically allocating and de-allocating memory to accommodate the actual size of the unbounded member. Unbounded sequences and strings are also supported with DynamicData and Built-in Types and they integrate with the following tools and services:

  • Routing Service
  • Queuing Service
  • Recording Service in serialized and automatic modes (records in serialized format)
  • Replay Service when recorded in serialized mode
  • Spy
  • Admin Console
  • Toolkit for LabVIEW

The integration with Persistence Service, Database Integration Service and Spreadsheet Add-in for Microsoft Excel is not available at this point.

For additional information, check Chapter 3 Data Types and DDS Data Samples and Chapter 20 DDS Sample-Data and Instance-Data Memory Management in the RTI Connext DDS Core Libraries User’s Manual as well as the User’s Manuals for services and tools.

Visualize your data!

Ok, I have to admit right from the start that I’m very excited about this feature. I’ve wanted a high-performance, platform-independent visualization for DDS data for more than a decade. When I was an RTI customer (we started with the 3.x version), I built a small UI to display data. It used generated code and was quite basic but still useful. Just the other day I heard from a person working on that project that they still use it! I can’t wait to show them what we now ship with Admin Console!

What can you do with it?

First things first, subscribe to your Topic of interest.


If you don’t have a data type, no problem. You can specify it through one (or more) XML files. Use rtiddsgen to generate them from your IDL or XML type file with the -convertToXml option. You can also specify more advanced settings including overriding QoS, setting a content filter, or picking from which DataWriters to receive data.
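
For example, producing the XML type representation from an IDL file is roughly a one-liner (the file name here is just a placeholder):

rtiddsgen -convertToXml MyTypes.idl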


Here you can set the DataWriter filter.


Once you’ve picked your QoS and subscribed to the Topic, you’ll see the instances start filling in from the DataWriter(s). Instances? If you’re not familiar with this DDS concept, an instance is a unique piece of data within a Topic. In the Triangle topic (below), there are 3 instances: “ORANGE,” “MAGENTA,” and “CYAN.” Each instance is shown in its own row. This display (which we call the Topic Data Tab) chooses a few fields to show by default but you can customize this to show the fields that you prefer.


But, I want to look at all of the fields. Of course, who wouldn’t?! To see all of the data, select an instance (a row in the Topic Data Tab) and then look at the Sample Inspector. This view shows all of the fields in a tree view. You can view all nested structures and you can see all of the metadata (SampleInfo) as well!


That’s very helpful but I want to see more than just the current value. There are two more views and they can show historical data. Let’s talk first about the Sample Log. This view shows each sample in its own row (regardless of which instance it is). This view stores sample data (10,000 samples by default) so you can look back at captured data (perhaps after using the pause feature). And this view works with the Sample Inspector so when you select a sample (row), the Sample Inspector displays all of the data fields. And, you can display multiple Topics in this view too!

I mentioned that there were two views which show historical data, the second one is the Time Chart. The Time Chart plots numerical field values as a function of time. You can use this view to compare data fields (either from different instances in the same Topic or instances in any number of other Topics). Or you can use it to look for trends in your data. The Time Chart also stores displayed data (1,000,000 values by default for each trace). You can put the Time Chart in historical mode to view this archived data.

This is also cool but how can I save my data to disk? Each of the views contains an export button. This button will export the data contained in the view into a comma-separated value (CSV) file (Topic Data Tab, Sample Log, & Time Chart) or a text file (Sample Inspector). You can also export the chart as an image from the Time Chart (using the button that looks like a camera).

What about performance? Data Visualization exhibits very good performance characteristics. But I’ll cover that in detail in a future blog post.

Is that all? There are many things that I haven’t covered here. If you want to see more right away, you can check out the Data Visualization => Getting Started section of the Admin Console Help.

Replication and persistence features of RTI Queuing Service

There are many queuing services available, but few support both persistence and replication. If preserving data integrity is vital to your business and you also need high performance or full remote administration, RTI Queuing Service may be just what you need.

When considering replication, persistence or any other aspect of RTI Queuing Service, you should keep in mind the most basic and unique of its features: it uses DDS as the underlying messaging technology. If you don’t know RTI Queuing Service yet but you are familiar with DDS, you will find its concepts and APIs familiar. RTI Queuing Service seamlessly integrates with your existing DDS system. You can also use it to bring the power of DDS to your already existing queuing-based systems.


There are different scenarios where a replicated queuing service can help you. You may need a highly available service that will stay up when part of your system or network goes down. Additionally, you may also need to assure the integrity and consistency of your data when a failure occurs. If you are dealing with a queue of financial transactions, for example, you not only want your system to stay operative most of the time, you also need to know which transactions went through and which didn’t when a failure occurs. A replicated queuing service can keep your system running and at the same time provide a safe place for your data even under failure.

To provide a highly available service, RTI Queuing Service leverages the decentralized architecture and discovery capabilities of DDS. If part of your system fails, the surviving DDS subsystems will still communicate and keep you running. During a failure RTI Queuing Service will allow you to remotely administer the surviving subsystem. You will be able to issue remote administration commands without fear of a final, inconsistent global state once the failing nodes recover.

Remarkably, all RTI Queuing Service replication features also apply to each and every aspect of its remote administration capabilities, and these capabilities are extensive! You can remotely create and destroy queues, see their messages, filter messages using SQL expressions or empty a queue. You will be able to perform these operations during a failure, changing your system configuration with the guarantee that it will end up in a globally consistent state. Not many queuing services (if any) can offer this.

If you not only want a highly available service but also need to ensure data consistency under failure, RTI Queuing Service provides a robust replication protocol. The protocol is based on a level of redundancy provided by the user. This redundancy level applies to messages as well as message operations, such as the acknowledgment or the assignment of a message to a recipient. The focus of the replication protocol is not on redistributing messages to all operating nodes but on ensuring that only messages that are successfully replicated are transacted and that only message operations agreed upon by multiple nodes occur. DDS reliable Quality of Service already ensures that messages will eventually be delivered to the Queuing Service nodes when possible.

RTI Queuing Service makes sure that a queue message producer gets an acknowledgment for a message only if the message was successfully replicated with the desired redundancy level. The message will be negatively acknowledged if the enqueuing process fails to achieve the desired redundancy. Only acknowledged messages will be sent to consumers, and the rest will be consistently discarded across all the nodes. Optionally you can also require your messages to be delivered if the message recipient is known with the desired redundancy level; this is useful when a message has to be redelivered after a failure.

To configure RTI Queuing Service, you are asked for the maximum number of nodes you are planning to run – the number of queue instances. From this number, the replication level is calculated as the smallest integer greater than half the number of queue instances.
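
In other words, the replication level works out to floor(N/2) + 1. A tiny sketch of that arithmetic (just an illustration, not RTI API code):

int replication_level = (queue_instances / 2) + 1;  // smallest integer > N/2
// e.g. 3 queue instances -> level 2, 4 -> 3, 5 -> 3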

Under the hood, the queuing service nodes choose a master as the orchestra conductor for all operations. The master node is chosen automatically and it can change at any time depending on a variety of factors. You can set a master timeout period that controls how long the nodes can remain unable to coordinate with the others before they undergo an internal reconfiguration with a new master election. The election favors the nodes that are most up to date, so there is a high probability that one of your best nodes is elected master. You don’t need to worry about it.


If replication is not sufficient, you may consider whether any of the various persistence modes of RTI Queuing Service fits your needs. Using replication does not guarantee the integrity of your data under a catastrophic failure affecting all or most of your nodes. But your data will survive almost anything if persistence – perhaps combined with replication – is used across multiple nodes. RTI Queuing Service features configuration persistence and two modes of data persistence.

The two data persistence modes supported are quite different and may be useful in different situations. If you have many queues, or you are keeping a very large number of messages in the queues (so large that they will not fit in your volatile memory), you can use a persistence mode based on an underlying database implementation. In this mode, all the messages and their metadata (the message state) are at all times in your database (on your hard drive) and only there. On the other hand, if you do not have a large number of messages in the queues and want better performance, you can choose an alternative mode that keeps the messages’ metadata in both volatile memory and the hard drive, while the messages themselves are kept only in a file system in persistent storage.

Both persistence modes support the familiar hard drive synchronization modes that control whether the data goes straight to the hard drive or is allowed to live for a while in your OS buffers. Hard drive operations are slow, and it is common practice to combine multiple disk writes into one. If you set synchronization to FULL, every single modification of your data or metadata will be on the hard drive the moment it takes place.

RTI Queuing Service also allows you to persist the configuration of your entire system. This is very handy if you use remote administration to create queues or to dynamically configure a large system. If you end up with a very large and finely tuned system, you’ll want to ensure that the configuration isn’t lost.

Now you can benefit from the high-throughput, low-latency capabilities of DDS, the middleware used in many of the most sophisticated real-time systems around the world. And you can do queuing with the highest guarantees for the safety and integrity of your data.

For more information about RTI Queuing Service, please visit https://www.rti.com/products/dds/queuing-service.html.

A new family member: RTI Queuing Service

Author: Javier Povedano

Hey, have you heard the news? RTI is glad to announce a new member in the Connext family: RTI Queuing Service. It brings a bunch of cool new features to satisfy more use cases.

Figure. Typical Publish/Subscribe model vs. message queue model.

With the rise of cloud computing and Software as a Service (SaaS), queuing services have become more popular in recent years. The main purpose of queuing services is to provide a model in which a message is delivered to only one recipient based on certain criteria. When using the queue model, message generators (Producers) rely on a broker for message delivery; the broker delivers messages to Consumers based on a predetermined delivery strategy (for example, round robin). This approach makes queuing services the best candidate for load balancing and dispatching tasks among consumers.

Until the arrival of RTI Queuing Service, the Connext DDS product family was more focused on publish-subscribe and request/reply scenarios. That doesn’t mean it was not already possible to perform one-to-one communication using the request/reply API, or even to implement round-robin delivery strategies at the application level. The difference is that now, with Queuing Service, things become easier: users no longer have to worry about implementing their own scheduling/dispatching strategy.

Apart from the one-to-one communication model, RTI Queuing Service brings a lot of new and exciting features and capabilities:

First, it is built using regular Data Distribution Service Topics, so you can still use the same standard API to communicate with Queuing Service, including its rich set of Quality of Service (QoS) settings. The use of Connext DDS also brings another advantage: users can use existing Connext tools and services to communicate with Queuing Service.

Second, to make things more natural for users familiar with queuing services, RTI also provides a C# wrapper API that introduces four new entities closer to message queuing concepts: QueueProducers, QueueConsumers, QueueRequesters, and QueueRepliers.

Third, it provides a REST-like remote administration API to inspect the status of the queues and samples. With this API, users will be able to create/delete queues, inspect their contents, access stored samples and their delivery status, and flush queues remotely.

Fourth, it provides an extra level of reliability thanks to its persistence capabilities. RTI Queuing Service offers persistence modes that allow undelivered samples to survive system crashes. In this mode, messages are saved on disk until they are delivered and acknowledged by a consumer, so they can be redelivered when the system is restarted after a failure. In addition, the persistent mode also allows the current configuration to persist on disk, so queues created remotely won’t be lost if the system is restarted.

Fifth, it provides a one-of-a-kind replication mechanism to support High Availability (HA) scenarios. When using replication, queues and samples are automatically replicated through multiple RTI Queuing Service instances working in coordination, so in case of failure, another replica can continue the delivery of samples to consumers seamlessly.

With all these capabilities, RTI Queuing Service has all the ingredients to satisfy use cases where load balancing and round-robin message distribution are important.

RTI Queuing Service is available as an add-on product to RTI Connext 5.2.0, on selected platforms.

Where to Find Things in 5.2.0

In 5.2.0, we’ve made some changes to simplify our directory structure, reduce the download size, and make your downloads faster.  In that process we’ve moved some files around, so here’s what you need to know:

Directory Structure Overview

rti_connext_dds-5.2.0/ – Top-level RTI directory for all libraries, tools, and services.
  • bin/ – Scripts to run tools, services, and utilities.  These scripts set up the environment correctly for RTI’s applications.
  • doc/ – Documentation for all installed products, including manuals and APIs.
  • include/ – Header files for all installed products.
  • lib/ – All Connext libraries in the RTI SDK.
      • java/ – Location of jar files.  Update your Java classpath to include these libraries.
      • <architecture>/ – Static SDK libraries you can link into your application and dynamic libraries you can add to your system-specific library path.
  • resource/ – Location of XML files, IDL files, example templates, and additional installers.


All of our host bundles are now installed with installer executables. These should be pretty self-explanatory. We will always place the installation in an “rti_connext_dds-5.2.0” directory in the path you select.

All targets and add-ons are now being distributed as rtipkg files.  You can install rtipkg files using the RTI Package Installer – either through Launcher on the Installation tab, or at the command line using bin/rtipkginstall[.bat].
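
For example, from the top-level installation directory, installing a downloaded package looks something like this (the package file name is just a placeholder, and on Windows you would use rtipkginstall.bat):

bin/rtipkginstall rti_some_addon-5.2.0-target.rtipkg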

Changes to your Build System

rtiddsgen Location

The rtiddsgen script has moved from the scripts directory to the bin directory. If your build system automatically calls rtiddsgen to generate code from your IDL files, you will need to update your build scripts to call rtiddsgen from its new location in the bin directory.


Java Development

If you develop Java applications, download any target bundle for your platform and add the lib/<architecture> directory to your library path. Note that you no longer need to add the jdk suffix to generate example code – instead you specify the target architecture you installed and the language as Java, such as:

rtiddsgen -example i86Win32VS2008 -language Java HelloWorld.idl

On Windows, you must either have the version of Visual Studio installed that matches the target architecture – such as Visual Studio 2008 for libraries in target architecture i86Win32VS2008 – or you can download a version of the Microsoft Redistributable Package that matches the target architecture you have chosen.  The Microsoft Redistributable Packages can be downloaded from Microsoft’s website.

.Net Development

If you are developing for .Net, you should download a target that maps to the version of the Visual Studio and .Net framework you are using.  All the necessary .Net libraries will be inside the target directory with the core libraries.  The targets map to .Net framework versions as shown in the table.

When you generate example code, you can specify the architecture name without specifying a .Net framework version.

  • i86Win32VS2008 / x64Win64VS2008: .Net 2.0
  • i86Win32VS2010 / x64Win64VS2010: .Net 4.0
  • i86Win32VS2012 / x64Win64VS2012: .Net 4.5
  • i86Win32VS2013 / x64Win64VS2013: .Net 4.5.1


The first time you run any RTI script, RTI’s examples will be copied into an rti_workspace directory.  This allows you to build and modify examples even if your installation is in a shared or non-writable location.  The default location where the workspace is copied is:

  • Mac OS X systems: /Users/your user name/rti_workspace/5.2.0/examples
  • Linux and other UNIX-based systems: /home/your user name/rti_workspace/5.2.0/examples
  • Windows systems: your Windows documents folder\rti_workspace\5.2.0\examples

You can choose not to copy the examples or to copy them to another location.  There is information on how to do that in the Getting Started Guide.

On the Floor at NIWeek: Presentations, Demos, New Technologies and Best Product Award!


From in-depth presentations, interactive panel discussions, and technical training sessions to an abundance of networking opportunities with peers and industry leaders, the 21st annual NIWeek conference is alive and kicking!

The conference opened on August 3rd in Austin, Texas, and true to Austin’s roots, the event was peppered with amazing rock music during keynotes and jazz at the Tuesday evening floor show.

Jazz at NIWeek 2015

This energy was infectious as once again NIWeek brought together the brightest minds in engineering and science. More than 3,200 innovators representing a wide spectrum of industries, from automotive and telecommunications to robotics and energy, came to Austin this week. Everyone was looking to discover the latest technology to accelerate productivity for software-defined systems in test, measurement, and control. Hopefully you are attending, because it is a fantastic event! RTI is in booth #1018 as well as in the Pavilion at booth 805L, where we are featuring our LabVIEW Tools Network demo.

Additionally, RTI’s Brett Murphy, our Director of Business Development, is presenting two sessions today:
  • Distributed Hardware-in-the-Loop Simulation With CompactRIO and DDS LabVIEW (2:15-3:15PM)
  • Data Communication Security for the Industrial Internet of Things (4:45-5:45PM)

RTI was also mentioned in the joint Cisco/NI keynote this morning. Both Cisco and RTI are active members of the Industrial Internet Consortium (IIC). Our CEO, Stan Schneider, is on the IIC Steering Committee. The IIC brings together organizations and technologies to accelerate growth of the Industrial Internet by identifying, assembling and promoting best practices. The Cisco/NI keynote highlighted our joint IIC efforts. In addition to the keynote, Cisco also featured RTI in their NI/IIC demo on the show floor.

Cisco featuring RTI in their NI/IIC demo

Lastly, we’re happy to announce that one of our products, RTI DDS Toolkit for LabVIEW, has been awarded the 2015 LabVIEW Tools Network Product of the Year award!

NIWeek 2015 RTI Accepting Award

The toolkit provides fast, scalable and interoperable publish/subscribe messaging for distributed applications. You can use it to share real-time data between LabVIEW applications and any other applications written in C, C++, C# and Java. The resulting solution works over any transport and can scale to hundreds or even thousands of heterogeneous applications across local- and wide-area networks. You can learn more about the product by visiting the NI Tools Network or by watching this brand new whiteboard video:

We hope to see you at the show!

Modern C++ is here!

We are thrilled to announce that the Modern C++ API for RTI Connext DDS is complete and publicly available now with RTI Connext 5.2 (data sheet). A lot of our customers have already experienced a new way to write DDS code through our preview version—we hope you’ll enjoy it too!

This brand-new C++ programming API, based on the ISO/IEC C++ 2003 Language DDS PSM (DDS-PSM-Cxx) specification, brings modern C++ techniques and patterns to DDS, most notably:

  • Generic programming
  • Integration with the standard library
  • Automatic object lifecycle management, with value types and reference types
  • C++11 support: move constructors, initializer lists, lambdas, range for-loops, and more

We’ve also updated the code that rtiddsgen generates for your IDL types.
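
To give a flavor of the new style, here is a minimal sketch of publishing a sample, assuming a Position type generated from IDL and the standard DDS-PSM-Cxx namespaces (not a verbatim example from the release):

#include <dds/dds.hpp>   // umbrella header for the C++ PSM

void publish_position_once()
{
    dds::domain::DomainParticipant participant(0);            // join domain 0
    dds::topic::Topic<Position> topic(participant, "Position");
    dds::pub::DataWriter<Position> writer(
        dds::pub::Publisher(participant), topic);
    writer.write(Position(37.4, -122.1));                     // value types, no manual cleanup
}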

Where can you start?

Ah! If your system is using the previous C++ API and you still want to take advantage of all the other great features and bug fixes in 5.2, don’t worry. It’s still fully supported—now we call it the Traditional C++ API.

Stay tuned for more Modern C++ here at the RTI blogs, coming soon!