Speed Your Time to Market with Connext Pro Tools

It was two weeks until the demo.

We had this single opportunity to build a working microgrid control system that needed to:

  • Run on Intel and ARM processors
  • Target Linux and Windows platforms
  • Include applications written in C, C++, Java, Scala, Lua, and LabVIEW
  • Talk to legacy equipment speaking ModBus and DNP3 protocols
  • Perform real-time control while meeting all of the above requirements

In this post, I’ll talk about the real-world problems we faced and how the tools included in RTI Connext DDS Pro helped us solve them in just a couple of days. Common issues encountered in most projects are highlighted, with specific RTI tools for addressing each. Along the way you’ll find links to supporting videos and articles for those who want a deeper dive. My hope is that you find this a useful jumping-off point for learning how to apply RTI tools to make your DDS development quicker and easier.

The Big Demo

This was the first working demo of the Smart Grid Interoperability Panel’s Open Field Message Bus (OpenFMB), a new way of controlling devices at the edge of the power grid in real time by applying IoT technologies like DDS (see this link for more info).

OpenFMB cartoon

Here’s a block diagram of the system showing hardware architectures, operating systems, and languages:

OpenFMB demo network diagram

As we brought the individual participants onto the network, we encountered a number of problems. Below is a description of each challenge and the tools we used to address it. Scan the list of headings and see if you’ve had to debug any of these issues in your DDS system, then check out the links to learn a few new tips. As you do, think about how you would try to diagnose the problems without the tools mentioned.

Problem: Network configuration

Tools: RTI DDS Ping

The team from Oak Ridge National Labs was working on the LabVIEW GUI that would be the main display.  Their laptop could not see data from any of the clients on the network. We checked the basics to make sure their machine was on the same subnet – always check the basics first!  While the standard ping utility can confirm basic reachability between machines, it doesn’t check that the ports necessary for DDS discovery are open.  The rtiddsping utility does exactly that, and it told us in seconds that the firewall installed on their government-issued laptop was preventing DDS discovery traffic.  For a great rundown on how to check the basics, see this community post.

Problem: Is my app sending data?

Tools: Spy, Admin Console

A common question among the vendors using DDS for the first time was whether their application was behaving properly: Was it sending data at the proper intervals, and did the data make sense? For a quick check, we used the RTI DDS Spy utility. Spy provides a simple subscriber that can filter selectively for specific types and topics, and it can print the individual samples it receives, allowing you to quickly see the data your app is writing.  Every vendor used DDS Spy as a sanity check after initially firing up their application.

Sometimes an update to the same topic can come from multiple publishers in the system. Not sure which one wrote the latest update?  A command line switch for Spy (“-showSampleIdentity”) allows you to see where an update originated.

Spy is a console app that can be deployed on embedded targets for basic testing.  Its small size, quick startup, and simplicity are its main advantages.  Details on usage are here.

Problem: Data type mismatch

Tools: Admin Console, Monitor

One vendor reported that in an earlier test they were seeing data from one of the other apps, and now they were not. Admin Console quickly showed us that a data type mismatch was to blame – that is, two topics with the same name but different data types. These types of mismatches can be difficult to diagnose, especially for large types with many members. Admin Console leverages the data-centricity of DDS to introspect the data types as understood by each application in your system. It then presents both a simplified view and an “equivalent IDL” view that makes it easy to compare the types in side-by-side panes. This is especially valuable in situations where you don’t have the source IDL from every application.

In this case, one vendor had not synced up with the GitHub repository for the latest IDL, so they were working from an older version of the file. They pulled the latest files from GitHub, reran rtiddsgen to generate new type-specific code, and after a quick recompile their app was able to read and write the updated topics.


Admin Console shows data types

Problem: QoS mismatch

Tools: Admin Console, Monitor

Next to discovery, Quality of Service (QoS) mismatches are the most common problem experienced by DDS users during integration. With so many knobs to turn, how do you make sure that settings are compatible? The OpenFMB project had its fair share of QoS mismatches at first. Admin Console spots these quickly and tells you the specific QoS settings that are in conflict. You can even click on the QoS name and go directly to the documentation. Admin Console detects these mismatches using the QoS information exchanged during discovery.


Admin Console identifies a reliability QoS mismatch
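
For readers who want to see what such a mismatch looks like in code, here is a minimal sketch of how one can arise, using the Connext modern C++ API (the type parameter, topic name, and helper function are made up, not part of the OpenFMB demo): a writer that offers best-effort delivery can never match a reader that requests reliable delivery, and Admin Console flags the pair as a Reliability mismatch.

#include <dds/dds.hpp>

// T is assumed to be a type generated from IDL by rtiddsgen.
template <typename T>
void create_mismatched_pair(dds::domain::DomainParticipant& participant)
{
    dds::topic::Topic<T> topic(participant, "MicrogridStatus");

    // The writer offers BEST_EFFORT delivery...
    dds::pub::Publisher publisher(participant);
    dds::pub::qos::DataWriterQos writer_qos;
    writer_qos << dds::core::policy::Reliability::BestEffort();
    dds::pub::DataWriter<T> writer(publisher, topic, writer_qos);

    // ...while the reader requests RELIABLE delivery. The requested QoS is
    // stricter than the offered QoS, so the endpoints never communicate,
    // and Admin Console reports an incompatible Reliability QoS.
    dds::sub::Subscriber subscriber(participant);
    dds::sub::qos::DataReaderQos reader_qos;
    reader_qos << dds::core::policy::Reliability::Reliable();
    dds::sub::DataReader<T> reader(subscriber, topic, reader_qos);
}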

Problem: Is the system functioning as expected?

Tools: Admin Console, Monitor

While Spy provides basic text output for live data, you can’t beat a graph for seeing how data changes over time. For more sophisticated data visualization, we turned to Admin Console. The data visualization feature built into Admin Console was a huge help in quickly determining how the system as a whole was working. It even allowed us to scroll through historical data to better understand how we arrived at the current state. To find out more about data visualization, see this short intro video, or this deep dive video.


Visualize your data with Admin Console

Problem: Performance tuning

Tools: Monitor, Admin Console

When it comes to performance tuning, Monitor should be your go-to tool. Monitor works with a special version of the DDS libraries that periodically publishes real-time performance data from your application. These instrumented libraries are minimally intrusive; Monitor collects and presents the data they publish.

Using Monitor, you can learn about:

  • Transmission and reception statistics
  • Missed deadlines
  • High-water marks on caches
  • QoS mismatches
  • Data type conflicts
  • Samples lost or rejected
  • Loss of liveliness

It’s important to note that not every QoS setting is advertised during discovery. Many QoS settings apply to an application’s local resource management and performance tuning, and these are not sent during discovery. With Monitor you can inspect these, too. For a great introduction to Monitor, check out this video.

Problem: Transforming data in flight

Tools: Prototyper with Lua, DDS Toolkit for LabVIEW

We wanted a large GUI to show what was happening in the microgrid in real time.  The team at Oak Ridge National Labs volunteered to create a GUI in LabVIEW. The DDS Toolkit for LabVIEW allows you to grab data from DDS applications and use it in LabVIEW Virtual Instruments (VIs). There are some limitations, however, as we found out. The Toolkit does not handle arrays of sequences, which some types in the OpenFMB data model use. We needed a quick solution that would allow the LabVIEW VI to read these complex data types.

One of the cool new tools in the Connext DDS Pro 5.2 toolbox is Prototyper with Lua. Prototyper allows you to quickly create DDS-enabled apps with little to no programming: define your topics and domain participants in XML, add a simple Lua script, and you can be up on a DDS domain in no time. (Check out Gianpiero’s blog post on Prototyper)

Back at the hotel one evening I wrote a simple Lua script that allows Prototyper to read the complex DDS topics containing arrays of sequences and then republish them to a different, flattened topic for use by the LabVIEW GUI. I was able to test it offline using live data recorded earlier in the lab, which brings us to…

Problem: Disconnected development

Tools: Record, Replay, Prototyper with Lua

A geographically dispersed development team built the OpenFMB demo. With the exception of those few days in Knoxville, no one on the team had access to all the components in the microgrid at one time. So how do you write code for your piece of the puzzle when you don’t have access to the other devices in the system?

When I worked on the Lua bridge for the LabVIEW GUI, I used the Connext Pro Record and Replay services.  In the lab, I had recorded about 10 minutes of live data as we ran the system through all the use cases.  Later that evening in the hotel, I was able to play this data back as I worked on the Lua scripts.  Replay allows you to selectively play back topics, looping the playback so it runs continuously.  You can also choose to play the data at an accelerated rate – this is a huge time saver that enables you to simulate hours’ or days’ worth of runtime in just a few minutes.


Recording Console

One of the really neat things Prototyper does once it’s running is to periodically reload the Lua script.  This made developing the bridge to LabVIEW very quick: Replay played data continuously in an accelerated mode; I had an editor open on the Lua script; and as I made and saved changes they were instantly reflected in Prototyper which was running constantly – no need to restart to see changes to the script.  The conversion script was done in just a couple of hours.

Prototyper also came in handy for quickly creating apps to generate simulated data.  The LabVIEW GUI was developed entirely offline without any of the real-world devices, using some topics generated by the Replay services and others that were bridged or simulated with Prototyper.  I’d email a simulator script to ORNL, they’d do some LabVIEW work and send me an updated VI, and then I’d run that locally to verify it.   ORNL did an amazing job integrating real-time data from the DDS domain along with visual elements from the SGIP cartoons, and the GUI was the centerpiece of the demo.


The final GUI written in LabVIEW

Main Takeaways

When we showed up in New Orleans a couple weeks later, the entire system was brought up in about 30 minutes, which is remarkable considering some of the applications (like the LabVIEW GUI) had never even been on a network with the actual hardware. Everything just worked.

The rich set of tools provided by RTI Connext DDS Pro allowed us to solve our integration problems quickly during the short week in Knoxville, and to carry on development at many remote locations. Admin Console, Monitor, DDS Ping, and DDS Spy got our system up and running.  Record, Replay, and Prototyper made it possible for remote development teams to work in the absence of real hardware.  DDS Toolkit for LabVIEW enabled us to create a sophisticated GUI quickly.  And even after the event, we can continue to do development and virtual demos using these tools.

Jedi Mind Trick

My CEO Stan Schneider closed our 2016 CKO (Corporate Kickoff) with a Jedi Mind Trick.

CEO: “You will leave kickoff truly inspired and have a great year.”

Audience (me included): “I will leave kickoff truly inspired and have a great year.”

CEO: “Drop your weapon on the way out.”

Maybe this Jedi mind trick is why I am posting this, but hey, I got an award for exceeding quota in 2015 (thank you, customers!). This is in no small part due to the fact that RTI employees develop and bring to market a best-in-class product that helps our customers, who happen to love us, solve challenging problems. What kinds of challenging problems? To name a few: enabling distributed systems, allowing disparate systems to work together, and ushering in the digital transformation. This is a big deal! This transformation is projected to be the next big economic value-creating engine for the universe (the previous record holder is the Internet), so it’s pretty easy to be inspired about what we are going to accomplish this year.

Check out what some of our customers develop with RTI Connext DDS technology at https://www.rti.com/industries/overview.html.

Happy Selling in 2016!

– Nando Ramamirtham

P.S. RTI is hiring so you can be inspired like me. Check out our careers page, and if you have the talent and are interested in working at RTI, drop me a message on LinkedIn and I will submit your LinkedIn profile to our hiring team.

Enabling Autonomous Cars


An autonomous car is a great example of a highly distributed dynamic system, where component objects continuously make real-time local decisions based on system-wide constraints and approximate global state. DDS evolved to specifically address this type of system, and RTI has become a trusted expert assisting the innovators of future autonomous cars.

The ease of integration and flexible, reliable, and fast publish-subscribe data model of the RTI Connext DDS middleware are uniquely suited to addressing many of the toughest challenges posed by autonomous cars:

  • Vehicle subsystem integration and control, spanning driving control, safety, infotainment, and diagnostic functions
  • Inter-vehicle interactions, for collision avoidance and optimized travel experiences
  • Tracking and control functions, for fleet management, traffic monitoring and management, crisis management, and government agency coordination
  • Sensor and camera data aggregation at millisecond speeds
  • Local and remote feedback loops
  • Reliable communications over unreliable channels (for example, wireless, cellular)
  • Ability to operate within redundant environments (intelligently delivering only one copy of data)
  • Rapid time to market for safety-certifiable infrastructure, using RTI Connext DDS Cert

DDS within connected vehicle architecture

How DDS Maps to Autonomous Car Requirements

Unlike other connectivity middleware, DDS emerged more than 10 years ago to address physics-speed connectivity requirements. Today, DDS remains the only middleware capable of satisfying the most stringent requirements including:

  • Reliability. Within an autonomous car, even five milliseconds of downtime can be a disaster. DDS implements natural redundancy to ensure continued operation.
  • Performance. For the system components that need millisecond or microsecond response, DDS provides fast peer-to-peer communication.
  • Integration at scale. Autonomous cars integrate many applications and deal with thousands of addressable data items during normal operation. Data-centric DDS eases complex data flow within these types of large-scale systems.

Unlike message-centric models, data-centricity offers superior modularity, simplicity, and scalability.

To minimize overhead, the DDS publish-subscribe model delivers:

  • Fine control of quality of service (QoS) parameters including reliability, bandwidth control, delivery deadlines, liveliness status, resource limits, and security
  • Explicitly managed communications data model, with a choice of connection types
  • Data centricity, with inherent understanding about the contents of the information being managed and shared
  • Inherent automation (no hard-coded interactions between applications and devices)
  • Device discovery (easy add-on of new devices without any configuration changes required)

Compared to traditional point-to-point communications, DDS offers a superior databus with plug-and-play simplicity, scalability, and an architecture that can evolve while maintaining exceptional performance levels. The scalability and integration capacity of DDS are also instrumental in enabling a car’s connections with other vehicles and its environment, including external systems such as traffic monitoring.

To learn more about RTI & DDS in autonomous car design, RSVP to attend the upcoming webinar and download our latest white paper!


RTI’s 2015 and a Peek at 2016

Hello RTI Customer!

I will always fondly remember 1999 … at the peak of the dotcom boom. Our company, then focused on tools, was one of the fastest-growing in the frothy Silicon Valley market. The dawn of The Internet age was exciting, and we were along for the ride.

2015 may not have matched the hyperactive dotcom era. But, I will also always remember it as a standout year. It was a real turning point for RTI. It was by far our best sales and strategic year since the dotcom days. In particular:

  • Our expanded sales team turned in an impressive performance. We grew sales nearly 30%, easily exceeding our goals. And, two years in, it’s great to see customers flocking to our fair, simple and open subscription pricing.
  • The Industrial IoT and the Industrial Internet Consortium created a hot market. We are perhaps the best positioned of all small companies to ride and lead the IIoT wave.
  • Our great new products and features like security, better tools, queuing, easier installation, and safety certification help us explore this market like no competitor.
  • We now have experience with about 1000 applications, including surgical robotics, autonomous cars, drones, emergency medical systems, automotive testing, imaging, communications, operating room integration, video sharing, grid control, cancer treatment, oil & gas drilling, ships, wind turbines, avionics, broadcast television, air traffic control, SCADA, robotics, defense, and on and on. And on. Wow! The IIoT truly spans all industries.
  • We shared our story in dozens of webinars, conferences, and trade shows. My favorite? The Inside Story: GE Healthcare’s IIoT Architecture.

So, what about 2016? We expect another strong sales growth year. With our great new “Service Delivery Partners” like Tech Mahindra, we will offer a more complete solution. We will drive product quality and coverage to ensure we can meet our customers’ demanding use cases. We will hire many new teammates in sales, business development, services, engineering, and marketing (watch www.rti.com/jobs). These new resources will help us serve you, our customers, with better products and care.

Back during the dotcom boom, nobody could really foresee the transformative impact of The Internet. The shocking truth: the IIoT smart machine era will be an even bigger transformation. We are at the beginning of a new world of intelligent distributed systems. The IIoT will change every industry, every life, every application, and every job.

RTI is a real leader of this transformation. Our fundamental purpose is “To enable and realize the potential of smart machines to serve mankind.” We are now designed into well over $1 trillion worth of “things.” We are saving lives, improving efficiency, and ensuring reliability across an amazing slice of the new world.

To deliver on our purpose, we understand that, in the end, we must earn your trust. We accept that as a fundamental responsibility. I am continually grateful for your faith in us as your partner on the IIoT adventure.

Thank you,

Stan Schneider, CEO RTI


Now is the Time to Migrate from PrismTech OpenSplice to RTI Connext DDS

With the recent acquisition of PrismTech by the Taiwanese company ADLINK, we are seeing increased demand for porting from OpenSplice to RTI Connext DDS. One of the benefits of going with a standard like Data Distribution Service for Real-Time Systems (DDS) is that you have options if your middleware supplier becomes unreliable for any reason. That’s why the DDS community went to so much trouble to develop more than just a wire protocol standard. The DDS specification also includes API definitions, with the explicit goal of making it easier to port.

In particular, DDS specifications include both a PIM (platform independent model) and a set of language PSMs (platform specific models). The PIM defines all the important user-visible API concepts including the DDS Entities (DomainParticipant, Topic, Publisher, Subscriber, etc.), their operations and behavior, the QoS, the listeners, and so on. All DDS implementations use the PIM. That means that the structure and major options in your code will translate unchanged to a different DDS implementation.

The PSMs are much newer, and they are only unambiguously defined for C++ and Java. For other programming languages, the APIs come from the IDL-derived mappings (also standard). If you used one of the new standard language PSM APIs, it’s even easier to port your application. Even if you didn’t, the common PIM makes it a straightforward process. Of course, in general, there are some differences between implementations, both API variability and “non-standard” options, features, and configuration. Still, compared to porting to a new middleware architecture, the work required to port is minimal.
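
To see why, here is a minimal publisher written against the standard C++ PSM; it exercises only PIM concepts (participant, topic, publisher, writer), which is why its structure carries over between DDS implementations. The header name, topic name, and sample values are assumptions; the Flight type corresponds to the IDL shown further below, run through rtiddsgen.

#include <dds/dds.hpp>
#include "Flight.hpp"  // hypothetical header generated by rtiddsgen from the Flight IDL below

int main()
{
    // Each entity below is defined by the DDS PIM, so the same structure
    // appears, with only minor syntactic differences, in any compliant DDS API.
    dds::domain::DomainParticipant participant(0);
    dds::topic::Topic<Flight> topic(participant, "FlightStatus");
    dds::pub::Publisher publisher(participant);
    dds::pub::DataWriter<Flight> writer(publisher, topic);

    Flight sample;
    sample.airlineCode("UA");
    sample.flightNumber(901);
    writer.write(sample);

    return 0;
}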

How hard is it to port an existing application from OpenSplice DDS to RTI Connext DDS in practice?

It’s not trivial, but it’s not that bad, either. One significant difference between these two SDKs is in the IDL syntax. Although most of the IDL syntax is identical, there is a difference in how “key attributes” are indicated. Connext DDS uses the DDS-XTYPES standard format consisting of a “@Key” annotation placed inside a comment. OpenSplice uses a (non-standard) “#pragma keylist” directive. We can see this difference in the example below.

OpenSplice DDS IDL:

struct Coordinates {
   float latitude;
   float longitude;
   float altitude;
};

struct Flight {
   string<32>  airlineCode;
   long        flightNumber;
   string<3>   departureAirport;
   string<3>   arrivalAirport;
   long long   departureTime;
   Coordinates currentPosition;
};

#pragma keylist Flight airlineCode flightNumber

Equivalent Connext DDS IDL:

struct Coordinates {
   float latitude;
   float longitude;
   float altitude;
};

struct Flight {
   string<32>  airlineCode;  //@Key
   long        flightNumber; //@Key
   string<3>   departureAirport;
   string<3>   arrivalAirport;
   long long   departureTime;
   Coordinates currentPosition;
};

The first step in the migration is to modify the IDL files replacing #pragma keylist with the //@Key annotation, as shown above.

Once the IDL file(s) have been migrated, the rtiddsgen tool can be used to generate all the network marshaling/unmarshaling code. This allows the data to be understood by any computer, independent of processor architecture, operating system, and programming language. The rtiddsgen tool is included with the Connext DDS SDK. In addition to generating the network marshaling code, rtiddsgen can generate example code and makefiles / build projects for the programming languages and platforms of your choice.

Beyond this there are a few application code edits that may be required. These affect the initialization of the TypeSupport, the use of the ReturnCode_t type, the use of the length() operation on types String and Sequence, and the way sequences are passed to the DataReader read/take operations.
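
To give a flavor of those touch points, here is a rough sketch of what type registration and a sequence-based take look like on the Connext side, using the classic C++ API and the Flight type from the IDL above. The header names and the helper function are hypothetical; this is an illustration of the calls involved, not a line-by-line porting guide.

#include "Flight.h"         // hypothetical rtiddsgen output for the Flight IDL above
#include "FlightSupport.h"  // generated type support (classic C++ API)
#include "ndds/ndds_cpp.h"

DDS_ReturnCode_t read_flights(DDSDomainParticipant* participant,
                              FlightDataReader* reader)
{
    // Register the generated type (normally done once, before creating the Topic).
    DDS_ReturnCode_t retcode = FlightTypeSupport::register_type(
        participant, FlightTypeSupport::get_type_name());
    if (retcode != DDS_RETCODE_OK) {
        return retcode;
    }

    // Sequences are passed explicitly to take() and must be returned afterwards.
    FlightSeq data_seq;
    DDS_SampleInfoSeq info_seq;
    retcode = reader->take(
        data_seq, info_seq, DDS_LENGTH_UNLIMITED,
        DDS_ANY_SAMPLE_STATE, DDS_ANY_VIEW_STATE, DDS_ANY_INSTANCE_STATE);

    if (retcode == DDS_RETCODE_OK) {
        for (int i = 0; i < data_seq.length(); ++i) {
            if (info_seq[i].valid_data) {
                // ... use data_seq[i] ...
            }
        }
        reader->return_loan(data_seq, info_seq);
    }
    return retcode;
}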

RTI has experience doing this! Some RTI customers, including many of our largest and happiest customers, started with OpenSplice before they discovered the performance, quality, tools, and top-flight support from RTI. So, before you start this effort, contact us. We have a migration SDK including compatible headers, examples, and more. Better yet, we can offer you a cost-effective service to help you transition quickly and correctly.

Best of RTI: Do You Like to Watch?

As we head into the new year, we’d like to take a moment to highlight some of our top webinars. After sorting through 20 webinars and ranking each based on attendance, we bring to you our top five webinars of 2015. A huge shout-out goes to our knowledgeable and passionate experts at RTI, in addition to our great customers and partners; through our webinars they were able to shed much light and spark a lot of interest in the growing space of the Industrial Internet of Things.

Now, we present to you, the list:

  • Panel Discussion – Exploring IoT: Silicon, Software, Security, and Sensors
    Presented February 23, 2015 | Sponsors: Eurotech, Flexera, McAfee, RTI, ThingWorx
    Diverse panel of experts discuss IoT applications and dissect what puts the “smart” in IoT devices.
  • Interoperability and the Internet of Things – To standardize or not to standardize?
    Presented March 12, 2015 | Sponsors: PrismTech, RTI, ThingWorx
    Networking experts consider the current connectivity landscape and define approaches that enable ubiquitous connectivity for the IoT.
  • Blueprint for the Industrial Internet of Things
    Presented April 23, 2015 | Sponsors: Industrial Internet Consortium, Cisco, RTI
    This talk examines “data centricity,” an architectural feature of the IIC’s guidance to enable the IIoT vision. Successful use cases in medical, power, transportation, and industrial control applications provide support for this new era of connectivity.
  • Blueprint for the Industrial Internet: The Architecture
    Presented July 9, 2015 | Speaker: Stan Schneider, CEO, RTI
    Stan dives further into data centricity and pulls from real use cases to emphasize the promise of controlled and secure data: a new era of connectivity to unify systems top and bottom across industries.
  • How to Build the Connectivity Architecture for the Industrial Internet of Things (IoT)
    Presented February 25, 2015 | Speaker: Rajive Joshi, Ph.D. Principal Solution Architect, RTI
    Rajive describes the core connectivity architecture model for the Industrial IoT and shows how the Data Distribution Service (DDS) messaging standard addresses its unique requirements. Concepts will be illustrated using a scalable Industrial IoT application architecture that shares real-time data between mobile devices and the cloud and provides access via thin clients (web browser).

We hope you’ll enjoy these on-demand webinars. Feel free to pass them on to your colleagues!


Modern Asynchronous Request/Reply with DDS-RPC

Complex distributed systems often use multiple styles of communication, or interaction patterns, to implement functionality. One-to-many publish-subscribe, one-to-one request-reply, and one-to-one queuing (i.e., exactly-once delivery) are the most common. RTI Connext DDS supports all three. Two of them are available as standard specifications from the Object Management Group (OMG): the DDS specification (duh!) and the newer Remote Procedure Call (RPC) over DDS (DDS-RPC) specification.

The DDS-RPC specification is a significant generalization of RTI’s own Request-Reply API in C++ and Java. There are two major differences: (1) DDS-RPC includes code generation from the IDL interface keyword and (2) asynchronous programming using futures.

Let’s look at an example straight from the DDS-RPC specification.

module robot 
{
  exception TooFast {};
  enum Command { START_COMMAND, STOP_COMMAND };
  struct Status {
    string msg;
  };

  @DDSService
  interface RobotControl 
  {
    void  command(Command com);
    float setSpeed(float speed) raises (TooFast);
    float getSpeed();
    void  getStatus(out Status status);
  };

}; //module robot

It’s pretty obvious what’s going on here. RobotControl is an IDL interface to send commands to a robot. You can start/stop it, get its status, and control its speed. The setSpeed operation returns the old speed when successful. The robot knows its limits and if you try to go too fast, it will throw a TooFast exception right in your face.

To bring this interface to life you need a code generator. As of this writing, however, rtiddsgen coughs at the interface keyword. It does not recognize it. But sometime in the not-so-distant future it will.

For now, we have the next best thing. I’ve already created a working example of this interface using hand-written code.  The API of the hand-written code matches exactly with the standard bindings specified in DDS-RPC. As a matter of fact, the dds-rpc-cxx repository is an experimental implementation of the DDS-RPC API in C++. Look at the normative files to peek into all the gory API details.

Java folks… did I mention that this is a C++ post? Well, I just did… But do not despair. The dds-rpc-java repository has the equivalent Java API for the DDS-RPC specification. However, there’s no reference implementation. Sorry!

The Request/Reply Style Language Binding

DDS-RPC distinguishes between the Request/Reply and the Function-Call style language bindings. The Request-Reply language binding is nearly equivalent to RTI’s Request/Reply API with some additional asynchronous programming support. More on that later.

Here’s a client program that creates a requester to communicate with a RobotControl service. It asks for the current speed and increases it by 10 via getSpeed/setSpeed operations. The underlying request and reply topic names are “RobotControlRequest” and “RobotControlReply”. No surprise there.

RequesterParams requester_params = 
  dds::rpc::RequesterParams()
    .service_name("RobotControl")
    .domain_participant(...); 

Requester<RobotControl_Request, RobotControl_Reply> 
  requester(requester_params); 

// helper class for automatic memory management 
helper::unique_data<RobotControl_Request> request; 
dds::Sample<RobotControl_Reply> reply_sample; 
dds::Duration timeout = dds::Duration::from_seconds(1); 
float speed = 0; 

request->data._d = RobotControl_getSpeed_Hash; 
requester.send_request(*request); 

while (!requester.receive_reply( 
               reply_sample, 
               request->header.requestId, 
               timeout)); 

speed = reply_sample.data().data._u.getSpeed._u.result.return_; 
speed += 10; 
request->data._d = RobotControl_setSpeed_Hash; 
request->data._u.setSpeed.speed = speed; 

requester.send_request(*request); 

while (!requester.receive_reply( 
                reply_sample, 
                request->header.requestId, 
                timeout)); 

if(reply_sample.data().data._u.setSpeed._d == robot::TooFast_Ex_Hash) 
{
  printf("Going too fast.\n"); 
} 
else 
{ 
  speed = reply_sample.data().data._u.setSpeed._u.result.return_; 
  printf("New Speed = %f", speed); 
}

There’s quite a bit of ceremony to send just two requests to the robot. The request/reply style binding is lower-level than the function-call style binding. The responsibility of packing and unpacking data from the request and reply samples falls on the programmer. An alternative, of course, is to have the boilerplate code auto-generated. That’s where the function-call style binding comes into the picture.

The Function-Call Style Language Binding

The same client program can be written in much more pleasing way using the function-call style language binding.

robot::RobotControlSupport::Client 
  robot_client(rpc::ClientParams()
                 .domain_participant(...)
                 .service_name("RobotControl"));
float speed = 0;

try
{
  speed = robot_client.getSpeed();
  speed += 10;
  robot_client.setSpeed(speed);
}
catch (robot::TooFast &) {
  printf("Going too fast!\n");
}

Here, a code generator is expected to generate all the necessary boilerplate code to support natural RPC-style programming. The dds-rpc-cxx repository contains the code necessary for the RobotControl interface.

Pretty neat, isn’t it?

Not quite… (and imagine some ominous music and dark skies! Think Mordor to set the mood.)

An Abstraction that Isn’t

The premise of RPC, the concept, is that accessing a remote service can be and should be as easy as a synchronous local function call. The function-call style API tries to hide the complexity of network programming (serialization/deserialization) behind pretty looking interfaces. However, it works well only until it does not…

Synchronous RPC is a very poor abstraction of latency. Latency is a hard and insurmountable problem in network programming. For reference see the latency numbers every programmer should know.

If we slow down a computer to a time-scale that humans understand, it would take more than 10 years to complete the above synchronous program, assuming we were controlling a robot on a different continent. Consider the following table, taken from the Principles of Reactive Programming course on Coursera, to get an idea of what a human-friendly time-scale might look like.

Latency numbers at a human-friendly time-scale

The problem with synchronous RPC is not only that it takes very long to execute but also that the calling thread is blocked doing nothing for that long. What a waste!

Dealing with failures of the remote service is also a closely related problem. But for now let’s focus on the latency alone.

The problem discussed here isn’t new at all. The solution, however, is new and quite exciting, IMO.

Making Latency Explicit … as an Effect

The DDS-RPC specification uses language-specific future<T> types to indicate that a particular operation is likely going to take a very long time. In fact, every IDL interface gives rise to sync and async versions of the API and allows the programmers to choose.

The client-side generated code for the RobotControl interface includes the following asynchronous functions.

class RobotControlAsync
{
public:

  virtual dds::rpc::future<void> command_async(const robot::Command & command) = 0;
  virtual dds::rpc::future<float> setSpeed_async(float speed) = 0;
  virtual dds::rpc::future<float> getSpeed_async() = 0;
  virtual dds::rpc::future<robot::RobotControl_getStatus_Out> getStatus_async() = 0;

  virtual ~RobotControlAsync() { }
};

Note that every operation, including those that originally returned void, returns a future object, which is a surrogate for the value that might be available in the future. If an operation reports an exception, the future object will contain the same exception and the user will be able to access it. The dds::rpc::future maps to std::future in C++11 and C++14 environments.

As it turns out, C++11 futures allow us to separate invocation from execution of remote operations but retrieving the result requires waiting. Therefore, the resulting program is barely an improvement.

try
{
  dds::rpc::future<float> speed_fut = 
    robot_client.getSpeed_async(); 
  // Do some other stuff  
  while(speed_fut.wait_for(std::chrono::seconds(1)) == 
        std::future_status::timeout);
  
  speed = speed_fut.get();
  speed += 10;
  
  dds::rpc::future<float> set_speed_fut = 
    robot_client.setSpeed_async(speed);
  // Do even more stuff  
  while(set_speed_fut.wait_for(std::chrono::seconds(1)) == 
        std::future_status::timeout);
  
  set_speed_fut.get();
}
catch (robot::TooFast &) {
  printf("Going too fast!\n");
}

The client program is free to invoke multiple remote operations (say, to different services) back-to-back without blocking but there are at least three problems with it.

  1. The program has to block in most cases to retrieve the results. In cases where the result is available before blocking, there’s problem #2.
  2. It is also possible that while the main thread is busy doing something other than waiting, the result of the asynchronous operation becomes available. If no thread is waiting to retrieve the result, that chain of computation (i.e., adding 10 and calling setSpeed) won’t make progress. This is essentially what’s known as continuation blocking, because the subsequent computation is blocked due to missing resources.
  3. Finally, the programmer must also correlate requests with responses, because in general the order in which the futures become ready is not guaranteed to match the order in which the requests were sent. Some remote operations may take longer than others. To resolve this non-determinism, programmers may have to implement state machines very carefully.

For the above reasons, this program is not a huge improvement over the request/reply style program at the beginning. In both cases, invocation is separate from retrieval of the result (i.e., both are asynchronous) but the program must wait to retrieve the results.

That reminds me of the saying:

“Blocking is the goto of Modern Concurrent Programming”

—someone who knows stuff

In the multicore era, where programs must scale to utilize the underlying parallelism, any blocking function call robs the program of the opportunity to scale. In most cases, blocking will consume more threads than strictly necessary to execute the program. In responsive UI applications, blocking is a big no-no. Who has tolerance for unresponsive apps these days?

Composable Futures to the Rescue

Thankfully people have thought of this problem already and proposed improved futures in the Concurrency TS for the ISO C++ standard. The improved futures support

  • Serial composition via .then()
  • Parallel composition via when_all() and when_any()

A lot has been written about improved futures. See Dr. Dobb’s, Facebook’s Folly, and monadic futures. I won’t repeat that here but an example of .then() is in order. This example is also available in the dds-rpc-cxx repository.

for(int i = 0;i < 5; i++)
{
  robot_client
    .getSpeed_async()
    .then([robot_client](future<float> && speed_fut) {
          float speed = speed_fut.get();
          printf("getSpeed = %f\n", speed);
          speed += 10;
          return remove_const(robot_client).setSpeed_async(speed);
      })
    .then([](future<float> && speed_fut) {
        try {
          float speed = speed_fut.get();
          printf("speed set successfully.\n");
        }
        catch (robot::TooFast &)
        {
          printf("Going too fast!\n");
        }
      });
}

printf("Press ENTER to continue\n");
getchar();

The program with .then() sets up a chain of continuations (i.e., closures passed in as callbacks) to be executed when the dependent futures are ready with their results. As soon as the future returned by getSpeed_async is ready, the first closure passed to the .then() is executed. It invokes setSpeed_async, which returns yet another future. When that future completes, the program continues with the second callback that just prints success/failure of the call.

Reactive Programming

This style of chaining dependent computations and constructing a dataflow graph of asynchronous operations is at the heart of reactive programming. It has a number of advantages.

  1. There’s no blocking per request. The program fires a series of requests, sets up a callback for each, and waits for everything to finish (at getchar()).
  2. There’s no continuation blocking, because often the implementation of future is such that the thread that sets the value of the promise object (the callee side of the future) continues with callback execution right away.
  3. Correlation of requests with replies is implicit, because each async invocation produces a unique future and its result is available right in the callback. Any state needed to complete the execution of the callback can be captured in the closure object.
  4. Requires no incidental data structures, such as state machines and std::map for request/reply correlation. This benefit is a consequence of chained closures.

This fluid style of asynchronous programming, enabled by composable futures and lambdas, is quite characteristic of modern asynchronous programming in languages such as JavaScript and C#. CompletableFuture in Java 8 also provides the same pattern.

The Rabbit Hole Goes Deeper

While serial composition of futures (.then) looks cool, any non-trivial asynchronous program written with futures quickly gets out of hand due to callbacks. The .then function restores some control over a series of asynchronous operations at the cost of the familiar control-flow and debugging capabilities.

Think about how you might write a program that speeds up the robot from its current speed to some MAX in increments of 10 by repeatedly calling getSpeed/setSpeed asynchronously and never blocking.

Here’s my attempt.

dds::rpc::future<float> speedup_until_maxspeed(
  robot::RobotControlSupport::Client & robot_client)
{
  static const int increment = 10;

  return
    robot_client
      .getSpeed_async()
      .then([robot_client](future<float> && speed_fut) 
      {
        float speed = speed_fut.get();
        speed += increment;
        if (speed <= MAX_SPEED)
        {
          printf("speedup_until_maxspeed: new speed = %f\n", speed);
          return remove_const(robot_client).setSpeed_async(speed);
        }
        else
          return dds::rpc::details::make_ready_future(speed);
      })
      .then([robot_client](future<float> && speed_fut) {
        float speed = speed_fut.get();
        if (speed + increment <= MAX_SPEED)
          return speedup_until_maxspeed(remove_const(robot_client));
        else
          return dds::rpc::details::make_ready_future(speed);
      });
}

// wait for the computation to finish asynchronously
speedup_until_maxspeed(robot_client).get();

This program is unusually complex for what little it achieves. The speedup_until_maxspeed function appears to be recursive, as the second lambda calls the function again if the speed is not high enough. In reality the caller’s stack does not grow; only heap allocations for the futures’ shared state are made as successive calls to getSpeed/setSpeed occur.

The next animation might help understand what’s actually happening during execution. Click here to see a larger version.

Animation: execution of the .then() chain

The control flow in a program with heavy .then usage is going to be quite hard to understand, especially when there are nested callbacks, loops, and conditionals. We lose the familiar stack-trace because internally .then stitches together many small program fragments (lambdas) that have only logical continuity but awkward physical and temporal continuity.

Debugging becomes harder too. To understand more about what’s hard about it, I fired up the Visual Studio debugger and stepped the program through several iterations. The call-stack appears to grow indefinitely while the program is “recursing”. But note there are many asynchronous calls in between. So the stack isn’t really growing. I tested with 100,000 iterations and the stack did not pop. Here’s a screenshot of the debugger.

Debugger call stack without await

So, .then() seems like a mixed blessing.

Wouldn’t it be nice if we could dodge the awkwardness of continuations, write a program like it’s synchronous, but execute it fully asynchronously?

Welcome to C++ Coroutines

Microsoft has proposed a set of C++ language and library extensions called Resumable Functions that helps you write asynchronous code that looks synchronous, with familiar loops and branches. The latest proposal as of this writing (N4402) includes a new await keyword, and its implementation relies on the improved futures we discussed already.

Update: The latest C++ standard development suggests that the accepted keyword will be coawait (for coroutine await).

The speedup_until_maxspeed function can be written naturally as follows.

dds::rpc::future<void> test_iterative_await(
  robot::RobotControlSupport::Client & robot_client)
{
  static const int inc = 10;
  float speed = 0;

  while ((speed = await robot_client.getSpeed_async())+inc <= MAX_SPEED)
  {
    await robot_client.setSpeed_async(speed + inc);
    printf("current speed = %f\n", speed + inc);
  }
}

test_iterative_await(robot_client).get();

I’m sure C# programmers will immediately recognize that it looks quite similar to async/await in .NET. C++ coroutines bring a popular .NET feature to native programmers. Needless to say, such a capability is highly desirable in C++, especially because it makes asynchronous programming with DDS-RPC effortless.

The best part is that compiler support for await is already available! Microsoft Visual Studio 2015 includes an experimental implementation of resumable functions. I have created several working examples in the dds-rpc-cxx repository. The examples demonstrate await with both the Request-Reply style language binding and the Function-Call style language binding in the DDS-RPC specification.

Like the example before, I debugged this one too. It feels quite natural to debug because, as one would expect, the stack does not appear to grow indefinitely. It’s like debugging a loop, except that everything is running asynchronously. Things look pretty solid from what I could see! Here’s another screenshot.

Debugger call stack with await

Concurrency

The current experimental implementation of DDS-RPC uses a thread-per-request model to implement concurrency. This is a terrible design choice, but there’s a purpose: it’s very quick to implement. A much better implementation would use some sort of thread pool and an internal queue (i.e., an executor). The Concurrency TS is considering adding executors to C++.

Astute readers will probably realize that the thread-per-request model implies that each request completes in its own thread, and therefore a question arises regarding the thread that executes the remaining code. Is the code following await required to be thread-safe? How many threads might be executing speedup_until_maxspeed at a time?

A quick test (with rpc::future wrapping PPL tasks) of the above code revealed that the code following await is executed by two different threads. These two threads are never the ones created by the thread-per-request model. This implies that there’s a context switch from the thread that received the reply to the thread that resumed the test_iterative_await function. The same behavior is observed in the program with explicit .then calls. Perhaps the resulting behavior is dependent on the actual future/promise types in use. I also wonder if there is a way to execute the code following await in parallel? Any comments, Microsoft?

A Sneak Peek into the Mechanics of await

A quick look into N4402 reveals that the await feature relies on composable futures, especially the .then (serial composition) combinator. The compiler does all the magic of transforming the asynchronous code into a state machine that manages suspension and resumption automatically. It is a great example of how the compiler and libraries can work together, producing a beautiful symphony.

C++ coroutines work with any type that looks and behaves like a composable future. It also needs a corresponding promise type. The requirements on the library types are quite straightforward, especially if you have a composable future type implemented somewhere else. Specifically, three free functions (await_ready, await_suspend, and await_resume) must be available in the namespace of your future type.

In the DDS-RPC specification, dds::rpc::future<T> maps to std::future<T>. As std::future<T> does not have the necessary functionality for await to work correctly, I implemented dds::rpc::future<T> using both boost::future and concurrency::task<T> from Microsoft’s PPL. Further, the dds::rpc::future<T> type was adapted with its own await_* functions and a promise type.

  template <typename T>
  bool await_ready(dds::rpc::future<T> const & t)
  {
    return t.is_ready();
  }

  template <typename T, typename Callback>
  void await_suspend(dds::rpc::future<T> & t, Callback resume)
  {
    t.then([resume](dds::rpc::future<T> const &)
    {
      resume();
    });
  }

  template <typename T>
  T await_resume(dds::rpc::future<T> & t)
  {
    return t.get();
  }

Adapting boost::future<T> was straightforward, as the N4402 proposal includes much of the necessary code, but some tweaks were necessary because the draft and the implementation in Visual Studio 2015 appear slightly different. Implementing dds::rpc::future<T> and dds::rpc::details::promise<T> using concurrency::task<T> and concurrency::task_completion_event<T> needed a little more work, as both types had to be wrapped inside what would become standard types in the near future (C++17). You can find all the details in future_adapter.hpp.

There are a number of resources available on C++ coroutines if you want to explore further. See the video recording of the CppCon’15 session on “C++ Coroutines: A Negative Overhead Abstraction”. Slides on this topic are available here and here. The Resumable Functions proposal is not limited to just await. There are other closely related capabilities, such as the yield keyword for stackless coroutines, the await for keyword, generator, recursive_generator, and more.

Indeed, this is truly an exciting time to be a C++ programmer.

“See” what is going on with your DDS System

RTI Guest Post: SimVentions

Matt Wilson is the Vice President for Tools and Technology at SimVentions. Mr. Wilson has been designing, building, and integrating complex systems and related software-based tools for over 20 years. His experience with Naval Combat Systems, Human Systems Integration, and Systems Engineering projects has been the knowledge base for the design and development of a wide variety of innovative and useful tools.

 

Have you ever had to figure out what is going on inside of your DDS-based system and had no idea how to begin? You have many data topics being published and subscribed by a multitude of data writers and data readers across a vast array of processes. There are a wide variety of scenarios: mismatched properties, out-of-date message definitions, missing connections, etc. How do you begin to “look at” this problem?

SimVentions has developed a tool called InformeDDS (Info for me – DDS) that uses network graphing technologies and DDS discovery data to “show you” what is going on in your network. InformeDDS’ ability to auto-discover DDS entities (participant, publisher/subscriber, data writer/reader, topic) and then automatically provide an interactive visual representation of connections between DDS entities and all relevant attributes allows you to quickly “see” what is happening without impacting your network.

Using one of the preset (or personally customized) display filters helps reduce complicated networks to a meaningful subset, allowing you to focus on key issues. Each of these filters can be built, copied, tailored, and improved through a simple user interface to highlight attributes unique to your own system. We call these filters “View Profiles”; they provide you the ability to set your own visualization business rules or use the ones that come out of the box.

InformeDDS has proven to be a useful tool for software developers during testing and debugging and for performing integration testing on complex systems. It has helped us (SimVentions) debug and troubleshoot networking issues for our own customers and can be quickly learned to bring the same value to your integration and testing efforts. During a recent visit to one of our large systems integration partners, an on-site engineer commented “Now THAT is an integration tool” after using InformeDDS.

SimVentions is proud to be an RTI partner. To try out InformeDDS with the latest version of RTI Connext DDS (5.2.0) please use the RTI Launcher to download and install the tool.

We continue to improve the InformeDDS technology and look forward to hearing your feedback about this capability. There are many other features in the tool that we will introduce to you in future stories. Until then, download and try the free trial of this innovative and useful debug and integration tool for your DDS network today.

For more information, visit the following sites or leave a note in the comments section of this post!

An Industrial-Grade Connectivity Architecture

The Industrial IoT introduces new requirements for the velocity, variety, and volume of information exchange. Connectivity must be real-time and secure, and it must work over mobile, disconnected, and intermittent links. It must efficiently scale to handle any number of things, each of which may have its own unique requirements for information exchange, such as streaming updates, state replication, alarms, configuration settings, initialization, and commanded intents. These requirements are above and beyond the requirements commonly handled by conventional connectivity solutions designed for static networks.

Designers and standards organizations are fueling the advancement of appropriate connectivity standards like the Data Distribution Service (DDS) that meet these requirements and facilitate a more open, interoperable connectivity architecture for intelligent devices. The benefits include shorter development times, flexible design options, and scalable designs that can evolve with the IoT.

Reduced Integration Times

One of the primary roles of the connectivity architecture is to ensure interoperability of the IoT and thereby reduce integration time for complex devices and subsystems. Ultimately, the goal is to evolve the connectivity architecture to achieve full plug-and-play compatibility.

Levels of interoperability within a connectivity architecture

Currently, industry standards for real-time connectivity are focused on mid-level interoperability, or syntactic-level compatibility, where all endpoints and systems use a common data format and syntax.

Flexible Connectivity Gateways

A connectivity standard that delivers syntactic-level interoperability facilitates the introduction of connectivity gateways to address the diversity of devices in modern systems. These gateways serve multiple purposes, including the support of external systems and devices that rely on other connectivity technologies. Gateways can also be used to create hierarchical architectures and to group various endpoints and devices into subsystems.

Decoupled Apps and Data

Unlike human-driven environments, industrial systems operate autonomously and therefore call for a data-driven architecture. This shift can be compared to the historical development of databases. By decoupling data from applications, databases gave application developers much greater flexibility for evolving modular, independent applications, and they fostered innovation and standards in the application programming interface (API).

Within the Industrial IoT, data-centric communications can similarly promote interoperability, scalability, and ease of integration. The concept of a data bus allows the possibility of decoupling data from application logic so application components interact with data and not directly with each other. The data bus can independently optimize the delivery of data in motion, and can also be more effectively managed and scaled separately from the application components.

Fundamental Building Blocks

In conventional enterprise IT environments, the data architecture deals with events, transactions, queries, and jobs. The Industrial IoT, which is made up of a broad range of devices, differs greatly from this human-driven environment. The fundamental building blocks of the Industrial IoT include streams of data, commands, status (or state) information, and configuration changes.

Note that the key activity triggers within conventional environments involve human requests or responses (decisions). In the Industrial IoT, activity is triggered by data or state changes that exist and happen autonomously.

Rapid Data Transformations Are Moments Away!

When I started college, everybody was talking about “Information Technology.” At that point I had been programming for quite a while and it wasn’t clear to me what coding had to do with that fancy terminology. After a few more years of coding, I realized the connection: all I do, day in and day out, is move bytes (information) from one memory location to another. Copying the contents of a struct into the socket buffer and sending it out; getting the bytes from the socket buffer and deserializing them into a structure to pass them to the application logic. Well, that’s part of what communication middleware does for you!

RTI Connext DDS implements the OMG DDS Standard. DDS is data-centric middleware. We call those bytes being transferred data, and we assign a type to them. The type describes each byte in your data, so you (and your application) can make good use of them.

Designing a good distributed system means having a good type system in place. Often though, those types carry a lot of information, and it can become pretty difficult to deal with them. Also, you may not need all the information all the time. This often happens when you have heterogeneous systems with nodes that have different capabilities, such as bandwidth constraints or resource limits. In this case, some of the nodes in the system may not be able to handle a whole sample for certain data types, but they still need to be able to receive part of the information.

Let me present a possible scenario first and then suggest one way to solve it.

Figure 1: Scenario

Let’s say a new testing module for the International Space Station (ISS) has been shipped to space. The module has a device that collects thousands of data points, puts them all in a DDS sample with a specific type, and writes it on the topic, ComplexDataTopic. We will call this device the Emitter.

One of the many different data points this device collects is the temperature from 10 different sensors. It puts all the collected temperature values into a sequence.

enum Unit {
  F,
  C,
  K
};

struct Temp {
  long sensorId;
  double value;
  Unit unit;
};

struct Measurements {
  long deviceId;
  // thousands of other fields...
  sequence<Temp,10> temperatures;
};

The module also comes with a FancyReceiver (what we call a topic subscriber) that runs a UI; it gets all the data contained in each sample and creates statistics and charts so astronauts can easily understand the data. For example, looking at the temperature data, it could calculate the average (mean) or the median, or spot significant differences between sensors.
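
As a side note, once the temperature values have been extracted from a sample, those statistics are one-liners in most languages. Here is a tiny, self-contained Python sketch (with made-up readings, not real ISS data) of the kind of aggregation the FancyReceiver performs:

# Illustration only: made-up temperature readings (in Fahrenheit) from one sample.
import statistics

temps = [68.2, 70.1, 69.8, 71.4, 70.0, 69.5, 70.3, 68.9, 70.7, 69.9]

print("mean:  ", round(statistics.mean(temps), 1))
print("median:", statistics.median(temps))
print("spread:", round(max(temps) - min(temps), 1))  # a large spread means the sensors disagree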

Let’s now say that back on Earth, NASA scientists need to know what the temperature is on the testing module. They’re not interested in pressure and humidity, just the temperature. But since the bandwidth between Earth and the ISS “is what it is”, it is acceptable to receive aggregates of the data. For example, instead of sending all of the temperature values, the testing module will send only the average, which saves some bandwidth on the link to the ground. Basically, they want something that looks like this:

enum Unit {
  F,
  C,
  K
};

struct SimpleMeasurements {
  long deviceId;
  double avgTmp;
  Unit unit;
};

We have many options to achieve this result:

  1. We can create an ad-hoc application: it subscribes to ComplexDataTopic, extracts the data we need, defines another topic and another type, calculates what we need, and sends it over.
  2. We can use RTI Routing Service: write a custom transformation library and be done with it.
  3. We can use RTI Connector via the RTI Prototyper with Lua.
  4. We can use RTI Connector via Python or node.js.

I will now explain option 3: Use RTI Prototyper with Lua to easily transform your types. If you are already an RTI Connext user, you will have what you need in your distribution (under the “bin” directory if you are using RTI Connext 5.2, or under the “scripts” directory for older versions). Otherwise, check out this page to learn how to get your free copy of RTI Prototyper.

We will create a new component, the Transformer. The Transformer will subscribe to ComplexDataTopic (with type Measurements) and will publish to SimpleDataTopic (with type SimpleMeasurements).

The Types

For simplicity, let’s say that the complex type is the following:

<enum name="Unit">
  <enumerator name="F"/>
  <enumerator name="C"/>
  <enumerator name="K"/>
</enum>
<struct name="Temp">
  <member name="sensorId" id="0" type="long"/>
  <member name="value" id="1" type="double"/>
  <member name="unit" id="2" type="nonBasic" nonBasicTypeName="Unit"/>
</struct>
<struct name="Measurements">
  <member name="deviceId" id="0" type="long"/>
  <member name="humidity" id="1" type="double"/>
  <member name="pressure" id="2" type="double"/>
  <member name="temperatures" sequenceMaxLength="10" id="3" type="nonBasic" nonBasicTypeName="Temp"/>
</struct>

The simple type looks like this:

<enum name="Unit">
  <enumerator name="F"/>
  <enumerator name="C"/>
  <enumerator name="K"/>
</enum>
<struct name="SimpleMeasurements">
  <member name="deviceId" id="0" type="long"/>
  <member name="avgTmp" id="1" type="double"/>
  <member name="unit" id="2" type="nonBasic" nonBasicTypeName="Unit"/>
</struct>

The Topics and the Entities

All we need to do now is define an XML application file with a data reader for topic ComplexDataTopic and a data writer for topic SimpleDataTopic:

<!-- Domain Library -->
<domain_library name="MyDomainLibrary">
    <domain name="MyDomain" domain_id="0">
        <register_type name="Measurements" kind="dynamicData"
                       type_ref="Measurements"/>
        <register_type name="SimpleMeasurements" kind="dynamicData"
                       type_ref="SimpleMeasurements"/>
        <topic name="ComplexDataTopic" register_type_ref="Measurements"/>
        <topic name="SimpleDataTopic" register_type_ref="SimpleMeasurements"/>
    </domain>
</domain_library>

<domain_participant name="Transform"
                    domain_ref="MyDomainLibrary::MyDomain">
    <participant_qos base_name="QosLibrary::DefaultProfile"/>
    <publisher name="MyPublisher">
        <data_writer name="MySimpleWriter" topic_ref="SimpleDataTopic"/>
    </publisher>
    <subscriber name="MySubscriber">
        <data_reader name="MyComplexReader" topic_ref="ComplexDataTopic"/>
    </subscriber>
</domain_participant>

The Actual Logic

Once you have defined your types and the names of your entities, you just have to write a small chunk of Lua code that RTI Prototyper will execute. Let’s have a look:

local myComplexReader = CONTAINER.READER['MySubscriber::MyComplexReader']
local mySimpleWriter  = CONTAINER.WRITER['MyPublisher::MySimpleWriter']
local instance        = mySimpleWriter.instance

myComplexReader:take()
for i, sample in ipairs(myComplexReader.samples) do
    if (not myComplexReader.infos[i].valid_data) then
        print("\t invalid data!")
    else
        local deviceId = sample['deviceId']
        local sum = 0
        for j = 1, sample['temperatures#'] do
            sum = sum + sample['temperatures[' .. j .. '].value']
        end
        local avgTmp = sum / sample['temperatures#']

        -- setting the instance (converting Fahrenheit to Celsius)
        instance['deviceId'] = deviceId
        instance['avgTmp']   = (avgTmp - 32) * (5/9)
        instance['unit']     = 1 -- C

        -- writing the simple instance
        print("Writing sample with avgTmp = " .. instance['avgTmp'])
        mySimpleWriter:write()
    end
end

As you can see, you have to write very little code to transform your complex data type into a simple one using RTI Prototyper.

In the first three lines we look up the complex reader and the simple writer, and we assign the writer’s pre-allocated instance to the variable called instance.

Next we do a take and iterate over the received samples, skipping any that are not valid. For each valid sample we extract only the field we care about and aggregate the data: in this example we average all the temperatures sent in the original sample and ignore humidity and pressure. We can also apply smarter transformations along the way, such as converting the incoming temperature from Fahrenheit to Celsius.

Once we have what we need, we set the values on the writer instance and write the sample. It’s that simple!
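
For example, if a sample carries temperature readings of 68 °F, 70 °F and 72 °F, the average is 70 °F, and the Transformer writes avgTmp = (70 - 32) * 5/9 ≈ 21.1 °C.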

For your convenience I uploaded all the code and files to a GitHub repository here.

In the repo you will find the following directories:

  • Transformer: Contains both the XML and the Lua file described in this blog post. To run it, just execute:
rtiddsprototyper -cfgFile Transformer.xml -luaFile Transformer.lua
  • EmitterEmu: Contains an XML file and a Lua file to emulate the behavior of your ISS module Emitter. To run it, just execute:
rtiddsprototyper -cfgFile EmitterEmu.xml -luaFile EmitterEmu.lua
  • FancyReceiver: Contains an XML file and a Lua file to emulate your fancy dashboard running on board the ISS. To run it, just execute:
rtiddsprototyper -cfgFile FancyReceiver.xml -luaFile FancyReceiver.lua
  • LimitedReceiver: Contains an XML file and a Lua file to emulate your limited subscriber running back on Earth. To run it, just execute:
rtiddsprototyper -cfgFile LimitedReceiver.xml -luaFile LimitedReceiver.lua

So, clone the repo, play with the examples, and read the few lines of code; you will quickly see the power of RTI Prototyper with Lua. If you want more information on RTI Prototyper, check out the Getting Started Guide or the other blog posts about it here.

And if you have some time left, check out the other way of scripting RTI Connext DDS, using Python and node.js, here.
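
To give you a taste of that route (option 4 in the list above), here is a minimal sketch of what the same Transformer logic could look like with the rticonnextdds-connector Python package. Treat it as an illustration, not a drop-in replacement: it assumes the entities above live in Transformer.xml inside a participant library called MyParticipantLibrary (a name I made up for this sketch), and it uses the current Connector Python API, which may differ from older releases.

# Sketch only: assumes Transformer.xml registers the participant as
# "MyParticipantLibrary::Transform" (hypothetical library name).
import rticonnextdds_connector as rti

with rti.open_connector(
        config_name="MyParticipantLibrary::Transform",
        url="Transformer.xml") as connector:

    reader = connector.get_input("MySubscriber::MyComplexReader")
    writer = connector.get_output("MyPublisher::MySimpleWriter")

    while True:
        reader.wait()   # block until new samples arrive
        reader.take()
        for sample in reader.samples.valid_data_iter:
            data = sample.get_dictionary()          # whole sample as a Python dict
            temps = [t["value"] for t in data["temperatures"]]
            if not temps:
                continue                            # nothing to average
            avg_f = sum(temps) / len(temps)
            writer.instance.set_number("deviceId", data["deviceId"])
            writer.instance.set_number("avgTmp", (avg_f - 32) * 5.0 / 9.0)  # F to C
            writer.instance.set_number("unit", 1)   # ordinal of C, as in the Lua code
            writer.write()

The structure mirrors the Lua version: look up a reader and a writer defined in XML, take, aggregate, and write.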
