The Industry-First Vendor-Backed FACE™ 2.1 TSS Reference Implementation is Here!


A few weeks ago, we released a new version of our Future Airborne Capability Environment (FACE) Transport Service Segment (TSS) Reference Implementation. This new release is based on the FACE Technical Standard, Edition 2.1.

We introduced our first FACE TSS implementation two years ago. Since then, we have worked with several military organizations to enable interoperable and reusable avionics systems. This new release is the result of that work. We hope it will bring the power of DDS into next-generation systems by providing an open, flexible and robust Commercial-Off-The-Shelf (COTS) communications foundation. There are three things in this release that we are excited about.

First, we are the first company to release a fully supported and vendor-backed TSS 2.1 reference implementation. Avionics software developers and platform integrators can now combine best-of-breed technologies with COTS components that have proven safety-certification credentials. This reduces integration and airworthiness certification risk while facilitating reuse across both manned and unmanned systems. For example, a surveillance system built for a Navy aircraft can be used in another vehicle without requiring a full system rebuild.

Second, the new FACE TSS implementation now supports C++. It includes a customized version of rtiddsgen that generates the C++ API specific to the message types that a component exchanges. The FACE standard requires that components (known as “Units of Portability”) use this type-specific interface and that component suppliers provide an IDL specification of the types.

Third, we provide the TSS Reference Implementation as portable source code. Customers are free to modify it to meet their own requirements. If you are a Connext DDS customer, we provide the TSS at no charge. For basic support and training, customers may optionally purchase an annual TSS Support Package. Customers may also contract with RTI Services to enhance the capabilities of the TSS to meet system-specific requirements.

To learn more about our new FACE TSS Reference Implementation, check out the FACE content on our website or watch the webinar on FACE Standard for Avionics Software and a Ready-to-Go COTS Platform.

Reaching for the Stars

Listening this morning to an interview with science writer Brian Clegg, I got to thinking about how the smallest things can have the biggest impact. Clegg just published his latest book, The Quantum Age: How the Physics of the Very Small has Transformed Our Lives. In the interview, he noted that around 30% of the GDP for a developed country like the United States stems from inventions based on quantum physics, including lasers, microprocessors, and mobile phones.

Despite influencing our day-to-day lives in such ways, we often like to think of quantum effects in terms of weird things that don’t seem to make sense, like particles travelling back in time or teleporting: the stuff of science fiction.

For many science fiction fans like me, the gap between the dream of interstellar travel and teleporters that harness this quantum “weirdness” and the more prosaic reality of today’s limited space exploration is a bit of a letdown. The fact that the Voyager 1 probe, the most distant human-made object, is still only 18 light hours from Earth is somewhat at odds with the distances travelled during a typical episode of Star Trek.

On June 3, 2015, however, a very interesting feat was performed, one that brings the prospect of substantial planetary exploration closer. On that date, NASA astronaut Terry Virts shook hands with André Schiele from the European Space Agency (ESA) – and, crucially, each felt the strength of the other’s grip. What’s remarkable about this historic handshake is that the two men were 5,000 kilometres apart – one on the International Space Station (ISS) and the other firmly on the ground here on Earth.

Terry Virts performing a haptic handshake on the ISS

This experiment demonstrated that it is possible for one person to precisely and instantly control an object on a planet’s surface from a location in space thousands of kilometres away, and to do so as if they were holding that object in their hands. That possibility offers the opportunity to explore planets intimately without ever setting foot on them. This is hugely significant because one of the limitations of planetary exploration is the extreme difficulty of landing astronauts on a foreign body and subsequently launching them safely back into space for the journey home.

Another limitation is the time it takes for signals to travel between Earth and other planets. For example, a command can take around 24 minutes to make the round trip to Mars, which makes operating the Mars Rover remotely from Earth a challenging endeavour.
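The delay is simple physics: distance divided by the speed of light. A quick back-of-the-envelope sketch (the distances below are illustrative; Earth–Mars separation varies from roughly 55 million km at closest approach to about 400 million km):

```python
# Round-trip signal delay to a planet at a given one-way distance.
SPEED_OF_LIGHT_KM_S = 299_792.458

def round_trip_minutes(distance_km: float) -> float:
    """Round-trip light delay in minutes for a one-way distance in km."""
    return 2 * distance_km / SPEED_OF_LIGHT_KM_S / 60

print(round(round_trip_minutes(55e6), 1))   # closest approach: ~6 minutes
print(round(round_trip_minutes(225e6), 1))  # a typical distance: ~25 minutes
```

A 24-minute round trip thus corresponds to a separation on the order of 200 million km, well within the normal Earth–Mars range.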

The most effective way to enable robotic exploration is to have an astronaut near enough to control the rover, preferably orbiting the planet, and this is exactly what the experiment proved possible. Critically, the demonstration proved not only that it is possible to move something remotely, but also that real-time force feedback lets the astronaut act as if he or she were on the ground, directly performing the task.

The International Space Station

This type of real-time communication leverages standardisation initiatives by the Object Management Group – in this case, an open real-time communications standard, the Data Distribution Service (DDS), implemented by RTI of Sunnyvale and used by the ESA for this experiment. (The learnings from this and other uses of the DDS standard are enabling a whole new era of communication between intelligent machines called the Industrial Internet of Things.)

By removing the need to develop a safe landing and take-off system for humans, the ESA could take years and billions of dollars off the time and cost of reaching further into the cosmos and learning more of its secrets. Accomplishments like the June 3 “handshake” will hopefully continue to fire the imagination of innovators to explore the quantum world on the ground as well as the frontiers of space, to the mutual benefit of all of us through continued growth of global GDP.


Data Connectivity in the Industrial Internet Reference Architecture

Today, the Industrial Internet Consortium (IIC) released the Industrial Internet Reference Architecture (IIRA). The IIC is the largest of the Internet of Things (IoT) consortia, with over 170 members. More importantly, it’s the only one focused on industrial systems. The first public release of the IIRA is a formal overview of the systems architecture from a high-level perspective. It covers everything from business goals to system interoperability. The architecture establishes many key technical guidelines. Critically, it also eliminates many approaches; an architecture is as much about what you can’t do as what you can do.

We at Real-Time Innovations (RTI) are most excited by one key aspect: the IIRA connectivity architecture. “Connectivity”, or how things communicate, is one of the biggest challenges for the emerging Industrial Internet of Things (IIoT). The IIRA takes an innovative, distributed “databus” approach that eases interoperability while providing top performance, reliability, and security.

The Power of Common Architecture

Ultimately, the IIoT is about building distributed systems. Connecting all the parts intelligently so the system can perform, scale, evolve, and function optimally is the crux of the IIRA.

To enable the IIoT, we need to develop a common architecture that can span computing capability, interoperate across vendors, and bridge industries. Over time, common technologies that span industries always replace bespoke systems. However, incremental adoption and adapting current technology are also crucial. The IIoT must therefore integrate many standards and connectivity technologies. The IIC architecture explicitly blends the various connectivity technologies into an interconnected future that can enable the sweeping vision of a hugely connected new world.

This is the “interoperability” problem, and it’s really RTI’s specialty. RTI participates in 15 different standards and consortia efforts. They span many industries: naval systems, avionics, power, medical devices, unmanned vehicles, consumer electronics, industrial control, and broadcast television, to name just a few. All focus on how to get systems to work together. The IIC draws on experience from these industries and more.

The Integration Challenge

When you connect many different systems, the fundamental problem is the “N squared” interconnect issue. Connecting two systems requires matching many aspects, including protocol, data model, communication pattern, and quality of service (QoS) parameters like reliability, data rate, or timing deadlines. While connecting two systems is a challenge, it is solvable with a special-purpose “bridge”. But it doesn’t scale; connecting N systems together requires N-squared bridges. As N gets large, this becomes daunting.
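The arithmetic behind the “N squared” problem is easy to sketch. Pairwise bridging needs one bridge per pair of systems, N(N-1)/2, which grows as N²; a shared core needs only one gateway per system:

```python
def bridges_pairwise(n: int) -> int:
    """Dedicated bridges needed to connect every pair of N systems: N(N-1)/2."""
    return n * (n - 1) // 2

def bridges_with_core(n: int) -> int:
    """Gateways needed when each system bridges once to a common core."""
    return n

for n in (2, 10, 50):
    print(n, bridges_pairwise(n), bridges_with_core(n))
# 2 systems:  1 bridge vs 2 gateways
# 10 systems: 45 bridges vs 10 gateways
# 50 systems: 1225 bridges vs 50 gateways
```

At two systems the special-purpose bridge wins; by fifty systems it is hopeless, which is exactly the scaling argument the IIRA design addresses.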

One way to ease this problem is to keep N small. You can do that by dictating all standards and technologies across all systems that interoperate. Many industry-specific standards bodies successfully take this path. For instance, the European Generic Vehicle Architecture (GVA) specifies every aspect of how to build military ground vehicles, from low-level connectors to top-level data models. The German Industrie 4.0 effort takes a similar approach to the manufacturing industry, making choices for ordering and delivery, factory design, technology, and product planning. Only one standard per task is allowed.

This approach eases interoperation. Unfortunately, the result is limited in scope because the rigidly-chosen standards cannot provide all functions and features. There are simply too many special requirements to effectively cross industries this way. Dictating standards also doesn’t address the legacy integration problem. These two restrictions (scope and legacy limits) make this approach unsuited to building a wide-ranging, cross-industry Industrial Internet.

On the other end of the spectrum, you can build a very general bridge point. Enterprise web services work this way, using an “Enterprise Service Bus” (ESB) like Apache Camel. However, despite the “bus” in its name, an ESB is not a distributed concept. All systems must connect to a single point, where each incoming standard is mapped to a common object format. Because everything maps to one format, the ESB requires only “one-way” translation, avoiding the N-squared problem. Camel, for instance, supports hundreds of adapters that each convert one protocol or data source.

Unfortunately, this doesn’t work well for demanding industrial systems. The single ESB service is an obvious choke point and single point of failure. ESBs are large, slow programs. In the enterprise, ESBs connect large-grained systems executing only a few transactions per second. Industrial applications need much faster, more reliable, finer-grained service. So, ESBs are not viable for most IIoT uses.

The IIRA Connectivity Core Standard

The IIRA takes an intermediate approach. The design introduces the concept of a “Connectivity Core Standard”. Unlike an ESB, the core standard is very much a distributed concept. Some endpoints can connect directly to the core standard. Other endpoints and subsystems connect through “gateways”. The core standard then connects them all together. This allows multiple protocols without having to bridge between all possible pairs. Each needs only one bridge to the core.

Like an ESB, this solves the N-squared problem. But, unlike an ESB, it provides a fast, distributed core, replacing the centralized service model. Legacy and less-capable connectivity technologies transform through a gateway to the core standard. There are only N transformations, where N is the number of connectivity standards.
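The gateway idea can be sketched in a few lines. All names below are hypothetical, not a real DDS or gateway API: each connectivity technology supplies one translation of its native message into the core standard’s common data model.

```python
from dataclasses import dataclass

@dataclass
class CoreSample:
    """A sample in the (illustrative) core standard's common data model."""
    topic: str
    key: str
    value: float

def modbus_gateway(register: int, reading: float) -> CoreSample:
    """One gateway: translate a Modbus-style register read into the core model."""
    return CoreSample(topic="PlantTelemetry", key=f"reg{register}", value=reading)

def mqtt_gateway(mqtt_topic: str, payload: float) -> CoreSample:
    """Another gateway: translate an MQTT-style message into the same model."""
    return CoreSample(topic="PlantTelemetry", key=mqtt_topic, value=payload)

# Two different technologies, one translation each -- N gateways, not N squared.
samples = [modbus_gateway(40001, 72.5), mqtt_gateway("boiler/temp", 71.9)]
assert all(s.topic == "PlantTelemetry" for s in samples)
```

Once both flows land in the shared core model, any consumer of the core can read either without knowing which technology produced the data.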


The IIRA connectivity architecture specifies a quality-of-service controlled, secure “core connectivity standard”. All other connectivity standards must only bridge to this one core standard.

Obviously, this design requires a very functional core connectivity standard. Some systems may get by with slow or simple cores. But, most industrial systems need to identify, describe, find, and communicate a lot of data with demands unseen in other contexts. Many applications need delivery in microseconds or the ability to scale to thousands or even millions of data values and nodes. The consequences of a reliability failure can be severe. Since the core standard really is the core of the system, it has to perform.

The IIRA specifies the key functions that a connectivity framework and its core standard should provide: data discovery, exchange patterns, and “quality of service” (QoS). QoS parameters include delivery reliability, ordering, durability, lifespan, and fault tolerance functions. With these capabilities, the core connectivity can implement the reliable, high-speed, secure transport required by demanding applications across industries.

The IIRA outlines several data quality of service capabilities for the connectivity core standard. These ensure efficient, reliable, secure operation for critical infrastructure.


Security is also critical. To make security work correctly, it must be intimately married to the architecture. For instance, the “core” standard may support various patterns and delivery capabilities. The security design must match those exactly. For example, if the connectivity supports publish/subscribe, so must security. If the core supports multicast, so must security. If the core supports dynamic plug-n-play discovery, so must security. Security that is this intimately married to the architecture can be imposed at any time without changing the code. Security becomes just another controlled quality of service, albeit more complexly configured. This is a very powerful concept.

The integrated security must extend beyond the core. The IIRA allows for that too; all other connectivity technologies can be secured at the gateways.

DDS as a Core Standard

The IIRA does not specify standards; the IIC will take that step in the next release. However, it’s clear that the DDS (Data Distribution Service) standard is a great fit to the IIRA. DDS provides automated discovery, each of the patterns specified in the IIRA, all the QoS settings, and intimately integrated security.

This is no accident. The IIRA connectivity design draws heavily on industry experience with DDS. DDS has thousands of successful applications in power systems (huge hydropower dams, wind farms, microgrids), medicine (imaging, patient monitoring, emergency medical systems), transportation (air traffic control, vehicle control, automotive testing), industrial control (SCADA, mining systems, PLC communications), and defense (ships, avionics, autonomous vehicles). The lessons learned in these applications were instrumental in the design of the IIRA.

Thank you!

Finally, I would like to close by thanking the teams that built the IIRA. This was a large effort supported by many companies. RTI was most involved on the architecture, connectivity, and distributed data & interoperability teams. Thank you all, and congratulations on your first release.

RTI Connext on Snappy Ubuntu

Snappy Ubuntu Core is a brand new version of the Ubuntu Linux operating system with transactional updates. Mark Shuttleworth, founder of Ubuntu and Canonical, introduced Snappy during his keynote presentation at the 2015 Internet of Things (IoT) World conference in San Francisco. There he highlighted efforts to create an open platform that supports developer innovation and opens new markets to device and software creators. Snappy applications (Snaps) are completely isolated from one another, just as on the Ubuntu mobile phone, making it much safer to install and upgrade applications independently from each other and from Ubuntu Core. RTI Connext is the perfect communication platform for applications that can evolve and be upgraded independently. Here’s why:

  1. RTI Connext DDS provides a solution optimized for communication between Snaps on the same node as well as different nodes.
  2. RTI Connext meets the demanding performance, reliability and security requirements of Snaps.
  3. RTI Connext running on Ubuntu Core allows smart appliances to leverage real-time connectivity for the Industrial Internet of Things (IIoT).
  4. RTI Connext runs on many platforms besides Ubuntu Core and therefore supports interoperability between platforms.
  5. RTI Connext provides mechanisms to interoperate with different technologies and protocols.

At IoT World, Ubuntu showcased several different Snappy-based applications. An overview of the different applications can be found here.


An RTI demo as part of the IoT demo display at Canonical’s booth at IoT World.

At IoT World, RTI demonstrated how two Snaps can communicate with each other using an OWI Robotic Arm. The robotic arm can be controlled through USB. In the demo setup, the USB interface is connected to a BeagleBone Black, an open-source development board, running Ubuntu Core Snappy as its operating system. The BeagleBone Black runs the controller Snap, which subscribes to commands using RTI Connext and then sends the control commands to the robot arm through USB.

The Advantech Intel gateway, which has an Intel Atom E3800 processor, runs Ubuntu Core Snappy and the Robot command Snap. The Robot command Snap has a command-line interface through which commands can be entered, like turn clockwise, turn counterclockwise, and open grip. The commands are then published on the data bus. Using RTI Connext allows the controller and command Snaps to be co-located on the same node (BeagleBone) or different nodes (IoT gateway and BeagleBone) without having to change any of the Snap configurations. Below is a diagram of the demo setup.


More about RTI’s presence at IoT World can be found online.

Data Centricity vs. Message Centricity

In this blog post, I would like to clarify the difference between data-centricity and message-centricity when it comes to middleware solutions.

Let’s consider DDS as an example of data-centric middleware and MQTT as an example of message-centric middleware. There are definitely other examples, and all middleware solutions have advantages and disadvantages. They are designed to tackle different use cases, and there is no one-size-fits-all solution.

DDS is a standards-based publish/subscribe integration infrastructure for critical applications of the Industrial Internet of Things (IIoT). It is neither just a middleware solution nor an SDK. One of DDS’s strengths is its data-centricity. MQTT, on the other hand, is a publish/subscribe integration infrastructure for commercial IIoT. Its strength is in its simple and lightweight messaging protocol designed for constrained devices. Both DDS and MQTT are designed for use cases that need to run under low-bandwidth, high-latency or unreliable networks.

To illustrate the difference between data-centricity and message-centricity, let’s consider the following two approaches to Calendaring, a common, everyday process:

Alternative Process #1 (message-centric):

  1. Email: “Meeting Monday at 10:00.”
  2. Email: “Here’s dial-in info for meeting…”
  3. Email: “Meeting moved to Tuesday”
  4. You: “Where do I have to be? When?”
  5. You: (sifting through email messages…)

Alternative Process #2 (data-centric):

  1. Calendar: (add meeting Monday at 10:00)
  2. Calendar: (add dial-in info)
  3. Calendar: (move meeting to Tuesday)
  4. You: “Where do I have to be? When?”
  5. You: (check calendar, contains consolidated state)

The difference between these two scenarios is “state.” In the data-centric approach, the infrastructure consolidates the changes and maintains them. DDS’s approach to implementing data-centricity allows applications to be integrated easily into the information-data model without forcing the application developers to write serialization/de-serialization code, and without forcing them to maintain state or make custom mappings. DDS directly supports data-centric actions such as create, dispose and read/take.
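The calendaring analogy can be sketched directly. In the message-centric view the application sees a log of messages and must reconstruct the current state itself; in the data-centric view the infrastructure consolidates updates by key, so the application simply reads the latest value per instance. (The dictionary below stands in for what the middleware would maintain; it is an illustration, not a DDS API.)

```python
# Three updates about the same meeting, in arrival order.
messages = [
    ("meeting-1", "Monday 10:00"),
    ("meeting-1", "dial-in info added"),
    ("meeting-1", "moved to Tuesday"),
]

# Message-centric view: the application holds every message and must sift.
log = list(messages)

# Data-centric view: the infrastructure consolidates by key (instance),
# keeping only the current state for each.
instances = {}
for key, update in messages:
    instances[key] = update

print(len(log))    # 3 messages to sift through
print(instances)   # {'meeting-1': 'moved to Tuesday'}
```

Asking “where do I have to be, and when?” against the consolidated state is a single lookup; against the log it is application code.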

Data types are captured in DDS as topics. A topic defines a name, a type (including a key) and Quality of Service (QoS) settings. Data instances are managed by topic keys. The rich set of QoS settings controls data availability (durability, lifespan and history), delivery (reliability and ownership) and timeliness (deadline and latency) via XML profiles. This approach immensely reduces the amount of code written and maintained by application developers.

Below you will see how much code must be written to implement the above scenario by utilizing data-centric middleware solutions such as DDS versus message-centric middleware solutions such as MQTT. As seen below, in the case of MQTT, the application developer needs to maintain the code to create an application-defined Collection to hold agenda items, deserialize bytes in the message, insert or replace in the Collection and lookup in the Collection.

Using Data-Centricity: Rich QoS -> less code to maintain data state

QoS for topic (i.e. data type): 
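For example, a QoS profile for the meeting topic might look like the following XML sketch. The library and profile names are illustrative; the kinds shown (reliable delivery, transient-local durability, keep-last history) are the sort of settings the calendaring example relies on:

```xml
<dds>
  <qos_library name="CalendarLibrary">
    <qos_profile name="MeetingProfile">
      <datareader_qos>
        <reliability>
          <kind>RELIABLE_RELIABILITY_QOS</kind>
        </reliability>
        <durability>
          <kind>TRANSIENT_LOCAL_DURABILITY_QOS</kind>
        </durability>
        <history>
          <kind>KEEP_LAST_HISTORY_QOS</kind>
          <depth>1</depth>
        </history>
      </datareader_qos>
    </qos_profile>
  </qos_library>
</dds>
```

With history depth 1 per keyed instance, a late-joining reader receives only the consolidated, current state of each meeting.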


Code to maintain data state: none, as the middleware maintains data state:

  1. create DDS DomainParticipant to join DDS Domain
  2. with DomainParticipant, create DDS Topic “MeetingTopic” and DDS Subscriber
  3. with Subscriber and MeetingTopic, create DDS MeetingTopicDataReader
  4. when requested by user for agendaId: read from MeetingTopicDataReader all {meetingId, meeting time, date} for agendaId

Using Message-Centricity: not so rich QoS -> more code to maintain data state

QoS for message:

Choose one of the three below:

  • At most once (QoS=0). The message is delivered at most once, or not at all.
  • At least once (QoS=1). The message is always delivered at least once.
  • Exactly once (QoS=2). The message is always delivered exactly once.

Code to maintain data state:

  1. create MQTT Client to connect to server (IP address or host name) and port number
  2. create application-defined Collection to hold agenda items
  3. with MQTT Client, subscribe to “MeetingTopic”
  4. when message received by MQTT Client:
    • invoke application code to deserialize bytes in message to get {agendaId, meetingId, meeting time, date}
    • insert or replace in Collection {meeting time, date} for agendaId, meetingId
  5. when requested by user for agendaId:
    • lookup in Collection all {meetingId, meeting time, date} for agendaId
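The application-side bookkeeping listed above can be sketched as follows, with the network layer stubbed out and a JSON payload standing in for whatever wire format the application chooses (the field names are illustrative):

```python
import json

# Application-defined Collection to hold agenda items,
# keyed by (agendaId, meetingId) -- state the middleware does NOT keep for us.
agenda = {}

def on_message(payload: bytes) -> None:
    """Deserialize bytes and insert/replace in the Collection."""
    item = json.loads(payload)
    agenda[(item["agendaId"], item["meetingId"])] = (item["time"], item["date"])

def lookup(agenda_id: str) -> dict:
    """Find all {meetingId: (time, date)} entries for one agendaId."""
    return {mid: v for (aid, mid), v in agenda.items() if aid == agenda_id}

# A meeting is created, then moved: two messages, one consolidated entry.
on_message(b'{"agendaId": "a1", "meetingId": "m1", "time": "10:00", "date": "Mon"}')
on_message(b'{"agendaId": "a1", "meetingId": "m1", "time": "10:00", "date": "Tue"}')
print(lookup("a1"))   # {'m1': ('10:00', 'Tue')}
```

None of this code is hard, but every subscribing application must write, test and maintain its own copy of it; with a data-centric middleware it simply does not exist.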


Day Two at IoT World: Using DDS to Make Smart Window Shades Even Smarter

The second day at IoT World was as busy and exciting as the first day. We had good traffic at the RTI booth and many interesting conversations with people of different backgrounds on a range of topics: semiconductors, sensors, energy, telecommunication, robotics, automotive, embedded software, mobile devices, testing, management software, data storage, and on and on. Though only a small percentage of people knew about RTI prior to the show, most of our booth visitors had no problem quickly understanding the role of RTI in the world of IoT and the value DDS brings to the IoT community.

Here is just one example. One visitor’s company manufactures “smart” window shades. (Remember: in the IoT world, everything is smart!) How would DDS apply to his world?

This company’s shades open and close based on input from sensors in the room. These sensors may track temperature, movement of objects (people), and light level. There may also be other sensors in the room to track conditions that could potentially contribute to the “decision” of the shade to open or close, as well as the time and speed of that action.

How do the shades respond to these sensors? Should they consider all sensors, or only a select set? Can new sensors be added to the environment? Should the shades change their behavior in different kinds of rooms or different geographic locations? Can room conditions be recorded and analyzed by a controller application (perhaps running in the cloud?) to help the company to make maintenance and service decisions or improvements to its product?

DDS can play a key role in enabling data-sharing between the sensors and the controller app. Because DDS uses a data-centric approach, different sensors from different manufacturers do not need to know about each other, and applications do not need to implement their own discovery process; DDS discovers participants automatically. Creating an application is also easy because no point-to-point connection is required at the application level. DDS does it all under the hood.

RTI Connext DDS layers on this communication standard a rich set of QoS specifications that applications can tailor to their unique requirements. For example, an application can filter data from certain sources or with certain characteristics.
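The filtering idea is easy to illustrate. In DDS, a subscriber declares a SQL-like content filter expression (something along the lines of "lightLevel > 800 AND room = 'south'") and the middleware delivers only matching samples; the sketch below simulates that with a plain predicate, and the field names are hypothetical:

```python
# Samples published by various room sensors.
samples = [
    {"room": "south", "lightLevel": 900},
    {"room": "north", "lightLevel": 300},
    {"room": "south", "lightLevel": 1200},
]

def wants(sample: dict) -> bool:
    """Stand-in for a content filter: bright light on the south side only."""
    return sample["room"] == "south" and sample["lightLevel"] > 800

# With content filtering, only matching samples ever reach the shade controller.
delivered = [s for s in samples if wants(s)]
print(len(delivered))   # 2
```

The point is where the filter runs: the shade controller never has to receive, parse and discard irrelevant sensor traffic, which matters on constrained devices.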

Interestingly enough, once the DDS story is told in the right context, it “comes to life” quickly for most people, and they get really excited. The window shade example might not be the most typical use for DDS. But in the brave new world of “making everything smart,” it is not possible to imagine every scenario in which the technology will be applied.

Ubuntu and RTI: Going Beyond Smart to Brilliant

For Mark Shuttleworth, the “daddy” of Ubuntu, “making everything smart” is just not good enough! In his provocative and fun IoT World keynote, “All Things Smart,” he insisted on making all things “brilliant.”


Mark’s keynote highlighted the efforts of Ubuntu to create an open platform that supports developer innovation and opens these new markets to device and software creators.

Ubuntu’s booth at IoT World featured several RTI-powered, DDS-capable devices and applications.


RTI is proud to be part of this innovation making the vision of Ubuntu’s founder a reality!

Sneak Peek: Internet of Things World Conference in SF!


Today is the opening day at Internet of Things World Conference in San Francisco. RTI is excited to bring to the event two live product demonstrations. Both will be shown in the RTI booth at the conference expo by RTI product manager Burcu Alaybeyi.

Here is a sneak peek of what you will see.

The first demo features Open Integrated Clinical Environment (Open ICE). In this environment, medical devices such as pulse oximeters, ECGs, infusion pumps, etc., from different vendors can publish data to a logical data bus (RTI Connext DDS) by conforming to a common data model. The demo includes a supervisory application that conforms to the same data model, which allows the sample application to subscribe to topics that the medical devices publish to.

DDS makes it much easier to develop medical device interfaces and medical supervisory applications, and Open ICE is an excellent example of medical device interoperability enabled by RTI Connext DDS. A full demo of the PCA Safety use case, along with references and more info on the Open ICE initiative and medical device interoperability, is available online.

The second demo shows robot arm control using DDS as a data bus. It demonstrates how different SDKs, APIs and interfaces can be utilized to write RTI Connext DDS applications. In this demo, we use an RTI DDS LabVIEW VI to publish what the user wants to do with the robot arm. A Python DDS application subscribes to the data published by the LabVIEW VI and controls the robot via USB.


Come see our exciting demonstrations during the conference. You can find us at Booth 215. See you there!

A Connectivity Architecture for the Industrial Internet of Things

Over time, conventional connectivity solutions can dramatically multiply development costs for your industrial Internet of Things (IoT) applications.

If it feels like you are re-inventing the wheel every time you add another application, maybe you have fallen into the trap of hanging onto the old, when you really need a new connectivity architecture – one that is ideally suited for today’s world of connected devices.

The Internet originally evolved to connect human beings, regardless of their physical location and compute environments. The current industrial IoT, in contrast, connects devices and systems. These non-human users of the industrial IoT operate in non-stop mode, and outages or failures can trigger severe consequences. Smooth operation also relies on data timeliness; the right answer delivered too late becomes the wrong answer.

The Data Distribution Service (DDS) standard evolved specifically to enable real-time, non-stop environments. In today’s industrial IoT, DDS makes it possible to connect everything, everywhere with a shared data model and open databus. Seamless data sharing can be achieved regardless of proximity, platform, language, physical network, transport protocol, and network topology.

A Generic Use Case: DDS in the Industrial IoT

More than a dozen DDS implementations have propagated the standard into hundreds of system designs in healthcare, transportation, communications, energy, industrial, defense, and other industries. Many of these domains utilize a common connectivity use case, where a user has both local and remote access to a variety of intelligent connected devices and systems.

DDS in the IIoT

Within a connected home, for example, the devices can include smart thermostats, lighting controllers, security cameras, and more. In the energy industry, the same DDS use case model lets operators oversee energy turbines at multiple locations. And in healthcare, the model can encompass smart devices at the patients’ bedsides and in the lab, with doctors and clinicians given access via the cloud or on-site LANs.

Between the devices and the cloud (WAN connections), DDS provides an ideal solution with:

  • Stateful interactions
    • Intelligent connections/disconnects, and the ability to resend only relevant data upon reconnection
    • Intelligence built into the bus, without application overhead
  • Many data-flow patterns, for meeting current and future requirements
  • Publish-subscribe architecture style that is data-driven
  • Scalability, performance, resilience, and security

Inside the endpoint devices themselves, DDS has also been applied broadly. DDS makes it possible to design smart devices that operate very reliably and meet safety and longevity requirements in industries such as healthcare and automotive. DDS has also made inroads in the cloud. Here, the standard can support diverse connectivity options, and can also promote longevity of cloud solutions.

Notice that this covers all of the connections that do not directly interface with the human beings who depend on the systems. For the user-to-cloud WAN connections, traditional web technologies such as WebSockets and HTTP still make sense.

For everywhere else, DDS continues to grow in popularity by saving developers time and enabling systems that can scale and accommodate smart devices and data-centric real-time environments.

Why attend RTI’s Connext Conference 2015?

If the history of technology tells us one thing, it’s that standards are most effective when they are outside the control of any one organisation. Effective standards require the cooperation of strongly competing companies that work towards their mutual interest. This is because, by creating a standards framework that provides scope for innovation coupled with interoperability, more usable products are provided to the market, customers have the confidence to invest, and as a result the total available market revenues become much larger.

In his 2009 article in the RFID Journal, Kevin Ashton, the man credited with coining the phrase “the Internet of Things”, stated: “People have limited time, attention and accuracy—all of which means they are not very good at capturing data about things in the real world. We need to empower computers with their own means of gathering information, so they can see, hear and smell the world for themselves, in all its random glory”.

The Industrial Internet of Things is concerned with systems at the core of our daily lives that demand the highest standards of reliability, security and performance: the electricity grid, air traffic control systems and systems in space being just a few examples. Industrial sectors of the economy are responsible for two thirds of global Gross Domestic Product (GDP), so improving the efficiency and capability of this infrastructure using distributed sensors and computing systems has the potential to revolutionise how the whole world works, serving all of us, rich or poor, in a multitude of ways.

For the Industrial Internet of Things to be realised, companies large and small that supply and maintain industrial systems must design them to work together, and this in turn requires a set of standards for interoperability. In this vision of the future, no system sits in its own walled garden using proprietary protocols; it must be able to interact freely with any other system – not just those that exist today, but those which may be designed in the future.

To meet this challenge, the Industrial Internet Consortium was formed. Today it has over 150 members from industry, government and academia, working on a mission to coordinate vast ecosystem initiatives to connect and integrate objects with people, processes and data using common architectures, interoperability and open standards.

RTI has made a significant investment in support for the IIC because we believe that this is the best way to allow large engineering companies to engage with technology innovators to deliver truly interoperable IIoT solutions. At our upcoming Connext Conferences in Washington and London, we will be adding a focus on IIC industry standardisation alongside our traditional deep dive into Connext DDS and its real-world applications.

Attending one of this year’s Connext Conferences, which will be held in Washington D.C. and London, will allow you to understand not only how to build world-class IIoT solutions, but also how your work can sit within an emerging framework for interoperability that will allow it to be applied across your industry and beyond.

Registration for these events will be open shortly – stay tuned!