2nd Version of the Industrial Internet Reference Architecture is Out with Layered Databus


A year and a half ago the IIC released the first version of the Industrial Internet Reference Architecture (IIRA), and now the second version (v1.8) is out. It includes tweaks, updates and improvements, the most notable of which is a new Layered Databus Architecture Pattern. RTI contributed this architecture pattern to the Implementation Viewpoint of the IIRA because we’ve seen it deployed by hundreds of organizations that use DDS. It is now one of the three common implementation patterns called out in the new version of the IIRA.

So, what is a databus? According to the IIC’s Vocabulary document, “a databus is a data-centric information-sharing technology that implements a virtual, global data space, where applications read and update data via a publish-subscribe communications mechanism. Note to entry: key characteristics of a databus are (a) the applications directly interface with the operational data, (b) the databus implementation interprets and selectively filters the data, and (c) the databus implementation imposes rules and manages Quality of Service (QoS) parameters, such as rate, reliability and security of data flow.”

For those who know the DDS standard, this should sound familiar. You can implement a databus with a lower-level protocol like MQTT, but DDS provides all the higher-level QoS, data-handling, and security mechanisms you will need for a full-featured databus.
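To make the definition concrete, the toy sketch below models the three databus characteristics from the IIC vocabulary: applications interface directly with operational data, the middleware selectively filters data per subscriber, and late joiners see current state. This is an illustrative, in-process model only; `Databus`, `Subscription`, and every name here are invented for this sketch and are not the DDS API.

```python
# Toy in-process model of databus concepts: topics, publish-subscribe, and
# per-subscriber content filtering. Illustrative only -- NOT the DDS API.

from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Subscription:
    callback: Callable[[dict], None]
    content_filter: Callable[[dict], bool] = lambda sample: True

@dataclass
class Databus:
    """A toy 'virtual global data space' keyed by topic name."""
    subscriptions: dict = field(default_factory=dict)  # topic -> [Subscription]
    last_value: dict = field(default_factory=dict)     # topic -> latest sample

    def subscribe(self, topic, callback, content_filter=lambda s: True):
        self.subscriptions.setdefault(topic, []).append(
            Subscription(callback, content_filter))
        # Late joiners still see current state: deliver the last known sample.
        if topic in self.last_value and content_filter(self.last_value[topic]):
            callback(self.last_value[topic])

    def publish(self, topic, sample):
        self.last_value[topic] = sample
        for sub in self.subscriptions.get(topic, []):
            # The middleware, not the application, filters each subscriber's data.
            if sub.content_filter(sample):
                sub.callback(sample)

bus = Databus()
alerts = []
bus.subscribe("TurbineStatus",
              callback=alerts.append,
              content_filter=lambda s: s["temp_c"] > 90)  # only overheating samples
bus.publish("TurbineStatus", {"id": "T1", "temp_c": 85})  # filtered out
bus.publish("TurbineStatus", {"id": "T2", "temp_c": 95})  # delivered
```

The point of the sketch is the division of labor: the subscriber declares *what* data it wants, and the databus decides *which* samples to deliver. A real DDS implementation adds the QoS machinery (rate, reliability, durability, security) on top of this basic contract.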

Looking across the hundreds of IIoT systems DDS users have developed, what emerges is a common architecture pattern with multiple databuses layered by communication QoS and data-model needs. As the figure below shows, databuses are usually implemented at the edge, in smart machines or lowest-level subsystems such as a turbine, a car, an oil rig or a hospital room. Above those, one or more databuses integrate the smart machines or subsystems, carrying data between them and up to the higher-level control center or backend systems. The backend or control-center layer is often the highest-layer databus in the system, although there can be more than three layers. It is in the control-center layer (which could be the cloud) that we find the data historians, user interfaces, high-level analytics and other top-level applications. From this layer, it is straightforward to zero in on a particular data publication at any layer of the system as needed, and it is from this highest layer that we usually see integration with business and IT systems.
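The layering can be sketched as a set of independent buses joined by bridges that forward only selected topics upward. The `Bus` and `bridge` names below are invented for illustration (a real deployment would use DDS domains and a routing service, not this toy), but the sketch captures the key property: high-rate raw data stays local to its layer, while summaries flow up.

```python
# Toy sketch of the layered databus pattern (invented names, not a DDS API):
# each layer is its own pub-sub bus, and bridges forward only selected topics.

class Bus:
    def __init__(self, name):
        self.name, self.subs = name, {}
    def subscribe(self, topic, cb):
        self.subs.setdefault(topic, []).append(cb)
    def publish(self, topic, sample):
        for cb in self.subs.get(topic, []):
            cb(sample)

def bridge(lower, upper, topics):
    """Forward only the listed topics from a lower-layer bus to the layer above."""
    for t in topics:
        lower.subscribe(t, lambda s, t=t: upper.publish(t, s))

machine_bus = Bus("turbine-7")       # edge layer: raw, high-rate data
site_bus = Bus("wind-farm")          # integration layer
control_bus = Bus("control-center")  # top layer: historians, analytics, UIs

# Raw vibration data stays local; only health summaries flow upward.
bridge(machine_bus, site_bus, ["MachineHealth"])
bridge(site_bus, control_bus, ["MachineHealth"])

seen_at_top = []
control_bus.subscribe("MachineHealth", seen_at_top.append)
machine_bus.publish("RawVibration", {"hz": 1200})              # never leaves the edge
machine_bus.publish("MachineHealth", {"id": "T7", "ok": True})  # reaches the top
```

This isolation between layers is what makes the pattern scale: each databus only carries the traffic its layer needs, and each subsystem can be developed and tested against its own bus.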


The Layered Databus Architecture Pattern: one of three implementation patterns in the newly released Industrial Internet Reference Architecture v1.8.

Why use a layered databus architecture? As the new IIRA says, you get these benefits:

  • Fast device-to-device integration – with delivery times in milliseconds or microseconds
  • Automatic data and application discovery – within and between databuses
  • Scalable integration – comprising hundreds of thousands of machines, sensors and actuators
  • Natural redundancy – allowing extreme availability and resilience
  • Hierarchical subsystem isolation – enabling development of complex system designs

If you want to dig into the databus concept, especially how it compares with a database (both are data-centric patterns for integrating distributed systems, but they differ in how they integrate via data), take a look at this earlier blog post on databus versus database.

In addition to the new IIRA release, the IIC is getting ready to release an important document on the Connectivity Framework for its reference architecture. Look for much more detail soon on this document, which sets out core connectivity standards for the Industrial Internet.

The First Smart Healthcare Testbed at the Industrial Internet Consortium


Today, Infosys, RTI, PTC, and Massachusetts General Hospital’s MD PnP Lab launched the Industrial Internet Consortium’s (IIC) entry into smart healthcare.

Healthcare is an industry in transition. The current, poorly connected state of healthcare systems limits progress in this field and is an obstacle to providing better medical treatment at lower cost. The need for better systems is urgent. Today, hospital errors contribute to hundreds of thousands of deaths in the US every year. And with at least 80% of older adults suffering from one or more chronic health issues, there is an enormous opportunity to improve healthcare and prevent unnecessary loss of life.

The rise of the Industrial Internet of Things (IIoT) promises a better future. With the IIoT, we can create intelligent distributed systems of devices that monitor patient status, keep doctors better informed, and prevent errors. New healthcare systems can provide remote monitoring and care both in the home and in small clinics. This revolution will lower costs, improve patient outcomes, enable much better care in hospitals and homes, and improve long-term care.

The new Connected Care Testbed applies the IIC’s leading reference architecture to demonstrate and prove connected care systems for hospitals, homes, and clinics. The goal is to develop an open IIoT architecture for clinical and remote medical devices. It will integrate patient monitoring data with data management and analytics. The testbed leverages the OpenICE medical device framework to ensure secure interoperability between medical devices and applications. RTI Connext DDS, already used by the MD PnP Lab, will be the data distribution middleware for the testbed.

Testbed use case: A doctor receives a text message warning about one of her patients and logs into her user account on the Connected Care system. On the summary dashboard, the system has highlighted one of her patients who may need follow-up. Clicking on the patient’s name, the doctor accesses the patient’s health history and sees a highlighted alert that the patient has been taking medication incorrectly (as recorded by a sensor in the patient’s home). The doctor then views details of the patient’s medication use and is able to determine the problem. With another click, the doctor forwards the alert to her staff for follow-up, with a note to discuss proper use of medication with the patient.
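A rule behind such an adherence alert might look like the sketch below. This is purely illustrative, not the testbed's actual logic; `check_adherence`, the interval, and the tolerance are all invented here to show how dose-sensor timestamps could be turned into a follow-up flag.

```python
# Illustrative sketch (NOT the testbed's actual code): flag a patient for
# follow-up when home-sensor dose events fall outside the expected schedule.

from datetime import datetime, timedelta

def check_adherence(dose_events, expected_interval=timedelta(hours=24),
                    tolerance=timedelta(hours=6)):
    """Return True if every gap between doses is close to the expected interval."""
    times = sorted(dose_events)
    for earlier, later in zip(times, times[1:]):
        gap = later - earlier
        if abs(gap - expected_interval) > tolerance:
            return False   # missed or doubled dose -> needs follow-up
    return True

events = [datetime(2016, 3, 1, 8), datetime(2016, 3, 2, 9),
          datetime(2016, 3, 4, 8)]   # one day was skipped
adherent = check_adherence(events)   # False -> dashboard highlights this patient
```

In the testbed scenario, a rule of this kind would run over data distributed on the databus, and a `False` result is what surfaces as the highlighted alert on the doctor's dashboard.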

As the lead on the testbed, Infosys will be doing much of the system development and integration. Infosys has identified a family willing to participate in the first phase for home monitoring. The plan is to use home monitoring devices like a Fitbit activity monitor, a smart scale, a Withings blood pressure monitor, a GreatCall fall detection system, and an Ivy Vital-Guard 450C patient monitor. The goal is then to find a hospital or clinic partner and use the MD PnP Lab at Massachusetts General Hospital for initial data collection for clinical monitoring. Data gathering at the MD PnP Lab already uses RTI Connext DDS in its implementation of the OpenICE framework, and RTI and the MD PnP Lab continue to develop OpenICE together.

As a coordinating activity, RTI collaborates with the MD PnP Lab on securing data in an OpenICE system. Security is critical for many reasons. First, laws such as the Health Insurance Portability and Accountability Act (HIPAA) mandate patient privacy. Also, security attacks can have serious safety consequences for patients. In a Phase 1 SBIR project for the Defense Health Program, RTI modeled security requirements for a set of key care environments and worked out security risks for ICE deployments in each setting. RTI then demonstrated how, and to what extent, the risks would be mitigated by using an ICE Controller that complies with the recently adopted DDS Security specification. Collaborating with the MD PnP Lab, we developed a proof-of-concept prototype. We recently used this proof of concept as an example in our tech lab at the RSA Conference, and we plan to apply the lessons learned from this research in the Connected Care testbed as well.
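For flavor, the DDS Security specification expresses such protections declaratively, in signed governance and permissions documents rather than application code. The fragment below is a simplified, illustrative sketch in the spirit of the spec's governance format; the topic name is invented, and a real governance file carries additional required elements and a signature.

```xml
<!-- Simplified, illustrative governance sketch in the spirit of the OMG
     DDS Security specification; abridged, with an invented topic name. -->
<dds>
  <domain_access_rules>
    <domain_rule>
      <domains><id>0</id></domains>
      <allow_unauthenticated_participants>false</allow_unauthenticated_participants>
      <enable_join_access_control>true</enable_join_access_control>
      <rtps_protection_kind>SIGN</rtps_protection_kind>
      <topic_access_rules>
        <topic_rule>
          <topic_expression>PatientVitals*</topic_expression>
          <enable_read_access_control>true</enable_read_access_control>
          <data_protection_kind>ENCRYPT</data_protection_kind>
        </topic_rule>
      </topic_access_rules>
    </domain_rule>
  </domain_access_rules>
</dds>
```

The appeal for medical systems is that per-topic rules like these let a deployment encrypt sensitive vitals streams and control who may read them, without changing device application code.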

Learn more on the IIC site, or watch this video to hear what Infosys, MD PnP, and RTI have to say about the Connected Care Testbed.

25 Partners, the IIoT, and a Smart Grid Demo at Distributech 2016

At DistribuTECH 2016, the second week of March in Orlando, FL, Duke Energy and 25 partners demonstrated a distributed microgrid application scattered across 12 booths on the show floor. As the culmination of Duke’s Coalition of the Willing Phase II (COW II) project, it demonstrated near-real-time microgrid use cases like optimization, islanding and grid resynchronization. Each booth was connected via wireless networks and ran a part of the overall microgrid demonstration, all based on the new OpenFMB (Open Field Message Bus) distributed device interoperability framework using open Industrial IoT protocols. DDS was one of three IoT publish-subscribe protocols used in the simulated demo and underpinned the SCADA control messages between OpenFMB nodes.

Duke Booth DTech 2016

Duke Energy demonstrating an interoperable microgrid application across 14 booths at Distributech 2016

OpenFMB is both a project at the SGIP (the Smart Grid Interoperability Panel – a consortium of utilities and vendors dedicated to accelerating the development of a smart power grid) and an emerging standard from NAESB (North American Energy Standards Board) for edge device communication and control. The OpenFMB project team at SGIP spent several months focused on challenging microgrid use cases that require low-latency peer-to-peer communication and control across edge devices in the power grid. Duke Energy’s Stuart Laval and Hitachi Consulting’s Stuart McCafferty (yes, we have nicknames to tell them apart on the weekly calls) chaired the OpenFMB project team. The team delivered an initial demonstration of the framework at SGIP’s annual meeting in New Orleans in late 2015. This initial demo used DDS for real-time data communications between all the devices and controllers. The result was a proof of concept of interoperability between microgrid devices executing near-real-time use cases.

In parallel, Duke Energy and a team of 25 vendors partnered as the Coalition of the Willing II team to develop a live operational system at Duke’s Mt Holly microgrid testbed. Mt Holly includes real equipment such as a 100-kW photovoltaic solar system with smart inverter capabilities, a 250-kW battery energy storage system, a 10-kW photovoltaic solar carport, an operations room and more. Based entirely on OpenFMB, this testbed was completed recently and showcases both edge-device interoperability and the real-time communications and control needed for microgrid applications. The microgrid at Mt Holly is able to switch to an islanded mode in under 50 ms. OpenFMB provides an open data model based on existing standards, plus IIoT protocols including DDS (of course!), MQTT, and AMQP. You can learn more about the Coalition of the Willing (COW) and Mt Holly by watching Duke’s video (above) or going to their coalition website.
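To see why sub-50 ms islanding demands low-latency peer-to-peer data distribution, consider the decision a microgrid controller has to make on every fast measurement cycle. The sketch below is invented for illustration (it is not OpenFMB or Duke's code): it trips the island decision only after several consecutive out-of-band frequency samples, so a single noisy reading does not disconnect the microgrid.

```python
# Illustrative sketch (invented names, not OpenFMB code): an islanding decision
# based on grid-frequency excursions, evaluated over fast measurement samples.

NOMINAL_HZ = 60.0
MAX_DEVIATION_HZ = 0.5   # beyond this, assume a grid disturbance
TRIP_AFTER_SAMPLES = 3   # require consecutive bad samples to avoid noise trips

def should_island(freq_samples_hz):
    """Return True once enough consecutive out-of-band samples are seen."""
    consecutive = 0
    for f in freq_samples_hz:
        if abs(f - NOMINAL_HZ) > MAX_DEVIATION_HZ:
            consecutive += 1
            if consecutive >= TRIP_AFTER_SAMPLES:
                return True   # open the point-of-interconnection breaker
        else:
            consecutive = 0   # excursion ended; reset the counter
    return False

# should_island([60.0, 59.4, 60.0, 60.1]) -> False (brief dip recovers)
# should_island([60.0, 59.3, 59.2, 59.1]) -> True  (sustained excursion)
```

With measurements arriving every few milliseconds, waiting for three consecutive samples still leaves the controller inside a 50 ms budget only if the middleware delivers each sample with millisecond-scale latency, which is exactly the niche DDS fills in OpenFMB.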

At the DistribuTECH event this year, Duke Energy and a number of the COW II partners demonstrated a simulation of the Mt Holly microgrid testbed. Duke ran eight demonstrations, 30 minutes each, over the three days of the conference and presented to over 1,000 attendees. DDS was one of the main publish-subscribe middleware technologies for communicating SCADA data between a variety of devices in different booths, and DDS held its own on the noisy (for the network, that is) show floor. (For more detail on the demo, and for a blow-by-blow on how our team members debugged the DDS data communications, check out this previous blog post.) I would say this demo was the talk of the show, proving that OpenFMB (and DDS and IIoT protocols) can drive previously difficult-at-best edge-control use cases for the Smart Grid.

If you’d like to learn more, be sure to check out the replay of our recent webinar, “How to Architect Microgrids for the Industrial Internet of Things”!