Well Being over Ethernet

Guest Author: Andrew Patterson, Business Development Director for Mentor Graphics’ embedded software division (Thank you, Andrew!)

Mentor Embedded on the NXP Smarter World Truck 2017

One of the larger commercial vehicles present at CES 2017 was the NXP® Smarter World Truck – an 18-wheeler parked right outside the Convention Center. It contained over 100 demonstrations of NXP products, showing some of the latest innovations in home automation, medical, industrial and other fields. Mentor Embedded, together with RTI, worked with NXP to set up a medical demonstration that showed real-time data aggregation from medical sensors. By collecting medical data and analyzing it in real time, either locally or in a back-office cloud, clinicians can reach a much quicker and more accurate diagnosis of a medical condition. Mentor Embedded’s aggregation gateway made use of the multicore NXP i.MX6, a well-established platform, running our own secure Mentor Embedded Linux®. The technology we specifically wanted to highlight in this example was DDS (Data Distribution Service), implemented by RTI’s Connext® DDS Professional. The DDS communication protocol, running over a physical Ethernet network, allows multiple sensor nodes to link to a hub or gateway, so it is appropriate for many medical and industrial applications where multi-node data needs to be collected securely and reliably.

Traditional patient monitoring systems have used client/server architectures, but these can be inflexible when reconfiguration is needed, and they don’t necessarily scale to the large number of clients in a large-scale medical or industrial installation. DDS instead uses a “publisher” and “subscriber” model: new publishers and subscribers can be added to the network without any other architectural changes, so the system scales naturally.
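
To make the publish-subscribe concept concrete, here is a minimal sketch in the spirit of RTI’s Connext DDS Python API (rti.connextdds). The PatientVitals type, topic name, and field names are invented for illustration, and exact API details may vary between Connext versions:

```python
import rti.connextdds as dds
import rti.idl as idl

# Hypothetical data type for this sketch; a real system would define it
# once and share it across all nodes.
@idl.struct
class PatientVitals:
    patient_id: str = ""
    heart_rate: int = 0
    spo2: float = 0.0

participant = dds.DomainParticipant(0)  # join DDS domain 0
topic = dds.Topic(participant, "PatientVitals", PatientVitals)

# A sensor node publishes readings as they become available...
writer = dds.DataWriter(participant.implicit_publisher, topic)
writer.write(PatientVitals(patient_id="bed-12", heart_rate=72, spo2=98.5))

# ...and a gateway subscribes, with no broker or central server in between.
reader = dds.DataReader(participant.implicit_subscriber, topic)
for sample in reader.take_data():
    print(sample.patient_id, sample.heart_rate, sample.spo2)
```

Adding another publisher or subscriber is just another DataWriter or DataReader on the same topic; nothing else in the system changes.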

In the publish-subscribe model there is no central data server – data flows directly from the patient-monitor source to the gateway destination. In our demo medical system, the data sources are individual sensors that put data onto the Ethernet network when new readings are available. Data is tagged by topic and can be read by any registered subscriber. Once received by the subscriber gateway, the data can be uploaded to a cloud resource for further analysis, comparison with historical readings, and trend analysis over time.

The process for adding a new node to a publish-subscribe network is straightforward. A new data element announces itself to the network when it attaches, optionally describing the types and formats of the data it provides. Subscribers then identify themselves to the data source to complete the system reconfiguration.

Mentor Embedded and RTI medical applications demo, where multi-node data is collected securely and reliably

DDS provides a range of communication data services to support a variety of application needs, ranging from guaranteed command-and-control to real-time data transmission. For example, if a “halt” command must be sent to a specific node, there is a service type that guarantees error-free delivery, so sensor data transmission stops immediately. There are also time-sensitive modes that minimize network latency for data that must arrive quickly. Less time-critical data can make use of a “best effort” service, where transmission is scheduled at a lower priority than the time-sensitive communication.
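
As a rough illustration of how these service levels are selected (again using the Connext Python API, with the caveat that accessor names may differ by version), reliability is simply a QoS policy set on the writer:

```python
import rti.connextdds as dds

# Guaranteed delivery for critical traffic such as a "halt" command:
command_qos = dds.DataWriterQos()
command_qos.reliability = dds.Reliability.reliable()

# "Best effort" for high-rate sensor samples that tolerate occasional loss:
sensor_qos = dds.DataWriterQos()
sensor_qos.reliability = dds.Reliability.best_effort()

# The QoS is applied when the writer is created, e.g. (participant and
# topic as in the earlier sketch):
#   writer = dds.DataWriter(participant.implicit_publisher, topic, command_qos)
```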

Our demonstration setup in the NXP Smarter World Truck 2017 is shown in the picture on the left. The NXP i.MX6 quad-core system was linked to a 10” touch-screen display showing patient graphs. The Mentor Embedded Linux operating system included the RTI Connext DDS protocol stack, the necessary drivers for high-performance graphics, and the Ethernet network connections. Other options include a fast-boot capability and wireless communication links for cloud connectivity. For more information, please visit Mentor Embedded Linux.

To see when the NXP Smarter World Truck is coming near you, visit the schedule at iot.nxp.com/americas/schedule – it is updated frequently, so keep an eye on it!

Linux is the registered trademark of Linus Torvalds in the U.S. and other countries.

2nd Version of the Industrial Internet Reference Architecture is Out with Layered Databus

A year and a half ago the IIC released the first version of the Industrial Internet Reference Architecture (IIRA) – now the second version (v1.8) is out. It includes tweaks, updates and improvements, the most important or interesting of which is a new Layered Databus Architecture Pattern. RTI contributed this new architecture pattern in the Implementation Viewpoint of the IIRA because we’ve seen it deployed by hundreds of organizations that use DDS. Now it’s one of the 3 common implementation patterns called out by the new version of the IIRA.

So, what is a databus? According to the IIC’s Vocabulary document, “a databus is a data-centric information-sharing technology that implements a virtual, global data space, where applications read and update data via a publish-subscribe communications mechanism. Note to entry: key characteristics of a databus are (a) the applications directly interface with the operational data, (b) the databus implementation interprets and selectively filters the data, and (c) the databus implementation imposes rules and manages Quality of Service (QoS) parameters, such as rate, reliability and security of data flow.”

For those who know the DDS standard, this should sound familiar. You can implement a databus with a lower-level protocol like MQTT, but DDS provides all the higher-level QoS, data-handling, and security mechanisms you will need for a full-featured databus.
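
The “selectively filters the data” part of that definition maps directly onto DDS content-filtered topics. Here is a hedged sketch with the Connext Python API; the Reading type and the filter expression are invented for illustration:

```python
import rti.connextdds as dds
import rti.idl as idl

@idl.struct
class Reading:           # hypothetical sensor-reading type
    sensor_id: str = ""
    value: float = 0.0

participant = dds.DomainParticipant(0)
topic = dds.Topic(participant, "Readings", Reading)

# Subscribe only to readings above a threshold: the databus does the
# filtering, so the application never sees (and ideally the network never
# carries) the rest.
filtered = dds.ContentFilteredTopic(topic, "HighReadings",
                                    dds.Filter("value > 42"))
reader = dds.DataReader(participant.implicit_subscriber, filtered)
```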

As we look across the hundreds of IIoT systems DDS users have developed, what emerges is a common architecture pattern with multiple databuses layered by communication QoS and data model needs. As the figure below shows, databuses are usually implemented at the edge in the smart machines or lowest-level subsystems, such as a turbine, a car, an oil rig or a hospital room. Above those, one or more databuses integrate these smart machines or subsystems, facilitating data communication between them and with the higher-level control center or backend systems. The backend or control-center layer may be the highest-layer databus in the system, but there can be more than these three layers. It’s in the control-center layer (which could be the cloud) that we find the data historians, user interfaces, high-level analytics and other top-level applications. From this layer, it’s straightforward to zero in on a particular data publication at any layer of the system as needed. It’s also from this highest layer that we usually see integration with business and IT systems.

The Layered Databus Architecture Pattern: one of three implementation patterns in the newly released Industrial Internet Reference Architecture v1.8.

Why use a layered databus architecture? As the new IIRA says, you get these benefits:

  • Fast device-to-device integration – with delivery times in milliseconds or microseconds
  • Automatic data and application discovery – within and between databuses
  • Scalable integration – comprising hundreds of thousands of machines, sensors and actuators
  • Natural redundancy – allowing extreme availability and resilience
  • Hierarchical subsystem isolation – enabling development of complex system designs

If you want to dig into the databus concept, especially as it compares with a database (similar data-centric patterns for integrating distributed systems, but different in the way they integrate via data), take a look at this earlier blog post on databus versus database.

In addition to the new IIRA release, the IIC is getting ready to release an important document on the Connectivity Framework for its reference architecture. Look for much more detail on this document that sets out core connectivity standards for the Industrial Internet.

A Foggy Forecast for the Industrial Internet of Things

Signs on I-280 up the San Francisco peninsula proclaim it the “World’s Most Beautiful Freeway.” It’s best when the fog rolls over the hills into the valley, as in this picture I took last summer.

That fog is not just pretty, it’s also the natural refrigerator responsible for California’s famously perfect weather. Clouds in the right place work wonders.

What is Fog?

This is a perfect analogy for the impending future of Industrial Internet of Things (IIoT) computing. In weather, fog is the same thing as clouds, only close to the ground. In the IoT, fog is defined as cloud technology close to the things. Neither is a precise term, but it’s true in both cases: clouds in the right place work wonders.

The major industry consortia, including the Industrial Internet Consortium (IIC) and the OpenFog Consortium, are working hard to better define this future.  All agree that many aspects that drive the spectacular success of the cloud must extend beyond data centers.  They also agree that the real world contains challenges not handled by cloud systems.  And they bandy about names and brand positioning.  By any name, the fog, or layered edge computing, is critical to the operation of the industrial infrastructure.

Perhaps the best way to understand fog is to examine real use cases.

Example: Connected Medical Devices

Consider first the coming future of intelligent medical systems.  The driving issue is an alarming fact: the 3rd leading cause of death in the US is hospital error.  Despite extensive protocols that check and recheck assumptions, device alarms, training on alarm fatigue, and years of experience, the sad truth is that hundreds of thousands of people die every year because of miscommunications and errors.  It is increasingly clear that compensating for human error in such a complex environment is not the solution.  The best path is to use technology to take better care of patients.

The Integrated Clinical Environment standard is a leading effort to create an intelligent, distributed system to monitor and care for patients.  The key idea is to connect medical devices to each other and to an intelligent “supervisory” computing function.  The supervisor acts like a tireless member of the care team, checking patient status and intelligently alerting human caretakers or even taking autonomous actions when there are problems.

The supervisor combines and analyzes oximeter, capnometer, and respirator readings to reduce false alarms and stop drug infusion to prevent overdose. The DDS “databus” connects all the components with real-time reliable delivery.
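
To give a flavor of the supervisory function, here is a deliberately toy sketch in the Connext Python API. The topic names, the types, and the clinical rule itself are invented for illustration; real ICE supervisor logic is far more involved:

```python
import rti.connextdds as dds
import rti.idl as idl

@idl.struct
class Vitals:              # hypothetical fused oximeter/capnometer reading
    patient_id: str = ""
    spo2: float = 0.0      # oxygen saturation, percent
    etco2: float = 0.0     # end-tidal CO2, mmHg

@idl.struct
class PumpCommand:         # hypothetical infusion-pump command
    patient_id: str = ""
    halt: bool = False

participant = dds.DomainParticipant(0)
vitals = dds.DataReader(participant.implicit_subscriber,
                        dds.Topic(participant, "Vitals", Vitals))
pump = dds.DataWriter(participant.implicit_publisher,
                      dds.Topic(participant, "PumpCommand", PumpCommand))

# Toy rule: require two independent signals to agree before acting --
# corroboration is how the supervisor reduces false alarms.
for v in vitals.take_data():
    if v.spo2 < 90.0 and v.etco2 > 60.0:
        pump.write(PumpCommand(patient_id=v.patient_id, halt=True))
```

A production supervisor would of course run continuously, using listeners or an asynchronous read loop, rather than draining the reader once.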

This sounds simple.  However, consider the real-world challenges.  The problem is not just the intelligence.  Current medical devices do not communicate at all.  They have no idea that they are connected to the same patient.  There’s no obvious way to ensure data consistency, staff monitoring, or reliable operation.

Worse, the above diagram covers only one patient.  That’s not the reality of a hospital; hospitals have hundreds or thousands of beds.  Patients move between rooms every day.  The environment includes a mix of wired and wireless networks.  Finding and delivering information within this treatment-critical environment is a formidable challenge.

A realistic hospital environment includes thousands of patients and hundreds of thousands of devices. Reliable monitoring technology must find the right patient and guarantee delivery of that patient’s data to the right analysis or staff. In the connectivity map above, every red dot is a “fog routing node”, responsible for passing the right data up to the next layer.

This scenario exposes the key need for a layered fog system.  Complex systems like this must be built from hierarchical subsystems.  Each subsystem shares internal data, with possibly complex dataflow, to execute its functions.  For instance, a ventilator is a complex device that controls gas flows, monitors patient state, and delivers assisted breathing.  Internally, it includes many sensors, motors, and processors that share this data.  Externally, it presents a much simpler interface that conveys the patient’s physiological state.  Each of the hundreds of types of devices in a hospital faces a similar challenge.  The fog computing system must exchange the right information up the chain at each level.

Note that this use case is not a good candidate for cloud-based technology.  These machines must exchange fast, real-time data flows, such as signal waveforms, to properly make decisions.  Also, patient health is at stake.  Thus, each critical component will need a very reliable connection and even redundant implementation for failover.  Those failovers must occur in a matter of seconds.  It’s not safe or practical to rely on remote connections.

Example: Autonomous Cars

The “driverless car” is the most disruptive innovation in transportation since the “horseless carriage”.  Autonomous Drive (AD) cars and trucks will change daily life and the economy in ways that are hard to imagine.  They will move people and things faster, safer, cheaper, farther, and easier than the primitive “bio-drive” cars of the last century.  And the economic impact is stunning: 30% of all US jobs will end or change; trucking, delivery, traffic control, urban transport, child & elder care, roadside hotels, restaurants, insurance, auto body, law, real estate, and leisure will never again be the same.

Autonomous car software exchanges many data types and sources. Video and Lidar sensors are very high volume; feedback control signals are fast. Infrastructure that reliably sends exactly the right information to exactly the right places at the right time makes system development much easier. The vehicle thus combines the performance of embedded systems with the intelligence of the cloud…aka fog.

Intelligent vehicles are complex distributed systems.  An autonomous car combines vision, radar, lidar, proximity sensors, GPS, mapping, navigation, planning, and control.  These components must work together as a reliable, safe, secure system that can analyze and react to complex, chaotic environments in real time.  Autonomy is thus a supreme technical challenge.  An autonomous car is more a robot on wheels than it is a car.  Automotive vendors suddenly face a very new challenge.  They need fog.

Fog integrates all the components in an autonomous car design. Each of these components is a complex module on its own. As in the hospital patient monitoring case, this is only one car; fog routing nodes (red) are required to integrate subsystems and connect the car into a larger cloud-based system. This system also requires fast performance, extreme reliability, integration of many types of dataflow, and controlled module interactions. Note that cloud-based applications are also critical components. Fog systems must seamlessly merge with cloud-based applications as well.

How Can Fog Work?

So, how can this all work?  I’ve hinted at a few of the requirements above.  Connectivity is perhaps the greatest challenge.  Enterprise-class technologies cannot deliver the performance, reliability, redundancy, and distributed scale that IIoT systems need.

The key insight is that systems are all about the data.  The enabling technology is data-centricity.

A data-centric system has no hard-coded interactions between applications.  When applied to fog connectivity, this concept overcomes problems associated with point-to-point system integration, such as lack of scalability, interoperability, and the ability to evolve the architecture. It enables plug-and-play simplicity, scalability, and exceptionally high performance.

The leading standard for data-centric connectivity is the Data Distribution Service (DDS).  DDS is not like other middleware.  It directly addresses real-time systems. It features extensive fine control of real-time Quality of Service (QoS) parameters, including reliability, bandwidth control, delivery deadlines, liveliness status, resource limits, and security.  It explicitly manages the communications “data model”, or types and QoS used to communicate between endpoints.  It is thus a “data-centric” technology.
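
A hedged sketch of what that fine control looks like in code (Connext Python API; the policy names follow the DDS specification, though exact accessors may vary by version):

```python
import rti.connextdds as dds

qos = dds.DataWriterQos()
qos.reliability = dds.Reliability.reliable()     # guaranteed delivery
qos.durability = dds.Durability.transient_local  # late joiners get recent samples
qos.history = dds.History.keep_last(10)          # bound memory per data instance

# Deadline, liveliness, resource limits, and security are configured the
# same way -- per topic, per writer, per reader.
```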

DDS is all about the data: finding data, communicating data, ensuring fresh data, matching data needs, and controlling data.  Like a database, which provides data-centric storage, DDS understands the contents of the information it manages.  This data-centric nature, analogous to a database, justifies the term “databus”.

Traditional communications architectures directly connect applications. This connection takes many forms, including messaging, remote object-oriented invocation, and service oriented architectures. Data-centric systems fundamentally differ because applications interact only with the data and properties of data. Data-centricity decouples applications and greatly enables scalability, interoperability and integration. Because many applications may interact with the data independently, data-centricity also makes redundancy natural.

Note that the databus replaces the application-application interaction with application-data-application interaction.  This abstraction is the crux of data-centricity and it’s absolutely critical.  Data-centricity decouples applications and greatly eases scaling, interoperability, and system integration.

Continuing the analogy above, a database implements this same trick for data-centric storage.  It saves old information that you can later search by relating properties of the stored data.  A databus implements data-centric interaction.  It manages future information by letting you filter by properties of the incoming data.  Data-centricity makes a database essential for large storage systems.  Data-centricity makes a databus a fundamental technology for large software-system integration.

The databus automatically discovers and connects publishing and subscribing applications.  No configuration changes are required to add a new smart machine to the network.  The databus matches and enforces QoS.  The databus insulates applications from the execution, or even existence, of other applications.  As long as its data specifications are met, an application can run successfully.

A databus also requires no servers.  It uses a protocol to discover possible connections.  All dataflow is directly peer-to-peer for the lowest possible latency.  And, with no servers to clog or fail, the fundamental infrastructure is both scalable and reliable.

To scale as in our examples above, we must combine hierarchical subsystems; that’s important to fog.  This requires a component that isolates subsystem interfaces, a “fog routing node”.  Note that this is a conceptual term.  It does not have to be, and often is not, implemented as a hardware device.  It is usually implemented as a service, or running application.  That service can run anywhere needed: on the device itself, in a separate box, or in the higher-level system.  Its function is to “wrap a box around” a subsystem, thus hiding the complexity.  The subsystem thus exports only the needed data, allows only controlled access, and even presents a single security domain (certificate).  Also, because the databus so naturally supports redundancy, the service design allows highly reliable systems to simply run many parallel routing nodes.

Hierarchical systems require containment of subsystem internal data. The fog routing node maps data models between levels, controls information export, enables fast internal discovery, and maps security domains. The external interface is thus a much simpler view that hides the internal system.
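
As a concrete, much-simplified illustration of such a node (all names here are invented; RTI’s off-the-shelf Routing Service plays this role in real deployments), a fog routing node can be an ordinary DDS application with a foot in two domains, republishing only the data the subsystem chooses to export:

```python
import time
import rti.connextdds as dds
import rti.idl as idl

@idl.struct
class PatientSummary:      # hypothetical simplified "exported" type
    patient_id: str = ""
    status: str = ""

inner = dds.DomainParticipant(0)   # the subsystem's internal databus
outer = dds.DomainParticipant(1)   # the next layer up

reader = dds.DataReader(inner.implicit_subscriber,
                        dds.Topic(inner, "PatientSummary", PatientSummary))
writer = dds.DataWriter(outer.implicit_publisher,
                        dds.Topic(outer, "PatientSummary", PatientSummary))

# Export loop: every internal topic *other* than PatientSummary stays inside
# domain 0, so the subsystem's complexity and attack surface remain hidden.
while True:
    for sample in reader.take_data():
        writer.write(sample)
    time.sleep(0.1)  # a real service would use listeners or async reads
```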

RTI has immense experience with this design, with over 1000 projects.  These include fast 3kHz feedback loops for robotics, NASA KSC’s huge 300k-point launch control SCADA system, Siemens Wind Power’s largest offshore turbine farms, the Grand Coulee dam, GE Healthcare’s CT imaging and patient monitoring product lines, almost all Navy ships of the US and its allies, Joy Global’s continuous mining machines, many pilotless drones and ground stations, Audi’s hardware-in-the-loop testing environment, and a growing list of autonomous car and truck designs.

The key benefits of a databus include:

  • Reliability: Easy redundancy and no servers to fail allow extremely reliable operation. The DDS databus supports systems that cannot tolerate being offline even for a short period, whether 5 minutes or 5 milliseconds.
  • Real-time: Databus peer-to-peer delivery easily supports latencies measured in milliseconds and even tens of microseconds.
  • Interface scale: Large software projects with more than 10 interacting modules must carefully define, coordinate, and evolve interfaces. Data-centric technology moves this responsibility from manual processes to automatic, enforced infrastructure.  RTI has experience with systems with over 1500 teams of programmers building thousands of interacting applications.
  • Data scale: When systems grow large, they must control dataflow. It’s simply not practical to send everything to every application.  The databus allows filtering by content, rate, and more.  Thus, applications receive only what they truly need.  This greatly reduces both network and processor load.  This is critical for any system with more than 1000 independently-addressable data items.
  • Architecture: Data-centricity is not easily “added” to a system. It is instead adopted as the core design.  Thus, the transformation makes sense only for next-generation IIoT designs.  Most system designs have lifecycles of many years.

Any system that meets most of these requirements should seriously consider a data-centric design.

The Foggy Future

Like the California fog blanket, a cloud in the right place works wonders.  Databus technology enables elastic computing by reliably bringing the data where it’s needed.  It supports building real-time, reliable, scalable systems.  Of course, communication is only one of the required functions of the evolving fog architecture.  But it is key, and it is relatively mature.  It is thus driving many designs.

The Industrial IoT will change nearly every industry, including transportation, medical, power, oil and gas, agriculture, and more.  It will be the primary driving trend in technology for the next several decades, the technology story of our lifetimes.  Fog computing will move powerful processing currently only available in the cloud out to the field.  The forecast is foggy indeed.