Databus vs. Database: The 6 Questions Every IIoT Developer Needs to Ask


The Industrial Internet of Things (IIoT) is full of confusing terms.  That’s unavoidable; despite its reuse of familiar concepts in computing and systems, the IIoT is a fundamental change in the way things work.  Fundamental changes require fundamentally new concepts.  One of the most important is the concept of a “databus”.

The soon-to-be-released IIC reference architecture version 2 contains a new pattern called the “layered databus” pattern.  I can’t say much more now about the IIC release, but going through the documentation process has been great for driving crisp definitions.

The databus definition is:

A databus is a data-centric information-sharing technology that implements a virtual, global data space.  Software applications read and update entries in a global data space. Updates are shared between applications via a publish-subscribe communications mechanism.

Key characteristics of a databus are:

  1. the participants/applications directly interface with the data,
  2. the infrastructure understands, and can therefore selectively filter the data, and
  3. the infrastructure imposes rules and guarantees of Quality of Service (QoS) parameters such as rate, reliability, and security of data flow.

Of course,  new concepts generate questions.  Some of the best questions came from an architect from a large database company.  We usually try to explain the databus concept from the perspective of a networking or software architect.  But, data science is perhaps a better approach.  Both databases and databuses are, after all, data science concepts.

Let’s look at the 6 most common questions.

Question 1: How is a databus different from a database (of any kind)?

Short answer: A database implements data-centric storage.  It saves old information that you can later search by relating properties of the stored data.  A databus implements data-centric interaction.  It manages future information by letting you filter by properties of the incoming data.

Long answer: Data centricity can be defined by these properties:

  • The interface is the data. There are no artificial wrappers or blockers to that interface like messages, or objects, or files, or access patterns.
  • The infrastructure understands that data. This enables filtering/searching, tools, & selectivity.  It decouples applications from the data and thereby removes much of the complexity from the applications.
  • The system manages the data and imposes rules on how applications exchange data. This provides a notion of “truth”.  It enables data lifetimes, data model matching, CRUD interfaces, etc.

A relational database is a data-centric storage technology. Before databases, storage systems were files with application-defined (ad hoc) structure.  A database is also a file, but it’s a very special file.  A database knows how to interpret the data and enforces access control.  A database thus defines “truth” for the system; data in the database can’t be corrupted or lost.

By enforcing simple rules that control the data model, databases ensure consistency.  By exposing the data to search and retrieval by all users, databases greatly ease system integration.  By allowing discovery of data and schema, databases also enable generic tools for monitoring, measuring, and mining information.

Like a database, data-centric middleware (a databus) understands the content of the transmitted data.  The databus also sends messages, but it sends very special messages.  It sends only messages specifically needed to maintain state.  Clear rules govern access to the data, how data in the system changes, and when participants get updates.  Importantly, only the infrastructure sends messages.  To the applications, the system looks like a controlled global data space.  Applications interact directly with data and data “Quality of Service” (QoS) properties like age and rate.  There is no application-level awareness or concept of “message”.  Programs using a databus read and write data; they do not send and receive messages.
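
To make that concrete, here is a minimal sketch of what “reading and writing data” looks like in code, using the modern C++ API of DDS (the most widely deployed databus standard). The `VehiclePosition` type and its fields are hypothetical; in a real system the type is defined in IDL and its type support is generated by the DDS tooling.

```cpp
#include <dds/dds.hpp>  // OMG DDS ISO C++ PSM umbrella header

// Hypothetical data type. In practice this is defined in IDL and compiled
// into C++ with generated type support.
struct VehiclePosition {
    int32_t vehicle_id;
    double x, y;    // position in meters
    double speed;   // meters per second
};

int main() {
    // Join the global data space (DDS domain 0).
    dds::domain::DomainParticipant participant(0);

    // A Topic names a shared data object in the global data space.
    dds::topic::Topic<VehiclePosition> topic(participant, "VehiclePosition");

    // Writing updates an entry in the data space; no destination is named.
    dds::pub::Publisher publisher(participant);
    dds::pub::DataWriter<VehiclePosition> writer(publisher, topic);
    writer.write(VehiclePosition{42, 10.0, 5.5, 12.3});

    // Reading takes whatever updates matched this reader; no sender is named.
    dds::sub::Subscriber subscriber(participant);
    dds::sub::DataReader<VehiclePosition> reader(subscriber, topic);
    for (const auto& sample : reader.take()) {
        if (sample.info().valid()) {
            // ... use sample.data() ...
        }
    }
}
```

Note what is absent: there is no send() or receive() addressed to a named peer anywhere. Both sides touch only the data space.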

[Figure: Database vs. Databus]

A database replaces files with data-centric storage that finds the right old data through search. A databus replaces messages with data-centric connectivity that finds the right future data through filtering. Both technologies make system integration much easier, supporting much larger scale, better reliability, and application interoperability.

With knowledge of the structure and demands on data, the databus infrastructure can do things like filter information, selecting when or even if to do updates.  The infrastructure itself can control QoS like update rate, reliability, and guaranteed notification of peer liveliness.  The infrastructure can discover data flows and offer those to applications and generic tools alike.  This knowledge of data status, in a distributed system, is a crisp definition of “truth”.  As in databases, the infrastructure exposes the data, both structure and content, to other applications.  This accessible source of truth greatly eases system integration.  It also enables generic tools and services that monitor and view information flow, route messages, and manage caching.

Question 2: “Software applications read and update entries in a global data space. Updates are shared between applications via a publish-subscribe communications mechanism.”  Does that mean that this is a database that you interact with via a pub-sub interface?

Short answer: No, there is no database.  A database implies storage: the data physically resides somewhere.  A databus implements a purely virtual concept called a “global data space”.

Long answer: The databus data space defines how to interact with future information.  For instance, if “you” are an intersection controller, you can subscribe to updates of vehicles within 200m of your position.  Those updates will then be delivered to you, should a vehicle ever approach.  Delivery is guaranteed in many ways (start within .01 secs, updated 100x/sec, reliable, etc.).  Note that the data may never be stored at all.  (Although some QoS settings like reliability may require some local storage.)  You can think of a data space as a set of specially-controlled data objects that will be filled with information in the exact way you specify, although that information is not (in general) saved by the databus…it’s just delivered.
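
As a hedged sketch (reusing the hypothetical `VehiclePosition` type from above), the intersection controller’s subscription might look like this in the DDS C++ API. The filter uses the SQL-like expression subset defined by the DDS standard; the `distance` field is assumed to exist in the published type.

```cpp
// Subscribe only to vehicles within 200 m of this intersection.
dds::topic::ContentFilteredTopic<VehiclePosition> nearby(
    topic, "NearbyVehicles", dds::topic::Filter("distance < 200.0"));

// Ask the databus for reliable delivery, with an update due every 10 ms.
dds::sub::qos::DataReaderQos qos;
qos << dds::core::policy::Reliability::Reliable()
    << dds::core::policy::Deadline(dds::core::Duration::from_millisecs(10));

dds::sub::DataReader<VehiclePosition> reader(subscriber, nearby, qos);
```

If no vehicle ever approaches, nothing flows and nothing is stored; the subscription simply describes future data.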

Question 3: “The participants/applications directly interface with the data.”  Could you elaborate on what that means?

With “message-centric” middleware, you write an application that sends data, wrapped in messages, to another application.  You may do that by having clients send data to servers, for instance.  Both ends need to know something about the other end, usually including things like the schema, but also likely assumed properties of the data like “it’s less than .01 seconds old”, or “it will come 100x/second”, or at least that there is another end alive, e.g. the server is running.  All these assumed properties are completely hidden in the application code, making reuse, system integration, and interoperability really hard.

With a databus, you don’t need to know anything about the source applications.  You make clear your data needs, and then the databus delivers it.  Thus, with a databus, each application interacts only with the data space.  As an application, you simply write to the data space or read from the data space with a CRUD interface.  Of course, you may require some QoS from that data space, e.g. you need your data updated 100x per second.  The data space itself (the databus) will guarantee you get that data (or flag an error).  You don’t need to know whether there is a single source of that data or 27 redundant sources, whether it comes over a network or shared memory, or whether it’s produced by a C program on Linux or a C# program on Windows.  All interactions are with your own view of the data space.  It also makes sense, for instance, to write data to a space with no recipients.  In this case, the databus may do absolutely nothing, or it may cache information for later delivery, depending on your QoS settings.
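
The “write to a space with no recipients” case, for example, is just a writer-side QoS choice. A sketch (hypothetical type and topic as before), asking the databus to cache the last update for late-joining readers rather than discard it:

```cpp
// Keep the most recent update and deliver it to readers that join later.
dds::pub::qos::DataWriterQos qos;
qos << dds::core::policy::Durability::TransientLocal()
    << dds::core::policy::History::KeepLast(1);

dds::pub::DataWriter<VehiclePosition> writer(publisher, topic, qos);
writer.write(VehiclePosition{7, 0.0, 0.0, 0.0});  // legal even with zero readers
```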

Note that both database and databus technologies replace the application-application interaction with application-data-application interaction.  This abstraction is absolutely critical.  It decouples applications and greatly eases scaling, interoperability, and system integration.  The difference is really one of old data stored in a (likely centralized) database, vs future data sent directly to the applications from a distributed data space.

Question 4: “The infrastructure understands, and can therefore selectively filter the data.” Isn’t that true of all pub-sub, where you can register for “events” of interest to you?

Most pub-sub is very primitive.  An application “registers interest”, and then everything is simply sent to that application.  So, for instance, an intersection collision detection algorithm could subscribe to “vehicle positions”.   The infrastructure then sends messages from any sensor capable of producing positions, with no knowledge of the data inside that message.  Even “content filtering” pub-sub offers only very simple specs and requires the system to pre-select what’s important for all.  There’s no real control of flow.

A databus is much more expressive.  That intersection could say “I am interested only in vehicle positions within 200m, moving at 10m/s towards me.  If a vehicle falls into my specs, I need to be updated 200 times a second.  You (the databus) need to guarantee me that all sensors feeding this algorithm promise to deliver data that fast…no slower or faster.  If a sensor updates 1000 times a second, then only send me every 5th update.  I also need to know that you actually are in touch with currently-live sensors (which I define as producing in the last 0.01secs) on all possible roadway approaches at all times.  Every sensor must be able to store 600 old samples (3 seconds worth), and update me with that old data if I need it.”   (These are a few of the 20+ QoS settings in the DDS standard.)
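
Nearly every clause of that request maps onto a standard DDS QoS policy. Below is a hedged sketch of the reader-side settings (the filter field names are hypothetical; matching writers make the corresponding promises with their own QoS):

```cpp
// "...only vehicle positions within 200 m, moving at 10 m/s towards me."
dds::topic::ContentFilteredTopic<VehiclePosition> approaching(
    topic, "ApproachingVehicles",
    dds::topic::Filter("distance < 200.0 AND closing_speed > 10.0"));

dds::sub::qos::DataReaderQos qos;
qos // "...updated 200 times a second": an update is due every 5 ms
    << dds::core::policy::Deadline(dds::core::Duration::from_millisecs(5))
    // "...no faster": decimate 1000 Hz sensors down to every 5th update
    << dds::core::policy::TimeBasedFilter(dds::core::Duration::from_millisecs(5))
    // "...currently-live sensors (producing in the last 0.01 secs)"
    << dds::core::policy::Liveliness::Automatic(dds::core::Duration::from_millisecs(10))
    // "...store 600 old samples, and update me with that old data if I need it"
    << dds::core::policy::History::KeepLast(600)
    << dds::core::policy::Durability::TransientLocal()
    << dds::core::policy::Reliability::Reliable();

dds::sub::DataReader<VehiclePosition> reader(subscriber, approaching, qos);
```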

Note that a subscribing application in the primitive pub-sub case is very dependent on the actual properties of its producers.  It has to somehow trust that they are alive (!), that they have enough buffers to save the information it may need, that they won’t flood it with information nor provide it too slowly.  If there are 10,000 cars being sensed 1000x/sec, but only 3 within 200m, it will have to receive 10,000*1000 = 10M samples every second just to find the 3*200 = 600 it needs to pay attention to.  It will have to ping every single sensor 100x/second just to ensure it is active.  If there are redundant sensors on different paths, it has to ping them all independently and somehow make sure all paths are covered.  If there are many applications, they all have to ping all the sensors independently.  It also has to know the schema of the producers, etc.

The application in the second case will, by contrast, receive exactly the 600 samples it cares about, comfortable in the knowledge that at least one sensor for each path is active.  The rate of flow is guaranteed.  Sufficient reliability is guaranteed.  The total dataflow is reduced by 99.994% (we only need 600 of the 10M samples, and smart middleware does filtering at the source).  For completeness, note that the collision algorithm is completely independent of the sensors themselves.  It can be reused on any other intersection, and it will work with one sensor per path or 17.  If during runtime, the network gets too loaded to meet the data specs (or something fails), the application will be immediately notified.
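
That last guarantee, immediate notification, is also part of the standard API: a reader can install a listener that the infrastructure calls back whenever the data contract is violated. A sketch:

```cpp
// The databus invokes these callbacks when the data contract is broken.
class ContractMonitor : public dds::sub::NoOpDataReaderListener<VehiclePosition> {
    void on_requested_deadline_missed(
        dds::sub::DataReader<VehiclePosition>& reader,
        const dds::core::status::RequestedDeadlineMissedStatus& status) override {
        // A matched sensor failed to deliver within the promised period.
    }

    void on_liveliness_changed(
        dds::sub::DataReader<VehiclePosition>& reader,
        const dds::core::status::LivelinessChangedStatus& status) override {
        // A sensor came alive, or went silent past its lease.
    }
};

// Attach to the reader created earlier:
// ContractMonitor monitor;
// reader.listener(&monitor, dds::core::status::StatusMask::all());
```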

Question 5: How does a databus differ from a CEP engine?

Short answer: A databus is a fundamentally distributed concept that selects and delivers data from local producers that match a simple specification.  A CEP engine is a centralized executable service that is capable of much more complex specifications, but must have all streams of data sent to one place.

Long answer: A Complex Event Processing (CEP) engine examines an incoming stream of data, looking for patterns you program it to identify.  When it finds one of those patterns, you can program it to take action. The patterns can be complex combinations of past and incoming future data.  However, it is a single service, running on a single CPU somewhere.  It transmits no information.

A databus also looks for patterns of data.  However, the specifications are simpler; it makes decisions about each data item as it’s produced.  The actions are also simpler; the only action it may take is to send that data to a requestor.  The power of a databus is that it is fundamentally distributed.  The looking happens locally on potentially hundreds, thousands, or even millions of nodes.  Thus, the databus is a very powerful way to select the right data from the right sources and send them to the right places.  A databus is sort of like a distributed set of CEP engines, one for every possible source of information, that are automatically programmed by the users of that information.  Of course, the databus has many other properties beyond pattern matching, such as schema mediation, redundancy management, transport support, an interoperable protocol, etc.

Question 6: What application drove the DDS standard and databuses?

The early applications were in intelligent robots, “information superiority”, and large coordinated systems like navy combat management.  These systems needed reliability even when components fail, data fast enough to control physical processes, and selective discovery and delivery to scale.  Data centricity really simplified application code and controlled interfaces, letting teams of programmers work on large software systems over time.  The DDS standard is an active, growing family of standards that was originally driven by both vendors and customers.  It has significant use across many verticals, including medical, transportation, smart cities, and energy.

If you’d like to learn about how intelligent software is sweeping the IIoT, be sure to download our whitepaper on the future of the automotive industry, “The Secret Sauce of Autonomous Cars”.

Is Your Security Tail Wagging Your Architecture Dog?


Recently, as a leader in the IIoT, I seem to get a lot of questions from insurance company executives. Their common question: where is the risk in the IIoT? Their theme seems to be: connecting things is just too risky. We don’t understand the security or safety risks, so It Can’t Be Good.

I disagree.

I do agree that the IoT is a brave new world in general, and for risk management in particular. There are all sorts of new opportunities for mischief if a machine is compromised. The hack that caused a Jeep to go off the road by getting into one of its wirelessly connected subsystems is a classic example.

That said, intelligent machines also have more opportunity to protect themselves. The sad truth today is that most systems are very poorly protected (like that Jeep). Security gets orders of magnitude more attention today than only a short time ago. Most industrial systems didn’t even consider anything beyond “eggshell” firewalls or “air gap” offline designs until recently. That has changed 100% today; everyone is thinking security, security, security. And progress is exhilarating. Put another way, I think that everyone is installing cyber “burglar alarms” much faster than the increase in burglars. Bottom line: despite the rise in connected systems, the “likely real” risk is going down in most cases.

My insurance contacts consider this an overly optimistic view of the future. I counter that they hold a too-optimistic view of the present. You see, I claim that the situation today is unacceptably, intolerably, unbelievably high risk. Entire industries run without a whit of security. It seems scarier in the future only because the risk you don’t know seems worse than the risk you do know. That’s human nature. But anyone who looks will see that the current risks are very high, and the new designs are much better.

That said, my real optimism stems from the opportunity to change. In my experience (and this may shock security wonks), security is not a change driver. By that, I mean that industrial systems are usually not willing to implement a new architecture (just) to improve security. The power industry is my favorite example. The industry has been screaming for 20 years that security is a problem. And, imho, they will go right on screaming…unless something else drives the change.

The good news: the IIoT is that change driver. And security today is absolutely a change gate. Every application insists on security when it does implement a new architecture for other reasons. Since the IIoT is motivating many, many industrial applications to redo their architectures, security is getting better. Of course, implementing a new architecture for a major industrial application, or for that matter an entire industry, is daunting. But this is the magic of the sweeping changes offered by the IIoT. The IIoT is compelling. Change is coming, and it’s coming fast.

While we’re on the topic of change, let’s not discount improvements in technology to enable that gate. For instance, many potential IIoT systems primarily face scalability and system integration challenges. With a little thought, the architects figure out that IIoT systems are all about the data, and then that they really have a high-performance data flow and data transparency challenge. The best way to provide transparent flow is a “peer to peer” or “publish subscribe” design. This is the architecture “dog”: systems need the simplicity and performance of a communications pattern that simply sends the data where it’s needed, right now. That data transparency makes the huge future IIoT system manageable.

Of course, although data transparency is an integration dream, it’s a security nightmare.

The “dog” side of the dialog goes something like this:

Hey! Let’s just send the data right where we need it. Pervasive data availability makes systems fast, reliable, and scalable. And look how much simpler the code is!

But, then comes the security “tail”:

We can’t maintain thousands of independent secure sessions! How do we keep such a system secure?

Only last year, that was a damn good question. It blocked adoption of IIoT technologies where they are really needed. But then, the DDS standard developed a security architecture that exactly matches its data-centric data flow design. The result? The data-centric dog wags its perfectly-matched data-centric security tail. Security works seamlessly without clouding data transparency. Advances like this—that span industries—will make future IIoT systems much more secure than today’s ad-hoc industry-specific quagmire of afterthought security hacks. Security that matches the architecture is elegant and functional.
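
To give a flavor of “security that matches the architecture”: in the OMG DDS Security standard, protection is configured on the databus itself, through participant properties plus signed governance and permissions documents, and the application’s read/write code is untouched. A hedged sketch follows; the property names come from the DDS Security specification, the file paths are placeholders, and the exact call that attaches these properties to the participant QoS varies by DDS implementation.

```cpp
#include <string>
#include <utility>
#include <vector>

// Standard DDS Security property names (values are placeholder file URIs).
// These are attached to the DomainParticipant QoS via the implementation's
// Property policy; data flows then get per-topic authentication/encryption.
const std::vector<std::pair<std::string, std::string>> security_properties = {
    {"dds.sec.auth.identity_ca",          "file:identity_ca.pem"},
    {"dds.sec.auth.identity_certificate", "file:participant_cert.pem"},
    {"dds.sec.auth.private_key",          "file:participant_key.pem"},
    {"dds.sec.access.permissions_ca",     "file:permissions_ca.pem"},
    {"dds.sec.access.governance",         "file:governance.p7s"},   // signed per-topic rules
    {"dds.sec.access.permissions",        "file:permissions.p7s"},  // signed per-app grants
};
```

The governance document states, topic by topic, whether discovery, metadata, and data must be authenticated or encrypted; the permissions document grants each application the right to publish or subscribe to specific topics. That per-data-flow granularity is what lets the security tail match the architecture dog.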

This argument leaves my insurance correspondents searching for Tao in their actuarial tables. So, I can’t resist adding that it’s not really what they should worry about.

Safety engineering will have a much bigger impact on insurance. For instance, I expect the $200B auto insurance industry to disappear in the next 10-20 years as ADAS and autonomous cars eliminate 90+% of accidents. Most hospital errors can also be prevented (hospital error is currently the third leading cause of death in the US). In factories, and plants, and oil rigs, and mining systems, and many more applications, automated systems (somewhat obviously) don’t have humans around, thus removing a significant risk. Accidents, in general, are mostly the result of human folly. Machines will soon check or eliminate the opportunity for folly. I see this as an extremely positive increase in the quality and preservation of life. Insurance execs see it as an existential threat.

I tell them not to feel bad; most industries will be greatly disrupted by smart machines. Navigating that transition well will make or break companies. Insurers certainly understand that losses are easier to grasp than gains; that principle underwrites their industry. But, that perception is not reality. The IIoT’s impact on the economy as a whole will be hugely positive; analysts measure it in multiple trillions of dollars in only a few years. So, there will be many, many places to seek and achieve growth. The challenge to find those paths is no less or greater for insurance than for any other industry. But, fundamentally, the IIoT will drive a greener, safer, better future. It Is Good.

To learn more about our security solutions, visit http://www.rti.com/products/secure.html.

How OPC UA and DDS Joined Forces


It all started, appropriately, at National Instruments’ annual show, NIWeek, in Austin, Texas. There, Thomas Burke, President & Executive Director at the OPC Foundation, approached me and asked if I was interested in helping build a partnership between the two most important connectivity solutions in the IIoT. Because of RTI’s leadership at the IIC and within DDS, we were well placed to lead.

That was the start of a great journey.

It was easy to agree to Thomas’s proposal. Both communities were struggling with how to differentiate their core value propositions. As everyone now knows, in practice, OPC UA and DDS solve very different problems. They focus on different industries. Even in the same application, they address different use cases.

Nonetheless, the world thought we were at war. Why?   You need to understand the confusion of a new, very hot market. The Internet changed banking, retail, and travel agencies.  It created huge new companies and ended many others.  But, it didn’t touch most industrial applications.  Factories, plants, hospitals, and power systems operate today the same way they did 20 years ago.

Suddenly that is changing.  Gartner, the analyst firm, predicts that the “smart machine era” will be the most disruptive in the history of IT.  The CEO of General Electric famously said if you go to bed an industrial manufacturer, you will wake up a software and analytics company.  The modernization of the industrial landscape—the “Industrial Internet of Things” (IIoT)—will impact virtually every industry on the planet.

Mega trends that sweep through huge swaths of the economy like that always cause a lot of stress.

In this case, the stress was a perceived clash of industry alliances. The German industrial leadership has been developing a new architecture for manufacturing called Industrie 4.0. The German government invested over a billion Euros in Industrie 4.0 over most of a decade. Then, in 2014, five large US companies founded the Industrial Internet Consortium (IIC). The IIC struck a nerve in the market, and quickly grew to include hundreds of companies. Since both the IIC and Industrie 4.0 are working on “industrial systems” architecture, people assume they compete. A challenging reporter wrote an article on the implications for world dominance, and a conflict was born.

Then, that same reporter posted an opinion that the conflict was really technical, rather than political, and the most important technical conflict was between OPC UA and DDS. Suddenly, both communities were embroiled in controversy that made no real sense.

The rest, as they say, is history. Today, the IIC and Industrie 4.0 announced their cooperation. Their plan is to seek ways to combine Industrie 4.0’s depth in manufacturing with the IIC’s breadth across industries. The core technologies have similar strengths and similar goals.

Our path had its rocky stretches, but we are making great progress. We are working on mapping the architectures. The OMG has an official standards effort to define an OPC UA/DDS bridge. The OPC Foundation is building a “DDS Profile” for OPC UA PubSub. And, the IIC is creating joint testbeds that will prove the integration. We are, together, building the IIoT’s future.

The positioning document and press release going out today are the result of many people’s work. They combine input from the major DDS and OPC UA vendors, from the IIC and Industrie 4.0, and from the OMG and OPC Foundation standards organizations. I would like to particularly thank those most involved: Thomas Burke and Stefan Hoppe from OPCF, Matthias Damm from Unified Automation, and RTI’s Gerardo Pardo-Castellote. Coordinating all these organizations to make any joint statement would be impressive on its own. But, somehow, we managed the deep cooperation required to clarify the markets and design a technical integration. That’s because we all realized how important it is to build a standard, interoperable design that covers the IIoT. By coordinating our political leadership with the leading technologies, we will build, together, the future of the IIoT.