Databus vs. Database: The 6 Questions Every IIoT Developer Needs to Ask


The Industrial Internet of Things (IIoT) is full of confusing terms.  That’s unavoidable; despite its reuse of familiar concepts in computing and systems, the IIoT is a fundamental change in the way things work.  Fundamental changes require fundamentally new concepts.  One of the most important is the concept of a “databus”.

The soon-to-be-released IIC reference architecture version 2 contains a new pattern called the “layered databus” pattern.  I can’t say much more now about the IIC release, but going through the documentation process has been great for driving crisp definitions.

The databus definition is:

A databus is a data-centric information-sharing technology that implements a virtual, global data space.  Software applications read and update entries in a global data space. Updates are shared between applications via a publish-subscribe communications mechanism.

Key characteristics of a databus are:

  1. the participants/applications directly interface with the data,
  2. the infrastructure understands, and can therefore selectively filter the data, and
  3. the infrastructure imposes rules and guarantees of Quality of Service (QoS) parameters such as rate, reliability, and security of data flow.

Of course, new concepts generate questions.  Some of the best questions came from an architect at a large database company.  We usually try to explain the databus concept from the perspective of a networking or software architect.  But data science is perhaps a better approach.  Both databases and databuses are, after all, data science concepts.

Let’s look at the 6 most common questions.

Question 1: How is a databus different from a database (of any kind)?

Short answer: A database implements data-centric storage.  It saves old information that you can later search by relating properties of the stored data.  A databus implements data-centric interaction.  It manages future information by letting you filter by properties of the incoming data.

Long answer: Data centricity can be defined by these properties:

  • The interface is the data. There are no artificial wrappers or blockers to that interface like messages, or objects, or files, or access patterns.
  • The infrastructure understands that data. This enables filtering/searching, tools, & selectivity.  It decouples applications from the data and thereby removes much of the complexity from the applications.
  • The system manages the data and imposes rules on how applications exchange data. This provides a notion of “truth”.  It enables data lifetimes, data model matching, CRUD interfaces, etc.

A relational database is a data-centric storage technology. Before databases, storage systems were files with application-defined (ad hoc) structure.  A database is also a file, but it’s a very special file.  A database knows how to interpret the data and enforces access control.  A database thus defines “truth” for the system; data in the database can’t be corrupted or lost.

By enforcing simple rules that control the data model, databases ensure consistency.  By exposing the data to search and retrieval by all users, databases greatly ease system integration.  By allowing discovery of data and schema, databases also enable generic tools for monitoring, measuring, and mining information.

Like a database, data-centric middleware (a databus) understands the content of the transmitted data.  The databus also sends messages, but it sends very special messages: only the messages needed to maintain state.  Clear rules govern access to the data, how data in the system changes, and when participants get updates.  Importantly, only the infrastructure sends messages.  To the applications, the system looks like a controlled global data space.  Applications interact directly with data and data “Quality of Service” (QoS) properties like age and rate.  There is no application-level awareness or concept of “message”.  Programs using a databus read and write data; they do not send and receive messages.

Database vs Databus

A database replaces files with data-centric storage that finds the right old data through search. A databus replaces messages with data-centric connectivity that finds the right future data through filtering. Both technologies make system integration much easier, supporting much larger scale, better reliability, and application interoperability.

With knowledge of the structure and demands on data, the databus infrastructure can do things like filter information, selecting when or even if to do updates.  The infrastructure itself can control QoS like update rate, reliability, and guaranteed notification of peer liveliness.  The infrastructure can discover data flows and offer those to applications and generic tools alike.  This knowledge of data status, in a distributed system, is a crisp definition of “truth”.  As in databases, the infrastructure exposes the data, both structure and content, to other applications.  This accessible source of truth greatly eases system integration.  It also enables generic tools and services that monitor and view information flow, route messages, and manage caching.
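To make the storage-versus-interaction contrast concrete, here is a toy sketch in plain Python. This is only an illustration of the idea; it is not a real DDS API, and all names (`TinyDatabus`, the sample fields) are hypothetical:

```python
# A database answers questions about OLD data: search stored rows.
stored_readings = [
    {"sensor": "s1", "temp_c": 21.0},
    {"sensor": "s2", "temp_c": 35.5},
    {"sensor": "s1", "temp_c": 36.1},
]
hot_history = [r for r in stored_readings if r["temp_c"] > 30.0]  # search

# A databus answers subscriptions about FUTURE data: it filters each
# update as it is produced, before delivery.
class TinyDatabus:
    def __init__(self):
        self.subscriptions = []  # (predicate, callback) pairs

    def subscribe(self, predicate, callback):
        self.subscriptions.append((predicate, callback))

    def write(self, sample):
        # The infrastructure understands the data, so it delivers each
        # sample only to the subscribers whose filter it matches.
        for predicate, callback in self.subscriptions:
            if predicate(sample):
                callback(sample)

bus = TinyDatabus()
received = []
bus.subscribe(lambda s: s["temp_c"] > 30.0, received.append)
bus.write({"sensor": "s3", "temp_c": 36.8})  # delivered
bus.write({"sensor": "s4", "temp_c": 20.1})  # filtered out at the source
```

The database-style search runs over data that already happened; the databus-style filter decides, per sample, whether a future update gets delivered at all.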

Question 2: “Software applications read and update entries in a global data space. Updates are shared between applications via a publish-subscribe communications mechanism.”  Does that mean that this is a database that you interact with via a pub-sub interface?

Short answer: No, there is no database.  A database implies storage: the data physically resides somewhere.  A databus implements a purely virtual concept called a “global data space”.

Long answer: The databus data space defines how to interact with future information.  For instance, if “you” are an intersection controller, you can subscribe to updates of vehicles within 200m of your position.  Those updates will then be delivered to you, should a vehicle ever approach.  Delivery is guaranteed in many ways (start within 0.01 seconds, updated 100x/sec, reliable, etc.).  Note that the data may never be stored at all.  (Although some QoS settings like reliability may require some local storage.)  You can think of a data space as a set of specially-controlled data objects that will be filled with information in the exact way you specify, although that information is not (in general) saved by the databus…it’s just delivered.
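As a rough illustration of one such guarantee, here is how a deadline contract (“updated 100x/sec”) might be checked. This is a simplified plain-Python sketch with simulated timestamps, not the actual DDS deadline mechanism:

```python
DEADLINE_S = 1 / 100  # contract: an update at least every 10 ms (100x/sec)

def deadline_violations(arrival_times, deadline_s):
    """Return the inter-arrival gaps that break the deadline contract.
    In a databus, the infrastructure (not the application) would raise
    these as errors."""
    return [
        (t0, t1)
        for t0, t1 in zip(arrival_times, arrival_times[1:])
        if (t1 - t0) > deadline_s + 1e-9  # tolerance for float timestamps
    ]

steady = [i * 0.01 for i in range(5)]  # a healthy 100 Hz stream
late = [0.0, 0.01, 0.05, 0.06]         # one 40 ms gap: contract broken

violations = deadline_violations(late, DEADLINE_S)  # [(0.01, 0.05)]
```

The point is who does the checking: the application just states the contract, and the infrastructure enforces it.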

Question 3: “The participants/applications directly interface with the data.”  Could you elaborate on what that means?

With “message-centric” middleware, you write an application that sends data, wrapped in messages, to another application.  You may do that by having clients send data to servers, for instance.  Both ends need to know something about the other end, usually including things like the schema, but also likely assumed properties of the data like “it’s less than 0.01 seconds old”, or “it will come 100x/second”, or at least that there is another end alive (e.g. the server is running).  All these assumed properties are completely hidden in the application code, making reuse, system integration, and interoperability really hard.

With a databus, you don’t need to know anything about the source applications.  You make clear your data needs, and then the databus delivers it.  Thus, with a databus, each application interacts only with the data space.  As an application, you simply write to the data space or read from the data space with a CRUD interface.  Of course, you may require some QoS from that data space, e.g. you need your data updated 100x per second.  The data space itself (the databus) will guarantee you get that data (or flag an error).  You don’t need to know whether there is one source of that data or 27 redundant ones, whether it comes over a network or shared memory, or whether it’s a C program on Linux or a C# program on Windows.  All interactions are with your own view of the data space.  It also makes sense, for instance, to write data to a space with no recipients.  In this case, the databus may do absolutely nothing, or it may cache information for later delivery, depending on your QoS settings.
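A toy model of that data-space interaction might look like this. It is a hypothetical sketch (real DDS APIs differ, and names like `DataSpace` are invented for illustration), but it shows the essential shape: applications touch only the data space, never each other:

```python
class DataSpace:
    """A toy 'global data space': applications read and write keyed data
    objects and never address each other directly."""

    def __init__(self):
        self._instances = {}  # key -> latest value (a cache, not a database)
        self._readers = []    # (predicate, inbox) pairs

    # CRUD-style interface: write creates or updates an instance.
    def write(self, key, value):
        self._instances[key] = value
        for predicate, inbox in self._readers:
            if predicate(key, value):
                inbox.append((key, value))

    def read(self, key):
        return self._instances.get(key)

    def dispose(self, key):
        self._instances.pop(key, None)

    def subscribe(self, predicate):
        """Declare a data need; matching future updates land in the inbox."""
        inbox = []
        self._readers.append((predicate, inbox))
        return inbox

space = DataSpace()
inbox = space.subscribe(lambda key, value: key.startswith("pump"))
space.write("pump-7", {"rpm": 1800})  # the writer knows nothing about readers
space.write("fan-2", {"rpm": 900})    # no matching reader: nothing delivered
```

Note that writing with no matching reader is perfectly legal; whether that sample is dropped or cached is a QoS decision, not application code.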

Note that both database and databus technologies replace the application-application interaction with application-data-application interaction.  This abstraction is absolutely critical.  It decouples applications and greatly eases scaling, interoperability, and system integration.  The difference is really one of old data stored in a (likely centralized) database, vs future data sent directly to the applications from a distributed data space.

Question 4: “The infrastructure understands, and can therefore selectively filter the data.” Isn’t that true of all pub-sub, where you can register for “events” of interest to you?

Most pub-sub is very primitive.  An application “registers interest”, and then everything is simply sent to that application.  So, for instance, an intersection collision detection algorithm could subscribe to “vehicle positions”.   The infrastructure then sends messages from any sensor capable of producing positions, with no knowledge of the data inside that message.  Even “content filtering” pub-sub offers only very simple specs and requires the system to pre-select what’s important for all.  There’s no real control of flow.

A databus is much more expressive.  That intersection could say “I am interested only in vehicle positions within 200m, moving at 10m/s towards me.  If a vehicle falls into my specs, I need to be updated 200 times a second.  You (the databus) need to guarantee me that all sensors feeding this algorithm promise to deliver data that fast…no slower or faster.  If a sensor updates 1000 times a second, then only send me every 5th update.  I also need to know that you actually are in touch with currently-live sensors (which I define as producing within the last 0.01 seconds) on all possible roadway approaches at all times.  Every sensor must be able to store 600 old samples (3 seconds’ worth), and update me with that old data if I need it.”   (These are a few of the 20+ QoS settings in the DDS standard.)
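The filtering and rate-control parts of that spec are easy to sketch in plain Python. The field names (`distance_m`, `speed_mps`) are hypothetical; in real DDS the content filter would be an SQL-like filter expression on a content-filtered topic, and the rate cap would be time-based filter QoS:

```python
# The intersection's requirements, expressed as data.
content_filter = lambda s: s["distance_m"] < 200 and s["speed_mps"] > 10
MAX_RATE_HZ = 200  # "update me 200 times a second"

def downsample(samples, source_rate_hz, max_rate_hz):
    """Source-side rate limiting: a 1000 Hz sensor feeding a 200 Hz
    subscriber forwards only every (1000 // 200) = 5th matching sample."""
    step = max(1, source_rate_hz // max_rate_hz)
    return [s for i, s in enumerate(samples) if i % step == 0]

# One second of updates from a 1000 Hz sensor tracking a nearby vehicle.
updates = [{"distance_m": 150, "speed_mps": 12, "seq": i} for i in range(1000)]

matching = [s for s in updates if content_filter(s)]  # content filter
delivered = downsample(matching, 1000, MAX_RATE_HZ)   # time-based filter
```

The subscriber ends up seeing exactly its contracted 200 samples for that second, no matter how fast the sensor runs.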

Note that a subscribing application in the primitive pub-sub case is very dependent on the actual properties of its producers.  It has to somehow trust that they are alive (!), that they have enough buffers to save the information it may need, and that they won’t flood it with information nor provide it too slowly.  If there are 10,000 cars being sensed 1000x/sec, but only 3 within 200m, it will have to receive 10,000 × 1,000 = 10 million samples every second just to find the 3 × 200 = 600 it needs to pay attention to.  It will have to ping every single sensor 100x/second just to ensure it is active.  If there are redundant sensors on different paths, it has to ping them all independently and somehow make sure all paths are covered.  If there are many applications, they all have to ping all the sensors independently.  It also has to know the schema of the producers, etc.

The application in the second case will, by contrast, receive exactly the 600 samples it cares about, comfortable in the knowledge that at least one sensor for each path is active.  The rate of flow is guaranteed.  Sufficient reliability is guaranteed.  The total dataflow is reduced by 99.994% (we need only 600 of the 10 million samples, and smart middleware does the filtering at the source).  For completeness, note that the collision algorithm is completely independent of the sensors themselves.  It can be reused on any other intersection, and it will work with one sensor per path or 17.  If during runtime the network gets too loaded to meet the data specs (or something fails), the application will be immediately notified.
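The 99.994% figure follows directly from the numbers in the scenario:

```python
# 10,000 vehicles sensed at 1000 Hz, but only 3 vehicles of interest,
# each delivered at 200 Hz after source-side filtering.
raw_per_sec = 10_000 * 1000   # 10,000,000 samples/sec without filtering
needed_per_sec = 3 * 200      # 600 samples/sec with filtering

reduction = 1 - needed_per_sec / raw_per_sec
print(f"{reduction:.3%}")  # 99.994%
```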

Question 5: How does a databus differ from a CEP engine?

Short answer: A databus is a fundamentally distributed concept that selects and delivers data from local producers that match a simple specification.  A CEP engine is a centralized executable service that is capable of much more complex specifications, but must have all streams of data sent to one place.

Long answer: A Complex Event Processing (CEP) engine examines an incoming stream of data, looking for patterns you program it to identify.  When it finds one of those patterns, you can program it to take action. The patterns can be complex combinations of past and incoming future data.  However, it is a single service, running on a single CPU somewhere.  It transmits no information.

A databus also looks for patterns of data.  However, the specifications are simpler; it makes decisions about each data item as it’s produced.  The actions are also simpler; the only action it may take is to send that data to a requestor.  The power of a databus is that it is fundamentally distributed.  The looking happens locally on potentially hundreds, thousands, or even millions of nodes.  Thus, the databus is a very powerful way to select the right data from the right sources and send them to the right places.  A databus is sort of like a distributed set of CEP engines, one for every possible source of information, that are automatically programmed by the users of that information.  Of course, the databus has many other properties beyond pattern matching, such as schema mediation, redundancy management, transport support, an interoperable protocol, etc.
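Here is a minimal sketch of that “distributed set of CEP engines” idea in plain Python (names are hypothetical): the readers’ filters are pushed out to every producer node, each sample is evaluated locally at the source, and the only possible “action” is forwarding a matching sample, so non-matching data never touches the network:

```python
class ProducerNode:
    """One of potentially millions of data sources. It evaluates
    subscribers' filters locally, per sample, as the data is produced."""

    def __init__(self, name):
        self.name = name
        self.remote_filters = []  # (predicate, reader) pairs pushed by the databus

    def publish(self, sample, network_traffic):
        # Local decision: forward only samples some reader asked for.
        for predicate, reader in self.remote_filters:
            if predicate(sample):
                network_traffic.append((self.name, reader, sample))

# The databus installs the alarm app's filter on every producer node.
nodes = [ProducerNode(f"sensor-{i}") for i in range(3)]
alarm_filter = lambda s: s["level"] > 0.9
for node in nodes:
    node.remote_filters.append((alarm_filter, "alarm-app"))

traffic = []
nodes[0].publish({"level": 0.95}, traffic)  # matches: crosses the network
nodes[1].publish({"level": 0.10}, traffic)  # dropped at the source
```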

Question 6: What application drove the DDS standard and databuses?

The early applications were in intelligent robots, “information superiority”, and large coordinated systems like navy combat management.  These systems needed reliability even when components fail, data fast enough to control physical processes, and selective discovery and delivery to scale.  Data centricity really simplified application code and controlled interfaces, letting teams of programmers work on large software systems over time.  The DDS standard is an active, growing family of standards that was originally driven by both vendors and customers.  It has significant use across many verticals, including medical, transportation, smart cities, and energy.

If you’d like to learn about how intelligent software is sweeping the IIoT, be sure to download our whitepaper on the future of the automotive industry, “The Secret Sauce of Autonomous Cars”.

Why I Joined RTI

With a fresh perspective, I thought I could write about this small company in Silicon Valley that you probably haven’t heard of: Real-Time Innovations, Inc. (RTI). RTI has been quietly working on a technology called DDS that could be one of the most important and fundamental tools for the industrial internet revolution. If you haven’t heard, the industrial internet, or Industrial Internet of Things (IIoT), is going to change the world in ways we haven’t seen since the industrial revolution. My grandparents saw technology change from the horse and cart to the proliferation of the automobile and the internet. This next revolution is going to be a much bigger deal.


I worked with RTI for 3 years while running LocalGrid Technologies. I can’t take the credit for selecting RTI as a vendor. However, I think our CTO did his homework and chose RTI and Connext DDS for the technical benefits, and also because of the company’s dedication to the quality and reliability of their product. That level of product care, I’m sure, comes in part from their long history working on the hardest problems, like those of the US DoD. You can read a bit more about this here.

So why did I join RTI? The initial motivation was the relationship I had with the company and the product. With RTI’s Connext™ DDS toolkit and Connext DDS Secure, it was clear that DDS would be a fundamental and critically important technology for LocalGrid’s efforts in the Smart Grid market. Having worked on many system integration projects where we struggled to create scalable, simple-to-use, and robust communications solutions, I knew how important this middleware would be. But it was the influence and leadership of this small company that really attracted me, first to working with them as a partner, and then as an employer.

So when I found myself looking for my next career opportunity, I knew I wanted a company growing in an exciting technology space. The product and the company seemed like a great fit. However, when I started interviewing, what really struck me was how much everyone seemed to be working together. The stories about the company’s strengths, benefits, and motivation were consistent across all groups in the company, and included everyone. Everyone was ‘pulling together’. Instead of telling me how great their own team was, each group in the company talked about how well sales, or engineering, or marketing was doing. The company is small, about 100 people, but clearly punching above its weight in multiple industries. It has a culture of service: “we won’t let our customers fail” is a phrase I heard many times in the interview process. This can only be evidence of teamwork and a sense of ownership.


My first week on the job happened to coincide with the Company Kick-off week, when every RTI employee from all over the world meets in person at our HQ in Sunnyvale, CA – definitely a good time to join; I got to meet a lot of people at the company in person. This is important in a company where 50% of the staff work remotely. The most important thing I took away from that first week was a great look at the company culture, which I would describe principally as an immense amount of trust between everyone. That translates into a lot of questions and a lot of feedback. Of course there are growing pains, as with any small company, and they don’t have unlimited resources or funds. But clearly, with this teamwork, and especially in a new and undefined market, RTI is successfully ‘failing towards success’. That’s great, because they aren’t afraid to try, to put the company and product out in front, and to learn about their customers and markets as they go. With teamwork, this works.

RTI is an engineering company with a complex product, and I can confirm that it has a very ‘geek’-friendly culture. But more than that, it is a culture where people communicate and express themselves freely. I’ve run my own meetings and attended many others where the inevitable “are there any questions?” is met with nothing but silence. At RTI, there is never silence. There is always a question or a comment, and it comes from all departments in the company: Engineering, Sales, Marketing, Operations, no matter what the topic is. Everyone seems to understand the importance of what RTI is doing, and everyone cares. Add the clear respect and trust that exists in the company, and this translates into lots of questions, feedback, and collaboration. This is how such a small group can be the most influential company in its space, beating out companies like Google and GE.

Unlike many Silicon Valley companies, people stay here for a long time. I’ve met engineers who have been here for 15+ years as the company’s technology has changed, evolved, and matured. They move from engineering to sales or marketing and into management along the way. The people make a huge impact on the sense of ownership and the culture. I am very happy to be part of this amazing little company, and I suggest everyone keep an eye on RTI. The impact on our society, businesses, and the economy will be huge, even if you never know it.

25 Partners, the IIoT, and a Smart Grid Demo at Distributech 2016

At DistribuTECH 2016, held the second week of March in Orlando, FL, Duke Energy and 25 partners demonstrated a distributed microgrid application scattered across 12 booths on the show floor. As the culmination of Duke’s Coalition of the Willing Phase II (COW II) project, it demonstrated near-real-time microgrid use cases like optimization, islanding, and grid resynchronization. Each booth was connected via wireless networks and ran a part of the overall microgrid demonstration, all based on the new OpenFMB (Open Field Message Bus) distributed device interoperability framework using open Industrial IoT protocols. DDS was one of 3 IoT publish-subscribe protocols used in the simulated demo and underpinned the SCADA control messages between OpenFMB nodes.


Duke Energy demonstrating an interoperable microgrid application across 14 booths at Distributech 2016

OpenFMB is both a project at the SGIP (the Smart Grid Interoperability Panel – a consortium of utilities and vendors dedicated to accelerating the development of a smart power grid) and an emerging standard from NAESB (North American Energy Standards Board) for edge device communication and control. The OpenFMB project team at SGIP spent several months focused on challenging microgrid use cases that require low-latency peer-to-peer communication and control across edge devices in the power grid. Duke Energy’s Stuart Laval and Hitachi Consulting’s Stuart McCafferty (yes, we have nicknames to tell them apart on the weekly calls) chaired the OpenFMB project team. The team delivered an initial demonstration of the framework at SGIP’s annual meeting in New Orleans in late 2015. This initial demo used DDS for real-time data communications between all the devices and controllers. The result was a proof of concept of interoperability between microgrid devices executing near-real-time use cases.

In parallel, Duke Energy and a team of 25 vendors partnered as the Coalition of the Willing II team to develop a live operational system at Duke’s Mt Holly microgrid testbed. Mt Holly includes real equipment such as a 100-kW photovoltaic solar system with smart inverter capabilities, a 250-kW battery energy storage system, a 10-kW photovoltaic solar carport, an operations room, and more. Based entirely on OpenFMB, this testbed was completed recently and showcases both edge device interoperability and the real-time communications and control needed for microgrid applications. The microgrid in Mt Holly is able to switch to an islanded mode in under 50ms. OpenFMB provides an open data model based on existing standards and IIoT protocols, including DDS (of course!), MQTT, and AMQP. You can learn more about the Coalition of the Willing (COW) and Mt Holly by watching Duke’s video (above) or going to their coalition website.

At the DistribuTECH event this year, Duke Energy and a bunch of the COW II partners demonstrated a simulation of the Mt Holly microgrid testbed. Duke showed eight demonstrations, 30 minutes each, over the three days of the conference and presented to over 1,000 attendees. DDS was one of the main publish-subscribe middleware technologies for communicating SCADA data between a variety of devices in different booths, and DDS held its own on the noisy (for the network that is) show floor.  (For more detail on the demo and for a blow-by-blow on how our team members debugged the DDS data communications check out this previous blog post.) I would say this demo was the talk of the show, proving that OpenFMB (and DDS and IIoT protocols) can drive previously difficult-at-best edge control use cases for the Smart Grid.

If you’d like to learn more, be sure to check out the replay of our recent webinar, “How to Architect Microgrids for the Industrial Internet of Things”!