Why Would Anyone Use DDS Over ZeroMQ?


Choices, choices, choices. We know you have many when it comes to your communications, messaging and integration platforms. A little over a year ago, someone asked a great question on StackOverflow: why would anyone use DDS over ZeroMQ? In their words, “it seems that there is no benefit [to] using DDS instead of ZeroMQ.”

It’s an important question, and one we believe you should have the answer to. So our CTO, Gerardo Pardo-Castellote, provided an answer on StackOverflow; a shorter version of his answer is provided below.


In my (admittedly biased) experience as a DDS implementer/vendor, I have found that many application developers find significant benefits in using DDS over other middleware technologies, including ZeroMQ. In fact, I see many more “critical applications” using DDS than ZeroMQ.

First, a couple of clarifications:

DDS, and the RTPS protocol it uses underneath, are standards, not specific products.

There are many implementations of these standards. To my knowledge, there are at least nine different companies that have independent products and codebases implementing these standards. It does not make sense to talk about the “performance” of DDS versus ZeroMQ; you can only talk about the performance of a specific implementation. I will address the issue of performance later, but from that point of view alone, the statement “the latency of ZeroMQ is better” is plainly wrong. The opposite statement would be just as wrong, of course.

When is it better to use DDS?

It is difficult to provide a short/objective answer to a question as broad as this. I am sure ZeroMQ is a good product and many people are happy using it. The same can be said about DDS. I think the best thing to do is point at some of the differences and let people decide what is important to them.

DDS and ZeroMQ are different in terms of governance, ecosystem, capabilities, and even the layer of abstraction. Some important differences:

Governance, Standards and Ecosystem

Both DDS and RTPS are open international standards from the Object Management Group (OMG). ZeroMQ is a “loose structure controlled by its contributors.” This means that with DDS there is open governance and clear OMG processes that control the specification and its evolution, as well as the intellectual property rights (IPR) rules.

ZeroMQ’s IPR is less clear. On the web page (http://zeromq.org/docs:features), it is stated that “ZeroMQ’s libzmq core is owned by its contributors” and “The ZeroMQ organization is a loose confederation without a clear power structure, that mostly lives on GitHub. The organization’s Wiki page explains how anyone can join the Owners’ team simply by bringing in some interesting work.”

This “loose structure” may be more problematic to users who care about things like IPR pedigree, warranty and indemnification.

Related to that, if I understood correctly, there is only one core ZeroMQ implementation (the one on GitHub), and only one company that stands behind it (iMatix). Beyond that, it seems just four committers do most of the development work in the core (libzmq). If iMatix were to be acquired or decided to change its business model, or if the main committers lost interest, users’ only recourse would be to support the codebase themselves.

Of course, there are many successful projects/technologies based on common ownership of the code. On the other hand, having an ecosystem of companies competing with independent products, codebases, and business models provides users with assurance regarding the future of the technology. It all depends on how big the communities and ecosystems are and how risk-averse the user is.

Features and Layer of Abstraction

Both DDS and ZeroMQ support patterns like publish-subscribe and request-reply (a new addition to DDS via the so-called DDS-RPC specification). But generally speaking, the layer of abstraction of DDS is higher, meaning the middleware does more “automatically” for the application. Specifically:

DDS provides for automatic discovery

In DDS you just publish and subscribe to topic names; you never have to provide IP addresses, computer names or ports. It is all handled by the built-in discovery protocol, automatically and without additional services. This means that applications can be re-deployed and integrated without recompilation or reconfiguration.

In comparison, ZeroMQ is lower level. You must specify ports, IP addresses, etc.
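
To make the contrast concrete, here is a minimal sketch using the standard DDS C++ API (the ISO C++ PSM) next to a cppzmq fragment. The FlightPosition type is hypothetical; assume it was generated from IDL by a vendor’s code generator.

```cpp
#include <dds/dds.hpp>   // DDS ISO C++ PSM
#include <zmq.hpp>       // cppzmq, shown only for contrast

int main() {
    // DDS: publish to a topic name. Built-in discovery matches this
    // writer with every reader of the same topic, wherever it runs;
    // no address, port or broker is configured anywhere.
    dds::domain::DomainParticipant participant(0);  // domain 0
    dds::topic::Topic<FlightPosition> topic(participant, "FlightPosition");
    dds::pub::DataWriter<FlightPosition> writer(
        dds::pub::Publisher(participant), topic);

    FlightPosition pos;            // hypothetical IDL-generated type
    pos.airplane_id("UA123");
    writer.write(pos);

    // ZeroMQ: the equivalent role requires naming a concrete endpoint.
    zmq::context_t ctx;
    zmq::socket_t zpub(ctx, zmq::socket_type::pub);
    zpub.bind("tcp://192.168.1.10:5556");  // transport, IP and port are explicit
    return 0;
}
```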

DDS pub-sub is data-centric

An application can publish to a Topic, but the associated data can represent updates to multiple data-objects, each identified by key attributes. For example, when publishing airplane positions, each update can identify the “airplane ID,” and the middleware can maintain history, enforce QoS and update rates, etc., for each airplane separately. The middleware understands and communicates when new airplanes appear or disappear from the system.

Also, DDS can keep a cache of relevant data for the application, which it can query (by key or content) as it sees fit, e.g., read the last 5 positions of an airplane. The application is notified of changes but is not forced to consume them immediately. This can also help reduce the amount of code the application developer needs to write.
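
A hedged sketch of how this looks in the ISO C++ PSM, continuing with the hypothetical FlightPosition type (whose IDL would mark airplane_id as a @key field; handle_position is a hypothetical application callback):

```cpp
// Assumed IDL for the type, shown as a comment:
//   struct FlightPosition {
//     @key string airplane_id;   // key attribute: one instance per airplane
//     double latitude;
//     double longitude;
//     double altitude;
//   };

dds::sub::DataReader<FlightPosition> reader(
    dds::sub::Subscriber(participant), topic);

// read() leaves samples in the middleware's cache so the application can
// revisit them later; take() would remove them.
dds::sub::LoanedSamples<FlightPosition> samples = reader.read();
for (const auto& sample : samples) {
    if (sample.info().valid()) {
        // One position update for one keyed instance (one airplane).
        handle_position(sample.data());   // hypothetical callback
    } else {
        // Invalid-data samples carry instance lifecycle notifications,
        // e.g. an airplane instance being disposed (disappearing).
    }
}
```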

DDS provides more support for “application” QoS

DDS supports over 22 message- and data-delivery QoS policies, such as reliability, endpoint liveliness, message persistence and delivery to late-joiners, message expiration, failover, monitoring of periodic updates, time-based filtering and ordering. This is all configured via simple QoS-policy settings. The application uses the same read/write API and all the extra work is done underneath.
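
A sketch of what this looks like in the ISO C++ PSM; the policy names below are standard DDS QoS policies, though this particular combination (and the reuse of the earlier hypothetical types) is just an illustration:

```cpp
// Reliable delivery, redelivery of the last 5 samples per instance to
// late-joining readers, and a 500 ms deadline so the middleware reports
// any instance whose periodic updates stop arriving.
dds::pub::qos::DataWriterQos qos;
qos << dds::core::policy::Reliability::Reliable()
    << dds::core::policy::Durability::TransientLocal()
    << dds::core::policy::History::KeepLast(5)
    << dds::core::policy::Deadline(dds::core::Duration::from_millisecs(500));

dds::pub::DataWriter<FlightPosition> writer(
    dds::pub::Publisher(participant), topic, qos);
writer.write(pos);   // same write() call; the QoS machinery runs underneath
```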

ZeroMQ approaches this problem by providing building blocks and patterns. It is quite flexible, but the application has to program, assemble and orchestrate the different patterns to get the higher-level behavior. For example, getting reliable pub-sub requires combining multiple patterns, as described in http://zguide.zeromq.org/page:all#toc119.

DDS supports additional capabilities like content-filtering, time-filtering, partitions, domains and more

These are not available in ZeroMQ. They would have to be built at the application layer.
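
For instance, content filtering is declarative in the ISO C++ PSM (again sketched with the hypothetical FlightPosition type); filter expressions use the SQL-like syntax defined by the DDS specification:

```cpp
// Readers of this filtered topic receive only matching samples; capable
// implementations can even evaluate the filter on the writer side so
// non-matching data never crosses the network.
dds::topic::ContentFilteredTopic<FlightPosition> low_flyers(
    topic, "LowFlyers", dds::topic::Filter("altitude < 1000"));

dds::sub::DataReader<FlightPosition> low_reader(
    dds::sub::Subscriber(participant), low_flyers);
```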

DDS provides a type system and supports type extensibility and mutability

You have to combine ZeroMQ with other packages, like Google Protocol Buffers, to get similar functionality.

Security

There is a DDS-Security specification that provides fine-grained (Topic-level) security including authentication, encryption, signing, key distribution and secure multicast.

How does the performance of DDS compare with ZeroMQ?

Note that you cannot use the benchmarks of Object Computing Inc.’s “OpenDDS” implementation for comparison. As far as I know, it is not one of the fastest DDS implementations. I would recommend you take a look at some of the other implementations, such as RTI Connext DDS (our implementation), PrismTech’s OpenSplice DDS or TwinOaks’ CoreDX DDS. Of course, results are highly dependent on the actual test, network and computers used, but typical latencies for the faster DDS implementations using C++ are on the order of 50 microseconds, not the 180 microseconds reported for OpenDDS. See https://www.rti.com/products/dds/benchmarks.html#CPPLATENCY

Middleware layers like DDS or ZeroMQ run on top of things like UDP or TCP, so I would expect them to be bound by what the underlying network can do. For simple cases they are likely not too different, and both will, of course, be slower than the raw transport.

Differences also come from what services they provide. So you should compare what you can get for the same level of service: for example, publishing reliably while scaling to many consumers, prioritizing information, and sending multiple flows and large data over UDP (to avoid TCP’s head-of-line blocking).

Based on the relative performance of different DDS implementations (http://www.dre.vanderbilt.edu/DDS/), I would expect that in an apples-to-apples test, the better-performing implementations of DDS would match or exceed ZeroMQ’s performance.

That said, people rarely select the middleware that gives them the “best performance.” Otherwise, no one would use Web Services or HTTP. The selection is based on many factors; performance just needs to be as good as required to meet the needs of the application. Robustness, scalability, support, risk, maintainability, the fitness of the programming model to the domain and total cost of ownership are typically more important to making the correct decision.

Which one is best?

If you’re still undecided, or simply want more information to help you in making a decision, you’re in luck! We have the perfect blog post for you: 6 Industrial IoT Communication Solutions – Which One’s for You? [Comparison].

If you think RTI Connext DDS may be what you need, head on over to our Getting Started homepage where you’ll find a virtual treasure trove of resources to get you up and running with Connext DDS.


Mission: score an interview with a Silicon Valley company

RTI’s engineering team is based in Sunnyvale, CA. We also have a smaller, yet rapidly growing team in Granada, Spain.

Between Sunnyvale and Granada lie 6,000 miles. It takes an entire day to travel from one to the other, and we need to keep a 9-hour time difference in mind when organizing team meetings.

There are also quite a few differences in how people write a resume (curriculum vitae) and approach getting a job.

This blog post is a summary of my recent presentation to the engineering students at the University of Granada: “How to get hired by a Silicon Valley Company.” Many of the tips below are not just beneficial to new engineering graduates in Spain, but also to new grads in the US.

Your first mission is to be invited for an initial interview.

Your preparation started yesterday

Before you approach the graduation stage and walk to the tune of Elgar’s Pomp and Circumstance March, there are quite a few things you can do, things that go beyond your regular classes and assignments.

Hiring managers pay attention to the type of internships and projects you worked on. You can show your love for programming through your open source contributions or by the cool demo you built at a hackathon. Your work speaks for itself if I can download and test drive your mobile application from the Apple App Store or Google Play Store.

Beyond the technical projects, it is important to learn and practice English. Our Granada team is designed as an extension of the team in Sunnyvale. As a result, engineers in Spain will work on a project together with engineers in California. Being able to express and defend your ideas well, in English, is important. Some of us learned English while watching Battlestar Galactica (the original) or Star Trek. We may even admit to picking up phrases watching the A-Team or Baywatch. Yes, those shows are a few decades old. Learn, read, write and, above all, find a way to speak English often. Go on an adventure through the Erasmus program, and practice English.

Lastly, start building your professional online profile:

  • Create a LinkedIn profile. Most often, employers will consult your LinkedIn profile even before your resume. Please use a picture suitable for work.
  • Create a personal website, with your resume, your projects, and how to contact you. Resumes and LinkedIn profiles are dull. Your website allows you to describe the projects in more depth, and include diagrams and even videos of your demos. Consider it the illustrated addendum to your resume.
  • Share your thoughts on a blog, or on websites such as Medium.
  • Contributions to GitHub or Stack Overflow speak for themselves. You can start by adding your school assignments to your GitHub profile. However, hiring managers will look for contributions beyond the things you had to do to get a good grade.
  • Publish your applications to the Apple App Store or Google Play Store. I love to download a candidate’s applications and try them out. It takes time, effort and even guts to create a working application and share it publicly.
  • Manage your social profile carefully. Future employers may look at your Twitter rants or Facebook antics.

Drop the Europass style of resume

There are plenty of websites that give you the basics of writing a good resume: keep it to 1–2 pages and follow a simple structure (objective, education, experience and projects, skills and qualifications, and finally your accomplishments).

Here are a few Do’s and Don’ts, specifically for international candidates:

  • Write your resume in English. Make sure there are no typos. Use online services, such as the Hemingway App or Google Translate, to improve your work.
  • Focus on what you learned or did on a project. Do not just list project names, leaving the reader to guess what you did.
  • Add hyperlinks where the resume screener can get more details. And make sure the hyperlinks work.
  • Add your grades in easy-to-understand terms, e.g., specify that you graduated first in class with 92%, rather than 6.1/7. I get confused when I see two non-correlated grades, e.g., 3.2/4 and 8.7/10.
  • Read other US resumes to learn the lingo. A hiring manager might check if you took a class in Data structures and algorithms; in your university, that may have been covered in Programación II.
  • Customize your resume for the job.
  • Do not create any cute resume designs. No background or design touches (unless you are applying for a design job).
  • Drop the Europass resume format; i.e., do not include a picture, date of birth or multiple contact addresses. For an engineering position, I do not care about your driver’s license information. Do not use the standardized table to indicate your proficiency in various languages. Rather than indicating your proficiency in German as B2, state “Conversational German.”
  • Do not use long lists of keywords, technologies or acronyms.
  • A pet peeve of mine: do not list Word or Excel unless you actually developed add-ons for those applications. Similarly, only list Windows if you developed against the Windows APIs.

A cover letter allows you to make a great first impression

Before you submit your resume, craft a cover letter. Although most companies do not require it, I recommend creating one, as it allows you to introduce yourself in your own words. It is your first impression.

A short and well-crafted introduction letter allows you to make a more personal connection. Your intro paragraph should list the job you are applying for and why you are excited about the job and the company. Next, describe in three points why you are a great fit. Describe your successes. Do not repeat your resume. Close by asking for the interview.

You probably read this blog post because you are ready to contact RTI for a job. Let me make it easy: go to the RTI Career Page.

Good luck.

Fog Computing: IT Compute Stacks meet Open Architecture Control

Fog computing is getting more popular and is breaking ground as a concept for deploying the Industrial IoT. Fog computing is defined by the OpenFog Consortium as “a system-level horizontal architecture that distributes resources and services of computing, storage, control and networking anywhere along the continuum from Cloud to Things.” Looking further into the definition, the purpose is to provide low-latency, near-edge data management and compute resources to support autonomy and contextually-aware, intelligent systems.

In particular, fog computing facilitates an open, internet architecture for peer-to-peer, scalable compute systems that supports edge analytics and local monitoring and control systems. It’s this latter application to control that I think is particularly interesting. Control systems have been in place for decades across multiple industries, and recently some of these industries have moved to create interoperable, open control architectures. In this blog, we’ll take a look at some existing open architecture frameworks that resemble fog computing, and at how fog computing and these framework initiatives could benefit from cross-pollination.

Open Architecture Control

Back in 2004, the Navy started developing its Navy Open Architecture. In an effort to reduce costs and increase speed and flexibility in system procurement, the DoD pushed industry to establish and use open architectures. The purpose was to make it simpler and cheaper to integrate systems by clearly defining the infrastructure software and electronics that “glue” the subsystems or systems together. The Navy selected DDS as their publish-subscribe standard for moving data in real time across their software backplane (Figure 1 below).


Figure 1. The Navy Open Architecture functional overview. Distribution and adaptation middleware, at the center, is for integrating distributed software applications.

Fast forward to now and we find the OpenFog Consortium’s reference architecture looks very much like a modern, IT-based version of what the Navy put together back in 2004 for open architecture control. Given that this Navy Open Architecture is deployed and running successfully across multiple ships, we can feel confident that fog computing as an architectural pattern makes sense for real-world systems. Also, we can likely benefit from looking at lessons learned in the development and deployment of the Navy’s architecture.

OpenFMB

The Open Field Message Bus (OpenFMB) is a more recent standard for an edge-intelligence, distributed-control framework for smart power-grid applications. It is being developed by the SGIP (Smart Grid Interoperability Panel). Energy utilities are looking for ways to create more efficient and resilient electricity-delivery systems that take advantage of clean energy and high-tech solutions.

Instead of large, centralized power plants burning fossil fuels or using nuclear power to drive spinning turbines and generators, Distributed Energy Resources (DERs) have emerged as greener, local (at the edge of the power grid) alternatives that do not have to transmit electricity over long distances. DERs are typically clean energy solutions (solar, wind, hydro, geothermal) that provide for local generation, storage and consumption of electricity. But DERs are intermittent and need to be managed and controlled locally, as opposed to centrally, which is all the current power grid supports.

Distributed intelligence and edge control is the solution. The OpenFMB framework is being deployed and proven in smart grid testbeds and field systems. Looking at the OpenFMB architecture (Figure 2 below), you can see the concept of a software integration bus clearly illustrated.


Figure 2. OpenFMB architecture integrates subsystems and applications through a central, real-time publish-subscribe bus.

Like the Navy Open Architecture, the OpenFMB distributed intelligence architecture looks very much like a fog computing environment. Since OpenFMB is still under development, I would bet that the OpenFog Consortium and the OpenFMB project team would benefit from collaborating.

OpenICE

Patient monitoring, particularly in intensive care units and emergency rooms, is a challenging process. There can be well over a dozen devices attached to a patient, and none of them interoperate. To integrate the data needed to make intelligent decisions about the welfare and safety of the patient, someone has to read the front end of each device and do “sensor fusion” in their head or verbally with another person.

OpenICE, the Open Source Integrated Clinical Environment, was created by the healthcare IT community to provide an open architecture framework that supports medical device interoperability and intelligent medical application development. OpenICE (Figure 3 below) provides a central databus to integrate software applications and medical devices.


Figure 3. The OpenICE distributed compute architecture, with DDS-based databus, facilitates medical device and software application integration.

Again, the OpenICE architecture supports distributed, local monitoring, integration and control, and looks very much like a fog architecture.

And now Open Process Automation

More recently, ExxonMobil and other process automation customers have gathered via the Open Process Automation Forum to begin defining an open-architecture process automation framework. If you look at the various refineries run by ExxonMobil, you’ll find distributed control systems from multiple vendors. Each major provider of process automation systems or distributed control systems has its own protocols, management interfaces and application development ecosystems.

In this walled-garden environment, integrating the latest and greatest subsystem, sensor or device is much more challenging. Integration costs are higher, device manufacturers have to support multiple protocols, and software application development has to be targeted at each ecosystem. The opportunity for the Open Process Automation Forum is to develop a single, IIoT-based architecture that will foster innovation and streamline integration.

Looking at the ExxonMobil diagram below, we find, again, an architecture centered around an integration bus, which they call a real-time service bus. The purpose is to provide an open-architecture software application and device integration bus.


Figure 4. ExxonMobil’s vision of an open process automation architecture centered around a real-time service bus.

Again, we see a very similar architecture to what is being developed in the IIoT as fog computing.

The Opportunity

Each of these open architecture initiatives is looking to apply modern, IIoT techniques, technologies and standards to their particular monitoring, analysis and control challenges. The benefits are fostering innovation with an open ecosystem and streamlining integration with an open architecture.

In each case, a central element of the architecture is a software integration bus (in many cases a DDS databus) that acts as the software backplane facilitating distributed control, monitoring and analysis. Each group is also addressing (or needs to address) the other aspects of fog computing: end-to-end security, system management and provisioning, distributed data management, and the rest of what a functional fog computing architecture requires. They have the opportunity to take advantage of the other capabilities of the Industrial IoT beyond control.

We have the opportunity to learn from each effort, cross-pollinate best practices and develop a common architecture that spans multiple industries and application domains. These architectures all seem to have very similar fog computing requirements to me.