Robots on Wheels – By 2021, This May Be the New Normal


If you drive a car, I suggest you read this post by Mark Fields, CEO of Ford Motor Company. Ford is staking their claim in the driverless car market, and it is a bold one. Ford expects to mass-produce driverless cars in 2021 for ride-hailing and ride-sharing services. And this isn’t an advanced autopilot or car with self-driving capabilities; this is a “No steering wheel. No gas pedals. No brake pedals. A driver will not be required.” fully driverless car.

Ford has decided that Level 3 automation, defined by the US Department of Transportation as semi-autonomous driving where the driver needs to take over with ‘reasonable notice,’ is not worth pursuing and instead they are going directly for Level 4, a fully driverless car. Ford isn’t the only company investing and preparing for this future, but they have made a very public and bold statement that should accelerate investment in this market. It should also signal to any car company that doesn’t already have an advanced program to develop self-driving cars: you may already be too late.

It is also interesting that Ford will provide self-driving cars for ride-sharing services first. This is a smart move because it will allow them to retain control of the vehicles (with ride-sharing partners) and they will be able to collect the massive amount of data needed to improve the safety and operation of these systems.

This will be a great social experiment. How will society adapt to having cars available on demand that can take us anywhere? How quickly can we cut the number of motor vehicle deaths (roughly 30,000 per year in the United States) by a factor of 10? The future is almost here, and it isn't hoverboards or self-drying clothes; it is ubiquitous transportation for everyone, available instantly at the touch of an app.

Click here to learn more about RTI and Autonomous Cars.

How to Integrate RTI Connext DDS Micro with Container-Based Applications [tutorial]


Container-based microservices are all the rage as software architects and engineers work to bring the flexibility and scalability of the cloud to the edge. To support real-time communication between those microservices with a guaranteed Quality of Service (QoS), DDS makes the perfect companion. This post covers the steps necessary to integrate RTI Connext DDS Micro with container-based applications. The steps required and benefits of the technology will be described in the context of a simple publisher/subscriber example.


Before we get started, a little background is helpful. At an abstract level, containers run individual, isolated applications on your machine. Each container provides operating-system-level isolation, making it possible to run multiple isolated Linux environments on one host. Containers serve as a lightweight alternative to full machine virtualization, which requires a hypervisor to manage multiple operating systems. Docker is the world's leading software containerization platform, so its name is often used interchangeably with container technology even though alternatives exist. Please ensure you have Docker installed and functioning correctly on your machine by following the getting started documentation.

One of the first considerations when creating an image is which base image to build on. For our purposes, we use Alpine Linux as the base image for the container. Alpine is a very lightweight Linux distribution, weighing in at only about 5 MB. Because it is so minimal, containers based on it build quickly while still including the most important functionality, which makes Alpine a good fit for RTI Connext DDS Micro. Connext DDS Micro ships with a large number of pre-built and tested libraries for various operating systems; however, no pre-built binaries are available for this configuration. Luckily, Connext DDS Micro is also available in source code form and can be built easily, so let's get started with that task.

Building the Example

Our first task is to build the Connext DDS Micro libraries and create a build image, or build-pack. The goal of the build image is to assist in building the runtime image from source code, third-party libraries, etc. Remember that images are the main component in building containers, and when working with Docker the blueprint is contained in a Dockerfile. Here is the Dockerfile for the build image:

FROM alpine:3.3

# Install Alpine packages to support build of RTI Micro DDS
RUN apk add --update alpine-sdk bash cmake linux-headers openjdk7-jre && rm -rf /var/cache/apk/*

# Extract RTI Micro DDS host tools and point to Alpine JRE for build
COPY RTI_Connext_Micro_Host-2.4.8.zip RTI_Connext_Micro_Host-2.4.8.zip
RUN unzip RTI_Connext_Micro_Host-2.4.8.zip
RUN rm -rf /rti_connext_micro.2.4.8/rtiddsgen/jre/i86Linux
RUN ln -s /usr/lib/jvm/default-jvm/jre /rti_connext_micro.2.4.8/rtiddsgen/jre/i86Linux

# Extract RTI Micro DDS source and patch for build
COPY RTI_Connext_Micro-2.4.8-source.zip RTI_Connext_Micro-2.4.8-source.zip
RUN unzip RTI_Connext_Micro-2.4.8-source.zip
COPY  patch/posixMutex.c rti_connext_micro.2.4.8/source/unix/src/osapi/posix/

# Build RTI Micro DDS
RUN mkdir /build \
    && cd /build \
    && cmake -DRTIMICRO_BUILD_LANG:STRING=C++ /rti_connext_micro.2.4.8/source/unix \
    && make \
    && cp -R /build/lib /rti_connext_micro.2.4.8 \
    && rm -rf /build 

This isn't that different from what you'd expect to see in a standard build script. The first line identifies the base image to be used; as previously mentioned, we'll use Alpine version 3.3, available from the public Docker registry. Next, we install some build dependencies using apk (the Alpine package manager). After that we unzip and patch the source, then use the traditional cmake and make commands to build the C++ libraries. To build an image from this Dockerfile, change to the directory containing the file and execute the build command.

$ docker build -t dds-base .

The -t option tags the image with a human-friendly name, rather than leaving it with a randomly generated identifier, so we can refer to it later. With the build image (build-pack) created, let's use it to create the publisher and subscriber images.

Creating the publisher and subscriber images is similar and accomplished in two steps. The first step uses the previously created build image (build-pack) to compile the executable, and the second takes the generated executable and packages it in a runtime image. This two-phased approach minimizes the size of the final container, since the build tools and intermediate artifacts are discarded when the runtime image is created. The two Dockerfiles used in creating the images are intuitively named Dockerfile.build and Dockerfile.run.

FROM dds-base:latest                                    (Dockerfile.build)

# Add publisher source code for build
COPY /src /src

# Compile sources to executable
RUN set -ex \
    && cd /src \
    && /rti_connext_micro.2.4.8/rtiddsgen/scripts/rtiddsgen -replace -language microC++ HelloWorld.idl \
    && g++ -Wall -DRTI_UNIX -DRTI_LINUX -DRTI_POSIX_THREADS -I. -I/rti_connext_micro.2.4.8/include -I/rti_connext_micro.2.4.8/include/rti_me *.cxx -L/rti_connext_micro.2.4.8/lib/i86Linux2.6gcc4.4.5/ -o HelloWorld_publisher -L/rti_connext_micro.2.4.8/lib/i86Linux2.6gcc4.4.5/ -lrti_me_cppz -lrti_me_rhsmz -lrti_me_whsmz -lrti_me_discdpdez -lrti_me_discdpdez -lrti_mez -ldl -lpthread -lrt \
    && chmod +x HelloWorld_publisher \
    && mv HelloWorld_publisher /bin

# copy the runtime dockerfile into the context
COPY Dockerfile.run Dockerfile

#export the dockerfile and executable as a tar stream
CMD tar -cf - Dockerfile /bin

FROM alpine:3.3                                          (Dockerfile.run)

# Include Standard C++ Library
RUN apk add --update libstdc++ && rm -rf /var/cache/apk/*

# Add service and application
COPY /bin/HelloWorld_publisher /bin/HelloWorld_publisher
RUN chmod a+x /bin/HelloWorld_publisher

# Start publisher using multicast for discovery
CMD ["/bin/HelloWorld_publisher", "-peer", "239.255.0.1"] 
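
As an aside, newer Docker releases (17.05 and later) can express this same build-then-package pattern in a single multi-stage Dockerfile. The sketch below is a hypothetical condensation of Dockerfile.build and Dockerfile.run, with the compile step abbreviated to a placeholder comment; image names and paths are taken from this post:

```dockerfile
# Hypothetical multi-stage equivalent of Dockerfile.build + Dockerfile.run
# (requires Docker 17.05+).
FROM dds-base:latest AS builder
COPY /src /src
# ... run rtiddsgen and g++ here, exactly as in Dockerfile.build ...

FROM alpine:3.3
RUN apk add --update libstdc++ && rm -rf /var/cache/apk/*
COPY --from=builder /src/HelloWorld_publisher /bin/HelloWorld_publisher
CMD ["/bin/HelloWorld_publisher", "-peer", "239.255.0.1"]
```

With this approach a single `docker build -t dds-publisher .` produces the small runtime image directly, and no intermediate tar stream is needed. The two-file approach shown in this post works on any Docker version.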

The two steps are accomplished through a series of docker build and run commands.  

$ docker build --force-rm -t dds-builder -f Dockerfile.build .

$ docker run --rm dds-builder | docker build --force-rm -t dds-publisher -

The docker build using the Dockerfile.build file copies in the source code and builds the binary. The run command then creates a container from that image; when the container executes, it packages the binary into a tar stream and sets up the resources for the runtime image build. The final docker build uses the Dockerfile.run file and the tar stream piped into it to install the C++ standard library runtime and copy the files to the appropriate locations.
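
The tar-over-stdout trick is plain Unix plumbing. Here is a minimal stand-alone sketch (with hypothetical file names) of how a tar stream produced by one process can be consumed by another through a pipe, just as docker run's output is piped into docker build above:

```shell
# Minimal sketch of streaming a tar archive through a pipe (hypothetical files).
mkdir -p /tmp/tardemo/bin
echo 'hello' > /tmp/tardemo/bin/app
cd /tmp/tardemo
# Producer writes a tar stream to stdout; consumer reads it from stdin
# (here it just lists the entries instead of extracting them).
tar -cf - bin | tar -tf -
```

In the docker pipeline, the consumer is `docker build -`, which treats the incoming tar stream as its build context.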

The subscriber follows the exact same approach.  Change into the subscriber directory and repeat the previous docker build and run commands using the subscriber rather than the publisher name when tagging the image.

$ docker build --force-rm -t dds-builder -f Dockerfile.build .

$ docker run --rm dds-builder | docker build --force-rm -t dds-subscriber -

The images have now been built, but before proceeding we should verify that by executing the docker images command. This command lists all the images available in your local registry; you should see the two images created in the previous steps.

$ docker images | grep dds
REPOSITORY        TAG                 IMAGE ID            CREATED             SIZE 
dds-subscriber    latest              0442ffc6ca02        2 minutes ago      8.098 MB 
dds-publisher     latest              15fa3c2ed441        4 minutes ago      8.084 MB

Running the Example

Now that we've built the images, we should take them for a test drive and ensure the example runs successfully. Open two terminal windows: one for the publisher and one for the subscriber. In one window, start the publisher using the docker run command.

$ docker run -t dds-publisher

If everything is successful you should see the “Hello World” text followed by a number that is incremented after every message is published.

Hello World (0) 
Hello World (1) 
Hello World (2) 
Hello World (3) 
Hello World (4) 
Hello World (5) 
Hello World (6) 
… 

With the publisher successfully running, we can start the subscriber and see whether DDS messages are received across the two containers over the Docker bridge network.

$ docker run -d -t dds-subscriber

The output should look similar to this, proving the subscriber is working:

Sample received     msg: Hello World (9)  
Sample received     msg: Hello World (10)  
Sample received     msg: Hello World (11)  
Sample received     msg: Hello World (12)  
Sample received     msg: Hello World (13)  
Sample received     msg: Hello World (14) 
… 

The count starts at the most recently published sample because the default QoS keeps no history for late-joining subscribers (durability is VOLATILE), so the subscriber only sees samples published after it starts. Both containers will continue to run until they are stopped manually or the container engine is brought down. Once you are past the container learning curve, the rest is just DDS the way you have (hopefully) used it in the past.
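
If you want a late-joining subscriber to also receive earlier samples, the relevant knobs are the DURABILITY and HISTORY QoS policies. As a hedged illustration only: Connext DDS Micro configures QoS in source code, and policy support varies by version, so the XML-profile style below is borrowed from Connext DDS Professional. Both writer and reader would request transient-local durability:

```xml
<!-- Sketch: DURABILITY/HISTORY settings so late joiners receive past samples. -->
<datawriter_qos>
    <durability><kind>TRANSIENT_LOCAL_DURABILITY_QOS</kind></durability>
    <history><kind>KEEP_LAST_HISTORY_QOS</kind><depth>10</depth></history>
</datawriter_qos>
<datareader_qos>
    <durability><kind>TRANSIENT_LOCAL_DURABILITY_QOS</kind></durability>
</datareader_qos>
```

With these settings the writer retains its last 10 samples and redelivers them to subscribers that join later.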

Next Steps

Linux containers, especially Docker, are driving improvements across the DevOps cycle. They provide a convenient packaging mechanism and promote a modular, microservice-based architecture. Using DDS as the data bus between container-based microservices gives those services an asynchronous publish/subscribe channel for the cases where traditional synchronous REST-based approaches are insufficient. Together they make a solid choice for Industrial Internet and Internet of Things software architectures. Take the next step and start using RTI Connext DDS Micro with your container-based architecture today.

Special thanks to Katelyn Schoedl, Research Intern, GE Global Research and Joel Markham, Senior Research Engineer, GE Global Research for authoring this guest blog post – THANK YOU! 

Secure Your IIoT System with the Cryptography Library of YOUR Choice!

By now, you might have read about the OMG DDS Security Specification which enhances the existing DDS standard with a security architecture and model. Version 1.0 of that specification is about to be finalized by the OMG. This means that a data-centric security model will now be natively integrated into the DDS standard – the only open communications standard that was designed to deliver the flexibility, reliability and speed necessary to build complex real-time applications, including many types of Industrial IoT systems.

One of the striking features introduced in the DDS Security specification is the notion of a Service Plugin Interface (SPI) architecture. The SPI mechanism allows users to customize the behavior and technologies that the DDS implementation uses for Information Assurance, without changes to the application code.

This blog post briefly explains the SPI architecture and demonstrates an easy way to leverage the RTI Connext DDS Secure built-in security plugins to have them execute selected cryptographic actions with the cryptography library of your choice.

DDS Secure Service Plugin Interfaces (SPIs)

The DDS Security Specification does not introduce any changes in the way applications interact with the DDS infrastructure. Instead, it defines five different plugin components that are leveraged by the infrastructure when needed. Each of those components provides a certain aspect of the Information Assurance functionality and has a standardized interface, as defined by the DDS Security Specification. This is what the name Service Plugin Interfaces (SPIs) refers to. The plugin architecture is illustrated in the image below.


As you can see, there are five SPIs that collectively provide Information Assurance to DDS systems. Their names and purposes are as follows:

- Authentication: Support verification of the identity of DDS DomainParticipants, including facilities to perform mutual authentication and to establish shared secrets.
- AccessControl: Make decisions on what protected DDS-related operations an authenticated DDS DomainParticipant is allowed to perform, including joining a DDS Domain and creating Topics, DataReaders and DataWriters.
- Cryptography: Support cryptographic operations, including encryption and decryption, hashing, digital signatures and message authentication codes.
- Logging: Support logging of security-related events for a DDS DomainParticipant.
- DataTagging: Provide the ability to add a security label or tag to data, for application-specific purposes.

The SPI architecture gives you a lot of freedom to customize the Information Assurance aspects of your secure DDS system. Every aspect listed above can be modified or re-implemented by supplying your own implementation of the SPIs. What you cannot change is when the DDS implementation invokes the methods of the SPIs; they are simply invoked when necessary. This is actually a good thing: it means the middleware continues to behave as prescribed in the specification, and you do not have to worry about breaking that.

In addition to the interfaces of the SPIs, the DDS Security specification provides a functional description of the so-called builtin plugins, described in detail in Chapter 9 of that document. Their primary purpose is to provide out-of-the-box interoperability between different implementations of DDS Security. With RTI Connext DDS Secure, the builtin plugins also happen to be an excellent starting point for customization.

Customizing the RTI Connext DDS Secure builtin plugins

The builtin security plugin binaries shipped with Connext DDS Secure can be used out of the box to build a DDS system that includes Information Assurance. All you need to do is properly configure the PropertyQosPolicy of your DomainParticipant, as explained in the specification, to point to the desired security artifacts: access control and governance configuration files, identity certificates, and so on.
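
As a sketch of what that configuration looks like: the property names below come from the DDS Security specification, while the file names are placeholders. In an XML QoS profile, the participant's PropertyQosPolicy might be populated like this:

```xml
<!-- Sketch: pointing a DomainParticipant at its security artifacts via
     PropertyQosPolicy. Property names follow the DDS Security specification;
     the file names are placeholders. -->
<participant_qos>
    <property>
        <value>
            <element><name>dds.sec.auth.identity_ca</name><value>file:identity_ca.pem</value></element>
            <element><name>dds.sec.auth.identity_certificate</name><value>file:participant_cert.pem</value></element>
            <element><name>dds.sec.auth.private_key</name><value>file:participant_key.pem</value></element>
            <element><name>dds.sec.access.permissions_ca</name><value>file:permissions_ca.pem</value></element>
            <element><name>dds.sec.access.governance</name><value>file:governance.p7s</value></element>
            <element><name>dds.sec.access.permissions</name><value>file:permissions.p7s</value></element>
        </value>
    </property>
</participant_qos>
```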

For those who wish to modify the behavior of the plugins, a set of buildable source code files is provided as well. However, for many situations, the Connext DDS Secure plugins offer a much easier option. Enter the OpenSSL EVP API…

Swapping out cryptographic algorithm implementations

The builtin Connext DDS Secure plugin source code makes use of the OpenSSL cryptographic library: not for its SSL or TLS functionality, but for its cryptographic function implementations and a number of helper classes used with them. If you are familiar with OpenSSL programming, you will know that it is good practice to use the so-called EVP interface. (In case you are wondering, like I did: EVP stands for EnVeloPe.) The Connext DDS Secure plugins invoke a subset of its functions, namely those related to the items listed below:

- Symmetric encryption and decryption: AES in Galois/Counter Mode (GCM) with 128-bit or 256-bit key sizes.
- Signing and verifying: RSA-PSS or ECDSA signature algorithms with SHA-256 as their hash function.
- Key exchange: Diffie-Hellman using modular arithmetic (DH) or elliptic curves (ECDH), with specified parameters.
- Message authentication codes: HMAC with SHA-256 as its hash function, and GMAC.
- Secure hash functions: SHA-256.
- Random number generation: any cryptographically strong random number generator.

The plugins shipped with the product use the OpenSSL implementations of these functions, as found in the standard OpenSSL EVP engine. However, they also support inserting your own engine. Your OpenSSL engine implementation could invoke other implementations of these cryptographic functions, for example leveraging the cryptographic library of your choice, maybe because you are required to use FIPS-compliant implementations. Some libraries already support an EVP engine, in which case you only have to configure the plugins. Otherwise, you will have to write a shim layer that invokes the right functions from your library.

Modifying the builtin plugins themselves

It might happen that the algorithms and mechanisms in the builtin plugins, outlined in the previous section, do not meet the needs of your project. In that case, you will have to resort to making modifications to the code of the actual plugins, the code that invokes the EVP functions. For example, you can make small modifications like selecting different algorithms than those defined by the specification, possibly using different key sizes or algorithm parameters. As another example, you can change from dynamic to static linking if you prefer.

It is possible to go beyond minor changes and, for example, introduce an entirely different identity authentication mechanism. Going down that path becomes complicated pretty quickly, and we strongly recommend contacting us to discuss your needs and plans. We look forward to engaging with you!