Building flexible manufacturing systems for Industrie 4.0

Many discussions on the Industrial Internet of Things (IIoT) describe how all kinds of sensors will be connected to the cloud, where the big data analytics beast will consume lots of data to provide you with efficiency optimizations. Huge cost savings are promised especially in the energy and transportation sectors. The medical industry, on the other hand, sees an opportunity to provide better and safer care by integrating patient monitoring devices and correlating their data, or simply by reducing the number of false alarms.

I am very excited about how the IIoT will transform how things are made. Additive manufacturing and 3D printing are already revolutionizing how machines produce individual items. A generic 3D printer, supplied with the right basic materials and guided by an electronic blueprint, can produce any type of component. NASA recently demonstrated how to create a new wrench in space, based upon a blueprint emailed to the International Space Station. GE is building critical jet engine parts using 3D printing. This technology is already here.

Hot off the 3-D printer is this jet engine bracket – GE Tumblr

The manufacturing floor will undergo dramatic changes. Manufacturing heavyweight Germany jumped on the opportunity and launched the Industrie 4.0 project to lead the fourth industrial revolution. In the US, the Smart Manufacturing Leadership Coalition is bringing together manufacturing companies and research groups to define the future of manufacturing.

Industrie 4.0 videos by Siemens and Bosch provide great examples of how changes in manufacturing will yield cost savings, as well as allow companies to deliver higher-quality products:

  • More flexible manufacturing processes and shop floor systems will make it less expensive to deliver customer-defined products (“batch size 1”): custom LeBron James shoes, anyone?
  • Predictive maintenance will save companies money by extending the lifetime of machines and reducing overall machine downtime.
  • Manufacturing will be better integrated with spare part replenishment through location and weight sensors, saving the time and money spent locating and reordering parts.
  • RFIDs and other technologies will provide a “product memory” to read what needs to be done to a part and record the progress: e.g., intelligent tightening tools record the location and amount of torque applied to a bolt on an airplane wing. By making the tools smarter, fewer mistakes will be made on the manufacturing floor. In the event of a product recall, it will also be easier to recall select products, rather than, say, all cars of a specific model built between 2007 and 2013.

Building a more flexible and integrated manufacturing system requires changes to the machines and the addition of new sensors. It also requires changes to processes and to the software providing the Manufacturing Service Bus (MSB).

A recent whitepaper, “Increasing the Adaptability of Manufacturing Systems by using Data-centric Communication,” makes the case that a data-centric communication paradigm is key to meeting the requirements of future adaptable manufacturing systems. We agree that a data-centric approach is key. However, obviously biased, we disagree with the authors on the solution.

RTI Connext, based upon the OMG Data Distribution Service (DDS) standard, meets the requirements to power next-generation manufacturing systems. With RTI Connext, systems are:

    1. Decoupled – Designed for real-time communication in distributed systems, RTI Connext decouples systems and components in space, time and flow. Systems do not need to know in advance which component is producing the data or which component will be receiving it. Historical data is available to late joiners. Sending and receiving data can be decoupled from the main control flow, avoiding blocking more critical tasks.
    2. Auto-Discovery – DDS discovers data sources and sinks through a lightweight automatic discovery mechanism. A keep-alive mechanism verifies that systems are still up and behaving as expected. Non-responsive components can be ignored.
    3. Resilient to failure – Because all information flows peer-to-peer, RTI Connext avoids any single point of failure. Furthermore, RTI Connext is smart enough to handle redundant producers, consumers, and even networks.
    4. Proven in heterogeneous systems – RTI Connext is deployed in many systems across many industries. It supports a wide variety of operating systems, including embedded real-time operating systems. It also supports many different transports; with its pluggable transport interface, new or legacy transports can be added.
    5. Standards-based – Because it is based on the DDS and Real-Time Publish-Subscribe interoperability protocol OMG standards, RTI Connext allows systems to interoperate in a standards-based way.
    6. Secure – RTI Connext secure DDS and secure transports provide full security of all dataflows. This includes authentication, encryption (confidentiality), integrity, and availability, along with non-repudiation of transactional information.
    7. Flexible – RTI Connext supports multiple communication paradigms: publish-subscribe (e.g., for alarms and events), request-reply (e.g., status inquiries), at-most-once, and exactly-once delivery. The quality of service can be defined on a per-topic basis: e.g., alarm data must be delivered reliably and at high priority, while a sensor signal may only require best-effort delivery at background priority (see the sketch after this list).
    8. Fast and scalable – Real-time performance is in our veins: we design for low latency, low jitter, high throughput and high scalability. Our customers require microsecond or millisecond latency, at scales from a handful of devices to hundreds and thousands.
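
To make the per-topic QoS idea in item 7 concrete, here is a minimal sketch, assuming the RTI Connext DDS Java API and an Alarm type generated by rtiddsgen from a hypothetical IDL definition (topic names and the priority value are illustrative, not a prescribed configuration):

    import com.rti.dds.domain.DomainParticipant;
    import com.rti.dds.domain.DomainParticipantFactory;
    import com.rti.dds.infrastructure.ReliabilityQosPolicyKind;
    import com.rti.dds.infrastructure.StatusKind;
    import com.rti.dds.publication.DataWriter;
    import com.rti.dds.publication.DataWriterQos;
    import com.rti.dds.publication.Publisher;
    import com.rti.dds.topic.Topic;

    public class AlarmQosSketch {
        public static void main(String[] args) {
            // One participant on DDS domain 0.
            DomainParticipant participant =
                DomainParticipantFactory.TheParticipantFactory.create_participant(
                    0, DomainParticipantFactory.PARTICIPANT_QOS_DEFAULT,
                    null, StatusKind.STATUS_MASK_NONE);

            // "Alarm" topic; AlarmTypeSupport would be generated by rtiddsgen.
            AlarmTypeSupport.register_type(participant,
                AlarmTypeSupport.get_type_name());
            Topic alarmTopic = participant.create_topic(
                "Alarm", AlarmTypeSupport.get_type_name(),
                DomainParticipant.TOPIC_QOS_DEFAULT, null,
                StatusKind.STATUS_MASK_NONE);

            Publisher publisher = participant.create_publisher(
                DomainParticipant.PUBLISHER_QOS_DEFAULT, null,
                StatusKind.STATUS_MASK_NONE);

            // Alarms: reliable delivery at elevated transport priority. A
            // sensor-signal writer would simply keep the best-effort defaults.
            DataWriterQos alarmQos = new DataWriterQos();
            publisher.get_default_datawriter_qos(alarmQos);
            alarmQos.reliability.kind =
                ReliabilityQosPolicyKind.RELIABLE_RELIABILITY_QOS;
            alarmQos.transport_priority.value = 10;

            DataWriter alarmWriter = publisher.create_datawriter(
                alarmTopic, alarmQos, null, StatusKind.STATUS_MASK_NONE);
        }
    }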

Does RTI Connext provide all the pieces today? Not quite yet. Integration with legacy protocols and systems will be key. One option to bridge to the existing installed base is through the RTI Connext Routing Service, which provides a mechanism to build adapters and transformations between various existing protocols.

The changes happening on the manufacturing floor are exciting, and also demanding. A proven real-time communication protocol for distributed systems, like RTI Connext, will be key. Go check out why you should use RTI Connext for your next-generation manufacturing system.

From College Students to Entrepreneurs

This post was written by the winners of the Tech Challenge: Israel Blancas Álvarez, Ignacio Cara Martín, Nicolás Guerrero García, and Benito Palacios Sánchez.

A year ago, we started our work for the IV ETSIIT Technical Challenge (video). Who are we? Well, we are four students studying Electrical Engineering and Computer Science at the University of Granada in Spain.


Figure 1. Prometheus Team: Nicolás, Israel, Benito, and Ignacio.

Our team, Prometheus, won the Tech Challenge sponsored by RTI. For this challenge, teams of four or five students had to create a product to solve a challenge proposed by an external company. The theme for this year’s challenge was “Multi-Agent Video Distributed System.”

We joined this challenge for practical experience. One year after the Tech Challenge, we are all still students, but we have now researched a business opportunity, designed the competitive product Locaviewer, developed a strategy to sell it in the market and created a working prototype, in addition to the course work required for our degrees.

Locaviewer

Most parents with children in a nursery school worry about their children’s health and progress. Our product, Locaviewer, lets parents track and watch their child in real time. As part of our marketing plan, we created a promotional video. Our code has been released under the MIT license on GitHub.

Team Organization and Schedule

The project took us approximately 250 hours to complete. Each week we met for at least 4 hours, except during the last month, when we spent 20 hours/week on the project. To be more efficient, we divided into two teams. Two people worked on the indoor Bluetooth location algorithm. The other two focused on an application to capture, encode/decode a video stream, and share it using RTI Connext DDS.

Location Algorithm

The first and most important step of our solution was to determine the location of the children inside the nursery school. Each child wore a wristband with a Bluetooth device (the sensor), which continually reported the signal power received by Bluetooth devices (dongles) placed in the room walls. This Received Signal Strength Indication (RSSI) value is usually measured in decibels (dB). We determined the relationship between RSSI and distance empirically.
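
The post does not give the exact relationship the team measured. As an illustration of the kind of model involved, a common starting point is the log-distance path-loss model, in which estimated distance grows exponentially as RSSI falls; the two constants below are hypothetical calibration values, not the team’s:

    public class RssiToDistance {
        static final double TX_POWER_DBM = -59.0;     // RSSI measured at 1 m
        static final double PATH_LOSS_EXPONENT = 2.5; // ~2 free space, 2-4 indoors

        /** Estimated distance in meters for an RSSI reading in dBm. */
        static double distanceMeters(double rssiDbm) {
            return Math.pow(10.0,
                (TX_POWER_DBM - rssiDbm) / (10.0 * PATH_LOSS_EXPONENT));
        }

        public static void main(String[] args) {
            System.out.println(distanceMeters(-59.0)); // ~1.0 m
            System.out.println(distanceMeters(-75.0)); // ~4.4 m
        }
    }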

Figure 2. Empirically measuring Bluetooth signals by angle and distance.

Figure 3. Locaviewer wearable.

The RSSI values were transmitted to a minicomputer (Raspberry Pi or MK802 III) that ran a triangulation algorithm to identify the child’s location. Since we knew the position of each camera, once we had determined the position of a child we knew which cameras were recording that child and could select the best one.

Figure 4. Indoor triangulation.
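
The team’s actual triangulation script was written in MATLAB/Octave (mentioned later in this post) and is not reproduced here. On a flat floor the core computation can be sketched in Java: given three dongles at known positions and RSSI-derived distance estimates, subtracting the circle equations pairwise leaves a 2x2 linear system, solvable with Cramer’s rule (illustrative code, not the team’s script):

    public final class Trilateration {
        /** Returns {x, y} given anchor positions (xs[i], ys[i]) and
         *  estimated distances d[i]; the anchors must not be collinear. */
        static double[] locate(double[] xs, double[] ys, double[] d) {
            double a11 = 2 * (xs[1] - xs[0]), a12 = 2 * (ys[1] - ys[0]);
            double a21 = 2 * (xs[2] - xs[0]), a22 = 2 * (ys[2] - ys[0]);
            double b1 = d[0] * d[0] - d[1] * d[1]
                      + xs[1] * xs[1] - xs[0] * xs[0]
                      + ys[1] * ys[1] - ys[0] * ys[0];
            double b2 = d[0] * d[0] - d[2] * d[2]
                      + xs[2] * xs[2] - xs[0] * xs[0]
                      + ys[2] * ys[2] - ys[0] * ys[0];
            double det = a11 * a22 - a12 * a21; // near 0 if anchors are collinear
            return new double[] { (b1 * a22 - b2 * a12) / det,
                                  (a11 * b2 - a21 * b1) / det };
        }
    }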

Video Recording Application

To record, encode, decode and visualize video we used GStreamer for Java. We tried other libraries, such as vlcj, but they didn’t support the Raspberry Pi or satisfy the real-time constraints of our system. After some research, we discovered GStreamer, which worked on the Raspberry Pi and let us easily get the encoded video buffer in real time (using the AppSink and AppSrc elements). This allowed us to encapsulate the video and send it to a DDS topic. We worked on this for several months, even implementing a temporary workaround with HTTP streaming using vlcj until we settled on our final approach.

We used the VP8 (WebM) video encoder. Since the Java wrapper only works with GStreamer version 0.10, we could not fully optimize it and had to reduce the video dimensions. Our tests used a Raspberry Pi, but we plan to use an MK802 III device in the final implementation because it has the same price but more processing power. The final encoding configuration was:

Figure 5. GStreamer pipeline to record, encode and get video.

We used the following Java code to create VP8 encoder elements.

    import org.gstreamer.Caps;
    import org.gstreamer.Element;
    import org.gstreamer.ElementFactory;

    // Create and tune the VP8 encoder element.
    Element codec = ElementFactory.make("vp8enc", null);
    codec.set("threads", 5);                 // encoder worker threads
    codec.set("max-keyframe-distance", 20);  // keyframe at least every 20 frames
    codec.set("speed", 5);                   // favor encoding speed over compression

    // Constrain the encoder output to VP8 profile 2.
    Element capsDst = ElementFactory.make("capsfilter", null);
    capsDst.setCaps(Caps.fromString("video/x-vp8, profile=(string)2"));

On the client side, we used the following configuration:

Figure 6. GStreamer pipeline to set, decode and play video.

We used the following Java code to create VP8 decoder elements.

    // Caps describing the incoming VP8 stream (must match the sender side).
    String caps = "video/x-vp8, width=(int)320, height=(int)240, framerate=15/1";
    Element capsSrc = ElementFactory.make("capsfilter", null);
    capsSrc.setCaps(Caps.fromString(caps));

    // Buffer the stream, decode it and convert to a displayable colorspace.
    Element queue = ElementFactory.make("queue2", null);
    Element codec = ElementFactory.make("vp8dec", null);
    Element convert = ElementFactory.make("ffmpegcolorspace", null);

We also tried JPEG encoding, but this was not feasible for real-time use due to the larger size and greater number of packets.

DDS Architecture

The publish-subscribe approach was key to our solution. It allowed us to share data between many clients without worrying about network sockets or connections. We just needed to specify what kind of data to send and receive. We created a wrapper library, DDStheus, to abstract DDS usage in our system.

Figure 7. General DDS architecture of the system.

Our final solution was composed of six programs that shared three topics. We used different programming languages:

  1. Python to work at low level (HCI) with Bluetooth devices
  2. MATLAB/Octave for the triangulation script
  3. Java to work with RTI Connext DDS and graphical user interfaces

We needed to know all the RSSI values in a room, so we created a script to configure the Bluetooth dongles and collect the RSSI information. These values were sent to a Java program over a simple socket connection on the same machine. The Java application published the data on the Sensor Data topic: the child ID (the sensor’s Bluetooth MAC), the Bluetooth dongle ID and position, the current room (as a key to filter by room), the RSSI value and an expiration time.

Figure 8. Sensors program flowchart.

After the cameras recorded and encoded the video, the Java program Gava sent it via the Video Data topic. It sent the camera ID as a key value, so the stream could be filtered using a ContentFilteredTopic, along with the camera position, room, encoded frame and codec info.

Furthermore, the application put the camera ID, room and camera position in the USER_DATA QoS value of each video publisher. The triangulation minicomputer could then get all the camera info in a room just by discovering publishers. It could also detect new and broken cameras in real time and update the location script to improve the camera selection algorithm.
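
As a sketch of what the camera-ID filtering might look like on the subscribing side (assuming the RTI Connext Java API, a participant and Video Data topic created during setup, and an illustrative camera_id field name):

    import com.rti.dds.infrastructure.StringSeq;
    import com.rti.dds.topic.ContentFilteredTopic;

    // Receive only the selected camera's frames. Because the filter is
    // propagated to the DataWriters, non-matching frames are dropped at
    // the source instead of crossing the network.
    StringSeq params = new StringSeq();
    params.add("'cam42'"); // hypothetical camera ID
    ContentFilteredTopic oneCamera = participant.create_contentfilteredtopic(
        "OneCameraVideo",  // name of the filtered topic
        videoTopic,        // the underlying Video Data topic
        "camera_id = %0",  // SQL-like expression over the sample fields
        params);
    // A DataReader created on oneCamera sees matching samples only.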

Figure 9. Video program flowchart.

In the last step, we processed the data and wrote the result to the Child Data topic. This was done by the room server (implemented on a Raspberry Pi or MK802 III), which triangulated the child’s location and selected the appropriate camera. It filtered only the sensors in the current room and gathered all video publisher info in that room. The data was sent to an Octave script, which returned the child’s location and the best camera ID. The information sent to the cloud on the Child Data topic included child ID, video quality, camera ID, child location and room ID. For efficiency, the child ID and quality were sent as keys that could be filtered against or used for sorting video.

To optimize the application, the room server called the triangulation script only if there was a subscriber asking for that child. We determined this using subscriber discovery and looking at the ContentFilteredTopic filter parameters.

Finally, we implemented a redundancy mechanism to handle room server failures. Each minicomputer in the room created a publisher and set its USER_DATA value to the room and a default (unique) priority ID. If a minicomputer detected that it had the lowest ID in its room, it started the server application and acted as the server until a new minicomputer with a lower ID appeared.
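
Stripped of the DDS plumbing, the election itself reduces to a lowest-ID test over the priority IDs discovered in the room. A plain-Java sketch follows (in the real system the IDs arrive in the USER_DATA of discovered publishers, and the check re-runs whenever discovery adds or removes a minicomputer):

    import java.util.Collections;
    import java.util.Set;

    public class RoomServerElection {
        /** True while this minicomputer holds the lowest priority ID in
         *  its room and should therefore run the room server. */
        static boolean shouldActAsServer(int myId, Set<Integer> roomIds) {
            return roomIds.isEmpty() || myId <= Collections.min(roomIds);
        }
    }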

Figure 10. Room server program flowchart.

User Applications

We developed two end-user applications. The first one will be used by the parents to see their children in the nursery school. The second program will be used by nursery employees to see all the cameras in real time, manage parent access (add and remove) and automatically handle attendance control.

Figure 11. Parent client application.

Figure 12. Security camera program for the nursery.

Final Thoughts

We had to cope with two big problems in the challenge:

  1. Getting the RSSI values: we bought very low quality, low cost Bluetooth devices (around $5 each). The signal had a lot of errors and noise, so we had to develop an algorithm to smooth the values, reducing the error from 3 meters to 0.5 meters (a sketch of one simple smoothing approach follows this list). We could not find any Java library for low-level operations with Bluetooth devices (we finally used pybluez), so we had to make our Python and Java programs communicate.
  2. Video encoding: it was not easy to find a library that allowed us to get the encoded video buffer. It was even more difficult to optimize the elements in the GStreamer 0.10 pipeline to work at maximum performance on the Raspberry Pi. With the final configuration, the image delay is around 3-5 seconds. For better performance, we plan to replace the Raspberry Pi with a similarly priced MK802 III device, which includes Wi-Fi and a dual-core Cortex-A9 processor.
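
The post does not show the team’s actual error-reduction algorithm. As one simple illustration of the kind of filtering involved in problem 1, an exponential moving average can absorb much of the sample-to-sample RSSI noise (the smoothing factor alpha is a hypothetical tuning knob):

    public class RssiSmoother {
        private final double alpha; // 0 < alpha <= 1; lower = smoother
        private Double smoothed;    // null until the first sample arrives

        public RssiSmoother(double alpha) { this.alpha = alpha; }

        /** Feed one raw RSSI reading; returns the smoothed estimate. */
        public double update(double rssiDbm) {
            smoothed = (smoothed == null)
                ? rssiDbm
                : alpha * rssiDbm + (1 - alpha) * smoothed;
            return smoothed;
        }
    }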

RTI Connext DDS saved us a lot of work by implementing networking, data serialization and quality of service mechanisms. We thank our engineering school and RTI for giving us the opportunity and resources to successfully address this business challenge.

Connecting Your DDS Apps to Web Services using Javascript and Node.js

One of the benefits of the Industrial Internet for manufacturers and OEMs is access to live data for more and more applications. This data availability enables performance analytics, more robust business metrics, predictive maintenance analysis and various other capabilities for end users and equipment manufacturers alike. In the enterprise, Web Services dominate typical access to data. On the manufacturing floor or in deployed systems, real-time data delivery is a requirement. DDS is one of the primary infrastructures used to share large amounts of data with very low latency.

Typical solutions for data analysis and visualization rely on protocol libraries and heavyweight client applications to receive and process the data. Today, in the enterprise, the distribution of these proprietary client applications is cumbersome and time consuming. This reduces the utilization an organization can get from available live data. Thus, there is a need to enable applications to leverage technologies that can be found on everyone’s desk. Just about every computer, tablet or handheld available today has a browser: Internet Explorer, Firefox, Safari, Chrome and others. Through standard web technologies like Javascript, jQuery, Node.js and WebSockets, it is quite trivial to interface browser-based user applications with live data. The problem, however, is how to connect live DDS data to these thin-client, browser-based applications. As we know, access to DDS data requires DDS libraries to interact with the DDS RTPS protocol.

To solve this problem, we have created a lightweight connector technology that enables DDS data to be accessed from Javascript in Node.js server-side applications. This solution provides the ability to have a single server process interact bidirectionally with DDS data while serving up HTML pages that use standard WebSockets to communicate with the server for data. Take a look at the following diagram showing how the Javascript connector enables the flow of data between DDS and browser-based applications.


Taking this a step further, now that there is an integration with Node.js, we have the ability to create usable object nodes in Node-RED. Node-RED is a visual programming environment for Node.js applications. This development environment, which itself runs on Node.js, can be used from any browser. Node-RED provides many building blocks for typical items used in Industrial Internet applications, such as WebSockets, HTTP, custom TCP, custom UDP, MQTT, custom function blocks, email, Twitter, XMPP and many others. As you can see in the following diagram, we are using several of these blocks to subscribe to DDS data (Square, ShapeType) and publish it on MQTT, then, with a separate flow, subscribe to the MQTT server and output the data to a debug window.


At RTI, we are constantly working on creating components that are useful for your Industrial Internet applications. This latest solution gives developers quick and easy access to live data in many different forms, depending on their specific application requirements. The ability to present live data in a thin-client browser application provides a great deal of flexibility for creating and distributing applications that better leverage data already available in the system.

Information regarding Node-RED can be found at http://www.nodered.org. If you are interested in connecting your live data to backend enterprise applications and thin-client browser applications, please contact us via email or leave a comment in the comments section.

A Noteworthy and Newsworthy 2014

2014 was an especially busy year here at RTI. We continued our dedication to innovation in the Industrial Internet of Things (IIoT), celebrated wins with some great new customers, hosted events all over the world, and more. A lot more, really!

To cap off the year I wanted to share with you some of the great things that happened over the past 12 months and so – drumroll, please! – here are some of the most noteworthy and newsworthy things that happened this year at RTI:

London Connext Conference

This October at the packed London Connext Conference, I heard more than 20 customers share how they used our products in building solutions to their hard problems. Coding tips, illustrative guidance, professional peer conversations and live demos during the event cemented relationships between established and budding DDS experts. It was inspiring and exciting, and I can’t wait for next year’s conference!

To see what all of the excitement was about, head on over to community.rti.com for an agenda summary and selected presentations.

RTI In the News

Forbes Reports RTI as the #1 Influencer for The Industrial Internet of Things

For years, we’ve worked with our customers and partners to develop leading solutions for massive and complex problems in the healthcare, energy and technology markets. In recent months, we’ve seen big rewards for our hard work.

In July 2014, Forbes published a study conducted by The Data Journalism Group at Appinions that identified the top influential companies for the Industrial Internet of Things. RTI was rated number 1. Check out the Forbes article and the Appinions IoT Influence Study (Industrial IoT rankings on page 9).

Obviously, we’re  thrilled to be recognized alongside companies such as GE, Cisco, Google and Apple for our leadership and progress in the market.

CIO Review Names RTI as One of Top 50 IoT Companies

In December, CIO Review named RTI a most promising Internet of Things company, emphasizing the performance, reliability and security of Connext DDS as a real-time communications platform for smart integrated systems (PDF: http://www.rti.com/docs/CIOReview-2014.pdf).

The Industrial Internet Consortium

In April, RTI joined the Industrial Internet Consortium (IIC), which has since grown to 100+ industry-leading companies. Its goal is to accelerate the development of connected industrial applications and to provide a common blueprint for the Industrial Internet.

RTI CEO Stan Schneider was elected to the IIC steering committee for a 1-year term as the representative for small industry. The IIC offers real market opportunities for motivated and innovative smaller businesses to work closely with leading companies and government and advance the future of industry.

Working With Our Customers (who are doing amazing things!)

In 2014 we announced several important customers:

  • GE Healthcare is using the RTI Connext DDS platform in smart systems of connected medical devices in hospitals and to enable faster integration of large medical instruments.
  • LocalGrid Technologies is using Connext DDS and Connext DDS Secure to deliver high performance scalability to the electric grid. LocalGrid is modernizing the electric grid to support distributed generation of renewable energy. Bob Leigh, CEO of LocalGrid, says “The LocalGrid DataFabric™ grid architecture enables [distributed generation] by securely supporting field deployment of analytics and control applications.”
  • The U.S. Army and General Dynamics Advanced Information Systems selected RTI technology to be integrated into a prototype next generation open architecture (OA) for Unmanned Aircraft Systems (UAS) Ground Control Stations.
  • The European Space Agency built an advanced telerobotics development platform on RTI Connext DDS.
  • The MEVION S250, the world’s most advanced compact proton beam radiotherapy system, was recently used – for the first time – to treat a cancer patient. It uses Connext DDS for data distribution and control systems. Get the details.

We are thankful to work with companies who are leaders in their markets, and we’re excited for the projects we have in progress.

Our Top SlideShare Content

Top Content can mean many things – most clicks, views, likes, shares, etc. These 2 SlideShares rocked in 2014 and we know this because you – our users – told us (and the numbers back you up)!

Understanding the Internet of Things Protocols

Comparison of MQTT and DDS as M2M Protocols for the Internet of Things

New Releases

2014 saw the release of Connext DDS 5.1 and Connext DDS Secure, two major launches with usability enhancements, new capabilities and an eye towards future customer development needs.

  • Connext DDS Secure. The OMG DDS Security standard was approved in April, and the new RTI Connext DDS Secure package – the world’s first complete DDS security solution – is available as a preview release. Read more about Connext DDS Secure at www.rti.com/gosecure
  • Connext DDS 5.1. Focusing on greatly increased usability, scalability and performance, Connext DDS 5.1 has been available since February 11. Improvements abound, with over 60 new features and enhancements, as well as support for over 20 new platforms. Read more and download Connext DDS 5.1
  • VS2013 Connext DDS Downloads. Current customers can download Connext DDS Professional libraries for Microsoft Visual Studio 2013 on both 32-bit and 64-bit Intel architectures. Get the VS2013 files now
  • Performance Test Update. Connext DDS 5.1 contains powerful new auto-tuning capabilities that dynamically balance latency and throughput as system conditions such as update rates and network capacity change. With the “-enableAutoThrottle” and “-enableTurboMode” command line options, you can see how these new features perform on your hardware and network. Try out Performance Test now
  • DDS Toolkit for LabVIEW. Our DDS Toolkit for LabVIEW is completely free from the NI LabVIEW Tools Network. Sara Granados, an RTI software engineer, explains just how easy it is to use, and how valuable it can be. Watch the DDS Toolkit for LabVIEW video
  • Connext DDS Micro and Connext DDS Cert.

The Blog

In our top blog post of 2014, Alex Campos, Senior Software Engineer at RTI, explained how to Create a P2P Application with Fewer than 35 Lines of C++11 Code.

Looking Ahead

This year has been chock full of product advancements, leading customer announcements and top accolades. We are well-positioned for 2015 as we ramp up our efforts with our employees, partners, the IIC, the OMG and our customers. Together we’re all creating a more intelligent IIoT that promises to continue to positively impact the world.

Will Work for User Feedback!

We just attended a Connext DDS Users Group meeting in Chengdu, China on November 14. Most of the conference was conducted in Mandarin Chinese, and neither Edwin (RTI VP of Sales) nor I spoke the language.

However, Vision Microsystems, our host and RTI’s Chinese distributor, had arranged an excellent translator, who helped us keep up with about 75% of the information. This was their second such meeting, with 70 users representing over 30 Connext DDS projects.

Howard Wang taking the stage this November in China at our latest DDS Users Group event.

Edwin and I took part of the day to explain how RTI was involved with the Industrial Internet of Things, and presented a technical roadmap of the Connext 5.2 release coming in early 2015.

During the meeting, several long-time Connext DDS developers explained how they chose DDS for their projects and the problems they were solving, and pointed out issues they had with DDS. These presenters were pragmatic and direct, pointing out situations where DDS was not the best solution for them. Both Edwin and I were glad to hear these points of view. Neither of us, nor RTI, advocates DDS as the best technology for all applications involving network communications and systems integration.


Edwin, our VP of Sales, engaging the attendees.

We learned that many use cases incorporating Connext DDS were those in which DDS was superior to legacy or competing solutions. Users were discovering better performance and scalability, and reducing user code for implementing many capabilities and features — particularly those that were inherently part of our stable, thoroughly-tested, commercial-off-the-shelf technology.

Overall, the testimonials were gratifying for us, and valuable for others in the audience.

This type of feedback helps assure our potential customers that Connext DDS solves real problems in real systems. RTI generally focuses on problems that are difficult to solve or would otherwise take a great deal of effort to solve with DIY solutions. For Edwin and me, the Chengdu meeting was a chance to understand how users truly employ our products and to validate their experiences as they discovered where DDS wasn’t a perfect fit.

Of course, in the future, and with more customer feedback, DDS is quite capable of becoming a better fit for many more problems than it solves today. You can help make that happen by sharing your experiences with Connext DDS. Post them to our community forums, or send an email to info@rti.com.

An IIoT Sensor to Cloud Gateway Solution

One of the primary use cases for the IIoT (Industrial Internet of Things) is to collect sensor data and deliver it to an enterprise cloud for enhanced real-time visibility into remote operational systems. This is very important for applications such as Oil & Gas, manufacturing plant production monitoring, healthcare patient monitoring and power substation monitoring. With advances in network infrastructure and the promise of higher bandwidth WAN (Wide Area Network) connections, it becomes possible to pull raw sensor data across the WAN to a backend enterprise cloud where data processing and predictive maintenance solutions can be implemented and monitored. Enabling this type of architecture gives organizations great agility to respond and react to changing conditions in their deployed systems.

There are a few issues that arise when trying to achieve this type of architecture. The primary issue is the sheer volume of data that must be sent from the deployed system back to the enterprise. Sending individual network packets from each sensor is not feasible. In addition, the amount of data from each sensor is constant whether or not it’s actually required on the enterprise side for evaluation. As the diagram below illustrates, a traditional architecture does not solve any of the problems that exist today for getting data from the sensors to the enterprise. Bottlenecks will appear in the WAN because the number of data packets that must be handled is large and constant. As soon as bursts of WAN traffic occur, it becomes increasingly difficult and unpredictable for the enterprise to gather data for processing.

Trying to accomplish Sensor to Cloud with traditional solutions is not feasible

To enable a feasible solution today for getting sensor data back to the enterprise, two key data handling pieces must be put in place. First, the number of network packets must be reduced. Second, there must be some intelligence in the data path that allows the enterprise side to declare what data it would like to access, so that irrelevant data is not sent. This capability must also be mutable, so the enterprise side can adjust the set of data it accesses as conditions change.

RTI provides a bridging capability called Routing Service for exactly these applications. Routing Service is a logical bridging solution that lets administrators configure topic-based routes that collect data from publishers on the input side and send it to any subscribers on the output side. And because Routing Service is based on DDS, it enables individualized Quality of Service (QoS) settings on either side of a topic route.

One QoS available in Routing Service is batching of data. This capability provides an opportunity to batch, or coalesce, small pieces of data into larger network packets for more efficient transfer over the WAN. Batching has configuration controls that limit the size of the data packet and the amount of time between outbound packets. This gives the user complete control over the bandwidth and latency profiles of the sensor data. The net result is a configurable bandwidth-shaping solution that dramatically reduces the number of packets sent over the WAN, by a factor of 10x or more.
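
In a deployment this policy lives in the Routing Service XML configuration, but the same batch QoS can be sketched programmatically on any Connext DataWriter (RTI Java API; the size and delay values are illustrative, and "publisher" is an existing DDS Publisher created during setup):

    import com.rti.dds.infrastructure.Duration_t;
    import com.rti.dds.publication.DataWriterQos;

    // Coalesce many small sensor samples into one network packet: flush a
    // batch once it holds 30 KB of data or after 100 ms, whichever comes first.
    DataWriterQos qos = new DataWriterQos();
    publisher.get_default_datawriter_qos(qos);
    qos.batch.enable = true;
    qos.batch.max_data_bytes = 30 * 1024;
    qos.batch.max_flush_delay = new Duration_t(0, 100000000); // 100 ms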

The second capability that Routing Service provides is the ability for the receive side of a topic route to express a data filter that limits what data is actually sent through the bridge. For example, if the sensors on the deployed system were temperature sensors and the receive side was interested in temperature values > 100 degrees F, then this filter could be configured on the receiving side of the bridge and Routing Service would propagate it to the original sending side. Data is therefore filtered at the original writer, which limits how many data packets must be sent over the WAN. This filtering capability is built in to DDS, and Routing Service enables its use across any topic route in place.

The following diagram shows where Routing Service would be used in such an architecture.

With the use of DDS Filtering and Logical based Routing that incorporates Batching, Sensor to Cloud is possible.

This solution presents a very high level architecture with unique benefits to solve the problem of getting sensor data back to the cloud without the need to process raw data at the deployed site. Please contact us via email to request more information regarding RTI Connext DDS and Routing Service capabilities and using these products to address your specific requirements and challenges.


Baker, bake me a cake!

(SHNS photo by Tom Wallace / Minneapolis Star Tribune)

Fall is typically when US students in their final year at university start looking for a job for when they graduate in May. (In Spain, on the other hand, students start looking for a job closer to their graduation date.) For some students it will be the first time they apply for a job. You can tell by the flip flops at the on-campus interview session, the lame “Sorry I am late, I just rolled out of bed” excuse, or the complete lack of preparation about the company (“Who are you again? What does your company do?”). Luckily, these examples are more and more the exception. Many students prepare well and often ask in advance what to expect from the initial interview.

What are we looking for?

Hire better than yourself. In the Macintosh Division, we had a saying, “A players hire A players; B players hire C players”–meaning that great people hire great people. On the other hand, mediocre people hire candidates who are not as good as they are, so they can feel superior to them. (If you start down this slippery slope, you’ll soon end up with Z players; this is called The Bozo Explosion. It is followed by The Layoff.) I have come to believe that we were wrong–A players hire A+ players, not merely A players. It takes self-confidence and self-awareness, but it’s the only way to build a great team. – Guy Kawasaki

It is true that building real-time communication software for the Industrial Internet of Things requires a special kind of software engineer. We could list the detailed low-level or programming skills such an engineer should master, but most often we’d end up hiring the wrong person. The qualities that make a trained engineer successful at RTI are at a different level:

  1. Is the person generally brilliant and talented? Even a talented aerospace or mechanical engineer can excel at building software at RTI.
  2. Is the person curious and eager to learn?
  3. Does the person have high integrity?
  4. Is the person a great communicator?
  5. Will the engineer fit in with the RTI culture? The list of culture attributes is not short: thorough, values quality, flexible, gives and takes feedback, willing to follow process, friendly, etc.

How can you test these skills?

When looking for a wedding cake baker, don’t ask how they would handle a particular situation or what they would do if the cake color is off. Ask them to make a sample cake and taste it.

We’ve found that the best method to evaluate a candidate is to have them work on a project. Internship projects are a great way to work with new grads. We work hard on making the internship program a wonderful experience for the engineers. When we asked our past interns for comments on a recent blog post on creating a great internship culture, the feedback was positively reassuring we are doing the right things.  Having currently employed engineers join an internship program is unlikely. Having a 45-day bootcamp without a guarantee of a full-time position sounds an interesting idea, but one we have not tried. As described further on, for already employed candidates, we typically provide a take home project. Yes, this sounds like a lot of work. With a few hours of homework, and a follow up discussion, you can learn a lot of how the candidate will work out every day.

What to expect from the recruitment process?

Resume screening is almost always done by the hiring manager. We’ve seen all kinds of resumes: one page to small books, in all kinds of languages, with and without cover letters. Focus on the cakes you baked and the experience you got doing so. A list of engineering abbreviations doesn’t tell us much.

The initial interview is typically an hour-long video conference with a virtual whiteboard (often a shared Google document). It consists of fairly simple programming questions; we’re not asking you to compile the programs. Our focus here is on whether you have a minimum level of technical competence. This interview may be hosted by one or two RTI engineers.

Before the in-depth interview, we provide the candidate with a homework project. This should take 3-4 hours to complete, and can be done over the course of a week in the luxury of your home, with your favorite books, search engine and development tools. It allows the candidate to demonstrate thoroughness, completeness, coding style, test methodology and overall development skills. This homework project is a “Baker, bake me a cake” type of project. Based upon the outcome of the homework project, we decide whether to bring the candidate on-site.

The in-depth interview is a combination of on-site and remote interviews, as some of our key interviewers work remotely. The in-depth interviews last somewhere between 6 and 8 hours. We test your programming skills with hands-on coding exercises and code reviews. We do design questions (e.g., how do you design a distributed database?) on the whiteboard. We also ask non-technical questions to understand how you would work in the team. In some cases, we even ask you to present on your favorite technical topic for 30 minutes. We’ll let you know in advance if that’s the case.

After the in-depth interview, the interview team meets to discuss the pros and cons of the candidate, and a decision is made whether to extend an offer.

If you are interested in joining the RTI engineering team, go check out RTI’s products, technology, culture and career opportunities: http://www.rti.com/company/careers.html

The Attack of the DDS C++ APIs


If you are currently developing Connext DDS applications in C++, you have a single API option: a “traditional” DDS API derived from the DDS interface definition written in the OMG IDL language.

Thankfully, things are about to change and you have two more options to choose from:

  1. A new standard DDS C++ API, defined by the OMG DDS-C++ PSM standard (DDS-PSM-Cxx), and
  2. the C++ API obtained by applying the new OMG IDL to C++11 standard mapping to the IDL DDS interfaces; see Johnny’s post on the subject here.

Curious yet confused? This post will give you some historical context, explain why it makes sense to have both, and where you would use each one.

Context & Rationale.

First the basics: The OMG Interface Definition Language (IDL) provides a way to define interfaces and data-types independently of a specific programming language. This means that you can define an interface in IDL just once and then have it automatically mapped to C, C++, Java, C#/.NET, Ada, Python, Ruby, etc. These transformations are defined in the so-called IDL to Language mappings. See the OMG specifications page for a list of the current ones.

Define the interface once, and get it in all the popular languages automatically. It sounds like a great idea, doesn’t it? Yes, it is a very nice feature and the reason the OMG DDS Specification defined the DDS functionality at a higher level, using UML, and then the DDS APIs using OMG IDL instead of taking the time to define the API separately for each programming language.

However, there is a price to be paid for the convenience of using IDL. Because IDL needs to be mappable to all the programming languages, it provides a “lowest common denominator” and lacks many features that are specific to the different programming languages. When you use a programming language, you want to leverage its features and idioms to make the code as clean, readable, and robust as the language allows. If the idioms are absent, the code seems clunky.

For example, IDL lacks generics, so the IDL to C++, Java, and C# mappings do not use templates/generics even in the places where they would make the most sense. IDL interfaces also cannot have overloaded operations or define operators; the list goes on.

For this reason the DDS SIG decided that the best approach was to create new specifications that define the DDS APIs in the most popular languages, starting with C++ and Java. It is the same DDS specification (same classes, operations, and behaviors defined by the UML model in the DDS Specification), but mapped directly to each programming language, leveraging the features and idioms natural in that language. Apply some elbow grease, meetings, reviews, and votes, and you get the DDS C++ API and DDS Java API specifications.

Choosing Your API & Having the Best Possible Experience.

Defining the DDS API directly in a programming language gives the best possible experience to the programmer, as Alex eloquently showed in his recent blog “Create a P2P Distributed Application In Under 35 Lines Of C++11 Code!” So this is typically the best choice.

Why, you may ask, use the DDS API derived from the IDL to C++11 mapping?

It turns out that defining the APIs in IDL is very useful for automated tools and an important capability for component-based programming.

If an application developer uses a component-based programming framework or some other code-generation framework, they are isolated from the middleware APIs. The application programmer codes to the “framework APIs”, and the mapping to the underlying middleware layer is handled by the code generation and tooling. IDL provides a nice intermediate representation for the framework, which can then generate code that is not tied to a programming language; the IDL-to-language mapping handles the rest. In this scenario the IDL to C++11 mapping may be the best approach. The tools can keep using IDL, and yet the resulting code is cleaner, more efficient, and more robust than what would be generated from the “classic” IDL to C++ mapping.

There are other situations where using IDL-derived APIs may be advantageous to the application, for example if they are integrating other technologies and frameworks that also use IDL. In this case the IDL to C++11 mapping may also be the best approach.

What about the tried-and-true (classic) IDL to C++ API? It also makes sense for people who do not particularly like, or cannot use, some of the “modern” C++ magic (auto pointers, STL, template metaprogramming, etc.), for example with compilers that do not support these advanced features, or where the extra libraries would make the code too complex or expensive to certify.

In the end, it is all about choice and ensuring that you have the best tool for the job. One of the great things about DDS is that it allows applications written in different programming languages to communicate and share information. Stated another way, DDS gives you a way to have these applications integrate and interoperate. The DDS concepts (DomainParticipant, Topic, DataWriter, DataReader, QoS, Content Filters) are the same across these options, while the specific language APIs can differ. Therefore, using a specific C++ language binding is a matter of choice and convenience, much like deciding to use Java, C, or C#.

As American as Tapas and Apple Pie

Every morning, double-decker bus after double-decker bus shuttles engineers from all over the Bay Area to the Googleplex, the Facebook compound, the Apple spaceship or the Yahoo campus. Yahoo infamously ended its work-from-home privilege. Google pulls out all the stops to bring engineers together in the same crowded place and showers them with perks, all to make magic happen.

“It is best to work in small teams, keep them crowded, foster serendipitous connections” – from How Google Works

What do you do when you are not one of these tech behemoths? What do you do when the skillset you seek is highly specialized and scattered all over the world? When you find the masters in building real-time distributed systems, do you require them to move to the Bay Area, one of the most expensive places on earth? Housing prices throughout the larger Bay Area are astronomical. Do you make them slog through the busy Bay Area commute, even if that means that some will sit in traffic for over an hour each way?

As a small software company building real-time infrastructure software (“data distribution software”) for the industrial internet of things (IIoT), headquartered in Sunnyvale, California, we opted for a hybrid approach. We have two main development sites: Sunnyvale, CA and Granada, Spain. Engineers have a flexible schedule to arrive at work to avoid the busy commute times. They all have the option to work from home occasionally. Most often, they choose a fixed schedule to work from home: e.g., every Wednesday. We also have remote engineers all over the US: Massachusetts, Florida, New Hampshire, New York, Virginia, and Minnesota. Managing the team, I stress about how to blend this team together, as if they were all in the same location, and “foster serendipitous connections.” How do you cross the many time zones and yet leave no team behind?

The key to making this work is to establish the right team habits, which build trust and are based upon transparency. We experiment often with new collaboration tools. If these tools do not foster more transparency or build trust, they will fail. Use tools and establish habits which emulate what happens in a traditional office setting, where you can drop into somebody’s office for a chat or for help. We’ve found that the following set of tools, providing video, group chat and shared documents, paired with good team habits, is the most effective.

When you want help debugging a problem, don’t default to sending an email. Email is impersonal and can get snippy (remember the “Have you ever kissed a girl?” email). Instead, contact a remote engineer by video chat (you may still provide the log message via email). We use a lot of video these days, so much that we had to upgrade our internet connection to handle the many concurrent calls. We use Google Hangouts for small group meetings, and installed a Google Chromebox in several conference rooms. For example, our weekly bug-court meeting is a hangout with folks in Sunnyvale, New Hampshire, San Francisco, Maryland, Massachusetts and sometimes New York and Granada. For larger meetings, we use WebEx and ask folks to turn on their camera when talking. In our large conference room, we have a remote-controlled camera so remote participants can pan the room to the speaker. Yes, using video has its challenges (Is my remote office clean? The CPU usage of Google Hangouts, etc.), but it is a key habit which helps build trust.

An important practice is virtual (daily) scrums, through hangouts or group chat. We do this in individual development teams and, more recently, across the entire team. In individual development teams, the sync-up meetings are more detailed and cover progress, plans, specific blockers or needs. When it comes to the entire team, we ask folks to post in a group chatroom the specifics of what they will be working on that day. Initially we experimented with IRC (which failed on some platforms, as it wasn’t easy to use), and now we use Atlassian HipChat. Our rules for using HipChat are simple:

  • Rule #1: When you start your day, you say a virtual good morning and mention what you will work on in the GoodMorningGang room. This is similar to walking into the office and chatting with your colleagues about what they will be doing that day. No good morning, no more soup for you: you don’t get to be part of HipChat. This rule has brought folks a lot closer. You get a sense of what folks are working on and what their level of stress and frustration is, and you get to celebrate and chit-chat, as if folks were all in the same office.
  • Rule #2: move the conversation to the right room: e.g., don’t launch a discussion about platforms in the GoodMorningGang room; take the platform related discussions to the All Things Platforms room.
  • Rule #3: memes and animated gifs are allowed and encouraged. All work and no play makes Jack a dull boy. It’s ok to goof off, have some fun and create silly memes or celebrate with a little dance.

Very little of what goes on in the engineering team is a secret within the team or the company. All our weekly meeting notes, team summaries and discussions are posted in internally available Google Docs. The engineering meeting notes are posted weekly to the entire company. We use Atlassian’s Jira to keep track of what we work on and what bugs people have encountered with our products. This is accessible to the entire company. During weekly tech briefings, we educate each other about cool technical developments in the development and research teams.

There are many more habits and tools which help us work more efficiently as a distributed team: from a reasonably flat organization where the people doing the work are encouraged to make the decisions, to transitioning to a better revision control tool (git), to being able to power-cycle all embedded boards in our lab from anywhere in the world.

Making a remote team work efficiently, as if everyone were in the same office, is not easy and takes constant adjustment and experimentation with habits and tools. When experimenting with a new habit or tool, always start small: create pockets of excellence, succeed, and then copy to another group. Be patient in the process. The combination of restlessness (we always need to look for better tools to work as a distributed team) and patience (make them work) is important. Live the behavior.