The Industrial Internet Consortium takes on the Green Energy Challenge

Stan Schneider, CEO of Real-Time Innovations, Industrial Internet Consortium Steering Committee Member

Today, the Industrial Internet Consortium (IIC) decided to take on the smart grid to enable large-scale efficient use of green energy. The power system is perhaps the central infrastructure of industry. Modernizing the grid is critical to building an integrated Industrial Internet of Things. Our first goal: deliver on the promise of renewable energy.

Today’s central-station-controlled power grids operate on 15-minute output update cycles; every 15 minutes, the central station estimates the power and spins up generators to meet the load. Unfortunately, renewable power sources such as solar panels or wind turbines don’t have a predictable output over that time range. Fast loads like plugging in your Tesla are also hard to predict. If the grid drops below the needed power, it can fail. So, to ensure sufficient power, grid operators spin up reserves so they can compensate for these fluctuations. That keeps the lights on, but it wastes fossil fuels and means renewables are not lowering the carbon footprint as much as they should.

However, to realize the promise of green energy at scale, we must improve control. Microgrids can be part of the answer. A microgrid puts intelligence in the field, connecting local generation sources like solar panels with controllers and storage. A microgrid can close loops in milliseconds instead of minutes. It can take advantage of practical energy storage like capacitors and batteries.

The core of a microgrid is a high-performance “field message bus.” The bus connects devices and intelligent nodes at high speeds. It can also interact with the central station and the cloud, taking advantage of both local and remote state to optimize operations.

To achieve its goal of enabling smart grids, the IIC announced its first energy-focused testbed: the Communication and Control Testbed for Microgrid Applications. IIC member organizations Real-Time Innovations (RTI), National Instruments (NI) and Cisco are collaborating on the project, working with power utilities CPS Energy and Southern California Edison. Additional industry collaborators include Duke Energy and the Smart Grid Interoperability Panel (SGIP), a power industry organization.

With this new IIC testbed, RTI, Cisco and NI are taking the grid a big step into the future. This testbed will test the contention that the DDS data-centric standard can provide the core of a microgrid-based power architecture. DDS is already operating critical grid infrastructure today, including Siemens Wind Power’s largest turbines and North America’s largest hydropower dam. This project will scale the technology down to the smallest of power systems, enabling an eventual architecture of small, efficient microgrids connected into a larger, intelligent Industrial Internet of Things.

We are working closely with leaders in the power industry. Southern California Edison has perhaps the industry’s most advanced grid simulation laboratory, with full photovoltaic, central-station, substation, and transmission hardware and simulators. It even features a “garage of the future” with electric cars and the required electronics. CPS Energy, the largest municipal utility in San Antonio, will run the final test phase in its real-world “Grid of the Future” neighborhood.

After a detailed analysis, Duke Energy, the largest US utility, published a distributed intelligence reference architecture for an Open Field Message Bus (OpenFMB) based on DDS last February. Also, SGIP, the leading industry grid consortium, recently launched a project to codify the data models, service requirements and standards for OpenFMB. We are working closely with these projects to ensure alignment.

Long term, the real power of the IIoT is to connect sensor to cloud, power to factory, and road to hospital. To do that, we must change core infrastructure to use generic, capable networking technology that can span industries, field and cloud. We are excited to see the IIC leading the way to a more connected, more efficient, greener world.

The Industrial Internet Consortium Turns 1 This Week!

I remember my daughter’s first birthday. I remember she really enjoyed her cake. And I remember spending at least 30 minutes scrubbing it out of her hair that evening.

The Industrial Internet Consortium is 1 year old! And to celebrate, they’re holding a public IIC 365 event at their quarterly meeting in Reston, Virginia on Thursday, March 26. If you are around, register and come listen. There will be a demo of the first publicly announced Industrial Internet Consortium Testbed, a discussion of the World Economic Forum’s report on the potential of connected products and services, and a keynote by Dr. Joe Salvo of GE Research, one of the key instigators of the consortium. Joe Salvo alone will be worth the effort of attending.

This first publicly announced testbed is actually just one of five Industrial Internet Consortium Testbeds at the moment; the others are still unannounced outside the IIC. In fact, RTI (Real-Time Innovations) is leading one of those testbeds, and we’ll be announcing it soon! At the event, you’ll get to see a demo of the Power Tools Fleet Management testbed – in other words, the tracking and tracing of smart hand tools. There are actually some fascinating use cases beyond just making sure you know where all the expensive tools are in the manufacturing plant. For example, if you can track the precise location of a riveting tool and when it rivets, you can determine whether the user missed a rivet when working on an airplane wing.

RTI was one of the first members of the Industrial Internet Consortium following its founding a year ago by GE, Intel, Cisco, IBM and AT&T. Besides having a member on the steering committee – our CEO – RTI is helping to specify the connectivity reference architecture to ensure end-to-end interoperability. We’re also helping to draft the security architecture to ensure end-to-end security for Industrial Internet of Things (IIoT) systems. We’re also active in the marketing working group, the use cases team, and the testbed working group. With over 140 companies in the consortium, and growing, I’m personally excited to see how much work is getting done and how much momentum there is behind the IIoT.

So come celebrate the Industrial Internet Consortium’s birthday in Reston. I doubt we’ll have to wash any cake out of our hair.

The Future of Live TV

We’re working with our customers to share with you their stories and insights, to offer you a rare glimpse into the future of systems from some of the world’s most exciting and innovative industries and development teams. Enjoy!

The Future of Live TV Production and Broadcast - The Content Experience


By Bryn Balcombe – CTO, London Live

The Future of Live TV Production and Broadcast needs to be defined by the expectations of the consumer. This is not just about content access. It’s about the content experience itself.

Filmmaking recently underwent a similar change that has set the baseline for these new expectations. Progressive filmmakers such as George Lucas saw computing only as a tool to enhance live-action films. However, a small team, supported by Steve Jobs, departed Lucasfilm to form Pixar in the belief in a different future: the ability of software and compute power to create feature films. The result was Toy Story, released 20 years ago [1].

After 16 years of research for one of the world’s largest sports rights holders, I believe live sports production has reached the same tipping point. The most progressive sports broadcasters still only use computing to enhance live productions, but we now have enough advances in software and processing power to capture 3D worlds (real life, in real time) and to create completely new live event experiences using this compute power.

This is what happens when the Industrial Internet of Things (IIoT) is applied to Live TV Broadcast. Without recognising this potential, the TV Industry risks having its own Kodak moment [2], as new competitors such as Apple, Google and Amazon become uniquely positioned to deliver this future.

The Future of Live Event Experiences

Live sports will be the first battleground, but this shift will impact the commercial foundations of the entire broadcast industry [3]; competitors will enter the sector with an offering completely differentiated from existing linear TV broadcast, one that opens up new market opportunities for rights holders and platform providers.

The capture of real 3D worlds in real time sets the stage for radically different user experiences: watching major events from any perspective you choose, with live, cinematic views from any location at any resolution. Anyone, anywhere, can control the action and choose the stories to follow; anyone can direct, anyone can fly the camera. The experience can be personalised or shared, lean-back or lean-forward, live or time-shifted, and delivered to mobile, tablet, game console, TV, cinema or VR (Virtual Reality).

Fig. 1 The Integration of 3D compute into Live TV has already started.

These completely new live event experiences can only be created using computing, and they will uniquely target the capabilities of modern end-user devices: touch screens, gyros, controllers and voice interactions that have become natural to young and old alike.

The IIoT enables converged content acquisition for a diverged audience.

Current Industry Direction

The broadcast TV industry is picture-quality driven – increasing resolution (HD, 4K, 8K), High Dynamic Range (HDR), Wide Colour Gamut (WCG) and High Frame Rate (HFR) are all future directions being considered [4]. Ever-increasing communication speeds will be required.

The industry bodies are looking towards IT networking hardware to meet the demand – leveraging R&D investments that have been driven by other industries while broadcasting was creating its own single-purpose hardware appliances.

Bandwidth, not compute, is driving COTS (Commercial Off-the-Shelf) IT adoption in TV Broadcast. However, new experiences can only be created with compute.

The TV Industry has recognised that it is being outpaced by IT networking R&D investment. The same is also true of communication protocols and processing; the combined efforts of multiple industries are perhaps best defined by what IBM describes as “the next paradigm in computing – data centric systems”[5].

This system-of-systems thinking is currently missing from the TV Industry’s standardisation approach, but it is fundamental to the future of Live TV Production. The industry is relying not only on human image interpretation but also on human communication and human control.

Our research at London Live leads us to believe that a standardised middleware called the Data Distribution Service (DDS) delivers the right mix of data-centric, open-standard, real-time communication: not just for carrying information between systems but also for the individual images themselves, with each frame uniquely identified so that it can be analysed, correlated and manipulated in isolation, or in time or spatial groups. London Live is now working towards creating a fully reactive TV studio environment based upon this open standard.
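
To make that idea concrete, here is a purely illustrative sketch using the Connext Java API. VideoFrame and VideoFrameDataWriter stand in for classes that rtiddsgen would generate from a frame-level data model; they are not an actual London Live or RTI type.

import com.rti.dds.infrastructure.InstanceHandle_t;

public final class FramePublisher {
    private final VideoFrameDataWriter writer;  // created elsewhere during DDS setup
    private long nextFrameNumber;

    public FramePublisher(VideoFrameDataWriter writer) {
        this.writer = writer;
    }

    // Publish one captured frame as a DDS sample whose identity travels with
    // the data, so analysis, correlation and rendering subscribers can each
    // consume the same uniquely identified frame independently.
    public void publish(int cameraId, long captureTimeNanos) {
        VideoFrame frame = new VideoFrame();
        frame.cameraId = cameraId;                 // which camera produced the frame
        frame.frameNumber = nextFrameNumber++;     // unique, monotonically increasing per camera
        frame.captureTimeNanos = captureTimeNanos; // capture timestamp
        // The generated type would also carry (or reference) the pixel payload.
        writer.write(frame, InstanceHandle_t.HANDLE_NIL);
    }
}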

Fig. 2 London Live – Integrating automated robotics into live TV broadcast

TV’s Kodak Moment?

The full implications of the shift towards a computing-based future have not yet been addressed, and some analysts predict TV broadcast risks having its Kodak moment.[6]

Without considering the requirements of future user experiences, the industry risks locking itself into standardisation that will, ultimately, limit what is possible.

Mapping of 3D worlds is the direction experiences are moving in. Ari Emanuel, co-CEO of William Morris Endeavor (WME) and IMG, an entertainment, sports and media agency, has stated: “The future of media and entertainment is not going to be a flat screen. OTOY[7] is building the content pipeline for the next generation of movies and computer graphics, where immersion and presence will be a key axis of the creative process.”

Is that where the Live TV Broadcast Industry is heading?

In 2014, GE Chairman and CEO Jeff Immelt made a powerful statement about the impact of the IIoT future: “If you went to bed last night as an industrial company, you’re going to wake up this morning as a software and analytics company.”[8]

The major risk is that the TV industry’s shift to a software future is being built upon foundations that assume human interpretation of images – foundations that do not scale. Just as proprietary broadcast systems are designed to make these human processes efficient, but not to change them, the current approach to IT adoption runs the same risk.

New competitors

If broadcasters don’t create new experiences, others will; Apple, Amazon and Google are the competitors now.

The key asset is no longer the TV broadcast transmission to the home via terrestrial, cable or satellite. The key asset is the cloud-based ecosystems these competitors are building. They compete on the total experience that can be delivered using this infrastructure – an experience that combines the user devices (they design), the OS and app software (they write) and the core data centres (they’ve built). They compete on the intellectual property that differentiates the total experience.

Content of all types is layered on top.

The future of Live TV is compute-centric and Live TV communication has to become data-centric to enable new consumer experiences.

Footnotes

[1] Ed Catmull, President of Pixar Animation and Disney Animation, Creativity, Inc.: Overcoming the Unseen Forces That Stand in the Way of True Inspiration.

[2] North River Ventures http://www.northriver.com/ventures/broadcast-tv-dead-man-walking/ & Profiting From the Cloud Membranes Part Four (http://vimeo.com/102073844 @ 3m50s) My only point of difference is that I do not see live sports as a form of protected linear media services – in fact I believe that the application of compute to live event capture (sports or news) will be the industry tipping point – perhaps these will be referred to as “spatial media services” rather than linear/non-linear.

[3] Other live events, including global news, will follow.

[4] https://tech.ebu.ch/docs/techreports/tr028.pdf

[5] http://ibmresearchnews.blogspot.co.uk/2014/11/data-centric-systems-new-paradigm-for.html

[6] See note [2] above.

[7] http://render.otoy.com/newsblog/?p=547

[8] Third “Minds + Machines” Summit http://youtu.be/djB6BmBda6Q & http://fortune.com/2014/10/10/ge-data-robotics-sensors/

The Best Programming Language for Industrial Internet of Things Applications

Recently, RedMonk released the January 2015 version of their programming language rankings (http://redmonk.com/sogrady/2015/01/14/language-rankings-1-15/).  Here are the top 10 languages from their list:

  1.      JavaScript
  2.      Java
  3.      PHP
  4.      Python
  5.      C#
  6.      C++
  7.      Ruby
  8.      CSS
  9.      C
  10.      Objective-C

If you’re a software developer, chances are you are using one or more of the languages on this list.  So which is the right one for Industrial IoT applications?  There is no one “right” language – the choice will depend entirely on your application, experience, and hardware platform.  The good news is that you won’t need to give up your favorite language to create scalable, interoperable Industrial IoT solutions that communicate seamlessly.  Connext DDS supports most of these languages, giving you flexibility in design while retaining the power of fast, scalable, reliable, and secure Industrial IoT communications.

Java, C#, C++, and C are all supported out of the box.  Experimental RTI integrations allow languages such as Python and JavaScript to access data in flight on the RTI DDS Databus.  For web programmers using PHP and CSS, the RTI Web Integration Service allows their apps to interact with Connext DDS.

RTI Connext DDS runs on the most popular desktop and embedded operating systems including Linux, Windows, OS X, VxWorks, QNX, Integrity, LynxOS, AIX, and Solaris.  Both desktop hardware (x64/x86) and embedded processors (such as PowerPC and ARM) are supported.

A typical DDS-based system might include a mix of hardware platforms, operating systems, and languages:

blog image small

By basing your Industrial IoT applications on Connext DDS, you ensure that they can interoperate despite differences in programming languages, operating systems, and underlying CPU.  A sensor built on ARM7 hardware with firmware written in C running under an RTOS can publish its readings via Connext DDS Micro.  The sensor data can be subscribed to by a Windows app on a PC, a Java app running on an Android tablet, and a hardened PowerPC board running a C++ application under VxWorks.
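
To make the Java side of such a system concrete, the sketch below shows the basic subscriber pattern with the Connext Java API. It is illustrative only: SensorReading, SensorReadingTypeSupport and SensorReadingDataReader stand in for the classes rtiddsgen would generate from your own IDL, and the topic name and value field are made up.

import com.rti.dds.domain.DomainParticipant;
import com.rti.dds.domain.DomainParticipantFactory;
import com.rti.dds.infrastructure.RETCODE_NO_DATA;
import com.rti.dds.infrastructure.StatusKind;
import com.rti.dds.subscription.SampleInfo;
import com.rti.dds.subscription.Subscriber;
import com.rti.dds.topic.Topic;

public class SensorMonitor {
    public static void main(String[] args) throws InterruptedException {
        // Join DDS domain 0; the embedded sensor publishes into the same domain.
        DomainParticipant participant =
                DomainParticipantFactory.TheParticipantFactory.create_participant(
                        0, DomainParticipantFactory.PARTICIPANT_QOS_DEFAULT,
                        null, StatusKind.STATUS_MASK_NONE);

        // Register the rtiddsgen-generated type and create the shared topic.
        String typeName = SensorReadingTypeSupport.get_type_name();
        SensorReadingTypeSupport.register_type(participant, typeName);
        Topic topic = participant.create_topic(
                "Example SensorReading", typeName,
                DomainParticipant.TOPIC_QOS_DEFAULT, null, StatusKind.STATUS_MASK_NONE);

        // Create a reader with default QoS and poll for new samples.
        SensorReadingDataReader reader = (SensorReadingDataReader)
                participant.create_datareader(
                        topic, Subscriber.DATAREADER_QOS_DEFAULT,
                        null, StatusKind.STATUS_MASK_NONE);

        SensorReading sample = new SensorReading();
        SampleInfo info = new SampleInfo();
        while (true) {
            try {
                reader.take_next_sample(sample, info);
                if (info.valid_data) {
                    System.out.println("Reading: " + sample.value); // 'value' is a field of the hypothetical IDL
                }
            } catch (RETCODE_NO_DATA noData) {
                // Nothing new this cycle.
            }
            Thread.sleep(1000);
        }
    }
}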

Ultimately, you can build your Industrial IoT application in any language you like, but it’s important to choose a connectivity solution that both supports various languages and can be used end to end, from the sensor to the cloud.  Connext DDS offers unmatched language compatibility across the full range of Industrial IoT platforms.

The Future of Medical

We’re working with our customers to share with you their stories and insights, to offer you a rare glimpse into the future of systems from some of the world’s most exciting and innovative industries and development teams. Enjoy!


By Michael Eibye, Manager – R&D Software and System Test, BK Medical

The fundamental premise of a 5-year research program that we at BK Medical are undertaking, focused on next generation ultrasound imaging systems, can be summed up in 6 words: The future of medical is distributed.

As anyone who has ever worked first-hand with large distributed systems can attest, as a distributed system scales, those 6 simple words become something of a call to action as well – how do we address the future and ensure we learn from our past experiences, whilst building on the investments customers have already made? Building distributed systems is challenging, but we are confident that this shift will have a far-reaching impact on the quality of care, as systems become more cost-effective while integrating more deeply with patient care systems and facilitating improved decision making for doctors.

Distributing an ultrasound system requires determining the extent to which the existing stand-alone system (Figure 2) can be reconstructed into its constituent elements while maintaining functionality and performance – the transducer for obtaining the data, the algorithmic engine for processing it, the display device for presenting the resultant images, plus the integration with the care environment’s back-office systems (Figure 1).

Figure 1. Logical Distribution of the BK Medical Ultrasound Scanner

Moving to a distributed solution introduces a new set of challenges: An ultrasound transducer can generate Gbits of data per second. Can this data be safely and securely transmitted over wireless instead of tightly coupled single-system integration? How far can the algorithmic processing engine be separated from the data stream and display device and still deliver timely screen updates? How can we better integrate with hospital systems? Most important of all, how do we migrate these services into our customers’ environments and allow them to leverage their legacy system investments? These are some of the fundamental questions we seek to answer.

There are also commercial drivers behind the shift to distributed: Hardware costs for traditionally tightly integrated system elements, such as the display device, have dropped precipitously in the last few years as tablets have become commonplace. The algorithmic processing power of the PC games engine has driven down the cost of compute engines while increasing their programmability. In addition, the cost of Wi-Fi integration has fallen now that every smartphone includes it, while the tools that enable reliable, high-performance communication have improved.

Figure 2. State of the Art of a current Ultrasound System

There are as many internal drivers towards distributed systems as there are external market pressures: BK Medical is part of a larger organization, Analogic Ultrasound. As we seek to leverage the intellectual property and skill base across the companies, we need a development methodology that enables independent development, yet at the same time allows rapid integration of relevant software and hardware IP into new products. With this objective in mind, we sought to architect loosely coupled, modular software. It became clear that the best and most validated approach was to build our systems data-centrically. This decision, combined with a need for robust, real-time, scalable software, led BK Medical to select DDS (Data Distribution Service) as its software infrastructure.

The core IP of future BK Medical products will be software, so we are paying great attention to how we construct that software to maximize flexibility. A decoupled software approach has to work in two dimensions at the same time: decoupled applications that can be rapidly and easily integrated to bring together new and more powerful system features and functions; and software applications and modules that are decoupled from the underlying hardware and distributed network topology, enabling rapid integration of new hardware IP or of low-cost commercial hardware platforms in new distributed designs. While several software approaches provide the former, few simultaneously enable the latter in distributed systems, especially in future systems that may need real-time communication, can potentially scale significantly and have to be very robust. In addition, with an eye to the future of the Industrial Internet of Things and how it will impact the hospital environment, an approach that can be safely secured is also important.

RTI Connext DDS provides us the ideal platform on which to research and develop the future of ultrasound medical systems. After all, at BK Medical we believe that the future of medical ultrasound is distributed.

If you have a story about using Connext DDS that you’d like to share, email us at RTIBlogs@rti.com. 

Building Connext Applications using Android Studio

Android Studio is the official IDE for Android application development, based on IntelliJ IDEA. The first stable build was released in December 2014, starting from version 1.0. Android Studio is designed specifically for Android development and it is available for download on Windows, Mac OS X and Linux at http://developer.android.com/tools/studio/index.html. This section will describe how to use Android Studio to build a Connext application. It assumes that Android Studio is correctly installed.

Create an Android Project

This example uses an IDL file to define the types. The IDL file will be created and added once the project is created. The Android app built in this section can interoperate with any other RTI Connext application that uses the same IDL file, the same topic name, and compatible QoS settings. If a different IDL file is used, note that the following steps will need to be altered slightly, as the IDL file name, “HelloWorld”, is used to complete fields and name other files in this example. This example shows only the creation of the publisher application. These steps can be repeated to create a HelloWorldSubscriber application. If the subscriber and the publisher will be installed on the same device, make sure the subscriber’s Application, Project, and Package names are different from the publisher’s.

From the Android Studio menu bar, select File -> New Project.

pic1

In the New Project dialog, fill in the Application name, Company domain, and Project location. The dialog selects a default project location and uses the application name to create a sub-directory; you can leave the project location at its default. After you have completed all the fields, click Next.

pic2

 

Select the form factors and click Next. For this example, we created an application for a Nexus 7 tablet and selected an API level that makes it compatible with most Android devices. You can select a different API level based on your application’s requirements.

pic3

Choose Blank Activity, since we are not creating any UI for this application. For your own application you might want to use one of the other templates, depending on your target. Click Next.

pic4

Finally, select the name for the activity and some other elements. Edit the Activity Name as shown below. There is no need to edit any of the other fields, since they are derived from the Activity Name. Click Finish.

pic5

After the project has been created, Android Studio will display the Project as shown below. At this point, HelloWorldPublisherActivity.java is the only source file in the project.

pic6

 

Edit the Project Source Code

Edit the generated Java class (HelloWorldPublisherActivity) by adding the two lines shown below. This creates the simplest possible Android application with Connext; more sophisticated Android code can, and probably would, be used for a real application. A real Android application should never run an endless loop from the main activity, since this will stop the application from responding.
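
Roughly, the edited activity ends up looking like the sketch below. The exact edit is shown in the screenshot; in this sketch the layout name and base class are just the wizard defaults and may differ in your project, and HelloWorldPublisher is the class that rtiddsgen will generate in a later step.

package com.rti.example.helloworldpublisher;

import android.app.Activity;
import android.os.Bundle;

public class HelloWorldPublisherActivity extends Activity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_hello_world_publisher);

        // The two added lines: hand control to the rtiddsgen-generated publisher.
        // For demonstration only: HelloWorldPublisher.main() loops forever,
        // so a real application must not call it on the UI thread.
        final String[] args = {};
        HelloWorldPublisher.main(args);
    }
}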

Android Studio shows HelloWorldPublisher in red text because that class does not yet exist in the project. Save the changes to HelloWorldPublisherActivity.java.

pic7

Create the HelloWorld.idl file in the project’s java folder. Right click on the java folder and select New -> File.

pic8

 

The destination directory dialog box should already show the right location ..\src\main\java. Click OK.

pic9

 

Enter HelloWorld.idl as the file name and click OK.

pic10

 

Android Studio will open the newly created file for editing. Add the text as shown below and save the file.

const long HELLODDS_MAX_PAYLOAD_SIZE = 8192;
const long HELLODDS_MAX_STRING_SIZE  =   64;

struct HelloWorld {
    string<HELLODDS_MAX_STRING_SIZE>             prefix;
    long                                         sampleId;
    sequence<octet, HELLODDS_MAX_PAYLOAD_SIZE>   payload;
};

pic11

 

Open a terminal window/command prompt. Before generating the example code, make sure that the NDDSHOME environment variable is set to the Connext installation directory and that the PATH environment variable includes $NDDSHOME/scripts.

Change to your application’s java directory (HelloWorldPublisher\app\src\main\java) and run rtiddsgen as follows to generate the example files:

rtiddsgen -language Java -package com.rti.example.helloworldpublisher -example armv7aAndroid2.3gcc4.8jdk HelloWorld.idl

Note that the -package argument is the same value used when creating the project. If the rtiddsgen-generated Java classes are in a different package from that of the project, then “import” statements will need to be added to some Java files. You can find the package name at the top of the HelloWorldPublisherActivity.java file.

If you are running on Windows and do not have the Visual Studio compiler in your PATH environment variable, add -ppDisable to the rtiddsgen command line to disable the preprocessor. Since the IDL file doesn’t have any preprocessor statements, running the preprocessor is not needed.

pic12

 

You will see the following in your command prompt window:

Running rtiddsgen version 5.1.0, please wait ...
Done

Done means that all the required type support files and example files have been generated.

In Android Studio you should see all the additional classes that have been generated, and the call to HelloWorldPublisher.main in HelloWorldPublisherActivity should now be resolved. The USER_QOS_PROFILES.xml file, the HelloWorldSubscriber class, and the makefile are not used by this project; those files can be deleted or kept for reference.

 

Add Connext Libraries

Resolve the Connext symbols in the generated java files by adding the Connext libraries to the project.

Copy nddsjava.jar from $NDDSHOME/class to the lib directory of your HelloWorldPublisher App. You should see the library being added as shown below.

pic13

The next step is to add the library to the build. Right click on it and select Add as library.

pic14

The library needs to be added to the application. Click OK.

pic15

If you open the build.gradle file you will see that the library has been added.

pic16

 

Connext is not pure Java; it also includes some native libraries. When building the project, you will not see any complaints if these libraries are missing, but when you try to run the app, the LogCat panel will show errors such as:

com.rti.example.helloworldpublisher W/System.err﹕ The library libnddsjava.so could not be loaded by Linux.

com.rti.example.helloworldpublisher W/System.err﹕ Make sure that the library is in your LD_LIBRARY_PATH environment variable.

And you will see the following error screen on your Android device:

pic17

Below is how I added the native libraries to the Android Studio project.

Add a lib directory to your project with a subdirectory that contains the native Connext DDS libraries (libnddsc.so, libnddscore.so, and libnddsjava.so). For this example, the subdirectory is named armeabi-v7a. This will look as follows in your project.

pic18

Once you have your .so files in that directory structure, create a .zip file from the lib directory. I named the file lib_rti.zip. The .zip file will have lib/armeabi-v7a/*.so as its contents.

pic19

 

Rename the extension of the zip file to .jar, giving you a file named native-rti-libs.jar.

pic20

 

Then drag the .jar into your Android Studio project’s libs directory (the same location as all your other jar libraries).

pic21

 

Finally, add this line to your project’s build.gradle:

compile fileTree(dir: 'libs', include: ['*.jar'])

pic22

 

Update the Android Manifest

Connext needs permission to use the Internet and Wi-Fi, and to change some of the Wi-Fi settings. Connext also needs access to external storage if it is to read a USER_QOS_PROFILES.xml file (although the current example doesn’t make use of that).

In the Project Explorer, double-click the AndroidManifest.xml file to edit it. Add the following lines to the manifest file as shown below.

<uses-permission android:name="android.permission.INTERNET"/>
<uses-permission android:name="android.permission.CHANGE_WIFI_STATE"/>
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE"/>
<uses-permission android:name="android.permission.CHANGE_WIFI_MULTICAST_STATE"/>

pic23

 

Run the Example

Before running the example, make sure you have an Android device connected via USB or a virtual Android device running.

Select Run -> Run ‘app’ or click the green triangle next to app in the toolbar.

pic24

Choose the device you want the application to run on. The example below shows a Nexus 7 connected through USB and a virtual device which emulates a Nexus 7.

pic25

Once it is running, you will see the output displayed in the LogCat window.

pic26

If you have a machine with RTI Connext installed connected to the same network, you can start rtiddsspy and see the samples being published.

pic27

The sample code doesn’t fill the samples with any actual values. You can add this in the HelloWorldPublisher class; look for the comment “Modify the instance to be written”.

pic28
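
If you want the published samples to carry real data, the fragment below is a minimal sketch of what that section might look like. The field names come from HelloWorld.idl above; the variable names (instance, count, writer) are the ones rtiddsgen generates and may differ slightly between versions.

// Inside the publish loop of the generated HelloWorldPublisher.java,
// after the "Modify the instance to be written" comment:
instance.prefix = "Hello from Android";   // string<64> field from HelloWorld.idl
instance.sampleId = count;                // long field: reuse the loop counter

instance.payload.clear();                 // sequence<octet> maps to a ByteSeq
for (int i = 0; i < 16; ++i) {
    instance.payload.addByte((byte) i);
}
// The writer.write(...) call that already follows in the generated code
// then publishes the filled-in sample.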

The example used here runs a loop to send DDS samples. A real Android application would of course have a UI to control the behavior rather than a loop that sends samples with a delay between them, and it should never block in a loop. However, I hope this example was useful in showing how to use the RTI Connext libraries and build an application using Android Studio.

The Future of Automotive

We’re working with our customers to share with you their stories and insights, to offer you a rare glimpse into the future of systems from some of the world’s most exciting and innovative industries and development teams. Enjoy!


By Constantin Brueckner, Hardware-in-the-Loop Functional Test, AUDI AG

Your car is probably the most compute-intensive thing that you own. A recent economy vehicle has at least 40-50 Electronic Control Units (ECUs), and a top-of-the-range car well over 100. In the past, each of these ECUs had one dedicated function to perform. This has evolved over time, and most ECUs now perform more than a single function or group of functions. Despite this evolution, there is still an increasing need to reduce the number of ECUs and the cabling between them, with the ultimate aim of increasing fuel economy and reducing CO2 emissions, whilst providing the customer with even greater functionality in the car. These additional demands are being met by a shift towards functional integration and communication between ECUs, and between the car and its environment. This is one of many reasons why the future of automotive test is becoming distributed and interconnected. Furthermore, our test systems have to evolve as fast as the car’s functionality to encompass this change. To address these challenges, Audi founded a pre-development department for test systems, which is currently developing a real-time-capable bus system based on RTI DDS for future test systems.

But first, let us look in more detail at the shift towards ‘functional integration’ and explain it with the following examples:

  • A simple former “air bag computer” fired the air bags at the time of a crash. This has now become an integral element of a complex safety system with additional functions to reduce injury to the passengers in a crash. The new, so-called ‘safety computer’ has an automatic crash detection capability (“Audi pre-sense”) and has to, for example, perform fully automated braking support, deploy the air bags, pre-tension the seat belts, close the windows and roof, and move the seats into an upright position.
  • Dedicated ECUs for radio, navigation and rear seat entertainment are evolving into a “main entertainment unit.”
  • Dedicated ECUs for body electronics, such as headlights, interior lights and air conditioning, are combined into one “body control module” and enriched with new capabilities such as bending light, LED headlights, parking assist, air conditioning and rain-sensing wipers.

Furthermore, there is a new automotive safety assurance standard to comply with that reflects this change to a function-centric system view: ISO 26262. Functional safety is intrinsically end-to-end in scope: it has to treat the function of a subsystem as part of the function of the whole system. This means that whilst functional safety standards focus on Electronic and Programmable Systems (E&PS), the end-to-end objectives of the approval process mean that, in practice, the functional safety review has to extend to the non-E&PS parts of the system that the E&PS actuates, controls or monitors.

Functional integration and this regulatory change are the issues driving a fundamental shift in how the HIL (Hardware-in-the-Loop) tool chain of automotive test departments has to be developed.

In the past, we would have to settle on one HIL vendor before setting up a new HIL test bench, to ensure that every subsystem could work seamlessly with the others. Today we are moving from this all-in-one solution, with monolithic HIL test benches provided by a single vendor, towards heterogeneous and distributed test benches that consist of several hardware modules from different HIL vendors, connected via the real-time-capable HIL-Bus.

Why? Because no single HIL vendor has the all-in-one solution mentioned above that meets all our test demands regarding distributed functions and highly integrated ECUs. As a result, we must choose the best-in-class solution for each subsystem and use those to develop a new test platform in which we have a high degree of confidence. The challenge is how to integrate this set of HIL platforms from all of these different vendors in order to produce a new-generation test bench for next-generation cars and functions.

DDS-based HIL-Bus

Audi HIL test lab – showing how we integrate multi-vendor HIL systems together

Communication in cars has already moved from dedicated wire-based communication to data-oriented bus communication using, for example, CAN bus or FlexRay. We have now transferred this bus-based approach from our cars to our next-generation HIL architecture. We call this new approach ‘HIL-Bus based’.

Architectural view of the distributed HIL environment

To realize this bus-based approach for HIL test benches, we need a data-centric bus representation mechanism to be the conduit of state information.

For the technical realization Audi decided to use RTI Connext DDS with integration points for HIL vendor systems.

RTI not only provided us with a market-leading implementation of DDS in their Connext DDS product; their OCS (Open Community Source) license model also gave us the ideal commercial framework within which to develop an open market ecosystem for the HIL-Bus concept. OCS enables our HIL-Bus partners to have free access to RTI Connext DDS for their development and deployment. It thus removes a major inhibitor to adoption across the industry and allows partners to focus their resources on integration and quality.

Additionally, we drive and focus on open international standards like the ASAM XIL-API to seamlessly integrate test automation software for 24/7 automated and deterministic tests, as well as experimental software tools for manual testing.

Today we are working with multiple HIL system vendors to evolve this ecosystem and to instantiate the HIL-Bus as the ideal method for end-to-end functional system test.

For more information on HIL-Bus testing, we suggest the joint Audi/RTI article by Bettina Swynnerton of RTI and myself that was published in ATZelektronik in July 2014.

To learn more about ASAM XIL-API visit the ASAM website www.asam.net.

Building flexible manufacturing systems for Industrie 4.0

Many discussions of the Industrial Internet of Things (IIoT) describe how all kinds of sensors will be connected to the cloud, where the big data analytics beast will consume lots of data to provide you with efficiency optimizations. Huge cost savings are promised especially in the energy and transportation sectors. The medical industry, on the other hand, sees an opportunity to provide better and safer care by integrating patient monitoring devices and correlating their data, or by merely reducing the number of erroneous alarms.

I am very excited about how the IIoT will transform how things are made. Additive manufacturing and 3D printing are already revolutionizing how machines produce individual items. A generic 3D printer, supplied with the right basic materials and guided by an electronic blueprint, can produce any type of component. NASA recently demonstrated how to create a new wrench in space, based upon a blueprint emailed to the International Space Station. GE is building critical jet engine parts using 3D printing. This technology is already here.

Hot off the 3-D printer is this jet engine bracket – GE Tumblr

The manufacturing floor will undergo dramatic changes. Manufacturing heavyweight Germany has jumped on the opportunity and launched the Industrie 4.0 project to lead the fourth industrial revolution. In the US, the Smart Manufacturing Leadership Coalition is bringing together manufacturing companies and research groups to define the future of manufacturing.

Industrie 4.0 videos by Siemens and Bosch provide great examples of how changes in manufacturing will provide cost savings, as well as allow companies to deliver higher-quality products:

  • More flexible manufacturing processes and shop floor systems will make it less expensive to deliver customer-defined products (“batch size 1”): custom LeBron James shoes, anyone?
  • Predictive maintenance will save companies money by extending the lifetime of machines and promises to reduce overall machine downtime.
  • Manufacturing will be better integrated with spare-part replenishment through location and weight sensors, saving the time and money needed to locate and reorder parts.
  • RFIDs and other technologies will provide a “product memory” to read what needs to be done to a part and record the progress: e.g., intelligent tightening tools record the location and amount of torque applied to a bolt on an airplane wing. By making the tools smarter, fewer mistakes will be made on the manufacturing floor. In the event of a product recall, it will also be easier to recall select products, rather than, e.g., all cars of a specific model between 2007 and 2013.

Building a more flexible and integrated manufacturing system requires changes to the machines and the addition of new sensors. It also requires changes to processes and to the software providing the Manufacturing Service Bus (MSB).

A recent whitepaper, “Increasing the Adaptability of Manufacturing Systems by using Data-centric Communication,” makes the case that a data-centric communication paradigm is key to meeting the requirements of future adaptable manufacturing systems. We agree that a data-centric approach is key. However (and we are obviously biased), we disagree with the authors on the solution.

RTI Connext, based upon the OMG Data Distribution Service (DDS) standard, meets the requirements to power next-generation manufacturing systems. With RTI Connext, systems are:

    1. Decoupled – Designed for real-time communication of distributed systems, RTI Connext decouples systems and components in space, time and flow. Systems do not need to know in advance which component is producing the data or which component will be receiving the data. Historical data is available to late joiners. Sending and receiving data can be decoupled from the main control flow, avoiding blocking more critical tasks.
    2. Auto-Discovery – DDS discovers data sources and sinks through a lightweight automatic discovery mechanism. A keep-alive mechanism verifies that systems are still up and behaving as expected. Non-responsive components can be ignored.
    3. Resilient to failure – Because all information flows peer-to-peer, RTI Connext avoids any single point of failure. Furthermore, RTI Connext is smart enough to handle redundant producers, consumers, and even networks.
    4. Proven in heterogeneous systems – RTI Connext is deployed in many systems across many industries. It supports a wide variety of operating systems, including embedded real-time operating systems, and it supports many different transports; with its pluggable transport interface, new or legacy transports can be added.
    5. Standards-based – Because it is based on the DDS and Real-Time Publish-Subscribe interoperability protocol OMG standards, RTI Connext allows systems to interoperate in a standards-based way.
    6. Secure – RTI Connext secure DDS and secure transports provide full security of all dataflows. This includes authentication, encryption (confidentiality), integrity, and availability, along with non-repudiation of transactional information.
    7. Flexible – RTI Connext supports multiple communication paradigms: publish-subscribe (e.g., for alarms and events), request-reply (e.g., status inquiries), at-most-once, and exactly-once delivery. The quality of service can be defined on a per-topic basis: e.g., alarm data must be delivered reliably and at high priority, while a sensor signal may only require best-effort delivery at background priority (see the sketch after this list).
    8. Fast and scalable – Real-time performance is in our veins: we design for low latency, low jitter, high throughput and high scalability. Our customers require microsecond or millisecond latency and scale from a handful of devices to hundreds or thousands.
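
As a rough illustration of point 7, the sketch below sets per-topic delivery guarantees with the Connext Java API. It is illustrative only: the Publisher and Topic objects are assumed to have been created elsewhere, and in a real system these settings usually live in an XML QoS profile rather than in code.

import com.rti.dds.infrastructure.ReliabilityQosPolicyKind;
import com.rti.dds.infrastructure.StatusKind;
import com.rti.dds.publication.DataWriter;
import com.rti.dds.publication.DataWriterQos;
import com.rti.dds.publication.Publisher;
import com.rti.dds.topic.Topic;

public final class PerTopicQosSketch {

    // Alarms: every sample must arrive, so request reliable delivery and a
    // higher transport priority (how priority maps onto the network depends
    // on the transport in use).
    static DataWriter createAlarmWriter(Publisher publisher, Topic alarmTopic) {
        DataWriterQos qos = new DataWriterQos();
        publisher.get_default_datawriter_qos(qos);
        qos.reliability.kind = ReliabilityQosPolicyKind.RELIABLE_RELIABILITY_QOS;
        qos.transport_priority.value = 10;
        return publisher.create_datawriter(alarmTopic, qos, null, StatusKind.STATUS_MASK_NONE);
    }

    // Periodic sensor signals: best effort is enough, because the next
    // sample supersedes a lost one.
    static DataWriter createSensorWriter(Publisher publisher, Topic sensorTopic) {
        DataWriterQos qos = new DataWriterQos();
        publisher.get_default_datawriter_qos(qos);
        qos.reliability.kind = ReliabilityQosPolicyKind.BEST_EFFORT_RELIABILITY_QOS;
        return publisher.create_datawriter(sensorTopic, qos, null, StatusKind.STATUS_MASK_NONE);
    }
}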

Does RTI Connext provide all the pieces today? Not quite yet. Integration with legacy protocols and systems will be key. One option to bridge to the existing installed base is through the RTI Connext Routing Service, which provides a mechanism to build adapters and transformations between various existing protocols.

The changes happening on the manufacturing floor are exciting, and also demanding. A proven real-time communication protocol for distributed systems, like RTI Connext, will be key. Check out why you should use RTI Connext for your next-generation manufacturing system.