We just attended a Connext DDS Users Group meeting in Chengdu, China on November 14. Most of the conference was conducted in Mandarin Chinese, and neither Edwin (RTI VP of Sales) nor I spoke the language.
However, Vision Microsystems, our host and RTI’s Chinese distributor, had arranged an excellent translator who helped us keep up with about 75% of the information. This was their second such meeting, with 70 users representing over 30 Connext DDS projects.
Edwin and I took part of the day to explain how RTI was involved with the Industrial Internet of Things, and presented a technical roadmap of the Connext 5.2 release coming in early 2015.
During the meeting, several long-time Connext DDS developers explained how they chose DDS for their projects, the problems they were solving, and pointed out issues they had with DDS. These presenters were pragmatic and direct, pointing out situations where DDS was not the best solution for them. Both Edwin and I were glad to hear these points of view. Neither of us, nor RTI, advocate DDS as the best technology for all applications involving network communications and systems integration.
We learned that many use cases incorporating Connext DDS were those in which DDS was superior to legacy or competing solutions. Users were discovering better performance and scalability, and reducing user code for implementing many capabilities and features — particularly those that were inherently part of our stable, thoroughly-tested, commercial-off-the-shelf technology.
Overall, the testimonials were gratifying for us, and valuable for others in the audience.
This type of feedback helps assure our potential customers that Connext DDS solves real problems in real systems. RTI generally focuses on problems that are difficult to solve, or would otherwise take a great deal of effort to solve with DIY solutions. For Edwin and me, the Chengdu meeting was a chance to understand how users truly employed our products and to validate their experiences as they discovered where DDS wasn’t a perfect fit.
Of course, with more customer feedback, DDS can become a better fit for many more problems than it solves today. You can help make that happen by sharing your experiences with Connext DDS. Post them to our community forums, or send an email to info@rti.com.
One of the primary use cases for the IIoT (Industrial Internet of Things) is to collect sensor data and deliver it to an enterprise cloud for enhanced real-time visibility into remote operational systems. This is very important for applications such as Oil & Gas, Manufacturing Plant Production Monitoring, Healthcare Patient Monitoring, and Power Substation Monitoring. With advances in network infrastructure and the promise of higher-bandwidth WAN (Wide Area Network) connections, raw sensor data can be pulled across the WAN to a backend enterprise cloud, where data processing and predictive maintenance solutions can be implemented and monitored. This type of architecture gives organizations great agility to respond and react to changing conditions in their deployed systems.
There are a few issues that arise when trying to achieve this architecture. The primary one is the sheer volume of data that must be sent from the deployed system back to the enterprise. Sending individual network packets from each sensor is not feasible. In addition, the amount of data from each sensor is constant whether or not it is actually required on the enterprise side for evaluation. As the diagram below shows, this architecture does not solve any of the problems that exist today for getting data from the sensors to the enterprise. Bottlenecks will appear in the WAN because the number of data packets that must be handled is large and constant. As soon as bursts of WAN traffic occur, the enterprise’s ability to gather data for processing becomes increasingly difficult and unpredictable.
A feasible solution today for getting sensor data back to the enterprise requires two key data handling pieces. First, the number of network packets must be reduced. Second, there must be some intelligence in the data path that allows the enterprise side to declare what data it would like to access, so that irrelevant data is not sent. This capability must also be mutable, so that the enterprise side can adjust the set of data it accesses as conditions change.
RTI provides a bridging capability called Routing Service exactly for applications such as these. Routing Service is a logical bridging solution that enables administrators to configure topic-based routes that collect data from publishers on the input side and send data to any subscribers on the output side. And because Routing Service is based on DDS, it enables individualized Quality of Service (QoS) settings on either side of a topic route.
One QoS setting available in Routing Service is batching of data. This capability provides an opportunity to batch, or coalesce, small pieces of data into larger network packets for more efficient transfer over the WAN. Batching has configuration controls that limit the size of the data packet and the amount of time between outbound packets. This gives the user complete control over the bandwidth and latency profiles of the sensor data. The net result is a configurable bandwidth-shaping solution that reduces the number of packets sent over the WAN by a factor of 10x or more.
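The core idea behind batching can be sketched in a few lines. The following is an illustrative Python sketch, not the Routing Service implementation or API: it coalesces small samples into one larger packet and flushes whenever a size limit or a time limit is reached, which is exactly the size/latency trade-off described above. All names and limits here are hypothetical.

```python
import time

class Batcher:
    """Illustrative sketch of batching: coalesce small samples into one
    larger packet, flushing when a size limit or time limit is reached.
    (Hypothetical names; not the Routing Service API.)"""

    def __init__(self, send, max_bytes=1400, max_delay_s=0.1):
        self.send = send                # callback that puts one packet on the wire
        self.max_bytes = max_bytes      # cap on coalesced packet size
        self.max_delay_s = max_delay_s  # cap on latency added by batching
        self.buffer = []
        self.size = 0
        self.oldest = None              # arrival time of the oldest buffered sample

    def write(self, sample: bytes):
        if self.oldest is None:
            self.oldest = time.monotonic()
        self.buffer.append(sample)
        self.size += len(sample)
        # Flush when the packet is full or the oldest sample has waited too long.
        if self.size >= self.max_bytes or \
           time.monotonic() - self.oldest >= self.max_delay_s:
            self.flush()

    def flush(self):
        if self.buffer:
            self.send(b"".join(self.buffer))  # one packet instead of many
        self.buffer, self.size, self.oldest = [], 0, None
```

With a 1400-byte limit, one hundred 14-byte sensor readings become a single packet instead of one hundred, at the cost of a small, bounded amount of extra latency.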
The second capability Routing Service provides is the ability for the receive side of a topic route to express a data filter that limits what data is actually sent through the bridge. For example, if the sensors on the deployed system were temperature sensors, and the receive side was interested only in temperature values “> 100 degrees F”, then this filter expression could be configured on the receiving side of the bridge, and Routing Service would propagate the filter to the original sending side. Data is therefore filtered at the original writer side, which limits how many data packets must be sent over the WAN. This filtering capability is built into DDS, and Routing Service enables its use across any topic route in place.
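The effect of propagating the reader's filter back to the writer can be sketched as follows. This is a simplified, hypothetical illustration (real DDS content filters use a SQL-like expression syntax and are evaluated by the middleware, not application code): the reader declares its interest, and the writer applies the filter before sending, so non-matching samples never cross the WAN.

```python
def make_filter(expression):
    """Hypothetical sketch of a writer-side content filter built from a
    reader-declared expression of the form '<field> <op> <value>'."""
    field, op, threshold = expression.split()
    threshold = float(threshold)
    ops = {">": lambda v: v > threshold,
           "<": lambda v: v < threshold,
           "=": lambda v: v == threshold}
    match = ops[op]
    return lambda sample: match(sample[field])

# Reader side declares interest only in high temperatures; the writer side
# applies the filter, so only matching samples are sent over the WAN.
wants = make_filter("degreesF > 100")
samples = [{"degreesF": t} for t in (72.0, 101.5, 98.6, 212.0)]
sent = [s for s in samples if wants(s)]
```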
The following diagram shows where Routing Service would be used in such an architecture.
This solution presents a very high level architecture with unique benefits to solve the problem of getting sensor data back to the cloud without the need to process raw data at the deployed site. Please contact us via email to request more information regarding RTI Connext DDS and Routing Service capabilities and using these products to address your specific requirements and challenges.
Additional Reading — Using Connext DDS in applications:
- Power Substation Monitoring >> Energy Systems Applications: Secure, High-Reliability and High-Performance Scalable Infrastructure
- Oil & Gas >> RTI and The Industrial Internet for Oil & Gas
- Manufacturing Plant Production Monitoring & Automation >> Automate Manufacturing Flows and Systems: Faster, More Reliable Factory Automation with Connext DDS
- Healthcare Patient Monitoring >> Healthcare Systems Applications: Control, Monitoring, and Integration of Medical Devices and Systems with Connext DDS
Fall is typically when US students in their final year at university start looking for a job for when they graduate in May. (In Spain, on the other hand, students start looking for a job closer to their graduation date.) For some students it will be the first time they apply for a job. You can tell by the flip flops during the on-campus interview session, or the lame “Sorry I am late, I just rolled out of bed” excuse, or the lack of preparation altogether about the company (“Who are you again? What does your company do?”). Luckily these examples are more and more the exception. Many students prepare well and often ask in advance what to expect from the initial interview.
What are we looking for?
Hire better than yourself. In the Macintosh Division, we had a saying, “A players hire A players; B players hire C players”–meaning that great people hire great people. On the other hand, mediocre people hire candidates who are not as good as they are, so they can feel superior to them. (If you start down this slippery slope, you’ll soon end up with Z players; this is called The Bozo Explosion. It is followed by The Layoff.) I have come to believe that we were wrong–A players hire A+ players, not merely A players. It takes self-confidence and self-awareness, but it’s the only way to build a great team. – Guy Kawasaki
It is true that building real-time communication software for the industrial internet of things requires a special kind of software engineer. We could list the detailed low-level or programming skills such an engineer should master, but most often we’d end up hiring the wrong person. The traits that make a trained engineer successful at RTI are at a different level:
- Is the person generally brilliant and talented? Even a talented aerospace or mechanical engineer can excel at RTI building software.
- Is the person curious and eager to learn?
- Does the person have high integrity?
- Is the person a great communicator?
- Will the engineer fit in with the RTI culture? The list of culture attributes is not short: thorough, values quality, flexible, gives and takes feedback, willing to follow process, friendly, etc.
How can you test these skills?
When looking for a wedding cake baker, don’t ask how they would handle a particular situation or what they would do if the cake color is off. Ask them to make a sample cake and taste it.
We’ve found that the best method to evaluate a candidate is to have them work on a project. Internship projects are a great way to work with new grads. We work hard on making the internship program a wonderful experience for the engineers. When we asked our past interns for comments on a recent blog post on creating a great internship culture, the feedback reassured us that we are doing the right things. Having currently employed engineers join an internship program is unlikely. A 45-day bootcamp without a guarantee of a full-time position sounds like an interesting idea, but one we have not tried. As described further on, for already-employed candidates, we typically provide a take-home project. Yes, this sounds like a lot of work. But with a few hours of homework and a follow-up discussion, you can learn a lot about how the candidate will work out every day.
What to expect from the recruitment process?
Resume screening is almost always done by the hiring manager. We’ve seen all kinds of resumes: from one page to small books, in all kinds of languages, with and without cover letters. Focus on the cakes you baked and the experience you got doing so. A list of engineering abbreviations doesn’t tell us much.
The initial interview is typically an hour-long video conference with a virtual whiteboard (often we use a shared Google document). The questions are typically simple programming questions; we’re not asking you to compile the programs. Our focus here is on whether you have a minimum level of technical competence. This interview may be hosted by one or two RTI engineers.
Before the in-depth interview, we provide the candidate with a homework project. It should take 3-4 hours to complete, and can be done over the course of a week in the comfort of your home, with your favorite books, search engine, and development tools. It allows the candidate to demonstrate thoroughness, completeness, coding style, test methodology, and overall development skills. This homework project is a “Baker, bake me a cake” type of project. Based upon the outcome of the homework project, we decide whether to bring the candidate on-site.
The in-depth interview is a combination of on-site and remote interviews, as some of our key interview engineers are remote engineers. The in-depth interviews last somewhere between 6 and 8 hours. We test your programming skills with hands-on coding exercises, and code reviews. We do design questions (e.g., how do you design a distributed database?) on the whiteboard. We also ask non-technical questions to understand how you would work in the team. In some cases, we even ask you to present on your favorite technical topic for 30 minutes. We’ll let you know in advance if that’s the case.
After the in-depth interview, the interview team meets to discuss the pros and cons of the candidate and decides whether to make an offer.
If you are interested in joining the RTI engineering team, check out RTI’s products, technology, culture and career opportunities: http://www.rti.com/company/careers.html
If you are currently developing Connext DDS applications in C++, you have a single API option: Use a “traditional” DDS API that is derived from the DDS interface definition defined in the OMG IDL language.
Thankfully, things are about to change and you have two more options to choose from:
- A new standard DDS C++ API, defined by the OMG DDS-C++ PSM standard (DDS-PSM-Cxx), and
- the C++ API obtained by applying a new OMG IDL to C++11 standard mapping to the IDL DDS interfaces, see Johnny’s post on the subject here.
Curious yet confused? This post will give you some historical context, explain why it makes sense to have both, and where you would use each one.
Context & Rationale.
First the basics: The OMG Interface Definition Language (IDL) provides a way to define interfaces and data-types independently of a specific programming language. This means that you can define an interface in IDL just once and then have it automatically mapped to C, C++, Java, C#/.NET, Ada, Python, Ruby, etc. These transformations are defined in the so-called IDL to Language mappings. See the OMG specifications page for a list of the current ones.
Define the interface once, and get it in all the popular languages automatically. It sounds like a great idea, doesn’t it? Yes, it is a very nice feature and the reason the OMG DDS Specification defined the DDS functionality at a higher level, using UML, and then the DDS APIs using OMG IDL instead of taking the time to define the API separately for each programming language.
However, there is a price to be paid for the convenience of using IDL. Because IDL needs to be mappable to all the programming languages, it provides a “lowest common denominator” and lacks many features that are specific to the different programming languages. When you use a programming language, you want to leverage its features and idioms to make the code as clean, readable, and robust as the language allows. If the idioms are absent, the code seems clunky.
For example, IDL lacks generics, so the IDL to C++, Java, and C# mappings do not use templates/generics even in the places where they would make the most sense. IDL interfaces also cannot have overloaded operations or define operators; the list goes on.
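As a concrete illustration, here is a minimal IDL fragment (the type and interface names are hypothetical, invented for this example). Notice what the language cannot express:

```
// A hypothetical sensor type and interface defined once in IDL.
struct TemperatureReading {
    string sensor_id;
    float  degrees_f;
};

interface SensorService {
    void publish(in TemperatureReading reading);
    // A second 'void publish(in float degrees_f);' would be illegal:
    // IDL has no overloading, and there is no way to declare
    // a generic 'Reading<T>' or define operators on the struct.
};
```

Every language mapping generated from this must live within those same limits, which is why the generated C++, Java, and C# can feel less idiomatic than hand-written code.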
For this reason, the DDS SIG decided that the best approach was to create new specifications that define the DDS APIs in the most popular languages, starting with C++ and Java. It is the same DDS specification (same classes, operations, and behaviors defined by the UML model in the DDS Specification), but mapped directly to each programming language, leveraging the features and idioms natural in that language. Apply some elbow grease, meetings, reviews, and votes, and you get the DDS C++ API and the DDS Java API specifications.
Choosing Your API & Having the Best Possible Experience.
Defining the DDS API directly in a programming language gives the best possible experience to the programmer, as Alex eloquently showed in his recent blog post, “Create a P2P Distributed Application In Under 35 Lines Of C++11 Code!”. So this is typically the best choice.
Why, you may ask, use the DDS API derived from the IDL to C++11 mapping?
It turns out that defining the APIs in IDL is very useful for automated tools and an important capability for component-based programming.
If an application developer uses a component-based programming framework or some other code-generation framework, he or she is isolated from the middleware APIs. The application programmer codes to the “framework APIs,” and the mapping to the “technical middleware layer” is handled by the code generation and the tooling. IDL provides a nice intermediate representation for the framework, which can generate code that is not tied to a programming language and let the IDL to language mapping handle the rest. In this scenario, the IDL to C++11 mapping may be the best approach. The tools can keep using IDL, and yet the resulting code is cleaner, more efficient, and more robust than the code that would be generated from the “classic” IDL to C++ mapping.
There are other situations where using IDL-derived APIs may be advantageous to the application, for example if they are integrating other technologies and frameworks that also use IDL. In this case the IDL to C++11 mapping may also be the best approach.
What about the tried-and-true (classic) IDL to C++ API? This also makes sense for people who do not particularly like, or cannot use, some of the “modern” C++ magic (auto pointers, STL, template metaprogramming, etc.). For example, some compilers do not support these advanced features, or the extra libraries would make the code too complex or expensive to certify.
In the end, it is all about choice and ensuring that you have the best tool for the job. One of the great things about DDS is that it allows applications written in different programming languages to communicate and share information. Stated another way, DDS gives you a way to integrate these applications and make them interoperate. The DDS concepts (DomainParticipant, Topic, DataWriter, DataReader, QoS, Content Filters) are the same across all these options, while the specific language APIs can differ. Therefore, using a specific C++ language binding is a matter of choice and convenience, much like deciding to use Java, C, or C#.
Every morning, double-decker bus after double-decker bus shuttles engineers from all over the Bay Area to the Googleplex, the Facebook compound, the Apple spaceship, or the Yahoo campus. Yahoo infamously ended its work-from-home privilege. Google pulls out all the stops to bring engineers together in the same crowded place and showers them with perks, all to make magic happen.
“It is best to work in small teams, keep them crowded, foster serendipitous connections” – from How Google Works
What do you do when you are not one of these tech behemoths? What do you do when the skillset you seek is highly specialized and scattered all over the world? When you find the masters of building real-time distributed systems, do you require them to move to the Bay Area, one of the most expensive places on earth? Housing prices throughout the larger Bay Area are astronomical. Do you make them slog through the busy Bay Area commute, even if that means some will sit in traffic for over an hour each way?
As a small software company building real-time infrastructure software (“data distribution software”) for the industrial internet of things (IIoT), headquartered in Sunnyvale, California, we opted for a hybrid approach. We have two main development sites: Sunnyvale, CA and Granada, Spain. Engineers have a flexible schedule to arrive at work to avoid the busy commute times. They all have the option to work from home occasionally. Most often, they choose a fixed schedule to work from home: e.g., every Wednesday. We also have remote engineers all over the US: Massachusetts, Florida, New Hampshire, New York, Virginia, and Minnesota. Managing the team, I stress about how to blend this team together as if they were all in the same location and “foster serendipitous connections.” How do you cross the many time zones and yet leave no team behind?
The key to making this work is to establish the right team habits, which build trust and are based upon transparency. We experiment often with new collaboration tools. If these tools do not foster more transparency or build trust, they will fail. Use tools and establish habits which emulate what happens in a traditional office setting, where you can drop into somebody’s office for a chat or for help. We’ve found the following set of tools providing video, group chat, and shared documents, paired with good team habits, to be the most effective.
When you want help debugging a problem, don’t default to sending an email. Email is impersonal and can get snippy (remember the “Have you ever kissed a girl?” email). Instead, contact a remote engineer by video chat (you may still provide the log message via email). We use a lot of video these days, so much that we had to upgrade our internet to handle the many concurrent calls. We use Google Hangout for small group meetings, and installed a Google Chrome Box in several conference rooms. For example, our weekly bug-court meeting is a hangout with folks in Sunnyvale, New Hampshire, San Francisco, Maryland, Massachusetts, and sometimes New York and Granada. For larger meetings, we use Webex and ask folks to turn on their camera when talking. In our large conference room, we have a remote-controlled camera so remote participants can pan the room to the speaker. Yes, using video has its challenges (is my remote office clean? the CPU usage of Google Hangout, etc.), but it is a key habit which helps build trust.
An important practice is virtual (daily) scrums, through hangouts or group chat. We do this in individual development teams and, more recently, across the entire team. In individual development teams, the sync-up meetings are more detailed and cover progress, plans, and specific blockers or needs. Across the entire team, we ask folks to post in a group chatroom the specifics of what they will be working on that day. Initially we experimented with IRC (which failed on some platforms, as it wasn’t as easy to use); now we use Atlassian HipChat. Our rules for using HipChat are simple:
- Rule #1: When you start your day, say a virtual good morning and mention what you will work on in the GoodMorningGang room. This is similar to walking into the office and chatting with your colleagues about what they will be doing that day. No good morning, no more soup for you: you don’t get to be part of HipChat. This rule has brought folks a lot closer. You get a sense of what folks are working on, what their level of stress and frustration is, and you get to celebrate and chit-chat, as if folks were all in the same office.
- Rule #2: move the conversation to the right room: e.g., don’t launch a discussion about platforms in the GoodMorningGang room; take the platform related discussions to the All Things Platforms room.
- Rule #3: memes and animated gifs are allowed and encouraged. All work and no play makes Jack a dull boy. It’s ok to goof off, have some fun and create silly memes or celebrate with a little dance.
Very little of what goes on in the engineering team is a secret in the team or in the company. All our weekly meeting notes, team summaries or discussions are posted in internally available Google Docs. The engineering meeting notes are posted weekly to the entire company. We use Atlassian’s Jira to keep track of what we work on, or what type of bugs people have encountered with our products. This is accessible to the entire company. During weekly tech briefings, we educate each other about cool technical developments in the development and research team.
There are many more habits and tools which help us work more efficiently as a distributed team: from a reasonably flat organization where the people doing the work are encouraged to make the decisions, to transitioning to a better revision control tool (git), to being able to power cycle all the embedded boards in our lab from anywhere in the world.
Making a remote team work efficiently as if they are in the same office is not easy and takes constant adjustments and experimentation with habits and tools. When experimenting with a new habit or tool, always start small. Create pockets of excellence, succeed and then copy to another group. Be patient in the process. The combination of restlessness (aka we need to look for better tools to work as a distributed team) and patience (make them work) is important. Live the behavior.
It’s been a while since the last NI event in Austin, NI Week 2014, where RTI made a splash with its easiest to use and learn DDS offering, RTI DDS Toolkit for LabVIEW. RTI demonstrated its Python DDS bindings working with RTI DDS Toolkit for LabVIEW and Lego Mindstorm NXT robots simulating a closed loop control system. During NI Week 2014, we heard and engaged in many discussions on condition-based maintenance (CBM), especially in energy vertical markets. CBM has become such a hot topic, promising huge amounts of cost savings both for industry and government. Upon returning from Texas, one thing we knew was that we wanted to explore it further.
On November 18th, RTI and the University of South Carolina collaborated during NIDays 2014 in Washington, D.C. We demonstrated RTI’s Connext DDS in action as a condition-based maintenance (CBM) platform.
This time in Washington, D.C., instead of using our robots, the University of South Carolina’s IGB mini test stand was in action. We all had to google “IGB” to figure out what it stands for, “Intermediate Gear Box”. The test stand is able to run from 0-500 rpm and is equipped with thermocouples and accelerometers. It is approximately 50” in length, 16” wide, 16” tall, and weighs almost 150 lbs. A Plexiglas enclosure was built to house the test stand to act as a separation barrier.
During the demonstration, the sensor values on the IGB test bed were transferred to a PC via an NI Data Acquisition Device (DAQ). Then on the PC, RTI’s DDS Toolkit for LabVIEW published the sensor and condition indicator values to the RTI Data Bus. Data analytics and statistics tools, including IBM’s SPSS Modeler, received the sensor values via an RTI DDS C++ subscriber application and analyzed the data. By continuously monitoring the actual conditions, we demonstrated how predictive maintenance beats time-based preventive and failure-based corrective maintenance management methods.
All the folks who visited our booth walked away with handouts describing the demo and lots of food for thought about what they saw. This was a very successful step in creating awareness of what RTI can do to bring real-time to condition-based maintenance. Special thanks to graduate students at the Condition-Based Maintenance Center at the University of South Carolina. Stay tuned for a video of students running the demo on the RTI and NI websites.
For additional information on our collaboration with the University of South Carolina on the CBM efforts, view the Press Release here or post your questions in the comments and we’ll be sure to get back to you!
Web Enabled DDS, The IoT, and The Cloud all made an appearance this Halloween at RTI HQ. Notice that the IoT has a net with things connected and RTI is underlying the whole net of things. Very clever, Stan…
I need to find a better picture of our cloud, Brea. If you could see the signs that she’s holding, it really puts the entire thing over the top. Not that it needed much because that headpiece is amazing!
And if you follow us on twitter, you may have seen this already: Web Enabled DDS, aka Fernando Garcia. Get it? Get it?!
— Rose Wahlin (@ProjectDerby) October 31, 2014
Happy Halloween, everyone!
Visit www.rti.com/careers to learn more and view our list of open positions. If you have any questions about what it’s like to work at RTI, ask in the comments and we’ll be sure to get back to you!
Our Connext DDS Secure product is generating unprecedented interest. We rarely see so much demand for, and curiosity about, a product. It’s especially unusual because the product is still in Beta yet customers are nonetheless planning to ship it asap. I thought I’d answer a few of the most common questions.
First, the new DDS Security standard specifies a security architecture and model. The Beta standard was adopted in March by OMG. We (RTI) chair the finalization committee; it should be final next year. RTI is first with support for the new standard. I’m sure other DDS vendors will also implement it, but nobody else has a product yet.
DDS Security is unique in the middleware space for several reasons. First, it addresses security more completely than other standards. The specification covers authentication, access control, confidentiality, integrity, non-repudiation, and logging. Second, it has a “plug in” design. The spec defines a set of standard plug-in components and an interoperable wire spec. But, you can define your own algorithms for the plugins. Finally, it protects DDS “topics,” not nodes or connections. So, it offers fine-grain control and can adapt to the unique Industrial Internet of Things (IIoT) requirements. It’s the first security standard that targets IIoT device-to-device and device-to-cloud networks rather than human or server-centric architectures.
Perhaps an example will make this more clear. Consider this (very) simple system:
Here, “PMU” represents a sensor (a phasor measurement unit, common in power control). The “CBM” (condition-based maintenance) analysis component is monitoring the system and looking for system health issues. The operation of this system is simple: the PMU sensor writes the state, the control reads that state and writes a set point, and the CBM reads the state and writes alarms. The operator can monitor the system.
In DDS, this system is easily set up as data flow between topics. Of course, DDS specifies data rates, reliability requirements, and more.
To secure this system with Connext DDS Secure, you would create a configuration file that conveyed this:
PMU: State(w)
CBM: State(r), Alarms(w)
Control: State(r), SetPoint(w)
Operator: *(r), SetPoint(w)
This says, simply, that the PMU can only write State. Control can only read State and write SetPoint. CBM can only read State and write Alarms. And the Operator can read anything and write the SetPoint (perhaps to turn off the system). Connext DDS Secure directly enforces these very logical system constraints.
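The enforcement logic is conceptually a permissions lookup per participant, action, and topic. The following Python sketch is purely illustrative (the real product uses signed XML permissions documents and cryptographic plugins, not in-memory dictionaries), but it captures the topic-based access model described above:

```python
# Illustrative sketch of topic-based access control for the example system.
# (Hypothetical representation; not the Connext DDS Secure configuration format.)
PERMISSIONS = {
    "PMU":      {"write": {"State"}},
    "CBM":      {"read": {"State"}, "write": {"Alarms"}},
    "Control":  {"read": {"State"}, "write": {"SetPoint"}},
    "Operator": {"read": {"*"}, "write": {"SetPoint"}},
}

def allowed(participant, action, topic):
    """Return True if the participant may perform the action on the topic."""
    grants = PERMISSIONS.get(participant, {}).get(action, set())
    return topic in grants or "*" in grants
```

For instance, `allowed("PMU", "write", "State")` is permitted, while `allowed("PMU", "write", "SetPoint")` is rejected: a compromised sensor cannot command the system.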
It really is that conceptually simple. Of course, you still have to distribute certificates and the configuration file. But, this “topic based” security is much more intuitive for IIoT systems than designs based on locking out protocols, or isolating nodes, or restricting access based on user roles. Connext DDS Secure acts on the dataflow itself, directly and simply.
Importantly, our Connext DDS Secure product also doesn’t require any application code changes. You configure it & go. Connext DDS Secure offers practical, intuitive protection for existing systems.
Of course, no security protection is foolproof. So almost all practical security systems combine protection (stopping bad things) with detection (finding and isolating breaches). This is the reason, for instance, that your laptop has both a firewall (protection) and a virus scanner (detection). Together, protection and detection provide much more secure systems.
DDS, being a software “DataBus”, also allows easy monitoring. We used that with PNNL to implement a “retrofit” security test for the power grid, replacing an old DNP3 line with a secure DDS line, thus implementing protection. By tapping into the DataBus traffic and meta-traffic flow, we could then add a scripting capability (we have a slick Lua component). Simple scripts could then detect many potential attacks, including compromised systems, man-in-the-middle attacks, etc. See http://blogs.rti.com/2014/06/05/how-pnnl-and-rti-built-a-secure-industrial-control-system-with-connext-dds/
So, DDS lets you combine protection (the standard) with detection (through the DataBus). Both are relatively simple to implement.
Our product is currently in early access release. However, it is already undergoing fire testing. Here is one very extensive test activity:
The USS SECURE cybersecurity test bed is a collaboration between the National Security Agency, Department of Defense Information Assurance Range Quantico, Combat Systems Direction Activity Dam Neck, NSWCDD, NSWC Carderock/Philadelphia, Office of Naval Research, Johns Hopkins University Applied Physics Lab, and Real Time Innovations Inc. USS SECURE’s test bed determines the best combination of cyberdefense technologies to secure a naval combatant without impacting real time deadline scheduled performance requirements.
As you can see, our security product expects some really demanding customers. We can’t tell you much about these tests for obvious reasons. However, I can say that I am very proud of our Connext DDS Secure product. At this, and many other sites, it is proving extremely effective.
RTI Connext DDS Secure will be generally available next year. If you have questions, please ask your local rep…