Designing information models for distributed applications

Prior to starting at RTI, I spent ten-plus years developing applications and technologies for the commercial enterprise. While these technologies differed, they all relied on manipulating an underlying data model. From collecting the data to cleansing and analyzing it, enterprise systems are fundamentally information management systems. It was all about the data.

So, when I started supporting customers at RTI who were developing distributed edge applications with real-time needs, I found some expected, and some unexpected, differences in how developers built applications for the “edge” compared to the enterprise. While the (embedded) distributed application architects paid closer attention to the “physics” (devices were low-powered, networks more complex and sometimes ad hoc, and microseconds and memory mattered), the relative lack of attention they paid to the information model was striking.

(Reference: Read a paper that came out of this insight: “How Does Your Real-Time Data Look?”)

Technologists in edge environments spend significant time tuning the network links, but they often miss major opportunities to make optimal use of available bandwidth by not focusing (enough) on data modeling. I recently saw a “mission and performance critical” data model that was put on the wire with over six levels of nesting in its data structure…
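To make the cost of deep nesting concrete, here is a minimal Python sketch (the field names are invented for illustration, not taken from any real data model). For self-describing encodings such as JSON, every level of nesting adds structural overhead on the wire; for binary encodings the cost shows up instead as marshalling complexity and opaque payloads that are harder for the middleware to key or filter on.

```python
import json

# Hypothetical sensor reading modeled two ways (field names invented
# for illustration only).

# Deeply nested, as in the "six levels" example from the text:
nested = {
    "platform": {
        "subsystem": {
            "engine": {
                "sensors": {
                    "temperature": {
                        "value_c": 88.5,
                        "timestamp": 1700000000,
                    }
                }
            }
        }
    }
}

# The same information, flattened to a single level:
flat = {
    "engine_temp_c": 88.5,
    "engine_temp_ts": 1700000000,
}

nested_bytes = len(json.dumps(nested))
flat_bytes = len(json.dumps(flat))
print(nested_bytes, flat_bytes)  # the flat form is considerably smaller
```

The flat form also exposes each field directly, so a middleware can filter or aggregate on it without unpacking five enclosing structures first.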

This relative lack of attention to the data model, while regrettable, is easier to understand if we consider that until recently, edge devices were weak (unable to collect or process much information), few (not choking the network, though bandwidth is always an issue), or not (richly) context-aware (unable to take advantage of other information available on the network). Since relatively few bits were published over a “functionally light” middleware, the information model was not very consequential.

However, as edge applications become more complex — from Command and Control to Monitoring — the science of tuning the information model for a distributed application can benefit from the advances made in building information models for enterprise applications. With devices and networks becoming more powerful, and with middleware such as RTI Data Distribution Service putting more intelligence on the network, the bottleneck is increasingly not the hardware, nor the capabilities of a high-performance, functionally rich middleware, but the inefficiencies introduced by a poorly designed data model.

(Reference: Read this paper on what you can expect a modern high-performance middleware to do: “Is DDS for You?”)

What is required is for the architects of high-performance distributed applications to sit down with network middleware experts while designing the information model, to ensure that it fully leverages the middleware’s capabilities, such as local processing (Content-Filtered Topics), message aggregation, time-based filtering, and sparse types that send only the updated fields on the wire…
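Two of the capabilities above, content filtering and sparse updates, can be sketched middleware-agnostically in a few lines of Python. The classes and names below are invented for illustration; DDS provides these natively (for example via Content-Filtered Topics), so this is only a sketch of the idea, not the actual API.

```python
from typing import Any, Callable, Dict, List

class FilteredSubscriber:
    """Delivers only samples matching a predicate (content filtering).

    Hypothetical class for illustration; in DDS this role is played
    by a Content-Filtered Topic evaluated by the middleware.
    """
    def __init__(self, predicate: Callable[[Dict[str, Any]], bool]):
        self.predicate = predicate
        self.received: List[Dict[str, Any]] = []

    def on_sample(self, sample: Dict[str, Any]) -> None:
        if self.predicate(sample):
            self.received.append(sample)

def delta_update(last: Dict[str, Any], current: Dict[str, Any]) -> Dict[str, Any]:
    """Sparse update: keep only the fields whose values changed,
    so only those go on the wire."""
    return {k: v for k, v in current.items() if last.get(k) != v}

# Usage: a subscriber that only cares about overheated engines.
sub = FilteredSubscriber(lambda s: s.get("engine_temp_c", 0) > 100)
sub.on_sample({"engine_temp_c": 85.0, "rpm": 2000})
sub.on_sample({"engine_temp_c": 112.0, "rpm": 2100})
print(len(sub.received))  # only the hot sample was delivered

prev = {"engine_temp_c": 85.0, "rpm": 2000}
cur = {"engine_temp_c": 85.0, "rpm": 2100}
print(delta_update(prev, cur))  # only the changed rpm field remains
```

The point of designing the model with these capabilities in mind is that filtering and delta computation happen in (or close to) the middleware, so uninteresting and unchanged data never consumes bandwidth.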
