Mission: score an interview with a Silicon Valley company

RTI’s engineering team is based in Sunnyvale, CA. We also have a smaller yet rapidly growing team in Granada, Spain.

Sunnyvale and Granada are about 6,000 miles apart. Traveling between them takes an entire day, and we need to keep a nine-hour time difference in mind when organizing team meetings.

There are also quite a few differences in how people write a resume (or curriculum vitae) and approach getting a job.

This blog post is a summary of my recent presentation to the engineering students at the University of Granada: “How to get hired by a Silicon Valley Company.” Many of the tips below are not just beneficial to new engineering graduates in Spain, but also to new grads in the US.

Your first mission is to be invited for an initial interview.

Your preparation started yesterday

Before you approach the graduation stage and walk to the tune of Elgar’s Pomp and Circumstance March, there are quite a few things you can do that go beyond your regular classes and assignments.

Hiring managers pay attention to the type of internships and projects you worked on. You can show your love for programming through your open source contributions or by the cool demo you built at a hackathon. Your work speaks for itself if I can download and test drive your mobile application from the Apple App Store or Google Play Store.

Beyond the technical projects, it is important to learn and practice English. Our Granada team is designed as an extension of the team in Sunnyvale. As a result, engineers in Spain work on projects together with engineers in California. Being able to express and defend your ideas well, in English, is important. Some of us learned English while watching Battlestar Galactica (the original) or Star Trek. We may even admit to picking up phrases watching The A-Team or Baywatch. Yes, those shows are a few decades old. Read, write and, above all, find ways to speak English often. Go on an adventure through the Erasmus program, and practice your English.

Lastly, start building your professional online profile:

  • Create a LinkedIn profile. Most often, employers will consult your LinkedIn profile even before your resume. Use a picture suitable for work.
  • Create a personal website, with your resume, your projects, and how to contact you. Resumes and LinkedIn profiles are dull. Your website allows you to describe the projects in more depth, and include diagrams and even videos of your demos. Consider it the illustrated addendum to your resume.
  • Share your thoughts on a blog, or on websites such as Medium.
  • Contributions to GitHub or Stack Overflow speak for themselves. You can start by adding your school assignments to your GitHub profile. However, hiring managers will look for contributions beyond the things you had to do to get a good grade.
  • Publish your applications to the Apple App Store or Google Play Store. I love to download a candidate’s applications and try them out. It takes time, effort and even guts to create a working application and share it publicly.
  • Manage your social profile carefully. Future employers may look at your Twitter rants or Facebook antics.

Drop the Europass style of resume

There are plenty of websites that cover the basics of writing a good resume: keep it to 1–2 pages and follow a simple structure (objective, education, experience and projects, skills and qualifications, accomplishments).

Here are a few Do’s and Don’ts, specifically for international candidates:

  • Write your resume in English. Make sure there are no typos. Use online services, such as the Hemingway App or Google Translate, to improve your work.
  • Focus on what you learned or did on a project. Do not just list project names, leaving the reader to guess what you did.
  • Add hyperlinks where the resume screener can get more details. And make sure the hyperlinks work.
  • Add your grades in easy-to-understand terms; e.g., specify that you graduated first in class with 92%, rather than 6.1/7. I do get confused when I see two uncorrelated grades, e.g., 3.2/4 and 8.7/10.
  • Read other US resumes to learn the lingo. A hiring manager might check whether you took a class in data structures and algorithms; at your university, that may have been covered in Programación II.
  • Customize your resume for the job.
  • Do not create a cute resume design. No backgrounds or design touches (unless you are applying for a design job).
  • Drop the Europass resume format; i.e., do not include a picture, date of birth or multiple contact addresses. For an engineering position, I do not care about your driver’s license information. Do not use the standardized table to indicate your proficiency in various languages. Rather than rating your proficiency in German as B2, state, “Conversational German.”
  • Do not use long lists of keywords, technologies or acronyms.
  • A pet peeve of mine: do not list Word or Excel unless you actually developed add-ons for those applications. Similarly, only list Windows if you developed against the Windows APIs.

A cover letter allows you to make a great first impression

Before you submit your resume, craft a cover letter. Although most companies do not require one, I recommend creating it, as it allows you to introduce yourself in your own words. It is your first impression.

A short and well-crafted introduction letter allows you to make a more personal connection. Your intro paragraph should list the job you are applying for and why you are excited about the job and the company. Next, describe in three points why you are a great fit. Describe your successes. Do not repeat your resume. Close by asking for the interview.

You probably read this blog post because you are ready to contact RTI for a job. Let me make it easy: go to the RTI Career Page.

Good luck.

Fog Computing: IT Compute Stacks meet Open Architecture Control

Fog computing is getting more popular and is breaking ground as a concept for deploying the Industrial IoT. Fog computing is defined by the OpenFog Consortium as “a system-level horizontal architecture that distributes resources and services of computing, storage, control and networking anywhere along the continuum from Cloud to Things.” Looking further into the definition, the purpose is to provide low-latency, near-edge data management and compute resources to support autonomy and contextually-aware, intelligent systems.

In particular, fog computing facilitates an open, internet-based architecture for peer-to-peer, scalable compute systems that support edge analytics and local monitoring and control. It’s this latter application to control that I think is particularly interesting. Control systems have been in place for decades across multiple industries, and recently some of these industries have moved to create interoperable, open control architectures. In this blog, we’ll take a look at some existing open architecture frameworks that bear a resemblance to fog computing, and at how fog computing and these framework initiatives could benefit from cross-pollination.

Open Architecture Control

Back in 2004, the Navy started developing its Navy Open Architecture. In an effort to reduce costs and increase speed and flexibility in system procurement, the DoD pushed industry to establish and use open architectures. The purpose was to make it simpler and cheaper to integrate systems by clearly defining the infrastructure software and electronics that “glue” the subsystems or systems together. The Navy selected DDS as its publish-subscribe standard for moving data in real time across its software backplane (Figure 1 below).


Figure 1. The Navy Open Architecture functional overview. Distribution and adaptation middleware, at the center, integrates distributed software applications.

Fast forward to today, and the OpenFog Consortium’s reference architecture looks very much like a modern, IT-based version of what the Navy put together back in 2004 for open architecture control. Given that the Navy Open Architecture is deployed and running successfully across multiple ships, we can feel confident that fog computing as an architectural pattern makes sense for real-world systems. We can also likely benefit from the lessons learned in the development and deployment of the Navy’s architecture.

OpenFMB

The Open Field Message Bus (OpenFMB) is a more recent framework standard for edge intelligence and distributed control in smart power-grid applications. It is being developed by the SGIP (Smart Grid Interoperability Panel). Energy utilities are looking for ways to create more efficient and resilient electricity delivery systems that take advantage of clean energy and high-tech solutions.

Instead of large, centralized power plants burning fossil fuels or using nuclear power to drive spinning turbines and generators, Distributed Energy Resources (DERs) have emerged as greener, local (at the edge of the power grid) alternatives that do not have to transmit electricity over long distances. DERs are typically clean energy solutions (solar, wind, hydro, geothermal) that provide for local generation, storage and consumption of electricity. But DERs are intermittent, and they need to be managed and controlled locally, as opposed to centrally, which is all the current power grid supports.

Distributed intelligence and edge control is the solution. The OpenFMB framework is being deployed and proven in smart grid testbeds and field systems. Looking at the OpenFMB architecture (Figure 2 below), you can see the concept of a software integration bus clearly illustrated.


Figure 2. OpenFMB architecture integrates subsystems and applications through a central, real-time publish-subscribe bus.

Like the Navy Open Architecture, the OpenFMB distributed intelligence architecture looks very much like a fog computing environment. Since OpenFMB is still under development, I would bet that the OpenFog Consortium and the OpenFMB project team would benefit from collaborating.

OpenICE

Patient monitoring, particularly in intensive care units and emergency rooms, is a challenging process. There can be well over a dozen devices attached to a patient, and none of them interoperate. To integrate the data needed to make intelligent decisions about the welfare and safety of the patient, someone has to read the front panel of each device and do “sensor fusion” in their head or verbally with another person.

OpenICE, the Open Source Integrated Clinical Environment, was created by the healthcare IT community to provide an open architecture framework that supports medical device interoperability and intelligent medical application development. OpenICE (Figure 3 below) provides a central databus to integrate software applications and medical devices.


Figure 3. The OpenICE distributed compute architecture, with DDS-based databus, facilitates medical device and software application integration.

Again, the OpenICE architecture supports distributed, local monitoring, integration and control and looks very much like a fog architecture.

And now Open Process Automation

More recently, ExxonMobil and other process automation customers have gathered via the Open Process Automation Forum to begin defining an open architecture process automation framework. If you look at the various refineries run by ExxonMobil, you’ll find distributed control systems from multiple vendors. Each major provider of process automation or distributed control systems has its own protocols, management interfaces and application development ecosystem.

In this walled-garden environment, integrating the latest and greatest subsystem, sensor or device is much more challenging. Integration costs are higher, device manufacturers have to support multiple protocols, and software application development has to be targeted at each ecosystem. The opportunity for the Open Process Automation Forum is to develop a single, IIoT-based architecture that will foster innovation and streamline integration.

Looking at the ExxonMobil diagram below, we find, again, an architecture centered around an integration bus, which they call a real-time service bus. Its purpose is to provide an open-architecture integration bus for software applications and devices.


Figure 4. ExxonMobil’s vision of an open process automation architecture centered around a real-time service bus.

Again, we see a very similar architecture to what is being developed in the IIoT as fog computing.

The Opportunity

Each of these open architecture initiatives is looking to apply modern, IIoT techniques, technologies and standards to their particular monitoring, analysis and control challenges. The benefits are fostering innovation with an open ecosystem and streamlining integration with an open architecture.

In each case, a central element of the architecture is a software integration bus (in many cases a DDS databus) that acts as the software backplane facilitating distributed control, monitoring and analysis. Each group is also addressing (or needs to address) the remaining aspects of a functional fog computing architecture: end-to-end security, system management and provisioning, distributed data management, and more. They have the opportunity to take advantage of the other capabilities of the Industrial IoT beyond control.
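To make the databus idea concrete, here is a minimal sketch of what publishing onto such a software backplane can look like, using the standard DDS C++ API as implemented by products like Connext DDS. The “SensorReading” type, the topic name and the QoS values are hypothetical placeholders for illustration, not taken from any of the architectures above:

    // Hypothetical sketch: SensorReading would be generated from an IDL file
    // by rtiddsgen; the type, topic name and QoS values are illustrative only.
    #include <dds/dds.hpp>
    #include "SensorReading.hpp"

    int main() {
        // Join the databus. Integration happens entirely through the domain,
        // the topic name, the data type and the QoS contract; there is no
        // broker or central server to configure.
        dds::domain::DomainParticipant participant(0);
        dds::topic::Topic<SensorReading> topic(participant, "SensorReadings");

        // QoS expresses the contract on the bus: reliable delivery with a
        // 100 ms deadline, the kind of guarantee a control loop cares about.
        dds::pub::qos::DataWriterQos qos;
        qos << dds::core::policy::Reliability::Reliable()
            << dds::core::policy::Deadline(dds::core::Duration::from_millisecs(100));

        dds::pub::DataWriter<SensorReading> writer(
            dds::pub::Publisher(participant), topic, qos);

        SensorReading reading;  // fill in fields defined in the IDL...
        writer.write(reading);  // ...and publish; matching subscribers are
                                // discovered automatically, wherever they run.
        return 0;
    }

A subscribing application, possibly from a different vendor, only needs to agree on the topic name, the data type and a compatible QoS. That loose coupling is what makes the bus an integration point rather than a set of point-to-point interfaces.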

We have the opportunity to learn from each effort, cross-pollinate best practices and develop a common architecture that spans multiple industries and application domains. These architectures all seem to have very similar fog computing requirements to me.

Getting Started with Connext DDS, Part Four: From Installation to Hello World, These Videos Have You Covered

Hello World (C++)

I started my career at a defense company in the San Francisco Bay Area, on a project that involved a distributed system with several hundred nodes (sensors, controllers and servers). All these nodes were networked over different physical media, including Ethernet, fiber optics and serial. One of the challenges we faced was ensuring our control systems could operate within their allotted loop times. This meant data had to arrive on time, regardless of whether a node required 10 messages per second or several thousand. We needed a more effective method of communication than point-to-point links or a centralized server.

To address our most extreme cases of receiving data every handful of microseconds, a colleague of mine developed a protocol that allowed any node on the network to publish blocks of data to a fiber-optic network, analogous to distributed shared memory. Each node would read only the messages it needed to compute its control algorithms and ignore all other data. This was around 2009, and little did I know at the time that this was my introduction to the concept of data-centric messaging. It so happens that the Object Management Group (OMG) had already standardized data-centric messaging as the Data Distribution Service (DDS), the latest version of which was approved in April 2015.
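That hand-built “read only what you need” behavior maps naturally onto DDS content filtering. As a rough sketch (the “BlockUpdate” type and its “block_id” field are hypothetical, and a real control application would block on a WaitSet or react in a listener rather than polling), a node can subscribe to a filtered view of a topic and let the middleware discard everything else:

    // Hypothetical sketch: BlockUpdate (with an integer block_id field) would
    // be generated from an IDL file by rtiddsgen.
    #include <iostream>
    #include <dds/dds.hpp>
    #include "BlockUpdate.hpp"

    int main() {
        dds::domain::DomainParticipant participant(0);
        dds::topic::Topic<BlockUpdate> topic(participant, "BlockUpdates");

        // Subscribe only to the blocks this node's control algorithm needs;
        // the middleware filters out all other data.
        dds::topic::ContentFilteredTopic<BlockUpdate> filtered(
            topic, "MyBlocks",
            dds::topic::Filter("block_id >= 10 AND block_id < 20"));

        dds::sub::DataReader<BlockUpdate> reader(
            dds::sub::Subscriber(participant), filtered);

        // Poll once for whatever has arrived (a real loop would wait instead).
        dds::sub::LoanedSamples<BlockUpdate> samples = reader.take();
        for (const auto& sample : samples) {
            if (sample.info().valid()) {
                std::cout << "block " << sample.data().block_id() << std::endl;
            }
        }
        return 0;
    }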

Fast forward a decade: I was recently hired as a product manager at Real-Time Innovations (RTI), the leading DDS vendor. Like most avid technologists ramping up on a new product, I have been eager to get past the “setup” phase so I can start seeing Connext DDS in action. To help me along, my new colleagues shared these Getting Started video tutorials. With these videos, I was able to quickly build sample applications that communicated with each other over DDS. You can check out the Getting Started tutorials for yourself to see how to configure, compile and run HelloWorld examples in Java and C++.
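To give a flavor of where the tutorials end up, here is a rough sketch of a minimal publisher along the lines of the C++ HelloWorld example. The actual tutorial code, and the HelloWorld type that rtiddsgen generates from the example IDL, will differ in the details:

    // Sketch of a HelloWorld publisher; assumes a HelloWorld type with a
    // string field named msg, generated from HelloWorld.idl by rtiddsgen.
    #include <string>
    #include <thread>
    #include <chrono>
    #include <dds/dds.hpp>
    #include "HelloWorld.hpp"

    int main() {
        dds::domain::DomainParticipant participant(0);
        dds::topic::Topic<HelloWorld> topic(participant, "Example HelloWorld");
        dds::pub::DataWriter<HelloWorld> writer(
            dds::pub::Publisher(participant), topic);

        for (int i = 0; i < 10; ++i) {
            HelloWorld sample;
            sample.msg("Hello, World! " + std::to_string(i));
            writer.write(sample);  // one sample per second onto the databus
            std::this_thread::sleep_for(std::chrono::seconds(1));
        }
        return 0;
    }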

Granted, there’s more work for me to do to get defense-grade computers talking over fiber, but here’s why I found these tutorials so helpful: they got me past the beginner’s phase quickly and let me hit the ground running, shortening my learning curve. Check out the tutorials and see for yourself!