Deeper Dives: Q&A with Wind River Principal Technologist Rob Woolley on Deploying Containers in a VxWorks World

We recently launched a web seminar series called Deeper Dives, providing our principal technologists and other Wind River innovators with a platform to discuss foundational and emerging technologies in depth and in detail. If you haven’t checked out Deeper Dives already, you can get started here.

There has been some head-scratching about our commitment to containers for VxWorks. We recently asked Wind River Principal Technologist Rob Woolley why some might consider containers for VxWorks, well, crazy. Here’s what Rob had to say: “From a technical perspective, people associate containers with Linux and the wide range of cloud-native applications that exist out there. We obviously aren't going to run those on VxWorks, so it may seem crazy.” While you could run Linux containers on VxWorks with a hypervisor and VM, Rob sees the real benefits in deployment. “As devices [increasingly] have no choice but to provide updates during their lifecycle, you need a deployment model that is interoperable with existing IT systems and understood.”

This blog post focuses on Deeper Dives episode three, “Deploying Containers in a VxWorks World.” After a brief overview of containers for VxWorks, Rob conducted a Q&A with web seminar attendees. Here are some highlights from the Q&A:

Do VxWorks containers provide additional security?

VxWorks containers really provide a different deployment mechanism, and one of the security aspects of container images is the ability to sign containers. So if you're not already taking advantage of signed executables today, then adding containers to VxWorks would give you an easy way of ensuring that only signed applications are running on your platform.

 

Do VxWorks containers require Wind River Helix Virtualization Platform to run?

Helix Virtualization Platform is a type 1 real-time hypervisor that is particularly suited to mixed-criticality systems, where you might want to run VxWorks alongside a non-safety-critical OS like Linux. Containers could definitely be used in that type of environment, but containers are available without Helix as well. You can use containers on our standard VxWorks offering.

 

How does adopting containers affect real-time operating system software certification?

When you're running real-time processes inside a container, the things that might affect determinism include file system access or additional security checks. We've designed it in such a way that those abstractions, which would otherwise slow things down on, say, a Linux system, do not affect the performance of the RTP. Aside from the things I mentioned, like the initial creation of the container and the startup of the RTP, it should be near-native performance, whether the real-time process is running in a container or running natively.

 

Is it possible to leverage containers to collect metrics or telemetry, either by ingesting log files or scraping metric endpoints from a container or Kubernetes pod without impacting the real-time operating system software or its certification for aviation?

That's a fantastic use case for this, particularly because gathering telemetry might be a background task. And it might be something that you do while the critical parts of the system are running, and it might also leverage certain third-party components like open source software to be able to interoperate with those other systems on the other end. So you might need to pull in Python modules, for example, in order to enable the telemetry gathering and communication to the other end. And you'd want to keep that separate, and hopefully keep it up to date with whatever is changing. Being able to leverage containers to deploy that software to the device to do that task is a fantastic use of this technology.
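To make that concrete, here is a minimal sketch of the kind of background telemetry task that could be packaged into its own container, written against plain POSIX file and socket APIs. The log path, collector address, and port below are hypothetical placeholders rather than anything specific to VxWorks or Wind River tooling.

```c
/* telemetry_agent.c - minimal sketch of a background telemetry task.
 * Uses only POSIX file and socket APIs; the log path and collector
 * address below are hypothetical placeholders. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

#define LOG_PATH       "/tmp/app.log"   /* hypothetical log file to ingest */
#define COLLECTOR_IP   "192.0.2.10"     /* hypothetical collector address  */
#define COLLECTOR_PORT 9000

int main(void)
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    if (sock < 0) { perror("socket"); return 1; }

    struct sockaddr_in dst = { 0 };
    dst.sin_family = AF_INET;
    dst.sin_port   = htons(COLLECTOR_PORT);
    inet_pton(AF_INET, COLLECTOR_IP, &dst.sin_addr);

    FILE *log = fopen(LOG_PATH, "r");
    if (log == NULL) { perror("fopen"); return 1; }

    char line[256];
    for (;;) {
        /* Forward any new log lines, then poll again after a short sleep. */
        while (fgets(line, sizeof(line), log) != NULL) {
            sendto(sock, line, strlen(line), 0,
                   (struct sockaddr *)&dst, sizeof(dst));
        }
        clearerr(log);   /* clear EOF so the next fgets() sees new data */
        sleep(5);
    }
}
```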

 

Can Linux containers run on VxWorks?

When a Linux application runs inside of a container, it's invoking the system calls from a Linux system. We have support for many of the same POSIX APIs on VxWorks, but the way that the system calls are invoked means that you can't run the same binary executable from a Linux system on a VxWorks system. However, you can take your existing source code and recompile it for VxWorks and then deploy that as a VxWorks container. Moreover, container registries are smart enough that when you request a container, they can give you the appropriate container for your architecture or your operating system. So you could actually have a scenario where you've compiled your application twice, and when you go to pull the container, the registry can choose the right one for your machine.

 

How can you make sure a deployed container image will have hard real-time response? Do you provide tools with VXE to measure and assess this?

That's a great question. So within VxWorks, we have a number of tools for people who want to measure the performance of the system and assess whether it's performing within the real-time characteristics for their use case. You can leverage those existing tools to measure the RTPs inside a container, just like you would measure the performance of an RTP running outside a container. And in our own nightly builds, we are making sure to run the performance checks to ensure that adding the containers doesn't slow down the real-time performance of those RTPs.

 

Does VxWorks support signed container images?

So VxWorks has support for secure boot, and if you wanted to run signed containers on your device, you'd of course want to establish a root of trust, use a signed kernel, and then, when pulling down the applications, ensure that your executables are signed as well. There are two methods of doing signed containers. There's simple signing, in which you sign the binary blob and then verify the image when you download the container image from the external site.

 

The other method is to actually share those signatures through the container registry. That latter method is currently still under development and should be fully developed by the community sometime this year in NotaryV2. However, both of those options would be available for people to use on VxWorks.
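To illustrate the "verify what you downloaded" step, here is a rough sketch that checks a downloaded image blob against an expected SHA-256 digest before unpacking it. It assumes an OpenSSL-compatible library is available; the file path and digest are placeholders, and a real signed-container flow would verify a signature over that digest using a key anchored in the device's root of trust.

```c
/* verify_image.c - sketch of checking a downloaded image blob against an
 * expected SHA-256 digest before unpacking. Assumes an OpenSSL-compatible
 * library; the path and digest below are placeholders only. */
#include <openssl/sha.h>
#include <stdio.h>
#include <string.h>

/* Placeholder: in practice this would come from the signed metadata
 * delivered alongside the image. */
static const char *EXPECTED_HEX =
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855";

int main(void)
{
    FILE *f = fopen("/tmp/image.tar", "rb");   /* downloaded image blob */
    if (f == NULL) { perror("fopen"); return 1; }

    SHA256_CTX ctx;
    SHA256_Init(&ctx);

    unsigned char buf[4096];
    size_t n;
    while ((n = fread(buf, 1, sizeof(buf), f)) > 0)
        SHA256_Update(&ctx, buf, n);
    fclose(f);

    unsigned char md[SHA256_DIGEST_LENGTH];
    SHA256_Final(md, &ctx);

    char hex[2 * SHA256_DIGEST_LENGTH + 1];
    for (int i = 0; i < SHA256_DIGEST_LENGTH; i++)
        sprintf(&hex[2 * i], "%02x", md[i]);

    if (strcmp(hex, EXPECTED_HEX) != 0) {
        fprintf(stderr, "digest mismatch, refusing to unpack\n");
        return 1;
    }
    printf("digest OK\n");
    return 0;
}
```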

 

What are the dependencies between applications running within different containers, and how are they resolved in software over-the-air (OTA) update use cases? How might this be different in a Service-Oriented Architecture (SOA)?

With OTA, one of the things to understand is that there are different levels of over-the-air updates. People talk about FOTA, SOTA, and AOTA, where the leading letter stands for firmware, system, or application updates. So for your cell phone, for example, you may be downloading a giant binary blob in the background, and then when it goes to update the cell phone, it's actually upgrading the firmware and the operating system under the hood. The containers are focused primarily on your real-time processes, your user-space applications. So I would compare that to the application over-the-air update part, which would be less like getting an update on your Android smartphone from your service provider and more like downloading apps from the App Store.

For SOA I would just replace the word ‘application’ with ‘services.’ So if you are complementing your device with additional services that you wish to update via container images, then you would use the containers as the transport for delivering those new services to your device, and then have something on your device that coordinates stopping the previous version of the services and starting the new version, ensuring it doesn't happen at an inopportune time.

 

How much of the existing Docker container ecosystem can be leveraged as is? For example, a Redis container?

Many of the containers that are out there in the ecosystem are specifically built for x86 and they'll have Linux binaries inside. So those are not necessarily suitable for a real-time operating system, because they may be much bigger or they may be doing things you wouldn't normally do in something like VxWorks. However, if you wanted to pull those to VxWorks, you could use much of the same source code, recompile it for the architecture of the embedded device you're using, and then deploy it as a VxWorks container. And this could use the same CI/CD pipelines that are used to create the containers today. Once you've set this up, you should be able to deploy it quite easily to your VxWorks devices.

 

Have VxWorks containers been deployed on Department of Defense systems where an authority to operate (ATO) is required? For example, in US Air Force aircraft avionics?

Support for VxWorks containers was recently released, and some of these authorizations to deploy might take some time. I think it's a fantastic use case, and it would fit well within the Department of Defense Platform 1 initiative, and it would certainly be something we'd love to talk with customers more about.

 

How are startup dependencies between containers handled within VxWorks, so that we know which one to start first?

This is really up to the embedded developer to choose the start-up sequence they desire. We have implemented a number of use cases where you do need to ensure that your containers are starting up in a certain order, but traditionally embedded developers want fine-grained control over how they do that, and that's why we have that C API, so that you can put in the various checks and balances to ensure that the containers come up in the order you desire.
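The actual container C API is part of the VxWorks product and isn't reproduced here, but the ordering pattern itself is simple. In the sketch below, container_start() and container_wait_ready() are hypothetical stand-ins for whatever launch and readiness calls your application uses; the point is that the application code owns the sequencing.

```c
/* start_order.c - sketch of sequencing container start-up from application
 * code. container_start() and container_wait_ready() are hypothetical
 * stand-ins for the real VxWorks container C API calls. */
#include <stdio.h>

/* Stubs standing in for the real launch and readiness-check functions. */
static int container_start(const char *name)
{
    printf("starting container %s\n", name);
    return 0;
}

static int container_wait_ready(const char *name, int timeout_ms)
{
    printf("waiting up to %d ms for %s\n", timeout_ms, name);
    return 0;
}

int main(void)
{
    /* Bring up the configuration database container first and wait for it. */
    if (container_start("config-db") != 0 ||
        container_wait_ready("config-db", 5000) != 0) {
        fprintf(stderr, "config-db failed to come up\n");
        return 1;
    }

    /* Only then start the consumers that depend on it. */
    const char *dependents[] = { "telemetry", "control-app" };
    for (unsigned i = 0; i < sizeof(dependents) / sizeof(dependents[0]); i++) {
        if (container_start(dependents[i]) != 0) {
            fprintf(stderr, "%s failed to start\n", dependents[i]);
            return 1;
        }
    }
    return 0;
}
```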

 

Is it already possible to use containers for avionics applications that need to be compliant with the highest design assurance levels, such as DO-178C Level A, Level B, or Level C?

So within that space, and going back to the previous question about Helix, there are two places where you could put containers. You could either put containers in the safety-critical VxWorks guest, or you could put those containers in a non-safety critical Linux or VxWorks guest. If you wanted to have the containers in the safety-critical guest, the VxWorks container implementation would first need to go through certification, and it's something that we've kept in mind when designing the VxWorks container source code. And that is the next step that would need to be taken in order to run it within the safety-critical portion of the guest.

 

What is the minimal size of an OCI image containing a small app or one using a network stack?

It really is limited only by the size of the RTP itself, because the container images do not contain VxWorks or the in-kernel network stack. So your application and whatever libraries it needs would be the bare minimum that you could get away with. I haven't tried a simple Hello World, but the complicated ROS 2 example running under VxWorks, which links against hundreds of libraries and contains lots and lots of templated C++ code, is something I would consider an upper bound. The “Hello World” example should be much smaller by comparison.

 

What level of OS platform understanding does an app developer need to have to build a VxWorks app in a container?

If someone is familiar with the POSIX APIs, they can use the VxWorks SDKs to build their application, and then package it up and put it in a container image. And then, when they deploy it to VxWorks, they really don't need to know anything specific about VxWorks as long as they program to that POSIX API or the internal VxWorks APIs. It really enables people who have never used an RTOS or VxWorks before to start doing development.
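As a simple illustration, the kind of source an app developer would hand to the VxWorks SDK could look like the sketch below: plain POSIX threads and standard I/O, with nothing VxWorks-specific in it, so the same file also builds on a Linux development host before being packaged into a container image.

```c
/* sensor_rtp.c - plain POSIX code with nothing VxWorks-specific in it.
 * Built with the VxWorks SDK it becomes an RTP that can be packaged into
 * a container image; the same source compiles on a Linux test host. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static void *worker(void *arg)
{
    const char *name = arg;
    for (int i = 0; i < 5; i++) {
        printf("[%s] sample %d\n", name, i);
        sleep(1);            /* stand-in for periodic sensor work */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, "temperature");
    pthread_create(&t2, NULL, worker, "pressure");
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}
```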

 

Do we need more computational resources in order to run containers compared to legacy software?

No, the computational resources are fairly minimal. The one thing you might need to keep in mind is storage or memory use because if you're partitioning up the system, then you would want to make sure that you have the storage to keep multiple copies of the container images or run separate instances of the RTPs in memory. And this can be alleviated if you're using the same shared libraries, for example.

But one of the problems that quickly emerges, even on the Linux side, is that when you give the ability to run various applications from different sources on an operating system, it's very rare that they use the same components. So you just have to be mindful of that. And if you want to minimize the memory footprint, then you would want to coordinate people to use the same versions of the executables and the libraries. But as far as compute power is concerned, we've had it running on a number of very small, low-compute devices, and there isn't much overhead at all.

 

Can you explain the VxWorks licensing in a containerized environment?

The licensing remains the same as before, because there's still the same number of instances of VxWorks running. What's really happening is just multiple applications being deployed to the VxWorks device.

 

Will the real-time behavior of VxWorks be compromised or impacted when running containers?

So the real-time behavior of the RTPs should remain the same as before. If you are doing heavy disk I/O while you're trying to perform some real-time task, that would affect the real-time behavior of the system with containers and without containers. But as far as the container technology itself is concerned, it shouldn't have any noticeable impact on the determinism of the system.

 

Is it possible to limit resources utilized by the application running within the container, for example, memory and CPU?

Yes, so within VxWorks we do have mechanisms to restrict the amount of memory and CPU power used by real-time processes. Traditionally those have been in our VxWorks-certified product line, because our non-certified customers haven't had the same requirements as some of our avionics-type customers. However, that would be a fantastic use case to explore for the standard VxWorks product line as well.

 

Are an RTP and an OCI image the same thing?

A real-time process (RTP) is the VxWorks term for a user-space application that uses the MMU to separate its virtual memory from the rest of the system. An OCI image is a specification by the Open Container Initiative, and it's essentially just a tarball of files. The RTPs would be the executables that go inside of the OCI container image.

 

What interprocess communication mechanisms are available for apps running within different containers to communicate with each other?

You can use any interprocess communication mechanism you'd like. The trick is just to make sure that you map it into the container so that it's able to communicate with things running in other containers or outside the containers. One interesting use case might be to use an internal virtual switch so that the containers can communicate not only over IPC but also over the network.
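As one concrete POSIX option, assuming POSIX message queues are enabled in your configuration and the queue name is visible to both containers, the producer and consumer sides look like ordinary POSIX code. The queue name and sizes below are hypothetical.

```c
/* mq_ipc.c - sketch of container-to-container IPC over a POSIX message
 * queue. Assumes POSIX message queues are enabled and the queue name is
 * mapped into both containers; "/telemetry" and the sizes are placeholders.
 * Run with "send <msg>" in one container and no arguments in the other. */
#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>

#define QUEUE_NAME "/telemetry"
#define MSG_MAX    128

int main(int argc, char *argv[])
{
    struct mq_attr attr = { .mq_maxmsg = 8, .mq_msgsize = MSG_MAX };
    mqd_t q = mq_open(QUEUE_NAME, O_CREAT | O_RDWR, 0644, &attr);
    if (q == (mqd_t)-1) { perror("mq_open"); return 1; }

    if (argc >= 3 && strcmp(argv[1], "send") == 0) {
        /* Producer side: one container posts a message. */
        if (mq_send(q, argv[2], strlen(argv[2]) + 1, 0) != 0)
            perror("mq_send");
    } else {
        /* Consumer side: the other container blocks until a message arrives. */
        char buf[MSG_MAX];
        if (mq_receive(q, buf, sizeof(buf), NULL) >= 0)
            printf("received: %s\n", buf);
        else
            perror("mq_receive");
    }

    mq_close(q);
    return 0;
}
```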

 

What is the processing cost of starting up a container, and how does it impact startup time?

There are two parts to that. There's the unpacking that I talked about before, but if you are starting a container that's already present and you've unpacked it prior to actually running it, the actual time to execute the container is negligible, because you're really just setting up the namespace and telling VxWorks to launch the RTP within that namespace.

 

You mentioned that you use Buildah. Are there other tools that you use while you're creating these containers?

Yes, so you should be able to use any of the tools that would normally create an OCI-compliant container. Buildah is nice because it's a separate tool that doesn't require the Docker daemon running in the background, and it allows you more flexibility when you're manipulating the containers, saving them to disk, and pushing them up to Docker Hub. But if you prefer to use Docker, you can certainly use Docker to create the same scratch container, put the files inside, and then use docker export to save it to disk so that you can include it on the storage of your device.

 

Can you label containers so that you can group them together in subsets and manage them separately?

I don't think we've tried that yet. I can't think of any reason why you couldn't do that. We already support the labeling feature. So all that would be needed would be to just look up that label and make sure to group the containers based on that label.

 

Can VxWorks containers run without root rights?

In the demonstration, there was no user management. If you added user management to the system, then you should be able to launch the RTPs as that separate user. The portion that runs in the background as the container engine actually runs within kernel space, but the tools that you would use to launch the containers could be run without root privileges on the device. It's a good question, though.

 

You mentioned Docker, but what role does Kubernetes play here?

Kubernetes is the technology that lets you orchestrate workloads across a cluster. We have had VxWorks connect into Kubernetes clusters, interact with the rest of the cluster, and provide telemetry. One piece that would be missing is how people want those devices in the cluster to talk to other things in the cluster.

So if you wanted the VxWorks devices to be able to talk over the overlay network to other things on the cluster, that would be a use case we'd have to explore, because you're talking about using IP over IP at that point. If you simply wanted the VxWorks device to talk to nodes on the Kubernetes cluster, you could then use this technology to update container images by implementing a custom resource definition within Kubernetes, which would let you push container images to the VxWorks devices.

 

For additional insights, view our Deeper Dives web seminar series here.