There’s more to a container than, well, a container.

It seems that virtual machines (VMs) are so last year. A new ship has sailed in and docked – it’s a container ship and it is full of Dockers (as well as Rkts and Mesos). The idea of containerisation is taking hold – the promise of a lighter means of dealing with applications, enabling much higher workload densities and lower storage costs, seems to attract developers, sysadmins and line-of-business people like moths to a flame.

However, as with all relatively young technologies, problems are appearing. Containers are small because they share as much as they can, in particular the underlying operating system (OS) on which they are provisioned. Any call down to the OS is serviced by that single shared copy of the OS; only the application code is held within the container, which is what keeps it small. Therefore, we can call the likes of Docker, Rkt and Mesos ‘application containers’.

That seems to make sense at first glance – why have multiple copies of the same code (i.e. the OS) performing exactly the same function, as is the case with VMs?

The problem arises where an application requires privileged access to one of these shared functions. Because the function is shared across all application containers, privileged access that is used to compromise it compromises every application container relying on it.

This, for what I hope are obvious reasons, is not a good thing.
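
To make the risk more concrete, here is a minimal Python sketch (assuming a Linux host; the capability names and bit positions come from the kernel’s own linux/capability.h). It reads the effective Linux capabilities of the current process from /proc/self/status. Run it inside an ordinary application container and then inside one started with elevated privileges (for example, Docker’s --privileged flag) and the difference in what the process is allowed to do to the single shared kernel becomes very visible.

# Minimal sketch: inspect this process's effective Linux capabilities.
# In a privileged container the set is far wider, and every one of those
# permissions is exercised against the shared host kernel.

def effective_capabilities(status_path="/proc/self/status"):
    """Return the effective capability bitmask (CapEff) as an integer."""
    with open(status_path) as f:
        for line in f:
            if line.startswith("CapEff:"):
                return int(line.split()[1], 16)
    raise RuntimeError("CapEff not found - is this a Linux system?")

# A few well-known capability bit positions (from linux/capability.h).
INTERESTING = {
    "CAP_NET_ADMIN": 12,   # reconfigure the host's networking
    "CAP_SYS_MODULE": 16,  # load modules into the shared kernel
    "CAP_SYS_ADMIN": 21,   # broad administrative control
}

if __name__ == "__main__":
    caps = effective_capabilities()
    for name, bit in INTERESTING.items():
        print(f"{name}: {'granted' if caps & (1 << bit) else 'not granted'}")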

As VMs are completely self-contained systems, with their own OS and everything else held within them, they are less prone to privilege-based security issues. But, back to the initial problem, VMs are large, unwieldy and require a lot of management to ensure that the multiple OS copies held across them are continually patched and upgraded.

If only there were some way to bring the best parts of VMs and application containers together, providing secure, lightweight systems that are easy to manage and highly portable across IT platforms.

Well, luckily, there is.

This is what is called a ‘system container’.

It still has an underlying OS, but it applies a ‘proxy namespace’ between that OS and the application container itself. Through this, any calls to the underlying shared services are captured and can be secured in transit. So an application container that makes a call to a specific port, LUN or network address can have that call captured and managed within the proxy namespace. Any action that could be detrimental can be hived off and directed to a virtual copy of the library, port or LUN concerned, ensuring that it is more effectively sandboxed away from other application containers.
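
The mechanics vary by implementation, but the principle can be sketched conceptually. The ProxyNamespace class below is purely illustrative Python (none of the names belong to any real product): a layer that rewrites requests for contested shared resources to per-container virtual equivalents and passes everything else straight through.

from dataclasses import dataclass, field

@dataclass
class ProxyNamespace:
    """Conceptual stand-in for the layer between a container and the shared OS."""
    container_id: str
    # Hypothetical mapping of requested shared resources to sandboxed copies.
    redirects: dict = field(default_factory=dict)

    def resolve(self, resource: str) -> str:
        """Return the resource the container is actually given access to."""
        if resource in self.redirects:
            target = self.redirects[resource]
            print(f"[{self.container_id}] {resource} -> {target} (virtualised)")
            return target
        print(f"[{self.container_id}] {resource} passed through unchanged")
        return resource

# Two containers ask for the same port and LUN; each quietly receives its own
# isolated copy, so neither can interfere with the other's view of the host.
ns_a = ProxyNamespace("app-a", {"tcp:8080": "tcp:31080", "lun:0": "lun:a-virtual"})
ns_b = ProxyNamespace("app-b", {"tcp:8080": "tcp:31081", "lun:0": "lun:b-virtual"})
ns_a.resolve("tcp:8080")
ns_b.resolve("tcp:8080")
ns_a.resolve("/etc/hosts")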

This also gets around another issue with application containers. On the whole, application containers require that all containers be capable of running not only on the same version of the underlying OS, but also on the same patch level.

A system container can ensure that calls that depend on a specific version of a library, or even of a whole OS, are routed as required. Further, application containers really only perform well with a modern microservice-based architecture – legacy client-server applications (amazing how many live systems we are already regarding as legacy, eh?) struggle to gain the advantages of a container-based architecture, but tend to work well within a VM. System containers get around this issue, as they can hold and manage legacy applications in the same way as an application container – all calls made by the legacy app can be dealt with through the proxy namespace. Therefore, system containers enable a mixed set of workloads to be run across an IT platform.
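
Purely as an illustration of that routing idea (the library name, versions and paths below are hypothetical), the same interception point can hand each container the build of a shared component that matches the version it declares it was built against:

# Conceptual sketch: version-aware routing of shared library calls.
# A legacy container pinned to an older library level and a modern one
# can coexist on the same host because each is routed to a matching build.

LIBRARY_VERSIONS = {
    "libexample": {
        "1.0": "/opt/compat/libexample.so.1.0",
        "2.3": "/usr/lib/libexample.so.2.3",
    },
}

def route_library(container_deps: dict, library: str) -> str:
    """Pick the library build matching the version the container depends on."""
    wanted = container_deps.get(library)
    available = LIBRARY_VERSIONS.get(library, {})
    if wanted in available:
        return available[wanted]
    raise LookupError(f"no compatible build of {library} {wanted} is available")

legacy_app = {"libexample": "1.0"}   # older client-server application
modern_app = {"libexample": "2.3"}   # microservice built against current libraries
print(route_library(legacy_app, "libexample"))
print(route_library(modern_app, "libexample"))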

System containers also enable greater workload mobility. As all dependencies are managed within the container and the proxy namespace, moving a workload from one platform to another, whether within an organisation or across a hybrid cloud environment, is far easier. This also feeds into DevOps – the movement of code from development into test and then into the production environment can be streamlined.

For organisations that are looking at using application containers within their IT strategy, Quocirca strongly recommends that system containerisation be on the shopping list.
