by Richard Hands, Technical Architect
In my last blog post, we looked at the background of Containers. In this piece, we will explore what they can do and their power to deliver modern microservices.
What can they do?
Think of containers on a ship: this is the most commonly used visual analogy, and for good reason. A large number of containers, each potentially holding something different, all sitting stable on a single infrastructure platform, gives a great mental picture to springboard from.
Containers are to Virtual Machines what Virtual Machines were to physical hardware: a new layer of abstraction that lets us get more ‘bang for our buck’. In the beginning we had dedicated hardware, which performed its job well, but to scale a solution you had to buy more hardware, which was difficult and expensive. Along came Virtual Machines, which allowed us to use far more commoditised hardware and scale up within it by adding more VM instances, but this still came at quite a cost.
To spin up a new VM, you have to ensure there is enough capacity remaining on the VM servers, and if you are using subscription or licensed operating systems, you have to factor those costs in as well. Now along come containers. A container holds only the code and libraries needed to run its particular application, relying on the underlying infrastructure of the machine it runs on (be it physical or virtual). We can typically run 10-20x more containers per host than we could by installing the same application directly on VMs and scaling out the number of VMs.
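As a minimal sketch of what "only the code and libraries needed" looks like in practice, a container image can be defined in a few lines (the base image, jar name, and paths here are illustrative, not from a real project):

```dockerfile
# Illustrative only: a slim Java runtime base image, the application jar,
# and nothing else - no full OS install, no unrelated services.
FROM eclipse-temurin:17-jre-alpine
COPY target/app.jar /opt/app/app.jar
CMD ["java", "-jar", "/opt/app/app.jar"]
```

Everything below that, the kernel and hardware, is shared with the host, which is where the density gain over full VMs comes from.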
Orchestration for power
Containers help us solve the problems of today in far more bite-sized chunks than ever before, and they lend themselves perfectly to microservices. Being able to write a microservice, then build a container that holds just that microservice and its supporting runtime, be it Spring Boot, WildFly Swarm, Vert.x, etc., gives us an immense amount of flexibility for development. The problem comes when you want to orchestrate all of those microservices into a cohesive application, and add in scalability, service reliability, and all of the other pieces that a business requires to run successfully. Trying to do all of this by hand would be an enormous challenge.
There is a solution however, and it comes in the form of Kubernetes.
“Kubernetes is an open-source platform designed to automate deploying, scaling, and operating application containers.” (http://kubernetes.io)
Kubernetes gives us a container run environment that allows us to define our run requirements declaratively, rather than imperatively. Again, let's look back at our older physical or VM models for imperative definitions:
“I need to run my application on that server.”
“I need a new server to run my application on, and it must have x memory and y disk.”
This approach always requires justification, and far more thought around high-availability considerations such as failover, because we are specifying exactly what we want our application to run on.
Most modern applications are stateless by design, and containerised applications in particular don't generally need that level of detail about the hardware they run on. They simply don't care, because they are designed to be small, discrete components that work together with others. The declarations look more like:
“I want 10 copies of this container running to ensure that I’ve got sufficient load coverage, and I don’t want more than 2 down at any one time.”
“I want 10 copies of this container running, but I want the capability to increase that if CPU or memory usage exceeds x% for y% of the time, and then return to 10 once load has fallen back below z.”
These declarations are far more about the level of application service that we want to provide, than about hardware, which in a modern commoditised market, is how things should be.
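Those two declarations map almost directly onto Kubernetes objects. As a sketch under stated assumptions (the names, image, and thresholds below are all illustrative, not from a real system): a Deployment asks for 10 replicas, a PodDisruptionBudget caps how many may be down at once, and a HorizontalPodAutoscaler scales up on CPU load and back down again:

```yaml
# Illustrative manifests - object names, image, and numbers are hypothetical.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-service
spec:
  replicas: 10                 # "I want 10 copies of this container running"
  selector:
    matchLabels:
      app: orders-service
  template:
    metadata:
      labels:
        app: orders-service
    spec:
      containers:
      - name: orders-service
        image: example.com/orders-service:1.0
        resources:
          requests:
            cpu: 250m
            memory: 256Mi
---
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: orders-service-pdb
spec:
  maxUnavailable: 2            # "no more than 2 down at any one time"
  selector:
    matchLabels:
      app: orders-service
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: orders-service-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: orders-service
  minReplicas: 10              # return to 10 once load falls back
  maxReplicas: 20
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80   # scale up when average CPU exceeds 80%
```

Notice that nothing here names a server: Kubernetes decides where the copies run, and keeps the declared state true as nodes come and go.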
Kubernetes is the engine that provides this facility, and much more besides. For example, with Kubernetes we can declare that we want x and y helper processes co-located with our application, so that we build composition whilst preserving one application per container.
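A sketch of that composition (the names and images here are hypothetical): a Pod co-locating an application container with a log-shipping helper, each still running a single application, but sharing a volume:

```yaml
# Illustrative Pod: two single-purpose containers, co-located and sharing a volume.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-helper
spec:
  volumes:
  - name: logs
    emptyDir: {}                       # shared scratch space for the two containers
  containers:
  - name: app                          # the application itself
    image: example.com/my-app:1.0
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
  - name: log-shipper                  # the helper process, in its own container
    image: example.com/log-shipper:1.0
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
      readOnly: true
```

The two containers are scheduled, scaled, and retired together, which is exactly the "helper processes co-located with our application" idea.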
Auto-scaling, load balancing, health checks, replication, storage systems, rolling updates: all of these can be managed for our container run environment by Kubernetes. It is a product that deserves far more in-depth reading than I can provide in a simple blog post, so I shall let you go and read at http://kubernetes.io
To conclude, it is evident that containers have already changed the shape of the IT world, and will continue to do so at pace. With public, hybrid, and private cloud computing becoming ‘the norm’ for organisations and even governments, containers will be the shift that helps us break down the barriers between traditional application development and a true microservices world. Container run systems will help us break down the old-school walls of hardware requirements, freeing development to provide true business benefit.