Containers: Power & Scale

by Richard Hands, Technical Architect

In my last blog post, we looked at the background of Containers. In this piece, we will explore what they can do and their power to deliver modern microservices.

What can they do?

Think of containers on a ship.  This is the most commonly used visual analogy for software containers. A large number of containers, each potentially holding something different, but all sitting nice and stable on a single infrastructure platform, gives a great mental picture to springboard from.

Containers are to Virtual Machines what Virtual Machines were to straight physical hardware.  They are a new layer of abstraction, which allows us to get more ‘bang for our buck’.  In the beginning, we had dedicated hardware, which performed its job well, but in order to scale your solution you had to buy more hardware. This was difficult and expensive. Along came Virtual Machines, which allowed us to utilise much more commoditised hardware and scale up within it by adding more VM instances; but again, this still came at quite a cost.

To spin up a new VM, you have to ensure that there is enough capacity remaining on the VM servers, and if you are using subscription or licensed operating systems, you must factor in those costs too.  Now along come containers. A container holds only the pieces of code and libraries necessary to run its particular application, and relies on the underlying infrastructure of the machine it is running on (be it physical or virtual).  We can typically run 10-20x more containers per host than if we were to try putting the same application directly on the VM and scaling up by adding more VMs.
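To make that concrete, here is a hedged sketch of a minimal container image definition for a small Python service (the file contents, base image and file names are illustrative, not from the original post):

```dockerfile
# A container image holds only the runtime, libraries and code its
# application needs -- nothing else from the host.
FROM python:3.11-slim

WORKDIR /app

# Install only the libraries this one application requires
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Add the application code itself
COPY app.py .

# The single process this container exists to run
CMD ["python", "app.py"]
```

Everything else – kernel, networking, storage – comes from the underlying host, which is why so many containers fit on one machine.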

Orchestration for power

Containers help us solve the problems of today in far more bite-sized chunks than ever before.  They lend themselves perfectly to microservices.  Being able to write a microservice, and then build a container that holds just that microservice and its supporting framework, be it Spring Boot, WildFly Swarm, Vert.x, etc., gives us an immense amount of flexibility for development.  The problem comes when you want to orchestrate all of the microservices into a cohesive application, and add in scalability, service reliability, and all of the other pieces that a business requires to run successfully.  Trying to do all of this by hand would be a practically impossible challenge.

There is a solution however, and it comes in the form of Kubernetes.

“Kubernetes is an open-source platform designed to automate deploying, scaling, and operating application containers.” (kubernetes.io)

Kubernetes gives us a container runtime environment that allows us to define our application’s run requirements declaratively, rather than imperatively.  Again, let’s look back to our older physical or VM models for the imperative definition:

“I need to run my application on that server.”

“I need a new server to run my application on, and it must have x memory and y disk.”

This approach always requires justification, and far more thought around HA considerations such as failover, because we are specifying exactly what we want our application to run on.

Most modern applications are stateless by design, and containers certainly are; they generally don’t require that level of detail about the hardware they are running on. They simply don’t care, as they’re designed to be small, discrete components which work together with others.  The declarations look more like:

“I want 10 copies of this container running to ensure that I’ve got sufficient load coverage, and I don’t want more than 2 down at any one time.”

“I want 10 copies of this container running, but I want the capability to increase that if CPU or memory usage exceeds x% for y% of the time, and then return to 10 once load has fallen back below z%.”
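As a hedged sketch of how the first declaration might be written for Kubernetes (all names, labels and image references here are illustrative), a Deployment manifest captures it almost verbatim:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-microservice            # illustrative name
spec:
  replicas: 10                     # "I want 10 copies running"
  strategy:
    rollingUpdate:
      maxUnavailable: 2            # "no more than 2 down at any one time"
  selector:
    matchLabels:
      app: my-microservice
  template:
    metadata:
      labels:
        app: my-microservice
    spec:
      containers:
        - name: my-microservice
          image: registry.example.com/my-microservice:1.0
```

The second declaration maps just as naturally onto a HorizontalPodAutoscaler, which adjusts the replica count against CPU or memory thresholds.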

These declarations are far more about the level of application service that we want to provide, than about hardware, which in a modern commoditised market, is how things should be.

Kubernetes is the engine which provides this facility, but also so much more. For example, with Kubernetes we can declare that we want x and y helper processes co-located with our application, so that we build composition whilst preserving one application per container.
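A hedged sketch of that co-location (container names and images are invented for illustration): a single Pod can declare the application plus its helper processes, while each container still holds exactly one application:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-helpers           # illustrative name
spec:
  containers:
    - name: app                    # the application itself
      image: registry.example.com/my-microservice:1.0
    - name: log-shipper            # helper process x, co-located
      image: registry.example.com/log-shipper:1.0
    - name: metrics-agent          # helper process y, co-located
      image: registry.example.com/metrics-agent:1.0
  # All three containers share the Pod's network namespace and can
  # share volumes, yet each remains a single-purpose container.
```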

Auto-scaling, load balancing, health checks, replication, storage systems, updates: all of these things can be managed for our container runtime environment by Kubernetes.  Overall, it is a product that requires far more in-depth reading than I can provide in a simple blog post, so I shall let you go and read the official Kubernetes documentation.

Last thoughts

To conclude, it is evident that containers have already changed the shape of the IT world, and will continue to do so at an exponential pace.  With public, hybrid, and private cloud computing becoming ‘the norm’ for organisations and even governments, containers will be the shift which helps us break down the barriers between traditional application development and a true microservices world. Container runtime systems will help us break down the old-school walls of hardware requirements, thus freeing development to provide true business benefit.

Follow Richard Hands on Twitter to keep up to date with his latest thoughts.

Lean Tea – A new Agile Retrospective meeting format

Many of our readers and subscribers – especially those involved in Agile software development – will be familiar with Lean Coffee™ meetings, where participants get together, add potential agenda items to a Kanban board, and then discuss these items in turn, starting with the one with the most votes. This is a great meeting format, but if you have a ready-made batch of discussion points or important issues that need to be dealt with, why not cut to the chase? To that end I devised the ‘Lean Coffee with Cream’; but the introduction of cream – i.e. pre-prepared agenda items – breaks the trademarked Lean Coffee model, and I’m a typically British tea drinker too, so I’ve decided to rename it the Lean Tea meeting.

From Lean workshops to Lean Tea meetings

I am currently serving as a Scrum Master for one of Sopra Steria’s Government sector clients, and my team and I strive for continuous improvement. To that end we inspect and adapt our approach to software delivery, and we make good use of our fortnightly sprint retrospectives, mixing up the meeting format from time to time to keep things fresh, and following up on agreed actions. But a couple of months ago we decided to use the time set aside for our regular retrospective meetings and hold workshops on Lean Software Development instead.

In our first workshop we came together as a team to learn about the Toyota 3M model and the three enemies of Lean: Muda (waste), Muri (overburden) and Mura (unevenness). We then took some time in between workshops as individuals (prompted by email) to think about these three enemies of Lean (and the seven types of Muda in Lean Software Development) and how they apply to what we do. And in our next workshop we shared and discussed our examples of Muda, Muri and Mura, and I documented them all in our team’s online knowledge base.

There were some obvious quick wins, which we dealt with – wastes that had not been mentioned in our regular retrospectives – but we were left with a long list of unprioritised wastes. So I turned to Lean Tea for our next retrospective, where instead of handing out post-it notes and pens to my teammates, I just handed out pens, because I had already added all our identified wastes to the meeting’s three-column Kanban board under “To Discuss”. We had our agenda. Now we just had to vote on those wastes that really mattered to us and were slowing us down.

Our first Lean Tea meeting was a great success: we identified and dealt with our two main wastes and our velocity has increased. I have since used the format again in a retrospective where I asked my team to vote on the five Scrum values they thought we were best at; then I reversed the voting order and we discussed those values with the fewest votes and how we could improve on them. The same could be done with the twelve agile principles or with outstanding action items (in priority order) from previous retrospectives.

So the next time you are looking for a new agile retrospective format why not try a Lean Tea? And consider having an actual cuppa while you’re doing it as tea is said to be good for the brain!

Ready Steady Cook

by Alistair Steele and Gregg Wighton, Software Engineer Graduates

Two members of our February 2017 graduate cohort discuss their recent Graduate Project, which used Chef to solve the problem of setting up a machine (laptop) that adheres to company standards. Their aim was to build a working example of DevOps and learn more about that sphere. This post covers the problem they sought to address using Chef, what DevOps is, and the experience they gained from their Graduate Project.

The Problem

During a new starter’s induction day, a considerable amount of time and effort is spent on setting up a Development machine (laptop). Tasks involve downloading software and creating a folder structure which adheres to the guidelines set out by the company. This manual process is time consuming and tedious, plus it allows room for human error. The same issue occurs for a current employee who has to rebuild their machine. A third issue can be seen with employees who have forgotten the company guidelines.

Company time, in particular during new inductions, would be better spent in various other ways, such as allowing new employees to read company policy or familiarise themselves with the office building and appropriate contacts.

A key aspect of this project was to eliminate user interaction and cut down on the potential for human error. To achieve this, three technologies were considered: Ansible, Puppet and Chef. We chose Chef as it can run serverless, is scalable, and is Windows-compatible.

With the technology selected, we looked at how best to use Chef and what its capabilities were. This required a lot of research – and trial and error. Understanding the problem enabled us to define three main goals, all of which were to be automated: silent installs of required software, folder structure, and environment variables.

Our objective was for the user to simply download the Chef Client, connect to the repository on InnerSource, and then run a single command on the command line. The automated process then kicks off and delivers the finished product. So what will it achieve?

  • Ensures standardisation throughout the company
  • Saves the company valuable time
  • Speeds up Induction process
  • Automates silent software installs, folder structure creation and environment variables

Using DevOps to tackle the ‘Wall of Confusion’

In the traditional flow of software delivery, the interaction between development and support is often one of friction. Development teams are wired towards implementing change and the latest features. Support teams focus on stability of production environments through carefully constructed control measures. This divide in culture is now commonly referred to as the “wall of confusion”.

DevOps looks to break down this culture by improving the performance of the overall system, so that supporting the application is considered when it is designed. One method of doing this is to start treating your infrastructure as code so that it can be rebuilt and validated just like application code.

One area that would benefit from provisioning infrastructure would be the configuration of development environments. Setting these up can often be tedious as they rely on specific versions of software, installed in an exact order with particular environment variables and other project specific configurations – all of which can cause delays to working on a project and are prone to human error.

Automation, Automation, Automation

Chef is a powerful automation platform that uses a custom Ruby DSL to provision infrastructure. A key feature of Chef is that it ensures idempotency: only the changes that need to be applied are carried out, irrespective of the number of times it is run. While it is intended to configure servers, the flexibility of the platform means that it can be used to set up local development environments.
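As a hedged illustration of that idempotency (this is not the graduates’ actual cookbook; the package, paths and variable names are invented), a Chef recipe declares desired state, and Chef only acts where reality differs:

```ruby
# Each resource below is idempotent: running this recipe twice in a row
# makes changes only on the first run.

# Silent install of required software
package 'git'                     # installed only if not already present

# Company-standard folder structure
directory 'C:\dev\projects' do
  recursive true
  action :create                  # no-op if the directory already exists
end

# Environment variables (Windows-specific resource)
env 'DEV_HOME' do
  value 'C:\dev'
  action :create                  # rewritten only if the value differs
end
```

Run with something like `chef-client --local-mode`, a second invocation would report zero resources updated.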

Our diagram shows the architecture and workflow for the project. A developer writes Chef code on their workstation, then uploads it to a Chef repository hosted on GitLab, with installers kept in an S3 bucket on AWS. This code can be pulled down to a developer’s machine to be configured and run in Chef Zero, a feature (usually used for testing code) in which both a Chef Server and a Chef Client run at the same time. This approach ensures that development machines can be quickly and reliably configured for a project. It also introduces portability into development environments, so that testing and support teams can recreate these environments should they need to.

Ready for the Cloud

Chef is tightly integrated with Amazon Web Services through AWS OpsWorks. This means that the Chef code used to automate physical servers or workstations can also be used to configure AWS resources. This ability to standardise both physical and cloud environments makes it possible to create a smooth workflow for both Development and Support teams.

Our Grad Project take-aways?

From experiencing work in a support team, we can see the benefits of embracing a DevOps culture and workflow. The ability to standardise environments means that Development teams are free to implement new technologies that can then be easily transferred to and controlled by support teams. Having completed Phase I of ‘Ready Steady Cook’, we aim to embark on Phase II: developing an automated setup for a specific aspect of the support team’s work.

We have both gained valuable experience in working through a project’s complete lifecycle, from inception to development to testing and production. Throughout the project we utilised Agile methodologies such as working towards fortnightly sprints and daily stand-up meetings. This project has also widened the scope of our graduate training in that we have gained certifications in Chef and are working towards certifications in other DevOps technologies.

Sopra Steria is currently recruiting for the Spring 2018 Consulting and Management Graduate Programme. If you, or someone you know, is interested in a career with us, take a look here.

Have you heard the latest buzz from our DigiLab Hackathon winners?

The innovative LiveHive project was crowned winner of the Sopra Steria UK “Hack the Thing” competition which took place last month.

Sopra Steria DigiLab hosts quarterly Hackathons, each with a specific challenge; the most recent was named ‘Hack the Thing’. Whilst the aim of the hack was sensor- and IoT-focused, the solution had to address a known sustainability issue. The LiveHive team chose to focus their efforts on monitoring and improving honey bee health and husbandry, and on supporting new beekeepers.

A Sustainable Solution 

Bees play an important role in sustainability within agriculture. Their pollinating services are worth around £600 million a year in the UK in boosting yields and the quality of seeds and fruits[1]. The UK had approximately 100,000 beekeepers in 1943; however, this number had dropped to 44,000 by 2010[2]. Fortunately, in recent years there has been a resurgence of interest in beekeeping, which has highlighted the need for a product that allows beekeepers to explore and extend their knowledge and capabilities through modern, accessible technology.

LiveHive allows beekeepers to view important information about the state of their hives and receive alerts, all on their smartphone or mobile device. The social and sharing side of LiveHive is designed to engage and support new beekeepers and give them a platform for more meaningful help from their mentors. The product also allows data to be recorded and analysed, aiding national and international research and furthering education on the subject.

The LiveHive Model

The LiveHive solution integrates three services – hive monitoring, hive inspection and a beekeeping forum – offering access to integrated data and enabling its exchange.

“As a novice beekeeper I’ve observed firsthand how complicated it is to look after a colony of bees. When asking my mentor questions I find myself having to reiterate the details of the particular hive and history of the colony being discussed. The mentoring would be much more effective and valuable if they had access to the background and context of the hives scenario.”

LiveHive integrates the following components:

  • Technology Sensors: to monitor conditions such as temperature and humidity in a bee hive, transmitting the data to Azure cloud for reporting.
  • Human Sensors: a Smartphone app that enables the beekeeper to record inspections and receive alerts.
  • Sharing Platform: to allow the novice beekeeper to share information with their mentors and connect to a forum where beekeepers exchange knowledge, ideas and experience. They can also share the specific colony history to help members to understand the context of any question.

How does it actually work?

A Raspberry Pi measures temperature, humidity and light levels in the hive and transmits the measurements to the Microsoft Azure cloud through its IoT Hub.
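On the device side, here is a hedged sketch of how such readings might be packaged before being handed to the Azure IoT device SDK for transmission (the field names, hive ID and send mechanism are assumptions, not LiveHive’s actual schema):

```python
import json
import time

def build_telemetry(hive_id: str, temperature_c: float,
                    humidity_pct: float, light_lux: float) -> str:
    """Package one set of hive sensor readings as a JSON telemetry message."""
    message = {
        "hiveId": hive_id,
        "timestamp": int(time.time()),   # seconds since epoch
        "temperatureC": temperature_c,
        "humidityPct": humidity_pct,
        "lightLux": light_lux,
    }
    return json.dumps(message)

# In the real device loop, this string would be handed to the Azure IoT
# device SDK for delivery to the IoT Hub; here we just print it.
if __name__ == "__main__":
    print(build_telemetry("hive-01", 34.5, 61.2, 120.0))
```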

Sustainable Innovation

On a larger scale, the hive sensor data and beekeepers’ inspection records together create a large, unique source of primary beekeeping data. This aids research and education into the effects of beekeeping practice on yields and bee health, presenting opportunities to collaborate with research facilities and institutions.

The LiveHive roadmap also plans to put beekeepers in touch with the local community through the website, allowing members of the public to report swarms, offer apiary sites and even find out who may be offering local honey!

What’s next? 

The team have already created a buzz with fellow bee projects and beekeepers within Sopra Steria by forming the Sopra Steria International Beekeepers Association, which will be the beta test group for LiveHive. Further opportunities will also be explored, with the same service design principles applied to other species, which could aid Government inspection. The team are also looking at ways to collaborate with Government directorates in Scotland.

It’s just the start for this lot of busy bees but a great example of some of the innovation created in Sopra Steria’s DigiLab!

[1] Mirror, 2016. Why are bee numbers dropping so dramatically in the UK?  

[2] Sustain, 2010. UK bee keeping in decline

Building an Agile Organisation: lessons learned from Lean Agile Scotland 2016

I was lucky enough to attend Lean Agile Scotland last month, a 3-day conference in Edinburgh crammed full of fantastic keynotes, talks and workshops covering all things Agile: from Value Streams to Cynefin, from TDD and BDD to Neuro-diversity, and from meeting culture to dark collaboration – #LAScot16 had it all.

Trying to summarise in one blog post all the lessons and thinking I took away has been tough, so I’ve focused on some interesting ideas from the conference which can help organisations build/maintain their Agile culture:

The role of management in an Agile organisation

The subject of management was touched upon by several speakers during the 3 days, including Marc Burgauer’s Eupsychian Manager talk and Julia Wester’s Let’s (Not) Get Rid of Managers talk.

Marc Burgauer introduced many of the conference attendees to the idea of Maslow’s Eupsychian Manager (pronounced “you-sigh-key-un”): human-oriented management practised by self-actualised people (Eupsychian is defined as having or moving towards a good mind/soul). Marc highlighted that in an age where most organisations strive for conformity and “same-ness”, there is no one right way to manage everyone. Each employee needs to be managed according to their needs in the moment and in a way fitting their current context – in Eupsychian Management, one size definitely does not fit all.

Eupsychian managers make it easy for their employees to say No to them – this allows employees to make their managers aware of anything the organisation may currently be blind to, yielding valuable information.

Eupsychian managers also ask their employees, “What can I do to help you do your job better?”. This question clearly sets out from the beginning how the relationship between manager and employee will function.

Mirroring many of the sentiments of Marc’s talk, Julia Wester thoughtfully discussed how, as more teams move to an Agile way of working (self-organising, no hierarchy), the traditional role of the manager must move too. We still need managers in Agile environments, but Agile management should play down hierarchy and have managers simply be part of the team – being seen as “one of the team” encourages feedback from your team members.

In an Agile environment, we should value individuals and interactions over processes and tools; managers should therefore treat their team as people, not just resources. One example of an organisation moving away from seeing its people as just resources is Google, which has renamed its Human Resources department People Ops. When you value your people, you foster psychological safety and create relationships based on trust, which allows you not to micromanage.

Julia finished her talk with this important quote from Peter F. Drucker:

“Management is about human beings. Its task is to make people capable of joint performance, to make their strengths effective and their weaknesses irrelevant.”

The importance of Communities of Practice in an Agile (any!) Organisation

At Emily Webber’s talk, Communities of Practice: The Missing Piece of Your Agile Organisation, which highlighted the importance and value of having communities of practice in your organisation, I found myself nodding along in agreement with everything she said. Fortunately, Sopra Steria already recognises the importance of Communities of Practice, demonstrated by our adoption of a Community model earlier this year, with communities ranging from Agile to Architecture.

So what makes a good community? Having Membership and Influence, while providing a Fulfilment of Needs and Emotional Connection. And why are they essential? People need to feel supported in their roles. We learn better when we learn together. Collaboration creates collective intelligence which is greater than individual intelligence.

When thinking about how you can get the most from your Community of Practice, use it as an opportunity to get together and:

  • Give presentations to one another and/or invite external speakers in to present on new thinking/innovation in your area
  • Practise new skills in a safe environment
  • Visit other organisations, if possible, with similar challenges and share learning

Many more lessons can be learned from Lean Agile Scotland 2016, and if you would like to learn more, all talks from the 3 days will be made available online – follow @LeanAgileScotland on Twitter for updates.

If you have any thoughts on these topics, please leave a reply below or contact me by email.

Bob Dylan was right about Digital Transformation

Bob Dylan is recognised as one of the most influential writers of the 20th century.  He is not, though, seen as an inspiration for the digital age. Perhaps he should be? In his 1965 song “It’s Alright, Ma (I’m Only Bleeding)”, he states that “He not busy being born is busy dying”. With this line he couldn’t have been more prescient.

Organisations need to continually “be busy being born” and innovate or face the alternative.

Think about it: what differentiates companies hobbling along the digital highway from the ones paving the way? The ability to embrace change, refuse the status quo and turn the business into an ever-evolving entity.

Put another way, being digital is about reconciling the pace of adoption of new technologies with the pace of their commoditisation. The latter recently experienced a dramatic acceleration, while the former is often stuck in old-world mode.


Old world versus Digital world

Twenty years ago, adopting new software was a big deal. There were limited numbers of vendors in each market segment. Software customisation or process transformation was necessary to take advantage of the technology. Integration was complex, and ongoing maintenance and support often presented challenges. All of this resulted in expensive acquisition costs, from both a financial and an effort perspective. Long-term supplier contracts were the norm.

Once software was installed, and the vendor proven, it was a lot easier for an organisation to allow the vendor to expand its footprint through additional modules and products rather than go back to the market to look for alternative solutions.

From a vendor perspective, selling and delivering software was costly, requiring a large sales team to reach customers and negotiate complex contracts. Vendor delivery teams needed to be highly skilled, building bespoke integrations to satisfy the specific needs of customers.

New software integration was expensive and risky, and therefore needed careful consideration. The pace of adoption was slow, as software was seen as complex and far from being a commodity.

Today, the pace of commoditisation has increased by an order of magnitude, mainly due to Cloud technologies. Let’s have a look: what does innovation mean today in the enterprise world? Big data maybe, machine learning and AI, blockchain or IoT? All of these have already been turned into commodities. Fancy running some machine learning algorithms on your customer database? AWS has an API for that, and a first run shouldn’t take more than a few hours of work. The same goes for most big data technologies, IoT, blockchain and even virtual reality.

as-a-Service paradigm

The as-a-Service paradigm has drastically reduced the costs, complexity and risks of adopting new software and technologies. The SaaS model, for instance, by turning CAPEX into OPEX, has abolished any notion of commitment.

Should your company use this marketing software over that one? Who cares? Use both, allow yourself to pay a bit more for a month or two, then keep the one that perfectly meets your needs. Going further, why even decide at company level? One department may prefer one piece of software because it measures what they want in exactly the way they want, while another department may prefer a different one. With almost no acquisition and no integration costs, why over-rationalise at the expense of business value and user experience? Standardisation is still worth considering for non-differentiating applications, but in a much less prominent position.

The Digital highway

All this said, most old-world companies still consider innovation with the same eyes as before, missing business opportunities and losing ground to new entrants.

If conducting an IoT experiment means running an RFP, conducting a 6-month PoC and signing a multi-year contract, then you may be doing IoT, but you’re still hobbling along the digital highway.

Velocity is key to transforming your company into an ever-evolving, fast-learning business.

“He not busy being born is busy dying.”

Thanks to Clara, Gavin, Jian and Robin for their kind guidance.


What do you think? Leave a reply below or contact me by email.

Image courtesy of Getty Images

Is ITIL dead?

Why ITIL must adapt if it is to remain relevant in the Digital era

ITIL is dead! A contentious view? Almost certainly, and I don’t think I’d need to throw a stone too far in any direction before I hit somebody who fervently disagrees with me.  So why do I say this?  One word – Digital.

Don’t get me wrong, ITIL still has its place, and many, many organisations are still using it just fine, thank you very much. But the writing is on the wall.  Digital is here to stay, and slowly but surely (and in some cases very quickly!) we are beginning to witness a wholesale shift in enterprise technology strategy, from traditional, legacy IT service delivery to a model that embraces the Cloud (in all its guises), platform and device mobility, automation (everywhere!), and a focus that places customer experience front and foremost on the list of priorities.

Whilst ITIL can and does still enable the delivery and support of these technology objectives, it is rapidly being considered ‘clunky’, and organisations are increasingly seeking to adopt more flexible operational governance that aligns more sensitively with the change cadence required, nay dictated, by such advances.

Digital technologies, by their very nature, tend to be fast-moving and highly volatile. Developing these technologies requires an equally fast-moving service lifecycle to ensure that customer expectations are both met and maintained, in a customer environment that now demands swift and constant improvement and innovation.

Agile is one part of the industry’s response to this challenge.

The recent proliferation of tools and techniques to support Agile delivery frameworks is an indicator of the steady rise in adoption of iterative work cadences, and the reality is that many traditional ITSM framework implementations simply aren’t geared up to support this approach.  In many cases, ITSM actively works to impede the delivery of change in an agile manner, and this creates a very real dilemma for IT service management leaders.

The crux of this dilemma is as follows:

  1. Many of the core ITIL processes have been designed to protect production operations from the impact of change, and manage any impact of that change accordingly
  2. Agile (and supporting frameworks) have, however, been designed to increase the velocity of change, and the flexibility by which it is prioritised

As every Change Manager will no doubt confirm, increasing the rate of change (potentially to daily or even hourly increments) can put major stress on a process not necessarily designed to work at this pace. Equally, the concept of ‘trust’, so fundamental to the Agile methodology, may sound great in theory, but is not so alluring in practice when you’re the Head of IT Operations with SLAs to meet and audit controls to adhere to.

In the Waterfall world, change works, to a degree, coherently with ITIL: the phased approach to delivery (design, build, test) gives service management functions the time and space to perform the activity necessary to protect service.  In an Agile world, however, this paradigm is challenged, and what were well-structured, methodical, and well-understood governance controls suddenly become a blocker to the realisation of business value (at the pace with which the business wants to realise it). In some cases this can happen almost overnight, as businesses decide to cut over to iterative software development methodologies in a big-bang approach, often with scant regard for the impact on service management and operations functions.  Almost instantly we witness a clash of worlds (Old versus New).  And a word to the wise, my friends: the business is normally championing those in the New camp.

It is at this point that we hit the dilemma.  What takes priority – the rapid realisation of business value through the swift release of change, or the protection of production systems (and thus the customer experience) from potential availability or performance degradation as a result of change?

The answer depends heavily on the type of organisation and system/service being changed, but of course the real answer is that both are equally important.  The issue, however, is that Agile is considered new, revolutionary, and progressive (it isn’t really but that’s beside the point). ITIL, on the other hand, is considered by many to be overly bureaucratic and a constraint to the realisation of business value. And remember, perception is reality, especially when those doing the perceiving also happen to be holding the purse strings.

The result is that IT service leaders, in the face of a business strategy that promotes a fast pace of change perceived to be constrained by service management control, quickly become guilty by association. An inability to respond quickly to this challenge will only compound the issue.  The next logical step from there is the disintermediation of IT altogether, as business change leaders look to more flexible ways to deliver value to their customers, unhindered by legacy constraints.

To avoid this scenario, IT service leaders, and the processes they adopt, must adapt. Long-term proponents of existing models must wake up to this notion. This change train is most definitely coming, and it’s not showing any signs of slowing down.  We have a lot of baggage to carry, so getting on the train will be hard, but it’s also absolutely necessary (I think I may have stretched that analogy a little thin).

Thus ITIL, whilst perhaps not dead per se, is certainly badly wounded and in desperate need of triage.

As Ralph Waldo Emerson famously said, nothing great was ever achieved without enthusiasm. Well, now is the time to get enthusiastic, because if enough of the community are, perhaps ITIL might just survive after all.

What do you think? Leave a reply below or contact me by email.