As always, Amazon Web Services (AWS) made a bunch of announcements at their recent Chicago Summit. The new features have been reported to death elsewhere, so I won’t repeat that coverage here, but a few things about them struck me…
Firstly, the two new EBS storage volume types – aimed at high throughput rather than IOPS – are priced at 50% and 25% of the standard SSD EBS price, so they are effectively a price cut for big data users. As I’ve commented before, the age of big, headline-grabbing “across the board” cloud price reductions is largely over – price reductions now tend to come in the form of better price/performance characteristics. In fact, this seems to be one of Google’s main competitive attacks on AWS.
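To make the throughput-vs-IOPS trade-off concrete, here is a minimal sketch of the kind of decision involved. The volume-type names (gp2, st1, sc1) match AWS’s announcement, but the threshold numbers below are illustrative assumptions of mine, not AWS guidance – the table in the AWS blog post is the real reference.

```python
def pick_ebs_volume_type(peak_iops, peak_throughput_mbs, access_pattern):
    """Illustrative heuristic for choosing an EBS volume type.

    The thresholds here are invented for illustration; consult the
    guidance table in the AWS blog post for real numbers.
    """
    if access_pattern == "random" or peak_iops > 500:
        return "gp2"   # General Purpose SSD: small random I/O
    if peak_throughput_mbs > 100:
        return "st1"   # Throughput Optimized HDD: large sequential I/O
    return "sc1"       # Cold HDD: infrequently accessed sequential data

# e.g. a log-processing workload streaming large files sequentially:
print(pick_ebs_volume_type(peak_iops=200, peak_throughput_mbs=250,
                           access_pattern="sequential"))  # st1
```

Even this toy version shows the point: the architect now has to know the workload’s I/O profile before the volume is provisioned.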
Of course, I welcome the extra flexibility – it’s always comforting to have more tools in the toolbox. And to be fair, there is a nice table in the AWS blog post that gives good guidance on when to use each option. Other cloud vendors are also introducing design complexity for well-meaning reasons – see, for example, Google’s custom machine types.
What strikes me about this is that the job of architecting a public cloud solution is getting more and more complex and requires deeper knowledge and skills, i.e. the opposite of the promise of PaaS. You need a deeper and deeper understanding of the IOPS and throughput needs of your workload, and of its memory and CPU requirements. In a magic PaaS world you’d leave all this infrastructure design nonsense to the “platform” to make an optimised decision on. Maybe a logical extension of AWS’s direction of travel here is to offer an auto-tiered EBS storage model, where the throughput and IOPS characteristics of the EBS volume are dynamically modified based upon workload behaviour patterns (similar to what on-premise storage systems have been doing for a long time). Auto-tiered CPU/memory allocation would also be possible (with the right governance). This would take away some more of the undifferentiated heavy lifting that AWS try to remove for their customers.
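To sketch what such an auto-tiering policy might look like: nothing like this exists in EBS today, and the metrics, thresholds, and the hypothetical “io1 for IOPS-heavy work” rule below are all invented for illustration – it is just the sort of policy loop an auto-tiering service could run over observed volume metrics.

```python
def recommend_tier(current_type, avg_iops, avg_throughput_mbs):
    """Hypothetical auto-tiering policy: suggest a cheaper or
    better-matched EBS volume type based on observed workload
    behaviour. All thresholds are invented for illustration.
    Returns None if the volume is already on the right tier.
    """
    if avg_iops > 1000:
        target = "io1"   # Provisioned IOPS SSD for IOPS-heavy work
    elif avg_iops > 250:
        target = "gp2"   # General Purpose SSD
    elif avg_throughput_mbs > 100:
        target = "st1"   # Throughput Optimized HDD
    else:
        target = "sc1"   # Cold HDD
    return None if target == current_type else target

# A volume sitting on SSD but doing mostly cold sequential reads:
print(recommend_tier("gp2", avg_iops=20, avg_throughput_mbs=10))  # sc1
```

The hard part in practice would not be the policy itself but the governance around it – migrating a live volume between tiers without surprising the workload owner.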
So… related to that point about PaaS – another recent announcement was that Elastic Beanstalk now supports automatic weekly updates for minor patches/updates to the stack that it auto-deploys for you, e.g. patches to the web server etc. It then runs confidence tests that you define before swapping traffic over from the old deployment to the new one. This is probably good enough for most new apps, and it moves the patching burden away from the operations team to AWS. This is potentially very significant, I think – and it sits in that fuzzy area where IaaS stops and PaaS starts. I must confess to having not used Elastic Beanstalk much in the past, sticking to the mantra that I “need more control” etc. and so going straight to CloudFormation. I see customers doing the same thing. As more and more apps are designed with cloud deployment in mind and use cloud-friendly software stacks, I can’t see any good reason why this dull but important patching work cannot be delegated to the cloud service provider, for a significant operations cost saving. Going forward, where SaaS is not an appropriate option, this should be a key design and procurement criterion in enterprise software deployments.
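For reference, the feature is switched on through Beanstalk’s usual option-settings mechanism. A sketch of building those settings for the UpdateEnvironment API call follows – I have not verified the namespace and option names against the current docs, so treat them as assumptions and check the Elastic Beanstalk documentation before relying on them.

```python
def managed_update_settings(window="Sun:02:00", level="minor"):
    """Option settings to enable weekly managed platform updates,
    intended for passing as OptionSettings to Elastic Beanstalk's
    UpdateEnvironment API (e.g. via boto3 or the AWS CLI).
    Namespace/option names are my best recollection -- verify
    against the Elastic Beanstalk docs before use.
    """
    return [
        {"Namespace": "aws:elasticbeanstalk:managedactions",
         "OptionName": "ManagedActionsEnabled", "Value": "true"},
        {"Namespace": "aws:elasticbeanstalk:managedactions",
         "OptionName": "PreferredStartTime", "Value": window},
        {"Namespace": "aws:elasticbeanstalk:managedactions:platformupdate",
         "OptionName": "UpdateLevel", "Value": level},
    ]
```

The interesting design choice is the maintenance window plus user-defined confidence tests: you stay in control of *when* and *whether* a swap happens, while AWS owns the patching itself.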
Finally, the last announcement that caught my eye was the AWS Application Discovery Service – another small nail in the coffin of SI business models that make some of their money from large-scale application estate assessments. It’s not live yet and I’m not clear on the pricing (it may only be available via AWS and their partners), and it probably won’t be mature enough to use when it is first released. It will also have some barriers to adoption – not least that it requires an on-premise install and so will need to be approved by a customer’s operations and security teams – but it’s a sign of the times and the way things are going. Obviously AWS want customers to go “all in”, migrate everything including the kitchen sink, and then shut down the data centre, but the reality from our work with large global enterprise customers is that the business case for application migrations rarely stacks up unless there is some other compelling event (e.g. a data centre contract expiring). However, along with the database migration service etc., they are steadily removing the hurdles to migration, making those marginal business cases just that little bit more appealing…
What are your thoughts? Leave a reply below, or contact me by email.