Cloud adoption consideration #3: Avoid overly technology-led approaches

This is the third of a series of blog posts discussing the five main considerations critical to successful cloud adoption by enterprises.  If you missed them, the previous posts are here.

Today’s topic is a common anti-pattern that we see – organisations treating cloud adoption as primarily a technology problem to solve.

Working on cloud has become great CV fodder in the last few years, so everyone wants to work on the new cloud implementation project and get exposure to the new technology.  Obviously enterprises want to harness this energy, but there is a trap, and it comes back to the need for a cloud adoption strategy and an underpinning business case – i.e. why are you doing it?  Is it to become the world’s leading authority on cloud computing?  Should your organisation be focusing its energies on its core business operations, markets and customers, or on pushing the envelope with ground-breaking cloud implementations?

Technologists (and I’m counting myself as one here) love this stuff – introducing complexity, sometimes at the cost of the original business goals.  So consider the following questions…

How many cloud providers do you really need?

A common emerging enterprise adoption pattern is to manage multiple cloud providers via a brokerage solution that gives a single point of control across them.  It’s a valid strategy, but do you really need it?  Is it to reduce the risk of lock-in?  For vendor negotiation leverage?  The risk here is that a great deal of complexity (a barrier to the very agility you are trying to achieve) is introduced, leading to a “lowest common denominator” set of cloud services and a maintenance nightmare as you engage in a never-ending, unwinnable chase of the cloud vendors’ latest feature releases.

Do you really need internal/private and public cloud offerings?

In the enterprise market, we could perhaps characterise the last few years as being very focused on private cloud, as the traditional hardware and software vendors desperately tried to defend market share from the public cloud usurpers.  Now the tide is turning in favour of the public cloud providers, but there is massive inertia in large enterprises towards on-premise initiatives – hence, for example, Microsoft’s focus on positioning for hybrid cloud with Azure Pack and Azure Stack.  This is understandable – there’s typically a huge data centre investment, and an operating model that has been painfully refined over the years to feed and water that investment.

So ask yourself these questions:

  • Are you reinventing something that already exists from the public providers?
  • Are you pursuing an evolutionary step that you can avoid?

Are you really that unusual?

A common statement we hear is “ah, but we are unique/different” – but we would argue that this is rarely the case.  It can certainly be true for SaaS adoption, but is much less likely to be the case for IaaS services.  If this argument is used to justify custom development for cloud services, be suspicious.  It’s likely that AWS and their kind have already solved these challenges, and if they haven’t, ask yourself why not…the answer is probably that it’s not a genuine need.


If you want to read more about this and the other four considerations for successful enterprise cloud adoption, have a look at our white paper.

What are your thoughts about successful cloud adoption by large enterprises? Leave a reply below, or contact me by email.


Beamap is the cloud consultancy subsidiary of Sopra Steria

Cloud adoption consideration #2: Define and implement the revised operating model

This is the second of a series of blog posts discussing the five main considerations critical to successful cloud adoption by enterprises (if you missed it, the first post is here).  Today’s topic is the impact of cloud adoption on your operating model.

A common customer scenario goes like this.  We want to get the benefits of cloud computing.  But our organisation is so…slow…to…change and we have so much legacy to deal with.  So let’s set up a skunkworks team for application X that is coming up on the development roadmap, so that we can develop a small initial capability to design, deploy and operate a cloud-based solution.  So far, so sensible.

But then the advantages that this initial team had start to become disadvantages. They can’t scale. They don’t really have organisational sponsorship, because they’ve deliberately operated in a silo so they could move quickly, avoiding all those pesky organisational constraints. This is the old bimodal IT dilemma, which has had far too many column inches written about it already (here is a good place to start on this topic), so I won’t add to them here.

Bite the bullet

At some point, the enterprise has to bite the bullet and define and then implement a revised operating model for the design, deployment, operation and retirement of cloud services at scale.  It needs to become the default model for the scenarios defined in the strategy that we did such a great job of earlier, not the exception.  Regardless of the technology selection, the operating model within the IT function needs to change, and in some respects this needs to be defined before the implementation phase.  This is a non-trivial undertaking and requires serious management commitment.

Most non-cloud-native large enterprises have not got to this stage yet, regardless of what case studies are available in the trade press.  Digital flagship projects get all the coverage, but the reality is that the vast majority of enterprise workloads are still in the data centre – we are just a few years into a very long journey – and the evolution of operating models is tracking this trend.

So what does a cloud-ready operating model look like?

Well, it considers cloud adoption in multiple dimensions:

  • design, build and run…
  • …using the full range of cloud services – SaaS through to IaaS…
  • …considering the people, process and technology implications

Some of the aspects of the operating model can be radically new for an organisation, e.g. DevOps processes, multi-skilled teams.  And some can be refinements to existing processes to better support an environment that has greater and greater dependence over time on cloud-based services.  For example, vendor management needs to change to cope with the differing innovation pace, commercial models, legal implications and billing models of cloud providers.

The risk here is that your technologists do a fantastic job providing your staff with access to the underlying internal or external cloud services, but a pre-cloud-era operating model destroys the benefits by constraining the achieved agility and innovation.  In addition, the new operating model needs to be much more responsive to change itself, so there is a “one-off” redesign required here, but also an ongoing process of adaptation to respond to the still rapidly changing cloud landscape.  It’s still relatively early days…


If you want to read more about this and the other four considerations for successful enterprise cloud adoption, have a look at our white paper.

What are your thoughts about successful cloud adoption by large enterprises? Leave a reply below, or contact me by email.


Beamap is the cloud consultancy subsidiary of Sopra Steria

Cloud adoption consideration #1: Have a strategy

This is the first of a series of blog posts discussing the five main considerations critical to successful cloud adoption by enterprises.

Before diving into the first, let’s define the scope of what I’m talking about here, as it affects the points that need to be made.

  • We are talking about cloud adoption by large enterprises – typically global organisations who have the advantage of scale and for whom private cloud implementations are within their reach, but who also have the disadvantages of scale – the challenge of organising a large number of people to deliver complex capabilities. The critical considerations are different for start-ups where public cloud is typically the only realistic option.
  • At this scale, customer strategies tend to encompass private and public cloud components, and sometimes multiple providers for each – so our scope here includes SaaS, IaaS, the fuzzy bit in the middle that we’ll call PaaS, plus on- and off-premise implementations.
  • We are interested in the long game – of course you need quick wins to earn credibility and get the flywheel effect going, but the big gains are to be realised over several years (at least).

Firstly, technology is not one of the five critical considerations

This might seem counter-intuitive, but let me explain. Our clients have the scale to get the technology right. Typically, they have talented staff and/or strong delivery partners in place, and they are big enough to attract a lot of love from the big cloud vendors. Also, these types of organisations have already implemented and operated multiple data centres over the years, so they know how to deliver big technology change programmes. So whilst getting the technology aspects right is absolutely fundamental, it’s not where we see customers having trouble. If anything, it’s a risk, as customers can focus too much on the areas where they are comfortable. Five years ago, perhaps, this was the hard part, but whilst it’s still far from trivial, it’s not where organisations flounder. We do see customers with cloud initiatives that fail for technology reasons (e.g. sub-standard security patterns and governance), but typically the root cause of these failures is not technical ignorance and can be traced back to one of our five main discussion points.

The key point is – and it’s taken me a while to figure this out myself – that successful exploitation of cloud is not just a technology challenge, but predominantly a change management challenge. So this takes us to the first consideration critical to successful cloud adoption by enterprises…

#1 – Know why you are doing it

This sounds obvious – but it’s amazing how many organisations do not have a clear set of business drivers that can be traced through to the cloud part of their strategy. (Just to clarify – we tend to talk about cloud strategy as a shorthand, but really we mean “business strategy in a cloud world” – i.e. those components of the business and IT strategy that can leverage the developments in cloud computing from the last 5-10 years)

Actually the problem is more subtle than this – often what we find with a new client is that there is a cloud strategy defined in some form, but it might have one or both of the following flaws:

It is non-specific, and therefore tries to be all things to all stakeholders

We want to be more agile? Tick.  And reduce costs? Tick.  And be more secure? Tick.  And rein in shadow IT by providing a credible alternative? Tick.

The cloud vendors (and I have contributed to this) are guilty of feeding the “cloud good, non-cloud bad” mentality, but of course it’s more complex than this. Some of these objectives compete with each other – sure, the overall effect of cloud exploitation on the organisation can be to achieve them all, but any strategy that can really be executed needs to be less generic than this. For example – are you chasing infrastructure cost savings, or application development savings, or operations staff savings? Is it an IT benefit you seek, or a benefit that will be visible to internal business customers? The answers will probably differ across different parts of your application estate too.

One cause of a vague or non-existent strategy is a “follow the leader” behaviour from senior management – i.e., my peers/competitors are adopting and my shareholders/investors read the same press I do, so I have to do it also. Five years ago I spent what felt like too much energy evangelising cloud adoption in a large enterprise (with limited effect!), and now it’s taken as a given that it’s part of the CIO’s mission. That’s progress. Just be sure why you are doing it and be brave enough to say no if…

It is not underpinned by a business case

Assuming the previous flaw has been addressed (it’s really a prerequisite that the strategy is specific enough), then do a good old-fashioned cost-benefit analysis on what you are proposing to execute. I am not arguing that cloud adoption in a large enterprise does not need a visionary leader to show the way by articulating a compelling vision – it absolutely does. But do your homework. Our motto in Beamap[1] is “If the business case for an application migration to cloud does not stack up, do not do it”.  We have no bias towards migrating everything regardless of the benefits case – we’re not selling hardware or software or cloud services.

Of course, there are many sound strategic rationales for proceeding without a positive “hard numbers” business case – risk mitigation, future agility, data centre space constraints or contract terminations, and so on – and putting a financial value on these softer benefits is difficult.  But at least make it a conscious and evaluated decision – otherwise how are you going to go back later, measure the benefits realisation and adjust your approach based on what this teaches you?
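To show what “do your homework” can look like in practice, here is a toy sketch (in Python, purely for illustration, with entirely hypothetical numbers) of the kind of simple NPV and payback check a cost-benefit analysis for a single application migration might start from:

```python
# A toy cost-benefit sketch for a single application migration.
# All numbers are hypothetical placeholders – the point is the
# discipline of writing them down, not the values themselves.

migration_cost = 250_000          # one-off: remediation, testing, parallel running
annual_onprem_cost = 400_000      # data centre, licences, ops effort today
annual_cloud_cost = 310_000       # IaaS charges plus revised ops effort
discount_rate = 0.08              # cost of capital used to discount future savings
years = 5                         # evaluation horizon

# Net present value of the annual saving stream, minus the one-off cost
annual_saving = annual_onprem_cost - annual_cloud_cost
npv = sum(annual_saving / (1 + discount_rate) ** t for t in range(1, years + 1))
npv -= migration_cost

payback_years = migration_cost / annual_saving

print(f"NPV over {years} years: £{npv:,.0f}")
print(f"Simple payback: {payback_years:.1f} years")
```

Even a back-of-the-envelope model like this forces the softer benefits to be named explicitly when the hard numbers alone do not justify the move.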

If you want to read more about this and the other four considerations for successful enterprise cloud adoption, have a look at our white paper.

What are your thoughts about successful cloud adoption by large enterprises? Leave a reply below, or contact me by email.

———————————

[1] – Beamap is the cloud consultancy subsidiary of Sopra Steria

Bridging the gap between Google Cloud Platform and enterprises

Recently, I spent some time with Google to understand their Google Cloud Platform (GCP) in more detail. Everyone knows that the leaders in this space (in adoption terms at least) are AWS and MS Azure, so I thought it would be interesting to hear about GCP and Google’s own cloud journey.

GCP was started in 2008 and, as with AWS, Google’s objective was to bring best practices used internally to the external market.  According to Google, most of their internal tools are very engineering-focused, so their challenge was to ensure that GCP was fast and easily consumable for an external market.

Here are my key observations:

GCP is focusing on enterprises starting their second-wave cloud journey

The IaaS space is a competitive market and Google acknowledges this. Google’s messaging is that cloud is all about what you can do with the platform, and a key objective for GCP is to process large volumes of data quickly (as the Google search engine does). They don’t really like the term ‘big data’ as they see all things as data. Their view is that the speed at which you can process data is their real USP, leveraging GCP services like BigQuery and Cloud Bigtable. Google’s view is that innovation comes from what you can do with the data. For enterprises sitting on large volumes of data, GCP offers the ability to improve internal processes and a new opportunity to develop and sell new services.
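To make that claim a little more concrete, here is a minimal sketch (my own illustration, not from Google’s presentation) of running an aggregate query over one of Google’s public sample datasets with the BigQuery Python client; the project setup and credentials are assumed.

```python
# A minimal sketch of querying a large public dataset with BigQuery.
# Assumes the google-cloud-bigquery package is installed and that
# application default credentials are configured for a GCP project.
from google.cloud import bigquery

client = bigquery.Client()

# Aggregate across a full public table; BigQuery parallelises the scan
# across Google's infrastructure, so there is no cluster to size or tune.
query = """
    SELECT name, SUM(number) AS total
    FROM `bigquery-public-data.usa_names.usa_1910_2013`
    GROUP BY name
    ORDER BY total DESC
    LIMIT 10
"""

for row in client.query(query).result():
    print(f"{row.name}: {row.total}")
```

The point is less the SQL than the absence of any infrastructure decisions before the query runs – which is exactly the “what you can do with the data” pitch.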

Containers are the way forward for new modern application development

Google has been using containers for many years. Everything in Google runs in containers (managed via Kubernetes) and they see this as the future for improving application development efficiency for enterprises. However, they understand the huge gap between the sophistication of what they do internally and what most enterprises do today.
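For readers who have not seen Kubernetes in action, here is a minimal sketch (my own illustration, with placeholder names and a placeholder image) of declaring a containerised workload through the official Python client; it assumes a reachable cluster and a local kubeconfig.

```python
# A minimal sketch of declaring a containerised workload with Kubernetes,
# using the official Python client. Names and image are illustrative only.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when running inside a cluster

container = client.V1Container(
    name="web",
    image="nginx:1.25",  # placeholder image
    ports=[client.V1ContainerPort(container_port=80)],
)

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="web-demo"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # Kubernetes keeps three copies running, rescheduling as needed
        selector=client.V1LabelSelector(match_labels={"app": "web-demo"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web-demo"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```

The declarative model is the point: you state the desired number of replicas and Kubernetes keeps reality matching it, which is the operational approach Google is describing.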

When developers at Google write code, they do not think about servers – it is more of a “serverless” computing environment. Scaling up is no longer an issue, so their focus is on functionality and innovation.  This is where enterprises want to be for infrastructure and new services, but it’s going to be a long journey.

Do enterprises want to be like Google?

In short, yes – in terms of speed and innovation. However, most mature enterprises have decades of legacy applications, infrastructure and strict governance, so it is difficult for them to be agile. Google understands that enterprises cannot operate in its unique manner, either technically or culturally. GCP isn’t about turning your enterprise into Google – it is simply about enabling enterprises to leverage services in a more efficient way.

An example given during the presentation was that many years ago there were multiple search engines (Yahoo, AltaVista, Excite etc). However, Google’s USP was to process data accurately and quickly behind a simple UI. This was disruptive in the marketplace because they changed the way data was queried, processed – and how it was managed. The lesson for enterprises is that new digital initiatives will always require new ways of thinking (forget the 20 years of legacy infrastructure and process), and using cloud platforms to develop new services could be game-changing in their markets.

GCP is still battling other cloud providers for the enterprise market

Cloud is all about innovation and this is GCP’s play!

With enterprises going “all in” with AWS or Azure, another cloud platform may just make things more complex; however, I can see the value of GCP in its speed, machine learning and data processing capabilities. Google may find it challenging to persuade enterprises to use GCP if their workforce is already trained in Azure or AWS. Enterprises like to stay with a platform simply because their workforce has the skills – inertia is a powerful force.

Unlike Microsoft, Google do not currently have the enterprise relationships. However, neither did AWS, and they are making great progress in that portion of the market. Therefore, Google’s partner channel needs to broaden out to help drive adoption.  Google are also hiring people with more of an enterprise background so they can better understand the psyche of these customers.

Questions around Google’s ability to support large-scale enterprise customers will remain; however, some years back the same questions were asked about AWS, and now look at their portfolio of enterprise customers.  GCP may not currently have the market share of AWS or Azure, but it definitely has a platform rich in interesting features, which will help Google narrow the gap in the enterprise market.  An open question is whether their focus on relatively niche innovation features will present a broad enough portfolio of services to enterprise customers for GCP to be seen as a credible “all in” choice, rather than just a niche big data service provider.

What do you think? Leave a reply below, or contact me by email.

Teachable Brand AI – a new form of personalised retail customer experience?

Within the next five years, scalable artificial intelligence in the cloud – Brand AI – could potentially transform how retailers use personalisation to make every store visit a memorable, exclusive customer experience distinct from anything a competing digital disruptor could offer.

Arguably the success of this engagement approach is contingent upon a retailer’s ability to combine a range of data sources (such as social media behaviour, loyalty card history and product feedback) with its analytics capabilities to dynamically create personalised in-store moments of delight for an individual customer that drive their decision to purchase.

But could the truly disruptive approach be one where a customer is continually teaching the Brand AI directly about their wants or needs as part of their long-term personal relationship with a retailer?

Could this deliver new forms of customer intimacy online competitors can’t imitate? Here are some ideas…

  • Pre visit: Using an existing instant messaging app the customer likes (such as WhatsApp or Skype), he or she tells the Brand AI about their communication preferences (time, date, etc) and what content about a specific retailer’s products or services (such as promotions or new releases) they are interested in. This ongoing relationship can be changed any time by the customer and be pro-active or reactive – the customer may set the preference that the Brand AI only engages them when they are located within a mile of a retailer’s store or one week before a family member’s birthday, for example. Teachable Brand AI empowers the customer to be in complete control of their own personalised journey with a retailer’s brand.
  • In store: The Brand AI can communicate directly with in-store sales staff about a customer’s wants or needs that specific day to maximise the value of this human interaction, provide on-the-spot guidance and critical feedback about physical products their customer is browsing to drive a purchasing decision, or dynamically tailor/customise in-store digital experiences such as virtual reality or media walls to create genuine moments of customer delight. Teachable Brand AI has learned directly from the customer about what excites them and uses this deep insight to deliver a highly differentiated, in-store experience online competitors can’t imitate.
  • Post purchase: The customer can ask the Brand AI to register any warranties, guarantees or other after sales support or offers for their purchased good automatically. In addition, the customer can ask the Brand AI to arrange to return the good if unsatisfied or found faulty – to help ensure revenue retention a replacement or alternative is immediately suggested that can be exchanged at the customer’s own home or other convenient location. The customer can also share any feedback they want about their purchase at any time – Teachable Brand AI is driving customer retention and also gathering further data and insights to enable greater personalisation of the pre visit and in-store experience.

If you would like more information about how big data and analytics can benefit your organisation please contact the Sopra Steria Digital Practice.

 

What do recent AWS announcements tell us about the cloud market?

As always, Amazon Web Services (AWS) made a bunch of announcements at their recent Chicago Summit.  The new features have been reported to death elsewhere so I won’t repeat that, but there were a few observations that struck me about them…

Firstly, the two new EBS storage volume types – aimed at high throughput rather than IOPS – are priced at 50% and 25% of the standard SSD EBS price, so are effectively a price cut for big data users.  As I’ve commented before, the age of big, headline-grabbing “across the board” cloud price reductions is largely over – the price reductions now tend to come in the form of better price/performance characteristics.  In fact, this seems to be one of Google’s main competitive attacks on AWS.
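As an illustration (a sketch only – the region, Availability Zone and size below are placeholder assumptions), selecting one of the new throughput-optimised types with boto3 is simply a different VolumeType value:

```python
# A minimal sketch of creating a throughput-optimised EBS volume with boto3.
# Region, Availability Zone and size are placeholder assumptions.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# "st1" is the throughput-optimised HDD type; "sc1" is the cold HDD type.
# Both trade IOPS for cheap sequential throughput, suiting big data workloads.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=500,            # GiB; st1 volumes had a 500 GiB minimum at launch
    VolumeType="st1",
)
print(volume["VolumeId"])
```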

Of course, I welcome the extra flexibility – it’s always comforting to have more tools in the toolbox.  And to be fair, there is a nice table in the AWS blog post that gives good guidance on when to use each option.  Other cloud vendors are introducing design complexity for well-meaning reasons also, e.g. see Google’s custom machine types.

What strikes me about this is that the job of architecting a public cloud solution is getting more and more complex and requires deeper knowledge and skills, i.e. the opposite of the promise of PaaS.  You need a deeper and deeper understanding of the IOPS and throughput needs of your workload, and its memory and CPU requirements.  In a magic PaaS world you’d just leave all this infrastructure design nonsense to the “platform” to make an optimised decision on.  A logical extension of AWS’s direction of travel here would be to offer an auto-tiered EBS storage model, where the throughput and IOPS characteristics of the EBS volume are dynamically modified based upon workload behaviour patterns (similar to what on-premise storage systems have been doing for a long time).  Auto-tiered CPU/memory allocation would also be possible (with the right governance).  This would take away some more of the undifferentiated heavy lifting that AWS try to remove for their customers.

So…related to that point about PaaS – another recent announcement was that Elastic Beanstalk now supports automatic weekly updates for minor patches/updates to the stack that it auto-deploys for you, e.g. for patches on the web server etc.  It then runs confidence tests that you define before swapping traffic over from the old to the new deployment.  This is probably good enough for most new apps, and moves the patching burden to AWS, away from the operations team.  I think this is potentially very significant – it sits in that fuzzy area where IaaS stops and PaaS starts.  I must confess to having not used Elastic Beanstalk much in the past, sticking to the mantra that I “need more control” etc. and so going straight to CloudFormation, and I see customers doing the same thing.  As more and more apps are designed with cloud deployment in mind and use cloud-friendly software stacks, I can’t see any good reason why this dull but important patching work cannot be delegated to the cloud service provider, for a significant operations cost saving.  Going forward, where SaaS is not an appropriate option, this should be a key design and procurement criterion in enterprise software deployments.
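For reference, managed platform updates are switched on per environment via option settings; the sketch below (environment name, region and maintenance window are placeholder assumptions) shows one way to do that with boto3.

```python
# A minimal sketch of enabling Elastic Beanstalk managed platform updates
# with boto3. Environment name, region and maintenance window are
# placeholder assumptions; a service role may also need to be configured.
import boto3

eb = boto3.client("elasticbeanstalk", region_name="eu-west-1")

eb.update_environment(
    EnvironmentName="my-app-env",  # hypothetical environment name
    OptionSettings=[
        {
            "Namespace": "aws:elasticbeanstalk:managedactions",
            "OptionName": "ManagedActionsEnabled",
            "Value": "true",
        },
        {
            # Weekly maintenance window in which AWS applies the updates
            "Namespace": "aws:elasticbeanstalk:managedactions",
            "OptionName": "PreferredStartTime",
            "Value": "Sun:02:00",
        },
        {
            # Restrict automatic updates to minor/patch platform versions
            "Namespace": "aws:elasticbeanstalk:managedactions:platformupdate",
            "OptionName": "UpdateLevel",
            "Value": "minor",
        },
    ],
)
```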

Finally, the last announcement that caught my eye was the AWS Application Discovery Service – another small nail in the coffin of SI business models that make some of their money from large-scale application estate assessments.  It’s not live yet and I’m not clear on the pricing (it may only be available via AWS and their partners), and it probably won’t be mature enough to use when first released.  It will also have some barriers to use, not least that it requires an on-premise install and so will need to be approved by a customer’s operations and security teams – but it’s a sign of the times and of the direction of travel.  Obviously AWS want customers to go “all in”, migrate everything including the kitchen sink and then shut down the data centre, but the reality from our work with large global enterprise customers is that the business case for application migrations rarely stacks up unless there is some other compelling event (such as a data centre contract expiring).  However, along with the database migration service etc., they are steadily removing the hurdles to migration, making those marginal business cases just that little bit more appealing…

What are your thoughts? Leave a reply below, or contact me by email.

Brand AI: The invisible omni-channel for retailers?

Digital can pose a range of risks for a bricks and mortar (B&M) retailer including:

  • Declining market share as customer loyalty to its established, traditional brand is eroded away by disruptive new on-line entrants and more innovative high street competitors
  • Poor ROI from implementing new in-store digital technologies because they fail to create a superior personalised customer experience across its physical and online channels
  • The inability to deliver better inventory management using big data and analytics due to immature organisational capabilities in these areas across its supply chain

So how could scalable retail artificial intelligence in the cloud – Brand AI – potentially turn these challenges into unique opportunities for competitive advantage during the next five years? Here are some disruptive ideas…

Brand AI as a personal human relationship

A retailer could personify its brand as a virtual customer assistant accessible anywhere, anytime using voice and text commands from a mobile device. But unlike today’s arguably bland, soulless smartphone versions that focus on delivering simple functionality, Brand AI would have a unique, human character that reflects the retailer’s values to inform its interactions and maturing relationship with an individual customer. Intended to be more than another ‘digital novelty’, this disruptive form of customer engagement builds on and enhances a B&M’s traditional brand as a trusted long-term friend throughout the entire customer journey by offering compelling, timely presale insights, instant payment processing and effective after sales support and care.

Brand AI as an invisible omni-channel

A customer is empowered to select what personal data they choose to share (or keep private) with the Brand AI to enrich their relationship. Social, location, wearable or browsing and buying behaviour data from complementary or even competing retailers could, potentially, be shared via its cloud platform. The Brand AI can analyse this liquid big data using its machine-learning capabilities to create dynamic real-time personalised actionable insights seamlessly across a customer’s physical and digital experience – it is the heartbeat of the retailer’s invisible omni-channel offering.

Critically, Brand AI can transform every retail store visit into a memorable, exclusive customer experience distinct from anything a competing digital disruptor could offer. For example, the Brand AI can advise in-store sales staff in advance what specific products a customer wants or needs that particular day to help personalise this human interaction, provide on-the-spot guidance and critical feedback about products available immediately to drive a purchasing decision, or tailor in-store digital experiences such as virtual reality or media walls to create genuine moments of customer delight. In addition, the AI can capture, via wearables, the customer’s emotional and physical reactions to these experiences (such as a raised heartbeat when seeing a new product for the first time). Such insights can then be explored later by the customer (including socially with family and friends) using the AI on the retailer’s integrated digital channel to sustain their retention.

Brand AI as an operating model

A further opportunity for using Brand AI is its potential to streamline inventory management, improving the customer experience and reducing operating risk.  Key processes such as store returns and transfers could benefit from such an approach – not only would the invisible omni-channel AI enable a customer to easily raise the need to return goods, it could also capture the specific reasons why this is happening (rather than this information having to be interpreted by different customer service staff using prescriptive reason codes, for example).  Also, because the Brand AI has an established personal relationship with the customer, it can proactively order a replacement for home delivery or pick-up (in store or at another convenient location), or suggest a suitable alternative product or other cross-sell opportunities to keep the customer satisfied and minimise revenue losses for the retailer.

Managers can also use the AI to help interrogate and identify trends from this complex dataset on returns and transfers. Inventory management reporting and insights are available on demand in a manager or team’s preferred format (such as data visualisation) to support stock purchasing decisions, resolution of supply chain performance issues or investigate region or store specific fraud and theft. And because these analytics are running in the cloud they can be aligned to existing organisational capabilities in this area.

The illustrative benefits a bricks and mortar retailer could potentially realise from scalable artificial intelligence in the cloud (Brand AI) during the next five years include:

  • Refreshes the competitive advantages of an established, traditional high street retail brand using new disruptive forms of marketing and customer advocacy
  • Materially de-risks strategic investment in new in-store digital technologies by explicitly linking these capabilities to an holistic, long-term customer experience
  • Can improve organisational agility by using big data and analytics capabilities to enhance existing business processes that directly benefit the retailer and its customers

If you would like more information about how big data and analytics can benefit your organisation please contact the Sopra Steria Digital Practice.