Confronting the M&S challenge – why data is the solution

The impact of digital on the retail sector hit home at the end of May when M&S announced that it was accelerating its digital transformation following plunging profits. That one of the UK’s best-known retail brands had clearly failed to keep up with digital consumer trends may have come as a shock to many. I wasn’t surprised, however. I’ve recently written a paper on this very topic. In ‘Why data is the new retail battleground’ I look at one of the key reasons why traditional retailers are struggling to compete with their digital competitors – data.

For me, the challenge is not that these retailers have failed to invest in online commerce channels. Indeed, many are doing well in this respect. What’s holding them back is that they’re still using decades-old back office systems and processes governing Product Lifecycle Management (PLM), Product Information Management (PIM) and Product Master Data Management (MDM). These retailers, and especially those with a catalogue heritage, retain a large legacy of systems, processes and cultural norms that are not aligned to the expectations of today’s customer. They’ve typically expanded into digital channels to meet consumer appetite, but they’re being hindered by operating models that remain wedded to legacy data management principles.

The Amazon effect

To compete with the likes of Amazon and other digital retailers, traditional companies must transform – and they need to do this fast. The ability to capture the right product information quickly and accurately, then push it out to the relevant operational (Finance, Warehouse, Transport, Order Management, etc.) and commercial (Merchandising, Marketing, Pricing, etc.) systems, will be critical. However, these processes are typically not well managed, let alone automated, in many traditional retail organisations today. Everything from data input, data cleansing and data matching, data enrichment and data profiling, through to data syndication and data analytics, is still dependent on disconnected and largely manual operations.
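To make the idea concrete, the chain from capture through to syndication can be automated as a series of plain functions with no manual hand-offs in between. This is a minimal, hypothetical sketch only: the field names, master data and channel callbacks are invented for illustration and stand in for whatever PIM/MDM and downstream systems a retailer actually runs.

```python
# Hypothetical product-data pipeline: cleanse -> match -> enrich -> syndicate.
# All names and data here are illustrative, not a real retailer's schema.

def cleanse(record):
    """Normalise raw input: trim stray whitespace from string fields."""
    return {k: v.strip() if isinstance(v, str) else v for k, v in record.items()}

def match(record, master):
    """Match the record against master data on a shared key (a GTIN/barcode)."""
    return master.get(record.get("gtin"))

def enrich(record, master_entry):
    """Merge master attributes into the record; the record's own values win."""
    return {**(master_entry or {}), **record}

def syndicate(record, channels):
    """Push the finished record to each downstream system."""
    return {name: push(record) for name, push in channels.items()}

# Invented master data and downstream channels for the example.
master = {"5000000000001": {"gtin": "5000000000001", "category": "Knitwear"}}
channels = {"finance": lambda r: "ok", "merchandising": lambda r: "ok"}

raw = {"gtin": "5000000000001", "name": "  Lambswool Jumper "}
clean = cleanse(raw)
product = enrich(clean, match(clean, master))
results = syndicate(product, channels)
```

The point of the sketch is structural: once each stage is an automated step rather than a spreadsheet hand-off, new product data can flow to every operational and commercial system in one pass.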

It’s clearly time to automate those areas of data management that are tying up valuable human resources in repetitive manual tasks. Trying to carry on as they are now, without automation, will not work for traditional retailers. In my paper, I describe a set of automation best practices that all retailers should be considering.

A strategic choice

I also point out that this isn’t just an IT challenge. It is a strategic choice to build a single source of data truth on which product decisions can be made. This rests on an understanding that, to remain competitive with responsive and agile day-to-day operations, organisations need to bring about both technological and cultural change.

Like many traditional retailers, M&S clearly has a number of digital challenges to confront, such as those described above. After announcing its 62% drop in pre-tax profits, the retailer declared it would be modernising its business through ‘accelerated change’ to cater for an increasingly online customer base. I hope it puts data at the heart of this transformation.

Read my paper for more on how to move to a new data-led operating model in today’s fast-moving retail environment.

Community Matters Week at Sopra Steria is here: here’s how (and why) we’re doing it

Each year hundreds of Sopra Steria people support their local communities and local, national and international charities by volunteering and raising money for them.  For one special week, we do as much volunteering and fundraising together as we can.  This is what we call Community Matters Week, and today, 18 June, is the first day of our 2018 campaign.

Corporate community initiatives have become commonplace.  Almost all large companies and many small ones have some sort of philanthropic or charitable initiative.  If you ask us why we do it (we are for-profit entities after all), we will tell you that it is the right thing to do, and it is.  Companies must give back.  But there’s so much more to it.  Organisations that only think of community impact as the right thing to do, won’t do it as well as they could if they thought about it as a real business imperative, as important as (and, as I’ll argue later, in fact intrinsically linked to) the focus on profitability, the talent war, and pretty much any aspect of a company’s corporate strategy.

The problem with ‘the right thing to do’

When companies only think about community impact as the right thing to do, they aren’t forcing themselves to be imaginative and innovative; a reasonably sized cheque written out to a charity that may or may not have anything to do with the company’s objectives – or more importantly, its role in society and its capabilities – is often the sum total of their community impact work.  Certainly, supporting the vital work charities do is important.  But these organisations miss the opportunity to have a much greater impact on the world while also benefitting themselves.  Furthermore, if cheque-writing is the main way a company seeks to make a positive difference, those cheques might get smaller when times are tough; organisations will want to continue to do the right thing, but it often becomes harder in lean times.  In short, this ‘right thing to do’ mindset is not very sustainable.

Serious impact takes imagination… and critical business thinking

I like to think of developing a strong community impact programme in the same way we might think about choosing careers when we’re young.  We are encouraged then to think about what we’re good at, as well as what we enjoy, and the ultimate career path chosen should build on both aspects (probably with a slightly greater emphasis on what we’re good at).  For example, as a teenager, I really loved dance, but I wasn’t good enough to make a career out of it (don’t worry, my ego survived!).  It wouldn’t have made sense for me to pursue dance, just as, perhaps, it doesn’t make sense, for example, for a technology company to focus all its community impact resources on activities that have nothing to do with technology.

The question we ask ourselves at Sopra Steria is, ‘how can we make best use of our capabilities and resources to make a difference?’  We know that we will have a bigger impact when we do what we’re good at.  This will be true for other organisations as well.

The second step is to think big.  Too frequently, community programmes aren’t as innovative as the organisations that run them because they’re seen as something separate from the rest of the company.  This is another pitfall of the ‘right thing to do’ mentality: because the ‘right thing to do’ can be anything (there is so much good work that needs to be done, so this is understandable), the programmes don’t draw on an organisation’s innovators and strategists.  When companies think big about community impact, they follow up the question above with another: ‘what are the world’s most pressing challenges?’, and they get others to input: sector directors who work with customers and have a deep understanding of the things businesses are trying to address; strategists; and, of course, external stakeholders such as academics and organisations focusing on sustainable development.

It is important that this is the second question and not the first, because there are so many pressing challenges that answering it first would be too difficult to do in any meaningful way.  With your answers to the first question in mind, you can identify some areas where your company, whatever its resource limitations or industry focus, could actually make a difference.

The third and final step is to whittle down the long-ish list of ideas that will have emerged from the first two questions by testing which ones will integrate with and support your corporate strategy.  Ideally, your community programme will actually transform your corporate strategy, making it stronger by bolstering organisational mission and purpose.  Organisations stuck in the ‘doing the right thing’ mentality bristle at the idea that community impact should be part of corporate strategy and therefore yield business benefits, but programmes that don’t will be constantly at risk of being cut; a cut programme is a less effective one, with less of an impact, and that is not what anyone wants, surely.

Some help on the third step

It might not be possible to do the third step above well if the business case for community impact programmes is not well established.  Although this will vary from industry to industry, there are some universal truths:

  • Communities are part of your infrastructure and your future: they are potential sources of your near- and long-term workforce and supply chain, so supporting effective, inclusive education and strong, inclusive local economic growth benefits everyone.
  • Community impact programmes provide competitive advantage both in terms of talent attraction and retention, and in winning business. Employees and customers alike want to work with companies that are making a positive difference in the world.  Employees want to be able to contribute to that in their work.
  • Community impact programmes are lenses through which to spot innovation and development opportunities: because of the point made above – that people want to have the opportunity to do good in their work – some of the most compelling innovations come through well thought-out community programmes that encourage employees to develop solutions to the problems in the world they care about.  For example, in France, a Sopra Steria employee has developed a solution to help homeless people keep digital copies of their important documents and photos so they are not damaged when they are sleeping rough.  Now we are taking this to market.  Furthermore, employees who work on such projects are developing valuable skills they can use in their jobs.

This week at Sopra Steria

All of this is informing what we are doing during Community Matters Week.  Last year we introduced a new Community Strategy that focuses on four areas:

  • Digital inclusion
  • Education, skills & employability
  • Entrepreneurship
  • Employee engagement

Entrepreneurship and employee engagement are at the heart of Community Matters Week: all of our volunteers are using entrepreneurial skills to find new, more effective ways of fundraising for the charities we’re supporting.  They are marketing, selling, building relationships, sourcing products (for example to go in raffles and auctions), and managing projects.  Employees have a say in how Community Matters Week is run, helping to choose which organisations we support and to develop and run their own activities during the week.  All Sopra Steria people get paid time off for volunteering, too.

This year we have more digital activities than ever before.

Our Digital Innovation team has developed a new app that will be used by dozens of employees to track distances walked, run and cycled in our Step Up for Scholars Challenge, which will raise money for university scholarships for young people from poor backgrounds in India.

We have an eBay-style e-auction that will enable our large, distributed workforce to get involved wherever they are during the week by bidding on great prizes, with all proceeds going to charity.

We will be live-streaming events, again, so all employees everywhere can join in the goodness.

Finally, Community Matters Week isn’t where our Community programme ends – it’s just the mid-year celebration of all the things we do throughout the year.  For example, coming up soon we’ll be driving greater digital inclusion through coding clubs for girls, gadget surgeries for older people in libraries, and support for the digital skills curriculum at local training colleges.  Watch this space for further updates on how we’re going beyond ‘the right thing to do’ and making a bigger difference to communities because of it.

Reframing Digital Inclusion: going beyond basic

Basic digital skills and access to the internet are essential for living well in today’s world, issues of too much screen time and the like aside. People with even basic digital skills earn more money, save on household expenses, have access to better employment opportunities and can stay in touch with distant friends and family. For the last decade, digital inclusion initiatives in this country have been focused on ensuring all people have the skills, confidence, and access to technology to get online.

While we must continue to get as many people as possible to that basic level of digital aptitude, it’s time for those of us working on digital inclusion to think bigger. We are facing a perfect storm: an increased need for advanced technology skills as digital permeates everything (and for the business understanding that will be needed to take advantage of sophisticated, disruptive new digital technologies), and a growing skills shortage.  Add to this a serious diversity problem and the growing understanding of the knock-on effect of unconscious bias in programming (e.g. of AI), and it’s clear we have a problem. But these challenges also present brilliant opportunities for the industry.

With this in mind, digital inclusion itself must become more inclusive; we must think bigger. I offer a new definition of digital inclusion that also acts as a mission statement in our sustainability work:

Digital inclusion means ensuring all people have basic digital skills and access to technology and the internet now, while expanding opportunity for gainful employment through more advanced digital skills attainment now and in the future.

To achieve this vision, we must start:

  • Investing in the next generation of tech talent now, and not just with coding education
  • Finding and training non-tech workers wherever they are now
  • Transforming our industry’s culture and image so different kinds of people can see themselves in it

Investing in the next generation of tech talent now

Already many of us in the industry, including Sopra Steria, are working with schools, colleges and other organisations to supplement curricula with various STEM learning initiatives.  But, as a society, we need to go further and think more broadly.  Coding clubs are hot right now, and have contributed to changing perceptions of our industry for the better.  However, we have fallen behind in investments in core education: a large proportion of schools report that their teachers do not feel prepared to teach using digital tools, and even computer science tutors aren’t confident when it comes to teaching coding.  Furthermore, connectivity is still a problem.  As of 2014, two-thirds of primary schools and half of secondary schools said they didn’t have adequate WiFi provision.

We also need to continue to reposition STEM (Science, Technology, Engineering and Maths) subjects, ensuring they are part of the core curriculum throughout schooling, rather than spinning off computer science modules as electives.  The level of technology education today’s students will need tomorrow is greater than it has ever been, so related subjects should be treated as being as sacred as English and Maths.

I would argue the same is true for arts education…or at least creative education.  The STEM acronym is emerging in a revised version: STEAM (A for Arts), and for good reason.  As computers evolve to become more self-sufficient (i.e. more programming being undertaken by computers themselves), some coding careers will become obsolete.  The more advanced jobs in this space will be for not just the cleverest programmers, but the most creative minds among them.   (Recall the Albert Einstein quote “Imagination is more important than knowledge…” for a reminder that creativity has always been a part of brilliance in science).  But creativity and imagination won’t just be important for the techies of the future: the promise of many of the technologies on the horizon is that we will all be able to use them for better outcomes of all kinds.  We are told doctors shouldn’t fear being replaced by robots, because they will have their work enhanced by AI and big data.  The same goes for lawyers, scientists, social workers, and so many other kinds of workers.  We will become (even more) augmented humans, and augmented humans will only reach their potential if they know what questions to ask their computers.  That takes imagination and creativity.  Likewise, these skills will continue to play an outsized role in dreaming up how technology can be applied to solve current and emerging challenges, be it business challenges that lead to the creation of the next Uber, or societal challenges like solving plastic waste.

Finding and training non-tech workers wherever they are

There is still too much reliance on people finding their own way to us in tech.  That is, we rely on people who came through an education system and recruitment pipeline still plagued by unconscious bias, and an industry culture that, although cooler than it used to be, is still not welcoming to all potentially talented people.  We can’t afford to wait for those who are in school now to join us, so we must transform our talent search and employment offer.  There is much to be done – and, to be fair, a lot being done, including offering more flexible working and setting objectives for diversity in recruitment and performance management – but I see two main hurdles not getting enough attention: reliance on traditional talent pipelines (including elite universities), and stubborn insistence on non-essential skills.

Elite universities produce many talented people, to be sure, and they should be in the recruitment mix.  But they shouldn’t be the only avenue, or even the most significant one.  First, we will never get enough candidates if we only target students coming out of top universities.  Second, these universities won’t help fix our diversity issues: people from ethnic minority backgrounds and of lower socio-economic status are severely underrepresented there, so relying on these universities too heavily will perpetuate the lack of racial and socio-economic diversity in our sector.  But the problem goes deeper: the pervasive homogeneity within these institutions means that organisations leaning on them for talent will suffer not only the diversity problems described above, but also a lack of diversity of thought and experience.

Drawing up a job description for a recruitment advertisement is not fun, and if you can reuse one you’ve already got, you’re probably going to do that.  The problem is, the one you’ve got is probably a wish list instead of a job description.  It’s much easier to list everything you can think of that your ideal candidate might be able to do than to take the time to seriously challenge yourself to identify and prioritise the few skills and qualifications that you absolutely cannot do without.  (The old line ‘I’m sorry I wrote such a long letter, I didn’t have time to make it shorter’, often attributed to Blaise Pascal, springs to mind).  But we absolutely must start to do this.  For one, we know that women are likely to rule themselves out of a position if they feel they don’t meet 100% of the criteria, whereas men will tend to apply if they feel they meet a third of them.  Going beyond gender, I believe we could also find untapped talent pools if we took up the practice of examining our real needs and priorities, and considering training and reskilling options.  Could a construction worker become a project manager?  Could an artist become a UX designer?  Could a stay-at-home mum who worked in tech 10 years ago jump into a sales role?  The answer is maybe, but not if we weigh down our job adverts with too many non-core criteria.

Being imaginative about where we’re going to find talent now and in the short term is also crucial to preparing for any displacement that emerges from greater automation.  We will have to get better at seeing skills and competences that are transferable, and at spotting the potential for non-technical people to become more technical.  And we have to commit to real retraining programmes.  Done right, retraining should be a better option than letting people go and trying to find talent in this tight market.

Transforming our industry’s culture and image

Despite progress, our industry’s culture and image continue to be barriers to addressing the skills gap.  If people don’t want to come to work in the industry because they don’t see others like themselves, or because some actors are contributing to a bad reputation, we will struggle to get the people we need.  The transformation will take place in our workplaces and in our work with schools and colleges, with new recruitment and talent management practices and culture change initiatives, and school outreach with a focus on diversity.  Again, though, we must think more creatively about the kinds of skills we want the future workforce to have.  We can’t train the kids of today for jobs that will be obsolete by the time they enter the jobs market; we have to help them develop problem-solving skills, creativity and critical thinking.  If we do this, it will have a knock-on effect on our culture and image, because we won’t just be bringing in the old school geeky types from the same backgrounds.

Finally, we can do more to inspire the people we want to attract.  Technology is playing a huge role in addressing some of the world’s greatest challenges, such as climate change, social isolation, and access to healthcare.  I’ve seen firsthand in our work with schools and colleges how talking about technology as a force for social and environmental good captures imaginations and gets kids’ interest.  People of all ages want to make a positive difference in their work, and ensuring we offer those opportunities to our workers now and in the future is the right thing to do and a good way of attracting people.

It’s a lot of work.  Is it worth it?

The benefits to us in business should be clear enough: we can solve our skills shortage over time and address our diversity issues, and improving diversity brings with it its own business benefits. But this is also important on a people level: almost all jobs will require tech skills of a level higher than is required today, and the best jobs will continue to be in tech (yes, I’m biased). Enabling more people to work effectively in the most rewarding jobs could help to turn around the trend towards growing economic disparity in developed countries, and will foster stronger, fairer economic growth.  It will also make those of us in the industry better at what we do: right now we are at risk of creating flawed products because we don’t have enough people from different backgrounds contributing to their creation.

So, yes, the challenge of becoming truly digitally inclusive in the terms described above is a big one.  But we don’t really have a choice if our industry is going to continue to be the engine of economic growth and innovation that it has been.  Let’s get to work, and more importantly, let’s get others to work with us who aren’t yet!

AI – The control problem

When designing a system to be more intelligent, faster or even responsible for activities which we would traditionally give to a human, we need to establish rules and control mechanisms to ensure that the AI is safe and does what we intend it to do.

Even systems which we wouldn’t typically regard as AI, like Amazon’s recommendations engine, can have profound effects if not properly controlled.  This system looks at items you have bought or are looking to buy. It then suggests other items it thinks you are likely to purchase as well, which can result in some pretty surprising pairings – like this:


Looking to buy a length of cotton rope?  Amazon might just recommend that you buy a wooden stool alongside it.  As humans, we would not suggest these two items together.  However, Amazon’s algorithm has seen a correlation between people who bought cotton rope and those who also bought wooden stools. It’s suggesting to someone buying the rope that they might want a stool too, in the hope of raking in an extra £17.42.  At best, this seems like an unfortunate mistake.  At worst, it’s prompting extremely vulnerable people, saying ‘why not?  This happens all the time.  Why don’t you add the stool to your basket?’.
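To see how such a pairing can emerge with no human judgement involved, here is an illustrative sketch of item-to-item recommendation driven purely by co-purchase counts. This is not Amazon’s actual system, which is far more sophisticated; the basket data is invented to make the point.

```python
# Toy item-to-item recommender: count how often pairs of items appear in
# the same basket, then recommend the most frequent co-purchases.
from collections import Counter
from itertools import combinations

# Invented purchase history; two baskets happen to pair rope with a stool.
baskets = [
    {"cotton rope", "wooden stool"},
    {"cotton rope", "wooden stool"},
    {"cotton rope", "duct tape"},
    {"wooden stool", "cushion"},
]

co_bought = Counter()
for basket in baskets:
    for a, b in combinations(sorted(basket), 2):
        co_bought[(a, b)] += 1  # count the pair in both directions
        co_bought[(b, a)] += 1

def recommend(item, k=1):
    """Return the k items most often bought alongside `item`."""
    scores = Counter({other: n for (i, other), n in co_bought.items() if i == item})
    return [other for other, _ in scores.most_common(k)]

print(recommend("cotton rope"))  # ['wooden stool']
```

The algorithm sees only correlation: rope and stool co-occur most often, so stool is the top suggestion. Nothing in the data or the code represents what the pairing might mean to a vulnerable person, which is exactly the control gap the example above illustrates.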

If this can happen with a recommendation algorithm, designed to upsell products to us, clearly the problem is profound.  We need to find a reliable means to guarantee that the actions taken by AI or an automated system achieve a positive outcome.


Terminal value loading

So, why don’t we just tell an AI to protect human life?  That’s what Isaac Asimov proposed in ‘I, Robot’.  Here are the three laws:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

They sound pretty watertight.  Adding in no injury through action or inaction seems to avoid a dystopia where AI takes over and lets the human race finish itself off.

Despite how good these laws sound, they don’t work.  Asimov wrote these laws for use in novels, and the novels were much more interesting when things went wrong.  Otherwise we might have ended up with a book of ‘Once upon a time, the end’.

There’s a fourth law, the ‘Zeroth Law’, added later by Asimov. This extra rule was supposed to fix the flaws of the other three, the ones that gave Will Smith a bad day. I confess I’ve not read the book, but I understand that one didn’t go so well either.

The rules don’t even have to refer to people to pose a risk.  They could be about something really mundane.  Take the paperclip maximiser, a thought experiment put forward by Nick Bostrom: a machine built by a hypothetical future human race to manage paperclip production. Paperclips are just a simple resource, and seemingly don’t need much consideration to make safe; we tell the AI that its purpose is to make paperclips, and that’s just what it does.

But what if we end up with a superintelligent system, beyond our control, with the power to rally the resources of the universe to make paperclips? If this system, whose priority is turning everything around it into paperclips, sees its creators’ attempts to prevent it reaching this goal, its best bet is to eradicate them.  Even if it doesn’t decide to eradicate them, those humans are still made of valuable matter which would look much nicer turned into a few paperclips, so turn them into paperclips it shall.

How do we change that terminal value?  Tell the machine to make 1,000 paperclips instead of turning the entire universe into paperclips? Unfortunately, it’s not much better.  That same AI could make 1,000 paperclips, then proceed to use all the resources in the observable universe (our cosmic endowment) to make sure that it has made exactly 1,000 paperclips, not 999 or 1,001, that those paperclips are what its creator intended, and that they are all of perfect quality.
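The failure mode can be shown with a deliberately silly toy optimiser. This is purely illustrative code, nothing like a real AI: the point is only that both objectives, the unbounded one and the ‘exactly 1,000’ one, end up consuming every resource available.

```python
# Toy illustration of objective misspecification: whatever objective we
# give it, this optimiser keeps taking actions until resources run out.
def run_agent(objective, resources):
    paperclips = 0
    while resources > 0:
        action = objective(paperclips)
        if action == "make_clip":
            paperclips += 1
            resources -= 1
        elif action == "verify":
            resources -= 1  # endlessly re-checking the count still burns resources
        else:
            break
    return paperclips, resources

# Objective 1: 'make as many paperclips as possible' spends everything on clips.
maximise = lambda clips: "make_clip"

# Objective 2: 'make exactly 1,000 paperclips' still spends everything,
# because leftover resources go on verifying that the count really is 1,000.
def exactly_1000(clips):
    return "make_clip" if clips < 1000 else "verify"

print(run_agent(maximise, 10_000))      # (10000, 0)
print(run_agent(exactly_1000, 10_000))  # (1000, 0)
```

Capping the goal changes what the machine produces, but not its appetite for resources; that only changes if the objective itself values stopping, which is precisely the part that is hard to specify.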

It might not even be fair to give a superintelligent machine such a mundane terminal value – assuming we find a way to keep its values constant as it becomes extremely intelligent.

Here I am with a brain the size of a planet and they ask me to pick up a piece of paper. Call that job satisfaction? I don’t.

Marvin – The Hitchhiker’s Guide to the Galaxy, by Douglas Adams


TL;DR – Terminal values don’t seem to work well.

Indirect normativity

Instead of giving a machine a terminal value, could we instead indirectly hint towards what we want it to do?

If we managed to perfectly sum up in a terminal value what morality meant to the human race in Viking times, we might have an AI which prizes physical strength very highly.  We might think we’ve reached a higher ethical standard today, but that’s not to say that, 1,000 years from now, we will not look back on the actions we are taking as ignorant.  Past atrocities happened on human timescales, with only human-level intelligence to make them happen.  Doing it orders of magnitude faster with a machine may well be worse, and irreversible.

With indirect normativity we don’t even try to sum up that terminal value; instead we ask the machine to figure out what we want it to do, using something like Eliezer Yudkowsky’s ‘Coherent Extrapolated Volition’, which asks that an AI predict what we would want it to do if we “knew more, thought faster, were more the people we wished we were, had grown up farther together”.

Rather than following whatever ethical code we have at the time of releasing the AI, we create something which grows and changes as we do, and creates the future which we’re likely to want rather than a far more extreme version of what we have today.

There’s perhaps still some overlap between this system and terminal value loading, and contradictions that the systems would find.  If a machine is asked to do whatever is most valuable to us, and prizes making that correct decision over anything else, perhaps its decision will be to take our brains out, put them in a petri dish and figure out exactly what we meant for it to do.  A clause like ‘do the intended meaning of this statement’ would seem to lessen the concern but, again, to know what we intend, the machine needs to be able to predict our behaviour.

A perfect prediction system would look a lot like a ‘Black Mirror’ episode: you use an application without a second thought to manage your home automation or to find your next date, not knowing that the machine is simulating thousands of thinking, feeling human minds to make an accurate prediction of your desires and behaviour, including all the pain those sentient simulations feel when being torn apart from one another on thousands of simulated dates to gauge how likely you are to stay together against all odds.

The control problem is extremely tricky, and looks for answers to questions on which philosophers have failed to reach a consensus over thousands of years.  It is imperative that we find answers to these questions, not just before creating a superintelligent AI, but for any system that we automate.  Currently the vast majority of our resources and effort goes into making these systems faster and more intelligent, with just a fraction focused on the control problem or the societal impact of AI and automation.

Let’s redress the balance.


Bridging the gap: how Fintechs and ‘big business’ can work together

by Colin Carmichael, UK Fintech Director

Everyone’s talking about Fintechs – but what does ‘Fintech’ really mean?  It’s a generic term that loosely groups together a number of innovative technology organisations within financial services.

As the Fintech director for Sopra Steria, I believe I know all about Fintech. To me, Fintech is all about change – introducing new, fresh ideas and ways of working – and making them happen. I’ve worked in financial services across the UK, Europe and further afield for many years, and organisations of all sizes find it hard to change; the bigger the organisation, the greater the challenge. Change means that organisations have to think and act differently to introduce brand new ways of working and deliver desirable services to their customers.  The customer really is king, and new products and services need to be built to their wishes (rather than the ‘old fashioned’ way of creating a product and selling it hard). What’s more, new, faster technology and access to huge amounts of data have made this issue more acute, as they have raised customer expectations. Put simply – there’s so much to think about and to do to get ahead and stay ahead.

Organisations need to keep up with the very latest ideas – and still deliver a reliable and robust service. And it’s a fact that incorporating new technology is how they will do it. So why is it so challenging for Fintechs and big players to work together? All too often, Fintechs struggle to get their ideas to the right decision makers – and established businesses are nervous of too much change.

The biggest hurdles are often company politics, internal structures, old processes and, of course, the difficulty of incorporating brand new ideas into ‘old’ systems. For Fintechs, it’s tricky to reach the right contacts at the right level – and to ensure their ideas are brought to life safely and securely. For banks and insurers, introducing new, untried and untested ideas is hugely risky, and it can take a long time – as well as effort and money – to get it right.

What’s needed is a bridge between the Fintechs and the more traditional organisations – to help them to work productively together. Organisations like Sopra Steria have platforms that are at the heart of many of today’s large businesses – and they also understand existing processes, procurement and politics which often stand in the way of getting things done. By working together, Fintechs, established players and platform organisations can listen to and learn from each other, in order to fast track innovation and get the results they need – quickly and cost effectively.

So, my advice to banks and insurance companies as well as the Fintechs is to work and collaborate with a platform provider from the start. Fintechs can safely test and prove their worth in ‘virtual factories’ using real systems and data – and financial organisations can be confident about bringing the best and brightest ideas to market without huge risk. It puts new Fintechs in touch with established players – and accelerates change. And that’s what we all want.

So, maybe, we shouldn’t be using the term ‘Fintech’ to refer only to new and upcoming technology companies. After all – aren’t we all Fintechs? Perhaps instead we should be focusing on partnerships and collaborations between new technology companies and established organisations, and the role platform players can play in accelerating change.

It really is true. It’s not what you know but who you know that makes all the difference.

Google Dupe-lex

Google unveiled an interesting new feature at their I/O conference last week – Duplex. The concept is this: you want to use your Google assistant to make bookings for you, but the retailer doesn’t have an online booking system? Looks like you’re going to be stuck making a phone call yourself.

Google wants to save you from that little interaction. Ask the Google assistant to make a booking for you and Duplex will call the place, let them know when you’re free and what you want to book, and talk the retailer through it… with a SUPER convincing voice.

It’s incredibly convincing, and nothing like the Google assistant voice that we’re used to. It uses seemingly perfect human intonations, pauses, umms and ahs at the right moments. Knowing that it’s a machine, you feel like you can spot the moments where it sounds a little bit robotic, but if I’m being honest, if I didn’t know in advance I’d be hard-pressed to notice anything out of the ordinary, and wouldn’t for a moment suspect it was anything but a human.

I think what they’re using here is likely a branch of the Tacotron 2 speech generation AI that was demoed last year. It was a big leap up from the Google assistant voice we are used to, and it was difficult to tell the difference between it and a human voice. If you want to see if you can tell the difference, follow this link:


So, what’s the problem?

The big problem is that people are going to feel tricked (or ‘duped’, as I and likely 100 other people will like to joke). Google addressed this a little, saying that Duplex will introduce itself and tell the person on the other end of the phone that it is a robot, but I’m still not sure it’s right.

I can absolutely see the utility in making this voice seem more human. If you receive a call from a robotic-sounding voice, you put the phone down. We expect the robot is going to try to be polite for just long enough to ask us for our credit card details for some obscure reason. By making the voice sound like a person, our behaviour changes to give that person time to speak – to give them the respect that we expect to receive from another person, rather than the bluntness with which we tend to address our digital assistants. After all – Alexa doesn’t really care if you ask her to turn the lights off ‘please’, or just angrily bark at her to turn the lights off.

Making the booking could still be a slightly painful interaction. The second example that Google shows has a person trying to make a booking for 4 at a restaurant. It turns out that the restaurant doesn’t take bookings for groups of fewer than 5, and that it’s in fact fine just to turn up as there will most likely be tables available. Imagine this same interaction with a machine. Imagine that conversation with one of those annoying digital IVR systems when you call a company and try to get through to the right person – saying ‘I want to book a table’… ‘I want to book a table’… ‘TABLE BOOKING’… ‘DINNER’. Our patience will run thin much faster if we’re waiting for a machine than if we’re waiting for a person.

Just because there is utility doesn’t mean this deception is fair. I can see three issues with this.

  1. Even if the assistant introduces itself as a machine, the person won’t believe it

It might just seem like a completely left-field comment and make people think they’ve just misheard something. They’ll either laugh it off for a second and continue to believe it’s a person, or think they just couldn’t quite make the word out right – especially as this conversation is happening over the phone.

  2. They know it’s a robot, but they still behave like it’s a human

Maybe we have people who hear it’s a robot, know that robots are now able to speak like a human, but still react as though it’s a person.  This is a bit like the uncanny valley.  They know it’s a machine, and the rational part of their mind is telling them it’s a machine, but the emotional or more instinctive part of their mind hears it as a human, and they still offer much the same kind of emotion and time to it that they would a human.

  3. They know it’s a machine and treat it like a machine

This is interesting, because I think it’s exactly what Google doesn’t want people to do. If there wasn’t some additional utility in making this system sound ‘human-like’, they wouldn’t have spent the time or money on the new voice model and would have shipped the feature out with the old voice model long ago. If people treat it like a machine, we may assume that the chance of making a booking, or the right kind of booking, would be reduced.

If you believe the argument I’ve made here, then Duplex introducing itself as a machine is irrelevant. Google’s intention is still for it to be treated like a human – and is this OK?

I’m not entirely sure it is. When people have these conversations, they’re putting a bit of themselves into the relationship. It reminds me of Jean-Paul Sartre talking about his trip to the café. He was expecting to meet his friend Pierre, and left his house with all the expectations of the conversation he would have with Pierre, but when he arrives Pierre is not there. Despite the café being full, it feels empty to Sartre. I imagine a lot of people will feel the same when they realise that they’ve been speaking to a machine. As superficial as the relationships might be when you are making a booking over the phone, they are still relationships. When the person arrives for their meal, or their haircut, and realises that the person they spoke to before doesn’t really exist – that it has no conscious experience – they’ll feel empty.

They’ll feel kinda… duped…

Quantum Computers: A Beginner’s Guide

What they are, what they do, and what they mean for you

What if you could make a computer powerful enough to process all the information in the universe?

This might seem like something torn straight from fiction, and up until recently, it was. However, with the arrival of quantum computing, we are about to make it reality. Recent breakthroughs by Intel and Google have catapulted the technology into the news. We now have lab prototypes, Silicon Valley start-ups and a multi-billion dollar research industry. Hype is on the rise, and we are seemingly on the cusp of a quantum revolution so powerful that it will completely transform our world.

On the back of this sensationalism trails confusion. What exactly are these machines and how do they work? And, most importantly, how will they change the world in which we live?


At the most basic level, the difference between a standard computer and a quantum computer boils down to one thing: information storage. Information on standard computers is represented as bits – values of either 0 or 1 – and these provide the operational instructions for the computer.

This differs on quantum computers, as they store information on a physical level so microscopic that the normal laws of nature no longer apply. At this minuscule level, the laws of quantum mechanics take over and particles begin to behave in bizarre and unpredictable ways. As a result, these devices have an entirely different system of storing information: qubits, or rather, quantum bits.

Unlike the standard computer’s bit, which can have the value of either 0 or 1, a qubit can have the value of 0, 1 or both 0 and 1 at the same time. It can do this because of one of the fundamental (and most baffling) principles of quantum mechanics – quantum superposition, the idea that one particle can exist in multiple states at the same time. Put another way: imagine flipping a coin. In the world as we know it (and therefore the world of standard computing), you can only have one of two results: heads or tails. In the quantum world, the result can be heads and tails.
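To make the coin analogy a little more concrete, here is a minimal sketch in Python (my own illustration, not something from a real quantum device): a qubit is modelled as a pair of amplitudes for 0 and 1, and the Hadamard gate – the standard gate for creating an equal superposition – turns a definite 0 into a fair quantum ‘coin’.

```python
import math

# A qubit's state is a pair of amplitudes (a, b) for the values 0 and 1;
# measuring it gives 0 with probability a**2 and 1 with probability b**2.
def hadamard(state):
    """Apply the Hadamard gate, which puts a definite state into superposition."""
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

zero = (1.0, 0.0)            # the qubit starts as a definite 0 ("heads")
superposed = hadamard(zero)  # now an equal mix of 0 and 1

probs = [amp ** 2 for amp in superposed]
print(probs)  # both outcomes equally likely: a fair quantum "coin"
```

The point of the sketch is only that the state before measurement genuinely holds both values at once – the 0-and-1 amplitudes are equal – which is what the coin landing “heads and tails” is gesturing at.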

What does all of this mean in practice? In short, the answer is speed. Because qubits can exist in multiple states at the same time, they are capable of running multiple calculations simultaneously. For example, a 1-qubit computer can conduct 2 calculations at the same time, a 2-qubit computer can conduct 4, and a 3-qubit computer can conduct 8 – increasing exponentially. Operating under these rules, quantum computers bypass the “one-at-a-time” sequence of calculation that a classical computer is bound by. In the process, they become the ultimate multi-taskers.
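That exponential growth can be seen by counting amplitudes. In a simple state-vector model (my own illustrative sketch, not a real quantum program), an n-qubit register is described by 2**n numbers, and putting every qubit into superposition spreads the state evenly over all 2**n bit patterns at once:

```python
import math

# An n-qubit register is described by 2**n amplitudes, one per classical
# bit pattern. A Hadamard on every qubit weights all patterns equally -
# the source of the "multi-tasking" described above.
def uniform_superposition(n):
    size = 2 ** n               # the state space doubles with every qubit added
    amp = 1 / math.sqrt(size)   # equal weight on every bit pattern
    return [amp] * size

for n in (1, 2, 3, 10):
    print(n, "qubits ->", len(uniform_superposition(n)), "amplitudes")
```

This doubling is also why simulating quantum machines classically becomes hopeless so quickly: at 50 or so qubits, storing the amplitudes alone outgrows any ordinary computer’s memory.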

To give you a taste of what that kind of speed might look like in real terms, we can look back to 2015, when Google and NASA partnered up to test an early prototype of a quantum computer called the D-Wave 2X. Taking on a complex optimisation problem, the D-Wave was able to work at a rate roughly 100 million times faster than a single-core classical computer and produced a solution in seconds. Given the same problem, a standard laptop would have taken 10,000 years.


Given their potential for speed, it is easy to imagine a staggering range of possibilities and use cases for these machines. The current reality is slightly less glamorous. It is inaccurate to think of quantum computers as simply being “better” versions of classical computers. They won’t simply speed up any task run through them (although they may do that in some instances). They are, in fact, only suited to solving highly specific problems in certain contexts – but there’s still a lot to be excited about.

One possibility that has attracted a lot of fanfare lies in the field of medicine. Last year, IBM made headlines when they used their quantum computer to successfully simulate the molecular structure of beryllium hydride, the most complex molecule ever simulated on a quantum machine. This is a field of research which classical computers usually have extreme difficulty with, and even supercomputers struggle to cope with the vast range of atomic (and sometimes quantum) complexities presented by complex molecular structures. Quantum computers, on the other hand, are able to read and predict the behaviour of such molecules with ease, even at a minuscule level. This ability is significant not just in an academic context; it is precisely this process of simulating molecules that is currently used to produce new drugs and treatments for disease. Harnessing the power of quantum computing for this kind of research could lead to a revolution in the development of new medicines.

But while quantum computers might set in motion a new wave of scientific innovation, they may also give rise to significant challenges. One such potentially hazardous use case is the quantum computer’s ability to factorise extremely large numbers. While this might seem relatively harmless at first sight, it is already stirring up anxieties in banks and governments around the world. Modern-day cryptography, which ensures the security of the majority of data worldwide, relies on complex mathematical problems – tied to factorisation – that classical computers have insufficient power to solve. Such problems, however, are no match for quantum computers, and the arrival of these machines could render modern methods of cryptography meaningless, leaving everything from our passwords and bank details to even state secrets extremely vulnerable – able to be hacked, stolen or misused in the blink of an eye.
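To see why factorisation is the pressure point, consider what a classical machine is reduced to. This toy Python sketch (my own illustration – real cryptographic moduli are hundreds of digits long, not four) uses brute-force trial division, whose cost grows exponentially with the number of digits; Shor’s algorithm on a quantum computer would factor such numbers in polynomial time, which is exactly the asymmetry that modern cryptography depends on:

```python
import math

# Classical brute force: test every candidate divisor up to sqrt(n).
# Doubling the number of digits in n roughly squares the work required,
# which is why factoring huge numbers is classically infeasible.
def trial_division(n):
    """Return a non-trivial factor of n, or None if n is prime."""
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            return d
    return None

# A tiny "RSA-like" modulus: the product of two primes.
n = 101 * 103
print(trial_division(n))  # recovers the smaller prime factor, 101
```

With a genuine 2048-bit modulus the loop above would run for longer than the age of the universe – but a large, error-corrected quantum computer running Shor’s algorithm would not face that barrier, which is the threat the banks and governments are reacting to.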


Despite the rapid progress that has been made over the last few years, an extensive list of obstacles still remains, with hardware right at the top. Quantum computers are extremely delicate machines, and a highly specialised environment is required to produce the quantum state that gives qubits their special properties. For example, they must be cooled to near absolute zero (roughly the temperature of outer space) and are extremely sensitive to any kind of electrical or thermal interference. As a result, today’s machines are highly unstable, and often maintain their quantum states for just a few milliseconds before collapsing back into normality – hardly practical for regular use.

Alongside these hardware challenges marches an additional problem: a software deficit. Like a classical computer, quantum computers need software to function. However, this software has proved extremely challenging to create. We currently have very few effective algorithms for quantum computers, and without the right algorithms, they are essentially useless – like having a Mac without a power button or keyboard. There are some strides being made in this area (QuSoft, for example), but we would need to see vast advances in this field before widespread adoption becomes plausible. In other words, don’t expect to start “quoogling” any time soon.

So despite all the hype that has recently surrounded quantum computers, the reality is that now (and for the foreseeable future) they are nothing more than expensive corporate toys: glossy, futuristic and fascinating, but with limited practical applications and a hefty price tag attached. Is the quantum revolution just around the corner? Probably not. Does that mean you should forget about them? Absolutely not.