Google Dupe-lex

Google unveiled an interesting new feature at their I/O conference last week – Duplex.  The concept is this: want to use your Google assistant to make bookings for you but the retailer doesn’t have an online booking system?  Looks like you’re going to be stuck making a phone call yourself.

Google wants to save you from that little interaction.  Ask the Google assistant to make a booking for you and Duplex will call the place, let them know when you’re free, what you want to book and when, and talk the retailer through it… with a SUPER convincing voice.

It’s incredibly convincing, and nothing like the Google assistant voice that we’re used to.  It uses seemingly perfect human intonations, pauses, umms and ahs at the right moments.  Knowing that it’s a machine, you feel like you can spot the moments where it sounds a little bit robotic, but if I’m being honest, if I didn’t know in advance I’d be hard pressed to notice anything out of the ordinary, and wouldn’t for a moment suspect it was anything but a human.

I think what they’re using here is likely a branch of the Tacotron 2 speech generation AI that was demoed last year.  It was a big leap up from the Google assistant voice we are used to, and it was difficult to tell the difference between it and a human voice.  If you want to see if you can tell the difference, follow this link:

https://ai.googleblog.com/2017/12/tacotron-2-generating-human-like-speech.html


So, what’s the problem?

The big problem is that people are going to feel tricked (or ‘duped’, as I and likely 100 other people will be tempted to joke).  Google addressed this a little bit, saying that Duplex will introduce itself and tell the person on the other end of the phone that it’s a robot, but I’m still not sure that’s right.

I can absolutely see the utility in making this voice seem more human.  If you receive a call from a robotic sounding voice, you put the phone down.  We expect the robot is going to try to be polite for just long enough to ask us for our credit card details for some obscure reason.  By making the voice sound like a person, our behaviour changes to give that person time to speak – to give them the respect that we expect to receive from another person, rather than the bluntness with which we tend to address our digital assistants.  After all, Alexa doesn’t really care if you ask her to turn the lights off ‘please’, or just angrily bark at her to turn the lights off.

Making the booking could be just a little bit of a painful interaction.  The second example that Google shows has a person trying to make a booking for 4 at a restaurant.  It turns out that the restaurant doesn’t take bookings for groups of fewer than 5, and that it’s in fact fine just to turn up as there will most likely be tables available.  Imagine this same interaction with a machine.  Imagine that conversation with one of those annoying digital IVR systems when you call a company and try to get through to the right person – saying ‘I want to book a table’…. ‘I want to book a table’…. ‘TABLE BOOKING’…. ‘DINNER’.   Our patience will run thin much faster if we’re waiting for a machine than if we’re waiting for a person.

Just because there is utility, doesn’t mean this deception is fair.  I can see three issues with this.

  1. Even if the assistant introduces itself as a machine, the person won’t believe it

It might just seem like a completely left-field comment and make people think they’ve misheard something.  They’ll either laugh it off for a second and continue to believe it’s a person, or think they just couldn’t quite make the word out right – especially as this conversation is happening over the phone.

  2. They know it’s a robot, but they still behave like it’s a human

Maybe we have people who hear it’s a robot, know that robots are now able to speak like a human, but still react as though it’s a person.  This is a bit like the uncanny valley.  They know it’s a machine, and the rational part of their mind is telling them it’s a machine, but the emotional or more instinctive part of their mind hears it as a human, and they still offer much the same kind of emotion and time to it that they would a human.

  3. They know it’s a machine and treat it like a machine

This is interesting, because I think it’s exactly what Google doesn’t want people to do.  If there wasn’t some additional utility in making this system sound ‘human like’, they wouldn’t have spent the time or money on the new voice model and would have shipped the feature out with the old voice model long ago.  If people treat it like a machine, we may assume that the chance of making a booking, or the right kind of booking, would be reduced.

If you believe the argument I’ve made here, then Duplex introducing itself as a machine is irrelevant.  Google’s intention is still for it to be treated like a human – And is this OK?

I’m not entirely sure it is.  When people have these conversations, they’re putting a bit of themselves into the relationship.  It reminds me of Jean-Paul Sartre talking about his trip to the café.  He was expecting to meet his friend Pierre, and left his house with all the expectations of the conversation he would have with Pierre, but when he arrives Pierre is not there.  Despite the café being full, it feels empty to Sartre.  I imagine a lot of people will feel the same when they realise that they’ve been speaking to a machine.  As superficial as the relationships might be when you are making a booking over the phone, they are still relationships.  When the person arrives for their meal, or their haircut, and they realise that the person they spoke to before doesn’t really exist – that it has no conscious experience – they’ll feel empty.

They’ll feel kinda… duped…

How to keep AI alive when death is inevitable

Uber was in the headlines again last week, this time because one of their driverless cars was involved in an accident which killed a pedestrian who was pushing a bicycle across the road.  Loss of life is always a tragedy and I don’t want to diminish the significance of this death; however, accidents like this are likely to happen as we develop AI, and we should be able to agree on situations where projects must be shut down and times when they can continue.

We saw footage released showing the moments leading up to the collision.  If we take the footage to be an honest, accurate and unaltered representation of events, it appears that the car had very little opportunity to avoid this happening, with the victim crossing the road away from a designated crossing, unlit and shortly after a corner.

It’s hard to watch the footage without imagining yourself in that situation, and it’s hard to see how a human driver could have avoided the incident.  There would only have been a split second to react.  It’s quite possible that both human and machine would have produced the same result – yet humans continue to be allowed to drive, and Uber is shutting down its self-driving vehicle programme.

So – Following a human death, how can we decide when our projects must be axed and when they can continue?

Intentional vs Incidental vs Accidental

I would like to propose three categories of machine caused death.  Under two of the three circumstances (intentional and incidental) I suggest that the programmes must be shut down.  Under the 3rd (accidental) the project may continue, depending on a benchmark I will set out shortly.

Intentional

Intentional death caused by AI will result from the likes of ‘lethal autonomous weapons’.  I would propose that these should be banned, under all circumstances, from ever being created.  As Max Tegmark describes in Life 3.0, AI has the potential to be either the greatest tool ever created for humanity or the most destructive – the latter being killbots.  We want AI to go in the first direction, like chemistry and biology, which became useful to humanity rather than being defined by chemical and biological weapons – we have international treaties banning those.  Nuclear technology had the potential to be simply a power source to help humanity, but has ended up with a dual purpose – generating energy to power our homes, and incredibly destructive weapons.

Here are a few of the most pressing issues:

  • With AI the risk is potentially higher than with nuclear weapons. A machine with the coded right to take human life could do so with an efficiency orders of magnitude higher than any human could – infecting our healthcare systems and power grids, or even launching our own nuclear weapons against ourselves.
  • As a race we are yet to create our first piece of bug-free software, and until we do, we run the risk of this extremely fast automated system killing people it was never intended to. And even if we regain control of the device within days, hours or minutes, the damage done could be thousands of times greater than any human could have achieved in that time.
  • Using a machine only adds in a layer of ethical abstraction that allows us to commit atrocities (Automating Inequality, Virginia Eubanks).

Incidental

An incidental death can be categorised as a death which happens as the result of another action, but not as its primary motivation.  This would include any action where an automated system was sure of, or attributed a high probability to, a person being seriously injured or killed as a result of its primary goal or the steps taken to achieve it.  We may imagine machines allowing this to happen ‘for the greater good’, as an acceptable step towards the primary goal.  This too should be avoided, and should be cause to shut down any AI project which allows it to happen.

Why?

  • It’s a short distance between this and lethal autonomous weapons. An AI is highly unlikely to be human in the way it thinks and acts.  Unlike humans, who are carbon-based lifeforms evolved over thousands of years, an AI will be silicon-based and evolve quickly over years, months or days.  The chances of it feeling emotions like guilt, empathy or love the way a human does – if it feels them at all – are slim.  If it is given the flexibility to allow human death, its idea of an atrocity may be very different to ours, and due to its speed and accuracy even the fastest reaction in stopping this type of AI may be far too late to prevent a disaster.

Accidental

This is the only area where I believe a death in which an AI is involved may be forgiven – and even then not in all circumstances.  I would describe an accidental death caused by an AI as one where, in spite of reasonable steps being taken to collect and analyse available data, an accident happened which resulted in death or injury that was believed to have only a low probability and became unavoidable.  Here we may see this through the eyes of the Uber driverless vehicle:

  • ‘An accidental death’ – The car should never be allowed to sacrifice human life where it is aware of a significant risk (we will discuss the ‘significant risk’ shortly), opting instead to stop entirely in the safest possible manner.
  • ‘Reasonable steps’ – These should be defined through establishing a reasonable level of risk, above 0% which is tolerable to us. More on this below.
  • ‘Collect and analyse data’ – I think this is where the Uber project went wrong. Better sensors or processing hardware and software may have made this accident preventable.

An AI designed to tolerate only accidental death should not set the preservation of human life as its primary objective.  Clearly defined final objectives for AI seem to have unintended results – a Matrix-like human farm is one way to maximise human life while sacrificing pleasure.  Maximising pleasure similarly could result in the AI dedicating its resources to generating a new drug to make humans permanently happy, or putting us all in an ideal simulated world.  Indirect normativity (Nick Bostrom, Superintelligence) seems a more appealing proposition, instead teaching an AI to:

  1. Drive a car to its destination
  2. Take any reasonable steps to avoid human harm or death while fulfilling step 1
  3. Do the intended meaning of this statement

But what if a driverless car finds itself in a situation where death is unavoidable, where it’s just choosing between one person or another dying?

If an AI designed only to tolerate accidental death finds itself in a situation where its only decision is between one life and another, even if inaction would result in a death, it may still be compliant with this rule.  We should instead judge this type of AI from an earlier moment: the actions leading up to this situation should have been taken to minimise the risk of death.  Once new data becomes available showing that no options remain which avoid death or injury, the accident has already happened, and a separate decision-making system may come into force to decide what action to take.

A reasonable level of risk?

To enable AI to happen at all we need to establish a reasonable level of risk for these systems.  A Bayesian AI would always attribute a greater than 0% chance to anything happening, including harm or death of a human, in any action or inaction that it takes.  For example, if a robot were to make contact with a human, holding no weapons, travelling slowly and covered in bubble wrap, the chance of it transferring bacteria or viruses which have a small chance of causing harm is higher than 0%.  If we are to set the risk appetite for our AI at 0%, its only option will be to shut itself down as quickly and safely as possible.  We must have a minimum accepted level for AI-caused harm to progress, and I think we can reach some consensus on this.
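To make that concrete, here is a minimal sketch in Python of the idea – with an entirely hypothetical ACCEPTED_HARM_PROBABILITY threshold and made-up action fields, not a description of any real safety system. The only point it illustrates is that the tolerance has to sit above zero for the agent to be able to do anything other than stop.

```python
# A toy illustration only: hypothetical action list, hypothetical threshold.
# With the threshold set to exactly 0.0 the agent could never act, because a
# Bayesian estimate of harm is never exactly zero.

ACCEPTED_HARM_PROBABILITY = 1e-7  # policy-set tolerance; 0.0 would paralyse the agent


def choose_action(candidate_actions):
    """Pick the most useful action whose estimated probability of harm is tolerable."""
    tolerable = [
        a for a in candidate_actions
        if a["estimated_harm_probability"] <= ACCEPTED_HARM_PROBABILITY
    ]
    if not tolerable:
        # Nothing clears the bar, so fall back to stopping as safely as possible.
        return {"name": "stop_safely"}
    return max(tolerable, key=lambda a: a["utility"])


actions = [
    {"name": "drive_on", "estimated_harm_probability": 3e-8, "utility": 1.0},
    {"name": "overtake", "estimated_harm_probability": 2e-6, "utility": 1.2},
]
print(choose_action(actions)["name"])  # 'drive_on' – the riskier overtake is filtered out
```

Where exactly that threshold should sit is the consensus question the rest of this section tries to pin down.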

With the example of the Uber self-driving car we may assume an equivalent number of deaths would have been caused by human or machine.  The machine was unable to avoid the death of a human, and if the evidence presented is an accurate reflection of the circumstances, it seems likely a human too would have been unable to avoid it.  The reaction to this has been strongly anti-automation, so we can tell that a 1-to-1 exchange between human and machine deaths is not the right level – that we would prefer a human to be responsible for a death if the number of casualties is not reduced by using a machine.

If we change this ratio to 2-to-1, things begin to look different.  If we could halve the number of deaths caused by driving, or any other human activity, automation begins to look a lot more appealing to a far greater number of people.  If we extend this to a 99% reduction in deaths and injuries, the vast majority of people will lean towards AI over human actors.

Where this number stands exactly, I am not certain.  It’s also unlikely that the ratio would remain static, as growing trust in AI may move it in either direction.  Indirect normativity may be our best option again in this instance, accounting for the moving standard to which we would hold it.

Setting a tolerance for error at 0% for anything is asking for failure.  No matter how safe or foolproof a plan may seem, there will always be at least a tiny possibility of error.  AI can’t solve this, but it might be able to do a better job than us.  If our goal is to protect and improve human life… maybe AI can help us along the way.

 

The Geek Shall Inherit

AI has the potential to be the greatest invention humanity has ever produced.  It should benefit all of humanity equally, but instead we’re heading towards a world in which one particular group, the geeks, benefits most from AI.  AI is fundamentally more likely to favour the values of its designers, and whether we train our AI on a data set gathered from humans or on pure simulated data through a system like deep reinforcement learning, bias will, to a greater or lesser extent, remain.

A disclaimer – Humans are already riddled with bias.  Be it confirmation, selective or inclusive bias, we constantly create unfair systems and draw inaccurate conclusions which can have a devastating effect on society.  I think AI can be a great step in the right direction, even if it’s not perfect.  AI can analyse dramatically more data than a human and by doing so generate a more rounded point of view.  More rounded however is not completely rounded, and this problem is significant given any AI which can carry out a task orders of magnitude faster than a human.

To retain our present-day levels of inequality while building a significantly faster AI, we must dramatically reduce the number of unethical decisions it produces.  For example, if we automate a process with a system which produces only 10% as many unethical decisions as a human per transaction, but we make it 1,000x faster, we end up with 100x more injustice in the world.  To retain today’s levels, that same system would need to make only 0.1% as many unethical decisions per transaction.
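As a quick sanity check on that arithmetic, here is the calculation as a couple of lines of Python – a worked example only, with the 10%, 1,000x and 0.1% figures taken from the paragraph above rather than from any measured system.

```python
def relative_injustice(unethical_rate_vs_human, speedup):
    """Injustice produced relative to the human baseline: per-transaction rate x volume."""
    return unethical_rate_vs_human * speedup


print(relative_injustice(0.10, 1000))   # 10% of the human rate at 1,000x speed -> 100.0 (100x more injustice)
print(relative_injustice(0.001, 1000))  # 0.1% of the human rate is needed just to match today -> 1.0
```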

For the sake of rhyme, I’ve titled this blog ‘the geek shall inherit’.  I am myself using a stereotype, but I want to identify the people who are building AI today.  Though I firmly support the idea that anyone can and should be involved in building these systems, that’s not a reflection of our world today.  Our society and culture have told certain people, women for instance, from a young age that boys work on computers and girls do not.  This is wrong, damaging and needs remedying – but that’s a problem to tackle in a different blog.  For now, let’s simply accept that the people building AI today tend to be a certain type of person – geeks.  And if we are to stereotype a geek, we’re thinking about someone who is highly knowledgeable in an area, but also socially inept, and probably a man.

With more manual forms of AI creation the problem is at its greatest.  Though we may be using a dataset gathered from a more diverse group of people, there’s still going to be selection bias in that data, as well as bias directly from the developers if they are tasked with annotating it.  Whether intentionally or not, humans are always going to favour things more like themselves and code nepotism into a system, meaning the system is going to favour geeky men like its developers more than any other group.

In 2014 the venture capital fund Deep Knowledge Ventures developed an algorithm called VITAL to join their board and vote on investments for the firm.  VITAL shared a bias with its creators, nepotism, showing a preference for investing in businesses which valued algorithms in their own decision making (Homo Deus, Harari, 2015).  Perhaps VITAL developed this bias independently, but the chances are its developers unconsciously planted the seed of nepotism, and even the preference towards algorithms, through their own belief in them.

A step beyond this is deep reinforcement learning.  This is the method employed by Google’s DeepMind in the AlphaZero project.  The significant leap between AlphaGo and AlphaGo Zero is that AlphaGo used data recorded from humans playing Go, whereas AlphaGo Zero learned simply by playing against itself in a simulated world.  By doing this, the system can make plays which seem alien to human players, as it’s not constrained by human knowledge of the game.  The exception here is ‘move 37’ against Lee Sedol, played by AlphaGo Lee prior to the application of deep reinforcement learning.  That move was seen as a stroke of creative brilliance no human would ever have played, even though the system was trained on human data.

Humans also use proxies to determine success in these games.  An example of this is AlphaZero playing chess.  Where humans use a points system on pieces as a proxy for understanding their performance in a game, AlphaZero doesn’t care about its score.  It’ll sacrifice valuable pieces for cheap ones when other moves which appear more beneficial are available, because it doesn’t care about its score, only about winning.  And win it does, if only by a narrow margin.

So where is the bias in this system?  Though the system may be training in a simulated world, two areas for bias remain.  First, the layers of the artificial neural network are decided upon by those same biased developers.  Second, it is simulating a game designed by humans – the board and rules of Go were still designed by people.  Go, for instance, gives a first-move advantage to black, while chess gives one to white.  Though I prefer to believe that the colours of pieces on a game board have everything to do with contrast and nothing to do with race, we may be subtly teaching a machine that one colour is guaranteed by the rules an advantage over others in life.

The same issue remains in more complex systems, however.  The Waymo driverless car is trained predominantly in a simulated world, where it learns free from human input, fatigue and mistakes.  It is, however, still fed the look and feel of human-designed and maintained roads, and the human-written rules of the highway code.  We might shift here from ‘the geek shall inherit’ to ‘the lawyer shall inherit’.  Less catchy, but making a system learn from rules that were designed by a select group of people will introduce some bias, even if it simulates its training data within the constraints of those rules.

So, what should we do?

AI still has the potential to be incredibly beneficial for all humanity.  Terminator scenarios permitting, we should pursue the technology.  I would propose tackling this issue from two fronts.

1. Fix the diversity problem in the teams building AI

This would be hugely beneficial to the technology industry as a whole, but it’s of paramount concern in the creation of thinking machines.  We want our AI to think in a way that suits everyone, and our best chance of success is to have fair and equal representation throughout its development.  We don’t know how much time remains before a hard take-off of an artificial general intelligence, and we may not have time to fix the current diversity problem, but we should do everything we can to fix it.

2. Weight equality above speed

Damage caused by biased humans, though potentially catastrophic, will always be limited by our inherent slowness.  AI, on the other hand, can implement biased actions much faster than us and may simply accelerate an unfair system.  If we want more equality in the world, a system must focus more heavily on equality as a metric than on speed, and ensure at the very least that it reduces inequality by as much as the process speed is increased, e.g.:

  1. If we make a process 10x faster, we must reduce the prevalence and impact of unequal actions by at least 90%.
  2. If we create a system 1,000x faster, this reduction must be at least 99.9%.

Doing this only retains our current baseline.  To make progress in this area we need to go a step further, reducing inequality before increasing the speed.

Programmed Perspective: Empathy > Emotion for Digital Assistants

Personal assistants are anything but personal.  When I ask Alexa what the weather is, I receive an answer about the weather in my location.  When someone on the other side of the world asks Alexa that same question, they too will find out what the weather is like in their location. Neither of us will find Alexa answering with a different personality or the interaction further cementing our friendship. It is an impersonal experience.

When we talk about personal assistants, we want them to know when we need pure expediency from a conversation, when we want expansion on details, and the different way each one of us likes to be spoken to.

I would like to propose two solutions to this problem – emotion and empathy – and to show you why empathy is the path we should be taking.


Emotion

An emotional assistant would be personal.  It would require either a genuine internal experience of emotion (which is just not possible today) or an accurate emulation of emotion.  It would build up a relationship in the same way that we do with people over time, starting from niceties and formality and gradually developing a relationship unique to the two parties that guides all their interactions.  Sounds great, but it’s not all plain sailing.  I’m sure everyone has experienced a time where we’ve inadvertently offended a friend in a way that has made it more difficult for us to communicate for some time afterwards, or even damaged a relationship in a way that will never repair itself.

We really don’t want this with a personal assistant.  If you were a bit short with Alexa yesterday because you were tired, you still want it to set off your alarm the next morning.  You don’t want Alexa to tell you that it can’t be in the same room as you and to refuse to answer your questions until it gets a heartfelt apology.

Empathy

Empathy does not need to be emotional.  Empathy requires that we put ourselves in the place of others to imagine how they feel, and to act appropriately.  Ideally, this is what doctors do.  A doctor must empathise with a patient, putting themselves in the patient’s shoes to understand how they will react to difficult news, and how to describe the treatment so that they feel as comfortable as possible.  Importantly though, the doctor should be removed emotionally from the situation.  If they were to personally feel the emotion of each appointment, it could become unbearable.  Empathy helps them to add a layer of abstraction, allowing them to shed as much of the emotion as possible when they return home.

This idea is described in Jean-Paul Sartre’s ‘Being and Nothingness’.  Sartre describes two types of being:

  • Beings in themselves – unconscious objects, like a mug or a pen.
  • Beings for themselves – people and other conscious things.

In our everyday lives we are a hybrid of the two.  Though we are people, and so naturally beings for themselves, we adopt roles like doctor, manager, parent and more.  These roles are objects, like a pen or a mug, as they have an unspoken definition and structure.  We use these roles/objects to guide how we interact in different situations in life.  In a new role we ask ourselves ‘what should a manager do in this situation’, or ‘what would a good doctor say’.  It may become less obvious as we grow into a role, but it’s still there.

When we go into a store, we have an accepted code of conduct and a type of conversation we expect to have with the retailer.  We naturally expect them to be polite, to ask us how we are, to share knowledge of different products and services, and we understand that their aim is to sell us something.  We believe that we can approach them, a stranger, and ask question upon question in our preamble.

Sartre states ‘A grocer who dreams is less a grocer’ (to paraphrase).  Though the grocer may be more honest to themselves as a person, they’re reducing their utility as a grocer.  It’s easy to imagine stopping to buy some vegetables, and getting stuck in an irrelevant conversation for half an hour.  It might be a nice break from the norm, and a funny story to tell when you get home, but in general we want our grocers to be…. Grocers…

If we apply this to personal assistants, it really comes together.  We want to receive the kind of personal service that we would get from a person who is really great at customer service.  We want it to communicate information to us in the way which works best for us.  By making an empathetic assistant rather than what we have today, we gain personalisation and utility.

If we go fully emotional we gain more personalisation, but the trade-off is utility.  What we don’t want is an emotional assistant which becomes depressed and gets angry at us.  Or, at the other extreme, one which becomes giddy with emotion and struggles to structure a coherent sentence because of the digital butterflies in its stomach.  That’s both deeply unsettling and unproductive.

So, let’s build empathetic assistants.

The Apple of my AI – GDPR for Good

Artwork by @aga_banach

Our common perception of machine learning and AI is that it needs an immense amount of data to work. That data is collected and annotated by humans or IoT-type sensors to ensure the AI has access to all the vast information it needs to make the correct decisions. With new regulations like GDPR protecting stored personal data, will AI be put at a disadvantage by the headache of restrictions on IoT and data collection? Maybe not!

What is GDPR and why does it matter?

For those outside the European Union, GDPR (General Data Protection Regulation) is designed to “protect and empower all EU citizens data privacy”. Intending to return control of personal data to individual citizens, it grants powers like requesting all the data a business holds on you, a right to explanation for decisions made about you, and even a right to be forgotten. Great for starting a new life in Mexico, but will this limit how much an AI can learn by restricting its information?

What’s the solution?

A new type of black-box learning means we may not need human data at all. Falling into the category of ‘deep reinforcement learning’, we are now able to create systems which achieve superhuman performance across a fairly broad spread of domains. These AIs are able to generate all their training data themselves from simulated worlds. The poster-boy of this type of machine learning is AlphaZero and its predecessors from Google’s DeepMind. In 2015 we saw the release of AlphaGo, which demonstrated the ability for a machine to become better than a human in a 5–0 victory against the European Go champion Fan Hui. AlphaGo reached this level by using human-generated data from recorded professional and amateur games of Go. The evolution of this, however, was to remove the human data: AlphaGo Zero beat its predecessor AlphaGo Lee 100:0 using 1/12th the processing power over a fraction of the training time, and without any human training data. Instead, AlphaGo Zero generated its own data by playing games against itself. While GDPR could force a drought of machine learning data in the EU, simulated data from this kind of deep reinforcement learning could re-open the flood gates.
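For a rough sense of what ‘generating its own data’ means in practice, here is a toy self-play loop in Python. It is nothing like AlphaGo Zero itself – no neural network, no Monte Carlo tree search, and random moves on noughts and crosses rather than Go – but it shows the core idea: every training example comes from the system playing a simulated game against itself, with no human records involved.

```python
import random

# The eight winning lines on a 3x3 noughts-and-crosses board.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]


def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None


def self_play_game():
    """Play one random game against itself; return (position, player_to_move, result) examples."""
    board, player, history = ["."] * 9, "X", []
    while True:
        history.append(("".join(board), player))
        empty = [i for i, cell in enumerate(board) if cell == "."]
        if winner(board) or not empty:
            break
        board[random.choice(empty)] = player
        player = "O" if player == "X" else "X"
    result = winner(board) or "draw"
    # Each recorded position is labelled with the eventual outcome – this is the kind of
    # self-generated data a real system feeds back into its model, no human games needed.
    return [(position, to_move, result) for position, to_move in history]


training_data = [example for _ in range(1000) for example in self_play_game()]
print(len(training_data), "training examples generated without any human data")
```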

Playing Go is a pretty limited domain (though AlphaZero can play other board games!) and is defined by very clear rules. We want machine learning which can cover a broad spread of tasks, often in far more dynamic environments. Enter Google… again… Or rather Alphabet, the parent company of Google, and their self-driving car spinoff Waymo. Level 4 and 5 autonomous driving presents a much more challenging goal for AI. In real time the AI needs to categorise huge numbers of objects, predict their future paths and translate that into the right control inputs – all to get the car and its passengers where they need to be, on time and in one piece. This level of autonomy is being pursued by both Waymo and Tesla, but seemingly Tesla gets the majority of the press. This has a lot to do with Tesla’s physical presence.

Tesla has around 150,000 Autopilot-equipped cars on the road and boasted over 100 million miles driven on Autopilot by 2016. This doesn’t even include data gathered while the feature is not active, or more recent figures (which I am struggling to find — if you know, please comment below!). Meanwhile Waymo has covered a comparatively tiny 3.5 million real-world miles, perhaps explaining its smaller public exposure. Google thinks it has the answer to this, again using deep reinforcement learning: their vehicles have driven billions of miles in their own simulated worlds, not using any human-generated data. Only time will tell whether we can build a self-driving car which is safe and confident on our roads alongside human drivers without human data and guidance in the training process, but the early signs for deep reinforcement learning look promising. If we can do this for driving, what’s to say it can’t work in many other areas?

Beyond being a tick in the GDPR box there are other benefits to this type of learning. DeepMind describes human data as being ‘too expensive, unreliable or simply unavailable’; the second of these points (with a little artistic licence) is critical. Human data will always have some level of bias, making it unreliable. On a very obvious level, Oakland Police Department’s ‘PredPol’, a system designed to predict areas of crime so that police could be dispatched there, was trained on historical and biased crime data. It resulted in a system which dispatched police to those same historical hotspots. It’s entirely possible that just as much crime was going on in other areas, but by focusing its attention on the same old areas and turning a blind eye to others, the machine struggled to break human bias. Even when we think we’re not working from an unhealthy bias, our lives are surrounded by unconscious bias and assumptions. I make an assumption each time I sit down on this chair that it will support my weight. I no doubt have a bias towards people similar to me, believing that we could work towards a common goal. Think you hold no bias? Try this implicit association test from Harvard. AlphaGo learned according to this bias, whereas AlphaGo Zero had no such bias and performed better. Looking at the moves the machine made we tend to see creativity, a seemingly human attribute, in its actions, when in reality its thought processes may have been entirely unlike human experience. By removing human data, and therefore our bias, machine learning could find solutions, in possibly any domain, which we might never have thought of but which in hindsight appear a stroke of creative brilliance.

Personally I still don’t think this type of deep reinforcement learning is perfect – or at least the environment it is implemented in isn’t. Though the learning itself may be free from bias, the rules and the play board, be that a physical game board or a road layout, factory, energy grid or anything else we are asking the AI to work on, are still designed by a human, meaning they will include some human bias. With Waymo, the highway code and road layouts are still built by humans. We could possibly add another layer of abstraction, allowing the AI to develop new road rules or games for us, but then perhaps they would lose their relevance to us lowly humans who intend to make some use of the AI.

For AI, perhaps we’re beginning to see GDPR as an Apple in the market, throwing out the old CD drive, USB-A ports or even (and it still stings a little) headphone jacks, initially with consumer uproar. GDPR pushing us towards black box learning might feel like we’re losing the headphone jack a few generations before the market is ready, but perhaps it’s just this kind of thing that creates a market leader.

AI, VR and the societal impact of technology: our takeaways from Web Summit 2017

Together with my Digital Innovation colleague Morgan Korchia, I was lucky enough to go to Web Summit 2017 in Lisbon – getting together with 60,000 other nerds, inventors, investors, writers and more. Now that a few weeks have passed, we’ve had time to collect our thoughts and reflect on what turned out to be a truly brilliant week.

We had three goals in mind when we set out:

  1. Investigate the most influential and disruptive technologies of today, so that we can identify those which we should begin using in our business
  2. Sense where our market is going so that we can place the right bets now to benefit our business within a 5-year timeframe
  3. Meet the start-ups and innovators who are driving this change, and identify scope for collaboration with them

Web Summit proved useful for this on all fronts – but it wasn’t without surprises.  It’s almost impossible to go to an event like this without some preconceptions about the types of technologies we are going to be hearing about. On the surface, it seemed like there was a fairly even spread between robotics, data, social media, automation, health, finance, society and gaming (calculated from the accurate science of ‘what topic each stage focused on’). However, after attending the speeches themselves, we detected some overarching themes which seemed to permeate through all topics. Here are my findings:

  • As many as 1/3rd of all presentations strongly focus on AI – be that in the gaming, finance, automotive or health stage
  • Around 20% of presentations primarily concern themselves with society, or the societal impact of technology
  • Augmented and virtual reality feature in just over 10% of presentations, which is significantly less than we have seen in previous years

This reflects my own experience at Web Summit, although I perhaps directed myself more towards the AI topic, spending much of my time between the ‘autotech / talkrobot’ stage and the main stage. From Brian Krzanich, the CEO of Intel, to Bryan Johnson, CEO of Kernel and previously Braintree, we can see that AI is so prevalent today that a return to the AI winter is unimaginable. It’s not just hype; it’s now too closely worked into the fabric of our businesses to be that anymore. What’s more, too many people are implementing AI and machine learning in a scalable and profitable way for it to be dispensable. It’s even getting to the point of ubiquity where AI just becomes software, where it works, and we don’t even consider the incredible intelligence sitting behind it.

An important sub-topic within AI is also picking up steam – AI ethics. A surprise keynote from Stephen Hawking reminded us that while successful AI could be the most valuable achievement in our species’ history, it could also be our end if we get it wrong. Elsewhere, Max Tegmark, author of Life 3.0 (recommended by Elon Musk… and me!) provided an interesting exploration of the risks and ethical dilemmas that face us as we develop increasingly intelligent machines.

Society was also a theme visited by many stages. This started with an eye-opening performance from Margrethe Vestager, who spoke about how competition law clears the path for innovation. She used Google as an example: while highly innovative themselves, they abuse their position of power, pushing competitors down their search rankings and hampering other innovations’ chances of becoming successful. The Web Summit closed with an impassioned speech from Al Gore, who gave us all a call to action to use whatever ability, creativity and funding we have to save our environment and protect society as a whole for everyone’s benefit.

As for AR and VR, we saw far less exposure this year than seen at events previously (although it was still the 3rd most presented-on theme). I don’t necessarily think this means it’s going away for good, although it may mean that in the immediate term it will have a smaller impact on our world than we thought it might. As a result, rather than shouting about it today, we are looking for cases where it provides genuine value beyond a proof of concept.

I also take some interest from the topics which were missing, or at least presented less frequently. Amongst these I put voice interfaces, cyber security and smart cities. I don’t think this is because any of these topics have become less relevant. Cyber security is more important now than ever, and voice interfaces are gaining huge traction in consumer and professional markets. However, an event like Web Summit doesn’t need to add much to that conversation. I think that without a doubt we now regard cyber security as intrinsic to everything we do, and aside from a few presentations including Amazon’s own Werner Vogels, we know that voice is here and that we need to be finding viable implementations. Rather than simply affirming our beliefs, I think a decision was made to put our focus elsewhere, on the things we need to know more about to broaden our horizons over the week.

We also took the time to speak to the start-ups dotted around the event space.  Some we took an interest in, like Nam.r, who are using AI in a way which drives GDPR compliance rather than causing the headache many of us assume it will.  Others, like Mapwize.io and Skylab.global, are making use of primary technological developments which were formative and un-scalable a year ago.  We also took note of the start-ups spun out of bigger businesses, like Waymo, part of Google’s Alphabet business, which is acting as a bellwether on which many of the big players are placing their bets.

The priority for us now is to build some of these findings into our own strategy – much more of a tall order than spending a week in Lisbon absorbing it all.  If you’re wondering what events to attend next year, Web Summit should be high up on your list, and I hope to see you there!

What are your thoughts on these topics? Leave a reply below, or contact me by email.

Learn more about Aurora, Sopra Steria’s horizon scanning team, and the topics that we are researching.

Journey interrupted

Contrary to what you might expect, this blog isn’t a reflection of my experience with Southern Rail. Instead it’s a look to the future, inspired by the Sopra Steria Horizon Scanning Team’s trip to Wired 2016.

In our horizon scanning programme, Aurora, we try to look beyond the technologies that are shaping our future and include the behavioural and social changes that are also making an impact, and this is where Wired’s annual event fits in so perfectly with our interests. Though Wired 2016 takes no shame in celebrating the advancements we’ve seen in technology and imagining what may come next, it also takes account of wider sociological and environmental changes, such as mass migration, climate change and global conflict.

The running theme throughout was ‘journey interrupted’, which seemed to reflect both the individual journeys of many speakers, who had set out with what seemed like a clear direction but ended up somewhere entirely different from what they had planned, and the inevitable interruption to our unsustainable way of living, which needs to change more urgently than ever.

In the technology content, an overwhelmingly strong theme was data. Now, data is nothing new at these kinds of events, and has not been for years.  Data in this instance was framed most clearly in machine learning and AI, which again isn’t anything new to us, but what we’re beginning to see is how achievable it is becoming.  Historically the privilege of huge projects backed by a great deal of money, machine learning is now in the hands of start-ups and individual people who are able to apply the same technology to problems which receive little or no funding but are important nonetheless.  Applications ranged from health – limiting the spread of Ebola and the Zika virus, and cancer discovery and treatment – to migrant demographics, through to the future of AI and the singularity.

The most poignant moments at Wired 2016, however, did not focus wholly on technology.  They were about the big shifts happening to people and our environment.  Predictions on climate change are looking more devastating than ever, with even fairly conservative scientists predicting that we may go well beyond the 2°C maximum limit for warming above pre-industrial levels, going as far as 7°C, which could wreak untold havoc on our world.  The speakers at Wired 2016 were looking both at how we can change the way we live in the developed world to reduce our environmental impact, and at how we can curtail the impact of the developing world, ideally skipping straight to renewable power in much the same way as it largely skipped the internet on PCs, experiencing the internet for the first time on a mobile device.  The refugee crisis was also a recurring theme: the journeys that people had set for their own lives have been completely torn apart, and speakers explored how communities around the world have found ways for them to get those journeys back on track, through enabling their work and encouraging entrepreneurship.

The story from Wired 2016 is that if we continue on this express train, we’re heading to a bad place. We need to instead take a look at the journey we’re taking, or better still find an entirely new mode of transport to take us there. The technology is there, but the community and widespread adoption is not, and if we want success this is going to need to be a journey we take together.

What do you think? Leave a reply below, or contact me by email.

Learn more about Aurora, Sopra Steria’s horizon scanning team, and the topics that we are researching.