The Apple of my AI – GDPR for Good

Artwork by @aga_banach

Our common perception of machine learning and AI is that it needs an immense amount of data to work. That data is collected and annotated by humans or IoT-type sensors to ensure the AI has access to all the information it needs to make the right decisions. With new regulations like GDPR protecting stored personal data, will AI be hampered by restrictions on IoT and data collection? Maybe not!

What is GDPR and why does it matter?

For those outside the European Union, GDPR (the General Data Protection Regulation) is designed to “protect and empower all EU citizens’ data privacy”. Intending to return control of personal data to individual citizens, it grants rights such as requesting a copy of all the data a business holds on you, a right to explanation for automated decisions, and even a right to be forgotten. Great for starting a new life in Mexico, but will it limit how much an AI can learn by limiting its information?

What’s the solution?

A new type of black-box learning means we may not need human data at all. Falling into the category of ‘deep reinforcement learning’, we can now create systems which achieve superhuman performance across a fairly broad spread of domains, generating all of their training data themselves from simulated worlds. The poster-boy of this type of machine learning is AlphaZero and its derivatives from Google’s DeepMind. In 2015 we saw the release of AlphaGo, which demonstrated that a machine could beat a professional human at Go with a 5–0 victory over former European champion Fan Hui. AlphaGo reached this level using human-generated data: records of professional and amateur games of Go. The evolution, however, was to remove the human data. AlphaGo Zero beat its predecessor AlphaGo Lee 100–0 using a twelfth of the processing power and a fraction of the training time, without any human training data; instead, AlphaGo Zero generated its own data by playing games against itself. While GDPR could force a drought of machine learning data in the EU, simulated data from this kind of deep reinforcement learning could re-open the flood gates.
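To make the self-play idea concrete, here is a minimal sketch in Python: tabular value learning on noughts and crosses, where every training position is generated by the agent playing against itself. To be clear, AlphaGo Zero actually pairs a deep neural network with Monte Carlo tree search; everything below (the constants, the simple update toward the final result) is an invented, simplified stand-in, not DeepMind’s method.

import random
from collections import defaultdict

# State-action values, learned purely from self-play; no human games involved.
Q = defaultdict(float)
ALPHA, EPSILON = 0.5, 0.1            # learning rate and exploration rate (toy values)

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
         (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def legal_moves(board):
    return [i for i, cell in enumerate(board) if cell == ' ']

def choose(board):
    # Epsilon-greedy: mostly exploit learned values, occasionally explore.
    if random.random() < EPSILON:
        return random.choice(legal_moves(board))
    return max(legal_moves(board), key=lambda m: Q[(board, m)])

def self_play_episode():
    board, player, history = ' ' * 9, 'X', []
    while True:
        move = choose(board)
        history.append((board, move, player))
        board = board[:move] + player + board[move + 1:]
        win = winner(board)
        if win or not legal_moves(board):
            # Monte Carlo-style update: nudge every visited state-action
            # value toward the final result (+1 win, -1 loss, 0 draw).
            for state, action, who in history:
                reward = 0.0 if win is None else (1.0 if who == win else -1.0)
                Q[(state, action)] += ALPHA * (reward - Q[(state, action)])
            return
        player = 'O' if player == 'X' else 'X'

for _ in range(50_000):              # every one of these games is machine-generated
    self_play_episode()
print(f"learned values for {len(Q)} state-action pairs without human data")

The point to notice is that no line of this loop touches a human-generated game, let alone personal data; the rules of the board alone are enough to manufacture unlimited training experience.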

Playing Go is a pretty limited domain (though AlphaZero can play other board games!) and is defined by very clear rules. We want machine learning which can cover a broad spread of tasks, often in far more dynamic environments. Enter Google… again. Or rather Alphabet, Google’s parent company, and its self-driving car spin-off Waymo. Level 4 and 5 autonomous driving presents a much more challenging goal for AI. In real time, the AI needs to categorise huge numbers of objects, predict their future paths and translate all of that into the right control inputs, getting the car and its passengers where they need to be on time and in one piece. This level of autonomy is being pursued by both Waymo and Tesla, but Tesla seems to get the majority of the press, which has a lot to do with Tesla’s physical presence.
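As a rough illustration of that perceive-predict-control loop, here is a hypothetical skeleton in Python. Nothing below reflects Waymo’s or Tesla’s actual systems; every type, threshold and function is invented for the sake of the example.

from dataclasses import dataclass

@dataclass
class TrackedObject:
    kind: str          # e.g. "pedestrian", "cyclist", "car"
    position: tuple    # (x, y) in metres, relative to our car
    velocity: tuple    # (vx, vy) in metres per second

def detect(sensor_frame):
    # In a real stack: camera/lidar/radar fusion feeding a neural network.
    # Here we just return one hard-coded pedestrian for illustration.
    return [TrackedObject("pedestrian", (4.0, 12.0), (-1.2, 0.0))]

def predict_path(obj, horizon_s=3.0, dt=0.5):
    # Crude constant-velocity extrapolation standing in for learned prediction.
    steps = int(horizon_s / dt)
    return [(obj.position[0] + obj.velocity[0] * dt * i,
             obj.position[1] + obj.velocity[1] * dt * i)
            for i in range(1, steps + 1)]

def control(objects):
    # Brake if any predicted path crosses our lane (|x| < 1.5 m, ahead of us).
    for obj in objects:
        if any(abs(x) < 1.5 and 0 < y < 30 for x, y in predict_path(obj)):
            return {"throttle": 0.0, "brake": 0.8, "steer": 0.0}
    return {"throttle": 0.3, "brake": 0.0, "steer": 0.0}

print(control(detect(sensor_frame=None)))   # one tick of the loop: brakes for the pedestrian

Even in this toy form you can see why the problem is hard: each stage feeds the next, so errors in detection or prediction cascade into the control decision.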

Tesla has around 150,000 suitably equipped cars on the road and boasted over 100 million miles driven on Autopilot by 2016. This doesn’t even include data gathered while the feature is inactive, or more recent figures (which I am struggling to find; if you know, please comment below!). Meanwhile, Waymo has covered a comparatively tiny 3.5 million real-world miles, perhaps explaining its smaller public exposure. Google thinks it has the answer to this, again using deep reinforcement learning: its vehicles have driven billions of miles in their own simulated worlds, without using any human-generated data. Only time will tell whether we can build a self-driving car that is safe and confident on our roads alongside human drivers without human data and guidance in the training process, but the early signs for deep reinforcement learning look promising. If we can do this for driving, what’s to say it can’t work in many other areas?
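And here is a toy analogue of learning from simulation alone: a one-dimensional ‘lane keeping’ task where controller gains are tuned purely by random search over simulated rollouts. Again, this is my own illustrative sketch (the dynamics, cost and search method are all invented), not how Waymo trains anything; the point is simply that every ‘mile’ the learner experiences is manufactured by the simulator, with no human data in sight.

import random

def rollout(kp, kd, steps=200):
    # Simulate a car starting off-centre; the controller steers it back.
    position, velocity, cost = 1.0, 0.0, 0.0
    for _ in range(steps):
        steer = -kp * position - kd * velocity           # proportional-derivative control
        velocity += 0.1 * steer + random.gauss(0, 0.01)  # noisy simulated dynamics
        position += 0.1 * velocity
        cost += position ** 2                            # penalise lane deviation
    return cost

best_gains, best_cost = (0.0, 0.0), float('inf')
for trial in range(1000):                                # "miles" driven in simulation
    gains = (random.uniform(0, 5), random.uniform(0, 5))
    cost = rollout(*gains)
    if cost < best_cost:
        best_gains, best_cost = gains, cost

print(f"learned gains kp={best_gains[0]:.2f}, kd={best_gains[1]:.2f}")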

Beyond being a tick in the GDPR box, there are other benefits to this type of learning. DeepMind describes human data as being ‘too expensive, unreliable or simply unavailable’. The second of these points (with a little artistic licence) is critical: human data will always carry some level of bias, making it unreliable. On a very obvious level, take the Oakland Police Department’s ‘PredPol’, a system designed to predict areas of crime so that police could be dispatched there, which was trained on historical, biased crime data. The result was a system that dispatched police to the same historical hotspots. It’s entirely possible that just as much crime was going on in other areas, but by focusing its attention on the same old places and turning a blind eye to the rest, the machine struggled to break free of human bias.

Even when we think we’re not working from an unhealthy bias, our lives are full of unconscious bias and assumptions. I assume each time I sit down on this chair that it will support my weight. I no doubt have a bias towards people similar to me, believing that we could work towards a common goal. Think you hold no bias? Try this implicit association test from Harvard. AlphaGo learned from human games and inherited this bias; AlphaGo Zero had none and performed better. Looking at the moves such a machine makes, we tend to see creativity, a seemingly human attribute, when in reality its thought processes may be entirely unlike human experience. By removing human data, and with it our bias, machine learning could find solutions in almost any domain that we might never have thought of, but which in hindsight appear a stroke of creative brilliance.

Personally, I still don’t think this type of deep reinforcement learning is perfect, or at least the environments it is implemented in are not. Though the learning itself may be free from bias, the rules and the board, be that a physical game board or a road layout, a factory, an energy grid or anything else we ask the AI to work on, are still designed by humans, meaning some human bias is built in. With Waymo, the highway code and road layouts are still written and built by humans. We could add another layer of abstraction, allowing the AI to develop new road rules or games for us, but then perhaps they would lose their relevance to us lowly humans who intend to make some use of the AI.

For AI, perhaps we’re beginning to see GDPR play the role Apple plays in the hardware market: throwing out the old CD drive, the USB-A ports or even (and it still stings a little) the headphone jack, initially to consumer uproar. GDPR pushing us towards black-box learning might feel like losing the headphone jack a few generations before the market is ready, but perhaps it’s just this kind of thing that creates a market leader.

Published by

Ben Gilburt

I lead Sopra Steria's horizon-scanning team, researching the emerging technologies with the greatest potential to impact our business and that of our clients, and finding how we can best make use of them. I'm also a philosophy undergrad, and the intersection between philosophy and technology often leads me to machine and robot ethics. If you're interested in this kind of thing too, follow me @RealBenGilburt
