The Geek Shall Inherit

AI has the potential to be the greatest invention in human history. It should benefit all of humanity equally, but instead we’re heading towards a future in which one particular group, the geeks, benefits most. AI is fundamentally more likely to favour the values of its designers, and whether we train it on a dataset gathered from humans or on purely simulated data through a system like deep reinforcement learning, bias will, to a greater or lesser extent, remain.

A disclaimer – humans are already riddled with bias. Be it confirmation bias, selection bias or in-group bias, we constantly create unfair systems and draw inaccurate conclusions, which can have a devastating effect on society. I think AI can be a great step in the right direction, even if it’s not perfect. AI can analyse dramatically more data than a human and, by doing so, generate a more rounded point of view. More rounded, however, is not completely rounded, and the problem becomes significant with any AI that carries out a task orders of magnitude faster than a human.

To retain even our present-day levels of inequality while building a significantly faster AI, we must dramatically reduce the number of unethical decisions it produces. For example, if we automate a process with a system which produces only 10% as many unethical decisions per transaction as a human, but we make it 1,000x faster, we end up with 100x more injustice in the world. To hold today’s levels steady, that same system would need to make only 0.1% as many unethical decisions per transaction.
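To make the arithmetic concrete, here’s a minimal sketch of that calculation in Python. The numbers are just the illustrative ones from the paragraph above, not measurements of any real system.

```python
def injustice_multiplier(error_rate_ratio: float, speedup: float) -> float:
    """Net change in total injustice when a human process is automated.

    error_rate_ratio: unethical decisions per transaction relative to a human
                      (1.0 = same as a human, 0.1 = 10% as many).
    speedup: how many times more transactions the system handles.
    """
    return error_rate_ratio * speedup

# A system 1,000x faster that is only 10x more ethical still makes things worse:
print(injustice_multiplier(0.10, 1000))   # 100.0 -> 100x more injustice
# To merely hold today's baseline, ethics must improve as fast as speed:
print(injustice_multiplier(0.001, 1000))  # 1.0 -> no net change
```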

For the sake of rhyme, I’ve titled this blog ‘The Geek Shall Inherit’. I am myself using a stereotype, but I want to identify the people who are building AI today. Though I firmly support the idea that anyone can and should be involved in building these systems, that’s not a reflection of our world today. Our society and culture have told certain people, women for instance, from a young age that boys work on computers and girls do not. This is wrong, damaging and needs remedying, but that’s a problem to tackle in a different blog. For now, let’s simply accept that the people building AI tend to be a certain type of person – geeks. And if we are to stereotype a geek, we’re thinking of someone who is highly knowledgeable in an area, but also socially inept, and probably a man.

With more manual forms of AI creation, the problem is at its greatest. Though we may be using a dataset gathered from a more diverse group of people, there will still be selection bias in that data, as well as bias directly from the developers if they are tasked with annotating it. Whether intentionally or not, humans will always favour things more like themselves and code nepotism into a system, meaning the system will favour geeky men like its makers more than any other group.

In 2014 the venture capital fund Deep Knowledge Ventures developed an algorithm called VITAL to join its board and vote on investments for the firm. VITAL shared a bias with its creators, nepotism, showing a preference for investing in businesses which valued algorithms in their own decision making (Homo Deus, Harari, 2015). Perhaps VITAL developed this bias independently, but the chances are its developers unconsciously planted the seed of nepotism, and even the preference for algorithms, through their own belief in them.

A step beyond this is deep reinforcement learning, the method employed by Google’s DeepMind in the AlphaZero project. The significant leap between AlphaGo and AlphaGo Zero is that AlphaGo used data recorded from humans playing Go, whereas AlphaGo Zero learned simply by playing against itself in a simulated world. By doing this, the system can make plays which seem alien to human players, as it’s not constrained by human knowledge of the game. The notable exception here is ‘move 37’ against Lee Sedol, played by AlphaGo Lee before the shift to pure self-play. That move was seen as a stroke of creative brilliance no human would ever have played, even though the system was trained on human data.

Humans also use proxies to judge success in these games. An example of this is AlphaZero playing chess. Where humans use a points system on pieces as a proxy for their performance in a game, AlphaZero doesn’t care about its material score. It will sacrifice valuable pieces for cheap ones when moves which appear more beneficial are available, because it cares only about winning. And win it does, if only by a narrow margin.
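To illustrate the proxy problem, here’s a toy sketch with entirely hypothetical numbers: an evaluator that maximises the points proxy and one that maximises estimated win probability can pick different moves.

```python
# Hypothetical candidate moves: (description, material gain in points,
# estimated probability of winning the game after the move).
candidate_moves = [
    ("keep the queen",      +9, 0.55),
    ("sacrifice the queen", -9, 0.62),
]

# A proxy-driven player maximises the points score...
proxy_pick = max(candidate_moves, key=lambda move: move[1])
# ...while an outcome-driven player maximises the chance of winning.
outcome_pick = max(candidate_moves, key=lambda move: move[2])

print("Proxy (points) picks:   ", proxy_pick[0])    # keep the queen
print("Outcome (winning) picks:", outcome_pick[0])  # sacrifice the queen
```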

So where is the bias in this system? Though the system may be training in a simulated world, two areas for bias remain. For one, the layers of the artificial neural network are decided upon by those same biased developers. Second, it is simulating a game whose board and rules were designed by humans. Both Go and chess, for instance, grant a first-move advantage: to black in Go and to white in chess. Though I prefer to believe that the colours of pieces on a game board have everything to do with contrast and nothing to do with race, we may be subtly teaching a machine that the rules guarantee one colour an advantage over others in life.

The same issue remains in more complex systems. The Waymo driverless car is trained predominantly in a simulated world, where it learns free from human input, fatigue and mistakes. It is, however, still fed the look and feel of human-designed and human-maintained roads, and the human-written rules of the highway code. We might shift here from ‘the geek shall inherit’ to ‘the lawyer shall inherit’. Less catchy, but making a system learn from a set of rules designed by a select group of people will introduce some bias, even if it simulates its training data within the constraints of those rules.

So, what should we do?

AI still has the potential to be incredibly beneficial for all of humanity. Terminator scenarios permitting, we should pursue the technology. I would propose tackling this issue on two fronts.

1. Fix the diversity problem among the people building AI

This would be hugely beneficial to the technology industry as a whole, but it’s of paramount concern in the creation of thinking machines. We want our AI to think in a way that suits everyone, and our best chance of success is fair and equal representation throughout its development. We don’t know how much time remains before a hard take-off of an artificial general intelligence, and we may not have time to fix the current diversity problem before then, but we should do everything we can.

2. Make AI reduce inequality at least as fast as it increases speed

Damage caused by biased humans, though potentially catastrophic, will always be limited by our inherent slowness. AI, on the other hand, can implement biased actions much faster than us and may simply accelerate an unfair system. If we want more equality in the world, a system must treat equality as a more important metric than speed, and must at the very least reduce inequality by as much as the process speed is increased. For example:

  1. If we make a process 10x faster, we must reduce the prevalence and impact of unequal actions by at least 90%.
  2. If we create a system 1,000x faster, that figure becomes a 99.9% reduction in the inequality of its actions.

Doing this only retains our current baseline. To make progress in this area, we need to go a step further, reducing inequality before increasing the speed. The sketch below shows the break-even requirement.
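As a minimal sketch of that break-even condition (again in Python, again illustrative only): to avoid amplifying injustice, the required reduction in unethical decisions per transaction is 1 − 1/speedup.

```python
def required_reduction(speedup: float) -> float:
    """Minimum fractional cut in unethical decisions per transaction for a
    system sped up by `speedup` to add no net injustice."""
    return 1.0 - 1.0 / speedup

for speedup in (10, 1000):
    print(f"{speedup}x faster -> cut unethical decisions by at least "
          f"{required_reduction(speedup):.1%}")
# 10x faster -> cut unethical decisions by at least 90.0%
# 1000x faster -> cut unethical decisions by at least 99.9%
```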

Published by

Ben Gilburt

I lead Sopra Steria's horizon scanning team, researching the emerging technologies that have the greatest potential to impact our business and that of our clients, and finding how we can best make use of them. I'm also a philosophy undergrad, and the intersection between philosophy and technology often leads me to machine and robot ethics. If you're interested in this kind of thing too, follow me @RealBenGilburt

4 thoughts on “The Geek Shall Inherit”

  1. Great blog Ben, really interesting. I hadn’t considered the angle of designer bias, but it’s true. As we speak we are designing AIs in Sopra Steria to conform to certain processes. Processes that we intrinsically understand to be logical and sensible. However, if unbound by our ‘training’ and simply asked to come up with the same outcome, would it have followed the same process or come up with some genius and totally unexpected way to solve the problem? Ethically speaking, we should probably at the very least give an AI those boundaries that we consider intrinsic to our society, even if all other decision-making processes are allowed total creative freedom. But then, do we not risk constraining the AI once more, and this time by the biased opinion of what you and I consider ethically moral? Where does one draw the morality line? Certainly, I hope, not by the geeks, as I suspect AI may move into Troll territory pretty quickly! 🙂


  2. Hey Ben. Glad you liked it!

    I think training might help in some way, if it makes use of the collected (and therefore wider) opinions of people over years, rather than one seemingly creative actor who is allowed to impose their individual opinion on a process. Where we have hundreds of people doing something manually, though they may follow a script, there is at least some potential for diverse opinion. Dev teams will be much smaller though: we’re making something with a few tens of people over a few months to do the work of hundreds over many years.

    When it comes to ethics, it’s a really tricky question. If there is a kind of universal ethics which we could ask an AI to follow, it’s something we’ve not discovered yet after thousands of years of head scratching and French coffee shops. The best idea I’ve seen is to only ask this kind of thing indirectly of an AI. To say ‘do the intended meaning of this statement’, or do what we would do “if we knew more, thought faster, were more the people we wished we were, had grown up farther together” (CEV). What might be our best idea of an ethically sound decision today might be considered an atrocity in 100 years.


  3. Very interesting read Ben! As a fellow Philosophy Grad currently working for Sopra Steria I would love it if you could recommend some further reading about the ethics of modern technology! It would also be fab to catch up with you in person at some stage; designer bias is definitely something I’m interested in 🙂


    1. Hey Megan,

      Here are some good books on the topic:

      Superintelligence (Nick Bostrom) – the best intro book on machine ethics
      Sapiens / Homo Deus (Yuval Noah Harari) – also good, but less on topic than Superintelligence
      Life 3.0 (Max Tegmark)
      Anything by Calum Chace

      The big one…
      Rationality: From AI to Zombies (Eliezer Yudkowsky) – this is the best book on the topic, but it’s massive: 6 volumes, and about 5,000 pages!

      Related, but not machine ethics:
      Automating Inequality (Virginia Eubanks)

      Black Mirror is also a great watch and heavily relates to the topic. Blade Runner (the original and 2049) are both great. Japanese anime also covers the topic quite a lot, with things like Akira.

      Give me a shout if you’re in Holborn :).

      Ben

