
Computers tell us who to date and who to jail. But is this right?

Algorithms govern our lives more and more, but it’s critical that we engage with new technology to create the best future, says a new book.

Monday, 5 November 2018

By Simon Worrall

The question of how much power algorithms have over our lives has a topical edge. Already there are lines of code that tell us what to watch, whom to date, and even whom to send to jail. In Hello World: Being Human in the Age of Algorithms British mathematician Hannah Fry takes us inside that world and asks: are we in danger of losing our humanity?

When National Geographic caught up with Fry in New York City during her book tour, the author explained why Garry Kasparov’s defeat by IBM’s Deep Blue was such a defining moment, how algorithms used in the justice system can have racial bias, and why the 2016 election should be a wake-up call for our democracy.

You begin the book with the victory of IBM’s Deep Blue over chess grandmaster Garry Kasparov in 1997. Why was that such an important moment?

What happened with Garry Kasparov was a watershed moment in this history. He was at that point in time the greatest chess player the world had ever seen. I spoke to a lot of chess grandmasters and they described him as a tornado; he’d walk into the room and everyone would be pinned to the sides because his aura was so intimidating. The man was a chess god.

IBM had built a machine that could play chess, something people had long wanted to do but had always thought was beyond the abilities of a computer. There were two sets of matches. The important one, in 1997, when the computer beat Kasparov, was the moment when everyone sat up and had to re-evaluate what they thought computers were capable of, as well as what our unique capabilities as humans really were.

But the real reason why Kasparov was beaten by the machine wasn’t really that the machine was better at chess. Most people accept that at that time Kasparov was still better than the machine. It was because he allowed himself to be intimidated by the machine. He started second-guessing how much the machine knew and allowed that to psych him out. That was what ultimately led to his downfall.

There’s a moral in that story that applies today just as much as it did back in 1997. With all of the new technology that is exploding at the moment, especially artificial intelligence and machine learning, there is this question about how much we’re trusting those machines and how much power we are giving over to them, and how much our human flaws around trust and power will come into play in the future.

Two tech companies hit the news this year: Cambridge Analytica and Facebook. A new book by American academic Kathleen Hall Jamieson suggests that they, and the Russians, helped sway the election for Donald Trump. Do you agree?

For quite a while now companies have known that harvesting our data can be extremely profitable. If you understand who your customers are, you can predict what they will do, and use that prediction to subtly manipulate their behaviour, usually to make them buy more stuff. Supermarkets have been doing this for a long time. But what Facebook has been able to do, because they have access to really personal information, goes beyond what’s in our shopping baskets; it’s who we are, what we like, and how we speak to each other. If you have access to that information, that’s an incredibly powerful thing.

What Cambridge Analytica was doing went one step further, by trying to work out our personality types and serve us adverts that play on our greatest hopes and fears. One example was pro-gun lobby messaging targeted at single mothers with neurotic personality traits. They would serve them adverts about someone breaking into their homes in the middle of the night and the need for protection.

As to whether any of that made a difference in the outcome of the election, it’s really hard to tell because so much of this stuff went on under the surface. But given how tight the election was, where in some states Trump won by a few thousand votes, I don’t think it’s beyond the realm of possibility. That is a worrying thing. We need to be concerned about how much power these algorithms now have, that they can potentially change the rules of democracy.

Driverless cars are in the news. One of the biggest challenges is creating cars that don’t just see their surroundings, but perceive them as we do and even make 'compassionate' decisions. Will that ever be possible?

Technically, yes. Whether that’s what we want to happen is another question altogether, because you’re referencing something called the trolley problem. The idea is that a driverless car is heading down the road, a truck in front of it sheds its load, and the car has to make a decision: swerve one way, drive into oncoming traffic, and kill the driver of the car, or swerve the other way, save everyone in the car, but take out some pedestrians.

When this moral dilemma was initially posed, the point was to say that there are some questions where you can’t just apply straightforward logic to find one right answer. There are some questions that just don’t have a right answer. So, I’m not convinced that we’re ever going to find ourselves in a situation where driverless cars, in amongst all of the other vehicles, noise, and chaos you find on the road, are making those kinds of decisions.

It will come as a surprise to some of our readers, as it did to me, that algorithms are used extensively in the justice system. Can biases be corrected with the help of machine learning?

Algorithms have been used in courtrooms for a long time, dating back to the 1930s. The idea is, you’re trying to make a prediction as to whether or not an individual is going to go on to commit a crime in the future. Whether you like it or not, the judge has to decide whether the person who stands in front of him or her will commit another crime when they are released onto the street. And there are these algorithms that can make more accurate predictions than humans are able to.

There was a big news story by investigative journalists at ProPublica in 2016, where they took one of these algorithms and demonstrated that the way the algorithm makes mistakes isn’t necessarily the same for every individual. They showed that, if you are a black defendant, the algorithm is more likely to incorrectly say you are at high risk of committing another crime than if you are white.

Algorithms are also increasingly used in medicine. Talk us through some of the applications, and the ethical issues that arise.

One of the positive stories about how algorithms are being used in medicine is in the case of cancer diagnosis. This is an example of where people are not putting too much power in the algorithm’s hands. It’s mainly based on image recognition, where an algorithm looks at a biopsy slide and detects whether or not there are tiny tumours hiding amongst the cells.

There are other examples where these algorithms are being used in diagnosis. There is an app that has gained a lot of press in the UK called Babylon Health, which is being used by the National Health Service to help simplify the diagnosis process. You log on and talk to a chatbot, which will give you information and advice about your condition and then filter your inquiry to a relevant healthcare worker.

This sounds like a dream, if you can make healthcare more affordable and easy to access, not just for people in the Western world but in developing countries, too. But there’s a slight problem, which is that I don’t think these technologies are quite as good as they claim to be. They still make a lot of mistakes. An interesting example with Babylon was when someone went on to the diagnosis chatbot and said that they had a hurt elbow, and the chatbot eventually diagnosed them with redness around the genitals. [laughs] So there were all these jokes online about how the bot didn’t know its ass from its elbow. [laughs]

The fact is, there’s nothing more private and personal than your medical data, particularly when it comes to your DNA. All these algorithms require huge amounts of data to be able to work. The question is, who owns your data and are you complicit in giving up your right to that data? I don’t think we’re at the point yet where we know how to navigate these issues, protecting people’s privacy while shooting for a more positive future.

As well as positive applications, your book is packed with stories of the harm that algorithms can do. Describe those for us.

The harm occurs when algorithms are given too much power. Returning to the example of algorithms being used in courtrooms, there are all these issues of bias. But there’s also the fact that these algorithms end up making mistakes. One example is a young man called Christopher Drew Brooks. He was a 19-year-old man from Virginia convicted of the statutory rape of a 14-year-old girl. They had been having a consensual relationship, but nonetheless she was underage, so he was convicted of a crime. During his trial an algorithm assessed his likelihood of re-offending. Because he was only 19 years old and was already committing sexual offences, it concluded that there was a high chance he would return to a life of crime, and it recommended he be given something like 18 months in jail.

That demonstrates how these algorithms can be illogical. This particular algorithm placed a lot of weight on age. But if Christopher Drew Brooks had been 36 years old rather than 19, making him 22 years older than his victim, which by any metric makes the crime much worse, the algorithm would have considered him a low risk of re-offending and he might have escaped jail entirely. You would hope that in that kind of situation a judge would see the flaws of the algorithm and know to overrule it. But in that particular case, and in many others like it, we’re not seeing people standing up and trusting their own instincts over those of the algorithm. In this case, Christopher Drew Brooks’s sentence was increased on the say-so of the algorithm. That’s where the problems come in. Algorithms make mistakes, and they can make calamitous mistakes. If we’re giving them power without a human thinking about the context and nuance of the situation, I think we are asking for trouble.

You write at the end of the book, “In the age of the algorithm, humans have never been more important.” Explain that idea for us, and the issues we may face going forward.

Ultimately, this isn’t a book about algorithms; this is a book about humans. It’s about how we fit into our own future, about how technology is changing the rules of how we speak to each other, of democracy, medicine, policing, and justice. Everything is changing, and it’s about how we need to take more control over our own future and decide how much power we want to give up. It’s also about how we’re not yet at the stage where humans are taken out of the picture. I don’t think the discussion should be about humans versus machines. The discussion should be about a partnership of humans and machines, about shaking hands with this new technology, embracing its flaws as well as acknowledging that we have flaws ourselves, and working out that partnership: a shared journey into the future that plays to the strengths of each as much as possible.

This interview was edited for length and clarity.

Simon Worrall curates Book Talk. Follow him on Twitter or at simonworrallauthor.com.