“This will be the decade of artificial intelligence”

Sebastian Thrun, former Vice President at Google, emphasizes the importance of demystifying artificial intelligence as its influence on everyday life increases.

As the influence of artificial intelligence on our daily lives continues to grow, it is important to demystify the technology, argues Sebastian Thrun, a former Vice President at Google. AI is merely a tool, he says, and it can have enormous benefits if we use it correctly.

We hear so much talk about how AI is going to change healthcare, automobiles and so many other industries. What do you see emerging in the new decade?

SEBASTIAN THRUN: This will be the decade of artificial intelligence (AI). I like to compare AI to the agricultural revolution, when machines began doing a lot of physical work. Farming used to involve hard physical labour. Hundreds of years ago, almost all Europeans worked in agriculture; today, that figure is below 2 percent. I think AI will do the same for menial work – for the people in offices who do extremely repetitive work, day in and day out. In the future, they will be able to hand off some of that work to machines.

Those changes, though, are also fuelling fears about job losses and other forms of displacement. Do you see parallels between the current debate over AI and the concerns during the Industrial Revolution that the changes would cause people to lose their livelihoods?

It’s a great debate, and a necessary one, because it will affect all of us. We need to use AI very responsibly and very carefully. If we use it well, then we will become much smarter as a human race. In addition to freeing ourselves from repetitive work, we can also improve our jobs. Doctors will be better at diagnosing and treating diseases like cancer – and who doesn’t want a better doctor? Our lawyers will be better too, and so will our pilots.

It feels like the debate about the ethical aspects of AI – potential inherent biases and the disadvantages it can cause – is taking place even before the technology itself has been introduced.

There is uncertainty every time a new technology is invented. And that gives rise to questions like: What does it mean for me? What does it mean for my neighbours, for my community, for my country? And because every new technology affects humanity in some way, it’s important to have this debate and to think about its risks and pitfalls in order to minimise the chances that something could go wrong. AI is no exception.

Demystifying artificial intelligence

Is it our individual or collective responsibility to address the potential pitfalls of AI?

AI is a technology. It’s a tool, in the same way that a kitchen knife is a tool. I can use a knife to cut my vegetables or I can use it to harm a person. The person decides whether to act ethically. And I want to demystify AI a bit: When we talk about AI, we really mean machine learning, which is a subfield of AI. Machine learning is a technique that allows computers to extract rules and patterns from data. If you feed a computer a lot of repetitive data, the computer is able to determine rules based on that data. If you feed a computer enough images of skin cancer, for example, it will eventually be able to detect if a person has skin cancer and provide a diagnosis. If you show a computer enough examples of emails from a criminal, it can eventually detect patterns and perform the same task a person can. That’s it. So, it’s a tool that allows a computer to acquire the skills of highly repetitive work from people. The question is: How do we use this tool? These decisions need to be made by everybody and not just computer scientists.
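Thrun’s description – feeding a computer labelled examples so it can extract rules and repeat a task – can be sketched in a few lines of code. This is a deliberately toy word-count classifier, not any system Thrun built; the email examples and the "spam"/"ham" labels are hypothetical illustrations of the pattern-extraction idea.

```python
from collections import Counter

def train(examples):
    """Extract per-class word counts (the learned 'rules') from labelled text."""
    counts = {"spam": Counter(), "ham": Counter()}
    for text, label in examples:
        counts[label].update(text.lower().split())
    return counts

def classify(model, text):
    """Score a new message by how often its words appeared in each class."""
    scores = {label: sum(c[w] for w in text.lower().split())
              for label, c in model.items()}
    return max(scores, key=scores.get)

# Hypothetical labelled training data.
examples = [
    ("win a free prize now", "spam"),
    ("claim your free money", "spam"),
    ("meeting agenda for monday", "ham"),
    ("please review the attached report", "ham"),
]

model = train(examples)
print(classify(model, "free prize money"))       # -> spam
print(classify(model, "monday meeting report"))  # -> ham
```

Real machine-learning systems use far richer statistical models, but the shape is the same: labelled data goes in, a reusable decision rule comes out.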

Sebastian Thrun is Founder and CEO of the flying car company Kitty Hawk and Founder of the online university Udacity. Prior to that, he was a Vice President at Google, where he founded Google X, which is developing self-driving cars and other technologies. A native of Germany, Thrun spent many years as a Professor of Computer Science at Stanford University before joining Google. (Photo: CreativeShot Photography)

Bias is one of the greatest challenges emerging in AI right now. Experience has shown that technologies like facial recognition can lead to discriminatory practices.

If we as a society expect biometric facial recognition to work equally well for every part of the population, then we need appropriate techniques for training and using it. But if a facial recognition system is trained on one race, we should not be surprised that it performs better on the one it has been trained on. The tools only extract patterns, and it is us – the designers and we as a society – who have to use the tools responsibly, in accordance with our values. If we want facial recognition to work equally well for women and men, for Hispanics and African Americans and white people, then it’s our responsibility to train the system appropriately.

AI is already being used in healthcare and in vehicles, both autonomous and otherwise. We have, in other words, gathered some experience on ethical issues. Do you see a potential rift developing – between, say, the Chinese, the Americans and the Europeans – when it comes to the values that inform our different approaches to AI?

It goes back to my kitchen knife example. AI is a tool. There are countries where kitchen knives are being used to feed people, and there might be countries where kitchen knives are being used to stab puppies. It’s up to society to think about how to use a tool responsibly. I am convinced that the way AI develops will hinge on tomorrow’s values. No neural network will, by itself, say: “I hate white people and I love African-American people.” That’s not the case. The mathematics underlying these models have nothing to do with any racial statement. It’s the way in which we use the tool that determines the outcome.

Moral questions

My conversations with programmers and scientists have made it sound as though a consensus is emerging that we should apply our constitutional values when determining which biases we need to prevent – that algorithms should be programmed to avoid discrimination based on gender, race, sexual orientation and other things. Do you think such a consensus in the tech industry is realistic?

When I apply AI in my own work, I use machine learning to make cars drive themselves and to make them drive more safely than people would be able to drive them. I have also used AI to diagnose deadly diseases like skin cancer, and we have saved many lives using this technology by assisting doctors in finding dangerous melanomas. The issue of ethics has not come up in any technical areas where we rely on machines to do work that humans also do well. There is no question that if we can make cars safer, we should indeed make them safer. I see no moral issue in making the diagnosis of potentially deadly diseases more accurate. We never even considered whether a cancer diagnostic works better with one group of people than another. I don’t want to whitewash these important moral questions, but it’s how we use the tools that makes all the difference.

After losing an arm and a leg in a rail accident, Londoner James Young received a unique prosthetic. The arm features a 3D-printed bionic hand that enables him to perform a number of gestures, all controlled by tensing his shoulder muscles. The main structure of the arm is made of carbon fibre, making it light but incredibly strong. The arm has the capacity for USB-powered attachments and can even charge his phone. (Photo: David Vintiner)

Still, some of these technologies create other issues. Technologies that are great for security, for example, can also be used for surveillance, like in China, where the government has installed these technologies to control large numbers of people.

Germany itself is no stranger to mass surveillance. If you think about East Germany and about my generation, you know what that means. What makes me optimistic is that over 1,000 years, Europe grew from a state of almost continuous war into the current era of peace, hand in hand with an era of democracy. Nobody forced the Europeans into democracy – the people wanted it and they won. The people of East Germany prevailed over the East German government. West Germany and a now reunited Germany have worked hard to put the horrors of the Nazi regime behind them. And we have created values that have made lives a lot better than they were 300 or 400 years ago.

Take a look at Wikipedia, where you can see that England and France went to war against each other 26 times over the past millennium. Today, you couldn’t even imagine a war between the two. As part of humanity, we have a social contract to use our technologies responsibly. And we do want this, because at the end of the day, we want to have freedom, we want to have liberty, we want to have opportunities, we want to have safety, and we want our children to have a better life than we had ourselves. This applies universally to every country.

As someone with years of experience living in Germany, Europe and the United States, do you think it’s possible that the EU and the US could adopt a common approach to AI in the same way that the Europeans have worked together to address privacy concerns through the General Data Protection Regulation (GDPR), which regulates how people’s private data can be used?

The one thing I would love to see more of, particularly in Germany, is for people to think more about the opportunities presented by AI and not just about the potential risks and negative consequences. We are at the beginning of an era that has the potential to transform people’s lives more than any before it. If you take aspects of people’s intelligence and put them into a computer, you will fundamentally transform every job. Every single job has repetitive elements. As you transform human expertise, you will also make humans much smarter.

We have technologies that can turn a nurse into a world-class doctor when it comes to diagnosing skin cancer, without the 10+ years of training necessary to become a certified dermatologist. We can enable a five-year-old to safely drive a car, which was impossible before AI. We need a debate on the possibilities so we can use this toolset to improve society and also ensure that Germany remains one of the world’s best countries in the future. Germany is an exceptionally well-positioned country, with incredibly strong talent. On top of that, it is also an attractive destination for immigrants. Germany has the opportunity to become one of the absolute leaders in this nascent field.

Looking ahead

You have repeatedly mentioned human-machine interaction and AI, which has been a major focus of yours. How do you think it will transform the workplace in the next decade?

Any worker could ask themselves: What part of my work is highly repetitive and not super creative? Say, I’m a lawyer and I do repetitive work on drafting contracts. Or I’m a radiologist and I do repetitive work looking at X-rays. I would say that 50 to 90 percent of people’s work is repetitive. If a machine watches you doing this work – and not just you, but every person in the world doing similar work – it will be able to detect patterns and gradually take over these tasks for you. I might need a lot of time to read and answer my emails now, but I might be able to get it done in a few minutes in the future. If I can see 20 patients a day as a doctor now, it might be possible to see 40 a day in the future, while at the same time increasing the accuracy of my diagnoses. As a patient, this might mean that rather than paying $1,000 a month for healthcare, as is common in the US today, I might only have to pay $500 in the future.

It’s not only menial tasks that could be automated in the future. Algorithms are growing so sophisticated that they can even write code themselves. What will we do with our time in an era of pervasive automation?

We have always found new ways of spending our time. Technical jobs like software engineer, TV presenter or pilot didn’t exist 100 years ago. There are also non-technical jobs that didn’t exist: There were no massage therapists 100 years ago, for example. And there are professions experiencing huge shortages in personnel, like teaching. We know that the best way to teach students is in groups of one to five, but most classes have 30 students. There is no shortage of work. As we free ourselves from the repetitive part of our work, we will be free to take on more creative jobs. This will increase the need for learning and training.

As a society, we have long believed that a single period of education is sufficient. We go to university and then never get any further formal training. We need to rethink this given that we now live so much longer, that many of us have several different jobs in our lifetimes and that society is changing so much faster. People will have to make formal training part of their entire life journey and not just a one-time event.

This interview is part of a collection of essays and interviews by Alexander Görlach:

Shaping A New Era – how artificial intelligence will impact the next decade (PDF)