“I am sceptical of people who say that technology hasn’t changed anything”
Andreas Weigend, the former Chief Scientist for Amazon, reckons that technology has had an impact on everything, and that the speed at which AI is growing and learning is scary.
AI has changed the way we think and make decisions, and more changes are coming. Quickly. But there is a dark side to those changes and we must be vigilant, says Big Data expert Andreas Weigend.
In your former position as Chief Scientist for Amazon, your focus was on exploring the possibilities of technology, on things that we now refer to as “algorithms” and “AI”. What has changed since then?
Jeff Bezos always says: To look only at what is changing is not enough. You also need to look at what isn’t changing. As a physicist, I’m always interested in the invariances, the things that don’t change, and what hasn’t changed is that the value of data derives from the influence it has on decision-making. That’s how it was 50 years ago, and it will still be that way in 50 years. What has changed, of course, is the breadth of decisions that are affected by data. And the way we make decisions has also changed.
Indeed, I am sceptical of people who say that technology hasn’t changed anything, or that it doesn’t matter to them. Technology has fundamentally changed how we do things and has also fundamentally changed what we do. Technology has had an impact on work, on the future of work, and on how we perform and define work. Everything is affected by technology.
When you look at AI, you have a huge amount of complex data but just one simple thing you want to learn. Do you think that, in the popular imagination, we attribute too much to AI, or do you agree that its capabilities can be quite frightening?
I don’t think algorithms, the foundation of AI, have changed very much over the last 20 years. What has changed is the amount of data and computing power. According to Moore’s Law, this doubles every one-and-a-half years, which works out to a factor of 10 after five years, a factor of 100 after 10 years, and so forth.
The scary thing is that AI’s learning cycles grow by a factor of 10 in only one year. It’s almost impossible to imagine, but that is the current speed of the AI cycle. This means that not only are computers getting faster, but huge amounts of resources are being poured into AI. There is no question that it will have an effect, and it will happen at a speed five times faster than we have seen with mobile phones and other technology that follows Moore’s Law. You can slice and dice Moore’s Law in many ways. If someone wants to invest money in storage, that means that every five years, you will need 10 times more capacity. We can kind of grasp that. But contrast that with AI, where growth of that magnitude doesn’t take five years, it only takes one. It is almost incomprehensible.
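The arithmetic behind these two growth rates is easy to check. The sketch below (the function name is illustrative, not from the interview) compares Moore's-Law doubling every 1.5 years with the tenfold-per-year AI cycle Weigend describes, and confirms his "five times faster" figure: a factor of 10 per year is about log2(10) ≈ 3.3 doublings per year, roughly five times Moore's pace of one doubling per 1.5 years.

```python
import math

def growth_factor(years, doubling_period):
    """Exponential growth factor after `years`, doubling every `doubling_period` years."""
    return 2 ** (years / doubling_period)

moore_period = 1.5                # Moore's Law: one doubling every ~1.5 years
ai_period = 1 / math.log2(10)     # period that yields a factor of 10 per year

print(f"Moore's Law after 5 years:  ~{growth_factor(5, moore_period):.0f}x")
print(f"Moore's Law after 10 years: ~{growth_factor(10, moore_period):.0f}x")
print(f"AI cycle after 1 year:      ~{growth_factor(1, ai_period):.0f}x")
print(f"AI cycle vs Moore's Law:    ~{moore_period / ai_period:.1f}x faster")
```

The exact Moore's Law figures come out near 10x and 100x (10.1 and 101.6), matching the rounded numbers quoted above, and the speed ratio lands at about 5.0.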
What are some potential benefits of this massive growth? How can it make our lives better?
This speed of change makes forecasting difficult. Just take speech recognition: Five years ago, if you talked to a computer and expected text to come out, it would have been nearly impossible. Now, though, a small team of 30 engineers from Silicon Valley can create an app that produces real-time transcriptions. The default has shifted. Every day, in the 15 or so hours I am awake, I produce text. An app called Otter pulls out the text that it thinks was important in my day and highlights it. This is no longer the future, this is the present. What is important to me in this conversation is how the default settings in life are changing. The first section of my book describes a world where everything is recorded and, more than that, also understood, transcribed, and made searchable. That’s a huge effect that AI has had on life. It is a paradigm shift. We don’t have to rely on our memories anymore. How will it change how people interact, or think about themselves, if they have their lives indexed?
In linguistics, we talk about intertextuality, which means that texts are somehow connected through our cognitive ability. Now we have all this data, but what will we do with it? Can we do some good with this massive amount of data, or will things automatically go downhill?
No, not automatically downhill. There are always good and bad sides to any technology. Certainly, whatever technology can be used here to detect our emotions or to detect terrorists in a crowd can also be used to find dissidents in China. It comes down to what you do with it.
Is regulation futile? We can regulate a company, but the data behind everything is still there.
People are trying to learn. But to mention one example, German Finance Minister Olaf Scholz was sitting next to me at dinner recently and said, “We don’t need Uber, we have something called Easy Taxi.” He is a smart man, no question. But I was sitting there thinking: How do I explain to someone who has a bulletproof car with a driver parked outside, someone who has never stood outside in the Hamburg rain looking for a taxi, that a platform like Uber is different from Mercedes-Benz writing an app for cab drivers?
I really think that the best thing politicians could do is spend an hour a month with a teenager and ask them how they are leading their lives. People in power have no idea how people under 50 form their opinions. Commissioning a study probably doesn’t help either. For me, my students help fill that role. Every now and then, they explain the cognitive dissonance to me: how they see the world versus how I see it.
But when it comes to transparency, you can force corporations to be transparent. There can be policy pressure and pressure from things like the Panama Papers, for example. Or take the very thoughtful open letter from Amazon workers to Jeff Bezos, urging him to take a more active stance on climate change and transform Amazon into a carbon-neutral company.
But then look at Amazon Web Services. Facial recognition at the border is built on AWS. People want to know what the technology is really being used for. The standard example is the role of IBM punch cards in the Holocaust. IBM actively helped by sending engineers to optimise their punch cards for the logistics needed to get Jews to the gas chambers. Now, people are saying we do not want Amazon to be talked about in 50 years or whenever with regards to a genocide. What can we do now? What could IBM have done in World War II? What could their employees have done to prevent that technology from facilitating the Holocaust?
All fields of innovation are now under scrutiny. Are we entering a more ethically aware period of time? Or are we just being dreamers?
I think there is a fundamental power shift taking place from the person as an employee to the person as an individual. If a person doesn’t like what Company A is doing, he or she can quit and go to Company B. It’s also surprising to me how many people give me their Gmail address as opposed to their company email address. I think it’s great to see. It’s the power of the individual over that of the company.
That’s an interesting perspective. You are saying that we have more agency even as many are saying that AI is actually taking agency away from us by giving us recommendations and making us unlearn decision-making.
In the past, we had very limited inputs to, say, find a restaurant. Today, we have many more options. Even in the town where I was born, my brother and I will use an app to find dinner. It truly has the potential to be democratising. I have always been a rebel and I love that these traditional institutions can be blown up.
Your positive outlook on individualism and empowerment is refreshing. In your most optimistic scenario, where do you think we are headed?
This is a moment of maximum uncertainty. I don’t know if it’s more likely that we’ll end up in a surveillance society much worse than we ever imagined, or if it will empower people to express themselves and make better decisions based on the data they create. It is neither one nor the other, of course, but I fear the overall distribution has shifted for the worse. I was more optimistic a few years ago than I am now, because I have learned about the Facebooks of the world. They have all the data about you and they aren’t trying to help you, but only themselves. I am a big fighter for data literacy. But isn’t it amazing that when the news hit that Facebook might be fined $5 billion, their stock went up?
This interview is part of a collection of essays and interviews by Alexander Görlach: