“We cannot allow ourselves to think in a utopian way.”

For the philosopher Luciano Floridi, it is crucial to understand what we want to use new technologies for – experts still do not ask enough ethical questions about AI.

Being Director of Research of the Oxford Internet Institute seems a bit anachronistic in itself – “Oxford”, one of the oldest universities in the world, and “internet”, the bringer of modernisation and globalisation. How do you unite the two?
Luciano Floridi: Indeed, it is an interesting combination of words. Take centuries of tradition and combine them with complete novelty and unprecedented problems – it does not sound straightforward. But looking at the problems we’re facing, it seems like a wonderful recipe to combine one of the world’s best universities with an institute that focuses on the internet, something that gives us seemingly endless opportunities in the world. I can’t imagine a better way of combining the old with the new.

But is Oxford playing catch-up in that regard? Or is it leading the way, a place where you develop new theories and propose ground-breaking policies? It seems like much talent is attracted to the United States…
… Well – both. When it comes to understanding what’s happening, as opposed to identifying new phenomena, there is a lot to register in terms of novelties. Take the job market, for example – thanks to the internet, huge internet companies such as Amazon can now hire thousands of people in the shortest of time spans, something that was unthinkable a few years ago. There is a lot of catching up to do in understanding where the issues the internet has brought with it lie. At the same time, we as a society should be shaping the course of things in terms of policy. I see it as two steps: one is the understanding part, and one is the policy-making part. Both influence each other, since policy always affects what happens in the real world.

Luciano Floridi is the OII’s Professor of Philosophy and Ethics of Information at the University of Oxford, where he is also the Director of the Digital Ethics Lab of the Oxford Internet Institute. He is Distinguished Research Fellow of the Uehiro Centre for Practical Ethics of the Faculty of Philosophy, and Research Associate and Fellow in Information Policy of the Department of Computer Science. His research primarily concerns information and digital ethics, the philosophy of information, and the philosophy of technology (Photo: Ian Scott).

Interesting – there is actually an argument, made by Jean Baudrillard, that America has achieved a realised utopia, whereas Europe is held back by the weight of its history. However, in a utopian world, by definition there wouldn’t have to be any policies, correct?
Yes, I am familiar with that argument. It is not quite about that, though. Utopia, by definition, is anti-historical. It establishes a status quo that remains there forever – you don’t improve on perfection. We in Europe cannot allow ourselves to think in a utopian way. We have a lot of history, and we know that more history is coming. It has all happened before: during every great chapter of history, we thought history itself had come to an end – the Egyptians thought so, the Greeks thought so, and so did the Romans. More recently, we thought so after World War II, after the fall of the Soviet Union, and at the beginning of the European Union. Utopian thinking is possible in some contexts because, to some extent, there is not enough history there to show that reaching a point of no further development is contrary to human nature.

So, would you say Europeans are disillusioned by it?
Quite the contrary – I think we Europeans are more realistic. Disillusion may come in the form of a cynical view of things, but overall I think Europeans have a grown-up attitude: things change, always have and always will. The task of politics here is to handle change, not to stop it.

The counter-narrative would be the Eurosceptic view, especially of the economy, in which most political parties these days are offshoots of one another.
To add to that – politics is and always has been driven by social issues. Germany needed to grow in the early 1900s while the global market was dominated by a few colonial powers, and so World War I happened. Over the past 50 to 70 years, since World War II, we have slowly seen the ground of politics become purely economic. Listen to any politician, and what they discuss are GDP, unemployment, growth. Everything has to do with the economy, and the debate does not focus on “are people happy?”. All the questions of happiness, social justice and equality become a by-product of economic progress or recession. We have gone from sociopolitical issues to mainly economic ones, and I think our future has to be based on values other than economic ones. I don’t like to talk about post-modernism, because in my view it is an admission of having run out of ideas, but we have to reposition our political discussion.

And with our technology developing, don’t we get the chance to actively make those decisions? Take artificial intelligence – naturally, we are very sceptical of the ethical implications a robot may have, and this has been debated for a long time now. Does that not reflect our moral bankruptcy? Since we are doing it anyway, can it not be seen as humankind giving in to the economic advantages of lower costs and consistent quality?
Maybe – we can dig deep and look at the logical consequences. The underlying question we need to ask ourselves, though, is: what do we want to do with technology? I disregard completely the ideas of singularity and artificial intelligence domination, which I see more as a scratch to address certain itches. The scratching is wrong, but the itching is real. It means there is a problem deep down, namely “What future do we want to have?”, when our world and economy are already so deeply dependent on and shaped by technology. It’s just another way of saying the digital economy will dominate our lives – is it time to give that control a certain shape? Artificial intelligence gives us countless new advances in what is possible digitally. It is up to us to determine which possibilities will become reality.

In terms of humans, how does it affect the two aspects of a person – being a citizen versus being a consumer?
There is a tension there, because the circle of interactions has become wider and less visible. In the early days, there was the production of goods and the consumption of goods – industry and consumers. A third party like the state could regulate that exchange, but there were only three parties. Today, it’s no longer that way. Selling products to customers is not the main objective any more. In a digital economy, you mostly give your customers free things. They are no longer considered customers, but users. And a third party that regulates a “gift economy” serves no purpose. If you add to that the analytics and advertising that go on, the circle of interactions has become a lot less clear and a lot wider. That said, the notion that companies will sell customers’ data – what many people fear as a result of this widening, obscured circle – is not worth paying much attention to. User data is the golden goose in this equation, and companies will sell only the golden eggs, i.e. services based on the data they own. They may sell the possibility of targeting users, based on the data they own, but not the data itself. That will remain with the company.

But if data is so sacred to these companies, what does the dialogue between the governments and the big internet companies – Google, Amazon, Facebook – look like? Wouldn’t you say that they are completely against policymaking? And what role does the citizen play in it?
Citizens do not play a role in this, because they love what they are getting – free mail accounts, free videos, free websites, free news, free everything. Nobody in their right mind would be opposed to that! So there is no way to forbid or sanction advertising. We need to remember that when we talk about “people”, we all play certain roles: user, consumer and citizen. Sometimes, especially in Europe, patient is also added in a medical context (we are prone to that because we are an ageing society). But now think of what we do if one of these does not work. As a citizen, if policies have failed us, we vote for someone else. As a consumer, you have laws that protect your purchase. As a user of a free service, you are stuck; your only recourse is to walk away. If you don’t like the Google search engine, that’s your problem. There is a lack of accountability in this, all the way to the top. The five companies that put together a partnership on artificial intelligence – Microsoft, Google, IBM, Facebook and Amazon – took a first, very positive step in the right direction. I am a huge fan of that, and there is now an expectation that society will push for accountability rules in technology in general and in artificial intelligence in particular. We need a soft and a hard legal framework, and right now it is just a bit messy.

But then, looking at those conferences, what is your impression of what artificial intelligence is, and of the state of knowledge about it?
There is a lot of excessive emphasis on what artificial intelligence can do. People always talk about specific machines, what these machines can do and what the algorithms behind them are. They do not recognise the variability and the degrees of technical implementation that different artificial intelligences have. The difference between an industrial robot, a bot that updates a Wikipedia entry automatically, and a house robot that does the dishes is huge. The result is that we become overwhelmed by the immensity of possibilities. Artificial intelligence has been most successful in industrial robotics, an area where it has worked incredibly well for decades. The car industry has always been at the forefront, and there are regulations in place – just look at the self-driving car debate and the accidents. Whenever I speak to people around the world, I am reminded that technologies do not respect boundaries – they cut across fields. It is all about finding a legal framework.

What about the notion of robots replacing us in the future?
It’s ludicrous. There is no such thing – these debates happen in very esoteric contexts: university halls where you forget the rest of the real world, or heated exchanges in newspapers. My recommendation is: look out the door, and look at what there is in the world. Then tell me sincerely whether you, the person fearing a robot takeover, see anything that remotely hints at the emergence of such a scenario. That does not mean it will never happen; it means we should not worry about negligible sci-fi scenarios. The real issues raised by artificial intelligence are too serious and pressing to waste time wondering how we shall ever teach superior artificial minds to be ethical.

An interview with Luciano Floridi, conducted by Alexander Görlach.

Painting of a scene from the short story “The Thought Machine” by Anton Brzezinski (Photo: Forrest J. Ackerman Collection/CORBIS/Corbis via Getty Images)