The big picture
German academic Alexander Görlach argues that accelerated technological innovation necessitates a discussion on how to make technology work for us – not the other way around.
As technological innovation ploughs ahead, long-term consequences and ethical questions are often forgotten. But German academic and writer Alexander Görlach argues that a new way of thinking has already begun to take root. It involves engaging society as a whole to figure out how to make technology work for us – and not the other way around.
The German language is known for creating long words like Technologiefolgenabschätzung. The term refers to the process of assessing the long-term impact new technologies will have on society. In its own dispassionate way, this word confirms the primacy of technology above all other elements of society. Philosophers, sociologists and political scientists, not to mention lawyers and theologians, have rarely knocked the world off its axis. And lawmakers are often busy stuffing what has long since become reality into a regulatory corset based on antiquated standards.
Technological achievements modernise societies and force others to react. After 15 years of constant digital-platform development, and of living in the new economy it has created – one that has permanently changed the way we do business, communicate and engage in politics – we are now in a phase in which the medium- and long-term implications of this development are crystallising. It has become clear to social media companies, for example, how vulnerable their platforms are to manipulation. The automotive industry is asking itself what criteria should determine when self-driving vehicles must brake for other road users. In medicine, algorithms are helping us identify tumours accurately and detect pandemics before their impact is widely felt. But this also raises the question of how best to meet the challenge posed by the fact that more and more people are staying healthy and living longer. Questions have been raised and are waiting to be answered.
Data – a new way of thinking
Technologiefolgenabschätzung does not require us to demonise disruption and its consequences, but it does mean that their impacts, both good and bad, require serious examination. It entails thinking through scenarios that are not immediately apparent to an engineer tinkering with an invention or to a programmer on the brink of a fresh breakthrough. A new way of thinking that is willing to consider such ethical questions is already taking root among some of the central players of our era. Data, a product of this new age, is useful in this endeavour: it has never been possible to investigate complex issues as quickly and comprehensively as we can today.
The Moral Machine developed at MIT, for example, collected and analysed data with the goal of developing ethical standards for deciding when self-driving vehicles should apply their brakes. The answers varied depending on cultural presuppositions. In cultures that value the elderly, fewer people were willing to run over an older person in a worst-case scenario. It also revealed that certain ethical standards apply across cultures. People everywhere want to keep the number of potential victims as low as possible, and drivers around the world want to survive, regardless of whether it costs one or more people their lives. The desire for self-preservation takes precedence over survival of the species.
Good and evil
The Cambridge Analytica scandal has made clear that political content is sensitive and cannot be handled like ordinary advertising. We have seen how applying the same metric that befits the sale of mustard to the distribution of political content can drive a wedge through societies. It is therefore logical that the future of democracy is now seen as bound up with “fake news” and “alternative facts”. It is becoming abundantly clear that societies cannot let an algorithm determine what rules they should live by or what constitutes good and evil. This is a task for all of society.
The internet ethicist Luciano Floridi, who teaches at Oxford, warns that people might forget how to think, evaluate and judge if we let algorithms relieve us of every decision, however small. Floridi points out that free and democratic societies are characterised by the decision-making processes of their citizens, who enjoy equal rights and access to education, including decisions about what restaurant to patronise, where to go on holiday or which new car to buy. A society only benefits its people when it negotiates a bonum commune (literally: a common good) and decides how it wants people to be able to live.
How do we want to live?
With so many marvellous opportunities currently at hand, answers are all the more urgently needed. In medicine, experts are negotiating questions about the beginning and end of life, about reproductive technologies and genetic modification. Climate protection is also a central issue. And questions about fair living conditions and the justness of our economic system must be addressed with optimism if we really want to extend our social contract.
The question of how we want to live is thus far from trivial, and involves more than lifestyle choices. When it comes to civil and social rights in today’s liberal democracies, human dignity is becoming a practical matter. The right to vote, after all, is useless if you have nothing to eat. By this logic, a fair and just social order must include universal access to education and healthcare. Only if we agree on this as a society will we be able to use the data we have to maximally benefit the greatest possible number of people.
Today, new actors oppose this line of thinking. In the People’s Republic of China, human rights are understood only in social terms. As long as the Communist Party delivers prosperity to the people, citizens gratefully reward it with political obedience. In the United States, a concept of humanity based on archaic natural law – according to which sexual self-determination, gender identity and homosexuality are aberrations – is once again gaining traction. Both are attempts to deny human rights their political dimension: in China in the name of maintaining the party’s power, and in the US in accordance with an evangelical Christianity based on a literal reading of the Bible. When it comes to these questions, Europe has everything it needs to present an alternative and to renew and further develop humanism, human dignity and the rule of law.
Social polarisation based on divisive traits is already measurable and powerful today, and not only in China and the US. Today, as in the past, technology can be used for good or evil. With new technologies, democracy can be renewed or entire cities turned into prisons. What, then, is the goal we are aiming for with this technological disruption: the digital citizen? Or the person under total surveillance?
One bonum commune
Free societies have the privilege of being able to reach agreement through discourse over what their common goal – their bonum commune – should be. This question cannot be answered by the experts, the technologists and the programmers alone, because the self-driving car and the skin-cancer-detecting algorithm touch on the humanum, the essence of humanity. They are equally entwined with the question: “What is the human being?”
Just as this question reaches beyond the technological, so does its answer. People are not identical to that which they create. At the same time, human beings do not exist without the implements they derive from the world. In the polarised debate now being held in many places, some supposedly natural human condition – an idealised moment in history – is invoked in order to immunise against innovations of all kinds, or, broadly speaking, against “liberal society”. As if the definition of humankind has been established by nature and is thus not subject to change.
Nature vs. nurture
This claim is false: there is no “humankind” without society and without culture. In this debate, “nature” and “nurture” are pitted against each other as rivals. Yet from the domestication of the horse to the invention of the self-driving car, people have always “enhanced” themselves with the aid of technology.
But people can only fulfil the promise of enhancement they have made to themselves if they remain committed to their own humanity: to empathetic listening, to the desire to learn and understand, and to evaluation and judgment. This holistic accomplishment, keeping the “big picture” in view, is intrinsic to humankind. In this respect, humanity is still far from being surpassed by algorithms. Algorithms are already many times faster than people at completing a single task, but they are not able to connect different contexts or grasp complex relationships.
In this sense, this collection of interviews is committed to the “big picture”, a holistic overall view. Only in a conversation engaging society as a whole can we probe technologies for the good or ill they may bring and improve them over time. In this context, ethics is not aloof abstraction but everyday and relevant. It is up to us to put new technologies into the service of human beings. Indeed, they exist for our benefit – not the other way around.