“A key question is whether robots can handle ambiguity”
What would happen if the wealth created by robots and artificial intelligence rested with just a few superrich people in Silicon Valley? And why should wealth redistribution be our future goal?
Artificial intelligence is the buzzword of our time. Looking at societal and media discourse: would you say it has captured the disruption that we are about to see?
Martin Rees: Artificial intelligence will do better than humans at managing complex networks – city traffic, electricity grids and so forth. And it will transform the labour market. It won’t just take over manual work (in fact, plumbing and gardening will be among the hardest jobs to automate), but will be able to do routine legal work, computer coding, medical diagnostics and even surgery. But that’s very far from achieving the human-level general intelligence that grabs media interest and remains on the speculative fringe. Some artificial intelligence pundits take this seriously, and think the field already needs guidelines – just as biotech does. But others regard these concerns as premature – and worry less about artificial intelligence than about real stupidity.
You are one of the founders of the risk analysis centre at the University of Cambridge: on a scale between blessing and curse, where would you place artificial intelligence?
Among experts (and I’m not one) there’s a spectrum of opinion about how long it will take for a general human-level intelligence to be achieved. Ray Kurzweil thinks it may take 25 years; Rodney Brooks (inventor of the robot vacuum cleaner) thinks it will never happen. I would place myself somewhere in the middle of that spectrum. If robots could observe and interpret their environment as adeptly as we do, they would truly be perceived as intelligent beings, to which (or to whom) we can relate. What if a machine developed a mind of its own? Would it stay docile, or ʻgo rogueʼ? If it could infiltrate the internet – and the internet of things – it could manipulate the rest of the world. It may have goals utterly orthogonal to human wishes – or even treat humans as an encumbrance. Be that as it may, it’s likely that society will be transformed by autonomous robots, even though the jury’s out on whether they’ll be ʻidiot savantsʼ or display superhuman capabilities. A key question is whether they can handle ambiguity and the unexpected as well as a human can.
As a person who has been part of scientific disruption over the last half-century in your field, as an astrophysicist and cosmologist, how do you personally perceive the changes that we are and will be witnessing?
Perhaps because I’m an astrophysicist, I think it’s in space rather than here on Earth that artificial intelligence will fulfil its greatest long-term potential. Space is a hostile environment to which humans are ill-adapted. But near-immortal electronic and non-organic intelligences will be able to roam the universe, free of the constraints of organic creatures.
Out of all great transformations we are going through, from climate change to artificial intelligence to gene editing, what are the most consequential risks we are about to witness?
It depends on what timescale we are thinking about. In the next 10 or 20 years, I would say it’s the rapid development in biotechnology. Already it’s becoming easier to modify the genome, and the 2012 “gain of function” experiments, rendering the influenza virus more virulent and transmissible, are a portent of things to come. These techniques offer huge potential benefits, but catastrophic downsides as well. And the other point about them is that they are easily accessible and easy to handle. The equipment they require is available in many university labs and many companies. And so, the risk of error or terror is quite substantial, whilst regulation is very hard. It’s not like regulating nuclear activity, which requires huge special purpose facilities. Bio-hacking is almost a competitive student sport. Obviously, we should try to minimise the risk of misuse of these techniques, whether by error or by design. We should also be concerned about the ethical dilemmas they pose.
Do you fear that this doesn’t just happen in the realm of crime – if we think of so-called “dirty bombs” for example – but also the possibility that governments might apply these techniques? Do we need a charter designed to prevent misuse?
Governments haven’t used biological weapons much. That’s because their effects are unpredictable. There is a risk of “bioerror” – leakage of pathogens from a laboratory, for instance. And there is a risk of “bioterror” by mavericks or extremists – for instance eco-fanatics who think that humans are so numerous that they are polluting the planet and jeopardising biodiversity. We do indeed need internationally agreed regulations, for both ethical and pragmatic reasons. But my worry is that these cannot be effectively enforced globally – any more than the drug laws or the tax laws can be.
That brings to mind recent Hollywood blockbusters like “Inferno”, where one lunatic tries to sterilise half of mankind.
Several movies have been made about global bio-disasters. A pandemic, whether natural or malevolently induced, could spread globally at the speed of jet aircraft. We have had natural pandemics in historic times, for instance the “black death”, which – though regional and not global – killed at least a third of the inhabitants of some European towns. But even when that happened, the surviving citizens were fatalistic and life went on as before. Today, however, we have high expectations, and there could be societal breakdown even for a one per cent casualty rate, because that would overwhelm the capacity of hospitals. That is why governments put pandemics – natural or artificially produced – high on their risk register.
So, when speaking of the age of transformation, aspects of security seem paramount to you. Why is that?
We are moving into an age where small groups can have a huge and even global impact. In fact, I highlighted this theme in my book “Our Final Century”, which I wrote thirteen years ago. These new bio- and cyber-technologies can cause massive disruption. We have always had traditional dissidents and terrorists, but there were certain limits to how much devastation they could cause. That limit has risen hugely with these new technologies. I think this new threat is going to pose challenges to governance and increase the tension between freedom, security and privacy.
Let’s look at another huge topic: artificial intelligence. Is this a field where more uplifting thoughts occur to you?
Within a timeframe of ten to twenty years, I think the prime concerns are going to be cyber-threats and bio-threats. However, as I’ve already said, the labour market will be disrupted because robots will take over many occupations. To ensure we don’t develop even more inequality, there has got to be heavy taxation and massive redistribution. The money earned by robots can’t just go to a small elite – Silicon Valley people for instance. It should be recycled, so that social-democratic nations can fund dignified, secure jobs where the “human touch” can’t be replaced by a machine: carers for young and old, teaching assistants, gardeners in public parks, custodians and so forth. There is almost unlimited demand for jobs of that kind – there are currently far too few, and they’re now poorly paid and low status. But of course, most workers want more leisure – for entertainment, socialising, rituals, etc.
But robots could potentially also take on the work of a nurse for that matter.
True, they could do some routine nursing. But I think people prefer real human beings. At the present time, the wealthiest people (the only ones who have the choice) want personal servants rather than automation. I think everyone would like to be cared for by a real person in their old age.
In your opinion, what mental capacities will robots have in the near future?
I think it will be a long time before they have the all-round ability of humans. Maybe that will never happen; we don’t know. But what is called generalised machine learning, enabled by the ever-increasing number-crunching power of computers, is a genuine breakthrough. The development of sensors, however, still has a long way to go. If these computers were to “get out of their box”, or infiltrate the “internet of things”, they might pose a considerable threat.
In your opinion, what sparks new innovation and ideas? Will artificial intelligence and machine learning foster these processes?
Eureka moments are quite rare, sadly. They do happen, but – to quote Pasteur – “Fortune favours the prepared mind”. You have got to ruminate a lot before you are likely to achieve important insights. The big breakthroughs in scientific understanding are often triggered by some new observation that in turn was enabled by some new technological advance. New insights often require a collaboration between people who can cross disciplines. Computer simulations will supplement (or even replace) experiments; and allow huge data sets to be analysed. There are some scientific challenges which everyone agrees are important, but which receive little attention until there seems genuine hope of progress. For instance, the “origin of life” is one such problem which is only now receiving mainstream attention.
Would you say a collective can have an idea or can only individuals have ideas?
Most breakthroughs are really the outcome of a collective effort. In football one person may score the key goal – but that doesn’t mean the other ten people on the team are irrelevant. I think a lot of science is very much like that: the strength of a team is crucial to enable one person to score the goal.
Do natural sciences and humanities have the capability to tackle the challenges occurring from these transformations?
Here in Cambridge in the United Kingdom we are trying to use our university’s convening power to address which long-term, near-existential threats are real and which can be dismissed as science fiction, and to recommend how to reduce the probability of the credible ones. This requires expertise from the social sciences as well as the natural sciences. For instance, I’ve already mentioned that, because of the societal effect, the consequences of a pandemic now could be worse than they were in the past, despite our more advanced medicine. Also, if we are thinking of problems like food shortages, the issue of food distribution is an economic question, as well as a question of what people are ready to eat. Are we, for instance, going to be satisfied eating insects for protein?
With the rising amount of aggregated data it becomes increasingly difficult for the humanities to keep up with natural sciences. How can we synchronise the languages of different academic fields in times of big data?
We need to encourage people to bridge these boundaries. I am gratified that our Cambridge group addressing extreme risks has attracted young researchers with real breadth: philosophers who are into computer science; and biologists interested in system analysis. Here in Cambridge we are advantaged because of our college system. In most universities you don’t meet people from other departments until you become very senior (a department chair or suchlike). But each college is a microcosm, covering all disciplines, so even the most junior researchers have daily exposure (at lunch or in the common room) to experts in all fields. So, Cambridge is a particularly propitious environment for cross-disciplinary work.
The blessings of modern innovation seem to be ignored by many policymakers: we see a retreat from globalisation, a retreat from digitalisation – is it a disconnect between science and the rest of society?
The misapplication of science is a problem, of course. So is the fact that science’s benefits are unevenly distributed. The real-terms income of the average blue-collar worker – in the US and in Europe – has not risen in the last twenty years, and in many respects their welfare has declined. Their jobs are less secure and there is more unemployment. But there is one aspect in which they are better off: IT. Information technologies spread far quicker than expected and led to advantages for workers in Europe, the US, and Africa.
But surely globalisation has made many poor people less poor and a few rich people even richer.
Sure, but let’s remember that we’re now witnessing a significant backlash in many places, in terms of Brexit or the presidential election in the US.
How drastically do you think these developments will affect science, the attitude towards it and its funding?
Many of the people who use smartphones and the internet aren’t aware that the fantastic underlying technologies can be traced back to scientific innovations decades ago, which were mainly funded by either the military or the public. It’s unfair to say people are anti-science – indeed I find it gratifying how much public interest there is in topics from black holes to dinosaurs that have no direct practical relevance. But they are worried that some technologies will run ahead faster than we can control and cope with them. I think there’s good reason to be concerned, for example, about biotech and cyber – to maximise the benefits while trying hard to avoid the downsides. For technology to be developed, it’s not enough to know the relevant science. There needs to be an economic or political imperative. For instance, it took only twelve years from the first Sputnik to Neil Armstrong’s “one small step” on the moon. The motivation for the Apollo programme was a political one and four per cent of the US federal budget was committed to it (in contrast to the 0.6 per cent that NASA gets today). In the case of IT, there was the obvious demand, which led to the internet and smartphones spreading globally at a rate exceeding most predictions. But commercial flying presents a contrasting example – today, we fly in the same way we did fifty years ago, even though in principle we could all fly supersonic.
Living in a so-called post-factual era, what are “facts” to you as a scientist?
If we take as an example the Brexit vote in the UK: those who voted for Brexit had a variety of motives. Some wanted to give the government a bloody nose, others voted blatantly against their own interest. The workers in South Wales, for example, benefited hugely from the EU. There is a wide variety of different motives, but I don’t think people would say that they voted against technology. And I still hope for an “exit from Brexit” as the UK public realises what they’ve let themselves in for.
Still there is this ongoing narrative about the fear of globalisation and digitalisation, and that would also imply the fear of technology.
Sure, but that is oversimplified. We can have advanced technology on a smaller scale. It allows for robotic manufacturing; it allows for more customisation to individual demand. The internet has enabled small businesses to flourish. Clean energy may be generated locally rather than delivered via vast grids.
But there seems to be an increasing disconnect in many societies regarding the consensus on which facts matter and how facts are perceived.
To understand the attitude you are expressing, we have to realise that there aren’t many facts that are clear and relevant in their own right. There are often real grounds for scepticism. Most economic predictions, for example, have pretty poor records, so you can’t call them facts. In the Brexit debate, there were valid arguments (as well as a lot of bogus ones) on both sides. And in the climate debate, even those who agree on the science and its margin of uncertainty will differ in the policy response they favour. For instance: how strongly should we bet on some technological “fix”? And how big a sacrifice should we make today to reduce the probability of a catastrophe in remote parts of the world a century hence?
But how then do you judge the developments we now see in many Western societies?
New technologies have led to new inequalities and new insecurities. Moreover, people are now more aware of inequality. People in sub-Saharan Africa are now fully aware of the kind of life that we Europeans enjoy, and they wonder why they can’t enjoy it too. Twenty-five years ago, they were far less aware of this unjust disparity. This understandably produces more discontent and embitterment. There is a segment of society, a less educated one, which feels left behind and unappreciated. That is why I think a huge benefit to society will arise if we have enough redistribution to recreate dignified jobs. The rich world needs to subsidise factories in the developing world, to reduce the incentive for migration.
What political framework do you think of as an ideal environment for science?
The Soviet Union had some of the best mathematicians and physicists, partly because the study of those subjects was fostered for military spinoff reasons. People in those areas also felt that they had more intellectual freedom, which is why a bigger fraction of the top intellectuals went into maths and physics in Soviet Russia than probably anywhere else, before or since. That shows you can have really outstanding science in many social systems. But of course, I support – for much broader reasons – a Scandinavian-style social democracy. And I am opposed to the austerity and “small-state rhetoric” deployed by the present UK government.
So, the ethical implication is not paramount to having “good” science after all?
I think scientists have a special responsibility. Often an academic scientist can’t predict the implications of his or her work. The inventors of the laser, for instance, had no idea that this technology could be used for eye surgery and DVD discs, but also for weaponry. Among the most impressive scientists I have known are some of those who worked at Los Alamos on the atomic bomb. They returned to academic pursuits after the end of World War II with relief, but felt obliged to do what they could to control the powers they had helped to unleash. Most of these scientists supported the making of the bomb in the context of the time. But they were also concerned about proliferation and arms control. It would have been wrong for them to not be concerned, even though their influence was limited. To make an analogy: if you have teenage children, you may not be able to control what they do, but you are a poor parent if you don’t care what they do. Likewise, if you are a scientist your ideas are your “offspring”, as it were. You can’t necessarily control how they will be applied, but nonetheless you should do all you can to ensure that they are used for the benefit of mankind and not in a damaging manner. This is surely an attitude that should be instilled into all our students.
What, then, is your motivation as a scientist?
I feel I am very privileged to have, over a career of forty years, played a modest part in debates on topics which I think will be highlights when the history of science in this period is written – understanding the evolution of the universe and its constituents. I think it is a great collective achievement. Many of the questions that were being addressed when I was young have now been solved. We’re now tackling questions that couldn’t even have been posed back then. Of course, the science I do is very remote from any application, but it’s of great fascination and a very wide audience is interested in these questions. It certainly adds to my satisfaction that I can convey the essence of these exciting ideas to a wider public. I would get less satisfaction if I could only talk about the cosmos to a few fellow specialists.
What is the best idea you’ve ever had?
I’ve never had any singular idea, but I think I have played a role in some of the insights that have gradually firmed up our view of how our universe has evolved from a simple beginning to the complex cosmos we see around us and of which we are a part. And the social part of science is very important. Many ideas emerge out of cooperation – and, of course, from experimenters and observers, who deserve far more credit than theorists like myself. Incidentally, the old idea that science eventually leads to an application is far too naïve. The interaction goes both ways, because advancements made in academic science are facilitated by technology. If we didn’t have computers or ways of detecting very faint radiation, etc., we would have made minimal progress in astronomy. We would be no wiser than Aristotle, and we only advanced beyond him through having much more sensitive detectors and being able to explore space via many techniques.
An interview with Martin Rees, conducted by Alexander Görlach.