“Some machines are people in the philosophical sense”
As a philosopher of time, Australian thinker Huw Price argues for a pause: we need to understand better what our human intelligence is, before we engage in much chatter about artificial intelligence.
So far, he says, the future is ours, not the machines’, as humans alone are capable of mentally projecting scenarios in a time to come.
When looking into the future of artificial intelligence and machine learning, what do we have to expect?
Huw Price: I am neither a futurist nor a technical guy. My role in this field here in Cambridge is that of an enabler, helping to make things happen – things that will, with the creation of our new centre for the future of intelligence, help to foster a community of people who can ask the right questions. At the moment we are probably not even asking those.
Speaking about the centre – what kind of intelligence are we talking about here?
Price: A good question! It sometimes comes up from a slightly sceptical viewpoint. People say: we don’t even know what intelligence is, so how can you propose to set up a centre about its future? My response to that is: let’s not think about what intelligence is but about what intelligence does. There are things about us that are responsible for our being the most successful species on the planet. Whatever that is, it is possible – at least in principle – for machines to do it too. And whatever these specifically human ingredients in us are, they don’t exist in other species.
Take the current debate about “fake news”, and the search for truth, discussing what is a fact, and what isn’t. Clearly the debate touches on our capacity for recognition and our logical capabilities. Is there an overlap between logic and intelligence?
Price: I think that logical thinking is one aspect of intelligence. It’s a refined form of symbolic thinking, clearly a key ingredient of intelligence. But intelligence is a lot broader than logic alone.
What is the difference then, the distinctive feature of the two?
Price: Again, intelligence is a difficult thing to define. We can loosely say that it is intelligence that distinguishes us from other animals and that is responsible for the ascendancy we occupy on the planet at the moment. And that is something we can say with confidence without yet knowing what all the different ingredients of intelligence are. In a way, logic is merely a sort of abstraction from one aspect of symbolic reasoning processes.
Is one of the capabilities the ability to project scenarios in the future?
Price: It does seem that one of the things we humans are able to do is what one might call “scenario planning”. We are able to imagine several options for a possible future. This is basically what happens when we are confronted with choices – choices related to such things as short-term survival. Imagination is part of human thinking and we use it in making decisions all the time. High-level scenario planning is an extension, an elaboration, of that everyday capacity. It clearly must involve huge amounts of abstraction and information processing, because at that abstract level we only have the capacity to deal with a small amount of the information confronting us.
When we had a life expectancy of twenty years, our risk scenarios were limited. Most of them went into mythologies and religion. How important is our own perception of time in this regard? How would you say life expectancy and the ability of scenario planning are related?
Price: I think the basic ability of imaginative thinking is associated with surviving and prospering over quite short time scales. For those basic sorts of cognitive abilities, the differences in our life spans don’t make much of a difference. Our ancestors were already surviving long enough for those skills to be relevant. There is a lot that could be said about the connections between that kind of activity, that sort of imaginative and predictive activity and our intuitive conception of time.
One of the things we know from modern physics is that the intuitive conception of time is misleading. We have the sense that there is something like a flow or a passage of time and that time is intrinsically directed. We know from physics that this is wrong. It is an old project of both philosophy and science to explore a distinction between those aspects of the world which are truly objective in the world itself, and those that in some sense come from us. We now know that a lot of those intuitive aspects of time really do come from us. They lie on the subjective side in the way that things like colour, taste and smell lie on the subjective side.
Do machines need to learn our capability of imagination to become more like us? Is that the crucial element?
Price: A machine would have to do some scenario planning if it wanted to act like a human. But it is often not necessary to do that. A network of machines is in another sense just a single machine, so I don’t think how we carve up the machines makes much difference one way or another.
What are we going to be seeing in the next ten years?
Price: We are going to see a lot more people thinking about the long-term future of artificial intelligence; thinking about where this technology is taking us, where the opportunities are, where we can make a difference. In my view the big thing we need to do right now is to expand the community of people thinking about these issues. In particular, we need to find young people from many different fields, people who are going to spend their careers thinking about these issues – people who will really make a difference to how this transition into a machine era, in which we share the planet with non-biological intelligence, turns out.
So, the scenario is that we are going to be sharing our habitat with non-biological intelligence?
Price: Yes, absolutely. We have a very unclear idea at this stage as to what the capabilities of those machines will be, but we can be fairly sure that most of the things we can do with our brain will be things machines will be able to do, too. And it may well be that they will be able to do things that we have not thought of yet.
Is that a utopian or dystopian idea to you?
Price: I think that there are possibilities at both ends of the spectrum, both important to think about. We not only need to think about what to do on the safety side to avoid the dystopian possibilities, but also to clarify the range of possibilities towards the more utopian end of the spectrum. It may well be that there are importantly different paths the technology could take, which may be good in some ways and bad in others. It seems smart to determine a sense of destination before we set out on this path. I like to say that there is an important difference between designing a self-driving car and the issue of the future of artificial intelligence. In the case of the self-driving car, you want something that will take you efficiently from A to B. In the case of artificial intelligence, in general we have no idea where it will be used, and more alarmingly, we have no idea what the possible destinations are.
Computers can already process a significantly larger amount of information than any human.
Price: Exactly, so non-biological intelligence will have access to vast quantities of data, and that will be one of the things that enables it to do things we can’t do.
Are we also speaking about processing then applying the data to create new inventions, or will machines remain our assistants to help us innovate?
Price: In order to be called intelligent, the machine has to do something with the data. I don’t like that way of setting it up, because it suggests that we are at stage one, where we have the data, and that artificial intelligence is stage two; that solving the problem is just adding on the capability of doing something. In fact, we have lots of machines which are capable of doing various things with data, just as we ourselves are. Despite the fact that they have access to much more data, the current machines are incapable of doing many of the things that we can do with data. As time goes on, it is likely that we will develop machines that will have much the same general kind of abilities that we have. That will enable them to take lessons learnt from data in one case and apply them to another case. It is that kind of generalisation, which comes so easily to us, that those machines will be able to do in the future.
Are you confident about this on principle or is there something in particular that gives you hope?
Price: At this point, these technologies are turning out to be so useful for many purposes and commercially really valuable, not to mention the sort of scientific fascination the topic holds. As to whether it is definitely going to happen: firstly, I’m not an expert, and secondly, even the experts couldn’t say with certainty that it will happen in a certain timeframe. But my understanding represents a reasonable middle-of-the-road viewpoint at this time: in principle, we see no fundamental technological or scientific barrier that would prevent it from happening.
Do you think we will witness an exponential development in terms of a new industrial revolution?
Price: Again, I want to emphasise that I’m not an expert in these fields, I’m a philosopher. Experts in the field think that we are probably several conceptual theoretical steps away from having machines with as wide a range of capabilities as we have. But, as with predictions in any scientific field, there is an element of guesswork there.
Speaking to you as a philosopher: what are the most fascinating questions deriving from this development for you?
Price: I have spent my philosophical life on questions like the nature of time and the foundations of quantum theory. I want to be clear that this field doesn’t engage much with my professional life as a philosopher. My role in the centre is much more that of an enabler or facilitator, someone who can play a role in bringing other people together to make things happen. Having said that, I think some of the most philosophically interesting questions are about whether or not the machines will ever be entities that we think of as having interests of their own. For many people, this is tied to whether at some point machines will be conscious – whatever that means. And there is a related set of questions about whether our own future as humans remains entirely on the biological side or whether at some point we have the option of enhancing ourselves so that we become hybrids, partly biological, partly not.
We would have access to a greater range of abilities as a result of that. For example, we might have immediate access to much more data. Then some people think there are possibilities where we become entirely non-biological – we upload ourselves into computers or something like that. So, there are a lot of fascinating long-term issues in that space, and it may turn out that some of them will become real, in particular the issue of whether we want the machines to remain tools or instruments: something you can turn on or off without worrying about the moral status of the machine, the way you can with your vacuum cleaner. Some people take it for granted that that’s the kind of future we want artificial intelligence to have; no matter how smart the machines are, they see them simply as tools. Others think that the natural path goes in the other direction and would rather live in a world where machines are fellow moral agents. And that may turn out to have some implications for safety concerns as well.
The ethical implications are particularly interesting. How would you face the ethical challenges if we assume that a non-biological intelligence can be more than a mere air conditioner? Will we still be calling it a machine?
Price: I don’t want to use the term “machine” in that sense, because I take it for granted that we are just machines. We are biological machines. Some machines are also people in the philosophical sense, entities with interests and moral agency. Whether the non-biological machines will ever be of that kind is going to be a choice that we face. It’s a choice we should face deliberately rather than accidentally. As people have pointed out, one of the dystopian possibilities is that we create a future in which we don’t acknowledge the possible emotional capabilities of intelligent machines and thus create a dimension of suffering. That would be dystopian not for us but for the machines.
Going back to the conceptual steps, what is the point of no return where we lose control over our progress?
Price: I think it is a one-way path, simply because of the commercial and other pressures on the development of the technology. This leaves aside the possibility of some other large-scale calamity that affects the level of our technology. It will be a once-in-a-lifetime opportunity for the planet. We will not have the opportunity to back up and do it some other way.
Opponents of the developments in artificial intelligence argue that we should not play God. As a philosopher, what is your take on that?
Price: I think we should be careful about playing God. But we should recognise that sometimes life confronts us with choices that we simply have to make. The choice of whether we want to make machines capable of consciousness or suffering is one we have to make whether we like it or not. So, a general reluctance to play God doesn’t get us off the hook, because we will have to decide anyway.
Interview by Alexander Görlach.
The Interview is part of the Institute’s publication “Entering a New Era”.