
“We might want to rediscover our social roots”

Pascal Finette, Singularity University: to adjust democracy to the new, fast-moving reality of today, politicians should move quickly towards creating societal frameworks.

You research and explore technological progress in digital technology, data mining and artificial intelligence. Innovation on a daily basis. What is the first thing you tell people who have no idea what your work is about or what the future you describe so enthusiastically will look like?
Pascal Finette: I believe the most important thing you have to understand is that tomorrow will look dramatically different from today. And by tomorrow, I don’t mean ten years from now – but quite literally the day after today. Technology is advancing at an unprecedented speed and the rate of change keeps accelerating. It’s exhilarating (and sometimes scary) to witness this, study it and figure out what tomorrow (and the days after tomorrow) will look like.

Is this accelerated change a blessing to you or a curse? The closer the horizon of change is drawn, the more messianic hope and eschatological fear merge into one. This is why theories of technological progress such as the singularity have been called ideological.
I don’t think it’s binary or black and white. Exponentially accelerating technologies present amazing opportunities and, of course, can be misused by ill-meaning individuals and organisations. The technology itself (and thus also the “singularity”) is rather agnostic towards its use for good or ill. Which puts the human into the centre of the equation – it is on us to decide how we want to use technology. And when it comes to humans I am, by and large, optimistic. As we wrote into our operating principles at eBay in the late 90s: we believe people are basically good.

This optimism is also found in Google’s guiding principle, “Don’t be evil”. We have gone quite philosophical in under a minute. Is being ethical, being capable of distinguishing between good and evil, something that algorithms or machines are capable of? Frankly speaking, if artificial intelligence takes its essence from us, the humans, our data, our behaviour, how could it then possibly be agnostic? Agnosis in Greek means not knowing. Machine learning is all about knowing.
I believe that, for the foreseeable future, machines won’t make ethical or moral decisions on their own. You bring up a very good point, though – artificial intelligence is based on machines learning; thus, it matters greatly that our inputs aren’t biased or incomplete. My former employer Mozilla just launched an initiative to overcome the bias speech recognition systems have, due to the limited training sets they are exposed to.

In 2011, IBM Watson won Jeopardy requiring mastery of general knowledge and natural language processing. Photo: IBM.
IBM’s Avicenna software highlighted possible embolisms on this computer tomography scan in green, finding mostly the same problems as a human radiologist who marked up the image in red. Photo: IBM.

So, machines are basically not only replicating the biases, prejudices and injustices of our societies, but also exposing us to them exponentially: they narrow down the world we live in. Is there a chance that we could become better humans by working on eradicating biases from machine learning?
Just to clarify this – a lot, if not the vast majority, of applications for artificial intelligence / machine learning won’t have any bias problem as the data sets they are trained on have no biases. Think about all the industrial applications for artificial intelligence or something like weather forecasting – the data sets for these fields are just that: data (without human bias). The challenge comes when we train artificial intelligences on human questions – for example, voice recognition as in the Mozilla case. And yes – artificial intelligence has the potential to make us better people by doing the exact opposite as well. Think about a personalised newsfeed which presents me with balanced views on a topic of interest, instead of solely partisan views.

Let’s stay with the positive, unbiased aspects of machine learning for a bit. What field in your opinion will profit from this the most: health, mobility, governance?
Any field which generates large amounts of data and makes decisions based on it will greatly benefit from machine learning. Self-driving cars are only possible due to sophisticated machine learning algorithms and abundant computational power, IBM’s artificial intelligence Watson has already become a better radiologist than humans and large-scale farming is starting to rely heavily on machine-learning-based systems to increase agricultural yield. Governance is tricky as there is so much human behaviour involved.

In governance there are new approaches such as deliberative democracy, a model that basically rests on the assumption that the parliaments we know do not really represent all groups of society, whereas algorithms could sort out these representation problems and find solutions for crucial questions. What do you make of this sort of new approach to democracy?
An intriguing approach. Fluid democracy and even more immediate and direct forms of electronic voting are not only interesting but also concepts which have the potential to change the way we live democracy. The challenge, from my perspective, is that we are dealing with systems which are rather encrusted, move with election cycles of four years or more and have strong powers at hand which like to keep systems the way they are. Combine this with the fact that the civil sector is not exactly the employer of choice for some of our brightest and fastest-moving minds and you can see why changing governance is a tall order.

The question therefore has been raised more than once. Who is in charge: Google or the government? The big digital innovators with all the young high-potential coders working for them, or the governmental institutions entrusted with law-making yet not attracting the brightest and boldest (any more)?
I neither think nor believe Google (or Facebook or Apple etc.) is in charge. They might have influence over what we see and do online – but they surely don’t have the power to put you in jail for your behaviour. Thus, I am much more concerned about governments misusing or even abusing the data trails we leave behind than Google wanting to sell me more stuff on behalf of their advertising clients. And yes, it greatly concerns me that the brightest minds in countries around the world focus on building the next hip social-local-mobile photo-sharing app instead of focusing on solving humanity’s grand challenges.

Pascal Finette heads Entrepreneurship at Singularity University, including the start-up programme, Global Expansion and the Entrepreneurship Track, where he inspires, educates and empowers entrepreneurs tackling the world’s most intractable problems by leveraging exponential technologies. Pascal led eBay’s Platform Solutions Group in Europe, led Mozilla’s Innovation Lab, created Mozilla’s accelerator programme WebFWD, and has invested in social impact organisations around the globe at Google.org. Photo: Pierre Le Léannec.

Certainly, Google may not put you in jail, yet if you had to choose between a day in jail and losing access to all your Google services, I am quite certain many people would prefer jail over losing everything they keep in the Google cloud. But that’s (lucky us!) only hypothetical. You have been involved in a lot of this work focused on solving humanity’s grand challenges. And you said at the beginning of our conversation that the changes brought by artificial intelligence will be visible and tangible very soon. What is the most ground-breaking thing we will see next year: no more droughts? Beating cancer? Much of the suspicion the average Joe may have about artificial intelligence is that it carries the face of Arnold Schwarzenegger’s Terminator rather than a smiling, positive countenance.
We’re making massive strides in healthcare. Not just through the abilities of artificial intelligence but also genetics, digital biology and stem cell therapy. It surely makes you optimistic about solving diseases ranging from cancer to sickle cell. Interesting work is being done at the intersection between man and machine – prostheses connected to the nervous system, allowing amputees to walk or operate a robotic arm with great precision. And don’t get me wrong, we are only at the very beginning of what’s possible. A lot of what we see today is still crude, doesn’t work quite as advertised or is just not that helpful. Every new technology goes through these phases.

What do you make of the debates and claims for a robot tax or universal basic income to cushion the societal impacts of artificial intelligence?
The robot tax I just don’t get. Universal Basic Income (UBI) makes a lot of sense to me and we have some promising preliminary studies suggesting that UBI works in fostering a more entrepreneurial culture and thus increased GDP overall, plus more satisfaction and fulfilment. It’s early days though – we need to run many more studies to find the right formula.

If I understand it correctly, the case for a robot tax is twofold: first, it’s about disentangling our views on the taxation of a person’s labour and secondly, it’s about exploring new ideas of redistribution. Above all it’s about preventing a new global elite from accumulating the wealth generated through the work of robots, which also extends to the increasing work done by artificial intelligence. And it tries to tackle the question of fair taxation (if fewer and fewer people are involved in financing the state’s functions) and redistribution in general.
Arguably we are miles away from “fair taxation”, as demonstrated by the widening divide between the top one percent and the rest of the population in most, if not all, countries around the world, or the fact that economic growth has not translated into increased middle-class incomes for the last couple of decades. I can’t see how the robot tax solves this – sadly. And surely it is a hard problem to solve. Personally speaking, I miss the debate: not only haven’t we figured out the problem, to a large degree we don’t even talk about it.

So, what is your take then on fairer, better societies through the progress provided by artificial intelligence? I see the debate is already at full pace, yet I am not sure if the prospect of a world without work, where we may all be painters and poets, is the right scenario to start with.
I believe it brings up at least two relevant and important questions: one about the financial implications, the other about the human implications. Today, in our society, we derive a large amount of our self-worth from the job we do. It not only provides the financial means to live (and hopefully thrive) but contributes to our sense of self and societal status. Take a job away and we need to rethink (and enact) a whole lot more than just the financial safety net.

I couldn’t agree more, but what do we do about it? Historian Yuval Noah Harari recently wrote an op-ed for the Guardian claiming that millions of people may engage in virtual reality games in order to be kept busy. Is that the great new world we’re headed towards?
That’s a bit apocalyptic for me. But then – the argument can be made that a virtual reality will sometime in the future be better and more engaging than what geeks like to call “meat-space”. Which raises the question: in which space do we choose to live?

Harari’s point was that we, the human species, have engaged in this sort of virtual reality game for as long as we have been around: this game is called religion. We collect scores, we ascend or we descend, heaven, hell or purgatory. To me that is not necessarily apocalyptic; rather compelling in the sense that playing games (Homo ludens) seems to be more part of our nature, more satisfying and therefore more part of our identity than we may think on a daily basis. But let me learn more about “meat-space”.
Without going into too much of the long-standing debate about (organised) religion, I find myself more on the spectrum of believing in self-determination. Probably more importantly, the scenario outlined in Harari’s post requires (to some extent) the belief in a benevolent deity / ruler.

What I find most irritating, and this goes along with your belief and approach, is that people engaging in that virtual world are stripped of any self-determination and destiny whatsoever. For me this scenario raises the question: what will give us humans dignity when it is not work anymore, what will define our self-esteem when it is not the narratives of our achievements?
You bring up an excellent point. I think we have to consider two directions in (trying to) answer this question: On the one hand, we need to expand the definition of “achievements” to be more inclusive than just our career achievements, to include topics such as our civic duties and engagement. And on the other hand, we might want to rediscover our social roots and derive fulfilment from our social interactions and care for the communities we live in.

Interview by Alexander Görlach.

The interview is part of the Institute’s publication “Entering a New Era”.