“Democracy needs to evolve into a real-time system”

Audrey Tang, Digital Minister of Taiwan, speaks about technological progress, its connection to democracy and how it can be used to foster transparency.

Before the internet, it wasn’t possible to crowdsource civic life. That has changed, but Audrey Tang, Taiwan’s Digital Minister, argues that recent technological advances have shown us that machine learning is no substitute for collective intelligence.  

How has democracy changed since the advent of digital technologies?

It has meant two different things: First, digital technologies have made it much easier for people to listen to one another. Pre-internet technologies, such as radio or TV, made it simple for one person to speak to millions. Arguably, that’s how World War II was started, and to some extent, World War I as well. It made communication hierarchical.  

With the internet, everybody has more-or-less symmetrical connections, meaning they have an equal amount of bandwidth for receiving and sending. This makes “listening-at-scale,” as I call it, possible – i.e., one person can listen to millions of people, but more importantly, millions of people can listen to one another. This is a fundamental configurational change. And for the first time, it has made horizontal organisation easier than hierarchical organisation.  

This has had implications for parliamentary democracy as well.

Essentially, in the pre-internet era, representative democracy was limited by the reality that it was impossible to listen to more than 20 people at a time. Now, it is possible to listen to 200 or even 400 people at once. We do that every day on Twitter or other social media. It has become easier to coordinate with what we call “weak links” via the internet instead of stepping into a physical town hall. Representativeness has given way, to a degree, to representation.  

And what is the second significant shift?  

Previously, when governments had to notify the public about planned legislation or regulations, they could only do so using snail mail or the phone. In some cases, it was a single person’s job to handle all of the tens of thousands of incoming messages pertaining to controversial regulations. People were unaware that there were 5,000 other people in line before them, so the administrator essentially became a bottleneck.  

Taiwan’s government introduced a process that uses a combination of online and offline debate to find consensus among engaged citizens on specific issues of law and regulation. At its heart is an online platform called pol.is. (Photo: Digitale Vision/Getty Images)

Didn’t some administrations find innovative approaches for dealing with this phenomenon?  

President Barack Obama’s White House hired volunteers and had a dedicated staff whose job it was to handle the massive amount of emails he received. They used a kind of rating mechanism – a qualitative algorithm that selected around 10 emails every day to be read aloud to Obama as a way of ensuring both a diversity of voices and a direct channel to the president.  

Do you see that as an efficient way of communicating with voters?  

It represented less than 0.1 percent of the total mail received. As I said, before internet technologies, it was impossible to crowdsource. It was impossible for those who wrote letters to interact with other letter writers as they now can on the internet. It was impossible for a consensus to emerge.  

What does this form of collective knowledge do to leadership in politics? Do these technologies simply prolong the decision-making process?  

If we open the agenda-setting power up to the crowd, we can reflect back to people what they agree or disagree on. Public servants no longer have exclusive ownership of the agenda. But these technologies are still relatively new, so there isn’t a clear strategy yet for using them in governance.  

But they’ve certainly had a recognisable impact on public discourse, with Taiwan being one example.  

Yes, they provide an accurate reflection of everybody’s feelings on specific issues. That has certainly had a healing effect, because if you only look at the mainstream media or, indeed, some social media, you will see at least five divisive issues constantly being repeated. This has led to the illusion that people are inherently tribal and against each other.  
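The "reflection" Tang describes is what pol.is-style platforms compute from participants' agree/disagree votes: instead of amplifying the most divisive statements, the platform surfaces the ones with the broadest agreement. The sketch below is a minimal, hypothetical illustration of that idea; the function names and data are illustrative assumptions, not pol.is's actual algorithm, which also clusters participants into opinion groups.

```python
def consensus_ranking(votes):
    """votes: dict mapping statement -> list of +1 (agree), -1 (disagree), 0 (pass).
    Returns statements sorted by agreement rate among non-passing voters."""
    ranked = []
    for statement, ballots in votes.items():
        cast = [v for v in ballots if v != 0]  # ignore "pass" votes
        if not cast:
            continue
        agree_rate = sum(1 for v in cast if v > 0) / len(cast)
        ranked.append((statement, agree_rate))
    # Highest-consensus statements first, rather than the most divisive ones.
    return sorted(ranked, key=lambda pair: pair[1], reverse=True)

# Illustrative votes (six participants, two statements):
votes = {
    "Ride-sharing drivers should carry commercial insurance": [1, 1, 1, -1, 1, 0],
    "Ride-sharing should be banned outright": [-1, -1, 1, -1, 0, -1],
}
for statement, rate in consensus_ranking(votes):
    print(f"{rate:.0%} agreement: {statement}")
```

Ranking by agreement rather than engagement is the design choice that makes such a platform "reflective" instead of divisive: the most visible statements are the ones most people can live with.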

But this technological change is also anti-elitist, in a way.  

It might be populism, because it allows everybody to escape the lies of the elite and share their feelings. But it’s not tribalism, because it becomes clear to people that we are more or less part of the same tribe, no matter how different we may be superficially.  

Would you say these forms of communication can help bridge the gaps that have emerged in many democracies around the world?  

Certainly. It is not just an open space, it’s a reflective space. And reflective spaces make people aware of the common good. Everybody can relate to a fellow citizen sharing their feelings. It creates the missing link between the data, which is objective and always there, and the notion that everybody can have different ideas, even though only a few might actually be acted upon. We have created a kind of reflective stage between facts and ideas, allowing people to see each other’s feelings. The best ideas are the ones that take into account the most people’s feelings.  

It seems to me that the crisis of democracy stems from the decoupling of civic rights from social rights. Would you agree?  

Very much so. It often feels like you have to join one tribe or the other to organise, find a voice and force public servants to take action and alleviate tension. That is the traditional form of social activism or mobilisation. But people don’t organise like that anymore. Just look at #MeToo or #fridaysforfuture. It’s basically just a meme that people identify with, take control of and apply however they want.  

Is that also what is happening with rightwing parties across the globe? 

Basically, it creates hope for people suffering from a lack of representation. This hope depends on outrage against the old, well-functioning democratic system. By using hashtags, people feel like they’re part of each other’s consciousness in real time, whereas with democracy, you only get to express yourself every four years. It creates an asymmetric hope, and that feeds outrage against the slow pace of democracy.

Democracy needs to evolve into a real-time system. People need to feel that, regardless of the issue, they have a say. There are many ways of doing this, like the Pol.is system for crowdsourcing legislation we use in Taiwan, but also participatory budgeting, e-petitions, etc.  

Audrey Tang is the Taiwanese Digital Minister, a position which she has held since October 2016. Considered one of Taiwan’s brightest computer programmers, Tang dropped out of school in junior high and founded her first company at the age of 16. She retired from entrepreneurship at age 35 and has since been working on the development of open-source software and freeware. (Photo: gemeinfrei)

How can we transition into such a real-time democracy?  

The United Nations’ High-Level Panel on Digital Cooperation recently coined the term COGOV, or collaborative governance. It is a rebranding of multistakeholderism, which has existed in internet governance for decades. The old model essentially asks two questions: First, given stakeholders’ positions, are there common values that can be identified? Second, given the shared values, can anyone deliver innovations that represent a Pareto improvement – i.e., that are good for some without being bad for anyone? It’s an innovation-focused democracy.

A key component, in other words, is reducing the time between idea and implementation?  

In Taiwan, we’re adopting “sandbox” laws, which means you get one year to try out your innovations in a heavily regulated area. You will not be fined for breaking the law or violating regulations. Indeed, you’re encouraged to do so. But there are two conditions: First, you have to propose an alternate regulation for the one-year trial period. Second, all your data must be open: Your innovation must be transparent for one year so everybody can see. After that, people may decide it’s a bad idea. In that case, we thank you and your investors for paying the tuition. But if people think it’s a good idea, then we adopt the entire innovation.  

In any field?  

Almost any. The Justice Ministry has said you cannot experiment with money laundering or the funding of terrorism, because we already know what would happen. [Laughs.] Otherwise, it’s all fair game.

That’s a fascinating model. It suggests people are willing to relax some of their standards if they feel like they are in a fair, balanced and discursive arena.

Exactly. I wouldn’t say that we’re deliberative. The informed decision part is utopian: It assumes that people bring their interests to the table honestly, which often is not the case, even in the very deliberative Swiss referendum model, for example. We usually use the word collaborative, which means we only identify some common values, and we’re satisfied with that. It’s not really a consensus. It’s more like consent, which signals someone can live with something, not necessarily that they would sign their name to it.

China, meanwhile, is taking the opposite approach.  

Using more or less the same technology that we are employing to make the state more transparent to its citizens – by publishing budget data, procurement data, implementation data, planning data, etc., and then asking people to participate and comment openly – the People’s Republic of China is making its citizens radically exposed to the state. It is doing so with its social credit system and many other programs. Accountability is lacking because even though they are introducing the technology in the private sector, the private sector is de-facto owned and controlled by the state using non-market forces.  

What is the critical difference?  

There’s no due process. It’s the same technology we use to foster transparency, but it is used for exactly the opposite purpose. As I mentioned, the “sandbox” method essentially allows the public to judge private-sector regulations. The Chinese Communist Party, by contrast, insists on installing a party delegate in all large companies to ensure that they follow the official agenda. Again, this is exactly the opposite of our method, but applied to the private sector rather than the social sector. I think it’s fascinating, like a mirror image of philosophies, using more or less the same technology.  

Taiwan recently unveiled its first autonomous electric vehicle, a minibus dubbed WinBus. It is scheduled to undergo a “sandbox” trial before the end of the year and enter mass production before the end of 2021. (Photo: Ministry of Economic Affairs Taiwan)

Social progress has always taken place via technological progress, both good and bad, depending on the context.  

Yes, it’s an amplifier. Whatever your dominant philosophy is, it’s bound to amplify it.  

As someone from a country that exports technology, how would you address this ethical dilemma, given that innovation is unstoppable?  

If there is a high chance of abuse, I would actually argue for export controls. Anti-proliferation is the best path, but it depends on the case. Some things are simple to rediscover based on first principles, in which case it’s a lost cause and you might as well export those technologies. But if there are things that are heavily dependent on what we call an increasing return on research, then you can set your research agenda accordingly.  

With technology constantly at the forefront of advancement, laws and regulations naturally lag behind.  

The recent Facebook example is a good one: The fine levied against Facebook is more about setting a norm than doing much harm to the company’s bottom line, which is crucial in internet governance. It is primarily about setting a cybernorm. People will write code that conforms to incentives and policies and, finally, law. Law always lags behind – but for an obvious reason: If you started with the law, it would mean lawmakers know more about innovations than the innovators do. I wouldn’t claim such a thing, and I wouldn’t accuse any sane lawmaker of claiming such a thing, either. It’s like having a law that dictates the value of π, and we all know how that went.  

You prefer the social sector to be the driving force behind innovation?  

As a conservative anarchist, my mission is to foster social innovation. This means that the social sector can take control of emerging technologies that benefit society. Just last night, I went to a meetup for RadicalxChange. I’m also on the board. I was joined by Danielle Allen and Vitalik Buterin, who invented Ethereum, the blockchain technology. Our vision is basically to use technologies like Ethereum, but for them to be owned by the social sector. Because Ethereum is open source, anyone who wants to can “fork,” meaning, use it to pursue a different vision. This kind of social innovation legitimates governance, but in a way that is free of the legitimacy provided by representative elections. This way, people can see when social innovations work better. 

Still, popular opinion can be influenced by foreign forces. Is mainland China trying to interfere with this year’s election in Taiwan?  

Our president recently coined the new phrase, “the Chinese continent.” I like how she referred to Taiwan as “an island off the Chinese continent.” In any case, there are undoubtedly Chinese continental forces. The main insight I would share from Taiwan is that we basically have three defences, as well as three proactive responses. The first of the three defences is that we clarify any trending rumours within one hour, because more people are likely to hear the clarification within that timeframe.

Here’s a real example: There was a rumour going around that the administration would fine anyone who permed and dyed their hair on the same day. Within one hour, we had the premier, Su Tseng-chang, post a message that went viral on social media. It read: “There’s a popular rumour going around claiming that perming your hair will subject you to a fine of 1 million. It’s not true. Although I have no hair now,” – with a picture of the premier when he was young – “I would not punish people like that. However, perming and dyeing within a week can damage hair, with serious cases ending up like me.”

[Laughing]: That’s a funny response.  

More people saw this clarification before they saw the rumour itself. The clarifications serve as inoculating agents. This is the first defence. The second defence, of course, is collaborative verification. In Taiwan, we have members of the International Fact-Checking Network (IFCN) and the collaborative fact-checking community highlighting fake news they see on social media.  

And what happens with the flagged material?  

When something gets flagged, it is a designation that it doesn’t belong in the public space. There are international organisations, like Spamhaus, that serve as clearinghouses for flagged material. That’s how the spam wars are won. These organisations publish signatures of the flagged messages and their senders, so that machine learning can be used to route mails matching those signatures directly into the spam folder.
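The clearinghouse idea can be sketched in a few lines: a trusted organisation publishes fingerprints of known abusive mail, and receiving servers route matching messages to the spam folder. This is a minimal illustration only; the normalisation and hashing scheme, and the helper names, are assumptions for the demo, not how Spamhaus actually distributes its blocklists.

```python
import hashlib

def fingerprint(message: str) -> str:
    # Normalise case and whitespace so trivial edits don't evade the signature.
    normalised = " ".join(message.lower().split())
    return hashlib.sha256(normalised.encode()).hexdigest()

# Signatures a clearinghouse might publish (computed locally here for the demo).
blocklist = {fingerprint("You have WON a prize! Click here NOW")}

def route(message: str) -> str:
    """Deliver to the inbox unless the message matches a published signature."""
    return "spam" if fingerprint(message) in blocklist else "inbox"

print(route("you have won a  prize!  click here now"))  # → spam (matches after normalising)
print(route("Agenda for Tuesday's meeting"))            # → inbox
```

Real deployments layer statistical classifiers on top of exact signatures, which is the "machine learning" step Tang alludes to, but the publish-and-match pattern is the backbone.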

IFCN basically gathers the material people have flagged as disinformation and publishes fact-checking reports. Once the Taiwan fact-checking centre rates something as false, for example, it goes back to the Facebook algorithm so that people stop seeing it in their news feed. As a result, it’s shared less. It’s not entirely censorship, because if you go to that friend’s page, you’ll still see the post. All in all, it reduces its virality to around one-fifth or less while increasing the virality of the clarification.

And what is the third cyber defence?

Finally, during elections, our campaign-donation law requires political donations to be designated. Anyone can get structured data, like Excel files, of individual donation records. During the last election we observed that people had been investing in precision-targeted, political advertisements. Because those were not accounted for, we’re now changing the law so that they are subject to the same transparency requirements as campaign donations. That is to say, advertisers must disclose where their payments came from. If the purchaser at the end of the chain is not a Taiwanese citizen, everyone in the chain is subject to a fine of 50 million New Taiwan dollars.  
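Because the donation and advertising records are published as structured data, the disclosure rule Tang describes can be checked mechanically. The sketch below is hypothetical: the field names and CSV layout are assumptions for illustration, not Taiwan's actual published schema.

```python
import csv
import io

# Illustrative ad-purchase records in the structured form Tang describes.
records = io.StringIO(
    "ad_id,purchaser,purchaser_citizenship,amount_twd\n"
    "A1,Local Campaign Office,TW,120000\n"
    "A2,Offshore Media Ltd,XX,950000\n"
)

def flag_undisclosed(rows):
    """Return ad IDs whose end-of-chain purchaser is not a Taiwanese citizen/entity."""
    return [row["ad_id"] for row in rows if row["purchaser_citizenship"] != "TW"]

flagged = flag_undisclosed(csv.DictReader(records))
print(flagged)  # → ['A2']
```

The point of requiring Excel-style structured data, rather than PDFs, is exactly this: journalists and citizens can run such checks themselves instead of relying on the regulator.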

Won’t Facebook protest?  

Facebook, just like the Japanese messaging service Line, which is popular in Taiwan, signed on to best-practice principles for stopping disinformation.  

That’s part of the lesson learned from Cambridge Analytica, I suppose.  

Exactly. People trust Facebook less than tobacco and liquor companies. [Laughs.] It’s interesting because there are similarities between the two: Facebook also sells an addictive product and creates externalities for society, so it’s going to be regulated the same way. Once these donation lists are published, it won’t matter what Facebook’s agenda is, because it will be required to publish all of this data, including – crucially – precision-selected targets.  

You also mentioned three proactive measures to prevent intrusion in the first place.  

First, we have a team in each ministry responsible not only for clarification, but for inviting people with differing opinions to collaborative meetings. Anyone can use e-petitions. We have found that the most active users are around the ages of 15 or 65. What they have in common is that they seem to be more interested in the public sphere than the private one. Just this month, for example, Taiwan banned the use of plastic straws at many types of restaurants. Everyone is using recyclable straws or glass straws now. This came about a year and a half ago, after an e-petition went viral and quickly amassed 5,000 signatures. The leader of the movement was only 15 years old.  

How does that affect disinformation?  

Empowerment will make disinformation less likely to spread. Secondly, the citizens can see that the government isn’t refuting their agenda and is on their side – and that it is reacting in real time, not just every four years. The office hours set up by the government are also part of this. Furthermore, the conversations are not private. Anyone can go onto Google and find my position on this matter.  

Finally, the private sector amplifies flagging and clarification. Both LINE TV and Mashup Television recognise that clarifications, like the one the premier released, can be useful news items to spark a conversation. It becomes good business – and making a business case out of this is also very important. It allows people to participate in norm-setting. It is no longer just the private sector against the public sector: The private sector also sees that it can be cool to spread clarifications.

Your premier making fun of his own baldness is a good example of the human factor within the digital sphere. What is your take on the human factor in all this?  

Old media forced people to consolidate their viewpoints to the degree that the human factor almost became indistinguishable. There was simply no bandwidth in newspapers to deliver a two-way interaction. What we are now seeing is not a transactional configuration of policymaking, but a relational one. When people participate in e-petitions, they’re not just doing it to support a 15-year-old; they also start identifying with a common goal of improving life on this planet. I think this is essential. 

That also requires empathy, which is a uniquely human ability. How can we incorporate that into technology? 

There are a few things. One is that divisiveness and a lack of empathy are direct results of a lack of imagination, so it feels like we’re caught in a zero-sum game. It doesn’t have to be like that. In many democracies, including Taiwan, the planning horizon is usually four to eight years. However, the hardest problems require structural solutions that take a decade or two. The first thing we do in our collaborative meetings is consider the stakeholders that haven’t been born yet. That extends everybody’s horizon.  

We had a real case involving, on the one side, people who were very much supportive of marriage equality, and on the other, people who were very conservative about whether we should extend artificial insemination rights to people who are not in a marriage. We were able to make this discussion fruitful, instead of violent or divisive, by posing the “How might we …?” question: For example, how might we ensure an inclusive and accepting society for children born into such a family?  

That rephrasing results in the question, “What kind of society do we want to create for people who haven’t been born yet?” If you use the planning horizon of 10 or 20 years down the road for a conversation, people who might otherwise disagree can work very collaboratively.  

Is that your approach to digital ethics as well, taking the long view? 

Yes. We extend the horizon whenever possible. If it’s next quarter, there’s hardly even time for consent. If it’s 20 years in the future, it’s much easier to find consensus. It’s deeply human to care about the next generation. AI or machine learning cares about the past because that’s where the data comes from. The human capacity to imagine, even the poetic capability of envisioning alternate futures, is the key principle for making humanity and the digital coexist.   

This interview is part of a collection of essays and interviews by Alexander Görlach:

Shaping A New Era – how artificial intelligence will impact the next decade (PDF)