AI & Human Rights: The Vodafone perspective

Matthew Allison, Senior Public Policy Manager at Vodafone, sets out Vodafone's perspective on AI and human rights in response to Access Now's report on Europe's approach to AI.

Vodafone takes its commitment to human rights seriously. That’s why, in addition to publishing our annual Human Rights policy statement and creating the Vodafone AI Framework last year, we were keen to partner with Access Now in a series of multistakeholder roundtables held across the EU and virtually between October 2019 and 2020. The roundtables brought together local experts in AI, data, and human rights to discuss the EU’s approach to AI regulation, based on the underlying concepts of ethical and trustworthy AI. We heard a range of views on whether this approach is sufficient to guarantee the fundamental rights of EU citizens and to ensure those rights are not jeopardised by the development and roll-out of AI solutions; these views are neatly summarised in Access Now’s latest report on AI and human rights.

Our starting point is that any regulation of AI needs to be predicated on a concrete risk assessment. Vodafone supports the conclusions of the AI High-Level Expert Group in pursuing risk-based, innovation-first policies, backed by a strong ethical framework, avoiding the need for heavy-handed regulation. Many uses of AI (such as fault detection in our networks) have no direct impact on individual users, and no additional regulation is required for them. In particular, we would underline the need for a gap analysis against existing laws before imposing new transparency and explainability requirements or new liability obligations. For example, data protection rules already oblige companies to explain how personal data is used, and anti-discrimination laws in EU countries already explicitly prohibit direct discrimination against groups based on certain characteristics.

The report by Access Now provides an overview of the current AI policy debate in Europe. (Photo: Markus Spiske/Unsplash)

But regulation alone is not enough; we also need to foster a culture of responsibility in the development of AI solutions and create standardised, repeatable processes to anticipate and root out potentially harmful outcomes for citizens.

AI and ethics

A company such as ours has to resolve hundreds, if not thousands, of ethical dilemmas in the course of business every day. How will customers and employees be impacted by the deployment of innovative new technologies? Will there be any material impact on their wellbeing, on their fundamental rights and freedoms?

These problems are particularly acute in the field of AI, where we hand over a degree of autonomy in the decision-making process to a non-human agent. In practice today, the actual autonomy exhibited by AI is still very limited, but it is likely to grow as algorithmic decision-making (ADM) systems become more sophisticated and more deeply embedded in business processes. This raises the ‘black box’ concern: how do we ensure that complex ADM systems are transparent, comprehensible, and subject to proper human oversight, without at the same time rendering them unusable?

This is where ethics guidelines and review boards come in. By extrapolating and standardising the types of ethical dilemmas encountered in business, and codifying a set of common responses and best practices, we can help steer the development of this new technology in a way that minimises the risk of a ‘black box’ or ‘mutant algorithm’ ever taking control and executing decisions that harm our customers or wider society.

We look forward to working with all stakeholders in the coming months as the European Commission prepares its legislative proposal on AI, to help usher in a regulatory framework that incentivises the development of trustworthy AI, centred on the EU’s values and fundamental rights.

Read the full report by Access Now here