A new whitepaper released by the European Commission on 19th February has set the terms for the future of AI regulation in the EU. But, with a post-Brexit UK on the hunt for ways to strike out alone, it is by no means clear that similar strategies will be adopted on this side of the Channel. In fact, while the UK is likely to continue following European data rules (as set out in the General Data Protection Regulation (GDPR)), some have suggested that we may choose to diverge from the strategies set out in this new paper, developing a separate approach in the realm of artificial intelligence.

As of May 2019, a total of 42 countries, including both the UK and US, have signed up to the OECD principles on the regulation of AI. These can provide some global unity, but they are non-binding and leave room for a degree of divergence. While Trump’s US focuses on fostering innovation, other countries are taking a more ethics-driven approach. The merits and ambiguities of the EU paper, and the possible advantages of divergence, therefore need to be addressed as the UK approaches a significant fork in the road.

Headlines have focused on the omission of a ban on facial recognition from the paper, suggesting that the EU is taking a rather laid-back stance. The omission stands out all the more because a draft of the same document, leaked in January, was more stringent. Many therefore see the whitepaper as a move by the EU to weaken regulation.

In the case of facial recognition, people worry that governments’ use of it will lead both to bias and to breaches of privacy. Misidentifying a person has serious real-world consequences and causes distress to all involved. These consequences can also weigh disproportionately on minorities, as the data used to train the algorithms rarely reflects the diversity of the populations they are applied to.
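To make that concern concrete, here is a minimal sketch in Python. All the numbers are synthetic assumptions, not measurements of any real system: it simply shows how, if a face-matching model produces noisier similarity scores for a group under-represented in its training data, a single “neutral” decision threshold yields far more false matches for that group.

```python
import numpy as np

rng = np.random.default_rng(1)
THRESHOLD = 0.7  # one global similarity threshold for declaring a match

# Assumed, synthetic score distributions for pairs of *different* people:
# the model is noisier for the group under-represented in training.
non_match_scores = {
    "well-represented group": rng.normal(0.45, 0.08, 100_000),
    "under-represented group": rng.normal(0.45, 0.15, 100_000),
}

for group, scores in non_match_scores.items():
    false_match_rate = (scores > THRESHOLD).mean()
    print(f"{group}: false match rate = {false_match_rate:.4%}")
```

Under these assumed numbers, the under-represented group suffers false matches at dozens of times the rate of the well-represented one, even though the system applies exactly the same threshold to everyone.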

There is a reason the press has seized on this one technology rather than focusing on the strategy as a whole. Firstly, it has been in the news before: Google’s image-labelling technology made headlines for racism in 2015 when it tagged photos of black people as gorillas. Secondly, facial recognition crystallises many of our most intuitive fears about AI, facing accusations of undermining privacy and transparency as well as of discriminating against minorities. The EU’s failure to ban it has therefore disappointed many.

But much more is at stake here, and it is essential to look beyond facial recognition and our most intuitive fears. Debates over the proper place for regulation in the digital realm are in their infancy, and a clear, universal and ethically sound strategy is yet to be found. With development of the technologies themselves accelerating rapidly, it has already become necessary for governments to have some form of legal strategy on artificial intelligence and its potential threats. Many governments are not ready for this.

Before the UK decides whether to head in a different direction, it is worth exploring the EU’s plans in more depth, and asking whether they truly address our pressing fears over AI, bias and privacy. In particular, is this regulatory and investment-oriented approach really going to protect us from very real risks, or will it simply facilitate increased profits for multinational corporations?

Beyond facial recognition, the paper deals with all “high risk” technologies. This is itself a questionable strategy, as it lets many potentially dangerous applications fly under the radar and receive little attention. For example, the requirement of “human oversight” is to be applied differentially, depending on the stated risk of the technology: only where a technology is deemed high risk will human approval be required before an algorithm’s output is released. With no clear criterion for distinguishing high-risk algorithms, such a strategy risks leaving a huge number of potentially dangerous technologies untouched. More thought is needed to create a precise definition of “high risk”.

Despite this, many of the paper’s key commitments demonstrate an understanding of the issues at play. For example, setting a clear strategy on who is accountable for the actions of artificially intelligent systems should be an EU priority. However, with the limited detail provided in the whitepaper, it is difficult to see how far the EU will go in applying these strategies. Of course, these are not intended to be fully fleshed-out policies, so detail cannot be expected. But with issues such as transparency and fairness, stating a commitment to the principle is the easy part; more information is needed to give the statements meaning.


The case of diversity and fairness demonstrates the complexity of the issues at play. The EU focuses on problems arising “from the use of data without correcting possible bias” but provides less detail as to how such bias can be corrected. We live in a world where raw data is inevitably biased: certain groups are hyper-visible in the data while others are invisible. A number of strategies have been proposed for mitigating bias, from the introduction of protected classes, which would prevent predictions being made on the basis of race or gender, to the creation of unbiased data through experiments. These strategies differ not only in cost but in the accuracy of the resulting predictions. Sometimes bias cannot be removed without decreasing accuracy, and it is these tricky cases that the EU has not addressed.
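The difficulty is easy to demonstrate. The sketch below (a toy Python example with synthetic data and hypothetical variable names, not a strategy the whitepaper proposes) implements the simplest mitigation of all, dropping the protected attribute from the model’s inputs, and shows why it is not enough: an ordinary feature that is correlated with the protected group carries the bias through, while accuracy can fall at the same time.

```python
# Toy illustration of "fairness through unawareness" and its limits.
# Everything here is synthetic and hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000

group = rng.integers(0, 2, n)              # binary protected attribute
x1 = rng.normal(0, 1, n)                   # ordinary feature
x2 = rng.normal(0, 1, n) + 0.8 * group     # feature correlated with the group
# Historical outcomes are themselves skewed by group membership.
y = (x1 + x2 + 0.5 * group + rng.normal(0, 1, n) > 0).astype(int)

X_full = np.column_stack([x1, x2, group])  # model sees the protected attribute
X_blind = np.column_stack([x1, x2])        # protected attribute removed

def evaluate(X):
    X_tr, X_te, y_tr, y_te, _, g_te = train_test_split(
        X, y, group, test_size=0.3, random_state=0)
    pred = LogisticRegression().fit(X_tr, y_tr).predict(X_te)
    accuracy = (pred == y_te).mean()
    # Demographic parity gap: difference in positive-prediction rates.
    parity_gap = abs(pred[g_te == 1].mean() - pred[g_te == 0].mean())
    return accuracy, parity_gap

for label, X in [("with protected attribute", X_full),
                 ("without protected attribute", X_blind)]:
    accuracy, parity_gap = evaluate(X)
    print(f"{label}: accuracy={accuracy:.3f}, parity gap={parity_gap:.3f}")
```

In this toy setting the parity gap typically narrows but does not disappear when the protected attribute is removed, and accuracy tends to dip: precisely the trade-off between fairness and accuracy that the whitepaper leaves unaddressed.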

The EU’s move to think about issues such as non-discrimination in AI should be welcomed, but we must wait to see whether these strategies can truly get to grips with the complex issues at play. The flip-flop over facial recognition and the backdown over a proposed ban show how difficult it is to apply these principles to concrete cases. Regulators have quite a task on their hands if they are to develop this document into concrete policies applicable to real-world uses of technology.

In the meantime, there is plenty of time for the UK to diverge from this and come up with our own set of rules in the digital sphere. Ultimately, the big challenges will come when we get to the nitty-gritty of implementation, translating broad principles – such as “transparency” – into the rapidly developing world of AI.