The oft-repeated idea that regulation stifles innovation needs to be done away with, experts have argued, especially with regard to powerful technologies such as artificial intelligence (AI), because clear limits on the use of such tools encourage a more ethical and responsible approach to their development and deployment.
Speaking at TechUK’s fifth annual Digital Ethics Summit on 8 December, a number of expert panellists criticised the tech industry’s insistence that regulation would stifle technological innovation, arguing instead that clear rules are actually an enabler of positive progress.
Sandra Wachter, an associate professor and senior research fellow at the University of Oxford who specialises in algorithmic bias, said that while most people would now agree AI and other technologies should be operated fairly, transparently and with accountability, the next task is to think about how this can be put into practice.
Wachter added that this was especially important now, given that AI systems are increasingly being used to make life-altering decisions about people, ranging from whether they get a loan to whether they end up in prison.
“I think the next step is to get out of the mindset that every type of regulation is stifling innovation because it is really, really not true,” she said. “If somebody says that, then they haven’t actually understood what regulation is supposed to do – it’s not about stifling innovation, it’s about inspiring responsible innovation.”
Wachter said it was also important to distinguish what exactly is meant by “stifling innovation”, and whether people are referring to the regulation of research or deployment.
“Nobody wants to regulate, stifle, hinder research,” she said. “I would say the opposite is true in the UK – it’s very much a focus on trying to find the brightest minds in the world to push the boundaries of knowledge.
“The thing that we want to focus on is deployment, especially in areas where there is high risk and potential harm for citizens – that’s the thing we want to regulate. I think the idea of ‘stifling’ deployment is odd, because I’m not quite sure if I want to have a field where only people that don’t want to play by any rules move in.
“Are those the people you want to attract? Or would you want to have people who are attracted by high standards and ethical conduct? I want to have those people who have the public in mind.”
Liberal Democrat peer Lord Clement-Jones agreed with Wachter that regulation should be seen as an enabler rather than a barrier, saying that although there has been good progress in the past five years – for example, in establishing institutions such as the Office for AI and the Centre for Data Ethics and Innovation (CDEI) – the UK is still short on concrete action.
“There seems to be this idea that regulation is the enemy of innovation,” he said. “We’ve never set out a set of UK principles for the adoption of AI, sadly. CDEI has done some great work on bias risk, but they haven’t been put on a statutory footing despite the original intentions, and there’s been very little appetite in the research and business communities to reform the GDPR [General Data Protection Regulation] and yet here we are, in the consultation, talking about getting rid of Article 22 [on automated processing], which is one of the few bits of the GDPR that tries to come to terms with AI.”
Clement-Jones added that when push comes to shove on regulation, especially on individual technologies such as facial recognition or automated decision-making, the UK is “way behind the curve” compared to the rest of Europe and even the US.
He also criticised digital minister Chris Philp, who had used his keynote at the Summit to say that the UK would be focusing on “innovation-friendly, light-touch regulation in relation to AI”. Philp added: “We want it to be risk-based in nature, proportionate, flexible… outcomes-focused, we don’t want it to be unduly onerous, and we’ll be formally bringing that forward in the coming months.”
Commenting on Philp’s commitment to “light-touch regulation”, Clement-Jones said that the UK carving its own path on AI regulation, as Philp suggested, was entirely the wrong approach. “We need to converge, otherwise our developers, our major companies applying AI systems in Europe, will find themselves subject to European regulation, and they won’t like it because they will have adjusted all their development to a UK set of rules, which doesn’t seem to be very fruitful,” he said.
“Just like the GDPR is the gold standard, we’re going to have to recognise that the EU AI Act is going to be pretty widely adopted, and even the Americans are now coming forward with standards, risk assessment, and so on. I think we’ve got to move quite a bit further in our attitude towards regulation.”
Commenting on popular assumptions about AI competitiveness on a global scale, Joanna Bryson, professor of ethics and technology at the Hertie School in Berlin, said that narratives about ethics constraining innovation were simply wrong. “The whole point of ethics is to help a society flourish,” she said. “It isn’t just about how quickly you can express your innovations, it’s about how many innovations you can aggregate and how much good you can do.”
Bryson also suggested that, in terms of the best way to regulate new technologies right now, governments should use the nearest existing legal structures until new legislation can be put in place. “If software was treated like an ordinary manufactured product, then we could have ordinary product liability and responsibility,” she said.