Lord Holmes: UK cannot ‘wait and see’ to regulate AI


Mar 27, 2024



The UK government’s “wait and see” approach to regulating artificial intelligence (AI) is not good enough when real harms are happening right now, says Conservative peer Lord Christopher Holmes, who has introduced a private member’s bill to create statutory oversight for the technology.

Since the government published its AI whitepaper in March 2023, there has been significant debate over whether the “agile, pro-innovation” framework it outlined for regulating the technology is the right approach.
Under these proposals, the government would rely on existing regulators to create tailored, context-specific rules that suit the ways the technology is being used in the sectors they scrutinise.
Since the whitepaper was released, the government has been extensively promoting the need for AI safety, on the basis that businesses will not adopt AI until they have confidence that the risks associated with the technology – from bias and discrimination to the impact on employment and justice outcomes – are being effectively mitigated.
While the government doubled down on this overall approach in its formal response to the whitepaper consultation in January 2024, claiming it will not legislate on AI until the time is right, it is now saying binding rules could be introduced down the line for the most high-risk AI systems.
Speaking with Computer Weekly about his proposed AI legislation – which was introduced to Parliament in November 2023 and went through its second reading on 22 March – Holmes said “wait and see is not an appropriate response,” as it means being “saddled with the risks” of the technology while being unable to seize its benefits.
“People are already on the wrong end of AI decisions in recruitment, in shortlisting, in higher education, and not only might people find themselves on the wrong end of an AI decision, oftentimes, they may well not even know that is the case,” he said.

I want to see the philosopher, the ethicist, the artist, the industry startup, the scaleup, the venture capitalist – the whole range of people who can all bring their expertise

Lord Chris Holmes

Holmes said his bill is built on seven principles of trust, transparency, inclusion, innovation, interoperability, public engagement, and accountability, and would set up a central, horizontal regulator to manage and coordinate the government’s current sectoral approach. It would also create “AI responsible officers”, who would fulfil a function in businesses similar to that of data protection officers, and establish clear rules around data labelling and intellectual property obligations, in line with existing laws.
The bill would also “implement a programme for meaningful, long-term public engagement about the opportunities and risks presented by AI,” and make greater use of regulatory sandboxes so the technology can be safely tested before deployment.
While private members’ bills rarely become law, they are often used as a mechanism to generate debates on important issues and test opinion in Parliament.

Safety, inclusivity and participation  
Noting the government’s emphasis on AI safety, Holmes said it was “somewhat strange” to see so much discussion of the technology’s potentially existential threat in the run-up to the Prime Minister’s AI Safety Summit at Bletchley Park last year, only to then adopt a largely voluntary approach.
“If we are cognisant of the safety element, then you necessarily should connect the elements where AI is already impacting people’s lives. The way to get that grip on safety, and that grip on the positive, ethical use of AI – surely – is to legislate,” he said.
“The argument from the government goes something like, ‘It’s too early, you will stifle innovation before it’s had a chance to get off the ground, you’re potentially stifling investment,’ but all the lessons from history demonstrate to us that if you have the right legislative framework, it’s a positive benefit, because investors and innovators know the environment they’re going into.”
For Holmes, part of making AI systems safe for business and the public is making them inclusive by design.
“If you want the best outcomes, then you must – you absolutely must – take an inclusive-by-design approach,” he said. “It’s inclusion that enables real, across-the-piece innovation.”
Linking this to the government’s creation of an AI Safety Institute in the wake of the Bletchley Summit, Holmes added that a much broader range of people need to be brought into the organisation, which at present consists primarily of technical experts drawn from industry and academia.
“I want to see the philosopher, the ethicist, the artist, the industry startup, the scaleup, the venture capitalist – the whole range of people who can all bring their expertise, their experience, their voices; not just from narrow AI, but from a whole bunch of sectors,” he said.
While there needs to be diversity and inclusion at the policy and research levels, Holmes added that the “beating heart” of his bill is public engagement, as it engenders trust in the technology and processes surrounding it.
Without the trust that comes from meaningful public engagement, Holmes reiterated that people will be “unlikely to gain the benefits” while being “extremely likely to suffer the burdens of imposed AI.”
He said the proliferation of AI “gives us an opportunity to completely transform how the state interacts with individuals, with citizens, to make consultation properly brought to life and properly human.”
Highlighting the idea of citizens’ assemblies and the example of Taiwan’s “alignment assemblies” – a government project attempting to build consensus around AI and ensure applications of the technology are consistent with people’s interests – Holmes said something similar could be adopted in the UK.
“The kind of results, the kind of insights, the kind of real intelligence that could come out of that would surely be so profound,” he said.

Plugging the gaps
A key pillar of Holmes’ AI bill is the need for an overarching “AI authority”, which would play a coordinating role to ensure that all existing regulators are addressing their obligations under the government’s approach.
“The real purpose of the AI authority, as set out in the bill, is not to be a huge do-it-all AI regulator, not a bit of it. That would be inordinately expensive, bureaucratic, duplicative, and suboptimal,” he said. “Its role is to look horizontally across the regulators.”
Holmes added that this would provide independent assessment of regulators’ competency around AI, as well as of the suitability of current legal frameworks, while ensuring all action is underpinned by key principles.
“Transparency, trustworthiness, inclusion, interoperability, an international perspective, accountability, assurance – the AI authority can be that absolute proliferator of all of those principles across the piece,” he said.

Highlighting the success of the UK’s financial technology (fintech) sandboxes, which have since been replicated in jurisdictions around the world, Holmes said the AI authority could also play a role in coordinating regulatory sandboxes to properly test various uses of AI.
A July 2023 gap analysis by the Ada Lovelace Institute found that “large swathes” of the UK economy are either unregulated or only partially regulated, meaning it is not clear who would be responsible for scrutinising AI deployments under the government’s proposed approach. Holmes said a horizontal body like his proposed AI authority could address some of these issues around lack of coverage and poor redress.
“The role of the AI authority is very much to highlight all of those gaps, all of those deficiencies to government, so they can be addressed – rather than believing that in any sense it’s possible for existing regulators to just do all this in a vertical way, without having it all looked at in a coherent, consistent, horizontal approach,” he said.

Enforcing existing rules
Part of the AI bill introduced by Holmes focuses on the need for uses of the technology to comply with laws that already exist, but which have generally gone unenforced when it comes to AI.
“Artificial intelligence is just the latest example of the ‘frontierist’ argument, which gets made throughout history in any period of significant innovation,” he said.
“It’s the argument that goes back to the founding of many nations – ‘Don’t fence me in; the rules don’t apply to me; you’ve got to let us get on with it; we are doing such innovative, such different things to everybody else, the rules do not apply’. That argument has never had any weight, or merit, or ethical foundation at any point in human history.”
In particular, Holmes stressed the importance of existing data protection and intellectual property rules, noting, for example, that the aim is not to completely prevent the use of copyrighted works in training AI models, but to make them available through negotiation with rights holders.
“To have an argument where AI can just crawl over people’s work with complete abandon, as if the rules don’t apply, is wholly and entirely unacceptable,” he said, adding that AI legislation is urgently needed to protect UK creatives: “If we don’t act, if we don’t legislate, it’ll be too late in so many instances.”
“Writing, artistic works, music. These are the things that tune into our human soul – it can’t just be taken and ripped out, with no thought, with no word, and with no payment, no respect,” he said.


