
EU Act ‘must empower those affected by AI systems to take action’

By admin

Mar 31, 2022

Independent research organisation the Ada Lovelace Institute has published a series of proposals on how the European Union (EU) can amend its forthcoming Artificial Intelligence Act (AIA) to empower those affected by the technology on both an individual and collective level.

The proposed amendments also aim to expand and reshape the meaning of “risk” within the regulation, which the Institute has said should be based on “reasonably foreseeable” purpose and extend beyond its current focus on individual rights and safety to also include systemic and environmental risks.
“Regulating AI is a difficult legal challenge, so the EU should be congratulated for being the first to come out with a comprehensive framework,” said Alexandru Circiumaru, European public policy lead at the Ada Lovelace Institute. “However, the current proposals can and should be improved, and there is an opportunity for EU policymakers to significantly strengthen the scope and effectiveness of this landmark legislation.”
As it currently stands, the AIA, which was published by the European Commission (EC) on 21 April 2021, adopts a risk-based, market-led approach to regulating the technology, focusing on establishing rules around the use of “high-risk” and “prohibited” AI practices.
However, digital civil rights experts and organisations have claimed that the regulatory proposal is stacked in favour of organisations – both public and private – that develop and deploy AI technologies, which are essentially being tasked with box-ticking exercises, while ordinary people are offered little in the way of protection or redress.
They claimed that, ultimately, the proposal would do little to mitigate the worst abuses of AI technology, essentially acting as a green light for a number of high-risk use cases because of its emphasis on technical standards and its approach to mitigating risk.
Published on 31 March 2022, the Ada Lovelace Institute’s proposed amendments to deal with these issues include recognising “affected persons” as distinct actors in the text of the AIA, which currently only recognises “providers” – those putting an AI system on the market – and “users” – those deploying the AI system.
It said the AIA should also be used to create a comprehensive remedies framework around “affected persons”, including a right for individuals to bring complaints, a right to bring collective action, and a right to information to supplement what is already provided under the General Data Protection Regulation (GDPR).
“The EU AI Act, once adopted, will be the first comprehensive AI regulatory framework in the world. This makes it a globally significant piece of legislation with historic impact far beyond its legal jurisdiction,” said Imogen Parker, associate director at the Institute.
“The stakes for everyone are high with AI, which is why it is so vital the EU gets this right and makes sure the Act truly works for people and society.”
The Ada Lovelace Institute further recommends renaming "users" as "deployers" to highlight the distinction between those using the technology and those it is being used on, as well as determining risk based on a system's "reasonably foreseeable purpose", rather than the "intended purpose" as defined by the provider itself.
“The current approach may not offer adequate clarity about when a deployer has moved beyond the intended purpose,” the Institute said. “Changing the language to ‘reasonably foreseeable purpose’ would require providers to consider more fully the range of potential uses for their technology. It would also encourage greater clarity in setting the limits of the systems that providers put on the market as to how far deployers can experiment with an AI system without incurring extra obligations.”
Under the current proposals, high-risk systems face only ex-ante requirements, meaning the rules apply to AI systems before deployment. The Institute said this reflects a "product safety" approach to AI that "fails to capture" how such systems are used in the real world.
To deal with this, it recommends subjecting high-risk systems to ex-post evaluations and establishing a process for adding new types of AI to the high-risk list.

In terms of biometric categorisation and emotion recognition, the Institute recommends adding both to the “unacceptable risk” list in Article 5 of the AIA, saying: “Their use could lead to discrimination on the basis of characteristics that are protected under EU law.”
Other civil society groups have also called for major changes to the AIA on a number of occasions since its publication.
In September 2021, for example, European Digital Rights (EDRi) criticised the EU’s “technocratic” approach to AI regulation, which it said was too narrowly focused on implementing technical bias mitigation measures – otherwise known as “debiasing” – to be effective at preventing the full range of AI-related harms.
It added that by adopting a techno-centric “debiasing” approach, policymakers are reducing complex social, political and economic problems to merely technical matters of data quality, ceding significant power and control over a range of issues to tech companies in the process.
In the same month, non-governmental group Fair Trials said the EU should impose an outright ban on the use of AI to "predict" criminal behaviour, on the basis that its use would end up reinforcing discrimination and undermining fundamental human rights, including the right to a fair trial and the presumption of innocence.
The call to ban predictive policing systems was reiterated in March 2022 by a coalition of 38 civil society organisations, including Fair Trials and EDRi.
They said that because the underlying data used to create, train and operate such systems is often reflective of historical structural biases and inequalities in society, their deployment would “result in racialised people, communities and geographic areas being over-policed, and disproportionately surveilled, questioned, detained and imprisoned across Europe”.
And in November 2021, 114 organisations signed an open letter calling on European institutions to ensure that the AIA “addresses the structural, societal, political and economic impacts of the use of AI, is future-proof, and prioritises democratic values and the protection of fundamental rights”.
More specifically, the organisations – which included Access Now, Fair Trials, Algorithm Watch, Homo Digitalis and Privacy International – recommended: placing more obligations on users of high-risk AI systems to facilitate greater accountability; creating mandatory accessibility requirements so that those with disabilities can easily obtain information about AI systems; and prohibiting the use of any system that poses an unacceptable risk to fundamental rights.
The organisations added that the AIA does not currently contain any provisions or mechanisms for either individual or collective redress and, as such, “does not fully address the myriad harms that arise from the opacity, complexity, scale and power imbalance in which AI systems are deployed”.
While not addressing the AIA directly, Michelle Bachelet, the United Nations high commissioner for human rights, has also called for a moratorium on the sale and use of AI systems that pose a serious risk to human rights, at least until adequate safeguards are implemented, as well as for an outright ban on AI applications that cannot be used in compliance with international human rights law.
“Artificial intelligence now reaches into almost every corner of our physical and mental lives and even emotional states,” said Bachelet. “AI systems are used to determine who gets public services, decide who has a chance to be recruited for a job, and of course they affect what information people see and can share online.
“Given the rapid and continuous growth of AI, filling the immense accountability gap in how data is collected, stored, shared and used is one of the most urgent human rights questions we face.”


