The UK is highly vulnerable to misinformation and disinformation in the lead-up to a general election, according to a report from independent fact-checking charity Full Fact.
Between gaps in the Online Safety Act and the growing accessibility of generative artificial intelligence (GenAI) tools, the UK’s information environment requires fundamental legislative and regulatory changes to protect the public against misinformation, especially where it is AI-generated or affects public health.
“The government’s failed promise to tackle information threats has left us facing down transformative digital challenges with an analogue toolkit,” said Chris Morris, CEO of Full Fact.
“An Online Safety Act with just two explicit areas of reference to misinformation cannot be considered fit for purpose in the age of AI.”
The report, Trust and truth in the age of AI, delineates the imminent information threats facing the UK, as well as the actions politicians must take to resolve these issues.
Among the most urgent threats to the UK information environment identified by Full Fact were health-related strands of misinformation, such as those concerning cancer risks and vaccine confidence, with the report highlighting the Covid-19 pandemic as a particularly prolific period for the spread of misinformation.
Full Fact also outlined how GenAI’s capability to produce content “so plausible that it cannot easily or rapidly be identified as false”, combined with its ease of use and wide availability, has the potential to play a harmful role in the upcoming election.
Action needed
The charity has called on UK leaders to take action to close these policy gaps and protect citizens’ information security, rather than deferring to the interests of commercial bodies.
“A better future is possible, but only if all political parties show that they are serious about restoring trust in our system by campaigning transparently, honestly and truthfully,” said Morris.
“Without urgent action on these issues, the UK risks falling behind the pace of international progress in protecting citizens from misinformation and disinformation.”
The vulnerability of governments, businesses and society to AI-generated fake narratives was one of the key risks under discussion at the World Economic Forum (WEF) in January 2024, with the international organisation warning that AI-driven misinformation and disinformation online is the top short-term risk facing countries.
With three billion people expected to vote in elections worldwide between now and 2026, the WEF ranks the risk posed by disinformation and misinformation ahead of severe weather events, social polarisation and cyber security.
Responding to Full Fact’s findings, a Department for Science, Innovation and Technology (DSIT) spokesperson said: “We are working extensively across government to ensure we are ready to rapidly respond to misinformation. The Online Safety Act has been designed to be tech-neutral and future-proofed, to ensure it keeps pace with emerging technologies.
“In addition to the work of our Defending Democracy Taskforce, the Digital Imprints Regime will also require certain political campaigning digital material to have a digital imprint making clear to voters who is promoting the content.”
Report findings
Full Fact’s investigations explore how “generative AI has presented challenging new risks and opportunities to those tackling misinformation and disinformation”.
The report itself identifies various issues across two realms: GenAI’s relationship with the information environment, and the relationship between the government, political parties and the public with regards to trust.
Ultimately, it establishes that in the UK, online platforms and search engines do not yet effectively address or regulate bad information, especially plausible bad information generated by AI. It also calls for fact-checkers to be properly equipped to fight misinformation, as well as for the government to equip the public with sufficient media literacy resources to protect them against the anti-democratic spread of bad information.
Moreover, Full Fact advocates for politicians to follow through on their commitment to making it easier to correct misinformation, for the government to provide evidence for every claim that it makes and for a wider strengthening of the culture and system to “help restore trust in our politics”.
Urgent action
From the report, the charity drew up 15 recommendations of urgent action for the government, political parties, regulators and technology companies.
Full Fact urged the next government to amend the Online Safety Act to better address harmful misinformation; to build on existing regulatory principles to tackle AI-generated bad information; and to enable Ofcom to have “regulatory oversight of online platform and search engine policies on generative AI and content”.
Additionally, it encourages the next government to ensure researchers and fact-checkers have “timely access to data from online platforms”; to increase resources for media literacy; and to set out how it will work “transparently” with platforms and search engines to combat misinformation during the election period.
Technology companies and online platforms are equally called to action in the report – Full Fact urges these companies to participate in “international standards for indirect disclosure techniques and be transparent about the accuracy and reliability” of their AI detection tools. Online platforms and search engines are also encouraged to provide long-term funding for fact-checking organisations.
Looking forward, the report concludes that “once the election is over, there will be an opportunity for the next government and Parliament to make a fresh start on transparency”, calling for targeted legislation against deceptive campaign practices and the training of new MPs on correcting misinformation.