
Regulation of AI needed to avoid the mistakes of social media


Nov 25, 2021

Experts giving evidence to the House of Lords Communications and Digital Committee have warned that without sufficient regulation, artificial intelligence (AI) could follow the path of the largely unregulated social media platforms.

Among the issues explored during the evidence hearing was the nature of international regulations and whether self-regulation works.
Tabitha Goldstaub, co-founder of CogX and chair of the AI Council, said: “Companies can deploy AI systems in an almost unregulated market. We need to ensure the government can scrutinise systems.”
OpenAI, developer of the GPT-3 language model, was also invited to give evidence. Mira Murati, senior vice-president of research, product and partnerships at OpenAI, described to the committee not only the pace of AI development and the ease of access via application programming interfaces (APIs), but also why regulators need to act quickly.
“We predict AI will have the same impact as social media in the coming decade, with very little attention to how systems are being used,” she said. “It is time to understand risks and opportunities before they become widely available.”
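That ease of access is worth dwelling on: by 2021, a handful of lines of Python was enough to send a prompt to GPT-3 and receive generated text back. The sketch below assumes OpenAI’s Python client as it existed at the time; the engine name, prompt and parameters are illustrative examples, not details drawn from the hearing.

```python
# A minimal sketch of querying GPT-3 through OpenAI's public API as it
# existed around 2021. Engine name and parameters are illustrative.
import openai

openai.api_key = "YOUR_API_KEY"  # issued on sign-up to the API

response = openai.Completion.create(
    engine="davinci",   # one of the GPT-3 engines exposed by the API
    prompt="Explain in one sentence why AI systems may need regulation:",
    max_tokens=60,      # cap the length of the generated continuation
    temperature=0.7,    # degree of randomness in the sampled text
)

print(response["choices"][0]["text"].strip())
```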
For Goldstaub, among the challenges and opportunities facing AI is the balance between academic research in the public domain, where algorithms can be analysed, and the level of R&D being run by major software businesses. According to Goldstaub, R&D is happening at breakneck speed. Among the top papers being presented at AI conferences, half came from corporate research centres such as Google, Facebook and Microsoft, and half from academia, she said.
She warned the committee that this level of commercial activity is leading to a move away from the open nature of research, which harms researchers’ ability to reproduce published AI results.
Murati discussed the rapid pace of AI development, which is leading organisations such as OpenAI to self-regulate. “We can put together a large neural network, large data and large computers, which gets us reliable and astounding AI progress,” she said. “If we continue on this trajectory, we can push further and will quickly have systems capable of writing programs.”
Such a trajectory would eventually lead to the development of artificial general intelligence (AGI), in which algorithms can potentially surpass human intelligence, said Murati, adding: “Our mission is to ensure that once we reach AGI, we develop and deploy it in ways that benefit all of humanity.”
Describing the approach OpenAI has taken to self-regulation, Murati told the committee that although GPT-3 was originally released in May 2020, with an API made available in June 2020, access was heavily restricted at first. “We had a lot of restrictions in place because we weren’t sure it could be used safely and reliably,” she said.

Murati said OpenAI had only recently made the API fully available after it had made sufficient progress on safety protocols and setting up systems to detect bad behaviours. “We have a dedicated safety team to ensure we deploy the technology in a responsible way, align to human values and reduce harmful, toxic content,” she said. “We believe regulation is essential to build and sustain public trust in the technology and ensure it is deployed in a safe, fair and transparent way.”
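Murati did not describe OpenAI’s detection systems in detail, but the general pattern she alludes to is a screening step between the model and the user. The following sketch is hypothetical: generate and classify_toxicity stand in for a language model and a harmful-content classifier, neither of which is specified in the source.

```python
from typing import Callable

# Hypothetical sketch of a safety gate between a language model and its
# users: output is screened by a classifier before being returned. Both
# callables and the threshold are illustrative assumptions, not
# OpenAI's actual safety pipeline.
def guarded_generate(
    generate: Callable[[str], str],             # the language model
    classify_toxicity: Callable[[str], float],  # score in [0, 1]
    prompt: str,
    threshold: float = 0.5,
) -> str:
    """Return model output only if it passes the toxicity screen."""
    text = generate(prompt)
    if classify_toxicity(text) >= threshold:
        return "[output withheld: flagged as potentially harmful]"
    return text
```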
Among the problem areas the government faces in drawing up a regulatory framework for AI is the fact that the technology crosses international borders. Chris Philp, minister for technology and the digital economy at the Department for Digital, Culture, Media and Sport, said the pace of AI developments is a challenge to regulators.
“Technologies are international in scope, which means we can’t separate which pieces are under UK jurisdiction,” he said. At the same time, the government did not want to put in place a regulatory framework that stifled innovation or had a fixed architecture that would be immediately out of date, said Philp.
Beyond the need for regulations that keep up with the pace of change without hindering innovation, Goldstaub suggested that the committee also explore how the general public can be better educated in AI decision-making. “In order for people to trust, they need to understand the importance of AI,” she said.
Drawing an analogy with the automotive and airline industries, where established safety regulations can be appreciated at a high level without an understanding of their inner workings, she said: “One of the missing pieces as consumers of AI technology is that every child leaves school with the basics of data and AI literacy.”
Murati urged the committee to look at how the government can work closely with industry to identify emerging issues. She suggested that regulators could introduce rules covering the transparency and explainability of AI systems, in order to understand the risks and ensure that mitigations are in place. Regulators could also establish checks that assess the reliability of AI algorithms, with companies held accountable for unreliable systems, she said.
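The hearing leaves open what such a reliability check would look like in practice. One simple form is a held-out test suite whose pass rate a company could be asked to report; the sketch below is an assumption for illustration, and every name in it is a placeholder.

```python
from typing import Callable, Iterable, Tuple

# Illustrative reliability check: run a model over a set of test
# prompts and report the fraction of outputs judged acceptable.
# All names here are hypothetical placeholders.
def reliability_score(
    model: Callable[[str], str],
    test_cases: Iterable[Tuple[str, Callable[[str], bool]]],
) -> float:
    """Fraction of test cases whose output passes its acceptance check."""
    passed = total = 0
    for prompt, is_acceptable in test_cases:
        total += 1
        if is_acceptable(model(prompt)):
            passed += 1
    return passed / total if total else 0.0
```

Being held accountable, in Murati’s framing, could then mean publishing such scores against audited test suites, though the hearing does not specify a mechanism.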
For Murati, industry standards for explainability, robustness and reliability, combined with a set of principles that companies can be evaluated against, would help to ensure the safe development and deployment of AI systems. “Incentivising standards would go a long way to ensure safe use,” she added.


