UK Rules Out New AI Regulator, Instead Favors Existing Regulators to Develop Guidelines for Responsible Use
The UK government has revealed its plans to regulate artificial intelligence (AI) through guidelines on “responsible use” rather than establishing a new AI regulator. The government recognizes AI as one of the “technologies of tomorrow” and acknowledges its significant contribution of £3.7 billion ($5.6 billion) to the UK economy last year. However, concerns regarding job displacement and potential malicious use of AI have prompted calls for regulation.
AI refers to computer systems capable of performing tasks that typically require human intelligence, including chatbots that can understand and respond to questions with human-like answers, as well as systems that can identify objects in images. The Department for Science, Innovation and Technology has released a white paper proposing rules for general-purpose AI, which encompasses systems that can be used for various purposes, such as the chatbot ChatGPT.
As AI continues to advance rapidly, concerns have been raised about its potential risks to privacy, human rights, and safety. There are worries that AI systems trained on large datasets scraped from the internet, which can contain racist, sexist, and otherwise undesirable material, may reproduce biases against specific groups. AI could also be used to generate and spread misinformation. Consequently, many experts argue that AI requires regulation.
However, advocates of AI emphasize that the technology already delivers tangible social and economic benefits. The government is concerned that a patchwork of legal regimes could hinder organizations from fully utilizing AI’s potential due to confusion surrounding compliance. Instead of establishing a single AI regulator, the government proposes that existing regulators, such as the Health and Safety Executive, Equality and Human Rights Commission, and Competition and Markets Authority, develop their own approaches tailored to how AI is currently being used in their respective sectors. These regulators will operate within existing laws rather than being granted new powers.
While the idea of regulation is welcomed, some experts express reservations about the UK’s approach. They highlight “significant gaps” in the proposed white paper, noting that it lacks statutory footing, which means there will be no new legal obligations for regulators, developers, or users of AI systems initially. Furthermore, effectively regulating different uses of AI across sectors would require substantial investment in existing regulators.
The white paper outlines five principles that regulators should consider to enable the safe and innovative use of AI in their respective industries: ensuring the safety, security, and robustness of AI applications; promoting transparency and explainability; upholding fairness and compliance with existing laws; establishing accountability and governance; and providing clear routes for contestability and redress when AI produces harmful outcomes or decisions.
Over the next year, regulators will issue practical guidance to organizations on implementing these principles within their sectors. Michelle Donelan, the Secretary of State for Science, Innovation, and Technology, emphasizes the need for rules to ensure the safe development of AI, acknowledging that AI is no longer confined to science fiction and is advancing at an astonishing pace.
However, some legal experts argue that the UK’s approach is a “light-touch” one, making the country an outlier compared to global trends in AI regulation. Countries like China and the US have enacted or proposed specific laws to address perceived AI risks. In the EU, the European Commission has proposed the Artificial Intelligence Act, which has a broader scope than China’s regulation. The Act would classify AI systems according to their potential for harm and impose regulation accordingly. Certain AI uses, such as social scoring by governments, would be prohibited altogether.
The debate around AI regulation continues as governments worldwide grapple with striking a balance between fostering innovation and ensuring the responsible and ethical use of AI.