
UK’s pro-innovation approach to AI regulation


The consultation paper from the UK’s Department for Science, Innovation and Technology (DSIT) proposes a pro-innovation framework that aims to balance the risks and benefits of AI, while supporting the UK’s leadership in AI research and development.

The document outlines the following key elements of the proposed framework:

  • Revised cross-sectoral AI principles: The document updates the existing AI principles published in 2018 based on feedback from stakeholders and international best practices. The revised principles are transparency, fairness, accountability, safety and security, and public trust. The document seeks views on whether these principles are adequate and how they can be implemented effectively.
  • A statutory duty to have due regard: The document proposes to introduce a statutory duty on regulators to have due regard to the AI principles when regulating AI technologies. This would give regulators a clear and consistent mandate to address AI-related issues, while retaining flexibility and proportionality in their approach. The document asks whether this intervention is appropriate and whether there are alternatives that would be more effective.
  • New central functions to support the framework: The document identifies four functions that would benefit the AI regulation framework if delivered centrally: providing guidance and standards, facilitating coordination and collaboration, monitoring and evaluating the framework, and engaging internationally. The document explores how these functions could be delivered by existing or new bodies and what role the government should play in supporting them.
  • Legal responsibility for AI: The document examines how the current legal frameworks allocate legal responsibility for AI across the life cycle, from development to deployment to use. It argues that the existing frameworks are generally sufficient and adaptable but acknowledges that some areas may have gaps or uncertainties. The document invites views on the challenges and opportunities of applying the AI principles to different AI applications and systems and how the government can support effective AI-related risk management.
  • Foundation models and the regulatory framework: The document recognises the rapid development and broad applicability of foundation models, such as large language models (LLMs), that can perform multiple tasks across domains. It discusses the potential benefits and risks of these models and how they pose specific challenges for regulators. The document suggests that measuring compute could be a possible tool to govern foundation models and asks whether other approaches would be more effective.
  • AI sandboxes and testbeds: The document reaffirms the government’s commitment to supporting innovators by addressing regulatory barriers that prevent new, cutting-edge products from getting to market. It proposes developing AI sandboxes and testbeds that enable government and regulators to test and evaluate novel AI solutions in a safe and controlled environment. The document seeks feedback on how to maximise the benefit of sandboxes to AI innovators and which industry sectors or classes of products would most benefit from them.

The consultation invites individuals and organisations to provide their views by responding to the questions set out in the document. It will be open for 12 weeks, until 21 June 2023, and includes details on how to respond online, by email, or by post.
