In a bid to assert leadership in AI and to shape how the world regulates it, both the United States and the United Kingdom are making significant strides. Recent developments in the AI regulatory landscape point to a degree of global cooperation, albeit a cautiously limited one.
The United Kingdom has just concluded an international AI summit to address potential “catastrophic” risks associated with artificial intelligence. The summit brought together representatives from several nations, including the US, China, and Saudi Arabia, alongside prominent tech CEOs such as Elon Musk and Sam Altman.
Simmering tensions between the US and the UK?
While the US extended its support for the summit by sending Vice President Kamala Harris, it also managed to upstage the UK’s efforts. On Monday, President Joe Biden signed a sweeping executive order on AI, a comprehensive document he touted as “the most significant action any government anywhere in the world has ever taken.” Vice President Harris, in a subsequent speech at the US embassy in London, announced the establishment of a new AI Safety Institute, closely following the UK’s unveiling of a similar institution.
Alex Krasodomski, a senior research associate specializing in tech policy at the British think tank Chatham House, noted, “It’s no accident that the US announced its executive order a few days before the UK summit. There is some tension there.”
The US and the UK are vying to be at the forefront of regulating transformative new technologies, especially after perceived failures to regulate digital privacy and user health on social media platforms.
In his remarks at the executive order signing, President Biden noted that many people ask whether the US is leading on AI regulation; he now has a set of well-defined policies to point to in response. The UK, meanwhile, can point to its diplomatic achievement in securing agreement from the US, China, and other global powers on fundamental AI principles.
EU and China in the mix
The race for AI regulation is not confined to the US and the UK. The European Union (EU) introduced its stringent AI Act in 2021, which is expected to be refined further and passed into law shortly. China, for its part, has recently enacted laws requiring AI companies to register their services with the government and undergo a security review before entering the market.
Sarah Kreps, a professor at Cornell University and director of its Tech Policy Institute, aptly described the situation as an “AI regulation arms race,” with various actors aiming to demonstrate their proactivity and ambition in addressing AI-related issues, both in terms of timing and scope.
Different countries are taking distinct approaches to AI regulation. President Biden’s executive order focuses primarily on immediate AI risks related to security, bias, job displacement, and fraud. In contrast, the UK summit, hosted by Prime Minister Rishi Sunak, emphasized the potential long-term threats posed by advanced “frontier” AI models. The real measure of success, however, will be how these policies are executed.
Declarations and executive orders are vague at best, tokenistic at worst
Biden’s executive order assigns the responsibility of setting standards and enforcing its provisions to existing government agencies, including the Department of Commerce, the Department of Homeland Security, and the Federal Trade Commission.
On the UK side, there is no legal framework for enforcing the newly established “Bletchley Declaration” among the 28 countries participating in the AI summit. Even if mechanisms existed to penalize countries for failing to address issues like auditing destructive military AI applications, the declaration’s language is vague enough to make it difficult to hold any country accountable for its behavior. The first action item, for instance, is framed as “identifying AI safety risks of shared concern.”
While it may be tempting to dismiss this week’s policy proposals, communiqués, and events as mere public displays, they represent a starting point for constructing voluntary “normative guardrails” among nations.
Though a clear global framework for handling AI has yet to be established, these recent developments bring meaningful policies closer. Just as competition drives innovation in the tech industry, a dose of healthy rivalry may help AI regulators shape a more robust regulatory landscape.