- 28 countries and EU sign first AI safety agreement
- China, US and EU agree to work together against AI risks
- Leaders discuss ways to advance responsible & safe use of AI
Global political and tech leaders, including representatives from the U.S., China, the UK and the European Union (EU), came together at the AI Safety Summit 2023 to discuss the potential opportunities and risks associated with artificial intelligence (AI).
The world leaders signed the Bletchley Declaration at the summit hosted by the British government. Signed by 28 countries and the EU, the “world-first” agreement on AI is seen as a significant step forward in building AI technologies safely and responsibly.
“Today we’ve reached a historic agreement, with governments and AI companies working together to test the safety of their models before and after they are released. The UK’s AI Safety Institute will play a vital role in leading this work, in partnership with countries around the world,” British Prime Minister Rishi Sunak said.
Historical significance of venue
The two-day (November 1-2, 2023) AI safety summit was held about 50 miles from London at a site of profound historical significance.
Bletchley Park is renowned as the birthplace of computing for its vital role in Second World War cryptographic and intelligence breakthroughs. It was once the clandestine hub of Britain’s codebreakers, including mathematician Alan Turing, who played a key role in cracking the Enigma code and is considered the ‘father of modern computer science’.
Key points from Bletchley Declaration
Here are the key highlights of the Bletchley Declaration, published on November 1, 2023, following the AI Safety Summit:
- Global Cooperation: The declaration underscores the need for global collaboration to address AI-related risks through international forums and relevant initiatives. It emphasizes human-centric and responsible AI and balanced regulatory approaches.
- AI’s Impact and Time to Act: The declaration recognizes that AI systems are already integrated into various aspects of daily life, including housing, employment, transport, education, health, accessibility, and justice, and their use is expected to increase. The document underscores that this is a unique moment to act to ensure the safe development of AI.
- AI Risks: AI is acknowledged as posing significant risks, particularly at the frontier of highly capable AI models. Concerns include intentional misuse and loss of control, particularly in domains such as cybersecurity and biotechnology. The declaration calls for urgent attention to deepening the understanding of these potential risks.
- Addressing AI Risk: The agenda for addressing frontier AI risk focuses on identifying shared safety concerns and fostering a collective, evidence-based understanding as AI capabilities advance globally. The countries also agreed to establish risk-based policies across nations, emphasizing transparency, evaluation metrics, safety testing tools, and public sector capacity building.
- Potential of AI: The declaration acknowledges the power of AI to transform and enhance human wellbeing. It emphasizes the importance of AI being designed, developed, deployed, and used in a safe, human-centric, trustworthy, and responsible manner.
The 28 countries and the EU agreed to support Professor Yoshua Bengio, a Turing Award-winning AI academic, to lead the first-ever frontier AI ‘State of the Science’ report. “We have seen massive investment into improving AI capabilities, but not nearly enough investment into protecting the public, whether in terms of AI safety research or in terms of governance to make sure that AI is developed for the benefit of all,” Bengio said.
Which countries signed the declaration?
The declaration was signed by 28 countries and the European Union (EU). These include: Australia, Brazil, Canada, Chile, China, France, Germany, India, Indonesia, Ireland, Israel, Italy, Japan, Kenya, Saudi Arabia, Netherlands, Nigeria, The Philippines, Republic of Korea, Rwanda, Singapore, Spain, Switzerland, Türkiye, Ukraine, United Arab Emirates (UAE), United Kingdom and United States.
Who attended the summit?
The summit at Bletchley Park convened influential world leaders and tech experts from across the world. Prominent participants included British Prime Minister Rishi Sunak, U.S. Vice President Kamala Harris, European Commission President Ursula von der Leyen, United Nations Secretary-General Antonio Guterres, tech visionary Elon Musk, and OpenAI CEO Sam Altman, whose company created ChatGPT.
President of the European Commission Ursula von der Leyen said: “The greater the AI capability, the greater the responsibility.” She urged prompt action and said that credible international governance requires four pillars: a robust and independent scientific community, globally accepted testing standards, thorough investigation of incidents involving AI errors or misuse, and a trusted alert system.
The governments of France and Germany also extended their support by mobilizing stakeholders and resources already active on AI safety.
Prime Minister of Italy Giorgia Meloni said that “Artificial intelligence is entering every domain of our lives. It is our responsibility, today, to steer its ethical development and ensure its full alignment with humankind’s freedom, control and prosperity.”
US show of power in emerging technology
The UK hosted the inaugural AI safety summit, but the United States, home to the world’s AI giants, was leading the way.
U.S. Vice President Kamala Harris laid out comprehensive domestic actions that the Biden administration is taking, including a new wide-ranging directive by President Biden to promote safe and secure AI and the establishment of the United States AI Safety Institute to test the safety of AI models for public use.
“We intend that these domestic AI policies will serve as a model for global policy, understanding that AI developed in one nation can impact the lives and livelihoods of billions of people around the world,” Harris said in a speech at the summit.
This dominating stance presents a challenge for the EU, which is already well ahead in developing its own AI regulations. At the summit, the EU delegation promoted the EU AI Act, considered the world’s first comprehensive AI law. Proposed by the European Commission in April 2021, the AI Act classifies AI systems based on their risk and regulates them accordingly. The rules are expected to be adopted by early 2024.
Věra Jourová, the European Commission’s vice-president for values and transparency, said the UK was falling behind by its “own decision”. Rishi Sunak argued it might be too early for AI legislation.
Elon Musk and Rishi Sunak discuss ‘disruptive force’ of AI
The British prime minister, Rishi Sunak, held a conversation with Elon Musk in connection with the AI safety summit at Bletchley Park, discussing the political and social impacts of the technology.
The tech billionaire, who owns Tesla, SpaceX, X (formerly Twitter), and AI startup xAI, said that AI has the potential to become the “most disruptive force in history” and agreed that government oversight is essential.
Discussing the transformative potential and challenges of AI, Musk warned that AI could outperform humans in all tasks, potentially leading to job displacement. “It’s hard to say exactly what that moment is, but there will come a point where no job is needed … You can have a job if you wanted to have a job for personal satisfaction. But the AI would be able to do everything.”
“One of the challenges in the future will be how do we find meaning in life,” Musk said, speaking alongside UK Prime Minister Rishi Sunak at Lancaster House, an official UK government residence.
Musk has repeatedly cautioned about the risks that AI poses to humanity. “Mark my words, AI is far more dangerous than nukes. Far. So why do we have no regulatory oversight? This is insane,” Musk said at SXSW in 2018. Recently, he joined other tech leaders in an open letter advocating for a halt to the development of AI more advanced than OpenAI’s GPT-4.
Despite his warnings, Musk’s artificial intelligence startup xAI has just released “Grok”, an AI chatbot designed to assist humanity in its quest for knowledge by empowering research and innovation. The chatbot will be available to all X Premium+ subscribers once it is out of early beta.
China joins global AI safety conversation
Inviting China to the summit was a surprise for many. Even more surprising was the country’s attendance, which included a delegation from China’s Ministry of Science and Technology as well as representatives from Alibaba and Tencent.
At the summit, the United States and China, the two superpowers locked in a tech rivalry for years, joined forces to seek global agreement on complex AI challenges, such as safe development and regulation.
Tech billionaire Elon Musk hailed Rishi Sunak’s decision to invite China to the summit at Bletchley Park. “Thank you for inviting them,” Musk said. “Having them here is essential. If they’re not participants, it’s pointless.”
Surprisingly, the United States expressed support for the United Kingdom’s decision to invite China to its AI summit. “We always want to make sure there’s good dialogue going on with every part of the world, so I thought it was a terrific idea,” Arati Prabhakar, the director of the White House’s Office of Science and Technology Policy, told The Washington Post.
China’s inclusion sparked backlash, with former British Prime Minister Liz Truss demanding the withdrawal of the invitation, saying she was “deeply disturbed” by the move.
While Bletchley Park hosted the inaugural AI safety summit, several more such initiatives are planned, with South Korea preparing to host a virtual mini-summit on AI within the next six months and France set to organize the next in-person AI summit next year.