Google, Microsoft, OpenAI and Anthropic announce $10 million for new AI safety fund 

Google, Microsoft, OpenAI and Anthropic have announced a $10 million AI Safety Fund to support the “safe and responsible development” of the most powerful AI models.

The tech giants also named Chris Meserole as the first Executive Director of the Frontier Model Forum, an industry-led body founded by the four leading AI providers to promote AI safety research.

What is the Frontier Model Forum?

The Frontier Model Forum was recently founded by Google, Microsoft, OpenAI and Anthropic to advance AI safety research and promote the responsible development of frontier models. It also aims to minimize potential risks, identify safety best practices for frontier models, share knowledge with policymakers, academics, civil society, and others to advance responsible AI development, and support efforts to leverage AI to address society’s biggest challenges.

The forum’s new executive director, Chris Meserole, has a background in interpretable machine learning and computational social science, with extensive experience in the governance and safety of emerging technologies and their future applications. He most recently served as director of the Artificial Intelligence and Emerging Technology Initiative at the Brookings Institution.

“The most powerful AI models hold enormous promise for society, but to realize their potential we need to better understand how to safely develop and evaluate them. I’m excited to take on that challenge with the Frontier Model Forum,” Meserole said.

In his new role, Meserole will be responsible for assisting the forum in advancing AI safety research, identifying safety best practices, sharing knowledge with various stakeholders, and supporting AI applications to address societal challenges.

AI Safety Fund

With over $10 million in initial funding, the AI Safety Fund will advance research into the development of tools that enable society to effectively test and evaluate the most capable AI models, according to the joint statement.

The fund will support independent researchers from around the world affiliated with academic institutions, research institutions, and startups. Its primary focus will be supporting the development of new model evaluations and red-teaming techniques to help test for potentially dangerous capabilities of frontier systems.

“We believe that increased funding in this area will help raise safety and security standards and provide insights into the mitigations and controls industry, governments, and civil society need to respond to the challenges presented by AI systems,” the four companies said.

The fund has been launched by Google, Microsoft, OpenAI, and Anthropic in collaboration with philanthropic partners, including the Patrick J. McGovern Foundation, the David and Lucile Packard Foundation, Eric Schmidt, and Jaan Tallinn.