OpenAI’s Sam Altman urges US Congress to regulate AI

OpenAI CEO Sam Altman speaks before a Senate Judiciary Subcommittee on Privacy, Technology and the Law hearing on artificial intelligence, on May 16, 2023, on Capitol Hill in Washington. (Image Credit: AP/Twitter)

Sam Altman, the CEO of OpenAI, called on United States lawmakers to regulate artificial intelligence (AI) as he testified before members of a US Senate subcommittee.

During the three-hour hearing, Altman said, “There should be limits on what a deployed model is capable of and then what it actually does,” referring to the underlying AI models that power products such as ChatGPT.

Pressed on his own worst fears about AI, he said: “I think if this technology goes wrong, it can go quite wrong. And we want to be vocal about that.” He added that OpenAI, the company behind the viral chatbot ChatGPT, wants “to work with the government to prevent that from happening.”

Altman, a 38-year-old Stanford University dropout and tech entrepreneur, proposed the formation of a US or global agency responsible for licensing the most advanced AI systems, with the power to revoke licenses in order to enforce compliance with safety standards.

Senator Richard Blumenthal, the Connecticut Democrat who chairs the Senate Judiciary Committee’s subcommittee on privacy, technology, and the law, heard testimony from Altman; Gary Marcus, a renowned AI researcher and professor emeritus at New York University; and IBM’s chief privacy and trust officer, Christina Montgomery. “We think that AI should be regulated at the point of risk, essentially,” Montgomery said, by establishing rules that govern the deployment of specific uses of AI rather than the technology itself.

The hearing was the first in a series intended to help lawmakers understand the potential benefits and harms of AI and eventually “write the rules” for it. Among the concerns lawmakers raised about generative AI were job disruption, election misinformation, copyright and licensing, impersonation of public and private figures, and dangerous or harmful content.

Three recommendations

Sam Altman laid out three key actions for regulating AI companies:

  • Establish a new government agency charged with licensing large AI models, with the power to revoke the licenses of companies whose models don’t comply with government standards.
  • Develop a set of safety standards for AI models, including evaluations of their dangerous capabilities such as whether they could “self-replicate” and “exfiltrate into the wild” — that is, to go rogue and start acting on their own.
  • Require independent audits of the models’ performance on various metrics.

Altman’s proposals did not include any requirement for AI models to provide transparency into their training data, as fellow expert witness Gary Marcus has called for.
