How Sam Altman, the Creator of ChatGPT, Urged the Senate to Regulate AI

Updated: May 19

Sam Altman testifying in front of US Senate.

Sam Altman is not your typical tech executive. He is the CEO of OpenAI, the San Francisco-based startup behind ChatGPT, a free chatbot that answers questions with convincingly human responses, powered by an artificial intelligence (AI) system capable of generating coherent and engaging text on almost any topic.

But unlike many of his peers in Silicon Valley, Altman is not shy about the dangers and challenges of AI. He recently testified before a Senate subcommittee and urged lawmakers to regulate the emerging technology.

What did Altman say at the Senate hearing?

Altman appeared before the Senate Judiciary Committee’s subcommittee on privacy, technology, and the law on May 16, 2023. It was his first testimony before Congress, and he was joined by other experts and witnesses from academia, industry, and civil society.

Altman told the senators that AI has enormous potential to benefit humanity, but also poses significant risks if not handled responsibly. He said that AI can go “quite wrong” and cause harm to individuals, society, and democracy.

He also said that one of his greatest fears is the disruption to the labor market, and called on Congress to help address the impact of AI on jobs. He suggested that a universal basic income could be a possible solution to ensure that everyone has a decent standard of living in an AI-driven economy.

Altman also advocated for more transparency and accountability in AI development and deployment. He said that OpenAI is committed to creating and sharing AI for the common good, and that it has taken steps to ensure that its ChatGPT system is not misused or abused.

He said that OpenAI has implemented safeguards such as rate limits, content filters and human oversight to prevent ChatGPT from generating harmful or malicious texts.

He also said that OpenAI supports the creation of a federal agency or commission to oversee AI research and innovation. He said that such an entity could help establish ethical standards, best practices, and legal frameworks for AI.

He also said that such an entity could foster collaboration and coordination among various stakeholders, including government, industry, academia and civil society.

Why does Altman’s testimony matter?

Altman’s testimony was significant for several reasons. First, it showed that he is a leading figure in AI who has a deep understanding of the technology and its implications. He was able to explain complex concepts and issues in simple and accessible terms, and he demonstrated a willingness to engage with lawmakers and policymakers.

Second, it showed that he is not afraid to speak out about the need for regulation and oversight of AI. He did not shy away from acknowledging the potential harms and challenges of AI, and he did not try to downplay or dismiss them. He also did not oppose or resist regulation but rather welcomed it as a necessary and beneficial step.

Third, it showed that he is not alone in his views and concerns. He was joined at the witness table by other experts who echoed many of his points, including Christina Montgomery, IBM’s Chief Privacy and Trust Officer, and Dr. Gary Marcus, professor emeritus at New York University. The witnesses agreed that AI poses serious risks to privacy, civil rights, democracy, and human dignity, and that regulation is needed to address them.

What are the next steps for AI regulation?

Altman’s testimony was well received by the senators who attended the hearing. They praised him for his candor and insight, and they expressed interest in working with him and other stakeholders to develop legislation and policies for AI regulation.

However, it is unclear how soon or how effectively such regulation will be enacted. There are many challenges and obstacles to overcome, such as political polarization, bureaucratic inertia, industry lobbying, and limited public awareness.

Moreover, there are many questions and uncertainties about how to regulate AI in a way that balances innovation and safety, promotes fairness and accountability, respects human rights and values, and fosters global cooperation and competition.

These are not easy problems to solve, but they are urgent ones to address. As Altman said at the hearing: “We have an opportunity now to shape this technology for good.”

Learn More:

ChatGPT creator Sam Altman says AI can go quite wrong, urges US lawmakers to regulate the technology

‘AI would be good at tasks but not…’ OpenAI CEO Sam Altman on AI’s impact on jobs at Senate hearing | Mint

WATCH: OpenAI CEO Sam Altman testifies before Senate Judiciary Committee | PBS NewsHour