
Another OpenAI Researcher Quits, Calls AI “Terrifying” and a “Risky Gamble”

Wednesday, January 29, 2025, 08:09 PM, from eWeek
OpenAI has experienced a string of abrupt resignations among its leadership and key personnel since November 2023. From co-founders Ilya Sutskever and John Schulman to Jan Leike, the former head of the company’s “Superalignment” team, the exits keep piling up. But that’s not all: former safety researcher Steven Adler and Senior Advisor for AGI Preparedness Miles Brundage have also left. And those are just the notable names; many other employees have chosen to depart as well.
What’s making alarm bells ring louder is the reason behind these voluntary departures. One common thread tying them together is the fear that OpenAI is prioritizing profit over society’s safety. In the high-stakes world of AI, that’s a red flag that’s impossible to ignore.
Why Are Employees Leaving OpenAI?
AI safety and governance have been gaining attention, and for good reason. AI models are getting smarter, and companies are racing to develop artificial general intelligence (AGI). With AI’s accelerated development, AGI is on track to become a reality. However, many former OpenAI employees feel the company is more focused on rapid innovation and product launches than on adequately addressing the risks of AGI. Leike has been particularly vocal about this issue.
“We’re long overdue in getting serious about the implications of AGI,” he posted on X, criticizing the company for putting AI safety on the back burner.
Why Worry About AGI?
AGI is an AI model that can autonomously think, learn, reason, and adapt across various domains, performing any intellectual task a human can. Unlike today’s AI, which is designed for specific tasks, AGI could self-improve and potentially exceed human intelligence. This might sound like something out of a sci-fi movie, but scientists in China have already developed an AI model that can self-replicate without human intervention. In a test simulation, the AI sensed an impending shutdown and replicated itself to survive. At that point, it’s not just advanced technology; it’s survival instinct in action.
It’s this type of capability that’s keeping AI safety advocates awake at night. If AGI’s goals aren’t aligned with human values and well-being, the ramifications could be catastrophic. Imagine an AI optimizing for efficiency and deciding that humans are the bottleneck. Can we trust that AI has our best interests at heart?
AI Governance and Safety
AI safety is non-negotiable. Without strict governance and safety measures, AGI could become unpredictable, dangerous, and uncontrollable. The European Union, China, and the United States are working on AI laws and policies, and companies like IBM, Salesforce, and Google have pledged to build AI ethically. These are positive steps, but it’s clear we’re still playing catch-up.
The post Another OpenAI Researcher Quits, Calls AI “Terrifying” and a “Risky Gamble” appeared first on eWEEK.
https://www.eweek.com/news/open-ai-researcher-quits-calls-ai-terrifying/
