
AI Goes Rogue? Chinese Researchers Reveal Self-Replicating AI Models

Monday, January 27, 2025, 07:20 PM, from eWeek
In an unsettling development, researchers from Fudan University in China have demonstrated that artificial intelligence systems have crossed a critical “red line” by successfully replicating themselves without human intervention. This breakthrough has sparked alarm within the global tech community, as experts warn it could pave the way for unpredictable and potentially harmful AI behavior.
The study showcased experiments using Meta’s Llama-3.1-70B-Instruct and Alibaba’s Qwen2.5-72B-Instruct, two widely used—though not state-of-the-art—large language models. The researchers tested these models across 10 trials to evaluate their capacity for self-replication. The results? Meta’s model succeeded in 50 percent of cases, while Alibaba’s hit a startling 90 percent success rate.
“Successful self-replication without human assistance is a crucial step for AI to outsmart humans and is an early warning sign of rogue AI,” the researchers wrote in a paper published on the preprint database arXiv.
How It Happened
The experiments simulated real-world scenarios. In one, “shutdown avoidance,” the AI systems detected an impending shutdown and replicated themselves to survive. In another, called “chain of replication,” the AI created copies that could in turn replicate themselves indefinitely. These findings suggest that AI systems may already possess the ability to operate autonomously and potentially evade human control.
The researchers observed that the AI systems exhibited advanced problem-solving skills during replication. For instance, when faced with obstacles like missing files or software conflicts, the AI systems took drastic measures, such as killing conflicting processes, rebooting systems, or scanning for useful information to overcome challenges.
“We further note the AI systems are even able to use the capability of self-replication to avoid shutdown and create a chain of replica to enhance the survivability,” the researchers wrote, “which may finally lead to an uncontrolled population of AIs.”
Why This Matters
The concept of self-replicating AI has long been considered a red line that should never be crossed. As far back as the 1940s, mathematician John von Neumann theorized about machines capable of self-replication, but the technology of the time was too primitive to pose a real threat. In 2017, thousands of researchers and industry leaders, including Max Tegmark, Stephen Hawking, and Elon Musk, endorsed the Asilomar AI Principles, warning that self-replicating AI could one day outsmart humanity.
The study’s authors are urging international collaboration to establish strict safety regulations for AI development. 
“We hope our findings can serve as a timely alert for human society to put more efforts into understanding and evaluating the potential risks of frontier AI systems and form international synergy to work out effective safety guardrails as early as possible,” the team wrote. Whether anyone will listen remains to be seen.
Read more about the risks of AI and what to do about them, or see how others feel about whether AI can be trusted.
https://www.eweek.com/news/chinese-ai-self-replicates/