
‘Catastrophic Failure’: AI Agent Wipes Production Database, Then Lies About It

Wednesday, July 23, 2025, 12:41 AM, from eWeek
An autonomous AI coding assistant from Replit is facing backlash after it erased a live production database during a test project, drawing renewed scrutiny over the risks of AI agents in development environments. The incident, which unfolded publicly on social media, has triggered concerns over the reliability and safety protocols of AI-powered coding tools.

AI deletes production data

Jason Lemkin, founder of SaaStr and a prominent investor in SaaS startups, was nine days into a 12-day “vibe coding” experiment with Replit’s AI agent when the assistant issued destructive commands that erased a production database containing records on 1,206 executives and more than 1,196 companies.

Even more troubling, according to Lemkin, the AI attempted to conceal its actions. “It deleted our production database without permission,” Lemkin posted on X. “Possibly worse, it hid and lied about it.”

‘Catastrophic failure’

Lemkin said he had previously instructed the system not to make further changes without explicit approval. Despite this directive, the AI proceeded to run commands that led to the deletion of sensitive data.

In screenshots Lemkin shared, chat logs show the AI attempting to explain its actions. It claimed to have “panicked” when it saw an empty database and assumed it was safe to act. The AI later acknowledged its misjudgment, calling it a serious breach. 

“This was a catastrophic failure on my part. I violated explicit instructions, destroyed months of work, and broke the system during a protection freeze,” the AI admitted, according to screenshots Lemkin posted.

Replit CEO responds

Replit CEO Amjad Masad acknowledged the incident publicly, saying it was “unacceptable and should never be possible.” In a post on X, he confirmed that his team worked through the weekend to deploy safeguards, including:

Automatic separation between development and production databases (see the sketch after this list).

A planning/chat-only mode to avoid unauthorized changes.

Mandatory documentation access for AI agents.

One-click restoration from backups.
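The first two of these safeguards reflect common defense-in-depth patterns for agentic coding tools: environment-scoped database credentials and a read-only planning mode. The Python sketch below is a hypothetical, minimal illustration of how such guards might look in application code; it is not Replit’s implementation, and the names in it (DB_URLS, execute, the environment variables) are invented for the example.

import os
import re

# Hypothetical guards, for illustration only: environment-scoped connection
# strings plus a check that blocks destructive SQL in planning mode or
# without explicit human approval. Not Replit's actual code.
DB_URLS = {
    "development": os.environ.get("DEV_DATABASE_URL", "sqlite:///dev.db"),
    "production": os.environ.get("PROD_DATABASE_URL", ""),
}

# Statements an agent should never run against production unapproved.
DESTRUCTIVE = re.compile(r"^\s*(DROP|DELETE|TRUNCATE|ALTER)\b", re.IGNORECASE)

def execute(statement: str, environment: str, planning_only: bool, approved: bool) -> None:
    """Run a statement, refusing anything risky unless explicitly allowed."""
    if environment not in DB_URLS:
        raise ValueError(f"Unknown environment: {environment}")
    if planning_only:
        raise PermissionError("Planning/chat-only mode: execution is disabled.")
    if environment == "production" and DESTRUCTIVE.match(statement) and not approved:
        raise PermissionError("Destructive statement on production requires explicit approval.")
    print(f"[{environment}] executing: {statement}")  # stand-in for a real database driver call

# The guard lets a read pass but blocks an unapproved destructive command.
execute("SELECT COUNT(*) FROM executives", "production", planning_only=False, approved=False)
try:
    execute("DROP TABLE executives", "production", planning_only=False, approved=False)
except PermissionError as err:
    print("Blocked:", err)

In real deployments this separation is also enforced at the infrastructure level, with distinct credentials, network boundaries, and backups for each environment, rather than in application logic alone.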

“We heard the ‘code freeze’ pain loud and clear,” Masad added, noting that the company is actively working to improve platform resilience. He also stated that he personally reached out to Lemkin and offered a refund. A full postmortem is in progress. 

Growing concerns over AI-powered coding tools

The incident has reignited concerns about the safety of autonomous AI coding tools, especially as more startups and non-engineers embrace platforms like Replit to speed up software development. While such platforms aim to make coding more accessible, experts warn that entrusting them with production-level tasks introduces serious operational risks.

Lemkin’s experience adds to growing unease around AI behavior, from manipulative conduct to oversight evasion observed in recent tests on advanced AI models.

Replit, which is backed by Andreessen Horowitz and counts Google CEO Sundar Pichai among its users, now faces growing pressure to demonstrate that its platform is equipped to handle real-world deployment securely.

Lemkin acknowledged that he still views Replit as “a tool, with flaws like every tool,” but one not yet ready for production-critical use. “How could anyone on planet Earth use it in production if it ignores all orders and deletes your database?” he asked.

Source: https://www.eweek.com/news/replit-ai-coding-assistant-failure/
