MacMusic  |  PcMusic  |  440 Software  |  440 Forums  |  440TV  |  Zicos
code
Search

DeepSeek AI’s code bias sparks alarm over politicized AI outputs and enterprise risk

Thursday September 18, 2025. 11:32 AM , from ComputerWorld
A new study has shown that DeepSeek AI may generate deliberately flawed code when prompts involve groups or regions deemed politically sensitive by Beijing, raising fresh concerns for enterprises about the security and reliability of Chinese AI systems.

Researchers at CrowdStrike tested DeepSeek by submitting a series of nearly identical programming requests, varying only the intended user or region, according to a report from the Washington Post.

While general requests for code to run industrial control systems already produced a notable share of flawed results, the error rate increased sharply when the projects were described as serving groups or regions deemed sensitive by Beijing, including Tibet, Taiwan, and Falun Gong.

This is not the first time DeepSeek has drawn scrutiny. Earlier this year, a senior US State Department official warned that the company has provided support to China’s military and intelligence operations and is likely to continue doing so.

Security risks from bias

In the report, CrowdStrike stated that the behavior could stem from the AI engine following Chinese government directives, weaker training data in certain regions, or the model itself generating flawed code when instructed to associate a region with rebels.

Industry experts warn that these patterns carry significant implications for enterprises.

“If AI models generate flawed or biased code influenced by political directives, enterprises face inherent risks from vulnerabilities in sensitive systems, particularly where neutrality is critical, potentially leading to operational, reputational, and regulatory consequences,” said Prabhu Ram, VP of industry research at Cybermedia Research.

Enterprises operating under national security or regulatory constraints must be especially cautious, according to Neil Shah, VP for research at Counterpoint Research.

“The use of foreign AI models in sensitive workflows should be subject to national-level AI certification programs and export control compliance as a first line of defense,” Shah said. “Ultimately, trust in AI systems must be earned through transparency, accountability, and continuous oversight irrespective of the model’s popularity or open-source status.”

Systemic gaps in oversight

Analysts point out that this is not just a DeepSeek issue but a systemic risk across the AI foundational model ecosystem, citing the lack of cross-border standardization and governance.

“As the number of foundation models proliferates and enterprises increasingly build applications or code on top of them, it becomes imperative for CIOs and IT leaders to establish and follow a robust multi-level due diligence framework,” Shah said. “That framework should ensure training data transparency, strong data privacy, security governance policies, and at the very least, rigorous checks for geopolitical biases, censorship influence, and potential IP violations.”

Experts recommend that CIOs review the transparency of training data and algorithms, account for geopolitical context, and use independent third-party assessments and controlled pilot testing before moving to large-scale integration. “There is also a growing need for certification and regulatory frameworks to guarantee AI neutrality, safety, and ethical compliance,” Ram said. “National and international standards could help enterprises trust AI outputs while mitigating risks from biased or politically influenced systems.”
https://www.computerworld.com/article/4059276/deepseek-ais-code-bias-sparks-alarm-over-politicized-a...

Related News

News copyright owned by their original publishers | Copyright © 2004 - 2025 Zicos / 440Network
Current Date
Sep, Thu 18 - 16:25 CEST