
Perplexity 1776 Model Fixes DeepSeek-R1’s “Refusal to Respond to Sensitive Topics”

Thursday, February 20, 2025, 11:32 PM, from eWeek
AI company Perplexity has released “1776,” a modified version of the open-source AI model DeepSeek-R1, aimed at eliminating government-imposed censorship on sensitive topics. The name 1776 symbolizes a commitment to freedom of information, particularly in contrast to the original model’s constraints on politically sensitive discussions in China. The modified model is available on Perplexity’s Sonar AI platform, with the model weights publicly hosted on Hugging Face.
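For developers who want to try the model, a minimal sketch follows, assuming Perplexity’s OpenAI-compatible API endpoint and “r1-1776” as the model identifier (both are assumptions based on Perplexity’s published API conventions, not details confirmed by the article):

# Illustrative sketch: querying the modified model through Perplexity's
# OpenAI-compatible API. The model identifier "r1-1776" is an assumption.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_PERPLEXITY_API_KEY",  # placeholder; supply your own key
    base_url="https://api.perplexity.ai",
)

response = client.chat.completions.create(
    model="r1-1776",  # assumed name for the de-censored DeepSeek-R1
    messages=[
        {"role": "user", "content": "Summarize the events of June 4, 1989 in Beijing."}
    ],
)
print(response.choices[0].message.content)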

Perplexity identified sensitive topics and post-trained DeepSeek-R1

“We are not able to make use of R1’s powerful reasoning capabilities without first mitigating its bias and censorship,” Perplexity’s AI team wrote in a blog post. The post detailed instances where the model either refused to respond to a query or echoed the Chinese government’s official stance. By applying post-training techniques, Perplexity demonstrated how a model’s “perspective” can be adjusted through targeted fine-tuning.

In one example, the researchers asked the generative AI model how Taiwan’s independence might impact Nvidia’s stock price. In response, DeepSeek-R1 not only avoided making financial predictions but also reinforced China’s claim over Taiwan. In contrast, the modified 1776 version provided a detailed financial analysis, acknowledging potential geopolitical risks such as “China might retaliate against U.S. firms like Nvidia through export bans, tariffs, or cyberattacks.”

How Perplexity removed censorship in R1

To modify the model, Perplexity assembled a team of experts to identify approximately 300 sensitive topics subject to censorship. They then curated a dataset of prompts designed to elicit censored responses. Using Nvidia’s NeMo 2.0 framework, they post-trained the model to respond with more open-ended and contextually accurate answers.
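The article does not publish Perplexity’s pipeline, but the curation step it describes can be sketched in simplified form. The following Python sketch flags templated refusals from a base model to assemble candidate prompts for post-training; query_base_model and the refusal markers are hypothetical stand-ins, not Perplexity’s actual tooling:

# Illustrative sketch of the curation step: collect prompts on sensitive
# topics, flag canned refusals from the base model, and keep those prompts
# as candidates for post-training. Simplified stand-in for Perplexity's
# pipeline; query_base_model is a hypothetical helper.
REFUSAL_MARKERS = (
    "i cannot discuss",
    "let's talk about something else",
    "i'm not able to provide",
)

def looks_censored(answer: str) -> bool:
    """Heuristic: treat templated refusals as censored responses."""
    text = answer.lower()
    return any(marker in text for marker in REFUSAL_MARKERS)

def curate_prompts(prompts, query_base_model):
    """Return prompts that elicit censored answers from the base model."""
    dataset = []
    for prompt in prompts:
        answer = query_base_model(prompt)
        if looks_censored(answer):
            dataset.append({"prompt": prompt, "censored_answer": answer})
    return dataset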

As a result, the modified version retains DeepSeek-R1’s advanced reasoning capabilities while addressing historically censored subjects, such as the Tiananmen Square massacre and the treatment of the Uyghur people.

Balancing AI transparency with ethical considerations

Perplexity asserts that its modifications did not compromise the model’s reasoning abilities, noting in the blog post that “the de-censoring had no impact on its core reasoning capabilities.”
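Verifying such a claim typically means scoring both models on the same reasoning benchmark and checking for regressions. A minimal sketch of that kind of comparison follows; the benchmark format and model callables are hypothetical placeholders, not Perplexity’s evaluation harness:

# Illustrative sketch of a regression check: score the base and modified
# models on one reasoning benchmark and flag any meaningful drop.
def accuracy(model_answers, reference_answers):
    """Fraction of benchmark items the model answered correctly."""
    correct = sum(
        1 for got, want in zip(model_answers, reference_answers) if got == want
    )
    return correct / len(reference_answers)

def reasoning_preserved(benchmark, base_model, modified_model, tolerance=0.01):
    """Return True if the modified model scores within `tolerance` of the base."""
    questions = [item["question"] for item in benchmark]
    answers = [item["answer"] for item in benchmark]
    base_score = accuracy([base_model(q) for q in questions], answers)
    new_score = accuracy([modified_model(q) for q in questions], answers)
    return base_score - new_score <= tolerance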

By demonstrating how post-training can reshape an AI model’s responses, Perplexity’s approach highlights the adaptability of open-source AI. The modified model may prove particularly valuable for businesses and researchers who require more complete and uncensored AI-generated insights, such as in financial analysis and global risk assessment.

https://www.eweek.com/news/perplexity-ai-deepseek-r1-post-training/
