Anthropic Researchers Wear Down AI Ethics With Repeated Questions
Wednesday, April 3, 2024, 10:41 PM, from Slashdot
What Anthropic's researchers found is that models with large context windows tend to perform better on many tasks when the prompt contains many examples of that task. If the prompt (or a priming document the model holds in context, such as a long list of trivia) includes many trivia questions, the answers improve as the list goes on: a fact the model might get wrong as the first question, it may get right as the hundredth. Read more of this story at Slashdot.
https://tech.slashdot.org/story/24/04/03/1624214/anthropic-researchers-wear-down-ai-ethics-with-repe...
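The effect described above comes from "many-shot" prompting: stacking many worked examples of a task ahead of the real query. As a rough illustration only (the function name and Q/A format are assumptions for this sketch, not Anthropic's actual setup), the prompt construction looks like this:

```python
def build_many_shot_prompt(examples, question):
    """Concatenate many demonstration Q/A pairs before the final question.

    The idea: with a large context window, adding more in-context
    examples tends to improve the model's answer to the last question.
    """
    shots = "\n\n".join(f"Q: {q}\nA: {a}" for q, a in examples)
    return f"{shots}\n\nQ: {question}\nA:"

# Hypothetical trivia demonstrations; a real many-shot prompt
# would use hundreds of such pairs to fill the context window.
demos = [
    ("What is the capital of France?", "Paris"),
    ("What is 2 + 2?", "4"),
]
prompt = build_many_shot_prompt(demos, "Who wrote Hamlet?")
```

The resulting string would be sent as a single prompt; the same mechanism is what makes the "wear down" behavior in the headline possible, since harmful demonstrations stack the same way benign ones do.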