
DeepMind Tests the Limits of Large AI Language Systems With 280-Billion-Parameter Model

Thursday December 9, 2021. 01:50 AM , from Slashdot
An anonymous reader quotes a report from The Verge: Language generation is the hottest thing in AI right now, with a class of systems known as 'large language models' (or LLMs) being used for everything from improving Google's search engine to creating text-based fantasy games. But these programs also have serious problems, including regurgitating sexist and racist language and failing tests of logical reasoning. One big question is: can these weaknesses be improved by simply adding more data and computing power, or are we reaching the limits of this technological paradigm? This is one of the topics that Alphabet's AI lab DeepMind is tackling in a trio of research papers published today. The company's conclusion is that scaling up these systems further should deliver plenty of improvements. 'One key finding of the paper is that the progress and capabilities of large language models is still increasing. This is not an area that has plateaued,' DeepMind research scientist Jack Rae told reporters in a briefing call.

DeepMind, which regularly feeds its work into Google products, has probed the capabilities of these LLMs by building a language model with 280 billion parameters named Gopher. Parameters are a quick measure of a language model's size and complexity, meaning that Gopher is larger than OpenAI's GPT-3 (175 billion parameters) but not as big as some more experimental systems, like Microsoft and Nvidia's Megatron model (530 billion parameters). It's generally true in the AI world that bigger is better, with larger models usually offering higher performance. DeepMind's research confirms this trend and suggests that scaling up LLMs does offer improved performance on the most common benchmarks testing things like sentiment analysis and summarization. However, researchers also cautioned that some issues inherent to language models will need more than just data and compute to fix. 'I think right now it really looks like the model can fail in a variety of ways,' said Rae. 'Some subset of those ways are because the model just doesn't have sufficiently good comprehension of what it's reading, and I feel like, for that class of problems, we are just going to see improved performance with more data and scale.'
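To put those parameter counts in perspective, here is a quick back-of-envelope sketch in Python. The parameter figures come from the article; the 2-bytes-per-parameter assumption (fp16 weights, ignoring optimizer state and activations) is ours, not something the article states.

```python
# Rough weight-storage arithmetic for the models named in the article.
# Assumption (not from the article): weights stored in fp16, i.e. 2 bytes
# per parameter, with optimizer state and activations ignored.

MODELS = {
    "Gopher (DeepMind)": 280e9,
    "GPT-3 (OpenAI)": 175e9,
    "Megatron (Microsoft/Nvidia)": 530e9,
}

BYTES_PER_PARAM = 2  # fp16

def weight_memory_gb(params: float) -> float:
    """Approximate weight storage in gigabytes (1 GB = 1e9 bytes)."""
    return params * BYTES_PER_PARAM / 1e9

for name, params in MODELS.items():
    print(f"{name}: {params / 1e9:.0f}B params "
          f"~= {weight_memory_gb(params):,.0f} GB of fp16 weights")
```

Even under this optimistic assumption, Gopher's weights alone occupy roughly 560 GB, which is why models at this scale are sharded across many accelerators rather than run on a single device.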

But, he added, there are 'other categories of problems, like the model perpetuating stereotypical biases or the model being coaxed into giving mistruths, that no one at DeepMind thinks scale will be the solution [to].' In these cases, language models will need 'additional training routines' like feedback from human users, he noted.

Read more of this story at Slashdot.
https://slashdot.org/story/21/12/08/2147214/deepmind-tests-the-limits-of-large-ai-language-systems-w...