OpenAI and Others Seek New Path To Smarter AI as Current Methods Hit Limitations

Monday, November 11, 2024, 03:03 PM, from Slashdot
AI companies like OpenAI are seeking to overcome unexpected delays and challenges in the pursuit of ever-larger language models by developing training techniques that use more human-like ways for algorithms to 'think.' From a report: A dozen AI scientists, researchers and investors told Reuters they believe that these techniques, which are behind OpenAI's recently released o1 model, could reshape the AI arms race, and have implications for the types of resources for which AI companies have an insatiable demand, from energy to types of chips.

After the release of the viral ChatGPT chatbot two years ago, technology companies, whose valuations have benefited greatly from the AI boom, have publicly maintained that 'scaling up' current models by adding more data and computing power will consistently lead to improved AI models. But now, some of the most prominent AI scientists are speaking out about the limitations of this 'bigger is better' philosophy. Ilya Sutskever, co-founder of the AI labs Safe Superintelligence (SSI) and OpenAI, told Reuters recently that results from scaling up pre-training -- the phase of training an AI model that uses a vast amount of unlabeled data to understand language patterns and structures -- have plateaued. Sutskever is widely credited as an early advocate of achieving massive leaps in generative AI through the use of more data and computing power in pre-training, an approach that eventually produced ChatGPT. He left OpenAI earlier this year to found SSI.

The Information reported over the weekend that Orion, OpenAI's newest model, isn't drastically better than its predecessor, nor does it outperform it at many tasks. The Orion situation could test a core assumption of the AI field, known as scaling laws: that LLMs will continue to improve at the same pace as long as they have more data to learn from and additional computing power to facilitate the training process.
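
The article states scaling laws only informally. A common published formalization -- a sketch based on Hoffmann et al.'s 2022 paper 'Training Compute-Optimal Large Language Models,' which this report does not itself cite -- models pre-training loss as a power law in parameter count N and training tokens D:

    L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}

Here E is an irreducible loss term and A, B, \alpha, \beta are empirically fitted constants (roughly \alpha ≈ 0.34 and \beta ≈ 0.28 in that paper's fits). The 'plateau' claim is, in effect, that the marginal loss reduction from further increasing N and D no longer justifies the added compute.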

In response to the recent challenge to training-based scaling laws posed by slowing GPT improvements, the industry appears to be shifting its effort to improving models after their initial training, potentially yielding a different type of scaling law.
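
The report does not name the post-training techniques involved, but one widely published way to buy accuracy with inference-time compute rather than with more pre-training is self-consistency sampling (Wang et al., 2022): draw several answers and keep the majority. Below is a minimal sketch, assuming a hypothetical generate(prompt) callable that returns one sampled answer string per call; it illustrates the 'scale at inference time' idea and is not a description of how o1 or Orion actually work.

    from collections import Counter
    from typing import Callable

    def self_consistent_answer(
        generate: Callable[[str], str],  # hypothetical sampler, not a real API
        prompt: str,
        n_samples: int = 8,
    ) -> str:
        """Sample the model n_samples times and majority-vote the answers.

        Spending more samples is pure inference-time compute: accuracy can
        improve without any change to the pre-trained model itself.
        """
        answers = [generate(prompt) for _ in range(n_samples)]
        # Keep the most frequently sampled answer.
        return Counter(answers).most_common(1)[0][0]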

Some CEOs, including Meta Platforms' Mark Zuckerberg, have said that in a worst-case scenario, there would still be a lot of room to build consumer and enterprise products on top of the current technology even if it doesn't improve.

Read more of this story at Slashdot.
https://tech.slashdot.org/story/24/11/11/144206/openai-and-others-seek-new-path-to-smarter-ai-as-cur...
