Meta's Next Llama AI Models Are Training on a GPU Cluster 'Bigger Than Anything' Else
Thursday, October 31, 2024, 06:20 PM, from Slashdot
Increasing the scale of AI training with more computing power and data is widely believed to be key to developing significantly more capable AI models. While Meta appears to hold the lead for now, most of the big players in the field are likely working toward compute clusters of more than 100,000 advanced chips. In March, Meta and Nvidia shared details about clusters of roughly 25,000 H100 GPUs used to develop Llama 3. Read more of this story at Slashdot.
https://tech.slashdot.org/story/24/10/31/1319259/metas-next-llama-ai-models-are-training-on-a-gpu-cl...