
Mistral Confirms New Open Source AI Model Nearing GPT-4 Performance

Thursday, February 1, 2024, 01:02 AM, from Slashdot
An anonymous reader quotes a report from VentureBeat: The past few days have been a wild ride for the growing open source AI community -- even by its fast-moving and freewheeling standards. Here's the quick chronology: on or about January 28, a user with the handle 'Miqu Dev' posted a set of files on HuggingFace, the leading open source AI model and code sharing platform, that together comprised a seemingly new open source large language model (LLM) labeled 'miqu-1-70b.' The HuggingFace entry, which is still up at the time of this article's posting, noted that the new LLM's 'Prompt format' -- how users interact with it -- was the same as that of Mistral, the well-funded open source Parisian AI company behind Mixtral 8x7b, viewed by many as the top-performing open source LLM presently available, a fine-tuned and retrained version of Meta's Llama 2.

The same day, an anonymous user (possibly 'Miqu Dev') posted a link to the miqu-1-70b files on 4chan, the notoriously longstanding haven of online memes and toxicity, where users began to notice it. Some took to X, Elon Musk's social network formerly known as Twitter, to share the discovery of the model and what appeared to be its exceptionally high performance at common LLM tasks (measured by tests known as benchmarks), approaching the previous leader, OpenAI's GPT-4, on EQ-Bench. Machine learning (ML) researchers took notice on LinkedIn as well. 'Does 'miqu' stand for MIstral QUantized? We don't know for sure, but this quickly became one of, if not the best, open-source LLM,' wrote Maxime Labonne, an ML scientist at JPMorgan Chase, one of the world's largest banking and financial companies. 'Thanks to @152334H, we also now have a good unquantized version of miqu here: https://lnkd.in/g8XzhGSM.' Quantization in ML refers to a technique that makes it possible to run certain AI models on less powerful computers and chips by replacing the long, high-precision numbers that make up a model's parameters with shorter, lower-precision ones. Users speculated 'Miqu' might be a new Mistral model being covertly 'leaked' by the company itself -- especially since Mistral is known for dropping new models and updates without fanfare through esoteric and technical means -- or perhaps an employee or customer gone rogue.
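To make the idea concrete, here is a minimal illustrative sketch of symmetric int8 quantization in NumPy -- a generic textbook technique, not Mistral's actual method or the scheme used for the leaked model. A high-precision float32 weight tensor is mapped onto small integers via a single scale factor, shrinking storage fourfold at a modest cost in precision:

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor int8 quantization: map float32 weights
    onto the integer range [-127, 127] with one scale factor."""
    scale = float(np.abs(weights).max()) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximation of the original float32 weights."""
    return q.astype(np.float32) * scale

# Example: a toy 4x4 "weight matrix"
w = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# int8 storage is 4x smaller than float32, and the rounding error
# per weight is bounded by half the quantization step (scale / 2).
assert np.max(np.abs(w - w_hat)) <= scale / 2 + 1e-6
```

Real inference stacks use more elaborate schemes (per-channel scales, 4-bit formats, outlier handling), but the trade-off is the same: smaller numbers, smaller memory footprint, slightly less fidelity.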

Well, today it appears we finally have confirmation of the latter of those possibilities: Mistral co-founder and CEO Arthur Mensch took to X to clarify: 'An over-enthusiastic employee of one of our early access customers leaked a quantized (and watermarked) version of an old model we trained and distributed quite openly... To quickly start working with a few selected customers, we retrained this model from Llama 2 the minute we got access to our entire cluster -- the pretraining finished on the day of Mistral 7B release. We've made good progress since -- stay tuned!' Hilariously, Mensch also appears to have taken to the illicit HuggingFace post not to demand a takedown, but to leave a comment that the poster 'might consider attribution.' Still, with Mensch's note to 'stay tuned!' it appears that not only is Mistral training a version of this so-called 'Miqu' model that approaches GPT-4 level performance, but that it may, in fact, match or exceed it, if his comments are interpreted generously.

Read more of this story at Slashdot.
https://news.slashdot.org/story/24/01/31/2145205/mistral-confirms-new-open-source-ai-model-nearing-g...

