Microsoft launches its own LLMs — here’s what that really means
Wednesday, September 10, 2025, 12:00 PM, from ComputerWorld
Microsoft recently released its own large language models — the technology that underlies all generative AI (genAI), from OpenAI’s ChatGPT to Google’s Gemini, Anthropic’s Claude and others. Until now, Microsoft has relied on ChatGPT to be Copilot’s brains. But with Microsoft and OpenAI fighting over what Microsoft should get from its $13 billion investment in OpenAI, there’s been talk about whether the company would eventually develop AI on its own.
Microsoft has released two models and, for the moment, neither powers Copilot. But it’s not unreasonable to see this as an early step toward moving some or most of the company’s genAI work in-house. Does their release mean Microsoft will completely deep-six its relationship with OpenAI? Was Microsoft’s decision to build its own AI models merely a negotiating tactic to get a better deal from OpenAI? Or could it meld its own work with OpenAI’s from now on? There aren’t yet definitive answers to those questions, but there are plenty of hints. Here’s what the new models mean for the Microsoft-OpenAI relationship, and for the future of AI at Redmond.
The founding of Microsoft’s AI division
It’s been clear for a year and a half that Microsoft would eventually supplement or even replace ChatGPT as Copilot’s brains by building its own in-house AI division. That was made clear in March 2024, when the company hired Mustafa Suleyman, co-founder of Google’s DeepMind and CEO of the AI start-up Inflection, to become executive vice president and CEO of a new division called Microsoft AI. (Suleyman reports directly to CEO Satya Nadella.) In a blog post about the hiring, Nadella called Suleyman “a visionary, product maker and builder of pioneering teams that go after bold missions.” In addition to that hire, the company brought on board almost the entire Inflection staff, saying they were responsible for many of the most important AI breakthroughs of the last five years. It was Nadella’s way of broadcasting to the world that Microsoft would be building its own AI models, not just taking whatever OpenAI gave it.
A look at the new LLMs
The two recently released LLMs are the first from Microsoft AI and offer hints about Microsoft’s plans for AI development and its relationship with OpenAI. One of the models, MAI-Voice-1, will eventually become Copilot’s voice interface. As Microsoft noted, it is “designed to provide powerful capabilities to consumers seeking to benefit from models that specialize in following instructions and providing helpful responses to everyday queries.” Microsoft likely developed the model at least in part because ChatGPT’s voice mode has been notoriously problematic. Many users have complained about flaky connections and an inability by ChatGPT to understand what it’s been asked or to respond properly. One person who goes by the moniker “Deepimpact” on OpenAI’s Developer Community forums puts it this way: “ChatGPT’s voice chat feels indistinguishable from a trivial conversation overheard at a bar counter, with someone of limited education and poor expressive ability.” Plenty of other users on the forum agreed. That’s why you can expect MAI-Voice-1 and later versions to become the voice interface for Copilot — and probably all Microsoft AI products.
The other model, MAI-1-preview, is far more interesting and probably represents Copilot’s future. Microsoft is being cagey about what this particular model can do today or might do tomorrow, saying little more than that it is an initial “foundation model trained end-to-end and offers a glimpse of future offerings inside Copilot.” And just what are those future offerings? Microsoft offers no details beyond the kind of hype you’ve come to know and hate from tech companies: “We are actively spinning the flywheel to deliver improved models. We’ll have much more to share in the coming months.
Stay tuned!” That said, it’s abundantly clear that MAI-1 will initially be used to augment ChatGPT as Copilot’s brains — what Microsoft calls “future offerings inside Copilot.” For now, that’s likely to mean new Copilot capabilities, although Microsoft isn’t saying what they might be.
The future of ChatGPT inside Microsoft
In the long run, don’t be surprised if Microsoft uses its own models rather than ChatGPT to do the heavy lifting in Copilot, instead of just augmenting it. They represent more than just a negotiating tactic — they’re a harbinger of Microsoft’s future. After all, Nadella called Suleyman “a visionary, product maker and builder of pioneering teams that go after bold missions.” You don’t need a visionary if you’re only interested in negotiating tactics, and bolstering ChatGPT isn’t exactly a “bold mission.” Microsoft didn’t pay top dollar for some of the most advanced AI researchers in the world simply to augment someone else’s product. Amid the increasingly vicious fight over what Microsoft will get from its $13 billion investment in OpenAI, the arrival of these new models makes clear that the company is planning for a post-OpenAI future. Expect it to either completely replace ChatGPT with its own AI models or keep its frenemy confined to the margins.
https://www.computerworld.com/article/4053700/microsoft-launches-its-own-llms-heres-what-that-really...