Why Apple’s Foundation Models Framework matters

Tuesday, June 17, 2025, 03:24 PM, from ComputerWorld
Look, it’s not just about Siri and ChatGPT; artificial intelligence will drive future tech experiences and should be seen as a utility. That’s the strategic imperative driving Apple’s WWDC introduction of the Foundation Models Framework for its operating systems. It represents a series of tools that will let developers exploit Apple’s own on-device AI large language models (LLMs) in their apps. This was one of a host of developer-focused improvements the company talked about last week. 

The idea is that developers will be able to use the models with as few as three lines of code. So, if you want to build a universal CMS editor for iPad, you can add Writing Tools and translation services to your app to help writers generate better copy for use across an international network of language sites.
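Based on Apple’s WWDC demos, the basic call really is that compact. Here is a minimal sketch of a single request to the on-device model, using the API names from Apple’s developer materials (details may shift during the beta):

import FoundationModels

// Start a session with the on-device model and make one request.
let session = LanguageModelSession()
let response = try await session.respond(
    to: "Rewrite this headline for a UK audience: 'Fall Savings Start Now'"
)
print(response.content)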

Better yet, when you build that app, or any other app, Apple won’t charge you for access to its core Apple Intelligence models – which themselves operate on the device. That’s great, as it means developers can, at no charge, deliver what will over time become an extensive suite of AI features within their apps while also preserving user privacy.

What are Foundation Models?

In a note on its developer website, Apple tells us the models it has made available in the Foundation Models Framework are particularly good at text-generation tasks such as summarization, “entity extraction,” text understanding, refinement, dialogue for games, creative content generation, and more.

You get:

Apple Intelligence tools as a service for use in apps.

Privacy, as all data stays on the device.

The ability to work offline, because processing takes place on the device (see the availability-check sketch after this list).

Small apps, since the LLM is built into the OS.
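Because everything runs locally, the first thing an app should do is confirm the model is actually present and ready on the user’s device. A minimal sketch of that check, assuming the SystemLanguageModel availability API Apple described at WWDC (names may change before release):

import FoundationModels

// Gate AI features on the on-device model being present and ready.
let model = SystemLanguageModel.default

switch model.availability {
case .available:
    print("On-device model ready; enable AI features.")
case .unavailable(let reason):
    // Older hardware, Apple Intelligence switched off, model still
    // downloading, and so on; fall back gracefully.
    print("Model unavailable: \(reason)")
}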

Apple has also made solid decisions in how it has built the Foundation Models Framework. Guided generation, for example, ensures the LLM provides consistently structured responses for use within the apps you build, rather than the messy output many LLMs generate; Apple’s framework can also deliver complex responses in a more usable format.
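In practice, guided generation means declaring the Swift type you want back and letting the framework constrain the model’s output to it. A sketch of the idea, assuming the @Generable and @Guide macros Apple demonstrated; the CMS-style fields here are illustrative, and the respond(to:generating:) signature may differ in the shipping beta:

import FoundationModels

// Declare the shape we want back; the framework constrains the
// model's output to this type instead of returning free-form text.
@Generable
struct ArticleSummary {
    @Guide(description: "A headline of ten words or fewer")
    var headline: String

    @Guide(description: "Three key takeaways from the piece")
    var takeaways: [String]
}

let draftText = "Apple used WWDC to open its on-device models to developers..."

let session = LanguageModelSession()
let response = try await session.respond(
    to: "Summarize this draft: \(draftText)",
    generating: ArticleSummary.self
)
print(response.content.headline)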

Apple also said it is possible to give the Apple Intelligence LLM access to external tools. Dev magazine explains that “tool calling” means you can instruct the LLM to work with an external tool when it needs to bring in information, such as up-to-the-minute weather reporting. That can also extend to actions, such as booking trips.
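A tool, in this scheme, is a type the model can decide to invoke when a prompt needs live data. The following is a hedged sketch of the pattern, assuming the Tool protocol Apple showed at WWDC; WeatherTool and its stubbed lookup are hypothetical:

import FoundationModels

// A tool the model can choose to call when a prompt needs live data.
struct WeatherTool: Tool {
    let name = "getWeather"
    let description = "Returns current conditions for a named city."

    @Generable
    struct Arguments {
        @Guide(description: "The city to look up")
        var city: String
    }

    func call(arguments: Arguments) async throws -> ToolOutput {
        // Hypothetical stub; a real app would query a weather service here.
        ToolOutput("18°C and overcast in \(arguments.city)")
    }
}

// Register the tool with the session; the model invokes it when needed.
let session = LanguageModelSession(tools: [WeatherTool()])
let reply = try await session.respond(to: "Do I need an umbrella in London today?")
print(reply.content)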

This kind of access to real information helps keep the LLM grounded, preventing it from inventing data to complete its task. The company has also figured out how to make apps remember AI conversations, which means you can engage in multi-turn sessions rather than single-use requests.
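Because the session object carries the conversation’s transcript, follow-up prompts can build on earlier ones without restating context. A minimal sketch, under the same API assumptions as above:

import FoundationModels

// One session, several turns: the transcript carries context forward.
let session = LanguageModelSession()

let first = try await session.respond(
    to: "Suggest a title for a travel piece about Kyoto in autumn."
)

// "it" refers back to the first answer; no need to restate the request.
let second = try await session.respond(
    to: "Now make it punchier and under eight words."
)
print(first.content, second.content)

To stimulate development using Foundation Models, Apple has built in support for this kind of prototyping inside Xcode Playgrounds.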

Walking toward the horizon

Unless you’ve spent the last 12 months locked away from all communications on some form of religious retreat to promote world peace (in which case, I think you should have prayed harder), you’ll know Apple Intelligence has its critics. Most of that criticism is based on the idea that Apple Intelligence needs to be a smart chatbot like ChatGPT (and it isn’t at all unfair to castigate Siri for being a shadow of what it was intended to be). 

But that focus on Siri overlooks the more substantial value unlocked by using LLMs for specific tasks, such as those Writing Tools I mentioned. Yes, Siri sucks a little (but will improve), and Apple Intelligence development has been an embarrassment to the company. But that doesn’t mean everything about Apple’s AI is poor, nor does it mean it won’t get better over time.

What Apple understands is that by making those AI models accessible to developers and third-party apps, it is empowering those who can’t afford fee-based LLMs to get creative with AI. That’s quite a big deal, one that could be considered an “iPhone moment,” or at least an “App Store moment,” in its own right, and it should enable a lot of experimentation.

“We think this will ignite a whole new wave of intelligent experiences in the apps users rely on every day,” Craig Federighi, Apple senior vice president for software engineering, said at WWDC. “We can’t wait to see what developers create.”

What we need

We need that experimentation. For good or ill, we know AI is going to be everywhere, and whether you are comfortable with that truth is less important than figuring out how to best position yourself to be resilient to that reality.

Enabling developers to build AI inside their apps easily and at no cost means they will be able to experiment and, hopefully, forge their own paths. It also means Apple has dramatically lowered the barrier to entry for AI development on its platforms, even as it works urgently to expand the AI models it provides within Apple Intelligence. As it introduces new foundation models, developers will be able to use them, enabling still more experimentation.

With the cost to privacy and the cost of entry set to zero, Foundation Models change the argument around AI on Apple’s platforms. It’s not just about a smarter Siri; it’s about a smarter ecosystem — one that Apple hopes developers will help it build, one AI-enabled app at a time.

The Foundation Models Framework is already available to developers for beta testing; public betas will ship with the operating systems in July.

You can follow me on social media! Join me on BlueSky, LinkedIn, and Mastodon.
