
How will Apple improve its AI while protecting your privacy?

Tuesday, April 15, 2025, 12:18 AM, from Mac 911
Macworld

With all the problems we’ve heard about Apple Intelligence lately–delayed Siri improvements, bad news notification summaries, unimpressive image generation, and more–you might wonder what Apple is planning to do to right the ship.

Obviously new and improved models are important, and so is increased training, but Apple has a particularly hard time here because its privacy policies are far stricter than those of other companies building AI products.

In a new post on Apple’s Machine Learning Research site, the company explains a technique it will employ to help its AI be more relevant, more often, without training it on your personal data.

Ensuring privacy while polling for usage data

Differential Privacy is a way to, as Apple puts it, “gain insight into what many Apple users are doing, while helping to preserve the privacy of individual users.”

Basically, whenever Apple collects data in a system like this, it first strips out any identifying information (device ID, IP address, and so on) and then slightly alters the data by adding random noise. When millions of users submit results, that noise largely cancels out in aggregate, so overall trends remain visible. That’s the Differential Privacy part: take enough samples with identifiers removed and random noise added, and no particular bit of data can be traced back to any individual user.

It’s a good way to, for example, get a good statistical sample of which emoji are picked most often, or which autocorrect word is used the most after a particular misspelling–collecting data on user preferences without actually being able to trace any particular data point back to any user, even if they wanted to.
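To make that concrete, here’s a minimal sketch of one local differential privacy technique, k-ary randomized response, applied to the emoji example. The emoji list, the epsilon value, and the simulation are invented for illustration; Apple’s production system uses more sophisticated sketching algorithms, but the basic idea is the same: each device randomizes its own answer before it leaves the phone, and the server debiases the aggregate.

```swift
import Foundation

// Illustrative sketch of local differential privacy via k-ary randomized
// response. The emoji list and epsilon are hypothetical; the point is the
// shape of the system: randomize on device, debias in aggregate on the server.

let emoji = ["😂", "❤️", "👍", "😭", "🙏"]
let epsilon = 2.0                              // privacy budget (made-up value)
let k = Double(emoji.count)
let p = exp(epsilon) / (exp(epsilon) + k - 1)  // chance the true answer is kept
let q = (1 - p) / (k - 1)                      // chance any specific other answer is sent

// On-device step: report the true favorite with probability p,
// otherwise a uniformly random different emoji.
func privatizedReport(trueChoice: Int) -> Int {
    if Double.random(in: 0..<1) < p { return trueChoice }
    return (0..<emoji.count).filter { $0 != trueChoice }.randomElement()!
}

// Server step: count the noisy reports, then invert the randomization
// (observed ≈ p * true + q * (n - true)) to estimate the real shares.
func estimatedShares(reports: [Int]) -> [Double] {
    let n = Double(reports.count)
    var counts = [Double](repeating: 0, count: emoji.count)
    for r in reports { counts[r] += 1 }
    return counts.map { (($0 - q * n) / (p - q)) / n }
}

// Simulate a million devices whose true favorite skews toward the first emoji.
let trueChoices = (0..<1_000_000).map { _ in
    Double.random(in: 0..<1) < 0.4 ? 0 : Int.random(in: 1..<emoji.count)
}
let reports = trueChoices.map(privatizedReport)
for (symbol, share) in zip(emoji, estimatedShares(reports: reports)) {
    print(symbol, String(format: "%.3f", share))
}
```

No single report in that simulation is trustworthy on its own, which is the point: the server can only recover the population-level shares, not what any one device actually picked.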

Apple can generate synthetic text that is representative of common prompts, then use those differential privacy techniques to find out which synthetic samples users select most often, or to determine which words and phrases are common in Genmoji prompts and which results users are most likely to pick.

The AI system could generate common sentences used in emails, for example, and then send multiple variants out to different users. Then, using differential privacy techniques, Apple can find out which ones are selected most frequently (while having no ability to know what any one individual chose).
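As a rough illustration of that flow, the sketch below has a device score server-supplied synthetic sentences against its own local messages and report only a noise-randomized index of the winner. The candidate sentences, the word-overlap similarity score, and the epsilon value are all stand-ins invented for this example; Apple describes comparing embeddings of synthetic messages against samples of local data, but the reporting principle is the same.

```swift
import Foundation

// Sketch of the synthetic-text idea: the server sends candidate sentences,
// the device picks the one closest to its own recent messages, and only a
// noisy index goes back. The similarity scoring here is a toy word-overlap
// measure, not whatever embedding comparison Apple actually uses.

let syntheticCandidates = [
    "Can we move the meeting to 3pm?",
    "Happy birthday! Hope you have a great day.",
    "Did you get a chance to review the document?"
]

// Toy similarity: count shared lowercase words between two strings.
func similarity(_ a: String, _ b: String) -> Int {
    let wordsA = Set(a.lowercased().split(separator: " "))
    let wordsB = Set(b.lowercased().split(separator: " "))
    return wordsA.intersection(wordsB).count
}

// On-device: find the candidate closest to any local message.
func bestCandidateIndex(localMessages: [String]) -> Int {
    var bestIndex = 0, bestScore = -1
    for (i, candidate) in syntheticCandidates.enumerated() {
        let score = localMessages.map { similarity(candidate, $0) }.max() ?? 0
        if score > bestScore { bestScore = score; bestIndex = i }
    }
    return bestIndex
}

// Report through randomized response so the server only learns aggregates.
func privatize(_ index: Int, epsilon: Double = 2.0) -> Int {
    let k = Double(syntheticCandidates.count)
    let p = exp(epsilon) / (exp(epsilon) + k - 1)
    if Double.random(in: 0..<1) < p { return index }
    return (0..<syntheticCandidates.count).filter { $0 != index }.randomElement()!
}

let localMessages = ["Could we push the meeting to tomorrow?", "Review done, looks good."]
let noisyReport = privatize(bestCandidateIndex(localMessages: localMessages))
print("Device reports candidate index:", noisyReport)
```

The local messages never leave the device; only the randomized index does, and across millions of devices Apple can see which synthetic variants resemble real-world usage most often.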

Apple has been using this technique for years to gather data meant to improve QuickType suggestions, emoji suggestions, lookup hints, and more. As anonymous as it is, it is still opt-in. Apple doesn’t collect this type of data unless you affirmatively enable device analytics.

Techniques like this are already being used to improve Genmoji, and in an upcoming update, they’ll be used for Image Playground, Image Wand, Memories Creation, Writing Tools, and Visual Intelligence. A Bloomberg report says the new system will come in a beta update to iOS 18.5, iPadOS 18.5, and macOS 15.5 (the second beta was released today).

Of course, this is just data gathering, and it will take weeks or months of data collection and retraining to measurably improve Apple Intelligence features.
https://www.macworld.com/article/2686346/how-will-apple-improve-its-ai-while-protecting-your-privacy...
