
Apple’s mad AI dash could end in a privacy faceplant

Monday, July 8, 2024, 01:29 PM, from Macworld

Welcome to our weekly Apple Breakfast column, which includes all the Apple news you missed last week in a handy bite-sized roundup. We call it Apple Breakfast because we think it goes great with a Monday morning cup of coffee or tea, but it’s cool if you want to give it a read during lunch or dinner hours too.

The chat’s out of the bag

In a surprising turn of events last week, it emerged that OpenAI’s ChatGPT app for the Mac had been flouting Apple’s security policies by storing queries in unencrypted plaintext. This meant other apps could easily access potentially sensitive data, and (while the issue has since been patched) raised some mid-size red flags ahead of ChatGPT’s integration with Apple Intelligence.

This isn’t the first time Apple’s AI adventure has pushed it into a compromising position, and one suspects it won’t be the last. Arriving late to the party, Apple was left with the choice of either building fast and cutting corners or jumping into bed with other companies that may not have quite the same attachment to the principles of quality control and user privacy. It seems to have elected to do a little of both, and the result can hardly be reassuring to a user base that thought it could depend on Cupertino having their back in these areas at least.

Apple Intelligence is a massive raft of features, many of which (such as the ground-up remodeling of Siri) are urgently needed. But they were all announced at once in a mad rush, and it isn’t clear how Apple will maintain the standards of QA it’s normally known for while rolling out so much functionality in such a compressed timeframe. That’s presumably why OpenAI has already been headhunted as a partner for the tasks Apple Intelligence can’t manage on its own, with companies like Meta and Google to be added further down the line.

If those names don’t necessarily fill the user with peace of mind, Apple hopes to assuage any doubts by making their input opt-in. (“ChatGPT can answer that query for you. Tap here to gullibly cough up your personal data” etc.) But that’s not how Apple usually plays the game. It doesn’t let scammers put their software on the App Store and hand-wave responsibility with an opt-in dialog box; it makes sure that less tech-savvy users aren’t put in a position where they have to make those kinds of decisions. (I mean, in theory. Obviously, more than a few scam apps have made it through the vetting process. But the principle remains.)

Part of the problem is that AI requires a fundamentally different business model from what Apple is used to. If the marketing and profit margins are right, selling premium phones and laptops is lucrative enough that you don’t need to stoop to data harvesting. However, AI depends on the scraping of data to build its large language models. You can’t be squeamish if you want to get into AI on any serious level; either you rummage in the dirt yourself or you outsource the work to the kind of people about whom you could charitably say, paraphrasing Tangled, that ethics-wise their hands are not the cleanest.

And the content scraping is only the beginning of AI’s issues. Anyone who’s tried to get ChatGPT to write an essay or Stable Diffusion to design a movie poster knows that they are chaotic and unpredictable systems that cannot be depended upon to do what either the user or the designer wants. Sometimes gen AI art programs distort faces or history in ways that can be upsetting. Sometimes AI search instructs amateur cooks to make pizza with glue. Apple is a control freak of a company, and it seems unlikely it would be happy with users experiencing such loose-cannon results.

Apple felt it had to get an AI product out there. But if that means it has to compromise its principles on consistent user experience, data privacy, security, and the empowerment of creatives, we may have to ask if the experiment was worth it.

Trending: Top stories

Microsoft’s AI stance would be laughable if it weren’t so bad.

A Siri divided against itself cannot stand, argues Jason Snell.

What Apple Vision Pro really means for Mac users.

5 reasons why your MacBook needs a docking station.

Apple tags three of its most iconic products as ‘vintage.’ Feeling old yet?

Podcast of the week

On this episode of the Macworld Podcast, we talk about iPhone battery life: the problems we run into, how to deal with them, and how to keep the battery healthy for a long time.

You can catch every episode of the Macworld Podcast on Spotify, Soundcloud, the Podcasts app, or our own site.

Reviews corner

macOS Sequoia vs Sonoma: What’s new?

Best VPN for Mac: Reviews and buying advice for Mac users.

Best virtual machine software for Mac, plus other ways you can run Windows on a Mac.

Best password managers for Mac and iPhone.

The rumor mill

Apple’s next big thing? AirPods with frickin cameras.

Uncovered Apple chip identifiers reveal surprising slate of unreleased iPads.

Leak shows all iPhone 16 models getting an ‘A18’ chip.

Apple is ‘formally’ working on next year’s OS updates. Here are the code names.

iOS 18, Time Bandits, and everything else coming from Apple in July.

Software updates, bugs, and problems

ChatGPT Mac security flaw raises red flags ahead of Apple Intelligence integration.

AI tool to generate app designs is pulled after it copies Apple’s work.

And with that, we’re done for this week’s Apple Breakfast. If you’d like to get regular roundups, sign up for our newsletters. You can also follow us on Facebook, Threads, or Twitter for discussion of breaking Apple news stories. See you next Monday, and stay Appley.

https://www.macworld.com/article/2381684/apples-mad-ai-dash-could-end-in-a-privacy-faceplant.html
