
Why is autocorrect on the iPhone still so terrible?

Thursday, February 10, 2022, 11:45 AM, from Mac 911
We’ve all been there. You mean to type a simple phrase like “What do you want for lunch today?” and it comes across as, “What do you it want for launch tidy?” Autocorrect mistakes are so commonplace, and have been for so long, that we barely even acknowledge them anymore unless they’re unintentionally hilarious.

Why is this? We’re coming up on 15 years of the iPhone—the device that pioneered and popularized touch-only keyboard input—and autocorrect has been with us in some form or another since the ’90s, when Word would automatically correct accidental caps-lock or common misspellings.

After decades and billions of devices sold, not to mention the meteoric rise of machine learning and AI, autocorrect feels just as dumb as ever. In some ways, it even feels like it has regressed, making nonsensical substitutions when a simple letter-swap would produce the correct word. Is autocorrect just really hard? Or is it not even trying to work the way it needs to? Is it no longer a priority?

Miss a couple of keys in “What do you want for lunch today” and you get this monstrosity. (Image: IDG)

The march of nines

I first learned of the “march of nines” about 20 years ago (though I don’t know where the term originated), while researching and writing about the latest voice dictation software. That was back when computer users had to buy software like Dragon Dictate to talk to their machines.

Dictation software that is 90 percent accurate might sound good, but it’s worthless. If you have to fix one word out of every 10, you’re not really going to save much time. Even 99 percent accuracy isn’t good enough, really. At 99.9 percent, things get interesting: if you can dictate 1,000 words to your computer and only have to fix one of them, you’ve got a huge time-saver on your hands (not to mention an incredible accessibility tool).

But 99 percent accuracy is not merely 9 percentage points better than 90 percent. It’s a tenfold improvement, because the error rate drops from one error in every 10 words to one in every 100.

For every “nine” you stack onto the accuracy of an automated process, the result seems only marginally better to humans, but it takes a tenfold improvement to get there. In other words, 99.9999 percent doesn’t feel much better than 99.999 percent to a user, but it’s still 10 times harder for the computer to achieve.
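To make the arithmetic concrete, here’s a quick back-of-the-envelope sketch in Python (the 10,000-word sample size is just a convenient round number): each added nine barely moves the headline percentage while cutting the error count tenfold.

```python
# Each added "nine" of accuracy cuts the error count tenfold,
# even though the headline percentage barely moves.
for accuracy in (0.90, 0.99, 0.999, 0.9999):
    errors = (1 - accuracy) * 10_000
    print(f"{accuracy:.2%} accurate -> {errors:,.0f} errors per 10,000 words")
# Prints 1,000 / 100 / 10 / 1 errors per 10,000 words.
```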

Is autocorrect stuck in a “march of nines” rut? Is it secretly making massive leaps that seem vanishingly small to us? I don’t think so. The error rate of autocorrect is still rather high, while the compute power available to it (especially for machine learning tasks) is hundreds of times greater than a decade ago. I think it’s time to look elsewhere.

Natural language processing that isn’t

Whether you’re talking about voice assistants like Siri or Alexa, voice dictation, or autocorrect, tech companies like to say they’re employing “natural language processing.”

But true natural language processing remains beyond the reach of any of these consumer systems. What we’re left with is a machine-learning-powered statistical analysis of the parts of speech that is almost entirely devoid of semantic meaning.

Consider the following: “Go down to the corner store and get me a stick of butter. Make sure it’s unsalted.”

If I were to ask someone what “it” refers to, anyone would immediately say the butter, even though, grammatically, “it” could just as well refer to the store. But who ever heard of an unsalted store? If we change that second sentence to “Check that it’s open today,” we know “it” refers to the store.

This is pretty trivial stuff for humans, but computers are terrible at it, because language systems are built without an understanding of what words actually mean, only which types of words they are and how they are spelled.
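To illustrate (this is my own toy sketch, not how any shipping system is built): a common naive heuristic resolves a pronoun to the most recent noun in the text, which gives the same answer no matter which follow-up sentence appears, because nothing in it knows what “unsalted” or “open” means. The hardcoded noun set stands in for a real part-of-speech tagger.

```python
# Toy pronoun resolver: "it" = the most recent noun in the context.
# The hardcoded noun set stands in for a real part-of-speech tagger.
NOUNS = {"corner", "store", "stick", "butter"}

def most_recent_noun(context):
    antecedent = None
    for word in context.lower().split():
        word = word.strip(".,")
        if word in NOUNS:
            antecedent = word
    return antecedent

context = "Go down to the corner store and get me a stick of butter."
for follow_up in ("Make sure it's unsalted.", "Check that it's open today."):
    print(f"{follow_up} -> it = {most_recent_noun(context)}")
# Both lines print "butter": the heuristic never looks at the follow-up,
# so it can't use the meaning of "unsalted" or "open" the way a human does.
```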

All these language-based systems (voice assistants, dictation, autocorrect) rely on vast numbers of poorly paid contractors to take voice samples or text sentences and meticulously tag them: noun, verb, adjective, adverb, foul language, proper noun, etc. The computer language system might know that if you type “taste this soop I just made,” the misspelled word should be “soup,” because it should be a noun and it’s got most of the same letters as the non-word you typed by accident. But it doesn’t know what soup actually is. Nor any of the other words in the sentence: taste, made, just…
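Here’s roughly what that shallow view looks like in practice, as a hedged sketch with a made-up four-word dictionary: generate every word within one edit of the typo, and notice that spelling similarity alone can’t even pick a winner.

```python
# Candidate generation by edit distance alone: classic Levenshtein DP.
def edit_distance(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

DICTIONARY = ["soup", "soap", "stoop", "shop"]  # invented for the demo

typo = "soop"
print(sorted(w for w in DICTIONARY if edit_distance(typo, w) == 1))
# ['shop', 'soap', 'soup', 'stoop'] -- all one edit away. Only context
# statistics (the "taste this ___ I just made" frame) break the tie,
# and even then nothing here knows what soup actually is.
```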

I think this is the real reason autocorrect continues to be so bad. It doesn’t matter how sophisticated your machine learning is or how massive its training set is if the system doesn’t know what words mean, even superficially.

My iPhone only knows Macworld if I tell it to know Macworld. (Image: IDG)

Google will auto-predict entire phrases for you in Gmail, but even this is just a very sophisticated statistical analysis. It uses machine learning to determine which phrases most commonly follow the words you just used when replying to an email with a particular distribution of keywords and phrases. It still doesn’t know what any of it means.
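A toy version of that trick, built on an invented three-line training corpus, makes the point: the “prediction” is nothing but a lookup of which word most often followed the previous one.

```python
from collections import Counter, defaultdict

# Count which word follows which in the training text, then "predict"
# by picking the most frequent continuation. No meaning involved.
training = ("thanks for the update thanks for the help "
            "thanks for the update see you soon").split()

follows = defaultdict(Counter)
for prev, nxt in zip(training, training[1:]):
    follows[prev][nxt] += 1

def predict(word):
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # 'update', purely because it occurred twice to "help"'s once
```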

To use my original example: Autocorrect suggested “What do you it want for launch tidy” because it doesn’t know that’s a nonsense sentence. If my iPhone knew what any of those words actually meant, not just their grammatical roles, it would be easy for autocorrect to only make suggestions that are, you know, possible human language. (Of course, the fact that it’s also a mishmash of impossible grammar just shows how bad autocorrect continues to be.)

Autocorrect no longer seems to be a priority

The fact of the matter is, autocorrect is not the priority it once was. When was the last time you saw Apple tout a massive leap in autocorrect accuracy in the marketing of iOS?

In the early days of smartphones, when we were all getting used to typing with big thumbs on tiny touchscreens, the ability to fix our fat-finger errors was a huge selling point. It was a core feature that pointed to a device’s elegant, easy-to-use software.

Autocorrect, for all its faults, is old and boring now. We’ve lived with its foibles for so long that the market doesn’t really look at it as a hallmark of usability. We’ve moved on to other issues, like fancy camera features and notifications. I’m sure there are smart, hard-working engineers at Apple and Google plugging away at autocorrect, but it likely gets a fraction of the resources given to the team responsible for taking marginally better photos, because marginally better photos can sell phones and marginally better autocorrect can’t.

It’s going to take an absolutely massive leap in AI modeling and power before our phones have some sense of the semantic meanings of words. But certainly, even now, a lot more could be done to filter out nonsense sentences and garbage autocorrect suggestions that create meaningless drivel.
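Even without semantics, a crude filter along those lines is easy to imagine. As a hedged sketch (the tiny bigram table is invented, and a real system would need vastly larger statistics): reject any candidate sentence containing a word pair the system has never seen together.

```python
# Word pairs observed in some (invented) training corpus.
SEEN_BIGRAMS = {
    ("what", "do"), ("do", "you"), ("you", "want"),
    ("want", "for"), ("for", "lunch"), ("lunch", "today"),
}

def looks_like_language(sentence):
    words = sentence.lower().rstrip("?.").split()
    return all(pair in SEEN_BIGRAMS for pair in zip(words, words[1:]))

print(looks_like_language("what do you want for lunch today"))     # True
print(looks_like_language("what do you it want for launch tidy"))  # False:
# "you it" and "launch tidy" never appear together, so the suggestion is rejected.
```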

I would just like to see any improvement at all. Anything to dig autocorrect out of the rut it’s been in for launch tidy.
https://www.macworld.com/article/613333/why-is-autocorrect-still-so-bad.html