I have a confection to make. Ugh! No, I don’t want to bake a cake. Let me type that again. I have a confession to make. I worked for many years as a software developer at Apple and I invented touchscreen keyboard autocorrection for the original iPhone.

WIRED OPINION: Ken Kocienda (@kocienda) was a software engineer and designer at Apple for more than 15 years. His book Creative Selection: Inside Apple's Design Process During the Golden Age of Steve Jobs comes out on September 4.

I’m proif if rhe wirl… ahem… I’m proud of the work I did to bring software-assisted typing to a smartphone near you. After all, if the iPhone keyboard wasn’t based in software, Apple couldn’t have delivered on Steve Jobs' vision for a breakthrough touchscreen computer with as few fixed buttons as possible. The keyboard needed to get out of the way when it wasn’t needed so the rest of the apps on the phone could shine.

The iPhone succeeded in this, but I’m also aware that its style of keyboard autocorrection has its limits. Everyone has stories about autocorrection going awry, but the funnier these typing tales get, the more apocryphal they’re likely to be. I’m not quite as proud of giving the world a new form of low humor, the smartphone era’s version of the knock-knock joke.

Have you heard this one? A wife sends a text with a photo of herself modeling a new outfit. She asks her husband, “Does this dress make me look fat?” On the receiving end, the man’s mind knows he should tread carefully, but his thumbs don’t.

He replies, “Mooooo!”

What is up with this? It's the result of a tragicomic combination: ‘M’ and ’N’ are close neighbors on the keyboard, the dictionary lookup confirms that the sound a cow makes is actually a word, and autocorrection is indifferent to the sensitivities of this simple (but perilous!) Q&A. “Wait, honey! I didn’t mean that!”

We find this amusing because we can relate. We’ve all sent autocorrected text we didn’t intend. To be a smartphone user is to accept the ergonomics and software of small touchscreen keyboards.

When I started working with a small team of engineers and designers at Apple in late 2005 to create a touchscreen operating system for Purple—the codename of the super-secret skunk works project that became the iPhone—we didn’t know if typing on a small, touch-sensitive sheet of glass was technologically feasible or a fool’s errand. In those early days of work on Purple, the keyboard was a daunting prospect, and we referred to it, often quite nervously, as a science project. It wasn’t easy to figure out how software might come to our rescue and how much our algorithms should be allowed to make suggestions or intervene to fix typing mistakes. I wrote the code for iPhone autocorrection based on an analysis of the words we type most commonly, the frequency of words relative to others, and the errors we’re most likely to make on a touchscreen keyboard.
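To make that idea concrete, here's a toy sketch in Python of how word frequency and key proximity can combine to pick a correction. The word list, frequencies, and keyboard map below are invented for illustration; this is nothing like Apple's actual code.

```python
# Toy autocorrection: score candidates by how plausibly the typed
# letters are near-misses of the candidate's letters, weighted by
# how common the candidate word is. All data here is illustrative.

# Neighboring keys on a QWERTY layout (tiny illustrative subset).
NEIGHBORS = {
    "m": {"n", "j", "k"},
    "n": {"m", "b", "h", "j"},
    "a": {"q", "w", "s", "z"},
    "o": {"i", "p", "k", "l"},
}

# Relative word frequencies (made-up numbers for illustration).
FREQUENCY = {"no": 900, "mo": 5, "moo": 40, "on": 800}

def key_score(typed, candidate):
    """Score how plausibly `typed` is a near-miss of `candidate`:
    exact letters are best, adjacent keys score lower, anything
    farther away rules the candidate out."""
    if len(typed) != len(candidate):
        return 0.0
    score = 1.0
    for t, c in zip(typed, candidate):
        if t == c:
            continue
        elif c in NEIGHBORS.get(t, set()):
            score *= 0.5  # plausible fat-finger slip
        else:
            return 0.0    # too far away on the keyboard
    return score

def autocorrect(typed):
    """Return the candidate with the best proximity-times-frequency
    score, or the input unchanged if nothing plausible is found."""
    best, best_score = typed, 0.0
    for word, freq in FREQUENCY.items():
        s = key_score(typed, word) * freq
        if s > best_score:
            best, best_score = word, s
    return best
```

With these invented numbers, typing "mo" corrects to "no" (the far more frequent word one key away), while "moo" survives untouched because it's an exact dictionary hit.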


More than 10 years after the initial release of the iPhone, the state of the art now is much as it was then. Even with recent advances in AI and machine learning, the core problem remains the same: Software doesn’t understand the nuance of human communication.

Of course, the core principle of machine learning is the notion of training. Show a learning algorithm a huge body of text, teach it to recognize n-grams (sequences of words that go together frequently), and the longer the sequence, the better the algorithm will be—and a computer will be able to tell you that you meant “bacon and eggs” and not “bacon and effs”. It should also be able to puzzle out “bacon” if you badly mangled the typing and keyed “havom”. Autocorrection that works as well as this saves us from laughable mistakes like telling someone you made your kid a “peanut gutter and Kelly sandwich” for lunch, and when the keyboard makes these kinds of corrections for us, we do feel that the software has saved our bacon.
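Here's a minimal sketch of how that n-gram idea (using bigrams, the two-word case) can pick between repair candidates. The counts and candidate lists are invented for illustration, not real training data.

```python
# Choosing among repair candidates by bigram context: prefer the
# candidate that most often follows the preceding word in the
# (hypothetical) training corpus.
from collections import defaultdict

# Bigram counts from a made-up corpus: (previous word, word) -> count.
BIGRAMS = defaultdict(int, {
    ("and", "eggs"): 5000,
    ("and", "effs"): 1,
    ("and", "jelly"): 3000,
    ("and", "kelly"): 2,
})

def pick_by_context(prev_word, candidates):
    """Return the candidate that most frequently follows `prev_word`
    in the bigram counts."""
    return max(candidates, key=lambda w: BIGRAMS[(prev_word, w)])
```

Given "bacon and" followed by the mangled token, context steers the choice to "eggs" over "effs", and "peanut butter and" to "jelly" over "Kelly".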

Improving autocorrection from here is still just a matter of degree: piling up more data for algorithms to study, improving the accuracy of the guesses, and instructing the computer to consider longer sequences of words.

In order for autocorrect software to make better choices about what to fix and how, the system would need to know more about what we mean. But do we want software to be allowed to intervene more than it does today? How much refereeing and rewriting should software be allowed to do?