Silicon Valley companies think that smart glasses, like the ones Magic Leap is building, will one day replace the smartphone.

That could be good: as we worry about smartphone addiction, glasses could give us access to the information we need in a way that fits better into our real lives.

And yet, we're simply not ready for the shift. So-called "augmented reality" will only make the spread of fake news and misinformation worse.

As we prepare to hand over control of our senses to a computer, we need to ask whether our systems for sifting real information from fake are good enough.

Magic Leap CEO Rony Abovitz thinks his company's mysterious goggles will one day be able to replace "your phones, your televisions, your laptops, your tablets," as he put it at Recode's Code Media conference on Tuesday.

It's a bold claim. But he's not the only tech executive who's claiming that the smartphone could, one day, die.

Just last year, Facebook CEO Mark Zuckerberg claimed that augmented reality, the technology for superimposing digital imagery over the physical world, could replace anything with a screen. Microsoft has expressed similar sentiments, with CEO Satya Nadella often referring to head-mounted displays as the "ultimate computer."

And it's no wonder it's such an attractive proposition: if a pair of glasses could project your text messages, your e-mails, your spreadsheets, and your Netflix in the air in front of you, why carry a separate phone? Current headsets, like Microsoft's HoloLens or Intel's prototype Vaunt glasses, are still pretty limited. But we're inching closer and closer to that point.

The moment seems ripe for this idea, too. Right now, the big discussion in Silicon Valley is around the notion of smartphone addiction, and whether or not it's healthy for young people to be so attached to the major technology platforms. Salesforce CEO Marc Benioff has even likened so-called "Big Tech" to the tobacco industry.

Smart glasses, then, present an appealing alternative. If information is projected straight into our eyes, it could mean no more smartphone zombies staring down at little metal rectangles. If that information only displayed as you need it, it could mean no more mindlessly swiping through Facebook and Reddit for stimulation.

Still, we're quite a ways away from this future. Zuckerberg himself has said that we're probably about a decade away from smart glasses beginning to make a serious dent in mainstream markets.

But that's a good thing. Because while the ideas are grand, and the technology impressive, I'm concerned that we as a society simply aren't ready for always-on computers that pump information as close to our brains as possible. And the reason why has everything to do with Facebook, YouTube, fake news, and Logan Paul.

We're not ready

First, there's the obvious stuff. Facebook and Google are both under fire right now for spreading false information: at best, peddling simple misinformation; at worst, propagating propaganda that threatens to undermine American democracy.

Both companies are taking steps to stop it, sure, with stricter vetting of the news and information sources they promote. Still, bad actors keep finding ways around it. Even when Facebook took a stand against cryptocurrency scams and banned cryptocurrency ads outright, the ban was quickly circumvented with clever ploys.

All of this is to say: there are a lot of people out there heavily invested in spreading bad information through the systems we already have in place. When you put on a pair of smart glasses like the ones Magic Leap is proposing, you're handing over a certain measure of control of your senses to a computer.

It's bad enough that you can't always trust a news article posted by a well-meaning former classmate. Now, we have to ask what happens when scammers and ne'er-do-wells figure out a way to pump fake stuff into your eyeballs.

If you want to get really weird with it, you can imagine subtler ways glasses could be exploited for misuse: What if you sold a couch on Craigslist, and found you had been paid in a stack of $1 bills doctored to appear to your glasses as $100 bills? There are already stickers that cause computer vision systems to "hallucinate."
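Those "hallucination" stickers exploit what researchers call adversarial examples: tiny, targeted changes to an input that flip a model's prediction. Here's a minimal, hypothetical sketch of the idea on a toy linear classifier; the weights, inputs, and function names are invented for illustration, and real attacks operate on deep networks and camera images rather than two numbers:

```python
# Toy illustration of an adversarial perturbation (in the spirit of the
# "fast gradient sign method") on a made-up linear classifier. Real
# attacks target deep vision networks, but the core trick is the same:
# nudge each input feature slightly in the direction that most changes
# the model's output. All numbers here are invented.

def predict(weights, bias, x):
    """Return class 1 if the linear score is positive, else class 0."""
    score = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if score > 0 else 0

def fgsm_perturb(weights, x, epsilon):
    """Shift every feature by epsilon against the score's gradient.

    For a linear model, the gradient of the score with respect to the
    input is just the weight vector, so sign(gradient) = sign(weights).
    """
    sign = lambda v: 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)
    return [xi - epsilon * sign(w) for w, xi in zip(weights, x)]

weights, bias = [2.0, -1.0], 0.0
x = [0.5, 0.25]

print(predict(weights, bias, x))       # 1: the original classification
x_adv = fgsm_perturb(weights, x, 0.5)  # x_adv == [0.0, 0.75]
print(predict(weights, bias, x_adv))   # 0: flipped by a small nudge
```

The unsettling part is that each feature moved by at most 0.5, yet the answer flipped; on images, the equivalent change can be imperceptible to a human while completely fooling the machine.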

The algorithm rankles

Then there's a related but more existential threat: Namely, the reliance of big tech companies on the almighty algorithm.

We trust Facebook, Twitter, and Google to present the most interesting and important content to us, mainly because we have no choice — it's just how they operate.

But over the last year, we've seen how the algorithm can fail us. A widely shared blog post last year highlighted how children's video creators are gaming YouTube, racking up millions of views on upsetting (and copyright-violating) content. Facebook faces similar issues, as entrepreneurial types figure out ways to ensure their low-quality videos always rank highly in the news feed.

Again: These tech companies are asking us to hand control of our senses over to a computer with these smart glasses. If, today, a dedicated and unscrupulous party can figure out ways to get you to watch a video you don't want to see, it's anyone's guess what can happen when those same principles are applied to what we see in our everyday lives.

And, as Engadget pointed out, the recent controversies over YouTube star Logan Paul prove that the algorithm isn't necessarily self-policing. His now-notorious video of finding a dead body in Japan's "suicide forest" wouldn't have been flagged by any computer system as objectionable; it took the very human response of outrage to surface the problem.

In other words, we're trusting an imperfect system to rank the information in our lives, and soon, our senses. And it's a harder problem to solve: a platform can always vet the information it spreads more carefully, but it's much harder to design a ranking algorithm that sorts content without some kind of bias, one way or the other.
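To see why that bias is baked in rather than a fixable bug, consider a toy feed ranker. The items, scores, and field names below are all invented for illustration; real news-feed systems are vastly more complex, but the shape of the problem is the same:

```python
# Toy sketch of why engagement-only ranking can surface low-quality
# content: the ranker optimizes predicted clicks and never sees
# "accuracy" at all. Every item and score here is invented.

items = [
    {"title": "Careful explainer", "predicted_clicks": 0.30, "accurate": True},
    {"title": "Outrage bait",      "predicted_clicks": 0.90, "accurate": False},
    {"title": "Local news story",  "predicted_clicks": 0.55, "accurate": True},
]

def rank_by_engagement(feed):
    """Sort purely by predicted engagement, as a naive feed might."""
    return sorted(feed, key=lambda item: item["predicted_clicks"], reverse=True)

ranked = rank_by_engagement(items)
print([item["title"] for item in ranked])
# ['Outrage bait', 'Local news story', 'Careful explainer']
```

Nothing in the objective penalizes inaccuracy, so the least accurate item lands on top every time. Fixing that means changing what the algorithm optimizes, not just how well it optimizes it, and every choice of objective encodes someone's judgment about what matters.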

There's nothing to say that any of these problems are insurmountable, and I'm optimistic that solutions will present themselves in the next decade. But the clock is ticking, and today's intractable moral quandary is poised to become tomorrow's postmodern nightmare.