
Now that those of us brave enough to install the Android P beta have had a week or two to kick the tires, I wanted to revisit its new gesture navigation. Chaim Gartenberg has rightly pointed out that the combination of swipes and buttons makes the core navigation a set of mixed metaphors. Others have called the gestures “bad” and “a hot mess.” A gentler way to characterize the new system would be “polarizing,” but judging by the reaction from many Android users, the blunter word might be “rejected.”

Having spent some more time with the Android P beta, I tend to agree — but for different reasons. I am not at all put out by the mixed metaphors in the UI, with its blend of button taps and swipes. Nor, if I’m perfectly honest, am I really annoyed that the gesture system hasn’t reclaimed any screen real estate.

“System navigation still experiences some jank and frozen states on Pixel devices.”

Instead, the problem with the gestures in the current iteration of the Android P beta is one that is sadly familiar to Android users: jank. That’s the technical term (no really) that Google itself uses to describe the behavior of the System UI on this beta. “Jank” usually translates to stutters, dropped frames, and hitchy animations and scrolling.

I trust that much of that will be resolved in later iterations of the software, but I’m frankly terrified that the subtler issues won’t be. I’m speaking about the basic feel of moving elements around on the screen. It needs to be as close to perfect as possible — as good as it is on the iPhone X in my opinion — otherwise that sense of “jank” is going to permeate everything.

If the jank permeates the core experience of navigating Android, it will basically ruin the experience in an unignorable way. That makes the switch to gestures incredibly risky. It’s doubly risky because this is the operating system that took half a decade to get something close to a good scrolling experience.

It will either be great or awful, with no in-between

So why even take that risk? Despite (or perhaps because of) the mixed metaphors, the new gesture UI really does add a ton of additional functionality. With the new system, no matter what app you have open, you have quick access to the following:

Home screen (tap home button)

Predicted apps (half swipe up)

Phone / Google Search (half swipe up + tap)

It might only apply to Google’s implementation of Android on the Pixel line, but that last one is really important to me. With minimal thumb movement, you can start typing a search to not just get to your apps and contacts, but to Google and also to those “Slices” that will let you perform app actions, like hailing a Lyft ride home, directly.

But wait, I know what you’re going to say: gestures aren’t necessary for any of those benefits. Tapping the square overview button accomplishes the same thing. Yes it does, but gestures enable a few more benefits. Again, from any app, you can quickly do the following:

Last six or so apps (swipe the home button to the right)

Jump straight to the all apps drawer (full swipe up, though weirdly that’s not working for me in this beta)

“Fat finger it” — you can get to most of the gestures with an inexact swipe, instead of a precise tap on a button.

That last bullet point gets to the reason why I think this gesture system is a risk worth taking. We can (and surely will) quibble about added functionality and the use of screen real estate. But Google is supposed to be pushing this idea of a new Material Design philosophy. It’s a user interface paradigm that includes the idea that the digital elements on your screen are a kind of “magic paper” that you can move around directly, some layered on top of each other.


It’s an evolution of the original insight of the iPhone: a capacitive screen that you directly touch with your finger means you’re directly touching the content. Material Design takes it just a step further, suggesting that you’re directly moving screen elements around. It lets you take the same parts of your brain that move real objects around in space and apply them to moving digital objects around on a screen.

If you think about it, it’s kind of weird that an operating system with that design philosophy hasn’t already found a way to move to a gestural system.

But that highfalutin idea of moving magic paper around only works if it feels like you’re moving something. If there’s jank, it won’t feel real at all. It’ll feel terrible. I still think it’s a risk worth taking, but I hope the Android engineers in Mountain View are up to the challenge.