We’ve all tried to log into a website or submit a form only to be stuck clicking boxes of traffic lights or storefronts or bridges in a desperate attempt to finally convince the computer that we’re not actually a bot.

For many years, this has been one of the predominant ways that reCaptcha—the Google-run internet bot detector—has determined whether a user is a bot or not. But last fall, Google launched a new version of the tool, with the goal of eliminating that annoying user experience entirely. Now, when you enter a form on a website that’s using reCaptcha V3, you won’t see the “I’m not a robot” checkbox, nor will you have to prove you know what a cat looks like. Instead, you won’t see anything at all.

“It’s a better experience for users. Everyone has failed a Captcha,” says Cy Khormaee, the reCaptcha product lead at Google. Instead, Google analyzes the way users navigate through a website and assigns them a risk score based on how malicious their behavior is. Khormaee won’t share what signals Google uses to determine these scores because he says that would make it easier for scammers to imitate benign users, but he believes that this new version of reCaptcha makes it incredibly difficult for bots or Captcha farmers—humans who are paid tiny amounts to break Captchas online—to fool Google’s system.

“You have to understand what behavior on the site should be and mimic that well enough to fool us,” he says. “That’s a really hard problem versus the general problem of, ‘Pretend like I’m a human.'” Website administrators then get access to their visitors’ risk scores and can decide how to handle them: For instance, if a user with a high risk score attempts to log in, the website can set rules to ask them to enter additional verification information through two-factor authentication. As Khormaee put it, the “worst case is we have a little inconvenience for legitimate users, but if there is an adversary, we prevent your account from being stolen.”
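In practice, the rules Khormaee describes amount to simple threshold logic on the score a site receives. Here is a minimal sketch of that decision step, assuming an illustrative convention where the risk score runs from 0.0 (benign) to 1.0 (malicious); the function name and cutoff values are hypothetical, since each site chooses its own:

```python
# Sketch of site-defined rules acting on a reCaptcha v3 risk score.
# Illustrative convention: 0.0 = benign, 1.0 = malicious.
# The cutoffs below are hypothetical examples, not values set by Google.

def handle_login_attempt(risk_score: float) -> str:
    """Map a visitor's risk score to a site-defined action."""
    if risk_score < 0.3:
        return "allow"          # behaves like a legitimate user
    if risk_score < 0.7:
        return "require_2fa"    # suspicious: ask for extra verification
    return "block"              # very likely automated
```

A real integration would obtain the score on the server by verifying the token reCaptcha issues to the page, then feed that score into logic like this.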

According to tech statistics website BuiltWith, more than 650,000 websites are already using reCaptcha v3; overall, at least 4.5 million websites use reCaptcha, including 25% of the top 10,000 sites. Google is also now testing an enterprise version of reCaptcha v3, in which Google creates a customized reCaptcha for enterprises that want more granular data about users’ risk levels to protect their sites from malicious users and bots.

But this new, risk-score-based system comes with a serious trade-off: users’ privacy.

According to two security researchers who’ve studied reCaptcha, one of the ways Google determines whether you’re a malicious user is whether you already have a Google cookie installed on your browser. It’s the same cookie that lets you open a new tab in your browser without having to log back in to your Google account every time. According to Mohamed Akrout, a computer science PhD student at the University of Toronto who has studied reCaptcha, Google also appears to use these cookies to determine whether someone is human in reCaptcha v3 tests. In an April paper, Akrout wrote that reCaptcha v3 simulations run on a browser with a connected Google account received lower risk scores than those run on browsers without one. “If you have a Google account it’s more likely you are human,” he says. Google did not respond to questions about the role its cookies play in reCaptcha.