This is the story of what both I and Google's engineers considered to be an SSRF vulnerability in Google Calendar – but which turned out to be a caching mechanism gone rogue. And while the result was a bit anticlimactic, the journey I went through along with Google's security team was quite a fascinating (albeit confusing) one, so I think it's worth telling. Let's jump right in!





Import calendar from URL

Google Calendar has many cool features. One of them – as simple as it sounds – allows you to add a remote calendar by providing its URL:





At this point, Google’s server-side fetcher will try to grab the events from the remote calendar and add them to your own calendar. In other words, we can invoke HTTP requests from Google’s servers. Accessing external URLs is obviously the intended behavior – but perhaps we can abuse the mechanism to access internal resources (such as localhost)? Or, put differently, maybe we can SSRF this feature?
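For context, remote calendars like this are typically served in the iCalendar (.ics) format defined by RFC 5545. A minimal file of the kind the fetcher would retrieve looks roughly like this (the UID and dates are made-up placeholders):

```
BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//example//demo//EN
BEGIN:VEVENT
UID:demo-1@example.com
DTSTART:20200101T120000Z
DTEND:20200101T130000Z
SUMMARY:Demo event
END:VEVENT
END:VCALENDAR
```

The key point is that the fetcher has to reach out to whatever URL you give it before it can parse anything – which is exactly the behavior we want to poke at.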





First attempt – seems secure

Initial testing did not yield any interesting results. When accessing external (internet-facing) URLs, the fetcher generates a variety of error messages about the targeted endpoint (anything from ‘wrong format’ to ‘unreachable’) – but when inputting an internal address, I always got the same error response, regardless of whether the ADDRESS:PORT represented an existing endpoint or a fake one:

When in doubt – Scan!

Just to make sure I wasn’t missing anything (perhaps there really is no ‘127.0.0.1:443’ endpoint), I decided to run a quick automated scan against common ports on localhost, using Fiddler’s Composer to send sequential requests. To my surprise, the automated script started producing different results: the ‘non-existing’ ADDRESS:PORT combinations suddenly began generating different responses:
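The sequential-scan idea can be sketched as follows. Note that `send_request` here is a stand-in for whatever actually submits a URL to the calendar-import feature (captured via the web proxy) – the real endpoint and parameters are not shown, and `calendar.ics` is just an illustrative path:

```python
# Sketch: one candidate URL per common port, fed sequentially into the
# calendar-import feature, recording the error message returned for each.

COMMON_PORTS = [21, 22, 23, 25, 80, 110, 143, 443, 3306, 8080]

def candidate_urls(host="127.0.0.1", ports=COMMON_PORTS):
    """Build the calendar URLs to feed into the import feature, one per port."""
    return [f"http://{host}:{port}/calendar.ics" for port in ports]

def scan(send_request, host="127.0.0.1"):
    """Run the scan; `send_request` submits one URL via the import feature
    and returns the error text that comes back (placeholder here)."""
    results = {}
    for url in candidate_urls(host):
        results[url] = send_request(url)
    return results
```

In practice, pacing matters too – as described later, scanning too fast trips rate-limiting, so a real run would need a delay between requests.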





The results became even clearer when looking at which ports seemed open and which seemed closed. I switched web proxies and ran the scan through Burp’s ‘Intruder’ (where the ‘payload’ is the tested port, and the differing response ‘Length’ indicates whether the port seems closed or open). The result, as I’m sure you’ll agree, is very consistent with what you’d expect from a real port scan:
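The inference behind reading the Intruder table can be sketched in a few lines: assuming all ‘closed’ ports return the same generic error (and hence share one common response length), any port whose length deviates from that baseline looks ‘open’. The lengths in the test below are made-up values for illustration:

```python
# Sketch: infer apparently-open ports from response lengths, Burp
# Intruder-style. Ports whose response length differs from the most
# common length are flagged as (apparently) open.
from collections import Counter

def classify_ports(lengths):
    """lengths: {port: response_length}. Returns sorted list of ports
    whose length deviates from the most common (baseline) length."""
    baseline, _ = Counter(lengths.values()).most_common(1)[0]
    return sorted(port for port, n in lengths.items() if n != baseline)
```

Of course, this heuristic only tells you that the responses *differ* – which, as it turned out, is exactly where things went wrong.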





From the image it seems clear that ports 80, 443 and 22 are open (when accessed from Google’s internal server), while the other tested ports are closed. I quickly launched a scan against another internal server (guts-remedy-linux-prod03.vm.corp.google.com) and found that port 22 was open there too, as expected:





At this point I was pretty sure that:

- I was sending internal HTTP GET requests, and I could do so with an automated script.
- I was able to receive some partial information about the response (whether a port is open or not).

Or, to put it in other words, I had a valid ‘Blind SSRF’ in Google Calendar. Sweet! Time to report.





Google engineers reproduce the issue and open a bug

There was quite a bit of ‘back and forth’ between me and Google’s team, as reproducing the issue was a bit tricky – to generate these results, you had to scan automatically, but not too fast (as there is also some rate-limiting involved). Eventually the issue was reproduced, but a new glitch was found:









So, Google’s team was able to reproduce the issue, but they were receiving different results than I was – which they attributed to the UI interaction. To test Google’s theory, I spun up a new, ‘clean & fresh’ calendar account and ran the test again without any UI interaction. My results were consistent with my previous tests.

At this point, Google’s team agreed that there is something there, and opened a bug:





Product team finds the issue to be non-security related

After doing some research, Google’s product team found that the issue was not a security-related one. The error messages I had attributed to ‘open ports’ were in fact the result of a caching mechanism not behaving as expected:
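To make the root cause concrete, here is a toy illustration – emphatically *not* Google’s implementation, whose details were never shared with me. The point it demonstrates: if error responses are cached and the cache misbehaves (here, a deliberately collision-prone key), the error returned for a given ADDRESS:PORT reflects cache contents rather than the probed endpoint, so a fast sequential scan sees ‘different’ responses for ports that are in fact all equally unreachable:

```python
# Toy model of a misbehaving error cache. A tiny bucket table forces key
# collisions, so most URLs return an error that was cached for a *different*
# URL -- producing response variation that has nothing to do with port state.

class SloppyErrorCache:
    def __init__(self, buckets=4):
        self.buckets = buckets   # deliberately tiny => frequent collisions
        self.table = {}

    def fetch(self, url, probe):
        """probe(url) -> error string; only called on a cache miss."""
        key = hash(url) % self.buckets
        if key not in self.table:
            self.table[key] = probe(url)
        return self.table[key]   # may be another URL's cached error
```

Scanning 20 ports through this cache triggers at most 4 real probes; the rest silently receive someone else’s cached error – a pattern that, from the outside, looks exactly like ports responding differently.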

The report was closed, and I accepted my defeat.





Final thoughts

I have great respect for Google’s team for not letting go of this one until they had sorted it out (the caching issue, by the way, was fixed – the same errors are no longer returned). I can’t say I wasn’t disappointed to find that, at the end of the journey, my SSRF was nothing more than a ghost, but I certainly enjoyed the ride. The main takeaway for me is not to get discouraged by initial failed results – sometimes a deeper dive can take you down complex, fascinating paths. This time it led to a misbehaving cache; maybe next time it will lead to an RCE :)