Differential privacy is not new. In fact, it's fairly common. Essentially, it ensures that an AI model can't memorize information that is unique to you and could therefore reveal your identity. Instead, the model only learns from patterns that show up en masse. What thousands of people type into Gmail might become a Smart Reply auto response, but the personal data you enter will never show up in a stranger's email.
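That "patterns en masse" idea has a precise form: a differentially private query adds carefully calibrated noise so its answer barely changes whether or not any one person's data is included. Here's a minimal sketch in Python using the classic Laplace mechanism on a simple count. This is purely illustrative, not TensorFlow Privacy code, and the function name is made up for the example:

```python
import math
import random

def private_count(records, predicate, epsilon=1.0):
    """Count matching records, with Laplace noise calibrated to epsilon."""
    true_count = sum(1 for r in records if predicate(r))
    # Any one person joining or leaving changes the count by at most 1,
    # so Laplace noise with scale 1/epsilon is enough to hide any individual.
    u = random.random() - 0.5
    noise = -(1.0 / epsilon) * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise
```

A smaller epsilon means more noise and stronger privacy; a larger one means a nearly exact answer. TensorFlow Privacy applies the same principle during model training rather than to simple counts.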

By sharing TensorFlow Privacy, Google hopes developers will add this type of protection to other machine learning tools -- and maybe even improve on it. To encourage adoption, Google promises TensorFlow Privacy is easy to use: it requires only "some simple code changes" and hyperparameter tuning. You can access the tool on GitHub, and if you want to dive deeper, Google has also released a technical whitepaper.
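Those "simple code changes" largely amount to swapping in a differentially private optimizer: the DP-SGD technique TensorFlow Privacy implements clips each training example's gradient and adds Gaussian noise before updating the model. A rough NumPy sketch of one such step follows; the function name and signature are illustrative, not TensorFlow Privacy's actual API:

```python
import numpy as np

def dp_sgd_gradient(per_example_grads, l2_norm_clip, noise_multiplier, rng):
    """Average per-example gradients with clipping and Gaussian noise, DP-SGD style."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Bound each example's influence by scaling its gradient to at most l2_norm_clip
        clipped.append(g * min(1.0, l2_norm_clip / (norm + 1e-12)))
    total = np.sum(clipped, axis=0)
    # Noise proportional to the clip norm hides any single example's contribution
    noise = rng.normal(0.0, noise_multiplier * l2_norm_clip, size=total.shape)
    return (total + noise) / len(per_example_grads)
```

The clip norm and noise multiplier are exactly the kind of hyperparameters Google says need tuning: they trade model accuracy against the strength of the privacy guarantee.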

At a time when we know companies (looking at you, Facebook) are mining our data, this type of privacy might leave everyone feeling a little less exposed.