About a year ago it was common to hear people (often tech-savvy ones) explain that privacy wasn't important to them because they had nothing to hide. Recently this argument has become less and less frequent. What happened?

Cambridge Analytica happened

You have probably heard about Cambridge Analytica, but in case you haven't, or haven't caught up with the story, let me sum it up for you: in the first months of 2018 it came out that, starting in 2014, an app that created psychological profiles based on one's Facebook account had collected a huge amount of data, not only about the app's users but also about their friends. At the time this practice was allowed by Facebook's terms of service, but the problems began when the app's creator shared the data with Cambridge Analytica, a company that created microtargeted ads to sway the 2016 US presidential election in favour of Donald Trump.

The fact that Facebook took action against this application just days before the Guardian and the New York Times reported the story made trust in the service plummet, and many of us started paying more attention to our privacy.

This scandal truly shocked the tech industry and forced governments and companies to start thinking about what they can do to protect users' privacy.

What are companies doing?

Apple recently released a privacy-focused ad, the first sign of a changing industry. Credit: Apple’s “Private side” commercial

Of all the tech companies, Apple was the first to embrace this trend: it all started with Tim Cook equating Facebook's and Google's ad-based business models to surveillance and stating that Apple believes privacy is a basic human right.

So far, this has had only two effects: a series of video and billboard commercials that touted privacy as one of the iPhone's biggest advantages, and a shift in the marketing strategy for Face ID, which is now presented as a security feature rather than a more practical and futuristic way of unlocking your phone.

While Apple's marketing commitment has yet to turn into action, other companies, such as Microsoft, have already started updating their products to give users more control over their data, for example by extending features required by the GDPR to non-EU countries.

What are governments doing?

The Cambridge Analytica scandal, being so interconnected with politics, had an effect on government policies as well, and we are starting to see many lawmakers discussing policies to protect consumer data.

Cambridge Analytica used consumer data to target voters; lawmakers are reacting with better privacy laws

2018 saw the rollout of the EU's General Data Protection Regulation (which arrived with perfect timing, considering it was approved before the Cambridge Analytica story broke), and many are looking at it as a starting point for more advanced legislation, such as the California Consumer Privacy Act, taking effect in 2020.

How will we build better AIs without data?

Many tech enthusiasts are worried that more stringent data regulation will cause AI research to slow down, since training AI algorithms requires huge amounts of data. I'd argue that more advanced privacy laws do not necessarily force companies to collect less data about their users, but rather to use it ethically and transparently, so the future of AI shouldn't be at risk.

Conclusion

Privacy has been overlooked by consumers and voters, and thus by companies and governments, for many years, but things are starting to change: stricter privacy laws are being approved and companies are taking serious steps toward better protecting your data.

There is still a lot to do, but I think the outlook is very positive, and this trend shows that ultimately consumers hold the most power in the tech industry.