‘Deepfakes’ is the name given to video and audio generated by artificial intelligence (AI) that depicts something, or someone doing something, that didn’t, in fact, occur.




Advances in deep learning and AI continue to make deepfakes more realistic, to the extent that in many cases it’s becoming very difficult to distinguish what is real from what is generated by AI.

Not convinced? Give it a go on this website, and see if you can determine which is a real photo and which is computer generated.

With the number of deepfakes doubling within the last year, and the technology continuously advancing, there are clear concerns surrounding the various ways they could be used. Many predict that deepfakes could provide a dangerous new medium for information warfare, helping to spread misinformation or ‘fake news’. The majority of their use, however, is in the creation of non-consensual pornography, which most frequently targets celebrities, owing to the large amounts of sample data in the public domain.

While this, of course, opens doors for threat actors to carry out extortion, there are also concerns that deepfakes and AI will increasingly be used in elaborate phishing campaigns. In March this year, scammers were thought to have leveraged AI to impersonate the voice of an executive at a UK-based energy business, convincing an employee to transfer hundreds of thousands of dollars to a fraudulent account.

More recently, however, it’s emerged that these concerns are valid, and that not a lot of sophistication is required to pull such schemes off. As seen in the case of Katie Jones, a fake LinkedIn account used to ‘spy’ on and phish information from its connections, an AI-generated image was enough to dupe unsuspecting professionals into connecting and potentially sharing sensitive information.

Deepfake-generated accounts are on the rise

Based on her LinkedIn account, Katie Jones earned a degree in Russian studies from the University of Michigan, was a fellow at the Center for Strategic and International Studies in Washington, and worked at a top think tank; yet no records of her could be found in any of these places.

Her profile image, while at first glance resembling an ordinary photo, was created by a pair of dueling computer programs known as a GAN (generative adversarial network), the same technology behind deepfakes: a generator network produces candidate images while a discriminator network tries to tell them apart from real ones, and each improves by competing against the other. Because the image is newly generated rather than copied, it is not discoverable by a reverse Google image search.
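The adversarial loop behind a GAN can be illustrated with a deliberately tiny sketch (a hypothetical toy example, not the system used to create ‘Katie’): a one-parameter-family generator learns to mimic samples from a 1-D Gaussian while a logistic-regression discriminator tries to tell real samples from fake ones.

```python
import numpy as np

# Toy GAN sketch: generator G(z) = a*z + b tries to mimic real data
# drawn from N(4, 0.5); discriminator D(x) = sigmoid(w*x + c) tries to
# tell real from fake. Both are updated by hand-derived gradient steps.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def real_batch(n):
    # "Real" data the generator must learn to imitate.
    return rng.normal(4.0, 0.5, n)

a, b = 1.0, 0.0   # generator parameters
w, c = 0.1, 0.0   # discriminator parameters
lr, n = 0.05, 64

for step in range(2000):
    z = rng.normal(0.0, 1.0, n)
    fake = a * z + b
    real = real_batch(n)

    # Discriminator step: push D(real) -> 1 and D(fake) -> 0.
    dr, df = sigmoid(w * real + c), sigmoid(w * fake + c)
    grad_w = np.mean((dr - 1.0) * real) + np.mean(df * fake)
    grad_c = np.mean(dr - 1.0) + np.mean(df)
    w -= lr * grad_w
    c -= lr * grad_c

    # Generator step: push D(fake) -> 1, i.e. fool the discriminator
    # (non-saturating generator loss -log D(G(z))).
    df = sigmoid(w * (a * z + b) + c)
    grad_a = np.mean((df - 1.0) * w * z)
    grad_b = np.mean((df - 1.0) * w)
    a -= lr * grad_a
    b -= lr * grad_b

print(f"generator output mean after training: {b:.2f} (real data mean: 4.0)")
```

After training, the generator's output mean (`b`, since the noise has zero mean) drifts toward the real data mean, which is the same dynamic that, at vastly larger scale, lets image GANs produce faces indistinguishable from photographs.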

What’s intriguing is that ‘Katie’ was accepted into the networks of powerful and influential figures, including a Deputy Assistant Secretary of State, a senior aide to a senator, and leading economist Paul Winfree, without raising the alarm.


After the fake account was revealed and investigated, Malwarebytes researcher Chris Boyd attributed the success of the fake profile, including the fabricated photo of ‘Katie’, to its sheer ordinariness.

“The threat from deepfaked snapshots comes from their sheer, complete, and utter ordinariness. Using all that processing power and technology to carve what essentially looks like a non-remarkable human almost sounds revolutionary in its mundaneness.”

Even though the photo of ‘Katie’ was found to contain telltale signs of a deepfake (misaligned eyes, a strangely blurred background, and odd artefacts on facial features), her connections did not detect any red flags. They also claimed never to have shared any sensitive information with the fake profile.

The incident shows how readily professionals take the authenticity of profiles on networking platforms for granted, and how vulnerable they are to spear-phishing schemes.

In his blog, Boyd said that “deepfakes are definitely here to stay. I suspect they’ll continue to cause the most trouble in their familiar stomping grounds: fake porn clips of celebrities and paid clips of non-celebrities that can also be used to blackmail victims.”

However, such deception is increasingly reaching into the domain of government bodies and enterprises. In one case, a retired CIA officer who was contacted by a foreign agent posing as a recruiter on LinkedIn was later sentenced to 20 years in prison for passing on top-secret information.

The rise in such cases has prompted European officials to issue warnings to LinkedIn users, especially those working for government bodies, to be wary of connection requests.

Moreover, fake LinkedIn accounts, often with more than 500 connections and a network of influential profiles, can be sold on black markets, giving buyers a head start in the phishing game by letting them pose as a legitimate and highly regarded contact.

LinkedIn states it’s combating the spread of deepfakes by shutting down fake accounts as soon as they are detected; at the same time, national bodies such as the US Defense Department are working with experts to develop technology that can automatically detect deepfakes.



Fake social media profiles are nothing new, of course, but deepfake technology could make them harder to spot. If nothing else, the rise of their use should be another reminder for business leaders to think twice before hitting ‘Connect’ on LinkedIn.