Fake Facebook profiles and automated Twitter bots have been around since the beginnings of social media.

One of my siblings once created a fake family member called Fred, a made-up character based on the Zynga game FishVille. Cousin Fred amused us all for several months, but of course, nobody ever took him seriously.

Fast forward to 2018 and now we're living in an era where it's sometimes impossible to tell a fake profile from a real one. The fake personas are no longer cartoon characters. They could very well be your online doppelganger.

That's because, with today's AI technology, it's possible to create a believable imitation of someone using their publicly available online data. This particular AI manipulation technique is known as "automated laser phishing".

Most of you have probably seen what happens when a friend on Facebook gets hacked. If the hacker tries to contact you on Messenger, the language is usually awkward and grammatically incorrect, and what the hacker says bears little resemblance to what your friend would actually say.

A sophisticated AI wouldn't be so clumsy. If an AI were crafting messages using your persona, it would likely be capable of closely matching your voice and opinions.

It's not just your online persona that can be copied and manipulated, it's images of you too.

Here in New Zealand, Victoria University lecturer Tom White created an image manipulation tool called SmileVector, the result of several years of research into the potential of generative neural net models.

In 2016, he released SmileVector as a Twitter bot that used neural nets to automatically add or remove smiles from photos.

After proving the success of SmileVector, White went to work on developing "a more controllable animation tool". In collaboration with Ian Loh, a master's degree student at Victoria University, White created a tool called TopoSketch, which used a neural network to create animations from a dataset of faces.
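The article doesn't spell out how these tools work under the hood, but a common technique behind smile-adding apps like this is attribute-vector arithmetic in the latent space of a generative model: estimate the direction that separates smiling faces from neutral ones, then nudge a photo's encoded representation along that direction before decoding it back into an image. Here's a minimal numpy sketch of the idea (the toy latent codes and function names are illustrative assumptions, not White's actual code):

```python
import numpy as np

def attribute_vector(with_attr, without_attr):
    # Estimate an attribute direction as the difference between the mean
    # latent codes of images that have the attribute and those that don't.
    return with_attr.mean(axis=0) - without_attr.mean(axis=0)

def apply_attribute(latent, direction, strength):
    # Shift a latent code along the attribute direction; a generative
    # model would then decode the shifted code back into an image.
    return latent + strength * direction

# Toy latent codes standing in for a real encoder's output.
rng = np.random.default_rng(42)
smiling = rng.normal(0.5, 0.1, size=(20, 8))
neutral = rng.normal(-0.5, 0.1, size=(20, 8))

smile_dir = attribute_vector(smiling, neutral)

z = neutral[0]
z_smile = apply_attribute(z, smile_dir, 1.0)        # "add" a smile
z_back = apply_attribute(z_smile, smile_dir, -1.0)  # "remove" it again
```

Animation tools built on the same models follow similar logic: interpolate smoothly between two latent codes and decode each intermediate point as a frame.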

Despite the success of his AI apps, White has mixed feelings about how the technology could be used. On the one hand, he thinks it has enormous potential. "It enables new types of creative mediums not possible before," he said.

But White is also concerned about the potential negative impacts on society, such as the ability to create "convincing disinformation" with neural networks.

"Even when these technologies are not used directly, indirectly they can still erode the public's confidence in news reporting," White told me.

The danger is especially apparent in AI's ability to manipulate video.

There's already a disturbing trend on the web for face-swapped celebrity porn made using the latest AI techniques. Reddit recently banned this content from its platform, ruling that it falls under the company's restrictions on "involuntary pornography".

It's also possible now to realistically manipulate both video and audio together. Several experiments have been done to prove how easily someone with the right tools could, for example, create a fake video of Donald Trump declaring war on North Korea.

While there hasn't yet been a case of "synthetic media" fooling the public on a big news story, it's surely only a matter of time, given the tools available on the internet.

One way to combat this is for companies like Facebook and Google to use the exact same tools to identify fake videos, and automatically exclude them from their products. Facebook and YouTube already have the ability to automatically tag and categorise content using machine learning, so adding a "fake video filter" shouldn't be too tricky given the AI prowess at those companies.

AI researchers like White are also working on solutions. White told me he was "currently working on tools which help the public differentiate between genuine and manipulated media".

This feels like the tip of the iceberg, though. We've reached an interesting inflection point in our ability to create fake media experiences. In April, a holographic version of Roy Orbison will embark on a tour of the United Kingdom. While this will be a pre-scripted show, how long until we see a holographic Roy Orbison controlled in real time by an AI? That would enable a different, "unique" performance each night.

Indeed, you can have a fake multimedia experience in your own home, simply by putting on a virtual reality headset. Last year, the band Coldplay live streamed one of their concerts in VR, enabling anyone anywhere in the world to attend. Who's to say that experience wasn't as "authentic" as being at the concert in person?

These are complex, even philosophical, questions that society will increasingly grapple with.

But in the short term, we must find solutions to counter AI manipulation because there's a clear danger of it being used for nefarious purposes. The Russian government's supposed meddling with Facebook feeds before the US election is one thing. Creating a fake, but believable, video of Vladimir Putin saying he's just launched a nuclear missile is quite another.

Richard MacManus (@ricmac) founded tech blog ReadWriteWeb in 2003 and has since become an internationally recognised commentator on what's next in technology and what it means for society.