







The rise of AI has brought a new type of media to the web: deepfakes. Deepfake technology lets a user superimpose one face over another in photos and videos, using deep learning to make the “fake” face fit and behave exactly like the real one. The result is a fake but often strikingly convincing video.

















Platforms like Reddit and Pornhub are trying to ban deepfakes. But apps like FakeApp survive, and deepfake videos continue to spread across the internet.





Perhaps there’s no real harm in watching Indiana Jones with Nicolas Cage’s face, or seeing what Titanic would look like with Britney Spears in it. But many people are already worried that deepfakes could be used for large-scale crime and fraud. The most pressing concern may be their use in pornography, but they could also be used to distribute fake news, or for extortion, or even terrorism.





Magnifying the problem is the fact that, at least for the moment, most of the world’s internet users are unaware of deepfakes. A tech-savvy creator can make a very real-looking video and distribute it on a wide variety of platforms without most people noticing.





Blockchain can help address this problem, even if it won’t fully resolve it. Keeping the video’s original hash value as a “signature” on a blockchain, for example, could help authenticate originals and weed out fakes. Any video whose hash value doesn’t match the original hash value on the blockchain would be identifiable as fake.





What is hashing?





To understand hashing, think about how websites store your passwords. Any website you log into needs to check that you’ve entered your password correctly. But a site that wants to keep your information secure shouldn’t store your password itself on its servers. What it does instead is called hashing.





When you sign up, the site takes the password you choose and “hashes” it by running it through a one-way mathematical function. The output is a unique string of characters. It’s not your password, but only your password will produce that output. Changing just one character of the input results in a totally different hash. And because the function is one-way, there’s no practical way to reverse-engineer your password from the hash, so it’s safe to store the hash on the servers.







Then, when you come back to the site to log in, it hashes whatever you enter in the “password” box using that same function. The site checks this new hash against the hash stored on its servers. If the two match, the site knows you’ve entered your password correctly, because in practice only the exact same input will produce the same hash output.
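The store-then-verify flow above can be sketched in a few lines of Python. This is an illustration only: real sites also add a per-user salt and use a deliberately slow function such as bcrypt or PBKDF2 rather than plain SHA-256.

```python
import hashlib

def hash_password(password: str) -> str:
    # One-way hash: easy to compute, impractical to reverse.
    return hashlib.sha256(password.encode("utf-8")).hexdigest()

# At sign-up: store only the hash, never the password itself.
stored_hash = hash_password("correct horse battery staple")

# At login: hash whatever the user typed and compare with the stored hash.
def check_password(attempt: str) -> bool:
    return hash_password(attempt) == stored_hash

print(check_password("correct horse battery staple"))  # True
print(check_password("wrong password"))                # False
```

Note that the server never sees a stored copy of the password in this flow, only the two hashes it compares.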





For a very simple example, imagine that your password is the number two, and the calculation used to hash it is adding two. Two plus two is four, so the “hash” stored on the website would be four. Every time you log in, the website takes whatever you type as your password and adds two. If you enter “2”, it computes two plus two and compares the new hash (four) with the hash stored on its servers (four). Correct password, and you may log in. But if you enter one instead of two, it computes one plus two and compares that hash, three, with the stored hash (four), and finds that they’re not the same. Wrong password, no logging in for you.







Obviously, real-world hashing uses far more complex calculations and works on long strings of characters, not just numbers. But the principle is the same. You can experiment with this hash generator to see how even a tiny change in the input completely changes the hash output.
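You can see the same effect with Python’s standard `hashlib` module: flipping a single character of the input yields an entirely unrelated SHA-256 digest.

```python
import hashlib

# Two inputs that differ by exactly one character.
h1 = hashlib.sha256(b"hello world").hexdigest()
h2 = hashlib.sha256(b"hello worle").hexdigest()

print(h1)
print(h2)
print(h1 == h2)  # False -- the digests share no resemblance
```

This "avalanche effect" is what makes a hash useful as a fingerprint: any tampering, however small, changes the fingerprint completely.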







So how is this applicable to deepfakes? Like any digital file, a video is stored as a sequence of bytes, and those bytes can be hashed. If I upload a video and store its hash on the blockchain, any subsequent change to the video file will change its bytes, and thus change the hash. Just as a website checks your password hash against the hash it has stored whenever you log in, a video site could check an upload’s hash against the original’s (if the original was known) to see whether the file had been modified.
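A minimal sketch of that check, assuming the original hash has already been recorded somewhere trustworthy (on a blockchain, in this article’s scenario). Hashing in chunks means even a multi-gigabyte video never has to fit in memory; the file path and reference hash here are illustrative placeholders.

```python
import hashlib

def file_sha256(path: str) -> str:
    # Hash the file's bytes in 8 KB chunks to handle large videos.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def is_unmodified(path: str, original_hash: str) -> bool:
    # Any edit to the file -- even one byte -- changes the digest.
    return file_sha256(path) == original_hash
```

In practice the comparison itself is trivial; the hard part, as the rest of this article discusses, is making sure the recorded original hash can be trusted.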







If deepfakes are widely adopted -- and that seems likely -- hash values will be a way to verify the authenticity of videos. Tools will probably emerge that authenticate videos with one click, but it will be difficult to ensure that the tools themselves haven’t been tampered with unless there’s an immutable blockchain record of what’s genuine, as well as hashes to check everything against.







With the wide adoption of FakeApp and other apps like it, it will also become easier for ordinary people to produce fake videos, and there will inevitably be cases in which people use fake videos to strike terror or make money.







Imagine, for example, a video of US President Trump stating that Bitcoin is illegal, and anyone holding or trading Bitcoin will be considered a criminal. Considering the mercurial temper of Trump and the occasionally gullible nature of humans, a convincing video like this would be very likely to trigger a flurry of selling in the market. The producer of the video, who might have shorted the market just prior to releasing the video, could make a killing.







Video calling could be even more alarming. Without a way of checking hashes in advance, you basically have no way of confirming who you are talking to. How can you tell a genuine plea for help from a family member from a deepfake? How can you tell whether the people in your business conference call are real? To prove authenticity, we may need to verify others’ identities, devices, addresses, or even each frame of the video. Perhaps there will even be a hardware device, similar to crypto hardware wallets, that we will use to confirm identity by linking with the blockchain in some way. Ideally, we will be able to develop a multisignature solution.





Because blockchains offer an immutable public record, falsified data that’s stored on the chain should generally be easy to spot. For example, if a company says it will produce 1000 blockchain-linked IDs, but secretly produces 1001 to sneak a duplicate ID under the radar, this extra entry will be apparent to anyone looking at the chain.







Here’s another example. A milk producer could technically alter an expired carton’s expiration date and write the new date into the blockchain. But as long as the original date is on the chain, the fraud will eventually be discovered. When customers find that the milk has spoiled, they can check the chain and see both the original expiration-date entry and the later, fraudulent one.







This same method can be applied to deepfake videos. We can use the blockchain to trace a video’s source. If the source is untraceable, we can assume the video is fake. If the source is clear, we can then check the hash value of the original video against the values published by government websites or major media outlets like the BBC, the WSJ, etc. If your copy of a video has a different hash value from those on trusted platforms, yours is probably the fake.
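That lookup could work roughly as follows. The `trusted_hashes` registry here is a hypothetical stand-in for an on-chain record or a hash list published by a trusted outlet; the video ID and byte strings are placeholders.

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Hypothetical registry mapping a video's identifier to its original hash,
# standing in for an on-chain record published by a trusted source.
trusted_hashes = {
    "press-briefing-2018-03-01": sha256_hex(b"original video bytes"),
}

def verify(video_id: str, video_bytes: bytes) -> bool:
    # Untraceable source (no registry entry) -> treat as fake.
    expected = trusted_hashes.get(video_id)
    return expected is not None and sha256_hex(video_bytes) == expected

print(verify("press-briefing-2018-03-01", b"original video bytes"))  # True
print(verify("press-briefing-2018-03-01", b"doctored video bytes"))  # False
```

Note that the scheme can only flag divergence from a known original; it says nothing about a video that was fabricated and then registered as if it were genuine, which is the limitation the next paragraphs turn to.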







The blockchain, of course, does not and cannot solve the problem of human lies and falsification. To solve that problem we would need not only decentralization but depersonalization, replacing human beings with programs entirely. Only a depersonalized blockchain could fully solve the data-forgery issue, automating all blockchain entries via the internet of things and keeping humans out of the equation so that the record remains accurate.







For the moment, blockchain technology alone cannot necessarily prevent falsified data from being written to the chain. The blockchain is only responsible for recording the evidence and ensuring that the data remains available. But the fact that locks don’t prevent all theft doesn’t make locks meaningless. Even if blockchain isn’t a perfect guard against deepfakes, it may be one of the best options we have.









