Shifts like these have always happened. But today, thanks to meme culture, online audiences can flip hate speech on its head faster than ever. In April, the Spanish political party Vox tweeted a meme referencing “The Lord of the Rings”: a picture of Aragorn, digitally manipulated to include the party logo and a Spanish flag. Aragorn was facing down orcs, who sported symbols for feminism, communism, media outlets and, oddly, an Android ghost emoji striped with rainbow colors. Clever users dubbed him Gaysper, and as the image went viral, he was transformed from an intended insult into a stand against hatred through mockery and humor.

Pepe, Gaysper and other once-hateful symbols teach us that tech companies should institutionalize impermanence — they should build their policies to continually adapt to the changing world. Those who call on the companies to take steps to stem the tide of xenophobia, racism and the targeting of minorities that we’re seeing around the world should keep this in mind as well.

Local activists in Hong Kong transformed Pepe into an emoji on encrypted platforms, dressed as a protester or a journalist. “Symbols and colors that mean something in one culture can mean something completely different in another culture,” a protester told The Times, “so I think if Americans are really offended by this, we should explain to them what it means to us.” Most protesters seem to understand the frog as a symbol of youth and seem unaware of his link to the alt-right. If Google, Twitter and Facebook had built image recognition to take down all pictures of Pepe the Frog, the movement might have been robbed of a critical rallying cry.

So what can platforms do? First, they can fill the public information vacuum around hate speech policy. While platforms have been doing better at making broad commitments against hate, like Facebook’s recent announcement that it would take a wider view of hate speech and extremist-related content, what’s missing is the ground-floor view. Tech companies’ terms of service don’t explain how, in practice, they decide whether nonhistorical content is hateful, or when it has stopped being hateful. They do not state how shifts in context affect their operations. The public does not know whether companies revisit their enforcement calculus and, more important, the contours of how that might occur.

Second, platforms should consider attaching a periodic expiration date to exclusionary actions. This is not to render those actions toothless but to provide a mechanism for company policy to recalibrate at set points and adjust to the fluidity of the internet and evolving social mores. This can help the targets of hate speech as well: companies may be more willing to act if everyone understands that their processes are constantly evolving.