LOS ANGELES — For the past six weeks, XBIZ has been conducting an extensive investigation into the deletion of an unusually large number of Instagram accounts of adult performers since the beginning of this year.

Many of the victims, frustrated by what they see as a lack of transparency in the Instagram moderation process and the notoriously unresponsive customer service practices of the photo-sharing platform (and of its parent company, Facebook, and of fellow social media giant Twitter), suspected that the deletions were part of the ongoing “War on Porn” waged by anti-sex, religiously motivated lobbies.

Complicating the situation, in late November 2018, a new Twitter user account was created exclusively to target adult performers, harass and mock them, and flaunt its own impunity.

That account, operating behind the persona of someone named “Omid,” bragged about having the ability to “take down” the Instagram accounts of adult performers. Starting around Christmas 2018, it ramped up its activity, and that of a backup account, posting snapshots of Instagram accounts it had reported and seen deleted, at a rate of more than two per day.

As of this writing (May 28, 2019), both “Omid” accounts continue operating with total impunity. The number of victims of the campaign in the adult community, documented by XBIZ for our coverage, stands at almost 300 performers and companies, which collectively lost the ability to communicate with several million followers.

Instagram Explains (Sort of)

Are the world’s biggest social media platforms attempting to expel adult performers in the name of a hodgepodge of confusing, contradictory and arbitrary standards of “decency”? Are they trying to censor the voices of a historically marginalized community (sex workers), bending the knee to deceitful lobbies funded by wealthy religious families?

Or are tech founder figures like Mark Zuckerberg (Facebook, Instagram), Adam Mosseri (Instagram) and Jack Dorsey (Twitter) far more competent at creating internet wealth and managing programmers and engineers than at making moral and ethical decisions about the diverse content shared by a complex universe of users?

XBIZ spoke to dozens of adult performers who have fallen victim to the Instagram deletions, and we also spoke with several sources at Facebook and Instagram. Twitter did not respond to our requests for comment.

For the last few weeks, we have also spoken directly with Stephanie Otway, Instagram’s Brand Communications Manager (BCM), the official corporate spokesperson on the topic. Otway gave us valuable insight on how the company perceives the situation.

We also shared with Instagram some of the victims’ evidence (always with their explicit approval) about their lost accounts, resulting in the restoration of some of them.

Otway exclusively discussed with XBIZ Instagram’s awareness about the “Omid” campaign, the existence (or lack thereof) of “shadowbanning” and many other topics.

Over the last few weeks, Instagram's BCM told us that the company was listening and, although she flatly denied the existence of a concerted campaign against adult performers, she assured us that changes would be implemented.

On Thursday, Otway revealed to us Instagram’s plans to roll out a new "appeals" process for post (but not account) deletions.

"While our content reviewers receive extensive training to apply our policies accurately, we understand that we won't always get it right," Otway told us via email. "When we make the wrong call, we want to give you [the user] the option to let us know so we can correct our mistakes. We'll soon be introducing the ability to request a second review if you think our reviewers have made the wrong decision. We call these 'appeals.'"

"If your photo, video or post has been removed for violating our Community Guidelines, you will be given the option to 'Request a Review,'" Otway continued. "This sends the post for another review from a different reviewer. If we've made a mistake, we'll restore the content. Regardless of the outcome, we will send you a notification to let you know the decision we made. We're excited to announce that we'll be rolling this out over the coming months, starting with nudity, and other content types coming soon after."

Is Instagram aware of the “Omid” campaign?

"We are aware of this case and we did a couple of things," Instagram’s Otway wrote to XBIZ on April 25.

"We reviewed the [deleted] accounts in question and the majority were correctly removed for violating our sexual solicitation policies. A small number were removed in error and have been restored to Instagram. We allow sex positive content and discussion on Instagram, but given the wide-ranging ages and cultures of the people who use our service, we do not allow content that facilitates, encourages or coordinates sexual encounters between adults."

The company also claims that it tried to find a pattern in the reports and deletions claimed by “Omid,” but did not see a trend suggesting that one person was responsible for those accounts being removed.

Curiously, Instagram does not differentiate between commercial sexual encounters (full-service sex work) and non-commercial ones (hookups, dates, etc.).

Aren't “matchmaking” and “dating” sites then in violation of the terms of service, according to Instagram?

A quick glance at the tag #hookup shows a variety of posts referring to “sexual encounters between adults.” There are many examples (e.g., @wealthyrichsingles, @millionairesingles_) advertising “matchmaking” and “dating” services that clearly are “facilitating, encouraging or coordinating sexual encounters between adults."

“We would need to review the content on these hashtags to confirm,” said another Instagram source who preferred to remain anonymous. “Instagram relies on user reports and technology to help us proactively find content. Once an account or content is reported to us, we will review and take action on that account in line with our policies.”

To clarify Instagram’s official position on “sexual solicitation,” Stephanie Otway referred us to a page with Facebook’s definition of the term, part of the parent company’s “Community Standards.”


Could Instagram give an example of how the deleted posts were engaging in “sexual solicitation”?

“We’re unable to share that information for user privacy reasons,” said the Instagram source.

Why is Instagram’s spokesperson referring people to Facebook’s “Community Standards” documents?

In March 2012, Facebook bought Instagram for $1 billion (in cash and stock). In a Facebook post (which has since been deleted), Zuckerberg wrote that he believed Instagram and Facebook were “different experiences that complement each other.”

“That's why we're committed to building and growing Instagram independently,” he added.

That independence ended in September 2018, when the original Instagram founders left the company, and long-time Facebook employee Adam Mosseri was named as new Head of Instagram.

As of this writing (May 28, 2019), Instagram follows Facebook’s Community Standards.

So, what are Facebook (and Instagram)’s Community Standards?

The Introduction to Facebook’s (and Instagram’s) Community Standards describes their aim as making Facebook a place where people feel "empowered to communicate."

"That’s why we have developed a set of Community Standards that outline what is and is not allowed on Facebook," reads the Introduction. "Our Standards apply around the world to all types of content. They’re designed to be comprehensive — for example, content that might not be considered hate speech may still be removed for violating our bullying policies." Facebook insists its standards are designed to "encourage expression and create a safe environment." The company claims to base their content policies "on input from our community and from experts in fields such as technology and public safety."

It is unclear who these “experts” are, or their cultural backgrounds, national origins or religious affiliations, or whether these various “fields” include or exclude ethics, philosophy, religion, politics or law enforcement.

The three overarching principles of Facebook’s Community Standards are Safety, Voice and Equity.

What on earth are "Safety," "Voice" and "Equity"?

To ensure “Safety,” Facebook is committed to “removing content that encourages real-world harm, including (but not limited to) physical, financial, and emotional injury.”

“Voice” is defined as a commitment to embrace “different views.” Facebook claims to “err on the side of allowing content, even when some find it objectionable, unless removing that content can prevent a specific harm. Moreover, at times we will allow content that might otherwise violate our standards if we feel that it is newsworthy, significant, or important to the public interest. We do this only after weighing the public interest value of the content against the risk of real-world harm.”

There is no explanation of how newsworthiness, significance, importance, public interest value or real-world harm are assessed or by whom.

“Equity,” according to Facebook, has something to do with the fact that “our community is global and diverse.”

“Our policies may seem broad,” Facebook claims, “but that is because we apply them consistently and fairly to a community that transcends regions, cultures, and languages. As a result, our Community Standards can sometimes appear less nuanced than we would like, leading to an outcome that is at odds with their underlying purpose. For that reason, in some cases, and when we are provided with additional context, we make a decision based on the spirit, rather than the letter, of the policy.”

Their statement that “we apply them consistently and fairly to a community that transcends regions, cultures, and languages” does not explain who this “we” is or who has determined the consistency and fairness of the application of the standards.

Confusingly enough, the following two sentences refer to “underlying purpose” and “the spirit, rather than the letter, of the policy,” contradicting the preceding claim of consistency and fairness.

"We have content reviewers based all over the world who review content in many different languages, across many different cultures," an Instagram source explains.

Essentially, Facebook’s (and Instagram’s) Community Standards are said to be consistent and fair across regions, cultures and languages.

Unless, that is, the content in question is contextually qualified, at odds with underlying purposes, newsworthy, significant, important to the public interest or potentially harmful, in which case the application of the Standards is admittedly not consistent at all.

That sounds like a deliberate contradiction that makes Facebook’s (and Instagram’s) decisions impossible to argue against?

Exactly. Any grievance about, say, a post or account deletion is referred to the Facebook Community Standards, which makes trying to argue against their decisions an “Alice in Wonderland”-like experience.

But let’s go back to the official Instagram claim that the “majority” of the accounts reported by “Omid” were “correctly removed for violating our sexual solicitation policies.” What do they mean by “sexual solicitation policies”? And how is that related to nudity or sexual content?

Facebook’s (and Instagram’s) Community Standards include two sections regarding sexual material, “Adult Nudity and Sexual Activity” and “Sexual Solicitation.”

Facebook states that they “restrict the display of nudity or sexual activity” allegedly because “some people in our community may be sensitive to this type of content.” The nature of that “restriction” is not defined.

Facebook also claims that they “default to removing sexual imagery to prevent the sharing of non-consensual or underage content.” It seems like they remove *all* “sexual imagery,” including that among consenting adults, just in case it could be non-consensual or underage. Again, “sexual imagery” is not defined.

“Restrictions on the display of sexual activity also apply to digitally created content,” the policy continues, “unless it is posted for educational, humorous, or satirical purposes.” The nature of those “restrictions” is not specified, and neither are the decision-making process or bodies that determine the purposes of the images.

So, is all nudity forbidden?

No. Facebook (and Instagram) make allowance for a host of strange exceptions.

After vaguely claiming that “our nudity policies have become more nuanced over time,” Facebook states that the company understands “that nudity can be shared for a variety of reasons, including as a form of protest, to raise awareness about a cause, or for educational or medical reasons. Where such intent is clear, we make allowances for the content. For example, while we restrict some images of female breasts that include the nipple, we allow other images, including those depicting acts of protest, women actively engaged in breast-feeding, and photos of post-mastectomy scarring. We also allow photographs of paintings, sculptures, and other art that depicts nude figures.”

The company does not define the process for deciding which “images of female breasts that include the nipple” are to be “restricted” and which are not.

“We allow sex positive content and discussion in all its forms, but we draw the line when content facilitates, encourages or coordinates sexual encounters,” the Instagram source repeated. “We also restrict sexually explicit language that may lead to solicitation, and content that contains nudity or pornographic material.”

Does that vagueness open the door to a double-standard?

Yes, absolutely. A few weeks ago, for example, mainstream actress Sharon Stone posted her cover photo for Vogue Portugal, which prominently features one of her "breasts that include the nipple":

As of this writing (May 28, 2019) it is still there.

Ok—that’s that then for nudity. How about “sexual solicitation”?

Here is where Facebook (and Instagram) start echoing some of the talking points from the misinformation spread by religious fanatics who conflate all commercial sexual activity with “human trafficking.”

“As noted in Section 8 of our Community Standards (Sexual Exploitation of Adults),” the Community Standards explain, “people use Facebook to discuss and draw attention to sexual violence and exploitation. We recognize the importance of and want to allow for this discussion. We draw the line, however, when content facilitates, encourages or coordinates sexual encounters between adults. We also restrict sexually explicit language that may lead to solicitation because some audiences within our global community may be sensitive to this type of content and it may impede the ability for people to connect with their friends and the broader community.”

The “Sexual Exploitation of Adults” section states that Facebook removes “content that depicts, threatens or promotes sexual violence, sexual assault, or sexual exploitation, while also allowing space for victims to share their experiences. We remove content that displays, advocates for, or coordinates sexual acts with non-consenting parties or commercial sexual services, such as prostitution and escort services. We do this to avoid facilitating transactions that may involve trafficking, coercion, and non-consensual sexual acts.”

Did they just conflate “sexual acts with non-consenting parties” (i.e., rape) with “commercial sexual services, such as prostitution and escort services”?

Yes.

Doesn’t that deny a voice to pretty much all sex workers?

Pretty much. Unless they are the small number of former sex workers who have become vocal anti-sex work advocates after traumatic experiences or a religious conversion.

(Instagram disagrees about this assessment. "Sex workers may have a presence on Instagram," reiterated our Instagram source, "as long as they don’t attempt to coordinate a sexual encounter on the platform.")

What about porn or adult content?

Facebook (and Instagram) are also openly hostile to that kind of content. An Instagram insider (who spoke to XBIZ on background only) told us that the company doesn’t allow “nudity or pornography” and they will remove images and videos of people “displaying genitals or focusing in on fully exposed buttocks.” Explicit images of sexual intercourse are “prohibited.” Descriptions of sexual acts “that go into vivid detail may also be removed.”

Is Instagram "shadowbanning" sex workers?

The company claims that they are not “shadowbanning” people. “If you choose to follow someone, you will see that person's content in your Feed,” a source told us.

Oddly enough, XBIZ contacted a high-ranking Facebook employee who is extremely familiar with content moderation and they were confused by the term “shadowbanning,” claiming that they had never heard the word until our question.

Is Instagram aware that there is a perception that "shadowbanned" is a category Instagram places some accounts under? Is Instagram ready to make a statement that there is no such category, defined as “an account that appears to be active to the user but is restricted from appearing on searches, keywords, hashtags and auto-completion, rendering the account less visible than similar accounts that are not shadowbanned”?

“There are many different definitions of that word,” protested the Instagram source. “For us it’s about choice, if you choose to follow someone then you will see content from that person in your feed. When it comes to surfaces where we recommend content to people, like hashtags and Explore, we are more strict about what we allow on these surfaces. You can find more information here.”

Many of the targeted accounts are for porn stars with hundreds of thousands of followers and a decidedly public profile. Why is Instagram not verifying them?

Instagram pointed us to the company’s official statement on verification: “A verified badge […] means Instagram has confirmed that an account is the authentic presence of the public figure, celebrity or global brand it represents.”

In order to be verified, the account must fulfill certain conditions, including not containing "add me" links to other social media services and representing “a well-known, highly searched for person, brand or entity,” with a reviewing team determining whether the “notable person” has been featured in multiple news sources that are not paid-for.

According to Instagram, there is no blanket policy against verifying adult performers or any other performers. "If an account follows our community guidelines, and meets other criteria," a company source said, "it will be eligible for verification."

Does Instagram currently have any blanket policy to delete or discourage accounts on the basis of a person being a sex worker, adult performer or porn star alone, provided they do not violate the terms of service with individual posts?

"We do not have a policy like this," says our Instagram source. "Adult performers and sex workers may have an Instagram account provided they follow our Community Guidelines."

Has Instagram been asked by governmental organizations or NGOs to eliminate sex workers or adult industry workers from the platform? Has Instagram instituted policies based on those requests?

Instagram declined to answer that question.

If people designated as “models” and “influencers” (or brands) post a picture, and people designated as “sex workers,” “adult models” or “porn stars” post the same exact picture, is it Instagram policy to ban or not ban the post exclusively based on who the poster is and not the content of the post?

"This is not our policy," said our Instagram source. "We would not remove non-violating content simply because it was posted by a sex worker. If the account and content does not violate our policies, we will not take action, irrespective of the person behind the account."

Are there any categories of users who are targeted for who they are and not for what they post, or is Instagram policy that the posts themselves, and not the persons, are the basis for suspensions, bans or deletions?

"There are a few [targeted] categories," our Instagram source explained. "For example, we do not allow convicted sex offenders, terrorists or those who share child exploitative material — to have an Instagram account. This policy does not apply to sex workers."

How does Instagram explain that the real accounts of adult performers are deleted while accounts of people impersonating them or “fan accounts” that copy and re-post the content of the original account are still active?

“We rely on user reports and technology to help us proactively find content,” said the Instagram source. “Once an account or post is reported to us, we will review and take action in line with our policies.”

Has Instagram ever considered putting an age filter for sexual content like it does for gambling content?

Instagram confirmed that they are not currently considering age filtering. “Please see more information about our approach to gambling here,” the source added.

Who decides these thorny ethical issues within the company?

“We are always listening to our community and working with experts to constantly re-evaluate our policies,” said the Instagram source.

Who are these experts? How are they recruited? Are they engineers, ethicists, social historians, religious leaders, volunteers?

The Instagram source agreed to “get back to you about that.”

Does Instagram offer a “premium customer service” option to major digital marketing companies that makes the process of contesting an account deletion more straightforward and streamlined? Is there a phone number or email account that is available to people in those organizations that is not available to the general public?

XBIZ interviewed a marketing professional who has worked for several high-profile media companies and she told us that Instagram offers major digital marketing companies “premium customer service.” When we asked what that meant, she laughed and said “That means they actually get customer service.”

According to our source, “they get a phone number or an email address that puts them in touch with an actual person who can look into reports, deletions, give warnings about content, etc.”

Instagram denies that they offer a service under the name “Premium Customer Service.”

However, our Instagram source, replying to our question, did mention a special status. “Our Partnerships team supports a variety of public figures, organizations and events across the globe to help them build the best possible presence on Instagram. Our partnerships team support public figures, organizations and events — [and] that can include supporting with account issues such as hacking or content removals.”

Is that a yes?

Pretty much.

How many Instagram employees and contractors have technical access to deleting or restoring accounts? Sources have spoken of a “friends and family” policy that allows many people connected to Instagram to “turn on” and “turn off” accounts. Is this true?

“No, this is not true,” according to our Instagram source.

A Facebook employee allegedly told an adult performer who had lost their account that “Instagram employees can submit and easily get almost any profile back as part of their 'friends and family.' Some employees, as a side business, will charge users to get their profile back.”

Our Instagram source did explain that there is (or there was) a kind of informal “friends and family thing” at the company that allowed for a second review after an account had been penalized.

“Employees are able to request a second review on behalf of a friend of family,” the Instagram source said, “but an account will only be restored if we determine a mistake has been made, and accounts will only be removed if they do not follow our Community Guidelines.”

Is it technically possible or technically impossible for a rogue Instagram employee or contractor to request money through a third party to “turn an account back on”?

Our Instagram source could not confirm or deny whether this was technically possible, but they said they thought it was unlikely given their internal protocols of supervision and “checks and balances.”

“Employees do not have the ability to request accounts be removed or restored without a robust review of the account against our policies,” the source insisted.

Has Instagram leadership ever met with so-called “anti-trafficking” lobbies like the National Center on Sexual Exploitation and other groups that aim to conflate “human trafficking” with the entire adult industry and to “eradicate pornography”?

Instagram declined to answer this question.

Has Instagram leadership ever met with representatives of Sex Worker Rights organizations when crafting public policy?

“Not to my knowledge,” said the Instagram source. When XBIZ suggested sharing the names and contacts of some Sex Worker Rights organizations and activists, the Instagram source appeared receptive to the idea.

Can Instagram explain why accounts that glorify pimping and violence against women operate with impunity, or why the tag "sexworker" is shadowbanned ("recent posts from #sexworker are currently hidden because the community has reported some content that may not meet Instagram's community guidelines") but the tag "pimp" pulls up well over 750,000 photos?

Accounts like @pimpthoughts post content advocating violence against sex workers, but reporting those posts results in an automated response that reads: “While we reviewed the post you reported for violence or threat of violence and found it does not violate our Community Guidelines, reports like yours are an important part of making Instagram a safe and welcoming place for everyone."



The Instagram source asked us to provide some specific examples of @pimpthoughts posts that would violate Community Standards. “We are reviewing this account and will share findings as soon as we can,” they told XBIZ a few days later.

As of this writing (May 28, 2019), @pimpthoughts is still active on Instagram, gleefully posting about profiting from sex workers by intimidation and abuse.

That’s cool—does that also mean that XBIZ's Instagram source can review why my adult performer account was deleted by “Omid”?

Not really.

We gave our source an example of an “Omid” deletion (Karina Buarque, deleted April 25) to illustrate our questions and see if they could track the pattern that led to it. “Karina’s account was removed in error, we have now restored it,” was the prompt reply. “The account was mistakenly removed for hate speech, on second review — we corrected the mistake and restored the account. People can also appeal account removals here.”

However, shortly after that, a high-profile adult performer had her account deleted. She got in touch with XBIZ to share the screencaptures and, more disturbingly, the brazen attempts by some people claiming to be “insiders” to extort money from her to restore it. We offered to share that information with Instagram, but with the caveat that the performer wanted it in writing that she would not be penalized for coming forward.

The Instagram source would not guarantee that.

XBIZ then offered to share the information if Instagram would agree in writing to share the results of any content review with the performer so she could delete any questionable posts and keep her account.

The Instagram source would not agree to that.

The performer paid the extortionists and got her account back. XBIZ saw evidence of the whole process (deletion, extortion, restoration) as it was happening, but could not do anything about it because of Instagram’s refusal to guarantee immunity to the whistleblower.

The Perfect Storm

While “who is ‘Omid’?” has become a kind of sick parlor game among the victims, “Porn Twitter” and even some members of the mainstream press, the identity of the person (or persons) behind the account is beside the point.

So, you’re not going to “unmask Omid”?

No. We already wrote our thoughts about “Omid” and, months later, the shadow operator (or operators) behind this perverse Twitter-Instagram campaign targeting the historically marginalized community of sex workers continues operating with impunity.

XBIZ keeps updating a database with all the account deletions, which has been helpful in confirming that the “Omid” persona is as trite as we described it last month:

While the methodology has remained consistent and simple for months (report on Instagram, achieve deletion of Instagram profile, inform the victim by tweeting about it, repeat), the “Omid” persona relishes playing a typical teasing cat-and-mouse game with the victims.

“Omid,” writing in an emoji-heavy, slightly ungrammatical English, comes off as a clichéd online persona derived from Jigsaw from the “Saw” movies, or sex-obsessed, monomaniacal serial killers like Jack the Ripper or the Zodiac Killer. The Twitter account bios and the tweets have a taunting attitude.

As Twitter continues to allow this campaign to thrive, “Omid” has stepped up his game with impunity and his “master hacker with a God complex” schtick seems to be validated by Instagram’s decisions.

Anything else about “Omid”?

Well, after closely looking at the persona’s patterns (believe us, it is a thankless job to read and catalog every single one of the "Omid" posts and interactions), confusing ideology and monomaniacal obsessions, we performed a linguistic analysis of the grammar, syntax and vocabulary. It does seem possible to build a working character profile for "Omid."

Whoever is writing those posts is most likely an adult male aged 20-40, possibly from an area in Asia between Pakistan and Northern India (Punjab and Kashmir are options), trained in a tech institute, perhaps having spent time in the U.K., and familiar with Hindu and/or Buddhist philosophy. He is vaguely religious but not a fundamentalist, and quite likely not in the Christian tradition.

He is also obsessed with control, and particularly controlling beautiful women and “saving them” from porn. But he is not a stereotypical “white knight”: a telling sign is that he is not concerned with all the porn that is posted on Twitter. He is focused on Instagram because he is obsessed with rules and order, and making sex workers dance to his nonsensical tune.

If this person could not hide behind a computer screen, he may have turned into a dangerous, obsessive killer like Jack the Ripper, the Zodiac Killer or “Buffalo Bill” from “Silence of the Lambs” ("It rubs the lotion on its skin…”). His erasure of people is no less obsessive in the digital realm.

But didn’t another publication claim to “know who Omid is”?

As far as we can tell, this confusing identification of “Omid” (which the writer decided to announce but not publish) comes from a lead XBIZ also received at the same time.

The lead came from an obscure adult content production company that claimed to be one of “Omid”’s victims and clearly seemed to be looking for publicity. They offered to meet in a public place to show us “a 20-minute documentary” about their amateur detective work. XBIZ declined this unusual offer.

Maybe they found him, maybe they didn’t. We will certainly update this article whenever someone publishes this obscure production company’s findings, and their methodology.

As of this writing (May 28, 2019), several days after the announcement of his alleged unmasking, “Omid” continues operating undisturbed.

Ok—got it. He’s a sick and twisted creep with control issues. But isn’t he in cahoots with the people trying to extort adult performers to restore their accounts?

Not likely. It seems that those are just small-time crooks and extortionists who either know how to work the Instagram appeals process really well, or have a contact with “premium customer service” (or whatever Instagram wants to call “having the hookup”), or else are (or are in touch with) corrupt or rogue Instagram employees or subcontractors with access to the appeals/moderation process.

Abusive creeps and extortionists targeting sex workers because they are marginalized and stigmatized by the systems that are supposed to be fair to them? Wait—you mean like in the real world?

Yes. The Internet started off as a Cold War defense project, but it later morphed into an idealistic utopia where freedom and expression were going to change the world (“Think different”—Steve Jobs). But sometime in the past decade, this utopia somehow morphed into a replica of the worst of the real world.

What we have here is a Perfect Storm where three separate elements came together to wreak havoc on the life, livelihood and work (yes, building a huge Instagram presence takes a lot of work) of sex workers.

The three elements are:

1. the current “War on Porn,” which influences politics and corporations at all levels
2. the peculiar microculture of Silicon Valley
3. the usual creeps and small-time crooks who have preyed on sex workers since time out of mind

Wait—what do you mean by the "War on Porn”?

If you haven’t been paying attention, a coordinated effort by religious zealots from the Right and SWELs (Sex Worker Exclusionary Liberals) is afoot to censor sexual expression on the Internet.

This “War on Porn” uses many strategies of disinformation and deceit to pass laws and influence corporations, spreading ridiculous notions like “all sex work is human trafficking,” “adult women cannot really consent to sex work in a patriarchal society” and “porn addiction is a public health crisis.”

The War on Porn is spearheaded by extremely well-funded DC lobbies like the National Center on Sexual Exploitation (NCOSE), a religiously inspired group (which also operates as endsexualexploitation.org, and was formerly known as Morality in Media and Operation Yorkville), and supposedly grassroots “movements” like Fight the New Drug (a website devised by Mormon marketing experts that alleges "non-religious" status) and Your Brain on Porn (based on a pseudoscientific book that has long topped Amazon's best-sellers list for "Pornography”).

These sexual reactionaries would like to turn the clock back to debunked notions about sex that predate Kinsey, Masters and Johnson and even Freud. Their views of sexuality come straight out of misogynistic, patronizing Victorian pamphlets on "moral hygiene."

Surely mainstream corporations, politicians and commonsensical folk can see through this nonsense.

Don’t be so sure. Take a look at the Amazon Top-10 list of books on "the study of pornography” and shudder at the massive amounts of unscientific anti-porn propaganda “the public” is consuming. (The Kindle Store list is even worse).

But how embedded are these fringe anti-porn groups in mainstream society?

Very. Recently, CVS agreed to hide the Sports Illustrated Swimsuit Issue after NCOSE extremists convinced the ubiquitous drugstore chain that it “pornified” women. They also convinced Walmart to hide Cosmo and to set up a porn addiction booth in its pharmacies. They publish perverse children’s books about porn (“Good Pictures Bad Pictures”), help organize shaming campaigns against airlines for allowing people to watch what they want on planes, and get meetings with smiling Google representatives to post on their Instagram.

They also passed FOSTA with bipartisan support (and with prominent SWEL Democrat Kamala Harris as poster person), and had their made-up “porn addiction” declared “a public health crisis” in 15 states and counting.

Is the War on Porn coming after Instagram?

You betcha. For several reasons, a lot of the anti-porn movement gets money and staff support from Utah. This report by the Salt Lake City CBS affiliate, headlined “Honestly, It Terrifies Me: Teen-related Apps May Actually Contain X-rated Material,” shows how the disinformation campaign works:

“It’s incredibly easy. On Instagram this morning I pulled up a pornographic movie in ten seconds, under a very innocent search term,” said McKay. “The things that concern me about Instagram is the fact that it has such a rosy reputation, because parents think ‘Oh this app is safe my kid is safe on this app.’” [...]



"In less than seven seconds I can search for and discover hard core streaming video pornography intercourse on Instagram and there is no way to stop it,” said McKenna.

Instagram is being used by anti-porn organizations to spread misogynistic messages like this quote from Australian anti-porn guru Matt Fradd: “No matter the level of consent, it is a manly thing to treat a woman who has forgotten her dignity with dignity nonetheless.”

Fortunately for these 21st century crusaders, Instagram only deletes (some) female nipples, not posts about “porn addiction” BS. Facebook’s terms of service make it clear that “we don't remove false news from Facebook but instead, significantly reduce its distribution by showing it lower in the News Feed.”

Ok—got it. What’s #2 of this Perfect Storm all about?

Silicon Valley, as we are not the first to point out, has its own, very specific kind of toxic culture.

The tech mecca combines the gender disparity (and sometimes outright misogyny) of the STEM fields with outdated notions of sex work (it’s a notoriously bro-ey environment that often reduces sex workers to the “party stripper” model) and a general atmosphere of unnecessary secrecy, opacity and paranoia.

Several Silicon Valley sources we spoke to did so under conditions of total anonymity. “I could get fired for talking to you,” a Facebook employee told us.

Getting a straight answer out of Instagram or Facebook proved difficult, even though public information officer Stephanie Otway was patient and generous with our requests for comment. Still, most of her answers, on and off the record, simply repeated the disingenuous, unclear Community Standards phrasings, which sound like an Ivy League engineer-type who dropped out before receiving a solid liberal arts education trying to bullshit his way through Ethics 101 and Introduction to Social Studies.

Aren’t you being too harsh?

Are we? Watch for yourself: Here’s Mark Zuckerberg, Instagram’s Adam Mosseri and several other employees a few weeks ago at their F8 Conference. Just watch this summary. These are the people coming up with the Facebook Community Standards:

It’s a bizarre spectacle, with all talk about human interaction boiled down to “Community” (which? whose?) and interpersonal communication optimized into an asexual, infantile search for “your crush” and “the one,” like in a Doris Day movie. Also, watch Mosseri getting really fired up about “bullying,” while his company completely ignores the almost 300 sex workers targeted since the beginning of the year for their work.

Infantilism and secrecy, coupled with tech-bro culture, make Silicon Valley the second element of this Perfect Storm against sex workers.

And #3?

Creeps and small-time crooks? They’ll always be there. Digital serial killers, digital extortionists and wannabe pimps ("share your profits, and I’ll let you have your Instagram account back") will always pop up when the political system, corporations and mainstream society allow them to target marginalized communities.

Adult Performers and Sex Workers: This Is How Instagram Can Improve

A few weeks ago, XBIZ asked the adult community, via Twitter, for suggestions on how Instagram could improve their service in the name of fairness. These were some of their thoughts:

“Since the apps ask for all this personal info they know ages from [the sign-up]. Make a 18+ option if there is not one where only adults can see certain content that we have our space and it doesn’t infringe on those who don’t want to see it.” @KingNoire

“1. Age verification with the ability to mark your own profile as only visible to those who are 18+ just like how they have on Facebook Groups.

2. Clear definition of explicit content and removing the vagueness. It is weighted heavily against women for just appearing sexy.” @BurningAngel

“If gambling on instagram is allowed to be an 18+ activity, that means they have the technology. why can’t sex workers, content creators, bloggers, etc be able to do this? it would make everyone’s lives easier” @BrittanyBenz_

"I saw that certain pages have an 18+ age wall warning. Why not let us have the ability to do the same? Even if nudity isn't allowed, at least allow some sort of warning that we are meant for adults only." @JessicaSage69

"Have clear definitions of what is allowed and not allowed on the platform, do not use vague descriptions like 'obscenity' which are applied unfairly to different accounts. If we can't show 'sideboob,' spell it out and then consistently enforce the rules from Kim K to porn stars." @techdomme

“I think your through line here is going to boil down to clarity and precision in the TOS. It’s impossible to navigate their fog of ‘we know problems when we see them.’ Clear-cut rules mean enforceable rules that folks can choose to abide by or eff off.” @tylerthebadwulf

As Tyler tweeted, it is indeed impossible to navigate Instagram’s fog of ‘we know problems when we see them.’

And, as user frustration mounts, some are taking the path of direct action.

On May 24, sex workers in the UK took their protest to the local Instagram headquarters: “Amazing scenes outside @instagram HQ in London this afternoon. Sex workers demand a meeting with the company, clear policies and an end to blocking workers’ accounts.”

— United Strippers Of The World (@unitedstripper), May 24, 2019

Organized by Rebecca Crow (via, yes, Facebook), the call to rally spelled out the grievances:

Instagram's censorship policy is disproportionately affecting sex workers on Instagram.

We try our best to stay within Instagram's vague guidelines, but increasing numbers of sex workers are having their accounts deactivated for images that celebrities and blue ticks routinely post, seriously impacting our ability to maintain our businesses.

Instagram needs to listen to the voices of people in the sex work industry to see how they can update their policies to maintain a safe and free platform for all whilst not disproportionately impacting sex workers businesses.

Policies are constantly made that affect us, but we are never consulted. LISTEN TO SEX WORKERS!