A few months ago, I started actually using Twitter after realizing that it was likely the most popular medium of the blockchain community at this point in time. It wasn’t long before I started noticing stuff like this on threads of influential figures in the space:

Vitalik Buterin, also known as Ellis Keckilpen1

If you’ll notice, this is not Vitalik’s Twitter account, but some random bot programmed to spam all of his posts with content like this in an attempt to scam users. This has been ongoing for months now, and Vitalik actually made a tweet recently asking Jack Dorsey (Twitter CEO) for help with the situation.

Vlad being a troll. Dammit, Vlad.

So far, I haven’t seen the needle move much on mitigating these bots and scammers on Twitter. Below, I’m going to propose a few second-layer solutions, and if any of them are well received by the community and won’t be a waste of time, I’ll implement them.

Some limitations of a second layer solution include:

A large majority of the community will need to be leveraging this solution for it to provide value (network effects).

This solution will rely on a Chrome extension, limiting it to a handful of browsers (no mobile-web or app support).

Given a system with no cost of account creation, it’s unreasonable to attempt to create a blacklist given the presence of ever-expanding and changing botnets. So, we move on to whitelists. Twitter does have a native whitelisting feature in their service where users receive a ‘Verified Badge’, but this feature is not open to all accounts as noted in their About verified accounts page:

An account may be verified if it is determined to be an account of public interest. Typically this includes accounts maintained by users in music, acting, fashion, government, politics, religion, journalism, media, sports, business, and other key interest areas.

So, can we improve upon this? Yes, but there are two routes here — either confirm the identities of other accounts with a trusted third party identity verification service or source the identity’s validity from the community. In this community, I think it’s safe to assume nobody wants to have to scan their ID to get verified on a second-layer solution for Twitter, so I’ll proceed with the latter.

Before beginning, we are assuming that everyone will be using a browser that supports Chrome extensions (e.g., Brave, Chrome) and has a balance in their mainnet Ethereum wallet. Given those assumptions, users will take the following steps:

1. Download an extension — we’ll call it the “Not Giving Away Free Eth” extension, paying homage to the users who have updated their Twitter titles in an attempt to curb these scams.

2. Get a randomly generated nonce from the extension and tweet their Ethereum address as well as the nonce within a period of one minute. This will trigger a webhook which will update the entry for the twitter username with the associated Ethereum address.

3. The user then sends a transaction from that Ethereum address, with their Twitter username as the payload, to a registry contract responsible for maintaining the mapping between all usernames and addresses.
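The three steps above can be sketched as a small simulation. This is a minimal sketch in Python; the `Registry` class, the one-minute `NONCE_TTL`, and all method names are hypothetical stand-ins for the extension backend, the tweet webhook, and the on-chain registry contract.

```python
import secrets
import time

NONCE_TTL = 60  # assumed: the user must tweet the nonce within one minute

class Registry:
    """Hypothetical simulation of the extension backend plus registry contract."""

    def __init__(self):
        self.pending = {}   # nonce -> timestamp it was issued
        self.claims = {}    # Twitter username -> claimed Ethereum address (via webhook)
        self.mapping = {}   # Twitter username -> Ethereum address (the on-chain mapping)

    def issue_nonce(self):
        """Step 2a: the extension hands the user a random nonce."""
        nonce = secrets.token_hex(8)
        self.pending[nonce] = time.time()
        return nonce

    def webhook_on_tweet(self, username, address, nonce):
        """Step 2b: the webhook fires when a tweet with (address, nonce) appears."""
        issued = self.pending.pop(nonce, None)
        if issued is None or time.time() - issued > NONCE_TTL:
            return False  # unknown or expired nonce
        self.claims[username] = address
        return True

    def on_chain_register(self, sender_address, username):
        """Step 3: a transaction from sender_address carries the username payload;
        it only succeeds if the sender matches the address claimed in the tweet."""
        if self.claims.get(username) != sender_address:
            return False
        self.mapping[username] = sender_address
        return True
```

A registration only lands in the mapping when both legs succeed: the tweet proves control of the Twitter account, and the transaction proves control of the Ethereum address.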

At this point, the user has demonstrated control of both the Ethereum address and the Twitter account. And this is where things get hairy: in a simple system, we could allow users to stake as much ETH as they would like in the registry contract, and the extension would remove from the website DOM every user staking less than a user-specified minimum against their username (including those staking nothing at all).
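That filtering rule is simple enough to state in a few lines. A minimal sketch, assuming a hypothetical `stakes` mapping from registered usernames to their staked ETH:

```python
def visible_usernames(stakes, min_stake_eth):
    """Return the usernames the extension would keep in the DOM:
    those staking at least the user-specified minimum of ETH."""
    return {name for name, staked in stakes.items() if staked >= min_stake_eth}
```

Anyone unregistered simply has no entry in `stakes`, so they are hidden by default along with zero-stake accounts.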

This makes bots more expensive, but we know they’ve got deep pockets (How else would they be able to have all these 5,000 ETH giveaways?), so it’s likely that they will be able to have a few accounts at this point registered and staking against the registry contract. How do we get rid of these rich bots? Well, the registry contract could support TCRs where the actors in the system can delegate stake in a case against other actors. If the stake against the accused account exceeds the stake for the account, the account will have its stake burned and potentially locked.
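The challenge rule could be sketched as follows; `resolve_challenge` and its arguments are hypothetical, simulating only the accounting of the registry contract (not token transfers or lock-ups):

```python
def resolve_challenge(registry, accused, stakes_against):
    """registry: username -> that account's own stake in ETH.
    stakes_against: challenger -> ETH delegated against the accused.
    If the challengers collectively out-stake the accused, the accused's
    stake is burned (and in a fuller design, also locked)."""
    backing = registry.get(accused, 0)
    challenge = sum(stakes_against.values())
    if challenge > backing:
        registry[accused] = 0  # stake burned
        return True
    return False
```

Note the collusion problem from the paragraph above shows up directly here: nothing in this rule distinguishes an honest challenge from a coordinated pile-on.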

The difficulty here is collusion amongst bots and bad actors in the system — ex: I upset someone, so they and their buddies stake against me and get me slashed. I’m not quite sure how to address this problem, so let’s move on to an alternative!

Extending Twitter verified accounts: existing verified Twitter accounts verify unverified accounts in a Web of Trust, and accounts verified through this process can in turn verify others ad infinitum. There are problems here, too, though. A black market for purchasing verification will likely arise, bad actors will flood into the system, and at that point we no longer have a bifurcated graph of good and bad actors and the whole scheme falls apart…
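The web-of-trust rule amounts to reachability from the natively verified accounts through “X verified Y” edges. A minimal sketch, with all names illustrative:

```python
from collections import deque

def is_verified(seeds, vouches, account):
    """seeds: accounts with Twitter's native Verified Badge.
    vouches: verifier -> set of accounts they have vouched for.
    An account is verified if any chain of vouches connects it to a seed."""
    seen, queue = set(seeds), deque(seeds)
    while queue:                      # breadth-first search over vouch edges
        verifier = queue.popleft()
        for vouched in vouches.get(verifier, ()):
            if vouched not in seen:
                seen.add(vouched)
                queue.append(vouched)
    return account in seen
```

The fragility is visible in the structure: a single corrupt vouch anywhere along a chain verifies an entire subtree of bad actors, which is exactly the black-market failure mode described above.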

So, I started writing this with the belief that there was a reasonable system to address these issues, but other than leveraging a trusted third party for ID verification and logging the result to a public registry, I’m not sure that there is. Staking with a minimum threshold for accounts to avoid removal from the DOM may remove a lot of the spam from the site, but it is a simple gadget that will not apply to the mobile app or mobile website. It would be more effective if the people creating the bots were unaware of the extension’s existence, but we can’t make that assumption.

If you can come up with a way to make some of these schemes more robust, or are in favor of the third-party verification service extension (I can certainly implement this), leave your thoughts below. Otherwise, I have roughly convinced myself that this change needs to come from within Twitter, which I assume is working on a neural net to classify spam and hide it from users.

Find me on Twitter or Github!