—————

Update: Over 100,000 paying subscriber views on our premium content service.

—————

The stories of twitter’s frequent downtime don’t need repeating here. Instead, I want to ask the community whether there is any interest in addressing the problem.

As many are aware, Twitter’s problem with scaling is not RoR (Ruby on Rails), and it’s not Joyent, NTT, or … Twitter’s scaling problem is exactly the same thing that makes it valuable: their database of users. And getting a traditional SQL/relational DB to scale horizontally is pretty tough. Sharding works for some apps but not others.

It so happens that our new distributed database technology is rather well suited to twitter-style high-volume, reliable messaging. If there is sufficient community interest, we could help solve the downtime problem by putting together a “twitter-proxy” that keeps twitter users on twitter but provides an additional layer of data accessibility in the ecosystem. Not to compete, just to help keep users happy.

Consider the messaging problem:

Nothing is as easy as it looks. When Robert Scoble writes a simple “I’m hanging out with…” message, Twitter has basically two choices for how to dispatch that message:

1. PUSH the message to the queues of each of his 6,864 followers, or
2. Wait for the 6,864 followers to log in, then PULL the message.

The trouble with #2 is that people like Robert also follow 6,800 people. And it’s unacceptable for him to log in and then have to wait while the system opens records on 6,800 people (across multiple DB shards), sorts the records by date, and finally renders the data. Users would be hating on the HUGE latency.

So, the twitter model is almost certainly #1. Robert’s message is copied (or pre-fetched) to 6,864 users, so when those users open their page/client, Scoble’s message is right there, waiting for them. The users are loving the speed, but Twitter is hating on the writes. All of the writes.
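To make the trade-off concrete, here is a minimal in-memory sketch of the two strategies in Python. All names and data structures here are illustrative, not Twitter’s actual internals:

    # Minimal in-memory sketch of the two dispatch strategies.
    # Everything here is illustrative, not Twitter's internals.
    from collections import defaultdict
    from itertools import count

    clock = count()                 # stand-in for timestamps
    timelines = defaultdict(list)   # user -> pre-built inbox (push model)
    tweets = defaultdict(list)      # user -> messages they authored (pull model)
    followers = defaultdict(set)    # user -> who follows them
    following = defaultdict(set)    # user -> who they follow

    def post_push(author, msg):
        """Choice #1, PUSH: one write per follower, paid at post time."""
        t = (next(clock), author, msg)
        tweets[author].append(t)
        for f in followers[author]:     # 6,864 writes for one Scoble post
            timelines[f].append(t)

    def read_push(user):
        """Reads are cheap: the inbox was materialized in advance."""
        return timelines[user]

    def read_pull(user):
        """Choice #2, PULL: cheap writes, but every login opens one
        record set per followee (6,800 for Scoble), then sorts by date."""
        merged = [t for fee in following[user] for t in tweets[fee]]
        return sorted(merged)           # the latency users would hate

With push, Scoble’s single post costs thousands of writes up front; with pull, his login costs a 6,800-way merge. Twitter apparently pays the write cost so that reads stay fast.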

How many writes?

A 6000X multiplication factor:

Scoble writes something–boom–6,800 writes are kicked off, one for each follower. Michael Arrington replies–boom–another 6,600 writes. Jason Calacanis jumps in–boom–another 6,500 writes.

Do you see a scaling problem with this scenario?

Beyond the 19,900 writes, there’s a lot of additional overhead too. You have to hit a DB to figure out who the 19,900 followers are. Read, read, read. Then possibly hit another DB to find out which shard they live on. Read, read, read. Then you make a connection and write to that DB host, and on success, go back and mark the update as successful. Depending on the details of their messaging system, all the overhead of lookup and accounting could be an even bigger task than the 19,900 reads + 19,900 writes. Do you even want to think about the replication issues (multiply by 2 or 3)? Watch out for locking, too.
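To put rough numbers on it, here is a back-of-envelope tally in Python. The one-read-per-follower cost model and the replication factor are our assumptions, not Twitter internals:

    # Back-of-envelope disk IO tally for the three posts above.
    # Assumed cost model: one read per follower-list entry, one read
    # per shard lookup, one timeline write per follower, and each
    # write replicated to two extra copies.
    posts = {"Scoble": 6800, "Arrington": 6600, "Calacanis": 6500}

    fanout_writes = sum(posts.values())   # 19,900 timeline writes
    follower_reads = fanout_writes        # 19,900 follower lookups
    shard_reads = fanout_writes           # 19,900 shard lookups
    replication = 2                       # two extra copies per write

    total_io = follower_reads + shard_reads + fanout_writes * (1 + replication)
    print(total_io)                       # 99,500 -- roughly the 100K figure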

And here’s the kicker: that giant processing & delivery effort–possibly a combined 100K disk IOs–was caused by 3 users, each just sending one tiny, 140-character message. How innocent it all seemed.

Now, are there any questions about why twitter goes down whenever there’s any kind of event?

This is where we (potentially) come into the picture: we’ve spent the last two years developing a web architecture built on our horizontally scalable distributed database, and this kind of [lookup | message passing | writing] is exactly what it eats for breakfast. We haven’t had any twitter-sized days, but we are seeing the architecture scale as designed.

You know how Yahoo News, Google News, the NYTimes, or CNN shows everybody the same stories, and after you read them, the front page is boring? It’s a big database problem–you have to keep track of what every user has read, and SQL falls short. Our system is designed to scale horizontally so it can keep track of what hundreds of millions of individuals have read, and then show each user the [highest rated | most viewed | etc.] stories that are new to them. But since we don’t have a deal with any news guys yet, we’re building out the most database-intensive feed reader on the planet. It has plenty of nifty features not found in Google Reader, Bloglines, etc. But that’s an aside.
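As a toy illustration of that query (the per-user seen-set and the scoring are simplifications of what a real system would store, not our actual schema):

    # Toy version of "show me the top stories I haven't read yet."
    # The hard part at scale is keeping one seen-set per user for
    # hundreds of millions of users; this just shows the query shape.
    def unread_top_stories(seen_ids, stories, k=10):
        """stories: list of (score, story_id); seen_ids: set of ids."""
        fresh = [s for s in stories if s[1] not in seen_ids]
        return sorted(fresh, reverse=True)[:k]

    print(unread_top_stories({"a", "c"},
                             [(9, "a"), (7, "b"), (5, "c"), (3, "d")]))
    # -> [(7, 'b'), (3, 'd')]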

The Idea: twitter-proxy for the people

Addressing Twitter’s downtime could be pretty straightforward. It could work much like a (pseudo-reverse) proxy:

1. You enter your twitter credentials on the proxy site.
2. You can post your tweets to the proxy. If twitter is up, we’ll post there, too.
3. We’ll get your friend list, then GET and store their tweets in our db.

When twitter is up and fully functional, the twitter-proxy contains a mirror of all the tweets from each of the twitter-proxy registered members, and from the people they follow.

When twitter is down, you can still post a tweet to twitter-proxy. That message will immediately be available to anyone who is in our system. (How did they get into the proxy system? Either they registered directly, or they are followed by someone who registered, so we automatically grabbed their status updates.)
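A sketch of what the proxy’s write path could look like, in Python. The db helper methods here are hypothetical; statuses/update over HTTP Basic auth is twitter’s existing REST API:

    # Sketch of the proxy's write path. The db helpers are hypothetical;
    # statuses/update over HTTP Basic auth is twitter's real REST API.
    import requests

    def post_tweet(username, password, text, db):
        # Our copy is written first, so the tweet is live even if
        # twitter.com is down.
        db.save_status(username, text)
        try:
            r = requests.post(
                "http://twitter.com/statuses/update.json",
                auth=(username, password),
                data={"status": text},
                timeout=5,
            )
            r.raise_for_status()
            db.mark_mirrored(username, text)     # both copies now agree
        except requests.RequestException:
            db.queue_for_replay(username, text)  # re-post when twitter is back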

Ground rules:

- You should be able to access this system with nothing more than your existing twitter credentials. No separate login.
- We would expose a twitter-compatible API so outside clients would “just work” (e.g., change the /etc/hosts file to resolve twitter.com to another IP, as shown below).
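For example, a client machine’s hosts file could look like this (the proxy’s address here is just a placeholder):

    # /etc/hosts -- resolve twitter.com to the proxy so unmodified
    # twitter clients talk to twitter-proxy instead
    203.0.113.10    twitter.com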

Twitter is the new mail

Because twitter has done such a great job with their API, the net effect of a twitter-proxy is that you could still send and receive your twitter messages, either directly from twitter or via twitter-proxy. If your friends are sending SMS messages to twitter, those messages would still end up at twitter-proxy.
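Concretely, the mirroring side could be little more than a polling loop against twitter’s existing REST API. friends_timeline is a real endpoint; the db call and the polling interval are our assumptions:

    # Polling loop that mirrors each registered member's friends
    # timeline into our db. friends_timeline is twitter's real API;
    # db.store_status and the interval are assumptions.
    import time
    import requests

    def mirror_loop(members, db, interval=60):
        while True:
            for username, password in members:
                try:
                    r = requests.get(
                        "http://twitter.com/statuses/friends_timeline.json",
                        auth=(username, password),
                        timeout=5,
                    )
                    r.raise_for_status()
                    for status in r.json():
                        db.store_status(status)
                except requests.RequestException:
                    pass  # twitter is down; our copy keeps serving reads
            time.sleep(interval)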

The win is that when twitter goes down, there is another component of the ecosystem that is still alive and healthy. Messages sent via the twitter-proxy system would reach every user in the proxy system (again, anyone who either registered directly or is followed by someone who did). And twitter users stay twitter users. No one is split off to different, competing platforms.

We don’t have any experience with an SMS->HTTP gateway, so if twitter is down, the only way to get messages to and from your friends via the proxy is HTTP. That means a web page or web client. But hey, use your iPhone if you’re out and about.

Moreover, we should be able to support fast search, and the RSS/Atom feeds of people’s tweets would be available in real time, too. The system could also include other niceties such as “how many people viewed this tweet” and top-read tweets (plus ones that are new to you). It’s up to your imagination.

Caveats

First of all, we won’t embark on any twitter-proxy system if the twitter folks aren’t cool with it. We would need their OK first.

Second, enough of you–twitter diehards–need to tell us you want such a system. From where we’re at, it shouldn’t take long to build it if there’s enough demand.

If you want it–let us know, loudly.

Thanks for reading.

-Israel