It was the darkest period of Twitter’s short history. By May the service was going down so often that its loyal but frustrated users – especially TechCrunch’s Michael Arrington, who blogged almost daily about it – were up in arms. People began joking that it was news when Twitter was up. Was Twitter becoming a victim of its own popularity?
Jack Dorsey, CEO and creator of Twitter, admitted on the company blog he didn’t know what was happening. Since then, all of the team’s energies have gone into fixing the problems and making Twitter more reliable.
Speaking to .net today, Jack says: “The most important thing is to communicate as much as possible about what’s going on. We’re seeking to build something that’s utility class and that’s stable. Look at any public utility and how the better ones handle the community. Plumbing, for instance, they’re pretty forthcoming with details just because so many people rely on it.”
He recognises that users are still fuming about the initial lack of communication from Twitter. “Everyone makes mistakes,” Jack admits. “It’s really easy to get caught up in the work. It was a combination of a really stupid mistake and being caught up in the crisis at hand and forgetting to communicate properly.”
Here at Twitter we’re ruled by three guidelines: simplicity, constraint and craftsmanship
Twitter’s new Status blog (status.twitter.com) is part of the new approach. It’s built on Tumblr. “Here at Twitter we’re ruled by three guidelines: simplicity, constraint and craftsmanship,” Jack explains. “We really like those concepts and I’ve always respected Tumblr in the same way. It’s a very simple publishing mechanism, it’s extremely well crafted and has the right amount of constraint.”
One of the main causes of Twitter’s instability is the amount of strain the main database is under. “We have a particular scaling issue in that we connect all these different devices and types of transport, such as SMS with the web or IM with the web,” Dorsey explains. “They all have different traffic patterns we have to adhere to and put user intelligence on top of at the same time, because some users are protected and some want their phone off at certain times. There’s a lot of routing that occurs within the system: it’s not an easy problem to solve! Also, when someone sends an update, it inspires the people that follow them to send an update as well, so you see these huge surges in traffic. For instance, when someone like Barack Obama updates, and they’re being followed by 30-40,000 people, it generates an update from a good majority of their followers. We have a lot of peaks and valleys. We have to design the system for those peaks, those high pressure moments.”
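The write amplification Dorsey describes can be sketched in a few lines. This toy model (the class and names are illustrative, not Twitter's actual architecture) shows how a single update from a widely followed account fans out into tens of thousands of timeline writes at once – the traffic peaks the system has to be designed for:

```python
from collections import defaultdict, deque

class FanOutTimeline:
    """Toy fan-out-on-write model: each new tweet is copied into every
    follower's home timeline at write time, so one update from a
    widely followed account triggers a burst of writes."""

    def __init__(self):
        self.followers = defaultdict(set)    # author -> set of follower ids
        self.timelines = defaultdict(deque)  # user -> their home timeline

    def follow(self, follower, author):
        self.followers[author].add(follower)

    def tweet(self, author, text):
        # One incoming update fans out into len(followers) deliveries --
        # the write amplification behind the peaks described above.
        for follower in self.followers[author]:
            self.timelines[follower].appendleft((author, text))
        return len(self.followers[author])

tl = FanOutTimeline()
for i in range(30000):
    tl.follow(f"user{i}", "obama")
deliveries = tl.tweet("obama", "Hello from the campaign trail")
print(deliveries)  # 30000 writes triggered by a single update
```

A system sized for the average load would buckle under exactly this kind of spike, which is why the peaks, not the valleys, drive the design.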
Many high-profile bloggers have come up with their own solutions. Om Malik, for example, thinks Twitter should charge people who have a large number of followers. But Jack Dorsey is having none of it. “It’s interesting, but we’d rather just focus on really getting the architecture and stability right. I don’t believe that charging people, especially for being popular, is a viable thing to do.”
We’re coming to a critical mass where people are getting the right number of friends on and really extracting a lot of value out of the system
According to Twitter’s other co-founder, Biz Stone, the two-year-old service doubled in size in March/April. Web analytics company Compete says it’s now attracting around 1.2 million people per month and growing at 30 per cent a month. And TechCrunch recently revealed around three million tweets are sent per day. “We’re starting to move out of the early adopter crowd and getting a lot more mainstream usage,” Jack says. “We’re coming to a critical mass where people are getting the right number of friends on and really extracting a lot of value out of the system.”
Fair enough, but don’t the frequent downtimes suggest you weren’t prepared for your own success? “We always knew the concept was going to be big, we just didn’t think it would be big that quickly. We could have been much more prepared, but we have a very popular concept on our hands and some amazing users who are expanding it in a way that we’d never have imagined. It’s moving very quickly and requiring us to really think harder about the next one or two years and how to really architect the system.”
While Twitter’s popularity increases, more and more applications have sprung up that tie into the API. There’s Twitterrific, Twitterverse, Tweetburner, twistori, and Twhirl to name but a few. Surely they must put a lot of pressure on the API. “Oh yeah, the one thing we did wrong very, very early on is when we first launched with a bunch of RSS and Atom feeds,” Jack admits. “We built the system as a real-time messaging system and it is, but RSS as a medium isn’t real time. So we had a number of people, clients and projects all on RSS feeds pulling every 10 or 30 seconds, and the transport was just not designed for that. We’re doing a lot of work to come up with a better model for delivering the API. We’ve done some work with the Jabber PubSub specification, so whenever a client receives an update, we push it, instead of having the client pull every 10 or 30 seconds. It’s much more efficient.”
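The difference between the two models Dorsey contrasts is easy to show in miniature. This sketch is not the Jabber (XMPP) PubSub protocol itself, just the underlying idea: instead of every client polling the feed every 10 to 30 seconds, a client registers once and the hub pushes each update the moment it arrives:

```python
class PushHub:
    """Minimal publish/subscribe sketch: clients register a callback
    once, and the hub pushes every update to them immediately --
    no repeated polling of the feed."""

    def __init__(self):
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish(self, update):
        # Push to every subscriber at the moment of publication.
        for deliver in self.subscribers:
            deliver(update)

received = []
hub = PushHub()
hub.subscribe(received.append)  # client registers once...
hub.publish("new tweet")        # ...and is pushed each update as it happens
print(received)  # ['new tweet']
```

With polling, thousands of clients each generate a request every few seconds whether anything changed or not; with push, the server does work only when there is actually an update to deliver.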
Back on track

A lot of criticism has been directed at Ruby on Rails, which many don’t see as up to the job of scaling big, but Jack believes it will always be a big part of Twitter. “It’s just an amazing and quick and visual way to get something out there,” he explains. “But you have to optimise the system. We’re not entirely on Ruby on Rails now. We have memcache systems and individual Ruby daemons and IM servers that aren’t written in Ruby at all. That’s just part of what you do when you build a technology. You pick the best tool for the job. Rails is extremely good at front-end interfaces and web stacks and at providing an API interface. It hasn’t been written for real-time messaging. It hasn’t been written for routing SMS to IM. These are the things we need to optimise, and that’s what we’re working on.”
Jack says the team has a much better sense now of what exactly they need to fix and optimise. As a result, downtimes have occurred less frequently and the team celebrated 97.3 per cent uptime during Apple’s Worldwide Developer Conference. Twitter has called in help from software development specialist Pivotal Labs and the full-time technical team has been boosted to nine engineers (from only three around six months ago), who are also looking at curbing spam. The plan is to put limits in place, so that people can’t create accounts and follow thousands of people without generating any content.
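One way such a limit might work, purely as a hypothetical sketch (the threshold and class are invented for illustration, not Twitter's actual policy): an account that keeps following people without ever posting anything is refused further follows until it generates some content:

```python
class FollowLimiter:
    """Hypothetical anti-spam limit: accounts that follow many users
    without ever posting look like follower spam, so block further
    follows past a threshold until they generate content."""

    MAX_SILENT_FOLLOWS = 100  # assumed threshold, not Twitter's real number

    def __init__(self):
        self.follows = 0
        self.tweets = 0

    def can_follow(self):
        # Accounts with content are unrestricted in this sketch;
        # silent accounts are capped.
        return self.tweets > 0 or self.follows < self.MAX_SILENT_FOLLOWS

    def record_follow(self):
        if not self.can_follow():
            raise PermissionError("follow limit reached for silent account")
        self.follows += 1

acct = FollowLimiter()
for _ in range(FollowLimiter.MAX_SILENT_FOLLOWS):
    acct.record_follow()
print(acct.can_follow())  # False: no tweets yet, limit reached
```

The appeal of a rule like this is that it targets the behaviour described in the article (mass-following with no content) without inconveniencing ordinary users, who post before they ever approach such a cap.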
For everyone in the company, this is very much a labour of love
“We really appreciate the patience that everyone has shown us while we get through some of these hurdles,” says Jack. “I’d love to say ‘We’re going to be finished and completely stable within three months’ but it’s really hard to estimate this sort of work because, as we’re working on the system, it’s evolving rapidly. For everyone in the company, this is very much a labour of love. We all have our closest friends and family on the system. It kills us when it’s not available and we’re working very hard to make sure that doesn’t happen.”
Once Twitter has got a rock-solid platform, the team can continue to add new features. Users will soon be able to create groups of Twitter friends to send messages to, Twitter will interact with more social networks and of course, at some point the company has to seriously think about how to monetise the system. In the meantime, however, it’s all about the architecture.