
The Epic Tale of MySpace’s Technical Failure


Remember MySpace? Yeah, that MySpace, the social network that blew up and was then overtaken by Facebook. Today we take a dive into MySpace's technical failure, or what Sean Parker, the first president of Facebook, once described in a keynote as an epic fail born of gross incompetence.

You might ask: what was this epic fail and gross incompetence that people like Sean Parker talk about? In a nutshell, infrastructure. MySpace's infrastructure was not originally designed for heavy traffic or load. Unlike platforms like Google, eBay and Yahoo, MySpace simply wasn't built with scale in mind from the start.

 

How did MySpace start?

MySpace started like this. In 2003, the US passed the anti-spam law CAN-SPAM. At this point, the owners of Intermix Media saw an opportunity to open their own social network. For the web application, they hired a programmer, Duck Chau, who wrote the first version of MySpace. The site ran on Perl under the Apache web server with a MySQL database, and everything looked promising.

 

However, this stack was unfamiliar to the other Intermix Media programmers, who had experience working with ColdFusion (then a Macromedia product; Adobe acquired it later). So they rewrote the application in ColdFusion, a rigid, hard-to-scale setup, and as a result Duck Chau quit.

 

The right platform at the right time

Fortunately for MySpace, its launch took place exactly when the most popular social network of the day, Friendster, started having performance issues. Users had to wait 20-30 seconds for each page to load, yet its developers lacked the financial resources to fix it. Very quickly, everyone switched to MySpace, whose servers worked quite well, at least in the beginning. It is really mind-blowing when you think about how websites and online platforms were set up back in the early 2000s: the MySpace site ran on just two Dell web servers (4 GB of memory and two processors each) with a single database server. As incoming requests grew, new web servers were purchased. Despite this, scaling problems started to show in early 2004.

 

Second upgrade

The number of registered users reached 400,000, and the database server could no longer cope with the load. Adding database servers is not as easy as adding web servers, so MySpace decided to create a bundle of three SQL Server databases: one main server and two copies.
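The pattern described here, one main database for writes plus read-only copies, amounts to a simple query router in front of a replicated database. The sketch below illustrates the idea only; the class and the server labels are our own hypothetical names, not MySpace's actual code:

```python
import random

class ReplicatedDatabase:
    """Route writes to the main server and spread reads across copies.

    A minimal sketch of a one-main/two-copy setup; 'main' and
    'replicas' are placeholder connection labels, not real servers.
    """

    def __init__(self, main, replicas):
        self.main = main
        self.replicas = replicas

    def route(self, sql):
        # Writes must hit the main server so the copies can replicate
        # its changes; reads can be served by any copy.
        if sql.lstrip().upper().startswith(("INSERT", "UPDATE", "DELETE")):
            return self.main
        return random.choice(self.replicas)

router = ReplicatedDatabase("db-main", ["db-copy-1", "db-copy-2"])
print(router.route("INSERT INTO users VALUES (1)"))  # db-main
print(router.route("SELECT * FROM users"))           # one of the two copies
```

The win is that read traffic, which dominates on a profile-browsing site, is spread over three machines while writes stay consistent on one.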

 

So naturally, the next upgrade happened in mid-2004, when the number of users approached 2 million. At this point, the database server could no longer keep up with the volume of read-and-write requests. A simple example: comments were published on the site with a delay of up to five minutes! Can you imagine waiting five minutes on Facebook for your comment to show up? The solution: separate the data storage system from the DBMS.

 

Third upgrade

So, the third upgrade occurred shortly after, as MySpace hit 3 million users and pushed the DBMS to its breaking point. Now the social media platform came up with two fixes. One: create a large distributed system from relatively inexpensive database servers, which could easily be scaled in the future. Two, and the main upgrade: rewrite the software running the site. Additionally, users were "divided" into clusters so that each database server uniformly accommodated 2 million people.
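Dividing users into fixed-size clusters like this is range-based sharding: user N's data lives on database server N ÷ 2,000,000. A hedged sketch of the arithmetic (the function name is ours, not MySpace's):

```python
USERS_PER_SHARD = 2_000_000  # each database server holds ~2M accounts

def shard_for_user(user_id: int) -> int:
    """Map a user ID to the index of the database server storing their data."""
    return user_id // USERS_PER_SHARD

# Users 1 and 1,999,999 share shard 0; user 3,000,000 lands on shard 1.
print(shard_for_user(1))          # 0
print(shard_for_user(1_999_999))  # 0
print(shard_for_user(3_000_000))  # 1
```

The appeal is that growth is handled by adding cheap servers: every new block of 2 million signups simply opens the next shard, with no single machine holding everyone.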

 

But in early 2005, the subscriber base reached 9 million, so engineers began migrating from ColdFusion to what was then a very new web development technology: Microsoft's C#, running on ASP.NET. It immediately turned out that the services built to run the MySpace site worked much more efficiently under ASP.NET; in simple terms, on the new code, 150 servers serviced the same number of users as 246 could before. In addition, a new professional data storage system was installed that could withstand heavy traffic and load. Not long after, the user base climbed to 17 million and MySpace added another series of cache servers.
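Cache servers like the ones mentioned here typically sit between the web tier and the database and follow the cache-aside pattern: check the cache first, fall back to the database on a miss, then store the result for the next reader. A minimal sketch, with a plain dict standing in for a dedicated cache server and hypothetical names throughout:

```python
cache = {}  # stands in for a dedicated in-memory cache server

def load_profile(user_id, fetch_from_db):
    """Cache-aside read: try the cache, fall back to the database."""
    key = f"profile:{user_id}"
    if key in cache:
        return cache[key]             # cache hit: no database round-trip
    profile = fetch_from_db(user_id)  # cache miss: query the database
    cache[key] = profile              # store so the next read is a hit
    return profile

# Count how often the "database" is actually hit:
db_calls = []
def fake_db(user_id):
    db_calls.append(user_id)
    return {"id": user_id, "name": f"user{user_id}"}

load_profile(42, fake_db)
load_profile(42, fake_db)
print(len(db_calls))  # 1 -- the second read was served from the cache
```

For a site where millions of people reload the same popular profiles, every cache hit is a database query that never happens, which is exactly the pressure relief a struggling DBMS needs.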

 

Final step

The last MySpace upgrade took place in mid-2005, at 26 million users, when the site migrated to the new SQL Server 2005 DBMS while it was still in beta testing. Why the rush? Well, this was the first version of SQL Server to support 64-bit processors with extended memory, and memory had become the main remaining bottleneck in MySpace's infrastructure.

 

In mid-2005, media mogul Rupert Murdoch bought MySpace for $580 million, and the user base eventually swelled toward 140 million. However, the MySpace infrastructure was still faltering, as the C# and .NET setup could not handle the load of 140 million users. As a result, in November 2006, the site broke SQL Server's limit on simultaneous connections, which became the main reason for the constant loading failures. The Windows 2003 servers unexpectedly shut down as their built-in protection against DoS attacks triggered falsely. From then on, MySpace users saw error messages literally every day. At times of peak load, between 20% and 40% of attempts to log in to the site were unsuccessful, with "Unexpected Error" appearing in response.
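The failure mode described here, too many simultaneous connections, is the classic argument for a bounded connection pool: cap in-flight queries and fail fast when the cap is reached, instead of letting the server collapse. A hedged sketch using a semaphore (the cap of 3 is illustrative, far below any real SQL Server limit, and the error string merely echoes what MySpace users saw):

```python
import threading

MAX_CONNECTIONS = 3  # illustrative cap, not a real SQL Server limit

pool = threading.BoundedSemaphore(MAX_CONNECTIONS)

def query(sql):
    """Run a query only if a connection slot is free; fail fast otherwise."""
    if not pool.acquire(blocking=False):
        return "Unexpected Error"  # what overloaded MySpace showed users
    try:
        return f"ok: {sql}"        # placeholder for real query execution
    finally:
        pool.release()             # free the slot for the next request

print(query("SELECT 1"))  # ok: SELECT 1 -- a slot was free

# Simulate 3 in-flight queries holding every slot:
for _ in range(MAX_CONNECTIONS):
    pool.acquire(blocking=False)

print(query("SELECT 2"))  # Unexpected Error -- no slot left
```

Failing fast is ugly for the one rejected user, but it keeps the database alive for everyone else; the alternative MySpace hit was servers shutting themselves down entirely.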

 

When you can’t keep up

Even back then these percentages were unacceptable: the industry average did not exceed 1%. And since MySpace's main audience was teenagers, the end result was detrimental to MySpace.

[Graph: MySpace's Technical Failure]

 

Looking at the graph above, it's evident that Facebook took off because of MySpace's technical issues. And that brings us back to Sean Parker's statement at the beginning: had it not been for MySpace's major technical problems, Facebook might never have won.

 

It speaks to the great importance of marketing/sales and technical departments working together. One cannot create a game-changing platform without the other.

 

Don't make the same mistake MySpace did. If you are struggling to build a stable yet scalable setup, don't hesitate to reach out to us for a sanity check on your tech stack. In the meantime, stay tuned for more news!
