Ummm – Lieberman website crashing?

Interesting – learning about Lieberman’s website going down. Yes, you could potentially call this “hacking”, but a large amount of traffic – as long as it is coming from many different, legitimate hosts – would not be counted as a Denial of Service (DoS) attack.

Let me look at “the facts” – which, by the way, are snippets of information gleaned from the blogosphere:

  • The Lieberman website suffered from delays on Monday night and crashed by 7am Tuesday morning (Stamford Advocate)
  • According to Dan Geary, there were numerous requests for “web pages, FTP files, and emails” which swamped the server. (MSNBC)
  • The Joe2006.com server was on a shared machine which hosted 70+ other sites (DailyKos)
  • The Joe2006.com site was more than likely on a low-cost hosting plan, which could still have come with a large bandwidth allocation (gleaned from multiple blogs and from MSNBC)
  • Joe2006.com email server is hosted at the GoDaddy/SecureServer service provider (DNS Stuff)
  • The Lieberman/Lamont primary was the most hotly contested race that week, which would have driven a large amount of traffic to the site on Monday and Tuesday (see previous post about web traffic)
  • Today, the Joe2006.com website is hosted at a different IP address (68.178.232.95) as I gleaned from pinging the server

So what do I see? Actually, not enough for a conclusion. Simply put – I would want to see the server logs on the machine hosted at IP address 69.56.129.130 (the original home of the Joe2006.com site). Dan suggests that it got a deluge of FTP (port 21), email (ports 113/25), and web (port 80) requests. Since the machine records those requests in its logs, if there was an attack, there would be log entries to show for it.
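
If someone did hand over that web access log, even a quick script would tell the story. Here is a rough sketch – the file name and the Apache-style “combined” log format are my assumptions, not anything confirmed about their setup – that tallies requests per client address:

    # Sketch: tally web requests per client address from an Apache-style access log.
    # The file name is hypothetical – this is what I would run if handed the logs.
    from collections import Counter

    LOG_PATH = "access_log"            # export from the 69.56.129.130 machine (assumed)

    counts = Counter()
    with open(LOG_PATH) as log:
        for line in log:
            if not line.strip():
                continue
            client_ip = line.split()[0]   # first field in common/combined log format
            counts[client_ip] += 1

    total = sum(counts.values())
    print(f"{total} requests from {len(counts)} distinct hosts")
    for ip, hits in counts.most_common(10):
        print(f"{ip:15}  {hits:6} hits  ({hits / total:.1%} of all traffic)")
    # A few addresses owning most of the traffic looks like an attack;
    # thousands of addresses with a handful of hits each looks like a busy primary night.

A handful of hosts generating most of the traffic smells like an attack; tens of thousands of hosts with a few hits each smells like election-night interest.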

My guess is that the server was having problems because shared servers are notorious for allowing only a limited number of web server processes to handle traffic. No amount of bandwidth can make up for too few web server processes when a flood of requests arrives. Just this week, one of my clients (www.goodnightburbank.com) launched a new episode of their show. Interestingly – we had over 100GB of download bandwidth purchased, but the site would not load for many people. What I discovered was that the virtual/dedicated server was capped at a maximum of 10 HTTP client processes and could not spawn more to meet increased demand. Once I raised that setting, the server was able to handle the volume of requests and the site was running smoothly once again.
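
For the curious: on an Apache server using the prefork model – an assumption on my part, since I never saw what Joe2006.com or my client’s box was actually running – the cap I am describing lives in a handful of directives like these. The numbers are illustrative, not anyone’s real values.

    # httpd.conf (Apache prefork) – illustrative values only
    StartServers           5
    MinSpareServers        5
    MaxSpareServers       10
    MaxClients            10    # only 10 visitors served at once; everyone else waits in line
    MaxRequestsPerChild  500

    # The fix is the equivalent of opening more registers
    # (on Apache 2.x, ServerLimit caps how high MaxClients can go):
    # MaxClients          150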

What? What are you saying?
The best way to explain it is with a supermarket analogy. When the supermarket is in normal operation, two cashiers are usually enough to handle the number of customers – and if each customer has ten items, then there is a set amount of time to ring up each item and handle the transaction. And if the items are all the same small size, then that time can be estimated pretty consistently. But what happens when a hurricane is suddenly announced and everyone comes into the supermarket to stock up? And not only on toothbrushes or sliced luncheon meat, but on big bags of dog food and gallons of water?

Now the store has only a set number of employees, and only a subset of them can actually run the cash registers. You can see that even if the cashiers increase their throughput, the large number of requests would queue up and come to a standstill, with long lines filling the interior of the store. Now add one more behavior – after 10 minutes of waiting, people begin to leave the store in anger and disgust.
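
To put rough numbers on the analogy, here is a toy simulation I threw together (purely illustrative – nothing to do with the actual Joe2006.com machine): a fixed pool of cashiers works through arriving customers, and anyone stuck in line for ten minutes walks out, just as a browser eventually times out.

    import random

    def simulate(cashiers, arrivals_per_minute, minutes=60,
                 service_minutes=1.0, patience_minutes=10):
        """Toy checkout model: a fixed pool of cashiers and impatient customers."""
        served = abandoned = 0
        queue = []                       # arrival times of customers waiting in line
        free_at = [0.0] * cashiers       # minute at which each cashier is next free

        for minute in range(minutes):
            # a burst of new customers arrives this minute
            queue.extend([minute] * random.randint(0, arrivals_per_minute * 2))

            # anyone who has waited longer than their patience walks out
            still_waiting = [t for t in queue if minute - t < patience_minutes]
            abandoned += len(queue) - len(still_waiting)
            queue = still_waiting

            # each free cashier takes the next customer in line
            for i in range(cashiers):
                if queue and free_at[i] <= minute:
                    queue.pop(0)
                    free_at[i] = minute + service_minutes
                    served += 1

        return served, abandoned

    random.seed(2006)
    print("quiet day, 2 cashiers:      ", simulate(cashiers=2, arrivals_per_minute=2))
    print("primary night, 2 cashiers:  ", simulate(cashiers=2, arrivals_per_minute=40))
    print("primary night, 40 cashiers: ", simulate(cashiers=40, arrivals_per_minute=40))

With two cashiers and hurricane-level arrivals it serves a trickle and most customers walk out; add cashiers – not a wider front door – and nearly everyone gets through.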

What I have just described is what happens between a web browser and a web server, where the web server is the bank of cashiers and the web browser is a single customer. The groceries are the various components of the web page being requested by your browser. In the case of the Joe2006.com homepage, there were 16 images separate from the actual page, plus an unknown number of background images that also needed to be loaded. On top of this, the site was not simply serving static pages – it was dynamically generated with PHP (which is notorious as a processor hog). And even if they did try to switch over to a new server that could handle the processing, the DNS timeout (telling the computers on the Internet to go to a different machine) would not have propagated (read: taken effect) in time. Am I surprised the website ran slow? Not at all.
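
If you want to see the TTL issue for yourself, any DNS tool will show it; here is a sketch using the third-party dnspython package (my choice of tool – dig would tell you the same thing). The TTL is how many seconds resolvers are allowed to keep handing out the cached, old answer.

    # Sketch: look up the A record and its TTL with dnspython (pip install dnspython).
    import dns.resolver

    answer = dns.resolver.resolve("joe2006.com", "A")   # older dnspython uses .query()
    for record in answer:
        print("address:", record.address)
    print("TTL (seconds):", answer.rrset.ttl)
    # Until that many seconds pass, resolvers that cached the old answer will
    # keep sending visitors to the previous IP address, new server or not.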

How to prove who did what?
Get the server logs. Real simple – if the machine crapped out, the logs will show it was because of the traffic load. If there were strange FTP or email requests, that would prove something was up. And you can tell if they switched to a better machine to handle the load, because other DNS servers learn when the information for the IP address changes, based on the TTL (time-to-live: how often a cached record is rechecked for changes in the IP address). But my gut tells me they were suffering from an underpowered server – not too little bandwidth.
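
And the server-switch part does not even need their cooperation; anyone can check it from the outside. A trivial sketch of my own (the old address is the one reported above):

    # Sketch: confirm that joe2006.com no longer resolves to its original host.
    import socket

    OLD_IP = "69.56.129.130"                      # where the site lived before the primary
    current_ip = socket.gethostbyname("joe2006.com")

    print("currently resolves to:", current_ip)
    print("moved off the old box?", current_ip != OLD_IP)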


Related Kerry story
Yes – as usual, there is a related Kerry story. When we were first migrating from our underpowered server, Slate ran an article on our newest video ad and we suddenly found our web server coming to a screeching halt. Fortunately, we were literally 90 minutes away from switching our video content to Akamai. That meant we had already shortened the TTL to 30 minutes and Akamai had the video streaming service operating – and we recovered within 45 minutes. And this was in October 2003 – well before the crushing traffic we were to experience in the coming months.

Tags: Overloaded Server

This entry was posted in Campaign 2006.
