Best way to explain is using a supermarket analogy. When the supermarket is in normal operation, two cashiers are usually enough to handle the number of customers – and if each customer has ten items, there is a predictable amount of time to scan each item and handle the transaction. If the items are all the same small size, the time per customer can be estimated pretty consistently. But what happens when a hurricane is suddenly announced and everyone rushes into the supermarket to stock up? And not just on toothbrushes or sliced luncheon meat, but big bags of dog food and gallons of water?
Now the store has only a set number of employees, and only a subset of them can actually run the cash registers. Even if the cashiers squeeze out a bit more throughput, the flood of customers would queue up and grind to a standstill, with long lines filling the interior of the store. Now add one more behaviour: after 10 minutes of waiting, people begin to leave the store in anger and disgust.
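The cashier analogy is really a queueing problem, and you can watch it play out in a few lines of code. This is a toy simulation with made-up numbers (2 cashiers, 3 minutes per customer, 10 minutes of patience) – not a model of any real store or server:

```python
import random

# Toy simulation of the supermarket analogy: 2 cashiers, customers
# arriving at random, and anyone facing more than a 10-minute wait
# walks out. All numbers here are invented for illustration.
random.seed(1)

CASHIERS = 2
SERVICE_MIN = 3.0    # minutes to ring up one customer (assumed)
PATIENCE_MIN = 10.0  # customers abandon the line after this long (assumed)

def simulate(arrivals_per_min, duration_min=120):
    """Return (served, abandoned) for a simple first-come queue."""
    free_at = [0.0] * CASHIERS  # when each cashier is next free
    served = abandoned = 0
    t = 0.0
    while t < duration_min:
        t += random.expovariate(arrivals_per_min)  # next customer arrives
        cashier = min(range(CASHIERS), key=lambda i: free_at[i])
        wait = max(0.0, free_at[cashier] - t)
        if wait > PATIENCE_MIN:
            abandoned += 1  # left the store in anger and disgust
        else:
            free_at[cashier] = max(free_at[cashier], t) + SERVICE_MIN
            served += 1
    return served, abandoned

# Normal day: arrivals comfortably below capacity
# (2 cashiers / 3 min each is about 0.66 customers per minute).
print("normal:", simulate(0.5))
# Hurricane announced: arrivals roughly triple, the line melts down.
print("hurricane:", simulate(1.5))
```

Below capacity, abandonments stay near zero; push arrivals past what the cashiers can clear and walk-outs pile up, exactly like the lines in the store.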
What I have just described is what happens between a web browser and a web server: the web server is the bank of cashiers, and each web browser is a single customer. The groceries are the various components of the webpage being requested by your browser. In the case of the Joe2006.com homepage, there were 16 images separate from the actual page, plus an unknown number of background images that also needed to be loaded. On top of this, the site was not simply a static web server – it was a dynamically generated website running PHP (which is notorious as a processor hog). And even if they did try to switch over to a new server that could handle the processing, the DNS change (telling the computers on the Internet to go to a different machine) would not have propagated (read: taken effect) in time. Am I surprised the website ran slow? Not at all.
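To put a number on why the switchover would not have propagated in time: the delay is governed by the TTL on the old DNS record. The figure below is an assumed common default, since I don't know what Joe2006.com's zone was actually set to:

```python
# Back-of-the-envelope on DNS propagation. 86400 seconds (24 hours)
# is a commonly used default TTL; the real zone's setting is unknown.
OLD_TTL_SECONDS = 86400  # assumed value for illustration

# A resolver that cached the record just before the switch keeps
# handing out the OLD server's IP for up to the full TTL.
worst_case_hours = OLD_TTL_SECONDS / 3600
print(f"Worst-case lag before visitors reach the new server: {worst_case_hours:.0f} hours")
```

In other words, unless you lowered the TTL well in advance, a mid-crisis server swap does nothing for the visitors whose resolvers already cached the old address.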
How to prove who did what?
Get the server logs. Real simple – and if the machine crapped out, it will be because of the traffic load. If there were strange FTP requests or email requests, that would prove something was up. And you can tell if they switched to a beefier server to handle the load, because other DNS servers will see when the information for the IP address changes, along with the TTL (time-to-live: how often a resolver checks for changes in the IP address). But my gut tells me they were suffering from an underpowered server – not too little bandwidth.
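As a sketch of what "get the server logs" turns up: assuming the standard Apache/nginx combined log format, a few lines of Python can bucket hits per minute and count 5xx errors (the server buckling) versus oddball requests. The sample lines here are invented for illustration:

```python
import re
from collections import Counter

# Invented sample lines in Apache/nginx combined log format.
LOG_LINES = [
    '10.0.0.1 - - [15/Aug/2006:09:00:01 -0400] "GET / HTTP/1.1" 200 5120',
    '10.0.0.2 - - [15/Aug/2006:09:00:01 -0400] "GET /index.php HTTP/1.1" 200 5120',
    '10.0.0.3 - - [15/Aug/2006:09:00:02 -0400] "GET / HTTP/1.1" 503 0',
]

# Capture the timestamp down to the minute, and the status code.
pattern = re.compile(r'\[(\d+/\w+/\d+:\d+:\d+):\d+ [^\]]+\] "[^"]*" (\d{3})')

hits_per_minute = Counter()
errors = 0
for line in LOG_LINES:
    m = pattern.search(line)
    if not m:
        continue
    minute, status = m.groups()
    hits_per_minute[minute] += 1
    if status.startswith("5"):
        errors += 1  # 5xx responses: overload, not an attack

print(hits_per_minute.most_common(1))
print("server errors:", errors)
```

A spike in requests per minute with climbing 5xx counts points at plain overload; a pile of weird FTP or mail traffic in the same window would point at something else entirely.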
Related Kerry story
Yes – as usual, there is a related Kerry story. When we were first migrating off our underpowered server, Slate ran an article on our newest video ad and our web server suddenly came to a screeching halt. Fortunately, we were literally 90 minutes away from switching our video content to Akamai. We had already shortened the TTL to 30 minutes, and once Akamai had the video streaming service operating, we recovered within 45 minutes. And this was in October 2003 – well before the crushing traffic we were to experience in the coming months.
Tags: Joe2006.com, Joe Lieberman, Ned Lamont, Denial of Service, Overloaded Server
This entry was posted in Campaign 2006.