My business partners and I have been working on a couple of web projects. One of them is starting to grow, so I decided to do some testing. I've heard people say that lighttpd is much faster than Apache. Well, every single test I've seen was flawed, which is why I decided to run my own. The first part of the tests covers static web sites and how usable high-traffic web sites are in virtualized environments.
The first mistake people make is testing 'which app will finish X requests first' (you know those 'ab -n X -c Y' tests?). Well, let me tell you right away – in those tests Apache will most probably be the slowest thing on the planet (particularly if you are using the prefork MPM). Because of its design, Apache in a way speeds up over time (it spawns child processes or threads, depending on the MPM). Lighttpd will beat Apache any time in those tests. Then, people often run benchmarks against localhost – come on, what kind of a test is that?! Oh, right, you can achieve 70MB/s throughput on the loopback interface. Is that something you can get in production? Numbers like that are possible only on local networks, and even there 70MB/s throughput requires a gigabit network and a hell of a load :). Then we have tests where people benchmark a single page. So, when you put all that together, you get 'ab -n X -c Y http://localhost/some_static.html'. Right... your visitors will visit only that page and then leave your site. Doh... If you are building a server and services strategy based on these numbers, you are better off with a magic 8-ball.
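To make the criticism concrete, here is a sketch of that exact anti-pattern. The request count and concurrency (10000/100) are placeholders I picked for illustration, not values used anywhere in these tests; the command is printed rather than executed, since ab needs a running web server:

```shell
# The misleading localhost micro-benchmark described above:
# one static page, hammered over the loopback interface.
HOST=localhost
AB_CMD="ab -n 10000 -c 100 http://$HOST/some_static.html"
# Print instead of run; this is the command shape, not a real benchmark.
echo "$AB_CMD"
```

Everything that makes production traffic hard – real network latency, a mix of URLs, cold caches – is absent from a run like this, which is exactly why its numbers look so good.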
So, let’s take a look at these graphs:
I deliberately took the wrong approach first and benchmarked lighttpd, Apache prefork and Apache worker against localhost. The numbers are fantastic, particularly for Apache – see how easy it is to crush the lighttpd myth? Let's take a look at the siege output and some interesting numbers:
Apache Prefork MPM:
Transactions: 188781 hits
Availability: 100.00 %
Elapsed time: 60.53 secs
Data transferred: 5207.92 MB
Response time: 0.04 secs
Transaction rate: 3118.80 trans/sec
Throughput: 86.04 MB/sec
Successful transactions: 188782
Failed transactions: 0
Longest transaction: 9.19
Shortest transaction: 0.00
Lighttpd:
Transactions: 74353 hits
Availability: 98.57 %
Elapsed time: 47.49 secs
Data transferred: 2020.14 MB
Response time: 0.12 secs
Transaction rate: 1565.66 trans/sec
Throughput: 42.54 MB/sec
Successful transactions: 73860
Failed transactions: 1077
Longest transaction: 9.15
Shortest transaction: 0.01
Now, most people will look at the transaction rate and make a decision based on those numbers. But one should really look at the throughput, which is insane: that throughput just isn't achievable in common production environments. Yes, there are setups where you can get more, but those are <1% of web sites (if that). With that throughput, the amount of data transferred is also huge – 5GB in one minute just doesn't sound reasonable. But the most important thing in this test (and it happened every time I ran it) is that lighttpd doesn't reach 100% availability, and its response time is disappointing too. So if you really want to run this test, the throughput and transferred-data figures alone should tell you that the test is – wrong.
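If you want to pull the decision-relevant fields out of siege output automatically, a small awk sketch could look like this; the sample lines are the Apache prefork figures from above:

```shell
# Extract Availability and Throughput from siege output -- the fields
# worth judging first, rather than Transaction rate.
# Sample data: the Apache prefork run shown above.
siege_out='Availability: 100.00 %
Throughput: 86.04 MB/sec
Transaction rate: 3118.80 trans/sec'
echo "$siege_out" | awk -F': *' '/Availability|Throughput/ {print $1 "=" $2}'
```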
Tests should always be done from another computer on the same network your clients will use to reach the server. Localhost tests are OK only for web developers – they can see how much slower their PHP/Python/Perl application is than static HTML (by looking at the average response time). If you are a sysadmin, that just doesn't give you enough information.
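As a sketch, a run like the ones in this post is launched from a second box on the same LAN. Only the list_of_urls file name comes from my setup; the server address and the concurrency/duration values below are assumptions you would tune to your own situation:

```shell
# Run siege from a client machine, never from the server itself.
SERVER=192.168.1.10                          # assumed LAN address of the web server
SIEGE_CMD="siege -c 50 -t 60S -f list_of_urls"
# Printed rather than executed here, since it needs the server up:
echo "on a client box that reaches $SERVER over the real network, run:"
echo "$SIEGE_CMD"
```

The -f flag feeds siege a file of URLs so the load mixes many resources, which is exactly what separates this from the single-page localhost test.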
In these tests, I compared lighttpd, Apache prefork and Apache worker in two different setups. In the first, the web servers (and the website) ran on Ubuntu Intrepid 8.10 (Dell PowerEdge T300/6GB RAM/Intel X3323); in the second, they ran inside KVM on that same server, with all four cores assigned to the guest. On real hardware, all three servers delivered ~400 requests per second (remember the 3000/1500 req/s from localhost – see how broken that test is?). Inside KVM, the results were worse: ~145 requests per second.
It's not the network stack in KVM, since it's easy to achieve >10MB/s (on a 100mbit network) when downloading a single large file. But with lots of small static files, all three servers give 5-6MB/s, while 11MB/s is standard for the same files on real hardware. Until I do more tests, my conclusion is that virtualized web servers are OK for low-traffic web sites (5-6MB/s is roughly a 45mbit/s link), but if you are going to run a high-traffic site, you should really put it on a dedicated server. I'm eager to test dynamic content in a virtualized environment.
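A quick back-of-the-envelope check of the bandwidth figures above (1 byte = 8 bits):

```shell
# Convert the measured MB/s figures to link capacity in Mbit/s.
kvm_low=$((5 * 8))      # 5 MB/s inside KVM   -> 40 Mbit/s
kvm_high=$((6 * 8))     # 6 MB/s inside KVM   -> 48 Mbit/s
bare=$((11 * 8))        # 11 MB/s on real hw  -> 88 Mbit/s, near 100mbit wire speed
echo "KVM: ${kvm_low}-${kvm_high} Mbit/s, bare metal: ${bare} Mbit/s"
```

So the bare-metal numbers nearly saturate a 100mbit link, while the KVM guest sits at roughly half of it.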
Tests were done with default installs of the lighttpd, apache2-mpm-worker and apache2-mpm-prefork packages on Ubuntu 8.10. The testing software was siege, and the list_of_urls file was a list of ~150 URLs (gif, pdf, html, doc, jpeg...) – basically a copy of www.amzh.hr.