Posted over 14 years ago
Module support is perhaps the most exciting feature of Varnish Cache 3.0. It makes it really easy to add quite complex business logic to Varnish, or to connect Varnish to any external data source. Do you want to connect Varnish to your MySQL database, so that it will authorize users against a table in MySQL? It's now easy and simple to write a module that does that. It will probably wreak havoc on performance, but that's your decision, not ours.
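A module along those lines might be used from VCL roughly like this. Note that this is a sketch: the `mysql` vmod and its `authorized()` function are invented here for illustration and do not ship with Varnish.

```vcl
import mysql;  # hypothetical vmod, not part of Varnish 3.0

sub vcl_recv {
    # Ask the external database whether this user may see this URL.
    # mysql.authorized() is an invented function for illustration only.
    if (!mysql.authorized(req.http.Cookie, req.url)) {
        error 403 "Not authorized";
    }
}
```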
Posted over 14 years ago
Hi.
We're considering releasing a Varnish 3.0 module that makes the content of the WURFL database accessible in VCL. WURFL is a crowd-sourced database consisting of some 13 thousand user-agent strings with accompanying characteristics.
If you are interested in seeing such a thing get developed and are willing to spend some money to see it released, either send me an email (perbu [at] varnish-software [dot] com) or leave a comment below.
Posted over 14 years ago
The big new features are compression, basic streaming support and vmods, but there are a bunch of other new features as well. Installation notes and a full change log are available.
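Both compression and streaming are controlled from vcl_fetch in 3.0. A minimal sketch of the new knobs (the content-type test and URL pattern are made up for illustration):

```vcl
sub vcl_fetch {
    # Have Varnish gzip text responses before storing them (new in 3.0).
    if (beresp.http.Content-Type ~ "text") {
        set beresp.do_gzip = true;
    }
    # Stream large objects to the client while they are still being fetched.
    if (req.url ~ "^/downloads/") {
        set beresp.do_stream = true;
    }
}
```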
Posted over 14 years ago
Varnish Cache 2.1 does what you could call store-and-forward proxying: it gets the request, turns around and fetches the whole object from the backend, then turns around again and delivers it to the client. If your backend takes its time delivering the object, the client might get restless.
Posted over 14 years ago
In Varnish 1.0 there was only one way of ejecting content from Varnish: you had to add VCL code that could find the object and set its TTL to zero. The typical, and squid-compatible, way of doing it was to create a new HTTP method called "PURGE". The VCL would typically look like this:
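The original snippet is cut off here, but the classic squid-style purge pattern in 1.x/2.x-era VCL looks roughly like this (the `purgers` ACL membership is an assumption; adjust it to your own network):

```vcl
acl purgers {
    "127.0.0.1";
}

sub vcl_recv {
    if (req.request == "PURGE") {
        if (!client.ip ~ purgers) {
            error 405 "Not allowed.";
        }
        return (lookup);
    }
}

sub vcl_hit {
    if (req.request == "PURGE") {
        # Found the object in cache: expire it immediately.
        set obj.ttl = 0s;
        error 200 "Purged.";
    }
}

sub vcl_miss {
    if (req.request == "PURGE") {
        error 404 "Not in cache.";
    }
}
```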
Posted over 14 years ago
Hi.
Finally Varnish 3.0 is feature complete and we're about to roll out a beta. All in all we've done quite a lot of testing on this release and the code seems to be quite stable. We've been testing with some rather busy websites, and both speed and stability are very good.
Posted over 14 years ago
As the snow is melting in Oslo we've spent some time doing some spring renovations on www.varnish-cache.org. We've added a forum and a new FAQ.
Posted over 14 years ago
I’ve been asked about this so many times that I thought I should just post it here. It’s actually very simple to do using restarts.
The problem: you need to check if a user is authorized for an object (which may or may not already be cached by Varnish) by means of an external application.
The solution: the following VCL will pass GET requests from the users to the authorization app. You can modify the URLs, e.g. insert a custom query string if required by the app.
The request is then either denied (if the auth app returns anything other than a 200) or restarted and served from the real backend or from cache.
This is only an example; you can extend it to cache authorization responses, add a control header if you use restarts anywhere else in your VCL, etc.
sub vcl_recv {
    if (req.url ~ "^/authorized_content") {
        if (req.restarts == 0) {
            # First pass: send the request to the authorization app.
            set req.backend = authorization_backend;
            return (pass);
        } else {
            # After the restart: serve from the real backend (or cache).
            set req.backend = real_backend;
            set req.url = regsub(req.url, "_authorize_me", "");
        }
    }
}

sub vcl_fetch {
    if (req.url ~ "^/authorized_content" && req.restarts == 0) {
        if (beresp.status == 200) {
            # Authorized: restart and fetch the real object.
            return (restart);
        } else {
            error 403 "Not authorized";
        }
    }
}
Posted over 14 years ago
I was made aware of a synthetic benchmark that concerned Varnish today, and it looked rather suspicious. The services tested were Varnish, nginx, Apache and G-Wan, and G-Wan came out an order of magnitude faster than Varnish. This made me question the result. The first thing I noticed was AB, a tool I’ve long since given up trying to make behave properly. As there was no detailed data, I decided to give it a spin myself.
You will not find graphs. You will not find “this is best!”-quotes. I’m not even backing up my statements with httperf-output.
Disclaimer
This is not a comparison of G-Wan versus Varnish. It is not complete. It is not even a vague attempt at making either G-Wan or Varnish perform better or worse. It is not realistic. Not complete and in no way a reflection on the overall functionality, usability or performance of G-Wan.
Why not? Because I would be stupid to publicize such things without directly consulting the developers of G-Wan so that the comparison would be fair. I am a Varnish-developer.
This is a text about stress testing. Not the result of stress testing. Nothing more.
The basic idea
So G-Wan was supposedly much faster than Varnish. Its feature set is also very narrow, as it goes about things differently. The test showed that Varnish, Apache and nginx were almost comparable in performance, whereas G-Wan was ridiculously much faster. The test was also conducted on a local machine (so no networking) and using AB. As I know that it’s hard to get nginx, Apache and Varnish to perform at the same level, this indicated to me that G-Wan did something differently that affected the test.
I installed G-Wan and Varnish on a virtual machine and started playing with httperf.
What to test
The easiest number to demonstrate in a test is the maximum request rate. It tells you what the server can do under maximum load. However, it is also the hardest test to do precisely and fairly across daemons of vastly different nature.
Another thing I have rarely written about is the response time of Varnish for average requests. This is often much more interesting to the end user, as your server isn’t going to be running at full capacity anyway. Fairness and concurrency are also highly relevant: a user doing a large download shouldn’t adversely affect other users.
I wasn’t going to bother with all that.
First test
The first test I did was “max req/s”-like. It quickly showed that G-Wan was very fast, and in fact faster than Varnish, at first glance. The actual request rate was higher and the CPU usage lower. However, Varnish is massively multi-threaded, which offsets the CPU measurements greatly, and I wasn’t about to trust them.
Looking closer I realized that the real bottleneck was in fact httperf. With Varnish, it was able to keep more connections open and busy at the same time, and thus hit the upper limit of concurrency. This in turn gave subtle and easily ignored errors on the client which Varnish can do little about. It seemed G-Wan was dealing with fewer sessions at the same time, but faster, which gave httperf an easier time. This does not benefit G-Wan in the real world (nor does it necessarily detract from its performance), but it does create an unbalanced synthetic test.
I experimented with this quite a bit, and quickly concluded that the level of concurrency was much higher with Varnish. But it was difficult to measure. Really difficult. Because I did not want to test httperf.
The hardware I used was my home-computer, which is ridiculously overpowered. The VM (KVM) was running with two CPU cores and I executed the clients from the host-OS instead of booting up physical test-servers. (… That 275k req/s that’s so much quoted? Spotify didn’t skip a beat while it was running (on the same machine). )
Conclusion
The more I tested this, the more I was able to produce any result I wanted by tweaking the level of concurrency, the degree of load, the amount of bandwidth required and so forth.
The response time of G-Wan seemed to deteriorate with load. But that might as well be the test environment. As the load went up, it took a long time to get a response. This is just not the case with Varnish at all. I ended up doing a little hoodwinking at the end to see how far this went, and the results varied extremely with tiny variations of test-parameters. The concurrency is a major factor. And the speed of Varnish at each individual connection played a huge part. At large amounts of parallel requests Varnish would be sufficiently fast with all the connections that httperf never ran into problems, while G-Wan would be more uneven and thus trigger failures (and look slower)…
My only conclusion is that it will take me several days to properly map out the performance patterns of Varnish compared to G-Wan. They treat concurrent connections vastly differently and perform very differently depending on the load pattern you throw at them. Relating this to real traffic is very hard.
But this confirms my suspicion of the bogus-ness of the blog post that led me to perform these tests. It’s not that I mind Varnish losing performance tests if we are actually slower, but it’s very hard to stomach when the nature of the test is so dubious. The art of measuring realistic performance with synthetic testing is not one that can be mastered in an afternoon.
Lessons learned
(I think conclusions are supposed to be last, but never mind)
First: Be skeptical of unbalanced results. And of even results.
Second: Measure more than one factor. I’ve mainly focused on request rate in my posts because I do not compare Varnish to anything but itself. Without a comparison it doesn’t make that much sense to provide reply latency (though I suppose I should start supplying a measure of concurrency, since that’s one of the huge strong points of Varnish).
Third: Conclude carefully. This is an extension of the first lesson.
…
A funny detail: While I read the license for the non-free G-Wan, which I always do for proprietary software, I was happy to see that it didn’t have a benchmark clause (Oracle, anyone?). But it does forbid removing or modifying the Server:-header. It also forces me to give the G-Wan guys permission to use my use of G-Wan in their marketing… Hmm — maybe I should … — err, never mind.
Posted over 14 years ago
I will be in Paris next week to participate in a seminar on Varnish at Capgemini’s premises. If you are in the area and interested in Varnish, take a look at https://www.varnish-software.com/paris. The nature of the event is informational for technical minds.
(This must be my shortest blog-post by far)