Varnish caching for noobs

For a while during my time at Daemon I could talk geeky with the best of them – well, at least follow the conversations. Jason Barnes, Daemonite development manager, was filling me in on what the team had been up to recently. This included their visit to the cfObjective conference, where Geoff (head Daemonite) gave a talk on Varnish. The slides are online (nice HTML5 slide deck, btw). I was like Varnish!? What the? No longer working with developers means I no longer get to learn geeky things through osmosis. A hello ping on IM resulted in my schooling in Varnish – a service that simply makes websites like Facebook and Twitter serve content fast.

 

Erietta: How’s the Daemon crew? All good I trust
Jason: Yeah we are going well. Daemon sent the team to Melbourne for cfObjective
Erietta: Was it good?
Jason: There were some good sessions, Geoff’s Varnish talk was very popular
Jason: Varnish is a reverse proxy solution we now use
Erietta: oooh!
Jason: http://www.daemon.com.au/slides/varnish/index.html#slide1
Erietta: I am so dumb now, seriously, I don’t know what any of this stuff means any more. What has become of me?! Is this slide deck html 5?
Jason: Yeah
Erietta: Nice. I still need a dummies translation of what that all means though. I know it means “faster”, that’s it. I’m too noob
Jason: Basically you put a server in front of your applications, which takes the request, sends it on to the app server, grabs the html it returns, and caches it locally in memory (it can overflow to disk as well). The next request just pulls from the local cache. You can also break your page up into smaller caches, each with their own timeouts, and then it’s all about the exceptions and rules, which Varnish makes easy. It’s veerrrrry efficient and scales very well.
Erietta: OK, so it’s an intermediary server that gets the requests, has a cache of relevant html and serves that up to the next person who is asking for the same thing?
Jason: and it all runs on the tiniest server, so bang for buck it’s stellar
Erietta: hmmmm
Jason: It has added benefits too, like if your application server dies it keeps serving from the local cache.
Erietta: So the request from the second person/user/visitor is basically just getting html only and not going to the app server at all.
Jason: yep
Erietta: aaah ok
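
If seeing the idea in code helps, here is a toy version of that request flow. It isn’t how Varnish is actually built (Varnish is written in C and configured with its own VCL language); it’s just a Java sketch of the core trick, and every name in it is made up for the example: keep the html the backend produced, keyed by URL, and hand it straight back until it goes stale.

    import java.time.Duration;
    import java.time.Instant;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // A toy "reverse proxy" cache: the first request for a URL goes to the
    // backend app server, every request after that is answered straight from
    // memory until the entry's time-to-live runs out.
    public class ToyPageCache {

        static class CachedPage {
            final String html;
            final Instant expires;
            CachedPage(String html, Duration ttl) {
                this.html = html;
                this.expires = Instant.now().plus(ttl);
            }
            boolean isFresh() { return Instant.now().isBefore(expires); }
        }

        private final Map<String, CachedPage> cache = new ConcurrentHashMap<>();
        private final Duration ttl = Duration.ofSeconds(60);

        public String get(String url) {
            CachedPage page = cache.get(url);
            if (page != null && page.isFresh()) {
                return page.html;                      // cache hit: the app server never sees it
            }
            String html = fetchFromBackend(url);       // cache miss: one trip to the app server
            cache.put(url, new CachedPage(html, ttl));
            return html;
        }

        // Stand-in for the real HTTP call to the application server.
        private String fetchFromBackend(String url) {
            return "<html>rendered by the app server for " + url + "</html>";
        }

        public static void main(String[] args) {
            ToyPageCache proxy = new ToyPageCache();
            System.out.println(proxy.get("/news"));    // miss: goes to the "backend"
            System.out.println(proxy.get("/news"));    // hit: served from memory
        }
    }

The “smaller caches each with their own timeouts” part is the same idea applied per page fragment instead of per whole page, with a different ttl for each fragment.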
Jason: Not only that: if 10 people request a page simultaneously while it’s fetching a new copy, they either a) get queued behind the first request at the proxy, or b) get served the old copy if one is available. It’s really good if you have a 30k newsletter drop that would otherwise kill your server.
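
That “queue behind the first request, or serve the old copy” behaviour can be sketched too (the serve-the-old-copy part is roughly what Varnish calls grace). Again this is an illustrative Java toy, not Varnish’s implementation, with made-up names and no error handling: at most one fetch per URL is allowed to be in flight, and anyone holding an expired copy serves that instead of waiting.

    import java.time.Duration;
    import java.time.Instant;
    import java.util.Map;
    import java.util.concurrent.CompletableFuture;
    import java.util.concurrent.ConcurrentHashMap;

    // Sketch of "request coalescing": when a cached page has expired and many
    // requests arrive at once, only one of them refreshes it from the backend.
    // The others either wait for that fetch or get the stale copy immediately.
    public class StampedeSafeCache {

        static class Entry {
            final String html;
            final Instant expires;
            Entry(String html, Duration ttl) {
                this.html = html;
                this.expires = Instant.now().plus(ttl);
            }
            boolean isFresh() { return Instant.now().isBefore(expires); }
        }

        private final Map<String, Entry> cache = new ConcurrentHashMap<>();
        private final Map<String, CompletableFuture<String>> inFlight = new ConcurrentHashMap<>();
        private final Duration ttl = Duration.ofSeconds(60);

        public String get(String url) {
            Entry entry = cache.get(url);
            if (entry != null && entry.isFresh()) {
                return entry.html;                        // fresh hit, no backend involved
            }

            // Ensure only one backend fetch per URL is running at a time.
            CompletableFuture<String> fetch = inFlight.computeIfAbsent(url, u ->
                CompletableFuture.supplyAsync(() -> {
                    String html = fetchFromBackend(u);
                    cache.put(u, new Entry(html, ttl));
                    inFlight.remove(u);
                    return html;
                }));

            if (entry != null) {
                return entry.html;                        // b) serve the old copy rather than wait
            }
            return fetch.join();                          // a) queue behind the first request
        }

        private String fetchFromBackend(String url) {
            return "<html>fresh page for " + url + "</html>";
        }
    }

So a 30k newsletter click-storm turns into one backend request per stale page instead of thousands.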
Erietta: How does it differ from the caching engine you made for FarCry a couple of years ago? That had granular caching, if that’s a term I can use, i.e. it was able to serve new elements (updated content) while serving other cached content.
Jason: Well, the FarCry cache still relies on CF threads. This stands in front, dedicated to the job, and is waaaaay more efficient. It’s written specifically to do this task.
Erietta: and technology agnostic obviously
Jason: It manages memory specifically for this task, whereas Java memory management is architected for a different purpose
Erietta: that being?
Jason: well objects in code
Erietta: as opposed to pages?
Jason: So in Java some objects live longer than others, which means the memory is subject to garbage collection, a process you can’t directly control. Whereas dedicated caching objects in Varnish live until they are told not to live anymore, because they were replaced. Java has machinery in the Java virtual machine which is way more complex than straight caching needs to be, so Varnish memory management is directly allocated and deallocated: https://www.varnish-cache.org/trac/wiki/ArchitectNotes
Erietta: oh good you just linked me to a life story there. Give me the crib notes #lazyweb ! ;)
Jason: That last paragraph is the explanation. It just highlights that the memory handling is written specifically for the task, not a framework that is flexible but comes with trade-offs, e.g. Java
Erietta:

“Now imagine that another CPU wants to n_bar++ at the same time, can it do that? No. Caches operate not on bytes but on some “linesize” of bytes, typically from 8 to 128 bytes in each line. So since the first CPU was busy dealing with n_foo, the second CPU will be trying to grab the same cache-line, so it will have to wait, even though it is a different variable.”
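
The n_foo / n_bar bit is about what is usually called false sharing: two unrelated counters that happen to sit in the same cache line make two CPUs fight over that line even though they never touch the same variable. You can observe the effect from Java as well. This little experiment is my own illustration (it’s not from the slides or the Architect Notes), and because the JVM doesn’t guarantee how it lays out fields, the padding trick is only a best-effort way to force the counters onto separate lines.

    import java.util.concurrent.TimeUnit;

    // Rough demo of false sharing: two threads each increment their own counter.
    // When the counters share a cache line the threads slow each other down;
    // when they are padded apart they don't.
    public class FalseSharingDemo {

        // Two counters declared side by side, very likely on the same cache line.
        static class SharedCounters {
            volatile long nFoo;
            volatile long nBar;
        }

        // Same counters with ~56 bytes of padding between them (assuming 64-byte
        // lines). Field layout isn't guaranteed by the JVM, so this is best-effort.
        static class PaddedCounters {
            volatile long nFoo;
            long p1, p2, p3, p4, p5, p6, p7;
            volatile long nBar;
        }

        static final long ITERATIONS = 50_000_000L;

        static long timeMillis(Runnable fooTask, Runnable barTask) throws InterruptedException {
            Thread a = new Thread(fooTask);
            Thread b = new Thread(barTask);
            long start = System.nanoTime();
            a.start(); b.start();
            a.join(); b.join();
            return TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start);
        }

        public static void main(String[] args) throws InterruptedException {
            SharedCounters shared = new SharedCounters();
            long sharedMs = timeMillis(
                () -> { for (long i = 0; i < ITERATIONS; i++) shared.nFoo++; },
                () -> { for (long i = 0; i < ITERATIONS; i++) shared.nBar++; });

            PaddedCounters padded = new PaddedCounters();
            long paddedMs = timeMillis(
                () -> { for (long i = 0; i < ITERATIONS; i++) padded.nFoo++; },
                () -> { for (long i = 0; i < ITERATIONS; i++) padded.nBar++; });

            System.out.println("same cache line: " + sharedMs + " ms");
            System.out.println("padded apart:    " + paddedMs + " ms");
        }
    }

On most machines the padded version finishes noticeably faster, which is the whole point of the quote: the hardware makes you pay for sharing a cache line, not just for sharing a variable.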

Jason: Think of it this way: FarCry is a framework which makes it easy to build apps, but the trade-off with any framework is performance. If you built every single page on a website from scratch, with a view to optimising that page, you’d be 1000x more efficient than using a framework
Erietta: Got it. So what other performance improvements have been made to the FarCry framework while I haven’t been watching?
Jason: We rewrote caching :P We’re testing it as we speak. It uses a new algorithm, a dynamic replacement cache, and it also factors memory use into the caching mechanism
Erietta: dude, that page is hard core. WHAT DOES IT MEAN!
Jason: Hehe, ok. So previously in FarCry the way we cached was using a number we made up per content type for how many objects to cache, e.g. 1000 for html. Now we have one single cache which does both objects and html snippets, and it dynamically resizes itself depending on what’s going on: it checks the old-gen part of memory in the JVM to see if we are at 70%. Additionally it’s clever in the way it chooses what objects to evict, in that it’s optimised so pages like the homepage and news landing page etc. aren’t ever evicted before a news article from 2006. So what you see on that dump is the resizes, the memory stats, and hits vs evicts. 81452 hits vs 6743 misses is an awesome hit ratio: an 8% miss rate, which means only 8% of those object requests needed to go to the db (or needed a webskin to be rebuilt)
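
For the curious, here is roughly what a cache along those lines could look like in Java. This is my own sketch of the ingredients Jason lists (one shared least-recently-used cache, shrink it when the JVM’s old generation passes about 70% full, keep hit/miss counters), not FarCry’s actual code; the 70% threshold, the pool-name matching and every identifier are assumptions for the example.

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryPoolMXBean;
    import java.lang.management.MemoryUsage;
    import java.util.Iterator;
    import java.util.LinkedHashMap;
    import java.util.Map;

    // Sketch of a self-sizing LRU cache: one cache for everything, the least
    // recently used entries are evicted first, and the whole thing shrinks
    // when the JVM's old generation gets too full.
    public class AdaptiveLruCache<K, V> {

        private static final double OLD_GEN_THRESHOLD = 0.70; // assumed, not FarCry's real number

        private int maxEntries = 10_000;
        private long hits = 0, misses = 0;

        // accessOrder=true makes LinkedHashMap behave as an LRU: every read moves
        // the entry to the "young" end, so hot pages never drift to the old end.
        private final LinkedHashMap<K, V> cache = new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > maxEntries;
            }
        };

        public synchronized V get(K key) {
            V value = cache.get(key);
            if (value != null) hits++; else misses++;
            return value;
        }

        public synchronized void put(K key, V value) {
            if (oldGenUsage() > OLD_GEN_THRESHOLD) {
                maxEntries = Math.max(100, maxEntries / 2);   // memory pressure: shrink the cache
                Iterator<Map.Entry<K, V>> it = cache.entrySet().iterator();
                while (cache.size() > maxEntries && it.hasNext()) {
                    it.next();
                    it.remove();                              // drop the least recently used entries
                }
            }
            cache.put(key, value);
        }

        public synchronized String stats() {
            long total = hits + misses;
            double missRate = total == 0 ? 0 : 100.0 * misses / total;
            return hits + " hits / " + misses + " misses ("
                    + String.format("%.1f", missRate) + "% miss rate)";
        }

        // How full is the old generation? Pool names vary by garbage collector,
        // so this string match is a best-effort guess.
        private static double oldGenUsage() {
            for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
                String name = pool.getName();
                if (name.contains("Old Gen") || name.contains("Tenured")) {
                    MemoryUsage usage = pool.getUsage();
                    if (usage != null && usage.getMax() > 0) {
                        return (double) usage.getUsed() / usage.getMax();
                    }
                }
            }
            return 0.0;
        }
    }

The LRU ordering is what keeps the homepage alive while the 2006 article quietly falls off the end: every hit refreshes an entry’s position, so the objects nobody asks for are always the first to go, and hit/miss counters like the 81452 vs 6743 in Jason’s dump are the kind of thing stats() reports here.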
Erietta: Sweet. That’s cool. What about performance of the site tree in FarCry? Has that improved? From memory it was using its own JS library?
Jason: Haha, nope, still the same tree. We are using Twitter Bootstrap for our forms now on some of our projects. Additionally Matt’s refactored permissions so anyone can do them, not just a dev; we could hand over the webtop to a producer to configure for the client.
