Based on this YouTube video.
- Memcached allocates memory in fixed-size chunks called slabs. If the slab classes are not configured to match your item sizes, the server can report itself as full and start evicting even though plenty of memory is actually free.
- Technical terms –
cache stampede – see the college-list example below.
cache warming – filling the cache while it is empty (e.g. after a deploy or a flush) so the site is fast from the very first request instead of waiting for misses to populate it.
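Cache warming can be sketched as a small pre-population step run before traffic arrives. This is a minimal sketch with assumed names (`warm_cache`, `fetch_colleges_from_db`, the key `colleges:list`) and a plain dict standing in for a Memcached client:

```python
cache = {}  # stand-in for a Memcached client (hypothetical)

def fetch_colleges_from_db():
    # placeholder for the real database query
    return ["College A", "College B", "College C"]

def warm_cache():
    # run at deploy time / after a flush: fill hot keys up front
    # so the first real users get cache hits, not misses
    cache["colleges:list"] = fetch_colleges_from_db()

warm_cache()
```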
- Suppose we cache a very popular page – say, a list of colleges. If our team updates one of the colleges on that list, the cached list must be updated too. A simple solution is to just delete the cache key for the list.
Now when a user hits that page there is a cache miss, and that request repopulates the key from the database. Looks fine... but what if the page gets around 1000 hits/sec? Every one of those requests gets a cache miss at the same time, so roughly 1000 requests hammer the database at once – a cache stampede. So the cache should be built such that a key can be updated in place, without ever causing a miss.
Suppose, to tackle the previous situation (the cache stampede), you decide to take a lock when the first user gets a cache miss, held until the data is fetched from the DB and written back to the cache. The lock stops every other request from redoing the same work, but while it is held those requests pile up as open connections on Apache/nginx, tying up memory and worker slots.
And what if the process that took the lock crashes and the lock is never released?
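One common answer to the never-released lock is to give the lock itself an expiry, so a crashed worker's lock frees itself. A minimal sketch under assumptions: a dict stands in for Memcached (whose real primitive would be an `add` with an expire time, which only succeeds if the key is absent), and the 10-second TTL is an arbitrary illustrative value:

```python
import time

cache = {}       # stand-in for Memcached (hypothetical)
LOCK_TTL = 10.0  # seconds; assumed value, tune to your DB query time

def acquire_lock(key):
    # store the lock's expiry timestamp under <key>:lock;
    # a lock past its expiry counts as free, so a dead owner
    # can no longer wedge the system
    now = time.time()
    expiry = cache.get(key + ":lock")
    if expiry is None or expiry < now:
        cache[key + ":lock"] = now + LOCK_TTL
        return True
    return False

def release_lock(key):
    cache.pop(key + ":lock", None)
```

The first caller gets the lock and recomputes the value; everyone else can serve the stale cached value instead of blocking, which also avoids the connection pile-up described above.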
- nginx is event-driven, not thread/process-per-connection like Apache. It's a web server plus a reverse proxy, with a much lower memory footprint than Apache. It does not support .htaccess files.
- CSS sprites – combine all the small images used on the website into a single image, loaded once. This avoids a separate HTTP request per icon. Time – 35:47
- Varnish ESI – 9:00 minute mark. Check the limits of ESI as well.