Everything About Caching

I’ll put everything I read about caching in this article.

Write-through, write-around and write-back cache

There are three main caching techniques that can be deployed, each with its own pros and cons; a short code sketch after the list below illustrates all three.

  • Write-through cache directs write I/O onto cache and through to underlying permanent storage before confirming I/O completion to the host. This ensures data updates are safely stored on, for example, a shared storage array, but has the disadvantage that I/O still experiences latency based on writing to that storage. Write-through cache is good for applications that write and then re-read data frequently: because the data is retained in cache, reads are served with low latency.
  • Write-around cache is a similar technique to write-through cache, but write I/O is written directly to permanent storage, bypassing the cache. This can keep the cache from being flooded with write I/O that will not subsequently be re-read, but has the disadvantage that a read request for recently written data will cause a “cache miss” and have to be served from slower bulk storage, experiencing higher latency.
  • Write-back cache is where write I/O is directed to cache and completion is immediately confirmed to the host. This results in low latency and high throughput for write-intensive applications, but there is data availability exposure risk because the only copy of the written data is in cache. As we will discuss later, suppliers have added resiliency with products that duplicate writes. Users need to consider whether write-back cache solutions offer enough protection as data is exposed until it is staged to external storage. Write-back cache is the best performing solution for mixed workloads as both read and write I/O have similar response time levels.
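Here is a minimal Go sketch contrasting the three policies. An in-memory map stands in for the cache and a second map for permanent storage; all names are illustrative assumptions, not a real storage API:

```go
package main

import "fmt"

// store pairs a fast cache with slower permanent (backing) storage.
type store struct {
	cache   map[string]string
	backing map[string]string
}

func newStore() *store {
	return &store{cache: map[string]string{}, backing: map[string]string{}}
}

// writeThrough: write to cache AND backing store before acknowledging,
// so the caller always waits for the slower write.
func (s *store) writeThrough(k, v string) {
	s.cache[k] = v
	s.backing[k] = v
}

// writeAround: bypass the cache; only the backing store is updated.
func (s *store) writeAround(k, v string) {
	s.backing[k] = v
	delete(s.cache, k) // invalidate any stale cached copy
}

// writeBack: acknowledge after the cache write alone; until flush runs,
// the cache holds the only copy of the data.
func (s *store) writeBack(k, v string) {
	s.cache[k] = v
	// a real system would queue this entry and destage it asynchronously
}

// flush destages a write-back entry to permanent storage.
func (s *store) flush(k string) {
	s.backing[k] = s.cache[k]
}

func (s *store) read(k string) (string, bool) {
	if v, ok := s.cache[k]; ok {
		return v, true // cache hit: low latency
	}
	v, ok := s.backing[k] // cache miss: slower path
	if ok {
		s.cache[k] = v // promote for future reads
	}
	return v, ok
}

func main() {
	s := newStore()
	s.writeThrough("a", "1")
	s.writeAround("b", "2")
	s.writeBack("c", "3")
	fmt.Println(s.read("a")) // hit
	fmt.Println(s.read("b")) // miss, then promoted
	fmt.Println(s.read("c")) // hit, but only copy was in cache
}
```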

LRU implementation:

http://openmymind.net/High-Concurrency-LRU-Caching/

1. Hashmap

A read-write mutex for the hashtable is efficient. Assuming that we are GETing more than we are SETting, we’ll mostly be acquiring read-locks (which multiple threads can secure). The only time we need a write lock is when setting an item. If we keep things basic, it ends up looking like the sketch below.
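A minimal sketch of that basic version, assuming Go with sync.RWMutex (the linked article’s actual code isn’t reproduced here, so names are illustrative):

```go
package cache

import "sync"

// Cache is a map guarded by a read-write mutex: GETs take the shared
// read lock, and only SET takes the exclusive write lock.
type Cache struct {
	lock  sync.RWMutex
	items map[string]*Item
}

type Item struct {
	Key   string
	Value []byte
}

func New() *Cache {
	return &Cache{items: make(map[string]*Item)}
}

// Get holds the read lock, which many readers can share at once.
func (c *Cache) Get(key string) *Item {
	c.lock.RLock()
	defer c.lock.RUnlock()
	return c.items[key]
}

// Set holds the exclusive write lock, blocking all readers briefly.
func (c *Cache) Set(key string, item *Item) {
	c.lock.Lock()
	defer c.lock.Unlock()
	c.items[key] = item
}
```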

If necessary, we could always shard our hashtable to support more write throughput.

http://openmymind.net/Shard-Your-Hash-table-to-reduce-write-locks/
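Along the lines of that post, a rough sketch of sharding: hash the key to pick one of N shards, each with its own lock, so writes to different shards no longer contend. The shard count and choice of FNV-1a are assumptions:

```go
package cache

import (
	"hash/fnv"
	"sync"
)

const shardCount = 16 // assumed; tune to your write concurrency

type shard struct {
	lock  sync.RWMutex
	items map[string][]byte
}

type ShardedCache []*shard

func NewSharded() ShardedCache {
	c := make(ShardedCache, shardCount)
	for i := range c {
		c[i] = &shard{items: make(map[string][]byte)}
	}
	return c
}

// shardFor hashes the key to deterministically pick a shard.
func (c ShardedCache) shardFor(key string) *shard {
	h := fnv.New32a()
	h.Write([]byte(key))
	return c[h.Sum32()%shardCount]
}

func (c ShardedCache) Get(key string) []byte {
	s := c.shardFor(key)
	s.lock.RLock()
	defer s.lock.RUnlock()
	return s.items[key]
}

// Set only locks one shard, so concurrent writes to other shards proceed.
func (c ShardedCache) Set(key string, value []byte) {
	s := c.shardFor(key)
	s.lock.Lock()
	defer s.lock.Unlock()
	s.items[key] = value
}
```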

2. List

Ultimately, our goal was to reduce lock contention against our list. We achieved this in three ways. First, we use a window to limit the frequency of promotion. Second, we use a buffered channel to process promotions in a separate thread. Finally, we can do promotion and GC within the same thread. (A sketch combining these appears after the eBay note below.)

(See this implementation from the eBay engineering blog, which is similar to the second option above: http://www.ebaytechblog.com/2011/08/30/high-throughput-thread-safe-lru-caching/ )
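A sketch of these techniques in Go: Gets push promotions onto a buffered channel, and a single worker goroutine owns the list, so the list itself needs no lock. The buffer size and promotion window are assumed values, and tail GC is only noted, not implemented:

```go
package cache

import (
	"container/list"
	"sync"
	"sync/atomic"
	"time"
)

type LRU struct {
	lock    sync.RWMutex
	items   map[string]*entry
	list    *list.List
	promote chan *entry
}

type entry struct {
	key      string
	value    []byte
	element  *list.Element // touched only by the worker goroutine
	promoted atomic.Int64  // unix nanos of the last promotion
}

func NewLRU() *LRU {
	c := &LRU{
		items:   make(map[string]*entry),
		list:    list.New(),
		promote: make(chan *entry, 1024), // assumed buffer size
	}
	go c.worker()
	return c
}

func (c *LRU) Get(key string) []byte {
	c.lock.RLock()
	e, ok := c.items[key]
	c.lock.RUnlock()
	if !ok {
		return nil
	}
	// window: skip promotion if this item was promoted recently
	if time.Now().UnixNano()-e.promoted.Load() > int64(10*time.Second) {
		select {
		case c.promote <- e:
		default: // buffer full: drop the promotion rather than block
		}
	}
	return e.value
}

// Set only inserts new keys in this sketch; update-in-place is omitted.
func (c *LRU) Set(key string, value []byte) {
	e := &entry{key: key, value: value}
	c.lock.Lock()
	c.items[key] = e
	c.lock.Unlock()
	c.promote <- e
}

// worker is the sole goroutine that mutates the list; promotion and,
// in a fuller version, GC of the least-recently-used tail both happen
// here, within the same thread.
func (c *LRU) worker() {
	for e := range c.promote {
		e.promoted.Store(time.Now().UnixNano())
		if e.element == nil {
			e.element = c.list.PushFront(e)
		} else {
			c.list.MoveToFront(e.element)
		}
	}
}
```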

Back To Basics: Hashtables Part 2 (And A Peek Inside Redis): http://openmymind.net/Back-To-Basics-Hasthables-Part-2/

This article introduces incremental rehashing in Redis, an interesting idea.

Summary: The magic of keeping a hashtable efficient, despite varying size, is not that different from the magic behind dynamic arrays: double in size and copy. Redis’s incremental approach ensures that rehashing a large hashtable doesn’t cause any performance hiccups. The downside is internal complexity (almost every hashtable operation needs to be rehashing-aware) and a longer rehashing process (which takes up extra memory while running).

Read about Redis’s hashtable implementation: https://github.com/antirez/redis/blob/unstable/src/dict.c
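A sketch of the idea in Go, in the spirit of Redis’s dict.c (the names, hash function, and growth policy here are assumptions, not Redis’s actual API). Two bucket arrays coexist during a rehash, and every Get/Set migrates one bucket, so the copy cost is amortized across calls:

```go
package cache

type node struct {
	key   string
	value int
	next  *node
}

type dict struct {
	old, next []*node // next is non-nil only while rehashing
	rehashIdx int     // index of the next old bucket to migrate
	size      int
}

func newDict() *dict {
	return &dict{old: make([]*node, 4)}
}

// fnv32a is a stand-in hash function (FNV-1a).
func fnv32a(key string) uint32 {
	var h uint32 = 2166136261
	for i := 0; i < len(key); i++ {
		h = (h ^ uint32(key[i])) * 16777619
	}
	return h
}

// rehashStep migrates a single bucket from old to next, so no single
// operation ever pays for rehashing the whole table at once.
func (d *dict) rehashStep() {
	if d.next == nil {
		return
	}
	for d.rehashIdx < len(d.old) && d.old[d.rehashIdx] == nil {
		d.rehashIdx++
	}
	if d.rehashIdx == len(d.old) { // migration finished: swap tables
		d.old, d.next = d.next, nil
		d.rehashIdx = 0
		return
	}
	for n := d.old[d.rehashIdx]; n != nil; {
		after := n.next
		i := fnv32a(n.key) % uint32(len(d.next))
		n.next = d.next[i]
		d.next[i] = n
		n = after
	}
	d.old[d.rehashIdx] = nil
	d.rehashIdx++
}

// Set assumes each key is inserted once (no update-in-place), to keep
// the sketch short. New keys land in the new table during a rehash.
func (d *dict) Set(key string, value int) {
	d.rehashStep()
	table := d.old
	if d.next != nil {
		table = d.next
	}
	i := fnv32a(key) % uint32(len(table))
	table[i] = &node{key: key, value: value, next: table[i]}
	d.size++
	if d.next == nil && d.size > len(d.old) { // double, then migrate lazily
		d.next = make([]*node, 2*len(d.old))
	}
}

// Get must check both tables while a rehash is in flight.
func (d *dict) Get(key string) (int, bool) {
	d.rehashStep()
	for _, table := range [][]*node{d.old, d.next} {
		if table == nil {
			continue
		}
		for n := table[fnv32a(key)%uint32(len(table))]; n != nil; n = n.next {
			if n.key == key {
				return n.value, true
			}
		}
	}
	return 0, false
}
```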
