Everything about Cache

I’ll put everything I read about caching in this article.

Write-through, write-around and write-back cache

There are three main caching techniques that can be deployed, each with its own pros and cons; a small code sketch of the three write paths follows the list.

  • Write-through cache directs write I/O onto the cache and through to the underlying permanent storage before confirming I/O completion to the host. This ensures data updates are safely stored on, for example, a shared storage array, but has the disadvantage that write I/O still experiences latency based on writing to that storage. Write-through cache is good for applications that write and then re-read data frequently, because the data is already in cache and reads see low latency.
  • Write-around cache is a similar technique to write-through cache, but write I/O is written directly to permanent storage, bypassing the cache. This keeps the cache from being flooded with write I/O that will not subsequently be re-read, but the disadvantage is that a read request for recently written data will cause a “cache miss”, have to be served from slower bulk storage, and experience higher latency.
  • Write-back cache directs write I/O to the cache and immediately confirms completion to the host. This results in low latency and high throughput for write-intensive applications, but there is a data-availability risk because the only copy of the written data is in cache. As we will discuss later, suppliers have added resiliency with products that duplicate writes. Users need to consider whether write-back cache solutions offer enough protection, as data is exposed until it is staged to external storage. Write-back cache is the best-performing option for mixed workloads, as both read and write I/O have similar response-time levels.
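
To make the three paths concrete, here is a minimal Go sketch (my own, not from the article above). The FastTier and BackingStore interfaces and the flush channel are hypothetical stand-ins for the cache and the permanent storage; real products obviously differ in the details.

    package cache

    // BackingStore stands in for slower permanent storage, FastTier for the
    // cache in front of it. The three policies differ only in when each layer
    // is written and when completion is reported back to the host.
    type BackingStore interface {
        Write(key string, value []byte) error
    }

    type FastTier interface {
        Write(key string, value []byte)
        Invalidate(key string)
    }

    // writeThrough writes to the cache and through to permanent storage before
    // reporting completion, so reads of recently written data hit the cache.
    func writeThrough(c FastTier, s BackingStore, key string, v []byte) error {
        c.Write(key, v)
        return s.Write(key, v) // the host sees completion only after both writes
    }

    // writeAround bypasses the cache so one-off writes don't pollute it; a read
    // of this key shortly afterwards will miss and go to slower bulk storage.
    func writeAround(c FastTier, s BackingStore, key string, v []byte) error {
        c.Invalidate(key) // drop any stale cached copy
        return s.Write(key, v)
    }

    // writeBack acknowledges as soon as the cache holds the data and destages to
    // permanent storage later; until then the cache holds the only copy.
    func writeBack(c FastTier, s BackingStore, key string, v []byte, flush chan<- string) error {
        c.Write(key, v)
        flush <- key // drained asynchronously by a background destaging goroutine
        return nil   // completion is confirmed before the data reaches BackingStore
    }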

LRU implementation:

http://openmymind.net/High-Concurrency-LRU-Caching/

1. Hashmap:

A read-write mutex for the hashtable is efficient. Assuming that we are GETing more than we are SETting, we’ll mostly be acquiring read-locks (which multiple threads can secure). The only time we need a write lock is when setting an item. If we keep things basic, it ends up looking like:
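
(The article’s own snippet isn’t reproduced here; the following is a minimal sketch of the same idea in Go, assuming a simple string-keyed map. The Cache, Get and Set names are just illustrative.)

    package cache

    import "sync"

    // Cache is the basic map-plus-RWMutex idea: any number of readers can hold
    // the read lock at once, and only Set needs the exclusive write lock.
    type Cache struct {
        mu    sync.RWMutex
        items map[string]interface{}
    }

    func New() *Cache {
        return &Cache{items: make(map[string]interface{})}
    }

    func (c *Cache) Get(key string) (interface{}, bool) {
        c.mu.RLock()
        defer c.mu.RUnlock()
        value, ok := c.items[key]
        return value, ok
    }

    func (c *Cache) Set(key string, value interface{}) {
        c.mu.Lock()
        defer c.mu.Unlock()
        c.items[key] = value
    }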

If necessary, we could always shard our hashtable to support more write throughput.

http://openmymind.net/Shard-Your-Hash-table-to-reduce-write-locks/
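
A rough sketch of the sharding idea: hash the key to pick one of N independent maps, each with its own lock, so writes to different shards never contend. The shard count and the FNV hash here are illustrative choices, not the article’s.

    package cache

    import (
        "hash/fnv"
        "sync"
    )

    const shardCount = 16 // illustrative; tune to the measured write contention

    type shard struct {
        mu    sync.RWMutex
        items map[string]interface{}
    }

    // ShardedCache spreads keys over independent shards so a write only locks
    // 1/shardCount of the table.
    type ShardedCache struct {
        shards [shardCount]*shard
    }

    func NewSharded() *ShardedCache {
        sc := &ShardedCache{}
        for i := range sc.shards {
            sc.shards[i] = &shard{items: make(map[string]interface{})}
        }
        return sc
    }

    func (sc *ShardedCache) shardFor(key string) *shard {
        h := fnv.New32a()
        h.Write([]byte(key))
        return sc.shards[h.Sum32()%shardCount]
    }

    func (sc *ShardedCache) Set(key string, value interface{}) {
        s := sc.shardFor(key)
        s.mu.Lock()
        defer s.mu.Unlock()
        s.items[key] = value
    }

    func (sc *ShardedCache) Get(key string) (interface{}, bool) {
        s := sc.shardFor(key)
        s.mu.RLock()
        defer s.mu.RUnlock()
        value, ok := s.items[key]
        return value, ok
    }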

2. List: Ultimately, our goal is to reduce lock contention on our list. We achieve this in three ways. First, we use a window to limit the frequency of promotion. Second, we use a buffered channel to process promotions in a separate thread. Finally, we can do promotion and GC within that same thread. (All three ideas are sketched together below.)

(See this implementation from the eBay engineering blog, which is similar to the second option above: http://www.ebaytechblog.com/2011/08/30/high-throughput-thread-safe-lru-caching/ )
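
Here is a rough Go sketch that combines the three techniques for a string-keyed cache. The window length, channel size and names are all illustrative, and both the openmymind and eBay implementations differ in the details.

    package cache

    import (
        "container/list"
        "sync"
        "sync/atomic"
        "time"
    )

    type item struct {
        key      string
        value    interface{}
        element  *list.Element // position in the recency list; owned by the worker
        promoted int64         // unix nanos of the last promotion, accessed atomically
    }

    type LRU struct {
        mu       sync.RWMutex
        items    map[string]*item
        list     *list.List // front = most recently used; touched only by the worker
        capacity int
        window   time.Duration
        promote  chan *item
    }

    func NewLRU(capacity int) *LRU {
        c := &LRU{
            items:    make(map[string]*item),
            list:     list.New(),
            capacity: capacity,
            window:   time.Minute,            // (1) promotion window, illustrative
            promote:  make(chan *item, 1024), // (2) buffered promotion queue
        }
        go c.worker()
        return c
    }

    func (c *LRU) Get(key string) (interface{}, bool) {
        c.mu.RLock()
        it, ok := c.items[key]
        if !ok {
            c.mu.RUnlock()
            return nil, false
        }
        value := it.value
        c.mu.RUnlock()
        // Only queue a promotion if this item hasn't been promoted recently, and
        // drop the promotion rather than block the reader if the queue is full.
        if time.Now().UnixNano()-atomic.LoadInt64(&it.promoted) > int64(c.window) {
            select {
            case c.promote <- it:
            default:
            }
        }
        return value, true
    }

    func (c *LRU) Set(key string, value interface{}) {
        c.mu.Lock()
        it, ok := c.items[key]
        if ok {
            it.value = value
        } else {
            it = &item{key: key, value: value}
            c.items[key] = it
        }
        c.mu.Unlock()
        select { // list insertion is funnelled through the same queue
        case c.promote <- it:
        default:
        }
    }

    // worker is the single goroutine that owns the list, so (3) promotion and
    // GC happen in the same place and never contend with each other.
    func (c *LRU) worker() {
        for it := range c.promote {
            atomic.StoreInt64(&it.promoted, time.Now().UnixNano())
            if it.element == nil {
                it.element = c.list.PushFront(it)
            } else {
                c.list.MoveToFront(it.element)
            }
            for c.list.Len() > c.capacity {
                oldest := c.list.Back()
                evicted := c.list.Remove(oldest).(*item)
                c.mu.Lock()
                delete(c.items, evicted.key)
                c.mu.Unlock()
            }
        }
    }

Note that when the promotion channel is full this sketch simply drops the promotion (and, for brand-new keys, even the initial list insertion) rather than block the reader; that is the trade-off being made here in favour of Get latency.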

http://openmymind.net/Back-To-Basics-Hasthables-Part-2/ : Back To Basics: Hashtables Part 2 (And A Peek Inside Redis). This article introduces incremental rehashing in Redis, which is an interesting idea.

Summary: The magic of keeping a hashtable efficient, despite its varying size, is not that different from the magic behind dynamic arrays: double in size and copy. Redis’ incremental approach ensures that rehashing large hashtables doesn’t cause any performance hiccups. The downside is internal complexity (almost every hashtable operation needs to be rehashing-aware) and a longer rehashing process (which takes up extra memory while running).

Read about Redis’ hashtable implementation: https://github.com/antirez/redis/blob/unstable/src/dict.c
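
To illustrate the mechanism, here is a heavily simplified, single-threaded Go sketch of incremental rehashing (nothing like the real dict.c, which also handles shrinking, random keys, iterators and so on): while a rehash is in progress two bucket arrays coexist, each operation migrates one bucket, and lookups have to check both tables.

    package cache

    import "hash/fnv"

    // dict sketches Redis-style incremental rehashing: the table is doubled by
    // migrating one bucket per operation instead of copying everything at once.
    type entry struct {
        key   string
        value interface{}
        next  *entry
    }

    type dict struct {
        tables    [2][]*entry // tables[0] is current, tables[1] is the rehash target
        used      int
        rehashIdx int // next bucket to migrate; -1 = no rehash in progress
    }

    func newDict() *dict {
        return &dict{tables: [2][]*entry{make([]*entry, 4), nil}, rehashIdx: -1}
    }

    func hashKey(key string) uint32 { // illustrative hash choice
        h := fnv.New32a()
        h.Write([]byte(key))
        return h.Sum32()
    }

    // rehashStep migrates a single bucket, so the cost of doubling the table is
    // amortised over many Get/Set calls instead of one long pause.
    func (d *dict) rehashStep() {
        if d.rehashIdx < 0 {
            return
        }
        for d.rehashIdx < len(d.tables[0]) && d.tables[0][d.rehashIdx] == nil {
            d.rehashIdx++
        }
        if d.rehashIdx == len(d.tables[0]) { // migration finished: swap tables
            d.tables[0], d.tables[1], d.rehashIdx = d.tables[1], nil, -1
            return
        }
        for e := d.tables[0][d.rehashIdx]; e != nil; {
            next := e.next
            idx := hashKey(e.key) % uint32(len(d.tables[1]))
            e.next, d.tables[1][idx] = d.tables[1][idx], e
            e = next
        }
        d.tables[0][d.rehashIdx] = nil
        d.rehashIdx++
    }

    func (d *dict) Set(key string, value interface{}) {
        d.rehashStep()
        if d.rehashIdx < 0 && d.used >= len(d.tables[0]) { // grow: start a rehash
            d.tables[1] = make([]*entry, 2*len(d.tables[0]))
            d.rehashIdx = 0
        }
        for t := 0; t < 2; t++ { // update in place if the key already exists
            if d.tables[t] == nil {
                continue
            }
            idx := hashKey(key) % uint32(len(d.tables[t]))
            for e := d.tables[t][idx]; e != nil; e = e.next {
                if e.key == key {
                    e.value = value
                    return
                }
            }
        }
        t := 0
        if d.rehashIdx >= 0 {
            t = 1 // new entries go straight into the new table during a rehash
        }
        idx := hashKey(key) % uint32(len(d.tables[t]))
        d.tables[t][idx] = &entry{key: key, value: value, next: d.tables[t][idx]}
        d.used++
    }

    func (d *dict) Get(key string) (interface{}, bool) {
        d.rehashStep()
        for t := 0; t < 2; t++ { // while rehashing, a key can be in either table
            if d.tables[t] == nil {
                continue
            }
            idx := hashKey(key) % uint32(len(d.tables[t]))
            for e := d.tables[t][idx]; e != nil; e = e.next {
                if e.key == key {
                    return e.value, true
                }
            }
        }
        return nil, false
    }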


Brexit?

I’m betting Brexit will happen and I have money in SPY puts right now. I wish I hadn’t made the bet, as I realize it is not wise to bet on such an event. My original plan was to enter the position last Friday and then get out with profits before the Brexit result was announced, but I have been trapped since last Friday. I already sold some of my position at a loss yesterday but still hold a large portion. The Market Maker definitely knows what I’m thinking. The market has been rallying since last Friday and seems so sure Brexit will fail.

Let’s see. Right now the results show that Leave is not that unlikely.

Beyond my investment, I personally believe Britain should exit the EU for its own benefit, not for the grand aspirations of other countries. I can’t see any benefit for the country and its people in staying in the EU, except for big international companies and their allies who need a voice in the EU. I don’t believe Britain will become less important in world politics if it leaves the EU. Quite the contrary: I believe Britain will become much more important, as it will stand as an independent country and get to sign trade deals with other countries on its own, with other Commonwealth countries, the United States, China, Japan, etc.

I also don’t believe the EU will punish Britain by shutting it out of its markets. Some would say Britain will lose the whole market while the EU gets to spread the loss among its members, but I think Germany will be the most heavily impacted and will act in its own interest.

Update: Brexit won! Congrats!

[screenshot: QQ20160623-0@2x]

volatile considered harmful in the kernel

https://www.kernel.org/doc/Documentation/volatile-considered-harmful.txt 

Why the "volatile" type class should not be used
------------------------------------------------

C programmers have often taken volatile to mean that the variable could be
changed outside of the current thread of execution; as a result, they are
sometimes tempted to use it in kernel code when shared data structures are
being used.  In other words, they have been known to treat volatile types
as a sort of easy atomic variable, which they are not.  The use of volatile in
kernel code is almost never correct; this document describes why.

The key point to understand with regard to volatile is that its purpose is
to suppress optimization, which is almost never what one really wants to
do.  In the kernel, one must protect shared data structures against
unwanted concurrent access, which is very much a different task.  The
process of protecting against unwanted concurrency will also avoid almost
all optimization-related problems in a more efficient way.

Like volatile, the kernel primitives which make concurrent access to data
safe (spinlocks, mutexes, memory barriers, etc.) are designed to prevent
unwanted optimization.  If they are being used properly, there will be no
need to use volatile as well.  If volatile is still necessary, there is
almost certainly a bug in the code somewhere.  In properly-written kernel
code, volatile can only serve to slow things down.

Consider a typical block of kernel code:

    spin_lock(&the_lock);
    do_something_on(&shared_data);
    do_something_else_with(&shared_data);
    spin_unlock(&the_lock);

If all the code follows the locking rules, the value of shared_data cannot
change unexpectedly while the_lock is held.  Any other code which might
want to play with that data will be waiting on the lock.  The spinlock
primitives act as memory barriers - they are explicitly written to do so -
meaning that data accesses will not be optimized across them.  So the
compiler might think it knows what will be in shared_data, but the
spin_lock() call, since it acts as a memory barrier, will force it to
forget anything it knows.  There will be no optimization problems with
accesses to that data.

If shared_data were declared volatile, the locking would still be
necessary.  But the compiler would also be prevented from optimizing access
to shared_data _within_ the critical section, when we know that nobody else
can be working with it.  While the lock is held, shared_data is not
volatile.  When dealing with shared data, proper locking makes volatile
unnecessary - and potentially harmful.

The volatile storage class was originally meant for memory-mapped I/O
registers.  Within the kernel, register accesses, too, should be protected
by locks, but one also does not want the compiler "optimizing" register
accesses within a critical section.  But, within the kernel, I/O memory
accesses are always done through accessor functions; accessing I/O memory
directly through pointers is frowned upon and does not work on all
architectures.  Those accessors are written to prevent unwanted
optimization, so, once again, volatile is unnecessary.

Another situation where one might be tempted to use volatile is
when the processor is busy-waiting on the value of a variable.  The right
way to perform a busy wait is:

    while (my_variable != what_i_want)
        cpu_relax();

The cpu_relax() call can lower CPU power consumption or yield to a
hyperthreaded twin processor; it also happens to serve as a compiler
barrier, so, once again, volatile is unnecessary.  Of course, busy-
waiting is generally an anti-social act to begin with.

There are still a few rare situations where volatile makes sense in the
kernel:

  - The above-mentioned accessor functions might use volatile on
    architectures where direct I/O memory access does work.  Essentially,
    each accessor call becomes a little critical section on its own and
    ensures that the access happens as expected by the programmer.

  - Inline assembly code which changes memory, but which has no other
    visible side effects, risks being deleted by GCC.  Adding the volatile
    keyword to asm statements will prevent this removal.

  - The jiffies variable is special in that it can have a different value
    every time it is referenced, but it can be read without any special
    locking.  So jiffies can be volatile, but the addition of other
    variables of this type is strongly frowned upon.  Jiffies is considered
    to be a "stupid legacy" issue (Linus's words) in this regard; fixing it
    would be more trouble than it is worth.

  - Pointers to data structures in coherent memory which might be modified
    by I/O devices can, sometimes, legitimately be volatile.  A ring buffer
    used by a network adapter, where that adapter changes pointers to
    indicate which descriptors have been processed, is an example of this
    type of situation.

For most code, none of the above justifications for volatile apply.  As a
result, the use of volatile is likely to be seen as a bug and will bring
additional scrutiny to the code.  Developers who are tempted to use
volatile should take a step back and think about what they are truly trying
to accomplish.

Patches to remove volatile variables are generally welcome - as long as
they come with a justification which shows that the concurrency issues have
been properly thought through.


NOTES
-----

[1] http://lwn.net/Articles/233481/
[2] http://lwn.net/Articles/233482/

CREDITS
-------

Original impetus and research by Randy Dunlap
Written by Jonathan Corbet
Improvements via comments from Satyam Sharma, Johannes Stezenbach, Jesper
	Juhl, Heikki Orsila, H. Peter Anvin, Philipp Hahn, and Stefan
	Richter.

good explanation of spinlocks and semaphores

I saw this post on Stack Overflow; it’s a very good explanation.

http://stackoverflow.com/questions/195853/spinlock-versus-semaphore

Spinlock and semaphore differ mainly in four things:

1. What they are
A spinlock is one possible implementation of a lock, namely one that is implemented by busy waiting (“spinning”). A semaphore is a generalization of a lock (or, the other way around, a lock is a special case of a semaphore). Usually, but not necessarily, spinlocks are only valid within one process whereas semaphores can be used to synchronize between different processes, too.

A lock works for mutual exclusion, that is one thread at a time can acquire the lock and proceed with a “critical section” of code. Usually, this means code that modifies some data shared by several threads.
A semaphore has a counter and will allow itself being acquired by one or several threads, depending on what value you post to it, and (in some implementations) depending on what its maximum allowable value is.

Insofar, one can consider a lock a special case of a semaphore with a maximum value of 1.

2. What they do
As stated above, a spinlock is a lock, and therefore a mutual exclusion (strictly 1 to 1) mechanism. It works by repeatedly querying and/or modifying a memory location, usually in an atomic manner. This means that acquiring a spinlock is a “busy” operation that possibly burns CPU cycles for a long time (maybe forever!) while it effectively achieves “nothing”.
The main incentive for such an approach is the fact that a context switch has an overhead equivalent to spinning a few hundred (or maybe thousand) times, so if a lock can be acquired by burning a few cycles spinning, this may overall very well be more efficient. Also, for realtime applications it may not be acceptable to block and wait for the scheduler to come back to them at some far away time in the future.

A semaphore, by contrast, either does not spin at all, or only spins for a very short time (as an optimization to avoid the syscall overhead). If a semaphore cannot be acquired, it blocks, giving up CPU time to a different thread that is ready to run. This may of course mean that a few milliseconds pass before your thread is scheduled again, but if this is no problem (usually it isn’t) then it can be a very efficient, CPU-conservative approach.

3. How they behave in presence of congestion
It is a common misconception that spinlocks or lock-free algorithms are “generally faster”, or that they are only useful for “very short tasks” (ideally, no synchronization object should be held for longer than absolutely necessary, ever).
The one important difference is how the different approaches behave in presence of congestion.

A well-designed system normally has low or no congestion (this means not all threads try to acquire the lock at the exact same time). For example, one would normally not write code that acquires a lock, then loads half a megabyte of zip-compressed data from the network, decodes and parses the data, and finally modifies a shared reference (append data to a container, etc.) before releasing the lock. Instead, one would acquire the lock only for the purpose of accessing the shared resource.
Since this means that there is considerably more work outside the critical section than inside it, naturally the likelihood for a thread being inside the critical section is relatively low, and thus few threads are contending for the lock at the same time. Of course every now and then two threads will try to acquire the lock at the same time (if this couldn’t happen you wouldn’t need a lock!), but this is rather the exception than the rule in a “healthy” system.

In such a case, a spinlock greatly outperforms a semaphore because if there is no lock congestion, the overhead of acquiring the spinlock is a mere dozen cycles as compared to hundreds/thousands of cycles for a context switch or 10-20 million cycles for losing the remainder of a time slice.

On the other hand, given high congestion, or if the lock is being held for lengthy periods (sometimes you just can’t help it!), a spinlock will burn insane amounts of CPU cycles for achieving nothing.
A semaphore (or mutex) is a much better choice in this case, as it allows a different thread to run useful tasks during that time. Or, if no other thread has something useful to do, it allows the operating system to throttle down the CPU and reduce heat / conserve energy.

Also, on a single-core system, a spinlock will be quite inefficient in presence of lock congestion, as a spinning thread will waste its complete time waiting for a state change that cannot possibly happen (not until the releasing thread is scheduled, which isn’t happening while the waiting thread is running!). Therefore, given any amount of contention, acquiring the lock takes around 1 1/2 time slices in the best case (assuming the releasing thread is the next one being scheduled), which is not very good behaviour.

4. How they’re implemented
A semaphore will nowadays typically wrap sys_futex under Linux (optionally with a spinlock that exits after a few attempts).
A spinlock is typically implemented using atomic operations, and without using anything provided by the operating system. In the past, this meant using either compiler intrinsics or non-portable assembler instructions. Meanwhile both C++11 and C11 have atomic operations as part of the language, so apart from the general difficulty of writing provably correct lock-free code, it is now possible to implement lock-free code in an entirely portable and (almost) painless way.
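
To make point 4 a little more concrete, here is a toy spinlock built from nothing but an atomic compare-and-swap. I’m sketching it in Go with sync/atomic rather than the C11/C++11 atomics the answer mentions; it is for illustration only, not production use.

    package lockdemo

    import (
        "runtime"
        "sync/atomic"
    )

    // SpinLock is a lock implemented purely with an atomic compare-and-swap on
    // a memory word; nothing from the operating system is involved.
    type SpinLock struct {
        state int32 // 0 = unlocked, 1 = locked
    }

    func (s *SpinLock) Lock() {
        for !atomic.CompareAndSwapInt32(&s.state, 0, 1) {
            // Busy-wait: keep retrying instead of blocking. Gosched plays
            // roughly the role cpu_relax()/a pause instruction plays elsewhere.
            runtime.Gosched()
        }
    }

    func (s *SpinLock) Unlock() {
        atomic.StoreInt32(&s.state, 0)
    }

Compare this with sync.Mutex, which, like the semaphore described above, will park the caller and let the scheduler run something else when the lock is contended.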

what is new for Virtual SAN 6.0

Memorize these numbers, so that later I have a rough baseline for judging the performance of other storage systems.

What is new for Virtual SAN 6.0?

  • Support for All-Flash configurations
  • Fault Domains configuration
  • Support for hardware encryption and checksum (See HCL)
  • New on-disk format
    • High performance snapshots / clones
    • 32 snapshots per VM
  • Scale
    • 64 host cluster support
    • 40K IOPS per host for hybrid configurations
    • 90K IOPS per host for all-flash configurations
    • 200 VMs per host
    • 8000 VMs per Cluster
    • up to 62TB VMDKs
  • Default SPBM Policy
  • Disk / Disk Group serviceability
  • Support for direct-attached storage systems for blade servers (See HCL)
  • Virtual SAN Health Service plugin

 

What is new for Virtual SAN 6.1?

Virtual SAN 6.1 will come as part of vSphere 6.0 Update 1. Many new features have been added, but there are a couple that stand out if you ask me. In this post I am going to talk about what are, in my opinion, the key new features. Let’s list them first and then discuss some of them individually.

  • Support for stretched clustering
  • Support for 2 node ROBO configurations
  • Enhanced Replication
  • Support for SMP-FT
  • New hardware options
    • Intel NVMe
    • Diablo Ultra Dimm
  • Usability enhancements
    • Disk Group Bulk Claiming
    • Disk Claiming per Tier
    • On-Disk Format Upgrade from UI
  • Health Check Plug-in shipped with vCenter Server
  • Virtual SAN Management Pack for VR Ops