Famous Quotes

“To establish the heart of Heaven and Earth, to secure life and destiny for the people, to carry on the lost teachings of the past sages, and to open an era of peace for all generations to come.” (Zhang Zai, 《宋元學案·橫渠學案上》)

“Do more of what you can uniquely do, and less of what other people can do.” (Gloria Steinem)

My Wife's EAD Has Been Mailed Out

Dec 23: Mailed out the application.

Dec 26: Received by USCIS.

Around Mar 20: Request for Evidence, asking for a new signature. My previous signatures were either in blue ink or too large, running outside the box.

Apr 1: Mailed the RFE packet back, and also included the renewed H1B/H4 notices from after my job change.

Apr 5: Received by USCIS.

Apr 10 update: Card in production.

Apr 19 update: Card mailed out.

Starting Over

I have been in the US for six and a half years: three years of school and three and a half years of work.

From last year until now, I have lost all of my savings in the stock market. Playing short-term options turned into full-on gambling, and altogether I lost about $120,000. I lost $70,000 last year and wanted to stop; I said I would hand the money over to my wife to manage. I spent a few months grinding interview problems and successfully switched jobs. Early this year my wife was studying for the NY Bar and had no time to manage our family finances, and somehow, as if possessed, I went back in trying to win it back, and lost another $50,000 in one shot. There is now less than $10,000 total left in the bank.

For someone from an ordinary family like me, that is a huge amount of money. I have always lived frugally, and in the end I lost it all to the stock market.

The life of a gambler is too painful. I cried for days, but crying fixes nothing. Life has to go on.

I feel sorry toward my newlywed wife and toward my parents; in several years of working, I have hardly ever sent money home.

I have been in a daze for many days. At 28, I am back to zero, starting from scratch.

San Francisco City Hall Wedding Guide

How to Get Married at San Francisco City Hall

You can get the marriage license either on the same day or beforehand; you should get it at least 30 minutes before the ceremony.

You can also get the marriage license at other city halls within California.

Santa Clara City Hall:  https://www.sccgov.org/sites/rec/Marriage%20Licenses/Pages/Applying-for-a-Marriage-License.aspx

 

Everything about Python Dict

This is a great post I found on Stack Overflow.

http://stackoverflow.com/questions/327311/how-are-pythons-built-in-dictionaries-implemented

 

Here is everything about Python dicts that I was able to put together (probably more than anyone would like to know; but the answer is comprehensive).

  • Python dictionaries are implemented as hash tables.
  • Hash tables must allow for hash collisions i.e. even if two distinct keys have the same hash value, the table’s implementation must have a strategy to insert and retrieve the key and value pairs unambiguously.
  • Python dict uses open addressing to resolve hash collisions (explained below) (see dictobject.c:296-297).
  • Python hash table is just a contiguous block of memory (sort of like an array, so you can do an O(1) lookup by index).
  • Each slot in the table can store one and only one entry. This is important.
  • Each entry in the table is actually a combination of three values: < hash, key, value >. This is implemented as a C struct (see dictobject.h:51-56).
  • The figure below is a logical representation of a Python hash table. In the figure below, 0, 1, ..., i, ... on the left are indices of the slots in the hash table (they are just for illustrative purposes and are not stored along with the table obviously!).
    # Logical model of Python Hash table
    -+-----------------+
    0| <hash|key|value>|
    -+-----------------+
    1|      ...        |
    -+-----------------+
    .|      ...        |
    -+-----------------+
    i|      ...        |
    -+-----------------+
    .|      ...        |
    -+-----------------+
    n|      ...        |
    -+-----------------+
  • When a new dict is initialized it starts with 8 slots. (see dictobject.h:49)
  • When adding entries to the table, we start with some slot, i, that is based on the hash of the key. CPython initially uses i = hash(key) & mask (where mask = PyDictMINSIZE - 1, but that’s not really important). Just note that the initial slot, i, that is checked depends on the hash of the key.
  • If that slot is empty, the entry is added to the slot (by entry, I mean, <hash|key|value>). But what if that slot is occupied!? Most likely because another entry has the same hash (hash collision!)
  • If the slot is occupied, CPython (and even PyPy) compares the hash AND the key (by compare I mean == comparison, not the is comparison) of the entry in the slot against the key of the current entry to be inserted (dictobject.c:337,344-345). If both match, then it thinks the entry already exists, gives up and moves on to the next entry to be inserted. If either the hash or the key doesn't match, it starts probing.
  • Probing just means it searches slot by slot to find an empty slot. Technically we could just go one by one, i+1, i+2, ... and use the first available one (that's linear probing). But for reasons explained beautifully in the comments (see dictobject.c:33-126), CPython uses random probing. In random probing, the next slot is picked in a pseudo-random order. The entry is added to the first empty slot. For this discussion, the actual algorithm used to pick the next slot is not really important (see dictobject.c:33-126 for the algorithm). What is important is that the slots are probed until the first empty slot is found. (A toy sketch of the whole scheme appears at the end of this section.)
  • The same thing happens for lookups: it just starts with the initial slot i (where i depends on the hash of the key). If the hash and the key don't both match the entry in the slot, it starts probing until it finds a slot that matches. If all slots are exhausted, it reports a failure.
  • BTW, the dict will be resized if it is two-thirds full. This avoids slowing down lookups. (see dictobject.h:64-65)

NOTE: I did the research on the Python dict implementation in response to my own question about how multiple entries in a dict can have the same hash value. I posted a slightly edited version of the response here because all the research is very relevant for this question as well.
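To make the mechanics above concrete, here is a toy pure-Python open-addressing table. It is only a sketch of the ideas described (8 initial slots, a probe sequence that mixes in the hash bits, resizing at two-thirds full); ToyDict and its helper names are my own for illustration, and the real dictobject.c does considerably more (deletions, error handling, and so on).

    PERTURB_SHIFT = 5

    class ToyDict:
        """A toy open-addressing hash table in pure Python (illustration only)."""

        def __init__(self):
            self._size = 8                        # new dicts start with 8 slots
            self._slots = [None] * self._size     # each slot is None or (hash, key, value)
            self._used = 0

        def _probe(self, key):
            """Yield slot indices in the order they would be probed."""
            perturb = hash(key) & 0xFFFFFFFFFFFFFFFF   # treat the hash as unsigned
            mask = self._size - 1
            i = perturb & mask                         # the initial slot depends on the hash
            while True:
                yield i
                i = (5 * i + perturb + 1) & mask       # pseudo-random next slot
                perturb >>= PERTURB_SHIFT

        def __setitem__(self, key, value):
            if 3 * self._used >= 2 * self._size:       # resize when two-thirds full
                self._resize()
            h = hash(key)
            for i in self._probe(key):
                slot = self._slots[i]
                if slot is None:                       # empty slot: insert <hash|key|value> here
                    self._slots[i] = (h, key, value)
                    self._used += 1
                    return
                if slot[0] == h and slot[1] == key:    # same hash AND equal key: overwrite
                    self._slots[i] = (h, key, value)
                    return
                # otherwise it is a collision: keep probing

        def __getitem__(self, key):
            h = hash(key)
            for i in self._probe(key):
                slot = self._slots[i]
                if slot is None:                       # reached an empty slot: key is absent
                    raise KeyError(key)
                if slot[0] == h and slot[1] == key:
                    return slot[2]

        def _resize(self):
            entries = [s for s in self._slots if s is not None]
            self._size *= 4
            self._slots = [None] * self._size
            self._used = 0
            for _, key, value in entries:
                self[key] = value

For example, t = ToyDict(); t["spam"] = 1 lands in a slot chosen from hash("spam"), a colliding key keeps probing until it finds an empty slot, and t["spam"] retraces the same probe sequence.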

Everything about Cache

I'll put everything I read about caches in this article.

Write-through, write-around and write-back cache

There are three main caching techniques that can be deployed, each with its own pros and cons (a toy sketch of all three follows the list).

  • Write-through cache directs write I/O onto cache and through to underlying permanent storage before confirming I/O completion to the host. This ensures data updates are safely stored on, for example, a shared storage array, but has the disadvantage that I/O still experiences latency based on writing to that storage. Write-through cache is good for applications that write and then re-read data frequently as data is stored in cache and results in low read latency.
  • Write-around cache is a similar technique to write-through cache, but write I/O is written directly to permanent storage, bypassing the cache. This can reduce the cache being flooded with write I/O that will not subsequently be re-read, but has the disadvantage that a read request for recently written data will create a “cache miss” and have to be read from slower bulk storage, experiencing higher latency.
  • Write-back cache is where write I/O is directed to cache and completion is immediately confirmed to the host. This results in low latency and high throughput for write-intensive applications, but there is data availability exposure risk because the only copy of the written data is in cache. As we will discuss later, suppliers have added resiliency with products that duplicate writes. Users need to consider whether write-back cache solutions offer enough protection as data is exposed until it is staged to external storage. Write-back cache is the best performing solution for mixed workloads as both read and write I/O have similar response time levels.
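To make the trade-offs concrete, here is a minimal Python sketch of the three write policies. It is only an illustration: the cache/storage dictionaries stand in for fast cache memory and slow permanent storage, and WritePolicyDemo and its method names are my own, not from the quoted text.

    class WritePolicyDemo:
        """Toy model: `cache` is fast cache memory, `storage` is slow permanent storage."""

        def __init__(self):
            self.cache = {}
            self.storage = {}
            self.dirty = set()   # keys written to cache but not yet staged to storage

        def write_through(self, key, value):
            # Write to cache AND to storage before confirming completion:
            # safe, but the write still pays the storage latency.
            self.cache[key] = value
            self.storage[key] = value

        def write_around(self, key, value):
            # Write only to storage, bypassing the cache: avoids flooding the cache
            # with data that may never be re-read, but the next read of this key misses.
            self.storage[key] = value
            self.cache.pop(key, None)      # drop any stale cached copy

        def write_back(self, key, value):
            # Confirm after writing to cache only: lowest write latency, but the
            # data is exposed until flush() stages it to permanent storage.
            self.cache[key] = value
            self.dirty.add(key)

        def flush(self):
            for key in self.dirty:
                self.storage[key] = self.cache[key]
            self.dirty.clear()

        def read(self, key):
            if key in self.cache:          # cache hit
                return self.cache[key]
            value = self.storage[key]      # cache miss: read from slow storage
            self.cache[key] = value        # then populate the cache
            return value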

LRU implementation:

http://openmymind.net/High-Concurrency-LRU-Caching/

1. Hashmap:

A read-write mutex for the hashtable is efficient. Assuming that we are GETting more than we are SETting, we'll mostly be acquiring read locks (which multiple threads can hold at once). The only time we need a write lock is when setting an item. If we keep things basic, it ends up looking like the sketch below.
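The linked post shows this in Go, and its snippet is not reproduced here; the following is a rough Python equivalent I put together for illustration. RWLock and Cache are made-up names, and the reader-writer lock is hand-rolled because Python's standard library doesn't ship one.

    import threading

    class RWLock:
        """Minimal reader-writer lock: many concurrent readers, one exclusive writer."""

        def __init__(self):
            self._readers = 0
            self._readers_lock = threading.Lock()   # protects the reader count
            self._write_lock = threading.Lock()     # held while a writer (or any reader) is active

        def acquire_read(self):
            with self._readers_lock:
                self._readers += 1
                if self._readers == 1:              # first reader blocks writers
                    self._write_lock.acquire()

        def release_read(self):
            with self._readers_lock:
                self._readers -= 1
                if self._readers == 0:              # last reader lets writers in
                    self._write_lock.release()

        def acquire_write(self):
            self._write_lock.acquire()

        def release_write(self):
            self._write_lock.release()

    class Cache:
        """Hashmap guarded by the RW lock: GETs share the read lock, SETs take the write lock."""

        def __init__(self):
            self._lock = RWLock()
            self._items = {}

        def get(self, key):
            self._lock.acquire_read()
            try:
                return self._items.get(key)
            finally:
                self._lock.release_read()

        def set(self, key, value):
            self._lock.acquire_write()
            try:
                self._items[key] = value
            finally:
                self._lock.release_write()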

If necessary, we could always shard our hashtable to support more write throughput.

http://openmymind.net/Shard-Your-Hash-table-to-reduce-write-locks/
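The sharding idea itself is simple: split the map into several shards, each guarded by its own lock, so writes against different shards no longer contend. A hedged Python sketch (the shard count and the ShardedCache name are mine, not from the linked post):

    import threading

    class ShardedCache:
        """Toy sharded hashmap: one lock per shard instead of one global lock."""

        def __init__(self, shard_count=16):
            self._shards = [{} for _ in range(shard_count)]
            self._locks = [threading.Lock() for _ in range(shard_count)]

        def _index(self, key):
            return hash(key) % len(self._shards)   # pick a shard from the key's hash

        def get(self, key):
            i = self._index(key)
            with self._locks[i]:
                return self._shards[i].get(key)

        def set(self, key, value):
            i = self._index(key)
            with self._locks[i]:
                self._shards[i][key] = value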

2. List:

Ultimately, our goal was to reduce lock contention against our list. We achieved this in three ways. First, we use a window to limit the frequency of promotion. Second, we use a buffered channel to process promotions in a separate thread. Finally, we can do promotion and GC within the same thread. (A toy sketch of the window approach appears after the links below.)

(See this implementation from the eBay engineering blog, which is similar to the second option above: http://www.ebaytechblog.com/2011/08/30/high-throughput-thread-safe-lru-caching/ )
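As a rough illustration of the first option (limiting how often an item is promoted), here is a single-threaded Python sketch built on collections.OrderedDict. WindowedLRU, max_items and window are my own names, not from the linked posts; the real implementations also deal with concurrency.

    import time
    from collections import OrderedDict

    class WindowedLRU:
        """Toy LRU cache that promotes an item at most once per `window` seconds."""

        def __init__(self, max_items=1024, window=60.0):
            self._items = OrderedDict()   # key -> (value, last_promoted_at), MRU at the end
            self._max_items = max_items
            self._window = window

        def get(self, key):
            entry = self._items.get(key)
            if entry is None:
                return None
            value, promoted_at = entry
            now = time.monotonic()
            if now - promoted_at >= self._window:   # promote only if the window has passed
                self._items[key] = (value, now)
                self._items.move_to_end(key)        # move to the MRU end
            return value

        def set(self, key, value):
            self._items[key] = (value, time.monotonic())
            self._items.move_to_end(key)
            while len(self._items) > self._max_items:
                self._items.popitem(last=False)     # evict from the LRU end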

http://openmymind.net/Back-To-Basics-Hasthables-Part-2/ : Back To Basics: Hashtables Part 2 (And A Peek Inside Redis). This article introduces incremental rehashing in Redis. An interesting idea. Summary: the magic of keeping a hashtable efficient, despite varying size, is not that different from the magic behind dynamic arrays: double in size and copy. Redis' incremental approach ensures that rehashing large hashtables doesn't cause any performance hiccups. The downside is internal complexity (almost every hashtable operation needs to be rehashing-aware) and a longer rehashing process (which takes up extra memory while running). Read about Redis' hashtable implementation: https://github.com/antirez/redis/blob/unstable/src/dict.c
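A minimal sketch of the incremental-rehashing idea, assuming chained buckets and a migration step of one bucket per operation; IncrementalDict and its fields are illustrative names, not the dict.c API. While a rehash is in progress, lookups consult both tables and every operation moves a little more data across.

    class IncrementalDict:
        """Toy Redis-style incremental rehashing with chained buckets."""

        REHASH_STEP = 1    # old buckets migrated per operation

        def __init__(self, size=8):
            self._new = [[] for _ in range(size)]   # the current (possibly growing) table
            self._old = None                        # table being drained during a rehash
            self._rehash_idx = 0
            self._count = 0

        def _bucket(self, table, key):
            return table[hash(key) % len(table)]

        def _step_rehash(self):
            """Move a few buckets from the old table to the new one, then return."""
            if self._old is None:
                return
            moved = 0
            while self._rehash_idx < len(self._old) and moved < self.REHASH_STEP:
                for k, v in self._old[self._rehash_idx]:
                    self._bucket(self._new, k).append((k, v))
                self._old[self._rehash_idx] = []
                self._rehash_idx += 1
                moved += 1
            if self._rehash_idx >= len(self._old):  # migration finished
                self._old = None

        def set(self, key, value):
            self._step_rehash()
            if self._old is None and self._count >= len(self._new):
                # Grow: double the bucket count, but copy lazily over later operations.
                self._old, self._new = self._new, [[] for _ in range(2 * len(self._new))]
                self._rehash_idx = 0
            self._remove(key)                        # drop any existing entry for the key
            self._bucket(self._new, key).append((key, value))
            self._count += 1

        def get(self, key):
            self._step_rehash()
            for table in (self._new, self._old):
                if table is None:
                    continue
                for k, v in self._bucket(table, key):
                    if k == key:
                        return v
            return None

        def _remove(self, key):
            for table in (self._new, self._old):
                if table is None:
                    continue
                bucket = self._bucket(table, key)
                for idx, (k, _) in enumerate(bucket):
                    if k == key:
                        del bucket[idx]
                        self._count -= 1
                        return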

Brexit?

I'm betting Brexit will happen and I have money in SPY puts right now. I wish I hadn't made the bet, as I realize it is not a wise thing to bet on such an event. My original plan was to enter the position last Friday and then retreat with profits before the Brexit result was announced, but I have been trapped since last Friday. I already sold some of my position at a loss yesterday but still keep a large portion. The market maker definitely knows what I'm thinking. The market has been rallying since last Friday and seems so sure Brexit will fail.

Let's see. Right now the results show that leaving is not that unlikely.

Beyond my investment, I personally believe Britain should exit the EU for its own benefit, not for the grand aspirations of other countries. I can't see any benefit for the country and its people in staying in the EU, except for big international companies and their allies who need a voice in the EU. I don't believe Britain will become less important in world politics if it leaves the EU. Quite the contrary, I believe Britain will become much more important, since it would stand as an independent country and get to sign trade deals with other countries on its own: with the other Commonwealth countries, the United States, China, Japan, etc.

I also don't believe the EU will punish Britain by shutting down its markets. Some would say Britain will lose the whole market while the EU gets to distribute the loss among its members, but I think Germany will be most heavily impacted, and it will do something for its own benefit.

Update: Brexit Won! Congrats!
