《简单》 (“Simple”), by Sanmao (三毛)

Text/Sanmao (三毛)

Much of the time we no longer recall that when each of us arrived on this earth, we were only a naked infant; apart from a body and a soul, heaven let us bring no worldly possessions at all.

Then one day a person departs, leaving just as they came, empty-handed. But that is only how it appears. In fact the dead always leave something behind in this world, tangible or intangible, crowding a space that was already crowded enough.

At some point we stopped being infants, and that memory grew as distant as a former life. Looking back, we have lived half a lifetime quite ordinarily, yet how many entanglements have gathered around us; within arm’s reach, how many things we cannot take with us have become part of life, and without them the days would feel incomplete.

Many people say that the body and outward forms do not matter, that circumstance is created by the mind, that in a single thought one can see a world in a flower and a heaven in a grain of sand.

That is true enough. But in our complicated, crowded surroundings, has your heart ever seen a flower? Just one; have you seen it? I am asking only about a simple African daisy. Have you seen one? I am not even asking about a rose.

Never mind. Let us talk no more of sand and flowers; simple things are the hardest to see. So let us look at the complicated instead.

Ah, but even that I no longer feel like writing about.

In an age like this, people worship child prodigies; only children without a childhood squeeze through that narrow gate.

People tend to be old before their time in youth, lost as young adults, and in middle age fond of measuring other people’s achievements against their own and feeling defeated; by the time they finally reach old age they are still foolish children who never grew up. We live coarsely all along, and so a whole life passes.

We spend a lifetime being complicated, a lifetime pursuing, always feeling that happiness lies out of reach, not knowing that the flower, that tiny grain of sand, sits right on your windowsill. You are so busy over nothing; of course you cannot see it.

Faced with a complicated life, people blame heaven and earth, yet refuse to simplify. It is natural enough that the heart is enslaved by outward form; but which form has ever enslaved the heart into greater freedom?

We will not let go. We keep ourselves busy, and then busy ourselves over others as well. Excessive concern is merely meddling, and when others refuse us we feel hurt, not realizing that the rebuff was something we brought upon ourselves.

For a life like this we often find a beautiful label and call it “depth.”

For simple people, society also has an adjective: it calls them foolish. Everything plain and simple has come to be seen as no good.

As it happens, I have once again left home and country, coming to an island in the Atlantic to live a fool’s days, just as I did for many years past.

Here there are no lavish meals, no scramble for fame and profit, no excess of feeling, no sorrow too heavy to carry, no gossip and quarrels, and no knots that cannot be untied.

Perhaps some other fool, one more complicatedly foolish than I, will say: you are lucky; not everyone gets an island in the Atlantic. Ah, do you want to come? You have forgotten the flower on your own windowsill. Why can you never see it?

If you do not bring a flower with you, there is still nothing here. Why come at all? Your flower is not here; your window is in your heart, not in the Atlantic. A life cannot rest content with only sunlight, air, and water; those are merely the bare essentials. The will to survive is in fact simple, but we are human beings, an insatiable species: once hunger is solved we demand progress, and once there is progress we demand still more; after material comfort we demand spiritual elevation; we pursue happiness, joy, harmony, wealth, health, even eternal life.

The earliest humans, like the other animals roaming the wild, struggled bitterly in nature simply to stay alive. Later, the course of natural circumstance led them to form tribes and establish families. Tens of thousands of years on, nations drew sharp borders between themselves, and peoples forgot that they are all, each of them, simply human.

Between our neighbors and ourselves we built high walls, and only inside roofs and walls that others cannot see through do we feel safe and at ease.

Yet people cannot bear loneliness and cannot live apart from the crowd, so we need society, we need other people and other things to build up our own lives. We will not practice restraint, we do not know how to hold back; we let our feelings run over and make our daily lives complicated. In the end, “success” is merely another word for “possession.” We grow heavy, because we carry too much and dare not set it down.

When an infant leaves the mother’s body, it marks the ripening of a physical form. But the infant does not know this; frightened at being torn from the warm, moist womb, it cries. Separation between people is a natural thing, yet we are unwilling to accept it.

We come from people, and so we long to return to the crowd. We know full well that we are born alone and die alone, yet we refuse to explore our own worth and place far too much weight on other people’s part in our lives. And so solitude is no longer beautiful, and without others we grow anxious and unsettled.

And this, too, is natural.

So humanity lets itself be bound as a matter of course: food, clothing, shelter, and travel grow endlessly complicated, human relationships tangle day after day, heads grow larger while limbs waste away, health is lost, and the spirit gathers dust. Happiness is only the emperor’s new clothes, visible to the clever alone.

In the fairy tale, not everyone saw the new clothes for what they were; only one child who told the truth.

We no longer cherish the simple richness of rice, nor do we recognize the clean fragrance of vegetables. We forget that limbs are meant for moving, and we no longer understand that clothes exist only to keep us from the cold.

Under all these constraints the soul is no longer clear. The senses have dwindled to a mere five. If someone can still feel the natural world to which everyone else has gone numb, the others not only disbelieve but find it laughable.

Everyone says that in this age we are no longer natural. And everyone also says that all we ask is a little ease of heart, that we do not ask much of life.

And yet at the same time we want to pluck the stars. We refuse to put down such heavy burdens, so many soft yet tenacious nets, and still we complain that life is toil and sorrow. We do not realize that we ourselves live upon a star; why can we not see its light?

This place suits a simple fool. For a fool who is not simple, it would not do.

I have merely returned to plainness and truth; all I feel is that when I wake in the morning there is less scheming and less bewilderment.

I do not eat rich, greasy food and I do not overeat, and this keeps my body clean. I do not dream dreams beyond my reach, and this keeps my sleep peaceful. I do not torture my feet with high heels, and this keeps my steps leisurely and steady. I do not follow fashion, and this keeps my clothes forever new. I am not ashamed to move my limbs, and this keeps me healthy and nimble. I avoid friendships that turn overly warm when there is nothing to do, and this leaves me with fewer burdens and promises. I do not say needless idle words, and this leaves me feeling clear and unclogged. As far as possible I do not dwell on the past, for the road by which we came cannot be walked back. I am careful in loving others, for then love is less likely to overflow. I cry when I want to cry and laugh when I want to laugh, so long as it all comes naturally.

I do not seek depth; I seek only to be simple.

What will happen if an exception is not caught

http://docs.oracle.com/javase/tutorial/essential/exceptions/definition.html

The point from this article I  am interested in is:

The exception handler chosen is said to catch the exception. If the runtime system exhaustively searches all the methods on the call stack without finding an appropriate exception handler, as shown in the next figure, the runtime system (and, consequently, the program) terminates.

That is, what will happen if an exception is not caught? It says that the runtime system, and consequently the program, will terminate.
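
The quoted paragraph is about Java, and the Stack Overflow answer below is about C#, but C++ (which the rest of these notes turn to) behaves analogously. As a minimal sketch, assuming a plain standalone program: if no handler anywhere on the call stack matches, std::terminate() is called and the process aborts.

 #include <cstdio>
 #include <stdexcept>

 // If no handler on the call stack matches, std::terminate() is called and
 // the program aborts (typically after printing the exception's message).
 void lowLevel() { throw std::runtime_error("nobody catches this"); }
 void midLevel() { lowLevel(); }    // no handler here either

 int main()
 {
   midLevel();                      // the exception escapes main -> terminate/abort
   std::puts("never reached");
 }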

I then searched and found a post on Stack Overflow:

C#: what happens if an exception is not caught

One answer is:

Yes. Something “exceptional” has happened, and your program does not know how to handle it, so it must stop execution at that point and “crash”. There will be code that is executed after the crash, such as finally blocks, but basically the party is over for your code.

The best thing to do is to log these events, giving as much information as possible about the state of the system/program at the time of the crash. The Logging Application Block is one of the more robust automatic ways to log errors.

Here is another article I read about this: http://www.akadia.com/services/java_exceptions.html

“When an (either checked or unchecked) exception is thrown, execution will attempt to immediately branch to the first catch block whose associated exception class matches the class or a superclass of the thrown exception. If the exception does not occur within a try block or the thrown exception is not caught in a matching catch block, execution of the method immediately terminates and control returns to the invoker of the method, where this process is repeated. The result is that the exception chain is escalated until a matching catch block is found. If not, the thread containing the thrown exception is terminated.”

In this article, the concepts of checked and unchecked exceptions are introduced:

“Exceptions generated at runtime are called unchecked exceptions, since it is not possible for the compiler to determine that your code will handle the exception. Exception classes that descend from the RuntimeException and Error classes are unchecked exceptions. Examples of RuntimeException are an illegal cast operation, inappropriate use of a null pointer, or referencing an out-of-bounds array element. Error exception classes signal critical problems that typically cannot be handled by your application. Examples are an out-of-memory error, a stack overflow, or failure of the Java VM.

Thrown exceptions are referred to as checked exceptions. The compiler will confirm at compile time that the method includes code that might throw such an exception. Moreover, the compiler requires the code that calls such a method to include the call within a try block and to provide an appropriate catch block to catch the exception.”


Last, I also read this:

Q: How does an exception permeate through the code?
A: An unhandled exception moves up the method stack in search of a matching catch block. When an exception is thrown from code that is wrapped in a try block followed by one or more catch blocks, a search is made for a matching catch block. If a matching type is found, that block is invoked. If a matching type is not found, the exception moves up the method stack and reaches the caller method. The same procedure is repeated if the caller method is itself wrapped in a try-catch block. This process continues until a catch block handling the appropriate type of exception is found; if none is found, the program finally terminates.
Q: What are checked exceptions?
A: Checked exceptions are those which the Java compiler forces you to catch or declare, e.g. IOException is a checked exception.
[Received from Sandesh Sadhale]
Q: What are runtime exceptions?
A: Runtime exceptions are those exceptions that are thrown at runtime because of wrong input data, faulty business logic, and so on. They are not checked by the compiler at compile time.
[Received from Sandesh Sadhale]
Q: What is the difference between an error and an exception?
A: An error is an irrecoverable condition occurring at runtime, such as an OutOfMemoryError. These are JVM errors and you cannot repair them at runtime. Exceptions, on the other hand, are conditions that occur because of bad input and the like; e.g. a FileNotFoundException will be thrown if the specified file does not exist, or a NullPointerException will occur if you try to use a null reference. In most cases it is possible to recover from an exception (for example by giving the user feedback so they can enter proper values).
Q: What is the basic difference between the two approaches to exception handling:
1> a try-catch block, and
2> specifying the candidate exceptions in the throws clause?
When should you use which approach?
A: In the first approach, as the programmer of the method, you yourself are dealing with the exception. This is fine if you are in the best position to decide what should be done in case of an exception. If it is not the responsibility of the method to deal with its own exceptions, do not use this approach; in that case, use the second one. In the second approach we force the caller of the method to catch the exceptions that the method is likely to throw. This is often the approach library creators use: they list the exceptions in the throws clause, and we must catch them. You will find the same approach throughout the Java libraries we use.

C++ FAQ on destructors

http://www.parashift.com/c++-faq-lite/dtors.html#faq-11.14

[11.10] What is “placement new” and why would I use it?

There are many uses of placement new. The simplest use is to place an object at a particular location in memory. This is done by supplying the place as a pointer parameter to the new part of a new expression:

 #include <new>        // Must #include this to use “placement new”
 #include "Fred.h"     // Declaration of class Fred

 void someCode()
 {
   char memory[sizeof(Fred)];     // Line #1
   void* place = memory;          // Line #2
   Fred* f = new(place) Fred();   // Line #3 (see “DANGER” below)
   // The pointers f and place will be equal
 }

Line #1 creates an array of sizeof(Fred) bytes of memory, which is big enough to hold a Fred object. Line #2 creates a pointer place that points to the first byte of this memory (experienced C programmers will note that this step was unnecessary; it’s there only to make the code more obvious). Line #3 essentially just calls the constructor Fred::Fred(). The this pointer in the Fred constructor will be equal to place. The returned pointer f will therefore be equal to place.

ADVICE: Don’t use this “placement new” syntax unless you have to. Use it only when you really care that an object is placed at a particular location in memory. For example, when your hardware has a memory-mapped I/O timer device, and you want to place a Clock object at that memory location.

DANGER: You are taking sole responsibility that the pointer you pass to the “placement new” operator points to a region of memory that is big enough and is properly aligned for the object type that you’re creating. Neither the compiler nor the run-time system make any attempt to check whether you did this right. If your Fred class needs to be aligned on a 4 byte boundary but you supplied a location that isn’t properly aligned, you can have a serious disaster on your hands (if you don’t know what “alignment” means, please don’t use the placement new syntax). You have been warned.
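
As a side note on the alignment warning: since C++11 you can ask the compiler for correctly aligned storage instead of relying on a bare char array happening to be aligned. A minimal sketch, using a local stand-in struct rather than the FAQ's Fred.h:

 #include <new>

 struct Fred { double d; int i; };              // stand-in for the FAQ's Fred

 void someCode()
 {
   alignas(Fred) char memory[sizeof(Fred)];     // big enough AND properly aligned
   Fred* f = new(memory) Fred();                // construct the object in place
   // ... use *f ...
   f->~Fred();                                  // explicit destructor call, as shown below
 }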

You are also solely responsible for destructing the placed object. This is done by explicitly calling the destructor:

 void someCode()
 {
   char memory[sizeof(Fred)];
   void* p = memory;
   Fred* f = new(p) Fred();

   f->~Fred();   // Explicitly call the destructor for the placed object
 }

This is about the only time you ever explicitly call a destructor.

Note: there is a much cleaner but more sophisticated way of handling the destruction / deletion situation.

[11.9] But can I explicitly call a destructor if I’ve allocated my object with new?

Probably not.

Unless you used placement new, you should simply delete the object rather than explicitly calling the destructor. For example, suppose you allocated the object via a typical new expression:

 Fred* p = new Fred();

Then the destructor Fred::~Fred() will automagically get called when you delete it via:

 delete p;  // Automagically calls p->~Fred()

You should not explicitly call the destructor, since doing so won’t release the memory that was allocated for the Fred object itself. Remember: delete p does two things: it calls the destructor and it deallocates the memory.

[11.14] Is there a way to force new to allocate memory from a specific memory area?

Starting with a simple memory allocator function, allocate(), you would use placement new to construct an object in that memory. In other words, the following is morally equivalent to new Foo():

 void* raw = allocate(sizeof(Foo));  // line 1
 Foo* p = new(raw) Foo();            // line 2

The next step is to turn your memory allocator into an object.

This kind of object is called a “memory pool” or a “memory arena.” This lets your users have more than one “pool” or “arena” from which memory will be allocated. Each of these memory pool objects will allocate a big chunk of memory using some specific system call (e.g., shared memory, persistent memory, stack memory, etc.; see below), and will dole it out in little chunks as needed. Your memory-pool class might look something like this:

 class Pool {
 public:
   void* alloc(size_t nbytes);
   void dealloc(void* p);
 private:
   …data members used in your pool object…
 };

 void* Pool::alloc(size_t nbytes)
 {
   …your algorithm goes here…
 }

 void Pool::dealloc(void* p)
 {
   …your algorithm goes here…
 }

Now one of your users might have a Pool called pool, from which they could allocate objects like this:

 Pool pool;

void* raw = pool.alloc(sizeof(Foo));
Foo* p = new(raw) Foo();

Or simply:

 Foo* p = new(pool.alloc(sizeof(Foo))) Foo();

The reason it’s good to turn Pool into a class is because it lets users create N different pools of memory rather than having one massive pool shared by all users. That allows users to do lots of funky things. For example, if they have a chunk of the system that allocates memory like crazy then goes away, they could allocate all their memory from a Pool, then not even bother doing any deletes on the little pieces: just deallocate the entire pool at once. Or they could set up a “shared memory” area (where the operating system specifically provides memory that is shared between multiple processes) and have the pool dole out chunks of shared memory rather than process-local memory. Another angle: many systems support a non-standard function often called alloca() which allocates a block of memory from the stack rather than the heap. Naturally this block of memory automatically goes away when the function returns, eliminating the need for explicit deletes. Someone could use alloca() to give the Pool its big chunk of memory, then all the little pieces allocated from that Pool act like they’re local: they automatically vanish when the function returns. Of course the destructors don’t get called in some of these cases, and if the destructors do something nontrivial you won’t be able to use these techniques, but in cases where the destructor merely deallocates memory, these sorts of techniques can be useful.

Okay, assuming you survived the 6 or 8 lines of code needed to wrap your allocate function as a method of a Pool class, the next step is to change the syntax for allocating objects. The goal is to change from the rather clunky syntax new(pool.alloc(sizeof(Foo))) Foo() to the simpler syntax new(pool) Foo(). To make this happen, you need to add the following two lines of code just below the definition of your Pool class:

 inline void* operator new(size_t nbytes, Pool& pool)
{
return pool.alloc(nbytes);
}

Now when the compiler sees new(pool) Foo(), it calls the above operator new and passes sizeof(Foo) and pool as parameters, and the only function that ends up using the funky pool.alloc(nbytes) method is your own operator new.

Now to the issue of how to destruct/deallocate the Foo objects. Recall that the brute force approach sometimes used with placement new is to explicitly call the destructor then explicitly deallocate the memory:

 void sample(Pool& pool)
 {
   Foo* p = new(pool) Foo();

   p->~Foo();        // explicitly call dtor
   pool.dealloc(p);  // explicitly release the memory
 }

This has several problems, all of which are fixable:

  1. The memory will leak if Foo::Foo() throws an exception.
  2. The destruction/deallocation syntax is different from what most programmers are used to, so they’ll probably screw it up.
  3. Users must somehow remember which pool goes with which object. Since the code that allocates is often in a different function from the code that deallocates, programmers will have to pass around two pointers (a Foo* and a Pool*), which gets ugly fast (example, what if they had an array of Foos each of which potentially came from a different Pool; ugh).

We will fix them in the above order.

Problem #1:

When you use the “normal” new operator, e.g., Foo* p = new Foo(), the compiler generates some special code to handle the case when the constructor throws an exception. The actual code generated by the compiler is functionally similar to this:

 // This is functionally what happens with Foo* p = new Foo():
 Foo* p;

 // don't catch exceptions thrown by the allocator itself
 void* raw = operator new(sizeof(Foo));

 // catch any exceptions thrown by the ctor
 try {
   p = new(raw) Foo();  // call the ctor with raw as this
 }
 catch (...) {
   // oops, the ctor threw an exception
   operator delete(raw);
   throw;  // rethrow the ctor's exception
 }

The point is that the compiler deallocates the memory if the ctor throws an exception. But in the case of the “new with parameter” syntax (commonly called “placement new”), the compiler won’t know what to do if the exception occurs, so by default it does nothing.

 // This is functionally what happens with Foo* p = new(pool) Foo():

void* raw = operator new(sizeof(Foo), pool);
// the above function simply returns “pool.alloc(sizeof(Foo))”

Foo* p = new(raw) Foo();
// if the above line “throws”, pool.dealloc(raw) is NOT called

So the goal is to force the compiler to do something similar to what it does with the global new operator. Fortunately it’s simple: when the compiler sees new(pool) Foo(), it looks for a corresponding operator delete. If it finds one, it does the equivalent of wrapping the ctor call in a try block as shown above. So we would simply provide an operator delete with the following signature (be careful to get this right; if the second parameter has a different type from the second parameter of the operator new(size_t, Pool&), the compiler doesn’t complain; it simply bypasses the try block when your users say new(pool) Foo()):

 void operator delete(void* p, Pool& pool)
{
pool.dealloc(p);
}

After this, the compiler will automatically wrap the ctor calls of your new expressions in a try block.

Problems #2 (“ugly therefore error prone”) and #3 (“users must manually associate pool-pointers with the object that allocated them, which is error prone”) are solved simultaneously with an additional 10-20 lines of code in one place. In other words, we add 10-20 lines of code in one place (your Pool header file) and simplify an arbitrarily large number of other places (every piece of code that uses your Pool class).

The idea is to implicitly associate a Pool* with every allocation.

Two methods are discussed in this FAQ. One is to use a std::map<void*, Pool*>; in other words, build a look-up table whose keys are the allocation pointers and whose values are the associated Pool*. It is essential that you insert a key/value pair into the map only in operator new(size_t, Pool&). In particular, you must not insert a key/value pair from the global operator new (e.g., you must not say poolMap[p] = NULL in the global operator new). Reason: doing that would create a nasty chicken-and-egg problem: since std::map probably uses the global operator new, it would end up inserting a new entry every time it inserts a new entry, leading to infinite recursion. Bang, you’re dead.
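
Here is a rough sketch of this look-up-table idea. It is not the FAQ's exact mechanism: the Pool methods are placeholder malloc/free stand-ins, and deallocation goes through a hypothetical helper template (destroyFromPool) rather than the global operator delete, which sidesteps the chicken-and-egg problem described above.

 #include <cstddef>
 #include <cstdlib>
 #include <map>

 class Pool {
 public:
   void* alloc(std::size_t nbytes) { return std::malloc(nbytes); }  // placeholder algorithm
   void dealloc(void* p)           { std::free(p); }                // placeholder algorithm
 };

 // Keys are pointers handed out by pool allocations, values are the owning Pool.
 // Entries are inserted ONLY in operator new(size_t, Pool&), never in the global
 // operator new, to avoid the recursion described above.
 static std::map<void*, Pool*> poolMap;

 void* operator new(std::size_t nbytes, Pool& pool)
 {
   void* p = pool.alloc(nbytes);
   poolMap[p] = &pool;                        // remember which pool owns p
   return p;
 }

 void operator delete(void* p, Pool& pool)    // matching placement delete (used if a ctor throws)
 {
   poolMap.erase(p);
   pool.dealloc(p);
 }

 template <typename T>
 void destroyFromPool(T* p)                   // hypothetical helper, not part of the FAQ
 {
   if (!p) return;
   p->~T();                                   // run the destructor
   auto it = poolMap.find(p);
   if (it != poolMap.end()) {
     it->second->dealloc(p);                  // give the memory back to its pool
     poolMap.erase(it);
   }
 }

Usage would then look like Foo* p = new(pool) Foo(); followed later by destroyFromPool(p); instead of a plain delete p.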

Another approach that is faster, but might use more memory and is a little trickier, is to prepend a Pool* just before each allocation. For example, if nbytes was 24, meaning the caller was asking to allocate 24 bytes, we would allocate 28 (or 32 if you think the machine requires 8-byte alignment for things like doubles and/or long longs), stuff the Pool* into the first 4 bytes, and return the pointer 4 (or 8) bytes past the beginning of what you allocated. Then your global operator delete backs off the 4 (or 8) bytes, finds the Pool*, and if it is NULL uses free(), otherwise calls pool->dealloc(). The parameter passed to free() and pool->dealloc() would be the pointer 4 (or 8) bytes to the left of the original parameter, p. If(!) you decide on 4-byte alignment, your code would look something like this (although as before, the following operator new code elides the usual out-of-memory handlers):

 void* operator new(size_t nbytes)
 {
   if (nbytes == 0)
     nbytes = 1;                        // so all alloc's get a distinct address
   void* ans = malloc(nbytes + 4);      // overallocate by 4 bytes
   *(Pool**)ans = NULL;                 // use NULL in the global new
   return (char*)ans + 4;               // don't let users see the Pool*
 }

 void* operator new(size_t nbytes, Pool& pool)
 {
   if (nbytes == 0)
     nbytes = 1;                        // so all alloc's get a distinct address
   void* ans = pool.alloc(nbytes + 4);  // overallocate by 4 bytes
   *(Pool**)ans = &pool;                // put the Pool* here
   return (char*)ans + 4;               // don't let users see the Pool*
 }

 void operator delete(void* p)
 {
   if (p != NULL) {
     p = (char*)p - 4;                  // back off to the Pool*
     Pool* pool = *(Pool**)p;
     if (pool == NULL)
       free(p);                         // note: 4 bytes left of the original p
     else
       pool->dealloc(p);                // note: 4 bytes left of the original p
   }
 }

See more at http://www.parashift.com/c++-faq-lite/dtors.html#faq-11.14

What happens after an exception leaves catch (a question on Stack Overflow)

http://stackoverflow.com/questions/5340858/exception-handling-what-happens-after-it-leaves-catch

So imagine you’ve got an exception you’re catching, and in the catch you write to a log file that some exception occurred. Then you want your program to continue, so you have to make sure that certain invariants are still in a good state. But what actually occurs in the system after the exception has been “handled” by a catch?

The stack has been unwound at that point, so how does it get to restore its state?


Answer 1:

“Stack unwinding” means that all scopes between throw and the matching catch clause are left, calling destructors for all automatic objects in those scopes, pretty much in the same way function scopes are left when you return from a function.

Nothing else “special” is done, the scope of a catch clause is a normal scope, and leaving it is no different from leaving the scope of an else clause.

If you need to make sure certain invariants still hold, you need to write the code that changes them in an exception-safe manner. Dave Abrahams wrote a classic piece on the different levels of exception safety; you might want to read it. Basically, you will have to consistently employ RAII in order to be on the safe side when exceptions are thrown.
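
A minimal sketch of what that unwinding looks like in practice (Guard is a made-up type for illustration): the destructors of the automatic objects between the throw and the catch run before the handler body does, and after the catch block control simply continues as in any other scope.

 #include <cstdio>

 struct Guard {
   const char* name;
   explicit Guard(const char* n) : name(n) { std::printf("acquire %s\n", name); }
   ~Guard() { std::printf("release %s\n", name); }  // runs during stack unwinding
 };

 void inner()
 {
   Guard g("inner resource");
   throw 42;                       // unwinding destroys g before inner() is left
 }

 int main()
 {
   try {
     Guard g("outer resource");
     inner();
   } catch (int e) {
     std::printf("caught %d; both guards were already destroyed\n", e);
   }
   // leaving the catch scope is like leaving any other scope;
   // control flow just continues here
 }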


Answer 2:

Only objects created inside the try will have been destroyed during unwinding. It’s up to you to write the program in such a way that, if an exception occurs, the program state stays consistent; that is called exception safety.

C++ doesn’t care: it unwinds the stack, then passes control to an appropriate catch, and then control flow continues normally.


Answer 3:

It is up to you to ensure that the application is recovered into a stable state after catching the exception. Usually it is achieved by “forgetting” whatever operation or change(s) produced the exception, and starting afresh on a higher level.

This includes ensuring that any resources allocated during the chain of events leading to the exception gets properly released. In C++, the standard idiom to ensure this is RAII.

Update

For example, if an error occurs while processing a request in a web server, it generates an exception in some lower-level function, which gets caught in a higher-level class (possibly right in the top-level request handler). Usually the best thing to do is to roll back any changes made and free any resources allocated so far for that request, and return an appropriate error message to the client. Changes may include DB transactions, file writes, etc.; one must implement all of these in an exception-safe manner. Databases typically have built-in transactions to deal with this; with other resources it may be trickier.
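
A small sketch of that “roll back unless we got to the end” idea using RAII; Transaction and its methods here are hypothetical stand-ins, not a real database API:

 #include <cstdio>

 class Transaction {
 public:
   void commit() { committed_ = true; }
   ~Transaction() { if (!committed_) rollback(); }  // runs even if an exception unwinds the stack
 private:
   void rollback() { std::puts("rolling back"); }   // undo whatever was changed
   bool committed_ = false;
 };

 void handleRequest()
 {
   Transaction tx;          // begin the (hypothetical) transaction
   // ... work that may throw ...
   tx.commit();             // reached only if nothing above threw
 }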

RAII: Resource Acquisition Is Initialization

http://en.wikipedia.org/wiki/Resource_Acquisition_Is_Initialization

http://www.hackcraft.net/raii/

http://stackoverflow.com/questions/417481/pointers-smart-pointers-or-shared-pointers


The technique was invented by Bjarne Stroustrup[1] to deal with resource deallocation in C++. In this language, the only code that can be guaranteed to be executed after an exception is thrown are the destructors of objects residing on the stack.

RAII is vital in writing exception-safe C++ code: to release resources before permitting exceptions to propagate (in order to avoid resource leaks) one can write appropriate destructors once rather than dispersing and duplicating cleanup logic between exception handling blocks that may or may not be executed.
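
For instance, a minimal RAII wrapper around a C FILE* (my own illustrative sketch, not taken from the article) puts the cleanup in a single destructor, so every exit path, whether a normal return or an exception, releases the handle:

 #include <cstdio>
 #include <stdexcept>

 // Minimal RAII wrapper for a C FILE*: the destructor centralizes cleanup.
 class File {
 public:
   explicit File(const char* path) : f_(std::fopen(path, "r"))
   {
     if (!f_) throw std::runtime_error("cannot open file");
   }
   ~File() { if (f_) std::fclose(f_); }

   File(const File&) = delete;             // non-copyable: one owner per handle
   File& operator=(const File&) = delete;

   std::FILE* get() const { return f_; }
 private:
   std::FILE* f_;
 };

 void parse(const char* path)
 {
   File f(path);            // resource acquired
   // ... code that may throw ...
 }                          // resource released here on every path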

RAII can be used with:

  • Languages which can have user-defined types allocated on the stack (“automatic” objects in C/C++ terminology) and cleaned up during normal stack cleanup (whether because of a function returning, or an exception being thrown). E.g. C++.
  • Languages which have reference-counted garbage collection and hence predictable cleanup of an object for which there is only one reference. E.g. VB6

RAII cannot generally be used with languages that clean up objects using an unpredictable garbage collection, such as Java (however see the Postscript on .NET). If a language guarantees cleanup of all objects before an application shuts down then it may be applicable to some problems.

C++ and D allow objects to be allocated on the stack and their scoping rules ensure that destructors are called when a local object’s scope ends. By putting the resource release logic in the destructor, C++’s and D’s scoping provide direct support for RAII.

The C language does not directly support RAII, though there are some ad-hoc mechanisms available to emulate it. However, some compilers provide non-standard extensions that implement RAII. For example, the “cleanup” variable attribute extension of GCC is one of them.

It’s worth noting that if you are using C++ with garbage collection (which isn’t defined by the language, but which may be provided by a run time) then that garbage collection will generally not apply to objects allocated on the stack and hence RAII can still be used.

RAII only ensures that the resource in question is released appropriately; care must still be taken to maintain exception safety. If the code modifying the data structure or file is not exception-safe, the mutex could be unlocked or the file closed with the data structure or file corrupted.

Local variables easily manage multiple resources within a single function: They are destroyed in the reverse order of their construction, and an object is only destroyed if fully constructed. That is, if no exception propagates from its constructor.

Resource management without RAII

Finalizers

In Java, objects are not allocated on the stack and must be accessed through references; hence, one cannot have automatic variables of objects that “go out of scope”. Instead, all objects are dynamically allocated. In principle, dynamic allocation does not make RAII unfeasible per se; it could still be feasible if there were a guarantee that a “destructor” (“finalize”) method would be called as soon as an object were pointed to by no references (i.e., if the object lifetime management were performed according to reference counting).

However, Java objects have indefinite lifetimes which cannot be controlled by the programmer, because, according to the Java Virtual Machine specification, it is unpredictable when the garbage collector will act. Indeed, the garbage collector may never act at all to collect objects pointed to by no references. Hence the “finalize” method of an unreferenced object might never be called, or be called long after the object became unreferenced. Resources must thus be closed manually by the programmer, using something like the dispose pattern.

Disadvantages of scope bound resource management alternatives

While both finalizers and closure blocks work as a good alternative to RAII for “shallow” resources, it is important to note that the compositional properties of RAII differ greatly from these scope-bound forms of resource management. Where RAII allows for full encapsulation of resources behind an abstraction, with scope-bound resource management this is not the case. In an environment that depends purely on scope-bound resource management, “being a resource” becomes a property that is transitive with respect to composition. That is, using only scope-bound resource management, any object that is composed using a resource requiring resource management effectively becomes such a resource itself. RAII breaks the transitivity of this property, allowing the existence of “deep” resources to be effectively abstracted away.

Let’s say, for example, that we have an object of type A that by composition holds an object of type B, which by composition holds an object of type C. Now let’s see what happens when we create a new implementation of C that by composition holds a resource R. R has some close or release method that must be invoked before C goes out of scope. We could make C into an RAII object for R that invokes release on destruction.

Basically we now have a situation where, from the point of view of R and C (shallow resources), scope-bound resource management alternatives are functionally equivalent to RAII. From the point of view of A and B (deep resources), however, a difference in the transitivity of “being a resource” emerges. With C as an RAII object, the interface and implementation of A, B, and any code using a scoped A or B remain unchanged and unaware of the newly introduced “deep” resource. Without RAII, however, the fact that C holds R means that C needs its own release method for proxying the release of R; B then needs its own release method for proxying the release of C; A needs its own release method for proxying the release of B; and every scope in which an A or B is used also has to apply one of the alternative resource management techniques. With RAII, C provides the abstraction barrier that hides the implementation detail of “implemented using a resource” from A, B, and any users of an A or B. This difference shows that RAII effectively breaks the compositional transitivity of the “being a resource” property.
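
A compact sketch of that A, B, C, R composition (the names are the placeholders used above): because C wraps R with RAII, neither A nor B needs a release method, and code using them never learns that a “deep” resource exists.

 // The names A, B, C, and R are the placeholders used in the text above.
 struct R {                         // a raw resource with an explicit release step
   void release() { /* give the resource back */ }
 };

 class C {
 public:
   ~C() { r_.release(); }           // RAII: C hides the fact that it owns a resource
 private:
   R r_;
 };

 class B { C c_; };                 // unchanged, unaware of the “deep” resource
 class A { B b_; };                 // unchanged as well

 void use()
 {
   A a;                             // destroying a destroys b_, then c_, releasing R
 }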

It’s all about priority

I realized that I have been troubled by my attitude. My attitude is the true reason I feel I have been failing to achieve things.

I mean, I have been trying to avoid hard work, to blame something or someone else for my own faults, and to regard a free ride as a win.

I feel horrible now.

When I read this in the C++ FAQ (http://www.parashift.com/c++-faq-lite/exceptions.html#faq-17.6):

Exception handling is a convenient whipping boy. If you work with people who blame their tools, beware of suggesting exceptions (or anything else that is new, for that matter). People whose ego is so fragile that they need to blame someone or something else for their screw-ups will invariably blame whatever “new” technology was used. Of course, ideally you will work with people who are emotionally capable of learning and growing: with them, you can make all sorts of suggestions, because those sorts of people will find a way to make it work, and you’ll have fun in the process.

I was touched.

It may seem odd that I read this in a technical article, but it actually feels quite natural. Having a responsible attitude is important to both work and life. We have to understand what is most important in our lives and then strive for it. “No pain, no gain” is not just an old saying; it is a lasting truism.

Thoughts after watching Titanic 3D

Today I went to the premiere of Titanic 3D. The 3D effect was so-so, but I was still moved by the story.

I don’t know why, but I felt a little melancholy after watching it. I have been in the US for two years now, which also means it has been two years since I graduated from college. The word “love” feels like something I may simply not be fated for. Watching other people come in pairs while I could only go with a good buddy and an ordinary female friend, I thought to myself: why, at so many beautiful moments in my life, has there been no one to share them with me? As the days slip by, my heart becomes harder and harder to open. Even when I occasionally feel my heart stir, the feeling is always mixed with other considerations, and I hesitate and hold back. Some people say that at least once in a lifetime a man should pursue the woman he loves with complete abandon. I have always hoped I could lose myself like that just once, but I find it very hard. I always hope the other person will show some interest first before I show mine, always worrying that if I make the first move I will be looked down on. For a man, this mindset is obviously a very bad one.

Although a person’s loneliness lasts all the way to death, it is precisely because of that loneliness that I hope to find someone who knows and understands me to spend this long life with. Love, for some reason, is just so hard for me. Everyone knows that love is something you have to fight for yourself, but I always lack the courage to fight for it, to go looking for it. I really hope I can change this flaw in my character. To find someone I love who also loves me, and then “you jump, I jump”: what a happy thing that would be. I really don’t want to grow old and, looking back at the end, find that I never truly encountered love.

Advantages of Exceptions

Advantage 1: Separating Error-Handling Code from “Regular” Code

Traditional error management requires error detection, reporting, and the returning of error codes, which often makes the logical flow of the code hard to follow.
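
A small C++ sketch of the contrast (the helper functions are made-up stubs standing in for the readFile steps in the Java tutorial): with return codes every call is wrapped in a check, while with exceptions the normal path reads straight through and errors are handled in one place.

 #include <cstdio>
 #include <stdexcept>

 // Made-up stubs standing in for the readFile steps in the Java tutorial.
 bool openFile()       { return true; }
 bool readIntoMemory() { return true; }
 void openFileOrThrow()       { if (!openFile())       throw std::runtime_error("open failed"); }
 void readIntoMemoryOrThrow() { if (!readIntoMemory()) throw std::runtime_error("read failed"); }

 // Return-code style: every step is wrapped in a check, which buries the logic.
 int readFileWithCodes()
 {
   if (!openFile())       return -1;
   if (!readIntoMemory()) return -2;
   return 0;
 }

 // Exception style: the normal path reads straight through; errors are handled once.
 void readFileWithExceptions()
 {
   try {
     openFileOrThrow();
     readIntoMemoryOrThrow();
   } catch (const std::exception& e) {
     std::fprintf(stderr, "error: %s\n", e.what());
   }
 }

 int main()
 {
   readFileWithCodes();
   readFileWithExceptions();
 }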

Advantage 2: Propagating Errors Up the Call Stack

Traditional error-notification techniques force all the methods in the call stack to propagate the error codes returned by readFile up the call stack until the codes finally reach the only method that is interested in them.

The Java runtime environment searches backward through the call stack to find any methods that are interested in handling a particular exception. Only the methods that care about errors have to worry about detecting errors.
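
A minimal C++ illustration of the same idea (the method names and the failure are invented for the example): the intermediate methods contain no error-handling code at all; the exception propagates past them to the one caller that cares.

 #include <cstdio>
 #include <stdexcept>

 void readFile() { throw std::runtime_error("disk error"); }  // the failing low-level call

 void method3() { readFile(); }     // no error-handling code needed here
 void method2() { method3(); }      // nor here

 void method1()                     // the only method interested in the error
 {
   try {
     method2();
   } catch (const std::exception& e) {
     std::printf("handled at the top: %s\n", e.what());
   }
 }

 int main() { method1(); }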

Advantage 3: Grouping and Differentiating Error Types

You could set up an exception handler that handles any Exception with a handler like the one shown here:

// A (too) general exception handler
catch (Exception e) {
    ...
}

The Exception class is close to the top of the Throwable class hierarchy. Therefore, this handler will catch many other exceptions in addition to those that the handler is intended to catch. You may want to handle exceptions this way if all you want your program to do, for example, is print out an error message for the user and then exit.

In most situations, however, you want exception handlers to be as specific as possible. The reason is that the first thing a handler must do is determine what type of exception occurred before it can decide on the best recovery strategy. In effect, by not catching specific errors, the handler must accommodate any possibility. Exception handlers that are too general can make code more error-prone by catching and handling exceptions that weren’t anticipated by the programmer and for which the handler was not intended.

As noted, you can create groups of exceptions and handle exceptions in a general fashion, or you can use the specific exception type to differentiate exceptions and handle exceptions in an exact fashion.
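
A short C++ analogue of ordering handlers from specific to general (the exception types here are just illustrative): the specific handler recovers when it can, and the general one serves only as a last-resort fallback.

 #include <cstdio>
 #include <stdexcept>

 void process()
 {
   throw std::out_of_range("index 7 is past the end");   // illustrative failure
 }

 int main()
 {
   try {
     process();
   } catch (const std::out_of_range& e) {   // specific: we know how to recover
     std::printf("out of range: %s\n", e.what());
   } catch (const std::exception& e) {      // general: last-resort reporting only
     std::printf("unexpected error: %s\n", e.what());
   }
 }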

cited from http://docs.oracle.com/javase/tutorial/essential/exceptions/advantages.html

Related Reference:

exceptions in C++    http://www.parashift.com/c++-faq-lite/exceptions.html#faq-17.6