Wednesday, May 16, 2018

Recent Job Hunting: Process and Reflections


Staying at my current job is one option; finding a new one is another.

I'll start with the parts that have reached a conclusion; the parts without one can't really be written about yet (two companies still pending, plus a few others I'd like to look into).


1. Google, Microsoft

Outcome: no response at all. Fair enough; looking at my résumé honestly, there are no real highlights to speak of.

I'm trying to find ways to improve myself and to tie that improvement to my interests. Naturally, I thought of looking for courses on Coursera to strengthen my skills; beyond that, I also want to increase my visibility. After all, the world is pragmatic: people only take an interest in you once you have some results for them to see. With that in mind:

All I can do is keep fighting with myself.



2. Amazon

Outcome: a rejection letter for Software Development Engineer, plus a paused interview process for Cloud Support Engineer.

I signed an NDA, so I can't share details; I can only discuss the publicly posted job description. The Cloud Support Engineer role differs greatly from my current work, so I applied to the wrong position. That said, I really like what Amazon's leadership principles say:

I also tried to revisit the difficulties I've run into at work; you have to stop and look back to see past mistakes clearly. Skimming through a lot of DevOps material, I found that Google's understanding of DevOps stands apart:

For example, when discussing the evolution of automation, they describe how the process progresses:
  1. No automation
  2. Externally maintained system-specific automation
  3. Externally maintained generic automation
  4. Internally maintained system-specific automation
  5. Systems that don’t need any automation

There's much more besides; with my limited knowledge, reading it was especially rewarding. Google does it this way and we can perhaps use it as a reference, but in the end it's Google's experience; when I try to drop a few lines about it in interviews, it still doesn't come naturally.



3. One DevOps Startup

Outcome: got the offer.

Coming up with the idea and turning it into a money-making machine is genuinely impressive; Taiwanese startups are remarkable. Interviewing helped me understand DevOps all over again: the things startups treasure (Jenkins, Ansible, Kubernetes) are not things to take for granted; I simply hadn't dug deeply into what DevOps involves.

(I have an urge to set these tech stacks up and play with them, once I have some free time.)

If there's a shortcoming, it's probably the underwhelming salary, even though pay in this industry is already good compared with traditional Taiwanese companies. The interview had no tricky technical questions, which says a lot. From entering the workforce to now being 安安's dad: even with an underwhelming salary, I don't have what it takes to go out, found a company, be the boss, and pay employees that much myself, so I should still thank the company for being willing to pay me this much (and no, I'm not shilling for management). If a future employer is willing to pay even more, I'll be very grateful to them too.

If the essence of capitalism is greed, then we should be exactly that greedy.

(A truth without much warmth.)

(I fantasize about finishing my labor-law credits, about buckling down to study for the bar exam, about opening the civil-procedure lecture notes without dozing off.)

(But none of it survives the test of the truth of greed.)



Every moment has its own confusions, but I always want to make myself a little happier.



I hope my darling 沿莉 can relax a bit, and that I can too.

Thursday, May 10, 2018

Friday, April 27, 2018

[Note #4][Algorithm] Backtracking

Backtracking: http://acm.nudt.edu.cn/~twcourse/Backtracking.html

Example:
  • Problem: 
    • A ring is composed of n (an even number) circles, as shown in the diagram. Put a distinct natural number into each circle so that the sum of the numbers in any two adjacent circles is a prime.
    • Note: the number in the first circle must always be 1.
    • You are to write a program that completes the above process.
  • AC: https://github.com/Meng-Gen/UVa/blob/master/UVa/acm_524.cc
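The prime-ring search above is a textbook backtracking exercise. As a sketch (independent of the linked C++ AC solution), a Python version might look like this:

```python
# Prime ring: place 1..n in a ring, starting with 1, so that every pair of
# adjacent numbers (including the last and the first) sums to a prime.
def is_prime(x):
    if x < 2:
        return False
    d = 2
    while d * d <= x:
        if x % d == 0:
            return False
        d += 1
    return True

def prime_rings(n):
    ring = [1]                 # the first circle is fixed to 1
    used = [False] * (n + 1)
    used[1] = True
    results = []

    def backtrack():
        if len(ring) == n:
            # Close the ring: the last number must also pair with the first.
            if is_prime(ring[-1] + ring[0]):
                results.append(ring[:])
            return
        for k in range(2, n + 1):
            if not used[k] and is_prime(ring[-1] + k):
                used[k] = True     # choose
                ring.append(k)
                backtrack()        # explore
                ring.pop()         # un-choose (backtrack)
                used[k] = False

    backtrack()
    return results
```

For n = 6 this finds the two rings 1 4 3 2 5 6 and 1 6 5 2 3 4, matching the classic UVa 524 output order.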


Tuesday, April 24, 2018

[Note] TCP/IP State Transition

TCP/IP State Transition Diagram:

UNIX Network Programming, Volume 1, 3rd edition:

There are 11 different states defined for a connection and the rules of TCP dictate the transitions from one state to another, based on the current state and the segment received in that state. For example, if an application performs an active open in the CLOSED state, TCP sends a SYN and the new state is SYN_SENT. If TCP next receives a SYN with an ACK, it sends an ACK and the new state is ESTABLISHED. This final state is where most data transfer occurs.

The two arrows leading from the ESTABLISHED state deal with the termination of a connection. If an application calls close before receiving a FIN (an active close), the transition is to the FIN_WAIT_1 state. But if an application receives a FIN while in the ESTABLISHED state (a passive close), the transition is to the CLOSE_WAIT state.

We denote the normal client transitions with a darker solid line and the normal server transitions with a darker dashed line. We also note that there are two transitions that we have not talked about: a simultaneous open (when both ends send SYNs at about the same time and the SYNs cross in the network) and a simultaneous close (when both ends send FINs at the same time).
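A slice of the state diagram described above can be modeled as a lookup table. This sketch covers only the transitions quoted in the text (active open, establishment, and the active/passive close out of ESTABLISHED), not all eleven states:

```python
# Partial TCP state machine: (current_state, event) -> (action, next_state).
# Only the transitions discussed above are modeled; the full diagram has
# eleven states and more edges (simultaneous open/close, timeouts, ...).
TRANSITIONS = {
    ("CLOSED",      "active_open"):  ("send SYN", "SYN_SENT"),
    ("SYN_SENT",    "recv SYN+ACK"): ("send ACK", "ESTABLISHED"),
    ("ESTABLISHED", "close"):        ("send FIN", "FIN_WAIT_1"),  # active close
    ("ESTABLISHED", "recv FIN"):     ("send ACK", "CLOSE_WAIT"),  # passive close
}

def step(state, event):
    """Return (action, next_state) for a known transition."""
    return TRANSITIONS[(state, event)]
```

Tracing a client through `active_open` and then `recv SYN+ACK` walks it from CLOSED to ESTABLISHED, exactly as the quoted paragraph describes.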



Three-Way Handshake

The following scenario occurs when a TCP connection is established:
  1. The server must be prepared to accept an incoming connection. This is normally done by calling socket, bind, and listen and is called a passive open. 
  2. The client issues an active open by calling connect. This causes the client TCP to send a "synchronize" (SYN) segment, which tells the server the client's initial sequence number for the data that the client will send on the connection. Normally, there is no data sent with the SYN; it just contains an IP header, a TCP header, and possible TCP options (which we will talk about shortly).
  3. The server must acknowledge (ACK) the client's SYN and the server must also send its own SYN containing the initial sequence number for the data that the server will send on the connection. The server sends its SYN and the ACK of the client's SYN in a single segment.
  4. The client must acknowledge the server's SYN.
The minimum number of packets required for this exchange is three; hence, this is called TCP's three-way handshake.
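The passive open (socket, bind, listen) in step 1 and the active open (connect) in step 2 map directly onto the sockets API; the handshake itself happens inside the kernel while connect and accept run. A minimal loopback sketch:

```python
import socket
import threading

def run_once():
    # Passive open: socket, bind, listen (server side).
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 0))    # port 0: let the OS pick a free port
    server.listen(1)
    port = server.getsockname()[1]

    def serve():
        conn, _ = server.accept()    # returns once the handshake completes
        conn.sendall(b"hello")
        conn.close()

    t = threading.Thread(target=serve)
    t.start()

    # Active open: connect sends the SYN and blocks until the
    # three-way handshake finishes (SYN, SYN+ACK, ACK).
    client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    client.connect(("127.0.0.1", port))
    data = client.recv(5)
    client.close()
    t.join()
    server.close()
    return data
```

When `connect` returns, both ends are in the ESTABLISHED state and data transfer can begin.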



TCP Connection Termination

While it takes three segments to establish a connection, it takes four to terminate a connection.
  1. One application calls close first, and we say that this end performs the active close. This end's TCP sends a FIN segment, which means it is finished sending data. 
  2. The other end that receives the FIN performs the passive close. The received FIN is acknowledged by TCP. The receipt of the FIN is also passed to the application as an end-of-file (after any data that may have already been queued for the application to receive), since the receipt of the FIN means the application will not receive any additional data on the connection.
  3. Sometime later, the application that received the end-of-file will close its socket. This causes its TCP to send a FIN.
  4. The TCP on the system that receives this final FIN (the end that did the active close) acknowledges the FIN.
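Step 2's "FIN passed to the application as end-of-file" is observable from user space: after the peer half-closes with shutdown, any queued data is delivered first, and then recv returns an empty bytes object. A sketch (socketpair gives a connected local stream pair rather than a real TCP connection, but its shutdown/EOF semantics mirror TCP's FIN handling):

```python
import socket

def fin_as_eof():
    a, b = socket.socketpair()
    a.sendall(b"bye")
    a.shutdown(socket.SHUT_WR)           # active close of the write side
    data = b""
    while len(data) < 3:
        data += b.recv(3 - len(data))    # queued data is delivered first
    eof = b.recv(1)                      # then EOF: recv returns b""
    a.close()
    b.close()
    return data, eof
```

The empty `b""` from the final recv is exactly the end-of-file that tells the application no more data will arrive on the connection.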

Monday, April 23, 2018

I'm back.


(During the lunch break, 阿比 pointed at me, not realizing the other four fingers were pointing back at himself XD)

I'm back.


Thank you all for waiting for me to come back.


Lately I've spent a lot of time slacking off and a lot of time adjusting, because 安安 came to be our darling. Sometimes the little crybaby just cries and cries, and I can't help venting, "what on earth do you want?" Sometimes he fusses at four in the morning, and I carry him around, make formula, and lie on the play mat with him until nine, when 沿莉 rescues me from the abyss. And sometimes grandma helps look after 安安, so 沿莉 and I can get out for some air and zone out.


But the thought that I may leave this world before 安安 does still brings me to tears. My labor-law professor told us in class that he hopes to see a minimum wage act pass within his lifetime; he wasn't criticizing the government, just wishing himself a long life. My conclusion: keep my body as healthy as possible, so I can watch over 沿莉 and 安安 for longer.


Thanks to 賓賓 for being considerate and opening up a walking route so everyone could stroll. I worked hard to keep up with everyone (weaving through the silvergrass like an ant burrowing around in the grass), and worked hard to follow the conversation and recall things, but having not been out in so long, I had nothing left to contribute (facepalm). My worse behavior was giving bad directions: I had a pile of stroll-level routes preloaded in my head, eliminated the ones with poor trail conditions, and then found there was no route left (called out by the group leader). The Mt. Huangzui crater is beautiful, but an ant can't see the god's-eye view. In Iceland I badly wanted to walk a full loop around a big crater, yet at Mt. Huangzui's crater in Taiwan, I decided it was too tiring and skipped the loop. A total slacker.


At least driving three people home is something I can brag about.


Driving down the mountain to drop people off, I had planned to listen to 陳聰富's lectures on the General Principles of Civil Law in the car, but I just listened to the radio and zoned out, exhausted. I imagine 沿莉, home with a cold looking after 安安, was even more tired.


Thanks to 沿莉 for loving 安安; when 安安 grows up, he will love his mom too.

Friday, April 13, 2018

[Note] Some questions about threads & processes

1. Singleton pattern: https://en.wikipedia.org/wiki/Singleton_pattern
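For question 1, a minimal thread-safe singleton in Python might use double-checked locking (the class name here is illustrative):

```python
import threading

class Singleton:
    _instance = None
    _lock = threading.Lock()

    def __new__(cls):
        # Double-checked locking: the fast path skips the lock once the
        # instance exists; the lock makes the first creation race-free.
        if cls._instance is None:
            with cls._lock:
                if cls._instance is None:
                    cls._instance = super().__new__(cls)
        return cls._instance
```

Every call to `Singleton()` returns the same object, which is the whole point of the pattern: one instance, globally accessible.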

2. The difference between a process and a thread

Processes
  • Concept
    • A process is an instance of a program in execution.
    • Modern systems allow a single process to have multiple threads of execution, which execute concurrently.
  • Process Scheduling
    • The two main objectives of the process scheduling system are to keep the CPU busy at all times and to deliver "acceptable" response times for all programs, particularly for interactive ones.
    • The process scheduler must meet these objectives by implementing suitable policies for swapping processes in and out of the CPU.

(Quote from here)

Threads
  • Overview
    • A thread is a basic unit of CPU utilization, consisting of a program counter, a stack, a set of registers, and a thread ID.
    • Traditional (heavyweight) processes have a single thread of control - There is one program counter, and one sequence of instructions that can be carried out at any given time.
  • Motivation
    • Threads are very useful in modern programming whenever a process has multiple tasks to perform independently of the others.
    • This is particularly true when one of the tasks may block, and it is desired to allow the other tasks to proceed without blocking. For example in a word processor, a background thread may check spelling and grammar while a foreground thread processes user input (keystrokes), while yet a third thread loads images from the hard drive, and a fourth does periodic automatic backups of the file being edited. 
    • Another example is a web server - Multiple threads allow for multiple requests to be satisfied simultaneously, without having to service requests sequentially or to fork off separate processes for every incoming request.
  • Benefits
    • Responsiveness - One thread may provide rapid response while other threads are blocked or slowed down doing intensive calculations.
    • Resource sharing - By default threads share common code, data, and other resources, which allows multiple tasks to be performed simultaneously in a single address space. 
    • Economy - Creating and managing threads is much faster than performing the same tasks for processes. 
    • Scalability, i.e. Utilization of multiprocessor architectures - A single threaded process can only run on one CPU, no matter how many may be available, whereas the execution of a multi-threaded application may be split amongst available processors.
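The resource-sharing benefit above can be seen in a few lines: threads in one process share the same address space, so all the workers below update a single counter, with a lock to keep the shared update race-free. This is a sketch; the names are illustrative.

```python
import threading

counter = 0
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:          # shared data, so updates must be mutually exclusive
            counter += 1

def run(threads=4, per_thread=10_000):
    ts = [threading.Thread(target=worker, args=(per_thread,))
          for _ in range(threads)]
    for t in ts:
        t.start()
    for t in ts:
        t.join()
    return counter
```

Without the lock, `counter += 1` (a read-modify-write) can interleave between threads and lose updates, which previews the race-condition discussion in section 3.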

Example: Chrome
  • Problem 
    • It's nearly impossible to build a rendering engine that never crashes or hangs. It's also nearly impossible to build a rendering engine that is perfectly secure. 
    • In some ways, the state of web browsers around 2006 was like that of the single-user, co-operatively multi-tasked operating systems of the past. As a misbehaving application in such an operating system could take down the entire system, so could a misbehaving web page in a web browser. All it took is one browser or plug-in bug to bring down the entire browser and all of the currently running tabs. 
    • Modern operating systems are more robust because they put applications into separate processes that are walled off from one another. A crash in one application generally does not impair other applications or the integrity of the operating system, and each user's access to other users' data is restricted.

3. The producer-consumer problem: race condition
  • In this condition a piece of code may or may not work correctly, depending on which of two simultaneous processes executes first, and more importantly if one of the processes gets interrupted such that the other process runs between important steps of the first process.
  • A specific example of a more general situation known as the critical section problem. 
  • A solution to the critical section problem must satisfy the following three conditions:
    • Mutual Exclusion - Only one process at a time can be executing in their critical section.
    • Progress - If no process is currently executing in their critical section, and one or more processes want to execute their critical section, then only the processes not in their remainder sections can participate in the decision, and the decision cannot be postponed indefinitely.
    • Bounded Waiting - There exists a limit as to how many other processes can get into their critical sections after a process requests entry into their critical section and before that request is granted.
  • Peterson's Solution
    • Peterson's Solution is a classic software-based solution to the critical section problem. It is unfortunately not guaranteed to work on modern hardware, due to vagaries of load and store operations.
  • Synchronization Hardware
    • To generalize the solution expressed above, each process when entering their critical section must set some sort of lock, to prevent other processes from entering their critical sections simultaneously, and must release the lock when exiting their critical section, to allow other processes to proceed. Obviously it must be possible to attain the lock only when no other process has already set a lock. 
  • Mutex Locks
    • The hardware solutions presented above are often difficult for ordinary programmers to access, particularly on multi-processor machines, and particularly because they are often platform-dependent.
    • Therefore most systems offer a software API equivalent called mutex locks or simply mutexes.
    • The terminology when using mutexes is to acquire a lock prior to entering a critical section, and to release it when exiting.
    • One problem with the implementation shown here is the busy loop used to block processes in the acquire phase. These types of locks are referred to as spinlocks, because the CPU just sits and spins while blocking the process. 
    • Spinlocks are wasteful of CPU cycles, and are a really bad idea on single-CPU single-threaded machines, because the spinlock blocks the entire computer, and doesn't allow any other process to release the lock.
    • On the other hand, spinlocks do not incur the overhead of a context switch, so they are effectively used on multi-threaded machines when it is expected that the lock will be released after a short time.
  • Semaphores
    • A more robust alternative to simple mutexes is to use semaphores, which are integer variables for which only two atomic operations are defined, the wait and signal operations.
    • Note that not only must the variable-changing steps (S-- and S++) be indivisible, it is also necessary that for the wait operation when the test proves false that there be no interruptions before S gets decremented.
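The bounded-buffer producer-consumer problem from the start of section 3 can be sketched with the classic structure: two counting semaphores (free slots and filled slots) plus a mutex protecting the buffer itself. The names and buffer capacity here are illustrative.

```python
import threading
from collections import deque

CAPACITY = 4
buffer = deque()
mutex = threading.Lock()
empty = threading.Semaphore(CAPACITY)  # counts free slots
full = threading.Semaphore(0)          # counts filled slots

def producer(items):
    for item in items:
        empty.acquire()        # wait(empty): block while the buffer is full
        with mutex:
            buffer.append(item)
        full.release()         # signal(full)

def consumer(n, out):
    for _ in range(n):
        full.acquire()         # wait(full): block while the buffer is empty
        with mutex:
            out.append(buffer.popleft())
        empty.release()        # signal(empty)

def run():
    out = []
    p = threading.Thread(target=producer, args=(list(range(10)),))
    c = threading.Thread(target=consumer, args=(10, out))
    p.start(); c.start()
    p.join(); c.join()
    return out
```

The semaphores handle the blocking (full buffer stalls the producer, empty buffer stalls the consumer), while the mutex keeps the buffer operations themselves in a critical section.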

4. Deadlocks
  • A set of processes is deadlocked when every process in the set is waiting for a resource that is currently allocated to another process in the set.
  • Necessary Conditions
    • Mutual Exclusion - At least one resource must be held in a non-sharable mode; If any other process requests this resource, then that process must wait for the resource to be released. 
    • Hold and Wait - A process must be simultaneously holding at least one resource and waiting for at least one resource that is currently being held by some other process. 
    • No preemption - Once a process is holding a resource, then that resource cannot be taken away from that process until the process voluntarily releases it. 
    • Circular Wait - A set of processes { P[0], P[1], P[2], ..., P[N] } must exist such that every P[i] is waiting for P[(i+1) % (N+1)]. 
  • Methods for Handling Deadlocks
    • Deadlock prevention or avoidance - Do not allow the system to get into a deadlocked state.
    • Deadlock detection and recovery - Abort a process or preempt some resources when deadlocks are detected.
    • Ignore the problem all together - If deadlocks only occur once a year or so, it may be better to simply let them happen and reboot as necessary than to incur the constant overhead and system performance penalties associated with deadlock prevention or detection.
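A common prevention technique attacks the circular-wait condition: impose a global ordering on locks and always acquire them in that order, so no cycle of waiters can form. A sketch (ordering by object id is just one illustrative choice):

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()

def with_both(first, second):
    # Break circular wait: every thread acquires the two locks in the
    # same global order, regardless of the order it was handed them.
    lo, hi = sorted((first, second), key=id)
    with lo:
        with hi:
            return "done"

def run():
    results = []
    t1 = threading.Thread(target=lambda: results.append(with_both(lock_a, lock_b)))
    t2 = threading.Thread(target=lambda: results.append(with_both(lock_b, lock_a)))
    t1.start(); t2.start()
    t1.join(); t2.join()
    return results
```

If each thread instead took the locks in the order it received them, t1 holding `lock_a` while waiting for `lock_b` and t2 holding `lock_b` while waiting for `lock_a` would satisfy all four necessary conditions at once.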

5. Classical Synchronization Problems

References: