- Initialization-on-demand holder idiom: https://en.wikipedia.org/wiki/Initialization-on-demand_holder_idiom
- [Java] Relies on the initialization phase of execution within the Java Virtual Machine (JVM) as specified by the Java Language Specification
- Double-checked locking: https://en.wikipedia.org/wiki/Double-checked_locking
- [Java] With the Java 5 memory model, declaring the instance field volatile ensures that multiple threads handle the singleton instance correctly.
- [Java] A synchronized block guards the second null check and the assignment, so only one thread can ever create the instance (see the sketch after this list).
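A minimal sketch of both idioms, using illustrative class names (HolderSingleton, DclSingleton) that are not from the original notes:

```java
// Initialization-on-demand holder idiom: the JVM guarantees that the nested
// Holder class is initialized exactly once, and only on the first call to
// getInstance(), so no explicit locking is needed.
public final class HolderSingleton {
    private HolderSingleton() {}

    private static final class Holder {
        static final HolderSingleton INSTANCE = new HolderSingleton();
    }

    public static HolderSingleton getInstance() {
        return Holder.INSTANCE;
    }
}

// Double-checked locking: the volatile field plus the synchronized block make
// the lazy initialization safe under the Java 5+ memory model.
final class DclSingleton {
    private static volatile DclSingleton instance;

    private DclSingleton() {}

    public static DclSingleton getInstance() {
        if (instance == null) {                      // first check, no locking
            synchronized (DclSingleton.class) {
                if (instance == null) {              // second check, under the lock
                    instance = new DclSingleton();
                }
            }
        }
        return instance;
    }
}
```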
2. The difference between a process and a thread
Processes
- Concept
- A process is an instance of a program in execution.
- Modern systems allow a single process to have multiple threads of execution, which execute concurrently.
- Process Scheduling
- The two main objectives of the process scheduling system are to keep the CPU busy at all times and to deliver "acceptable" response times for all programs, particularly for interactive ones.
- The process scheduler must meet these objectives by implementing suitable policies for swapping processes in and out of the CPU.
Threads
- Overview
- A thread is a basic unit of CPU utilization, consisting of a program counter, a stack, a set of registers, and a thread ID.
- Traditional (heavyweight) processes have a single thread of control - There is one program counter, and one sequence of instructions that can be carried out at any given time.
- Motivation
- Threads are very useful in modern programming whenever a process has multiple tasks that can be performed independently of one another.
- This is particularly true when one of the tasks may block, and it is desired to allow the other tasks to proceed without blocking. For example, in a word processor, a background thread may check spelling and grammar while a foreground thread processes user input (keystrokes), while yet a third thread loads images from the hard drive, and a fourth does periodic automatic backups of the file being edited.
- Another example is a web server - Multiple threads allow multiple requests to be satisfied simultaneously, without having to service requests sequentially or to fork off separate processes for every incoming request (see the thread-pool sketch after the Benefits list below).
- Benefits
- Responsiveness - One thread may provide rapid response while other threads are blocked or slowed down doing intensive calculations.
- Resource sharing - By default threads share common code, data, and other resources, which allows multiple tasks to be performed simultaneously in a single address space.
- Economy - Creating and managing threads is much faster than performing the same tasks for processes.
- Scalability, i.e. Utilization of multiprocessor architectures - A single threaded process can only run on one CPU, no matter how many may be available, whereas the execution of a multi-threaded application may be split amongst available processors.
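To make the web-server motivation concrete, a small sketch using Java's ExecutorService; the class name RequestPoolDemo and the handleRequest placeholder are illustrative only:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class RequestPoolDemo {
    public static void main(String[] args) throws InterruptedException {
        // A small pool of worker threads services many "requests" concurrently,
        // instead of handling them sequentially or forking a process per request.
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 10; i++) {
            final int requestId = i;
            pool.submit(() -> handleRequest(requestId));
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
    }

    // Placeholder for real request handling (parsing, I/O, building a response).
    private static void handleRequest(int id) {
        System.out.println("Handled request " + id + " on " + Thread.currentThread().getName());
    }
}
```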
Example: Chrome
- Problem
- It's nearly impossible to build a rendering engine that never crashes or hangs. It's also nearly impossible to build a rendering engine that is perfectly secure.
- In some ways, the state of web browsers around 2006 was like that of the single-user, co-operatively multi-tasked operating systems of the past. Just as a misbehaving application in such an operating system could take down the entire system, so could a misbehaving web page in a web browser. All it took was one browser or plug-in bug to bring down the entire browser and all of the currently running tabs.
- Modern operating systems are more robust because they put applications into separate processes that are walled off from one another. A crash in one application generally does not impair other applications or the integrity of the operating system, and each user's access to other users' data is restricted.
3. The producer-consumer problem: race condition
- In a race condition, a piece of code may or may not work correctly depending on which of two concurrent processes executes first and, more importantly, on whether one process is interrupted so that the other runs between critical steps of the first (see the sketch after the list of conditions below).
- This is a specific example of a more general situation known as the critical-section problem.
- A solution to the critical section problem must satisfy the following three conditions:
- Mutual Exclusion - Only one process at a time can be executing in its critical section.
- Progress - If no process is currently executing in its critical section, and one or more processes want to execute their critical sections, then only the processes not in their remainder sections can participate in the decision, and the decision cannot be postponed indefinitely.
- Bounded Waiting - There exists a limit on how many other processes can enter their critical sections after a process requests entry into its critical section and before that request is granted.
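As referenced above, a minimal sketch of a race condition: two threads increment a shared counter with no mutual exclusion, so the final total is usually smaller than expected because the read-modify-write is not atomic. The class name and iteration counts are illustrative:

```java
public class RaceDemo {
    private static int counter = 0;   // shared data, no synchronization

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                counter++;            // read, add, write: three steps that can interleave
            }
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        // Expected 200000, but typically prints a smaller number.
        System.out.println("counter = " + counter);
    }
}
```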
- Peterson's Solution
- Peterson's Solution is a classic software-based solution to the critical section problem. It is unfortunately not guaranteed to work on modern hardware, because compilers and processors may reorder plain load and store operations (a sketch using volatile fields to suppress that reordering follows).
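A sketch of the two-thread algorithm; the class and field names are illustrative. The fields are volatile precisely because plain fields would be subject to the load/store reordering mentioned above; under the Java memory model, volatile accesses keep the order the algorithm relies on:

```java
public class Peterson {
    private volatile boolean flag0 = false;  // thread 0 wants to enter
    private volatile boolean flag1 = false;  // thread 1 wants to enter
    private volatile int turn = 0;           // whose turn it is to yield

    // id is 0 or 1, identifying the calling thread.
    public void lock(int id) {
        if (id == 0) {
            flag0 = true;
            turn = 1;
            while (flag1 && turn == 1) { /* busy-wait */ }
        } else {
            flag1 = true;
            turn = 0;
            while (flag0 && turn == 0) { /* busy-wait */ }
        }
    }

    public void unlock(int id) {
        if (id == 0) flag0 = false;
        else flag1 = false;
    }
}
```

Thread 0 would wrap its critical section in lock(0) ... unlock(0), and thread 1 in lock(1) ... unlock(1).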
- Synchronization Hardware
- To generalize the solution expressed above, each process when entering its critical section must set some sort of lock, to prevent other processes from entering their critical sections simultaneously, and must release the lock when exiting its critical section, to allow other processes to proceed. Obviously it must only be possible to obtain the lock when no other process already holds it; hardware provides atomic instructions such as test-and-set and compare-and-swap for exactly this purpose (see the sketch below).
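In Java these hardware atomic instructions are exposed through java.util.concurrent.atomic; a minimal test-and-set style spinlock built on AtomicBoolean.compareAndSet (a sketch, not a production-quality lock):

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class SpinLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    public void acquire() {
        // compareAndSet is an atomic, hardware-backed operation: the lock is
        // obtained only if no other thread currently holds it.
        while (!locked.compareAndSet(false, true)) {
            // busy-wait (spin) until the lock is released
        }
    }

    public void release() {
        locked.set(false);
    }
}
```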
- Mutex Locks
- The hardware solutions presented above are often difficult for ordinary programmers to access, particularly on multi-processor machines, and particularly because they are often platform-dependent.
- Therefore most systems offer a software API equivalent called mutex locks or simply mutexes.
- The terminology when using mutexes is to acquire a lock prior to entering a critical section, and to release it when exiting (a ReentrantLock example follows this sub-list).
- One problem with an acquire() implemented as a busy loop (as in the spinlock sketch above) is that the waiting process spins instead of sleeping. These types of locks are referred to as spinlocks, because the CPU just sits and spins while the process waits.
- Spinlocks are wasteful of CPU cycles, and are a really bad idea on single-CPU machines, because the spinning process occupies the only CPU and prevents the process holding the lock from running and releasing it.
- On the other hand, spinlocks do not incur the overhead of a context switch, so they are used effectively on multiprocessor machines when the lock is expected to be released after a short time.
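The acquire/release pattern with Java's standard mutex API, java.util.concurrent.locks.ReentrantLock (a blocking lock rather than a spinlock); the Counter class is only an example:

```java
import java.util.concurrent.locks.ReentrantLock;

public class Counter {
    private final ReentrantLock mutex = new ReentrantLock();
    private int value = 0;

    public void increment() {
        mutex.lock();          // acquire before entering the critical section
        try {
            value++;           // critical section
        } finally {
            mutex.unlock();    // always release on exit, even on exceptions
        }
    }

    public int get() {
        mutex.lock();
        try {
            return value;
        } finally {
            mutex.unlock();
        }
    }
}
```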
- Semaphores
- A more robust alternative to simple mutexes is to use semaphores, which are integer variables for which only two atomic operations are defined, the wait and signal operations.
- Note that not only must the variable-changing steps (S-- and S++) be indivisible; for the wait operation, once the test finds S > 0 there must also be no interruption before S gets decremented. The sketch below achieves this by performing the test and the decrement inside a single synchronized block.
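A sketch of a counting semaphore using Java's intrinsic monitors; the class name is illustrative, and in practice java.util.concurrent.Semaphore already provides this behavior:

```java
public class CountingSemaphore {
    private int permits;   // the semaphore's integer value S

    public CountingSemaphore(int initial) {
        this.permits = initial;
    }

    // wait / P: block until S > 0, then decrement. The test and the decrement
    // are indivisible because both happen while holding the object's monitor.
    public synchronized void acquire() throws InterruptedException {
        while (permits == 0) {
            wait();
        }
        permits--;
    }

    // signal / V: increment S and wake up a waiting thread.
    public synchronized void release() {
        permits++;
        notify();
    }
}
```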
4. Deadlocks
- A set of processes is deadlocked when every process in the set is waiting for a resource that is currently allocated to another process in the set.
- Necessary Conditions
- Mutual Exclusion - At least one resource must be held in a non-sharable mode; If any other process requests this resource, then that process must wait for the resource to be released.
- Hold and Wait - A process must be simultaneously holding at least one resource and waiting for at least one resource that is currently being held by some other process.
- No preemption - Once a process is holding a resource, then that resource cannot be taken away from that process until the process voluntarily releases it.
- Circular Wait - A set of processes { P[0], P[1], P[2], ..., P[N] } must exist such that every P[i] is waiting for a resource held by P[(i+1) % (N+1)].
- Methods for Handling Deadlocks
- Deadlock prevention or avoidance - Do not allow the system to get into a deadlocked state, e.g. by imposing a global lock-acquisition order (see the sketch after this list).
- Deadlock detection and recovery - Abort a process or preempt some resources when deadlocks are detected.
- Ignore the problem altogether - If deadlocks only occur once a year or so, it may be better to simply let them happen and reboot as necessary than to incur the constant overhead and system performance penalties associated with deadlock prevention or detection.
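A minimal sketch of how hold-and-wait plus circular wait produce a deadlock with two Java locks, and how a fixed global lock order (the prevention strategy mentioned above) would break the cycle; the class name is illustrative:

```java
public class DeadlockDemo {
    private static final Object lockA = new Object();
    private static final Object lockB = new Object();

    public static void main(String[] args) {
        // Thread 1 holds A and waits for B; thread 2 holds B and waits for A:
        // mutual exclusion + hold-and-wait + no preemption + circular wait.
        Thread t1 = new Thread(() -> {
            synchronized (lockA) {
                sleep(100);
                synchronized (lockB) {
                    System.out.println("t1 acquired both locks");
                }
            }
        });
        Thread t2 = new Thread(() -> {
            synchronized (lockB) {          // prevention: acquiring lockA first here,
                sleep(100);                 // i.e. using the same global order as t1,
                synchronized (lockA) {      // would make the circular wait impossible
                    System.out.println("t2 acquired both locks");
                }
            }
        });
        t1.start();
        t2.start();   // as written, this program usually hangs in a deadlock
    }

    private static void sleep(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }
}
```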
5. Classical Synchronization Problems
- The Producer/Consumer (Bounded Buffer) Problem (a bounded-buffer sketch follows this list)
- The Dining Philosophers Problem
- The Readers/Writers Problem
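A bounded-buffer sketch for the producer/consumer problem, built on the monitor-style wait/notify pattern; the class is illustrative, and java.util.concurrent.ArrayBlockingQueue provides the same behavior out of the box:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class BoundedBuffer<T> {
    private final Deque<T> items = new ArrayDeque<>();
    private final int capacity;

    public BoundedBuffer(int capacity) {
        this.capacity = capacity;
    }

    // Producer side: wait while the buffer is full, then add and wake consumers.
    public synchronized void put(T item) throws InterruptedException {
        while (items.size() == capacity) {
            wait();
        }
        items.addLast(item);
        notifyAll();
    }

    // Consumer side: wait while the buffer is empty, then remove and wake producers.
    public synchronized T take() throws InterruptedException {
        while (items.isEmpty()) {
            wait();
        }
        T item = items.removeFirst();
        notifyAll();
        return item;
    }
}
```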
References:
- The Chromium Projects: Design documents: https://www.chromium.org/developers/design-documents
- https://www.cs.uic.edu/~jbell/CourseNotes/OperatingSystems/3_Processes.html
- https://www.cs.uic.edu/~jbell/CourseNotes/OperatingSystems/4_Threads.html
- https://www.cs.uic.edu/~jbell/CourseNotes/OperatingSystems/5_Synchronization.html
- https://www.cs.uic.edu/~jbell/CourseNotes/OperatingSystems/7_Deadlocks.html
- https://cseweb.ucsd.edu/classes/wi06/cse120/Lec06x4.pdf