"The Art of Concurrent Programming" - Reading Notes 05: Locks in Java

Reading notes on Chapter 5 of "The Art of Concurrent Programming," a perfect combination of text and images.

You lock me, I lock you, neither side gives way, and then it leads to a deadlock, much like love.


1. Lock Interface#

Provides features that synchronized does not have:

  1. Attempt to acquire the lock non-blockingly: tryLock(), returns immediately after calling the method.
  2. Can be interrupted while acquiring the lock: lockInterruptibly(): allows the current thread to be interrupted during the lock acquisition.
  3. Timeout for acquiring the lock: tryLock(time, unit), returns after timeout.

The implementation of the Lock interface is basically done by aggregating a subclass of a synchronizer to control thread access.

Usage Notes

  • The unlock() call should be placed in a finally block, so that a lock, once acquired, is always released in the end.
  • The lock() call should not be placed inside the try block: if an exception is thrown while acquiring the lock (for example in a custom lock implementation), the finally block would still run unlock(), causing the lock to be released unexpectedly.
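
A minimal sketch of the recommended usage pattern (illustrative, using ReentrantLock):

Lock lock = new ReentrantLock();
lock.lock();                 // acquire outside the try block
try {
    // ... critical section ...
} finally {
    lock.unlock();           // always release in finally
}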


2. Queue Synchronizer#

The queue synchronizer AbstractQueuedSynchronizer is a foundational framework for building locks or other synchronization components.
It uses an int member variable to represent the synchronization state and employs an internal FIFO queue to manage the queuing of threads acquiring resources.

The main usage of the synchronizer is through inheritance, where subclasses inherit the synchronizer and implement its abstract methods to manage synchronization state. The synchronizer can support both exclusive and shared acquisition of synchronization state, making it convenient to implement different types of synchronization components (such as ReentrantLock, ReentrantReadWriteLock, and CountDownLatch).

==The synchronizer== is the key to implementing a lock: the lock implementation aggregates a synchronizer to realize the lock's semantics.
Understanding the relationship between the two:

  • The lock is user-facing, defining the interface for user interaction with the lock while hiding implementation details;
  • The synchronizer is implementation-facing, simplifying the lock implementation and shielding low-level operations such as synchronization state management, thread queuing, waiting, and waking.

Interface and Example of Queue Synchronizer#

The design of the synchronizer is based on the template method pattern: users inherit the synchronizer and override the specified methods, then aggregate the synchronizer inside a custom synchronization component and call the template methods it provides, which in turn call the methods the user has overridden.

When overriding the specified methods of the synchronizer, the following 3 methods provided by the synchronizer should be used to access or modify the synchronization state:

  • getState(): Get the current synchronization state.
  • setState(int newState): Set the current synchronization state.
  • compareAndSetState(int expect, int update): Set the current state using CAS, ensuring atomicity of the state setting.


The template methods provided by the synchronizer are basically divided into 3 categories: exclusive acquisition and release of synchronization state, shared acquisition and release of synchronization state, and querying the status of waiting threads in the synchronization queue. Custom synchronization components will use the template methods provided by the synchronizer to implement their own synchronization semantics.

Working Principle#

Mutex source code omitted.
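
Since the Mutex source is omitted above, here is a minimal sketch of what such an exclusive lock built on the synchronizer typically looks like (my reconstruction following the standard AQS pattern, not a verbatim copy from the book):

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.AbstractQueuedSynchronizer;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;

// A non-reentrant exclusive lock: state 0 = unlocked, 1 = locked.
public class Mutex implements Lock {
    private static class Sync extends AbstractQueuedSynchronizer {
        @Override
        protected boolean tryAcquire(int acquires) {
            // Succeeds only when the state transitions 0 -> 1 atomically.
            if (compareAndSetState(0, 1)) {
                setExclusiveOwnerThread(Thread.currentThread());
                return true;
            }
            return false;
        }
        @Override
        protected boolean tryRelease(int releases) {
            if (getState() == 0) throw new IllegalMonitorStateException();
            setExclusiveOwnerThread(null);
            setState(0);
            return true;
        }
        @Override
        protected boolean isHeldExclusively() {
            return getState() == 1;
        }
        Condition newCondition() { return new ConditionObject(); }
    }

    private final Sync sync = new Sync();

    public void lock()              { sync.acquire(1); }
    public boolean tryLock()        { return sync.tryAcquire(1); }
    public void unlock()            { sync.release(1); }
    public Condition newCondition() { return sync.newCondition(); }
    public void lockInterruptibly() throws InterruptedException { sync.acquireInterruptibly(1); }
    public boolean tryLock(long timeout, TimeUnit unit) throws InterruptedException {
        return sync.tryAcquireNanos(1, unit.toNanos(timeout));
    }
}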

Implementation Analysis of Queue Synchronizer#

1. Synchronization Queue#

The management of synchronization state is accomplished through a FIFO doubly linked queue. When the current thread fails to acquire the synchronization state, the synchronizer constructs a Node containing the current thread and its waiting state information and adds it to the synchronization queue, while blocking the current thread. When the synchronization state is released, it wakes up the thread in the head node, allowing it to attempt to acquire the synchronization state again.

The head node is the node that successfully acquired the synchronization state. When the head node releases the synchronization state, it wakes up its successor nodes, and when the successor node successfully acquires the synchronization state, it sets itself as the head node.

(Table of the Node's property types, names, and descriptions omitted.)

2. Exclusive Synchronization State Acquisition and Release#

public final void acquire(int arg) {
    if (!tryAcquire(arg) && acquireQueued(addWaiter(Node.EXCLUSIVE), arg))
        // acquireQueued returns true if the thread was interrupted while waiting in the queue;
        // selfInterrupt() restores the interrupt status.
        selfInterrupt();
}
// This method completes the acquisition of the synchronization state, node construction, enqueueing into the synchronization queue, and spinning in the synchronization queue.

Code Analysis:
First, it attempts to acquire the synchronization state. If that fails, it constructs an exclusive synchronization node (Node.EXCLUSIVE) and appends it to the tail of the synchronization queue, then calls acquireQueued, which makes the node spin in a loop trying to acquire the synchronization state; if it cannot, the thread in the node is blocked.

There are two spin loops: one while enqueueing (enq) and one after enqueueing (acquireQueued).

  1. addWaiter and enq methods

    In the "spin loop," the current thread can only return from this method after successfully setting the node as the tail node using CAS. Otherwise, the current thread keeps trying to set it. It can be seen that the enq(final Node node) method serializes the requests to add nodes concurrently through CAS.

    The addWaiter method first attempts a fast enqueue, but under concurrency the fast path may fail (for example, the tail is still null or the CAS loses), so the enq method loops until the node is successfully appended to the tail.
    
  2. acquireQueued method

    ==A node may attempt to acquire the synchronization state only when its predecessor node is the head node==, for two reasons (a simplified sketch of this spin loop follows the list):

    1. The head node is the node that has successfully acquired the synchronization state. When the head node's thread releases the synchronization state, it wakes up its successor, and the awakened thread then needs to check whether its predecessor node is the head node.
    2. It maintains the FIFO principle of the synchronization queue. Nodes do not need to communicate with one another, which makes it easy to handle premature wake-ups (a premature wake-up is when a thread whose predecessor is not the head node is awakened, for example by an interrupt).
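
A simplified sketch of the acquireQueued spin loop (cancellation handling omitted; a paraphrase of the pattern rather than the exact JDK source):

final boolean acquireQueued(final Node node, int arg) {
    boolean interrupted = false;
    for (;;) {
        final Node p = node.predecessor();
        // Only the node whose predecessor is the head may try to acquire.
        if (p == head && tryAcquire(arg)) {
            setHead(node);        // success: this node becomes the new head
            p.next = null;        // help GC
            return interrupted;
        }
        // Otherwise park the thread, recording whether it was interrupted while parked.
        if (shouldParkAfterFailedAcquire(p, node) && parkAndCheckInterrupt())
            interrupted = true;
    }
}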

Releasing the synchronization state uses the release method.

public final boolean release(int arg) {
	if (tryRelease(arg)) {
		Node h = head;
		if (h != null && h.waitStatus != 0)
            // The unparkSuccessor(Node node) method uses LockSupport (which will be introduced in later chapters) to wake up threads in a waiting state.
			unparkSuccessor(h);
		return true;
	}
	return false;
}
Summary: When acquiring the synchronization state, the synchronizer maintains a synchronization queue, and threads that fail to acquire the state are added to the queue and spin in the queue. The condition for exiting the queue (stopping spinning) is that the predecessor node is the head node and successfully acquires the synchronization state. When releasing the synchronization state, the synchronizer calls the tryRelease method to release the synchronization state and then wakes up the successor node of the head node.


3. Shared Synchronization State Acquisition and Release#

The main difference: whether multiple threads can acquire the synchronization state at the same time.

In shared access to resources, other shared accesses are allowed, while exclusive access is blocked. When accessing resources exclusively, all other accesses are blocked at the same time.
  • The return value of the tryAcquireShared(int arg) method is of type int. When the return value is greater than or equal to 0, it indicates that the synchronization state can be acquired.
  • The releaseShared method differs from exclusive access mainly in that the tryReleaseShared(int arg) method must ensure that the synchronization state (or resource count) is released thread-safely, generally ensured through loops and CAS, as the operation of releasing the synchronization state may come from multiple threads simultaneously.
There are still many details in the source code that I do not understand.
For example, the timing of interrupt placement, how to ensure thread safety during shared access, and what propagation and signaling mean in shared acquisition of synchronization state...
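
As a concrete illustration of the "loop plus CAS" point above, here is a minimal sketch of a shared release, patterned after the way Semaphore's synchronizer does it (not verbatim source):

protected boolean tryReleaseShared(int releases) {
    for (;;) {
        int current = getState();
        int next = current + releases;          // return the released count to the pool
        if (next < current)                     // overflow check
            throw new Error("Maximum count exceeded");
        if (compareAndSetState(current, next))
            return true;                        // CAS succeeded: state released safely
        // CAS failed because another thread changed the state concurrently; retry.
    }
}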

4. Exclusive Timeout for Acquiring Synchronization State#

By calling the synchronizer's doAcquireNanos(int arg, long nanosTimeout) method, one can acquire the synchronization state with a timeout, meaning that within a specified time period, if the synchronization state is acquired, it returns true; otherwise, it returns false. This method provides features that traditional Java synchronization operations (such as the synchronized keyword) do not have.

The process of acquiring synchronization state that responds to interrupts

In Java 5, the synchronizer provides the acquireInterruptibly(int arg) method, which immediately returns and throws InterruptedException if the current thread is interrupted while waiting to acquire the synchronization state.

The doAcquireNanos(int arg, long nanosTimeout) method adds the feature of timeout acquisition on the basis of supporting interrupt response. For timeout acquisition, it is mainly necessary to calculate the sleep time interval nanosTimeout. To prevent premature notifications, the calculation formula for nanosTimeout is: nanosTimeout -= now - lastTime, where now is the current wake-up time and lastTime is the last wake-up time. If nanosTimeout is greater than 0, it indicates that the timeout has not yet arrived, and the thread needs to continue sleeping for nanosTimeout nanoseconds; otherwise, it indicates that it has timed out.

The process of exclusive timeout acquisition of synchronization state is very similar to that of exclusive acquisition of synchronization state.

The main difference lies in the handling logic when the synchronization state is not acquired. In acquire(int args), if the synchronization state is not acquired, the current thread will remain in a waiting state, while doAcquireNanos(int arg, long nanosTimeout) will cause the current thread to wait for nanosTimeout nanoseconds. If the current thread does not acquire the synchronization state within nanosTimeout nanoseconds, it will automatically return from the waiting logic.

When the remaining time is less than or equal to the spin threshold (spinForTimeoutThreshold, 1000 nanoseconds), the thread no longer performs a timed wait but enters a fast spin instead, because very short timed parks are not precise enough to be worth the cost.
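
A simplified sketch of the timed acquisition loop (cancellation handling omitted; a paraphrase of the pattern rather than the exact JDK source):

private boolean doAcquireNanos(int arg, long nanosTimeout) throws InterruptedException {
    final long deadline = System.nanoTime() + nanosTimeout;
    final Node node = addWaiter(Node.EXCLUSIVE);
    for (;;) {
        final Node p = node.predecessor();
        if (p == head && tryAcquire(arg)) {
            setHead(node);
            p.next = null;                       // help GC
            return true;                         // acquired within the timeout
        }
        nanosTimeout = deadline - System.nanoTime();
        if (nanosTimeout <= 0L)
            return false;                        // timed out
        if (shouldParkAfterFailedAcquire(p, node)
                && nanosTimeout > spinForTimeoutThreshold)
            LockSupport.parkNanos(this, nanosTimeout);   // timed park for the remaining time
        // Below the threshold, just keep spinning instead of parking.
        if (Thread.interrupted())
            throw new InterruptedException();
    }
}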


3. Reentrant Lock (ReentrantLock)#

The synchronized keyword implicitly supports reentrancy.

ReentrantLock does not support implicit reentrancy in the way the synchronized keyword does; instead, a thread that has already acquired the lock can simply call the lock() method again and acquire it without being blocked.

Fair acquisition of the lock means that the thread that has been waiting the longest gets priority in acquiring the lock, which can be said to be sequential.

In fact, fair locking mechanisms often have lower efficiency than unfair ones, but the benefit of fair locks is that they can reduce the probability of "starvation," allowing requests that have been waiting longer to be prioritized.

1. Implementation of Reentrancy#

Two issues:

  1. Reacquiring the lock

    The lock needs to identify whether the thread acquiring the lock is the one currently holding it; if so, it can successfully acquire it again.

  2. Final release

    The lock maintains a count that is incremented on each acquisition; the lock is considered fully released only when the count has been decremented back to 0.

    Question: why does reentrancy matter?
    It prevents a thread from deadlocking (or suffering degraded performance) when it needs to acquire a lock it already holds, for example in recursive calls.

    When a thread that already holds a reentrant lock acquires it again, the synchronizer simply increments the state without performing a CAS. Correspondingly, when releasing the lock, the state must reach 0 before the lock is completely released.
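
A sketch of how this works in ReentrantLock's synchronizer (simplified from the usual pattern; not verbatim source):

final boolean nonfairTryAcquire(int acquires) {
    final Thread current = Thread.currentThread();
    int c = getState();
    if (c == 0) {
        // First acquisition: CAS the state from 0 to acquires.
        if (compareAndSetState(0, acquires)) {
            setExclusiveOwnerThread(current);
            return true;
        }
    } else if (current == getExclusiveOwnerThread()) {
        // Reentrant acquisition: the owner just bumps the count, no CAS needed.
        int nextc = c + acquires;
        if (nextc < 0)
            throw new Error("Maximum lock count exceeded");
        setState(nextc);
        return true;
    }
    return false;
}

protected final boolean tryRelease(int releases) {
    int c = getState() - releases;
    if (Thread.currentThread() != getExclusiveOwnerThread())
        throw new IllegalMonitorStateException();
    boolean free = false;
    if (c == 0) {                    // fully released only when the count reaches 0
        free = true;
        setExclusiveOwnerThread(null);
    }
    setState(c);
    return free;
}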
    

2. Differences Between Fair and Unfair Locks:#

If a lock is fair, then the order of acquiring the lock should conform to the absolute chronological order of requests, which is FIFO.

Fair lock: a thread acquires the lock only if no other thread has been waiting longer in the queue (an extra check against the predecessor node, hasQueuedPredecessors()) and the CAS on the state succeeds.
Unfair lock: a thread acquires the lock whenever the CAS succeeds.

The default implementation of reentrant locks is unfair locks because, although it may lead to starvation, unfair locks have less overhead (fewer thread switches), resulting in higher throughput.
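
For illustration, the only difference in a fair lock's tryAcquire is the queue check before the CAS (a sketch following the usual pattern, not verbatim source):

protected final boolean tryAcquire(int acquires) {
    final Thread current = Thread.currentThread();
    int c = getState();
    if (c == 0) {
        // Fair: give up unless no other thread has been waiting longer.
        if (!hasQueuedPredecessors() && compareAndSetState(0, acquires)) {
            setExclusiveOwnerThread(current);
            return true;
        }
    } else if (current == getExclusiveOwnerThread()) {
        setState(c + acquires);      // reentrant path, same as the unfair lock
        return true;
    }
    return false;
}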

4. Read-Write Lock (ReentrantReadWriteLock)#

The locks mentioned earlier are mostly exclusive locks, allowing only one thread to access at a time.

Read-write locks allow multiple read threads to access at the same time, but when a write thread accesses, all read threads and other write threads are blocked (ensuring the visibility of write operations).

Read-write locks maintain a pair of locks, one for reading and one for writing, significantly improving concurrency compared to general exclusive locks by separating read and write locks.

Implementation Analysis of Read-Write Locks#

1. Design of Read-Write State#

It relies on a custom synchronizer. The custom synchronizer for read-write locks needs to maintain the states of multiple read threads and one write thread on the synchronization state (an int value), where the high 16 bits represent reads and the low 16 bits represent writes.


Bitwise operations
Suppose the current synchronization state indicates that a thread has acquired the write lock, re-entered it twice, and also acquired the read lock twice. How does the read-write lock quickly determine the read and write states? Through bitwise operations. If the current synchronization state value is S, the write state equals S & 0x0000FFFF (clearing the high 16 bits) and the read state equals S >>> 16 (unsigned right shift by 16 bits). Incrementing the write state by 1 gives S + 1; incrementing the read state by 1 gives S + (1 << 16), i.e. S + 0x00010000.
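
This is how the split looks in code, following ReentrantReadWriteLock's internal constants (a sketch):

static final int SHARED_SHIFT   = 16;
static final int SHARED_UNIT    = (1 << SHARED_SHIFT);       // read +1  == state + 0x00010000
static final int EXCLUSIVE_MASK = (1 << SHARED_SHIFT) - 1;    // 0x0000FFFF

static int sharedCount(int s)    { return s >>> SHARED_SHIFT; }   // read count (high 16 bits)
static int exclusiveCount(int s) { return s & EXCLUSIVE_MASK; }   // write count (low 16 bits)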

2. Acquisition and Release of Write Locks#


The write lock is an exclusive lock that supports reentrancy. If the current thread has already acquired the write lock, it increases the write state. If the current thread tries to acquire the write lock while the read lock has already been acquired or if the thread is not the one that has already acquired the write lock, it enters a waiting state.

Why the write lock cannot be acquired while read locks are held:

A read-write lock must ensure that the write lock's operations are visible to the read locks. If a thread were allowed to acquire the write lock while read locks are still held, other running read threads would not be able to perceive the current write thread's operations. Therefore, the current thread can acquire the write lock only after all other read threads have released their read locks; and once the write lock is acquired, all subsequent access by other read and write threads is blocked.
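
A simplified sketch of the write-lock acquisition check (the fairness check and the count-overflow check are omitted; a paraphrase rather than verbatim source):

protected final boolean tryAcquire(int acquires) {
    Thread current = Thread.currentThread();
    int c = getState();
    int w = exclusiveCount(c);                   // write count (low 16 bits)
    if (c != 0) {
        // c != 0 with w == 0 means a read lock is held: the write lock must wait.
        // A write lock held by another thread also blocks us.
        if (w == 0 || current != getExclusiveOwnerThread())
            return false;
        setState(c + acquires);                  // reentrant write acquisition
        return true;
    }
    if (!compareAndSetState(c, c + acquires))    // first acquisition via CAS
        return false;
    setExclusiveOwnerThread(current);
    return true;
}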

3. Acquisition and Release of Read Locks#

When no other write threads are accessing, the read lock will always be successfully acquired. If the write lock has already been acquired by other threads, it enters a waiting state.

Each release of the read lock (thread-safe, as multiple read threads may release the read lock simultaneously) decreases the read state by 1<<16.

Thread safety of the read state is ensured by CAS.

4. Lock Downgrade (Downgrading Write Lock to Read Lock)#

Definition: Holding the write lock, then acquiring the read lock, and subsequently releasing the write lock.

writeLock.lock();
readLock.lock();
writeLock.unlock();

This is not very clear to me...

The premise of lock downgrade is that every thread wants to stay sensitive to data changes, but only one thread can hold the write lock, so that thread downgrades. If the write lock were released first and the read lock acquired afterwards, another thread could acquire the write lock in between and block our read-lock acquisition, so the data change could go unnoticed. Therefore the thread must, while still holding the write lock and knowing the data has not changed, acquire the read lock, and only then release the write lock. Why is acquiring the read lock necessary during lock downgrade?

To ensure data visibility, if the current thread does not acquire the read lock but directly releases the write lock, it is possible that another thread acquires the write lock and modifies the data at that moment, making it impossible for the current thread to perceive the data updates. If the current thread acquires the read lock, the other thread will be blocked until the current thread uses the data and releases the lock, allowing the other thread to acquire the write lock for data updates.
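
A hedged sketch of the downgrade pattern, along the lines of the book's processData example (the field and method names here are illustrative, and this is a fragment rather than a full class):

private final ReentrantReadWriteLock rwl = new ReentrantReadWriteLock();
private final Lock readLock  = rwl.readLock();
private final Lock writeLock = rwl.writeLock();
private volatile boolean update = false;

public void processData() {
    readLock.lock();
    if (!update) {
        // The flag must be changed under the write lock, so give up the read lock first.
        readLock.unlock();
        writeLock.lock();
        try {
            if (!update) {
                // ... prepare the data ...
                update = true;
            }
            readLock.lock();     // downgrade: take the read lock before releasing the write lock
        } finally {
            writeLock.unlock();  // the write lock is released while the read lock is still held
        }
    }
    try {
        // ... use the data under the read lock ...
    } finally {
        readLock.unlock();
    }
}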

5. LockSupport Tool#

LockSupport defines a set of public static methods that provide the most basic thread blocking and waking functionalities, making LockSupport a foundational tool for building synchronization components.


In Java 6, LockSupport added three methods: park(Object blocker), parkNanos(Object blocker, long nanos), and parkUntil(Object blocker, long deadline) to implement the functionality of blocking the current thread, where the parameter blocker is used to identify the object that the current thread is waiting on (hereafter referred to as the blocking object), which is mainly used for problem diagnosis and system monitoring.
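
A tiny, self-contained example of park/unpark with a blocker object (illustrative):

import java.util.concurrent.locks.LockSupport;

public class ParkDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            System.out.println("worker parking");
            LockSupport.park("demo-blocker");    // the blocker shows up in thread dumps for diagnosis
            System.out.println("worker unparked");
        });
        worker.start();
        Thread.sleep(100);
        LockSupport.unpark(worker);              // wake the parked thread
        worker.join();
    }
}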

6. Condition Interface#

The Condition interface also provides similar monitor methods to Object, and when used in conjunction with Lock, it can implement the wait/notify pattern. However, there are still differences in usage and functional characteristics between the two.

1. Condition Interface and Example#


A Condition is created from a Lock via lock.newCondition(), and the lock must be acquired before any of the Condition's methods are called.


In the add and remove methods of such a bounded queue, a while loop is used instead of an if statement to guard against premature or spurious notifications: the loop exits only when the condition actually holds. Recall the classic wait/notify paradigm mentioned earlier; the two are very similar. A sketch of such a bounded queue follows.
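
A sketch of a bounded queue built on Lock and Condition, along the lines of the book's example (names are illustrative):

import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class BoundedQueue<T> {
    private final Object[] items;
    private int addIndex, removeIndex, count;
    private final Lock lock = new ReentrantLock();
    private final Condition notEmpty = lock.newCondition();
    private final Condition notFull  = lock.newCondition();

    public BoundedQueue(int size) { items = new Object[size]; }

    public void add(T t) throws InterruptedException {
        lock.lock();
        try {
            while (count == items.length)   // while, not if: re-check after every wake-up
                notFull.await();
            items[addIndex] = t;
            if (++addIndex == items.length) addIndex = 0;
            ++count;
            notEmpty.signal();
        } finally {
            lock.unlock();
        }
    }

    @SuppressWarnings("unchecked")
    public T remove() throws InterruptedException {
        lock.lock();
        try {
            while (count == 0)
                notEmpty.await();
            Object x = items[removeIndex];
            if (++removeIndex == items.length) removeIndex = 0;
            --count;
            notFull.signal();
            return (T) x;
        } finally {
            lock.unlock();
        }
    }
}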

2. Implementation Analysis of Condition#

ConditionObject is an inner class of the synchronizer AbstractQueuedSynchronizer. Since the operations of Condition require acquiring the associated lock, it is reasonable for it to be an inner class of the synchronizer. Each Condition object contains a queue (referred to as the waiting queue), which is key to implementing the wait/notify functionality of the Condition object.

Waiting Queue

The waiting queue is a FIFO queue, where each node in the queue contains a thread reference, which is the thread waiting on the Condition object. If a thread calls the Condition.await() method, that thread will release the lock, construct a node, add it to the waiting queue, and enter a waiting state (similar to the synchronization queue).


In the Object monitor model, an object has a synchronization queue and a waiting queue, while the Lock (more precisely, the synchronizer) in the concurrency package has a synchronization queue and multiple waiting queues.

The process of updating the node reference does not use CAS to ensure thread safety because the thread calling the await() method must be the one that has acquired the lock, meaning that this process is guaranteed to be thread-safe by the lock.

Waiting

Calling Condition's await() method (or one of its timed variants) causes the current thread to release the lock, construct a node, join the waiting queue, and enter the waiting state. The method returns only after the thread is signalled or interrupted, and the thread must reacquire the lock before await() returns.

Source code omitted
1. LockSupport's park() method is used to put the thread into the waiting state. Whether the node should stop waiting is decided by checking whether it has been moved into the synchronization queue, because signalling the condition moves the node into the synchronization queue before waking it.
After waking, the thread must check whether it was awakened by a signal or by an interruption.
2. From the perspective of the queue, adding a thread to the waiting queue of Condition essentially constructs a new node and adds it to the waiting queue.
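
Since the source is omitted above, here is a simplified sketch of what ConditionObject.await() does (the interruption bookkeeping is trimmed; a paraphrase rather than the exact JDK source):

public final void await() throws InterruptedException {
    if (Thread.interrupted())
        throw new InterruptedException();
    Node node = addConditionWaiter();        // append a new node to this condition's waiting queue
    int savedState = fullyRelease(node);     // release the lock completely (handles reentrant holds)
    while (!isOnSyncQueue(node)) {
        LockSupport.park(this);              // block until signalled (moved to the sync queue) or interrupted
        if (Thread.interrupted())
            break;                           // the real code records whether to re-interrupt or throw
    }
    acquireQueued(node, savedState);         // re-compete for the lock before returning
}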

Notification

Calling the signal() method of Condition will wake up the node that has been waiting the longest in the waiting queue (the head node). Before waking the node, it will be moved to the synchronization queue.

