bilibili - Java Concurrency Study Notes 15: ReentrantReadWriteLock Overview
Based on Java 1.8.0
package java.util.concurrent.locks;

/**
 * A ReadWriteLock maintains a pair of associated locks, one for read-only operations
 * and one for writing. The read lock may be held simultaneously by multiple reader
 * threads as long as there are no writers. The write lock is exclusive.
 *
 * All ReadWriteLock implementations must guarantee that the memory synchronization
 * effects of writeLock operations also hold with respect to the associated readLock:
 * a thread that successfully acquires the read lock will see all updates made under
 * a previous release of the write lock.
 *
 * A read-write lock allows a greater level of concurrency in accessing shared data
 * than a mutual-exclusion lock. While only a single thread at a time (a writer thread)
 * may modify the shared data, in many cases any number of reader threads may read it
 * concurrently, and a read-write lock exploits exactly that. In theory, the extra
 * concurrency allowed by a read-write lock yields better performance than a mutex;
 * in practice this is fully realized only on multiprocessors, and only when the
 * access pattern for the shared data is suitable.
 *
 * Whether a read-write lock improves performance over a mutex depends on how often
 * the data is read compared to being modified, the duration of the read and write
 * operations, and the contention for the data, i.e. the number of threads that try
 * to read or write it at the same time. For example, a collection that is initially
 * populated with data and thereafter rarely modified but frequently searched (such as
 * a directory of some kind) is an ideal candidate for a read-write lock. If updates
 * become frequent, however, the data spends most of its time exclusively locked and
 * any gain in concurrency is negligible. Further, if the read operations are too
 * short, the overhead of the read-write lock implementation (which is inherently more
 * complex than a mutex) dominates the execution cost, especially as many read-write
 * lock implementations still serialize all threads through a small section of code.
 * Ultimately, only profiling and measurement will establish whether a read-write lock
 * is suitable for your application.
 *
 * Although the basic operation of a read-write lock is straightforward, an
 * implementation must make many policy decisions that affect its effectiveness in a
 * given application. Examples of these policies include:
 * <li>When both readers and writers are waiting at the time a writer releases the
 *     write lock, should the read lock or the write lock be granted? Writer preference
 *     is common, because writes are expected to be short and infrequent. Reader
 *     preference is less common, because it can produce long delays for writers when
 *     reads are as frequent and long-lived as expected. Fair, or "in-order",
 *     implementations are also possible.
 * <li>When readers are active and a writer is waiting, should a reader that requests
 *     the read lock be granted it? Preferring the reader can delay the writer
 *     indefinitely, while preferring the writer reduces the potential concurrency.
 * <li>Are the locks reentrant: can a thread holding the write lock reacquire it? Can
 *     it acquire the read lock while holding the write lock? Is the read lock itself
 *     reentrant?
 * <li>Can the write lock be downgraded to a read lock without allowing an intervening
 *     writer? Can a read lock be upgraded to a write lock in preference to other
 *     waiting readers or writers?
 *
 * All of these considerations should be weighed when evaluating whether a given
 * implementation is suitable for your application.
 *
 * @see ReentrantReadWriteLock
 * @see Lock
 * @see ReentrantLock
 *
 * @since 1.5
 * @author Doug Lea
 */
public interface ReadWriteLock {
    /**
     * Returns the lock used for reading (the shared lock).
     */
    Lock readLock();

    /**
     * Returns the lock used for writing (the exclusive lock).
     */
    Lock writeLock();
}
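The contract above, a shared read lock plus an exclusive write lock, is easy to check with a small experiment. The following is a minimal sketch and not part of the JDK source; the class name ReadWriteContractDemo and the use of tryLock() as a non-blocking probe are my own choices for illustration.

    import java.util.concurrent.locks.ReadWriteLock;
    import java.util.concurrent.locks.ReentrantReadWriteLock;

    // Illustrative sketch, not JDK code: probes the read/write lock contract.
    public class ReadWriteContractDemo {
        public static void main(String[] args) throws InterruptedException {
            ReadWriteLock rwl = new ReentrantReadWriteLock();

            rwl.readLock().lock();                 // main thread holds the read lock
            Thread t = new Thread(() -> {
                // Another reader may enter while a read lock is held elsewhere...
                System.out.println("read tryLock:  " + rwl.readLock().tryLock());  // expected: true
                rwl.readLock().unlock();
                // ...but a writer may not, because the write lock is exclusive.
                System.out.println("write tryLock: " + rwl.writeLock().tryLock()); // expected: false
            });
            t.start();
            t.join();
            rwl.readLock().unlock();
        }
    }

The second thread acquires the read lock immediately even though the main thread still holds it, while its attempt on the write lock fails until every reader has released.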
package java.util.concurrent.locks;

import java.util.concurrent.TimeUnit;
import java.util.Collection;

/**
 * An implementation of ReadWriteLock supporting similar semantics to ReentrantLock.
 *
 * This class has the following properties:
 *
 * Acquisition order
 *
 * This class does not impose a reader or writer preference ordering for lock access.
 * However, it does support an optional fairness policy.
 *
 * Non-fair mode (default)
 * When constructed as non-fair (the default), the order of entry to the read and
 * write lock is unspecified, subject to reentrancy constraints. A nonfair lock that
 * is continuously contended may indefinitely postpone one or more reader or writer
 * threads, but will normally have higher throughput than a fair lock.
 *
 * Fair mode
 * When constructed as fair, threads contend for entry using an approximately
 * arrival-order policy. When the currently held lock is released, either the
 * longest-waiting single writer thread will be assigned the write lock, or if there
 * is a group of reader threads waiting longer than all waiting writer threads, that
 * group will be assigned the read lock.
 *
 * A thread that tries to acquire a fair read lock (non-reentrantly) will block if
 * either the write lock is held or there is a waiting writer thread. The thread will
 * not acquire the read lock until after the oldest currently waiting writer thread
 * has acquired and released the write lock. Of course, if a waiting writer abandons
 * its wait, leaving one or more reader threads as the longest waiters in the queue
 * with the write lock free, then those readers will be assigned the read lock.
 *
 * A thread that tries to acquire a fair write lock (non-reentrantly) will block
 * unless both the read lock and the write lock are free (which implies there are no
 * waiting threads). (Note that the non-blocking ReentrantReadWriteLock.ReadLock.tryLock()
 * and ReentrantReadWriteLock.WriteLock.tryLock() methods do not honor this fair
 * setting and will acquire the lock if it is possible, regardless of waiting threads.)
 *
 * Reentrancy
 *
 * This lock allows both readers and writers to reacquire read or write locks in the
 * style of a ReentrantLock. Non-reentrant readers are not allowed in until all write
 * locks held by the writing thread have been released.
 *
 * Additionally, a writer can acquire the read lock, but not vice-versa. Among other
 * applications, reentrancy can be useful when write locks are held during calls or
 * callbacks to methods that perform reads under read locks. If a reader tries to
 * acquire the write lock it will never succeed.
 *
 * Lock downgrading
 * Reentrancy also allows downgrading from the write lock to a read lock, by acquiring
 * the write lock, then the read lock, and then releasing the write lock. However,
 * upgrading from a read lock to the write lock is not possible.
 *
 * Interruption of lock acquisition
 * The read lock and the write lock both support interruption during lock acquisition.
 *
 * Condition support
 * The write lock provides a Condition implementation that behaves, with respect to
 * the write lock, in the same way as the Condition implementation provided by
 * ReentrantLock.newCondition() does for ReentrantLock. This Condition can, of course,
 * only be used with the write lock.
 *
 * The read lock does not support a Condition; readLock().newCondition() throws
 * UnsupportedOperationException.
 *
 * Instrumentation
 * This class supports methods to determine whether locks are held or contended.
 * These methods are designed for monitoring system state, not for synchronization
 * control.
 *
 * Serialization of this class behaves in the same way as built-in locks: a
 * deserialized lock is in the unlocked state, regardless of its state when serialized.
 *
 * Sample usage. The following code shows how to use reentrancy to perform lock
 * downgrading after updating a cache (exception handling omitted for brevity):
 *
 * class CachedData {
 *     Object data;
 *     volatile boolean cacheValid;
 *     final ReentrantReadWriteLock rwl = new ReentrantReadWriteLock();
 *
 *     void processCachedData() {
 *         rwl.readLock().lock();
 *         if (!cacheValid) {
 *             // Must release read lock before acquiring write lock
 *             rwl.readLock().unlock();
 *             rwl.writeLock().lock();
 *             try {
 *                 // Recheck state because another thread might have
 *                 // acquired write lock and changed state before we did.
 *                 if (!cacheValid) {
 *                     data = ...
 *                     cacheValid = true;
 *                 }
 *                 // Downgrade by acquiring read lock before releasing write lock
 *                 rwl.readLock().lock();
 *             } finally {
 *                 rwl.writeLock().unlock(); // Unlock write, still hold read
 *             }
 *         }
 *
 *         try {
 *             use(data);
 *         } finally {
 *             rwl.readLock().unlock();
 *         }
 *     }
 * }
 *
 * ReentrantReadWriteLock can be used to improve concurrency in some uses of some
 * kinds of Collections. This is typically worthwhile only when the collections are
 * expected to be large, are accessed by more reader threads than writer threads, and
 * entail operations with overhead that outweighs the synchronization overhead. For
 * example, here is a class using a TreeMap that is expected to be large and
 * concurrently accessed:
 *
 * class RWDictionary {
 *     private final Map<String, Data> m = new TreeMap<String, Data>();
 *     private final ReentrantReadWriteLock rwl = new ReentrantReadWriteLock();
 *     private final Lock r = rwl.readLock();
 *     private final Lock w = rwl.writeLock();
 *
 *     public Data get(String key) {
 *         r.lock();
 *         try { return m.get(key); }
 *         finally { r.unlock(); }
 *     }
 *     public String[] allKeys() {
 *         r.lock();
 *         try { return m.keySet().toArray(new String[0]); }
 *         finally { r.unlock(); }
 *     }
 *     public Data put(String key, Data value) {
 *         w.lock();
 *         try { return m.put(key, value); }
 *         finally { w.unlock(); }
 *     }
 *     public void clear() {
 *         w.lock();
 *         try { m.clear(); }
 *         finally { w.unlock(); }
 *     }
 * }
 *
 * Implementation notes:
 *
 * This lock supports a maximum of 65535 recursive write locks and 65535 read locks.
 * Attempts to exceed these limits result in Error throws from locking methods.
 *
 * @since 1.5
 * @author Doug Lea
 */
public class ReentrantReadWriteLock
        implements ReadWriteLock, java.io.Serializable {
    private static final long serialVersionUID = -6992448646407690164L;
    /** Inner class providing readlock */
    private final ReentrantReadWriteLock.ReadLock readerLock;
    /** Inner class providing writelock */
    private final ReentrantReadWriteLock.WriteLock writerLock;
    /** Performs all synchronization mechanics */
    final Sync sync;

    /**
     * Creates a new ReentrantReadWriteLock with default (nonfair) ordering properties.
     */
    public ReentrantReadWriteLock() {
        this(false);
    }

    /**
     * Creates a new ReentrantReadWriteLock with the given fairness policy.
     *
     * @param fair true if this lock should use a fair ordering policy
     */
    public ReentrantReadWriteLock(boolean fair) {
        sync = fair ? new FairSync() : new NonfairSync();
        readerLock = new ReadLock(this);
        writerLock = new WriteLock(this);
    }

    public ReentrantReadWriteLock.WriteLock writeLock() { return writerLock; }
    public ReentrantReadWriteLock.ReadLock  readLock()  { return readerLock; }
    /**
     * Synchronization implementation for ReentrantReadWriteLock.
     * Subclassed into fair and nonfair versions.
     */
    abstract static class Sync extends AbstractQueuedSynchronizer {
        private static final long serialVersionUID = 6317671515068378041L;

        /*
         * Read vs write count extraction constants and functions.
         * Lock state is logically divided into two unsigned shorts:
         * The lower one representing the exclusive (writer) lock hold count,
         * and the upper the shared (reader) hold count.
         */

        static final int SHARED_SHIFT   = 16;
        static final int SHARED_UNIT    = (1 << SHARED_SHIFT);
        static final int MAX_COUNT      = (1 << SHARED_SHIFT) - 1;
        static final int EXCLUSIVE_MASK = (1 << SHARED_SHIFT) - 1;

        /** Returns the number of shared holds represented in count */
        static int sharedCount(int c)    { return c >>> SHARED_SHIFT; }
        /** Returns the number of exclusive holds represented in count */
        static int exclusiveCount(int c) { return c & EXCLUSIVE_MASK; }

        /**
         * A counter for per-thread read hold counts.
         * Maintained as a ThreadLocal; cached in cachedHoldCounter
         */
        static final class HoldCounter {
            int count = 0;
            // Use id, not reference, to avoid garbage retention
            final long tid = getThreadId(Thread.currentThread());
        }

        /**
         * ThreadLocal subclass. Easiest to explicitly define for sake
         * of deserialization mechanics.
         */
        static final class ThreadLocalHoldCounter
            extends ThreadLocal<HoldCounter> {
            public HoldCounter initialValue() {
                return new HoldCounter();
            }
        }

        /**
         * The number of reentrant read locks held by current thread.
         * Initialized only in constructor and readObject.
         * Removed whenever a thread's read hold count drops to 0.
         */
        private transient ThreadLocalHoldCounter readHolds;

        /**
         * The hold count of the last thread to successfully acquire
         * readLock. This saves ThreadLocal lookup in the common case
         * where the next thread to release is the last one to
         * acquire. This is non-volatile since it is just used
         * as a heuristic, and would be great for threads to cache.
         *
         * <p>Can outlive the Thread for which it is caching the read
         * hold count, but avoids garbage retention by not retaining a
         * reference to the Thread.
         *
         * <p>Accessed via a benign data race; relies on the memory
         * model's final field and out-of-thin-air guarantees.
         */
        private transient HoldCounter cachedHoldCounter;

        /**
         * firstReader is the first thread to have acquired the read lock.
         * firstReaderHoldCount is firstReader's hold count.
         *
         * <p>More precisely, firstReader is the unique thread that last
         * changed the shared count from 0 to 1, and has not released the
         * read lock since then; null if there is no such thread.
         *
         * <p>Cannot cause garbage retention unless the thread terminated
         * without relinquishing its read locks, since tryReleaseShared
         * sets it to null.
         *
         * <p>Accessed via a benign data race; relies on the memory
         * model's out-of-thin-air guarantees for references.
         *
         * <p>This allows tracking of read holds for uncontended read
         * locks to be very cheap.
         */
        private transient Thread firstReader = null;
        private transient int firstReaderHoldCount;

        Sync() {
            readHolds = new ThreadLocalHoldCounter();
            setState(getState()); // ensures visibility of readHolds
        }

        /*
         * Acquires and releases use the same code for fair and
         * nonfair locks, but differ in whether/how they allow barging
         * when queues are non-empty.
         */

        /**
         * Returns true if the current thread, when trying to acquire
         * the read lock, and otherwise eligible to do so, should block
         * because of policy for overtaking other waiting threads.
         */
        abstract boolean readerShouldBlock();

        /**
         * Returns true if the current thread, when trying to acquire
         * the write lock, and otherwise eligible to do so, should block
         * because of policy for overtaking other waiting threads.
         */
        abstract boolean writerShouldBlock();

        // ...
    }

    /**
     * Nonfair version of Sync
     */
    static final class NonfairSync extends Sync {
        private static final long serialVersionUID = -8159625535654395037L;
        final boolean writerShouldBlock() {
            return false; // writers can always barge
        }
        final boolean readerShouldBlock() {
            /* As a heuristic to avoid indefinite writer starvation,
             * block if the thread that momentarily appears to be head
             * of queue, if one exists, is a waiting writer. This is
             * only a probabilistic effect since a new reader will not
             * block if there is a waiting writer behind other enabled
             * readers that have not yet drained from the queue.
             */
            return apparentlyFirstQueuedIsExclusive();
        }
    }

    /**
     * Fair version of Sync
     */
    static final class FairSync extends Sync {
        private static final long serialVersionUID = -2274990926593161451L;
        final boolean writerShouldBlock() {
            return hasQueuedPredecessors();
        }
        final boolean readerShouldBlock() {
            return hasQueuedPredecessors();
        }
    }
    /**
     * The lock returned by method {@link ReentrantReadWriteLock#readLock}.
     */
    public static class ReadLock implements Lock, java.io.Serializable {
        private static final long serialVersionUID = -5992448646407690164L;
        private final Sync sync;

        /**
         * Constructor for use by subclasses
         *
         * @param lock the outer lock object
         * @throws NullPointerException if the lock is null
         */
        protected ReadLock(ReentrantReadWriteLock lock) {
            sync = lock.sync;
        }

        // ...
    }

    /**
     * The lock returned by method {@link ReentrantReadWriteLock#writeLock}.
     */
    public static class WriteLock implements Lock, java.io.Serializable {
        private static final long serialVersionUID = -4992448646407690164L;
        private final Sync sync;

        /**
         * Constructor for use by subclasses
         *
         * @param lock the outer lock object
         * @throws NullPointerException if the lock is null
         */
        protected WriteLock(ReentrantReadWriteLock lock) {
            sync = lock.sync;
        }

        // ...
    }

    // ...
}
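The Sync class above packs both hold counts into the single AQS state int: the low 16 bits count exclusive (write) holds and the high 16 bits count shared (read) holds, which is also where the 65535 limit in the Javadoc comes from. The sketch below simply replays that bit arithmetic outside the class; the class name StateEncodingDemo and the sample state value are my own, not JDK API.

    // Illustrative sketch, not JDK code: replays Sync's read/write count encoding.
    public class StateEncodingDemo {
        static final int SHARED_SHIFT   = 16;
        static final int SHARED_UNIT    = (1 << SHARED_SHIFT);        // one reader = +65536
        static final int MAX_COUNT      = (1 << SHARED_SHIFT) - 1;    // 65535
        static final int EXCLUSIVE_MASK = (1 << SHARED_SHIFT) - 1;    // low 16 bits

        static int sharedCount(int c)    { return c >>> SHARED_SHIFT; }
        static int exclusiveCount(int c) { return c & EXCLUSIVE_MASK; }

        public static void main(String[] args) {
            // Pretend state: 3 read holds and 2 reentrant write holds
            // (both can be nonzero when a writer also takes read locks while downgrading).
            int state = 3 * SHARED_UNIT + 2;
            System.out.println("raw state       = " + state);                  // 196610
            System.out.println("read holds      = " + sharedCount(state));     // 3
            System.out.println("write holds     = " + exclusiveCount(state));  // 2
            System.out.println("max per counter = " + MAX_COUNT);              // 65535
        }
    }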
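Two behaviors from the Javadoc are easy to verify in a single thread: downgrading (write then read) succeeds, upgrading (read then write) never does, and the read lock refuses to create a Condition. This is a minimal sketch under those assumptions; the class name DowngradeUpgradeDemo is hypothetical, and tryLock() is used so the failed upgrade reports false instead of blocking forever.

    import java.util.concurrent.locks.ReentrantReadWriteLock;

    // Illustrative sketch, not JDK code: downgrade vs. upgrade in one thread.
    public class DowngradeUpgradeDemo {
        public static void main(String[] args) {
            ReentrantReadWriteLock rwl = new ReentrantReadWriteLock();

            // Downgrade: acquire write, then read, then release write -- allowed.
            rwl.writeLock().lock();
            System.out.println("read while writing:  " + rwl.readLock().tryLock());  // true
            rwl.writeLock().unlock();   // still holding the read lock

            // Upgrade: taking the write lock while holding the read lock can never
            // succeed, even for the same thread; tryLock() just reports the failure.
            System.out.println("write while reading: " + rwl.writeLock().tryLock()); // false
            rwl.readLock().unlock();

            // The read lock does not support Condition.
            try {
                rwl.readLock().newCondition();
            } catch (UnsupportedOperationException e) {
                System.out.println("readLock().newCondition() is unsupported");
            }
        }
    }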
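Finally, the fairness policy and the instrumentation methods mentioned in the Javadoc can be exercised directly. The methods called below (isFair, getReadLockCount, getWriteHoldCount, isWriteLockedByCurrentThread, getQueueLength) are public API of ReentrantReadWriteLock; the demo class itself is just an assumed name for illustration.

    import java.util.concurrent.locks.ReentrantReadWriteLock;

    // Illustrative sketch, not JDK code: fairness flag plus monitoring methods.
    public class MonitoringDemo {
        public static void main(String[] args) {
            ReentrantReadWriteLock fair = new ReentrantReadWriteLock(true);   // fair mode
            ReentrantReadWriteLock rwl  = new ReentrantReadWriteLock();       // nonfair (default)
            System.out.println("fair.isFair() = " + fair.isFair() + ", rwl.isFair() = " + rwl.isFair());

            rwl.writeLock().lock();
            rwl.readLock().lock();   // the writer may also take the read lock (downgrade path)
            System.out.println("read lock count    = " + rwl.getReadLockCount());             // 1
            System.out.println("write hold count   = " + rwl.getWriteHoldCount());            // 1
            System.out.println("write locked by me = " + rwl.isWriteLockedByCurrentThread()); // true
            System.out.println("queued threads     = " + rwl.getQueueLength());               // 0
            rwl.readLock().unlock();
            rwl.writeLock().unlock();
        }
    }

These methods are intended for monitoring and diagnostics only, not for synchronization control, as the Javadoc's instrumentation note points out.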