Thread pools

The core interface for thread pools is ExecutorService. java.util.concurrent also provides a static factory class, Executors, which contains factory methods that create thread pools with the most common configurations.

newSingleThreadExecutor: Returns an ExecutorService with exactly one thread.
newFixedThreadPool: Returns an ExecutorService with a fixed number of threads.
newCachedThreadPool: Returns an ExecutorService with a thread pool of varying size.
newSingleThreadScheduledExecutor: Returns a ScheduledExecutorService with a single thread.
newScheduledThreadPool: Returns a ScheduledExecutorService with a core set of threads.
newWorkStealingPool: Returns a work-stealing ExecutorService.

Table 6: Static factory methods

When sizing thread pools, it is often useful to base the size on the number of logical cores in the machine running the application. In Java, you can get that value by calling Runtime.getRuntime().availableProcessors().
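For example, a fixed pool sized to the number of logical cores (a minimal sketch; the class name is illustrative, and CPU-bound workloads typically start from this size while I/O-bound ones often use more threads):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class PoolSizing {
    public static void main(String[] args) throws InterruptedException {
        // One thread per logical core is a common starting point.
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);
        for (int i = 0; i < cores; i++) {
            int id = i;
            pool.submit(() ->
                System.out.println("Task " + id + " on " + Thread.currentThread().getName()));
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```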

ThreadPoolExecutor: The default implementation, with an optionally resizable pool of threads, a single work queue, and configurable policies for rejected tasks (via RejectedExecutionHandler) and thread creation (via ThreadFactory).
ScheduledThreadPoolExecutor: An extension of ThreadPoolExecutor that provides the ability to schedule periodic tasks.
ForkJoinPool: A work-stealing pool: all threads in the pool attempt to find and run either submitted tasks or tasks created by other active tasks.

Table 7: Thread pool implementations

Tasks are submitted with ExecutorService#submit, ExecutorService#invokeAll, or ExecutorService#invokeAny, which have multiple overloads for different types of tasks.

Runnable: Represents a task without a return value.
Callable: Represents a computation with a return value. It also declares throws Exception, so no wrapping of checked exceptions is necessary.

Table 8: Tasks’ functional interfaces
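For example, submitting a Runnable and a Callable to the same pool (a minimal sketch; the class name is illustrative):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class Tasks {
    public static void main(String[] args) throws Exception {
        ExecutorService executor = Executors.newFixedThreadPool(2);
        // Runnable: no return value; submit() still returns a Future<?> whose get() yields null.
        Future<?> fromRunnable = executor.submit(() -> System.out.println("side effect"));
        // Callable: returns a value and may throw any Exception without wrapping it.
        Future<Integer> fromCallable = executor.submit(() -> 21 + 21);
        System.out.println(fromRunnable.get()); // null
        System.out.println(fromCallable.get()); // 42
        executor.shutdown();
    }
}
```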

Future

Future is an abstraction for asynchronous computation. It represents the result of the computation, which might be available at some point: either a computed value or an exception. Most ExecutorService methods use Future as a return type. It exposes methods to examine the current state of the future or to block until the result is available.

ExecutorService executorService = Executors.newSingleThreadExecutor();
Future<String> future = executorService.submit(() -> "result");
try {
    String result = future.get(1L, TimeUnit.SECONDS);
    System.out.println("Result is '" + result + "'.");
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
    throw new RuntimeException(e);
} catch (ExecutionException e) {
    throw new RuntimeException(e.getCause());
} catch (TimeoutException e) {
    throw new RuntimeException(e);
}
assert future.isDone();

Locks

Lock

The java.util.concurrent.locks package has a standard Lock interface. The ReentrantLock implementation duplicates the functionality of the synchronized keyword but also provides additional features, such as obtaining information about the state of the lock, a non-blocking tryLock(), and interruptible locking. Example of using an explicit ReentrantLock instance:

class Counter {
    private final Lock lock = new ReentrantLock();
    private int value;

    int increment() {
        lock.lock();
        try {
            return ++value;
        } finally {
            lock.unlock();
        }
    }
}

ReadWriteLock

The java.util.concurrent.locks package also contains a ReadWriteLock interface (and a ReentrantReadWriteLock implementation), which is defined by a pair of locks for reading and writing, typically allowing multiple concurrent readers but only one writer.

class Statistic {
    private final ReadWriteLock lock = new ReentrantReadWriteLock();
    private int value;

    void increment() {
        lock.writeLock().lock();
        try {
            value++;
        } finally {
            lock.writeLock().unlock();
        }
    }

    int current() {
        lock.readLock().lock();
        try {
            return value;
        } finally {
            lock.readLock().unlock();
        }
    }
}

CountDownLatch

A CountDownLatch is initialized with a count. Threads may call await() to wait for the count to reach 0, while other threads (or the same thread) call countDown() to reduce the count. The latch is not reusable once the count has reached 0. It is typically used to release a set of waiting threads once some number of actions has occurred.
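A minimal sketch of the pattern (the class name and worker count are illustrative): the main thread blocks on await() until every worker has called countDown().

```java
import java.util.concurrent.CountDownLatch;

public class LatchDemo {
    public static void main(String[] args) throws InterruptedException {
        int workers = 3;
        CountDownLatch done = new CountDownLatch(workers);
        for (int i = 0; i < workers; i++) {
            int id = i;
            new Thread(() -> {
                System.out.println("Worker " + id + " finished");
                done.countDown(); // decrement the count
            }).start();
        }
        done.await(); // blocks until the count reaches 0
        System.out.println("All workers finished");
    }
}
```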

CompletableFuture

CompletableFuture is an abstraction for asynchronous computation. Unlike a plain Future, where the only way to get the result is to block, the encouraged style is to register callbacks, creating a pipeline of tasks to be executed when the result or an exception becomes available. The executor on which the computation should happen can be specified either during creation (via CompletableFuture#supplyAsync/runAsync) or when adding callbacks (the *Async family of methods); if none is specified, the global ForkJoinPool#commonPool is used.

Take into consideration that if the CompletableFuture is already completed, the callbacks registered via non-*Async methods are executed in the caller's thread.

If there are several futures, you can use CompletableFuture#allOf to get a future that completes when all of them complete, or CompletableFuture#anyOf, which completes as soon as any of them completes.

ExecutorService executor0 = Executors.newWorkStealingPool();
ExecutorService executor1 = Executors.newWorkStealingPool();

// Completed when both of the futures are completed
CompletableFuture<String> waitingForAll = CompletableFuture
    .allOf(
        CompletableFuture.supplyAsync(() -> "first"),
        CompletableFuture.supplyAsync(() -> "second", executor1)
    )
    .thenApply(ignored -> " is completed.");

CompletableFuture<Void> future = CompletableFuture.supplyAsync(() -> "Concurrency Refcard", executor0)
    // Using the same executor
    .thenApply(result -> "Java " + result)
    // Using a different executor
    .thenApplyAsync(result -> "Dzone " + result, executor1)
    // Completed when this and the other future are completed
    .thenCombine(waitingForAll, (first, second) -> first + second)
    // Implicitly using ForkJoinPool#commonPool as the executor
    .thenAcceptAsync(result -> {
        System.out.println("Result is '" + result + "'.");
    })
    // Generic handler
    .whenComplete((ignored, exception) -> {
        if (exception != null) exception.printStackTrace();
    });

// First blocking call - blocks until the pipeline is finished.
future.join();

future
    // Executes in the current thread (which is main).
    .thenRun(() -> System.out.println("Current thread is '" + Thread.currentThread().getName() + "'."))
    // Implicitly using ForkJoinPool#commonPool as the executor
    .thenRunAsync(() -> System.out.println("Current thread is '" + Thread.currentThread().getName() + "'."));

Concurrent collections

The easiest way to make a collection thread-safe is to use the Collections#synchronized* family of methods. Because this solution performs poorly under high contention, java.util.concurrent provides a variety of data structures that are optimized for concurrent use.
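For example, a synchronized wrapper (a minimal sketch; the class name is illustrative). Every call goes through a single lock, which is what makes these wrappers a bottleneck under contention, and iteration must still be guarded manually:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class SyncWrapper {
    public static void main(String[] args) {
        List<String> list = Collections.synchronizedList(new ArrayList<>());
        list.add("a"); // individual calls are synchronized on the wrapper
        list.add("b");
        // Iteration is a compound action, so it must hold the wrapper's lock explicitly:
        synchronized (list) {
            for (String s : list) {
                System.out.println(s);
            }
        }
    }
}
```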

Lists

CopyOnWriteArrayList: Provides copy-on-write semantics, where each modification of the data structure results in a new internal copy of the data (writes are thus very expensive, whereas reads are cheap). Iterators on the data structure always see a snapshot of the data from when the iterator was created.

Table 9: Lists in java.util.concurrent
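The snapshot behavior of its iterators can be demonstrated directly (a minimal sketch; the class name is illustrative):

```java
import java.util.Iterator;
import java.util.concurrent.CopyOnWriteArrayList;

public class CowDemo {
    public static void main(String[] args) {
        CopyOnWriteArrayList<String> list = new CopyOnWriteArrayList<>();
        list.add("a");
        Iterator<String> snapshot = list.iterator();
        list.add("b"); // copies the backing array; the iterator keeps the old one
        while (snapshot.hasNext()) {
            System.out.println(snapshot.next()); // prints only "a"
        }
        System.out.println(list.size()); // 2
    }
}
```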

Maps

ConcurrentHashMap: Usually acts as a bucketed hash table. Read operations generally do not block and reflect the results of the most recently completed write. The write of the first node in an empty bin is performed by just CASing (compare-and-set) it into the bin, whereas other writes require a lock (the first node of a bucket is used as the lock).
ConcurrentSkipListMap: Provides concurrent access along with sorted-map functionality similar to TreeMap. Performance bounds are similar to TreeMap, although multiple threads can generally read and write the map without contention as long as they are not modifying the same portion of it.

Table 10: Maps in java.util.concurrent
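Beyond plain get/put, ConcurrentHashMap offers atomic read-modify-write operations such as merge() and compute(), which avoid the check-then-act races that plague externally synchronized maps (a minimal sketch; the class name is illustrative):

```java
import java.util.concurrent.ConcurrentHashMap;

public class ChmDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Integer> counts = new ConcurrentHashMap<>();
        // merge() performs the read-modify-write atomically, so no external lock is needed,
        // even when many threads increment the same key concurrently.
        counts.merge("requests", 1, Integer::sum);
        counts.merge("requests", 1, Integer::sum);
        System.out.println(counts.get("requests")); // 2
    }
}
```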

Sets

CopyOnWriteArraySet: Similar to CopyOnWriteArrayList; it uses copy-on-write semantics to implement the Set interface.
ConcurrentSkipListSet: Similar to ConcurrentSkipListMap, but implements the Set interface.

Table 11: Sets in java.util.concurrent

Another approach to create a concurrent set is to wrap a concurrent map:

Set<T> concurrentSet = Collections.newSetFromMap(new ConcurrentHashMap<T, Boolean>());

Queues

Queues act as pipes between “producers” and “consumers.” Items are put in one end of the pipe and emerge from the other end of the pipe in the same “first-in first-out” (FIFO) order. The BlockingQueue interface extends Queue to provide additional choices of how to handle the scenario where a queue may be full (when a producer adds an item) or empty (when a consumer reads or removes an item). In these cases, BlockingQueue provides methods that either block forever or block for a specified time period, waiting for the condition to change due to the actions of another thread.

ConcurrentLinkedQueue: An unbounded non-blocking queue backed by a linked list.
LinkedBlockingQueue: An optionally bounded blocking queue backed by a linked list.
PriorityBlockingQueue: An unbounded blocking queue backed by a min heap. Items are removed from the queue in an order based on the Comparator associated with the queue (instead of FIFO order).
DelayQueue: An unbounded blocking queue of elements, each with a delay value. Elements can only be removed when their delay has passed and are removed in the order of the oldest expired item.
SynchronousQueue: A 0-length queue where the producer and consumer block until the other arrives. When both threads arrive, the value is transferred directly from producer to consumer. Useful when transferring data between threads.
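A minimal producer-consumer sketch using a bounded LinkedBlockingQueue (the class name and capacity are illustrative): put() blocks the producer when the queue is full, and take() blocks the consumer when it is empty.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class ProducerConsumer {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Integer> queue = new LinkedBlockingQueue<>(2); // bounded capacity
        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 5; i++) {
                    queue.put(i); // blocks while the queue is full
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();
        for (int i = 0; i < 5; i++) {
            System.out.println("Consumed " + queue.take()); // blocks while the queue is empty
        }
        producer.join();
    }
}
```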