Concurrency is vital. Without it, our computers and smartphones wouldn’t provide the seamless user experience we’ve come to rely on. Today’s computers and operating systems can start, execute and finish multiple tasks within the same time period. This allows us to interact with the UI while the app performs background tasks like networking, file I/O, database queries or other long-running operations. While users undoubtedly benefit from concurrent programming, actually implementing the concept can be a challenge. Back in the Computer Stone Age (about forty years ago), personal computers weren’t capable of running multiple tasks at once. Then came the evolution of CPU architectures; computers started supporting the execution of multiple processes or threads. This was great, but the operating systems and applications still lagged.

But like anything that seems too good to be true, concurrency comes with several hazards. As a developer, you’ll want to watch out for a series of potential issues (think “can of worms”: once you open it, be prepared for trouble).



Multithreading

Of course, programming languages didn’t make it easy to implement multithreading. Most languages simply provided access to the native, low-level threading APIs and constructs of the underlying operating system—and each operating system used a different threading API. Standardized solutions soon appeared, including POSIX Threads (PThreads). PThreads is a C library that exposes a set of functions to be implemented by OS vendors. This approach lets you use the same interface across multiple platforms. PThreads is supported by UNIX platforms including iOS, macOS and Linux.
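As an illustration (not from the original text), Foundation’s `Thread` class gives you an object-oriented wrapper around the platform’s native threads—pthreads on Apple platforms and Linux—so you can try direct threading without touching the C API:

```swift
import Foundation

// Thread wraps the platform's native threads (pthreads on
// Apple platforms and Linux).
let worker = Thread {
    print("running on a background thread")
}
worker.start()

// Give the detached thread a moment to finish before the process
// exits. This crude sleep-based synchronization is for demo purposes
// only; the GCD patterns later in the chapter do this properly.
Thread.sleep(forTimeInterval: 0.1)
```

Even in this tiny example you can see the pain points of manual threading: you create the thread yourself, and you’re responsible for knowing when it’s done.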

Thread pools are another concept meant to simplify and abstract threading. At the core of this pattern stands the idea of having a number of pre-created, idle threads which are ready to be utilized. Whenever there’s a new task to be executed, an idle thread wakes up, performs the task and then goes back to being idle.



So, why would you want to create a bunch of threads to keep around? In one word: performance. Instead of creating a new thread whenever a task is to be executed (and then destroying it when the task finishes), available threads are taken from the thread pool. Thread creation and destruction is an expensive process, so the thread pool pattern offers considerable performance gains. Letting the library or operating system manage the threads means that you have less to worry about (read: fewer lines of code to write). Besides, the library can optimize the thread management behind the scenes.
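To make the pattern concrete, here’s a hedged sketch using Foundation’s `OperationQueue`, which is backed by a system-managed thread pool—we submit tasks, and the pool decides which reusable worker thread runs each one:

```swift
import Foundation

// OperationQueue is backed by a pool of worker threads managed by
// the system; we never create or destroy threads ourselves.
let pool = OperationQueue()
pool.maxConcurrentOperationCount = 2   // at most two workers busy at once

for i in 1...6 {
    pool.addOperation {
        // Each task is picked up by whichever pooled thread is idle.
        print("task \(i) running")
    }
}

// Block until every queued task has finished.
pool.waitUntilAllOperationsAreFinished()
```

Notice what you *don’t* write here: no thread creation, no destruction, no bookkeeping about which thread is free—exactly the savings the thread-pool pattern promises.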



Concurrency and parallelism

Concurrent tasks can be executed via parallelism. Parallelism is often confused with concurrency, and while these concepts are related, it’s important to know that they’re different things. Parallelism can only be achieved on multi-core devices: while one core executes one task, another core can run a second task simultaneously.

Parallelism, however, isn’t required for concurrency. With single core devices, concurrency is achieved via context switching: The core runs one task for some time, then switches to the other task or process, runs it, then switches back to the previous task, and so on, until the task is complete.
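Whether the system uses true parallelism or context switching, the program’s logic is the same. As a sketch (the tally-with-a-lock bookkeeping is my addition, not part of the original text), GCD’s `concurrentPerform` spreads iterations across the available cores and returns once all of them have run:

```swift
import Dispatch
import Foundation

let lock = NSLock()
var completed = 0

// concurrentPerform runs the closure 8 times, spreading the
// iterations across the available cores. On a single-core machine
// the same code still works, via context switching.
DispatchQueue.concurrentPerform(iterations: 8) { i in
    // The shared counter is protected by a lock, because several
    // iterations may genuinely run at the same time.
    lock.lock()
    completed += 1
    lock.unlock()
}

print("completed \(completed) iterations")  // completed 8 iterations
```

The call blocks until every iteration is done, so `completed` is always 8 afterwards—the only thing that varies between single-core and multi-core hardware is how long it takes.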

Concurrency via context switching doesn’t ruin the illusion of simultaneous execution, because the switching happens quickly. With true parallelism, the execution of concurrent tasks is snappier. Furthermore, a context switch requires storing and restoring the execution state when switching between threads, which means additional overhead.

Grand Central Dispatch (GCD)

Grand Central Dispatch (GCD) is Apple’s concurrency framework, and it relies on the thread-pool pattern. GCD was first made available in 2009 with Mac OS X 10.6 Snow Leopard and iOS 4. At the core of GCD is the idea of work items, which can be dispatched to a queue; the queue hides all thread-management-related tasks. You can configure the queue, but you won’t interact directly with a thread. This model simplifies the creation and execution of asynchronous or synchronous tasks. GCD abstracts the notion of threads and exposes dispatch queues to handle work items (work items are blocks of code that you want to execute). These tasks are assigned to a dispatch queue, which processes them in First-In-First-Out (FIFO) order.



Serial and concurrent GCD queues

There are two types of queues in GCD: serial and concurrent. First, let’s talk about serial queues. If you submit work items to a serial queue, they’ll be executed one after the other, in the order they were added. Since it’s a serial queue, no concurrency of any kind is involved. A serial dispatch queue always executes one work item at a time.
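We can demonstrate the FIFO guarantee with a short sketch (the queue label is a placeholder of my choosing). Because the queue is serial, the work items never overlap, so they can safely append to a shared array without any locking:

```swift
import Dispatch

let serialQueue = DispatchQueue(label: "com.example.serial")
var order: [Int] = []

// Work items on a serial queue run one at a time, in FIFO order,
// so appending from inside the queue needs no extra locking.
for i in 1...5 {
    serialQueue.async {
        order.append(i)
    }
}

// An empty sync block acts as a fence: it runs only after every
// previously submitted item has finished.
serialQueue.sync { }

print(order)  // [1, 2, 3, 4, 5]
```

No matter how many times you run this, the output order is the same—that determinism is exactly what makes serial queues a handy tool for protecting shared state.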



Now, let’s take a quick look at concurrent dispatch queues. Work items submitted to a concurrent dispatch queue will start in the order in which they were added to the queue. The number of tasks that run simultaneously, and the time it takes to start the next task, is controlled by the queue. It’s important to note that we can’t influence this behavior.

Also, it’s entirely hidden whether concurrency is achieved via parallelism, or via context switching. GCD hides these details from us, and the result is a very simple API, which was further refined in Swift 3.0. The following snippet illustrates the simplicity of concurrently executing work items using a GCD concurrent dispatch queue:



// create the concurrent queue
let asyncQueue = DispatchQueue(label: "asyncQueue", attributes: .concurrent)

// perform the task asynchronously
asyncQueue.async {
    // perform some long-running task here
}

Isn’t it elegant? Creating the concurrent queue is a breeze. First, we pass in a unique identifier, then we specify the concurrent nature of this queue by setting the attributes argument to .concurrent. (Note: If you leave out the attributes argument, the queue will be serial by default).
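The snippet above fires and forgets; in practice you usually need to know when the asynchronous work has finished. One common pattern—a sketch, with the same queue label reused and a lock-protected counter added by me for illustration—is to tag work items with a `DispatchGroup` and wait on the group:

```swift
import Dispatch
import Foundation

let asyncQueue = DispatchQueue(label: "asyncQueue", attributes: .concurrent)
let group = DispatchGroup()
let lock = NSLock()
var completedTasks = 0

for i in 1...3 {
    // Associate each work item with the group so we can wait on it.
    asyncQueue.async(group: group) {
        // The items may run simultaneously on a concurrent queue,
        // so the shared counter is guarded by a lock.
        lock.lock()
        completedTasks += 1
        lock.unlock()
    }
}

// Block the current thread until every grouped item has finished.
// (group.notify(queue:) is the non-blocking alternative.)
group.wait()
print("all \(completedTasks) tasks done")  // all 3 tasks done
```

`group.wait()` blocks the calling thread, so avoid it on the main thread of a UI app; `group.notify(queue:)` delivers a completion callback instead.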



GCD in Swift 3