Development Blog on software and related things

Practical GCD, Part I: Introduction to queues

As mentioned in the previous part of the series, libdispatch, aka Grand Central Dispatch, is at the core of concurrency on all Darwin platforms, as well as easily being the most promising concurrency technology for Swift on Linux.

libdispatch is a low-level C library that enables concurrent execution on multi-processor hardware and provides abstractions for writing concurrent code based on tasks and queues.

Despite being low level, libdispatch is surprisingly easy to use, and because of that it is ubiquitous in all kinds of iOS / OSX applications. It makes it trivial to do things like spinning off a task on a background thread and then getting back to the main thread with the results, while automatically taking care of underlying complexities such as scheduling and priorities, CPU throughput, concurrency vs parallelism, etc. – all with appropriate settings for each individual platform.

The Basics

In a nutshell, Grand Central Dispatch introduces the concept of thread-safe FIFO queues and then enables submitting tasks to them.

GCD queues can be either serial or concurrent, where serial queues are guaranteed to execute one task at a time, in the order of the tasks’ submission.

Concurrent queues can execute multiple tasks at the same time. While GCD will reliably start the tasks in the order of submission, the rest of the execution details such as number of concurrently running jobs and their priorities are automatically decided by the system.

In Swift terms, a task is typically a closure and it can be dispatched to a GCD queue either synchronously or asynchronously.

With that basic knowledge, let’s already do something:

// a global concurrent queue with default priority
let q = dispatch_get_global_queue(0, 0)
dispatch_async(q) {
    print("hi from a separate thread")

    // back to the serial main queue
    dispatch_async(dispatch_get_main_queue()) {
        print("all done, and now back to the main thread")
    }
}

The code above prints a message from a background thread, then asynchronously gets back to the main application thread to report the results.

Despite the simplicity of the example, a number of things happen under the hood. After the code is submitted to one of the global concurrent queues, GCD takes care of prioritizing and scheduling its execution on one of the available CPUs, eventually spinning off a new thread or taking one from its thread pool and actually running the code.

Let’s get some more control

While GCD takes care of the numerous platform-dependent settings around task scheduling and priorities, it also provides a clear and concise way for developers to express their intents.

Since iOS 8, Quality of Service gives developers several predefined priority levels that can be assigned to units of work such as queues, threads, and tasks alike.

QOS breaks things down into four distinct categories:

  • User Interactive: work is done synchronously on the main thread, such as updating the UI

  • User Initiated: something started by the user with expectation of prompt results, for example opening a document

  • Utility: work that may take a while and is not expected to be finished right away, such as uploading images while showing overall progress

  • Background: work that is not related to current activities, typically maintenance, indexing, backup, etc.

The QOS API is quite neat: all it takes is selecting the required QOS level, which then determines the appropriate priority:

[Image: Xcode code completion showing the available QOS queue constants]

The Xcode code completion suggestions directly map to the four main QOS categories, with the additional values of QOS_CLASS_DEFAULT [1], QOS_CLASS_UNSPECIFIED [2], and QOS_MIN_RELATIVE_PRIORITY [3].

Having covered the QOS basics, let’s evolve our code sample and run it via a playground [4]:

import Foundation
import XCPlayground

XCPlaygroundPage.currentPage.needsIndefiniteExecution = true

let currentQueueLabel = { () -> String? in
    // the label of the queue the closure is currently running on
    let queueLabel = dispatch_queue_get_label(DISPATCH_CURRENT_QUEUE_LABEL)
    return String(CString: queueLabel, encoding: NSUTF8StringEncoding)
}

let q = dispatch_get_global_queue(QOS_CLASS_UTILITY, 0)
dispatch_async(q) {
    if let label_name = currentQueueLabel() {
        print("Now on a GCD thread with QOS class: \(label_name)")
    }

    dispatch_async(dispatch_get_main_queue()) {
        if let label_name = currentQueueLabel() {
            print("And now back to: \(label_name)")
        }
        XCPlaygroundPage.currentPage.finishExecution()
    }
}
Now on a GCD thread with QOS class: com.apple.root.utility-qos
And now back to: com.apple.main-thread

This explicitly tells GCD to run the submitted task on the Utility queue, a global concurrent queue for energy-efficient execution. There are three more concurrent system queues, each associated with its respective QOS level, as well as the serial main queue.
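For reference, here is how each of these queues can be obtained with the same legacy libdispatch API used throughout this post (the variable names below are just illustrative):

```swift
import Foundation

// the four global concurrent queues, one per QOS class:
let userInteractiveQueue = dispatch_get_global_queue(QOS_CLASS_USER_INTERACTIVE, 0)
let userInitiatedQueue   = dispatch_get_global_queue(QOS_CLASS_USER_INITIATED, 0)
let utilityQueue         = dispatch_get_global_queue(QOS_CLASS_UTILITY, 0)
let backgroundQueue      = dispatch_get_global_queue(QOS_CLASS_BACKGROUND, 0)

// ...and the serial main queue:
let mainQueue = dispatch_get_main_queue()
```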

While the global queues are useful and readily available, it is important to realize that they are also used by the system and therefore may not always be appropriate for dedicated kinds of work.

The good news is that for these purposes Grand Central Dispatch provides custom queues, which can be either serial or concurrent:

let queue_attributes = dispatch_queue_attr_make_with_qos_class(
                                    DISPATCH_QUEUE_SERIAL, QOS_CLASS_BACKGROUND, 0)
let custom_queue = dispatch_queue_create("com.akpw.bckg.worker", queue_attributes)

This creates a serial background custom queue that can be used for a dedicated type of work.
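For comparison, a concurrent custom queue takes the exact same shape, with only the queue attribute changed (the queue label below is just an example):

```swift
import Foundation

// DISPATCH_QUEUE_CONCURRENT instead of DISPATCH_QUEUE_SERIAL
let concurrent_attributes = dispatch_queue_attr_make_with_qos_class(
                                    DISPATCH_QUEUE_CONCURRENT, QOS_CLASS_UTILITY, 0)
let concurrent_queue = dispatch_queue_create("com.akpw.concurrent.worker",
                                             concurrent_attributes)
```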

Choosing a Queue Type

Five system queues, along with the ability to create any number of custom queues, provide plenty of options to choose from. So how can we pick the most appropriate queue type for various concurrency scenarios?

The answer obviously depends on specific use-cases, and goes along with how we plan to submit work to the queues.

There are several ways to submit tasks to queues. The most common are dispatch_sync, dispatch_async, and dispatch_after. Let’s go over each one using our custom queue:

import Foundation
import XCPlayground

XCPlaygroundPage.currentPage.needsIndefiniteExecution = true

// same helper as in the previous example
let currentQueueLabel = { () -> String? in
    let queueLabel = dispatch_queue_get_label(DISPATCH_CURRENT_QUEUE_LABEL)
    return String(CString: queueLabel, encoding: NSUTF8StringEncoding)
}

let queue_attributes = dispatch_queue_attr_make_with_qos_class(
                                DISPATCH_QUEUE_SERIAL, QOS_CLASS_BACKGROUND, 0)
let custom_queue = dispatch_queue_create("com.akpw.bckg.worker", queue_attributes)

guard let mainQueueLabel = currentQueueLabel() else {XCPlaygroundPage.currentPage.finishExecution()}

print("1. submitting async from \(mainQueueLabel)")
dispatch_async(custom_queue) {
    if let queueLabel = currentQueueLabel() {
        print("2. running async on the \(queueLabel)")
    }
}
print("3. now doing something on \(mainQueueLabel)")

print("4. submitting synchronously from \(mainQueueLabel)")
dispatch_sync(custom_queue) {
    if let queueLabel = currentQueueLabel() {
        print("5. running sync on the \(queueLabel)")
    }
}
print("6. now doing something on \(mainQueueLabel)")


let delay = dispatch_time(DISPATCH_TIME_NOW,
                                Int64(NSTimeInterval(NSEC_PER_SEC) * 1))
print("7. submitting with delay from \(mainQueueLabel)")
dispatch_after(delay, custom_queue) {
    if let queueLabel = currentQueueLabel() {
        print("8. running after delay on the \(queueLabel)")
    }
    XCPlaygroundPage.currentPage.finishExecution()
}
1. submitting async from com.apple.main-thread
3. now doing something on com.apple.main-thread
4. submitting synchronously from com.apple.main-thread
2. running async on the com.akpw.bckg.worker
5. running sync on the com.akpw.bckg.worker
6. now doing something on com.apple.main-thread
7. submitting with delay from com.apple.main-thread
8. running after delay on the com.akpw.bckg.worker

While dispatch_async just submits work to a queue and does not wait on the results, dispatch_sync pauses execution of the submitting thread until the work on the target queue is completed. dispatch_after schedules execution of a task on the target queue after a specified delay.

Another interesting way to submit work to queues is via the barrier dispatch functions, which are a good way to implement thread-safe resources and address the readers–writers problem. dispatch_barrier_async and dispatch_barrier_sync work similarly to dispatch_async and dispatch_sync, except that when submitted to a concurrent queue, a barrier task waits for all previously submitted tasks to complete, then runs exclusively before any later tasks can start. This is best shown by example, so here is a quick implementation of a thread-safe shape-shifting creature:

class ShapeShifter {
    var name: String
    var species: String

    private let synchronizationQueue = dispatch_queue_create(
            "com.akpw.shapeshifter.sync_queue", DISPATCH_QUEUE_CONCURRENT)

    init(name: String, species: String) {
        self.name = name
        self.species = species
    }

    func shiftShape(name: String, species: String) {
        dispatch_barrier_async(synchronizationQueue) {
            self.name = name
            self.species = species
        }
    }

    func shapeInfo() -> String {
        var shapeInfo = ""
        dispatch_sync(synchronizationQueue) {
            shapeInfo = "\(self.name) the \(self.species)"
        }
        return shapeInfo
    }
}

An experienced reader might ask why ShapeShifter is not defined as a struct conforming to the CustomStringConvertible protocol, which would definitely be a very good question. While we will get to it in later parts of the series, for now let’s just say the reason is to keep things simple and not diverge at this point.

The main point of the code above is to show how simple it is to let our shape-shifter support concurrent reads of its current shape, while keeping shape changes serialized and synchronized across multiple threads.
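To illustrate, here is a hypothetical usage sketch (not part of the original playground): any number of threads can read the current shape concurrently via shapeInfo(), while each shiftShape() runs exclusively on the synchronization queue:

```swift
let mystique = ShapeShifter(name: "Mystique", species: "human")

// concurrent reads are fine, even from multiple threads at once
dispatch_async(dispatch_get_global_queue(QOS_CLASS_UTILITY, 0)) {
    print(mystique.shapeInfo())
}

// the write is a barrier: it waits for in-flight reads to finish,
// runs alone, and only then lets subsequent reads proceed
mystique.shiftShape("Raven", species: "mutant")
print(mystique.shapeInfo())
```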

With the information so far, let’s now come up with the basic rules of choosing appropriate queues for typical use-cases.

Main Queue

Starting with a citation from the Concurrency Programming Guide:

The main dispatch queue is a globally available serial queue that executes tasks on the application’s main thread. This queue works with the application’s run loop (if one is present) to interleave the execution of queued tasks with the execution of other event sources attached to the run loop. Because it runs on your application’s main thread, the main queue is often used as a key synchronization point for an application.

Most of that does not require further explanation, and using the main queue is typically a no-brainer: basically, it should only be used for updating the UI and firing off tasks to various worker queues.

In addition to that, the main queue is also a good candidate for dispatch_after, where it can be used as a replacement for NSTimer-based scheduling.
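For example, a one-off delayed task on the main queue might look like this (a minimal sketch, with an arbitrary two-second delay):

```swift
import Foundation

// schedule a block on the main queue ~2 seconds from now,
// instead of setting up an NSTimer
let when = dispatch_time(DISPATCH_TIME_NOW,
                         Int64(2 * NSTimeInterval(NSEC_PER_SEC)))
dispatch_after(when, dispatch_get_main_queue()) {
    print("time to refresh the UI")
}
```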

An extra word of caution about using dispatch_sync there: when called from the main thread, this is guaranteed to deadlock:

dispatch_sync(custom_queue) {
    dispatch_sync(dispatch_get_main_queue()) {
        print("deadlocked!!!")
    }
}

While mistakes like that are easy to spot, subtle variations can be even more dangerous:

import Foundation
import XCPlayground

XCPlaygroundPage.currentPage.needsIndefiniteExecution = true

// same helper as in the previous examples
let currentQueueLabel = { () -> String? in
    let queueLabel = dispatch_queue_get_label(DISPATCH_CURRENT_QUEUE_LABEL)
    return String(CString: queueLabel, encoding: NSUTF8StringEncoding)
}

let custom_queue = dispatch_queue_create("com.akpw.bckg.worker", DISPATCH_QUEUE_SERIAL)

dispatch_async(custom_queue) {
    dispatch_sync(dispatch_get_main_queue()) {
        print("deadlocked!!! 👻")
    }
}

// continue doing something
// ...

dispatch_sync(custom_queue) {
    if let queueLabel = currentQueueLabel() {
        print("supposedly running sync on the \(queueLabel)" +
              " ...or perhaps not???")
    }
}

A rule of thumb is to avoid using dispatch_sync with serial queues unless absolutely needed, and to always prefer dispatch_async instead.

Global queues

The four global concurrent queues are readily available for various concurrent tasks and are used frequently in all kinds of applications.

One thing to remember, though, is that these global queues are also used by the system and therefore might not always be suitable for dedicated kinds of work.

A good illustration of that is trying to use the barrier functions with the global queues, which would simply have the effect of a non-barrier dispatch_async or dispatch_sync.

This is because the global queues are shared concurrent resources, so it does not make sense to serialize / block them for any practical purpose – and that is where custom concurrent queues come in handy.

Custom Serial Queues

The most common uses of custom serial queues are serializing concurrent tasks and protecting critical sections from race conditions.

As mentioned above, these are rarely used along with dispatch_sync, and it is also not as practical to use them with dispatch_after.

The barrier functions would be plainly meaningless there, as serial queues already execute their tasks one at a time.

Custom Concurrent Queues

Custom concurrent queues are the preferred choice for running dedicated concurrent tasks in the background.

As shown in our shape-shifting example, in combination with the barrier functions they are also quite handy for resolving the readers–writers problem, where a custom concurrent queue can serialize write requests while still allowing multiple simultaneous reads.

Conclusion

This introductory part of the series covered Grand Central Dispatch queues and basic task dispatching, along with common practices for selecting appropriate queue types.

All code examples are available for download in this playground.

The next part will go deeper into QOS features as well as diving into more advanced GCD usage patterns.


  1. QOS_CLASS_DEFAULT is simply the value used when no specific QOS info was provided, and is the default value for both queues and threads. Priority-wise, it sits between User Initiated and Utility.

  2. QOS_CLASS_UNSPECIFIED is tied to the concept of QOS propagation, where the QOS is inferred from the caller’s QOS level. Such propagation simply transfers the QOS level, except for User Interactive, which is automatically translated to User Initiated.

  3. QOS_MIN_RELATIVE_PRIORITY supports a relative position within a QOS class band. It only applies within a given QOS class and is intended for unusual situations, such as interdependent work within the same QOS class.

  4. The code examples playground can be downloaded here