This is a guest post by 2016 Qt Champion Ben Lau.

Ben has a long history with Qt, and many interesting projects on GitHub.

Here's an idea from him on making multithreading simpler in some cases.

The Basics

Multithreaded programming may not seem difficult at first glance. You have to pay attention to your shared data to avoid race conditions and deadlocks. So you learn about mutexes and semaphores and apply them carefully. The result works perfectly on your machine.

But one day, your program hangs. You spend an hour tracing the problem and find out that the order of execution is not what you expected. So you add a few more condition checks and fix the problem.

After a few weeks of development, the program has grown more complicated. And it begins to crash randomly. This time, even after a day, you still can't figure out what is wrong, and you have to admit that it is totally out of control.

Does that sound like a familiar story? It is not rare to find complaints about random crashes and hangs caused by the misuse of threads. Is it really that difficult to write multithreaded programs?

The answer is yes and no. It depends on your software's requirements and architecture.

This article introduces a lock-free approach to multithreaded programming using QtConcurrent and AsyncFuture, which make multithreaded programming easier.

Let's take an example. The code below shows an asynchronous ImageReader class. The readImageWorker function is executed on another thread so that it won't block the UI. The returned QFuture represents the result of the computation and reports status changes.

class ImageReader : public QObject {
public:
    QFuture<QImage> read(const QString& fileName);
};

QFuture<QImage> ImageReader::read(const QString &fileName)
{
    auto readImageWorker = [](const QString &fileName) {
        QImage image;
        image.load(fileName);
        return image;
    };

    return QtConcurrent::run(readImageWorker, fileName);
}

Example of use

ImageReader reader;
QFuture<QImage> future = reader.read(INPUT);

QFutureWatcher<QImage> *watcher = new QFutureWatcher<QImage>();
connect(watcher, &QFutureWatcher<QImage>::finished, [=]() {
    setImage(future.result());
});
watcher->setFuture(future);

Multithreaded programming with QtConcurrent is pretty easy. A worker just takes an input, then produces an output later. QtConcurrent handles all of the low-level threading primitives.

But it is limited to the case where the concurrent function does not share data with other threads. If it does, it may still need a lock to protect the critical section, and that falls back to the old traditional way.

Make it support image caching

The above example is an ideal case, and of course a real-world problem is usually not that simple. Let's change the requirements: make it support image caching.

QFuture<QImage> ImageReader::read(const QString &fileName)
{
    auto readImageWorker = [](const QString &fileName) {
        QImage image;
        image.load(fileName);
        return image;
    };

    QFuture<QImage> future = QtConcurrent::run(readImageWorker, fileName);

    QFutureWatcher<QImage> *watcher = new QFutureWatcher<QImage>(this);
    auto updateCache = [=]() {
        m_cache[fileName] = future.result();
        watcher->deleteLater();
    };
    connect(watcher, &QFutureWatcher<QImage>::finished, updateCache);
    watcher->setFuture(future);

    return future;
}

The class declaration:

class ImageReader : public QObject {
public:
    bool isCached(const QString& fileName) const;
    QImage readCache(const QString& fileName) const;
    QFuture<QImage> read(const QString& fileName);

private:
    QMap<QString, QImage> m_cache;
};

bool ImageReader::isCached(const QString &fileName) const
{
    return m_cache.contains(fileName);
}

QImage ImageReader::readCache(const QString &fileName) const
{
    return m_cache[fileName];
}

Before getting an image, you have to check whether the cache is available:

if (reader.isCached(INPUT)) {
    setImage(reader.readCache(INPUT));
    return;
}

QFuture<QImage> future = reader.read(INPUT);

This solution works, but the API is not ideal, because it violates the "tell, don't ask" principle. It would be better to combine readCache() and read() into a single function that always returns a QFuture object. But there is a problem: QFuture/QtConcurrent can only obtain a result from a thread, and it is quite odd to start a thread when the data is already available. To get around this problem, we need a third-party library.

AsyncFuture

AsyncFuture is a C++ library that can convert a signal into a QFuture and use it like a Promise object in JavaScript. It provides a unified interface for asynchronous and concurrent tasks. The library consists of a single header file, so it is very easy to bundle into your source tree. Alternatively, you may install it via qpm.

Project Site:

https://github.com/benlau/asyncfuture

Let’s rewrite the above function with AsyncFuture:

QFuture<QImage> ImageReader::read(const QString &fileName)
{
    if (m_cache.contains(fileName)) {
        // Cache hit. Return an already-finished QFuture object holding the image
        auto defer = AsyncFuture::deferred<QImage>();
        defer.complete(m_cache[fileName]);
        return defer.future();
    }

    if (m_futures.contains(fileName)) {
        // The image is already loading. Return the running QFuture
        return m_futures[fileName];
    }

    auto readImageWorker = [](const QString &fileName) {
        QImage image;
        image.load(fileName);
        return image;
    };

    auto updateCache = [=](QImage result) {
        m_cache[fileName] = result;
        m_futures.remove(fileName);
        return result;
    };

    QFuture<QImage> future = AsyncFuture::observe(QtConcurrent::run(readImageWorker, fileName))
                                 .context(this, updateCache)
                                 .future();
    m_futures[fileName] = future;

    return future;
}

This time it is almost perfect. The deferred object provides an interface to complete or cancel a QFuture manually, which replaces readCache() by returning an already-finished future object.

Moreover, it adds a new feature that avoids duplicated image reading. If you request the same image twice before it is cached, the original design would start two threads, wasting CPU power. This version solves that by keeping every running future in a pool (the m_futures member, a QMap from file name to QFuture) and returning the running future for a duplicated read.

Make the example more complicated

Currently the example is very simple. Let’s try to make it more complicated.

Requirements:

- Add a readScaled(fileName, size) function that returns an image scaled to the specified size
- Code reuse is a must
- The scaling must be done in another thread to emulate a CPU-intensive function
- Load the cached image if available
- The scaled image does not need to be kept in the cache

The ideal solution is to make use of the result of read() directly. That means you have to create a thread that depends on the result of another thread. That is hard to get working with QtConcurrent alone, and it would probably need a lock. But it can easily be done with AsyncFuture's future-chaining feature.

QFuture<QImage> ImageReader::readScaled(const QString &fileName, const QSize &size)
{
    auto scaleImageWorker = [=](QImage input) {
        return input.scaled(size);
    };

    auto callback = [=](QImage result) {
        return QtConcurrent::run(scaleImageWorker, result);
    };

    QFuture<QImage> input = read(fileName);
    QFuture<QImage> output = AsyncFuture::observe(input).context(this, callback).future();

    return output;
}

First of all, it calls the read() function to obtain an image via a QFuture. It doesn't need to care about the caching mechanism, as that is already handled by read().

Then it creates a new future object to represent the whole work flow of the chain:

QFuture<QImage> output = AsyncFuture::observe(input).context(this, callback).future();
                                     ^^^^^^^

A chain begins with an observe() function, followed by an observer function that binds the callback to the observed future; that creates a new future object representing the result of the callback.

auto callback = [=](QImage result) {
    return QtConcurrent::run(scaleImageWorker, result);
};

You may wonder whether it is all right to run another worker function within the callback. In fact, this is a feature of AsyncFuture. It provides a chainable API that works like a Promise object in JavaScript. If the callback returns a QFuture object, it is added to the chain, and the final output future then depends on the returned future. Therefore, the output future in fact represents the results of read(), callback() and scaleImageWorker(). The flow can be visualised by this diagram:

Diagram: the workflow of readScaled() - it shows how a single QFuture represents the result of a multi-step task.

Conclusion

Using QtConcurrent without sharing data between threads can make multithreaded programming easier, because there is no access lock to manage. But real-world problems are usually more complicated. A task may not be able to complete without interaction with other threads. In that case, it may still need an access lock to protect the critical section. But once you use one, you fall back to the old traditional way, and will probably hit the same problems mentioned at the beginning.

This article presented an alternative solution: use the Concurrent API together with asynchronous callbacks, then chain them into a sequence with a promise-like API. It works by breaking a task down into multiple steps. Whenever a concurrent function needs extra information from another thread, it should just terminate itself and pass control back to the main thread. That way it doesn't need an access lock that may raise issues like deadlocks and race conditions.

The whole workflow can be represented by a QFuture object - a unified interface for all kinds of asynchronous workflows.

However, this doesn't mean we get rid of locks completely. They are still necessary for some scenarios. So choose your solution case by case.