We've covered co_return and co_yield now. The final new keyword introduced by the coroutines TS is co_await . Where the first two keywords allow a coroutine to suspend whilst its consumer gets ready to use results, co_await is used to allow the coroutine to suspend whilst it waits for things it needs. I tend to think of co_return and co_yield as servicing the outside of the coroutine and co_await as dealing with the inside.

We're going to start with the refactored lazy class that we had a couple of tutorials ago and make it so that our lazy class supports co_await .

```cpp
lazy<int> answer() {
    std::cout << "Thinking deep thoughts..." << std::endl;
    co_return 42;
}

sync<int> await_answer() {
    std::cout << "Started await_answer" << std::endl;
    auto a = answer();
    std::cout << "Got a coroutine, let's get a value" << std::endl;
    auto v = co_await a;
    std::cout << "And the coroutine value is: " << v << std::endl;
    v = co_await a;
    std::cout << "And the coroutine value is still: " << v << std::endl;
    co_return 0;
}

int main() {
    return await_answer().get();
}
```

The first thing to notice here is that we now have two coroutines, both answer and await_answer . We can't perform the await inside main because main (along with constructors and destructors) isn't allowed to be a coroutine.

The reason for this is that a coroutine's return value needs to be properly dealt with from its calling context so that the coroutine does its work properly. If we tried to turn main into a coroutine there'd be nothing to handle this for us. There are similar problems for both constructors and destructors, neither of which return values.

Because there are two coroutines in play we'll only keep the prints for the lifetime tracking of the lazy instance and not the sync one. We're using sync only as a mechanism to allow us to enter a context within which we can use coroutines and it doesn't play any part in the example other than that.

The co_await API

Just like our co_return and co_yield examples, co_await requires a particular API on anything we want to await. There are three parts:

- await_ready returns a boolean describing whether or not a suspend is needed: true for "don't suspend" and false for "suspend".
- await_suspend is called when a suspend is needed because the value isn't ready yet.
- await_resume returns the value that is being awaited on. This becomes the result of the co_await expression.

Let's look at what these would look like for our lazy class.

```cpp
bool await_ready() {
    const auto ready = coro.done();
    std::cout << "Await " << (ready ? "is ready" : "isn't ready") << std::endl;
    return coro.done();
}
```

We can easily tell if the value is available by looking to see if the lazy is done or not. If it is done then we can move directly on to returning the value to the co_await . If it isn't done we're going to need to resume the lazy body so it can co_return something into the promise type for us to pass on.

```cpp
void await_suspend(std::experimental::coroutine_handle<> awaiting) {
    std::cout << "About to resume the lazy" << std::endl;
    coro.resume();
    std::cout << "About to resume the awaiter" << std::endl;
    awaiting.resume();
}
```

Here we can directly see the two coroutines in our example. The function is a member of the lazy returned by answer so the class member coro is the handle to the answer coroutine. We always resume this because we can only end up here if the coroutine isn't done already (checked in the await_ready member).

There is a subtle problem with this version of the code. We're recursively calling resume on the awaiting coroutine. If we have a long chain of coroutines all waiting on each other then the stack frames from these recursive calls could add up in such a way that we overflow our stack. See the second addendum below for details about this.

The passed in awaiting handle is for the sync instance associated with the await_answer coroutine, which is where the co_await appears. Once the lazy is done we resume this coroutine.

The last part (returning the value) is trivial:

```cpp
auto await_resume() {
    const auto r = coro.promise().value;
    std::cout << "Await value is returned: " << r << std::endl;
    return r;
}
```

When we run this we can see things working as expected (remember that we only print lifetime information for the lazy instance and not the sync one):

```
Started await_answer
Promise created
Send back a lazy
Created a lazy object
Started the coroutine, wait for the go!
Move constructed a lazy object
Lazy gone
Got a coroutine, let's get a value
Await isn't ready
About to resume the lazy
Thinking deep thoughts…
Got an answer of 42
Finished the coro
About to resume the awaiter
Await value is returned: 42
And the coroutine value is: 42
Await is ready
Await value is returned: 42
And the coroutine value is still: 42
Promise died
Lazy gone
```

The first time await_ready is called the lazy hasn't yet run, so we can see answer being entered when it is resumed and then the execution path as the value makes its way back out. The second time, however, the lazy is already done so we don't go down the resume path; we skip directly on to returning a second copy of the value.

Comparison with get

Let's look again at our get implementation:

```cpp
T get() {
    std::cout << "We got asked for the return value..." << std::endl;
    if (not coro.done()) coro.resume();
    return coro.promise().value;
}
```

You'll notice that all the three parts of our new API appear in this:

- The if statement's conditional expression appears as await_ready.
- The resume appears in await_suspend.
- The return is handled by await_resume.

There is one rather important difference though. The await_suspend also gets given the handle to the second coroutine, and this allows it to do a number of interesting things with it, all of which we'll have to wait for future tutorials for. We'll see our first example in the next article where we'll use co_await for something a bit more cool and surprising.

Addendum 1: operator co_await

The thing that is a bit nasty about this code is that the await API is implemented directly on the lazy instance, and this allows it to be used (and abused) by anybody who has a lazy instance. Luckily there is something we can do about that. We can add await support to any type by adding support for operator co_await. Like many of these operators it can be defined as a member function, or stand-alone. The signature for the member version would be:

```cpp
awaitable_type operator co_await();
```

Where awaitable_type now implements the above API. If we want it to be stand alone then it would be:

```cpp
template<typename T>
awaitable_type operator co_await(lazy<T> &);
```

This is analogous to operators like == and << .

Using this, and the trick of local types, we can write the following:

```cpp
auto operator co_await() {
    struct awaitable_type {
        handle_type coro;

        bool await_ready() {
            const auto ready = coro.done();
            std::cout << "Await " << (ready ? "is ready" : "isn't ready") << std::endl;
            return coro.done();
        }

        void await_suspend(std::experimental::coroutine_handle<> awaiting) {
            std::cout << "Got to resume the lazy" << std::endl;
            coro.resume();
            std::cout << "Got to resume the awaiter" << std::endl;
            awaiting.resume();
        }

        auto await_resume() {
            const auto r = coro.promise().value;
            std::cout << "Await value is returned: " << r << std::endl;
            return r;
        }
    };

    return awaitable_type{coro};
}
```

Addendum 2: Tail call optimisation

In the current TS we have to resume both coroutines in the await_suspend. The lazy coroutine's body is executed first, and when it suspends after the co_return our await_suspend resumes execution and then continues execution of the awaiting coroutine, in this case:

```cpp
void await_suspend(std::experimental::coroutine_handle<> awaiting) {
    std::cout << "About to resume the lazy" << std::endl;
    coro.resume();
    std::cout << "About to resume the awaiter" << std::endl;
    awaiting.resume();
}
```

It's not quite as dire as it seems though, because the call is in the tail position. This allows the compiler to perform tail call optimisation to remove the stack frame, something that the optimisers in modern compilers are pretty good at. This means optimised production builds should be fine, but it may prove a problem for debug builds.

To deal with this better a future version of the TS is likely to allow await_suspend to return a coroutine handle to be used as a continuation (that is, to be called immediately after await_suspend returns). This looks like:

```cpp
auto await_suspend(std::experimental::coroutine_handle<> awaiting) {
    std::cout << "About to resume the lazy" << std::endl;
    coro.resume();
    std::cout << "Returning awaiting coroutine as a continuation" << std::endl;
    return awaiting;
}
```