Why

Concurrent programming in dynamic languages like Python and Javascript has become a requirement if you want to write any sort of decently performing code doing any kind of significant IO (disk or network). Async concurrency is the most practical way to do this and understanding the basics of it is a must.

Python’s (3.7+) concurrency mechanisms have finally matured and stabilised on using async/await coroutines. This is so similar to how async programming works in modern Javascript (ES8+, Node.js 7.6+) that we can only stop and wonder at the beauty of convergent “evolution” / intelligent-design of programming languages.

There are lots of good references and tutorials on async(hronous) programming in both Python and Javascript (I’ll refer to Node.js only here, to avoid talking about browser compatibility, but the same concepts apply). But almost all of them tend to overwhelm readers with tons of new concepts all at once.

This article was written to help you avoid that overwhelmed fate, especially since async concurrency is in fact quite straightforward.

Where are we

Dynamic languages like Python and Javascript pretty much suck at executing code in parallel (what’s called parallelism, or more precisely code parallelism). Both are practically barred from doing it via threads: Python’s GIL and Javascript’s intentionally single-threaded design make thread-based parallelism (almost) impossible. And doing it via processes makes it equivalent to running multiple programs (or copies of the same program) plus handling inter-process communication between them — this is both complex and resource (RAM) hungry.

But this doesn’t mean they can’t do stuff in parallel (what’s called concurrency — which includes parallelism but is much broader than that). Because, you see, “doing stuff” doesn’t just mean “executing code”. Part of what a program “does” is “wait for data to be read/written from/to the disk”, “wait for a network request” etc. Just like your “work” also includes “waiting for code to compile/build”, “taking a coffee break”, “waiting for tests to run on the CI system”, “taking a very, very long lunch break”, “waiting for the new version to be deployed” etc.

Considering the above, the context of this article will be the purple zone (“ASYNC”) on this diagram:

(The diagram hints at more subtle things too, like the existence of other more exotic forms of parallelism and concurrency, experimental “worker threads” in Node.js having some ability to do some true parallel processing, async having other uses besides speeding things up in the context of IO parallelism etc. But let’s move on to practical examples instead and leave aside such advanced topics for now.)

A practical example

Let’s jump into an actual problem that can be solved better using async concurrency: fetching a list of URLs and scraping some data out of them. An obvious first jab at doing this would look like this, where we just get all the outbound links for a page. (The problem is actually quite similar to the converse one of a server handling multiple client requests, which is obviously of much greater practical importance — but the code is much simpler for the client case, so we’ll use that as the example here.)
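A minimal synchronous version might look something like this (a sketch using only the standard library; the function names, the URL list and the deliberately naive link-extracting regex are illustrative):

```python
import re
import urllib.request

urls = [
    "https://www.ietf.org/rfc/rfc2616.txt",
    "https://en.wikipedia.org/wiki/Asynchronous_I/O",
]

def fetch_url(url):
    # blocking: the whole program sits idle while the response arrives
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode("utf-8", errors="replace")

def scrape_data(html):
    # naive scraping: just collect every absolute outbound link on the page
    return re.findall(r'href="(https?://[^"]+)"', html)

def main():
    extracted_data = {}
    for url in urls:  # one url at a time, each waiting on the previous one
        extracted_data[url] = scrape_data(fetch_url(url))
    print(extracted_data)
```

Calling `main()` fetches and processes the pages strictly one after another — which is exactly the problem we’re about to fix.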

Warning: Almost all code in this article is missing error handling — the point is simply to help you get started for now. Any kind of production-grade code should do thorough and thoughtful error handling!

(It’s better to skip over the Node.js version of the code here. Since node doesn’t have synchronous requests in its standard library, a 3rd party lib is needed. Nevertheless, here’s the equivalent Node.js code if you really want to see it.)

Now, the point here is that for a list of N urls, this would take ~ N * (average time to fetch and process one url) , which is… terrible! It makes absolutely no sense to fetch and handle urls one after another.

Even if you accept that you’re in a language that doesn’t offer an easy way to run code in parallel, you’d still want to fetch the urls in parallel! This way you’d have a run time of ~ (slowest time to fetch an url) + N * (time to process an url) . Since the first part, (slowest time to fetch an url) (the IO part), is what takes most of the time here (we call this an “IO-bound task”), this will obviously run much faster overall.

But how to go about implementing it to work this way?

In Python one could reach for threads and start building the mechanism for fetching urls in parallel. But it would be so much tedious boilerplate for such a trivial task! Oh, and if a bug seeps into that code, I promise you the debugging will be thoroughly unpleasant.

In Node.js the default solution, which has existed since its creation, is callbacks ( http(s).get from the standard library takes as an argument a callback that gets invoked after the url is fetched). This is actually the canonical way to code the Python example above in Node.js:

If you never need to write code more complex than this, with Node.js you can stop right here: using callbacks you know enough to do async programming — which is also the only kind of programming node encourages you to employ for IO-heavy code.

But real-life programs are much more complicated, and your Node.js code using callbacks would soon hit just as many walls as your Python code using threads. You need to keep track of what is done when, ensure all things happen in the right order, then do proper error handling etc. Historically, Node.js offered a succession of solutions to the problem of async programming long before async/await & Promises (the current approach) matured, and I even went through some of them in my 2017 article Async patterns in Node.js: only 5+ different ways to do it. But let’s focus on the present instead, because the present stage is also the one that brought convergence between the approaches of Python and Javascript/node.

Some preparation

Before diving into writing modern-style async code in Python and JS, let’s replace some bits of code with mocked variants that will not do actual requests, and also make the scrape_data functions behave as if they’re async too, because they use some external service (let’s say they use a machine-learning NLP system to do some sentiment analysis on the scraped web content — we don’t care about the details now…). This also allows us to play with our code without spamming somebody’s servers for no reason. So, now we’d have (don’t worry, all will be explained later on):
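In Python the mocked helpers might look something like this (a sketch: the random sleep ranges and the randomized sentiment score are arbitrary placeholders standing in for real network and NLP latency):

```python
import asyncio
import random
import time

async def fetch_url(url):
    # mocked fetch: an async sleep stands in for the network round-trip
    print(f"~ executing fetch_url({url})")
    t = time.perf_counter()
    await asyncio.sleep(random.randint(1, 3))  # non-blocking wait
    print(f"time of fetch_url({url}): {time.perf_counter() - t:.2f}s")
    return f"<em>fake</em> page html for {url}"

async def analyze_sentiment(html):
    # mocked call to an external NLP service, again just an async sleep
    print(f"~ executing analyze_sentiment({html!r})")
    t = time.perf_counter()
    await asyncio.sleep(random.randint(1, 5))
    print(f"time of analyze_sentiment({html!r}): {time.perf_counter() - t:.2f}s")
    return {"positive": random.random()}  # fake sentiment score
```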

…in Python, and in node:

Looking at the Python snippet above, you see that it:

imports the stdlib module asyncio — this has a bunch of utilities required for async programming (there are 3rd party alternatives to it, but unless you have very special needs there is no reason to not stick to the builtins)

uses async def statements instead of regular def s (we’ll see later how to properly run/call these special async functions that we are creating)

there’s a special await keyword (which can only be used inside functions defined with async def ) — simplifying things a bit, it just means ”the currently executing code needs to wait for the following to finish, but code from other functions can execute in the meantime” (the last part is crucial!)

we use asyncio.sleep(n_seconds) instead of time.sleep(n_seconds) — this is because time.sleep is blocking, aka ”it stops/freezes the entire program for its duration, so code from other contexts in the program can’t execute either”. In async programs you must NEVER call any blocking function, otherwise you lose all the benefits of async — this is, for example, why you’d use a library like aiohttp instead of one like requests, and from a practical perspective it ends up being the biggest issue with async programming in Python: switching from your old blocking libraries to async-friendly alternatives
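The blocking-vs-non-blocking point is worth seeing live. Here is a tiny self-contained experiment (it peeks ahead at asyncio.gather, covered later, which runs several coroutines concurrently; the function names are illustrative):

```python
import asyncio
import time

async def good(n):
    await asyncio.sleep(n)   # yields to the event loop: others run meanwhile

async def bad(n):
    time.sleep(n)            # blocks the whole event loop for n seconds

async def main():
    t = time.perf_counter()
    await asyncio.gather(good(1), good(1))   # the two sleeps overlap
    print(f"two async sleeps of 1s took: {time.perf_counter() - t:.1f}s")

    t = time.perf_counter()
    await asyncio.gather(bad(1), bad(1))     # the two sleeps get serialized
    print(f"two blocking sleeps of 1s took: {time.perf_counter() - t:.1f}s")

asyncio.run(main())
```

The async pair finishes in about one second, the blocking pair in about two: a single `time.sleep` freezes everything else on the loop.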

For the Node.js version, the notable things are:

having to build our own sleep function — JS is a very DIY language unfortunately, and lots of very basic utils are missing from the standard library (yes, there are tons of little packages filling the need, but you’d need to decide which one to use)

also here we see usage of Promises — if you’re unfamiliar with them and you still care about the Node.js part of this article, take a break and get familiar with them NOW (some good resources are Google Web Fundamentals’ JS Promises intro and MDN’s “Using Promises”), because async/await in Node.js is built on top of Promises (just like Python’s async is built on top of coroutines, Tasks, and Futures — but in JS you will really need to work with the lower-level bits all the time; you can’t postpone understanding them)

by comparison, Python’s equivalents for promises, the so-called awaitables (which can be coroutines , Tasks or Futures ), are seen as lower-level concepts whose understanding you can postpone (but you will need to learn more about them sooner or later)

the same async function and await keywords that we saw in Python — and the good thing is that they mean basically the same thing!

Running async functions

If you were to call a function like fetch_url above (sometimes called a “coroutine function”) directly, what you’d get back will surprise you: it will not be the result of the function, but a coroutine object:

>>> fetch_url("https://goo.gl/")
<coroutine object fetch_url at 0x103673548>

Instead you’ll need to pass this coroutine object to asyncio.run and it in turn will give you the actual result:

>>> asyncio.run(fetch_url("https://goo.gl/"))
~ executing fetch_url(https://goo.gl/)
time of fetch_url(https://goo.gl/): 3.00s
'<em>fake</em> page html for https://goo.gl/'

In practice you’ll likely have an async def main(): … that you’ll run at the end of your code with asyncio.run(main()) . Note that it’s not asyncio.run(main) , but asyncio.run(main()) — we don’t pass the main function as a callback to asyncio.run , but the coroutine object returned by calling main() .

The story in Node.js is almost the same:

> fetchUrl("https://goo.gl/") // returns a promise
~ executing fetchUrl(https://goo.gl/)
Promise { ... }
> fetchUrl("https://goo.gl/").then(r => console.log(r)) // this also runs it
~ executing fetchUrl(https://goo.gl/)
Promise { ... }
fetchUrl("https://goo.gl/"): 1596.755ms
<em>fake</em> page html for https://goo.gl/

If you really paid attention though, you may have noticed one very important difference: the ~ executing fetchUrl(https://goo.gl/) output line (after the first entered line of code) shows us that Node.js actually runs the code inside the function, up until the first await statement. Python does not do this! And nothing happens with the code after the await sleep… either, because nothing awaits fetchUrl ’s result. So whereas in Python “async functions” / awaitables don’t run until something awaits them, in node “async functions” / promises do run anyway, but only until their first await statement — after that there’s nothing to “wake them up” again.
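Python’s laziness is easy to verify yourself (a tiny sketch; the function name is illustrative):

```python
import asyncio

async def lazy():
    print("running!")   # in Python, nothing prints when you merely call lazy()
    return 42

coro = lazy()               # prints nothing: the body hasn't started executing
result = asyncio.run(coro)  # only now does "running!" appear
print(result)               # 42
```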

There’s also a deeper, unseen difference between the two: by default, a Python program does not have the mechanism for running async concurrent code, what we call the Event Loop, initialised and running. Node, on the other hand, has it started and running. Actually it’s more like “node has the event loop up and running from the get-go, because there is nothing else to run the program code except the event loop in Node.js”.

Putting it all together

With all this clarified, let’s see what the improved url fetcher and analyser using async concurrency would look like. First in Python:
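Roughly like this, using asyncio.gather (a self-contained sketch: the fixed fake delays and the placeholder sentiment score are illustrative stand-ins for the randomized mocks above, chosen so the timing is reproducible):

```python
import asyncio
import time

urls = [
    "https://www.ietf.org/rfc/rfc2616.txt",
    "https://en.wikipedia.org/wiki/Asynchronous_I/O",
]
extracted_data = {}

async def fetch_url(url):
    print(f"~ executing fetch_url({url})")
    t = time.perf_counter()
    await asyncio.sleep(1 if "wiki" in url else 2)  # fake network latency
    print(f"time of fetch_url({url}): {time.perf_counter() - t:.2f}s")
    return f"<em>fake</em> page html for {url}"

async def analyze_sentiment(html):
    print(f"~ executing analyze_sentiment({html!r})")
    t = time.perf_counter()
    await asyncio.sleep(5 if "wiki" in html else 1)  # fake remote NLP call
    print(f"time of analyze_sentiment({html!r}): {time.perf_counter() - t:.2f}s")
    return {"positive": 0.5}  # placeholder score

async def handle_url(url):
    # sequential within one url (can't analyze before fetching),
    # but concurrent across urls thanks to gather() below
    html = await fetch_url(url)
    extracted_data[url] = await analyze_sentiment(html)

async def main():
    t = time.perf_counter()
    await asyncio.gather(*(handle_url(url) for url in urls))
    print("> extracted data:", extracted_data)
    print(f"time elapsed: {time.perf_counter() - t:.2f}s")  # ~6s, not ~9s

asyncio.run(main())
```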

Running it would show results like:

$ python ./async_scrape.py
~ executing fetch_url(https://www.ietf.org/rfc/rfc2616.txt)
~ executing fetch_url(https://en.wikipedia.org/wiki/Asynchronous_I/O)
time of fetch_url(https://en.wikipedia.org/wiki/Asynchronous_I/O): 1.00s
~ executing analyze_sentiment('<em>fake</em> page html for https://en.wikipedia.org/wiki/Asynchronous_I/O')
time of fetch_url(https://www.ietf.org/rfc/rfc2616.txt): 2.00s
~ executing analyze_sentiment('<em>fake</em> page html for https://www.ietf.org/rfc/rfc2616.txt')
time of analyze_sentiment('<em>fake</em> page html for https://www.ietf.org/rfc/rfc2616.txt'): 1.00s
time of analyze_sentiment('<em>fake</em> page html for https://en.wikipedia.org/wiki/Asynchronous_I/O'): 5.00s
> extracted data: {'https://www.ietf.org/rfc/rfc2616.txt': {'positive': 0.7322116418118101}, 'https://en.wikipedia.org/wiki/Asynchronous_I/O': {'positive': 0.45865434157565066}}
time elapsed: 6.01s

The running time roughly equals the largest sum of fetching time for an url plus analysis time for the same url, across all urls. In the example above it’s ~ 1s + 5s == 6s . This is much less than the running time of a fully synchronous version of this that would’ve been simply the sum ~ 1s + 2s + 1s + 5s == 9s . And the difference would only increase with the number of urls!

The Node.js version is still very similar:

If you’re paying attention, you’ll notice that in node we can simply call main() — no need for anything like asyncio.run . Now, main() still returns a Promise , but Node.js actually runs the code inside (and waits for it to finish) — the event loop in node is “implicit” and simply does what you’d expect it to do. One way to refer to this is by saying that “in Javascript, Promises are eager”. Since this executed code actually await s the url-processing promises, the full program ends up properly waiting for things to run. When you run the code, it produces something like this:

$ node ./async_scrape.js
~ executing fetchUrl(https://www.ietf.org/rfc/rfc2616.txt)
~ executing fetchUrl(https://en.wikipedia.org/wiki/Asynchronous_I/O)
fetchUrl(https://en.wikipedia.org/wiki/Asynchronous_I/O): 1477.565ms
~ analyzeSentiment("<em>fake</em> page html for https://en.wikipedia.org/wiki/Asynchronous_I/O")
analyzeSentiment("<em>fake</em> page html for https://en.wikipedia.org/wiki/Asynchronous_I/O"): 3348.731ms
fetchUrl(https://www.ietf.org/rfc/rfc2616.txt): 4987.732ms
~ analyzeSentiment("<em>fake</em> page html for https://www.ietf.org/rfc/rfc2616.txt")
analyzeSentiment("<em>fake</em> page html for https://www.ietf.org/rfc/rfc2616.txt"): 3919.232ms
ellapsed: 8909.441ms

We can confirm the same behaviour w.r.t. running time as observed for the Python version, so our assumptions hold.

Making it real

Now, in a real-life program there are a few more things to consider.

First, you won’t always want to run all your promises concurrently and wait for all of them to finish. Maybe you just want to get the fastest result you can get, then call it done. For this, in Python you can replace asyncio.gather(*awaitables) with asyncio.wait(awaitables, return_when=asyncio.FIRST_COMPLETED) (using ALL_COMPLETED instead would do the same thing as before), turning your main into:

async def main_race():
    t = time.perf_counter()
    await asyncio.wait([handle_url(url) for url in urls],
                       return_when=asyncio.FIRST_COMPLETED)
    print("> extracted data:", extracted_data)
    print(f"time elapsed: {time.perf_counter() - t:.2f}s")

You may also want to comment out the await sleep… line inside analyze_sentiment to make the effect of this more obvious!

For Node.js you’d just use Promise.race(…) instead of Promise.all(…) , but unless this happens inside some more advanced flow-control constructs, you’d also need a process.exit(0) to tell Node.js to exit the program (an exit code of 0 means “all ok, without any error”) and not wait for all the promises (yeah, there are other ways to achieve this, like using cancelable promises instead of the default ones and canceling the unfinished ones after the first finishes etc. …but they’re all too fancy for the learning stage we’re at now). (Again, comment out the await sleep… line inside analyzeSentiment when testing this, to not get confused by randomness.)

async function mainRace() {
  console.time('ellapsed');
  await Promise.race(urls.map(handleUrl));
  console.timeEnd('ellapsed');
  process.exit(0);
}

The other common pattern is obviously running async operations in sequence. This is basically akin to partially reverting to the behaviour of sync code, but you’ll end up doing this quite often in practice. This can be done by simply awaiting in sequence.

But in Python things are a bit more subtle — if you await a Task , a wrapper that schedules a coroutine to run concurrently, you’ll still get the concurrent behaviour of asyncio.gather . It’s worth clarifying this with some examples. If you rewrite main like this:

async def main_sequential():
    t = time.perf_counter()
    for url in urls:
        await handle_url(url)
    print("> extracted data:", extracted_data)
    print(f"time elapsed: {time.perf_counter() - t:.2f}s")

…you get back the sequential behaviour of old sync code. But if you do this instead (pay attention to create_task ), they would run concurrently:

async def main_concurrent():
    t = time.perf_counter()
    tasks = [asyncio.create_task(handle_url(url))
             for url in urls]
    for task in tasks:
        await task
    print("> extracted data:", extracted_data)
    print(f"time elapsed: {time.perf_counter() - t:.2f}s")

This is not the kind of code you’d write, but you will use and see code using Tasks all over the place, so being able to understand what is going on is a must! The tricky bit here is grokking that tasks start running at the moment they are created with create_task . Once you get this, the rest becomes pretty obvious. A more plausible way you’d see code like the above written would be like this (using the same asyncio.gather and asyncio.wait that you’ve seen before):
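Sketched under the same assumptions as before (a handle_url coroutine doing the fetch-then-analyze work, mocked here with a short sleep so the snippet is self-contained), those two variants might look like this:

```python
import asyncio
import time

urls = ["https://site-a.example/", "https://site-b.example/"]  # illustrative
extracted_data = {}

async def handle_url(url):
    await asyncio.sleep(0.2)  # stands in for the fetch + analysis work
    extracted_data[url] = {"positive": 0.5}

async def main_gather():
    # gather() wraps the coroutines in Tasks for you and waits for all results
    t = time.perf_counter()
    await asyncio.gather(*(handle_url(url) for url in urls))
    print("> extracted data:", extracted_data)
    print(f"time elapsed: {time.perf_counter() - t:.2f}s")

async def main_wait():
    # wait() takes Tasks and hands back (done, pending) sets
    t = time.perf_counter()
    tasks = [asyncio.create_task(handle_url(url)) for url in urls]
    done, pending = await asyncio.wait(tasks, return_when=asyncio.ALL_COMPLETED)
    print("> extracted data:", extracted_data)
    print(f"time elapsed: {time.perf_counter() - t:.2f}s")

asyncio.run(main_gather())
asyncio.run(main_wait())
```

Both finish in roughly the time of the single slowest handler, since the Tasks run concurrently either way.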

At this point you might start to freak out at all the subtleties and complexities of Python async and start to admire node’s DIY-ish elegance instead. But the truth is that real-world code is much more complicated than these toy examples, and all the tools in Python’s async arsenal actually come in handy. By choosing slightly more complicated elementary building blocks for async, Python can actually help you make your code simpler and easier to reason about than the equivalent Node.js code. On the other hand, Python went all TIMTOWTDI on async programming, with alternatives to the standard library’s asyncio module and pluggable 3rd party event loops that can replace the standard implementation. We’ll see whether all this extreme flexibility is actually worth it.

Further (required) reading

For Python:

Python’s official docs on Asynchronous I/O, and on Coroutines and Tasks — you MUST ABSOLUTELY go and read the relevant sections of the standard documentation, especially since they’re very well written and to the point! Python’s async toolkit has a bunch of subtleties that must be understood before diving into writing production code. If you fail to do so you’ll only end up inflicting unnecessary pain and suffering on yourself and others.

Async IO in Python: A Complete Walkthrough — a more thorough introduction to async programming in Python (though a bit less conceptually deep, I’d say, than this one). This can either be the next thing to read after this article, or an alternative to it if you find my writing style confusing and want more examples. It also has links to other resources, and some pointers for doing async programming in Python 3.5 if you’re stuck with that.

For Node.js:

Cheers & Namaste!