Why Is Non-Blocking IO More Scalable?

In nearly all modern web apps, we have a lot of I/O. We talk to the database and ask for records or insert/update them. More often than not, we access some files from the hard disk, which again is an I/O operation.

We also talk to third-party web services, for example for OAuth integration. And many web apps run as microservices these days, where they have to talk to other parts of the same app through HTTP requests.

If you write your web app in Ruby, Python, or many other languages, all of these I/O-related tasks are blocking by default: the process waits until it receives the response and only then continues executing the program.
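The effect is easy to observe with a pipe standing in for a slow service (a minimal sketch; the 0.2-second delay and the pipe are just stand-ins for a real network call):

```ruby
reader, writer = IO.pipe

producer = Thread.new do
  sleep 0.2            # simulate a slow external service
  writer.puts "response"
  writer.close
end

started = Time.now
line = reader.gets     # blocks here: the thread sleeps until data arrives
elapsed = Time.now - started

puts line              # the "response" we waited for
puts elapsed >= 0.2    # true: we did nothing else during the wait
producer.join
```

While `reader.gets` is blocked, the calling thread cannot serve other requests; that idle time is exactly what non-blocking I/O reclaims.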

Node.js [1], on the other hand, uses non-blocking I/O by default. The process can therefore continue working elsewhere and execute a callback or resolve a promise when the request finishes.
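Ruby exposes the same primitives, even if they are not the default. A non-blocking read attempt either returns data or signals that it would block, so the caller can do other work in the meantime (a minimal sketch; the pipe, delay, and `work_done` counter are illustrative stand-ins):

```ruby
reader, writer = IO.pipe

Thread.new do
  sleep 0.2                  # simulate a slow external service
  writer.puts "response"
  writer.close
end

work_done = 0
response = nil

loop do
  begin
    # Attempt a non-blocking read; raises IO::WaitReadable if no data yet.
    response = reader.read_nonblock(1024)
    break
  rescue IO::WaitReadable
    work_done += 1                       # do other work instead of sleeping
    IO.select([reader], nil, nil, 0.05)  # then briefly wait for readability
  rescue EOFError
    break
  end
end

puts response
puts work_done > 0           # true: we stayed busy while the "request" ran
```

This retry-plus-`IO.select` pattern is the core trick an event loop builds on: instead of one blocked thread per request, one thread multiplexes many pending I/O operations.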

This allows a single process to keep one CPU core fully utilized instead of idling while it waits on I/O. But is a non-blocking programming model possible in other programming languages too?

Yes, it is! In this blog post, we will discuss how to write a native event loop in Ruby utilizing (nearly) non-blocking I/O and then see how to improve this design.
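To set expectations, here is a sketch of the shape such a loop might take: callbacks registered per IO, and `IO.select` dispatching them as data arrives. This is an assumed minimal design for illustration, not the implementation discussed later; the `EventLoop` class name and its API are hypothetical.

```ruby
# Minimal select-based event loop sketch (assumed design).
class EventLoop
  def initialize
    @readers = {}  # IO => callback
  end

  # Register a callback to run whenever `io` becomes readable.
  def on_readable(io, &callback)
    @readers[io] = callback
  end

  def run
    until @readers.empty?
      ready, = IO.select(@readers.keys)  # block until some IO is readable
      ready.each do |io|
        data = io.read_nonblock(1024, exception: false)
        if data == :wait_readable
          next                       # spurious wakeup; try again later
        elsif data.nil?
          @readers.delete(io)        # EOF: stop watching this IO
        else
          @readers[io].call(data)    # deliver the chunk to its callback
        end
      end
    end
  end
end

r1, w1 = IO.pipe
r2, w2 = IO.pipe

event_loop = EventLoop.new
results = []
event_loop.on_readable(r1) { |data| results << data }
event_loop.on_readable(r2) { |data| results << data }

# Two "requests" finishing at different times, served by one thread.
Thread.new { sleep 0.05; w2.puts "two"; w2.close }
Thread.new { sleep 0.1;  w1.puts "one"; w1.close }

event_loop.run
```

One thread handles both pipes, invoking each callback as its data arrives; the rest of the post builds on this idea and refines it.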