Oh yes, it is used. I work in the field of network packet processing and have been at two different companies where we process network packets. We operate at the Ethernet or IP level, not at TCP or the layers above it.

Interestingly, in both companies C was chosen over C++. In one of the companies, one of the two products was built on top of the Linux kernel, whereas the other was built in Linux userspace. The kernel product obviously used C, as the Linux kernel is programmed in C, but C was chosen for the userspace product as well. Both products were started around the year 2000 (the kernel product a bit before, the userspace product a bit after).

At the company I moved to after that, the product was also written in C, not C++. It is actually a continuation of a project from the mid-1990s, although due to recent performance demands it was decided that essentially everything would be rewritten. That rewrite gave us the option to switch to C++, but we didn't take it.

In the field of network packet processing, performance counts a lot. So, I want to implement my own hash table with higher performance than existing hash tables. I, not the hash table author, decide which hash function is used: perhaps I want speed and go for MurmurHash3, perhaps I want security and go for SipHash. Memory allocators are obviously custom. In fact, all the important data structures we use have been custom-implemented for the highest possible performance.
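To make the "I pick the hash function" point concrete, here is a minimal sketch (all names invented, not our actual code) of an open-addressing table where the caller injects the hash function at init time, so swapping FNV-1a for MurmurHash3 or SipHash touches nothing in the table itself:

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define HT_SLOTS 64  /* power of two, so we can mask instead of mod */

typedef uint64_t (*hash_fn)(const char *key, size_t len);

struct ht {
    hash_fn     hash;            /* chosen by the user of the table */
    const char *keys[HT_SLOTS];  /* caller owns the key storage     */
    uint64_t    vals[HT_SLOTS];
};

static void ht_init(struct ht *t, hash_fn h)
{
    memset(t, 0, sizeof *t);
    t->hash = h;
}

static bool ht_put(struct ht *t, const char *key, uint64_t val)
{
    uint64_t i = t->hash(key, strlen(key));
    for (unsigned probe = 0; probe < HT_SLOTS; probe++) {
        uint64_t s = (i + probe) & (HT_SLOTS - 1);
        if (!t->keys[s] || strcmp(t->keys[s], key) == 0) {
            t->keys[s] = key;
            t->vals[s] = val;
            return true;
        }
    }
    return false;                /* table full */
}

static bool ht_get(const struct ht *t, const char *key, uint64_t *out)
{
    uint64_t i = t->hash(key, strlen(key));
    for (unsigned probe = 0; probe < HT_SLOTS; probe++) {
        uint64_t s = (i + probe) & (HT_SLOTS - 1);
        if (!t->keys[s])
            return false;        /* empty slot: key absent */
        if (strcmp(t->keys[s], key) == 0) {
            *out = t->vals[s];
            return true;
        }
    }
    return false;
}

/* One possible hash: FNV-1a. The caller could pass MurmurHash3 or
 * SipHash here instead, without touching the table code above. */
static uint64_t fnv1a(const char *key, size_t len)
{
    uint64_t h = 1469598103934665603ULL;
    for (size_t i = 0; i < len; i++) {
        h ^= (unsigned char)key[i];
        h *= 1099511628211ULL;
    }
    return h;
}
```

A real version would of course resize, handle deletion, and avoid strlen per lookup, but the division of responsibility is the point: the hash function is a parameter, not a baked-in choice.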

While there is nothing that would outright prevent the use of C++, it is usually a bad idea. A single thrown exception per packet would drop the packet processing rate to unacceptable levels, so we cannot use C++ exceptions; they are way too slow. We already write kind-of object-oriented C by implementing data structures as structs and then implementing functions that operate on those structs. C++ would allow virtual functions, but then again virtual function calls would kill performance if used everywhere. So, it's better to be explicit and use a function pointer where an indirect call is actually needed.
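The explicit-function-pointer idiom looks roughly like this (a hedged sketch with invented names, not our product code): the "virtual" call is a visible field in the struct, so every indirect call site was put there on purpose:

```c
#include <stddef.h>
#include <stdint.h>

/* A "base class" done the C way: behavior is an explicit function
 * pointer, not a hidden vtable slot. */
struct classifier {
    const char *name;
    /* returns nonzero if the packet matches */
    int (*match)(const struct classifier *self,
                 const uint8_t *pkt, size_t len);
};

/* One concrete "subclass": matches IPv4 packets by the version nibble
 * in the first header byte. */
static int match_ipv4(const struct classifier *self,
                      const uint8_t *pkt, size_t len)
{
    (void)self;
    return len > 0 && (pkt[0] >> 4) == 4;
}

static const struct classifier ipv4_classifier = {
    .name  = "ipv4",
    .match = match_ipv4,
};

/* The call site: the one indirect call is explicit and visible. */
static int classify(const struct classifier *c,
                    const uint8_t *pkt, size_t len)
{
    return c->match(c, pkt, len);
}
```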

C++ does a lot of things behind your back: memory allocation, etc. In C that usually doesn't happen. You can write a function that allocates memory, but it is usually apparent from the function's interface that allocation is happening.

As an example of the kind of micro-optimizations you can do when programming in C, take a look at the container_of macro in the Linux kernel. Sure, you could use container_of in C++ code, but who does that? I mean, it is entirely acceptable in most C programs, but typical C++ programmers would immediately propose something else, such as a linked list that allocates its link nodes as separate blocks. We don't want that, because every extra allocated memory block is bad for performance.
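For readers who haven't seen it, here is the idiom in miniature (the struct names are invented; container_of follows the Linux kernel's definition): the link node is embedded in the payload, so putting a packet on a list costs zero allocations, and container_of recovers the payload from the node pointer:

```c
#include <stddef.h>

/* The Linux kernel's trick: given a pointer to a member, subtract the
 * member's offset to get back the enclosing struct. */
#define container_of(ptr, type, member) \
    ((type *)((char *)(ptr) - offsetof(type, member)))

struct list_node {
    struct list_node *next;
};

struct packet {
    unsigned         id;
    unsigned         len;
    struct list_node node;   /* embedded link: no separate block */
};

static void push(struct list_node **head, struct list_node *n)
{
    n->next = *head;
    *head = n;
}

static struct packet *pop_packet(struct list_node **head)
{
    struct list_node *n = *head;
    if (!n)
        return NULL;
    *head = n->next;
    return container_of(n, struct packet, node);
}
```

Compare that with a non-intrusive list, where each push allocates a separate link node pointing at the packet: twice the cache misses and an allocator call on the fast path.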

Perhaps the only C++ feature that would benefit us is template metaprogramming: you can sometimes avoid virtual function calls while still parameterizing over a function, and allow the compiler to inline it. But template metaprogramming is complicated, and we have managed to fulfill all requirements in C, so the benefit of this feature is not so critical.
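For what it's worth, C has a cruder stand-in for this particular benefit: a macro that stamps out a specialized function, so the "function parameter" is fixed at expansion time and there is no pointer left to defeat the inliner. A hedged sketch with invented names:

```c
#include <stddef.h>

/* Poor man's template: generate a type- and comparator-specialized
 * array-max function. The comparator is substituted at compile time,
 * so the compiler sees a plain expression it can inline, unlike a
 * function pointer passed at run time (as with qsort's comparator). */
#define DEFINE_ARRAY_MAX(name, type, better)                 \
    static type name(const type *a, size_t n)                \
    {                                                        \
        type best = a[0];                                    \
        for (size_t i = 1; i < n; i++)                       \
            if (better(a[i], best))                          \
                best = a[i];                                 \
        return best;                                         \
    }

#define GT(x, y) ((x) > (y))
DEFINE_ARRAY_MAX(max_int, int, GT)  /* specialized, fully inlinable */
```

It is uglier than a template and harder to debug, which is part of why the missing feature never felt critical: the workaround exists when we need it.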

In one of the companies, we actually had a custom compiled language in which some of the features were implemented. Guess what the target language of the compiler was? Assembly? No, we had to support both 32-bit and 64-bit architectures. C++? Surely you jest. Obviously, it was C with GCC's computed goto. So, the custom language was compiled to C (or rather the GCC variant of C that supports computed goto), and the C compiler produced the assembly.
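For readers unfamiliar with computed goto: it is GCC's labels-as-values extension (also supported by clang, but not ISO C), and the generated code is essentially a threaded dispatch loop. A toy sketch with a made-up three-opcode bytecode, nothing like our actual instruction set:

```c
#include <stdint.h>

enum { OP_HALT, OP_PUSH, OP_ADD };

/* Tiny threaded interpreter: each handler ends by jumping directly to
 * the next opcode's handler via goto *dispatch[...], avoiding the
 * loop-plus-switch overhead of a conventional interpreter. Requires
 * GCC/clang (labels as values). */
static int run(const uint8_t *code)
{
    static void *dispatch[] = { &&do_halt, &&do_push, &&do_add };
    int stack[16];
    int sp = 0;
    const uint8_t *ip = code;

    goto *dispatch[*ip++];

do_push:
    stack[sp++] = *ip++;          /* next byte is the immediate */
    goto *dispatch[*ip++];
do_add:
    sp--;
    stack[sp - 1] += stack[sp];
    goto *dispatch[*ip++];
do_halt:
    return stack[sp - 1];
}
```

The win over switch-based dispatch is that each handler gets its own indirect jump, which branch predictors handle much better on opcode-heavy workloads; that is exactly why it was an attractive compilation target for the custom language.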