Two of the hottest topics in the .NET community right now are CoreCLR and ASP.NET Core. As you probably already know, Microsoft decided to create a cross-platform, high-performance version of ASP.NET, and even open-sourced it.

What is ASP.NET Core?

There are many great posts about the basics of ASP.NET Core, so this post will not cover the programming model or the differences between ASP.NET Core and classic ASP.NET. Instead, I would like to focus on another aspect of this new framework: performance.

If you have not tried it yet, the official documentation is a great starting point for getting to know the framework.

Here is a short list of the basics:

ASP.NET Core is a complete rewrite of the ASP.NET framework (including the pipeline and the hosting model)

It’s open source and cross-platform

It can run on .NET Core and on the full .NET Framework

It’s fast!

It is not tied to IIS anymore! It can be self-hosted, or it can easily be hosted by basically any web server.

If you go further down to the technical details you will find other interesting things:

It is built on top of libuv (the same library that Node.js uses)

The Performance Aspects – Kestrel

The heart of the ASP.NET Core runtime is a new managed web server called Kestrel: a cross-platform web server written in C# on top of libuv. One of the main reasons for this project was performance. The TechEmpower benchmarks show what the situation looked like for the classic ASP.NET framework: most of the time it was not even on the list, and when it did make it, it landed at the bottom. Microsoft's goal was to create something that lands at the top of the list, and the “secret goal” was to beat Node.js. There had been no official benchmark run recently, but according to David Fowler they achieved that goal in the team's own ASP.NET benchmarks.

Update: On 16 November 2016 the final results of Round 13 were published, and in the Plaintext test ASP.NET Core hit 1,822,366 RPS. For comparison, Node.js is at 467,246 RPS.

https://twitter.com/davidfowl/status/700139258279899137

How did they reach this performance?

Damian Edwards and David Fowler gave a very interesting talk on this topic at NDC Oslo:

ASP.NET Core Kestrel: Adventures in building a fast web server – Damian Edwards, David Fowler from NDC Conferences on Vimeo.

One of the main focuses of the performance optimization in Kestrel was reducing GC pressure. This is a lesson for everyone writing managed code: the less you allocate, the less work the GC has to do, and that saves CPU.

Another example is the optimizations around strings: as it turns out, in HTTP basically everything is a string. Therefore, well-known strings from the HTTP protocol are allocated once per server and reused every time they are needed for comparisons, for writing to the response, and so on. An example of this can be found here
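As a minimal sketch of the idea (not Kestrel's actual implementation; the class and method names here are invented for illustration): the server keeps one canonical copy of each well-known string and returns that instance whenever the incoming bytes match, instead of allocating a fresh string per request.

```csharp
using System;
using System.Text;

static class KnownStrings
{
    // Allocated once per server: the canonical string and its ASCII bytes.
    const string KeepAlive = "keep-alive";
    static readonly byte[] KeepAliveBytes = Encoding.ASCII.GetBytes(KeepAlive);

    // Return the cached instance when the raw bytes match,
    // avoiding a per-request string allocation on the hot path.
    public static string GetHeaderValue(byte[] raw)
    {
        if (BytesEqual(raw, KeepAliveBytes)) return KeepAlive;
        return Encoding.ASCII.GetString(raw); // slow path: allocate
    }

    static bool BytesEqual(byte[] a, byte[] b)
    {
        if (a.Length != b.Length) return false;
        for (int i = 0; i < a.Length; i++)
            if (a[i] != b[i]) return false;
        return true;
    }
}
```

Because the fast path always hands back the same object, every request that sends "keep-alive" shares a single string instance for the lifetime of the server.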

(Another interesting optimization is the usage of a memory pool; the implementation can be found here.)

This memory pool creates a pinned memory space allocated on the Large Object Heap. “Pinned” means that the GC will not move that part of memory during a compaction step. This makes it possible to share data between native and managed code without any extra allocation or marshaling.
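A minimal sketch of how such a pinned pool could look (assumed names and sizes, not Kestrel's actual MemoryPool implementation):

```csharp
using System;
using System.Runtime.InteropServices;

// A large backing array lands on the Large Object Heap (arrays over
// ~85,000 bytes), and pinning it tells the GC never to move it, so its
// address stays stable and can be handed to native code such as libuv.
class PinnedMemoryPool : IDisposable
{
    const int SliceSize = 4096;
    readonly byte[] _slab = new byte[128 * 1024]; // big enough for the LOH
    readonly GCHandle _handle;

    public PinnedMemoryPool()
    {
        _handle = GCHandle.Alloc(_slab, GCHandleType.Pinned);
    }

    // Native code can use this stable address directly, no marshaling.
    public IntPtr BaseAddress => _handle.AddrOfPinnedObject();

    // Hand out fixed-size slices of the slab instead of allocating
    // a fresh buffer per request.
    public ArraySegment<byte> Rent(int index) =>
        new ArraySegment<byte>(_slab, index * SliceSize, SliceSize);

    public void Dispose() => _handle.Free();
}
```

The trade-off of pinning is exactly why the slab lives on the LOH: the LOH is not compacted by default, so a pinned block there does not fragment the rest of the managed heap.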

Up to this point, I have listed optimizations that make sense in any application. Pooling well-known objects, preallocating strings, and reusing them can help in any application where performance matters.

Of course, for a web server you can go further. Here are things which “you should only do when you really need them”.

It is very interesting (and fast!) how Kestrel decides which HTTP method an incoming HTTP request is using. Every HTTP request starts with the request method. As you might guess: if you want to reach more than 1 million RPS with a web server, then doing character-by-character string comparison to look for the words ‘GET’ and ‘POST’ in the incoming HTTP request is not the way to go.

How is this implemented in Kestrel?

First, the ASCII code of every well-known HTTP method is stored in a long.

As it turns out, every one of them fits into a long (eight bytes). For every well-known HTTP method there is also a mask depending on its length:

Now, when a request comes in, instead of turning the start of the request into a string, its first eight bytes are read as a long, bitwise AND-ed with the mask, and compared against the precomputed values of the well-known HTTP methods:
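The technique can be sketched as follows. This is a simplified illustration, not Kestrel's actual code: the names are invented for the example, and it assumes a little-endian platform (which BitConverter matches on x86/x64).

```csharp
using System;
using System.Text;

static class KnownMethods
{
    // Masks keep only the bytes belonging to the method plus the trailing
    // space, zeroing out the rest of the eight-byte window.
    const ulong Mask4 = 0x00000000FFFFFFFF; // 3-letter method + space
    const ulong Mask5 = 0x000000FFFFFFFFFF; // 4-letter method + space

    // The ASCII bytes of each well-known method, packed once into a ulong.
    static readonly ulong Get  = Encode("GET ");
    static readonly ulong Post = Encode("POST ");

    static ulong Encode(string method)
    {
        var bytes = Encoding.ASCII.GetBytes(method);
        ulong value = 0;
        for (int i = 0; i < bytes.Length; i++)
            value |= (ulong)bytes[i] << (8 * i); // little-endian packing
        return value;
    }

    // Decide the method from the first bytes of the request line
    // without allocating a string on the managed heap.
    public static string Match(byte[] start)
    {
        if (start.Length < 8) return null;       // real parsers handle this
        ulong value = BitConverter.ToUInt64(start, 0);
        if ((value & Mask4) == Get)  return "GET";
        if ((value & Mask5) == Post) return "POST";
        return null; // unknown: fall back to slower parsing
    }
}
```

Two constant-time bitwise operations replace a character-by-character comparison, and including the trailing space in the mask guarantees that, say, "GETS" is not mistaken for "GET".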

(Source)

So, basically, without allocating anything on the managed heap, the framework can decide whether the incoming request is a GET, a POST, or something else.

How to monitor an ASP.NET Core application?

If performance matters for your web application, then ASP.NET Core is a perfect fit. And if you care about performance, you obviously want to monitor and measure your application, and this is where Dynatrace comes into play. We support ASP.NET Core on the full .NET Framework!

The way you deployed classic ASP.NET applications was always the same: you hosted them in IIS. Yes, there were special cases, like self-hosting WCF services or self-hosting Web API, but in most cases you used IIS.

In the new world, every ASP.NET Core application is a console application and can therefore be self-hosted. The other option is to use a “traditional” web server as a reverse proxy: it forwards HTTP requests to the ASP.NET Core process but does not do much more.
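For illustration, a minimal self-hosted application from this era (using the 1.x WebHostBuilder API) could look like the sketch below; the URL and the response text are placeholders:

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Http;

// A plain console application: Kestrel listens directly on the
// configured endpoint, with no IIS involved.
public class Program
{
    public static void Main(string[] args)
    {
        var host = new WebHostBuilder()
            .UseKestrel()                      // the managed web server
            .UseUrls("http://localhost:5000")  // self-hosted endpoint
            .Configure(app => app.Run(ctx =>
                ctx.Response.WriteAsync("Hello from Kestrel!")))
            .Build();

        host.Run(); // blocks until the process is shut down
    }
}
```

When you put Nginx or IIS in front of such a process as a reverse proxy, the proxy simply forwards requests to this same endpoint.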

Therefore, you have these two options: 1) running self-hosted, or 2) running with IIS.

Running self-hosted

If you start the compiled .exe, the process is detected as a .NET process, but, like any other .NET process, it is not monitored by default. You have to turn monitoring on:

Once you have restarted the application, you are done: monitoring works just as for any other .NET application, including database monitoring, failure analysis, and all the other good stuff we offer.

Hosting in IIS

If you use IIS, the .NET agent is activated automatically, so just by installing the agent you get the same capabilities as for a classic ASP.NET application:

You can start working with Dynatrace right away by going here.