Structured logging is awesome, but the top reason for me is easily overlooked:

There are orders of magnitude more events in a log than there are logging statements that generated them.

Have you ever opened up a log file looking for the cause of an issue and just felt helpless? It’s a wall of text! Out of a million events, how can you track down the one or two events with interesting information? When you don’t know what you’re looking for, you can’t search for it.

Even an empty ASP.NET Core application generates a flood of events when debug logging is turned on. I ran the WebSample project found here and hit refresh a few times. Even the few hundred events that this generated are hard on the eyes.

Check this out, though:

We start with a dense log stream containing all kinds of messages, and then - by type - exclude them until only a few interesting ones remain! How does that work?

Message templates as event types

When you write several events with log4net, the message produced for each one is similar:

```csharp
foreach (var alert in alerts)
{
    log.Debug("Disk quota {0} exceeded by user {1}", alert.Quota, alert.Username);
}

// ->
// Disk quota 100 exceeded by user Alice
// Disk quota 150 exceeded by user Bob
```

But, as far as a log search is concerned, they’re just two lines of different text.

You may wish to find all “disk quota exceeded” events, but the approach of looking for events like 'Disk quota%' falls down as soon as another event appears that is superficially similar:

Disk quota 100 set for user Carol

Text logging throws away the information we initially have about the source of the event, and this has to be reconstructed with increasingly fragile and elaborate match expressions as the application gets more complex.
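To see how quickly text matching falls apart, here's a minimal sketch (the log lines are hypothetical) showing a naive prefix search intended to find only "exceeded" events:

```csharp
using System;
using System.Linq;

var lines = new[]
{
    "Disk quota 100 exceeded by user Alice",
    "Disk quota 150 exceeded by user Bob",
    "Disk quota 100 set for user Carol"
};

// A prefix match meant to find "disk quota exceeded" events...
var matched = lines.Where(l => l.StartsWith("Disk quota")).ToArray();

// ...also catches the unrelated "set for user" event.
Console.WriteLine(matched.Length); // prints 3
```

Tightening the pattern helps for a while, but every new, similar-looking message means revisiting the match expression.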

By contrast, when you write events through Serilog, the library keeps track of the message template as well as the arguments that are substituted in:

```csharp
log.Debug("Disk quota {Quota} exceeded by user {Username}", alert.Quota, alert.Username);

// ->
// Disk quota 100 exceeded by user "Alice"
```

The plain text rendering is like the log4net version, but behind the scenes, the "Disk quota {Quota} exceeded by user {Username}" message template itself is associated with all of the events created through this logging statement.

With the events stored in a log server or database, you can query where @MessageTemplate = 'Disk quota {Quota} exceeded by user {Username}' and get exactly the events where the disk quota was exceeded, no more and no less.

This is pretty much how the screencast above tunes out the noise. Excluding events by event type is extremely powerful - even in a very busy log stream, a huge proportion of the events will come from a handful of logging statements. Once you can exclude messages by type, you can filter down to the more interesting and unusual events quickly.

If typing out message templates in log searches sounds like a bore, there’s one small ergonomic improvement to mention.

Event types from numeric hashes

It’s not always convenient to store the entire message template with every log event. Though they act the part, message templates are also a bit of an awkward way of identifying a set of related events.

To make message templates feel like a more natural identifier, they can be hashed into a numeric value that’s easier to work with. This means instead of typing Disk quota {Quota} exceeded by user {Username} you can deal with a number such as 08fea8bb .

```
"Disk quota {Quota} exceeded by user {Username}"  --hash-->  0x08fea8bb
"Disk quota {Quota} set for user {Username}"      --hash-->  0xa990e732
```
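Any stable hash of the template text will do the job. As an illustration only - this is not the exact algorithm Serilog or Seq use - here's a sketch based on 32-bit FNV-1a:

```csharp
using System;
using System.Text;

// Illustrative event type hash: any deterministic hash of the message
// template text yields a compact, stable identifier for the event type.
static uint EventTypeHash(string messageTemplate)
{
    // 32-bit FNV-1a over the UTF-8 bytes of the template
    var hash = 2166136261u;
    foreach (var b in Encoding.UTF8.GetBytes(messageTemplate))
    {
        hash ^= b;
        hash *= 16777619u;
    }
    return hash;
}

var exceeded = EventTypeHash("Disk quota {Quota} exceeded by user {Username}");
var set = EventTypeHash("Disk quota {Quota} set for user {Username}");

// Same template always yields the same id; different templates yield different ids.
Console.WriteLine($"0x{exceeded:x8} 0x{set:x8}");
```

The important property is determinism: every event written through the same logging statement gets the same id, so the id can be computed anywhere in the pipeline.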

Where the hashing is performed depends on your logging pipeline:

- For simple log files, you can plug an enricher into Serilog that will assign an EventType property automatically to all Serilog events.

- The Serilog RenderedCompactJsonFormatter includes an event type in its JSON output, which is a great option if you plan to store events as JSON.

- Seq (shown in the screencast) will do this automatically for events from Serilog or Microsoft.Extensions.Logging. Other log servers have various ingestion hooks through which you can plug this in.

You can use the technique mentioned in the first point with just about any log storage mechanism, so it also provides a good fallback.
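For the first option, an enricher along these lines will work; this is a sketch assuming the Serilog package is referenced, and EventTypeEnricher (including its choice of FNV-1a as the hash) is illustrative rather than something built into the library:

```csharp
using Serilog.Core;
using Serilog.Events;

// Illustrative enricher (not built into Serilog) that tags every event
// with a hash of its message template.
class EventTypeEnricher : ILogEventEnricher
{
    public void Enrich(LogEvent logEvent, ILogEventPropertyFactory propertyFactory)
    {
        // Any stable hash of the template text will do; 32-bit FNV-1a is used for brevity.
        var hash = 2166136261u;
        foreach (var b in System.Text.Encoding.UTF8.GetBytes(logEvent.MessageTemplate.Text))
        {
            hash ^= b;
            hash *= 16777619u;
        }

        var property = propertyFactory.CreateProperty("EventType", $"0x{hash:x8}");
        logEvent.AddPropertyIfAbsent(property);
    }
}

// Plugged into the pipeline, the property can then appear in the output template:
//
// Log.Logger = new LoggerConfiguration()
//     .Enrich.With<EventTypeEnricher>()
//     .WriteTo.File("log.txt", outputTemplate:
//         "{Timestamp:HH:mm:ss} [{EventType} {Level:u3}] {Message:lj}{NewLine}")
//     .CreateLogger();
```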

Explicit event types

The Microsoft.Extensions.Logging API supports message templates, and therefore numeric hashes as well, just like Serilog. For simple log statements, the only difference between the two APIs is the addition of a Log prefix to the method name:

```csharp
log.LogDebug("Disk quota {Quota} exceeded by user {Username}", alert.Quota, alert.Username);
```

Because Microsoft.Extensions.Logging is an abstraction fitting many different logging libraries, it’s also possible to specify event types manually by passing one as the first ( eventId ) parameter:

```csharp
log.LogDebug(1001, "Disk quota {Quota} exceeded by user {Username}", alert.Quota, alert.Username);
```
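One way to keep explicit ids manageable is to declare them in a single registry class, so uniqueness can be checked at a glance; here's a sketch assuming Microsoft.Extensions.Logging is referenced (the LogEvents class and the specific numbers are illustrative, not a framework convention):

```csharp
using Microsoft.Extensions.Logging;

// A central registry of explicit event ids makes collisions easy to spot.
static class LogEvents
{
    public static readonly EventId DiskQuotaExceeded = new EventId(1001, nameof(DiskQuotaExceeded));
    public static readonly EventId DiskQuotaSet = new EventId(1002, nameof(DiskQuotaSet));
}

// Usage:
// log.LogDebug(LogEvents.DiskQuotaExceeded,
//     "Disk quota {Quota} exceeded by user {Username}", alert.Quota, alert.Username);
```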

There are a couple of advantages of explicit event ids:

- They remain stable even if the message changes

- They may be supported by some back-ends that don't or can't deal with message templates

The disadvantages are that there’s more to type, and more effort involved in ensuring each id is unique. In some contexts that will make sense, but it’s unnecessary friction in others. My advice is to use message templates - or hashes programmatically derived from them - wherever possible.

There’s no time like now

The sooner you start storing and using event types, the better your diagnostic experience will be. Unless you’re using them day-to-day, it’s easy to log pre-formatted text without realizing it, and compromise the great noise-cancelling power that event types can give.

By capturing the original template that generated an event, or a numeric hash of it, a log stream containing millions of events can be interrogated as a stream of just a few hundred event types. It's a profound change in perspective and something you have to try.

This is the fourth post in a series; up next: context and correlation.