In October 2015, Netcraft found that after Apache and NGINX, Microsoft IIS is the third-most-common web server used by the one million largest websites in the world. Although IIS’s popularity is declining, it’s still the most popular commercial web server and it is understandably popular among Microsoft developers.

Still, it is difficult to extract relevant and actionable insights from the hundreds or even thousands of log entries that IIS web servers can generate every second. In this post, I want to look deeper into IIS log data and show three ways that DevOps engineers and system administrators can use Elasticsearch, Logstash, and Kibana to understand their IIS logs.

For reference: IIS logs can be exported in a W3C format, and the different fields can be customized in the IIS admin user interface.

Elasticsearch, Logstash, and Kibana, commonly known as the ELK Stack, can collect, parse, and store all IIS log data. The information can then be visualized in Kibana so that users can be alerted to specific problems and fix them immediately.

How to Parse IIS Logs Using Logstash

Often, one of the first things to do is filter and enhance your IIS logs with Logstash. Here is a sample IIS log line and the Logstash configuration that we use in our internal environment to parse it.

A sample IIS access log entry:

2015-12-08 06:41:42 GET /handler/someservice.ashx someData= 80 10.223.22.122 HTTP/1.1 Mozilla/5.0+(Windows+NT+6.1;+WOW64)+AppleWebKit/537.36+(KHTML,+like+Gecko)+Chrome/46.0.2490.86+Safari/537.36 1.2.1005047168.1446986881 https://www.logz.io/testing.aspx www.logz.io 200 638 795 0

The Logstash configuration to parse that IIS access log entry:

filter {
  grok {
    match => [
      "message", "%{TIMESTAMP_ISO8601:timestamp} %{NOTSPACE:s-sitename} %{WORD:cs-method} %{URIPATH:cs-uri-stem} %{NOTSPACE:cs-uri-query} %{NUMBER:s-port} %{NOTSPACE:cs-username} %{IPORHOST:c-ip} %{NOTSPACE:cs(User-Agent)} %{NOTSPACE:cs(Cookie)} %{NOTSPACE:cs(Referer)} %{NOTSPACE:cs-host} %{NUMBER:sc-status:int} %{NUMBER:sc-substatus:int} %{NUMBER:sc-win32-status:int} %{NUMBER:sc-bytes:int} %{NUMBER:cs-bytes:int} %{NUMBER:time-taken:int}",
      "message", "%{TIMESTAMP_ISO8601:timestamp} %{IPORHOST:s-sitename} %{WORD:cs-method} %{URIPATH:cs-uri-stem} %{NOTSPACE:cs-uri-query} %{NUMBER:s-port} %{NOTSPACE:cs-username} %{IPORHOST:c-ip} %{NOTSPACE:cs(User-Agent)} %{NOTSPACE:cs(Referer)} %{NUMBER:response:int} %{NUMBER:sc-substatus:int} %{NUMBER:sc-win32-status:int} %{NUMBER:time-taken:int}",
      "message", "%{TIMESTAMP_ISO8601:timestamp} %{WORD:cs-method} %{URIPATH:cs-uri-stem} %{NOTSPACE:cs-post-data} %{NUMBER:s-port} %{IPORHOST:c-ip} HTTP/%{NUMBER:c-http-version} %{NOTSPACE:cs(User-Agent)} %{NOTSPACE:cs(Cookie)} %{NOTSPACE:cs(Referer)} %{NOTSPACE:cs-host} %{NUMBER:sc-status:int} %{NUMBER:sc-bytes:int} %{NUMBER:cs-bytes:int} %{NUMBER:time-taken:int}"
    ]
  }
  geoip {
    source => "c-ip"
    target => "geoip"
    add_tag => [ "iis-geoip" ]
  }
  useragent {
    source => "cs(User-Agent)"
  }
}
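To make the field mapping concrete, here is a minimal sketch in Python (not Logstash itself) of how the third grok pattern above maps the space-delimited sample entry into named fields. The field names follow the grok patterns; the split-based parsing is a simplification that works here only because this sample line contains no quoted, space-containing values.

```python
# W3C field names, in the order used by the third grok pattern above.
FIELDS = ["date", "time", "cs-method", "cs-uri-stem", "cs-post-data",
          "s-port", "c-ip", "c-http-version", "cs(User-Agent)",
          "cs(Cookie)", "cs(Referer)", "cs-host", "sc-status",
          "sc-bytes", "cs-bytes", "time-taken"]

# The sample IIS access log entry from the article.
line = ("2015-12-08 06:41:42 GET /handler/someservice.ashx someData= 80 "
        "10.223.22.122 HTTP/1.1 "
        "Mozilla/5.0+(Windows+NT+6.1;+WOW64)+AppleWebKit/537.36+"
        "(KHTML,+like+Gecko)+Chrome/46.0.2490.86+Safari/537.36 "
        "1.2.1005047168.1446986881 https://www.logz.io/testing.aspx "
        "www.logz.io 200 638 795 0")

# Pair each whitespace-delimited token with its field name.
entry = dict(zip(FIELDS, line.split()))
entry["sc-status"] = int(entry["sc-status"])  # grok's :int conversion

print(entry["cs-method"], entry["sc-status"])  # GET 200
```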

Now that you’ve seen how to use the ELK Stack to analyze Microsoft IIS log files, I’ll present some use cases of when to use Elasticsearch, Logstash, and Kibana in this context.

IIS Log Analysis Use Cases

Operations Analysis

Whenever traffic significantly exceeds the long-term average of site visits, or whenever error rates run higher than normal, the ELK Stack can send alerts to operations teams. This way, slow website response rates can be fixed before the user experience is affected.

For example, Elasticsearch, Logstash, and Kibana can be used as a log management stack to see when there is a sharp decline in the number of requests for web pages or a significant spike in traffic that has caused a server to crash. If both of these signals appear in the same dashboard, you could be facing a DDoS attack. In such a scenario, ELK can be used to find the origin IP address and block it.
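The kind of triage that a Kibana dashboard automates can be sketched in a few lines of Python. This is an illustration only, with made-up entries; in practice Elasticsearch aggregations would do this counting over the parsed `c-ip` field at scale.

```python
from collections import Counter

# Hypothetical parsed log entries; in ELK these would come from the
# grok-parsed c-ip field in Elasticsearch.
entries = [
    {"c-ip": "10.223.22.122", "sc-status": 200},
    {"c-ip": "203.0.113.9", "sc-status": 503},
    {"c-ip": "203.0.113.9", "sc-status": 503},
    {"c-ip": "203.0.113.9", "sc-status": 503},
]

# Count requests per client IP and surface the heaviest source --
# a candidate origin to investigate (and possibly block) in a DDoS.
hits = Counter(e["c-ip"] for e in entries)
top_ip, count = hits.most_common(1)[0]
print(top_ip, count)  # 203.0.113.9 3
```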

Within our ELK Stack alerts feature, for example, one visualization that we include shows the number of log lines in which the cache responds from disk.

This visualization and more can be found in our ELK Apps library by searching for IIS.

Technical SEO

In SEO, the importance of creating quality content is increasingly well understood. But if Google cannot access and index the content, or if the Googlebot hits its crawl limit before finding the content in the first place, then those marketing materials will be useless.

As an IIS dashboard in Kibana can show, IIS log analysis with ELK can tell you when any page on your website was last crawled by Google, how Google prioritizes content in different subdomains and subdirectories, and which URLs are indexed the most and least. In one of our related posts, you can see how to use server log analysis for technical SEO.

Business Intelligence

IIS logs contain everything that you need to analyze your application's users: their geographic locations, the URLs that they visit, and the quality of their UX. With ELK, you can correlate the IIS server data with infrastructure-level logs to gain more insight into how your infrastructure is affecting your visitors' experiences on your website.

For example, memory loads, CPUs, and response times can be analyzed together to see whether stronger machines might be needed in your overall environment.
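As a rough illustration of that correlation, assuming entries parsed by the grok patterns above, you could average the IIS `time-taken` field per response code to see whether errors coincide with slow responses. The entries below are made up; in ELK this would be a Kibana visualization over an Elasticsearch aggregation.

```python
from collections import defaultdict

# Hypothetical parsed entries: sc-status and time-taken (ms) come from
# the grok patterns shown earlier in this post.
entries = [
    {"sc-status": 200, "time-taken": 40},
    {"sc-status": 200, "time-taken": 60},
    {"sc-status": 500, "time-taken": 900},
]

totals = defaultdict(lambda: [0, 0])  # status -> [sum_ms, count]
for e in entries:
    t = totals[e["sc-status"]]
    t[0] += e["time-taken"]
    t[1] += 1

# Average response time per status code.
avg = {status: s / n for status, (s, n) in totals.items()}
print(avg)  # {200: 50.0, 500: 900.0}
```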

Many of these visualizations can be found in our free ELK Apps library by searching for IIS. Here are two examples: one is the response time that we’re getting per response code, and the other is a heat map of all of our visitors.

Shipping IIS Logs to Elasticsearch

Configuring Filebeat's IIS module is pretty straightforward. This is how to do it on Linux.

First, set up the IIS Module:

./filebeat modules enable iis

And set up the environment:

./filebeat setup -e

And then run Filebeat:

./filebeat -e

Here is what it looks like with Homebrew.

Set up the IIS Module:

filebeat modules enable iis

Check that the module is there:

filebeat modules list

Set up the environment:

filebeat setup -e

And then run Filebeat:

filebeat -e

Variations could occur depending on how Filebeat is installed.

Shipping IIS Logs to Logz.io via NXLog

Configure NXLog

Copy the code below into your config file, which by default is located at C:\Program Files (x86)\nxlog\conf\nxlog.conf.

Remember to replace <<SHIPPING-TOKEN>> with the token of the account you want to ship to, and to replace <<LISTENER-HOST>> with your region's listener host (for example, listener.logz.io).

define ROOT C:\\Program Files (x86)\\nxlog
define ROOT_STRING C:\\Program Files (x86)\\nxlog
define CERTDIR %ROOT%\\cert

Moduledir %ROOT%\\modules
CacheDir %ROOT%\\data
Pidfile %ROOT%\\data\\nxlog.pid
SpoolDir %ROOT%\\data
LogFile %ROOT%\\data\\nxlog.log

<Extension charconv>
    Module xm_charconv
    AutodetectCharsets utf-8, euc-jp, utf-16, utf-32, iso8859-2
</Extension>

# Create one input block for each application
<Input IIS_Site1>
    Module im_file
    File "C:\\inetpub\\logs\\LogFiles\\W3SVC1\\u_ex*.log"
    SavePos TRUE
    Exec if $raw_event =~ /^#/ drop();
    Exec convert_fields("AUTO", "utf-8");
    Exec $raw_event = '[<<SHIPPING-TOKEN>>][type=iis]' + $raw_event;
</Input>

<Output out>
    Module om_tcp
    Host <<LISTENER-HOST>>
    Port 8010
</Output>

<Route IIS>
    Path IIS_Site1 => out
</Route>

Restart NXLog

PS C:\Program Files (x86)\nxlog> Restart-Service nxlog

Configure the IIS Module

Configure the IIS module in its YAML file at modules.d/iis.yml. This is a very simple example of the IIS module YAML:

- module: iis
  access:
    enabled: true
    var.paths: ["C:/inetpub/logs/LogFiles/*/*.log"]
  error:
    enabled: true
    var.paths: ["C:/Windows/System32/LogFiles/HTTPERR/*.log"]

In Conclusion

IIS users should analyze their IIS logs regularly. From business intelligence to technical SEO and more, we have dashboards for these and other operations use cases in our free ELK Apps library.

Have any tips on IIS log file analysis? We’d love to hear your thoughts in the comments below!

