So – I wanted to get Splunk, but in my organisation that was never going to happen (you want something that costs MONEY?! Ludicrous!), so we had to come up with a compromise. A colleague of mine went hunting for some open source logging software and found that the combination of ElasticSearch, LogStash, Kibana and nxLog worked well. He tested it on his PC, wrote a few lines on how to get it roughly working and then sent it through to me to get it working from a server perspective. (Hi Ken! ^_^)

I’ve just recently finished the base setup (with a little assistance) and it’s getting information from our production, test and development AD DCs – and WOAH do they waffle a lot. After posting on Twitter that I’d got this working (because I was excited I’d got it working…duh!), someone asked me to do a blog post on the setup. So this is that blog post. I’ve written this in a way that anyone else with basic Windows Server knowledge could install this if required – yes, it is dumbed down a bit in some areas, but that’s because I wrote it to be as idiot-proof as possible.

UPDATE (3rd August 2016) – This document has now been updated with details regarding the most recent ELK stack. I have recently done an install using these instructions with ElasticSearch 2.3.4, LogStash 2.3.4 & Kibana 4.5.3.

Software used:

Other basic setup (specific to the environment we setup):

Virtual Server

Windows Server 2012

2vCPU

4GB RAM

Two HDD – C: & D: (disk space is up to you – the more you give it, the more logs you can stash!)

Log Server Installation Instructions

Folder creations

On the C: drive, create a C:\Program Files\Java\jdk[version number] directory



On the D: drive:

- Create a D:\LogData directory (or whatever you want to call where you dump your logs)
- Create a D:\ElasticSearch directory
- Create a D:\Kibana directory
- Create a D:\LogStash directory



Create a Service Account

In your domain, create a new service account user

On the LogServer, add the new user to the ‘Administrators’ group (yes, I know this is ugly and dirty, but it was the quickest and easiest way to get this up and running without having to mess too much with permissions)



Install Java JRE

Extract Java JRE to C:\Program Files\Java\jdk[version number]

Set up a system environment variable:

- Right-click on ‘My Computer’ and select ‘Properties’
- Click on ‘Advanced system settings’
- Select the ‘Advanced’ tab
- Click on ‘Environment Variables…’
- Under ‘System variables’ click ‘New…’
- Enter the following:
  - Variable name: JAVA_HOME
  - Variable value: C:\Program Files\Java\jdk[version number]\jre



Java is now installed and the variables required for the LogServer set.

Install ElasticSearch

Extract the downloaded ElasticSearch files to D:\ElasticSearch

Edit the D:\ElasticSearch\config\elasticsearch.yml file:

- Set the cluster.name to “[clustername]” (take note of what you use – this will be useful if you decide to add in more ElasticSearch servers later)
- Set the path.data option to D:\LogData
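The two edits above end up looking something like this in elasticsearch.yml (the cluster name “logcluster” here is just a placeholder – use whatever name you noted down):

```yaml
# A name shared by every node that should join this cluster
cluster.name: logcluster

# Store the indices on the data drive instead of the install directory
path.data: D:\LogData
```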

Edit the D:\ElasticSearch\bin\service.bat file:

- Under the REM ***** JAVA options ***** line, add in an entry “set ES_MAX_MEM=4g” (or however much memory you want it to use – we gave it access to everything because it is the only service hosted on this server)
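For reference, the relevant part of service.bat would look roughly like this after the edit (a sketch – the surrounding lines in your copy of the file may differ):

```bat
REM ***** JAVA options *****

REM Cap the ElasticSearch heap at 4GB (adjust to suit your server)
set ES_MAX_MEM=4g
```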



Set up a firewall rule to allow the ElasticSearch ports:

- Open ‘Windows Firewall with Advanced Security’
- Select ‘Inbound Rules’
- Click ‘New Rule…’
- In the window that appears, select ‘Port’ and click ‘Next’
- Make sure ‘TCP’ is selected and check ‘Specific local ports:’ 9200, 9300
- Click ‘Next’
- Select ‘Allow the connection’ and click ‘Next’
- Select all the profiles you want it to use (we selected all three, as we’ll have logs coming from multiple sources/multiple domains and from our DMZ) and click ‘Next’
- Name the rule ‘ElasticSearch’ and give it a description (if you so desire)
- Click ‘Finish’
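If you’d rather not click through the wizard, the same rule can be created from an elevated PowerShell prompt (Server 2012 ships the NetSecurity cmdlets; the display name is just what I’ve chosen here):

```powershell
# Allow inbound TCP 9200 (HTTP API) and 9300 (node transport) for ElasticSearch
New-NetFirewallRule -DisplayName "ElasticSearch" -Direction Inbound `
    -Protocol TCP -LocalPort 9200,9300 -Action Allow -Profile Any
```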



To test that everything has been configured correctly:

- Open a command prompt (no admin rights needed)
- cd to D:\ElasticSearch and run bin\elasticsearch.bat
- If it doesn’t sit there waiting for input, something isn’t configured properly – go back over your configuration to ensure everything has been set correctly
- Ctrl+C to quit



Install the ElasticSearch Windows service:

- Open a command prompt as an administrator
- cd to D:\ElasticSearch\bin
- Type: service install
- The ElasticSearch service is now installed



Configure the ElasticSearch service:

- Open ‘Services’
- Find the ‘ElasticSearch’ service (if it’s not present, go back and install the service)
- Right-click and select ‘Properties’
- On the ‘General’ tab, change ‘Startup type’ to Automatic
- On the ‘Log On’ tab, change to use the service account you created
- Click ‘Apply’



ElasticSearch will now be running on the server successfully as a service.

Install Kibana

(Unlike the previous version, Kibana 4.* no longer requires IIS to run and instead runs inside its own webserver, yay!)

Extract the Kibana files to D:\Kibana

Edit the Kibana config file:

- Browse to D:\Kibana\config
- Right-click on ‘kibana.yml’ and click ‘Edit’
- In the file that opens, edit the line that starts with ‘elasticsearch.url:’ to be: elasticsearch.url: “http://[FQDN of log server]:9200”
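Since Kibana is running on the same box as ElasticSearch here, the edited line ends up looking something like this (the FQDN is a placeholder – use your own log server’s name):

```yaml
# Point Kibana at the ElasticSearch HTTP API on the log server
elasticsearch.url: "http://logserver.example.com:9200"
```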



Set up a firewall rule to allow the Kibana port:

- Open ‘Windows Firewall with Advanced Security’
- Select ‘Inbound Rules’
- Click ‘New Rule…’
- In the window that appears, select ‘Port’ and click ‘Next’
- Make sure ‘TCP’ is selected and check ‘Specific local ports:’ 5601
- Click ‘Next’
- Select ‘Allow the connection’ and click ‘Next’
- Select all three profiles (Domain, Private & Public) and click ‘Next’
- Name the rule ‘Kibana’ and give it a description
- Click ‘Finish’



Set up Kibana to run as a service – there are a number of ways to do this – you could install it as a service or use a third-party service manager, but I’ve chosen the easier “run a scheduled task” method:

- Open Task Scheduler
- Click ‘Create Task…’
- On the ‘General’ tab:
  - ‘Name’ field: Start Kibana
  - Click ‘Change User or Group…’ and select the service account you created earlier
  - Under ‘Security Options’, select ‘Run whether user is logged on or not’
- On the ‘Triggers’ tab:
  - Select ‘New…’
  - Beside ‘Begin the task:’ select ‘At startup’ and click ‘OK’
- On the ‘Actions’ tab:
  - Select ‘New…’
  - Beside ‘Action:’ select ‘Start a program’
  - Program/script: D:\Kibana\bin\kibana.bat
  - Start in (optional): D:\Kibana\bin
  - Click ‘OK’
- On the ‘Settings’ tab:
  - ‘Allow task to be run on demand’ is checked
  - ‘Run task as soon as possible after a scheduled start is missed’ is checked
  - ‘If the task fails, restart every:’ 5 minutes
  - ‘Attempt to restart up to:’ 3 times
  - ‘If the running task does not end when requested, force it to stop’ is checked
- Click ‘OK’
- Put in the password for your service account
- Either restart your server or right-click the task and select ‘Run’
- The ‘Last Run Result’ will display 0x41301 while kibana.bat is being run – it will stay that way unless there is an error (0x41301 means “the task is currently running”)



To test that everything has been configured correctly:

- Open up a web browser on your PC and browse to: http://[FQDN of log server]:5601 (if you want to change this, you can modify the port number in the kibana.yml file)
- If you cannot access the website, something isn’t configured correctly (either Kibana or ElasticSearch) – go back and check your configuration



The Kibana webserver will now be running and can be successfully accessed.

Install LogStash

Extract the LogStash files to D:\LogStash

Create the LogStash config file (to receive events and write them into ElasticSearch):

- See the first ‘Update’ below for our configuration; basic configuration information can also be found in Elastic’s documentation
- Place this file in D:\LogStash



Set up a firewall rule to allow the LogStash port:

- Open ‘Windows Firewall with Advanced Security’
- Select ‘Inbound Rules’
- Click ‘New Rule…’
- In the window that appears, select ‘Port’ and click ‘Next’
- Make sure ‘TCP’ is selected and check ‘Specific local ports:’ 3515
- Click ‘Next’
- Select ‘Allow the connection’ and click ‘Next’
- Select all three profiles (Domain, Private & Public) and click ‘Next’
- Name the rule ‘LogStash’ and give it a description
- Click ‘Finish’



To test:

- Open a command prompt (no admin rights needed)
- cd to D:\LogStash and run bin\logstash.bat agent -f logstash.conf
- If it doesn’t sit there waiting for input, something isn’t configured properly – go back over your configuration to ensure everything has been set correctly
- Ctrl+C to quit



Set up the LogStash scheduled task:

- Open Task Scheduler
- Click ‘Create Task…’
- On the ‘General’ tab:
  - ‘Name’ field: Start LogStash
  - Click ‘Change User or Group…’ and select the service account you created earlier
  - Under ‘Security Options’, select ‘Run whether user is logged on or not’
- On the ‘Triggers’ tab:
  - Select ‘New…’
  - Beside ‘Begin the task:’ select ‘At startup’ and click ‘OK’
- On the ‘Actions’ tab:
  - Select ‘New…’
  - Beside ‘Action:’ select ‘Start a program’
  - Program/script: D:\LogStash\bin\logstash.bat
  - Add arguments (optional): agent -f logstash.conf
  - Start in (optional): D:\LogStash
- On the ‘Settings’ tab:
  - ‘Allow task to be run on demand’ is checked
  - ‘Run task as soon as possible after a scheduled start is missed’ is checked
  - ‘If the task fails, restart every:’ 5 minutes
  - ‘Attempt to restart up to:’ 3 times
  - ‘If the running task does not end when requested, force it to stop’ is checked
- Click ‘OK’
- Put in the password for your service account
- Right-click the task and select ‘Run’
- The ‘Last Run Result’ will display 0x41301 while logstash.bat is being run – it will stay that way unless there is an error (0x41301 means “the task is currently running”)



LogStash will now be running on the server successfully without a user needing to be logged in.

Client Server Installation Instructions

nxLog

Run the nxlog.msi:

- Select ‘I accept the terms in the License Agreement’
- Click ‘Install’
- If prompted by UAC, click ‘Yes’
- Uncheck ‘Open README.txt to read important installation notes’
- Click ‘Finish’

After it’s installed:

- Browse to C:\Program Files (x86)\nxlog\conf
- Make a copy of nxlog.conf
- Rename the existing nxlog.conf file to nxlog-default.conf
- If prompted by UAC, click ‘Continue’
- Make any changes to the nxlog.conf file as required (see ‘Other tips/configuration’ below for changes that have been made in our environment)

Edit the config file: change the host setting near the bottom from 127.0.0.1 to the FQDN of the Log Server
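For reference, the part of nxlog.conf in question looks roughly like this (a sketch based on the stock config – the block name and FQDN are placeholders, and the port matches the tcp/3515 input in the logstash.conf shown later; to_json() requires the xm_json extension module to be loaded further up the file):

```
<Output out>
    Module      om_tcp
    # Send events to the log server rather than the local machine
    Host        logserver.example.com
    Port        3515
    # Serialise each event as one line of JSON (needs the xm_json extension)
    Exec        to_json();
</Output>
```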

Start the service:

- Open up Services
- Find the ‘nxlog’ service
- Right-click and select ‘Start’



The server will now be sending logs to the Log Server.

Other tips/configuration

I tried really hard to get LogStash to run as a service…and I failed miserably. If someone knows how to get this working, please enlighten me as my batch file does work, but it’s not quite as clean and lovely as a service.

A really nifty command that we’ve found: sometimes you may need to delete the logs you’ve collected – either because there are too many, or you’ve changed your config and want to collect something else, or you were testing and want to get rid of the test logs you’d collected. In this case, the way we were deleting things was via PowerShell (all hail PowerShell!):

Invoke-WebRequest -Uri http://[FQDN of log server]:9200/[name of log file folder] -Method DELETE
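If you’re not sure what the index is actually called, ElasticSearch’s _cat API will list everything it’s holding, which makes the delete less of a guessing game. A hedged example (the server name and index name are placeholders):

```powershell
# List all indices so you can see what's there (and how big each one is)
Invoke-WebRequest -Uri "http://logserver.example.com:9200/_cat/indices?v"

# Then delete the one you no longer want
Invoke-WebRequest -Uri "http://logserver.example.com:9200/logstash-2016.08.03" -Method DELETE
```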

In order to not be absolutely FLOODED with events, we also modified the nxLog conf file to only collect what we wanted. You may want to tweak this yourself, depending on what you’re interested in collecting.

Changes we made to the nxlog.conf file:

Query:

```xml
<QueryList>
  <Query Id="0">
    <Select Path="Security">*</Select>
    <Suppress Path="Security">*[System[(EventID=4624 or EventID=4776 or EventID=4634 or EventID=4672 or EventID=4688)]]</Suppress>
    <Select Path="System">*[System[(EventID=1074 or (EventID >= 6005 and EventID <= 6009) or EventID=6013)]]</Select>
    <Select Path="Microsoft-Windows-TerminalServices-LocalSessionManager/Operational">*</Select>
  </Query>
</QueryList>
```

This is giving us a few things:

Security Log: we’ve excluded a few IDs, purely because they were generating way too much traffic to be useful (the “User has logged on” event, for example, generated over 6 million log entries in 24 hours…) – if you’re going to be logging your security logs into this thing, you need to exclude stuff. Otherwise you’ll just end up filling your disk way too quickly. Just as an example, leaving Security as *, we used 13GB in less than 24 hours – suppressing those 5 event IDs changed that to only 300MB in 24 hours…

System Log: we’re only including a few things here – the logs that tell us when the server was shut down/restarted/started.

Terminal Services – Local Session Manager: this was picked up by a colleague who included it here. This little log lets us know when people are logging on to the domain controller – in particular, when people are logging on directly to the domain controller via the console. This is bad and we want to strongly discourage it…so we log it.

Our default dashboard (I’ve removed any proprietary info from the image so it’s safe to view!) has also been customised a bit (thanks again to Ken! ^_^) to include some of the information most useful to us and to make it look nice and shiny to management. In particular:

Pie chart breaking down Event ID’s

Bar chart showing our most active DC’s

Pie chart of the accounts that are being locked out the most – this, for me right now, is one of the more interesting charts…

Standard bar chart, showing logs over time

A sorted column list displaying all events but with limited columns, in particular: EventTime, EventID, SourceName, message, SubjectUserName, TargetUserName – this may not include every bit of information we need for certain events, but it fits for most events.

So yes – that’s our log server. Very exciting. If there are any updates or tweaks, I’ll do an updated post.

UPDATE (23rd June 2014) – I was requested to give information on our logstash.conf file as well as the dashboard we use.

The edited logstash.conf file:

```
input {
  # Accept messages in on tcp/3515
  # Incoming messages will be in json format, one per line
  # Tag these messages as windows and eventlog so we can filter on them later on
  tcp {
    port => 3515
    codec => json_lines
    tags => ["windows","eventlog"]
  }
}

filter {
  # If it is an eventlog message, change some fields to lower case,
  # and rename some fields so they match logstash's defaults
  if "eventlog" in [tags] {
    mutate {
      lowercase => [ "EventType", "FileName", "Hostname", "Severity", "host" ]
      rename => [ "Hostname", "host" ]
      rename => [ "Message", "message" ]
    }
  }
}

output {
  # Send all the output to the elasticsearch cluster listed below
  elasticsearch {
    host => localhost
    cluster => "YourClusterName"
  }
}
```
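One caveat with that file: it dates from the 2014 update. In the LogStash 2.x versions mentioned at the top of the post, the elasticsearch output plugin dropped the old host/cluster options and talks to ElasticSearch over HTTP via a hosts setting instead, so the output block would need to become something like this (a sketch – check the elasticsearch output plugin docs for your exact version):

```
output {
  elasticsearch {
    # LogStash 2.x connects over the HTTP API on port 9200
    hosts => ["localhost:9200"]
  }
}
```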

I was also asked for a copy of our dashboard.json file, which I’ve uploaded here: Kibana Dashboard (default.json). Due to security restrictions on my blog, I’ve uploaded it as a .txt file. When you’ve downloaded it, just change the .txt to .json and away you go!