Being able to deploy Elasticsearch, Logstash, and Kibana (ELK) with a single command is a wondrous thing. In this post, we will build the Ansible playbook that does just that.

There are some prerequisites. This Ansible playbook targets Ubuntu Server and was tested on Ubuntu Server 16.04. A basic system with 2 CPU cores and 4 GB of RAM is enough to start; the right specs ultimately depend on your situation and the volume of data.

This blog post is an alternative to using the ELK stack on Qbox. To easily deploy and run your own ELK setup on Qbox, simply sign up or launch your cluster here, and refer to the tutorial "Provisioning a Qbox Elasticsearch Cluster."

Tutorial

Let’s jump straight into the Ansible playbook.

Note: Feel free to use your text editor of choice; in this tutorial we will be using Vim.

Update the IP to the IP address or hostname of the ELK server that is going to be deployed.

Edit the hosts file:

$ vim hosts

hosts

[elk]
10.0.5.25

This is the playbook’s directory structure:



. (The root directory)
└── install
    ├── group_vars
    └── roles
        ├── elasticsearch
        │   └── tasks
        ├── java
        │   └── tasks
        ├── kibana
        │   └── tasks
        ├── logstash
        │   ├── tasks
        │   └── templates
        └── nginx
            ├── tasks
            └── templates

Let’s create the directories needed for this playbook:



$ mkdir -p install/group_vars
$ mkdir -p install/roles/elasticsearch/tasks
$ mkdir -p install/roles/java/tasks
$ mkdir -p install/roles/kibana/tasks
$ mkdir -p install/roles/logstash/tasks
$ mkdir -p install/roles/logstash/templates
$ mkdir -p install/roles/nginx/tasks
$ mkdir -p install/roles/nginx/templates

Note: If you are using SSH keys to authenticate the SSH session, skip this section.

If you are not using SSH keys to authenticate the SSH session, then a change is required in the /etc/ansible/ansible.cfg file. The sshpass program must be installed for this method to work. Uncomment the following line so that Ansible prompts you for the SSH password on connection:

/etc/ansible/ansible.cfg



ask_pass = True

Create the master playbook file:

Update the remote_user value to the SSH user on the target machine. The become options let Ansible execute commands with sudo once connected via SSH.

The roles are the playbook's overall categories: all Ansible tasks are bundled into their respective roles and corresponding directories, which keeps things neat and makes the dependencies easy to see.

$ vim install/elk.yml

elk.yml



---
#
# Playbook to install the ELK stack
#
- hosts: elk
  remote_user: user
  become: yes
  become_user: root
  roles:
    - { role: java }
    - { role: elasticsearch }
    - { role: kibana }
    - { role: nginx }
    - { role: logstash }
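Before running the full playbook, it can help to confirm that Ansible can reach the host at all. The following is a minimal sketch, saved as a separate file; the filename ping.yml is just a suggestion, and remote_user should match your SSH user as above.

```yaml
---
# Minimal sketch: confirm SSH connectivity and privilege escalation
# before running the full ELK playbook.
- hosts: elk
  remote_user: user
  become: yes
  gather_facts: no
  tasks:
    - name: Verify the control machine can reach the ELK host
      ping:
```

Run it with `ansible-playbook -i hosts ping.yml`; a "pong" response means the inventory and credentials are correct.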

The install/group_vars/all.yml file contains variables that are accessible from all of the playbook's roles: a central location for the situational details of the playbook. Update the values here to match your environment.



$ vim install/group_vars/all.yml

all.yml



---
# The hostname of the server that is going to run the ELK stack
server_name: elk

# -- Nginx Variables --
# The port that Nginx listens on and forwards to Kibana's local port
nginx_kibana_port: 80

# Nginx SSL listening port
elk_server_ssl_cert_port: 8080

# The web authentication credentials used to gain access to Kibana
kibana_user: admin
kibana_password: admin

# The system user that Nginx will run as
nginx_user: www-data

# The IP address of the ELK server that is going to be installed
elk_ip: 10.0.5.25

This role installs Java 8, a dependency of the ELK stack. The automatic Oracle license acceptance step is important: the Java install will fail without it.



$ vim install/roles/java/tasks/main.yml

main.yml



---
#
# Installing Java 8
#

# Add the Java ppa repository
- name: Add Java repository
  apt_repository:
    repo: ppa:webupd8team/java

# Automatically accepts the Oracle License popup in the terminal
- name: Automatically select the Oracle License
  shell: echo debconf shared/accepted-oracle-license-v1-1 select true | sudo debconf-set-selections

# Install Java 8
- name: Install the Java 8 package
  apt:
    name: oracle-java8-installer
    state: present
    update_cache: yes
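As an aside, the license step can also be done with Ansible's debconf module instead of shelling out, which is more idiomatic and idempotent. A sketch of the equivalent task:

```yaml
# Alternative sketch: accept the Oracle license with the debconf module
# rather than piping echo into debconf-set-selections.
- name: Automatically select the Oracle License
  debconf:
    name: oracle-java8-installer
    question: shared/accepted-oracle-license-v1-1
    value: 'true'
    vtype: select
```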

Elasticsearch

Next, create the Elasticsearch role. After the Elasticsearch installation, this role changes a config file to restrict access to localhost only, a security measure.

$ vim install/roles/elasticsearch/tasks/main.yml

main.yml



---
#
# Installing Elasticsearch
#

# Adds the apt-key for Elasticsearch
- name: Add Elasticsearch apt-key
  apt_key:
    url: "https://packages.elastic.co/GPG-KEY-elasticsearch"
    state: present

# Add the Elasticsearch APT repository
- name: Adding Elasticsearch APT repository
  apt_repository:
    repo: deb https://artifacts.elastic.co/p... stable main
    state: present

# Install Elasticsearch
- name: Update repositories cache and install Elasticsearch
  apt:
    name: elasticsearch
    update_cache: yes

# Update Elasticsearch config file to only allow localhost connections
- name: Updating the config file to restrict outside access
  lineinfile:
    destfile: /etc/elasticsearch/elasticsearch.yml
    regexp: 'network.host:'
    line: 'network.host: localhost'

# Restart Elasticsearch
- name: Restarting Elasticsearch
  service:
    name: elasticsearch
    state: restarted
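Elasticsearch can take a little while to answer after a restart. If later steps depend on it being up, a verification task like the following could be appended to this role. This is a sketch using the uri module; the retry and delay values are arbitrary assumptions.

```yaml
# Optional sketch: wait until Elasticsearch answers on localhost:9200
- name: Wait for Elasticsearch to respond
  uri:
    url: http://localhost:9200
  register: es_status
  retries: 12
  delay: 5
  until: es_status.status == 200
```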

Kibana

Kibana is the next role we are creating. After installation, this role also restricts Kibana to accepting local connections only, keeping security in mind. Nginx acts as the reverse proxy, so access to Kibana can be controlled with web authentication.

$ vim install/roles/kibana/tasks/main.yml

main.yml



---
#
# Installing Kibana
#

# Add Kibana APT-repository
- name: Adding Kibana APT repository
  apt_repository:
    repo: deb http://packages.elastic.co/kib... stable main
    state: present

# Install Kibana
- name: Update repositories cache and install Kibana
  apt:
    name: kibana
    update_cache: yes

# Update Kibana config file to only accept local connections
- name: Updating the config file to restrict outside access
  lineinfile:
    destfile: /etc/kibana/kibana.yml
    regexp: 'server.host:'
    line: 'server.host: localhost'

# Enable Kibana service
- name: Enabling Kibana service
  systemd:
    name: kibana
    enabled: yes
    daemon_reload: yes

# Start Kibana service
- name: Starting Kibana service
  systemd:
    name: kibana
    state: started
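If you want the role to confirm that Kibana actually came up before moving on, a wait_for task can be appended. This is a sketch; the 60-second timeout is an arbitrary assumption.

```yaml
# Optional sketch: block until Kibana is listening on its local port
- name: Wait for Kibana to listen on port 5601
  wait_for:
    host: localhost
    port: 5601
    timeout: 60
```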

Nginx is the first role where we make use of Ansible's template feature. Using the variables from group_vars/all.yml, we can adjust the template to fit our needs. "{{ kibana_password }}" is a variable name that corresponds to a value defined in the global variables yml file.

To create the Kibana user that restricts access to Kibana, we use an openssl command that prompts the user for a password interactively. To get around this, we use the Ansible expect module. This module needs to be installed before it can be used; the installation is handled by this role, in the steps before the expect module is required.

$ vim install/roles/nginx/tasks/main.yml

main.yml



---
#
# Installing Nginx
#

# Install Nginx
- name: Update repositories cache and install Nginx
  apt:
    name: nginx
    update_cache: yes

# Create /etc/nginx/conf.d/ directory
- name: Create nginx directory structure
  file: path=/etc/nginx/conf.d/ state=directory mode=0755

# Deploy kibana.conf with FQDN
- name: Setup Nginx reverse proxy for kibana
  template: src=kibana.conf.j2 dest=/etc/nginx/sites-available/default owner=root group=root mode=0644
  register: nginx_needs_restart

# Enable nginx service
- name: Enabling Nginx service
  systemd:
    name: nginx
    enabled: yes

# Start Nginx service
- name: Starting Nginx service
  systemd:
    name: nginx
    state: started
    daemon_reload: yes

# Install Pexpect to handle prompts in the terminal
- name: Installing Python Pexpect
  apt:
    name: python-pexpect
    update_cache: yes

# Write the create-user script to the temp directory
- name: Create kibana admin user
  template: src=kibanaAdmin.j2 dest=/tmp/createUser owner=root group=root mode=0744

# Run the script to create the Kibana admin user
- name: Create Kibana admin user
  expect:
    command: bash /tmp/createUser
    responses:
      'Password:': "{{ kibana_password }}"

# Restart Nginx service
- name: Restart Nginx service
  systemd:
    name: nginx
    state: reloaded
    daemon_reload: yes
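For reference, Ansible also ships an htpasswd module that can create the basic-auth user directly, avoiding the expect/openssl script entirely. It requires the python-passlib package on the target, so it is not a drop-in replacement here, but a sketch would look like:

```yaml
# Alternative sketch: manage the basic-auth file with the htpasswd module
# (requires python-passlib on the target machine).
- name: Create Kibana admin user with htpasswd
  htpasswd:
    path: /etc/nginx/htpasswd.users
    name: "{{ kibana_user }}"
    password: "{{ kibana_password }}"
    owner: root
    group: "{{ nginx_user }}"
    mode: 0640
```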

The kibana.conf.j2 template replaces the /etc/nginx/sites-available/default Nginx config file. It forwards nginx_kibana_port to localhost port 5601, which is Kibana's port.

$ vim install/roles/nginx/templates/kibana.conf.j2

kibana.conf.j2



server {
    listen {{nginx_kibana_port}};

    server_name {{server_name}};

    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/htpasswd.users;

    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

In addition to the reverse proxy, Nginx provides the web authentication used to gain access to the Kibana page.

$ vim install/roles/nginx/templates/kibanaAdmin.j2

kibanaAdmin.j2



echo "{{kibana_user}}:`openssl passwd -apr1`" | sudo tee -a /etc/nginx/htpasswd.users
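To see what this one-liner produces, you can run the openssl part by hand. The user admin, password secret, and fixed salt below are illustrative placeholders, not values taken from the playbook:

```shell
# Generate an Apache-style MD5 (apr1) hash, as the kibanaAdmin.j2 script does.
# A fixed salt is used here only to make the output reproducible.
hash=$(openssl passwd -apr1 -salt abcdefgh secret)
echo "admin:$hash"   # htpasswd.users line format: user:$apr1$salt$hash
```

Each such line appended to /etc/nginx/htpasswd.users becomes one valid basic-auth login.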


Logstash

The final role of this playbook is Logstash.



$ vim install/roles/logstash/tasks/main.yml

main.yml



---
#
# Installing Logstash
#

# Add Logstash APT repository
- name: Adding Logstash APT repository
  apt_repository:
    repo: deb http://packages.elastic.co/log... stable main
    state: present

# Install Logstash
- name: Update repositories cache and install Logstash
  apt:
    name: logstash
    update_cache: yes

# Create certs directory for SSL
- name: Creates SSL certs directory
  file:
    path: /etc/pki/tls/certs
    state: directory

# Create private directory for SSL
- name: Creates SSL private directory
  file:
    path: /etc/pki/tls/private
    state: directory

# Update SSL config to restrict outside access
- name: Updating the config file to restrict outside access
  lineinfile:
    destfile: /etc/ssl/openssl.cnf
    regexp: 'subjectAltName ='
    line: 'subjectAltName = IP: {{ elk_ip }}'

# Generate SSL certificates for Logstash
- name: Generate SSL certificates
  shell: "openssl req -config /etc/ssl/openssl.cnf -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout /etc/pki/tls/private/logstash-forwarder.key -out /etc/pki/tls/certs/logstash-forwarder.crt"

# Configure Beats input configuration 02-beats-input.conf
- name: Configure Beats configuration file
  template: src=beats-input.conf.j2 dest=/etc/logstash/conf.d/02-beats-input.conf owner=root group=root mode=0644

# Configure Logstash 10-syslog-filter.conf config file
- name: Configure Syslog Filter
  template: src=syslog-filter.conf.j2 dest=/etc/logstash/conf.d/10-syslog-filter.conf owner=root group=root mode=0644

# Configure Elasticsearch output file 30-elasticsearch-output.conf
- name: Configure Elasticsearch output file
  template: src=elasticsearch-output.conf.j2 dest=/etc/logstash/conf.d/30-elasticsearch-output.conf owner=root group=root mode=0644

# Start Logstash service
- name: Start Logstash service
  systemd:
    name: logstash
    state: started
    daemon_reload: yes

# Enable Logstash service
- name: Enable Logstash service
  systemd:
    name: logstash
    enabled: yes
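One caveat worth knowing: the lineinfile task above only takes effect if /etc/ssl/openssl.cnf already contains a subjectAltName line for the regexp to match. If you want the play to fail loudly when the certificate was not generated correctly, a verification task could be appended to this role (a sketch):

```yaml
# Optional sketch: fail the play if the generated certificate is unreadable
- name: Check the generated Logstash certificate
  command: openssl x509 -in /etc/pki/tls/certs/logstash-forwarder.crt -noout -subject
  changed_when: false
```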





$ vim install/roles/logstash/templates/beats-input.conf.j2

beats-input.conf.j2



input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}

$ vim install/roles/logstash/templates/elasticsearch-output.conf.j2

elasticsearch-output.conf.j2



output {
  elasticsearch {
    hosts => ["localhost:9200"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}

$ vim install/roles/logstash/templates/syslog-filter.conf.j2

syslog-filter.conf.j2



filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
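Logstash can validate its own pipeline files. If you want the role to catch typos in these templates before the service starts, a config-test task could be inserted ahead of the service tasks. This is a sketch; the --config.test_and_exit flag applies to Logstash 5.x.

```yaml
# Optional sketch: have Logstash parse the rendered pipeline and exit
- name: Validate the Logstash configuration
  command: /usr/share/logstash/bin/logstash --config.test_and_exit --path.settings /etc/logstash
  changed_when: false
```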

Execution

To execute this playbook:



$ ansible-playbook -i hosts install/elk.yml

Once the playbook has completed you will have a running ELK stack.




To access Kibana, navigate to http://<elk_ip>. You will be prompted for credentials to access the page. The credentials are those specified in the group_vars/all.yml file.
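This check can also be scripted from the control machine with Ansible's uri module and the same group variables (a sketch):

```yaml
# Optional sketch: confirm the Nginx front end answers with valid credentials
- name: Verify Kibana is reachable through Nginx
  uri:
    url: "http://{{ elk_ip }}:{{ nginx_kibana_port }}"
    user: "{{ kibana_user }}"
    password: "{{ kibana_password }}"
    status_code: 200
  delegate_to: localhost
```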





After entering the credentials you will be presented with the Kibana page.





You now have a fresh ELK installation, and the ELK stack is quite versatile. Use the stack as a stand-alone application, or integrate it with your existing applications to get the most current data. With Elasticsearch, you get all the features to make real-time decisions, all the time. You can use each of these tools separately or with other products; for example, Kibana has even been adapted to work with Solr/Lucene.

Other Helpful Resources

Give It a Whirl!

It's easy to spin up a standard hosted Elasticsearch cluster on any of our 47 Rackspace, Softlayer, Amazon or Microsoft Azure data centers. And you can now provision a replicated cluster.

Questions? Drop us a note, and we'll get you a prompt response.

Not yet enjoying the benefits of a hosted ELK-stack enterprise search on Qbox? We invite you to create an account today and discover how easy it is to manage and scale your Elasticsearch environment in our cloud hosting service.