1. Introduction

Elasticsearch is a search engine that can store large volumes of data and run queries against them extremely fast. It is powered by Apache Lucene, and provides all the necessary infrastructure around bare Lucene to deliver a highly scalable, highly available and user-friendly search engine. Elasticsearch can store all kinds of structured and unstructured data in the form of JSON documents, which are indexed at the time of insertion. It is these indexes that make searches so fast.

Elasticsearch provides both a plain Java API and a RESTful API. It also integrates nicely with Logstash, a data processing tool that can collect data from diverse sources, enrich/cleanse/transform it, and finally load it into diverse target systems. Logstash is a pluggable framework with a wide range of input, output and filter plugins.



2. Example

In this example we will experiment with the REST APIs provided by Elasticsearch to get a feel for how data can be imported into it, and later run some queries to fetch that data. We will use SoapUI and create a simple REST-based project. Add the endpoint URL http://localhost:9200 to your REST project (this is the URL at which your Elasticsearch instance is running). Next, follow the steps given below:

2.1 Create mapping

Elasticsearch allows you to add data without specifying a mapping beforehand. When data is imported this way, Elasticsearch tries to guess the type of each field in the input document based on its value. In some cases this can lead to problems such as parsing exceptions, so it is better to have the mapping defined. Elasticsearch supports different kinds of schema modelling – denormalised documents, nested objects and parent-child relationships – each with its own advantages and disadvantages, so depending on the use case we need to choose one of these modelling styles carefully when defining our mapping. In our example we will use parent-child modelling and define the schema mapping accordingly: transactions are mapped as parent documents, and the wordings in different languages associated with a transaction as child documents. So let's create a mapping as shown below.

- Add a new child resource to the REST project in SoapUI called "CreateMapping".
- Set the HTTP verb to PUT.
- Set the endpoint pointing to your Elasticsearch home URL and the resource as /transactions.
- Paste the content below in the editor – this is the schema mapping – and then click on run.

{
  "mappings": {
    "main_type": {
      "properties": {
        "identifier": { "type": "string" },
        "transactionAccountingDate": { "type": "date" },
        "transactionPostingDate": { "type": "date" },
        "transactionValueDate": { "type": "date" },
        "transactionAmount": { "type": "string" },
        "transactionAmountCurrency": { "type": "string" }
      }
    },
    "content": {
      "_parent": { "type": "main_type" },
      "properties": {
        "language": { "type": "string" },
        "description": { "type": "string" }
      }
    }
  }
}

2.1.1 Document field value type validation

With the schema mapping defined above, where we have declared the type of each field in the document, Elasticsearch validates the values at the time of insertion. If a value is not of the type specified in the schema mapping, this results in a parsing exception, as shown below:

- Add a new child resource to the REST project in SoapUI called "Create Document".
- Set the HTTP verb to PUT.
- Set the endpoint pointing to your Elasticsearch home URL and the resource as /transactions/main_type/_create.
- Paste the content below in the editor and then click on run.

{"identifier":"XY12363113597800","transactionAccountingDate":"2015-11-29", "transactionPostingDate":"2015-11-28","transactionValueDate":"SomeValue","transactionAmount":"0.18","transactionAmountCurrency":"EUR"}

The response returned is:

{
  "error": {
    "root_cause": [
      {
        "type": "mapper_parsing_exception",
        "reason": "failed to parse [transactionValueDate]"
      }
    ],
    "type": "mapper_parsing_exception",
    "reason": "failed to parse [transactionValueDate]",
    "caused_by": {
      "type": "illegal_argument_exception",
      "reason": "Invalid format: \"SomeValue\""
    }
  },
  "status": 400
}

This way, any invalid data is rejected at the time of insertion itself, ensuring data quality.
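The rejection can be reasoned about locally: the mapping declares transactionValueDate as type date, so Elasticsearch tries to parse the value against its date format before indexing it. A minimal Python sketch of that check, simplified to the plain yyyy-MM-dd format used in this example (Elasticsearch's actual default format accepts more variants):

```python
from datetime import datetime

def is_valid_date(value):
    """Mimic the mapper's date check for the yyyy-MM-dd format used
    in this example (a simplification of Elasticsearch's default)."""
    try:
        datetime.strptime(value, "%Y-%m-%d")
        return True
    except ValueError:
        return False

# "SomeValue" fails the check, so the document is rejected with a
# mapper_parsing_exception, as seen in the response above.
print(is_valid_date("2015-11-28"))  # True
print(is_valid_date("SomeValue"))   # False
```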

2.1.2 Introducing new fields dynamically in the document

Again, with the schema mapping as defined in 2.1, it is possible to add new fields to the document dynamically; Elasticsearch accepts them without any error. On the other hand, if we declare the mapping to be strict, any such additions are rejected. Consider the scenario below, where we first modify the schema mapping to enable strict mapping and then try adding a new field to the document, which results in an error.

2.1.2.1 Update schema mapping

- Add a new child resource to the REST project in SoapUI called "Update Mapping".
- Set the HTTP verb to PUT.
- Set the endpoint pointing to your Elasticsearch home URL and the resource as /transactions/_mapping/main_type.
- Paste the content below in the editor and then click on run.

{ "dynamic":"strict" }

2.1.2.2 Adding new field to the document

Add a new field to the document by adding a new method to the "Create Document" resource created in section 2.1.1 above. Paste the contents below and click on run.

{"identifier":"XY12363113597800","transactionAccountingDate":"2015-11-29", "transactionPostingDate":"2015-11-28","transactionValueDate":"2015-11-28","transactionAmount":"0.18","transactionAmountCurrency":"EUR","New_Field":"someValue"}

This results in the error as shown below:

{
  "error": {
    "root_cause": [
      {
        "type": "strict_dynamic_mapping_exception",
        "reason": "mapping set to strict, dynamic introduction of [New_Field] within [main_type] is not allowed"
      }
    ],
    "type": "strict_dynamic_mapping_exception",
    "reason": "mapping set to strict, dynamic introduction of [New_Field] within [main_type] is not allowed"
  },
  "status": 400
}

2.2 Import/create data

Elasticsearch provides APIs to insert data one document at a time and also to do a bulk import. Let's do a bulk import using the bulk API with the steps given below:

- Add a new child resource to the REST project in SoapUI called "Bulk Insert".
- Set the HTTP verb to POST.
- Set the endpoint pointing to your Elasticsearch home URL and the resource as /transactions/_bulk.
- Paste the content below in the editor and then click on run.

{"index":{"_type":"main_type","_id":1}}
{"identifier":"XY12363113597800","transactionAccountingDate":"2015-11-29","transactionPostingDate":"2015-11-28","transactionValueDate":"2015-11-28","transactionAmount":"0.18","transactionAmountCurrency":"EUR"}
{"index":{"_type":"content","_id":1,"_parent":1}}
{"language":"NL","description":"Molignestraat"}
{"index":{"_type":"content","_id":2,"_parent":1}}
{"language":"FR","description":"Rue de la Molignee"}
{"index":{"_type":"main_type","_id":2}}
{"identifier":"XY12363113597801","transactionAccountingDate":"2015-11-29","transactionPostingDate":"2015-11-28","transactionValueDate":"2015-11-28","transactionAmount":"0.18","transactionAmountCurrency":"EUR"}
{"index":{"_type":"content","_id":3,"_parent":2}}
{"language":"NL","description":"Molignestraat"}
{"index":{"_type":"content","_id":4,"_parent":2}}
{"language":"EN","description":"Molignee Street"}
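The bulk body alternates action lines and document lines in newline-delimited JSON (NDJSON), and must end with a trailing newline. A small hypothetical Python helper that assembles such a body from (action, document) pairs – the function name is our own, not part of any Elasticsearch client:

```python
import json

def build_bulk_body(pairs):
    """Build an NDJSON bulk request body from (action, doc) pairs.
    Each action line and each document line sits on its own line,
    and the body ends with a trailing newline as the bulk API expects."""
    lines = []
    for action, doc in pairs:
        lines.append(json.dumps(action))
        lines.append(json.dumps(doc))
    return "\n".join(lines) + "\n"

body = build_bulk_body([
    ({"index": {"_type": "main_type", "_id": 1}},
     {"identifier": "XY12363113597800", "transactionAmount": "0.18"}),
    ({"index": {"_type": "content", "_id": 1, "_parent": 1}},
     {"language": "NL", "description": "Molignestraat"}),
])
# body now contains four lines: two action lines and two document lines
```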

2.3 Update data

Elasticsearch provides an update API with which we can update previously imported or created data; it updates a document based on a script provided in the request. In our example, since transaction data is generally immutable, we will add a new field called "tags" that can be logically updated – in reality, it is plausible to tag a transaction based on user preference. In this section we will see how to modify an existing mapping to add this new field to the document and then tag a transaction using the update API.

2.3.1 Update mapping – add new field

To add a new field called “tags” to the document main_type, follow the steps given below:

- Add a new method to the resource "Update Mapping" created in section 2.1.2.1, called "Add new field".
- Set the HTTP verb to PUT.
- Paste the content below in the editor and then click on run.

{
  "properties": {
    "tags": {
      "properties": {
        "name": { "type": "string" }
      }
    }
  }
}

2.3.2 Update transaction using update api

Users may tag their transactions based on their preferences, to help analyse their spending patterns. To achieve this we will make use of the update API as shown below:

- Add a new child resource to the REST project in SoapUI called "Update Document".
- Set the HTTP verb to POST.
- Set the endpoint pointing to your Elasticsearch home URL and the resource as /transactions/main_type/1/_update.
- Paste the content below in the editor and then click on run.

{
  "script": {
    "inline": "ctx._source.tags += tags",
    "params": {
      "tags": [ { "name": "xmas gift" } ]
    }
  }
}

It is possible that inline scripting for updates is disabled on your instance, in which case you might get an error as shown below:

{
  "error": {
    "root_cause": [
      {
        "type": "remote_transport_exception",
        "reason": "[Arkus][127.0.0.1:9300][indices:data/write/update[s]]"
      }
    ],
    "type": "illegal_argument_exception",
    "reason": "failed to execute script",
    "caused_by": {
      "type": "script_exception",
      "reason": "scripts of type [inline], operation [update] and lang are disabled"
    }
  },
  "status": 400
}

Inline scripting for updates can be enabled by editing the elasticsearch.yml file located in the config directory of the Elasticsearch home. Add the line given below, save the file and restart Elasticsearch.

script.engine.groovy.inline.update: on

Now the above update should work, and you can add as many tags as you want.
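What the inline Groovy script does server-side can be pictured as a small in-place mutation of the stored _source. A rough Python equivalent of ctx._source.tags += tags, with illustrative names of our own choosing:

```python
def apply_tags_update(source, params):
    """Approximate ctx._source.tags += tags: append the tags passed
    in the script params to the tags already stored in _source."""
    source.setdefault("tags", [])  # tolerate a document with no tags yet
    source["tags"] += params["tags"]
    return source

doc = {"identifier": "XY12363113597800", "tags": []}
apply_tags_update(doc, {"tags": [{"name": "xmas gift"}]})
apply_tags_update(doc, {"tags": [{"name": "drinks"}]})
# doc["tags"] now holds both tag objects
```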

2.3.3 Versioning

Updates can result in concurrency issues in a multi-user environment. Elasticsearch addresses these using a built-in versioning mechanism: every time a document is updated, its version number is automatically incremented. In addition, Elasticsearch allows version numbering from an external system, in which case the version number should be provided as a URL parameter.

- Add a parameter called "version" to the child resource "Update Document".
- Set its value to the previous version + 1.
- Paste the content below in the editor and then click on run.

{
  "script": {
    "inline": "ctx._source.tags += tags",
    "params": {
      "tags": [ { "name": "drinks" } ]
    }
  }
}

Now, when you query for the transaction with id 1, you will see that it is tagged as both "xmas gift" and "drinks", and that its version has been incremented to the value set for the query parameter "version".
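External versioning boils down to an optimistic concurrency check: with external versions, a write is accepted only if the supplied version is strictly greater than the stored one, otherwise a version conflict is reported. A minimal sketch of that rule, assuming external-version semantics (the class and function names are our own):

```python
class VersionConflict(Exception):
    """Stand-in for Elasticsearch's version_conflict_engine_exception."""

def check_external_version(stored_version, supplied_version):
    """Accept the write only when the supplied external version is
    strictly greater than the stored one; otherwise raise a conflict."""
    if supplied_version <= stored_version:
        raise VersionConflict(
            f"current version [{stored_version}] is higher or equal "
            f"to the one provided [{supplied_version}]")
    return supplied_version

# A stale client re-sending the same version is rejected; a client
# that has seen the latest version wins.
new_version = check_external_version(3, 4)
```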

2.4 Run search queries

Elasticsearch provides a Search API; queries can be run either as a URI search, using a simple query string as a parameter, or using a request body.

To fetch a particular transaction using URI search, type the following in the browser: http://localhost:9200/transactions/main_type/_search?q=identifier:XY12363113597800

This will return that transaction record in the json format.

Similarly, to fetch a particular child, type the following in the browser: http://localhost:9200/transactions/content/_search?q=language:NL

This will return all the content documents whose language equals NL.
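When building URI searches programmatically rather than typing them into a browser, the q parameter should be URL-encoded, since query strings may contain reserved characters such as the colon. A sketch using Python's standard library, with the local base URL assumed throughout this article:

```python
from urllib.parse import urlencode

def uri_search(index, doc_type, field, value,
               base="http://localhost:9200"):
    """Build a URI-search URL of the form
    {base}/{index}/{type}/_search?q=field:value, URL-encoding the query."""
    query = urlencode({"q": f"{field}:{value}"})
    return f"{base}/{index}/{doc_type}/_search?{query}"

url = uri_search("transactions", "content", "language", "NL")
# http://localhost:9200/transactions/content/_search?q=language%3ANL
```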

Now, to fetch the wording in a given language for a particular transaction, we will write a search query using the Query DSL as shown below:

- Add a new child resource to the REST project in SoapUI called "Search Query".
- Set the HTTP verb to POST.
- Set the endpoint pointing to your Elasticsearch home URL and the resource as /transactions/main_type/_search.
- Paste the content below in the editor and then click on run.

The query below fetches the NL wording along with the transaction with identifier XY12363113597800.

{
  "query": {
    "filtered": {
      "query": {
        "bool": {
          "must": [
            { "match": { "identifier": "XY12363113597800" } }
          ],
          "filter": [
            { "range": { "transactionValueDate": { "gte": "2015-11-27" } } }
          ]
        }
      },
      "filter": {
        "has_child": {
          "type": "content",
          "query": {
            "filtered": {
              "query": { "match_all": {} },
              "filter": {
                "and": [
                  { "match": { "language": "NL" } }
                ]
              }
            }
          },
          "inner_hits": {}
        }
      }
    }
  }
}
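Request bodies like this get deeply nested, so it can help to compose them as plain dictionaries and serialize them just before sending. A hypothetical Python helper that builds this article's filtered has_child query; the structure follows the JSON above, and the function is our own, not part of any official client:

```python
import json

def build_wording_query(identifier, language, value_date_gte):
    """Compose the filtered query used above: match a parent transaction
    by identifier and value-date range, and keep only parents that have
    a child 'content' document in the requested language."""
    return {
        "query": {
            "filtered": {
                "query": {
                    "bool": {
                        "must": [{"match": {"identifier": identifier}}],
                        "filter": [{"range": {
                            "transactionValueDate": {"gte": value_date_gte}}}],
                    }
                },
                "filter": {
                    "has_child": {
                        "type": "content",
                        "query": {"filtered": {
                            "query": {"match_all": {}},
                            "filter": {"and": [
                                {"match": {"language": language}}]},
                        }},
                        "inner_hits": {},
                    }
                },
            }
        }
    }

body = json.dumps(build_wording_query(
    "XY12363113597800", "NL", "2015-11-27"))
```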

3. Download the SoapUI project

In this example we learnt how to get started with Elasticsearch, taking a simple example scenario and seeing how the REST APIs provided by Elasticsearch can be used for creating a schema mapping, updating the mapping, inserting data (including via the bulk API) and querying using both URI query parameters and the Query DSL.