Tuesday, December 25, 2018

Tutorial for Elasticsearch, Kibana, and Logstash

Elasticsearch & Kibana Tutorial


Elasticsearch is an open source, document-based search platform with fast searching capabilities. Elasticsearch runs in a clustered environment: a cluster can be one or more servers, and each server in the cluster is a node. As with all document databases, records are called documents. Documents are stored in indexes, which can be sharded, or split into smaller pieces, and Elasticsearch can run those shards on separate nodes to distribute the load across servers.

Elasticsearch is a search and analytics engine. Logstash is a server‑side data processing pipeline that ingests data from multiple sources simultaneously, transforms it, and then sends it to a "stash" like Elasticsearch. Kibana lets users visualize data with charts and graphs in Elasticsearch.



Elasticsearch Basic Concepts

- Cluster
- Node
- Shards
- Replicas
- Index
- Documents with Properties
- Types
- Mapping


Relational database analogy:

row = document
table name = type
database = index

Cluster
  -Node(Shards + Replica, Shards + Replica)
  -Node (Shards + Replica, Shards + Replica)
 
A node is a single server. Each node has a unique UUID.

Shards & Replicas

An index can be split into shards, which lets Elasticsearch distribute its data across nodes. A replica keeps a copy of a shard, providing failover and additional read capacity.

Download Elasticsearch & Kibana using the link below:

 https://www.elastic.co/downloads

Run Elasticsearch from the command line:

bin>elasticsearch.bat

Installing Elasticsearch as a Service on Windows:

c:\elasticsearch-6.5.4\bin>elasticsearch-service.bat



http://localhost:9200/
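Opening this URL in a browser (or with curl) returns basic node information. The response looks roughly like the following, abbreviated here; the name and build details will differ per install:

```json
{
  "name" : "node-1",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "6.5.4"
  },
  "tagline" : "You Know, for Search"
}
```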

The commands available are:

install: Install Elasticsearch as a service
bin> elasticsearch-service.bat install

remove: Remove the installed Elasticsearch service (and stop the service if started)
bin> elasticsearch-service.bat remove

start: Start the Elasticsearch service (if installed)
bin> elasticsearch-service.bat start

stop: Stop the Elasticsearch service (if started)
bin> elasticsearch-service.bat stop

manager: Start a GUI for managing the installed service
bin> elasticsearch-service.bat manager

Download and unzip Kibana.

Start Kibana

Run bin/kibana (or bin\kibana.bat on Windows)

http://localhost:5601


Upload data in Elasticsearch using curl:

On Windows 10, curl is preinstalled; if you use the copy bundled with Git for Windows instead, add its directory to the PATH environment variable:

Path=C:\Program Files\Git\mingw64\bin

We will be using the entire collected works of Shakespeare as our example data. In order to make the best use of Kibana you will likely want to apply a mapping to your new index. Let’s create the shakespeare index with the following mapping.

 



Create mapping in Kibana ->Dev Tools:
 
PUT /shakespeare
{
 "mappings" : {
  "_default_" : {
   "properties" : {
    "speaker" : {"type": "text"},
    "play_name" : {"type": "text"},
    "line_id" : { "type" : "integer" },
    "speech_number" : { "type" : "integer" }
   }
  }
 }
}
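The _bulk endpoint this mapping will be loaded through expects newline-delimited JSON: an action line followed by the document itself, one pair per record. The first lines of shakespeare.json follow this pattern; the values shown here are illustrative, using the fields from the mapping above:

```
{"index":{"_index":"shakespeare"}}
{"line_id":1,"play_name":"Henry IV","speech_number":1,"speaker":"KING HENRY IV","text_entry":"So shaken as we are, so wan with care,"}
```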



Great, we’ve created the index. Now we want to import the data.

Download the file (shakespeare.json) to a local directory using the link, and run the curl command below from that directory.


curl -H "Content-Type: application/json" -XPOST "http://localhost:9200/_bulk" --data-binary @shakespeare.json
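If the source data is not already in bulk format, the action/document pairs can be generated with a few lines of Python. This is a sketch; the helper name to_bulk and the sample records are illustrative, and posting the payload to Elasticsearch would still be done with curl or a client library:

```python
import json

def to_bulk(index, docs):
    """Build a newline-delimited _bulk payload: one action line
    followed by the document source line for each record."""
    lines = []
    for doc in docs:
        lines.append(json.dumps({"index": {"_index": index}}))
        lines.append(json.dumps(doc))
    return "\n".join(lines) + "\n"  # _bulk requires a trailing newline

docs = [
    {"line_id": 1, "speaker": "KING HENRY IV", "play_name": "Henry IV"},
    {"line_id": 2, "speaker": "FALSTAFF", "play_name": "Henry IV"},
]
payload = to_bulk("shakespeare", docs)
print(payload)
```

The trailing newline matters: Elasticsearch rejects a bulk body whose last line is not newline-terminated.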



Example 2:
Download the file (accounts.json) to a local directory using the link.

Create mapping in Kibana ->Dev Tools:

PUT /accounts
{
 "mappings" : {
  "_default_" : {
   "properties" : {
    "account_number" : {"type": "integer"},
    "balance" : {"type": "integer"},
    "firstname" : { "type" : "text" },
    "lastname" : { "type" : "text" },
    "age" : { "type" : "integer" },
    "gender" : { "type" : "text" },
    "address" : { "type" : "text" },
    "employer" : { "type" : "text" },
    "email" : { "type" : "text" },
    "city" : { "type" : "text" },
    "state" : { "type" : "text" }
   }
  }
 }
}
 

Import data:

curl -H "Content-Type: application/json" -XPOST "http://localhost:9200/_bulk" --data-binary @accounts.json
 

Create an index pattern in Kibana (Management -> Index Patterns) so the indexes show up in Discover.





Discovering your data

Open Discover
In the search field, enter the following string:  

Query:
speaker:KING
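The same search can be run from Dev Tools as a query DSL request. The match query analyzes the input, so this returns every document whose speaker field contains the token KING:

```
GET /shakespeare/_search
{
  "query": {
    "match": {
      "speaker": "KING"
    }
  }
}
```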



By default, all fields are shown for each matching document. To choose which fields to display, hover the pointer over the list of Available Fields and then click add next to each field you want to include as a column in the table.

 

Delete By Query API :

The simplest usage of _delete_by_query performs a deletion on every document that matches the query.


POST accounts/_delete_by_query
{
  "query": {
    "match": {
      "account_number": 1
    }
  }
}

Update By Query API :

The _update_by_query API updates every document that matches a query, optionally applying a script to modify each one.
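For example, a Painless script can modify a field on every matching document in the accounts index. This follows the same pattern as the delete example; the field names come from the mapping created earlier:

```
POST /accounts/_update_by_query
{
  "script": {
    "source": "ctx._source.balance += 100",
    "lang": "painless"
  },
  "query": {
    "match": {
      "account_number": 1
    }
  }
}
```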

---


To be continued...


Using Logstash to Import CSV Files into Elasticsearch

bin/logstash -f [CONFIGURATION FILENAME]

Start Elasticsearch and Kibana before starting Logstash.

logstash.bat -f C:\Software\elasticsearch\logstashTutorial.conf





Files

Friends.csv
logstashTutorial.conf
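The post does not show logstashTutorial.conf itself; a minimal configuration for loading a CSV such as Friends.csv might look like the following sketch. The path, column names, and index name are assumptions — adjust them to the actual file:

```
input {
  file {
    path => "C:/Software/elasticsearch/Friends.csv"
    start_position => "beginning"
    sincedb_path => "NUL"   # Windows; use /dev/null on Linux
  }
}
filter {
  csv {
    separator => ","
    columns => ["id", "first_name", "last_name", "city"]
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "friends"
  }
  stdout {}
}
```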

Dev Tools query

GET /friends/_search
{
  "query": {
    "match_all": {}
  }
}


GET /friends/_count
{
  "query": {
    "match_all": {}
  }
}


Learning source: 1










