To install Elasticsearch plugins through a proxy, add the proxy options shown below.

Install a small cluster status utility
bin/plugin -DproxyPort=8888 -DproxyHost= --verbose --install mobz/elasticsearch-head

To start elasticsearch
cd /opt/elasticsearch; ./bin/elasticsearch &

To access the node status
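A quick way to check node and cluster status once Elasticsearch is running, assuming the default HTTP port 9200 (the head plugin installed above serves its UI under _plugin/head):

```shell
# Cluster health via the REST API (default port 9200 assumed)
curl 'http://localhost:9200/_cluster/health?pretty'

# The elasticsearch-head UI installed above is served at:
# http://localhost:9200/_plugin/head/
```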

To configure Kibana, edit config/kibana.yml:

host: ""
elasticsearch_url: ""

Start kibana
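A start command sketch, assuming Kibana was unpacked under /opt/kibana (adjust the path to your install):

```shell
# Start Kibana in the background (install path is an assumption)
cd /opt/kibana
./bin/kibana &
```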

Logstash howto
Create certificate
cd /etc/pki/tls
openssl req -x509 -nodes -newkey rsa:2048 -days 2365 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt -subj /
openssl req -x509 -days 2365 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt

To test logstash
bin/logstash -e 'input { stdin { } } output { stdout {} }'
hello world

bin/logstash -e 'input { stdin { } } output { elasticsearch { host => "" } }'

Get a list of indices
curl 'localhost:9200/_cat/indices?v'
Create logstash.conf

input {
  stdin {
    type => "stdin-type"
  }

  file {
    type => "syslog-ng"

    # Wildcards work, here :)
    path => [ "/var/log/*.log", "/var/log/messages", "/var/log/syslog" ]
  }
}

output {
  stdout { }
  elasticsearch {
    type => "stdin-type"
    embedded => false
    host => ""
    port => "9300"
    cluster => "devtech101_cluster1"
    node_name => "SrvNet"
  }
}

Now to start logstash
bin/logstash -w 4 -f logstash.conf
# Note -w above is for workers (1 worker per cpu makes sense)
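The one-worker-per-CPU rule can be scripted instead of hard-coding -w 4 (a sketch assuming GNU coreutils nproc is available):

```shell
# One Logstash filter worker per CPU core, instead of a hard-coded -w 4
WORKERS=$(nproc)
echo "starting logstash with ${WORKERS} workers"
# bin/logstash -w "${WORKERS}" -f logstash.conf
```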

To reduce swapping on the Elasticsearch node, lower vm.swappiness:
sysctl -w vm.swappiness=1

Raise the Elasticsearch file descriptor and memory lock limits in /etc/security/limits.conf:
elasticsearch - nofile 65535
elasticsearch - memlock unlimited

# Size the JVM heap (ES_HEAP_SIZE) generously, typically 50% of memory

# Lock the heap in memory by adding this to elasticsearch.yml
bootstrap.mlockall: true
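The 50%-of-memory heap guideline can be computed rather than guessed (a sketch assuming a Linux host with /proc/meminfo):

```shell
# Suggest an Elasticsearch heap size of half the machine's RAM
TOTAL_KB=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
HEAP_MB=$((TOTAL_KB / 2 / 1024))
echo "ES_HEAP_SIZE=${HEAP_MB}m"
```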

Limiting the field-data cache size makes sense because you rarely need to retrieve logs that are older than a few days.

# To limit the cache size simply add the following value anywhere in your custom elasticsearch.yml 
indices.fielddata.cache.size:  40%

To check that this value has been configured properly, you can run this command.

curl http://localhost:9200/_nodes/process?pretty

# Or per index
GET /_stats/fielddata?fields=*
# Or per node
GET /_nodes/stats/indices/fielddata?fields=*
# Or per index per node
GET /_nodes/stats/indices/fielddata?level=indices&fields=*

More ELK tuning rules of thumb

- do not select fewer shards than the total number of nodes you will add to the cluster; each node should hold at least one shard.

- do not let a shard grow bigger than your JVM heap (this is really a rough estimation) so segment merging will work flawlessly

- if you want fast recovery, or if you want to move shards around (not a common case), the smaller a shard is the faster the operation will get done

- max shard size should be 8 GB here, because the heap size will be about 8.5 GB; and with a 420 GB EBS volume, the total of all shards shouldn't take up more than 200 GB

- The heap should be as big as your largest shard, irrespective of what index it belongs to or if it's a replica.
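Putting the numbers above together, the shard count implied by the 200 GB data cap and the 8 GB max shard size can be sanity-checked with a little arithmetic:

```shell
# Rough shard-count estimate from the rules of thumb above
MAX_SHARD_GB=8      # keep a shard no bigger than the JVM heap (~8.5 GB here)
TOTAL_DATA_GB=200   # self-imposed cap on the 420 GB EBS volume

# Ceiling division: shards needed to keep every shard under the cap
SHARDS=$(( (TOTAL_DATA_GB + MAX_SHARD_GB - 1) / MAX_SHARD_GB ))
echo "$SHARDS"
```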

