DevTech101

To install Elasticsearch plugins through a proxy, pass the proxy host and port as shown below.

Install a small cluster status utility (elasticsearch-head):
bin/plugin -DproxyPort=8888 -DproxyHost=127.0.0.1 --verbose --install mobz/elasticsearch-head
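
To confirm the plugin was installed, list the installed plugins (same bin/plugin tool as above):
bin/plugin --list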

To start Elasticsearch:
cd /opt/elasticsearch; ./bin/elasticsearch &

To access the node status
http://os3.domain.com:9200/_plugin/head/
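
You can also verify the node from the command line; a minimal check, assuming Elasticsearch answers on its default HTTP port 9200:

# Shows the cluster name, status (green/yellow/red) and node/shard counts
curl http://os3.domain.com:9200/_cluster/health?pretty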

To configure Kibana, edit /opt/kibana/config/kibana.yml:

host: "0.0.0.0"
elasticsearch_url: "http://10.10.10.16:9200"

Start Kibana:
/opt/kibana/bin/kibana
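
To quickly confirm Kibana is up, a minimal check, assuming the Kibana 4 default port of 5601:

curl -I http://10.10.10.16:5601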

Logstash Howto
Create a certificate:
cd /etc/pki/tls
openssl req -x509 -nodes -newkey rsa:2048 -days 2365 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt -subj /CN=os3.domain.com
# Or, without pinning the CN via -subj:
openssl req -x509 -days 2365 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt
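
To sanity-check the generated certificate, print its subject and validity window:

openssl x509 -in certs/logstash-forwarder.crt -noout -subject -dates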
-----

To test logstash
bin/logstash -e 'input { stdin { } } output { stdout {} }'
hello world

bin/logstash -e 'input { stdin { } } output { elasticsearch { host => "os3.domain.com" } }'

Get a list of indices
curl 'localhost:9200/_cat/indices?v'
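
If the stdin test above was pointed at Elasticsearch, you can also confirm the event was indexed; logstash-* below matches Logstash's default daily index naming:

curl 'localhost:9200/logstash-*/_search?q=hello&pretty'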
-----
Create logstash.conf

input {
  stdin {
    type => "stdin-type"
  }

  file {
    type => "syslog-ng"

    # Wildcards work here :)
    path => [ "/var/log/*.log", "/var/log/messages", "/var/log/syslog" ]
  }
}

output {
  stdout { }
  elasticsearch {
    type => "stdin-type"
    embedded => false
    host => "10.10.10.16"
    port => "9300"
    cluster => "devtech101_cluster1"
    node_name => "SrvNet"
  }
}
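
Before starting it for real, it is worth validating the file; a minimal check, assuming a Logstash release that supports the --configtest flag (newer versions also accept -t as a short form):

bin/logstash -f logstash.conf --configtest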

Now to start logstash
bin/logstash -w 4 -f logstash.conf
# Note: -w sets the number of filter workers (one worker per CPU core makes sense)
Tuning

sysctl -w vm.swappiness=1
/etc/security/limits.conf:

elasticsearch - nofile 65535
elasticsearch - memlock unlimited
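
The sysctl setting above does not survive a reboot, and the limits are easy to verify; a small sketch, assuming a standard CentOS layout and that the elasticsearch user has no login shell:

# Persist swappiness across reboots
echo "vm.swappiness=1" >> /etc/sysctl.conf

# Verify open-file and memlock limits as the elasticsearch user
su -s /bin/sh elasticsearch -c 'ulimit -n; ulimit -l'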

/etc/default/elasticsearch:

ES_HEAP_SIZE=512m
# Or even more; typically 50% of available memory
ES_HEAP_SIZE=4g
MAX_OPEN_FILES=65535
MAX_LOCKED_MEMORY=unlimited
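
After restarting Elasticsearch, you can confirm the heap setting was picked up (heap_max in the jvm section should match ES_HEAP_SIZE):

curl http://localhost:9200/_nodes/jvm?pretty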

/etc/elasticsearch/elasticsearch.yml:

bootstrap.mlockall: true

Limiting the fielddata cache size makes sense because you rarely need to retrieve logs that are older than a few days.

# To limit the cache size simply add the following value anywhere in your custom elasticsearch.yml 
indices.fielddata.cache.size:  40%

To check that these values have been applied (the process section includes max_file_descriptors and the mlockall flag), run this command:

curl http://localhost:9200/_nodes/process?pretty

# Or per index
GET /_stats/fielddata?fields=*
# Or per node
GET /_nodes/stats/indices/fielddata?fields=*
# Or per index per node
GET /_nodes/stats/indices/fielddata?level=indices&fields=*
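
The same stats are available with plain curl; for example, the per-node form above becomes:

curl 'localhost:9200/_nodes/stats/indices/fielddata?fields=*&pretty'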

More ELK tuning rules of thumb

- Do not select fewer shards than the total number of nodes you will add to the cluster; each node should hold at least one shard.

- Do not let a shard grow bigger than your JVM heap (a rough estimate only), so segment merging will work flawlessly.

- If you want fast recovery, or if you want to move shards around (not a common case), the smaller the shard, the faster the operation gets done.

- Max shard size here should be 8 GB because the heap size will be about 8.5 GB; with a 420 GB EBS volume, the total amount of shards shouldn't take up more than 200 GB (see the worked example after this list).

- The heap should be as big as your largest shard, irrespective of what index it belongs to or if it's a replica.
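
As a rough worked example of the rules above (the node count and sizes are assumptions for illustration): with 3 nodes, an 8.5 GB heap per node, and daily indices staying under ~40 GB, 5 primary shards keep each shard below the 8 GB ceiling. The shard count for new Logstash indices can be set once with an index template:

# Hypothetical template name; applies to all future logstash-* indices
curl -XPUT 'localhost:9200/_template/logstash_shards' -d '{
  "template": "logstash-*",
  "settings": {
    "number_of_shards": 5,
    "number_of_replicas": 1
  }
}'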

Reference
Basic Setup

How to Install Elasticsearch on CentOS 7/6

Advanced Setup
https://www.digitalocean.com/community/tutorials/how-to-install-elasticsearch-logstash-and-kibana-4-on-centos-7

Hardware Sizing
https://www.elastic.co/guide/en/elasticsearch/guide/current/hardware.html

Tips
http://www.pipebug.com/elasticsearch-logstash-kibana-4-mapping-5.html
http://docs.fluentd.org/articles/install-by-rpm
http://docs.fluentd.org/articles/free-alternative-to-splunk-by-fluentd
