logstash-shipper-1 \
logstash-shipper-2 -- redis/kafka -- logstash-indexer
logstash-shipper-x /                      |
                                    elasticsearch
                                          |
                                        kibana
Official documentation: https://www.elastic.co/guide/en/logstash/current/installing-logstash.html
Confirm that the installed Java version is higher than Java 7:
java -version
Configure the repo to install from the Package Repositories:
rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch
sudo vim /etc/yum.repos.d/logstash.repo
Add the following:
[logstash-2.3]
name=Logstash repository for 2.3.x packages
baseurl=https://packages.elastic.co/logstash/2.3/centos
gpgcheck=1
gpgkey=https://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1
Install Logstash:
sudo yum -y install logstash
After it finishes, check that the installation succeeded:
[test@vpca-talaris-kerrigan-1 ~]$ /opt/logstash/bin/logstash -e 'input { stdin { } } output { stdout {} }'
Settings: Default pipeline workers: 4
Pipeline main started
hello logstash
2016-06-12T04:03:19.379Z vpca-talaris-kerrigan-1.vm hello logstash
Install Logstash (or Beats) on every machine whose logs need collecting and point its output at a designated Redis instance; these act as shippers. On the machine that aggregates the logs, install Logstash with its input set to the same Redis the shippers write to; this indexer then collects the logs of multiple shippers.
For example, configure a shipper:
sudo vi /etc/logstash/conf.d/talaris.courier.conf
Add the configuration:
input {
  file {
    path => ["/var/log/ves/talaris.courier/*.log"]  # collect all logs under this directory
    start_position => "beginning"
    type => "talaris.courier"
  }
}
filter {
  grok {
    match => { "message" => ".+? .+? (?<log_level>.+?) .+" }  # adds an extra field named log_level to the output
  }
}
output {
  redis {
    host => "vpca-talaris-courier-1.vm.elenet.me"  # Redis address
    data_type => "list"
    key => "logstash:talaris.courier.logs"
  }
}
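To try the grok pattern before deploying it, a quick sketch using a stdin-to-stdout pipeline (same pattern as above; the rubydebug codec prints the parsed event):

/opt/logstash/bin/logstash -e 'input { stdin { } } filter { grok { match => { "message" => ".+? .+? (?<log_level>.+?) .+" } } } output { stdout { codec => rubydebug } }'

Typing a line such as 2016-06-12 04:03:19 INFO worker started (a hypothetical log format) should print an event containing log_level => "INFO".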
Configure the indexer:
sudo vi /etc/logstash/conf.d/talaris.conf
Add the configuration:
input {
  redis {
    host => "vpca-talaris-courier-1.vm.elenet.me"  # must match the Redis address in the shipper config
    data_type => "list"
    key => "logstash:talaris.courier.logs"
  }
}
output {
  file {  # save the collected logs to a file
    path => "/tmp/%{+yyyy}-%{+MM}-%{+dd}.%{type}.log"
    # path => "/tmp/%{+yyyy}-%{+MM}-%{+dd}.%{host}.log.gz"
    # message_format => "%{message}"  # save the raw log line without JSON conversion
    # gzip => true  # compress the logs with gzip
  }
  elasticsearch {
    hosts => ["127.0.0.1:9200"]
    index => "logstash-%{type}-%{log_level}-%{+YYYY.MM.dd}-logs"
    flush_size => 20000
    idle_flush_time => 10
  }
}
Once both are configured, restart Logstash on each machine; logs from the shippers will then show up under /tmp on the indexer.
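To sanity-check the pipeline, you can watch the Redis list and confirm that indices appear in Elasticsearch; the host and key below match the example configs above:

redis-cli -h vpca-talaris-courier-1.vm.elenet.me llen logstash:talaris.courier.logs  # queue length; should fall as the indexer consumes it
curl 'http://127.0.0.1:9200/_cat/indices?v'  # run on the indexer; the logstash-* indices should be listed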
Elasticsearch documentation: https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-repositories.html#_yum_dnf
Configure the repo the same way to install from the Package Repositories:
rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch
sudo vim /etc/yum.repos.d/elasticsearch.repo
Add:
[elasticsearch-2.x]
name=Elasticsearch repository for 2.x packages
baseurl=http://packages.elastic.co/elasticsearch/2.x/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1
Install:
sudo yum install -y elasticsearch
Configuration file: /etc/elasticsearch/elasticsearch.yml
Default file locations: https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-dir-layout.html#_deb_and_rpm
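A minimal sketch of settings commonly changed in elasticsearch.yml (the values below are placeholders, not from the original setup):

cluster.name: talaris-logs      # hypothetical cluster name; nodes sharing it form one cluster
node.name: vpca-talaris-es-1    # hypothetical node name
network.host: 0.0.0.0           # bind to all interfaces if Logstash/Kibana run on other hosts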
Check the running status:
sudo /etc/init.d/elasticsearch status
Start Elasticsearch:
sudo /sbin/chkconfig --add elasticsearch
sudo service elasticsearch start
Configure systemd:
sudo /bin/systemctl daemon-reload
sudo /bin/systemctl enable elasticsearch.service
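To confirm Elasticsearch is up, query its HTTP endpoint; it returns a JSON banner that includes the version:

curl -XGET 'http://localhost:9200/?pretty'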
Configure the repo the same way to install Kibana from the Package Repositories:
rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch sudo vim /etc/yum.repos.d/kibana.repo
Add:
[kibana-4.5]
name=Kibana repository for 4.5.x packages
baseurl=http://packages.elastic.co/kibana/4.5/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1
Install:
sudo yum -y install kibana
Check the running status:
sudo /etc/init.d/kibana status
Configure systemd:
sudo /bin/systemctl daemon-reload
sudo /bin/systemctl enable kibana.service
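The package also registers an init script, so the service can be started with (under systemd, sudo systemctl start kibana is the equivalent):

sudo service kibana start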
By default, Kibana connects to the Elasticsearch instance running on localhost. To connect to a different Elasticsearch instance, edit /opt/kibana/config/kibana.yml.
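A minimal sketch of the relevant kibana.yml settings (the Elasticsearch URL is an assumption matching the single-node indexer example above):

# /opt/kibana/config/kibana.yml
server.port: 5601                            # port Kibana listens on
server.host: "0.0.0.0"                       # bind address; 0.0.0.0 allows remote access
elasticsearch.url: "http://127.0.0.1:9200"   # the Elasticsearch instance Kibana queries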
Visit localhost:5601 to open Kibana. On the Settings page, set the index pattern to logstash-* (matching the index in the elasticsearch output of the Logstash config). Naming the index logstash-appid-loglevel-datetime makes index-pattern setup more flexible.
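To check which indices a pattern such as logstash-* will actually match, list them from Elasticsearch:

curl 'http://127.0.0.1:9200/_cat/indices/logstash-*?v'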