
Setting up logstash-elasticsearch-kibana on CentOS (2016-06-14 17:02)

logstash-shipper-1
                   \
logstash-shipper-2 -- redis/kafka -- logstash-indexer
                   /                        |
logstash-shipper-x                    elasticsearch
                                            |
                                          kibana

Install logstash

Official docs: https://www.elastic.co/guide/en/logstash/current/installing-logstash.html

Confirm that the installed Java version is 7 or later:

java -version
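In a provisioning script, the major version can be pulled out of the `java -version` output. A minimal sketch (the sample string is hypothetical; real output goes to stderr, so capture it with `2>&1`):

```shell
# Hypothetical first line of `java -version` output
ver='java version "1.8.0_92"'
# Strip an optional leading "1." so both "1.7.0_80" and "9.0.1" yield the major version
major=$(echo "$ver" | sed -E 's/.*"(1\.)?([0-9]+).*/\2/')
echo "$major"
```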

Configure a repo to install from the official Package Repositories:

rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch
sudo vim /etc/yum.repos.d/logstash.repo

Add the following:

[logstash-2.3]
name=Logstash repository for 2.3.x packages
baseurl=https://packages.elastic.co/logstash/2.3/centos
gpgcheck=1
gpgkey=https://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1

Install logstash:

sudo yum -y install logstash

After it finishes, verify the installation:

[test@vpca-talaris-kerrigan-1 ~]$ /opt/logstash/bin/logstash -e 'input { stdin { } } output { stdout {} }'
Settings: Default pipeline workers: 4
Pipeline main started
hello logstash
2016-06-12T04:03:19.379Z vpca-talaris-kerrigan-1.vm hello logstash

On each machine whose logs you want to collect, install logstash (or beats) as a shipper by pointing its output at a designated redis instance. On the machine that aggregates logs, install logstash as an indexer, with its input reading from the same redis instance the shippers write to; this lets one indexer collect logs from many shippers.

For example, to configure a shipper:

sudo vi /etc/logstash/conf.d/talaris.courier.conf

Add the configuration:

input {
    file {
        path => ["/var/log/ves/talaris.courier/*.log"]  # collect every log file in this directory
        start_position => "beginning"
        type => "talaris.courier"
    }
}

filter {
    grok {
        match => {
            "message" => ".+? .+? (?<log_level>.+?) .+"  # 在output中多加一个叫做log_level的字段
        }
    }
}

output {
    redis {
        host => "vpca-talaris-courier-1.vm.elenet.me"  # redis host
        data_type => "list"
        key => "logstash:talaris.courier.logs"
    }
}
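The grok pattern above, with its lazy `.+?` segments, effectively captures the third whitespace-separated token of each line as log_level. A quick sketch of the same extraction in plain shell (the sample log line is made up):

```shell
# Hypothetical log line whose third field is the log level
line='2016-06-14 17:02:03 ERROR order lookup failed'
# awk's third field is what the grok pattern captures for a line like this
log_level=$(echo "$line" | awk '{print $3}')
echo "$log_level"
```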

Configure the indexer:

sudo vi /etc/logstash/conf.d/talaris.conf

Add the configuration:

input {
    redis {
        host => "vpca-talaris-courier-1.vm.elenet.me"  # must match the redis host in the shipper config
        data_type => "list"
        key => "logstash:talaris.courier.logs"
    }
}

output {
    file {   # save collected logs to a file
        path => "/tmp/%{+yyyy}-%{+MM}-%{+dd}.%{type}.log"
        # path => "/tmp/%{+yyyy}-%{+MM}-%{+dd}.%{host}.log.gz"
        # message_format => "%{message}"  # write the raw log line instead of the JSON event
        # gzip => true  # gzip-compress the output file
    }
    elasticsearch {
        hosts => ["127.0.0.1:9200"]
        index => "logstash-%{type}-%{log_level}-%{+YYYY.MM.dd}-logs"
        flush_size => 20000
        idle_flush_time => 10
    }
}
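For one concrete event, the index name above expands as follows (a sketch; the type and log_level values are hypothetical examples, and %{+YYYY.MM.dd} is the event's UTC date):

```shell
# Hypothetical event fields
type="talaris.courier"
log_level="ERROR"
day=$(date -u -d "2016-06-14" +%Y.%m.%d)   # what %{+YYYY.MM.dd} produces for that date
index="logstash-${type}-${log_level}-${day}-logs"
echo "$index"
```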

Once both configs are in place, restart logstash on each machine; logs from the shippers will then appear under /tmp on the indexer.
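When changing these configs, logstash 2.x can also syntax-check a file before a restart, which catches typos without bouncing the service (a sketch, using the shipper config path from above):

```shell
/opt/logstash/bin/logstash --configtest -f /etc/logstash/conf.d/talaris.courier.conf
```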

Install elasticsearch

Docs: https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-repositories.html#_yum_dnf

Configure the repo the same way, to install from the Package Repositories:

rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch
sudo vim /etc/yum.repos.d/elasticsearch.repo

Add:

[elasticsearch-2.x]
name=Elasticsearch repository for 2.x packages
baseurl=http://packages.elastic.co/elasticsearch/2.x/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1

Install:

sudo yum install -y elasticsearch

Config file: /etc/elasticsearch/elasticsearch.yml

Default file locations: https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-dir-layout.html#_deb_and_rpm

Check the service status:

sudo /etc/init.d/elasticsearch status

Start elasticsearch:

sudo /sbin/chkconfig --add elasticsearch
sudo service elasticsearch start
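Once started, a quick way to confirm elasticsearch is answering is to hit its HTTP endpoint (assumes the default port 9200; it should return a small JSON banner with the cluster name and version):

```shell
curl -s http://127.0.0.1:9200/
```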

Configure systemd:

sudo /bin/systemctl daemon-reload
sudo /bin/systemctl enable elasticsearch.service

Install kibana

Configure the repo the same way, to install from the Package Repositories:

rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch
sudo vim /etc/yum.repos.d/kibana.repo

Add:

[kibana-4.5]
name=Kibana repository for 4.5.x packages
baseurl=http://packages.elastic.co/kibana/4.5/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1

Install:

sudo yum -y install kibana

Check the service status:

sudo /etc/init.d/kibana status

Configure systemd:

sudo /bin/systemctl daemon-reload
sudo /bin/systemctl enable kibana.service

By default, Kibana connects to the Elasticsearch instance running on localhost. To connect to a different Elasticsearch instance, edit /opt/kibana/config/kibana.yml.

Open localhost:5601 to reach Kibana, then set an index pattern such as logstash-* on the Settings page (it must match the elasticsearch index names in logstash's output). Because the index here is named logstash-appid-loglevel-datetime, you can also define narrower patterns, e.g. per application or per log level.
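Before defining the index pattern, it can help to list which indices logstash has actually created, using elasticsearch's _cat API (assumes elasticsearch on localhost:9200):

```shell
curl -s 'http://127.0.0.1:9200/_cat/indices?v'
```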

Reference: https://www.gitbook.com/book/chenryn/kibana-guide-cn


Unless otherwise noted, articles are 阿小信's personal notes; please credit the source when reposting. If any content infringes, contact me and I will remove it promptly.
