doina

A rookie ops engineer.

Collecting nginx logs with Filebeat + Redis + ELK

  • Filebeat ships the logs to Redis
  • Logstash pulls the data from Redis
  • Logstash processes the data
  • Logstash sends the data to Elasticsearch
  • Kibana queries the data from Elasticsearch

My server really can't take any more load... and GeoIP isn't set up yet, so this will have to do for now.
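The handoff in the steps above can be sketched in a few lines of Python. Here a deque stands in for the Redis list (the event shape is abridged and the tag name mirrors the config later in the post; a real setup talks to Redis, not an in-memory queue):

```python
import json
from collections import deque

# Stand-in for one Redis list: Filebeat RPUSHes JSON events onto the tail,
# Logstash LPOPs them from the head, so delivery is FIFO and the shipper
# and the consumer never block each other.
redis_list = deque()

def filebeat_publish(line, tags):
    """Filebeat side: wrap a raw log line in a JSON event and push it."""
    redis_list.append(json.dumps({"message": line, "tags": tags}))  # RPUSH

def logstash_consume():
    """Logstash side: pop the oldest event and decode the JSON."""
    return json.loads(redis_list.popleft())  # LPOP

filebeat_publish('1.2.3.4 - - [21/Mar/2019:10:00:00 +0800] "GET / HTTP/1.1" 200 612',
                 ["ecs-nginx-access"])
event = logstash_consume()
print(event["tags"][0])  # ecs-nginx-access
```

The Redis list is what decouples collection from processing: if Logstash falls behind, events simply accumulate in the list instead of back-pressuring nginx's log host.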

Install Redis

$ yum -y install epel-release
$ yum -y install redis
$ systemctl start redis
$ systemctl enable redis

Install EFLK (Elasticsearch, Filebeat, Logstash, Kibana)

$ rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
$ cat > /etc/yum.repos.d/elasticsearch.repo << EOF
[elasticsearch-6.x]
name=Elasticsearch repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
EOF

$ yum -y install java elasticsearch-6.6.2 logstash-6.6.2 filebeat-6.6.2 kibana-6.6.2

Configure Filebeat

Edit the configuration file

$ vim /etc/filebeat/filebeat.yml
# filebeat.prospectors / input_type are deprecated in 6.x; use filebeat.inputs / type
filebeat.inputs:
- type: log
  paths:
    - /usr/local/nginx/logs/access.log
  tags: ["ecs-nginx-access"]

- type: log
  paths:
    - /usr/local/nginx/logs/error.log
  tags: ["ecs-nginx-error"]

output.redis:
  enabled: true
  hosts: ["127.0.0.1:6379"]
  keys:
    - key: "nginx-access-logs"
      when.contains:
        tags: "ecs-nginx-access"
    - key: "nginx-error-logs"
      when.contains:
        tags: "ecs-nginx-error"

$ systemctl start filebeat
$ systemctl enable filebeat        
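The `keys`/`when.contains` section of the config above routes each event to a Redis key based on its tags. A hypothetical sketch of that decision (key and tag names come from the config; the matching logic is a simplification of Filebeat's conditionals):

```python
# Each rule pairs a Redis key with the tag its when.contains condition
# requires; the first matching rule wins. With no default `key` configured,
# an event matching neither rule has nowhere to go.
RULES = [
    ("nginx-access-logs", "ecs-nginx-access"),
    ("nginx-error-logs", "ecs-nginx-error"),
]

def pick_redis_key(tags):
    for key, required_tag in RULES:
        if required_tag in tags:  # when.contains: tags
            return key
    return None

print(pick_redis_key(["ecs-nginx-access"]))  # nginx-access-logs
print(pick_redis_key(["ecs-nginx-error"]))   # nginx-error-logs
```

Tagging at the shipper and routing on tags is what lets one Redis instance carry both log streams without mixing them.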

Check that the keys have been created in Redis

$ redis-cli                   
127.0.0.1:6379> keys *
1) "nginx-error-logs"
2) "nginx-access-logs"
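Each list entry is a JSON document. You can peek at one without consuming it via `LRANGE nginx-access-logs 0 0` in redis-cli; decoding a (much abridged, hypothetical) entry looks like this:

```python
import json

# An abridged example of what Filebeat pushes; real events also carry
# @timestamp, beat/host metadata, the source path, a byte offset, and more.
raw = '{"message": "1.2.3.4 - - \\"GET / HTTP/1.1\\" 200 612", "tags": ["ecs-nginx-access"]}'

event = json.loads(raw)
print(event["tags"])         # ['ecs-nginx-access']
print(event["message"][:7])  # 1.2.3.4
```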

Configure Elasticsearch

Adjust the JVM heap size

$ vim /etc/elasticsearch/jvm.options 
-Xms256m
-Xmx256m

Edit the configuration file

$ mkdir -p /data/elasticsearch/{data,logs}
$ chown -R elasticsearch: /data/elasticsearch

$ grep -Ev "^$|^#" /etc/elasticsearch/elasticsearch.yml
node.name: elasticsearch-1
path.data: /data/elasticsearch/data
path.logs: /data/elasticsearch/logs
network.host: 172.31.217.169
http.port: 9200
http.cors.enabled: true
http.cors.allow-origin: "*"

$ systemctl start elasticsearch 
$ systemctl enable elasticsearch 

Configure Kibana

$ mkdir /var/log/kibana/
$ chown kibana. /var/log/kibana/

$ vim /etc/kibana/kibana.yml 
server.port: 5601
server.host: "172.31.217.169"
elasticsearch.url: "http://172.31.217.169:9200"
kibana.defaultAppId: "discover"
elasticsearch.pingTimeout: 3000
elasticsearch.shardTimeout: 0
elasticsearch.startupTimeout: 9000
pid.file: /tmp/kibana.pid
logging.dest: /var/log/kibana/kibana.log
logging.verbose: false
ops.interval: 5000

$ systemctl start kibana
$ systemctl enable kibana

Configure Logstash

Adjust the JVM heap size

$ vim /etc/logstash/jvm.options 
-Xms256m
-Xmx256m

Edit the configuration file

$ mkdir -p /data/logstash/{data,logs}
$ chown logstash. -R /data/logstash/

$ rm -f /etc/logstash/pipelines.yml 
$ vim /etc/logstash/logstash.yml
node.name: "logstash-01"
http.host: "0.0.0.0"
log.level: info
path.data: /data/logstash/data
path.logs: /data/logstash/logs
pipeline.workers: 1
pipeline.batch.size: 12500
pipeline.batch.delay: 2
pipeline.unsafe_shutdown: false
path.config: /etc/logstash/conf.d/*.conf
config.test_and_exit: false
config.reload.automatic: true
config.reload.interval: 3
config.debug: false
queue.type: memory 

Create the pipeline file

$ vim /etc/logstash/conf.d/pipeline.conf
input {
    redis {
        db => 0
        host => "127.0.0.1"
        port => 6379
        key => "nginx-access-logs"
        data_type => "list"
        threads => 8
        batch_count => 500
    }

    redis {
        db => 0
        host => "127.0.0.1"
        port => 6379
        key => "nginx-error-logs"
        data_type => "list"
        threads => 8
        batch_count => 500
    }
}

filter {
    if "ecs-nginx-access" in [tags] {
        grok {
            match => {"message" => "%{IPORHOST:client_ip} - (%{GREEDYDATA:user}|-) \[%{HTTPDATE:logtime}\] \"(?:%{WORD:verb} %{URIPATH:request}(%{URIPARAM:parameter})?(?: HTTP/%{NUMBER:httpversion})?|-)\" %{QS:body} %{QS:header} %{NUMBER:response} (?:%{NUMBER:bytes}|-) (?:%{QS:referrer}|-) (?:%{QS:agent}|-) \"(%{IPORHOST:x_forwarded_for}|-)\" %{QS:domain} (%{QUOTEDSTRING:upstream_addr}) \"?(%{NUMBER:upstream_response_time1}|-)\" \"?(%{BASE10NUM:request_time}|-)\" \"?(%{NOTSPACE:upstream_name}|-)?\""}
            timeout_millis => 0
        }
        mutate {
                gsub => ["x_forwarded_for", "-|unknown", "0.0.0.0" ]
        }
        date {
                locale => "en"
                match => ["logtime", "dd/MMM/YYYY:HH:mm:ss Z", "YYYY/MM/dd HH:mm:ss"]
                timezone => "Asia/Shanghai"
                target => "@timestamp"
        }
        mutate {
            convert => [ "upstream_response_time1", "float"]
            convert => [ "request_time", "float"]
            convert => [ "body", "string"]
            convert => [ "header", "string"]
        }
    }
}    

output {
    if "ecs-nginx-access" in [tags] {
        elasticsearch {
            hosts => "172.31.217.169:9200"
            index => "ecs-nginx-access-%{+YYYY.MM.dd}"
            manage_template => false

        }
    } else if "ecs-nginx-error" in [tags] {
        elasticsearch {
            hosts => "172.31.217.169:9200"
            index => "ecs-nginx-error-%{+YYYY.MM.dd}"        
            manage_template => false
        }
    }
}

$ systemctl start logstash
$ systemctl enable logstash
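The grok pattern in the filter above targets a customized nginx log format. As a sanity check of the general approach, here is a much-simplified Python regex covering only the leading combined-log fields (the sample line and field subset are illustrative, not the full custom format):

```python
import re

# Simplified stand-in for the grok pattern: only client_ip, user, logtime,
# verb, request, httpversion, response, and bytes; the real pattern also
# captures body, header, referrer, agent, domain, and upstream fields.
LINE = '203.0.113.7 - admin [21/Mar/2019:10:00:00 +0800] "GET /index.html HTTP/1.1" 200 612'

PATTERN = re.compile(
    r'(?P<client_ip>\S+) - (?P<user>\S+) '
    r'\[(?P<logtime>[^\]]+)\] '
    r'"(?P<verb>\S+) (?P<request>\S+) HTTP/(?P<httpversion>[\d.]+)" '
    r'(?P<response>\d{3}) (?P<bytes>\d+|-)'
)

m = PATTERN.match(LINE)
print(m.group("client_ip"), m.group("response"))  # 203.0.113.7 200
```

Testing the pattern against a real line from access.log before deploying (the Kibana Dev Tools grok debugger or `logstash -t` both help) saves a lot of `_grokparsefailure` events later.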

Configure the Kibana index patterns

Create the ecs-nginx-access-* and ecs-nginx-error-* index patterns.
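The `%{+YYYY.MM.dd}` in the Logstash output's `index` setting expands to the event's `@timestamp` date (in UTC), so each day gets its own index and the wildcard patterns match them all. A quick sketch of that naming (prefixes taken from the config):

```python
import fnmatch
from datetime import datetime

def index_for(prefix, when):
    # Mirrors index => "<prefix>-%{+YYYY.MM.dd}" in the Logstash output.
    return "%s-%s" % (prefix, when.strftime("%Y.%m.%d"))

name = index_for("ecs-nginx-access", datetime(2019, 3, 21))
print(name)                                         # ecs-nginx-access-2019.03.21
print(fnmatch.fnmatch(name, "ecs-nginx-access-*"))  # True
```

Daily indices make retention trivial: old days can be deleted or curated index-by-index without touching current data.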

Search by domain

domain: "http://baiyongjie.com"

domain: "http://img.baiyongjie.com"

Search by post

parameter: "?p=538"
