Filebeat Collects Nginx Logs and Sends Them to Elasticsearch via a Redis Cache

Caching log data in Redis mainly addresses application decoupling, asynchronous messaging, and traffic peak shaving.


Filebeat on the nginx server collects the access logs and writes them to the Redis server; a separate Logstash instance then pulls the data out of Redis and writes it to the Elasticsearch cluster.

Limitations

  • Redis Cluster is not supported, so the cache is a single point of failure, although multiple Redis nodes can be load balanced
  • Redis holds data in memory, so the amount of log data it can buffer is limited

1. Deploy Nginx and Configure JSON-Format Access Logs

root@web01:~# vim /etc/nginx/nginx.conf
http {
        log_format access_json '{"@timestamp":"$time_iso8601",'
                '"host":"$server_addr",'
                '"clientip":"$remote_addr",'
                '"size":$body_bytes_sent,'
                '"responsetime":$request_time,'
                '"upstreamtime":"$upstream_response_time",'
                '"upstreamhost":"$upstream_addr",'
                '"http_host":"$host",'
                '"uri":"$uri",'
                '"domain":"$host",'
                '"xff":"$http_x_forwarded_for",'
                '"referer":"$http_referer",'
                '"tcp_xff":"$proxy_protocol_addr",'
                '"http_user_agent":"$http_user_agent",'
                '"status":"$status"}';
        access_log /var/log/nginx/access_json.log access_json;
}
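
Before restarting, the configuration syntax can be validated first:

root@web01:~# nginx -t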

root@web01:~# systemctl restart nginx

root@web01:~# tail -f /var/log/nginx/access_json.log


{"@timestamp":"2023-01-05T05:15:40+00:00","host":"192.168.1.105","clientip":"192.168.1.1","size":396,"responsetime":0.000,"upstreamtime":"-","upstreamhost":"-","http_host":"192.168.1.105","uri":"/index.nginx-debian.html","domain":"192.168.1.105","xff":"-","referer":"-","tcp_xff":"-","http_user_agent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36 Edg/108.0.1462.42","status":"200"}
{"@timestamp":"2023-01-05T05:15:40+00:00","host":"192.168.1.105","clientip":"192.168.1.1","size":197,"responsetime":0.000,"upstreamtime":"-","upstreamhost":"-","http_host":"192.168.1.105","uri":"/favicon.ico","domain":"192.168.1.105","xff":"-","referer":"http://192.168.1.105/","tcp_xff":"-","http_user_agent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36 Edg/108.0.1462.42","status":"404"}

2. Install and Configure Redis

root@redis:~# apt install -y redis
root@redis:~# vim /etc/redis/redis.conf
bind 0.0.0.0
save ""           #禁用rdb持久保存
#save 900 1
#save 300 10
#save 60 10000
requirepass 123456

root@redis:~# systemctl restart redis
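
A quick connectivity check confirms the bind address and password work; PONG is the expected reply:

root@redis:~# redis-cli -a 123456 ping
PONG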

3. Collect Logs into Redis with Filebeat

3.1 Install Filebeat
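
A minimal installation sketch for Debian/Ubuntu, assuming the 7.17.8 DEB package (the version visible in the Redis output below):

root@web01:~# wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.17.8-amd64.deb
root@web01:~# dpkg -i filebeat-7.17.8-amd64.deb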

3.2 Modify the Filebeat Configuration

root@web01:~# vim /etc/filebeat/filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access_json.log
  json.keys_under_root: true # default is false, which stores the JSON in the message field; true stores the keys at the top level, outside message
  json.overwrite_keys: true  # when true, keys from the custom JSON overwrite Filebeat's default fields such as message
  tags: ["nginx-access"]
- type: log
  enabled: true
  paths:
    - /var/log/nginx/error.log
  tags: ["nginx-error"]
- type: log
  enabled: true
  paths:
    - /var/log/syslog
  tags: ["syslog"]
output.redis:
  hosts: ["192.168.1.109:6379"]
  password: "123456"
  db: "0"
  key: "filebeat-1.105" # all logs go into a single list whose key is filebeat-1.105; llen filebeat-1.105 shows its length, i.e. the number of log records

  # Alternatively, a keys list can route different logs to different keys:
  #keys:
  #  - key: "nginx_access"
  #    when.contains:
  #      tags: "access"
  #  - key: "nginx_error"
  #    when.contains:
  #      tags: "error"
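
Before starting the service, the configuration and the connection to Redis can be checked with Filebeat's built-in tests:

root@web01:~# filebeat test config
root@web01:~# filebeat test output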

3.3 Start the Filebeat Service

root@web01:~# systemctl enable --now filebeat.service 

3.4 Verify in Redis

127.0.0.1:6379> KEYS *
1) "filebeat-1.105"
127.0.0.1:6379> llen filebeat-1.105
(integer) 22782
127.0.0.1:6379> LINDEX filebeat-1.105 1
"{\"@timestamp\":\"2023-01-05T05:15:40.000Z\",\"@metadata\":{\"beat\":\"filebeat\",\"type\":\"_doc\",\"version\":\"7.17.8\"},\"tcp_xff\":\"-\",\"ecs\":{\"version\":\"1.12.0\"},\"agent\":{\"version\":\"7.17.8\",\"hostname\":\"web01.test.com\",\"ephemeral_id\":\"f98ca204-3307-4d19-a7a6-9704ea9e7105\",\"id\":\"429c2d99-edff-467c-9511-6839b9548638\",\"name\":\"web01.test.com\",\"type\":\"filebeat\"},\"http_user_agent\":\"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36 Edg/108.0.1462.42\",\"status\":\"404\",\"upstreamhost\":\"-\",\"host\":{\"name\":\"web01.test.com\"},\"uri\":\"/favicon.ico\",\"responsetime\":0,\"xff\":\"-\",\"http_host\":\"192.168.1.105\",\"domain\":\"192.168.1.105\",\"log\":{\"offset\":450,\"file\":{\"path\":\"/var/log/nginx/access_json.log\"}},\"referer\":\"http://192.168.1.105/\",\"clientip\":\"192.168.1.1\",\"tags\":[\"nginx-access\"],\"size\":197,\"upstreamtime\":\"-\",\"input\":{\"type\":\"log\"}}"

4. Install and Configure Logstash to Pull Data from Redis into Elasticsearch

4.1 Install Logstash
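
A minimal installation sketch for Debian/Ubuntu, assuming the same 7.17.8 DEB packages:

root@logstash01:~# wget https://artifacts.elastic.co/downloads/logstash/logstash-7.17.8-amd64.deb
root@logstash01:~# dpkg -i logstash-7.17.8-amd64.deb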

4.2 Configure Logstash

root@logstash01:~# vim /etc/logstash/conf.d/redis-to-es.conf

input {
  # Filebeat writes all log types into the single list filebeat-1.105;
  # if the per-tag keys variant commented out in the Filebeat config is
  # used instead, define one redis input per key here.
  redis {
    host => "192.168.1.109"
    port => "6379"
    password => "123456"
    db => "0"
    key => "filebeat-1.105"
    data_type => "list"
  }
}
output {
  if "nginx-access" in [tags] {
    elasticsearch {
      hosts => ["192.168.1.101:9200","192.168.1.102:9200","192.168.1.103:9200"]
      index => "nginxaccess-1.105-%{+YYYY.MM.dd}"
    }
  }
  if "nginx-error" in [tags] {
    elasticsearch {
      hosts => ["192.168.1.101:9200","192.168.1.102:9200","192.168.1.103:9200"]
      index => "nginxerror-1.105-%{+YYYY.MM.dd}"
    }
  }
  if "syslog" in [tags] {
    elasticsearch {
      hosts => ["192.168.1.101:9200","192.168.1.102:9200","192.168.1.103:9200"]
      index => "syslog-1.105-%{+YYYY.MM.dd}"
    }
  }
}

root@logstash01:~# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/redis-to-es.conf -t

root@logstash01:~# systemctl restart logstash.service 

5. View the Indices via a Plugin

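If no browser plugin is at hand, the same information is available from the _cat API of any cluster node:

root@logstash01:~# curl 'http://192.168.1.101:9200/_cat/indices?v'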

6. Create Index Patterns in Kibana and View the Logs
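
In Kibana, open Stack Management > Index Patterns and create a pattern for each index written above (e.g. nginxaccess-*, nginxerror-*, syslog-*); the documents can then be browsed in Discover.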


7. Monitor the Length of the Redis List

In production, Logstash may fail to pull logs out of Redis fast enough (for performance or other reasons), so large amounts of data can pile up in Redis and consume much of the Redis server's memory, in extreme cases nearly exhausting it. A script can therefore monitor the length of the key in Redis and raise a timely alert once a threshold is reached.

7.1 Python Script

# apt -y install python3-pip
# pip3 install -i https://pypi.tuna.tsinghua.edu.cn/simple redis
# cat check_key_length.py

#!/usr/bin/python3
#coding:utf-8
import redis

def redis_key_len():
    # connect to the Redis instance that buffers the Filebeat logs
    pool = redis.ConnectionPool(host="127.0.0.1", port=6379, db=0, password="123456")
    conn = redis.Redis(connection_pool=pool)
    # length of the list = number of buffered log records; print it so an
    # external monitoring system can consume the value
    data = conn.llen('filebeat-1.105')
    print(data)

redis_key_len()

# python3 check_key_length.py

7.2 Shell Script

# cat check_key_length.sh
#!/bin/bash
WARNING=10          # alert threshold (list length); set low here for testing
REDIS=127.0.0.1
PASSWORD=123456
DB=0

for key in `redis-cli -h $REDIS -a $PASSWORD -n $DB keys '*' `;do
    length=`redis-cli -h $REDIS -a $PASSWORD -n $DB llen $key`
#   echo $length
    if [ $length -gt $WARNING ];then
        echo "$key is too big,length:$length"
    fi
done
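
To run the check periodically, the script can be scheduled from cron; a sketch, assuming it is saved as /root/check_key_length.sh:

*/5 * * * * /bin/bash /root/check_key_length.sh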
