## Foreword

In a distributed system, logs are the primary evidence when troubleshooting. As the number of servers grows, the traditional approach of logging into each machine to read log files becomes inefficient. ELK (Elasticsearch + Logstash + Kibana) is currently the most popular solution for centralized log management.
## 1. Introduction to the ELK Stack

### 1.1 Components
The pipeline at a glance:

```
Log sources → Filebeat → Logstash  → Elasticsearch  → Kibana
              (ship)     (process)   (store/search)   (visualize)
```

| Component     | Role                                                   | Ports     |
|---------------|--------------------------------------------------------|-----------|
| Elasticsearch | Distributed search engine; stores and searches logs    | 9200/9300 |
| Logstash      | Log processing pipeline; filters and transforms events | 5044      |
| Kibana        | Web UI for querying and analysis                       | 5601      |
| Filebeat      | Lightweight log shipper                                | -         |

### 1.2 Data flow

```
┌──────────────────────────────────────────────────────┐
│                 Application servers                  │
│  ┌─────────┐ ┌─────────┐ ┌─────────┐ ┌─────────┐     │
│  │ App Log │ │Nginx Log│ │ Sys Log │ │ DB Log  │     │
│  └────┬────┘ └────┬────┘ └────┬────┘ └────┬────┘     │
│       └───────────┴─────┬─────┴───────────┘          │
│                         ↓                            │
│                   ┌─────────┐                        │
│                   │Filebeat │                        │
│                   └────┬────┘                        │
└────────────────────────┼─────────────────────────────┘
                         ↓
                ┌─────────────────┐
                │    Logstash     │
                │ (filter/parse)  │
                └────────┬────────┘
                         ↓
                ┌─────────────────┐
                │  Elasticsearch  │
                │ (store/index)   │
                └────────┬────────┘
                         ↓
                ┌─────────────────┐
                │     Kibana      │
                │   (visualize)   │
                └─────────────────┘
```

## 2. Deployment with Docker Compose

### 2.1 Full compose file

```yaml
# docker-compose.yml
version: "3.8"

services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.11.0
    container_name: elasticsearch
    environment:
      - node.name=es01
      - cluster.name=docker-cluster
      - discovery.type=single-node
      - bootstrap.memory_lock=true
      - xpack.security.enabled=false
      - "ES_JAVA_OPTS=-Xms2g -Xmx2g"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - es_data:/usr/share/elasticsearch/data
    ports:
      - "9200:9200"
      - "9300:9300"
    networks:
      - elk-net
    healthcheck:
      test: ["CMD-SHELL", "curl -s http://localhost:9200 || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 5

  logstash:
    image: docker.elastic.co/logstash/logstash:8.11.0
    container_name: logstash
    volumes:
      - ./logstash/pipeline:/usr/share/logstash/pipeline
      - ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml
    ports:
      - "5044:5044"   # Beats input
      - "5000:5000"   # TCP input
      - "9600:9600"   # Monitoring API
    environment:
      - "LS_JAVA_OPTS=-Xms512m -Xmx512m"
    networks:
      - elk-net
    depends_on:
      elasticsearch:
        condition: service_healthy

  kibana:
    image: docker.elastic.co/kibana/kibana:8.11.0
    container_name: kibana
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
      - I18N_LOCALE=zh-CN
    ports:
      - "5601:5601"
    networks:
      - elk-net
    depends_on:
      elasticsearch:
        condition: service_healthy

volumes:
  es_data:

networks:
  elk-net:
    driver: bridge
```
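The compose file above exposes port 5000 on Logstash as a raw TCP input. Once the pipeline is up, a short script can push a JSON event into it as a smoke test. This is a minimal sketch, not part of the stack itself: the host, port, and field names are assumptions based on the configuration above, and one JSON object per line is sent (for streaming TCP, Logstash's `json_lines` codec is the more robust choice than `json`).

```python
# send_test_event.py - push one JSON log event into Logstash's TCP input.
# Sketch only: host/port and field names are assumptions from the compose
# file above; each event is one JSON object terminated by a newline.
import json
import socket


def encode_event(event: dict) -> bytes:
    """Serialize an event as newline-delimited JSON bytes."""
    return (json.dumps(event, ensure_ascii=False) + "\n").encode("utf-8")


def send_event(event: dict, host: str = "localhost", port: int = 5000) -> None:
    """Open a TCP connection to Logstash and send a single event."""
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(encode_event(event))


# Requires the stack from section 2.1 to be running:
# send_event({"level": "INFO", "message": "hello from the tcp input"})
```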
### 2.2 Logstash configuration

```yaml
# logstash/config/logstash.yml
http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.hosts: ["http://elasticsearch:9200"]
```

```conf
# logstash/pipeline/logstash.conf
input {
  beats {
    port => 5044
  }
  tcp {
    port  => 5000
    codec => json
  }
}

filter {
  # Parse Nginx access logs
  if [fields][log_type] == "nginx-access" {
    grok {
      match => {
        "message" => '%{IPORHOST:client_ip} - %{DATA:user} \[%{HTTPDATE:timestamp}\] "%{WORD:method} %{URIPATHPARAM:request} HTTP/%{NUMBER:http_version}" %{NUMBER:status} %{NUMBER:bytes} "%{DATA:referrer}" "%{DATA:user_agent}"'
      }
    }
    date {
      match  => ["timestamp", "dd/MMM/yyyy:HH:mm:ss Z"]
      target => "timestamp"
    }
    geoip {
      source => "client_ip"
      target => "geoip"
    }
  }

  # Parse application JSON logs
  if [fields][log_type] == "app" {
    json {
      source => "message"
    }
    date {
      match  => ["time", "ISO8601", "yyyy-MM-dd HH:mm:ss"]
      target => "timestamp"
    }
  }

  # Parse Nginx error logs
  if [fields][log_type] == "nginx-error" {
    grok {
      match => {
        "message" => "%{DATESTAMP:timestamp} \[%{LOGLEVEL:level}\] %{POSINT:pid}#%{NUMBER:tid}: %{GREEDYDATA:error_message}"
      }
    }
  }

  # Drop fields we do not need
  mutate {
    remove_field => ["agent", "ecs", "host", "input", "log"]
  }
}

output {
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]
    index => "%{[fields][log_type]}-%{+YYYY.MM.dd}"
  }
  # Debug output
  # stdout { codec => rubydebug }
}
```

### 2.3 Starting the stack

```bash
# Create the directory layout
mkdir -p logstash/{config,pipeline}

# Set the kernel parameter Elasticsearch requires
sudo sysctl -w vm.max_map_count=262144

# Start the stack
docker-compose up -d

# Verify
curl http://localhost:9200   # ES
curl http://localhost:5601   # Kibana
```
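Grok expressions like the nginx-access pattern in the pipeline above are much easier to debug outside Logstash first. The sketch below approximates (it does not replicate) that pattern with a plain Python regex whose named groups mirror the grok field names; the sample log line is invented for illustration.

```python
# grok_sketch.py - approximate the nginx-access grok pattern with a plain
# regex so it can be sanity-checked without a running Logstash.
# Illustration only; this is not Logstash's grok engine.
import re

NGINX_ACCESS = re.compile(
    r'(?P<client_ip>\S+) - (?P<user>\S+) \[(?P<timestamp>[^\]]+)\] '
    r'"(?P<method>\w+) (?P<request>\S+) HTTP/(?P<http_version>[\d.]+)" '
    r'(?P<status>\d+) (?P<bytes>\d+) "(?P<referrer>[^"]*)" "(?P<user_agent>[^"]*)"'
)


def parse_access_line(line: str):
    """Return a dict of named fields, or None when the line does not match."""
    m = NGINX_ACCESS.match(line)
    return m.groupdict() if m else None


line = ('192.168.1.10 - alice [01/Jan/2024:12:00:00 +0800] '
        '"GET /api/users?id=1 HTTP/1.1" 200 1234 "-" "curl/8.0"')
fields = parse_access_line(line)
# fields["status"] == "200", fields["request"] == "/api/users?id=1"
```

If the regex rejects a real log line here, the grok pattern will most likely fail on it too, producing a `_grokparsefailure` tag in Logstash.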
## 3. Collecting logs with Filebeat

### 3.1 Installing Filebeat

```bash
# Ubuntu/Debian
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
echo "deb https://artifacts.elastic.co/packages/8.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-8.x.list
apt update
apt install filebeat

# CentOS/RHEL
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
cat > /etc/yum.repos.d/elastic.repo <<EOF
[elastic-8.x]
name=Elastic repository for 8.x packages
baseurl=https://artifacts.elastic.co/packages/8.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
EOF
yum install filebeat
```

### 3.2 Filebeat configuration

```yaml
# /etc/filebeat/filebeat.yml
filebeat.inputs:
  # Nginx access log
  - type: log
    enabled: true
    paths:
      - /var/log/nginx/access.log
    fields:
      log_type: nginx-access
    fields_under_root: false

  # Nginx error log
  - type: log
    enabled: true
    paths:
      - /var/log/nginx/error.log
    fields:
      log_type: nginx-error
    fields_under_root: false
    multiline:
      pattern: '^\d{4}/\d{2}/\d{2}'
      negate: true
      match: after

  # Application log
  - type: log
    enabled: true
    paths:
      - /var/log/myapp/*.log
    fields:
      log_type: app
    fields_under_root: false
    json:
      keys_under_root: true
      add_error_key: true

# Ship to Logstash
output.logstash:
  hosts: ["192.168.1.100:5044"]
  loadbalance: true

# If Logstash is remote and has no public IP, a virtual LAN address can be used:
# output.logstash:
#   hosts: ["10.26.0.1:5044"]

# Processors
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - drop_fields:
      fields: ["agent.ephemeral_id", "agent.id", "ecs.version"]
```

### 3.3 Starting Filebeat

```bash
# Test the configuration
filebeat test config
filebeat test output

# Start
systemctl enable filebeat
systemctl start filebeat

# Follow the logs
journalctl -u filebeat -f
```

## 4. Collecting logs from multiple applications

### 4.1 Docker container logs

```yaml
# filebeat-docker.yml
filebeat.autodiscover:
  providers:
    - type: docker
      hints.enabled: true
      templates:
        - condition:
            contains:
              docker.container.image: nginx
          config:
            - type: container
              paths:
                - /var/lib/docker/containers/${data.docker.container.id}/*.log
              fields:
                log_type: nginx-access

output.logstash:
  hosts: ["logstash:5044"]
```

### 4.2 Spring Boot applications

```xml
<!-- logback-spring.xml -->
<configuration>
  <appender name="JSON" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>/var/log/myapp/app.log</file>
    <encoder class="net.logstash.logback.encoder.LogstashEncoder">
      <customFields>{"service":"user-service","env":"prod"}</customFields>
    </encoder>
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
      <fileNamePattern>/var/log/myapp/app.%d{yyyy-MM-dd}.log</fileNamePattern>
      <maxHistory>30</maxHistory>
    </rollingPolicy>
  </appender>
  <root level="INFO">
    <appender-ref ref="JSON"/>
  </root>
</configuration>
```
### 4.3 Python applications

```python
# logging_config.py
import logging
import json
from datetime import datetime


class JsonFormatter(logging.Formatter):
    def format(self, record):
        log_record = {
            "time": datetime.utcnow().isoformat(),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
            "service": "python-api",
            "env": "prod",
        }
        if record.exc_info:
            log_record["exception"] = self.formatException(record.exc_info)
        return json.dumps(log_record, ensure_ascii=False)


# Usage
handler = logging.FileHandler("/var/log/myapp/app.log")
handler.setFormatter(JsonFormatter())
logging.getLogger().addHandler(handler)
```

## 5. Querying and visualizing in Kibana

### 5.1 Creating an index pattern

```
Stack Management → Index Patterns → Create index pattern
Index pattern: nginx-access-*
Time field: timestamp
```

### 5.2 KQL query syntax

```
# Exact match
status: 500

# Range query
status >= 400 and status < 500

# Wildcard
request: /api/*

# Combined query
method: POST and status: 200 and request: /api/order*

# Negation
NOT status: 200

# Field exists
error_message: *
```

### 5.3 Lucene query syntax

```
# Exact match
status:500

# Fuzzy match
message:error~

# Range
response_time:[100 TO 500]

# Regex
request:/\/api\/v[0-9]\/.*/

# Boolean
status:500 AND method:POST
status:500 OR status:502
```

### 5.4 Building visualizations

```
# Top 10 client IPs
Visualize → Create visualization → Pie Chart
- Aggregation: Terms
- Field: client_ip.keyword
- Size: 10

# Status code distribution
Lens → Bar Chart
- X-axis: timestamp
- Break down by: status

# Response time trend
TSVB → Time Series
- Aggregation: Average
- Field: response_time
```

## 6. Alerting

### 6.1 Kibana Alerting

```
Stack Management → Rules → Create rule

# 5xx error alert
Rule type: Elasticsearch query
Index: nginx-access-*
Query: status >= 500
Threshold: is above 10
Time window: 5 minutes
Actions:
- Slack webhook
- Email
```

### 6.2 ElastAlert2

```yaml
# elastalert_config.yaml
rules_folder: /opt/elastalert/rules
run_every:
  minutes: 1
buffer_time:
  minutes: 15
es_host: elasticsearch
es_port: 9200
```

```yaml
# rules/5xx_error.yaml
name: 5xx Error Alert
type: frequency
index: nginx-access-*
num_events: 10
timeframe:
  minutes: 5
filter:
  - range:
      status:
        gte: 500
        lt: 600
alert:
  - slack
slack_webhook_url: https://hooks.slack.com/services/xxx
```
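Under the hood, the Top-10 pie chart from section 5.4 is just a terms aggregation, and the same query can be sent to the `_search` API directly. A minimal sketch of the equivalent request body, assuming the `nginx-access-*` index and `client_ip.keyword` field used above:

```python
# top_clients_query.py - build the search body behind the Top-10 pie chart
# in section 5.4 (a terms aggregation on client_ip.keyword).
# Sketch only; index and field names are taken from the examples above.
import json


def top_terms_body(field: str, size: int = 10) -> dict:
    """Return an Elasticsearch search body for a top-N terms aggregation."""
    return {
        "size": 0,  # aggregation only, no document hits
        "aggs": {
            "top_values": {
                "terms": {"field": field, "size": size}
            }
        },
    }


body = top_terms_body("client_ip.keyword", size=10)
print(json.dumps(body, indent=2))
# POST this body to http://localhost:9200/nginx-access-*/_search
```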
## 7. Clustering and high availability

### 7.1 Elasticsearch cluster configuration

```yaml
# docker-compose-cluster.yml
version: "3.8"

services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.11.0
    environment:
      - node.name=es01
      - cluster.name=es-cluster
      - discovery.seed_hosts=es02,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - "ES_JAVA_OPTS=-Xms4g -Xmx4g"
    volumes:
      - es01_data:/usr/share/elasticsearch/data
    networks:
      - elk-net

  es02:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.11.0
    environment:
      - node.name=es02
      - cluster.name=es-cluster
      - discovery.seed_hosts=es01,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - "ES_JAVA_OPTS=-Xms4g -Xmx4g"
    volumes:
      - es02_data:/usr/share/elasticsearch/data
    networks:
      - elk-net

  es03:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.11.0
    environment:
      - node.name=es03
      - cluster.name=es-cluster
      - discovery.seed_hosts=es01,es02
      - cluster.initial_master_nodes=es01,es02,es03
      - "ES_JAVA_OPTS=-Xms4g -Xmx4g"
    volumes:
      - es03_data:/usr/share/elasticsearch/data
    networks:
      - elk-net
```

### 7.2 Collecting logs across networks

In a multi-datacenter or multi-cloud environment, log servers may sit on different networks. An overlay networking tool (such as 星空组网) can simplify cross-network log collection:

```yaml
# Filebeat config at a branch office
output.logstash:
  hosts: ["10.26.0.1:5044"]   # virtual LAN IP of the Logstash at headquarters

# Every branch can ship logs directly over the virtual LAN,
# with no public IP and no exposed ports required.
```

## 8. Performance tuning

### 8.1 Index optimization

```json
// Create an index template
PUT _index_template/logs-template
{
  "index_patterns": ["nginx-*", "app-*"],
  "template": {
    "settings": {
      "number_of_shards": 3,
      "number_of_replicas": 1,
      "refresh_interval": "30s",
      "index.translog.durability": "async",
      "index.translog.sync_interval": "30s"
    },
    "mappings": {
      "dynamic": "strict",
      "properties": {
        "timestamp": { "type": "date" },
        "message":   { "type": "text" },
        "level":     { "type": "keyword" },
        "service":   { "type": "keyword" },
        "client_ip": { "type": "ip" }
      }
    }
  }
}
```

### 8.2 Index lifecycle management (ILM)

```json
// Create an ILM policy
PUT _ilm/policy/logs-policy
{
  "policy": {
    "phases": {
      "hot": {
        "min_age": "0ms",
        "actions": {
          "rollover": { "max_age": "1d", "max_size": "50gb" }
        }
      },
      "warm": {
        "min_age": "7d",
        "actions": {
          "shrink":     { "number_of_shards": 1 },
          "forcemerge": { "max_num_segments": 1 }
        }
      },
      "cold": {
        "min_age": "30d",
        "actions": { "freeze": {} }
      },
      "delete": {
        "min_age": "90d",
        "actions": { "delete": {} }
      }
    }
  }
}
```

### 8.3 Logstash tuning

```yaml
# Pipeline settings
pipeline.workers: 4
pipeline.batch.size: 1000
pipeline.batch.delay: 50

# Use the persistent queue
queue.type: persisted
queue.max_bytes: 4gb
```

## 9. Security

### 9.1 Enabling authentication

```yaml
# elasticsearch.yml
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
```

```bash
# Set passwords
docker exec -it elasticsearch bin/elasticsearch-setup-passwords auto
```
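Once security is enabled, plain `curl http://localhost:9200` calls start returning 401, and every client needs credentials. A stdlib-only sketch of an authenticated health check; the `elastic` user and password below are placeholders for whatever `elasticsearch-setup-passwords` generated:

```python
# es_auth_check.py - query a secured Elasticsearch over HTTP basic auth
# using only the standard library. Sketch only; credentials are placeholders.
import base64
import json
import urllib.request


def basic_auth_header(user: str, password: str) -> str:
    """Build the value of an HTTP Basic Authorization header."""
    token = base64.b64encode(f"{user}:{password}".encode("utf-8")).decode("ascii")
    return f"Basic {token}"


def cluster_health(host: str, user: str, password: str) -> dict:
    """Fetch /_cluster/health with basic-auth credentials."""
    req = urllib.request.Request(
        f"{host}/_cluster/health",
        headers={"Authorization": basic_auth_header(user, password)},
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return json.loads(resp.read().decode("utf-8"))


# Requires a running, secured cluster:
# cluster_health("http://localhost:9200", "elastic", "your_password")
```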
### 9.2 Kibana authentication

```yaml
# kibana.yml
elasticsearch.username: kibana_system
elasticsearch.password: your_password
```

## 10. Summary

The ELK Stack is a powerful log analysis platform. This article covered:

- **Architecture**: Elasticsearch + Logstash + Kibana + Filebeat
- **Deployment**: quick setup with Docker Compose
- **Collection**: shipping Nginx, application, and container logs
- **Analysis**: Grok parsing, KQL/Lucene queries
- **Visualization**: dashboards and alerting
- **Optimization**: index templates and ILM policies

Best practices:

- Standardize log formats; use JSON
- Set sensible log retention periods
- Enable security authentication in production
- Plan capacity and monitor the stack itself

References:

- Elastic official documentation: https://www.elastic.co/guide/
- Filebeat configuration guide: https://www.elastic.co/guide/en/beats/filebeat/current/
- Logstash filter plugins: https://www.elastic.co/guide/en/logstash/current/filter-plugins.html

First published on CSDN; please credit the source when reposting.