How to set up multiline Java stack trace parsing in Logstash with a grok filter



I'm trying to set up multiline handling (I'm shipping logs with Filebeat) so that Java stack traces can be parsed by my grok filter.

At the moment I can parse the following log:

08/12/2016 14:17:32,746 [ERROR] [nlp.rvp.TTEndpoint] (Thread-38 ActiveMQ-client-global-threads-1048949322) [d762103f-eee0-4dbb-965f-9f8fb500cf92] ERROR: Not found: v1/t/auth/login: Not found: v1/t/auth/login
    at nlp.exceptions.nlpException.NOT_FOUND(nlpException.java:147)
    at nlp.utils.Dispatcher.forwardVersion1(Dispatcher.java:342)
    at nlp.utils.Dispatcher.Forward(Dispatcher.java:189)
    at nlp.utils.Dispatcher$Proxy$_$$_WeldSubclass.Forward$$super(Unknown Source)
    at sun.reflect.GeneratedMethodAccessor171.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.jboss.weld.interceptor.proxy.TerminalAroundInvokeInvocationContext.proceedInternal(TerminalAroundInvokeInvocationContext.java:49)

But the result doesn't include the Java stack trace (the indented lines starting with "at ...").

This is the output of the Grok Debugger (as you can see, the Java stack trace is missing):

{
  "date": "08/12/2016",
  "loglevel": "ERROR",
  "logger": "nlp.rvp.TTEndpoint",
  "time": "14:17:32,746",
  "thread": "Thread-38 ActiveMQ-client-global-threads-1048949322",
  "message": "ERROR: Not found: v1/t/auth/login: Not found: v1/t/auth/login\r",
  "uuid": "d762103f-eee0-4dbb-965f-9f8fb500cf92"
}
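For comparison, this is the shape the event should take once multiline handling is in place: the indented frames travel inside the same message field, separated by newlines (a sketch built from the sample log above, not real debugger output):

  "message": "ERROR: Not found: v1/t/auth/login: Not found: v1/t/auth/login\r\n    at nlp.exceptions.nlpException.NOT_FOUND(nlpException.java:147)\n    at nlp.utils.Dispatcher.forwardVersion1(Dispatcher.java:342)\n    ..."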

This is the configuration of Filebeat (the log shipper):

filebeat:
  prospectors:
    -
      paths:
        - /var/log/test
      input_type: log
      document_type: syslog
  registry_file: /var/lib/filebeat/registry
output:
  logstash:
    hosts: ["192.168.1.122:5044"]
    bulk_max_size: 8192
    compression_level: 3
    tls:
      certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
shipper:
logging:
  files:
    rotateeverybytes: 10485760 # = 10MB

This is the Logstash configuration:

input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{DATE:date} %{TIME:time} \[%{LOGLEVEL:loglevel}%{SPACE}\] \[(?<logger>[^\]]+)\] \((?<thread>[^)]+)\) \[%{UUID:uuid}\] %{GREEDYDATA:message}" }
    }
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
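One caveat worth noting (not covered in the original post): even once Filebeat joins the stack trace onto the first line, %{GREEDYDATA:message} still stops at the first newline, because the dot does not match \n by default in grok's regex engine. One way around this is to prefix the pattern with (?m). A minimal sketch of the adjusted filter:

filter {
  if [type] == "syslog" {
    grok {
      # (?m) enables multiline mode, so %{GREEDYDATA:message}
      # also captures the newline-separated "at ..." frames
      match => { "message" => "(?m)%{DATE:date} %{TIME:time} \[%{LOGLEVEL:loglevel}%{SPACE}\] \[(?<logger>[^\]]+)\] \((?<thread>[^)]+)\) \[%{UUID:uuid}\] %{GREEDYDATA:message}" }
    }
  }
}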

I hope you can help me so that I can finally crack this one (: Thanks

Thanks everyone, I found the solution!

My new configuration is:

filebeat.yml

filebeat:
  prospectors:
    - type: log
      paths:
        - /var/log/*.log
      multiline:
        pattern: '^[[:space:]]'
        match: after
output:
  logstash:
    hosts: ["xxx.xx.xx.xx:5044"]
    bulk_max_size: 8192
    compression_level: 3
    tls:
      certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
shipper:
logging:
  files:
    rotateeverybytes: 10485760 # = 10MB
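For newer Filebeat versions (6.3 and later), where the prospectors section was renamed to filebeat.inputs, a roughly equivalent configuration would look like this:

filebeat.inputs:
  - type: log
    paths:
      - /var/log/*.log
    multiline.pattern: '^[[:space:]]'  # continuation lines start with whitespace
    multiline.negate: false            # lines matching the pattern are continuations
    multiline.match: after             # append them to the preceding line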

Avoid multiline parsing at the Logstash level. Use the Filebeat feature instead, with the multiline options and a suitable regexp, i.e.:

multiline.pattern: '^(([0-9]{2}/){2}20[0-9]{2} [0-9]{2}(:[0-9]{2}){2})' 
multiline.negate: true 
multiline.match: after

With negate: true and match: after, every line that does not begin with the timestamp (that is, the indented "at ..." frames) is appended to the preceding line that does, so the whole stack trace reaches Logstash as a single event. See https://www.elastic.co/guide/en/beats/filebeat/master/multiline-examples.html
