I am trying to parse a log file with grok. The configuration I use lets me parse single-line events, but not multiline ones (Java stack traces).
# What I get in Kibana for a single-line event:
{
  "_index": "logstash-2015.02.05",
  "_type": "logs",
  "_id": "mluzA57TnCpH-XBRbeg",
  "_score": null,
  "_source": {
    "message": " - 2014-01-14 11:09:35,962 [main] INFO  (api.batch.ThreadPoolWorker) user.country=US",
    "@version": "1",
    "@timestamp": "2015-02-05T09:38:21.310Z",
    "path": "/root/test2.log",
    "time": "2014-01-14 11:09:35,962",
    "main": "main",
    "loglevel": "INFO",
    "class": "api.batch.ThreadPoolWorker",
    "mydata": " user.country=US"
  },
  "sort": [
    1423129101310,
    1423129101310
  ]
}
# What I get for a multiline event with a stack trace:
{
  "_index": "logstash-2015.02.05",
  "_type": "logs",
  "_id": "9G6LsSO-aSpsas_jOw",
  "_score": null,
  "_source": {
    "message": "tat oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:20)",
    "@version": "1",
    "@timestamp": "2015-02-05T09:38:21.380Z",
    "path": "/root/test2.log",
    "tags": [
      "_grokparsefailure"
    ]
  },
  "sort": [
    1423129101380,
    1423129101380
  ]
}
input {
  file {
    path => "/root/test2.log"
    start_position => "beginning"
    codec => multiline {
      pattern => "^ - %{TIMESTAMP_ISO8601} "
      negate => true
      what => "previous"
    }
  }
}
filter {
  grok {
    match => [ "message", " -%{SPACE}%{SPACE}%{TIMESTAMP_ISO8601:time} [%{WORD:main}] %{LOGLEVEL:loglevel}%{SPACE}%{SPACE}(%{JAVACLASS:class}) %{GREEDYDATA:mydata} %{JAVASTACKTRACEPART}"]
  }
  date {
    match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}
output {
  elasticsearch {
    host => "194.3.227.23"
  }
  # stdout { codec => rubydebug }
}
Can anyone tell me what I am doing wrong in my configuration file? Thanks. Here is a sample of my log file:

 - 2014-01-14 11:09:36,447 [main] INFO  (support.context.ContextFactory) Creating default context
 - 2014-01-14 11:09:38,623 [main] ERROR (support.context.ContextFactory) Error getting connection to database jdbc:oracle:thin:@HAL9000:1521:DEVPRINT, user cisuser, driver oracle.jdbc.driver.OracleDriver
java.sql.SQLException: ORA-28001: the password has expired
	at oracle.jdbc.driver.SQLStateMapping.newSQLException(SQLStateMapping.java:70)
	at oracle.jdbc.driver.DatabaseError.newSQLException(DatabaseError.java:131)
> EDIT: here is the latest configuration I am using:
> https://gist.github.com/anonymous/9afe80ad604f9a3d3c00#file-output-L1
First point: when testing repeatedly with a file input, make sure you use sincedb_path => "/dev/null" so that the file is read from the beginning every time.
Regarding multiline: something must be wrong with either your input content or your multiline pattern, because none of your events carry the multiline tag that the multiline codec (or filter) adds when it aggregates lines. Your message field should contain all the lines, separated by newline characters \n (\r\n in my case, on Windows). Here is the expected output with your input configuration:
{
     "@timestamp" => "2015-02-10T11:03:33.298Z",
        "message" => " - 2014-01-14 11:09:35,962 [main] INFO  (api.batch.ThreadPoolWorker) user.country=US\r\n\tat oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:20)\r",
       "@version" => "1",
           "tags" => [
        [0] "multiline"
    ],
           "host" => "localhost",
           "path" => "/root/test.file"
}
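To make the codec's behavior concrete, here is a minimal Python sketch of the aggregation rule expressed by `pattern` + `negate => true` + `what => "previous"`: any line that does not match the start-of-event pattern is glued onto the previous event. This is my own illustration, not Logstash code, and the `START` regex is only a hand-written approximation of `%{TIMESTAMP_ISO8601}`.

```python
import re

# Approximation of the codec pattern: ^ - %{TIMESTAMP_ISO8601}<space>
START = re.compile(r"^ - \d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3} ")

def aggregate(lines):
    """Mimic `negate => true, what => "previous"`: a line that does NOT
    match START is appended to the previous event with a newline."""
    events = []
    for line in lines:
        if START.match(line) or not events:
            events.append(line)  # this line starts a new event
        else:
            events[-1] += "\n" + line  # continuation: join to previous
    return events

log = [
    " - 2014-01-14 11:09:35,962 [main] INFO  (api.batch.ThreadPoolWorker) user.country=US",
    "\tat oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:20)",
    " - 2014-01-14 11:09:36,447 [main] INFO  (support.context.ContextFactory) Creating default context",
]

events = aggregate(log)
# Three input lines collapse into two events: the stack-trace line
# is merged into the first event.
```

Without `negate => true` the logic inverts: lines that *do* match the pattern would be treated as continuations, which is why both settings have to agree with the shape of your log.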
Regarding grok: when you want to match a multiline string, you should use patterns like these.
filter {
  grok {
    match => { "message" => [
      "(?m)^ -%{SPACE}%{TIMESTAMP_ISO8601:time} \[%{WORD:main}\] %{LOGLEVEL:loglevel}%{SPACE}\(%{JAVACLASS:class}\) %{DATA:mydata}\n%{GREEDYDATA:stack}",
      "^ -%{SPACE}%{TIMESTAMP_ISO8601:time} \[%{WORD:main}\] %{LOGLEVEL:loglevel}%{SPACE}\(%{JAVACLASS:class}\) %{GREEDYDATA:mydata}"
    ] }
  }
}
The (?m) prefix instructs the regex engine to match across multiple lines. You then get events like this:
{
     "@timestamp" => "2015-02-10T10:47:20.078Z",
        "message" => " - 2014-01-14 11:09:35,962 [main] INFO  (api.batch.ThreadPoolWorker) user.country=US\r\n\tat oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:20)\r",
       "@version" => "1",
           "tags" => [
        [0] "multiline"
    ],
           "host" => "localhost",
           "path" => "/root/test.file",
           "time" => "2014-01-14 11:09:35,962",
           "main" => "main",
       "loglevel" => "INFO",
          "class" => "api.batch.ThreadPoolWorker",
         "mydata" => " user.country=US\r",
          "stack" => "\tat oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:20)\r"
}
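For readers more at home in Python than in Ruby's regex engine (which Logstash's grok uses), here is a rough sketch of why (?m) matters: in Ruby, (?m) lets `.` match newline characters, which corresponds to Python's re.DOTALL flag. The regexes below are my own hand-written approximations of the two grok patterns above (the group `cls` stands in for `class`, a Python keyword); they are not the actual grok expansions.

```python
import re

# Approximation of the first (multiline) grok pattern. re.DOTALL plays the
# role of Ruby's (?m): without it, `.` in the stack group stops at newlines.
MULTI = re.compile(
    r"^ -\s+(?P<time>\S+ \S+) \[(?P<main>\w+)\] (?P<loglevel>\w+)\s+"
    r"\((?P<cls>[\w.]+)\) (?P<mydata>.*?)\n(?P<stack>.*)",
    re.DOTALL,
)
# Approximation of the second (single-line) grok pattern, used as fallback.
SINGLE = re.compile(
    r"^ -\s+(?P<time>\S+ \S+) \[(?P<main>\w+)\] (?P<loglevel>\w+)\s+"
    r"\((?P<cls>[\w.]+)\) (?P<mydata>.*)"
)

def grok(message):
    # Try the multiline pattern first, then the single-line one,
    # mirroring the order of the patterns in the grok filter array.
    for rx in (MULTI, SINGLE):
        m = rx.match(message)
        if m:
            return m.groupdict()
    return None

event = grok(
    " - 2014-01-14 11:09:35,962 [main] INFO  (api.batch.ThreadPoolWorker) "
    "user.country=US\n\tat oracle.jdbc.driver.DatabaseError"
    ".throwSqlException(DatabaseError.java:20)"
)
# event["mydata"] holds the first line's tail, event["stack"] the trace.
```

The pattern order matters: if the single-line pattern came first, it would also match the first line of a multiline message and the stack trace would never be captured.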
You can build and validate your grok patterns with this online tool: http://grokconstructor.appspot.com/do/match
One last caveat: there is currently a bug in the Logstash file input with the multiline codec that mixes up the content of multiple files if you use a list or a wildcard in the path setting. The only workaround is to use the multiline filter instead.
HTH
EDIT: I was focusing on the multiline strings; you also need a pattern for the non-multiline (single-line) entries, hence the second pattern in the grok array.