Logstash grok filter debugging



Please help. I am trying to add a grok filter to my Logstash pipeline that parses the log line below into the fields shown underneath it:

2020-11-06 12:57:43,854 INFO Bandwidth: NASDAQ:224.0.130.65:30408 0.000059 Gb/S

{
  "ts": [[ "2020-11-06 12:57:43,854" ]],
  "YEAR": [[ "2020" ]],
  "MONTHNUM": [[ "11" ]],
  "MONTHDAY": [[ "06" ]],
  "HOUR": [[ "12", null ]],
  "MINUTE": [[ "57", null ]],
  "SECOND": [[ "43,854" ]],
  "ISO8601_TIMEZONE": [[ null ]],
  "loglevel": [[ "INFO" ]],
  "Metric": [[ "Bandwidth" ]],
  "Chanel": [[ "NASDAQ:224.0.130.65:30408" ]],
  "Data": [[ "0.000059 Gb/S" ]]
}

Below is my grok filter:

input {
  beats {
    port => "5044"
  }
}
filter {
  if "Bandwidth" in [message] {
    grok {
      match => { "message" => "%{TIMESTAMP_ISO8601:ts} %{LOGLEVEL:loglevel} %{WORD:Metric}: (?<Chanel>[A-Z]+:[0-9]+.[0-9]+.[0-9]+.[0-9]+:[0-9]+)" }
    }
  }
}
output {
  elasticsearch {
    hosts => [ "localhost:9200" ]
  }
}

When I try this filter in the Grok Debugger it works perfectly, but it does not in Logstash when I look at the result in Kibana: I don't see any of the named captures from the filter, only the message field. If I remove the regex part of the filter and use GREEDYDATA instead, everything works (see the sketch below). I'm sure I'm doing something wrong in the regex part.
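For reference, the variant that does work looks roughly like this; the field name "rest" is just a placeholder I chose here, not something from my real pipeline:

filter {
  if "Bandwidth" in [message] {
    grok {
      # same prefix as before, but the custom regex is replaced by GREEDYDATA
      match => { "message" => "%{TIMESTAMP_ISO8601:ts} %{LOGLEVEL:loglevel} %{WORD:Metric}: %{GREEDYDATA:rest}" }
    }
  }
}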

Your regex pattern is correct and does produce the expected filter output. Refresh the index pattern in Kibana, or try re-ingesting the data.

That said, I don't think you need a hand-written regex here, since the channel always follows the pattern some-data:ip-address:port.

Try the following pattern:

grok {
  match => { "message" => ["%{TIMESTAMP_ISO8601:ts} %{LOGLEVEL:loglevel} %{WORD:Metric}: (?<Channel>%{DATA}:%{HOSTPORT}) (?<Data>%{GREEDYDATA})"] }
}
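Here %{DATA} is a non-greedy match that picks up the "NASDAQ" prefix, and the stock %{HOSTPORT} pattern (an IP address or hostname followed by a port) covers "224.0.130.65:30408", so no custom character classes are needed.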

The Logstash output will be:

{
    "ts" => "2020-11-06 12:57:43,854",
    "Metric" => "Bandwidth",
    "@timestamp" => 2020-11-06T22:47:20.383Z,
    "loglevel" => "INFO",
    "host" => "e7c15acec470",
    "Data" => "0.000059 Gb/S",
    "Channel" => "NASDAQ:224.0.130.65:30408",
    "@version" => "1",
    "message" => "2020-11-06 12:57:43,854 INFO Bandwidth: NASDAQ:224.0.130.65:30408 0.000059 Gb/S"
}

Also try adding a stdout output alongside elasticsearch, so you can see exactly what Logstash is sending to Elasticsearch:

output {
  stdout { codec => rubydebug }
}
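For example, a minimal sketch of an output section that keeps the elasticsearch output from your pipeline (reusing the localhost:9200 host from the question) and adds stdout next to it:

output {
  elasticsearch {
    hosts => [ "localhost:9200" ]
  }
  # print each event to the Logstash console in rubydebug format for inspection
  stdout { codec => rubydebug }
}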
