filebeat => logstash => elasticsearch: logs not received in Elastic (docker)



TLDR

I'm trying to use a dockerized Elastic stack to parse 2 log files. The stack picks up logs appended to files in /usr/share/filebeat/scrape_logs and stores them in Elasticsearch via Logstash.

I can see the logs arriving in Logstash, where they show up as below, but when I run the query GET /_cat/indices/ from Kibana, there are no indices.

I've created a GitHub repo here with the relevant setup. If you want to run the code, just run docker-compose up, then run echo '2021-03-15 09:58:59,255 [INFO] - i am a test' >> beat_test/log1.log to append additional log lines.

Why don't I see an index created in Elasticsearch? Why aren't the logs being indexed?
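For reference, the same index listing can be checked outside Kibana with curl, assuming the 9200:9200 port mapping from the compose file below:

```shell
# List all indices (verbose headers); expect scrape_test to be missing here
curl -s 'http://localhost:9200/_cat/indices?v'

# Or probe just the expected index; prints 404 if it was never created
curl -s -o /dev/null -w '%{http_code}\n' 'http://localhost:9200/scrape_test'
```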

Details

logstash         | {
logstash         |           "host" => {
logstash         |         "name" => "b5bd03c1654c"
logstash         |     },
logstash         |     "@timestamp" => 2021-03-15T22:09:06.220Z,
logstash         |            "log" => {
logstash         |           "file" => {
logstash         |             "path" => "/usr/share/filebeat/scrape_logs/log1.log"
logstash         |         },
logstash         |         "offset" => 98
logstash         |     },
logstash         |          "input" => {
logstash         |         "type" => "log"
logstash         |     },
logstash         |           "tags" => [
logstash         |         [0] "beats_input_codec_plain_applied"
logstash         |     ],
logstash         |            "ecs" => {
logstash         |         "version" => "1.6.0"
logstash         |     },
logstash         |       "@version" => "1",
logstash         |          "agent" => {
logstash         |                 "name" => "b5bd03c1654c",
logstash         |                 "type" => "filebeat",
logstash         |         "ephemeral_id" => "e171b269-2364-47ff-bc87-3fe0bd73bf8c",
logstash         |              "version" => "7.11.2",
logstash         |             "hostname" => "b5bd03c1654c",
logstash         |                   "id" => "97aaac06-c87f-446f-aadc-8187b155e9e9"
logstash         |     },
logstash         |        "message" => "2021-03-15 09:58:59,255 [INFO] - i am a test"
logstash         | }

docker-compose.yml

version: '3.6'
services:
  elasticsearch:
    image: elasticsearch:7.11.1
    container_name: elasticsearch
    environment:
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms2g -Xmx2g"
      - discovery.type=single-node
    ports: ['9200:9200']
    volumes:
      - ./es_data:/usr/share/elasticsearch/data
  kibana:
    image: kibana:7.11.1
    container_name: kibana
    ports: ['5601:5601']
    depends_on: ['elasticsearch']
  logstash:
    image: logstash:7.11.1
    container_name: logstash
    volumes:
      - ./scrape_logs.conf:/usr/share/logstash/config/scrape_logs.conf
    depends_on: ['elasticsearch']
  filebeat:
    image: docker.elastic.co/beats/filebeat:7.11.2
    container_name: filebeat
    user: root
    command: --strict.perms=false -e
    volumes:
      - ./filebeat.yml:/usr/share/filebeat/filebeat.yml
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - /var/lib/docker/containers:/var/lib/docker/containers:ro
      - ./beat_test:/usr/share/filebeat/scrape_logs
    depends_on: ['elasticsearch', 'kibana']
volumes:
  es_data:

scrape_logs.conf

input {
  beats {
    port => 5044
  }
}
output {
  elasticsearch {
    host => "elasticsearch:9200"
    index => "scrape_test"
  }
}

The problem is that you need to map your Logstash pipeline configuration into the /usr/share/logstash/pipeline folder. The /usr/share/logstash/config folder is only for settings.

If you don't provide one, there's a default /usr/share/logstash/pipeline/logstash.conf pipeline that basically does the following, which is why you see the events in the Logstash console logs:

input {
  beats {
    port => 5044
  }
}
output {
  stdout {
    codec => rubydebug
  }
}

So you need to replace that default pipeline by modifying your Logstash service configuration as follows:

logstash:
  image: logstash:7.11.1
  container_name: logstash
  volumes:
    - ./pipeline:/usr/share/logstash/pipeline
  depends_on: ['elasticsearch']

You also need to create a folder named pipeline and move the scrape_logs.conf file into it.
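That step is just two commands, run from the directory containing docker-compose.yml:

```shell
# Create the pipeline folder that gets mounted into the Logstash container
mkdir -p pipeline
# Move the pipeline config so it lands in /usr/share/logstash/pipeline
mv scrape_logs.conf pipeline/scrape_logs.conf
```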

Finally, you have a typo in your scrape_logs.conf file: the host setting in the elasticsearch output should be called hosts:

output {
  elasticsearch {
    hosts => "elasticsearch:9200"
    index => "scrape_test"
  }
}

Once all of that is done, you can restart your Docker stack, go to Kibana, and you'll see your logs.
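One way to verify end to end, assuming the stack is running and the scrape_test index name from the config above:

```shell
# Restart the stack so Logstash picks up the new pipeline mount
docker-compose down
docker-compose up -d

# Append a fresh log line for Filebeat to ship
echo '2021-03-15 09:58:59,255 [INFO] - i am a test' >> beat_test/log1.log

# After a few seconds, the index should exist and the count should be non-zero
curl -s 'http://localhost:9200/scrape_test/_count?pretty'
```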
