Logging Java exceptions on Docker Swarm + Fluentd



I have configured my cluster to ship logs to Elasticsearch through Fluentd. That part works fine, but the exceptions thrown by my Java images end up in the index as one record per stack-trace line. I have already tried the detect_exceptions and multiline plugins, but as far as I can tell they only work when the source is of type "tail" (in my case it is "forward").
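For illustration (the exception below and the field layout are just an example of what Docker's fluentd log driver emits, one event per stdout line), a single stack trace such as

java.lang.RuntimeException: boom
    at com.example.Main.run(Main.java:12)
    at com.example.Main.main(Main.java:5)

arrives in Fluentd as three separate events:

{"log":"java.lang.RuntimeException: boom","container_id":"...","container_name":"...","source":"stdout"}
{"log":"    at com.example.Main.run(Main.java:12)","container_id":"...","container_name":"...","source":"stdout"}
{"log":"    at com.example.Main.main(Main.java:5)","container_id":"...","container_name":"...","source":"stdout"}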

My stack yml:

version: '3.6'
....
services:
  myjavaservice:
    image: myjavaservice
    logging:
      driver: "fluentd"
      options:
        tag: myjavaservice
    deploy:
      placement:
        constraints: [node.role==worker]
      replicas: 1
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.4.2
    ports:
      - "9200:9200"
    logging:
      driver: "json-file"
      options:
        max-size: "10M"
        max-file: "1"
    volumes:
      - esdata:/usr/share/elasticsearch/data
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.hostname == manager
  fluentd:
    image: my.repo/fluentd
    volumes:
      - ./Logs:/fluentd/log
    ports:
      - "24224:24224"
      - "24224:24224/udp"
    deploy:
      replicas: 1
      placement:
        constraints: [node.role == manager]
      update_config:
        delay: 2s
.....
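The stack is deployed the usual way (file and stack names here are placeholders):

docker stack deploy --compose-file stack.yml mystack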

And my fluentd.conf:

<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>
<filter *>
  @type record_transformer
  <record>
    hostname "#{Socket.gethostname}"
    tag ${tag}
  </record>
</filter>
<label @raw>
  <match myapp.*>
    @type detect_exceptions
    remove_tag_prefix myapp
    message log
    languages java
    multiline_flush_interval 0.5
  </match>
  <match *>
    @type copy
    <store>
      @type elasticsearch
      host elasticsearch
      port 9200
      logstash_format true
      logstash_prefix logstash
      logstash_dateformat %Y%m%d
      include_tag_key true
      tag_key @log_name
      flush_interval 1s
    </store>
  </match>
</label>

Can you tell me whether this (getting the whole exception stack into a single record) can be done with the fluentd log driver on Swarm?

Thanks okkez. I solved it with the concat plugin, but I will also test the solution you posted. This is the solution I implemented:

<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>
<filter **>
  @type concat
  key log
  stream_identity_key container_id
  multiline_start_regexp /^\S+/
  flush_interval 1s
  timeout_label @processdata
</filter>
<label @ERROR>
  <match **>
    @type stdout
  </match>
</label>
<label @processdata>
  <match **>
    @type stdout
  </match>
</label>
<match **>
  @type elasticsearch
  logstash_format true
  host elasticsearch
  port 9200
  index_name fluentd
  type_name fluentd
  flush_interval 5s
</match>
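How the filter behaves: a record whose log field matches /^\S+/ (the line begins with a non-whitespace character) starts a new multiline block, indented continuation lines such as the "at ..." frames of a Java stack trace are appended to the current block, stream_identity_key container_id keeps streams from different containers apart, and blocks flushed by the timeout are routed to the @processdata label. Roughly (fields abbreviated, values are examples):

# two incoming events ...
{"log":"java.lang.RuntimeException: boom"}
{"log":"    at com.example.Main.run(Main.java:12)"}
# ... leave the concat filter as a single event
{"log":"java.lang.RuntimeException: boom\n    at com.example.Main.run(Main.java:12)"}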

Maybe the following snippet is useful (untested):

<source>
  @type forward
  port 24224
  bind 0.0.0.0
  @label @INPUT
</source>
<label @INPUT>
  <filter>
    @type record_transformer
    <record>
      hostname "#{Socket.gethostname}"
      tag ${tag}
    </record>
  </filter>
  <match myapp.*>
    @type detect_exceptions
    remove_tag_prefix myapp
    message log
    languages java
    multiline_flush_interval 0.5
  </match>
  <match>
    @type relabel
    @label @OUTPUT
  </match>
</label>
<label @OUTPUT>
  <match>
    @type copy
    <store>
      @type elasticsearch
      host elasticsearch
      port 9200
      logstash_format true
      logstash_prefix logstash
      logstash_dateformat %Y%m%d
      include_tag_key true
      tag_key @log_name
      flush_interval 1s
    </store>
  </match>
</label>

The point is to define internal routing using @label.

If you want to concatenate stack traces into a single record, you can use fluent-plugin-concat.
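In case the custom Fluentd image does not already bundle the plugins, a minimal sketch of such an image (assuming the official fluent/fluentd base image; the tag and plugin list are illustrative):

FROM fluent/fluentd:v1.3-1
# the official image runs as the "fluent" user; install gems as root
USER root
RUN fluent-gem install fluent-plugin-concat fluent-plugin-elasticsearch
USER fluent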
