I have a Spring Boot application running in a Kubernetes cluster, together with an EFK stack (like ELK, but with Fluentd instead of Logstash as a lightweight alternative that collects the logs from all Kubernetes pods and sends them to Elasticsearch).
To get the logs into JSON output, I use the logstash-logback-encoder library:
<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>4.11</version>
</dependency>
and my logs come out as JSON (great).
I log to stdout, everything gets picked up and shipped to Elasticsearch, and no special logging configuration is needed in the Spring Boot application.
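For context, the encoder wiring that produces this kind of output typically looks something like the following (a minimal sketch assuming a logback-spring.xml on the classpath, not necessarily the exact setup in question):
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <!-- Sketch: emit every log event to stdout as a single JSON line -->
    <appender name="jsonConsole" class="ch.qos.logback.core.ConsoleAppender">
        <encoder class="net.logstash.logback.encoder.LogstashEncoder" />
    </appender>
    <root level="INFO">
        <appender-ref ref="jsonConsole" />
    </root>
</configuration>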
The problem I have now is that when I read my logs live from the pod's stdout, all the JSON formatting makes them very hard to read.
Example:
{"@timestamp":"2018-02-08T12:49:06.080+01:00","@version":1,"message":"Mapped "{[/error],produces=[text/html]}" onto public org.springframework.web.servlet.ModelAndView org.springframework.boot.autoconfigure.web.BasicErrorController.errorHtml(javax.servlet.http.HttpServletRequest,javax.servlet.http.HttpServletResponse)","logger_name":"org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerMapping","thread_name":"main","level":"INFO","level_value":20000}
{"@timestamp":"2018-02-08T12:49:06.080+01:00","@version":1,"message":"Mapped "{[/error]}" onto public org.springframework.http.ResponseEntity<java.util.Map<java.lang.String, java.lang.Object>> org.springframework.boot.autoconfigure.web.BasicErrorController.error(javax.servlet.http.HttpServletRequest)","logger_name":"org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerMapping","thread_name":"main","level":"INFO","level_value":20000}
{"@timestamp":"2018-02-08T12:49:06.098+01:00","@version":1,"message":"Mapped URL path [/webjars/**] onto handler of type [class org.springframework.web.servlet.resource.ResourceHttpRequestHandler]","logger_name":"org.springframework.web.servlet.handler.SimpleUrlHandlerMapping","thread_name":"main","level":"INFO","level_value":20000}
{"@timestamp":"2018-02-08T12:49:06.098+01:00","@version":1,"message":"Mapped URL path [/**] onto handler of type [class org.springframework.web.servlet.resource.ResourceHttpRequestHandler]","logger_name":"org.springframework.web.servlet.handler.SimpleUrlHandlerMapping","thread_name":"main","level":"INFO","level_value":20000}
{"@timestamp":"2018-02-08T12:49:06.137+01:00","@version":1,"message":"Mapped URL path [/**/favicon.ico] onto handler of type [class org.springframework.web.servlet.resource.ResourceHttpRequestHandler]","logger_name":"org.springframework.web.servlet.handler.SimpleUrlHandlerMapping","thread_name":"main","level":"INFO","level_value":20000}
{"@timestamp":"2018-02-08T12:49:06.268+01:00","@version":1,"message":"Registering beans for JMX exposure on startup","logger_name":"org.springframework.jmx.export.annotation.AnnotationMBeanExporter","thread_name":"main","level":"INFO","level_value":20000}
{"@timestamp":"2018-02-08T12:49:06.333+01:00","@version":1,"message":"Initializing ProtocolHandler ["http-nio-8080"]","logger_name":"org.apache.coyote.http11.Http11NioProtocol","thread_name":"main","level":"INFO","level_value":20000}
{"@timestamp":"2018-02-08T12:49:06.355+01:00","@version":1,"message":"Starting ProtocolHandler ["http-nio-8080"]","logger_name":"org.apache.coyote.http11.Http11NioProtocol","thread_name":"main","level":"INFO","level_value":20000}
What I would like is to log to stdout in a 'normal', non-JSON format, while still sending the logs to Fluentd in JSON format.
I have tried configuring two log appenders (one for stdout and one in JSON format for Fluentd), but I'm pretty sure that would duplicate the data (Fluentd would pick up both the JSON output and the plain stdout output).
My plan B is to build one image for development (without the JSON formatting) and another for production, but that's more of a plan Z, TBH, since I also want to monitor these pods in production.
My question is how to do this with a single log appender, or without duplicating the data in Fluentd. Is there a different approach I haven't thought of?
I'm assuming you are using Logback rather than the Log4j you have tagged, since the logstash library you linked to seems to be written for Logback.
The easiest solution is probably to configure Fluentd to read the logs from a file, and to point the JSON appender at that file.
There is an article on the tail input plugin, but essentially you would configure it like this:
logback.xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <!-- JSON appender for log collection -->
    <appender name="json" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>/some/path/to/your/file.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>/some/path/to/your/file.log.%d{yyyy-MM-dd}</fileNamePattern>
            <maxHistory>30</maxHistory>
        </rollingPolicy>
        <encoder class="net.logstash.logback.encoder.LogstashEncoder" />
    </appender>
    <!-- Console appender for humans -->
    <appender name="console" class="ch.qos.logback.core.ConsoleAppender">
        <!-- Set a threshold for the console log here if you want the
             log collection to get all log messages regardless of level -->
        <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
            <level>INFO</level>
        </filter>
        <!-- encoders are assigned the type
             ch.qos.logback.classic.encoder.PatternLayoutEncoder by default -->
        <encoder>
            <pattern>%-4relative [%thread] %-5level %logger{35} - %msg %n</pattern>
        </encoder>
    </appender>
    <!-- Tie it all together -->
    <root level="all">
        <appender-ref ref="json" />
        <appender-ref ref="console" />
    </root>
</configuration>
Fluentd
<source>
  @type tail
  path /some/path/to/your/file.log
  # position file where Fluentd tracks how far it has read (must not be the log file itself)
  pos_file /some/path/to/your/file.log.pos
  format json
</source>
According to the documentation, Fluentd will follow log rotation and, once it has finished with the old file, continue from the beginning of the new one.
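For completeness, the events read by the tail source still need an output. A minimal sketch of a match section, assuming the fluent-plugin-elasticsearch output plugin is installed and Elasticsearch is reachable via an in-cluster service named elasticsearch (both assumptions, not part of the original answer):
<match **>
  # Sketch: ship every parsed record to Elasticsearch using Logstash-style indices
  @type elasticsearch
  host elasticsearch
  port 9200
  logstash_format true
</match>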
If you want the console output to look the same as in a regular Spring Boot application, you can copy the pattern from its configuration.
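One way to do that (a sketch, assuming a Spring Boot version that ships org/springframework/boot/logging/logback/defaults.xml) is to include Boot's logging defaults and reuse its CONSOLE_LOG_PATTERN instead of a hand-written pattern:
<!-- Sketch: reuse Spring Boot's default console pattern -->
<include resource="org/springframework/boot/logging/logback/defaults.xml" />
<appender name="console" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
        <pattern>${CONSOLE_LOG_PATTERN}</pattern>
    </encoder>
</appender>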
Even though the proposed solution would work, in the end I simply used jq, a command-line JSON parser, to view my logs. This way I avoid duplicating the log data, I don't have to create a file, and I don't have to configure Fluentd specifically to read from one.
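For example, something along these lines (a sketch; the pod name and the selected fields are illustrative, adjust them to your own log output):
# Sketch: tail a pod's JSON logs and print a few fields as a readable line
# (non-JSON lines such as raw stack traces would need extra handling)
kubectl logs -f my-pod | jq -r '[."@timestamp", .level, .logger_name, .message] | join(" ")'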