Centralized log management (Graylog, Logstash, Fluentd)
This guide explains how you can send your logs to a centralized log management system like Graylog, Logstash (inside the Elastic Stack or ELK - Elasticsearch, Logstash, Kibana) or Fluentd (inside EFK - Elasticsearch, Fluentd, Kibana).
There are a lot of different ways to centralize your logs (if you are using Kubernetes, the simplest way is to log to the console and ask your cluster administrator to integrate a central log manager inside your cluster).
In this guide, we will show how to send your logs to an external tool using the quarkus-logging-gelf extension, which can use TCP or UDP to send logs in the Graylog Extended Log Format (GELF).
The quarkus-logging-gelf extension will add a GELF log handler to the underlying logging backend that Quarkus uses (jboss-logmanager).
By default, it is disabled; if you enable it but still use another handler (the console handler is enabled by default), your logs will be sent to both handlers.
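For example, if you want your logs to go only to the central log manager, a minimal application.properties sketch could enable the GELF handler and turn off the console one (quarkus.log.console.enable is the standard switch for the default console handler; adjust host and port to your environment):

# Send logs to the GELF handler
quarkus.log.handler.gelf.enabled=true
quarkus.log.handler.gelf.host=localhost
quarkus.log.handler.gelf.port=12201
# Optional: disable the default console handler so logs only go to GELF
quarkus.log.console.enable=false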
Prerequisites
include::{includes}/prerequisites.adoc[]
Example application
The following examples will all be based on the same example application that you can create with the following steps.
Create an application with the quarkus-logging-gelf extension. You can use the following command to create it:
include::{includes}/devtools/create-app.adoc[]
If you already have your Quarkus project configured, you can add the logging-gelf extension to your project by running the following command in your project base directory:
include::{includes}/devtools/extension-add.adoc[]
This will add the following dependency to your build file:
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-logging-gelf</artifactId>
</dependency>
implementation("io.quarkus:quarkus-logging-gelf")
For demonstration purposes, we create an endpoint that does nothing but log a sentence. You don’t need to do this inside your application.
import jakarta.enterprise.context.ApplicationScoped;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;

import org.jboss.logging.Logger;

@Path("/gelf-logging")
@ApplicationScoped
public class GelfLoggingResource {
    private static final Logger LOG = Logger.getLogger(GelfLoggingResource.class);

    @GET
    public void log() {
        LOG.info("Some useful log message");
    }
}
Configure the GELF log handler to send logs to an external UDP endpoint on port 12201:
quarkus.log.handler.gelf.enabled=true
quarkus.log.handler.gelf.host=localhost
quarkus.log.handler.gelf.port=12201
Send logs to Graylog
To send logs to Graylog, you first need to launch the components that compose the Graylog stack:
- MongoDB
- Elasticsearch
- Graylog
You can do this via the following docker-compose.yml file that you can launch via docker-compose up -d:
version: '3.2'

services:
  elasticsearch:
    image: {elasticsearch-image}
    ports:
      - "9200:9200"
    environment:
      ES_JAVA_OPTS: "-Xms512m -Xmx512m"
      discovery.type: "single-node"
      cluster.routing.allocation.disk.threshold_enabled: false
    networks:
      - graylog
  mongo:
    image: mongo:4.0
    networks:
      - graylog
  graylog:
    image: graylog/graylog:4.3.0
    ports:
      - "9000:9000"
      - "12201:12201/udp"
      - "1514:1514"
    environment:
      GRAYLOG_HTTP_EXTERNAL_URI: "http://127.0.0.1:9000/"
      # CHANGE ME (must be at least 16 characters)!
      GRAYLOG_PASSWORD_SECRET: "forpasswordencryption"
      # Password: admin
      GRAYLOG_ROOT_PASSWORD_SHA2: "8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918"
    networks:
      - graylog
    depends_on:
      - elasticsearch
      - mongo

networks:
  graylog:
    driver: bridge
Then, you need to create a UDP input in Graylog. You can do it from the Graylog web console (System → Input → Select GELF UDP) available at [role="bare"]http://localhost:9000 or via the API.
This curl example will create a new Input of type GELF UDP; it uses the default login from Graylog (admin/admin).
curl -H "Content-Type: application/json" -H "Authorization: Basic YWRtaW46YWRtaW4=" -H "X-Requested-By: curl" -X POST -v -d \
'{"title":"udp input","configuration":{"recv_buffer_size":262144,"bind_address":"0.0.0.0","port":12201,"decompress_size_limit":8388608},"type":"org.graylog2.inputs.gelf.udp.GELFUDPInput","global":true}' \
http://localhost:9000/api/system/inputs
Launch your application; you should see your logs arriving inside Graylog.
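For example, assuming the application listens on the default Quarkus HTTP port 8080, you can trigger the example endpoint defined above and then search for the message in the Graylog web console:

curl http://localhost:8080/gelf-logging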
Send logs to Logstash / the Elastic Stack (ELK)
Logstash comes by default with an Input plugin that understands the GELF format; we will first create a pipeline that enables this plugin.
Create the following file in $HOME/pipelines/gelf.conf:
input {
  gelf {
    port => 12201
  }
}
output {
  stdout {}
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]
  }
}
Finally, launch the components that compose the Elastic Stack:
- Elasticsearch
- Logstash
- Kibana
You can do this via the following docker-compose.yml file that you can launch via docker-compose up -d:
# Launch Elasticsearch
version: '3.2'

services:
  elasticsearch:
    image: {elasticsearch-image}
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xms512m -Xmx512m"
      discovery.type: "single-node"
      cluster.routing.allocation.disk.threshold_enabled: false
    networks:
      - elk
  logstash:
    image: {logstash-image}
    volumes:
      - source: $HOME/pipelines
        target: /usr/share/logstash/pipeline
        type: bind
    ports:
      - "12201:12201/udp"
      - "5000:5000"
      - "9600:9600"
    networks:
      - elk
    depends_on:
      - elasticsearch
  kibana:
    image: {kibana-image}
    ports:
      - "5601:5601"
    networks:
      - elk
    depends_on:
      - elasticsearch

networks:
  elk:
    driver: bridge
Launch your application; you should see your logs arriving inside the Elastic Stack. You can use Kibana, available at [role="bare"]http://localhost:5601/, to access them.
Send logs to Fluentd (EFK)
First, you need to create a Fluentd image with the needed plugins: elasticsearch and input-gelf.
You can use the following Dockerfile that should be created inside a fluentd directory.
FROM fluent/fluentd:v1.3-debian
RUN ["gem", "install", "fluent-plugin-elasticsearch", "--version", "3.7.0"]
RUN ["gem", "install", "fluent-plugin-input-gelf", "--version", "0.3.1"]
You can build the image or let docker-compose build it for you.
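If you choose to build it yourself, a command along these lines should work (the fluentd-gelf tag is only an example name; run the command from the directory that contains the fluentd folder):

docker build -t fluentd-gelf ./fluentd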
Then you need to create a Fluentd configuration file inside $HOME/fluentd/fluent.conf:
<source>
  type gelf
  tag example.gelf
  bind 0.0.0.0
  port 12201
</source>
<match example.gelf>
  @type elasticsearch
  host elasticsearch
  port 9200
  logstash_format true
</match>
Finally, launch the components that compose the EFK Stack:
- Elasticsearch
- Fluentd
- Kibana
You can do this via the following docker-compose.yml file that you can launch via docker-compose up -d:
version: '3.2'

services:
  elasticsearch:
    image: {elasticsearch-image}
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xms512m -Xmx512m"
      discovery.type: "single-node"
      cluster.routing.allocation.disk.threshold_enabled: false
    networks:
      - efk
  fluentd:
    build: fluentd
    ports:
      - "12201:12201/udp"
    volumes:
      - source: $HOME/fluentd
        target: /fluentd/etc
        type: bind
    networks:
      - efk
    depends_on:
      - elasticsearch
  kibana:
    image: {kibana-image}
    ports:
      - "5601:5601"
    networks:
      - efk
    depends_on:
      - elasticsearch

networks:
  efk:
    driver: bridge
Launch your application; you should see your logs arriving inside EFK. You can use Kibana, available at [role="bare"]http://localhost:5601/, to access them.
GELF alternative: use Syslog
You can also send your logs to Fluentd using a Syslog input. As opposed to the GELF input, the Syslog input will not render multiline logs in one event; that is why we advise using the GELF input that we implement in Quarkus.
First, you need to create a Fluentd image with the elasticsearch plugin.
You can use the following Dockerfile that should be created inside a fluentd directory.
FROM fluent/fluentd:v1.3-debian
RUN ["gem", "install", "fluent-plugin-elasticsearch", "--version", "3.7.0"]
Then, you need to create a Fluentd configuration file inside $HOME/fluentd/fluent.conf:
<source>
  @type syslog
  port 5140
  bind 0.0.0.0
  message_format rfc5424
  tag system
</source>
<match **>
  @type elasticsearch
  host elasticsearch
  port 9200
  logstash_format true
</match>
Then, launch the components that compose the EFK Stack:
- Elasticsearch
- Fluentd
- Kibana
You can do this via the following docker-compose.yml file that you can launch via docker-compose up -d:
version: '3.2'

services:
  elasticsearch:
    image: {elasticsearch-image}
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xms512m -Xmx512m"
      discovery.type: "single-node"
      cluster.routing.allocation.disk.threshold_enabled: false
    networks:
      - efk
  fluentd:
    build: fluentd
    ports:
      - "5140:5140/udp"
    volumes:
      - source: $HOME/fluentd
        target: /fluentd/etc
        type: bind
    networks:
      - efk
    depends_on:
      - elasticsearch
  kibana:
    image: {kibana-image}
    ports:
      - "5601:5601"
    networks:
      - efk
    depends_on:
      - elasticsearch

networks:
  efk:
    driver: bridge
Finally, configure your application to send logs to EFK using Syslog:
quarkus.log.syslog.enable=true
quarkus.log.syslog.endpoint=localhost:5140
quarkus.log.syslog.protocol=udp
quarkus.log.syslog.app-name=quarkus
quarkus.log.syslog.hostname=quarkus-test
Launch your application; you should see your logs arriving inside EFK. You can use Kibana, available at [role="bare"]http://localhost:5601/, to access them.
Elasticsearch indexing consideration
Be careful: by default, Elasticsearch will automatically map unknown fields (if not disabled in the index settings) by detecting their type. This can become tricky if you use log parameters (included by default) or if you enable MDC inclusion (disabled by default), as the first log will define the type of the message parameter (or MDC parameter) field inside the index.
Imagine the following case:
LOG.info("some {} message {} with {} param", 1, 2, 3);
LOG.info("other {} message {} with {} param", true, true, true);
With log message parameters enabled, the first log message sent to Elasticsearch will have a MessageParam0 parameter with an int type; this will configure the index with a field of type integer.
When the second message arrives in Elasticsearch, it will have a MessageParam0 parameter with the boolean value true, and this will generate an indexing error.
To work around this limitation, you can disable sending log message parameters via logging-gelf by configuring quarkus.log.handler.gelf.include-log-message-parameters=false, or you can configure your Elasticsearch index to store those fields as text or keyword; Elasticsearch will then automatically make the translation from int/boolean to a String.
See the following documentation for Graylog (but the same issue exists for the other central logging stacks): Custom Index Mappings.
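As an illustration only, the sketch below uses an Elasticsearch dynamic template to force the MessageParam* fields to be indexed as keyword. The template name and the logstash-* index pattern are assumptions to adapt to your own index naming; with Graylog, you would instead rely on its custom index mapping mechanism linked above:

PUT _index_template/quarkus-message-params
{
  "index_patterns": ["logstash-*"],
  "template": {
    "mappings": {
      "dynamic_templates": [
        {
          "message_params_as_keyword": {
            "match": "MessageParam*",
            "mapping": { "type": "keyword" }
          }
        }
      ]
    }
  }
}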
Configuration Reference
Configuration is done through the usual application.properties file.
include::{generated-dir}/config/quarkus-logging-gelf.adoc[]
This extension uses the logstash-gelf library, which allows more configuration options via system properties; you can access its documentation here: [role="bare"]https://logging.paluch.biz/.