Spring Cloud Tutorial
Distributed Logging using ELK and Sleuth
Introduction
In a distributed environment, or even in a monolithic one, application logs are critical for debugging whenever something goes wrong. In this section, we will look at how to log effectively and improve traceability so that we can easily look through the logs.
There are two major reasons why logging patterns become critical −
- Inter-service calls − In a microservice architecture, we have async and sync calls between services. It is critical to link these requests, as there can be more than one level of nesting for a single request.
- Intra-service calls − A single service gets multiple requests, and the logs for them can easily get intermingled. That is why having some ID associated with each request becomes important, to filter all the logs for a request.
Sleuth is a well-known tool for adding tracing information to application logs, and the ELK stack (Elasticsearch, Logstash, Kibana) is used for simpler observation across the system.
Dependency Setting
Let us use the Restaurant case that we have been using in every chapter. So, let us say we have our Customer service and the Restaurant service communicating via API, i.e., synchronous communication. And we want to have Sleuth for tracing the requests and the ELK stack for centralized visualization.
To do that, let us first set up the ELK stack. We will be starting the ELK stack using Docker containers. Here are the images that we can consider −
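As a minimal sketch, assuming the official Elastic images (the 7.12.0 version tag is only an example) and a shared Docker network, the three containers could be started as follows −

docker network create elk
docker run -d --name elasticsearch --net elk -p 9200:9200 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:7.12.0
docker run -d --name kibana --net elk -p 5601:5601 docker.elastic.co/kibana/kibana:7.12.0
docker run -d --name logstash --net elk -p 8089:8089 -v $PWD/logstash.conf:/usr/share/logstash/pipeline/logstash.conf docker.elastic.co/logstash/logstash:7.12.0

Note that if Logstash itself runs inside Docker, the hosts entry in its pipeline (shown later in this section) must point at the Elasticsearch container, e.g., http://elasticsearch:9200, rather than localhost.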
Once the ELK setup has been performed, ensure that it is working as expected by hitting the following endpoints (a quick command-line check follows the list) −
- Elasticsearch − localhost:9200
- Kibana − localhost:5601
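For example, a quick sanity check from the command line (assuming the default ports above) −

curl http://localhost:9200
curl http://localhost:5601/api/status

Elasticsearch responds with a small JSON document describing the cluster, and the Kibana status API returns its health information.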
We will look at the logstash configuration file at the end of this section.
Then, let us add the following dependency to our Customer Service and Restaurant Service −
<dependency>
   <groupId>org.springframework.cloud</groupId>
   <artifactId>spring-cloud-starter-sleuth</artifactId>
</dependency>
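The version is omitted above because we assume the Spring Cloud BOM is already imported, as in the earlier chapters. If it is not, a sketch of the dependency management block would look like the following, where the spring-cloud.version property (e.g., a release train such as Hoxton.SR8) must match your Spring Boot version −

<dependencyManagement>
   <dependencies>
      <dependency>
         <groupId>org.springframework.cloud</groupId>
         <artifactId>spring-cloud-dependencies</artifactId>
         <version>${spring-cloud.version}</version>
         <type>pom</type>
         <scope>import</scope>
      </dependency>
   </dependencies>
</dependencyManagement>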
Now that we have the dependency set up and ELK running, let us move to the core example.
Request Tracing inside a Service
On a very basic level, the following is the metadata added by Sleuth −
- Service name − Service currently processing the request.
- Trace Id − A metadata ID added to the logs, which is sent across services while processing an input request. It is useful for inter-service communication, for grouping all the internal requests that went into processing one input request.
- Span Id − A metadata ID added to the logs, which is the same across all log statements logged by a service for processing a request. It is useful for intra-service logs. Note that Span Id = Trace Id for the parent service.
Let us see this in action. For that, let us update our Customer Service code to contain log lines. Here is the controller code that we would use.
package com.tutorialspoint;

import java.util.HashMap;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
class RestaurantCustomerInstancesController {
   Logger logger = LoggerFactory.getLogger(RestaurantCustomerInstancesController.class);

   static HashMap<Long, Customer> mockCustomerData = new HashMap<>();
   static {
      mockCustomerData.put(1L, new Customer(1, "Jane", "DC"));
      mockCustomerData.put(2L, new Customer(2, "John", "SFO"));
      mockCustomerData.put(3L, new Customer(3, "Kate", "NY"));
   }

   @RequestMapping("/customer/{id}")
   public Customer getCustomerInfo(@PathVariable("id") Long id) {
      logger.info("Querying customer with id: " + id);
      Customer customer = mockCustomerData.get(id);
      if (customer != null) {
         logger.info("Found Customer: " + customer);
      }
      return customer;
   }
}
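The controller refers to a Customer class that is not listed in this section. As a minimal sketch (the exact shape used in the tutorial may differ), it is a plain POJO whose toString() matches the log output shown below −

package com.tutorialspoint;

// Hypothetical minimal POJO, inferred from how the controller uses it.
public class Customer {
   private long id;
   private String name;
   private String city;

   public Customer(long id, String name, String city) {
      this.id = id;
      this.name = name;
      this.city = city;
   }
   public long getId() { return id; }
   public String getName() { return name; }
   public String getCity() { return city; }

   @Override
   public String toString() {
      return "Customer [id=" + id + ", name=" + name + ", city=" + city + "]";
   }
}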
Now let us execute the code. As always, start the Eureka Server first. Note that this is not a hard requirement and is present here for the sake of completeness.
Then, let us compile and start the updated Customer Service using the following command −
mvn clean install ; java -Dapp_port=8083 -jar .\target\spring-cloud-eurekaclient-1.0.jar
And we are set. Let us now test our code by hitting the API −
curl -X GET http://localhost:8083/customer/1
Here is the output that we will get for this API −
{
   "id": 1,
   "name": "Jane",
   "city": "DC"
}
And now, let us check the logs for the Customer Service −
2021-03-23 13:46:59.604 INFO [customerservice,b63d4d0c733cc675,b63d4d0c733cc675] 11860 --- [nio-8083-exec-7] .t.RestaurantCustomerInstancesController : Querying customer with id: 1
2021-03-23 13:46:59.605 INFO [customerservice,b63d4d0c733cc675,b63d4d0c733cc675] 11860 --- [nio-8083-exec-7] .t.RestaurantCustomerInstancesController : Found Customer: Customer [id=1, name=Jane, city=DC]
…
So, effectively, as we see, we have the name of the service, the trace ID, and the span ID added to our log statements.
Request Tracing across Services
Let us see how we can do logging and tracing across services. As an example, we will use the Restaurant Service, which internally calls the Customer Service.
For that, let us update our Restaurant Service code to contain log lines. Here is the controller code that we would use.
package com.tutorialspoint;

import java.util.HashMap;
import java.util.List;
import java.util.stream.Collectors;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
class RestaurantController {
   @Autowired
   CustomerService customerService;

   Logger logger = LoggerFactory.getLogger(RestaurantController.class);

   static HashMap<Long, Restaurant> mockRestaurantData = new HashMap<>();
   static {
      mockRestaurantData.put(1L, new Restaurant(1, "Pandas", "DC"));
      mockRestaurantData.put(2L, new Restaurant(2, "Indies", "SFO"));
      mockRestaurantData.put(3L, new Restaurant(3, "Little Italy", "DC"));
      mockRestaurantData.put(4L, new Restaurant(4, "Pizeeria", "NY"));
   }

   @RequestMapping("/restaurant/customer/{id}")
   public List<Restaurant> getRestaurantForCustomer(@PathVariable("id") Long id) {
      logger.info("Get Customer from Customer Service with customer id: " + id);
      Customer customer = customerService.getCustomerById(id);
      logger.info("Found following customer: " + customer);
      String customerCity = customer.getCity();
      return mockRestaurantData.entrySet().stream()
         .filter(entry -> entry.getValue().getCity().equals(customerCity))
         .map(entry -> entry.getValue())
         .collect(Collectors.toList());
   }
}
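CustomerService here is the Feign client from the earlier chapters. As a hedged sketch (the registered service name and the method signature are assumptions inferred from how it is called above), it could look like this −

package com.tutorialspoint;

import org.springframework.cloud.openfeign.FeignClient;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;

// Hypothetical Feign client; the service name "customer-service" is an assumption.
@FeignClient("customer-service")
public interface CustomerService {
   @RequestMapping("/customer/{id}")
   Customer getCustomerById(@PathVariable("id") Long id);
}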
Let us compile and start the updated Restaurant Service using the following command −
mvn clean install; java -Dapp_port=8082 -jar .\target\spring-cloud-feign-client-1.0.jar
Ensure that we have the Eureka server and the Customer Service running. And we are set. Let us now test our code by hitting the API −
curl -X GET http://localhost:8082/restaurant/customer/2
Here is the output that we will get for this API −
[
   {
      "id": 2,
      "name": "Indies",
      "city": "SFO"
   }
]
And now, let us check the logs for the Restaurant Service −
2021-03-23 14:44:29.381 INFO [restaurantservice,6e0c5b2a4fc533f8,6e0c5b2a4fc533f8] 19600 --- [nio-8082-exec-6] com.tutorialspoint.RestaurantController : Get Customer from Customer Service with customer id: 2
2021-03-23 14:44:29.400 INFO [restaurantservice,6e0c5b2a4fc533f8,6e0c5b2a4fc533f8] 19600 --- [nio-8082-exec-6] com.tutorialspoint.RestaurantController : Found following customer: Customer [id=2, name=John, city=SFO]
Then, let us check the logs for the Customer Service −
2021-03-23 14:44:29.392 INFO [customerservice,6e0c5b2a4fc533f8,f2806826ac76d816] 11860 --- [io-8083-exec-10] .t.RestaurantCustomerInstancesController : Querying customer with id: 2
2021-03-23 14:44:29.392 INFO [customerservice,6e0c5b2a4fc533f8,f2806826ac76d816] 11860 --- [io-8083-exec-10] .t.RestaurantCustomerInstancesController : Found Customer: Customer [id=2, name=John, city=SFO]
…
So, effectively, as we see, we have the name of the service, the trace ID, and the span ID added to our log statements. Plus, we see the trace ID, i.e., 6e0c5b2a4fc533f8, being repeated in the Customer Service and the Restaurant Service, while each service logs its own span ID.
Centralized Logging with ELK
What we have seen till now is a way to improve our logging and tracing capability via Sleuth. However, in a microservice architecture, we have multiple services running, with multiple instances of each service. It is not practical to look at the logs of each instance to identify the request flow. And that is where ELK helps us.
Let us use the same case of inter-service communication as we did for Sleuth. Let us update our Restaurant and Customer services to add logback appenders for the ELK stack.
Before moving ahead, please ensure that the ELK stack has been set up and Kibana is accessible at localhost:5601. Also, configure Logstash with the following pipeline −
input {
   tcp {
      port => 8089
      codec => json
   }
}
output {
   elasticsearch {
      index => "restaurant"
      hosts => ["http://localhost:9200"]
   }
}
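To sanity-check this pipeline before wiring up the services (a hypothetical smoke test, assuming nc is available), we can push one JSON event into the TCP input and then query the index −

echo '{"message":"logstash smoke test"}' | nc localhost 8089
curl "http://localhost:9200/restaurant/_search?pretty"

The search response should contain the test event, confirming that Logstash is listening on port 8089 and writing to the restaurant index.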
Once this is done, there are two steps we need to follow to use logstash in our Spring applications, and we will perform them for both of our services. First, add a dependency so that logback can use the appender for Logstash −
<dependency>
   <groupId>net.logstash.logback</groupId>
   <artifactId>logstash-logback-encoder</artifactId>
   <version>6.6</version>
</dependency>
And secondly, add an appender to the logback configuration so that logback can use this appender to send the data to Logstash −
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
   <appender name="logStash" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
      <destination>10.24.220.239:8089</destination>
      <encoder class="net.logstash.logback.encoder.LogstashEncoder" />
   </appender>
   <appender name="console" class="ch.qos.logback.core.ConsoleAppender">
      <encoder>
         <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
      </encoder>
   </appender>
   <root level="INFO">
      <appender-ref ref="logStash" />
      <appender-ref ref="console" />
   </root>
</configuration>
The above configuration logs to the console and also sends the logs to logstash (replace the destination address with the host and port where your Logstash instance is listening). Spring Boot picks this file up when it is placed as logback-spring.xml in src/main/resources of each service. Now, once this is done, we are all set to test this out.
Now, let us execute the above code. As always, start the Eureka Server first.
Then, let us compile and start the updated Customer Service using the following command −
mvn clean install ; java -Dapp_port=8083 -jar .\target\spring-cloud-eurekaclient-1.0.jar
Then, let us compile and start the updated Restaurant Service using the following command −
mvn clean install; java -Dapp_port=8082 -jar .\target\spring-cloud-feign-client-1.0.jar
And we are set. Let us now test our code by hitting the API −
curl -X GET http://localhost:8082/restaurant/customer/2
Here is the output that we will get for this API −
[
   {
      "id": 2,
      "name": "Indies",
      "city": "SFO"
   }
]
But more importantly, the log statements would also be available in Kibana.

So, as we see, we can filter for a traceId and see all the log statements, across services, that were logged to fulfill the request.
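For example, assuming the LogstashEncoder has indexed the Sleuth MDC fields under their default names (an assumption worth verifying against your index mapping), the Kibana search bar filter for the request above would be −

traceId:6e0c5b2a4fc533f8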