Spring Cloud - Quick Guide

Spring Cloud - Introduction

Before we look at Spring Cloud, let’s have a brief overview on Microservice Architecture and the role of Spring Boot in creating microservices.

Microservice Architecture

Microservice architecture is a style of application development where the application is broken down into small services and these services have loose coupling among them. Following are the major advantages of using microservice architecture −

  1. Easy to maintain − Microservices are small in size and are supposed to handle only a single business task. So, they are simple to develop and maintain.

  2. Independent Scaling & Deployment − Microservices have their individual deployment pattern and cadence. So, each service can be scaled based on the load which that service is supposed to cater to. Each service can be deployed based on its schedule.

  3. Independent Technology Usage − Microservices have their code base segregated from the deployment environment, so the language and the technology that a microservice needs to use can be decided based on the use-case. There is no need to have a common stack to be used in all microservices.

More details about Microservice Architecture can be found at Microservice Architecture

Spring Boot

Spring Boot is a Java-based framework which is used to create microservices which are used in microservice architecture. It further brings down the time needed to develop a Spring application. Following are the major benefits it provides −

  1. It is easy to understand and develop a Spring application

  2. Increases productivity

  3. Reduces the development time

More info on Spring Boot can be found at Spring Boot.

Spring Cloud

Spring Cloud provides a collection of components which are useful in building distributed applications in cloud. We can develop these components on our own, however that would waste time in developing and maintaining this boilerplate code.

That is where Spring Cloud comes into picture. It provides ready-to-use cloud patterns for common problems which are observed in a distributed environment. Some of the patterns which it attempts to address are −

  1. Distributed Messaging

  2. Load Balancing

  3. Circuit Breakers

  4. Routing

  5. Distributed Logging

  6. Service Registration

  7. Distributed Lock

  8. Centralized Configuration

That is why, it becomes a very useful framework in developing applications which require high scalability, performance, and availability.

In this tutorial, we are going to cover the above-listed components of Spring Cloud.

Benefits of Using Spring Cloud

  1. Developers focus on Business Logic − Spring Cloud provides all the boilerplate code to implement common design patterns of the cloud. Developers thus can focus on the business logic without the need to develop and maintain this boilerplate code.

  2. Quick Development Time − As the developers get the boilerplate for free, they can quickly deliver on the required projects while maintaining code quality.

  3. Easy to use − Spring Cloud projects can easily be integrated with existing Spring Projects.

  4. Active Project − Spring Cloud is actively maintained by Pivotal, the company behind Spring. So, we get all the new features and bug-fixes for free just by upgrading the Spring Cloud version.

Microservice architecture has multiple advantages; however, one of its most critical drawbacks is deploying it in a distributed environment. And with distributed systems, some common problems frequently crop up, for example −

  1. How does service A know where to contact service B, i.e., address of service B?

  2. How do multiple services communicate with each other, i.e., what protocol to use?

  3. How do we monitor various services in our environment?

  4. How do we distribute the configuration of a service to its instances?

  5. How do we link the calls which travel across services for debugging purposes?

  6. and so on…

These are the set of problems which Spring Cloud tries to address and provide a common solution to.

While Spring Boot is used for quick application development, using it along with Spring Cloud can reduce time to integrate our microservices which we develop and deploy in a distributed environment.

Spring Cloud Components

Let us now take a look at the various components which Spring Cloud provides and the problems these components solve −

  1. Distributed Cloud Configuration − Spring Cloud Configuration, Spring Cloud Zookeeper, Spring Consul Config

  2. Distributed Messaging − Spring Stream with Kafka, Spring Stream with RabbitMQ

  3. Service Discovery − Spring Cloud Eureka, Spring Cloud Consul, Spring Cloud Zookeeper

  4. Logging − Spring Cloud Zipkin, Spring Cloud Sleuth

  5. Spring Service Communication − Spring Hystrix, Spring Ribbon, Spring Feign, Spring Zuul

We will look at a few of these components in the upcoming chapters.

Difference between Spring Cloud and Spring Boot

This is a very common question that arises when starting with Spring Cloud. Actually, there is no comparison here: Spring Cloud and Spring Boot are used to achieve different goals.

Spring Boot is a Java framework which is used for quicker application development, and is specifically used in Microservice architecture.

Spring Cloud is used for integrating these microservices so that they can easily work together in a distributed environment and can communicate with each other.

In fact, to avail maximum benefits like less development time, it is recommended to use Spring Boot along with Spring Cloud.

Spring Cloud - Dependency Management

In this chapter, we will build our very first application using Spring Cloud. Let’s go over the project structure and the dependency setup for our Spring Cloud Application while using Spring Boot as the base framework.

Core Dependency

The Spring Cloud group has multiple packages listed as dependencies. In this tutorial, we will be using multiple packages from the Spring Cloud group. To avoid any compatibility issues between these packages, let us use the Spring Cloud dependency management POM, given below −

<dependencyManagement>
   <dependencies>
      <dependency>
         <groupId>org.springframework.cloud</groupId>
         <artifactId>spring-cloud-dependencies</artifactId>
         <version>Hoxton.SR8</version>
         <type>pom</type>
         <scope>import</scope>
      </dependency>
   </dependencies>
</dependencyManagement>
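With this BOM imported, a Spring Cloud starter can be declared without an explicit <version> element, since the version is resolved from the BOM. As a small illustration (using the Eureka client starter that appears later in this tutorial):

```xml
<dependency>
   <groupId>org.springframework.cloud</groupId>
   <artifactId>spring-cloud-starter-netflix-eureka-client</artifactId>
   <!-- no <version> element: it is resolved from the imported spring-cloud-dependencies BOM -->
</dependency>
```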

The Gradle user can achieve the same by using the following −

buildscript {
   dependencies {
      classpath "io.spring.gradle:dependency-management-plugin:1.0.10.RELEASE"
   }
}
apply plugin: "io.spring.dependency-management"
dependencyManagement {
   imports {
      mavenBom "org.springframework.cloud:spring-cloud-dependencies:Hoxton.SR8"
   }
}

Project Architecture and Structure

For this tutorial, we will use the case of a Restaurant −

  1. Restaurant Service Discovery − Used for registering the service address.

  2. Restaurant Customer Service − Provides Customer information to the client and other services.

  3. Restaurant Service − Provides Restaurant information to the client. Uses Customer service to get city information of the customer.

  4. Restaurant Gateway − Entry point for our application. However, we will use this only once in this tutorial for simplicity's sake.

On a high level, here is the project architecture −

project architecture

And we will have the following project structure. Note that we will look at the files in the upcoming chapters.

project structure

Project POM

For simplicity's sake, we will be using Maven-based builds. Below is the base POM file which we will use for this tutorial.

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
http://maven.apache.org/xsd/maven-4.0.0.xsd">
   <modelVersion>4.0.0</modelVersion>
   <groupId>com.tutorials.point</groupId>
   <artifactId>spring-cloud-eureka-client</artifactId>
   <version>1.0</version>
   <packaging>jar</packaging>
   <properties>
      <maven.compiler.source>1.8</maven.compiler.source>
      <maven.compiler.target>1.8</maven.compiler.target>
   </properties>
   <dependencyManagement>
      <dependencies>
         <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-dependencies</artifactId>
            <version>2020.0.1</version>
            <type>pom</type>
            <scope>import</scope>
         </dependency>
         <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-dependencies</artifactId>
            <version>2.4.0</version>
            <type>pom</type>
            <scope>import</scope>
         </dependency>
      </dependencies>
   </dependencyManagement>
   <dependencies>
      <dependency>
         <groupId>org.springframework.boot</groupId>
         <artifactId>spring-boot-starter-web</artifactId>
      </dependency>
   </dependencies>
   <build>
      <plugins>
         <plugin>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-maven-plugin</artifactId>
            <executions>
               <execution>
                  <goals>
                     <goal>repackage</goal>
                  </goals>
               </execution>
            </executions>
         </plugin>
      </plugins>
   </build>
</project>

Points to note

  1. The POM dependency management section includes almost all the projects which we require. We will add the dependencies section as and when we require them.

  2. We will use Spring Boot as the base framework for the development of our application, which is why you see it listed as a dependency.

Spring Cloud - Service Discovery Using Eureka

Introduction

Service discovery is one of the most critical parts when an application is deployed as microservices in the cloud. This is because, for any given operation, an application in a microservice architecture may need to access multiple services that communicate with each other.

Service discovery helps in tracking the service addresses and the ports at which the service instances can be contacted. There are three components at play here −

  1. Service Instances − Responsible for handling incoming requests for the service and responding to those requests.

  2. Service Registry − Keeps track of the addresses of the service instances. The service instances are supposed to register their addresses with the service registry.

  3. Service Client − The client which wants to place a request and get a response from the service instances. The service client contacts the service registry to get the addresses of the instances.
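To make the three roles concrete, here is a toy, in-memory sketch in plain Java (this is NOT Eureka's API; the class and method names are invented for illustration): instances register their addresses, the registry stores them, and a client looks them up and picks one round-robin.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.atomic.AtomicInteger;

// Toy service registry: NOT Eureka, just the concept of the three components.
public class ToyServiceRegistry {
   private final Map<String, List<String>> services = new HashMap<>();
   private final AtomicInteger counter = new AtomicInteger();

   // A service instance registers its address under a logical service name.
   public void register(String serviceName, String address) {
      services.computeIfAbsent(serviceName, k -> new ArrayList<>()).add(address);
   }

   // A service client asks the registry for all known instances.
   public List<String> getInstances(String serviceName) {
      return services.getOrDefault(serviceName, Collections.emptyList());
   }

   // The client then picks one instance, here via simple round-robin.
   public String choose(String serviceName) {
      List<String> instances = getInstances(serviceName);
      if (instances.isEmpty()) {
         throw new IllegalStateException("no instance of " + serviceName);
      }
      return instances.get(counter.getAndIncrement() % instances.size());
   }

   public static void main(String[] args) {
      ToyServiceRegistry registry = new ToyServiceRegistry();
      registry.register("customer-service", "http://localhost:8081");
      registry.register("customer-service", "http://localhost:8082");
      System.out.println(registry.choose("customer-service")); // http://localhost:8081
      System.out.println(registry.choose("customer-service")); // http://localhost:8082
   }
}
```

In the rest of this chapter, Eureka plays the registry role, while our Spring Boot services play the instance and client roles.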

Apache Zookeeper, Eureka and Consul are a few well-known components which are used for service discovery. In this tutorial, we will use Eureka.

Setting up Eureka Server/Registry

For setting up Eureka Server, we need to update the POM file to contain the following dependency −

<dependencies>
   <dependency>
      <groupId>org.springframework.cloud</groupId>
      <artifactId>spring-cloud-starter-netflix-eureka-server</artifactId>
   </dependency>
   <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-web</artifactId>
   </dependency>
</dependencies>

And then, annotate our Spring application class with the correct annotation, i.e., @EnableEurekaServer.

package com.tutorialspoint;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.server.EnableEurekaServer;
@SpringBootApplication
@EnableEurekaServer
public class RestaurantServiceRegistry{
   public static void main(String[] args) {
      SpringApplication.run(RestaurantServiceRegistry.class, args);
   }
}

We also need a properties file if we want to configure the registry and change its default values. Here are the changes we will make −

  1. Update the port to 8900 rather than the default 8080

  2. In production, one would have more than one node of the registry for high availability. That is where we need peer-to-peer communication between registries. As we are executing this in standalone mode, we can simply set the client properties to false to avoid any errors.

So, this is what our application.yml file will look like −

server:
   port: 8900
eureka:
   client:
      register-with-eureka: false
      fetch-registry: false
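Equivalently, if you prefer the .properties format over YAML, the same settings can go into an application.properties file:

```properties
server.port=8900
eureka.client.register-with-eureka=false
eureka.client.fetch-registry=false
```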

And that is it, let us now compile the project and run the program by using the following command −

java -jar .\target\spring-cloud-eureka-server-1.0.jar

Now we can see the logs in the console −

...
2021-03-07 13:33:10.156 INFO 17660 --- [ main]
o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat initialized with port(s): 8900
(http)
2021-03-07 13:33:10.172 INFO 17660 --- [ main]
o.apache.catalina.core.StandardService : Starting service [Tomcat]
...
2021-03-07 13:33:16.483 INFO 17660 --- [ main]
DiscoveryClientOptionalArgsConfiguration : Eureka HTTP Client uses Jersey
...
2021-03-07 13:33:16.632 INFO 17660 --- [ main]
o.s.c.n.eureka.InstanceInfoFactory : Setting initial instance status as:
STARTING
2021-03-07 13:33:16.675 INFO 17660 --- [ main]
com.netflix.discovery.DiscoveryClient : Initializing Eureka in region us-east-1
2021-03-07 13:33:16.675 INFO 17660 --- [ main]
com.netflix.discovery.DiscoveryClient : Client configured to neither register
nor query for data.
2021-03-07 13:33:16.686 INFO 17660 --- [ main]
com.netflix.discovery.DiscoveryClient : Discovery Client initialized at
timestamp 1615104196685 with initial instances count: 0
...
2021-03-07 13:33:16.873 INFO 17660 --- [ Thread-10]
e.s.EurekaServerInitializerConfiguration : Started Eureka Server
2021-03-07 13:33:18.609 INFO 17660 --- [ main]
c.t.RestaurantServiceRegistry : Started RestaurantServiceRegistry in
15.219 seconds (JVM running for 16.068)

As we can see from the above logs, the Eureka registry has been set up. We also get a dashboard for Eureka (see the following image), which is hosted on the server URL.

dashboard for eureka

Setting up Eureka Client for Instance

Now, we will set up the service instances which would register to the Eureka server. For setting up Eureka Client, we will use a separate Maven project and update the POM file to contain the following dependency −

<dependencies>
   <dependency>
      <groupId>org.springframework.cloud</groupId>
      <artifactId>spring-cloud-starter-netflix-eureka-client</artifactId>
   </dependency>
   <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-web</artifactId>
   </dependency>
</dependencies>

And then, annotate our Spring application class with the correct annotation, i.e., @EnableDiscoveryClient.

package com.tutorialspoint;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
@SpringBootApplication
@EnableDiscoveryClient
public class RestaurantCustomerService{
   public static void main(String[] args) {
      SpringApplication.run(RestaurantCustomerService.class, args);
   }
}

We also need a properties file if we want to configure the client and change its default values. Here are the changes we will make −

  1. We will provide the port at runtime while executing the jar.

  2. We will specify the URL at which Eureka server is running.

So, this is what our application.yml file will look like −

spring:
   application:
      name: customer-service
server:
   port: ${app_port}
eureka:
   client:
      serviceURL:
         defaultZone: http://localhost:8900/eureka

For execution, we will have two service instances running. To do that, let’s open up two shells and then execute the following command on one shell −

java -Dapp_port=8081 -jar .\target\spring-cloud-eureka-client-1.0.jar

And execute the following on the other shell −

java -Dapp_port=8082 -jar .\target\spring-cloud-eureka-client-1.0.jar

Now we can see the logs in the console −

...
2021-03-07 15:22:22.474 INFO 16920 --- [ main]
com.netflix.discovery.DiscoveryClient : Starting heartbeat executor: renew
interval is: 30
2021-03-07 15:22:22.482 INFO 16920 --- [ main]
c.n.discovery.InstanceInfoReplicator : InstanceInfoReplicator onDemand
update allowed rate per min is 4
2021-03-07 15:22:22.490 INFO 16920 --- [ main]
com.netflix.discovery.DiscoveryClient : Discovery Client initialized at
timestamp 1615110742488 with initial instances count: 0
2021-03-07 15:22:22.492 INFO 16920 --- [ main]
o.s.c.n.e.s.EurekaServiceRegistry : Registering application CUSTOMER-SERVICE with eureka with status UP
2021-03-07 15:22:22.494 INFO 16920 --- [ main]
com.netflix.discovery.DiscoveryClient : Saw local status change event
StatusChangeEvent [timestamp=1615110742494, current=UP, previous=STARTING]
2021-03-07 15:22:22.500 INFO 16920 --- [nfoReplicator-0]
com.netflix.discovery.DiscoveryClient : DiscoveryClient_CUSTOMERSERVICE/
localhost:customer-service:8081: registering service...
2021-03-07 15:22:22.588 INFO 16920 --- [ main]
o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8081
(http) with context path ''
2021-03-07 15:22:22.591 INFO 16920 --- [ main]
.s.c.n.e.s.EurekaAutoServiceRegistration : Updating port to 8081
2021-03-07 15:22:22.705 INFO 16920 --- [nfoReplicator-0]
com.netflix.discovery.DiscoveryClient : DiscoveryClient_CUSTOMERSERVICE/
localhost:customer-service:8081 - registration status: 204
...

As we can see from the above logs, the client instances have been set up. We can also look at the Eureka Server dashboard we saw earlier. As we can see, there are two instances of “CUSTOMER-SERVICE” running that the Eureka server is aware of −

setting up eureka client for instance

Eureka Client Consumer Example

Our Eureka server now has the registered client instances of the “Customer-Service” setup. We can now set up a consumer which can ask the Eureka server for the addresses of the “Customer-Service” nodes.

For this purpose, let us add a controller which can get the information from the Eureka registry. This controller will be added to our earlier Eureka client itself, i.e., the “Customer Service”. Let us add the following controller to the client.

package com.tutorialspoint;
import java.util.List;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cloud.client.ServiceInstance;
import org.springframework.cloud.client.discovery.DiscoveryClient;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
@RestController
class RestaurantCustomerInstancesController {
   @Autowired
   private DiscoveryClient eurekaConsumer;
   @RequestMapping("/customer_service_instances")
   public List<ServiceInstance> getCustomerServiceInstances() {
      // Query the registry for all instances registered under "customer-service"
      return eurekaConsumer.getInstances("customer-service");
   }
}

Note the autowired DiscoveryClient, which is what the Spring framework provides to talk to the registry.

Let us now recompile our Eureka clients. For execution, we will have two service instances running. To do that, let’s open up two shells and then execute the following command on one shell −

java -Dapp_port=8081 -jar .\target\spring-cloud-eureka-client-1.0.jar

And execute the following on the other shell −

java -Dapp_port=8082 -jar .\target\spring-cloud-eureka-client-1.0.jar

Once the client on both shells has started, let us now hit http://localhost:8081/customer_service_instances which we created in the controller. This URL displays complete information about both the instances.

[
   {
      "scheme": "http",
      "host": "localhost",
      "port": 8081,
      "metadata": {
         "management.port": "8081"
      },
      "secure": false,
      "instanceInfo": {
         "instanceId": "localhost:customer-service:8081",
         "app": "CUSTOMER-SERVICE",
         "appGroupName": null,
         "ipAddr": "10.0.75.1",
         "sid": "na",
         "homePageUrl": "http://localhost:8081/",
         "statusPageUrl": "http://localhost:8081/actuator/info",
         "healthCheckUrl": "http://localhost:8081/actuator/health",
         "secureHealthCheckUrl": null,
         "vipAddress": "customer-service",
         "secureVipAddress": "customer-service",
         "countryId": 1,
         "dataCenterInfo": {
            "@class": "com.netflix.appinfo.InstanceInfo$DefaultDataCenterInfo",
            "name": "MyOwn"
         },
         "hostName": "localhost",
         "status": "UP",
         "overriddenStatus": "UNKNOWN",
         "leaseInfo": {
            "renewalIntervalInSecs": 30,
            "durationInSecs": 90,
            "registrationTimestamp": 1616667914313,
            "lastRenewalTimestamp": 1616667914313,
            "evictionTimestamp": 0,
            "serviceUpTimestamp": 1616667914313
         },
         "isCoordinatingDiscoveryServer": false,
         "metadata": {
            "management.port": "8081"
         },
         "lastUpdatedTimestamp": 1616667914313,
         "lastDirtyTimestamp": 1616667914162,
         "actionType": "ADDED",
         "asgName": null
      },
      "instanceId": "localhost:customer-service:8081",
      "serviceId": "CUSTOMER-SERVICE",
      "uri": "http://localhost:8081"
   },
   {
      "scheme": "http",
      "host": "localhost",
      "port": 8082,
      "metadata": {
         "management.port": "8082"
      },
      "secure": false,
      "instanceInfo": {
      "instanceId": "localhost:customer-service:8082",
      "app": "CUSTOMER-SERVICE",
      "appGroupName": null,
      "ipAddr": "10.0.75.1",
      "sid": "na",
      "homePageUrl": "http://localhost:8082/",
      "statusPageUrl": "http://localhost:8082/actuator/info",
      "healthCheckUrl": "http://localhost:8082/actuator/health",
      "secureHealthCheckUrl": null,
      "vipAddress": "customer-service",
      "secureVipAddress": "customer-service",
      "countryId": 1,
      "dataCenterInfo": {
         "@class": "com.netflix.appinfo.InstanceInfo$DefaultDataCenterInfo",
         "name": "MyOwn"
      },
      "hostName": "localhost",
      "status": "UP",
      "overriddenStatus": "UNKNOWN",
      "leaseInfo": {
         "renewalIntervalInSecs": 30,
         "durationInSecs": 90,
         "registrationTimestamp": 1616667913690,
         "lastRenewalTimestamp": 1616667913690,
         "evictionTimestamp": 0,
         "serviceUpTimestamp": 1616667913690
      },
      "isCoordinatingDiscoveryServer": false,
      "metadata": {
         "management.port": "8082"
      },
      "lastUpdatedTimestamp": 1616667913690,
      "lastDirtyTimestamp": 1616667913505,
      "actionType": "ADDED",
      "asgName": null
     },
     "instanceId": "localhost:customer-service:8082",
     "serviceId": "CUSTOMER-SERVICE",
     "uri": "http://localhost:8082"
   }
]

Eureka Server API

Eureka Server provides various APIs for the client instances or the services to talk to. A lot of these APIs are abstracted and can be used directly with the DiscoveryClient we defined and used earlier. Just to note, their HTTP counterparts also exist and can be useful for non-Spring framework usage of Eureka.

In fact, the API that we used earlier, i.e., to get the information about the clients running “Customer-Service”, can also be invoked via the browser using http://localhost:8900/eureka/apps/customer-service, as can be seen here −

<application>
   <name>CUSTOMER-SERVICE</name>
   <instance>
         <instanceId>localhost:customer-service:8082</instanceId>
         <hostName>localhost</hostName>
         <app>CUSTOMER-SERVICE</app>
         <ipAddr>10.0.75.1</ipAddr>
         <status>UP</status>
         <overriddenstatus>UNKNOWN</overriddenstatus>
         <port enabled="true">8082</port>
         <securePort enabled="false">443</securePort>
         <countryId>1</countryId>
         <dataCenterInfo
class="com.netflix.appinfo.InstanceInfo$DefaultDataCenterInfo">
               <name>MyOwn</name>
         </dataCenterInfo>
         <leaseInfo>
            <renewalIntervalInSecs>30</renewalIntervalInSecs>
            <durationInSecs>90</durationInSecs>
            <registrationTimestamp>1616667913690</registrationTimestamp>
            <lastRenewalTimestamp>1616668273546</lastRenewalTimestamp>
            <evictionTimestamp>0</evictionTimestamp>
            <serviceUpTimestamp>1616667913690</serviceUpTimestamp>
         </leaseInfo>
         <metadata>
            <management.port>8082</management.port>
         </metadata>
         <homePageUrl>http://localhost:8082/</homePageUrl>
         <statusPageUrl>http://localhost:8082/actuator/info</statusPageUrl>
   <healthCheckUrl>http://localhost:8082/actuator/health</healthCheckUrl>
         <vipAddress>customer-service</vipAddress>
         <secureVipAddress>customer-service</secureVipAddress>
         <isCoordinatingDiscoveryServer>false</isCoordinatingDiscoveryServer>
         <lastUpdatedTimestamp>1616667913690</lastUpdatedTimestamp>
         <lastDirtyTimestamp>1616667913505</lastDirtyTimestamp>
         <actionType>ADDED</actionType>
   </instance>
   <instance>
         <instanceId>localhost:customer-service:8081</instanceId>
         <hostName>localhost</hostName>
         <app>CUSTOMER-SERVICE</app>
         <ipAddr>10.0.75.1</ipAddr>
         <status>UP</status>
         <overriddenstatus>UNKNOWN</overriddenstatus>
         <port enabled="true">8081</port>
         <securePort enabled="false">443</securePort>
         <countryId>1</countryId>
         <dataCenterInfo
class="com.netflix.appinfo.InstanceInfo$DefaultDataCenterInfo">
            <name>MyOwn</name>
         </dataCenterInfo>
         <leaseInfo>
               <renewalIntervalInSecs>30</renewalIntervalInSecs>
               <durationInSecs>90</durationInSecs>
               <registrationTimestamp>1616667914313</registrationTimestamp>
               <lastRenewalTimestamp>1616668274227</lastRenewalTimestamp>
               <evictionTimestamp>0</evictionTimestamp>
               <serviceUpTimestamp>1616667914313</serviceUpTimestamp>
         </leaseInfo>
         <metadata>
            <management.port>8081</management.port>
         </metadata>
         <homePageUrl>http://localhost:8081/</homePageUrl>
         <statusPageUrl>http://localhost:8081/actuator/info</statusPageUrl>
   <healthCheckUrl>http://localhost:8081/actuator/health</healthCheckUrl>
         <vipAddress>customer-service</vipAddress>
         <secureVipAddress>customer-service</secureVipAddress>
         <isCoordinatingDiscoveryServer>false</isCoordinatingDiscoveryServer>
         <lastUpdatedTimestamp>1616667914313</lastUpdatedTimestamp>
         <lastDirtyTimestamp>1616667914162</lastDirtyTimestamp>
         <actionType>ADDED</actionType>
   </instance>
</application>

A few other useful APIs are −

  1. Register a new service − POST /eureka/apps/{appIdentifier}

  2. Deregister the service − DELETE /eureka/apps/{appIdentifier}

  3. Information about the service − GET /eureka/apps/{appIdentifier}

  4. Information about the service instance − GET /eureka/apps/{appIdentifier}/{instanceId}

More details about the programmatic API can be found at https://javadoc.io/doc/com.netflix.eureka/eureka-client/latest/index.html.
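Since these are plain HTTP endpoints, they can be called from any language. As a hedged sketch using JDK 11's built-in java.net.http client (the class name EurekaApiClient is invented here, and it assumes the registry from earlier is running on localhost:8900):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Minimal caller for the Eureka REST endpoints listed above.
public class EurekaApiClient {
   private final String baseUrl;

   public EurekaApiClient(String baseUrl) {
      this.baseUrl = baseUrl;
   }

   // Builds the URI for "information about the service": GET /eureka/apps/{appIdentifier}
   public URI appUri(String appIdentifier) {
      return URI.create(baseUrl + "/eureka/apps/" + appIdentifier);
   }

   // Performs the GET; the Accept header asks Eureka for JSON instead of the default XML.
   public String fetchApp(String appIdentifier) throws Exception {
      HttpRequest request = HttpRequest.newBuilder(appUri(appIdentifier))
            .header("Accept", "application/json")
            .GET()
            .build();
      return HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString())
            .body();
   }

   public static void main(String[] args) {
      EurekaApiClient client = new EurekaApiClient("http://localhost:8900");
      System.out.println(client.appUri("CUSTOMER-SERVICE"));
      // With the registry running, fetchApp("CUSTOMER-SERVICE") would return the
      // same payload we saw in the browser.
   }
}
```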

Eureka – High Availability

We have been using Eureka server in standalone mode. However, in a Production environment, we should ideally have more than one instance of the Eureka server running. This ensures that even if one machine goes down, the machine with another Eureka server keeps on running.

Let us try to set up the Eureka server in high-availability mode. For our example, we will use two instances. For this, we will use the following application-ha.yml to start the Eureka server.

Points to note

  1. We have parameterized the port so that we can start multiple instances using the same config file.

  2. We have added the Eureka server address, again parameterized, so that each instance can point to the other.

  3. We are naming the app “Eureka-Server”.

spring:
   application:
      name: eureka-server
server:
   port: ${app_port}
eureka:
   client:
      serviceURL:
         defaultZone: ${eureka_other_server_url}

Let us now recompile our Eureka server project. For execution, we will have two service instances running. To do that, let’s open two shells and then execute the following command on one shell −

java -Dapp_port=8900 '-Deureka_other_server_url=http://localhost:8901/eureka' -jar .\target\spring-cloud-eureka-server-1.0.jar --spring.config.location=classpath:application-ha.yml

并在另一个 shell 上执行以下命令:

And execute the following on the other shell −

java -Dapp_port=8901 '-Deureka_other_server_url=http://localhost:8900/eureka' -jar .\target\spring-cloud-eureka-server-1.0.jar --spring.config.location=classpath:application-ha.yml

我们可以通过查看仪表盘来验证服务器是否已启动并在高可用性模式下运行。例如,以下是 Eureka 服务器 1 上的仪表盘:

We can verify that the servers are up and running in high-availability mode by looking at the dashboard. For example, here is the dashboard on Eureka server 1 −

dashboard on eureka server1

以下是 Eureka 服务器 2 的仪表盘:

And here is the dashboard of Eureka server 2 −

dashboard on eureka server2

因此,正如我们所看到的,我们有两个正在运行且处于同步状态的 Eureka 服务器。即使一台服务器宕机,另一台服务器也将持续工作。

So, as we see, we have two Eureka servers running and in sync. Even if one server goes down, the other server would keep functioning.

我们还可以更新服务实例应用程序,以便通过逗号分隔的服务器地址为两个 Eureka 服务器提供地址。

We can also update the service instance application to have addresses for both Eureka servers by having comma-separated server addresses.

spring:
   application:
      name: customer-service
server:
   port: ${app_port}
eureka:
   client:
      serviceURL:
         defaultZone: http://localhost:8900/eureka, http://localhost:8901/eureka

Eureka – Zone Awareness

Eureka 还支持区域感知的概念。区域感知是一个非常有用的概念,当我们在不同地理区域拥有一个集群时。比如,我们收到对服务的传入请求,我们需要选择应该为该请求提供服务的服务器。与其在远程服务器上发送和处理该请求,不如选择位于同一区域的服务器更有用。这是因为网络瓶颈在分布式应用程序中非常普遍,因此我们应避免这种情况。

Eureka also supports the concept of zone awareness. Zone awareness as a concept is very useful when we have a cluster across different geographies. Say, we get an incoming request for a service and we need to choose the server which should service the request. Instead of sending and processing that request on a server which is located far, it is more fruitful to choose a server which is in the same zone. This is because, network bottleneck is very common in a distributed application and thus we should avoid it.
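The same-zone preference described above can be sketched in plain Java. This is only an illustration of the selection rule, not Eureka's actual implementation; the instance URLs and zone names are made up:

```java
import java.util.List;
import java.util.Optional;

public class ZoneAwareChooser {
   record Instance(String url, String zone) {}

   // Prefer an instance in the caller's zone; fall back to the first available one.
   static Instance choose(List<Instance> instances, String callerZone) {
      Optional<Instance> sameZone = instances.stream()
            .filter(i -> i.zone().equals(callerZone))
            .findFirst();
      return sameZone.orElse(instances.get(0));
   }

   public static void main(String[] args) {
      List<Instance> instances = List.of(
            new Instance("http://host-a:8080", "USA"),
            new Instance("http://host-b:8081", "EU"));
      System.out.println(choose(instances, "EU").url());   // same-zone match
      System.out.println(choose(instances, "APAC").url()); // fallback to first instance
   }
}
```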

现在让我们尝试设置 Eureka 客户端并使其成为区域感知的。为此,让我们添加 application-za.yml

Let us now try to setup Eureka clients and make them Zone aware. For doing that, let us add application-za.yml

spring:
   application:
      name: customer-service
server:
   port: ${app_port}
eureka:
   instance:
      metadataMap:
         zone: ${zone_name}
   client:
      serviceURL:
         defaultZone: http://localhost:8900/eureka

现在,我们重新编译 Eureka 客户端项目。为了执行,我们将运行两个服务实例。为此,让我们打开两个 shell,然后在一个 shell 上执行以下命令 −

Let us now recompile our Eureka client project. For execution, we will have two service instances running. To do that, let’s open two shells and then execute the following command on one shell −

java -Dapp_port=8080 -Dzone_name=USA -jar .\target\spring-cloud-eureka-client-1.0.jar --spring.config.location=classpath:application-za.yml

并在另一个 shell 上执行以下命令:

And execute the following on the other shell −

java -Dapp_port=8081 -Dzone_name=EU -jar .\target\spring-cloud-eureka-client-1.0.jar --spring.config.location=classpath:application-za.yml

我们可以返回到信息中心,验证 Eureka 服务器是否注册了服务区域。在下图中看到,我们有两个可用区,而不是迄今为止所见的 1 个。

We can go back to the dashboard to verify that the Eureka Server registers the zone of the services. As seen in the following image, we have two availability zones instead of the single zone we have been seeing till now.

eureka server

现在,任何客户端都可以查看它所处的区域。比如,如果客户端在美国,它将选择美国的实例服务。它可以从 Eureka 服务器获取区域信息。

Now, any client can look at the zone it is present in. Say the client is located in USA, it would prefer the service instance of USA. And it can get the zone information from the Eureka Server.

Spring Cloud - Synchronous Communication with Feign

Introduction

在分布式环境中,服务需要彼此通信。通信可以同步或异步发生。在本节中,我们将了解服务如何通过同步 API 调用进行通信。

In a distributed environment, services need to communicate with each other. The communication can either happen synchronously or asynchronously. In this section, we will look at how services can communicate by synchronous API calls.

虽然这听起来很简单,但作为 API 调用的一部分,我们需要处理以下问题 −

Although this sounds simple, as part of making API calls, we need to take care of the following −

  1. Finding address of the callee − The caller service needs to know the address of the service which it wants to call.

  2. Load balancing − The caller service can do some intelligent load balancing to spread the load across callee services.

  3. Zone awareness − The caller service should preferably call the services which are in the same zone for quick responses.
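As a toy illustration of the first concern, resolving a logical service name to a concrete address can be thought of as a lookup in a registry snapshot. The service name and addresses below are made up for the sketch:

```java
import java.util.List;
import java.util.Map;

public class RegistryLookup {
   // A snapshot of the service registry, as a client might fetch it from Eureka.
   static final Map<String, List<String>> REGISTRY = Map.of(
         "customer-service", List.of("http://localhost:8080", "http://localhost:8081"));

   // Resolve a logical service name plus path to a concrete URL.
   static String resolve(String serviceName, String path) {
      List<String> instances = REGISTRY.getOrDefault(serviceName, List.of());
      if (instances.isEmpty()) {
         throw new IllegalStateException("No instances registered for " + serviceName);
      }
      // A real client would apply load balancing / zone preference here.
      return instances.get(0) + path;
   }

   public static void main(String[] args) {
      System.out.println(resolve("customer-service", "/customer/1"));
   }
}
```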

Netflix FeignSpring RestTemplate (连同 Ribbon )是用于进行同步 API 调用的两个众所周知的 HTTP 客户端。在本教程中,我们将使用 Feign Client

Netflix Feign and Spring RestTemplate (along with Ribbon) are two well-known HTTP clients used for making synchronous API calls. In this tutorial, we will use Feign Client.

Feign – Dependency Setting

让我们使用我们在前面章节中使用的 Restaurant 案例。让我们开发一个包含餐厅所有信息的餐厅服务。

Let us use the case of Restaurant we have been using in the previous chapters. Let us develop a Restaurant Service which has all the information about the restaurant.

首先,让我们使用以下依赖更新服务的 pom.xml

First, let us update the pom.xml of the service with the following dependency −

<dependencies>
      <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-starter-openfeign</artifactId>
      </dependency>
      <dependency>
         <groupId>org.springframework.cloud</groupId>
         <artifactId>spring-cloud-starter-netflix-eureka-client</artifactId>
      </dependency>
      <dependency>
         <groupId>org.springframework.boot</groupId>
         <artifactId>spring-boot-starter-web</artifactId>
      </dependency>
</dependencies>

然后,使用正确的注释(即 @EnableDiscoveryClient 和 @EnableFeignClients)注释我们的 Spring 应用程序类

And then, annotate our Spring application class with the correct annotations, i.e., @EnableDiscoveryClient and @EnableFeignClients

package com.tutorialspoint;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.cloud.openfeign.EnableFeignClients;
@SpringBootApplication
@EnableFeignClients
@EnableDiscoveryClient
public class RestaurantService{
   public static void main(String[] args) {
      SpringApplication.run(RestaurantService.class, args);
   }
}

Points to note in the above code −

  1. @EnableDiscoveryClient − This is the same annotation which we use for reading/writing to the Eureka server.

  2. @EnableFeignClients − This annotation scans our packages for Feign clients in our code and initializes them accordingly.

完成后,现在让我们简要了解一下我们定义 feign 客户端所需的 Feign 接口。

Once done, now let us look briefly at Feign Interfaces which we need to define the Feign clients.

Using Feign Interfaces for API calls

只需在接口中定义 API 调用,即可轻松设置 Feign 客户端;Feign 会使用该接口构造调用这些 API 所需的样板代码。例如,假设我们有两个服务 −

A Feign client can be set up simply by defining the API calls in an interface, which Feign then uses to construct the boilerplate code required to call the APIs. For example, consider we have two services −

  1. Service A − Caller service which uses the Feign Client.

  2. Service B − Callee service whose API would be called by the above Feign client

调用者服务(即本例中的服务 A)需要为它打算调用的服务(即服务 B)的 API 创建一个接口。

The caller service, i.e., service A in this case, needs to create an interface for the APIs of the service which it intends to call, i.e., service B.

package com.tutorialspoint;
import org.springframework.cloud.openfeign.FeignClient;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
@FeignClient(name = "service-B")
public interface ServiceBInterface {
   @RequestMapping(value = "/objects/{id}", method = RequestMethod.GET)
   public ObjectOfServiceB getObjectById(@PathVariable("id") Long id);
   @RequestMapping(value = "/objects/", method = RequestMethod.POST)
   public void postInfo(ObjectOfServiceB b);
   @RequestMapping(value = "/objects/{id}", method = RequestMethod.PUT)
   public void putInfo(@PathVariable("id") Long id, ObjectOfServiceB b);
}

Points to note −

  1. The @FeignClient annotation marks the interfaces which will be initialized by Spring Feign and can be used by the rest of the code.

  2. Note that the @FeignClient annotation needs to contain the name of the service; this is used to discover the service address, i.e., of service B, from Eureka or other discovery platforms.

  3. We can then define all the API functions which we plan to call from service A. These can be general HTTP calls with GET, POST, PUT, etc. verbs.
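Under the hood, Feign builds a dynamic proxy for the annotated interface and turns each method call into an HTTP request. A stripped-down illustration of that idea, using JDK dynamic proxies and a hypothetical `@Path` annotation instead of real HTTP handling:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Proxy;

public class MiniFeign {
   @Retention(RetentionPolicy.RUNTIME)
   public @interface Path { String value(); }

   public interface ServiceB {
      @Path("/objects/{id}")
      String getObjectById(Long id);
   }

   // Build a proxy which, instead of executing code, renders the URL it would call.
   @SuppressWarnings("unchecked")
   static <T> T client(Class<T> iface, String baseUrl) {
      return (T) Proxy.newProxyInstance(iface.getClassLoader(), new Class<?>[]{iface},
            (proxy, method, args) -> {
               String path = method.getAnnotation(Path.class).value()
                     .replace("{id}", String.valueOf(args[0]));
               return baseUrl + path; // a real client would issue the HTTP call here
            });
   }

   public static void main(String[] args) {
      ServiceB serviceB = client(ServiceB.class, "http://service-b");
      System.out.println(serviceB.getObjectById(5L)); // http://service-b/objects/5
   }
}
```

Real Feign additionally handles serialization, error decoding and integration with the discovery client, but the proxy mechanism is the same.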

完成后,服务 A 可以简单地使用以下代码来调用服务 B 的 API -

Once this is done, service A can simply use the following code to call the APIs of service B −

@Autowired
ServiceBInterface serviceB;
.
.
.
ObjectOfServiceB object = serviceB.getObjectById(5);

我们来看一个示例,以了解实际操作。

Let us look at an example, to see this in action.

Example – Feign Client with Eureka

假设我们要查找与客户所在城市相同的城市的餐厅。我们将使用以下服务 -

Let us say we want to find restaurants which are in the same city as that of the customer. We will use the following services −

  1. Customer Service − Has all the customer information. We had defined this in Eureka Client section earlier.

  2. Eureka Discovery Server − Has information about the above services. We had defined this in the Eureka Server section earlier.

  3. Restaurant Service − New service which we will define which has all the restaurant information.

我们首先向我们的客户服务添加一个基本控制器 -

Let us first add a basic controller to our Customer service −

@RestController
class RestaurantCustomerInstancesController {
   static HashMap<Long, Customer> mockCustomerData = new HashMap<>();
   static{
      mockCustomerData.put(1L, new Customer(1, "Jane", "DC"));
      mockCustomerData.put(2L, new Customer(2, "John", "SFO"));
      mockCustomerData.put(3L, new Customer(3, "Kate", "NY"));
   }
   @RequestMapping("/customer/{id}")
   public Customer getCustomerInfo(@PathVariable("id") Long id) {
      return mockCustomerData.get(id);
   }
}

我们还将为上述控制器定义一个 Customer.java POJO 服务。

We will also define a Customer.java POJO for the above controller.

package com.tutorialspoint;
public class Customer {
   private long id;
   private String name;
   private String city;
   public Customer() {}
   public Customer(long id, String name, String city) {
      super();
      this.id = id;
      this.name = name;
      this.city = city;
   }
   public long getId() {
      return id;
   }
   public void setId(long id) {
      this.id = id;
   }
   public String getName() {
      return name;
   }
   public void setName(String name) {
      this.name = name;
   }
   public String getCity() {
      return city;
   }
   public void setCity(String city) {
      this.city = city;
   }
}

因此,一旦添加此项,让我们重新编译项目并执行以下命令以启动 −

So, once this is added, let us recompile our project and execute the following command to start it −

java -Dapp_port=8081 -jar .\target\spring-cloud-eureka-client-1.0.jar

Note - 一旦启动 Eureka 服务器和此服务,我们应该能够看到在 Eureka 中注册的此服务的一个实例。

Note − Once the Eureka server and this service is started, we should be able to see an instance of this service registered in Eureka.

若要查看我们的 API 是否正常工作,让我们访问 http://localhost:8081/customer/1

To see if our API works, let’s hit http://localhost:8081/customer/1

我们将获得以下输出 -

We will get the following output −

{
   "id": 1,
   "name": "Jane",
   "city": "DC"
}

这证明我们的服务运行良好。

This proves that our service is working fine.

现在,让我们开始定义 Restaurant 服务将用来获取客户所在城市的 Feign 客户端。

Now let us move to define the Feign client which the Restaurant service will use to get the customer city.

package com.tutorialspoint;
import org.springframework.cloud.openfeign.FeignClient;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
@FeignClient(name = "customer-service")
public interface CustomerService {
   @RequestMapping("/customer/{id}")
   public Customer getCustomerById(@PathVariable("id") Long id);
}

Feign 客户端包含服务名称和我们计划在 Restaurant 服务中使用的 API 调用。

The Feign client contains the name of the service and the API call we plan to use in the Restaurant service.

最后,让我们在 Restaurant 服务中定义一个使用上述接口的控制器。

Finally, let us define a controller in the Restaurant service which would use the above interface.

package com.tutorialspoint;
import java.util.HashMap;
import java.util.List;
import java.util.stream.Collectors;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
@RestController
class RestaurantController {
   @Autowired
   CustomerService customerService;
   static HashMap<Long, Restaurant> mockRestaurantData = new HashMap<>();
   static {
      mockRestaurantData.put(1L, new Restaurant(1, "Pandas", "DC"));
      mockRestaurantData.put(2L, new Restaurant(2, "Indies", "SFO"));
      mockRestaurantData.put(3L, new Restaurant(3, "Little Italy", "DC"));
   }
   @RequestMapping("/restaurant/customer/{id}")
   public List<Restaurant> getRestaurantForCustomer(@PathVariable("id") Long id) {
      String customerCity = customerService.getCustomerById(id).getCity();
      return mockRestaurantData.entrySet().stream()
         .filter(entry -> entry.getValue().getCity().equals(customerCity))
         .map(entry -> entry.getValue())
         .collect(Collectors.toList());
   }
}

此处最重要的行如下所示:

The most important line here is the following −

customerService.getCustomerById(id)

这是我们之前定义的 Feign 客户端调用 API 的关键所在。

which is where the magic of API calling by the Feign client we defined earlier happens.

让我们也定义 Restaurant POJO

Let us also define the Restaurant POJO

package com.tutorialspoint;
public class Restaurant {
   private long id;
   private String name;
   private String city;
   public Restaurant(long id, String name, String city) {
      super();
      this.id = id;
      this.name = name;
      this.city = city;
   }
   public long getId() {
      return id;
   }
   public void setId(long id) {
      this.id = id;
   }
   public String getName() {
      return name;
   }
   public void setName(String name) {
      this.name = name;
   }
   public String getCity() {
      return city;
   }
   public void setCity(String city) {
      this.city = city;
   }
}

定义了该内容之后,让我们使用以下 application.yml 文件构建一个简单的 JAR 文件:

Once this is defined, let us create a simple JAR file with the following application.yml file −

spring:
   application:
      name: restaurant-service
server:
   port: ${app_port}
eureka:
   client:
      serviceURL:
         defaultZone: http://localhost:8900/eureka

现在,让我们编译我们的项目,并使用以下命令执行该项目:

Now let us compile our project and use the following command to execute it −

java -Dapp_port=8083 -jar .\target\spring-cloud-feign-client-1.0.jar

总而言之,我们有以下各项运行:

In all, we have the following items running −

  1. Standalone Eureka server

  2. Customer service

  3. Restaurant service

我们可以从 http://localhost:8900/ 上的仪表板确认上述各项是否正常工作。

We can confirm that the above are working from the dashboard on http://localhost:8900/

feign client with eureka

现在,让我们尝试找到能够为 Jane 服务的所有餐厅,Jane 居住在华盛顿特区。

Now, let us try to find all the restaurants which can serve Jane, who is based in DC.

为此,首先让我们访问对应的客户服务:http://localhost:8080/customer/1

For this, first let us hit the customer service for the same: http://localhost:8080/customer/1

{
   "id": 1,
   "name": "Jane",
   "city": "DC"
}

然后,对 Restaurant 服务进行一次调用:http://localhost:8082/restaurant/customer/1

And then, make a call to the Restaurant Service: http://localhost:8082/restaurant/customer/1

[
   {
      "id": 1,
      "name": "Pandas",
      "city": "DC"
   },
   {
      "id": 3,
      "name": "Little Italy",
      "city": "DC"
   }
]

正如我们所见,Jane 可以由华盛顿特区地区的两家餐厅提供服务。

As we see, Jane can be served by 2 restaurants which are in the DC area.

此外,我们可以看到客户服务的日志中:

Also, from the logs of the customer service, we can see −

2021-03-11 11:52:45.745 INFO 7644 --- [nio-8080-exec-1]
o.s.web.servlet.DispatcherServlet : Completed initialization in 1 ms
Querying customer for id with: 1

总而言之,正如我们所见,无需编写任何样板代码甚至无需指定服务的地址,我们就可以对服务进行 HTTP 调用。

To conclude, as we see, without writing any boilerplate code and even specifying the address of the service, we can make HTTP calls to the services.

Feign Client – Zone Awareness

Feign 客户端还支持区域感知。假设我们收到一个针对服务的传入请求,我们需要选择应该为该请求服务的服务器。与其在位于远处的服务器上发送和处理该请求,不如选择同一区域中的服务器会更有成效。

Feign client also supports zone awareness. Say, we get an incoming request for a service and we need to choose the server which should serve the request. Instead of sending and processing that request on a server which is located far, it is more fruitful to choose a server which is in the same zone.

现在,让我们尝试设置一个区域感知的 Feign 客户端。为此,我们将使用上一个示例中的案例。我们将遵循以下步骤:

Let us now try to set up a Feign client which is zone aware. For doing that, we will use the same case as in the previous example. We will have the following −

  1. A standalone Eureka server

  2. Two instances of zone-aware Customer service (the code remains the same as above; we will just use the properties file mentioned in “Eureka – Zone Awareness”)

  3. Two instances of zone-aware Restaurant service.

现在,让我们首先启动分区感知的客户服务。重新回顾一下,以下为 application property 文件。

Now, let us first start the Customer service instances, which are zone aware. Just to recap, here is the application property file.

spring:
   application:
      name: customer-service
server:
   port: ${app_port}
eureka:
   instance:
      metadataMap:
         zone: ${zone_name}
   client:
      serviceURL:
         defaultZone: http://localhost:8900/eureka

在执行方面,我们将运行两个服务实例。为此,让我们打开两个 shell,然后在一个 shell 中执行以下命令 −

For execution, we will have two service instances running. To do that, let’s open two shells and then execute the following command on one shell −

java -Dapp_port=8080 -Dzone_name=USA -jar .\target\spring-cloud-eureka-client-1.0.jar --spring.config.location=classpath:application-za.yml

并在另一个 shell 上执行以下命令:

And execute the following on the other shell −

java -Dapp_port=8081 -Dzone_name=EU -jar .\target\spring-cloud-eureka-client-1.0.jar --spring.config.location=classpath:application-za.yml

现在,让我们创建分区感知的餐厅服务。为此,我们将使用以下 application-za.yml

Let us now create restaurant services which are zone aware. For this, we will use the following application-za.yml

spring:
   application:
      name: restaurant-service
server:
   port: ${app_port}
eureka:
   instance:
      metadataMap:
         zone: ${zone_name}
   client:
      serviceURL:
         defaultZone: http://localhost:8900/eureka

在执行方面,我们将运行两个服务实例。为此,让我们打开两个 shell,然后在一个 shell 中执行以下命令:

For execution, we will have two service instances running. To do that, let’s open two shells and then execute the following command on one shell:

java -Dapp_port=8082 -Dzone_name=USA -jar .\target\spring-cloud-feign-client-1.0.jar --spring.config.location=classpath:application-za.yml

在另一个 shell 中执行以下命令 −

And execute following on the other shell −

java -Dapp_port=8083 -Dzone_name=EU -jar .\target\spring-cloud-feign-client-1.0.jar --spring.config.location=classpath:application-za.yml

现在,我们已经以分区感知模式设置两个餐厅和客户服务的实例。

Now, we have set up two instances each of the restaurant and customer services in zone-aware mode.

zone aware mode

现在,让我们访问 http://localhost:8082/restaurant/customer/1 (访问美国分区)进行测试。

Now, let us test this out by hitting http://localhost:8082/restaurant/customer/1 where we are hitting the USA zone.

[
   {
      "id": 1,
      "name": "Pandas",
      "city": "DC"
   },
   {
      "id": 3,
      "name": "Little Italy",
      "city": "DC"
   }
]

但更需要注意的一点是,该请求会由美国分区中的客户服务提供服务,而不是欧盟分区中的服务。例如,如果我们访问同一 API 5 次,我们会看到美国分区中运行的客户服务在日志记录中有以下内容 −

But the more important point here to note is that the request is served by the Customer service which is present in the USA zone and not the service which is in EU zone. For example, if we hit the same API 5 times, we will see that the customer service which runs in the USA zone will have the following in the log statements −

2021-03-11 12:25:19.036 INFO 6500 --- [trap-executor-0]
c.n.d.s.r.aws.ConfigClusterResolver : Resolving eureka endpoints via
configuration
Got request for customer with id: 1
Got request for customer with id: 1
Got request for customer with id: 1
Got request for customer with id: 1
Got request for customer with id: 1

而欧盟分区中的客户服务不会提供任何服务。

While the customer service in EU zone does not serve any requests.

Spring Cloud - Load Balancer

Introduction

在分布式环境中,服务需要相互通信。通信可以同步或异步进行。服务同步通信时,最好让这些服务在工作人员之间负载均衡请求,这样就不会让某个工作人员不胜负荷。有两种方法可以对请求进行负载均衡

In a distributed environment, services need to communicate with each other. The communication can either happen synchronously or asynchronously. Now, when a service communicates synchronously, it is better for those services to load balance the requests among workers so that a single worker does not get overwhelmed. There are two ways to load balance the requests −

  1. Server-side LB − The workers are fronted by a software which distributes the incoming requests among the workers.

  2. Client-side LB − The caller service themselves distribute the requests among the workers. The benefit of client-side load balancing is that we do not need to have a separate component in the form of a load balancer. We do not need to have high availability of the load balancer etc. Also, we avoid the need to have extra hop from client to LB to worker to get the request fulfilled. So, we save on latency, infrastructure, and maintenance cost.
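The round-robin rotation at the heart of client-side load balancing fits in a few lines. A minimal sketch, with made-up worker addresses:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class RoundRobinBalancer {
   private final List<String> workers;
   private final AtomicInteger position = new AtomicInteger();

   RoundRobinBalancer(List<String> workers) {
      this.workers = workers;
   }

   // Each call hands out the next worker, wrapping around at the end.
   String next() {
      return workers.get(position.getAndIncrement() % workers.size());
   }

   public static void main(String[] args) {
      RoundRobinBalancer lb = new RoundRobinBalancer(
            List.of("http://localhost:8080", "http://localhost:8081"));
      for (int i = 0; i < 4; i++) {
         System.out.println(lb.next()); // alternates between the two workers
      }
   }
}
```

A production balancer would additionally refresh the worker list from a discovery client and skip unhealthy instances, but the selection step is essentially this rotation.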

Spring Cloud 负载均衡器 ( SLB ) 和 Netflix Ribbon 是两个著名的客户端负载均衡器,用于处理此类情况。在本教程中,我们将使用 Spring Cloud 负载均衡器。

Spring Cloud load balancer (SLB) and Netflix Ribbon are two well-known client-side load balancers which are used to handle such situations. In this tutorial, we will use Spring Cloud Load Balancer.

Load Balancer Dependency Setting

让我们使用我们在前几章中已经使用过的餐厅案例。让我们重新使用拥有餐厅所有信息的餐厅服务。请注意,我们会将 Feign 客户端与我们的负载均衡器结合使用。

Let’s use the case of restaurant we have been using in the previous chapters. Let us reuse the Restaurant Service which has all the information about the restaurant. Note that we will use Feign Client with our Load balancer.

首先,让我们用以下依赖项更新服务的 pom.xml

First, let us update the pom.xml of the service with following dependency −

<dependency>
      <groupId>org.springframework.cloud</groupId>
      <artifactId>spring-cloud-starter-openfeign</artifactId>
</dependency>
<dependency>
      <groupId>org.springframework.cloud</groupId>
      <artifactId>spring-cloud-starter-loadbalancer</artifactId>
</dependency>
<dependency>
      <groupId>org.springframework.cloud</groupId>
      <artifactId>spring-cloud-starter-netflix-eureka-client</artifactId>
</dependency>
<dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-web</artifactId>
</dependency>

我们的负载均衡器会使用 Eureka 作为发现客户端来获取有关工作人员实例的信息。为此,我们必须使用 @EnableDiscoveryClient 注释。

Our load balancer would be using Eureka as a discovery client to get information about the worker instances. For that, we will have to use @EnableDiscoveryClient annotation.

package com.tutorialspoint;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.cloud.openfeign.EnableFeignClients;
@SpringBootApplication
@EnableFeignClients
@EnableDiscoveryClient
public class RestaurantService{
   public static void main(String[] args) {
      SpringApplication.run(RestaurantService.class, args);
   }
}

Using Spring Load Balancer with Feign

我们在 Feign 中使用的 @FeignClient 注释实际上包含了一个默认设置的负载均衡器客户端,它对我们的请求进行循环处理。我们来测试一下。以下是我们早期 Feign 部分中的相同的 Feign 客户端。

The @FeignClient annotation that we used in Feign actually packs in a default setup for the load balancer client which round-robins our requests. Let us test this out. Here is the same Feign client from our Feign section earlier.

package com.tutorialspoint;
import org.springframework.cloud.openfeign.FeignClient;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
@FeignClient(name = "customer-service")
public interface CustomerService {
   @RequestMapping("/customer/{id}")
   public Customer getCustomerById(@PathVariable("id") Long id);
}

以下是我们将使用的控制器。同上,这没有更改。

And here is the controller which we will use. Again, this has not been changed.

package com.tutorialspoint;
import java.util.HashMap;
import java.util.List;
import java.util.stream.Collectors;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
@RestController
class RestaurantController {
   @Autowired
   CustomerService customerService;
   static HashMap<Long, Restaurant> mockRestaurantData = new HashMap<>();
   static {
      mockRestaurantData.put(1L, new Restaurant(1, "Pandas", "DC"));
      mockRestaurantData.put(2L, new Restaurant(2, "Indies", "SFO"));
      mockRestaurantData.put(3L, new Restaurant(3, "Little Italy", "DC"));
      mockRestaurantData.put(4L, new Restaurant(4, "Pizzeria", "NY"));
   }
   @RequestMapping("/restaurant/customer/{id}")
   public List<Restaurant> getRestaurantForCustomer(@PathVariable("id") Long id) {
      System.out.println("Got request for customer with id: " + id);
      String customerCity = customerService.getCustomerById(id).getCity();
      return mockRestaurantData.entrySet().stream().filter(
         entry -> entry.getValue().getCity().equals(customerCity))
         .map(entry -> entry.getValue())
         .collect(Collectors.toList());
   }
}

现在我们已经完成了设置,让我们尝试一下。这里有一点背景知识,我们要执行以下操作——

Now that we are done with the setup, let us give this a try. Just a bit of background here, what we will do is the following −

  1. Start the Eureka Server.

  2. Start two instances of the Customer Service.

  3. Start a Restaurant Service which internally calls Customer Service and uses the Spring Cloud Load balancer

  4. Make four API calls to the Restaurant Service. Ideally, two requests would be served by each customer service.

假设我们已启动 Eureka 服务器和客户服务实例,现在让我们编译餐厅服务代码并使用以下命令执行:

Assuming, we have started the Eureka server and the Customer service instances, let us now compile the Restaurant Service code and execute with the following command −

java -Dapp_port=8082 -jar .\target\spring-cloud-feign-client-1.0.jar

现在,让我们通过访问以下 API 来查找位于 DC 的 Jane 的餐厅:http://localhost:8082/restaurant/customer/1 ,并再访问相同的 API 三次。你会从客户服务的日志中注意到,两个实例各处理了 2 个请求。每个客户服务 shell 都会打印以下内容:

Now, let us find restaurants for Jane who is based in DC by hitting the API http://localhost:8082/restaurant/customer/1 and let us hit the same API three times again. You would notice from the logs of the Customer Service that each of the two instances serves 2 requests. Each of the Customer Service shells would print the following −

Querying customer for id with: 1
Querying customer for id with: 1

这实际上意味着请求是循环轮播的。

This effectively means that the requests were round-robined.

Configuring Spring Load Balancer

我们可以配置负载均衡器来更改算法类型,或者我们也可以提供自定义算法。让我们看看如何调整我们的负载均衡器,使其对每次请求都优先选择同一个实例。

We can configure the load balancer to change the type of algorithm, or we can provide a customized algorithm. Let us see how to tweak our load balancer to prefer the same instance for every request.

为此,让我们更新 Feign Client 以包含负载均衡器定义。

For that purpose, let us update our Feign Client to contain load balancer definition.

package com.tutorialspoint;
import org.springframework.cloud.loadbalancer.annotation.LoadBalancerClient;
import org.springframework.cloud.openfeign.FeignClient;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
@FeignClient(name = "customer-service")
@LoadBalancerClient(name = "customer-service", configuration = LoadBalancerConfiguration.class)
public interface CustomerService {
   @RequestMapping("/customer/{id}")
   public Customer getCustomerById(@PathVariable("id") Long id);
}

如果你注意到,我们添加了 @LoadBalancerClient 注解,它指定了将用于此 Feign 客户端的负载均衡器的类型。我们可以为负载均衡器创建配置类,并将该类传给注解本身。现在让我们定义 LoadBalancerConfiguration.java

If you notice, we have added the @LoadBalancerClient annotation which specifies the type of load balancer which would be used for this Feign client. We can create a configuration class for the load balancer and pass the class on to the annotation itself. Now let us define LoadBalancerConfiguration.java

package com.tutorialspoint;
import org.springframework.cloud.loadbalancer.core.ServiceInstanceListSupplier;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
@Configuration
public class LoadBalancerConfiguration {
   @Bean
   public ServiceInstanceListSupplier discoveryClientServiceInstanceListSupplier(
         ConfigurableApplicationContext context) {
      System.out.println("Configuring Load balancer to prefer same instance");
      return ServiceInstanceListSupplier.builder()
            .withBlockingDiscoveryClient()
            .withSameInstancePreference()
            .build(context);
   }
}

现在,如你所见,我们已设置客户端负载均衡,以每次都优先考虑同一个实例。现在我们已经完成了设置,让我们尝试一下。这里有一个背景知识,我们将执行以下操作:

Now, as you see, we have set up our client-side load balancing to prefer the same instance every time. Now that we are done with the setup, let us give this a try. Just a bit of background here, what we will do is the following −

  1. Start the Eureka Server.

  2. Start two instances of the Customer Service.

  3. Start a Restaurant Service which internally calls Customer Service and uses the Spring Cloud Load balancer

  4. Make 4 API calls to the Restaurant Service. Ideally, all four requests would be served by the same customer service.

假设我们已启动 Eureka 服务器和客户服务实例,现在让我们编译餐厅服务代码并使用以下命令执行:

Assuming we have started the Eureka server and the Customer service instances, let us now compile the Restaurant Service code and execute it with the following command −

java -Dapp_port=8082 -jar .\target\spring-cloud-feign-client-1.0.jar

现在,让我们通过访问以下 API 来查找位于 DC 的 Jane 的餐厅:http://localhost:8082/restaurant/customer/1 ,并再访问相同的 API 三次。你会从客户服务的日志中注意到,单个实例处理了所有 4 个请求:

Now, let us find restaurants for Jane who is based in DC by hitting the API http://localhost:8082/restaurant/customer/1 and let us hit the same API three times again. You would notice from the logs of the Customer Service that a single instance serves all 4 requests −

Querying customer for id with: 1
Querying customer for id with: 1
Querying customer for id with: 1
Querying customer for id with: 1

这实际上意味着请求优先考虑相同的客户服务代理。

This effectively means that all the requests were served by the same Customer Service instance.

在类似的情况下,我们可以使用各种其他负载均衡算法来使用粘滞会话、基于提示的负载均衡、区域偏好负载均衡等。

On similar lines, we can use various other load balancing algorithms such as sticky sessions, hint-based load balancing, zone-preference load balancing, and so on.
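For instance, a sticky-session style of selection can be imitated by hashing a session key, so that repeated requests with the same key always land on the same instance. A plain-Java sketch, separate from Spring's actual supplier implementations, with made-up instance addresses:

```java
import java.util.List;

public class StickyChooser {
   // Deterministically map a session key to one instance, so that repeated
   // requests with the same key hit the same backend.
   static String choose(List<String> instances, String sessionKey) {
      int idx = Math.floorMod(sessionKey.hashCode(), instances.size());
      return instances.get(idx);
   }

   public static void main(String[] args) {
      List<String> instances = List.of("http://localhost:8080", "http://localhost:8081");
      System.out.println(choose(instances, "jane-session"));
      System.out.println(choose(instances, "jane-session")); // same instance again
   }
}
```

Note that this simple scheme reshuffles keys whenever the instance list changes; consistent hashing is the usual remedy for that.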

Spring Cloud - Circuit Breaker using Hystrix

Introduction

在分布式环境中,服务需要相互通信。通信可以同步或异步发生。当服务同步通信时,可能会出现多种造成问题的原因。例如:

In a distributed environment, services need to communicate with each other. The communication can either happen synchronously or asynchronously. When services communicate synchronously, there can be multiple reasons why things can break. For example −

  1. Callee service unavailable − The service which is being called is down for some reason, for example, a bug or an ongoing deployment.

  2. Callee service taking time to respond − The service which is being called can be slow due to high load or resource consumption, or because it is still initializing.

In either case, it is a waste of time and network resources for the caller to wait for the callee to respond. It makes more sense for the caller to back off, retry the callee service after some time, or serve a default response.
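
The back-off-and-fallback behaviour described above is exactly what a circuit breaker automates. As a rough sketch of the idea (a toy model, not how Hystrix is implemented), consider:

```java
import java.util.function.Supplier;

// Toy circuit breaker: after a number of consecutive failures the circuit
// "opens" and calls are short-circuited to the fallback until a retry
// window elapses. Illustrative only; not the Hystrix implementation.
public class SimpleCircuitBreaker {
   private final int failureThreshold;
   private final long retryAfterMillis;
   private int consecutiveFailures = 0;
   private long openedAt = -1;

   public SimpleCircuitBreaker(int failureThreshold, long retryAfterMillis) {
      this.failureThreshold = failureThreshold;
      this.retryAfterMillis = retryAfterMillis;
   }

   public <T> T call(Supplier<T> callee, Supplier<T> fallback) {
      if (isOpen()) {
         return fallback.get(); // short-circuit: do not waste time on the callee
      }
      try {
         T result = callee.get();
         consecutiveFailures = 0; // a success closes the circuit again
         return result;
      } catch (RuntimeException e) {
         if (++consecutiveFailures >= failureThreshold) {
            openedAt = System.currentTimeMillis(); // trip the circuit
         }
         return fallback.get();
      }
   }

   public boolean isOpen() {
      return openedAt >= 0 && System.currentTimeMillis() - openedAt < retryAfterMillis;
   }
}
```

While the circuit is open, the callee is never invoked at all, which is precisely the resource saving the paragraph above argues for.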

Netflix Hystrix and Resilience4j are two well-known circuit breakers which are used to handle such situations. In this tutorial, we will use Hystrix.

Hystrix – Dependency Setting

Let us use the case of Restaurant that we have been using earlier. Let us add hystrix dependency to our Restaurant Services which call the Customer Service. First, let us update the pom.xml of the service with the following dependency −

<dependency>
   <groupId>org.springframework.cloud</groupId>
   <artifactId>spring-cloud-starter-netflix-hystrix</artifactId>
   <version>2.7.0.RELEASE</version>
</dependency>

And then, annotate our Spring application class with the correct annotation, i.e., @EnableHystrix

package com.tutorialspoint;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.cloud.netflix.hystrix.EnableHystrix;
import org.springframework.cloud.openfeign.EnableFeignClients;
@SpringBootApplication
@EnableFeignClients
@EnableDiscoveryClient
@EnableHystrix
public class RestaurantService{
   public static void main(String[] args) {
      SpringApplication.run(RestaurantService.class, args);
   }
}

Points to Note

  1. @EnableDiscoveryClient and @EnableFeignClients − We have already looked at these annotations in the previous chapter.

  2. @EnableHystrix − This annotation scans our packages and looks out for methods which are using @HystrixCommand annotation.

Hystrix Command Annotation

Once done, we will reuse the Feign client which we had defined for our customer service class earlier in the Restaurant service, no changes here −

package com.tutorialspoint;
import org.springframework.cloud.openfeign.FeignClient;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
@FeignClient(name = "customer-service")
public interface CustomerService {
   @RequestMapping("/customer/{id}")
   public Customer getCustomerById(@PathVariable("id") Long id);
}

Now, let us define the service implementation class here which would use the Feign client. This would be a simple wrapper around the feign client.

package com.tutorialspoint;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import com.netflix.hystrix.contrib.javanica.annotation.HystrixCommand;
@Service
public class CustomerServiceImpl implements CustomerService {
   @Autowired
   CustomerService customerService;
   @HystrixCommand(fallbackMethod="defaultCustomerWithNYCity")
   public Customer getCustomerById(Long id) {
      return customerService.getCustomerById(id);
   }
   // assume customer resides in NY city
   public Customer defaultCustomerWithNYCity(Long id) {
      return new Customer(id, null, "NY");
   }
}

Now, let us understand couple of points from the above code −

  1. HystrixCommand annotation − This is responsible for wrapping the function call, that is, getCustomerById, and providing a proxy around it. The proxy then gives various hooks through which we can control our call to the customer service, for example, timing out the request, pooling of requests, providing a fallback method, etc.

  2. Fallback method − We can specify the method we want to call when Hystrix determines that something is wrong with the callee. This method needs to have the same signature as the method which is annotated. In our case, we have decided to provide the data back to our controller for the NY city.

Couple of useful options this annotation provides −

  1. Error threshold percent − Percentage of requests allowed to fail before the circuit is tripped, that is, before fallback methods are called. This can be controlled by using circuitBreaker.errorThresholdPercentage

  2. Giving up on the network request after timeout − If the callee service, in our case Customer service, is slow, we can set the timeout after which we will drop the request and move to fallback method. This is controlled by setting execution.isolation.thread.timeoutInMilliseconds
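
Assuming the Hystrix Javanica annotations shown earlier, the two options above could be set on the same command roughly as follows; the 50% threshold and 2-second timeout are arbitrary example values, not recommended defaults.

```java
// Illustrative configuration fragment for the getCustomerById command
// from CustomerServiceImpl; the values are examples only.
@HystrixCommand(
   fallbackMethod = "defaultCustomerWithNYCity",
   commandProperties = {
      // trip the circuit once 50% of requests in the rolling window fail
      @HystrixProperty(name = "circuitBreaker.errorThresholdPercentage", value = "50"),
      // give up on the customer-service call after 2 seconds
      @HystrixProperty(name = "execution.isolation.thread.timeoutInMilliseconds", value = "2000")
   }
)
public Customer getCustomerById(Long id) {
   return customerService.getCustomerById(id);
}
```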

And lastly, here is our controller, from which we call the CustomerServiceImpl −

package com.tutorialspoint;
import java.util.HashMap;
import java.util.List;
import java.util.stream.Collectors;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
@RestController
class RestaurantController {
   @Autowired
   CustomerServiceImpl customerService;
   static HashMap<Long, Restaurant> mockRestaurantData = new HashMap<>();
   static{
      mockRestaurantData.put(1L, new Restaurant(1, "Pandas", "DC"));
      mockRestaurantData.put(2L, new Restaurant(2, "Indies", "SFO"));
      mockRestaurantData.put(3L, new Restaurant(3, "Little Italy", "DC"));
      mockRestaurantData.put(4L, new Restaurant(4, "Pizzeria", "NY"));
   }
   @RequestMapping("/restaurant/customer/{id}")
   public List<Restaurant> getRestaurantForCustomer(@PathVariable("id") Long id) {
      System.out.println("Got request for customer with id: " + id);
      String customerCity = customerService.getCustomerById(id).getCity();
      return mockRestaurantData.entrySet().stream().filter(
         entry -> entry.getValue().getCity().equals(customerCity))
         .map(entry -> entry.getValue())
         .collect(Collectors.toList());
   }
}

Circuit Tripping/Opening

Now that we are done with the setup, let us give this a try. Just a bit of background here: what we will do is the following −

  1. Start the Eureka Server

  2. Start the Customer Service

  3. Start the Restaurant Service which will internally call Customer Service.

  4. Make an API call to Restaurant Service

  5. Shut down the Customer Service

  6. Make an API call to Restaurant Service. Given that Customer Service is down, it would cause failure and ultimately, the fallback method would be called.

Let us now compile the Restaurant Service code and execute with the following command −

java -Dapp_port=8082 -jar .\target\spring-cloud-feign-client-1.0.jar

Also, start the Customer Service and the Eureka server. Note that there are no changes in these services and they remain same as seen in the previous chapters.

Now, let us try to find restaurants for Jane, who is based in DC.

{
   "id": 1,
   "name": "Jane",
   "city": "DC"
}

For doing that, we will hit the following URL: http://localhost:8082/restaurant/customer/1

[
   {
      "id": 1,
      "name": "Pandas",
      "city": "DC"
   },
   {
      "id": 3,
      "name": "Little Italy",
      "city": "DC"
   }
]

So, nothing new here, we got the restaurants which are in DC. Now, let’s move to the interesting part which is shutting down the Customer service. You can do that either by hitting Ctrl+C or simply killing the shell.

Now let us hit the same URL again − http://localhost:8082/restaurant/customer/1

{
   "id": 4,
   "name": "Pizzeria",
   "city": "NY"
}

As is visible from the output, we have got the restaurants from NY, although our customer is from DC. This is because our fallback method returned a dummy customer who is situated in NY. Although not useful by itself, the above example shows that the fallback was called as expected.

Integrating Caching with Hystrix

To make the above method more useful, we can integrate caching when using Hystrix. This can be a useful pattern to provide better answers when the underlying service is not available.

First, let us create a cached version of the service.

package com.tutorialspoint;
import java.util.HashMap;
import java.util.Map;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import com.netflix.hystrix.contrib.javanica.annotation.HystrixCommand;
@Service
public class CustomerServiceCachedFallback implements CustomerService {
   Map<Long, Customer> cachedCustomer = new HashMap<>();
   @Autowired
   CustomerService customerService;
   @HystrixCommand(fallbackMethod="defaultToCachedData")
   public Customer getCustomerById(Long id) {
      Customer customer = customerService.getCustomerById(id);
      // cache value for future reference
      cachedCustomer.put(customer.getId(), customer);
      return customer;
   }
   // get customer data from local cache
   public Customer defaultToCachedData(Long id) {
      return cachedCustomer.get(id);
   }
}

We are using a HashMap as the storage to cache the data. This is for development purposes only. In a production environment, we would want to use better caching solutions, for example, Redis, Hazelcast, etc.
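
If we must stay in-memory for development, one small improvement over an unbounded HashMap is a size-bounded LRU cache. The following sketch is illustrative and not part of the tutorial's code; it uses LinkedHashMap's access order for eviction.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Bounded fallback cache: evicts the least-recently-used entry once the
// configured capacity is exceeded. A real deployment would still prefer
// an external cache such as Redis or Hazelcast.
public class LruFallbackCache<K, V> {
   private final Map<K, V> cache;

   public LruFallbackCache(int maxEntries) {
      // accessOrder=true moves an entry to the tail on each get()
      this.cache = new LinkedHashMap<K, V>(16, 0.75f, true) {
         @Override
         protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
            return size() > maxEntries; // evict when over capacity
         }
      };
   }

   public void put(K key, V value) { cache.put(key, value); }
   public V get(K key) { return cache.get(key); }
   public int size() { return cache.size(); }
}
```

The fallback method would then call get(id) instead of reading the raw HashMap, so memory use stays bounded even for long-running services.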

Now, we just need to update one line in the controller to use the above service −

@RestController
class RestaurantController {
   @Autowired
   CustomerServiceCachedFallback customerService;
   static HashMap<Long, Restaurant> mockRestaurantData = new HashMap<>();
   …
}

We will follow the same steps as above −

  1. Start the Eureka Server.

  2. Start the Customer Service.

  3. Start the Restaurant Service which internally calls the Customer Service.

  4. Make an API call to the Restaurant Service.

  5. Shut down the Customer Service.

  6. Make an API call to the Restaurant Service. Given that Customer Service is down but the data is cached, we will get a valid set of data.

Now, let us follow the same process till step 3.

Now hit the URL: http://localhost:8082/restaurant/customer/1

[
   {
      "id": 1,
      "name": "Pandas",
      "city": "DC"
   },
   {
      "id": 3,
      "name": "Little Italy",
      "city": "DC"
   }
]

So, nothing new here, we got the restaurants which are in DC. Now, let us move to the interesting part which is shutting down the Customer service. You can do that either by hitting Ctrl+C or simply killing the shell.

Now let us hit the same URL again − http://localhost:8082/restaurant/customer/1

[
   {
      "id": 1,
      "name": "Pandas",
      "city": "DC"
   },
   {
      "id": 3,
      "name": "Little Italy",
      "city": "DC"
   }
]

As is visible from the output, we have got the restaurants from DC which is what we expect as our customer is from DC. This is because our fallback method returned a cached customer data.

Integrating Feign with Hystrix

We saw how to use the @HystrixCommand annotation to trip the circuit and provide a fallback. But we had to additionally define a service class to wrap our Hystrix client. However, we can achieve the same by simply passing the correct arguments to the Feign client. Let us try to do that. For that, first update our Feign client for CustomerService by adding a fallback class.

package com.tutorialspoint;
import org.springframework.cloud.openfeign.FeignClient;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
@FeignClient(name = "customer-service", fallback = FallBackHystrix.class)
public interface CustomerService {
   @RequestMapping("/customer/{id}")
   public Customer getCustomerById(@PathVariable("id") Long id);
}

Now, let us add the fallback class for the Feign client which will be called when the Hystrix circuit is tripped.

package com.tutorialspoint;
import org.springframework.stereotype.Component;
@Component
public class FallBackHystrix implements CustomerService{
   @Override
   public Customer getCustomerById(Long id) {
      System.out.println("Fallback called....");
      return new Customer(0, "Temp", "NY");
   }
}

Lastly, we also need to create the application-circuit.yml to enable Hystrix.

spring:
   application:
      name: restaurant-service
server:
   port: ${app_port}
eureka:
   client:
      serviceURL:
         defaultZone: http://localhost:8900/eureka
feign:
   circuitbreaker:
      enabled: true

Now, that we have the setup ready, let us test this out. We will follow these steps −

  1. Start the Eureka Server.

  2. We do not start the Customer Service.

  3. Start the Restaurant Service which will internally call Customer Service.

  4. Make an API call to Restaurant Service. Given that Customer Service is down, we will notice the fallback.

Assuming 1st step is already done, let’s move to step 3. Let us compile the code and execute the following command −

java -Dapp_port=8082 -jar .\target\spring-cloud-feign-client-1.0.jar --spring.config.location=classpath:application-circuit.yml

Let us now try to hit − http://localhost:8082/restaurant/customer/1

As we have not started Customer Service, fallback would be called and the fallback sends over NY as the city, which is why, we see NY restaurants in the following output.

{
   "id": 4,
   "name": "Pizzeria",
   "city": "NY"
}

Also, to confirm, in the logs, we would see −

….
2021-03-13 16:27:02.887 WARN 21228 --- [reakerFactory-1]
.s.c.o.l.FeignBlockingLoadBalancerClient : Load balancer does not contain an
instance for the service customer-service
Fallback called....
2021-03-13 16:27:03.802 INFO 21228 --- [ main]
o.s.cloud.commons.util.InetUtils : Cannot determine local hostname
…..

Spring Cloud - Gateway

Introduction

In a distributed environment, services need to communicate with each other. However, this is inter-service communication. We also have use-cases where a client outside our domain wants to hit our services via an API. So, either we can expose the addresses of all our microservices to be called by clients, OR we can create a Service Gateway which routes the requests to the various microservices and responds to the clients.

Creating a Gateway is the much better approach here. There are two major advantages −

  1. The security does not need to be maintained for each individual service.

  2. And, cross-cutting concerns, for example, addition of meta-information can be handled at a single place.

Netflix Zuul and Spring Cloud Gateway are two well-known Cloud Gateways which are used to handle such situations. In this tutorial, we will use Spring Cloud Gateway.

Spring Cloud Gateway – Dependency Setting

Let us use the case of Restaurant which we have been using. Let us add a new service (gateway) in front of our two services, i.e., Restaurant services and Customer Service. First, let us update the pom.xml of the service with the following dependency −

<dependencies>
   <dependency>
         <groupId>org.springframework.cloud</groupId>
         <artifactId>spring-cloud-starter-netflix-eureka-client</artifactId>
   </dependency>
   <dependency>
         <groupId>org.springframework.cloud</groupId>
         <artifactId>spring-cloud-starter-gateway</artifactId>
   </dependency>
</dependencies>

And then, annotate our Spring application class with the correct annotation, i.e., @EnableDiscoveryClient.

package com.tutorialspoint;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
@SpringBootApplication
@EnableDiscoveryClient
public class RestaurantGatewayService{
   public static void main(String[] args) {
      SpringApplication.run(RestaurantGatewayService.class, args);
   }
}

We are annotating with @EnableDiscoveryClient because we want to use Eureka service discovery to get the list of hosts which are serving a particular use-case.

Dynamic Routing with Gateway

The Spring Cloud Gateway has three important parts to it. Those are −

  1. Route − These are the building blocks of the gateway, which contain the URL to which the request is to be forwarded and the predicates and filters that are applied on the incoming requests.

  2. Predicate − These are the set of criteria which should match for the incoming requests to be forwarded to internal microservices. For example, a path predicate will forward the request only if the incoming URL contains that path.

  3. Filters − These act as the place where you can modify the incoming requests before sending the requests to the internal microservices or before responding back to the client.
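
The routing behaviour of a Path predicate can be sketched in plain Java as follows. This is only a toy illustration of prefix matching, not Spring Cloud Gateway's actual implementation; the route table mirrors the customer/restaurant setup used in this chapter.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy path router: forwards a request to the target of the first route
// whose path prefix matches, in insertion order. Illustrative only.
public class PathRouter {
   private final Map<String, String> routes = new LinkedHashMap<>();

   public void addRoute(String pathPrefix, String targetService) {
      routes.put(pathPrefix, targetService);
   }

   // Returns the target for the first matching prefix, or null if none match.
   public String route(String requestPath) {
      for (Map.Entry<String, String> r : routes.entrySet()) {
         if (requestPath.startsWith(r.getKey())) {
            return r.getValue();
         }
      }
      return null; // no route matched; a real gateway would return 404
   }
}
```

A request for /customer/1 resolves to the customer-service target, while an unknown prefix matches nothing, which is exactly the decision the Path predicate makes before any filters run.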

Let us write a simple configuration for the Gateway for our Restaurant and Customer service.

spring:
   application:
      name: restaurant-gateway-service
   cloud:
      gateway:
         discovery:
            locator:
               enabled: true
         routes:
            - id: customers
              uri: lb://customer-service
              predicates:
              - Path=/customer/**
            - id: restaurants
              uri: lb://restaurant-service
              predicates:
              - Path=/restaurant/**
server:
   port: ${app_port}
eureka:
   client:
      serviceURL:
         defaultZone: http://localhost:8900/eureka

Points to note about the above configuration −

  1. We have enabled the discovery.locator to ensure that the gateway can read from the Eureka server.

  2. We have used the Path predicate here to route the requests. What this means is that any request whose path begins with /customer would be routed to the Customer Service, and for /restaurant, we will forward that request to the Restaurant Service.

Now let us setup other services prior to the Gateway service −

  1. Start the Eureka Server

  2. Start the Customer Service

  3. Start the Restaurant Service

Now, let us compile and execute the Gateway project. We will use the following command for the same −

java -Dapp_port=8084 -jar .\target\spring-cloud-gateway-1.0.jar

Once this is done, we have our Gateway ready to be tested on port 8084. Let us first hit http://localhost:8084/customer/1 and we see the request is correctly routed to the Customer Service and we get the following output −

{
   "id": 1,
   "name": "Jane",
   "city": "DC"
}

And now, hit our restaurant API, i.e., http://localhost:8084/restaurant/customer/1 and we get the following output −

[
   {
      "id": 1,
      "name": "Pandas",
      "city": "DC"
   },
   {
      "id": 3,
      "name": "Little Italy",
      "city": "DC"
   }
]

This means that both the calls were correctly routed to the respective services.

Predicates & Filters Request

We had used Path predicate in our above example. Here are a few other important predicates −

Predicate

Description

Cookie predicate (input: name and regexp)

Compares the cookie with the given ‘name’ against the ‘regexp’

Header predicate (input: name and regexp)

Compares the header with the given ‘name’ against the ‘regexp’

Host predicate (input: patterns)

Matches the ‘Host’ header of the incoming request against the given patterns

Weight predicate (input: group name and weight)

Forwards requests to the routes within a group in proportion to their weights

Filters are used to add/remove data from the request before sending the data to the downstream service or before sending the response back to the client.

Following are a few important filters for adding metadata.

Filter

Description

Add request header filter (input: header and the value)

Add a ‘header’ and the ‘value’ before forwarding the request downstream.

Add response header filter (input: header and the value)

Add a ‘header’ and the ‘value’ to the response before sending it back to the client.

Redirect filter (input: status and URL)

Responds with the given redirect ‘status’ and a Location header set to the ‘URL’.

ReWritePath (input: regexp and replacement)

This is responsible for rewriting the path by replacing the ‘regexp’ matched string with the input replacement.
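
The effect of RewritePath can be illustrated with plain Java regular expressions; the '/api' prefix below is a made-up example, not part of this chapter's gateway configuration.

```java
// Plain-Java sketch of what the RewritePath filter does: replace the
// part of the path matched by a regexp before forwarding downstream.
public class RewritePathDemo {
   // e.g., strip a public "/api" prefix that internal services do not expect
   public static String rewrite(String path, String regexp, String replacement) {
      return path.replaceAll(regexp, replacement);
   }
}
```

So a client-facing path like /api/customer/1 can be forwarded to the internal service as /customer/1 without the service knowing about the public prefix.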

Monitoring

For monitoring of the Gateway or for accessing various routes, predicates, etc., we can enable the actuator in the project. For doing that, let us first update the pom.xml to contain the actuator as a dependency.

<dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>

For monitoring, we will use a separate application property file which contains the flags to enable the actuator. So, here is how it would look −

spring:
   application:
      name: restaurant-gateway-service
   cloud:
      gateway:
         discovery:
            locator:
               enabled: true
         routes:
            - id: customers
              uri: lb://customer-service
              predicates:
              - Path=/customer/**
            - id: restaurants
              uri: lb://restaurant-service
              predicates:
              - Path=/restaurant/**
server:
   port: ${app_port}
eureka:
   client:
      serviceURL:
         defaultZone: http://localhost:8900/eureka
management:
   endpoint:
      gateway:
         enabled: true
   endpoints:
      web:
         exposure:
            include: gateway

Now, to list all the routes, we can hit: http://localhost:8084/actuator/gateway/routes

[
   {
      "predicate": "Paths: [/customer/**], match trailing slash: true",
      "route_id": "customers",
      "filters": [],
      "uri": "lb://customer-service",
      "order": 0
   },
   {
      "predicate": "Paths: [/restaurant/**], match trailing slash: true",
      "route_id": "restaurants",
      "filters": [],
      "uri": "lb://restaurant-service",
      "order": 0
   }
]

Other important APIs for monitoring −

API

Description

GET /actuator/gateway/routes/{id}

Get information about a particular route

POST /actuator/gateway/routes/{id_to_be_assigned}

Add a new route to the Gateway

DELETE /actuator/gateway/routes/{id}

Remove the route from the Gateway

POST /actuator/gateway/refresh

Clear the routes cache

Spring Cloud - Streams with Apache Kafka

Introduction

In a distributed environment, services need to communicate with each other. The communication can either happen synchronously or asynchronously. In this section, we will look at how services can communicate asynchronously using message brokers.

Two major benefits of performing asynchronous communication −

  1. Producer and Consumer speed can differ − If the consumer of the data is slow or fast, it does not affect the producer processing and vice versa. Both can work at their own individual speeds without affecting each other.

  2. Producer does not need to handle requests from various consumers − There may be multiple consumers who want to read the same set of data from the producer. With a message broker in between, the producer does not need to take care of the load these consumers generate. Plus, any outages at the producer level would not block the consumers from reading older producer data, as this data would be available in the message broker.
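
The first benefit above can be illustrated with an in-process queue standing in for the broker. This is a toy model only; a real system would use Kafka or RabbitMQ, not a queue inside one JVM.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Toy broker demo: the producer finishes at its own speed, the consumer
// drains the queue later at its own pace; neither blocks the other.
public class BrokerDecouplingDemo {
   public static int runOnce(int messages) {
      BlockingQueue<String> broker = new ArrayBlockingQueue<>(messages);
      Thread producer = new Thread(() -> {
         for (int i = 0; i < messages; i++) {
            broker.add("customer-" + i); // producer publishes and moves on
         }
      });
      producer.start();
      try {
         producer.join();
      } catch (InterruptedException e) {
         Thread.currentThread().interrupt();
      }
      // the consumer reads whenever it is ready; nothing is lost meanwhile
      int consumed = 0;
      while (broker.poll() != null) {
         consumed++;
      }
      return consumed;
   }
}
```

Every published message is still available when the consumer finally reads, which is the buffering role the broker plays between differently paced services.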

Apache Kafka and RabbitMQ are two well-known message brokers used for making asynchronous communication. In this tutorial, we will use Apache Kafka.

Kafka – Dependency Setting

Let’s use the case of Restaurant that we have been using earlier. So, let us say we have our Customer Service and the Restaurant Service communicating via asynchronous communication. To do that, we will use Apache Kafka. And we will need to use that in both services, i.e., Customer Service and Restaurant Service.

To use Apache Kafka, we will update the POM of both services and add the following dependency.

<dependency>
      <groupId>org.springframework.cloud</groupId>
      <artifactId>spring-cloud-starter-stream-kafka</artifactId>
</dependency>

We also need to have Kafka instances running. There are multiple ways through which it can be done, but we will prefer starting Kafka using a Docker container. Here are a few images we can consider using −

Whichever image we use, the important thing to note is that once the image is up and running, please ensure that the Kafka cluster is accessible at localhost:9092

Now that we have the Kafka cluster running via our Docker image, let us move to the core example.

Binding & Binders

There are three important concepts when it comes to Spring Cloud streams −

  1. External Messaging System − This is the component which is managed externally and is responsible for storing the events/messages produced by the application so that they can be read by their subscribers/consumers. Note that this is not managed within the app/Spring. A few examples are Apache Kafka and RabbitMQ.

  2. Binders − This is the component which provides the integration with the messaging system, for example, the messaging system's IP address, authentication, and so on.

  3. Bindings − This component uses the Binders to produce messages to the messaging system or consume the message from a specific topic/queue.

All the above properties are defined in the application properties file.

Example

Let us use the case of Restaurant that we have been using earlier. So, let us suppose that whenever a new customer is added to the Customer Service, we want to notify the nearby Restaurants with the customer info.

For this purpose, let us update our Customer Service first to include and use Kafka. Note that we will use Customer Service as a producer of the data. That is, whenever we add the Customer via API, it will also be added to the Kafka.

spring:
   application:
      name: customer-service
   cloud:
      stream:
         source: customerBinding-out-0
         kafka:
            binder:
               brokers: localhost:9092
               replicationFactor: 1
         bindings:
            customerBinding-out-0:
               destination: customer
               producer:
                  partitionCount: 3
server:
   port: ${app_port}
eureka:
   client:
      serviceURL:
         defaultZone: http://localhost:8900/eureka

Points to note

  1. We have defined a binder with the address of our local Kafka instances.

  2. We have also defined the binding ‘customerBinding-out-0’ which uses ‘customer’ topic to output the messages in.

  3. We have also mentioned our binding in the stream.source so that we can imperatively use that in our code.
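
To see what partitionCount: 3 implies, here is a rough sketch of key-based partition selection. The hashing scheme is illustrative; Spring Cloud Stream actually derives the partition from the partition-key expression configured on the binding.

```java
// Toy model of partitioned output: a message key is hashed to one of N
// partitions, so all messages for the same customer id land on the same
// partition and are consumed in order. Illustrative only.
public class PartitionDemo {
   public static int partitionFor(Object key, int partitionCount) {
      // floorMod keeps the partition index non-negative for any hash value
      return Math.floorMod(key.hashCode(), partitionCount);
   }
}
```

Because the mapping is deterministic, repeated events for one customer never get split across partitions, which preserves per-key ordering.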

Once this is done, let us now update our controller by adding a new method ‘addCustomer’ which is responsible for serving the POST request. And then, from the POST request handler, we send the data to the Kafka broker.

package com.tutorialspoint;
import java.util.HashMap;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cloud.stream.function.StreamBridge;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;
@RestController
class RestaurantCustomerInstancesController {
   @Autowired
   private StreamBridge streamBridge;
   static HashMap<Long, Customer> mockCustomerData = new HashMap<>();
   static{
      mockCustomerData.put(1L, new Customer(1, "Jane", "DC"));
      mockCustomerData.put(2L, new Customer(2, "John", "SFO"));
      mockCustomerData.put(3L, new Customer(3, "Kate", "NY"));
   }
   @RequestMapping("/customer/{id}")
   public Customer getCustomerInfo(@PathVariable("id") Long id) {
      System.out.println("Querying customer for id with: " + id);
      return mockCustomerData.get(id);
   }
   @RequestMapping(path = "/customer/{id}", method = RequestMethod.POST)
   public Customer addCustomer(@PathVariable("id") Long id) {
      // add default name
      Customer defaultCustomer = new Customer(id, "Dwayne", "NY");
      streamBridge.send("customerBinding-out-0", defaultCustomer);
      return defaultCustomer;
   }
}

Points to note

  1. We are Autowiring StreamBridge which is what we will use to send the messages.

  2. The first parameter of the ‘send’ method specifies the binding over which the data is sent.

Now let us update our Restaurant Service to include and subscribe to ‘customer’ topic. Note that we will use Restaurant Service as a consumer of the data. That is, whenever we add the Customer via API, the Restaurant Service would come to know about it via Kafka.

First, let us update the application properties file.

spring:
   application:
      name: restaurant-service
   cloud:
      function:
         definition: customerBinding
      stream:
         kafka:
            binder:
               brokers: localhost:9092
               replicationFactor: 1
         bindings:
            customerBinding-in-0:
               destination: customer
server:
   port: ${app_port}
eureka:
   client:
      serviceURL:
         defaultZone: http://localhost:8900/eureka

Once this is done, let us now update our controller by adding a new bean ‘customerBinding’ which returns a function that prints the incoming message along with its metadata details.

package com.tutorialspoint;
import java.util.HashMap;
import java.util.List;
import java.util.function.Consumer;
import java.util.stream.Collectors;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cloud.stream.function.StreamBridge;
import org.springframework.context.annotation.Bean;
import org.springframework.messaging.Message;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
@RestController
class RestaurantController {
   @Autowired
   CustomerService customerService;
   @Autowired
   private StreamBridge streamBridge;
   static HashMap<Long, Restaurant> mockRestaurantData = new HashMap<>();
   static{
      mockRestaurantData.put(1L, new Restaurant(1, "Pandas", "DC"));
      mockRestaurantData.put(2L, new Restaurant(2, "Indies", "SFO"));
      mockRestaurantData.put(3L, new Restaurant(3, "Little Italy", "DC"));
      mockRestaurantData.put(4L, new Restaurant(4, "Pizeeria", "NY"));
   }
   @RequestMapping("/restaurant/customer/{id}")
   public List<Restaurant> getRestaurantForCustomer(@PathVariable("id") Long id) {
      System.out.println("Got request for customer with id: " + id);
      String customerCity = customerService.getCustomerById(id).getCity();
      return mockRestaurantData.entrySet().stream().filter(
            entry -> entry.getValue().getCity().equals(customerCity))
         .map(entry -> entry.getValue())
         .collect(Collectors.toList());
   }
   @RequestMapping("/restaurant/cust/{id}")
   public void getRestaurantForCust(@PathVariable("id") Long id) {
      streamBridge.send("ordersBinding-out-0", id);
   }
   @Bean
   public Consumer<Message<Customer>> customerBinding() {
      return msg -> {
         System.out.println(msg);
      };
   }
}

Points to note

  1. We have defined a bean ‘customerBinding’ which returns the function that is invoked when a message arrives on this binding.

  2. The name that we use for this function/bean also needs to be used in the YAML file while creating the binding and specifying the topic.

Now, let us execute the above code as always: start the Eureka Server. Note that this is not a hard requirement and is present here for the sake of completeness.

Then, let us compile and start the updated Customer Service using the following command −

mvn clean install ; java -Dapp_port=8083 -jar .\target\spring-cloud-eurekaclient-1.0.jar --spring.config.location=classpath:application-kafka.yml

Then, let us compile and start the updated Restaurant Service using the following command −

mvn clean install; java -Dapp_port=8082 -jar .\target\spring-cloud-feign-client-1.0.jar --spring.config.location=classpath:application-kafka.yml

And we are set, let us now test our code pieces by hitting the API −

curl -X POST http://localhost:8083/customer/1

Here is the output that we will get for this API −

{
   "id": 1,
   "name": "Dwayne",
   "city": "NY"
}

And now, let us check the logs for the Restaurant Service −

GenericMessage [payload=Customer [id=1, name=Dwayne, city=NY],
headers={kafka_offset=1,...

So, effectively, you see that using Kafka Broker, Restaurant Service was notified about the newly added Customer.

Partitions & Consumer Groups

Partitions and Consumer Groups are two important concepts that you should be aware of while using Spring Cloud streams.

Partitions − They are used to partition the data so that we can divide the work between multiple consumers.

Let us see how to partition the data in Spring Cloud. Say, we want to partition the data based on the Customer ID. For that, let us update our Customer Service and tell Spring Cloud Stream which key to partition the data by.

Let us update our Customer Service application properties to specify the key for our data.

spring:
   application:
      name: customer-service
   cloud:
      function:
         definition: ordersBinding
      stream:
         source: customerBinding-out-0
         kafka:
            binder:
               brokers: localhost:9092
               replicationFactor: 1
         bindings:
            customerBinding-out-0:
               destination: customer
               producer:
                  partitionKeyExpression: 'getPayload().getId()'
                  partitionCount: 3
server:
   port: ${app_port}
eureka:
   client:
      serviceURL:
         defaultZone: http://localhost:8900/eureka

For specifying the key, i.e., ‘partitionKeyExpression’, we provide a Spring Expression Language expression. The expression assumes the type GenericMessage&lt;Customer&gt; since we are sending the Customer data in the message. Note that GenericMessage is the Spring Framework class used for wrapping the payload and the headers in a single object. So, we get the payload from this message, which is of the type Customer, and then we call the getId() method on the customer.
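
For intuition, the partition selection that Spring Cloud Stream applies by default to the key computed by ‘partitionKeyExpression’ boils down to taking the key's hashCode modulo ‘partitionCount’. Below is a minimal plain-Java sketch of that idea, outside of Spring; the helper name ‘partitionFor’ is ours, not a framework API:

```java
public class PartitionSketch {
   // Default-style selection: hashCode of the extracted key, modulo partition count.
   // (Illustrative simplification; not a Spring API.)
   static int partitionFor(Object key, int partitionCount) {
      return Math.abs(key.hashCode()) % partitionCount;
   }

   public static void main(String[] args) {
      // 'getPayload().getId()' extracts the customer id as the partition key.
      for (long id : new long[] {1L, 1L, 5L, 3L, 1L}) {
         System.out.println("customer " + id + " -> partition " + partitionFor(id, 3));
      }
   }
}
```

With three partitions, ids 1, 5 and 3 map to partitions 1, 2 and 0 respectively, which is why repeated requests for the same customer id always land on the same partition.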

Now, let us also update our consumer, i.e., the Restaurant Service to log more info while consuming the request.

Now, let us execute the above code as always, start the Eureka Server. Note that this is not a hard requirement and is present here for the sake of completeness.

Then, let us compile and start the updated Customer Service using the following command −

mvn clean install ; java -Dapp_port=8083 -jar .\target\spring-cloud-eurekaclient-1.0.jar --spring.config.location=classpath:application-kafka.yml

Then, let us compile and start the updated Restaurant Service using the following command −

mvn clean install; java -Dapp_port=8082 -jar .\target\spring-cloud-feign-client-1.0.jar --spring.config.location=classpath:application-kafka.yml

And we are set, let us now test our code pieces. As part of testing, here is what we will do −

  1. Insert a customer with Id 1: curl -X POST http://localhost:8083/customer/1

  2. Insert a customer with Id 1 again: curl -X POST http://localhost:8083/customer/1

  3. Insert a customer with Id 5: curl -X POST http://localhost:8083/customer/5

  4. Insert a customer with Id 3: curl -X POST http://localhost:8083/customer/3

  5. Insert a customer with Id 1: curl -X POST http://localhost:8083/customer/1

We do not care much about the output of the API. Rather, we care more about the partition to which the data is sent. Since we are using the customer ID as the key, we expect that messages for the same customer ID would end up in the same partition.

And now, let us check the logs for the Restaurant Service −

Consumer: org.apache.kafka.clients.consumer.KafkaConsumer@7d6d8400
Consumer Group: anonymous.9108d02a-b1ee-4a7a-8707-7760581fa323
Partition Id: 1
Customer: Customer [id=1, name=Dwayne, city=NY]
Consumer: org.apache.kafka.clients.consumer.KafkaConsumer@7d6d8400
Consumer Group: anonymous.9108d02a-b1ee-4a7a-8707-7760581fa323
Partition Id: 1
Customer: Customer [id=1, name=Dwayne, city=NY]
Consumer: org.apache.kafka.clients.consumer.KafkaConsumer@7d6d8400
Consumer Group: anonymous.9108d02a-b1ee-4a7a-8707-7760581fa323
Partition Id: 2
Customer: Customer [id=5, name=Dwayne, city=NY]
Consumer: org.apache.kafka.clients.consumer.KafkaConsumer@7d6d8400
Consumer Group: anonymous.9108d02a-b1ee-4a7a-8707-7760581fa323
Partition Id: 0
Customer: Customer [id=3, name=Dwayne, city=NY]
Consumer Group: anonymous.9108d02a-b1ee-4a7a-8707-7760581fa323
Partition Id: 1
Customer: Customer [id=1, name=Dwayne, city=NY]

So, as we see, Customer with Id 1 ended up in the same partition every time, i.e., partition 1.

Consumer Group − A consumer group is the logical grouping of consumers reading the same topic for the same purpose. Data in a topic is partitioned between the consumers in a consumer group so that only one consumer from a given consumer group can read a partition of a topic.
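
The exclusive-ownership rule described above can be sketched in plain Java. This is a deliberate simplification of Kafka's real partition assignors; the round-robin scheme and consumer names below are purely illustrative:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class GroupAssignmentSketch {
   // Each partition is owned by exactly one consumer of the group: round-robin here.
   static Map<String, List<Integer>> assign(List<String> consumers, int partitionCount) {
      Map<String, List<Integer>> assignment = new LinkedHashMap<>();
      for (String c : consumers) {
         assignment.put(c, new ArrayList<>());
      }
      for (int p = 0; p < partitionCount; p++) {
         String owner = consumers.get(p % consumers.size());
         assignment.get(owner).add(p);
      }
      return assignment;
   }

   public static void main(String[] args) {
      // One instance owns all three partitions of the topic...
      System.out.println(assign(List.of("restaurant-1"), 3));
      // ...while two instances in the same group split them.
      System.out.println(assign(List.of("restaurant-1", "restaurant-2"), 3));
   }
}
```

Running the sketch shows that a lone consumer owns every partition, while adding a second consumer to the same group splits the partitions between them with no overlap.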

To define a consumer group, all we need to do is define a group in the bindings where we use the Kafka topic name. For example, let us define the consumer group name in our application file for our controller.

spring:
   application:
      name: restaurant-service
   cloud:
      function:
         definition: customerBinding
      stream:
         kafka:
            binder:
               brokers: localhost:9092
               replicationFactor: 1
         bindings:
            customerBinding-in-0:
               destination: customer
               group: restController
server:
   port: ${app_port}
eureka:
   client:
      serviceURL:
         defaultZone: http://localhost:8900/eureka

Let us recompile and start the Restaurant Service. Now, let us generate the event by hitting the POST API on the Customer Service −

Insert a customer with Id 1: curl -X POST http://localhost:8083/customer/1

Now, if we check the logs of our Restaurant Service, we will see the following −

Consumer: org.apache.kafka.clients.consumer.KafkaConsumer@7d6d8400
Consumer Group: restController
Partition Id: 1
Customer: Customer [id=1, name=Dwayne, city=NY]

So, as we see from the output, we have a consumer group called ‘restController’ created, whose consumers are responsible for reading the topic. In the above case, we just had a single instance of the service running, so all the partitions of the ‘customer’ topic were assigned to the same instance. But if we had multiple instances running, the partitions would be distributed among them.

Distributed Logging using ELK and Sleuth

Introduction

In a distributed environment or in a monolithic environment, application logs are very critical for debugging whenever something goes wrong. In this section, we will look at how to effectively log and improve traceability so that we can easily look at the logs.

Two major reasons why logging patterns become critical for logging −

  1. Inter-service calls − In a microservice architecture, we have async and sync calls between services. It is very critical to link these requests, as there can be more than one level of nesting for a single request.

  2. Intra-service calls − A single service gets multiple requests and the logs for them can easily get intermingled. That is why, having some ID associated with the request becomes important to filter all the logs for a request.

Sleuth is a well-known tool used for logging in applications, and ELK is used for simpler observation across the system.

Dependency Setting

Let us use the case of Restaurant that we have been using in every chapter. So, let us say we have our Customer service and the Restaurant service communicating via API, i.e., synchronous communication. And we want to have Sleuth for tracing the request and the ELK stack for centralized visualization.

To do that, first, set up the ELK stack. We will be starting the ELK stack using Docker containers. Here are the images that we can consider −

Once ELK setup has been performed, ensure that it is working as expected by hitting the following APIs −

  1. Elasticsearch − localhost:9200

  2. Kibana − localhost:5601

We will look at logstash configuration file at the end of this section.

Then, let us add the following dependency to our Customer Service and the Restaurant Service −

<dependency>
      <groupId>org.springframework.cloud</groupId>
      <artifactId>spring-cloud-starter-sleuth</artifactId>
</dependency>

Now that we have the dependency setup and ELK running, let us move to the core example.

Request Tracing inside Service

On a very basic level, following are the metadata that are added by Sleuth −

  1. Service name − Service currently processing the request.

  2. Trace Id − A metadata ID is added to the logs which is sent across services for processing an input request. This is useful for inter-service communication for grouping all the internal requests which went in processing one input request.

  3. Span Id − A metadata ID added to the logs which is the same across all log statements logged by a service for processing a request. It is useful for intra-service logs. Note that Span Id = Trace Id for the parent service.
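
The relationship between these IDs can be sketched outside of Sleuth. Everything below is illustrative; real Sleuth generates the IDs internally and propagates them via request headers:

```java
import java.util.Random;

public class TraceSketch {
   // A trace context: traceId is shared across services, spanId is fresh per hop.
   static class Context {
      final String traceId;
      final String spanId;
      Context(String traceId, String spanId) {
         this.traceId = traceId;
         this.spanId = spanId;
      }
   }

   // Sleuth-style 64-bit hex id (illustrative only).
   static String newId(Random rnd) {
      return String.format("%016x", rnd.nextLong());
   }

   // The entry service starts a trace: spanId == traceId for the parent service.
   static Context startTrace(Random rnd) {
      String id = newId(rnd);
      return new Context(id, id);
   }

   // A downstream call keeps the traceId but gets a fresh spanId.
   static Context childOf(Context parent, Random rnd) {
      return new Context(parent.traceId, newId(rnd));
   }

   public static void main(String[] args) {
      Random rnd = new Random();
      Context restaurant = startTrace(rnd);
      Context customer = childOf(restaurant, rnd);
      System.out.println("restaurant-service: [" + restaurant.traceId + "," + restaurant.spanId + "]");
      System.out.println("customer-service:   [" + customer.traceId + "," + customer.spanId + "]");
   }
}
```

Printing both contexts shows the same trace ID on both lines but a different span ID for the downstream service, mirroring the log excerpts later in this section.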

Let us see this in action. For that, let us update our Customer Service code to contain log lines. Here is the controller code that we would use.

package com.tutorialspoint;
import java.util.HashMap;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
@RestController
class RestaurantCustomerInstancesController {
   Logger logger = LoggerFactory.getLogger(RestaurantCustomerInstancesController.class);
   static HashMap<Long, Customer> mockCustomerData = new HashMap<>();
   static{
      mockCustomerData.put(1L, new Customer(1, "Jane", "DC"));
      mockCustomerData.put(2L, new Customer(2, "John", "SFO"));
      mockCustomerData.put(3L, new Customer(3, "Kate", "NY"));
   }
   @RequestMapping("/customer/{id}")
   public Customer getCustomerInfo(@PathVariable("id") Long id) {
      logger.info("Querying customer with id: " + id);
      Customer customer = mockCustomerData.get(id);
      if(customer != null) {
         logger.info("Found Customer: " + customer);
      }
      return customer;
   }
}

Now, let us execute the code as always: start the Eureka Server. Note that this is not a hard requirement and is present here for the sake of completeness.

Then, let us compile and start the updated Customer Service using the following command −

mvn clean install ; java -Dapp_port=8083 -jar .\target\spring-cloud-eurekaclient-1.0.jar

And we are set, let us now test our code pieces by hitting the API −

curl -X GET http://localhost:8083/customer/1

Here is the output that we will get for this API −

{
   "id": 1,
   "name": "Jane",
   "city": "DC"
}

And now let us check the logs for Customer Service −

2021-03-23 13:46:59.604 INFO [customerservice,
b63d4d0c733cc675,b63d4d0c733cc675] 11860 --- [nio-8083-exec-7]
.t.RestaurantCustomerInstancesController : Querying customer with id: 1
2021-03-23 13:46:59.605 INFO [customerservice,
b63d4d0c733cc675,b63d4d0c733cc675] 11860 --- [nio-8083-exec-7]
.t.RestaurantCustomerInstancesController : Found Customer: Customer [id=1,
name=Jane, city=DC]
…..

So, effectively, as we see, we have the name of the service, trace ID, and the span ID added to our log statements.

Request Tracing across Service

Let us see how we can do logging and tracing across service. So, for example, what we will do is to use the Restaurant Service which internally calls the Customer Service.

For that, let us update our Restaurant Service code to contain log lines. Here is the controller code that we would use.

package com.tutorialspoint;
import java.util.HashMap;
import java.util.List;
import java.util.stream.Collectors;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
@RestController
class RestaurantController {
   @Autowired
   CustomerService customerService;
   Logger logger = LoggerFactory.getLogger(RestaurantController.class);
   static HashMap<Long, Restaurant> mockRestaurantData = new HashMap<>();
   static{
      mockRestaurantData.put(1L, new Restaurant(1, "Pandas", "DC"));
      mockRestaurantData.put(2L, new Restaurant(2, "Indies", "SFO"));
      mockRestaurantData.put(3L, new Restaurant(3, "Little Italy", "DC"));
      mockRestaurantData.put(4L, new Restaurant(4, "Pizeeria", "NY"));
   }
   @RequestMapping("/restaurant/customer/{id}")
   public List<Restaurant> getRestaurantForCustomer(@PathVariable("id") Long id) {
      logger.info("Get Customer from Customer Service with customer id: " + id);
      Customer customer = customerService.getCustomerById(id);
      logger.info("Found following customer: " + customer);
      String customerCity = customer.getCity();
      return mockRestaurantData.entrySet().stream().filter(
      entry -> entry.getValue().getCity().equals(customerCity))
      .map(entry -> entry.getValue())
      .collect(Collectors.toList());
   }
}

Let us compile and start the updated Restaurant Service using the following command −

mvn clean install; java -Dapp_port=8082 -jar .\target\spring-cloud-feign-client-1.0.jar

Ensure that we have the Eureka server and the Customer service running. And we are set, let us now test our code pieces by hitting the API −

curl -X GET http://localhost:8082/restaurant/customer/2

Here is the output that we will get for this API −

[
   {
      "id": 2,
      "name": "Indies",
      "city": "SFO"
   }
]

And now, let us check the logs for Restaurant Service −

2021-03-23 14:44:29.381 INFO [restaurantservice,
6e0c5b2a4fc533f8,6e0c5b2a4fc533f8] 19600 --- [nio-8082-exec-6]
com.tutorialspoint.RestaurantController : Get Customer from Customer Service
with customer id: 2
2021-03-23 14:44:29.400 INFO [restaurantservice,
6e0c5b2a4fc533f8,6e0c5b2a4fc533f8] 19600 --- [nio-8082-exec-6]
com.tutorialspoint.RestaurantController : Found following customer: Customer
[id=2, name=John, city=SFO]

Then, let us check the logs for Customer Service −

2021-03-23 14:44:29.392 INFO [customerservice,
6e0c5b2a4fc533f8,f2806826ac76d816] 11860 --- [io-8083-exec-10]
.t.RestaurantCustomerInstancesController : Querying customer with id: 2
2021-03-23 14:44:29.392 INFO [customerservice,
6e0c5b2a4fc533f8,f2806826ac76d816] 11860 --- [io-8083-exec-10]
.t.RestaurantCustomerInstancesController : Found Customer: Customer [id=2,
name=John, city=SFO]…..

So, effectively, as we see, we have the name of the service, trace ID, and the span ID added to our log statements. Plus, we see the trace Id, i.e., 6e0c5b2a4fc533f8 being repeated in Customer Service and the Restaurant Service.

Centralized Logging with ELK

What we have seen till now is a way to improve our logging and tracing capability via Sleuth. However, in microservice architecture, we have multiple services running and multiple instances of each service running. It is not practical to look at the logs of each instance to identify the request flow. And that is where ELK helps us.

Let us use the same case of inter-service communication as we did for Sleuth. Let us update our Restaurant and Customer to add logback appenders for the ELK stack.

Before moving ahead, please ensure that the ELK stack has been set up and Kibana is accessible at localhost:5601. Also, configure Logstash with the following setup −

input {
   tcp {
      port => 8089
      codec => json
   }
}
output {
   elasticsearch {
      index => "restaurant"
      hosts => ["http://localhost:9200"]
   }
}

Once this is done, there are two steps we need to perform to use Logstash in our Spring apps; we will perform them for both our services. First, add a dependency so that logback can use the appender for Logstash.

<dependency>
   <groupId>net.logstash.logback</groupId>
   <artifactId>logstash-logback-encoder</artifactId>
   <version>6.6</version>
</dependency>

And secondly, add an appender to the logback configuration so that logback can use it to send the data to Logstash −

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
   <appender name="logStash" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
      <destination>10.24.220.239:8089</destination>
      <encoder class="net.logstash.logback.encoder.LogstashEncoder" />
   </appender>
   <appender name="console" class="ch.qos.logback.core.ConsoleAppender">
      <encoder>
         <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
      </encoder>
   </appender>
   <root level="INFO">
      <appender-ref ref="logStash" />
      <appender-ref ref="console" />
   </root>
</configuration>

The above appender would log to the console as well as send the logs to Logstash. Now that this is done, we are all set to test it out.

Now, let us execute the above code as always: start the Eureka Server.

Then, let us compile and start the updated Customer Service using the following command −

mvn clean install ; java -Dapp_port=8083 -jar .\target\spring-cloud-eurekaclient-1.0.jar

Then, let us compile and start the updated Restaurant Service using the following command −

mvn clean install; java -Dapp_port=8082 -jar .\target\spring-cloud-feign-client-1.0.jar

And we are set, let us now test our code pieces by hitting the API −

curl -X GET http://localhost:8082/restaurant/customer/2

Here is the output that we will get for this API −

[
   {
      "id": 2,
      "name": "Indies",
      "city": "SFO"
   }
]

But more importantly, the log statements would also be available on Kibana.

log statements

So, as we see, we can filter for a traceId and see all the log statements across services which were logged to fulfill the request.