Using Stork with Kubernetes
This guide explains how to use Stork with Kubernetes for service discovery and load balancing.
If you are new to Stork, please read the Stork Getting Started Guide.

include::{includes}/extension-status.adoc[]
Prerequisites
include::{includes}/prerequisites.adoc[]
* Access to a Kubernetes cluster (Minikube is a viable option)
Architecture
In this guide, we will work with a few components deployed in a Kubernetes cluster:
- A simple blue service.
- A simple red service.
- The color-service, the Kubernetes service which is the entry point to the Blue and Red instances.
- A client service using a REST client to call the blue or the red service. Service discovery and selection are delegated to Stork.

For the sake of simplicity, everything will be deployed in the same namespace of the Kubernetes cluster.
Solution
We recommend that you follow the instructions in the next sections and create the applications step by step. However, you can go right to the completed example.
Clone the Git repository: git clone {quickstarts-clone-url}, or download an {quickstarts-archive-url}[archive].

The solution is located in the stork-kubernetes-quickstart directory.
Discovery and selection
Before going further, we need to discuss discovery vs. selection.
- Service discovery is the process of locating service instances. It produces a list of service instances that is potentially empty (if no service matches the request) or contains multiple service instances.
- Service selection, also called load-balancing, chooses the best instance from the list returned by the discovery process. The result is a single service instance or an exception when no suitable instance can be found.
Stork handles both discovery and selection. However, it does not handle the communication with the service but only provides a service instance. The various integrations in Quarkus extract the location of the service from that service instance.
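The two-step process can be sketched in plain Java. This is an illustrative model only, not the actual Stork API: the `ServiceInstance` record and the in-memory `discover` method are hypothetical stand-ins for what Stork does against the Kubernetes API.

```java
import java.util.List;
import java.util.Random;

public class DiscoveryVsSelection {

    // A minimal stand-in for a discovered service instance (host and port).
    public record ServiceInstance(String host, int port) { }

    // Discovery: produce the (possibly empty) list of instances for a service name.
    public static List<ServiceInstance> discover(String serviceName) {
        // With the Kubernetes service discovery, Stork would list the
        // endpoints backing the Kubernetes Service here.
        return List.of(new ServiceInstance("10.0.0.1", 8080),
                       new ServiceInstance("10.0.0.2", 8080));
    }

    // Selection (load balancing): pick one instance, or fail if none is available.
    public static ServiceInstance select(List<ServiceInstance> instances) {
        if (instances.isEmpty()) {
            throw new IllegalStateException("No service instance available");
        }
        // The random strategy used later in this guide picks any instance.
        return instances.get(new Random().nextInt(instances.size()));
    }

    public static void main(String[] args) {
        ServiceInstance chosen = select(discover("color-service"));
        // The caller (e.g. the REST client) then talks to the chosen instance.
        System.out.println("Calling http://" + chosen.host() + ":" + chosen.port());
    }
}
```

Note how selection can fail while discovery cannot: an empty discovery result only becomes an error when an instance is actually needed.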
Bootstrapping the project
Create a Quarkus project importing the quarkus-rest-client and quarkus-rest extensions using your favorite approach:
include::{includes}/devtools/create-app.adoc[]
In the generated project, also add the following dependencies:
<dependency>
    <groupId>io.smallrye.stork</groupId>
    <artifactId>stork-service-discovery-kubernetes</artifactId>
</dependency>
<dependency>
    <groupId>io.smallrye.stork</groupId>
    <artifactId>stork-load-balancer-random</artifactId>
</dependency>
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-kubernetes</artifactId>
</dependency>
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-kubernetes-client</artifactId>
</dependency>
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-container-image-jib</artifactId>
</dependency>
implementation("io.smallrye.stork:stork-service-discovery-kubernetes")
implementation("io.smallrye.stork:stork-load-balancer-random")
implementation("io.quarkus:quarkus-kubernetes")
implementation("io.quarkus:quarkus-kubernetes-client")
implementation("io.quarkus:quarkus-container-image-jib")
stork-service-discovery-kubernetes provides an implementation of service discovery for Kubernetes. stork-load-balancer-random provides an implementation of a random load balancer. quarkus-kubernetes enables the generation of Kubernetes manifests each time we perform a build. The quarkus-kubernetes-client extension enables the use of the Fabric8 Kubernetes Client in native mode. And quarkus-container-image-jib enables the build of a container image using Jib.
The Blue and Red services
Let’s start with the very beginning: the service we will discover, select and call.
The Red and Blue are two simple REST services serving an endpoint responding Hello from Red! and Hello from Blue! respectively. The code of both applications has been developed following the Getting Started Guide.
As the goal of this guide is to show how to use Stork Kubernetes service discovery, we won’t provide the specific steps for the Red and Blue services. Their container images are already built and available in a public registry:
Deploy the Blue and Red services in Kubernetes
Now that we have our service container images available in a public registry, we need to deploy them into the Kubernetes cluster.
The following file contains all the Kubernetes resources needed to deploy the Blue and Red services in the cluster and make them accessible:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: development
  name: endpoints-reader
rules:
  - apiGroups: [""] # "" indicates the core API group
    resources: ["endpoints", "pods"]
    verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: stork-rb
  namespace: development
subjects:
  - kind: ServiceAccount
    # Reference to upper's `metadata.name`
    name: default
    # Reference to upper's `metadata.namespace`
    namespace: development
roleRef:
  kind: Role
  name: endpoints-reader
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    app.quarkus.io/commit-id: f747f359406bedfb1a39c57392a5b5a9eaefec56
    app.quarkus.io/build-timestamp: 2022-03-31 - 10:36:56 +0000
  labels:
    app.kubernetes.io/name: color-service
    app.kubernetes.io/version: "1.0"
  name: color-service (1)
spec:
  ports:
    - name: http
      port: 80
      targetPort: 8080
  selector:
    app.kubernetes.io/version: "1.0"
    type: color-service
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    app.quarkus.io/commit-id: f747f359406bedfb1a39c57392a5b5a9eaefec56
    app.quarkus.io/build-timestamp: 2022-03-31 - 10:36:56 +0000
  labels:
    color: blue
    type: color-service
    app.kubernetes.io/name: blue-service
    app.kubernetes.io/version: "1.0"
  name: blue-service (2)
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: blue-service
      app.kubernetes.io/version: "1.0"
  template:
    metadata:
      annotations:
        app.quarkus.io/commit-id: f747f359406bedfb1a39c57392a5b5a9eaefec56
        app.quarkus.io/build-timestamp: 2022-03-31 - 10:36:56 +0000
      labels:
        color: blue
        type: color-service
        app.kubernetes.io/name: blue-service
        app.kubernetes.io/version: "1.0"
    spec:
      containers:
        - env:
            - name: KUBERNETES_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          image: quay.io/quarkus/blue-service:1.0
          imagePullPolicy: Always
          name: blue-service
          ports:
            - containerPort: 8080
              name: http
              protocol: TCP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    app.quarkus.io/commit-id: 27be03414510f776ca70d70d859b33e134570443
    app.quarkus.io/build-timestamp: 2022-03-31 - 10:38:54 +0000
  labels:
    color: red
    type: color-service
    app.kubernetes.io/version: "1.0"
    app.kubernetes.io/name: red-service
  name: red-service (2)
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/version: "1.0"
      app.kubernetes.io/name: red-service
  template:
    metadata:
      annotations:
        app.quarkus.io/commit-id: 27be03414510f776ca70d70d859b33e134570443
        app.quarkus.io/build-timestamp: 2022-03-31 - 10:38:54 +0000
      labels:
        color: red
        type: color-service
        app.kubernetes.io/version: "1.0"
        app.kubernetes.io/name: red-service
    spec:
      containers:
        - env:
            - name: KUBERNETES_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          image: quay.io/quarkus/red-service:1.0
          imagePullPolicy: Always
          name: red-service
          ports:
            - containerPort: 8080
              name: http
              protocol: TCP
---
apiVersion: networking.k8s.io/v1
kind: Ingress (3)
metadata:
  annotations:
    app.quarkus.io/commit-id: f747f359406bedfb1a39c57392a5b5a9eaefec56
    app.quarkus.io/build-timestamp: 2022-03-31 - 10:46:19 +0000
  labels:
    app.kubernetes.io/name: color-service
    app.kubernetes.io/version: "1.0"
    color: blue
    type: color-service
  name: color-service
spec:
  rules:
    - host: color-service.127.0.0.1.nip.io
      http:
        paths:
          - backend:
              service:
                name: color-service
                port:
                  name: http
            path: /
            pathType: Prefix
There are a few interesting parts in this listing:
1. The Kubernetes Service resource, color-service, that Stork will discover.
2. The Red and Blue service instances behind the color-service Kubernetes service.
3. A Kubernetes Ingress resource making the color-service accessible from outside the cluster at the color-service.127.0.0.1.nip.io URL. Note that the Ingress is not needed for Stork; however, it helps to check that the architecture is in place.
Create a file named kubernetes-setup.yml with the content above at the root of the project and run the following commands to deploy all the resources in the Kubernetes cluster. Don’t forget to create a dedicated namespace:
kubectl create namespace development
kubectl apply -f kubernetes-setup.yml -n=development
If everything went well, the Color service is accessible on [role="bare"]http://color-service.127.0.0.1.nip.io. You should randomly receive Hello from Red! and Hello from Blue! responses.
Stork is not limited to Kubernetes and integrates with other service discovery mechanisms.
The REST Client interface and the front end API
So far, we didn’t use Stork; we just deployed the services we will be discovering, selecting, and calling.
We will call the services using the REST Client. Create the src/main/java/org/acme/MyService.java file with the following content:
package org.acme;

import org.eclipse.microprofile.rest.client.inject.RegisterRestClient;

import jakarta.ws.rs.GET;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;

/**
 * The REST Client interface.
 *
 * Notice the `baseUri`. It uses `stork://` as URL scheme indicating that the called service uses Stork to locate and
 * select the service instance. The `my-service` part is the service name. This is used to configure Stork discovery
 * and selection in the `application.properties` file.
 */
@RegisterRestClient(baseUri = "stork://my-service")
public interface MyService {

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    String get();
}
It’s a straightforward REST client interface containing a single method. However, note the baseUri attribute:

- the stork:// scheme instructs the REST client to delegate the discovery and selection of the service instances to Stork,
- the my-service part of the URI is the service name we will be using in the application configuration.
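Since the base URI is an ordinary URI, the two pieces the REST client hands over to Stork can be seen with plain `java.net.URI`. This is purely illustrative; the REST client performs this parsing internally:

```java
import java.net.URI;

public class StorkUriDemo {
    public static void main(String[] args) {
        URI baseUri = URI.create("stork://my-service");
        // The scheme tells the REST client to delegate to Stork.
        System.out.println(baseUri.getScheme()); // stork
        // The authority is the Stork service name used in the configuration.
        System.out.println(baseUri.getHost());   // my-service
    }
}
```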
It does not change how the REST client is used. Create the src/main/java/org/acme/FrontendApi.java file with the following content:
package org.acme;

import org.eclipse.microprofile.rest.client.inject.RestClient;

import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;

/**
 * A frontend API using our REST Client (which uses Stork to locate and select the service instance on each call).
 */
@Path("/api")
public class FrontendApi {

    @RestClient
    MyService service;

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String invoke() {
        return service.get();
    }
}
It injects and uses the REST client as usual.
Stork configuration
Now we need to configure Stork to use Kubernetes to discover the red and blue instances of the service.
In src/main/resources/application.properties, add:
quarkus.stork.my-service.service-discovery.type=kubernetes
quarkus.stork.my-service.service-discovery.k8s-namespace=development
quarkus.stork.my-service.service-discovery.application=color-service
quarkus.stork.my-service.load-balancer.type=random
quarkus.stork.my-service.service-discovery.type indicates which type of service discovery we will be using to locate the my-service service. In our case, it’s kubernetes. If your access to the Kubernetes cluster is configured via a Kube config file, you don’t need to configure the access to it. Otherwise, set the proper Kubernetes URL using the quarkus.stork.my-service.service-discovery.k8s-host property. quarkus.stork.my-service.service-discovery.application contains the name of the Kubernetes service Stork is going to ask for. In our case, this is the color-service corresponding to the Kubernetes service backed by the Red and Blue instances. Finally, quarkus.stork.my-service.load-balancer.type configures the service selection. In our case, we use a random load balancer.
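If you prefer YAML configuration, the same settings could be written in src/main/resources/application.yml. This sketch assumes the quarkus-config-yaml extension is on the classpath:

```yaml
quarkus:
  stork:
    my-service:
      service-discovery:
        type: kubernetes
        k8s-namespace: development
        application: color-service
      load-balancer:
        type: random
```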
Deploy the REST Client interface and the front end API in the Kubernetes cluster
The system is almost complete. We only need to deploy the REST Client interface and the client service to the cluster. In src/main/resources/application.properties, add:
quarkus.container-image.registry=<public registry>
quarkus.kubernetes-client.trust-certs=true
quarkus.kubernetes.ingress.expose=true
quarkus.kubernetes.ingress.host=my-service.127.0.0.1.nip.io
quarkus.container-image.registry contains the container registry to use. quarkus.kubernetes.ingress.expose indicates that the service will be accessible from outside the cluster. quarkus.kubernetes.ingress.host contains the URL to access the service. We are using the nip.io wildcard for IP address mapping.
For a more customized configuration, you can check the Deploying to Kubernetes guide.
Build and push the container image
Thanks to the extensions we are using, we can build a container image with Jib and enable the generation of Kubernetes manifests while building the application. For example, the following command will generate a Kubernetes manifest in the target/kubernetes/ directory and also build and push a container image for the project:
./mvnw package -Dquarkus.container-image.build=true -Dquarkus.container-image.push=true
Deploy client service to the Kubernetes cluster
The generated manifest can be applied to the cluster from the project root using kubectl:
kubectl apply -f target/kubernetes/kubernetes.yml -n=development
Please note that if you use Elliptic Curve keys with Stork and are getting exceptions, an additional security provider needs to be registered internally. You can have this provider registered as described in the BouncyCastle or BouncyCastle FIPS sections.
We’re done! So, let’s see if it works.
Open a browser and navigate to [role="bare"]http://my-service.127.0.0.1.nip.io/api.
Or if you prefer, in another terminal, run:
> curl http://my-service.127.0.0.1.nip.io/api
...
> curl http://my-service.127.0.0.1.nip.io/api
...
> curl http://my-service.127.0.0.1.nip.io/api
...
The responses should alternate randomly between Hello from Red! and Hello from Blue!.
You can compile this application into a native executable:
include::{includes}/devtools/build-native.adoc[]
Then, you need to build a container image based on the native executable. For this, use the corresponding Dockerfile:
> docker build -f src/main/docker/Dockerfile.native -t quarkus/stork-kubernetes-quickstart .
After publishing the new image to the container registry, you can redeploy the Kubernetes manifest to the cluster.