Kubernetes extension

Quarkus offers the ability to automatically generate Kubernetes resources based on sane defaults and user-supplied configuration using dekorate. It currently supports generating resources for vanilla Kubernetes, OpenShift and Knative. Furthermore, Quarkus can deploy the application to a target Kubernetes cluster by applying the generated manifests to the target cluster’s API server. Finally, when one of the container image extensions is present (see the container image guide for more details), Quarkus can create a container image and push it to a registry before deploying the application to the target platform.

Prerequisites

include::{includes}/prerequisites.adoc[]

* Access to a Kubernetes cluster (Minikube is a viable option)

Kubernetes

Let’s create a new project that contains both the Kubernetes and Jib extensions:

include::{includes}/devtools/create-app.adoc[]

This added the following dependencies to the build file:

pom.xml
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-rest</artifactId>
</dependency>
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-kubernetes</artifactId>
</dependency>
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-container-image-jib</artifactId>
</dependency>
build.gradle
implementation("io.quarkus:quarkus-rest")
implementation("io.quarkus:quarkus-kubernetes")
implementation("io.quarkus:quarkus-container-image-jib")

By adding these dependencies, we enable the generation of Kubernetes manifests each time we perform a build while also enabling the build of a container image using Jib. For example, following the execution of:

include::{includes}/devtools/build.adoc[]

you will notice amongst the other files that are created, two files named kubernetes.json and kubernetes.yml in the target/kubernetes/ directory.

If you look at either file you will see that it contains both a Kubernetes Deployment and a Service.

The full source of the kubernetes.json file looks something like this:

[
  {
    "apiVersion" : "apps/v1",
    "kind" : "Deployment",
    "metadata" : {
      "annotations" : {
        "app.quarkus.io/vcs-uri" : "<some url>",
        "app.quarkus.io/commit-id" : "<some git SHA>"
      },
      "labels" : {
        "app.kubernetes.io/name" : "test-quarkus-app",
        "app.kubernetes.io/version" : "1.0.0-SNAPSHOT"
      },
      "name" : "test-quarkus-app"
    },
    "spec" : {
      "replicas" : 1,
      "selector" : {
        "matchLabels" : {
          "app.kubernetes.io/name" : "test-quarkus-app",
          "app.kubernetes.io/version" : "1.0.0-SNAPSHOT"
        }
      },
      "template" : {
        "metadata" : {
          "labels" : {
            "app.kubernetes.io/name" : "test-quarkus-app",
            "app.kubernetes.io/version" : "1.0.0-SNAPSHOT"
          }
        },
        "spec" : {
          "containers" : [ {
            "env" : [ {
              "name" : "KUBERNETES_NAMESPACE",
              "valueFrom" : {
                "fieldRef" : {
                  "fieldPath" : "metadata.namespace"
                }
              }
            } ],
            "image" : "yourDockerUsername/test-quarkus-app:1.0.0-SNAPSHOT",
            "imagePullPolicy" : "Always",
            "name" : "test-quarkus-app"
          } ]
        }
      }
    }
  },
  {
    "apiVersion" : "v1",
    "kind" : "Service",
    "metadata" : {
      "annotations" : {
        "app.quarkus.io/vcs-uri" : "<some url>",
        "app.quarkus.io/commit-id" : "<some git SHA>"
      },
      "labels" : {
        "app.kubernetes.io/name" : "test-quarkus-app",
        "app.kubernetes.io/version" : "1.0.0-SNAPSHOT"
      },
      "name" : "test-quarkus-app"
    },
    "spec" : {
      "ports" : [ {
        "name" : "http",
        "port" : 8080,
        "targetPort" : 8080
      } ],
      "selector" : {
        "app.kubernetes.io/name" : "test-quarkus-app",
        "app.kubernetes.io/version" : "1.0.0-SNAPSHOT"
      },
      "type" : "ClusterIP"
    }
  }
]

The generated manifest can be applied to the cluster from the project root using kubectl:

kubectl apply -f target/kubernetes/kubernetes.json

An important thing to note about the Deployment (or StatefulSet) is that it uses yourDockerUsername/test-quarkus-app:1.0.0-SNAPSHOT as the container image of the Pod. The name of the image is controlled by the Jib extension and can be customized using the usual application.properties.

For example, with a configuration like:

quarkus.container-image.group=quarkus # optional, defaults to the system username
quarkus.container-image.name=demo-app # optional, defaults to the application name
quarkus.container-image.tag=1.0       # optional, defaults to the application version

The image that will be used in the generated manifests will be quarkus/demo-app:1.0.
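As a quick sanity check, the composition of the final image reference can be sketched in plain shell (illustrative only; the values are taken from the example properties above):

```shell
# How quarkus.container-image.group/name/tag compose into the image
# reference used in the generated manifests.
group=quarkus
name=demo-app
tag=1.0
echo "${group}/${name}:${tag}"   # prints quarkus/demo-app:1.0
```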

Generating idempotent resources

When generating the Kubernetes manifests, Quarkus automatically adds some labels and annotations to give extra information about the generation date or versions. For example:

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    app.quarkus.io/commit-id: 0f8b87788bc446a9347a7961bea8a60889fe1494
    app.quarkus.io/build-timestamp: 2023-02-10 - 13:07:51 +0000
  labels:
    app.kubernetes.io/managed-by: quarkus
    app.kubernetes.io/version: 0.0.1-SNAPSHOT
    app.kubernetes.io/name: example
  name: example
spec:
  ...

The app.quarkus.io/commit-id and app.quarkus.io/build-timestamp annotations and the app.kubernetes.io/version label might change every time we re-build the Kubernetes manifests, which can be problematic when we want to deploy these resources using a Git-Ops tool (because these tools will detect differences and hence perform a re-deployment).

To make the generated resources Git-Ops friendly and only produce idempotent resources (resources that won’t change every time we build the sources), we need to add the following property:

quarkus.kubernetes.idempotent=true

Moreover, by default the generated resources are created in the target/kubernetes directory. To change it, use:

quarkus.kubernetes.output-directory=target/kubernetes-with-idempotent

Note that the property quarkus.kubernetes.output-directory is resolved relative to the current project location.

Changing the generated deployment resource

Besides generating a Deployment resource, you can also choose to generate either a StatefulSet, or a Job, or a CronJob resource instead via application.properties:

quarkus.kubernetes.deployment-kind=StatefulSet

Generating Job resources

If you want to generate a Job resource, you need to add the following property to application.properties:

quarkus.kubernetes.deployment-kind=Job

If you are using the Picocli extension, by default a Job resource will be generated.

You can provide the arguments that will be used by the Kubernetes Job via the property quarkus.kubernetes.arguments. For example, by adding the property quarkus.kubernetes.arguments=A,B.

Finally, the Kubernetes Job will be launched every time it is installed in Kubernetes. You can learn more about how to run Kubernetes Jobs in the Kubernetes documentation.

You can configure the rest of the Kubernetes Job settings using the properties under quarkus.kubernetes.job.

Generating CronJob resources

If you want to generate a CronJob resource, you need to add the following properties to application.properties:

quarkus.kubernetes.deployment-kind=CronJob
# Cron expression to run the job every hour
quarkus.kubernetes.cron-job.schedule=0 * * * *

CronJob resources require a Cron expression, set via the property quarkus.kubernetes.cron-job.schedule, to specify when to launch the job. If not provided, the build will fail.
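For readers less familiar with Cron syntax, the example schedule above can be decomposed with a plain shell sketch (nothing Quarkus-specific here):

```shell
# The five space-separated fields of "0 * * * *" are:
# minute, hour, day-of-month, month, day-of-week.
schedule='0 * * * *'
set -f              # disable globbing so the literal '*' fields are preserved
set -- $schedule    # split the expression into its fields
echo "minute=$1 hour=$2 day-of-month=$3 month=$4 day-of-week=$5"
# prints: minute=0 hour=* day-of-month=* month=* day-of-week=*
```

So this schedule fires at minute 0 of every hour.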

You can configure the rest of the Kubernetes CronJob settings using the properties under quarkus.kubernetes.cron-job.

Namespace

By default, Quarkus omits the namespace in the generated manifests, rather than enforce the default namespace. That means that you can apply the manifest to your chosen namespace when using kubectl, which in the example below is test:

kubectl apply -f target/kubernetes/kubernetes.json -n=test

To specify the namespace in your manifests, customize it with the following property in your application.properties:

quarkus.kubernetes.namespace=mynamespace

Defining a Docker registry

The Docker registry can be specified with the following property:

quarkus.container-image.registry=my.docker-registry.net

By adding this property along with the rest of the container image properties of the previous section, the generated manifests will use the image my.docker-registry.net/quarkus/demo-app:1.0. The image is not the only thing that can be customized in the generated manifests, as will become evident in the following sections.

Automatic generation of pull secrets

When Docker registries are used, users often provide credentials, so that the image can be built and pushed to the specified registry during the build.

quarkus.container-image.username=myusername
quarkus.container-image.password=mypassword

Kubernetes will also need these credentials when it pulls the image from the registry. This is where image pull secrets come in. An image pull secret is a special kind of secret that contains the required credentials. Quarkus can automatically generate and configure one when the following property is set:

quarkus.kubernetes.generate-image-pull-secret=true

More specifically, a Secret like the one below is generated:

apiVersion: v1
kind: Secret
metadata:
  name: test-quarkus-app-pull-secret
data:
  ".dockerconfigjson": ewogCSJhdXRocyI6IHsKCQkibXkucmVnaXN0eS5vcmciOiB7CiAJCQkiYXV0aCI6ImJYbDFjMlZ5Ym1GdFpUcHRlWEJoYzNOM2IzSmsiCgkJfQoJfQp9
type: kubernetes.io/dockerconfigjson
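The .dockerconfigjson payload is nothing exotic: it is base64-encoded JSON whose nested auth field is itself base64 of user:password. You can verify this with plain shell (the values below are the ones from the example Secret above):

```shell
# Decode the Secret payload to reveal the Docker config JSON...
payload='ewogCSJhdXRocyI6IHsKCQkibXkucmVnaXN0eS5vcmciOiB7CiAJCQkiYXV0aCI6ImJYbDFjMlZ5Ym1GdFpUcHRlWEJoYzNOM2IzSmsiCgkJfQoJfQp9'
echo "$payload" | base64 -d
echo
# ...and decode the nested "auth" value back to the configured credentials:
echo 'bXl1c2VybmFtZTpteXBhc3N3b3Jk' | base64 -d   # prints myusername:mypassword
```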

The generated test-quarkus-app-pull-secret is also added to the imagePullSecrets list.

Labels and Annotations

Labels

The generated manifests use the Kubernetes recommended labels. These labels can be customized using quarkus.kubernetes.name, quarkus.kubernetes.version and quarkus.kubernetes.part-of. For example by adding the following configuration to your application.properties:

quarkus.kubernetes.part-of=todo-app
quarkus.kubernetes.name=todo-rest
quarkus.kubernetes.version=1.0-rc.1

As is described in detail in the openshift section, customizing OpenShift (or Knative) properties is done in the same way, but replacing kubernetes with openshift (or knative). The previous example for OpenShift would look like this:

quarkus.openshift.part-of=todo-app
quarkus.openshift.name=todo-rest
quarkus.openshift.version=1.0-rc.1

The labels in generated resources will look like:

  "labels" : {
    "app.kubernetes.io/part-of" : "todo-app",
    "app.kubernetes.io/name" : "todo-rest",
    "app.kubernetes.io/version" : "1.0-rc.1"
  }

You can also remove the app.kubernetes.io/version label by applying the following configuration:

quarkus.kubernetes.add-version-to-label-selectors=false

Custom Labels

To add additional custom labels, for example foo=bar, just apply the following configuration:

quarkus.kubernetes.labels.foo=bar

When using the quarkus-container-image-jib extension to build a container image, then any label added via the aforementioned property will also be added to the generated container image.

Annotations

Out of the box, the generated resources will be annotated with version control related information that can be used either by tooling, or by the user for troubleshooting purposes.

  "annotations": {
    "app.quarkus.io/vcs-uri" : "<some url>",
    "app.quarkus.io/commit-id" : "<some git SHA>",
   }

Custom Annotations

Custom annotations can be added in a way similar to labels. For example, to add the annotations foo=bar and app.quarkus/id=42, just apply the following configuration:

quarkus.kubernetes.annotations.foo=bar
quarkus.kubernetes.annotations."app.quarkus/id"=42

Environment variables

Kubernetes provides multiple ways of defining environment variables:

  • key/value pairs

  • import all values from a Secret or ConfigMap

  • interpolate a single value identified by a given field in a Secret or ConfigMap

  • interpolate a value from a field within the same resource

Environment variables from key/value pairs

To add a key/value pair as an environment variable in the generated resources:

quarkus.kubernetes.env.vars.my-env-var=foobar

The configuration above will add MY_ENV_VAR=foobar as an environment variable. Please note that the key my-env-var is converted to uppercase and dashes are replaced by underscores, resulting in MY_ENV_VAR.
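This normalization rule (dashes to underscores, then uppercase) can be reproduced with standard tools, as a sketch:

```shell
# Env var name conversion: lowercase letters are uppercased and
# dashes become underscores.
key=my-env-var
echo "$key" | tr 'a-z-' 'A-Z_'   # prints MY_ENV_VAR
```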

Environment variables from Secret

To add all key/value pairs of a Secret as environment variables, just apply the following configuration, separating each Secret to be used as a source with a comma (,):

quarkus.kubernetes.env.secrets=my-secret,my-other-secret

which would generate the following in the container definition:

envFrom:
  - secretRef:
      name: my-secret
      optional: false
  - secretRef:
      name: my-other-secret
      optional: false

The following extracts a value identified by the keyName field from the my-secret Secret into a foo environment variable:

quarkus.kubernetes.env.mapping.foo.from-secret=my-secret
quarkus.kubernetes.env.mapping.foo.with-key=keyName

This would generate the following in the env section of your container:

- env:
  - name: FOO
    valueFrom:
      secretKeyRef:
        key: keyName
        name: my-secret
        optional: false

It is also possible to add a prefix when generating environment variables from a Secret. The following configuration creates environment variables from the Secret foo, adding the prefix BAR:

quarkus.kubernetes.env.secrets=foo
quarkus.kubernetes.env.using-prefix."BAR".for-secret=foo

This would generate the following in the env section of your container:

- env:
    envFrom:
    - secretRef:
        name: foo
      prefix: BAR

Environment variables from ConfigMap

To add all key/value pairs from a ConfigMap as environment variables, just apply the following configuration, separating each ConfigMap to be used as a source with a comma (,):

quarkus.kubernetes.env.configmaps=my-config-map,another-config-map

which would generate the following in the container definition:

envFrom:
  - configMapRef:
      name: my-config-map
      optional: false
  - configMapRef:
      name: another-config-map
      optional: false

The following extracts a value identified by the keyName field from the my-config-map ConfigMap into a foo environment variable:

quarkus.kubernetes.env.mapping.foo.from-configmap=my-configmap
quarkus.kubernetes.env.mapping.foo.with-key=keyName

This would generate the following in the env section of your container:

- env:
  - name: FOO
    valueFrom:
      configMapKeyRef:
        key: keyName
        name: my-configmap
        optional: false

It is also possible to add a prefix when generating environment variables from a ConfigMap. The following configuration creates environment variables from the ConfigMap foo, adding the prefix BAR:

quarkus.kubernetes.env.configmaps=foo
quarkus.kubernetes.prefixes."BAR".for-configmap=foo

This would generate the following in the env section of your container:

- env:
    envFrom:
    - configMapRef:
        name: foo
      prefix: BAR

Environment variables from fields

It’s also possible to use the value from another field to add a new environment variable by specifying the path of the field to be used as a source, as follows:

quarkus.kubernetes.env.fields.foo=metadata.name

As is described in detail in the openshift section, customizing OpenShift properties is done in the same way, but replacing kubernetes with openshift. The previous example for OpenShift would look like this:

quarkus.openshift.env.fields.foo=metadata.name

Validation

A conflict between two definitions, e.g. mistakenly assigning both a value and specifying that a variable is derived from a field, will result in an error being thrown at build time so that you get the opportunity to fix the issue before you deploy your application to your cluster where it might be more difficult to diagnose the source of the issue.

Similarly, two redundant definitions, e.g. defining an injection from the same secret twice, will not cause an issue but will report a warning to let you know that you might not have intended to duplicate that definition.

Backwards compatibility

Previous versions of the Kubernetes extension supported a different syntax to add environment variables. The older syntax is still supported but is deprecated, and it’s advised that you migrate to the new syntax.

Table 1. Old vs. new syntax

Plain variable
  Old: quarkus.kubernetes.env-vars.my-env-var.value=foobar
  New: quarkus.kubernetes.env.vars.my-env-var=foobar

From field
  Old: quarkus.kubernetes.env-vars.my-env-var.field=foobar
  New: quarkus.kubernetes.env.fields.my-env-var=foobar

All from ConfigMap
  Old: quarkus.kubernetes.env-vars.xxx.configmap=foobar
  New: quarkus.kubernetes.env.configmaps=foobar

All from Secret
  Old: quarkus.kubernetes.env-vars.xxx.secret=foobar
  New: quarkus.kubernetes.env.secrets=foobar

From one Secret field
  Old: quarkus.kubernetes.env-vars.foo.secret=foobar
       quarkus.kubernetes.env-vars.foo.value=field
  New: quarkus.kubernetes.env.mapping.foo.from-secret=foobar
       quarkus.kubernetes.env.mapping.foo.with-key=field

From one ConfigMap field
  Old: quarkus.kubernetes.env-vars.foo.configmap=foobar
       quarkus.kubernetes.env-vars.foo.value=field
  New: quarkus.kubernetes.env.mapping.foo.from-configmap=foobar
       quarkus.kubernetes.env.mapping.foo.with-key=field

If you redefine the same variable using the new syntax while keeping the old syntax, ONLY the new version will be kept and a warning will be issued to alert you of the problem. For example, if you define both quarkus.kubernetes.env-vars.my-env-var.value=foobar and quarkus.kubernetes.env.vars.my-env-var=newValue, the extension will only generate an environment variable MY_ENV_VAR=newValue and issue a warning.

Mounting volumes

The Kubernetes extension allows the user to configure both volumes and mounts for the application. Any volume can be mounted with a simple configuration:

quarkus.kubernetes.mounts.my-volume.path=/where/to/mount

This will add a mount to the pod for volume my-volume at path /where/to/mount. The volumes themselves can be configured as shown in the sections below.

Secret volumes

quarkus.kubernetes.secret-volumes.my-volume.secret-name=my-secret

ConfigMap volumes

quarkus.kubernetes.config-map-volumes.my-volume.config-map-name=my-config-map

Passing application configuration

Quarkus supports passing configuration from external locations (via Smallrye Config). This usually requires setting an additional environment variable or system property. When you need to use a secret or a config map for the purpose of application configuration, you need to:

  • define a volume

  • mount the volume

  • create an environment variable for SMALLRYE_CONFIG_LOCATIONS

To simplify things, Quarkus provides a single-step alternative:

quarkus.kubernetes.app-secret=<name of the secret containing the configuration>

or

quarkus.kubernetes.app-config-map=<name of the config map containing the configuration>

When these properties are used, the generated manifests will contain everything required. The application config volumes will be created using the paths /mnt/app-secret and /mnt/app-config-map for secrets and config maps respectively.

Note: Users may use both properties at the same time.

Changing the number of replicas

To change the number of replicas from 1 to 3:

quarkus.kubernetes.replicas=3

Add readiness and liveness probes

By default, the Kubernetes resources do not contain readiness and liveness probes in the generated Deployment. Adding them however is just a matter of adding the SmallRye Health extension like so:

pom.xml
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-smallrye-health</artifactId>
</dependency>
build.gradle
implementation("io.quarkus:quarkus-smallrye-health")

The values of the generated probes will be determined by the configured health properties: quarkus.smallrye-health.root-path, quarkus.smallrye-health.liveness-path and quarkus.smallrye-health.readiness-path. More information about the health extension can be found in the relevant guide.

Customizing the readiness probe

To set the initial delay of the probe to 20 seconds and its period to 45 seconds:

quarkus.kubernetes.readiness-probe.initial-delay=20s
quarkus.kubernetes.readiness-probe.period=45s

Add hostAliases

To add entries to a Pod’s /etc/hosts file (more information can be found in the Kubernetes documentation), just apply the following configuration:

quarkus.kubernetes.hostaliases."10.0.0.0".hostnames=foo.com,bar.org

This would generate the following hostAliases section in the deployment definition:

kind: Deployment
spec:
  template:
    spec:
      hostAliases:
      - hostnames:
        - foo.com
        - bar.org
        ip: 10.0.0.0

Container Resources Management

CPU and memory limits and requests can be applied to a Container (more info in the Kubernetes documentation) using the following configuration:

quarkus.kubernetes.resources.requests.memory=64Mi
quarkus.kubernetes.resources.requests.cpu=250m
quarkus.kubernetes.resources.limits.memory=512Mi
quarkus.kubernetes.resources.limits.cpu=1000m

This would generate the following entry in the container section:

containers:
  - resources:
    limits:
      cpu: 1000m
      memory: 512Mi
    requests:
      cpu: 250m
      memory: 64Mi
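As a reminder of what these units mean (standard Kubernetes quantities, not Quarkus-specific): cpu values ending in m are millicores and Mi denotes mebibytes. A quick sanity check:

```shell
# 250m CPU is a quarter of a core; 1000m is one full core.
awk 'BEGIN { printf "250m = %.2f cores\n", 250/1000 }'
# 64Mi is 64 * 1024 * 1024 bytes.
echo "64Mi = $((64 * 1024 * 1024)) bytes"
```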

Exposing your application in Kubernetes

Kubernetes exposes applications using Ingress resources. To generate the Ingress resource, just apply the following configuration:

quarkus.kubernetes.ingress.expose=true

This would generate the following Ingress resource:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    app.quarkus.io/commit-id: a58d2211c86f07a47d4b073ea9ce000d2c6828d5
    app.quarkus.io/build-timestamp: 2022-06-29 - 13:22:41 +0000
  labels:
    app.kubernetes.io/name: kubernetes-with-ingress
    app.kubernetes.io/version: 0.1-SNAPSHOT
  name: kubernetes-with-ingress
spec:
  rules:
    - http:
        paths:
          - backend:
              service:
                name: kubernetes-with-ingress
                port:
                  name: http
            path: /
            pathType: Prefix

After deploying these resources to Kubernetes, the Ingress resource will allow unsecured connections to reach your application.

Adding Ingress rules

To customize the default host and path properties of the generated Ingress resources, you need to apply the following configuration:

quarkus.kubernetes.ingress.expose=true
# To change the Ingress host. By default, it's empty.
quarkus.kubernetes.ingress.host=prod.svc.url
# To change the Ingress path of the generated Ingress rule. By default, it's "/".
quarkus.kubernetes.ports.http.path=/prod

This would generate the following Ingress resource:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  labels:
    app.kubernetes.io/name: kubernetes-with-ingress
    app.kubernetes.io/version: 0.1-SNAPSHOT
  name: kubernetes-with-ingress
spec:
  rules:
    - host: prod.svc.url
      http:
        paths:
          - backend:
              service:
                name: kubernetes-with-ingress
                port:
                  name: http
            path: /prod
            pathType: Prefix

Additionally, you can also add new Ingress rules by adding the following configuration:

# Example to add a new rule
quarkus.kubernetes.ingress.rules.1.host=dev.svc.url
quarkus.kubernetes.ingress.rules.1.path=/dev
quarkus.kubernetes.ingress.rules.1.path-type=ImplementationSpecific
# by default, path type is Prefix

# Example to add a new rule that use another service binding
quarkus.kubernetes.ingress.rules.2.host=alt.svc.url
quarkus.kubernetes.ingress.rules.2.path=/ea
quarkus.kubernetes.ingress.rules.2.service-name=updated-service
quarkus.kubernetes.ingress.rules.2.service-port-name=tcpurl

This would generate the following Ingress resource:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  labels:
    app.kubernetes.io/name: kubernetes-with-ingress
    app.kubernetes.io/version: 0.1-SNAPSHOT
  name: kubernetes-with-ingress
spec:
  rules:
    - host: prod.svc.url
      http:
        paths:
          - backend:
              service:
                name: kubernetes-with-ingress
                port:
                  name: http
            path: /prod
            pathType: Prefix
    - host: dev.svc.url
      http:
        paths:
          - backend:
              service:
                name: kubernetes-with-ingress
                port:
                  name: http
            path: /dev
            pathType: ImplementationSpecific
    - host: alt.svc.url
      http:
        paths:
          - backend:
              service:
                name: updated-service
                port:
                  name: tcpurl
            path: /ea
            pathType: Prefix

Securing the Ingress resource

To secure incoming connections, Kubernetes allows enabling TLS within the Ingress resource by specifying a Secret that contains a TLS private key and certificate. You can generate a secured Ingress resource by simply adding the ingress.tls."<secret-name>".enabled property:

quarkus.kubernetes.ingress.expose=true
## Ingress TLS configuration:
quarkus.kubernetes.ingress.tls.my-secret.enabled=true

This configuration will generate the following secured Ingress resource:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  ...
  name: kubernetes-with-secure-ingress
spec:
  rules:
    ...
  tls:
    - secretName: my-secret

Now, Kubernetes will terminate all incoming connections using TLS with the certificate provided in the secret named "my-secret".

More information about how to create that secret can be found in the Kubernetes documentation.
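As an illustration (not part of the Quarkus tooling), a TLS secret matching the configuration above could be created with openssl and kubectl; the certificate below is self-signed, uses the example host from the Ingress rules section, and is only suitable for testing:

```shell
# Generate a throwaway self-signed certificate/key pair for testing.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=prod.svc.url" -keyout tls.key -out tls.crt 2>/dev/null
# Then create the Secret that the Ingress TLS configuration refers to:
#   kubectl create secret tls my-secret --cert=tls.crt --key=tls.key
openssl x509 -in tls.crt -noout -subject
```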

Using the Kubernetes client

Applications that are deployed to Kubernetes and need to access the API server will usually make use of the kubernetes-client extension:

pom.xml
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-kubernetes-client</artifactId>
</dependency>
build.gradle
implementation("io.quarkus:quarkus-kubernetes-client")

To access the API server from within a Kubernetes cluster, some RBAC-related resources are required (e.g. a ServiceAccount, a RoleBinding). To ease the usage of the kubernetes-client extension, the kubernetes extension generates a RoleBinding resource that binds a cluster role named "view" to the application ServiceAccount resource. It’s important to note that the "view" cluster role won’t be generated automatically, so it’s expected that a cluster role with that name is already installed in your cluster.

On the other hand, you can fully customize the roles, subjects and role bindings to generate using the properties under quarkus.kubernetes.rbac.role-bindings; if present, the kubernetes-client extension will use them and hence won’t generate any RoleBinding resource.

You can disable the RBAC resources generation using the property quarkus.kubernetes-client.generate-rbac=false.

Generating RBAC resources

In some scenarios, it’s necessary to generate additional RBAC resources that are used by Kubernetes to grant or limit access to other resources. For example, in our use case, we are building a Kubernetes operator that needs to read the list of the installed deployments. To do this, we would need to assign a service account to our operator and link this service account with a role that grants access to the Deployment resources. Let’s see how to do this using the quarkus.kubernetes.rbac properties:

# Generate the Role resource with name "my-role" 1
quarkus.kubernetes.rbac.roles.my-role.policy-rules.0.api-groups=extensions,apps
quarkus.kubernetes.rbac.roles.my-role.policy-rules.0.resources=deployments
quarkus.kubernetes.rbac.roles.my-role.policy-rules.0.verbs=list
1 In this example, the role "my-role" will be generated with a policy rule to get the list of deployments.

By default, if one role is configured, a RoleBinding resource will be generated as well to link this role with the ServiceAccount resource.
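
As a sketch, assuming the application is named kubernetes-quickstart (the actual names and labels depend on your application), the generated RoleBinding would look similar to:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-quickstart-my-role
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: my-role
subjects:
  - kind: ServiceAccount
    name: kubernetes-quickstart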

Moreover, you can have more control over the RBAC resources to be generated:

# Generate Role resource with name "my-role" 1
quarkus.kubernetes.rbac.roles.my-role.policy-rules.0.api-groups=extensions,apps
quarkus.kubernetes.rbac.roles.my-role.policy-rules.0.resources=deployments
quarkus.kubernetes.rbac.roles.my-role.policy-rules.0.verbs=get,watch,list

# Generate ServiceAccount resource with name "my-service-account" in namespace "my_namespace" 2
quarkus.kubernetes.rbac.service-accounts.my-service-account.namespace=my_namespace

# Bind Role "my-role" with ServiceAccount "my-service-account" 3
quarkus.kubernetes.rbac.role-bindings.my-role-binding.subjects.my-service-account.kind=ServiceAccount
quarkus.kubernetes.rbac.role-bindings.my-role-binding.subjects.my-service-account.namespace=my_namespace
quarkus.kubernetes.rbac.role-bindings.my-role-binding.role-name=my-role
1 In this example, the role "my-role" will be generated with the specified policy rules.
2 Also, the service account "my-service-account" will be generated.
3 And we can configure the generated RoleBinding resource by selecting the role to be used and the subject.

Finally, we can also generate the cluster wide role resource of "ClusterRole" kind and a "ClusterRoleBinding" resource as follows:

# Generate ClusterRole resource with name "my-cluster-role" 1
quarkus.kubernetes.rbac.cluster-roles.my-cluster-role.policy-rules.0.api-groups=extensions,apps
quarkus.kubernetes.rbac.cluster-roles.my-cluster-role.policy-rules.0.resources=deployments
quarkus.kubernetes.rbac.cluster-roles.my-cluster-role.policy-rules.0.verbs=get,watch,list

# Bind the ClusterRole "my-cluster-role" with the application service account
quarkus.kubernetes.rbac.cluster-role-bindings.my-cluster-role-binding.subjects.manager.kind=Group
quarkus.kubernetes.rbac.cluster-role-bindings.my-cluster-role-binding.subjects.manager.api-group=rbac.authorization.k8s.io
quarkus.kubernetes.rbac.cluster-role-bindings.my-cluster-role-binding.role-name=my-cluster-role 2
1 In this example, the cluster role "my-cluster-role" will be generated with the specified policy rules.
2 The name of the ClusterRole resource to use. Role resources are namespace-based and hence not allowed in ClusterRoleBinding resources.
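
For illustration, the ClusterRole generated from the configuration above would look roughly like:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: my-cluster-role
rules:
  - apiGroups:
      - extensions
      - apps
    resources:
      - deployments
    verbs:
      - get
      - watch
      - list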

Deploying to local Kubernetes

When deploying to local Kubernetes environments, users often perform minor changes to their manifests that simplify the development process. The most common changes are:

  • Setting imagePullPolicy to IfNotPresent

  • Using NodePort as Service type

Quarkus provides extensions that, among other things, set these options by default. Such extensions are:

  • quarkus-minikube

  • quarkus-kind

If the list of extensions does not match the tool you are using (e.g. Docker Desktop, microk8s etc.), then it is suggested to use the quarkus-minikube extension, as its defaults should be reasonable for most environments.

Deploying to Minikube

Minikube is quite popular when a Kubernetes cluster is needed for development purposes. To make the deployment to Minikube experience as frictionless as possible, Quarkus provides the quarkus-minikube extension. This extension can be added to a project like so:

pom.xml
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-minikube</artifactId>
</dependency>
build.gradle
implementation("io.quarkus:quarkus-minikube")

The purpose of this extension is to generate Kubernetes manifests (minikube.yaml and minikube.json) that are tailored to Minikube. This extension assumes a couple of things:

  • Users won’t be using an image registry and will instead make their container image accessible to the Kubernetes cluster by building it directly into Minikube’s Docker daemon. To use Minikube’s Docker daemon you must first execute:

eval $(minikube -p minikube docker-env)
  • Applications deployed to Kubernetes won’t be accessed via a Kubernetes Ingress, but rather as a NodePort Service. The advantage of doing this is that the URL of an application can be retrieved trivially by executing:

minikube service list

To control the nodePort that is used in this case, users can set quarkus.kubernetes.node-port. Note however that this configuration is entirely optional because Quarkus will automatically use a proper (and non-changing) value if none is set.
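
For example, to pin the service to a fixed port in the valid NodePort range (30000-32767), where the value below is only an illustration:

quarkus.kubernetes.node-port=30000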

It is highly discouraged to use the manifests generated by the Minikube extension when deploying to production as these manifests are intended for development purposes only. When deploying to production, consider using the vanilla Kubernetes manifests (or the OpenShift ones when targeting OpenShift).

If the assumptions the Minikube extension makes don’t fit your workflow, nothing prevents you from using the regular Kubernetes extension to generate Kubernetes manifests and apply those to your Minikube cluster.

Deploying to Kind

Kind is another popular tool used as a Kubernetes cluster for development purposes. To make the deployment to Kind experience as frictionless as possible, Quarkus provides the quarkus-kind extension. This extension can be added to a project like so:

<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-kind</artifactId>
</dependency>

The purpose of this extension is to generate Kubernetes manifests (kind.yaml and kind.json) that are tailored to Kind and also to automate the process of loading images into the cluster when performing container image builds. The tailored manifests are very similar to the Minikube ones (they share the same rules, see above).

Tuning the generated resources using application.properties

The Kubernetes extension allows tuning the generated manifest, using the application.properties file. Here are some examples:

Configuration options

The table below describes all the available configuration options.

Kubernetes

include::{generated-dir}/config/quarkus-kubernetes_quarkus.kubernetes.adoc[]

Properties that use non-standard types can be configured by expanding the property. For example, to define a kubernetes-readiness-probe, which is of type Probe:

quarkus.kubernetes.readiness-probe.initial-delay=20s
quarkus.kubernetes.readiness-probe.period=45s

In this example, initial-delay and period are fields of the type Probe. Below you will find tables describing all available types.

Client Connection Configuration

You may need to configure the connection to your Kubernetes cluster. By default, it automatically uses the active context used by kubectl.
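
You can check which context, and therefore which cluster, will be used with:

kubectl config current-context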

For instance, if your cluster API endpoint uses a self-signed SSL Certificate you need to explicitly configure the client to trust it. You can achieve this by defining the following property:

quarkus.kubernetes-client.trust-certs=true

The full list of the Kubernetes client configuration properties is provided below.

include::{generated-dir}/config/quarkus-kubernetes-client.adoc[]

OpenShift

One way to deploy an application to OpenShift is to use s2i (source to image) to create an image stream from the source and then deploy the image stream:

CLI
quarkus extension remove kubernetes,jib
quarkus extension add openshift

oc new-project quarkus-project
quarkus build -Dquarkus.container-image.build=true

oc new-app --name=greeting  quarkus-project/kubernetes-quickstart:1.0.0-SNAPSHOT
oc expose svc/greeting
oc get route
curl <route>/greeting
Maven
./mvnw quarkus:remove-extension -Dextensions="kubernetes, jib"
./mvnw quarkus:add-extension -Dextensions="openshift"

oc new-project quarkus-project
./mvnw clean package -Dquarkus.container-image.build=true

oc new-app --name=greeting  quarkus-project/kubernetes-quickstart:1.0.0-SNAPSHOT
oc expose svc/greeting
oc get route
curl <route>/greeting
Gradle
./gradlew removeExtension --extensions="kubernetes, jib"
./gradlew addExtension --extensions="openshift"

oc new-project quarkus-project
./gradlew build -Dquarkus.container-image.build=true

oc new-app --name=greeting  quarkus-project/kubernetes-quickstart:1.0.0-SNAPSHOT
oc expose svc/greeting
oc get route
curl <route>/greeting

See further information in Deploying to OpenShift.

A description of OpenShift resources and customisable properties is given below alongside Kubernetes resources to show similarities where applicable. This includes an alternative to oc new-app … above, i.e. oc apply -f target/kubernetes/openshift.json.

To enable the generation of OpenShift resources, you need to include OpenShift in the target platforms:

quarkus.kubernetes.deployment-target=openshift

If you need to generate resources for both platforms (vanilla Kubernetes and OpenShift), then you need to include both (comma separated).

quarkus.kubernetes.deployment-target=kubernetes,openshift

Following the execution of ./mvnw package -Dquarkus.container-image.build=true you will notice, amongst the other files that are created, two files named openshift.json and openshift.yml in the target/kubernetes/ directory.

These manifests can be deployed as is to a running cluster, using kubectl:

kubectl apply -f target/kubernetes/openshift.json

OpenShift’s users might want to use oc rather than kubectl:

oc apply -f target/kubernetes/openshift.json

For users that prefer to keep the application.properties independent of the deployment platform, the deployment target can be specified directly in the deploy command by adding -Dquarkus.kubernetes.deployment-target=openshift in addition to -Dquarkus.kubernetes.deploy=true. Furthermore, Quarkus allows collapsing the two properties into one: -Dquarkus.openshift.deploy=true.

./mvnw clean package -Dquarkus.openshift.deploy=true

The equivalent with gradle:

./gradlew build -Dquarkus.openshift.deploy=true

In case both properties are used with conflicting values, quarkus.kubernetes.deployment-target is used.

Quarkus also provides the OpenShift extension. This extension is basically a wrapper around the Kubernetes extension and relieves OpenShift users of the necessity of setting the deployment-target property to openshift.

The OpenShift resources can be customized in a similar approach with Kubernetes.

OpenShift

include::{generated-dir}/config/quarkus-kubernetes_quarkus.openshift.adoc[]

Knative

To enable the generation of Knative resources, you need to include Knative in the target platforms:

quarkus.kubernetes.deployment-target=knative

Following the execution of ./mvnw package you will notice, amongst the other files that are created, two files named knative.json and knative.yml in the target/kubernetes/ directory.

If you look at either file you will see that it contains a Knative Service.

The full source of the knative.json file looks something like this:

{
  "apiVersion" : "serving.knative.dev/v1alpha1",
  "kind" : "Service",
  "metadata" : {
    "annotations": {
      "app.quarkus.io/vcs-uri" : "<some url>",
      "app.quarkus.io/commit-id" : "<some git SHA>"
    },
    "labels" : {
      "app.kubernetes.io/name" : "test-quarkus-app",
      "app.kubernetes.io/version" : "1.0.0-SNAPSHOT"
    },
    "name" : "knative"
  },
  "spec" : {
    "runLatest" : {
      "configuration" : {
        "revisionTemplate" : {
          "spec" : {
            "container" : {
              "image" : "dev.local/yourDockerUsername/test-quarkus-app:1.0.0-SNAPSHOT",
              "imagePullPolicy" : "Always"
            }
          }
        }
      }
    }
  }
}

The generated manifest can be deployed as is to a running cluster, using kubectl:

kubectl apply -f target/kubernetes/knative.json

The generated service can be customized using the following properties:

Knative

include::{generated-dir}/config/quarkus-kubernetes_quarkus.knative.adoc[]

Deployment targets

Mentioned in the previous sections was the concept of deployment-target. This concept allows users to control which Kubernetes manifests will be generated and deployed to a cluster (if quarkus.kubernetes.deploy has been set to true).

By default, when no deployment-target is set, then only vanilla Kubernetes resources are generated and deployed. When multiple values are set (for example quarkus.kubernetes.deployment-target=kubernetes,openshift) then the resources for all targets are generated, but only the resources that correspond to the first target are applied to the cluster (if deployment is enabled).

For users that prefer to keep the application.properties independent of the deployment platform, the deployment target can be specified directly in the deploy command by adding -Dquarkus.kubernetes.deployment-target=knative in addition to -Dquarkus.kubernetes.deploy=true. Furthermore, Quarkus allows collapsing the two properties into one: -Dquarkus.knative.deploy=true.

./mvnw clean package -Dquarkus.knative.deploy=true

The equivalent with gradle:

./gradlew build -Dquarkus.knative.deploy=true

In case both properties are used with conflicting values, quarkus.kubernetes.deployment-target is used.

In the case of wrapper extensions like OpenShift and Minikube, when these extensions have been explicitly added to the project, the default deployment-target is set by those extensions. For example, if quarkus-minikube has been added to a project, then minikube becomes the default deployment target and its resources will be applied to the Kubernetes cluster when deployment via quarkus.kubernetes.deploy has been enabled. Users can still override the deployment-target manually using quarkus.kubernetes.deployment-target.

Deprecated configuration

The following categories of configuration properties have been deprecated.

Properties without the quarkus prefix

In earlier versions of the extension, the quarkus. prefix was missing from these properties. These properties are now deprecated.
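
For example, using the replicas property for illustration:

# Deprecated style, without the quarkus. prefix:
kubernetes.replicas=3

# Current style:
quarkus.kubernetes.replicas=3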

Docker and S2i properties

The properties for configuring docker and s2i are also deprecated in favor of the new container-image extensions.

Config group arrays

Properties referring to config group arrays (e.g. kubernetes.labels[0], kubernetes.env-vars[0] etc) have been converted to Maps to align with the rest of the Quarkus ecosystem.

The code below demonstrates the change in labels config:

# Old labels config:
kubernetes.labels[0].name=foo
kubernetes.labels[0].value=bar

# New labels
quarkus.kubernetes.labels.foo=bar

The code below demonstrates the change in env-vars config:

# Old env-vars config:
kubernetes.env-vars[0].name=foo
kubernetes.env-vars[0].configmap=my-configmap

# New env-vars
quarkus.kubernetes.env-vars.foo.configmap=my-configmap

env-vars properties

The quarkus.kubernetes.env-vars properties are deprecated (though still supported as of this writing) and the new declaration style should be used instead. See Environment variables and more specifically Backwards compatibility for more details.
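
As a sketch of the newer declaration style (see the Environment variables section for the authoritative list of options):

# Plain value
quarkus.kubernetes.env.vars.foo=bar

# Import all entries of a ConfigMap or Secret as environment variables
quarkus.kubernetes.env.configmaps=my-configmap
quarkus.kubernetes.env.secrets=my-secret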

Deployment

To trigger building and deploying a container image you need to enable the quarkus.kubernetes.deploy flag (the flag is disabled by default - furthermore it has no effect during test runs or dev mode). This can be easily done with the command line:

./mvnw clean package -Dquarkus.kubernetes.deploy=true

Building a container image

Building a container image is possible using any of the three available container-image extensions:

  • quarkus-container-image-jib

  • quarkus-container-image-docker

  • quarkus-container-image-s2i

Each time deployment is requested, a container image build will be implicitly triggered (no additional properties are required when the Kubernetes deployment has been enabled).
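
If needed, the coordinates of the image being built can be customized with the container-image properties, for example:

quarkus.container-image.group=quarkus-project
quarkus.container-image.name=kubernetes-quickstart
quarkus.container-image.tag=1.0.0-SNAPSHOT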

Deploying

When deployment is enabled, the Kubernetes extension will select the resources specified by quarkus.kubernetes.deployment-target and deploy them. This assumes that a .kube/config is available in your user directory that points to the target Kubernetes cluster. In other words, the extension will use whatever cluster kubectl uses. The same applies to credentials.

At the moment no additional options are provided for further customization.

Remote Debugging

To remotely debug applications that are running in a Kubernetes environment, we need to deploy the application as described in the previous section and add a new property: quarkus.kubernetes.remote-debug.enabled=true. This property will automatically configure the Java application to append the Java agent configuration (for example: -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005) and also configure the Service resource to listen on the Java agent port.

After your application has been deployed with the debug enabled, next you need to tunnel the traffic from your local host machine to the specified port of the java agent:

kubectl port-forward svc/<application name> 5005:5005

Using this command, you forward the traffic from localhost:5005 to the Kubernetes service running the Java agent on port 5005, which is the port the Java agent uses by default for remote debugging. You can also configure another Java agent port using the property quarkus.kubernetes.remote-debug.address-port.
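
For instance, to use port 5006 instead (the value is just an example), configure the extension and forward the matching port:

quarkus.kubernetes.remote-debug.address-port=5006

kubectl port-forward svc/<application name> 5006:5006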

Finally, all you need to do is configure your favorite IDE to attach to the Java agent process that is forwarded to localhost:5005 and start debugging your application. For example, in IntelliJ IDEA, you can follow this tutorial to debug remote applications.

Using existing resources

Sometimes it’s desirable to either provide additional resources (e.g. a ConfigMap, a Secret, a Deployment for a database) or provide custom ones that will be used as a base for the generation process. Those resources can be added under src/main/kubernetes directory and can be named after the target environment (e.g. kubernetes.json, openshift.json, knative.json, or the yml equivalents). The correlation between provided and generated files is done by file name. So, a kubernetes.json/kubernetes.yml file added in src/main/kubernetes will only affect the generated kubernetes.json/kubernetes.yml. An openshift.json/openshift.yml file added in src/main/kubernetes will only affect the generated openshift.json/openshift.yml. A knative.json/knative.yml file added in src/main/kubernetes will only affect the generated knative.json/knative.yml and so on. The provided file may be either in json or yaml format and may contain one or more resources. These resources will end up in both generated formats (json and yaml). For example, a secret added in src/main/kubernetes/kubernetes.yml will be added to both the generated kubernetes.yml and kubernetes.json.

Note: At the time of writing there is no mechanism in place that allows a one-to-many relationship between provided and generated files. Minikube is not an exception to the rule above, so if you want to customize the generated minikube manifests, the file placed under src/main/kubernetes will have to be named minikube.json or minikube.yml (naming it kubernetes.yml or kubernetes.json will result in having only the generated kubernetes.yml and kubernetes.json affected).

Any resource found will be added in the generated manifests. Global modifications (e.g. labels, annotations) will also be applied to those resources. If one of the provided resources has the same name as one of the generated ones, then the generated resource will be created on top of the provided resource, respecting existing content when possible (e.g. existing labels, annotations, environment variables, mounts, replicas etc).

The name of the resource is determined by the application name and may be overridden by quarkus.kubernetes.name, quarkus.openshift.name and quarkus.knative.name.
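
For example, to override the resource name derived from the application name (my-app is a placeholder):

quarkus.kubernetes.name=my-app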

For example, in the kubernetes-quickstart application, we can add a kubernetes.yml file in the src/main/kubernetes that looks like:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubernetes-quickstart
  labels:
    app: quickstart
spec:
  replicas: 3
  selector:
    matchLabels:
      app: quickstart
  template:
    metadata:
      labels:
        app: quickstart
    spec:
      containers:
      - name: kubernetes-quickstart
        image: someimage:latest
        ports:
        - containerPort: 80
        env:
        - name: FOO
          value: BAR

The generated kubernetes.yml will look like:

apiVersion: "apps/v1"
kind: "Deployment"
metadata:
  annotations:
    app.quarkus.io/build-timestamp: "2020-04-10 - 12:54:37 +0000"
  labels:
    app: "quickstart"
  name: "kubernetes-quickstart"
spec:
  replicas: 3 1
  selector:
    matchLabels:
      app.kubernetes.io/name: "kubernetes-quickstart"
      app.kubernetes.io/version: "1.0.0-SNAPSHOT"
  template:
    metadata:
      annotations:
        app.quarkus.io/build-timestamp: "2020-04-10 - 12:54:37 +0000"
      labels:
        app: "quickstart" 2
    spec:
      containers:
      - env:
        - name: "FOO" 3
          value: "BAR"
        image: "<<yourDockerUsername>>/kubernetes-quickstart:1.0.0-SNAPSHOT" 4
        imagePullPolicy: "Always"
        name: "kubernetes-quickstart"
        ports:
        - containerPort: 8080 5
          name: "http"
          protocol: "TCP"
      serviceAccount: "kubernetes-quickstart"
1 The provided replicas,
2 labels and
3 environment variables were retained.
4 However, the image and
5 the container port were modified.

Moreover, the default annotations have been added.

  • When the resource name does not match the application name (or the overridden name), rather than reusing the resource, a new one will be added. The same goes for the container.

  • When the name of the container does not match the application name (or the overridden name), container specific configuration will be ignored.

Using common resources

When generating the manifests for multiple deployment targets like Kubernetes, OpenShift or Knative, we can place the common resources in src/main/kubernetes/common.yml, so these resources will be integrated into the generated kubernetes.json/kubernetes.yml, and openshift.json/openshift.yml files (if you configure the Kubernetes and OpenShift extensions at the same time).

For example, we can write a ConfigMap resource only once in the file src/main/kubernetes/common.yml:

apiVersion: v1
kind: ConfigMap
metadata:
  name: common-configmap
data:
  hello: world

And this config map resource will be integrated into the generated kubernetes.json/kubernetes.yml, and openshift.json/openshift.yml files.
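
Such a ConfigMap can then be consumed like any other, for example by importing its entries as environment variables:

quarkus.kubernetes.env.configmaps=common-configmap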

Service Binding

Quarkus supports the Service Binding Specification for Kubernetes to bind services to applications.

Specifically, Quarkus implements the Workload Projection part of the specification, therefore allowing applications to bind to services, such as a Database or a Broker, without the need for user configuration.

To enable Service Binding for supported extensions, add the quarkus-kubernetes-service-binding extension to the application dependencies.
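
Concretely, using the same build-file convention as earlier in this guide:

pom.xml
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-kubernetes-service-binding</artifactId>
</dependency>
build.gradle
implementation("io.quarkus:quarkus-kubernetes-service-binding")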

The following extensions can be used with Service Binding and are supported for Workload Projection:

  • quarkus-jdbc-mariadb

  • quarkus-jdbc-mssql

  • quarkus-jdbc-mysql

  • quarkus-jdbc-postgresql

  • quarkus-mongodb-client

  • quarkus-kafka-client

  • quarkus-messaging-kafka

  • quarkus-reactive-db2-client

  • quarkus-reactive-mssql-client

  • quarkus-reactive-mysql-client

  • quarkus-reactive-oracle-client

  • quarkus-reactive-pg-client

  • quarkus-infinispan-client

Workload Projection

Workload Projection is a process of obtaining the configuration for services from the Kubernetes cluster. This configuration takes the form of directory structures that follow certain conventions and is attached to an application or to a service as a mounted volume. The kubernetes-service-binding extension uses this directory structure to create configuration sources, which allows you to configure additional modules, such as databases or message brokers.

在应用程序开发期间,用户可以使用工作负载映射将他们的应用程序连接到开发数据库或其他本地运行的服务,而无需更改实际应用程序代码或配置。

During application development, users can use workload projection to connect their application to a development database, or other locally-run services, without changing the actual application code or configuration.

有关包含目录结构并在集成测试中传递的工作负载映射示例,请参阅 Kubernetes Service Binding datasource GitHub 存储库。

For an example of a workload projection where the directory structure is included in the test resources and passed to integration test, see the Kubernetes Service Binding datasource GitHub repository.

  • The k8s-sb directory is the root of all service bindings. In this example, only one database, called fruit-db, is intended to be bound. The binding contains a type file that indicates postgresql as the database type, while the other files in the directory provide the information necessary to establish the connection.

  • After your Quarkus project obtains information from SERVICE_BINDING_ROOT environment variables that are set by OpenShift, you can locate generated configuration files that are present in the file system and use them to map the configuration-file values to properties of certain extensions.
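The directory convention described above is simple enough to sketch in a few lines. The following Python snippet is an illustrative sketch only, not Quarkus's implementation: it walks a binding root such as k8s-sb and collects each binding's files into a map, mirroring how a configuration source could expose them. The fruit-db entries and their values are assumptions modeled on the example above.

```python
import os
import tempfile

def read_bindings(root):
    """Walk a Service Binding root (e.g. the k8s-sb directory) and return
    {binding_name: {entry_file: content}}, following the Workload Projection
    convention: one subdirectory per bound service, one file per entry."""
    bindings = {}
    for name in sorted(os.listdir(root)):
        binding_dir = os.path.join(root, name)
        if not os.path.isdir(binding_dir):
            continue
        bindings[name] = {}
        for entry in os.listdir(binding_dir):
            path = os.path.join(binding_dir, entry)
            if os.path.isfile(path):
                with open(path) as f:
                    bindings[name][entry] = f.read().strip()
    return bindings

# Demo: recreate the fruit-db layout from the example above (values are made up).
root = tempfile.mkdtemp()
fruit_db = os.path.join(root, "fruit-db")
os.makedirs(fruit_db)
for entry, value in {"type": "postgresql", "username": "quarkus", "password": "secret"}.items():
    with open(os.path.join(fruit_db, entry), "w") as f:
        f.write(value)

bindings = read_bindings(root)
```

A real configuration source would also honor the SERVICE_BINDING_ROOT environment variable mentioned above; the flat layout here only illustrates the minimal convention.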

== Introduction to the Service Binding Operator

Service Binding Operator 是一个实现 Service Binding Specification for Kubernetes 的 Operator,其目的是简化将服务绑定到应用程序的过程。支持 Workload Projection 的容器化应用程序以卷挂载的形式获取服务绑定信息。Service Binding Operator 读取绑定服务信息,并将其挂载到需要它的应用程序容器上。

The Service Binding Operator is an Operator that implements Service Binding Specification for Kubernetes and is meant to simplify the binding of services to an application. Containerized applications that support Workload Projection obtain service binding information in the form of volume mounts. The Service Binding Operator reads binding service information and mounts it to the application containers that need it.

应用程序与绑定服务之间的对应关系通过 ServiceBinding 资源来表示,该资源声明了哪些服务打算绑定到哪个应用程序。

The correlation between application and bound services is expressed through the ServiceBinding resource, which declares the intent of which services are meant to be bound to which application.

Service Binding Operator 会监视 ServiceBinding 资源,这些资源告知 Operator 哪些应用程序打算与哪些服务绑定。当列出的应用程序被部署后,Service Binding Operator 会收集所有必须传递给应用程序的绑定信息,然后通过附加带有绑定信息的卷挂载来升级应用程序容器。

The Service Binding Operator watches for ServiceBinding resources, which inform the Operator what applications are meant to be bound with what services. When a listed application is deployed, the Service Binding Operator collects all the binding information that must be passed to the application, then upgrades the application container by attaching a volume mount with the binding information.

Service Binding Operator 完成以下操作:

The Service Binding Operator completes the following actions:

  • Observes ServiceBinding resources for workloads intended to be bound to a particular service

  • Applies the binding information to the workload using volume mounts

以下章节介绍了自动和半自动服务绑定方法及其用例。无论采用哪种方法,kubernetes-service-binding 扩展都会生成 ServiceBinding 资源。使用半自动方法时,用户必须手动为目标服务提供配置。使用自动方法时,对于生成 ServiceBinding 资源的有限服务集,不需要其他配置。

The following chapter describes the automatic and semi-automatic service binding approaches and their use cases. With either approach, the kubernetes-service-binding extension generates a ServiceBinding resource. With the semi-automatic approach, users must provide a configuration for target services manually. With the automatic approach, for a limited set of services generating the ServiceBinding resource, no additional configuration is needed.

=== Semi-automatic service binding

服务绑定过程从用户指定要绑定到特定应用程序的所需服务开始。此意图汇总在由 kubernetes-service-binding 扩展生成的 ServiceBinding 资源中。使用 kubernetes-service-binding 扩展可以帮助用户以最少的配置生成 ServiceBinding 资源,从而总体上简化了整个过程。

A service binding process starts with a user specification of required services that will be bound to a certain application. This expression is summarized in the ServiceBinding resource that is generated by the kubernetes-service-binding extension. The use of the kubernetes-service-binding extensions helps users to generate ServiceBinding resources with minimal configuration, therefore simplifying the process overall.

然后,负责绑定过程的 Service Binding Operator 会读取 ServiceBinding 资源中的信息,并相应地将所需的文件挂载到容器中。

The Service Binding Operator responsible for the binding process then reads the information from the ServiceBinding resource and mounts the required files to a container accordingly.

An example of the ServiceBinding resource:

apiVersion: binding.operators.coreos.com/v1beta1
kind: ServiceBinding
metadata:
 name: binding-request
 namespace: service-binding-demo
spec:
 application:
   name: java-app
   group: apps
   version: v1
   resource: deployments
 services:
 - group: postgres-operator.crunchydata.com
   version: v1beta1
   kind: Database
   name: db-demo
   id: postgresDB
The quarkus-kubernetes-service-binding extension provides a more compact way of expressing the same information. For example:

quarkus.kubernetes-service-binding.services.db-demo.api-version=postgres-operator.crunchydata.com/v1beta1
quarkus.kubernetes-service-binding.services.db-demo.kind=Database

application.properties 中添加较早的配置属性之后,quarkus-kubernetes,结合 quarkus-kubernetes-service-binding 扩展,会自动生成 ServiceBinding 资源。

After adding the earlier configuration properties inside your application.properties, the quarkus-kubernetes, in combination with the quarkus-kubernetes-service-binding extension, automatically generates the ServiceBinding resource.

前面提到的 db-demo 属于配置标识符,现在具有双重角色,还会完成以下操作:

The earlier mentioned db-demo property-configuration identifier now has a double role and also completes the following actions:

  • Correlates and groups api-version and kind properties together

  • Defines the name property for the custom resource, with the possibility of a later edit. For example:

quarkus.kubernetes-service-binding.services.db-demo.api-version=postgres-operator.crunchydata.com/v1beta1
quarkus.kubernetes-service-binding.services.db-demo.kind=Database
quarkus.kubernetes-service-binding.services.db-demo.name=my-db
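For illustration, assuming the conventions of the semi-automatic example shown earlier, these three properties would yield a ServiceBinding whose services entry looks roughly as follows (a sketch, not verbatim generator output):

```yaml
 services:
 - group: postgres-operator.crunchydata.com
   version: v1beta1
   kind: Database
   name: my-db
```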

=== Automatic service binding

quarkus-kubernetes-service-binding 扩展在检测到应用程序需要访问由可用的可绑定 Operator 提供的外部服务时,可以自动生成 ServiceBinding 资源。

The quarkus-kubernetes-service-binding extension can generate the ServiceBinding resource automatically after detecting that an application requires access to the external services that are provided by available bindable Operators.

可以为有限数量的服务类型生成自动服务绑定。为了与 Kubernetes 和 Quarkus 服务的既定术语保持一致,本章将这些服务类型称为 kind。

Automatic service binding can be generated for a limited number of service types. To be consistent with established terminology for Kubernetes and Quarkus services, this chapter refers to these service types as kinds.

Table 2. Operators that support the service auto-binding

|===
|Service kind |Operator |API Version |Kind

|postgresql
|CrunchyData Postgres
|postgres-operator.crunchydata.com/v1beta1
|PostgresCluster

|mysql
|Percona XtraDB Cluster
|pxc.percona.com/v1-9-0
|PerconaXtraDBCluster

|mongo
|Percona Mongo
|psmdb.percona.com/v1-9-0
|PerconaServerMongoDB
|===
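The mapping above can be expressed as a small lookup table. The following Python snippet is a hypothetical helper, not the extension's actual API: it derives the coordinates used in a generated services entry, applying the naming convention the next sections describe (named datasources keep their own name, unnamed ones fall back to the db-kind).

```python
# Lookup derived from the auto-binding table above:
# service kind -> (apiVersion, kind) of the Operator resource.
AUTO_BINDING = {
    "postgresql": ("postgres-operator.crunchydata.com/v1beta1", "PostgresCluster"),
    "mysql": ("pxc.percona.com/v1-9-0", "PerconaXtraDBCluster"),
    "mongo": ("psmdb.percona.com/v1-9-0", "PerconaServerMongoDB"),
}

def service_entry(db_kind, datasource_name=None):
    """Hypothetical helper: build the 'services' entry that automatic binding
    would emit for a datasource. Named datasources keep their own name;
    unnamed ones fall back to the db-kind as the default name."""
    api_version, kind = AUTO_BINDING[db_kind]
    return {"apiVersion": api_version, "kind": kind, "name": datasource_name or db_kind}

entry = service_entry("postgresql")          # default name: the db-kind itself
named = service_entry("mysql", "fruits-db")  # named datasource keeps its name
```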

=== Automatic datasource binding

对于传统数据库,每当数据源的配置如下,就会启动自动绑定:

For traditional databases, automatic binding is initiated whenever a datasource is configured as follows:

quarkus.datasource.db-kind=postgresql

先前的配置与应用程序中 quarkus-datasource、quarkus-jdbc-postgresql、quarkus-kubernetes 和 quarkus-kubernetes-service-binding 扩展的存在相结合,导致为 postgresql 数据库类型生成 ServiceBinding 资源。

The previous configuration, combined with the presence of the quarkus-datasource, quarkus-jdbc-postgresql, quarkus-kubernetes, and quarkus-kubernetes-service-binding extensions in the application, results in the generation of a ServiceBinding resource for the postgresql database type.

通过使用与所用 postgresql Operator 相匹配的 Operator 资源的 apiVersionkind 属性,生成的 ServiceBinding 资源将服务或资源绑定到应用程序。

By using the apiVersion and kind properties of the Operator resource, which matches the used postgresql Operator, the generated ServiceBinding resource binds the service or resource to the application.

如果您未为数据库服务指定名称,则 db-kind 属性的值将用作默认名称。

When you do not specify a name for your database service, the value of the db-kind property is used as the default name.

 services:
 - apiVersion: postgres-operator.crunchydata.com/v1beta1
   kind: PostgresCluster
   name: postgresql

如果您如下指定数据源的名称:

If you specify the name of the datasource as follows:

quarkus.datasource.fruits-db.db-kind=postgresql

生成的 ServiceBinding 中的 service 如下所示:

The service in the generated ServiceBinding then displays as follows:

 services:
 - apiVersion: postgres-operator.crunchydata.com/v1beta1
   kind: PostgresCluster
   name: fruits-db

类似地,如果您使用 mysql,则可以如下指定数据源的名称:

Similarly, if you use mysql, the name of the datasource can be specified as follows:

quarkus.datasource.fruits-db.db-kind=mysql

生成的 service 包含以下内容:

The generated service contains the following:

 services:
 - apiVersion: pxc.percona.com/v1-9-0
   kind: PerconaXtraDBCluster
   name: fruits-db

==== Customizing Automatic Service Binding

尽管自动绑定旨在消除尽可能多的手动配置,但在某些情况下,可能仍需要修改生成的 ServiceBinding 资源。生成过程完全依赖于从应用程序中提取的信息和对受支持 Operator 的了解,这可能无法反映集群中部署的内容。所生成的资源完全基于对流行服务类型的受支持可绑定 Operator 的了解以及为防止可能出现的不匹配而开发的一组约定,例如:

Even though automatic binding was developed to eliminate as much manual configuration as possible, there are cases where modifying the generated ServiceBinding resource might still be needed. The generation process exclusively relies on information extracted from the application and the knowledge of the supported Operators, which may not reflect what is deployed in the cluster. The generated resource is based purely on the knowledge of the supported bindable Operators for popular service kinds and a set of conventions that were developed to prevent possible mismatches, such as:

  • The target resource name does not match the datasource name

  • A specific Operator needs to be used rather than the default Operator for that service kind

  • Version conflicts that occur when a user needs to use any other version than default or latest

Conventions
  • The target resource coordinates are determined based on the type of Operator and the kind of service.

  • The target resource name is set by default to match the service kind, such as postgresql, mysql, mongo.

  • For named datasources, the name of the datasource is used.

  • For named mongo clients, the name of the client is used.

Example 1 - Name mismatch

对于需要修改生成的 ServiceBinding 以修复名称不匹配的情况,使用 quarkus.kubernetes-service-binding.services 属性,并将服务名称指定为服务密钥。

For cases in which you need to modify the generated ServiceBinding to fix a name mismatch, use the quarkus.kubernetes-service-binding.services properties and specify the service’s name as the service key.

service key 通常是服务的名称,例如数据源的名称或 mongo 客户端的名称。如果不提供该值,则将使用数据源类型,例如 postgresqlmysqlmongo

The service key is usually the name of the service, for example the name of the datasource, or the name of the mongo client. When this value is not available, the datasource type, such as postgresql, mysql, mongo, is used instead.

为了避免不同类型服务之间的命名冲突,请使用特定数据源类型为 service key 加上前缀,例如 postgresql-<person>

To avoid naming conflicts between different types of services, prefix the service key with a specific datasource type, such as postgresql-<person>.
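For example, a prefixed service key for a hypothetical datasource named person (the name is illustrative, not from the original text) could be configured as:

```properties
# Hypothetical example: 'person' is an illustrative datasource name.
quarkus.kubernetes-service-binding.services.postgresql-person.api-version=postgres-operator.crunchydata.com/v1beta1
quarkus.kubernetes-service-binding.services.postgresql-person.kind=PostgresCluster
```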

以下示例展示如何自定义 PostgresCluster 资源的 apiVersion 属性:

The following example shows how to customize the apiVersion property of the PostgresCluster resource:

quarkus.datasource.db-kind=postgresql
quarkus.kubernetes-service-binding.services.postgresql.api-version=postgres-operator.crunchydata.com/v1beta2
Example 2: Application of a custom name for a datasource

在示例 1 中,db-kind(postgresql) 用作服务密钥。在这个示例中,由于数据源被命名,因此根据约定,将使用数据源名称 (fruits-db)。

In Example 1, the db-kind (postgresql) was used as the service key. In this example, because the datasource is named, the datasource name (fruits-db) is used instead, according to convention.

以下示例展示对于命名的数据源,数据源名称将用作目标资源的名称:

The following example shows that for a named datasource, the datasource name is used as the name of the target resource:

quarkus.datasource.fruits-db.db-kind=postgresql

这与以下配置的效果相同:

This has the same effect as the following configuration:

quarkus.kubernetes-service-binding.services.fruits-db.api-version=postgres-operator.crunchydata.com/v1beta1
quarkus.kubernetes-service-binding.services.fruits-db.kind=PostgresCluster
quarkus.kubernetes-service-binding.services.fruits-db.name=fruits-db
Additional resources
  • For more details about the available properties and how they work, see the Workload Projection part of the Service Binding specification.