Writing Your Own Extension
Quarkus extensions add a new developer focused behavior to the core offering, and consist of two distinct parts, buildtime augmentation and runtime container. The augmentation part is responsible for all metadata processing, such as reading annotations, XML descriptors etc. The output of this augmentation phase is recorded bytecode which is responsible for directly instantiating the relevant runtime services.
This means that metadata is only processed once at build time, which both saves on startup time, and also on memory usage as the classes etc that are used for processing are not loaded (or even present) in the runtime JVM.
This is an in-depth documentation; see Building My First Extension if you need an introduction. |
Extension philosophy
This section is a work in progress and gathers the philosophy under which extensions should be designed and written.
Why an extension framework
Quarkus’s mission is to transform your entire application, including the libraries it uses, into an artifact that uses significantly less resources than traditional approaches. These can then be used to build native applications using GraalVM. To do this you need to analyze and understand the full "closed world" of the application. Without the full and complete context, the best that can be achieved is partial and limited generic support. By using the Quarkus extension approach, we can bring Java applications in line with memory footprint constrained environments like Kubernetes or cloud platforms.
The Quarkus extension framework results in significantly improved resource utilization even when GraalVM is not used (e.g. in HotSpot). Let’s list the actions an extension performs:
-
Gather build time metadata and generate code
-
This part has nothing to do with GraalVM, it is how Quarkus starts frameworks “at build time”
-
The extension framework facilitates reading metadata, scanning classes as well as generating classes as needed
-
A small part of the extension work is executed at runtime via the generated classes, while the bulk of the work is done at build time (called deployment time)
-
Enforce opinionated and sensible defaults based on the closed world view of the application (e.g. an application with no @Entity does not need to start Hibernate ORM)
-
An extension hosts Substrate VM code substitution so that libraries can run on GraalVM
-
Most changes are pushed upstream to help the underlying library run on GraalVM
-
Not all changes can be pushed upstream; extensions host Substrate VM substitutions - which is a form of code patching - so that libraries can run
-
Host Substrate VM code substitution to help dead code elimination based on the application needs
-
This is application dependent and cannot really be shared in the library itself
-
For example, Quarkus optimizes the Hibernate code because it knows it only needs a specific connection pool and cache provider
-
Send metadata to GraalVM, for example classes in need of reflection
-
This information is not static per library (e.g. Hibernate) but the framework has the semantic knowledge and knows which classes need to have reflection (for example @Entity classes)
Favor build time work over runtime work
As much as possible, favor doing work at build time (the deployment part of the extension) as opposed to letting the framework do work at startup time (runtime). The more is done at build time, the smaller Quarkus applications using that extension will be and the faster they will load.
How to expose configuration
Quarkus simplifies the most common usages. This means that its defaults might be different from the library it integrates.
To make the simple experience easiest, unify the configuration in application.properties
via SmallRye Config.
Avoid library specific configuration files, or at least make them optional: e.g. persistence.xml
for Hibernate ORM is optional.
Extensions should see the configuration holistically as a Quarkus application instead of focusing on the library experience.
For example quarkus.database.url
and friends are shared between extensions as defining a database access is a shared task (instead of a hibernate.
property for example).
The most useful configuration options should be exposed as quarkus.[extension].
instead of the natural namespace of the library.
Less common properties can live in the library namespace.
To fully enable the closed world assumptions that Quarkus can optimize best, it is better to consider configuration options as settled at build time rather than overridable at runtime. Of course, properties like host, port and password should be overridable at runtime. But many properties, like enabling caching or setting the JDBC driver, can safely require a rebuild of the application.
Static Init Config
If the extension provides additional Config Sources and if these are required during Static Init, these must be registered with StaticInitConfigBuilderBuildItem
. Configuration in Static Init does not scan for additional sources to avoid double initialization at application startup time.
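A minimal sketch of such a registration is shown below; the builder class name is hypothetical, and the StaticInitConfigBuilderBuildItem constructor is assumed here to accept the fully qualified class name of the config builder that adds the extra sources (the exact API may differ between Quarkus versions):
@BuildStep
StaticInitConfigBuilderBuildItem registerStaticInitSources() {
    // Assumption: the build item takes the FQCN of a builder that registers the additional
    // config sources needed during static init; org.acme.MyStaticInitConfigBuilder is hypothetical.
    return new StaticInitConfigBuilderBuildItem("org.acme.MyStaticInitConfigBuilder");
}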
Expose your components via CDI
Since CDI is the central programming model when it comes to component composition, frameworks and extensions should expose their components as beans that are easily consumable by user applications.
For example, Hibernate ORM exposes EntityManagerFactory
and EntityManager
beans, the connection pool exposes DataSource
beans etc.
Extensions must register these bean definitions at build time.
Beans backed by classes
An extension can produce an AdditionalBeanBuildItem
to instruct the container to read a bean definition from a class as if it was part of the original application:
AdditionalBeanBuildItem
@Singleton 1
public class Echo {
public String echo(String val) {
return val;
}
}
1 | If a bean registered by an AdditionalBeanBuildItem does not specify a scope then @Dependent is assumed. |
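A minimal sketch of the corresponding build step in the extension’s deployment module could look like this (the method name is arbitrary):
@BuildStep
AdditionalBeanBuildItem registerEchoBean() {
    // Makes the Echo class a bean as if it had been declared in the application itself;
    // unremovableOf also protects it from removal if no injection point is detected at build time.
    return AdditionalBeanBuildItem.unremovableOf(Echo.class);
}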
All other beans can inject such a bean:
AdditionalBeanBuildItem
@Path("/hello")
public class ExampleResource {
@Inject
Echo echo;
@GET
@Produces(MediaType.TEXT_PLAIN)
public String hello(String foo) {
return echo.echo(foo);
}
}
And vice versa - the extension bean can inject application beans and beans provided by other extensions:
@Singleton
public class Echo {
@Inject
DataSource dataSource; 1
@Inject
Instance<List<String>> listsOfStrings; 2
//...
}
1 | Inject a bean provided by other extension. |
2 | Inject all beans matching the type List<String> . |
Bean initialization
Some components may require additional initialization based on information collected during augmentation. The most straightforward solution is to obtain a bean instance and call a method directly from a build step. However, it is illegal to obtain a bean instance during the augmentation phase. The reason is that the CDI container is not started yet. It is started during the static init bootstrap phase (see Three Phases of Bootstrap and Quarkus Philosophy).
It is possible to invoke a bean method from a recorder method (bytecode recording) though.
If you need to access a bean in a @Record(STATIC_INIT)
build step then it must either depend on the BeanContainerBuildItem
or wrap the logic in a BeanContainerListenerBuildItem
.
The reason is simple - we need to make sure the CDI container is fully initialized and started.
However, it is safe to expect that the CDI container is fully initialized and running in a @Record(RUNTIME_INIT)
build step.
You can obtain a reference to the container via CDI.current()
or Quarkus-specific Arc.container()
.
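As a hedged sketch only: the recorder class and the bean method below are hypothetical, and BeanContainer.beanInstance() is assumed to be available in the Quarkus version in use:
@Recorder
public class EchoRecorder {
    public void initialize(BeanContainer container, String value) {
        // The bean is looked up from the already started container and initialized
        // with a value that was recorded during augmentation (init() is hypothetical).
        container.beanInstance(Echo.class).init(value);
    }
}

@BuildStep
@Record(ExecutionTime.STATIC_INIT)
void initializeEcho(EchoRecorder recorder, BeanContainerBuildItem beanContainer) {
    // Depending on BeanContainerBuildItem guarantees the CDI container is fully initialized
    recorder.initialize(beanContainer.getValue(), "value computed at build time");
}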
Don’t forget to make sure the bean state guarantees visibility, e.g. via the volatile
keyword.
There is one significant drawback of this "late initialization" approach. An uninitialized bean may be accessed by other extensions or application components that are instantiated during bootstrap. We’ll cover a more robust solution in Synthetic beans. |
Default beans
A very useful pattern for creating such beans, while also giving application code the ability to easily override some of them with custom implementations, is to use
the @DefaultBean
annotation that Quarkus provides.
This is best explained with an example.
Let us assume that the Quarkus extension needs to provide a Tracer
bean which application code is meant to inject into its own beans.
@Dependent
public class TracerConfiguration {
@Produces
public Tracer tracer(Reporter reporter, Configuration configuration) {
return new Tracer(reporter, configuration);
}
@Produces
@DefaultBean
public Configuration configuration() {
// create a Configuration
}
@Produces
@DefaultBean
public Reporter reporter(){
// create a Reporter
}
}
If for example application code wants to use Tracer
, but also needs to use a custom Reporter
bean, such a requirement could easily be done using something like:
@Dependent
public class CustomTracerConfiguration {
@Produces
public Reporter reporter(){
// create a custom Reporter
}
}
How to Override a Bean Defined by a Library/Quarkus Extension that doesn’t use @DefaultBean
Although @DefaultBean
is the recommended approach, it is also possible for application code to override beans provided by an extension by marking beans as a CDI @Alternative
and including @Priority
annotation.
Let’s show a simple example.
Suppose we work on an imaginary "quarkus-parser" extension and we have a default bean implementation:
@Dependent
class Parser {
String[] parse(String expression) {
return expression.split("::");
}
}
And our extension also consumes this parser:
@ApplicationScoped
class ParserService {
@Inject
Parser parser;
//...
}
Now, if a user or even some other extension needs to override the default implementation of the Parser
the simplest solution is to use CDI @Alternative
+ @Priority
:
@Alternative 1
@Priority(1) 2
@Singleton
class MyParser extends Parser {
String[] parse(String expression) {
// my super impl...
}
}
1 | MyParser is an alternative bean. |
2 | Enables the alternative. The priority could be any number to override the default bean but if there are multiple alternatives the highest priority wins. |
CDI alternatives are only considered during injection and type-safe resolution. For example the default implementation would still receive observer notifications. |
Synthetic beans
Sometimes it is very useful to be able to register a synthetic bean. Bean attributes of a synthetic bean are not derived from a java class, method or field. Instead, the attributes are specified by an extension.
Since the CDI container does not control the instantiation of a synthetic bean the dependency injection and other services (such as interceptors) are not supported. In other words, it’s up to the extension to provide all required services to a synthetic bean instance. |
There are several ways to register a synthetic bean in Quarkus. In this chapter, we will cover a use case that can be used to initialize extension beans in a safe manner (compared to Bean initialization).
The SyntheticBeanBuildItem
can be used to register a synthetic bean:
-
whose instance can be easily produced through a bytecode recording,
-
to provide a "context" bean that holds all the information collected during augmentation so that the real components do not need any "late initialization" because they can inject the context bean directly.
@BuildStep
@Record(STATIC_INIT)
SyntheticBeanBuildItem syntheticBean(TestRecorder recorder) {
return SyntheticBeanBuildItem.configure(Foo.class).scope(Singleton.class)
.runtimeValue(recorder.createFoo("parameters are recorded in the bytecode")) 1
.done();
}
1 | The string value is recorded in the bytecode and used to initialize the instance of Foo . |
@BuildStep
@Record(STATIC_INIT)
SyntheticBeanBuildItem syntheticBean(TestRecorder recorder) {
return SyntheticBeanBuildItem.configure(TestContext.class).scope(Singleton.class)
.runtimeValue(recorder.createContext("parameters are recorded in the bytecode")) 1
.done();
}
1 | The "real" components can inject the TestContext directly. |
Some types of extensions
There exist multiple stereotypes of extension; let’s list a few.
- Bare library running
-
This is the least sophisticated kind of extension. It consists of a set of patches to make sure a library runs on GraalVM. If possible, contribute these patches upstream, not in extensions. Second best is to write Substrate VM substitutions, which are patches applied during native image compilation.
- Get a framework running
-
A framework at runtime typically reads configuration, scans the classpath and classes for metadata (annotations, getters etc.), builds a metamodel on top of which it runs, finds options via the service loader pattern, prepares invocation calls (reflection), proxies interfaces, etc. These operations should be done at build time and the metamodel passed to the recorder DSL that will generate classes that will be executed at runtime and boot the framework.
- Get a CDI portable extension running
-
The CDI portable extension model is very flexible. Too flexible to benefit from the build time boot promoted by Quarkus. Most extensions we have seen do not make use of these extreme flexibility capabilities. The way to port a CDI extension to Quarkus is to rewrite it as a Quarkus extension which will define the various beans at build time (deployment time in extension parlance).
Technical aspect
Three Phases of Bootstrap and Quarkus Philosophy
There are three distinct bootstrap phases of a Quarkus app:
- Augmentation
-
This is the first phase, and is done by the Build Step Processors. These processors have access to Jandex annotation information and can parse any descriptors and read annotations, but should not attempt to load any application classes. The output of these build steps is some recorded bytecode, using an extension of the ObjectWeb ASM project called Gizmo, that is used to actually bootstrap the application at runtime. Depending on the
io.quarkus.deployment.annotations.ExecutionTime
value of the @io.quarkus.deployment.annotations.Record
annotation associated with the build step, the step may be run in a different JVM based on the following two modes. - Static Init
-
If bytecode is recorded with
@Record(STATIC_INIT)
then it will be executed from a static init method on the main class. For a native executable build, this code is executed in a normal JVM as part of the native build process, and any retained objects that are produced in this stage will be directly serialized into the native executable via an image mapped file. This means that if a framework can boot in this phase then it will have its booted state directly written to the image, and so the boot code does not need to be executed when the image is started.
There are some restrictions on what can be done in this stage as the Substrate VM disallows some objects in the native executable. For example you should not attempt to listen on a port or start threads in this phase. In addition, it is disallowed to read run time configuration during static initialization.
In non-native pure JVM mode, there is no real difference between Static and Runtime Init, except that Static Init is always executed first. This mode benefits from the same build phase augmentation as native mode as the descriptor parsing and annotation scanning are done at build time and any associated class/framework dependencies can be removed from the build output jar. In servers like WildFly, deployment related classes such as XML parsers hang around for the life of the application, using up valuable memory. Quarkus aims to eliminate this, so that the only classes loaded at runtime are actually used at runtime.
As an example, the only reason that a Quarkus application should load an XML parser is if the user is using XML in their application. Any XML parsing of configuration should be done in the Augmentation phase.
- Runtime Init
-
If bytecode is recorded with
@Record(RUNTIME_INIT)
then it is executed from the application’s main method. This code will be run on native executable boot. In general as little code as possible should be executed in this phase, and should be restricted to code that needs to open ports etc.
Pushing as much as possible into the @Record(STATIC_INIT)
phase allows for two different optimizations:
-
In both native executable and pure JVM mode this allows the app to start as fast as possible since processing was done during build time. This also minimizes the classes/native code needed in the application to pure runtime related behaviors.
-
Another benefit with native executable mode is that Substrate can more easily eliminate features that are not used. If features are directly initialized via bytecode, Substrate can detect that a method is never called and eliminate that method. If config is read at runtime, Substrate cannot reason about the contents of the config and so needs to keep all features in case they are required.
Project setup
Your extension project should be set up as a multi-module project with two submodules:
-
A deployment time submodule that handles the build time processing and bytecode recording.
-
A runtime submodule that contains the runtime behavior that will provide the extension behavior in the native executable or runtime JVM.
Your runtime artifact should depend on io.quarkus:quarkus-core
, and possibly the runtime artifacts of other Quarkus
modules if you want to use functionality provided by them.
Your deployment time module should depend on io.quarkus:quarkus-core-deployment
, your runtime artifact,
and the deployment artifacts of any other Quarkus extensions your own extension depends on. This is essential, otherwise any transitively
pulled in extensions will not provide their full functionality.
The Maven and Gradle plugins will validate this for you and alert you to any deployment artifacts you might have forgotten to add. |
Under no circumstances can the runtime module depend on a deployment artifact. This would result in pulling all the deployment time code into runtime scope, which defeats the purpose of having the split.
Using Maven
You will need to include the io.quarkus:quarkus-extension-maven-plugin
and configure the maven-compiler-plugin
to detect the quarkus-extension-processor
annotation processor to collect and generate the necessary Quarkus extension metadata for the extension artifacts. If you are using the Quarkus parent pom, the correct configuration is inherited automatically.
You may want to use the |
By convention, the deployment time artifact has the -deployment suffix. |
<dependencies>
<dependency>
<groupId>io.quarkus</groupId>
<artifactId>quarkus-core</artifactId>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>io.quarkus</groupId>
<artifactId>quarkus-extension-maven-plugin</artifactId>
<!-- Executions configuration can be inherited from quarkus-build-parent -->
<executions>
<execution>
<goals>
<goal>extension-descriptor</goal>
</goals>
<configuration>
<deployment>${project.groupId}:${project.artifactId}-deployment:${project.version}</deployment>
</configuration>
</execution>
</executions>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<configuration>
<annotationProcessorPaths>
<path>
<groupId>io.quarkus</groupId>
<artifactId>quarkus-extension-processor</artifactId>
</path>
</annotationProcessorPaths>
</configuration>
</plugin>
</plugins>
</build>
The above |
You will also need to configure the maven-compiler-plugin
of the deployment module to detect the quarkus-extension-processor
annotation processor.
<dependencies>
<dependency>
<groupId>io.quarkus</groupId>
<artifactId>quarkus-core-deployment</artifactId>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<configuration>
<annotationProcessorPaths>
<path>
<groupId>io.quarkus</groupId>
<artifactId>quarkus-extension-processor</artifactId>
</path>
</annotationProcessorPaths>
</configuration>
</plugin>
</plugins>
</build>
Create new Quarkus Core extension modules using Maven
Quarkus provides the create-extension
Maven Mojo to initialize your extension project.
It will try to auto-detect its options:
-
from
quarkus
(Quarkus Core) or quarkus/extensions
directory, it will use the 'Quarkus Core' extension layout and defaults. -
with
-DgroupId=io.quarkiverse.[extensionId]
, it will use the 'Quarkiverse' extension layout and defaults. -
in other cases it will use the 'Standalone' extension layout and defaults.
-
we may introduce other layout types in the future.
If you do not specify any parameters, the interactive mode is used. |
As an example, let’s add a new extension called my-ext
to the Quarkus source tree:
git clone https://github.com/quarkusio/quarkus.git
cd quarkus
mvn {quarkus-platform-groupid}:quarkus-maven-plugin:{quarkus-version}:create-extension -N \
-DextensionId=my-ext \
-DextensionName="My Extension" \
-DextensionDescription="Do something useful."
By default, the |
The extension description is important as it is displayed on https://code.quarkus.io/, when listing extensions with the Quarkus CLI, etc. |
The above sequence of commands does the following:
-
Creates four new Maven modules:
-
quarkus-my-ext-parent
in theextensions/my-ext
directory -
quarkus-my-ext
in theextensions/my-ext/runtime
directory -
quarkus-my-ext-deployment
in theextensions/my-ext/deployment
directory; a basicMyExtProcessor
class is generated in this module. -
quarkus-my-ext-integration-test
in theintegration-tests/my-ext/deployment
directory; an empty Jakarta REST Resource class and two test classes (for JVM mode and native mode) are generated in this module.
-
-
Links these modules where necessary:
-
quarkus-my-ext-parent
is added to the<modules>
ofquarkus-extensions-parent
-
quarkus-my-ext
is added to the<dependencyManagement>
of the Quarkus BOM (Bill of Materials)bom/application/pom.xml
-
quarkus-my-ext-deployment
is added to the<dependencyManagement>
of the Quarkus BOM (Bill of Materials)bom/application/pom.xml
-
quarkus-my-ext-integration-test
is added to the<modules>
ofquarkus-integration-tests-parent
-
You also have to fill in the quarkus-extension.yaml template file that describes your extension, inside the runtime module. |
This is the quarkus-extension.yaml
template of the quarkus-agroal
extension; you can use it as an example:
name: "Agroal - Database connection pool" 1
metadata:
keywords: 2
- "agroal"
- "database-connection-pool"
- "datasource"
- "jdbc"
guide: "https://quarkus.io/guides/datasource" 3
categories: 4
- "data"
status: "stable" 5
1 | the name of the extension that will be displayed to users |
2 | keywords that can be used to find the extension in the extension catalog |
3 | link to the extension’s guide or documentation |
4 | categories under which the extension should appear on code.quarkus.io, could be omitted, in which case the extension will still be listed but not under any specific category |
5 | maturity status, which could be stable , preview or experimental , evaluated by extension maintainers |
The |
Please refer to CreateExtensionMojo JavaDoc for all the available options of the mojo.
Using Gradle
You will need to apply the io.quarkus.extension
plugin in the runtime
module of your extension project.
The plugin includes the extensionDescriptor
task that will generate META-INF/quarkus-extension.properties
and META-INF/quarkus-extension.yml
files.
The plugin also enables the io.quarkus:quarkus-extension-processor
annotation processor in both deployment
and runtime
modules to collect and generate the rest of the Quarkus extension metadata.
The name of the deployment module can be configured in the plugin by setting the deploymentModule
property. The property is set to deployment
by default:
plugins {
id 'java'
id 'io.quarkus.extension'
}
quarkusExtension {
deploymentModule = 'deployment'
}
dependencies {
implementation platform('io.quarkus:quarkus-bom:{quarkus-version}')
}
Build Step Processors
Work is done at augmentation time by build steps which produce and consume build items. The build steps found in the deployment modules that correspond to the extensions in the project build are automatically wired together and executed to produce the final build artifact(s).
Build steps
A build step is a non-static method which is annotated with the @io.quarkus.deployment.annotations.BuildStep
annotation.
Each build step may consume items that are produced by earlier stages, and produce items that can be consumed by later stages. Build steps are normally only run when they produce a build item that is
ultimately consumed by another step.
Build steps are normally placed on plain classes within an extension’s deployment module. The classes are automatically instantiated during the augmentation process and utilize injection.
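For example, a minimal (hypothetical) processor class with a single build step could look like this; FeatureBuildItem registers the extension in the list of features logged at startup:
public class MyExtProcessor {

    @BuildStep
    FeatureBuildItem feature() {
        // "my-ext" is a hypothetical feature name
        return new FeatureBuildItem("my-ext");
    }
}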
Build items
Build items are concrete, final subclasses of the abstract io.quarkus.builder.item.BuildItem
class. Each build item represents
some unit of information that must be passed from one stage to another. The base BuildItem
class may not itself be directly
subclassed; rather, there are abstract subclasses for each of the kinds of build item that may be created:
simple build items, multi build items, and empty build items.
Think of build items as a way for different extensions to communicate with one another. For example, a build item can:
-
expose the fact that a database configuration exists
-
consume that database configuration (e.g. a connection pool extension or an ORM extension)
-
ask an extension to do work for another extension: e.g. an extension wanting to define a new CDI bean and asking the ArC extension to do so
This is a very flexible mechanism.
Build steps are executed if and only if they produce build items that are (transitively) needed by other build steps. Make sure your build step produces a build item, otherwise you should probably produce either a ValidationErrorBuildItem or an ArtifactResultBuildItem (see the sections below). |
Simple build items
Simple build items are final classes which extend io.quarkus.builder.item.SimpleBuildItem
. Simple build items may only
be produced by one step in a given build; if multiple steps in a build declare that they produce the same simple build item,
an error is raised. Any number of build steps may consume a simple build item. A build step which consumes a simple
build item will always run after the build step which produced that item.
/**
* The build item which represents the Jandex index of the application,
* and would normally be used by many build steps to find usages
* of annotations.
*/
public final class ApplicationIndexBuildItem extends SimpleBuildItem {
private final Index index;
public ApplicationIndexBuildItem(Index index) {
this.index = index;
}
public Index getIndex() {
return index;
}
}
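As a sketch of a consuming step, the following (hypothetical) build step uses the index to register annotated classes for reflection; the annotation name is made up, and the ReflectiveClassBuildItem builder API may vary slightly across Quarkus versions:
@BuildStep
void registerAnnotatedClassesForReflection(ApplicationIndexBuildItem applicationIndex,
        BuildProducer<ReflectiveClassBuildItem> reflectiveClasses) {
    // Hypothetical marker annotation scanned from the application index
    DotName annotation = DotName.createSimple("org.acme.MyAnnotation");
    for (AnnotationInstance instance : applicationIndex.getIndex().getAnnotations(annotation)) {
        if (instance.target().kind() == AnnotationTarget.Kind.CLASS) {
            reflectiveClasses.produce(
                    ReflectiveClassBuildItem.builder(instance.target().asClass().name().toString()).build());
        }
    }
}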
Multi build items
Multiple or "multi" build items are final classes which extend io.quarkus.builder.item.MultiBuildItem
. Any number of
multi build items of a given class may be produced by any number of steps, but any steps which consume multi build items
will only run after every step which can produce them has run.
public final class ServiceWriterBuildItem extends MultiBuildItem {
private final String serviceName;
private final List<String> implementations;
public ServiceWriterBuildItem(String serviceName, String... implementations) {
this.serviceName = serviceName;
// Make sure it's immutable
this.implementations = Collections.unmodifiableList(
Arrays.asList(
implementations.clone()
)
);
}
public String getServiceName() {
return serviceName;
}
public List<String> getImplementations() {
return implementations;
}
}
/**
* This build step produces a single multi build item that declares two
* providers of one configuration-related service.
*/
@BuildStep
public ServiceWriterBuildItem registerOneService() {
return new ServiceWriterBuildItem(
Converter.class.getName(),
MyFirstConfigConverterImpl.class.getName(),
MySecondConfigConverterImpl.class.getName()
);
}
/**
* This build step produces several multi build items that declare multiple
* providers of multiple configuration-related services.
*/
@BuildStep
public void registerSeveralServices(
BuildProducer<ServiceWriterBuildItem> providerProducer
) {
providerProducer.produce(new ServiceWriterBuildItem(
Converter.class.getName(),
MyThirdConfigConverterImpl.class.getName(),
MyFourthConfigConverterImpl.class.getName()
));
providerProducer.produce(new ServiceWriterBuildItem(
ConfigSource.class.getName(),
MyConfigSourceImpl.class.getName()
));
}
/**
* This build step aggregates all the produced service providers
* and outputs them as resources.
*/
@BuildStep
public void produceServiceFiles(
List<ServiceWriterBuildItem> items,
BuildProducer<GeneratedResourceBuildItem> resourceProducer
) throws IOException {
// Aggregate all the providers
Map<String, Set<String>> map = new HashMap<>();
for (ServiceWriterBuildItem item : items) {
String serviceName = item.getServiceName();
for (String implName : item.getImplementations()) {
map.computeIfAbsent(
serviceName,
k -> new LinkedHashSet<>()
).add(implName);
}
}
// Now produce the resource(s) for the SPI files
for (Map.Entry<String, Set<String>> entry : map.entrySet()) {
String serviceName = entry.getKey();
try (ByteArrayOutputStream os = new ByteArrayOutputStream()) {
try (OutputStreamWriter w = new OutputStreamWriter(os, StandardCharsets.UTF_8)) {
for (String implName : entry.getValue()) {
w.write(implName);
w.write(System.lineSeparator());
}
w.flush();
}
resourceProducer.produce(
new GeneratedResourceBuildItem(
"META-INF/services/" + serviceName,
os.toByteArray()
)
);
}
}
}
Empty build items
Empty build items are final (usually empty) classes which extend io.quarkus.builder.item.EmptyBuildItem
.
They represent build items that don’t actually carry any data, and allow such items to be produced and consumed
without having to instantiate empty classes. They cannot themselves be instantiated.
As they cannot be instantiated, they cannot be injected by any means, nor be returned by a build step (or via a BuildProducer
).
To produce an empty build item you must annotate the build step with @Produce(MyEmptyBuildItem.class)
and to consume it by @Consume(MyEmptyBuildItem.class)
.
public final class NativeImageBuildItem extends EmptyBuildItem {
// empty
}
Empty build items can represent "barriers" which can impose ordering between steps. They can also be used in the same way that popular build systems use "pseudo-targets", which is to say that the build item can represent a conceptual goal that does not have a concrete representation.
/**
* Contrived build step that produces the native image on disk. The main augmentation
* step (which is run by Maven or Gradle) would be declared to consume this empty item,
* causing this step to be run.
*/
@BuildStep
@Produce(NativeImageBuildItem.class)
void produceNativeImage() {
// ...
// (produce the native image)
// ...
}
/**
* This would always run after {@link #produceNativeImage()} completes, producing
* an instance of {@code SomeOtherBuildItem}.
*/
@BuildStep
@Consume(NativeImageBuildItem.class)
SomeOtherBuildItem secondBuildStep() {
return new SomeOtherBuildItem("foobar");
}
Validation Error build items
They represent build items with validation errors that make the build fail. These build items are consumed during the initialization of the CDI container.
@BuildStep
void checkCompatibility(Capabilities capabilities, BuildProducer<ValidationErrorBuildItem> validationErrors) {
if (capabilities.isPresent(Capability.RESTEASY_REACTIVE)
&& capabilities.isPresent(Capability.RESTEASY)) {
validationErrors.produce(new ValidationErrorBuildItem(
new ConfigurationException("Cannot use both RESTEasy Classic and Reactive extensions at the same time")));
}
}
Artifact Result build items
They represent build items containing the runnable artifact generated by the build, such as an uberjar or thin jar. These build items can also be used to always execute a build step without needing to produce anything.
@BuildStep
@Produce(ArtifactResultBuildItem.class)
void runBuildStepThatProducesNothing() {
// ...
}
Injection
Classes which contain build steps support the following types of injection:
-
Constructor parameter injection
-
Field injection
-
Method parameter injection (for build step methods only)
Build step classes are instantiated and injected for each build step invocation, and are discarded afterwards. State should only be communicated between build steps by way of build items, even if the steps are on the same class.
Final fields are not considered for injection, but can be populated by way of constructor parameter injection if desired. Static fields are never considered for injection. |
The types of values that can be injected include:
-
build items produced by previous build steps
-
BuildProducer instances used to produce items for subsequent build steps
-
configuration types
-
recorder (template) objects used for bytecode recording
Objects which are injected into a build step method or its class must not be used outside that method’s execution.
Injection is resolved at compile time via an annotation processor, and the resulting code does not have permission to inject private fields or invoke private methods. |
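As a sketch, the following build step relies on method parameter injection for all of these; MyExtBuildTimeConfig is a hypothetical configuration mapping:
public class MyInjectionProcessor {

    @BuildStep
    void process(ApplicationIndexBuildItem index,                      // simple build item from an earlier step
            List<ServiceWriterBuildItem> services,                     // all multi build items of this type
            MyExtBuildTimeConfig config,                               // hypothetical build time configuration
            BuildProducer<GeneratedResourceBuildItem> resources) {     // producer for later steps
        // Injected values must only be used within this method invocation
    }
}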
Producing values
A build step may produce values for subsequent steps in several possible ways:
-
By returning a simple or multi build item instance
-
By returning a
List
of a multi build item class -
By injecting a
BuildProducer
of a simple or multi build item class -
By annotating the method with
@io.quarkus.deployment.annotations.Produce
, giving the class name of an empty build item
If a simple build item is declared on a build step, it must be produced during that build step, otherwise an error will result. Build producers, which are injected into steps, must not be used outside that step.
Note that a @BuildStep
method will only be called if it produces something that another consumer or the final output
requires. If there is no consumer for a particular item then it will not be produced. What is required will depend on
the final target that is being produced. For example, when running in developer mode the final output will not ask
for GraalVM-specific build items such as ReflectiveClassBuildItem
, so methods that only produce these
items will not be invoked.
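For instance, returning a List of a multi build item class produces several items at once; this sketch reuses the ServiceWriterBuildItem from the earlier example:
@BuildStep
List<ServiceWriterBuildItem> registerConfigServices() {
    // Each element of the returned list becomes a separate multi build item
    return List.of(
            new ServiceWriterBuildItem(Converter.class.getName(), MyFirstConfigConverterImpl.class.getName()),
            new ServiceWriterBuildItem(ConfigSource.class.getName(), MyConfigSourceImpl.class.getName()));
}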
Consuming values
A build step may consume values from previous steps in the following ways:
-
By injecting a simple build item
-
By injecting an
Optional
of a simple build item class -
By injecting a
List
of a multi build item class -
By annotating the method with
@io.quarkus.deployment.annotations.Consume
, giving the class name of an empty build item
Normally it is an error for a step which is included to consume a simple build item that is not produced by any other
step. In this way, it is guaranteed that all the declared values will be present and non-null
when a step is run.
Sometimes a value isn’t necessary for the build to complete, but might inform some behavior of the build step if it is present. In this case, the value can be optionally injected.
Multi build values are always considered optional. If not present, an empty list will be injected. |
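A sketch of optional consumption; MyMetadataBuildItem and its isValid() accessor are hypothetical:
@BuildStep
void validateMetadata(Optional<MyMetadataBuildItem> metadata, // absent if no other step produced it
        BuildProducer<ValidationErrorBuildItem> validationErrors) {
    if (metadata.isPresent() && !metadata.get().isValid()) { // hypothetical accessor
        validationErrors.produce(new ValidationErrorBuildItem(
                new ConfigurationException("Invalid metadata supplied by another extension")));
    }
}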
Weak value production
Normally a build step is included whenever it produces any build item which is in turn consumed by any other build step. In this way, only the steps necessary to produce the final artifact(s) are included, and steps which pertain to extensions which are not installed or which only produce build items which are not relevant for the given artifact type are excluded.
For cases where this is not desired behavior, the @io.quarkus.deployment.annotations.Weak
annotation may be used. This
annotation indicates that the build step should not automatically be included solely on the basis of producing the annotated value.
/**
* This build step is only run if something consumes the ExecutorClassBuildItem.
*/
@BuildStep
void createExecutor(
@Weak BuildProducer<GeneratedClassBuildItem> classConsumer,
BuildProducer<ExecutorClassBuildItem> executorClassConsumer
) {
ClassWriter cw = new ClassWriter(Gizmo.ASM_API_VERSION);
String className = generateClassThatCreatesExecutor(cw); (1)
classConsumer.produce(new GeneratedClassBuildItem(true, className, cw.toByteArray()));
executorClassConsumer.produce(new ExecutorClassBuildItem(className));
}
1 | This method (not provided in this example) would generate the class using the ASM API. |
Certain types of build items are generally always consumed, such as generated classes or resources.
An extension might produce a build item along with a generated class to facilitate the usage
of that build item. Such a build step would use the @Weak
annotation on the generated class build item, while normally
producing the other build item. If the other build item is ultimately consumed by something, then the step would run
and the class would be generated. If nothing consumes the other build item, the step would not be included in the build
process.
In the example above, GeneratedClassBuildItem
would only be produced if ExecutorClassBuildItem
is consumed by
some other build step.
Note that when using bytecode recording, the implicitly generated class can be declared to be weak by
using the optional
attribute of the @io.quarkus.deployment.annotations.Record
annotation.
/**
* This build step is only run if something consumes the ExecutorBuildItem.
*/
@BuildStep
@Record(value = ExecutionTime.RUNTIME_INIT, optional = true) (1)
ExecutorBuildItem createExecutor( (2)
ExecutorRecorder recorder,
ThreadPoolConfig threadPoolConfig,
ShutdownContextBuildItem shutdownContextBuildItem,
LaunchModeBuildItem launchModeBuildItem
) {
return new ExecutorBuildItem(
recorder.setupRunTime(
shutdownContextBuildItem,
threadPoolConfig,
launchModeBuildItem.getLaunchMode()
)
);
}
1 | Note the optional attribute. |
2 | This example is using recorder proxies; see the section on bytecode recording for more information. |
Application Archives
The @BuildStep
annotation can also register marker files that determine which archives on the class path are considered
to be 'Application Archives', and will therefore get indexed. This is done via the applicationArchiveMarkers
. For
example the ArC extension registers META-INF/beans.xml
, which means that all archives on the class path with a beans.xml
file will be indexed.
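A sketch of such a registration is shown below; the marker file name is hypothetical, and the ApplicationArchive API is assumed here to expose the per-archive Jandex index:
@BuildStep(applicationArchiveMarkers = "META-INF/my-ext-marker.txt") // hypothetical marker file
void scanMarkedArchives(ApplicationArchivesBuildItem archives) {
    for (ApplicationArchive archive : archives.getApplicationArchives()) {
        // Each archive containing the marker file has been indexed and can be inspected here
        IndexView index = archive.getIndex();
        // ...
    }
}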
Using Thread’s Context Class Loader
The build step will be run with a TCCL that can load user classes from the deployment in a transformer-safe way. This class loader only lasts for the life of the augmentation, and is discarded afterwards. The classes will be loaded again in a different class loader at runtime. This means that loading a class during augmentation does not stop it from being transformed when running in the development/test mode.
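As an illustration, a build step could load an application class through the TCCL for inspection; the class name is hypothetical, and a real step would also produce a build item so that it gets executed:
@BuildStep
void inspectUserClass(CombinedIndexBuildItem index) throws ClassNotFoundException {
    // The TCCL set up for augmentation can load user classes in a transformer-safe way
    ClassLoader tccl = Thread.currentThread().getContextClassLoader();
    Class<?> userClass = tccl.loadClass("org.acme.MyUserClass"); // hypothetical application class
    // Inspect it here; the class is loaded again in a different class loader at runtime
}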
Adding external JARs to the indexer with IndexDependencyBuildItem
The index of scanned classes will not automatically include your external class dependencies.
To add dependencies, create a @BuildStep
that produces IndexDependencyBuildItem
objects, for a groupId
and artifactId
.
It is important to specify all the required artifacts to be added to the indexer. No artifacts are implicitly added transitively. |
The Amazon Alexa
extension adds dependent libraries from the Alexa SDK that are used in Jackson JSON transformations, in order for the reflective classes to be identified and included at BUILD_TIME
.
@BuildStep
void addDependencies(BuildProducer<IndexDependencyBuildItem> indexDependency) {
indexDependency.produce(new IndexDependencyBuildItem("com.amazon.alexa", "ask-sdk"));
indexDependency.produce(new IndexDependencyBuildItem("com.amazon.alexa", "ask-sdk-runtime"));
indexDependency.produce(new IndexDependencyBuildItem("com.amazon.alexa", "ask-sdk-model"));
indexDependency.produce(new IndexDependencyBuildItem("com.amazon.alexa", "ask-sdk-lambda-support"));
indexDependency.produce(new IndexDependencyBuildItem("com.amazon.alexa", "ask-sdk-servlet-support"));
indexDependency.produce(new IndexDependencyBuildItem("com.amazon.alexa", "ask-sdk-dynamodb-persistence-adapter"));
indexDependency.produce(new IndexDependencyBuildItem("com.amazon.alexa", "ask-sdk-apache-client"));
indexDependency.produce(new IndexDependencyBuildItem("com.amazon.alexa", "ask-sdk-model-runtime"));
}
With the artifacts added to the Jandex
indexer, you can now search the index to identify classes implementing an interface, subclasses of a specific class, or classes with a target annotation.
For example, the Jackson
extension uses code like below to search for annotations used in JSON deserialization,
and add them to the reflective hierarchy for BUILD_TIME
analysis.
DotName JSON_DESERIALIZE = DotName.createSimple(JsonDeserialize.class.getName());
IndexView index = combinedIndexBuildItem.getIndex();
// handle the various @JsonDeserialize cases
for (AnnotationInstance deserializeInstance : index.getAnnotations(JSON_DESERIALIZE)) {
AnnotationTarget annotationTarget = deserializeInstance.target();
if (CLASS.equals(annotationTarget.kind())) {
DotName dotName = annotationTarget.asClass().name();
Type jandexType = Type.create(dotName, Type.Kind.CLASS);
reflectiveHierarchyClass.produce(new ReflectiveHierarchyBuildItem(jandexType));
}
}
Visualizing build step dependencies
It can occasionally be useful to see a visual representation of the interactions between the various build steps. For such cases, adding -Dquarkus.builder.graph-output=build.dot
when building an application
will result in the creation of the build.dot
file in the project’s root directory. The file uses the Graphviz DOT format, so any DOT-compatible viewer can open it and show the actual visual representation.
Configuration
Configuration in Quarkus is based on SmallRye Config. All features provided by SmallRye Config are also available in Quarkus.
Extensions must use SmallRye Config @ConfigMapping to map the configuration required by the Extension. This will allow Quarkus to automatically expose an instance of the mapping to each configuration phase and generate the configuration documentation.
Config Phases
Configuration mappings are strictly bound by configuration phase, and attempting to access a configuration mapping from
outside its corresponding phase will result in an error. They dictate when its contained keys are read from the
configuration, and when they are available to applications. The phases defined by
io.quarkus.runtime.annotations.ConfigPhase
are as follows:
Phase name | Read & avail. at build time | Avail. at run time | Read during static init | Re-read during startup (native executable) | Notes |
---|---|---|---|---|---|
BUILD_TIME | ✓ | ✗ | ✗ | ✗ | Appropriate for things which affect build. |
BUILD_AND_RUN_TIME_FIXED | ✓ | ✓ | ✗ | ✗ | Appropriate for things which affect build and must be visible for run time code. Not read from config at run time. |
BOOTSTRAP | ✗ | ✓ | ✗ | ✓ | Used when runtime configuration needs to be obtained from an external system (for example a remote configuration source). |
RUN_TIME | ✗ | ✓ | ✓ | ✓ | Not available at build, read at start in all modes. |
For all cases other than the BUILD_TIME
case, the configuration mapping interface and all the configuration groups and types contained therein must be located in, or reachable from, the extension’s run time artifact. Configuration mappings of phase BUILD_TIME
may be located in or reachable from either of the extension’s run time or deployment artifacts.
Bootstrap configuration steps are executed during runtime init, before any other runtime steps. This means that code executed as part of this step cannot access anything that gets initialized in runtime init steps (runtime synthetic CDI beans being one such example).
Configuration Example
import io.quarkus.runtime.annotations.ConfigPhase;
import io.quarkus.runtime.annotations.ConfigRoot;
import io.smallrye.config.ConfigMapping;
import io.smallrye.config.WithDefault;
import java.io.File;
import java.util.logging.Level;
/**
* Logging configuration.
*/
@ConfigMapping(prefix = "quarkus.log") (1)
@ConfigRoot(phase = ConfigPhase.RUN_TIME) (2)
public interface LogConfiguration {
// ...
/**
* Configuration properties for the logging file handler.
*/
FileConfig file();
interface FileConfig {
/**
* Enable logging to a file.
*/
@WithDefault("true")
boolean enable();
/**
* The log format.
*/
@WithDefault("%d{yyyy-MM-dd HH:mm:ss,SSS} %h %N[%i] %-5p [%c{1.}] (%t) %s%e%n")
String format();
/**
* The level of logs to be written into the file.
*/
@WithDefault("ALL")
Level level();
/**
* The name of the file in which logs will be written.
*/
@WithDefault("application.log")
File path();
}
}
public class LoggingProcessor {
// ...
/*
* Logging configuration.
*/
LogConfiguration config; (3)
}
A configuration property name can be split into segments. For example, a property name like
quarkus.log.file.enable
can be split into the following segments:
-
quarkus
- a namespace claimed by Quarkus which is a prefix for@ConfigMapping
interfaces, -
log
- a name segment which corresponds to the prefix set in the interface annotated with@ConfigMapping
, -
file
- a name segment which corresponds to thefile
field in this class, -
enable
- a name segment which corresponds toenable
field inFileConfig
.
1 | The @ConfigMapping annotation indicates that the interface is a configuration mapping, in this case one which
corresponds to a quarkus.log segment. |
2 | The @ConfigRoot annotation indicates to which config phase the configuration applies. |
3 | Here the LoggingProcessor injects a LogConfiguration instance automatically by detecting the @ConfigRoot
annotation. |
A corresponding application.properties
for the above example could be:
quarkus.log.file.enable=true
quarkus.log.file.level=DEBUG
quarkus.log.file.path=/tmp/debug.log
Since format
is not defined in these properties, the default value from @WithDefault
will be used instead.
A configuration mapping name can contain an extra suffix segment for the case where there are configuration
mappings for multiple Config Phases. Classes which correspond to the BUILD_TIME
and BUILD_AND_RUN_TIME_FIXED
may end with BuildTimeConfig
or BuildTimeConfiguration
, classes which correspond to the RUN_TIME
phase
may end with RuntimeConfig
, RunTimeConfig
, RuntimeConfiguration
or RunTimeConfiguration
while classes which
correspond to the BOOTSTRAP
configuration may end with BootstrapConfig
or BootstrapConfiguration
.
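For example, a hypothetical extension could declare its build time configuration as follows, with the BuildTimeConfig suffix signalling the phase (the prefix, interface name and property are made up):
@ConfigMapping(prefix = "quarkus.my-ext")
@ConfigRoot(phase = ConfigPhase.BUILD_TIME)
public interface MyExtBuildTimeConfig {

    /**
     * Whether code generation is enabled at build time.
     */
    @WithDefault("true")
    boolean codegenEnabled();
}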
Configuration Reference Documentation
The configuration is an important part of each extension and therefore needs to be properly documented. Each configuration property should have a proper Javadoc comment.
While it is handy to have the documentation available when coding, the configuration documentation must also be available in the extension guides. The Quarkus build automatically generates the configuration documentation based on the Javadoc comments, but it needs to be explicitly included in each guide.
Writing the documentation
Each configuration property requires a Javadoc comment explaining its purpose.
The first sentence should be meaningful and self-contained as it is included in the summary table. |
While standard Javadoc comments are perfectly fine for simple documentation (recommended even), AsciiDoc is more suitable for tips, source code extracts, lists and more:
/**
* Class name of the Hibernate ORM dialect. The complete list of bundled dialects is available in the
* https://docs.jboss.org/hibernate/stable/orm/javadocs/org/hibernate/dialect/package-summary.html[Hibernate ORM JavaDoc].
*
* [NOTE]
* ====
* Not all the dialects are supported in GraalVM native executables: we currently provide driver extensions for
* PostgreSQL, MariaDB, Microsoft SQL Server and H2.
* ====
*
* @asciidoclet
*/
Optional<String> dialect();
To use AsciiDoc, the Javadoc comment must be annotated with @asciidoclet
tag. This tag serves two purposes: it is
used as a marker for Quarkus generation tool, but it is also used by the javadoc
process for the Javadoc generation.
更详细的示例:
A more detailed example:
/**
* Name of the file containing the SQL statements to execute when Hibernate ORM starts.
* Its default value differs depending on the Quarkus launch mode:
*
* * In dev and test modes, it defaults to `import.sql`.
* Simply add an `import.sql` file in the root of your resources directory
* and it will be picked up without having to set this property.
* Pass `no-file` to force Hibernate ORM to ignore the SQL import file.
* * In production mode, it defaults to `no-file`.
* It means Hibernate ORM won't try to execute any SQL import file by default.
* Pass an explicit value to force Hibernate ORM to execute the SQL import file.
*
* If you need different SQL statements between dev mode, test (`@QuarkusTest`) and in production, use Quarkus
* https://quarkus.io/guides/config#configuration-profiles[configuration profiles facility].
*
* [source,property]
* .application.properties
* ----
* %dev.quarkus.hibernate-orm.sql-load-script = import-dev.sql
* %test.quarkus.hibernate-orm.sql-load-script = import-test.sql
* %prod.quarkus.hibernate-orm.sql-load-script = no-file
* ----
*
* [NOTE]
* ====
* Quarkus supports `.sql` file with SQL statements or comments spread over multiple lines.
* Each SQL statement must be terminated by a semicolon.
* ====
*
* @asciidoclet
*/
Optional<String> sqlLoadScript();
为了使缩进在 Javadoc 注释中得到遵守(跨多行的列表项或缩进的源代码),必须使用 // @formatter:off 和 // @formatter:on 标记禁用自动 Eclipse 格式化程序(该格式化程序会自动包含在构建中)。这些标记需要单独的注释行,并且 // 标记后必须留一个空格。
For indentation to be respected in the Javadoc comment (list items spread on multiple lines or indented
source code), the automatic Eclipse formatter must be disabled (the formatter is automatically included in the build)
with the // @formatter:off and // @formatter:on markers. These require separate comments and a mandatory space after the
// marker.
AsciiDoc 文档中不支持开放块 (--
)。其他所有类型的块(源代码、警告…)均受支持。
Open blocks (--
) are not supported in the AsciiDoc documentation. All the other types of blocks
(source, admonitions…) are supported.
默认情况下,文档生成器将使用带连字符的字段名作为配置键。 By default, the documentation generator will use the hyphenated field name as the configuration key.
可以为文档中显示的默认值编写一个文字说明;当默认值是生成出来的时,这很有用。 It is possible to write a textual explanation for the documentation default value, which is useful when that value is generated. |
Writing section documentation
要生成给定组的配置部分,请使用 @ConfigDocSection
注释:
To generate a configuration section of a given group, use the @ConfigDocSection
annotation:
/**
* Config group related configuration.
* Amazing introduction here
*/
@ConfigDocSection (1)
ConfigGroupConfig configGroup();
1 | This will add a section documentation for the configGroup config item in the generated documentation. The section
title and introduction will be derived from the javadoc of the configuration item. The first sentence from the javadoc
is considered as the section title and the remaining sentences used as section introduction. |
Generating the documentation
要生成文档:
To generate the documentation:
-
Execute
./mvnw -DquicklyDocs
-
Can be executed globally or in a specific extension directory (e.g.
extensions/mailer
).
文档在位于项目根目录的全局 target/asciidoc/generated/config/
中生成。
The documentation is generated in the global target/asciidoc/generated/config/
located at the root of the project.
Including the documentation in the extension guide
要将生成的配置参考文档包含在指南中,请使用:
To include the generated configuration reference documentation in a guide, use:
要仅包含一个特定的配置组:
To include only a specific config group:
例如,io.quarkus.vertx.http.runtime.FormAuthConfig
配置组将在名为 quarkus-vertx-http-config-group-form-auth-config.adoc
的文件中生成。
For example, the io.quarkus.vertx.http.runtime.FormAuthConfig
configuration group will be generated in a file named
quarkus-vertx-http-config-group-form-auth-config.adoc
.
一些建议:
A few recommendations:
-
opts=optional
is mandatory to not fail the build if only part of the configuration documentation has been generated. -
The documentation is generated with a title level of 2 (i.e.
==
). It may need an adjustment withleveloffset=+N
. -
The whole configuration documentation should not be included in the middle of the guide.
如果指南包含 application.properties
示例,则必须在代码片段下方加入提示:
If the guide includes an application.properties
example, a tip must be included just below the code snippet:
[TIP]
For more information about the extension configuration please refer to the <<configuration-reference,Configuration Reference>>.
在指南末尾,提供扩展的配置文档:
And at the end of the guide, the extensive configuration documentation:
[[configuration-reference]]
== Configuration Reference
在提交之前,所有文档都应生成并验证。
All documentation should be generated and validated before being committed.
Conditional Step Inclusion
只能在特定条件下包含给定的 @BuildStep
。 @BuildStep
注释有两个可选参数: onlyIf
和 onlyIfNot
。可以将这些参数设置为一个或多个实现 BooleanSupplier
的类。只有当方法返回 true
(对于 onlyIf
)或 false
(对于 onlyIfNot
) 时,才包含该构建步骤。
It is possible to only include a given @BuildStep
under certain conditions. The @BuildStep
annotation
has two optional parameters: onlyIf
and onlyIfNot
. These parameters can be set to one or more classes
which implement BooleanSupplier
. The build step will only be included when the method returns
true
(for onlyIf
) or false
(for onlyIfNot
).
条件类可以注入 configuration mappings,只要它们属于构建时阶段。条件类不提供运行时配置。
The condition class can inject configuration mappings as long as they belong to a build-time phase. Run time configuration is not available for condition classes.
条件类还可以注入类型为 io.quarkus.runtime.LaunchMode
的值。支持构造函数参数和字段注入。
The condition class may also inject a value of type io.quarkus.runtime.LaunchMode
.
Constructor parameter and field injection is supported.
@BuildStep(onlyIf = IsDevMode.class)
LogCategoryBuildItem enableDebugLogging() {
return new LogCategoryBuildItem("org.your.quarkus.extension", Level.DEBUG);
}
static class IsDevMode implements BooleanSupplier {
LaunchMode launchMode;
public boolean getAsBoolean() {
return launchMode == LaunchMode.DEVELOPMENT;
}
}
如果您需要使构建步骤有条件地依赖于其他扩展的存在或不存在,可以使用 [capabilities]。 |
If you need to make your build step conditional on the presence or absence of another extension, you can use [capabilities] for that. |
您还可以使用 @BuildSteps
向给定类中的所有构建步骤应用一组条件:
You can also apply a set of conditions to all build steps in a given class with @BuildSteps
:
@BuildSteps(onlyIf = MyDevModeProcessor.IsDevMode.class) (1)
class MyDevModeProcessor {
@BuildStep
SomeOutputBuildItem mainBuildStep(SomeOtherBuildItem input) { (2)
return new SomeOutputBuildItem(input.getValue());
}
@BuildStep
SomeOtherOutputBuildItem otherBuildStep(SomeOtherInputBuildItem input) { (3)
return new SomeOtherOutputBuildItem(input.getValue());
}
static class IsDevMode implements BooleanSupplier {
LaunchMode launchMode;
public boolean getAsBoolean() {
return launchMode == LaunchMode.DEVELOPMENT;
}
}
}
1 | This condition will apply to all methods defined in MyDevModeProcessor |
2 | The main build step will only be executed in dev mode. |
3 | The other build step will only be executed in dev mode. |
Bytecode Recording
构建过程的主要输出之一是记录的字节码。该字节码实际上设置了运行时环境。例如,为了启动 Undertow,生成的应用程序将具有直接注册所有 Servlet 实例然后启动 Undertow 的一些字节码。
One of the main outputs of the build process is recorded bytecode. This bytecode actually sets up the runtime environment. For example, in order to start Undertow, the resulting application will have some bytecode that directly registers all Servlet instances and then starts Undertow.
因为直接编写字节码很复杂,所以改为通过字节码记录器完成。在部署时,将对包含实际运行时逻辑的记录器对象进行调用,但这些调用不会像往常一样进行,而是会被拦截并记录(这就是名称的由来)。然后,此记录用于生成在运行时执行相同序列调用的字节码。这本质上是一种延迟执行形式,其中在部署时进行的调用被推迟到运行时。
As writing bytecode directly is complex, this is instead done via bytecode recorders. At deployment time, invocations are made on recorder objects that contain the actual runtime logic, but instead of these invocations proceeding as normal they are intercepted and recorded (hence the name). This recording is then used to generate bytecode that performs the same sequence of invocations at runtime. This is essentially a form of deferred execution where invocations made at deployment time get deferred until runtime.
让我们来看一个经典的“Hello World”类型的示例。要采用 Quarkus 方式实现此目的,我们将按如下方式创建一个记录器:
Let’s look at the classic 'Hello World' type example. To do this the Quarkus way we would create a recorder as follows:
@Recorder
class HelloRecorder {
public void sayHello(String name) {
System.out.println("Hello" + name);
}
}
然后创建一个使用此记录器的构建步骤:
And then create a build step that uses this recorder:
@Record(RUNTIME_INIT)
@BuildStep
public void helloBuildStep(HelloRecorder recorder) {
recorder.sayHello("World");
}
运行此构建步骤时,控制台不会打印任何内容。这是因为注入的 HelloRecorder 实际上是一个记录所有调用的代理。而如果我们运行生成的 Quarkus 程序,我们将看到 'Hello World' 打印到控制台。
When this build step is run nothing is printed to the console. This is because the HelloRecorder
that is injected is
actually a proxy that records all invocations. Instead, if we run the resulting Quarkus program we will see 'Hello World'
printed to the console.
记录器上的方法可以返回值,该返回值必须是可代理的(如果您想返回一个不可代理的项,请将其包装在 io.quarkus.runtime.RuntimeValue 中)。这些代理不能直接调用,但可以将其传递给其他记录器方法。这可以是任何记录器方法,包括来自其他 @BuildStep 方法的方法,因此一种常见模式是生成 BuildItem 实例来包装这些记录器调用的结果。
Methods on a recorder can return a value, which must be proxiable (if you want to return a non-proxiable item wrap it
in io.quarkus.runtime.RuntimeValue
). These proxies may not be invoked directly, however they can be passed
into other recorder methods. This can be any recorder method, including from other @BuildStep
methods, so a common pattern
is to produce BuildItem
instances that wrap the results of these recorder invocations.
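As a hedged illustration of this pattern (all class names below are invented for the example), a recorder method can return a RuntimeValue that a build step wraps in a build item, which other build steps can then pass back into further recorder calls:
@Recorder
public class MyServiceRecorder {
    public RuntimeValue<MyService> createService(String name) {
        // Executed at runtime; at deployment time this call is only recorded
        return new RuntimeValue<>(new MyService(name));
    }
}

// Build item carrying the (proxied) runtime value between build steps
public final class MyServiceBuildItem extends SimpleBuildItem {
    private final RuntimeValue<MyService> value;

    public MyServiceBuildItem(RuntimeValue<MyService> value) {
        this.value = value;
    }

    public RuntimeValue<MyService> getValue() {
        return value;
    }
}

@BuildStep
@Record(ExecutionTime.RUNTIME_INIT)
MyServiceBuildItem produceService(MyServiceRecorder recorder) {
    return new MyServiceBuildItem(recorder.createService("demo"));
}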
例如,为了对 Servlet 部署进行任意修改,Undertow 具有一个 ServletExtensionBuildItem,这是一个包装了 ServletExtension 实例的 MultiBuildItem。我可以在另一个模块中从记录器返回一个 ServletExtension,Undertow 会使用它并将其传递给启动 Undertow 的记录器方法。
For instance, in order to make arbitrary changes to a Servlet deployment Undertow has a ServletExtensionBuildItem
,
which is a MultiBuildItem
that wraps a ServletExtension
instance. I can return a ServletExtension
from a recorder
in another module, and Undertow will consume it and pass it into the recorder method that starts Undertow.
在运行时,将按生成顺序调用字节码。这意味着构建步骤依赖隐式控制了生成字节码的运行顺序。在上面的示例中,我们知道生成 ServletExtensionBuildItem 的字节码将在使用它的字节码之前运行。
At runtime the bytecode will be invoked in the order it is generated. This means that build step dependencies implicitly
control the order that generated bytecode is run. In the example above we know that the bytecode that produces a
ServletExtensionBuildItem
will be run before the bytecode that consumes it.
可以将以下对象传递给记录器:
The following objects can be passed to recorders:
-
Primitives
-
String
-
Class<?> objects
-
Objects returned from a previous recorder invocation
-
Objects with a no-arg constructor and getter/setters for all properties (or public fields)
-
Objects with a constructor annotated with
@RecordableConstructor
with parameter names that match field names (see the sketch after this list) -
Any arbitrary object via the
io.quarkus.deployment.recording.RecorderContext#registerSubstitution(Class, Class, Class)
mechanism -
Arrays, Lists and Maps of the above
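For example, a hedged sketch of the @RecordableConstructor case; the class and field names here are invented for illustration:
public class ServiceSettings {

    private final String name;
    private final int port;

    @RecordableConstructor
    public ServiceSettings(String name, int port) {
        // Parameter names must match the field/getter names so the recorder
        // can reconstruct the object at runtime
        this.name = name;
        this.port = port;
    }

    public String getName() {
        return name;
    }

    public int getPort() {
        return port;
    }
}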
在需要忽略待记录对象的某些字段的情况下(即在构建时设置的值不应在运行时反映出来),可以在该字段上使用 @IgnoreProperty 注解。 In cases where some fields of an object to be recorded should be ignored (i.e. the value set at build time should not be reflected at runtime), the @IgnoreProperty annotation can be placed on the field.
如果该类不能依赖 Quarkus,那么只要扩展实现了相应的记录注解 SPI,Quarkus 就可以使用任何自定义注解。 If the class cannot depend on Quarkus, then Quarkus can use any custom annotation, as long as the extension implements the corresponding recording annotations SPI.
同一个 SPI 也可以用来提供一个替代 @RecordableConstructor 的自定义注解。 This same SPI can also be used to provide a custom annotation that will substitute for @RecordableConstructor. |
Injecting Configuration into Recorders
阶段为 RUN_TIME 或 BUILD_AND_RUN_TIME_FIXED 的配置对象可以通过构造函数注入的方式注入到记录器中。只需创建一个接收记录器所需配置对象的构造函数即可。如果记录器有多个构造函数,您可以使用 @Inject 注解希望 Quarkus 使用的那个构造函数。如果记录器要注入运行时配置,但同时也在静态初始化时使用,那么它需要注入一个 RuntimeValue<ConfigObject>,该值仅在调用运行时方法时才会被设置。
Configuration objects with phase RUN_TIME
or BUILD_AND_RUN_TIME_FIXED
can be injected into recorders via constructor
injection. Just create a constructor that takes the configuration objects the recorder needs. If the recorder has multiple
constructors you can annotate the one you want Quarkus to use with @Inject
. If the recorder wants to inject runtime config
but is also used at static init time then it needs to inject a RuntimeValue<ConfigObject>
, this value will only be set
when the runtime methods are being invoked.
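A minimal sketch of such a recorder follows; MyBuildTimeConfig and MyRuntimeConfig (and its endpoint() accessor) are hypothetical configuration mappings used only for illustration:
@Recorder
public class MyExtensionRecorder {

    private final MyBuildTimeConfig buildTimeConfig;
    private final RuntimeValue<MyRuntimeConfig> runtimeConfig;

    // Quarkus injects the configuration objects through the constructor.
    // The RuntimeValue wrapper is only needed because this recorder is also
    // used at static init time, before runtime configuration is available.
    public MyExtensionRecorder(MyBuildTimeConfig buildTimeConfig, RuntimeValue<MyRuntimeConfig> runtimeConfig) {
        this.buildTimeConfig = buildTimeConfig;
        this.runtimeConfig = runtimeConfig;
    }

    public void initRuntime() {
        // Only access the runtime configuration from methods recorded for RUNTIME_INIT
        String endpoint = runtimeConfig.getValue().endpoint();
        // ... set up the runtime service using both configuration objects
    }
}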
RecorderContext
io.quarkus.deployment.recording.RecorderContext
提供了一些增强字节码记录的便利方法,其中包括为没有无参数构造函数的类型登记创建功能、登记一个对象置换(基本上从一个不可序列化的对象变换成一个可序列化对象,反之亦然)以及创建一个类代理。这个接口能够直接作为方法参数注入到任何一个 @Record
方法中。
io.quarkus.deployment.recording.RecorderContext
provides some convenience methods to enhance bytecode recording,
this includes the ability to register creation functions for classes without no-arg constructors, to register an object
substitution (basically a transformer from a non-serializable object to a serializable one and vice versa), and to create
a class proxy. This interface can be directly injected as a method parameter into any @Record
method.
使用给定的完全限定类名调用 classProxy
会创建一个 Class
实例,该实例可以传递到一个记录器方法中,并在运行时使用传递到 classProxy()
中的类名进行置换。然而,在大多数情况下不需要使用这个方法,因为直接在生成步骤的处理时间加载部署/应用类是安全的。因此,这个方法被废弃了。尽管如此,在一些情况下这个方法非常有用,比如引用利用`GeneratedClassBuildItem`在之前的生成步骤中生成的类。
Calling classProxy
with a given fully-qualified class name will create a Class
instance that can be passed into a recorder
method, and at runtime will be substituted with the class whose name was passed in to classProxy()
.
However, this method should not be needed in most use cases because directly loading deployment/application classes at processing time in build steps is safe.
Therefore, this method is deprecated.
Nonetheless, there are some use cases where this method comes in handy, such as referring to classes that were generated in previous build steps using GeneratedClassBuildItem
.
Runtime Classpath check
扩展经常需要一种方法来确定一个给定的类是否属于应用的运行类路径的一部分。扩展执行这个检查的正确方法是使用 io.quarkus.bootstrap.classloading.QuarkusClassLoader.isClassPresentAtRuntime
。
Extensions often need a way to determine whether a given class is part of the application’s runtime classpath.
The proper way for an extension to perform this check is to use io.quarkus.bootstrap.classloading.QuarkusClassLoader.isClassPresentAtRuntime
.
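For example, a build step could guard an optional integration on the presence of a third-party class; the class and bean names below are illustrative:
@BuildStep
void registerOptionalIntegration(BuildProducer<AdditionalBeanBuildItem> additionalBeans) {
    // Only register the integration bean if the library class is available
    // on the application's runtime classpath
    if (QuarkusClassLoader.isClassPresentAtRuntime("com.example.SomeLibraryClass")) {
        additionalBeans.produce(AdditionalBeanBuildItem.unremovableOf(SomeLibraryIntegration.class));
    }
}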
Printing step execution time
有时,了解每个启动任务(它是每次字节码记录的结果)在应用程序运行时花费的精确时间可能很有用。确定这个信息的简单方法是启动带有 -Dquarkus.debug.print-startup-times=true
系统属性的 Quarkus 应用程序。输出看起来会像这样:
At times, it can be useful to know how the exact time each startup task (which is the result of each bytecode recording) takes when the application is run.
The simplest way to determine this information is to launch the Quarkus application with the -Dquarkus.debug.print-startup-times=true
system property.
The output will look something like:
Build step LoggingResourceProcessor.setupLoggingRuntimeInit completed in: 42ms
Build step ConfigGenerationBuildStep.checkForBuildTimeConfigChange completed in: 4ms
Build step SyntheticBeansProcessor.initRuntime completed in: 0ms
Build step ConfigBuildStep.validateConfigProperties completed in: 1ms
Build step ResteasyStandaloneBuildStep.boot completed in: 95ms
Build step VertxHttpProcessor.initializeRouter completed in: 1ms
Build step VertxHttpProcessor.finalizeRouter completed in: 4ms
Build step LifecycleEventsBuildStep.startupEvent completed in: 1ms
Build step VertxHttpProcessor.openSocket completed in: 93ms
Build step ShutdownListenerBuildStep.setupShutdown completed in: 1ms
Contexts and Dependency Injection
Extension Points
作为基于 CDI 的运行时,Quarkus 扩展经常将 CDI bean 作为扩展行为的一部分。然而,Quarkus DI 解决方式不支持 CDI 可移植扩展。相反,Quarkus 扩展可以使用各种各样的 Build Time Extension Points。
As a CDI based runtime, Quarkus extensions often make CDI beans available as part of the extension behavior. However, Quarkus DI solution does not support CDI Portable Extensions. Instead, Quarkus extensions can make use of various Build Time Extension Points.
Quarkus Dev UI
你可以让你的扩展支持 Quarkus Dev UI 来提升开发人员体验。
You can make your extension support the Quarkus Dev UI for a greater developer experience.
Extension-defined endpoints
你的扩展可以添加额外的,非应用端点与用于健康、指标、OpenAPI、Swagger UI 等的端点一起提供服务。
Your extension can add additional, non-application endpoints to be served alongside endpoints for Health, Metrics, OpenAPI, Swagger UI, etc.
使用 NonApplicationRootPathBuildItem
定义一个端点:
Use a NonApplicationRootPathBuildItem
to define an endpoint:
@BuildStep
RouteBuildItem myExtensionRoute(NonApplicationRootPathBuildItem nonApplicationRootPathBuildItem) {
return nonApplicationRootPathBuildItem.routeBuilder()
.route("custom-endpoint")
.handler(new MyCustomHandler())
.displayOnNotFoundPage()
.build();
}
请注意,上述路径不以 '/' 开头,表明它是一个相对路径。上述 end-point 将相对于配置的非应用程序端点根来提供服务。默认情况下,非应用程序端点根是 /q
,这意味着找到的结果端点将位于 /q/custom-endpoint
。
Note that the path above does not start with a '/', indicating it is a relative path. The above
endpoint will be served relative to the configured non-application endpoint root. The non-application
endpoint root is /q
by default, which means the resulting endpoint will be found at /q/custom-endpoint
.
绝对路径以不同的方式处理。如果上述调用 route("/custom-endpoint")
,最终端点将会在 /custom-endpoint
处找到。
Absolute paths are handled differently. If the above called route("/custom-endpoint")
, the resulting
endpoint will be found at /custom-endpoint
.
如果一个扩展需要嵌套的非应用端点:
If an extension needs nested non-application endpoints:
@BuildStep
RouteBuildItem myNestedExtensionRoute(NonApplicationRootPathBuildItem nonApplicationRootPathBuildItem) {
return nonApplicationRootPathBuildItem.routeBuilder()
.nestedRoute("custom-endpoint", "deep")
.handler(new MyCustomHandler())
.displayOnNotFoundPage()
.build();
}
给定一个默认的非应用端点根目录 /q
,这会在 /q/custom-endpoint/deep
处创建一个端点。
Given a default non-application endpoint root of /q
, this will create an endpoint at /q/custom-endpoint/deep
.
绝对路径也会对嵌套端点产生影响。如果上述调用 nestedRoute("custom-endpoint", "/deep")
,最终端点将会在 /deep
处找到。
Absolute paths also have an impact on nested endpoints. If the above called nestedRoute("custom-endpoint", "/deep")
,
the resulting endpoint will be found at /deep
.
更多有关如何配置非应用根路径的详细信息,请参考 Quarkus Vertx HTTP configuration reference。
Refer to the Quarkus Vertx HTTP configuration reference for details on how the non-application root path is configured.
Extension Health Check
健康检查通过 quarkus-smallrye-health
扩展提供。它既提供了活动检查又提供了准备检查的能力。
Health checks are provided via the quarkus-smallrye-health
extension. It provides both liveness and readiness checks capabilities.
在编写一个扩展时,最好为扩展提供健康检查功能,它可以自动包含,而不需要开发人员自己编写。
When writing an extension, it’s beneficial to provide health checks for the extension, that can be automatically included without the developer needing to write their own.
为了提供健康检查,你应该:
In order to provide a health check, you should do the following:
-
Import the
quarkus-smallrye-health
extension as an optional dependency in your runtime module so it will not impact the size of the application if health check is not included. -
Create your health check following the SmallRye Health guide. We advise providing only readiness check for an extension (liveness check is designed to express the fact that an application is up and needs to be lightweight).
-
Import the
quarkus-smallrye-health-spi
library in your deployment module. -
Add a build step in your deployment module that produces a
HealthBuildItem
. -
Add a way to disable the extension health check via a config item
quarkus.<extension>.health.enabled
that should be enabled by default.
以下是来自 Agroal 扩展的示例,它提供了 DataSourceHealthCheck
来验证数据源的准备就绪。
Following is an example from the Agroal extension that provides a DataSourceHealthCheck
to validate the readiness of a datasource.
@BuildStep
HealthBuildItem addHealthCheck(AgroalBuildTimeConfig agroalBuildTimeConfig) {
return new HealthBuildItem("io.quarkus.agroal.runtime.health.DataSourceHealthCheck",
agroalBuildTimeConfig.healthEnabled);
}
Extension Metrics
quarkus-micrometer
扩展和 quarkus-smallrye-metrics
扩展提供收集指标的支持。作为兼容性备注,quarkus-micrometer
扩展将 MP 指标 API 调整为 Micrometer 库的基元,因此可以在不破坏依赖于 MP 指标 API 的代码的情况下启用 quarkus-micrometer
扩展。请注意,Micrometer 发出的指标是不同的,更多信息请参见 quarkus-micrometer
扩展文档。
The quarkus-micrometer
extension and the quarkus-smallrye-metrics
extension provide support for collecting metrics.
As a compatibility note, the quarkus-micrometer
extension adapts the MP Metrics API to Micrometer library primitives, so the quarkus-micrometer
extension can be enabled without breaking code that relies on the MP Metrics API.
Note that the metrics emitted by Micrometer are different, see the quarkus-micrometer
extension documentation for more information.
MP 指标 API 的兼容性层将来将移至不同的扩展。 |
The compatibility layer for MP Metrics APIs will move to a different extension in the future. |
扩展可以使用两个广泛的模式与可选指标扩展进行交互,以添加自己的指标:
There are two broad patterns that extensions can use to interact with an optional metrics extension to add their own metrics:
-
Consumer pattern: An extension declares a
MetricsFactoryConsumerBuildItem
and uses that to provide a bytecode recorder to the metrics extension. When the metrics extension has initialized, it will iterate over registered consumers to initialize them with aMetricsFactory
. This factory can be used to declare API-agnostic metrics, which can be a good fit for extensions that provide an instrumentable object for gathering statistics (e.g. Hibernate’sStatistics
class). -
Binder pattern: An extension can opt to use completely different gathering implementations depending on the metrics system. An
Optional<MetricsCapabilityBuildItem> metricsCapability
build step parameter can be used to declare or otherwise initialize API-specific metrics based on the active metrics extension (e.g. "smallrye-metrics" or "micrometer"). This pattern can be combined with the consumer pattern by usingMetricsFactory::metricsSystemSupported()
to test the active metrics extension within the recorder.
请记住,指标支持是可选的。扩展可以在其构建步骤中使用 Optional<MetricsCapabilityBuildItem> metricsCapability
参数来测试是否启用了指标扩展。考虑使用附加配置来控制指标的行为。例如,数据源指标可能开销很高,因此使用额外的配置标志对各个数据源启用指标收集。
Remember that support for metrics is optional. Extensions can use an Optional<MetricsCapabilityBuildItem> metricsCapability
parameter in their build step to test for the presence of an enabled metrics extension. Consider using additional configuration to control behavior of metrics. Datasource metrics can be expensive, for example, so additional configuration flags are used to enable metrics collection on individual datasources.
添加扩展的指标时,您可能会发现自己处于以下情况之一:
When adding metrics for your extension, you may find yourself in one of the following situations:
-
An underlying library used by the extension is using a specific Metrics API directly (either MP Metrics, Micrometer, or some other).
-
An underlying library uses its own mechanism for collecting metrics and makes them available at runtime using its own API, e.g. Hibernate’s
Statistics
class, or Vert.xMetricsOptions
. -
An underlying library does not provide metrics (or there is no library at all) and you want to add instrumentation.
Case 1: The library uses a metrics library directly
如果库直接使用指标 API,则有两个选项:
If the library directly uses a metrics API, there are two options:
-
Use an
Optional<MetricsCapabilityBuildItem> metricsCapability
parameter to test which metrics API is supported (e.g. "smallrye-metrics" or "micrometer") in your build step, and use that to selectively declare or initialize API-specific beans or build items. -
Create a separate build step that consumes a
MetricsFactory
, and use theMetricsFactory::metricsSystemSupported()
method within the bytecode recorder to initialize required resources if the desired metrics API is supported (e.g. "smallrye-metrics" or "micrometer").
如果不存在活动指标扩展或扩展不支持库所需的 API,则扩展可能需要提供后备。
Extensions may need to provide a fallback if there is no active metrics extension or the extension doesn’t support the API required by the library.
Case 2: The library provides its own metric API
库提供其自己的指标 API 的例子有两个:
There are two examples of a library providing its own metrics API:
-
The extension defines an instrumentable object as Agroal does with
io.agroal.api.AgroalDataSourceMetrics
, or -
The extension provides its own abstraction of metrics, as Jaeger does with
io.jaegertracing.spi.MetricsFactory
.
Observing instrumentable objects
让我们首先来看可检测对象(io.agroal.api.AgroalDataSourceMetrics)的情况。在这种情况下,您可以执行以下操作:
Let’s take the instrumentable object (io.agroal.api.AgroalDataSourceMetrics
) case first. In this case, you can do the following:
-
Define a
BuildStep
that produces aMetricsFactoryConsumerBuildItem
that uses aRUNTIME_INIT
orSTATIC_INIT
Recorder to define aMetricsFactory
consumer. For example, the following creates aMetricsFactoryConsumerBuildItem
if and only if metrics are enabled both for Agroal generally, and for a datasource specifically:[source, java]
@BuildStep @Record(ExecutionTime.RUNTIME_INIT) void registerMetrics(AgroalMetricsRecorder recorder, DataSourcesBuildTimeConfig dataSourcesBuildTimeConfig, BuildProducer<MetricsFactoryConsumerBuildItem> datasourceMetrics, List<AggregatedDataSourceBuildTimeConfigBuildItem> aggregatedDataSourceBuildTimeConfigs) { for (AggregatedDataSourceBuildTimeConfigBuildItem aggregatedDataSourceBuildTimeConfig : aggregatedDataSourceBuildTimeConfigs) { // Create a MetricsFactory consumer to register metrics for a data source // IFF metrics are enabled globally and for the data source // (they are enabled for each data source by default if they are also enabled globally) if (dataSourcesBuildTimeConfig.metricsEnabled && aggregatedDataSourceBuildTimeConfig.getJdbcConfig().enableMetrics.orElse(true)) { datasourceMetrics.produce(new MetricsFactoryConsumerBuildItem( recorder.registerDataSourceMetrics(aggregatedDataSourceBuildTimeConfig.getName()))); } } }
-
The associated recorder should use the provided
MetricsFactory
to register metrics. For Agroal, this means using theMetricFactory
API to observeio.agroal.api.AgroalDataSourceMetrics
methods. For example:
/* RUNTIME_INIT */
public Consumer<MetricsFactory> registerDataSourceMetrics(String dataSourceName) {
    return new Consumer<MetricsFactory>() {
        @Override
        public void accept(MetricsFactory metricsFactory) {
            String tagValue = DataSourceUtil.isDefault(dataSourceName) ? "default" : dataSourceName;
            AgroalDataSourceMetrics metrics = getDataSource(dataSourceName).getMetrics();

            // When using MP Metrics, the builder uses the VENDOR registry by default.
            metricsFactory.builder("agroal.active.count")
                    .description(
                            "Number of active connections. These connections are in use and not available to be acquired.")
                    .tag("datasource", tagValue)
                    .buildGauge(metrics::activeCount);
            ....
MetricsFactory 为指标注册提供了一个流式构建器,最后一步基于 Supplier 或 ToDoubleFunction 构建量表(gauge)或计数器。计时器可以包装 Callable、Runnable 或 Supplier 实现,也可以使用 TimeRecorder 来累积时间块。底层指标扩展将创建相应的工件来观察或测量所定义的函数。
The MetricsFactory
provides a fluid builder for registration of metrics, with the final step constructing gauges or counters based on a Supplier
or ToDoubleFunction
. Timers can either wrap Callable
, Runnable
, or Supplier
implementations, or can use a TimeRecorder
to accumulate chunks of time. The underlying metrics extension will create appropriate artifacts to observe or measure the defined functions.
Using a Metrics API-specific implementation
在某些情况下,可能更适合使用特定于某种指标 API 的实现。例如,Jaeger 定义了自己的指标接口 io.jaegertracing.spi.MetricsFactory,用于定义计数器和量表。从该接口到指标系统的直接映射将是最高效的。在这种情况下,重要的是隔离这些专门的实现并避免急切的类加载,以确保指标 API 保持为可选的编译时依赖项。
Using metrics-API specific implementations may be preferred in some cases. Jaeger, for example, defines its own metrics interface, io.jaegertracing.spi.MetricsFactory
, that it uses to define counters and gauges. A direct mapping from that interface to the metrics system will be the most efficient. In this case, it is important to isolate these specialized implementations and to avoid eager classloading to ensure the metrics API remains an optional, compile-time dependency.
可以在构建步骤中使用 Optional<MetricsCapabilityBuildItem> metricsCapability 来选择性地控制 bean 的初始化或其他构建项的生成。例如,Jaeger 扩展可以使用以下代码来控制特定指标 API 适配器的初始化:
Optional<MetricsCapabilityBuildItem> metricsCapability
can be used in the build step to selectively control initialization of beans or the production of other build items. The Jaeger extension, for example, can use the following to control initialization of specialized Metrics API adapters:
/* RUNTIME_INIT */
@BuildStep
@Record(ExecutionTime.RUNTIME_INIT)
void setupTracer(JaegerDeploymentRecorder jdr, JaegerBuildTimeConfig buildTimeConfig, JaegerConfig jaeger,
ApplicationConfig appConfig, Optional<MetricsCapabilityBuildItem> metricsCapability) {
// Indicates that this extension would like the SSL support to be enabled
extensionSslNativeSupport.produce(new ExtensionSslNativeSupportBuildItem(Feature.JAEGER.getName()));
if (buildTimeConfig.enabled) {
// To avoid dependency creep, use two separate recorder methods for the two metrics systems
if (buildTimeConfig.metricsEnabled && metricsCapability.isPresent()) {
if (metricsCapability.get().metricsSupported(MetricsFactory.MICROMETER)) {
jdr.registerTracerWithMicrometerMetrics(jaeger, appConfig);
} else {
jdr.registerTracerWithMpMetrics(jaeger, appConfig);
}
} else {
jdr.registerTracerWithoutMetrics(jaeger, appConfig);
}
}
}
使用 MetricsFactory 的记录器可以类似地使用 MetricsFactory::metricsSystemSupported() 在字节码记录期间控制指标对象的初始化。
A recorder consuming a MetricsFactory can similarly use MetricsFactory::metricsSystemSupported() to control the initialization of metrics objects during bytecode recording.
Case 3: It is necessary to collect metrics within the extension code
要从头开始定义您自己的指标,您有两个基本选择:使用通用的 MetricFactory 构建器,或遵循绑定器(binder)模式,为已启用的指标扩展创建特定的检测。
To define your own metrics from scratch, you have two basic options: Use the generic MetricFactory
builders, or follow the binder pattern, and create instrumentation specific to the enabled metrics extension.
要使用与具体指标系统无关的 MetricFactory API,您的处理器可以定义一个 BuildStep,它生成一个 MetricsFactoryConsumerBuildItem,该构建项使用 RUNTIME_INIT 或 STATIC_INIT 记录器来定义一个 MetricsFactory 消费者。
To use the extension-agnostic MetricFactory
API, your processor can define a BuildStep
that produces a MetricsFactoryConsumerBuildItem
that uses a RUNTIME_INIT
or STATIC_INIT
Recorder to define a MetricsFactory
consumer.
@BuildStep
@Record(ExecutionTime.RUNTIME_INIT)
MetricsFactoryConsumerBuildItem registerMetrics(MyExtensionRecorder recorder) {
return new MetricsFactoryConsumerBuildItem(recorder.registerMetrics());
}
- 关联的记录器应使用提供的 MetricsFactory 注册指标,例如:
- The associated recorder should use the provided MetricsFactory
to register metrics, for example:
final LongAdder extensionCounter = new LongAdder();
/* RUNTIME_INIT */
public Consumer<MetricsFactory> registerMetrics() {
return new Consumer<MetricsFactory>() {
@Override
public void accept(MetricsFactory metricsFactory) {
metricsFactory.builder("my.extension.counter")
.buildGauge(extensionCounter::longValue);
....
请记住,指标扩展程序是可选的。让指标相关初始化与扩展程序的其他设置保持隔离,并构建代码以避免急切导入指标 API。收集指标也可能是昂贵的。请考虑使用其他扩展特定的配置来控制指标行为,如果指标支持的存在/不存在不足以满足需求。
Remember that metrics extensions are optional. Keep metrics-related initialization isolated from other setup for your extension, and structure your code to avoid eager imports of metrics APIs. Gathering metrics can also be expensive. Consider using additional extension-specific configuration to control behavior of metrics if the presence/absence of metrics support isn’t sufficient.
Customizing JSON handling from an extension
扩展程序通常需要为扩展程序提供的类型注册序列化器和/或反序列化器。
Extensions often need to register serializers and/or deserializers for types the extension provides.
为此,Jackson 和 JSON-B 扩展程序都提供了一种从扩展程序部署模块内注册序列化器/反序列化器的方法。
For this, both Jackson and JSON-B extensions provide a way to register serializer/deserializer from within an extension deployment module.
请记住,并非每个人都需要 JSON,因此你需要使它成为可选的。
Keep in mind that not everybody will need JSON, so you need to make it optional.
如果扩展程序打算提供与 JSON 相关的自定义,则强烈建议同时为 Jackson 和 JSON-B 提供自定义。
If an extension intends to provide JSON related customization, it is strongly advised to provide customization for both Jackson and JSON-B.
Customizing Jackson
首先,在您扩展的运行时模块中添加一个对 quarkus-jackson 的 optional 依赖项。
First, add an optional dependency to quarkus-jackson
on your extension’s runtime module.
<dependency>
<groupId>io.quarkus</groupId>
<artifactId>quarkus-jackson</artifactId>
<optional>true</optional>
</dependency>
然后为 Jackson 创建一个序列化器或反序列化器(或两者),可以在 mongodb-panache 扩展中看到这样的示例。
Then create a serializer or a deserializer (or both) for Jackson, an example of which can be seen in the mongodb-panache
extension.
public class ObjectIdSerializer extends StdSerializer<ObjectId> {
public ObjectIdSerializer() {
super(ObjectId.class);
}
@Override
public void serialize(ObjectId objectId, JsonGenerator jsonGenerator, SerializerProvider serializerProvider)
throws IOException {
if (objectId != null) {
jsonGenerator.writeString(objectId.toString());
}
}
}
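The matching deserializer referenced in the build step below could look roughly like this. It is a sketch based on standard Jackson APIs, not copied from the mongodb-panache sources:
public class ObjectIdDeserializer extends StdDeserializer<ObjectId> {

    public ObjectIdDeserializer() {
        super(ObjectId.class);
    }

    @Override
    public ObjectId deserialize(JsonParser p, DeserializationContext ctxt) throws IOException {
        // Read the textual representation and rebuild the ObjectId
        String value = p.getValueAsString();
        return value != null ? new ObjectId(value) : null;
    }
}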
在扩展的部署模块中添加对 quarkus-jackson-spi
的依赖项。
Add a dependency to quarkus-jackson-spi
on your extension’s deployment module.
<dependency>
<groupId>io.quarkus</groupId>
<artifactId>quarkus-jackson-spi</artifactId>
</dependency>
在您的处理器中添加一个构建步骤,以通过 JacksonModuleBuildItem
注册 Jackson 模块。您需要以一个独一无二的方式在所有 Jackson 模块中命名您的模块。
Add a build step to your processor to register a Jackson module via the JacksonModuleBuildItem
.
You need to name your module in a unique way across all Jackson modules.
@BuildStep
JacksonModuleBuildItem registerJacksonSerDeser() {
return new JacksonModuleBuildItem.Builder("ObjectIdModule")
.add(io.quarkus.mongodb.panache.jackson.ObjectIdSerializer.class.getName(),
io.quarkus.mongodb.panache.jackson.ObjectIdDeserializer.class.getName(),
ObjectId.class.getName())
.build();
}
Jackson 扩展将随后使用生成的构建项在 Jackson 中自动注册一个模块。
The Jackson extension will then use the produced build item to register a module within Jackson automatically.
如果您需要比注册模块更多的自定义功能,可以通过 AdditionalBeanBuildItem
实现 io.quarkus.jackson.ObjectMapperCustomizer
的 CDI bean。关于自定义 Jackson 的更多信息可在 JSON 指南 Configuring JSON support 中找到
If you need more customization capabilities than registering a module,
you can produce a CDI bean that implements io.quarkus.jackson.ObjectMapperCustomizer
via an AdditionalBeanBuildItem
.
More info about customizing Jackson can be found on the JSON guide Configuring JSON support
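For instance, a hedged sketch of such a customizer (the tweak applied here is purely illustrative):
@Singleton
public class MyExtensionObjectMapperCustomizer implements ObjectMapperCustomizer {

    @Override
    public void customize(ObjectMapper objectMapper) {
        // Apply extension-specific defaults to the shared ObjectMapper
        objectMapper.setSerializationInclusion(JsonInclude.Include.NON_NULL);
    }
}
The bean can then be made known to the CDI container from the deployment module, for example by producing AdditionalBeanBuildItem.unremovableOf(MyExtensionObjectMapperCustomizer.class) from a build step.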
Customizing JSON-B
首先,在您的扩展的运行时模块上将 optional 依赖项添加到 quarkus-jsonb
。
First, add an optional dependency to quarkus-jsonb
on your extension’s runtime module.
<dependency>
<groupId>io.quarkus</groupId>
<artifactId>quarkus-jsonb</artifactId>
<optional>true</optional>
</dependency>
然后为 JSON-B 创建一个序列化器和/或反序列化器,举例来说,这可以在 mongodb-panache
扩展中看到。
Then create a serializer and/or a deserializer for JSON-B, an example of which can be seen in the mongodb-panache
extension.
public class ObjectIdSerializer implements JsonbSerializer<ObjectId> {
@Override
public void serialize(ObjectId obj, JsonGenerator generator, SerializationContext ctx) {
if (obj != null) {
generator.write(obj.toString());
}
}
}
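A corresponding JSON-B deserializer could be sketched as follows; again this is an illustration based on the standard jakarta.json.bind.serializer API, not copied from the mongodb-panache sources:
public class ObjectIdDeserializer implements JsonbDeserializer<ObjectId> {

    @Override
    public ObjectId deserialize(JsonParser parser, DeserializationContext ctx, Type rtType) {
        // Read the textual representation and rebuild the ObjectId
        String value = parser.getString();
        return value != null ? new ObjectId(value) : null;
    }
}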
在您的扩展的部署模块中添加一个对 quarkus-jsonb-spi
的依赖项。
Add a dependency to quarkus-jsonb-spi
on your extension’s deployment module.
<dependency>
<groupId>io.quarkus</groupId>
<artifactId>quarkus-jsonb-spi</artifactId>
</dependency>
在您的处理器中添加一个构建步骤,以通过 JsonbSerializerBuildItem
注册序列化器。
Add a build step to your processor to register the serializer via the JsonbSerializerBuildItem
.
@BuildStep
JsonbSerializerBuildItem registerJsonbSerializer() {
return new JsonbSerializerBuildItem(io.quarkus.mongodb.panache.jsonb.ObjectIdSerializer.class.getName());
}
JSON-B 扩展随后会使用生成的构建项自动注册您的序列化器/反序列化器。
The JSON-B extension will then use the produced build item to register your serializer/deserializer automatically.
如果您需要比注册序列化器或反序列化器更多的自定义功能,可以通过 AdditionalBeanBuildItem
实现 io.quarkus.jsonb.JsonbConfigCustomizer
的 CDI bean。关于自定义 JSON-B 的更多信息可在 JSON 指南 Configuring JSON support 中找到
If you need more customization capabilities than registering a serializer or a deserializer,
you can produce a CDI bean that implements io.quarkus.jsonb.JsonbConfigCustomizer
via an AdditionalBeanBuildItem
.
More info about customizing JSON-B can be found on the JSON guide Configuring JSON support
Integrating with Development Mode
您可以使用各种 API 来集成开发模式并获取有关当前状态的信息。
There are various APIs that you can use to integrate with development mode, and to get information about the current state.
Handling restarts
当 Quarkus 启动时,保证存在 io.quarkus.deployment.builditem.LiveReloadBuildItem
,它提供有关此次启动的信息,尤其是以下信息:
When Quarkus is starting the io.quarkus.deployment.builditem.LiveReloadBuildItem
is guaranteed to be present that gives
information about this start, in particular:
-
Is this a clean start or a live reload
-
If this is a live reload which changed files / classes triggered the reload
它还提供了一个全局上下文映射,您可以使用此映射在重启期间存储信息,而不需要使用静态字段。
It also provides a global context map you can use to store information between restarts, without needing to resort to static fields.
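A hedged sketch of a build step using this information follows; the MyExtensionState class is invented for the example:
@BuildStep
void reuseExpensiveState(LiveReloadBuildItem liveReload) {
    if (liveReload.isLiveReload()) {
        // Retrieve state stored by a previous run instead of recomputing it
        MyExtensionState state = liveReload.getContextObject(MyExtensionState.class);
        if (state == null || !liveReload.getChangedResources().isEmpty()) {
            state = new MyExtensionState();
            liveReload.setContextObject(MyExtensionState.class, state);
        }
        // ... use the (possibly cached) state for this restart
    }
}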
Triggering Live Reload
热加载通常由一个 HTTP 请求触发,但是并非所有应用程序都是 HTTP 应用程序,有些扩展可能希望根据其他事件触发热加载。要执行此操作,您需要在运行时模块中实现 io.quarkus.dev.spi.HotReplacementSetup
,并添加列出您的实现的 META-INF/services/io.quarkus.dev.spi.HotReplacementSetup
。
Live reload is generally triggered by an HTTP request, however not all applications are HTTP applications and some extensions
may want to trigger live reload based on other events. To do this you need to implement io.quarkus.dev.spi.HotReplacementSetup
in your runtime module, and add a META-INF/services/io.quarkus.dev.spi.HotReplacementSetup
that lists your implementation.
在启动时,将调用 setupHotDeployment
方法,您可以使用提供的 io.quarkus.dev.spi.HotReplacementContext
启动扫描更改的文件。
On startup the setupHotDeployment
method will be called, and you can use the provided io.quarkus.dev.spi.HotReplacementContext
to initiate a scan for changed files.
Testing Extensions
Quarkus 扩展的测试应该使用 io.quarkus.test.QuarkusUnitTest
JUnit 5 扩展。此扩展允许进行 Arquillian 风格的测试,以测试特定功能。它不打算测试用户应用程序,因为这应该通过 io.quarkus.test.junit.QuarkusTest
来完成。主要不同点在于,QuarkusTest
只是在运行开始时启动应用程序一次,而 QuarkusUnitTest
为每个测试类部署一个自定义 Quarkus 应用程序。
Testing of Quarkus extensions should be done with the io.quarkus.test.QuarkusUnitTest
JUnit 5 extension.
This extension allows for Arquillian-style tests that test specific functionalities.
It is not intended for testing user applications, as this should be done via io.quarkus.test.junit.QuarkusTest
.
The main difference is that QuarkusTest
simply boots the application once at the start of the run, while QuarkusUnitTest
deploys a custom
Quarkus application for each test class.
如果需要其他 Quarkus 模块进行测试,则这些测试应放在部署模块中,还应将它们的部署模块作为测试范围的依赖项添加。
These tests should be placed in the deployment module, if additional Quarkus modules are required for testing their deployment modules should also be added as test scoped dependencies.
请注意,`QuarkusUnitTest`位于 `quarkus-junit5-internal`模块中。
Note that QuarkusUnitTest
is in the quarkus-junit5-internal
module.
示例测试类可能如下所示:
An example test class may look like:
package io.quarkus.health.test;
import static org.junit.jupiter.api.Assertions.assertEquals;
import java.util.ArrayList;
import java.util.List;
import jakarta.enterprise.inject.Instance;
import jakarta.inject.Inject;
import org.eclipse.microprofile.health.Liveness;
import org.eclipse.microprofile.health.HealthCheck;
import org.eclipse.microprofile.health.HealthCheckResponse;
import io.quarkus.test.QuarkusUnitTest;
import org.jboss.shrinkwrap.api.ShrinkWrap;
import org.jboss.shrinkwrap.api.asset.EmptyAsset;
import org.jboss.shrinkwrap.api.spec.JavaArchive;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.RegisterExtension;
import io.restassured.RestAssured;
public class FailingUnitTest {
@RegisterExtension (1)
static final QuarkusUnitTest config = new QuarkusUnitTest()
.setArchiveProducer(() ->
ShrinkWrap.create(JavaArchive.class) (2)
.addClasses(FailingHealthCheck.class)
.addAsManifestResource(EmptyAsset.INSTANCE, "beans.xml")
);
@Inject (3)
@Liveness
Instance<HealthCheck> checks;
@Test
public void testHealthServlet() {
RestAssured.when().get("/q/health").then().statusCode(503); (4)
}
@Test
public void testHealthBeans() {
List<HealthCheck> check = new ArrayList<>(); (5)
for (HealthCheck i : checks) {
check.add(i);
}
assertEquals(1, check.size());
assertEquals(HealthCheckResponse.State.DOWN, check.get(0).call().getState());
}
}
1 | The QuarkusUnitTest extension must be used with a static field. If used with a non-static field, the test application is not started. |
2 | This producer is used to build the application to be tested. It uses Shrinkwrap to create a JavaArchive to test |
3 | It is possible to inject beans from our test deployment directly into the test case |
4 | This method directly invokes the health check Servlet and verifies the response |
5 | This method uses the injected health check bean to verify it is returning the expected result |
如果你想测试扩展是否在构建时正确失败,请使用 `setExpectedException`方法:
If you want to test that an extension properly fails at build time, use the setExpectedException
method:
package io.quarkus.hibernate.orm;
import io.quarkus.runtime.configuration.ConfigurationException;
import io.quarkus.test.QuarkusUnitTest;
import org.jboss.shrinkwrap.api.ShrinkWrap;
import org.jboss.shrinkwrap.api.spec.JavaArchive;
import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.RegisterExtension;
public class PersistenceAndQuarkusConfigTest {
@RegisterExtension
static QuarkusUnitTest runner = new QuarkusUnitTest()
.setExpectedException(ConfigurationException.class) 1
.withApplicationRoot((jar) -> jar
.addAsManifestResource("META-INF/some-persistence.xml", "persistence.xml")
.addAsResource("application.properties"));
@Test
public void testPersistenceAndConfigTest() {
// should not be called, deployment exception should happen first:
// it's illegal to have Hibernate configuration properties in both the
// application.properties and in the persistence.xml
Assertions.fail();
}
}
1 | This tells JUnit that the Quarkus deployment should fail with a specific exception |
Testing hot reload
还可以编写测试来验证扩展在开发模式下是否可以正常工作,并且可以正确处理更新。
It is also possible to write tests that verify an extension works correctly in development mode and can correctly handle updates.
对于大多数扩展,这都可以直接“开箱即用”,但最好进行集成测试以验证此功能是否按预期工作。我们可以使用 `QuarkusDevModeTest`来进行此测试:
For most extensions this will just work 'out of the box', however it is still a good idea to have a smoke test to
verify that this functionality is working as expected. To test this we use QuarkusDevModeTest
:
public class ServletChangeTestCase {
@RegisterExtension
final static QuarkusDevModeTest test = new QuarkusDevModeTest()
.setArchiveProducer(new Supplier<>() {
@Override
public JavaArchive get() {
return ShrinkWrap.create(JavaArchive.class) 1
.addClass(DevServlet.class)
.addAsManifestResource(new StringAsset("Hello Resource"), "resources/file.txt");
}
});
@Test
public void testServletChange() throws InterruptedException {
RestAssured.when().get("/dev").then()
.statusCode(200)
.body(is("Hello World"));
test.modifySourceFile("DevServlet.java", new Function<String, String>() { 2
@Override
public String apply(String s) {
return s.replace("Hello World", "Hello Quarkus");
}
});
RestAssured.when().get("/dev").then()
.statusCode(200)
.body(is("Hello Quarkus"));
}
@Test
public void testAddServlet() throws InterruptedException {
RestAssured.when().get("/new").then()
.statusCode(404);
test.addSourceFile(NewServlet.class); 3
RestAssured.when().get("/new").then()
.statusCode(200)
.body(is("A new Servlet"));
}
@Test
public void testResourceChange() throws InterruptedException {
RestAssured.when().get("/file.txt").then()
.statusCode(200)
.body(is("Hello Resource"));
test.modifyResourceFile("META-INF/resources/file.txt", new Function<String, String>() { 4
@Override
public String apply(String s) {
return "A new resource";
}
});
RestAssured.when().get("file.txt").then()
.statusCode(200)
.body(is("A new resource"));
}
@Test
public void testAddResource() throws InterruptedException {
RestAssured.when().get("/new.txt").then()
.statusCode(404);
test.addResourceFile("META-INF/resources/new.txt", "New File"); 5
RestAssured.when().get("/new.txt").then()
.statusCode(200)
.body(is("New File"));
}
}
1 | This starts the deployment, your test can modify it as part of the test suite. Quarkus will be restarted between each test method so every method starts with a clean deployment. |
2 | This method allows you to modify the source of a class file. The old source is passed into the function, and the updated source is returned. |
3 | This method adds a new class file to the deployment. The source that is used will be the original source that is part of the current project. |
4 | This method modifies a static resource |
5 | This method adds a new static resource |
Native Executable Support
Quarkus 提供了许多构建项来控制本机可执行文件构建的各个方面。这允许扩展以编程方式执行诸如为反射注册类或向本机可执行文件添加静态资源之类的任务。下面列出了其中一些构建项,清单之后给出一个简短的示例构建步骤。
Quarkus provides a number of build items that control aspects of the native executable build. This allows extensions to programmatically perform tasks such as registering classes for reflection or adding static resources to the native executable. Some of these build items are listed below, followed by a short example build step.
io.quarkus.deployment.builditem.nativeimage.NativeImageResourceBuildItem
-
Includes static resources into the native executable.
io.quarkus.deployment.builditem.nativeimage.NativeImageResourceDirectoryBuildItem
-
Includes directory’s static resources into the native executable.
io.quarkus.deployment.builditem.nativeimage.RuntimeReinitializedClassBuildItem
-
A class that will be reinitialized at runtime by Substrate. This will result in the static initializer running twice.
io.quarkus.deployment.builditem.nativeimage.NativeImageSystemPropertyBuildItem
-
A system property that will be set at native executable build time.
io.quarkus.deployment.builditem.nativeimage.NativeImageResourceBundleBuildItem
-
Includes a resource bundle in the native executable.
io.quarkus.deployment.builditem.nativeimage.ReflectiveClassBuildItem
-
Registers a class for reflection in Substrate. Constructors are always registered, while methods and fields are optional.
io.quarkus.deployment.builditem.nativeimage.RuntimeInitializedClassBuildItem
-
A class that will be initialized at runtime rather than build time. This will cause the build to fail if the class is initialized as part of the native executable build process, so care must be taken.
io.quarkus.deployment.builditem.nativeimage.NativeImageConfigBuildItem
-
A convenience feature that allows you to control most of the above features from a single build item.
io.quarkus.deployment.builditem.NativeImageEnableAllCharsetsBuildItem
-
Indicates that all charsets should be enabled in native image.
io.quarkus.deployment.builditem.ExtensionSslNativeSupportBuildItem
-
A convenient way to tell Quarkus that the extension requires SSL, and it should be enabled during native image build. When using this feature, remember to add your extension to the list of extensions that offer SSL support automatically on the native and ssl guide.
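As an example, here is a hedged sketch of a build step producing a couple of these items. The class and resource names are illustrative, and the exact ReflectiveClassBuildItem builder API may differ between Quarkus versions:
@BuildStep
void nativeImageConfiguration(BuildProducer<ReflectiveClassBuildItem> reflectiveClasses,
        BuildProducer<NativeImageResourceBuildItem> resources) {
    // Register a class whose methods and fields are accessed reflectively at runtime
    reflectiveClasses.produce(ReflectiveClassBuildItem.builder("org.acme.MyDto")
            .methods()
            .fields()
            .build());
    // Bundle a static resource into the native executable
    resources.produce(new NativeImageResourceBuildItem("templates/my-template.txt"));
}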
IDE support tips
Writing Quarkus extensions in Eclipse
在 Eclipse 中编写 Quarkus 扩展的唯一特定方面是 APT(注解处理工具)是扩展构建的一部分所需的,这意味着你需要:
The only particular aspect of writing Quarkus extensions in Eclipse is that APT (Annotation Processing Tool) is required as part of extension builds, which means you need to:
-
Install
m2e-apt
from [role="bare"]https://marketplace.eclipse.org/content/m2e-apt -
Define this property in your
pom.xml
:<m2e.apt.activation>jdt_apt</m2e.apt.activation>
, although if you rely onio.quarkus:quarkus-build-parent
you will get it for free. -
If you have the
io.quarkus:quarkus-extension-processor
project open at the same time in your IDE (for example, if you have the Quarkus sources checked out and open in your IDE) you will need to close that project. Otherwise, Eclipse will not invoke the APT plugin that it contains. -
If you just closed the extension processor project, be sure to do
Maven > Update Project
on the other projects in order for Eclipse to pick up the extension processor from the Maven repository.
Troubleshooting / Debugging Tips
Inspecting the Generated/Transformed Classes
Quarkus 在构建阶段会生成大量类,在许多情况下还会转换现有类。在扩展开发过程中,经常非常有必要看到生成的字节码和转换过的类。
Quarkus generates a lot of classes during the build phase and in many cases also transforms existing classes. It is often extremely useful to see the generated bytecode and transformed classes during the development of an extension.
如果您将 quarkus.package.jar.decompiler.enabled
属性设置为 true
,Quarkus 将下载并调用 Vineflower decompiler,并将结果转储到构建工具输出的 decompiled
目录中(例如,对于 Maven 为 target/decompiled
)。
If you set the quarkus.package.jar.decompiler.enabled
property to true
then Quarkus will download and invoke the Vineflower decompiler and dump the result in the decompiled
directory of the build tool output (target/decompiled
for Maven for example).
此属性仅在正常的生产构建期间有效(即,不适用于开发模式/测试),且在使用 |
This property only works during a normal production build (i.e. not for dev mode/tests) and when |
还有三个系统属性允许您将生成/转换的类转储到文件系统并稍后进行检查,例如通过 IDE 中的反编译器。
There are also three system properties that allow you to dump the generated/transformed classes to the filesystem and inspect them later, for example via a decompiler in your IDE.
-
quarkus.debug.generated-classes-dir
- to dump the generated classes, such as bean metadata -
quarkus.debug.transformed-classes-dir
- to dump the transformed classes, e.g. Panache entities -
quarkus.debug.generated-sources-dir
- to dump the ZIG files; ZIG file is a textual representation of the generated code that is referenced in the stack traces
这些属性在开发模式或在运行仅将生成/转换的类保存在类加载器中内存中的测试时特别有用。
These properties are especially useful in the development mode or when running the tests where the generated/transformed classes are only held in memory in a class loader.
例如,您可以在开发模式下指定 quarkus.debug.generated-classes-dir
系统属性,以让这些类被写入磁盘,以便进行检查:
For example, you can specify the quarkus.debug.generated-classes-dir
system property to have these classes written out to disk for inspection in the development mode:
./mvnw quarkus:dev -Dquarkus.debug.generated-classes-dir=dump-classes
属性值可以是绝对路径,例如 Linux 机器上的 |
The property value could be either an absolute path, such as |
您应该在每个写入目录的类中看到一行日志:
You should see a line in the log for each class written to the directory:
INFO [io.qua.run.boo.StartupActionImpl] (main) Wrote /path/to/my/app/target/dump-classes/io/quarkus/arc/impl/ActivateRequestContextInterceptor_Bean.class
在运行测试时,该属性同样有效:
The property is also honored when running tests:
./mvnw clean test -Dquarkus.debug.generated-classes-dir=target/dump-generated-classes
类似地,您可以使用 quarkus.debug.transformed-classes-dir
和 quarkus.debug.generated-sources-dir
属性转储相关输出。
Analogously, you can use the quarkus.debug.transformed-classes-dir
and quarkus.debug.generated-sources-dir
properties to dump the relevant output.
Multi-module Maven Projects and the Development Mode
在包含“示例”模块的多模块 Maven 项目中开发扩展是常见做法。但是,如果您要在开发模式下运行示例,则必须使用 -DnoDeps
系统属性来排除本地项目依赖项。否则,Quarkus 会尝试监控扩展类,这可能会导致奇怪的类加载问题。
It’s not uncommon to develop an extension in a multi-module Maven project that also contains an "example" module.
However, if you want to run the example in the development mode then the -DnoDeps
system property must be used in order to exclude the local project dependencies.
Otherwise, Quarkus attempts to monitor the extension classes and this may result in weird class loading issues.
./mvnw compile quarkus:dev -DnoDeps
Sample Test Extension
我们有一个扩展程序,用于测试扩展处理中的回归。它位于 {quarkus-tree-url}/integration-tests/test-extension/extension 目录中。在本节中,我们将介绍一个扩展作者通常需要使用 test-extension 代码来执行的一些任务,以说明如何完成该任务。
We have an extension that is used to test for regressions in the extension processing. It is located in {quarkus-tree-url}/integration-tests/test-extension/extension directory. In this section we touch on some tasks an extension author will typically need to perform using the test-extension code to illustrate how the task could be done.
Features and Capabilities
Features
feature 表示扩展程序提供的功能。特性名称在应用程序引导期间显示在日志中。
A feature represents a functionality provided by an extension. The name of the feature gets displayed in the log during application bootstrap.
2019-03-22 14:02:37,884 INFO [io.quarkus] (main) Quarkus 999-SNAPSHOT started in 0.061s.
2019-03-22 14:02:37,884 INFO [io.quarkus] (main) Installed features: [cdi, test-extension] 1
1 | A list of features installed in the runtime image |
功能可以在生成 FeatureBuildItem
的 Build Step Processors 方法中注册:
A feature can be registered in a Build Step Processors method that produces a FeatureBuildItem
:
@BuildStep
FeatureBuildItem feature() {
return new FeatureBuildItem("test-extension");
}
该功能的名称应仅包含小写字符,单词用破折号分隔;例如 security-jpa
。一个扩展应最多提供一个功能,该名称必须是唯一的。如果多个扩展注册了同名功能,构建将失败。
The name of the feature should only contain lowercase characters, words are separated by dash; e.g. security-jpa
.
An extension should provide at most one feature and the name must be unique.
If multiple extensions register a feature of the same name the build fails.
该功能名称还应映射到扩展的 devtools/common/src/main/filtered/extensions.json
条目中的标签,以便启动行显示的功能名称与在创建项目时使用 Quarkus maven 插件来选择扩展时可以使用的一个标签相匹配,如从 Writing JSON REST Services 指南中摘取的此示例所示,其中引用了 rest-jackson
功能:
The feature name should also map to a label in the extension’s devtools/common/src/main/filtered/extensions.json
entry so that
the feature name displayed by the startup line matches a label that one can use to select the extension when creating a project
using the Quarkus maven plugin as shown in this example taken from the Writing JSON REST Services guide where the rest-jackson
feature is referenced:
mvn {quarkus-platform-groupid}:quarkus-maven-plugin:{quarkus-version}:create \
-DprojectGroupId=org.acme \
-DprojectArtifactId=rest-json \
-DclassName="org.acme.rest.json.FruitResource" \
-Dpath="/fruits" \
-Dextensions="rest,rest-jackson"
cd rest-json
Capabilities
capability 表示其他扩展可以查询的技术功能。一个扩展可以提供多个功能,多个扩展可以提供相同的功能。默认情况下,功能不会显示给用户。在检查扩展是否存在时应使用功能,而不是基于类路径的检查。
A capability represents a technical capability that can be queried by other extensions. An extension may provide multiple capabilities and multiple extensions can provide the same capability. By default, capabilities are not displayed to users. Capabilities should be used when checking for the presence of an extension rather than class path based checks.
功能可以在生成 CapabilityBuildItem
的 Build Step Processors 方法中注册:
Capabilities can be registered in a Build Step Processors method that produces a CapabilityBuildItem
:
@BuildStep
void capabilities(BuildProducer<CapabilityBuildItem> capabilityProducer) {
capabilityProducer.produce(new CapabilityBuildItem("org.acme.test-transactions"));
capabilityProducer.produce(new CapabilityBuildItem("org.acme.test-metrics"));
}
扩展可以使用 Capabilities
构建项使用已注册的功能:
Extensions can consume registered capabilities using the Capabilities
build item:
@BuildStep
void doSomeCoolStuff(Capabilities capabilities) {
if (capabilities.isPresent(Capability.TRANSACTIONS)) {
// do something only if JTA transactions are in...
}
}
功能应遵循 Java 包的命名约定;例如 io.quarkus.security.jpa
。核心扩展提供的功能应在 io.quarkus.deployment.Capability
枚举中列出,其名称应始终以 io.quarkus
前缀开头。
Capabilities should follow the naming conventions of Java packages; e.g. io.quarkus.security.jpa
.
Capabilities provided by core extensions should be listed in the io.quarkus.deployment.Capability
enum and their name should always start with the io.quarkus
prefix.
Bean Defining Annotations
CDI 层处理明确注册或基于 2.5.1. Bean defining annotations 中定义的 bean 定义注释发现的 CDI bean。您可以使用 BeanDefiningAnnotationBuildItem
扩展此注释集以包括扩展进程注释,如此 TestProcessor#registerBeanDefinningAnnotations
示例所示:
The CDI layer processes CDI beans that are either explicitly registered or that it discovers based on bean defining annotations as defined in 2.5.1. Bean defining annotations. You can expand this set of annotations to include annotations your extension processes using a BeanDefiningAnnotationBuildItem
as shown in this TestProcessor#registerBeanDefinningAnnotations
example:
import jakarta.enterprise.context.ApplicationScoped;
import org.jboss.jandex.DotName;
import io.quarkus.extest.runtime.TestAnnotation;
public final class TestProcessor {
static DotName TEST_ANNOTATION = DotName.createSimple(TestAnnotation.class.getName());
static DotName TEST_ANNOTATION_SCOPE = DotName.createSimple(ApplicationScoped.class.getName());
...
@BuildStep
BeanDefiningAnnotationBuildItem registerX() {
1
return new BeanDefiningAnnotationBuildItem(TEST_ANNOTATION, TEST_ANNOTATION_SCOPE);
}
...
}
/**
* Marker annotation for test configuration target beans
*/
@Target({ TYPE })
@Retention(RUNTIME)
@Documented
@Inherited
public @interface TestAnnotation {
}
/**
* A sample bean
*/
@TestAnnotation 2
public class ConfiguredBean implements IConfigConsumer {
...
1 | Register the annotation class and CDI default scope using the Jandex DotName class. |
2 | ConfiguredBean will be processed by the CDI layer the same as a bean annotated with the CDI standard @ApplicationScoped. |
Parsing Config to Objects
扩展很可能要做的主要事情之一是将行为的配置阶段与运行时阶段完全分开。框架通常在启动时执行配置的解析/加载,这可以在构建期间完成,以同时减少对类似 xml 解析器之类的框架的运行时依赖以及缩短解析所花费的启动时间。
One of the main things an extension is likely to do is completely separate the configuration phase of behavior from the runtime phase. Frameworks often do parsing/load of configuration on startup that can be done during build time to both reduce the runtime dependencies on frameworks like xml parsers as well as reducing the startup time the parsing incurs.
在 TestProcessor#parseServiceXmlConfig
方法中展示了使用 JAXB 解析 XML 配置文件的一个示例:
An example of parsing an XML config file using JAXB is shown in the TestProcessor#parseServiceXmlConfig
method:
@BuildStep
@Record(STATIC_INIT)
RuntimeServiceBuildItem parseServiceXmlConfig(TestRecorder recorder) throws JAXBException {
RuntimeServiceBuildItem serviceBuildItem = null;
JAXBContext context = JAXBContext.newInstance(XmlConfig.class);
Unmarshaller unmarshaller = context.createUnmarshaller();
InputStream is = getClass().getResourceAsStream("/config.xml"); 1
if (is != null) {
log.info("Have XmlConfig, loading");
XmlConfig config = (XmlConfig) unmarshaller.unmarshal(is); 2
...
}
return serviceBuildItem;
}
1 | Look for a config.xml classpath resource |
2 | If found, parse using JAXB context for XmlConfig.class |
如果构建环境中没有可用的 /config.xml 资源,则会返回一个 null 的 RuntimeServiceBuildItem,相应的运行时服务也不会被创建或启动。 If there was no /config.xml resource available in the build environment, a null RuntimeServiceBuildItem is returned and the runtime service is neither created nor started. |
通常,加载配置是为了创建某个运行时组件/服务,parseServiceXmlConfig 正是如此。我们将在下面的 Manage Non-CDI Service 部分回到 parseServiceXmlConfig 的其余行为。
Typically, one is loading a configuration to create some runtime component/service as parseServiceXmlConfig
is doing. We will come back to the rest of the behavior in parseServiceXmlConfig
in the following Manage Non-CDI Service section.
如果因为某个原因,您需要在扩展进程的其他构建步骤中解析配置并使用它,则需要创建一个 XmlConfigBuildItem
来传递解析的 XmlConfig 实例。
If for some reason you need to parse the config and use it in other build steps in an extension processor, you would need to create an XmlConfigBuildItem
to pass the parsed XmlConfig instance around.
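A minimal sketch of such an XmlConfigBuildItem, modeled on the RuntimeServiceBuildItem shown later in this guide; the exact shape is illustrative, not taken from the test extension:
// Hypothetical build item carrying the parsed XmlConfig between build steps
public final class XmlConfigBuildItem extends SimpleBuildItem {

    private final XmlConfig config;

    public XmlConfigBuildItem(XmlConfig config) {
        this.config = config;
    }

    public XmlConfig getConfig() {
        return config;
    }
}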
如果您查看 XmlConfig 代码,您将看到它确实承载 JAXB 注释。如果您希望运行时镜像中不包含这些注释,您可以将 XmlConfig 实例克隆到某些 POJO 对象图中,然后使用 POJO 类替换 XmlConfig。我们将在 Replacing Classes in the Native Image 中执行此操作。 If you look at the XmlConfig code you will see that it does carry around the JAXB annotations. If you don’t want these in the runtime image, you could clone the XmlConfig instance into some POJO object graph and then replace XmlConfig with the POJO class. We will do this in Replacing Classes in the Native Image. |
Scanning Deployments Using Jandex
如果您的扩展定义了用于标记需要处理的 bean 的注释或接口,那么您可以使用 Jandex API(Java 注释索引器和离线反射库)来定位这些 bean。下面的 TestProcessor#scanForBeans
方法展示了如何查找也实现了 IConfigConsumer
接口的用 @TestAnnotation
进行注释的 bean:
If your extension defines annotations or interfaces that mark beans needing to be processed, you can locate these beans using the Jandex API, a Java annotation indexer and offline reflection library. The following TestProcessor#scanForBeans
method shows how to find the beans annotated with our @TestAnnotation
that also implement the IConfigConsumer
interface:
static DotName TEST_ANNOTATION = DotName.createSimple(TestAnnotation.class.getName());
...
@BuildStep
@Record(STATIC_INIT)
void scanForBeans(TestRecorder recorder, BeanArchiveIndexBuildItem beanArchiveIndex, 1
BuildProducer<TestBeanBuildItem> testBeanProducer) {
IndexView indexView = beanArchiveIndex.getIndex(); 2
Collection<AnnotationInstance> testBeans = indexView.getAnnotations(TEST_ANNOTATION); 3
for (AnnotationInstance ann : testBeans) {
ClassInfo beanClassInfo = ann.target().asClass();
try {
boolean isConfigConsumer = beanClassInfo.interfaceNames()
.stream()
.anyMatch(dotName -> dotName.equals(DotName.createSimple(IConfigConsumer.class.getName()))); 4
if (isConfigConsumer) {
Class<IConfigConsumer> beanClass = (Class<IConfigConsumer>) Class.forName(beanClassInfo.name().toString(), false, Thread.currentThread().getContextClassLoader());
testBeanProducer.produce(new TestBeanBuildItem(beanClass)); 5
log.infof("Configured bean: %s", beanClass);
}
} catch (ClassNotFoundException e) {
log.warn("Failed to load bean class", e);
}
}
}
1 | Depend on a BeanArchiveIndexBuildItem to have the build step be run after the deployment has been indexed. |
2 | Retrieve the index. |
3 | Find all beans annotated with @TestAnnotation . |
4 | Determine which of these beans also has the IConfigConsumer interface. |
5 | Save the bean class in a TestBeanBuildItem for use in a later RUNTIME_INIT build step that will interact with the bean instances. |
Interacting With Extension Beans
您可以使用 io.quarkus.arc.runtime.BeanContainer
接口与扩展 bean 交互。以下 configureBeans
方法说明了如何与前面部分中扫描的 bean 进行交互:
You can use the io.quarkus.arc.runtime.BeanContainer
interface to interact with your extension beans. The following configureBeans
methods illustrate interacting with the beans scanned for in the previous section:
// TestProcessor#configureBeans
@BuildStep
@Record(RUNTIME_INIT)
void configureBeans(TestRecorder recorder, List<TestBeanBuildItem> testBeans, 1
BeanContainerBuildItem beanContainer, 2
TestRunTimeConfig runTimeConfig) {
for (TestBeanBuildItem testBeanBuildItem : testBeans) {
Class<IConfigConsumer> beanClass = testBeanBuildItem.getConfigConsumer();
recorder.configureBeans(beanContainer.getValue(), beanClass, buildAndRunTimeConfig, runTimeConfig); 3
}
}
// TestRecorder#configureBeans
public void configureBeans(BeanContainer beanContainer, Class<IConfigConsumer> beanClass,
TestBuildAndRunTimeConfig buildTimeConfig,
TestRunTimeConfig runTimeConfig) {
log.info("Begin BeanContainerListener callback\n");
IConfigConsumer instance = beanContainer.beanInstance(beanClass); 4
instance.loadConfig(buildTimeConfig, runTimeConfig); 5
log.infof("configureBeans, instance=%s\n", instance);
}
1 | Consume the `TestBeanBuildItem`s produced from the scanning build step. |
2 | Consume the BeanContainerBuildItem to order this build step to run after the CDI bean container has been created. |
3 | Call the runtime recorder to record the bean interactions. |
4 | Runtime recorder retrieves the bean using its type. |
5 | Runtime recorder invokes the IConfigConsumer#loadConfig(…) method passing in the configuration objects with runtime information. |
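For reference, the IConfigConsumer contract invoked in step 5 presumably amounts to a single callback that receives the two configuration objects; a minimal sketch, not the actual source:
// Sketch of the callback interface implemented by beans such as ConfiguredBean
public interface IConfigConsumer {

    // invoked by the recorder with the build-time and runtime configuration objects
    void loadConfig(TestBuildAndRunTimeConfig buildTimeConfig, TestRunTimeConfig runTimeConfig);
}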
Manage Non-CDI Service
扩展的一个常见目的是将非 CDI 感知服务集成到基于 CDI 的 Quarkus 运行时中。此任务的第一步是在 STATIC_INIT 构建步骤中加载任何需要的配置,就像我们在 Parsing Config to Objects 中所做的那样。现在我们需要使用配置创建服务实例。让我们回到 TestProcessor#parseServiceXmlConfig
方法来了解如何做到这一点。
A common purpose for an extension is to integrate a non-CDI aware service into the CDI based Quarkus runtime.
Step 1 of this task is to load any configuration needed in a STATIC_INIT build step as we did in Parsing Config to Objects.
Now we need to create an instance of the service using the configuration.
Let’s return to the TestProcessor#parseServiceXmlConfig
method to see how this can be done.
// TestProcessor#parseServiceXmlConfig
@BuildStep
@Record(STATIC_INIT)
RuntimeServiceBuildItem parseServiceXmlConfig(TestRecorder recorder) throws JAXBException {
RuntimeServiceBuildItem serviceBuildItem = null;
JAXBContext context = JAXBContext.newInstance(XmlConfig.class);
Unmarshaller unmarshaller = context.createUnmarshaller();
InputStream is = getClass().getResourceAsStream("/config.xml");
if (is != null) {
log.info("Have XmlConfig, loading");
XmlConfig config = (XmlConfig) unmarshaller.unmarshal(is);
log.info("Loaded XmlConfig, creating service");
RuntimeValue<RuntimeXmlConfigService> service = recorder.initRuntimeService(config); (1)
serviceBuildItem = new RuntimeServiceBuildItem(service); (3)
}
return serviceBuildItem;
}
// TestRecorder#initRuntimeService
public RuntimeValue<RuntimeXmlConfigService> initRuntimeService(XmlConfig config) {
RuntimeXmlConfigService service = new RuntimeXmlConfigService(config); (2)
return new RuntimeValue<>(service);
}
// RuntimeServiceBuildItem
public final class RuntimeServiceBuildItem extends SimpleBuildItem {
private RuntimeValue<RuntimeXmlConfigService> service;
public RuntimeServiceBuildItem(RuntimeValue<RuntimeXmlConfigService> service) {
this.service = service;
}
public RuntimeValue<RuntimeXmlConfigService> getService() {
return service;
}
}
1 | Call into the runtime recorder to record the creation of the service. |
2 | Using the parsed XmlConfig instance, create an instance of RuntimeXmlConfigService and wrap it in a RuntimeValue . Use a RuntimeValue wrapper for non-interface objects that are non-proxiable. |
3 | Wrap the returned service value in a RuntimeServiceBuildItem for use in a RUNTIME_INIT build step that will start the service. |
Starting a Service
现在您已经记录了在构建阶段创建服务,您需要在引导期间记录如何在运行时启动服务。您可以使用 RUNTIME_INIT 构建步骤来完成此操作,如 TestProcessor#startRuntimeService
方法所示。
Now that you have recorded the creation of a service during the build phase, you need to record how to start the service at runtime during booting.
You do this with a RUNTIME_INIT build step as shown in the TestProcessor#startRuntimeService
method.
// TestProcessor#startRuntimeService
@BuildStep
@Record(RUNTIME_INIT)
ServiceStartBuildItem startRuntimeService(TestRecorder recorder, ShutdownContextBuildItem shutdownContextBuildItem, (1)
RuntimeServiceBuildItem serviceBuildItem) throws IOException { (2)
if (serviceBuildItem != null) {
log.info("Registering service start");
recorder.startRuntimeService(shutdownContextBuildItem, serviceBuildItem.getService()); (3)
} else {
log.info("No RuntimeServiceBuildItem seen, check config.xml");
}
return new ServiceStartBuildItem("RuntimeXmlConfigService"); (4)
}
// TestRecorder#startRuntimeService
public void startRuntimeService(ShutdownContext shutdownContext, RuntimeValue<RuntimeXmlConfigService> runtimeValue)
throws IOException {
RuntimeXmlConfigService service = runtimeValue.getValue();
service.startService(); (5)
shutdownContext.addShutdownTask(service::stopService); (6)
}
1 | We consume a ShutdownContextBuildItem to register the service shutdown. |
2 | We consume the previously initialized service captured in RuntimeServiceBuildItem . |
3 | Call the runtime recorder to record the service start invocation. |
4 | Produce a ServiceStartBuildItem to indicate the startup of a service. See Startup and Shutdown Events for details. |
5 | Runtime recorder retrieves the service instance reference and calls its startService method. |
6 | Runtime recorder registers an invocation of the service instance stopService method with the Quarkus ShutdownContext . |
可以在此处查看 RuntimeXmlConfigService
的代码:{quarkus-blob-url}/integration-tests/test-extension/extension/runtime/src/main/java/io/quarkus/extest/runtime/RuntimeXmlConfigService.java[RuntimeXmlConfigService.java]
The code for the RuntimeXmlConfigService
can be viewed here:
{quarkus-blob-url}/integration-tests/test-extension/extension/runtime/src/main/java/io/quarkus/extest/runtime/RuntimeXmlConfigService.java[RuntimeXmlConfigService.java]
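If you only want the general shape of such a service without following the link, it is essentially a plain class with explicit lifecycle methods; a simplified sketch, not the actual implementation:
// Simplified sketch of a non-CDI service with an explicit lifecycle
public class RuntimeXmlConfigService {

    private final XmlConfig config;
    private volatile boolean running;

    public RuntimeXmlConfigService(XmlConfig config) {
        this.config = config;
    }

    public void startService() {
        // open sockets, start threads, etc., based on the parsed config
        running = true;
    }

    public void stopService() {
        // release whatever startService acquired
        running = false;
    }
}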
可在 testRuntimeXmlConfigService
的 ConfiguredBeanTest
和 NativeImageIT
测试中找到用于验证 RuntimeXmlConfigService
已启动的测试用例。
The testcase for validating that the RuntimeXmlConfigService
has started can be found in the testRuntimeXmlConfigService
test of ConfiguredBeanTest
and NativeImageIT
.
Startup and Shutdown Events
Quarkus 容器支持启动和关闭生命周期事件,以便在容器启动和关闭时通知组件。容器会激发组件可以观察的 CDI 事件,此示例对此进行了说明:
The Quarkus container supports startup and shutdown lifecycle events to notify components of the container startup and shutdown. CDI events are fired that components can observe, as illustrated in this example:
import io.quarkus.runtime.ShutdownEvent;
import io.quarkus.runtime.StartupEvent;
public class SomeBean {
/**
* Called when the runtime has started
* @param event
*/
void onStart(@Observes StartupEvent event) { (1)
System.out.printf("onStart, event=%s%n", event);
}
/**
* Called when the runtime is shutting down
* @param event
*/
void onStop(@Observes ShutdownEvent event) { (2)
System.out.printf("onStop, event=%s%n", event);
}
}
1 | Observe a StartupEvent to be notified the runtime has started. |
2 | Observe a ShutdownEvent to be notified when the runtime is going to shut down. |
启动和关闭事件与扩展作者有什么关系?我们已看到 ShutdownContext
的使用,用于在 Starting a Service 部分中注册回调以执行关闭任务。这些关闭任务将在发送 ShutdownEvent
后调用。
What is the relevance of startup and shutdown events for extension authors? We have already seen the use of a ShutdownContext
to register a callback to perform shutdown tasks in the Starting a Service section.
These shutdown tasks would be called
after a ShutdownEvent
had been sent.
在所有 io.quarkus.deployment.builditem.ServiceStartBuildItem 产生器都被消费之后,才会激发 StartupEvent。这意味着,如果扩展中的服务是应用程序组件在观察到 StartupEvent 时就期望已经启动的,那么调用运行时代码以启动这些服务的构建步骤就需要产生一个 ServiceStartBuildItem,以确保该运行时代码在发送 StartupEvent 之前运行。回想一下,我们在上一部分中看到了 ServiceStartBuildItem 的产生,此处为清楚起见再次列出:
A StartupEvent
is fired after all io.quarkus.deployment.builditem.ServiceStartBuildItem
producers have been consumed.
The implication of this is that if an extension has services that application components would expect to have been
started when they observe a StartupEvent
, the build steps that invoke the runtime code to start those services need
to produce a ServiceStartBuildItem
to ensure that the runtime code is run before the StartupEvent
is sent. Recall that
we saw the production of a ServiceStartBuildItem
in the previous section, and it is repeated here for clarity:
// TestProcessor#startRuntimeService
@BuildStep
@Record(RUNTIME_INIT)
ServiceStartBuildItem startRuntimeService(TestRecorder recorder, ShutdownContextBuildItem shutdownContextBuildItem,
RuntimeServiceBuildItem serviceBuildItem) throws IOException {
...
return new ServiceStartBuildItem("RuntimeXmlConfigService"); (1)
}
1 | Produce a ServiceStartBuildItem to indicate that this is a service starting step that needs to run before the StartupEvent is sent. |
Register Resources for Use in Native Image
并非所有配置或资源都可以在构建时使用。如果您有运行时需要访问的类路径资源,则需要通知构建阶段需要将这些资源复制到本机映像中。这可以通过在资源包的情况下产生一个或多个 NativeImageResourceBuildItem
或 NativeImageResourceBundleBuildItem
来完成。此 registerNativeImageResources
构建步骤中显示了此示例:
Not all configuration or resources can be consumed at build time. If you have classpath resources that the runtime needs to access, you need to inform the build phase that these resources need to be copied into the native image. This is done by producing one or more NativeImageResourceBuildItem
or NativeImageResourceBundleBuildItem
in the case of resource bundles. Examples of this are shown in this sample registerNativeImageResources
build step:
public final class MyExtProcessor {
@BuildStep
void registerNativeImageResources(BuildProducer<NativeImageResourceBuildItem> resource, BuildProducer<NativeImageResourceBundleBuildItem> resourceBundle) {
resource.produce(new NativeImageResourceBuildItem("/security/runtime.keys")); (1)
resource.produce(new NativeImageResourceBuildItem(
"META-INF/my-descriptor.xml")); (2)
resourceBundle.produce(new NativeImageResourceBundleBuildItem("jakarta.xml.bind.Messages")); (3)
}
}
1 | Indicate that the /security/runtime.keys classpath resource should be copied into native image. |
2 | Indicate that the META-INF/my-descriptor.xml resource should be copied into native image. |
3 | Indicate that the "jakarta.xml.bind.Messages" resource bundle should be copied into native image. |
Service files
如果您正在使用 META-INF/services 文件,则需要将这些文件注册为资源,以便本机映像可以找到它们,但还需要为每个列出的类注册反射,以便它们可以在运行时被实例化或检查:
If you are using META-INF/services
files you need to register the files as resources so that your native image can find them,
but you also need to register each listed class for reflection so they can be instantiated or inspected at run-time:
public final class MyExtProcessor {
@BuildStep
void registerNativeImageResources(BuildProducer<ServiceProviderBuildItem> services) {
String service = "META-INF/services/" + io.quarkus.SomeService.class.getName();
// find out all the implementation classes listed in the service files
Set<String> implementations =
ServiceUtil.classNamesNamedIn(Thread.currentThread().getContextClassLoader(),
service);
// register every listed implementation class so they can be instantiated
// in native-image at run-time
services.produce(
new ServiceProviderBuildItem(io.quarkus.SomeService.class.getName(),
implementations.toArray(new String[0])));
}
}
ServiceProviderBuildItem
将一组服务实现类作为参数:如果您没有从服务文件读取这些类,请确保它们与服务文件内容对应,因为系统仍将在运行时读取并使用服务文件。这并不能替代编写服务文件。
ServiceProviderBuildItem
takes a list of service implementation classes as parameters: if
you are not reading them from the service file, make sure that they correspond to the service file contents
because the service file will still be read and used at run-time. This is not a substitute for writing a service
file.
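For example, if the implementation class names are known up front they can be listed explicitly; the org.acme names below are placeholders, and they must still mirror what the service file declares:
@BuildStep
void registerKnownProviders(BuildProducer<ServiceProviderBuildItem> services) {
    // the provider class names must match the META-INF/services file contents
    services.produce(new ServiceProviderBuildItem(io.quarkus.SomeService.class.getName(),
            "org.acme.SomeServiceImplA",
            "org.acme.SomeServiceImplB"));
}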
这只会通过反射为实例化注册实现类(您将无法检查其字段和方法)。如果您需要执行此操作,则可以通过以下方式进行: |
This only registers the implementation classes for instantiation via reflection (you will not be able to inspect its fields and methods). If you need to do that, you can do it this way: |
public final class MyExtProcessor {
@BuildStep
void registerNativeImageResources(BuildProducer<NativeImageResourceBuildItem> resource,
BuildProducer<ReflectiveClassBuildItem> reflectionClasses) {
String service = "META-INF/services/" + io.quarkus.SomeService.class.getName();
// register the service file so it is visible in native-image
resource.produce(new NativeImageResourceBuildItem(service));
// register every listed implementation class so they can be inspected/instantiated
// in native-image at run-time
Set<String> implementations =
ServiceUtil.classNamesNamedIn(Thread.currentThread().getContextClassLoader(),
service);
reflectionClasses.produce(
new ReflectiveClassBuildItem(true, true, implementations.toArray(new String[0])));
}
}
虽然这是让您的服务本机运行的最简单方式,但它效率低于在构建时扫描实现类并生成在静态初始化时而不是依赖反射来注册这些类的代码。
While this is the easiest way to get your services running natively, it’s less efficient than scanning the implementation classes at build time and generating code that registers them at static-init time instead of relying on reflection.
你可以通过改用静态初始化记录器来实现,而不是为反射注册类:
You can achieve that by adapting the previous build step to use a static-init recorder instead of registering classes for reflection:
public final class MyExtProcessor {
@BuildStep
@Record(ExecutionTime.STATIC_INIT)
void registerNativeImageResources(RecorderContext recorderContext,
SomeServiceRecorder recorder) {
String service = "META-INF/services/" + io.quarkus.SomeService.class.getName();
// read the implementation classes
Collection<Class<? extends io.quarkus.SomeService>> implementationClasses = new LinkedHashSet<>();
Set<String> implementations = ServiceUtil.classNamesNamedIn(Thread.currentThread().getContextClassLoader(),
service);
for(String implementation : implementations) {
implementationClasses.add((Class<? extends io.quarkus.SomeService>)
recorderContext.classProxy(implementation));
}
// produce a static-initializer with those classes
recorder.configure(implementationClasses);
}
}
@Recorder
public class SomeServiceRecorder {
public void configure(Collection<Class<? extends io.quarkus.SomeService>> implementations) {
// configure our service statically
SomeServiceProvider serviceProvider = SomeServiceProvider.instance();
SomeServiceBuilder builder = serviceProvider.getSomeServiceBuilder();
List<io.quarkus.SomeService> services = new ArrayList<>(implementations.size());
// instantiate the service implementations
for (Class<? extends io.quarkus.SomeService> implementationClass : implementations) {
try {
services.add(implementationClass.getConstructor().newInstance());
} catch (Exception e) {
throw new IllegalArgumentException("Unable to instantiate service " + implementationClass, e);
}
}
// build our service
builder.withSomeServices(services.toArray(new io.quarkus.SomeService[0]));
ServiceManager serviceManager = builder.build();
// register it
serviceProvider.registerServiceManager(serviceManager, Thread.currentThread().getContextClassLoader());
}
}
Object Substitution
在构建阶段创建并传递到运行时的对象需要具有一个默认构造函数,以便它们在从构建时状态启动运行时时创建和配置。如果一个对象没有默认构造函数,那么在生成扩展制品期间,您将看到类似于以下内容的错误:
Objects created during the build phase that are passed into the runtime need to have a default constructor in order for them to be created and configured at startup of the runtime from the build time state. If an object does not have a default constructor you will see an error similar to the following during generation of the augmented artifacts:
[error]: Build step io.quarkus.deployment.steps.MainClassBuildStep#build threw an exception: java.lang.RuntimeException: Unable to serialize objects of type class sun.security.provider.DSAPublicKeyImpl to bytecode as it has no default constructor
at io.quarkus.builder.Execution.run(Execution.java:123)
at io.quarkus.builder.BuildExecutionBuilder.execute(BuildExecutionBuilder.java:136)
at io.quarkus.deployment.QuarkusAugmentor.run(QuarkusAugmentor.java:110)
at io.quarkus.runner.RuntimeRunner.run(RuntimeRunner.java:99)
... 36 more
有一个 io.quarkus.runtime.ObjectSubstitution 接口可以实现,用于告诉 Quarkus 如何处理这类类。这里显示了针对 DSAPublicKey 的一个示例实现:
There is a io.quarkus.runtime.ObjectSubstitution
interface that can be implemented to tell Quarkus how to handle such classes. An example implementation for the DSAPublicKey
is shown here:
package io.quarkus.extest.runtime.subst;
import java.security.KeyFactory;
import java.security.NoSuchAlgorithmException;
import java.security.interfaces.DSAPublicKey;
import java.security.spec.InvalidKeySpecException;
import java.security.spec.X509EncodedKeySpec;
import java.util.logging.Logger;
import io.quarkus.runtime.ObjectSubstitution;
public class DSAPublicKeyObjectSubstitution implements ObjectSubstitution<DSAPublicKey, KeyProxy> {
private static final Logger log = Logger.getLogger("DSAPublicKeyObjectSubstitution");
@Override
public KeyProxy serialize(DSAPublicKey obj) { (1)
log.info("DSAPublicKeyObjectSubstitution.serialize");
byte[] encoded = obj.getEncoded();
KeyProxy proxy = new KeyProxy();
proxy.setContent(encoded);
return proxy;
}
@Override
public DSAPublicKey deserialize(KeyProxy obj) { (2)
log.info("DSAPublicKeyObjectSubstitution.deserialize");
byte[] encoded = obj.getContent();
X509EncodedKeySpec publicKeySpec = new X509EncodedKeySpec(encoded);
DSAPublicKey dsaPublicKey = null;
try {
KeyFactory kf = KeyFactory.getInstance("DSA");
dsaPublicKey = (DSAPublicKey) kf.generatePublic(publicKeySpec);
} catch (NoSuchAlgorithmException | InvalidKeySpecException e) {
e.printStackTrace();
}
return dsaPublicKey;
}
}
1 | The serialize method takes the object without a default constructor and creates a KeyProxy that contains the information necessary to recreate the DSAPublicKey . |
2 | The deserialize method uses the KeyProxy to recreate the DSAPublicKey from its encoded form using the key factory. |
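The KeyProxy used by the substitution is not shown in this guide; it is essentially a default-constructible holder for the encoded key bytes, roughly along these lines (a sketch, not the exact source):
// Sketch of the proxy type: default-constructible, carrying only the encoded key
public class KeyProxy {

    private byte[] content;

    public byte[] getContent() {
        return content;
    }

    public void setContent(byte[] content) {
        this.content = content;
    }
}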
扩展通过产生 ObjectSubstitutionBuildItem 来注册此替换,如这个 TestProcessor#loadDSAPublicKey 片段所示:
An extension registers this substitution by producing an ObjectSubstitutionBuildItem
as shown in this TestProcessor#loadDSAPublicKey
fragment:
@BuildStep
@Record(STATIC_INIT)
PublicKeyBuildItem loadDSAPublicKey(TestRecorder recorder,
BuildProducer<ObjectSubstitutionBuildItem> substitutions) throws IOException, GeneralSecurityException {
...
// Register how to serialize DSAPublicKey
ObjectSubstitutionBuildItem.Holder<DSAPublicKey, KeyProxy> holder = new ObjectSubstitutionBuildItem.Holder(
DSAPublicKey.class, KeyProxy.class, DSAPublicKeyObjectSubstitution.class);
ObjectSubstitutionBuildItem keysub = new ObjectSubstitutionBuildItem(holder);
substitutions.produce(keysub);
log.info("loadDSAPublicKey run");
return new PublicKeyBuildItem(publicKey);
}
Replacing Classes in the Native Image
Graal SDK 支持在本机映像中替换类。以下示例类展示了如何用不依赖 JAXB 注释的版本替换 XmlConfig/XmlData 类:
The Graal SDK supports substitutions of classes in the native image.
An example of how one could replace the XmlConfig/XmlData
classes with versions that have no JAXB annotation dependencies is shown in these example classes:
package io.quarkus.extest.runtime.graal;
import java.util.ArrayList;
import java.util.Date;
import com.oracle.svm.core.annotate.Substitute;
import com.oracle.svm.core.annotate.TargetClass;
import io.quarkus.extest.runtime.config.XmlConfig;
import io.quarkus.extest.runtime.config.XmlData;
@TargetClass(XmlConfig.class)
@Substitute
public final class Target_XmlConfig {
@Substitute
private String address;
@Substitute
private int port;
@Substitute
private ArrayList<XmlData> dataList;
@Substitute
public String getAddress() {
return address;
}
@Substitute
public int getPort() {
return port;
}
@Substitute
public ArrayList<XmlData> getDataList() {
return dataList;
}
@Substitute
@Override
public String toString() {
return "Target_XmlConfig{" +
"address='" + address + '\'' +
", port=" + port +
", dataList=" + dataList +
'}';
}
}
@TargetClass(XmlData.class)
@Substitute
public final class Target_XmlData {
@Substitute
private String name;
@Substitute
private String model;
@Substitute
private Date date;
@Substitute
public String getName() {
return name;
}
@Substitute
public String getModel() {
return model;
}
@Substitute
public Date getDate() {
return date;
}
@Substitute
@Override
public String toString() {
return "Target_XmlData{" +
"name='" + name + '\'' +
", model='" + model + '\'' +
", date='" + date + '\'' +
'}';
}
}
Ecosystem integration
一些扩展可能是私有的,而另一些可能希望成为更广泛的 Quarkus 生态系统的一部分,供社区重用。纳入 Quarkiverse Hub 是处理持续测试和发布的一种便捷机制。Quarkiverse Hub wiki 中包含接入您的扩展的说明。
Some extensions may be private, and some may wish to be part of the broader Quarkus ecosystem, and available for community re-use. Inclusion in the Quarkiverse Hub is a convenient mechanism for handling continuous testing and publication. The Quarkiverse Hub wiki has instructions for on-boarding your extension.
或者,可以手动处理持续测试和发布。
Alternatively, continuous testing and publication can be handled manually.
Continuous testing of your extension
为了让扩展作者能够轻松地每天针对 Quarkus 的最新快照测试他们的扩展,Quarkus 引入了生态系统 CI(Ecosystem CI)的概念。Ecosystem CI README 包含有关如何设置 GitHub Actions 作业以利用此功能的所有详细信息,而此视频概述了该流程。
In order to make it easy for extension authors to test their extensions daily against the latest snapshot of Quarkus, Quarkus has introduced the notion of Ecosystem CI. The Ecosystem CI README has all the details on how to set up a GitHub Actions job to take advantage of this capability, while this video provides an overview of what the process looks like.
Publish your extension in registry.quarkus.io
在将您的扩展发布到 Quarkus 工具之前,请确保满足以下要求:
Before publishing your extension to the Quarkus tooling, make sure that the following requirements are met:
-
The quarkus-extension.yaml file (in the extension’s
runtime/
module) has the minimum metadata set:-
name
-
description
(unless you have it already set in theruntime/pom.xml’s `<description>
element, which is the recommended approach)
-
-
Your extension is published in Maven Central
-
Your extension repository is configured to use the ecosystem-ci.
然后,您必须创建一个拉取请求,在 Quarkus Extension Catalog 的 extensions/ 目录中添加一个 your-extension.yaml 文件。该 YAML 必须具有以下结构:
Then you must create a pull request adding a your-extension.yaml
file in the extensions/
directory in the Quarkus Extension Catalog. The YAML must have the following structure:
group-id: <YOUR_EXTENSION_RUNTIME_GROUP_ID>
artifact-id: <YOUR_EXTENSION_RUNTIME_ARTIFACT_ID>
当您的仓库包含多个扩展时,您需要为每个单独的扩展创建单独的文件,而不是为整个仓库创建一个文件。 |
When your repository contains multiple extensions, you need to create a separate file for each individual extension, not just one file for the entire repository. |
仅此而已。拉取请求合并后,一个计划任务将检查 Maven Central 是否有新版本,并更新 Quarkus Extension Registry。
That’s all. Once the pull request is merged, a scheduled job will check Maven Central for new versions and update the Quarkus Extension Registry.