Using Apache Kafka with Schema Registry and Avro
This guide shows how your Quarkus application can use Apache Kafka, Avro serialized records, and connect to a schema registry (such as the Confluent Schema Registry or Apicurio Registry).

If you are not familiar with Kafka and Kafka in Quarkus in particular, consider first going through the Using Apache Kafka with Reactive Messaging guide.
Prerequisites
The usual Quarkus prerequisites apply: roughly 30 minutes, an IDE, JDK 17+ installed with JAVA_HOME configured appropriately, Apache Maven, and a working container runtime (Docker or Podman), which the Dev Services used in this guide rely on. Optionally, Mandrel or GraalVM if you want to build a native executable.
Architecture
In this guide we are going to implement a REST resource, namely MovieResource, that will consume movie DTOs and put them in a Kafka topic.
Then, we will implement a consumer that will consume and collect messages from the same topic. The collected messages will then be exposed by another resource, ConsumedMovieResource, via Server-Sent Events.
The Movies will be serialized and deserialized using Avro. The schema describing the Movie is stored in Apicurio Registry. The same concept applies if you are using the Confluent Avro serde and Confluent Schema Registry.
Solution
We recommend that you follow the instructions in the next sections and create the application step by step. However, you can go right to the completed example.
Clone the Git repository: git clone {quickstarts-clone-url}, or download an {quickstarts-archive-url}[archive].
The solution is located in the kafka-avro-schema-quickstart directory.
Creating the Maven Project
First, we need a new project. Create a new project with the following command:
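For example, with the Quarkus CLI (a sketch; the extension list is an assumption based on the APIs this guide uses, namely Quarkus REST with Jackson, Kafka messaging, and the Apicurio Registry Avro serde):

quarkus create app org.acme:kafka-avro-schema-quickstart \
    --extension=rest-jackson,messaging-kafka,apicurio-registry-avro
cd kafka-avro-schema-quickstart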
If you use Confluent Schema Registry, you don't need the quarkus-apicurio-registry-avro extension; use the quarkus-confluent-registry-avro extension instead, as described in Using the Confluent Schema Registry below.
Avro schema
Apache Avro is a data serialization system. Data structures are described using schemas. The first thing we need to do is to create a schema describing the Movie structure. Create a file called src/main/avro/movie.avsc with the schema for our record (Kafka message):
{
  "namespace": "org.acme.kafka.quarkus",
  "type": "record",
  "name": "Movie",
  "fields": [
    {
      "name": "title",
      "type": "string"
    },
    {
      "name": "year",
      "type": "int"
    }
  ]
}
If you build the project with:
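For example, with Maven (quarkus build and ./gradlew build are the CLI and Gradle equivalents):

./mvnw install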
the movie.avsc schema will get compiled to a Movie.java file placed in the target/generated-sources/avsc directory.
Take a look at the Avro specification to learn more about the Avro syntax and supported types.
With Quarkus, there's no need to use a specific Maven plugin to process the Avro schema; this is all done for you by the extension's code generation mechanism (see Avro code generation details below).
If you run the project with:
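For example, with Maven (quarkus dev and ./gradlew quarkusDev are the CLI and Gradle equivalents):

./mvnw quarkus:dev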
the changes you make to the schema file will be automatically applied to the generated Java files.
The Movie producer
Having defined the schema, we can now jump to implementing the MovieResource.

Let's open the MovieResource, inject an Emitter of Movie DTOs, and implement a @POST method that consumes Movie and sends it through the Emitter:
package org.acme.kafka;

import org.acme.kafka.quarkus.Movie;
import org.eclipse.microprofile.reactive.messaging.Channel;
import org.eclipse.microprofile.reactive.messaging.Emitter;
import org.jboss.logging.Logger;

import jakarta.ws.rs.POST;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.core.Response;

@Path("/movies")
public class MovieResource {

    private static final Logger LOGGER = Logger.getLogger(MovieResource.class);

    @Channel("movies")
    Emitter<Movie> emitter;

    @POST
    public Response enqueueMovie(Movie movie) {
        LOGGER.infof("Sending movie %s to Kafka", movie.getTitle());
        emitter.send(movie);
        return Response.accepted().build();
    }
}
Now, we need to map the movies channel (the Emitter emits to this channel) to a Kafka topic. To achieve this, edit the application.properties file, and add the following content:
# set the connector for the outgoing channel to `smallrye-kafka`
mp.messaging.outgoing.movies.connector=smallrye-kafka
# set the topic name for the channel to `movies`
mp.messaging.outgoing.movies.topic=movies
# automatically register the schema with the registry, if not present
mp.messaging.outgoing.movies.apicurio.registry.auto-register=true
You might have noticed that we didn't define the value.serializer. That's because Quarkus detects that the channel emits an Avro-generated class and configures the matching Avro serializer for you. If you use Confluent Schema Registry, you don't have to configure the value.serializer either; it is detected automatically as well.
The Movie consumer
So, we can write records into Kafka containing our Movie data. That data is serialized using Avro. Now, it's time to implement a consumer for them.

Let's create a ConsumedMovieResource that will consume Movie messages from the movies-from-kafka channel and expose them via Server-Sent Events:
package org.acme.kafka;

import jakarta.enterprise.context.ApplicationScoped;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;

import org.acme.kafka.quarkus.Movie;
import org.eclipse.microprofile.reactive.messaging.Channel;
import org.jboss.resteasy.reactive.RestStreamElementType;

import io.smallrye.mutiny.Multi;

@ApplicationScoped
@Path("/consumed-movies")
public class ConsumedMovieResource {

    @Channel("movies-from-kafka")
    Multi<Movie> movies;

    @GET
    @Produces(MediaType.SERVER_SENT_EVENTS)
    @RestStreamElementType(MediaType.TEXT_PLAIN)
    public Multi<String> stream() {
        return movies.map(movie -> String.format("'%s' from %s", movie.getTitle(), movie.getYear()));
    }
}
The last bit of the application's code is the configuration of the movies-from-kafka channel in application.properties:
# set the connector for the incoming channel to `smallrye-kafka`
mp.messaging.incoming.movies-from-kafka.connector=smallrye-kafka
# set the topic name for the channel to `movies`
mp.messaging.incoming.movies-from-kafka.topic=movies
# disable auto-commit, Reactive Messaging handles it itself
mp.messaging.incoming.movies-from-kafka.enable.auto.commit=false
mp.messaging.incoming.movies-from-kafka.auto.offset.reset=earliest
You might have noticed that we didn't define the value.deserializer. Quarkus detects that the channel consumes an Avro-generated class and configures the matching Avro deserializer for you. If you use Confluent Schema Registry, you don't have to configure the value.deserializer either; it is detected automatically as well.
Running the application
Start the application in dev mode:
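For example, with Maven:

./mvnw quarkus:dev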
A Kafka broker and an Apicurio Registry instance are started automatically thanks to Dev Services. See Dev Services for Kafka and Dev Services for Apicurio Registry for more details.
You might have noticed that we didn't configure the schema registry URL anywhere. This is because Dev Services for Apicurio Registry configures all Kafka channels in Quarkus Messaging to use the automatically started registry instance.

Apicurio Registry, in addition to its native API, also exposes an endpoint that is API-compatible with Confluent Schema Registry. Therefore, this automatic configuration works both for the Apicurio Registry serde and the Confluent Schema Registry serde.

However, note that there's no Dev Services support for running Confluent Schema Registry itself. If you want to use a running instance of Confluent Schema Registry, configure its URL, together with the URL of a Kafka broker:
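A minimal sketch of that configuration in application.properties, assuming a broker on localhost:9092 and a Confluent Schema Registry on localhost:8081 (both values are assumptions for a local setup):

kafka.bootstrap.servers=PLAINTEXT://localhost:9092
mp.messaging.connector.smallrye-kafka.schema.registry.url=http://localhost:8081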
In the second terminal, query the ConsumedMovieResource resource with curl:
curl -N http://localhost:8080/consumed-movies
In the third one, post a few movies:
curl --header "Content-Type: application/json" \
--request POST \
--data '{"title":"The Shawshank Redemption","year":1994}' \
http://localhost:8080/movies
curl --header "Content-Type: application/json" \
--request POST \
--data '{"title":"The Godfather","year":1972}' \
http://localhost:8080/movies
curl --header "Content-Type: application/json" \
--request POST \
--data '{"title":"The Dark Knight","year":2008}' \
http://localhost:8080/movies
curl --header "Content-Type: application/json" \
--request POST \
--data '{"title":"12 Angry Men","year":1957}' \
http://localhost:8080/movies
Observe what is printed in the second terminal. You should see something along the lines of:
data:'The Shawshank Redemption' from 1994
data:'The Godfather' from 1972
data:'The Dark Knight' from 2008
data:'12 Angry Men' from 1957
Running in JVM or Native mode
When not running in dev or test mode, you will need to start your own Kafka broker and Apicurio Registry. The easiest way to get them running is to use docker-compose to start the appropriate containers.
If you use Confluent Schema Registry, you already have a Kafka broker and a Confluent Schema Registry instance running and configured. You can ignore the docker-compose instructions in this section.
Create a docker-compose.yaml file at the root of the project with the following content:
version: '2'

services:

  zookeeper:
    image: quay.io/strimzi/kafka:0.41.0-kafka-3.7.0
    command: [
      "sh", "-c",
      "bin/zookeeper-server-start.sh config/zookeeper.properties"
    ]
    ports:
      - "2181:2181"
    environment:
      LOG_DIR: /tmp/logs

  kafka:
    image: quay.io/strimzi/kafka:0.41.0-kafka-3.7.0
    command: [
      "sh", "-c",
      "bin/kafka-server-start.sh config/server.properties --override listeners=$${KAFKA_LISTENERS} --override advertised.listeners=$${KAFKA_ADVERTISED_LISTENERS} --override zookeeper.connect=$${KAFKA_ZOOKEEPER_CONNECT}"
    ]
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    environment:
      LOG_DIR: "/tmp/logs"
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181

  schema-registry:
    image: apicurio/apicurio-registry-mem:2.4.2.Final
    ports:
      - 8081:8080
    depends_on:
      - kafka
    environment:
      QUARKUS_PROFILE: prod
Before starting the application, let's first start the Kafka broker and Apicurio Registry:
docker-compose up
To stop the containers, use docker-compose down.
You can build the application with:
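For example, with Maven:

./mvnw install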
And run it in JVM mode with:
java -Dmp.messaging.connector.smallrye-kafka.apicurio.registry.url=http://localhost:8081/apis/registry/v2 -jar target/quarkus-app/quarkus-run.jar
By default, the application tries to connect to a Kafka broker listening at localhost:9092. You can configure the bootstrap server with the kafka.bootstrap.servers property, as shown for the native executable below.
Specifying the registry URL on the command line is not very convenient, so you can add a configuration property only for the prod profile:
%prod.mp.messaging.connector.smallrye-kafka.apicurio.registry.url=http://localhost:8081/apis/registry/v2
You can build a native executable with:
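For example, with Maven (quarkus build --native is the CLI equivalent):

./mvnw install -Dnative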
and run it with:
./target/kafka-avro-schema-quickstart-1.0.0-SNAPSHOT-runner -Dkafka.bootstrap.servers=localhost:9092
Testing the application
As mentioned above, Dev Services for Kafka and Apicurio Registry automatically start and configure a Kafka broker and an Apicurio Registry instance in dev mode and for tests. Hence, we don't have to set up Kafka and Apicurio Registry ourselves. We can just focus on writing the test.
First, let's add test dependencies on the REST Client and Awaitility to the build file:
<!-- we'll use Jakarta REST Client for talking to the SSE endpoint -->
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-rest-client</artifactId>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.awaitility</groupId>
    <artifactId>awaitility</artifactId>
    <scope>test</scope>
</dependency>
testImplementation("io.quarkus:quarkus-rest-client")
testImplementation("org.awaitility:awaitility")
In the test, we will send movies in a loop and check if the ConsumedMovieResource returns what we send.
package org.acme.kafka;

import io.quarkus.test.common.WithTestResource;
import io.quarkus.test.common.http.TestHTTPResource;
import io.quarkus.test.junit.QuarkusTest;
import io.restassured.http.ContentType;
import org.hamcrest.Matchers;
import org.junit.jupiter.api.Test;

import jakarta.ws.rs.client.Client;
import jakarta.ws.rs.client.ClientBuilder;
import jakarta.ws.rs.client.WebTarget;
import jakarta.ws.rs.sse.SseEventSource;

import java.net.URI;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import static io.restassured.RestAssured.given;
import static java.util.concurrent.TimeUnit.MILLISECONDS;
import static java.util.concurrent.TimeUnit.SECONDS;
import static org.awaitility.Awaitility.await;
import static org.hamcrest.MatcherAssert.assertThat;

@QuarkusTest
public class MovieResourceTest {

    @TestHTTPResource("/consumed-movies")
    URI consumedMovies;

    @Test
    public void testHelloEndpoint() throws InterruptedException {
        // create a client for `ConsumedMovieResource` and collect the consumed resources in a list
        Client client = ClientBuilder.newClient();
        WebTarget target = client.target(consumedMovies);

        List<String> received = new CopyOnWriteArrayList<>();

        SseEventSource source = SseEventSource.target(target).build();
        source.register(inboundSseEvent -> received.add(inboundSseEvent.readData()));

        // in a separate thread, feed the `MovieResource`
        ExecutorService movieSender = startSendingMovies();

        source.open();

        // check if, after at most 5 seconds, we have at least 2 items collected, and they are what we expect
        await().atMost(5, SECONDS).until(() -> received.size() >= 2);
        assertThat(received, Matchers.hasItems("'The Shawshank Redemption' from 1994",
                "'12 Angry Men' from 1957"));
        source.close();

        // shutdown the executor that is feeding the `MovieResource`
        movieSender.shutdownNow();
        movieSender.awaitTermination(5, SECONDS);
    }

    private ExecutorService startSendingMovies() {
        ExecutorService executorService = Executors.newSingleThreadExecutor();
        executorService.execute(() -> {
            while (true) {
                given()
                        .contentType(ContentType.JSON)
                        .body("{\"title\":\"The Shawshank Redemption\",\"year\":1994}")
                        .when()
                        .post("/movies")
                        .then()
                        .statusCode(202);

                given()
                        .contentType(ContentType.JSON)
                        .body("{\"title\":\"12 Angry Men\",\"year\":1957}")
                        .when()
                        .post("/movies")
                        .then()
                        .statusCode(202);

                try {
                    Thread.sleep(200L);
                } catch (InterruptedException e) {
                    break;
                }
            }
        });
        return executorService;
    }
}
We modified the MovieResourceTest that was generated together with the project; its native test counterpart runs the same test against the native executable. To run it, execute:
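For example, with Maven, the native build that also runs the native integration tests is usually:

./mvnw verify -Dnative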
Manual setup
If we couldn't use Dev Services and wanted to start a Kafka broker and Apicurio Registry instance manually, we would define a QuarkusTestResourceLifecycleManager.
<dependency>
    <groupId>io.strimzi</groupId>
    <artifactId>strimzi-test-container</artifactId>
    <version>0.105.0</version>
    <scope>test</scope>
    <exclusions>
        <exclusion>
            <groupId>org.apache.logging.log4j</groupId>
            <artifactId>log4j-core</artifactId>
        </exclusion>
    </exclusions>
</dependency>
testImplementation("io.strimzi:strimzi-test-container:0.105.0") {
exclude group: "org.apache.logging.log4j", module: "log4j-core"
}
package org.acme.kafka;

import java.util.HashMap;
import java.util.Map;

import org.testcontainers.containers.GenericContainer;

import io.quarkus.test.common.QuarkusTestResourceLifecycleManager;
import io.strimzi.StrimziKafkaContainer;

public class KafkaAndSchemaRegistryTestResource implements QuarkusTestResourceLifecycleManager {

    private final StrimziKafkaContainer kafka = new StrimziKafkaContainer();

    private GenericContainer<?> registry;

    @Override
    public Map<String, String> start() {
        kafka.start();
        registry = new GenericContainer<>("apicurio/apicurio-registry-mem:2.4.2.Final")
                .withExposedPorts(8080)
                .withEnv("QUARKUS_PROFILE", "prod");
        registry.start();
        Map<String, String> properties = new HashMap<>();
        properties.put("mp.messaging.connector.smallrye-kafka.apicurio.registry.url",
                "http://" + registry.getHost() + ":" + registry.getMappedPort(8080) + "/apis/registry/v2");
        properties.put("kafka.bootstrap.servers", kafka.getBootstrapServers());
        return properties;
    }

    @Override
    public void stop() {
        registry.stop();
        kafka.stop();
    }
}
@QuarkusTest
@WithTestResource(KafkaAndSchemaRegistryTestResource.class)
public class MovieResourceTest {
...
}
Using compatible versions of the Apicurio Registry
The quarkus-apicurio-registry-avro extension depends on recent versions of the Apicurio Registry client, and most versions of the Apicurio Registry server and client are backwards compatible. For some of them, you need to make sure that the client used by the serdes is compatible with the server.
For example, with the Apicurio Registry Dev Service, if you set the image name to use version 2.1.5.Final:
quarkus.apicurio-registry.devservices.image-name=quay.io/apicurio/apicurio-registry-mem:2.1.5.Final
you need to make sure that the apicurio-registry-serdes-avro-serde dependency and the REST client apicurio-common-rest-client-vertx dependency are set to compatible versions:
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-apicurio-registry-avro</artifactId>
    <exclusions>
        <exclusion>
            <groupId>io.apicurio</groupId>
            <artifactId>apicurio-common-rest-client-vertx</artifactId>
        </exclusion>
        <exclusion>
            <groupId>io.apicurio</groupId>
            <artifactId>apicurio-registry-serdes-avro-serde</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>io.apicurio</groupId>
    <artifactId>apicurio-registry-client</artifactId>
    <version>2.1.5.Final</version>
</dependency>
<dependency>
    <groupId>io.apicurio</groupId>
    <artifactId>apicurio-registry-common</artifactId>
    <version>2.1.5.Final</version>
</dependency>
<dependency>
    <groupId>io.apicurio</groupId>
    <artifactId>apicurio-registry-serdes-avro-serde</artifactId>
    <version>2.1.5.Final</version>
    <exclusions>
        <exclusion>
            <groupId>io.apicurio</groupId>
            <artifactId>apicurio-common-rest-client-jdk</artifactId>
        </exclusion>
        <exclusion>
            <groupId>io.apicurio</groupId>
            <artifactId>apicurio-registry-client</artifactId>
        </exclusion>
        <exclusion>
            <groupId>io.apicurio</groupId>
            <artifactId>apicurio-registry-common</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>io.apicurio</groupId>
    <artifactId>apicurio-common-rest-client-vertx</artifactId>
    <version>0.1.5.Final</version>
</dependency>
dependencies {
    implementation(platform("{quarkus-platform-groupid}:quarkus-bom:2.12.3.Final"))
    ...
    implementation("io.quarkus:quarkus-apicurio-registry-avro")
    implementation("io.apicurio:apicurio-registry-serdes-avro-serde") {
        exclude group: "io.apicurio", module: "apicurio-common-rest-client-jdk"
        exclude group: "io.apicurio", module: "apicurio-registry-client"
        exclude group: "io.apicurio", module: "apicurio-registry-common"
        version {
            strictly "2.1.5.Final"
        }
    }
    implementation("io.apicurio:apicurio-registry-client") {
        version {
            strictly "2.1.5.Final"
        }
    }
    implementation("io.apicurio:apicurio-registry-common") {
        version {
            strictly "2.1.5.Final"
        }
    }
    implementation("io.apicurio:apicurio-common-rest-client-vertx") {
        version {
            strictly "0.1.5.Final"
        }
    }
}
Known previous compatible versions of apicurio-registry-client and apicurio-common-rest-client-vertx are the following:
- apicurio-registry-client 2.1.5.Final with apicurio-common-rest-client-vertx 0.1.5.Final
- apicurio-registry-client 2.3.1.Final with apicurio-common-rest-client-vertx 0.1.13.Final
Using the Confluent Schema Registry
If you want to use the Confluent Schema Registry, you need the quarkus-confluent-registry-avro extension, instead of the quarkus-apicurio-registry-avro extension. Also, you need to add a few dependencies and a custom Maven repository to your pom.xml / build.gradle file:
<dependencies>
    ...
    <!-- the extension -->
    <dependency>
        <groupId>io.quarkus</groupId>
        <artifactId>quarkus-confluent-registry-avro</artifactId>
    </dependency>
    <!-- Confluent registry libraries use Jakarta REST client -->
    <dependency>
        <groupId>io.quarkus</groupId>
        <artifactId>quarkus-rest-client</artifactId>
    </dependency>
    <dependency>
        <groupId>io.confluent</groupId>
        <artifactId>kafka-avro-serializer</artifactId>
        <version>7.2.0</version>
        <exclusions>
            <exclusion>
                <groupId>jakarta.ws.rs</groupId>
                <artifactId>jakarta.ws.rs-api</artifactId>
            </exclusion>
        </exclusions>
    </dependency>
</dependencies>

<repositories>
    <!-- io.confluent:kafka-avro-serializer is only available from this repository: -->
    <repository>
        <id>confluent</id>
        <url>https://packages.confluent.io/maven/</url>
        <snapshots>
            <enabled>false</enabled>
        </snapshots>
    </repository>
</repositories>
repositories {
    ...
    maven {
        url "https://packages.confluent.io/maven/"
    }
}

dependencies {
    ...
    implementation("io.quarkus:quarkus-confluent-registry-avro")
    // Confluent registry libraries use Jakarta REST client
    implementation("io.quarkus:quarkus-rest-client")
    implementation("io.confluent:kafka-avro-serializer:7.2.0") {
        exclude group: "jakarta.ws.rs", module: "jakarta.ws.rs-api"
    }
}
In JVM mode, any version of io.confluent:kafka-avro-serializer can be used. In native mode, Quarkus supports the following versions: 6.2.x, 7.0.x, 7.1.x, 7.2.x, 7.3.x.
For versions 7.4.x and 7.5.x, due to an issue with the Confluent Schema Serializer, you need to add another dependency:
<dependency>
    <groupId>com.fasterxml.jackson.dataformat</groupId>
    <artifactId>jackson-dataformat-csv</artifactId>
</dependency>
dependencies {
    implementation("com.fasterxml.jackson.dataformat:jackson-dataformat-csv")
}
For any other versions, the native configuration may need to be adjusted.
Avro code generation details
In this guide we used the Quarkus code generation mechanism to generate Java files from Avro schemas. Under the hood, the mechanism uses org.apache.avro:avro-compiler.

You can use the following configuration properties to alter how it works (a sketch of how a couple of them look in application.properties follows the list):
- avro.codegen.[avsc|avdl|avpr].imports - a list of files or directories that should be compiled first, thus making them importable by subsequently compiled schemas. Note that imported files should not reference each other. All paths should be relative to the src/[main|test]/avro directory, or the avro sub-directory in any source directory configured by the build system. Passed as a comma-separated list.
- avro.codegen.stringType - the Java type to use for Avro strings. May be one of CharSequence, String or Utf8. Defaults to String.
- avro.codegen.createOptionalGetters - enables generating the getOptional… methods that return an Optional of the requested type. Defaults to false.
- avro.codegen.enableDecimalLogicalType - determines whether to use Java classes for decimal types. Defaults to false.
- avro.codegen.createSetters - determines whether to create setters for the fields of the record. Defaults to false.
- avro.codegen.gettersReturnOptional - enables generating get… methods that return an Optional of the requested type. Defaults to false.
- avro.codegen.optionalGettersForNullableFieldsOnly - works in conjunction with the gettersReturnOptional option. If it is set, Optional getters will be generated only for fields that are nullable; if the field is mandatory, a regular getter is generated. Defaults to false.
Further reading
- SmallRye Reactive Messaging Kafka documentation
- How to Use Kafka, Schema Registry and Avro with Quarkus - a blog post on which this guide is based. It gives a good introduction to Avro and the concept of schema registry.