Scheduling Periodic Tasks with Quartz
Modern applications often need to run specific tasks periodically. In this guide, you learn how to schedule periodic clustered tasks using the Quartz extension.

include::{includes}/extension-status.adoc[]
If you only need to run an in-memory scheduler, use the Scheduler extension.
Architecture
In this guide, we are going to expose a single REST API, tasks, to visualise the list of tasks created by a Quartz job running every 10 seconds.
Solution
We recommend that you follow the instructions in the next sections and create the application step by step. However, you can go right to the completed example.
Clone the Git repository: git clone {quickstarts-clone-url}, or download an {quickstarts-archive-url}[archive].
The solution is located in the quartz-quickstart directory.
Creating the Maven project
First, we need a new project. Create a new project with the following command:
include::{includes}/devtools/create-app.adoc[]
It generates:
- the Maven structure
- a landing page accessible on http://localhost:8080
- example Dockerfile files for both native and jvm modes
- the application configuration file
The Maven project also imports the Quarkus Quartz extension.
If you already have your Quarkus project configured, you can add the quartz extension to your project by running the following command in your project base directory:
include::{includes}/devtools/extension-add.adoc[]
This will add the following to your build file:

pom.xml:

<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-quartz</artifactId>
</dependency>

build.gradle:

implementation("io.quarkus:quarkus-quartz")
To use a JDBC store, the quarkus-agroal extension, which provides the datasource support, is also required.
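For reference, a sketch of the additional extensions this quickstart relies on, assuming the PostgreSQL setup used later in this guide: quarkus-jdbc-postgresql brings in the Agroal datasource support, Hibernate ORM with Panache backs the Task entity, and Flyway runs the SQL migration.

<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-jdbc-postgresql</artifactId>
</dependency>
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-hibernate-orm-panache</artifactId>
</dependency>
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-flyway</artifactId>
</dependency>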
Creating the Task Entity
In the org.acme.quartz package, create the Task class with the following content:
package org.acme.quartz;

import jakarta.persistence.Entity;
import java.time.Instant;
import jakarta.persistence.Table;

import io.quarkus.hibernate.orm.panache.PanacheEntity;

@Entity
@Table(name="TASKS")
public class Task extends PanacheEntity { 1

    public Instant createdAt;

    public Task() {
        createdAt = Instant.now();
    }

    public Task(Instant time) {
        this.createdAt = time;
    }
}
1 | Declare the entity using Panache |
Creating a scheduled job
In the org.acme.quartz package, create the TaskBean class with the following content:
package org.acme.quartz;

import jakarta.enterprise.context.ApplicationScoped;
import jakarta.transaction.Transactional;

import io.quarkus.scheduler.Scheduled;

@ApplicationScoped 1
public class TaskBean {

    @Transactional
    @Scheduled(every = "10s", identity = "task-job") 2
    void schedule() {
        Task task = new Task(); 3
        task.persist(); 4
    }
}
1 | Declare the bean in the application scope |
2 | Use the @Scheduled annotation to instruct Quarkus to run this method every 10 seconds and set the unique identifier for this job. |
3 | Create a new Task with the current start time. |
4 | Persist the task in database using Panache. |
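If you prefer a cron-based schedule instead of a fixed interval, the same @Scheduled annotation accepts a cron expression. A minimal sketch of an alternative method in the same TaskBean (the cron-task-job identity is illustrative):

@Transactional
@Scheduled(cron = "0/10 * * * * ?", identity = "cron-task-job")
void cronSchedule() {
    // Quartz cron expression: fires every 10 seconds
    Task task = new Task();
    task.persist();
}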
Scheduling Jobs Programmatically
An injected io.quarkus.scheduler.Scheduler can be used to schedule a job programmatically. However, it is also possible to leverage the Quartz API directly. You can inject the underlying org.quartz.Scheduler in any bean:
package org.acme.quartz;

import jakarta.enterprise.context.ApplicationScoped;
import jakarta.enterprise.event.Observes;
import jakarta.inject.Inject;
import jakarta.transaction.Transactional;

import org.quartz.Job;
import org.quartz.JobBuilder;
import org.quartz.JobDetail;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;
import org.quartz.SchedulerException;
import org.quartz.SimpleScheduleBuilder;
import org.quartz.Trigger;
import org.quartz.TriggerBuilder;

import io.quarkus.runtime.StartupEvent;

@ApplicationScoped
public class TaskBean {

    @Inject
    org.quartz.Scheduler quartz; 1

    void onStart(@Observes StartupEvent event) throws SchedulerException {
        JobDetail job = JobBuilder.newJob(MyJob.class)
                .withIdentity("myJob", "myGroup")
                .build();
        Trigger trigger = TriggerBuilder.newTrigger()
                .withIdentity("myTrigger", "myGroup")
                .startNow()
                .withSchedule(
                        SimpleScheduleBuilder.simpleSchedule()
                                .withIntervalInSeconds(10)
                                .repeatForever())
                .build();
        quartz.scheduleJob(job, trigger); 2
    }

    @Transactional
    void performTask() {
        Task task = new Task();
        task.persist();
    }

    // A new instance of MyJob is created by Quartz for every job execution
    public static class MyJob implements Job {

        @Inject
        TaskBean taskBean;

        public void execute(JobExecutionContext context) throws JobExecutionException {
            taskBean.performTask(); 3
        }
    }
}
1 | Inject the underlying org.quartz.Scheduler instance. |
2 | Schedule a new job using the Quartz API. |
3 | Invoke the TaskBean#performTask() method from the job. Jobs are also container-managed beans if they belong to a bean archive. |
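The injected org.quartz.Scheduler can also be used to manage the job later on with the standard Quartz API; a minimal sketch reusing the job and trigger keys from the example above (both calls throw SchedulerException):

// temporarily stop firing the trigger created above
quartz.pauseTrigger(org.quartz.TriggerKey.triggerKey("myTrigger", "myGroup"));

// or remove the job and all of its triggers entirely
quartz.deleteJob(org.quartz.JobKey.jobKey("myJob", "myGroup"));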
By default, the scheduler is not started unless a @Scheduled business method is found. You may need to force the start of the scheduler for purely programmatic scheduling.
Updating the application configuration file
Edit the application.properties file and add the below configuration:
# Quartz configuration
quarkus.quartz.clustered=true 1
quarkus.quartz.store-type=jdbc-cmt 2
quarkus.quartz.misfire-policy.task-job=ignore-misfire-policy 3
# Datasource configuration.
quarkus.datasource.db-kind=postgresql
quarkus.datasource.username=quarkus_test
quarkus.datasource.password=quarkus_test
quarkus.datasource.jdbc.url=jdbc:postgresql://localhost/quarkus_test
# Hibernate configuration
quarkus.hibernate-orm.database.generation=none
quarkus.hibernate-orm.log.sql=true
quarkus.hibernate-orm.sql-load-script=no-file
# flyway configuration
quarkus.flyway.connect-retries=10
quarkus.flyway.table=flyway_quarkus_history
quarkus.flyway.migrate-at-start=true
quarkus.flyway.baseline-on-migrate=true
quarkus.flyway.baseline-version=1.0
quarkus.flyway.baseline-description=Quartz
1 | Indicate that the scheduler will be run in clustered mode |
2 | Use the database store to persist job-related information so that it can be shared between nodes |
3 | The misfire policy can be configured for each job. task-job is the identity of the job. |
Valid misfire policies for cron jobs are: smart-policy, ignore-misfire-policy, fire-now and cron-trigger-do-nothing.
Valid misfire policies for interval jobs are: smart-policy, ignore-misfire-policy, fire-now, simple-trigger-reschedule-now-with-existing-repeat-count, simple-trigger-reschedule-now-with-remaining-repeat-count, simple-trigger-reschedule-next-with-existing-count and simple-trigger-reschedule-next-with-remaining-count.
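The misfire policy is always keyed by the job identity, as shown in the configuration above; a short illustrative sketch combining the interval job from this guide with a hypothetical cron job identified as my-cron-job:

# interval job declared with @Scheduled(every = "10s", identity = "task-job")
quarkus.quartz.misfire-policy.task-job=simple-trigger-reschedule-now-with-remaining-repeat-count
# hypothetical cron job declared with @Scheduled(cron = "...", identity = "my-cron-job")
quarkus.quartz.misfire-policy.my-cron-job=cron-trigger-do-nothing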
Creating a REST resource and a test
Create the org.acme.quartz.TaskResource class with the following content:
package org.acme.quartz;

import java.util.List;

import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;

@Path("/tasks")
@Produces(MediaType.APPLICATION_JSON)
public class TaskResource {

    @GET
    public List<Task> listAll() {
        return Task.listAll(); 1
    }
}
1 | Retrieve the list of created tasks from the database |
You also have the option to create an org.acme.quartz.TaskResourceTest test with the following content:
package org.acme.quartz;

import io.quarkus.test.junit.QuarkusTest;

import static org.hamcrest.Matchers.greaterThanOrEqualTo;

import org.junit.jupiter.api.Test;

import static io.restassured.RestAssured.given;
import static org.hamcrest.CoreMatchers.is;

@QuarkusTest
public class TaskResourceTest {

    @Test
    public void tasks() throws InterruptedException {
        Thread.sleep(1000); // wait at least a second to have the first task created
        given()
                .when().get("/tasks")
                .then()
                .statusCode(200)
                .body("size()", is(greaterThanOrEqualTo(1))); 1
    }
}
1 | Ensure that we have a 200 response and at least one task created |
Creating Quartz Tables
Add a SQL migration file named src/main/resources/db/migration/V2.0.0__QuarkusQuartzTasks.sql with the content copied from {quickstarts-blob-url}/quartz-quickstart/src/main/resources/db/migration/V2.0.0__QuarkusQuartzTasks.sql[V2.0.0__QuarkusQuartzTasks.sql].
Configuring the load balancer
In the root directory, create a nginx.conf file with the following content:
user nginx;

events {
    worker_connections 1000;
}

http {
    server {
        listen 8080;
        location / {
            proxy_pass http://tasks:8080; 1
        }
    }
}
1 | Route all traffic to our tasks application |
Setting Application Deployment
In the root directory, create a docker-compose.yml file with the following content:
version: '3'

services:
  tasks: 1
    image: quarkus-quickstarts/quartz:1.0
    build:
      context: ./
      dockerfile: src/main/docker/Dockerfile.${QUARKUS_MODE:-jvm}
    environment:
      QUARKUS_DATASOURCE_URL: jdbc:postgresql://postgres/quarkus_test
    networks:
      - tasks-network
    depends_on:
      - postgres

  nginx: 2
    image: nginx:1.17.6
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    depends_on:
      - tasks
    ports:
      - 8080:8080
    networks:
      - tasks-network

  postgres: 3
    image: postgres:14.1
    container_name: quarkus_test
    environment:
      - POSTGRES_USER=quarkus_test
      - POSTGRES_PASSWORD=quarkus_test
      - POSTGRES_DB=quarkus_test
    ports:
      - 5432:5432
    networks:
      - tasks-network

networks:
  tasks-network:
    driver: bridge
1 | Define the tasks service |
2 | Define the nginx load balancer to route incoming traffic to an appropriate node |
3 | Define the configuration to run the database |
Running the database
In a separate terminal, run the below command:
docker-compose up postgres 1
1 | Start the database instance using the configuration options supplied in the docker-compose.yml file |
Run the application in Dev Mode
Run the application with:
include::{includes}/devtools/dev.adoc[]
After a few seconds, open another terminal and run curl localhost:8080/tasks to verify that we have at least one task created.
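The endpoint returns the persisted tasks as a JSON array; an illustrative response shape (the values are made up, and the exact timestamp format depends on your JSON serialization settings):

[
  {"id": 1, "createdAt": "2023-01-01T10:00:10.123Z"},
  {"id": 2, "createdAt": "2023-01-01T10:00:20.123Z"}
]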
As usual, the application can be packaged using:
include::{includes}/devtools/build.adoc[]
and executed with java -jar target/quarkus-app/quarkus-run.jar.
You can also generate the native executable with:
include::{includes}/devtools/build-native.adoc[]
Packaging the application and running several instances
The application can be packaged using:
include::{includes}/devtools/build.adoc[]
Once the build is successful, run the below command:
docker-compose up --scale tasks=2 --scale nginx=1 1
1 | Start two instances of the application and a load balancer |
After a few seconds, in another terminal, run curl localhost:8080/tasks to verify that tasks are created only at distinct instants, at intervals of 10 seconds.
You can also generate the native executable with:
include::{includes}/devtools/build-native.adoc[]
It’s the responsibility of the deployer to clear/remove the previous state, i.e. stale jobs and triggers. Moreover, the applications that form the "Quartz cluster" should be identical, otherwise an unpredictable result may occur.
Configuring the Instance ID
By default, the scheduler is configured with a simple instance ID generator that uses the machine hostname and the current timestamp, so you don't need to worry about setting an appropriate instance-id for each node when running in clustered mode. However, you can define a specific instance-id yourself by setting a configuration property reference or by using another generator.
quarkus.quartz.instance-id=${HOST:AUTO} 1
1 | This will expand the HOST environment variable and use AUTO as the default value if HOST is not set. |
The example below configures the generator org.quartz.simpl.HostnameInstanceIdGenerator named hostname, so you can use its name as the instance-id. That generator uses just the machine hostname and can be appropriate in environments that provide unique names for the nodes.
quarkus.quartz.instance-id=hostname
quarkus.quartz.instance-id-generators.hostname.class=org.quartz.simpl.HostnameInstanceIdGenerator
It’s the responsibility of the deployer to define appropriate instance identifiers. Moreover, the applications that form the "Quartz cluster" should contain unique instance identifiers, otherwise an unpredictable result may occur. It’s recommended to use an appropriate instance ID generator rather than specifying explicit identifiers.
Registering Plugin and Listeners
You can register plugins, job-listeners and trigger-listeners through Quarkus configuration.
The example below registers the plugin org.quartz.plugins.history.LoggingJobHistoryPlugin named jobHistory with the property jobSuccessMessage defined as Job [{1}.{0}] execution complete and reports: {8}:
quarkus.quartz.plugins.jobHistory.class=org.quartz.plugins.history.LoggingJobHistoryPlugin
quarkus.quartz.plugins.jobHistory.properties.jobSuccessMessage=Job [{1}.{0}] execution complete and reports: {8}
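Job and trigger listeners are registered in the same way; a minimal sketch, assuming a hypothetical org.acme.quartz.MyJobListener class that implements org.quartz.JobListener:

quarkus.quartz.job-listeners.myJobListener.class=org.acme.quartz.MyJobListener
# optional listener properties, applied via setters on the listener class
# quarkus.quartz.job-listeners.myJobListener.properties.someProperty=someValue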
You can also register a listener programmatically with an injected org.quartz.Scheduler:
import jakarta.enterprise.context.ApplicationScoped;
import jakarta.enterprise.event.Observes;
import org.quartz.SchedulerException;
import io.quarkus.runtime.StartupEvent;

@ApplicationScoped
public class MyListenerManager {
    void onStart(@Observes StartupEvent event, org.quartz.Scheduler scheduler) throws SchedulerException {
        scheduler.getListenerManager().addJobListener(new MyJobListener());
        scheduler.getListenerManager().addTriggerListener(new MyTriggerListener());
    }
}
Run scheduled methods on virtual threads
Methods annotated with @Scheduled can also be annotated with @RunOnVirtualThread. In this case, the method is invoked on a virtual thread.
The method must return void and your Java runtime must provide support for virtual threads. Read the virtual thread guide for more details.
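A minimal sketch of such a method, added to the TaskBean from this guide (assuming @RunOnVirtualThread from io.smallrye.common.annotation and an illustrative vt-task-job identity):

@Transactional
@Scheduled(every = "10s", identity = "vt-task-job")
@RunOnVirtualThread
void scheduleOnVirtualThread() {
    // blocking work is acceptable here; the method runs on a virtual thread
    Task task = new Task();
    task.persist();
}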
This feature cannot be combined with the run-blocking-method-on-quartz-thread option. If run-blocking-method-on-quartz-thread is set, the scheduled method runs on a (platform) thread managed by Quartz.