Reactive Core
The spring-web module contains the following foundational support for reactive web applications:

- For server request processing, there are two levels of support:
  - HttpHandler: Basic contract for HTTP request handling with non-blocking I/O and Reactive Streams back pressure, along with adapters for Reactor Netty, Undertow, Tomcat, Jetty, and any Servlet container.
  - WebHandler API: Slightly higher level, general-purpose web API for request handling, on top of which concrete programming models such as annotated controllers and functional endpoints are built.
- For the client side, there is a basic ClientHttpConnector contract to perform HTTP requests with non-blocking I/O and Reactive Streams back pressure, along with adapters for Reactor Netty, reactive Jetty HttpClient, and Apache HttpComponents. The higher level WebClient used in applications builds on this basic contract.
- For client and server, codecs for serialization and deserialization of HTTP request and response content.
HttpHandler
HttpHandler is a simple contract with a single method to handle a request and a response. It is intentionally minimal, and its main, and only, purpose is to be a minimal abstraction over different HTTP server APIs.

The following table describes the supported server APIs:
| Server name | Server API used | Reactive Streams support |
|---|---|---|
| Netty | Netty API | Reactor Netty |
| Undertow | Undertow API | spring-web: Undertow to Reactive Streams bridge |
| Tomcat | Servlet non-blocking I/O; Tomcat API to read and write ByteBuffers vs byte[] | spring-web: Servlet non-blocking I/O to Reactive Streams bridge |
| Jetty | Servlet non-blocking I/O; Jetty API to write ByteBuffers vs byte[] | spring-web: Servlet non-blocking I/O to Reactive Streams bridge |
| Servlet container | Servlet non-blocking I/O | spring-web: Servlet non-blocking I/O to Reactive Streams bridge |
The following table describes server dependencies (also see supported versions):
| Server name | Group id | Artifact name |
|---|---|---|
| Reactor Netty | io.projectreactor.netty | reactor-netty |
| Undertow | io.undertow | undertow-core |
| Tomcat | org.apache.tomcat.embed | tomcat-embed-core |
| Jetty | org.eclipse.jetty | jetty-server, jetty-servlet |
The code snippets below show using the HttpHandler adapters with each server API:
Reactor Netty
Java:

HttpHandler handler = ...
ReactorHttpHandlerAdapter adapter = new ReactorHttpHandlerAdapter(handler);
HttpServer.create().host(host).port(port).handle(adapter).bindNow();

Kotlin:

val handler: HttpHandler = ...
val adapter = ReactorHttpHandlerAdapter(handler)
HttpServer.create().host(host).port(port).handle(adapter).bindNow()
Undertow
Java:

HttpHandler handler = ...
UndertowHttpHandlerAdapter adapter = new UndertowHttpHandlerAdapter(handler);
Undertow server = Undertow.builder().addHttpListener(port, host).setHandler(adapter).build();
server.start();

Kotlin:

val handler: HttpHandler = ...
val adapter = UndertowHttpHandlerAdapter(handler)
val server = Undertow.builder().addHttpListener(port, host).setHandler(adapter).build()
server.start()
Tomcat
Java:

HttpHandler handler = ...
Servlet servlet = new TomcatHttpHandlerAdapter(handler);
Tomcat server = new Tomcat();
File base = new File(System.getProperty("java.io.tmpdir"));
Context rootContext = server.addContext("", base.getAbsolutePath());
Tomcat.addServlet(rootContext, "main", servlet);
rootContext.addServletMappingDecoded("/", "main");
server.setHost(host);
server.setPort(port);
server.start();

Kotlin:

val handler: HttpHandler = ...
val servlet = TomcatHttpHandlerAdapter(handler)
val server = Tomcat()
val base = File(System.getProperty("java.io.tmpdir"))
val rootContext = server.addContext("", base.absolutePath)
Tomcat.addServlet(rootContext, "main", servlet)
rootContext.addServletMappingDecoded("/", "main")
server.host = host
server.setPort(port)
server.start()
Jetty
Java:

HttpHandler handler = ...
Servlet servlet = new JettyHttpHandlerAdapter(handler);
Server server = new Server();
ServletContextHandler contextHandler = new ServletContextHandler(server, "");
contextHandler.addServlet(new ServletHolder(servlet), "/");
contextHandler.start();
ServerConnector connector = new ServerConnector(server);
connector.setHost(host);
connector.setPort(port);
server.addConnector(connector);
server.start();

Kotlin:

val handler: HttpHandler = ...
val servlet = JettyHttpHandlerAdapter(handler)
val server = Server()
val contextHandler = ServletContextHandler(server, "")
contextHandler.addServlet(ServletHolder(servlet), "/")
contextHandler.start()
val connector = ServerConnector(server)
connector.host = host
connector.port = port
server.addConnector(connector)
server.start()
Servlet Container

To deploy as a WAR to any Servlet container, you can extend and include AbstractReactiveWebInitializer in the WAR. That class wraps an HttpHandler with ServletHttpHandlerAdapter and registers that as a Servlet.
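As a sketch, an initializer of this kind only needs to supply the ApplicationContext to use. The class and configuration names below are illustrative, assuming a WebConfig class that enables WebFlux and declares the application's beans:

```java
import org.springframework.context.ApplicationContext;
import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.web.server.adapter.AbstractReactiveWebInitializer;

// Hypothetical WAR initializer; WebConfig is an assumed @Configuration class.
public class MyWarInitializer extends AbstractReactiveWebInitializer {

    @Override
    protected ApplicationContext createApplicationContext() {
        // The returned context is used to build the HttpHandler that
        // ServletHttpHandlerAdapter wraps and registers as a Servlet.
        return new AnnotationConfigApplicationContext(WebConfig.class);
    }
}
```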
WebHandler API

The org.springframework.web.server package builds on the HttpHandler contract to provide a general-purpose web API for processing requests through a chain of multiple WebExceptionHandler components, multiple WebFilter components, and a single WebHandler component. The chain can be put together with WebHttpHandlerBuilder by simply pointing to a Spring ApplicationContext where components are auto-detected, and/or by registering components with the builder.
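The assembly described above can be sketched as follows; this is a minimal illustration in which WebConfig is an assumed configuration class that declares the special beans:

```java
import org.springframework.context.ApplicationContext;
import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.http.server.reactive.HttpHandler;
import org.springframework.web.server.adapter.WebHttpHandlerBuilder;

// Build the WebHandler processing chain from a Spring ApplicationContext;
// WebExceptionHandler, WebFilter, and WebHandler beans are auto-detected.
ApplicationContext context = new AnnotationConfigApplicationContext(WebConfig.class);
HttpHandler handler = WebHttpHandlerBuilder.applicationContext(context).build();
// 'handler' can now be run with any of the server adapters shown earlier.
```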
While HttpHandler has a simple goal to abstract the use of different HTTP servers, the WebHandler API aims to provide a broader set of features commonly used in web applications, such as:

- User session with attributes.
- Request attributes.
- Resolved Locale or Principal for the request.
- Access to parsed and cached form data.
- Abstractions for multipart data.
- And more.
Special bean types
The table below lists the components that WebHttpHandlerBuilder can auto-detect in a Spring ApplicationContext, or that can be registered directly with it:
| Bean name | Bean type | Count | Description |
|---|---|---|---|
| <any> | WebExceptionHandler | 0..N | Provide handling for exceptions from the chain of WebFilter instances and the target WebHandler. For more details, see Exceptions. |
| <any> | WebFilter | 0..N | Apply interception style logic before and after the rest of the filter chain and the target WebHandler. For more details, see Filters. |
| webHandler | WebHandler | 1 | The handler for the request. |
| webSessionManager | WebSessionManager | 0..1 | The manager for WebSession instances exposed through a method on ServerWebExchange. DefaultWebSessionManager by default. |
| serverCodecConfigurer | ServerCodecConfigurer | 0..1 | For access to HttpMessageReader instances for parsing form data and multipart data that is then exposed through methods on ServerWebExchange. ServerCodecConfigurer.create() by default. |
| localeContextResolver | LocaleContextResolver | 0..1 | The resolver for LocaleContext exposed through a method on ServerWebExchange. AcceptHeaderLocaleContextResolver by default. |
| forwardedHeaderTransformer | ForwardedHeaderTransformer | 0..1 | For processing forwarded type headers, either by extracting and removing them or by removing them only. Not used by default. |
Form Data
ServerWebExchange exposes the following method for accessing form data:

Java:

Mono<MultiValueMap<String, String>> getFormData();

Kotlin:

suspend fun getFormData(): MultiValueMap<String, String>
The DefaultServerWebExchange uses the configured HttpMessageReader to parse form data (application/x-www-form-urlencoded) into a MultiValueMap. By default, FormHttpMessageReader is configured for use by the ServerCodecConfigurer bean (see the Web Handler API).
Multipart Data
ServerWebExchange exposes the following method for accessing multipart data:

Java:

Mono<MultiValueMap<String, Part>> getMultipartData();

Kotlin:

suspend fun getMultipartData(): MultiValueMap<String, Part>
The DefaultServerWebExchange uses the configured HttpMessageReader<MultiValueMap<String, Part>> to parse multipart/form-data, multipart/mixed, and multipart/related content into a MultiValueMap. By default, this is the DefaultPartHttpMessageReader, which does not have any third-party dependencies. Alternatively, the SynchronossPartHttpMessageReader can be used, which is based on the Synchronoss NIO Multipart library. Both are configured through the ServerCodecConfigurer bean (see the Web Handler API).
To parse multipart data in streaming fashion, you can use the Flux<PartEvent> returned from the PartEventHttpMessageReader instead of using @RequestPart, as that implies Map-like access to individual parts by name and, hence, requires parsing multipart data in full. By contrast, you can use @RequestBody to decode the content to Flux<PartEvent> without collecting to a MultiValueMap.
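A sketch of such a streaming endpoint follows; the /upload mapping is hypothetical, and the handling inside doOnNext is only indicative (FormPartEvent and FilePartEvent are the concrete PartEvent types):

```java
import org.springframework.http.codec.multipart.FilePartEvent;
import org.springframework.http.codec.multipart.FormPartEvent;
import org.springframework.http.codec.multipart.PartEvent;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

@RestController
public class UploadController {

    // Hypothetical endpoint: parts arrive as a stream of events rather than
    // being collected into a MultiValueMap, so large uploads need not be
    // fully parsed up front.
    @PostMapping("/upload")
    public Mono<Void> upload(@RequestBody Flux<PartEvent> events) {
        return events.doOnNext(event -> {
            if (event instanceof FormPartEvent) {
                // A form field arrives as a single event carrying its value.
            }
            else if (event instanceof FilePartEvent) {
                // File content arrives as a series of events; consume or
                // release each event's content buffer.
            }
        }).then();
    }
}
```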
Forwarded Headers
As a request goes through proxies such as load balancers, the host, port, and scheme may change, and that makes it a challenge to create links that point to the correct host, port, and scheme from a client perspective.

RFC 7239 defines the Forwarded HTTP header that proxies can use to provide information about the original request.
Non-standard Headers
There are other non-standard headers, too, including X-Forwarded-Host, X-Forwarded-Port, X-Forwarded-Proto, X-Forwarded-Ssl, and X-Forwarded-Prefix.
X-Forwarded-Host
While not standard, X-Forwarded-Host: <host> is a de-facto standard header that is used to communicate the original host to a downstream server. For example, if a request of https://example.com/resource is sent to a proxy which forwards the request to http://localhost:8080/resource, then a header of X-Forwarded-Host: example.com can be sent to inform the server that the original host was example.com.
X-Forwarded-Port
While not standard, X-Forwarded-Port: <port> is a de-facto standard header that is used to communicate the original port to a downstream server. For example, if a request of https://example.com/resource is sent to a proxy which forwards the request to http://localhost:8080/resource, then a header of X-Forwarded-Port: 443 can be sent to inform the server that the original port was 443.
X-Forwarded-Proto
While not standard, X-Forwarded-Proto: (https|http) is a de-facto standard header that is used to communicate the original protocol (e.g. https / http) to a downstream server. For example, if a request of https://example.com/resource is sent to a proxy which forwards the request to http://localhost:8080/resource, then a header of X-Forwarded-Proto: https can be sent to inform the server that the original protocol was https.
X-Forwarded-Ssl
While not standard, X-Forwarded-Ssl: (on|off) is a de-facto standard header that is used to communicate the original protocol (e.g. https / http) to a downstream server. For example, if a request of https://example.com/resource is sent to a proxy which forwards the request to http://localhost:8080/resource, then a header of X-Forwarded-Ssl: on can be sent to inform the server that the original protocol was https.
X-Forwarded-Prefix
While not standard, X-Forwarded-Prefix: <prefix> is a de-facto standard header that is used to communicate the original URL path prefix to a downstream server.
Use of X-Forwarded-Prefix can vary by deployment scenario, and needs to be flexible to allow replacing, removing, or prepending the path prefix of the target server.
Scenario 1: Override path prefix
https://example.com/api/{path} -> http://localhost:8080/app1/{path}
The prefix is the start of the path before the capture group {path}. For the proxy, the prefix is /api, while for the server the prefix is /app1. In this case, the proxy can send X-Forwarded-Prefix: /api to have the original prefix /api override the server prefix /app1.
Scenario 2: Remove path prefix
At times, an application may want to have the prefix removed. For example, consider the following proxy to server mapping:
https://app1.example.com/{path} -> http://localhost:8080/app1/{path}
https://app2.example.com/{path} -> http://localhost:8080/app2/{path}
The proxy has no prefix, while applications app1 and app2 have path prefixes /app1 and /app2 respectively. The proxy can send X-Forwarded-Prefix with an empty value to have the empty prefix override the server prefixes /app1 and /app2.
A common case for this deployment scenario is where licenses are paid per production application server, and it is preferable to deploy multiple applications per server to reduce fees. Another reason is to run more applications on the same server in order to share the resources required by the server to run.

In these scenarios, applications need a non-empty context root because there are multiple applications on the same server. However, this should not be visible in URL paths of the public API, where applications may use different subdomains.
Scenario 3: Insert path prefix
In other cases, it may be necessary to prepend a prefix. For example, consider the following proxy to server mapping:
https://example.com/api/app1/{path} -> http://localhost:8080/app1/{path}
In this case, the proxy has a prefix of /api/app1 and the server has a prefix of /app1. The proxy can send X-Forwarded-Prefix: /api/app1 to have the original prefix /api/app1 override the server prefix /app1.
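To make the scenarios concrete, here is a small self-contained sketch of the URL reconstruction a server can perform from these headers. This is not a Spring API (in Spring WebFlux, ForwardedHeaderTransformer performs this kind of processing for you); the helper name and defaults are illustrative:

```java
import java.util.Map;

// Hypothetical helper showing how X-Forwarded-* headers let a server rebuild
// the client-facing URL behind a proxy.
public class ForwardedUrl {

    // path: the path the server sees; serverPrefix: the server's own prefix (e.g. "/app1")
    static String originalUrl(String path, String serverPrefix, Map<String, String> headers) {
        String scheme = headers.getOrDefault("X-Forwarded-Proto", "http");
        String host = headers.getOrDefault("X-Forwarded-Host", "localhost:8080");
        String prefix = headers.getOrDefault("X-Forwarded-Prefix", serverPrefix);
        String port = headers.get("X-Forwarded-Port");

        String authority = host;
        if (port != null && !isDefaultPort(scheme, port)) {
            authority = host + ":" + port;
        }
        // Replace the server's own prefix with the forwarded prefix (which may be empty).
        String suffix = path.startsWith(serverPrefix) ? path.substring(serverPrefix.length()) : path;
        return scheme + "://" + authority + prefix + suffix;
    }

    private static boolean isDefaultPort(String scheme, String port) {
        return ("https".equals(scheme) && "443".equals(port))
                || ("http".equals(scheme) && "80".equals(port));
    }

    public static void main(String[] args) {
        // Scenario 1: the server prefix /app1 is overridden by the original prefix /api
        Map<String, String> headers = Map.of(
                "X-Forwarded-Proto", "https",
                "X-Forwarded-Host", "example.com",
                "X-Forwarded-Port", "443",
                "X-Forwarded-Prefix", "/api");
        System.out.println(originalUrl("/app1/resource", "/app1", headers));
        // prints https://example.com/api/resource
    }
}
```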
ForwardedHeaderTransformer
ForwardedHeaderTransformer is a component that modifies the host, port, and scheme of the request, based on forwarded headers, and then removes those headers. If you declare it as a bean with the name forwardedHeaderTransformer, it will be detected and used.
In 5.1, ForwardedHeaderFilter was deprecated and superseded by ForwardedHeaderTransformer, so forwarded headers can be processed earlier, before the exchange is created.
Security Considerations
There are security considerations for forwarded headers, since an application cannot know if the headers were added by a proxy, as intended, or by a malicious client. This is why a proxy at the boundary of trust should be configured to remove untrusted forwarded traffic coming from the outside. You can also configure the ForwardedHeaderTransformer with removeOnly=true, in which case it removes but does not use the headers.
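A sketch of declaring the bean with removeOnly enabled; the surrounding configuration class name is illustrative:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.server.adapter.ForwardedHeaderTransformer;

@Configuration
public class ForwardedHeadersConfig {

    // The bean name "forwardedHeaderTransformer" is what gets detected.
    @Bean
    public ForwardedHeaderTransformer forwardedHeaderTransformer() {
        ForwardedHeaderTransformer transformer = new ForwardedHeaderTransformer();
        transformer.setRemoveOnly(true); // discard forwarded headers without applying them
        return transformer;
    }
}
```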
Filters
In the WebHandler API, you can use a WebFilter to apply interception-style logic before and after the rest of the processing chain of filters and the target WebHandler. When using the WebFlux Config, registering a WebFilter is as simple as declaring it as a Spring bean and (optionally) expressing precedence by using @Order on the bean declaration or by implementing Ordered.
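For example, a minimal WebFilter bean might look like the following sketch; the response header it adds is purely illustrative:

```java
import org.springframework.core.annotation.Order;
import org.springframework.stereotype.Component;
import org.springframework.web.server.ServerWebExchange;
import org.springframework.web.server.WebFilter;
import org.springframework.web.server.WebFilterChain;
import reactor.core.publisher.Mono;

@Component
@Order(0) // precedence relative to other filters (value is illustrative)
public class ExampleHeaderFilter implements WebFilter {

    @Override
    public Mono<Void> filter(ServerWebExchange exchange, WebFilterChain chain) {
        // Logic before the rest of the chain runs.
        exchange.getResponse().getHeaders().add("X-Example", "webflux"); // hypothetical header
        // Delegate to the rest of the chain.
        return chain.filter(exchange);
    }
}
```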
CORS
Spring WebFlux provides fine-grained support for CORS configuration through annotations on controllers. However, when you use it with Spring Security, we advise relying on the built-in CorsFilter, which must be ordered ahead of Spring Security's chain of filters.
See the section on CORS and the CORS WebFilter for more details.
Exceptions
In the WebHandler API, you can use a WebExceptionHandler to handle exceptions from the chain of WebFilter instances and the target WebHandler. When using the WebFlux Config, registering a WebExceptionHandler is as simple as declaring it as a Spring bean and (optionally) expressing precedence by using @Order on the bean declaration or by implementing Ordered.
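As a sketch, a WebExceptionHandler bean that maps one exception type to a status code might look like this; the exception type, ordering value, and status choice are illustrative:

```java
import org.springframework.core.annotation.Order;
import org.springframework.http.HttpStatus;
import org.springframework.stereotype.Component;
import org.springframework.web.server.ServerWebExchange;
import org.springframework.web.server.WebExceptionHandler;
import reactor.core.publisher.Mono;

@Component
@Order(-2) // run ahead of other handlers (value is illustrative)
public class ExampleExceptionHandler implements WebExceptionHandler {

    @Override
    public Mono<Void> handle(ServerWebExchange exchange, Throwable ex) {
        if (ex instanceof IllegalArgumentException) {
            // Translate this exception type into a 400 response.
            exchange.getResponse().setStatusCode(HttpStatus.BAD_REQUEST);
            return exchange.getResponse().setComplete();
        }
        // Propagate anything else to the next handler in the chain.
        return Mono.error(ex);
    }
}
```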
The following table describes the available WebExceptionHandler implementations:
| Exception Handler | Description |
|---|---|
| ResponseStatusExceptionHandler | Provides handling for exceptions of type ResponseStatusException by setting the response to the HTTP status code of the exception. |
| WebFluxResponseStatusExceptionHandler | Extension of ResponseStatusExceptionHandler that can also determine the HTTP status code of a @ResponseStatus annotation on any exception. This handler is declared in the WebFlux Config. |
Codecs
The spring-web and spring-core modules provide support for serializing and deserializing byte content to and from higher level objects through non-blocking I/O with Reactive Streams back pressure. The following describes this support:
- Encoder and Decoder are low level contracts to encode and decode content independent of HTTP.
- HttpMessageReader and HttpMessageWriter are contracts to encode and decode HTTP message content.
- An Encoder can be wrapped with EncoderHttpMessageWriter to adapt it for use in a web application, while a Decoder can be wrapped with DecoderHttpMessageReader.
- DataBuffer abstracts different byte buffer representations (e.g. Netty ByteBuf, java.nio.ByteBuffer, etc.) and is what all codecs work on. See Data Buffers and Codecs in the "Spring Core" section for more on this topic.
The spring-core module provides byte[], ByteBuffer, DataBuffer, Resource, and String encoder and decoder implementations. The spring-web module provides Jackson JSON, Jackson Smile, JAXB2, Protocol Buffers, and other encoders and decoders, along with web-only HTTP message reader and writer implementations for form data, multipart content, server-sent events, and others.
ClientCodecConfigurer and ServerCodecConfigurer are typically used to configure and customize the codecs to use in an application. See the section on configuring HTTP message codecs.
Jackson JSON
JSON and binary JSON (Smile) are both supported when the Jackson library is present.

The Jackson2Decoder works as follows:
- Jackson's asynchronous, non-blocking parser is used to aggregate a stream of byte chunks into TokenBuffers, each representing a JSON object.
- Each TokenBuffer is passed to Jackson's ObjectMapper to create a higher level object.
- When decoding to a single-value publisher (e.g. Mono), there is one TokenBuffer.
- When decoding to a multi-value publisher (e.g. Flux), each TokenBuffer is passed to the ObjectMapper as soon as enough bytes are received for a fully formed object. The input content can be a JSON array, or any line-delimited JSON format such as NDJSON, JSON Lines, or JSON Text Sequences.
The Jackson2Encoder works as follows:
- For a single value publisher (e.g. Mono), simply serialize it through the ObjectMapper.
- For a multi-value publisher with application/json, by default collect the values with Flux#collectToList() and then serialize the resulting collection.
- For a multi-value publisher with a streaming media type such as application/x-ndjson or application/stream+x-jackson-smile, encode, write, and flush each value individually using a line-delimited JSON format. Other streaming media types may be registered with the encoder.
- For SSE, the Jackson2Encoder is invoked per event and the output is flushed to ensure delivery without delay.
By default, both Jackson2Encoder and Jackson2Decoder do not support elements of type String. Instead, the default assumption is that a string or a stream of strings represents serialized JSON content, to be rendered by the CharSequenceEncoder. If you need to render a JSON array from Flux<String>, use Flux#collectToList() and encode a Mono<List<String>>.
Form Data
FormHttpMessageReader and FormHttpMessageWriter support decoding and encoding application/x-www-form-urlencoded content.
On the server side, where form content often needs to be accessed from multiple places, ServerWebExchange provides a dedicated getFormData() method that parses the content through FormHttpMessageReader and then caches the result for repeated access. See Form Data in the WebHandler API section.
Once getFormData() is used, the original raw content can no longer be read from the request body. For this reason, applications are expected to go through ServerWebExchange consistently for access to the cached form data versus reading from the raw request body.
Multipart
MultipartHttpMessageReader and MultipartHttpMessageWriter support decoding and encoding "multipart/form-data", "multipart/mixed", and "multipart/related" content. In turn, MultipartHttpMessageReader delegates to another HttpMessageReader for the actual parsing to a Flux<Part> and then simply collects the parts into a MultiValueMap. By default, the DefaultPartHttpMessageReader is used, but this can be changed through the ServerCodecConfigurer. For more information about the DefaultPartHttpMessageReader, refer to the javadoc of DefaultPartHttpMessageReader.
On the server side, where multipart form content may need to be accessed from multiple places, ServerWebExchange provides a dedicated getMultipartData() method that parses the content through MultipartHttpMessageReader and then caches the result for repeated access. See Multipart Data in the WebHandler API section.
Once getMultipartData() is used, the original raw content can no longer be read from the request body. For this reason, applications have to consistently use getMultipartData() for repeated, map-like access to parts, or otherwise rely on the SynchronossPartHttpMessageReader for one-time access to Flux<Part>.
Protocol Buffers
ProtobufEncoder and ProtobufDecoder support decoding and encoding "application/x-protobuf", "application/octet-stream", and "application/vnd.google.protobuf" content for com.google.protobuf.Message types. They also support streams of values if content is received or sent with the "delimited" parameter on the content type (such as "application/x-protobuf;delimited=true"). This requires the "com.google.protobuf:protobuf-java" library, version 3.29 or higher.
The ProtobufJsonDecoder and ProtobufJsonEncoder variants support reading and writing JSON documents to and from Protobuf messages. They require the "com.google.protobuf:protobuf-java-util" dependency. Note that the JSON variants do not support reading a stream of messages; see the javadoc of ProtobufJsonDecoder for more details.
Limits
Decoder and HttpMessageReader implementations that buffer some or all of the input stream can be configured with a limit on the maximum number of bytes to buffer in memory. In some cases, buffering occurs because input is aggregated and represented as a single object (for example, a controller method with @RequestBody byte[], x-www-form-urlencoded data, and so on). Buffering can also occur with streaming, when splitting the input stream (for example, delimited text, a stream of JSON objects, and so on). For those streaming cases, the limit applies to the number of bytes associated with one object in the stream.
To configure buffer sizes, you can check whether a given Decoder or HttpMessageReader exposes a maxInMemorySize property; if so, its Javadoc has details about the default value. On the server side, ServerCodecConfigurer provides a single place from which to set all codecs (see HTTP message codecs). On the client side, the limit for all codecs can be changed in WebClient.Builder.
For Multipart parsing, the maxInMemorySize property limits the size of non-file parts. For file parts, it determines the threshold at which the part is written to disk. For file parts written to disk, there is an additional maxDiskUsagePerPart property to limit the amount of disk space per part. There is also a maxParts property to limit the overall number of parts in a multipart request. To configure all three in WebFlux, you need to supply a pre-configured instance of MultipartHttpMessageReader to ServerCodecConfigurer.
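A sketch of raising the in-memory limit for all server-side codecs through the WebFlux config; the 256 KB value and class name are illustrative:

```java
import org.springframework.context.annotation.Configuration;
import org.springframework.http.codec.ServerCodecConfigurer;
import org.springframework.web.reactive.config.EnableWebFlux;
import org.springframework.web.reactive.config.WebFluxConfigurer;

@Configuration
@EnableWebFlux
public class CodecLimitsConfig implements WebFluxConfigurer {

    @Override
    public void configureHttpMessageCodecs(ServerCodecConfigurer configurer) {
        // Applies to all default codecs that expose maxInMemorySize.
        configurer.defaultCodecs().maxInMemorySize(256 * 1024); // 256 KB per buffered object
    }
}
```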
Streaming
When streaming to the HTTP response (for example, text/event-stream or application/x-ndjson), it is important to send data periodically, in order to reliably detect a disconnected client sooner rather than later. Such a send could be a comment-only, empty SSE event or any other "no-op" data that would effectively serve as a heartbeat.
DataBuffer
DataBuffer is the representation for a byte buffer in WebFlux. The Spring Core part of this reference has more on that in the section on Data Buffers and Codecs. The key point to understand is that on some servers like Netty, byte buffers are pooled and reference counted, and must be released when consumed to avoid memory leaks.

WebFlux applications generally do not need to be concerned with such issues, unless they consume or produce data buffers directly, as opposed to relying on codecs to convert to and from higher level objects, or unless they choose to create custom codecs. For such cases, please review the information in Data Buffers and Codecs, especially the section on Using DataBuffer.
Logging
DEBUG level logging in Spring WebFlux is designed to be compact, minimal, and human-friendly. It focuses on high-value bits of information that are useful over and over again, versus others that are useful only when debugging a specific issue.
TRACE level logging generally follows the same principles as DEBUG (for example, it also should not be a firehose), but it can be used for debugging any issue. In addition, some log messages may show a different level of detail at TRACE versus DEBUG.
Good logging comes from the experience of using the logs. If you spot anything that does not meet the stated goals, please let us know.
Log Id
In WebFlux, a single request can be run over multiple threads, and the thread ID is not useful for correlating log messages that belong to a specific request. This is why WebFlux log messages are prefixed with a request-specific ID by default.
On the server side, the log ID is stored in the ServerWebExchange attribute (LOG_ID_ATTRIBUTE), while a fully formatted prefix based on that ID is available from ServerWebExchange#getLogPrefix(). On the WebClient side, the log ID is stored in the ClientRequest attribute (LOG_ID_ATTRIBUTE), while a fully formatted prefix is available from ClientRequest#logPrefix().
Sensitive Data
DEBUG and TRACE logging can log sensitive information. This is why form parameters and headers are masked by default, and you must explicitly enable their logging in full.
The following example shows how to do so for server-side requests:
Java:

@Configuration
@EnableWebFlux
class MyConfig implements WebFluxConfigurer {

    @Override
    public void configureHttpMessageCodecs(ServerCodecConfigurer configurer) {
        configurer.defaultCodecs().enableLoggingRequestDetails(true);
    }
}

Kotlin:

@Configuration
@EnableWebFlux
class MyConfig : WebFluxConfigurer {

    override fun configureHttpMessageCodecs(configurer: ServerCodecConfigurer) {
        configurer.defaultCodecs().enableLoggingRequestDetails(true)
    }
}
The following example shows how to do so for client-side requests:

Java:

Consumer<ClientCodecConfigurer> consumer = configurer ->
        configurer.defaultCodecs().enableLoggingRequestDetails(true);

WebClient webClient = WebClient.builder()
        .exchangeStrategies(strategies -> strategies.codecs(consumer))
        .build();

Kotlin:

val consumer: (ClientCodecConfigurer) -> Unit = { configurer ->
    configurer.defaultCodecs().enableLoggingRequestDetails(true)
}

val webClient = WebClient.builder()
        .exchangeStrategies({ strategies -> strategies.codecs(consumer) })
        .build()
Appenders
Logging libraries such as SLF4J and Log4J 2 provide asynchronous loggers that avoid blocking. While those have their own drawbacks such as potentially dropping messages that could not be queued for logging, they are the best available options currently for use in a reactive, non-blocking application.
Custom codecs
Applications can register custom codecs for supporting additional media types, or specific behaviors that are not supported by the default codecs.
Some configuration options expressed by developers are enforced on default codecs. Custom codecs might want to get a chance to align with those preferences, like enforcing buffering limits or logging sensitive data.
The following example shows how to do so for client-side requests:

Java:

WebClient webClient = WebClient.builder()
        .codecs(configurer -> {
            CustomDecoder decoder = new CustomDecoder();
            configurer.customCodecs().registerWithDefaultConfig(decoder);
        })
        .build();

Kotlin:

val webClient = WebClient.builder()
        .codecs({ configurer ->
            val decoder = CustomDecoder()
            configurer.customCodecs().registerWithDefaultConfig(decoder)
        })
        .build()