Performance

There is no silver bullet when it comes to performance. Many factors affect it, including the size and volume of messages, whether application methods perform work that requires blocking, and external factors (such as network speed and other issues). The goal of this section is to provide an overview of the available configuration options along with some thoughts on how to reason about scaling.

In a messaging application, messages are passed through channels for asynchronous executions that are backed by thread pools. Configuring such an application requires good knowledge of the channels and the flow of messages. Therefore, it is recommended to review Flow of Messages.

The obvious place to start is to configure the thread pools that back the clientInboundChannel and the clientOutboundChannel. By default, both are configured at twice the number of available processors.

If the handling of messages in annotated methods is mainly CPU-bound, the number of threads for the clientInboundChannel should remain close to the number of processors. If the work they do is more IO-bound and requires blocking or waiting on a database or other external system, the thread pool size probably needs to be increased.
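
If the inbound work is indeed IO-bound, one way to adjust the pool is by overriding configureClientInboundChannel, roughly as sketched below. The class name and the pool size of 16 are illustrative assumptions rather than recommendations:

@Configuration
@EnableWebSocketMessageBroker
public class InboundChannelConfiguration implements WebSocketMessageBrokerConfigurer {

	@Override
	public void configureClientInboundChannel(ChannelRegistration registration) {
		// Illustrative: a larger pool for handler methods that block on a database or remote service
		registration.taskExecutor().corePoolSize(16).maxPoolSize(16);
	}

	// ...

}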

ThreadPoolExecutor has three important properties: the core thread pool size, the max thread pool size, and the capacity for the queue to store tasks for which there are no available threads.

A common point of confusion is that configuring the core pool size (for example, 10) and max pool size (for example, 20) results in a thread pool with 10 to 20 threads. In fact, if the capacity is left at its default value of Integer.MAX_VALUE, the thread pool never increases beyond the core pool size, since all additional tasks are queued.
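
To illustrate the point outside of any Spring configuration, here is a minimal plain-Java sketch (the sizes are arbitrary) of how the queue capacity determines whether the pool ever grows beyond its core size:

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class QueueCapacityExample {

	public static void main(String[] args) {
		// Unbounded queue (capacity Integer.MAX_VALUE): extra tasks are queued,
		// so the pool never grows beyond its core size of 10.
		ThreadPoolExecutor unbounded = new ThreadPoolExecutor(
				10, 20, 60, TimeUnit.SECONDS, new LinkedBlockingQueue<>());

		// Bounded queue: only after 100 tasks are queued does the pool
		// start additional threads, up to the maximum of 20.
		ThreadPoolExecutor bounded = new ThreadPoolExecutor(
				10, 20, 60, TimeUnit.SECONDS, new LinkedBlockingQueue<>(100));
	}
}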

See the javadoc of ThreadPoolExecutor to learn how these properties work and understand the various queuing strategies.

On the clientOutboundChannel side, it is all about sending messages to WebSocket clients. If clients are on a fast network, the number of threads should remain close to the number of available processors. If they are slow or on low bandwidth, they take longer to consume messages and put a burden on the thread pool. Therefore, increasing the thread pool size becomes necessary.
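
Mirroring the inbound case, the outbound pool can be tuned through configureClientOutboundChannel. Again, the class name and pool sizes are illustrative assumptions:

@Configuration
@EnableWebSocketMessageBroker
public class OutboundChannelConfiguration implements WebSocketMessageBrokerConfigurer {

	@Override
	public void configureClientOutboundChannel(ChannelRegistration registration) {
		// Illustrative: more threads to compensate for slow clients tying up send threads
		registration.taskExecutor().corePoolSize(16).maxPoolSize(16);
	}

	// ...

}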

While the workload for the clientInboundChannel is possible to predict (after all, it is based on what the application does), the clientOutboundChannel is harder to configure, as it is based on factors beyond the control of the application. For this reason, two additional properties relate to the sending of messages: sendTimeLimit and sendBufferSizeLimit. You can use those properties to configure how long a send is allowed to take and how much data can be buffered when sending messages to a client.

The general idea is that, at any given time, only a single thread can be used to send to a client. All additional messages, meanwhile, get buffered, and you can use these properties to decide how long sending a message is allowed to take and how much data can be buffered in the meantime. See the javadoc and documentation of the XML schema for important additional details.

The following example shows a possible configuration:

  • Java

@Configuration
@EnableWebSocketMessageBroker
public class WebSocketConfiguration implements WebSocketMessageBrokerConfigurer {

	@Override
	public void configureWebSocketTransport(WebSocketTransportRegistration registration) {
		registration.setSendTimeLimit(15 * 1000).setSendBufferSizeLimit(512 * 1024);
	}

	// ...

}

  • Kotlin

@Configuration
@EnableWebSocketMessageBroker
class WebSocketConfiguration : WebSocketMessageBrokerConfigurer {

	override fun configureWebSocketTransport(registration: WebSocketTransportRegistration) {
		registration.setSendTimeLimit(15 * 1000).setSendBufferSizeLimit(512 * 1024)
	}

	// ...
}

  • Xml

<beans xmlns="http://www.springframework.org/schema/beans"
	   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	   xmlns:websocket="http://www.springframework.org/schema/websocket"
	   xsi:schemaLocation="
		http://www.springframework.org/schema/beans
		https://www.springframework.org/schema/beans/spring-beans.xsd
		http://www.springframework.org/schema/websocket
		https://www.springframework.org/schema/websocket/spring-websocket.xsd">

	<websocket:message-broker>
		<websocket:transport send-timeout="15000" send-buffer-size="524288" />
		<!-- ... -->
	</websocket:message-broker>

</beans>

You can also use the WebSocket transport configuration shown earlier to configure the maximum allowed size for incoming STOMP messages. In theory, a WebSocket message can be almost unlimited in size. In practice, WebSocket servers impose limits — for example, 8K on Tomcat and 64K on Jetty. For this reason, STOMP clients such as stomp-js/stompjs and others split larger STOMP messages at 16K boundaries and send them as multiple WebSocket messages, which requires the server to buffer and re-assemble.

Spring’s STOMP-over-WebSocket support does this, so applications can configure the maximum size for STOMP messages irrespective of WebSocket server-specific message sizes. Keep in mind that the WebSocket message size is automatically adjusted, if necessary, to ensure it can carry 16K WebSocket messages at a minimum.

The following example shows one possible configuration:

  • Java

@Configuration
@EnableWebSocketMessageBroker
public class MessageSizeLimitWebSocketConfiguration implements WebSocketMessageBrokerConfigurer {

	@Override
	public void configureWebSocketTransport(WebSocketTransportRegistration registration) {
		registration.setMessageSizeLimit(128 * 1024);
	}

	// ...

}

  • Kotlin

@Configuration
@EnableWebSocketMessageBroker
class MessageSizeLimitWebSocketConfiguration : WebSocketMessageBrokerConfigurer {

	override fun configureWebSocketTransport(registration: WebSocketTransportRegistration) {
		registration.setMessageSizeLimit(128 * 1024)
	}

	// ...
}

  • Xml

<beans xmlns="http://www.springframework.org/schema/beans"
	   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	   xmlns:websocket="http://www.springframework.org/schema/websocket"
	   xsi:schemaLocation="
		http://www.springframework.org/schema/beans
		https://www.springframework.org/schema/beans/spring-beans.xsd
		http://www.springframework.org/schema/websocket
		https://www.springframework.org/schema/websocket/spring-websocket.xsd">

	<websocket:message-broker>
		<websocket:transport message-size="131072" />
		<!-- ... -->
	</websocket:message-broker>

</beans>

An important point about scaling involves using multiple application instances. Currently, you cannot do that with the simple broker. However, when you use a full-featured broker (such as RabbitMQ), each application instance connects to the broker, and messages broadcast from one application instance can be broadcast through the broker to WebSocket clients connected through any other application instances.
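
As a rough sketch of such a setup (the relay host, port, and destination prefixes are assumptions for illustration; 61613 is the conventional STOMP port), each application instance would enable a STOMP broker relay instead of the simple broker:

@Configuration
@EnableWebSocketMessageBroker
public class BrokerRelayConfiguration implements WebSocketMessageBrokerConfigurer {

	@Override
	public void configureMessageBroker(MessageBrokerRegistry registry) {
		// Every instance connects to the same external broker, so a message
		// published through one instance reaches clients on any other instance.
		registry.enableStompBrokerRelay("/topic", "/queue")
				.setRelayHost("rabbitmq.example.com") // hypothetical host
				.setRelayPort(61613);
		registry.setApplicationDestinationPrefixes("/app");
	}

	// ...

}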