Azure OpenAI Chat
Spring AI provides both auto-configuration and manual configuration options for the AzureOpenAiChatClient, which supports model configuration and powerful function calling.
Azure’s OpenAI offering, powered by ChatGPT, extends beyond traditional OpenAI capabilities, delivering AI-driven text generation with enhanced functionality. Azure also offers additional AI safety and responsible AI features, as highlighted in its recent updates.
Azure offers Java developers the opportunity to leverage AI’s full potential by integrating it with an array of Azure services, which includes AI-related resources such as Vector Stores on Azure.
Prerequisites
Obtain your Azure OpenAI endpoint and api-key from the Azure OpenAI Service section on the Azure Portal. Spring AI defines a configuration property named spring.ai.azure.openai.api-key that you should set to the value of the API Key obtained from Azure. There is also a configuration property named spring.ai.azure.openai.endpoint that you should set to the endpoint URL obtained when provisioning your model in Azure. Exporting environment variables is one way to set these configuration properties:
export SPRING_AI_AZURE_OPENAI_API_KEY=<INSERT KEY HERE>
export SPRING_AI_AZURE_OPENAI_ENDPOINT=<INSERT ENDPOINT URL HERE>
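If you prefer not to rely on environment variables, the same values can be set directly in an application.properties file. The values below are placeholders, not real credentials; Spring Boot's relaxed binding maps the environment variables above to these same property names, so use one mechanism or the other:

```properties
spring.ai.azure.openai.api-key=<your Azure OpenAI key>
spring.ai.azure.openai.endpoint=<your Azure OpenAI endpoint URL>
```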
Deployment Name
To run Azure AI applications, create an Azure AI Deployment through the Azure AI Portal (https://oai.azure.com/portal).
In Azure, each client must specify a Deployment Name to connect to the Azure OpenAI service.
It’s essential to understand that the Deployment Name is different from the model you choose to deploy.
For instance, a deployment named 'MyAiDeployment' could be configured to use either the GPT 3.5 Turbo model or the GPT 4.0 model.
For now, to keep things simple, you can create a deployment using the following settings:
Deployment Name: gpt-35-turbo
Model Name: gpt-35-turbo
This Azure configuration will align with the default configurations of the Spring Boot Azure AI Starter and its Autoconfiguration feature.
If you use a different Deployment Name, update the configuration property accordingly:
spring.ai.azure.openai.chat.options.model=<my deployment name>
The different deployment structures of Azure OpenAI and OpenAI lead to a property in the Azure OpenAI client library named deploymentOrModelName. This is because in OpenAI there is no Deployment Name, only a Model Name.
In a subsequent release, Spring AI will rename the property spring.ai.azure.openai.chat.options.model to spring.ai.azure.openai.chat.options.deployment-name to avoid confusion.
Add Repositories and BOM
Spring AI artifacts are published in Spring Milestone and Snapshot repositories. Refer to the Repositories section to add these repositories to your build system.
To help with dependency management, Spring AI provides a BOM (bill of materials) to ensure that a consistent version of Spring AI is used throughout the entire project. Refer to the Dependency Management section to add the Spring AI BOM to your build system.
Auto-configuration
Spring AI provides Spring Boot auto-configuration for the Azure OpenAI Chat Client. To enable it, add the following dependency to your project’s Maven pom.xml file:
<dependency>
    <groupId>org.springframework.ai</groupId>
    <artifactId>spring-ai-azure-openai-spring-boot-starter</artifactId>
</dependency>
or to your Gradle build.gradle build file:
dependencies {
    implementation 'org.springframework.ai:spring-ai-azure-openai-spring-boot-starter'
}
Refer to the Dependency Management section to add the Spring AI BOM to your build file.
Chat Properties
The prefix spring.ai.azure.openai is the property prefix to configure the connection to Azure OpenAI.
Property | Description | Default
---|---|---
spring.ai.azure.openai.api-key | The API key from Azure AI OpenAI | -
spring.ai.azure.openai.endpoint | The endpoint URL from Azure AI OpenAI | -
The prefix spring.ai.azure.openai.chat is the property prefix that configures the ChatClient implementation for Azure OpenAI.
Property | Description | Default
---|---|---
spring.ai.azure.openai.chat.enabled | Enable the Azure OpenAI chat client. | true
spring.ai.azure.openai.chat.options.deployment-name | In use with Azure, this refers to the "Deployment Name" of your model, which you can find at https://oai.azure.com/portal. Note that within an Azure OpenAI deployment, the "Deployment Name" is distinct from the model itself. The confusion around these terms stems from the intention to make the Azure OpenAI client library compatible with the original OpenAI endpoint; the deployment structures offered by Azure OpenAI and Sam Altman’s OpenAI differ significantly. This is the deployment name to provide as part of the completions request. | gpt-35-turbo
spring.ai.azure.openai.chat.options.maxTokens | The maximum number of tokens to generate. | -
spring.ai.azure.openai.chat.options.temperature | The sampling temperature that controls the apparent creativity of generated completions. Higher values make output more random, while lower values make results more focused and deterministic. It is not recommended to modify temperature and top_p in the same completions request, as the interaction of these two settings is difficult to predict. | 0.7
spring.ai.azure.openai.chat.options.topP | An alternative to sampling with temperature called nucleus sampling. This value causes the model to consider the results of tokens with the provided probability mass. | -
spring.ai.azure.openai.chat.options.logitBias | A map between GPT token IDs and bias scores that influences the probability of specific tokens appearing in a completions response. Token IDs are computed via external tokenizer tools, while bias scores reside in the range of -100 to 100, with the minimum and maximum values corresponding to a full ban or exclusive selection of a token, respectively. The exact behavior of a given bias score varies by model. | -
spring.ai.azure.openai.chat.options.user | An identifier for the caller or end user of the operation. This may be used for tracking or rate-limiting purposes. | -
spring.ai.azure.openai.chat.options.n | The number of chat completion choices that should be generated for a chat completions response. | -
spring.ai.azure.openai.chat.options.stop | A collection of textual sequences that will end completions generation. | -
spring.ai.azure.openai.chat.options.presencePenalty | A value that influences the probability of generated tokens appearing based on their existing presence in generated text. Positive values make tokens less likely to appear when they already exist and increase the model’s likelihood to output new topics. | -
spring.ai.azure.openai.chat.options.frequencyPenalty | A value that influences the probability of generated tokens appearing based on their cumulative frequency in generated text. Positive values make tokens less likely to appear as their frequency increases and decrease the likelihood of the model repeating the same statements verbatim. | -
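The presence and frequency penalties described above follow the adjustment rule documented for the OpenAI API: each candidate token's logit is reduced by its occurrence count times the frequency penalty, plus the presence penalty once if the token has appeared at all. A plain-Java sketch of that rule (the logit values and counts below are made-up illustrations, not real model output):

```java
public class PenaltyDemo {

    // adjusted = logit - count * frequencyPenalty - (count > 0 ? 1 : 0) * presencePenalty
    static double penalize(double logit, int count, double frequencyPenalty, double presencePenalty) {
        return logit - count * frequencyPenalty - (count > 0 ? 1 : 0) * presencePenalty;
    }

    public static void main(String[] args) {
        // A token that already appeared 3 times is pushed down; an unseen token is untouched.
        double seen = penalize(1.0, 3, 0.5, 0.6);
        double unseen = penalize(1.0, 0, 0.5, 0.6);
        System.out.println("seen=" + seen + " unseen=" + unseen);
    }
}
```

With positive penalties, repeated tokens become progressively less likely, which is why positive frequencyPenalty discourages verbatim repetition and positive presencePenalty encourages new topics.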
All properties prefixed with spring.ai.azure.openai.chat.options can be overridden at runtime by adding request-specific Chat Options to the Prompt call.
Chat Options
The AzureOpenAiChatOptions.java provides model configurations, such as the model to use, the temperature, the frequency penalty, etc.
On start-up, the default options can be configured with the AzureOpenAiChatClient(api, options) constructor or the spring.ai.azure.openai.chat.options.* properties.
At runtime you can override the default options by adding new, request-specific options to the Prompt call. For example, to override the default model and temperature for a specific request:
ChatResponse response = chatClient.call(
    new Prompt(
        "Generate the names of 5 famous pirates.",
        AzureOpenAiChatOptions.builder()
            .withModel("gpt-4-32k")
            .withTemperature(0.4)
            .build()
    ));
In addition to the model specific AzureOpenAiChatOptions.java, you can use a portable ChatOptions instance, created with the ChatOptionsBuilder#builder().
Function Calling
You can register custom Java functions with the AzureOpenAiChatClient and have the model intelligently choose to output a JSON object containing arguments to call one or many of the registered functions. This is a powerful technique to connect the LLM capabilities with external tools and APIs. Read more about Azure OpenAI Function Calling.
Sample Controller (Auto-configuration)
Create a new Spring Boot project and add the spring-ai-azure-openai-spring-boot-starter to your pom (or gradle) dependencies.
Add an application.properties file under the src/main/resources directory to enable and configure the OpenAI Chat client:
spring.ai.azure.openai.api-key=YOUR_API_KEY
spring.ai.azure.openai.endpoint=YOUR_ENDPOINT
spring.ai.azure.openai.chat.options.model=gpt-35-turbo
spring.ai.azure.openai.chat.options.temperature=0.7
Replace the api-key and endpoint values with your Azure OpenAI credentials.
This will create an AzureOpenAiChatClient implementation that you can inject into your classes. Here is an example of a simple @Controller class that uses the chat client for text generation.
@RestController
public class ChatController {

    private final AzureOpenAiChatClient chatClient;

    @Autowired
    public ChatController(AzureOpenAiChatClient chatClient) {
        this.chatClient = chatClient;
    }

    @GetMapping("/ai/generate")
    public Map generate(@RequestParam(value = "message", defaultValue = "Tell me a joke") String message) {
        return Map.of("generation", chatClient.call(message));
    }

    @GetMapping("/ai/generateStream")
    public Flux<ChatResponse> generateStream(@RequestParam(value = "message", defaultValue = "Tell me a joke") String message) {
        Prompt prompt = new Prompt(new UserMessage(message));
        return chatClient.stream(prompt);
    }
}
Manual Configuration
The AzureOpenAiChatClient implements the ChatClient and StreamingChatClient interfaces and uses the Azure OpenAI Java Client.
To enable it, add the spring-ai-azure-openai dependency to your project’s Maven pom.xml file:
<dependency>
    <groupId>org.springframework.ai</groupId>
    <artifactId>spring-ai-azure-openai</artifactId>
</dependency>
or to your Gradle build.gradle build file:
dependencies {
    implementation 'org.springframework.ai:spring-ai-azure-openai'
}
Refer to the Dependency Management section to add the Spring AI BOM to your build file.
Next, create an AzureOpenAiChatClient instance and use it to generate text responses:
var openAIClient = new OpenAIClientBuilder()
    .credential(new AzureKeyCredential(System.getenv("AZURE_OPENAI_API_KEY")))
    .endpoint(System.getenv("AZURE_OPENAI_ENDPOINT"))
    .buildClient();

var chatClient = new AzureOpenAiChatClient(openAIClient).withDefaultOptions(
    AzureOpenAiChatOptions.builder()
        .withModel("gpt-35-turbo")
        .withTemperature(0.4)
        .withMaxTokens(200)
        .build());

ChatResponse response = chatClient.call(
    new Prompt("Generate the names of 5 famous pirates."));

// Or with streaming responses
Flux<ChatResponse> streamingResponse = chatClient.stream(
    new Prompt("Generate the names of 5 famous pirates."));