[Documentation] Mistral open mixtral8x22b docs (#979)
Mistral AI documentation updated with new open source model and some
references.
This commit is contained in:
Carlos Zela Bueno 2024-05-22 16:32:43 +01:00 committed by GitHub
parent 4f307520f7
commit acdefd34b0
3 changed files with 133 additions and 15 deletions
View File
@ -5,7 +5,7 @@ sidebar_position: 10
# MistralAI
[MistralAI Documentation](https://docs.mistral.ai/)
## Project setup
To install langchain4j in your project, add the following dependency:
@ -32,7 +32,7 @@ For Gradle project `build.gradle`
implementation 'dev.langchain4j:langchain4j:{your-version}'
implementation 'dev.langchain4j:langchain4j-mistral-ai:{your-version}'
```
### API Key setup
Add your MistralAI API key to your project. You can create a class `ApiKeys.java` with the following code:
```java
@ -47,16 +47,21 @@ SET MISTRAL_AI_API_KEY=your-api-key #For Windows OS
```
More details on how to get your MistralAI API key can be found [here](https://docs.mistral.ai/#api-access).
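A minimal sketch of such an `ApiKeys` class, reading the key from the environment variable set above (the constant name is an assumption, not the exact code from the docs):

```java
public class ApiKeys {

    // Reads the key from the environment so it never lives in source control
    public static final String MISTRAL_AI_API_KEY = System.getenv("MISTRAL_AI_API_KEY");
}
```

You can then pass `ApiKeys.MISTRAL_AI_API_KEY` wherever a builder expects an `apiKey(...)` value.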
### Model Selection
You can use the `MistralAiChatModelName` enum to find appropriate model names for your use case.
MistralAI has updated its selection and classification of models according to performance and cost trade-offs.
| Model name | Deployment or available from | Description |
|------------|------------------------------|-------------|
| open-mistral-7b | - Mistral AI La Plateforme.<br/>- Cloud platforms (Azure, AWS, GCP).<br/>- Hugging Face.<br/>- Self-hosted (On-premise, IaaS, docker, local). | **OpenSource**<br/>The first dense model released by Mistral AI, perfect for experimentation, customization, and quick iteration.<br/><br/>Max tokens 32K<br/><br/>Java Enum<br/>`MistralAiChatModelName.OPEN_MISTRAL_7B` |
| open-mixtral-8x7b | - Mistral AI La Plateforme.<br/>- Cloud platforms (Azure, AWS, GCP).<br/>- Hugging Face.<br/>- Self-hosted (On-premise, IaaS, docker, local). | **OpenSource**<br/>Ideal for multi-language operations, code generation, and fine-tuning. Excellent cost/performance trade-offs.<br/><br/>Max tokens 32K<br/><br/>Java Enum<br/>`MistralAiChatModelName.OPEN_MIXTRAL_8x7B` |
| open-mixtral-8x22b | - Mistral AI La Plateforme.<br/>- Cloud platforms (Azure, AWS, GCP).<br/>- Hugging Face.<br/>- Self-hosted (On-premise, IaaS, docker, local). | **OpenSource**<br/>Has all Mixtral-8x7B capabilities plus strong maths and coding; natively capable of function calling.<br/><br/>Max tokens 64K<br/><br/>Java Enum<br/>`MistralAiChatModelName.OPEN_MIXTRAL_8X22B` |
| mistral-small-latest | - Mistral AI La Plateforme.<br/>- Cloud platforms (Azure, AWS, GCP). | **Commercial**<br/>Suitable for simple tasks that one can do in bulk (Classification, Customer Support, or Text Generation).<br/><br/>Max tokens 32K<br/><br/>Java Enum<br/>`MistralAiChatModelName.MISTRAL_SMALL_LATEST` |
| mistral-medium-latest | - Mistral AI La Plateforme.<br/>- Cloud platforms (Azure, AWS, GCP). | **Commercial**<br/>Ideal for intermediate tasks that require moderate reasoning (Data extraction, Summarizing, Writing emails, Writing descriptions).<br/><br/>Max tokens 32K<br/><br/>Java Enum<br/>`MistralAiChatModelName.MISTRAL_MEDIUM_LATEST` |
| mistral-large-latest | - Mistral AI La Plateforme.<br/>- Cloud platforms (Azure, AWS, GCP). | **Commercial**<br/>Ideal for complex tasks that require large reasoning capabilities or are highly specialized (Text Generation, Code Generation, RAG, or Agents).<br/><br/>Max tokens 32K<br/><br/>Java Enum<br/>`MistralAiChatModelName.MISTRAL_LARGE_LATEST` |
| mistral-embed | - Mistral AI La Plateforme.<br/>- Cloud platforms (Azure, AWS, GCP). | **Commercial**<br/>Converts text into numerical vectors of embeddings in 1024 dimensions. Embedding models enable retrieval and RAG applications.<br/><br/>Max tokens 8K<br/><br/>Java Enum<br/>`MistralAiEmbeddingModelName.MISTRAL_EMBED` |
`@Deprecated` models:
- mistral-tiny
- mistral-small
- mistral-medium
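As a rough code summary of the table above, model choice comes down to task complexity and whether you need an open-source deployment. The helper below is purely illustrative (the task categories and the mapping are assumptions, not an official recommendation):

```java
public class ModelPicker {

    enum Task { BULK_CLASSIFICATION, SUMMARIZATION, AGENTS_AND_RAG, SELF_HOSTED }

    // Maps a task category to a model name from the table above
    static String pick(Task task) {
        switch (task) {
            case BULK_CLASSIFICATION: return "mistral-small-latest";
            case SUMMARIZATION:       return "mistral-medium-latest";
            case AGENTS_AND_RAG:      return "mistral-large-latest";
            case SELF_HOSTED:         return "open-mixtral-8x22b";
            default: throw new IllegalArgumentException("unknown task: " + task);
        }
    }
}
```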
@ -145,6 +150,18 @@ In [Set Model Parameters](/tutorials/model-parameters) you will learn how to set
### Function Calling
Function calling allows Mistral chat models ([synchronous](#synchronous) and [streaming](#streaming)) to connect to external tools. For example, you can call a `Tool` to get the payment transaction status as shown in the Mistral AI function calling [tutorial](https://docs.mistral.ai/guides/function-calling/).
<details>
<summary>Which Mistral models support function calling?</summary>
:::note
Currently, function calling is available for the following models:
- Mistral Small `MistralAiChatModelName.MISTRAL_SMALL_LATEST`
- Mistral Large `MistralAiChatModelName.MISTRAL_LARGE_LATEST`
- Mixtral 8x22B `MistralAiChatModelName.OPEN_MIXTRAL_8X22B`
:::
</details>
#### 1. Define a `Tool` class and how to get the payment data
Let's assume you have a dataset of payment transactions like this. In real applications, you would inject a database source or REST API client to get the data.
@ -190,7 +207,7 @@ private String getPaymentData(String transactionId, String data) {
}
}
```
It uses the `@Tool` annotation to define the function description and the `@P` annotation to define the parameter description, both from the package `dev.langchain4j.agent.tool.*`. More info [here](/tutorials/tools#high-level-tool-api).
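Under the hood, the framework matches the function name requested by the model against the annotated method and invokes it reflectively. A simplified, self-contained sketch of that mechanism (the `Tool` annotation here is a stand-in for `dev.langchain4j.agent.tool.Tool`, and the dispatch logic is an assumption, not the library's actual code):

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;

public class ToolDispatchSketch {

    @Retention(RetentionPolicy.RUNTIME)
    @interface Tool { String value(); }

    public static class PaymentTools {
        @Tool("Returns the status of a payment transaction")
        public String getPaymentStatus(String transactionId) {
            return "Pending"; // real code would look this up in a data source
        }
    }

    // Finds the @Tool method whose name the model selected and invokes it
    static String dispatch(Object tools, String methodName, Object... args) {
        for (Method m : tools.getClass().getMethods()) {
            if (m.isAnnotationPresent(Tool.class) && m.getName().equals(methodName)) {
                try {
                    return (String) m.invoke(tools, args);
                } catch (ReflectiveOperationException e) {
                    throw new IllegalStateException(e);
                }
            }
        }
        throw new IllegalArgumentException("no such tool: " + methodName);
    }
}
```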
#### 2. Define an interface as an `agent` to send chat messages.
@ -221,7 +238,7 @@ public class PaymentDataAssistantApp {
ChatLanguageModel mistralAiModel = MistralAiChatModel.builder()
        .apiKey(System.getenv("MISTRAL_AI_API_KEY")) // Please use your own Mistral AI API key
        .modelName(MistralAiChatModelName.MISTRAL_LARGE_LATEST) // You can also use MistralAiChatModelName.OPEN_MIXTRAL_8X22B as an open-source model
        .logRequests(true)
        .logResponses(true)
        .build();
@ -250,7 +267,108 @@ and expect an answer like this:
```shell
The status of transaction T1005 is Pending. The payment date is October 8, 2021.
```
### JSON mode
You can also use JSON mode to get the response in JSON format. To do this, set the `responseFormat` parameter to `json_object`, or the Java enum `MistralAiResponseFormatType.JSON_OBJECT`, in the `MistralAiChatModel` or `MistralAiStreamingChatModel` builder.
Synchronous example:
```java
ChatLanguageModel model = MistralAiChatModel.builder()
.apiKey(System.getenv("MISTRAL_AI_API_KEY")) // Please use your own Mistral AI API key
.responseFormat(MistralAiResponseFormatType.JSON_OBJECT)
.build();
String userMessage = "Return JSON with two fields: transactionId and status with the values T123 and paid.";
String json = model.generate(userMessage);
System.out.println(json); // {"transactionId":"T123","status":"paid"}
```
Streaming example:
```java
StreamingChatLanguageModel streamingModel = MistralAiStreamingChatModel.builder()
.apiKey(System.getenv("MISTRAL_AI_API_KEY")) // Please use your own Mistral AI API key
.responseFormat(MistralAiResponseFormatType.JSON_OBJECT)
.build();
String userMessage = "Return JSON with two fields: transactionId and status with the values T123 and paid.";
CompletableFuture<Response<AiMessage>> futureResponse = new CompletableFuture<>();
streamingModel.generate(userMessage, new StreamingResponseHandler<AiMessage>() {
@Override
public void onNext(String token) {
System.out.print(token);
}
@Override
public void onComplete(Response<AiMessage> response) {
futureResponse.complete(response);
}
@Override
public void onError(Throwable error) {
futureResponse.completeExceptionally(error);
}
});
String json = futureResponse.join().content().text();
System.out.println(json); // {"transactionId":"T123","status":"paid"}
```
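Once the model returns a well-formed JSON object, you would normally bind it with a JSON library such as Jackson. As a dependency-free illustration of the response shape, here is a tiny hand-rolled extractor (a sketch only; the class and method are hypothetical and this is not robust JSON parsing):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class JsonFieldSketch {

    // Pulls one string field out of a flat JSON object like {"transactionId":"T123","status":"paid"}
    static String field(String json, String name) {
        Matcher m = Pattern.compile("\"" + name + "\"\\s*:\\s*\"([^\"]*)\"").matcher(json);
        if (!m.find()) throw new IllegalArgumentException("missing field: " + name);
        return m.group(1);
    }
}
```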
### Guardrailing
Guardrails are a way to limit the behavior of the model to prevent it from generating harmful or unwanted content. You can optionally set the `safePrompt` parameter in the `MistralAiChatModel` or `MistralAiStreamingChatModel` builder.
Synchronous example:
```java
ChatLanguageModel model = MistralAiChatModel.builder()
.apiKey(System.getenv("MISTRAL_AI_API_KEY"))
.safePrompt(true)
.build();
String userMessage = "What is the best French cheese?";
String response = model.generate(userMessage);
```
Streaming example:
```java
StreamingChatLanguageModel streamingModel = MistralAiStreamingChatModel.builder()
.apiKey(System.getenv("MISTRAL_AI_API_KEY"))
.safePrompt(true)
.build();
String userMessage = "What is the best French cheese?";
CompletableFuture<Response<AiMessage>> futureResponse = new CompletableFuture<>();
streamingModel.generate(userMessage, new StreamingResponseHandler<AiMessage>() {
@Override
public void onNext(String token) {
System.out.print(token);
}
@Override
public void onComplete(Response<AiMessage> response) {
futureResponse.complete(response);
}
@Override
public void onError(Throwable error) {
futureResponse.completeExceptionally(error);
}
});
futureResponse.join();
```
Enabling the safe prompt will prepend your messages with the following system message:
```plaintext
Always assist with care, respect, and truth. Respond with utmost utility yet securely. Avoid harmful, unethical, prejudiced, or negative content. Ensure replies promote fairness and positivity.
```
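Conceptually, enabling `safePrompt` is equivalent to prepending that system message yourself. A self-contained sketch of the behavior (the helper below is hypothetical, not part of the library; the constant copies the text above):

```java
import java.util.ArrayList;
import java.util.List;

public class SafePromptSketch {

    static final String GUARDRAIL =
            "Always assist with care, respect, and truth. Respond with utmost utility yet securely. "
          + "Avoid harmful, unethical, prejudiced, or negative content. "
          + "Ensure replies promote fairness and positivity.";

    // Prepends the guardrail system message when safePrompt is enabled
    static List<String> withSafePrompt(List<String> messages, boolean safePrompt) {
        List<String> out = new ArrayList<>();
        if (safePrompt) out.add(GUARDRAIL);
        out.addAll(messages);
        return out;
    }
}
```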
### More examples
If you want to check more MistralAI examples, you can find them in the [langchain4j-examples/mistral-ai-examples](https://github.com/langchain4j/langchain4j-examples/tree/main/mistral-ai-examples) project.
View File
@ -187,7 +187,7 @@ ChatLanguageModel model = OpenAiChatModel.builder()
Now let's take a look at some examples.
### `Enum` and `boolean` as return types
```java ```java
enum Sentiment {
    POSITIVE, NEUTRAL, NEGATIVE
@ -211,7 +211,7 @@ boolean positive = sentimentAnalyzer.isPositive("It's awful!");
// false
```
### Custom POJO as a return type
```java ```java
class Person {
    String firstName;
@ -297,7 +297,7 @@ AzureOpenAiChatModel.builder()
```java
MistralAiChatModel.builder()
    ...
    .responseFormat(MistralAiResponseFormatType.JSON_OBJECT)
    .build();
```
@ -559,7 +559,6 @@ Also, I can integration test `GreetingExpert` and `ChatBot` separately.
I can evaluate both of them separately and find the most optimal parameters for each subtask,
or, in the long run, even fine-tune a small specialized model for each specific subtask.
## Related Tutorials
- [LangChain4j AiServices Tutorial](https://www.sivalabs.in/langchain4j-ai-services-tutorial/) by [Siva](https://www.sivalabs.in/)
View File
@ -111,6 +111,7 @@ Please note that not all models support tools.
Currently, the following models have tool support:
- `OpenAiChatModel`
- `AzureOpenAiChatModel`
- `MistralAiChatModel`
- `LocalAiChatModel`
- `QianfanChatModel`
:::