docu: AI Services, structured outputs
This commit is contained in: parent 3a6cb3dc2d, commit 4af5306d50

@@ -20,9 +20,9 @@ Here's how:
LangChain4j currently supports [15+ popular LLM providers](https://docs.langchain4j.dev/integrations/language-models/)
and [15+ embedding stores](https://docs.langchain4j.dev/integrations/embedding-stores/).
2. **Comprehensive Toolbox:**
Since early 2023, the community has been building numerous LLM-powered applications,
identifying common abstractions, patterns, and techniques. LangChain4j has refined these into practical code.
Our toolbox includes tools ranging from low-level prompt templating, chat memory management, and function calling
to high-level patterns like AI Services and RAG.
For each abstraction, we provide an interface along with multiple ready-to-use implementations based on common techniques.
Whether you're building a chatbot or developing a RAG with a complete pipeline from data ingestion to retrieval,

@@ -17,9 +17,9 @@ Here's how:
LangChain4j currently supports [15+ popular LLM providers](/integrations/language-models/)
and [20+ embedding stores](/integrations/embedding-stores/).
2. **Comprehensive Toolbox:**
Since early 2023, the community has been building numerous LLM-powered applications,
identifying common abstractions, patterns, and techniques. LangChain4j has refined these into a ready-to-use package.
Our toolbox includes tools ranging from low-level prompt templating, chat memory management, and function calling
to high-level patterns like AI Services and RAG.
For each abstraction, we provide an interface along with multiple ready-to-use implementations based on common techniques.
Whether you're building a chatbot or developing a RAG with a complete pipeline from data ingestion to retrieval,

@@ -152,12 +152,9 @@ Friend friend = AiServices.create(Friend.class, model);
String answer = friend.chat("Hello"); // Hey! What's shakin'?
```

We have replaced the `@SystemMessage` annotation with `@UserMessage`
and specified a prompt template containing the variable `it` that refers to the only method argument.

It's also possible to annotate the `String userMessage` with `@V`
and assign a custom name to the prompt template variable:
```java
interface Friend {
@@ -167,18 +164,98 @@ interface Friend {
}
```

:::note
Please note that using `@V` is not necessary when using LangChain4j with Quarkus or Spring Boot.
This annotation is necessary only when the `-parameters` option is *not* enabled during Java compilation.
:::

`@UserMessage` can also load a prompt template from resources:
`@UserMessage(fromResource = "my-prompt-template.txt")`

## Examples of valid AI Service methods

Below are some examples of valid AI Service methods.

<details>
<summary>`UserMessage`</summary>

```java
String chat(String userMessage);

String chat(@UserMessage String userMessage);

String chat(@UserMessage String userMessage, @V("country") String country); // userMessage contains "{{country}}" template variable

@UserMessage("What is the capital of Germany?")
String chat();

@UserMessage("What is the capital of {{it}}?")
String chat(String country);

@UserMessage("What is the capital of {{country}}?")
String chat(@V("country") String country);

@UserMessage("What is the {{something}} of {{country}}?")
String chat(@V("something") String something, @V("country") String country);

@UserMessage("What is the capital of {{country}}?")
String chat(String country); // this works only in Quarkus and Spring Boot applications
```
</details>

<details>
<summary>`SystemMessage` and `UserMessage`</summary>

```java
@SystemMessage("Given a name of a country, answer with the name of its capital")
String chat(String userMessage);

@SystemMessage("Given a name of a country, answer with the name of its capital")
String chat(@UserMessage String userMessage);

@SystemMessage("Given a name of a country, {{answerInstructions}}")
String chat(@V("answerInstructions") String answerInstructions, @UserMessage String userMessage);

@SystemMessage("Given a name of a country, answer with the name of its capital")
String chat(@UserMessage String userMessage, @V("country") String country); // userMessage contains "{{country}}" template variable

@SystemMessage("Given a name of a country, {{answerInstructions}}")
String chat(@V("answerInstructions") String answerInstructions, @UserMessage String userMessage, @V("country") String country); // userMessage contains "{{country}}" template variable

@SystemMessage("Given a name of a country, answer with the name of its capital")
@UserMessage("Germany")
String chat();

@SystemMessage("Given a name of a country, {{answerInstructions}}")
@UserMessage("Germany")
String chat(@V("answerInstructions") String answerInstructions);

@SystemMessage("Given a name of a country, answer with the name of its capital")
@UserMessage("{{it}}")
String chat(String country);

@SystemMessage("Given a name of a country, answer with the name of its capital")
@UserMessage("{{country}}")
String chat(@V("country") String country);

@SystemMessage("Given a name of a country, {{answerInstructions}}")
@UserMessage("{{country}}")
String chat(@V("answerInstructions") String answerInstructions, @V("country") String country);
```
</details>

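The `{{...}}` placeholders in the examples above are filled in from the method arguments, either the single `it` argument or parameters annotated with `@V`. As a rough, standalone sketch of that substitution step (plain Java; `render` is a hypothetical stand-in for illustration, not the actual LangChain4j template engine):

```java
import java.util.Map;

public class TemplateSketch {

    // Hypothetical stand-in for prompt-template rendering: replace each
    // {{name}} placeholder with the value bound to that name (e.g. via @V("name")).
    static String render(String template, Map<String, String> variables) {
        String result = template;
        for (Map.Entry<String, String> e : variables.entrySet()) {
            result = result.replace("{{" + e.getKey() + "}}", e.getValue());
        }
        return result;
    }

    public static void main(String[] args) {
        String prompt = render("What is the capital of {{country}}?", Map.of("country", "Germany"));
        System.out.println(prompt); // prints: What is the capital of Germany?
    }
}
```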
## Structured Outputs
If you want to receive a structured output from the LLM,
you can change the return type of your AI Service method from `String` to something else.
Currently, AI Services support the following return types:
- `String`
- `AiMessage`
- `boolean`/`Boolean`, if you need to get a "yes" or "no" answer
- `byte`/`short`/`int`/`BigInteger`/`long`/`float`/`double`/`BigDecimal`
- `Date`/`LocalDate`/`LocalTime`/`LocalDateTime`
- `List<String>`/`Set<String>`, if you want to get the answer in the form of a list of bullet points
- Any `Enum`, `List<Enum>` and `Set<Enum>`, if you want to classify text, e.g. sentiment, user intent, etc.
- Any custom POJO
- `Result<T>`, if you need to access `TokenUsage`, `FinishReason`, sources (`Content`s retrieved during RAG) and executed tools, aside from `T`, which can be of any type listed above. For example: `Result<String>`, `Result<MyCustomPojo>`
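To make the enum case concrete: when the return type is an enum, the service appends instructions telling the LLM to answer with one of the constants and then parses the reply. A rough standalone sketch of that parsing step (plain Java; `Sentiment` and `parseSentiment` are hypothetical illustrations, not LangChain4j API):

```java
public class EnumParsingSketch {

    enum Sentiment { POSITIVE, NEUTRAL, NEGATIVE }

    // Hypothetical stand-in for the output-parsing step: map the raw LLM
    // reply (possibly with extra whitespace or casing) onto an enum constant.
    static Sentiment parseSentiment(String rawLlmOutput) {
        return Sentiment.valueOf(rawLlmOutput.trim().toUpperCase());
    }

    public static void main(String[] args) {
        System.out.println(parseSentiment(" positive \n")); // prints POSITIVE
    }
}
```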

Unless the return type is `String` or `AiMessage`, the AI Service will automatically append instructions

@@ -187,6 +264,13 @@ Before the method returns, the AI Service will parse the output of the LLM into

You can observe appended instructions by [enabling logging](/tutorials/logging).

:::note
Some LLMs support JSON mode (aka [Structured Outputs](https://openai.com/index/introducing-structured-outputs-in-the-api/)),
where the LLM API has an option to specify a JSON schema for the desired output. If such a feature is supported and enabled,
instructions will not be appended to the end of the `UserMessage`. In this case, the JSON schema will be automatically
created from your POJO and passed to the LLM. This will guarantee that the LLM adheres to this JSON schema.
:::

Now let's take a look at some examples.

### `boolean` as return type

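A rough standalone illustration of the idea, parsing a model's textual answer into a `boolean` (plain Java; `parseBooleanAnswer` is a hypothetical stand-in for the output-parsing step, assuming the model was instructed to answer "true" or "false"):

```java
public class BooleanParsingSketch {

    // Hypothetical stand-in for the output-parsing step: map a "true"/"false"
    // reply (possibly with extra whitespace or casing) onto a boolean.
    static boolean parseBooleanAnswer(String rawLlmOutput) {
        return Boolean.parseBoolean(rawLlmOutput.trim());
    }

    public static void main(String[] args) {
        System.out.println(parseBooleanAnswer(" True \n")); // prints true
    }
}
```

`Boolean.parseBoolean` is case-insensitive, so "True" and "TRUE" also map to `true`; anything else maps to `false`.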
@@ -246,6 +330,7 @@ class Person {
    Address address;
}

@Description("an address") // you can add an optional description to help an LLM have a better understanding
class Address {
    String street;
    Integer streetNumber;

@@ -1,11 +0,0 @@
---
sidebar_position: 11
---

# Structured Data Extraction

Documentation on structured data extraction can be found [here](/tutorials/ai-services#output-parsing-aka-structured-outputs).

## Examples

- [Example of extracting POJOs from text](https://github.com/langchain4j/langchain4j-examples/blob/337186583f4dc5e4e122b0cdf0a42ddb586c7fe0/other-examples/src/main/java/OtherServiceExamples.java#L133)

@@ -0,0 +1,7 @@
---
sidebar_position: 11
---

# Structured Outputs

Documentation on structured outputs can be found [here](/tutorials/ai-services#structured-outputs).