cleaned up documentation and README

LangChain4j 2024-06-10 13:22:36 +02:00
parent 39c9dfdbc6
commit 28565a7657
8 changed files with 126 additions and 472 deletions

README.md

# LangChain for Java: Supercharge your Java application with the power of LLMs
[![](https://img.shields.io/twitter/follow/langchain4j)](https://twitter.com/intent/follow?screen_name=langchain4j)
[![](https://dcbadge.vercel.app/api/server/JzTFvyjG6R?compact=true&style=flat)](https://discord.gg/JzTFvyjG6R)
## Introduction
Welcome!
The goal of LangChain4j is to simplify integrating LLMs into Java applications.
Here's how:
1. **Unified APIs:**
LLM providers (like OpenAI or Google Vertex AI) and embedding (vector) stores (such as Pinecone or Milvus)
use proprietary APIs. LangChain4j offers a unified API to avoid the need for learning and implementing specific APIs for each of them.
To experiment with different LLMs or embedding stores, you can easily switch between them without the need to rewrite your code.
LangChain4j currently supports [15+ popular LLM providers](https://docs.langchain4j.dev/integrations/language-models/)
and [15+ embedding stores](https://docs.langchain4j.dev/integrations/embedding-stores/).
2. **Comprehensive Toolbox:**
During the past year, the community has been building numerous LLM-powered applications,
identifying common abstractions, patterns, and techniques. LangChain4j has refined these into practical code.
Our toolbox includes tools ranging from low-level prompt templating, chat memory management, and output parsing
to high-level patterns like AI Services and RAG.
For each abstraction, we provide an interface along with multiple ready-to-use implementations based on common techniques.
Whether you're building a chatbot or developing a RAG with a complete pipeline from data ingestion to retrieval,
LangChain4j offers a wide variety of options.
3. **Numerous Examples:**
We actively monitor community developments, aiming to quickly incorporate new techniques and integrations,
ensuring you stay up-to-date.
The library is under active development. While some features are still being worked on,
the core functionality is in place, allowing you to start building LLM-powered apps now!
For easier integration, LangChain4j also includes integration with
Quarkus ([extension](https://quarkus.io/extensions/io.quarkiverse.langchain4j/quarkus-langchain4j-core))
and Spring Boot ([starters](https://github.com/langchain4j/langchain4j-spring)).
## Documentation
Documentation can be found [here](https://docs.langchain4j.dev).
## Tutorials
Tutorials can be found [here](https://docs.langchain4j.dev/tutorials).
## Getting Started
Getting started guide can be found [here](https://docs.langchain4j.dev/get-started).
## Code Examples
Please see examples of how LangChain4j can be used in [langchain4j-examples](https://github.com/langchain4j/langchain4j-examples) repo:
- [Examples in plain Java](https://github.com/langchain4j/langchain4j-examples/tree/main/other-examples/src/main/java)
- [Examples with Quarkus](https://github.com/quarkiverse/quarkus-langchain4j/tree/main/samples) (uses [quarkus-langchain4j](https://github.com/quarkiverse/quarkus-langchain4j) dependency)
- [Example with Spring Boot](https://github.com/langchain4j/langchain4j-examples/tree/main/spring-boot-example/src/main/java/dev/langchain4j/example)
## Useful Materials
Useful materials can be found [here](https://docs.langchain4j.dev/useful-materials).
## Library Structure
LangChain4j features a modular design, comprising:
- The `langchain4j-core` module, which defines core abstractions (such as `ChatLanguageModel` and `EmbeddingStore`) and their APIs.
- The main `langchain4j` module, containing useful tools like `ChatMemory` and `OutputParser`, as well as high-level features like `AiServices`.
- A wide array of `langchain4j-{integration}` modules, each integrating a particular LLM provider or embedding store into LangChain4j.
You can use the `langchain4j-{integration}` modules independently. For additional features, simply import the main `langchain4j` dependency.
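For example, combining the OpenAI integration module with the main module in Maven might look like this (using the same coordinates and version shown elsewhere in this README):

```xml
<dependency>
    <groupId>dev.langchain4j</groupId>
    <artifactId>langchain4j-open-ai</artifactId>
    <version>0.31.0</version>
</dependency>
<dependency>
    <groupId>dev.langchain4j</groupId>
    <artifactId>langchain4j</artifactId>
    <version>0.31.0</version>
</dependency>
```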
## Highlights
You can define declarative "AI Services" that are powered by LLMs:
```java
interface Assistant {
String chat(String userMessage);
}
Assistant assistant = AiServices.create(Assistant.class, model);
String answer = assistant.chat("Hello");
System.out.println(answer); // Hello! How can I assist you today?
```
You can use LLM as a classifier:
```java
enum Sentiment {
POSITIVE, NEUTRAL, NEGATIVE
}
interface SentimentAnalyzer {
@UserMessage("Analyze sentiment of {{it}}")
Sentiment analyzeSentimentOf(String text);
@UserMessage("Does {{it}} have a positive sentiment?")
boolean isPositive(String text);
}
SentimentAnalyzer sentimentAnalyzer = AiServices.create(SentimentAnalyzer.class, model);
Sentiment sentiment = sentimentAnalyzer.analyzeSentimentOf("It is good!"); // POSITIVE
boolean positive = sentimentAnalyzer.isPositive("It is bad!"); // false
```
You can easily extract structured information from unstructured data:
```java
class Person {
private String firstName;
private String lastName;
private LocalDate birthDate;
}
interface PersonExtractor {
@UserMessage("Extract information about a person from {{text}}")
Person extractPersonFrom(@V("text") String text);
}
PersonExtractor extractor = AiServices.create(PersonExtractor.class, model);
String text = "In 1968, amidst the fading echoes of Independence Day, "
+ "a child named John arrived under the calm evening sky. "
+ "This newborn, bearing the surname Doe, marked the start of a new journey.";
Person person = extractor.extractPersonFrom(text);
// Person { firstName = "John", lastName = "Doe", birthDate = 1968-07-04 }
```
You can provide tools that LLMs can use! It can be anything: retrieve information from DB, call APIs, etc.
See example [here](https://github.com/langchain4j/langchain4j-examples/blob/main/other-examples/src/main/java/ServiceWithToolsExample.java).
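For a rough idea of how this looks with AI Services, here is a hedged sketch (the `Calculator` tool is illustrative; `model` is a chat model instance as in the examples above):

```java
class Calculator {

    @Tool("Calculates the square root of a number")
    double squareRoot(double x) {
        return Math.sqrt(x);
    }
}

interface Assistant {
    String chat(String userMessage);
}

Assistant assistant = AiServices.builder(Assistant.class)
        .chatLanguageModel(model)
        .tools(new Calculator())
        .build();

String answer = assistant.chat("What is the square root of 16?");
```

The LLM decides on its own when (and with which arguments) to call the tool, and the tool's result is fed back into the conversation.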
## Compatibility
- Java: 8 or higher
- Spring Boot: 2 or higher
## Getting started
1. Add LangChain4j OpenAI dependency to your project:
- Maven:
```xml
<dependency>
<groupId>dev.langchain4j</groupId>
<artifactId>langchain4j-open-ai</artifactId>
<version>0.31.0</version>
</dependency>
```
- Gradle:
```groovy
implementation 'dev.langchain4j:langchain4j-open-ai:0.31.0'
```
2. Import your OpenAI API key:
```java
String apiKey = System.getenv("OPENAI_API_KEY");
```
You can also use the API key `demo` to test OpenAI, which we provide for free.
[How to get an API key?](https://github.com/langchain4j/langchain4j#how-to-get-an-api-key)
3. Create an instance of a model and start interacting:
```java
OpenAiChatModel model = OpenAiChatModel.withApiKey(apiKey);
String answer = model.generate("Hello world!");
System.out.println(answer); // Hello! How can I assist you today?
```
## Supported LLM Integrations ([Docs](https://docs.langchain4j.dev/category/integrations))
| Provider | Native Image | [Sync Completion](https://docs.langchain4j.dev/category/language-models) | [Streaming Completion](https://docs.langchain4j.dev/integrations/language-models/response-streaming) | [Embedding](https://docs.langchain4j.dev/category/embedding-models) | [Image Generation](https://docs.langchain4j.dev/category/image-models) | [Scoring](https://docs.langchain4j.dev/category/scoring-models) | [Function Calling](https://docs.langchain4j.dev/tutorials/tools) |
|----------------------------------------------------------------------------------------------------|--------------|--------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------|------------------------------------------------------------------------|-----------------------------------------------------------------|------------------------------------------------------------------|
| [OpenAI](https://docs.langchain4j.dev/integrations/language-models/open-ai) | ✅ | ✅ | ✅ | ✅ | ✅ | | ✅ |
| [Azure OpenAI](https://docs.langchain4j.dev/integrations/language-models/azure-open-ai) | | ✅ | ✅ | ✅ | ✅ | | ✅ |
| [Hugging Face](https://docs.langchain4j.dev/integrations/language-models/hugging-face) | | ✅ | | ✅ | | | |
| [Amazon Bedrock](https://docs.langchain4j.dev/integrations/language-models/amazon-bedrock) | | ✅ | ✅ | ✅ | ✅ | | |
| [Google Vertex AI Gemini](https://docs.langchain4j.dev/integrations/language-models/google-gemini) | | ✅ | ✅ | | ✅ | | ✅ |
| [Google Vertex AI](https://docs.langchain4j.dev/integrations/language-models/google-palm) | ✅ | ✅ | | ✅ | ✅ | | |
| [Mistral AI](https://docs.langchain4j.dev/integrations/language-models/mistral-ai) | | ✅ | ✅ | ✅ | | | ✅ |
| [DashScope](https://docs.langchain4j.dev/integrations/language-models/dashscope) | | ✅ | ✅ | ✅ | | | |
| [LocalAI](https://docs.langchain4j.dev/integrations/language-models/local-ai) | | ✅ | ✅ | ✅ | | | ✅ |
| [Ollama](https://docs.langchain4j.dev/integrations/language-models/ollama) | | ✅ | ✅ | ✅ | | | |
| [Cohere](https://docs.langchain4j.dev/integrations/reranking-models/cohere) | | | | | | ✅ | |
| [Qianfan](https://docs.langchain4j.dev/integrations/language-models/qianfan) | | ✅ | ✅ | ✅ | | | ✅ |
| [ChatGLM](https://docs.langchain4j.dev/integrations/language-models/chatglm) | | ✅ | | | | | |
| [Nomic](https://docs.langchain4j.dev/integrations/language-models/nomic) | | | | ✅ | | | |
| [Anthropic](https://docs.langchain4j.dev/integrations/language-models/anthropic) | ✅ | ✅ | ✅ | | | | ✅ |
| [Zhipu AI](https://docs.langchain4j.dev/integrations/language-models/zhipu-ai) | | ✅ | ✅ | ✅ | | | ✅ |
## Get Help
Please use [Discord](https://discord.gg/JzTFvyjG6R) or [GitHub discussions](https://github.com/langchain4j/langchain4j/discussions)
to get help.
## Disclaimer
Please note that the library is in active development and:
- Some features are still missing. We are working hard on implementing them ASAP.
- API might change at any moment. At this point, we prioritize good design in the future over backward compatibility now. We hope for your understanding.
- We need your input! Please [let us know](https://github.com/langchain4j/langchain4j/issues/new/choose) what features you need and your concerns about the current implementation.
## Request Features
Please let us know what features you need by [opening an issue](https://github.com/langchain4j/langchain4j/issues/new/choose).
## Current features (this list is outdated; we have many more):
- AI Services:
- [Simple](https://github.com/langchain4j/langchain4j-examples/blob/main/other-examples/src/main/java/SimpleServiceExample.java)
- [With Memory](https://github.com/langchain4j/langchain4j-examples/blob/main/other-examples/src/main/java/ServiceWithMemoryExample.java)
- [With Tools](https://github.com/langchain4j/langchain4j-examples/blob/main/other-examples/src/main/java/ServiceWithToolsExample.java)
- [With Streaming](https://github.com/langchain4j/langchain4j-examples/blob/main/other-examples/src/main/java/ServiceWithStreamingExample.java)
- [With RAG](https://github.com/langchain4j/langchain4j-examples/blob/main/other-examples/src/main/java/ServiceWithRetrieverExample.java)
- [With Auto-Moderation](https://github.com/langchain4j/langchain4j-examples/blob/main/other-examples/src/main/java/ServiceWithAutoModerationExample.java)
- [With Structured Outputs, Structured Prompts, etc](https://github.com/langchain4j/langchain4j-examples/blob/main/other-examples/src/main/java/OtherServiceExamples.java)
- Integration with [OpenAI](https://platform.openai.com/docs/introduction) and [Azure OpenAI](https://learn.microsoft.com/en-us/azure/ai-services/openai/overview) for:
- [Chats](https://platform.openai.com/docs/guides/chat) (sync + streaming + functions)
- [Completions](https://platform.openai.com/docs/guides/completion) (sync + streaming)
- [Embeddings](https://platform.openai.com/docs/guides/embeddings)
- Integration with [Google Vertex AI](https://cloud.google.com/vertex-ai) for:
- [Chats](https://cloud.google.com/vertex-ai/docs/generative-ai/chat/chat-prompts)
- [Completions](https://cloud.google.com/vertex-ai/docs/generative-ai/text/text-overview)
- [Embeddings](https://cloud.google.com/vertex-ai/docs/generative-ai/embeddings/get-text-embeddings)
- Integration with [Hugging Face Inference API](https://huggingface.co/docs/api-inference/index) for:
- [Chats](https://huggingface.co/docs/api-inference/detailed_parameters#text-generation-task)
- [Completions](https://huggingface.co/docs/api-inference/detailed_parameters#text-generation-task)
- [Embeddings](https://huggingface.co/docs/api-inference/detailed_parameters#feature-extraction-task)
- Integration with [LocalAI](https://localai.io/) for:
- Chats (sync + streaming + functions)
- Completions (sync + streaming)
- Embeddings
- Integration with [DashScope](https://dashscope.aliyun.com/) for:
- Chats (sync + streaming)
- Completions (sync + streaming)
- Embeddings
- [Chat memory](https://github.com/langchain4j/langchain4j-examples/blob/main/other-examples/src/main/java/ChatMemoryExamples.java)
- [Persistent chat memory](https://github.com/langchain4j/langchain4j-examples/blob/main/other-examples/src/main/java/ServiceWithPersistentMemoryForEachUserExample.java)
- [Chat with Documents](https://github.com/langchain4j/langchain4j-examples/blob/main/other-examples/src/main/java/ChatWithDocumentsExamples.java)
- Integration with [Astra DB](https://www.datastax.com/products/datastax-astra) and [Cassandra](https://cassandra.apache.org/)
- [Integration](https://github.com/langchain4j/langchain4j-examples/blob/main/chroma-example/src/main/java/ChromaEmbeddingStoreExample.java) with [Chroma](https://www.trychroma.com/)
- [Integration](https://github.com/langchain4j/langchain4j-examples/blob/main/elasticsearch-example/src/main/java/ElasticsearchEmbeddingStoreExample.java) with [Elasticsearch](https://www.elastic.co/)
- [Integration](https://github.com/langchain4j/langchain4j-examples/blob/main/milvus-example/src/main/java/MilvusEmbeddingStoreExample.java) with [Milvus](https://milvus.io/)
- [Integration](https://github.com/langchain4j/langchain4j-examples/blob/main/pinecone-example/src/main/java/PineconeEmbeddingStoreExample.java) with [Pinecone](https://www.pinecone.io/)
- [Integration](https://github.com/langchain4j/langchain4j-examples/blob/main/redis-example/src/main/java/RedisEmbeddingStoreExample.java) with [Redis](https://redis.io/)
- [Integration](https://github.com/langchain4j/langchain4j-examples/blob/main/vespa-example/src/main/java/VespaEmbeddingStoreExample.java) with [Vespa](https://vespa.ai/)
- [Integration](https://github.com/langchain4j/langchain4j-examples/blob/main/weaviate-example/src/main/java/WeaviateEmbeddingStoreExample.java) with [Weaviate](https://weaviate.io/)
- [In-memory embedding store](https://github.com/langchain4j/langchain4j-examples/blob/main/other-examples/src/main/java/embedding/store/InMemoryEmbeddingStoreExample.java) (can be persisted)
- [Structured outputs](https://github.com/langchain4j/langchain4j-examples/blob/main/other-examples/src/main/java/OtherServiceExamples.java)
- [Prompt templates](https://github.com/langchain4j/langchain4j-examples/blob/main/other-examples/src/main/java/PromptTemplateExamples.java)
- [Structured prompt templates](https://github.com/langchain4j/langchain4j-examples/blob/main/other-examples/src/main/java/StructuredPromptTemplateExamples.java)
- [Streaming of LLM responses](https://github.com/langchain4j/langchain4j-examples/blob/main/other-examples/src/main/java/StreamingExamples.java)
- [Loading txt, html, pdf, doc, xls and ppt documents from the file system and via URL](https://github.com/langchain4j/langchain4j-examples/blob/main/other-examples/src/main/java/DocumentLoaderExamples.java)
- [Splitting documents into segments](https://github.com/langchain4j/langchain4j-examples/blob/main/other-examples/src/main/java/ChatWithDocumentsExamples.java):
- by paragraphs, lines, sentences, words, etc
- recursively
- with overlap
- Token count estimation (so that you can predict how much you will pay)
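As a brief illustration of token count estimation (a hedged sketch; the model name is an example):

```java
// Estimate how many tokens a text will consume before sending it to the LLM
Tokenizer tokenizer = new OpenAiTokenizer("gpt-3.5-turbo");
int tokenCount = tokenizer.estimateTokenCountInText("Hello, how are you?");
```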
## Coming soon:
- Extending "AI Service" features
- Integration with more LLM providers (commercial and free)
- Integrations with more embedding stores (commercial and free)
- Support for more document types
- Long-term memory for chatbots and agents
- Chain-of-Thought and Tree-of-Thought
## Contribute
Contribution guidelines can be found [here](https://github.com/langchain4j/langchain4j/blob/main/CONTRIBUTING.md).
## Use cases
You might ask, "Why would I need all of this?"
Here are a couple of examples:
- You want to implement a custom AI-powered chatbot that has access to your data and behaves the way you want it to:
  - Customer support chatbot that can:
    - politely answer customer questions
    - take, change, or cancel orders
  - Educational assistant that can:
    - teach various subjects
    - explain unclear parts
    - assess the user's understanding/knowledge
- You want to process a lot of unstructured data (files, web pages, etc.) and extract structured information from them.
  For example:
  - extract insights from customer reviews and support chat history
  - extract interesting information from the websites of your competitors
  - extract insights from CVs of job applicants
- You want to generate information, for example:
  - emails tailored for each of your customers
  - content for your app/website:
    - blog posts
    - stories
- You want to transform information, for example:
  - summarize
  - proofread and rewrite
  - translate
## Best practices
We highly recommend
watching [this amazing 90-minute tutorial](https://www.deeplearning.ai/short-courses/chatgpt-prompt-engineering-for-developers/)
on prompt engineering best practices, presented by Andrew Ng (DeepLearning.AI) and Isa Fulford (OpenAI).
This course will teach you how to use LLMs efficiently and achieve the best possible results. It's a good investment of your time!
Here are some best practices for using LLMs:
- Be responsible. Use AI for Good.
- Be specific. The more specific your query, the better the results you will get.
- Add a ["Let's think step by step" instruction](https://arxiv.org/pdf/2205.11916.pdf) to your prompt.
- Specify the steps to achieve the desired goal yourself. This will make the LLM do what you want it to do.
- Provide examples. Sometimes it is best to show the LLM a few examples of what you want instead of trying to explain it.
- Ask the LLM to provide structured output (JSON, XML, etc.). This way, you can parse the response more easily and distinguish its different parts.
- Use unusual delimiters, such as triple backticks, to help the LLM distinguish data or input from instructions.
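Several of these practices can be combined in a prompt template. Here is a hedged sketch using LangChain4j's `PromptTemplate` (the template wording and variable name are illustrative):

```java
// A prompt combining unusual delimiters, a step-by-step instruction,
// and a request for structured output
PromptTemplate template = PromptTemplate.from(
        "Summarize the customer review delimited by ### in one sentence. "
                + "Let's think step by step. "
                + "Respond in JSON with the fields \"summary\" and \"sentiment\".\n"
                + "###{{review}}###");
Prompt prompt = template.apply(Map.of("review", "The product arrived quickly and works great!"));
```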
## How to get an API key
You will need an API key from OpenAI (paid) or Hugging Face (free) to use LLMs hosted by them.
We recommend using OpenAI LLMs (`gpt-3.5-turbo` and `gpt-4`) as they are by far the most capable and are reasonably priced.
It will cost approximately $0.01 to generate 10 pages (A4 format) of text with `gpt-3.5-turbo`. With `gpt-4`, the cost will be $0.30 to generate the same amount of text. However, for some use cases, this higher cost may be justified.
[How to get OpenAI API key](https://www.howtogeek.com/885918/how-to-get-an-openai-api-key/).
For embeddings, we recommend using one of the models from the [Hugging Face MTEB leaderboard](https://huggingface.co/spaces/mteb/leaderboard).
You'll have to find the best one for your specific use case.
Here's how to get a Hugging Face API key:
- Create an account on https://huggingface.co
- Go to https://huggingface.co/settings/tokens
- Generate a new access token


# Get Started
## Prerequisites
:::note
Ensure you have Java 8 or higher installed. Verify it by typing this command in your terminal:
```shell
java -version
```
If you are using Quarkus, see [Quarkus Integration](/tutorials/quarkus-integration/).
If you are using Spring Boot, see [Spring Boot Integration](/tutorials/spring-boot-integration).
:::
## Write a "Hello World" program
LangChain4j offers [integration with many LLM providers](/integrations/language-models/).
Each integration has its own Maven dependency.
The simplest way to begin is with the OpenAI integration:
- For Maven in `pom.xml`:
```xml
<dependency>
    <groupId>dev.langchain4j</groupId>
    <artifactId>langchain4j-open-ai</artifactId>
    <version>0.31.0</version>
</dependency>
```
If you wish to use a high-level [AI Services](/tutorials/ai-services) API, you will also need to add
the following dependency:
```xml
<dependency>
    <groupId>dev.langchain4j</groupId>
    <artifactId>langchain4j</artifactId>
    <version>0.31.0</version>
</dependency>
```
- For Gradle in `build.gradle`:
```groovy
implementation 'dev.langchain4j:langchain4j-open-ai:0.31.0'
implementation 'dev.langchain4j:langchain4j:0.31.0'
```
Then, import your OpenAI API key.
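For example (assuming the key is stored in an environment variable, as in the README's Getting Started steps):

```java
// Reading the key from an environment variable keeps it out of source code
String apiKey = System.getenv("OPENAI_API_KEY");
OpenAiChatModel model = OpenAiChatModel.withApiKey(apiKey);
```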


Welcome!
The goal of LangChain4j is to simplify integrating LLMs into Java applications.
Here's how:
1. **Unified APIs:**
LLM providers (like OpenAI or Google Vertex AI) and embedding (vector) stores (such as Pinecone or Milvus)
use proprietary APIs. LangChain4j offers a unified API to avoid the need for learning and implementing specific APIs for each of them.
To experiment with different LLMs or embedding stores, you can easily switch between them without the need to rewrite your code.
LangChain4j currently supports [15+ popular LLM providers](/integrations/language-models/)
and [15+ embedding stores](/integrations/embedding-stores/).
2. **Comprehensive Toolbox:**
During the past year, the community has been building numerous LLM-powered applications,
identifying common abstractions, patterns, and techniques. LangChain4j has refined these into practical code.
Our toolbox includes tools ranging from low-level prompt templating, chat memory management, and output parsing
to high-level patterns like AI Services and RAG.
For each abstraction, we provide an interface along with multiple ready-to-use implementations based on common techniques.
Whether you're building a chatbot or developing a RAG with a complete pipeline from data ingestion to retrieval,
LangChain4j offers a wide variety of options.
3. **Numerous Examples:**
These [examples](https://github.com/langchain4j/langchain4j-examples) showcase how to begin creating various LLM-powered applications,
providing inspiration and enabling you to start building quickly.
LangChain4j began development in early 2023 amid the ChatGPT hype.
We noticed a lack of Java counterparts to the numerous Python and JavaScript LLM libraries and frameworks,
We actively monitor community developments, aiming to quickly incorporate new techniques and integrations,
ensuring you stay up-to-date.
The library is under active development. While some features are still being worked on,
the core functionality is in place, allowing you to start building LLM-powered apps now!
For easier integration, LangChain4j also includes integration with
[Quarkus](/tutorials/quarkus-integration) and [Spring Boot](/tutorials/spring-boot-integration).
## LangChain4j Features
- Integration with [15+ LLM providers](/integrations/language-models)
- Integration with [15+ embedding (vector) stores](/integrations/embedding-stores)
- Integration with [10+ embedding models](/category/embedding-models)
- Integration with [3 cloud and local image generation models](/category/image-models)
- Integration with [2 scoring (re-ranking) models](/category/scoring-reranking-models)
- Integration with one moderation model: OpenAI
- Support for texts and images as inputs (multimodality)
- [AI Services](/tutorials/ai-services) (high-level LLM API)
- Prompt templates
- Implementation of persistent and in-memory [chat memory](/tutorials/chat-memory) algorithms: message window and token window
- [Streaming of responses from LLMs](/tutorials/response-streaming)
- Output parsers for common Java types and custom POJOs
- [Tools (function calling)](/tutorials/tools)
- Dynamic Tools (execution of dynamically generated LLM code)
- [RAG (Retrieval-Augmented-Generation)](/tutorials/rag):
  - Ingestion:
    - Importing various types of documents (TXT, PDFs, DOC, PPT, XLS etc.) from multiple sources (file system, URL, GitHub, Azure Blob Storage, Amazon S3, etc.)
    - Splitting documents into smaller segments using multiple splitting algorithms
  - Re-ranking
  - Reciprocal Rank Fusion
  - Customization of each step in the RAG flow
- Text classification
- Tools for tokenization and estimation of token counts
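As a minimal sketch of the message-window chat memory mentioned in the feature list (the window size is an example):

```java
// Keeps only the 10 most recent messages; older ones are evicted
ChatMemory chatMemory = MessageWindowChatMemory.withMaxMessages(10);
chatMemory.add(UserMessage.from("Hello!"));
```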
## 2 levels of abstraction
LangChain4j operates on two levels of abstraction:
- [Low level](/tutorials/chat-and-language-models). At this level, you have the most freedom and access to all the low-level components such as
`ChatLanguageModel`, `UserMessage`, `AiMessage`, `EmbeddingStore`, `Embedding`, etc.
These are the "primitives" of your LLM-powered application.
You have complete control over how to combine them, but you will need to write more glue code.
- [High level](/tutorials/ai-services). At this level, you interact with LLMs using high-level APIs like `AiServices`,
which hides all the complexity and boilerplate from you.
You still have the flexibility to adjust and fine-tune the behavior, but it is done in a declarative manner.
[![](/img/langchain4j-components.png)](/intro)
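To make the contrast concrete, here is a brief sketch of both levels (assuming the OpenAI integration; names are illustrative):

```java
// Low level: use the ChatLanguageModel "primitive" directly
ChatLanguageModel model = OpenAiChatModel.withApiKey(System.getenv("OPENAI_API_KEY"));
String answer = model.generate("Hello");

// High level: let an AI Service hide the boilerplate
interface Assistant {
    String chat(String userMessage);
}

Assistant assistant = AiServices.create(Assistant.class, model);
String sameQuestion = assistant.chat("Hello");
```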
## LangChain4j Library Structure
LangChain4j features a modular design, comprising:
- The `langchain4j-core` module, which defines core abstractions (such as `ChatLanguageModel` and `EmbeddingStore`) and their APIs.
- The main `langchain4j` module, containing useful tools like `ChatMemory` and `OutputParser`, as well as high-level features like `AiServices`.
- A wide array of `langchain4j-{integration}` modules, each integrating a particular LLM provider or embedding store into LangChain4j.
You can use the `langchain4j-{integration}` modules independently. For additional features, simply import the main `langchain4j` dependency.
### Tutorials (User Guide)
Discover inspiring [use cases](/tutorials/#or-consider-some-of-the-use-cases) or follow our step-by-step introduction to LangChain4j features under [Tutorials](/category/tutorials).
You will get a tour of all LangChain4j functionality in steps of increasing complexity. All steps are demonstrated with complete code examples and code explanation.
## LangChain4j Repositories
- [Main repository](https://github.com/langchain4j/langchain4j)
- [Quarkus extension](https://github.com/quarkiverse/quarkus-langchain4j)
- [Spring Boot integration](https://github.com/langchain4j/langchain4j-spring)
- [Examples](https://github.com/langchain4j/langchain4j-examples)
- [Community resources](https://github.com/langchain4j/langchain4j-community-resources)
- [In-process embeddings](https://github.com/langchain4j/langchain4j-embeddings)
### Integrations and Models
LangChain4j offers ready-to-use integrations with models from OpenAI, HuggingFace, Google, Azure, and many more.
It has document loaders for all common document types, and integrations with plenty of embedding models and embedding stores, to facilitate retrieval-augmented generation and AI-powered classification.
All integrations are listed [here](/category/integrations).
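As a sketch of how these pieces combine for retrieval-augmented generation, the following ingests a document into an embedding store. It assumes the `langchain4j-embeddings-all-minilm-l6-v2` in-process embedding model module is on the classpath, and the file path is purely illustrative:

```java
import dev.langchain4j.data.document.Document;
import dev.langchain4j.data.document.loader.FileSystemDocumentLoader;
import dev.langchain4j.data.document.splitter.DocumentSplitters;
import dev.langchain4j.data.segment.TextSegment;
import dev.langchain4j.model.embedding.AllMiniLmL6V2EmbeddingModel;
import dev.langchain4j.model.embedding.EmbeddingModel;
import dev.langchain4j.store.embedding.EmbeddingStoreIngestor;
import dev.langchain4j.store.embedding.inmemory.InMemoryEmbeddingStore;
import java.nio.file.Paths;

public class IngestionSketch {

    public static void main(String[] args) {
        // Load a document (path is illustrative)
        Document document = FileSystemDocumentLoader.loadDocument(Paths.get("/path/to/document.txt"));

        // In-process embedding model and in-memory store; any supported
        // embedding model or embedding store integration can be swapped in
        EmbeddingModel embeddingModel = new AllMiniLmL6V2EmbeddingModel();
        InMemoryEmbeddingStore<TextSegment> embeddingStore = new InMemoryEmbeddingStore<>();

        // Split, embed, and store the document for later retrieval
        EmbeddingStoreIngestor.builder()
                .documentSplitter(DocumentSplitters.recursive(300, 0))
                .embeddingModel(embeddingModel)
                .embeddingStore(embeddingStore)
                .build()
                .ingest(document);
    }
}
```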
### Code Examples
You can browse through code examples in the `langchain4j-examples` repo:
- [Examples in plain Java](https://github.com/langchain4j/langchain4j-examples/tree/main/other-examples/src/main/java)
- [Example with Spring Boot](https://github.com/langchain4j/langchain4j-examples/blob/main/spring-boot-example/src/test/java/dev/example/CustomerSupportApplicationTest.java)
Quarkus-specific examples (leveraging the [quarkus-langchain4j](https://github.com/quarkiverse/quarkus-langchain4j)
dependency, which builds on this project) can be
found [here](https://github.com/quarkiverse/quarkus-langchain4j/tree/main/samples)
### Useful Materials
[Useful Materials](https://docs.langchain4j.dev/useful-materials)
### Disclaimer
Please note that the library is in active development and:
- Some features are still missing. We are working hard on implementing them ASAP.
- The API might change at any moment. At this point, we prioritize getting the design right over preserving backward compatibility. We hope for your understanding.
- We need your input! Please [let us know](https://github.com/langchain4j/langchain4j/issues/new/choose) what features
you need and your concerns about the current implementation.
### Coming soon
- Extending "AI Service" features
- Integration with more LLM providers (commercial and free)
- Integrations with more embedding stores (commercial and free)
- Support for more document types
- Long-term memory for chatbots and agents
- Chain-of-Thought and Tree-of-Thought
### Request features
Please [let us know](https://github.com/langchain4j/langchain4j/issues/new/choose) what features you need!
### Contribute
Please help us make this open-source library better by contributing to our [github repo](https://github.com/langchain4j/langchain4j).
## Use Cases
You might ask: why would I need all of this?
Here are some examples:
- You want to implement a custom AI-powered chatbot that has access to your data and behaves the way you want it to:
  - A customer support chatbot that can:
    - politely answer customer questions
    - take, change, and cancel orders
  - An educational assistant that can:
    - teach various subjects
    - explain unclear parts
    - assess a user's understanding/knowledge
- You want to process a lot of unstructured data (files, web pages, etc.) and extract structured information from it.
For example:
  - extract insights from customer reviews and support chat history
  - extract interesting information from the websites of your competitors
  - extract insights from CVs of job applicants
- You want to generate information, for example:
  - Emails tailored for each of your customers
  - Content for your app/website:
    - Blog posts
    - Stories
- You want to transform information, for example:
  - Summarize
  - Proofread and rewrite
  - Translate


@@ -4,6 +4,11 @@ sidebar_position: 2
# Chat and Language Models
:::note
This page describes a low-level LLM API.
See [AI Services](/tutorials/ai-services) for a high-level LLM API.
:::
LLMs are currently available in two API types:
- `LanguageModel`s. Their API is very simple - they accept a `String` as input and return a `String` as output.
This API is now becoming obsolete in favor of the chat API (the second API type).


@@ -4,6 +4,11 @@ sidebar_position: 5
# Response Streaming
:::note
This page describes response streaming with a low-level LLM API.
See [AI Services](/tutorials/ai-services#streaming) for a high-level LLM API.
:::
LLMs generate text one token at a time, so many LLM providers offer a way to stream the response
token-by-token instead of waiting for the entire text to be generated.
This significantly improves the user experience, as the user does not need to wait an unknown


@@ -1,70 +0,0 @@
---
title: Overview
hide_title: false
sidebar_position: 1
---
Here you will find tutorials covering all of LangChain4j's functionality, guiding you through the framework in steps of increasing complexity.
We will typically use OpenAI models for demonstration purposes, but we support many other model providers too. The full list of supported models can be found [here](/category/integrations).
## Need inspiration?
### Watch the talk by [Lize Raes](https://github.com/LizeRaes) at Devoxx Belgium
<iframe width="640" height="480" src="https://www.youtube.com/embed/BD1MSLbs9KE"
title="Java Meets AI: A Hands On Guide to Building LLM Powered Applications with LangChain4j By Lize Raes"
frameborder="0"
style={{marginTop: '10px', marginBottom: '20px'}}
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
allowfullscreen></iframe>
### Talk by [Vaadin](https://vaadin.com/) team about Building a RAG AI system in Spring Boot & LangChain4j
<iframe width="640" height="480" src="https://www.youtube.com/embed/J-3n7xs98Kc"
title="How to build a retrieval-augmented generation (RAG) AI system in Java (Spring Boot + LangChain4j)"
frameborder="0"
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
allowfullscreen></iframe>
### Fireside Chat: LangChain4j & Quarkus by [Quarkusio](https://quarkus.io/)
<iframe width="640" height="480"
src="https://www.youtube.com/embed/mYw9ySwmK34"
title="Fireside Chat: LangChain4j &amp; Quarkus"
frameborder="0"
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
allowfullscreen></iframe>
### The Magic of AI Services with LangChain4j by [Tales from the jar side](https://www.youtube.com/@talesfromthejarside)
<iframe width="640" height="480" src="https://www.youtube.com/embed/Bx2OpE1nj34" title="The Magic of AI Services with LangChain4j" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
### Or consider some of the use cases
- You want to:
- Implement a custom AI-powered chatbot that has access to your data and behaves the way you want it.
- Implement a customer support chatbot that can:
- politely answer customer questions
- take /change/cancel orders
- Implement an educational assistant that can:
- Teach various subjects
- Explain unclear parts
- Assess user's understanding/knowledge
- You want to process a lot of unstructured data (files, web pages, etc) and extract structured information from them. For example:
- extract insights from customer reviews and support chat history
- extract interesting information from the websites of your competitors
- extract insights from CVs of job applicants
- Generate information, for example:
- Emails tailored for each of your customers
- Generate content for your app/website:
- Blog posts
- Stories
- Transform information, for example:
- Summarize
- Proofread and rewrite
- Translate


@@ -85,9 +85,9 @@ class AssistantController {
More details [here](https://github.com/langchain4j/langchain4j-spring/blob/main/langchain4j-spring-boot-starter/src/main/java/dev/langchain4j/service/spring/AiService.java).
## Supported versions
LangChain4j Spring Boot integration requires Java 17 and Spring Boot 3.2.
## Examples
- [Low-level Spring Boot example](https://github.com/langchain4j/langchain4j-examples/blob/main/spring-boot-example/src/main/java/dev/langchain4j/example/lowlevel/ChatLanguageModelController.java) using [ChatLanguageModel API](/tutorials/chat-and-language-models)


@@ -22,11 +22,11 @@ const FeatureList = [
),
},
{
title: 'AI Services, RAG, Tools',
Svg: require('@site/static/img/functionality-logos.svg').default,
description: (
<>
Our extensive toolbox provides a wide range of tools for common LLM operations, from low-level prompt templating, chat memory management, and output parsing, to high-level patterns like AI Services and RAG.
</>
),
}