Cedrick Lunven cd006b166c
Rework support of AstraDB and Cassandra (#548)
In the DataStax Astra DB SaaS solution, a new way to integrate with
vector databases has been introduced: an HTTP API instead of the
Cassandra cluster. It is called the Data API and follows MongoDB
principles, working with collections.

The pull request includes the following:

### Update on previous implementations

- Previous implementations of embedding stores have been grouped into a
single `CassandraEmbeddingStore`. It can be instantiated for Astra or
OSS Cassandra via two different builders, but everything
else is the same.

- Previous implementations of chat memory stores have been grouped into a
single `CassandraChatMemoryStore`. It can be instantiated for Astra or
OSS Cassandra via two different builders, but everything
else is the same.

- The integration test for OSS Cassandra now uses Testcontainers (the
Cassandra 5-alpha2 image is out)

- Usage
```java
// Using with Astra (Cassandra AAS in the cloud)
CassandraEmbeddingStore.builderAstra()
  .token(token)
  .databaseId(dbId)
  .databaseRegion(TEST_REGION)
  .keyspace(KEYSPACE)
  .table(TEST_INDEX)
  .dimension(11)
  .metric(CassandraSimilarityMetric.COSINE)
  .build();

// Using OSS Cassandra
CassandraEmbeddingStore.builder()
  .contactPoints(Arrays.asList(contactPoint.getHostName()))
  .port(contactPoint.getPort())
  .localDataCenter(DATACENTER)
  .keyspace(KEYSPACE)
  .table(TEST_INDEX)
  .dimension(11)
  .metric(CassandraSimilarityMetric.COSINE)
  .build();
```

- Adding JDK 11 in the pom:

```xml
<maven.compiler.source>11</maven.compiler.source>
<maven.compiler.target>11</maven.compiler.target>
```

- Introducing `insertMany()`, used for all bulk loading

- Extending the test variables in `EmbeddingStoreIT`

- Using `MessageWindowChatMemory` for the tests.
2024-02-08 15:54:53 +01:00

README.md

LangChain for Java: Supercharge your Java application with the power of LLMs

Introduction

Welcome!

The goal of LangChain4j is to simplify integrating AI/LLM capabilities into Java applications.

Here's how:

  1. Unified APIs: LLM providers (like OpenAI or Google Vertex AI) and embedding (vector) stores (such as Pinecone or Vespa) use proprietary APIs. LangChain4j offers a unified API to avoid the need for learning and implementing specific APIs for each of them. To experiment with a different LLM or embedding store, you can easily switch between them without rewriting your code. LangChain4j currently supports over 10 popular LLM providers and more than 15 embedding stores. Think of it as Hibernate, but for LLMs and embedding stores.
  2. Comprehensive Toolbox: Over the past year, the community has been building numerous LLM-powered applications, identifying common patterns, abstractions, and techniques. LangChain4j has refined these into practical code. Our toolbox includes tools ranging from low-level prompt templating, memory management, and output parsing to high-level patterns like Agents and RAG. For each pattern and abstraction, we provide an interface along with multiple ready-to-use implementations based on proven techniques. Whether you're building a chatbot or developing a RAG pipeline covering everything from data ingestion to retrieval, LangChain4j offers a wide variety of options.
  3. Numerous Examples: These examples showcase how to begin creating various LLM-powered applications, providing inspiration and enabling you to start building quickly.

LangChain4j began development in early 2023 amid the ChatGPT hype. We noticed a lack of Java counterparts to the numerous Python and JavaScript LLM libraries and frameworks, and we had to fix that! Although "LangChain" is in our name, the project is a fusion of ideas and concepts from LangChain, Haystack, LlamaIndex, and the broader community, spiced up with a touch of our own innovation.

We actively monitor community developments, aiming to quickly incorporate new techniques and integrations, ensuring you stay up-to-date. The library is under active development. While some features from the Python version of LangChain are still being worked on, the core functionality is in place, allowing you to start building LLM-powered apps now!

For easier integration, LangChain4j also includes integration with Quarkus (extension) and Spring Boot (starters).

Code Examples

Please see examples of how LangChain4j can be used in the langchain4j-examples repo.

Documentation

Documentation can be found here.

Tutorials

Tutorials can be found here.

Useful Materials

Library Structure

LangChain4j features a modular design, comprising:

  • The langchain4j-core module, which defines core abstractions (such as ChatLanguageModel and EmbeddingStore) and their APIs.
  • The main langchain4j module, containing useful tools like ChatMemory and OutputParser, as well as high-level features like AiServices.
  • A wide array of langchain4j-xyz modules, each providing integration with various LLM providers and embedding stores into LangChain4j. You can use the langchain4j-xyz modules independently. For additional features, simply import the main langchain4j dependency.

News

30 January:

Previous News

22 December:

12 November:

29 September:

  • Updates to models API: return Response<T> instead of T. Response<T> contains token usage and finish reason.
  • All model and embedding store integrations now live in their own modules
  • Integration with Vespa by @Heezer
  • Integration with Elasticsearch by @Martin7-1
  • Integration with Redis by @Martin7-1
  • Integration with Milvus by @IuriiKoval
  • Integration with Astra DB and Cassandra by @clun
  • Added support for overlap in document splitters
  • Some bugfixes and smaller improvements

29 August:

19 August:

10 August:

26 July:

21 July:

17 July:

  • You can now try out OpenAI's gpt-3.5-turbo and text-embedding-ada-002 models with LangChain4j for free, without needing an OpenAI account and keys! Simply use the API key "demo".

15 July:

  • Added EmbeddingStoreIngestor
  • Redesigned document loaders (see FileSystemDocumentLoader)
  • Simplified ConversationalRetrievalChain
  • Renamed DocumentSegment into TextSegment
  • Added output parsers for numeric types
  • Added @UserName for AI Services
  • Fixed issues 23 and 24

11 July:

  • Added "Dynamic Tools": Now, the LLM can generate code for tasks that require precise calculations, such as math and string manipulation. This will be dynamically executed in a style akin to GPT-4's code interpreter! We use Judge0, hosted by Rapid API, for code execution. You can subscribe and receive 50 free executions per day.

5 July:

  • Now you can add your custom knowledge base to "AI Services". Relevant information will be automatically retrieved and injected into the prompt. This way, the LLM will have a context of your data and will answer based on it!
  • The current date and time can now be automatically injected into the prompt using special {{current_date}}, {{current_time}} and {{current_date_time}} placeholders.
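Placeholder injection like this can be sketched in a few lines of plain Java. The helper below is hypothetical (not LangChain4j's actual implementation) and only illustrates the idea of replacing the special `{{current_date}}`-style tokens before the prompt is sent:

```java
import java.time.LocalDateTime;

public class PromptPlaceholders {

    // Hypothetical helper: replaces the special date/time placeholders
    // in a prompt template with values taken from the given clock time.
    static String inject(String template, LocalDateTime now) {
        return template
                .replace("{{current_date}}", now.toLocalDate().toString())
                .replace("{{current_time}}", now.toLocalTime().toString())
                .replace("{{current_date_time}}", now.toString());
    }

    public static void main(String[] args) {
        System.out.println(inject("Today is {{current_date}}.", LocalDateTime.now()));
    }
}
```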

3 July:

  • Added support for Spring Boot 3

2 July:

1 July:

Highlights

You can define declarative "AI Services" that are powered by LLMs:

```java
interface Assistant {

    String chat(String userMessage);
}

Assistant assistant = AiServices.create(Assistant.class, model);

String answer = assistant.chat("Hello");

System.out.println(answer);
// Hello! How can I assist you today?
```
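Under the hood, `AiServices` builds an implementation of your interface at runtime. As a rough illustration only (this is not the actual LangChain4j implementation), a declarative service like this can be backed by a `java.lang.reflect.Proxy` that forwards every call to a model, stubbed here as a plain `Function<String, String>`:

```java
import java.lang.reflect.Proxy;
import java.util.function.Function;

public class MiniAiServices {

    // Hypothetical sketch: implement any single-method, chat-style interface
    // by routing every call through the given model function.
    @SuppressWarnings("unchecked")
    static <T> T create(Class<T> iface, Function<String, String> model) {
        return (T) Proxy.newProxyInstance(
                iface.getClassLoader(),
                new Class<?>[]{iface},
                (proxy, method, args) -> model.apply((String) args[0]));
    }

    interface Assistant {
        String chat(String userMessage);
    }

    public static void main(String[] args) {
        // A stub model that echoes; a real one would call an LLM provider.
        Assistant assistant = create(Assistant.class, msg -> "You said: " + msg);
        System.out.println(assistant.chat("Hello"));
    }
}
```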

You can use an LLM as a classifier:

```java
enum Sentiment {
    POSITIVE, NEUTRAL, NEGATIVE
}

interface SentimentAnalyzer {

    @UserMessage("Analyze sentiment of {{it}}")
    Sentiment analyzeSentimentOf(String text);

    @UserMessage("Does {{it}} have a positive sentiment?")
    boolean isPositive(String text);
}

SentimentAnalyzer sentimentAnalyzer = AiServices.create(SentimentAnalyzer.class, model);

Sentiment sentiment = sentimentAnalyzer.analyzeSentimentOf("It is good!");
// POSITIVE

boolean positive = sentimentAnalyzer.isPositive("It is bad!");
// false
```
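The enum return type above is resolved by parsing the model's raw text reply. A minimal, hypothetical version of such an output parser could look like this:

```java
public class SentimentParser {

    enum Sentiment { POSITIVE, NEUTRAL, NEGATIVE }

    // Hypothetical sketch: map a raw LLM reply onto an enum constant,
    // tolerating surrounding whitespace and casing differences.
    static Sentiment parse(String raw) {
        String cleaned = raw.trim().toUpperCase();
        for (Sentiment s : Sentiment.values()) {
            if (cleaned.contains(s.name())) {
                return s;
            }
        }
        throw new IllegalArgumentException("Unrecognized sentiment: " + raw);
    }

    public static void main(String[] args) {
        System.out.println(parse("  positive\n")); // POSITIVE
    }
}
```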

You can easily extract structured information from unstructured data:

```java
class Person {

    private String firstName;
    private String lastName;
    private LocalDate birthDate;
}

interface PersonExtractor {

    @UserMessage("Extract information about a person from {{text}}")
    Person extractPersonFrom(@V("text") String text);
}

PersonExtractor extractor = AiServices.create(PersonExtractor.class, model);

String text = "In 1968, amidst the fading echoes of Independence Day, "
    + "a child named John arrived under the calm evening sky. "
    + "This newborn, bearing the surname Doe, marked the start of a new journey.";

Person person = extractor.extractPersonFrom(text);
// Person { firstName = "John", lastName = "Doe", birthDate = 1968-07-04 }
```

You can provide tools that LLMs can use! They can be anything: retrieving information from a DB, calling APIs, etc. See an example here.

Compatibility

  • Java: 8 or higher
  • Spring Boot: 2 or 3

Getting started

  1. Add LangChain4j OpenAI dependency to your project:

    • Maven:
      <dependency>
          <groupId>dev.langchain4j</groupId>
          <artifactId>langchain4j-open-ai</artifactId>
          <version>0.26.1</version>
      </dependency>
      
    • Gradle:
      implementation 'dev.langchain4j:langchain4j-open-ai:0.26.1'
      
  2. Import your OpenAI API key:

    String apiKey = System.getenv("OPENAI_API_KEY");
    

    You can also use the API key demo to test OpenAI, which we provide for free. How to get an API key?

  3. Create an instance of a model and start interacting:

    OpenAiChatModel model = OpenAiChatModel.withApiKey(apiKey);
    
    String answer = model.generate("Hello world!");
    
    System.out.println(answer); // Hello! How can I assist you today?
    

Supported LLM Integrations (Docs)

Provider Native Image Completion Streaming Async Completion Async Streaming Embedding Image Generation ReRanking
OpenAI
Azure OpenAI
Hugging Face
Amazon Bedrock
Google Vertex AI Gemini
Google Vertex AI
Mistral AI
DashScope
LocalAI
Ollama
Cohere
Qianfan
ChatGLM
Nomic

Disclaimer

Please note that the library is in active development and:

  • Some features are still missing. We are working hard on implementing them ASAP.
  • The API might change at any moment. At this point, we prioritize good design in the future over backward compatibility now. We hope for your understanding.
  • We need your input! Please let us know what features you need and your concerns about the current implementation.

Current features (this list is outdated, we have much more):

Coming soon:

  • Extending "AI Service" features
  • Integration with more LLM providers (commercial and free)
  • Integrations with more embedding stores (commercial and free)
  • Support for more document types
  • Long-term memory for chatbots and agents
  • Chain-of-Thought and Tree-of-Thought

Request features

Please let us know what features you need!

Contribute

Please help us make this open-source library better by contributing.

Some guidelines:

  1. Follow Google's Best Practices for Java Libraries.
  2. Keep the code compatible with Java 8.
  3. Avoid adding new dependencies as much as possible. If absolutely necessary, try to (re)use the same libraries which are already present.
  4. Follow existing code styles present in the project.
  5. Ensure to add Javadoc where necessary.
  6. Provide unit and/or integration tests for your code.
  7. Large features should be discussed with maintainers before implementation.

Use cases

You might ask: why would I need all of this? Here are a couple of examples:

  • You want to implement a custom AI-powered chatbot that has access to your data and behaves the way you want it:
    • Customer support chatbot that can:
      • politely answer customer questions
      • take/change/cancel orders
    • Educational assistant that can:
      • Teach various subjects
      • Explain unclear parts
      • Assess user's understanding/knowledge
  • You want to process a lot of unstructured data (files, web pages, etc) and extract structured information from them. For example:
    • extract insights from customer reviews and support chat history
    • extract interesting information from the websites of your competitors
    • extract insights from CVs of job applicants
  • You want to generate information, for example:
    • Emails tailored for each of your customers
    • Content for your app/website:
      • Blog posts
      • Stories
  • You want to transform information, for example:
    • Summarize
    • Proofread and rewrite
    • Translate

Best practices

We highly recommend watching this amazing 90-minute tutorial on prompt engineering best practices, presented by Andrew Ng (DeepLearning.AI) and Isa Fulford (OpenAI). This course will teach you how to use LLMs efficiently and achieve the best possible results. Good investment of your time!

Here are some best practices for using LLMs:

  • Be responsible. Use AI for Good.
  • Be specific. The more specific your query, the better the results you will get.
  • Add a "Let's think step by step" instruction to your prompt.
  • Specify the steps to achieve the desired goal yourself. This will make the LLM do what you want it to do.
  • Provide examples. Sometimes it is best to show the LLM a few examples of what you want instead of trying to explain it.
  • Ask the LLM to provide structured output (JSON, XML, etc.). This way, you can parse the response more easily and distinguish its different parts.
  • Use unusual delimiters, such as ```triple backticks```, to help the LLM distinguish data or input from instructions.
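As an illustration of the delimiter tip, here is a small, hypothetical helper that fences untrusted input inside triple backticks before it is concatenated with the instructions:

```java
public class PromptDelimiters {

    // Hypothetical helper: wrap user-supplied data in triple backticks so the
    // LLM can tell it apart from the instruction text around it.
    static String buildPrompt(String instruction, String data) {
        String fence = "`".repeat(3); // avoids a literal triple backtick in this snippet
        return instruction + "\n" + fence + "\n" + data + "\n" + fence;
    }

    public static void main(String[] args) {
        System.out.println(buildPrompt("Summarize the text below.",
                "LangChain4j simplifies integrating LLMs into Java applications."));
    }
}
```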

How to get an API key

You will need an API key from OpenAI (paid) or HuggingFace (free) to use LLMs hosted by them.

We recommend using OpenAI LLMs (gpt-3.5-turbo and gpt-4) as they are by far the most capable and are reasonably priced.

It will cost approximately $0.01 to generate 10 pages (A4 format) of text with gpt-3.5-turbo. With gpt-4, the cost will be $0.30 to generate the same amount of text. However, for some use cases, this higher cost may be justified.
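Using the figures above, the per-page arithmetic works out as follows (a rough estimate only; actual costs depend on token counts and OpenAI's current pricing):

```java
public class CostPerPage {

    // Figures quoted above: ~$0.01 per 10 A4 pages with gpt-3.5-turbo,
    // ~$0.30 per 10 pages with gpt-4.
    static double costPerPage(double costPerTenPages) {
        return costPerTenPages / 10;
    }

    public static void main(String[] args) {
        System.out.printf("gpt-3.5-turbo: $%.3f per page%n", costPerPage(0.01));
        System.out.printf("gpt-4:         $%.2f per page%n", costPerPage(0.30));
    }
}
```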

How to get OpenAI API key.

For embeddings, we recommend using one of the models from the HuggingFace MTEB leaderboard. You'll have to find the best one for your specific use case.

Here's how to get a HuggingFace API key: