langchain4j/langchain4j-core/pom.xml

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">

    <modelVersion>4.0.0</modelVersion>

    <properties>
        <!-- Tell the compiler to stop warning us about Java 8 -->
        <Xlint>-options</Xlint>
    </properties>

    <parent>
        <groupId>dev.langchain4j</groupId>
        <artifactId>langchain4j-parent</artifactId>
        <version>0.27.0-SNAPSHOT</version>
        <relativePath>../langchain4j-parent/pom.xml</relativePath>
    </parent>
    <artifactId>langchain4j-core</artifactId>
    <packaging>jar</packaging>
    <name>LangChain4j :: Core</name>
    <description>Core classes and interfaces of LangChain4j</description>
    <dependencies>

        <dependency>
            <groupId>com.google.code.gson</groupId>
            <artifactId>gson</artifactId>
        </dependency>

        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-api</artifactId>
        </dependency>
        <dependency>
            <groupId>org.projectlombok</groupId>
            <artifactId>lombok</artifactId>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>org.junit.jupiter</groupId>
            <artifactId>junit-jupiter-engine</artifactId>
            <scope>test</scope>
        </dependency>

        <dependency>
            <groupId>org.junit.jupiter</groupId>
            <artifactId>junit-jupiter-params</artifactId>
            <scope>test</scope>
        </dependency>

        <dependency>
            <groupId>org.assertj</groupId>
            <artifactId>assertj-core</artifactId>
            <scope>test</scope>
        </dependency>

        <dependency>
            <groupId>org.mockito</groupId>
            <artifactId>mockito-core</artifactId>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.mockito</groupId>
            <artifactId>mockito-junit-jupiter</artifactId>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.tinylog</groupId>
            <artifactId>tinylog-impl</artifactId>
            <scope>test</scope>
        </dependency>

        <dependency>
            <groupId>org.tinylog</groupId>
            <artifactId>slf4j-tinylog</artifactId>
            <scope>test</scope>
        </dependency>

    </dependencies>
    <build>
        <plugins>

            <!-- Attaches this module's test classes as a secondary "test-jar" artifact -->
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-jar-plugin</artifactId>
                <version>3.3.0</version>
                <executions>
                    <execution>
                        <goals>
                            <goal>test-jar</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
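            <!--
                The test-jar goal above publishes this module's test classes as a
                secondary artifact, so shared test utilities can be reused elsewhere.
                As a sketch (hypothetical consumer module, not part of this POM),
                another module would depend on those test classes like so:

                <dependency>
                    <groupId>dev.langchain4j</groupId>
                    <artifactId>langchain4j-core</artifactId>
                    <version>${project.version}</version>
                    <type>test-jar</type>
                    <scope>test</scope>
                </dependency>
            -->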
            <!-- JaCoCo: records coverage during tests, reports it, and fails the build
                 if per-package instruction coverage drops below the minimums configured below -->
            <plugin>
                <groupId>org.jacoco</groupId>
                <artifactId>jacoco-maven-plugin</artifactId>
                <version>0.8.11</version>
                <executions>
                    <execution>
                        <id>prepare-agent</id>
                        <goals>
                            <goal>prepare-agent</goal>
                        </goals>
                    </execution>
                    <execution>
                        <id>report</id>
                        <phase>prepare-package</phase>
                        <goals>
                            <goal>report</goal>
                        </goals>
                    </execution>
                    <execution>
                        <id>jacoco-check</id>
                        <goals>
                            <goal>check</goal>
                        </goals>
                        <configuration>
                            <rules>
                                <rule>
                                    <excludes>
                                        <exclude>dev.langchain4j.rag</exclude>
                                        <exclude>dev.langchain4j.rag.content</exclude>
                                        <exclude>dev.langchain4j.rag.content.aggregator</exclude>
                                        <exclude>dev.langchain4j.rag.content.injector</exclude>
                                        <exclude>dev.langchain4j.rag.content.retriever</exclude>
                                        <exclude>dev.langchain4j.rag.query</exclude>
                                        <exclude>dev.langchain4j.rag.query.router</exclude>
                                        <exclude>dev.langchain4j.rag.query.transformer</exclude>
                                    </excludes>
                                    <element>PACKAGE</element>
                                    <limits>
                                        <limit>
                                            <counter>INSTRUCTION</counter>
                                            <value>COVEREDRATIO</value>
                                            <minimum>0.95</minimum>
                                        </limit>
                                    </limits>
                                </rule>
                                <rule>
                                    <includes>
                                        <include>dev.langchain4j.rag</include>
                                        <include>dev.langchain4j.rag.content</include>
                                        <include>dev.langchain4j.rag.content.aggregator</include>
                                        <include>dev.langchain4j.rag.content.injector</include>
                                        <include>dev.langchain4j.rag.content.retriever</include>
                                        <include>dev.langchain4j.rag.query</include>
                                        <include>dev.langchain4j.rag.query.router</include>
                                        <include>dev.langchain4j.rag.query.transformer</include>
                                    </includes>
                                    <element>PACKAGE</element>
                                    <limits>
                                        <limit>
                                            <counter>INSTRUCTION</counter>
                                            <value>COVEREDRATIO</value>
                                            <minimum>0.80</minimum>
                                        </limit>
                                    </limits>
                                </rule>
                            </rules>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>

    <licenses>
        <license>
            <name>Apache License, Version 2.0</name>
            <url>http://www.apache.org/licenses/LICENSE-2.0.txt</url>
            <distribution>repo</distribution>
        </license>
    </licenses>
</project>