<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">

    <modelVersion>4.0.0</modelVersion>

    <properties>
        <!-- Tell the compiler to stop warning us about Java 8 -->
        <Xlint>-options</Xlint>
    </properties>
    <parent>
        <groupId>dev.langchain4j</groupId>
        <artifactId>langchain4j-parent</artifactId>
        <version>0.29.0-SNAPSHOT</version>
        <relativePath>../langchain4j-parent/pom.xml</relativePath>
    </parent>
    <artifactId>langchain4j-core</artifactId>
    <name>LangChain4j :: Core</name>
    <description>Core classes and interfaces of LangChain4j</description>
    <dependencies>

        <dependency>
            <groupId>com.google.code.gson</groupId>
            <artifactId>gson</artifactId>
        </dependency>

        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-api</artifactId>
        </dependency>
        <dependency>
            <groupId>org.projectlombok</groupId>
            <artifactId>lombok</artifactId>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>org.junit.jupiter</groupId>
            <artifactId>junit-jupiter-engine</artifactId>
            <scope>test</scope>
        </dependency>

        <dependency>
            <groupId>org.junit.jupiter</groupId>
            <artifactId>junit-jupiter-params</artifactId>
            <scope>test</scope>
        </dependency>

        <dependency>
            <groupId>org.assertj</groupId>
            <artifactId>assertj-core</artifactId>
            <scope>test</scope>
        </dependency>

        <dependency>
            <groupId>org.mockito</groupId>
            <artifactId>mockito-core</artifactId>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.mockito</groupId>
            <artifactId>mockito-junit-jupiter</artifactId>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.tinylog</groupId>
            <artifactId>tinylog-impl</artifactId>
            <scope>test</scope>
        </dependency>

        <dependency>
            <groupId>org.tinylog</groupId>
            <artifactId>slf4j-tinylog</artifactId>
            <scope>test</scope>
        </dependency>

    </dependencies>
    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-jar-plugin</artifactId>
                <version>3.3.0</version>
                <executions>
                    <execution>
                        <goals>
                            <goal>test-jar</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
            <plugin>
                <groupId>org.jacoco</groupId>
                <artifactId>jacoco-maven-plugin</artifactId>
                <version>0.8.11</version>
                <executions>
                    <execution>
                        <id>prepare-agent</id>
                        <goals>
                            <goal>prepare-agent</goal>
                        </goals>
                    </execution>
                    <execution>
                        <id>report</id>
                        <phase>prepare-package</phase>
                        <goals>
                            <goal>report</goal>
                        </goals>
                    </execution>
                    <execution>
                        <id>jacoco-check</id>
                        <goals>
                            <goal>check</goal>
                        </goals>
                        <configuration>
                            <rules>
                                <rule>
                                    <excludes>
                                        <exclude>dev.langchain4j.data.document</exclude>
                                        <exclude>dev.langchain4j.store.embedding</exclude>
                                        <exclude>dev.langchain4j.store.embedding.filter</exclude>
                                        <exclude>dev.langchain4j.store.embedding.filter.logical</exclude>
                                        <exclude>dev.langchain4j.store.embedding.filter.comparison</exclude>
                                        <exclude>dev.langchain4j.rag</exclude>
                                        <exclude>dev.langchain4j.rag.content</exclude>
                                        <exclude>dev.langchain4j.rag.content.aggregator</exclude>
                                        <exclude>dev.langchain4j.rag.content.injector</exclude>
                                        <exclude>dev.langchain4j.rag.content.retriever</exclude>
                                        <exclude>dev.langchain4j.rag.query</exclude>
                                        <exclude>dev.langchain4j.rag.query.router</exclude>
                                        <exclude>dev.langchain4j.rag.query.transformer</exclude>
                                    </excludes>
                                    <element>PACKAGE</element>
                                    <limits>
                                        <limit>
                                            <counter>INSTRUCTION</counter>
                                            <value>COVEREDRATIO</value>
                                            <minimum>0.9</minimum>
                                        </limit>
                                    </limits>
                                </rule>
                                <rule>
                                    <includes>
                                        <include>dev.langchain4j.rag</include>
                                        <include>dev.langchain4j.rag.content</include>
                                        <include>dev.langchain4j.rag.content.aggregator</include>
                                        <include>dev.langchain4j.rag.content.injector</include>
                                        <include>dev.langchain4j.rag.content.retriever</include>
                                        <include>dev.langchain4j.rag.query</include>
                                        <include>dev.langchain4j.rag.query.router</include>
                                        <include>dev.langchain4j.rag.query.transformer</include>
                                    </includes>
                                    <element>PACKAGE</element>
                                    <limits>
                                        <limit>
                                            <counter>INSTRUCTION</counter>
                                            <value>COVEREDRATIO</value>
                                            <minimum>0.80</minimum>
                                        </limit>
                                    </limits>
                                </rule>
For example: ```java TextSegment groundhogDay = TextSegment.from("Groundhog Day", new Metadata().put("genre", "comedy").put("year", 1993)); TextSegment forrestGump = TextSegment.from("Forrest Gump", new Metadata().put("genre", "drama").put("year", 1994)); TextSegment dieHard = TextSegment.from("Die Hard", new Metadata().put("genre", "action").put("year", 1998)); // describe metadata keys as if they were columns in the SQL table TableDefinition tableDefinition = TableDefinition.builder() .name("movies") .addColumn("genre", "VARCHAR", "one of [comedy, drama, action]") .addColumn("year", "INT") .build(); LanguageModelSqlFilterBuilder sqlFilterBuilder = new LanguageModelSqlFilterBuilder(model, tableDefinition); ContentRetriever contentRetriever = EmbeddingStoreContentRetriever.builder() .embeddingStore(embeddingStore) .embeddingModel(embeddingModel) .dynamicFilter(sqlFilterBuilder::build) .build(); String answer = assistant.answer("Recommend me a good drama from 90s"); // Forrest Gump ``` ## Which embedding store integrations will support `Filter`? In the long run, all (provided the embedding store itself supports it). In the first iteration, I aim to add support to just a few: - `InMemoryEmbeddingStore` - Elasticsearch - Milvus <!-- This is an auto-generated comment: release notes by coderabbit.ai --> ## Summary by CodeRabbit ## Summary by CodeRabbit - **New Features** - Introduced filters for checking key's value existence in a collection for improved data handling. - **Enhancements** - Updated `InMemoryEmbeddingStoreTest` to extend a different class for improved testing coverage and added a new test method. - **Refactor** - Made minor formatting adjustments in the assertion block for better readability. - **Documentation** - Updated class hierarchy information for clarity. <!-- end of auto-generated comment: release notes by coderabbit.ai -->
2024-03-09 00:06:58 +08:00
<rule>
<includes>
<include>dev.langchain4j.data.document</include>
<include>dev.langchain4j.store.embedding</include>
<include>dev.langchain4j.store.embedding.filter</include>
<include>dev.langchain4j.store.embedding.filter.logical</include>
<include>dev.langchain4j.store.embedding.filter.comparison</include>
</includes>
<element>PACKAGE</element>
<limits>
<limit>
<counter>INSTRUCTION</counter>
<value>COVEREDRATIO</value>
<minimum>0.00</minimum>
</limit>
</limits>
</rule>
</rules>
</configuration>
</execution>
</executions>
</plugin>
</plugins>
</build>
<licenses>
<license>
<name>Apache License, Version 2.0</name>
<url>http://www.apache.org/licenses/LICENSE-2.0.txt</url>
<distribution>repo</distribution>
</license>
</licenses>
</project>