<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">

    <modelVersion>4.0.0</modelVersion>

    <groupId>dev.langchain4j</groupId>
    <artifactId>langchain4j-aggregator</artifactId>
    <version>0.35.0-SNAPSHOT</version>
    <packaging>pom</packaging>
    <name>LangChain4j :: Aggregator</name>

    <properties>
        <gib.disable>true</gib.disable>
    </properties>

    <modules>

        <module>langchain4j-parent</module>
        <module>langchain4j-bom</module>

        <module>langchain4j-core</module>
        <module>langchain4j</module>

POC: Easy RAG (#686)
Implementing RAG applications is hard, especially for those who are just
getting started exploring LLMs and RAG.
This PR introduces an "Easy RAG" feature that should help developers
get started with RAG as easily as possible.
With it, there is no need to learn about
chunking/splitting/segmentation, embeddings, embedding models, vector
databases, retrieval techniques and other RAG-related concepts.
This is similar to how one can simply upload one or multiple files into
[OpenAI Assistants
API](https://platform.openai.com/docs/assistants/overview) and the LLM
will automagically know about their contents when answering questions.
Easy RAG uses a local embedding model running on your CPU (GPU support
can be added later).
Your files are ingested into an in-memory embedding store.
Please note that "Easy RAG" will not replace manual RAG setups or, especially,
[advanced RAG techniques](https://github.com/langchain4j/langchain4j/pull/538);
it simply provides an easier way to get started with RAG.
The quality of "Easy RAG" should be sufficient for demos, proofs of concept,
and getting started.
To use "Easy RAG", simply import `langchain4j-easy-rag` dependency that
includes everything needed to do RAG:
- Apache Tika document loader (to parse all document types
automatically)
- Quantized [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) in-process embedding model which has an impressive (for it's size) 51.68 [score](https://huggingface.co/spaces/mteb/leaderboard) for retrieval
Here is the proposed API:
```java
List<Document> documents = FileSystemDocumentLoader.loadDocuments(directoryPath); // one can also load documents recursively and filter with glob/regex
EmbeddingStore<TextSegment> embeddingStore = new InMemoryEmbeddingStore<>(); // we will use an in-memory embedding store for simplicity
EmbeddingStoreIngestor.ingest(documents, embeddingStore);

Assistant assistant = AiServices.builder(Assistant.class)
        .chatLanguageModel(model)
        .contentRetriever(EmbeddingStoreContentRetriever.from(embeddingStore))
        .build();

String answer = assistant.chat("Who is Charlie?"); // Charlie is a carrot...
```
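The example assumes an `Assistant` AI Service interface along these lines (a minimal sketch, not shown in the original description; `model` is an already-configured `ChatLanguageModel`):
```java
interface Assistant {

    String chat(String userMessage);
}
```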
`FileSystemDocumentLoader` in the above code loads documents using a
`DocumentParser` available on the classpath via SPI, in this case the
`ApacheTikaDocumentParser` imported with the `langchain4j-easy-rag`
dependency.
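As the comment in the example hints, documents can also be loaded recursively and filtered with a glob. A minimal sketch, assuming the `loadDocumentsRecursively` and `PathMatcher`-based overloads of `FileSystemDocumentLoader` and a hypothetical directory path:
```java
Path directoryPath = Path.of("/path/to/documents"); // hypothetical location

// load every parseable document under the directory, including subdirectories
List<Document> allDocuments = FileSystemDocumentLoader.loadDocumentsRecursively(directoryPath);

// load only PDF files, filtered with a java.nio.file.PathMatcher glob
PathMatcher onlyPdfs = FileSystems.getDefault().getPathMatcher("glob:**.pdf");
List<Document> pdfDocuments = FileSystemDocumentLoader.loadDocuments(directoryPath, onlyPdfs);
```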
The `EmbeddingStoreIngestor` in the above code:
- splits documents into smaller text segments using a `DocumentSplitter`
loaded via SPI from the `langchain4j-easy-rag` dependency. Currently it
uses `DocumentSplitters.recursive(300, 30, new HuggingFaceTokenizer())`
- embeds text segments using an `AllMiniLmL6V2QuantizedEmbeddingModel`
loaded via SPI from the `langchain4j-easy-rag` dependency
- stores text segments and their embeddings into the specified embedding
store
When using `InMemoryEmbeddingStore`, one can serialize/persist it into a
JSON string or into a file.
This way one can skip loading documents and embedding them on each
application run.
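A rough sketch of that flow (assuming `InMemoryEmbeddingStore`'s `serializeToFile`/`fromFile` helpers and a hypothetical file location):
```java
Path storeFile = Path.of("./embedding-store.json"); // hypothetical location

InMemoryEmbeddingStore<TextSegment> embeddingStore;
if (Files.exists(storeFile)) {
    // restore the previously persisted store and skip loading/embedding documents
    embeddingStore = InMemoryEmbeddingStore.fromFile(storeFile);
} else {
    embeddingStore = new InMemoryEmbeddingStore<>();
    EmbeddingStoreIngestor.ingest(documents, embeddingStore);
    embeddingStore.serializeToFile(storeFile); // persist for the next run
}
```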
It is easy to customize the ingestion in the above code; just change
```java
EmbeddingStoreIngestor.ingest(documents, embeddingStore);
```
into
```java
EmbeddingStoreIngestor ingestor = EmbeddingStoreIngestor.builder()
        //.documentTransformer(...) // you can optionally transform (clean, enrich, etc.) documents before splitting
        //.documentSplitter(...) // you can optionally specify another splitter
        //.textSegmentTransformer(...) // you can optionally transform (clean, enrich, etc.) segments before embedding
        //.embeddingModel(...) // you can optionally specify another embedding model to use for embedding
        .embeddingStore(embeddingStore)
        .build();

ingestor.ingest(documents);
```
Over time, we can add an auto-eval feature that will find the most
suitable hyperparameters for the given documents (e.g., which embedding
model to use, which splitting method, possibly advanced RAG techniques,
etc.) so that "Easy RAG" can be comparable to "advanced RAG".
Related:
https://github.com/langchain4j/langchain4j-embeddings/pull/16
---------
Co-authored-by: dliubars <dliubars@redhat.com>

        <module>langchain4j-easy-rag</module>

        <!-- model providers -->
        <module>langchain4j-anthropic</module>
        <module>langchain4j-azure-open-ai</module>
        <module>langchain4j-bedrock</module>
        <module>langchain4j-chatglm</module>
        <module>langchain4j-cohere</module>
        <module>langchain4j-dashscope</module>
        <module>langchain4j-hugging-face</module>
        <module>langchain4j-jlama</module>
        <module>langchain4j-jina</module>
        <module>langchain4j-local-ai</module>
        <module>langchain4j-mistral-ai</module>
        <module>langchain4j-nomic</module>
        <module>langchain4j-ollama</module>
        <module>langchain4j-ovh-ai</module>
        <module>langchain4j-open-ai</module>
        <module>langchain4j-qianfan</module>
        <module>langchain4j-github-models</module>
        <module>langchain4j-google-ai-gemini</module>
        <module>langchain4j-vertex-ai</module>
        <module>langchain4j-vertex-ai-gemini</module>
        <module>langchain4j-workers-ai</module>
        <module>langchain4j-zhipu-ai</module>
        <module>langchain4j-voyage-ai</module>

        <!-- embedding stores -->
        <module>langchain4j-azure-ai-search</module>
        <module>langchain4j-azure-cosmos-mongo-vcore</module>
        <module>langchain4j-azure-cosmos-nosql</module>

Cassandra and Astra (dbaas) as VectorStore and ChatMemoryStore (#162)
#### Context
Apache Cassandra is a popular open-source database created back in 2008.
This year, with
[CEP-30](https://cwiki.apache.org/confluence/display/CASSANDRA/CEP-30%3A+Approximate+Nearest+Neighbor%28ANN%29+Vector+Search+via+Storage-Attached+Indexes),
support for vector and similarity searches has been introduced.
Cassandra is very fast at reads and writes and is used as a cache by many
companies, so it is an opportunity to implement the ChatMemoryStore. This
feature is expected in Cassandra 5 at the end of the year, but some
Docker images are already available.
DataStax Astra DB is a distribution of Apache Cassandra available as SaaS,
providing a free tier (free forever) of 80 million queries/month
([registration](https://astra.datastax.com)). Its vector capability is
production ready.
#### Data Modelling
With the proper data model in Cassandra, we can perform similarity
search, keyword search, and metadata search.
```sql
CREATE TABLE sample_vector_table (
    row_id text PRIMARY KEY,
    attributes_blob text,
    body_blob text,
    metadata_s map<text, text>,
    vector vector<float, 1536>
);
```
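For context, a minimal sketch of how such a table surfaces through langchain4j's `EmbeddingStore` interface (assuming the `add`/`findRelevant` methods and an already-configured `embeddingModel` and `embeddingStore`; illustrative only, not this store's actual API surface):
```java
TextSegment segment = TextSegment.from("Cassandra is a distributed database.");
Embedding embedding = embeddingModel.embed(segment).content();
embeddingStore.add(embedding, segment); // writes a row into the vector table

Embedding queryEmbedding = embeddingModel.embed("What is Cassandra?").content();
List<EmbeddingMatch<TextSegment>> matches = embeddingStore.findRelevant(queryEmbedding, 3); // ANN similarity search
matches.forEach(match -> System.out.println(match.score() + " " + match.embedded().text()));
```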
#### Implementation Thoughts
- The **configuration** needed to connect to Astra and to Cassandra is not
exactly the same, so two different classes with associated builders are
provided:
[Astra](https://github.com/clun/langchain4j/blob/main/langchain4j/src/main/java/dev/langchain4j/store/embedding/cassandra/AstraDbEmbeddingConfiguration.java)
and [OSS
Cassandra](https://github.com/clun/langchain4j/blob/main/langchain4j/src/main/java/dev/langchain4j/store/embedding/cassandra/CassandraEmbeddingConfiguration.java).
A couple of fields are shared, but creating a superclass to inherit from
would have led to the use of Lombok `@SuperBuilder`, and the Javadoc
tooling was not able to figure out what to do with it.
- Instead of passing a large number of arguments like the other stores do,
I prefer to wrap them in a bean (see the sketch after this list). With this
trick you can add or remove attributes and make them optional or mandatory
at will. If you need to add a new attribute to the configuration, you do
not have to change the implementation of `XXXStore` and `XXXStoreImpl`.
- I created an
[AbstractEmbeddingStore<T>](https://github.com/clun/langchain4j/blob/main/langchain4j/src/main/java/dev/langchain4j/store/embedding/AbstractEmbeddingStore.java)
that could very well become the superclass for any store. It delegates
each call to the real concrete implementation (_delegate pattern_), and
some default implementations can be provided:
```java
/**
 * Adds a list of embeddings to the store.
 *
 * @param embeddings list of embeddings (each holding a vector)
 * @return list of ids
 */
@Override
public List<String> addAll(List<Embedding> embeddings) {
    Objects.requireNonNull(embeddings, "embeddings must not be null");
    return embeddings.stream().map(this::add).collect(Collectors.toList());
}
```
The only method to implement at the store level is:
```java
/**
 * Initializes the concrete implementation.
 *
 * @return the concrete implementation of the store
 */
protected abstract EmbeddingStore<T> loadImplementation()
        throws ClassNotFoundException, NoSuchMethodException, InstantiationException,
        IllegalAccessException, InvocationTargetException;
```
- [CassandraEmbeddingStore](https://github.com/clun/langchain4j/blob/main/langchain4j/src/main/java/dev/langchain4j/store/embedding/cassandra/CassandraEmbeddingStore.java#L30)
provides two constructors, so one can override the implementation class
if needed (extension point).
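A purely illustrative sketch of the configuration-bean idea described in the list above; the builder field names and the store constructor here are assumptions, not the actual API:
```java
// hypothetical field names; the real builder lives in CassandraEmbeddingConfiguration
CassandraEmbeddingConfiguration config = CassandraEmbeddingConfiguration.builder()
        .contactPoint("127.0.0.1")
        .port(9042)
        .keyspace("langchain4j")
        .table("sample_vector_table")
        .vectorDimension(1536)
        .build();

// adding or removing attributes in the bean does not change this call site
CassandraEmbeddingStore store = new CassandraEmbeddingStore(config);
```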
#### Tests
- Test classes are provided, including some long-form examples based on
classes found in `langchain4j-examples`, but the tests are disabled.
- To start a local Cassandra, use Docker and the
[docker-compose](https://github.com/clun/langchain4j/blob/main/langchain4j-cassandra/src/test/resources/docker-compose.yml) file:
```console
docker compose up -d
```
- To run the tests with Astra, sign in with your GitHub account and create a
token (API key) with the role `Organization Administrator`, following this
[procedure](https://awesome-astra.github.io/docs/pages/astra/create-token/#c-procedure)
<img width="926" alt="Screenshot 2023-09-06 at 18 14 12"
src="https://github.com/langchain4j/langchain4j/assets/726536/dfd2d9e5-09c9-4504-bfaa-31cfd87704a1">
- Pick the full value of the `token` from the JSON
<img width="713" alt="Screenshot 2023-09-06 at 18 15 53"
src="https://github.com/langchain4j/langchain4j/assets/726536/1be56234-dd98-4f59-af71-03df42ed6997">
- Create the environment variable `ASTRA_DB_APPLICATION_TOKEN`:
```console
export ASTRA_DB_APPLICATION_TOKEN=AstraCS:....<your_token>
```

        <module>langchain4j-cassandra</module>
        <module>langchain4j-chroma</module>
        <module>langchain4j-couchbase</module>
        <module>langchain4j-elasticsearch</module>
        <module>langchain4j-infinispan</module>
        <module>langchain4j-milvus</module>
        <module>langchain4j-mongodb-atlas</module>
        <module>langchain4j-neo4j</module>
        <module>langchain4j-oracle</module>
        <module>langchain4j-opensearch</module>
        <module>langchain4j-pgvector</module>
        <module>langchain4j-pinecone</module>
        <module>langchain4j-qdrant</module>
        <module>langchain4j-redis</module>
        <module>langchain4j-tablestore</module>
        <module>langchain4j-vearch</module>
        <module>langchain4j-vespa</module>
        <module>langchain4j-weaviate</module>

        <!-- document loaders -->
        <module>document-loaders/langchain4j-document-loader-amazon-s3</module>
        <module>document-loaders/langchain4j-document-loader-azure-storage-blob</module>
        <module>document-loaders/langchain4j-document-loader-github</module>
        <module>document-loaders/langchain4j-document-loader-selenium</module>
        <module>document-loaders/langchain4j-document-loader-tencent-cos</module>
        <module>document-loaders/langchain4j-document-loader-google-cloud-storage</module>

        <!-- document parsers -->
        <module>document-parsers/langchain4j-document-parser-apache-pdfbox</module>
        <module>document-parsers/langchain4j-document-parser-apache-poi</module>
        <module>document-parsers/langchain4j-document-parser-apache-tika</module>

        <!-- document transformers -->
        <module>document-transformers/langchain4j-document-transformer-jsoup</module>

        <!-- code execution engines -->
        <module>code-execution-engines/langchain4j-code-execution-engine-graalvm-polyglot</module>
        <module>code-execution-engines/langchain4j-code-execution-engine-judge0</module>

        <!-- web search engines -->
        <module>web-search-engines/langchain4j-web-search-engine-google-custom</module>
        <module>web-search-engines/langchain4j-web-search-engine-tavily</module>
        <module>web-search-engines/langchain4j-web-search-engine-searchapi</module>

        <!-- embedding store filter parsers -->
        <module>embedding-store-filter-parsers/langchain4j-embedding-store-filter-parser-sql</module>

        <!-- experimental -->
        <module>experimental/langchain4j-experimental-sql</module>

        <module>langchain4j-onnx-scoring</module>

    </modules>

    <build>
        <extensions>
            <extension>
                <groupId>com.vackosar.gitflowincrementalbuilder</groupId>
                <artifactId>gitflow-incremental-builder</artifactId>
                <version>3.15.0</version>
            </extension>
        </extensions>
        <plugins>
            <plugin>
                <artifactId>maven-deploy-plugin</artifactId>
                <configuration>
                    <!-- do not deploy langchain4j-aggregator's pom.xml (this file) -->
                    <skip>true</skip>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-javadoc-plugin</artifactId>
                <version>3.5.0</version>
                <executions>
                    <execution>
                        <id>attach-javadocs</id>
                        <goals>
                            <goal>jar</goal>
                        </goals>
                    </execution>
                    <execution>
                        <id>aggregate</id>
                        <goals>
                            <goal>aggregate</goal>
                        </goals>
                        <phase>site</phase>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>

    <reporting>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-javadoc-plugin</artifactId>
                <version>3.5.0</version>
                <reportSets>
                    <reportSet>
                        <id>aggregate</id>
                        <inherited>false</inherited>
                        <reports>
                            <report>aggregate</report>
                        </reports>
                    </reportSet>
                    <reportSet>
                        <id>default</id>
                        <reports>
                            <report>javadoc</report>
                        </reports>
                    </reportSet>
                </reportSets>
            </plugin>
        </plugins>
    </reporting>

</project>