Commit Graph

9 Commits

Author SHA1 Message Date
LangChain4j a1b733d96d bumped version to 0.32.0-SNAPSHOT 2024-05-24 16:25:13 +02:00
LangChain4j d9cb1e9b81 Release 0.31.0 (#1151) 2024-05-23 17:40:52 +02:00
LangChain4j 66c338c135 changed version to 0.31.0-SNAPSHOT 2024-04-29 11:21:00 +02:00
LangChain4j 1a340893ec Release 0.30.0 (#945) 2024-04-16 18:21:01 +02:00
LangChain4j 1827302342 release snapshots in 2 steps 2024-04-08 19:31:24 +02:00
LangChain4j d1d9b45adc bumped to 0.30.0-SNAPSHOT 2024-04-08 17:36:52 +02:00
LangChain4j 45b58ac993 released 0.29.1 (#857) 2024-03-28 16:42:45 +01:00
LangChain4j d1e3cc1693 Release 0.29.0 (#830) 2024-03-26 11:54:43 +01:00
LangChain4j 2f425da9f7 POC: Easy RAG (#686)

Implementing RAG applications is hard, especially for those who are just
getting started exploring LLMs and RAG.

This PR introduces an "Easy RAG" feature that should help developers
get started with RAG as easily as possible.

With it, there is no need to learn about
chunking/splitting/segmentation, embeddings, embedding models, vector
databases, retrieval techniques and other RAG-related concepts.

This is similar to how one can simply upload one or multiple files into
the [OpenAI Assistants
API](https://platform.openai.com/docs/assistants/overview) and the LLM
will automagically know about their contents when answering questions.

Easy RAG uses a local embedding model running on your CPU (GPU support
can be added later).
Your files are ingested into an in-memory embedding store.

Please note that "Easy RAG" will not replace manual RAG setups and
especially [advanced RAG
techniques](https://github.com/langchain4j/langchain4j/pull/538), but it
provides an easier way to get started with RAG.
The quality of an "Easy RAG" setup should be sufficient for demos,
proofs of concept and for getting started.


To use "Easy RAG", simply import `langchain4j-easy-rag` dependency that
includes everything needed to do RAG:
- Apache Tika document loader (to parse all document types
automatically)
- Quantized [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) in-process embedding model, which has an impressive (for its size) 51.68 [score](https://huggingface.co/spaces/mteb/leaderboard) for retrieval


Here is the proposed API:

```java
List<Document> documents = FileSystemDocumentLoader.loadDocuments(directoryPath); // one can also load documents recursively and filter with glob/regex

EmbeddingStore<TextSegment> embeddingStore = new InMemoryEmbeddingStore<>(); // we will use an in-memory embedding store for simplicity

EmbeddingStoreIngestor.ingest(documents, embeddingStore);

Assistant assistant = AiServices.builder(Assistant.class)
                .chatLanguageModel(model)
                .contentRetriever(EmbeddingStoreContentRetriever.from(embeddingStore))
                .build();

String answer = assistant.chat("Who is Charlie?"); // Charlie is a carrot...
```
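
The snippet above assumes an `Assistant` interface and a configured chat
model, neither of which is shown. A minimal sketch of what they could look
like (the `OpenAiChatModel` setup here is purely illustrative; any
`ChatLanguageModel` implementation can be plugged in):

```java
// Hypothetical AI Service interface; any interface with a single chat method will do.
interface Assistant {

    String chat(String userMessage);
}

// Illustrative model setup; not part of this PR.
ChatLanguageModel model = OpenAiChatModel.withApiKey(System.getenv("OPENAI_API_KEY"));
```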

`FileSystemDocumentLoader` in the above code loads documents using a
`DocumentParser` available on the classpath via SPI, in this case the
`ApacheTikaDocumentParser` imported with the `langchain4j-easy-rag`
dependency.
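
The comment in the snippet above mentions recursive loading and glob/regex
filtering; a sketch of that, assuming the loader's `PathMatcher` overloads
(the `glob:**.pdf` pattern is illustrative):

```java
// Walk the directory tree recursively, keeping only PDF files.
PathMatcher onlyPdfs = FileSystems.getDefault().getPathMatcher("glob:**.pdf");
List<Document> documents = FileSystemDocumentLoader.loadDocumentsRecursively(directoryPath, onlyPdfs);
```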

The `EmbeddingStoreIngestor` in the above code:
- splits documents into smaller text segments using a `DocumentSplitter`
loaded via SPI from the `langchain4j-easy-rag` dependency. Currently, it
uses `DocumentSplitters.recursive(300, 30, new HuggingFaceTokenizer())`
- embeds text segments using the quantized
[bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) model
(`BgeSmallEnV15QuantizedEmbeddingModel`) loaded via SPI from the
`langchain4j-easy-rag` dependency
- stores text segments and their embeddings into the specified embedding
store
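
Put together, a rough sketch of what
`EmbeddingStoreIngestor.ingest(documents, embeddingStore)` does under the
hood, assuming the SPI-provided components listed above:

```java
// 1. Split documents into segments of at most 300 tokens, with a 30-token overlap.
DocumentSplitter splitter = DocumentSplitters.recursive(300, 30, new HuggingFaceTokenizer());
List<TextSegment> segments = splitter.splitAll(documents);

// 2. Embed each segment with the bundled in-process embedding model.
EmbeddingModel embeddingModel = new BgeSmallEnV15QuantizedEmbeddingModel();
List<Embedding> embeddings = embeddingModel.embedAll(segments).content();

// 3. Store the segments together with their embeddings.
embeddingStore.addAll(embeddings, segments);
```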

When using `InMemoryEmbeddingStore`, one can serialize/persist it into a
JSON string or into a file.
This way one can skip loading documents and embedding them on each
application run.
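
A sketch of this round trip, assuming `InMemoryEmbeddingStore`'s
`serializeToFile`/`fromFile` methods (the file name is illustrative):

```java
// First run: ingest documents, then persist the store to disk.
InMemoryEmbeddingStore<TextSegment> embeddingStore = new InMemoryEmbeddingStore<>();
EmbeddingStoreIngestor.ingest(documents, embeddingStore);
embeddingStore.serializeToFile(Path.of("embeddings.json"));

// Subsequent runs: restore the store instead of loading and embedding again.
InMemoryEmbeddingStore<TextSegment> restored = InMemoryEmbeddingStore.fromFile(Path.of("embeddings.json"));
```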

It is easy to customize the ingestion in the above code; just change
```java
EmbeddingStoreIngestor.ingest(documents, embeddingStore);
```
into
```java
EmbeddingStoreIngestor ingestor = EmbeddingStoreIngestor.builder()
                //.documentTransformer(...) // you can optionally transform (clean, enrich, etc) documents before splitting
                //.documentSplitter(...) // you can optionally specify another splitter
                //.textSegmentTransformer(...) // you can optionally transform (clean, enrich, etc) segments before embedding
                //.embeddingModel(...) // you can optionally specify another embedding model to use for embedding
                .embeddingStore(embeddingStore)
                .build();

ingestor.ingest(documents);
```
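
For instance, a sketch of overriding the splitter and the embedding model
(the parameter values and the `AllMiniLmL6V2QuantizedEmbeddingModel` choice
are illustrative only):

```java
EmbeddingStoreIngestor ingestor = EmbeddingStoreIngestor.builder()
                .documentSplitter(DocumentSplitters.recursive(500, 50)) // character-based splitting, illustrative sizes
                .embeddingModel(new AllMiniLmL6V2QuantizedEmbeddingModel()) // any EmbeddingModel can be used
                .embeddingStore(embeddingStore)
                .build();

ingestor.ingest(documents);
```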

Over time, we can add an auto-eval feature that will find the most
suitable hyperparameters for the given documents (e.g. which embedding
model to use, which splitting method, possibly advanced RAG techniques,
etc.) so that "Easy RAG" can be comparable to "advanced RAG".

Related:
https://github.com/langchain4j/langchain4j-embeddings/pull/16

---------

Co-authored-by: dliubars <dliubars@redhat.com>
2024-03-21 17:37:38 +01:00