Commit Graph

47 Commits

Author SHA1 Message Date
LangChain4j a1b733d96d bumped version to 0.32.0-SNAPSHOT 2024-05-24 16:25:13 +02:00
LangChain4j d9cb1e9b81
Release 0.31.0 (#1151) 2024-05-23 17:40:52 +02:00
LangChain4j e2239639a9 Ollama: fix ITs 2024-05-22 13:29:01 +02:00
LangChain4j 0484e594e5 Ollama: log requests and responses (#662) 2024-05-22 13:28:49 +02:00
Hashcon 15b58ad756
Ollama: log requests and responses (#662) 2024-05-22 13:19:09 +02:00
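A minimal sketch of the request/response logging enabled by #662 (the `logRequests`/`logResponses` builder flag names are assumed; endpoint and model are placeholders):

```java
import dev.langchain4j.model.ollama.OllamaChatModel;

public class OllamaLoggingExample {
    public static void main(String[] args) {
        OllamaChatModel model = OllamaChatModel.builder()
                .baseUrl("http://localhost:11434") // assumed local Ollama endpoint
                .modelName("llama3")               // any locally pulled model
                .logRequests(true)                 // log outgoing HTTP requests
                .logResponses(true)                // log incoming HTTP responses
                .build();

        System.out.println(model.generate("Hello!"));
    }
}
```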
ZYinNJU e36ef57d07
Ollama add OkhttpClient inject (#911)
## Context
See #586 

## Change
Add `customHeaders` parameter in `Ollama*Model` builders

## Checklist
Before submitting this PR, please check the following points:
- [ ] I have added unit and integration tests for my change
- [x] All unit and integration tests in the module I have added/changed
are green
- [x] All unit and integration tests in the
[core](https://github.com/langchain4j/langchain4j/tree/main/langchain4j-core)
and
[main](https://github.com/langchain4j/langchain4j/tree/main/langchain4j)
modules are green
- [ ] I have added/updated the
[documentation](https://github.com/langchain4j/langchain4j/tree/main/docs/docs)
- [ ] I have added an example in the [examples
repo](https://github.com/langchain4j/langchain4j-examples) (only for
"big" features)
2024-05-07 16:40:34 +02:00
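A minimal usage sketch of the `customHeaders` parameter added in #911 above (the parameter name is per the PR; the builder shape and header values are placeholders):

```java
import java.util.Map;

import dev.langchain4j.model.ollama.OllamaChatModel;

public class OllamaCustomHeadersExample {
    public static void main(String[] args) {
        OllamaChatModel model = OllamaChatModel.builder()
                .baseUrl("http://localhost:11434")
                .modelName("llama3")
                // e.g. for an Ollama instance behind an authenticating proxy
                .customHeaders(Map.of("Authorization", "Bearer <token>"))
                .build();
    }
}
```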
LangChain4j 66c338c135 changed version to 0.31.0-SNAPSHOT 2024-04-29 11:21:00 +02:00
Hashcon b464633298
fix ollama client response handle (#867)
Thanks to [wangrushuang](https://github.com/wangrushuang), who found this
problem; I provide a way to fix it.
2024-04-18 14:38:48 +02:00
LangChain4j 1a340893ec
Release 0.30.0 (#945) 2024-04-16 18:21:01 +02:00
LangChain4j d1d9b45adc bumped to 0.30.0-SNAPSHOT 2024-04-08 17:36:52 +02:00
LangChain4j 45b58ac993
released 0.29.1 (#857) 2024-03-28 16:42:45 +01:00
LangChain4j a2572c862c
Fix 804 (#856)
<!-- Thank you so much for your contribution! -->

## Context
See https://github.com/langchain4j/langchain4j/issues/804

## Change
- `OpenAiStreamingChatModel`: in case `modelName` is not one of the
known OpenAI models, do not return `TokenUsage` in the `Response`. This
is done for cases when `OpenAiStreamingChatModel` is used to connect to
other OpenAI-API-compatible LLM providers like Ollama and Groq. In such
cases it is better to return no `TokenUsage` than to return a wrong one.
- For all OpenAI models, default `Tokenizer` will now use
"gpt-3.5-turbo" model name instead of the one provided by the user in
the `modelName` parameter. This is done to avoid crashing with "Model
'ft:gpt-3.5-turbo:my-org:custom_suffix:id' is unknown to jtokkit" for
fine-tuned OpenAI models. It should be safe to use "gpt-3.5-turbo" by
default with all current OpenAI models, as they all use the same
cl100k_base encoding.
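A hedged sketch of the Ollama scenario described above (the `/v1` endpoint path and model name are assumptions):

```java
import dev.langchain4j.data.message.AiMessage;
import dev.langchain4j.model.StreamingResponseHandler;
import dev.langchain4j.model.openai.OpenAiStreamingChatModel;
import dev.langchain4j.model.output.Response;

public class OllamaViaOpenAiApiExample {
    public static void main(String[] args) {
        OpenAiStreamingChatModel model = OpenAiStreamingChatModel.builder()
                .baseUrl("http://localhost:11434/v1") // Ollama's OpenAI-compatible endpoint
                .apiKey("not-used-by-ollama")
                .modelName("llama3")                  // not a known OpenAI model
                .build();

        model.generate("Hello!", new StreamingResponseHandler<AiMessage>() {
            public void onNext(String token) { System.out.print(token); }
            public void onComplete(Response<AiMessage> response) {
                System.out.println(response.tokenUsage()); // null after this fix
            }
            public void onError(Throwable error) { error.printStackTrace(); }
        });
    }
}
```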


## Checklist
Before submitting this PR, please check the following points:
- [X] I have added unit and integration tests for my change
- [X] All unit and integration tests in the module I have added/changed
are green
- [X] All unit and integration tests in the
[core](https://github.com/langchain4j/langchain4j/tree/main/langchain4j-core)
and
[main](https://github.com/langchain4j/langchain4j/tree/main/langchain4j)
modules are green
- [ ] I have added/updated the
[documentation](https://github.com/langchain4j/langchain4j/tree/main/docs/docs)
- [ ] I have added an example in the [examples
repo](https://github.com/langchain4j/langchain4j-examples) (only for
"big" features)
- [ ] I have added my new module in the
[BOM](https://github.com/langchain4j/langchain4j/blob/main/langchain4j-bom/pom.xml)
(only when a new module is added)
2024-03-28 14:06:28 +01:00
LangChain4j d1e3cc1693
Release 0.29.0 (#830) 2024-03-26 11:54:43 +01:00
LangChain4j fbced4e70e Ollama: test that OpenAI API (OpenAiChatModel) works 2024-03-22 11:46:28 +01:00
LangChain4j da816fd491
Fix #756: Allow blank content in AiMessage, propagate failures into streaming handler (Ollama) (#782)
<!-- Thank you so much for your contribution! -->

## Context
See https://github.com/langchain4j/langchain4j/issues/756

## Change
- Allow creating `AiMessage` with blank ("", " ") content. `null` is
still prohibited.
- In `OllamaStreamingChat/LanguageModel`: propagate failures from
`onResponse()` method into `StreamingResponseHandler.onError()` method
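A small sketch of the relaxed `AiMessage` contract described above:

```java
import dev.langchain4j.data.message.AiMessage;

public class BlankAiMessageExample {
    public static void main(String[] args) {
        AiMessage blank = AiMessage.from(" "); // blank content is now accepted
        System.out.println("[" + blank.text() + "]");
        // AiMessage.from(null) is still rejected, as before
    }
}
```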

## Checklist
Before submitting this PR, please check the following points:
- [X] I have added unit and integration tests for my change
- [X] All unit and integration tests in the module I have added/changed
are green
- [X] All unit and integration tests in the
[core](https://github.com/langchain4j/langchain4j/tree/main/langchain4j-core)
and
[main](https://github.com/langchain4j/langchain4j/tree/main/langchain4j)
modules are green
- [ ] I have added/updated the
[documentation](https://github.com/langchain4j/langchain4j/tree/main/docs/docs)
- [ ] I have added an example in the [examples
repo](https://github.com/langchain4j/langchain4j-examples) (only for
"big" features)
- [ ] I have added my new module in the
[BOM](https://github.com/langchain4j/langchain4j/blob/main/langchain4j-bom/pom.xml)
(only when a new module is added)
2024-03-22 09:39:48 +01:00
Eddú Meléndez Gonzales 3fabe0ed66
Use Testcontainers Ollama module (#702)
Testcontainers 1.19.7 offers an Ollama module. It also configures the
GPU if one is available.
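A hedged sketch using the module (the image tag is an assumption):

```java
import org.testcontainers.ollama.OllamaContainer;
import org.testcontainers.utility.DockerImageName;

public class OllamaModuleExample {
    public static void main(String[] args) {
        try (OllamaContainer ollama =
                     new OllamaContainer(DockerImageName.parse("ollama/ollama:0.1.26"))) {
            ollama.start();
            System.out.println(ollama.getEndpoint()); // http://host:<mapped-port>
        }
    }
}
```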
2024-03-19 13:05:24 +01:00
LangChain4j 91db3d354a bumped to 0.29.0-SNAPSHOT 2024-03-14 13:31:28 +01:00
LangChain4j 90fe3040b9
released 0.28.0 (#735) 2024-03-11 20:08:55 +01:00
mmanrai 77364028a6
Add option to specify num_ctx parameter in for ollama. Fixes #682 (#683) 2024-03-08 14:21:12 +01:00
LangChain4j 197b4af9d1 bumped version to 0.28.0-SNAPSHOT 2024-02-09 15:11:52 +01:00
LangChain4j c1462c087f
release 0.27.1 (#621) 2024-02-09 15:00:42 +01:00
LangChain4j ad2fd90f32 bumped version to 0.28.0-SNAPSHOT 2024-02-09 08:12:28 +01:00
LangChain4j a22d297104
Release 0.27.0 (#615) 2024-02-09 08:00:34 +01:00
LangChain4j 13c7ee7f1c Revert ServiceHelper refactoring (#485) 2024-02-05 14:39:34 +01:00
Antonio Goncalves baac759766
Beautifying Maven output (#572)
Looking at the Maven output I thought it could benefit from a little
renaming. I just changed the `<name>` in the `pom.xml`, nothing more.
The output is like this at the moment:

![Screenshot 2024-01-30 at 16 26
53](https://github.com/langchain4j/langchain4j/assets/729277/940886d1-565e-416f-a58e-91f609fc0c00)

It could look like this if this PR is merged:

![Screenshot 2024-01-30 at 16 42
38](https://github.com/langchain4j/langchain4j/assets/729277/f8787af2-b869-4e95-90bd-72bce5622737)

Just a matter of personal taste. Let me know if you like it or not (or
want to change it). If not, just discard it, it's fine ;o)
2024-01-30 16:54:54 +01:00
LangChain4j fca8ca48f7 bump version to 0.27.0-SNAPSHOT 2024-01-30 16:18:40 +01:00
LangChain4j 3958e01738
release 0.26.1 (#570) 2024-01-30 16:11:21 +01:00
LangChain4j 469699b944 bump version to 0.27.0-SNAPSHOT 2024-01-30 08:07:45 +01:00
LangChain4j 06df1181ce Ollama*Test -> Ollama*IT 2024-01-30 07:31:43 +01:00
LangChain4j a8ad9e48d9
Automate release (#562) 2024-01-30 07:20:20 +01:00
LangChain4j 53c71401dd removed unnecessary class 2024-01-29 07:39:49 +01:00
LangChain4j 3ef7ba1e5f code formatting 2024-01-29 07:39:39 +01:00
bidek 8207d75767
Public Ollama Client + 2 additional methods (#533)
Public Ollama Client

- a method to list models
- a method to get model details


### Motivation
In my research project, I'm using Langchain4j, as anyone should :)
From my research, it seems that this client code is in sync with the
Ollama API, and it is the easiest and most maintainable code. So, in my
project, I use Langchain4j, and it's backed by the Ollama provider. In
my use case, I need to be able to list models and, in the future, even
create one. Is it possible to make the OllamaClient code public?

I wrote some thoughts about OpenAPI in Ollama in this issue.
https://github.com/jmorganca/ollama/issues/716#issuecomment-1904415711
So, if Ollama developers consider adding an OpenAPI endpoint, I will be
the first to make the OllamaClient package-private again.
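A hypothetical sketch of what the now-public client enables (all method and builder names here are illustrative only; the PR states just "list models" and "get model details"):

```java
import dev.langchain4j.model.ollama.OllamaClient;

public class OllamaClientSketch {
    public static void main(String[] args) {
        // Hypothetical method names; consult the actual OllamaClient API
        OllamaClient client = OllamaClient.builder()
                .baseUrl("http://localhost:11434")
                .build();
        System.out.println(client.listModels());
    }
}
```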
2024-01-29 07:36:08 +01:00
Eric Deandrea c7c4ee8eab
Allowing model builders to be extended and to be provided by `ServiceLoader`s (#531)
This is a small prototype based on discussions originating from
https://github.com/ai-for-java/openai4j/pull/13

The approach I took here is to allow for decorating the models/builders
with additional functionality without having to extend model classes or
builders. I did it for a single model in this prototype - the
`OpenAiChatModel`, but this pattern could be applied to all of the other
models across Langchain4J.

That doesn't mean you couldn't extend the model classes if you wanted to
use inheritance. I just try to avoid it and use composition instead.

I also added a test which shows how it would be used. Downstream
libraries (like Spring Boot or Quarkus) could use this mechanism to
extend/enhance with their own capabilities which aren't necessarily part
of the model.

Let me know what you think @geoand / @langchain4j !

Happy to continue conversation and see where we can bring this!
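An illustrative `ServiceLoader` sketch of the pattern (the SPI type is hypothetical, not the actual LangChain4j interface):

```java
import java.util.ServiceLoader;

public class BuilderFactoryLoader {
    // Hypothetical SPI: downstream libraries (Spring Boot, Quarkus, ...) register
    // implementations in META-INF/services to provide enhanced builders
    public interface ChatModelBuilderFactory {
        Object createBuilder();
    }

    public static Object loadBuilder() {
        return ServiceLoader.load(ChatModelBuilderFactory.class)
                .findFirst()
                .map(ChatModelBuilderFactory::createBuilder)
                .orElseGet(Object::new); // fall back to the default builder
    }
}
```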
2024-01-25 09:55:02 +01:00
bidek f8f57bccb6
Ollama input images support (#462)
Added Ollama Input Images support.
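A hedged usage sketch (model name and base64 payload are placeholders; `ImageContent` is the standard LangChain4j type for image inputs):

```java
import dev.langchain4j.data.message.ImageContent;
import dev.langchain4j.data.message.TextContent;
import dev.langchain4j.data.message.UserMessage;
import dev.langchain4j.model.ollama.OllamaChatModel;

public class OllamaImageInputExample {
    public static void main(String[] args) {
        OllamaChatModel model = OllamaChatModel.builder()
                .baseUrl("http://localhost:11434")
                .modelName("llava") // a multimodal model pulled locally
                .build();

        UserMessage message = UserMessage.from(
                TextContent.from("What is in this picture?"),
                ImageContent.from("iVBORw0KGgo...", "image/png") // base64 data, truncated
        );
        System.out.println(model.generate(message).content().text());
    }
}
```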
2024-01-09 10:58:50 +01:00
LangChain4j 50f32ba198 Ollama: fix tests 2024-01-03 12:10:43 +01:00
LangChain4j 7e5e82b7b2 updated to 0.26.0-SNAPSHOT 2023-12-22 18:08:19 +01:00
LangChain4j 2a5308b794 released 0.25.0 2023-12-22 18:02:04 +01:00
LangChain4j 7181633dcc
Ollama: add OllamaStreamingChatModel, "format" (json) and other parameters (#373)
- added `OllamaStreamingChatModel`
- added `format` parameter to all models, now can get valid JSON with
`format="json"`
- added `top_k`, `top_p`, `repeat_penalty`, `seed`, `num_predict`,
`stop` parameters to all models
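A minimal sketch of the new streaming model with JSON output (endpoint and model name assumed; `format("json")` is per this commit):

```java
import dev.langchain4j.data.message.AiMessage;
import dev.langchain4j.model.StreamingResponseHandler;
import dev.langchain4j.model.ollama.OllamaStreamingChatModel;
import dev.langchain4j.model.output.Response;

public class OllamaJsonStreamingExample {
    public static void main(String[] args) {
        OllamaStreamingChatModel model = OllamaStreamingChatModel.builder()
                .baseUrl("http://localhost:11434")
                .modelName("llama3")
                .format("json") // forces the model to emit valid JSON
                .build();

        model.generate("List three colors as a JSON array", new StreamingResponseHandler<AiMessage>() {
            public void onNext(String token) { System.out.print(token); }
            public void onComplete(Response<AiMessage> response) { System.out.println(); }
            public void onError(Throwable error) { error.printStackTrace(); }
        });
    }
}
```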
2023-12-21 12:14:11 +01:00
LangChain4j e1dddb33a2
bumped version to 0.25.0-SNAPSHOT (#369) 2023-12-19 13:03:48 +01:00
Fintan MacMahon 38049a197b
feat: add OllamaChatModel and its corresponding integration test (#323)
A new implementation of ChatLanguageModel, OllamaChatModel, is added to
handle interactions with the Ollama AI and has an associated integration
test. This includes necessary configurations and methods for message
generation. This increases the project's modularity and provides a more
convenient and encapsulated way of interfacing with the Ollama AI.
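Basic usage, as a sketch (endpoint and model name assumed):

```java
import dev.langchain4j.model.ollama.OllamaChatModel;

public class OllamaChatModelExample {
    public static void main(String[] args) {
        OllamaChatModel model = OllamaChatModel.builder()
                .baseUrl("http://localhost:11434")
                .modelName("orca-mini") // the model used by the integration test below
                .build();
        System.out.println(model.generate("Why is the sky blue?"));
    }
}
```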
2023-12-18 18:11:32 +01:00
Eddú Meléndez Gonzales 00db7557ce
Use Testcontainers in Ollama IT (#315)
Currently, integration tests in the Ollama module are disabled because
they need a running Ollama instance in order to execute.
Testcontainers provides this infrastructure, not only by running
the ollama container but also by automating the model-pull step.

This commit uses the Singleton Container approach to reuse a single
instance across multiple ITs. Also, the pull step is only executed when
the image is `ollama/ollama`.

The behavior is as follows:
1st execution:
1. Pull `ollama/ollama` image
2. Start the container based on `ollama/ollama` image
3. Download the `orca-mini` model
4. Create an image based on the current state (with the model in it)
5. Declare the container ready to use
6. Run test

Next executions:
1. Look for the local image created in the 1st execution
2. Start the container based on the local image
3. Declare the container ready to use
4. Run test

The first execution is expected to take longer because of the model (3 GB).
Subsequent executions are much faster.
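A hedged sketch of the singleton-container approach (the commit-local-image step is omitted; class and field names are illustrative):

```java
import org.testcontainers.containers.GenericContainer;
import org.testcontainers.utility.DockerImageName;

abstract class AbstractOllamaIT {
    // Started once and shared by all IT subclasses; cleaned up after the run
    static final GenericContainer<?> OLLAMA =
            new GenericContainer<>(DockerImageName.parse("ollama/ollama"))
                    .withExposedPorts(11434);

    static {
        OLLAMA.start();
    }

    static String baseUrl() {
        return "http://" + OLLAMA.getHost() + ":" + OLLAMA.getMappedPort(11434);
    }
}
```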
2023-12-05 10:31:02 +01:00
deep-learning-dynamo 16f60dbef9 reducing duplication of *EmbeddingStoreIT 2023-11-18 16:23:29 +01:00
deep-learning-dynamo 21dfc8b317 released 0.24.0 2023-11-12 18:58:31 +01:00
deep-learning-dynamo 4f7b574d9c disabling ollama integration tests 2023-11-10 13:48:10 +01:00
deep-learning-dynamo e4d37614d3 disabling ollama integration tests 2023-11-10 13:47:58 +01:00
ZYinNJU 677cf26bca
Ollama integration (#249)
@langchain4j Hi! I've made some progress integrating with Ollama, see
#244. I'm using `retrofit` to define the REST API from the [Ollama
doc](https://github.com/jmorganca/ollama/blob/main/docs/api.md#generate-a-completion).

Local tests need a local deployment of `Ollama`. Looking forward to
your code review!

---------

Co-authored-by: Heezer <33568148+Heezer@users.noreply.github.com>
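A hedged sketch of a retrofit-defined endpoint for the linked "generate a completion" API (the DTOs are placeholders):

```java
import retrofit2.Call;
import retrofit2.http.Body;
import retrofit2.http.POST;

// Placeholder DTOs; the real ones mirror the fields in the Ollama API doc
class CompletionRequest { String model; String prompt; }
class CompletionResponse { String response; }

interface OllamaApi {
    @POST("api/generate")
    Call<CompletionResponse> completion(@Body CompletionRequest request);
}
```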
2023-11-07 21:39:40 +01:00