## Context
I noticed that for more complex POJOs, the JSON returned by the LLM
often contained a trailing comma before the closing bracket, making the
JSON invalid and causing a parsing error. I traced it back to the
proposed jsonStructure that is part of the SystemMessage, which sent a
wrong format to the LLM.
## Change
Removed the trailing comma from the jsonStructure and updated the unit
tests.
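For illustration, the problematic structure looked like the following hypothetical example (the field names are made up); the comma after the last property makes the JSON invalid:

```json
{
  "name": "(type: string)",
  "age": "(type: integer)",
}
```

With the fix, no comma is emitted after the last property, producing valid JSON.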
---------
Co-authored-by: Lize
Fix baseUrl misconfiguration in ZhipuAiClient builder
## Context
This commit addresses an issue where the baseUrl in the builder of
`ZhipuAiClient` was not properly configured in the
langchain4j-zhipu-ai module. This misconfiguration could lead to
confusion or errors when using the client directly. Correcting the
baseUrl ensures consistent and accurate behavior of the client.
## Change
Changed the `baseUrl` value in
`dev.langchain4j.model.zhipu.ZhipuAiClient.Builder` from
`https://aip.baidubce.com/` to `https://open.bigmodel.cn/`.
## Checklist
Before submitting this PR, please check the following points:
- [ ] I have added unit and integration tests for my change
- [ ] All unit and integration tests in the module I have added/changed
are green
- [ ] All unit and integration tests in the
[core](https://github.com/langchain4j/langchain4j/tree/main/langchain4j-core)
and
[main](https://github.com/langchain4j/langchain4j/tree/main/langchain4j)
modules are green
- [ ] I have added/updated the
[documentation](https://github.com/langchain4j/langchain4j/tree/main/docs/docs)
- [ ] I have added an example in the [examples
repo](https://github.com/langchain4j/langchain4j-examples) (only for
"big" features)
- [ ] I have added my new module in the
[BOM](https://github.com/langchain4j/langchain4j/blob/main/langchain4j-bom/pom.xml)
(only when a new module is added)
## Context
See https://github.com/langchain4j/langchain4j/issues/804
## Change
- `OpenAiStreamingChatModel`: when `modelName` is not one of the known
OpenAI models, do not return `TokenUsage` in the `Response`. This
covers cases where `OpenAiStreamingChatModel` is used to connect to
other OpenAI-API-compatible LLM providers such as Ollama and Groq; in
those cases it is better to return no `TokenUsage` than a wrong one.
- For all OpenAI models, the default `Tokenizer` will now use the
"gpt-3.5-turbo" model name instead of the one the user provides in the
`modelName` parameter. This avoids crashing with "Model
'ft:gpt-3.5-turbo:my-org:custom_suffix:id' is unknown to jtokkit" for
fine-tuned OpenAI models. It should be safe to use "gpt-3.5-turbo" by
default with all current OpenAI models, as they all use the same
cl100k_base encoding.
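The fallback described above can be sketched as follows. This is an illustrative standalone sketch, not the actual langchain4j implementation; `KNOWN_MODELS` is a made-up subset of the names jtokkit recognizes:

```java
import java.util.Set;

public class TokenizerFallback {

    // Illustrative subset of model names known to the jtokkit-based tokenizer.
    static final Set<String> KNOWN_MODELS = Set.of("gpt-3.5-turbo", "gpt-4");

    // If the user-supplied model name is unknown (e.g. a fine-tuned
    // "ft:gpt-3.5-turbo:..." id), fall back to "gpt-3.5-turbo": all current
    // OpenAI chat models share the cl100k_base encoding, so token counts match.
    static String tokenizerModelName(String modelName) {
        return KNOWN_MODELS.contains(modelName) ? modelName : "gpt-3.5-turbo";
    }

    public static void main(String[] args) {
        System.out.println(tokenizerModelName("gpt-4"));
        System.out.println(tokenizerModelName("ft:gpt-3.5-turbo:my-org:custom_suffix:id"));
    }
}
```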
## Checklist
Before submitting this PR, please check the following points:
- [X] I have added unit and integration tests for my change
- [X] All unit and integration tests in the module I have added/changed
are green
- [X] All unit and integration tests in the
[core](https://github.com/langchain4j/langchain4j/tree/main/langchain4j-core)
and
[main](https://github.com/langchain4j/langchain4j/tree/main/langchain4j)
modules are green
- [ ] I have added/updated the
[documentation](https://github.com/langchain4j/langchain4j/tree/main/docs/docs)
- [ ] I have added an example in the [examples
repo](https://github.com/langchain4j/langchain4j-examples) (only for
"big" features)
- [ ] I have added my new module in the
[BOM](https://github.com/langchain4j/langchain4j/blob/main/langchain4j-bom/pom.xml)
(only when a new module is added)
## Context
This is very similar to what is done for Mistral (#744) and OpenAI. It
will directly enable quarkiverse/quarkus-langchain4j#413.
## Context
Currently, an AI Service can use tools only if a `ChatMemory` or
`ChatMemoryProvider` is configured.
This is probably an unnecessarily strict requirement. In practice,
tools can be used without memory that spans calls to the AI Service,
using only a "temporary memory" that spans a single call to the AI
Service (which can include multiple interactions with the LLM due to
tool executions).
## Change
Allow using AI Services with tools but without memory. If no memory is
configured, a "temporary memory" is used for each AI Service invocation
to store all messages created or produced within the scope of that
invocation.
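A sketch of the resulting usage, assuming the current `AiServices` builder API (`model` stands for any configured `ChatLanguageModel`, and `Calculator` is a made-up tool class):

```java
import dev.langchain4j.agent.tool.Tool;
import dev.langchain4j.service.AiServices;

interface Assistant {
    String chat(String userMessage);
}

class Calculator {
    @Tool("Adds two numbers")
    double add(double a, double b) {
        return a + b;
    }
}

// No chatMemory/chatMemoryProvider configured: each invocation now gets a
// "temporary memory" covering only that call, including any tool-execution
// round-trips with the LLM.
Assistant assistant = AiServices.builder(Assistant.class)
        .chatLanguageModel(model)
        .tools(new Calculator())
        .build();
```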
## Checklist
Before submitting this PR, please check the following points:
- [X] I have added unit and integration tests for my change
- [X] All unit and integration tests in the module I have added/changed
are green
- [X] All unit and integration tests in the
[core](https://github.com/langchain4j/langchain4j/tree/main/langchain4j-core)
and
[main](https://github.com/langchain4j/langchain4j/tree/main/langchain4j)
modules are green
- [ ] I have added/updated the
[documentation](https://github.com/langchain4j/langchain4j/tree/main/docs/docs)
- [ ] I have added an example in the [examples
repo](https://github.com/langchain4j/langchain4j-examples) (only for
"big" features)
- [ ] I have added my new module in the
[BOM](https://github.com/langchain4j/langchain4j/blob/main/langchain4j-bom/pom.xml)
(only when a new module is added)
Currently, langchain4j doesn't support streaming for Bedrock Anthropic
models; this PR addresses that by adding streaming support for
Anthropic v2 and v2:1.
The new tests are disabled because they require AWS credentials, but
they pass when enabled.
This PR addresses issue
https://github.com/langchain4j/langchain4j/issues/734:
- Added functionality to pass in strings and enums as model IDs.
- Added Llama and Mistral AI models.
- Added the Anthropic Messages API model.
For Issue #685
## Summary by CodeRabbit
- **New Features**
- Introduced a `Neo4jGraph` class for enhanced interaction with Neo4j
databases, including read/write operations and schema management.
- Added a `Neo4jContentRetriever` for generating and executing Cypher
queries from user questions, improving content retrieval from Neo4j
databases.
- **Tests**
- Implemented tests for Neo4j database interactions and content
retrieval functionalities, ensuring reliability and performance.
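A hypothetical usage sketch; the builder and parameter names below are assumptions modeled on other langchain4j modules, not verified against the actual API:

```java
import org.neo4j.driver.AuthTokens;
import org.neo4j.driver.GraphDatabase;

// Hypothetical sketch: wrap a Neo4j driver, then let an LLM translate natural
// language questions into Cypher queries executed against the graph.
Neo4jGraph graph = Neo4jGraph.builder()
        .driver(GraphDatabase.driver("bolt://localhost:7687",
                AuthTokens.basic("neo4j", "password")))
        .build();

Neo4jContentRetriever retriever = Neo4jContentRetriever.builder()
        .graph(graph)
        .chatLanguageModel(model) // generates the Cypher from the user question
        .build();
```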
This PR adds `function calling` support for the MistralAI `chat` and
`streaming` completion models.
It also adds and deprecates the following Mistral AI models:
- open-mistral-7b
- open-mixtral-8x7b
- mistral-small-latest
- mistral-medium-latest
- mistral-large-latest
- mistral-tiny (deprecated)
- mistral-medium (deprecated)
Other related PRs:
1. MistralAI examples with function calling:
[PR#68](https://github.com/langchain4j/langchain4j-examples/pull/68)
2. MistralAI function calling documentation:
[PR#765](https://github.com/langchain4j/langchain4j/pull/765)
3. Update of the integrations table, adding function calling as a new
column for other integrations:
[PR#766](https://github.com/langchain4j/langchain4j/pull/766)