Merge branch 'open-webui:main' into main

This commit is contained in:
itxProfessor 2025-03-06 08:48:18 +05:00 committed by GitHub
commit fc151dbdd4
GPG Key ID: B5690EEEBB952194
143 changed files with 3918 additions and 1748 deletions


@ -1,80 +0,0 @@
---
name: Bug report
about: Create a report to help us improve
title: ''
labels: ''
assignees: ''
---
# Bug Report
## Important Notes
- **Before submitting a bug report**: Please check the Issues or Discussions section to see if a similar issue or feature request has already been posted. It's likely we're already tracking it! If you're unsure, start a discussion post first. This will help us efficiently focus on improving the project.
- **Collaborate respectfully**: We value a constructive attitude, so please be mindful of your communication. If negativity is part of your approach, our capacity to engage may be limited. We're here to help if you're open to learning and communicating positively. Remember, Open WebUI is a volunteer-driven project managed by a single maintainer and supported by contributors who also have full-time jobs. We appreciate your time and ask that you respect ours.
- **Contributing**: If you encounter an issue, we highly encourage you to submit a pull request or fork the project. We actively work to prevent contributor burnout to maintain the quality and continuity of Open WebUI.
- **Bug reproducibility**: If a bug cannot be reproduced with a `:main` or `:dev` Docker setup, or a pip install with Python 3.11, it may require additional help from the community. In such cases, we will move it to the "issues" Discussions section due to our limited resources. We encourage the community to assist with these issues. Remember, it's not that the issue doesn't exist; we need your help!
Note: Please remove the notes above when submitting your post. Thank you for your understanding and support!
---
## Installation Method
[Describe the method you used to install the project, e.g., git clone, Docker, pip, etc.]
## Environment
- **Open WebUI Version:** [e.g., v0.3.11]
- **Ollama (if applicable):** [e.g., v0.2.0, v0.1.32-rc1]
- **Operating System:** [e.g., Windows 10, macOS Big Sur, Ubuntu 20.04]
- **Browser (if applicable):** [e.g., Chrome 100.0, Firefox 98.0]
**Confirmation:**
- [ ] I have read and followed all the instructions provided in the README.md.
- [ ] I am on the latest version of both Open WebUI and Ollama.
- [ ] I have included the browser console logs.
- [ ] I have included the Docker container logs.
- [ ] I have provided the exact steps to reproduce the bug in the "Steps to Reproduce" section below.
## Expected Behavior:
[Describe what you expected to happen.]
## Actual Behavior:
[Describe what actually happened.]
## Description
**Bug Summary:**
[Provide a brief but clear summary of the bug]
## Reproduction Details
**Steps to Reproduce:**
[Outline the steps to reproduce the bug. Be as detailed as possible.]
## Logs and Screenshots
**Browser Console Logs:**
[Include relevant browser console logs, if applicable]
**Docker Container Logs:**
[Include relevant Docker container logs, if applicable]
**Screenshots/Screen Recordings (if applicable):**
[Attach any relevant screenshots to help illustrate the issue]
## Additional Information
[Include any additional details that may help in understanding and reproducing the issue. This could include specific configurations, error messages, or anything else relevant to the bug.]
## Note
If the bug report is incomplete or does not follow the provided instructions, it may not be addressed. Please ensure that you have followed the steps outlined in the README.md and troubleshooting.md documents, and provide all necessary information for us to reproduce and address the issue. Thank you!

.github/ISSUE_TEMPLATE/bug_report.yaml vendored Normal file

@ -0,0 +1,144 @@
name: Bug Report
description: Create a detailed bug report to help us improve Open WebUI.
title: 'issue: '
labels: ['bug', 'triage']
assignees: []
body:
- type: markdown
attributes:
value: |
# Bug Report
## Important Notes
- **Before submitting a bug report**: Please check the [Issues](https://github.com/open-webui/open-webui/issues) or [Discussions](https://github.com/open-webui/open-webui/discussions) sections to see if a similar issue has already been reported. If unsure, start a discussion first, as this helps us efficiently focus on improving the project.
- **Respectful collaboration**: Open WebUI is a volunteer-driven project with a single maintainer and contributors who also have full-time jobs. Please be constructive and respectful in your communication.
- **Contributing**: If you encounter an issue, consider submitting a pull request or forking the project. We prioritize preventing contributor burnout to maintain Open WebUI's quality.
- **Bug Reproducibility**: If a bug cannot be reproduced using a `:main` or `:dev` Docker setup or with `pip install` on Python 3.11, community assistance may be required. In such cases, we will move it to the "[Issues](https://github.com/open-webui/open-webui/discussions/categories/issues)" Discussions section. Your help is appreciated!
- type: checkboxes
id: issue-check
attributes:
label: Check Existing Issues
description: Confirm that you've checked for existing reports before submitting a new one.
options:
- label: I have searched the existing issues and discussions.
required: true
- type: dropdown
id: installation-method
attributes:
label: Installation Method
description: How did you install Open WebUI?
options:
- Git Clone
- Pip Install
- Docker
- Other
validations:
required: true
- type: input
id: open-webui-version
attributes:
label: Open WebUI Version
description: Specify the version (e.g., v0.3.11)
validations:
required: true
- type: input
id: ollama-version
attributes:
label: Ollama Version (if applicable)
description: Specify the version (e.g., v0.2.0, or v0.1.32-rc1)
validations:
required: false
- type: input
id: operating-system
attributes:
label: Operating System
description: Specify the OS (e.g., Windows 10, macOS Sonoma, Ubuntu 22.04)
validations:
required: true
- type: input
id: browser
attributes:
label: Browser (if applicable)
description: Specify the browser/version (e.g., Chrome 100.0, Firefox 98.0)
validations:
required: false
- type: checkboxes
id: confirmation
attributes:
label: Confirmation
description: Ensure the following prerequisites have been met.
options:
- label: I have read and followed all instructions in `README.md`.
required: true
- label: I am using the latest version of **both** Open WebUI and Ollama.
required: true
- label: I have checked the browser console logs.
required: true
- label: I have checked the Docker container logs.
required: true
- label: I have listed steps to reproduce the bug in detail.
required: true
- type: textarea
id: expected-behavior
attributes:
label: Expected Behavior
description: Describe what should have happened.
validations:
required: true
- type: textarea
id: actual-behavior
attributes:
label: Actual Behavior
description: Describe what actually happened.
validations:
required: true
- type: textarea
id: reproduction-steps
attributes:
label: Steps to Reproduce
description: Provide step-by-step instructions to reproduce the issue.
placeholder: |
1. Go to '...'
2. Click on '...'
3. Scroll down to '...'
4. See the error message '...'
validations:
required: true
- type: textarea
id: logs-screenshots
attributes:
label: Logs & Screenshots
description: Include relevant logs, errors, or screenshots to help diagnose the issue.
placeholder: 'Attach logs from the browser console, Docker logs, or error messages.'
validations:
required: true
- type: textarea
id: additional-info
attributes:
label: Additional Information
description: Provide any extra details that may assist in understanding the issue.
validations:
required: false
- type: markdown
attributes:
value: |
## Note
If the bug report is incomplete or does not follow instructions, it may not be addressed. Ensure that you've followed all the **README.md** and **troubleshooting.md** guidelines, and provide all necessary information for us to reproduce the issue.
Thank you for contributing to Open WebUI!


@ -1,35 +0,0 @@
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: ''
assignees: ''
---
# Feature Request
## Important Notes
- **Before submitting a report**: Please check the Issues or Discussions section to see if a similar issue or feature request has already been posted. It's likely we're already tracking it! If you're unsure, start a discussion post first. This will help us efficiently focus on improving the project.
- **Collaborate respectfully**: We value a constructive attitude, so please be mindful of your communication. If negativity is part of your approach, our capacity to engage may be limited. We're here to help if you're open to learning and communicating positively. Remember, Open WebUI is a volunteer-driven project managed by a single maintainer and supported by contributors who also have full-time jobs. We appreciate your time and ask that you respect ours.
- **Contributing**: If you encounter an issue, we highly encourage you to submit a pull request or fork the project. We actively work to prevent contributor burnout to maintain the quality and continuity of Open WebUI.
- **Bug reproducibility**: If a bug cannot be reproduced with a `:main` or `:dev` Docker setup, or a pip install with Python 3.11, it may require additional help from the community. In such cases, we will move it to the "issues" Discussions section due to our limited resources. We encourage the community to assist with these issues. Remember, it's not that the issue doesn't exist; we need your help!
Note: Please remove the notes above when submitting your post. Thank you for your understanding and support!
---
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.


@ -0,0 +1,64 @@
name: Feature Request
description: Suggest an idea for this project
title: 'feat: '
labels: ['triage']
body:
- type: markdown
attributes:
value: |
## Important Notes
### Before submitting
Please check the [Issues](https://github.com/open-webui/open-webui/issues) or [Discussions](https://github.com/open-webui/open-webui/discussions) to see if a similar request has been posted.
It's likely we're already tracking it! If you're unsure, start a discussion post first.
This will help us efficiently focus on improving the project.
### Collaborate respectfully
We value a **constructive attitude**, so please be mindful of your communication. If negativity is part of your approach, our capacity to engage may be limited. We're here to help if you're **open to learning** and **communicating positively**.
Remember:
- Open WebUI is a **volunteer-driven project**
- It's managed by a **single maintainer**
- It's supported by contributors who also have **full-time jobs**
We appreciate your time and ask that you **respect ours**.
### Contributing
If you encounter an issue, we highly encourage you to submit a pull request or fork the project. We actively work to prevent contributor burnout to maintain the quality and continuity of Open WebUI.
### Bug reproducibility
If a bug cannot be reproduced with a `:main` or `:dev` Docker setup, or a `pip install` with Python 3.11, it may require additional help from the community. In such cases, we will move it to the "[issues](https://github.com/open-webui/open-webui/discussions/categories/issues)" Discussions section due to our limited resources. We encourage the community to assist with these issues. Remember, it's not that the issue doesn't exist; we need your help!
- type: checkboxes
id: existing-issue
attributes:
label: Check Existing Issues
description: Please confirm that you've checked for existing similar requests
options:
- label: I have searched the existing issues and discussions.
required: true
- type: textarea
id: problem-description
attributes:
label: Problem Description
description: Is your feature request related to a problem? Please provide a clear and concise description of what the problem is.
placeholder: "Ex. I'm always frustrated when..."
validations:
required: true
- type: textarea
id: solution-description
attributes:
label: Desired Solution
description: Clearly describe what you want to happen.
validations:
required: true
- type: textarea
id: alternatives-considered
attributes:
label: Alternatives Considered
description: A clear and concise description of any alternative solutions or features you've considered.
- type: textarea
id: additional-context
attributes:
label: Additional Context
description: Add any other context or screenshots about the feature request here.


@ -14,7 +14,7 @@ env:
jobs:
build-main-image:
runs-on: ubuntu-latest
runs-on: ${{ matrix.platform == 'linux/arm64' && 'ubuntu-24.04-arm' || 'ubuntu-latest' }}
permissions:
contents: read
packages: write
@ -111,7 +111,7 @@ jobs:
retention-days: 1
build-cuda-image:
runs-on: ubuntu-latest
runs-on: ${{ matrix.platform == 'linux/arm64' && 'ubuntu-24.04-arm' || 'ubuntu-latest' }}
permissions:
contents: read
packages: write
@ -211,7 +211,7 @@ jobs:
retention-days: 1
build-ollama-image:
runs-on: ubuntu-latest
runs-on: ${{ matrix.platform == 'linux/arm64' && 'ubuntu-24.04-arm' || 'ubuntu-latest' }}
permissions:
contents: read
packages: write


@ -5,6 +5,36 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [0.5.20] - 2025-03-05
### Added
- **⚡ Toggle Code Execution On/Off**: You can now enable or disable code execution, providing more control over security, ensuring a safer and more customizable experience.
### Fixed
- **📜 Pinyin Keyboard Enter Key Now Works Properly**: Resolved an issue where the Enter key for Pinyin keyboards was not functioning as expected, ensuring seamless input for Chinese users.
- **🖼️ Web Manifest Loading Issue Fixed**: Addressed inconsistencies with 'site.webmanifest', guaranteeing proper loading and representation of the app across different browsers and devices.
- **📦 Non-Root Container Issue Resolved**: Fixed a critical issue where the UI failed to load correctly in non-root containers, ensuring reliable deployment in various environments.
## [0.5.19] - 2025-03-04
### Added
- **📊 Logit Bias Parameter Support**: Fine-tune conversation dynamics by adjusting the Logit Bias parameter directly in chat settings, giving you more control over model responses.
- **⌨️ Customizable Enter Behavior**: You can now configure Enter to send messages only when combined with Ctrl (Ctrl+Enter) via Settings > Interface, preventing accidental message sends.
- **📝 Collapsible Code Blocks**: Easily collapse long code blocks to declutter your chat, making it easier to focus on important details.
- **🏷️ Tag Selector in Model Selector**: Quickly find and categorize models with the new tag filtering system in the Model Selector, streamlining model discovery.
- **📈 Experimental Elasticsearch Vector DB Support**: Now supports Elasticsearch as a vector database, offering more flexibility for data retrieval in Retrieval-Augmented Generation (RAG) workflows.
- **⚙️ General Reliability Enhancements**: Various stability improvements across the WebUI, ensuring a smoother, more consistent experience.
- **🌍 Updated Translations**: Refined multilingual support for better localization and accuracy across various languages.
### Fixed
- **🔄 "Stream" Hook Activation**: Fixed an issue where the "Stream" hook only worked when globally enabled, ensuring reliable real-time filtering.
- **📧 LDAP Email Case Sensitivity**: Resolved an issue where LDAP login failed due to email case sensitivity mismatches, improving authentication reliability.
- **💬 WebSocket Chat Event Registration**: Fixed a bug preventing chat event listeners from being registered upon sign-in, ensuring real-time updates work properly.
## [0.5.18] - 2025-02-27
### Fixed


@ -587,6 +587,17 @@ load_oauth_providers()
STATIC_DIR = Path(os.getenv("STATIC_DIR", OPEN_WEBUI_DIR / "static")).resolve()
for file_path in (FRONTEND_BUILD_DIR / "static").glob("**/*"):
if file_path.is_file():
target_path = STATIC_DIR / file_path.relative_to(
(FRONTEND_BUILD_DIR / "static")
)
target_path.parent.mkdir(parents=True, exist_ok=True)
try:
shutil.copyfile(file_path, target_path)
except Exception as e:
logging.error(f"An error occurred: {e}")
frontend_favicon = FRONTEND_BUILD_DIR / "static" / "favicon.png"
if frontend_favicon.exists():
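The static-asset copy loop added above can be sketched in isolation. `copy_tree_files` is a hypothetical helper name, and the source/destination roots stand in for `FRONTEND_BUILD_DIR / "static"` and `STATIC_DIR`:

```python
import shutil
from pathlib import Path

def copy_tree_files(src_root: Path, dst_root: Path) -> int:
    """Mirror every file under src_root into dst_root, creating parent
    directories as needed, and log (rather than raise) copy failures."""
    copied = 0
    for file_path in src_root.glob("**/*"):
        if file_path.is_file():
            # relative_to preserves the subdirectory layout under dst_root
            target_path = dst_root / file_path.relative_to(src_root)
            target_path.parent.mkdir(parents=True, exist_ok=True)
            try:
                shutil.copyfile(file_path, target_path)
                copied += 1
            except OSError as e:
                print(f"An error occurred: {e}")
    return copied
```

Swallowing per-file errors keeps startup from failing outright when a single asset cannot be copied, which matches the try/except in the diff.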
@ -659,11 +670,7 @@ if CUSTOM_NAME:
# LICENSE_KEY
####################################
LICENSE_KEY = PersistentConfig(
"LICENSE_KEY",
"license.key",
os.environ.get("LICENSE_KEY", ""),
)
LICENSE_KEY = os.environ.get("LICENSE_KEY", "")
####################################
# STORAGE PROVIDER
@ -695,16 +702,16 @@ AZURE_STORAGE_KEY = os.environ.get("AZURE_STORAGE_KEY", None)
# File Upload DIR
####################################
UPLOAD_DIR = f"{DATA_DIR}/uploads"
Path(UPLOAD_DIR).mkdir(parents=True, exist_ok=True)
UPLOAD_DIR = DATA_DIR / "uploads"
UPLOAD_DIR.mkdir(parents=True, exist_ok=True)
####################################
# Cache DIR
####################################
CACHE_DIR = f"{DATA_DIR}/cache"
Path(CACHE_DIR).mkdir(parents=True, exist_ok=True)
CACHE_DIR = DATA_DIR / "cache"
CACHE_DIR.mkdir(parents=True, exist_ok=True)
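The change above swaps f-string paths for `pathlib` composition. A minimal sketch, using a temporary directory as a stand-in for the real `DATA_DIR`:

```python
import tempfile
from pathlib import Path

DATA_DIR = Path(tempfile.mkdtemp())  # hypothetical stand-in for the config value

# With a Path object, subdirectories compose with "/" instead of f-strings,
# and mkdir can be called directly on the result.
UPLOAD_DIR = DATA_DIR / "uploads"
UPLOAD_DIR.mkdir(parents=True, exist_ok=True)

CACHE_DIR = DATA_DIR / "cache"
CACHE_DIR.mkdir(parents=True, exist_ok=True)

# exist_ok=True makes repeated startup idempotent: calling mkdir again is a no-op.
CACHE_DIR.mkdir(parents=True, exist_ok=True)
```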
####################################
@ -1373,6 +1380,11 @@ Responses from models: {{responses}}"""
# Code Interpreter
####################################
ENABLE_CODE_EXECUTION = PersistentConfig(
"ENABLE_CODE_EXECUTION",
"code_execution.enable",
os.environ.get("ENABLE_CODE_EXECUTION", "True").lower() == "true",
)
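The `ENABLE_CODE_EXECUTION` default above uses the env-flag idiom common in this config file. A sketch of that pattern; `env_flag` and the variable names are hypothetical:

```python
import os

def env_flag(name: str, default: str = "True") -> bool:
    """Parse an environment variable as a boolean the way the config does:
    only the case-insensitive string "true" counts as True."""
    return os.environ.get(name, default).lower() == "true"

os.environ["HYPOTHETICAL_FLAG"] = "TRUE"
assert env_flag("HYPOTHETICAL_FLAG") is True

os.environ["HYPOTHETICAL_FLAG"] = "0"   # anything other than "true" is False
assert env_flag("HYPOTHETICAL_FLAG") is False

del os.environ["HYPOTHETICAL_FLAG"]     # unset falls back to the default
assert env_flag("HYPOTHETICAL_FLAG") is True
```

Note that values like `"1"` or `"yes"` are treated as False under this scheme, which is worth keeping in mind when setting the variable.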
CODE_EXECUTION_ENGINE = PersistentConfig(
"CODE_EXECUTION_ENGINE",
@ -1541,6 +1553,17 @@ OPENSEARCH_CERT_VERIFY = os.environ.get("OPENSEARCH_CERT_VERIFY", False)
OPENSEARCH_USERNAME = os.environ.get("OPENSEARCH_USERNAME", None)
OPENSEARCH_PASSWORD = os.environ.get("OPENSEARCH_PASSWORD", None)
# ElasticSearch
ELASTICSEARCH_URL = os.environ.get("ELASTICSEARCH_URL", "https://localhost:9200")
ELASTICSEARCH_CA_CERTS = os.environ.get("ELASTICSEARCH_CA_CERTS", None)
ELASTICSEARCH_API_KEY = os.environ.get("ELASTICSEARCH_API_KEY", None)
ELASTICSEARCH_USERNAME = os.environ.get("ELASTICSEARCH_USERNAME", None)
ELASTICSEARCH_PASSWORD = os.environ.get("ELASTICSEARCH_PASSWORD", None)
ELASTICSEARCH_CLOUD_ID = os.environ.get("ELASTICSEARCH_CLOUD_ID", None)
SSL_ASSERT_FINGERPRINT = os.environ.get("SSL_ASSERT_FINGERPRINT", None)
ELASTICSEARCH_INDEX_PREFIX = os.environ.get(
"ELASTICSEARCH_INDEX_PREFIX", "open_webui_collections"
)
# Pgvector
PGVECTOR_DB_URL = os.environ.get("PGVECTOR_DB_URL", DATABASE_URL)
if VECTOR_DB == "pgvector" and not PGVECTOR_DB_URL.startswith("postgres"):
@ -1977,6 +2000,12 @@ EXA_API_KEY = PersistentConfig(
os.getenv("EXA_API_KEY", ""),
)
PERPLEXITY_API_KEY = PersistentConfig(
"PERPLEXITY_API_KEY",
"rag.web.search.perplexity_api_key",
os.getenv("PERPLEXITY_API_KEY", ""),
)
RAG_WEB_SEARCH_RESULT_COUNT = PersistentConfig(
"RAG_WEB_SEARCH_RESULT_COUNT",
"rag.web.search.result_count",



@ -65,10 +65,8 @@ except Exception:
# LOGGING
####################################
log_levels = ["CRITICAL", "ERROR", "WARNING", "INFO", "DEBUG"]
GLOBAL_LOG_LEVEL = os.environ.get("GLOBAL_LOG_LEVEL", "").upper()
if GLOBAL_LOG_LEVEL in log_levels:
if GLOBAL_LOG_LEVEL in logging.getLevelNamesMapping():
logging.basicConfig(stream=sys.stdout, level=GLOBAL_LOG_LEVEL, force=True)
else:
GLOBAL_LOG_LEVEL = "INFO"
@ -78,6 +76,7 @@ log.info(f"GLOBAL_LOG_LEVEL: {GLOBAL_LOG_LEVEL}")
if "cuda_error" in locals():
log.exception(cuda_error)
del cuda_error
log_sources = [
"AUDIO",
@ -100,7 +99,7 @@ SRC_LOG_LEVELS = {}
for source in log_sources:
log_env_var = source + "_LOG_LEVEL"
SRC_LOG_LEVELS[source] = os.environ.get(log_env_var, "").upper()
if SRC_LOG_LEVELS[source] not in log_levels:
if SRC_LOG_LEVELS[source] not in logging.getLevelNamesMapping():
SRC_LOG_LEVELS[source] = GLOBAL_LOG_LEVEL
log.info(f"{log_env_var}: {SRC_LOG_LEVELS[source]}")
@ -386,6 +385,7 @@ ENABLE_WEBSOCKET_SUPPORT = (
WEBSOCKET_MANAGER = os.environ.get("WEBSOCKET_MANAGER", "")
WEBSOCKET_REDIS_URL = os.environ.get("WEBSOCKET_REDIS_URL", REDIS_URL)
WEBSOCKET_REDIS_LOCK_TIMEOUT = os.environ.get("WEBSOCKET_REDIS_LOCK_TIMEOUT", 60)
AIOHTTP_CLIENT_TIMEOUT = os.environ.get("AIOHTTP_CLIENT_TIMEOUT", "")
@ -397,19 +397,20 @@ else:
except Exception:
AIOHTTP_CLIENT_TIMEOUT = 300
AIOHTTP_CLIENT_TIMEOUT_OPENAI_MODEL_LIST = os.environ.get(
"AIOHTTP_CLIENT_TIMEOUT_OPENAI_MODEL_LIST", ""
AIOHTTP_CLIENT_TIMEOUT_MODEL_LIST = os.environ.get(
"AIOHTTP_CLIENT_TIMEOUT_MODEL_LIST",
os.environ.get("AIOHTTP_CLIENT_TIMEOUT_OPENAI_MODEL_LIST", ""),
)
if AIOHTTP_CLIENT_TIMEOUT_OPENAI_MODEL_LIST == "":
AIOHTTP_CLIENT_TIMEOUT_OPENAI_MODEL_LIST = None
if AIOHTTP_CLIENT_TIMEOUT_MODEL_LIST == "":
AIOHTTP_CLIENT_TIMEOUT_MODEL_LIST = None
else:
try:
AIOHTTP_CLIENT_TIMEOUT_OPENAI_MODEL_LIST = int(
AIOHTTP_CLIENT_TIMEOUT_OPENAI_MODEL_LIST
)
AIOHTTP_CLIENT_TIMEOUT_MODEL_LIST = int(AIOHTTP_CLIENT_TIMEOUT_MODEL_LIST)
except Exception:
AIOHTTP_CLIENT_TIMEOUT_OPENAI_MODEL_LIST = 5
AIOHTTP_CLIENT_TIMEOUT_MODEL_LIST = 5
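The rename above keeps the legacy `AIOHTTP_CLIENT_TIMEOUT_OPENAI_MODEL_LIST` variable as a fallback for the new `AIOHTTP_CLIENT_TIMEOUT_MODEL_LIST`. A sketch of that nested-lookup pattern with hypothetical variable names:

```python
import os

def read_timeout(new_name: str, old_name: str, default: int = 5):
    """Prefer the new env var, fall back to the legacy one. An empty
    string means "no timeout" (None); unparsable values use a default."""
    raw = os.environ.get(new_name, os.environ.get(old_name, ""))
    if raw == "":
        return None
    try:
        return int(raw)
    except ValueError:
        return default

os.environ.pop("HYP_NEW_TIMEOUT", None)
os.environ["HYP_OLD_TIMEOUT"] = "30"
assert read_timeout("HYP_NEW_TIMEOUT", "HYP_OLD_TIMEOUT") == 30  # legacy name honored

os.environ["HYP_NEW_TIMEOUT"] = "10"
assert read_timeout("HYP_NEW_TIMEOUT", "HYP_OLD_TIMEOUT") == 10  # new name wins
```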
####################################
# OFFLINE_MODE


@ -105,6 +105,7 @@ from open_webui.config import (
# Direct Connections
ENABLE_DIRECT_CONNECTIONS,
# Code Execution
ENABLE_CODE_EXECUTION,
CODE_EXECUTION_ENGINE,
CODE_EXECUTION_JUPYTER_URL,
CODE_EXECUTION_JUPYTER_AUTH,
@ -215,6 +216,7 @@ from open_webui.config import (
BING_SEARCH_V7_SUBSCRIPTION_KEY,
BRAVE_SEARCH_API_KEY,
EXA_API_KEY,
PERPLEXITY_API_KEY,
KAGI_SEARCH_API_KEY,
MOJEEK_SEARCH_API_KEY,
BOCHA_SEARCH_API_KEY,
@ -400,8 +402,8 @@ async def lifespan(app: FastAPI):
if RESET_CONFIG_ON_START:
reset_config()
if app.state.config.LICENSE_KEY:
get_license_data(app, app.state.config.LICENSE_KEY)
if LICENSE_KEY:
get_license_data(app, LICENSE_KEY)
asyncio.create_task(periodic_usage_pool_cleanup())
yield
@ -419,7 +421,7 @@ oauth_manager = OAuthManager(app)
app.state.config = AppConfig()
app.state.WEBUI_NAME = WEBUI_NAME
app.state.config.LICENSE_KEY = LICENSE_KEY
app.state.LICENSE_METADATA = None
########################################
#
@ -603,6 +605,7 @@ app.state.config.JINA_API_KEY = JINA_API_KEY
app.state.config.BING_SEARCH_V7_ENDPOINT = BING_SEARCH_V7_ENDPOINT
app.state.config.BING_SEARCH_V7_SUBSCRIPTION_KEY = BING_SEARCH_V7_SUBSCRIPTION_KEY
app.state.config.EXA_API_KEY = EXA_API_KEY
app.state.config.PERPLEXITY_API_KEY = PERPLEXITY_API_KEY
app.state.config.RAG_WEB_SEARCH_RESULT_COUNT = RAG_WEB_SEARCH_RESULT_COUNT
app.state.config.RAG_WEB_SEARCH_CONCURRENT_REQUESTS = RAG_WEB_SEARCH_CONCURRENT_REQUESTS
@ -658,6 +661,7 @@ app.state.EMBEDDING_FUNCTION = get_embedding_function(
#
########################################
app.state.config.ENABLE_CODE_EXECUTION = ENABLE_CODE_EXECUTION
app.state.config.CODE_EXECUTION_ENGINE = CODE_EXECUTION_ENGINE
app.state.config.CODE_EXECUTION_JUPYTER_URL = CODE_EXECUTION_JUPYTER_URL
app.state.config.CODE_EXECUTION_JUPYTER_AUTH = CODE_EXECUTION_JUPYTER_AUTH
@ -1019,7 +1023,7 @@ async def chat_completion(
"files": form_data.get("files", None),
"features": form_data.get("features", None),
"variables": form_data.get("variables", None),
"model": model_info.model_dump() if model_info else model,
"model": model,
"direct": model_item.get("direct", False),
**(
{"function_calling": "native"}
@ -1037,7 +1041,7 @@ async def chat_completion(
form_data["metadata"] = metadata
form_data, metadata, events = await process_chat_payload(
request, form_data, metadata, user, model
request, form_data, user, metadata, model
)
except Exception as e:
@ -1051,7 +1055,7 @@ async def chat_completion(
response = await chat_completion_handler(request, form_data, user)
return await process_chat_response(
request, response, form_data, user, events, metadata, tasks
request, response, form_data, user, metadata, model, events, tasks
)
except Exception as e:
raise HTTPException(
@ -1140,9 +1144,10 @@ async def get_app_config(request: Request):
if data is not None and "id" in data:
user = Users.get_user_by_id(data["id"])
user_count = Users.get_num_users()
onboarding = False
if user is None:
user_count = Users.get_num_users()
onboarding = user_count == 0
return {
@ -1170,6 +1175,7 @@ async def get_app_config(request: Request):
"enable_direct_connections": app.state.config.ENABLE_DIRECT_CONNECTIONS,
"enable_channels": app.state.config.ENABLE_CHANNELS,
"enable_web_search": app.state.config.ENABLE_RAG_WEB_SEARCH,
"enable_code_execution": app.state.config.ENABLE_CODE_EXECUTION,
"enable_code_interpreter": app.state.config.ENABLE_CODE_INTERPRETER,
"enable_image_generation": app.state.config.ENABLE_IMAGE_GENERATION,
"enable_autocomplete_generation": app.state.config.ENABLE_AUTOCOMPLETE_GENERATION,
@ -1188,6 +1194,7 @@ async def get_app_config(request: Request):
{
"default_models": app.state.config.DEFAULT_MODELS,
"default_prompt_suggestions": app.state.config.DEFAULT_PROMPT_SUGGESTIONS,
"user_count": user_count,
"code": {
"engine": app.state.config.CODE_EXECUTION_ENGINE,
},
@ -1211,6 +1218,14 @@ async def get_app_config(request: Request):
"api_key": GOOGLE_DRIVE_API_KEY.value,
},
"onedrive": {"client_id": ONEDRIVE_CLIENT_ID.value},
"license_metadata": app.state.LICENSE_METADATA,
**(
{
"active_entries": app.state.USER_COUNT,
}
if user.role == "admin"
else {}
),
}
if user is not None
else {}


@ -414,6 +414,13 @@ def get_sources_from_files(
]
],
}
elif file.get("file").get("data"):
context = {
"documents": [[file.get("file").get("data", {}).get("content")]],
"metadatas": [
[file.get("file").get("data", {}).get("metadata", {})]
],
}
else:
collection_names = []
if file.get("type") == "collection":
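The new `elif` branch above reads content attached inline on the file record instead of querying a vector collection. A sketch of that lookup with a hypothetical file payload; the helper name is illustrative only:

```python
def context_from_inline_data(file: dict):
    """Build a retrieval context from data embedded on the file record,
    mirroring the elif branch above; returns None when no inline data exists."""
    data = file.get("file", {}).get("data")
    if not data:
        return None
    return {
        "documents": [[data.get("content")]],
        "metadatas": [[data.get("metadata", {})]],
    }

file = {"file": {"data": {"content": "hello", "metadata": {"name": "notes.txt"}}}}
ctx = context_from_inline_data(file)
assert ctx["documents"] == [["hello"]]
assert context_from_inline_data({"file": {}}) is None
```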


@ -16,6 +16,10 @@ elif VECTOR_DB == "pgvector":
from open_webui.retrieval.vector.dbs.pgvector import PgvectorClient
VECTOR_DB_CLIENT = PgvectorClient()
elif VECTOR_DB == "elasticsearch":
from open_webui.retrieval.vector.dbs.elasticsearch import ElasticsearchClient
VECTOR_DB_CLIENT = ElasticsearchClient()
else:
from open_webui.retrieval.vector.dbs.chroma import ChromaClient


@ -0,0 +1,295 @@
from elasticsearch import Elasticsearch, BadRequestError
from typing import Optional
import ssl
from elasticsearch.helpers import bulk, scan
from open_webui.retrieval.vector.main import VectorItem, SearchResult, GetResult
from open_webui.config import (
ELASTICSEARCH_URL,
ELASTICSEARCH_CA_CERTS,
ELASTICSEARCH_API_KEY,
ELASTICSEARCH_USERNAME,
ELASTICSEARCH_PASSWORD,
ELASTICSEARCH_CLOUD_ID,
ELASTICSEARCH_INDEX_PREFIX,
SSL_ASSERT_FINGERPRINT,
)
class ElasticsearchClient:
"""
Important:
In order to reduce the number of indexes, and since the embedding vector length is fixed, we avoid creating
an index for each file and instead store documents as text fields, separating them into different indexes
based on the embedding length.
"""
def __init__(self):
self.index_prefix = ELASTICSEARCH_INDEX_PREFIX
self.client = Elasticsearch(
hosts=[ELASTICSEARCH_URL],
ca_certs=ELASTICSEARCH_CA_CERTS,
api_key=ELASTICSEARCH_API_KEY,
cloud_id=ELASTICSEARCH_CLOUD_ID,
basic_auth=(
(ELASTICSEARCH_USERNAME, ELASTICSEARCH_PASSWORD)
if ELASTICSEARCH_USERNAME and ELASTICSEARCH_PASSWORD
else None
),
ssl_assert_fingerprint=SSL_ASSERT_FINGERPRINT,
)
# Status: works
def _get_index_name(self, dimension: int) -> str:
return f"{self.index_prefix}_d{str(dimension)}"
# Status: works
def _scan_result_to_get_result(self, result) -> GetResult:
if not result:
return None
ids = []
documents = []
metadatas = []
for hit in result:
ids.append(hit["_id"])
documents.append(hit["_source"].get("text"))
metadatas.append(hit["_source"].get("metadata"))
return GetResult(ids=[ids], documents=[documents], metadatas=[metadatas])
# Status: works
def _result_to_get_result(self, result) -> GetResult:
if not result["hits"]["hits"]:
return None
ids = []
documents = []
metadatas = []
for hit in result["hits"]["hits"]:
ids.append(hit["_id"])
documents.append(hit["_source"].get("text"))
metadatas.append(hit["_source"].get("metadata"))
return GetResult(ids=[ids], documents=[documents], metadatas=[metadatas])
# Status: works
def _result_to_search_result(self, result) -> SearchResult:
ids = []
distances = []
documents = []
metadatas = []
for hit in result["hits"]["hits"]:
ids.append(hit["_id"])
distances.append(hit["_score"])
documents.append(hit["_source"].get("text"))
metadatas.append(hit["_source"].get("metadata"))
return SearchResult(
ids=[ids],
distances=[distances],
documents=[documents],
metadatas=[metadatas],
)
# Status: works
def _create_index(self, dimension: int):
body = {
"mappings": {
"dynamic_templates": [
{
"strings": {
"match_mapping_type": "string",
"mapping": {"type": "keyword"},
}
}
],
"properties": {
"collection": {"type": "keyword"},
"id": {"type": "keyword"},
"vector": {
"type": "dense_vector",
"dims": dimension, # Adjust based on your vector dimensions
"index": True,
"similarity": "cosine",
},
"text": {"type": "text"},
"metadata": {"type": "object"},
},
}
}
self.client.indices.create(index=self._get_index_name(dimension), body=body)
# Status: works
def _create_batches(self, items: list[VectorItem], batch_size=100):
for i in range(0, len(items), batch_size):
yield items[i : min(i + batch_size, len(items))]
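`_create_batches` slices the item list into fixed-size chunks before each `bulk()` call. A standalone sketch of the same generator (the `min()` clamp in the original is redundant, since Python slicing already stops at the end of the list):

```python
def create_batches(items: list, batch_size: int = 100):
    """Yield successive slices of at most batch_size items, as the
    Elasticsearch client does before each bulk() indexing call."""
    for i in range(0, len(items), batch_size):
        yield items[i : i + batch_size]  # slicing clamps to len(items) automatically

batches = list(create_batches(list(range(250)), batch_size=100))
assert [len(b) for b in batches] == [100, 100, 50]
assert batches[-1][0] == 200
```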
# Status: works
def has_collection(self, collection_name) -> bool:
query_body = {"query": {"bool": {"filter": []}}}
query_body["query"]["bool"]["filter"].append(
{"term": {"collection": collection_name}}
)
try:
result = self.client.count(index=f"{self.index_prefix}*", body=query_body)
return result.body["count"] > 0
except Exception:
return False
def delete_collection(self, collection_name: str):
query = {"query": {"term": {"collection": collection_name}}}
self.client.delete_by_query(index=f"{self.index_prefix}*", body=query)
# Status: works
def search(
self, collection_name: str, vectors: list[list[float]], limit: int
) -> Optional[SearchResult]:
query = {
"size": limit,
"_source": ["text", "metadata"],
"query": {
"script_score": {
"query": {
"bool": {"filter": [{"term": {"collection": collection_name}}]}
},
"script": {
"source": "cosineSimilarity(params.vector, 'vector') + 1.0",
"params": {
"vector": vectors[0]
}, # Assuming single query vector
},
}
},
}
result = self.client.search(
index=self._get_index_name(len(vectors[0])), body=query
)
return self._result_to_search_result(result)
# Status: only tested halfway
def query(
self, collection_name: str, filter: dict, limit: Optional[int] = None
) -> Optional[GetResult]:
if not self.has_collection(collection_name):
return None
query_body = {
"query": {"bool": {"filter": []}},
"_source": ["text", "metadata"],
}
for field, value in filter.items():
query_body["query"]["bool"]["filter"].append({"term": {field: value}})
query_body["query"]["bool"]["filter"].append(
{"term": {"collection": collection_name}}
)
size = limit if limit else 10
try:
result = self.client.search(
index=f"{self.index_prefix}*",
body=query_body,
size=size,
)
return self._result_to_get_result(result)
except Exception as e:
return None
# Status: works
def _has_index(self, dimension: int):
return self.client.indices.exists(
index=self._get_index_name(dimension=dimension)
)
def get_or_create_index(self, dimension: int):
if not self._has_index(dimension=dimension):
self._create_index(dimension=dimension)
# Status: works
def get(self, collection_name: str) -> Optional[GetResult]:
# Get all the items in the collection.
query = {
"query": {"bool": {"filter": [{"term": {"collection": collection_name}}]}},
"_source": ["text", "metadata"],
}
results = list(scan(self.client, index=f"{self.index_prefix}*", query=query))
return self._scan_result_to_get_result(results)
# Status: works
def insert(self, collection_name: str, items: list[VectorItem]):
if not self._has_index(dimension=len(items[0]["vector"])):
self._create_index(dimension=len(items[0]["vector"]))
for batch in self._create_batches(items):
actions = [
{
"_index": self._get_index_name(dimension=len(items[0]["vector"])),
"_id": item["id"],
"_source": {
"collection": collection_name,
"vector": item["vector"],
"text": item["text"],
"metadata": item["metadata"],
},
}
for item in batch
]
bulk(self.client, actions)
# Upsert documents using the update API with doc_as_upsert=True.
def upsert(self, collection_name: str, items: list[VectorItem]):
if not self._has_index(dimension=len(items[0]["vector"])):
self._create_index(dimension=len(items[0]["vector"]))
for batch in self._create_batches(items):
actions = [
{
"_op_type": "update",
"_index": self._get_index_name(dimension=len(item["vector"])),
"_id": item["id"],
"doc": {
"collection": collection_name,
"vector": item["vector"],
"text": item["text"],
"metadata": item["metadata"],
},
"doc_as_upsert": True,
}
for item in batch
]
bulk(self.client, actions)
# Delete specific documents from a collection by filtering on both collection and document IDs.
def delete(
self,
collection_name: str,
ids: Optional[list[str]] = None,
filter: Optional[dict] = None,
):
query = {
"query": {"bool": {"filter": [{"term": {"collection": collection_name}}]}}
}
# Filtering logic based on ChromaDB's delete semantics: ids take precedence over the metadata filter
if ids:
query["query"]["bool"]["filter"].append({"terms": {"_id": ids}})
elif filter:
for field, value in filter.items():
query["query"]["bool"]["filter"].append(
{"term": {f"metadata.{field}": value}}
)
self.client.delete_by_query(index=f"{self.index_prefix}*", body=query)
def reset(self):
indices = self.client.indices.get(index=f"{self.index_prefix}*")
for index in indices:
self.client.indices.delete(index=index)
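The `search` method above builds a `script_score` query that first filters documents to one logical collection, then scores them with `cosineSimilarity`. A minimal standalone sketch of that payload, for inspection only (no Elasticsearch connection, and assuming the same `dense_vector` mapping the client creates):

```python
# Standalone sketch of the script_score query body assembled by
# ElasticsearchClient.search above; names mirror the client code.

def build_knn_query(collection_name: str, vector: list[float], limit: int) -> dict:
    """Build a cosine-similarity query restricted to one logical collection."""
    return {
        "size": limit,
        "_source": ["text", "metadata"],
        "query": {
            "script_score": {
                # Filter documents to the requested collection first...
                "query": {
                    "bool": {"filter": [{"term": {"collection": collection_name}}]}
                },
                # ...then score by cosine similarity against the query vector.
                # The +1.0 keeps scores non-negative, as Elasticsearch requires.
                "script": {
                    "source": "cosineSimilarity(params.vector, 'vector') + 1.0",
                    "params": {"vector": vector},
                },
            }
        },
    }


query = build_knn_query("docs", [0.1, 0.2, 0.3], limit=5)
print(query["size"])  # 5
```

Because the query dict is built separately from the client call, its shape can be unit-tested without a running cluster.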

View File

@ -20,9 +20,9 @@ class MilvusClient:
def __init__(self):
self.collection_prefix = "open_webui"
if MILVUS_TOKEN is None:
self.client = Client(uri=MILVUS_URI, database=MILVUS_DB)
self.client = Client(uri=MILVUS_URI, db_name=MILVUS_DB)
else:
self.client = Client(uri=MILVUS_URI, database=MILVUS_DB, token=MILVUS_TOKEN)
self.client = Client(uri=MILVUS_URI, db_name=MILVUS_DB, token=MILVUS_TOKEN)
def _result_to_get_result(self, result) -> GetResult:
ids = []

View File

@ -49,7 +49,7 @@ class OpenSearchClient:
ids=ids, distances=distances, documents=documents, metadatas=metadatas
)
def _create_index(self, index_name: str, dimension: int):
def _create_index(self, collection_name: str, dimension: int):
body = {
"mappings": {
"properties": {
@ -72,24 +72,28 @@ class OpenSearchClient:
}
}
}
self.client.indices.create(index=f"{self.index_prefix}_{index_name}", body=body)
self.client.indices.create(
index=f"{self.index_prefix}_{collection_name}", body=body
)
def _create_batches(self, items: list[VectorItem], batch_size=100):
for i in range(0, len(items), batch_size):
yield items[i : i + batch_size]
def has_collection(self, index_name: str) -> bool:
def has_collection(self, collection_name: str) -> bool:
# has_collection here means has index.
# We are simply adapting to the norms of the other DBs.
return self.client.indices.exists(index=f"{self.index_prefix}_{index_name}")
return self.client.indices.exists(
index=f"{self.index_prefix}_{collection_name}"
)
def delete_colleciton(self, index_name: str):
def delete_colleciton(self, collection_name: str):
# delete_collection here means delete index.
# We are simply adapting to the norms of the other DBs.
self.client.indices.delete(index=f"{self.index_prefix}_{index_name}")
self.client.indices.delete(index=f"{self.index_prefix}_{collection_name}")
def search(
self, index_name: str, vectors: list[list[float]], limit: int
self, collection_name: str, vectors: list[list[float]], limit: int
) -> Optional[SearchResult]:
query = {
"size": limit,
@ -108,7 +112,7 @@ class OpenSearchClient:
}
result = self.client.search(
index=f"{self.index_prefix}_{index_name}", body=query
index=f"{self.index_prefix}_{collection_name}", body=query
)
return self._result_to_search_result(result)
@ -141,21 +145,22 @@ class OpenSearchClient:
except Exception as e:
return None
def get_or_create_index(self, index_name: str, dimension: int):
if not self.has_index(index_name):
self._create_index(index_name, dimension)
def _create_index_if_not_exists(self, collection_name: str, dimension: int):
if not self.has_index(collection_name):
self._create_index(collection_name, dimension)
def get(self, index_name: str) -> Optional[GetResult]:
def get(self, collection_name: str) -> Optional[GetResult]:
query = {"query": {"match_all": {}}, "_source": ["text", "metadata"]}
result = self.client.search(
index=f"{self.index_prefix}_{index_name}", body=query
index=f"{self.index_prefix}_{collection_name}", body=query
)
return self._result_to_get_result(result)
def insert(self, index_name: str, items: list[VectorItem]):
if not self.has_index(index_name):
self._create_index(index_name, dimension=len(items[0]["vector"]))
def insert(self, collection_name: str, items: list[VectorItem]):
self._create_index_if_not_exists(
collection_name=collection_name, dimension=len(items[0]["vector"])
)
for batch in self._create_batches(items):
actions = [
@ -173,15 +178,17 @@ class OpenSearchClient:
]
self.client.bulk(actions)
def upsert(self, index_name: str, items: list[VectorItem]):
if not self.has_index(index_name):
self._create_index(index_name, dimension=len(items[0]["vector"]))
def upsert(self, collection_name: str, items: list[VectorItem]):
self._create_index_if_not_exists(
collection_name=collection_name, dimension=len(items[0]["vector"])
)
for batch in self._create_batches(items):
actions = [
{
"index": {
"_id": item["id"],
"_index": f"{self.index_prefix}_{collection_name}",
"_source": {
"vector": item["vector"],
"text": item["text"],
@ -193,9 +200,9 @@ class OpenSearchClient:
]
self.client.bulk(actions)
def delete(self, index_name: str, ids: list[str]):
def delete(self, collection_name: str, ids: list[str]):
actions = [
{"delete": {"_index": f"{self.index_prefix}_{index_name}", "_id": id}}
{"delete": {"_index": f"{self.index_prefix}_{collection_name}", "_id": id}}
for id in ids
]
self.client.bulk(body=actions)

View File

@ -0,0 +1,87 @@
import logging
from typing import Optional, List
import requests
from open_webui.retrieval.web.main import SearchResult, get_filtered_results
from open_webui.env import SRC_LOG_LEVELS
log = logging.getLogger(__name__)
log.setLevel(SRC_LOG_LEVELS["RAG"])
def search_perplexity(
api_key: str,
query: str,
count: int,
filter_list: Optional[list[str]] = None,
) -> list[SearchResult]:
"""Search using Perplexity API and return the results as a list of SearchResult objects.
Args:
api_key (str): A Perplexity API key
query (str): The query to search for
count (int): Maximum number of results to return
filter_list (Optional[list[str]]): Domains to keep when filtering the results
"""
# Handle PersistentConfig object
if hasattr(api_key, "__str__"):
api_key = str(api_key)
try:
url = "https://api.perplexity.ai/chat/completions"
# Create payload for the API call
payload = {
"model": "sonar",
"messages": [
{
"role": "system",
"content": "You are a search assistant. Provide factual information with citations.",
},
{"role": "user", "content": query},
],
"temperature": 0.2, # Lower temperature for more factual responses
"stream": False,
}
headers = {
"Authorization": f"Bearer {api_key}",
"Content-Type": "application/json",
}
# Make the API request
response = requests.request("POST", url, json=payload, headers=headers)
# Parse the JSON response
json_response = response.json()
# Extract citations from the response
citations = json_response.get("citations", [])
# Create search results from citations
results = []
for i, citation in enumerate(citations[:count]):
# Extract content from the response to use as snippet
content = ""
if "choices" in json_response and json_response["choices"]:
if i == 0:
content = json_response["choices"][0]["message"]["content"]
result = {"link": citation, "title": f"Source {i+1}", "snippet": content}
results.append(result)
if filter_list:
results = get_filtered_results(results, filter_list)
return [
SearchResult(
link=result["link"], title=result["title"], snippet=result["snippet"]
)
for result in results[:count]
]
except Exception as e:
log.error(f"Error searching with Perplexity API: {e}")
return []
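The citation-to-result mapping above can be exercised in isolation with a mocked payload. This sketch mirrors the loop in `search_perplexity`; the response shape (`citations`, `choices[0].message.content`) is the one the function assumes, and no API call is made:

```python
# Minimal sketch of the citation-to-result mapping in search_perplexity,
# driven by a mocked Perplexity chat-completions response (no network).

def citations_to_results(json_response: dict, count: int) -> list[dict]:
    results = []
    citations = json_response.get("citations", [])
    for i, citation in enumerate(citations[:count]):
        # Only the first result carries the answer text as its snippet,
        # matching the behaviour of search_perplexity above.
        content = ""
        if i == 0 and json_response.get("choices"):
            content = json_response["choices"][0]["message"]["content"]
        results.append(
            {"link": citation, "title": f"Source {i+1}", "snippet": content}
        )
    return results


mock = {
    "citations": ["https://example.com/a", "https://example.com/b"],
    "choices": [{"message": {"content": "Answer text."}}],
}
print(citations_to_results(mock, count=2))
```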

View File

@ -54,7 +54,7 @@ MAX_FILE_SIZE = MAX_FILE_SIZE_MB * 1024 * 1024 # Convert MB to bytes
log = logging.getLogger(__name__)
log.setLevel(SRC_LOG_LEVELS["AUDIO"])
SPEECH_CACHE_DIR = Path(CACHE_DIR).joinpath("./audio/speech/")
SPEECH_CACHE_DIR = CACHE_DIR / "audio" / "speech"
SPEECH_CACHE_DIR.mkdir(parents=True, exist_ok=True)
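The two spellings in the hunk above produce the same path once `CACHE_DIR` is a `Path`: the `/` operator is shorthand for `joinpath`, and pathlib drops the leading `./` and trailing `/` during parsing. A quick check with a placeholder cache directory (the actual `CACHE_DIR` value comes from open-webui's config):

```python
from pathlib import Path

CACHE_DIR = Path("/tmp/open-webui/cache")  # placeholder value for illustration

old_style = Path(CACHE_DIR).joinpath("./audio/speech/")
new_style = CACHE_DIR / "audio" / "speech"

# Path normalisation removes "." components and trailing slashes, so both
# forms resolve to the same location.
print(old_style == new_style)  # True
```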

View File

@ -230,9 +230,12 @@ async def ldap_auth(request: Request, response: Response, form_data: LdapForm):
entry = connection_app.entries[0]
username = str(entry[f"{LDAP_ATTRIBUTE_FOR_USERNAME}"]).lower()
mail = str(entry[f"{LDAP_ATTRIBUTE_FOR_MAIL}"])
if not mail or mail == "" or mail == "[]":
raise HTTPException(400, f"User {form_data.user} does not have mail.")
email = str(entry[f"{LDAP_ATTRIBUTE_FOR_MAIL}"])
if not email or email == "" or email == "[]":
raise HTTPException(400, f"User {form_data.user} does not have email.")
else:
email = email.lower()
cn = str(entry["cn"])
user_dn = entry.entry_dn
@ -247,7 +250,7 @@ async def ldap_auth(request: Request, response: Response, form_data: LdapForm):
if not connection_user.bind():
raise HTTPException(400, f"Authentication failed for {form_data.user}")
user = Users.get_user_by_email(mail)
user = Users.get_user_by_email(email)
if not user:
try:
user_count = Users.get_num_users()
@ -259,7 +262,10 @@ async def ldap_auth(request: Request, response: Response, form_data: LdapForm):
)
user = Auths.insert_new_auth(
email=mail, password=str(uuid.uuid4()), name=cn, role=role
email=email,
password=str(uuid.uuid4()),
name=cn,
role=role,
)
if not user:
@ -272,7 +278,7 @@ async def ldap_auth(request: Request, response: Response, form_data: LdapForm):
except Exception as err:
raise HTTPException(500, detail=ERROR_MESSAGES.DEFAULT(err))
user = Auths.authenticate_user_by_trusted_header(mail)
user = Auths.authenticate_user_by_trusted_header(email)
if user:
token = create_token(

View File

@ -70,6 +70,7 @@ async def set_direct_connections_config(
# CodeInterpreterConfig
############################
class CodeInterpreterConfigForm(BaseModel):
ENABLE_CODE_EXECUTION: bool
CODE_EXECUTION_ENGINE: str
CODE_EXECUTION_JUPYTER_URL: Optional[str]
CODE_EXECUTION_JUPYTER_AUTH: Optional[str]
@ -89,6 +90,7 @@ class CodeInterpreterConfigForm(BaseModel):
@router.get("/code_execution", response_model=CodeInterpreterConfigForm)
async def get_code_execution_config(request: Request, user=Depends(get_admin_user)):
return {
"ENABLE_CODE_EXECUTION": request.app.state.config.ENABLE_CODE_EXECUTION,
"CODE_EXECUTION_ENGINE": request.app.state.config.CODE_EXECUTION_ENGINE,
"CODE_EXECUTION_JUPYTER_URL": request.app.state.config.CODE_EXECUTION_JUPYTER_URL,
"CODE_EXECUTION_JUPYTER_AUTH": request.app.state.config.CODE_EXECUTION_JUPYTER_AUTH,
@ -111,6 +113,8 @@ async def set_code_execution_config(
request: Request, form_data: CodeInterpreterConfigForm, user=Depends(get_admin_user)
):
request.app.state.config.ENABLE_CODE_EXECUTION = form_data.ENABLE_CODE_EXECUTION
request.app.state.config.CODE_EXECUTION_ENGINE = form_data.CODE_EXECUTION_ENGINE
request.app.state.config.CODE_EXECUTION_JUPYTER_URL = (
form_data.CODE_EXECUTION_JUPYTER_URL
@ -153,6 +157,7 @@ async def set_code_execution_config(
)
return {
"ENABLE_CODE_EXECUTION": request.app.state.config.ENABLE_CODE_EXECUTION,
"CODE_EXECUTION_ENGINE": request.app.state.config.CODE_EXECUTION_ENGINE,
"CODE_EXECUTION_JUPYTER_URL": request.app.state.config.CODE_EXECUTION_JUPYTER_URL,
"CODE_EXECUTION_JUPYTER_AUTH": request.app.state.config.CODE_EXECUTION_JUPYTER_AUTH,

View File

@ -74,7 +74,7 @@ async def create_new_function(
function = Functions.insert_new_function(user.id, function_type, form_data)
function_cache_dir = Path(CACHE_DIR) / "functions" / form_data.id
function_cache_dir = CACHE_DIR / "functions" / form_data.id
function_cache_dir.mkdir(parents=True, exist_ok=True)
if function:

View File

@ -25,7 +25,7 @@ from pydantic import BaseModel
log = logging.getLogger(__name__)
log.setLevel(SRC_LOG_LEVELS["IMAGES"])
IMAGE_CACHE_DIR = Path(CACHE_DIR).joinpath("./image/generations/")
IMAGE_CACHE_DIR = CACHE_DIR / "image" / "generations"
IMAGE_CACHE_DIR.mkdir(parents=True, exist_ok=True)
@ -517,7 +517,13 @@ async def image_generations(
images = []
for image in res["data"]:
image_data, content_type = load_b64_image_data(image["b64_json"])
if "url" in image:
image_data, content_type = load_url_image_data(
image["url"], headers
)
else:
image_data, content_type = load_b64_image_data(image["b64_json"])
url = upload_image(request, data, image_data, content_type, user)
images.append({"url": url})
return images

View File

@ -55,7 +55,7 @@ from open_webui.env import (
ENV,
SRC_LOG_LEVELS,
AIOHTTP_CLIENT_TIMEOUT,
AIOHTTP_CLIENT_TIMEOUT_OPENAI_MODEL_LIST,
AIOHTTP_CLIENT_TIMEOUT_MODEL_LIST,
BYPASS_MODEL_ACCESS_CONTROL,
)
from open_webui.constants import ERROR_MESSAGES
@ -72,7 +72,7 @@ log.setLevel(SRC_LOG_LEVELS["OLLAMA"])
async def send_get_request(url, key=None, user: UserModel = None):
timeout = aiohttp.ClientTimeout(total=AIOHTTP_CLIENT_TIMEOUT_OPENAI_MODEL_LIST)
timeout = aiohttp.ClientTimeout(total=AIOHTTP_CLIENT_TIMEOUT_MODEL_LIST)
try:
async with aiohttp.ClientSession(timeout=timeout, trust_env=True) as session:
async with session.get(
@ -216,7 +216,7 @@ async def verify_connection(
key = form_data.key
async with aiohttp.ClientSession(
timeout=aiohttp.ClientTimeout(total=AIOHTTP_CLIENT_TIMEOUT_OPENAI_MODEL_LIST)
timeout=aiohttp.ClientTimeout(total=AIOHTTP_CLIENT_TIMEOUT_MODEL_LIST)
) as session:
try:
async with session.get(

View File

@ -22,7 +22,7 @@ from open_webui.config import (
)
from open_webui.env import (
AIOHTTP_CLIENT_TIMEOUT,
AIOHTTP_CLIENT_TIMEOUT_OPENAI_MODEL_LIST,
AIOHTTP_CLIENT_TIMEOUT_MODEL_LIST,
ENABLE_FORWARD_USER_INFO_HEADERS,
BYPASS_MODEL_ACCESS_CONTROL,
)
@ -53,7 +53,7 @@ log.setLevel(SRC_LOG_LEVELS["OPENAI"])
async def send_get_request(url, key=None, user: UserModel = None):
timeout = aiohttp.ClientTimeout(total=AIOHTTP_CLIENT_TIMEOUT_OPENAI_MODEL_LIST)
timeout = aiohttp.ClientTimeout(total=AIOHTTP_CLIENT_TIMEOUT_MODEL_LIST)
try:
async with aiohttp.ClientSession(timeout=timeout, trust_env=True) as session:
async with session.get(
@ -192,7 +192,7 @@ async def speech(request: Request, user=Depends(get_verified_user)):
body = await request.body()
name = hashlib.sha256(body).hexdigest()
SPEECH_CACHE_DIR = Path(CACHE_DIR).joinpath("./audio/speech/")
SPEECH_CACHE_DIR = CACHE_DIR / "audio" / "speech"
SPEECH_CACHE_DIR.mkdir(parents=True, exist_ok=True)
file_path = SPEECH_CACHE_DIR.joinpath(f"{name}.mp3")
file_body_path = SPEECH_CACHE_DIR.joinpath(f"{name}.json")
@ -448,9 +448,7 @@ async def get_models(
r = None
async with aiohttp.ClientSession(
timeout=aiohttp.ClientTimeout(
total=AIOHTTP_CLIENT_TIMEOUT_OPENAI_MODEL_LIST
)
timeout=aiohttp.ClientTimeout(total=AIOHTTP_CLIENT_TIMEOUT_MODEL_LIST)
) as session:
try:
async with session.get(
@ -530,7 +528,7 @@ async def verify_connection(
key = form_data.key
async with aiohttp.ClientSession(
timeout=aiohttp.ClientTimeout(total=AIOHTTP_CLIENT_TIMEOUT_OPENAI_MODEL_LIST)
timeout=aiohttp.ClientTimeout(total=AIOHTTP_CLIENT_TIMEOUT_MODEL_LIST)
) as session:
try:
async with session.get(

View File

@ -59,7 +59,7 @@ from open_webui.retrieval.web.serpstack import search_serpstack
from open_webui.retrieval.web.tavily import search_tavily
from open_webui.retrieval.web.bing import search_bing
from open_webui.retrieval.web.exa import search_exa
from open_webui.retrieval.web.perplexity import search_perplexity
from open_webui.retrieval.utils import (
get_embedding_function,
@ -405,6 +405,7 @@ async def get_rag_config(request: Request, user=Depends(get_admin_user)):
"bing_search_v7_endpoint": request.app.state.config.BING_SEARCH_V7_ENDPOINT,
"bing_search_v7_subscription_key": request.app.state.config.BING_SEARCH_V7_SUBSCRIPTION_KEY,
"exa_api_key": request.app.state.config.EXA_API_KEY,
"perplexity_api_key": request.app.state.config.PERPLEXITY_API_KEY,
"result_count": request.app.state.config.RAG_WEB_SEARCH_RESULT_COUNT,
"trust_env": request.app.state.config.RAG_WEB_SEARCH_TRUST_ENV,
"concurrent_requests": request.app.state.config.RAG_WEB_SEARCH_CONCURRENT_REQUESTS,
@ -465,6 +466,7 @@ class WebSearchConfig(BaseModel):
bing_search_v7_endpoint: Optional[str] = None
bing_search_v7_subscription_key: Optional[str] = None
exa_api_key: Optional[str] = None
perplexity_api_key: Optional[str] = None
result_count: Optional[int] = None
concurrent_requests: Optional[int] = None
trust_env: Optional[bool] = None
@ -617,6 +619,10 @@ async def update_rag_config(
request.app.state.config.EXA_API_KEY = form_data.web.search.exa_api_key
request.app.state.config.PERPLEXITY_API_KEY = (
form_data.web.search.perplexity_api_key
)
request.app.state.config.RAG_WEB_SEARCH_RESULT_COUNT = (
form_data.web.search.result_count
)
@ -683,6 +689,7 @@ async def update_rag_config(
"bing_search_v7_endpoint": request.app.state.config.BING_SEARCH_V7_ENDPOINT,
"bing_search_v7_subscription_key": request.app.state.config.BING_SEARCH_V7_SUBSCRIPTION_KEY,
"exa_api_key": request.app.state.config.EXA_API_KEY,
"perplexity_api_key": request.app.state.config.PERPLEXITY_API_KEY,
"result_count": request.app.state.config.RAG_WEB_SEARCH_RESULT_COUNT,
"concurrent_requests": request.app.state.config.RAG_WEB_SEARCH_CONCURRENT_REQUESTS,
"trust_env": request.app.state.config.RAG_WEB_SEARCH_TRUST_ENV,
@ -1182,9 +1189,13 @@ def process_web(
content = " ".join([doc.page_content for doc in docs])
log.debug(f"text_content: {content}")
save_docs_to_vector_db(
request, docs, collection_name, overwrite=True, user=user
)
if not request.app.state.config.BYPASS_WEB_SEARCH_EMBEDDING_AND_RETRIEVAL:
save_docs_to_vector_db(
request, docs, collection_name, overwrite=True, user=user
)
else:
collection_name = None
return {
"status": True,
@ -1196,6 +1207,7 @@ def process_web(
},
"meta": {
"name": form_data.url,
"source": form_data.url,
},
},
}
@ -1221,6 +1233,7 @@ def search_web(request: Request, engine: str, query: str) -> list[SearchResult]:
- SERPLY_API_KEY
- TAVILY_API_KEY
- EXA_API_KEY
- PERPLEXITY_API_KEY
- SEARCHAPI_API_KEY + SEARCHAPI_ENGINE (by default `google`)
- SERPAPI_API_KEY + SERPAPI_ENGINE (by default `google`)
Args:
@ -1385,6 +1398,13 @@ def search_web(request: Request, engine: str, query: str) -> list[SearchResult]:
request.app.state.config.RAG_WEB_SEARCH_RESULT_COUNT,
request.app.state.config.RAG_WEB_SEARCH_DOMAIN_FILTER_LIST,
)
elif engine == "perplexity":
return search_perplexity(
request.app.state.config.PERPLEXITY_API_KEY,
query,
request.app.state.config.RAG_WEB_SEARCH_RESULT_COUNT,
request.app.state.config.RAG_WEB_SEARCH_DOMAIN_FILTER_LIST,
)
else:
raise Exception("No search engine API key found in environment variables")

View File

@ -105,7 +105,7 @@ async def create_new_tools(
specs = get_tools_specs(TOOLS[form_data.id])
tools = Tools.insert_new_tool(user.id, form_data, specs)
tool_cache_dir = Path(CACHE_DIR) / "tools" / form_data.id
tool_cache_dir = CACHE_DIR / "tools" / form_data.id
tool_cache_dir.mkdir(parents=True, exist_ok=True)
if tools:

View File

@ -12,6 +12,7 @@ from open_webui.env import (
ENABLE_WEBSOCKET_SUPPORT,
WEBSOCKET_MANAGER,
WEBSOCKET_REDIS_URL,
WEBSOCKET_REDIS_LOCK_TIMEOUT,
)
from open_webui.utils.auth import decode_token
from open_webui.socket.utils import RedisDict, RedisLock
@ -61,7 +62,7 @@ if WEBSOCKET_MANAGER == "redis":
clean_up_lock = RedisLock(
redis_url=WEBSOCKET_REDIS_URL,
lock_name="usage_cleanup_lock",
timeout_secs=TIMEOUT_DURATION * 2,
timeout_secs=WEBSOCKET_REDIS_LOCK_TIMEOUT,
)
aquire_func = clean_up_lock.aquire_lock
renew_func = clean_up_lock.renew_lock


View File

@ -3,13 +3,13 @@
"short_name": "WebUI",
"icons": [
{
"src": "/favicon/web-app-manifest-192x192.png",
"src": "/static/web-app-manifest-192x192.png",
"sizes": "192x192",
"type": "image/png",
"purpose": "maskable"
},
{
"src": "/favicon/web-app-manifest-512x512.png",
"src": "/static/web-app-manifest-512x512.png",
"sizes": "512x512",
"type": "image/png",
"purpose": "maskable"


View File

@ -101,19 +101,33 @@ class LocalStorageProvider(StorageProvider):
class S3StorageProvider(StorageProvider):
def __init__(self):
self.s3_client = boto3.client(
"s3",
region_name=S3_REGION_NAME,
endpoint_url=S3_ENDPOINT_URL,
aws_access_key_id=S3_ACCESS_KEY_ID,
aws_secret_access_key=S3_SECRET_ACCESS_KEY,
config=Config(
s3={
"use_accelerate_endpoint": S3_USE_ACCELERATE_ENDPOINT,
"addressing_style": S3_ADDRESSING_STYLE,
},
),
config = Config(
s3={
"use_accelerate_endpoint": S3_USE_ACCELERATE_ENDPOINT,
"addressing_style": S3_ADDRESSING_STYLE,
},
)
# If access key and secret are provided, use them for authentication
if S3_ACCESS_KEY_ID and S3_SECRET_ACCESS_KEY:
self.s3_client = boto3.client(
"s3",
region_name=S3_REGION_NAME,
endpoint_url=S3_ENDPOINT_URL,
aws_access_key_id=S3_ACCESS_KEY_ID,
aws_secret_access_key=S3_SECRET_ACCESS_KEY,
config=config,
)
else:
# If no explicit credentials are provided, fall back to default AWS credentials
# This supports workload identity (IAM roles for EC2, EKS, etc.)
self.s3_client = boto3.client(
"s3",
region_name=S3_REGION_NAME,
endpoint_url=S3_ENDPOINT_URL,
config=config,
)
self.bucket_name = S3_BUCKET_NAME
self.key_prefix = S3_KEY_PREFIX if S3_KEY_PREFIX else ""
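The branching above means explicit keys win, and when none are configured the key arguments are simply omitted so boto3 falls back to its default credential chain (environment variables, shared config, instance/IAM roles). A small sketch of that selection as a pure helper; it only assembles the kwargs dict, and the helper name is hypothetical:

```python
# Sketch of the credential-selection logic in S3StorageProvider.__init__:
# explicit keys are passed through; otherwise they are omitted entirely so
# boto3's default credential resolution (workload identity etc.) applies.
from typing import Optional


def s3_client_kwargs(
    region: str,
    endpoint: Optional[str],
    access_key: Optional[str],
    secret_key: Optional[str],
) -> dict:
    kwargs = {"region_name": region, "endpoint_url": endpoint}
    if access_key and secret_key:
        # Explicit credentials were configured; pass them through.
        kwargs["aws_access_key_id"] = access_key
        kwargs["aws_secret_access_key"] = secret_key
    # Otherwise leave the key arguments out so boto3 falls back to its
    # default chain.
    return kwargs


print("aws_access_key_id" in s3_client_kwargs("us-east-1", None, None, None))  # False
```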

View File

@ -187,6 +187,17 @@ class TestS3StorageProvider:
assert not (upload_dir / self.filename).exists()
assert not (upload_dir / self.filename_extra).exists()
def test_init_without_credentials(self, monkeypatch):
"""Test that S3StorageProvider can initialize without explicit credentials."""
# Temporarily unset the environment variables
monkeypatch.setattr(provider, "S3_ACCESS_KEY_ID", None)
monkeypatch.setattr(provider, "S3_SECRET_ACCESS_KEY", None)
# Should not raise an exception
storage = provider.S3StorageProvider()
assert storage.s3_client is not None
assert storage.bucket_name == provider.S3_BUCKET_NAME
class TestGCSStorageProvider:
Storage = provider.GCSStorageProvider()

View File

@ -72,7 +72,7 @@ def get_license_data(app, key):
if key:
try:
res = requests.post(
"https://api.openwebui.com/api/v1/license",
"https://api.openwebui.com/api/v1/license/",
json={"key": key, "version": "1"},
timeout=5,
)
@ -83,11 +83,12 @@ def get_license_data(app, key):
if k == "resources":
for p, c in v.items():
globals().get("override_static", lambda a, b: None)(p, c)
elif k == "user_count":
elif k == "count":
setattr(app.state, "USER_COUNT", v)
elif k == "webui_name":
elif k == "name":
setattr(app.state, "WEBUI_NAME", v)
elif k == "metadata":
setattr(app.state, "LICENSE_METADATA", v)
return True
else:
log.error(

View File

@ -149,7 +149,7 @@ async def generate_direct_chat_completion(
}
)
if "error" in res:
if "error" in res and res["error"]:
raise Exception(res["error"])
return res
@ -328,9 +328,14 @@ async def chat_completed(request: Request, form_data: dict, user: Any):
}
try:
filter_functions = [
Functions.get_function_by_id(filter_id)
for filter_id in get_sorted_filter_ids(model)
]
result, _ = await process_filter_functions(
request=request,
filter_ids=get_sorted_filter_ids(model),
filter_functions=filter_functions,
filter_type="outlet",
form_data=data,
extra_params=extra_params,

View File

@ -1,148 +1,210 @@
import asyncio
import json
import logging
import uuid
from typing import Optional
import aiohttp
import websockets
import requests
from urllib.parse import urljoin
from pydantic import BaseModel
from open_webui.env import SRC_LOG_LEVELS
logger = logging.getLogger(__name__)
logger.setLevel(SRC_LOG_LEVELS["MAIN"])
async def execute_code_jupyter(
jupyter_url, code, token=None, password=None, timeout=10
):
class ResultModel(BaseModel):
"""
Executes Python code in a Jupyter kernel.
Supports authentication with a token or password.
:param jupyter_url: Jupyter server URL (e.g., "http://localhost:8888")
:param code: Code to execute
:param token: Jupyter authentication token (optional)
:param password: Jupyter password (optional)
:param timeout: WebSocket timeout in seconds (default: 10s)
:return: Dictionary with stdout, stderr, and result
- Images are prefixed with "base64:image/png," and separated by newlines if multiple.
Execute Code Result Model
"""
session = requests.Session() # Maintain cookies
headers = {} # Headers for requests
# Authenticate using password
if password and not token:
stdout: Optional[str] = ""
stderr: Optional[str] = ""
result: Optional[str] = ""
class JupyterCodeExecuter:
"""
Execute code in jupyter notebook
"""
def __init__(
self,
base_url: str,
code: str,
token: str = "",
password: str = "",
timeout: int = 60,
):
"""
:param base_url: Jupyter server URL (e.g., "http://localhost:8888")
:param code: Code to execute
:param token: Jupyter authentication token (optional)
:param password: Jupyter password (optional)
:param timeout: WebSocket timeout in seconds (default: 60s)
"""
self.base_url = base_url.rstrip("/")
self.code = code
self.token = token
self.password = password
self.timeout = timeout
self.kernel_id = ""
self.session = aiohttp.ClientSession(base_url=self.base_url)
self.params = {}
self.result = ResultModel()
async def __aenter__(self):
return self
async def __aexit__(self, exc_type, exc_val, exc_tb):
if self.kernel_id:
try:
async with self.session.delete(
f"/api/kernels/{self.kernel_id}", params=self.params
) as response:
response.raise_for_status()
except Exception as err:
logger.exception("close kernel failed, %s", err)
await self.session.close()
async def run(self) -> ResultModel:
try:
login_url = urljoin(jupyter_url, "/login")
response = session.get(login_url)
await self.sign_in()
await self.init_kernel()
await self.execute_code()
except Exception as err:
logger.exception("execute code failed, %s", err)
self.result.stderr = f"Error: {err}"
return self.result
async def sign_in(self) -> None:
# password authentication
if self.password and not self.token:
async with self.session.get("/login") as response:
response.raise_for_status()
xsrf_token = response.cookies["_xsrf"].value
if not xsrf_token:
raise ValueError("_xsrf token not found")
self.session.cookie_jar.update_cookies(response.cookies)
self.session.headers.update({"X-XSRFToken": xsrf_token})
async with self.session.post(
"/login",
data={"_xsrf": xsrf_token, "password": self.password},
allow_redirects=False,
) as response:
response.raise_for_status()
self.session.cookie_jar.update_cookies(response.cookies)
# token authentication
if self.token:
self.params.update({"token": self.token})
async def init_kernel(self) -> None:
async with self.session.post(
url="/api/kernels", params=self.params
) as response:
response.raise_for_status()
xsrf_token = session.cookies.get("_xsrf")
if not xsrf_token:
raise ValueError("Failed to fetch _xsrf token")
login_data = {"_xsrf": xsrf_token, "password": password}
login_response = session.post(
login_url, data=login_data, cookies=session.cookies
)
login_response.raise_for_status()
headers["X-XSRFToken"] = xsrf_token
except Exception as e:
return {
"stdout": "",
"stderr": f"Authentication Error: {str(e)}",
"result": "",
}
# Construct API URLs with authentication token if provided
params = f"?token={token}" if token else ""
kernel_url = urljoin(jupyter_url, f"/api/kernels{params}")
try:
response = session.post(kernel_url, headers=headers, cookies=session.cookies)
response.raise_for_status()
kernel_id = response.json()["id"]
websocket_url = urljoin(
jupyter_url.replace("http", "ws"),
f"/api/kernels/{kernel_id}/channels{params}",
)
kernel_data = await response.json()
self.kernel_id = kernel_data["id"]
def init_ws(self) -> (str, dict):
ws_base = self.base_url.replace("http", "ws")
ws_params = "?" + "&".join([f"{key}={val}" for key, val in self.params.items()])
websocket_url = f"{ws_base}/api/kernels/{self.kernel_id}/channels{ws_params if len(ws_params) > 1 else ''}"
ws_headers = {}
if password and not token:
ws_headers["X-XSRFToken"] = session.cookies.get("_xsrf")
cookies = {name: value for name, value in session.cookies.items()}
ws_headers["Cookie"] = "; ".join(
[f"{name}={value}" for name, value in cookies.items()]
)
if self.password and not self.token:
ws_headers = {
"Cookie": "; ".join(
[
f"{cookie.key}={cookie.value}"
for cookie in self.session.cookie_jar
]
),
**self.session.headers,
}
return websocket_url, ws_headers
async def execute_code(self) -> None:
# initialize ws
websocket_url, ws_headers = self.init_ws()
# execute
async with websockets.connect(
websocket_url, additional_headers=ws_headers
) as ws:
msg_id = str(uuid.uuid4())
execute_request = {
"header": {
"msg_id": msg_id,
"msg_type": "execute_request",
"username": "user",
"session": str(uuid.uuid4()),
"date": "",
"version": "5.3",
},
"parent_header": {},
"metadata": {},
"content": {
"code": code,
"silent": False,
"store_history": True,
"user_expressions": {},
"allow_stdin": False,
"stop_on_error": True,
},
"channel": "shell",
}
await ws.send(json.dumps(execute_request))
await self.execute_in_jupyter(ws)
stdout, stderr, result = "", "", []
while True:
try:
message = await asyncio.wait_for(ws.recv(), timeout)
message_data = json.loads(message)
if message_data.get("parent_header", {}).get("msg_id") == msg_id:
msg_type = message_data.get("msg_type")
if msg_type == "stream":
if message_data["content"]["name"] == "stdout":
stdout += message_data["content"]["text"]
elif message_data["content"]["name"] == "stderr":
stderr += message_data["content"]["text"]
elif msg_type in ("execute_result", "display_data"):
data = message_data["content"]["data"]
if "image/png" in data:
result.append(
f"data:image/png;base64,{data['image/png']}"
)
elif "text/plain" in data:
result.append(data["text/plain"])
elif msg_type == "error":
stderr += "\n".join(message_data["content"]["traceback"])
elif (
msg_type == "status"
and message_data["content"]["execution_state"] == "idle"
):
async def execute_in_jupyter(self, ws) -> None:
# send message
msg_id = uuid.uuid4().hex
await ws.send(
json.dumps(
{
"header": {
"msg_id": msg_id,
"msg_type": "execute_request",
"username": "user",
"session": uuid.uuid4().hex,
"date": "",
"version": "5.3",
},
"parent_header": {},
"metadata": {},
"content": {
"code": self.code,
"silent": False,
"store_history": True,
"user_expressions": {},
"allow_stdin": False,
"stop_on_error": True,
},
"channel": "shell",
}
)
)
# parse message
stdout, stderr, result = "", "", []
while True:
try:
# wait for message
message = await asyncio.wait_for(ws.recv(), self.timeout)
message_data = json.loads(message)
# msg id does not match; skip
if message_data.get("parent_header", {}).get("msg_id") != msg_id:
continue
# check message type
msg_type = message_data.get("msg_type")
match msg_type:
case "stream":
if message_data["content"]["name"] == "stdout":
stdout += message_data["content"]["text"]
elif message_data["content"]["name"] == "stderr":
stderr += message_data["content"]["text"]
case "execute_result" | "display_data":
data = message_data["content"]["data"]
if "image/png" in data:
result.append(f"data:image/png;base64,{data['image/png']}")
elif "text/plain" in data:
result.append(data["text/plain"])
case "error":
stderr += "\n".join(message_data["content"]["traceback"])
case "status":
if message_data["content"]["execution_state"] == "idle":
break
except asyncio.TimeoutError:
stderr += "\nExecution timed out."
break
self.result.stdout = stdout.strip()
self.result.stderr = stderr.strip()
self.result.result = "\n".join(result).strip() if result else ""
except Exception as e:
return {"stdout": "", "stderr": f"Error: {str(e)}", "result": ""}
finally:
if kernel_id:
requests.delete(
f"{kernel_url}/{kernel_id}", headers=headers, cookies=session.cookies
)
return {
"stdout": stdout.strip(),
"stderr": stderr.strip(),
"result": "\n".join(result).strip() if result else "",
}
async def execute_code_jupyter(
base_url: str, code: str, token: str = "", password: str = "", timeout: int = 60
) -> dict:
async with JupyterCodeExecuter(
base_url, code, token, password, timeout
) as executor:
result = await executor.run()
return result.model_dump()


@ -9,7 +9,7 @@ log = logging.getLogger(__name__)
log.setLevel(SRC_LOG_LEVELS["MAIN"])
def get_sorted_filter_ids(model):
def get_sorted_filter_ids(model: dict):
def get_priority(function_id):
function = Functions.get_function_by_id(function_id)
if function is not None and hasattr(function, "valves"):
@ -33,12 +33,13 @@ def get_sorted_filter_ids(model):
async def process_filter_functions(
request, filter_ids, filter_type, form_data, extra_params
request, filter_functions, filter_type, form_data, extra_params
):
skip_files = None
for filter_id in filter_ids:
filter = Functions.get_function_by_id(filter_id)
for function in filter_functions:
filter = function
filter_id = function.id
if not filter:
continue
@ -48,6 +49,11 @@ async def process_filter_functions(
function_module, _, _ = load_function_module_by_id(filter_id)
request.app.state.FUNCTIONS[filter_id] = function_module
# Prepare handler function
handler = getattr(function_module, filter_type, None)
if not handler:
continue
# Check if the function has a file_handler variable
if filter_type == "inlet" and hasattr(function_module, "file_handler"):
skip_files = function_module.file_handler
@ -59,11 +65,6 @@ async def process_filter_functions(
**(valves if valves else {})
)
# Prepare handler function
handler = getattr(function_module, filter_type, None)
if not handler:
continue
try:
# Prepare parameters
sig = inspect.signature(handler)


@ -68,6 +68,7 @@ from open_webui.utils.misc import (
get_last_user_message,
get_last_assistant_message,
prepend_to_first_user_message_content,
convert_logit_bias_input_to_json,
)
from open_webui.utils.tools import get_tools
from open_webui.utils.plugin import load_function_module_by_id
@ -610,11 +611,18 @@ def apply_params_to_form_data(form_data, model):
if "reasoning_effort" in params:
form_data["reasoning_effort"] = params["reasoning_effort"]
if "logit_bias" in params:
try:
form_data["logit_bias"] = json.loads(
convert_logit_bias_input_to_json(params["logit_bias"])
)
except Exception as e:
print(f"Error parsing logit_bias: {e}")
return form_data
async def process_chat_payload(request, form_data, metadata, user, model):
async def process_chat_payload(request, form_data, user, metadata, model):
form_data = apply_params_to_form_data(form_data, model)
log.debug(f"form_data: {form_data}")
@ -707,9 +715,14 @@ async def process_chat_payload(request, form_data, metadata, user, model):
raise e
try:
filter_functions = [
Functions.get_function_by_id(filter_id)
for filter_id in get_sorted_filter_ids(model)
]
form_data, flags = await process_filter_functions(
request=request,
filter_ids=get_sorted_filter_ids(model),
filter_functions=filter_functions,
filter_type="inlet",
form_data=form_data,
extra_params=extra_params,
@ -856,7 +869,7 @@ async def process_chat_payload(request, form_data, metadata, user, model):
async def process_chat_response(
request, response, form_data, user, events, metadata, tasks
request, response, form_data, user, metadata, model, events, tasks
):
async def background_tasks_handler():
message_map = Chats.get_messages_by_chat_id(metadata["chat_id"])
@ -1061,9 +1074,14 @@ async def process_chat_response(
},
"__metadata__": metadata,
"__request__": request,
"__model__": metadata.get("model"),
"__model__": model,
}
filter_ids = get_sorted_filter_ids(form_data.get("model"))
filter_functions = [
Functions.get_function_by_id(filter_id)
for filter_id in get_sorted_filter_ids(model)
]
print(f"{filter_functions=}")
# Streaming response
if event_emitter and event_caller:
@ -1470,7 +1488,7 @@ async def process_chat_response(
data, _ = await process_filter_functions(
request=request,
filter_ids=filter_ids,
filter_functions=filter_functions,
filter_type="stream",
form_data=data,
extra_params=extra_params,
@ -1544,9 +1562,59 @@ async def process_chat_response(
value = delta.get("content")
if value:
content = f"{content}{value}"
reasoning_content = delta.get("reasoning_content")
if reasoning_content:
if (
not content_blocks
or content_blocks[-1]["type"] != "reasoning"
):
reasoning_block = {
"type": "reasoning",
"start_tag": "think",
"end_tag": "/think",
"attributes": {
"type": "reasoning_content"
},
"content": "",
"started_at": time.time(),
}
content_blocks.append(reasoning_block)
else:
reasoning_block = content_blocks[-1]
reasoning_block["content"] += reasoning_content
data = {
"content": serialize_content_blocks(
content_blocks
)
}
if value:
if (
content_blocks
and content_blocks[-1]["type"]
== "reasoning"
and content_blocks[-1]
.get("attributes", {})
.get("type")
== "reasoning_content"
):
reasoning_block = content_blocks[-1]
reasoning_block["ended_at"] = time.time()
reasoning_block["duration"] = int(
reasoning_block["ended_at"]
- reasoning_block["started_at"]
)
content_blocks.append(
{
"type": "text",
"content": "",
}
)
content = f"{content}{value}"
if not content_blocks:
content_blocks.append(
{
@ -2017,7 +2085,7 @@ async def process_chat_response(
for event in events:
event, _ = await process_filter_functions(
request=request,
filter_ids=filter_ids,
filter_functions=filter_functions,
filter_type="stream",
form_data=event,
extra_params=extra_params,
@ -2029,7 +2097,7 @@ async def process_chat_response(
async for data in original_generator:
data, _ = await process_filter_functions(
request=request,
filter_ids=filter_ids,
filter_functions=filter_functions,
filter_type="stream",
form_data=data,
extra_params=extra_params,

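The new `reasoning_content` handling in this file folds streamed deltas into typed content blocks: reasoning text accumulates into a `reasoning` block, and the first regular `content` delta closes it (stamping a duration) and starts a `text` block. A condensed standalone sketch, with the tag/attribute fields omitted:

```python
import time


def accumulate_delta(content_blocks: list, delta: dict) -> list:
    """Fold one streamed delta into content blocks, mirroring the
    reasoning_content handling in process_chat_response (simplified)."""
    reasoning = delta.get("reasoning_content")
    if reasoning:
        # Open a reasoning block if the last block isn't one already.
        if not content_blocks or content_blocks[-1]["type"] != "reasoning":
            content_blocks.append(
                {"type": "reasoning", "content": "", "started_at": time.time()}
            )
        content_blocks[-1]["content"] += reasoning

    value = delta.get("content")
    if value:
        last = content_blocks[-1] if content_blocks else None
        if last and last["type"] == "reasoning":
            # First answer token closes the reasoning block.
            last["ended_at"] = time.time()
            last["duration"] = int(last["ended_at"] - last["started_at"])
            content_blocks.append({"type": "text", "content": ""})
        if not content_blocks:
            content_blocks.append({"type": "text", "content": ""})
        content_blocks[-1]["content"] += value
    return content_blocks
```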

@ -6,6 +6,7 @@ import logging
from datetime import timedelta
from pathlib import Path
from typing import Callable, Optional
import json
import collections.abc
@ -450,3 +451,15 @@ def parse_ollama_modelfile(model_text):
data["params"]["messages"] = messages
return data
def convert_logit_bias_input_to_json(user_input):
logit_bias_pairs = user_input.split(",")
logit_bias_json = {}
for pair in logit_bias_pairs:
token, bias = pair.split(":")
token = str(token.strip())
bias = int(bias.strip())
bias = 100 if bias > 100 else -100 if bias < -100 else bias
logit_bias_json[token] = bias
return json.dumps(logit_bias_json)
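The helper above turns a `"token:bias, token:bias"` string into a JSON object, clamping each bias to the [-100, 100] range. A quick standalone check of that contract (the function is re-declared here, with the chained conditional rewritten as `max`/`min`, so the snippet runs on its own):

```python
import json


def convert_logit_bias_input_to_json(user_input: str) -> str:
    """Parse "token:bias, token:bias" into a JSON object,
    clamping each bias to [-100, 100]."""
    logit_bias_json = {}
    for pair in user_input.split(","):
        token, bias = pair.split(":")
        logit_bias_json[str(token.strip())] = max(-100, min(100, int(bias.strip())))
    return json.dumps(logit_bias_json)
```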


@ -234,7 +234,7 @@ class OAuthManager:
log.warning(f"OAuth callback error: {e}")
raise HTTPException(400, detail=ERROR_MESSAGES.INVALID_CRED)
user_data: UserInfo = token.get("userinfo")
if not user_data or "email" not in user_data:
if not user_data or auth_manager_config.OAUTH_EMAIL_CLAIM not in user_data:
user_data: UserInfo = await client.userinfo(token=token)
if not user_data:
log.warning(f"OAuth callback failed, user data is missing: {token}")
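The change above trusts the claims embedded in the ID token first and only falls back to the provider's userinfo endpoint when the configured email claim is absent. The decision reduces to this sketch (`fetch_userinfo` is a stand-in for the real `client.userinfo` call):

```python
def resolve_user_data(token: dict, fetch_userinfo, email_claim: str = "email"):
    """Prefer claims embedded in the token; fall back to the
    provider's userinfo endpoint when the email claim is missing."""
    user_data = token.get("userinfo")
    if not user_data or email_claim not in user_data:
        # Extra round trip only when the token claims are insufficient.
        user_data = fetch_userinfo(token)
    return user_data
```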


@ -62,6 +62,7 @@ def apply_model_params_to_body_openai(params: dict, form_data: dict) -> dict:
"reasoning_effort": str,
"seed": lambda x: x,
"stop": lambda x: [bytes(s, "utf-8").decode("unicode_escape") for s in x],
"logit_bias": lambda x: x,
}
return apply_model_params_to_body(params, form_data, mappings)
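The `mappings` table above whitelists each permitted parameter alongside a converter (`lambda x: x` passes a value through unchanged, as the new `logit_bias` entry does). The apply step is a filtered copy, sketched here in simplified form:

```python
def apply_model_params_to_body(params: dict, form_data: dict, mappings: dict) -> dict:
    """Copy only whitelisted params into the request body, casting each
    value with its mapped converter. Simplified sketch of the real helper."""
    for key, cast in mappings.items():
        if (value := params.get(key)) is not None:
            form_data[key] = cast(value)
    return form_data
```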


@ -110,7 +110,7 @@ class PDFGenerator:
# When running using `pip install -e .` the static directory is in the site packages.
# This path only works if `open-webui serve` is run from the root of this project.
if not FONTS_DIR.exists():
FONTS_DIR = Path("./backend/static/fonts")
FONTS_DIR = Path(".") / "backend" / "static" / "fonts"
pdf.add_font("NotoSans", "", f"{FONTS_DIR}/NotoSans-Regular.ttf")
pdf.add_font("NotoSans", "b", f"{FONTS_DIR}/NotoSans-Bold.ttf")


@ -104,7 +104,7 @@ def replace_prompt_variable(template: str, prompt: str) -> str:
def replace_messages_variable(
template: str, messages: Optional[list[str]] = None
template: str, messages: Optional[list[dict]] = None
) -> str:
def replacement_function(match):
full_match = match.group(0)


@ -1,5 +1,5 @@
fastapi==0.115.7
uvicorn[standard]==0.30.6
uvicorn[standard]==0.34.0
pydantic==2.10.6
python-multipart==0.0.18
@ -13,14 +13,14 @@ async-timeout
aiocache
aiofiles
sqlalchemy==2.0.32
sqlalchemy==2.0.38
alembic==1.14.0
peewee==3.17.8
peewee==3.17.9
peewee-migrate==1.12.2
psycopg2-binary==2.9.9
pgvector==0.3.5
PyMySQL==1.1.1
bcrypt==4.2.0
bcrypt==4.3.0
pymongo
redis
@ -40,8 +40,8 @@ anthropic
google-generativeai==0.7.2
tiktoken
langchain==0.3.7
langchain-community==0.3.7
langchain==0.3.19
langchain-community==0.3.18
fake-useragent==1.5.1
chromadb==0.6.2
@ -49,6 +49,8 @@ pymilvus==2.5.0
qdrant-client~=1.12.0
opensearch-py==2.8.0
playwright==1.49.1 # Caution: version must match docker-compose.playwright.yaml
elasticsearch==8.17.1
transformers
sentence-transformers==3.3.1
@ -85,7 +87,7 @@ faster-whisper==1.1.1
PyJWT[crypto]==2.10.1
authlib==1.4.1
black==24.8.0
black==25.1.0
langfuse==2.44.0
youtube-transcript-api==0.6.3
pytube==15.0.0

package-lock.json generated

@ -1,13 +1,14 @@
{
"name": "open-webui",
"version": "0.5.18",
"version": "0.5.20",
"lockfileVersion": 3,
"requires": true,
"packages": {
"": {
"name": "open-webui",
"version": "0.5.18",
"version": "0.5.20",
"dependencies": {
"@azure/msal-browser": "^4.5.0",
"@codemirror/lang-javascript": "^6.2.2",
"@codemirror/lang-python": "^6.1.6",
"@codemirror/language-data": "^6.5.1",
@ -135,6 +136,27 @@
"node": ">=6.0.0"
}
},
"node_modules/@azure/msal-browser": {
"version": "4.5.0",
"resolved": "https://registry.npmjs.org/@azure/msal-browser/-/msal-browser-4.5.0.tgz",
"integrity": "sha512-H7mWmu8yI0n0XxhJobrgncXI6IU5h8DKMiWDHL5y+Dc58cdg26GbmaMUehbUkdKAQV2OTiFa4FUa6Fdu/wIxBg==",
"license": "MIT",
"dependencies": {
"@azure/msal-common": "15.2.0"
},
"engines": {
"node": ">=0.8.0"
}
},
"node_modules/@azure/msal-common": {
"version": "15.2.0",
"resolved": "https://registry.npmjs.org/@azure/msal-common/-/msal-common-15.2.0.tgz",
"integrity": "sha512-HiYfGAKthisUYqHG1nImCf/uzcyS31wng3o+CycWLIM9chnYJ9Lk6jZ30Y6YiYYpTQ9+z/FGUpiKKekd3Arc0A==",
"license": "MIT",
"engines": {
"node": ">=0.8.0"
}
},
"node_modules/@babel/runtime": {
"version": "7.26.9",
"resolved": "https://registry.npmjs.org/@babel/runtime/-/runtime-7.26.9.tgz",


@ -1,6 +1,6 @@
{
"name": "open-webui",
"version": "0.5.18",
"version": "0.5.20",
"private": true,
"scripts": {
"dev": "npm run pyodide:fetch && vite dev --host",
@ -51,6 +51,7 @@
},
"type": "module",
"dependencies": {
"@azure/msal-browser": "^4.5.0",
"@codemirror/lang-javascript": "^6.2.2",
"@codemirror/lang-python": "^6.1.6",
"@codemirror/language-data": "^6.5.1",


@ -7,7 +7,7 @@ authors = [
license = { file = "LICENSE" }
dependencies = [
"fastapi==0.115.7",
"uvicorn[standard]==0.30.6",
"uvicorn[standard]==0.34.0",
"pydantic==2.10.6",
"python-multipart==0.0.18",
@ -21,14 +21,14 @@ dependencies = [
"aiocache",
"aiofiles",
"sqlalchemy==2.0.32",
"sqlalchemy==2.0.38",
"alembic==1.14.0",
"peewee==3.17.8",
"peewee==3.17.9",
"peewee-migrate==1.12.2",
"psycopg2-binary==2.9.9",
"pgvector==0.3.5",
"PyMySQL==1.1.1",
"bcrypt==4.2.0",
"bcrypt==4.3.0",
"pymongo",
"redis",
@ -48,8 +48,8 @@ dependencies = [
"google-generativeai==0.7.2",
"tiktoken",
"langchain==0.3.7",
"langchain-community==0.3.7",
"langchain==0.3.19",
"langchain-community==0.3.18",
"fake-useragent==1.5.1",
"chromadb==0.6.2",
@ -57,6 +57,7 @@ dependencies = [
"qdrant-client~=1.12.0",
"opensearch-py==2.8.0",
"playwright==1.49.1",
"elasticsearch==8.17.1",
"transformers",
"sentence-transformers==3.3.1",
@ -92,7 +93,7 @@ dependencies = [
"PyJWT[crypto]==2.10.1",
"authlib==1.4.1",
"black==24.8.0",
"black==25.1.0",
"langfuse==2.44.0",
"youtube-transcript-api==0.6.3",
"pytube==15.0.0",


@ -2,12 +2,14 @@
<html lang="en">
<head>
<meta charset="utf-8" />
<link rel="icon" type="image/png" href="/favicon/favicon-96x96.png" sizes="96x96" />
<link rel="icon" type="image/svg+xml" href="/favicon/favicon.svg" />
<link rel="shortcut icon" href="/favicon/favicon.ico" />
<link rel="apple-touch-icon" sizes="180x180" href="/favicon/apple-touch-icon.png" />
<link rel="icon" type="image/png" href="/static/favicon.png" />
<link rel="icon" type="image/png" href="/static/favicon-96x96.png" sizes="96x96" />
<link rel="icon" type="image/svg+xml" href="/static/favicon.svg" />
<link rel="shortcut icon" href="/static/favicon.ico" />
<link rel="apple-touch-icon" sizes="180x180" href="/static/apple-touch-icon.png" />
<meta name="apple-mobile-web-app-title" content="Open WebUI" />
<link rel="manifest" href="/favicon/site.webmanifest" />
<link rel="manifest" href="/manifest.json" />
<meta
name="viewport"
content="width=device-width, initial-scale=1, maximum-scale=1, viewport-fit=cover"
@ -74,6 +76,28 @@
}
}
});
function setSplashImage() {
const logo = document.getElementById('logo');
const isDarkMode = document.documentElement.classList.contains('dark');
if (isDarkMode) {
const darkImage = new Image();
darkImage.src = '/static/splash-dark.png';
darkImage.onload = () => {
logo.src = '/static/splash-dark.png';
logo.style.filter = ''; // Ensure no inversion is applied if splash-dark.png exists
};
darkImage.onerror = () => {
logo.style.filter = 'invert(1)'; // Invert image if splash-dark.png is missing
};
}
}
// Runs after classes are assigned
window.onload = setSplashImage;
})();
</script>
@ -176,10 +200,6 @@
background: #000;
}
html.dark #splash-screen img {
filter: invert(1);
}
html.her #splash-screen {
background: #983724;
}


@ -1,5 +1,5 @@
<script>
import { getContext } from 'svelte';
import { getContext, onMount } from 'svelte';
const i18n = getContext('i18n');
import { WEBUI_BASE_URL } from '$lib/constants';
@ -10,6 +10,32 @@
export let show = true;
export let getStartedHandler = () => {};
function setLogoImage() {
const logo = document.getElementById('logo');
if (logo) {
const isDarkMode = document.documentElement.classList.contains('dark');
if (isDarkMode) {
const darkImage = new Image();
darkImage.src = '/static/favicon-dark.png';
darkImage.onload = () => {
logo.src = '/static/favicon-dark.png';
logo.style.filter = ''; // Ensure no inversion is applied if splash-dark.png exists
};
darkImage.onerror = () => {
logo.style.filter = 'invert(1)'; // Invert image if splash-dark.png is missing
};
}
}
}
$: if (show) {
setLogoImage();
}
</script>
{#if show}
@ -18,6 +44,7 @@
<div class="flex space-x-2">
<div class=" self-center">
<img
id="logo"
crossorigin="anonymous"
src="{WEBUI_BASE_URL}/static/favicon.png"
class=" w-6 rounded-full"


@ -45,6 +45,16 @@
<hr class=" border-gray-100 dark:border-gray-850 my-2" />
<div class="mb-2.5">
<div class=" flex w-full justify-between">
<div class=" self-center text-xs font-medium">
{$i18n.t('Enable Code Execution')}
</div>
<Switch bind:state={config.ENABLE_CODE_EXECUTION} />
</div>
</div>
<div class="mb-2.5">
<div class="flex w-full justify-between">
<div class=" self-center text-xs font-medium">{$i18n.t('Code Execution Engine')}</div>


@ -1,4 +1,6 @@
<script lang="ts">
import DOMPurify from 'dompurify';
import { getBackendConfig, getVersionUpdates, getWebhookUrl, updateWebhookUrl } from '$lib/apis';
import {
getAdminConfig,
@ -220,15 +222,44 @@
<div class="">
{$i18n.t('License')}
</div>
<a
class=" text-xs text-gray-500 hover:underline"
href="https://docs.openwebui.com/enterprise"
target="_blank"
>
{$i18n.t(
'Upgrade to a licensed plan for enhanced capabilities, including custom theming and branding, and dedicated support.'
)}
</a>
{#if $config?.license_metadata}
<a
href="https://docs.openwebui.com/enterprise"
target="_blank"
class="text-gray-500 mt-0.5"
>
<span class=" capitalize text-black dark:text-white"
>{$config?.license_metadata?.type}
license</span
>
registered to
<span class=" capitalize text-black dark:text-white"
>{$config?.license_metadata?.organization_name}</span
>
for
<span class=" font-medium text-black dark:text-white"
>{$config?.license_metadata?.seats ?? 'Unlimited'} users.</span
>
</a>
{#if $config?.license_metadata?.html}
<div class="mt-0.5">
{@html DOMPurify.sanitize($config?.license_metadata?.html)}
</div>
{/if}
{:else}
<a
class=" text-xs hover:underline"
href="https://docs.openwebui.com/enterprise"
target="_blank"
>
<span class="text-gray-500">
{$i18n.t(
'Upgrade to a licensed plan for enhanced capabilities, including custom theming and branding, and dedicated support.'
)}
</span>
</a>
{/if}
</div>
<!-- <button


@ -29,7 +29,8 @@
'tavily',
'jina',
'bing',
'exa'
'exa',
'perplexity'
];
let youtubeLanguage = 'en';
@ -361,6 +362,17 @@
/>
</div>
</div>
{:else if webConfig.search.engine === 'perplexity'}
<div>
<div class=" self-center text-xs font-medium mb-1">
{$i18n.t('Perplexity API Key')}
</div>
<SensitiveInput
placeholder={$i18n.t('Enter Perplexity API Key')}
bind:value={webConfig.search.perplexity_api_key}
/>
</div>
{:else if webConfig.search.engine === 'bing'}
<div class="mb-2.5 flex w-full flex-col">
<div>


@ -28,6 +28,7 @@
import ChevronUp from '$lib/components/icons/ChevronUp.svelte';
import ChevronDown from '$lib/components/icons/ChevronDown.svelte';
import About from '$lib/components/chat/Settings/About.svelte';
import Banner from '$lib/components/common/Banner.svelte';
const i18n = getContext('i18n');
@ -124,12 +125,43 @@
/>
<UserChatsModal bind:show={showUserChatsModal} user={selectedUser} />
{#if ($config?.license_metadata?.seats ?? null) !== null && users.length > $config?.license_metadata?.seats}
<div class=" mt-1 mb-2 text-xs text-red-500">
<Banner
className="mx-0"
banner={{
type: 'error',
title: 'License Error',
content:
'Exceeded the number of seats in your license. Please contact support to increase the number of seats.',
dismissable: true
}}
/>
</div>
{/if}
<div class="mt-0.5 mb-2 gap-1 flex flex-col md:flex-row justify-between">
<div class="flex md:self-center text-lg font-medium px-0.5">
{$i18n.t('Users')}
<div class="flex-shrink-0">
{$i18n.t('Users')}
</div>
<div class="flex self-center w-[1px] h-6 mx-2.5 bg-gray-50 dark:bg-gray-850" />
<span class="text-lg font-medium text-gray-500 dark:text-gray-300">{users.length}</span>
{#if ($config?.license_metadata?.seats ?? null) !== null}
{#if users.length > $config?.license_metadata?.seats}
<span class="text-lg font-medium text-red-500"
>{users.length} of {$config?.license_metadata?.seats}
<span class="text-sm font-normal">available users</span></span
>
{:else}
<span class="text-lg font-medium text-gray-500 dark:text-gray-300"
>{users.length} of {$config?.license_metadata?.seats}
<span class="text-sm font-normal">available users</span></span
>
{/if}
{:else}
<span class="text-lg font-medium text-gray-500 dark:text-gray-300">{users.length}</span>
{/if}
</div>
<div class="flex gap-1">


@ -1937,9 +1937,33 @@
<PaneGroup direction="horizontal" class="w-full h-full">
<Pane defaultSize={50} class="h-full flex w-full relative">
{#if $banners.length > 0 && !history.currentId && !$chatId && selectedModels.length <= 1}
{#if !history.currentId && !$chatId && selectedModels.length <= 1 && ($banners.length > 0 || ($config?.license_metadata?.type ?? null) === 'trial' || (($config?.license_metadata?.seats ?? null) !== null && $config?.user_count > $config?.license_metadata?.seats))}
<div class="absolute top-12 left-0 right-0 w-full z-30">
<div class=" flex flex-col gap-1 w-full">
{#if ($config?.license_metadata?.type ?? null) === 'trial'}
<Banner
banner={{
type: 'info',
title: 'Trial License',
content: $i18n.t(
'You are currently using a trial license. Please contact support to upgrade your license.'
)
}}
/>
{/if}
{#if ($config?.license_metadata?.seats ?? null) !== null && $config?.user_count > $config?.license_metadata?.seats}
<Banner
banner={{
type: 'error',
title: 'License Error',
content: $i18n.t(
'Exceeded the number of seats in your license. Please contact support to increase the number of seats.'
)
}}
/>
{/if}
{#each $banners.filter( (b) => (b.dismissible ? !JSON.parse(localStorage.getItem('dismissedBannerIds') ?? '[]').includes(b.id) : true) ) as banner}
<Banner
{banner}


@ -121,7 +121,8 @@
toast.error('Model not selected');
return;
}
prompt = `Explain this section to me in more detail\n\n\`\`\`\n${selectedText}\n\`\`\``;
const explainText = $i18n.t('Explain this section to me in more detail');
prompt = `${explainText}\n\n\`\`\`\n${selectedText}\n\`\`\``;
responseContent = '';
const [res, controller] = await chatCompletion(localStorage.token, {
@ -246,7 +247,7 @@
>
<ChatBubble className="size-3 shrink-0" />
<div class="shrink-0">Ask</div>
<div class="shrink-0">{$i18n.t('Ask')}</div>
</button>
<button
class="px-1 hover:bg-gray-50 dark:hover:bg-gray-800 rounded-sm flex items-center gap-1 min-w-fit"
@ -257,7 +258,7 @@
>
<LightBlub className="size-3 shrink-0" />
<div class="shrink-0">Explain</div>
<div class="shrink-0">{$i18n.t('Explain')}</div>
</button>
</div>
{:else}


@ -85,6 +85,8 @@
let loaded = false;
let recording = false;
let isComposing = false;
let chatInputContainerElement;
let chatInputElement;
@ -676,12 +678,13 @@
bind:value={prompt}
id="chat-input"
messageInput={true}
shiftEnter={!$mobile ||
!(
'ontouchstart' in window ||
navigator.maxTouchPoints > 0 ||
navigator.msMaxTouchPoints > 0
)}
shiftEnter={!($settings?.ctrlEnterToSend ?? false) &&
(!$mobile ||
!(
'ontouchstart' in window ||
navigator.maxTouchPoints > 0 ||
navigator.msMaxTouchPoints > 0
))}
placeholder={placeholder ? placeholder : $i18n.t('Send a Message')}
largeTextAsFile={$settings?.largeTextAsFile ?? false}
autocomplete={$config?.features.enable_autocomplete_generation}
@ -706,6 +709,8 @@
console.log(res);
return res;
}}
oncompositionstart={() => (isComposing = true)}
oncompositionend={() => (isComposing = false)}
on:keydown={async (e) => {
e = e.detail.event;
@ -805,19 +810,24 @@
navigator.msMaxTouchPoints > 0
)
) {
// Prevent Enter key from creating a new line
// Uses keyCode '13' for Enter key on Chinese/Japanese keyboards
if (e.keyCode === 13 && !e.shiftKey) {
e.preventDefault();
if (isComposing) {
return;
}
// Submit the prompt when Enter key is pressed
if (
(prompt !== '' || files.length > 0) &&
e.keyCode === 13 &&
!e.shiftKey
) {
dispatch('submit', prompt);
// Uses keyCode '13' for Enter key on Chinese/Japanese keyboards.
//
// Depending on the user's settings, it will send the message
// either when Enter is pressed or when Ctrl+Enter is pressed.
const enterPressed =
($settings?.ctrlEnterToSend ?? false)
? (e.key === 'Enter' || e.keyCode === 13) && isCtrlPressed
: (e.key === 'Enter' || e.keyCode === 13) && !e.shiftKey;
if (enterPressed) {
e.preventDefault();
if (prompt !== '' || files.length > 0) {
dispatch('submit', prompt);
}
}
}
}
@ -880,38 +890,19 @@
class="scrollbar-hidden bg-transparent dark:text-gray-100 outline-hidden w-full pt-3 px-1 resize-none"
placeholder={placeholder ? placeholder : $i18n.t('Send a Message')}
bind:value={prompt}
on:keypress={(e) => {
if (
!$mobile ||
!(
'ontouchstart' in window ||
navigator.maxTouchPoints > 0 ||
navigator.msMaxTouchPoints > 0
)
) {
// Prevent Enter key from creating a new line
if (e.key === 'Enter' && !e.shiftKey) {
e.preventDefault();
}
// Submit the prompt when Enter key is pressed
if (
(prompt !== '' || files.length > 0) &&
e.key === 'Enter' &&
!e.shiftKey
) {
dispatch('submit', prompt);
}
}
}}
on:compositionstart={() => (isComposing = true)}
on:compositionend={() => (isComposing = false)}
on:keydown={async (e) => {
const isCtrlPressed = e.ctrlKey || e.metaKey; // metaKey is for Cmd key on Mac
console.log('keydown', e);
const commandsContainerElement =
document.getElementById('commands-container');
if (e.key === 'Escape') {
stopResponse();
}
// Command/Ctrl + Shift + Enter to submit a message pair
if (isCtrlPressed && e.key === 'Enter' && e.shiftKey) {
e.preventDefault();
@ -947,51 +938,87 @@
editButton?.click();
}
if (commandsContainerElement && e.key === 'ArrowUp') {
e.preventDefault();
commandsElement.selectUp();
if (commandsContainerElement) {
if (commandsContainerElement && e.key === 'ArrowUp') {
e.preventDefault();
commandsElement.selectUp();
const commandOptionButton = [
...document.getElementsByClassName('selected-command-option-button')
]?.at(-1);
commandOptionButton.scrollIntoView({ block: 'center' });
}
const commandOptionButton = [
...document.getElementsByClassName('selected-command-option-button')
]?.at(-1);
commandOptionButton.scrollIntoView({ block: 'center' });
}
if (commandsContainerElement && e.key === 'ArrowDown') {
e.preventDefault();
commandsElement.selectDown();
if (commandsContainerElement && e.key === 'ArrowDown') {
e.preventDefault();
commandsElement.selectDown();
const commandOptionButton = [
...document.getElementsByClassName('selected-command-option-button')
]?.at(-1);
commandOptionButton.scrollIntoView({ block: 'center' });
}
const commandOptionButton = [
...document.getElementsByClassName('selected-command-option-button')
]?.at(-1);
commandOptionButton.scrollIntoView({ block: 'center' });
}
if (commandsContainerElement && e.key === 'Enter') {
e.preventDefault();
if (commandsContainerElement && e.key === 'Enter') {
e.preventDefault();
const commandOptionButton = [
...document.getElementsByClassName('selected-command-option-button')
]?.at(-1);
const commandOptionButton = [
...document.getElementsByClassName('selected-command-option-button')
]?.at(-1);
if (e.shiftKey) {
prompt = `${prompt}\n`;
} else if (commandOptionButton) {
commandOptionButton?.click();
} else {
document.getElementById('send-message-button')?.click();
}
}
if (commandsContainerElement && e.key === 'Tab') {
e.preventDefault();
const commandOptionButton = [
...document.getElementsByClassName('selected-command-option-button')
]?.at(-1);
if (e.shiftKey) {
prompt = `${prompt}\n`;
} else if (commandOptionButton) {
commandOptionButton?.click();
} else {
document.getElementById('send-message-button')?.click();
}
} else {
if (
!$mobile ||
!(
'ontouchstart' in window ||
navigator.maxTouchPoints > 0 ||
navigator.msMaxTouchPoints > 0
)
) {
if (isComposing) {
return;
}
console.log('keypress', e);
// Prevent Enter key from creating a new line
const isCtrlPressed = e.ctrlKey || e.metaKey;
const enterPressed =
($settings?.ctrlEnterToSend ?? false)
? (e.key === 'Enter' || e.keyCode === 13) && isCtrlPressed
: (e.key === 'Enter' || e.keyCode === 13) && !e.shiftKey;
console.log('Enter pressed:', enterPressed);
if (enterPressed) {
e.preventDefault();
}
// Submit the prompt when Enter key is pressed
if ((prompt !== '' || files.length > 0) && enterPressed) {
dispatch('submit', prompt);
}
}
}
if (commandsContainerElement && e.key === 'Tab') {
e.preventDefault();
const commandOptionButton = [
...document.getElementsByClassName('selected-command-option-button')
]?.at(-1);
commandOptionButton?.click();
} else if (e.key === 'Tab') {
if (e.key === 'Tab') {
const words = findWordIndices(prompt);
if (words.length > 0) {


@ -231,7 +231,6 @@
mediaRecorder.onstart = () => {
console.log('Recording started');
audioChunks = [];
analyseAudio(audioStream);
};
mediaRecorder.ondataavailable = (event) => {
@ -245,7 +244,7 @@
stopRecordingCallback();
};
mediaRecorder.start();
analyseAudio(audioStream);
}
};
@ -321,6 +320,9 @@
if (hasSound) {
// BIG RED TEXT
console.log('%c%s', 'color: red; font-size: 20px;', '🔊 Sound detected');
if (mediaRecorder && mediaRecorder.state !== 'recording') {
mediaRecorder.start();
}
if (!hasStartedSpeaking) {
hasStartedSpeaking = true;


@ -228,7 +228,8 @@
role: 'user',
content: userPrompt,
...(history.messages[messageId].files && { files: history.messages[messageId].files }),
models: selectedModels
models: selectedModels,
timestamp: Math.floor(Date.now() / 1000) // Unix epoch
};
let messageParentId = history.messages[messageId].parentId;


@ -14,6 +14,9 @@
import { config } from '$lib/stores';
import { executeCode } from '$lib/apis/utils';
import { toast } from 'svelte-sonner';
import ChevronUp from '$lib/components/icons/ChevronUp.svelte';
import ChevronUpDown from '$lib/components/icons/ChevronUpDown.svelte';
import CommandLine from '$lib/components/icons/CommandLine.svelte';
const i18n = getContext('i18n');
@ -57,9 +60,14 @@
let result = null;
let files = null;
let collapsed = false;
let copied = false;
let saved = false;
const collapseCodeBlock = () => {
collapsed = !collapsed;
};
const saveCode = () => {
saved = true;
@ -418,18 +426,39 @@
class="sticky {stickyButtonsClassName} mb-1 py-1 pr-2.5 flex items-center justify-end z-10 text-xs text-black dark:text-white"
>
<div class="flex items-center gap-0.5 translate-y-[1px]">
{#if lang.toLowerCase() === 'python' || lang.toLowerCase() === 'py' || (lang === '' && checkPythonCode(code))}
<button
class="flex gap-1 items-center bg-none border-none bg-gray-50 hover:bg-gray-100 dark:bg-gray-850 dark:hover:bg-gray-800 transition rounded-md px-1.5 py-0.5"
on:click={collapseCodeBlock}
>
<div>
<ChevronUpDown className="size-3" />
</div>
<div>
{collapsed ? $i18n.t('Expand') : $i18n.t('Collapse')}
</div>
</button>
{#if ($config?.features?.enable_code_execution ?? true) && (lang.toLowerCase() === 'python' || lang.toLowerCase() === 'py' || (lang === '' && checkPythonCode(code)))}
{#if executing}
<div class="run-code-button bg-none border-none p-1 cursor-not-allowed">Running</div>
{:else if run}
<button
class="run-code-button bg-none border-none bg-gray-50 hover:bg-gray-100 dark:bg-gray-850 dark:hover:bg-gray-800 transition rounded-md px-1.5 py-0.5"
class="flex gap-1 items-center run-code-button bg-none border-none bg-gray-50 hover:bg-gray-100 dark:bg-gray-850 dark:hover:bg-gray-800 transition rounded-md px-1.5 py-0.5"
on:click={async () => {
code = _code;
await tick();
executePython(code);
}}>{$i18n.t('Run')}</button
}}
>
<div>
<CommandLine className="size-3" />
</div>
<div>
{$i18n.t('Run')}
</div>
</button>
{/if}
{/if}
@ -457,65 +486,80 @@
: 'rounded-b-lg'} overflow-hidden"
>
<div class=" pt-7 bg-gray-50 dark:bg-gray-850"></div>
<CodeEditor
value={code}
{id}
{lang}
onSave={() => {
saveCode();
}}
onChange={(value) => {
_code = value;
}}
/>
{#if !collapsed}
<CodeEditor
value={code}
{id}
{lang}
onSave={() => {
saveCode();
}}
onChange={(value) => {
_code = value;
}}
/>
{:else}
<div
class="bg-gray-50 dark:bg-black dark:text-white rounded-b-lg! pt-2 pb-2 px-4 flex flex-col gap-2 text-xs"
>
<span class="text-gray-500 italic">
{$i18n.t('{{COUNT}} hidden lines', {
COUNT: code.split('\n').length
})}
</span>
</div>
{/if}
</div>
<div
id="plt-canvas-{id}"
class="bg-gray-50 dark:bg-[#202123] dark:text-white max-w-full overflow-x-auto scrollbar-hidden"
/>
{#if executing || stdout || stderr || result || files}
{#if !collapsed}
<div
class="bg-gray-50 dark:bg-[#202123] dark:text-white rounded-b-lg! py-4 px-4 flex flex-col gap-2"
>
{#if executing}
<div class=" ">
<div class=" text-gray-500 text-xs mb-1">STDOUT/STDERR</div>
<div class="text-sm">Running...</div>
</div>
{:else}
{#if stdout || stderr}
id="plt-canvas-{id}"
class="bg-gray-50 dark:bg-[#202123] dark:text-white max-w-full overflow-x-auto scrollbar-hidden"
/>
{#if executing || stdout || stderr || result || files}
<div
class="bg-gray-50 dark:bg-[#202123] dark:text-white rounded-b-lg! py-4 px-4 flex flex-col gap-2"
>
{#if executing}
<div class=" ">
<div class=" text-gray-500 text-xs mb-1">STDOUT/STDERR</div>
<div
class="text-sm {stdout?.split('\n')?.length > 100
? `max-h-96`
: ''} overflow-y-auto"
>
{stdout || stderr}
</div>
<div class="text-sm">Running...</div>
</div>
{/if}
{#if result || files}
<div class=" ">
<div class=" text-gray-500 text-xs mb-1">RESULT</div>
{#if result}
<div class="text-sm">{`${JSON.stringify(result)}`}</div>
{/if}
{#if files}
<div class="flex flex-col gap-2">
{#each files as file}
{#if file.type.startsWith('image')}
<img src={file.data} alt="Output" class=" w-full max-w-[36rem]" />
{/if}
{/each}
{:else}
{#if stdout || stderr}
<div class=" ">
<div class=" text-gray-500 text-xs mb-1">STDOUT/STDERR</div>
<div
class="text-sm {stdout?.split('\n')?.length > 100
? `max-h-96`
: ''} overflow-y-auto"
>
{stdout || stderr}
</div>
{/if}
</div>
</div>
{/if}
{#if result || files}
<div class=" ">
<div class=" text-gray-500 text-xs mb-1">RESULT</div>
{#if result}
<div class="text-sm">{`${JSON.stringify(result)}`}</div>
{/if}
{#if files}
<div class="flex flex-col gap-2">
{#each files as file}
{#if file.type.startsWith('image')}
<img src={file.data} alt="Output" class=" w-full max-w-[36rem]" />
{/if}
{/each}
</div>
{/if}
</div>
{/if}
{/if}
{/if}
</div>
</div>
{/if}
{/if}
{/if}
</div>


@ -17,6 +17,7 @@
import Collapsible from '$lib/components/common/Collapsible.svelte';
import Tooltip from '$lib/components/common/Tooltip.svelte';
import ArrowDownTray from '$lib/components/icons/ArrowDownTray.svelte';
import Source from './Source.svelte';
const dispatch = createEventDispatcher();
@ -91,7 +92,7 @@
onCode={(value) => {
dispatch('code', value);
}}
onSave={(e) => {
onSave={(value) => {
dispatch('update', {
raw: token.raw,
oldContent: token.text,
@ -261,6 +262,8 @@
{@html html}
{:else if token.text.includes(`<iframe src="${WEBUI_BASE_URL}/api/v1/files/`)}
{@html `${token.text}`}
{:else if token.text.includes(`<source_id`)}
<Source {id} {token} onClick={onSourceClick} />
{:else}
{token.text}
{/if}


@ -52,12 +52,17 @@
export let className = 'w-[32rem]';
export let triggerClassName = 'text-lg';
let tagsContainerElement;
let show = false;
let tags = [];
let selectedModel = '';
$: selectedModel = items.find((item) => item.value === value) ?? '';
let searchValue = '';
let selectedTag = '';
let ollamaVersion = null;
let selectedModelIdx = 0;
@ -79,10 +84,23 @@
);
$: filteredItems = searchValue
? fuse.search(searchValue).map((e) => {
return e.item;
})
: items;
? fuse
.search(searchValue)
.map((e) => {
return e.item;
})
.filter((item) => {
if (selectedTag === '') {
return true;
}
return item.model?.info?.meta?.tags?.map((tag) => tag.name).includes(selectedTag);
})
: items.filter((item) => {
if (selectedTag === '') {
return true;
}
return item.model?.info?.meta?.tags?.map((tag) => tag.name).includes(selectedTag);
});
const pullModelHandler = async () => {
const sanitizedModelTag = searchValue.trim().replace(/^ollama\s+(run|pull)\s+/, '');
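The handler above normalizes a pasted command before pulling, so users can paste `ollama run <model>` or `ollama pull <model>` verbatim. The same regex as a standalone sketch:

```typescript
// Normalize a pasted Ollama command like "ollama run llama3" down to the
// bare model tag; plain tags pass through untouched.
const sanitizeModelTag = (value: string): string =>
	value.trim().replace(/^ollama\s+(run|pull)\s+/, '');
```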
@ -214,6 +232,13 @@
onMount(async () => {
ollamaVersion = await getOllamaVersion(localStorage.token).catch((error) => false);
if (items) {
tags = items.flatMap((item) => item.model?.info?.meta?.tags ?? []).map((tag) => tag.name);
// Remove duplicates and sort
tags = Array.from(new Set(tags)).sort((a, b) => a.localeCompare(b));
}
});
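The `onMount` logic above gathers every model's tag names, de-duplicates them, and sorts them for the tag bar. As a self-contained sketch (the `Item` shape here is assumed from the component's optional-chained accesses):

```typescript
// Hypothetical item shape mirroring the component's model metadata.
type Item = { model?: { info?: { meta?: { tags?: { name: string }[] } } } };

// Flatten every model's tags, keep unique names, and sort them
// case-sensitively with localeCompare for the tag selector.
function collectTags(items: Item[]): string[] {
	const names = items
		.flatMap((item) => item.model?.info?.meta?.tags ?? [])
		.map((tag) => tag.name);
	return Array.from(new Set(names)).sort((a, b) => a.localeCompare(b));
}
```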
const cancelModelPullHandler = async (model: string) => {
@ -269,7 +294,7 @@
>
<slot>
{#if searchEnabled}
<div class="flex items-center gap-2.5 px-5 mt-3.5 mb-3">
<div class="flex items-center gap-2.5 px-5 mt-3.5 mb-1.5">
<Search className="size-4" strokeWidth="2.5" />
<input
@ -297,11 +322,42 @@
}}
/>
</div>
<hr class="border-gray-100 dark:border-gray-850" />
{/if}
<div class="px-3 my-2 max-h-64 overflow-y-auto scrollbar-hidden group">
<div class="px-3 mb-2 max-h-64 overflow-y-auto scrollbar-hidden group relative">
{#if tags}
<div class=" flex w-full sticky">
<div
class="flex gap-1 scrollbar-none overflow-x-auto w-fit text-center text-sm font-medium rounded-full bg-transparent px-1.5 pb-0.5"
bind:this={tagsContainerElement}
>
<button
class="min-w-fit outline-none p-1.5 {selectedTag === ''
? ''
: 'text-gray-300 dark:text-gray-600 hover:text-gray-700 dark:hover:text-white'} transition capitalize"
on:click={() => {
selectedTag = '';
}}
>
{$i18n.t('All')}
</button>
{#each tags as tag}
<button
class="min-w-fit outline-none p-1.5 {selectedTag === tag
? ''
: 'text-gray-300 dark:text-gray-600 hover:text-gray-700 dark:hover:text-white'} transition capitalize"
on:click={() => {
selectedTag = tag;
}}
>
{tag}
</button>
{/each}
</div>
</div>
{/if}
{#each filteredItems as item, index}
<button
aria-label="model-item"
@ -441,11 +497,13 @@
{/if}
{#if !$mobile && (item?.model?.info?.meta?.tags ?? []).length > 0}
<div class="flex gap-0.5 self-center items-center h-full translate-y-[0.5px]">
<div
class="flex gap-0.5 self-center items-center h-full translate-y-[0.5px] overflow-x-auto scrollbar-none"
>
{#each item.model?.info?.meta.tags as tag}
<Tooltip content={tag.name}>
<Tooltip content={tag.name} className="flex-shrink-0">
<div
class=" text-xs font-bold px-1 rounded-sm uppercase line-clamp-1 bg-gray-500/20 text-gray-700 dark:text-gray-200"
class=" text-xs font-bold px-1 rounded-sm uppercase bg-gray-500/20 text-gray-700 dark:text-gray-200"
>
{tag.name}
</div>
@ -575,7 +633,7 @@
</div>
{#if showTemporaryChatControl}
<hr class="border-gray-100 dark:border-gray-850" />
<hr class="border-gray-100 dark:border-gray-800" />
<div class="flex items-center mx-2 my-2">
<button


@ -102,7 +102,7 @@
{/if}
<div
class="w-full text-3xl text-gray-800 dark:text-gray-100 font-medium text-center flex items-center gap-4 font-primary"
class="w-full text-3xl text-gray-800 dark:text-gray-100 text-center flex items-center gap-4 font-primary"
>
<div class="w-full flex flex-col justify-center items-center">
<div class="flex flex-row justify-center gap-3 @sm:gap-3.5 w-fit px-5">
@ -126,7 +126,7 @@
($i18n.language === 'dg-DG'
? `/doge.png`
: `${WEBUI_BASE_URL}/static/favicon.png`)}
class=" size-9 @sm:size-10 rounded-full border-[1px] border-gray-200 dark:border-none"
class=" size-9 @sm:size-10 rounded-full border-[1px] border-gray-100 dark:border-none"
alt="logo"
draggable="false"
/>


@ -106,35 +106,46 @@
<hr class=" border-gray-100 dark:border-gray-850" />
{#if $config?.license_metadata}
<div class="mb-2 text-xs">
{#if !$WEBUI_NAME.includes('Open WebUI')}
<span class=" text-gray-500 dark:text-gray-300 font-medium">{$WEBUI_NAME}</span> -
{/if}
<span class=" capitalize">{$config?.license_metadata?.type}</span> license purchased by
<span class=" capitalize">{$config?.license_metadata?.organization_name}</span>
</div>
{:else}
<div class="flex space-x-1">
<a href="https://discord.gg/5rJgQTnV4s" target="_blank">
<img
alt="Discord"
src="https://img.shields.io/badge/Discord-Open_WebUI-blue?logo=discord&logoColor=white"
/>
</a>
<a href="https://twitter.com/OpenWebUI" target="_blank">
<img
alt="X (formerly Twitter) Follow"
src="https://img.shields.io/twitter/follow/OpenWebUI"
/>
</a>
<a href="https://github.com/open-webui/open-webui" target="_blank">
<img
alt="Github Repo"
src="https://img.shields.io/github/stars/open-webui/open-webui?style=social&label=Star us on Github"
/>
</a>
</div>
{/if}
<div class="mt-2 text-xs text-gray-400 dark:text-gray-500">
Emoji graphics provided by
<a href="https://github.com/jdecked/twemoji" target="_blank">Twemoji</a>, licensed under
<a href="https://creativecommons.org/licenses/by/4.0/" target="_blank">CC-BY 4.0</a>.
</div>
<div class="flex space-x-1">
<a href="https://discord.gg/5rJgQTnV4s" target="_blank">
<img
alt="Discord"
src="https://img.shields.io/badge/Discord-Open_WebUI-blue?logo=discord&logoColor=white"
/>
</a>
<a href="https://twitter.com/OpenWebUI" target="_blank">
<img
alt="X (formerly Twitter) Follow"
src="https://img.shields.io/twitter/follow/OpenWebUI"
/>
</a>
<a href="https://github.com/open-webui/open-webui" target="_blank">
<img
alt="Github Repo"
src="https://img.shields.io/github/stars/open-webui/open-webui?style=social&label=Star us on Github"
/>
</a>
</div>
<div>
<pre
class="text-xs text-gray-400 dark:text-gray-500">Copyright (c) {new Date().getFullYear()} <a
@ -172,9 +183,6 @@ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
</div>
<div class="mt-2 text-xs text-gray-400 dark:text-gray-500">
{#if !$WEBUI_NAME.includes('Open WebUI')}
<span class=" text-gray-500 dark:text-gray-300 font-medium">{$WEBUI_NAME}</span> -
{/if}
{$i18n.t('Created by')}
<a
class=" text-gray-500 dark:text-gray-300 font-medium"


@ -17,6 +17,7 @@
stop: null,
temperature: null,
reasoning_effort: null,
logit_bias: null,
frequency_penalty: null,
repeat_last_n: null,
mirostat: null,
@ -114,7 +115,7 @@
<div class=" py-0.5 w-full justify-between">
<Tooltip
content={$i18n.t(
'Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt. (Default: random)'
'Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt.'
)}
placement="top-start"
className="inline-tooltip"
@ -203,7 +204,7 @@
<div class=" py-0.5 w-full justify-between">
<Tooltip
content={$i18n.t(
'The temperature of the model. Increasing the temperature will make the model answer more creatively. (Default: 0.8)'
'The temperature of the model. Increasing the temperature will make the model answer more creatively.'
)}
placement="top-start"
className="inline-tooltip"
@ -258,7 +259,7 @@
<div class=" py-0.5 w-full justify-between">
<Tooltip
content={$i18n.t(
'Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort. (Default: medium)'
'Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort.'
)}
placement="top-start"
className="inline-tooltip"
@ -301,10 +302,53 @@
<div class=" py-0.5 w-full justify-between">
<Tooltip
content={$i18n.t(
'Enable Mirostat sampling for controlling perplexity. (Default: 0, 0 = Disabled, 1 = Mirostat, 2 = Mirostat 2.0)'
'Boosting or penalizing specific tokens for constrained responses. Bias values will be clamped between -100 and 100 (inclusive). (Default: none)'
)}
placement="top-start"
className="inline-tooltip"
>
<div class="flex w-full justify-between">
<div class=" self-center text-xs font-medium">
{$i18n.t('Logit Bias')}
</div>
<button
class="p-1 px-3 text-xs flex rounded-sm transition shrink-0 outline-hidden"
type="button"
on:click={() => {
params.logit_bias = (params?.logit_bias ?? null) === null ? '' : null;
}}
>
{#if (params?.logit_bias ?? null) === null}
<span class="ml-2 self-center"> {$i18n.t('Default')} </span>
{:else}
<span class="ml-2 self-center"> {$i18n.t('Custom')} </span>
{/if}
</button>
</div>
</Tooltip>
{#if (params?.logit_bias ?? null) !== null}
<div class="flex mt-0.5 space-x-2">
<div class=" flex-1">
<input
class="w-full rounded-lg pl-2 py-2 px-1 text-sm dark:text-gray-300 dark:bg-gray-850 outline-hidden"
type="text"
placeholder={$i18n.t(
'Enter comma-seperated "token:bias_value" pairs (example: 5432:100, 413:-100)'
)}
bind:value={params.logit_bias}
autocomplete="off"
/>
</div>
</div>
{/if}
</div>
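The new Logit Bias field above collects a raw string of comma-separated `token:bias_value` pairs, with biases clamped to [-100, 100] per the tooltip. The diff does not show where that string is parsed downstream; a hedged sketch of one plausible parser matching the placeholder's format:

```typescript
// Parse a string like "5432:100, 413:-100" into a token-id → bias map,
// clamping each bias to the documented [-100, 100] range.
// Assumption: this mirrors the placeholder format; the actual backend
// parser is not part of this diff.
function parseLogitBias(input: string): Record<string, number> {
	const bias: Record<string, number> = {};
	for (const pair of input.split(',')) {
		const [token, value] = pair.split(':').map((s) => s.trim());
		if (!token || value === undefined) continue; // skip malformed pairs
		const n = Number(value);
		if (Number.isNaN(n)) continue;
		bias[token] = Math.max(-100, Math.min(100, n));
	}
	return bias;
}
```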
<div class=" py-0.5 w-full justify-between">
<Tooltip
content={$i18n.t('Enable Mirostat sampling for controlling perplexity.')}
placement="top-start"
className="inline-tooltip"
>
<div class="flex w-full justify-between">
<div class=" self-center text-xs font-medium">
@ -356,7 +400,7 @@
<div class=" py-0.5 w-full justify-between">
<Tooltip
content={$i18n.t(
'Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive. (Default: 0.1)'
'Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive.'
)}
placement="top-start"
className="inline-tooltip"
@ -411,7 +455,7 @@
<div class=" py-0.5 w-full justify-between">
<Tooltip
content={$i18n.t(
'Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text. (Default: 5.0)'
'Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text.'
)}
placement="top-start"
className="inline-tooltip"
@ -467,7 +511,7 @@
<div class=" py-0.5 w-full justify-between">
<Tooltip
content={$i18n.t(
'Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative. (Default: 40)'
'Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative.'
)}
placement="top-start"
className="inline-tooltip"
@ -522,7 +566,7 @@
<div class=" py-0.5 w-full justify-between">
<Tooltip
content={$i18n.t(
'Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text. (Default: 0.9)'
'Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text.'
)}
placement="top-start"
className="inline-tooltip"
@ -578,7 +622,7 @@
<div class=" py-0.5 w-full justify-between">
<Tooltip
content={$i18n.t(
'Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out. (Default: 0.0)'
'Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out.'
)}
placement="top-start"
className="inline-tooltip"
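The min_p tooltip's worked example (p=0.05 against a 0.9 top-token probability gives a 0.045 cutoff) can be checked with a small sketch, assuming the filter keeps tokens whose probability is at least p times the maximum:

```typescript
// min_p filtering: keep tokens whose probability is at least p times
// the probability of the most likely token.
function minPFilter(probs: number[], p: number): number[] {
	const cutoff = p * Math.max(...probs); // e.g. 0.05 * 0.9 = 0.045
	return probs.filter((prob) => prob >= cutoff);
}
```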
@ -633,7 +677,7 @@
<div class=" py-0.5 w-full justify-between">
<Tooltip
content={$i18n.t(
'Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 1.1)'
'Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.'
)}
placement="top-start"
className="inline-tooltip"
@ -689,7 +733,7 @@
<div class=" py-0.5 w-full justify-between">
<Tooltip
content={$i18n.t(
'Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 0)'
'Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.'
)}
placement="top-start"
className="inline-tooltip"
@ -745,7 +789,7 @@
<div class=" py-0.5 w-full justify-between">
<Tooltip
content={$i18n.t(
'Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled. (Default: 1.1)'
'Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled.'
)}
placement="top-start"
className="inline-tooltip"
@ -800,9 +844,7 @@
<div class=" py-0.5 w-full justify-between">
<Tooltip
content={$i18n.t(
'Sets how far back for the model to look back to prevent repetition. (Default: 64, 0 = disabled, -1 = num_ctx)'
)}
content={$i18n.t('Sets how far back for the model to look back to prevent repetition.')}
placement="top-start"
className="inline-tooltip"
>
@ -857,7 +899,7 @@
<div class=" py-0.5 w-full justify-between">
<Tooltip
content={$i18n.t(
'Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting. (default: 1)'
'Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting.'
)}
placement="top-start"
className="inline-tooltip"
@ -912,9 +954,7 @@
<div class=" py-0.5 w-full justify-between">
<Tooltip
content={$i18n.t(
'Sets the size of the context window used to generate the next token. (Default: 2048)'
)}
content={$i18n.t('Sets the size of the context window used to generate the next token.')}
placement="top-start"
className="inline-tooltip"
>
@ -968,7 +1008,7 @@
<div class=" py-0.5 w-full justify-between">
<Tooltip
content={$i18n.t(
'The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory. (Default: 512)'
'The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory.'
)}
placement="top-start"
className="inline-tooltip"
@ -1023,7 +1063,7 @@
<div class=" py-0.5 w-full justify-between">
<Tooltip
content={$i18n.t(
'This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics. (Default: 24)'
'This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics.'
)}
placement="top-start"
className="inline-tooltip"
@ -1078,7 +1118,7 @@
<div class=" py-0.5 w-full justify-between">
<Tooltip
content={$i18n.t(
'This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated. (Default: 128)'
'This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated.'
)}
placement="top-start"
className="inline-tooltip"


@ -1,7 +1,7 @@
<script lang="ts">
import { toast } from 'svelte-sonner';
import { createEventDispatcher, onMount, getContext } from 'svelte';
import { getLanguages } from '$lib/i18n';
import { getLanguages, changeLanguage } from '$lib/i18n';
const dispatch = createEventDispatcher();
import { models, settings, theme, user } from '$lib/stores';
@ -50,6 +50,7 @@
seed: null,
temperature: null,
reasoning_effort: null,
logit_bias: null,
frequency_penalty: null,
presence_penalty: null,
repeat_penalty: null,
@ -198,7 +199,7 @@
bind:value={lang}
placeholder="Select a language"
on:change={(e) => {
$i18n.changeLanguage(lang);
changeLanguage(lang);
}}
>
{#each languages as language}
@ -348,6 +349,7 @@
temperature: params.temperature !== null ? params.temperature : undefined,
reasoning_effort:
params.reasoning_effort !== null ? params.reasoning_effort : undefined,
logit_bias: params.logit_bias !== null ? params.logit_bias : undefined,
frequency_penalty:
params.frequency_penalty !== null ? params.frequency_penalty : undefined,
presence_penalty:


@ -37,6 +37,7 @@
let landingPageMode = '';
let chatBubble = true;
let chatDirection: 'LTR' | 'RTL' = 'LTR';
let ctrlEnterToSend = false;
let imageCompression = false;
let imageCompressionSize = {
@ -193,6 +194,11 @@
saveSettings({ chatDirection });
};
const togglectrlEnterToSend = async () => {
ctrlEnterToSend = !ctrlEnterToSend;
saveSettings({ ctrlEnterToSend });
};
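The `ctrlEnterToSend` setting toggled above changes how the Enter key behaves in the chat input. The keydown handler that consumes it lives outside this diff; a hedged sketch of the predicate it implies:

```typescript
// Hedged sketch: how a keydown handler might honor the ctrlEnterToSend
// setting. The actual handler is not part of this diff; key names follow
// the DOM KeyboardEvent API.
function shouldSend(key: string, ctrlKey: boolean, ctrlEnterToSend: boolean): boolean {
	if (key !== 'Enter') return false;
	// "Ctrl+Enter to Send" requires the modifier; "Enter to Send" rejects it
	// (plain Ctrl+Enter then inserts a newline instead).
	return ctrlEnterToSend ? ctrlKey : !ctrlKey;
}
```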
const updateInterfaceHandler = async () => {
saveSettings({
models: [defaultModelId],
@ -232,6 +238,7 @@
notificationSound = $settings.notificationSound ?? true;
hapticFeedback = $settings.hapticFeedback ?? false;
ctrlEnterToSend = $settings.ctrlEnterToSend ?? false;
imageCompression = $settings.imageCompression ?? false;
imageCompressionSize = $settings.imageCompressionSize ?? { width: '', height: '' };
@ -652,6 +659,28 @@
</div>
</div> -->
<div>
<div class=" py-0.5 flex w-full justify-between">
<div class=" self-center text-xs">
{$i18n.t('Enter Key Behavior')}
</div>
<button
class="p-1 px-3 text-xs flex rounded transition"
on:click={() => {
togglectrlEnterToSend();
}}
type="button"
>
{#if ctrlEnterToSend === true}
<span class="ml-2 self-center">{$i18n.t('Ctrl+Enter to Send')}</span>
{:else}
<span class="ml-2 self-center">{$i18n.t('Enter to Send')}</span>
{/if}
</button>
</div>
</div>
<div>
<div class=" py-0.5 flex w-full justify-between">
<div class=" self-center text-xs">


@ -16,6 +16,7 @@
dismissable: true,
timestamp: Math.floor(Date.now() / 1000)
};
export let className = 'mx-4';
export let dismissed = false;
@ -41,7 +42,7 @@
{#if !dismissed}
{#if mounted}
<div
class=" top-0 left-0 right-0 p-2 mx-4 px-3 flex justify-center items-center relative rounded-xl border border-gray-100 dark:border-gray-850 text-gray-800 dark:text-gary-100 bg-white dark:bg-gray-900 backdrop-blur-xl z-30"
class="{className} top-0 left-0 right-0 p-2 px-3 flex justify-center items-center relative rounded-xl border border-gray-100 dark:border-gray-850 text-gray-800 dark:text-gray-100 bg-white dark:bg-gray-900 backdrop-blur-xl z-30"
transition:fade={{ delay: 100, duration: 300 }}
>
<div class=" flex flex-col md:flex-row md:items-center flex-1 text-sm w-fit gap-1.5">


@ -27,6 +27,9 @@
import { PASTED_TEXT_CHARACTER_LIMIT } from '$lib/constants';
export let oncompositionstart = (e) => {};
export let oncompositionend = (e) => {};
// create a lowlight instance with all languages loaded
const lowlight = createLowlight(all);
@ -226,6 +229,14 @@
editorProps: {
attributes: { id },
handleDOMEvents: {
compositionstart: (view, event) => {
oncompositionstart(event);
return false;
},
compositionend: (view, event) => {
oncompositionend(event);
return false;
},
focus: (view, event) => {
eventDispatch('focus', { event });
return false;


@ -227,7 +227,11 @@
</DragGhost>
{/if}
<div bind:this={itemElement} class=" w-full {className} relative group" {draggable}>
<div
bind:this={itemElement}
class=" w-full {className} relative group"
draggable={draggable && !confirmEdit}
>
{#if confirmEdit}
<div
class=" w-full flex justify-between rounded-lg px-[11px] py-[6px] {id === $chatId ||


@ -5,7 +5,7 @@
import { flyAndScale } from '$lib/utils/transitions';
import { goto } from '$app/navigation';
import ArchiveBox from '$lib/components/icons/ArchiveBox.svelte';
import { showSettings, activeUserIds, USAGE_POOL, mobile, showSidebar } from '$lib/stores';
import { showSettings, activeUserIds, USAGE_POOL, mobile, showSidebar, user } from '$lib/stores';
import { fade, slide } from 'svelte/transition';
import Tooltip from '$lib/components/common/Tooltip.svelte';
import { userSignOut } from '$lib/apis/auths';
@ -157,8 +157,11 @@
class="flex rounded-md py-2 px-3 w-full hover:bg-gray-50 dark:hover:bg-gray-800 transition"
on:click={async () => {
await userSignOut();
user.set(null);
localStorage.removeItem('token');
location.href = '/auth';
show = false;
}}
>


@ -198,7 +198,7 @@ class Tools:
}
}}
>
<div class="flex flex-col flex-1 overflow-auto h-0">
<div class="flex flex-col flex-1 overflow-auto h-0 rounded-lg">
<div class="w-full mb-2 flex flex-col gap-0.5">
<div class="flex w-full items-center">
<div class=" shrink-0 mr-2">
@ -218,7 +218,7 @@ class Tools:
<div class="flex-1">
<Tooltip content={$i18n.t('e.g. My Tools')} placement="top-start">
<input
class="w-full text-2xl font-semibold bg-transparent outline-hidden"
class="w-full text-2xl font-medium bg-transparent outline-hidden font-primary"
type="text"
placeholder={$i18n.t('Tool Name')}
bind:value={name}
@ -282,12 +282,12 @@ class Tools:
<CodeEditor
bind:this={codeEditor}
value={content}
{boilerplate}
lang="python"
{boilerplate}
onChange={(e) => {
_content = e;
}}
onSave={() => {
onSave={async () => {
if (formElement) {
formElement.requestSubmit();
}


@ -37,7 +37,7 @@ const createIsLoadingStore = (i18n: i18nType) => {
return isLoading;
};
export const initI18n = (defaultLocale: string | undefined) => {
export const initI18n = (defaultLocale?: string | undefined) => {
let detectionOrder = defaultLocale
? ['querystring', 'localStorage']
: ['querystring', 'localStorage', 'navigator'];
@ -66,6 +66,9 @@ export const initI18n = (defaultLocale: string | undefined) => {
escapeValue: false // not needed for svelte as it escapes by default
}
});
const lang = i18next?.language || defaultLocale || 'en-US';
document.documentElement.setAttribute('lang', lang);
};
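The new lines above pick the `<html lang>` attribute from a short fallback chain: the detected i18next language, then the caller-supplied default, then `'en-US'`. The chain as a pure, standalone sketch:

```typescript
// Fallback chain used to pick the <html lang> attribute after i18n init:
// detected language, then the configured default locale, then 'en-US'.
const resolveLang = (detected?: string, defaultLocale?: string): string =>
	detected || defaultLocale || 'en-US';
```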
const i18n = createI18nStore(i18next);
@ -75,5 +78,10 @@ export const getLanguages = async () => {
const languages = (await import(`./locales/languages.json`)).default;
return languages;
};
export const changeLanguage = (lang: string) => {
document.documentElement.setAttribute('lang', lang);
i18next.changeLanguage(lang);
};
export default i18n;
export const isLoading = isLoadingStore;


@ -5,6 +5,7 @@
"(e.g. `sh webui.sh --api`)": "( `sh webui.sh --api`مثال)",
"(latest)": "(الأخير)",
"{{ models }}": "{{ نماذج }}",
"{{COUNT}} hidden lines": "",
"{{COUNT}} Replies": "",
"{{user}}'s Chats": "دردشات {{user}}",
"{{webUIName}} Backend Required": "{{webUIName}} مطلوب",
@ -51,6 +52,7 @@
"Admins have access to all tools at all times; users need tools assigned per model in the workspace.": "",
"Advanced Parameters": "التعليمات المتقدمة",
"Advanced Params": "المعلمات المتقدمة",
"All": "",
"All Documents": "جميع الملفات",
"All models deleted successfully": "",
"Allow Chat Controls": "",
@ -64,7 +66,7 @@
"Allow Voice Interruption in Call": "",
"Allowed Endpoints": "",
"Already have an account?": "هل تملك حساب ؟",
"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out. (Default: 0.0)": "",
"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out.": "",
"Always": "",
"Amazing": "",
"an assistant": "مساعد",
@ -93,6 +95,7 @@
"Are you sure?": "هل أنت متأكد ؟",
"Arena Models": "",
"Artifacts": "",
"Ask": "",
"Ask a question": "",
"Assistant": "",
"Attach file from knowledge": "",
@ -127,6 +130,7 @@
"Bing Search V7 Endpoint": "",
"Bing Search V7 Subscription Key": "",
"Bocha Search API Key": "",
"Boosting or penalizing specific tokens for constrained responses. Bias values will be clamped between -100 and 100 (inclusive). (Default: none)": "",
"Brave Search API Key": "مفتاح واجهة برمجة تطبيقات البحث الشجاع",
"By {{name}}": "",
"Bypass Embedding and Retrieval": "",
@ -190,6 +194,7 @@
"Code Interpreter": "",
"Code Interpreter Engine": "",
"Code Interpreter Prompt Template": "",
"Collapse": "",
"Collection": "مجموعة",
"Color": "",
"ComfyUI": "ComfyUI",
@ -208,7 +213,7 @@
"Confirm your new password": "",
"Connect to your own OpenAI compatible API endpoints.": "",
"Connections": "اتصالات",
"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort. (Default: medium)": "",
"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort.": "",
"Contact Admin for WebUI Access": "",
"Content": "الاتصال",
"Content Extraction Engine": "",
@ -218,9 +223,9 @@
"Continue with Email": "",
"Continue with LDAP": "",
"Control how message text is split for TTS requests. 'Punctuation' splits into sentences, 'paragraphs' splits into paragraphs, and 'none' keeps the message as a single string.": "",
"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled. (Default: 1.1)": "",
"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled.": "",
"Controls": "",
"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text. (Default: 5.0)": "",
"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text.": "",
"Copied": "",
"Copied shared chat URL to clipboard!": "تم نسخ عنوان URL للدردشة المشتركة إلى الحافظة",
"Copied to clipboard": "",
@ -245,6 +250,7 @@
"Created At": "أنشئت من",
"Created by": "",
"CSV Import": "",
"Ctrl+Enter to Send": "",
"Current Model": "الموديل المختار",
"Current Password": "كلمة السر الحالية",
"Custom": "مخصص",
@ -353,12 +359,13 @@
"Embedding model set to \"{{embedding_model}}\"": "تم تعيين نموذج التضمين على \"{{embedding_model}}\"",
"Enable API Key": "",
"Enable autocomplete generation for chat messages": "",
"Enable Code Execution": "",
"Enable Code Interpreter": "",
"Enable Community Sharing": "تمكين مشاركة المجتمع",
"Enable Memory Locking (mlock) to prevent model data from being swapped out of RAM. This option locks the model's working set of pages into RAM, ensuring that they will not be swapped out to disk. This can help maintain performance by avoiding page faults and ensuring fast data access.": "",
"Enable Memory Mapping (mmap) to load model data. This option allows the system to use disk storage as an extension of RAM by treating disk files as if they were in RAM. This can improve model performance by allowing for faster data access. However, it may not work correctly with all systems and can consume a significant amount of disk space.": "",
"Enable Message Rating": "",
"Enable Mirostat sampling for controlling perplexity. (Default: 0, 0 = Disabled, 1 = Mirostat, 2 = Mirostat 2.0)": "",
"Enable Mirostat sampling for controlling perplexity.": "",
"Enable New Sign Ups": "تفعيل عمليات التسجيل الجديدة",
"Enabled": "",
"Ensure your CSV file includes 4 columns in this order: Name, Email, Password, Role.": "تأكد من أن ملف CSV الخاص بك يتضمن 4 أعمدة بهذا الترتيب: Name, Email, Password, Role.",
@ -375,6 +382,7 @@
"Enter CFG Scale (e.g. 7.0)": "",
"Enter Chunk Overlap": "أدخل الChunk Overlap",
"Enter Chunk Size": "أدخل Chunk الحجم",
"Enter comma-seperated \"token:bias_value\" pairs (example: 5432:100, 413:-100)": "",
"Enter description": "",
"Enter Document Intelligence Endpoint": "",
"Enter Document Intelligence Key": "",
@ -389,11 +397,13 @@
"Enter Jupyter Token": "",
"Enter Jupyter URL": "",
"Enter Kagi Search API Key": "",
"Enter Key Behavior": "",
"Enter language codes": "أدخل كود اللغة",
"Enter Model ID": "",
"Enter model tag (e.g. {{modelTag}})": "(e.g. {{modelTag}}) أدخل الموديل تاق",
"Enter Mojeek Search API Key": "",
"Enter Number of Steps (e.g. 50)": "(e.g. 50) أدخل عدد الخطوات",
"Enter Perplexity API Key": "",
"Enter proxy URL (e.g. https://user:password@host:port)": "",
"Enter reasoning effort": "",
"Enter Sampler (e.g. Euler a)": "",
@@ -417,6 +427,7 @@
"Enter the public URL of your WebUI. This URL will be used to generate links in the notifications.": "",
"Enter Tika Server URL": "",
"Enter timeout in seconds": "",
"Enter to Send": "",
"Enter Top K": "أدخل Top K",
"Enter URL (e.g. http://127.0.0.1:7860/)": "أدخل الرابط (e.g. http://127.0.0.1:7860/)",
"Enter URL (e.g. http://localhost:11434)": "أدخل URL (e.g. http://localhost:11434)",
@@ -440,9 +451,13 @@
"Example: mail": "",
"Example: ou=users,dc=foo,dc=example": "",
"Example: sAMAccountName or uid or userPrincipalName": "",
"Exceeded the number of seats in your license. Please contact support to increase the number of seats.": "",
"Exclude": "",
"Execute code for analysis": "",
"Expand": "",
"Experimental": "تجريبي",
"Explain": "",
"Explain this section to me in more detail": "",
"Explore the cosmos": "",
"Export": "تصدير",
"Export All Archived Chats": "",
@@ -566,7 +581,7 @@
"Include": "",
"Include `--api-auth` flag when running stable-diffusion-webui": "",
"Include `--api` flag when running stable-diffusion-webui": "قم بتضمين علامة `--api` عند تشغيل stable-diffusion-webui",
"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive. (Default: 0.1)": "",
"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive.": "",
"Info": "معلومات",
"Input commands": "إدخال الأوامر",
"Install from Github URL": "التثبيت من عنوان URL لجيثب",
@@ -624,6 +639,7 @@
"Local": "",
"Local Models": "",
"Location access not allowed": "",
"Logit Bias": "",
"Lost": "",
"LTR": "من جهة اليسار إلى اليمين",
"Made by Open WebUI Community": "OpenWebUI تم إنشاؤه بواسطة مجتمع ",
@@ -764,6 +780,7 @@
"Permission denied when accessing microphone": "",
"Permission denied when accessing microphone: {{error}}": "{{error}} تم رفض الإذن عند الوصول إلى الميكروفون ",
"Permissions": "",
"Perplexity API Key": "",
"Personalization": "التخصيص",
"Pin": "",
"Pinned": "",
@@ -809,7 +826,7 @@
"Reasoning Effort": "",
"Record voice": "سجل صوت",
"Redirecting you to Open WebUI Community": "OpenWebUI إعادة توجيهك إلى مجتمع ",
"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative. (Default: 40)": "",
"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative.": "",
"Refer to yourself as \"User\" (e.g., \"User is learning Spanish\")": "",
"References from": "",
"Refused when it shouldn't have": "رفض عندما لا ينبغي أن يكون",
@@ -918,11 +935,11 @@
"Set the number of worker threads used for computation. This option controls how many threads are used to process incoming requests concurrently. Increasing this value can improve performance under high concurrency workloads but may also consume more CPU resources.": "",
"Set Voice": "ضبط الصوت",
"Set whisper model": "",
"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 0)": "",
"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 1.1)": "",
"Sets how far back for the model to look back to prevent repetition. (Default: 64, 0 = disabled, -1 = num_ctx)": "",
"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt. (Default: random)": "",
"Sets the size of the context window used to generate the next token. (Default: 2048)": "",
"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
"Sets how far back for the model to look back to prevent repetition.": "",
"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt.": "",
"Sets the size of the context window used to generate the next token.": "",
"Sets the stop sequences to use. When this pattern is encountered, the LLM will stop generating text and return. Multiple stop patterns may be set by specifying multiple separate stop parameters in a modelfile.": "",
"Settings": "الاعدادات",
"Settings saved successfully!": "تم حفظ الاعدادات بنجاح!",
@@ -964,7 +981,7 @@
"System Prompt": "موجه النظام",
"Tags Generation": "",
"Tags Generation Prompt": "",
"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting. (default: 1)": "",
"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting.": "",
"Talk to model": "",
"Tap to interrupt": "",
"Tasks": "",
@@ -979,7 +996,7 @@
"Thanks for your feedback!": "شكرا لملاحظاتك!",
"The Application Account DN you bind with for search": "",
"The base to search for users": "",
"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory. (Default: 512)": "",
"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory.": "",
"The developers behind this plugin are passionate volunteers from the community. If you find this plugin helpful, please consider contributing to its development.": "",
"The evaluation leaderboard is based on the Elo rating system and is updated in real-time.": "",
"The LDAP attribute that maps to the mail that users use to sign in.": "",
@@ -988,14 +1005,14 @@
"The maximum file size in MB. If the file size exceeds this limit, the file will not be uploaded.": "",
"The maximum number of files that can be used at once in chat. If the number of files exceeds this limit, the files will not be uploaded.": "",
"The score should be a value between 0.0 (0%) and 1.0 (100%).": "يجب أن تكون النتيجة قيمة تتراوح بين 0.0 (0%) و1.0 (100%).",
"The temperature of the model. Increasing the temperature will make the model answer more creatively. (Default: 0.8)": "",
"The temperature of the model. Increasing the temperature will make the model answer more creatively.": "",
"Theme": "الثيم",
"Thinking...": "",
"This action cannot be undone. Do you wish to continue?": "",
"This ensures that your valuable conversations are securely saved to your backend database. Thank you!": "وهذا يضمن حفظ محادثاتك القيمة بشكل آمن في قاعدة بياناتك الخلفية. شكرًا لك!",
"This is an experimental feature, it may not function as expected and is subject to change at any time.": "",
"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics. (Default: 24)": "",
"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated. (Default: 128)": "",
"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics.": "",
"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated.": "",
"This option will delete all existing files in the collection and replace them with newly uploaded files.": "",
"This response was generated by \"{{model}}\"": "",
"This will delete": "",
@@ -1132,7 +1149,7 @@
"Why?": "",
"Widescreen Mode": "",
"Won": "",
"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text. (Default: 0.9)": "",
"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text.": "",
"Workspace": "مساحة العمل",
"Workspace Permissions": "",
"Write": "",
@@ -1142,6 +1159,7 @@
"Write your model template content here": "",
"Yesterday": "أمس",
"You": "انت",
"You are currently using a trial license. Please contact support to upgrade your license.": "",
"You can only chat with a maximum of {{maxCount}} file(s) at a time.": "",
"You can personalize your interactions with LLMs by adding memories through the 'Manage' button below, making them more helpful and tailored to you.": "",
"You cannot upload an empty file.": "",

View File

@@ -5,6 +5,7 @@
"(e.g. `sh webui.sh --api`)": "(напр. `sh webui.sh --api`)",
"(latest)": "(последна)",
"{{ models }}": "{{ models }}",
"{{COUNT}} hidden lines": "",
"{{COUNT}} Replies": "{{COUNT}} Отговори",
"{{user}}'s Chats": "{{user}}'s чатове",
"{{webUIName}} Backend Required": "{{webUIName}} Изисква се Бекенд",
@@ -51,6 +52,7 @@
"Admins have access to all tools at all times; users need tools assigned per model in the workspace.": "Администраторите имат достъп до всички инструменти по всяко време; потребителите се нуждаят от инструменти, присвоени за всеки модел в работното пространство.",
"Advanced Parameters": "Разширени Параметри",
"Advanced Params": "Разширени параметри",
"All": "",
"All Documents": "Всички Документи",
"All models deleted successfully": "Всички модели са изтрити успешно",
"Allow Chat Controls": "Разреши контроли на чата",
@@ -64,7 +66,7 @@
"Allow Voice Interruption in Call": "Разреши прекъсване на гласа по време на разговор",
"Allowed Endpoints": "Разрешени крайни точки",
"Already have an account?": "Вече имате акаунт?",
"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out. (Default: 0.0)": "Алтернатива на top_p, която цели да осигури баланс между качество и разнообразие. Параметърът p представлява минималната вероятност за разглеждане на токен, спрямо вероятността на най-вероятния токен. Например, при p=0.05 и най-вероятен токен с вероятност 0.9, логитите със стойност по-малка от 0.045 се филтрират. (По подразбиране: 0.0)",
"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out.": "",
"Always": "Винаги",
"Amazing": "Невероятно",
"an assistant": "асистент",
@@ -93,6 +95,7 @@
"Are you sure?": "Сигурни ли сте?",
"Arena Models": "Arena Модели",
"Artifacts": "Артефакти",
"Ask": "",
"Ask a question": "Задайте въпрос",
"Assistant": "Асистент",
"Attach file from knowledge": "",
@@ -127,6 +130,7 @@
"Bing Search V7 Endpoint": "Крайна точка за Bing Search V7",
"Bing Search V7 Subscription Key": "Абонаментен ключ за Bing Search V7",
"Bocha Search API Key": "API ключ за Bocha Search",
"Boosting or penalizing specific tokens for constrained responses. Bias values will be clamped between -100 and 100 (inclusive). (Default: none)": "",
"Brave Search API Key": "API ключ за Brave Search",
"By {{name}}": "От {{name}}",
"Bypass Embedding and Retrieval": "",
@@ -190,6 +194,7 @@
"Code Interpreter": "Интерпретатор на код",
"Code Interpreter Engine": "Двигател на интерпретатора на код",
"Code Interpreter Prompt Template": "Шаблон за промпт на интерпретатора на код",
"Collapse": "",
"Collection": "Колекция",
"Color": "Цвят",
"ComfyUI": "ComfyUI",
@@ -208,7 +213,7 @@
"Confirm your new password": "Потвърдете новата си парола",
"Connect to your own OpenAI compatible API endpoints.": "Свържете се със собствени крайни точки на API, съвместими с OpenAI.",
"Connections": "Връзки",
"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort. (Default: medium)": "Ограничава усилията за разсъждение при модели за разсъждение. Приложимо само за модели за разсъждение от конкретни доставчици, които поддържат усилия за разсъждение. (По подразбиране: средно)",
"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort.": "",
"Contact Admin for WebUI Access": "Свържете се с администратор за достъп до WebUI",
"Content": "Съдържание",
"Content Extraction Engine": "",
@@ -218,9 +223,9 @@
"Continue with Email": "Продължете с имейл",
"Continue with LDAP": "Продължете с LDAP",
"Control how message text is split for TTS requests. 'Punctuation' splits into sentences, 'paragraphs' splits into paragraphs, and 'none' keeps the message as a single string.": "Контролирайте как текстът на съобщението се разделя за TTS заявки. 'Пунктуация' разделя на изречения, 'параграфи' разделя на параграфи, а 'нищо' запазва съобщението като един низ.",
"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled. (Default: 1.1)": "Контролирайте повторението на последователности от токени в генерирания текст. По-висока стойност (напр. 1.5) ще наказва повторенията по-силно, докато по-ниска стойност (напр. 1.1) ще бъде по-снизходителна. При 1 е изключено. (По подразбиране: 1.1)",
"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled.": "",
"Controls": "Контроли",
"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text. (Default: 5.0)": "Контролира баланса между съгласуваност и разнообразие на изхода. По-ниска стойност ще доведе до по-фокусиран и съгласуван текст. (По подразбиране: 5.0)",
"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text.": "",
"Copied": "Копирано",
"Copied shared chat URL to clipboard!": "Копирана е връзката за споделен чат в клипборда!",
"Copied to clipboard": "Копирано в клипборда",
@@ -245,6 +250,7 @@
"Created At": "Създадено на",
"Created by": "Създадено от",
"CSV Import": "Импортиране на CSV",
"Ctrl+Enter to Send": "",
"Current Model": "Текущ модел",
"Current Password": "Текуща Парола",
"Custom": "Персонализиран",
@@ -353,12 +359,13 @@
"Embedding model set to \"{{embedding_model}}\"": "Модел за вграждане е настроен на \"{{embedding_model}}\"",
"Enable API Key": "Активиране на API ключ",
"Enable autocomplete generation for chat messages": "Активиране на автоматично довършване за съобщения в чата",
"Enable Code Execution": "",
"Enable Code Interpreter": "Активиране на интерпретатор на код",
"Enable Community Sharing": "Разрешаване на споделяне в общност",
"Enable Memory Locking (mlock) to prevent model data from being swapped out of RAM. This option locks the model's working set of pages into RAM, ensuring that they will not be swapped out to disk. This can help maintain performance by avoiding page faults and ensuring fast data access.": "Активиране на заключване на паметта (mlock), за да се предотврати изваждането на данните на модела от RAM. Тази опция заключва работния набор от страници на модела в RAM, гарантирайки, че няма да бъдат изхвърлени на диска. Това може да помогне за поддържане на производителността, като се избягват грешки в страниците и се осигурява бърз достъп до данните.",
"Enable Memory Mapping (mmap) to load model data. This option allows the system to use disk storage as an extension of RAM by treating disk files as if they were in RAM. This can improve model performance by allowing for faster data access. However, it may not work correctly with all systems and can consume a significant amount of disk space.": "Активиране на мапиране на паметта (mmap) за зареждане на данни на модела. Тази опция позволява на системата да използва дисковото пространство като разширение на RAM, третирайки дисковите файлове, сякаш са в RAM. Това може да подобри производителността на модела, като позволява по-бърз достъп до данните. Въпреки това, може да не работи правилно с всички системи и може да консумира значително количество дисково пространство.",
"Enable Message Rating": "Активиране на оценяване на съобщения",
"Enable Mirostat sampling for controlling perplexity. (Default: 0, 0 = Disabled, 1 = Mirostat, 2 = Mirostat 2.0)": "Активиране на Mirostat семплиране за контрол на перплексията. (По подразбиране: 0, 0 = Деактивирано, 1 = Mirostat, 2 = Mirostat 2.0)",
"Enable Mirostat sampling for controlling perplexity.": "",
"Enable New Sign Ups": "Включване на нови регистрации",
"Enabled": "Активирано",
"Ensure your CSV file includes 4 columns in this order: Name, Email, Password, Role.": "Уверете се, че вашият CSV файл включва 4 колони в следния ред: Име, Имейл, Парола, Роля.",
@@ -375,6 +382,7 @@
"Enter CFG Scale (e.g. 7.0)": "Въведете CFG Scale (напр. 7.0)",
"Enter Chunk Overlap": "Въведете припокриване на чънкове",
"Enter Chunk Size": "Въведете размер на чънк",
"Enter comma-seperated \"token:bias_value\" pairs (example: 5432:100, 413:-100)": "",
"Enter description": "Въведете описание",
"Enter Document Intelligence Endpoint": "",
"Enter Document Intelligence Key": "",
@@ -389,11 +397,13 @@
"Enter Jupyter Token": "Въведете токен за Jupyter",
"Enter Jupyter URL": "Въведете URL адрес за Jupyter",
"Enter Kagi Search API Key": "Въведете API ключ за Kagi Search",
"Enter Key Behavior": "",
"Enter language codes": "Въведете кодове на езика",
"Enter Model ID": "Въведете ID на модела",
"Enter model tag (e.g. {{modelTag}})": "Въведете таг на модел (напр. {{modelTag}})",
"Enter Mojeek Search API Key": "Въведете API ключ за Mojeek Search",
"Enter Number of Steps (e.g. 50)": "Въведете брой стъпки (напр. 50)",
"Enter Perplexity API Key": "",
"Enter proxy URL (e.g. https://user:password@host:port)": "Въведете URL адрес на прокси (напр. https://потребител:парола@хост:порт)",
"Enter reasoning effort": "Въведете усилие за разсъждение",
"Enter Sampler (e.g. Euler a)": "Въведете семплер (напр. Euler a)",
@@ -417,6 +427,7 @@
"Enter the public URL of your WebUI. This URL will be used to generate links in the notifications.": "Въведете публичния URL адрес на вашия WebUI. Този URL адрес ще бъде използван за генериране на връзки в известията.",
"Enter Tika Server URL": "Въведете URL адрес на Tika сървър",
"Enter timeout in seconds": "",
"Enter to Send": "",
"Enter Top K": "Въведете Top K",
"Enter URL (e.g. http://127.0.0.1:7860/)": "Въведете URL (напр. http://127.0.0.1:7860/)",
"Enter URL (e.g. http://localhost:11434)": "Въведете URL (напр. http://localhost:11434)",
@@ -440,9 +451,13 @@
"Example: mail": "Пример: поща",
"Example: ou=users,dc=foo,dc=example": "Пример: ou=users,dc=foo,dc=example",
"Example: sAMAccountName or uid or userPrincipalName": "Пример: sAMAccountName или uid или userPrincipalName",
"Exceeded the number of seats in your license. Please contact support to increase the number of seats.": "",
"Exclude": "Изключи",
"Execute code for analysis": "Изпълнете код за анализ",
"Expand": "",
"Experimental": "Експериментално",
"Explain": "",
"Explain this section to me in more detail": "",
"Explore the cosmos": "Изследвайте космоса",
"Export": "Износ",
"Export All Archived Chats": "Износ на всички архивирани чатове",
@@ -566,7 +581,7 @@
"Include": "Включи",
"Include `--api-auth` flag when running stable-diffusion-webui": "",
"Include `--api` flag when running stable-diffusion-webui": "",
"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive. (Default: 0.1)": "Влияе върху това колко бързо алгоритъмът реагира на обратната връзка от генерирания текст. По-ниска скорост на обучение ще доведе до по-бавни корекции, докато по-висока скорост на обучение ще направи алгоритъма по-отзивчив. (По подразбиране: 0.1)",
"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive.": "",
"Info": "Информация",
"Input commands": "Въведете команди",
"Install from Github URL": "Инсталиране от URL адреса на Github",
@@ -624,6 +639,7 @@
"Local": "Локално",
"Local Models": "Локални модели",
"Location access not allowed": "",
"Logit Bias": "",
"Lost": "Изгубено",
"LTR": "LTR",
"Made by Open WebUI Community": "Направено от OpenWebUI общността",
@@ -764,6 +780,7 @@
"Permission denied when accessing microphone": "Отказан достъп при опит за достъп до микрофона",
"Permission denied when accessing microphone: {{error}}": "Отказан достъп при опит за достъп до микрофона: {{error}}",
"Permissions": "Разрешения",
"Perplexity API Key": "",
"Personalization": "Персонализация",
"Pin": "Закачи",
"Pinned": "Закачено",
@@ -809,7 +826,7 @@
"Reasoning Effort": "Усилие за разсъждение",
"Record voice": "Записване на глас",
"Redirecting you to Open WebUI Community": "Пренасочване към OpenWebUI общността",
"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative. (Default: 40)": "Намалява вероятността за генериране на безсмислици. По-висока стойност (напр. 100) ще даде по-разнообразни отговори, докато по-ниска стойност (напр. 10) ще бъде по-консервативна. (По подразбиране: 40)",
"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative.": "",
"Refer to yourself as \"User\" (e.g., \"User is learning Spanish\")": "Отнасяйте се към себе си като \"Потребител\" (напр. \"Потребителят учи испански\")",
"References from": "Препратки от",
"Refused when it shouldn't have": "Отказано, когато не трябва да бъде",
@@ -918,11 +935,11 @@
"Set the number of worker threads used for computation. This option controls how many threads are used to process incoming requests concurrently. Increasing this value can improve performance under high concurrency workloads but may also consume more CPU resources.": "Задайте броя работни нишки, използвани за изчисления. Тази опция контролира колко нишки се използват за едновременна обработка на входящи заявки. Увеличаването на тази стойност може да подобри производителността при високи натоварвания с паралелизъм, но може също да консумира повече CPU ресурси.",
"Set Voice": "Задай Глас",
"Set whisper model": "Задай Whisper модел",
"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 0)": "Задава плоско отклонение срещу токени, които са се появили поне веднъж. По-висока стойност (напр. 1.5) ще наказва повторенията по-силно, докато по-ниска стойност (напр. 0.9) ще бъде по-снизходителна. При 0 е деактивирано. (По подразбиране: 0)",
"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 1.1)": "Задава мащабиращо отклонение срещу токени за наказване на повторения, базирано на това колко пъти са се появили. По-висока стойност (напр. 1.5) ще наказва повторенията по-силно, докато по-ниска стойност (напр. 0.9) ще бъде по-снизходителна. При 0 е деактивирано. (По подразбиране: 1.1)",
"Sets how far back for the model to look back to prevent repetition. (Default: 64, 0 = disabled, -1 = num_ctx)": "Задава колко назад моделът да гледа, за да предотврати повторение. (По подразбиране: 64, 0 = деактивирано, -1 = num_ctx)",
"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt. (Default: random)": "Задава семето на случайното число, което да се използва за генериране. Задаването на конкретно число ще накара модела да генерира същия текст за същата подкана. (По подразбиране: случайно)",
"Sets the size of the context window used to generate the next token. (Default: 2048)": "Задава размера на контекстния прозорец, използван за генериране на следващия токен. (По подразбиране: 2048)",
"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
"Sets how far back for the model to look back to prevent repetition.": "",
"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt.": "",
"Sets the size of the context window used to generate the next token.": "",
"Sets the stop sequences to use. When this pattern is encountered, the LLM will stop generating text and return. Multiple stop patterns may be set by specifying multiple separate stop parameters in a modelfile.": "Задава последователностите за спиране, които да се използват. Когато се срещне този модел, LLM ще спре да генерира текст и ще се върне. Множество модели за спиране могат да бъдат зададени чрез определяне на множество отделни параметри за спиране в моделния файл.",
"Settings": "Настройки",
"Settings saved successfully!": "Настройките са запазени успешно!",
@@ -964,7 +981,7 @@
"System Prompt": "Системен Промпт",
"Tags Generation": "Генериране на тагове",
"Tags Generation Prompt": "Промпт за генериране на тагове",
"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting. (default: 1)": "Безопашковото семплиране се използва за намаляване на влиянието на по-малко вероятните токени от изхода. По-висока стойност (напр. 2.0) ще намали влиянието повече, докато стойност 1.0 деактивира тази настройка. (по подразбиране: 1)",
"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting.": "",
"Talk to model": "",
"Tap to interrupt": "Докоснете за прекъсване",
"Tasks": "Задачи",
@@ -979,7 +996,7 @@
"Thanks for your feedback!": "Благодарим ви за вашия отзив!",
"The Application Account DN you bind with for search": "DN на акаунта на приложението, с който се свързвате за търсене",
"The base to search for users": "Базата за търсене на потребители",
"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory. (Default: 512)": "",
"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory.": "",
"The developers behind this plugin are passionate volunteers from the community. If you find this plugin helpful, please consider contributing to its development.": "Разработчиците зад този плъгин са страстни доброволци от общността. Ако намирате този плъгин полезен, моля, обмислете да допринесете за неговото развитие.",
"The evaluation leaderboard is based on the Elo rating system and is updated in real-time.": "Класацията за оценка се базира на рейтинговата система Elo и се обновява в реално време.",
"The LDAP attribute that maps to the mail that users use to sign in.": "LDAP атрибутът, който съответства на имейла, който потребителите използват за вписване.",
@@ -988,14 +1005,14 @@
"The maximum file size in MB. If the file size exceeds this limit, the file will not be uploaded.": "Максималният размер на файла в MB. Ако размерът на файла надвишава този лимит, файлът няма да бъде качен.",
"The maximum number of files that can be used at once in chat. If the number of files exceeds this limit, the files will not be uploaded.": "Максималният брой файлове, които могат да се използват едновременно в чата. Ако броят на файловете надвишава този лимит, файловете няма да бъдат качени.",
"The score should be a value between 0.0 (0%) and 1.0 (100%).": "Резултатът трябва да бъде стойност между 0.0 (0%) и 1.0 (100%).",
"The temperature of the model. Increasing the temperature will make the model answer more creatively. (Default: 0.8)": "Температурата на модела. Увеличаването на температурата ще накара модела да отговаря по-креативно. (По подразбиране: 0.8)",
"The temperature of the model. Increasing the temperature will make the model answer more creatively.": "",
"Theme": "Тема",
"Thinking...": "Мисля...",
"This action cannot be undone. Do you wish to continue?": "Това действие не може да бъде отменено. Желаете ли да продължите?",
"This ensures that your valuable conversations are securely saved to your backend database. Thank you!": "Това гарантира, че ценните ви разговори се запазват сигурно във вашата бекенд база данни. Благодарим ви!",
"This is an experimental feature, it may not function as expected and is subject to change at any time.": "Това е експериментална функция, може да не работи според очакванията и подлежи на промяна по всяко време.",
"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics. (Default: 24)": "Тази опция контролира колко токена се запазват при обновяване на контекста. Например, ако е зададено на 2, последните 2 токена от контекста на разговора ще бъдат запазени. Запазването на контекста може да помогне за поддържане на непрекъснатостта на разговора, но може да намали способността за отговор на нови теми. (По подразбиране: 24)",
"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated. (Default: 128)": "",
"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics.": "",
"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated.": "",
"This option will delete all existing files in the collection and replace them with newly uploaded files.": "Тази опция ще изтрие всички съществуващи файлове в колекцията и ще ги замени с новокачени файлове.",
"This response was generated by \"{{model}}\"": "Този отговор беше генериран от \"{{model}}\"",
"This will delete": "Това ще изтрие",
@@ -1132,7 +1149,7 @@
"Why?": "Защо?",
"Widescreen Mode": "Широкоекранен режим",
"Won": "Спечелено",
"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text. (Default: 0.9)": "Работи заедно с top-k. По-висока стойност (напр. 0.95) ще доведе до по-разнообразен текст, докато по-ниска стойност (напр. 0.5) ще генерира по-фокусиран и консервативен текст. (По подразбиране: 0.9)",
"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text.": "",
"Workspace": "Работно пространство",
"Workspace Permissions": "Разрешения за работното пространство",
"Write": "Напиши",
@@ -1142,6 +1159,7 @@
"Write your model template content here": "Напишете съдържанието на вашия шаблон за модел тук",
"Yesterday": "Вчера",
"You": "Вие",
"You are currently using a trial license. Please contact support to upgrade your license.": "",
"You can only chat with a maximum of {{maxCount}} file(s) at a time.": "Можете да чатите с максимум {{maxCount}} файл(а) наведнъж.",
"You can personalize your interactions with LLMs by adding memories through the 'Manage' button below, making them more helpful and tailored to you.": "Можете да персонализирате взаимодействията си с LLM-и, като добавите спомени чрез бутона 'Управление' по-долу, правейки ги по-полезни и съобразени с вас.",
"You cannot upload an empty file.": "Не можете да качите празен файл.",


@@ -5,6 +5,7 @@
"(e.g. `sh webui.sh --api`)": "(যেমন `sh webui.sh --api`)",
"(latest)": "(সর্বশেষ)",
"{{ models }}": "{{ models }}",
"{{COUNT}} hidden lines": "",
"{{COUNT}} Replies": "",
"{{user}}'s Chats": "{{user}}-এর চ্যাটসমূহ",
"{{webUIName}} Backend Required": "{{webUIName}} ব্যাকএন্ড আবশ্যক",
@@ -51,6 +52,7 @@
"Admins have access to all tools at all times; users need tools assigned per model in the workspace.": "",
"Advanced Parameters": "এডভান্সড প্যারামিটার্স",
"Advanced Params": "অ্যাডভান্সড প্যারাম",
"All": "",
"All Documents": "সব ডকুমেন্ট",
"All models deleted successfully": "",
"Allow Chat Controls": "",
@@ -64,7 +66,7 @@
"Allow Voice Interruption in Call": "",
"Allowed Endpoints": "",
"Already have an account?": "আগে থেকেই একাউন্ট আছে?",
"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out. (Default: 0.0)": "",
"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out.": "",
"Always": "",
"Amazing": "",
"an assistant": "একটা এসিস্ট্যান্ট",
@@ -93,6 +95,7 @@
"Are you sure?": "আপনি নিশ্চিত?",
"Arena Models": "",
"Artifacts": "",
"Ask": "",
"Ask a question": "",
"Assistant": "",
"Attach file from knowledge": "",
@@ -127,6 +130,7 @@
"Bing Search V7 Endpoint": "",
"Bing Search V7 Subscription Key": "",
"Bocha Search API Key": "",
"Boosting or penalizing specific tokens for constrained responses. Bias values will be clamped between -100 and 100 (inclusive). (Default: none)": "",
"Brave Search API Key": "Brave Search API কী",
"By {{name}}": "",
"Bypass Embedding and Retrieval": "",
@@ -190,6 +194,7 @@
"Code Interpreter": "",
"Code Interpreter Engine": "",
"Code Interpreter Prompt Template": "",
"Collapse": "",
"Collection": "সংগ্রহ",
"Color": "",
"ComfyUI": "ComfyUI",
@@ -208,7 +213,7 @@
"Confirm your new password": "",
"Connect to your own OpenAI compatible API endpoints.": "",
"Connections": "কানেকশনগুলো",
"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort. (Default: medium)": "",
"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort.": "",
"Contact Admin for WebUI Access": "",
"Content": "বিষয়বস্তু",
"Content Extraction Engine": "",
@@ -218,9 +223,9 @@
"Continue with Email": "",
"Continue with LDAP": "",
"Control how message text is split for TTS requests. 'Punctuation' splits into sentences, 'paragraphs' splits into paragraphs, and 'none' keeps the message as a single string.": "",
"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled. (Default: 1.1)": "",
"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled.": "",
"Controls": "",
"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text. (Default: 5.0)": "",
"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text.": "",
"Copied": "",
"Copied shared chat URL to clipboard!": "শেয়ার করা চ্যাটের URL ক্লিপবোর্ডে কপি করা হয়েছে!",
"Copied to clipboard": "",
@@ -245,6 +250,7 @@
"Created At": "নির্মানকাল",
"Created by": "",
"CSV Import": "",
"Ctrl+Enter to Send": "",
"Current Model": "বর্তমান মডেল",
"Current Password": "বর্তমান পাসওয়ার্ড",
"Custom": "কাস্টম",
@@ -353,12 +359,13 @@
"Embedding model set to \"{{embedding_model}}\"": "এমবেডিং মডেল সেট করা হয়েছে - \"{{embedding_model}}\"",
"Enable API Key": "",
"Enable autocomplete generation for chat messages": "",
"Enable Code Execution": "",
"Enable Code Interpreter": "",
"Enable Community Sharing": "সম্প্রদায় শেয়ারকরণ সক্ষম করুন",
"Enable Memory Locking (mlock) to prevent model data from being swapped out of RAM. This option locks the model's working set of pages into RAM, ensuring that they will not be swapped out to disk. This can help maintain performance by avoiding page faults and ensuring fast data access.": "",
"Enable Memory Mapping (mmap) to load model data. This option allows the system to use disk storage as an extension of RAM by treating disk files as if they were in RAM. This can improve model performance by allowing for faster data access. However, it may not work correctly with all systems and can consume a significant amount of disk space.": "",
"Enable Message Rating": "",
"Enable Mirostat sampling for controlling perplexity. (Default: 0, 0 = Disabled, 1 = Mirostat, 2 = Mirostat 2.0)": "",
"Enable Mirostat sampling for controlling perplexity.": "",
"Enable New Sign Ups": "নতুন সাইনআপ চালু করুন",
"Enabled": "",
"Ensure your CSV file includes 4 columns in this order: Name, Email, Password, Role.": "আপনার সিএসভি ফাইলটিতে এই ক্রমে 4 টি কলাম অন্তর্ভুক্ত রয়েছে তা নিশ্চিত করুন: নাম, ইমেল, পাসওয়ার্ড, ভূমিকা।",
@@ -375,6 +382,7 @@
"Enter CFG Scale (e.g. 7.0)": "",
"Enter Chunk Overlap": "চাঙ্ক ওভারল্যাপ লিখুন",
"Enter Chunk Size": "চাংক সাইজ লিখুন",
"Enter comma-seperated \"token:bias_value\" pairs (example: 5432:100, 413:-100)": "",
"Enter description": "",
"Enter Document Intelligence Endpoint": "",
"Enter Document Intelligence Key": "",
@@ -389,11 +397,13 @@
"Enter Jupyter Token": "",
"Enter Jupyter URL": "",
"Enter Kagi Search API Key": "",
"Enter Key Behavior": "",
"Enter language codes": "ল্যাঙ্গুয়েজ কোড লিখুন",
"Enter Model ID": "",
"Enter model tag (e.g. {{modelTag}})": "মডেল ট্যাগ লিখুন (e.g. {{modelTag}})",
"Enter Mojeek Search API Key": "",
"Enter Number of Steps (e.g. 50)": "ধাপের সংখ্যা দিন (যেমন: 50)",
"Enter Perplexity API Key": "",
"Enter proxy URL (e.g. https://user:password@host:port)": "",
"Enter reasoning effort": "",
"Enter Sampler (e.g. Euler a)": "",
@@ -417,6 +427,7 @@
"Enter the public URL of your WebUI. This URL will be used to generate links in the notifications.": "",
"Enter Tika Server URL": "",
"Enter timeout in seconds": "",
"Enter to Send": "",
"Enter Top K": "Top K লিখুন",
"Enter URL (e.g. http://127.0.0.1:7860/)": "ইউআরএল দিন (যেমন http://127.0.0.1:7860/)",
"Enter URL (e.g. http://localhost:11434)": "ইউআরএল দিন (যেমন http://localhost:11434)",
@@ -440,9 +451,13 @@
"Example: mail": "",
"Example: ou=users,dc=foo,dc=example": "",
"Example: sAMAccountName or uid or userPrincipalName": "",
"Exceeded the number of seats in your license. Please contact support to increase the number of seats.": "",
"Exclude": "",
"Execute code for analysis": "",
"Expand": "",
"Experimental": "পরিক্ষামূলক",
"Explain": "",
"Explain this section to me in more detail": "",
"Explore the cosmos": "",
"Export": "রপ্তানি",
"Export All Archived Chats": "",
@@ -566,7 +581,7 @@
"Include": "",
"Include `--api-auth` flag when running stable-diffusion-webui": "",
"Include `--api` flag when running stable-diffusion-webui": "stable-diffusion-webui চালু করার সময় `--api` ফ্ল্যাগ সংযুক্ত করুন",
"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive. (Default: 0.1)": "",
"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive.": "",
"Info": "তথ্য",
"Input commands": "ইনপুট কমান্ডস",
"Install from Github URL": "Github URL থেকে ইনস্টল করুন",
@@ -624,6 +639,7 @@
"Local": "",
"Local Models": "",
"Location access not allowed": "",
"Logit Bias": "",
"Lost": "",
"LTR": "LTR",
"Made by Open WebUI Community": "Open WebUI কমিউনিটি কর্তৃক নির্মিত",
@@ -764,6 +780,7 @@
"Permission denied when accessing microphone": "",
"Permission denied when accessing microphone: {{error}}": "মাইক্রোফোন ব্যবহারের অনুমতি পাওয়া যায়নি: {{error}}",
"Permissions": "",
"Perplexity API Key": "",
"Personalization": "ব্যক্তিগতকরণ",
"Pin": "",
"Pinned": "",
@@ -809,7 +826,7 @@
"Reasoning Effort": "",
"Record voice": "ভয়েস রেকর্ড করুন",
"Redirecting you to Open WebUI Community": "আপনাকে Open WebUI কমিউনিটিতে পাঠানো হচ্ছে",
"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative. (Default: 40)": "",
"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative.": "",
"Refer to yourself as \"User\" (e.g., \"User is learning Spanish\")": "",
"References from": "",
"Refused when it shouldn't have": "যখন প্রত্যাখ্যান করা উচিত ছিল না তখন প্রত্যাখ্যান করেছে",
@@ -918,11 +935,11 @@
"Set the number of worker threads used for computation. This option controls how many threads are used to process incoming requests concurrently. Increasing this value can improve performance under high concurrency workloads but may also consume more CPU resources.": "",
"Set Voice": "কন্ঠস্বর নির্ধারণ করুন",
"Set whisper model": "",
"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 0)": "",
"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 1.1)": "",
"Sets how far back for the model to look back to prevent repetition. (Default: 64, 0 = disabled, -1 = num_ctx)": "",
"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt. (Default: random)": "",
"Sets the size of the context window used to generate the next token. (Default: 2048)": "",
"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
"Sets how far back for the model to look back to prevent repetition.": "",
"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt.": "",
"Sets the size of the context window used to generate the next token.": "",
"Sets the stop sequences to use. When this pattern is encountered, the LLM will stop generating text and return. Multiple stop patterns may be set by specifying multiple separate stop parameters in a modelfile.": "",
"Settings": "সেটিংসমূহ",
"Settings saved successfully!": "সেটিংগুলো সফলভাবে সংরক্ষিত হয়েছে",
@@ -964,7 +981,7 @@
"System Prompt": "সিস্টেম প্রম্পট",
"Tags Generation": "",
"Tags Generation Prompt": "",
"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting. (default: 1)": "",
"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting.": "",
"Talk to model": "",
"Tap to interrupt": "",
"Tasks": "",
@@ -979,7 +996,7 @@
"Thanks for your feedback!": "আপনার মতামতের জন্য ধন্যবাদ!",
"The Application Account DN you bind with for search": "",
"The base to search for users": "",
"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory. (Default: 512)": "",
"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory.": "",
"The developers behind this plugin are passionate volunteers from the community. If you find this plugin helpful, please consider contributing to its development.": "",
"The evaluation leaderboard is based on the Elo rating system and is updated in real-time.": "",
"The LDAP attribute that maps to the mail that users use to sign in.": "",
@@ -988,14 +1005,14 @@
"The maximum file size in MB. If the file size exceeds this limit, the file will not be uploaded.": "",
"The maximum number of files that can be used at once in chat. If the number of files exceeds this limit, the files will not be uploaded.": "",
"The score should be a value between 0.0 (0%) and 1.0 (100%).": "স্কোর 0.0 (0%) এবং 1.0 (100%) এর মধ্যে একটি মান হওয়া উচিত।",
"The temperature of the model. Increasing the temperature will make the model answer more creatively. (Default: 0.8)": "",
"The temperature of the model. Increasing the temperature will make the model answer more creatively.": "",
"Theme": "থিম",
"Thinking...": "",
"This action cannot be undone. Do you wish to continue?": "",
"This ensures that your valuable conversations are securely saved to your backend database. Thank you!": "এটা নিশ্চিত করে যে, আপনার গুরুত্বপূর্ণ আলোচনা নিরাপদে আপনার ব্যাকএন্ড ডেটাবেজে সংরক্ষিত আছে। ধন্যবাদ!",
"This is an experimental feature, it may not function as expected and is subject to change at any time.": "",
"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics. (Default: 24)": "",
"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated. (Default: 128)": "",
"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics.": "",
"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated.": "",
"This option will delete all existing files in the collection and replace them with newly uploaded files.": "",
"This response was generated by \"{{model}}\"": "",
"This will delete": "",
@@ -1132,7 +1149,7 @@
"Why?": "",
"Widescreen Mode": "",
"Won": "",
"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text. (Default: 0.9)": "",
"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text.": "",
"Workspace": "ওয়ার্কস্পেস",
"Workspace Permissions": "",
"Write": "",
@@ -1142,6 +1159,7 @@
"Write your model template content here": "",
"Yesterday": "গতকাল",
"You": "আপনি",
"You are currently using a trial license. Please contact support to upgrade your license.": "",
"You can only chat with a maximum of {{maxCount}} file(s) at a time.": "",
"You can personalize your interactions with LLMs by adding memories through the 'Manage' button below, making them more helpful and tailored to you.": "",
"You cannot upload an empty file.": "",


@@ -5,6 +5,7 @@
"(e.g. `sh webui.sh --api`)": "(p. ex. `sh webui.sh --api`)",
"(latest)": "(últim)",
"{{ models }}": "{{ models }}",
"{{COUNT}} hidden lines": "{{COUNT}} línies ocultes",
"{{COUNT}} Replies": "{{COUNT}} respostes",
"{{user}}'s Chats": "Els xats de {{user}}",
"{{webUIName}} Backend Required": "El Backend de {{webUIName}} és necessari",
@@ -13,7 +14,7 @@
"A task model is used when performing tasks such as generating titles for chats and web search queries": "Un model de tasca s'utilitza quan es realitzen tasques com ara generar títols per a xats i consultes de cerca per a la web",
"a user": "un usuari",
"About": "Sobre",
"Accept autocomplete generation / Jump to prompt variable": "",
"Accept autocomplete generation / Jump to prompt variable": "Acceptar la generació autocompletada / Saltar a la variable d'indicació",
"Access": "Accés",
"Access Control": "Control d'accés",
"Accessible to all users": "Accessible a tots els usuaris",
@@ -51,6 +52,7 @@
"Admins have access to all tools at all times; users need tools assigned per model in the workspace.": "Els administradors tenen accés a totes les eines en tot moment; els usuaris necessiten eines assignades per model a l'espai de treball.",
"Advanced Parameters": "Paràmetres avançats",
"Advanced Params": "Paràmetres avançats",
"All": "Tots",
"All Documents": "Tots els documents",
"All models deleted successfully": "Tots els models s'han eliminat correctament",
"Allow Chat Controls": "Permetre els controls de xat",
@@ -64,7 +66,7 @@
"Allow Voice Interruption in Call": "Permetre la interrupció de la veu en una trucada",
"Allowed Endpoints": "Punts d'accés permesos",
"Already have an account?": "Ja tens un compte?",
"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out. (Default: 0.0)": "Alternativa al top_p, i pretén garantir un equilibri de qualitat i varietat. El paràmetre p representa la probabilitat mínima que es consideri un token, en relació amb la probabilitat del token més probable. Per exemple, amb p=0,05 i el token més probable amb una probabilitat de 0,9, es filtren els logits amb un valor inferior a 0,045. (Per defecte: 0.0)",
"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out.": "Alternativa al top_p, i pretén garantir un equilibri de qualitat i varietat. El paràmetre p representa la probabilitat mínima que es consideri un token, en relació amb la probabilitat del token més probable. Per exemple, amb p=0,05 i el token més probable amb una probabilitat de 0,9, es filtren els logits amb un valor inferior a 0,045.",
"Always": "Sempre",
"Amazing": "Al·lucinant",
"an assistant": "un assistent",
@@ -86,16 +88,17 @@
"Archive All Chats": "Arxiva tots els xats",
"Archived Chats": "Xats arxivats",
"archived-chat-export": "archived-chat-export",
"Are you sure you want to clear all memories? This action cannot be undone.": "",
"Are you sure you want to clear all memories? This action cannot be undone.": "Estàs segur que vols esborrar totes les memòries? Aquesta acció no es pot desfer.",
"Are you sure you want to delete this channel?": "Estàs segur que vols eliminar aquest canal?",
"Are you sure you want to delete this message?": "Estàs segur que vols eliminar aquest missatge?",
"Are you sure you want to unarchive all archived chats?": "Estàs segur que vols desarxivar tots els xats arxivats?",
"Are you sure?": "Estàs segur?",
"Arena Models": "Models de l'Arena",
"Artifacts": "Artefactes",
"Ask": "Preguntar",
"Ask a question": "Fer una pregunta",
"Assistant": "Assistent",
"Attach file from knowledge": "",
"Attach file from knowledge": "Adjuntar un fitxer del coneixement",
"Attention to detail": "Atenció al detall",
"Attribute for Mail": "Atribut per al Correu",
"Attribute for Username": "Atribut per al Nom d'usuari",
@@ -127,9 +130,10 @@
"Bing Search V7 Endpoint": "Punt de connexió a Bing Search V7",
"Bing Search V7 Subscription Key": "Clau de subscripció a Bing Search V7",
"Bocha Search API Key": "Clau API de Bocha Search",
"Boosting or penalizing specific tokens for constrained responses. Bias values will be clamped between -100 and 100 (inclusive). (Default: none)": "Potenciar o penalitzar tokens específics per a respostes limitades. Els valors de biaix es fixaran entre -100 i 100 (inclosos). (Per defecte: cap)",
"Brave Search API Key": "Clau API de Brave Search",
"By {{name}}": "Per {{name}}",
"Bypass Embedding and Retrieval": "",
"Bypass Embedding and Retrieval": "Desactivar l'Embedding i el Retrieval",
"Bypass SSL verification for Websites": "Desactivar la verificació SSL per a l'accés a Internet",
"Calendar": "Calendari",
"Call": "Trucada",
@@ -163,7 +167,7 @@
"Ciphers": "Xifradors",
"Citation": "Cita",
"Clear memory": "Esborrar la memòria",
"Clear Memory": "",
"Clear Memory": "Esborrar la memòria",
"click here": "prem aquí",
"Click here for filter guides.": "Clica aquí per filtrar les guies.",
"Click here for help.": "Clica aquí per obtenir ajuda.",
@@ -190,6 +194,7 @@
"Code Interpreter": "Intèrpret de codi",
"Code Interpreter Engine": "Motor de l'intèrpret de codi",
"Code Interpreter Prompt Template": "Plantilla de la indicació de l'intèrpret de codi",
"Collapse": "Col·lapsar",
"Collection": "Col·lecció",
"Color": "Color",
"ComfyUI": "ComfyUI",
@@ -208,19 +213,19 @@
"Confirm your new password": "Confirma la teva nova contrasenya",
"Connect to your own OpenAI compatible API endpoints.": "Connecta als teus propis punts de connexió de l'API compatible amb OpenAI",
"Connections": "Connexions",
"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort. (Default: medium)": "Restringeix l'esforç de raonament dels models de raonament. Només aplicable a models de raonament de proveïdors específics que donen suport a l'esforç de raonament. (Per defecte: mitjà)",
"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort.": "Restringeix l'esforç de raonament dels models de raonament. Només aplicable a models de raonament de proveïdors específics que donen suport a l'esforç de raonament.",
"Contact Admin for WebUI Access": "Posa't en contacte amb l'administrador per accedir a WebUI",
"Content": "Contingut",
"Content Extraction Engine": "",
"Content Extraction Engine": "Motor d'extracció de contingut",
"Context Length": "Mida del context",
"Continue Response": "Continuar la resposta",
"Continue with {{provider}}": "Continuar amb {{provider}}",
"Continue with Email": "Continuar amb el correu",
"Continue with LDAP": "Continuar amb LDAP",
"Control how message text is split for TTS requests. 'Punctuation' splits into sentences, 'paragraphs' splits into paragraphs, and 'none' keeps the message as a single string.": "Controlar com es divideix el text del missatge per a les sol·licituds TTS. 'Puntuació' divideix en frases, 'paràgrafs' divideix en paràgrafs i 'cap' manté el missatge com una cadena única.",
"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled. (Default: 1.1)": "Controlar la repetició de seqüències de tokens en el text generat. Un valor més alt (p. ex., 1,5) penalitzarà les repeticions amb més força, mentre que un valor més baix (p. ex., 1,1) serà més indulgent. A l'1, està desactivat. (Per defecte: 1.1)",
"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled.": "Controlar la repetició de seqüències de tokens en el text generat. Un valor més alt (p. ex., 1,5) penalitzarà les repeticions amb més força, mentre que un valor més baix (p. ex., 1,1) serà més indulgent. A l'1, està desactivat.",
"Controls": "Controls",
"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text. (Default: 5.0)": "Controlar l'equilibri entre la coherència i la diversitat de la sortida. Un valor més baix donarà lloc a un text més enfocat i coherent. (Per defecte: 5.0)",
"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text.": "Controla l'equilibri entre la coherència i la diversitat de la sortida. Un valor més baix donarà lloc a un text més enfocat i coherent.",
"Copied": "Copiat",
"Copied shared chat URL to clipboard!": "S'ha copiat l'URL compartida al porta-retalls!",
"Copied to clipboard": "Copiat al porta-retalls",
@@ -245,10 +250,11 @@
"Created At": "Creat el",
"Created by": "Creat per",
"CSV Import": "Importar CSV",
"Ctrl+Enter to Send": "Ctrl+Enter per enviar",
"Current Model": "Model actual",
"Current Password": "Contrasenya actual",
"Custom": "Personalitzat",
"Danger Zone": "",
"Danger Zone": "Zona de perill",
"Dark": "Fosc",
"Database": "Base de dades",
"December": "Desembre",
@@ -309,8 +315,8 @@
"Do not install functions from sources you do not fully trust.": "No instal·lis funcions de fonts en què no confiïs plenament.",
"Do not install tools from sources you do not fully trust.": "No instal·lis eines de fonts en què no confiïs plenament.",
"Document": "Document",
"Document Intelligence": "",
"Document Intelligence endpoint and key required.": "",
"Document Intelligence": "Document Intelligence",
"Document Intelligence endpoint and key required.": "Fa falta un punt de connexió i una clau per a Document Intelligence.",
"Documentation": "Documentació",
"Documents": "Documents",
"does not make any external connections, and your data stays securely on your locally hosted server.": "no realitza connexions externes, i les teves dades romanen segures al teu servidor allotjat localment.",
@@ -346,19 +352,20 @@
"ElevenLabs": "ElevenLabs",
"Email": "Correu electrònic",
"Embark on adventures": "Embarcar en aventures",
"Embedding": "",
"Embedding": "Incrustació",
"Embedding Batch Size": "Mida del lot d'incrustació",
"Embedding Model": "Model d'incrustació",
"Embedding Model Engine": "Motor de model d'incrustació",
"Embedding model set to \"{{embedding_model}}\"": "Model d'incrustació configurat a \"{{embedding_model}}\"",
"Enable API Key": "Activar la Clau API",
"Enable autocomplete generation for chat messages": "Activar la generació automàtica per als missatges del xat",
"Enable Code Execution": "",
"Enable Code Interpreter": "Activar l'intèrpret de codi",
"Enable Community Sharing": "Activar l'ús compartit amb la comunitat",
"Enable Memory Locking (mlock) to prevent model data from being swapped out of RAM. This option locks the model's working set of pages into RAM, ensuring that they will not be swapped out to disk. This can help maintain performance by avoiding page faults and ensuring fast data access.": "Activar el bloqueig de memòria (mlock) per evitar que les dades del model s'intercanviïn fora de la memòria RAM. Aquesta opció bloqueja el conjunt de pàgines de treball del model a la memòria RAM, assegurant-se que no s'intercanviaran al disc. Això pot ajudar a mantenir el rendiment evitant errors de pàgina i garantint un accés ràpid a les dades.",
"Enable Memory Mapping (mmap) to load model data. This option allows the system to use disk storage as an extension of RAM by treating disk files as if they were in RAM. This can improve model performance by allowing for faster data access. However, it may not work correctly with all systems and can consume a significant amount of disk space.": "Activar l'assignació de memòria (mmap) per carregar les dades del model. Aquesta opció permet que el sistema utilitzi l'emmagatzematge en disc com a extensió de la memòria RAM tractant els fitxers de disc com si estiguessin a la memòria RAM. Això pot millorar el rendiment del model permetent un accés més ràpid a les dades. Tanmateix, és possible que no funcioni correctament amb tots els sistemes i pot consumir una quantitat important d'espai en disc.",
"Enable Message Rating": "Permetre la qualificació de missatges",
"Enable Mirostat sampling for controlling perplexity. (Default: 0, 0 = Disabled, 1 = Mirostat, 2 = Mirostat 2.0)": "Activar el mostreig de Mirostat per controlar la perplexitat. (Per defecte: 0, 0 = Inhabilitat, 1 = Mirostat, 2 = Mirostat 2.0)",
"Enable Mirostat sampling for controlling perplexity.": "Activar el mostreig de Mirostat per controlar la perplexitat.",
"Enable New Sign Ups": "Permetre nous registres",
"Enabled": "Habilitat",
"Ensure your CSV file includes 4 columns in this order: Name, Email, Password, Role.": "Assegura't que el teu fitxer CSV inclou 4 columnes en aquest ordre: Nom, Correu electrònic, Contrasenya, Rol.",
@@ -375,9 +382,10 @@
"Enter CFG Scale (e.g. 7.0)": "Entra l'escala CFG (p.ex. 7.0)",
"Enter Chunk Overlap": "Introdueix la mida de solapament de blocs",
"Enter Chunk Size": "Introdueix la mida del bloc",
"Enter comma-seperated \"token:bias_value\" pairs (example: 5432:100, 413:-100)": "Introdueix parelles de \"token:valor de biaix\" separats per comes (exemple: 5432:100, 413:-100)",
"Enter description": "Introdueix la descripció",
"Enter Document Intelligence Endpoint": "",
"Enter Document Intelligence Key": "",
"Enter Document Intelligence Endpoint": "Introdueix el punt de connexió de Document Intelligence",
"Enter Document Intelligence Key": "Introdueix la clau de Document Intelligence",
"Enter domains separated by commas (e.g., example.com,site.org)": "Introdueix els dominis separats per comes (p. ex. example.com,site.org)",
"Enter Exa API Key": "Introdueix la clau API d'Exa",
"Enter Github Raw URL": "Introdueix l'URL en brut de Github",
@@ -389,11 +397,13 @@
"Enter Jupyter Token": "Introdueix el token de Jupyter",
"Enter Jupyter URL": "Introdueix la URL de Jupyter",
"Enter Kagi Search API Key": "Introdueix la clau API de Kagi Search",
"Enter Key Behavior": "Comportament de la tecla Enter",
"Enter language codes": "Introdueix els codis d'idioma",
"Enter Model ID": "Introdueix l'identificador del model",
"Enter model tag (e.g. {{modelTag}})": "Introdueix l'etiqueta del model (p. ex. {{modelTag}})",
"Enter Mojeek Search API Key": "Introdueix la clau API de Mojeek Search",
"Enter Number of Steps (e.g. 50)": "Introdueix el nombre de passos (p. ex. 50)",
"Enter Perplexity API Key": "Introdueix la clau API de Perplexity",
"Enter proxy URL (e.g. https://user:password@host:port)": "Introdueix l'URL del proxy (p. ex. https://user:password@host:port)",
"Enter reasoning effort": "Introdueix l'esforç de raonament",
"Enter Sampler (e.g. Euler a)": "Introdueix el mostrejador (p.ex. Euler a)",
@@ -417,6 +427,7 @@
"Enter the public URL of your WebUI. This URL will be used to generate links in the notifications.": "Entra la URL pública de WebUI. Aquesta URL s'utilitzarà per generar els enllaços en les notificacions.",
"Enter Tika Server URL": "Introdueix l'URL del servidor Tika",
"Enter timeout in seconds": "Entra el temps màxim en segons",
"Enter to Send": "Enter per enviar",
"Enter Top K": "Introdueix Top K",
"Enter URL (e.g. http://127.0.0.1:7860/)": "Introdueix l'URL (p. ex. http://127.0.0.1:7860/)",
"Enter URL (e.g. http://localhost:11434)": "Introdueix l'URL (p. ex. http://localhost:11434)",
@@ -440,9 +451,13 @@
"Example: mail": "Exemple: mail",
"Example: ou=users,dc=foo,dc=example": "Exemple: ou=users,dc=foo,dc=example",
"Example: sAMAccountName or uid or userPrincipalName": "Exemple: sAMAccountName o uid o userPrincipalName",
"Exceeded the number of seats in your license. Please contact support to increase the number of seats.": "S'ha superat el nombre de places a la vostra llicència. Poseu-vos en contacte amb el servei d'assistència per augmentar el nombre de places.",
"Exclude": "Excloure",
"Execute code for analysis": "Executa el codi per analitzar-lo",
"Execute code for analysis": "Executar el codi per analitzar-lo",
"Expand": "Expandir",
"Experimental": "Experimental",
"Explain": "Explicar",
"Explain this section to me in more detail": "Explica'm aquesta secció amb més detall",
"Explore the cosmos": "Explorar el cosmos",
"Export": "Exportar",
"Export All Archived Chats": "Exportar tots els xats arxivats",
@@ -515,7 +530,7 @@
"General": "General",
"Generate an image": "Generar una imatge",
"Generate Image": "Generar imatge",
"Generate prompt pair": "",
"Generate prompt pair": "Generar parella d'indicacions",
"Generating search query": "Generant consulta",
"Get started": "Començar",
"Get started with {{WEBUI_NAME}}": "Començar amb {{WEBUI_NAME}}",
@@ -566,12 +581,12 @@
"Include": "Incloure",
"Include `--api-auth` flag when running stable-diffusion-webui": "Inclou `--api-auth` quan executis stable-diffusion-webui",
"Include `--api` flag when running stable-diffusion-webui": "Inclou `--api` quan executis stable-diffusion-webui",
"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive. (Default: 0.1)": "Influeix amb la rapidesa amb què l'algoritme respon als comentaris del text generat. Una taxa d'aprenentatge més baixa donarà lloc a ajustos més lents, mentre que una taxa d'aprenentatge més alta farà que l'algorisme sigui més sensible. (Per defecte: 0,1)",
"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive.": "Influeix amb la rapidesa amb què l'algoritme respon als comentaris del text generat. Una taxa d'aprenentatge més baixa donarà lloc a ajustos més lents, mentre que una taxa d'aprenentatge més alta farà que l'algorisme sigui més sensible.",
"Info": "Informació",
"Input commands": "Entra comandes",
"Install from Github URL": "Instal·lar des de l'URL de Github",
"Instant Auto-Send After Voice Transcription": "Enviament automàtic després de la transcripció de veu",
"Integration": "",
"Integration": "Integració",
"Interface": "Interfície",
"Invalid file format.": "Format d'arxiu no vàlid.",
"Invalid Tag": "Etiqueta no vàlida",
@@ -619,11 +634,12 @@
"Listening...": "Escoltant...",
"Llama.cpp": "Llama.cpp",
"LLMs can make mistakes. Verify important information.": "Els models de llenguatge poden cometre errors. Verifica la informació important.",
"Loader": "",
"Loader": "Carregador",
"Loading Kokoro.js...": "Carregant Kokoro.js...",
"Local": "Local",
"Local Models": "Models locals",
"Location access not allowed": "",
"Location access not allowed": "Accés a la ubicació no permès",
"Logit Bias": "Biaix Logit",
"Lost": "Perdut",
"LTR": "LTR",
"Made by Open WebUI Community": "Creat per la Comunitat OpenWebUI",
@@ -697,7 +713,7 @@
"No HTML, CSS, or JavaScript content found.": "No s'ha trobat contingut HTML, CSS o JavaScript.",
"No inference engine with management support found": "No s'ha trobat un motor d'inferència amb suport de gestió",
"No knowledge found": "No s'ha trobat Coneixement",
"No memories to clear": "",
"No memories to clear": "No hi ha memòries per netejar",
"No model IDs": "No hi ha IDs de model",
"No models found": "No s'han trobat models",
"No models selected": "No s'ha seleccionat cap model",
@@ -727,7 +743,7 @@
"Ollama API settings updated": "La configuració de l'API d'Ollama s'ha actualitzat",
"Ollama Version": "Versió d'Ollama",
"On": "Activat",
"OneDrive": "",
"OneDrive": "OneDrive",
"Only alphanumeric characters and hyphens are allowed": "Només es permeten caràcters alfanumèrics i guions",
"Only alphanumeric characters and hyphens are allowed in the command string.": "Només es permeten caràcters alfanumèrics i guions en la comanda.",
"Only collections can be edited, create a new knowledge base to edit/add documents.": "Només es poden editar col·leccions, crea una nova base de coneixement per editar/afegir documents.",
@@ -764,6 +780,7 @@
"Permission denied when accessing microphone": "Permís denegat en accedir al micròfon",
"Permission denied when accessing microphone: {{error}}": "Permís denegat en accedir al micròfon: {{error}}",
"Permissions": "Permisos",
"Perplexity API Key": "Clau API de Perplexity",
"Personalization": "Personalització",
"Pin": "Fixar",
"Pinned": "Fixat",
@@ -809,7 +826,7 @@
"Reasoning Effort": "Esforç de raonament",
"Record voice": "Enregistrar la veu",
"Redirecting you to Open WebUI Community": "Redirigint-te a la comunitat OpenWebUI",
"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative. (Default: 40)": "Redueix la probabilitat de generar ximpleries. Un valor més alt (p. ex. 100) donarà respostes més diverses, mentre que un valor més baix (p. ex. 10) serà més conservador. (Per defecte: 40)",
"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative.": "Redueix la probabilitat de generar ximpleries. Un valor més alt (p. ex. 100) donarà respostes més diverses, mentre que un valor més baix (p. ex. 10) serà més conservador.",
"Refer to yourself as \"User\" (e.g., \"User is learning Spanish\")": "Fes referència a tu mateix com a \"Usuari\" (p. ex., \"L'usuari està aprenent espanyol\")",
"References from": "Referències de",
"Refused when it shouldn't have": "Refusat quan no hauria d'haver estat",
@@ -835,7 +852,7 @@
"Response notifications cannot be activated as the website permissions have been denied. Please visit your browser settings to grant the necessary access.": "Les notificacions de resposta no es poden activar perquè els permisos del lloc web han estat rebutjats. Comprova les preferències del navegador per donar l'accés necessari.",
"Response splitting": "Divisió de la resposta",
"Result": "Resultat",
"Retrieval": "",
"Retrieval": "Retrieval",
"Retrieval Query Generation": "Generació de consultes Retrieval",
"Rich Text Input for Chat": "Entrada de text ric per al xat",
"RK": "RK",
@@ -866,7 +883,7 @@
"Search options": "Opcions de cerca",
"Search Prompts": "Cercar indicacions",
"Search Result Count": "Recompte de resultats de cerca",
"Search the internet": "Cerca a internet",
"Search the internet": "Cercar a internet",
"Search Tools": "Cercar eines",
"SearchApi API Key": "Clau API de SearchApi",
"SearchApi Engine": "Motor de SearchApi",
@@ -918,11 +935,11 @@
"Set the number of worker threads used for computation. This option controls how many threads are used to process incoming requests concurrently. Increasing this value can improve performance under high concurrency workloads but may also consume more CPU resources.": "Establir el nombre de fils de treball utilitzats per al càlcul. Aquesta opció controla quants fils s'utilitzen per processar les sol·licituds entrants simultàniament. Augmentar aquest valor pot millorar el rendiment amb càrregues de treball de concurrència elevada, però també pot consumir més recursos de CPU.",
"Set Voice": "Establir la veu",
"Set whisper model": "Establir el model whisper",
"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 0)": "Estableix un biaix pla contra tokens que han aparegut almenys una vegada. Un valor més alt (p. ex., 1,5) penalitzarà les repeticions amb més força, mentre que un valor més baix (p. ex., 0,9) serà més indulgent. A 0, està desactivat. (Per defecte: 0)",
"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 1.1)": "Estableix un biaix d'escala contra tokens per penalitzar les repeticions, en funció de quantes vegades han aparegut. Un valor més alt (p. ex., 1,5) penalitzarà les repeticions amb més força, mentre que un valor més baix (p. ex., 0,9) serà més indulgent. A 0, està desactivat. (Per defecte: 1.1)",
"Sets how far back for the model to look back to prevent repetition. (Default: 64, 0 = disabled, -1 = num_ctx)": "Establir fins a quin punt el model mira enrere per evitar la repetició. (Per defecte: 64, 0 = desactivat, -1 = num_ctx)",
"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt. (Default: random)": "Establir la llavor del nombre aleatori que s'utilitzarà per a la generació. Establir-ho a un número específic farà que el model generi el mateix text per a la mateixa sol·licitud. (Per defecte: aleatori)",
"Sets the size of the context window used to generate the next token. (Default: 2048)": "Estableix la mida de la finestra de context utilitzada per generar el següent token. (Per defecte: 2048)",
"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "Estableix un biaix pla contra tokens que han aparegut almenys una vegada. Un valor més alt (p. ex., 1,5) penalitzarà les repeticions amb més força, mentre que un valor més baix (p. ex., 0,9) serà més indulgent. A 0, està desactivat.",
"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "Estableix un biaix d'escala contra tokens per penalitzar les repeticions, en funció de quantes vegades han aparegut. Un valor més alt (p. ex., 1,5) penalitzarà les repeticions amb més força, mentre que un valor més baix (p. ex., 0,9) serà més indulgent. A 0, està desactivat.",
"Sets how far back for the model to look back to prevent repetition.": "Estableix fins a quin punt el model mira enrere per evitar la repetició.",
"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt.": "Estableix la llavor del nombre aleatori que s'utilitzarà per a la generació. Establir-ho a un número específic farà que el model generi el mateix text per a la mateixa sol·licitud.",
"Sets the size of the context window used to generate the next token.": "Estableix la mida de la finestra de context utilitzada per generar el següent token.",
"Sets the stop sequences to use. When this pattern is encountered, the LLM will stop generating text and return. Multiple stop patterns may be set by specifying multiple separate stop parameters in a modelfile.": "Establir les seqüències d'aturada a utilitzar. Quan es trobi aquest patró, el LLM deixarà de generar text. Es poden establir diversos patrons de parada especificant diversos paràmetres de parada separats en un fitxer model.",
"Settings": "Preferències",
"Settings saved successfully!": "Les preferències s'han desat correctament",
@@ -964,8 +981,8 @@
"System Prompt": "Indicació del Sistema",
"Tags Generation": "Generació d'etiquetes",
"Tags Generation Prompt": "Indicació per a la generació d'etiquetes",
"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting. (default: 1)": "El mostreig sense cua s'utilitza per reduir l'impacte de tokens menys probables de la sortida. Un valor més alt (p. ex., 2,0) reduirà més l'impacte, mentre que un valor d'1,0 desactiva aquesta configuració. (per defecte: 1)",
"Talk to model": "",
"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting.": "El mostreig sense cua s'utilitza per reduir l'impacte de tokens menys probables de la sortida. Un valor més alt (p. ex., 2,0) reduirà més l'impacte, mentre que un valor d'1,0 desactiva aquesta configuració.",
"Talk to model": "Parlar amb el model",
"Tap to interrupt": "Prem per interrompre",
"Tasks": "Tasques",
"Tavily API Key": "Clau API de Tavily",
@@ -979,7 +996,7 @@
"Thanks for your feedback!": "Gràcies pel teu comentari!",
"The Application Account DN you bind with for search": "El DN del compte d'aplicació per realitzar la cerca",
"The base to search for users": "La base per cercar usuaris",
"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory. (Default: 512)": "La mida del lot determina quantes sol·licituds de text es processen alhora. Una mida de lot més gran pot augmentar el rendiment i la velocitat del model, però també requereix més memòria. (Per defecte: 512)",
"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory.": "La mida del lot determina quantes sol·licituds de text es processen alhora. Una mida de lot més gran pot augmentar el rendiment i la velocitat del model, però també requereix més memòria.",
"The developers behind this plugin are passionate volunteers from the community. If you find this plugin helpful, please consider contributing to its development.": "Els desenvolupadors d'aquest complement són voluntaris apassionats de la comunitat. Si trobeu útil aquest complement, considereu contribuir al seu desenvolupament.",
"The evaluation leaderboard is based on the Elo rating system and is updated in real-time.": "La classificació d'avaluació es basa en el sistema de qualificació Elo i s'actualitza en temps real.",
"The LDAP attribute that maps to the mail that users use to sign in.": "L'atribut LDAP que s'associa al correu que els usuaris utilitzen per iniciar la sessió.",
@@ -988,14 +1005,14 @@
"The maximum file size in MB. If the file size exceeds this limit, the file will not be uploaded.": "La mida màxima del fitxer en MB. Si la mida del fitxer supera aquest límit, el fitxer no es carregarà.",
"The maximum number of files that can be used at once in chat. If the number of files exceeds this limit, the files will not be uploaded.": "El nombre màxim de fitxers que es poden utilitzar alhora al xat. Si el nombre de fitxers supera aquest límit, els fitxers no es penjaran.",
"The score should be a value between 0.0 (0%) and 1.0 (100%).": "El valor de puntuació hauria de ser entre 0.0 (0%) i 1.0 (100%).",
"The temperature of the model. Increasing the temperature will make the model answer more creatively. (Default: 0.8)": "La temperatura del model. Augmentar la temperatura farà que el model respongui de manera més creativa. (Per defecte: 0,8)",
"The temperature of the model. Increasing the temperature will make the model answer more creatively.": "La temperatura del model. Augmentar la temperatura farà que el model respongui de manera més creativa.",
"Theme": "Tema",
"Thinking...": "Pensant...",
"This action cannot be undone. Do you wish to continue?": "Aquesta acció no es pot desfer. Vols continuar?",
"This ensures that your valuable conversations are securely saved to your backend database. Thank you!": "Això assegura que les teves converses valuoses queden desades de manera segura a la teva base de dades. Gràcies!",
"This is an experimental feature, it may not function as expected and is subject to change at any time.": "Aquesta és una funció experimental, és possible que no funcioni com s'espera i està subjecta a canvis en qualsevol moment.",
"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics. (Default: 24)": "Aquesta opció controla quants tokens es conserven en actualitzar el context. Per exemple, si s'estableix en 2, es conservaran els darrers 2 tokens del context de conversa. Preservar el context pot ajudar a mantenir la continuïtat d'una conversa, però pot reduir la capacitat de respondre a nous temes. (Per defecte: 24)",
"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated. (Default: 128)": "Aquesta opció estableix el nombre màxim de tokens que el model pot generar en la seva resposta. Augmentar aquest límit permet que el model proporcioni respostes més llargues, però també pot augmentar la probabilitat que es generi contingut poc útil o irrellevant. (Per defecte: 128)",
"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics.": "Aquesta opció controla quants tokens es conserven en actualitzar el context. Per exemple, si s'estableix en 2, es conservaran els darrers 2 tokens del context de conversa. Preservar el context pot ajudar a mantenir la continuïtat d'una conversa, però pot reduir la capacitat de respondre a nous temes.",
"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated.": "Aquesta opció estableix el nombre màxim de tokens que el model pot generar en la seva resposta. Augmentar aquest límit permet que el model proporcioni respostes més llargues, però també pot augmentar la probabilitat que es generi contingut poc útil o irrellevant.",
"This option will delete all existing files in the collection and replace them with newly uploaded files.": "Aquesta opció eliminarà tots els fitxers existents de la col·lecció i els substituirà per fitxers recentment penjats.",
"This response was generated by \"{{model}}\"": "Aquesta resposta l'ha generat el model \"{{model}}\"",
"This will delete": "Això eliminarà",
@@ -1050,7 +1067,7 @@
"Top P": "Top P",
"Transformers": "Transformadors",
"Trouble accessing Ollama?": "Problemes en accedir a Ollama?",
"Trust Proxy Environment": "",
"Trust Proxy Environment": "Confiar en l'entorn proxy",
"TTS Model": "Model TTS",
"TTS Settings": "Preferències de TTS",
"TTS Voice": "Veu TTS",
@@ -1132,7 +1149,7 @@
"Why?": "Per què?",
"Widescreen Mode": "Mode de pantalla ampla",
"Won": "Ha guanyat",
"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text. (Default: 0.9)": "Funciona juntament amb top-k. Un valor més alt (p. ex., 0,95) donarà lloc a un text més divers, mentre que un valor més baix (p. ex., 0,5) generarà un text més concentrat i conservador. (Per defecte: 0,9)",
"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text.": "Funciona juntament amb top-k. Un valor més alt (p. ex., 0,95) donarà lloc a un text més divers, mentre que un valor més baix (p. ex., 0,5) generarà un text més concentrat i conservador.",
"Workspace": "Espai de treball",
"Workspace Permissions": "Permisos de l'espai de treball",
"Write": "Escriure",
@@ -1142,6 +1159,7 @@
"Write your model template content here": "Introdueix el contingut de la plantilla del teu model aquí",
"Yesterday": "Ahir",
"You": "Tu",
"You are currently using a trial license. Please contact support to upgrade your license.": "Actualment esteu utilitzant una llicència de prova. Poseu-vos en contacte amb el servei d'assistència per actualitzar la vostra llicència.",
"You can only chat with a maximum of {{maxCount}} file(s) at a time.": "Només pots xatejar amb un màxim de {{maxCount}} fitxers alhora.",
"You can personalize your interactions with LLMs by adding memories through the 'Manage' button below, making them more helpful and tailored to you.": "Pots personalitzar les teves interaccions amb els models de llenguatge afegint memòries mitjançant el botó 'Gestiona' que hi ha a continuació, fent-les més útils i adaptades a tu.",
"You cannot upload an empty file.": "No es pot pujar un arxiu buit.",
@@ -1155,6 +1173,6 @@
"Your account status is currently pending activation.": "El compte està actualment pendent d'activació.",
"Your entire contribution will go directly to the plugin developer; Open WebUI does not take any percentage. However, the chosen funding platform might have its own fees.": "Tota la teva contribució anirà directament al desenvolupador del complement; Open WebUI no se'n queda cap percentatge. Tanmateix, la plataforma de finançament escollida pot tenir les seves pròpies comissions.",
"Youtube": "Youtube",
"Youtube Language": "",
"Youtube Language": "Idioma de YouTube",
"Youtube Proxy URL": "URL del proxy de Youtube"
}

View File

@@ -5,6 +5,7 @@
"(e.g. `sh webui.sh --api`)": "(pananglitan `sh webui.sh --api`)",
"(latest)": "",
"{{ models }}": "",
"{{COUNT}} hidden lines": "",
"{{COUNT}} Replies": "",
"{{user}}'s Chats": "",
"{{webUIName}} Backend Required": "Backend {{webUIName}} gikinahanglan",
@@ -51,6 +52,7 @@
"Admins have access to all tools at all times; users need tools assigned per model in the workspace.": "",
"Advanced Parameters": "advanced settings",
"Advanced Params": "",
"All": "",
"All Documents": "",
"All models deleted successfully": "",
"Allow Chat Controls": "",
@@ -64,7 +66,7 @@
"Allow Voice Interruption in Call": "",
"Allowed Endpoints": "",
"Already have an account?": "Naa na kay account?",
"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out. (Default: 0.0)": "",
"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out.": "",
"Always": "",
"Amazing": "",
"an assistant": "usa ka katabang",
@@ -93,6 +95,7 @@
"Are you sure?": "Sigurado ka?",
"Arena Models": "",
"Artifacts": "",
"Ask": "",
"Ask a question": "",
"Assistant": "",
"Attach file from knowledge": "",
@@ -127,6 +130,7 @@
"Bing Search V7 Endpoint": "",
"Bing Search V7 Subscription Key": "",
"Bocha Search API Key": "",
"Boosting or penalizing specific tokens for constrained responses. Bias values will be clamped between -100 and 100 (inclusive). (Default: none)": "",
"Brave Search API Key": "",
"By {{name}}": "",
"Bypass Embedding and Retrieval": "",
@@ -190,6 +194,7 @@
"Code Interpreter": "",
"Code Interpreter Engine": "",
"Code Interpreter Prompt Template": "",
"Collapse": "",
"Collection": "Koleksyon",
"Color": "",
"ComfyUI": "",
@@ -208,7 +213,7 @@
"Confirm your new password": "",
"Connect to your own OpenAI compatible API endpoints.": "",
"Connections": "Mga koneksyon",
"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort. (Default: medium)": "",
"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort.": "",
"Contact Admin for WebUI Access": "",
"Content": "Kontento",
"Content Extraction Engine": "",
@@ -218,9 +223,9 @@
"Continue with Email": "",
"Continue with LDAP": "",
"Control how message text is split for TTS requests. 'Punctuation' splits into sentences, 'paragraphs' splits into paragraphs, and 'none' keeps the message as a single string.": "",
"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled. (Default: 1.1)": "",
"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled.": "",
"Controls": "",
"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text. (Default: 5.0)": "",
"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text.": "",
"Copied": "",
"Copied shared chat URL to clipboard!": "",
"Copied to clipboard": "",
@@ -245,6 +250,7 @@
"Created At": "",
"Created by": "",
"CSV Import": "",
"Ctrl+Enter to Send": "",
"Current Model": "Kasamtangang modelo",
"Current Password": "Kasamtangang Password",
"Custom": "Custom",
@@ -353,12 +359,13 @@
"Embedding model set to \"{{embedding_model}}\"": "",
"Enable API Key": "",
"Enable autocomplete generation for chat messages": "",
"Enable Code Execution": "",
"Enable Code Interpreter": "",
"Enable Community Sharing": "",
"Enable Memory Locking (mlock) to prevent model data from being swapped out of RAM. This option locks the model's working set of pages into RAM, ensuring that they will not be swapped out to disk. This can help maintain performance by avoiding page faults and ensuring fast data access.": "",
"Enable Memory Mapping (mmap) to load model data. This option allows the system to use disk storage as an extension of RAM by treating disk files as if they were in RAM. This can improve model performance by allowing for faster data access. However, it may not work correctly with all systems and can consume a significant amount of disk space.": "",
"Enable Message Rating": "",
"Enable Mirostat sampling for controlling perplexity. (Default: 0, 0 = Disabled, 1 = Mirostat, 2 = Mirostat 2.0)": "",
"Enable Mirostat sampling for controlling perplexity.": "",
"Enable New Sign Ups": "I-enable ang bag-ong mga rehistro",
"Enabled": "",
"Ensure your CSV file includes 4 columns in this order: Name, Email, Password, Role.": "",
@@ -375,6 +382,7 @@
"Enter CFG Scale (e.g. 7.0)": "",
"Enter Chunk Overlap": "Pagsulod sa block overlap",
"Enter Chunk Size": "Isulod ang block size",
"Enter comma-seperated \"token:bias_value\" pairs (example: 5432:100, 413:-100)": "",
"Enter description": "",
"Enter Document Intelligence Endpoint": "",
"Enter Document Intelligence Key": "",
@@ -389,11 +397,13 @@
"Enter Jupyter Token": "",
"Enter Jupyter URL": "",
"Enter Kagi Search API Key": "",
"Enter Key Behavior": "",
"Enter language codes": "",
"Enter Model ID": "",
"Enter model tag (e.g. {{modelTag}})": "Pagsulod sa template tag (e.g. {{modelTag}})",
"Enter Mojeek Search API Key": "",
"Enter Number of Steps (e.g. 50)": "Pagsulod sa gidaghanon sa mga lakang (e.g. 50)",
"Enter Perplexity API Key": "",
"Enter proxy URL (e.g. https://user:password@host:port)": "",
"Enter reasoning effort": "",
"Enter Sampler (e.g. Euler a)": "",
@@ -417,6 +427,7 @@
"Enter the public URL of your WebUI. This URL will be used to generate links in the notifications.": "",
"Enter Tika Server URL": "",
"Enter timeout in seconds": "",
"Enter to Send": "",
"Enter Top K": "Pagsulod sa Top K",
"Enter URL (e.g. http://127.0.0.1:7860/)": "Pagsulod sa URL (e.g. http://127.0.0.1:7860/)",
"Enter URL (e.g. http://localhost:11434)": "",
@@ -440,9 +451,13 @@
"Example: mail": "",
"Example: ou=users,dc=foo,dc=example": "",
"Example: sAMAccountName or uid or userPrincipalName": "",
"Exceeded the number of seats in your license. Please contact support to increase the number of seats.": "",
"Exclude": "",
"Execute code for analysis": "",
"Expand": "",
"Experimental": "Eksperimento",
"Explain": "",
"Explain this section to me in more detail": "",
"Explore the cosmos": "",
"Export": "",
"Export All Archived Chats": "",
@@ -566,7 +581,7 @@
"Include": "",
"Include `--api-auth` flag when running stable-diffusion-webui": "",
"Include `--api` flag when running stable-diffusion-webui": "Iapil ang `--api` nga bandila kung nagdagan nga stable-diffusion-webui",
"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive. (Default: 0.1)": "",
"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive.": "",
"Info": "",
"Input commands": "Pagsulod sa input commands",
"Install from Github URL": "",
@ -624,6 +639,7 @@
"Local": "",
"Local Models": "",
"Location access not allowed": "",
"Logit Bias": "",
"Lost": "",
"LTR": "",
"Made by Open WebUI Community": "Gihimo sa komunidad sa OpenWebUI",
@ -764,6 +780,7 @@
"Permission denied when accessing microphone": "",
"Permission denied when accessing microphone: {{error}}": "Gidili ang pagtugot sa dihang nag-access sa mikropono: {{error}}",
"Permissions": "",
"Perplexity API Key": "",
"Personalization": "",
"Pin": "",
"Pinned": "",
@ -809,7 +826,7 @@
"Reasoning Effort": "",
"Record voice": "Irekord ang tingog",
"Redirecting you to Open WebUI Community": "Gi-redirect ka sa komunidad sa OpenWebUI",
"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative. (Default: 40)": "",
"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative.": "",
"Refer to yourself as \"User\" (e.g., \"User is learning Spanish\")": "",
"References from": "",
"Refused when it shouldn't have": "",
@ -918,11 +935,11 @@
"Set the number of worker threads used for computation. This option controls how many threads are used to process incoming requests concurrently. Increasing this value can improve performance under high concurrency workloads but may also consume more CPU resources.": "",
"Set Voice": "Ibutang ang tingog",
"Set whisper model": "",
"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 0)": "",
"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 1.1)": "",
"Sets how far back for the model to look back to prevent repetition. (Default: 64, 0 = disabled, -1 = num_ctx)": "",
"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt. (Default: random)": "",
"Sets the size of the context window used to generate the next token. (Default: 2048)": "",
"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
"Sets how far back for the model to look back to prevent repetition.": "",
"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt.": "",
"Sets the size of the context window used to generate the next token.": "",
"Sets the stop sequences to use. When this pattern is encountered, the LLM will stop generating text and return. Multiple stop patterns may be set by specifying multiple separate stop parameters in a modelfile.": "",
"Settings": "Mga setting",
"Settings saved successfully!": "Malampuson nga na-save ang mga setting!",
@ -964,7 +981,7 @@
"System Prompt": "Madasig nga Sistema",
"Tags Generation": "",
"Tags Generation Prompt": "",
"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting. (default: 1)": "",
"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting.": "",
"Talk to model": "",
"Tap to interrupt": "",
"Tasks": "",
@ -979,7 +996,7 @@
"Thanks for your feedback!": "",
"The Application Account DN you bind with for search": "",
"The base to search for users": "",
"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory. (Default: 512)": "",
"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory.": "",
"The developers behind this plugin are passionate volunteers from the community. If you find this plugin helpful, please consider contributing to its development.": "",
"The evaluation leaderboard is based on the Elo rating system and is updated in real-time.": "",
"The LDAP attribute that maps to the mail that users use to sign in.": "",
@ -988,14 +1005,14 @@
"The maximum file size in MB. If the file size exceeds this limit, the file will not be uploaded.": "",
"The maximum number of files that can be used at once in chat. If the number of files exceeds this limit, the files will not be uploaded.": "",
"The score should be a value between 0.0 (0%) and 1.0 (100%).": "",
"The temperature of the model. Increasing the temperature will make the model answer more creatively. (Default: 0.8)": "",
"The temperature of the model. Increasing the temperature will make the model answer more creatively.": "",
"Theme": "Tema",
"Thinking...": "",
"This action cannot be undone. Do you wish to continue?": "",
"This ensures that your valuable conversations are securely saved to your backend database. Thank you!": "Kini nagsiguro nga ang imong bililhon nga mga panag-istoryahanay luwas nga natipig sa imong backend database. ",
"This is an experimental feature, it may not function as expected and is subject to change at any time.": "",
"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics. (Default: 24)": "",
"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated. (Default: 128)": "",
"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics.": "",
"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated.": "",
"This option will delete all existing files in the collection and replace them with newly uploaded files.": "",
"This response was generated by \"{{model}}\"": "",
"This will delete": "",
@ -1132,7 +1149,7 @@
"Why?": "",
"Widescreen Mode": "",
"Won": "",
"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text. (Default: 0.9)": "",
"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text.": "",
"Workspace": "",
"Workspace Permissions": "",
"Write": "",
@ -1142,6 +1159,7 @@
"Write your model template content here": "",
"Yesterday": "",
"You": "",
"You are currently using a trial license. Please contact support to upgrade your license.": "",
"You can only chat with a maximum of {{maxCount}} file(s) at a time.": "",
"You can personalize your interactions with LLMs by adding memories through the 'Manage' button below, making them more helpful and tailored to you.": "",
"You cannot upload an empty file.": "",

View File

@ -5,6 +5,7 @@
"(e.g. `sh webui.sh --api`)": "(např. `sh webui.sh --api`)",
"(latest)": "Nejnovější",
"{{ models }}": "{{ models }}",
"{{COUNT}} hidden lines": "",
"{{COUNT}} Replies": "",
"{{user}}'s Chats": "{{user}}'s konverzace",
"{{webUIName}} Backend Required": "Požadován {{webUIName}} Backend",
@ -51,6 +52,7 @@
"Admins have access to all tools at all times; users need tools assigned per model in the workspace.": "Administrátoři mají přístup ke všem nástrojům kdykoliv; uživatelé potřebují mít nástroje přiřazené podle modelu ve workspace.",
"Advanced Parameters": "Pokročilé parametry",
"Advanced Params": "Pokročilé parametry",
"All": "",
"All Documents": "Všechny dokumenty",
"All models deleted successfully": "Všechny modely úspěšně odstráněny",
"Allow Chat Controls": "",
@ -64,7 +66,7 @@
"Allow Voice Interruption in Call": "Povolit přerušení hlasu při hovoru",
"Allowed Endpoints": "",
"Already have an account?": "Už máte účet?",
"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out. (Default: 0.0)": "",
"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out.": "",
"Always": "",
"Amazing": "",
"an assistant": "asistent",
@ -93,6 +95,7 @@
"Are you sure?": "Jste si jistý?",
"Arena Models": "Arena modely",
"Artifacts": "Artefakty",
"Ask": "",
"Ask a question": "Zeptejte se na otázku",
"Assistant": "Ano, jak vám mohu pomoci?",
"Attach file from knowledge": "",
@ -127,6 +130,7 @@
"Bing Search V7 Endpoint": "",
"Bing Search V7 Subscription Key": "",
"Bocha Search API Key": "",
"Boosting or penalizing specific tokens for constrained responses. Bias values will be clamped between -100 and 100 (inclusive). (Default: none)": "",
"Brave Search API Key": "Klíč API pro Brave Search",
"By {{name}}": "",
"Bypass Embedding and Retrieval": "",
@ -190,6 +194,7 @@
"Code Interpreter": "",
"Code Interpreter Engine": "",
"Code Interpreter Prompt Template": "",
"Collapse": "",
"Collection": "",
"Color": "Barva",
"ComfyUI": "ComfyUI.",
@ -208,7 +213,7 @@
"Confirm your new password": "",
"Connect to your own OpenAI compatible API endpoints.": "",
"Connections": "Připojení",
"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort. (Default: medium)": "",
"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort.": "",
"Contact Admin for WebUI Access": "Kontaktujte administrátora pro přístup k webovému rozhraní.",
"Content": "Obsah",
"Content Extraction Engine": "",
@ -218,9 +223,9 @@
"Continue with Email": "",
"Continue with LDAP": "",
"Control how message text is split for TTS requests. 'Punctuation' splits into sentences, 'paragraphs' splits into paragraphs, and 'none' keeps the message as a single string.": "Řízení, jak se text zprávy rozděluje pro požadavky TTS. 'Punctuation' rozděluje text na věty, 'paragraphs' rozděluje text na odstavce a 'none' udržuje zprávu jako jeden celý řetězec.",
"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled. (Default: 1.1)": "",
"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled.": "",
"Controls": "Ovládací prvky",
"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text. (Default: 5.0)": "",
"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text.": "",
"Copied": "Zkopírováno",
"Copied shared chat URL to clipboard!": "URL sdíleného chatu zkopírován do schránky!",
"Copied to clipboard": "Zkopírováno do schránky",
@ -245,6 +250,7 @@
"Created At": "Vytvořeno dne",
"Created by": "Vytvořeno uživatelem",
"CSV Import": "CSV import",
"Ctrl+Enter to Send": "",
"Current Model": "Aktuální model",
"Current Password": "Aktuální heslo",
"Custom": "Na míru",
@ -353,12 +359,13 @@
"Embedding model set to \"{{embedding_model}}\"": "Model vkládání nastaven na \"{{embedding_model}}\"",
"Enable API Key": "",
"Enable autocomplete generation for chat messages": "",
"Enable Code Execution": "",
"Enable Code Interpreter": "",
"Enable Community Sharing": "Povolit sdílení komunity",
"Enable Memory Locking (mlock) to prevent model data from being swapped out of RAM. This option locks the model's working set of pages into RAM, ensuring that they will not be swapped out to disk. This can help maintain performance by avoiding page faults and ensuring fast data access.": "",
"Enable Memory Mapping (mmap) to load model data. This option allows the system to use disk storage as an extension of RAM by treating disk files as if they were in RAM. This can improve model performance by allowing for faster data access. However, it may not work correctly with all systems and can consume a significant amount of disk space.": "",
"Enable Message Rating": "Povolit hodnocení zpráv",
"Enable Mirostat sampling for controlling perplexity. (Default: 0, 0 = Disabled, 1 = Mirostat, 2 = Mirostat 2.0)": "",
"Enable Mirostat sampling for controlling perplexity.": "",
"Enable New Sign Ups": "Povolit nové registrace",
"Enabled": "Povoleno",
"Ensure your CSV file includes 4 columns in this order: Name, Email, Password, Role.": "Ujistěte se, že váš CSV soubor obsahuje 4 sloupce v tomto pořadí: Name, Email, Password, Role.",
@ -375,6 +382,7 @@
"Enter CFG Scale (e.g. 7.0)": "Zadejte měřítko CFG (např. 7.0)",
"Enter Chunk Overlap": "Zadejte překryv části",
"Enter Chunk Size": "Zadejte velikost bloku",
"Enter comma-seperated \"token:bias_value\" pairs (example: 5432:100, 413:-100)": "",
"Enter description": "Zadejte popis",
"Enter Document Intelligence Endpoint": "",
"Enter Document Intelligence Key": "",
@ -389,11 +397,13 @@
"Enter Jupyter Token": "",
"Enter Jupyter URL": "",
"Enter Kagi Search API Key": "",
"Enter Key Behavior": "",
"Enter language codes": "Zadejte kódy jazyků",
"Enter Model ID": "Zadejte ID modelu",
"Enter model tag (e.g. {{modelTag}})": "Zadejte označení modelu (např. {{modelTag}})",
"Enter Mojeek Search API Key": "",
"Enter Number of Steps (e.g. 50)": "Zadejte počet kroků (např. 50)",
"Enter Perplexity API Key": "",
"Enter proxy URL (e.g. https://user:password@host:port)": "",
"Enter reasoning effort": "",
"Enter Sampler (e.g. Euler a)": "Zadejte vzorkovač (např. Euler a)",
@ -417,6 +427,7 @@
"Enter the public URL of your WebUI. This URL will be used to generate links in the notifications.": "",
"Enter Tika Server URL": "Zadejte URL serveru Tika",
"Enter timeout in seconds": "",
"Enter to Send": "",
"Enter Top K": "Zadejte horní K",
"Enter URL (e.g. http://127.0.0.1:7860/)": "Zadejte URL (např. http://127.0.0.1:7860/)",
"Enter URL (e.g. http://localhost:11434)": "Zadejte URL (např. http://localhost:11434)",
@ -440,9 +451,13 @@
"Example: mail": "",
"Example: ou=users,dc=foo,dc=example": "",
"Example: sAMAccountName or uid or userPrincipalName": "",
"Exceeded the number of seats in your license. Please contact support to increase the number of seats.": "",
"Exclude": "Vyloučit",
"Execute code for analysis": "",
"Expand": "",
"Experimental": "Experimentální",
"Explain": "",
"Explain this section to me in more detail": "",
"Explore the cosmos": "",
"Export": "Exportovat",
"Export All Archived Chats": "",
@ -566,7 +581,7 @@
"Include": "Zahrnout",
"Include `--api-auth` flag when running stable-diffusion-webui": "Zahrňte přepínač `--api-auth` při spuštění stable-diffusion-webui.",
"Include `--api` flag when running stable-diffusion-webui": "Při spuštění stable-diffusion-webui zahrňte příznak `--api`.",
"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive. (Default: 0.1)": "",
"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive.": "",
"Info": "Info",
"Input commands": "Vstupní příkazy",
"Install from Github URL": "Instalace z URL adresy Githubu",
@ -624,6 +639,7 @@
"Local": "",
"Local Models": "Lokální modely",
"Location access not allowed": "",
"Logit Bias": "",
"Lost": "Ztracený",
"LTR": "LTR",
"Made by Open WebUI Community": "Vytvořeno komunitou OpenWebUI",
@ -764,6 +780,7 @@
"Permission denied when accessing microphone": "Přístup k mikrofonu byl odepřen",
"Permission denied when accessing microphone: {{error}}": "Oprávnění zamítnuto při přístupu k mikrofonu: {{error}}",
"Permissions": "",
"Perplexity API Key": "",
"Personalization": "Personalizace",
"Pin": "",
"Pinned": "",
@ -809,7 +826,7 @@
"Reasoning Effort": "",
"Record voice": "Nahrát hlas",
"Redirecting you to Open WebUI Community": "Přesměrování na komunitu OpenWebUI",
"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative. (Default: 40)": "",
"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative.": "",
"Refer to yourself as \"User\" (e.g., \"User is learning Spanish\")": "Odkazujte na sebe jako na \"uživatele\" (např. \"Uživatel se učí španělsky\").",
"References from": "Reference z",
"Refused when it shouldn't have": "Odmítnuto, když nemělo být.",
@ -918,11 +935,11 @@
"Set the number of worker threads used for computation. This option controls how many threads are used to process incoming requests concurrently. Increasing this value can improve performance under high concurrency workloads but may also consume more CPU resources.": "",
"Set Voice": "Nastavit hlas",
"Set whisper model": "Nastavit model whisper",
"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 0)": "",
"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 1.1)": "",
"Sets how far back for the model to look back to prevent repetition. (Default: 64, 0 = disabled, -1 = num_ctx)": "",
"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt. (Default: random)": "",
"Sets the size of the context window used to generate the next token. (Default: 2048)": "",
"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
"Sets how far back for the model to look back to prevent repetition.": "",
"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt.": "",
"Sets the size of the context window used to generate the next token.": "",
"Sets the stop sequences to use. When this pattern is encountered, the LLM will stop generating text and return. Multiple stop patterns may be set by specifying multiple separate stop parameters in a modelfile.": "",
"Settings": "Nastavení",
"Settings saved successfully!": "Nastavení byla úspěšně uložena!",
@ -964,7 +981,7 @@
"System Prompt": "Systémový prompt",
"Tags Generation": "",
"Tags Generation Prompt": "Prompt pro generování značek",
"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting. (default: 1)": "",
"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting.": "",
"Talk to model": "",
"Tap to interrupt": "Klepněte pro přerušení",
"Tasks": "",
@ -979,7 +996,7 @@
"Thanks for your feedback!": "Děkujeme za vaši zpětnou vazbu!",
"The Application Account DN you bind with for search": "",
"The base to search for users": "",
"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory. (Default: 512)": "",
"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory.": "",
"The developers behind this plugin are passionate volunteers from the community. If you find this plugin helpful, please consider contributing to its development.": "Vývojáři stojící za tímto pluginem jsou zapálení dobrovolníci z komunity. Pokud považujete tento plugin za užitečný, zvažte příspěvek k jeho vývoji.",
"The evaluation leaderboard is based on the Elo rating system and is updated in real-time.": "Hodnotící žebříček je založen na systému hodnocení Elo a je aktualizován v reálném čase.",
"The LDAP attribute that maps to the mail that users use to sign in.": "",
@ -988,14 +1005,14 @@
"The maximum file size in MB. If the file size exceeds this limit, the file will not be uploaded.": "Maximální velikost souboru v MB. Pokud velikost souboru překročí tento limit, soubor nebude nahrán.",
"The maximum number of files that can be used at once in chat. If the number of files exceeds this limit, the files will not be uploaded.": "Maximální počet souborů, které mohou být použity najednou v chatu. Pokud počet souborů překročí tento limit, soubory nebudou nahrány.",
"The score should be a value between 0.0 (0%) and 1.0 (100%).": "Skóre by mělo být hodnotou mezi 0,0 (0%) a 1,0 (100%).",
"The temperature of the model. Increasing the temperature will make the model answer more creatively. (Default: 0.8)": "",
"The temperature of the model. Increasing the temperature will make the model answer more creatively.": "",
"Theme": "Téma",
"Thinking...": "Přemýšlím...",
"This action cannot be undone. Do you wish to continue?": "Tuto akci nelze vrátit zpět. Přejete si pokračovat?",
"This ensures that your valuable conversations are securely saved to your backend database. Thank you!": "To zajišťuje, že vaše cenné konverzace jsou bezpečně uloženy ve vaší backendové databázi. Děkujeme!",
"This is an experimental feature, it may not function as expected and is subject to change at any time.": "Jedná se o experimentální funkci, nemusí fungovat podle očekávání a může být kdykoliv změněna.",
"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics. (Default: 24)": "",
"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated. (Default: 128)": "",
"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics.": "",
"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated.": "",
"This option will delete all existing files in the collection and replace them with newly uploaded files.": "Tato volba odstraní všechny existující soubory ve sbírce a nahradí je nově nahranými soubory.",
"This response was generated by \"{{model}}\"": "Tato odpověď byla vygenerována pomocí \"{{model}}\"",
"This will delete": "Tohle odstraní",
@ -1132,7 +1149,7 @@
"Why?": "Proč?",
"Widescreen Mode": "Režim širokoúhlého zobrazení",
"Won": "Vyhrál",
"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text. (Default: 0.9)": "",
"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text.": "",
"Workspace": "",
"Workspace Permissions": "",
"Write": "",
@ -1142,6 +1159,7 @@
"Write your model template content here": "",
"Yesterday": "Včera",
"You": "Vy",
"You are currently using a trial license. Please contact support to upgrade your license.": "",
"You can only chat with a maximum of {{maxCount}} file(s) at a time.": "Můžete komunikovat pouze s maximálně {{maxCount}} soubor(y) najednou.",
"You can personalize your interactions with LLMs by adding memories through the 'Manage' button below, making them more helpful and tailored to you.": "Můžete personalizovat své interakce s LLM pomocí přidávání vzpomínek prostřednictvím tlačítka 'Spravovat' níže, což je učiní pro vás užitečnějšími a lépe přizpůsobenými.",
"You cannot upload an empty file.": "Nemůžete nahrát prázdný soubor.",

View File

@ -5,6 +5,7 @@
"(e.g. `sh webui.sh --api`)": "(f.eks. `sh webui.sh --api`)",
"(latest)": "(seneste)",
"{{ models }}": "{{ modeller }}",
"{{COUNT}} hidden lines": "",
"{{COUNT}} Replies": "",
"{{user}}'s Chats": "{{user}}s chats",
"{{webUIName}} Backend Required": "{{webUIName}} Backend kræves",
@ -51,6 +52,7 @@
"Admins have access to all tools at all times; users need tools assigned per model in the workspace.": "Administratorer har adgang til alle værktøjer altid; brugere skal tilføjes værktøjer pr. model i hvert workspace.",
"Advanced Parameters": "Advancerede indstillinger",
"Advanced Params": "Advancerede indstillinger",
"All": "",
"All Documents": "Alle dokumenter",
"All models deleted successfully": "",
"Allow Chat Controls": "",
@ -64,7 +66,7 @@
"Allow Voice Interruption in Call": "Tillad afbrydelser i stemme i opkald",
"Allowed Endpoints": "",
"Already have an account?": "Har du allerede en profil?",
"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out. (Default: 0.0)": "",
"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out.": "",
"Always": "",
"Amazing": "",
"an assistant": "en assistent",
@ -93,6 +95,7 @@
"Are you sure?": "Er du sikker?",
"Arena Models": "",
"Artifacts": "Artifakter",
"Ask": "",
"Ask a question": "Stil et spørgsmål",
"Assistant": "",
"Attach file from knowledge": "",
@ -127,6 +130,7 @@
"Bing Search V7 Endpoint": "",
"Bing Search V7 Subscription Key": "",
"Bocha Search API Key": "",
"Boosting or penalizing specific tokens for constrained responses. Bias values will be clamped between -100 and 100 (inclusive). (Default: none)": "",
"Brave Search API Key": "Brave Search API nøgle",
"By {{name}}": "",
"Bypass Embedding and Retrieval": "",
@ -190,6 +194,7 @@
"Code Interpreter": "",
"Code Interpreter Engine": "",
"Code Interpreter Prompt Template": "",
"Collapse": "",
"Collection": "Samling",
"Color": "",
"ComfyUI": "ComfyUI",
@ -208,7 +213,7 @@
"Confirm your new password": "",
"Connect to your own OpenAI compatible API endpoints.": "",
"Connections": "Forbindelser",
"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort. (Default: medium)": "",
"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort.": "",
"Contact Admin for WebUI Access": "Kontakt din administrator for adgang til WebUI",
"Content": "Indhold",
"Content Extraction Engine": "",
@ -218,9 +223,9 @@
"Continue with Email": "",
"Continue with LDAP": "",
"Control how message text is split for TTS requests. 'Punctuation' splits into sentences, 'paragraphs' splits into paragraphs, and 'none' keeps the message as a single string.": "Kontroller hvordan beskedens tekst bliver splittet til TTS requests. 'Punctuation' (tegnsætning) splitter i sætninger, 'paragraphs' splitter i paragraffer, og 'none' beholder beskeden som en samlet streng.",
"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled. (Default: 1.1)": "",
"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled.": "",
"Controls": "Indstillinger",
"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text. (Default: 5.0)": "",
"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text.": "",
"Copied": "Kopieret",
"Copied shared chat URL to clipboard!": "Link til deling kopieret til udklipsholder",
"Copied to clipboard": "Kopieret til udklipsholder",
@@ -245,6 +250,7 @@
"Created At": "Oprettet",
"Created by": "Oprettet af",
"CSV Import": "Importer CSV",
"Ctrl+Enter to Send": "",
"Current Model": "Nuværende model",
"Current Password": "Nuværende password",
"Custom": "Custom",
@@ -353,12 +359,13 @@
"Embedding model set to \"{{embedding_model}}\"": "Embedding model sat til \"{{embedding_model}}\"",
"Enable API Key": "",
"Enable autocomplete generation for chat messages": "",
"Enable Code Execution": "",
"Enable Code Interpreter": "",
"Enable Community Sharing": "Aktiver deling til Community",
"Enable Memory Locking (mlock) to prevent model data from being swapped out of RAM. This option locks the model's working set of pages into RAM, ensuring that they will not be swapped out to disk. This can help maintain performance by avoiding page faults and ensuring fast data access.": "",
"Enable Memory Mapping (mmap) to load model data. This option allows the system to use disk storage as an extension of RAM by treating disk files as if they were in RAM. This can improve model performance by allowing for faster data access. However, it may not work correctly with all systems and can consume a significant amount of disk space.": "",
"Enable Message Rating": "Aktiver rating af besked",
"Enable Mirostat sampling for controlling perplexity. (Default: 0, 0 = Disabled, 1 = Mirostat, 2 = Mirostat 2.0)": "",
"Enable Mirostat sampling for controlling perplexity.": "",
"Enable New Sign Ups": "Aktiver nye signups",
"Enabled": "Aktiveret",
"Ensure your CSV file includes 4 columns in this order: Name, Email, Password, Role.": "Sørg for, at din CSV-fil indeholder 4 kolonner i denne rækkefølge: Name, Email, Password, Role.",
@@ -375,6 +382,7 @@
"Enter CFG Scale (e.g. 7.0)": "Indtast CFG-skala (f.eks. 7.0)",
"Enter Chunk Overlap": "Indtast overlapning af tekststykker",
"Enter Chunk Size": "Indtast størrelse af tekststykker",
"Enter comma-seperated \"token:bias_value\" pairs (example: 5432:100, 413:-100)": "",
"Enter description": "",
"Enter Document Intelligence Endpoint": "",
"Enter Document Intelligence Key": "",
@@ -389,11 +397,13 @@
"Enter Jupyter Token": "",
"Enter Jupyter URL": "",
"Enter Kagi Search API Key": "",
"Enter Key Behavior": "",
"Enter language codes": "Indtast sprogkoder",
"Enter Model ID": "Indtast model-ID",
"Enter model tag (e.g. {{modelTag}})": "Indtast modelmærke (f.eks. {{modelTag}})",
"Enter Mojeek Search API Key": "",
"Enter Number of Steps (e.g. 50)": "Indtast antal trin (f.eks. 50)",
"Enter Perplexity API Key": "",
"Enter proxy URL (e.g. https://user:password@host:port)": "",
"Enter reasoning effort": "",
"Enter Sampler (e.g. Euler a)": "Indtast sampler (f.eks. Euler a)",
@@ -417,6 +427,7 @@
"Enter the public URL of your WebUI. This URL will be used to generate links in the notifications.": "",
"Enter Tika Server URL": "Indtast Tika Server URL",
"Enter timeout in seconds": "",
"Enter to Send": "",
"Enter Top K": "Indtast Top K",
"Enter URL (e.g. http://127.0.0.1:7860/)": "Indtast URL (f.eks. http://127.0.0.1:7860/)",
"Enter URL (e.g. http://localhost:11434)": "Indtast URL (f.eks. http://localhost:11434)",
@@ -440,9 +451,13 @@
"Example: mail": "",
"Example: ou=users,dc=foo,dc=example": "",
"Example: sAMAccountName or uid or userPrincipalName": "",
"Exceeded the number of seats in your license. Please contact support to increase the number of seats.": "",
"Exclude": "",
"Execute code for analysis": "",
"Expand": "",
"Experimental": "Eksperimentel",
"Explain": "",
"Explain this section to me in more detail": "",
"Explore the cosmos": "",
"Export": "Eksportér",
"Export All Archived Chats": "",
@@ -566,7 +581,7 @@
"Include": "",
"Include `--api-auth` flag when running stable-diffusion-webui": "Inkluder `--api-auth` flag, når du kører stable-diffusion-webui",
"Include `--api` flag when running stable-diffusion-webui": "Inkluder `--api` flag, når du kører stable-diffusion-webui",
"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive. (Default: 0.1)": "",
"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive.": "",
"Info": "Info",
"Input commands": "Inputkommandoer",
"Install from Github URL": "Installer fra Github URL",
@@ -624,6 +639,7 @@
"Local": "",
"Local Models": "Lokale modeller",
"Location access not allowed": "",
"Logit Bias": "",
"Lost": "",
"LTR": "LTR",
"Made by Open WebUI Community": "Lavet af Open WebUI Community",
@@ -764,6 +780,7 @@
"Permission denied when accessing microphone": "Tilladelse nægtet ved adgang til mikrofon",
"Permission denied when accessing microphone: {{error}}": "Tilladelse nægtet ved adgang til mikrofon: {{error}}",
"Permissions": "",
"Perplexity API Key": "",
"Personalization": "Personalisering",
"Pin": "Fastgør",
"Pinned": "Fastgjort",
@@ -809,7 +826,7 @@
"Reasoning Effort": "",
"Record voice": "Optag stemme",
"Redirecting you to Open WebUI Community": "Omdirigerer dig til Open WebUI Community",
"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative. (Default: 40)": "",
"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative.": "",
"Refer to yourself as \"User\" (e.g., \"User is learning Spanish\")": "Referer til dig selv som \"Bruger\" (f.eks. \"Bruger lærer spansk\")",
"References from": "",
"Refused when it shouldn't have": "Afvist, når den ikke burde have været det",
@@ -918,11 +935,11 @@
"Set the number of worker threads used for computation. This option controls how many threads are used to process incoming requests concurrently. Increasing this value can improve performance under high concurrency workloads but may also consume more CPU resources.": "",
"Set Voice": "Indstil stemme",
"Set whisper model": "",
"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 0)": "",
"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 1.1)": "",
"Sets how far back for the model to look back to prevent repetition. (Default: 64, 0 = disabled, -1 = num_ctx)": "",
"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt. (Default: random)": "",
"Sets the size of the context window used to generate the next token. (Default: 2048)": "",
"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
"Sets how far back for the model to look back to prevent repetition.": "",
"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt.": "",
"Sets the size of the context window used to generate the next token.": "",
"Sets the stop sequences to use. When this pattern is encountered, the LLM will stop generating text and return. Multiple stop patterns may be set by specifying multiple separate stop parameters in a modelfile.": "",
"Settings": "Indstillinger",
"Settings saved successfully!": "Indstillinger gemt!",
@@ -964,7 +981,7 @@
"System Prompt": "Systemprompt",
"Tags Generation": "",
"Tags Generation Prompt": "",
"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting. (default: 1)": "",
"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting.": "",
"Talk to model": "",
"Tap to interrupt": "Tryk for at afbryde",
"Tasks": "",
@@ -979,7 +996,7 @@
"Thanks for your feedback!": "Tak for din feedback!",
"The Application Account DN you bind with for search": "",
"The base to search for users": "",
"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory. (Default: 512)": "",
"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory.": "",
"The developers behind this plugin are passionate volunteers from the community. If you find this plugin helpful, please consider contributing to its development.": "Udviklerne bag dette plugin er passionerede frivillige fra fællesskabet. Hvis du finder dette plugin nyttigt, kan du overveje at bidrage til dets udvikling.",
"The evaluation leaderboard is based on the Elo rating system and is updated in real-time.": "",
"The LDAP attribute that maps to the mail that users use to sign in.": "",
@@ -988,14 +1005,14 @@
"The maximum file size in MB. If the file size exceeds this limit, the file will not be uploaded.": "Den maksimale filstørrelse i MB. Hvis filstørrelsen overstiger denne grænse, uploades filen ikke.",
"The maximum number of files that can be used at once in chat. If the number of files exceeds this limit, the files will not be uploaded.": "Det maksimale antal filer, der kan bruges på én gang i chatten. Hvis antallet af filer overstiger denne grænse, uploades filerne ikke.",
"The score should be a value between 0.0 (0%) and 1.0 (100%).": "Scoren skal være en værdi mellem 0,0 (0%) og 1,0 (100%).",
"The temperature of the model. Increasing the temperature will make the model answer more creatively. (Default: 0.8)": "",
"The temperature of the model. Increasing the temperature will make the model answer more creatively.": "",
"Theme": "Tema",
"Thinking...": "Tænker...",
"This action cannot be undone. Do you wish to continue?": "Denne handling kan ikke fortrydes. Vil du fortsætte?",
"This ensures that your valuable conversations are securely saved to your backend database. Thank you!": "Dette sikrer, at dine værdifulde samtaler gemmes sikkert i din backend-database. Tak!",
"This is an experimental feature, it may not function as expected and is subject to change at any time.": "Dette er en eksperimentel funktion, den fungerer muligvis ikke som forventet og kan ændres når som helst.",
"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics. (Default: 24)": "",
"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated. (Default: 128)": "",
"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics.": "",
"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated.": "",
"This option will delete all existing files in the collection and replace them with newly uploaded files.": "Denne indstilling sletter alle eksisterende filer i samlingen og erstatter dem med nyligt uploadede filer.",
"This response was generated by \"{{model}}\"": "",
"This will delete": "Dette vil slette",
@@ -1132,7 +1149,7 @@
"Why?": "",
"Widescreen Mode": "Widescreen-tilstand",
"Won": "",
"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text. (Default: 0.9)": "",
"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text.": "",
"Workspace": "Arbejdsområde",
"Workspace Permissions": "",
"Write": "",
@@ -1142,6 +1159,7 @@
"Write your model template content here": "",
"Yesterday": "I går",
"You": "Du",
"You are currently using a trial license. Please contact support to upgrade your license.": "",
"You can only chat with a maximum of {{maxCount}} file(s) at a time.": "Du kan kun chatte med maksimalt {{maxCount}} fil(er) ad gangen.",
"You can personalize your interactions with LLMs by adding memories through the 'Manage' button below, making them more helpful and tailored to you.": "Du kan personliggøre dine interaktioner med LLM'er ved at tilføje minder via knappen 'Administrer' nedenfor, hvilket gør dem mere nyttige og skræddersyet til dig.",
"You cannot upload an empty file.": "",

View File

@@ -5,6 +5,7 @@
"(e.g. `sh webui.sh --api`)": "(z. B. `sh webui.sh --api`)",
"(latest)": "(neueste)",
"{{ models }}": "{{ models }}",
"{{COUNT}} hidden lines": "",
"{{COUNT}} Replies": "{{COUNT}} Antworten",
"{{user}}'s Chats": "{{user}}s Unterhaltungen",
"{{webUIName}} Backend Required": "{{webUIName}}-Backend erforderlich",
@@ -13,7 +14,7 @@
"A task model is used when performing tasks such as generating titles for chats and web search queries": "Ein Aufgabenmodell wird für Aufgaben wie das Generieren von Unterhaltungstiteln und Websuchanfragen verwendet.",
"a user": "ein Benutzer",
"About": "Über",
"Accept autocomplete generation / Jump to prompt variable": "",
"Accept autocomplete generation / Jump to prompt variable": "Automatische Vervollständigung akzeptieren / Zur Prompt-Variable springen",
"Access": "Zugang",
"Access Control": "Zugangskontrolle",
"Accessible to all users": "Für alle Benutzer zugänglich",
@@ -21,7 +22,7 @@
"Account Activation Pending": "Kontoaktivierung ausstehend",
"Accurate information": "Präzise Information(en)",
"Actions": "Aktionen",
"Activate": "",
"Activate": "Aktivieren",
"Activate this command by typing \"/{{COMMAND}}\" to chat input.": "Aktivieren Sie diesen Befehl, indem Sie \"/{{COMMAND}}\" in die Chat-Eingabe eingeben.",
"Active Users": "Aktive Benutzer",
"Add": "Hinzufügen",
@@ -51,6 +52,7 @@
"Admins have access to all tools at all times; users need tools assigned per model in the workspace.": "Administratoren haben jederzeit Zugriff auf alle Werkzeuge; Benutzern müssen Werkzeuge pro Modell im Arbeitsbereich zugewiesen werden.",
"Advanced Parameters": "Erweiterte Parameter",
"Advanced Params": "Erweiterte Parameter",
"All": "",
"All Documents": "Alle Dokumente",
"All models deleted successfully": "Alle Modelle erfolgreich gelöscht",
"Allow Chat Controls": "Chat-Steuerung erlauben",
@@ -64,7 +66,7 @@
"Allow Voice Interruption in Call": "Unterbrechung durch Stimme im Anruf zulassen",
"Allowed Endpoints": "Erlaubte Endpunkte",
"Already have an account?": "Haben Sie bereits einen Account?",
"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out. (Default: 0.0)": "Alternative zu top_p und zielt darauf ab, ein Gleichgewicht zwischen Qualität und Vielfalt zu gewährleisten. Der Parameter p repräsentiert die Mindestwahrscheinlichkeit für ein Token, um berücksichtigt zu werden, relativ zur Wahrscheinlichkeit des wahrscheinlichsten Tokens. Zum Beispiel, bei p=0.05 und das wahrscheinlichste Token hat eine Wahrscheinlichkeit von 0.9, werden Logits mit einem Wert von weniger als 0.045 herausgefiltert. (Standard: 0.0)",
"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out.": "Alternative zu top_p, die ein Gleichgewicht zwischen Qualität und Vielfalt gewährleisten soll. Der Parameter p repräsentiert die Mindestwahrscheinlichkeit, mit der ein Token berücksichtigt wird, relativ zur Wahrscheinlichkeit des wahrscheinlichsten Tokens. Wenn beispielsweise p=0.05 ist und das wahrscheinlichste Token eine Wahrscheinlichkeit von 0.9 hat, werden Logits mit einem Wert von weniger als 0.045 herausgefiltert.",
"Always": "Immer",
"Amazing": "Fantastisch",
"an assistant": "ein Assistent",
@@ -86,23 +88,24 @@
"Archive All Chats": "Alle Unterhaltungen archivieren",
"Archived Chats": "Archivierte Unterhaltungen",
"archived-chat-export": "archivierter-chat-export",
"Are you sure you want to clear all memories? This action cannot be undone.": "",
"Are you sure you want to clear all memories? This action cannot be undone.": "Sind Sie sicher, dass Sie alle Erinnerungen löschen möchten? Diese Handlung kann nicht rückgängig gemacht werden.",
"Are you sure you want to delete this channel?": "Sind Sie sicher, dass Sie diesen Kanal löschen möchten?",
"Are you sure you want to delete this message?": "Sind Sie sicher, dass Sie diese Nachricht löschen möchten?",
"Are you sure you want to unarchive all archived chats?": "Sind Sie sicher, dass Sie alle archivierten Unterhaltungen wiederherstellen möchten?",
"Are you sure?": "Sind Sie sicher?",
"Arena Models": "Arena-Modelle",
"Artifacts": "Artefakte",
"Ask": "",
"Ask a question": "Stellen Sie eine Frage",
"Assistant": "Assistent",
"Attach file from knowledge": "",
"Attach file from knowledge": "Datei aus Wissensspeicher anhängen",
"Attention to detail": "Aufmerksamkeit für Details",
"Attribute for Mail": "Attribut für E-Mail",
"Attribute for Username": "Attribut für Benutzername",
"Audio": "Audio",
"August": "August",
"Authenticate": "Authentifizieren",
"Authentication": "",
"Authentication": "Authentifizierung",
"Auto-Copy Response to Clipboard": "Antwort automatisch in die Zwischenablage kopieren",
"Auto-playback response": "Antwort automatisch abspielen",
"Autocomplete Generation": "Automatische Vervollständigung",
@@ -127,11 +130,12 @@
"Bing Search V7 Endpoint": "Bing Search V7-Endpunkt",
"Bing Search V7 Subscription Key": "Bing Search V7-Abonnement-Schlüssel",
"Bocha Search API Key": "",
"Boosting or penalizing specific tokens for constrained responses. Bias values will be clamped between -100 and 100 (inclusive). (Default: none)": "",
"Brave Search API Key": "Brave Search API-Schlüssel",
"By {{name}}": "Von {{name}}",
"Bypass Embedding and Retrieval": "",
"Bypass Embedding and Retrieval": "Embedding und Retrieval umgehen",
"Bypass SSL verification for Websites": "SSL-Überprüfung für Webseiten umgehen",
"Calendar": "",
"Calendar": "Kalender",
"Call": "Anrufen",
"Call feature is not supported when using Web STT engine": "Die Anruffunktion wird nicht unterstützt, wenn die Web-STT-Engine verwendet wird.",
"Camera": "Kamera",
@@ -170,7 +174,7 @@
"Click here to": "Klicken Sie hier, um",
"Click here to download user import template file.": "Klicken Sie hier, um die Vorlage für den Benutzerimport herunterzuladen.",
"Click here to learn more about faster-whisper and see the available models.": "Klicken Sie hier, um mehr über faster-whisper zu erfahren und die verfügbaren Modelle zu sehen.",
"Click here to see available models.": "",
"Click here to see available models.": "Klicken Sie hier, um die verfügbaren Modelle anzuzeigen.",
"Click here to select": "Klicken Sie zum Auswählen hier",
"Click here to select a csv file.": "Klicken Sie zum Auswählen einer CSV-Datei hier.",
"Click here to select a py file.": "Klicken Sie zum Auswählen einer py-Datei hier.",
@@ -183,13 +187,14 @@
"Clone of {{TITLE}}": "Klon von {{TITLE}}",
"Close": "Schließen",
"Code execution": "Codeausführung",
"Code Execution": "",
"Code Execution": "Codeausführung",
"Code Execution Engine": "",
"Code Execution Timeout": "",
"Code formatted successfully": "Code erfolgreich formatiert",
"Code Interpreter": "Code-Interpreter",
"Code Interpreter Engine": "",
"Code Interpreter Prompt Template": "",
"Collapse": "",
"Collection": "Kollektion",
"Color": "Farbe",
"ComfyUI": "ComfyUI",
@@ -206,9 +211,9 @@
"Confirm Password": "Passwort bestätigen",
"Confirm your action": "Bestätigen Sie Ihre Aktion.",
"Confirm your new password": "Neues Passwort bestätigen",
"Connect to your own OpenAI compatible API endpoints.": "",
"Connect to your own OpenAI compatible API endpoints.": "Verbinden Sie sich mit Ihren eigenen OpenAI-kompatiblen API-Endpunkten.",
"Connections": "Verbindungen",
"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort. (Default: medium)": "Beschränkt den Aufwand für das Schlussfolgern bei Schlussfolgerungsmodellen. Nur anwendbar auf Schlussfolgerungsmodelle von spezifischen Anbietern, die den Schlussfolgerungsaufwand unterstützen. (Standard: medium)",
"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort.": "Beschränkt den Aufwand für das Schlussfolgern bei Schlussfolgerungsmodellen. Nur anwendbar auf Schlussfolgerungsmodelle von spezifischen Anbietern, die den Schlussfolgerungsaufwand unterstützen.",
"Contact Admin for WebUI Access": "Kontaktieren Sie den Administrator für den Zugriff auf die Weboberfläche",
"Content": "Inhalt",
"Content Extraction Engine": "",
@@ -218,9 +223,9 @@
"Continue with Email": "Mit Email fortfahren",
"Continue with LDAP": "Mit LDAP fortfahren",
"Control how message text is split for TTS requests. 'Punctuation' splits into sentences, 'paragraphs' splits into paragraphs, and 'none' keeps the message as a single string.": "Kontrollieren Sie, wie Nachrichtentext für TTS-Anfragen aufgeteilt wird. 'Punctuation' teilt in Sätze auf, 'paragraphs' teilt in Absätze auf und 'none' behält die Nachricht als einzelnen String.",
"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled. (Default: 1.1)": "",
"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled.": "",
"Controls": "Steuerung",
"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text. (Default: 5.0)": "Kontrolliert das Gleichgewicht zwischen Kohärenz und Vielfalt des Ausgabetextes. Ein niedrigerer Wert führt zu fokussierterem und kohärenterem Text. (Standard: 5.0)",
"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text.": "Kontrolliert das Gleichgewicht zwischen Kohärenz und Vielfalt des Ausgabetextes. Ein niedrigerer Wert führt zu fokussierterem und kohärenterem Text.",
"Copied": "Kopiert",
"Copied shared chat URL to clipboard!": "Freigabelink in die Zwischenablage kopiert!",
"Copied to clipboard": "In die Zwischenablage kopiert",
@@ -230,7 +235,7 @@
"Copy Link": "Link kopieren",
"Copy to clipboard": "In die Zwischenablage kopieren",
"Copying to clipboard was successful!": "Das Kopieren in die Zwischenablage war erfolgreich!",
"CORS must be properly configured by the provider to allow requests from Open WebUI.": "",
"CORS must be properly configured by the provider to allow requests from Open WebUI.": "CORS muss vom Anbieter korrekt konfiguriert werden, um Anfragen von Open WebUI zuzulassen.",
"Create": "Erstellen",
"Create a knowledge base": "Wissensspeicher erstellen",
"Create a model": "Modell erstellen",
@@ -245,10 +250,11 @@
"Created At": "Erstellt am",
"Created by": "Erstellt von",
"CSV Import": "CSV-Import",
"Ctrl+Enter to Send": "",
"Current Model": "Aktuelles Modell",
"Current Password": "Aktuelles Passwort",
"Custom": "Benutzerdefiniert",
"Danger Zone": "",
"Danger Zone": "Gefahrenzone",
"Dark": "Dunkel",
"Database": "Datenbank",
"December": "Dezember",
@@ -286,9 +292,9 @@
"Describe your knowledge base and objectives": "Beschreibe deinen Wissensspeicher und deine Ziele",
"Description": "Beschreibung",
"Didn't fully follow instructions": "Nicht genau den Anweisungen gefolgt",
"Direct Connections": "",
"Direct Connections allow users to connect to their own OpenAI compatible API endpoints.": "",
"Direct Connections settings updated": "",
"Direct Connections": "Direktverbindungen",
"Direct Connections allow users to connect to their own OpenAI compatible API endpoints.": "Direktverbindungen ermöglichen es Benutzern, sich mit ihren eigenen OpenAI-kompatiblen API-Endpunkten zu verbinden.",
"Direct Connections settings updated": "Direktverbindungs-Einstellungen aktualisiert",
"Disabled": "Deaktiviert",
"Discover a function": "Entdecken Sie weitere Funktionen",
"Discover a model": "Entdecken Sie weitere Modelle",
@@ -321,14 +327,14 @@
"Don't like the style": "schlechter Schreibstil",
"Done": "Erledigt",
"Download": "Exportieren",
"Download as SVG": "",
"Download as SVG": "Exportieren als SVG",
"Download canceled": "Export abgebrochen",
"Download Database": "Datenbank exportieren",
"Drag and drop a file to upload or select a file to view": "Ziehen Sie eine Datei zum Hochladen oder wählen Sie eine Datei zum Anzeigen aus",
"Draw": "Zeichnen",
"Drop any files here to add to the conversation": "Ziehen Sie beliebige Dateien hierher, um sie der Unterhaltung hinzuzufügen",
"e.g. '30s','10m'. Valid time units are 's', 'm', 'h'.": "z. B. '30s','10m'. Gültige Zeiteinheiten sind 's', 'm', 'h'.",
"e.g. 60": "",
"e.g. 60": "z. B. 60",
"e.g. A filter to remove profanity from text": "z. B. Ein Filter, um Schimpfwörter aus Text zu entfernen",
"e.g. My Filter": "z. B. Mein Filter",
"e.g. My Tools": "z. B. Meine Werkzeuge",
@@ -353,12 +359,13 @@
"Embedding model set to \"{{embedding_model}}\"": "Embedding-Modell auf \"{{embedding_model}}\" gesetzt",
"Enable API Key": "API-Schlüssel aktivieren",
"Enable autocomplete generation for chat messages": "Automatische Vervollständigung für Chat-Nachrichten aktivieren",
"Enable Code Execution": "",
"Enable Code Interpreter": "",
"Enable Community Sharing": "Community-Freigabe aktivieren",
"Enable Memory Locking (mlock) to prevent model data from being swapped out of RAM. This option locks the model's working set of pages into RAM, ensuring that they will not be swapped out to disk. This can help maintain performance by avoiding page faults and ensuring fast data access.": "Aktivieren Sie Memory Locking (mlock), um zu verhindern, dass Modelldaten aus dem RAM ausgelagert werden. Diese Option sperrt die Arbeitsseiten des Modells im RAM, um sicherzustellen, dass sie nicht auf die Festplatte ausgelagert werden. Dies kann die Leistung verbessern, indem Page Faults vermieden werden und ein schneller Datenzugriff sichergestellt wird.",
"Enable Memory Mapping (mmap) to load model data. This option allows the system to use disk storage as an extension of RAM by treating disk files as if they were in RAM. This can improve model performance by allowing for faster data access. However, it may not work correctly with all systems and can consume a significant amount of disk space.": "Aktivieren Sie Memory Mapping (mmap), um Modelldaten zu laden. Diese Option ermöglicht es dem System, den Festplattenspeicher als Erweiterung des RAM zu verwenden, indem Festplattendateien so behandelt werden, als ob sie im RAM wären. Dies kann die Modellleistung verbessern, indem ein schnellerer Datenzugriff ermöglicht wird. Es funktioniert jedoch möglicherweise nicht auf allen Systemen korrekt und kann einen erheblichen Teil des Festplattenspeichers beanspruchen.",
"Enable Message Rating": "Nachrichtenbewertung aktivieren",
"Enable Mirostat sampling for controlling perplexity. (Default: 0, 0 = Disabled, 1 = Mirostat, 2 = Mirostat 2.0)": "Mirostat Sampling zur Steuerung der Perplexität aktivieren. (Standard: 0, 0 = Deaktiviert, 1 = Mirostat, 2 = Mirostat 2.0)",
"Enable Mirostat sampling for controlling perplexity.": "Mirostat Sampling zur Steuerung der Perplexität aktivieren.",
"Enable New Sign Ups": "Registrierung erlauben",
"Enabled": "Aktiviert",
"Ensure your CSV file includes 4 columns in this order: Name, Email, Password, Role.": "Stellen Sie sicher, dass Ihre CSV-Datei 4 Spalten in dieser Reihenfolge enthält: Name, E-Mail, Passwort, Rolle.",
@@ -375,10 +382,11 @@
"Enter CFG Scale (e.g. 7.0)": "Geben Sie die CFG-Skala ein (z. B. 7.0)",
"Enter Chunk Overlap": "Geben Sie die Blocküberlappung ein",
"Enter Chunk Size": "Geben Sie die Blockgröße ein",
"Enter comma-seperated \"token:bias_value\" pairs (example: 5432:100, 413:-100)": "",
"Enter description": "Geben Sie eine Beschreibung ein",
"Enter Document Intelligence Endpoint": "",
"Enter Document Intelligence Key": "",
"Enter domains separated by commas (e.g., example.com,site.org)": "",
"Enter domains separated by commas (e.g., example.com,site.org)": "Geben Sie die Domains durch Kommas getrennt ein (z. B. example.com,site.org)",
"Enter Exa API Key": "Geben Sie den Exa-API-Schlüssel ein",
"Enter Github Raw URL": "Geben Sie die Github Raw-URL ein",
"Enter Google PSE API Key": "Geben Sie den Google PSE-API-Schlüssel ein",
@@ -389,11 +397,13 @@
"Enter Jupyter Token": "",
"Enter Jupyter URL": "",
"Enter Kagi Search API Key": "Geben sie den Kagi Search API-Schlüssel ein",
"Enter Key Behavior": "",
"Enter language codes": "Geben Sie die Sprachcodes ein",
"Enter Model ID": "Geben Sie die Modell-ID ein",
"Enter model tag (e.g. {{modelTag}})": "Geben Sie den Modell-Tag ein (z. B. {{modelTag}})",
"Enter Mojeek Search API Key": "Geben Sie den Mojeek Search API-Schlüssel ein",
"Enter Number of Steps (e.g. 50)": "Geben Sie die Anzahl an Schritten ein (z. B. 50)",
"Enter Perplexity API Key": "",
"Enter proxy URL (e.g. https://user:password@host:port)": "Geben sie die Proxy-URL ein (z. B. https://user:password@host:port)",
"Enter reasoning effort": "Geben Sie den Schlussfolgerungsaufwand ein",
"Enter Sampler (e.g. Euler a)": "Geben Sie den Sampler ein (z. B. Euler a)",
@@ -417,6 +427,7 @@
"Enter the public URL of your WebUI. This URL will be used to generate links in the notifications.": "Geben sie die öffentliche URL Ihrer WebUI ein. Diese URL wird verwendet, um Links in den Benachrichtigungen zu generieren.",
"Enter Tika Server URL": "Geben Sie die Tika-Server-URL ein",
"Enter timeout in seconds": "",
"Enter to Send": "",
"Enter Top K": "Geben Sie Top K ein",
"Enter URL (e.g. http://127.0.0.1:7860/)": "Geben Sie die URL ein (z. B. http://127.0.0.1:7860/)",
"Enter URL (e.g. http://localhost:11434)": "Geben Sie die URL ein (z. B. http://localhost:11434)",
@@ -440,9 +451,13 @@
"Example: mail": "Beispiel: mail",
"Example: ou=users,dc=foo,dc=example": "Beispiel: ou=users,dc=foo,dc=example",
"Example: sAMAccountName or uid or userPrincipalName": "Beispiel: sAMAccountName or uid or userPrincipalName",
"Exceeded the number of seats in your license. Please contact support to increase the number of seats.": "",
"Exclude": "Ausschließen",
"Execute code for analysis": "Code für Analyse ausführen",
"Expand": "",
"Experimental": "Experimentell",
"Explain": "",
"Explain this section to me in more detail": "",
"Explore the cosmos": "Erforschen Sie das Universum",
"Export": "Exportieren",
"Export All Archived Chats": "Alle archivierten Unterhaltungen exportieren",
@@ -464,7 +479,7 @@
"Failed to save models configuration": "Fehler beim Speichern der Modellkonfiguration",
"Failed to update settings": "Fehler beim Aktualisieren der Einstellungen",
"Failed to upload file.": "Fehler beim Hochladen der Datei.",
"Features": "",
"Features": "Funktionalitäten",
"Features Permissions": "Funktionen-Berechtigungen",
"February": "Februar",
"Feedback History": "Feedback-Verlauf",
@@ -566,7 +581,7 @@
"Include": "Einschließen",
"Include `--api-auth` flag when running stable-diffusion-webui": "Fügen Sie beim Ausführen von stable-diffusion-webui die Option `--api-auth` hinzu",
"Include `--api` flag when running stable-diffusion-webui": "Fügen Sie beim Ausführen von stable-diffusion-webui die Option `--api` hinzu",
"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive. (Default: 0.1)": "Beeinflusst, wie schnell der Algorithmus auf Feedback aus dem generierten Text reagiert. Eine niedrigere Lernrate führt zu langsameren Anpassungen, während eine höhere Lernrate den Algorithmus reaktionsschneller macht. (Standard: 0.1)",
"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive.": "",
"Info": "Info",
"Input commands": "Eingabebefehle",
"Install from Github URL": "Installiere von der Github-URL",
@@ -613,24 +628,25 @@
"Leave empty to include all models from \"{{URL}}/models\" endpoint": "Leer lassen, um alle Modelle vom \"{{URL}}/models\"-Endpunkt einzuschließen",
"Leave empty to include all models or select specific models": "Leer lassen, um alle Modelle einzuschließen oder spezifische Modelle auszuwählen",
"Leave empty to use the default prompt, or enter a custom prompt": "Leer lassen, um den Standardprompt zu verwenden, oder geben Sie einen benutzerdefinierten Prompt ein",
"Leave model field empty to use the default model.": "",
"License": "",
"Leave model field empty to use the default model.": "Leer lassen, um das Standardmodell zu verwenden.",
"License": "Lizenz",
"Light": "Hell",
"Listening...": "Höre zu...",
"Llama.cpp": "Llama.cpp",
"LLMs can make mistakes. Verify important information.": "LLMs können Fehler machen. Überprüfe wichtige Informationen.",
"Loader": "",
"Loading Kokoro.js...": "",
"Loading Kokoro.js...": "Lade Kokoro.js...",
"Local": "Lokal",
"Local Models": "Lokale Modelle",
"Location access not allowed": "",
"Location access not allowed": "Standortzugriff nicht erlaub",
"Logit Bias": "",
"Lost": "Verloren",
"LTR": "LTR",
"Made by Open WebUI Community": "Von der OpenWebUI-Community",
"Make sure to enclose them with": "Umschließe Variablen mit",
"Make sure to export a workflow.json file as API format from ComfyUI.": "Stellen Sie sicher, dass sie eine workflow.json-Datei im API-Format von ComfyUI exportieren.",
"Manage": "Verwalten",
"Manage Direct Connections": "",
"Manage Direct Connections": "Direkte Verbindungen verwalten",
"Manage Models": "Modelle verwalten",
"Manage Ollama": "Ollama verwalten",
"Manage Ollama API Connections": "Ollama-API-Verbindungen verwalten",
@@ -697,7 +713,7 @@
"No HTML, CSS, or JavaScript content found.": "Keine HTML-, CSS- oder JavaScript-Inhalte gefunden.",
"No inference engine with management support found": "Keine Inferenz-Engine mit Management-Unterstützung gefunden",
"No knowledge found": "Kein Wissen gefunden",
"No memories to clear": "",
"No memories to clear": "Keine Erinnerungen zum Entfernen",
"No model IDs": "Keine Modell-IDs",
"No models found": "Keine Modelle gefunden",
"No models selected": "Keine Modelle ausgewählt",
@@ -764,6 +780,7 @@
"Permission denied when accessing microphone": "Zugriff auf das Mikrofon verweigert",
"Permission denied when accessing microphone: {{error}}": "Zugriff auf das Mikrofon verweigert: {{error}}",
"Permissions": "Berechtigungen",
"Perplexity API Key": "",
"Personalization": "Personalisierung",
"Pin": "Anheften",
"Pinned": "Angeheftet",
@@ -776,7 +793,7 @@
"Plain text (.txt)": "Nur Text (.txt)",
"Playground": "Testumgebung",
"Please carefully review the following warnings:": "Bitte überprüfen Sie die folgenden Warnungen sorgfältig:",
"Please do not close the settings page while loading the model.": "",
"Please do not close the settings page while loading the model.": "Bitte schließen die Einstellungen-Seite nicht, während das Modell lädt.",
"Please enter a prompt": "Bitte geben Sie einen Prompt ein",
"Please fill in all fields.": "Bitte füllen Sie alle Felder aus.",
"Please select a model first.": "Bitte wählen Sie zuerst ein Modell aus.",
@@ -809,7 +826,7 @@
"Reasoning Effort": "Schlussfolgerungsaufwand",
"Record voice": "Stimme aufnehmen",
"Redirecting you to Open WebUI Community": "Sie werden zur OpenWebUI-Community weitergeleitet",
"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative. (Default: 40)": "Reduziert die Wahrscheinlichkeit, Unsinn zu generieren. Ein höherer Wert (z.B. 100) liefert vielfältigere Antworten, während ein niedrigerer Wert (z.B. 10) konservativer ist. (Standard: 40)",
"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative.": "",
"Refer to yourself as \"User\" (e.g., \"User is learning Spanish\")": "Beziehen Sie sich auf sich selbst als \"Benutzer\" (z. B. \"Benutzer lernt Spanisch\")",
"References from": "Referenzen aus",
"Refused when it shouldn't have": "Abgelehnt, obwohl es nicht hätte abgelehnt werden sollen",
@@ -821,7 +838,7 @@
"Rename": "Umbenennen",
"Reorder Models": "Modelle neu anordnen",
"Repeat Last N": "Wiederhole die letzten N",
"Repeat Penalty (Ollama)": "",
"Repeat Penalty (Ollama)": "Wiederholungsstrafe (Ollama)",
"Reply in Thread": "Im Thread antworten",
"Request Mode": "Anforderungsmodus",
"Reranking Model": "Reranking-Modell",
@@ -885,7 +902,7 @@
"Select a pipeline": "Wählen Sie eine Pipeline",
"Select a pipeline url": "Wählen Sie eine Pipeline-URL",
"Select a tool": "Wählen Sie ein Werkzeug",
"Select an auth method": "",
"Select an auth method": "Wählen Sie eine Authentifizierungsmethode",
"Select an Ollama instance": "Wählen Sie eine Ollama-Instanz",
"Select Engine": "Engine auswählen",
"Select Knowledge": "Wissensdatenbank auswählen",
@@ -918,11 +935,11 @@
"Set the number of worker threads used for computation. This option controls how many threads are used to process incoming requests concurrently. Increasing this value can improve performance under high concurrency workloads but may also consume more CPU resources.": "Legt die Anzahl der für die Berechnung verwendeten GPU-Geräte fest. Diese Option steuert, wie viele GPU-Geräte (falls verfügbar) zur Verarbeitung eingehender Anfragen verwendet werden. Eine Erhöhung dieses Wertes kann die Leistung für Modelle, die für GPU-Beschleunigung optimiert sind, erheblich verbessern, kann jedoch auch mehr Strom und GPU-Ressourcen verbrauchen.",
"Set Voice": "Stimme festlegen",
"Set whisper model": "Whisper-Modell festlegen",
"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 0)": "",
"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 1.1)": "",
"Sets how far back for the model to look back to prevent repetition. (Default: 64, 0 = disabled, -1 = num_ctx)": "Legt fest, wie weit das Modell zurückblicken soll, um Wiederholungen zu verhindern. (Standard: 64, 0 = deaktiviert, -1 = num_ctx)",
"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt. (Default: random)": "Legt den Zufallszahlengenerator-Seed für die Generierung fest. Wenn dieser auf eine bestimmte Zahl gesetzt wird, erzeugt das Modell denselben Text für denselben Prompt. (Standard: zufällig)",
"Sets the size of the context window used to generate the next token. (Default: 2048)": "Legt die Größe des Kontextfensters fest, das zur Generierung des nächsten Tokens verwendet wird. (Standard: 2048)",
"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
"Sets how far back for the model to look back to prevent repetition.": "",
"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt.": "",
"Sets the size of the context window used to generate the next token.": "",
"Sets the stop sequences to use. When this pattern is encountered, the LLM will stop generating text and return. Multiple stop patterns may be set by specifying multiple separate stop parameters in a modelfile.": "Legt die zu verwendenden Stoppsequenzen fest. Wenn dieses Muster erkannt wird, stoppt das LLM die Textgenerierung und gibt zurück. Mehrere Stoppmuster können festgelegt werden, indem mehrere separate Stopp-Parameter in einer Modelldatei angegeben werden.",
"Settings": "Einstellungen",
"Settings saved successfully!": "Einstellungen erfolgreich gespeichert!",
@@ -964,10 +981,10 @@
"System Prompt": "System-Prompt",
"Tags Generation": "Tag-Generierung",
"Tags Generation Prompt": "Prompt für Tag-Generierung",
"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting. (default: 1)": "Tail-Free Sampling wird verwendet, um den Einfluss weniger wahrscheinlicher Tokens auf die Ausgabe zu reduzieren. Ein höherer Wert (z.B. 2.0) reduziert den Einfluss stärker, während ein Wert von 1.0 diese Einstellung deaktiviert. (Standard: 1)",
"Talk to model": "",
"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting.": "Tail-Free Sampling wird verwendet, um den Einfluss weniger wahrscheinlicher Tokens auf die Ausgabe zu reduzieren. Ein höherer Wert (z.B. 2.0) reduziert den Einfluss stärker, während ein Wert von 1.0 diese Einstellung deaktiviert. (Standard: 1)",
"Talk to model": "Zu einem Modell sprechen",
"Tap to interrupt": "Zum Unterbrechen tippen",
"Tasks": "",
"Tasks": "Aufgaben",
"Tavily API Key": "Tavily-API-Schlüssel",
"Tell us more:": "Erzähl uns mehr",
"Temperature": "Temperatur",
@@ -979,7 +996,7 @@
"Thanks for your feedback!": "Danke für Ihr Feedback!",
"The Application Account DN you bind with for search": "Der Anwendungs-Konto-DN, mit dem Sie für die Suche binden",
"The base to search for users": "Die Basis, in der nach Benutzern gesucht wird",
"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory. (Default: 512)": "Die Batch-Größe bestimmt, wie viele Textanfragen gleichzeitig verarbeitet werden. Eine größere Batch-Größe kann die Leistung und Geschwindigkeit des Modells erhöhen, erfordert jedoch auch mehr Speicher. (Standard: 512)",
"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory.": "",
"The developers behind this plugin are passionate volunteers from the community. If you find this plugin helpful, please consider contributing to its development.": "Die Entwickler hinter diesem Plugin sind leidenschaftliche Freiwillige aus der Community. Wenn Sie dieses Plugin hilfreich finden, erwägen Sie bitte, zu seiner Entwicklung beizutragen.",
"The evaluation leaderboard is based on the Elo rating system and is updated in real-time.": "Die Bewertungs-Bestenliste basiert auf dem Elo-Bewertungssystem und wird in Echtzeit aktualisiert.",
"The LDAP attribute that maps to the mail that users use to sign in.": "Das LDAP-Attribut, das der Mail zugeordnet ist, die Benutzer zum Anmelden verwenden.",
@@ -988,14 +1005,14 @@
"The maximum file size in MB. If the file size exceeds this limit, the file will not be uploaded.": "Die maximale Dateigröße in MB. Wenn die Dateigröße dieses Limit überschreitet, wird die Datei nicht hochgeladen.",
"The maximum number of files that can be used at once in chat. If the number of files exceeds this limit, the files will not be uploaded.": "Die maximale Anzahl von Dateien, die gleichzeitig in der Unterhaltung verwendet werden können. Wenn die Anzahl der Dateien dieses Limit überschreitet, werden die Dateien nicht hochgeladen.",
"The score should be a value between 0.0 (0%) and 1.0 (100%).": "Die Punktzahl sollte ein Wert zwischen 0,0 (0 %) und 1,0 (100 %) sein.",
"The temperature of the model. Increasing the temperature will make the model answer more creatively. (Default: 0.8)": "Die Temperatur des Modells. Eine Erhöhung der Temperatur führt dazu, dass das Modell kreativer antwortet. (Standard: 0,8)",
"The temperature of the model. Increasing the temperature will make the model answer more creatively.": "",
"Theme": "Design",
"Thinking...": "Denke nach...",
"This action cannot be undone. Do you wish to continue?": "Diese Aktion kann nicht rückgängig gemacht werden. Möchten Sie fortfahren?",
"This ensures that your valuable conversations are securely saved to your backend database. Thank you!": "Dies stellt sicher, dass Ihre wertvollen Unterhaltungen sicher in Ihrer Backend-Datenbank gespeichert werden. Vielen Dank!",
"This is an experimental feature, it may not function as expected and is subject to change at any time.": "Dies ist eine experimentelle Funktion, sie funktioniert möglicherweise nicht wie erwartet und kann jederzeit geändert werden.",
"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics. (Default: 24)": "Diese Option steuert, wie viele Tokens beim Aktualisieren des Kontexts beibehalten werden. Wenn sie beispielsweise auf 2 gesetzt ist, werden die letzten 2 Tokens des Gesprächskontexts beibehalten. Das Beibehalten des Kontexts kann helfen, die Kontinuität eines Gesprächs aufrechtzuerhalten, kann jedoch die Fähigkeit verringern, auf neue Themen zu reagieren. (Standard: 24)",
"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated. (Default: 128)": "Diese Option legt die maximale Anzahl von Tokens fest, die das Modell in seiner Antwort generieren kann. Eine Erhöhung dieses Limits ermöglicht es dem Modell, längere Antworten zu geben, kann jedoch auch die Wahrscheinlichkeit erhöhen, dass unhilfreicher oder irrelevanter Inhalt generiert wird. (Standard: 128)",
"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics.": "",
"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated.": "",
"This option will delete all existing files in the collection and replace them with newly uploaded files.": "Diese Option löscht alle vorhandenen Dateien in der Sammlung und ersetzt sie durch neu hochgeladene Dateien.",
"This response was generated by \"{{model}}\"": "Diese Antwort wurde von \"{{model}}\" generiert",
"This will delete": "Dies löscht",
@@ -1005,7 +1022,7 @@
"This will reset the knowledge base and sync all files. Do you wish to continue?": "Dadurch wird die Wissensdatenbank zurückgesetzt und alle Dateien synchronisiert. Möchten Sie fortfahren?",
"Thorough explanation": "Ausführliche Erklärung",
"Thought for {{DURATION}}": "Nachgedacht für {{DURATION}}",
"Thought for {{DURATION}} seconds": "",
"Thought for {{DURATION}} seconds": "Nachgedacht für {{DURATION}} Sekunden",
"Tika": "Tika",
"Tika Server URL required.": "Tika-Server-URL erforderlich.",
"Tiktoken": "Tiktoken",
@@ -1014,7 +1031,7 @@
"Title (e.g. Tell me a fun fact)": "Titel (z. B. Erzähl mir einen lustigen Fakt)",
"Title Auto-Generation": "Unterhaltungstitel automatisch generieren",
"Title cannot be an empty string.": "Titel darf nicht leer sein.",
"Title Generation": "",
"Title Generation": "Titelgenerierung",
"Title Generation Prompt": "Prompt für Titelgenerierung",
"TLS": "TLS",
"To access the available model names for downloading,": "Um auf die verfügbaren Modellnamen zuzugreifen,",
@@ -1132,7 +1149,7 @@
"Why?": "Warum?",
"Widescreen Mode": "Breitbildmodus",
"Won": "Gewonnen",
"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text. (Default: 0.9)": "Funktioniert zusammen mit top-k. Ein höherer Wert (z.B. 0,95) führt zu vielfältigerem Text, während ein niedrigerer Wert (z.B. 0,5) fokussierteren und konservativeren Text erzeugt. (Standard: 0,9)",
"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text.": "",
"Workspace": "Arbeitsbereich",
"Workspace Permissions": "Arbeitsbereichsberechtigungen",
"Write": "Schreiben",
@@ -1142,6 +1159,7 @@
"Write your model template content here": "Schreiben Sie hier Ihren Modellvorlageninhalt",
"Yesterday": "Gestern",
"You": "Sie",
"You are currently using a trial license. Please contact support to upgrade your license.": "",
"You can only chat with a maximum of {{maxCount}} file(s) at a time.": "Sie können nur mit maximal {{maxCount}} Datei(en) gleichzeitig chatten.",
"You can personalize your interactions with LLMs by adding memories through the 'Manage' button below, making them more helpful and tailored to you.": "Personalisieren Sie Interaktionen mit LLMs, indem Sie über die Schaltfläche \"Verwalten\" Erinnerungen hinzufügen.",
"You cannot upload an empty file.": "Sie können keine leere Datei hochladen.",
@@ -1155,6 +1173,6 @@
"Your account status is currently pending activation.": "Ihr Kontostatus ist derzeit ausstehend und wartet auf Aktivierung.",
"Your entire contribution will go directly to the plugin developer; Open WebUI does not take any percentage. However, the chosen funding platform might have its own fees.": "Ihr gesamter Beitrag geht direkt an den Plugin-Entwickler; Open WebUI behält keinen Prozentsatz ein. Die gewählte Finanzierungsplattform kann jedoch eigene Gebühren haben.",
"Youtube": "YouTube",
"Youtube Language": "",
"Youtube Language": "YouTube Sprache",
"Youtube Proxy URL": ""
}

View File

@@ -5,6 +5,7 @@
"(e.g. `sh webui.sh --api`)": "(such e.g. `sh webui.sh --api`)",
"(latest)": "(much latest)",
"{{ models }}": "",
"{{COUNT}} hidden lines": "",
"{{COUNT}} Replies": "",
"{{user}}'s Chats": "",
"{{webUIName}} Backend Required": "{{webUIName}} Backend Much Required",
@@ -51,6 +52,7 @@
"Admins have access to all tools at all times; users need tools assigned per model in the workspace.": "",
"Advanced Parameters": "Advanced Parameters",
"Advanced Params": "",
"All": "",
"All Documents": "",
"All models deleted successfully": "",
"Allow Chat Controls": "",
@@ -64,7 +66,7 @@
"Allow Voice Interruption in Call": "",
"Allowed Endpoints": "",
"Already have an account?": "Such account exists?",
"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out. (Default: 0.0)": "",
"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out.": "",
"Always": "",
"Amazing": "",
"an assistant": "such assistant",
@@ -93,6 +95,7 @@
"Are you sure?": "Such certainty?",
"Arena Models": "",
"Artifacts": "",
"Ask": "",
"Ask a question": "",
"Assistant": "",
"Attach file from knowledge": "",
@@ -127,6 +130,7 @@
"Bing Search V7 Endpoint": "",
"Bing Search V7 Subscription Key": "",
"Bocha Search API Key": "",
"Boosting or penalizing specific tokens for constrained responses. Bias values will be clamped between -100 and 100 (inclusive). (Default: none)": "",
"Brave Search API Key": "",
"By {{name}}": "",
"Bypass Embedding and Retrieval": "",
@@ -190,6 +194,7 @@
"Code Interpreter": "",
"Code Interpreter Engine": "",
"Code Interpreter Prompt Template": "",
"Collapse": "",
"Collection": "Collection",
"Color": "",
"ComfyUI": "",
@@ -208,7 +213,7 @@
"Confirm your new password": "",
"Connect to your own OpenAI compatible API endpoints.": "",
"Connections": "Connections",
"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort. (Default: medium)": "",
"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort.": "",
"Contact Admin for WebUI Access": "",
"Content": "Content",
"Content Extraction Engine": "",
@@ -218,9 +223,9 @@
"Continue with Email": "",
"Continue with LDAP": "",
"Control how message text is split for TTS requests. 'Punctuation' splits into sentences, 'paragraphs' splits into paragraphs, and 'none' keeps the message as a single string.": "",
"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled. (Default: 1.1)": "",
"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled.": "",
"Controls": "",
"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text. (Default: 5.0)": "",
"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text.": "",
"Copied": "",
"Copied shared chat URL to clipboard!": "",
"Copied to clipboard": "",
@@ -245,6 +250,7 @@
"Created At": "",
"Created by": "",
"CSV Import": "",
"Ctrl+Enter to Send": "",
"Current Model": "Current Model",
"Current Password": "Current Password",
"Custom": "Custom",
@@ -353,12 +359,13 @@
"Embedding model set to \"{{embedding_model}}\"": "",
"Enable API Key": "",
"Enable autocomplete generation for chat messages": "",
"Enable Code Execution": "",
"Enable Code Interpreter": "",
"Enable Community Sharing": "",
"Enable Memory Locking (mlock) to prevent model data from being swapped out of RAM. This option locks the model's working set of pages into RAM, ensuring that they will not be swapped out to disk. This can help maintain performance by avoiding page faults and ensuring fast data access.": "",
"Enable Memory Mapping (mmap) to load model data. This option allows the system to use disk storage as an extension of RAM by treating disk files as if they were in RAM. This can improve model performance by allowing for faster data access. However, it may not work correctly with all systems and can consume a significant amount of disk space.": "",
"Enable Message Rating": "",
"Enable Mirostat sampling for controlling perplexity. (Default: 0, 0 = Disabled, 1 = Mirostat, 2 = Mirostat 2.0)": "",
"Enable Mirostat sampling for controlling perplexity.": "",
"Enable New Sign Ups": "Enable New Bark Ups",
"Enabled": "",
"Ensure your CSV file includes 4 columns in this order: Name, Email, Password, Role.": "",
@@ -375,6 +382,7 @@
"Enter CFG Scale (e.g. 7.0)": "",
"Enter Chunk Overlap": "Enter Overlap of Chunks",
"Enter Chunk Size": "Enter Size of Chunk",
"Enter comma-seperated \"token:bias_value\" pairs (example: 5432:100, 413:-100)": "",
"Enter description": "",
"Enter Document Intelligence Endpoint": "",
"Enter Document Intelligence Key": "",
@@ -389,11 +397,13 @@
"Enter Jupyter Token": "",
"Enter Jupyter URL": "",
"Enter Kagi Search API Key": "",
"Enter Key Behavior": "",
"Enter language codes": "",
"Enter Model ID": "",
"Enter model tag (e.g. {{modelTag}})": "Enter model doge tag (e.g. {{modelTag}})",
"Enter Mojeek Search API Key": "",
"Enter Number of Steps (e.g. 50)": "Enter Number of Steps (e.g. 50)",
"Enter Perplexity API Key": "",
"Enter proxy URL (e.g. https://user:password@host:port)": "",
"Enter reasoning effort": "",
"Enter Sampler (e.g. Euler a)": "",
@@ -417,6 +427,7 @@
"Enter the public URL of your WebUI. This URL will be used to generate links in the notifications.": "",
"Enter Tika Server URL": "",
"Enter timeout in seconds": "",
"Enter to Send": "",
"Enter Top K": "Enter Top Wow",
"Enter URL (e.g. http://127.0.0.1:7860/)": "Enter URL (e.g. http://127.0.0.1:7860/)",
"Enter URL (e.g. http://localhost:11434)": "",
@@ -440,9 +451,13 @@
"Example: mail": "",
"Example: ou=users,dc=foo,dc=example": "",
"Example: sAMAccountName or uid or userPrincipalName": "",
"Exceeded the number of seats in your license. Please contact support to increase the number of seats.": "",
"Exclude": "",
"Execute code for analysis": "",
"Expand": "",
"Experimental": "Much Experiment",
"Explain": "",
"Explain this section to me in more detail": "",
"Explore the cosmos": "",
"Export": "",
"Export All Archived Chats": "",
@@ -566,7 +581,7 @@
"Include": "",
"Include `--api-auth` flag when running stable-diffusion-webui": "",
"Include `--api` flag when running stable-diffusion-webui": "Include `--api` flag when running stable-diffusion-webui",
"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive. (Default: 0.1)": "",
"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive.": "",
"Info": "",
"Input commands": "Input commands",
"Install from Github URL": "",
@@ -624,6 +639,7 @@
"Local": "",
"Local Models": "",
"Location access not allowed": "",
"Logit Bias": "",
"Lost": "",
"LTR": "",
"Made by Open WebUI Community": "Made by Open WebUI Community",
@@ -764,6 +780,7 @@
"Permission denied when accessing microphone": "",
"Permission denied when accessing microphone: {{error}}": "Permission denied when accessing microphone: {{error}}",
"Permissions": "",
"Perplexity API Key": "",
"Personalization": "Personalization",
"Pin": "",
"Pinned": "",
@@ -809,7 +826,7 @@
"Reasoning Effort": "",
"Record voice": "Record Bark",
"Redirecting you to Open WebUI Community": "Redirecting you to Open WebUI Community",
"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative. (Default: 40)": "",
"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative.": "",
"Refer to yourself as \"User\" (e.g., \"User is learning Spanish\")": "",
"References from": "",
"Refused when it shouldn't have": "",
@@ -918,11 +935,11 @@
"Set the number of worker threads used for computation. This option controls how many threads are used to process incoming requests concurrently. Increasing this value can improve performance under high concurrency workloads but may also consume more CPU resources.": "",
"Set Voice": "Set Voice so speak",
"Set whisper model": "",
"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 0)": "",
"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 1.1)": "",
"Sets how far back for the model to look back to prevent repetition. (Default: 64, 0 = disabled, -1 = num_ctx)": "",
"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt. (Default: random)": "",
"Sets the size of the context window used to generate the next token. (Default: 2048)": "",
"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
"Sets how far back for the model to look back to prevent repetition.": "",
"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt.": "",
"Sets the size of the context window used to generate the next token.": "",
"Sets the stop sequences to use. When this pattern is encountered, the LLM will stop generating text and return. Multiple stop patterns may be set by specifying multiple separate stop parameters in a modelfile.": "",
"Settings": "Settings much settings",
"Settings saved successfully!": "Settings saved successfully! Very success!",
@@ -964,7 +981,7 @@
"System Prompt": "System Prompt much prompt",
"Tags Generation": "",
"Tags Generation Prompt": "",
"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting. (default: 1)": "",
"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting.": "",
"Talk to model": "",
"Tap to interrupt": "",
"Tasks": "",
@@ -979,7 +996,7 @@
"Thanks for your feedback!": "",
"The Application Account DN you bind with for search": "",
"The base to search for users": "",
"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory. (Default: 512)": "",
"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory.": "",
"The developers behind this plugin are passionate volunteers from the community. If you find this plugin helpful, please consider contributing to its development.": "",
"The evaluation leaderboard is based on the Elo rating system and is updated in real-time.": "",
"The LDAP attribute that maps to the mail that users use to sign in.": "",
@@ -988,14 +1005,14 @@
"The maximum file size in MB. If the file size exceeds this limit, the file will not be uploaded.": "",
"The maximum number of files that can be used at once in chat. If the number of files exceeds this limit, the files will not be uploaded.": "",
"The score should be a value between 0.0 (0%) and 1.0 (100%).": "",
"The temperature of the model. Increasing the temperature will make the model answer more creatively. (Default: 0.8)": "",
"The temperature of the model. Increasing the temperature will make the model answer more creatively.": "",
"Theme": "Theme much theme",
"Thinking...": "",
"This action cannot be undone. Do you wish to continue?": "",
"This ensures that your valuable conversations are securely saved to your backend database. Thank you!": "This ensures that your valuable conversations are securely saved to your backend database. Thank you! Much secure!",
"This is an experimental feature, it may not function as expected and is subject to change at any time.": "",
"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics. (Default: 24)": "",
"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated. (Default: 128)": "",
"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics.": "",
"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated.": "",
"This option will delete all existing files in the collection and replace them with newly uploaded files.": "",
"This response was generated by \"{{model}}\"": "",
"This will delete": "",
@@ -1132,7 +1149,7 @@
"Why?": "",
"Widescreen Mode": "",
"Won": "",
"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text. (Default: 0.9)": "",
"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text.": "",
"Workspace": "",
"Workspace Permissions": "",
"Write": "",
@@ -1142,6 +1159,7 @@
"Write your model template content here": "",
"Yesterday": "",
"You": "",
"You are currently using a trial license. Please contact support to upgrade your license.": "",
"You can only chat with a maximum of {{maxCount}} file(s) at a time.": "",
"You can personalize your interactions with LLMs by adding memories through the 'Manage' button below, making them more helpful and tailored to you.": "",
"You cannot upload an empty file.": "",
@@ -5,6 +5,7 @@
"(e.g. `sh webui.sh --api`)": "(π.χ. `sh webui.sh --api`)",
"(latest)": "(τελευταίο)",
"{{ models }}": "{{ models }}",
"{{COUNT}} hidden lines": "",
"{{COUNT}} Replies": "",
"{{user}}'s Chats": "Συνομιλίες του {{user}}",
"{{webUIName}} Backend Required": "{{webUIName}} Απαιτείται Backend",
@@ -51,6 +52,7 @@
"Admins have access to all tools at all times; users need tools assigned per model in the workspace.": "Οι διαχειριστές έχουν πρόσβαση σε όλα τα εργαλεία ανά πάσα στιγμή· οι χρήστες χρειάζονται εργαλεία ανά μοντέλο στον χώρο εργασίας.",
"Advanced Parameters": "Προηγμένες Παράμετροι",
"Advanced Params": "Προηγμένες Παράμετροι",
"All": "",
"All Documents": "Όλα τα Έγγραφα",
"All models deleted successfully": "Όλα τα μοντέλα διαγράφηκαν με επιτυχία",
"Allow Chat Controls": "",
@@ -64,7 +66,7 @@
"Allow Voice Interruption in Call": "Επιτρέπεται η Παύση Φωνής στην Κλήση",
"Allowed Endpoints": "",
"Already have an account?": "Έχετε ήδη λογαριασμό;",
"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out. (Default: 0.0)": "Εναλλακτικό στο top_p, και στοχεύει στη διασφάλιση μιας ισορροπίας μεταξύ ποιότητας και ποικιλίας. Η παράμετρος p αντιπροσωπεύει την ελάχιστη πιθανότητα για ένα token να θεωρηθεί, σε σχέση με την πιθανότητα του πιο πιθανού token. Για παράδειγμα, με p=0.05 και το πιο πιθανό token να έχει πιθανότητα 0.9, τα logits με τιμή μικρότερη από 0.045 φιλτράρονται. (Προεπιλογή: 0.0)",
"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out.": "",
"Always": "",
"Amazing": "Καταπληκτικό",
"an assistant": "ένας βοηθός",
@@ -93,6 +95,7 @@
"Are you sure?": "Είστε σίγουροι;",
"Arena Models": "Μοντέλα Arena",
"Artifacts": "Τεχνουργήματα",
"Ask": "",
"Ask a question": "Ρωτήστε μια ερώτηση",
"Assistant": "Βοηθός",
"Attach file from knowledge": "",
@@ -127,6 +130,7 @@
"Bing Search V7 Endpoint": "Τελικό Σημείο Bing Search V7",
"Bing Search V7 Subscription Key": "Κλειδί Συνδρομής Bing Search V7",
"Bocha Search API Key": "",
"Boosting or penalizing specific tokens for constrained responses. Bias values will be clamped between -100 and 100 (inclusive). (Default: none)": "",
"Brave Search API Key": "Κλειδί API Brave Search",
"By {{name}}": "Από {{name}}",
"Bypass Embedding and Retrieval": "",
@@ -190,6 +194,7 @@
"Code Interpreter": "",
"Code Interpreter Engine": "",
"Code Interpreter Prompt Template": "",
"Collapse": "",
"Collection": "Συλλογή",
"Color": "Χρώμα",
"ComfyUI": "ComfyUI",
@@ -208,7 +213,7 @@
"Confirm your new password": "",
"Connect to your own OpenAI compatible API endpoints.": "",
"Connections": "Συνδέσεις",
"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort. (Default: medium)": "",
"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort.": "",
"Contact Admin for WebUI Access": "Επικοινωνήστε με τον Διαχειριστή για Πρόσβαση στο WebUI",
"Content": "Περιεχόμενο",
"Content Extraction Engine": "",
@@ -218,9 +223,9 @@
"Continue with Email": "Συνέχεια με Email",
"Continue with LDAP": "Συνέχεια με LDAP",
"Control how message text is split for TTS requests. 'Punctuation' splits into sentences, 'paragraphs' splits into paragraphs, and 'none' keeps the message as a single string.": "Έλεγχος πώς διαχωρίζεται το κείμενο του μηνύματος για αιτήματα TTS. Το 'Στίξη' διαχωρίζει σε προτάσεις, οι 'παραγράφοι' σε παραγράφους, και το 'κανένα' κρατά το μήνυμα ως μία ενιαία συμβολοσειρά.",
"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled. (Default: 1.1)": "",
"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled.": "",
"Controls": "Έλεγχοι",
"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text. (Default: 5.0)": "Διαχειρίζεται την ισορροπία μεταξύ συνεκτικότητας και ποικιλίας της εξόδου. Μια χαμηλότερη τιμή θα έχει ως αποτέλεσμα πιο εστιασμένο και συνεκτικό κείμενο. (Προεπιλογή: 5.0)",
"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text.": "",
"Copied": "Αντιγράφηκε",
"Copied shared chat URL to clipboard!": "Αντιγράφηκε το URL της κοινόχρηστης συνομιλίας στο πρόχειρο!",
"Copied to clipboard": "Αντιγράφηκε στο πρόχειρο",
@@ -245,6 +250,7 @@
"Created At": "Δημιουργήθηκε στις",
"Created by": "Δημιουργήθηκε από",
"CSV Import": "Εισαγωγή CSV",
"Ctrl+Enter to Send": "",
"Current Model": "Τρέχον Μοντέλο",
"Current Password": "Τρέχων Κωδικός",
"Custom": "Προσαρμοσμένο",
@@ -353,12 +359,13 @@
"Embedding model set to \"{{embedding_model}}\"": "Το μοντέλο ενσωμάτωσης έχει οριστεί σε \"{{embedding_model}}\"",
"Enable API Key": "",
"Enable autocomplete generation for chat messages": "",
"Enable Code Execution": "",
"Enable Code Interpreter": "",
"Enable Community Sharing": "Ενεργοποίηση Κοινοτικής Κοινής Χρήσης",
"Enable Memory Locking (mlock) to prevent model data from being swapped out of RAM. This option locks the model's working set of pages into RAM, ensuring that they will not be swapped out to disk. This can help maintain performance by avoiding page faults and ensuring fast data access.": "Ενεργοποίηση Κλείδωσης Μνήμης (mlock) για την αποτροπή της ανταλλαγής δεδομένων του μοντέλου από τη μνήμη RAM. Αυτή η επιλογή κλειδώνει το σύνολο εργασίας των σελίδων του μοντέλου στη μνήμη RAM, διασφαλίζοντας ότι δεν θα ανταλλαχθούν στο δίσκο. Αυτό μπορεί να βοηθήσει στη διατήρηση της απόδοσης αποφεύγοντας σφάλματα σελίδων και διασφαλίζοντας γρήγορη πρόσβαση στα δεδομένα.",
"Enable Memory Mapping (mmap) to load model data. This option allows the system to use disk storage as an extension of RAM by treating disk files as if they were in RAM. This can improve model performance by allowing for faster data access. However, it may not work correctly with all systems and can consume a significant amount of disk space.": "Ενεργοποίηση Χαρτογράφησης Μνήμης (mmap) για φόρτωση δεδομένων μοντέλου. Αυτή η επιλογή επιτρέπει στο σύστημα να χρησιμοποιεί αποθήκευση δίσκου ως επέκταση της μνήμης RAM, αντιμετωπίζοντας αρχεία δίσκου σαν να ήταν στη μνήμη RAM. Αυτό μπορεί να βελτιώσει την απόδοση του μοντέλου επιτρέποντας γρηγορότερη πρόσβαση στα δεδομένα. Ωστόσο, μπορεί να μην λειτουργεί σωστά με όλα τα συστήματα και να καταναλώνει σημαντικό χώρο στο δίσκο.",
"Enable Message Rating": "Ενεργοποίηση Αξιολόγησης Μηνυμάτων",
"Enable Mirostat sampling for controlling perplexity. (Default: 0, 0 = Disabled, 1 = Mirostat, 2 = Mirostat 2.0)": "Ενεργοποίηση δειγματοληψίας Mirostat για έλεγχο της περιπλοκότητας. (Προεπιλογή: 0, 0 = Απενεργοποιημένο, 1 = Mirostat, 2 = Mirostat 2.0)",
"Enable Mirostat sampling for controlling perplexity.": "",
"Enable New Sign Ups": "Ενεργοποίηση Νέων Εγγραφών",
"Enabled": "Ενεργοποιημένο",
"Ensure your CSV file includes 4 columns in this order: Name, Email, Password, Role.": "Βεβαιωθείτε ότι το αρχείο CSV σας περιλαμβάνει 4 στήλες με αυτή τη σειρά: Όνομα, Email, Κωδικός, Ρόλος.",
@@ -375,6 +382,7 @@
"Enter CFG Scale (e.g. 7.0)": "Εισάγετε το CFG Scale (π.χ. 7.0)",
"Enter Chunk Overlap": "Εισάγετε την Επικάλυψη Τμημάτων",
"Enter Chunk Size": "Εισάγετε το Μέγεθος Τμημάτων",
"Enter comma-seperated \"token:bias_value\" pairs (example: 5432:100, 413:-100)": "",
"Enter description": "Εισάγετε την περιγραφή",
"Enter Document Intelligence Endpoint": "",
"Enter Document Intelligence Key": "",
@@ -389,11 +397,13 @@
"Enter Jupyter Token": "",
"Enter Jupyter URL": "",
"Enter Kagi Search API Key": "",
"Enter Key Behavior": "",
"Enter language codes": "Εισάγετε κωδικούς γλώσσας",
"Enter Model ID": "Εισάγετε το ID Μοντέλου",
"Enter model tag (e.g. {{modelTag}})": "Εισάγετε την ετικέτα μοντέλου (π.χ. {{modelTag}})",
"Enter Mojeek Search API Key": "Εισάγετε το Κλειδί API Mojeek Search",
"Enter Number of Steps (e.g. 50)": "Εισάγετε τον Αριθμό Βημάτων (π.χ. 50)",
"Enter Perplexity API Key": "",
"Enter proxy URL (e.g. https://user:password@host:port)": "",
"Enter reasoning effort": "",
"Enter Sampler (e.g. Euler a)": "Εισάγετε τον Sampler (π.χ. Euler a)",
@@ -417,6 +427,7 @@
"Enter the public URL of your WebUI. This URL will be used to generate links in the notifications.": "",
"Enter Tika Server URL": "Εισάγετε το URL διακομιστή Tika",
"Enter timeout in seconds": "",
"Enter to Send": "",
"Enter Top K": "Εισάγετε το Top K",
"Enter URL (e.g. http://127.0.0.1:7860/)": "Εισάγετε το URL (π.χ. http://127.0.0.1:7860/)",
"Enter URL (e.g. http://localhost:11434)": "Εισάγετε το URL (π.χ. http://localhost:11434)",
@@ -440,9 +451,13 @@
"Example: mail": "",
"Example: ou=users,dc=foo,dc=example": "Παράδειγμα: ou=users,dc=foo,dc=example",
"Example: sAMAccountName or uid or userPrincipalName": "Παράδειγμα: sAMAccountName ή uid ή userPrincipalName",
"Exceeded the number of seats in your license. Please contact support to increase the number of seats.": "",
"Exclude": "Εξαίρεση",
"Execute code for analysis": "",
"Expand": "",
"Experimental": "Πειραματικό",
"Explain": "",
"Explain this section to me in more detail": "",
"Explore the cosmos": "Εξερευνήστε το σύμπαν",
"Export": "Εξαγωγή",
"Export All Archived Chats": "Εξαγωγή Όλων των Αρχειοθετημένων Συνομιλιών",
@@ -566,7 +581,7 @@
"Include": "Συμπερίληψη",
"Include `--api-auth` flag when running stable-diffusion-webui": "Συμπεριλάβετε το flag `--api-auth` όταν τρέχετε το stable-diffusion-webui",
"Include `--api` flag when running stable-diffusion-webui": "Συμπεριλάβετε το flag `--api` όταν τρέχετε το stable-diffusion-webui",
"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive. (Default: 0.1)": "Επηρεάζει πόσο γρήγορα ανταποκρίνεται ο αλγόριθμος στην ανατροφοδότηση από το παραγόμενο κείμενο. Μια χαμηλότερη ταχύτητα μάθησης θα έχει ως αποτέλεσμα πιο αργές προσαρμογές, ενώ μια υψηλότερη ταχύτητα μάθησης θα κάνει τον αλγόριθμο πιο ανταποκρινόμενο. (Προεπιλογή: 0.1)",
"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive.": "",
"Info": "Πληροφορίες",
"Input commands": "Εισαγωγή εντολών",
"Install from Github URL": "Εγκατάσταση από URL Github",
@@ -624,6 +639,7 @@
"Local": "Τοπικό",
"Local Models": "Τοπικά Μοντέλα",
"Location access not allowed": "",
"Logit Bias": "",
"Lost": "Έχασε",
"LTR": "LTR",
"Made by Open WebUI Community": "Δημιουργήθηκε από την Κοινότητα Open WebUI",
@@ -764,6 +780,7 @@
"Permission denied when accessing microphone": "Άρνηση δικαιώματος κατά την πρόσβαση σε μικρόφωνο",
"Permission denied when accessing microphone: {{error}}": "Άρνηση δικαιώματος κατά την πρόσβαση σε μικρόφωνο: {{error}}",
"Permissions": "Δικαιώματα",
"Perplexity API Key": "",
"Personalization": "Προσωποποίηση",
"Pin": "Καρφίτσωμα",
"Pinned": "Καρφιτσωμένο",
@@ -809,7 +826,7 @@
"Reasoning Effort": "",
"Record voice": "Εγγραφή φωνής",
"Redirecting you to Open WebUI Community": "Ανακατεύθυνση στην Κοινότητα Open WebUI",
"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative. (Default: 40)": "Μειώνει την πιθανότητα δημιουργίας ανοησιών. Μια υψηλότερη τιμή (π.χ. 100) θα δώσει πιο ποικίλες απαντήσεις, ενώ μια χαμηλότερη τιμή (π.χ. 10) θα δημιουργήσει πιο συντηρητικές απαντήσεις. (Προεπιλογή: 40)",
"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative.": "",
"Refer to yourself as \"User\" (e.g., \"User is learning Spanish\")": "Αναφέρεστε στον εαυτό σας ως \"User\" (π.χ., \"User μαθαίνει Ισπανικά\")",
"References from": "Αναφορές από",
"Refused when it shouldn't have": "Αρνήθηκε όταν δεν έπρεπε",
@@ -918,11 +935,11 @@
"Set the number of worker threads used for computation. This option controls how many threads are used to process incoming requests concurrently. Increasing this value can improve performance under high concurrency workloads but may also consume more CPU resources.": "Ορισμός του αριθμού των νημάτων εργασίας που χρησιμοποιούνται για υπολογισμούς. Αυτή η επιλογή ελέγχει πόσα νήματα χρησιμοποιούνται για την ταυτόχρονη επεξεργασία των εισερχόμενων αιτημάτων. Η αύξηση αυτής της τιμής μπορεί να βελτιώσει την απόδοση υπό φορτία υψηλής ταυτοχρονίας, αλλά μπορεί επίσης να καταναλώσει περισσότερους πόρους CPU.",
"Set Voice": "Ορισμός Φωνής",
"Set whisper model": "Ορισμός μοντέλου whisper",
"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 0)": "",
"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 1.1)": "",
"Sets how far back for the model to look back to prevent repetition. (Default: 64, 0 = disabled, -1 = num_ctx)": "Ορίζει πόσο πίσω θα κοιτάξει το μοντέλο για να αποτρέψει την επανάληψη. (Προεπιλογή: 64, 0 = απενεργοποιημένο, -1 = num_ctx)",
"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt. (Default: random)": "Ορίζει τον σπόρο τυχαίων αριθμών που θα χρησιμοποιηθεί για τη δημιουργία. Ο ορισμός του σε έναν συγκεκριμένο αριθμό θα κάνει το μοντέλο να δημιουργεί το ίδιο κείμενο για την ίδια προτροπή. (Προεπιλογή: τυχαίο)",
"Sets the size of the context window used to generate the next token. (Default: 2048)": "Ορίζει το μέγεθος του παραθύρου πλαισίου που χρησιμοποιείται για τη δημιουργία του επόμενου token. (Προεπιλογή: 2048)",
"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
"Sets how far back for the model to look back to prevent repetition.": "",
"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt.": "",
"Sets the size of the context window used to generate the next token.": "",
"Sets the stop sequences to use. When this pattern is encountered, the LLM will stop generating text and return. Multiple stop patterns may be set by specifying multiple separate stop parameters in a modelfile.": "Ορίζει τις σειρές παύσης που θα χρησιμοποιηθούν. Όταν εντοπιστεί αυτό το μοτίβο, το LLM θα σταματήσει να δημιουργεί κείμενο και θα επιστρέψει. Πολλαπλά μοτίβα παύσης μπορούν να οριστούν καθορίζοντας πολλαπλές ξεχωριστές παραμέτρους παύσης σε ένα αρχείο μοντέλου.",
"Settings": "Ρυθμίσεις",
"Settings saved successfully!": "Οι Ρυθμίσεις αποθηκεύτηκαν με επιτυχία!",
@@ -964,7 +981,7 @@
"System Prompt": "Προτροπή Συστήματος",
"Tags Generation": "",
"Tags Generation Prompt": "Προτροπή Δημιουργίας Ετικετών",
"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting. (default: 1)": "Η δειγματοληψία Tail free χρησιμοποιείται για να μειώσει την επίδραση των λιγότερο πιθανών tokens από την έξοδο. Μια υψηλότερη τιμή (π.χ., 2.0) θα μειώσει την επίδραση περισσότερο, ενώ μια τιμή 1.0 απενεργοποιεί αυτή τη ρύθμιση. (προεπιλογή: 1)",
"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting.": "",
"Talk to model": "",
"Tap to interrupt": "Πατήστε για παύση",
"Tasks": "",
@@ -979,7 +996,7 @@
"Thanks for your feedback!": "Ευχαριστούμε για την ανατροφοδότησή σας!",
"The Application Account DN you bind with for search": "Το DN του Λογαριασμού Εφαρμογής που συνδέετε για αναζήτηση",
"The base to search for users": "Η βάση για αναζήτηση χρηστών",
"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory. (Default: 512)": "Το μέγεθος παρτίδας καθορίζει πόσες αιτήσεις κειμένου επεξεργάζονται μαζί ταυτόχρονα. Ένα μεγαλύτερο μέγεθος παρτίδας μπορεί να αυξήσει την απόδοση και την ταχύτητα του μοντέλου, αλλά απαιτεί επίσης περισσότερη μνήμη. (Προεπιλογή: 512)",
"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory.": "",
"The developers behind this plugin are passionate volunteers from the community. If you find this plugin helpful, please consider contributing to its development.": "Οι προγραμματιστές πίσω από αυτό το plugin είναι παθιασμένοι εθελοντές από την κοινότητα. Αν βρείτε αυτό το plugin χρήσιμο, παρακαλώ σκεφτείτε να συνεισφέρετε στην ανάπτυξή του.",
"The evaluation leaderboard is based on the Elo rating system and is updated in real-time.": "Η κατάταξη αξιολόγησης βασίζεται στο σύστημα βαθμολόγησης Elo και ενημερώνεται σε πραγματικό χρόνο.",
"The LDAP attribute that maps to the mail that users use to sign in.": "",
@@ -988,14 +1005,14 @@
"The maximum file size in MB. If the file size exceeds this limit, the file will not be uploaded.": "Το μέγιστο μέγεθος αρχείου σε MB. Αν το μέγεθος του αρχείου υπερβαίνει αυτό το όριο, το αρχείο δεν θα ανεβεί.",
"The maximum number of files that can be used at once in chat. If the number of files exceeds this limit, the files will not be uploaded.": "Ο μέγιστος αριθμός αρχείων που μπορούν να χρησιμοποιηθούν ταυτόχρονα στη συνομιλία. Αν ο αριθμός των αρχείων υπερβαίνει αυτό το όριο, τα αρχεία δεν θα ανεβούν.",
"The score should be a value between 0.0 (0%) and 1.0 (100%).": "Η βαθμολογία θα πρέπει να είναι μια τιμή μεταξύ 0.0 (0%) και 1.0 (100%).",
"The temperature of the model. Increasing the temperature will make the model answer more creatively. (Default: 0.8)": "Η θερμοκρασία του μοντέλου. Η αύξηση της θερμοκρασίας θα κάνει το μοντέλο να απαντά πιο δημιουργικά. (Προεπιλογή: 0.8)",
"The temperature of the model. Increasing the temperature will make the model answer more creatively.": "",
"Theme": "Θέμα",
"Thinking...": "Σκέφτομαι...",
"This action cannot be undone. Do you wish to continue?": "Αυτή η ενέργεια δεν μπορεί να αναιρεθεί. Θέλετε να συνεχίσετε;",
"This ensures that your valuable conversations are securely saved to your backend database. Thank you!": "Αυτό διασφαλίζει ότι οι πολύτιμες συνομιλίες σας αποθηκεύονται με ασφάλεια στη βάση δεδομένων backend σας. Ευχαριστούμε!",
"This is an experimental feature, it may not function as expected and is subject to change at any time.": "Αυτή είναι μια πειραματική λειτουργία, μπορεί να μην λειτουργεί όπως αναμένεται και υπόκειται σε αλλαγές οποιαδήποτε στιγμή.",
"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics. (Default: 24)": "Αυτή η επιλογή ελέγχει πόσα tokens διατηρούνται κατά την ανανέωση του πλαισίου. Για παράδειγμα, αν οριστεί σε 2, τα τελευταία 2 tokens του πλαισίου συνομιλίας θα διατηρηθούν. Η διατήρηση του πλαισίου μπορεί να βοηθήσει στη διατήρηση της συνέχειας μιας συνομιλίας, αλλά μπορεί να μειώσει την ικανότητα ανταπόκρισης σε νέα θέματα. (Προεπιλογή: 24)",
"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated. (Default: 128)": "Αυτή η επιλογή ορίζει τον μέγιστο αριθμό tokens που μπορεί να δημιουργήσει το μοντέλο στην απάντησή του. Η αύξηση αυτού του ορίου επιτρέπει στο μοντέλο να παρέχει μεγαλύτερες απαντήσεις, αλλά μπορεί επίσης να αυξήσει την πιθανότητα δημιουργίας αχρήσιμου ή άσχετου περιεχομένου. (Προεπιλογή: 128)",
"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics.": "",
"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated.": "",
"This option will delete all existing files in the collection and replace them with newly uploaded files.": "Αυτή η επιλογή θα διαγράψει όλα τα υπάρχοντα αρχεία στη συλλογή και θα τα αντικαταστήσει με νέα ανεβασμένα αρχεία.",
"This response was generated by \"{{model}}\"": "Αυτή η απάντηση δημιουργήθηκε από \"{{model}}\"",
"This will delete": "Αυτό θα διαγράψει",
@@ -1132,7 +1149,7 @@
"Why?": "Γιατί;",
"Widescreen Mode": "Λειτουργία Ευρείας Οθόνης",
"Won": "Κέρδισε",
"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text. (Default: 0.9)": "Συνεργάζεται μαζί με top-k. Μια υψηλότερη τιμή (π.χ., 0.95) θα οδηγήσει σε πιο ποικίλο κείμενο, ενώ μια χαμηλότερη τιμή (π.χ., 0.5) θα δημιουργήσει πιο εστιασμένο και συντηρητικό κείμενο. (Προεπιλογή: 0.9)",
"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text.": "",
"Workspace": "Χώρος Εργασίας",
"Workspace Permissions": "Δικαιώματα Χώρου Εργασίας",
"Write": "",
@@ -1142,6 +1159,7 @@
"Write your model template content here": "Γράψτε το περιεχόμενο του προτύπου μοντέλου σας εδώ",
"Yesterday": "Εχθές",
"You": "Εσείς",
"You are currently using a trial license. Please contact support to upgrade your license.": "",
"You can only chat with a maximum of {{maxCount}} file(s) at a time.": "Μπορείτε να συνομιλήσετε μόνο με μέγιστο αριθμό {{maxCount}} αρχείου(-ων) ταυτόχρονα.",
"You can personalize your interactions with LLMs by adding memories through the 'Manage' button below, making them more helpful and tailored to you.": "Μπορείτε να προσωποποιήσετε τις αλληλεπιδράσεις σας με τα LLMs προσθέτοντας αναμνήσεις μέσω του κουμπιού 'Διαχείριση' παρακάτω, κάνοντάς τα πιο χρήσιμα και προσαρμοσμένα σε εσάς.",
"You cannot upload an empty file.": "Δεν μπορείτε να ανεβάσετε ένα κενό αρχείο.",
@@ -5,6 +5,7 @@
"(e.g. `sh webui.sh --api`)": "",
"(latest)": "",
"{{ models }}": "",
"{{COUNT}} hidden lines": "",
"{{COUNT}} Replies": "",
"{{user}}'s Chats": "",
"{{webUIName}} Backend Required": "",
@@ -51,6 +52,7 @@
"Admins have access to all tools at all times; users need tools assigned per model in the workspace.": "",
"Advanced Parameters": "",
"Advanced Params": "",
"All": "",
"All Documents": "",
"All models deleted successfully": "",
"Allow Chat Controls": "",
@@ -64,7 +66,7 @@
"Allow Voice Interruption in Call": "",
"Allowed Endpoints": "",
"Already have an account?": "",
"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out. (Default: 0.0)": "",
"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out.": "",
"Always": "",
"Amazing": "",
"an assistant": "",
@@ -93,6 +95,7 @@
"Are you sure?": "",
"Arena Models": "",
"Artifacts": "",
"Ask": "",
"Ask a question": "",
"Assistant": "",
"Attach file from knowledge": "",
@@ -127,6 +130,7 @@
"Bing Search V7 Endpoint": "",
"Bing Search V7 Subscription Key": "",
"Bocha Search API Key": "",
"Boosting or penalizing specific tokens for constrained responses. Bias values will be clamped between -100 and 100 (inclusive). (Default: none)": "",
"Brave Search API Key": "",
"By {{name}}": "",
"Bypass Embedding and Retrieval": "",
@@ -190,6 +194,7 @@
"Code Interpreter": "",
"Code Interpreter Engine": "",
"Code Interpreter Prompt Template": "",
"Collapse": "",
"Collection": "",
"Color": "",
"ComfyUI": "",
@@ -208,7 +213,7 @@
"Confirm your new password": "",
"Connect to your own OpenAI compatible API endpoints.": "",
"Connections": "",
"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort. (Default: medium)": "",
"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort.": "",
"Contact Admin for WebUI Access": "",
"Content": "",
"Content Extraction Engine": "",
@@ -218,9 +223,9 @@
"Continue with Email": "",
"Continue with LDAP": "",
"Control how message text is split for TTS requests. 'Punctuation' splits into sentences, 'paragraphs' splits into paragraphs, and 'none' keeps the message as a single string.": "",
"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled. (Default: 1.1)": "",
"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled.": "",
"Controls": "",
"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text. (Default: 5.0)": "",
"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text.": "",
"Copied": "",
"Copied shared chat URL to clipboard!": "",
"Copied to clipboard": "",
@@ -245,6 +250,7 @@
"Created At": "",
"Created by": "",
"CSV Import": "",
"Ctrl+Enter to Send": "",
"Current Model": "",
"Current Password": "",
"Custom": "",
@@ -353,12 +359,13 @@
"Embedding model set to \"{{embedding_model}}\"": "",
"Enable API Key": "",
"Enable autocomplete generation for chat messages": "",
"Enable Code Execution": "",
"Enable Code Interpreter": "",
"Enable Community Sharing": "",
"Enable Memory Locking (mlock) to prevent model data from being swapped out of RAM. This option locks the model's working set of pages into RAM, ensuring that they will not be swapped out to disk. This can help maintain performance by avoiding page faults and ensuring fast data access.": "",
"Enable Memory Mapping (mmap) to load model data. This option allows the system to use disk storage as an extension of RAM by treating disk files as if they were in RAM. This can improve model performance by allowing for faster data access. However, it may not work correctly with all systems and can consume a significant amount of disk space.": "",
"Enable Message Rating": "",
"Enable Mirostat sampling for controlling perplexity. (Default: 0, 0 = Disabled, 1 = Mirostat, 2 = Mirostat 2.0)": "",
"Enable Mirostat sampling for controlling perplexity.": "",
"Enable New Sign Ups": "",
"Enabled": "",
"Ensure your CSV file includes 4 columns in this order: Name, Email, Password, Role.": "",
@ -375,6 +382,7 @@
"Enter CFG Scale (e.g. 7.0)": "",
"Enter Chunk Overlap": "",
"Enter Chunk Size": "",
"Enter comma-seperated \"token:bias_value\" pairs (example: 5432:100, 413:-100)": "",
"Enter description": "",
"Enter Document Intelligence Endpoint": "",
"Enter Document Intelligence Key": "",
@ -389,11 +397,13 @@
"Enter Jupyter Token": "",
"Enter Jupyter URL": "",
"Enter Kagi Search API Key": "",
"Enter Key Behavior": "",
"Enter language codes": "",
"Enter Model ID": "",
"Enter model tag (e.g. {{modelTag}})": "",
"Enter Mojeek Search API Key": "",
"Enter Number of Steps (e.g. 50)": "",
"Enter Perplexity API Key": "",
"Enter proxy URL (e.g. https://user:password@host:port)": "",
"Enter reasoning effort": "",
"Enter Sampler (e.g. Euler a)": "",
@ -417,6 +427,7 @@
"Enter the public URL of your WebUI. This URL will be used to generate links in the notifications.": "",
"Enter Tika Server URL": "",
"Enter timeout in seconds": "",
"Enter to Send": "",
"Enter Top K": "",
"Enter URL (e.g. http://127.0.0.1:7860/)": "",
"Enter URL (e.g. http://localhost:11434)": "",
@ -440,9 +451,13 @@
"Example: mail": "",
"Example: ou=users,dc=foo,dc=example": "",
"Example: sAMAccountName or uid or userPrincipalName": "",
"Exceeded the number of seats in your license. Please contact support to increase the number of seats.": "",
"Exclude": "",
"Execute code for analysis": "",
"Expand": "",
"Experimental": "",
"Explain": "",
"Explain this section to me in more detail": "",
"Explore the cosmos": "",
"Export": "",
"Export All Archived Chats": "",
@ -566,7 +581,7 @@
"Include": "",
"Include `--api-auth` flag when running stable-diffusion-webui": "",
"Include `--api` flag when running stable-diffusion-webui": "",
"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive. (Default: 0.1)": "",
"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive.": "",
"Info": "",
"Input commands": "",
"Install from Github URL": "",
@ -624,6 +639,7 @@
"Local": "",
"Local Models": "",
"Location access not allowed": "",
"Logit Bias": "",
"Lost": "",
"LTR": "",
"Made by Open WebUI Community": "",
@ -764,6 +780,7 @@
"Permission denied when accessing microphone": "",
"Permission denied when accessing microphone: {{error}}": "",
"Permissions": "",
"Perplexity API Key": "",
"Personalization": "",
"Pin": "",
"Pinned": "",
@ -809,7 +826,7 @@
"Reasoning Effort": "",
"Record voice": "",
"Redirecting you to Open WebUI Community": "",
"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative. (Default: 40)": "",
"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative.": "",
"Refer to yourself as \"User\" (e.g., \"User is learning Spanish\")": "",
"References from": "",
"Refused when it shouldn't have": "",
@ -918,11 +935,11 @@
"Set the number of worker threads used for computation. This option controls how many threads are used to process incoming requests concurrently. Increasing this value can improve performance under high concurrency workloads but may also consume more CPU resources.": "",
"Set Voice": "",
"Set whisper model": "",
"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 0)": "",
"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 1.1)": "",
"Sets how far back for the model to look back to prevent repetition. (Default: 64, 0 = disabled, -1 = num_ctx)": "",
"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt. (Default: random)": "",
"Sets the size of the context window used to generate the next token. (Default: 2048)": "",
"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
"Sets how far back for the model to look back to prevent repetition.": "",
"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt.": "",
"Sets the size of the context window used to generate the next token.": "",
"Sets the stop sequences to use. When this pattern is encountered, the LLM will stop generating text and return. Multiple stop patterns may be set by specifying multiple separate stop parameters in a modelfile.": "",
"Settings": "",
"Settings saved successfully!": "",
@ -964,7 +981,7 @@
"System Prompt": "",
"Tags Generation": "",
"Tags Generation Prompt": "",
"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting. (default: 1)": "",
"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting.": "",
"Talk to model": "",
"Tap to interrupt": "",
"Tasks": "",
@ -979,7 +996,7 @@
"Thanks for your feedback!": "",
"The Application Account DN you bind with for search": "",
"The base to search for users": "",
"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory. (Default: 512)": "",
"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory.": "",
"The developers behind this plugin are passionate volunteers from the community. If you find this plugin helpful, please consider contributing to its development.": "",
"The evaluation leaderboard is based on the Elo rating system and is updated in real-time.": "",
"The LDAP attribute that maps to the mail that users use to sign in.": "",
@ -988,14 +1005,14 @@
"The maximum file size in MB. If the file size exceeds this limit, the file will not be uploaded.": "",
"The maximum number of files that can be used at once in chat. If the number of files exceeds this limit, the files will not be uploaded.": "",
"The score should be a value between 0.0 (0%) and 1.0 (100%).": "",
"The temperature of the model. Increasing the temperature will make the model answer more creatively. (Default: 0.8)": "",
"The temperature of the model. Increasing the temperature will make the model answer more creatively.": "",
"Theme": "",
"Thinking...": "",
"This action cannot be undone. Do you wish to continue?": "",
"This ensures that your valuable conversations are securely saved to your backend database. Thank you!": "",
"This is an experimental feature, it may not function as expected and is subject to change at any time.": "",
"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics. (Default: 24)": "",
"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated. (Default: 128)": "",
"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics.": "",
"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated.": "",
"This option will delete all existing files in the collection and replace them with newly uploaded files.": "",
"This response was generated by \"{{model}}\"": "",
"This will delete": "",
@ -1132,7 +1149,7 @@
"Why?": "",
"Widescreen Mode": "",
"Won": "",
"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text. (Default: 0.9)": "",
"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text.": "",
"Workspace": "",
"Workspace Permissions": "",
"Write": "",
@ -1142,6 +1159,7 @@
"Write your model template content here": "",
"Yesterday": "",
"You": "",
"You are currently using a trial license. Please contact support to upgrade your license.": "",
"You can only chat with a maximum of {{maxCount}} file(s) at a time.": "",
"You can personalize your interactions with LLMs by adding memories through the 'Manage' button below, making them more helpful and tailored to you.": "",
"You cannot upload an empty file.": "",
@ -5,6 +5,7 @@
"(e.g. `sh webui.sh --api`)": "",
"(latest)": "",
"{{ models }}": "",
"{{COUNT}} hidden lines": "",
"{{COUNT}} Replies": "",
"{{user}}'s Chats": "",
"{{webUIName}} Backend Required": "",
@ -51,6 +52,7 @@
"Admins have access to all tools at all times; users need tools assigned per model in the workspace.": "",
"Advanced Parameters": "",
"Advanced Params": "",
"All": "",
"All Documents": "",
"All models deleted successfully": "",
"Allow Chat Controls": "",
@ -64,7 +66,7 @@
"Allow Voice Interruption in Call": "",
"Allowed Endpoints": "",
"Already have an account?": "",
"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out. (Default: 0.0)": "",
"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out.": "",
"Always": "",
"Amazing": "",
"an assistant": "",
@ -93,6 +95,7 @@
"Are you sure?": "",
"Arena Models": "",
"Artifacts": "",
"Ask": "",
"Ask a question": "",
"Assistant": "",
"Attach file from knowledge": "",
@ -127,6 +130,7 @@
"Bing Search V7 Endpoint": "",
"Bing Search V7 Subscription Key": "",
"Bocha Search API Key": "",
"Boosting or penalizing specific tokens for constrained responses. Bias values will be clamped between -100 and 100 (inclusive). (Default: none)": "",
"Brave Search API Key": "",
"By {{name}}": "",
"Bypass Embedding and Retrieval": "",
@ -190,6 +194,7 @@
"Code Interpreter": "",
"Code Interpreter Engine": "",
"Code Interpreter Prompt Template": "",
"Collapse": "",
"Collection": "",
"Color": "",
"ComfyUI": "",
@ -208,7 +213,7 @@
"Confirm your new password": "",
"Connect to your own OpenAI compatible API endpoints.": "",
"Connections": "",
"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort. (Default: medium)": "",
"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort.": "",
"Contact Admin for WebUI Access": "",
"Content": "",
"Content Extraction Engine": "",
@ -218,9 +223,9 @@
"Continue with Email": "",
"Continue with LDAP": "",
"Control how message text is split for TTS requests. 'Punctuation' splits into sentences, 'paragraphs' splits into paragraphs, and 'none' keeps the message as a single string.": "",
"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled. (Default: 1.1)": "",
"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled.": "",
"Controls": "",
"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text. (Default: 5.0)": "",
"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text.": "",
"Copied": "",
"Copied shared chat URL to clipboard!": "",
"Copied to clipboard": "",
@ -245,6 +250,7 @@
"Created At": "",
"Created by": "",
"CSV Import": "",
"Ctrl+Enter to Send": "",
"Current Model": "",
"Current Password": "",
"Custom": "",
@ -353,12 +359,13 @@
"Embedding model set to \"{{embedding_model}}\"": "",
"Enable API Key": "",
"Enable autocomplete generation for chat messages": "",
"Enable Code Execution": "",
"Enable Code Interpreter": "",
"Enable Community Sharing": "",
"Enable Memory Locking (mlock) to prevent model data from being swapped out of RAM. This option locks the model's working set of pages into RAM, ensuring that they will not be swapped out to disk. This can help maintain performance by avoiding page faults and ensuring fast data access.": "",
"Enable Memory Mapping (mmap) to load model data. This option allows the system to use disk storage as an extension of RAM by treating disk files as if they were in RAM. This can improve model performance by allowing for faster data access. However, it may not work correctly with all systems and can consume a significant amount of disk space.": "",
"Enable Message Rating": "",
"Enable Mirostat sampling for controlling perplexity. (Default: 0, 0 = Disabled, 1 = Mirostat, 2 = Mirostat 2.0)": "",
"Enable Mirostat sampling for controlling perplexity.": "",
"Enable New Sign Ups": "",
"Enabled": "",
"Ensure your CSV file includes 4 columns in this order: Name, Email, Password, Role.": "",
@ -375,6 +382,7 @@
"Enter CFG Scale (e.g. 7.0)": "",
"Enter Chunk Overlap": "",
"Enter Chunk Size": "",
"Enter comma-seperated \"token:bias_value\" pairs (example: 5432:100, 413:-100)": "",
"Enter description": "",
"Enter Document Intelligence Endpoint": "",
"Enter Document Intelligence Key": "",
@ -389,11 +397,13 @@
"Enter Jupyter Token": "",
"Enter Jupyter URL": "",
"Enter Kagi Search API Key": "",
"Enter Key Behavior": "",
"Enter language codes": "",
"Enter Model ID": "",
"Enter model tag (e.g. {{modelTag}})": "",
"Enter Mojeek Search API Key": "",
"Enter Number of Steps (e.g. 50)": "",
"Enter Perplexity API Key": "",
"Enter proxy URL (e.g. https://user:password@host:port)": "",
"Enter reasoning effort": "",
"Enter Sampler (e.g. Euler a)": "",
@ -417,6 +427,7 @@
"Enter the public URL of your WebUI. This URL will be used to generate links in the notifications.": "",
"Enter Tika Server URL": "",
"Enter timeout in seconds": "",
"Enter to Send": "",
"Enter Top K": "",
"Enter URL (e.g. http://127.0.0.1:7860/)": "",
"Enter URL (e.g. http://localhost:11434)": "",
@ -440,9 +451,13 @@
"Example: mail": "",
"Example: ou=users,dc=foo,dc=example": "",
"Example: sAMAccountName or uid or userPrincipalName": "",
"Exceeded the number of seats in your license. Please contact support to increase the number of seats.": "",
"Exclude": "",
"Execute code for analysis": "",
"Expand": "",
"Experimental": "",
"Explain": "",
"Explain this section to me in more detail": "",
"Explore the cosmos": "",
"Export": "",
"Export All Archived Chats": "",
@ -566,7 +581,7 @@
"Include": "",
"Include `--api-auth` flag when running stable-diffusion-webui": "",
"Include `--api` flag when running stable-diffusion-webui": "",
"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive. (Default: 0.1)": "",
"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive.": "",
"Info": "",
"Input commands": "",
"Install from Github URL": "",
@ -624,6 +639,7 @@
"Local": "",
"Local Models": "",
"Location access not allowed": "",
"Logit Bias": "",
"Lost": "",
"LTR": "",
"Made by Open WebUI Community": "",
@ -764,6 +780,7 @@
"Permission denied when accessing microphone": "",
"Permission denied when accessing microphone: {{error}}": "",
"Permissions": "",
"Perplexity API Key": "",
"Personalization": "",
"Pin": "",
"Pinned": "",
@ -809,7 +826,7 @@
"Reasoning Effort": "",
"Record voice": "",
"Redirecting you to Open WebUI Community": "",
"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative. (Default: 40)": "",
"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative.": "",
"Refer to yourself as \"User\" (e.g., \"User is learning Spanish\")": "",
"References from": "",
"Refused when it shouldn't have": "",
@ -918,11 +935,11 @@
"Set the number of worker threads used for computation. This option controls how many threads are used to process incoming requests concurrently. Increasing this value can improve performance under high concurrency workloads but may also consume more CPU resources.": "",
"Set Voice": "",
"Set whisper model": "",
"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 0)": "",
"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 1.1)": "",
"Sets how far back for the model to look back to prevent repetition. (Default: 64, 0 = disabled, -1 = num_ctx)": "",
"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt. (Default: random)": "",
"Sets the size of the context window used to generate the next token. (Default: 2048)": "",
"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
"Sets how far back for the model to look back to prevent repetition.": "",
"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt.": "",
"Sets the size of the context window used to generate the next token.": "",
"Sets the stop sequences to use. When this pattern is encountered, the LLM will stop generating text and return. Multiple stop patterns may be set by specifying multiple separate stop parameters in a modelfile.": "",
"Settings": "",
"Settings saved successfully!": "",
@ -964,7 +981,7 @@
"System Prompt": "",
"Tags Generation": "",
"Tags Generation Prompt": "",
"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting. (default: 1)": "",
"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting.": "",
"Talk to model": "",
"Tap to interrupt": "",
"Tasks": "",
@ -979,7 +996,7 @@
"Thanks for your feedback!": "",
"The Application Account DN you bind with for search": "",
"The base to search for users": "",
"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory. (Default: 512)": "",
"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory.": "",
"The developers behind this plugin are passionate volunteers from the community. If you find this plugin helpful, please consider contributing to its development.": "",
"The evaluation leaderboard is based on the Elo rating system and is updated in real-time.": "",
"The LDAP attribute that maps to the mail that users use to sign in.": "",
@ -988,14 +1005,14 @@
"The maximum file size in MB. If the file size exceeds this limit, the file will not be uploaded.": "",
"The maximum number of files that can be used at once in chat. If the number of files exceeds this limit, the files will not be uploaded.": "",
"The score should be a value between 0.0 (0%) and 1.0 (100%).": "",
"The temperature of the model. Increasing the temperature will make the model answer more creatively. (Default: 0.8)": "",
"The temperature of the model. Increasing the temperature will make the model answer more creatively.": "",
"Theme": "",
"Thinking...": "",
"This action cannot be undone. Do you wish to continue?": "",
"This ensures that your valuable conversations are securely saved to your backend database. Thank you!": "",
"This is an experimental feature, it may not function as expected and is subject to change at any time.": "",
"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics. (Default: 24)": "",
"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated. (Default: 128)": "",
"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics.": "",
"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated.": "",
"This option will delete all existing files in the collection and replace them with newly uploaded files.": "",
"This response was generated by \"{{model}}\"": "",
"This will delete": "",
@ -1132,7 +1149,7 @@
"Why?": "",
"Widescreen Mode": "",
"Won": "",
"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text. (Default: 0.9)": "",
"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text.": "",
"Workspace": "",
"Workspace Permissions": "",
"Write": "",
@ -1142,6 +1159,7 @@
"Write your model template content here": "",
"Yesterday": "",
"You": "",
"You are currently using a trial license. Please contact support to upgrade your license.": "",
"You can only chat with a maximum of {{maxCount}} file(s) at a time.": "",
"You can personalize your interactions with LLMs by adding memories through the 'Manage' button below, making them more helpful and tailored to you.": "",
"You cannot upload an empty file.": "",
@ -5,6 +5,7 @@
"(e.g. `sh webui.sh --api`)": "(p.ej. `sh webui.sh --api`)",
"(latest)": "(latest)",
"{{ models }}": "{{ models }}",
"{{COUNT}} hidden lines": "",
"{{COUNT}} Replies": "{{COUNT}} Respuestas",
"{{user}}'s Chats": "Chats de {{user}}",
"{{webUIName}} Backend Required": "{{webUIName}} Servidor Requerido",
@ -51,6 +52,7 @@
"Admins have access to all tools at all times; users need tools assigned per model in the workspace.": "Admins tienen acceso a todas las herramientas en todo momento; los usuarios necesitan herramientas asignadas por modelo en el espacio de trabajo.",
"Advanced Parameters": "Parámetros Avanzados",
"Advanced Params": "Parámetros avanzados",
"All": "",
"All Documents": "Todos los Documentos",
"All models deleted successfully": "Todos los modelos han sido borrados",
"Allow Chat Controls": "Permitir Control de Chats",
@ -64,7 +66,7 @@
"Allow Voice Interruption in Call": "Permitir interrupción de voz en llamada",
"Allowed Endpoints": "Endpoints permitidos",
"Already have an account?": "¿Ya tienes una cuenta?",
"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out. (Default: 0.0)": "Alternativa a top_p, y busca asegurar un equilibrio entre calidad y variedad. El parámetro p representa la probabilidad mínima para que un token sea considerado, en relación con la probabilidad del token más probable. Por ejemplo, con p=0.05 y el token más probable con una probabilidad de 0.9, los logits con un valor menor a 0.045 son filtrados. (Predeterminado: 0.0)",
"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out.": "",
"Always": "Siempre",
"Amazing": "Sorprendente",
"an assistant": "un asistente",
@ -93,6 +95,7 @@
"Are you sure?": "¿Está seguro?",
"Arena Models": "Arena de Modelos",
"Artifacts": "Artefactos",
"Ask": "",
"Ask a question": "Haz una pregunta",
"Assistant": "Asistente",
"Attach file from knowledge": "",
@ -127,6 +130,7 @@
"Bing Search V7 Endpoint": "Endpoint de Bing Search V7",
"Bing Search V7 Subscription Key": "Clave de suscripción de Bing Search V7",
"Bocha Search API Key": "",
"Boosting or penalizing specific tokens for constrained responses. Bias values will be clamped between -100 and 100 (inclusive). (Default: none)": "",
"Brave Search API Key": "Clave de API de Brave Search",
"By {{name}}": "Por {{name}}",
"Bypass Embedding and Retrieval": "",
@ -190,6 +194,7 @@
"Code Interpreter": "Interprete de Código",
"Code Interpreter Engine": "",
"Code Interpreter Prompt Template": "",
"Collapse": "",
"Collection": "Colección",
"Color": "Color",
"ComfyUI": "ComfyUI",
@ -208,7 +213,7 @@
"Confirm your new password": "Confirmar tu nueva contraseña",
"Connect to your own OpenAI compatible API endpoints.": "",
"Connections": "Conexiones",
"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort. (Default: medium)": " Restringe el esfuerzo en la razonamiento para los modelos de razonamiento. Solo aplicable a los modelos de razonamiento de proveedores específicos que admiten el esfuerzo de razonamiento. (Por defecto: medio)",
"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort.": "",
"Contact Admin for WebUI Access": "Contacta el administrador para obtener acceso al WebUI",
"Content": "Contenido",
"Content Extraction Engine": "",
@ -218,9 +223,9 @@
"Continue with Email": "Continuar con email",
"Continue with LDAP": "Continuar con LDAP",
"Control how message text is split for TTS requests. 'Punctuation' splits into sentences, 'paragraphs' splits into paragraphs, and 'none' keeps the message as a single string.": "Controlar como el texto del mensaje se divide para las solicitudes de TTS. 'Punctuation' divide en oraciones, 'paragraphs' divide en párrafos y 'none' mantiene el mensaje como una sola cadena.",
"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled. (Default: 1.1)": "",
"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled.": "",
"Controls": "Controles",
"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text. (Default: 5.0)": " Controlar el equilibrio entre la coherencia y la diversidad de la salida. Un valor más bajo resultará en un texto más enfocado y coherente. (Por defecto: 5.0)",
"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text.": "",
"Copied": "Copiado",
"Copied shared chat URL to clipboard!": "¡URL de chat compartido copiado al portapapeles!",
"Copied to clipboard": "Copiado al portapapeles",
@ -245,6 +250,7 @@
"Created At": "Creado en",
"Created by": "Creado por",
"CSV Import": "Importa un CSV",
"Ctrl+Enter to Send": "",
"Current Model": "Modelo Actual",
"Current Password": "Contraseña Actual",
"Custom": "Personalizado",
@ -353,12 +359,13 @@
"Embedding model set to \"{{embedding_model}}\"": "Modelo de Embedding configurado a \"{{embedding_model}}\"",
"Enable API Key": "Habilitar clave de API",
"Enable autocomplete generation for chat messages": "Habilitar generación de autocompletado para mensajes de chat",
"Enable Code Execution": "",
"Enable Code Interpreter": "",
"Enable Community Sharing": "Habilitar el uso compartido de la comunidad",
"Enable Memory Locking (mlock) to prevent model data from being swapped out of RAM. This option locks the model's working set of pages into RAM, ensuring that they will not be swapped out to disk. This can help maintain performance by avoiding page faults and ensuring fast data access.": "Habilitar bloqueo de memoria (mlock) para evitar que los datos del modelo se intercambien fuera de la RAM. Esta opción bloquea el conjunto de páginas de trabajo del modelo en la RAM, asegurando que no se intercambiarán fuera del disco. Esto puede ayudar a mantener el rendimiento evitando fallos de página y asegurando un acceso rápido a los datos.",
"Enable Memory Mapping (mmap) to load model data. This option allows the system to use disk storage as an extension of RAM by treating disk files as if they were in RAM. This can improve model performance by allowing for faster data access. However, it may not work correctly with all systems and can consume a significant amount of disk space.": "Habilitar asignación de memoria (mmap) para cargar datos del modelo. Esta opción permite al sistema usar el almacenamiento en disco como una extensión de la RAM al tratar los archivos en disco como si estuvieran en la RAM. Esto puede mejorar el rendimiento del modelo permitiendo un acceso más rápido a los datos. Sin embargo, puede no funcionar correctamente con todos los sistemas y puede consumir una cantidad significativa de espacio en disco.",
"Enable Message Rating": "Habilitar la calificación de los mensajes",
"Enable Mirostat sampling for controlling perplexity. (Default: 0, 0 = Disabled, 1 = Mirostat, 2 = Mirostat 2.0)": "Habilitar muestreo Mirostat para controlar la perplejidad. (Predeterminado: 0, 0 = Deshabilitado, 1 = Mirostat, 2 = Mirostat 2.0)",
"Enable Mirostat sampling for controlling perplexity.": "",
"Enable New Sign Ups": "Habilitar Nuevos Registros",
"Enabled": "Activado",
"Ensure your CSV file includes 4 columns in this order: Name, Email, Password, Role.": "Asegúrese de que su archivo CSV incluya 4 columnas en este orden: Nombre, Correo Electrónico, Contraseña, Rol.",
@@ -375,6 +382,7 @@
"Enter CFG Scale (e.g. 7.0)": "Ingresa la escala de CFG (p.ej., 7.0)",
"Enter Chunk Overlap": "Ingresar superposición de fragmentos",
"Enter Chunk Size": "Ingrese el tamaño del fragmento",
"Enter comma-seperated \"token:bias_value\" pairs (example: 5432:100, 413:-100)": "",
"Enter description": "Ingrese la descripción",
"Enter Document Intelligence Endpoint": "",
"Enter Document Intelligence Key": "",
@@ -389,11 +397,13 @@
"Enter Jupyter Token": "",
"Enter Jupyter URL": "",
"Enter Kagi Search API Key": "Ingrese la clave API de Kagi Search",
"Enter Key Behavior": "",
"Enter language codes": "Ingrese códigos de idioma",
"Enter Model ID": "Ingresa el ID del modelo",
"Enter model tag (e.g. {{modelTag}})": "Ingrese la etiqueta del modelo (p.ej. {{modelTag}})",
"Enter Mojeek Search API Key": "Ingrese la clave API de Mojeek Search",
"Enter Number of Steps (e.g. 50)": "Ingrese el número de pasos (p.ej., 50)",
"Enter Perplexity API Key": "",
"Enter proxy URL (e.g. https://user:password@host:port)": "Ingrese la URL del proxy (p.ej. https://user:password@host:port)",
"Enter reasoning effort": "Ingrese el esfuerzo de razonamiento",
"Enter Sampler (e.g. Euler a)": "Ingrese el sampler (p.ej., Euler a)",
@@ -417,6 +427,7 @@
"Enter the public URL of your WebUI. This URL will be used to generate links in the notifications.": "Ingrese la URL pública de su WebUI. Esta URL se utilizará para generar enlaces en las notificaciones.",
"Enter Tika Server URL": "Ingrese la URL del servidor Tika",
"Enter timeout in seconds": "",
"Enter to Send": "",
"Enter Top K": "Ingrese el Top K",
"Enter URL (e.g. http://127.0.0.1:7860/)": "Ingrese la URL (p.ej., http://127.0.0.1:7860/)",
"Enter URL (e.g. http://localhost:11434)": "Ingrese la URL (p.ej., http://localhost:11434)",
@@ -440,9 +451,13 @@
"Example: mail": "Ejemplo: correo",
"Example: ou=users,dc=foo,dc=example": "Ejemplo: ou=usuarios,dc=foo,dc=ejemplo",
"Example: sAMAccountName or uid or userPrincipalName": "Ejemplo: sAMAccountName o uid o userPrincipalName",
"Exceeded the number of seats in your license. Please contact support to increase the number of seats.": "",
"Exclude": "Excluir",
"Execute code for analysis": "Ejecutar código para análisis",
"Expand": "",
"Experimental": "Experimental",
"Explain": "",
"Explain this section to me in more detail": "",
"Explore the cosmos": "Explora el cosmos",
"Export": "Exportar",
"Export All Archived Chats": "Exportar todos los chats archivados",
@@ -566,7 +581,7 @@
"Include": "Incluir",
"Include `--api-auth` flag when running stable-diffusion-webui": "Incluir el indicador `--api-auth` al ejecutar stable-diffusion-webui",
"Include `--api` flag when running stable-diffusion-webui": "Incluir el indicador `--api` al ejecutar stable-diffusion-webui",
"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive. (Default: 0.1)": "Influye en la rapidez con la que el algoritmo responde a la retroalimentación del texto generado. Una tasa de aprendizaje más baja resultará en ajustes más lentos, mientras que una tasa de aprendizaje más alta hará que el algoritmo sea más receptivo. (Predeterminado: 0.1)",
"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive.": "",
"Info": "Información",
"Input commands": "Ingresar comandos",
"Install from Github URL": "Instalar desde la URL de Github",
@@ -624,6 +639,7 @@
"Local": "Local",
"Local Models": "Modelos locales",
"Location access not allowed": "",
"Logit Bias": "",
"Lost": "Perdido",
"LTR": "LTR",
"Made by Open WebUI Community": "Hecho por la comunidad de OpenWebUI",
@@ -764,6 +780,7 @@
"Permission denied when accessing microphone": "Permiso denegado al acceder al micrófono",
"Permission denied when accessing microphone: {{error}}": "Permiso denegado al acceder al micrófono: {{error}}",
"Permissions": "Permisos",
"Perplexity API Key": "",
"Personalization": "Personalización",
"Pin": "Fijar",
"Pinned": "Fijado",
@@ -809,7 +826,7 @@
"Reasoning Effort": "Esfuerzo de razonamiento",
"Record voice": "Grabar voz",
"Redirecting you to Open WebUI Community": "Redireccionándote a la comunidad OpenWebUI",
"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative. (Default: 40)": "Reduce la probabilidad de generar tonterías. Un valor más alto (p.ej. 100) dará respuestas más diversas, mientras que un valor más bajo (p.ej. 10) será más conservador. (Predeterminado: 40)",
"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative.": "",
"Refer to yourself as \"User\" (e.g., \"User is learning Spanish\")": "Referirse a usted mismo como \"Usuario\" (por ejemplo, \"El usuario está aprendiendo Español\")",
"References from": "Referencias de",
"Refused when it shouldn't have": "Rechazado cuando no debería",
@@ -918,11 +935,11 @@
"Set the number of worker threads used for computation. This option controls how many threads are used to process incoming requests concurrently. Increasing this value can improve performance under high concurrency workloads but may also consume more CPU resources.": "Establece el número de hilos de trabajo utilizados para el cálculo. Esta opción controla cuántos hilos se utilizan para procesar las solicitudes entrantes simultáneamente. Aumentar este valor puede mejorar el rendimiento bajo cargas de trabajo de alta concurrencia, pero también puede consumir más recursos de CPU.",
"Set Voice": "Establecer la voz",
"Set whisper model": "Establecer modelo de whisper",
"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 0)": "",
"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 1.1)": "",
"Sets how far back for the model to look back to prevent repetition. (Default: 64, 0 = disabled, -1 = num_ctx)": "Establece cuán lejos atrás debe mirar el modelo para evitar la repetición. (Predeterminado: 64, 0 = deshabilitado, -1 = num_ctx)",
"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt. (Default: random)": "Establece la semilla de número aleatorio a usar para la generación. Establecer esto en un número específico hará que el modelo genere el mismo texto para el mismo prompt. (Predeterminado: aleatorio)",
"Sets the size of the context window used to generate the next token. (Default: 2048)": "Establece el tamaño de la ventana de contexto utilizada para generar el siguiente token. (Predeterminado: 2048)",
"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
"Sets how far back for the model to look back to prevent repetition.": "",
"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt.": "",
"Sets the size of the context window used to generate the next token.": "",
"Sets the stop sequences to use. When this pattern is encountered, the LLM will stop generating text and return. Multiple stop patterns may be set by specifying multiple separate stop parameters in a modelfile.": "Establece las secuencias de parada a usar. Cuando se encuentre este patrón, el LLM dejará de generar texto y devolverá. Se pueden establecer varios patrones de parada especificando múltiples parámetros de parada separados en un archivo de modelo.",
"Settings": "Configuración",
"Settings saved successfully!": "¡Configuración guardada con éxito!",
@@ -964,7 +981,7 @@
"System Prompt": "Prompt del sistema",
"Tags Generation": "Generación de etiquetas",
"Tags Generation Prompt": "Prompt de generación de etiquetas",
"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting. (default: 1)": "El muestreo libre de cola se utiliza para reducir el impacto de los tokens menos probables en la salida. Un valor más alto (p.ej., 2.0) reducirá el impacto más, mientras que un valor de 1.0 deshabilitará esta configuración. (predeterminado: 1)",
"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting.": "",
"Talk to model": "",
"Tap to interrupt": "Toca para interrumpir",
"Tasks": "",
@@ -979,7 +996,7 @@
"Thanks for your feedback!": "¡Gracias por tu retroalimentación!",
"The Application Account DN you bind with for search": "La cuenta de aplicación DN que vincula para la búsqueda",
"The base to search for users": "La base para buscar usuarios",
"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory. (Default: 512)": "",
"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory.": "",
"The developers behind this plugin are passionate volunteers from the community. If you find this plugin helpful, please consider contributing to its development.": "Los desarrolladores de este plugin son apasionados voluntarios de la comunidad. Si encuentras este plugin útil, por favor considera contribuir a su desarrollo.",
"The evaluation leaderboard is based on the Elo rating system and is updated in real-time.": "El tablero de líderes de evaluación se basa en el sistema de clasificación Elo y se actualiza en tiempo real.",
"The LDAP attribute that maps to the mail that users use to sign in.": "El atributo LDAP que se asigna al correo que los usuarios utilizan para iniciar sesión.",
@@ -988,14 +1005,14 @@
"The maximum file size in MB. If the file size exceeds this limit, the file will not be uploaded.": "El tamaño máximo del archivo en MB. Si el tamaño del archivo supera este límite, el archivo no se subirá.",
"The maximum number of files that can be used at once in chat. If the number of files exceeds this limit, the files will not be uploaded.": "El número máximo de archivos que se pueden utilizar a la vez en chat. Si este límite es superado, los archivos no se subirán.",
"The score should be a value between 0.0 (0%) and 1.0 (100%).": "La puntuación debe ser un valor entre 0.0 (0%) y 1.0 (100%).",
"The temperature of the model. Increasing the temperature will make the model answer more creatively. (Default: 0.8)": "La temperatura del modelo. Aumentar la temperatura hará que el modelo responda de manera más creativa. (Predeterminado: 0.8)",
"The temperature of the model. Increasing the temperature will make the model answer more creatively.": "",
"Theme": "Tema",
"Thinking...": "Pensando...",
"This action cannot be undone. Do you wish to continue?": "Esta acción no se puede deshacer. ¿Desea continuar?",
"This ensures that your valuable conversations are securely saved to your backend database. Thank you!": "Esto garantiza que sus valiosas conversaciones se guarden de forma segura en su base de datos en el backend. ¡Gracias!",
"This is an experimental feature, it may not function as expected and is subject to change at any time.": "Esta es una característica experimental que puede no funcionar como se esperaba y está sujeta a cambios en cualquier momento.",
"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics. (Default: 24)": "Esta opción controla cuántos tokens se conservan al actualizar el contexto. Por ejemplo, si se establece en 2, se conservarán los últimos 2 tokens del contexto de la conversación. Conservar el contexto puede ayudar a mantener la continuidad de una conversación, pero puede reducir la capacidad de responder a nuevos temas. (Predeterminado: 24)",
"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated. (Default: 128)": "",
"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics.": "",
"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated.": "",
"This option will delete all existing files in the collection and replace them with newly uploaded files.": "Esta opción eliminará todos los archivos existentes en la colección y los reemplazará con nuevos archivos subidos.",
"This response was generated by \"{{model}}\"": "Esta respuesta fue generada por \"{{model}}\"",
"This will delete": "Esto eliminará",
@@ -1132,7 +1149,7 @@
"Why?": "¿Por qué?",
"Widescreen Mode": "Modo de pantalla ancha",
"Won": "Ganado",
"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text. (Default: 0.9)": "Funciona junto con top-k. Un valor más alto (p.ej., 0.95) dará como resultado un texto más diverso, mientras que un valor más bajo (p.ej., 0.5) generará un texto más enfocado y conservador. (Predeterminado: 0.9)",
"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text.": "",
"Workspace": "Espacio de trabajo",
"Workspace Permissions": "Permisos del espacio de trabajo",
"Write": "Escribir",
@@ -1142,6 +1159,7 @@
"Write your model template content here": "Escribe el contenido de tu plantilla de modelo aquí",
"Yesterday": "Ayer",
"You": "Usted",
"You are currently using a trial license. Please contact support to upgrade your license.": "",
"You can only chat with a maximum of {{maxCount}} file(s) at a time.": "Solo puede chatear con un máximo de {{maxCount}} archivo(s) a la vez.",
"You can personalize your interactions with LLMs by adding memories through the 'Manage' button below, making them more helpful and tailored to you.": "Puede personalizar sus interacciones con LLMs añadiendo memorias a través del botón 'Gestionar' debajo, haciendo que sean más útiles y personalizadas para usted.",
"You cannot upload an empty file.": "No puede subir un archivo vacío.",


@@ -5,6 +5,7 @@
"(e.g. `sh webui.sh --api`)": "(adib. `sh webui.sh --api`)",
"(latest)": "(azkena)",
"{{ models }}": "{{ models }}",
"{{COUNT}} hidden lines": "",
"{{COUNT}} Replies": "",
"{{user}}'s Chats": "{{user}}-ren Txatak",
"{{webUIName}} Backend Required": "{{webUIName}} Backend-a Beharrezkoa",
@@ -51,6 +52,7 @@
"Admins have access to all tools at all times; users need tools assigned per model in the workspace.": "Administratzaileek tresna guztietarako sarbidea dute beti; erabiltzaileek lan-eremuan eredu bakoitzeko esleituak behar dituzte tresnak.",
"Advanced Parameters": "Parametro Aurreratuak",
"Advanced Params": "Parametro Aurreratuak",
"All": "",
"All Documents": "Dokumentu Guztiak",
"All models deleted successfully": "Eredu guztiak ongi ezabatu dira",
"Allow Chat Controls": "",
@@ -64,7 +66,7 @@
"Allow Voice Interruption in Call": "Baimendu Ahots Etena Deietan",
"Allowed Endpoints": "",
"Already have an account?": "Baduzu kontu bat?",
"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out. (Default: 0.0)": "top_p-ren alternatiba, kalitate eta aniztasunaren arteko oreka bermatzea du helburu. p parametroak token bat kontuan hartzeko gutxieneko probabilitatea adierazten du, token probableenaren probabilitatearen arabera. Adibidez, p=0.05 balioarekin eta token probableenaren probabilitatea 0.9 denean, 0.045 baino balio txikiagoko logit-ak baztertzen dira. (Lehenetsia: 0.0)",
"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out.": "",
"Always": "",
"Amazing": "Harrigarria",
"an assistant": "laguntzaile bat",
@@ -93,6 +95,7 @@
"Are you sure?": "Ziur zaude?",
"Arena Models": "Arena Ereduak",
"Artifacts": "Artefaktuak",
"Ask": "",
"Ask a question": "Egin galdera bat",
"Assistant": "Laguntzailea",
"Attach file from knowledge": "",
@@ -127,6 +130,7 @@
"Bing Search V7 Endpoint": "Bing Bilaketa V7 Endpointua",
"Bing Search V7 Subscription Key": "Bing Bilaketa V7 Harpidetza Gakoa",
"Bocha Search API Key": "",
"Boosting or penalizing specific tokens for constrained responses. Bias values will be clamped between -100 and 100 (inclusive). (Default: none)": "",
"Brave Search API Key": "Brave Bilaketa API Gakoa",
"By {{name}}": "{{name}}-k",
"Bypass Embedding and Retrieval": "",
@@ -190,6 +194,7 @@
"Code Interpreter": "",
"Code Interpreter Engine": "",
"Code Interpreter Prompt Template": "",
"Collapse": "",
"Collection": "Bilduma",
"Color": "Kolorea",
"ComfyUI": "ComfyUI",
@@ -208,7 +213,7 @@
"Confirm your new password": "",
"Connect to your own OpenAI compatible API endpoints.": "",
"Connections": "Konexioak",
"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort. (Default: medium)": "",
"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort.": "",
"Contact Admin for WebUI Access": "Jarri harremanetan Administratzailearekin WebUI Sarbiderako",
"Content": "Edukia",
"Content Extraction Engine": "",
@@ -218,9 +223,9 @@
"Continue with Email": "Jarraitu Posta Elektronikoarekin",
"Continue with LDAP": "Jarraitu LDAP-rekin",
"Control how message text is split for TTS requests. 'Punctuation' splits into sentences, 'paragraphs' splits into paragraphs, and 'none' keeps the message as a single string.": "Kontrolatu nola banatzen den mezuaren testua TTS eskaeretarako. 'Puntuazioa'-k esaldietan banatzen du, 'paragrafoak'-k paragrafoetan, eta 'bat ere ez'-ek mezua kate bakar gisa mantentzen du.",
"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled. (Default: 1.1)": "",
"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled.": "",
"Controls": "Kontrolak",
"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text. (Default: 5.0)": "Irteeraren koherentzia eta aniztasunaren arteko oreka kontrolatzen du. Balio txikiagoak testu zentratuagoa eta koherenteagoa emango du. (Lehenetsia: 5.0)",
"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text.": "",
"Copied": "Kopiatuta",
"Copied shared chat URL to clipboard!": "Partekatutako txataren URLa arbelera kopiatu da!",
"Copied to clipboard": "Arbelera kopiatuta",
@@ -245,6 +250,7 @@
"Created At": "Sortze Data",
"Created by": "Sortzailea",
"CSV Import": "CSV Inportazioa",
"Ctrl+Enter to Send": "",
"Current Model": "Uneko Eredua",
"Current Password": "Uneko Pasahitza",
"Custom": "Pertsonalizatua",
@@ -353,12 +359,13 @@
"Embedding model set to \"{{embedding_model}}\"": "Embedding eredua \"{{embedding_model}}\"-ra ezarri da",
"Enable API Key": "",
"Enable autocomplete generation for chat messages": "",
"Enable Code Execution": "",
"Enable Code Interpreter": "",
"Enable Community Sharing": "Gaitu Komunitatearen Partekatzea",
"Enable Memory Locking (mlock) to prevent model data from being swapped out of RAM. This option locks the model's working set of pages into RAM, ensuring that they will not be swapped out to disk. This can help maintain performance by avoiding page faults and ensuring fast data access.": "Gaitu Memoria Blokeatzea (mlock) ereduaren datuak RAM memoriatik kanpo ez trukatzeko. Aukera honek ereduaren lan-orri multzoa RAMean blokatzen du, diskora ez direla trukatuko ziurtatuz. Honek errendimendua mantentzen lagun dezake, orri-hutsegiteak saihestuz eta datuen sarbide azkarra bermatuz.",
"Enable Memory Mapping (mmap) to load model data. This option allows the system to use disk storage as an extension of RAM by treating disk files as if they were in RAM. This can improve model performance by allowing for faster data access. However, it may not work correctly with all systems and can consume a significant amount of disk space.": "Gaitu Memoria Mapaketa (mmap) ereduaren datuak kargatzeko. Aukera honek sistemari disko-biltegiratzea RAM memoriaren luzapen gisa erabiltzea ahalbidetzen dio, diskoko fitxategiak RAMean baleude bezala tratatuz. Honek ereduaren errendimendua hobe dezake, datuen sarbide azkarragoa ahalbidetuz. Hala ere, baliteke sistema guztietan behar bezala ez funtzionatzea eta disko-espazio handia kontsumitu dezake.",
"Enable Message Rating": "Gaitu Mezuen Balorazioa",
"Enable Mirostat sampling for controlling perplexity. (Default: 0, 0 = Disabled, 1 = Mirostat, 2 = Mirostat 2.0)": "Gaitu Mirostat laginketa nahasmena kontrolatzeko. (Lehenetsia: 0, 0 = Desgaituta, 1 = Mirostat, 2 = Mirostat 2.0)",
"Enable Mirostat sampling for controlling perplexity.": "",
"Enable New Sign Ups": "Gaitu Izena Emate Berriak",
"Enabled": "Gaituta",
"Ensure your CSV file includes 4 columns in this order: Name, Email, Password, Role.": "Ziurtatu zure CSV fitxategiak 4 zutabe dituela ordena honetan: Izena, Posta elektronikoa, Pasahitza, Rola.",
@@ -375,6 +382,7 @@
"Enter CFG Scale (e.g. 7.0)": "Sartu CFG Eskala (adib. 7.0)",
"Enter Chunk Overlap": "Sartu Zatien Gainjartzea (chunk overlap)",
"Enter Chunk Size": "Sartu Zati Tamaina",
"Enter comma-seperated \"token:bias_value\" pairs (example: 5432:100, 413:-100)": "",
"Enter description": "Sartu deskribapena",
"Enter Document Intelligence Endpoint": "",
"Enter Document Intelligence Key": "",
@@ -389,11 +397,13 @@
"Enter Jupyter Token": "",
"Enter Jupyter URL": "",
"Enter Kagi Search API Key": "",
"Enter Key Behavior": "",
"Enter language codes": "Sartu hizkuntza kodeak",
"Enter Model ID": "Sartu Eredu IDa",
"Enter model tag (e.g. {{modelTag}})": "Sartu eredu etiketa (adib. {{modelTag}})",
"Enter Mojeek Search API Key": "Sartu Mojeek Bilaketa API Gakoa",
"Enter Number of Steps (e.g. 50)": "Sartu Urrats Kopurua (adib. 50)",
"Enter Perplexity API Key": "",
"Enter proxy URL (e.g. https://user:password@host:port)": "",
"Enter reasoning effort": "",
"Enter Sampler (e.g. Euler a)": "Sartu Sampler-a (adib. Euler a)",
@@ -417,6 +427,7 @@
"Enter the public URL of your WebUI. This URL will be used to generate links in the notifications.": "",
"Enter Tika Server URL": "Sartu Tika Zerbitzari URLa",
"Enter timeout in seconds": "",
"Enter to Send": "",
"Enter Top K": "Sartu Top K",
"Enter URL (e.g. http://127.0.0.1:7860/)": "Sartu URLa (adib. http://127.0.0.1:7860/)",
"Enter URL (e.g. http://localhost:11434)": "Sartu URLa (adib. http://localhost:11434)",
@@ -440,9 +451,13 @@
"Example: mail": "",
"Example: ou=users,dc=foo,dc=example": "Adibidea: ou=users,dc=foo,dc=example",
"Example: sAMAccountName or uid or userPrincipalName": "Adibidea: sAMAccountName edo uid edo userPrincipalName",
"Exceeded the number of seats in your license. Please contact support to increase the number of seats.": "",
"Exclude": "Baztertu",
"Execute code for analysis": "",
"Expand": "",
"Experimental": "Esperimentala",
"Explain": "",
"Explain this section to me in more detail": "",
"Explore the cosmos": "Esploratu kosmosa",
"Export": "Esportatu",
"Export All Archived Chats": "Esportatu Artxibatutako Txat Guztiak",
@@ -566,7 +581,7 @@
"Include": "Sartu",
"Include `--api-auth` flag when running stable-diffusion-webui": "Sartu `--api-auth` bandera stable-diffusion-webui exekutatzean",
"Include `--api` flag when running stable-diffusion-webui": "Sartu `--api` bandera stable-diffusion-webui exekutatzean",
"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive. (Default: 0.1)": "Algoritmoak sortutako testutik jasotako feedbackari erantzuteko abiadura zehazten du. Ikasketa-tasa baxuago batek doikuntza motelagoak eragingo ditu, eta ikasketa-tasa altuago batek algoritmoaren erantzuna bizkorragoa egingo du. (Lehenetsia: 0.1)",
"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive.": "",
"Info": "Informazioa",
"Input commands": "Sartu komandoak",
"Install from Github URL": "Instalatu Github URLtik",
@@ -624,6 +639,7 @@
"Local": "Lokala",
"Local Models": "Modelo lokalak",
"Location access not allowed": "",
"Logit Bias": "",
"Lost": "Galduta",
"LTR": "LTR",
"Made by Open WebUI Community": "OpenWebUI Komunitateak egina",
@@ -764,6 +780,7 @@
"Permission denied when accessing microphone": "Baimena ukatu da mikrofonoa atzitzean",
"Permission denied when accessing microphone: {{error}}": "Baimena ukatu da mikrofonoa atzitzean: {{error}}",
"Permissions": "Baimenak",
"Perplexity API Key": "",
"Personalization": "Pertsonalizazioa",
"Pin": "Ainguratu",
"Pinned": "Ainguratuta",
@@ -809,7 +826,7 @@
"Reasoning Effort": "",
"Record voice": "Grabatu ahotsa",
"Redirecting you to Open WebUI Community": "OpenWebUI Komunitatera berbideratzen",
"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative. (Default: 40)": "Zentzugabekeriak sortzeko probabilitatea murrizten du. Balio altuago batek (adib. 100) erantzun anitzagoak emango ditu, balio baxuago batek (adib. 10) kontserbadoreagoa izango den bitartean. (Lehenetsia: 40)",
"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative.": "",
"Refer to yourself as \"User\" (e.g., \"User is learning Spanish\")": "Egin erreferentzia zure buruari \"Erabiltzaile\" gisa (adib., \"Erabiltzailea gaztelania ikasten ari da\")",
"References from": "Erreferentziak hemendik",
"Refused when it shouldn't have": "Ukatu duenean ukatu behar ez zuenean",
@@ -918,11 +935,11 @@
"Set the number of worker threads used for computation. This option controls how many threads are used to process incoming requests concurrently. Increasing this value can improve performance under high concurrency workloads but may also consume more CPU resources.": "Ezarri kalkulurako erabilitako langile harien kopurua. Aukera honek kontrolatzen du zenbat hari erabiltzen diren sarrerako eskaerak aldi berean prozesatzeko. Balio hau handitzeak errendimendua hobetu dezake konkurrentzia altuko lan-kargetan, baina CPU baliabide gehiago kontsumitu ditzake.",
"Set Voice": "Ezarri ahotsa",
"Set whisper model": "Ezarri whisper modeloa",
"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 0)": "",
"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 1.1)": "",
"Sets how far back for the model to look back to prevent repetition. (Default: 64, 0 = disabled, -1 = num_ctx)": "Ezartzen du modeloak zenbat atzera begiratu behar duen errepikapenak saihesteko. (Lehenetsia: 64, 0 = desgaituta, -1 = num_ctx)",
"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt. (Default: random)": "Ezartzen du sorkuntzarako erabiliko den ausazko zenbakien hazia. Hau zenbaki zehatz batera ezartzeak modeloak testu bera sortzea eragingo du prompt bererako. (Lehenetsia: ausazkoa)",
"Sets the size of the context window used to generate the next token. (Default: 2048)": "Ezartzen du hurrengo tokena sortzeko erabilitako testuinguru leihoaren tamaina. (Lehenetsia: 2048)",
"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
"Sets how far back for the model to look back to prevent repetition.": "",
"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt.": "",
"Sets the size of the context window used to generate the next token.": "",
"Sets the stop sequences to use. When this pattern is encountered, the LLM will stop generating text and return. Multiple stop patterns may be set by specifying multiple separate stop parameters in a modelfile.": "Ezartzen ditu erabiliko diren gelditzeko sekuentziak. Patroi hau aurkitzen denean, LLMak testua sortzeari utziko dio eta itzuli egingo da. Gelditzeko patroi anitz ezar daitezke modelfile batean gelditzeko parametro anitz zehaztuz.",
"Settings": "Ezarpenak",
"Settings saved successfully!": "Ezarpenak ongi gorde dira!",
@@ -964,7 +981,7 @@
"System Prompt": "Sistema prompta",
"Tags Generation": "",
"Tags Generation Prompt": "Etiketa sortzeko prompta",
"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting. (default: 1)": "Isats-libre laginketa erabiltzen da irteran probabilitate txikiagoko tokenen eragina murrizteko. Balio altuago batek (adib., 2.0) eragina gehiago murriztuko du, 1.0 balioak ezarpen hau desgaitzen duen bitartean. (lehenetsia: 1)",
"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting.": "",
"Talk to model": "",
"Tap to interrupt": "Ukitu eteteko",
"Tasks": "",
@@ -979,7 +996,7 @@
"Thanks for your feedback!": "Eskerrik asko zure iritzia emateagatik!",
"The Application Account DN you bind with for search": "Bilaketarako lotzen duzun aplikazio kontuaren DN-a",
"The base to search for users": "Erabiltzaileak bilatzeko oinarria",
"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory. (Default: 512)": "Sorta tamainak zehazten du zenbat testu eskaera prozesatzen diren batera aldi berean. Sorta tamaina handiago batek modeloaren errendimendua eta abiadura handitu ditzake, baina memoria gehiago behar du. (Lehenetsia: 512)",
"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory.": "",
"The developers behind this plugin are passionate volunteers from the community. If you find this plugin helpful, please consider contributing to its development.": "Plugin honen atzean dauden garatzaileak komunitateko boluntario sutsuak dira. Plugin hau baliagarria iruditzen bazaizu, mesedez kontuan hartu bere garapenean laguntzea.",
"The evaluation leaderboard is based on the Elo rating system and is updated in real-time.": "Ebaluazio sailkapena Elo sailkapen sisteman oinarritzen da eta denbora errealean eguneratzen da.",
"The LDAP attribute that maps to the mail that users use to sign in.": "",
@@ -988,14 +1005,14 @@
"The maximum file size in MB. If the file size exceeds this limit, the file will not be uploaded.": "Fitxategiaren gehienezko tamaina MB-tan. Fitxategiaren tamainak muga hau gainditzen badu, fitxategia ez da kargatuko.",
"The maximum number of files that can be used at once in chat. If the number of files exceeds this limit, the files will not be uploaded.": "Txatean aldi berean erabili daitezkeen fitxategien gehienezko kopurua. Fitxategi kopuruak muga hau gainditzen badu, fitxategiak ez dira kargatuko.",
"The score should be a value between 0.0 (0%) and 1.0 (100%).": "Puntuazioa 0.0 (0%) eta 1.0 (100%) arteko balio bat izan behar da.",
"The temperature of the model. Increasing the temperature will make the model answer more creatively. (Default: 0.8)": "Modeloaren tenperatura. Tenperatura handitzeak modeloaren erantzunak sortzaileagoak izatea eragingo du. (Lehenetsia: 0.8)",
"The temperature of the model. Increasing the temperature will make the model answer more creatively.": "",
"Theme": "Gaia",
"Thinking...": "Pentsatzen...",
"This action cannot be undone. Do you wish to continue?": "Ekintza hau ezin da desegin. Jarraitu nahi duzu?",
"This ensures that your valuable conversations are securely saved to your backend database. Thank you!": "Honek zure elkarrizketa baliotsuak modu seguruan zure backend datu-basean gordeko direla ziurtatzen du. Eskerrik asko!",
"This is an experimental feature, it may not function as expected and is subject to change at any time.": "Hau funtzionalitate esperimental bat da, baliteke espero bezala ez funtzionatzea eta edozein unetan aldaketak izatea.",
"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics. (Default: 24)": "Aukera honek kontrolatzen du zenbat token mantentzen diren testuingurua freskatzean. Adibidez, 2-ra ezarrita badago, elkarrizketaren testuinguruko azken 2 tokenak mantenduko dira. Testuingurua mantentzeak elkarrizketaren jarraitutasuna mantentzen lagun dezake, baina gai berriei erantzuteko gaitasuna murriztu dezake. (Lehenetsia: 24)",
"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated. (Default: 128)": "Aukera honek ereduak bere erantzunean sor dezakeen token kopuru maximoa ezartzen du. Muga hau handitzeak ereduari erantzun luzeagoak emateko aukera ematen dio, baina eduki ez-erabilgarri edo ez-egokia sortzeko probabilitatea ere handitu dezake. (Lehenetsia: 128)",
"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics.": "",
"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated.": "",
"This option will delete all existing files in the collection and replace them with newly uploaded files.": "Aukera honek bilduman dauden fitxategi guztiak ezabatuko ditu eta berriki kargatutako fitxategiekin ordezkatuko ditu.",
"This response was generated by \"{{model}}\"": "Erantzun hau \"{{model}}\" modeloak sortu du",
"This will delete": "Honek ezabatuko du",
@@ -1132,7 +1149,7 @@
"Why?": "Zergatik?",
"Widescreen Mode": "Pantaila zabaleko modua",
"Won": "Irabazi du",
"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text. (Default: 0.9)": "Top-k-rekin batera lan egiten du. Balio altuago batek (adib., 0.95) testu anitzagoa sortuko du, balio baxuago batek (adib., 0.5) testu fokatu eta kontserbadoreagoa sortuko duen bitartean. (Lehenetsia: 0.9)",
"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text.": "",
"Workspace": "Lan-eremua",
"Workspace Permissions": "Lan-eremuaren baimenak",
"Write": "",
@@ -1142,6 +1159,7 @@
"Write your model template content here": "Idatzi hemen zure modelo txantiloi edukia",
"Yesterday": "Atzo",
"You": "Zu",
"You are currently using a trial license. Please contact support to upgrade your license.": "",
"You can only chat with a maximum of {{maxCount}} file(s) at a time.": "Gehienez {{maxCount}} fitxategirekin txateatu dezakezu aldi berean.",
"You can personalize your interactions with LLMs by adding memories through the 'Manage' button below, making them more helpful and tailored to you.": "LLMekin dituzun interakzioak pertsonalizatu ditzakezu memoriak gehituz beheko 'Kudeatu' botoiaren bidez, lagungarriagoak eta zuretzat egokituagoak eginez.",
"You cannot upload an empty file.": "Ezin duzu fitxategi huts bat kargatu.",


@@ -5,6 +5,7 @@
"(e.g. `sh webui.sh --api`)": "(e.g. `sh webui.sh --api`)",
"(latest)": "(آخرین)",
"{{ models }}": "{{ models }}",
"{{COUNT}} hidden lines": "",
"{{COUNT}} Replies": "",
"{{user}}'s Chats": "{{user}} گفتگوهای",
"{{webUIName}} Backend Required": "بکند {{webUIName}} نیاز است.",
@@ -51,6 +52,7 @@
"Admins have access to all tools at all times; users need tools assigned per model in the workspace.": "",
"Advanced Parameters": "پارامترهای پیشرفته",
"Advanced Params": "پارام\u200cهای پیشرفته",
"All": "",
"All Documents": "همهٔ سند\u200cها",
"All models deleted successfully": "",
"Allow Chat Controls": "",
@@ -64,7 +66,7 @@
"Allow Voice Interruption in Call": "",
"Allowed Endpoints": "",
"Already have an account?": "از قبل حساب کاربری دارید؟",
"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out. (Default: 0.0)": "",
"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out.": "",
"Always": "",
"Amazing": "",
"an assistant": "یک دستیار",
@@ -93,6 +95,7 @@
"Are you sure?": "مطمئنید؟",
"Arena Models": "",
"Artifacts": "",
"Ask": "",
"Ask a question": "سوالی بپرسید",
"Assistant": "دستیار",
"Attach file from knowledge": "",
@@ -127,6 +130,7 @@
"Bing Search V7 Endpoint": "",
"Bing Search V7 Subscription Key": "",
"Bocha Search API Key": "",
"Boosting or penalizing specific tokens for constrained responses. Bias values will be clamped between -100 and 100 (inclusive). (Default: none)": "",
"Brave Search API Key": "کلید API جستجوی شجاع",
"By {{name}}": "",
"Bypass Embedding and Retrieval": "",
@@ -190,6 +194,7 @@
"Code Interpreter": "",
"Code Interpreter Engine": "",
"Code Interpreter Prompt Template": "",
"Collapse": "",
"Collection": "مجموعه",
"Color": "",
"ComfyUI": "کومیوآی",
@@ -208,7 +213,7 @@
"Confirm your new password": "",
"Connect to your own OpenAI compatible API endpoints.": "",
"Connections": "ارتباطات",
"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort. (Default: medium)": "",
"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort.": "",
"Contact Admin for WebUI Access": "برای دسترسی به WebUI با مدیر تماس بگیرید",
"Content": "محتوا",
"Content Extraction Engine": "",
@@ -218,9 +223,9 @@
"Continue with Email": "",
"Continue with LDAP": "",
"Control how message text is split for TTS requests. 'Punctuation' splits into sentences, 'paragraphs' splits into paragraphs, and 'none' keeps the message as a single string.": "",
"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled. (Default: 1.1)": "",
"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled.": "",
"Controls": "کنترل\u200cها",
"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text. (Default: 5.0)": "",
"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text.": "",
"Copied": "کپی شد",
"Copied shared chat URL to clipboard!": "URL چت به کلیپ بورد کپی شد!",
"Copied to clipboard": "به بریده\u200cدان کپی\u200cشد",
@@ -245,6 +250,7 @@
"Created At": "ایجاد شده در",
"Created by": "ایجاد شده توسط",
"CSV Import": "درون\u200cریزی CSV",
"Ctrl+Enter to Send": "",
"Current Model": "مدل فعلی",
"Current Password": "رمز عبور فعلی",
"Custom": "دلخواه",
@@ -353,12 +359,13 @@
"Embedding model set to \"{{embedding_model}}\"": "مدل پیدائش را به \"{{embedding_model}}\" تنظیم کنید",
"Enable API Key": "",
"Enable autocomplete generation for chat messages": "",
"Enable Code Execution": "",
"Enable Code Interpreter": "",
"Enable Community Sharing": "فعالسازی اشتراک انجمن",
"Enable Memory Locking (mlock) to prevent model data from being swapped out of RAM. This option locks the model's working set of pages into RAM, ensuring that they will not be swapped out to disk. This can help maintain performance by avoiding page faults and ensuring fast data access.": "",
"Enable Memory Mapping (mmap) to load model data. This option allows the system to use disk storage as an extension of RAM by treating disk files as if they were in RAM. This can improve model performance by allowing for faster data access. However, it may not work correctly with all systems and can consume a significant amount of disk space.": "",
"Enable Message Rating": "",
"Enable Mirostat sampling for controlling perplexity. (Default: 0, 0 = Disabled, 1 = Mirostat, 2 = Mirostat 2.0)": "",
"Enable Mirostat sampling for controlling perplexity.": "",
"Enable New Sign Ups": "فعال کردن ثبت نام\u200cهای جدید",
"Enabled": "",
"Ensure your CSV file includes 4 columns in this order: Name, Email, Password, Role.": "اطمینان حاصل کنید که فایل CSV شما شامل چهار ستون در این ترتیب است: نام، ایمیل، رمز عبور، نقش.",
@@ -375,6 +382,7 @@
"Enter CFG Scale (e.g. 7.0)": "",
"Enter Chunk Overlap": "مقدار Chunk Overlap را وارد کنید",
"Enter Chunk Size": "مقدار Chunk Size را وارد کنید",
"Enter comma-seperated \"token:bias_value\" pairs (example: 5432:100, 413:-100)": "",
"Enter description": "",
"Enter Document Intelligence Endpoint": "",
"Enter Document Intelligence Key": "",
@@ -389,11 +397,13 @@
"Enter Jupyter Token": "",
"Enter Jupyter URL": "",
"Enter Kagi Search API Key": "",
"Enter Key Behavior": "",
"Enter language codes": "کد زبان را وارد کنید",
"Enter Model ID": "",
"Enter model tag (e.g. {{modelTag}})": "تگ مدل را وارد کنید (مثلا {{modelTag}})",
"Enter Mojeek Search API Key": "",
"Enter Number of Steps (e.g. 50)": "تعداد گام ها را وارد کنید (مثال: 50)",
"Enter Perplexity API Key": "",
"Enter proxy URL (e.g. https://user:password@host:port)": "",
"Enter reasoning effort": "",
"Enter Sampler (e.g. Euler a)": "",
@@ -417,6 +427,7 @@
"Enter the public URL of your WebUI. This URL will be used to generate links in the notifications.": "",
"Enter Tika Server URL": "",
"Enter timeout in seconds": "",
"Enter to Send": "",
"Enter Top K": "مقدار Top K را وارد کنید",
"Enter URL (e.g. http://127.0.0.1:7860/)": "مقدار URL را وارد کنید (مثال http://127.0.0.1:7860/)",
"Enter URL (e.g. http://localhost:11434)": "مقدار URL را وارد کنید (مثال http://localhost:11434)",
@@ -440,9 +451,13 @@
"Example: mail": "",
"Example: ou=users,dc=foo,dc=example": "",
"Example: sAMAccountName or uid or userPrincipalName": "",
"Exceeded the number of seats in your license. Please contact support to increase the number of seats.": "",
"Exclude": "",
"Execute code for analysis": "",
"Expand": "",
"Experimental": "آزمایشی",
"Explain": "",
"Explain this section to me in more detail": "",
"Explore the cosmos": "",
"Export": "برون\u200cریزی",
"Export All Archived Chats": "",
@@ -566,7 +581,7 @@
"Include": "شامل",
"Include `--api-auth` flag when running stable-diffusion-webui": "",
"Include `--api` flag when running stable-diffusion-webui": "فلگ `--api` را هنکام اجرای stable-diffusion-webui استفاده کنید.",
"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive. (Default: 0.1)": "",
"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive.": "",
"Info": "اطلاعات",
"Input commands": "ورودی دستورات",
"Install from Github URL": "نصب از ادرس Github",
@@ -624,6 +639,7 @@
"Local": "",
"Local Models": "",
"Location access not allowed": "",
"Logit Bias": "",
"Lost": "",
"LTR": "LTR",
"Made by Open WebUI Community": "ساخته شده توسط OpenWebUI Community",
@@ -764,6 +780,7 @@
"Permission denied when accessing microphone": "",
"Permission denied when accessing microphone: {{error}}": "هنگام دسترسی به میکروفون، اجازه داده نشد: {{error}}",
"Permissions": "",
"Perplexity API Key": "",
"Personalization": "شخصی سازی",
"Pin": "",
"Pinned": "",
@@ -809,7 +826,7 @@
"Reasoning Effort": "",
"Record voice": "ضبط صدا",
"Redirecting you to Open WebUI Community": "در حال هدایت به OpenWebUI Community",
"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative. (Default: 40)": "",
"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative.": "",
"Refer to yourself as \"User\" (e.g., \"User is learning Spanish\")": "",
"References from": "",
"Refused when it shouldn't have": "رد شده زمانی که باید نباشد",
@@ -918,11 +935,11 @@
"Set the number of worker threads used for computation. This option controls how many threads are used to process incoming requests concurrently. Increasing this value can improve performance under high concurrency workloads but may also consume more CPU resources.": "",
"Set Voice": "تنظیم صدا",
"Set whisper model": "",
"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 0)": "",
"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 1.1)": "",
"Sets how far back for the model to look back to prevent repetition. (Default: 64, 0 = disabled, -1 = num_ctx)": "",
"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt. (Default: random)": "",
"Sets the size of the context window used to generate the next token. (Default: 2048)": "",
"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
"Sets how far back for the model to look back to prevent repetition.": "",
"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt.": "",
"Sets the size of the context window used to generate the next token.": "",
"Sets the stop sequences to use. When this pattern is encountered, the LLM will stop generating text and return. Multiple stop patterns may be set by specifying multiple separate stop parameters in a modelfile.": "",
"Settings": "تنظیمات",
"Settings saved successfully!": "تنظیمات با موفقیت ذخیره شد!",
@@ -964,7 +981,7 @@
"System Prompt": "پرامپت سیستم",
"Tags Generation": "",
"Tags Generation Prompt": "",
"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting. (default: 1)": "",
"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting.": "",
"Talk to model": "",
"Tap to interrupt": "",
"Tasks": "",
@@ -979,7 +996,7 @@
"Thanks for your feedback!": "با تشکر از بازخورد شما!",
"The Application Account DN you bind with for search": "",
"The base to search for users": "",
"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory. (Default: 512)": "",
"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory.": "",
"The developers behind this plugin are passionate volunteers from the community. If you find this plugin helpful, please consider contributing to its development.": "",
"The evaluation leaderboard is based on the Elo rating system and is updated in real-time.": "",
"The LDAP attribute that maps to the mail that users use to sign in.": "",
@@ -988,14 +1005,14 @@
"The maximum file size in MB. If the file size exceeds this limit, the file will not be uploaded.": "",
"The maximum number of files that can be used at once in chat. If the number of files exceeds this limit, the files will not be uploaded.": "",
"The score should be a value between 0.0 (0%) and 1.0 (100%).": "امتیاز باید یک مقدار بین 0.0 (0%) و 1.0 (100%) باشد.",
"The temperature of the model. Increasing the temperature will make the model answer more creatively. (Default: 0.8)": "",
"The temperature of the model. Increasing the temperature will make the model answer more creatively.": "",
"Theme": "پوسته",
"Thinking...": "در حال فکر...",
"This action cannot be undone. Do you wish to continue?": "این اقدام قابل بازگردانی نیست. برای ادامه اطمینان دارید؟",
"This ensures that your valuable conversations are securely saved to your backend database. Thank you!": "این تضمین می کند که مکالمات ارزشمند شما به طور ایمن در پایگاه داده بکند ذخیره می شود. تشکر!",
"This is an experimental feature, it may not function as expected and is subject to change at any time.": "",
"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics. (Default: 24)": "",
"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated. (Default: 128)": "",
"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics.": "",
"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated.": "",
"This option will delete all existing files in the collection and replace them with newly uploaded files.": "",
"This response was generated by \"{{model}}\"": "",
"This will delete": "",
@@ -1132,7 +1149,7 @@
"Why?": "",
"Widescreen Mode": "حالت صفحهٔ عریض",
"Won": "",
"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text. (Default: 0.9)": "",
"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text.": "",
"Workspace": "محیط کار",
"Workspace Permissions": "",
"Write": "",
@@ -1142,6 +1159,7 @@
"Write your model template content here": "",
"Yesterday": "دیروز",
"You": "شما",
"You are currently using a trial license. Please contact support to upgrade your license.": "",
"You can only chat with a maximum of {{maxCount}} file(s) at a time.": "شما در هر زمان نهایتا می\u200cتوانید با {{maxCount}} پرونده گفتگو کنید.",
"You can personalize your interactions with LLMs by adding memories through the 'Manage' button below, making them more helpful and tailored to you.": "",
"You cannot upload an empty file.": "",


@@ -5,6 +5,7 @@
"(e.g. `sh webui.sh --api`)": "(esim. `sh webui.sh --api`)",
"(latest)": "(uusin)",
"{{ models }}": "{{ models }}",
"{{COUNT}} hidden lines": "",
"{{COUNT}} Replies": "{{COUNT}} vastausta",
"{{user}}'s Chats": "{{user}}:n keskustelut",
"{{webUIName}} Backend Required": "{{webUIName}}-backend vaaditaan",
@@ -51,6 +52,7 @@
"Admins have access to all tools at all times; users need tools assigned per model in the workspace.": "Ylläpitäjillä on pääsy kaikkiin työkaluihin koko ajan; käyttäjät tarvitsevat työkaluja mallille määritettynä työtilassa.",
"Advanced Parameters": "Edistyneet parametrit",
"Advanced Params": "Edistyneet parametrit",
"All": "",
"All Documents": "Kaikki asiakirjat",
"All models deleted successfully": "Kaikki mallit poistettu onnistuneesti",
"Allow Chat Controls": "Salli keskustelujen hallinta",
@@ -64,7 +66,7 @@
"Allow Voice Interruption in Call": "Salli äänen keskeytys puhelussa",
"Allowed Endpoints": "Hyväksytyt päätepisteet",
"Already have an account?": "Onko sinulla jo tili?",
"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out. (Default: 0.0)": "Vaihtoehto top_p:lle, jolla pyritään varmistamaan laadun ja monipuolisuuden tasapaino. Parametri p edustaa pienintä todennäköisyyttä, jolla token otetaan huomioon suhteessa todennäköisimpään tokeniin. Esimerkiksi p=0.05 ja todennäköisin token todennäköisyydellä 0.9, arvoltaan alle 0.045 olevat logit suodatetaan pois. (Oletus: 0.0)",
"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out.": "",
"Always": "Aina",
"Amazing": "Hämmästyttävä",
"an assistant": "avustaja",
@@ -93,6 +95,7 @@
"Are you sure?": "Oletko varma?",
"Arena Models": "Arena-mallit",
"Artifacts": "Artefaktit",
"Ask": "",
"Ask a question": "Kysy kysymys",
"Assistant": "Avustaja",
"Attach file from knowledge": "",
@@ -127,6 +130,7 @@
"Bing Search V7 Endpoint": "Bing Search V7 -päätepisteen osoite",
"Bing Search V7 Subscription Key": "Bing Search V7 -tilauskäyttäjäavain",
"Bocha Search API Key": "Bocha Search API -avain",
"Boosting or penalizing specific tokens for constrained responses. Bias values will be clamped between -100 and 100 (inclusive). (Default: none)": "",
"Brave Search API Key": "Brave Search API -avain",
"By {{name}}": "Tekijä {{name}}",
"Bypass Embedding and Retrieval": "",
@@ -190,6 +194,7 @@
"Code Interpreter": "Ohjelmatulkki",
"Code Interpreter Engine": "Ohjelmatulkin moottori",
"Code Interpreter Prompt Template": "Ohjelmatulkin kehotemalli",
"Collapse": "",
"Collection": "Kokoelma",
"Color": "Väri",
"ComfyUI": "ComfyUI",
@@ -208,7 +213,7 @@
"Confirm your new password": "Vahvista uusi salasanasi",
"Connect to your own OpenAI compatible API endpoints.": "Yhdistä oma OpenAI yhteensopiva API päätepiste.",
"Connections": "Yhteydet",
"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort. (Default: medium)": "",
"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort.": "",
"Contact Admin for WebUI Access": "Ota yhteyttä ylläpitäjään WebUI-käyttöä varten",
"Content": "Sisältö",
"Content Extraction Engine": "",
@@ -218,9 +223,9 @@
"Continue with Email": "Jatka sähköpostilla",
"Continue with LDAP": "Jatka LDAP:illa",
"Control how message text is split for TTS requests. 'Punctuation' splits into sentences, 'paragraphs' splits into paragraphs, and 'none' keeps the message as a single string.": "Säädä, miten viestin teksti jaetaan puhesynteesipyyntöjä varten. 'Välimerkit' jakaa lauseisiin, 'kappaleet' jakaa kappaleisiin ja 'ei mitään' pitää viestin yhtenä merkkijonona.",
"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled. (Default: 1.1)": "",
"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled.": "",
"Controls": "Ohjaimet",
"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text. (Default: 5.0)": "Säätelee tulosteen yhtenäisyyden ja monimuotoisuuden välistä tasapainoa. Alhaisempi arvo tuottaa keskittyneempää ja yhtenäisempää tekstiä. (Oletus: 5.0)",
"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text.": "",
"Copied": "Kopioitu",
"Copied shared chat URL to clipboard!": "Jaettu keskustelulinkki kopioitu leikepöydälle!",
"Copied to clipboard": "Kopioitu leikepöydälle",
@@ -245,6 +250,7 @@
"Created At": "Luotu",
"Created by": "Luonut",
"CSV Import": "CSV-tuonti",
"Ctrl+Enter to Send": "",
"Current Model": "Nykyinen malli",
"Current Password": "Nykyinen salasana",
"Custom": "Mukautettu",
@@ -353,12 +359,13 @@
"Embedding model set to \"{{embedding_model}}\"": "\"{{embedding_model}}\" valittu upotusmalliksi",
"Enable API Key": "Ota API -avain käyttöön",
"Enable autocomplete generation for chat messages": "Ota automaattinen täydennys käyttöön keskusteluviesteissä",
"Enable Code Execution": "",
"Enable Code Interpreter": "Ota ohjelmatulkki käyttöön",
"Enable Community Sharing": "Ota yhteisön jakaminen käyttöön",
"Enable Memory Locking (mlock) to prevent model data from being swapped out of RAM. This option locks the model's working set of pages into RAM, ensuring that they will not be swapped out to disk. This can help maintain performance by avoiding page faults and ensuring fast data access.": "Ota Memory Locking (mlock) käyttöön estääksesi mallidatan vaihtamisen pois RAM-muistista. Tämä lukitsee mallin työsivut RAM-muistiin, varmistaen että niitä ei vaihdeta levylle. Tämä voi parantaa suorituskykyä välttämällä sivuvikoja ja varmistamalla nopean tietojen käytön.",
"Enable Memory Mapping (mmap) to load model data. This option allows the system to use disk storage as an extension of RAM by treating disk files as if they were in RAM. This can improve model performance by allowing for faster data access. However, it may not work correctly with all systems and can consume a significant amount of disk space.": "Ota Memory Mapping (mmap) käyttöön ladataksesi mallidataa. Tämä vaihtoehto sallii järjestelmän käyttää levytilaa RAM-laajennuksena käsittelemällä levytiedostoja kuin ne olisivat RAM-muistissa. Tämä voi parantaa mallin suorituskykyä sallimalla nopeamman tietojen käytön. Kuitenkin se ei välttämättä toimi oikein kaikissa järjestelmissä ja voi kuluttaa huomattavasti levytilaa.",
"Enable Message Rating": "Ota viestiarviointi käyttöön",
"Enable Mirostat sampling for controlling perplexity. (Default: 0, 0 = Disabled, 1 = Mirostat, 2 = Mirostat 2.0)": "Ota Mirostat-näytteenotto käyttöön perpleksisyyden hallintaan. (Oletus: 0, 0 = Ei käytössä, 1 = Mirostat, 2 = Mirostat 2.0)",
"Enable Mirostat sampling for controlling perplexity.": "",
"Enable New Sign Ups": "Salli uudet rekisteröitymiset",
"Enabled": "Käytössä",
"Ensure your CSV file includes 4 columns in this order: Name, Email, Password, Role.": "Varmista, että CSV-tiedostossasi on 4 saraketta tässä järjestyksessä: Nimi, Sähköposti, Salasana, Rooli.",
@@ -375,6 +382,7 @@
"Enter CFG Scale (e.g. 7.0)": "Kirjoita CFG-mitta (esim. 7.0)",
"Enter Chunk Overlap": "Syötä osien päällekkäisyys",
"Enter Chunk Size": "Syötä osien koko",
"Enter comma-seperated \"token:bias_value\" pairs (example: 5432:100, 413:-100)": "",
"Enter description": "Kirjoita kuvaus",
"Enter Document Intelligence Endpoint": "",
"Enter Document Intelligence Key": "",
@@ -389,11 +397,13 @@
"Enter Jupyter Token": "Kirjoita Jupyter token",
"Enter Jupyter URL": "Kirjoita Jupyter verkko-osoite",
"Enter Kagi Search API Key": "Kirjoita Kagi Search API -avain",
"Enter Key Behavior": "",
"Enter language codes": "Kirjoita kielikoodit",
"Enter Model ID": "Kirjoita mallitunnus",
"Enter model tag (e.g. {{modelTag}})": "Kirjoita mallitagi (esim. {{modelTag}})",
"Enter Mojeek Search API Key": "Kirjoita Mojeek Search API -avain",
"Enter Number of Steps (e.g. 50)": "Kirjoita askelten määrä (esim. 50)",
"Enter Perplexity API Key": "",
"Enter proxy URL (e.g. https://user:password@host:port)": "Kirjoita välityspalvelimen verkko-osoite (esim. https://käyttäjä:salasana@host:portti)",
"Enter reasoning effort": "",
"Enter Sampler (e.g. Euler a)": "Kirjoita näytteistäjä (esim. Euler a)",
@@ -417,6 +427,7 @@
"Enter the public URL of your WebUI. This URL will be used to generate links in the notifications.": "Kirjoita julkinen WebUI verkko-osoitteesi. Verkko-osoitetta käytetään osoitteiden luontiin ilmoituksissa.",
"Enter Tika Server URL": "Kirjoita Tika Server URL",
"Enter timeout in seconds": "Aseta aikakatkaisu sekunneissa",
"Enter to Send": "",
"Enter Top K": "Kirjoita Top K",
"Enter URL (e.g. http://127.0.0.1:7860/)": "Kirjoita verkko-osoite (esim. http://127.0.0.1:7860/)",
"Enter URL (e.g. http://localhost:11434)": "Kirjoita verkko-osoite (esim. http://localhost:11434)",
@@ -440,9 +451,13 @@
"Example: mail": "Esimerkki: mail",
"Example: ou=users,dc=foo,dc=example": "Esimerkki: ou=käyttäjät,dc=foo,dc=example",
"Example: sAMAccountName or uid or userPrincipalName": "Esimerkki: sAMAccountName tai uid tai userPrincipalName",
"Exceeded the number of seats in your license. Please contact support to increase the number of seats.": "",
"Exclude": "Jätä pois",
"Execute code for analysis": "Suorita koodi analysointia varten",
"Expand": "",
"Experimental": "Kokeellinen",
"Explain": "",
"Explain this section to me in more detail": "",
"Explore the cosmos": "Tutki avaruutta",
"Export": "Vie",
"Export All Archived Chats": "Vie kaikki arkistoidut keskustelut",
@@ -566,7 +581,7 @@
"Include": "Sisällytä",
"Include `--api-auth` flag when running stable-diffusion-webui": "Sisällytä `--api-auth`-lippu ajettaessa stable-diffusion-webui",
"Include `--api` flag when running stable-diffusion-webui": "Sisällytä `--api`-lippu ajettaessa stable-diffusion-webui",
"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive. (Default: 0.1)": "Vaikuttaa siihen, kuinka nopeasti algoritmi reagoi tuotetusta tekstistä saatuun palautteeseen. Alhaisempi oppimisaste johtaa hitaampiin säätöihin, kun taas korkeampi oppimisaste tekee algoritmista reaktiivisemman. (Oletus: 0.1)",
"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive.": "",
"Info": "Tiedot",
"Input commands": "Syötekäskyt",
"Install from Github URL": "Asenna Github-URL:stä",
@@ -624,6 +639,7 @@
"Local": "Paikallinen",
"Local Models": "Paikalliset mallit",
"Location access not allowed": "",
"Logit Bias": "",
"Lost": "Mennyt",
"LTR": "LTR",
"Made by Open WebUI Community": "Tehnyt OpenWebUI-yhteisö",
@@ -764,6 +780,7 @@
"Permission denied when accessing microphone": "Käyttöoikeus evätty mikrofonille",
"Permission denied when accessing microphone: {{error}}": "Käyttöoikeus evätty mikrofonille: {{error}}",
"Permissions": "Käyttöoikeudet",
"Perplexity API Key": "",
"Personalization": "Personointi",
"Pin": "Kiinnitä",
"Pinned": "Kiinnitetty",
@@ -809,7 +826,7 @@
"Reasoning Effort": "",
"Record voice": "Nauhoita ääntä",
"Redirecting you to Open WebUI Community": "Ohjataan sinut OpenWebUI-yhteisöön",
"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative. (Default: 40)": "Vähentää todennäköisyyttä tuottaa merkityksetöntä sisältöä. Korkeampi arvo (esim. 100) antaa monipuolisempia vastauksia, kun taas alhaisempi arvo (esim. 10) on konservatiivisempi. (Oletus: 40)",
"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative.": "",
"Refer to yourself as \"User\" (e.g., \"User is learning Spanish\")": "Viittaa itseen \"Käyttäjänä\" (esim. \"Käyttäjä opiskelee espanjaa\")",
"References from": "Viitteet lähteistä",
"Refused when it shouldn't have": "Kieltäytyi, vaikka ei olisi pitänyt",
@@ -918,11 +935,11 @@
"Set the number of worker threads used for computation. This option controls how many threads are used to process incoming requests concurrently. Increasing this value can improve performance under high concurrency workloads but may also consume more CPU resources.": "Aseta työntekijäsäikeiden määrä laskentaa varten. Tämä asetus kontrolloi, kuinka monta säiettä käytetään saapuvien pyyntöjen rinnakkaiseen käsittelyyn. Arvon kasvattaminen voi parantaa suorituskykyä suurissa samanaikaisissa työkuormissa, mutta voi myös kuluttaa enemmän keskussuorittimen resursseja.",
"Set Voice": "Aseta puheääni",
"Set whisper model": "Aseta whisper-malli",
"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 0)": "",
"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 1.1)": "",
"Sets how far back for the model to look back to prevent repetition. (Default: 64, 0 = disabled, -1 = num_ctx)": "Määrittää, kuinka kauas taaksepäin malli katsoo välttääkseen toistoa. (Oletus: 64, 0 = pois käytöstä, -1 = num_ctx)",
"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt. (Default: random)": "Määrittää satunnaislukujen siemenen käytettäväksi generoinnissa. Tämän asettaminen tiettyyn numeroon saa mallin tuottamaan saman tekstin samalle kehotteelle. (Oletus: satunnainen)",
"Sets the size of the context window used to generate the next token. (Default: 2048)": "Määrittää konteksti-ikkunan koon, jota käytetään seuraavan tokenin tuottamiseen. (Oletus: 2048)",
"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
"Sets how far back for the model to look back to prevent repetition.": "",
"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt.": "",
"Sets the size of the context window used to generate the next token.": "",
"Sets the stop sequences to use. When this pattern is encountered, the LLM will stop generating text and return. Multiple stop patterns may be set by specifying multiple separate stop parameters in a modelfile.": "Määrittää käytettävät lopetussekvenssit. Kun tämä kuvio havaitaan, LLM lopettaa tekstin tuottamisen ja palauttaa. Useita lopetuskuvioita voidaan asettaa määrittämällä useita erillisiä lopetusparametreja mallitiedostoon.",
"Settings": "Asetukset",
"Settings saved successfully!": "Asetukset tallennettu onnistuneesti!",
@@ -964,7 +981,7 @@
"System Prompt": "Järjestelmäkehote",
"Tags Generation": "Tagien luonti",
"Tags Generation Prompt": "Tagien luontikehote",
"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting. (default: 1)": "Tail-free-otantaa käytetään vähentämään vähemmän todennäköisten tokenien vaikutusta tulokseen. Korkeampi arvo (esim. 2,0) vähentää vaikutusta enemmän, kun taas arvo 1,0 poistaa tämän asetuksen käytöstä. (oletus: 1)",
"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting.": "",
"Talk to model": "",
"Tap to interrupt": "Napauta keskeyttääksesi",
"Tasks": "Tehtävät",
@@ -979,7 +996,7 @@
"Thanks for your feedback!": "Kiitos palautteestasi!",
"The Application Account DN you bind with for search": "Hakua varten sidottu sovelluksen käyttäjätilin DN",
"The base to search for users": "Käyttäjien haun perusta",
"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory. (Default: 512)": "Erän koko määrittää, kuinka monta tekstipyyntöä käsitellään yhdessä kerralla. Suurempi erän koko voi parantaa mallin suorituskykyä ja nopeutta, mutta se vaatii myös enemmän muistia. (Oletus: 512)",
"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory.": "",
"The developers behind this plugin are passionate volunteers from the community. If you find this plugin helpful, please consider contributing to its development.": "Tämän lisäosan takana olevat kehittäjät ovat intohimoisia vapaaehtoisyhteisöstä. Jos koet tämän lisäosan hyödylliseksi, harkitse sen kehittämisen tukemista.",
"The evaluation leaderboard is based on the Elo rating system and is updated in real-time.": "Arviointitulosluettelo perustuu Elo-luokitusjärjestelmään ja päivittyy reaaliajassa.",
"The LDAP attribute that maps to the mail that users use to sign in.": "",
@@ -988,14 +1005,14 @@
"The maximum file size in MB. If the file size exceeds this limit, the file will not be uploaded.": "Enimmäistiedostokoko megatavuissa. Jos tiedoston koko ylittää tämän rajan, tiedostoa ei ladata.",
"The maximum number of files that can be used at once in chat. If the number of files exceeds this limit, the files will not be uploaded.": "Suurin sallittu tiedostojen määrä käytettäväksi kerralla chatissa. Jos tiedostojen määrä ylittää tämän rajan, niitä ei ladata.",
"The score should be a value between 0.0 (0%) and 1.0 (100%).": "Pisteytyksen tulee olla arvo välillä 0,0 (0 %) ja 1,0 (100 %).",
"The temperature of the model. Increasing the temperature will make the model answer more creatively. (Default: 0.8)": "Mallin lämpötila. Lämpötilan nostaminen saa mallin vastaamaan luovemmin. (Oletus: 0,8)",
"The temperature of the model. Increasing the temperature will make the model answer more creatively.": "",
"Theme": "Teema",
"Thinking...": "Ajattelee...",
"This action cannot be undone. Do you wish to continue?": "Tätä toimintoa ei voi peruuttaa. Haluatko jatkaa?",
"This ensures that your valuable conversations are securely saved to your backend database. Thank you!": "Tämä varmistaa, että arvokkaat keskustelusi tallennetaan turvallisesti backend-tietokantaasi. Kiitos!",
"This is an experimental feature, it may not function as expected and is subject to change at any time.": "Tämä on kokeellinen ominaisuus, se ei välttämättä toimi odotetulla tavalla ja se voi muuttua milloin tahansa.",
"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics. (Default: 24)": "Tämä asetus kontrolloi, kuinka monta tokenia säilytetään päivittäessä kontekstia. Esimerkiksi, jos asetetaan arvoksi 2, säilytetään viimeiset 2 keskustelukontekstin tokenia. Kontekstin säilyttäminen voi auttaa ylläpitämään keskustelun jatkuvuutta, mutta se voi vähentää kykyä vastata uusiin aiheisiin. (Oletus: 24)",
"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated. (Default: 128)": "Tämä asetus määrittää mallin vastauksen enimmäistokenmäärän. Tämän rajan nostaminen mahdollistaa mallin antavan pidempiä vastauksia, mutta se voi myös lisätä epähyödyllisen tai epärelevantin sisällön todennäköisyyttä. (Oletus: 128)",
"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics.": "",
"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated.": "",
"This option will delete all existing files in the collection and replace them with newly uploaded files.": "Tämä vaihtoehto poistaa kaikki kokoelman nykyiset tiedostot ja korvaa ne uusilla ladatuilla tiedostoilla.",
"This response was generated by \"{{model}}\"": "Tämän vastauksen tuotti \"{{model}}\"",
"This will delete": "Tämä poistaa",
@@ -1132,7 +1149,7 @@
"Why?": "Miksi?",
"Widescreen Mode": "Laajakuvatila",
"Won": "Voitti",
"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text. (Default: 0.9)": "Toimii yhdessä top-k:n kanssa. Korkeampi arvo (esim. 0,95) tuottaa monipuolisempaa tekstiä, kun taas alhaisempi arvo (esim. 0,5) tuottaa keskittyneempää ja konservatiivisempaa tekstiä. (Oletus: 0,9)",
"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text.": "",
"Workspace": "Työtila",
"Workspace Permissions": "Työtilan käyttöoikeudet",
"Write": "Kirjoita",
@@ -1142,6 +1159,7 @@
"Write your model template content here": "Kirjoita mallisi mallinnesisältö tähän",
"Yesterday": "Eilen",
"You": "Sinä",
"You are currently using a trial license. Please contact support to upgrade your license.": "",
"You can only chat with a maximum of {{maxCount}} file(s) at a time.": "Voit keskustella enintään {{maxCount}} tiedoston kanssa kerralla.",
"You can personalize your interactions with LLMs by adding memories through the 'Manage' button below, making them more helpful and tailored to you.": "Voit personoida vuorovaikutustasi LLM-ohjelmien kanssa lisäämällä muistoja 'Hallitse'-painikkeen kautta, jolloin ne ovat hyödyllisempiä ja räätälöityjä sinua varten.",
"You cannot upload an empty file.": "Et voi ladata tyhjää tiedostoa.",


@@ -5,6 +5,7 @@
"(e.g. `sh webui.sh --api`)": "(par exemple `sh webui.sh --api`)",
"(latest)": "(dernier)",
"{{ models }}": "{{ modèles }}",
"{{COUNT}} hidden lines": "",
"{{COUNT}} Replies": "",
"{{user}}'s Chats": "Discussions de {{user}}",
"{{webUIName}} Backend Required": "Backend {{webUIName}} requis",
@@ -51,6 +52,7 @@
"Admins have access to all tools at all times; users need tools assigned per model in the workspace.": "Les administrateurs ont accès à tous les outils en tout temps ; les utilisateurs ont besoin d'outils affectés par modèle dans l'espace de travail.",
"Advanced Parameters": "Paramètres avancés",
"Advanced Params": "Paramètres avancés",
"All": "",
"All Documents": "Tous les documents",
"All models deleted successfully": "",
"Allow Chat Controls": "",
@@ -64,7 +66,7 @@
"Allow Voice Interruption in Call": "Autoriser l'interruption vocale pendant un appel",
"Allowed Endpoints": "",
"Already have an account?": "Avez-vous déjà un compte ?",
"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out. (Default: 0.0)": "",
"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out.": "",
"Always": "",
"Amazing": "",
"an assistant": "un assistant",
@@ -93,6 +95,7 @@
"Are you sure?": "Êtes-vous certain ?",
"Arena Models": "",
"Artifacts": "",
"Ask": "",
"Ask a question": "",
"Assistant": "",
"Attach file from knowledge": "",
@@ -127,6 +130,7 @@
"Bing Search V7 Endpoint": "",
"Bing Search V7 Subscription Key": "",
"Bocha Search API Key": "",
"Boosting or penalizing specific tokens for constrained responses. Bias values will be clamped between -100 and 100 (inclusive). (Default: none)": "",
"Brave Search API Key": "Clé API Brave Search",
"By {{name}}": "",
"Bypass Embedding and Retrieval": "",
@@ -190,6 +194,7 @@
"Code Interpreter": "",
"Code Interpreter Engine": "",
"Code Interpreter Prompt Template": "",
"Collapse": "",
"Collection": "Collection",
"Color": "",
"ComfyUI": "ComfyUI",
@@ -208,7 +213,7 @@
"Confirm your new password": "",
"Connect to your own OpenAI compatible API endpoints.": "",
"Connections": "Connexions",
"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort. (Default: medium)": "",
"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort.": "",
"Contact Admin for WebUI Access": "Contacter l'administrateur pour l'accès à l'interface Web",
"Content": "Contenu",
"Content Extraction Engine": "",
@@ -218,9 +223,9 @@
"Continue with Email": "",
"Continue with LDAP": "",
"Control how message text is split for TTS requests. 'Punctuation' splits into sentences, 'paragraphs' splits into paragraphs, and 'none' keeps the message as a single string.": "Contrôle comment le texte des messages est divisé pour les demandes de TTS. 'Ponctuation' divise en phrases, 'paragraphes' divise en paragraphes et 'aucun' garde le message comme une seule chaîne.",
"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled. (Default: 1.1)": "",
"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled.": "",
"Controls": "Contrôles",
"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text. (Default: 5.0)": "",
"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text.": "",
"Copied": "",
"Copied shared chat URL to clipboard!": "URL du chat copiée dans le presse-papiers\u00a0!",
"Copied to clipboard": "",
@@ -245,6 +250,7 @@
"Created At": "Créé le",
"Created by": "Créé par",
"CSV Import": "Import CSV",
"Ctrl+Enter to Send": "",
"Current Model": "Modèle actuel",
"Current Password": "Mot de passe actuel",
"Custom": "Sur mesure",
@@ -353,12 +359,13 @@
"Embedding model set to \"{{embedding_model}}\"": "Modèle d'encodage défini sur « {{embedding_model}} »",
"Enable API Key": "",
"Enable autocomplete generation for chat messages": "",
"Enable Code Execution": "",
"Enable Code Interpreter": "",
"Enable Community Sharing": "Activer le partage communautaire",
"Enable Memory Locking (mlock) to prevent model data from being swapped out of RAM. This option locks the model's working set of pages into RAM, ensuring that they will not be swapped out to disk. This can help maintain performance by avoiding page faults and ensuring fast data access.": "",
"Enable Memory Mapping (mmap) to load model data. This option allows the system to use disk storage as an extension of RAM by treating disk files as if they were in RAM. This can improve model performance by allowing for faster data access. However, it may not work correctly with all systems and can consume a significant amount of disk space.": "",
"Enable Message Rating": "",
"Enable Mirostat sampling for controlling perplexity. (Default: 0, 0 = Disabled, 1 = Mirostat, 2 = Mirostat 2.0)": "",
"Enable Mirostat sampling for controlling perplexity.": "",
"Enable New Sign Ups": "Activer les nouvelles inscriptions",
"Enabled": "",
"Ensure your CSV file includes 4 columns in this order: Name, Email, Password, Role.": "Vérifiez que votre fichier CSV comprenne les 4 colonnes dans cet ordre : Name, Email, Password, Role.",
@@ -375,6 +382,7 @@
"Enter CFG Scale (e.g. 7.0)": "",
"Enter Chunk Overlap": "Entrez le chevauchement de chunk",
"Enter Chunk Size": "Entrez la taille de bloc",
"Enter comma-seperated \"token:bias_value\" pairs (example: 5432:100, 413:-100)": "",
"Enter description": "",
"Enter Document Intelligence Endpoint": "",
"Enter Document Intelligence Key": "",
@@ -389,11 +397,13 @@
"Enter Jupyter Token": "",
"Enter Jupyter URL": "",
"Enter Kagi Search API Key": "",
"Enter Key Behavior": "",
"Enter language codes": "Entrez les codes de langue",
"Enter Model ID": "",
"Enter model tag (e.g. {{modelTag}})": "Entrez l'étiquette du modèle (par ex. {{modelTag}})",
"Enter Mojeek Search API Key": "",
"Enter Number of Steps (e.g. 50)": "Entrez le nombre de pas (par ex. 50)",
"Enter Perplexity API Key": "",
"Enter proxy URL (e.g. https://user:password@host:port)": "",
"Enter reasoning effort": "",
"Enter Sampler (e.g. Euler a)": "",
@@ -417,6 +427,7 @@
"Enter the public URL of your WebUI. This URL will be used to generate links in the notifications.": "",
"Enter Tika Server URL": "",
"Enter timeout in seconds": "",
"Enter to Send": "",
"Enter Top K": "Entrez les Top K",
"Enter URL (e.g. http://127.0.0.1:7860/)": "Entrez l'URL (par ex. http://127.0.0.1:7860/)",
"Enter URL (e.g. http://localhost:11434)": "Entrez l'URL (par ex. http://localhost:11434)",
@@ -440,9 +451,13 @@
"Example: mail": "",
"Example: ou=users,dc=foo,dc=example": "",
"Example: sAMAccountName or uid or userPrincipalName": "",
"Exceeded the number of seats in your license. Please contact support to increase the number of seats.": "",
"Exclude": "",
"Execute code for analysis": "",
"Expand": "",
"Experimental": "Expérimental",
"Explain": "",
"Explain this section to me in more detail": "",
"Explore the cosmos": "",
"Export": "Exportation",
"Export All Archived Chats": "",
@@ -566,7 +581,7 @@
"Include": "",
"Include `--api-auth` flag when running stable-diffusion-webui": "Inclure le drapeau `--api-auth` lors de l'exécution de stable-diffusion-webui",
"Include `--api` flag when running stable-diffusion-webui": "Inclure le drapeau `--api` lorsque vous exécutez stable-diffusion-webui",
"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive. (Default: 0.1)": "",
"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive.": "",
"Info": "Info",
"Input commands": "Entrez les commandes",
"Install from Github URL": "Installer depuis l'URL GitHub",
@@ -624,6 +639,7 @@
"Local": "",
"Local Models": "Modèles locaux",
"Location access not allowed": "",
"Logit Bias": "",
"Lost": "",
"LTR": "LTR",
"Made by Open WebUI Community": "Réalisé par la communauté OpenWebUI",
@@ -764,6 +780,7 @@
"Permission denied when accessing microphone": "Autorisation refusée lors de l'accès au micro",
"Permission denied when accessing microphone: {{error}}": "Permission refusée lors de l'accès au microphone : {{error}}",
"Permissions": "",
"Perplexity API Key": "",
"Personalization": "Personnalisation",
"Pin": "Épingler",
"Pinned": "Épinglé",
@@ -809,7 +826,7 @@
"Reasoning Effort": "",
"Record voice": "Enregistrer la voix",
"Redirecting you to Open WebUI Community": "Redirection vers la communauté OpenWebUI",
"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative. (Default: 40)": "",
"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative.": "",
"Refer to yourself as \"User\" (e.g., \"User is learning Spanish\")": "Désignez-vous comme « Utilisateur » (par ex. « L'utilisateur apprend l'espagnol »)",
"References from": "",
"Refused when it shouldn't have": "Refusé alors qu'il n'aurait pas dû l'être",
@@ -918,11 +935,11 @@
"Set the number of worker threads used for computation. This option controls how many threads are used to process incoming requests concurrently. Increasing this value can improve performance under high concurrency workloads but may also consume more CPU resources.": "",
"Set Voice": "Définir la voix",
"Set whisper model": "",
"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 0)": "",
"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 1.1)": "",
"Sets how far back for the model to look back to prevent repetition. (Default: 64, 0 = disabled, -1 = num_ctx)": "",
"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt. (Default: random)": "",
"Sets the size of the context window used to generate the next token. (Default: 2048)": "",
"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
"Sets how far back for the model to look back to prevent repetition.": "",
"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt.": "",
"Sets the size of the context window used to generate the next token.": "",
"Sets the stop sequences to use. When this pattern is encountered, the LLM will stop generating text and return. Multiple stop patterns may be set by specifying multiple separate stop parameters in a modelfile.": "",
"Settings": "Paramètres",
"Settings saved successfully!": "Paramètres enregistrés avec succès !",
@@ -964,7 +981,7 @@
"System Prompt": "Prompt du système",
"Tags Generation": "",
"Tags Generation Prompt": "",
"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting. (default: 1)": "",
"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting.": "",
"Talk to model": "",
"Tap to interrupt": "Appuyez pour interrompre",
"Tasks": "",
@@ -979,7 +996,7 @@
"Thanks for your feedback!": "Merci pour vos commentaires !",
"The Application Account DN you bind with for search": "",
"The base to search for users": "",
"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory. (Default: 512)": "",
"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory.": "",
"The developers behind this plugin are passionate volunteers from the community. If you find this plugin helpful, please consider contributing to its development.": "",
"The evaluation leaderboard is based on the Elo rating system and is updated in real-time.": "",
"The LDAP attribute that maps to the mail that users use to sign in.": "",
@@ -988,14 +1005,14 @@
"The maximum file size in MB. If the file size exceeds this limit, the file will not be uploaded.": "",
"The maximum number of files that can be used at once in chat. If the number of files exceeds this limit, the files will not be uploaded.": "",
"The score should be a value between 0.0 (0%) and 1.0 (100%).": "Le score doit être une valeur comprise entre 0,0 (0\u00a0%) et 1,0 (100\u00a0%).",
"The temperature of the model. Increasing the temperature will make the model answer more creatively. (Default: 0.8)": "",
"The temperature of the model. Increasing the temperature will make the model answer more creatively.": "",
"Theme": "Thème",
"Thinking...": "En train de réfléchir...",
"This action cannot be undone. Do you wish to continue?": "Cette action ne peut pas être annulée. Souhaitez-vous continuer ?",
"This ensures that your valuable conversations are securely saved to your backend database. Thank you!": "Cela garantit que vos conversations précieuses soient sauvegardées en toute sécurité dans votre base de données backend. Merci !",
"This is an experimental feature, it may not function as expected and is subject to change at any time.": "Il s'agit d'une fonctionnalité expérimentale, elle peut ne pas fonctionner comme prévu et est sujette à modification à tout moment.",
"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics. (Default: 24)": "",
"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated. (Default: 128)": "",
"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics.": "",
"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated.": "",
"This option will delete all existing files in the collection and replace them with newly uploaded files.": "",
"This response was generated by \"{{model}}\"": "",
"This will delete": "Cela supprimera",
@@ -1132,7 +1149,7 @@
"Why?": "",
"Widescreen Mode": "Mode Grand Écran",
"Won": "",
"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text. (Default: 0.9)": "",
"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text.": "",
"Workspace": "Espace de travail",
"Workspace Permissions": "",
"Write": "",
@@ -1142,6 +1159,7 @@
"Write your model template content here": "",
"Yesterday": "Hier",
"You": "Vous",
"You are currently using a trial license. Please contact support to upgrade your license.": "",
"You can only chat with a maximum of {{maxCount}} file(s) at a time.": "",
"You can personalize your interactions with LLMs by adding memories through the 'Manage' button below, making them more helpful and tailored to you.": "Vous pouvez personnaliser vos interactions avec les LLM en ajoutant des souvenirs via le bouton 'Gérer' ci-dessous, ce qui les rendra plus utiles et adaptés à vos besoins.",
"You cannot upload an empty file.": "",

View File

@@ -5,6 +5,7 @@
"(e.g. `sh webui.sh --api`)": "(par exemple `sh webui.sh --api`)",
"(latest)": "(dernière version)",
"{{ models }}": "{{ models }}",
"{{COUNT}} hidden lines": "",
"{{COUNT}} Replies": "{{COUNT}} réponses",
"{{user}}'s Chats": "Conversations de {{user}}",
"{{webUIName}} Backend Required": "Backend {{webUIName}} requis",
@@ -51,6 +52,7 @@
"Admins have access to all tools at all times; users need tools assigned per model in the workspace.": "Les administrateurs ont accès à tous les outils en permanence ; les utilisateurs doivent se voir attribuer des outils pour chaque modèle dans l'espace de travail.",
"Advanced Parameters": "Paramètres avancés",
"Advanced Params": "Paramètres avancés",
"All": "",
"All Documents": "Tous les documents",
"All models deleted successfully": "Tous les modèles ont été supprimés avec succès",
"Allow Chat Controls": "Autoriser les contrôles de chat",
@@ -64,7 +66,7 @@
"Allow Voice Interruption in Call": "Autoriser l'interruption vocale pendant un appel",
"Allowed Endpoints": "Points de terminaison autorisés",
"Already have an account?": "Avez-vous déjà un compte ?",
"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out. (Default: 0.0)": "Alternative au top_p, visant à assurer un équilibre entre qualité et variété. Le paramètre p représente la probabilité minimale pour qu'un token soit pris en compte, par rapport à la probabilité du token le plus probable. Par exemple, avec p=0.05 et le token le plus probable ayant une probabilité de 0.9, les logits ayant une valeur inférieure à 0.045 sont filtrés. (Par défaut : 0.0)",
"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out.": "",
"Always": "",
"Amazing": "Incroyable",
"an assistant": "un assistant",
@@ -93,6 +95,7 @@
"Are you sure?": "Êtes-vous certain ?",
"Arena Models": "Modèles d'arène",
"Artifacts": "Artéfacts",
"Ask": "",
"Ask a question": "Posez votre question",
"Assistant": "Assistant",
"Attach file from knowledge": "",
@@ -127,6 +130,7 @@
"Bing Search V7 Endpoint": "Point de terminaison Bing Search V7",
"Bing Search V7 Subscription Key": "Clé d'abonnement Bing Search V7",
"Bocha Search API Key": "",
"Boosting or penalizing specific tokens for constrained responses. Bias values will be clamped between -100 and 100 (inclusive). (Default: none)": "",
"Brave Search API Key": "Clé API Brave Search",
"By {{name}}": "Par {{name}}",
"Bypass Embedding and Retrieval": "",
@@ -190,6 +194,7 @@
"Code Interpreter": "",
"Code Interpreter Engine": "",
"Code Interpreter Prompt Template": "",
"Collapse": "",
"Collection": "Collection",
"Color": "Couleur",
"ComfyUI": "ComfyUI",
@@ -208,7 +213,7 @@
"Confirm your new password": "Confirmer votre nouveau mot de passe",
"Connect to your own OpenAI compatible API endpoints.": "",
"Connections": "Connexions",
"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort. (Default: medium)": "Contraint l'effort de raisonnement pour les modèles de raisonnement. Applicable uniquement aux modèles de raisonnement de fournisseurs spécifiques qui prennent en charge l'effort de raisonnement. (Par défaut : medium)",
"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort.": "",
"Contact Admin for WebUI Access": "Contacter l'administrateur pour obtenir l'accès à WebUI",
"Content": "Contenu",
"Content Extraction Engine": "",
@@ -218,9 +223,9 @@
"Continue with Email": "Continuer avec l'email",
"Continue with LDAP": "Continuer avec LDAP",
"Control how message text is split for TTS requests. 'Punctuation' splits into sentences, 'paragraphs' splits into paragraphs, and 'none' keeps the message as a single string.": "Contrôle la façon dont le texte des messages est divisé pour les demandes de Text-to-Speech. « ponctuation » divise en phrases, « paragraphes » divise en paragraphes et « aucun » garde le message en tant que chaîne de texte unique.",
"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled. (Default: 1.1)": "",
"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled.": "",
"Controls": "Contrôles",
"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text. (Default: 5.0)": "Contrôle l'équilibre entre la cohérence et la diversité de la sortie. Une valeur plus basse produira un texte plus focalisé et cohérent. (Par défaut : 5.0)",
"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text.": "",
"Copied": "Copié",
"Copied shared chat URL to clipboard!": "URL du chat partagé copiée dans le presse-papiers !",
"Copied to clipboard": "Copié dans le presse-papiers",
@@ -245,6 +250,7 @@
"Created At": "Créé le",
"Created by": "Créé par",
"CSV Import": "Import CSV",
"Ctrl+Enter to Send": "",
"Current Model": "Modèle actuel",
"Current Password": "Mot de passe actuel",
"Custom": "Sur mesure",
@@ -353,12 +359,13 @@
"Embedding model set to \"{{embedding_model}}\"": "Modèle d'embedding défini sur « {{embedding_model}} »",
"Enable API Key": "Activer la clé API",
"Enable autocomplete generation for chat messages": "Activer la génération des suggestions pour les messages",
"Enable Code Execution": "",
"Enable Code Interpreter": "",
"Enable Community Sharing": "Activer le partage communautaire",
"Enable Memory Locking (mlock) to prevent model data from being swapped out of RAM. This option locks the model's working set of pages into RAM, ensuring that they will not be swapped out to disk. This can help maintain performance by avoiding page faults and ensuring fast data access.": "Activer le verrouillage de la mémoire (mlock) pour empêcher les données du modèle d'être échangées de la RAM. Cette option verrouille l'ensemble de pages de travail du modèle en RAM, garantissant qu'elles ne seront pas échangées vers le disque. Cela peut aider à maintenir les performances en évitant les défauts de page et en assurant un accès rapide aux données.",
"Enable Memory Mapping (mmap) to load model data. This option allows the system to use disk storage as an extension of RAM by treating disk files as if they were in RAM. This can improve model performance by allowing for faster data access. However, it may not work correctly with all systems and can consume a significant amount of disk space.": "Activer le mappage de la mémoire (mmap) pour charger les données du modèle. Cette option permet au système d'utiliser le stockage disque comme une extension de la RAM en traitant les fichiers disque comme s'ils étaient en RAM. Cela peut améliorer les performances du modèle en permettant un accès plus rapide aux données. Cependant, cela peut ne pas fonctionner correctement avec tous les systèmes et peut consommer une quantité significative d'espace disque.",
"Enable Message Rating": "Activer l'évaluation des messages",
"Enable Mirostat sampling for controlling perplexity. (Default: 0, 0 = Disabled, 1 = Mirostat, 2 = Mirostat 2.0)": "Activer l'échantillonnage Mirostat pour contrôler la perplexité. (Par défaut : 0, 0 = Désactivé, 1 = Mirostat, 2 = Mirostat 2.0)",
"Enable Mirostat sampling for controlling perplexity.": "",
"Enable New Sign Ups": "Activer les nouvelles inscriptions",
"Enabled": "Activé",
"Ensure your CSV file includes 4 columns in this order: Name, Email, Password, Role.": "Vérifiez que votre fichier CSV comprenne les 4 colonnes dans cet ordre : Name, Email, Password, Role.",
@@ -375,6 +382,7 @@
"Enter CFG Scale (e.g. 7.0)": "Entrez l'échelle CFG (par ex. 7.0)",
"Enter Chunk Overlap": "Entrez le chevauchement des chunks",
"Enter Chunk Size": "Entrez la taille des chunks",
"Enter comma-seperated \"token:bias_value\" pairs (example: 5432:100, 413:-100)": "",
"Enter description": "Entrez la description",
"Enter Document Intelligence Endpoint": "",
"Enter Document Intelligence Key": "",
@@ -389,11 +397,13 @@
"Enter Jupyter Token": "",
"Enter Jupyter URL": "",
"Enter Kagi Search API Key": "Entrez la clé API Kagi Search",
"Enter Key Behavior": "",
"Enter language codes": "Entrez les codes de langue",
"Enter Model ID": "Entrez l'ID du modèle",
"Enter model tag (e.g. {{modelTag}})": "Entrez le tag du modèle (par ex. {{modelTag}})",
"Enter Mojeek Search API Key": "Entrez la clé API Mojeek",
"Enter Number of Steps (e.g. 50)": "Entrez le nombre d'étapes (par ex. 50)",
"Enter Perplexity API Key": "",
"Enter proxy URL (e.g. https://user:password@host:port)": "Entrez l'URL du proxy (par ex. https://user:password@host:port)",
"Enter reasoning effort": "Entrez l'effort de raisonnement",
"Enter Sampler (e.g. Euler a)": "Entrez le sampler (par ex. Euler a)",
@@ -417,6 +427,7 @@
"Enter the public URL of your WebUI. This URL will be used to generate links in the notifications.": "Entrez l'URL publique de votre WebUI. Cette URL sera utilisée pour générer des liens dans les notifications.",
"Enter Tika Server URL": "Entrez l'URL du serveur Tika",
"Enter timeout in seconds": "",
"Enter to Send": "",
"Enter Top K": "Entrez les Top K",
"Enter URL (e.g. http://127.0.0.1:7860/)": "Entrez l'URL (par ex. http://127.0.0.1:7860/)",
"Enter URL (e.g. http://localhost:11434)": "Entrez l'URL (par ex. http://localhost:11434)",
@@ -440,9 +451,13 @@
"Example: mail": "Exemple: mail",
"Example: ou=users,dc=foo,dc=example": "Exemple: ou=utilisateurs,dc=foo,dc=exemple",
"Example: sAMAccountName or uid or userPrincipalName": "Exemple: sAMAccountName ou uid ou userPrincipalName",
"Exceeded the number of seats in your license. Please contact support to increase the number of seats.": "",
"Exclude": "Exclure",
"Execute code for analysis": "",
"Expand": "",
"Experimental": "Expérimental",
"Explain": "",
"Explain this section to me in more detail": "",
"Explore the cosmos": "Explorer le cosmos",
"Export": "Exportation",
"Export All Archived Chats": "Exporter toutes les conversations archivées",
@@ -566,7 +581,7 @@
"Include": "Inclure",
"Include `--api-auth` flag when running stable-diffusion-webui": "Inclure le drapeau `--api-auth` lors de l'exécution de stable-diffusion-webui",
"Include `--api` flag when running stable-diffusion-webui": "Inclure le drapeau `--api` lorsque vous exécutez stable-diffusion-webui",
"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive. (Default: 0.1)": "Influence la rapidité avec laquelle l'algorithme répond aux retours du texte généré. Un taux d'apprentissage plus bas entraînera des ajustements plus lents, tandis qu'un taux d'apprentissage plus élevé rendra l'algorithme plus réactif. (Par défaut : 0.1)",
"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive.": "",
"Info": "Info",
"Input commands": "Commandes d'entrée",
"Install from Github URL": "Installer depuis une URL GitHub",
@@ -624,6 +639,7 @@
"Local": "Local",
"Local Models": "Modèles locaux",
"Location access not allowed": "",
"Logit Bias": "",
"Lost": "Perdu",
"LTR": "LTR",
"Made by Open WebUI Community": "Réalisé par la communauté Open WebUI",
@@ -764,6 +780,7 @@
"Permission denied when accessing microphone": "Accès au microphone refusé",
"Permission denied when accessing microphone: {{error}}": "Accès au microphone refusé : {{error}}",
"Permissions": "Permissions",
"Perplexity API Key": "",
"Personalization": "Personnalisation",
"Pin": "Épingler",
"Pinned": "Épinglé",
@@ -809,7 +826,7 @@
"Reasoning Effort": "Effort de raisonnement",
"Record voice": "Enregistrer la voix",
"Redirecting you to Open WebUI Community": "Redirection vers la communauté Open WebUI",
"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative. (Default: 40)": "Réduit la probabilité de générer des non-sens. Une valeur plus élevée (par exemple 100) donnera des réponses plus diversifiées, tandis qu'une valeur plus basse (par exemple 10) sera plus conservatrice. (Par défaut : 40)",
"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative.": "",
"Refer to yourself as \"User\" (e.g., \"User is learning Spanish\")": "Désignez-vous comme « Utilisateur » (par ex. « L'utilisateur apprend l'espagnol »)",
"References from": "Références de",
"Refused when it shouldn't have": "Refusé alors qu'il n'aurait pas dû l'être",
@@ -918,11 +935,11 @@
"Set the number of worker threads used for computation. This option controls how many threads are used to process incoming requests concurrently. Increasing this value can improve performance under high concurrency workloads but may also consume more CPU resources.": "Définir le nombre de threads de travail utilisés pour le calcul. Cette option contrôle combien de threads sont utilisés pour traiter les demandes entrantes simultanément. L'augmentation de cette valeur peut améliorer les performances sous de fortes charges de travail concurrentes mais peut également consommer plus de ressources CPU.",
"Set Voice": "Choisir la voix",
"Set whisper model": "Choisir le modèle Whisper",
"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 0)": "",
"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 1.1)": "",
"Sets how far back for the model to look back to prevent repetition. (Default: 64, 0 = disabled, -1 = num_ctx)": "Définit la profondeur de recherche du modèle pour prévenir les répétitions. (Par défaut : 64, 0 = désactivé, -1 = num_ctx)",
"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt. (Default: random)": "Définit la graine de nombre aléatoire à utiliser pour la génération. La définition de cette valeur à un nombre spécifique fera que le modèle générera le même texte pour le même prompt. (Par défaut : aléatoire)",
"Sets the size of the context window used to generate the next token. (Default: 2048)": "Définit la taille de la fenêtre contextuelle utilisée pour générer le prochain token. (Par défaut : 2048)",
"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
"Sets how far back for the model to look back to prevent repetition.": "",
"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt.": "",
"Sets the size of the context window used to generate the next token.": "",
"Sets the stop sequences to use. When this pattern is encountered, the LLM will stop generating text and return. Multiple stop patterns may be set by specifying multiple separate stop parameters in a modelfile.": "Définit les séquences d'arrêt à utiliser. Lorsque ce motif est rencontré, le LLM cessera de générer du texte et retournera. Plusieurs motifs d'arrêt peuvent être définis en spécifiant plusieurs paramètres d'arrêt distincts dans un fichier modèle.",
"Settings": "Paramètres",
"Settings saved successfully!": "Paramètres enregistrés avec succès !",
@@ -964,7 +981,7 @@
"System Prompt": "Prompt système",
"Tags Generation": "Génération de tags",
"Tags Generation Prompt": "Prompt de génération de tags",
"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting. (default: 1)": "L'échantillonnage sans queue est utilisé pour réduire l'impact des tokens moins probables dans la sortie. Une valeur plus élevée (par exemple 2.0) réduira davantage l'impact, tandis qu'une valeur de 1.0 désactive ce paramètre. (par défaut : 1)",
"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting.": "",
"Talk to model": "Parler au modèle",
"Tap to interrupt": "Appuyez pour interrompre",
"Tasks": "Tâches",
@@ -979,7 +996,7 @@
"Thanks for your feedback!": "Merci pour vos commentaires !",
"The Application Account DN you bind with for search": "Le DN du compte de l'application avec lequel vous vous liez pour la recherche",
"The base to search for users": "La base pour rechercher des utilisateurs",
"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory. (Default: 512)": "La taille de lot détermine combien de demandes de texte sont traitées ensemble en une fois. Une taille de lot plus grande peut augmenter les performances et la vitesse du modèle, mais elle nécessite également plus de mémoire. (Par défaut : 512)",
"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory.": "",
"The developers behind this plugin are passionate volunteers from the community. If you find this plugin helpful, please consider contributing to its development.": "Les développeurs de ce plugin sont des bénévoles passionnés issus de la communauté. Si vous trouvez ce plugin utile, merci de contribuer à son développement.",
"The evaluation leaderboard is based on the Elo rating system and is updated in real-time.": "Le classement d'évaluation est basé sur le système de notation Elo et est mis à jour en temps réel.",
"The LDAP attribute that maps to the mail that users use to sign in.": "L'attribut LDAP qui correspond à l'adresse e-mail que les utilisateurs utilisent pour se connecter.",
@@ -988,14 +1005,14 @@
"The maximum file size in MB. If the file size exceeds this limit, the file will not be uploaded.": "La taille maximale du fichier en Mo. Si la taille du fichier dépasse cette limite, le fichier ne sera pas téléchargé.",
"The maximum number of files that can be used at once in chat. If the number of files exceeds this limit, the files will not be uploaded.": "Le nombre maximal de fichiers pouvant être utilisés en même temps dans la conversation. Si le nombre de fichiers dépasse cette limite, les fichiers ne seront pas téléchargés.",
"The score should be a value between 0.0 (0%) and 1.0 (100%).": "Le score doit être une valeur comprise entre 0,0 (0%) et 1,0 (100%).",
"The temperature of the model. Increasing the temperature will make the model answer more creatively. (Default: 0.8)": "La température du modèle. Augmenter la température rendra le modèle plus créatif dans ses réponses. (Par défaut : 0.8)",
"The temperature of the model. Increasing the temperature will make the model answer more creatively.": "",
"Theme": "Thème",
"Thinking...": "En train de réfléchir...",
"This action cannot be undone. Do you wish to continue?": "Cette action ne peut pas être annulée. Souhaitez-vous continuer ?",
"This ensures that your valuable conversations are securely saved to your backend database. Thank you!": "Cela garantit que vos conversations précieuses soient sauvegardées en toute sécurité dans votre base de données backend. Merci !",
"This is an experimental feature, it may not function as expected and is subject to change at any time.": "Il s'agit d'une fonctionnalité expérimentale, elle peut ne pas fonctionner comme prévu et est sujette à modification à tout moment.",
"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics. (Default: 24)": "Cette option contrôle combien de tokens sont conservés lors du rafraîchissement du contexte. Par exemple, si ce paramètre est défini à 2, les 2 derniers tokens du contexte de conversation seront conservés. Préserver le contexte peut aider à maintenir la continuité d'une conversation, mais cela peut réduire la capacité à répondre à de nouveaux sujets. (Par défaut : 24)",
"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated. (Default: 128)": "Cette option définit le nombre maximum de tokens que le modèle peut générer dans sa réponse. Augmenter cette limite permet au modèle de fournir des réponses plus longues, mais cela peut également augmenter la probabilité de générer du contenu inutile ou non pertinent. (Par défaut : 128)",
"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics.": "",
"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated.": "",
"This option will delete all existing files in the collection and replace them with newly uploaded files.": "Cette option supprimera tous les fichiers existants dans la collection et les remplacera par les fichiers nouvellement téléchargés.",
"This response was generated by \"{{model}}\"": "Cette réponse a été générée par \"{{model}}\"",
"This will delete": "Cela supprimera",
@@ -1132,7 +1149,7 @@
"Why?": "Pourquoi ?",
"Widescreen Mode": "Mode grand écran",
"Won": "Victoires",
"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text. (Default: 0.9)": "Fonctionne avec le top-k. Une valeur plus élevée (par ex. 0.95) donnera un texte plus diversifié, tandis qu'une valeur plus basse (par ex. 0.5) générera un texte plus concentré et conservateur. (Par défaut : 0.9)",
"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text.": "",
"Workspace": "Espace de travail",
"Workspace Permissions": "Autorisations de l'espace de travail",
"Write": "Écrire",
@@ -1142,6 +1159,7 @@
"Write your model template content here": "Écrivez ici le contenu de votre modèle",
"Yesterday": "Hier",
"You": "Vous",
"You are currently using a trial license. Please contact support to upgrade your license.": "",
"You can only chat with a maximum of {{maxCount}} file(s) at a time.": "Vous ne pouvez discuter qu'avec un maximum de {{maxCount}} fichier(s) à la fois.",
"You can personalize your interactions with LLMs by adding memories through the 'Manage' button below, making them more helpful and tailored to you.": "Vous pouvez personnaliser vos interactions avec les LLM en ajoutant des mémoires à l'aide du bouton « Gérer » ci-dessous, ce qui les rendra plus utiles et mieux adaptées à vos besoins.",
"You cannot upload an empty file.": "Vous ne pouvez pas envoyer un fichier vide.",

View File

@@ -5,6 +5,7 @@
"(e.g. `sh webui.sh --api`)": "(למשל `sh webui.sh --api`)",
"(latest)": "(האחרון)",
"{{ models }}": "{{ models }}",
"{{COUNT}} hidden lines": "",
"{{COUNT}} Replies": "",
"{{user}}'s Chats": "צ'אטים של {{user}}",
"{{webUIName}} Backend Required": "נדרש Backend של {{webUIName}}",
@@ -51,6 +52,7 @@
"Admins have access to all tools at all times; users need tools assigned per model in the workspace.": "",
"Advanced Parameters": "פרמטרים מתקדמים",
"Advanced Params": "פרמטרים מתקדמים",
"All": "",
"All Documents": "כל המסמכים",
"All models deleted successfully": "",
"Allow Chat Controls": "",
@@ -64,7 +66,7 @@
"Allow Voice Interruption in Call": "",
"Allowed Endpoints": "",
"Already have an account?": "כבר יש לך חשבון?",
"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out. (Default: 0.0)": "",
"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out.": "",
"Always": "",
"Amazing": "",
"an assistant": "עוזר",
@@ -93,6 +95,7 @@
"Are you sure?": "האם אתה בטוח?",
"Arena Models": "",
"Artifacts": "",
"Ask": "",
"Ask a question": "",
"Assistant": "",
"Attach file from knowledge": "",
@@ -127,6 +130,7 @@
"Bing Search V7 Endpoint": "",
"Bing Search V7 Subscription Key": "",
"Bocha Search API Key": "",
"Boosting or penalizing specific tokens for constrained responses. Bias values will be clamped between -100 and 100 (inclusive). (Default: none)": "",
"Brave Search API Key": "מפתח API של חיפוש אמיץ",
"By {{name}}": "",
"Bypass Embedding and Retrieval": "",
@@ -190,6 +194,7 @@
"Code Interpreter": "",
"Code Interpreter Engine": "",
"Code Interpreter Prompt Template": "",
"Collapse": "",
"Collection": "אוסף",
"Color": "",
"ComfyUI": "ComfyUI",
@@ -208,7 +213,7 @@
"Confirm your new password": "",
"Connect to your own OpenAI compatible API endpoints.": "",
"Connections": "חיבורים",
"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort. (Default: medium)": "",
"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort.": "",
"Contact Admin for WebUI Access": "",
"Content": "תוכן",
"Content Extraction Engine": "",
@@ -218,9 +223,9 @@
"Continue with Email": "",
"Continue with LDAP": "",
"Control how message text is split for TTS requests. 'Punctuation' splits into sentences, 'paragraphs' splits into paragraphs, and 'none' keeps the message as a single string.": "",
"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled. (Default: 1.1)": "",
"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled.": "",
"Controls": "",
"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text. (Default: 5.0)": "",
"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text.": "",
"Copied": "",
"Copied shared chat URL to clipboard!": "העתקת כתובת URL של צ'אט משותף ללוח!",
"Copied to clipboard": "",
@@ -245,6 +250,7 @@
"Created At": "נוצר ב",
"Created by": "",
"CSV Import": "",
"Ctrl+Enter to Send": "",
"Current Model": "המודל הנוכחי",
"Current Password": "הסיסמה הנוכחית",
"Custom": "מותאם אישית",
@@ -353,12 +359,13 @@
"Embedding model set to \"{{embedding_model}}\"": "מודל ההטמעה הוגדר ל-\"{{embedding_model}}\"",
"Enable API Key": "",
"Enable autocomplete generation for chat messages": "",
"Enable Code Execution": "",
"Enable Code Interpreter": "",
"Enable Community Sharing": "הפיכת שיתוף קהילה לזמין",
"Enable Memory Locking (mlock) to prevent model data from being swapped out of RAM. This option locks the model's working set of pages into RAM, ensuring that they will not be swapped out to disk. This can help maintain performance by avoiding page faults and ensuring fast data access.": "",
"Enable Memory Mapping (mmap) to load model data. This option allows the system to use disk storage as an extension of RAM by treating disk files as if they were in RAM. This can improve model performance by allowing for faster data access. However, it may not work correctly with all systems and can consume a significant amount of disk space.": "",
"Enable Message Rating": "",
"Enable Mirostat sampling for controlling perplexity. (Default: 0, 0 = Disabled, 1 = Mirostat, 2 = Mirostat 2.0)": "",
"Enable Mirostat sampling for controlling perplexity.": "",
"Enable New Sign Ups": "אפשר הרשמות חדשות",
"Enabled": "",
"Ensure your CSV file includes 4 columns in this order: Name, Email, Password, Role.": "ודא שקובץ ה-CSV שלך כולל 4 עמודות בסדר הבא: שם, דוא\"ל, סיסמה, תפקיד.",
@@ -375,6 +382,7 @@
"Enter CFG Scale (e.g. 7.0)": "",
"Enter Chunk Overlap": "הזן חפיפת נתונים",
"Enter Chunk Size": "הזן גודל נתונים",
"Enter comma-seperated \"token:bias_value\" pairs (example: 5432:100, 413:-100)": "",
"Enter description": "",
"Enter Document Intelligence Endpoint": "",
"Enter Document Intelligence Key": "",
@@ -389,11 +397,13 @@
"Enter Jupyter Token": "",
"Enter Jupyter URL": "",
"Enter Kagi Search API Key": "",
"Enter Key Behavior": "",
"Enter language codes": "הזן קודי שפה",
"Enter Model ID": "",
"Enter model tag (e.g. {{modelTag}})": "הזן תג מודל (למשל {{modelTag}})",
"Enter Mojeek Search API Key": "",
"Enter Number of Steps (e.g. 50)": "הזן מספר שלבים (למשל 50)",
"Enter Perplexity API Key": "",
"Enter proxy URL (e.g. https://user:password@host:port)": "",
"Enter reasoning effort": "",
"Enter Sampler (e.g. Euler a)": "",
@@ -417,6 +427,7 @@
"Enter the public URL of your WebUI. This URL will be used to generate links in the notifications.": "",
"Enter Tika Server URL": "",
"Enter timeout in seconds": "",
"Enter to Send": "",
"Enter Top K": "הזן Top K",
"Enter URL (e.g. http://127.0.0.1:7860/)": "הזן כתובת URL (למשל http://127.0.0.1:7860/)",
"Enter URL (e.g. http://localhost:11434)": "הזן כתובת URL (למשל http://localhost:11434)",
@@ -440,9 +451,13 @@
"Example: mail": "",
"Example: ou=users,dc=foo,dc=example": "",
"Example: sAMAccountName or uid or userPrincipalName": "",
"Exceeded the number of seats in your license. Please contact support to increase the number of seats.": "",
"Exclude": "",
"Execute code for analysis": "",
"Expand": "",
"Experimental": "ניסיוני",
"Explain": "",
"Explain this section to me in more detail": "",
"Explore the cosmos": "",
"Export": "ייצא",
"Export All Archived Chats": "",
@@ -566,7 +581,7 @@
"Include": "",
"Include `--api-auth` flag when running stable-diffusion-webui": "",
"Include `--api` flag when running stable-diffusion-webui": "כלול את הדגל `--api` בעת הרצת stable-diffusion-webui",
"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive. (Default: 0.1)": "",
"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive.": "",
"Info": "מידע",
"Input commands": "פקודות קלט",
"Install from Github URL": "התקן מכתובת URL של Github",
@@ -624,6 +639,7 @@
"Local": "",
"Local Models": "",
"Location access not allowed": "",
"Logit Bias": "",
"Lost": "",
"LTR": "LTR",
"Made by Open WebUI Community": "נוצר על ידי קהילת OpenWebUI",
@@ -764,6 +780,7 @@
"Permission denied when accessing microphone": "",
"Permission denied when accessing microphone: {{error}}": "ההרשאה נדחתה בעת גישה למיקרופון: {{error}}",
"Permissions": "",
"Perplexity API Key": "",
"Personalization": "התאמה אישית",
"Pin": "",
"Pinned": "",
@@ -809,7 +826,7 @@
"Reasoning Effort": "",
"Record voice": "הקלט קול",
"Redirecting you to Open WebUI Community": "מפנה אותך לקהילת OpenWebUI",
"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative. (Default: 40)": "",
"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative.": "",
"Refer to yourself as \"User\" (e.g., \"User is learning Spanish\")": "",
"References from": "",
"Refused when it shouldn't have": "נדחה כאשר לא היה צריך",
@@ -918,11 +935,11 @@
"Set the number of worker threads used for computation. This option controls how many threads are used to process incoming requests concurrently. Increasing this value can improve performance under high concurrency workloads but may also consume more CPU resources.": "",
"Set Voice": "הגדר קול",
"Set whisper model": "",
"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 0)": "",
"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 1.1)": "",
"Sets how far back for the model to look back to prevent repetition. (Default: 64, 0 = disabled, -1 = num_ctx)": "",
"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt. (Default: random)": "",
"Sets the size of the context window used to generate the next token. (Default: 2048)": "",
"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
"Sets how far back for the model to look back to prevent repetition.": "",
"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt.": "",
"Sets the size of the context window used to generate the next token.": "",
"Sets the stop sequences to use. When this pattern is encountered, the LLM will stop generating text and return. Multiple stop patterns may be set by specifying multiple separate stop parameters in a modelfile.": "",
"Settings": "הגדרות",
"Settings saved successfully!": "ההגדרות נשמרו בהצלחה!",
@@ -964,7 +981,7 @@
"System Prompt": "הנחיית מערכת",
"Tags Generation": "",
"Tags Generation Prompt": "",
"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting. (default: 1)": "",
"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting.": "",
"Talk to model": "",
"Tap to interrupt": "",
"Tasks": "",
@@ -979,7 +996,7 @@
"Thanks for your feedback!": "תודה על המשוב שלך!",
"The Application Account DN you bind with for search": "",
"The base to search for users": "",
"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory. (Default: 512)": "",
"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory.": "",
"The developers behind this plugin are passionate volunteers from the community. If you find this plugin helpful, please consider contributing to its development.": "",
"The evaluation leaderboard is based on the Elo rating system and is updated in real-time.": "",
"The LDAP attribute that maps to the mail that users use to sign in.": "",
@@ -988,14 +1005,14 @@
"The maximum file size in MB. If the file size exceeds this limit, the file will not be uploaded.": "",
"The maximum number of files that can be used at once in chat. If the number of files exceeds this limit, the files will not be uploaded.": "",
"The score should be a value between 0.0 (0%) and 1.0 (100%).": "ציון צריך להיות ערך בין 0.0 (0%) ל-1.0 (100%)",
"The temperature of the model. Increasing the temperature will make the model answer more creatively. (Default: 0.8)": "",
"The temperature of the model. Increasing the temperature will make the model answer more creatively.": "",
"Theme": "נושא",
"Thinking...": "",
"This action cannot be undone. Do you wish to continue?": "",
"This ensures that your valuable conversations are securely saved to your backend database. Thank you!": "פעולה זו מבטיחה שהשיחות בעלות הערך שלך יישמרו באופן מאובטח במסד הנתונים העורפי שלך. תודה!",
"This is an experimental feature, it may not function as expected and is subject to change at any time.": "",
"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics. (Default: 24)": "",
"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated. (Default: 128)": "",
"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics.": "",
"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated.": "",
"This option will delete all existing files in the collection and replace them with newly uploaded files.": "",
"This response was generated by \"{{model}}\"": "",
"This will delete": "",
@@ -1132,7 +1149,7 @@
"Why?": "",
"Widescreen Mode": "",
"Won": "",
"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text. (Default: 0.9)": "",
"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text.": "",
"Workspace": "סביבת עבודה",
"Workspace Permissions": "",
"Write": "",
@@ -1142,6 +1159,7 @@
"Write your model template content here": "",
"Yesterday": "אתמול",
"You": "אתה",
"You are currently using a trial license. Please contact support to upgrade your license.": "",
"You can only chat with a maximum of {{maxCount}} file(s) at a time.": "",
"You can personalize your interactions with LLMs by adding memories through the 'Manage' button below, making them more helpful and tailored to you.": "",
"You cannot upload an empty file.": "",


@@ -5,6 +5,7 @@
"(e.g. `sh webui.sh --api`)": "(e.g. `sh webui.sh --api`)",
"(latest)": "(latest)",
"{{ models }}": "{{ मॉडल }}",
"{{COUNT}} hidden lines": "",
"{{COUNT}} Replies": "",
"{{user}}'s Chats": "{{user}} की चैट",
"{{webUIName}} Backend Required": "{{webUIName}} बैकएंड आवश्यक",
@@ -51,6 +52,7 @@
"Admins have access to all tools at all times; users need tools assigned per model in the workspace.": "",
"Advanced Parameters": "उन्नत पैरामीटर",
"Advanced Params": "उन्नत परम",
"All": "",
"All Documents": "सभी डॉक्यूमेंट्स",
"All models deleted successfully": "",
"Allow Chat Controls": "",
@@ -64,7 +66,7 @@
"Allow Voice Interruption in Call": "",
"Allowed Endpoints": "",
"Already have an account?": "क्या आपके पास पहले से एक खाता मौजूद है?",
"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out. (Default: 0.0)": "",
"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out.": "",
"Always": "",
"Amazing": "",
"an assistant": "एक सहायक",
@@ -93,6 +95,7 @@
"Are you sure?": "क्या आपको यकीन है?",
"Arena Models": "",
"Artifacts": "",
"Ask": "",
"Ask a question": "",
"Assistant": "",
"Attach file from knowledge": "",
@@ -127,6 +130,7 @@
"Bing Search V7 Endpoint": "",
"Bing Search V7 Subscription Key": "",
"Bocha Search API Key": "",
"Boosting or penalizing specific tokens for constrained responses. Bias values will be clamped between -100 and 100 (inclusive). (Default: none)": "",
"Brave Search API Key": "Brave सर्च एपीआई कुंजी",
"By {{name}}": "",
"Bypass Embedding and Retrieval": "",
@@ -190,6 +194,7 @@
"Code Interpreter": "",
"Code Interpreter Engine": "",
"Code Interpreter Prompt Template": "",
"Collapse": "",
"Collection": "संग्रह",
"Color": "",
"ComfyUI": "ComfyUI",
@@ -208,7 +213,7 @@
"Confirm your new password": "",
"Connect to your own OpenAI compatible API endpoints.": "",
"Connections": "सम्बन्ध",
"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort. (Default: medium)": "",
"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort.": "",
"Contact Admin for WebUI Access": "",
"Content": "सामग्री",
"Content Extraction Engine": "",
@@ -218,9 +223,9 @@
"Continue with Email": "",
"Continue with LDAP": "",
"Control how message text is split for TTS requests. 'Punctuation' splits into sentences, 'paragraphs' splits into paragraphs, and 'none' keeps the message as a single string.": "",
"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled. (Default: 1.1)": "",
"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled.": "",
"Controls": "",
"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text. (Default: 5.0)": "",
"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text.": "",
"Copied": "",
"Copied shared chat URL to clipboard!": "साझा चैट URL को क्लिपबोर्ड पर कॉपी किया गया!",
"Copied to clipboard": "",
@@ -245,6 +250,7 @@
"Created At": "किस समय बनाया गया",
"Created by": "",
"CSV Import": "",
"Ctrl+Enter to Send": "",
"Current Model": "वर्तमान मॉडल",
"Current Password": "वर्तमान पासवर्ड",
"Custom": "कस्टम संस्करण",
@@ -353,12 +359,13 @@
"Embedding model set to \"{{embedding_model}}\"": "एम्बेडिंग मॉडल को \"{{embedding_model}}\" पर सेट किया गया",
"Enable API Key": "",
"Enable autocomplete generation for chat messages": "",
"Enable Code Execution": "",
"Enable Code Interpreter": "",
"Enable Community Sharing": "समुदाय साझाकरण सक्षम करें",
"Enable Memory Locking (mlock) to prevent model data from being swapped out of RAM. This option locks the model's working set of pages into RAM, ensuring that they will not be swapped out to disk. This can help maintain performance by avoiding page faults and ensuring fast data access.": "",
"Enable Memory Mapping (mmap) to load model data. This option allows the system to use disk storage as an extension of RAM by treating disk files as if they were in RAM. This can improve model performance by allowing for faster data access. However, it may not work correctly with all systems and can consume a significant amount of disk space.": "",
"Enable Message Rating": "",
"Enable Mirostat sampling for controlling perplexity. (Default: 0, 0 = Disabled, 1 = Mirostat, 2 = Mirostat 2.0)": "",
"Enable Mirostat sampling for controlling perplexity.": "",
"Enable New Sign Ups": "नए साइन अप सक्रिय करें",
"Enabled": "",
"Ensure your CSV file includes 4 columns in this order: Name, Email, Password, Role.": "सुनिश्चित करें कि आपकी CSV फ़ाइल में इस क्रम में 4 कॉलम शामिल हैं: नाम, ईमेल, पासवर्ड, भूमिका।",
@@ -375,6 +382,7 @@
"Enter CFG Scale (e.g. 7.0)": "",
"Enter Chunk Overlap": "चंक ओवरलैप दर्ज करें",
"Enter Chunk Size": "खंड आकार दर्ज करें",
"Enter comma-seperated \"token:bias_value\" pairs (example: 5432:100, 413:-100)": "",
"Enter description": "",
"Enter Document Intelligence Endpoint": "",
"Enter Document Intelligence Key": "",
@@ -389,11 +397,13 @@
"Enter Jupyter Token": "",
"Enter Jupyter URL": "",
"Enter Kagi Search API Key": "",
"Enter Key Behavior": "",
"Enter language codes": "भाषा कोड दर्ज करें",
"Enter Model ID": "",
"Enter model tag (e.g. {{modelTag}})": "Model tag दर्ज करें (उदा. {{modelTag}})",
"Enter Mojeek Search API Key": "",
"Enter Number of Steps (e.g. 50)": "चरणों की संख्या दर्ज करें (उदा. 50)",
"Enter Perplexity API Key": "",
"Enter proxy URL (e.g. https://user:password@host:port)": "",
"Enter reasoning effort": "",
"Enter Sampler (e.g. Euler a)": "",
@@ -417,6 +427,7 @@
"Enter the public URL of your WebUI. This URL will be used to generate links in the notifications.": "",
"Enter Tika Server URL": "",
"Enter timeout in seconds": "",
"Enter to Send": "",
"Enter Top K": "शीर्ष K दर्ज करें",
"Enter URL (e.g. http://127.0.0.1:7860/)": "यूआरएल दर्ज करें (उदा. http://127.0.0.1:7860/)",
"Enter URL (e.g. http://localhost:11434)": "यूआरएल दर्ज करें (उदा. http://localhost:11434)",
@@ -440,9 +451,13 @@
"Example: mail": "",
"Example: ou=users,dc=foo,dc=example": "",
"Example: sAMAccountName or uid or userPrincipalName": "",
"Exceeded the number of seats in your license. Please contact support to increase the number of seats.": "",
"Exclude": "",
"Execute code for analysis": "",
"Expand": "",
"Experimental": "प्रयोगात्मक",
"Explain": "",
"Explain this section to me in more detail": "",
"Explore the cosmos": "",
"Export": "निर्यात",
"Export All Archived Chats": "",
@@ -566,7 +581,7 @@
"Include": "",
"Include `--api-auth` flag when running stable-diffusion-webui": "",
"Include `--api` flag when running stable-diffusion-webui": "stable-diffusion-webui चलाते समय `--api` ध्वज शामिल करें",
"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive. (Default: 0.1)": "",
"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive.": "",
"Info": "सूचना-विषयक",
"Input commands": "इनपुट कमांड",
"Install from Github URL": "Github URL से इंस्टॉल करें",
@@ -624,6 +639,7 @@
"Local": "",
"Local Models": "",
"Location access not allowed": "",
"Logit Bias": "",
"Lost": "",
"LTR": "LTR",
"Made by Open WebUI Community": "OpenWebUI समुदाय द्वारा निर्मित",
@@ -764,6 +780,7 @@
"Permission denied when accessing microphone": "",
"Permission denied when accessing microphone: {{error}}": "माइक्रोफ़ोन तक पहुँचने पर अनुमति अस्वीकृत: {{error}}",
"Permissions": "",
"Perplexity API Key": "",
"Personalization": "पर्सनलाइज़ेशन",
"Pin": "",
"Pinned": "",
@@ -809,7 +826,7 @@
"Reasoning Effort": "",
"Record voice": "आवाज रिकॉर्ड करना",
"Redirecting you to Open WebUI Community": "आपको OpenWebUI समुदाय पर पुनर्निर्देशित किया जा रहा है",
"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative. (Default: 40)": "",
"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative.": "",
"Refer to yourself as \"User\" (e.g., \"User is learning Spanish\")": "",
"References from": "",
"Refused when it shouldn't have": "जब ऐसा नहीं होना चाहिए था तो मना कर दिया",
@@ -918,11 +935,11 @@
"Set the number of worker threads used for computation. This option controls how many threads are used to process incoming requests concurrently. Increasing this value can improve performance under high concurrency workloads but may also consume more CPU resources.": "",
"Set Voice": "आवाज सेट करें",
"Set whisper model": "",
"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 0)": "",
"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 1.1)": "",
"Sets how far back for the model to look back to prevent repetition. (Default: 64, 0 = disabled, -1 = num_ctx)": "",
"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt. (Default: random)": "",
"Sets the size of the context window used to generate the next token. (Default: 2048)": "",
"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
"Sets how far back for the model to look back to prevent repetition.": "",
"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt.": "",
"Sets the size of the context window used to generate the next token.": "",
"Sets the stop sequences to use. When this pattern is encountered, the LLM will stop generating text and return. Multiple stop patterns may be set by specifying multiple separate stop parameters in a modelfile.": "",
"Settings": "सेटिंग्स",
"Settings saved successfully!": "सेटिंग्स सफलतापूर्वक सहेजी गईं!",
@@ -964,7 +981,7 @@
"System Prompt": "सिस्टम प्रॉम्प्ट",
"Tags Generation": "",
"Tags Generation Prompt": "",
"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting. (default: 1)": "",
"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting.": "",
"Talk to model": "",
"Tap to interrupt": "",
"Tasks": "",
@@ -979,7 +996,7 @@
"Thanks for your feedback!": "आपकी प्रतिक्रिया के लिए धन्यवाद!",
"The Application Account DN you bind with for search": "",
"The base to search for users": "",
"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory. (Default: 512)": "",
"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory.": "",
"The developers behind this plugin are passionate volunteers from the community. If you find this plugin helpful, please consider contributing to its development.": "",
"The evaluation leaderboard is based on the Elo rating system and is updated in real-time.": "",
"The LDAP attribute that maps to the mail that users use to sign in.": "",
@@ -988,14 +1005,14 @@
"The maximum file size in MB. If the file size exceeds this limit, the file will not be uploaded.": "",
"The maximum number of files that can be used at once in chat. If the number of files exceeds this limit, the files will not be uploaded.": "",
"The score should be a value between 0.0 (0%) and 1.0 (100%).": "स्कोर का मान 0.0 (0%) और 1.0 (100%) के बीच होना चाहिए।",
"The temperature of the model. Increasing the temperature will make the model answer more creatively. (Default: 0.8)": "",
"The temperature of the model. Increasing the temperature will make the model answer more creatively.": "",
"Theme": "थीम",
"Thinking...": "",
"This action cannot be undone. Do you wish to continue?": "",
"This ensures that your valuable conversations are securely saved to your backend database. Thank you!": "यह सुनिश्चित करता है कि आपकी मूल्यवान बातचीत आपके बैकएंड डेटाबेस में सुरक्षित रूप से सहेजी गई है। धन्यवाद!",
"This is an experimental feature, it may not function as expected and is subject to change at any time.": "",
"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics. (Default: 24)": "",
"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated. (Default: 128)": "",
"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics.": "",
"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated.": "",
"This option will delete all existing files in the collection and replace them with newly uploaded files.": "",
"This response was generated by \"{{model}}\"": "",
"This will delete": "",
@@ -1132,7 +1149,7 @@
"Why?": "",
"Widescreen Mode": "",
"Won": "",
"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text. (Default: 0.9)": "",
"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text.": "",
"Workspace": "वर्कस्पेस",
"Workspace Permissions": "",
"Write": "",
@@ -1142,6 +1159,7 @@
"Write your model template content here": "",
"Yesterday": "कल",
"You": "आप",
"You are currently using a trial license. Please contact support to upgrade your license.": "",
"You can only chat with a maximum of {{maxCount}} file(s) at a time.": "",
"You can personalize your interactions with LLMs by adding memories through the 'Manage' button below, making them more helpful and tailored to you.": "",
"You cannot upload an empty file.": "",


@@ -5,6 +5,7 @@
"(e.g. `sh webui.sh --api`)": "(npr. `sh webui.sh --api`)",
"(latest)": "(najnovije)",
"{{ models }}": "{{ modeli }}",
"{{COUNT}} hidden lines": "",
"{{COUNT}} Replies": "",
"{{user}}'s Chats": "Razgovori korisnika {{user}}",
"{{webUIName}} Backend Required": "{{webUIName}} Backend je potreban",
@@ -51,6 +52,7 @@
"Admins have access to all tools at all times; users need tools assigned per model in the workspace.": "",
"Advanced Parameters": "Napredni parametri",
"Advanced Params": "Napredni parametri",
"All": "",
"All Documents": "Svi dokumenti",
"All models deleted successfully": "",
"Allow Chat Controls": "",
@@ -64,7 +66,7 @@
"Allow Voice Interruption in Call": "",
"Allowed Endpoints": "",
"Already have an account?": "Već imate račun?",
"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out. (Default: 0.0)": "",
"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out.": "",
"Always": "",
"Amazing": "",
"an assistant": "asistent",
@@ -93,6 +95,7 @@
"Are you sure?": "Jeste li sigurni?",
"Arena Models": "",
"Artifacts": "",
"Ask": "",
"Ask a question": "",
"Assistant": "",
"Attach file from knowledge": "",
@@ -127,6 +130,7 @@
"Bing Search V7 Endpoint": "",
"Bing Search V7 Subscription Key": "",
"Bocha Search API Key": "",
"Boosting or penalizing specific tokens for constrained responses. Bias values will be clamped between -100 and 100 (inclusive). (Default: none)": "",
"Brave Search API Key": "Brave tražilica - API ključ",
"By {{name}}": "",
"Bypass Embedding and Retrieval": "",
@@ -190,6 +194,7 @@
"Code Interpreter": "",
"Code Interpreter Engine": "",
"Code Interpreter Prompt Template": "",
"Collapse": "",
"Collection": "Kolekcija",
"Color": "",
"ComfyUI": "ComfyUI",
@@ -208,7 +213,7 @@
"Confirm your new password": "",
"Connect to your own OpenAI compatible API endpoints.": "",
"Connections": "Povezivanja",
"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort. (Default: medium)": "",
"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort.": "",
"Contact Admin for WebUI Access": "Kontaktirajte admina za WebUI pristup",
"Content": "Sadržaj",
"Content Extraction Engine": "",
@@ -218,9 +223,9 @@
"Continue with Email": "",
"Continue with LDAP": "",
"Control how message text is split for TTS requests. 'Punctuation' splits into sentences, 'paragraphs' splits into paragraphs, and 'none' keeps the message as a single string.": "",
"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled. (Default: 1.1)": "",
"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled.": "",
"Controls": "",
"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text. (Default: 5.0)": "",
"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text.": "",
"Copied": "",
"Copied shared chat URL to clipboard!": "URL dijeljenog razgovora kopiran u međuspremnik!",
"Copied to clipboard": "",
@@ -245,6 +250,7 @@
"Created At": "Stvoreno",
"Created by": "",
"CSV Import": "",
"Ctrl+Enter to Send": "",
"Current Model": "Trenutni model",
"Current Password": "Trenutna lozinka",
"Custom": "Prilagođeno",
@@ -353,12 +359,13 @@
"Embedding model set to \"{{embedding_model}}\"": "Embedding model postavljen na \"{{embedding_model}}\"",
"Enable API Key": "",
"Enable autocomplete generation for chat messages": "",
"Enable Code Execution": "",
"Enable Code Interpreter": "",
"Enable Community Sharing": "Omogući zajedničko korištenje zajednice",
"Enable Memory Locking (mlock) to prevent model data from being swapped out of RAM. This option locks the model's working set of pages into RAM, ensuring that they will not be swapped out to disk. This can help maintain performance by avoiding page faults and ensuring fast data access.": "",
"Enable Memory Mapping (mmap) to load model data. This option allows the system to use disk storage as an extension of RAM by treating disk files as if they were in RAM. This can improve model performance by allowing for faster data access. However, it may not work correctly with all systems and can consume a significant amount of disk space.": "",
"Enable Message Rating": "",
"Enable Mirostat sampling for controlling perplexity. (Default: 0, 0 = Disabled, 1 = Mirostat, 2 = Mirostat 2.0)": "",
"Enable Mirostat sampling for controlling perplexity.": "",
"Enable New Sign Ups": "Omogući nove prijave",
"Enabled": "",
"Ensure your CSV file includes 4 columns in this order: Name, Email, Password, Role.": "Provjerite da vaša CSV datoteka uključuje 4 stupca u ovom redoslijedu: Name, Email, Password, Role.",
@@ -375,6 +382,7 @@
"Enter CFG Scale (e.g. 7.0)": "",
"Enter Chunk Overlap": "Unesite preklapanje dijelova",
"Enter Chunk Size": "Unesite veličinu dijela",
"Enter comma-seperated \"token:bias_value\" pairs (example: 5432:100, 413:-100)": "",
"Enter description": "",
"Enter Document Intelligence Endpoint": "",
"Enter Document Intelligence Key": "",
@@ -389,11 +397,13 @@
"Enter Jupyter Token": "",
"Enter Jupyter URL": "",
"Enter Kagi Search API Key": "",
"Enter Key Behavior": "",
"Enter language codes": "Unesite kodove jezika",
"Enter Model ID": "",
"Enter model tag (e.g. {{modelTag}})": "Unesite oznaku modela (npr. {{modelTag}})",
"Enter Mojeek Search API Key": "",
"Enter Number of Steps (e.g. 50)": "Unesite broj koraka (npr. 50)",
"Enter Perplexity API Key": "",
"Enter proxy URL (e.g. https://user:password@host:port)": "",
"Enter reasoning effort": "",
"Enter Sampler (e.g. Euler a)": "",
@@ -417,6 +427,7 @@
"Enter the public URL of your WebUI. This URL will be used to generate links in the notifications.": "",
"Enter Tika Server URL": "",
"Enter timeout in seconds": "",
"Enter to Send": "",
"Enter Top K": "Unesite Top K",
"Enter URL (e.g. http://127.0.0.1:7860/)": "Unesite URL (npr. http://127.0.0.1:7860/)",
"Enter URL (e.g. http://localhost:11434)": "Unesite URL (npr. http://localhost:11434)",
@@ -440,9 +451,13 @@
"Example: mail": "",
"Example: ou=users,dc=foo,dc=example": "",
"Example: sAMAccountName or uid or userPrincipalName": "",
"Exceeded the number of seats in your license. Please contact support to increase the number of seats.": "",
"Exclude": "",
"Execute code for analysis": "",
"Expand": "",
"Experimental": "Eksperimentalno",
"Explain": "",
"Explain this section to me in more detail": "",
"Explore the cosmos": "",
"Export": "Izvoz",
"Export All Archived Chats": "",
@@ -566,7 +581,7 @@
"Include": "",
"Include `--api-auth` flag when running stable-diffusion-webui": "",
"Include `--api` flag when running stable-diffusion-webui": "Uključite zastavicu `--api` prilikom pokretanja stable-diffusion-webui",
"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive. (Default: 0.1)": "",
"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive.": "",
"Info": "Informacije",
"Input commands": "Unos naredbi",
"Install from Github URL": "Instaliraj s Github URL-a",
@@ -624,6 +639,7 @@
"Local": "",
"Local Models": "Lokalni modeli",
"Location access not allowed": "",
"Logit Bias": "",
"Lost": "",
"LTR": "LTR",
"Made by Open WebUI Community": "Izradio OpenWebUI Community",
@@ -764,6 +780,7 @@
"Permission denied when accessing microphone": "Dopuštenje je odbijeno prilikom pristupa mikrofonu",
"Permission denied when accessing microphone: {{error}}": "Pristup mikrofonu odbijen: {{error}}",
"Permissions": "",
"Perplexity API Key": "",
"Personalization": "Prilagodba",
"Pin": "",
"Pinned": "",
@@ -809,7 +826,7 @@
"Reasoning Effort": "",
"Record voice": "Snimanje glasa",
"Redirecting you to Open WebUI Community": "Preusmjeravanje na OpenWebUI zajednicu",
"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative. (Default: 40)": "",
"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative.": "",
"Refer to yourself as \"User\" (e.g., \"User is learning Spanish\")": "Nazivajte se \"Korisnik\" (npr. \"Korisnik uči španjolski\")",
"References from": "",
"Refused when it shouldn't have": "Odbijen kada nije trebao biti",
@@ -918,11 +935,11 @@
"Set the number of worker threads used for computation. This option controls how many threads are used to process incoming requests concurrently. Increasing this value can improve performance under high concurrency workloads but may also consume more CPU resources.": "",
"Set Voice": "Postavi glas",
"Set whisper model": "",
"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 0)": "",
"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 1.1)": "",
"Sets how far back for the model to look back to prevent repetition. (Default: 64, 0 = disabled, -1 = num_ctx)": "",
"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt. (Default: random)": "",
"Sets the size of the context window used to generate the next token. (Default: 2048)": "",
"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
"Sets how far back for the model to look back to prevent repetition.": "",
"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt.": "",
"Sets the size of the context window used to generate the next token.": "",
"Sets the stop sequences to use. When this pattern is encountered, the LLM will stop generating text and return. Multiple stop patterns may be set by specifying multiple separate stop parameters in a modelfile.": "",
"Settings": "Postavke",
"Settings saved successfully!": "Postavke su uspješno spremljene!",
@@ -964,7 +981,7 @@
"System Prompt": "Sistemski prompt",
"Tags Generation": "",
"Tags Generation Prompt": "",
"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting. (default: 1)": "",
"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting.": "",
"Talk to model": "",
"Tap to interrupt": "",
"Tasks": "",
@@ -979,7 +996,7 @@
"Thanks for your feedback!": "Hvala na povratnim informacijama!",
"The Application Account DN you bind with for search": "",
"The base to search for users": "",
"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory. (Default: 512)": "",
"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory.": "",
"The developers behind this plugin are passionate volunteers from the community. If you find this plugin helpful, please consider contributing to its development.": "",
"The evaluation leaderboard is based on the Elo rating system and is updated in real-time.": "",
"The LDAP attribute that maps to the mail that users use to sign in.": "",
@@ -988,14 +1005,14 @@
"The maximum file size in MB. If the file size exceeds this limit, the file will not be uploaded.": "",
"The maximum number of files that can be used at once in chat. If the number of files exceeds this limit, the files will not be uploaded.": "",
"The score should be a value between 0.0 (0%) and 1.0 (100%).": "Ocjena treba biti vrijednost između 0,0 (0%) i 1,0 (100%).",
"The temperature of the model. Increasing the temperature will make the model answer more creatively. (Default: 0.8)": "",
"The temperature of the model. Increasing the temperature will make the model answer more creatively.": "",
"Theme": "Tema",
"Thinking...": "Razmišljam",
"This action cannot be undone. Do you wish to continue?": "",
"This ensures that your valuable conversations are securely saved to your backend database. Thank you!": "Ovo osigurava da su vaši vrijedni razgovori sigurno spremljeni u bazu podataka. Hvala vam!",
"This is an experimental feature, it may not function as expected and is subject to change at any time.": "Ovo je eksperimentalna značajka, možda neće funkcionirati prema očekivanjima i podložna je promjenama u bilo kojem trenutku.",
"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics. (Default: 24)": "",
"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated. (Default: 128)": "",
"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics.": "",
"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated.": "",
"This option will delete all existing files in the collection and replace them with newly uploaded files.": "",
"This response was generated by \"{{model}}\"": "",
"This will delete": "",
@@ -1132,7 +1149,7 @@
"Why?": "",
"Widescreen Mode": "Mod širokog zaslona",
"Won": "",
"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text. (Default: 0.9)": "",
"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text.": "",
"Workspace": "Radna ploča",
"Workspace Permissions": "",
"Write": "",
@@ -1142,6 +1159,7 @@
"Write your model template content here": "",
"Yesterday": "Jučer",
"You": "Vi",
"You are currently using a trial license. Please contact support to upgrade your license.": "",
"You can only chat with a maximum of {{maxCount}} file(s) at a time.": "",
"You can personalize your interactions with LLMs by adding memories through the 'Manage' button below, making them more helpful and tailored to you.": "Možete personalizirati svoje interakcije s LLM-ima dodavanjem uspomena putem gumba 'Upravljanje' u nastavku, čineći ih korisnijima i prilagođenijima vama.",
"You cannot upload an empty file.": "",


@@ -5,6 +5,7 @@
"(e.g. `sh webui.sh --api`)": "(pl. `sh webui.sh --api`)",
"(latest)": "(legújabb)",
"{{ models }}": "{{ modellek }}",
"{{COUNT}} hidden lines": "",
"{{COUNT}} Replies": "",
"{{user}}'s Chats": "{{user}} beszélgetései",
"{{webUIName}} Backend Required": "{{webUIName}} Backend szükséges",
@@ -51,6 +52,7 @@
"Admins have access to all tools at all times; users need tools assigned per model in the workspace.": "Az adminok mindig hozzáférnek minden eszközhöz; a felhasználóknak modellenként kell eszközöket hozzárendelni a munkaterületen.",
"Advanced Parameters": "Haladó paraméterek",
"Advanced Params": "Haladó paraméterek",
"All": "",
"All Documents": "Minden dokumentum",
"All models deleted successfully": "",
"Allow Chat Controls": "",
@@ -64,7 +66,7 @@
"Allow Voice Interruption in Call": "Hang megszakítás engedélyezése hívás közben",
"Allowed Endpoints": "",
"Already have an account?": "Már van fiókod?",
"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out. (Default: 0.0)": "",
"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out.": "",
"Always": "",
"Amazing": "",
"an assistant": "egy asszisztens",
@@ -93,6 +95,7 @@
"Are you sure?": "Biztos vagy benne?",
"Arena Models": "Arena modellek",
"Artifacts": "Műtermékek",
"Ask": "",
"Ask a question": "Kérdezz valamit",
"Assistant": "Asszisztens",
"Attach file from knowledge": "",
@@ -127,6 +130,7 @@
"Bing Search V7 Endpoint": "",
"Bing Search V7 Subscription Key": "",
"Bocha Search API Key": "",
"Boosting or penalizing specific tokens for constrained responses. Bias values will be clamped between -100 and 100 (inclusive). (Default: none)": "",
"Brave Search API Key": "Brave Search API kulcs",
"By {{name}}": "",
"Bypass Embedding and Retrieval": "",
@@ -190,6 +194,7 @@
"Code Interpreter": "",
"Code Interpreter Engine": "",
"Code Interpreter Prompt Template": "",
"Collapse": "",
"Collection": "Gyűjtemény",
"Color": "",
"ComfyUI": "ComfyUI",
@@ -208,7 +213,7 @@
"Confirm your new password": "",
"Connect to your own OpenAI compatible API endpoints.": "",
"Connections": "Kapcsolatok",
"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort. (Default: medium)": "",
"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort.": "",
"Contact Admin for WebUI Access": "Lépj kapcsolatba az adminnal a WebUI hozzáférésért",
"Content": "Tartalom",
"Content Extraction Engine": "",
@@ -218,9 +223,9 @@
"Continue with Email": "",
"Continue with LDAP": "",
"Control how message text is split for TTS requests. 'Punctuation' splits into sentences, 'paragraphs' splits into paragraphs, and 'none' keeps the message as a single string.": "Szabályozd, hogyan legyen felosztva az üzenet szövege a TTS kérésekhez. A 'Központozás' mondatokra bontja, a 'Bekezdések' bekezdésekre bontja, a 'Nincs' pedig egyetlen szövegként kezeli az üzenetet.",
"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled. (Default: 1.1)": "",
"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled.": "",
"Controls": "Vezérlők",
"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text. (Default: 5.0)": "",
"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text.": "",
"Copied": "Másolva",
"Copied shared chat URL to clipboard!": "Megosztott beszélgetés URL másolva a vágólapra!",
"Copied to clipboard": "Vágólapra másolva",
@@ -245,6 +250,7 @@
"Created At": "Létrehozva",
"Created by": "Létrehozta",
"CSV Import": "CSV importálás",
"Ctrl+Enter to Send": "",
"Current Model": "Jelenlegi modell",
"Current Password": "Jelenlegi jelszó",
"Custom": "Egyéni",
@@ -353,12 +359,13 @@
"Embedding model set to \"{{embedding_model}}\"": "Beágyazási modell beállítva: \"{{embedding_model}}\"",
"Enable API Key": "",
"Enable autocomplete generation for chat messages": "",
"Enable Code Execution": "",
"Enable Code Interpreter": "",
"Enable Community Sharing": "Közösségi megosztás engedélyezése",
"Enable Memory Locking (mlock) to prevent model data from being swapped out of RAM. This option locks the model's working set of pages into RAM, ensuring that they will not be swapped out to disk. This can help maintain performance by avoiding page faults and ensuring fast data access.": "",
"Enable Memory Mapping (mmap) to load model data. This option allows the system to use disk storage as an extension of RAM by treating disk files as if they were in RAM. This can improve model performance by allowing for faster data access. However, it may not work correctly with all systems and can consume a significant amount of disk space.": "",
"Enable Message Rating": "Üzenet értékelés engedélyezése",
"Enable Mirostat sampling for controlling perplexity. (Default: 0, 0 = Disabled, 1 = Mirostat, 2 = Mirostat 2.0)": "",
"Enable Mirostat sampling for controlling perplexity.": "",
"Enable New Sign Ups": "Új regisztrációk engedélyezése",
"Enabled": "Engedélyezve",
"Ensure your CSV file includes 4 columns in this order: Name, Email, Password, Role.": "Győződj meg róla, hogy a CSV fájl tartalmazza ezt a 4 oszlopot ebben a sorrendben: Név, Email, Jelszó, Szerep.",
@@ -375,6 +382,7 @@
"Enter CFG Scale (e.g. 7.0)": "Add meg a CFG skálát (pl. 7.0)",
"Enter Chunk Overlap": "Add meg a darab átfedést",
"Enter Chunk Size": "Add meg a darab méretet",
"Enter comma-seperated \"token:bias_value\" pairs (example: 5432:100, 413:-100)": "",
"Enter description": "Add meg a leírást",
"Enter Document Intelligence Endpoint": "",
"Enter Document Intelligence Key": "",
@@ -389,11 +397,13 @@
"Enter Jupyter Token": "",
"Enter Jupyter URL": "",
"Enter Kagi Search API Key": "",
"Enter Key Behavior": "",
"Enter language codes": "Add meg a nyelvi kódokat",
"Enter Model ID": "Add meg a modell azonosítót",
"Enter model tag (e.g. {{modelTag}})": "Add meg a modell címkét (pl. {{modelTag}})",
"Enter Mojeek Search API Key": "",
"Enter Number of Steps (e.g. 50)": "Add meg a lépések számát (pl. 50)",
"Enter Perplexity API Key": "",
"Enter proxy URL (e.g. https://user:password@host:port)": "",
"Enter reasoning effort": "",
"Enter Sampler (e.g. Euler a)": "Add meg a mintavételezőt (pl. Euler a)",
@@ -417,6 +427,7 @@
"Enter the public URL of your WebUI. This URL will be used to generate links in the notifications.": "",
"Enter Tika Server URL": "Add meg a Tika szerver URL-t",
"Enter timeout in seconds": "",
"Enter to Send": "",
"Enter Top K": "Add meg a Top K értéket",
"Enter URL (e.g. http://127.0.0.1:7860/)": "Add meg az URL-t (pl. http://127.0.0.1:7860/)",
"Enter URL (e.g. http://localhost:11434)": "Add meg az URL-t (pl. http://localhost:11434)",
@@ -440,9 +451,13 @@
"Example: mail": "",
"Example: ou=users,dc=foo,dc=example": "",
"Example: sAMAccountName or uid or userPrincipalName": "",
"Exceeded the number of seats in your license. Please contact support to increase the number of seats.": "",
"Exclude": "Kizárás",
"Execute code for analysis": "",
"Expand": "",
"Experimental": "Kísérleti",
"Explain": "",
"Explain this section to me in more detail": "",
"Explore the cosmos": "",
"Export": "Exportálás",
"Export All Archived Chats": "",
@@ -566,7 +581,7 @@
"Include": "Tartalmaz",
"Include `--api-auth` flag when running stable-diffusion-webui": "Add hozzá a `--api-auth` kapcsolót a stable-diffusion-webui futtatásakor",
"Include `--api` flag when running stable-diffusion-webui": "Add hozzá a `--api` kapcsolót a stable-diffusion-webui futtatásakor",
"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive. (Default: 0.1)": "",
"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive.": "",
"Info": "Információ",
"Input commands": "Beviteli parancsok",
"Install from Github URL": "Telepítés Github URL-ről",
@@ -624,6 +639,7 @@
"Local": "",
"Local Models": "Helyi modellek",
"Location access not allowed": "",
"Logit Bias": "",
"Lost": "Elveszett",
"LTR": "LTR",
"Made by Open WebUI Community": "Az OpenWebUI közösség által készítve",
@@ -764,6 +780,7 @@
"Permission denied when accessing microphone": "Hozzáférés megtagadva a mikrofonhoz",
"Permission denied when accessing microphone: {{error}}": "Hozzáférés megtagadva a mikrofonhoz: {{error}}",
"Permissions": "",
"Perplexity API Key": "",
"Personalization": "Személyre szabás",
"Pin": "Rögzítés",
"Pinned": "Rögzítve",
@@ -809,7 +826,7 @@
"Reasoning Effort": "",
"Record voice": "Hang rögzítése",
"Redirecting you to Open WebUI Community": "Átirányítás az OpenWebUI közösséghez",
"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative. (Default: 40)": "",
"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative.": "",
"Refer to yourself as \"User\" (e.g., \"User is learning Spanish\")": "Hivatkozzon magára \"Felhasználó\"-ként (pl. \"A Felhasználó spanyolul tanul\")",
"References from": "Hivatkozások innen",
"Refused when it shouldn't have": "Elutasítva, amikor nem kellett volna",
@@ -918,11 +935,11 @@
"Set the number of worker threads used for computation. This option controls how many threads are used to process incoming requests concurrently. Increasing this value can improve performance under high concurrency workloads but may also consume more CPU resources.": "",
"Set Voice": "Hang beállítása",
"Set whisper model": "Whisper modell beállítása",
"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 0)": "",
"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 1.1)": "",
"Sets how far back for the model to look back to prevent repetition. (Default: 64, 0 = disabled, -1 = num_ctx)": "",
"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt. (Default: random)": "",
"Sets the size of the context window used to generate the next token. (Default: 2048)": "",
"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
"Sets how far back for the model to look back to prevent repetition.": "",
"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt.": "",
"Sets the size of the context window used to generate the next token.": "",
"Sets the stop sequences to use. When this pattern is encountered, the LLM will stop generating text and return. Multiple stop patterns may be set by specifying multiple separate stop parameters in a modelfile.": "",
"Settings": "Beállítások",
"Settings saved successfully!": "Beállítások sikeresen mentve!",
@@ -964,7 +981,7 @@
"System Prompt": "Rendszer prompt",
"Tags Generation": "",
"Tags Generation Prompt": "Címke generálási prompt",
"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting. (default: 1)": "",
"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting.": "",
"Talk to model": "",
"Tap to interrupt": "Koppintson a megszakításhoz",
"Tasks": "",
@@ -979,7 +996,7 @@
"Thanks for your feedback!": "Köszönjük a visszajelzést!",
"The Application Account DN you bind with for search": "",
"The base to search for users": "",
"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory. (Default: 512)": "",
"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory.": "",
"The developers behind this plugin are passionate volunteers from the community. If you find this plugin helpful, please consider contributing to its development.": "A bővítmény fejlesztői lelkes önkéntesek a közösségből. Ha hasznosnak találja ezt a bővítményt, kérjük, fontolja meg a fejlesztéséhez való hozzájárulást.",
"The evaluation leaderboard is based on the Elo rating system and is updated in real-time.": "Az értékelési ranglista az Elo értékelési rendszeren alapul és valós időben frissül.",
"The LDAP attribute that maps to the mail that users use to sign in.": "",
@@ -988,14 +1005,14 @@
"The maximum file size in MB. If the file size exceeds this limit, the file will not be uploaded.": "A maximális fájlméret MB-ban. Ha a fájlméret meghaladja ezt a limitet, a fájl nem lesz feltöltve.",
"The maximum number of files that can be used at once in chat. If the number of files exceeds this limit, the files will not be uploaded.": "A chatben egyszerre használható fájlok maximális száma. Ha a fájlok száma meghaladja ezt a limitet, a fájlok nem lesznek feltöltve.",
"The score should be a value between 0.0 (0%) and 1.0 (100%).": "A pontszámnak 0,0 (0%) és 1,0 (100%) közötti értéknek kell lennie.",
"The temperature of the model. Increasing the temperature will make the model answer more creatively. (Default: 0.8)": "",
"The temperature of the model. Increasing the temperature will make the model answer more creatively.": "",
"Theme": "Téma",
"Thinking...": "Gondolkodik...",
"This action cannot be undone. Do you wish to continue?": "Ez a művelet nem vonható vissza. Szeretné folytatni?",
"This ensures that your valuable conversations are securely saved to your backend database. Thank you!": "Ez biztosítja, hogy értékes beszélgetései biztonságosan mentésre kerüljenek a backend adatbázisban. Köszönjük!",
"This is an experimental feature, it may not function as expected and is subject to change at any time.": "Ez egy kísérleti funkció, lehet, hogy nem a várt módon működik és bármikor változhat.",
"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics. (Default: 24)": "",
"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated. (Default: 128)": "",
"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics.": "",
"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated.": "",
"This option will delete all existing files in the collection and replace them with newly uploaded files.": "Ez az opció törli az összes meglévő fájlt a gyűjteményben és lecseréli őket az újonnan feltöltött fájlokkal.",
"This response was generated by \"{{model}}\"": "Ezt a választ a \"{{model}}\" generálta",
"This will delete": "Ez törölni fogja",
@@ -1132,7 +1149,7 @@
"Why?": "",
"Widescreen Mode": "Szélesvásznú mód",
"Won": "Nyert",
"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text. (Default: 0.9)": "",
"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text.": "",
"Workspace": "Munkaterület",
"Workspace Permissions": "",
"Write": "",
@@ -1142,6 +1159,7 @@
"Write your model template content here": "",
"Yesterday": "Tegnap",
"You": "Ön",
"You are currently using a trial license. Please contact support to upgrade your license.": "",
"You can only chat with a maximum of {{maxCount}} file(s) at a time.": "Egyszerre maximum {{maxCount}} fájllal tud csevegni.",
"You can personalize your interactions with LLMs by adding memories through the 'Manage' button below, making them more helpful and tailored to you.": "Az LLM-ekkel való interakcióit személyre szabhatja emlékek hozzáadásával a lenti 'Kezelés' gomb segítségével, így azok még hasznosabbak és személyre szabottabbak lesznek.",
"You cannot upload an empty file.": "Nem tölthet fel üres fájlt.",


@@ -5,6 +5,7 @@
"(e.g. `sh webui.sh --api`)": "(contoh: `sh webui.sh --api`)",
"(latest)": "(terbaru)",
"{{ models }}": "{{ models }}",
"{{COUNT}} hidden lines": "",
"{{COUNT}} Replies": "",
"{{user}}'s Chats": "Obrolan {{user}}",
"{{webUIName}} Backend Required": "{{webUIName}} Diperlukan Backend",
@@ -51,6 +52,7 @@
"Admins have access to all tools at all times; users need tools assigned per model in the workspace.": "Admin memiliki akses ke semua alat setiap saat; pengguna memerlukan alat yang ditetapkan per model di ruang kerja.",
"Advanced Parameters": "Parameter Lanjutan",
"Advanced Params": "Parameter Lanjutan",
"All": "",
"All Documents": "Semua Dokumen",
"All models deleted successfully": "",
"Allow Chat Controls": "",
@ -64,7 +66,7 @@
"Allow Voice Interruption in Call": "Izinkan Gangguan Suara dalam Panggilan",
"Allowed Endpoints": "",
"Already have an account?": "Sudah memiliki akun?",
"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out. (Default: 0.0)": "",
"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out.": "",
"Always": "",
"Amazing": "",
"an assistant": "asisten",
@ -93,6 +95,7 @@
"Are you sure?": "Apakah Anda yakin?",
"Arena Models": "",
"Artifacts": "",
"Ask": "",
"Ask a question": "",
"Assistant": "",
"Attach file from knowledge": "",
@ -127,6 +130,7 @@
"Bing Search V7 Endpoint": "",
"Bing Search V7 Subscription Key": "",
"Bocha Search API Key": "",
"Boosting or penalizing specific tokens for constrained responses. Bias values will be clamped between -100 and 100 (inclusive). (Default: none)": "",
"Brave Search API Key": "Kunci API Brave Search",
"By {{name}}": "",
"Bypass Embedding and Retrieval": "",
@ -190,6 +194,7 @@
"Code Interpreter": "",
"Code Interpreter Engine": "",
"Code Interpreter Prompt Template": "",
"Collapse": "",
"Collection": "Koleksi",
"Color": "",
"ComfyUI": "ComfyUI",
@ -208,7 +213,7 @@
"Confirm your new password": "",
"Connect to your own OpenAI compatible API endpoints.": "",
"Connections": "Koneksi",
"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort. (Default: medium)": "",
"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort.": "",
"Contact Admin for WebUI Access": "Hubungi Admin untuk Akses WebUI",
"Content": "Konten",
"Content Extraction Engine": "",
@ -218,9 +223,9 @@
"Continue with Email": "",
"Continue with LDAP": "",
"Control how message text is split for TTS requests. 'Punctuation' splits into sentences, 'paragraphs' splits into paragraphs, and 'none' keeps the message as a single string.": "",
"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled. (Default: 1.1)": "",
"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled.": "",
"Controls": "",
"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text. (Default: 5.0)": "",
"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text.": "",
"Copied": "",
"Copied shared chat URL to clipboard!": "Menyalin URL obrolan bersama ke papan klip!",
"Copied to clipboard": "",
@ -245,6 +250,7 @@
"Created At": "Dibuat di",
"Created by": "Dibuat oleh",
"CSV Import": "Impor CSV",
"Ctrl+Enter to Send": "",
"Current Model": "Model Saat Ini",
"Current Password": "Kata Sandi Saat Ini",
"Custom": "Kustom",
@ -353,12 +359,13 @@
"Embedding model set to \"{{embedding_model}}\"": "Model penyematan diatur ke \"{{embedding_model}}\"",
"Enable API Key": "",
"Enable autocomplete generation for chat messages": "",
"Enable Code Execution": "",
"Enable Code Interpreter": "",
"Enable Community Sharing": "Aktifkan Berbagi Komunitas",
"Enable Memory Locking (mlock) to prevent model data from being swapped out of RAM. This option locks the model's working set of pages into RAM, ensuring that they will not be swapped out to disk. This can help maintain performance by avoiding page faults and ensuring fast data access.": "",
"Enable Memory Mapping (mmap) to load model data. This option allows the system to use disk storage as an extension of RAM by treating disk files as if they were in RAM. This can improve model performance by allowing for faster data access. However, it may not work correctly with all systems and can consume a significant amount of disk space.": "",
"Enable Message Rating": "",
"Enable Mirostat sampling for controlling perplexity. (Default: 0, 0 = Disabled, 1 = Mirostat, 2 = Mirostat 2.0)": "",
"Enable Mirostat sampling for controlling perplexity.": "",
"Enable New Sign Ups": "Aktifkan Pendaftaran Baru",
"Enabled": "",
"Ensure your CSV file includes 4 columns in this order: Name, Email, Password, Role.": "Pastikan file CSV Anda menyertakan 4 kolom dengan urutan sebagai berikut: Nama, Email, Kata Sandi, Peran.",
@ -375,6 +382,7 @@
"Enter CFG Scale (e.g. 7.0)": "",
"Enter Chunk Overlap": "Masukkan Tumpang Tindih Chunk",
"Enter Chunk Size": "Masukkan Ukuran Potongan",
"Enter comma-seperated \"token:bias_value\" pairs (example: 5432:100, 413:-100)": "",
"Enter description": "",
"Enter Document Intelligence Endpoint": "",
"Enter Document Intelligence Key": "",
@ -389,11 +397,13 @@
"Enter Jupyter Token": "",
"Enter Jupyter URL": "",
"Enter Kagi Search API Key": "",
"Enter Key Behavior": "",
"Enter language codes": "Masukkan kode bahasa",
"Enter Model ID": "",
"Enter model tag (e.g. {{modelTag}})": "Masukkan tag model (misalnya {{modelTag}})",
"Enter Mojeek Search API Key": "",
"Enter Number of Steps (e.g. 50)": "Masukkan Jumlah Langkah (mis. 50)",
"Enter Perplexity API Key": "",
"Enter proxy URL (e.g. https://user:password@host:port)": "",
"Enter reasoning effort": "",
"Enter Sampler (e.g. Euler a)": "",
@ -417,6 +427,7 @@
"Enter the public URL of your WebUI. This URL will be used to generate links in the notifications.": "",
"Enter Tika Server URL": "",
"Enter timeout in seconds": "",
"Enter to Send": "",
"Enter Top K": "Masukkan Top K",
"Enter URL (e.g. http://127.0.0.1:7860/)": "Masukkan URL (mis. http://127.0.0.1:7860/)",
"Enter URL (e.g. http://localhost:11434)": "Masukkan URL (mis. http://localhost:11434)",
@ -440,9 +451,13 @@
"Example: mail": "",
"Example: ou=users,dc=foo,dc=example": "",
"Example: sAMAccountName or uid or userPrincipalName": "",
"Exceeded the number of seats in your license. Please contact support to increase the number of seats.": "",
"Exclude": "",
"Execute code for analysis": "",
"Expand": "",
"Experimental": "Percobaan",
"Explain": "",
"Explain this section to me in more detail": "",
"Explore the cosmos": "",
"Export": "Ekspor",
"Export All Archived Chats": "",
@ -566,7 +581,7 @@
"Include": "",
"Include `--api-auth` flag when running stable-diffusion-webui": "Sertakan bendera `--api-auth` saat menjalankan stable-diffusion-webui",
"Include `--api` flag when running stable-diffusion-webui": "Sertakan bendera `--api` saat menjalankan stable-diffusion-webui",
"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive. (Default: 0.1)": "",
"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive.": "",
"Info": "Info",
"Input commands": "Perintah masukan",
"Install from Github URL": "Instal dari URL Github",
@ -624,6 +639,7 @@
"Local": "",
"Local Models": "Model Lokal",
"Location access not allowed": "",
"Logit Bias": "",
"Lost": "",
"LTR": "LTR",
"Made by Open WebUI Community": "Dibuat oleh Komunitas Open WebUI",
@ -764,6 +780,7 @@
"Permission denied when accessing microphone": "Izin ditolak saat mengakses mikrofon",
"Permission denied when accessing microphone: {{error}}": "Izin ditolak saat mengakses mikrofon: {{error}}",
"Permissions": "",
"Perplexity API Key": "",
"Personalization": "Personalisasi",
"Pin": "",
"Pinned": "",
@ -809,7 +826,7 @@
"Reasoning Effort": "",
"Record voice": "Rekam suara",
"Redirecting you to Open WebUI Community": "Mengarahkan Anda ke Komunitas Open WebUI",
"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative. (Default: 40)": "",
"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative.": "",
"Refer to yourself as \"User\" (e.g., \"User is learning Spanish\")": "Merujuk diri Anda sebagai \"Pengguna\" (misalnya, \"Pengguna sedang belajar bahasa Spanyol\")",
"References from": "",
"Refused when it shouldn't have": "Menolak ketika seharusnya tidak",
@ -918,11 +935,11 @@
"Set the number of worker threads used for computation. This option controls how many threads are used to process incoming requests concurrently. Increasing this value can improve performance under high concurrency workloads but may also consume more CPU resources.": "",
"Set Voice": "Mengatur Suara",
"Set whisper model": "",
"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 0)": "",
"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 1.1)": "",
"Sets how far back for the model to look back to prevent repetition. (Default: 64, 0 = disabled, -1 = num_ctx)": "",
"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt. (Default: random)": "",
"Sets the size of the context window used to generate the next token. (Default: 2048)": "",
"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
"Sets how far back for the model to look back to prevent repetition.": "",
"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt.": "",
"Sets the size of the context window used to generate the next token.": "",
"Sets the stop sequences to use. When this pattern is encountered, the LLM will stop generating text and return. Multiple stop patterns may be set by specifying multiple separate stop parameters in a modelfile.": "",
"Settings": "Pengaturan",
"Settings saved successfully!": "Pengaturan berhasil disimpan!",
@ -964,7 +981,7 @@
"System Prompt": "Permintaan Sistem",
"Tags Generation": "",
"Tags Generation Prompt": "",
"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting. (default: 1)": "",
"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting.": "",
"Talk to model": "",
"Tap to interrupt": "Ketuk untuk menyela",
"Tasks": "",
@ -979,7 +996,7 @@
"Thanks for your feedback!": "Terima kasih atas umpan balik Anda!",
"The Application Account DN you bind with for search": "",
"The base to search for users": "",
"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory. (Default: 512)": "",
"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory.": "",
"The developers behind this plugin are passionate volunteers from the community. If you find this plugin helpful, please consider contributing to its development.": "",
"The evaluation leaderboard is based on the Elo rating system and is updated in real-time.": "",
"The LDAP attribute that maps to the mail that users use to sign in.": "",
@ -988,14 +1005,14 @@
"The maximum file size in MB. If the file size exceeds this limit, the file will not be uploaded.": "",
"The maximum number of files that can be used at once in chat. If the number of files exceeds this limit, the files will not be uploaded.": "",
"The score should be a value between 0.0 (0%) and 1.0 (100%).": "Nilai yang diberikan haruslah nilai antara 0,0 (0%) dan 1,0 (100%).",
"The temperature of the model. Increasing the temperature will make the model answer more creatively. (Default: 0.8)": "",
"The temperature of the model. Increasing the temperature will make the model answer more creatively.": "",
"Theme": "Tema",
"Thinking...": "Berpikir...",
"This action cannot be undone. Do you wish to continue?": "Tindakan ini tidak dapat dibatalkan. Apakah Anda ingin melanjutkan?",
"This ensures that your valuable conversations are securely saved to your backend database. Thank you!": "Ini akan memastikan bahwa percakapan Anda yang berharga disimpan dengan aman ke basis data backend. Terima kasih!",
"This is an experimental feature, it may not function as expected and is subject to change at any time.": "Ini adalah fitur eksperimental, mungkin tidak berfungsi seperti yang diharapkan dan dapat berubah sewaktu-waktu.",
"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics. (Default: 24)": "",
"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated. (Default: 128)": "",
"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics.": "",
"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated.": "",
"This option will delete all existing files in the collection and replace them with newly uploaded files.": "",
"This response was generated by \"{{model}}\"": "",
"This will delete": "Ini akan menghapus",
@ -1132,7 +1149,7 @@
"Why?": "",
"Widescreen Mode": "Mode Layar Lebar",
"Won": "",
"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text. (Default: 0.9)": "",
"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text.": "",
"Workspace": "Ruang Kerja",
"Workspace Permissions": "",
"Write": "",
@ -1142,6 +1159,7 @@
"Write your model template content here": "",
"Yesterday": "Kemarin",
"You": "Anda",
"You are currently using a trial license. Please contact support to upgrade your license.": "",
"You can only chat with a maximum of {{maxCount}} file(s) at a time.": "",
"You can personalize your interactions with LLMs by adding memories through the 'Manage' button below, making them more helpful and tailored to you.": "Anda dapat mempersonalisasi interaksi Anda dengan LLM dengan menambahkan kenangan melalui tombol 'Kelola' di bawah ini, sehingga lebih bermanfaat dan disesuaikan untuk Anda.",
"You cannot upload an empty file.": "",


@ -5,6 +5,7 @@
"(e.g. `sh webui.sh --api`)": "(m.sh. `sh webui.sh --api`)",
"(latest)": "(is déanaí)",
"{{ models }}": "{{ models }}",
"{{COUNT}} hidden lines": "",
"{{COUNT}} Replies": "{{COUNT}} Freagra",
"{{user}}'s Chats": "Comhráite {{user}}",
"{{webUIName}} Backend Required": "Backend {{webUIName}} Riachtanach",
@ -13,7 +14,7 @@
"A task model is used when performing tasks such as generating titles for chats and web search queries": "Úsáidtear múnla tasc agus tascanna á ndéanamh agat mar theidil a ghiniúint do chomhráite agus ceisteanna cuardaigh gréasáin",
"a user": "úsáideoir",
"About": "Maidir",
"Accept autocomplete generation / Jump to prompt variable": "",
"Accept autocomplete generation / Jump to prompt variable": "Glac le giniúint uathchríochnaithe / Léim go dtí athróg leide",
"Access": "Rochtain",
"Access Control": "Rialaithe Rochtana",
"Accessible to all users": "Inrochtana do gach úsáideoir",
@ -21,7 +22,7 @@
"Account Activation Pending": "Gníomhachtú Cuntais ar Feitheamh",
"Accurate information": "Faisnéis chruinn",
"Actions": "Gníomhartha",
"Activate": "",
"Activate": "Gníomhachtaigh",
"Activate this command by typing \"/{{COMMAND}}\" to chat input.": "Gníomhachtaigh an t-ordú seo trí \"/{{COMMAND}}\" a chlóscríobh chun ionchur comhrá a dhéanamh.",
"Active Users": "Úsáideoirí Gníomhacha",
"Add": "Cuir",
@ -51,6 +52,7 @@
"Admins have access to all tools at all times; users need tools assigned per model in the workspace.": "Tá rochtain ag riarthóirí ar gach uirlis i gcónaí; teastaíonn ó úsáideoirí uirlisí a shanntar in aghaidh an mhúnla sa spás oibre.",
"Advanced Parameters": "Paraiméadair Casta",
"Advanced Params": "Paraiméid Casta",
"All": "",
"All Documents": "Gach Doiciméad",
"All models deleted successfully": "Scriosadh na múnlaí go léir go rathúil",
"Allow Chat Controls": "Ceadaigh Rialuithe Comhrá",
@ -64,7 +66,7 @@
"Allow Voice Interruption in Call": "Ceadaigh Briseadh Guth i nGlao",
"Allowed Endpoints": "Críochphointí Ceadaithe",
"Already have an account?": "Tá cuntas agat cheana féin?",
"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out. (Default: 0.0)": "Rogha eile seachas an top_p, agus tá sé mar aidhm aige cothromaíocht cáilíochta agus éagsúlachta a chinntiú. Léiríonn an paraiméadar p an dóchúlacht íosta go mbreithneofar comhartha, i gcoibhneas le dóchúlacht an chomhartha is dóichí. Mar shampla, le p=0.05 agus dóchúlacht 0.9 ag an comhartha is dóichí, déantar logits le luach níos lú ná 0.045 a scagadh amach. (Réamhshocrú: 0.0)",
"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out.": "",
"Always": "I gcónaí",
"Amazing": "Iontach",
"an assistant": "cúntóir",
@ -86,23 +88,24 @@
"Archive All Chats": "Cartlann Gach Comhrá",
"Archived Chats": "Comhráite Cartlann",
"archived-chat-export": "gcartlann-comhrá-onnmhairiú",
"Are you sure you want to clear all memories? This action cannot be undone.": "",
"Are you sure you want to clear all memories? This action cannot be undone.": "An bhfuil tú cinnte gur mhaith leat na cuimhní go léir a ghlanadh? Ní féidir an gníomh seo a chealú.",
"Are you sure you want to delete this channel?": "An bhfuil tú cinnte gur mhaith leat an cainéal seo a scriosadh?",
"Are you sure you want to delete this message?": "An bhfuil tú cinnte gur mhaith leat an teachtaireacht seo a scriosadh?",
"Are you sure you want to unarchive all archived chats?": "An bhfuil tú cinnte gur mhaith leat gach comhrá cartlainne a dhíchartlannú?",
"Are you sure?": "An bhfuil tú cinnte?",
"Arena Models": "Múnlaí Airéine",
"Artifacts": "Déantáin",
"Ask": "",
"Ask a question": "Cuir ceist",
"Assistant": "Cúntóir",
"Attach file from knowledge": "",
"Attach file from knowledge": "Ceangail comhad ó eolas",
"Attention to detail": "Aird ar mhionsonraí",
"Attribute for Mail": "Tréith don Phost",
"Attribute for Username": "Tréith don Ainm Úsáideora",
"Audio": "Fuaim",
"August": "Lúnasa",
"Authenticate": "Fíordheimhnigh",
"Authentication": "",
"Authentication": "Fíordheimhniú",
"Auto-Copy Response to Clipboard": "Uath-chóipeáil an Freagra chuig an nGearrthaisce",
"Auto-playback response": "Freagra uathsheinm",
"Autocomplete Generation": "Giniúint Uathchríochnaithe",
@ -126,12 +129,13 @@
"Beta": "Béite",
"Bing Search V7 Endpoint": "Cuardach Bing V7 Críochphointe",
"Bing Search V7 Subscription Key": "Eochair Síntiúis Bing Cuardach V7",
"Bocha Search API Key": "",
"Bocha Search API Key": "Eochair API Cuardach Bocha",
"Boosting or penalizing specific tokens for constrained responses. Bias values will be clamped between -100 and 100 (inclusive). (Default: none)": "",
"Brave Search API Key": "Eochair API Cuardaigh Brave",
"By {{name}}": "Le {{name}}",
"Bypass Embedding and Retrieval": "",
"Bypass Embedding and Retrieval": "Seachbhóthar Leabú agus Aisghabháil",
"Bypass SSL verification for Websites": "Seachbhóthar fíorú SSL do Láithreáin",
"Calendar": "",
"Calendar": "Féilire",
"Call": "Glaoigh",
"Call feature is not supported when using Web STT engine": "Ní thacaítear le gné glaonna agus inneall Web STT á úsáid",
"Camera": "Ceamara",
@ -163,14 +167,14 @@
"Ciphers": "Cipéirí",
"Citation": "Lua",
"Clear memory": "Cuimhne ghlan",
"Clear Memory": "",
"Clear Memory": "Glan Cuimhne",
"click here": "cliceáil anseo",
"Click here for filter guides.": "Cliceáil anseo le haghaidh treoracha scagaire.",
"Click here for help.": "Cliceáil anseo le haghaidh cabhair.",
"Click here to": "Cliceáil anseo chun",
"Click here to download user import template file.": "Cliceáil anseo chun an comhad iompórtála úsáideora a íoslódáil.",
"Click here to learn more about faster-whisper and see the available models.": "Cliceáil anseo chun níos mó a fhoghlaim faoi cogar níos tapúla agus na múnlaí atá ar fáil a fheiceáil.",
"Click here to see available models.": "",
"Click here to see available models.": "Cliceáil anseo chun na samhlacha atá ar fáil a fheiceáil.",
"Click here to select": "Cliceáil anseo chun roghnú",
"Click here to select a csv file.": "Cliceáil anseo chun comhad csv a roghnú.",
"Click here to select a py file.": "Cliceáil anseo chun comhad py a roghnú.",
@ -183,13 +187,14 @@
"Clone of {{TITLE}}": "Clón de {{TITLE}}",
"Close": "Dún",
"Code execution": "Cód a fhorghníomhú",
"Code Execution": "",
"Code Execution Engine": "",
"Code Execution Timeout": "",
"Code Execution": "Forghníomhú Cóid",
"Code Execution Engine": "Inneall Forghníomhaithe Cóid",
"Code Execution Timeout": "Teorainn Ama Forghníomhaithe Cóid",
"Code formatted successfully": "Cód formáidithe go rathúil",
"Code Interpreter": "Ateangaire Cód",
"Code Interpreter Engine": "",
"Code Interpreter Prompt Template": "",
"Code Interpreter Engine": "Inneall Ateangaire Cóid",
"Code Interpreter Prompt Template": "Teimpléad Pras Ateangaire Cód",
"Collapse": "",
"Collection": "Bailiúchán",
"Color": "Dath",
"ComfyUI": "ComfyUI",
@ -206,21 +211,21 @@
"Confirm Password": "Deimhnigh Pasfhocal",
"Confirm your action": "Deimhnigh do ghníomh",
"Confirm your new password": "Deimhnigh do phasfhocal nua",
"Connect to your own OpenAI compatible API endpoints.": "",
"Connect to your own OpenAI compatible API endpoints.": "Ceangail le do chríochphointí API atá comhoiriúnach le OpenAI.",
"Connections": "Naisc",
"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort. (Default: medium)": "Srianann iarracht ar réasúnaíocht a dhéanamh ar shamhlacha réasúnaíochta. Ní bhaineann ach le samhlacha réasúnaíochta ó sholáthraithe sonracha a thacaíonn le hiarracht réasúnaíochta. (Réamhshocrú: meánach)",
"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort.": "",
"Contact Admin for WebUI Access": "Déan teagmháil le Riarachán le haghaidh Rochtana WebUI",
"Content": "Ábhar",
"Content Extraction Engine": "",
"Content Extraction Engine": "Inneall Eastóscadh Ábhar",
"Context Length": "Fad Comhthéacs",
"Continue Response": "Leanúint ar aghaidh",
"Continue with {{provider}}": "Lean ar aghaidh le {{provider}}",
"Continue with Email": "Lean ar aghaidh le Ríomhphost",
"Continue with LDAP": "Lean ar aghaidh le LDAP",
"Control how message text is split for TTS requests. 'Punctuation' splits into sentences, 'paragraphs' splits into paragraphs, and 'none' keeps the message as a single string.": "Rialú conas a roinntear téacs teachtaireachta d'iarratais TTS. Roinneann 'poncaíocht' ina abairtí, scoilteann 'míreanna' i míreanna, agus coinníonn 'aon' an teachtaireacht mar shreang amháin.",
"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled. (Default: 1.1)": "",
"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled.": "Rialú a dhéanamh ar athrá seichimh chomharthaí sa téacs ginte. Cuirfidh luach níos airde (m.sh., 1.5) pionós níos láidre ar athrá, agus beidh luach níos ísle (m.sh., 1.1) níos boige. Ag 1, tá sé díchumasaithe. (Réamhshocrú: 1.1)",
"Controls": "Rialuithe",
"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text. (Default: 5.0)": "Rialaíonn sé an chothromaíocht idir comhleanúnachas agus éagsúlacht an aschuir. Beidh téacs níos dírithe agus níos soiléire mar thoradh ar luach níos ísle. (Réamhshocrú: 5.0)",
"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text.": "",
"Copied": "Cóipeáladh",
"Copied shared chat URL to clipboard!": "Cóipeáladh URL an chomhrá roinnte chuig an ngearrthaisce!",
"Copied to clipboard": "Cóipeáilte chuig an ngearrthaisce",
@ -230,7 +235,7 @@
"Copy Link": "Cóipeáil Nasc",
"Copy to clipboard": "Cóipeáil chuig an ngearrthaisce",
"Copying to clipboard was successful!": "D'éirigh le cóipeáil chuig an ngearrthaisce!",
"CORS must be properly configured by the provider to allow requests from Open WebUI.": "",
"CORS must be properly configured by the provider to allow requests from Open WebUI.": "Ní mór don soláthraí CORS a chumrú i gceart chun iarratais ó Open WebUI a cheadú.",
"Create": "Cruthaigh",
"Create a knowledge base": "Cruthaigh bonn eolais",
"Create a model": "Cruthaigh múnla",
@ -245,10 +250,11 @@
"Created At": "Cruthaithe Ag",
"Created by": "Cruthaithe ag",
"CSV Import": "Iompórtáil CSV",
"Ctrl+Enter to Send": "",
"Current Model": "Múnla Reatha",
"Current Password": "Pasfhocal Reatha",
"Custom": "Saincheaptha",
"Danger Zone": "",
"Danger Zone": "Crios Contúirte",
"Dark": "Dorcha",
"Database": "Bunachar Sonraí",
"December": "Nollaig",
@ -275,7 +281,7 @@
"Delete folder?": "Scrios fillteán?",
"Delete function?": "Scrios feidhm?",
"Delete Message": "Scrios Teachtaireacht",
"Delete message?": "",
"Delete message?": "Scrios teachtaireacht?",
"Delete prompt?": "Scrios leid?",
"delete this link": "scrios an nasc seo",
"Delete tool?": "Uirlis a scriosadh?",
@ -286,15 +292,15 @@
"Describe your knowledge base and objectives": "Déan cur síos ar do bhunachar eolais agus do chuspóirí",
"Description": "Cur síos",
"Didn't fully follow instructions": "Níor lean sé treoracha go hiomlán",
"Direct Connections": "",
"Direct Connections allow users to connect to their own OpenAI compatible API endpoints.": "",
"Direct Connections settings updated": "",
"Direct Connections": "Naisc Dhíreacha",
"Direct Connections allow users to connect to their own OpenAI compatible API endpoints.": "Ligeann Naisc Dhíreacha d'úsáideoirí ceangal lena gcríochphointí API féin atá comhoiriúnach le OpenAI.",
"Direct Connections settings updated": "Nuashonraíodh socruithe Naisc Dhíreacha",
"Disabled": "Díchumasaithe",
"Discover a function": "Faigh amach feidhm",
"Discover a model": "Faigh amach múnla",
"Discover a prompt": "Faigh amach leid",
"Discover a tool": "Faigh amach uirlis",
"Discover how to use Open WebUI and seek support from the community.": "",
"Discover how to use Open WebUI and seek support from the community.": "Faigh amach conas Open WebUI a úsáid agus lorg tacaíocht ón bpobal.",
"Discover wonders": "Faigh amach iontais",
"Discover, download, and explore custom functions": "Faigh amach, íoslódáil agus iniúchadh feidhmeanna saincheaptha",
"Discover, download, and explore custom prompts": "Leideanna saincheaptha a fháil amach, a íoslódáil agus a iniúchadh",
@ -309,26 +315,26 @@
"Do not install functions from sources you do not fully trust.": "Ná suiteáil feidhmeanna ó fhoinsí nach bhfuil muinín iomlán agat.",
"Do not install tools from sources you do not fully trust.": "Ná suiteáil uirlisí ó fhoinsí nach bhfuil muinín iomlán agat.",
"Document": "Doiciméad",
"Document Intelligence": "",
"Document Intelligence endpoint and key required.": "",
"Document Intelligence": "Faisnéise Doiciméad",
"Document Intelligence endpoint and key required.": "Críochphointe Faisnéise Doiciméad agus eochair ag teastáil.",
"Documentation": "Doiciméadú",
"Documents": "Doiciméid",
"does not make any external connections, and your data stays securely on your locally hosted server.": "ní dhéanann sé aon naisc sheachtracha, agus fanann do chuid sonraí go slán ar do fhreastalaí a óstáiltear go háitiúil.",
"Domain Filter List": "",
"Domain Filter List": "Liosta Scagairí Fearainn",
"Don't have an account?": "Níl cuntas agat?",
"don't install random functions from sources you don't trust.": "ná suiteáil feidhmeanna randamacha ó fhoinsí nach bhfuil muinín agat.",
"don't install random tools from sources you don't trust.": "ná suiteáil uirlisí randamacha ó fhoinsí nach bhfuil muinín agat.",
"Don't like the style": "Ní thaitníonn an stíl liom",
"Done": "Déanta",
"Download": "Íoslódáil",
"Download as SVG": "",
"Download as SVG": "Íoslódáil i SVG",
"Download canceled": "Íoslódáil cealaithe",
"Download Database": "Íoslódáil Bunachair",
"Drag and drop a file to upload or select a file to view": "Tarraing agus scaoil comhad le huaslódáil nó roghnaigh comhad le féachaint air",
"Draw": "Tarraing",
"Drop any files here to add to the conversation": "Scaoil aon chomhaid anseo le cur leis an gcomhrá",
"e.g. '30s','10m'. Valid time units are 's', 'm', 'h'.": "m.sh. '30s', '10m'. Is iad aonaid ama bailí ná 's', 'm', 'h'.",
"e.g. 60": "",
"e.g. 60": "m.sh. 60",
"e.g. A filter to remove profanity from text": "m.sh. Scagaire chun eascainí a bhaint as téacs",
"e.g. My Filter": "m.sh. Mo Scagaire",
"e.g. My Tools": "m.sh. Mo Uirlisí",
@ -346,19 +352,20 @@
"ElevenLabs": "ElevenLabs",
"Email": "Ríomhphost",
"Embark on adventures": "Dul ar eachtraí",
"Embedding": "",
"Embedding Batch Size": "Méid Baisc a ionchorprú",
"Embedding": "Leabú",
"Embedding Batch Size": "Méid Baisc Leabaith",
"Embedding Model": "Múnla Leabháilte",
"Embedding Model Engine": "Inneall Múnla Ionchorprú",
"Embedding model set to \"{{embedding_model}}\"": "Samhail leabaithe atá socraithe go \"{{embedding_model}}\"",
"Embedding Model Engine": "Inneall Múnla Leabaithe",
"Embedding model set to \"{{embedding_model}}\"": "Múnla leabaithe socraithe go \"{{embedding_model}}\"",
"Enable API Key": "Cumasaigh Eochair API",
"Enable autocomplete generation for chat messages": "Cumasaigh giniúint uathchríochnaithe le haghaidh teachtaireachtaí comhrá",
"Enable Code Interpreter": "",
"Enable Code Execution": "",
"Enable Code Interpreter": "Cumasaigh Ateangaire Cóid",
"Enable Community Sharing": "Cumasaigh Comhroinnt Pobail",
"Enable Memory Locking (mlock) to prevent model data from being swapped out of RAM. This option locks the model's working set of pages into RAM, ensuring that they will not be swapped out to disk. This can help maintain performance by avoiding page faults and ensuring fast data access.": "Cumasaigh Glasáil Cuimhne (mlock) chun sonraí samhaltaithe a chosc ó RAM. Glasálann an rogha seo sraith oibre leathanaigh an mhúnla isteach i RAM, ag cinntiú nach ndéanfar iad a mhalartú go diosca. Is féidir leis seo cabhrú le feidhmíocht a choinneáil trí lochtanna leathanaigh a sheachaint agus rochtain tapa ar shonraí a chinntiú.",
"Enable Memory Mapping (mmap) to load model data. This option allows the system to use disk storage as an extension of RAM by treating disk files as if they were in RAM. This can improve model performance by allowing for faster data access. However, it may not work correctly with all systems and can consume a significant amount of disk space.": "Cumasaigh Mapáil Cuimhne (mmap) chun sonraí samhla a lódáil. Ligeann an rogha seo don chóras stóráil diosca a úsáid mar leathnú ar RAM trí chomhaid diosca a chóireáil amhail is dá mba i RAM iad. Is féidir leis seo feidhmíocht na samhla a fheabhsú trí rochtain níos tapúla ar shonraí a cheadú. Mar sin féin, d'fhéadfadh sé nach n-oibreoidh sé i gceart le gach córas agus féadfaidh sé méid suntasach spáis diosca a ithe.",
"Enable Message Rating": "Cumasaigh Rátáil Teachtai",
"Enable Mirostat sampling for controlling perplexity. (Default: 0, 0 = Disabled, 1 = Mirostat, 2 = Mirostat 2.0)": "Cumasaigh sampláil Mirostat chun seachrán a rialú. (Réamhshocrú: 0, 0 = Díchumasaithe, 1 = Mirostat, 2 = Mirostat 2.0)",
"Enable Mirostat sampling for controlling perplexity.": "",
"Enable New Sign Ups": "Cumasaigh Clárúcháin Nua",
"Enabled": "Cumasaithe",
"Ensure your CSV file includes 4 columns in this order: Name, Email, Password, Role.": "Déan cinnte go bhfuil 4 cholún san ord seo i do chomhad CSV: Ainm, Ríomhphost, Pasfhocal, Ról.",
@@ -369,31 +376,34 @@
"Enter Application DN Password": "Iontráil Feidhmchlár DN Pasfhocal",
"Enter Bing Search V7 Endpoint": "Cuir isteach Cuardach Bing V7 Críochphointe",
"Enter Bing Search V7 Subscription Key": "Cuir isteach Eochair Síntiúis Bing Cuardach V7",
"Enter Bocha Search API Key": "",
"Enter Bocha Search API Key": "Cuir isteach Eochair API Bocha Cuardach",
"Enter Brave Search API Key": "Cuir isteach Eochair API Brave Cuardach",
"Enter certificate path": "Cuir isteach cosán an teastais",
"Enter CFG Scale (e.g. 7.0)": "Cuir isteach Scála CFG (m.sh. 7.0)",
"Enter Chunk Overlap": "Cuir isteach Chunk Forluí",
"Enter Chunk Size": "Cuir isteach Méid an Chunc",
"Enter Chunk Size": "Cuir isteach Méid an Smután",
"Enter comma-seperated \"token:bias_value\" pairs (example: 5432:100, 413:-100)": "",
"Enter description": "Iontráil cur síos",
"Enter Document Intelligence Endpoint": "",
"Enter Document Intelligence Key": "",
"Enter domains separated by commas (e.g., example.com,site.org)": "",
"Enter Document Intelligence Endpoint": "Iontráil Críochphointe Faisnéise Doiciméid",
"Enter Document Intelligence Key": "Iontráil Eochair Faisnéise Doiciméad",
"Enter domains separated by commas (e.g., example.com,site.org)": "Cuir isteach fearainn atá scartha le camóga (m.sh., example.com,site.org)",
"Enter Exa API Key": "Cuir isteach Eochair Exa API",
"Enter Github Raw URL": "Cuir isteach URL Github Raw",
"Enter Google PSE API Key": "Cuir isteach Eochair API Google PSE",
"Enter Google PSE Engine Id": "Cuir isteach ID Inneall Google PSE",
"Enter Image Size (e.g. 512x512)": "Iontráil Méid Íomhá (m.sh. 512x512)",
"Enter Jina API Key": "Cuir isteach Eochair API Jina",
"Enter Jupyter Password": "",
"Enter Jupyter Token": "",
"Enter Jupyter URL": "",
"Enter Kagi Search API Key": "Cuir isteach Eochair Kagi Search API",
"Enter Jupyter Password": "Cuir isteach Pasfhocal Jupyter",
"Enter Jupyter Token": "Cuir isteach Jupyter Chomhartha",
"Enter Jupyter URL": "Cuir isteach URL Jupyter",
"Enter Kagi Search API Key": "Cuir isteach Eochair Kagi Cuardach API",
"Enter Key Behavior": "",
"Enter language codes": "Cuir isteach cóid teanga",
"Enter Model ID": "Iontráil ID Mhúnla",
"Enter model tag (e.g. {{modelTag}})": "Cuir isteach chlib samhail (m.sh. {{modelTag}})",
"Enter Mojeek Search API Key": "Cuir isteach Eochair API Cuardach Mojeek",
"Enter Number of Steps (e.g. 50)": "Iontráil Líon na gCéimeanna (m.sh. 50)",
"Enter Perplexity API Key": "",
"Enter proxy URL (e.g. https://user:password@host:port)": "Cuir isteach URL seachfhreastalaí (m.sh. https://user:password@host:port)",
"Enter reasoning effort": "Cuir isteach iarracht réasúnaíochta",
"Enter Sampler (e.g. Euler a)": "Cuir isteach Sampler (m.sh. Euler a)",
@@ -403,8 +413,8 @@
"Enter SearchApi Engine": "Cuir isteach Inneall SearchAPI",
"Enter Searxng Query URL": "Cuir isteach URL Ceist Searxng",
"Enter Seed": "Cuir isteach Síl",
"Enter SerpApi API Key": "",
"Enter SerpApi Engine": "",
"Enter SerpApi API Key": "Cuir isteach Eochair API SerpApi",
"Enter SerpApi Engine": "Cuir isteach Inneall SerpApi",
"Enter Serper API Key": "Cuir isteach Eochair API Serper",
"Enter Serply API Key": "Cuir isteach Eochair API Serply",
"Enter Serpstack API Key": "Cuir isteach Eochair API Serpstack",
@@ -416,7 +426,8 @@
"Enter Tavily API Key": "Cuir isteach eochair API Tavily",
"Enter the public URL of your WebUI. This URL will be used to generate links in the notifications.": "Cuir isteach URL poiblí do WebUI. Bainfear úsáid as an URL seo chun naisc a ghiniúint sna fógraí.",
"Enter Tika Server URL": "Cuir isteach URL freastalaí Tika",
"Enter timeout in seconds": "",
"Enter timeout in seconds": "Cuir isteach an t-am istigh i soicindí",
"Enter to Send": "",
"Enter Top K": "Cuir isteach Barr K",
"Enter URL (e.g. http://127.0.0.1:7860/)": "Iontráil URL (m.sh. http://127.0.0.1:7860/)",
"Enter URL (e.g. http://localhost:11434)": "Iontráil URL (m.sh. http://localhost:11434)",
@@ -440,9 +451,13 @@
"Example: mail": "Sampla: ríomhphost",
"Example: ou=users,dc=foo,dc=example": "Sampla: ou=úsáideoirí,dc=foo,dc=sampla",
"Example: sAMAccountName or uid or userPrincipalName": "Sampla: sAMAaccountName nó uid nó userPrincipalName",
"Exceeded the number of seats in your license. Please contact support to increase the number of seats.": "",
"Exclude": "Eisigh",
"Execute code for analysis": "Íosluchtaigh cód le haghaidh anailíse",
"Expand": "",
"Experimental": "Turgnamhach",
"Explain": "",
"Explain this section to me in more detail": "",
"Explore the cosmos": "Déan iniúchadh ar an cosmos",
"Export": "Easpórtáil",
"Export All Archived Chats": "Easpórtáil Gach Comhrá Cartlainne",
@@ -464,7 +479,7 @@
"Failed to save models configuration": "Theip ar chumraíocht na múnlaí a shábháil",
"Failed to update settings": "Theip ar shocruithe a nuashonrú",
"Failed to upload file.": "Theip ar uaslódáil an chomhaid.",
"Features": "",
"Features": "Gnéithe",
"Features Permissions": "Ceadanna Gnéithe",
"February": "Feabhra",
"Feedback History": "Stair Aiseolais",
@@ -494,7 +509,7 @@
"Form": "Foirm",
"Format your variables using brackets like this:": "Formáidigh na hathróga ag baint úsáide as lúibíní mar seo:",
"Frequency Penalty": "Pionós Minicíochta",
"Full Context Mode": "",
"Full Context Mode": "Mód Comhthéacs Iomlán",
"Function": "Feidhm",
"Function Calling": "Glaonna Feidhme",
"Function created successfully": "Cruthaíodh feidhm go rathúil",
@@ -509,13 +524,13 @@
"Functions allow arbitrary code execution": "Ligeann feidhmeanna forghníomhú cód",
"Functions allow arbitrary code execution.": "Ceadaíonn feidhmeanna forghníomhú cód treallach.",
"Functions imported successfully": "Feidhmeanna allmhairi",
"Gemini": "",
"Gemini API Config": "",
"Gemini API Key is required.": "",
"Gemini": "Gemini",
"Gemini API Config": "Cumraíocht Gemini API",
"Gemini API Key is required.": "Tá Eochair Gemini API ag teastáil.",
"General": "Ginearálta",
"Generate an image": "Gin íomhá",
"Generate Image": "Ginigh Íomhá",
"Generate prompt pair": "",
"Generate prompt pair": "Gin péire pras",
"Generating search query": "Giniúint ceist cuardaigh",
"Get started": "Cuir tús leis",
"Get started with {{WEBUI_NAME}}": "Cuir tús le {{WEBUI_NAME}}",
@@ -538,7 +553,7 @@
"Hex Color": "Dath Heics",
"Hex Color - Leave empty for default color": "Dath Heics - Fág folamh don dath réamhshocraithe",
"Hide": "Folaigh",
"Home": "",
"Home": "Baile",
"Host": "Óstach",
"How can I help you today?": "Conas is féidir liom cabhrú leat inniu?",
"How would you rate this response?": "Cad é mar a mheasfá an freagra seo?",
@@ -566,12 +581,12 @@
"Include": "Cuir san áireamh",
"Include `--api-auth` flag when running stable-diffusion-webui": "Cuir bratach `--api-auth` san áireamh agus webui stable-diffusion-reatha á rith",
"Include `--api` flag when running stable-diffusion-webui": "Cuir bratach `--api` san áireamh agus webui cobhsaí-scaipthe á rith",
"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive. (Default: 0.1)": "Bíonn tionchar aige ar chomh tapa agus a fhreagraíonn an t-algartam daiseolas ón téacs ginte. Beidh coigeartuithe níos moille mar thoradh ar ráta foghlama níos ísle, agus déanfaidh ráta foghlama níos airde an t-algartam níos freagraí. (Réamhshocrú: 0.1)",
"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive.": "",
"Info": "Eolas",
"Input commands": "Orduithe ionchuir",
"Install from Github URL": "Suiteáil ó Github URL",
"Instant Auto-Send After Voice Transcription": "Seoladh Uathoibríoch Láithreach Tar éis",
"Integration": "",
"Integration": "Comhtháthú",
"Interface": "Comhéadan",
"Invalid file format.": "Formáid comhaid neamhbhailí.",
"Invalid Tag": "Clib neamhbhailí",
@@ -583,8 +598,8 @@
"JSON Preview": "Réamhamharc JSON",
"July": "Lúil",
"June": "Meitheamh",
"Jupyter Auth": "",
"Jupyter URL": "",
"Jupyter Auth": "Fíordheimhniú Jupyter",
"Jupyter URL": "URL Jupyter",
"JWT Expiration": "Éag JWT",
"JWT Token": "Comhartha JWT",
"Kagi Search API Key": "Eochair API Chuardaigh Kagi",
@@ -597,8 +612,8 @@
"Knowledge deleted successfully.": "D'éirigh leis an eolas a scriosadh.",
"Knowledge reset successfully.": "D'éirigh le hathshocrú eolais.",
"Knowledge updated successfully": "D'éirigh leis an eolas a nuashonrú",
"Kokoro.js (Browser)": "",
"Kokoro.js Dtype": "",
"Kokoro.js (Browser)": "Kokoro.js (Brabhsálaí)",
"Kokoro.js Dtype": "Kokoro.js Dtype",
"Label": "Lipéad",
"Landing Page Mode": "Mód Leathanach Tuirlingthe",
"Language": "Teanga",
@@ -613,24 +628,25 @@
"Leave empty to include all models from \"{{URL}}/models\" endpoint": "Fág folamh chun gach múnla ón gcríochphointe \"{{URL}}/models\" a chur san áireamh",
"Leave empty to include all models or select specific models": "Fág folamh chun gach múnla a chur san áireamh nó roghnaigh múnlaí sonracha",
"Leave empty to use the default prompt, or enter a custom prompt": "Fág folamh chun an leid réamhshocraithe a úsáid, nó cuir isteach leid saincheaptha",
"Leave model field empty to use the default model.": "",
"License": "",
"Leave model field empty to use the default model.": "Fág réimse an mhúnla folamh chun an tsamhail réamhshocraithe a úsáid.",
"License": "Ceadúnas",
"Light": "Solas",
"Listening...": "Éisteacht...",
"Llama.cpp": "Llama.cpp",
"LLMs can make mistakes. Verify important information.": "Is féidir le LLManna botúin a dhéanamh. Fíoraigh faisnéis thábhachtach.",
"Loader": "",
"Loading Kokoro.js...": "",
"Loader": "Lódóir",
"Loading Kokoro.js...": "Kokoro.js á lódáil...",
"Local": "Áitiúil",
"Local Models": "Múnlaí Áitiúla",
"Location access not allowed": "",
"Location access not allowed": "Ní cheadaítear rochtain suímh",
"Logit Bias": "",
"Lost": "Cailleadh",
"LTR": "LTR",
"Made by Open WebUI Community": "Déanta ag OpenWebUI Community",
"Make sure to enclose them with": "Déan cinnte iad a cheangal le",
"Make sure to export a workflow.json file as API format from ComfyUI.": "Déan cinnte comhad workflow.json a onnmhairiú mar fhormáid API ó ComfyUI.",
"Manage": "Bainistiú",
"Manage Direct Connections": "",
"Manage Direct Connections": "Bainistigh Naisc Dhíreacha",
"Manage Models": "Samhlacha a bhainistiú",
"Manage Ollama": "Bainistigh Ollama",
"Manage Ollama API Connections": "Bainistigh Naisc API Ollama",
@@ -697,7 +713,7 @@
"No HTML, CSS, or JavaScript content found.": "Níor aimsíodh aon ábhar HTML, CSS nó JavaScript.",
"No inference engine with management support found": "Níor aimsíodh aon inneall tátail le tacaíocht bhainistíochta",
"No knowledge found": "Níor aimsíodh aon eolas",
"No memories to clear": "",
"No memories to clear": "Gan cuimhní cinn a ghlanadh",
"No model IDs": "Gan IDanna múnla",
"No models found": "Níor aimsíodh aon mhúnlaí",
"No models selected": "Níor roghnaíodh aon mhúnlaí",
@@ -727,7 +743,7 @@
"Ollama API settings updated": "Nuashonraíodh socruithe Olama API",
"Ollama Version": "Leagan Ollama",
"On": "Ar",
"OneDrive": "",
"OneDrive": "OneDrive",
"Only alphanumeric characters and hyphens are allowed": "Ní cheadaítear ach carachtair alfa-uimhriúla agus fleiscíní",
"Only alphanumeric characters and hyphens are allowed in the command string.": "Ní cheadaítear ach carachtair alfauméireacha agus braithíní sa sreangán ordaithe.",
"Only collections can be edited, create a new knowledge base to edit/add documents.": "Ní féidir ach bailiúcháin a chur in eagar, bonn eolais nua a chruthú chun doiciméid a chur in eagar/a chur leis.",
@@ -764,6 +780,7 @@
"Permission denied when accessing microphone": "Cead diúltaithe agus tú ag rochtain ar",
"Permission denied when accessing microphone: {{error}}": "Cead diúltaithe agus tú ag teacht ar mhicreafón: {{error}}",
"Permissions": "Ceadanna",
"Perplexity API Key": "",
"Personalization": "Pearsantú",
"Pin": "Bioráin",
"Pinned": "Pinneáilte",
@@ -776,7 +793,7 @@
"Plain text (.txt)": "Téacs simplí (.txt)",
"Playground": "Clós súgartha",
"Please carefully review the following warnings:": "Déan athbhreithniú cúramach ar na rabhaidh seo a leanas le do thoil:",
"Please do not close the settings page while loading the model.": "",
"Please do not close the settings page while loading the model.": "Ná dún leathanach na socruithe agus an tsamhail á luchtú.",
"Please enter a prompt": "Cuir isteach leid",
"Please fill in all fields.": "Líon isteach gach réimse le do thoil.",
"Please select a model first.": "Roghnaigh munla ar dtús le do thoil.",
@@ -786,7 +803,7 @@
"Positive attitude": "Dearcadh dearfach",
"Prefix ID": "Aitheantas Réimír",
"Prefix ID is used to avoid conflicts with other connections by adding a prefix to the model IDs - leave empty to disable": "Úsáidtear Aitheantas Réimír chun coinbhleachtaí le naisc eile a sheachaint trí réimír a chur le haitheantas na samhla - fág folamh le díchumasú",
"Presence Penalty": "",
"Presence Penalty": "Pionós Láithreacht",
"Previous 30 days": "30 lá roimhe seo",
"Previous 7 days": "7 lá roimhe seo",
"Profile Image": "Íomhá Próifíl",
@@ -809,7 +826,7 @@
"Reasoning Effort": "Iarracht Réasúnúcháin",
"Record voice": "Taifead guth",
"Redirecting you to Open WebUI Community": "Tú a atreorú chuig OpenWebUI Community",
"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative. (Default: 40)": "Laghdaíonn sé an dóchúlacht go giniúint nonsense. Tabharfaidh luach níos airde (m.sh. 100) freagraí níos éagsúla, agus beidh luach níos ísle (m.sh. 10) níos coimeádaí. (Réamhshocrú: 40)",
"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative.": "",
"Refer to yourself as \"User\" (e.g., \"User is learning Spanish\")": "Tagairt duit féin mar \"Úsáideoir\" (m.sh., \"Tá an úsáideoir ag foghlaim Spáinnis\")",
"References from": "Tagairtí ó",
"Refused when it shouldn't have": "Diúltaíodh nuair nár chóir dó",
@@ -821,7 +838,7 @@
"Rename": "Athainmnigh",
"Reorder Models": "Múnlaí Athordú",
"Repeat Last N": "Déan an N deireanach arís",
"Repeat Penalty (Ollama)": "",
"Repeat Penalty (Ollama)": "Pionós Athrá (Ollama)",
"Reply in Thread": "Freagra i Snáithe",
"Request Mode": "Mód Iarratais",
"Reranking Model": "Múnla Athrangú",
@@ -835,13 +852,13 @@
"Response notifications cannot be activated as the website permissions have been denied. Please visit your browser settings to grant the necessary access.": "Ní féidir fógraí freagartha a ghníomhachtú toisc gur diúltaíodh ceadanna an tsuímh Ghréasáin. Tabhair cuairt ar do shocruithe brabhsálaí chun an rochtain riachtanach a dheonú.",
"Response splitting": "Scoilt freagartha",
"Result": "Toradh",
"Retrieval": "",
"Retrieval": "Aisghabháil",
"Retrieval Query Generation": "Aisghabháil Giniúint Ceist",
"Rich Text Input for Chat": "Ionchur Saibhir Téacs don Chomhrá",
"RK": "RK",
"Role": "Ról",
"Rosé Pine": "Pine Rosé",
"Rosé Pine Dawn": "Rose Pine Dawn",
"Rosé Pine": "Péine Rosé",
"Rosé Pine Dawn": "Rosé Péine Breacadh an lae",
"RTL": "RTL",
"Run": "Rith",
"Running": "Ag rith",
@@ -885,7 +902,7 @@
"Select a pipeline": "Roghnaigh píblíne",
"Select a pipeline url": "Roghnaigh url píblíne",
"Select a tool": "Roghnaigh uirlis",
"Select an auth method": "",
"Select an auth method": "Roghnaigh modh an údair",
"Select an Ollama instance": "Roghnaigh sampla Olama",
"Select Engine": "Roghnaigh Inneall",
"Select Knowledge": "Roghnaigh Eolais",
@@ -897,8 +914,8 @@
"Send message": "Seol teachtaireacht",
"Sends `stream_options: { include_usage: true }` in the request.\nSupported providers will return token usage information in the response when set.": "Seolann `stream_options: { include_usage: true }` san iarratas.\nTabharfaidh soláthraithe a fhaigheann tacaíocht faisnéis úsáide chomharthaí ar ais sa fhreagra nuair a bheidh sé socraithe.",
"September": "Meán Fómhair",
"SerpApi API Key": "",
"SerpApi Engine": "",
"SerpApi API Key": "Eochair API SerpApi",
"SerpApi Engine": "Inneall SerpApi",
"Serper API Key": "Serper API Eochair",
"Serply API Key": "Eochair API Serply",
"Serpstack API Key": "Eochair API Serpstack",
@@ -918,11 +935,11 @@
"Set the number of worker threads used for computation. This option controls how many threads are used to process incoming requests concurrently. Increasing this value can improve performance under high concurrency workloads but may also consume more CPU resources.": "Socraigh líon na snáitheanna oibrithe a úsáidtear le haghaidh ríomh. Rialaíonn an rogha seo cé mhéad snáithe a úsáidtear chun iarratais a thagann isteach a phróiseáil i gcomhthráth. D'fhéadfadh méadú ar an luach seo feidhmíocht a fheabhsú faoi ualaí oibre comhairgeadra ard ach féadfaidh sé níos mó acmhainní LAP a úsáid freisin.",
"Set Voice": "Socraigh Guth",
"Set whisper model": "Socraigh múnla cogar",
"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 0)": "",
"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 1.1)": "",
"Sets how far back for the model to look back to prevent repetition. (Default: 64, 0 = disabled, -1 = num_ctx)": "Socraíonn sé cé chomh fada siar is atá an tsamhail le breathnú siar chun athrá a chosc. (Réamhshocrú: 64, 0 = díchumasaithe, -1 = num_ctx)",
"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt. (Default: random)": "Socraíonn sé an síol uimhir randamach a úsáid le haghaidh giniúna. Má shocraítear é seo ar uimhir shainiúil, ginfidh an tsamhail an téacs céanna don leid céanna. (Réamhshocrú: randamach)",
"Sets the size of the context window used to generate the next token. (Default: 2048)": "Socraíonn sé méid na fuinneoige comhthéacs a úsáidtear chun an chéad chomhartha eile a ghiniúint. (Réamhshocrú: 2048)",
"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
"Sets how far back for the model to look back to prevent repetition.": "",
"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt.": "",
"Sets the size of the context window used to generate the next token.": "",
"Sets the stop sequences to use. When this pattern is encountered, the LLM will stop generating text and return. Multiple stop patterns may be set by specifying multiple separate stop parameters in a modelfile.": "Socraíonn sé na stadanna le húsáid. Nuair a thagtar ar an bpatrún seo, stopfaidh an LLM ag giniúint téacs agus ag filleadh. Is féidir patrúin stad iolracha a shocrú trí pharaiméadair stadanna iolracha a shonrú i gcomhad samhail.",
"Settings": "Socruithe",
"Settings saved successfully!": "Socruithe sábhálta go rathúil!",
@@ -964,10 +981,10 @@
"System Prompt": "Córas Leid",
"Tags Generation": "Giniúint Clibeanna",
"Tags Generation Prompt": "Clibeanna Giniúint Leid",
"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting. (default: 1)": "Úsáidtear sampláil saor ó eireabaill chun tionchar na n-chomharthaí ón aschur nach bhfuil chomh dóchúil céanna a laghdú. Laghdóidh luach níos airde (m.sh., 2.0) an tionchar níos mó, agus díchumasaíonn luach 1.0 an socrú seo. (réamhshocraithe: 1)",
"Talk to model": "",
"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting.": "Úsáidtear sampláil saor ó eireabaill chun tionchar na n-chomharthaí ón aschur nach bhfuil chomh dóchúil céanna a laghdú. Laghdóidh luach níos airde (m.sh., 2.0) an tionchar níos mó, agus díchumasaíonn luach 1.0 an socrú seo. (réamhshocraithe: 1)",
"Talk to model": "Labhair le múnla",
"Tap to interrupt": "Tapáil chun cur isteach",
"Tasks": "",
"Tasks": "Tascanna",
"Tavily API Key": "Eochair API Tavily",
"Tell us more:": "Inis dúinn níos mó:",
"Temperature": "Teocht",
@@ -979,7 +996,7 @@
"Thanks for your feedback!": "Go raibh maith agat as do chuid aiseolas!",
"The Application Account DN you bind with for search": "An Cuntas Feidhmchláir DN a nascann tú leis le haghaidh cuardaigh",
"The base to search for users": "An bonn chun cuardach a dhéanamh ar úsáideoirí",
"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory. (Default: 512)": "Cinneann méid an bhaisc cé mhéad iarratas téacs a phróiseáiltear le chéile ag an am céanna. Is féidir le méid baisc níos airde feidhmíocht agus luas an mhúnla a mhéadú, ach éilíonn sé níos mó cuimhne freisin. (Réamhshocrú: 512)",
"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory.": "",
"The developers behind this plugin are passionate volunteers from the community. If you find this plugin helpful, please consider contributing to its development.": "Is deonacha paiseanta ón bpobal iad na forbróirí taobh thiar den bhreiseán seo. Má aimsíonn an breiseán seo cabhrach leat, smaoinigh ar rannchuidiú lena fhorbairt.",
"The evaluation leaderboard is based on the Elo rating system and is updated in real-time.": "Tá an clár ceannairí meastóireachta bunaithe ar chóras rátála Elo agus déantar é a nuashonrú i bhfíor-am.",
"The LDAP attribute that maps to the mail that users use to sign in.": "An tréith LDAP a mhapálann don ríomhphost a úsáideann úsáideoirí chun síniú isteach.",
@ -988,14 +1005,14 @@
"The maximum file size in MB. If the file size exceeds this limit, the file will not be uploaded.": "Uasmhéid an chomhaid i MB. Má sháraíonn méid an chomhaid an teorainn seo, ní uaslódófar an comhad.",
"The maximum number of files that can be used at once in chat. If the number of files exceeds this limit, the files will not be uploaded.": "An líon uasta na gcomhaid is féidir a úsáid ag an am céanna i gcomhrá. Má sháraíonn líon na gcomhaid an teorainn seo, ní uaslódófar na comhaid.",
"The score should be a value between 0.0 (0%) and 1.0 (100%).": "Ba chóir go mbeadh an scór ina luach idir 0.0 (0%) agus 1.0 (100%).",
"The temperature of the model. Increasing the temperature will make the model answer more creatively. (Default: 0.8)": "Teocht an mhúnla. Déanfaidh méadú ar an teocht an freagra múnla níos cruthaithí. (Réamhshocrú: 0.8)",
"The temperature of the model. Increasing the temperature will make the model answer more creatively.": "",
"Theme": "Téama",
"Thinking...": "Ag smaoineamh...",
"This action cannot be undone. Do you wish to continue?": "Ní féidir an gníomh seo a chur ar ais. Ar mhaith leat leanúint ar aghaidh?",
"This ensures that your valuable conversations are securely saved to your backend database. Thank you!": "Cinntíonn sé seo go sábhálfar do chomhráite luachmhara go daingean i do bhunachar sonraí cúltaca Go raibh maith agat!",
"This is an experimental feature, it may not function as expected and is subject to change at any time.": "Is gné turgnamhach í seo, b'fhéidir nach bhfeidhmeoidh sé mar a bhíothas ag súil leis agus tá sé faoi réir athraithe ag am ar bith.",
"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics. (Default: 24)": "Rialaíonn an rogha seo cé mhéad comhartha a chaomhnaítear agus an comhthéacs á athnuachan. Mar shampla, má shocraítear go 2 é, coinneofar an 2 chomhartha dheireanacha de chomhthéacs an chomhrá. Is féidir le comhthéacs a chaomhnú cabhrú le leanúnachas comhrá a choinneáil, ach dfhéadfadh sé laghdú a dhéanamh ar an gcumas freagairt do thopaicí nua. (Réamhshocrú: 24)",
"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated. (Default: 128)": "Socraíonn an rogha seo an t-uaslíon comharthaí is féidir leis an tsamhail a ghiniúint ina fhreagra. Tríd an teorainn seo a mhéadú is féidir leis an tsamhail freagraí níos faide a sholáthar, ach dfhéadfadh go méadódh sé an dóchúlacht go nginfear ábhar neamhchabhrach nó nach mbaineann le hábhar. (Réamhshocrú: 128)",
"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics.": "",
"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated.": "",
"This option will delete all existing files in the collection and replace them with newly uploaded files.": "Scriosfaidh an rogha seo gach comhad atá sa bhailiúchán agus cuirfear comhaid nua-uaslódála ina n-ionad.",
"This response was generated by \"{{model}}\"": "Gin an freagra seo ag \"{{model}}\"",
"This will delete": "Scriosfaidh sé seo",
@@ -1005,7 +1022,7 @@
"This will reset the knowledge base and sync all files. Do you wish to continue?": "Déanfaidh sé seo an bonn eolais a athshocrú agus gach comhad a shioncronú. Ar mhaith leat leanúint ar aghaidh?",
"Thorough explanation": "Míniú críochnúil",
"Thought for {{DURATION}}": "Smaoineamh ar {{DURATION}}",
"Thought for {{DURATION}} seconds": "",
"Thought for {{DURATION}} seconds": "Smaoineamh ar feadh {{DURATION}} soicind",
"Tika": "Tika",
"Tika Server URL required.": "Teastaíonn URL Freastalaí Tika.",
"Tiktoken": "Tictoken",
@@ -1014,7 +1031,7 @@
"Title (e.g. Tell me a fun fact)": "Teideal (m.sh. inis dom fíric spraíúil)",
"Title Auto-Generation": "Teideal Auto-Generation",
"Title cannot be an empty string.": "Ní féidir leis an teideal a bheith ina teaghrán folamh.",
"Title Generation": "",
"Title Generation": "Giniúint Teidil",
"Title Generation Prompt": "Leid Giniúint Teideal",
"TLS": "TLS",
"To access the available model names for downloading,": "Chun teacht ar na hainmneacha múnla atá ar fáil le híoslódáil,",
@@ -1050,7 +1067,7 @@
"Top P": "Barr P",
"Transformers": "Claochladáin",
"Trouble accessing Ollama?": "Deacracht teacht ar Ollama?",
"Trust Proxy Environment": "",
"Trust Proxy Environment": "Timpeallacht Iontaobhais do Phróicís",
"TTS Model": "TTS Múnla",
"TTS Settings": "Socruithe TTS",
"TTS Voice": "Guth TTS",
@@ -1072,14 +1089,14 @@
"Updated": "Nuashonraithe",
"Updated at": "Nuashonraithe ag",
"Updated At": "Nuashonraithe Ag",
"Upgrade to a licensed plan for enhanced capabilities, including custom theming and branding, and dedicated support.": "",
"Upgrade to a licensed plan for enhanced capabilities, including custom theming and branding, and dedicated support.": "Uasghrádú go dtí plean ceadúnaithe le haghaidh cumais fheabhsaithe, lena n-áirítear téamaí saincheaptha agus brandáil, agus tacaíocht thiomanta.",
"Upload": "Uaslódáil",
"Upload a GGUF model": "Uaslódáil múnla GGUF",
"Upload directory": "Uaslódáil eolaire",
"Upload files": "Uaslódáil comhaid",
"Upload Files": "Uaslódáil Comhaid",
"Upload Pipeline": "Uaslódáil píblíne",
"Upload Progress": "Uaslódáil an Dul",
"Upload Progress": "Dul Chun Cinn an Uaslódála",
"URL": "URL",
"URL Mode": "Mód URL",
"Use '#' in the prompt input to load and include your knowledge.": "Úsáid '#' san ionchur leid chun do chuid eolais a lódáil agus a chur san áireamh.",
@@ -1111,7 +1128,7 @@
"Warning:": "Rabhadh:",
"Warning: Enabling this will allow users to upload arbitrary code on the server.": "Rabhadh: Cuirfidh sé seo ar chumas úsáideoirí cód treallach a uaslódáil ar an bhfreastalaí.",
"Warning: If you update or change your embedding model, you will need to re-import all documents.": "Rabhadh: Má nuashonraíonn tú nó má athraíonn tú do mhúnla leabaithe, beidh ort gach doiciméad a athiompórtáil.",
"Warning: Jupyter execution enables arbitrary code execution, posing severe security risks—proceed with extreme caution.": "",
"Warning: Jupyter execution enables arbitrary code execution, posing severe security risks—proceed with extreme caution.": "Rabhadh: Trí fhorghníomhú Jupyter is féidir cód a fhorghníomhú go treallach, rud a chruthaíonn mór-rioscaí slándála - bí fíorchúramach.",
"Web": "Gréasán",
"Web API": "API Gréasáin",
"Web Search": "Cuardach Gréasáin",
@@ -1132,7 +1149,7 @@
"Why?": "Cén fáth?",
"Widescreen Mode": "Mód Leathanscáileán",
"Won": "Bhuaigh",
"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text. (Default: 0.9)": "Oibríonn sé le barr-k. Beidh téacs níos éagsúla mar thoradh ar luach níos airde (m.sh., 0.95), agus ginfidh luach níos ísle (m.sh., 0.5) téacs níos dírithe agus níos coimeádaí. (Réamhshocrú: 0.9)",
"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text.": "",
"Workspace": "Spás oibre",
"Workspace Permissions": "Ceadanna Spás Oibre",
"Write": "Scríobh",
@@ -1142,6 +1159,7 @@
"Write your model template content here": "Scríobh do mhúnla ábhar teimpléad anseo",
"Yesterday": "Inné",
"You": "Tú",
"You are currently using a trial license. Please contact support to upgrade your license.": "",
"You can only chat with a maximum of {{maxCount}} file(s) at a time.": "Ní féidir leat comhrá a dhéanamh ach le comhad {{maxCount}} ar a mhéad ag an am.",
"You can personalize your interactions with LLMs by adding memories through the 'Manage' button below, making them more helpful and tailored to you.": "Is féidir leat do chuid idirghníomhaíochtaí le LLManna a phearsantú ach cuimhní cinn a chur leis tríd an gcnaipe 'Bainistigh' thíos, rud a fhágann go mbeidh siad níos cabhrach agus níos oiriúnaí duit.",
"You cannot upload an empty file.": "Ní féidir leat comhad folamh a uaslódáil.",
@@ -1155,6 +1173,6 @@
"Your account status is currently pending activation.": "Tá stádas do chuntais ar feitheamh faoi ghníomhachtú.",
"Your entire contribution will go directly to the plugin developer; Open WebUI does not take any percentage. However, the chosen funding platform might have its own fees.": "Rachaidh do ranníocaíocht iomlán go díreach chuig an bhforbróir breiseán; Ní ghlacann Open WebUI aon chéatadán. Mar sin féin, d'fhéadfadh a tháillí féin a bheith ag an ardán maoinithe roghnaithe.",
"Youtube": "Youtube",
"Youtube Language": "",
"Youtube Proxy URL": ""
"Youtube Language": "Teanga Youtube",
"Youtube Proxy URL": "URL Seachfhreastalaí YouTube"
}


@@ -5,6 +5,7 @@
"(e.g. `sh webui.sh --api`)": "(p.e. `sh webui.sh --api`)",
"(latest)": "(ultima)",
"{{ models }}": "{{ modelli }}",
"{{COUNT}} hidden lines": "",
"{{COUNT}} Replies": "",
"{{user}}'s Chats": "{{user}} Chat",
"{{webUIName}} Backend Required": "{{webUIName}} Backend richiesto",
@@ -51,6 +52,7 @@
"Admins have access to all tools at all times; users need tools assigned per model in the workspace.": "",
"Advanced Parameters": "Parametri avanzati",
"Advanced Params": "Parametri avanzati",
"All": "",
"All Documents": "Tutti i documenti",
"All models deleted successfully": "",
"Allow Chat Controls": "",
@@ -64,7 +66,7 @@
"Allow Voice Interruption in Call": "",
"Allowed Endpoints": "",
"Already have an account?": "Hai già un account?",
"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out. (Default: 0.0)": "",
"Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out.": "",
"Always": "",
"Amazing": "",
"an assistant": "un assistente",
@@ -93,6 +95,7 @@
"Are you sure?": "Sei sicuro?",
"Arena Models": "",
"Artifacts": "",
"Ask": "",
"Ask a question": "",
"Assistant": "",
"Attach file from knowledge": "",
@@ -127,6 +130,7 @@
"Bing Search V7 Endpoint": "",
"Bing Search V7 Subscription Key": "",
"Bocha Search API Key": "",
"Boosting or penalizing specific tokens for constrained responses. Bias values will be clamped between -100 and 100 (inclusive). (Default: none)": "",
"Brave Search API Key": "Chiave API di ricerca Brave",
"By {{name}}": "",
"Bypass Embedding and Retrieval": "",
@@ -190,6 +194,7 @@
"Code Interpreter": "",
"Code Interpreter Engine": "",
"Code Interpreter Prompt Template": "",
"Collapse": "",
"Collection": "Collezione",
"Color": "",
"ComfyUI": "ComfyUI",
@@ -208,7 +213,7 @@
"Confirm your new password": "",
"Connect to your own OpenAI compatible API endpoints.": "",
"Connections": "Connessioni",
"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort. (Default: medium)": "",
"Constrains effort on reasoning for reasoning models. Only applicable to reasoning models from specific providers that support reasoning effort.": "",
"Contact Admin for WebUI Access": "",
"Content": "Contenuto",
"Content Extraction Engine": "",
@@ -218,9 +223,9 @@
"Continue with Email": "",
"Continue with LDAP": "",
"Control how message text is split for TTS requests. 'Punctuation' splits into sentences, 'paragraphs' splits into paragraphs, and 'none' keeps the message as a single string.": "",
"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled. (Default: 1.1)": "",
"Control the repetition of token sequences in the generated text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. At 1, it is disabled.": "",
"Controls": "",
"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text. (Default: 5.0)": "",
"Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text.": "",
"Copied": "",
"Copied shared chat URL to clipboard!": "URL della chat condivisa copiato negli appunti!",
"Copied to clipboard": "",
@@ -245,6 +250,7 @@
"Created At": "Creato il",
"Created by": "",
"CSV Import": "",
"Ctrl+Enter to Send": "",
"Current Model": "Modello corrente",
"Current Password": "Password corrente",
"Custom": "Personalizzato",
@@ -353,12 +359,13 @@
"Embedding model set to \"{{embedding_model}}\"": "Modello di embedding impostato su \"{{embedding_model}}\"",
"Enable API Key": "",
"Enable autocomplete generation for chat messages": "",
"Enable Code Execution": "",
"Enable Code Interpreter": "",
"Enable Community Sharing": "Abilita la condivisione della community",
"Enable Memory Locking (mlock) to prevent model data from being swapped out of RAM. This option locks the model's working set of pages into RAM, ensuring that they will not be swapped out to disk. This can help maintain performance by avoiding page faults and ensuring fast data access.": "",
"Enable Memory Mapping (mmap) to load model data. This option allows the system to use disk storage as an extension of RAM by treating disk files as if they were in RAM. This can improve model performance by allowing for faster data access. However, it may not work correctly with all systems and can consume a significant amount of disk space.": "",
"Enable Message Rating": "",
"Enable Mirostat sampling for controlling perplexity. (Default: 0, 0 = Disabled, 1 = Mirostat, 2 = Mirostat 2.0)": "",
"Enable Mirostat sampling for controlling perplexity.": "",
"Enable New Sign Ups": "Abilita nuove iscrizioni",
"Enabled": "",
"Ensure your CSV file includes 4 columns in this order: Name, Email, Password, Role.": "Assicurati che il tuo file CSV includa 4 colonne in questo ordine: Nome, Email, Password, Ruolo.",
@@ -375,6 +382,7 @@
"Enter CFG Scale (e.g. 7.0)": "",
"Enter Chunk Overlap": "Inserisci la sovrapposizione chunk",
"Enter Chunk Size": "Inserisci la dimensione chunk",
"Enter comma-seperated \"token:bias_value\" pairs (example: 5432:100, 413:-100)": "",
"Enter description": "",
"Enter Document Intelligence Endpoint": "",
"Enter Document Intelligence Key": "",
@@ -389,11 +397,13 @@
"Enter Jupyter Token": "",
"Enter Jupyter URL": "",
"Enter Kagi Search API Key": "",
"Enter Key Behavior": "",
"Enter language codes": "Inserisci i codici lingua",
"Enter Model ID": "",
"Enter model tag (e.g. {{modelTag}})": "Inserisci il tag del modello (ad esempio {{modelTag}})",
"Enter Mojeek Search API Key": "",
"Enter Number of Steps (e.g. 50)": "Inserisci il numero di passaggi (ad esempio 50)",
"Enter Perplexity API Key": "",
"Enter proxy URL (e.g. https://user:password@host:port)": "",
"Enter reasoning effort": "",
"Enter Sampler (e.g. Euler a)": "",
@@ -417,6 +427,7 @@
"Enter the public URL of your WebUI. This URL will be used to generate links in the notifications.": "",
"Enter Tika Server URL": "",
"Enter timeout in seconds": "",
"Enter to Send": "",
"Enter Top K": "Inserisci Top K",
"Enter URL (e.g. http://127.0.0.1:7860/)": "Inserisci URL (ad esempio http://127.0.0.1:7860/)",
"Enter URL (e.g. http://localhost:11434)": "Inserisci URL (ad esempio http://localhost:11434)",
@@ -440,9 +451,13 @@
"Example: mail": "",
"Example: ou=users,dc=foo,dc=example": "",
"Example: sAMAccountName or uid or userPrincipalName": "",
"Exceeded the number of seats in your license. Please contact support to increase the number of seats.": "",
"Exclude": "",
"Execute code for analysis": "",
"Expand": "",
"Experimental": "Sperimentale",
"Explain": "",
"Explain this section to me in more detail": "",
"Explore the cosmos": "",
"Export": "Esportazione",
"Export All Archived Chats": "",
@@ -566,7 +581,7 @@
"Include": "",
"Include `--api-auth` flag when running stable-diffusion-webui": "",
"Include `--api` flag when running stable-diffusion-webui": "Includi il flag `--api` quando esegui stable-diffusion-webui",
"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive. (Default: 0.1)": "",
"Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive.": "",
"Info": "Informazioni",
"Input commands": "Comandi di input",
"Install from Github URL": "Eseguire l'installazione dall'URL di Github",
@@ -624,6 +639,7 @@
"Local": "",
"Local Models": "",
"Location access not allowed": "",
"Logit Bias": "",
"Lost": "",
"LTR": "LTR",
"Made by Open WebUI Community": "Realizzato dalla comunità OpenWebUI",
@@ -764,6 +780,7 @@
"Permission denied when accessing microphone": "",
"Permission denied when accessing microphone: {{error}}": "Autorizzazione negata durante l'accesso al microfono: {{error}}",
"Permissions": "",
"Perplexity API Key": "",
"Personalization": "Personalizzazione",
"Pin": "",
"Pinned": "",
@@ -809,7 +826,7 @@
"Reasoning Effort": "",
"Record voice": "Registra voce",
"Redirecting you to Open WebUI Community": "Reindirizzamento alla comunità OpenWebUI",
"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative. (Default: 40)": "",
"Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative.": "",
"Refer to yourself as \"User\" (e.g., \"User is learning Spanish\")": "",
"References from": "",
"Refused when it shouldn't have": "Rifiutato quando non avrebbe dovuto",
@@ -918,11 +935,11 @@
"Set the number of worker threads used for computation. This option controls how many threads are used to process incoming requests concurrently. Increasing this value can improve performance under high concurrency workloads but may also consume more CPU resources.": "",
"Set Voice": "Imposta voce",
"Set whisper model": "",
"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 0)": "",
"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled. (Default: 1.1)": "",
"Sets how far back for the model to look back to prevent repetition. (Default: 64, 0 = disabled, -1 = num_ctx)": "",
"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt. (Default: random)": "",
"Sets the size of the context window used to generate the next token. (Default: 2048)": "",
"Sets a flat bias against tokens that have appeared at least once. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
"Sets a scaling bias against tokens to penalize repetitions, based on how many times they have appeared. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. At 0, it is disabled.": "",
"Sets how far back for the model to look back to prevent repetition.": "",
"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt.": "",
"Sets the size of the context window used to generate the next token.": "",
"Sets the stop sequences to use. When this pattern is encountered, the LLM will stop generating text and return. Multiple stop patterns may be set by specifying multiple separate stop parameters in a modelfile.": "",
"Settings": "Impostazioni",
"Settings saved successfully!": "Impostazioni salvate con successo!",
@@ -964,7 +981,7 @@
"System Prompt": "Prompt di sistema",
"Tags Generation": "",
"Tags Generation Prompt": "",
"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting. (default: 1)": "",
"Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting.": "",
"Talk to model": "",
"Tap to interrupt": "",
"Tasks": "",
@@ -979,7 +996,7 @@
"Thanks for your feedback!": "Grazie per il tuo feedback!",
"The Application Account DN you bind with for search": "",
"The base to search for users": "",
"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory. (Default: 512)": "",
"The batch size determines how many text requests are processed together at once. A higher batch size can increase the performance and speed of the model, but it also requires more memory.": "",
"The developers behind this plugin are passionate volunteers from the community. If you find this plugin helpful, please consider contributing to its development.": "",
"The evaluation leaderboard is based on the Elo rating system and is updated in real-time.": "",
"The LDAP attribute that maps to the mail that users use to sign in.": "",
@@ -988,14 +1005,14 @@
"The maximum file size in MB. If the file size exceeds this limit, the file will not be uploaded.": "",
"The maximum number of files that can be used at once in chat. If the number of files exceeds this limit, the files will not be uploaded.": "",
"The score should be a value between 0.0 (0%) and 1.0 (100%).": "Il punteggio dovrebbe essere un valore compreso tra 0.0 (0%) e 1.0 (100%).",
"The temperature of the model. Increasing the temperature will make the model answer more creatively. (Default: 0.8)": "",
"The temperature of the model. Increasing the temperature will make the model answer more creatively.": "",
"Theme": "Tema",
"Thinking...": "",
"This action cannot be undone. Do you wish to continue?": "",
"This ensures that your valuable conversations are securely saved to your backend database. Thank you!": "Ciò garantisce che le tue preziose conversazioni siano salvate in modo sicuro nel tuo database backend. Grazie!",
"This is an experimental feature, it may not function as expected and is subject to change at any time.": "",
"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics. (Default: 24)": "",
"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated. (Default: 128)": "",
"This option controls how many tokens are preserved when refreshing the context. For example, if set to 2, the last 2 tokens of the conversation context will be retained. Preserving context can help maintain the continuity of a conversation, but it may reduce the ability to respond to new topics.": "",
"This option sets the maximum number of tokens the model can generate in its response. Increasing this limit allows the model to provide longer answers, but it may also increase the likelihood of unhelpful or irrelevant content being generated.": "",
"This option will delete all existing files in the collection and replace them with newly uploaded files.": "",
"This response was generated by \"{{model}}\"": "",
"This will delete": "",
@@ -1132,7 +1149,7 @@
"Why?": "",
"Widescreen Mode": "",
"Won": "",
"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text. (Default: 0.9)": "",
"Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text.": "",
"Workspace": "Area di lavoro",
"Workspace Permissions": "",
"Write": "",
@@ -1142,6 +1159,7 @@
"Write your model template content here": "",
"Yesterday": "Ieri",
"You": "Tu",
"You are currently using a trial license. Please contact support to upgrade your license.": "",
"You can only chat with a maximum of {{maxCount}} file(s) at a time.": "",
"You can personalize your interactions with LLMs by adding memories through the 'Manage' button below, making them more helpful and tailored to you.": "",
"You cannot upload an empty file.": "",
