Open WebUI (Formerly Ollama WebUI) 👋

Open WebUI is an extensible, feature-rich, and user-friendly self-hosted WebUI designed to operate entirely offline. It supports various LLM runners, including Ollama and OpenAI-compatible APIs. For more information, be sure to check out our Open WebUI Documentation.

Open WebUI Demo

Key Features of Open WebUI

  • 🚀 Effortless Setup: Install seamlessly using Docker or Kubernetes (kubectl, kustomize or helm) for a hassle-free experience with support for both :ollama and :cuda tagged images.

  • 🤝 Ollama/OpenAI API Integration: Effortlessly integrate OpenAI-compatible APIs for versatile conversations alongside Ollama models. Customize the OpenAI API URL to link with LMStudio, GroqCloud, Mistral, OpenRouter, and more.

  • 🧩 Pipelines, Open WebUI Plugin Support: Seamlessly integrate custom logic and Python libraries into Open WebUI using the Pipelines Plugin Framework. Launch your Pipelines instance, set the OpenAI URL to the Pipelines URL, and explore endless possibilities. Examples include Function Calling, User Rate Limiting to control access, Usage Monitoring with tools like Langfuse, Live Translation with LibreTranslate for multilingual support, Toxic Message Filtering, and much more.

  • 📱 Responsive Design: Enjoy a seamless experience across Desktop PC, Laptop, and Mobile devices.

  • 📱 Progressive Web App (PWA) for Mobile: Enjoy a native app-like experience on your mobile device with our PWA, providing offline access on localhost and a seamless user interface.

  • ✒️🔢 Full Markdown and LaTeX Support: Elevate your LLM experience with comprehensive Markdown and LaTeX capabilities for enriched interaction.

  • 🛠️ Model Builder: Easily create Ollama models via the Web UI. Create and add custom characters/agents, customize chat elements, and import models effortlessly through Open WebUI Community integration.

  • 📚 Local RAG Integration: Dive into the future of chat interactions with groundbreaking Retrieval Augmented Generation (RAG) support. This feature seamlessly integrates document interactions into your chat experience. You can load documents directly into the chat or add files to your document library, effortlessly accessing them using the # command before a query.

  • 🔍 Web Search for RAG: Perform web searches using providers like SearXNG, Google PSE, Brave Search, serpstack, and serper, and inject the results directly into your chat experience.

  • 🌐 Web Browsing Capability: Seamlessly integrate websites into your chat experience using the # command followed by a URL. This feature allows you to incorporate web content directly into your conversations, enhancing the richness and depth of your interactions.

  • 🎨 Image Generation Integration: Seamlessly incorporate image generation capabilities using options such as AUTOMATIC1111 API or ComfyUI (local), and OpenAI's DALL-E (external), enriching your chat experience with dynamic visual content.

  • ⚙️ Many Models Conversations: Effortlessly engage with various models simultaneously, harnessing their unique strengths for optimal responses. Enhance your experience by leveraging a diverse set of models in parallel.

  • 🔐 Role-Based Access Control (RBAC): Ensure secure access with restricted permissions; only authorized individuals can access your Ollama, and exclusive model creation/pulling rights are reserved for administrators.

  • 🌐🌍 Multilingual Support: Experience Open WebUI in your preferred language with our internationalization (i18n) support. Join us in expanding our supported languages! We're actively seeking contributors!

  • 🌟 Continuous Updates: We are committed to improving Open WebUI with regular updates, fixes, and new features.

Want to learn more about Open WebUI's features? Check out our Open WebUI documentation for a comprehensive overview!

🔗 Also Check Out Open WebUI Community!

Don't forget to explore our sibling project, Open WebUI Community, where you can discover, download, and explore customized Modelfiles. Open WebUI Community offers a wide range of exciting possibilities for enhancing your chat interactions with Open WebUI! 🚀

How to Install 🚀

[!NOTE]
Please note that for certain Docker environments, additional configurations might be needed. If you encounter any connection issues, our detailed guide in the Open WebUI Documentation is ready to assist you.

Quick Start with Docker 🐳

[!WARNING]
When using Docker to install Open WebUI, make sure to include the -v open-webui:/app/backend/data volume mount in your Docker command. This step is crucial, as it ensures your database is properly mounted and prevents any loss of data.
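
As a quick sanity check once the container is running, you can confirm the volume was created and see where Docker stores it:

    docker volume inspect open-webui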

[!TIP]
If you wish to use Open WebUI with Ollama included or with CUDA acceleration, we recommend using our official images tagged with either :cuda or :ollama. To enable CUDA, you must install the Nvidia CUDA container toolkit on your Linux/WSL system.
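
As a rough sketch for Debian/Ubuntu systems, installing the toolkit typically looks like the following (this assumes Nvidia's apt repository has already been added; follow Nvidia's official instructions for your distribution):

    sudo apt-get update
    sudo apt-get install -y nvidia-container-toolkit
    sudo nvidia-ctk runtime configure --runtime=docker
    sudo systemctl restart docker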

Installation with Default Configuration

  • If Ollama is on your computer, use this command:

    docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
    
  • If Ollama is on a different server, use this command:

    To connect to Ollama on another server, change the OLLAMA_BASE_URL to the server's URL:

    docker run -d -p 3000:8080 -e OLLAMA_BASE_URL=https://example.com -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
    
  • To run Open WebUI with Nvidia GPU support, use this command:

    docker run -d -p 3000:8080 --gpus all --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:cuda


Installation for OpenAI API Usage Only

  • If you're only using the OpenAI API, use this command:

    docker run -d -p 3000:8080 -e OPENAI_API_KEY=your_secret_key -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
    

Installing Open WebUI with Bundled Ollama Support

This installation method uses a single container image that bundles Open WebUI with Ollama, allowing for a streamlined setup via a single command. Choose the appropriate command based on your hardware setup:

  • With GPU Support: Utilize GPU resources by running the following command:

    docker run -d -p 3000:8080 --gpus=all -v ollama:/root/.ollama -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:ollama
    
  • For CPU Only: If you're not using a GPU, use this command instead:

    docker run -d -p 3000:8080 -v ollama:/root/.ollama -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:ollama
    

Both commands provide a hassle-free, all-in-one installation of Open WebUI and Ollama, ensuring that you can get everything up and running swiftly.

After installation, you can access Open WebUI at http://localhost:3000. Enjoy! 😄
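
If the page does not load, a quick first check is the container's startup log (assuming you kept the container name open-webui from the commands above):

    docker logs -f open-webui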

Other Installation Methods

We offer various installation alternatives, including non-Docker native installation methods, Docker Compose, Kustomize, and Helm. Visit our Open WebUI Documentation or join our Discord community for comprehensive guidance.
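
For example, the repository ships Docker Compose files, so one possible route is to clone the repo and bring the stack up with Compose (a sketch; check the documentation for the compose file that matches your setup):

    git clone https://github.com/open-webui/open-webui.git
    cd open-webui
    docker compose up -d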

Troubleshooting

Encountering connection issues? Our Open WebUI Documentation has got you covered. For further assistance and to join our vibrant community, visit the Open WebUI Discord.

Open WebUI: Server Connection Error

If you're experiencing connection issues, it's often because the WebUI Docker container cannot reach the Ollama server at 127.0.0.1:11434 (host.docker.internal:11434) from inside the container. Use the --network=host flag in your Docker command to resolve this. Note that the port changes from 3000 to 8080, resulting in the link: http://localhost:8080.

Example Docker Command:

docker run -d --network=host -v open-webui:/app/backend/data -e OLLAMA_BASE_URL=http://127.0.0.1:11434 --name open-webui --restart always ghcr.io/open-webui/open-webui:main

Keeping Your Docker Installation Up-to-Date

To update your local Docker installation to the latest version, you can use Watchtower:

docker run --rm --volume /var/run/docker.sock:/var/run/docker.sock containrrr/watchtower --run-once open-webui

In the last part of the command, replace open-webui with your container name if it is different.
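
If you prefer unattended updates over one-off runs, Watchtower can also run as a long-lived container that periodically polls for new images, for example (a sketch; --interval is in seconds and --cleanup removes superseded images):

    docker run -d --name watchtower --volume /var/run/docker.sock:/var/run/docker.sock containrrr/watchtower --cleanup --interval 86400 open-webui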

Moving from Ollama WebUI to Open WebUI

Check our Migration Guide available in our Open WebUI Documentation.

What's Next? 🌟

Discover upcoming features on our roadmap in the Open WebUI Documentation.

Supporters

A big shoutout to our amazing supporters who are helping to make this project possible! 🙏

Platinum Sponsors 🤍

  • We're looking for Sponsors!

Acknowledgments

Special thanks to Prof. Lawrence Kim and Prof. Nick Vincent for their invaluable support and guidance in shaping this project into a research endeavor. Grateful for your mentorship throughout the journey! 🙌

License 📜

This project is licensed under the MIT License - see the LICENSE file for details. 📄

Support 💬

If you have any questions, suggestions, or need assistance, please open an issue or join our Open WebUI Discord community to connect with us! 🤝

Star History

Star History Chart

Created by Timothy J. Baek - Let's make Open WebUI even more amazing together! 💪