Open WebUI: First Impressions


Every time I use Claude Desktop and hit my subscription limit with “You’re almost out of tokens…”, I sigh. It’s not like you can just continue with another frontier model in a different web UI; you are stuck.
Of course, I could fork over more money to Anthropic, but based on surveys of AI model usage, I feel alone trying to create AI-assisted workflow tools for people who don’t want to pay $200/month. I also feel like my target user base will balk at using the command line and something like Claude Code.
So, where does someone like me go in times of token need?
AI Clients and MCP Tools
I’m not sure if “AI Clients” is the right term to use; I was originally looking for “MCP clients” when I started my search for a Claude Desktop clone. The general project I am working on started as a way to test out creating MCP (Model Context Protocol) servers and seeing what I could do with them.
With only a few MCP servers, Claude was able to edit files, look up web content, use Chrome via Playwright…heck, I even tested out the Blender MCP server for 3D model editing. I will write a post later about that MCP server, but it’s crazy how many MCP servers people have created to let AI models interact with the outside world.
The thing is: most of these MCP servers are useless once you consider what more primitive tools plus a super-intelligent AI model can accomplish. For example, let’s take the Blender MCP server…since I will likely forget to write an article about it…
I was initially impressed with the idea of talking to an AI model that could then talk to Blender and help me create 3D models to use in Three.js scenes. It seemed too perfect, and in my experience, it was exactly that: too good to be true. The MCP server frequently had trouble connecting, and the 30+ tools it added confused Claude on other tool-usage tasks.
Instead, I asked about Claude’s knowledge of Blender and was informed he knew quite a bit about the software, since creating 3D models is apparently part of the model training process. Then I looked and saw the MCP server was simply wrapping Blender’s Python library while limiting which of its functions you could call.
So, I ditched the Blender MCP tool, used the filesystem MCP server and my own CLI command MCP server, and in no time, Claude was producing decent 3D models. The best part was packaging up the code the way I wanted to, instead of being stuck with the way someone else decided to abstract the Blender Python library in their MCP server.
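To give a sense of what that looked like in practice, the scripts Claude wrote ran through Blender’s headless mode via my CLI MCP server. Here is a minimal sketch of that kind of script; the cube and file name are illustrative stand-ins, not one of my actual models:

```python
# Minimal sketch of the kind of Blender script Claude would write for me.
# The cube and file name are illustrative; run it headlessly with:
#   blender --background --python make_model.py
import bpy

# Start from an empty scene so repeated runs stay deterministic.
bpy.ops.wm.read_factory_settings(use_empty=True)

# Add a simple mesh and give it a basic material.
bpy.ops.mesh.primitive_cube_add(size=2)
cube = bpy.context.active_object
material = bpy.data.materials.new(name="BasicBlue")
material.diffuse_color = (0.2, 0.5, 0.8, 1.0)  # RGBA
cube.data.materials.append(material)

# Export to glTF, which Three.js can load directly.
bpy.ops.export_scene.gltf(filepath="model.glb")
```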
AI Client Features Needed
Based on Claude Desktop usage, I knew I needed the following features in an OSS AI client:
Filesystem access - Not hard to load via an MCP server
Web search - Can’t rely on only a model’s training data set
Memory/db - While the memory system I was using kind of sucked, I need some kind of persistent store to use across conversations
Project knowledge base - It is very helpful to upload documents that the AI model can access via RAG (Retrieval Augmented Generation)
CLI runner - Being able to run commands reduces the need to search for and find MCP servers (see the sketch after this list)
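The CLI runner is the piece I ended up writing myself, and a server like that is essentially a thin wrapper around subprocess. A stripped-down sketch using the MCP Python SDK’s FastMCP helper looks something like this (the server name, timeout, and tool signature are illustrative choices, not my exact code):

```python
# Sketch of a CLI-runner MCP server built on the MCP Python SDK's FastMCP helper.
# The server name, timeout, and tool signature are illustrative choices.
import subprocess

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("cli-runner")


@mcp.tool()
def run_command(command: str) -> str:
    """Run a shell command and return its combined stdout and stderr."""
    result = subprocess.run(
        command,
        shell=True,
        capture_output=True,
        text=True,
        timeout=60,
    )
    return result.stdout + result.stderr


if __name__ == "__main__":
    # Serve over stdio so a desktop client can launch it directly.
    mcp.run()
```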
So, of course, I started a new chat with one of the frontier AI models, asking it what the best open-source alternative to Claude Desktop was. It gave me a very long-winded response, but in the end, I chose to investigate Open WebUI after comparing it with a competitor, LobeChat.
I also remembered an “MCP client” called Goose, and I’m sure there are many other options to look at. However, with 105,000+ GitHub stars and a very active contributor community, I thought I would give Open WebUI a go.
Open WebUI
From a glance at the GitHub repository, Open WebUI is made with JavaScript, Svelte, Python, TypeScript, and a handful of other programming languages. I’m used to only seeing about two or three languages mentioned on a GitHub repository, so this made my eyes go sideways for a bit, but maybe the maintainers have a good reason for the language complexity of their application.
I should coin the term “language complexity.” In any case, you should pay attention to things like this, since it means your development environment will have to cater to multiple programming languages, and with other projects on your machine, versions might start bumping into each other.
Installation
The GitHub readme starts with installing via Python, but the Quickstart documentation emphasizes using Docker as the recommended approach. So, I pulled down the Docker image and was able to spin up a version of Open WebUI easily enough.
You need some kind of LLM runner to serve a local model, and I had already installed Ollama for this purpose, complete with a couple of local models: Gemma and DeepSeek-R1.
Usage and Tools/Functions
The Open WebUI interface is pretty smooth, with a traditional chat input in the center of the screen to start a conversation with an AI model. You can use a dropdown to pick or install a new model, but the models I had added via Ollama were automatically picked up, so I had no need to add more.
In my experience with Claude Desktop, the AI chats were far more interesting with the use of MCP servers and their tools. With web search, filesystem access, and command-line running, I could do far more than the Claude.ai web interface allowed. However, when I asked the model I was using, deepseek-r1:8b, about tool usage, I did not get anywhere.
After clicking on “Workspaces”, I saw an area to add tools, and when I clicked to add a tool, I was greeted with a sample Python class. So, I guess the preferred way of adding deterministic function calling into Open WebUI is via Python functions.
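From what I remember of that sample, a tool is just a Python class whose type-hinted, docstring-ed methods become functions the model can call. The sketch below captures that general shape; the method names and logic are my own inventions rather than Open WebUI’s exact template:

```python
# Rough sketch of an Open WebUI tool, based on my memory of the sample class.
# Only the general shape (a Tools class with type-hinted, documented methods)
# reflects what I saw; the methods themselves are made up for illustration.
import datetime


class Tools:
    def __init__(self):
        pass

    def get_current_time(self) -> str:
        """Return the current UTC time as an ISO 8601 string."""
        return datetime.datetime.now(datetime.timezone.utc).isoformat()

    def word_count(self, text: str) -> int:
        """Count the whitespace-separated words in the given text."""
        return len(text.split())
```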
I expected a marketplace of sorts, like the ones I’ve seen popping up in tools like Claude Desktop, and Open WebUI does have a way to import tools as well as a link to a community marketplace of them. However, that link sent me to a mostly blank page when I tried to find the community tools.
MCP-to-OpenAPI
After searching for anything MCP-related in the Open WebUI app as well as the documentation sections, I finally had to search for “MCP Support” to find a documentation page buried in an “OpenAPI Tool Servers” section.
While MCP tool servers are powerful and flexible, they commonly communicate via standard input/output (stdio)—often running on your local machine where they can easily access your filesystem, environment, and other native system capabilities.
That’s a strength—but also a limitation.
Shots fired! Dev down!
I am no MCP groupie, but after making a couple of MCP servers and having few issues working with them locally, I was hoping Open WebUI would provide decent MCP support. Instead, the maintainers are taking the stance that MCP is not suitable for deployment to cloud services.
If you want to deploy your main interface (like Open WebUI) on the cloud, you quickly run into a problem: your cloud instance can’t speak directly to an MCP server running locally on your machine via stdio.
But curiously, most users of Open WebUI will start on their local machines. Classic case of putting the cart before the horse, IMHO. Instead of being able to test out your carefully crafted MCP servers using stdio, now we have to involve network calls for some reason.
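To make the extra hop concrete: a tool the client used to spawn locally and talk to over stdio instead becomes an HTTP endpoint on the mcpo proxy. The hypothetical sketch below shows what that call looks like; the port and route are made up for illustration, and the OpenAPI docs mcpo generates would list the real paths:

```python
# Hypothetical illustration of the extra network hop mcpo introduces.
# The port and route are made up for this example; the OpenAPI docs that
# mcpo generates would list the real paths for your proxied tools.
import requests

# Before: the AI client spawns the MCP server locally and talks to it over stdio.
# After: the same tool call becomes an HTTP request to the mcpo proxy.
response = requests.post(
    "http://localhost:8000/run_command",  # hypothetical route for a CLI-runner tool
    json={"command": "ls -la"},
)
print(response.json())
```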
While some bicker about MCP introducing a new protocol into the mix, I think the statefulness of AI chats makes it a sensible place to experiment with one, and classic REST does not map well onto AI chats. Furthermore, the authentication problems with MCP servers are being worked on.
So even though adding mcpo might at first seem like "just one more layer"—in reality, it simplifies everything…
Unlike the Open WebUI maintainers, I do not think mcpo simplifies my life. If they supported MCP servers directly, I would be happily moving on to adding the MCP servers I like to use, without any of the friction the Open WebUI docs mention.
Conclusion
“To each their own,” the saying goes, and I’ll have to depart here since I don’t want to start writing Python code to approximate the MCP servers I’ve happily been using in Claude Desktop. I would try out the community tools, if I could find them, but maybe the next time I look in Open WebUI’s direction, they will have shipped more useful tools by default. Web search, for starters, would be a good addition to the Docker image.