
Devlog: Integrating ComfyUI with Open WebUI
What exists already
There are seemingly no Helm charts available, except for a few “homemade” ones with little usage.
So we’ll make our own manifests. A Google search for ComfyUI images gives us saladtechnologies/comfyui, which has a decent number of downloads and updated images coming out at regular intervals. The images are stateless, meaning they come prebuilt with all model parameters. The large Stable Diffusion 3.5 image is about 22GB and should fit perfectly inside my RTX 3090 card.
AI-generated manifests
Gemma2
# Deployment manifest
apiVersion: apps/v1
kind: Deployment
metadata:
  name: comfyui-deployment
  namespace: openwebui
spec:
  replicas: 1
  selector:
    matchLabels:
      app: comfyui
  template:
    metadata:
      labels:
        app: comfyui
    spec:
      containers:
        - name: comfyui
          image: saladtechnologies/comfyui:comfy0.3.12-api1.8.1-sd3.5-large
          ports:
            - containerPort: 8188
# Service manifest
apiVersion: v1
kind: Service
metadata:
  name: comfyui-service
  namespace: openwebui
spec:
  selector:
    app: comfyui
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8188
  type: LoadBalancer
Except for the service type (a plain ClusterIP would be enough here, since Open WebUI reaches it from inside the cluster) this looks fine. Let’s smack it into Kubernetes and see what happens.
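Assuming the two manifests are saved as comfyui-deployment.yaml and comfyui-service.yaml (the file names are my own), applying them and watching the pod come up looks roughly like this:
# Apply both manifests and follow the pod status
kubectl apply -f comfyui-deployment.yaml -f comfyui-service.yaml
kubectl -n openwebui get pods -w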
It took about 20 minutes to pull the image, but once pulled, the container ran flawlessly.
Configuring Open WebUI
Going to the admin settings under “Images”, I set the URL to the ComfyUI service and copy-pasted the workflow.json from saladtechnologies’ GitHub.
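For reference, with the Service defined above, the in-cluster URL to point Open WebUI at should be something like:
http://comfyui-service.openwebui.svc.cluster.local
(the Service listens on port 80, so no port suffix is needed).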
In the ComfyUI Workflow Nodes section, set the prompt node ID to the same ID as in the workflow.json.
Generate a colorful cartoon sketch drawing of a hands in the air happy developer that has gotten his code to work late at night in his home office. With black IT equipment and a custom build PC and multiple screens. Outside the window is a starry night sky with a full moon. The desk has a few coffee cups.
Success with an asterisk
It worked out with a lot less work than expected. A downside is that ComfyUI loads the full model into VRAM at startup, which does not leave much room for the LLM. The model is kept in VRAM even when ComfyUI is idle. I guess we have to scale down ComfyUI when we’re not using it, at least for now.
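Scaling it down (and back up) is a one-liner, using the deployment name from the manifest above:
# Free the VRAM when ComfyUI is not needed
kubectl -n openwebui scale deployment comfyui-deployment --replicas=0
# Scale it back up before generating images again
kubectl -n openwebui scale deployment comfyui-deployment --replicas=1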
Finally, some probes
Let us delegate the task to qwen2.5-coder:32b
Me: "Can you add a health and ready check to this deployment manifest?
The health probe is available at the /health route on the same port as the comfyui container
The readiness probe is available at the /ready route on the same port as well"
Qwen:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: comfyui-deployment
  namespace: openwebui
spec:
  replicas: 1
  selector:
    matchLabels:
      app: comfyui
  template:
    metadata:
      labels:
        app: comfyui
    spec:
      containers:
        - name: comfyui
          image: saladtechnologies/comfyui:comfy0.3.12-api1.8.1-sd3.5-large
          ports:
            - containerPort: 8188
          livenessProbe:
            httpGet:
              path: /health
              port: 8188
            initialDelaySeconds: 30
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /ready
              port: 8188
            initialDelaySeconds: 5
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 3
The probes were actually on port 3000, my bad. After correcting the port number and adjusting the failure threshold to allow for a readiness time of about a minute, we are DONE!
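For the record, a quick way to sanity-check the probe endpoints (and catch a port mix-up like this earlier) is to port-forward to the deployment and curl them directly; this is just a sketch, assuming the deployment above:
# Forward the probe port locally (port-forward keeps running; run the curls in a second terminal)
kubectl -n openwebui port-forward deployment/comfyui-deployment 3000:3000
curl -i http://localhost:3000/health
curl -i http://localhost:3000/ready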