SpriteDX - Stage 2 Integration


So far, we’ve worked on getting Stage 1 integrated. Most of that work was getting ComfyUI running headless on RunPod, and it is working as expected for the most part.
Today, I’ll focus on getting Stage 2 integrated. Here is the Comfy workflow we generated previously.
Stage 2 handles generating animated frames given a static reference character image. We have the static character image generated from Stage 1, and we need to call Seedance 1 Pro API to get animated frames out of it.
API Vendor Selection
Parameters:
- Image: output from the last stage
- Resolution: 480p
- Duration: 5s
- Camera: fixed
We have a few options for an API vendor:
- Fal AI: $0.121 per generation
- Pollo AI: ~$0.72 per generation
  - $29 per 600 credits per month → $0.0483/credit
  - 15 credits/generation → $0.72
- Scenario: ~$0.18 per generation
  - $45 per 5,000 credits per month → $0.009/credit
  - 20 credits/generation → $0.18
  - No API support.
There are a bunch of others, but it looks like the monthly plans from the likes of Scenario and Pollo AI don’t actually offer much of a discount. One would think they would, but they don’t.
So instead of getting caught up in pricing battles, let’s just go with Fal.ai and move forward.
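As a quick sanity check on the per-generation math above, here is a small Python sketch. The numbers are the plan prices quoted in this post, and the function name is purely illustrative:

```python
# Per-generation cost from a monthly credit plan (plan numbers quoted above).
def per_generation(plan_usd: float, credits: int, credits_per_gen: int) -> float:
    """Cost of one generation given a plan price, its credit allotment,
    and how many credits a single generation consumes."""
    return plan_usd / credits * credits_per_gen

pollo = per_generation(29, 600, 15)      # 0.725, which the post rounds to ~$0.72
scenario = per_generation(45, 5000, 20)  # 0.18
fal = 0.121                              # flat per-generation price, no plan math
```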
Integrating Fal.ai into a TypeScript server is straightforward—you grab an API key and send a request. In my case, though, I’m routing it through a headless Comfy instance running on RunPod.
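For reference, the direct route looks roughly like this with fal.ai’s Python client (the TypeScript client has the same shape). The endpoint ID and argument names below are my assumptions based on fal.ai’s image-to-video endpoints, not a verified Seedance 1 Pro schema, so check the model page before relying on them:

```python
# Hedged sketch of a direct Fal.ai call, bypassing Comfy entirely.
# Requires `pip install fal-client` and a FAL_KEY environment variable.
SEEDANCE_ENDPOINT = "fal-ai/bytedance/seedance/v1/pro/image-to-video"  # assumed ID

def build_arguments(image_url: str) -> dict:
    """Mirror the Stage 2 parameters listed above (names are assumptions)."""
    return {
        "image_url": image_url,  # static character image from Stage 1
        "resolution": "480p",
        "duration": "5",         # seconds
        "camera_fixed": True,    # fixed camera
    }

def generate(image_url: str) -> dict:
    import fal_client  # imported lazily so the sketch loads without the package
    return fal_client.subscribe(SEEDANCE_ENDPOINT, arguments=build_arguments(image_url))
```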
Why Comfy?
Positioning SpriteDX as powered by Comfy immediately resonates with hobbyists already in that ecosystem. The Comfy brand conveys familiarity, trust, and affordability—values that are equally embedded in SpriteDX’s DNA.
Integration
We first install a custom node that enables communication with Fal.ai. The one I’ve been using is a modified version of ComfyUI-fal-API. It works like a charm, but it only has code for Seedance 1 Lite, so we need to fix it up a bit to support Seedance 1 Pro.
There was another issue: the node’s requirements.txt does not list opencv-python, so that dependency has to be added as well.
Looking good so far. Now, we update the Dockerfile to include this custom node and push it to RunPod.
```dockerfile
RUN git clone https://github.com/kndlt/ComfyUI-fal-API.git /comfyui/custom_nodes/ComfyUI-fal-API
RUN pip install --no-cache-dir -r /comfyui/custom_nodes/ComfyUI-fal-API/requirements.txt
```
After this, the video gets generated fine, but I noticed two issues:
1. Double queuing — RunPod queues up the task and Fal.ai also queues up the task, so in effect the user ends up experiencing double the wait time.
2. RunPod’s Worker ComfyUI does not support video outputs.
I’m not going to think about the first issue too much for now, since it’s more of a scheduling and scaling problem.
To solve the second problem, we use a series of nodes to convert the video URL into an animated WebP file.
So far so good: the animation is generated and placed in the Intermediate folder as an animated WebP file.
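Under the hood, that node chain amounts to: download the video from the returned URL, decode its frames, and re-encode them as a single animated WebP. Here is a rough Pillow sketch of just the encode step — frame decoding (e.g. via OpenCV) is omitted, and this is an illustration, not the actual node code:

```python
# Illustrative sketch: pack a list of decoded frames (PIL images) into one
# looping animated WebP, which is what the final node in the chain produces.
from PIL import Image

def frames_to_animated_webp(frames: list, out_path: str, fps: int = 12) -> None:
    """Save a list of PIL images as a single looping animated WebP."""
    if not frames:
        raise ValueError("no frames to encode")
    frames[0].save(
        out_path,
        format="WEBP",
        save_all=True,              # write every frame, not just the first
        append_images=frames[1:],
        duration=int(1000 / fps),   # per-frame display time in milliseconds
        loop=0,                     # 0 = loop forever
    )
```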
Tomorrow will be a lay-low day because school has started and I need to focus on coursework. I will try to work on refining Stage 1. It has some hardcoded values in the cropping step that produces the final output; I need to make it handle different types of templates, and perhaps add one more template into the mix.
That said, perhaps I will try to integrate Stage 3. That will get me closer to a “first-demoable” product.
— Sprited Dev 🌱