We’re sharing a new set of preprocessor-focused template workflows that make ComfyUI’s most common conditioning steps easier, more consistent, and reusable.
They cover core tasks used across image, animation, and video workflows:
Depth Estimation
Lineart Conversion
Pose Detection
Normals Extraction
Frame Interpolation
Each workflow is modular, inspectable, and easy to plug into larger graphs—whether for ControlNet, image-to-image, or video.
Why It Matters
Preprocessors are often treated as setup steps, but in practice, they are foundational creative tools. Clean depth, lineart, pose, and motion structure drive better control and consistency.
These workflows enable:
Faster iteration without full graph reruns
Clear separation of preprocessing and generation
Easier debugging and tuning
More predictable image and video results
Use them standalone, or drop them into any ComfyUI graph as reliable building blocks.
Depth Estimation Workflow
Depth estimation converts a flat image into a depth map representing relative distance within a scene. This structural signal is foundational for controlled generation, spatially aware edits, and relighting workflows.
This workflow emphasizes:
Clean, stable depth extraction
Consistent normalization for downstream use
Easy integration with ControlNet and image-edit pipelines
Depth outputs generated here can be reused across multiple passes, making it easier to iterate without re-running expensive upstream steps.
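To make the "consistent normalization" idea concrete, here is a minimal Python sketch of rescaling raw depth values into a fixed range. The function name and shape are illustrative only; the workflow's preprocessor nodes handle this internally.

```python
def normalize_depth(depth, eps=1e-8):
    """Rescale raw depth values to [0, 1] so downstream consumers
    (e.g. a ControlNet conditioning input) see a consistent range
    regardless of the estimator's native output scale.
    Illustrative sketch only -- not the actual node implementation."""
    lo, hi = min(depth), max(depth)
    return [(d - lo) / (hi - lo + eps) for d in depth]
```

Normalizing once, up front, is what lets the same depth map be reused across multiple passes without per-pass rescaling.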
Depth Estimation on Comfy Cloud
Lineart Conversion Workflow
Lineart preprocessors distill an image down to its essential edges and contours, removing texture and color while preserving structure.
This workflow is designed to:
Produce clean, high-contrast lineart
Minimize broken or noisy edges
Provide reliable structural guidance for stylization and redraw workflows
Lineart pairs especially well with depth and pose, offering strong structural constraints without overconstraining style.
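As a rough intuition for what "distilling an image down to its edges" means, the sketch below marks pixels where intensity jumps sharply. Real lineart preprocessors use learned edge models that produce far cleaner contours; this hypothetical helper only illustrates the edges-from-contrast idea.

```python
def to_lineart(gray, threshold=30):
    """Toy edge extractor: flag a pixel (255) when the horizontal
    intensity jump to its right neighbor exceeds a threshold.
    Illustrative only -- production lineart nodes use learned models."""
    h, w = len(gray), len(gray[0])
    return [[255 if x + 1 < w and abs(gray[y][x + 1] - gray[y][x]) > threshold else 0
             for x in range(w)]
            for y in range(h)]
```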
Lineart Conversion on Comfy Cloud
Pose Detection Workflow
Pose detection extracts body keypoints and skeletal structure from images, enabling precise control over human posture and movement.
This workflow focuses on:
Clear, readable pose outputs
Stable keypoint detection suitable for reuse across frames
Compatibility with pose-based ControlNet and animation pipelines
By isolating pose extraction into a dedicated workflow, pose data becomes easier to inspect, refine, and reuse.
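To show what an inspectable pose output might look like, here is a hypothetical minimal representation: named 2-D keypoints plus the bone connections used to render an OpenPose-style skeleton. The keypoint names and coordinates below are invented for illustration.

```python
# Hypothetical pose data: normalized (x, y) keypoints and the
# bone connections that define the drawable skeleton.
KEYPOINTS = {"head": (0.5, 0.1), "neck": (0.5, 0.25),
             "l_hand": (0.3, 0.5), "r_hand": (0.7, 0.5)}
BONES = [("head", "neck"), ("neck", "l_hand"), ("neck", "r_hand")]

def skeleton_segments(keypoints, bones):
    """Turn detected keypoints into drawable line segments --
    the readable skeleton image a pose preprocessor emits."""
    return [(keypoints[a], keypoints[b]) for a, b in bones]
```

Because the pose is just structured data like this, it is easy to inspect, hand-correct, and reuse across frames before it ever reaches a generation node.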
Normals Extraction Workflow
Normals estimation converts a flat image into a surface normal map—a per-pixel direction field that describes how each part of a surface is oriented (typically encoded as RGB). This signal is extremely useful for relighting, material-aware stylization, and highly structured edits, and it often complements depth by adding fine surface detail that depth maps can’t capture.
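The RGB encoding mentioned above can be sketched in one line of Python: each normal component in [-1, 1] is remapped to an 8-bit channel. Note that axis conventions (which channel is X, Y, Z, and their signs) vary between preprocessors, so treat the mapping below as an assumption.

```python
def normal_to_rgb(nx, ny, nz):
    """Map a unit surface normal with components in [-1, 1] to the
    common 8-bit normal-map encoding: channel = 255 * 0.5 * (n + 1).
    Axis/sign conventions differ across tools -- this is one example."""
    return tuple(round(255 * 0.5 * (c + 1.0)) for c in (nx, ny, nz))
```

This is why flat, camera-facing surfaces read as the characteristic lavender-blue in normal maps: a normal pointing straight at the camera encodes to roughly (128, 128, 255).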
This workflow emphasizes:
Clean, stable normal extraction with minimal speckling
Consistent orientation and normalization for reliable downstream use
ControlNet-ready outputs for relighting, refinement, and structure-preserving edits
Reuse across passes so you can iterate without re-running earlier steps
Normal outputs generated here can be used to:
Drive relight/shading changes while preserving geometry
Add a stronger 3D-like structure to stylization and redraw pipelines
Improve consistency across frames when paired with pose/depth for animation work
Normals Extraction on Comfy Cloud
Frame Interpolation Workflow
Frame interpolation generates intermediate frames between existing frames, resulting in smoother motion and improved temporal consistency.
This workflow supports:
Increasing frame rate in short clips
Smoothing motion in generated or edited video
Preparing sequences for downstream video models
Fixing low-FPS generations (especially 16fps outputs)
Many image- and video-generation workflows still default to 16fps, which can introduce noticeable stutter, stepping, and uneven motion—especially in camera moves and character animation. Frame interpolation is an effective way to smooth these artifacts without regenerating the source frames, making motion feel more natural while preserving the original composition and timing.
Rather than always treating interpolation as a final post-process, it can also be used as a preprocessing step—allowing you to standardize frame rate early and feed cleaner temporal data into larger animation and video pipelines.
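The core idea of synthesizing in-between frames can be sketched with the simplest possible interpolator, a linear cross-fade at the midpoint. Real interpolation nodes use learned motion models rather than blending, and the function names here are invented for illustration.

```python
def blend_frames(frame_a, frame_b, t):
    """Cross-fade two frames at time t in [0, 1] -- the simplest
    possible 'in-between' frame. Learned interpolators estimate
    motion instead of blending, but the idea is the same."""
    return [(1 - t) * a + t * b for a, b in zip(frame_a, frame_b)]

def double_fps(frames):
    """Insert one midpoint frame between each consecutive pair,
    roughly turning a 16 fps clip into a ~31 fps clip."""
    out = []
    for a, b in zip(frames, frames[1:]):
        out.append(a)
        out.append(blend_frames(a, b, 0.5))
    out.append(frames[-1])
    return out
```

Doubling n source frames yields 2n - 1 frames, which is why standardizing frame rate early keeps downstream timing math predictable.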
Frame Interpolation on Comfy Cloud
Getting Started
Update ComfyUI to the latest version
Download the workflows linked in this post, or find them in Templates on Comfy Cloud!
Follow the pop-up dialogs to download the required models and custom nodes
Review inputs, adjust settings, and run the workflow
As always, enjoy creating!