# ComfyUI-WanAnimatePreprocess

**Repository Path**: kyle9088/ComfyUI-WanAnimatePreprocess

## Basic Information

- **Project Name**: ComfyUI-WanAnimatePreprocess
- **Description**: https://github.com/kijai/ComfyUI-WanAnimatePreprocess
- **Primary Language**: Unknown
- **License**: Apache-2.0
- **Default Branch**: main
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 0
- **Created**: 2026-01-24
- **Last Updated**: 2026-01-24

## Categories & Tags

**Categories**: Uncategorized
**Tags**: None

## README

## ComfyUI helper nodes for [Wan video 2.2 Animate preprocessing](https://github.com/Wan-Video/Wan2.2/tree/main/wan/modules/animate/preprocess)

Nodes to run the ViTPose model, produce face crops, and generate the keypoint list for SAM2 segmentation.

Models: download to `ComfyUI/models/detection` (subject to change in the future).

YOLO:

https://huggingface.co/Wan-AI/Wan2.2-Animate-14B/blob/main/process_checkpoint/det/yolov10m.onnx

ViTPose ONNX: use either the Large model from here:

https://huggingface.co/JunkyByte/easy_ViTPose/tree/main/onnx/wholebody

or the Huge model, as in the original code. The Huge model is split into two files due to the ONNX file-size limit; both files must be in the same directory, and the `.onnx` file is the one selected in the model loader:

`vitpose_h_wholebody_data.bin` and `vitpose_h_wholebody_model.onnx`

https://huggingface.co/Kijai/vitpose_comfy/tree/main/onnx

![example](example.png)
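
The split Huge model relies on ONNX's external-data mechanism: the `.onnx` file holds the graph and references the `_data.bin` weights file by relative path, so the runtime only finds the weights if both files sit in the same directory. A minimal sketch of a pre-flight check for that layout (the function name and the hard-coded companion filename are illustrative, not part of the actual node code):

```python
from pathlib import Path

def resolve_vitpose_model(model_dir: str, onnx_name: str) -> Path:
    """Return the path to the selected ViTPose .onnx file, verifying that
    the external-data companion file is present for the split Huge model."""
    model_path = Path(model_dir) / onnx_name
    if not model_path.is_file():
        raise FileNotFoundError(f"ONNX model not found: {model_path}")
    # The Huge model stores its weights externally; ONNX Runtime resolves
    # the data file relative to the .onnx, so it must be in the same folder.
    if "vitpose_h" in onnx_name:
        data_path = model_path.with_name("vitpose_h_wholebody_data.bin")
        if not data_path.is_file():
            raise FileNotFoundError(
                f"Missing external weights next to the model: {data_path}"
            )
    return model_path
```

For example, `resolve_vitpose_model("ComfyUI/models/detection", "vitpose_h_wholebody_model.onnx")` would fail with a clear message if only the `.onnx` file was downloaded and the `_data.bin` file was forgotten.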