


We’ll be providing more dedicated workflows for Fusion in future tutorials.

One aspect of Fusion warp workflows that can trip people up is that Fusion treats still images and text as single frames, so you need to make sure the image you want to use is seen as the same length as the source footage, otherwise Mocha can't animate the warp. We're currently working on a workaround for this in the next patch update, but you can get around it now by either merging the still image with the source (and ignoring the source on output) or applying a frame expression to any value in the still image node (there's a quick sketch of the expression approach below).

Are you specifically looking at the Alembic workflows, or the warp render workflows?
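In case it helps, here's a rough sketch of the frame-expression approach run from Fusion's Python console. The node name "StillImage1" and the "Blend" input are placeholders, not part of any specific setup (most tools expose a Blend control on their Settings tab, and any animatable input will do); you can just as easily type the same expression into the control's Expression field in the UI instead of scripting it.

```python
# Rough sketch: force a still image node to re-evaluate on every frame by
# giving one of its inputs a time-dependent expression.
comp = fusion.GetCurrentComp()          # `fusion` is predefined in Fusion's Console
still = comp.FindTool("StillImage1")    # placeholder name - use your node's name

# "time*0" contributes nothing to the value, but it makes the input depend on
# the current frame, so Fusion evaluates the still per frame instead of
# treating it as a single frame, and Mocha can then animate the warp.
still.Blend.SetExpression("1.0 + time*0")
```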
