I’ve been adding something multimedia to my chapters as I post them on Patreon for my latest book, either an image, a video, or a soundtrack song.
And I probably should not, in fact, have dropped two hours of my time playing with the latest shiny new video generator, but, hey. I’ve been writing nonstop the last few days, and I needed a break.
ByteDance’s Seedance video model is new as of two weeks ago (I think?), and is currently scoring better than Veo3 in some of the video generator benchmarks, right up with the new Hailuo 2 Minimax model (which has good physics). Seedance is SO very crisp and clear. It doesn’t do audio like Veo3 does, alas. But I think the picture is actually crisper than Veo3’s, and the physics a little better. It’s more cinematic on the whole, too, though that competition’s pretty close.
And it can do multi-shot in a single video like Veo3, too!
You can get Seedance on Fal.ai, or an easier way to access it is Enhancor.ai, which is what I was using: I bought into it a few weeks ago to try it out and remembered I still have a month of credits left. As one does. Also, it’s pretty decent.
I’m planning to figure out how to use this by API, though, because it’s very, very pretty and not nearly as expensive as Veo3 via API (about 1/4 the cost). And making mass calls for rolling for movies is going to need to be a thing. The two models have a similar feel, and I think Veo3 and Seedance videos could pair well together within a movie as a whole. The movements and physics of everything are fantastic, and Seedance takes direction very well.
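For anyone else eyeing the API route, here’s a rough sketch of what a call through Fal.ai might look like. It assumes fal’s Python client (fal-client) and a FAL_KEY set in your environment; the model ID, argument names, and response shape below are my guesses, so check the model’s page on Fal.ai for the real ones:

```python
# Rough sketch of an image-to-video Seedance call via fal.ai.
# pip install fal-client, and set FAL_KEY in your environment first.
import fal_client

result = fal_client.subscribe(
    # Hypothetical model ID; look up the actual Seedance endpoint on fal.ai.
    "fal-ai/bytedance/seedance/v1/pro/image-to-video",
    arguments={
        # Starting frame (e.g. the Midjourney/Runway character shot).
        "image_url": "https://example.com/character-shot.png",
        "prompt": "Wide shot: two characters face off across a marble palace hall at dusk.",
        "duration": 10,          # assumed parameter name, seconds
        "resolution": "1080p",   # assumed parameter name
    },
)

# Assumed response shape; print whatever URL the endpoint actually returns.
print(result["video"]["url"])
```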
And also, I think multi-shot capability is going to be really, really important as we start actually making movies and shows. Seedance definitely does this better than Veo3, which can produce multi-shot videos but doesn’t necessarily do it with as much cohesion or as good timing. Most of the multi-shot videos I rolled for my trailer, I ended up cutting down to a single shot.
But when a multi-shot video does work, you’re cutting your cost for those shots, since you’re getting several shots out of one generation instead of trimming separate longer clips down anyway, and you’re getting more dynamic footage with professional cuts already baked in. And you’re not spending as much time in the edit yourself.
Anyhow, I took this character shot out of Midjourney, used Runway’s reference images to put a different character in there for the opposite side of the conversation, then ran the image outputs through Seedance via Enhancor.
Second shot out of Runway:
There’s no actual talking in the video; you’d have to do the voiceover and lipsync somewhere else.
But these are seven shots spread across three actual video clips. I didn’t trim the clips at all here, though I’d probably tweak the front and end frame timing a tad in actual production. Forgive the Suno music thrown in there; I wasn’t going to post a silent thing.
Seedance completely extrapolated that last clip from my prompting, and I think it did pretty well.
To get a cut, you put an interrupt in the prompt, like [Cut], and then describe the next shot.
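As a concrete illustration, here’s how I’d assemble that kind of prompt programmatically. The shot descriptions are just placeholders, and exactly how much whitespace the [Cut] marker wants is my assumption:

```python
# Minimal sketch: join per-shot descriptions into one multi-shot prompt,
# with [Cut] as the interrupt between shots.
shots = [
    "Wide shot: two figures face each other across a marble palace hall at dusk.",
    "Close-up: the first character's face, guarded, lips pressed tight.",
    "Over-the-shoulder: the second character steps closer, hand raised in appeal.",
]

prompt = " [Cut] ".join(shots)
print(prompt)
```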
Also, if you haven’t hit up Runway’s references for images yet, it is MAGIC. You can subtract the background, add in new things, tell it what you want, tag what you want. I screenshotted a still from my trailer for this book and used it in references to get a different angle of the palace. (I probably should have upscaled it somewhere first, but, hey. I’m just testing.)
Look at this:
MAGIC.
Actually, Flux Kontext can do that now, too, or something similar. So there’s options!
I am head down this week mostly, trying to make a significant dent in the 70K words I need to write and polish by the end of the month (meep!), but will keep plugging away here as I take my brain-dives into the tech! I swear, it helps offset writing brain.
-Novae