I’ve been quiet because I’ve been writing! (First draft done, book half edited, throwing everything at it but the kitchen sink and having lots of arguments about plot with Gemini, ha, because deadline in 16 daaaays.)
BUT I’m keeping an eye on everything, and the latest race between the image and video generators and aggregators seems to be over who can make either images or videos unlimited within a sub.
YEP that is a very cool feature to have, especially if you’re running at scale.
A week-ish ago, Freepik gave their top two plans unlimited image generations, including Runway’s References model (which was credit-hungry before), AND I just saw they made the new Hailuo 2 video model unlimited, too. Which, honestly, is a game-changer if you’re not a Midjourney person. Or even if you are. Hailuo 2 is really good, and has excellent real-world physics...something Midjourney sometimes struggles with. (Anyone else have a swinging sword go right through the head of the bearer? Ha.)
Here’s a video I just made for a Patreon post that uses Runway’s References for character consistency (References is getting a little dated, alas, but it’s still one of the best for consistency), and Hailuo 2 for the video. No sound. This was basically free, past the cost of the sub itself, which is very very reasonable if you get a year of it.
Hailuo’s always been good at more natural couples interactions than some of the other generators (cough, Runway, cough).
ANYHOW, these are the generators at the moment that have unlimited tiers on their higher-level subs:
Midjourney ($$, unlimited relaxed)
Freepik ($, unlimited all images, some image editing, some upscales, some videos)
Luma Dream Machine ($$, higher tier, unlimited relaxed)
Runway ($$, unlimited relaxed, some monthly credits)
Hailuo 2 ($$$, unlimited)
There might be more, but I’m only one me, alas, and haven’t seen them yet.
But if you’re after social media at scale or movies…this is a good idea. Of all of these, I’d go with MJ or Freepik.
Other things that have changed in the last week:
Veo3 updated to allow dialogue in image-to-video, which is HUGE. Now we can use other image models like Midjourney or Runway for character consistency, then add dialogue and animate in Veo3!
Veo3 is still expensive AF. But it’s still the best, most cohesive generator in the game, and it just pulled ahead again.
I was all set to gush excitedly here, too, about a new text model on Fal.ai that’s supposed to take the font and style from one piece of text and transfer it to different text, but…alas, my tests were…DOA. So, uh, no, I’ll spare you that. Not every new shiny thing coming out is actually…shiny. It’s a bit wild and woolly out there right now!
And this is more a me thing, but I’ve been shifting my story-to-movie adaptation mindset: from story to screenplay, to story to screenplay to shot list.
And the shot list is what I’ll use to create the movie, not the screenplay.
Because, honestly, a screenplay is meant for a different medium than this. And we are building a whole new medium with AI generated movies here. It’s not quite animation, it’s not quite traditional cinematography. It’s something else entirely, and it requires a different way of breaking it down.
So, I’ve been going after shot lists. Giving the AI the story, plus everything it needs to fill it out: my style preferences and prompt templates. Then asking it to go from story to screenplay (to process that story cinematically first), then to a shot list, with the shot list specifically tailored to the generators I’m working in, complete with prompts per shot.
It’ll break it down to something like this, depending on what you ask for (and need, honestly):
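(A rough sketch of the shape I mean, written out as a Python dict so the pieces are easy to see. The field names, the 8-second cap, and the prompt wording here are all illustrative, not my actual template.)

```python
# One shot-list entry, sketched as a Python dict.
# Everything here is a placeholder -- adjust fields to your own generators.
shot_007 = {
    "scene": 2,
    "shot": 7,
    "duration_sec": 8,            # keep each shot within the generator's clip limit
    "generator": "Veo3",          # or "Midjourney + Hailuo 2" for image-to-video
    "camera": "slow push-in, 35mm, shallow depth of field",
    "subject_action": "Mara lowers the sword, breathing hard, rain starting",
    "dialogue": '"We are not done here."',   # Veo3 image-to-video can now carry dialogue
    "reference_image": "mara_ref_03.png",    # hypothetical character-consistency still
    "prompt": (
        "Cinematic medium close-up of Mara lowering a sword in the rain, "
        "slow push-in, moody dusk light, realistic physics, 8 seconds"
    ),
}
```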
This whole thing could be dropped into Veo3 with a few tweaks (I’m still working on baking that into the flow, as this one kind of jumbled Veo3 and MJ together.)
But this will get you a framework for the individual components of an AI-generated shot, each shot being only as long as the generators allow at the moment.
If you plan shots per minute, you’ll also have a rough idea of how much a film will cost per minute if you’re using, say, Veo3.
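If you like napkin math, this is the kind of rough calculation I mean. The per-second price, shot length, and retake count below are placeholders, not real Veo3 pricing, so swap in whatever your plan actually charges:

```python
# Back-of-the-envelope cost per minute of finished film.
SHOT_LENGTH_SEC = 8      # assumed max clip length per generation
PRICE_PER_SEC = 0.50     # hypothetical $/second; check your own plan
RETAKES_PER_SHOT = 3     # generations you expect before you get a keeper

shots_per_minute = 60 / SHOT_LENGTH_SEC
cost_per_minute = shots_per_minute * SHOT_LENGTH_SEC * PRICE_PER_SEC * RETAKES_PER_SHOT
print(f"{shots_per_minute:.1f} shots/min, ~${cost_per_minute:.2f} per finished minute")
```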
This, for me, is a game changer. It lets me think modularly, vs trying to pull the individual things to generate straight out of a screenplay.
And I’m going to figure out how to automate it, for sure!
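For the curious, the automation itself doesn’t need to be fancy. Something like the sketch below: two passes through whatever model you like, story to screenplay, then screenplay to shot list. The call_llm wrapper is a stand-in, not a real API, so wire it to your own setup:

```python
# Sketch of the story -> screenplay -> shot list pipeline described above.
# call_llm() is a hypothetical stand-in for whatever model/API you actually use.

def call_llm(system: str, user: str) -> str:
    """Hypothetical wrapper around your LLM of choice."""
    raise NotImplementedError("wire this to your own model/API")

def story_to_shot_list(story: str, style_notes: str, prompt_template: str) -> str:
    # Pass 1: reframe the prose cinematically.
    screenplay = call_llm(
        system="You are a screenwriter adapting prose for the screen.",
        user=f"Adapt this story into a screenplay:\n\n{story}",
    )
    # Pass 2: break the screenplay into generator-ready shots.
    shot_list = call_llm(
        system=(
            "You are a shot-list planner for AI video generation. "
            "Each shot must be 8 seconds or less and include a ready-to-paste prompt."
        ),
        user=(
            f"Style preferences:\n{style_notes}\n\n"
            f"Prompt template per shot:\n{prompt_template}\n\n"
            f"Screenplay:\n{screenplay}"
        ),
    )
    return shot_list
```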
ALSO come hang with Cassie and me on Friday July 25 at 11AM PST! (See her last post for the details on how to get the link.) We are gonna be talking about cool things and making cool shit, and probably brain dumping at will because IT’S ALL SO EXCITING AHHH.
More soon!
-Novae