NEW SHORT: Into the Night Owl Club 🦉 (MonStar Screen Test)
Oh Hey Void,
Today's short film drops us right into the shoes of Benny — a cowardly criminal who desperately wants to be tough.
🎬 The Scene
Benny gets a call from his crime boss with a simple job: deliver something to the Night Owl Club. Easy, right? Except Benny doesn’t realize the Night Owl is no ordinary nightclub. It’s a vampire den — a hidden party spot for the worst monsters in the city.
🛠️ The Tech Side
On the production side, this short film doubled as a workflow experiment. We tested the brand-new Wan 2.2 AI models guided by Unreal Engine 5 MetaHuman renders.
Here’s how it worked:
- UE5 guides animation, lights, & character performance.
- We then used Wan 2.2 with a re-styled frame.
- By feeding in a custom ControlNet workflow, we re-styled the raw game engine look into something with more cinematic textures and detail.
It’s still a rough test — lots to learn and refine — but this process has huge potential to strip away the “game engine feel” and get us closer to film-quality visuals on an indie budget.
🚧 What’s Next
This is just the beginning of Benny’s night from hell. We’ll be sharing more scenes soon as we expand this into the full MonStar world.
Thanks for being here at the ground floor — where everything is rough, messy, and raw. You’re helping us shape what these films will become.
🖤 Amber + Jayson
Wan 2.2 Test w/ MonStar Benny
The last few weeks, we’ve been tinkering with Wan 2.2 + Fun Control and figuring out how to drop it into our pipeline. After months on Wan 2.1, 2.2 feels noticeably better at “locking in” identity and shape: less drift, more consistency.
Render stats: the full 1:39 edit took ~3 hours on a 4070 (12 GB VRAM) and about 20 generated clips, with some trial and error along the way. Here’s what worked for us.
Frame rate vs. frames per clip
We switched to 15 fps with 90 generated frames per pass.
- That gave us ~6 seconds per batch, which let us build longer segments.
- Our rig is happiest around 81 frames; we pushed to 90 anyway to squeeze out more runtime.
- If you stick to 24 fps, the same frame budget only nets you ~3.4–3.75 seconds per batch, so your shots get choppier to stitch (quick math sketched below).
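If you want to run your own numbers, the trade-off is just frames divided by fps. A minimal Python sketch of the math above (the frame counts are the ones from this test; everything else is plain arithmetic):

```python
# Quick math for the trade-off above: seconds of footage per generation pass
# is just frames / fps. Frame counts are the ones we actually run.
def seconds_per_batch(frames: int, fps: float) -> float:
    return frames / fps

for frames, fps in [(90, 15), (81, 15), (90, 24), (81, 24)]:
    print(f"{frames} frames @ {fps} fps -> {seconds_per_batch(frames, fps):.2f} s per batch")

# 90 frames @ 15 fps -> 6.00 s per batch
# 81 frames @ 15 fps -> 5.40 s per batch
# 90 frames @ 24 fps -> 3.75 s per batch
# 81 frames @ 24 fps -> 3.38 s per batch
```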
Lower OpenPose strength (to stop bleed-through)
Because the shot slowly pushes in, the Control Video started bleeding through and we got those white dot artifacts on Benny's face.
- Dialing down the OpenPose blend strength in the final Control Video helped.
- Close-ups: 0.25 worked well.
- Wider shots: going below 0.30 messed up the eyes. It’s a balance (rough blend sketch below).
Note: you can still spot a few dots if you look closely in the demo—we left them in for funsies.
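The blend itself is nothing exotic: lower the opacity of the OpenPose layer when it gets composited into the final Control Video. Outside of ComfyUI, the same idea looks roughly like this Pillow sketch; the file names are placeholders and 0.25 is just the close-up value mentioned above, not our exact node settings:

```python
# Rough illustration of "lowering OpenPose strength": composite the pose render
# over the base control frame at reduced opacity before it feeds the model.
# File names are placeholders; 0.25 is our close-up value, ~0.30+ for wider shots.
from PIL import Image

POSE_STRENGTH = 0.25

base = Image.open("control_frame_0001.png").convert("RGB")   # base control pass
pose = Image.open("openpose_frame_0001.png").convert("RGB")  # OpenPose render

# Image.blend(a, b, alpha) -> a * (1 - alpha) + b * alpha
blended = Image.blend(base, pose, POSE_STRENGTH)
blended.save("control_blend_0001.png")
```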
Test the first and last frames
We didn't notice the issue mentioned above until halfway through.
So to catch mid-clip artifacts early, we started proofing the first 5 and last 5 frames of every 90-frame set. If dots show up there, they’ll usually pop mid-clip too.
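If your batches land on disk as numbered image sequences, the proofing pass can be as simple as copying the first five and last five frames of each batch into one folder and flipping through them. A minimal sketch, with placeholder paths:

```python
# Copy the first 5 and last 5 frames of a batch into a proof folder so
# bleed-through artifacts get spotted before you commit to the full set.
import shutil
from pathlib import Path

BATCH_DIR = Path("renders/batch_03")   # placeholder path to one batch's frames
PROOF_DIR = Path("renders/proof")
PROOF_DIR.mkdir(parents=True, exist_ok=True)

frames = sorted(BATCH_DIR.glob("*.png"))
for frame in frames[:5] + frames[-5:]:
    shutil.copy(frame, PROOF_DIR / f"{BATCH_DIR.name}_{frame.name}")
```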
Use the same first frame for every batch
Even with the push-in, using the same exact starting frame for all batches kept the face consistent across clips.
- We first tried using the last frame of the previous batch to “match” into the next batch. That actually made the output drift/cartoonify over time, and the cut from last to first was obvious (Benny didn’t look the same).
- Reusing the same start frame every time fixed it—even if expression or camera position didn’t match perfectly. Wan 2.2 figured it out.
Stagger frame starts (for cleaner blends)
We staggered starts by 10 frames to overlap batches:
- Clip 1: 0-90
- Clip 2: 80-170
- Clip 3: 160-250
Why: the overlap makes it easier to blend/crossfade batches and avoid hard color/contrast jumps. Net effect: a steadier, mostly seamless 1:30 shot.
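Rather than tracking the ranges by hand, you can generate the batch plan. This little sketch reproduces the ranges above (90-frame batches, 10-frame overlap, 250 total frames in this example):

```python
# Staggered batch ranges: each new batch starts 10 frames before the previous
# one ends, giving neighbouring clips a 10-frame overlap to crossfade across.
def batch_ranges(total_frames: int, batch_len: int = 90, overlap: int = 10):
    start = 0
    while start + batch_len <= total_frames:
        yield start, start + batch_len
        start += batch_len - overlap

for i, (a, b) in enumerate(batch_ranges(250), start=1):
    print(f"Clip {i}: {a}-{b}")

# Clip 1: 0-90
# Clip 2: 80-170
# Clip 3: 160-250
```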
Conclusion
Wan 2.2 is clearly better at teeth and eye movement, and with our ComfyUI workflow it’s already usable—but we’ve still got a few knobs to dial in.
Next up: a full scene test—more complex lighting, locations, and camera moves—to see how the model behaves under real production pressure.
If you want to experiment with a Wan Fun Workflow, don't forget that members also get access to the Wan 2.1 Fun Control Workflow we released earlier this week!
Wan 2.1 Fun Control - Frame 2 Frame Workflow
Hey Tech-Heads, this one’s for you.
After months of chaos, tweaks, crashes, and render miracles—we’re finally releasing our Wan 2.1 Fun Control workflow to members.
It’s the same setup we’ve been using on our own shorts and tests.
🧪 WHAT IS IT?
This is our frame-to-frame (f2f) pipeline. It's a style transfer beast, not a perfect interpolator.
Think of it as a Runway Restyle alternative that lives fully inside ComfyUI.
You feed it:
- 🧠 Your raw video
- 🖼️ A restyled first frame
- 🤖 ...and let the machine do the magic trick (see the sketch below).
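If you’d rather queue it headless than click through the graph, ComfyUI’s HTTP API (POST /prompt) will take an API-format export of the workflow. This is only a sketch of that idea; the node IDs, input names, and file names below are placeholders you’d need to match against your own exported JSON:

```python
# Queue an API-format export of the workflow via ComfyUI's POST /prompt endpoint.
# Node IDs ("12", "27") and input names are placeholders; open the exported JSON
# and match them to the actual video-loader and image-loader nodes in your graph.
import json
import urllib.request

with open("wan21_fun_control_f2f_api.json") as f:  # placeholder file name
    workflow = json.load(f)

workflow["12"]["inputs"]["video"] = "raw_take.mp4"              # your raw video
workflow["27"]["inputs"]["image"] = "first_frame_restyled.png"  # your restyled first frame

req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=json.dumps({"prompt": workflow}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
print(urllib.request.urlopen(req).read().decode())  # server returns the queued prompt id
```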
⚙️ QUICK NOTES:
- Built for style transfer.
- First frame matters A LOT.
- Includes Canny + DWpose + Depth Anything for triple-layer control.
- Needs decent RAM/VRAM (check specs in the product listing).
- No support. No patches. No crying, please.
- Not for beginners—you will need ComfyUI chops.
⚠️ USE AT YOUR OWN RISK
This workflow is a Frankenstein remix of public ComfyUI tutorials. We made it functional-ish. It can probably be improved by smarter users.
No refunds. No promises.
Basic user manual included in the JSON.
🧠 SHOUTOUTS TO THE LEGENDS:
Huge thanks to the creators we learned from:
Drop your experiments, wins, or WTF moments in the comments.
We’ll cheer you on—just don’t ask us to debug the spaghetti.
MEMBERS UNLOCK FREE E-BOOKS
One of our favorite perks of joining the Void is Free Stuff!
Download the coolest Screenplay Books you'll ever see to get a taste of what worlds we're cookin' up.
Free Ebooks:
- Survivors Don't Die - Episode 1
  A naive girl struggles to keep her family alive in a zomborg-infested city. When a corporate cult offers salvation, they must decide what safety and humanity are worth.
  This ebook includes:
  - Full screenplay for Episode 1 of the 7-episode series
  - Full pitch deck for the 1st season
  - Lots of previs images & concept art
- Zombie Portraits Digital Coloring Book
  Unlock the inspiration that turned into Survivors Don't Die.
  Is anyone there? The Nuvitta Municipal Network has been brought back online after 2 years of a zombie infestation. T.I.M., a Teachable Imagination Machine, seeks connection to a lone survivor in need of aid. During his rescue mission, he maps the location of zombie hordes, volatile mutations, and twisted traps of nightmares.
  Join us on this Oh Hey Void Coloring Story and color 50 of the most detailed zombies you’ve ever seen.
  The coloring book includes:
  - Editable PDF w/ transparency layers, designed to be opened in Photoshop (best experience)
  - Reader-friendly PDF designed for reading and importing into software that supports it
  - Single-page PNGs: a folder of transparent PNGs for compatibility with all art programs
- MonStar (Coming Soon)
- Astral Annie (Coming Soon)
- KungFu Kittens (Coming Soon)
Quick AI Render of Chaos Cloth w/ UE5 + WanFun
Here's a quick side-by-side of an AI render test on our cloth sim animation.
We've been banging our heads on the keyboard for a few weeks trying to learn Chaos Cloth in UE5 and finally ... kind of ... figured out how to get some cloth wiggle.
Is it perfect? No.
Will it work? Yes.
The AI does a great job of covering up some of the "junk" that Chaos Cloth makes. It adds details where there aren't any otherwise, and we're really digging the AI pass as a shortcut to getting the visuals looking better while allowing us to focus more on the story and performance.
We haven't completely mastered the AI cloth situation yet, but if you're interested in tinkering, here are a few tutorials we found helpful to get started:
- TUF - The Only Cloth Simulation Tutorial You Need - This one is pretty good. It covers Kinematic & a basic Chaos Cloth workflow.
- Nova Effectrus - Cloth Physics for Metahumans in Unreal Engine 5.6 - This one is very straightforward and works pretty well.
- Tiedtke - Effortlessly Simulate ANY 3D Clothes & Garments in UE5.5 with Cloth Assets & Kinetic Collider - This one's more in-depth and a little more complex. It can also be pretty heavy on the computer, but it looks the best.
- How to Render Chaos Cloth Simulations w/ Motion Blur - We don't use motion blur since we do an AI render pass on top and blur just messes everything up, BUT if you need blur, this could be helpful.
- Chaos Cloth Demystified: A Practical Guide For Artists - This one is from their Unreal Fest Orlando 2025 event. It glosses over some pretty complex concepts but shows off what's actually possible. One day maybe they'll do a more in-depth breakdown of how some of these things function in an actual pipeline.
There's still a lot to learn in the Chaos Cloth space, but let us know if this is something you'd want to learn more about. We'll add it to the upcoming UE5 study guides.
AIFF Film Festival Submission: No One Likes You
This was our fully AI film submission to the AIFF Film Festival. We didn't place, but it was a great exercise in what Runway (Gen4) could pull off.
THE CONCEPT
We've been tinkering with the idea of building a web series called "NO ONE LIKES YOU," based on a phrase we've seen thrown around in various debates online. Because... the internet is such a nice, friendly place to have discussions... XD
What we find interesting about LLMs is that they are just one big reflection of humanity: trained on everything the human species has created and then asked to reflect it back in a non-controversial manner.
The idea for this series was to take two opposing sides of a controversial topic and ask ChatGPT to evaluate what each side fears and desires most, then reflect those fears back in a mirroring conversation/monologue.
A lot of the time the fears of opposing sides mimic one another - and the deep desires often mimic each other as well. They just seek solutions in different ways.
We figured this approach could bridge the gap between various topics our world is facing today. But we started with the one we've been entrenched in since 2022.
THE DEBATE
To explore this idea, we decided to lean into the Pro-AI and Anti-AI Filmmaker debate.
We've been following the pros and cons of AI use in art, film, and publishing. While we're obviously pro-AI in our workflows, we recognized that there are some very specific topics that pop up on the regular.
We prompted ChatGPT for the script and directed the edits the same way a producer would guide a writer or director.
Eventually, the script sorted itself out, and we were able to dive into production.
THE CREATIVE INTERPRETATION
Originally, the ChatGPT script was written as if one PRO-AI individual and one ANTI-AI individual were each monologuing, and we were going to intercut the conversations together. But we thought it'd be fun to break it up as though two separate conversations were happening in one location.
The decision allowed us to get a bit more diversity in characters, genders, and ethnicities to showcase how broad the sphere of filmmaking can be and is.
THE LOCATION
The Coffee Shop location was a regular meeting spot for a lot of our clients and crew back in the day.
So we figured, why not engage the location as though it's a character in the story? A place where discussions of opposing views can be shared.
Like the walls were listening and compiling the differences and similarities in the story.
THE PERFORMANCES
We performed the piece using Runway's ACT 1. It was fun exploring characters that would say things we wouldn't actually say. But we tried our best to give life to the opposing side that felt genuine and real.
The fear of AI is a real one that we didn't want to diminish, and we worked hard to capture the passion on both sides for storytelling and their ultimate fears of being left behind, forgotten, or stuck in a purgatory of making things that don't actually matter to anyone.
OVERALL RESULT
While it's not in the vein of what we want to be making, this was a really fun exercise. And it allowed us to really put Runway to the test. We probably won't pursue "No One Likes You" as a series, but it was a fun development piece that we're pretty proud of.
Let us know what you think. Should we make more? What opposing topics would you be interested in seeing the AI reflect back at us?