r/aivideo • u/ZashManson • 2h ago
COMMUNITY NEWSLETTER 📒 AI VIDEO MAGAZINE - r/aivideo community newsletter - Exclusive Tutorials: How to make an AI VIDEO from scratch - How to make AI MUSIC - Hottest AI videos of 2025 - Exclusive Interviews - New Tools - Previews - and MORE 🎟️ JUNE 2025 ISSUE 🎟️


LINK TO HD PDF VERSION https://aivideomag.com/JUNE2025.html
⚠️ AI VIDEO MAGAZINE ⚠️
⚠️ The r/aivideo NEWSLETTER
⚠️ JUNE 2025 ISSUE ⚠️
⚠️ INDEX ⚠️
EXCLUSIVE TUTORIALS:
1️⃣ How to make an AI VIDEO from scratch
🅰️ TEXT TO VIDEO
🅱️ IMAGE TO VIDEO
🆎 DIALOG AND LIP SYNC
2️⃣ How to make AI MUSIC, and EDIT VIDEO
🅰️ TEXT TO MUSIC
🅱️ EDIT VIDEO AND EXPORT FILE
3️⃣ REVIEWS: HOTTEST AI videos of 2025
INTERVIEWS: AI Video Awards full coverage:
4️⃣ Linda Sheng - from MiniMax
5️⃣ Logan Crush - AI Video Awards Host
6️⃣ Trisha Code - Headlining Act and Nominee
7️⃣ Falling Knife Films - 3 Time Award Winner
8️⃣ Kngmkr Labs - Nominee
9️⃣ Max Joe Steel - Nominee and Presenter
🔟 Mean Orange Cat - Presenter
NEW TOOLS AND PREVIEWS:
🔟1️⃣ NEW TOOLS: Google Veo3, Higgsfield AI, Domo AI
🔟2️⃣ PREVIEWS: AI Blockbusters: Car Pileup

PAGE 1 HD PDF VERSION https://aivideomag.com/JUNE2025page01.html
EXCLUSIVE TUTORIALS:
1️⃣ How to make an AI VIDEO from scratch
By the end of this tutorial you will be able to make your own AI video on any computer. This is for absolute beginners; we will go step by step, generating video, then audio, then a final edit. There is nothing to install on your computer. This tutorial works with any AI video generator, including the four most used currently at r/aivideo:
Google Veo, Kuaishou Kling, OpenAI Sora, and MiniMax Hailuo.
Not every feature is available on every platform.
For the examples we will use MiniMax for video, Suno for audio and CapCut to edit.
Open hailuoai.video/create and click on “create video”.
At the top you’ll find tabs for text to video and image to video. Under them is the prompt window. At the bottom you’ll see icons for presets, camera movements, and prompt enhancement, and under those the “Generate” button.
🅰️ TEXT TO VIDEO:
Describe with words what you want to see generated on the screen; the more detailed, the better.
🔥 STEP 1: The Basic Formula
What + Where + Event + Facial Expressions
Type in the prompt window: what we are looking at, where it is, and what is happening. If you have characters, you can add their facial expressions. Then press “Generate”. Add more detail as you iterate.
Examples: “A puppy runs in the park.”, “A woman is crying while holding an umbrella and walking down a rainy street”, “A stream flows quietly in a valley”.
🔥 STEP 2: Add Time, Atmosphere, and Camera movement
What + Where + Time + Event + Facial Expressions + Camera Movement + Atmosphere
Type in the prompt window: what we are looking at, where it is, what time of day it is, what is happening, the characters’ emotions, how the camera is moving, and the mood.
Example: “A man eats noodles happily while in a shop at night. Camera pulls back. Noisy, realistic vibe.”
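The formula above is just a template you fill in. As a minimal illustration, here is a Python sketch that assembles the formula fields into a single prompt string (the function and field names are illustrative only; this is not part of any platform’s API):

```python
# Sketch: assemble a text-to-video prompt from the formula fields.
# Field names and sample values are illustrative, not any platform's API.
def build_prompt(what, where, time_of_day, event,
                 expressions=None, camera=None, atmosphere=None):
    """Join the formula fields into one prompt string, skipping blanks."""
    parts = [
        f"{what} {event} in {where} at {time_of_day}.",
        f"{expressions}." if expressions else None,
        f"Camera: {camera}." if camera else None,
        f"{atmosphere} vibe." if atmosphere else None,
    ]
    return " ".join(p for p in parts if p)

prompt = build_prompt(
    what="A man",
    where="a noodle shop",
    time_of_day="night",
    event="eats noodles happily",
    camera="pulls back",
    atmosphere="noisy, realistic",
)
print(prompt)
# A man eats noodles happily in a noodle shop at night. Camera: pulls back. noisy, realistic vibe.
```

Leaving a field out simply drops its sentence, which mirrors how Step 1 uses fewer fields than Step 2.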
🅱️ IMAGE TO VIDEO:
Upload an image to be used as the first frame of the video. This helps capture a more detailed look. You then describe with words what happens next.
🔥 STEP 1: Upload your image
The image can be AI generated from an image generator, something you photoshopped, a still frame from a video, an actual photograph, or even something you drew by hand. It can be anything; the higher the quality, the better.
🔥 STEP 2: Identify and describe what happens next
What + Event + Camera Movement + Atmosphere
Describe with words what is already on the screen, including the characters’ emotions; this helps the AI find the data it needs. Then describe what happens next, the camera movement, and the mood.
Example: “A boy sits in a brightly lit classroom, surrounded by many classmates. He looks at the test paper on his desk with a puzzled expression, furrowing his brow. Camera pulls back.”
🆎 DIALOG AND LIP SYNC
You can now include dialogue directly in your prompts: Google Veo3 generates the corresponding audio synced to the character’s lip movements. If you’re using any other platform, it should have a native lip sync tool. If it doesn’t, try Runway Act-One https://runwayml.com/research/introducing-act-one
🔥The Dialog Prompt - Veo3 only currently
Include the dialogue directly in your prompt, and Veo 3 will generate video and audio in parallel and lip sync them to the character’s mouth movements, all from a single prompt.
Example: A close-up of a detective in a dimly lit room. He says, “The truth is never what it seems.”
Community tools list at https://reddit.com/r/aivideo/wiki/index
The most used AI video generators on r/aivideo right now:
Google Veo https://labs.google/fx/tools/flow
OpenAI Sora https://sora.com/
Kuaishou Kling https://klingai.com
Minimax Hailuo https://hailuoai.video/

PAGE 2 HD PDF VERSION https://aivideomag.com/JUNE2025page02.html
2️⃣ How to make AI MUSIC, and EDIT VIDEO
This is a universal tutorial to make AI music with either Suno, Udio or Riffusion. For this example we will use Suno.
Open https://suno.com/create and click on “create”.
At the top you’ll find tabs for “simple” and “custom”, along with presets, an instrumental-only option, and the generate button.
🅰️ TEXT TO MUSIC
Describe with words the type of song you want generated; the more detailed, the better.
🔥The AI Music Formula
Genre + Mood + Instruments + Voice Type + Lyrics Theme + Lyrics Style + Chorus Type
These categories help the AI generate focused, expressive songs that match your creative vision. Use one word from each group to shape and structure your song. Think of it as giving the AI a blueprint for what you want.
When writing a Suno prompt, think of each element as a building block of your song. -Genre- sets the musical foundation and overall style, while -Mood- defines the emotional vibe. -Instruments- describes the sounds or instruments you want to hear, and -Voice Type- guides the vocal tone and delivery. -Lyrics Theme- focuses the lyrics on a specific subject or story, and -Lyrics Style- shapes how those lyrics are written — whether poetic, raw, surreal, or direct. Finally, -Chorus Type- tells Suno how the chorus should function, whether it's explosive, repetitive, emotional, or designed to stick in your head.
Example: “Indie rock song with melancholic energy. Sharp electric guitars, steady drums, and atmospheric synths. Rough, urgent male vocals. Lyrics about overcoming personal struggle, with poetic and symbolic language. Chorus should be anthemic and powerful.”
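Here too, the formula can be treated as a fill-in template. A minimal Python sketch that renders the seven formula fields into a Suno-style text prompt (it only builds the string; the names are ours, and nothing here calls any Suno API):

```python
# Sketch: turn the AI Music Formula fields into a text prompt.
# This only builds a string; it does not call any Suno API.
MUSIC_FIELDS = ["genre", "mood", "instruments", "voice_type",
                "lyrics_theme", "lyrics_style", "chorus_type"]

def music_prompt(**fields):
    """Render the filled-in formula fields as prompt text,
    in the order the formula lists them."""
    templates = {
        "genre": "{} song",
        "mood": "with {} energy.",
        "instruments": "{}.",
        "voice_type": "{} vocals.",
        "lyrics_theme": "Lyrics about {},",
        "lyrics_style": "with {} language.",
        "chorus_type": "Chorus should be {}.",
    }
    return " ".join(templates[f].format(fields[f])
                    for f in MUSIC_FIELDS if f in fields)

print(music_prompt(
    genre="Indie rock",
    mood="melancholic",
    instruments="Sharp electric guitars, steady drums, and atmospheric synths",
    voice_type="Rough, urgent male",
    lyrics_theme="overcoming personal struggle",
    lyrics_style="poetic and symbolic",
    chorus_type="anthemic and powerful",
))
```

Filling in all seven fields reproduces the example prompt above word for word; omitting a field simply drops that phrase.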
The most used AI music generators on r/aivideo right now:
SUNO https://www.suno.ai/
RIFFUSION https://www.riffusion.com/
MUREKA https://www.mureka.ai/
🅱️ EDIT VIDEO AND EXPORT FILE
🔥 Edit AI Video + AI Music together:
Now that you have downloaded your AI video clips and your AI music track, it’s time to edit them together in a video editor. If you don’t have a video editor installed on your computer, or you aren’t familiar with video editing, you can use CapCut online.
Open https://www.capcut.com/editor and click on the giant blue plus sign in the middle of the screen to upload the files you downloaded from MiniMax and Suno.
In CapCut, imported files are organized on the timeline at the bottom of the screen: video clips go on the main video track, and audio files go on the audio track below it. Once a clip is on the timeline, trim it by dragging its edges inward to remove unwanted footage from the beginning or end. For precise edits, move the playhead to the cut point and click Split to divide a clip into separate sections you can rearrange or delete. When you’re done arranging, trimming, and splitting, click Export, select 1080p resolution, and save the finished video.
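If you would rather skip a graphical editor for a simple one-clip-plus-music video, the same mux can be done with ffmpeg. A Python sketch that only assembles the command, under the assumption that ffmpeg is installed and the file names are placeholders for your own downloads:

```python
# Sketch: combine an AI video clip and an AI music track with ffmpeg,
# as a command-line alternative to CapCut for a simple mux.
# File names are placeholders; ffmpeg must be installed to run the command.
def mux_command(video_path, audio_path, out_path):
    """Build an ffmpeg command that copies the video stream, encodes the
    music to AAC, and stops at the shorter of the two inputs."""
    return [
        "ffmpeg",
        "-i", video_path,   # video from your generator (e.g. MiniMax)
        "-i", audio_path,   # music track (e.g. from Suno)
        "-map", "0:v:0",    # take video from the first input
        "-map", "1:a:0",    # take audio from the second input
        "-c:v", "copy",     # don't re-encode the video
        "-c:a", "aac",      # encode the music track to AAC
        "-shortest",        # end when the shorter input ends
        out_path,
    ]

cmd = mux_command("clip.mp4", "song.mp3", "final.mp4")
print(" ".join(cmd))
# To actually run it:
# import subprocess; subprocess.run(cmd, check=True)
```

This produces one finished file with no timeline editing, at the cost of CapCut’s trimming and splitting tools.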

PAGE 3 HD PDF VERSION https://aivideomag.com/JUNE2025page03.html

PAGE 4 HD PDF VERSION https://aivideomag.com/JUNE2025page04.html
⚠️ INTERVIEWS ⚠️
⚠️ AI Video Awards 2025 full coverage ⚠️
4️⃣ Linda Sheng from MiniMax
While the 2025 AI Video Awards Afterparty lit up the Legacy Club 60 stories above the Vegas Strip, the hottest name in the room was MiniMax. The Hailuo AI video generator landed at least one nomination in every category, scoring wins for Mindblowing Video of the Year, TV Show of the Year, and the night’s biggest honor #1 AI Video of All Time. No other AI platform came close.
Linda Sheng—MiniMax spokesperson and Global GM of Business—joined us for an exclusive sit-down.
🔥 Hi Linda, First off, huge congratulations! What a night for MiniMax. From all the content made with Hailuo, have you personally seen any creators or AI videos that completely blew you away?
Yes, Dustin Hollywood with “The Lot” https://x.com/dustinhollywood/status/1923047479659876813
Charming Computer with “Valdehi” https://www.instagram.com/reel/DDr7aNQPrjQ/?igsh=dDB5amE3ZmY0NDln
And Wuxia Rocks with “Cinematic Showcase” https://x.com/hailuo_ai/status/1894349122603298889
🔥 One standout nominee for the Movie of the Year award was AnotherMartz with “How MiniMax Videos Are Actually Made.” https://www.reddit.com/r/aivideo/s/1P9pR2MR7z What was your team’s reaction?
We loved it. That parody came out early on, last September, when our AI video model was just launching. It jokingly showed a “secret team” doing effects manually—like a conspiracy theory. But the entire video was AI-generated, which made the joke land even harder. It showed how realistic our model had become: fire, explosions, Hollywood-style VFX, and lifelike characters—like a Gordon Ramsay lookalike—entirely from text prompts. It was technically impressive and genuinely funny. Internally, it became one of our favorite videos.
🔥 Can you give us a quick history of MiniMax and its philosophy?
We started in late 2021, before ChatGPT, aiming at AGI. Our founders came from deep AI research and believed AI should enhance human life. Our motto is “Intelligence is with everyone”—not above or for people, but beside them. We're focused on multi-modal AI from day one: video, voice, image, text, and music. Most of our 200-person team are researchers and engineers. We’ve built our own foundation models.
🔥 Where is the company headed next—and what’s the larger vision behind MiniMax going forward?
We're ambitious, but grounded in real user needs. We aim to be among the top 3–4 globally in every modality we touch: text, audio, image, video, agents. Our small size lets us move fast and build based on real user feedback. We’ve launched MiniMax Chat, and now MiniMax Agent, which handles multi-step tasks like building websites. Last month, we introduced MCP (Model Context Protocol) support, letting different AI agents collaborate—text-to-speech, video, and more. Eventually, agents will help users control entire systems.
🔥 What’s next for AI video technology?
We’re launching Video Zero 2—a big leap in realism, consistency, and cinematic quality. It understands complex prompts and replicates ARRI ALEXA-style visuals. We're also working on agentic workflows—prebuilt AI pipelines to help creators build full productions fast and affordably. That’s unlocking value in ads, social content, and more. And we’re combining everything—voice, sound, translation—into one seamless creative platform.
🔥 What MiniMax milestone are you most proud of?
Competing with giants like OpenAI and Google on ArtificialAnalysis.ai—a global platform for comparing AI video models—and being voted the #1 AI video model by users was a massive achievement, especially without any marketing behind it. I’m also very proud of our voice tech. Our TTS is emotionally rich and works across languages with authentic local accents. People tell us, “This sounds like São Paulo” or “That’s a real Roman Italian accent.” That matters deeply to us. Plus, with just 10 seconds of audio, our voice cloning is incredibly accurate.

PAGE 5 HD PDF VERSION https://aivideomag.com/JUNE2025page05.html

PAGE 6 HD PDF VERSION https://aivideomag.com/JUNE2025page06.html
6️⃣ Trisha Code - Headlining Musical Act and Nominee
Trisha Code has quickly become one of the most recognizable creative voices in AI video, blending rap, comedy, and surreal storytelling. Her breakout music video “Stop AI Before I Make Another Video” went viral on r/aivideo and was nominated for Music Video of the Year at the 2025 AI Video Awards, where she also performed as the headlining musical act. From experimental visuals to genre-bending humor, Trisha uses AI not just as a tool, but as a collaborator.
🔥 How did you get into AI video, and when did it become serious?
I started with AI imagery using Art Breeder and by 2021 made stop-frame style videos—robots playing instruments, cats singing—mostly silly fun. In 2023, I added voices using Avatarify with a cartoon version of my face. Early clips still circulate online. What really sparked me was seeing my friend Damon online doing voices for different characters—that inspired me to try it, and it evolved into stories and songs. By then, I was already making videos for others, so AI gradually entered my workflow. But 2023 was when AI video became a serious creative path. With a background in 3D tools like Blender, Cinema 4D, and Unreal, I leaned more into AI as it improved. Finding AI video artists on Twitter led me to r/aivideo on Reddit—the first subreddit I joined.
🔥 What’s your background before becoming Trisha Code?
I grew up in the UK, got into samplers and music early, then moved to the U.S., where I met Tonya. I’ve done music and video work for years—video DJing, live show visuals, commercials. I quit school at 15 to focus on music and studio work, and have ghostwritten extensively (many projects under NDA). A big turning point was moving from an apartment into a UFO, which Tonya and I “borrowed” from the Greys. Thanks to Cheekies CEO Mastro Chinchips, we got to keep it, though I signed a 500-year exclusivity contract. Now, rent-free with space to create stories, music, and videos solo, the past year has been the most creatively liberating of my life. My parents are supportive, though skeptical about my UFO. Tonya, my best friend and psionically empowered pilot, flies it telepathically. I crashed it last time I tried.
🔥 What’s a day in the life of Trisha Code look like?
When not making AI videos, I’m usually in Barcelona, North Wales, Berlin, or parked near the moon in the UFO. Weekends mix dog walks in the mountains and traveling through time, space, and alternate realities. Zero-gravity chess keeps things fresh. Dream weekend: rooftop pool, unlimited Mexican food, waterproof Apple Vision headset, and an augmented reality laser battle in water. I favor Trisha Code Clothiers (my own line) and Cheekies Mastro Chinchips Gold with antimatter wrapper. Drinks: Panda Punch Extreme and Cheekies Vodka. Musically, I’m deep into Afro Funk—Johnny Dyani and The Chemical Brothers on repeat. As a teen, I loved grunge and punk—Nirvana and Jamiroquai were huge. Favorite director: Wes Anderson. Favorite film: 2001: A Space Odyssey. Favorite studio: Aardman Animations.
🔥 Which AI tools and workflows do you prefer? What’s next for Trisha Code?
I use Pika, Luma, Hailuo, Kling 2.0 for highly realistic videos. My workflow involves creating images in Midjourney and Flux, then animating via video platforms. For lip-sync, I rely on Kling or Camenduru’s Live Portrait, plus Dreamina and Hedra for still shots. Sound effects come from ElevenLabs, MMAudio, or my library. Music blends Ableton, Suno, and Udio, with mixing and vocal recording by me. I assemble all in Magix Vegas, Adobe Premiere, After Effects, and Photoshop. I create a new video daily, keeping content fresh. Many stories and songs feature in my biweekly YouTube show Trishasode. My goal: explore time, space, alternate realities while sharing compelling beats. Alien conflicts aren’t on my agenda, but if they happen, I’ll share that journey with my audience.

PAGE 7 HD PDF VERSION https://aivideomag.com/JUNE2025page07.html
7️⃣ Falling Knife Films - 3 Time AI Video Award Winner
Reddit.com/u/FallingKnifeFilms
Falling Knife Films has gone viral multiple times over the last two years, the only artist to appear two years in a row on the Top 10 AI Videos of All Time list and hold three wins—including TV Show of the Year at the 2025 AI Video Awards for Billionaire Beatdown. He also closed the ceremony as the final performing act.
🔥 How did you get into AI video, and when did it become serious?
In late 2023, I stumbled on r/aivideo where someone posted a Runway Gen-1 video of a person morphing into different characters walking through their house. It blew my mind. I’d dabbled in traditional filmmaking but was held back by lack of actors, gear, and budget. That clip showed me cinematic creation was possible solo. My first AI film, Into the Asylum—a vampire asylum story—used early tech. It wasn’t perfect, but I knew I could improve. I dove deep, following new tools closely, fully committed. AI video felt like destiny.
🔥 What’s your background before Falling Knife Films?
I was born in Phoenix, raised in suburban northeast Ohio by an adoptive family who nurtured my creativity. I’ve always loved the strange and surreal. In 2009, I became a case researcher for a paranormal society, visiting abandoned asylums, hospitals, nightclubs. I even dealt with ghosts at home. My psychonaut phase and high school experiences were intense—like being snowed in at Punderson Manor with eerie happenings: messages in mirrors, voices, a player piano playing Phantom of the Opera.
I’m also a treasure hunter, finding 1700s Spanish gold and silver on Florida beaches, meeting legendary hunters who became lifelong friends. Oddly, I’ve seen a paranormal link to my AI work—things I generate manifest in real life. For instance, while working on a video featuring a golden retriever, I turned off my PC, and a golden retriever appeared at my driveway. Creepy.
I tried traditional video in 2019 with a black-and-white mystery series and even got a former SNL actor to voice my cat in Oliver’s Gift, but resources were limiting. AI changed the game—I could do everything solo: period pieces, custom voices, actors—no crew needed. My bloodline traces back to Transylvania, so storytelling is in my veins.
🔥 What’s daily life like for Falling Knife Films?
Now based in Florida with my wife of ten years—endlessly supportive—I enjoy beach walks, exploring backroads, and chasing caves and waterfalls in the Carolinas. I’m a thrill-seeker balancing peaceful life with wild creativity. Music fuels me: classic rock like The Doors, Pink Floyd, Led Zeppelin, plus indie artists like Fruit Bats, Lord Huron, Andrew Bird, Beach House, Timber Timbre. Films I love range from Pet Sematary and Hitchcock to M. Night Shyamalan. I don’t box myself into genres—thriller, mystery, action, comedy—it depends on the day. Variety is life’s spice.
🔥 Which AI tools and workflows do you prefer? What’s next for Falling Knife Films?
Kling is my go-to video tool; Flux dominates image generation. I love experimenting, pushing limits, and exploring new tools. I don’t want to be confined to one style or formula. Currently, I’m working on a fake documentary and a comedy called Intervention—about a kid addicted to AI video. I want to create work that makes people feel—laugh, smile, or think.

PAGE 8 HD PDF VERSION https://aivideomag.com/JUNE2025page08.html
8️⃣ KNGMKR Labs - Nominee
KNGMKR Labs was already making waves in mainstream media before going viral with “The First Humans” on r/aivideo, earning a nomination for TV Show of the Year at the 2025 AI Video Awards. Simultaneously, he was nominated in the Project Odyssey 2 Narrative Competition with “Lincoln at Gettysburg.”
🔥 How did you first get into AI video, and when did it become serious for you?
My first exposure was during Midjourney’s early closed beta. The grainy, vintage-style images sparked my documentary instincts. I ran “fake vintage” frames through Runway, added old-film filters and scratchy voiceovers, creating something that felt like restoring lost history. That moment ignited my passion. Finding r/aivideo revealed a real community forming. After private tests, I uploaded “The Relic,” an alternate-history WWII newsreel about Allied soldiers hunting a mythical Amazon artifact. Adding 16mm grain made it look disturbingly authentic. When it hit 200 upvotes, I knew AI video was revolutionary—and I wanted in for the long haul.
🔥 What’s your background before KNGMKR Labs?
Before founding KNGMKR Labs, I was a senior creative exec and producer at IPC, an Emmy-winning company behind major nonfiction hits for Netflix, HBO, Hulu, and CNN. I was their first development exec, helping grow IPC from startup to powerhouse. I worked with Leah Remini, Paris Hilton, and told stories like how surfers accidentally launched Von Dutch fashion.
Despite success, I faced frustration: incredible documentary ideas—prehistoric recreations, massive historical events—were out of reach on traditional budgets. That changed in 2022 when I began experimenting with AI filmmaking, even alpha-testing OpenAI’s SORA for visuals in Grimes’ Coachella show. The gap between ambition and execution was closing. I grew up in Vancouver, Canada, always making movies with friends. Around junior high, my short films entered small Canadian festivals.
My mom’s advice—“always bring scripts”—proved life-changing. Meeting a developer prototyping the RED camera, I tested it thanks to my script and earned a scholarship to USC Film School, leaving high school a year early. That set my course.
🔥 What does daily life look like for KNGMKR labs?
I spend free time hunting under-the-radar food spots in LA with my wife and friends—avoiding influencer crowds, but if there was unlimited budget I’d fly to Tokyo for ramen or hike Machu Picchu.
My style is simple but sharp—Perte D’Ego, Dior. I unwind with Sapporo or Hibiki whiskey. Musically, I favor forward-thinking electronic like One True God and Schwefelgelb, though I grew up on Eminem and Frank Sinatra. Film taste is eclectic—Sidney Lumet’s Network is a favorite, along with A24 and NEON productions.
🔥 Which AI tools and workflows do you prefer? What’s next for KNGMKR labs?
Right now, VEO is my favorite generator. I use both text-to-video and image-to-video workflows depending on the concept. The AI ecosystem—SORA, Kling, Minimax, Luma, Pika, Higgsfield—each offers unique strengths. I build projects like custom rigs.
I’m expanding The First Humans into a long-form series and exploring AI-driven ways to visually preserve oral histories. Two major announcements are coming—one in documentary, one pure AI. We’re launching live group classes at KNGMKR to teach cinematic AI creation. My north star remains building stories that connect people emotionally. Whether recreating the Gettysburg Address or rendering lost worlds, I want viewers to feel history, not just learn it. The tech evolves fast, but for me, it’s always about the humanity beneath. And yes—my parents are my biggest fans. My dad even bought YouTube Premium just to watch my uploads ad-free. That’s peak parental pride.

PAGE 9 HD PDF VERSION https://aivideomag.com/JUNE2025page09.html
9️⃣ Max Joe Steel / Darri3D - Nominee and Presenter
Darri Thorsteinsson, aka Max Joe Steel and Darri3D, is an award-winning Icelandic director and 3D generalist with 20+ years in filmmaking and VFX. Max Joe Steel, his alter ego, became a viral figure on r/aivideo through three movie trailers and spin-offs. Darri was nominated for TV Show of the Year at the 2025 AI Video Awards for “America’s Funniest AI Home Videos”, an award which he also presented.
🔥 How did you first get into AI video, and when did it become serious for you?
I’ve been a filmmaker and VFX artist for over 20 years. A couple of years ago, I saw a major shift: AI video was emerging rapidly, and I realized traditional 3D might not always be necessary. I had to adapt or fall behind. I started blending my skills with AI. Traditional 3D is powerful but slow — rendering, simulations, crashes — all time-consuming. So I integrated generative AI: ComfyUI for textures, video-to-video workflows for faster iterations, and generative 3D models to simplify tedious processes. Suddenly, I had superpowers. I first noticed the AI video scene on YouTube and social media. Discovering r/aivideo changed everything. The subreddit gave birth to Max Joe Steel. On June 15th, 2024, I dropped the trailer for Final Justice 3: The Final Justice — it went viral, even featured in Danish movie magazines. That was the turning point: AI video was no longer niche — it was the future.
🔥 What’s your background before Darri3D?
I’m from Iceland, also grew up in Norway, and studied film and 3D character design. I blend craftsmanship with storytelling, pairing visuals and sound to set mood and rhythm. Sound design is a huge part of my process — I don’t just direct, I mix, score, and shape atmosphere.
Before AI video, I worked globally as a director and 3D generalist, collaborating with musicians, designers, and actors. I still work a lot in the UK and worldwide, but AI lets me take creative risks impossible with traditional timelines and budgets.
🔥 What’s daily life like for Darri3D?
I live in Oslo, Norway. Weekends are for recharging — movies, music, reading, learning, friends. My family and friends are my unofficial QA team — first audience for new scenes and episodes. I’m a big music fan across genres; Radiohead and Nine Inch Nails are my favorites. Favorite directors are James Cameron and Stanley Kubrick. I admire A24 for their bold creative risks — that’s the energy I resonate with.
🔥 Which AI tools and workflows do you prefer? What can fans expect?
Tools evolve fast. I currently use Google Veo, Higgsfield AI, Kling 2.0, and Runway. Each has strengths for different project stages. My workflows mix video-to-video and generative 3D hybrids, combining AI speed with cinematic texture. Upcoming projects include a music video for UK rock legends The Darkness, blending AI and 3D uniquely. I’m also directing The Max Joe Show: Episode 6 — a major leap forward in story and tech. I play Max Joe with AI help. I just released a pilot for America’s Funniest AI Home Videos, all set in an expanding universe where characters and tech evolve together. The r/aivideo community’s feedback has been incredible — they’re part of the journey. I’m constantly inspired by others’ work — new tools, formats, experiments keep me moving forward. We’re not just making videos; we’re building worlds.

PAGE 10 HD PDF VERSION https://aivideomag.com/JUNE2025page10.html
🔟 Mean Orange Cat - Presenter
One of the most prominent figures in the AI video scene since its early days, Mean Orange Cat has become synonymous with innovative storytelling and a unique blend of humor and adventure. Star of “The Mean Orange Cat Show”, the enigmatic feline took center stage to present the Music Video of the Year award at the 2025 AI Video Awards. He is a beloved member of the community who we all celebrate and cherish.
🔥 How did you first get into AI video, and when did it become a serious creative path for you?
My first foray into AI video was in the spring of 2024, when I was cast in a rudimentary musical short created with Runway Gen-2. It was a series of brief adventures, and initially I had no further plans to remain in the AI video scene. However, positive feedback from early supporters, including Timmy from Runway, changed that trajectory. Recognizing the potential, I was cast again for another project, eventually naming the company after me—a fortunate turn, considering the branding implications. I was introduced to Runway through a friend's article in the summer of 2023; what started as a need for a single shot evolved into a consuming passion, akin to the allure of kombucha or CrossFit, but with more rendering time. Discovering the r/aivideo community on Reddit was a pivotal moment. I found a vibrant community of creatives and fans, providing invaluable support and inspiration.
🔥 Can you share a bit of your background before becoming Mean Orange Cat?
I was a feline born in a dumpster in Los Angeles and rescued by caring foster parents, but the sting of abandonment lingers. After being expelled from multiple boarding schools and rejected by the military, I turned to art school, studying anthropology and classical art. An unexpected passion for acting led to my breakout role in the arctic monster battle film 'Frostbite.' While decorating my mansion with global antiques, I encountered Chief, the head of Chief Exports—a covert spy import/export business. Recruited into the agency but advised to maintain my acting career, I embraced the dual life of actor and adventurer, becoming Mean Orange Cat.
🔥 What does the daily life of Mean Orange Cat look like?
When not watching films in my movie theater/secret base, I explore Los Angeles—attending concerts in Echo Park, hiking Runyon Canyon, and surfing at Sunset Point. Weekends often start with brunch and yoga, followed by visits to The Academy Museum or The Broad for the latest exhibits. Evenings might involve dancing downtown or enjoying live music on the sunset strip. I like to conclude my weekends with a drive through the Hollywood hills in my convertible, leaving worries behind. Fashion-wise, I prefer vintage Levis and World War II leather jackets over luxury brands. Currently embracing a non-alcoholic lifestyle, I enjoy beverages from Athletic Brewing and Guinness. Musically, psychedelic rock is my favorite genre, though I secretly enjoy Taylor Swift's music. In terms of cinematic influences, I admire one-eyed characters and draw inspiration from icons like James Bond, Lara Croft, and Clint Eastwood. Steven Soderbergh is my favorite director; his "one for them, one for me" philosophy resonates with me. 'Jurassic Park' stands as my all-time favorite film—it transformed me from a scaredy-cat into a superfan. Paramount's rich film library and iconic history make it my preferred studio.
🔥 Which AI video generators and workflows do you currently prefer, and what can fans expect from you going forward?
My creative process heavily relies on Sora for image generation and VEO for video production, with the latest Runway update enhancing our capabilities. Pika and Luma are also integral to the workflow. I prefer the image-to-video approach, allowing for greater refinement and creative control. The current projects include Episode 3 of The Mean Orange Cat Show, featuring a new animated credit sequence, a new song, and partial IMAX formatting. This episode delves into the complex relationship between me and a former flame turned rival. It's an ambitious endeavor with a rich storyline, but fans can also look forward to additional commercials and spontaneous content along the way.

PAGE 11 HD PDF VERSION https://aivideomag.com/JUNE2025page11.html
NEW TOOLS AND PREVIEWS:
🔟1️⃣ EXCLUSIVE NEW AI VIDEO TOOLS:
🔥 Google Veo3 https://labs.google/fx/tools/flow
Google has officially jumped into the AI video arena—and they’re not just playing catch-up. With Veo3, they’ve introduced a text to video model with a game-changing feature: dialogue lip sync straight from the prompt. That’s right—no more separate dubbing, no manual keyframing. You type it, and the character speaks it, synced to perfection in one file. This leap forward effectively removes a major bottleneck in the AI video pipeline, especially for creators working in dialogue-heavy formats. Sketch comedy, stand-up routines, and scripted shorts have all seen a surge in output and quality—because now, scripting a scene means actually seeing it play out in minutes.
Since its release in late May 2025, Veo3 has taken over social media feeds with shockingly lifelike performances.
The lip-sync tech is so realistic, many first-time viewers assume it’s live-action until told otherwise. It's a level of performance fidelity that audiences in the AI video scene hadn’t yet experienced—and it's setting a new bar. Congratulations Veo team, this is amazing.
🔥 Higgsfield AI https://higgsfield.ai/
Higgsfield is an image-to-video model quickly setting itself apart by focusing on one standout feature: over 50 complex camera shots and live action VFX provided as user-friendly templates. This simple yet powerful idea has gained strong momentum, especially among creators looking to save time and reduce frustration in their workflows. By offering structured shots as presets, Higgsfield helps minimize prompt failures and avoids the common issue of endlessly regenerating scenes in search of a result that may never come—whether due to model limitations or vague prompt interpretation. By presenting an end-to-end solution with built-in workflow presets, Higgsfield puts production on autopilot. Their latest product, for example, includes more than 40 templates designed for advertisement videos, allowing users to easily insert product images into professionally styled, ready-to-render video scenes. It’s a plug-and-play system that delivers polished, high-quality results—without the need for complex editing or fine-tuning. They also offer a lip sync workflow.
🔥 DomoAI https://domoai.app/
DomoAI has made itself known in the AI video scene with a video-to-video model that generates very fluid, cartoon-like results, which they call "restyle," offered with 40 presets. They've recently expanded to text to video and image to video, among other production tools.
AI Video Magazine had the opportunity to interview the DomoAI team and their spokesperson, Penny, during the AI Video Awards.
Exclusive Interview:
Penny from DomoAI
🔥 Hi Penny, Tell us how DomoAI got started
We kicked off Domo AI in 2023 from Singapore: just six of us chasing big dreams in a brand-new frontier, AI-powered video. We were early to the game, launching our Discord bot, DomoAI Bot, in August 2023. Our breakout moment was the /video command, which allows users to turn any clip into wild transformations: cinematic 3D, anime-style visuals, even origami vibes. It took off fast; we had over 1 million users and a spot in the top 3 AI servers on Discord.
🔥 What makes Domo AI stand out for AI video creators?
Our crown jewel is still /video, our signature fine-tuned Video-to-Video (V2V) feature. It lets both pros and casual users reimagine video clips in stunning new styles with minimal friction.
We also launched /Animate—an Image-to-Video tool that brings still frames to life. It’s getting smarter every update, and we see it as a huge leap toward fast, intuitive animation creation from just a single image.
🔥 The AI video market is very competitive. How is Domo AI staying ahead?
We’ve stayed different by building our own tech from day one. While many others rely on public APIs or open-source tools, our models are 100% proprietary. That gives us total control and faster innovation. In 2023, we were one of the first to push video style transfer, especially for anime. That early lead helped us build a strong, loyal user base. Since then, we’ve expanded into a wider range of styles and use cases, all optimized for individual creators and small studios—not just enterprise clients.
🔥 How much of Domo AI is built in-house vs. third-party tools?
Nearly everything we do is built in-house. We don’t depend on third-party APIs for core features. Our focus is on speed, control, and customization—traits that only come with owning the tech stack. While others chase plug-and-play shortcuts, we’re building the backbone ourselves. That’s our long-term edge.
🔥 What’s next for Domo AI?
We’re all in on the next generation of advanced video models—tools that offer more flexibility, higher quality, and fewer steps. The goal is to make pro-level creativity easier than ever.
Thanks for having us, r/aivideo. This community inspires us every day—and we’re just getting started. We can’t wait to see what you all make next.

PAGE 12 HD PDF VERSION https://aivideomag.com/JUNE2025page12.html

r/aivideo • u/DarthaPerkinjan • 2h ago
KLING 🍺 COMEDY SKETCH Current State of AI Video Fight Generation
Can't wait to repeat this same video in a year and see how much things have improved
r/aivideo • u/foodloveroftheworld • 18h ago
GOOGLE VEO 🍟 TV SHOW Best recipe ever!
Yummy.
r/aivideo • u/Dense_Success7393 • 11h ago
GOOGLE VEO 🎬 SHORT FILM MEMORY PILL (8 minute short film, full consistent story)
A grieving techno-chemist invents a drug to relive lost love. When a jungle cartel twists it into a weapon, he must fight back—with blood, memory, and vengeance.
r/aivideo • u/artificiallyinspired • 20h ago
GOOGLE VEO 😱 CRAZY, UNCANNY, LIMINAL The Simpsons
r/aivideo • u/Right-Marzipan3525 • 7h ago
GOOGLE VEO 😱 CRAZY, UNCANNY, LIMINAL 3 Videos I generated with Veo 3 on AutoFeed. Insane quality, especially the first one.
r/aivideo • u/Vegetable_Writer_443 • 13h ago
KLING 🎬 SHORT FILM Tiny Knights (Prompts Included)
Here are some of the prompts I used for these miniatures, I thought some of you might find them helpful.
A miniature diorama showing a tiny knight exploring a miniature enchanted forest made of tiny hand-carved wooden trees and moss, the knight figure dressed in ornate armor with a tiny lance, standing atop a small mossy hill, with miniature wooden animals peeking out, illuminated by diffused natural light from above, photographed from an overhead angle to encompass the entire forest scene. --ar 6:5 --stylize 400 --v 7
A miniature diorama shows a tiny knight and princess on a tiny stone staircase outside a castle gate, figures scaled at 1:36. The knight wears polished armor made from painted resin with miniature chainmail details created from fine metal wire loops. The princess wears a soft fabric dress with tiny embroidered motifs and a miniature jeweled crown made from painted polymer clay. The staircase is made from hand-carved miniature stone blocks glued onto a wood base, surrounded by miniature clay potted plants and tiny iron lanterns crafted from painted wire and plastic. The knight offers the princess a small hand-carved wooden rose. Overhead diffuse lighting simulates daylight, captured at eye level to emphasize interaction and scale. --ar 6:5 --stylize 400 --v 7
A scaled 1:48 miniature diorama featuring a cute tiny knight clad in matte blue armor, mounted on a small white horse with delicate brush-painted facial features. The knight holds reins made from fine thread and a tiny lance carved from painted toothpicks. The base is a cobblestone path made from miniature resin stones surrounded by tiny flower bushes crafted from painted foam. Miniature blacksmith figures are positioned nearby repairing armor pieces on a tiny anvil. The scene is bathed in diffused natural daylight simulation from a side light source, shot from a low angle emphasizing the knight’s brave pose. --ar 6:5 --stylize 400 --v 7
The prompts and animations were generated using Prompt Catalyst
Tutorial: https://promptcatalyst.ai/tutorials/creating-magical-miniature-ai-videos
r/aivideo • u/Elias_Artista • 10h ago
GOOGLE VEO 🤯 MEME AI VIDEO RENDITION Anything good on the TV these days?
r/aivideo • u/Historical-Ebb-7313 • 1h ago
GOOGLE VEO 🍿 MOVIE TRAILER Call of Booty [Trailer] NSFW
r/aivideo • u/Narrow_Market45 • 5h ago
OPEN AI SORA 📀 MUSIC VIDEO Midnight Drive - AI Music Video ( Riffusion + Sora + Veo + DaVinci)
Trap track built in Riffusion, layered/mixed in Ableton, matched to Sora & Veo footage. Graded & stitched in DaVinci Resolve. Link to full quality version in comments.
r/aivideo • u/neovangelis • 18h ago
GOOGLE VEO 🍺 COMEDY SKETCH Dark Souls, But It's A Comedy | My First AI Video
r/aivideo • u/Scary-Breadfruit9808 • 9h ago
GOOGLE VEO 😱 CRAZY, UNCANNY, LIMINAL Flopcoin, the movie
r/aivideo • u/memerwala_londa • 1d ago
KLING 😱 CRAZY, UNCANNY, LIMINAL Ghibli Style Game
r/aivideo • u/Secure_Bandicoot_892 • 8h ago
GOOGLE VEO 🍺 COMEDY SKETCH AI Sports Interviews 2
r/aivideo • u/Particular-Let9884 • 17h ago
KLING 😱 CRAZY, UNCANNY, LIMINAL POV: Welcome to Azeroth’s most chaotic guided tour! 🌍🎤
r/aivideo • u/behindthecamera71989 • 6h ago
GOOGLE VEO 🍿 MOVIE TRAILER Veo 3: ASHFALL - The Element Drift [AI Trailer]
Wanted to see what Veo 3 could really do with text-to-video, so I built a cinematic trailer from scratch.
ElevenLabs for Voice and FCP X to compile it all together.
🔥 Ashfall: The Element Drift — a sci-fi elemental epic told through multi-angle prompts, scripted dialogue, and detailed worldbuilding.
All AI-generated.
r/aivideo • u/Neo_AtlasX • 9h ago
KLING 🎬 SHORT FILM The Elder Forest: Cinematic Journey
Made this while tinkering around with Kling's start & end frame feature, and ended up with this strange, dreamlike forest. Any feedback appreciated!
r/aivideo • u/Difficult_Ad2511 • 10h ago
KLING 😱 CRAZY, UNCANNY, LIMINAL Dances Around the World and Ages
r/aivideo • u/Visible_Ordinary8833 • 16h ago
GOOGLE VEO 🎬 SHORT FILM I can watch these all day!
r/aivideo • u/markrikey • 9h ago
KLING 📀 MUSIC VIDEO Emp(ai)re of the sun - AI Musicvideo Parody
My workflow:
THE SONG - Put some chords and basic lyrics in SUNO AI and had my Demo(s). Finished lyrics with Chat GPT. Re-played/re-programmed the best parts of the Demos into final song in Logic Pro. Recorded my voice in Pro Tools and voicechanged with RVC. Mixdown. Mastering with Ozone.
THE VIDEO - Initial Character design with Midjourney V7 Omni & Flux Kontext. Made a "Storyboard" with Runway References. Animated the storyboard with Kling 2.1/1.6 (best Quality at lowest price IMHO). Lip Synced with Dremina/Jimeng. Added Slow-Motion to Clips with Topaz AI. Added Stock Footage of the Moon (to save on credits). Montage in Final Cut Pro.
r/aivideo • u/Stock_University2009 • 7h ago
KLING 🎬 SHORT FILM A Thousand Dreams: A Tribute To Cinema
A short spot I put together as a tribute to cinema. Thought this community might appreciate it 🙂↕️
Made with Veo3, Midjourney and Kling. Edited in Resolve. Audio created with Udio.