30 Seconds into the Future: Next gen video production
Feb 12, 2025


The film and television industry is on the cusp of a technological revolution. Over the next three years, AI-driven tools and virtual production techniques are poised to transform how content is created from script to screen. Cutting-edge experiments underway today – from generative AI video platforms (OpenAI’s Sora, Runway’s Gen-2, etc.) to real-time virtual production stages – hint at a future where storytelling becomes faster, more collaborative, and more accessible. This forecast explores how emerging technologies like SamurAI, Runway, OpenAI Sora, and other AI-native video generators could reshape every step of production: screenwriting, pre-visualization, casting, set design, directing, VFX/post, and even distribution. We also examine implications for budgets, timelines, and creative control, highlighting current pioneers and what these changes mean for independent creators versus traditional studios.
AI-Native Video Generation Transforms Content Creation
AI-driven video generation tools can produce short “films” from just a text prompt, featuring realistic characters and environments. OpenAI’s newly launched Sora model, for example, can generate a 5–20 second clip in under a minute based on a simple description (aitopics.org). In one demo, typing “two people in a living room in the mountains” produced a convincing mountain backdrop and cozy interior – all synthesized by AI in 45 seconds (aitopics.org). While the “actors” in these auto-generated videos still show telltale artifacts (e.g. distorted hands; aitopics.org), the technology is improving rapidly. Runway’s Gen-2 text-to-video system similarly boasts: “If you can say it, now you can see it,” promising “no lights. No camera. All action” when turning a prompt into moving imagery (designboom.com, runwayml.com). These AI-native platforms “realistically and consistently synthesize new videos…without filming anything at all,” essentially letting creators “film” scenes via a keyboard (runwayml.com).
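To make the workflow concrete, the sketch below assembles a request payload a creator might send to a text-to-video service. Everything here is an illustrative assumption – the endpoint shape, model id, field names, and the 20-second cap are placeholders, not Runway’s or OpenAI’s actual API:

```python
# Sketch: building a request for a hypothetical text-to-video service.
# All field names, the model id, and the duration cap are made up for
# illustration -- consult the real provider's API docs for actual fields.

def build_t2v_request(prompt, duration_s=10, seed=None):
    """Assemble a JSON-style payload for a hypothetical t2v endpoint."""
    if not 1 <= duration_s <= 20:          # today's clips top out around 20 s
        raise ValueError("duration must be 1-20 seconds")
    payload = {
        "model": "example-t2v-model",      # hypothetical model id
        "prompt": prompt,
        "duration_seconds": duration_s,
        "resolution": "1280x720",
    }
    if seed is not None:
        payload["seed"] = seed             # fixed seed -> reproducible takes
    return payload

req = build_t2v_request("two people in a living room in the mountains",
                        duration_s=15)
print(req["duration_seconds"])  # 15
```

A fixed seed is worth exposing even in a sketch: it lets a director re-generate the same “take” after tweaking only the prompt.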
Early uses of generative video are focused on pre-production and concept work. Studios and ad agencies are already using tools like Sora to “produce film and advertising concepts and pitches,” according to OpenAI (theguardian.com). A UK digital artist noted that Sora “expanded opportunities for younger creatives” and is being used to storyboard ideas for clients (theguardian.com). In advertising, major brands have begun experimenting – Coca-Cola even released an entirely AI-generated Christmas ad in 2024 (theguardian.com).
The appeal is clear: text-to-video AI offers a way to visualize ideas faster and cheaper than traditional shoots. Industry observers predict a “tectonic disruption” as this tech matures (theguardian.com). Tyler Perry, upon seeing early Sora previews, was so struck by the realistic results that he paused a planned $800M studio expansion, realizing “I may not need to build [new] sets…I can sit in an office and do this with a computer”, which he found “shocking” (aitopics.org). By 2028, these tools could handle longer-form content; we may see the first short films or entire scenes generated largely by AI, with human creators guiding the process as “directors” of AI. Such a shift could dramatically cut certain production costs (imagine generating a crowd scene or exotic location without travel or construction) while also raising new questions around visual quality, originality and copyright. (Notably, concerns over AI training data and deepfakes are growing – OpenAI has temporarily restricted Sora’s ability to depict real people as it works to “address…misuse” of likenesses (apnews.com), and copyright lawsuits over AI-generated content are underway (theguardian.com).)
Still, with Big Tech and startups alike racing forward (Google and Meta have unveiled their own text-to-video research, and China’s Kuaishou has a model called Kling; theguardian.com), generative video is on track to become a mainstream creative tool. In the next three years we can expect AI video quality to rapidly improve (fixing glitches like hands) and clip lengths to extend, making AI a ubiquitous collaborator in content creation – from previz animations to entire indie shorts.
Virtual Production and Real-Time Filmmaking Go Mainstream
Virtual production using LED “volume” stages allows filmmakers to shoot actors against dynamic digital backdrops rendered in real time. In this setup, towering LED walls (and sometimes LED ceilings) display 3D environments that move in sync with the camera, surrounding actors with an immersive, photo-real setting. This technique – exemplified by Industrial Light & Magic’s StageCraft volume used on The Mandalorian – is replacing green screens on many high-end productions. By the end of 2023 there were over 200 LED in-camera VFX stages in operation worldwide, and that number is projected to grow around 18% annually through 2028 as more studios and even mid-budget projects adopt the tech (futuresource-consulting.com). Far from a passing fad, virtual production has proven its value: it “radically reconfigures the production pipeline, bringing the VFX department much closer to pre-production and on set,” notes one industry analyst (futuresource-consulting.com). In practice, this shrinks timelines – visual effects that used to be done in post (months after filming) can now be finalized during the shoot, and directors see the final background in-camera instead of imagining a green void (futuresource-consulting.com).
The benefits of LED volumes are significant for creative quality and efficiency. Actors can react to actual scenery (e.g. an alien sunset or a bustling city) projected around them, making performances more natural than acting against blank green walls (futuresource-consulting.com). Lighting from the LED backdrop is accurate and dynamic – if the virtual sun is setting, the warm glow actually reflects on the actors in real time. This avoids the dreaded green “spill” lighting issues and heavy color correction later (futuresource-consulting.com). Filmmakers also gain unlimited “golden hour”: for instance, a sunset scene can be shot over many hours or days with the exact same lighting, since the digital sky can be paused or rotated as needed (futuresource-consulting.com).
Logistically, virtual sets cut down location expenses and set construction – you don’t need to build a full street or travel to a mountaintop for one scene if the LED wall can display it. Producers can spin up multiple locations on a single soundstage, even doing a 180° background turn in seconds to get reverse angles (futuresource-consulting.com). By 2025–2027, as more rental LED stages and freelancers skilled in these tools become available, virtual production will no longer be limited to Disney or Lucasfilm-sized projects. Even regional studios and indie filmmakers are starting to access smaller LED volumes (some rental stages cost only $5K–$10K per day, a cost that can be offset by saved post-production and location fees) (filmlocal.com). Major camera and software companies are also supporting this trend – e.g. Unreal Engine (Epic Games) and Unity are continually improving real-time rendering tools specifically for filmmakers, and new entrants offer pre-configured virtual set packages.
We’ll see virtual production pipelines become standard, with directors planning shots in game engines during pre-vis, and cinematographers working hand-in-hand with Unreal Engine artists on set. Notably, this merges roles: the VFX, art department, and camera crew collaborate from day one, shifting some of the creative iteration to pre-production. Overall, virtual production promises faster turnaround and more control over filmmaking’s variables. A potential drawback is high upfront setup cost, but as LED walls become more common and demand for giant “mega-volumes” gives way to many smaller stages, the average cost is expected to gradually drop (futuresource-consulting.com).
AI in Pre-Production: Smarter Writing, Previs and Design
Screenwriting and development are being augmented by AI “co-pilots.” While AI won’t be winning Best Original Screenplay just yet, writers have begun using tools like GPT-4 to brainstorm plots, generate character backstories, or even draft scenes in screenplay format. In the near future, we anticipate specialized scriptwriting AIs (fine-tuned on screenplay structure) to help create first-draft dialogue or alternate scene options on the fly. This could greatly speed up the iteration process – for example, a showrunner could ask an AI to “generate five ways our finale could end, with twists,” then refine those ideas manually. Some startups (e.g. Mélyès AI and others) are already marketing AI story development software to filmmakers.
The Writers Guild of America has acknowledged these tools, outlining in 2023 that writers can use AI as long as writing credits go to humans – essentially treating AI as just another tool in the writer’s room. In the next three years, expect AI to become a trusted assistant for screenwriters: suggesting improvements in dialogue, checking script consistency, or generating quick synopsis and pitch materials. Creators must remain cautious to avoid overly formulaic, AI-suggested tropes, but as one filmmaker noted, “AI isn’t here to replace filmmakers but could be immensely beneficial for tasks like storyboarding and pre-visualization, speeding up tedious processes and letting us focus more on storytelling and creativity” (reddit.com).
In other words, mundane groundwork can be offloaded to GPT-style assistants, freeing human writers for the nuanced creative decisions.
Pre-visualization and storyboarding have perhaps one of the most immediate boosts from AI. Traditionally, turning a script into storyboards or concept art is labor-intensive, requiring artists to hand-draw frames or render 3D animatics. Now, tools like Storyboarder.ai and Midjourney (with custom model fine-tuning) let filmmakers quickly generate storyboard images from text descriptions of a scene. For example, given a scene description (“Exterior – futuristic city street at night – hero faces down a robot army”), an AI image generator can output a panel in seconds. This rapid visualization helps directors and cinematographers experiment with camera angles, lighting, and composition long before actual filming (story-boards.ai).
Some platforms even animate these storyboards: one upgraded system can take static storyboard frames and transform them into dynamic animatics using AI video – essentially “sketches” of the motion in a scene (storyboarder.ai). By 2025, it may become common to have AI-driven previz reels, where a director types the rough action and “watches” a draft version of the scene.
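At its core, this previz workflow is prompt assembly: structured scene metadata is flattened into a text prompt for an image generator. A minimal sketch, where the `scene_to_prompt` helper and the prompt style are made-up illustrations rather than any real tool’s required format:

```python
# Sketch: turning screenplay-style scene info into a generation prompt.
# The helper name, field layout, and default style string are all
# illustrative assumptions, not a real storyboarding tool's API.

def scene_to_prompt(slugline, action, style="cinematic storyboard, high contrast"):
    """Compose an image-generation prompt from a slugline and action line."""
    # Sluglines follow the INT./EXT. - LOCATION - TIME screenplay convention
    setting = slugline.replace("EXT.", "exterior").replace("INT.", "interior")
    return f"{setting.strip().lower()}, {action.strip().lower()}, {style}"

prompt = scene_to_prompt(
    "EXT. - FUTURISTIC CITY STREET - NIGHT",
    "Hero faces down a robot army",
)
print(prompt)
```

Keeping the style suffix in one place means an art director can restyle every panel of a previz reel by changing a single string.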
Real-time game engines also play a role here: virtual scouting tools (like Unreal Engine’s VR Location Scout) allow creators to explore digital sets and plan shots in a game-like sandbox. AI comes into play by populating these previs worlds with auto-generated extras, vehicles, and set dressing to simulate a living scene.
Art departments are embracing generative design too – concept artists use tools such as DALL·E or Stable Diffusion to generate hundreds of set design ideas, costumes, and props from text prompts or sketches, then refine the best concepts by hand. This speeds up creative iteration and opens the door for more visually daring ideas, since an AI can quickly visualize wild concepts that an artist might not have attempted under tight deadlines.
As an example, Marvel’s art teams reportedly used generative models to explore variations of the psychedelic title sequence in Doctor Strange, and independent creators have used Midjourney to design everything from fantasy landscapes to spaceship interiors as starting points. We can expect AI-previsualization to become a standard step: before committing budget to building a set or prosthetic creature, filmmakers will have seen a high-quality AI mockup of it in context. This lowers risk and encourages experimental storytelling, as even indie creators can see their imagination before investing real dollars.
AI and Automation in Production (Casting, Acting and Directing)
The production phase – casting, shooting, directing – is also evolving with AI assistance. Casting might seem like a human judgment domain, but AI is starting to play a role in both the search and the performance aspects of casting. On one hand, casting directors can use AI-powered tools to sift through thousands of audition tapes or actor reels, using computer vision and voice analysis to flag candidates that match a role’s requirements (e.g., finding actors with a certain look, tone, or acting style). This doesn’t replace the nuanced eye of a casting director, but it can winnow down options faster.
Over the next few years, we might see casting platforms offer an AI feature: “generate a composite character” based on a script description, which then suggests real actors or even outputs an AI-generated face as an ideal match. In fact, AI-generated virtual actors are an emerging concept. Companies are creating photorealistic digital humans (using technologies like Unreal Engine’s MetaHuman framework) that can perform onscreen. These digital actors can be puppeteered by motion capture or animated with AI.
We’ve already seen early forays: the sci-fi film “b” (scheduled for 2025) features an AI robot as an actor, and projects in China have showcased entirely virtual TV hosts. In the short term, such virtual actors will mainly serve as stunt doubles, background characters, or de-aged versions of real actors. For instance, Lucasfilm resurrected a young Luke Skywalker in The Mandalorian and The Book of Boba Fett – and after a fan’s AI deepfake improved on the official VFX, Disney hired that artist, resulting in a far more realistic digital Luke.
By 2028, it’s plausible that a major film or series will introduce a fully AI-generated supporting character (with a lifelike face and voice) that audiences accept as part of the cast. This blurs the line of casting – is the “actor” the AI or the person who designed the AI character? Unions like SAG-AFTRA are already negotiating rules for “digital replicas” of performers sagaftra.org to ensure consent and compensation when an actor’s likeness is cloned. Those rules will be crucial as studios begin scanning actors to use their digital double for certain shots (imagine an action star licensing their 3D likeness so that an AI can generate some of their minor scenes or dangerous stunts, under their approval).
Voice casting is similarly being disrupted: AI voice cloning can mimic a famous actor’s voice to dub them into other languages or have them narrate without recording. In 2022, James Earl Jones officially allowed an AI model to replicate his Darth Vader voice for future Star Wars projects, essentially casting an AI as his voice stand-in. Over the next few years, using AI voices for dubbing or for minor characters (with proper permissions) will likely become routine, streamlining the casting of multilingual productions and animation.
On set, directors and cinematographers are beginning to collaborate with AI in real time. One area is intelligent camera systems: companies are incorporating machine learning into cameras and rigs for features like auto-tracking, focus assist, and even shot suggestion. Drones and robot cameras can follow complex action guided by AI vision (for example, automatically tracking an actor’s face through a crowd with predictive algorithms).
We see early versions in sports broadcasting and live events; by 2025, film sets will start using “smart” camera dollies that can repeat precise moves or adjust framing if an actor misses their mark slightly. SamurAI, for instance, is an open-source tool that leverages Meta’s Segment-Anything model to perform real-time object tracking in footage (thevfxmedia.com). In a film context, such AI tracking could be used to keep a moving subject in frame or later to attach CG effects to an actor without the usual markers. This technology delivered “remarkable improvements in zero-shot object tracking” in tests, achieving >7% accuracy gains on benchmarks (thevfxmedia.com), meaning it can track things in video without any manual setup – a big boon for VFX work.
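To make the zero-shot idea concrete, here is a toy sketch of the underlying concept: propagate a subject’s bounding box frame to frame by greedy overlap (IoU), with no markers and no per-shot training. It illustrates the general technique only, not SamurAI’s actual algorithm:

```python
# Toy zero-shot tracking sketch: given per-frame candidate detections,
# greedily chain the box that best overlaps the previous frame's box.
# Purely illustrative -- not SamurAI's real method.

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def track(first_box, detections_per_frame, min_iou=0.3):
    """Chain the best-overlapping detection frame by frame."""
    path, current = [first_box], first_box
    for dets in detections_per_frame:
        best = max(dets, key=lambda d: iou(current, d), default=None)
        if best is None or iou(current, best) < min_iou:
            break                  # lost the subject -- hand back to an artist
        path.append(best)
        current = best
    return path

frames = [[(12, 10, 52, 50), (200, 10, 240, 50)],   # subject drifts right
          [(24, 10, 64, 50)]]
print(track((10, 10, 50, 50), frames))
```

The `min_iou` cutoff is where a production tool would flag a lost track for human review instead of silently attaching effects to the wrong object.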
We anticipate virtual production stages will integrate these AI trackers to sync virtual elements with actors sans the traditional motion-capture suits. Indeed, tools like Wonder Dynamics’ Wonder Studio (recently acquired and rebranded by Autodesk) already allow filmmakers to replace an actor with a CG character automatically, without mocap – the AI analyzes the actor’s performance in a shot and animates a digital character to do the same (invadeai.com). This platform “automates up to 90% of the VFX process” for inserting CG actors, handling animation, lighting, and compositing of the character into the live scene (invadeai.com). By offloading the technical heavy lifting to AI, a director can see a rough composite of, say, an alien character in place of a human stand-in on the same day of shooting. The remaining 10% is where human VFX artists fine-tune and add creative polish. The net effect is faster and cheaper VFX – Wonder Dynamics enabled one TV studio to create 134 creature shots in 6 weeks for Superman & Lois, a pace that would be impossible via manual methods (awn.com).
Directors can also leverage AI to coordinate complex scenes. Imagine a battle scene with hundreds of digital soldiers – an AI system can drive those background extras (ensuring random but realistic movements) while the director focuses on the leads. Even for real crowds, AI analysis can help: computer vision can flag if an extra is out of costume or anachronistic, acting like a continuity supervisor. Real-time feedback and editing is another emerging capability – systems are being developed that analyze camera feeds on set and can suggest alternate camera angles or edits based on cinematic databases (e.g., comparing how similar scenes in classics were shot). While still experimental, such tools might advise a director that a close-up shot is under-lit compared to the intended mood, or even generate a quick re-edit of a scene during production to see if they got enough coverage. By 2028, the notion of an “AI assistant director” might be semi-formalized: not making creative decisions, but always present as a background service monitoring technical details and continuity, so the crew catches issues in the moment rather than in the editing room.
Post-Production Revolution: AI in Editing and VFX
It’s in post-production that AI has already planted deep roots, and the coming years will solidify those gains. Editing is becoming smarter and more automated. For example, Adobe’s AI-powered features can analyze raw footage and suggest selects (finding the best take where actors didn’t flub lines, identifying emotive expressions, etc.), drastically reducing the first assembly time. There are experimental tools that can auto-generate a rough cut given a script – matching lines of dialogue with the best takes and camera angles based on learned editing patterns. While a human editor will always finesse the rhythm and storytelling, these assistants can handle the grunt work of logging and sorting footage. We’re also seeing text-based editing: apps where an editor edits the transcript and the software automatically cuts the video to match (the app Descript does this for simple projects already). By 2025–2026, expect major NLE (non-linear editing) platforms like Avid or Premiere to incorporate AI that can recommend edits, flag pacing issues, or even create quick trailers and highlight reels from a finished film automatically.
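Text-based editing boils down to mapping the transcript segments an editor keeps back to timecode ranges in the footage. A minimal sketch, with illustrative data shapes (not Descript’s or any NLE’s real format):

```python
# Sketch of text-based editing: the editor keeps some transcript segments,
# and the tool emits (start, end) trim ranges for the cut, merging segments
# that are contiguous in the source so the cut stays smooth.
# Data shapes here are illustrative only.

def cuts_from_transcript(segments, kept_ids):
    """Map kept transcript segments to merged (start, end) trim ranges."""
    kept = [s for s in segments if s["id"] in kept_ids]
    ranges = []
    for seg in kept:
        if ranges and abs(ranges[-1][1] - seg["start"]) < 0.01:
            ranges[-1] = (ranges[-1][0], seg["end"])   # extend previous range
        else:
            ranges.append((seg["start"], seg["end"]))
    return ranges

transcript = [
    {"id": 1, "start": 0.0, "end": 2.5, "text": "Take one, flubbed line"},
    {"id": 2, "start": 2.5, "end": 5.0, "text": "Take two, clean read"},
    {"id": 3, "start": 5.0, "end": 8.0, "text": "Continues cleanly"},
]
print(cuts_from_transcript(transcript, kept_ids={2, 3}))  # [(2.5, 8.0)]
```

Merging contiguous segments matters: without it, every kept sentence would become its own cut point and the assembly would be full of needless splices.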
The VFX and post-processing pipeline is arguably being upended (in a good way) by AI. One striking example: an editor on The Late Show with Stephen Colbert used Runway’s AI tools to accomplish a complex rotoscoping task (cutting out an object frame-by-frame) in 3 minutes – a task that normally took him two days of manual work (amplifypartners.com). After seeing an online demo, he licensed the software and quietly started using it, shocking his colleagues when he’d return with finished shots in 20 minutes that used to require entire afternoons (amplifypartners.com). Now multiply that productivity boost across the many tedious chores in post: masking, tracking, paint-outs, wire removal, crowd duplication, explosion simulations – all are getting AI assists. SamurAI (mentioned earlier) is one such assist, providing high-accuracy object tracking that can eliminate hours of manual keyframing for VFX artists (thevfxmedia.com). AI upscaling and restoration tools are also routinely used now: they can take grainy footage or lower-resolution shots and enhance details to near 4K quality. This is invaluable for documentary and archival projects (e.g., AI-uprezzing old 1960s footage for a modern film). A similarly named SamurAI project (from John Daro) in fact specializes in multi-step AI restoration of low-res video to UHD, used to clean up footage that would previously be unusable.
Perhaps the most visible AI impact for audiences is in digital face and voice manipulation. We’ve seen mainstream films use deepfake-like techniques to de-age actors or even resurrect them (as with Peter Cushing in Rogue One and a young Carrie Fisher cameo, which were early attempts using CGI). Today’s AI makes this more accessible: a skilled artist with the right model can convincingly de-age an actor’s face at a fraction of the traditional VFX cost. We will likely see routine de-aging/enhancement in post – e.g., touching up actors’ faces to maintain continuity or subtly altering an actor’s expression if the director wants a different emotional tone (yes, AI can now “paint” a slight smile or frown on a face in motion, within limits). Respeecher and similar AI voice tools can modulate dialogue – for instance, adjusting an actor’s line delivery in post by blending in AI to change the tone or even language, all while sounding like the actor. By 2028, a director might have the freedom to direct some performance changes after filming: “Let’s have our actor’s voice sound more tired in that scene” – an AI voice tweak does it without ADR reshoots.
Another breakthrough is AI-assisted dubbing and localization. Startups already offer AI that can translate dialogue and make the actor’s on-screen lip movements match the new language, by subtly warping the mouth in each frame. This kind of automated lip-sync for dubbing could make international releases far smoother, and even allow each viewer to choose their preferred language while watching, with the actors magically speaking it.
In color grading, AI reference matching lets colorists apply the look of one film to another automatically, or ensure that scenes shot on different days have perfectly consistent color/lighting by analyzing and correcting any deviation. Generative fill (as seen in Photoshop’s recent updates) is making its way to video: need to remove a boom mic or a crew member accidentally caught in frame? An AI can paint them out seamlessly across the moving shot, which used to be painstaking manual work.
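Reference-based color matching can be approximated with simple per-channel statistics: shift a shot so each channel’s mean and spread match the reference. The toy function below shows that idea; real grading AIs are far more sophisticated, and this helper is an illustration only:

```python
# Sketch of reference-based color matching: remap one shot's channel so
# its mean and standard deviation equal the reference shot's -- the simple
# statistics behind "make day two match day one". Toy illustration only.

def match_channel(values, ref_values):
    """Remap values so their mean/std equal the reference channel's."""
    mean = sum(values) / len(values)
    ref_mean = sum(ref_values) / len(ref_values)
    std = (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5 or 1.0
    ref_std = (sum((v - ref_mean) ** 2 for v in ref_values) / len(ref_values)) ** 0.5
    return [ref_mean + (v - mean) * ref_std / std for v in values]

day1 = [100, 120, 140]        # slightly dark take from a reshoot day
day2 = [130, 150, 170]        # reference look from the original day
print(match_channel(day1, day2))
```

Applied per channel (R, G, B), this kind of remap is what lets a colorist pin scenes shot days apart to one consistent look before any creative grading starts.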
Critically, these advancements mean post-production timelines shrink. A film that might have needed six months of post may finish in three, or achieve much higher quality in the same time. The award-winning indie film Everything Everywhere All At Once (2022) was a bellwether in this regard – its small VFX team used Runway’s AI tools extensively to speed up their workflow (amplifypartners.com). By using AI to handle tasks like background removal and simple composites, they completed an effects-heavy film on an indie budget and schedule. Another case is The People’s Joker, a crowdfunded indie feature: the director Vera Drew had an experimental vision involving tons of mixed-media composites and found footage, which would have required impossible amounts of manual rotoscoping. Instead, she turned to Runway’s AI and was able to create a feature film that realized her vision (amplifypartners.com).
In short, AI is democratizing post-production – you no longer need a big VFX house with hundreds of artists for many effects; a small team (or single creator) with the right AI tools can achieve complex shots that would have been out of reach before. This trend will continue, enabling independent filmmakers to punch above their weight in terms of visuals and allowing big studios to produce tentpole-level imagery faster (or to re-allocate artists to the truly challenging creative shots while AI handles the drudgery). The flip side is a concern about jobs: roles like rotoscope artist or junior editor may diminish in number as those tasks are automated.
The industry will need to evolve, retraining artists to work alongside AI (for example, an artist might supervise 5 AI processes at once rather than hand-do one task). Overall, by 2028, post-production will be a much more real-time, on-demand process – with cloud-based AI services, a director can request fixes or variations (like “make that explosion bigger” or “remove that car in the background”) and see results back in hours instead of waiting weeks for a VFX team’s next iteration.
Decentralized Content Creation and Distribution
Beyond the production process itself, technology is changing who gets to make and monetize content. The rise of decentralized content platforms – often tied to blockchain and Web3 – could dramatically benefit independent creators in the coming years. These platforms aim to “put Hollywood decision-making into the hands of creators and fans” (businesswire.com) by removing traditional gatekeepers.
One example is Film.io, a recently launched decentralized filmmaking platform that uses a community-driven model. Creators can submit project ideas (scripts, pitches) to Film.io’s blockchain-based ecosystem, and fans holding the platform’s token vote on which projects they want to see “greenlit.” In this way, a filmmaker with a great idea but no studio connections can rally a global fanbase to support the project. According to Film.io, fans engage by voting and reviewing projects with the native $FAN token, helping surface market-validated projects with pre-established audiences (businesswire.com). In 2024, Film.io even offered grants to top-voted projects, and it’s implementing on-chain IP protections for creators (registering their content ideas immutably via VaultLock® tech) (businesswire.com). Over the next three years, we expect more such “DAO-based” studios and funding platforms to emerge, potentially creating a new pipeline for indie films to get made and distributed outside the studio system.
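The token-voting mechanic described above reduces to a stake tally: fans put tokens behind projects, and the top-staked projects surface. A sketch, where the function shape and top-N rule are illustrative assumptions, not Film.io’s actual contract logic:

```python
# Sketch of token-weighted greenlighting: fans stake tokens on projects
# and the top-staked projects are "greenlit". Illustrative only -- not
# any platform's real on-chain logic.

def greenlight(stakes, top_n=1):
    """stakes: {project: {fan: tokens}}. Return the top-N by total stake."""
    totals = {p: sum(fans.values()) for p, fans in stakes.items()}
    return sorted(totals, key=totals.get, reverse=True)[:top_n]

stakes = {
    "indie-noir":  {"fan_a": 400, "fan_b": 250},   # 650 total
    "space-opera": {"fan_a": 100, "fan_c": 300},   # 400 total
}
print(greenlight(stakes))  # ['indie-noir']
```

Because stakes are per-fan, the same data also answers "who backed this early?" – the basis for the fan-reward ideas discussed below.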
Decentralized streaming and monetization is another frontier. Platforms like FanTV (launched in 2025) are blending blockchain and AI to create creator-centric streaming services. FanTV’s CEO noted the biggest challenges for independent creators are “discovery, distribution and monetization” – in other words, getting seen and getting paid (decential.io). To tackle this, FanTV uses token economics to reward both creators and viewers for engagement (decential.io).
A simplified view: when you watch content on FanTV, you can earn tokens, and creators earn based on how much their content is watched, with smart contracts ensuring transparent payouts. This “decentralizes the ‘view’ mechanism” so that no single corporation controls the monetization – instead, the community and the algorithms (powered by AI for recommendations) drive what gets popular and funded (decential.io). Furthermore, FanTV is building a peer-to-peer content delivery network (in partnership with Huddle01) where users with good internet can become nodes to host/stream content, earning a share of what a data center would – an approach that “takes the money that would go to Amazon’s servers and gives it to the people”, according to Huddle01’s CEO (decential.io). This hints at a 2025–2028 trend of decentralized distribution networks, which could reduce streaming costs and empower niche content.
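The payout logic such a contract could enforce is straightforward pro-rata arithmetic. A sketch, with made-up percentages and a hypothetical `split_pool` helper (not FanTV’s actual contract):

```python
# Sketch of a transparent watch-time payout: a reward pool is split
# pro-rata by minutes watched, with a fixed slice reserved for viewer
# rewards. All percentages and names are illustrative placeholders.

def split_pool(pool, minutes_by_creator, viewer_share=0.10):
    """Return (viewer_pot, {creator: payout}) for one payout period."""
    viewer_pot = pool * viewer_share
    creator_pool = pool - viewer_pot
    total = sum(minutes_by_creator.values())
    payouts = {c: creator_pool * m / total
               for c, m in minutes_by_creator.items()}
    return viewer_pot, payouts

viewers, payouts = split_pool(1000.0, {"alice": 600, "bob": 200})
print(viewers, payouts)  # 100.0 {'alice': 675.0, 'bob': 225.0}
```

The appeal of putting this on-chain is not the arithmetic – it is that every party can verify the same three lines of math ran on the same watch-time inputs.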
For creators, these developments mean new monetization avenues beyond the classic Netflix deal or YouTube ad revenue. We’re seeing filmmakers sell NFTs that grant perks to fans (e.g. an NFT that gives you producer credit or exclusive behind-the-scenes access). Some indie films have been financed through NFT sales – for instance, the film Calladita was funded by selling NFT art related to the movie, raising over $750k in 2022 and winning awards at Sundance for this innovative approach (open.substack.com).
In the next few years, a film’s IP might be partially “owned” by a community of token holders who then have a stake in its success – imagine getting a tiny percentage of the profits because you held a token from day one. This could challenge traditional studio financing, which is often top-down. Instead of a few executives deciding what gets made, thousands of fans could collectively decide (and bankroll it in micro amounts). Audience engagement is built in, since those who fund it are invested (emotionally and financially) in promoting the work. We might also see interactive storytelling tied to these platforms: e.g. a series where token-holding fans vote on plot directions (kind of like a decentralized choose-your-own-adventure). This was attempted at a small scale with Web3 projects like The Gimmicks, an animated wrestling show where NFT holders voted on storyline branching. The concept could mature such that by 2027, audience participation in content is normalized – not just commenting on forums, but directly influencing creative outcomes or even co-creating elements (perhaps fans submit designs for a creature in a sci-fi show, and the community’s favorite design – potentially AI-refined – gets used in the episode).
Traditional studios are taking note and starting to experiment themselves (Disney, for one, has explored NFT collectibles for Marvel and Star Wars, and is certainly watching the decentralized trend). Some may incorporate these ideas in distribution – for example, a studio might release a film on a blockchain platform where each view is a micropayment that splits between the studio and the film’s creators immediately via smart contract, reducing reliance on middlemen. Decentralized content creation overall suggests a future media landscape with more voices and diverse content. Independent creators stand to benefit as they can build their own fan communities and finance projects without giving up creative control to a big studio. If successful, this “takes an axe to Hollywood’s barrier to entry,” as Film.io’s founders put it, “transforming the entertainment industry at large” by opening doors to talent that previously couldn’t get a break (businesswire.com).
Implications for Budgets, Timelines and Creative Control
These emerging technologies carry profound implications for how films and shows are budgeted, how quickly they are made, and who holds creative power.
Budgets in many cases could shrink or be reallocated. Expensive line items – building large sets, on-location shoots in remote areas, hiring massive VFX teams for rote tasks – can be trimmed when virtual production and AI can achieve similar results for less. Tyler Perry’s realization with Sora is a prime example: why spend millions constructing a set or flying a crew to a mountain, if an AI can generate a convincing setting virtually? (aitopics.org)
Many productions could see a significant drop in below-the-line costs (physical set costs, travel, some labor) and post-production costs (since AI and real-time workflows prevent the costly “fix it in post” crunch by catching issues earlier or avoiding them). That said, new budget lines will grow: investing in LED volumes, powerful computing for AI rendering, and technical experts to run these systems. In the short term, a virtual production setup might add cost (LED stage rental, etc.), but it often pays for itself by avoiding overruns and enabling more efficient shooting schedules (which saves on crew days and equipment rental in the long run). Timelines and budgets are intertwined – if you can shoot a show in 8 weeks instead of 12 thanks to these tools, that’s a big budget win. Also, consider episodic TV: AI tools could allow showrunners to create more ambitious visuals each week without blowing the budget, or even enable content to be produced on the fly responding to audience feedback (imagine a late-night show able to insert an AI-synthesized comedy sketch about that day’s news – as some have started doing with deepfake parody videos).
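The schedule/budget trade-off above is easy to see in a toy cost model: a shorter shoot with an added LED-stage line item can still come out well ahead. Every rate below is a made-up placeholder, not real production data:

```python
# Toy cost model: compare a conventional 12-week shoot to a shorter
# virtual-production shoot that adds an LED-stage rental line item.
# All rates are illustrative placeholders.

def shoot_cost(weeks, crew_day_rate, days_per_week=5, stage_day_rate=0):
    """Total cost of a shoot: shooting days times daily burn rate."""
    days = weeks * days_per_week
    return days * (crew_day_rate + stage_day_rate)

conventional = shoot_cost(12, crew_day_rate=80_000)
virtual = shoot_cost(8, crew_day_rate=80_000, stage_day_rate=7_500)
print(conventional, virtual, conventional - virtual)
# 4800000 3500000 1300000
```

Even with the stage premium on every shooting day, cutting four weeks of crew burn dominates – which is the point the paragraph makes about schedules and budgets being intertwined.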
Timelines are poised to accelerate across the board. Pre-production can be condensed when AI previsualization answers many questions early. The production itself becomes more streamlined with virtual sets that can be reset at the push of a button and AI assisting with continuity (fewer reshoots needed). Post-production, which used to be a bottleneck for heavy VFX projects, might no longer dictate release dates so strictly – if AI can cut VFX and editing times by 50%, studios might move up premiere dates or take on more projects in parallel. We might even see a move toward real-time filmmaking for certain content: for instance, a director could film actors on an LED stage while an AI simultaneously composites other elements and an editor starts cutting scenes moments after they’re shot. By 2028, the concept of “live post-production” may emerge, where the gap between production and post is so blurred (thanks to on-set rendering and AI) that a near-final cut exists just days after principal photography wraps. This agility could enable more serialized storytelling or rapid iteration (a filmmaker could shoot different endings and have finished versions of each to test with audiences, something too costly/time-consuming to do traditionally).
Creative control stands to both expand and face new challenges. On one hand, independent creators gain more control than ever. The democratization effect means a small team can realize a grand vision without needing a giant studio’s resources – which also means they don’t have to cede creative control in exchange for those resources. As Runway’s CEO observed, these AI tools allow “people who never thought they’d have access” to high-end production to create “high-quality content,” effectively “making storytelling more accessible” and enabling a “huge new market to emerge.” amplifypartners.com, amplifypartners.com
The result could be an explosion of original voices and niche content for underserved audiences, since creators can bypass the traditional gatekeepers. We already see YouTube and streaming enabling niche content; with these advancements, the quality gap between a fan-made film and a Hollywood film closes further, meaning fan creators or startups can directly compete on creative ideas rather than being limited by execution quality.
On the other hand, established studios and creatives will have to navigate creative risks with AI involvement. There’s a valid fear that over-reliance on AI (especially in writing or ideation) could lead to homogeneous content – if everyone’s script assistant draws from the same corpus of successful movies, you might get clichéd outputs. Maintaining human originality and quirkiness will be key; AI should be a tool, not the source of the art. Artist and guild concerns also come into play: writers’ and actors’ unions in 2023 fought hard to put guardrails on AI – for instance, ensuring that studios can’t use AI to generate scripts without crediting and paying writers, or scan background actors to reuse their likenesses endlessly without consent. The new union contracts now include provisions about digital replicas and AI usage (e.g. requiring performer approval and payment for digital doubles) sagaftra.org. These policies will influence creative control too: actors might negotiate the right to veto AI uses of their performance, directors might insist on a clause that AI-generated material is subject to their approval to avoid studio meddling, and so on.
Traditional studios may face challenges to their control as well. If decentralized platforms and indie AI-enabled productions start scoring successes, talent may choose to work outside the studio system, and audiences might follow. Studios will need to adapt by either adopting these technologies themselves (which many are, setting up virtual production stages and in-house AI research labs) or by offering something unique (huge marketing reach, established IP, etc.). A possible outcome is studios leveraging AI to double down on franchise content, pumping out more Marvel/Star Wars at a faster clip (since AI can help scale content production), while independent creators use the tech to offer fresh alternatives. This could set up a battle of quantity vs quality, or formula vs innovation. However, we might also see collaboration: studios partnering with internet creators (already happening with things like Netflix buying indie films or hiring YouTube filmmakers for projects). New forms of creative partnerships may arise, where a community creates a popular proof-of-concept (like a crowd-created short film with AI effects) and a studio then gives it a bigger platform (similar to how Logic Breakdown on YouTube used AI to make an anime-style short film that went viral, and now those creators might get deals).
For audiences, the future holds more interactive and personalized content. With AI, it’s conceivable that a film or series can be tailored to viewer preferences – for instance, slight variations in editing or even plot depending on viewer feedback or profile. Creators might release multiple cuts and let audiences choose (somewhat like interactive films, but more seamless). Monetization could also become more direct: fans might pay via crypto tokens for bonus scenes or to unlock an alternate ending rendered by AI on-demand. Audience engagement in the creative process, as mentioned, means fandoms become part of content creation communities, blurring the line between creator and consumer.
In summary, the next three years will likely see leaner productions and faster turnarounds, with empowered creators at the helm – but also a period of negotiation and adjustment as the industry figures out fair and ethical use of these powerful tools. Independent filmmakers stand to gain the most by challenging the old guard with bold experiments that succeed on shoestring budgets, whereas traditional studios will be pushed to innovate or risk seeming stale. It’s an era where a visionary with a laptop and AI access could launch the next big franchise from their garage, and where the definition of “film production” expands to include global fan communities and intelligent algorithms as part of the studio. The storytelling landscape by 2028 will not only have new technologies, but entirely new workflows, business models, and creative possibilities – truly a new era of production.
Examples of Pioneering Projects and Companies
To illustrate this evolution, here are a few of the key projects and players experimenting with these methods today:
Lucasfilm’s The Mandalorian – Pioneered large-scale virtual production using ILM’s StageCraft LED volume, proving the viability of in-camera VFX for a major series and inspiring dozens of other productions to follow ilm.com, ilm.com. (Now used in projects like The Batman (2022) and the upcoming Percy Jackson series for dynamic backdrops ilm.com.)
OpenAI’s Sora – A cutting-edge AI text-to-video generator that launched in 2025 to select users. Sora can create short, film-like video clips from prompts, which industry figures like Tyler Perry predict could “transform the film…industry” by enabling virtual set creation and rapid prototyping of scenes aitopics.org.
Runway ML – A startup offering AI tools for creators. Its Gen-1 and Gen-2 models enable both video-to-video transformations and text-to-video generation. Runway’s tools have been used in real productions (e.g. by Everything Everywhere All At Once’s team to accelerate VFX amplifypartners.com, and by The Late Show editors for instant VFX edits amplifypartners.com). Runway also runs an AI Film Festival, highlighting short films made with AI, demonstrating the creative potential of these techniques.
Wonder Dynamics (Autodesk Flow) – An AI platform automating VFX work, particularly inserting CG characters into live action. It handles motion capture from regular footage and lighting/compositing, reportedly automating 80–90% of the manual work invadeai.com. Used by Boxel Studios to create complex VFX shots on a TV timeline awn.com, indicating how AI can assist VFX houses to deliver faster.
SamurAI (Segment Anything Model Unified & Robust AI) – An open-source AI tool for motion tracking and rotoscoping in video thevfxmedia.com. SamurAI is being explored in post-production to quickly isolate actors or objects without green screen, which could streamline both VFX and editing tasks (for example, replacing a sky or background behind an actor with a few clicks).
Film.io – A decentralized filmmaking DAO platform launched in 2024 that lets creators pitch projects to a community of fans for voting and funding businesswire.com. It’s experimenting with blockchain governance in content creation, and has attracted thousands of users in its early stage.
FanTV – A new blockchain-based streaming service that uses AI for content recommendations and a token system to reward viewers and creators for engagement decential.io. Currently boasting 7+ million users decential.io, FanTV is testing the model of “watch-to-earn” and aims to solve indie creators’ distribution woes through decentralization.
Netflix and Disney’s Virtual Production initiatives – Most major studios now have virtual production divisions. Netflix built an LED stage in 2023 in Los Angeles and is using it for upcoming originals (they’ve noted improved production speed and creative flexibility as reasons). Disney is expanding StageCraft to more soundstages for Marvel and Star Wars productions, integrating Unreal Engine artists directly into its film crews.
Innovative Indie Projects:
“Artificial” – a web series that used live audience input via Twitch to influence the storyline, a precursor to interactive, AI-driven narratives.
“Raster” (hypothetical future example) – an independent animated short entirely generated by AI models (images, movements, voices) overseen by a single artist, making rounds at festivals by 2026 and sparking debate on what qualifies as “film”.
The Sandbox – Not a film, but a virtual world platform where users create stories and assets. A small studio took characters from a Sandbox game and, using AI video tools, turned it into an animated web series – showing how decentralized, user-generated IP can jump to more traditional media via AI.
Each of these examples, whether big-budget productions or experimental indies, showcases elements of the future workflow: real-time rendering, AI generation, community collaboration, and new monetization models. They are trailblazers lighting the path for the broader industry.
Conclusion: A New Era of Production
By 2028, the production of films and television is likely to look very different from today’s norms. The convergence of AI and real-time technologies will make the creative process more fluid and less constrained by physical or financial limits. A creator with a bold idea will have an expanded toolbox: AI assistants to help write and visualize it, virtual stages to film it with any backdrop imaginable, and AI-driven post pipelines to polish it – all with unprecedented speed. The balance of power could shift toward artists and audiences: artists because they can do more with less (needing studios mainly for distribution hype, not the entire creation process), and audiences because they can directly support and influence the stories they care about via new platforms.
Traditional studios and craft professionals are not obsolete by any means – in fact, those who embrace these tools can amplify their artistry (a virtuoso cinematographer with an LED volume can paint with light in new ways, a skilled editor with AI at their fingertips can try bolder cuts knowing the AI safety net is there to mend continuity). But the old hierarchical, labor-intensive production model will be challenged by a leaner, tech-infused model. We’ll likely see a blend of both in the industry: some projects will stick closer to practical filmmaking for artistic reasons, while others will fully digitize production. And some fundamentally new formats will arise – perhaps AI-generated interactive films where a movie’s storyline or visuals regenerate differently on each viewing, or massively collaborative cloud productions with thousands of contributors worldwide.
Audiences can look forward to content that is more immersive, more frequent, and more personalized. Niche voices that never had a chance in the old system may produce cult hits with fan communities. Big franchises may experiment with letting fans “into the sandbox” of the story via virtual experiences or voting mechanisms. And with distribution decentralizing, viewers might even earn rewards or ownership from the content they love (turning fandom into a participatory economy).
Inevitably, there will be growing pains. Issues of quality control, ethical use of AI (deepfakes vs creative expression), and equitable compensation will need constant attention. The industry will have to refine norms about crediting AI contributions and protecting human creators’ rights and income – debates already raging in courtrooms and guild meetings theguardian.com, theguardian.com. Moreover, not every experiment will succeed – some AI-generated content will flop or feel gimmicky, and some traditionalists will resist these changes.
Yet, if history is any guide, filmmaking has always been shaped by technology: sound, color, CGI, digital editing – each stirred initial fear but ultimately empowered new forms of storytelling. AI and virtual production are poised to be similar catalysts. In the next three years, they won’t replace the human imagination at the heart of filmmaking; rather, they will turbocharge it, allowing creators to realize their visions in ways previously impossible. The result will be an industry that is more inclusive, innovative, and agile – one where the only limit truly is the imagination.
Sources:
Milmo, Dan. “Sora, OpenAI’s video generator, has hit the UK. It’s obvious why creatives are worried.” The Guardian, Feb. 28, 2025 aitopics.org, theguardian.com.
Futuresource Consulting – Virtual Production & XR Report, Jul. 12, 2024 (via futuresource-consulting.com) futuresource-consulting.com, futuresource-consulting.com.
Amplify Partners. “How Runway revolutionized film production with AI.” (Case Study) 2023 amplifypartners.com, amplifypartners.com.
Designboom. “Runway generative AI tool ‘Gen-2’ makes realistic movies with just words.” Mar. 28, 2023 designboom.com.
VFXMedia. “Samurai AI: The Future of Real-Time Motion Tracking.” Nov. 28, 2024 thevfxmedia.com, thevfxmedia.com.
Business Wire. “SXSW: Film.io Announces Public Launch… to Empower Independent Filmmakers & Democratize the Film Industry.” Mar. 8, 2024 businesswire.com, businesswire.com.
Decential.io. “FanTV Is Betting on Blockchain and AI to Help Video Artists Keep More of What They Earn.” Feb. 4, 2025 decential.io, decential.io.
Reddit r/filmmaking discussion on AI-assisted storyboarding (user perspectives), Aug. 2024 reddit.com.
InvadeAI. “Revolutionizing VFX with AI – Wonder Dynamics,” 2023 invadeai.com.
TechCrunch. “How The Mandalorian and ILM invisibly reinvented film and TV production.” Feb. 20, 2020 techcrunch.com, techcrunch.com.
The film and television industry is on the cusp of a technological revolution. Over the next three years, AI-driven tools and virtual production techniques are poised to transform how content is created from script to screen. Cutting-edge experiments underway today – from generative AI video platforms (OpenAI’s Sora, Runway’s Gen-2, etc.) to real-time virtual production stages – hint at a future where storytelling becomes faster, more collaborative, and accessible. This forecast explores how emerging technologies like SamurAI, Runway, OpenAI Sora, and other AI-native video generators could reshape every step of production: screenwriting, pre-visualization, casting, set design, directing, VFX/post, and even distribution. We also examine implications for budgets, timelines, and creative control, highlighting current pioneers and what these changes mean for independent creators versus traditional studios.
AI-Native Video Generation Transforms Content Creation
AI-driven video generation tools can produce short “films” from just a text prompt, featuring realistic characters and environments. OpenAI’s newly launched Sora model, for example, can generate a 5–20 second clip in under a minute based on a simple description aitopics.org. In one demo, typing “two people in a living room in the mountains” produced a convincing mountain backdrop and cozy interior – all synthesized by AI in 45 seconds aitopics.org. While the “actors” in these auto-generated videos still show telltale artifacts (e.g. distorted hands aitopics.org), the technology is improving rapidly. Runway’s Gen-2 text-to-video system similarly boasts: “If you can say it, now you can see it,” promising “no lights. No camera. All action” when turning a prompt into moving imagery designboom.com,
runwayml.com. These AI-native platforms “realistically and consistently synthesize new videos…without filming anything at all,” essentially letting creators “film” scenes via a keyboard runwayml.com.
Early uses of generative video are focused on pre-production and concept work. Studios and ad agencies are already using tools like Sora to “produce film and advertising concepts and pitches,” according to OpenAI theguardian.com. A UK digital artist noted that Sora “expanded opportunities for younger creatives” and is being used to storyboard ideas for clients theguardian.com. In advertising, major brands have begun experimenting – Coca-Cola even released an entirely AI-generated Christmas ad in 2024 theguardian.com.
The appeal is clear: Text-to-video AI offers a way to visualize ideas faster and cheaper than traditional shoots. Industry observers predict a “tectonic disruption” as this tech matures theguardian.com. Tyler Perry, upon seeing early Sora previews, was so struck by the realistic results that he paused a planned $800M studio expansion, realizing “I may not need to build [new] sets…I can sit in an office and do this with a computer”, which he found “shocking” aitopics.org. By 2028, these tools could handle longer-form content; we may see the first short films or entire scenes generated largely by AI, with human creators guiding the process as “directors” of AI. Such a shift could dramatically cut certain production costs (imagine generating a crowd scene or exotic location without travel or construction) while also raising new questions around visual quality, originality and copyright. (Notably, concerns over AI training data and deepfakes are growing – OpenAI has temporarily restricted Sora’s ability to depict real people as it works to “address…misuse” of likenesses apnews.com, and copyright lawsuits over AI-generated content are underway theguardian.com.)
Still, with Big Tech and startups alike racing forward (Google and Meta have unveiled their own text-to-video research, and China’s Kuaishou has a model called Kling theguardian.com), generative video is on track to become a mainstream creative tool. In the next three years we can expect AI video quality to rapidly improve (fixing glitches like hands) and clip lengths to extend, making AI a ubiquitous collaborator in content creation – from previz animations to entire indie shorts.
Virtual Production and Real-Time Filmmaking Go Mainstream
Virtual production using LED “volume” stages allows filmmakers to shoot actors against dynamic digital backdrops rendered in real time. In this setup, towering LED walls (and sometimes LED ceilings) display 3D environments that move in sync with the camera, surrounding actors with an immersive, photo-real setting. This technique – exemplified by Industrial Light & Magic’s StageCraft volume used on The Mandalorian – is replacing green screens on many high-end productions. By the end of 2023 there were over 200 LED in-camera VFX stages in operation worldwide futuresource-consulting.com, and that number is projected to grow around 18% annually through 2028 as more studios and even mid-budget projects adopt the tech futuresource-consulting.com. Far from a passing fad, virtual production has proven its value: it “radically reconfigures the production pipeline, bringing the VFX department much closer to pre-production and on set,” notes one industry analyst futuresource-consulting.com. In practice, this shrinks timelines – visual effects that used to be done in post (months after filming) can now be finalized during the shoot, and directors see the final background in-camera instead of imagining a green void futuresource-consulting.com.
The benefits of LED volumes are significant for creative quality and efficiency. Actors can react to actual scenery (e.g. an alien sunset or a bustling city) projected around them, making performances more natural than acting against blank green walls futuresource-consulting.com, futuresource-consulting.com. Lighting from the LED backdrop is accurate and dynamic – if the virtual sun is setting, the warm glow actually reflects on the actors in real time. This avoids the dreaded green “spill” lighting issues and heavy color correction later futuresource-consulting.com. Filmmakers also gain unlimited “golden hour”: for instance, a sunset scene can be shot over many hours or days with the exact same lighting, since the digital sky can be paused or rotated as needed futuresource-consulting.com.
Logistically, virtual sets cut down location expenses and set construction – you don’t need to build a full street or travel to a mountaintop for one scene if the LED wall can display it. Producers can spin up multiple locations on a single soundstage, even doing a 180° background turn in seconds to get reverse angles futuresource-consulting.com. By 2025–2027, as more rental LED stages and freelancers skilled in these tools become available, virtual production will no longer be limited to Disney or Lucasfilm-sized projects. Even regional studios and indie filmmakers are starting to access smaller LED volumes (some rental stages cost only $5K–$10K per day, a cost that can be offset by saved post-production and location fees) filmlocal.com. Major camera and software companies are also supporting this trend – e.g. Unreal Engine (Epic Games) and Unity are continually improving real-time rendering tools specifically for filmmakers, and new entrants offer pre-configured virtual set packages.
We’ll see virtual production pipelines become standard, with directors planning shots in game engines during pre-vis, and cinematographers working hand-in-hand with Unreal Engine artists on set. Notably, this merges roles: the VFX, art department, and camera crew collaborate from day one, shifting some of the creative iteration to pre-production. Overall, virtual production promises faster turnaround and more control over filmmaking’s variables. A potential drawback is high upfront setup cost, but as LED walls become more common and demand for giant “mega-volumes” gives way to many smaller stages, the average cost is expected to gradually drop futuresource-consulting.com.
AI in Pre-Production: Smarter Writing, Previs and Design
Screenwriting and development are being augmented by AI “co-pilots.” While AI won’t be winning Best Original Screenplay just yet, writers have begun using tools like GPT-4 to brainstorm plots, generate character backstories, or even draft scenes in screenplay format. In the near future, we anticipate specialized scriptwriting AIs (fine-tuned on screenplay structure) to help create first-draft dialog or alternate scene options on the fly. This could greatly speed up the iteration process – for example, a showrunner could ask an AI to “generate five ways our finale could end, with twists,” then refine those ideas manually. Some startups (e.g. Mélyès AI and others) are already marketing AI story development software to filmmakers.
The Writers Guild of America has acknowledged these tools, outlining in 2023 that writers can use AI as long as writing credits go to humans – essentially treating AI as just another tool in the writer’s room. In the next three years, expect AI to become a trusted assistant for screenwriters: suggesting improvements in dialogue, checking script consistency, or generating quick synopsis and pitch materials. Creators must remain cautious to avoid overly formulaic, AI-suggested tropes, but as one filmmaker noted, “AI isn’t here to replace filmmakers but could be immensely beneficial for tasks like storyboarding and pre-visualization, speeding up tedious processes and letting us focus more on storytelling and creativity.” reddit.com
In other words, mundane groundwork can be offloaded to GPT-style assistants, freeing human writers for the nuanced creative decisions.
Pre-visualization and storyboarding have perhaps one of the most immediate boosts from AI. Traditionally, turning a script into storyboards or concept art is labor-intensive, requiring artists to hand-draw frames or render 3D animatics. Now, tools like Storyboarder.ai and Midjourney (with custom model fine-tuning) let filmmakers quickly generate storyboard images from text descriptions of a scene. For example, given a scene description (“Exterior – futuristic city street at night – hero faces down a robot army”), an AI image generator can output a panel in seconds. This rapid visualization helps directors and cinematographers experiment with camera angles, lighting, and composition long before actual filming storyboarder.ai.
Some platforms even animate these storyboards: one upgraded system can take static storyboard frames and transform them into dynamic animatics using AI video – essentially “sketches” of the motion in a scene storyboarder.ai. By 2025, it may become common to have AI-driven previz reels, where a director types the rough action and “watches” a draft version of the scene.
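The step from script metadata to an image-model prompt is mechanical enough to sketch. The styling keywords, the `Scene` structure, and the prompt template below are all hypothetical; any text-to-image backend could consume the resulting string:

```python
# Minimal sketch of turning a script slugline + action beat into a
# storyboard-panel prompt for a text-to-image model. The STYLE string
# and prompt format are illustrative, not from any real previz tool.

from dataclasses import dataclass

@dataclass
class Scene:
    slugline: str       # e.g. "EXT. FUTURISTIC CITY STREET - NIGHT"
    action: str         # one beat of on-screen action
    shot: str = "wide"  # desired framing: wide / medium / close-up

STYLE = "storyboard sketch, high contrast, cinematic composition"

def panel_prompt(scene: Scene) -> str:
    """Compose a single storyboard-panel prompt from scene metadata."""
    setting = (scene.slugline.title()
               .replace("Ext.", "Exterior,")
               .replace("Int.", "Interior,"))
    return f"{scene.shot} shot: {setting} -- {scene.action}, {STYLE}"

beat = Scene("EXT. FUTURISTIC CITY STREET - NIGHT",
             "the hero faces down a robot army")
print(panel_prompt(beat))
```

A director could iterate on the `shot` field alone to compare framings of the same beat, which is the kind of cheap experimentation the paragraph above describes.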
Real-time game engines also play a role here: virtual scouting tools (like Unreal Engine’s VR Location Scout) allow creators to explore digital sets and plan shots in a game-like sandbox. AI comes into play by populating these previs worlds with auto-generated extras, vehicles, and set dressing to simulate a living scene.
Art departments are embracing generative design too – concept artists use tools such as DALL·E or Stable Diffusion to generate hundreds of set design ideas, costumes, and props from text prompts or sketches, then refine the best concepts by hand. This speeds up creative iteration and opens the door for more visually daring ideas, since an AI can quickly visualize wild concepts that an artist might not have attempted under tight deadlines.
As an example, Marvel’s art teams reportedly used generative models to explore variations of the psychedelic title sequence in Doctor Strange, and independent creators have used Midjourney to design everything from fantasy landscapes to spaceship interiors as starting points. We can expect AI-previsualization to become a standard step: before committing budget to building a set or prosthetic creature, filmmakers will have seen a high-quality AI mockup of it in context. This lowers risk and encourages experimental storytelling, as even indie creators can see their imagination before investing real dollars.
AI and Automation in Production (Casting, Acting and Directing)
The production phase – casting, shooting, directing – is also evolving with AI assistance. Casting might seem like a human judgment domain, but AI is starting to play a role in both the search and the performance aspects of casting. On one hand, casting directors can use AI-powered tools to sift through thousands of audition tapes or actor reels, using computer vision and voice analysis to flag candidates that match a role’s requirements (e.g., finding actors with a certain look, tone, or acting style). This doesn’t replace the nuanced eye of a casting director, but it can winnow down options faster.
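The winnowing step described above can be sketched as a simple scoring pass: a vision/voice model upstream tags each reel, and the tags are matched against a role's requirements. The tags, weights, and `score` function here are hypothetical, not a real casting API:

```python
# Hypothetical sketch of narrowing an audition pool: hard-filter on
# required tags, then rank survivors by overlap with preferred tags.
# Tag vocabulary and scoring are illustrative only.

def score(candidate_tags: set, required: set, preferred: set) -> float:
    """0.0 if any required tag is missing; otherwise 1.0 plus the
    fraction of preferred tags the candidate covers."""
    if not required <= candidate_tags:
        return 0.0
    return 1.0 + len(candidate_tags & preferred) / max(len(preferred), 1)

pool = {
    "A": {"baritone", "comedic", "40s"},
    "B": {"baritone", "dramatic", "40s", "stage-trained"},
    "C": {"tenor", "dramatic", "30s"},
}
required = {"baritone", "40s"}
preferred = {"dramatic", "stage-trained"}

ranked = sorted(pool, key=lambda n: score(pool[n], required, preferred),
                reverse=True)
print(ranked)  # → ['B', 'A', 'C']: C fails the hard filter, B wins on preferred tags
```

The point is not the trivial math but the division of labor: the model proposes a shortlist, the casting director still makes the call.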
Over the next few years, we might see casting platforms offer an AI feature: “generate a composite character” based on a script description, which then suggests real actors or even outputs an AI-generated face as an ideal match. In fact, AI-generated virtual actors are an emerging concept. Companies are creating photorealistic digital humans (using technologies like Unreal Engine’s MetaHuman framework) that can perform onscreen. These digital actors can be puppeteered by motion capture or animated with AI.
We’ve already seen early forays: the sci-fi film “b” (scheduled for 2025) features an AI robot as an actor, and projects in China have showcased entirely virtual TV hosts. In the short term, such virtual actors will mainly serve as stunt doubles, background characters, or de-aged versions of real actors. For instance, Lucasfilm used deepfake technology to resurrect a young Luke Skywalker in The Mandalorian and The Book of Boba Fett – and a fan artist’s own deepfake of the character was so convincing that Disney hired him to improve their VFX, resulting in a far more realistic digital Luke.
By 2028, it’s plausible that a major film or series will introduce a fully AI-generated supporting character (with a lifelike face and voice) that audiences accept as part of the cast. This blurs the line of casting – is the “actor” the AI or the person who designed the AI character? Unions like SAG-AFTRA are already negotiating rules for “digital replicas” of performers sagaftra.org to ensure consent and compensation when an actor’s likeness is cloned. Those rules will be crucial as studios begin scanning actors to use their digital double for certain shots (imagine an action star licensing their 3D likeness so that an AI can generate some of their minor scenes or dangerous stunts, under their approval).
Voice casting is similarly being disrupted: AI voice cloning can mimic a famous actor’s voice to dub them into other languages or have them narrate without recording. In 2022, James Earl Jones officially allowed an AI model to replicate his Darth Vader voice for future Star Wars projects, essentially casting an AI as his voice stand-in. Over the next few years, using AI voices for dubbing or for minor characters (with proper permissions) will likely become routine, streamlining the casting of multilingual productions and animation.
On set, directors and cinematographers are beginning to collaborate with AI in real time. One area is intelligent camera systems: companies are incorporating machine learning into cameras and rigs for features like auto-tracking, focus assist, and even shot suggestion. Drones and robot cameras can follow complex action guided by AI vision (for example, automatically tracking an actor’s face through a crowd with predictive algorithms).
We see early versions in sports broadcasting and live events; by 2025, film sets will start using “smart” camera dollies that can repeat precise moves or adjust framing if an actor misses their mark slightly. SamurAI, for instance, is an open-source tool that leverages Meta’s Segment-Anything model to perform real-time object tracking in footage thevfxmedia.com, thevfxmedia.com. In a film context, such AI tracking could be used to keep a moving subject in frame or later to attach CG effects to an actor without the usual markers. This technology delivered “remarkable improvements in zero-shot object tracking” in tests (achieving >7% accuracy gains on benchmarks) thevfxmedia.com, meaning it can track things in video without any manual setup – a big boon for VFX work.
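The core job of such a tracker, associating the same subject across frames, can be illustrated with a toy nearest-centroid matcher. This is a deliberately crude stand-in for what SAM-style zero-shot trackers do at far higher fidelity; the box format and distance threshold are illustrative:

```python
# Toy frame-to-frame track association by nearest centroid. Real
# zero-shot trackers (e.g. SAM-based) use learned features, not raw
# distance; boxes are (x0, y0, x1, y1), threshold in pixels.

import math

def centroid(box):
    x0, y0, x1, y1 = box
    return ((x0 + x1) / 2, (y0 + y1) / 2)

def associate(tracks, detections, max_dist=50.0):
    """Greedily match each existing track to its nearest new detection."""
    assigned = {}
    free = list(range(len(detections)))
    for tid, box in tracks.items():
        cx, cy = centroid(box)
        best, best_d = None, max_dist
        for i in free:
            dx, dy = centroid(detections[i])
            d = math.hypot(dx - cx, dy - cy)
            if d < best_d:
                best, best_d = i, d
        if best is not None:
            assigned[tid] = detections[best]
            free.remove(best)
    return assigned

tracks = {"actor": (100, 100, 140, 200), "prop": (400, 80, 430, 120)}
frame2 = [(402, 84, 432, 124), (108, 102, 148, 202)]
print(associate(tracks, frame2))
```

Once each on-screen subject keeps a stable ID from frame to frame, CG effects can be attached to it without physical markers, which is the workflow benefit the paragraph above describes.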
We anticipate virtual production stages will integrate these AI trackers to sync virtual elements with actors sans the traditional motion-capture suits. Indeed, tools like Wonder Dynamics’ Wonder Studio (recently acquired and rebranded by Autodesk) already allow filmmakers to replace an actor with a CG character automatically, without mocap – the AI analyzes the actor’s performance in a shot and animates a digital character to do the same invadeai.com, invadeai.com. This platform “automates up to 90% of the VFX process” for inserting CG actors, handling animation, lighting, and compositing of the character into the live scene invadeai.com. By offloading the technical heavy-lifting to AI, a director can see a rough composite of, say, an alien character in place of a human stand-in on the same day of shooting. The remaining 10% is where human VFX artists fine-tune and add creative polish. The net effect is faster and cheaper VFX – Wonder Dynamics enabled one TV studio to create 134 creature shots in 6 weeks for Superman & Lois, a pace that would be impossible via manual methods awn.com.
Directors can also leverage AI to coordinate complex scenes. Imagine a battle scene with hundreds of digital soldiers – an AI system can drive those background extras (ensuring random but realistic movements) while the director focuses on the leads. Even for real crowds, AI analysis can help: computer vision can flag if an extra is out of costume or anachronistic, acting like a continuity supervisor. Real-time feedback and editing is another emerging capability – systems are being developed that analyze camera feeds on set and can suggest alternate camera angles or edits based on cinematic databases (e.g., comparing how similar scenes in classics were shot). While still experimental, such tools might advise a director that a close-up shot is under-lit compared to the intended mood, or even generate a quick re-edit of a scene during production to see if they got enough coverage. By 2028, the notion of an “AI assistant director” might be semi-formalized: not making creative decisions, but always present as a background service monitoring technical details and continuity, so the crew catches issues in the moment rather than in the editing room.
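As a toy illustration of AI-driven background extras, the sketch below gives each digital extra a small random wander per frame and then enforces a minimum separation. Real crowd systems use far richer behavior models; the parameters (`min_sep`, `jitter`) are invented for illustration:

```python
import random

# Toy sketch of AI-driven background extras: each digital extra takes a
# small random step per frame, then any pair that crowds too close is
# pushed apart, giving "random but realistic" motion without hand
# animation. Purely illustrative; production crowd systems model gaze,
# gait, goals, and collision far more richly.

def step_crowd(positions, min_sep=1.0, jitter=0.5, rng=random):
    # Random wander for each extra.
    moved = [(x + rng.uniform(-jitter, jitter),
              y + rng.uniform(-jitter, jitter)) for x, y in positions]
    # Simple separation: push apart any pair closer than min_sep.
    for i in range(len(moved)):
        for j in range(i + 1, len(moved)):
            xi, yi = moved[i]
            xj, yj = moved[j]
            dx, dy = xj - xi, yj - yi
            dist = (dx * dx + dy * dy) ** 0.5 or 1e-9
            if dist < min_sep:
                push = (min_sep - dist) / 2
                ux, uy = dx / dist, dy / dist
                moved[i] = (xi - ux * push, yi - uy * push)
                moved[j] = (xj + ux * push, yj + uy * push)
    return moved
```

Running this per frame yields hundreds of extras that mill about plausibly while the director's attention stays on the leads.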
Post-Production Revolution: AI in Editing and VFX
It’s in post-production that AI has already planted deep roots, and the coming years will solidify those gains. Editing is becoming smarter and more automated. For example, Adobe’s AI-powered features can analyze raw footage and suggest selects (finding the best take where actors didn’t flub lines, identifying emotive expressions, etc.), drastically reducing the first assembly time. There are experimental tools that can auto-generate a rough cut given a script – matching lines of dialogue with the best takes and camera angles based on learned editing patterns. While a human editor will always finesse the rhythm and storytelling, these assistants can handle the grunt work of logging and sorting footage. We’re also seeing text-based editing: apps where an editor edits the transcript and the software automatically cuts the video to match (the app Descript does this for simple projects already). By 2025–2026, expect major NLE (non-linear editing) platforms like Avid or Premiere to incorporate AI that can recommend edits, flag pacing issues, or even create quick trailers and highlight reels from a finished film automatically.
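The transcript-driven editing idea is simple enough to sketch: if speech-to-text gives each word its timestamps, deleting words from the transcript yields a cut list for the video. The data shapes below are hypothetical, not Descript’s actual API:

```python
# Sketch of text-based video editing (in the spirit of tools like
# Descript): each transcript word carries its timestamps, so deleting
# words from the text yields (start, end) segments to keep.
# Data shapes are invented for illustration.

def cut_list(words, kept_text):
    """Return (start, end) segments to keep, merging adjacent words.

    words: list of (word, start_sec, end_sec) from speech-to-text.
    kept_text: the transcript after the editor deletes words.
    """
    kept = kept_text.split()
    segments, i = [], 0
    for word, start, end in words:
        if i < len(kept) and word == kept[i]:
            i += 1
            # Merge with the previous segment if contiguous in time.
            if segments and abs(segments[-1][1] - start) < 0.05:
                segments[-1] = (segments[-1][0], end)
            else:
                segments.append((start, end))
    return segments
```

Deleting an "um" from the transcript thus produces two keep-segments with the filler cut out, which an NLE could apply as an edit decision list.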
The VFX and post-processing pipeline is arguably being upended (in a good way) by AI. One striking example: an editor on The Late Show with Stephen Colbert used Runway’s AI tools to accomplish a complex rotoscoping task (cutting out an object frame-by-frame) in 3 minutes – a task that normally took him two days of manual work amplifypartners.com, amplifypartners.com. After seeing an online demo, he licensed the software and quietly started using it, shocking his colleagues when he’d return with finished shots in 20 minutes that used to require entire afternoons amplifypartners.com. Now multiply that productivity boost across the many tedious chores in post: masking, tracking, paint-outs, wire removal, crowd duplication, explosion simulations – all are getting AI assists. SamurAI (mentioned earlier) is one such assist, providing high-accuracy object tracking that can eliminate hours of manual keyframing for VFX artists thevfxmedia.com, thevfxmedia.com. AI upscaling and restoration tools are also routinely used now: they can take grainy footage or lower-resolution shots and enhance details to near 4K quality. This is invaluable for documentary and archival projects (e.g., AI-upscaling old 1960s footage for a modern film). A separate project that shares the SamurAI name (from colorist John Daro) in fact specializes in multi-step AI restoration of low-resolution video to UHD, used to clean up footage that would previously have been unusable.
Perhaps the most visible AI impact for audiences is in digital face and voice manipulation. We’ve seen mainstream films use deepfake-like techniques to de-age actors or even resurrect them (as with Peter Cushing in Rogue One and a young Carrie Fisher cameo, which were early attempts using CGI). Today’s AI makes this more accessible: a skilled artist with the right model can convincingly de-age an actor’s face at a fraction of the traditional VFX cost. We will likely see routine de-aging/enhancement in post – e.g., touching up actors’ faces to maintain continuity or subtly altering an actor’s expression if the director wants a different emotional tone (yes, AI can now “paint” a slight smile or frown on a face in motion, within limits). Respeecher and similar AI voice tools can modulate dialogue – for instance, adjusting an actor’s line delivery in post to change the tone or even the language, all while still sounding like the actor. By 2028, a director might have the freedom to direct some performance changes after filming: “Let’s have our actor’s voice sound more tired in that scene” – an AI voice tweak does it without ADR reshoots.
Another breakthrough is AI-assisted dubbing and localization. Startups already offer AI that can translate dialogue and make the actor’s on-screen lip movements match the new language, by subtly warping the mouth in each frame. This kind of automated lip-sync for dubbing could make international releases far smoother, and even allow each viewer to choose their preferred language while watching, with the actors magically speaking it.
In color grading, AI reference matching lets colorists apply the look of one film to another automatically, or ensure that scenes shot on different days have perfectly consistent color/lighting by analyzing and correcting any deviation. Generative fill (as seen in Photoshop’s recent updates) is making its way to video: need to remove a boom mic or a crew member accidentally caught in frame? An AI can paint them out seamlessly across the moving shot, which used to be painstaking manual work.
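At its simplest, reference matching is a distribution-normalization problem. The sketch below matches only the mean and spread of a single channel – real AI grading tools match full per-channel color distributions, so this shows just the underlying idea:

```python
import statistics

# Toy sketch of "reference matching" in the grade: shift and scale one
# shot's pixel values so their mean and spread match a reference shot's.
# Real tools match full color distributions per channel; this
# one-channel version only illustrates the normalization at the core.

def match_reference(shot, reference):
    """Remap shot values so their mean/stdev match the reference's."""
    s_mu, s_sd = statistics.mean(shot), statistics.pstdev(shot)
    r_mu, r_sd = statistics.mean(reference), statistics.pstdev(reference)
    scale = (r_sd / s_sd) if s_sd else 1.0
    return [(v - s_mu) * scale + r_mu for v in shot]
```

Applied per channel across two shots from different days, this kind of remap is what keeps a scene's color and lighting consistent without manual shot-by-shot correction.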
Critically, these advancements mean post-production timelines shrink. A film that might have needed six months of post may finish in three, or achieve much higher quality in the same time. The award-winning indie film Everything Everywhere All At Once (2022) was a bellwether in this regard – its small VFX team used Runway’s AI tools extensively to speed up their workflow amplifypartners.com. By using AI to handle tasks like background removal and simple composites, they completed an effects-heavy film on an indie budget and schedule. Another case is The People’s Joker, a crowdfunded indie feature: the director Vera Drew had an experimental vision involving tons of mixed-media composites and found footage, which would have required impossible amounts of manual rotoscoping. Instead, she turned to Runway’s AI and was able to create a feature film that realized her vision amplifypartners.com, amplifypartners.com.
In short, AI is democratizing post-production – you no longer need a big VFX house with hundreds of artists for many effects; a small team (or single creator) with the right AI tools can achieve complex shots that would have been out of reach before. This trend will continue, enabling independent filmmakers to punch above their weight in terms of visuals and allowing big studios to produce tentpole-level imagery faster (or to re-allocate artists to the truly challenging creative shots while AI handles the drudgery). The flip side is a concern about jobs: roles like rotoscope artist or junior editor may diminish in number as those tasks are automated.
The industry will need to evolve, retraining artists to work alongside AI (for example, an artist might supervise five AI processes at once rather than doing one task by hand). Overall, by 2028, post-production will be a much more real-time, on-demand process – with cloud-based AI services, a director can request fixes or variations (like “make that explosion bigger” or “remove that car in the background”) and see results back in hours instead of waiting weeks for a VFX team’s next iteration.
Decentralized Content Creation and Distribution
Beyond the production process itself, technology is changing who gets to make and monetize content. The rise of decentralized content platforms – often tied to blockchain and Web3 – could dramatically benefit independent creators in the coming years. These platforms aim to “put Hollywood decision-making into the hands of creators and fans” businesswire.com, businesswire.com by removing traditional gatekeepers.
One example is Film.io, a just-launched decentralized filmmaking platform that uses a community-driven model. Creators can submit project ideas (scripts, pitches) to Film.io’s blockchain-based ecosystem, and fans holding the platform’s token vote on which projects they want to see “greenlit.” In this way, a filmmaker with a great idea but no studio connections can rally a global fanbase to support the project. According to Film.io, fans engage by voting and reviewing projects with the native $FAN token, helping surface market-validated projects with pre-established audiences businesswire.com, businesswire.com. In 2024, Film.io even offered grants to top voted projects, and it’s implementing on-chain IP protections for creators (registering their content ideas immutably via VaultLock® tech) businesswire.com, businesswire.com. Over the next three years, we expect more such “DAO-based” studios and funding platforms to emerge, potentially creating a new pipeline for indie films to get made and distributed outside the studio system.
Decentralized streaming and monetization is another frontier. Platforms like FanTV (launched in 2025) are blending blockchain and AI to create creator-centric streaming services. FanTV’s CEO noted the biggest challenges for independent creators are “discovery, distribution and monetization” – in other words, getting seen and getting paid decential.io. To tackle this, FanTV uses token economics to reward both creators and viewers for engagement decential.io.
A simplified view: when you watch content on FanTV, you can earn tokens, and creators earn based on how much their content is watched, with smart contracts ensuring transparent payouts. This “decentralizes the ‘view’ mechanism” so that no single corporation controls the monetization – instead, the community and the algorithms (powered by AI for recommendations) drive what gets popular and funded decential.io. Furthermore, FanTV is building a peer-to-peer content delivery network (in partnership with Huddle01) where users with good internet can become nodes to host/stream content, earning a share of what a data center would – an approach that “takes the money that would go to Amazon’s servers and gives it to the people”, according to Huddle01’s CEO decential.io, decential.io. This hints at a 2025–2028 trend of decentralized distribution networks, which could reduce streaming costs and empower niche content.
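The payout logic such platforms describe can be sketched as a simple proportional split, the way a smart contract might encode it. The pool size and watch-time figures below are invented for illustration:

```python
# Simplified sketch of the transparent-payout idea behind token-based
# streaming: a period's reward pool is split among creators in
# proportion to watch time. A smart contract would encode the same
# rule on-chain; the numbers here are invented for illustration.

def split_pool(pool_tokens, watch_seconds):
    """watch_seconds: {creator_id: total seconds watched this period}."""
    total = sum(watch_seconds.values())
    if total == 0:
        return {c: 0.0 for c in watch_seconds}
    return {c: pool_tokens * s / total for c, s in watch_seconds.items()}
```

Because the rule is deterministic and auditable, any creator (or viewer) can verify their share against the public watch-time log – the "transparent payouts" claim in a nutshell.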
For creators, these developments mean new monetization avenues beyond the classic Netflix deal or YouTube ad revenue. We’re seeing filmmakers sell NFTs that grant perks to fans (e.g. an NFT that gives you producer credit or exclusive behind-the-scenes access). Some indie films have been financed through NFT sales – for instance, the film Calladita was funded by selling NFT art related to the movie, raising over $750k in 2022 and winning awards at Sundance for this innovative approach open.substack.com.
In the next few years, a film’s IP might be partially “owned” by a community of token holders who then have a stake in its success – imagine getting a tiny percentage of the profits because you held a token from day one. This could challenge traditional studio financing, which is often top-down. Instead of a few executives deciding what gets made, thousands of fans could collectively decide (and bankroll it in micro amounts). Audience engagement is built in, since those who fund it are invested (emotionally and financially) in promoting the work. We might also see interactive storytelling tied to these platforms: e.g. a series where token-holding fans vote on plot directions (a kind of decentralized choose-your-own-adventure). This was attempted at a small scale with Web3 projects like The Gimmicks, an animated wrestling show where NFT holders voted on storyline branching. The concept could mature such that by 2027, audience participation in content is normalized – not just commenting on forums, but directly influencing creative outcomes or even co-creating elements (perhaps fans submit designs for a creature in a sci-fi show, and the community’s favorite design – potentially AI-refined – gets used in the episode).
Traditional studios are taking note and starting to experiment themselves (Disney, for one, has explored NFT collectibles for Marvel and Star Wars, and is certainly watching the decentralized trend). Some may incorporate these ideas in distribution – for example, a studio might release a film on a blockchain platform where each view is a micropayment that splits between the studio and the film’s creators immediately via smart contract, reducing reliance on middlemen. Decentralized content creation overall suggests a future media landscape with more voices and diverse content. Independent creators stand to benefit as they can build their own fan communities and finance projects without giving up creative control to a big studio. If successful, this “takes an axe to Hollywood’s barrier to entry,” as Film.io’s founders put it, “transforming the entertainment industry at large” by opening doors to talent that previously couldn’t get a break businesswire.com, businesswire.com.
Implications for Budgets, Timelines and Creative Control
These emerging technologies carry profound implications for how films and shows are budgeted, how quickly they are made, and who holds creative power.
Budgets in many cases could shrink or be reallocated. Expensive line items – building large sets, on-location shoots in remote areas, hiring massive VFX teams for rote tasks – can be trimmed when virtual production and AI can achieve similar results for less. Tyler Perry’s realization with Sora is a prime example: why spend millions constructing a set or flying a crew to a mountain if an AI can generate a convincing setting virtually? aitopics.org
Many productions could see a significant drop in below-the-line costs (physical set costs, travel, some labor) and post-production costs (since AI and real-time workflows prevent the costly “fix it in post” crunch by catching issues earlier or avoiding them). That said, new budget lines will grow: investing in LED volumes, powerful computing for AI rendering, and technical experts to run these systems. In the short term, a virtual production setup might add cost (LED stage rental, etc.), but it often pays for itself by avoiding overruns and enabling more efficient shooting schedules (which saves on crew days and equipment rental in the long run). Timelines and budgets are intertwined – if you can shoot a show in 8 weeks instead of 12 thanks to these tools, that’s a big budget win. Also, consider episodic TV: AI tools could allow showrunners to create more ambitious visuals each week without blowing the budget, or even enable content to be produced on the fly responding to audience feedback (imagine a late-night show able to insert an AI-synthesized comedy sketch about that day’s news – as some have started doing with deepfake parody videos).
Timelines are poised to accelerate across the board. Pre-production can be condensed when AI previsualization answers many questions early. The production itself becomes more streamlined with virtual sets that can be reset at the push of a button and AI assisting with continuity (fewer reshoots needed). Post-production, which used to be a bottleneck for heavy VFX projects, might no longer dictate release dates so strictly – if AI can cut VFX and editing times by 50%, studios might move up premiere dates or take on more projects in parallel. We might even see a move toward real-time filmmaking for certain content: for instance, a director could film actors on an LED stage while an AI simultaneously composites other elements and an editor starts cutting scenes moments after they’re shot. By 2028, the concept of “live post-production” may emerge, where the gap between production and post is so blurred (thanks to on-set rendering and AI) that a near-final cut exists just days after principal photography wraps. This agility could enable more serialized storytelling or rapid iteration (a filmmaker could shoot different endings and have finished versions of each to test with audiences, something too costly/time-consuming to do traditionally).
Creative control stands to both expand and face new challenges. On one hand, independent creators gain more control than ever. The democratization effect means a small team can realize a grand vision without needing a giant studio’s resources – which also means they don’t have to cede creative control in exchange for those resources. As Runway’s CEO observed, these AI tools allow “people who never thought they’d have access” to high-end production to create “high-quality content,” effectively “making storytelling more accessible” and enabling a “huge new market to emerge.” amplifypartners.com, amplifypartners.com
The result could be an explosion of original voices and niche content for underserved audiences, since creators can bypass the traditional gatekeepers. We already see YouTube and streaming enabling niche content; with these advancements, the quality gap between a fan-made film and a Hollywood film closes further, meaning fan creators or startups can directly compete on creative ideas rather than being limited by execution quality.
On the other hand, established studios and creatives will have to navigate creative risks with AI involvement. There’s a valid fear that over-reliance on AI (especially in writing or ideation) could lead to homogeneous content – if everyone’s script assistant draws from the same corpus of successful movies, you might get clichéd outputs. Maintaining human originality and quirkiness will be key; AI should be a tool, not the source of the art. Artist and guild concerns also come into play: writers’ and actors’ unions fought hard in 2023 to put guardrails on AI – for instance, ensuring that studios can’t use AI to generate scripts without crediting and paying writers, or scan background actors to reuse their likenesses endlessly without consent. The new union contracts now include provisions about digital replicas and AI usage (e.g. requiring performer approval and payment for digital doubles) sagaftra.org. These policies will influence creative control too: actors might negotiate the right to veto AI uses of their performance, directors might insist on a clause that AI-generated material is subject to their approval to avoid studio meddling, etc.
Traditional studios may face challenges to their control as well. If decentralized platforms and indie AI-enabled productions start scoring successes, talent may choose to work outside the studio system, and audiences might follow. Studios will need to adapt by either adopting these technologies themselves (which many are, setting up virtual production stages and in-house AI research labs) or by offering something unique (huge marketing reach, established IP, etc.). A possible outcome is studios leveraging AI to double down on franchise content, pumping out more Marvel/Star Wars at a faster clip (since AI can help scale content production), while independent creators use the tech to offer fresh alternatives. This could set up a battle of quantity vs quality, or formula vs innovation. However, we might also see collaboration: studios partnering with internet creators (already happening with things like Netflix buying indie films or hiring YouTube filmmakers for projects). New forms of creative partnerships may arise, where a community creates a popular proof-of-concept (like a crowd-created short film with AI effects) and a studio then gives it a bigger platform (similar to how Logic Breakdown on YouTube used AI to make an anime-style short film that went viral, and now those creators might get deals).
For audiences, the future holds more interactive and personalized content. With AI, it’s conceivable that a film or series can be tailored to viewer preferences – for instance, slight variations in editing or even plot depending on viewer feedback or profile. Creators might release multiple cuts and let audiences choose (somewhat like interactive films, but more seamless). Monetization could also become more direct: fans might pay via crypto tokens for bonus scenes or to unlock an alternate ending rendered by AI on-demand. Audience engagement in the creative process, as mentioned, means fandoms become part of content creation communities, blurring the line between creator and consumer.
In summary, the next three years will likely see leaner productions and faster turnarounds, with empowered creators at the helm – but also a period of negotiation and adjustment as the industry figures out fair and ethical use of these powerful tools. Independent filmmakers stand to gain the most by challenging the old guard with bold experiments that succeed on shoestring budgets, whereas traditional studios will be pushed to innovate or risk seeming stale. It’s an era where a visionary with a laptop and AI access could launch the next big franchise from their garage, and where the definition of “film production” expands to include global fan communities and intelligent algorithms as part of the studio. The storytelling landscape by 2028 will not only have new technologies, but entirely new workflows, business models, and creative possibilities – truly a new era of production.
Examples of Pioneering Projects and Companies
To illustrate this evolution, here are a few of the key projects and players experimenting with these methods today:
Lucasfilm’s The Mandalorian – Pioneered large-scale virtual production using ILM’s StageCraft LED volume, proving the viability of in-camera VFX for a major series and inspiring dozens of other productions to follow ilm.com, ilm.com. (Now used in projects like The Batman (2022) and the upcoming Percy Jackson series for dynamic backdrops ilm.com.)
OpenAI’s Sora – A cutting-edge AI text-to-video generator that launched in 2025 to select users. Sora can create short, film-like video clips from prompts, which industry figures like Tyler Perry predict could “transform the film…industry” by enabling virtual set creation and rapid prototyping of scenes aitopics.org.
Runway ML – A startup offering AI tools for creators. Its Gen-1 and Gen-2 models enable both video-to-video transformations and text-to-video generation. Runway’s tools have been used in real productions (e.g. by Everything Everywhere All At Once’s team to accelerate VFX amplifypartners.com, and by The Late Show editors for instant VFX edits amplifypartners.com). Runway also runs an AI Film Festival, highlighting short films made with AI, demonstrating the creative potential of these techniques.
Wonder Dynamics (now Autodesk Flow Studio) – An AI platform automating VFX work, particularly inserting CG characters into live action. It handles motion capture from regular footage as well as lighting/compositing, reportedly automating 80–90% of the manual work invadeai.com. Used by Boxel Studios to create complex VFX shots on a TV timeline awn.com, indicating how AI can help VFX houses deliver faster.
SamurAI – An open-source AI tool, built on Meta’s Segment Anything Model, for motion tracking and rotoscoping in video thevfxmedia.com. SamurAI is being explored in post-production to quickly isolate actors or objects without a green screen, which could streamline both VFX and editing tasks (for example, replacing a sky or background behind an actor with a few clicks).
Film.io – A decentralized filmmaking DAO platform launched in 2024 that lets creators pitch projects to a community of fans for voting and funding businesswire.com. It’s experimenting with blockchain governance in content creation, and has attracted thousands of users in its early stage.
FanTV – A new blockchain-based streaming service that uses AI for content recommendations and a token system to reward viewers and creators for engagement decential.io. Currently boasting 7+ million users decential.io, FanTV is testing the model of “watch-to-earn” and aims to solve indie creators’ distribution woes through decentralization.
Netflix and Disney’s Virtual Production initiatives – Most major studios now have virtual production divisions. Netflix built an LED stage in 2023 in Los Angeles and is using it for upcoming originals (they’ve noted improved production speed and creative flexibility as reasons). Disney is expanding StageCraft to more soundstages for Marvel and Star Wars productions, integrating Unreal Engine artists directly into its film crews.
Innovative Indie Projects:
“Artificial” – a Web series that used audience input (via Twitch) live to influence the storyline, a precursor to interactive AI-driven narratives.
“Raster” (hypothetical future example) – an independent animated short entirely generated by AI models (images, movements, voices) overseen by a single artist, making rounds at festivals by 2026 and sparking debate on what qualifies as “film”.
The Sandbox – Not a film, but a virtual world platform where users create stories and assets. A small studio took characters from a Sandbox game and, using AI video tools, turned it into an animated web series – showing how decentralized, user-generated IP can jump to more traditional media via AI.
Each of these examples, whether big-budget productions or experimental indies, showcases elements of the future workflow: real-time rendering, AI generation, community collaboration, and new monetization models. They are trailblazers lighting the path for the broader industry.
Conclusion: A New Era of Production
By 2028, the production of films and television is likely to look very different from today’s norms. The convergence of AI and real-time technologies will make the creative process more fluid and less constrained by physical or financial limits. A creator with a bold idea will have an expanded toolbox: AI assistants to help write and visualize it, virtual stages to film it with any backdrop imaginable, and AI-driven post pipelines to polish it – all with unprecedented speed. The balance of power could shift toward artists and audiences: artists because they can do more with less (needing studios mainly for distribution hype, not the entire creation process), and audiences because they can directly support and influence the stories they care about via new platforms.
Traditional studios and craft professionals are not obsolete by any means – in fact, those who embrace these tools can amplify their artistry (a virtuoso cinematographer with an LED volume can paint with light in new ways, a skilled editor with AI at their fingertips can try bolder cuts knowing the AI safety net is there to mend continuity). But the old hierarchical, labor-intensive production model will be challenged by a leaner, tech-infused model. We’ll likely see a blend of both in the industry: some projects will stick closer to practical filmmaking for artistic reasons, while others will fully digitize production. And some fundamentally new formats will arise – perhaps AI-generated interactive films where a movie’s storyline or visuals regenerate differently on each viewing, or massively collaborative cloud productions with thousands of contributors worldwide.
Audiences can look forward to content that is more immersive, more frequent, and more personalized. Niche voices that never had a chance in the old system may produce cult hits with fan communities. Big franchises may experiment with letting fans “into the sandbox” of the story via virtual experiences or voting mechanisms. And with distribution decentralizing, viewers might even earn rewards or ownership from the content they love (turning fandom into a participatory economy).
Inevitably, there will be growing pains. Issues of quality control, ethical use of AI (deepfakes vs creative expression), and equitable compensation will need constant attention. The industry will have to refine norms about crediting AI contributions and protecting human creators’ rights and income – debates already raging in courtrooms and guild meetings theguardian.com, theguardian.com. Moreover, not every experiment will succeed – some AI-generated content will flop or feel gimmicky, and some traditionalists will resist these changes.
Yet, if history is any guide, filmmaking has always been shaped by technology: sound, color, CGI, digital editing – each stirred initial fear but ultimately empowered new forms of storytelling. AI and virtual production are poised to be similar catalysts. In the next three years, they won’t replace the human imagination at the heart of filmmaking; rather, they will turbocharge it, allowing creators to realize their visions in ways previously impossible. The result will be an industry that is more inclusive, innovative, and agile – one where the only limit truly is the imagination.
Sources:
Milmo, Dan. “Sora, OpenAI’s video generator, has hit the UK. It’s obvious why creatives are worried.” The Guardian, Feb. 28, 2025 aitopics.org, theguardian.com.
Futuresource Consulting – Virtual Production & XR Report, Jul. 12, 2024 (via futuresource-consulting.com) futuresource-consulting.com, futuresource-consulting.com.
Amplify Partners. “How Runway revolutionized film production with AI.” (Case Study) 2023 amplifypartners.com, amplifypartners.com.
Designboom. “Runway generative AI tool ‘Gen-2’ makes realistic movies with just words.” Mar. 28, 2023 designboom.com.
VFXMedia. “Samurai AI: The Future of Real-Time Motion Tracking.” Nov. 28, 2024 thevfxmedia.com, thevfxmedia.com.
Business Wire. “SXSW: Film.io Announces Public Launch… to Empower Independent Filmmakers & Democratize the Film Industry.” Mar. 8, 2024 businesswire.com, businesswire.com.
Decential.io. “FanTV Is Betting on Blockchain and AI to Help Video Artists Keep More of What They Earn.” Feb. 4, 2025 decential.io, decential.io.
Reddit r/filmmaking discussion on AI-assisted storyboarding (user perspectives), Aug. 2024 reddit.com.
InvadeAI. “Revolutionizing VFX with AI – Wonder Dynamics,” 2023 invadeai.com.
TechCrunch. “How The Mandalorian and ILM invisibly reinvented film and TV production.” Feb. 20, 2020 techcrunch.com, techcrunch.com.

