
The strikes of 2023 put AI firmly on the negotiating table. The agreements that followed, covering consent, residuals, and synthetic likeness, were a start. But for post production and VFX teams, the regulatory landscape around AI is still evolving fast, and the contracts your facility relies on today will likely change significantly in the next 12-18 months.
This isn't just an HR or legal question. It directly impacts how workflows are built, how assets are managed, and how consent is tracked across the production pipeline. In short, AI in post production is shifting from a creative tool to a regulated workflow requirement.
The WGA and SAG-AFTRA deals established some foundational protections. SAG-AFTRA's agreement introduces informed consent and compensation requirements around the use of digital replicas and synthetic performers, while the WGA agreements include provisions around disclosure and the use of AI-generated material in writing workflows.
Those consent obligations attach to specific uses of a performer's likeness or voice in AI-generated content. Broader questions, such as whether training AI models on protected material is permissible, remain highly contested and legally unsettled.
For VFX teams, the implications are more nuanced. AI tools are increasingly used in de-ageing, background population, environment extension, and even previsualization. Some of these workflows may involve likeness, biometric, or performance-adjacent data in ways that existing agreements did not fully anticipate.
The key question for post facilities right now isn't whether AI will be used - it's whether the workflows you're building today are defensible under the agreements coming next.
Beyond industry agreements, legislative frameworks are catching up quickly. The EU AI Act entered into force in August 2024, with the majority of its requirements, including transparency obligations on AI-generated content, applying from August 2026. Further implementation milestones extend into 2027 and beyond, and the regulatory picture continues to develop. Media companies operating in the EU should monitor this closely and work with legal counsel to understand where their obligations sit.
In the US, state-level regulation is becoming increasingly important. California has introduced expanded protections around digital replicas, while New York's right-of-publicity laws remain relevant to AI-generated likenesses and synthetic performers. In practice, many compliance requirements are also driven by studio, streamer, and distributor delivery standards.
For post production, the practical impact includes:
Provenance tracking. Studios, streamers, and rights-holders are placing increasing emphasis on documenting what AI tools were used, on which assets, and under what conditions. This means the finished file can no longer just be the finished file - it needs a paper trail.
Consent verification at delivery. Deliverables that include AI-generated or AI-modified likenesses will increasingly require accompanying consent documentation. Facilities that can't provide this may find themselves unable to clear content for distribution.
Synthetic performer registries. Studios and facilities will need internal systems to track approved digital replicas, consent terms, and usage restrictions across productions. Sohonet Core, Sohonet's production asset management platform, provides the structured asset tracking and metadata management that makes building and maintaining these registries practical at scale.
The operational impact is starting to show up in three areas.
Review and approval. When AI-generated content moves through the post pipeline, approval workflows need to be tighter. Who signed off? When? On which version? The days of informal review via email attachments or shared drives are becoming a liability. Teams are moving toward structured review processes where every approval is logged and attributable. ClearView Flex, which supports watermarked, access-controlled sessions, helps create the kind of attributable audit trails that legal and compliance teams increasingly expect.
Controlled delivery. Sending AI-modified content to partners, co-producers, and licensors requires more than a fast transfer. Facilities need to know what was sent, to whom, and when, with version control that holds up if a dispute arises. Sohonet FileRunner enables tracked, browser-based transfers that support stronger delivery auditability and documentation.
Asset classification. Not all content carries the same regulatory risk. AI-composited crowd scenes have different obligations than AI-generated lead character likenesses. Facilities that are thinking ahead are classifying assets by their AI involvement at ingest, so that downstream handling (review, delivery, archiving) can be governed appropriately. Sohonet Core supports this by providing a centralised, version-controlled register where assets can be classified, tracked, and accessed across departments with the metadata intact.
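Classification at ingest can be as simple as a controlled vocabulary plus a policy table mapping each class to the documentation it triggers downstream. The sketch below is an assumption-laden illustration: the category names and required-document lists are invented, and the actual obligations come from your agreements and delivery specs, not from this code.

```python
from enum import Enum

class AIInvolvement(Enum):
    """Illustrative AI-involvement classes assigned at ingest."""
    NONE = "none"
    AI_ASSISTED = "ai_assisted"                    # e.g. AI denoise, roto assist
    AI_MODIFIED_LIKENESS = "ai_modified_likeness"  # de-ageing, face replacement
    AI_GENERATED = "ai_generated"                  # fully synthetic elements

# Hypothetical policy table: which documentation each class requires at
# delivery. Real requirements depend on the governing agreements.
REQUIRED_DOCS = {
    AIInvolvement.NONE: [],
    AIInvolvement.AI_ASSISTED: ["tool_log"],
    AIInvolvement.AI_MODIFIED_LIKENESS: ["tool_log", "consent_record", "approval_trail"],
    AIInvolvement.AI_GENERATED: ["tool_log", "provenance_manifest", "approval_trail"],
}

def docs_required(level: AIInvolvement) -> list[str]:
    """Look up the delivery documentation a given class triggers."""
    return REQUIRED_DOCS[level]

likeness_docs = docs_required(AIInvolvement.AI_MODIFIED_LIKENESS)
```

The advantage of deciding this at ingest is that the question "what paperwork does this shot need?" is answered by a lookup, not by a retrospective investigation during delivery QC.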
Key compliance requirements for post production teams
If your facility hasn't already audited its AI tool stack against current agreements, that's the first step. The goal isn't to stop using AI (the efficiency gains are real and the competitive pressure is significant) but to understand where your exposure sits.
The facilities that will navigate this well aren't the ones who avoid AI. They're the ones who've built workflows that are transparent, auditable, and defensible. That means understanding what the agreements require, building review and delivery processes that generate the right documentation, and staying close to how the regulatory landscape is developing.
For many teams, AI compliance is shifting from a policy discussion to a delivery and documentation requirement. The creative industry has always evolved. AI is the current inflection point, and post production is, as usual, where the complexity lands.
AI in post production is moving from experimentation to regulated infrastructure. As agreements and legislation evolve, compliance will depend not just on policy, but on how workflows are designed. Post production teams that prioritise transparency, auditability, and controlled delivery today will be best positioned as regulatory expectations continue to tighten.
Current agreements focus heavily on protecting performers and creative workers, though the exact protections differ across unions and contracts. VFX facilities should audit their AI tools for any use of likeness, voice, or biometric data. Proactive documentation of AI use and consent is increasingly expected by studios and streamers.
Regulation comes from multiple sources: union agreements, rights-of-publicity laws, studio and platform requirements, and broader legislative frameworks such as the EU AI Act. The picture varies by territory and is changing quickly, so post teams should work closely with their legal counsel and studio partners to stay current.
At minimum: log which AI tools were applied to which assets, capture consent documentation for any likeness or voice data used, and maintain version-controlled records of review and approval decisions. Structured review platforms and tracked delivery systems help create defensible records. Production asset management tools like Sohonet Core provide a centralised register for asset-level tracking across the production pipeline.
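One way to make such records defensible is an append-only log where each entry hashes the previous one, so deletions or edits break the chain and are detectable. This is a minimal sketch under assumed field names, not a prescribed format or any vendor's implementation:

```python
import hashlib
import json
from datetime import datetime, timezone

def log_ai_use(log: list, asset_id: str, tool: str, operation: str) -> dict:
    """Append a tamper-evident entry: each entry embeds a SHA-256 hash of
    the previous entry, so any gap or alteration breaks the chain."""
    prev_hash = log[-1]["entry_hash"] if log else "0" * 64
    entry = {
        "asset_id": asset_id,
        "tool": tool,                 # which AI tool was applied
        "operation": operation,       # what it was used for
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prev_hash": prev_hash,
    }
    # Hash the entry's own canonical JSON form and store it alongside.
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)
    return entry

# Example usage -- asset and tool names are invented.
audit_log: list = []
log_ai_use(audit_log, "SHOT_0120", "de-age-tool-v2", "face de-ageing")
log_ai_use(audit_log, "SHOT_0120", "upscaler-v1", "2K to 4K upscale")
```

Whether a facility uses a hash chain, a versioned database, or a platform's built-in audit trail, the property that matters is the same: the record shows what happened and resists quiet revision after the fact.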
A synthetic performer is a digitally created, AI-generated performer that is not recognisable as any identifiable real person; this is distinct from a digital replica, which recreates a specific actor's likeness or voice. Current agreements focus on screen talent, but definitions are expanding as AI capabilities grow.
In the short term, compliance workflows add steps. In the medium term, facilities that build structured review and delivery processes will actually move faster, because they won't be going back to fix documentation gaps before a deliverable can clear legal.
