
Documentary Producers Set Best Practices for Generative-AI Use in Film

Brian Welk

A group of documentary filmmakers, producers, and archivists has written a series of guidelines on how they believe filmmakers should — and should not — use generative AI in their documentary movies.

While the AI guidelines for many entertainment folks may go something like this: “never, ever, a billion times no,” the reality is that generative AI has already crept into documentary filmmaking and is likely here to stay. An organization called the Archival Producers Alliance has outlined its best practices for filmmakers when it comes to handling consent, being transparent, and preserving history and truth.


“We recognize that AI is here, and it is here to stay. And we recognize that it brings with it potential for amazing creative opportunities,” APA co-founder Jennifer Petrucelli (“Crip Camp”) said at the IDA’s Getting Real event on Wednesday. “At the same time, we want to really encourage people to take a collective breath and move forward with thoughtfulness and intention as we begin to navigate this new and rapidly changing landscape.”

The initial guidelines developed together by Petrucelli and co-founders Stephanie Jenkins and Rachel Antell — a nine-page document obtained by IndieWire — are just a draft at this point, with the group intending to formally publish them in June. (The APA will be soliciting more feedback on the proposals in the meantime.) And these are just suggestions to offer guidance for the industry, not hard and fast rules or regulations against the use of AI.

While generative AI has its uses, the proposals make the case that primary sources of original images and video footage should come first. The APA is a group of several hundred archival producers who aim to uphold “truthfulness” and journalistic integrity in documentaries.

In their view, it’s OK to use AI to lightly touch up or restore an image (the group distinguishes between “GenAI” and other machine learning used for workflow improvements), but filmmakers should think very carefully about anything that would newly create material, alter primary sources, or “change their meaning in ways that could mislead the audience.” The APA acknowledges, too, that even archival footage can be biased or problematic, but says the source material’s intent can be known and put into context. AI, the guidelines say, has “no accountability of authorship.”


If filmmakers must use AI because no primary source is available, it’s important to account for the bias that could be implicit in the training data, to take special legal care, and to consider how any generated images could circulate in the world and be “in danger of forever muddying the historical record,” the draft reads. It does, however, argue there are positive use cases for AI in documentaries, such as protecting an individual’s identity.

The group also believes filmmakers should get “additional consent” from subjects when appropriate about how AI is being used, mimicking some of the language actors have pushed for in their contracts.

To that end, the APA’s guidelines advocate for transparency: disclosing to fellow filmmakers, to the subject, to an estate, and especially to viewers that what they are hearing or seeing is AI-generated. Just as with a reenactment of an event in a documentary, it should be abundantly clear both to everyone involved in the production (through real-time communication and time-codes in editing) and to everyone watching on screen (through appropriate lower-thirds or visual cues), and filmmakers should apply “the same intentionality” they would to other material. For instance, the APA says filmmakers should be especially transparent if they intend to use AI to make a real person do or say something they didn’t actually do, to create a realistic-looking historical event that never happened, or to alter footage of a real place or event.

One controversial example is filmmaker Morgan Neville’s use of AI to create a digital voice replica of Anthony Bourdain for the 2021 documentary “Roadrunner.” An AI voice that sounded like Bourdain read aloud a few lines from Bourdain’s journals — words he had never actually spoken or put to tape. Neville revealed his use of the technology after the fact in interviews, but it wasn’t clear to viewers that what they were hearing wasn’t Bourdain. A less controversial example came from “The Andy Warhol Diaries,” which used an AI-generated Warhol voice only after his estate gave approval.


The APA was founded in 2023 and now has 300 members, including many who have worked on Oscar-nominated documentaries. In November, the group published an open letter warning against AI and calling for industry standards around the use of generative material, particularly in documentary films.

The group’s possible next steps include seeking endorsement of the proposals from other industry and awards organizations, convening distributors and streamers to gather their opinions, and establishing an “AI Board” to review changes in the space annually.

THR first reported the news of the guidelines.
