The AI Screenwriting Dilemma
Aaron Persily | May 7, 2026
In May 2023, Hollywood stopped making movies when the Writers Guild of America launched its first strike in fifteen years. One of the WGA's central demands was restrictions on artificial intelligence. After a 148-day strike, the WGA secured an agreement stating that studios could not use AI-generated material as a substitute for a writer's work; moreover, AI-generated text could not be considered "literary material" under the MBA (WGA, 2023). The collective bargaining agreement delivered broad improvements for writers and made clear that an AI system could not be treated as a writer. On paper, it was a victory for writers everywhere. In practice, however, the agreement has proven nearly impossible to enforce, and it is widely regarded as an insufficient step because it leaves so many loopholes around AI.
The 2023 agreement was more carefully worded than most coverage suggested. Studios may ask writers to work with AI tools, but the agreement requires the writer's consent, and the writer must still be compensated under full WGA minimums. If a studio violates the AI provision, the writer must file a grievance that goes to arbitration, and the burden of proof falls entirely on the writer (NLRB, 2024).
This is where the framework starts to fall apart. To prove a studio used AI to generate material that should have been written by a human, a writer or the WGA needs concrete evidence. Detecting AI-generated text is extremely difficult, and there is no definitive way to prove whether a text is AI-generated. Texts written entirely by humans can still be flagged as AI-written by tools like GPTZero and QuillBot's AI detector. Error rates for these tools vary widely and can exceed 50%. AI detection systems work by identifying statistical patterns in writing that are common among AI models. Against fully AI-generated text, the detectors perform reasonably well, but when a text is even partially written by a human, their accuracy drops. Every pass of human editing degrades a detector's confidence, and there is currently no reliable method for distinguishing AI writing from human writing.
The main challenge is that AI and human writing increasingly overlap in style as the models improve. Labor law scholars distinguish between de jure and de facto protection, and this is exactly where that distinction bites. The WGA has de jure protection: the restriction on AI exists on paper as a contractual right. What it lacks is de facto enforcement: the practical ability to act on a violation. The gap between these two forms of protection is precisely what allows studios to operate freely and continue using AI in screenwriting without consequence. If the legal standard for an AI labor violation requires proof that AI was used, but there is no reliable way to produce that proof, then enforcement collapses. The NLRA was designed for a world where violations left discoverable evidence; undetectable AI use breaks that assumption. A studio that quietly runs drafts through an AI model and polishes them afterward leaves no trace.
The most promising fix bypasses detection entirely. The Writers Guild of America could demand full disclosure of every AI tool and every draft used in producing a final script, so that AI use is documented rather than inferred (Authors Guild, 2023). Disclosure requirements already exist in other regulated industries for exactly this reason: a pharmaceutical company does not prove after the fact that its drug was tested safely; it must document the testing as it happens. Some legal scholars have proposed that studios compensate writers regardless of whether AI is used, but this would still push many writers out of work and hollow out the craft of screenwriting itself. The WGA's 2023 victory was progress, but it was also just the beginning of a much longer fight. Writers' rights are not guaranteed to survive the next contract cycle; writers must keep fighting for regulated AI use in studios and for full transparency.
Works Cited
Authors Guild. (2023). AI and the future of authorship. Authors Guild.
National Labor Relations Board. (2024). About the NLRA. NLRB.gov.
Writers Guild of America. (2023). 2023 MBA summary: Artificial intelligence. WGA.