Morris Chapdelaine always has an impressive stack of scripts on his desk. As an independent producer, he reads about three a week and turns the rest over to interns and film students, who return detailed coverage reports. But he struggles to get through them all.
At a film festival, friends suggested he look into artificial intelligence to help with his workload. “I was a bit wary of anything related to AI,” he says. “Some things about it scare me.”
But Chapdelaine did some research and eventually signed up for Greenlight Coverage, which uses large language models to summarize scripts and rate things like plot, character arcs, pacing, and dialogue on a scale of 1 to 10. It even gives a verdict: pass, consider, or recommend.
He found the AI’s feedback more honest than human notes, even his own, and it doubled his reading pace.
“It’s a real time saver,” he says. “And it’s getting better and better.”
If AI does anything well, it’s summarizing written documents. So, of all the jobs in development, perhaps the most vulnerable is the very first: the script reader. The industry’s first gatekeeper may one day be software.
In fact, machines already play a role. At WME, agents and assistants use ScriptSense, another AI platform, to sort submissions and track client work. Aspiring screenwriters are also turning to AI tools like ScreenplayIQ and Greenlight to provide (sometimes overly flattering) feedback on their drafts.
At the big studios, human story analysts are still digging through piles of submissions, as they have for 100 years. But as AI creeps into their workflows, many are worried about their jobs.
Jason Hallock, a story analyst at Paramount, recalls his unsettling early experiences with ChatGPT, the bot that launched the current AI frenzy. “How soon will I be replaced?” he wondered. “Is it six weeks? Or six months?”
Working with the Editors Guild, which represents about 100 unionized story analysts, he decided to find out. Earlier this year, he set up an experiment: he asked AI tools to cover a set of scripts, then compared their reports to coverage generated by humans. It was a test to see whether he and his colleagues could compete.
Since the dawn of Hollywood, story analysts have been its threshers, separating the wheat from the chaff. Proponents of AI argue that algorithms can make this process more efficient, more objective, and therefore more fair, allowing new voices to be heard instead of relying on readers who bring their own subjective tastes to their work.
But something could also be lost. A human reader is the first to sense whether a script has potential, whether the characters are endearing and whether the story sweeps you away and has something new to say. Can AI do that?
“The biggest thing I look for is ‘Do I care?'” says longtime Warner Bros. story analyst Holly Sklar. “An LLM doesn’t care.”
Yet AI seems to be coming regardless. So rather than ignoring it, some try to understand it.
“No one wants to lose their job,” says Alegre Rodriquez, an Editors Guild analyst who participated in Hallock’s study. “We’re not burying our heads in the sand and pretending it doesn’t exist, and we’re not waiting for them to hand us a pink slip. I think people are dusting themselves off and saying, ‘How can I stay in this game?’”
Kartik Hosanagar is a business professor at Wharton and an internet marketing entrepreneur. He is also a film enthusiast with a few scripts of his own: a drama about a startup and a thriller about an assassinated Indian diplomat. As an outsider in Hollywood, he struggled to sell them, which led him to develop an algorithm to level the playing field by objectively evaluating talent. That venture didn’t pan out, but the next one did: Hosanagar developed ScriptSense, now one of the hottest AI script-analysis platforms. The pitch: evaluate scripts 100 times faster.
“There is a huge pile of unread documents,” says Hosanagar. “It’s a great way to clear the pile and figure out where to focus your attention.”
In March, Hosanagar sold his company to Cinelytic, a service provider that integrates ScriptSense into a suite of management tools. “It’s about saving time,” says Tobias Queisser, CEO of the company. “Opportunities are being passed over because there isn’t enough capacity to vet everything. Unknown writers never get a chance because their script isn’t submitted by a top agency.”
ScriptSense provides summaries, character breakdowns, loglines, and casting suggestions. The tone is relatively neutral: it offers neither praise nor criticism.
“Our design philosophy was that we weren’t going to make the decision for you,” says Hosanagar. “You’ll never see a statement that says ‘Amazing!’ or ‘Reject it.’”
Platforms for screenwriters have a different philosophy. Jack Zhang, the founder of Greenlight, believes in the power of AI to make critical judgments. “What AI does really well is be the average of things,” he says. “In terms of feedback, you’re trying to reach a broad audience. You want the average person to like your work. That’s where AI really shines.”
ScreenplayIQ offers qualitative assessments but not numerical scores. The program summarizes plots and assesses the “growth” and “depth” of characters, helping writers see their work from an outsider’s perspective. “Our goal is to meet writers where they feel they’re struggling and need support,” says developer Guy Goldstein. “It’s about holding a mirror up to your script. You wrote it with an intention; it’s about seeing whether that intention came to fruition.”
To test the AI platforms, Hallock needed scripts. Screenwriters may be reluctant to feed their material into AI models because they assume it will be used for training. But a close friend agreed to donate some old scripts to the cause. One was an unproduced script for the Syfy channel about a killer insect. Another was billed as “Heart of Darkness” in space. The author didn’t mind if the AI trained on them.
“He said he hoped it would make AI dumber,” says Hallock.
He gathered a few more and handed them all over to human analysts. He then compared their coverage with loglines, synopses and notes produced by six AI platforms. The results were both encouraging and disturbing.
The AI-generated loglines were indistinguishable from human loglines – maybe even a little better. The differences began to emerge with the AI-generated synopses. “They tend to have an 11th-grade-essay quality,” says Hallock. “They use the same kinds of constructions, like ‘Our story begins with…’”
The more complicated the storyline, the more likely the AI was to make mistakes – wrongly attributing one character’s action to another and hallucinating plot points.
Humans won hands down when it came to notes, which require real analysis rather than simple distillation. The AI programs were “an almost total failure across the board,” Hallock says.
The screenplay billed as “Heart of Darkness” in space received a “recommend” even though it made the rounds in Hollywood 20 years ago and failed to sell. It was a recurring problem. Instead of offering unvarnished critiques, Rodriquez says, the models were “biased in favor of the writer.”
“They would definitely tell you anything that was positive and working well,” she says. “But when it came to tackling the problems, they weren’t necessarily able to identify them.”
In some cases, the AI programs weren’t evaluators; they were cheerleaders.
“It has the quality of a puppy,” Hallock says. “It wants to please you.”
One romantic comedy was praised by the AI as “a compelling, well-crafted coming-of-age story, balancing humor, heartbreak, and the bittersweet realities of life in your 30s. Strong character development makes this a standout work.”
The human reader, meanwhile, was unimpressed: “Familiar Las Vegas girls’-trip template. Potential as light streaming content, especially with Sydney Sweeney. Raunchy, but the jokes don’t hit hard; lacks the bite of ‘Girls Trip’ or ‘Bridesmaids.’”
Zhang defends Greenlight’s taste, saying only 5% of scripts submitted to the platform receive a “recommend.” “That’s very low,” he says. “I wouldn’t say there’s huge inflation.”
Hosanagar says ScriptSense doesn’t make recommendations, in part because the AI can be too sycophantic. “Can AI reach a point where it can be truly critical?” he asks. “I think it can get there. We’re not there yet.”
Many analysts were encouraged by the study, Rodriquez says. AI may be faster, but it can’t pull something original and brilliant out of the pile.
“There will still need to be a human being looking at these reports and reviewing the documents,” she says. “It doesn’t save as much time as they think.”
And those who rely on it too much risk missing out on something great. But the study wasn’t entirely reassuring, concluding: “Studios may be tempted to forgo quality and accuracy in favor of fast, cheap products.”
The creators of the AI models say these fears are misplaced. “It’s not about cutting jobs,” Queisser says. “We see this as an improvement for humans.”
Avail CEO Chris Giliberti says story analysts are already using its AI platform to handle routine tasks, freeing up time for more complex analytical work. “It’s unstoppable,” he says. “The cat’s out of the bag. It makes people’s work and lives easier.”
Sklar, however, is worried about where this is going. Today’s executives value human input. But a younger generation could emerge, one more comfortable with AI summaries. She worries that some in Hollywood – “people who cut costs and don’t understand everything we do” – will come to view her role as superfluous.
“It’s what keeps me up at night,” she says.