
The growing use of editing tools and artificial intelligence is forcing courts to reconsider how visual evidence should be handled and evaluated.
“Video footage of a crime has become the expectation,” said Jim Williams, senior partner with Burnett & Williams. “Ever since the Rodney King case, the jury expects it.”
That expectation has faced renewed scrutiny following the shootings of Renée Good and Alex Pretti by U.S. Immigration and Customs Enforcement agents in Minneapolis. Debate continues within the legal community over when video evidence should be released; how it should be edited, contextualized and understood; and where courts should draw the line between permissible enhancement and impermissible manipulation, especially as AI becomes more common in video processing.
Williams said the legal rules surrounding video capture are clear. “The number one guideline around video footage is that taking video as a bystander is legal,” he said. “This is first-year law school: If you’re out in public, generally, anyone can record you.”
He added that bystanders may record police or federal officers as long as the person taking the video does not interfere in police activity. “You can record from a distance,” Williams said. “If you get in their face, you could argue that is an obstruction of justice.”
In Glik v. Cunniffe (2011), the First Circuit Court of Appeals held that private citizens possess a First Amendment right to record police officers carrying out public duties.
However, whether video footage can be edited or enhanced and how it should be interpreted is less settled.
Sandra Ristovska, founding director of the Visual Evidence Lab and an associate professor of media studies at the University of Colorado Boulder, said a video’s power lies in its ability to make viewers feel like they “were there when it happened.” Yet, she cautioned, this can lead to varying interpretations of the same video.
“Video has the ability to make us feel like we are witnessing an event firsthand,” she said. “But decades of research show that how people interpret images depends on technological, social and cognitive factors.”
Ristovska said video does not simply capture reality but frames it, as camera angle, lighting, playback speed and other editing choices influence perception. Slow-motion playback, for example, can make actions appear more deliberate than they do in real time, she said.
“There’s still an assumption that video speaks for itself,” Ristovska said. “In reality, any video can reveal and conceal at the same time.”
The value of video evidence is especially clear when the same incident is captured multiple times from different angles, she noted. In the recent Minneapolis shootings of Renée Good and Alex Pretti by federal immigration agents, footage recorded by bystanders was widely reviewed by journalists and the public, quickly producing an account of the incidents that did not align with federal statements.
AI has further complicated how video is managed and interpreted. AI tools can clarify grainy footage or enhance audio for court, but they can also be used to alter or fabricate images.
The difficulty, Ristovska said, is that jurors and judges may struggle to distinguish acceptable AI enhancement from improper manipulation.
Increased awareness of the risks of video manipulation creates an additional problem: heightened skepticism toward authentic footage. “The more we talk about high-tech manipulation, the greater the risk that people will discount real video evidence,” she said.
Williams concurred: “The biggest problem anymore is, ‘Is this legitimate or not legitimate?’”
Courts rely on authentication standards that require a party to establish when and where a video was recorded and that it accurately depicts the events at issue. Those rules, mostly developed in response to the use of photography in court, provide little guidance on how AI-powered video enhancements should be evaluated or disclosed.
Ristovska and her Visual Evidence Lab team recommend clearer and more consistent standards, including documentation of any edits or enhancements, training for judges on how video and AI tools work, and jury instructions that acknowledge the limits of visual evidence.
“For justice to be fair and equal,” she said, “courts have to recognize that video evidence is powerful, but it is not neutral.”