When the Dead Speak: AI, Ethics, and the Voice of a Murder Victim
By Skeeter Wesinger
May 7, 2025
In a Phoenix courtroom not long ago, something happened that stopped time.
A voice echoed through the chamber—steady, direct, unmistakably human.
“To Gabriel Horcasitas, the man who shot me: it is a shame we encountered each other that day in those circumstances.”
It was the voice of Chris Pelkey, who had been dead for more than three years—killed in a road rage incident. What the judge, the defendant, and the grieving family were hearing was not a recording. It was a digital recreation of Chris, constructed using artificial intelligence from photos, voice samples, and memory fragments.
For the first time, a murder victim addressed his killer in court through an AI recreation.
Chris’s sister, Stacey Wales, had been collecting victim impact statements. Forty-nine in total. But one voice—the most important—was missing. So she turned to her husband Tim and a friend, Scott Yentzer, both experienced in emerging tech. Together, they undertook a painful and complicated process of stitching together an AI-generated likeness of Chris, complete with voice, expression, and tone.
There was no app. No packaged software. Just trial, error, and relentless care.
Stacey made a deliberate choice not to project her own grief into Chris’s words. “He said things that would never come out of my mouth,” she explained. “But I know they would come out of his.”
What came through wasn’t vengeance. It was grace.
“In another life, we probably could’ve been friends. I believe in forgiveness and in God who forgives. I always have and I still do.”
It left the courtroom stunned. Judge Todd Lang called it “genuine.” Chris’s brother John said it brought waves of healing. “That was the man I knew,” he said.
I’ve written before about this phenomenon. In January, I covered the digital resurrection of John McAfee as a Web3 AI agent—an animated persona driven by blockchain and artificial intelligence. That project blurred the line between tribute and branding, sparking ethical questions about legacy, consent, and who has the right to speak for the dead.
But this—what happened in Phoenix—was different. No coin. No viral play. Just a family trying to give one man—a brother, a son, a victim—a voice in the only place it still mattered.
And that’s the line we need to watch.
AI is going to continue pushing into the past. We’ll see more digital likenesses, more synthesized voices, more synthetic presence. Some will be exploitative. Some will be powerful. But we owe it to the living—and the dead—to recognize the difference.
Sometimes, the most revolutionary thing AI can do isn’t about what’s next.
It’s about letting someone finally say goodbye.
Let’s talk:
➡ Should AI have a role in courtrooms?
➡ Who owns the voice of the deceased?
➡ Where should we draw the ethical boundary between tribute and manipulation?