Is this an ethical move by the documentary filmmaker?
Many people are looking forward to the release of the new documentary, Roadrunner, which tells the story of the life and tragic death of Anthony Bourdain. Filled with never-before-seen footage, the film has generated palpable enthusiasm all over the world, and many of the people Bourdain touched while hosting his travel shows were incredibly excited to see it.
A lot of work went into making this film, and its director, Morgan Neville, said: “Those were the toughest interviews I’ve ever done, hands down. I was the grief counselor who showed up to speak to everyone.”
But a recent interview in The New Yorker upset a lot of people.
In the piece, Neville talked about how he and his team produced the voiceover for some parts of the movie.
According to the article:
For the film, Neville and his team stitched together clips of Bourdain’s narration from television, radio, podcasts, and audiobooks. “But there were three quotes that I wanted his voice for,” Neville explained. He gave a software company about a dozen hours of recordings and said, “I made an AI model of his voice.” In a world of computer simulations and deepfakes, the voice of a dead man speaking his own words of despair is hardly the most dystopian use of the technology. But the seamlessness of the effect is eerie. “You probably don’t know what the other lines spoken by the AI are, and you’re not going to know,” said Neville. “We may have a documentary ethics panel on that later.”
This has raised red flags for many viewers, who would like that documentary ethics panel now.
Showing footage and unseen clips of the deceased chef is one thing, but having him posthumously recount parts of his own eulogy thanks to artificial intelligence seems dystopian and grim. Especially when the ethos of his shows No Reservations and Parts Unknown seemed to emphasize the human experience and the real connections people around the world can have. Using a computer to mimic Bourdain seems insincere for this reason.
Neville admits in the article that he never met or knew Bourdain, which is fine if we want a documentary to be objective, but one has to wonder how much he actually learned about the man if he believes Bourdain would have been okay with a computer recreating his voice.
Without trying to sound cliché, this also raises many concerns about the technology’s slippery slope. How can we be sure that what is said in a documentary is true when we have no proof of who is actually saying it? And what’s stopping filmmakers from recreating a lifelike Bourdain to say and do things on screen that he never did?
What are your feelings on the subject? Should the filmmakers have replicated Bourdain’s voice in the documentary?
Sound off in the comments.