BI Debate: Should I make a digital clone of myself using AI? Or is that a terrible idea?
Dara Ladjevardian, CEO of Delphi, and Will Kreth, founder of Human & Digital and HAND.

Digital avatars, trained on the ideas or likenesses of real people, are flooding the internet. We asked two media industry execs to debate the pros and cons of the technology.

Celebrities, influencers, and business leaders are spinning up digital replicas of themselves using artificial intelligence. Is it creepy? Is it a clever way to expand your reach and make more money? Both?

Some internet personalities are making digital avatars of their physical likeness for e-commerce or marketing. An AI avatar of the Chinese influencer Luo Yonghao, for example, helped drive over $7 million in product sales during a livestream last year. Others are training chatbots on their bodies of work to make digital versions of their minds or personalities available to fans at all times.

The technology is seeping into various parts of the media business. An AI replica of the late actor Val Kilmer is set to appear in the film "As Deep as the Grave" later this year. Fans of the motivational speaker Tony Robbins are paying $39 a month to get life coaching from his AI replica.

We asked two prominent industry professionals to debate the pros and cons of digital avatars to help us better understand an area of AI that is quickly moving from science fiction to reality. We told them to stick to a side, even if their personal views on the subject were a bit more nuanced.

Arguing the pro side: Dara Ladjevardian, the cofounder and CEO of Delphi, a startup that helps creators build AI chatbots trained on their creative output.

Arguing the con side: Will Kreth, the founder of Human & Digital and HAND, a nonprofit that helps authenticate the identities of human actors, professional athletes, and other figures who appear in media using a mix of legal verification and metadata.
We hosted the debate on a video call in early May. It's been edited below for length and clarity.

Should creators make digital avatars of themselves?

Dara: A digital clone allows you to broadcast your ideas. A replica can engage with everyone on your behalf, helping surface the conversations, people, and opportunities that genuinely deserve your time in person. You can only be in one place at a time, so naturally, you become selective about where your attention goes and who you respond to. The same way I may paint a painting, and you can experience bits of me in that painting, now you can experience my mind in an interactive way.

Will: There's a certain mysterious quality to how humans come up with ideas. An LLM is a great combiner of lots of sets of inputs, but it cannot necessarily, at least not yet, generate truly novel outputs. It can draw conclusions, but can it take the next leap of consciousness, the next leap of idea, and come up with things that make someone say, "I never thought of that"? There's just something a little off about it. An uncanny valley of representation. There's a visceral reaction people have because it doesn't feel natural.

What is the passive income opportunity for creators who choose to make a synthetic version of themselves?

Dara: People can monetize a book. They can monetize a course. Now they have something that is like a 24/7 mentor in your pocket. We have people who monetize it as an add-on to their community or their course, offering 24/7 office hours. We also have people who make it completely free and use the data of what people are saying to upsell products in conversation, which is a more indirect form of monetization. The top 1% of creators will get the majority of the benefits because they are the most trusted, and they should be able to benefit from the trust they've built.
Do you think you're devaluing the authentic version of you by allowing synthetic versions to speak on your behalf or promote things on your behalf?

Will: If the knockoff version of you bombs, that reflects badly on you, I would say. I think back to the point of authenticity and human trust. People go to a Barnes & Noble to meet the author. They didn't come there to meet the agents or meet the replica.

Dara: It's all in how you position it. The people who have been most successful on our platform talk about it very transparently: "If I could be in multiple places at a time, I would. I get all these questions over and over again, and I feel bad about all the people that I'm ghosting." When there's an abundance of something, that scarce thing actually becomes more valuable. We have data that shows when people talk to digital minds, they end up wanting to meet the person more.

What do you think about this idea of fans having relationships with synthetic versions of people?

Will: Social media networks and large language model companies are being sued because users are allegedly developing parasocial relationships with their products. These individuals may not have been socially or emotionally equipped to handle the potential for addiction to the relationship. Guardrails to protect the vulnerable people in society who are forming these bonds are still broadly missing from the conversation.

Dara: When my sister was little, she was obsessed with Johnny Depp after watching "Pirates of the Caribbean." She had a cardboard cutout of him in her room, and she was a fan. You could argue that's kind of a weird parasocial relationship. But she's a fan. Digital replicas are just an evolution of something we've already been doing with media: when there's someone you don't have access to, and you really resonate with their ideas, you create a fantasy in your head of who they are.
And if anything, this interactive experience actually creates more realism in a way.

Are we prepared to operate in a world where there are lots of synthetic versions of people across all types of media?

Will: It's already getting worse. The frameworks aren't in place. If these things start popping up more and more, which we think will probably happen, it may flood the zone. It's not just deepfakes we have to worry about; it's shallow fakes: things that are really close to being believable because they have elements of the truth mixed in with the lies. We need to be in a world where we can manage this, where the traceability and observability of replicas is documented, and where there's no weak link in the chain of authentication and consent.

Dara: Assuming we build the technology to automatically take down deepfakes, and we have watermarks so that you know what's verified, the much bigger problem is that there are going to be infinitely more fake characters and agents than there are digital replicas of humans. Those are the things I think we should worry about, because they blur what's real and what's not, versus the abundance of human replicas. Scaling humans at least maintains an internet that is human-derived rather than agent-derived.

Read the original article on Business Insider
