AI identity platform combats deepfakes and ownership issues

The rise of artificial intelligence has opened new possibilities for human expression and content creation. At the same time, the ability to do almost anything under a digital, AI-generated identity has created new problems to solve.

According to data from Sumsub, the share of fraud stemming from deepfakes more than doubled from 2022 to the first quarter of 2023, with the United States seeing a jump from 0.2% to 2.6%.

In recent months, there have been multiple instances of celebrities, including actors Tom Hanks and Jennifer Aniston and YouTube personality Mr. Beast, calling out deepfakes that used their digital likenesses to sell products.

In response to the situation, California-based Hollo.AI launched on Nov. 16. The platform allows users to claim their AI identity, or “persona,” and features a personalized chatbot to help users monetize and verify their AI work.

Hollo.AI says this “ethical use of AI” is made possible through blockchain-based verification. Rex Wong, CEO of the platform, told Cointelegraph that creators and personalities will be able to take “sovereign ownership” of their AI through the platform’s verified AI registry.

“The registry serves as a public registry ledger that offers AI identities, once verified by Hollo.AI, to be logged on the blockchain for all to see.”
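Hollo.AI has not published the technical details of its registry, but the description suggests an append-only log of verified identity records that anyone can inspect. The sketch below is purely illustrative, assuming a hypothetical record structure; names such as PersonaRecord and register_persona are invented for the example and are not part of any published Hollo.AI API.

```python
# Illustrative sketch only: Hollo.AI has not disclosed its registry format.
# It models a verified AI identity as a hashed, append-only ledger entry.
import hashlib
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class PersonaRecord:          # hypothetical record structure
    owner: str                # creator's name or handle
    persona_id: str           # identifier for the AI identity ("digital twin")
    verified_by: str          # verifying party, e.g. the platform
    timestamp: float          # when verification was logged

LEDGER: list[dict] = []       # stand-in for a public blockchain ledger

def register_persona(record: PersonaRecord) -> str:
    """Hash the record and append it so anyone can audit the entry."""
    payload = json.dumps(asdict(record), sort_keys=True).encode()
    digest = hashlib.sha256(payload).hexdigest()
    LEDGER.append({"entry_hash": digest, **asdict(record)})
    return digest

# Example: logging a verified identity for public lookup
entry = PersonaRecord(owner="Jane Creator", persona_id="jane-ai-001",
                      verified_by="platform-verifier", timestamp=time.time())
print(register_persona(entry))
```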

Creators will receive a blue check mark for verified identities, giving them control over when, where and how the identity can be used. They can then earn revenue through any licensing of that identity.


Wong told Cointelegraph that the service works similarly to credit and identity theft protection but is tailored to safeguard AI identities.

“They monitor and alert users of unauthorized uses of their digital personas, helping to prevent the spread and impact of deep fakes.”

Beyond monitoring for unauthorized uses, Wong said Hollo.AI intends to help users resolve fraud cases once such use is detected.

He said that “empowerment” in this area is crucial at a time when “digital identities can be easily replicated and misappropriated for unauthorized use.”

Once a user has created an AI “digital twin” on the platform, it “continues learning” from the social links the user provides, building a more accurate digital identity.

While Hollo.AI is trying to tackle these issues of transparency and ethical use of AI for creators and viewers, the topics are also on the table at other institutions and platforms. YouTube recently updated its community guidelines to include more AI transparency measures.

The entertainment industry union SAG-AFTRA is currently negotiating final terms with major Hollywood studios over the use of AI-generated “digital twins” of its actors, following a 118-day strike in which AI was one of the critical issues.
