An AI app mimicked Scarlett Johansson’s voice in an ad, highlighting concerns about deepfakes that extend beyond celebrities.

Actress Scarlett Johansson is taking legal action against an artificial intelligence app that used her name and an AI-generated voice in an advertisement without her consent, Variety reports.

According to Variety, the AI image-generating app Lisa AI: 90s Yearbook & Avatar posted the 22-second ad to X, formerly Twitter, on October 28. The advertisement used images of Johansson and an AI-generated voice that sounded like hers to promote the app. Fine print beneath the ad stated that the AI-generated content “has nothing to do with this person.”

Johansson’s representatives told Variety that she is not a spokesperson for the app, and her attorney told the publication that legal action is being pursued. The advertisement appears to have been taken down, and Make It has not seen it. Make It reached out to Lisa AI and a representative for Johansson, but neither responded.

While many celebrities have been targets of deepfakes, they can cause problems for ordinary people, too. Here’s what to know.

What is a deepfake?

The term “deepfake” comes from “deep learning,” a subset of machine learning in which algorithms are trained to recognize patterns in massive amounts of data, then apply that training to new data or use it to generate outputs that resemble the original data set.

Here’s a condensed illustration: an artificial intelligence model could be fed audio clips of a person speaking until it learns the speech patterns, tonality, and other distinctive features of that person’s voice. The model could then synthesize that voice.
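
To make the pattern-recognition half of that illustration concrete, here is a minimal, hypothetical Python sketch. The file names and labels are placeholders, and the particular techniques (MFCC audio features plus a logistic-regression classifier) are illustrative assumptions rather than how any specific app works; real voice-cloning systems use far larger neural networks, but the train-on-patterns idea is the same.

```python
# Hypothetical sketch: summarize audio clips as acoustic feature vectors,
# then train a simple classifier to recognize one speaker's voice.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def voice_features(path: str) -> np.ndarray:
    """Load an audio clip and summarize it as a fixed-length feature vector."""
    audio, sample_rate = librosa.load(path, sr=16000)
    # MFCCs capture the timbre and tonality patterns mentioned above.
    mfcc = librosa.feature.mfcc(y=audio, sr=sample_rate, n_mfcc=13)
    # Average over time so every clip yields a vector of the same size.
    return mfcc.mean(axis=1)

# Placeholder training clips: 1 = the target speaker, 0 = anyone else.
clips = ["target_1.wav", "target_2.wav", "other_1.wav", "other_2.wav"]
labels = [1, 1, 0, 0]

X = np.stack([voice_features(p) for p in clips])
model = LogisticRegression().fit(X, labels)

# For a new clip, estimate how likely it is to be the target's voice.
print(model.predict_proba(voice_features("unknown.wav").reshape(1, -1)))
```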

The problem, says Jamyn Edis, an adjunct professor at New York University with more than 25 years of experience in the media and technology industries, is that the technology can be used to do harm.

Whether or not a person is a celebrity, “deep fakes are just a new vector for impersonation and fraud, and as such can be used in similar malicious ways,” he tells Make It. “Instances could include your likeness or the likenesses of your loved ones being used to create pornography, used as a means of extortion, or used to circumvent security by taking on the identity of another person.”

Even more worrisome, Edis says, is that as deepfake technology rapidly improves, it is becoming harder to distinguish fake content from real.

How to protect yourself

If you find yourself questioning whether something you’re watching might be a deepfake, there are a few things you can do.

First, Edis advises asking yourself whether what you’re seeing looks realistic. If you see an advertisement in which a celebrity pushes an obscure product, check their other social media accounts for a disclosure: celebrities are required to disclose when they are paid to promote products.

Big tech firms like Microsoft, Google, and Meta are also building tools to help users identify deepfakes.

President Biden’s recently announced executive order on AI, the first of its kind, calls for watermarking to clearly identify AI-generated content, along with additional safety precautions.

But technology, Edis says, has always managed to stay a step ahead of laws and attempts to regulate it.

Social conventions and governing laws “usually correct humanity’s worst instincts over time,” he says. “Until then, deep fake technology will continue to be weaponized for harmful purposes.”
