New Bill Suggests “Watermarking” AI Content to Fight Deepfake Scams

A bipartisan group of senators introduced a new bill on July 11 to tackle deepfake scams, copyright infringement, and unauthorized AI training on protected content.

The group announced the bill in a press release led by Democratic Senator Maria Cantwell, outlining several measures to regulate AI-generated content.

The bill addresses critical issues such as protecting online creators’ intellectual property and controlling the types of content AI models can train on.

The Content Origin Protection and Integrity from Edited and Deepfaked Media Act (COPIED Act) calls for a standardized method for watermarking AI-generated content online.

Under the bill, AI service providers would have to embed AI-generated content with metadata disclosing its origin, and AI tools would be prohibited from removing or stripping that metadata.
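The bill leaves the exact format to a standards-setting process, but as a rough illustration, here is a minimal Python sketch of how provenance metadata might be attached to a piece of content. The field names and structure are hypothetical, not drawn from the COPIED Act or any existing standard.

```python
import hashlib
import json
from datetime import datetime, timezone

def attach_provenance(content: bytes, generator: str) -> dict:
    """Build a provenance manifest for a piece of AI-generated content.

    All field names here are hypothetical illustrations; the COPIED Act
    delegates the actual format to a standards-setting process.
    """
    return {
        "content_sha256": hashlib.sha256(content).hexdigest(),  # binds the manifest to the content
        "ai_generated": True,                                    # the required disclosure
        "generator": generator,                                  # which tool produced the content
        "created_at": datetime.now(timezone.utc).isoformat(),    # when it was generated
    }

if __name__ == "__main__":
    article = b"An AI-generated news blurb..."
    print(json.dumps(attach_provenance(article, "example-model-v1"), indent=2))
```

In a real scheme, the manifest would also be cryptographically signed so that tampering with or removing it is detectable.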

Cantwell emphasized the unchecked nature of these issues amid AI’s rapid rise, stressing the bill’s role in providing “much-needed transparency.”

“THURSDAY: The Senate Commerce Committee is holding a hearing on how AI is accelerating the need to protect Americans’ privacy. Tune in on July 11 at 10am: https://t.co/czZoPO5wpo” — Senate Commerce, Science, Transportation Committee (@commercedems), July 10, 2024

“The COPIED Act will also put creators, including local journalists, artists, and musicians, back in control of their content with a provenance and watermark process that I think is very much needed,” she added.

Crypto Deepfake Scams Thwarted

The crypto industry stands to benefit significantly from the bill, as deepfake scams remain one of the main drivers of crypto crime.

Deepfakes exploit the likeness of influential figures and celebrities to promote fraudulent investment schemes.

They falsely imply that a project has legitimate or official backing, lending it credibility among potential victims.

Recently, over 35 YouTube channels live-streamed a SpaceX launch using an AI-generated voice and deepfake video to impersonate Elon Musk.

The issue is only expected to escalate, with some estimates suggesting deepfake scams could account for over 70% of all crypto crimes within the next two years.

Therefore, this bill is a monumental step toward thwarting these efforts by making AI-generated deceptive material clearly identifiable.
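On the detection side, and continuing the hypothetical manifest format from the sketch above, verifying a claim of origin could look something like this:

```python
import hashlib

def verify_provenance(content: bytes, manifest: dict) -> bool:
    """Check that a manifest matches the content and discloses AI generation.

    Uses the hypothetical manifest fields from the earlier sketch; a real
    implementation would also verify a cryptographic signature.
    """
    digest = hashlib.sha256(content).hexdigest()
    return (
        manifest.get("content_sha256") == digest   # content has not been tampered with
        and manifest.get("ai_generated") is True   # AI origin is disclosed
    )
```

Content that arrives with no manifest, or with one that fails this check, could then be flagged as potentially deceptive.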

AI Has Been Leading a New Wave of Crypto Crime

Although deepfakes remain the most prominent use of AI in crypto crime, the technology has a range of illicit applications.

A recent Elliptic report exposed the rise of AI-enabled crypto crime, marking a new era of cyber threats spanning deepfake scams, state-sponsored attacks, and other sophisticated illicit activities.

AI has driven beneficial innovation in many industries, including the AI cryptoasset sector. This innovation has birthed many projects poised to redefine the landscape of AI crypto.

As with any emerging technology, there is always a risk of bad actors seeking to exploit new developments for illicit purposes.

Dark web forums discuss using large language models (LLMs) for crypto-related crime, including attempts to reverse-engineer wallet seed phrases and automate scams such as phishing and malware deployment.

Dark web markets also offer “unethical” versions of GPTs designed for AI crypto crime, built to bypass the safeguards found in legitimate models.

WormGPT, the self-described “enemy of ChatGPT,” was noted in the report. It introduces itself as a tool that “transcends the boundaries of legality” and is openly advertised for crafting phishing emails, carding, and generating malicious code.

As a result, Elliptic calls for vigilance toward early warning signs of illegal activity to sustain long-term innovation and mitigate emerging risks at an early stage.
