Really, you made this without AI? Prove it

by Admin
“This looks like AI.”

It’s a phrase I dread seeing as a writer who dabbles in illustration and amateur photography. In a world where generative AI technology is increasingly adept at mimicking the work of humans, people are naturally skeptical when online platforms refuse to label even obvious AI content.

This leads me to one conclusion: maybe we should start labeling human-made text, images, audio, and video with something akin to a universally recognized Fair Trade logo. The machines sure as hell aren’t motivated to label their work, but the creators at risk of being displaced most definitely are.

Fortunately, I’m not alone in my thinking.

Instagram head Adam Mosseri suggested as much in December, saying it would be “more practical to fingerprint real media than fake media” as AI technology improves to the point of making content that’s visually indistinguishable from that made by creative professionals.

Nobody can say for sure how much of what we find on the internet is AI-generated, but there’s widespread perception that news sites, social media platforms, and search engine results are rife with it, according to a recent Reuters Institute survey.

Authenticating human-made works was something the C2PA content credentials standard — which is already used by Meta’s platforms — was supposed to do. But so far, its implementation has been wholly ineffectual, despite having received broad industry support. It turns out that lots of people making and platforming AI content are motivated to hide its origins because of the clicks, chaos, and money it can generate.

In a bid to help human creatives distinguish their work from that spat out by AI generators, a large number of solutions have emerged in recent years. And like C2PA, they face a number of challenges for widespread adoption.

Here are just a handful of the badges being offered by organizations trying to distinguish human-made works from AI-generated content.
(Image compiled by The Verge)

Right now, there are too many AI-free labeling alternatives to choose from. In total, I count at least 12, all trying to address the same issue with a variety of eligibility criteria and authentication approaches. Some are industry-specific, such as the Authors Guild’s “human authored certification” for books and other written works, and can’t be broadly applied to all forms of creative content.

Other solutions like Proudly Human and Not by AI aim to be broader, covering published text, visual art, videography, and music, but the verification processes these services use can be just as questionable as those behind AI-labeling solutions. Some, like Made by Human, operate purely on trust, making badges and labels publicly available for anyone to download and apply to their work without actually establishing provenance. Others, like No-AI-Icon, say they visually inspect works and run them through AI detection services, which can be notoriously unreliable.

Most of the services I’ve checked are doing it the hard way: by getting creatives to show their working process — sketches, written drafts, and the like — to a human auditor. It’s extremely labor-intensive, but without any technological shortcuts, it’s the most reliable method we currently have to establish whether something was made by a real human.

Another issue is agreeing on what “human-made” even means. With AI now embedded in so many creative tools, and its use being encouraged by creative educators, where do you draw the line?

“The problem is going to be definition and verification. Does chatting with an LLM about the idea before executing it manually count as using AI? And how could the creator prove no AI was involved?” Jonathan Stray, senior scientist at the UC Berkeley Center for Human-Compatible AI, told The Verge. “Other consumer labels, such as ‘Organic,’ have regulations and agencies that enforce them.”

UC Berkeley School of Information lecturer Nina Beguš says we’ve already entered the era of hybrid content that’s clashing with how we define something as being authentically made.

“Any creative output today can be touched by AI in one way or another without us being able to prove it,” Beguš told The Verge. “Authorship is disintegrating into new directions, becoming more technologically enhanced and more collective. We need to revamp our creativity criteria that were made solely for humans.”

One contender, the aforementioned Not by AI, is trying to take this ambiguity into account. It offers a variety of badges that creators can apply to websites, blogs, art, films, essays, books, podcasts, and more, provided that at least 90 percent of the work is created by a real human. But the voluntary approach lacks any verification of truthfulness.

Other solutions like Proof I Did It are leaning on blockchain technology to provide a permanent record that anyone can use to look up creators and works verified by the service. By storing verification on the blockchain, creators get a tamper-evident digital certificate showing their work was registered as human-made — a firmer foundation than trying to use software to guess whether a piece of media was generated by AI, even if the blockchain itself can only secure the record, not how the work was actually produced.
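Stripped to its essentials, that kind of record is just a cryptographic fingerprint of the work plus some metadata. Here’s a hypothetical sketch in Python — the field names and functions are invented for illustration, and a real service like Proof I Did It would also cryptographically sign the record and anchor it on a public chain:

```python
import hashlib
import time

def make_provenance_record(work_bytes: bytes, creator: str) -> dict:
    """Build a minimal provenance record for a creative work.

    The SHA-256 digest uniquely fingerprints this exact file. Anchoring the
    record on a public ledger later proves the work existed, unchanged, at
    registration time -- it does NOT, by itself, prove a human made it.
    """
    return {
        "creator": creator,
        "sha256": hashlib.sha256(work_bytes).hexdigest(),
        "registered_at": int(time.time()),
    }

def verify(work_bytes: bytes, record: dict) -> bool:
    """Check that a file still matches the fingerprint in a stored record."""
    return hashlib.sha256(work_bytes).hexdigest() == record["sha256"]

original = b"Chapter 1: It was a dark and stormy night..."
record = make_provenance_record(original, "jane@example.com")

print(verify(original, record))                 # the untouched file matches
print(verify(original + b" [edited]", record))  # any change breaks the match
```

The limitation Stray and Beguš point to lives outside this code: the hash guarantees the file hasn’t changed since registration, but the honesty of the “human-made” claim still depends entirely on whoever audited the work before it was registered.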

Thomas Beyer, an executive director at UC San Diego’s Rady School of Management, says that Web3 and blockchain technology can provide a robust solution by shifting the question from “does this look like AI?” to “can this account prove its human history?”

“By issuing ‘Made by Human’ tokens to verified creators, the market creates a ‘premium tier’ of art where authenticity is mathematically guaranteed,” Beyer told The Verge. Other experts like Beguš echoed similar sentiments regarding the potential increase in value of “human and biological creativity” amid the flood of synthetic media.

Despite their faults, established standards like C2PA provide something that AI-free labeling solutions desperately need: unification. Big names in the tech industry, like Adobe, Microsoft, and Google, have committed to the standard, and AI providers are implementing it to appease global regulators. That said, when I weigh the pros and cons of AI labeling efforts against those that focus on verifying authentic human-made content, I feel the latter is more likely to succeed.

Many creative professionals, even those who don’t entirely oppose the use of AI tools, are understandably motivated to distinguish their work from the synthetically generated competition that’s saturating the industry and threatening their livelihoods. And while, yes, there are plenty of AI evangelists across social media platforms who are happy to showcase what the technology can achieve, there’s hesitancy around disclosing its use when money and influence could be lost.

Take the case of porn actors creating digital clones of themselves that will stay hot and young forever, or AI influencers selling a fantasy life that doesn’t exist. Disclosing that they’re AI might break the illusion for people who think they’re getting a genuine human experience. Scammers who use AI-generated imagery to sell online products surely don’t want to be outed either, and the platforms like Etsy that host them don’t seem too concerned. Likewise, anyone using generative AI to sow discord or create mischief on social media can only succeed when people believe it is real. It’s no wonder AI labeling with C2PA has failed to catch on.

We know that some AI-focused creators will avoid being transparent because it’s already happening. A notable example is Coral Hart, a romance author who told The New York Times that she made a six-figure sum after producing more than 200 AI-generated novels last year. None of her books carries a label disclosing that it was written using AI tools, however, over fears it would “damage her business for that work” because of the “strong stigma” around the technology.

We can see that disdain in action in how often synthetically generated content is described as “slop,” even when the works themselves are visually, audibly, or technically impressive. And that raises the question of how these human-made or AI-free labeling providers will prevent their logos from being abused by those who profit off deception. Trevor Woods, CEO of Proudly Human, acknowledges that doing so may not be possible.

“Like other certification marks and company logos, we cannot prevent fraudulently displaying the Proudly Human certification mark. However, we make it easy for consumers to verify it,” Woods told The Verge. “If a bad actor identified by us refuses to stop using the label, we will take legal action against them.”

If the goal is to achieve a universally recognized and enforced solution, then a standard needs to be agreed upon not just by creators and online platforms, but also by global governments and regulatory authorities. To my understanding, those conversations are currently few and far between.

“Proudly Human has occasionally briefed government and industry associations but is not involved in formal negotiations regarding a unified human origin certification,” said Woods. “The rapid evolution of AI capabilities and AI-generated content will outpace government and regulator responses.”

Clearly, there’s a demand for making human-made works easier for consumers to identify, so creatives, regulators, and authentication agencies need to pick which approach to rally behind. If one singular standard can rise to the same level as symbols like Fair Trade and Organic — which carry their own concerns, but are recognized globally as something that aligns with a particular ethos — maybe we can return to the days of trusting what we see with our eyes.
