Discover Robust AI Watermark Techniques with Insights From ZDNet
We’re inundated with them now: “deepfake” photos that are virtually indistinguishable from real ones (except for the extra fingers), AI-generated articles and term papers that sound realistic (though they still come across as stilted), AI-generated reviews, and many others. Plus, AI systems may be scraping copyrighted material or intellectual property from websites as training data, exposing their users to potential violations.
Also: Most people worry about deepfakes - and overestimate their ability to spot them
The problem is, of course, the AI content keeps getting better. Will there ever be a foolproof way to identify AI-generated material? And what should AI creators and their companies understand about emerging techniques?
“The initial use case for generative AI was for fun and educational purposes, but now we see a lot of bad actors using AI for malicious purposes,” Andy Thurai, vice president and principal analyst with Constellation Research, told ZDNET.
Media content – images, videos, audio files – is especially prone to being “miscredited, plagiarized, stolen, or not credited at all,” Thurai added. This means “creators will not get proper credit or revenue.” An added danger, he said, is the “spread of disinformation that can influence decisions.”
From a text perspective, a key issue is that multiple prompts and iterations against language models tend to wash out watermarks or leave only minimal information, according to a recent paper from researchers at the University of Chicago, led by Aloni Cohen, an assistant professor at the university. They call for a new approach – multi-user watermarks – “which allow tracing model-generated text to individual users or groups of colluding users, even in the face of adaptive prompting.”
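The multi-user scheme itself is beyond a short sketch, but the single-user text watermarks it builds on can be illustrated. A widely studied design in the research literature biases generation toward a pseudo-random “green” subset of the vocabulary, re-derived at each step from the preceding token; a detector then counts how often tokens land in their green subset. The sketch below is a simplified, hypothetical detector – the function name, the `gamma` fraction, and the hashing scheme are illustrative, not the Chicago paper’s actual construction:

```python
import hashlib
import math

def green_fraction(tokens, vocab_size=50_000, gamma=0.25):
    """Score a token-ID sequence against a hash-based 'green list' watermark.

    Each token's green list is derived from a hash of the preceding token,
    so detection needs no access to the generating model. Returns the
    fraction of tokens that fall in their green list plus a z-score:
    unwatermarked text hovers near gamma; watermarked text scores higher.
    """
    hits = 0
    for prev, cur in zip(tokens, tokens[1:]):
        # Derive a deterministic pseudo-random partition from the previous token.
        digest = hashlib.sha256(str(prev).encode()).digest()
        seed = int.from_bytes(digest[:8], "big")
        # A token counts as "green" if its seeded hash falls below gamma.
        if (seed ^ cur) % vocab_size < gamma * vocab_size:
            hits += 1
    n = len(tokens) - 1
    frac = hits / n
    # Standard z-score against the null hypothesis of unwatermarked text.
    z = (hits - gamma * n) / math.sqrt(gamma * (1 - gamma) * n)
    return frac, z
```

The paper’s observation about adaptive prompting maps directly onto this sketch: paraphrasing or re-prompting reshuffles which tokens appear, dragging the green fraction back toward `gamma` and erasing the statistical signal.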
Also: Photoshop vs. Midjourney vs. DALL-E 3: Only one AI image generator passed my 5 tests
The challenge for both text and media is the same: to digitally watermark language models and AI output, you must implant detectable signals that can’t be modified or removed.
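To make the idea of an implanted signal concrete, here is a deliberately naive sketch: hiding watermark bits in the least-significant bits of pixel values. It also shows why the “can’t be modified or removed” requirement is the hard part – a mark like this vanishes under any re-encoding or resizing, which is exactly what robust schemes must survive. All names here are illustrative, not any vendor’s method:

```python
def embed_lsb(pixels, message_bits):
    """Hide watermark bits in the least-significant bit of each pixel value.

    `pixels` is a flat sequence of 0-255 intensity values. This is fragile
    by design: JPEG re-compression or resizing rewrites low-order bits,
    destroying the mark. Robust watermarks instead spread the signal across
    frequency-domain coefficients so it survives such transformations.
    """
    out = list(pixels)
    for i, bit in enumerate(message_bits):
        out[i] = (out[i] & ~1) | bit  # clear the low bit, then set it to `bit`
    return out

def extract_lsb(pixels, n_bits):
    """Read back the first n_bits hidden by embed_lsb."""
    return [p & 1 for p in pixels[:n_bits]]
```

Changing each value by at most 1 keeps the image visually identical, which is the defining trade-off of any watermark: imperceptible to viewers, detectable to tools.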
Industrywide initiatives are underway to develop foolproof AI watermarks. For example, the Coalition for Content Provenance and Authenticity (C2PA) – a joint effort formed through an alliance between Adobe, Arm, Intel, Microsoft, and Truepic – is developing an open technical standard intended to provide publishers, creators, and consumers “the ability to trace the origin of different types of media.”
Also: AI scientist: ‘We need to think outside the large language model box’
C2PA unifies the efforts of the Adobe-led Content Authenticity Initiative (CAI) , which focuses on systems to provide context and history for digital media, and Project Origin , a Microsoft- and BBC-led initiative that tackles disinformation in the digital news ecosystem.
“Without standardized access to detection tools, checking if the content is AI-generated becomes a costly, inefficient, and ad hoc process,” according to Shutterstock’s Alessandra Sala, in a report published by the International Telecommunication Union (ITU) – the UN agency for digital technologies. “In effect, it involves trying all available AI detection tools one at a time and still not being sure if some content is AI-generated.”
The proliferation of generative AI platforms “necessitates a public registry of watermarked models, along with universal detection tools,” Sala urged. “Until then, ethical AI users must query each company’s watermarking service ad hoc to check if a piece of content is watermarked.”
Also: Today’s challenge: Working around AI’s fuzzy returns and questionable accuracy
The C2PA initiative promotes “widespread adoption of content credentials, or tamper-evident metadata that can be attached to digital content,” Thurai explained. He equates content credentials to a “nutrition label” that creators can attach to their digital content and that can be used to track content provenance. With this open standard, publishers, creators, and consumers will be able to “trace the origin and evolution of a piece of media, including images, videos, audio, and documents,” he added.
The way it works, Thurai said, is that content creators can “get recognition for their work online by attaching information such as their name or social media accounts directly to the content they create.” Verifying provenance would simply involve clicking on a pin attached to a piece of content or visiting a website. Such tools “validate relevant information, as well as providing a detailed history of changes over time.”
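C2PA manifests are actually sealed with certificate-based digital signatures; as a hedged illustration of the tamper-evident idea behind content credentials, the sketch below binds a creator name to a content hash and seals the pair with an HMAC standing in for a real signature. The function names and manifest fields are hypothetical, not the C2PA schema:

```python
import hashlib
import hmac
import json

def attach_credential(content: bytes, creator: str, key: bytes) -> dict:
    """Build a tamper-evident 'content credential' for a piece of media.

    Binds the creator's name to a SHA-256 hash of the content, then seals
    both with an HMAC so neither can be altered without detection.
    (Real C2PA manifests use X.509-backed signatures instead of a shared key.)
    """
    manifest = {
        "creator": creator,
        "sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["seal"] = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_credential(content: bytes, manifest: dict, key: bytes) -> bool:
    """Check both the seal and that the content still matches its hash."""
    claim = {k: v for k, v in manifest.items() if k != "seal"}
    payload = json.dumps(claim, sort_keys=True).encode()
    seal_ok = hmac.compare_digest(
        hmac.new(key, payload, hashlib.sha256).hexdigest(), manifest["seal"]
    )
    hash_ok = hashlib.sha256(content).hexdigest() == manifest["sha256"]
    return seal_ok and hash_ok
```

The “tamper-evident” property Thurai describes falls out directly: editing either the media bytes or the creator field breaks verification, so viewers can trust a credential that checks out.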
- Title: Discover Robust AI Watermark Techniques with Insights From ZDNet
- Author: Matthew
- Created at: 2024-10-14 03:19:50
- Updated at: 2024-10-18 03:02:08
- Link: https://app-tips.techidaily.com/discover-robust-ai-watermark-techniques-with-insights-from-zdnet/
- License: This work is licensed under CC BY-NC-SA 4.0.