Mastering Impenetrable AI Watermark Techniques for Content Security | ZDNet
We’re inundated with them now: “deepfake” photos that are virtually indistinguishable from real ones (except for the extra fingers), AI-generated articles and term papers that sound realistic (though they still come across as stilted), AI-generated reviews, and many others. Plus, AI systems may be scraping copyrighted material or intellectual property from websites as training data, exposing users to potential violations.
Also: Most people worry about deepfakes - and overestimate their ability to spot them
The problem is, of course, the AI content keeps getting better. Will there ever be a foolproof way to identify AI-generated material? And what should AI creators and their companies understand about emerging techniques?
“The initial use case for generative AI was for fun and educational purposes, but now we see a lot of bad actors using AI for malicious purposes,” Andy Thurai, vice president and principal analyst with Constellation Research, told ZDNET.
Media content – images, videos, audio files – is especially prone to being “miscredited, plagiarized, stolen, or not credited at all,” Thurai added. This means “creators will not get proper credit or revenue.” An added danger, he said, is the “spread of disinformation that can influence decisions.”
From a text perspective, a key issue is that multiple prompts and iterations against language models tend to wash out watermarks or leave only minimal information, according to a recent paper by researchers at the University of Chicago, led by Aloni Cohen, assistant professor at the university. They call for a new approach – multi-user watermarks – “which allow tracing model-generated text to individual users or groups of colluding users, even in the face of adaptive prompting.”
Also: Photoshop vs. Midjourney vs. DALL-E 3: Only one AI image generator passed my 5 tests
The challenge for both text and media is that to digitally watermark language models and AI output, you must implant detectable signals that can’t be modified or removed.
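One well-known family of text-watermarking techniques illustrates what “implanting a detectable signal” can mean in practice: during generation, the model is nudged toward a pseudo-random “green list” of tokens, and at detection time you test for a statistical surplus of green tokens. The sketch below is a toy, single-user illustration of that idea in plain Python; the hash-based green-list assignment and the `gamma` parameter are choices made for this example, not the Chicago team’s multi-user construction.

```python
import hashlib
import math

def is_green(prev_token: str, token: str, gamma: float = 0.5) -> bool:
    """Pseudo-randomly assign `token` to the 'green list', seeded by the
    previous token. A watermarking generator would bias sampling toward
    green tokens, leaving a statistical signal in the output text."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] / 255.0 < gamma

def detect(tokens: list[str], gamma: float = 0.5) -> float:
    """Return a z-score over adjacent token pairs: large positive values
    mean far more green tokens than chance predicts, i.e. the text is
    likely watermarked. Ordinary text should score near zero."""
    hits = sum(is_green(p, t) for p, t in zip(tokens, tokens[1:]))
    n = len(tokens) - 1  # number of (prev, current) pairs scored
    expected = gamma * n
    return (hits - expected) / math.sqrt(n * gamma * (1 - gamma))
```

This also shows why repeated prompting is a threat: paraphrasing replaces tokens, erodes the green-token surplus, and drives the z-score back toward zero, which is the “washing out” the researchers describe.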
Industrywide initiatives are underway to develop foolproof AI watermarks. For example, the Coalition for Content Provenance and Authenticity (C2PA) – a joint effort formed through an alliance between Adobe, Arm, Intel, Microsoft, and Truepic – is developing an open technical standard intended to provide publishers, creators, and consumers “the ability to trace the origin of different types of media.”
Also: AI scientist: ‘We need to think outside the large language model box’
C2PA unifies the efforts of the Adobe-led Content Authenticity Initiative (CAI) , which focuses on systems to provide context and history for digital media, and Project Origin , a Microsoft- and BBC-led initiative that tackles disinformation in the digital news ecosystem.
“Without standardized access to detection tools, checking if the content is AI-generated becomes a costly, inefficient, and ad hoc process,” according to Shutterstock’s Alessandra Sala, in a report published by the International Telecommunication Union (ITU) – the UN agency for digital technologies. “In effect, it involves trying all available AI detection tools one at a time and still not being sure if some content is AI-generated.”
The proliferation of generative AI platforms “necessitates a public registry of watermarked models, along with universal detection tools,” Sala urged. “Until then, ethical AI users must query each company’s watermarking service ad hoc to check if a piece of content is watermarked.”
Also: Today’s challenge: Working around AI’s fuzzy returns and questionable accuracy
The C2PA initiative promotes “widespread adoption of content credentials, or tamper-evident metadata that can be attached to digital content,” Thurai explained. He equates content credentials to a “nutrition label” that creators can attach to their digital content and that can be used to track content provenance. With this open standard, publishers, creators, and consumers will be able to “trace the origin and evolution of a piece of media, including images, videos, audio, and documents,” he added.
The way it works is content creators can “get recognition for their work online by attaching information such as their name or social media accounts directly to the content they create,” Thurai said. This would simply involve either clicking on a pin attached to a piece of content or going to a website to verify provenance. Such tools “validate relevant information, as well as providing a detailed history of changes over time.”
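The verification step Thurai describes comes down to checking a signed manifest against the content it claims to describe: if either the creator information or the content bytes change, validation fails. The sketch below is a deliberately simplified, hypothetical illustration of that pattern; real C2PA content credentials use X.509 certificate chains and a binary manifest container, not the symmetric demo key used here.

```python
import hashlib
import hmac
import json

# Stand-in for a creator's signing key; real content credentials are
# signed with certificate-backed private keys, not a shared secret.
SIGNING_KEY = b"demo-signing-key"

def attach_credentials(content: bytes, creator: str) -> dict:
    """Build a simplified tamper-evident manifest binding creator info
    to a cryptographic hash of the content."""
    manifest = {
        "creator": creator,
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify(content: bytes, manifest: dict) -> bool:
    """Check that the signature is valid AND that the content hash still
    matches, so both metadata edits and content edits are detectable."""
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    good_sig = hmac.compare_digest(
        manifest["signature"],
        hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest(),
    )
    return good_sig and claimed["content_sha256"] == hashlib.sha256(content).hexdigest()
```

A viewer that surfaces the “pin” Thurai mentions would run a check like `verify()` and then display the validated creator and edit history to the user.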
- Title: Mastering Impenetrable AI Watermark Techniques for Content Security | ZDNet
- Author: Matthew
- Created at : 2024-10-09 10:55:39
- Updated at : 2024-10-12 10:28:14
- Link: https://app-tips.techidaily.com/mastering-impenetrable-ai-watermark-techniques-for-content-security-zdnet/
- License: This work is licensed under CC BY-NC-SA 4.0.