this post was submitted on 21 Jul 2023
78 points (98.8% liked)

Technology


OpenAI, Alphabet, Meta, Anthropic, Inflection, Amazon, and Microsoft committed to developing a system to "watermark" all forms of AI-generated content, from text and images to audio and video, so that users will know when the technology has been used.

top 10 comments
[–] bernieecclestoned@sh.itjust.works 11 points 1 year ago (2 children)

So, make content with AI, then screen-grab it, removing the watermark?

[–] Four_lights77@lemm.ee 9 points 1 year ago

The watermark would likely combine a few different methods, embedding marker pixel sets that are difficult or impossible to see alongside ones that are visible. Think printed currency. I'm not saying there won't be an arms race to circumvent it, like DRM, or bad actors who counterfeit it, but the work should be done to try to ensure some semblance of reliability in important distributed content.
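To make the "invisible marker" idea concrete, here's a toy sketch (my own illustration, not anything these companies have announced) that hides a bit pattern in the least significant bits of pixel values. Real schemes spread redundant, error-corrected signals through the image so they can survive cropping, compression, and ideally screenshots:

```python
# Toy least-significant-bit watermark. Purely illustrative: a real
# watermark would be far more robust than this.

def embed(pixels: list[int], bits: str) -> list[int]:
    """Overwrite the least significant bit of each pixel with one watermark bit."""
    out = pixels.copy()
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | int(b)  # clear the LSB, then set it to b
    return out

def extract(pixels: list[int], n_bits: int) -> str:
    """Read the hidden bits back out of the LSBs."""
    return "".join(str(p & 1) for p in pixels[:n_bits])

image = [200, 201, 199, 198, 202, 200, 197, 203]  # grayscale pixel values
marked = embed(image, "1011")                     # visually identical image
assert extract(marked, 4) == "1011"
```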

[–] lazyplayboy@lemmy.world 4 points 1 year ago (1 children)

It's possible for AI-generated text to be made such that detection is straightforward, due to the probability of word selection. https://youtu.be/XZJc1p6RE78

[–] PipedLinkBot@feddit.rocks 2 points 1 year ago

Here is an alternative Piped link(s): https://piped.video/XZJc1p6RE78

Piped is a privacy-respecting open-source alternative frontend to YouTube.

I'm open-source, check me out at GitHub.

[–] notfromhere@lemmy.one 11 points 1 year ago (1 children)

Of course the watermark will only apply to their consumer versions of things, maybe their business things, and absolutely none of their government or internal things.

[–] The_Mixer_Dude@lemmus.org -1 points 1 year ago (1 children)
[–] notfromhere@lemmy.one 1 points 1 year ago

It doesn’t say much of anything, I’m just extrapolating from the current trajectory of society.

[–] tdawg@lemmy.world 8 points 1 year ago

This is going to need to happen anyway if these companies want to differentiate between human-generated and AI-generated content for the purposes of training new models.

[–] consciouslyoblivious@lemmy.world 4 points 1 year ago (1 children)

How do you put a watermark on textual content?

[–] SamC@lemmy.nz 8 points 1 year ago

LLMs choose words based on probabilities: given the word "blue", the model has a list of candidate next words, each with a probability of following "blue". "Sky" would have a high probability, "car" might also be quite high, along with a long list of other words. The LLM doesn't simply select whatever has the highest probability; it chooses with a degree of randomness, which has been found to make the text sound more natural.
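A rough sketch of that sampling step (toy numbers I made up, not a real model's distribution):

```python
import random

# Hypothetical probabilities for the word following "blue".
next_word_probs = {"sky": 0.55, "car": 0.20, "whale": 0.15, "cheese": 0.10}

def sample_next_word(probs: dict[str, float]) -> str:
    """Pick the next word at random, weighted by probability,
    rather than always taking the most likely one ("sky")."""
    words = list(probs)
    return random.choices(words, weights=[probs[w] for w in words], k=1)[0]

print(sample_next_word(next_word_probs))  # usually "sky", but not always
```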

To watermark, you make this randomness happen in a predefined way, at least in cases where many different words could fit. So (to use a flawed example), you might make it so that "blue" is followed by "car" rather than "sky". You do this throughout the text, in a way that doesn't affect its meaning. It is then possible to write a simple algorithm to detect whether the text was written by an AI, based on the probability of different words appearing in particular sequences. Because it's spread throughout the text, the watermark is quite difficult (although not impossible) to remove completely.
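Here's a minimal sketch of how that detection can work, loosely based on the "green list" scheme described in the article below (the vocabulary and helper names are my own toy example): the previous word seeds a random split of the vocabulary, the generator nudges its choices toward the "green" half, and the detector simply counts how often that bias shows up.

```python
import hashlib
import random

VOCAB = ["sky", "car", "sea", "bird", "tree", "road", "song", "door"]

def green_list(prev_word: str) -> set[str]:
    """Seed an RNG with the previous word and mark half the vocabulary 'green'."""
    seed = int(hashlib.sha256(prev_word.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, len(VOCAB) // 2))

def detect(words: list[str]) -> float:
    """Fraction of words that fall in their predecessor's green list.
    Human text lands near 0.5; watermarked text scores much higher."""
    hits = sum(w in green_list(prev) for prev, w in zip(words, words[1:]))
    return hits / max(len(words) - 1, 1)
```

A watermarked generator would simply bump the probabilities of green words at each step, so its output scores well above the ~0.5 a human writer would produce by chance.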

Here's an article that explains it better than I can: https://www.kdnuggets.com/2023/03/watermarking-help-mitigate-potential-risks-llms.html