OpenAI’s attempts to watermark AI text hit limits

It’s proving tough to rein in systems like ChatGPT

Did a human write that, or ChatGPT? It can be hard to tell — perhaps too hard, its creator OpenAI thinks, which is why it is working on a way to “watermark” AI-generated content.

In a lecture at the University of Texas at Austin, computer science professor Scott Aaronson, currently a guest researcher at OpenAI, revealed that OpenAI is developing a tool for “statistically watermarking the outputs of a text [AI system].” Whenever a system — say, ChatGPT — generates text, the tool would embed an “unnoticeable secret signal” indicating where the text came from.
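The article doesn't describe OpenAI's actual mechanism, but the general idea behind statistical text watermarking can be sketched as follows: a secret key and the previous token feed a pseudorandom function that splits candidate tokens into a "green" and a "red" set, generation softly prefers green tokens, and a detector holding the key checks whether the green fraction is suspiciously high. Everything below (the key, the toy vocabulary, the function names) is a hypothetical illustration, not OpenAI's implementation.

```python
import hashlib
import hmac
import random

SECRET_KEY = b"demo-key"  # hypothetical secret known only to the detector


def is_green(prev_token: str, candidate: str, key: bytes = SECRET_KEY) -> bool:
    """Keyed PRF of (previous token, candidate): roughly half the vocab is 'green'."""
    digest = hmac.new(key, f"{prev_token}|{candidate}".encode(), hashlib.sha256).digest()
    return digest[0] % 2 == 0


def generate(vocab: list[str], length: int, seed: int = 0, key: bytes = SECRET_KEY) -> list[str]:
    """Toy 'model': sample a few candidates per step, prefer a green one if available."""
    rng = random.Random(seed)
    out = ["<s>"]
    for _ in range(length):
        candidates = rng.sample(vocab, k=min(5, len(vocab)))
        green = [c for c in candidates if is_green(out[-1], c, key)]
        out.append(rng.choice(green) if green else rng.choice(candidates))
    return out[1:]


def green_fraction(tokens: list[str], key: bytes = SECRET_KEY) -> float:
    """Detector: fraction of green tokens; ~0.5 for plain text, much higher if watermarked."""
    prev, hits = "<s>", 0
    for tok in tokens:
        hits += is_green(prev, tok, key)
        prev = tok
    return hits / len(tokens)
```

Because the signal is a statistical bias rather than a visible marker, short passages stay plausibly deniable while long ones become detectable to anyone holding the key — which matches the "unnoticeable secret signal" framing in the lecture.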
