Some extended thoughts about prompt injection attacks against software built on top of AI language models such a GPT-3. This post started as a Twitter thread but I’m promoting it to a full blog entry here.
The more I think about these prompt injection attacks against GPT-3, the more my amusement turns to genuine concern.
I know how to beat XSS, and SQL injection, and so many other exploits.
I have no idea how to reliably beat prompt injection! Read More