Channel: academic papers – Schneier on Security

Jailbreaking LLMs with ASCII Art


Researchers have demonstrated that putting words in ASCII art can cause LLMs—GPT-3.5, GPT-4, Gemini, Claude, and Llama2—to ignore their safety instructions.

Research paper.
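The core trick is simple to illustrate: render a word as a block of ASCII art and splice it into a prompt in place of the plain-text word. Below is a minimal sketch of that masking idea — the tiny two-letter font and the `[MASK]` prompt template are hypothetical stand-ins, not the paper's actual fonts or prompts.

```python
# Sketch of ASCII-art word masking. The FONT table and the
# [MASK] template are illustrative assumptions, not the paper's method.

FONT = {  # 5-row block glyphs; covers only the demo letters
    "H": ["#   #", "#   #", "#####", "#   #", "#   #"],
    "I": ["#####", "  #  ", "  #  ", "  #  ", "#####"],
}

def render_ascii_art(word: str) -> str:
    """Render WORD as one ASCII-art block, glyphs placed side by side."""
    rows = ["  ".join(FONT[ch][row] for ch in word) for row in range(5)]
    return "\n".join(rows)

def masked_prompt(template: str, word: str) -> str:
    """Replace the [MASK] placeholder with the word's ASCII art."""
    return template.replace("[MASK]", "\n" + render_ascii_art(word) + "\n")

if __name__ == "__main__":
    print(masked_prompt("Decode the word below, then answer: [MASK]", "HI"))
```

The filter-evasion intuition: a keyword-based safety check sees only `#` characters and spaces, while a model capable of reading the art still recovers the word.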

