How many malicious docs does it take to poison an LLM? Far fewer than you might think, Anthropic warns

20:13 - 14 Oct 2025
Anthropic’s study shows that just 250 malicious documents are enough to poison massive AI models.
