Training data poisoning lets attackers silently corrupt AI models with tiny amounts of tampered data. Learn how it works, see real-world examples, and discover six proven ways to defend your LLMs.