AI/LLM poisoning

Summary

As AI services become increasingly available to the average user, through AI-powered customer support chatbots, AI-summarised content, and widespread LLM use, the risk of the data feeds behind these systems being poisoned with malicious content will continue to rise. Attack scenarios include:

  • LLM responses including links to phishing pages and other malicious content
  • AI-summarised content containing phishing links

As AI-assisted (and autonomous) web browsing becomes more common, the risk of an automated service accessing malicious content and entering sensitive information (e.g. credentials, banking information) or downloading malware will increase further.
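One common mitigation for the link-based scenarios above is to screen model output before it is rendered or acted on. The following is a minimal sketch, not a complete defence: it assumes a hypothetical per-service allowlist (`ALLOWED_HOSTS`) and flags any URL in a model response whose host is not on it. The function name, allowlist, and example domains are illustrative, not part of any real product.

```python
import re
from urllib.parse import urlparse

# Hypothetical allowlist of hosts this service is permitted to link to.
ALLOWED_HOSTS = {"example.com", "docs.example.com"}

# Rough URL matcher; stops at whitespace and common delimiters.
URL_RE = re.compile(r"https?://[^\s)\"'>]+")

def find_disallowed_links(text: str) -> list[str]:
    """Return URLs in model output whose host is not on the allowlist."""
    flagged = []
    for url in URL_RE.findall(text):
        host = urlparse(url).hostname or ""
        # Accept exact matches or subdomains of an allowed host.
        if not any(host == h or host.endswith("." + h) for h in ALLOWED_HOSTS):
            flagged.append(url)
    return flagged

reply = ("See https://docs.example.com/reset for help, "
         "or log in at https://examp1e-support.com/login")
print(find_disallowed_links(reply))  # flags the look-alike domain
```

A check like this catches poisoned responses that smuggle in look-alike phishing domains, but it is only one layer; autonomous browsing agents additionally need policy controls on which sites they may visit and what data they may submit.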

Examples

Further reading