Tag: backdoor attacks

The Unsettling Reality: How a Few Malicious Documents Can Undermine Any Large Language Model

New research reveals that as few as 250 poisoned documents can create backdoor vulnerabilities in large language models, regardless of model size or training data volume. This finding challenges the long-held assumption that attackers must control a significant fraction of the training data, suggesting that data-poisoning attacks may be more practical and accessible than previously believed.
