Tag: AI security
New research reveals that as few as 250 poisoned documents can create backdoor vulnerabilities in large language models, regardless of their size or training data volume. This finding challenges the long-held assumption that attackers need significant control over training data, suggesting data-poisoning attacks may be more feasible and accessible than previously believed.
New research from Anthropic reveals that a mere 250 malicious documents can compromise large language models, regardless of their size, challenging long-held assumptions about AI security and data integrity.
A recent analysis reveals that a surprisingly small number of malicious documents, around 250, can be sufficient to compromise the integrity of large language models (LLMs). This vulnerability, detailed by Red Hot Cyber, highlights a significant security risk in AI systems, potentially leading to biased outputs, data leakage, or the generation of harmful content.
CrowdStrike has acquired Pangea, a move set to significantly enhance its Falcon platform with advanced AI security capabilities. This strategic acquisition introduces AI Detection and Response (AIDR) to safeguard the entire enterprise AI lifecycle, addressing critical vulnerabilities like prompt injection and data leakage.
Researchers have demonstrated a critical vulnerability in OpenAI's Guardrails framework, showing how simple prompt injection attacks can bypass its safety mechanisms, raising concerns about AI self-regulation.
Exabeam has integrated Google Agentspace and Model Armor into its New-Scale Platform, enabling the monitoring and detection of threats posed by AI agents. This move addresses the growing concern of AI-driven insider risks, a trend highlighted by Exabeam's own research, and aims to provide enhanced security for organizations increasingly adopting AI technologies.
Microsoft has issued a stark warning about the proliferation of "Shadow AI" – artificial intelligence tools used by employees without organizational approval. While these tools offer productivity gains, they pose significant privacy and security risks, potentially exposing sensitive company and customer data. The tech giant urges businesses to adopt enterprise-grade AI solutions that balance functionality with robust security and privacy measures.
Google is reinforcing its commitment to AI security with a multi-pronged strategy, introducing an AI Vulnerability Reward Program, an updated Secure AI Framework 2.0, and the AI-powered agent CodeMender to enhance code security. This initiative aims to address the evolving threats in the AI landscape by fostering research, securing AI agents, and collaborating with partners to ensure AI remains a force for good.