Researchers have demonstrated a critical vulnerability in OpenAI's Guardrails framework: simple prompt injection attacks can bypass its safety checks, raising fresh concerns about relying on AI models to police other AI systems.