Microsoft's security research team disclosed remote code execution vulnerabilities in popular AI agent frameworks, demonstrating how adversarial prompts can be leveraged to escape sandboxed contexts and execute arbitrary code on host systems. The research, published on the Microsoft Security Blog, details how prompt injection can be chained with insecure tool-execution patterns in agent frameworks to achieve code execution, effectively turning prompts into shells. Microsoft Defender coverage and mitigation guidance are referenced alongside the disclosure. No active exploitation in the wild is reported, but the findings affect the broader AI agent ecosystem and warrant prompt review by developers building on these frameworks. Other items in the source set (Apache HTTP/2, Linux 'Dirty Frag', Q1 vulnerability roundup) are unrelated to Microsoft as a service and were excluded.
// Service
Microsoft
// Alerts
Recent threats
// MicrosoftMEDIUM