Cisco has introduced Foundation-sec-8b-reasoning, an 8-billion-parameter reasoning large language model (LLM) tailored for complex cybersecurity tasks. Announced in a blog post by Cisco’s Foundation AI team, the model is now in private preview and is set for a public release later this summer as an open-weight model. The release builds on Foundation-sec-8b, a security-focused version of Llama 3.1, and positions Cisco at the forefront of AI-native security system development.
Unlike generic LLMs, which struggle with deeply contextualized threat assessments, Foundation-sec-8b-reasoning is designed to handle the intricate, multi-step logic that real-world cybersecurity workflows require. It assists with threat modeling, adversary behavior mapping, configuration analysis, and insider risk detection—functions that traditionally require a mix of human expertise and static tools. The model is optimized for practitioners and developers seeking to build security-aware AI tools that deliver both accuracy and explainability.
The Foundation AI team, led by AI experts Yaron Singer and Amin Karbasi, emphasizes that reasoning capabilities are no longer optional for security LLMs. Instead, they are essential in bridging fragmented threat intelligence, logs, configurations, and access control data. Cisco’s 2025 Cybersecurity Readiness Index reinforces the urgency: 86% of surveyed business leaders reported AI-related security incidents in the past year. This model aims to meet that challenge by delivering advanced reasoning in environments where both privacy and performance are paramount.
Foundation-sec-8b-reasoning supports several key use cases:
- System and Configuration Analysis: Identifies vulnerabilities in system settings
- Adversary Behavior Mapping: Correlates threat intel with attacker tactics
- Threat Detection and Analysis: Pinpoints malicious traffic in complex logs
- Access and Privilege Management: Flags excessive permissions
- Contextual Investigation: Accelerates response with rich, contextual insights
Cisco also promises deployment flexibility. The open-weight model can run on-prem, in secure cloud environments, or in air-gapped systems—without requiring inference via third-party APIs. It will also be integrated into the Nvidia NIM model factory for scalable deployment. A complementary benchmark suite and practical use-case notebooks will be released to help security teams evaluate and integrate the model effectively.
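Because the weights are open, local inference follows the usual pattern for open-weight models. The sketch below is a minimal, unofficial illustration using Hugging Face `transformers`; it assumes the earlier `fdtn-ai/Foundation-Sec-8B` model ID as a stand-in, since the reasoning variant is still in private preview and its eventual ID is not yet public.

```python
# Hypothetical sketch of fully local, on-prem inference with an open-weight
# security model. The model ID below is the earlier Foundation-sec-8b release;
# swap in the reasoning model's ID once it is published.

def build_prompt(finding: str) -> str:
    """Wrap a raw security finding in an analysis instruction."""
    return (
        "You are a security analyst. Review the following configuration "
        "finding and explain the risk step by step.\n\n"
        f"Finding: {finding}\n\nAnalysis:"
    )

def analyze_locally(finding: str,
                    model_id: str = "fdtn-ai/Foundation-Sec-8B") -> str:
    """Run inference entirely on local hardware -- no third-party API calls.

    Requires `pip install transformers torch` and enough GPU/CPU memory
    for an 8B-parameter model.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    inputs = tokenizer(build_prompt(finding), return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=256)
    # Decode only the newly generated tokens, skipping the prompt.
    new_tokens = output[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)
```

Running the same snippet inside an air-gapped network only requires that the weights be copied in ahead of time; nothing in the call path leaves the host.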
“Security workflows demand more than just pattern recognition—they require contextual awareness and adaptive reasoning,” said Yaron Singer, co-lead of the Foundation AI initiative. “With Foundation-sec-8b-reasoning, we’re making that level of intelligence available in an open, secure, and customizable package.”