Upwind Expands CNAPP with Integrated AI Security Suite
As enterprises race to embed AI into every corner of their operations, Upwind is stepping up with an integrated AI security suite, baked directly into their Cloud Native Application Protection Platform (CNAPP). This isn’t just another bolt-on; it’s a recognition that AI security needs to be woven into the fabric of cloud protection.

The problem? AI’s rapid expansion has outpaced our ability to secure it. Models, agents, and inference endpoints are scattered across services and infrastructures, leaving security teams scrambling for visibility. Upwind aims to solve this by bringing AI posture management, real-time threat detection, and runtime protection under a single umbrella.

Upwind isn’t content with static configurations and snapshots. Their philosophy is “inside-out” security, focusing on real-time signals, API calls, and data flows within the workload. This means observing traffic and behavior as it happens, rather than relying on assumptions.

According to Amiram Shachar, CEO of Upwind, “AI security should not be a stand-alone security component. It should be part of a larger ecosystem. It just makes perfect sense to go down this route and make sure that AI security benefits from all the data and context that our CNAPP already holds.”

Runtime Clarity: The Key to Secure AI

This runtime-first model grounds AI risk in actual activity, providing security teams with a prioritized view of what’s happening at the most critical moment. It’s about seeing where AI is running, how models and agents behave, and what sensitive data they’re interacting with.

Essentially, Upwind is extending its existing cloud security expertise directly into the AI layer, offering a unified platform for posture, inventory, behavior tracing, and vulnerability testing.

Upwind’s AI security suite introduces a range of capabilities designed to strengthen AI management and monitoring:

  • AI Security Posture Management (AI-SPM): Secures inference endpoints, enforces model versioning, and detects exposed AI API keys.
  • AI Detection & Response (AI-DR): Monitors agents and LLM infrastructure for anomalous behavior and jailbreak attempts.
  • AI Bill of Materials (AI-BOM): Maps models, frameworks, and cloud AI products to create a real-time inventory of AI components.
  • AI Network Visibility: Decodes AI-native traffic and identifies unauthorized AI usage, highlighting sensitive data moving through prompts.
  • MCP Security: Traces the full sequence of AI agent actions, providing evidence of what an agent did and its impact.
  • AI Security Testing: Validates AI systems against adversarial techniques like prompt injection and jailbreaks.

These components work together to offer a comprehensive view of AI risk, reducing operational complexity and enabling secure AI innovation at scale.

One particularly interesting aspect of Upwind’s offering is its focus on detecting “shadow AI” – unauthorized or unknown AI usage within an organization. This is a growing concern as employees experiment with AI tools without proper oversight, potentially exposing sensitive data and creating security vulnerabilities. Upwind’s AI network visibility tools address this directly.

“AI is now driving critical decisions across modern systems, yet most organizations still can’t see what their models and agents are actually doing,” says Shachar.

Upwind’s suite aims to change that, providing the factual, end-to-end visibility needed to understand how AI behaves in the real world.

Upwind’s integrated AI security suite represents a significant step towards making secure AI a reality. By embedding security directly into the CNAPP, they’re providing organizations with the tools they need to manage the risks associated with AI adoption. As AI continues to evolve, expect to see more security vendors adopting similar approaches, prioritizing real-time visibility and integrated solutions to protect this rapidly expanding attack surface. The future of AI depends on it.