Capability * Intent != Risk
Why Traditional Risk Models Don't Fit Modern Cyber Threat Intelligence
In the world of threat intelligence, one of the oldest formulas in the book is deceptively simple: Capability x Intent = Risk.
On paper, it works. If an adversary has both the means and the motivation to harm you, then they represent a tangible risk. National security agencies have used this framework for decades to assess and prioritize threats - considering things like access to weapons, organizational infrastructure, or ideology to determine if a threat actor might act against a nation or company.
But in cybersecurity, this model breaks down - badly.
When Everyone Has Capability
In the physical world, "capability" might mean warheads, tanks, trained operatives, or logistical reach. But in the digital world? Capability is just a computer and an internet connection. That's it.
Thanks to the proliferation of open-source offensive tools, malware-as-a-service platforms, cracked versions of commercial offensive security frameworks, and communities that share tactics and exploits freely, the bar for "capability" is on the floor. This leads us to an uncomfortable truth: everyone is capable.
From hobbyists to hacktivists, from cybercriminals to nation-state proxies, the potential to harm an organization is effectively democratized. If everyone has capability, then the risk equation quickly degrades - we go from analyzing a focused set of capable actors to assigning capability to virtually the entire internet.
Intent: Broad, Noisy, and Often Meaningless
The second half of the equation - intent - isn't much better. In the cyber realm, intent is hard to measure. It's often reduced to assumed financial motivation, ideological statements in forums, or geopolitical context. Most cybercriminals want money. Most hacktivists want attention. And most nation-states want strategic advantage.
So if you conclude that most of the internet has both the capability and a plausible motivation to attack you, where does that leave your risk calculation? Every actor becomes a risk. Which means your risk model no longer helps you prioritize - it just drowns you in theoretical threats.
The Phantom Menace of Intelligence-Led Ghost Chasing
When risk models based on capability and intent are applied uncritically in cybersecurity, they become an exercise in ghost-chasing. Teams react to vague reports about foreign APTs exploiting brand-new CVEs or to third-party analyses of malware seen "in the wild," often with no relevance to their environment. The result? Time and energy spent preparing for attacks that may never come, from actors who may never care about your organization.
Meanwhile, real threats - like a malvertising campaign delivering infostealers to your workforce through search engine results - are ignored because they don't come with a dramatic nation-state label or weren't mentioned in the last big CTI webinar.
Measurable Threat > Hypothetical Risk
Risk models should support action, not paralyze decision-making. That's why the most valuable form of cyber threat intelligence is the kind grounded in observables:
- Indicators that appear in your logs - file hashes, domains, C2 infrastructure, malvertising sites, etc.
- Tactics that match activity in your environment - spam, phishing, watering hole attacks, scam texts, etc.
- Malware families that show up in sandbox analysis or in your EDR block logs
- Credential theft campaigns affecting your users, your domains, your infrastructure (think password spray attacks you see in your VPN logs)
- EDR blocks for USB malware, obfuscated PowerShell, etc.
You don't need to know who is behind the threat - not right away. But if they're in your network or hammering your perimeter, they matter. And they deserve focus.
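The observable-first approach above can be sketched in a few lines. This is a minimal illustration, not a production detection pipeline: the indicator set, log fields, and values are hypothetical examples.

```python
# Minimal sketch: match known indicators (hashes, domains, IPs) against a
# parsed log entry. All indicator values and field names here are hypothetical.
KNOWN_INDICATORS = {
    "hash": {"d41d8cd98f00b204e9800998ecf8427e"},
    "domain": {"malvertising-example.test"},
    "ip": {"203.0.113.7"},
}

def match_indicators(log_entry: dict) -> list[str]:
    """Return the indicator types from KNOWN_INDICATORS seen in this entry."""
    hits = []
    for ioc_type, values in KNOWN_INDICATORS.items():
        # A missing field simply produces no match for that indicator type
        if log_entry.get(ioc_type) in values:
            hits.append(ioc_type)
    return hits

# Usage: an entry whose domain matches a known malvertising site
entry = {"ip": "198.51.100.1", "domain": "malvertising-example.test"}
print(match_indicators(entry))  # -> ['domain']
```

The point of the sketch is the prioritization logic: a hit against your own telemetry is actionable immediately, with or without attribution.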
Rebuilding the Model for Cyber Reality
This isn't to say that strategic-level threat modeling has no place. But instead of over-indexing on abstract intent and assumed capability, organizations should ground their prioritization in the following:
- Telemetry-Driven Intelligence: If you see it in your logs, it's relevant - and a demonstrable, present danger
- Attribution by Behavior, not Reputation: Track attacker infrastructure, tools, and patterns over time to cluster activity. If that sender IP or malicious URL has shown up in previous phishing campaigns, you're the target of a persistent threat and need to track it. The most valuable metric to executive leadership isn't "we've been attacked" but rather "we've experienced the third attack from a threat group using this infrastructure this quarter."
- Threat Classification Over Time: Build internal knowledge of who is attacking you, not who might attack someone else. It doesn't matter what FIN29 is doing with an iOS 0-day against the Polish government; what matters is the malware you're seeing in your environment, especially if it's slipping past your EDR.
- Resource Allocation Based on Actionability: Focus on what you can detect, block, mitigate, and respond to - not what might happen in a theoretical scenario. Test your defenses against identified threats using your Adversary Emulation (Red/Purple) Teams and leverage your Threat Hunt team to find what might have slipped through the net.
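The behavior-based clustering described above - counting how often the same infrastructure recurs across campaigns - can be sketched simply. The campaign records, IPs, and URLs below are hypothetical examples used only to illustrate the counting logic.

```python
# Minimal sketch: flag attacker infrastructure that recurs across observed
# campaigns, the "third attack from this infrastructure" metric.
# All campaign records below are hypothetical examples.
from collections import Counter

campaigns = [
    {"campaign": "phish-q1-01", "sender_ip": "198.51.100.23", "url": "login-portal.test"},
    {"campaign": "phish-q1-02", "sender_ip": "198.51.100.23", "url": "mfa-reset.test"},
    {"campaign": "phish-q1-03", "sender_ip": "198.51.100.23", "url": "login-portal.test"},
    {"campaign": "spam-q1-01", "sender_ip": "192.0.2.99", "url": "lottery-win.test"},
]

# Count campaigns per sender IP
infra_counts = Counter(c["sender_ip"] for c in campaigns)

# Infrastructure seen in more than one campaign suggests a persistent threat
persistent = {ip: n for ip, n in infra_counts.items() if n > 1}
print(persistent)  # -> {'198.51.100.23': 3}
```

Even this trivial aggregation yields the kind of statement leadership can act on: one IP behind three campaigns in a quarter is a cluster worth naming and tracking.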
Conclusion: Move from Theory to Reality
The traditional Capability x Intent = Risk model may work for geopolitics, but in cybersecurity, it creates noise, not clarity. When everyone is capable and most are motivated, this formula flattens threat prioritization to the point of uselessness.
Risk in cybersecurity should be grounded in evidence, telemetry, and targeted activity. It's time to leave behind the seductive simplicity of outdated models and embrace a threat intelligence approach that focuses on what's real - not just what's possible.