Why AppSec teams need full exploitability analysis, not just function reachability flags
AppSec scanners are very good at telling you that you might have a problem.
SCA, SAST, ASPM, container scanners, and cloud posture tools are all staring at the same dependency graphs and frameworks, just emitting slightly different views of the same universe of CVEs. The net result for most teams:
- Tens of thousands of open findings
- The same CVE repeated across services and tools
- A wall of 9.8s with no clear sense of which five actually matter this week
Static reachability was supposed to be the fix.
Vendors promised: "If the vulnerable function is not reachable from your code, you can safely ignore the finding." It sounded reasonable: build a call graph, see if you can ever hit the vulnerable symbol, and downgrade everything else.
In practice, reachability is a useful signal; it is not a decision. Most implementations are conservative and mark a large number of findings as reachable. You go from "everything is critical" to "everything is reachable and critical", which is not much of an improvement. You still end up with hundreds or thousands of "reachable" vulnerabilities that all allegedly deserve attention, and you are back to manual triage and gut feel.
Reachability also has structural limitations. It cannot express configuration or data flow, it has no notion of who can actually hit an entry point, and it treats every reachable finding as roughly equivalent.
So in effect, reachability answers only one question: "Can this code be executed in theory somewhere in this application?"
AppSec teams need an answer to a different one: "Can an attacker in our environment use this to cause impact, given how we have configured and deployed it?"
We call that exploitability. Most teams are still stuck between those two questions, especially because conservative reachability analysis tends to mark almost everything as reachable.
In practice that leads to two predictable outcomes: teams still spend time fixing reachable but non-exploitable issues, and they still cannot reliably pick out the small number of truly exploitable issues that matter for their environment.
This post looks at that gap, walks through concrete CVEs where reachability falls down, and explains how we at Konvu model exploitability as a first class concept rather than a boolean flag, so that you can actually shrink the backlog instead of just relabeling it.
What static code reachability actually is
The basic model
Static reachability analysis is call-graph analysis wired into your dependency data.
At a high level, the tool:
- Identifies application entry points (HTTP handlers, CLI commands, scheduled jobs, etc.)
- Builds a call graph from those entry points into your code and dependencies
- Checks whether there exists at least one path to a function or method associated with a CVE
Very loosely:
for each vulnerable_symbol in advisory_db:
    for each entrypoint in app_entrypoints:
        if path_exists(entrypoint, vulnerable_symbol, call_graph):
            mark_reachable(vulnerable_symbol, true)
If any path exists, the function is "reachable". If none do, it's "unreachable".
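In runnable form, that check is just graph search. A minimal Python sketch, where the call graph, entry points, and symbol names are all invented for illustration:

```python
from collections import deque

def path_exists(start, target, call_graph):
    """Breadth-first search: can `start` ever reach `target` in the call graph?"""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        if node == target:
            return True
        for callee in call_graph.get(node, ()):
            if callee not in seen:
                seen.add(callee)
                queue.append(callee)
    return False

# Toy call graph: the HTTP handler eventually reaches the vulnerable sink,
# while the CLI entry point never does.
call_graph = {
    "http_handler": ["order_service"],
    "order_service": ["logger.info"],
    "logger.info": ["JndiLookup.lookup"],  # vulnerable symbol
    "cli_tool": ["report_gen"],
}
entrypoints = ["http_handler", "cli_tool"]

reachable = {
    sym for sym in ["JndiLookup.lookup"]
    if any(path_exists(ep, sym, call_graph) for ep in entrypoints)
}
print(reachable)  # {'JndiLookup.lookup'}
```

Note that this verdict only says a path exists; it says nothing about who can drive traffic down it.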
How it's usually implemented
Most modern tools do some variation of:
- Parse source/bytecode and build a call graph (with varying levels of precision)
- Use framework-specific knowledge to discover entry points:
- Rails controllers, Spring MVC controllers, Django views, etc.
- Map functions/methods to vulnerable symbols from SCA databases:
org.apache.logging.log4j.core.lookup.JndiLookup#lookup, Rails::HTML::Sanitizer#sanitize
- Attach a reachable: true/false flag to each finding
This is often over-approximate for dynamic languages and under-approximate when you rely heavily on reflection, dependency injection, or dynamic routing, but that's a separate discussion.
What reachability does get right
We'll give it credit for three things:
- It eliminates truly dead libraries you never import
- It can de-prioritize dependencies only used in test or tooling code paths
- It's strictly better than "every transitive dependency is critical by default"
If all you had before was a flat SCA list, reachability is a step forward.
The issue is that it's the first step, and many teams stop there.
Reachability vs exploitability: precise definitions
To understand the limits of reachability, we want clean definitions.
Reachable function
A vulnerable function is reachable if:
There exists at least one path in the call graph from an application entry point to that function.
But:
- The entry point for that path might only be callable by internal users or services (for example, an admin API behind SSO/VPN or an internal RPC endpoint).
- The path might only ever be exercised by internal traffic, not by attacker-controlled traffic.
- Parts of the path may depend on configuration or feature flags that are disabled in production.
Static reachability doesn't know any of that; it just sees the graph.
Exploitable vulnerability
A vulnerability is exploitable in your environment only if all of the following are true:
- Attacker access — An external attacker or untrusted tenant can reach the relevant entry point and send requests that exercise that code path.
- Attacker-controlled data flow — Untrusted input can reach the vulnerable sink in the form the exploit needs, for example inside a log message, HTML content, or an HTTP header.
- The environment matches the exploit conditions — The configuration and usage patterns from the advisory or PoC actually hold in this service, for example specific features are enabled or certain tags or headers are allowed.
- Mitigations do not neutralize the exploit — Existing controls do not stop the attack before it has effect, for example strict configurations, sanitizers or WAF rules.
If any one of those is false in your deployment, the vulnerability is not exploitable there, regardless of pure code reachability.
Reachability is necessary but not sufficient
Almost every exploit path is "reachable" in the call graph sense, but the reverse is not true.
You can think of it as:
reachable_vulns = { v | ∃ path(entrypoint, v.sink) }
exploitable_vulns = { v ∈ reachable_vulns | all_conditions_satisfied(v, environment) }
In other words, to be exploitable a vulnerability has to be reachable and all of the exploitability conditions must hold in your environment. Reachability is required, but it is not sufficient on its own; stopping there is what creates both noise and blind spots.
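A toy version of that filter makes the gap visible; the findings and condition flags below are invented for illustration:

```python
# Each finding carries the reachability verdict plus environment-dependent conditions
findings = [
    {"cve": "CVE-A", "reachable": True,  "conditions": {"attacker_input": True,  "feature_enabled": True}},
    {"cve": "CVE-B", "reachable": True,  "conditions": {"attacker_input": False, "feature_enabled": True}},
    {"cve": "CVE-C", "reachable": False, "conditions": {"attacker_input": True,  "feature_enabled": True}},
]

# exploitable_vulns is a strict subset of reachable_vulns:
# reachability is the entry ticket, the conditions are the decision
reachable_vulns   = [f for f in findings if f["reachable"]]
exploitable_vulns = [f for f in reachable_vulns if all(f["conditions"].values())]

print([f["cve"] for f in reachable_vulns])    # ['CVE-A', 'CVE-B']
print([f["cve"] for f in exploitable_vulns])  # ['CVE-A']
```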
Four ways static reachability fails (with concrete CVEs)
Let's make this concrete with four real vulnerabilities where reachability alone is misleading.
Reason 1: It ignores attacker control and real data flow
Example: Log4Shell (CVE-2021-44228)
Advisory-level conditions
Log4Shell requires all of the following:
- Vulnerable Log4j 2 version in use
- Message lookup substitution and JNDI lookups available
- Attacker-controlled log messages including ${jndi:...} payloads
In a simplified form:
cve: CVE-2021-44228
conditions:
  - component: log4j2
    version: "<=2.14.1"
  - config:
      messageLookupSubstitution: enabled
  - feature:
      jndi_lookup: enabled
  - data_flow:
      source: attacker_input
      sink: logger_api
      payload_pattern: "${jndi:"
How SCA + reachability see it
Typical pipeline:
- SCA: "You depend on log4j-core 2.14.1"
- Reachability: "Your HTTP handler calls logger.info(...)"
- Verdict: reachable RCE, critical, fix immediately
The static analysis logic is essentially:
if log4j_version_vulnerable and path_exists(http_handler, logger):
    mark_reachable("CVE-2021-44228")
No further questions asked.
Why exploitability may be false
In many services, one or more of these are true:
- Only internal (not externally exposed / not accepting untrusted user input) subsystems write to the logger (no attacker-controlled content)
- All user input is normalized and scrubbed upstream (e.g., ${ sequences rejected)
- messageLookupSubstitution and/or JNDI lookups are explicitly disabled
Here's a log4j2 configuration that intentionally disables the dangerous pieces:
<!-- log4j2.xml -->
<Configuration status="WARN" packages="">
  <Appenders>
    <Console name="Console" target="SYSTEM_OUT">
      <!-- %m{nolookups} disables message lookups in this layout; setting the
           log4j2.formatMsgNoLookups=true system property is the other common
           mitigation (the flag is read from system properties, not from this file) -->
      <PatternLayout pattern="%d %-5p [%t] %c - %m{nolookups}%n"/>
    </Console>
  </Appenders>
  <Loggers>
    <Root level="info">
      <AppenderRef ref="Console"/>
    </Root>
  </Loggers>
</Configuration>
And many teams also remove the JndiLookup class from the classpath entirely.
From an exploitability engine's perspective, the decision might look like:
{
  "cve": "CVE-2021-44228",
  "service": "payments-worker",
  "reachable": true,
  "exploitable": false,
  "reason": [
    "No attacker-controlled log sources: only internal batch events",
    "log4j2.formatMsgNoLookups=true set for this deployment",
    "JndiLookup class not present on classpath"
  ]
}
Static reachability took you to "maybe"; exploitability analysis takes you to a defensible "no".
Reason 2: It ignores exploit preconditions in advisories
Example: Rails HTML sanitizer XSS (CVE-2024-53985)
Advisory-level conditions
CVE-2024-53985 in Rails::HTML::Sanitizer is exploitable only when:
- HTML5 sanitization is enabled
- The application overrides allowed tags such that pairs like math + style or svg + style are both permitted
We can model that roughly as:
cve: CVE-2024-53985
conditions:
  - component: rails-html-sanitizer
    version: "<= X.Y.Z" # advisory specific
  - config:
      html5_sanitizer: enabled
  - config:
      allowed_tags:
        includes_any_pair:
          - [math, style]
          - [svg, style]
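The includes_any_pair clause is mechanical to evaluate. A hypothetical checker (the function name and data shapes are ours, not from any scanner):

```python
def includes_any_pair(allowed_tags, dangerous_pairs):
    """True if the allowlist contains every tag of at least one dangerous pair."""
    tags = set(allowed_tags)
    return any(tags.issuperset(pair) for pair in dangerous_pairs)

# The tag combinations called out by the advisory
dangerous = [{"math", "style"}, {"svg", "style"}]

# Typical conservative allowlist: condition fails, exploit not possible
assert not includes_any_pair(["strong", "em", "a", "p"], dangerous)

# Permissive override that re-enables the exploit conditions
assert includes_any_pair(["p", "svg", "style"], dangerous)
```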
How reachability sees it
Static view:
- The app uses sanitize in controllers and views
- Those controllers are reachable from public routes
So you get:
if path_exists(public_controller, Rails::HTML::Sanitizer#sanitize):
    mark_reachable("CVE-2024-53985")
And a ticket appears: "reachable XSS in sanitizer".
Why exploitability may be false
Consider a fairly typical Rails initializer:
# config/initializers/sanitizers.rb
Rails::Html::SafeListSanitizer.allowed_tags = %w[
  strong em a p ul ol li br span div
]
# The HTML5 sanitizer is never enabled
# Rails::Html::FullSanitizer is used for untrusted inputs
In this case:
- HTML5 sanitization is not enabled
- math, svg, and style are not in the allowlist
So even though the vulnerable code path is clearly reachable from user-facing controllers, the exploit conditions are not satisfied.
An exploitability-aware decision engine would annotate the finding as:
{
  "cve": "CVE-2024-53985",
  "service": "marketing-site",
  "reachable": true,
  "exploitable": false,
  "reason": [
    "Rails HTML5 sanitizer mode not enabled",
    "Allowed tags do not include math/svg/style combinations"
  ]
}
Static reachability can't express those config-level nuances at all.
Reason 3: It doesn't model dynamic architecture and deployment
Example: Apache HTTP/2 early pushes (CVE-2019-10081)
Advisory-level conditions
CVE-2019-10081 affects Apache HTTP Server's HTTP/2 module (mod_http2) when "very early pushes" are configured via H2PushResource.
Exploitability requires:
- HTTP/2 enabled for a given virtual host
- H2PushResource directives configured for that host
Modelled:
cve: CVE-2019-10081
conditions:
  - module: mod_http2
    enabled: true
  - vhost:
      directive: H2PushResource
      present: true
How inventory + reachability see it
In a typical scanner:
- Your server image includes the mod_http2 module on disk (for example modules/mod_http2.so)
- The module exports the vulnerable functions
From an inventory point of view, the vulnerable code is present and treated as reachable.
No one asks whether mod_http2 is actually enabled, or whether H2PushResource is used.
Why many deployments are not exploitable
Here is a hardened configuration example:
# /etc/httpd/conf/httpd.conf

# Explicitly limit protocols
Protocols http/1.1

# Ensure mod_http2 is not loaded
# LoadModule http2_module modules/mod_http2.so # commented out

<VirtualHost *:80>
    ServerName example.com
    DocumentRoot /var/www/html
    # No H2PushResource directives at all
</VirtualHost>
In this environment:
- mod_http2 is not loaded
- HTTP/2 is not negotiated
- No H2PushResource paths exist
The vulnerable code is technically present in the binary, but there is no protocol-level path an attacker can use to exercise it.
A proper exploitability engine combines:
- httpd -M / module list
- Protocols configuration
- Virtual host configs
...to conclude:
{
  "cve": "CVE-2019-10081",
  "service": "edge-apache",
  "reachable": true,
  "exploitable": false,
  "reason": [
    "mod_http2 not enabled in loaded module list",
    "Protocols=http/1.1 only",
    "No H2PushResource directives configured"
  ]
}
Static reachability sees "code exists in module"; exploitability analysis sees "attackers can't route traffic into it."
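Those two conditions are cheap to derive from raw signals. A sketch that assumes we have the httpd -M output and the virtual host config as plain strings; the parsing is deliberately naive and only meant to show the shape of the check:

```python
def mod_http2_enabled(httpd_m_output):
    """Scan `httpd -M` output for the http2 module entry."""
    return any("http2_module" in line for line in httpd_m_output.splitlines())

def h2_push_configured(vhost_config_text):
    """Look for uncommented H2PushResource directives in a vhost config."""
    for line in vhost_config_text.splitlines():
        if line.strip().startswith("H2PushResource"):
            return True
    return False

# Sample signals from a hardened deployment (contents are illustrative)
httpd_m = """Loaded Modules:
 core_module (static)
 mpm_event_module (shared)
 http_core_module (static)"""

vhost = """
Protocols http/1.1
<VirtualHost *:80>
    ServerName example.com
    # No H2PushResource directives anywhere
</VirtualHost>
"""

exploitable = mod_http2_enabled(httpd_m) and h2_push_configured(vhost)
print(exploitable)  # False
```

A real engine would parse the config properly (includes, conditional blocks, per-vhost scoping), but the decision structure is the same: combine independent signals, fail closed on the advisory's conditions.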
Reason 4: It treats all reachable paths as equal
Example: Tomcat path equivalence RCE (CVE-2025-24813)
Advisory-level conditions
CVE-2025-24813 is a critical Tomcat vulnerability in path equivalence handling, exploitable via the default servlet.
Exploitation for RCE typically relies on all of the following:
- The default servlet being active and write enabled (its readonly attribute set to false).
- Partial PUT support enabled (for example, allowPartialPut="true").
- File-based session persistence using the default storage location.
- The application including a library that can be used in a Java deserialization gadget chain.
Roughly:
cve: CVE-2025-24813
conditions:
  - component: org.apache.catalina.servlets.DefaultServlet
    readonly: false
  - default_servlet:
      allowPartialPut: true
  - manager:
      persistence: file
  - classpath:
      deserialization_gadget: present
How reachability sees it
From a static view:
- Vulnerable Tomcat version detected
- Default servlet in classpath and mapped into the app
So you get: "reachable RCE on all Tomcat instances on this version."
Why risk differs drastically between teams
Consider two configurations.
Team A – Hardened:
<!-- web.xml -->
<servlet>
  <servlet-name>default</servlet-name>
  <servlet-class>org.apache.catalina.servlets.DefaultServlet</servlet-class>
  <init-param>
    <param-name>readonly</param-name>
    <param-value>true</param-value> <!-- safe default -->
  </init-param>
  <init-param>
    <!-- allowPartialPut is a DefaultServlet init-param, not a Connector attribute -->
    <param-name>allowPartialPut</param-name>
    <param-value>false</param-value>
  </init-param>
</servlet>
<!-- context.xml -->
<Manager pathname="" /> <!-- disable file-based session persistence -->
Team B – Permissive:
<!-- web.xml -->
<servlet>
  <servlet-name>default</servlet-name>
  <servlet-class>org.apache.catalina.servlets.DefaultServlet</servlet-class>
  <init-param>
    <param-name>readonly</param-name>
    <param-value>false</param-value> <!-- write enabled -->
  </init-param>
  <!-- partial PUT support left enabled (the default) -->
</servlet>
<!-- context.xml -->
<Manager pathname="SESSIONS.ser" /> <!-- file-based sessions -->
Static reachability treats both as identical: same version, same servlet, same code path.
Exploitability analysis sees:
- Team A: conditions not satisfied → non-exploitable
- Team B: all conditions satisfied → high-risk RCE
Again, reachability has no way to express "this instance is hardened; that one isn't".
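The Team A / Team B split reduces to evaluating three config-derived booleans. A sketch over hand-extracted config values; the dict shape is ours, invented for illustration:

```python
def tomcat_24813_exploitable(cfg):
    """All advisory conditions must hold for the RCE chain to be viable."""
    return (
        cfg["default_servlet_readonly"] is False   # writes allowed
        and cfg["allow_partial_put"] is True       # partial PUT enabled
        and cfg["session_persistence_file"] != ""  # file-based sessions
    )

# Values extracted from the two teams' web.xml / context.xml above
team_a = {"default_servlet_readonly": True,  "allow_partial_put": False, "session_persistence_file": ""}
team_b = {"default_servlet_readonly": False, "allow_partial_put": True,  "session_persistence_file": "SESSIONS.ser"}

print(tomcat_24813_exploitable(team_a))  # False
print(tomcat_24813_exploitable(team_b))  # True
```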
What full exploitability analysis looks like
To move beyond reachability, we need a different mental model: CVE → structured exploitability conditions → evaluation against your environment.
Transform CVEs into machine-readable conditions
Instead of just storing "this symbol is vulnerable", we treat each CVE as a small program: a set of conditions that must hold true in order for exploitation to be possible.
Conceptually:
class ExploitCondition:
    def __init__(self, id, conditions):
        self.id = id
        self.conditions = conditions  # callables over environment

    def is_exploitable(self, environment):
        reasons = []
        for p in self.conditions:
            ok, reason = p(environment)
            if not ok:
                return False, [reason]
            reasons.append(reason)
        return True, reasons
For Log4Shell, the conditions might look like:
def p_version(env):
    return env.log4j_version <= "2.14.1", "vulnerable log4j version in use"

def p_message_lookups(env):
    return env.log4j_config.formatMsgNoLookups is False, "message lookups enabled"

def p_jndi_available(env):
    return env.classpath.contains("JndiLookup"), "JNDI lookup class present"

def p_attacker_logs(env):
    return env.data_flows.exists("attacker_input -> logger"), "attacker-controlled logs"
And ExploitCondition("CVE-2021-44228", [p_version, p_message_lookups, p_jndi_available, p_attacker_logs]) becomes a reusable rule.
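Putting the pieces together, here is a self-contained version of that evaluation against a stub environment. Every name here is illustrative, and a real environment model would carry far more signals:

```python
from types import SimpleNamespace

def evaluate(conditions, env):
    """Return (exploitable, reasons); stop at the first failing condition."""
    reasons = []
    for cond in conditions:
        ok, reason = cond(env)
        if not ok:
            return False, [f"failed: {reason}"]
        reasons.append(reason)
    return True, reasons

# Stub environment mirroring the Log4Shell predicates: vulnerable version,
# but message lookups are disabled, so the chain breaks at condition two.
env = SimpleNamespace(
    log4j_version="2.14.1",
    format_msg_no_lookups=True,   # mitigation applied
    jndi_class_present=True,
    attacker_controlled_logs=True,
)

conditions = [
    lambda e: (e.log4j_version <= "2.14.1", "vulnerable log4j version in use"),
    lambda e: (not e.format_msg_no_lookups, "message lookups enabled"),
    lambda e: (e.jndi_class_present, "JNDI lookup class present"),
    lambda e: (e.attacker_controlled_logs, "attacker-controlled logs"),
]

exploitable, reasons = evaluate(conditions, env)
print(exploitable, reasons)  # False ['failed: message lookups enabled']
```

The short-circuit matters in practice: the first failed condition is exactly the evidence you attach to the "not exploitable" decision.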
Combine static, config, runtime, and org context
To evaluate exploitability conditions, you need signals from all layers of the system, not just the call graph.
Code
- Where is the vulnerable symbol used?
- What arguments are passed? Are they attacker controlled or constants?
Configuration
- Log4j XML or properties, Rails initializers, Apache or Tomcat configs.
- Framework modes and feature flags.
Runtime and deployment
- Are the vulnerable symbols executed at runtime?
- Which services are actually deployed and exposed to the internet.
- Which routes receive untrusted traffic and how often.
- Environment specific toggles that change behavior.
Organizational and control context
- Asset criticality and data classification.
- Network segmentation, WAF rules, and authentication or authorization models.
The evaluation output should look like a decision, not a flag:
{
  "cve": "CVE-2019-10081",
  "service": "cdn-edge",
  "reachable": true,
  "exploitable": false,
  "status": "not_exploitable",
  "evaluated_at": "2025-02-10T13:24:05Z",
  "signals": {
    "httpd_modules": {
      "command": "httpd -M",
      "output_sample": "Loaded Modules: core_module, mpm_event_module, http_core_module, ...",
      "mod_http2_enabled": false
    },
    "vhost_configs": {
      "paths": [
        "/etc/httpd/conf/httpd.conf",
        "/etc/httpd/conf.d/*.conf"
      ],
      "h2_push_resource_directives": [],
      "http2_protocols_enabled": false
    }
  },
  "failed_conditions": [
    {
      "id": "mod_http2_enabled",
      "description": "mod_http2 is loaded",
      "result": false,
      "evidence": {
        "source": "httpd -M",
        "snippet": "http2_module (shared) not present in loaded modules"
      }
    },
    {
      "id": "h2_push_configured",
      "description": "H2PushResource is configured on at least one virtual host",
      "result": false,
      "evidence": {
        "source": "/etc/httpd/conf.d/*.conf",
        "snippet": "no H2PushResource directives found in parsed vhost configs"
      }
    }
  ],
  "missing_signals": []
}
This is the level of detail that lets an AppSec engineer say "yes, auto-close this" with a straight face.
How Konvu goes beyond reachability
This is the problem we built Konvu to solve.
The Log4j, Rails, Apache, and Tomcat examples earlier in the post are exactly the style of exploitability conditions we encode in Konvu's vulnerability database and evaluate automatically for each finding in your environment.
Most of us have lived the "reachable vs exploitable" pain on the receiving end of scanners. Konvu's design is essentially: "What would an experienced AppSec engineer do to assess exploitability if they had unlimited time, and how do we automate that?"
Reachability as one signal, not the decision
We ingest reachability data from existing tools and our own static analysis. We don't throw it away.
But in Konvu's system, reachable=true is just one feature among many in the exploitability model:
{
  "finding_id": "SCA-12345",
  "cve": "CVE-2021-44228",
  "signals": {
    "reachable": true,
    "service_internet_exposed": true,
    "attacker_to_logger_flow": false,
    "log4j_lookups_enabled": false,
    "jndi_class_present": false,
    "asset_criticality": "high"
  }
}
The decision engine reasons over this full set of signals, not just the reachable bit.
Exploitability vulnerability database
We maintain an internal exploitability vulnerability database that encodes CVEs as structured conditions.
Sources include:
- Vendor advisories and NVD entries
- Linked references and PoCs
- Diff analyses of vulnerable vs patched releases
- Dependency graphs and API usage patterns
For each CVE, we build an internal representation along the lines of:
cve: CVE-2025-24813
component: apache-tomcat
affected_versions: "9.0.0 - 9.0.X"
sinks:
  - org.apache.catalina.servlets.DefaultServlet
exploitability:
  conditions:
    - name: writable_default_servlet
      query: tomcat.web_xml.default_servlet.readonly == false
    - name: partial_put_enabled
      query: tomcat.web_xml.default_servlet.allowPartialPut == true
    - name: file_session_persistence
      query: tomcat.context_xml.manager.pathname != ""
These conditions are testable against real environment data. The vulnerability database is kept current with AI-assisted extraction from new advisories and human review for correctness.
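Query strings like these can be interpreted against a parsed config tree. A toy interpreter for exactly this style of condition; the dotted-path convention and dict shapes are ours, invented for illustration:

```python
def get_path(tree, dotted):
    """Walk a nested dict by a dotted path like 'default_servlet.readonly'."""
    node = tree
    for key in dotted.split("."):
        node = node[key]
    return node

def check(tree, query):
    """Evaluate a (path, op, expected) triple against the config tree."""
    path, op, expected = query
    value = get_path(tree, path)
    return value == expected if op == "==" else value != expected

# Parsed configuration for a hardened instance (values are illustrative)
hardened = {
    "default_servlet": {"readonly": True, "allowPartialPut": False},
    "manager": {"pathname": ""},
}

queries = [
    ("default_servlet.readonly", "==", False),        # writable_default_servlet
    ("default_servlet.allowPartialPut", "==", True),  # partial_put_enabled
    ("manager.pathname", "!=", ""),                   # file_session_persistence
]

failed = [q[0] for q in queries if not check(hardened, q)]
print(failed)  # all three conditions fail on the hardened instance
```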
Agentic analysis in your environment
On top of that vulnerability database, Konvu runs AI agents that orchestrate the actual analysis work around each finding.
For a single CVE in a single service, Konvu orchestrates agents that run deterministic tools to:
- Pull version and component data — Read SCA or SBOM data to confirm which libraries and versions are in use. Map those components to entries in the vulnerability database.
- Analyse the code — Find all usages of the vulnerable symbol. Trace data flows from known untrusted sources into that sink.
- Read and normalise configuration — Check code properties, initializers, or configurations. Extract the config flags and modes that matter for this CVE.
- Look at runtime and deployment — Check whether the vulnerable dependency and functions are executed at runtime. Check if the service is deployed and internet exposed. Check whether it serves untrusted tenants or only internal callers.
- Evaluate the CVE-specific conditions — Run the exploitability conditions from the vulnerability database against the signals collected above.
The output for each finding includes:
- A final classification: exploitable, not_exploitable, or inconclusive.
- The evaluated condition set, showing which conditions passed and which failed.
- Evidence snippets, such as:
- Extracts from configuration files.
- Code locations for relevant calls.
- Short summaries of the data flows that matter.
This is what replaces the "someone has to read the advisory and grep configs" loop.
Integrated back into your existing tools
We expect teams to keep their current scanners. Konvu doesn't try to be "yet another SCA."
Instead, we:
- Ingest findings from your existing SCA/ASPM sources
- Enrich those findings with exploitability decisions
- Push the results back into your existing tools:
- SCA / ASPM UI
- Ticketing systems like Jira
- SCM comments / CI annotations
That enables workflows like:
- Auto-close findings marked not_exploitable with strong evidence
- Auto-open tickets for exploitable findings with pre-filled context and evidence
The net effect:
- The backlog shrinks because non-exploitable-but-reachable issues are removed
- The remaining queue is smaller, higher-signal, and easier to justify to engineers and leadership
In day to day terms, that means your scanners can keep producing as much signal as they like, but engineering teams see a much smaller queue of issues that are actually exploitable in your environment, each with a concrete explanation. You move from arguing about scanner output to reviewing evidence, making a decision, and closing the loop.
Reachability is a hint, not a verdict
Static reachability was a useful reaction to "everything is critical". It removed dead code and some obvious false positives.
On its own, it does not:
- Understand attacker control or real data flow
- Model the preconditions described in advisories and PoCs
- See configuration, deployment topology, or organizational context
As long as teams treat reachability as a proxy for risk, they will keep:
- Burning time on reachable but non-exploitable issues
- Struggling to explain to the business why those tickets matter
- Missing subtle but genuinely dangerous paths that are exploitable in their environment
Konvu's view is simple:
- Detection and reachability answer: "What might be risky in theory?"
- Exploitability analysis answers: "What is actually risky here, right now, in our environment?"
We have built Konvu around that second question, and we back each decision with concrete evidence from code, configuration, and runtime signals.
If you are an AppSec team that still uses reachability as your main prioritisation signal, we'd love to talk. We can walk through a handful of findings from your existing scanners, and show you how many "reachable" issues become "not exploitable, safe to ignore" once you look at the full context.
That is when the backlog starts to shrink, and your attention can move to the small number of exploit paths that really deserve it.
