The original post: /r/netsec by /u/skimfl925 on 2025-03-05 03:31:20.

I came across an interesting case that I wanted to share with r/netsec - it shows how traditional vulnerability scoring systems can fall short when prioritizing vulnerabilities that are actively being exploited.

The vulnerability: CVE-2024-50302

This vulnerability was just added to CISA’s KEV (Known Exploited Vulnerabilities) catalog today, but if you were looking at standard metrics, you probably wouldn’t have prioritized it:

  • Base CVSS: 5.5 (MEDIUM)
  • CVSS-BT (with temporal): 5.5 (MEDIUM)
  • EPSS Score: 0.04% (extremely low probability of exploitation)

But here’s the kicker - despite these metrics, this vulnerability is actively being exploited in the wild.

Why standard vulnerability metrics let us down:

I’ve been frustrated with vulnerability management for a while, and this example hits on three problems I consistently see:

  1. Static scoring: Base CVSS scores are frozen in time, regardless of what’s happening in the real world
  2. Temporal limitations: Even CVSS-BT (Base+Temporal) often doesn’t capture actual exploitation activity well
  3. Probability vs. actuality: EPSS is great for statistical likelihood, but can miss targeted exploits

A weekend project: Threat-enhanced scoring

As a side project, I’ve been tinkering with an enhanced scoring algorithm that incorporates threat intel sources to provide a more practical risk score. I’m calling it CVSS-TE.

For this specific vulnerability, here’s what it showed:

Before CISA KEV addition:

  • Base CVSS: 5.5 (MEDIUM)
  • CVSS-BT: 5.5 (MEDIUM)
  • CVSS-TE: 7.0 (HIGH) - Already elevated due to VulnCheck KEV data
  • Indicators: VulnCheck KEV

After CISA KEV addition:

  • Base CVSS: 5.5 (MEDIUM)
  • CVSS-BT: 5.5 (MEDIUM)
  • CVSS-TE: 7.5 (HIGH) - Further increased
  • Indicators: CISA KEV + VulnCheck KEV

Technical implementation

Since this is r/netsec, I figure some of you might be interested in how I approached this:

The algorithm:

  1. Uses standard CVSS-BT score as a baseline
  2. Applies a quality multiplier based on exploit reliability and effectiveness data
  3. Adds threat intelligence factors from various sources (CISA KEV, VulnCheck, EPSS, exploit count)
  4. Uses a weighted formula to prevent dilution of high-quality exploits

The basic formula is: CVSS-TE = min(10, CVSS-BT_Score * Quality_Multiplier + Threat_Intel_Factor - Time_Decay)
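
To make that concrete, here's a minimal Python sketch of the formula (the function and parameter names are mine for illustration, not from the actual repo):

```
def cvss_te(cvss_bt: float, quality_multiplier: float,
            threat_intel_factor: float, time_decay: float = 0.0) -> float:
    """Threat-enhanced score: CVSS-BT baseline scaled by exploit quality,
    boosted by threat intel, reduced by time decay, capped at 10."""
    return min(10.0, cvss_bt * quality_multiplier + threat_intel_factor - time_decay)
```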

Threat intel factors are weighted roughly like this:

  • CISA KEV presence: +1.0
  • VulnCheck KEV presence: +0.8
  • High EPSS (≥0.5): +0.5
  • Multiple exploit sources present: +0.25 to +0.75 based on count
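
And a rough sketch of how those weights might roll up into the Threat_Intel_Factor (the multi-source bonus scaling is my guess at how the +0.25 to +0.75 range maps to counts, not the exact implementation):

```
def threat_intel_factor(in_cisa_kev: bool, in_vulncheck_kev: bool,
                        epss: float, exploit_source_count: int) -> float:
    """Sum the threat intel weights listed above."""
    factor = 0.0
    if in_cisa_kev:
        factor += 1.0
    if in_vulncheck_kev:
        factor += 0.8
    if epss >= 0.5:
        factor += 0.5
    if exploit_source_count >= 2:
        # assumed scaling: 2 sources -> +0.25, 3 -> +0.5, 4+ -> +0.75
        factor += min(0.75, 0.25 * (exploit_source_count - 1))
    return factor
```

For example, under this sketch a CVE on both KEV lists with a negligible EPSS would pick up 1.8 from the indicators alone, before the quality multiplier and the cap at 10 come into play.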

The interesting part

What makes this vulnerability particularly interesting is the contrast between its EPSS score (0.04%, which is tiny) and the fact that it’s being actively exploited. This is exactly the kind of case that probability-based models can miss.

For me, it’s a validation that augmenting traditional scores with actual threat intel can catch things that might otherwise slip through the cracks.

I made a thing

I built a small lookup tool at github.io/cvss-te where you can search for CVEs and see how they score with this approach.

The code and methodology are on GitHub if anyone wants to take a look. It's just a weekend project, so there's plenty of room for improvement - I'd appreciate any feedback or suggestions from the community.

Anyone else run into similar issues with standard vulnerability metrics? Or have alternative approaches you’ve found useful?