Methodology
What the Drift Score measures
The Drift Score is |GitHub CVSS - NVD CVSS|: the absolute difference between the CVSS base scores assigned by NVD and the GitHub Advisory Database for the same CVE. Scores are compared only when both sources use the same CVSS version (e.g., both v3.1). Cross-version comparisons (v3.1 vs. v4.0) are classified as data gaps, not conflicts.
Note: the `drift_score` and `cvss_variance` fields in the data are currently equivalent; both hold the raw score delta. The project retains both fields in case the formula evolves in the future.
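The score definition above can be sketched as a small helper. This is an illustrative function, not the project's actual code; the parameter names are assumptions:

```python
def drift_score(nvd_score, nvd_version, gh_score, gh_version):
    """Return |GitHub CVSS - NVD CVSS|, or None when the scores are
    not comparable (missing score or cross-version mismatch)."""
    if nvd_score is None or gh_score is None:
        return None  # one source has no score: a data gap
    if nvd_version != gh_version:
        return None  # e.g. v3.1 vs. v4.0: a gap, not a conflict
    return round(abs(gh_score - nvd_score), 1)

# Example: NVD scores a CVE 7.5, GitHub scores it 9.8, both CVSS v3.1.
print(drift_score(7.5, "3.1", 9.8, "3.1"))  # 2.3
```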
Classification
Every CVE in the dataset is assigned a `drift_type`:
- conflict — Both NVD and GitHub have assigned a CVSS score using the same version, and the scores differ.
- gap — One or both sources have not assigned a score, or the scores use different CVSS versions (cross-version mismatch).
- rejected — NVD has marked the CVE as Rejected, but GitHub still maintains an advisory for it. This is an existence dispute, not a score dispute, so the Drift Score is 0.0.
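The three classes can be expressed as a small decision function. This is a sketch of the rules above, not the pipeline's real code; the handling of CVEs where both scores agree is an assumption:

```python
def classify(nvd_score, nvd_version, gh_score, gh_version,
             nvd_rejected=False):
    """Assign a drift_type per the classification rules."""
    if nvd_rejected:
        return "rejected"  # existence dispute; Drift Score is 0.0
    if nvd_score is None or gh_score is None:
        return "gap"       # at least one source has not scored the CVE
    if nvd_version != gh_version:
        return "gap"       # cross-version mismatch
    if nvd_score != gh_score:
        return "conflict"  # same CVSS version, different scores
    return None            # scores agree (assumed: no drift recorded)
```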
Data sources
- NVD API 2.0 — CVSS scores, CWE assignments, CPE strings, publication dates, and analysis status. For CVEs with status "Analyzed" or "Modified," the score reflects NVD's independent assessment. For CVEs with status "Deferred," NVD did not independently analyze the CVE — the score is the CNA-provided score passed through the NVD API. Approximately 7% of conflicts in this dataset fall into this category.
- GitHub Advisory Database — CVSS scores (primarily v3.1), affected package versions, and GHSA identifiers. GitHub's scores are not always independent assessments — many come from maintainer-submitted advisories or CNA-provided vectors.
Both sources are fetched every six hours via GitHub Actions. After the initial backfill, only incremental updates are fetched (last 25 hours for NVD, last 48 hours for GitHub).
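The NVD incremental window can be built like this. `lastModStartDate` and `lastModEndDate` are real NVD API 2.0 parameters, but the surrounding code is a sketch, not the project's fetcher:

```python
from datetime import datetime, timedelta, timezone

NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def nvd_incremental_params(window_hours=25):
    """Query parameters for CVEs modified in the last N hours
    (25 for NVD, per the schedule above)."""
    now = datetime.now(timezone.utc)
    start = now - timedelta(hours=window_hours)
    return {
        "lastModStartDate": start.isoformat(timespec="seconds"),
        "lastModEndDate": now.isoformat(timespec="seconds"),
    }
```

The 25-hour window deliberately overlaps the six-hour schedule, so a CVE modified just before the previous run is still picked up.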
Known limitations
- Ecosystem bias. GitHub Advisory Database covers software package ecosystems (npm, Maven, pip, Go, NuGet, Composer, RubyGems, Rust). CVEs for hardware, firmware, network appliances, and non-packaged software are largely absent. The conflict rate reported here is representative of these ecosystems, not all CVEs.
- Survivorship bias. Only ~4.5% of CVEs in the dataset have scores from both NVD and GitHub Advisory. The conflict rate applies to this small, non-random subset of dual-scored CVEs. It should not be generalized to "X% of all CVEs have conflicting scores."
- CNA pass-through. ~7% of conflict CVEs have NVD status "Deferred." For these, the "NVD score" is actually the CNA-provided score — not an independent NVD assessment. The leaderboard provides a filter to exclude these.
- GitHub upstream sources. GitHub Advisory scores are not always independent assessments. When GitHub and the assigning CNA agree on a score but NVD independently re-scores differently, the measured "disagreement" is NVD-vs-CNA rather than NVD-vs-GitHub.
- Temporal alignment. NVD and GitHub data are fetched in the same CI run but not simultaneously. For very new CVEs, one source may have been updated between fetch times.
- CVSS calculator rounding. Different implementations of the CVSS specification can produce slightly different scores from identical vector strings. ~8% of conflicts have a variance of exactly 0.1, and deltas of 0.2-0.3 may also be implementation artifacts rather than genuine analytical disagreements.
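Rounding artifacts of exactly 0.1 can arise from how an implementation rounds up to one decimal. CVSS v3.1 specifies an integer-based Roundup precisely to avoid floating-point noise; a naive ceiling can differ by 0.1. The input value below is an illustrative float, not a real CVE's subscore:

```python
import math

def roundup_naive(x):
    """Round up to one decimal the obvious way: subject to float error."""
    return math.ceil(x * 10) / 10

def roundup_spec(x):
    """Roundup as specified in CVSS v3.1 (integer-based)."""
    i = int(round(x * 100000))
    if i % 10000 == 0:
        return i / 100000.0
    return (i // 10000 + 1) / 10.0

x = 0.1 + 0.2            # 0.30000000000000004 in IEEE-754 doubles
print(roundup_naive(x))  # 0.4  (float noise pushed it over the edge)
print(roundup_spec(x))   # 0.3
```

Two calculators applying these two strategies to the same vector string would report scores 0.1 apart, which is why such small deltas are treated cautiously here.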
- Terminology. The field `cvss_variance` in the data is the range (max - min), not the statistical variance (mean of squared deviations). It is equivalent to |GitHub CVSS - NVD CVSS|.
- CNA minimum threshold. The CNA analysis page only includes CNAs with 5 or more dual-scored CVEs, to avoid misleading statistics from small samples. The CWE analysis page applies the same threshold.
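The 5-CVE minimum threshold amounts to a simple group-and-filter. The data shapes here are assumptions for illustration:

```python
from collections import Counter

MIN_DUAL_SCORED = 5  # threshold used by the CNA and CWE analysis pages

def eligible_cnas(records):
    """records: iterable of (cna_name, is_dual_scored) pairs.
    Return the CNAs with at least MIN_DUAL_SCORED dual-scored CVEs."""
    counts = Counter(cna for cna, dual in records if dual)
    return {cna for cna, n in counts.items() if n >= MIN_DUAL_SCORED}

sample = [("mitre", True)] * 6 + [("github", True)] * 3 + [("mitre", False)]
print(eligible_cnas(sample))  # {'mitre'}
```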
Update frequency
Data is updated every six hours (00:00, 06:00, 12:00, 18:00 UTC) and on every push to the main branch via a GitHub Actions pipeline. After the initial historical backfill, only CVEs modified in the last 25 hours (NVD) or 48 hours (GitHub) are re-fetched. All aggregate indexes are recomputed from scratch on every run.
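That schedule corresponds to a GitHub Actions trigger along these lines (a sketch of the trigger block only; the project's actual workflow file may differ):

```yaml
on:
  schedule:
    - cron: "0 0,6,12,18 * * *"   # every six hours, UTC
  push:
    branches: [main]
```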
How to cite
Gamblin, J. (2026). The Consensus Engine: Tracking CVSS Scoring Divergence Between NVD and GitHub Advisory Database. RogoLabs. https://github.com/RogoLabs/consensus-engine