Every recommendation traces to a specific, citable federal data signal. If the data doesn't support a recommendation, we say so — we don't fill gaps with generic capture playbook advice.
CPARS performance ratings are not publicly accessible. Any platform claiming to incorporate CPARS data is misleading its users. GovBiz.ai uses publicly verifiable proxy signals instead and documents this explicitly.
Vulnerability scores are computed by a rule-based scoring engine — not a machine learning model. Each signal is independently calculated and weighted. The rules are deterministic and auditable.
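As a rough illustration, a deterministic signal registry could be structured like the sketch below. The `Signal` dataclass, field names, and weights are illustrative assumptions, not our actual schema.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Signal:
    """One independently computed, auditable rule (illustrative schema)."""
    name: str
    weight: float                     # contribution toward the composite score
    compute: Callable[[dict], float]  # pure function of public-record data

# Deterministic: identical FPDS inputs always yield identical scores,
# and every rule can be inspected and tested in isolation.
SIGNALS = [
    Signal("modification_frequency", 20.0,
           lambda c: len(c["modifications"]) / max(c["years_active"], 1.0)),
    Signal("pop_extensions", 25.0,
           lambda c: float(sum(1 for m in c["modifications"] if m["is_extension"]))),
]
```

Because each rule is a pure function of the public record, a score can always be reproduced and audited after the fact.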
Contracts with more than 3 modifications per year signal instability. Each modification is a data point in the public FPDS record. High-frequency modification patterns correlate with scope creep, performance issues, or budget instability.
Bridge contracts and period-of-performance (PoP) extensions (recorded in FPDS as modifications, typically numbered in the P00xxx series) indicate the government delayed re-competition. Two or more extensions substantially raise the vulnerability score: the government is continuing a contract it chose not to recompete on schedule.
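A minimal sketch of both modification-based signals, assuming the FPDS records have already been parsed into dicts with an `is_pop_extension` flag (an assumed, pre-derived field, not a literal FPDS column):

```python
from datetime import date

def mods_per_year(mods: list[dict], awarded: date, as_of: date) -> float:
    """Average modification frequency over the contract's life so far."""
    years = max((as_of - awarded).days / 365.25, 1e-9)
    return len(mods) / years

def instability_flags(mods: list[dict], awarded: date, as_of: date) -> dict:
    extensions = sum(1 for m in mods if m.get("is_pop_extension"))
    return {
        "high_mod_frequency": mods_per_year(mods, awarded, as_of) > 3.0,  # >3 mods/year
        "repeated_extensions": extensions >= 2,  # two or more PoP extensions
    }
```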
Contracts that have not been competitively re-awarded in 5+ years represent structural vulnerability. This signal is cross-referenced against the original award date and subsequent modifications.
Agencies with small business (SB) goal shortfalls face regulatory pressure to set aside contracts. Contracts held by large businesses at agencies below their SB goals are disproportionately vulnerable to set-aside reclassification at recompete.
Declining award values across modifications (after adjusting for scope changes) may indicate the government is reducing scope in anticipation of transition, or is dissatisfied with cost performance.
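One simple way to detect that pattern is a least-squares slope over the sequence of obligation values. This is a sketch, and it assumes scope adjustment has already happened upstream:

```python
import statistics

def obligation_trend(values: list[float]) -> float:
    """Least-squares slope of scope-adjusted obligations across sequential mods."""
    n = len(values)
    if n < 3:
        return 0.0  # too few points to call a trend
    x_mean, y_mean = (n - 1) / 2, statistics.fmean(values)
    num = sum((x - x_mean) * (y - y_mean) for x, y in enumerate(values))
    den = sum((x - x_mean) ** 2 for x in range(n))
    return num / den  # negative slope suggests a declining award pattern

# obligation_trend([4.2e6, 3.9e6, 3.4e6]) -> -400000.0 (declining)
```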
If the current incumbent lost a prior competition on the same vehicle or for the same agency/NAICS combination, that history is a forward-looking vulnerability indicator.
If a signal cannot be computed due to missing data, its contribution is set to 0.0 and the absence is recorded in the evidence chain. Scores are never imputed or estimated.
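In code, that policy is simple to enforce. A minimal sketch, with illustrative names:

```python
from dataclasses import dataclass, field

@dataclass
class SignalResult:
    name: str
    contribution: float
    evidence: list[str] = field(default_factory=list)

def score_signal(name: str, weight: float, raw: float | None) -> SignalResult:
    if raw is None:
        # Missing data contributes exactly 0.0; the gap itself becomes evidence.
        return SignalResult(name, 0.0,
                            [f"{name}: source records unavailable; contribution 0.0"])
    return SignalResult(name, raw * weight, [f"{name}: raw={raw:.3f} x weight={weight}"])
```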
PTW ranges are derived from two primary sources: FPDS historical award values for the same NAICS code and agency combination, and GSA CALC+ labor rate benchmarks cross-referenced with BLS wage data.
GovBiz.ai presents PTW as ranges, not point estimates. False precision in PTW analysis is a common failure mode — a $4.2M award does not mean the next award will be $4.2M. We show the distribution of historical awards and the percentile range, not a single number.
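A percentile band over the historical awards for one agency/NAICS pair captures this. The 25th to 75th percentile default below is an assumed choice for illustration, not our published methodology:

```python
import statistics

def ptw_range(awards: list[float], low_pct: int = 25, high_pct: int = 75) -> tuple[float, float]:
    """Percentile band of historical FPDS awards for one agency/NAICS pair."""
    qs = statistics.quantiles(sorted(awards), n=100, method="inclusive")
    return qs[low_pct - 1], qs[high_pct - 1]

# A range, never a point estimate:
# ptw_range([3.1e6, 3.8e6, 4.2e6, 4.6e6, 5.0e6]) -> (3.8e6, 4.6e6)
```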
Opportunity scores are contractor-specific: the same SAM.gov solicitation receives a different score for different companies. The factors and weights are applied as follows:
Each signal is independently measured against a threshold. Signals that exceed their threshold contribute their full weight; signals below threshold contribute proportionally. Component scores are summed and capped at 100.
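That aggregation rule is small enough to show directly. A sketch, with hypothetical thresholds and weights:

```python
def component_score(raw: float, threshold: float, weight: float) -> float:
    """Full weight at or above threshold, proportional credit below it."""
    if raw >= threshold:
        return weight
    return weight * (raw / threshold) if threshold > 0 else 0.0

def opportunity_score(signals: list[tuple[float, float, float]]) -> float:
    """Sum the (raw, threshold, weight) component scores and cap at 100."""
    return min(100.0, sum(component_score(r, t, w) for r, t, w in signals))

# Two signals: one over threshold (full 40), one at half threshold (half of 60):
# opportunity_score([(5.0, 3.0, 40.0), (0.5, 1.0, 60.0)]) -> 70.0
```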
Recompete timing predictions are derived entirely from FPDS data, never from non-public government planning documents.
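As one plausible FPDS-only heuristic (an assumption for illustration, not our documented method), the expected solicitation window can be back-dated from the ultimate contract completion date recorded in FPDS; the 270-day lead time here is invented:

```python
from datetime import date, timedelta

def recompete_window(ultimate_completion: date,
                     lead_days: int = 270) -> tuple[date, date]:
    """Back-date an expected solicitation window from the FPDS
    ultimate completion date (assumed heuristic, illustrative lead time)."""
    return ultimate_completion - timedelta(days=lead_days), ultimate_completion

# recompete_window(date(2026, 9, 30)) -> (date(2026, 1, 3), date(2026, 9, 30))
```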
Agency spending forecasts use Meta's Prophet time-series model, one model per (agency_code, NAICS_code) pair. Training data comes from FPDS obligation records.
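A minimal sketch of fitting one such model, using Prophet's standard `ds`/`y` input convention; the quarterly frequency and default model settings are assumptions:

```python
import pandas as pd
from prophet import Prophet  # Meta's Prophet; pip install prophet

def fit_pair_model(obligations: pd.DataFrame) -> Prophet:
    """One model per (agency_code, NAICS_code) pair. `obligations` must hold
    Prophet's expected columns: 'ds' (period) and 'y' (FPDS obligation totals)."""
    model = Prophet()  # default trend and seasonality settings (assumed)
    model.fit(obligations)
    return model

# Forecast the next four quarters of obligations for one pair:
# model = fit_pair_model(df)  # df: ds, y built from FPDS records
# future = model.make_future_dataframe(periods=4, freq="QS")
# print(model.predict(future)[["ds", "yhat", "yhat_lower", "yhat_upper"]].tail(4))
```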
Not publicly accessible. The Contractor Performance Assessment Reporting System (CPARS) is restricted to government contracting officials. We document this limitation explicitly and rely on the proxy signals above instead.
Inconsistently published. While some agencies release acquisition forecasts, these are advisory, non-binding, and too unreliable for quantitative modeling.
Proprietary to each contractor. We have no mechanism to collect it and do not try. Win probability is not a number we compute — we compute vulnerability of the incumbent, which is a distinct and more reliable signal.
Not consistently disclosed publicly before award. Post-award joint venture (JV) data appears in FPDS but is unreliably structured.
The Claude API is used as the final synthesis layer — it translates pre-computed analytical outputs into capture narratives. The LLM does not do analysis. It receives scored results and evidence from the layers below and narrates them in capture manager language.
If the underlying data has a gap (missing records, insufficient signal), the LLM is instructed to surface that gap explicitly — not fill it with generic advice. Capture managers who encounter generic advice in the output should report it as a bug.
Public federal data flows through our rule-based scoring engine, which produces scored signals with cited evidence. The LLM then translates those scores into actionable capture narratives — it never invents analysis or fills gaps with generic advice.
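A hedged sketch of that synthesis call using the official Anthropic Python SDK; the prompt wording, function name, and model string are illustrative, not our production configuration:

```python
import json
import anthropic  # official SDK; pip install anthropic

def narrate(scored_signals: list[dict]) -> str:
    """Pass pre-computed scores and evidence to Claude for narration only."""
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    response = client.messages.create(
        model="claude-sonnet-4-5",
        max_tokens=1024,
        system=("Narrate these pre-computed capture signals for a capture manager. "
                "Do not analyze, extrapolate, or add generic advice. If a signal "
                "notes missing data, state the gap explicitly."),
        messages=[{"role": "user", "content": json.dumps(scored_signals)}],
    )
    return response.content[0].text
```

Keeping analysis out of the prompt and in the scoring engine is what makes the narration auditable: every sentence the model produces maps back to a scored signal it was given.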
Every recommendation in the output traces back to a specific scored signal and its underlying data source.