Blaze Sports Intel
Born to Blaze the Path Beaten Less
BSI · Process & Sources
How BSI does the work, written down. Every published methodology in one place, plus the standards every BSI surface is held to before it ships.
BSI publishes the math behind its products because the work and the audit are the same thing. Anyone reading a power ranking, a verdict card, a coach ranking, or a NIL valuation should be able to walk back from the number to the inputs that produced it. These pages are how we keep that promise.
Each methodology page below covers a published BSI product. Weights, reliability flags, source-data origins, and the moment in the season when the surface becomes trustworthy.
College Baseball · Perception Divergence
BSI model versus mainstream consensus on every qualifying college baseball game. Weights, reliability gates, verdict thresholds, and the v1 SEC + ranked-vs-ranked coverage policy.
Sabermetrics
How BSI computes wOBA, wRC+, FIP, ERA-, and team-level offense and pitching composites for D1 college baseball. Formulas, source data, recompute cadence. (The standard public forms of these metrics are sketched after this list.)
Programs & Personnel
How BSI ranks pitching coaches and program leadership using a tier-weighted multi-criteria framework. Verification levels, alias guards, and the opinion doctrine.
Athlete Markets
How BSI estimates name, image, and likeness (NIL) valuation tiers, the data sources behind them, and the limits of public-record valuation.
Coverage & Sources
How BSI identifies stories, monitors conversation across platforms, validates claims before publishing, and prioritizes content. The process behind the product.
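For reference, the standard public definitions behind two of the sabermetrics above are sketched here. These are the generic forms; BSI's exact season-specific linear weights, FIP constant, and park adjustments are documented on the Sabermetrics page itself.

$$\mathrm{wOBA} = \frac{w_{BB}\,uBB + w_{HBP}\,HBP + w_{1B}\,1B + w_{2B}\,2B + w_{3B}\,3B + w_{HR}\,HR}{AB + BB - IBB + SF + HBP}$$

$$\mathrm{FIP} = \frac{13\,HR + 3\,(BB + HBP) - 2\,K}{IP} + c_{\mathrm{FIP}}$$

Each $w$ is a season-specific run value, and $c_{\mathrm{FIP}}$ is set so that league-average FIP equals league-average ERA. ERA- and wRC+ are league-adjusted indexings of ERA and wOBA-based run creation, scaled so that 100 is league average (below 100 is better for ERA-, above 100 is better for wRC+).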
A methodology page is one thing. The disciplines below apply to every surface BSI publishes — scoreboards, team pages, leaders, intel cards, the whole site. They’re the floor, not the ceiling.
No mock arrays, no sample numbers, no placeholder content on a published page. If a source is unknown or unavailable, the page says so instead of inventing.
Every data surface tags its source (Highlightly, SportsDataIO, ESPN, BSI internal) and timestamps when it was fetched. Fresh or stale, the visitor sees which.
Loading, error, empty, populated — each rendered explicitly. A spinner that never resolves or a blank table is treated as a defect, not a finished surface.
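As a sketch of what rendering each state explicitly can look like in practice (TypeScript, with hypothetical type and function names, not BSI's actual code):

```typescript
// Hypothetical sketch: each surface state is a distinct variant, so a
// component cannot render without deciding which state it is in.
type SurfaceState<T> =
  | { kind: "loading" }
  | { kind: "error"; message: string }
  | { kind: "empty"; reason: string }              // e.g. "No qualifying games today"
  | { kind: "populated"; data: T; fetchedAt: string };

function render<T>(state: SurfaceState<T>, renderData: (data: T) => string): string {
  switch (state.kind) {
    case "loading":   return "Loading…";
    case "error":     return `Could not load: ${state.message}`;
    case "empty":     return state.reason;
    case "populated": return renderData(state.data);
    // No default branch: if a new state is added, the compiler flags
    // every surface that has not decided how to render it.
  }
}
```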
When poll, model, and consensus disagree, the disagreement gets named — not averaged into a single confident-sounding number.
Every formula on a methodology page traces back to a constants file in the codebase. Changes to weights show up in commit history alongside the re-run that validated them.
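A minimal sketch of the pattern, with invented names and values for illustration (BSI's real identifiers and weights live in its own repository):

```typescript
// weights.ts (hypothetical): the single file a methodology page points at.
// Any change to these values is a reviewable diff in commit history.
export const VERDICT_WEIGHTS = {
  modelProbability: 0.55, // illustrative values, not BSI's published weights
  pollConsensus: 0.25,
  marketConsensus: 0.2,
} as const;

// Guard: the weights must sum to 1 so the composite stays a probability.
const total = Object.values(VERDICT_WEIGHTS).reduce((a, b) => a + b, 0);
if (Math.abs(total - 1) > 1e-9) {
  throw new Error(`VERDICT_WEIGHTS sum to ${total}, expected 1`);
}
```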
Every data card carries a small badge: the source (where the numbers came from) and the freshness (when they were fetched). If you’re unsure whether the number you’re looking at is current, that badge is the answer. No badge means BSI doesn’t have a fresh source — and the surface should say so explicitly.
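In code, that badge reduces to two required fields. A sketch under assumed names (the type, the fifteen-minute threshold, and the label format are hypothetical):

```typescript
type DataSource = "Highlightly" | "SportsDataIO" | "ESPN" | "BSI internal";

interface SourceBadge {
  source: DataSource; // where the numbers came from
  fetchedAt: string;  // ISO-8601 timestamp of the fetch
}

// A card with no badge has no verified source, so the surface says so
// explicitly instead of rendering unattributed numbers.
function freshnessLabel(badge: SourceBadge | null, maxAgeMinutes = 15): string {
  if (!badge) return "No fresh source available";
  const ageMs = Date.now() - Date.parse(badge.fetchedAt);
  const stale = ageMs > maxAgeMinutes * 60_000;
  return `${badge.source} · fetched ${badge.fetchedAt}${stale ? " (stale)" : ""}`;
}
```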
When BSI’s view differs from the consensus view (poll voters, mainstream media, betting markets), the difference gets named. We don’t average our way into a confident-looking middle. The divergence itself is the product.
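A hedged sketch of the distinction (field names and thresholds invented for illustration): the function returns the gap and a named verdict, never the midpoint.

```typescript
interface Divergence {
  bsi: number;       // BSI model win probability
  consensus: number; // implied probability from polls, media, or markets
  gap: number;       // signed difference: the product itself
  verdict: "agrees" | "leans away" | "diverges";
}

function nameDivergence(bsi: number, consensus: number): Divergence {
  const gap = bsi - consensus;
  // The point is what this does NOT do: return (bsi + consensus) / 2.
  const verdict =
    Math.abs(gap) < 0.05 ? "agrees" :
    Math.abs(gap) < 0.15 ? "leans away" :
    "diverges";
  return { bsi, consensus, gap, verdict };
}
```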