How the ScanAble compliance score is calculated
Every Audit Report shows a single 0–100 compliance score. This page documents exactly how that number is produced — the formula, the inputs, the severity weights, and what the score does and does not represent. We publish it so buyers, auditors, and your developers can verify the math instead of taking it on faith.
The formula
The score is the ratio of passing checks to passing checks plus weighted violations:
score = round( passes / (passes + Σ(violation_nodes × severity_weight)) × 100 )
- passes — the number of axe-core rule passes detected on the page. Each pass counts as 1 point.
- violation_nodes — the number of HTML elements that fail a given rule. Five buttons missing labels = 5 nodes for that one rule, not 1.
- severity_weight — a multiplier based on the impact axe-core assigns to the rule (see table below).
- The result is clamped to [0, 100] and rounded to the nearest integer.
- If the page produces zero passes and zero violations (e.g. the page didn’t render correctly), the score is 100 by convention — but the report itself will flag the page as failed so a 100 isn’t mistaken for a clean audit.
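The formula and the two edge cases above can be sketched in TypeScript. This is an illustrative helper, not the actual export of lib/scanner.ts; the function and parameter names are assumptions:

```typescript
// Sketch of the per-page score formula. `computeScore` and its
// parameter names are illustrative, not the real lib/scanner.ts API.
function computeScore(passes: number, weightedViolationNodes: number): number {
  // Zero passes and zero violations usually means the page never
  // rendered; score 100 by convention. The report flags the page as
  // failed separately so this isn't mistaken for a clean audit.
  if (passes === 0 && weightedViolationNodes === 0) return 100;
  const raw = (passes / (passes + weightedViolationNodes)) * 100;
  // Clamp to [0, 100], then round to the nearest integer.
  return Math.round(Math.min(100, Math.max(0, raw)));
}
```

With the numbers from the worked example below, `computeScore(40, 37)` returns 52.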
Not every violation is equal
A missing form label blocks a screen-reader user from completing checkout. A decorative <hr> with a redundant ARIA role does not. The score reflects that. Severities come from axe-core’s own impact classification — we do not reweight rules ourselves.
| Severity | Weight | Example rules |
|---|---|---|
| Critical | 10× | Buttons with no accessible name, images with no alt, duplicate IDs on form fields |
| Serious | 5× | Insufficient color contrast, links without discernible text, missing landmark structure |
| Moderate | 2× | Page lacks main landmark, heading levels skipped, lang attribute missing on <html> |
| Minor | 1× | Best-practice violations: redundant ARIA roles, suspicious tab order hints |
A site with one critical violation on ten elements will score lower than a site with one minor violation on a hundred elements — even though the second has more total findings.
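The weight table maps directly onto axe-core’s impact values, so weighting violations amounts to a lookup and a sum. A minimal sketch; the map and the `weightViolations` helper are illustrative names, not the real lib/scanner.ts exports:

```typescript
// axe-core impact levels mapped to the multipliers in the table above.
type Impact = "critical" | "serious" | "moderate" | "minor";

const SEVERITY_WEIGHT: Record<Impact, number> = {
  critical: 10,
  serious: 5,
  moderate: 2,
  minor: 1,
};

// Sum weighted violation nodes: every failing element counts, each
// multiplied by its rule's severity weight.
function weightViolations(violations: { impact: Impact; nodes: number }[]): number {
  return violations.reduce(
    (sum, v) => sum + v.nodes * SEVERITY_WEIGHT[v.impact],
    0,
  );
}
```

Feeding in the worked example below (2 critical nodes, 3 serious, 1 moderate) yields 2 × 10 + 3 × 5 + 1 × 2 = 37.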
Worked example
A page passes 40 axe-core rules and has these violations:
| Rule | Severity | Nodes | Weighted |
|---|---|---|---|
| button-name | Critical | 2 | 2 × 10 = 20 |
| color-contrast | Serious | 3 | 3 × 5 = 15 |
| heading-order | Moderate | 1 | 1 × 2 = 2 |
| Total weighted violations | | | 37 |
score = round( 40 / (40 + 37) × 100 )
= round( 40 / 77 × 100 )
= round( 51.95 )
    = 52
Aggregating across many pages
For the $149 Site Audit, each page is scored independently using the formula above. The aggregate score on the cover of the PDF is the unweighted average of the per-page scores for pages that scanned successfully:
aggregate_score = round( sum(page_scores_ok) / count(pages_ok) )
Pages that fail to load or are blocked are excluded from the average so they don’t artificially drag the score down. They’re still surfaced separately on the per-page table in the PDF, marked as failed.
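The aggregation rule — average the successfully scanned pages, exclude the rest — can be sketched as follows. The `PageResult` shape and `aggregateScore` name are assumptions for illustration, not the real lib/scanner.ts types:

```typescript
// Unweighted average of per-page scores, skipping pages that failed
// to scan. Illustrative types, not the real lib/scanner.ts API.
interface PageResult {
  score: number;
  ok: boolean; // false if the page failed to load or was blocked
}

function aggregateScore(pages: PageResult[]): number | null {
  const scored = pages.filter((p) => p.ok).map((p) => p.score);
  if (scored.length === 0) return null; // nothing scanned successfully
  return Math.round(scored.reduce((a, b) => a + b, 0) / scored.length);
}
```

A failed page contributes nothing: two successful pages at 52 and 90 plus one failed page average to round((52 + 90) / 2) = 71, not a three-way average.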
What the score is — and what it isn’t
It is a reproducible, automated measure of how your page performs against the WCAG 2.0/2.1/2.2 Level AA rules that axe-core can evaluate programmatically. Two scans of the same page on the same day against the same code will produce the same score.
It is not a certification of WCAG compliance. Per Deque’s own published research, automated tools (including ours, Lighthouse, Pa11y, and WAVE) catch roughly 30–40% of WCAG issues. The rest — keyboard flow, screen-reader UX, alt-text quality, focus management on dynamic content, cognitive load — require manual review by a human. A 100 in ScanAble does not mean your site is fully accessible; it means the automated rules passed.
It is not a legal opinion. ScanAble produces audit evidence and remediation guidance. Use it alongside manual testing and, where stakes are high, a certified accessibility professional.
The PDF you receive includes a full “What This Audit Does Not Cover” section listing the manual-review categories so you can plan around the gap.
Source of truth
The scoring code lives in lib/scanner.ts in our codebase. Every report is signed with an Ed25519 key and verifiable at /verify/<report-id> using the report ID printed on the cover — the signature covers the URL, scan timestamp, score, and violation set, so any tampering invalidates it.
If we change the formula or the severity weights, this page is updated, and reports generated under the old formula remain verifiable under the algorithm version stamped on the signature.