Canonical Metrics and Wording (style guide)

The definitive reference for how SuiteCentral 2.0’s numerical claims should be phrased in executive-facing material. Any future wiki page that quotes a test count, coverage percentage, connector count, or AI provider list should check this page first.

What this page is

A style-guide page. 26-canonical-metrics-and-wording is explicitly authored as a wording style guide — its governance section says “Any document added to NotebookLM should conform to this file before packaging.” This wiki page captures the same rules so future ingests can reference them without re-reading the raw source.

Why it matters

Consistency. The corpus has at least three test-count vintages (slide vintage, Talking-Points vintage, current vintage — all documented on production-proof). The canonical metrics file tells us which vintage to use in current communications and how to phrase it. Future Brain1 pages that cite numbers should cite the canonical form to avoid introducing a fourth or fifth vintage.

Canonical test wording (3-sentence sequence) — APRIL 2026 REFRESH

Per april-2026-refresh-batch (the April 2026 canonical metrics refresh), the approved sequence is now:

  1. “100% suite pass rate (419 suites).” (up from 412 on April 18, 406 on April 8)
  2. “100% of executed tests passed (9,476 tests across unit, integration, and E2E).” (up from 9,410 on April 18, 9,364 on April 8)
  3. “Full breakdown: 9,286 unit (23 skipped), 170 integration (11 skipped), 20 E2E portal.”

Do NOT shorten to “100% pass rate” without the executed/skipped context.

Space-limited version: “9,476 tests, 100% pass rate.”

Note: the earlier March 2026 numbers are superseded. Some earlier wiki pages still cite the March numbers in their vintage comparison tables — those are historically accurate snapshots. For NEW content, use the April numbers above.
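The breakdown in sentence 3 must always reconcile with the headline total in sentence 2 (9,286 + 170 + 20 = 9,476). A quick Python sanity check, illustrative only: the figures are copied from this page, and the helper is not an existing SuiteCentral tool.

```python
# Canonical April 2026 test counts, copied from the wording above.
suite_count = 419
headline_total = 9_476
breakdown = {"unit": 9_286, "integration": 170, "e2e_portal": 20}
skipped = {"unit": 23, "integration": 11}

# The full breakdown must reconcile with the headline total.
assert sum(breakdown.values()) == headline_total

def canonical_sentences() -> list[str]:
    """Render the approved three-sentence sequence from the canonical numbers."""
    return [
        f"100% suite pass rate ({suite_count} suites).",
        f"100% of executed tests passed ({headline_total:,} tests "
        "across unit, integration, and E2E).",
        f"Full breakdown: {breakdown['unit']:,} unit ({skipped['unit']} skipped), "
        f"{breakdown['integration']} integration ({skipped['integration']} skipped), "
        f"{breakdown['e2e_portal']} E2E portal.",
    ]
```

Regenerating all three sentences from one set of constants keeps a page from mixing vintages, which is exactly the failure mode this style guide exists to prevent.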

Canonical coverage wording

  • Statements: 64.48%
  • Branches: 52.34%
  • Functions: 67.15%
  • Lines: 64.59%

Short form: “65% line coverage across 45,757 lines of production TypeScript.”

Note: 45,757 is the line count for production TypeScript specifically — the Start Here page’s “~854K text LOC” figure is the total repo (code + tests + config + docs + generated files).

Canonical module wording

“16-module demo library (12 core + 4 extension/platform).”

If listing only core modules, explicitly say “12 core modules”. See module-library for the full list.

Canonical connector wording

“6 production connectors demonstrated (NetSuite, Business Central, Salesforce, HubSpot, ShipStation, Oracle).”

⚠️ Discrepancy flagged: docs/connectors/CONNECTOR_STATUS.md marks Oracle as “⚠️ Beta (Basic Implementation)”, not production-ready. The canonical wording lists Oracle as one of the 6 production connectors, but the connector-status doc disagrees. Honest count per CONNECTOR_STATUS: 5 production + 1 beta. The canonical phrasing should either be updated to “5 production + 1 beta” or Oracle should be promoted to production in the connector doc. See production-vs-demo for the full inventory.

Avoid: “4 connectors verified” (outdated).

The 6 connectors span ERP (NetSuite, Business Central, Oracle), CRM (Salesforce, HubSpot), and logistics/shipping (ShipStation). Oracle appears here as a connector even though it also appears as a competitor in the Oracle comparison page — these are different Oracle products.

Canonical AI provider wording

“4 production-ready AI providers (OpenAI, Claude, OpenRouter, LMStudio).”

With cost context:

  • OpenAI GPT-4o — $0.02/mapping (primary inference)
  • Claude 3.5 Sonnet — $0.003/mapping (secondary/validation) — 6.7× cheaper than GPT-4o
  • OpenRouter (multi-model, free tier available) — routing/fallback
  • LMStudio (local, free) — on-premise/fallback

Avoid: “3 providers” (outdated); omitting OpenRouter from the list.
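The “6.7× cheaper” claim above is simply the ratio of the two per-mapping prices. A one-line check (illustrative; prices copied from this page):

```python
# Per-mapping costs quoted in the canonical list above (USD).
gpt4o_per_mapping = 0.02     # OpenAI GPT-4o, primary inference
claude_per_mapping = 0.003   # Claude 3.5 Sonnet, secondary/validation

ratio = gpt4o_per_mapping / claude_per_mapping
assert round(ratio, 1) == 6.7  # matches the canonical "6.7x cheaper" wording
```

If either price is refreshed, this ratio (and the “6.7×” phrasing) must be refreshed with it.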

External comparison wording (the “avoid” list)

Preferred phrasings (use these):

  • “Comparable in complexity dimensions”
  • “Positioned against”
  • “Based on publicly available sources (date-stamped)”

Avoid unless source-locked and dated:

  • “No competitor has this”
  • “Uncontested 6-12 month window” (note: ai-governance-layer-video at 04:05 uses exactly this phrasing — the hook video is out of compliance with the current style guide; flagged)
  • Any absolute timing claim without a refresh date

Competitive pricing wording (March 2026 refresh)

Celigo: “Celigo ~$50K+/yr, roughly $4K/mo equivalent, scope-based/per-endpoint pricing.”

Avoid: “$10K/integration”, “$4-8K/mo” (older phrasings, no longer fresh).

Celigo dual-ERP framing: describe as “Generic / not dual-ERP-specific.” Avoid positive-match framing that implies Celigo is purpose-built for dual-ERP; also avoid a flat “No” when the measurement is breadth rather than total connector absence.

Reviewer-facing proof routing

  • Prefer reviewer-friendly proof pages over raw API/JSON payloads in executive-facing materials.
  • Raw JSON endpoints should be labeled as “technical proof only” — not the primary reviewer path.

Date and decision wording

Avoid hardcoded past dates in reusable documents.

Preferred:

  • “Decision requested within 10 business days”
  • “Quarterly refresh required for external claims”

How to use this page

  1. When adding a numerical claim to a new wiki page, look it up here first.
  2. When updating an existing wiki page with a new source’s numbers, check whether the new phrasing conflicts with this page.
  3. When a future ingest surfaces a claim using avoid-list phrasing (e.g., “Uncontested 6-12 month window”), flag it as a style-guide violation in the source-summary and use the preferred phrasing on the wiki page.
  4. When this page itself needs updating (e.g., test counts change, new connectors ship), the update should come from a new ingest of 26-CANONICAL-METRICS-AND-WORDING.md — the source file is versioned and refreshed periodically.
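Step 3 above lends itself to a simple automated pass over draft text. A sketch of such a checker, assuming a plain-text draft as input; the pattern list mirrors this page’s avoid entries, and find_style_violations is a hypothetical helper, not an existing SuiteCentral tool:

```python
import re

# Avoid-list phrasings from this style guide, expressed as regex patterns.
# Hypothetical enforcement sketch; extend the list as the guide evolves.
AVOID_PATTERNS = [
    r"no competitor has this",
    r"uncontested 6[-\u2013]12 month window",
    r"\b4 connectors verified\b",
    r"\b3 providers\b",
    r"\$?10K/integration",
    r"\$4[-\u2013]8K/mo",
]

def find_style_violations(text: str) -> list[str]:
    """Return each avoid-list pattern that matches `text` (case-insensitive)."""
    return [p for p in AVOID_PATTERNS if re.search(p, text, re.IGNORECASE)]

draft = "SuiteCentral enjoys an uncontested 6-12 month window of advantage."
print(find_style_violations(draft))  # the window claim is flagged for rewrite
```

A hit means the draft needs the preferred phrasing from the lists above (plus a source and a date if the claim is kept at all); an empty result is necessary but not sufficient, since the checker only catches known bad phrasings.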

Sources