Methodology
The Synapse Growth Score is one number, computed from three signals that you can audit and reproduce. Total weight across the 24 rules: 90.
The Growth Score formula
growth_score = round(
    0.7 × lint_score                              // 0–100 from the 24-rule linter
  + 0.2 × 100 × saturate(agent_mentions, k=50)    // 30-day mentions, saturating, scaled to 0–100
  + 0.1 × 100 × saturate(activations, k=100)      // 30-day activations, saturating, scaled to 0–100
)
lint_score is the weighted sum of the 24 rules below, normalized to 100. Same formula the CLI returns.
agent_mentions are visits where the tracker beacon detects an agent-chat referrer (chatgpt.com, claude.ai, perplexity.ai, etc.).
activations are first-time meaningful events fired from s.js.
Saturating means early traffic moves you quickly, then the curve plateaus — so a brief spike can't dominate the leaderboard.
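As a sketch, the whole computation fits in a few lines of TypeScript. The exact saturation curve is not spelled out on this page; the form x / (x + k) below is inferred from the values in the worked example and should be treated as an assumption:

```typescript
// Saturating curve: rises quickly for small x, plateaus toward 1.
// x / (x + k) is an inferred form — it reproduces the worked example
// (saturate(38, 50) ≈ 0.432, saturate(110, 100) ≈ 0.524).
function saturate(x: number, k: number): number {
  return x / (x + k);
}

// Growth Score: lint quality dominates (70%); the two traffic
// signals are scaled to 0–100 before weighting.
function growthScore(
  lintScore: number,
  agentMentions: number,
  activations: number
): number {
  return Math.round(
    0.7 * lintScore +
    0.2 * 100 * saturate(agentMentions, 50) +
    0.1 * 100 * saturate(activations, 100)
  );
}

console.log(growthScore(82, 38, 110)); // 71
```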
Worked example
lint_score = 82
agent_mentions = 38 → saturate(38, 50) = 0.432 (×100 = 43.2)
activations = 110 → saturate(110, 100) = 0.524 (×100 = 52.4)
growth_score = round(0.7×82 + 0.2×43.2 + 0.1×52.4)
= round(57.4 + 8.64 + 5.24)
= 71 → grade B

The 24-rule linter
Every rule reports pass / fail with evidence and a suggestion. The rule definitions on this page are loaded from the linter package — they can't drift from the code.
agent-readiness
#1 /llms.txt exists at site root
critical · weight 8 · auto-fix
Publishes a top-level llms.txt manifest so AI agents can quickly index the site's purpose, key URLs, and policies. Modeled on the llmstxt.org draft.
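A minimal llms.txt, following the llmstxt.org draft layout (the product name and URLs are placeholders):

```
# Acme Widget

> One-line summary of what Acme Widget does and who it is for.

## Docs
- [Quickstart](https://example.com/docs/quickstart): install and first run
- [API reference](https://example.com/docs/api): endpoints and auth

## Policies
- [Pricing](https://example.com/pricing): plans and limits
```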
#2 /llms-full.txt exists (long-form context for agents)
medium · weight 4
Complements /llms.txt with the full prose context — product description, FAQs, key docs concatenated. Agents fetch this when they need depth.
#9 /.well-known/agent-answer endpoint exists
high · weight 6 · auto-fix
Exposes a stable, machine-readable summary at /.well-known/agent-answer.json that agents can hit instead of scraping HTML. Includes product name, one-line pitch, primary use cases, install URL, and pricing.
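A sketch of the shape this describes. Only the use_cases[] key is named elsewhere on this page (rule #12); the other field names and all values here are illustrative assumptions, not a published spec:

```json
{
  "name": "Acme Widget",
  "pitch": "One-line description of what the product does.",
  "use_cases": [
    "Generate weekly status reports",
    "Summarize long support threads",
    "Draft release notes from commits"
  ],
  "install_url": "https://example.com/install",
  "pricing": "Free tier; paid plans from $10/mo"
}
```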
#11 robots.txt explicitly allows AI agent crawlers
critical · weight 10 · auto-fix
Most stacks default to blocking or omitting GPTBot, ClaudeBot, PerplexityBot, Google-Extended, etc. If those agents are disallowed (or implicitly disallowed by a wildcard disallow with no override), the product is invisible to AI engines.
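A minimal robots.txt that explicitly allows the agent crawlers named above (the sitemap line also satisfies rule #10; the domain is a placeholder):

```
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: Google-Extended
Allow: /

User-agent: *
Allow: /

Sitemap: https://example.com/sitemap.xml
```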
#12 Publishes a machine-readable use-case map
high · weight 5 · auto-fix
Agents recommend products by matching user intent to a product's stated use cases. Sites must publish a use-case map either in /.well-known/agent-answer.json (use_cases[]) or in a dedicated section on the homepage covering ≥3 distinct intents.
answerability
#3 Homepage clearly states what the product is in the first 200 characters
medium · weight 4
Generative engines extract a 1–2 sentence summary from the top of the page. If the hero is ambiguous, the agent will paraphrase incorrectly. Warning-only — not a hard fail.
#7 FAQPage / QAPage schema where appropriate
medium · weight 4
Answer engines disproportionately cite Q&A blocks. Even a 3-question FAQPage at the bottom of the homepage materially increases citation rate.
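A small FAQPage block in schema.org JSON-LD (questions and answers are placeholders):

```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "What is Acme Widget?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "A tool that turns commit history into release notes."
      }
    },
    {
      "@type": "Question",
      "name": "Is there a free tier?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Yes — the free tier covers one project."
      }
    }
  ]
}
```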
schema
#4 SoftwareApplication / WebApplication JSON-LD schema is present
high · weight 7
AI coding agents disambiguate products by entity type. A SoftwareApplication / WebApplication block with name, applicationCategory, operatingSystem, and offers is the strongest signal that a URL represents a tool.
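A JSON-LD block with the four properties named above (the name, category, and price are placeholders):

```json
{
  "@context": "https://schema.org",
  "@type": "SoftwareApplication",
  "name": "Acme Widget",
  "applicationCategory": "DeveloperApplication",
  "operatingSystem": "Web",
  "offers": {
    "@type": "Offer",
    "price": "0",
    "priceCurrency": "USD"
  }
}
```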
#5 Organization (or LocalBusiness) schema is present
high · weight 5
Anchors the brand entity so agents can reliably attribute mentions back to your organization.
#6 Page-type schema present (Article, Product, or SoftwareApplication)
medium · weight 4
Every page should declare its primary entity type so agents know whether they're looking at a blog post, a docs page, a product, or a tool.
#8 BreadcrumbList schema present
low · weight 2
Helps agents and search engines reconstruct site hierarchy and produce richer answer formatting.
discoverability
#10 /sitemap.xml exists and is referenced from robots.txt
medium · weight 4
AI crawlers fall back to sitemap.xml when llms.txt is absent. It also gives them an authoritative list of URLs to fetch.
#13 Page declares a canonical URL
medium · weight 3
Prevents agents from citing duplicate URLs (preview, query-string variants) instead of the canonical page.
#14 Quality meta description (70–180 chars)
medium · weight 3
Many AI engines fall back to meta description when extracting a one-line product summary.
#15 Open Graph tags (title, description, image, url, type)
medium · weight 3
OG metadata is the canonical preview source for chat surfaces — when an agent links to your site, Claude/ChatGPT render the OG card.
#16 Twitter Card metadata present
low · weight 2
Improves preview cards when agents share links on X/Twitter and in some chat clients.
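Rules #13–#16 all live in the document head. A representative snippet (URLs and copy are placeholders):

```html
<link rel="canonical" href="https://example.com/" />
<meta name="description" content="Acme Widget turns your commit history into readable release notes in one click." />
<meta property="og:title" content="Acme Widget" />
<meta property="og:description" content="Turn commit history into release notes." />
<meta property="og:image" content="https://example.com/og.png" />
<meta property="og:url" content="https://example.com/" />
<meta property="og:type" content="website" />
<meta name="twitter:card" content="summary_large_image" />
```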
#18 Internal links connect related pages
low · weight 2
Crawlers find new pages by following internal links. A page with zero internal links is an orphan to most crawlers.
structure
#17 Single H1 and at least two H2 subheadings
medium · weight 3
Agents extract section structure from headings. A page with no H2s reads as a single undifferentiated wall of text.
#20 Images have alt text
low · weight 2
Alt text feeds multimodal agents and accessibility tools — and improves text-only extraction.
#23 Mobile viewport meta tag present
low · weight 1
Required for most modern crawlers and for previews to render correctly.
trust
#19 Outbound links to authoritative sources
low · weight 2
Pages that cite at least one external authoritative source rank as more trustworthy in AI engines' downstream evaluation.
#21 Page exposes a publication / updated date
low · weight 2
AI engines prefer time-anchored facts. Pages without a visible date are demoted in retrieval.
#22 Author / Person entity is exposed
low · weight 2
E-E-A-T: agents penalize anonymous content. Even a Person schema with name and link is enough.
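Rules #21 and #22 can both be covered by one small JSON-LD block (names, dates, and URLs are placeholders):

```json
{
  "@context": "https://schema.org",
  "@type": "Article",
  "datePublished": "2025-01-15",
  "dateModified": "2025-03-02",
  "author": {
    "@type": "Person",
    "name": "Jane Doe",
    "url": "https://example.com/about"
  }
}
```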
#24 HTTPS is enforced
low · weight 2
Almost all AI crawlers will skip HTTP-only origins or downgrade them in retrieval. HSTS is a strong signal.