Paste a citation. The verifier resolves the identifier, compares the resolved title to the cited title, and flags the dominant fabrication pattern documented by Topaz et al. (Lancet 2026): a real, resolvable identifier paired with an invented title.
8 identifier types · 4 verdict outcomes · Field-level diff · Free, no signup
Paste the cited title and one identifier. Optional refinements tighten the match but never gate the verdict on their own. Calls POST /api/verify — no authentication required.
When the titles disagree, the title-search fallback separates a wrong identifier for a real paper (verdict: ambiguous) from a real DOI paired with an invented title (verdict: mismatch). Confidence is reported as high, medium, or low. Low-confidence mismatches (informal abbreviations, paraphrased titles) are the bucket the opt-in LLM screen rescues; that screen sits behind paid authentication.
One identifier required. Optional refinements — first-author family name, year, journal — tighten the match but never gate the verdict on their own.
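For scripted use, the same request this form sends can be issued directly. A minimal sketch with Python and requests; the host placeholder and the JSON field names (claimedTitle, doi) are assumptions here, not the documented schema:

```python
import requests

VERIFY_URL = "https://<host>/api/verify"   # replace <host>; the path is the documented endpoint

# Field names below are illustrative assumptions; check the API documentation
# for the real request schema. One identifier (here a DOI) is all that is required.
payload = {
    "claimedTitle": "Example claimed title exactly as cited",
    "doi": "10.1000/example.123",
}

resp = requests.post(VERIFY_URL, json=payload, timeout=30)
resp.raise_for_status()
print(resp.json())   # expected to include a verdict plus per-field similarity scores
```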
This page calls POST /api/verify behind the scenes. If you want to script verification or run it inside an AI coding workflow, two other surfaces expose the same backend:
The verifyCitation tool ships with the Scholar Sidekick MCP and is callable from Claude Desktop, Cursor, or any MCP-aware client. A batch web UI for .bib / .ris / .csl-json upload is in progress as Phase 12i.4B.
Three things, in order. (1) It resolves the identifier you supplied (DOI, PMID, PMCID, arXiv, ISBN, ISSN, ADS bibcode, or WHO IRIS URL) against the appropriate registry. (2) It compares the claimed title against the resolved title field-by-field, with similarity scores per field. (3) If the resolved title and the claimed title disagree, it title-searches the claim across Crossref, PubMed, and OpenAlex to see whether the cited title exists under a different identifier - distinguishing 'wrong identifier' (ambiguous) from 'fabricated title' (mismatch).
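A minimal sketch of that three-step flow in Python, using Crossref alone and DOIs alone (the real verifier handles eight identifier types, queries PubMed and OpenAlex as well, and scores every field, not only the title); this illustrates the logic and is not the service's code:

```python
import requests
from difflib import SequenceMatcher

CROSSREF = "https://api.crossref.org/works"

def resolve_doi(doi: str) -> dict | None:
    # Step 1: resolve the identifier against its registry (Crossref for DOIs).
    r = requests.get(f"{CROSSREF}/{doi}", timeout=30)
    return r.json()["message"] if r.status_code == 200 else None

def search_title(title: str) -> list[dict]:
    # Step 3 helper: title-search the claim (the real service also hits PubMed and OpenAlex).
    r = requests.get(CROSSREF, params={"query.bibliographic": title, "rows": 5}, timeout=30)
    r.raise_for_status()
    return r.json()["message"]["items"]

def similarity(a: str, b: str) -> float:
    # Rough whole-string similarity; the real service reports per-field scores.
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()

def verify(claimed_title: str, doi: str) -> str:
    record = resolve_doi(doi)
    resolved_title = ((record or {}).get("title") or [""])[0]

    # Step 2: compare the claimed title against the resolved title.
    if record and similarity(claimed_title, resolved_title) >= 0.9:
        return "matched"

    # Step 3: does the claimed title exist under some other identifier?
    hits = search_title(claimed_title)
    found = any(similarity(claimed_title, (h.get("title") or [""])[0]) >= 0.9 for h in hits)
    if record:
        return "ambiguous" if found else "mismatch"   # wrong identifier vs fabricated title
    # Identifier failed to resolve: a double miss is "not_found"; the real verdict
    # for a title-only hit is the service's call, shown here as ambiguous.
    return "ambiguous" if found else "not_found"
```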
Any of: DOI, PubMed ID, PMCID, arXiv ID, ISBN, ISSN, NASA ADS bibcode, WHO IRIS URL. You only need one. Optional refinements - first-author family name, year, journal - are used to tighten the match but never gate the verdict on their own.
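As an illustration, the same call built around a PubMed ID with the optional refinements attached; every field name here is an assumption rather than the documented schema:

```python
# Illustrative only: any one identifier is enough, refinements are optional,
# and these field names are assumed rather than taken from the API docs.
payload = {
    "claimedTitle": "Example claimed title exactly as cited",
    "pmid": "12345678",
    "firstAuthor": "Smith",       # optional refinement
    "year": 2020,                 # optional refinement
    "journal": "Example Journal", # optional refinement
}
```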
Matched means the claimed title matches the resolved paper. Mismatch means the identifier resolves but the title disagrees - the Topaz et al. fabrication pattern. Ambiguous means the identifier resolves to one paper but the claimed title matches a different paper found via search, which usually indicates a wrong identifier for a real paper. Not found means neither the identifier nor the title resolves anywhere.
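In caller code, branching on those four verdicts might look like the following; the lowercase verdict strings and the confidence field are assumptions about the response shape, not the documented contract:

```python
# Assumed response shape; the real field names and values live in the API docs.
result = {"verdict": "mismatch", "confidence": "low"}   # e.g. parsed from resp.json()

messages = {
    "matched": "Claimed title matches the resolved paper.",
    "mismatch": "Identifier resolves but the title disagrees: possible fabricated title.",
    "ambiguous": "Title found under a different identifier: likely wrong identifier.",
    "not_found": "Neither the identifier nor the title resolves anywhere.",
}
print(messages[result["verdict"]])

if result["verdict"] == "mismatch" and result["confidence"] == "low":
    print("Low-confidence mismatch: the case the opt-in LLM screen is designed to rescue.")
```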
Free at the anonymous tier with a published rate limit (see API documentation). No signup. Nothing you paste is stored beyond standard server logs. The opt-in LLM screen, which catches informal-abbreviation false positives, is gated to authenticated paid callers because each model call carries real per-call cost.
Programmatically, yes - the verifyCitation MCP tool lets you script per-reference verification from Claude Desktop, Cursor, or any MCP-aware client. A web-based batch upload UI (.bib / .ris / .csl-json) is planned as Phase 12i.4B. For now, this page handles one citation at a time.
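Until the batch upload UI lands, a plain loop against the same endpoint covers small reference lists; a minimal sketch with placeholder citations, an assumed host, and assumed field names:

```python
import time
import requests

VERIFY_URL = "https://<host>/api/verify"   # replace <host> with the verifier's domain

# References to check: (claimed title, DOI). Placeholders, not real citations.
references = [
    ("Example claimed title one", "10.1000/example.one"),
    ("Example claimed title two", "10.1000/example.two"),
]

for claimed_title, doi in references:
    # Field names are assumed; check the API documentation for the real schema.
    resp = requests.post(VERIFY_URL, json={"claimedTitle": claimed_title, "doi": doi}, timeout=30)
    resp.raise_for_status()
    print(doi, "->", resp.json().get("verdict"))
    time.sleep(1)  # stay under the anonymous-tier rate limit
```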
Retraction asks 'has this real paper been retracted?'. Verification asks 'does this citation correspond to a real paper at all?'. A fabricated citation cannot be retracted because the paper never existed. Use the Retraction Checker after verification, on the resolved paper, not on the claim.
Against a 20-entry hand-curated fixture (sourced from Topaz et al. Supplementary Appendix 2 plus independent Crossref/PubMed/arXiv lookups), the verifier returns the expected verdict on 20/20 entries. The full fixture and validation methodology are published at /citation-integrity.