---
title: Best AI Citation Verifier in 2026 - 10 Fake-Citation Detectors Compared
description: Honest comparison of the ten AI citation verifiers that catch fabricated DOIs and hallucinated references from ChatGPT, Claude, Gemini, and Perplexity. Per-tool capsules, feature matrix, where each wins, and why source-of-truth metadata matters more than database size.
doc_version: "1.0"
last_updated: "2026-05-14"
---

# Best AI Citation Verifier in 2026 - 10 Fake-Citation Detectors Compared

> From a clinician-educator who reads citations weekly. I built one of the ten tools on this page (Scholar Sidekick). I tried to write the comparison the way I would want to read it.
> Last updated: 2026-05-14
> HTML version: https://scholar-sidekick.com/compare/best-ai-citation-verifier

In early 2026, Topaz et al. audited 2.5 million biomedical papers and found that roughly 1 in 277 contains at least one fabricated reference (Lancet 2026;407(10541):1779-1781). The dominant fabrication pattern is not what people expect: it is not the obviously fake DOI or the made-up journal name. It is a *real, resolvable DOI* paired with an *invented title and abstract*. The identifier checks out. The metadata does not match. A naive DOI-resolver-only check passes.
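The reason the naive check passes is worth making concrete: resolving the DOI only proves the identifier exists. Catching the Topaz pattern requires fetching the registry's metadata and comparing titles. Here is a minimal sketch against the public Crossref REST API - the normalisation rules are my own simplification, not any tool's actual comparator:

```python
import json
import re
import urllib.request

CROSSREF = "https://api.crossref.org/works/"

def normalise(title: str) -> str:
    """Lowercase, strip punctuation, collapse whitespace."""
    title = re.sub(r"[^\w\s]", " ", title.lower())
    return re.sub(r"\s+", " ", title).strip()

def titles_match(claimed: str, resolved: str) -> bool:
    """True when the claimed title agrees with the registry's title."""
    return normalise(claimed) == normalise(resolved)

def resolved_title(doi: str) -> str:
    """Fetch the title Crossref holds for a DOI (network call)."""
    with urllib.request.urlopen(CROSSREF + doi) as resp:
        message = json.load(resp)["message"]
    return message["title"][0] if message.get("title") else ""

# Usage: titles_match("Claimed title from the AI", resolved_title("10.xxxx/..."))
```

A DOI that resolves but fails `titles_match` is exactly the Topaz pattern; a resolver-only check never reaches the comparison step.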

The verifier category exists because that pattern is hard. There are now at least ten dedicated tools that try to detect it, mostly launched in the past 18 months in response to ChatGPT/Claude/Gemini-generated reference lists slipping past peer review. This page is an honest roundup of those ten: what each is for, what it costs, what databases it cross-references, and which one I would reach for in each situation. The methodology is simple - I visited each tool's product page on [2026-05-14](https://scholar-sidekick.com/citation-integrity) and recorded what they say they do.

One framing note up front. I built one of the ten tools on this page ([Scholar Sidekick](https://scholar-sidekick.com/tools/citation-verifier)). The page is not a sales pitch - several of the other nine win cleanly on specific axes (PDF upload, manuscript-wide passes, education-focused integrations, generous free tiers). What Scholar Sidekick is built for is *source-of-truth metadata across the broadest identifier surface I could assemble*: DOI, PMID, PMCID, ISBN, ISSN, arXiv, ADS bibcode, WHO IRIS URL. Plus retraction-awareness via Retraction Watch and open-access classification via Unpaywall, plus a free public MCP server for AI agents. If those things matter to your workflow, the case for Scholar Sidekick gets stronger. If they do not, one of the others is probably the right answer.

## Pick a verifier in 30 seconds

Each tool is built for a specific workflow. If you recognise yours below, that is probably the right tool for you - the rest of the page is the longer version of the same answer.

- **Verifying biomedical, clinical, or systematic-review references with retraction-awareness?** → [Scholar Sidekick](https://scholar-sidekick.com/tools/citation-verifier) (free, no signup; broader identifier surface; live Retraction Watch + Unpaywall; public MCP server).
- **Screening student essays for hallucinated citations in an educational setting?** → [GPTZero Source Finder](https://gptzero.me/sources) (education-focused, Canvas / Google Classroom integration, 1M+ educators).
- **Need an AI assistant that suggests replacement citations when it flags a fake?** → [Sourcely](https://www.sourcely.net/citation-verification) (returns ready-to-use replacements; broad database coverage).
- **Want to call a verifier from your own agent or pipeline via MCP?** → [Scholar Sidekick MCP](https://scholar-sidekick.com/mcp) (only verifier on this page with a public MCP server).
- **Paying user who wants a polished UI with team libraries and a chat assistant?** → [Citely](https://citely.ai) ($14-19/month; 95% accuracy claim; team workspaces; chat guidance).
- **Free tier only, occasional use, generic AI-fabricated reference?** → [CiteTrue](https://citetrue.com) (positions itself as the free option) or [AiCitationChecker](https://aicitationchecker.org) (50 free credits/day, no signup).

Scholar Sidekick is not the right choice if you need PDF upload, manuscript-wide structural checks, plagiarism detection, Word-document export of corrected references, or replacement-citation suggestions - other tools on this page win on each of those axes.

## When to use which verifier

| If you need... | Reach for | Why |
| --- | --- | --- |
| Source-of-truth metadata across the broadest identifier surface (DOI, PMID, PMCID, ISBN, ISSN, arXiv, ADS, WHO IRIS) | Scholar Sidekick | Only verifier on this page that resolves PMCID, ADS bibcode, ISSN, and WHO IRIS URL alongside the common DOI/PMID/arXiv set. |
| Retraction-awareness as part of citation verification | Scholar Sidekick | Only verifier on this page with live Retraction Watch integration (separate /tools/retraction-checker, plus enrichment on the main verifier). |
| Open-access status with a free legal PDF link | Scholar Sidekick | Live Unpaywall integration with Gold/Green/Hybrid/Bronze classification via /tools/open-access-checker. |
| MCP server so an AI agent can verify citations on demand | Scholar Sidekick MCP | Only verifier on this page with a first-party MCP server. `verifyCitation` is one of six tools. |
| Education / classroom setting (essays, plagiarism context, Canvas / Google Classroom integration) | GPTZero Source Finder | Built for educators screening student work; integrates with the LMS stack that schools already use. |
| Replacement citations when a fake is flagged | Sourcely | Returns formatted ready-to-use real citations, not just a flag. Closest to the 'find me a real source for this claim' workflow. |
| Polished paid UI with chat assistant and team libraries | Citely | $14-19/month, claims 95% accuracy, has a research-assistant chat and team workspaces. The most consumer-friendly verifier of the ten. |
| Generous free tier for occasional use, no signup | AiCitationChecker or CiteTrue | AiCitationChecker gives 50 free credits/day with no signup; CiteTrue positions itself as the free option. |
| Catching the dominant 'real DOI + invented title' fabrication pattern documented by Topaz et al. (Lancet 2026) | Scholar Sidekick | Only verifier on this page that names the Topaz pattern explicitly and is designed around it. |
| Deterministic, version-pinned verifier output for an audit or publication | Scholar Sidekick | `x-scholar-transform-version` header pins the resolver chain, normalisation, comparator, and CSL engine to a specific snapshot. No other verifier on this page publishes a deterministic-output contract. |
| Verifying a whole manuscript file (.docx, .tex, .md) end-to-end | Scribbr / Paperpal / Recite | These are structural manuscript tools - in-text-to-bibliography matching, formatting consistency, manuscript-wide passes. Different category from fabrication detection; pair with one of the verifiers above. |

## At a glance

| Tool | Pricing | Databases | Identifier surface | Retraction / OA | API / MCP |
| --- | --- | --- | --- | --- | --- |
| Scholar Sidekick | Free anonymous tier (no signup, rate-limited); paid via RapidAPI ($0-$249/mo) | Crossref, PubMed, DataCite, arXiv, ADS, OpenLibrary, WHO IRIS, Unpaywall, Retraction Watch | DOI, PMID, PMCID, ISBN, ISSN, arXiv, ADS bibcode, WHO IRIS URL | Both (Retraction Watch + Unpaywall) | Public REST API + first-party MCP server with `verifyCitation` |
| Citely | $9 trial / $14-19/mo / $347 lifetime | Crossref, PubMed, arXiv, Google Scholar, OpenAlex, Semantic Scholar | Not stated | Neither | Not mentioned |
| CiteTrue | Free (paid tier not stated) | Multiple academic databases (not enumerated) | Not stated | Neither | Not mentioned |
| GPTZero Source Finder | Free up to 10,000 chars; paid educator tiers | 220M scholarly articles, preprints, real-time news | Not identifier-first (pasted text) | Neither | Public API + Chrome extension + Canvas + Google Classroom + Zapier |
| Sourcely | Free tier (with limits); $19-39/mo (annual) | Google Scholar, Semantic Scholar, Crossref, PubMed, arXiv, Scopus, Web of Science, JSTOR, CORE | Citation strings (APA/MLA/Chicago/Harvard) | Neither | Not mentioned |
| TrueCitation | Free (paid tier not stated) | 17+ academic databases (not enumerated) | Citation strings, journals, URLs | Neither (predatory-publisher flag instead) | Not mentioned |
| AiCitationChecker | 50 credits/day free (no signup); $9.99-$99.99 packs | Crossref, OpenAlex, Semantic Scholar | Citation strings in any format | Neither | API for AI Agents mentioned (not detailed) |
| CiteMe AI Reference Verifier | Not stated on landing page | 250M+ academic sources | AI-generated references (ChatGPT/Gemini/Claude) | Neither | Not mentioned |
| SwanRef | Not stated | 150M papers via Crossref + Google Scholar | Not stated | Neither | Not mentioned |
| CiteSure | Not stated | Not enumerated | Citation strings | Neither | Not mentioned |

## The ten verifiers

*Each capsule below reflects what I found on the tool's product page on 2026-05-14, plus what the AI engines (ChatGPT, Perplexity, Google AI Overviews) report when asked about each tool. Pricing, free-tier limits, database lists, and accuracy claims change frequently - verify the linked product page before subscribing.*

### Scholar Sidekick - source-of-truth metadata with retraction-awareness

*Free citation verifier with the broadest identifier surface on this page, live retraction and open-access status, and a public MCP server. The project I built.*

[Scholar Sidekick's citation verifier](https://scholar-sidekick.com/tools/citation-verifier) resolves any of eight identifier types - DOI, PMID, PMCID, ISBN, ISSN, arXiv, ADS bibcode, WHO IRIS URL - against the live upstream registries (Crossref, PubMed, DataCite, arXiv, ADS, OpenLibrary, WHO IRIS) and compares the resolved title to the claimed title. When the identifier resolves but the titles disagree, it flags the dominant fabrication pattern documented by [Topaz et al. (Lancet 2026)](https://www.thelancet.com/journals/lancet/article/PIIS0140-6736(26)00603-3/fulltext): a real, resolvable DOI paired with an invented title. That pattern is the one a naive DOI-resolver-only check passes.

Verification is shipped three ways: a free web tool at [/tools/citation-verifier](https://scholar-sidekick.com/tools/citation-verifier) (no signup, rate-limited), a public REST endpoint at /api/verify, and a first-party MCP server (`scholar-sidekick-mcp` on npm) that exposes `verifyCitation` alongside `resolveIdentifier`, `formatCitation`, `exportCitation`, `checkRetraction`, and `checkOpenAccess`. The MCP server is the only one on this page; if you are building an AI-agent pipeline that needs to verify a citation as part of a longer workflow, that matters.
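If you are scripting against the REST surface rather than the MCP server, the call is a plain HTTP request. The parameter names and response shape below are illustrative assumptions, not the documented contract of /api/verify; the `x-scholar-transform-version` response header is the only detail taken from this page. Check the API reference before building on this:

```python
import json
import urllib.request
from urllib.parse import urlencode

BASE = "https://scholar-sidekick.com/api/verify"

def build_verify_url(identifier: str, claimed_title: str) -> str:
    """Compose the request URL. The parameter names are assumptions."""
    return BASE + "?" + urlencode({"id": identifier, "title": claimed_title})

def verify(identifier: str, claimed_title: str) -> dict:
    """Call the endpoint, keeping the transform-version header with the
    body so the result can be audited against a pinned snapshot later."""
    with urllib.request.urlopen(build_verify_url(identifier, claimed_title)) as resp:
        return {
            "transform_version": resp.headers.get("x-scholar-transform-version"),
            "result": json.load(resp),
        }

# Usage: verify("10.xxxx/...", "Claimed title") - store transform_version with the result.
```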

What is distinctive: retraction-awareness via [Retraction Watch](https://scholar-sidekick.com/tools/retraction-checker) is built in, open-access status via [Unpaywall](https://scholar-sidekick.com/tools/open-access-checker) is built in, the identifier surface covers astrophysics (ADS), global health (WHO IRIS), and biomedical edge cases (PMCID), and every response carries an `x-scholar-transform-version` header that pins the resolver chain, normalisation, comparator, and CSL engine to a specific snapshot. None of the other nine tools publish a versioned-output contract; if your verifier runs against the same DOI six months from now, you cannot tell from any of them whether a change in output is a real change in upstream metadata or a silent change in their processing chain.

- **Free tier.** Anonymous web access with a published rate limit; no signup, no account, no PDF upload. Paid tiers via [RapidAPI](https://rapidapi.com/scholar-sidekick-scholar-sidekick-default/api/scholar-sidekick) for higher request limits ($0-$249/month).
- **Identifier surface.** DOI, PMID, PMCID, ISBN, ISSN, arXiv, ADS bibcode, WHO IRIS URL. PMCID and ADS bibcode coverage matter for biomedical preprints and astrophysics; WHO IRIS for global health policy documents.
- **Retraction-awareness.** Live Retraction Watch via Crossref integration. Available as a dedicated [/tools/retraction-checker](https://scholar-sidekick.com/tools/retraction-checker), as a public API endpoint, as per-item enrichment on /api/format via `?checks=retraction`, and as the `checkRetraction` MCP tool.
- **Open-access status.** Live Unpaywall integration with Gold / Green / Hybrid / Bronze classification. Available as a dedicated [/tools/open-access-checker](https://scholar-sidekick.com/tools/open-access-checker), as a public API endpoint, as per-item enrichment via `?checks=oa`, and as the `checkOpenAccess` MCP tool.
- **Deterministic output.** `x-scholar-transform-version` header pins the resolver chain, normalisation, formatter, comparator, and CSL engine to a specific snapshot. Identical inputs at a fixed transform version produce byte-identical output for a cache hit. Cache hits are exposed via `x-scholar-cache` so changes are explainable.
- **Where it does not win.** No PDF upload (paste-only), no Word-document export of corrected references, no plagiarism detection, no replacement-citation suggestions, no team libraries, no chat assistant. If your workflow needs any of those, one of the other tools on this page is the better choice.
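The retraction-awareness described above is, at its core, a lookup of editorial updates registered against a DOI. A rough sketch against the public Crossref API - Crossref's `updates` filter is real, but the status labels and logic here are my own simplification, not Scholar Sidekick's implementation:

```python
import json
import urllib.request
from urllib.parse import quote

def update_notices(doi: str) -> list:
    """Works registered as editorial updates (corrections, retractions,
    errata) to the given DOI, via Crossref's `updates` filter."""
    url = "https://api.crossref.org/works?filter=updates:" + quote(doi)
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["message"]["items"]

def retraction_status(notices: list) -> str:
    """'retracted' if any notice declares a retraction-typed update."""
    for item in notices:
        for update in item.get("update-to", []):
            if update.get("type") == "retraction":
                return "retracted"
    return "no-retraction-found"

# Usage: retraction_status(update_notices("10.xxxx/..."))
```

A citation can pass every metadata check and still be retracted, which is why this lookup belongs in the same pass as fabrication detection.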

### Citely - paid consumer-friendly verifier with chat assistant

*AI-powered citation checker with a polished paid UI, chat assistant, and team workspaces. The most consumer-facing verifier on this page.*

[Citely](https://citely.ai) (citely.ai) is the most actively marketed verifier in the category. It cross-references citations against Crossref, PubMed, arXiv, Google Scholar, OpenAlex, and Semantic Scholar, and claims a 95% accuracy rate on detecting fabricated vs authentic citations. Pricing starts at a $9 one-time trial (15 credits, 2,000 characters - about 8 to 15 references), then $19/month or $14/month billed annually, with a $347 lifetime ('Believer') tier. The product page emphasises an AI research-assistant chat, team workspaces with shared libraries, and integrations with Zotero, Mendeley, and EndNote.

Citely is the verifier I most expect a paying student or researcher to land on through advertising or AI-engine recommendations. It is the most-cited tool on Perplexity and Google AI Overviews for 'fake citation detector' queries in my 2026-05-14 baseline, and Google AI Overviews cites Citely's own 'Best AI Citation Checker in 2026' blog post verbatim. The product is polished and the audience fit for a paying user is real. Where Citely does not differentiate itself from the rest of this list: identifier coverage (DOI/PMID/arXiv only, as far as the landing page describes), retraction-awareness (not mentioned), open-access status (not mentioned), public API (not mentioned), MCP server (none), and a published deterministic-output contract (none).

- **Pricing.** $9 trial / $14-19/month / $347 lifetime. Credit-based (character-counted).
- **Databases.** Crossref, PubMed, arXiv, Google Scholar, OpenAlex, Semantic Scholar.
- **Strengths.** Polished UI, chat assistant, team workspaces, integrations with Zotero / Mendeley / EndNote, the strongest marketing presence in the category.
- **Gaps for biomedical or systematic-review workflows.** No PMCID, ADS, or ISSN support per the landing page; no retraction or open-access surface; no MCP; no versioned-output contract.

### CiteTrue - positions as the free citation checker

*Free AI-powered citation verifier; the consistent 'free option' brand in AI-engine recommendations for this category.*

[CiteTrue](https://citetrue.com) (citetrue.com) is the free counterpart to Citely in AI-engine recommendation lists for the fake-citation-detector category. The site styles itself as the '#1 free AI-powered citation verification tool' and is cited consistently across ChatGPT, Perplexity, and Google AI Overviews on the verifier queries I tracked. It searches 'vast authoritative academic databases' (not enumerated on the landing page) to verify citations and flag fabricated or AI-generated references.

What is hard to assess: the landing page does not enumerate which databases are queried, does not state identifier coverage, and does not describe the verification algorithm. It is free, which is real - most other tools in this list charge once you pass a small free quota - but the lack of detail makes it hard to evaluate against a specific workflow. If you need a free first-pass triage tool and do not require retraction status, identifier-surface guarantees, or replacement-citation suggestions, CiteTrue is a reasonable choice.

- **Pricing.** Free (no paid tier described on the landing page).
- **Strengths.** No-cost first-pass triage; strong AI-engine recommendation presence; positions itself as the free option in the category.
- **Gaps.** Databases not enumerated; identifier surface not specified; no API; no MCP; no retraction or open-access status.

### GPTZero Source Finder - education-focused hallucination detector

*Education-first hallucinated-citation detector integrated with Canvas, Google Classroom, and Zapier. Different primary audience from the others.*

[GPTZero Source Finder](https://gptzero.me/sources) is GPTZero's citation-verification surface, built on the same brand and infrastructure as their AI-text detector. It scans pasted essays for hallucinated citations and unsupported claims, cross-references against 220 million scholarly articles plus preprints and real-time news, and provides line-specific analysis with source recommendations for unsupported claims. Free tier allows up to 10,000 characters; paid tiers and a public API are available.

GPTZero's primary audience is *educators screening student work*, not researchers verifying their own bibliographies. The product reflects that: there are 1 million educators on the platform, Canvas and Google Classroom integrations, multilingual support (EN/ES/FR/DE and others), and a Chrome extension. If you are a teacher screening essays for AI-generated content (including hallucinated citations), GPTZero is built for you. If you are a researcher verifying a clinical reference list, the workflow fit is weaker - it is essay-shaped, not bibliography-shaped.

- **Pricing.** Free up to 10,000 characters; paid educator tiers above (specific pricing not on the landing page).
- **Databases.** 220M scholarly articles, preprints, real-time news.
- **Strengths.** Education-focused workflow, LMS integrations (Canvas, Google Classroom, Zapier), public API, Chrome extension, multilingual.
- **Different category.** Built for screening student essays, not for verifying a researcher's own bibliography. Closer to GPTZero's AI-detection product than to the source-of-truth verifiers on this page.

### Sourcely - verification plus replacement-citation suggestions

*Verifier that also returns ready-to-use real citations when it flags a fake. The closest fit for the 'find me a real source for this claim' workflow.*

[Sourcely](https://www.sourcely.net/citation-verification) (sourcely.net) is the verifier most clearly oriented around the 'replace my fake citation with a real one' workflow. It cross-references against an unusually broad database list - Google Scholar, Semantic Scholar, Crossref, PubMed, arXiv, Scopus, Web of Science, JSTOR, and CORE - and explicitly handles AI hallucinations from ChatGPT, Claude, Gemini, and Perplexity by name. When a citation fails, it returns specific reasons (invalid DOI, non-existent journal, etc.) and suggests formatted ready-to-use replacement citations.

Pricing is paid: Ultra at $19/month (annual; 30 deep searches/month) or Max at $39/month (annual; 1,000 deep searches/month). A free tier exists with limits not specified on the landing page. The replacement-suggestion feature is the genuine differentiator here - none of the other verifiers on this page are framed around finding a *real* source for a claim you are trying to support, only around flagging fakes.

- **Pricing.** $19-39/month (annual). Free tier with unspecified limits.
- **Databases.** Nine: Google Scholar, Semantic Scholar, Crossref, PubMed, arXiv, Scopus, Web of Science, JSTOR, CORE.
- **Strengths.** Returns formatted replacement citations, not just a flag. Specific failure reasons per citation. Broad database coverage including Scopus and Web of Science.
- **Gaps.** No API or MCP. No retraction or open-access surface. Identifier coverage not specified beyond citation strings.

### TrueCitation - free source-credibility checker with predatory-publisher detection

*Free verifier that pairs fabrication detection with predatory-publisher detection. The only one on this page that surfaces predatory-publisher signals.*

[TrueCitation](https://truecitation.com) (truecitation.com) frames itself as a 'free source reliability checker' that detects AI-fabricated references and predatory publishers. The site claims coverage of 17+ academic databases (not enumerated on the landing page) and accepts citation strings, journal names, and URLs as inputs.

The predatory-publisher detection is a useful adjacent signal - it overlaps with retraction-awareness in spirit (both are 'should you trust this source?' signals beyond 'does it exist?') though they answer different questions. TrueCitation does not surface formal retraction status (Retraction Watch / Crossref) or open-access classification (Unpaywall). Its primary strength is the predatory-publisher angle, which none of the other tools on this page name.

- **Pricing.** Free.
- **Databases.** 17+ academic databases (not enumerated).
- **Strengths.** Predatory-publisher detection alongside fabrication detection. Free.
- **Gaps.** Databases not enumerated; identifier surface not specified; no retraction or OA classification; no API or MCP.

### AiCitationChecker - generous free tier with no signup

*50 free credits per day, no signup, paste-anything inputs, with output reformatting to APA, IEEE, Chicago, Harvard, Vancouver, and MDPI styles.*

[AiCitationChecker](https://aicitationchecker.org) (aicitationchecker.org) has the most generous no-signup free tier among the paid verifiers on this page: 50 credits per day (roughly 7-12 reference checks), with paid packs at $9.99-$99.99 for credit bundles that last 90-180 days. It cross-references Crossref, OpenAlex, and Semantic Scholar, and claims a 95% match rate for DOI references.

Two thoughtful features: output reformatting (you can ask it to return the verified citation in APA / IEEE / Chicago / Harvard / Vancouver / MDPI styles, and export a Word document of verified citations) and 'silent citation drift' detection (catches AI-rewritten metadata in manuscripts). The site states no data retention - text is processed in-memory only - which is the right answer for sensitive manuscripts. An API is mentioned in the footer ('For AI Agents') but not described in detail.

- **Pricing.** 50 credits/day free (no signup). Silver $9.99, Gold $19.99, Platinum $99.99 for 90-180 day bundles.
- **Databases.** Crossref, OpenAlex, Semantic Scholar.
- **Strengths.** No-signup free tier, Word-document export, output reformatting to multiple styles, in-memory processing (no data retention), silent-citation-drift detection.
- **Gaps.** No retraction or OA status; MCP not present; identifier surface narrower than Scholar Sidekick (no PMCID, ADS, WHO IRIS, ISSN).
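The 'silent citation drift' idea generalises beyond any one tool: instead of a binary real-or-fake verdict, compare each claimed metadata field against what the registry resolves and report the fields that drifted. A generic sketch of the idea - the field names and comparison are mine, not AiCitationChecker's actual algorithm:

```python
def citation_drift(claimed: dict, resolved: dict) -> dict:
    """Fields where a claimed citation disagrees with registry metadata.
    Field names are illustrative; real tools normalise far more carefully."""
    def norm(value) -> str:
        return " ".join(str(value).lower().split())

    return {
        field: {"claimed": claimed[field], "resolved": resolved[field]}
        for field in claimed.keys() & resolved.keys()
        if norm(claimed[field]) != norm(resolved[field])
    }

# A year silently rewritten by an AI surfaces as {'year': {'claimed': ..., 'resolved': ...}}.
```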

### CiteMe AI Reference Verifier - 250M-source verifier with ChatGPT/Gemini/Claude focus

*Verifier framed around catching AI-generated references from ChatGPT, Gemini, and Claude specifically, against 250M academic sources.*

[CiteMe AI Reference Verifier](https://citeme.app/tools/ai-reference-verifier) (citeme.app) advertises 250 million academic sources for cross-reference and frames the product around 'catch hallucinated citations from ChatGPT, Gemini, or any AI tool' before submission. CiteMe also offers a broader citation-generator product, with the verifier sitting in their tools menu.

The public landing page is thin on detail: database list, accuracy, identifier surface, and pricing are not specified at the URL I checked. The 250M-source figure is the headline; it is comparable to GPTZero's 220M and Sourcely's 200M, all of which are claims about Crossref + Scholar + adjacent registries. CiteMe was previously on Scholar Sidekick's competitive radar as a Zotero alternative (its citation-generator side); the verifier is a newer surface for them.

- **Pricing.** Not stated on the verifier landing page.
- **Database scale.** 250M+ academic sources (in line with GPTZero's 220M and Sourcely's 200M).
- **Strengths.** Scale claim, AI-engine framing (names ChatGPT / Gemini / Claude explicitly), brand presence in the broader citation-tool category.
- **Gaps.** Public-facing detail is thin; identifier coverage and accuracy not specified at the verifier URL.

### SwanRef - AI hallucination detector for academic citations

*Smaller, more focused verifier framed specifically around AI hallucination detection. Surfaced primarily through Perplexity.*

[SwanRef](https://swanref.org) (swanref.org) frames itself as an 'AI hallucination detector for academic citations' against more than 150 million papers via Crossref and Google Scholar. The product page is sparse on detail and SwanRef does not appear in ChatGPT's or Google AI Overviews' tool lists for verifier queries - Perplexity is the AI engine that consistently surfaces it.

It is hard to assess SwanRef relative to the other tools on this page without more detail than the landing page provides. Listed here for completeness because AI engines do cite it; verify the linked product page before relying on it.

- **Pricing.** Not stated.
- **Database scale.** 150M papers via Crossref and Google Scholar.
- **Strengths.** AI-engine presence on Perplexity for verifier queries.
- **Gaps.** Sparse public-facing detail; identifier surface, accuracy, and pricing not specified.

### CiteSure - AI-powered citation verification

*Newer commercial verifier in the same category as Citely. Cited on Perplexity but not yet on ChatGPT or Google AI Overviews.*

[CiteSure](https://citesure.com) (citesure.com) is a newer entrant in the AI-citation-verification category. The landing page describes 'AI-powered analysis' to verify citations 'in seconds' and is framed as a commercial product. Like SwanRef, public-facing detail on databases, identifier surface, accuracy claims, and pricing is thin.

Listed here for completeness; Perplexity recommends it among verifiers but Google AI Overviews and ChatGPT do not. Verify the product page before subscribing.

- **Pricing.** Not stated on the landing page.
- **Strengths.** Commercial product positioning in a category where most free tools have minimal-detail landing pages.
- **Gaps.** Sparse public-facing detail.

## Where Scholar Sidekick wins

Scholar Sidekick is built for the case where 'this citation looks plausible' is not good enough - where you need the resolved metadata to actually match the source registry, where retraction status changes the answer, and where an audit six months from now needs to reproduce the exact verifier output. The wins below are specific to that case; if your workflow is 'paste an undergraduate essay and triage the obvious hallucinations', other tools are a better fit.

- **Broadest identifier surface in the verifier category.** DOI, PMID, PMCID, ISBN, ISSN, arXiv, ADS bibcode, WHO IRIS URL. The other nine verifiers on this page are mostly DOI/PMID/arXiv-only as described on their landing pages. PMCID coverage matters for biomedical preprints whose PMID is not yet indexed; ADS coverage matters for astrophysics; WHO IRIS for global health policy documents; ISSN as a container reference.
- **Retraction-aware.** None of the other nine verifiers on this page surface formal retraction status from Retraction Watch / Crossref. [/tools/retraction-checker](https://scholar-sidekick.com/tools/retraction-checker) handles it as a dedicated tool, /api/retraction-check exposes it as a REST endpoint, the main /api/format endpoint accepts `?checks=retraction` for per-item enrichment, and the MCP server exposes a `checkRetraction` tool. This is a real gap in the category: a citation can be real, correctly cited, and still retracted - that should change the recommendation.
- **Open-access classification.** Live Unpaywall integration with Gold / Green / Hybrid / Bronze classification via [/tools/open-access-checker](https://scholar-sidekick.com/tools/open-access-checker). Useful when verifying someone else's reference list and wanting to confirm a free legal copy is available.
- **First-party MCP server.** [`scholar-sidekick-mcp@latest`](https://www.npmjs.com/package/scholar-sidekick-mcp) on npm installs in one line and exposes six tools, including `verifyCitation`. Among the ten verifiers on this page, Scholar Sidekick is the only one with a public MCP server. If you are building an AI-agent pipeline that needs verification as one step in a longer workflow, that is the only working option today.
- **Topaz pattern by name.** Scholar Sidekick is the only verifier on this page that names the Topaz et al. (Lancet 2026) fabrication pattern explicitly and is designed around it. The page at [/citation-integrity](https://scholar-sidekick.com/citation-integrity) walks through what the pattern looks like, why naive DOI checks miss it, and how the verifier surfaces the mismatch.
- **Deterministic, version-pinned output.** Every response carries an `x-scholar-transform-version` header that pins the resolver chain, normalisation, comparator, and CSL engine to a specific snapshot. Identical inputs at a fixed transform version produce byte-identical output for a cache hit. An `x-scholar-cache` header makes cache hits vs upstream re-queries explicit. None of the other nine verifiers publish a versioned-output contract; their output 'is what it is today'.
- **Public provenance manifest.** A machine-readable [/.well-known/sources.json](https://scholar-sidekick.com/.well-known/sources.json) declares the resolver chain, fallback order per identifier type, allowlisted upstream hosts, and network-safety guarantees. You can read what the verifier does before integrating.
- **Free anonymous tier.** No signup, no account, no PDF upload required. Paid tiers via RapidAPI for higher request limits, but the free anonymous tier covers most evaluation and many production loads.
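For an audit, the practical discipline is to store the transform version alongside a hash of the verifier output; six months later, the pair tells you which kind of change you are looking at. A minimal sketch - the record shape is mine, and only the header name comes from this page:

```python
import hashlib

def audit_record(transform_version: str, body: bytes) -> dict:
    """What to store with each verification result for later comparison."""
    return {
        "transform_version": transform_version,
        "body_sha256": hashlib.sha256(body).hexdigest(),
    }

def explain_change(old: dict, new: dict) -> str:
    """Separate upstream-metadata changes from processing-chain changes."""
    if old["body_sha256"] == new["body_sha256"]:
        return "unchanged"
    if old["transform_version"] != new["transform_version"]:
        return "processing chain changed (transform version differs)"
    return "upstream metadata changed (same transform version, new output)"
```

Without a version header, the third branch is indistinguishable from the second, which is the gap the versioned-output contract closes.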

## Where Scholar Sidekick does not win

An honest list of the things other verifiers on this page do that Scholar Sidekick does not. If your workflow needs one of these, one of the others is the right tool.

- **PDF upload.** Several tools (Citely, Sourcely, GPTZero) accept PDFs or DOCX files and extract references for you. Scholar Sidekick is paste-only. If your workflow is 'upload a manuscript and verify everything in it', use one of the manuscript-shaped tools.
- **Replacement-citation suggestions.** Sourcely returns formatted real-citation replacements when it flags a fake. Scholar Sidekick flags the mismatch and returns the resolved metadata; it does not suggest what citation you *should* have used.
- **Plagiarism detection.** Scribbr (paid), Cite This For Me Premium, and Citely have plagiarism checkers as separate paid products. Scholar Sidekick does not do plagiarism detection - it is a different signal (text similarity to other published text, not metadata fidelity to source registries).
- **Word-document export of verified references.** AiCitationChecker exports a Word document of verified citations. Scholar Sidekick returns CSL JSON / BibTeX / RIS / NBIB / EndNote XML / RefWorks / Zotero RDF / CSV / plain text, but not a polished .docx.
- **Team libraries and chat assistant.** Citely has team workspaces, shared libraries, and a research-assistant chat. Scholar Sidekick is stateless by design - there is no library to share, no team mode, and no chat surface.
- **Predatory-publisher detection.** TrueCitation surfaces predatory-publisher signals. Scholar Sidekick does not currently flag predatory publishers as a separate signal (though retraction-awareness catches some overlap).
- **Education-LMS integrations.** GPTZero plugs into Canvas, Google Classroom, and Zapier. Scholar Sidekick is API/MCP/web-only with a browser extension; no LMS integrations.

## Structural manuscript tools (different category)

Four tools that AI engines sometimes group with the verifiers above are actually solving a different problem: in-text-to-bibliography matching, formatting consistency, and manuscript-wide structural passes. They check whether your citations *parse correctly and match your reference list*, not whether the references are *real*.

If your workflow is 'I have a draft manuscript and I want to make sure every in-text citation has a bibliography entry and vice versa', these are the right tools. Pair them with one of the verifiers above for the fabrication-detection step.

- **[Recite Works](https://reciteworks.com).** Checks APA and Harvard in-text citations against the reference list for consistency. Free. Manuscript-shaped (paste full text). Not a fabrication detector.
- **[Scribbr Citation Checker](https://www.scribbr.com/citation/checker/).** Manuscript-shaped structural checker. Style coverage focused on APA / MLA / Chicago / Harvard. Free.
- **[Paperpal Reference Finder](https://paperpal.com).** Manuscript-oriented; checks reference count, age, and in-text/reference correspondence. Paperpal also offers Word-plugin and authoring tools as part of a larger product suite.
- **[Referentia](https://refvalidation.org/).** Manuscript reference checker (refvalidation.org). Smaller product footprint; verify the current state before relying on it.

## A realistic systematic-review workflow

If you are running a systematic review or a clinical-guideline citation audit, no single tool on this page covers the whole job. Here is the workflow I would use, based on the strengths above:

- **Step 1 - structural pass.** Run the manuscript through Recite Works or Paperpal to confirm every in-text citation has a bibliography entry and that the reference list is internally consistent.
- **Step 2 - fabrication detection.** Run each reference through [Scholar Sidekick's verifier](https://scholar-sidekick.com/tools/citation-verifier) (or the /api/verify endpoint for batch). Flag identifier-resolves-but-title-mismatches as the Topaz pattern; flag identifier-does-not-resolve as classic hallucination.
- **Step 3 - retraction screening.** Run the verified-real references through [/tools/retraction-checker](https://scholar-sidekick.com/tools/retraction-checker) (or /api/retraction-check for batch). A correctly cited paper that has been retracted should change your recommendation.
- **Step 4 - open-access enrichment.** Optionally enrich the final reference list with open-access status via [/tools/open-access-checker](https://scholar-sidekick.com/tools/open-access-checker) so reviewers can read the cited papers.
- **Step 5 - manuscript-level claim check.** For citations supporting key findings, manually confirm the cited paper actually says what your manuscript claims it says. No tool on this page (or any other automated tool I am aware of) does this reliably - claim-to-source semantic verification is still a human job.
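To make the Step 2 triage concrete, here is a minimal sketch of the two-failure-mode logic. The bucket labels and the 0.85 similarity threshold are my own illustration, not Scholar Sidekick's API contract:

```python
def triage(identifier_resolves: bool, title_similarity: float,
           threshold: float = 0.85) -> str:
    """Sort one reference into Step 2's three buckets.

    The 0.85 threshold is an illustrative assumption, not a
    documented Scholar Sidekick default.
    """
    if not identifier_resolves:
        return "classic-hallucination"  # identifier does not resolve at all
    if title_similarity < threshold:
        return "topaz-pattern"          # real identifier, invented metadata
    return "verified-real"

# Example batch: (does the identifier resolve?, claimed-vs-resolved title similarity)
batch = [(False, 0.0), (True, 0.31), (True, 0.97)]
verdicts = [triage(resolves, sim) for resolves, sim in batch]
```

The point of the sketch is the asymmetry: the first branch is the easy case every DOI resolver catches, and the second branch is the Topaz pattern that a resolver-only check misses.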

## Frequently asked questions

### Which verifier catches the 'real DOI + invented title' fabrication pattern?

All of them attempt some version of it, but [Scholar Sidekick](https://scholar-sidekick.com/tools/citation-verifier) is the only one on this page that names the pattern explicitly (the dominant pattern documented by [Topaz et al. (Lancet 2026)](https://www.thelancet.com/journals/lancet/article/PIIS0140-6736(26)00603-3/fulltext)) and is designed around it. The pattern is hard to catch because a naive DOI-resolver pass returns success - the DOI exists - so the verifier has to compare the *resolved title* against the *claimed title* and flag the mismatch. Scholar Sidekick's comparator scores title similarity, normalises author-name forms (catches family-given-name swaps), and strips HTML/entity markup that registries preserve in titles (italics, sub/sup, etc.). The other verifiers do not document their comparator algorithm at this level of detail on their public pages; some may use similar techniques, but they do not name the pattern.
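For intuition about what a comparator of that kind has to do, here is a toy version - strip registry markup, decode entities, normalise case and whitespace, then score similarity and tolerate family/given-name swaps. This is my own illustration, not Scholar Sidekick's actual algorithm:

```python
import difflib
import html
import re

def normalise_title(title: str) -> str:
    # Strip markup registries preserve in titles (<i>, <sub>, <sup>),
    # decode HTML entities, collapse whitespace, lowercase
    text = re.sub(r"<[^>]+>", "", title)
    text = html.unescape(text)
    return re.sub(r"\s+", " ", text).strip().lower()

def title_similarity(claimed: str, resolved: str) -> float:
    """Similarity in [0, 1] between the claimed and resolved titles."""
    return difflib.SequenceMatcher(
        None, normalise_title(claimed), normalise_title(resolved)
    ).ratio()

def same_author(claimed: str, resolved: str) -> bool:
    # Treat "Topaz, Maxim" and "Maxim Topaz" as the same name
    def parts(s: str) -> list[str]:
        return sorted(p.lower() for p in re.split(r"[,\s]+", s) if p)
    return parts(claimed) == parts(resolved)
```

A production comparator handles far more edge cases (diacritics, subtitles, truncation), but even this toy version shows why a bare DOI resolution is not enough: the comparison happens *after* the identifier resolves.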

### Which verifier has the broadest free tier for occasional use?

Three options stand out. [Scholar Sidekick](https://scholar-sidekick.com/tools/citation-verifier) - free anonymous web access with a published rate limit, no signup, and support for all eight identifier types. [AiCitationChecker](https://aicitationchecker.org) - 50 credits per day with no signup and no rate limit on the credit refresh. [CiteTrue](https://citetrue.com) - positions itself as the free option, with fewer documented limits. Citely's $9 trial is paid; Sourcely's free tier has unspecified limits; GPTZero's free tier is capped at 10,000 characters per scan.

### Which verifier offers an API or MCP server?

[Scholar Sidekick](https://scholar-sidekick.com/mcp) is the only verifier on this page with a public MCP server (`scholar-sidekick-mcp@latest` on npm) that exposes `verifyCitation` alongside five other citation tools. [Scholar Sidekick's REST API](https://scholar-sidekick.com/docs) at /api/verify is the public verification endpoint. [GPTZero](https://gptzero.me/sources) has a public API for the broader source-finder product; [AiCitationChecker](https://aicitationchecker.org) mentions an API for AI agents in its footer but does not describe it in detail. The other six tools on this page do not document a public API.
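For clients that use the common `mcpServers` configuration convention (Claude Desktop and similar), a registration entry would look roughly like this - the key names below follow the generic MCP client convention, not a snippet copied from Scholar Sidekick's docs, so check the [MCP documentation page](https://scholar-sidekick.com/mcp) for the exact form:

```json
{
  "mcpServers": {
    "scholar-sidekick": {
      "command": "npx",
      "args": ["-y", "scholar-sidekick-mcp@latest"]
    }
  }
}
```

Once registered, an agent can call `verifyCitation` (and the five sibling tools) like any other MCP tool.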

### Which verifier surfaces retraction status?

Among dedicated AI-citation verifiers on this page, only [Scholar Sidekick](https://scholar-sidekick.com/tools/retraction-checker) surfaces formal retraction status from Retraction Watch (via Crossref integration). [TrueCitation](https://truecitation.com) surfaces predatory-publisher status, which is a related but distinct signal. Retraction Watch and Crossref classify retractions, corrections, and expressions of concern; Scholar Sidekick exposes that as a dedicated tool, a REST endpoint (/api/retraction-check), per-item enrichment on /api/format (`?checks=retraction`), and as the `checkRetraction` MCP tool.

### What is the difference between citation verification, plagiarism checking, and AI-text detection?

Three different signals. **Citation verification** asks 'does this citation correspond to a real paper with matching metadata?' - it is identifier-and-title fidelity to a source registry. **Plagiarism checking** asks 'does this manuscript text match other published text?' - it is content similarity, not metadata fidelity. **AI-text detection** asks 'was this text generated by an AI model?' - it is a statistical fingerprint on the prose itself. The three are independent. A citation can be real (verification passes), the manuscript can be original (plagiarism passes), and the prose can be AI-generated (AI-text detection fails). Tools that bundle them often confuse buyers; the verifiers on this page focus on the first signal.

### How accurate are the accuracy claims in this category?

Take them with a grain of salt. Citely claims 95%, GPTZero claims 99%, AiCitationChecker claims 95% DOI match rate. None of these claims are tied to a published evaluation methodology with a held-out test set and a reproducible scoring script. The verifiers do mostly the same thing - resolve an identifier, compare resolved metadata to claimed metadata - and the underlying upstream registries (Crossref, PubMed, arXiv) are shared. Real accuracy differences are mostly about (1) which identifier surface a tool covers, (2) what the comparator does on edge cases like family/given-name swaps and HTML markup in titles, and (3) whether the tool catches the specific 'real DOI + invented title' pattern. Scholar Sidekick publishes its evaluation methodology against a 20-entry hand-curated fixture sourced from the Topaz et al. supplementary appendix on the [verifier tool page](https://scholar-sidekick.com/tools/citation-verifier); the others do not.

### Why does the database list matter less than it sounds?

Several tools advertise 200M, 220M, or 250M scholarly sources. These numbers are mostly Crossref + Google Scholar + adjacent registries - the same underlying data the cheaper and free tools use. The number of records is not the differentiator; the *identifier surface* is. Crossref alone gives you DOI; adding PubMed gives you PMID; adding OpenLibrary gives you ISBN. Adding ADS gives you bibcodes (astrophysics); adding WHO IRIS gives you global-health-policy URLs; adding NCBI PMC gives you PMCID for biomedical preprints. Whether your specific reference type resolves depends on which identifier types the tool accepts, not on whether the underlying database has 100M or 250M rows.
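To make "identifier surface" concrete: a verifier's first job is deciding which registry a raw string belongs to before it can resolve anything. A loose sketch for a few of the identifier types named above (patterns simplified by me for illustration; real validators are stricter):

```python
import re

# Loose, illustrative patterns - not exhaustive and not production-grade
PATTERNS = {
    "doi":   re.compile(r"^10\.\d{4,9}/\S+$"),
    "pmcid": re.compile(r"^PMC\d+$", re.IGNORECASE),
    "arxiv": re.compile(r"^\d{4}\.\d{4,5}(v\d+)?$"),
    "isbn":  re.compile(r"^(97[89])?\d{9}[\dX]$"),
    "pmid":  re.compile(r"^\d{1,8}$"),
}

def identifier_type(raw: str) -> str:
    """Return the first matching identifier type, or 'unknown'."""
    token = raw.strip()
    for kind, pattern in PATTERNS.items():
        if pattern.match(token):
            return kind
    return "unknown"
```

Every identifier type a tool can route like this is another class of reference it can verify at all; a tool that only recognises the DOI shape returns "unknown" for the rest, no matter how many million rows its database holds.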

## Related

### Scholar Sidekick verification surfaces

- [Citation verifier (the tool)](https://scholar-sidekick.com/tools/citation-verifier)
- [Citation integrity in the age of AI (the explainer)](https://scholar-sidekick.com/citation-integrity)
- [Retraction checker](https://scholar-sidekick.com/tools/retraction-checker)
- [Open-access checker](https://scholar-sidekick.com/tools/open-access-checker)
- [MCP server documentation](https://scholar-sidekick.com/mcp)
- [REST API documentation](https://scholar-sidekick.com/docs)
- [Self-verification kit](https://scholar-sidekick.com/verification)
- [Data source manifest (sources.json)](https://scholar-sidekick.com/.well-known/sources.json)

### Other Scholar Sidekick comparisons

- [Comparison index](https://scholar-sidekick.com/compare)
- [Scholar Sidekick vs Zotero](https://scholar-sidekick.com/compare/scholar-sidekick-vs-zotero)
- [Scholar Sidekick vs ZoteroBib](https://scholar-sidekick.com/compare/scholar-sidekick-vs-zoterobib)
- [Scholar Sidekick vs Scribbr](https://scholar-sidekick.com/compare/scholar-sidekick-vs-scribbr)
- [Scholar Sidekick vs EndNote](https://scholar-sidekick.com/compare/scholar-sidekick-vs-endnote)
- [Scholar Sidekick vs MyBib](https://scholar-sidekick.com/compare/scholar-sidekick-vs-mybib)
- [Scholar Sidekick vs Cite This For Me](https://scholar-sidekick.com/compare/scholar-sidekick-vs-citethisforme)
- [Citation MCP Servers Compared](https://scholar-sidekick.com/compare/citation-mcp-servers)

### The other nine verifiers

- [Citely (citely.ai)](https://citely.ai)
- [CiteTrue (citetrue.com)](https://citetrue.com)
- [GPTZero Source Finder (gptzero.me)](https://gptzero.me/sources)
- [Sourcely (sourcely.net)](https://www.sourcely.net/citation-verification)
- [TrueCitation (truecitation.com)](https://truecitation.com)
- [AiCitationChecker (aicitationchecker.org)](https://aicitationchecker.org)
- [CiteMe AI Reference Verifier (citeme.app)](https://citeme.app/tools/ai-reference-verifier)
- [SwanRef (swanref.org)](https://swanref.org)
- [CiteSure (citesure.com)](https://citesure.com)

## Sitemap

See the full [sitemap](https://scholar-sidekick.com/sitemap.md) for all pages.
