Written by a clinician-educator who reads citations weekly. I built one of the ten tools on this page (Scholar Sidekick), and I have tried to write the comparison the way I would want to read it.
In early 2026, Topaz et al. audited 2.5 million biomedical papers and found that roughly 1 in 277 contains at least one fabricated reference (Lancet 2026;407(10541):1779-1781). The dominant fabrication pattern is not what people expect: it is not the obviously fake DOI or the made-up journal name. It is a real, resolvable DOI paired with an invented title and abstract. The identifier checks out. The metadata does not match. A naive DOI-resolver-only check passes.
The verifier category exists because that pattern is hard. There are now at least ten dedicated tools that try to detect it, mostly launched in the past 18 months in response to ChatGPT/Claude/Gemini-generated reference lists slipping past peer review. This page is an honest roundup of those ten: what each is for, what it costs, what databases it cross-references, and which one I would reach for in each situation. The methodology is simple - I visited each tool's product page on 2026-05-14 and recorded what it says it does.
One framing note up front. I built one of the ten tools on this page (Scholar Sidekick). The page is not a sales pitch - several of the other nine win cleanly on specific axes (PDF upload, manuscript-wide passes, education-focused integrations, generous free tiers). What Scholar Sidekick is built for is source-of-truth metadata across the broadest identifier surface I could assemble - DOI, PMID, PMCID, ISBN, ISSN, arXiv, ADS bibcode, WHO IRIS URL - plus retraction-awareness via Retraction Watch, open-access classification via Unpaywall, and a free public MCP server for AI agents. If those things matter to your workflow, the case for Scholar Sidekick gets stronger. If they do not, one of the others is probably the right answer.
Each tool is built for a specific workflow. If you recognise yours below, that is probably the right tool for you - the rest of the page is the longer version of the same answer.
Scholar Sidekick is not the right choice if you need PDF upload, manuscript-wide structural checks, plagiarism detection, Word-document export of corrected references, or replacement-citation suggestions - other tools on this page win on each of those axes.
| If you need... | Reach for | Why |
|---|---|---|
| Source-of-truth metadata across the broadest identifier surface (DOI, PMID, PMCID, ISBN, ISSN, arXiv, ADS, WHO IRIS) | Scholar Sidekick | Only verifier on this page that resolves PMCID, ADS bibcode, ISSN, and WHO IRIS URL alongside the common DOI/PMID/arXiv set. |
| Retraction-awareness as part of citation verification | Scholar Sidekick | Only verifier on this page with live Retraction Watch integration (separate /tools/retraction-checker, plus enrichment on the main verifier). |
| Open-access status with a free legal PDF link | Scholar Sidekick | Live Unpaywall integration with Gold/Green/Hybrid/Bronze classification via /tools/open-access-checker. |
| MCP server so an AI agent can verify citations on demand | Scholar Sidekick MCP | Only verifier on this page with a first-party MCP server. `verifyCitation` is one of six tools. |
| Education / classroom setting (essays, plagiarism context, Canvas / Google Classroom integration) | GPTZero Source Finder | Built for educators screening student work; integrates with the LMS stack that schools already use. |
| Replacement citations when a fake is flagged | Sourcely | Returns formatted ready-to-use real citations, not just a flag. Closest to the 'find me a real source for this claim' workflow. |
| Polished paid UI with chat assistant and team libraries | Citely | $14-19/month, claims 95% accuracy, has a research-assistant chat and team workspaces. The most consumer-friendly verifier of the ten. |
| Generous free tier for occasional use, no signup | AiCitationChecker or CiteTrue | AiCitationChecker gives 50 free credits/day with no signup; CiteTrue positions itself as the free option. |
| Catching the dominant 'real DOI + invented title' fabrication pattern documented by Topaz et al. (Lancet 2026) | Scholar Sidekick | Only verifier on this page that names the Topaz pattern explicitly and is designed around it. |
| Deterministic, version-pinned verifier output for an audit or publication | Scholar Sidekick | x-scholar-transform-version header pins the resolver chain, normalisation, comparator, and CSL engine to a specific snapshot. No other verifier on this page publishes a deterministic-output contract. |
| Verifying a whole manuscript file (.docx, .tex, .md) end-to-end | Scribbr / Paperpal / Recite | These are structural manuscript tools - in-text-to-bibliography matching, formatting consistency, manuscript-wide passes. Different category from fabrication detection; pair with one of the verifiers above. |
| Tool | Pricing | Databases | Identifier surface | Retraction / OA | API / MCP |
|---|---|---|---|---|---|
| Scholar Sidekick | Free anonymous tier (no signup, rate-limited); paid via RapidAPI ($0-$249/mo) | Crossref, PubMed, DataCite, arXiv, ADS, OpenLibrary, WHO IRIS, Unpaywall, Retraction Watch | DOI, PMID, PMCID, ISBN, ISSN, arXiv, ADS bibcode, WHO IRIS URL | Both (Retraction Watch + Unpaywall) | Public REST API + first-party MCP server with `verifyCitation` |
| Citely | $9 trial / $14-19/mo / $347 lifetime | Crossref, PubMed, arXiv, Google Scholar, OpenAlex, Semantic Scholar | Not stated | Neither | Not mentioned |
| CiteTrue | Free (paid tier not stated) | Multiple academic databases (not enumerated) | Not stated | Neither | Not mentioned |
| GPTZero Source Finder | Free up to 10,000 chars; paid educator tiers | 220M scholarly articles, preprints, real-time news | Not identifier-first (pasted text) | Neither | Public API + Chrome extension + Canvas + Google Classroom + Zapier |
| Sourcely | Free tier (with limits); $19-39/mo (annual) | Google Scholar, Semantic Scholar, Crossref, PubMed, arXiv, Scopus, Web of Science, JSTOR, CORE | Citation strings (APA/MLA/Chicago/Harvard) | Neither | Not mentioned |
| TrueCitation | Free (paid tier not stated) | 17+ academic databases (not enumerated) | Citation strings, journals, URLs | Neither (predatory-publisher flag instead) | Not mentioned |
| AiCitationChecker | 50 credits/day free (no signup); $9.99-$99.99 packs | Crossref, OpenAlex, Semantic Scholar | Citation strings in any format | Neither | API for AI Agents mentioned (not detailed) |
| CiteMe AI Reference Verifier | Not stated on landing page | 250M+ academic sources | AI-generated references (ChatGPT/Gemini/Claude) | Neither | Not mentioned |
| SwanRef | Not stated | 150M papers via Crossref + Google Scholar | Not stated | Neither | Not mentioned |
| CiteSure | Not stated | Not enumerated | Citation strings | Neither | Not mentioned |
One capsule per tool: what it is, what it is for, where it does not fit. Scholar Sidekick first because I built it; the rest in order of public visibility.
Each capsule below reflects what I found on the tool's product page on 2026-05-14, plus what the AI engines (ChatGPT, Perplexity, Google AI Overviews) report when asked about each tool. Pricing, free-tier limits, database lists, and accuracy claims change frequently - verify the linked product page before subscribing.
Free citation verifier with the broadest identifier surface on this page, live retraction and open-access status, and a public MCP server. The project I built.
Scholar Sidekick's citation verifier resolves any of eight identifier types - DOI, PMID, PMCID, ISBN, ISSN, arXiv, ADS bibcode, WHO IRIS URL - against the live upstream registries (Crossref, PubMed, DataCite, arXiv, ADS, OpenLibrary, WHO IRIS) and compares the resolved title to the claimed title. When the identifier resolves but the titles disagree, it flags the dominant fabrication pattern documented by Topaz et al. (Lancet 2026): a real, resolvable DOI paired with an invented title. That pattern is the one a naive DOI-resolver-only check passes.
Verification is shipped three ways: a free web tool at /tools/citation-verifier (no signup, rate-limited), a public REST endpoint at /api/verify, and a first-party MCP server (scholar-sidekick-mcp on npm) that exposes verifyCitation alongside resolveIdentifier, formatCitation, exportCitation, checkRetraction, and checkOpenAccess. The MCP server is the only one on this page; if you are building an AI-agent pipeline that needs to verify a citation as part of a longer workflow, that matters.
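To make the REST surface concrete, here is a minimal sketch of calling the verification endpoint from TypeScript and reading the two response headers described on this page. Only the endpoint path and header names come from the page; the host, the query-parameter names (`id`, `title`), and the response shape are illustrative assumptions - check the API documentation before relying on them.

```typescript
// Minimal sketch: verify a claimed citation against /api/verify.
// NOTE: the host, query-parameter names, and response fields are illustrative
// assumptions; only the endpoint path and header names come from this page.
const BASE = "https://scholarsidekick.example"; // placeholder host

async function verifyViaApi(identifier: string, claimedTitle: string) {
  const url =
    `${BASE}/api/verify?id=${encodeURIComponent(identifier)}` +
    `&title=${encodeURIComponent(claimedTitle)}`;
  const res = await fetch(url);

  // The transform-version header pins the resolver/normalisation/comparator
  // chain, so an audit can tell upstream metadata changes from pipeline changes.
  const transformVersion = res.headers.get("x-scholar-transform-version");
  const cacheStatus = res.headers.get("x-scholar-cache");

  const body = await res.json(); // shape assumed, e.g. { resolved, titleMatch, ... }
  return { transformVersion, cacheStatus, body };
}

// Example: the Topaz pattern - a real, resolvable DOI paired with an invented title.
verifyViaApi("10.1000/xyz123", "A completely invented title").then(console.log);
```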
What is distinctive: retraction-awareness via Retraction Watch is built in, open-access status via Unpaywall is built in, the identifier surface covers astrophysics (ADS), global health (WHO IRIS), and biomedical edge cases (PMCID), and every response carries an x-scholar-transform-version header that pins the resolver chain, normalisation, comparator, and CSL engine to a specific snapshot. None of the other nine tools publish a versioned-output contract; if your verifier runs against the same DOI six months from now, you cannot tell from any of them whether a change in output is a real change in upstream metadata or a silent change in their processing chain.
- Retraction status is exposed as a dedicated tool at /tools/retraction-checker, as per-item enrichment via ?checks=retraction, and as the checkRetraction MCP tool.
- Open-access status is exposed at /tools/open-access-checker, as per-item enrichment via ?checks=oa, and as the checkOpenAccess MCP tool.
- The x-scholar-transform-version header pins the resolver chain, normalisation, formatter, comparator, and CSL engine to a specific snapshot. Identical inputs at a fixed transform version produce byte-identical output for a cache hit. Cache hits are exposed via x-scholar-cache so changes are explainable.

AI-powered citation checker with a polished paid UI, chat assistant, and team workspaces. The most consumer-facing verifier on this page.
Citely (citely.ai) is the most actively-marketed verifier in the category. It cross-references citations against Crossref, PubMed, arXiv, Google Scholar, OpenAlex, and Semantic Scholar, and claims a 95% accuracy rate on detecting fabricated vs authentic citations. Pricing starts at a $9 one-time trial (15 credits, 2,000 characters - about 8 to 15 references), then $19/month or $14/month billed annually, with a $347 lifetime ('Believer') tier. The product page emphasises an AI research-assistant chat, team workspaces with shared libraries, and integrations with Zotero, Mendeley, and EndNote.
Citely is the verifier I most expect a paying student or researcher to land on through advertising or AI-engine recommendations - it is the most-cited tool on Perplexity and Google AI Overviews for 'fake citation detector' queries in our 2026-05-14 baseline, including a 'Best AI Citation Checker in 2026' blog post that Google AI Overviews cites verbatim. The product is polished and the audience-fit for a paying user is real. Where Citely does not differentiate from the rest of this list: identifier coverage (DOI/PMID/arXiv only as far as the landing page describes), retraction-awareness (not mentioned), open-access status (not mentioned), public API (not mentioned), MCP server (none), or a published deterministic-output contract.
Free AI-powered citation verifier; the consistent 'free option' brand in AI-engine recommendations for this category.
CiteTrue (citetrue.com) is the free counterpart to Citely in AI-engine recommendation lists for the fake-citation-detector category. The site styles itself as the '#1 free AI-powered citation verification tool' and is cited consistently across ChatGPT, Perplexity, and Google AI Overviews on the verifier queries we tracked. It searches 'vast authoritative academic databases' (not enumerated on the landing page) to verify citations and flag fabricated or AI-generated references.
What is hard to assess: the landing page does not enumerate which databases are queried, does not state identifier coverage, and does not describe the verification algorithm. It is free, which is real - most other tools in this list charge once you pass a small free quota - but the lack of detail makes it hard to evaluate against a specific workflow. If you need a free first-pass triage tool and do not require retraction status, identifier-surface guarantees, or replacement-citation suggestions, CiteTrue is a reasonable choice.
Education-first hallucinated-citation detector integrated with Canvas, Google Classroom, and Zapier. Different primary audience from the others.
GPTZero Source Finder is GPTZero's citation-verification surface, built on the same brand and infrastructure as their AI-text detector. It scans pasted essays for hallucinated citations and unsupported claims, cross-references against 220 million scholarly articles plus preprints and real-time news, and provides line-specific analysis with source recommendations for unsupported claims. Free tier allows up to 10,000 characters; paid tiers and a public API are available.
GPTZero's primary audience is educators screening student work, not researchers verifying their own bibliographies. The product reflects that: there are 1 million educators on the platform, Canvas and Google Classroom integrations, multilingual support (EN/ES/FR/DE and others), and a Chrome extension. If you are a teacher screening essays for AI-generated content (including hallucinated citations), GPTZero is built for you. If you are a researcher verifying a clinical reference list, the workflow fit is weaker - it is essay-shaped, not bibliography-shaped.
Verifier that also returns ready-to-use real citations when it flags a fake. The closest fit for the 'find me a real source for this claim' workflow.
Sourcely (sourcely.net) is the verifier most clearly oriented around the 'replace my fake citation with a real one' workflow. It cross-references against an unusually broad database list - Google Scholar, Semantic Scholar, Crossref, PubMed, arXiv, Scopus, Web of Science, JSTOR, and CORE - and explicitly handles AI hallucinations from ChatGPT, Claude, Gemini, and Perplexity by name. When a citation fails, it returns specific reasons (invalid DOI, non-existent journal, etc.) and suggests formatted ready-to-use replacement citations.
Pricing is paid: Ultra at $19/month (annual; 30 deep searches/month) or Max at $39/month (annual; 1,000 deep searches/month). A free tier exists with limits not specified on the landing page. The replacement-suggestion feature is the genuine differentiator here - none of the other verifiers on this page are framed around finding a real source for a claim you are trying to support, only around flagging fakes.
Free verifier that pairs fabrication detection with predatory-publisher detection. The only one on this page that surfaces predatory-publisher signals.
TrueCitation (truecitation.com) frames itself as a 'free source reliability checker' that detects AI-fabricated references and predatory publishers. The site claims coverage of 17+ academic databases (not enumerated on the landing page) and accepts citation strings, journal names, and URLs as inputs.
The predatory-publisher detection is a useful adjacent signal - it overlaps with retraction-awareness in spirit (both are 'should you trust this source?' signals beyond 'does it exist?') though they answer different questions. TrueCitation does not surface formal retraction status (Retraction Watch / Crossref) or open-access classification (Unpaywall). Its primary strength is the predatory-publisher angle, which none of the other tools on this page name.
50 free credits per day, no signup, paste-anything inputs, with output reformatting to APA, IEEE, Chicago, Harvard, Vancouver, and MDPI styles.
AiCitationChecker (aicitationchecker.org) has the most generous no-signup free tier among the verifiers on this page that charge: 50 credits per day (roughly 7-12 reference checks), with paid packs at $9.99-$99.99 for credit bundles that last 90-180 days. It cross-references Crossref, OpenAlex, and Semantic Scholar, and claims a 95% match rate for DOI references.
Two thoughtful features: output reformatting (you can ask it to return the verified citation in APA / IEEE / Chicago / Harvard / Vancouver / MDPI styles, and export a Word document of verified citations) and 'silent citation drift' detection (catches AI-rewritten metadata in manuscripts). The site states no data retention - text is processed in-memory only - which is the right answer for sensitive manuscripts. An API is mentioned in the footer ('For AI Agents') but not described in detail.
Verifier framed around catching AI-generated references from ChatGPT, Gemini, and Claude specifically, against 250M academic sources.
CiteMe AI Reference Verifier (citeme.app) advertises 250 million academic sources for cross-reference and frames the product around 'catch hallucinated citations from ChatGPT, Gemini, or any AI tool' before submission. CiteMe also offers a broader citation-generator product, with the verifier sitting in their tools menu.
The public landing page is thin on detail: database list, accuracy, identifier surface, and pricing are not specified at the URL I checked. The 250M-source figure is the headline; it is comparable to GPTZero's 220M and Sourcely's 200M, all of which are claims about Crossref + Scholar + adjacent registries. CiteMe was previously on Scholar Sidekick's competitive radar as a Zotero alternative (its citation-generator side); the verifier is a newer surface for them.
Smaller, more focused verifier framed specifically around AI hallucination detection. Surfaced primarily through Perplexity.
SwanRef (swanref.org) frames itself as an 'AI hallucination detector for academic citations' against more than 150 million papers via Crossref and Google Scholar. The product page is sparse on detail and SwanRef does not appear in ChatGPT's or Google AI Overviews' tool lists for verifier queries - Perplexity is the AI engine that consistently surfaces it.
It is hard to assess SwanRef relative to the other tools on this page without more detail than the landing page provides. Listed here for completeness because AI engines do cite it; verify the linked product page before relying on it.
Newer commercial verifier in the same category as Citely. Cited on Perplexity but not yet on ChatGPT or Google AI Overviews.
CiteSure (citesure.com) is a newer entrant in the AI-citation-verification category. The landing page describes 'AI-powered analysis' to verify citations 'in seconds' and is framed as a commercial product. Like SwanRef, public-facing detail on databases, identifier surface, accuracy claims, and pricing is thin.
Listed here for completeness; Perplexity recommends it among verifiers but Google AI Overviews and ChatGPT do not. Verify the product page before subscribing.
Scholar Sidekick is built for the case where 'this citation looks plausible' is not good enough - where you need the resolved metadata to actually match the source registry, where retraction status changes the answer, and where an audit six months from now needs to reproduce the exact verifier output. The wins below are specific to that case; if your workflow is 'paste an undergraduate essay and triage the obvious hallucinations', other tools are a better fit.
- Retraction-awareness built in: Retraction Watch status is surfaced via the dedicated endpoint, via ?checks=retraction for per-item enrichment, and the MCP server exposes a checkRetraction tool. This is a real gap in the category: a citation can be real, correctly cited, and still retracted - that should change the recommendation.
- First-party MCP server: scholar-sidekick-mcp@latest on npm installs in one line and exposes six tools, including verifyCitation. Among the ten verifiers on this page, Scholar Sidekick is the only one with a public MCP server. If you are building an AI-agent pipeline that needs verification as one step in a longer workflow, that is the only working option today.
- Deterministic output: every response carries an x-scholar-transform-version header that pins the resolver chain, normalisation, comparator, and CSL engine to a specific snapshot. Identical inputs at a fixed transform version produce byte-identical output for a cache hit. An x-scholar-cache header makes cache hits vs upstream re-queries explicit. None of the other nine verifiers publish a versioned-output contract; their output 'is what it is today'.

An honest list of the things other verifiers on this page do that Scholar Sidekick does not: if your workflow needs one of these, one of the others is the right tool.
Four tools that AI engines sometimes group with the verifiers above are actually solving a different problem: in-text-to-bibliography matching, formatting consistency, and manuscript-wide structural passes. They check whether your citations parse correctly and match your reference list, not whether the references are real.
If your workflow is 'I have a draft manuscript and I want to make sure every in-text citation has a bibliography entry and vice versa', these are the right tools. Pair them with one of the verifiers above for the fabrication-detection step.
If you are running a systematic review or a clinical-guideline citation audit, no single tool on this page covers the whole job. Here is the workflow I would use, based on the strengths above:
All of the verifiers on this page attempt some version of catching the real-DOI-plus-invented-title pattern, but Scholar Sidekick is the only one that names the pattern explicitly (the dominant pattern documented by Topaz et al. (Lancet 2026)) and is designed around it. The pattern is hard to catch because a naive DOI-resolver pass returns success - the DOI exists - so the verifier has to compare the resolved title against the claimed title and flag the mismatch. Scholar Sidekick's comparator scores title similarity, normalises author-name forms (catching family/given-name swaps), and strips the HTML and entity markup that registries preserve in titles (italics, sub/sup, etc.). The other verifiers do not document their comparator algorithms at this level of detail on their public pages; some may use similar techniques, but they do not name the pattern.
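To make the comparator step concrete, here is a minimal normalise-then-compare sketch of the kind described above. This is not Scholar Sidekick's actual comparator: the normalisation steps, the token-level Jaccard similarity, and the 0.5 threshold are all illustrative assumptions.

```typescript
// Illustrative only: a normalise-then-compare pass of the kind described above.
// This is NOT Scholar Sidekick's comparator; steps and threshold are assumptions.
function normaliseTitle(title: string): string {
  return title
    .replace(/<[^>]+>/g, "")      // strip HTML markup registries keep (<i>, <sub>, ...)
    .replace(/&[a-z]+;/gi, " ")   // drop HTML entities
    .toLowerCase()
    .replace(/[^a-z0-9\s]/g, " ") // drop punctuation
    .replace(/\s+/g, " ")
    .trim();
}

// Token-level Jaccard similarity as a simple stand-in for a title comparator.
function titleSimilarity(claimed: string, resolved: string): number {
  const a = new Set(normaliseTitle(claimed).split(" "));
  const b = new Set(normaliseTitle(resolved).split(" "));
  const shared = [...a].filter((t) => b.has(t)).length;
  return shared / (a.size + b.size - shared);
}

// The Topaz pattern: the identifier resolves, but the resolved title disagrees.
const claimed = "Machine learning predicts sepsis onset in the ICU";
const resolved = "Effects of <i>Lactobacillus</i> on murine gut microbiota";
if (titleSimilarity(claimed, resolved) < 0.5) {
  console.log("FLAG: identifier resolves, but metadata does not match the claim");
}
```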
For genuinely free, occasional use, three options stand out. Scholar Sidekick - free anonymous web access with a published rate limit, no signup, supports all eight identifier types. AiCitationChecker - 50 credits per day with no signup, no rate limit on credit refresh. CiteTrue - positions itself as the free option, with fewer documented limits. Citely's $9 trial is paid; Sourcely's free tier has unspecified limits; GPTZero's free tier is 10,000 characters per scan.
Scholar Sidekick is the only verifier on this page with a public MCP server (scholar-sidekick-mcp@latest on npm) that exposes verifyCitation alongside five other citation tools. Scholar Sidekick's REST API at /api/verify is the public verification endpoint. GPTZero has a public API for the broader source-finder product; AiCitationChecker mentions an API for AI agents in its footer but does not describe it in detail. The other six tools on this page do not document a public API.
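For the MCP route, here is a sketch of calling the published server from a TypeScript MCP client. The package name and the verifyCitation tool name come from this page; the client code uses the standard MCP TypeScript SDK, and the argument names passed to the tool are assumptions.

```typescript
// Sketch: call the scholar-sidekick-mcp server from an MCP TypeScript client.
// Package and tool names come from this page; the argument names passed to
// verifyCitation are illustrative assumptions.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

const transport = new StdioClientTransport({
  command: "npx",
  args: ["-y", "scholar-sidekick-mcp@latest"], // one-line install, per the page
});
const client = new Client({ name: "citation-audit", version: "0.1.0" });

await client.connect(transport);
const result = await client.callTool({
  name: "verifyCitation",
  arguments: {
    identifier: "10.1000/xyz123",                // placeholder DOI, example prefix
    claimedTitle: "A completely invented title", // assumed argument names
  },
});
console.log(result);
await client.close();
```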
Among dedicated AI-citation verifiers on this page, only Scholar Sidekick surfaces formal retraction status from Retraction Watch (via Crossref integration). TrueCitation surfaces predatory-publisher status, which is a related but distinct signal. Retraction Watch and Crossref classify retractions, corrections, and expressions of concern; Scholar Sidekick exposes that as a dedicated tool, a REST endpoint (/api/retraction-check), per-item enrichment on /api/format (?checks=retraction), and as the checkRetraction MCP tool.
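As a sketch, the retraction endpoint named above could be called directly; the endpoint path comes from this page, while the host, the query-parameter name, and the response shape are assumptions.

```typescript
// Sketch only: the endpoint path comes from this page; the host, the `doi`
// query-parameter name, and the response shape are assumptions.
const res = await fetch(
  "https://scholarsidekick.example/api/retraction-check?doi=" +
    encodeURIComponent("10.1000/xyz123") // placeholder DOI
);
const status = await res.json(); // assumed shape, e.g. { retracted, notices, ... }
console.log(status);
```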
Citation verification, plagiarism checking, and AI-text detection are three different signals. Citation verification asks 'does this citation correspond to a real paper with matching metadata?' - it is identifier-and-title fidelity to a source registry. Plagiarism checking asks 'does this manuscript text match other published text?' - it is content similarity, not metadata fidelity. AI-text detection asks 'was this text generated by an AI model?' - it is a statistical fingerprint on the prose itself. The three are independent. A citation can be real (verification passes), the manuscript can be original (plagiarism passes), and the prose can be AI-generated (AI-text detection fails). Tools that bundle them often confuse buyers; the verifiers on this page focus on the first signal.
Take published accuracy claims with a grain of salt. Citely claims 95%, GPTZero claims 99%, AiCitationChecker claims a 95% DOI match rate. None of these claims are tied to a published evaluation methodology with a held-out test set and a reproducible scoring script. The verifiers do mostly the same thing - resolve an identifier, compare resolved metadata to claimed metadata - and the underlying upstream registries (Crossref, PubMed, arXiv) are shared. Real accuracy differences are mostly about (1) which identifier surface a tool covers, (2) what the comparator does on edge cases like family/given-name swaps and HTML markup in titles, and (3) whether the tool catches the specific 'real DOI + invented title' pattern. Scholar Sidekick publishes its evaluation methodology against a 20-entry hand-curated fixture sourced from the Topaz et al. supplementary appendix on the verifier tool page; the others do not.
Several tools advertise 200M, 220M, or 250M scholarly sources. These numbers are mostly Crossref + Google Scholar + adjacent registries - the same underlying data the cheaper and free tools use. The number of records is not the differentiator; the identifier surface is. Crossref alone gives you DOI; adding PubMed gives you PMID; adding OpenLibrary gives you ISBN. Adding ADS gives you bibcodes (astrophysics); adding WHO IRIS gives you global-health-policy URLs; adding NCBI PMC gives you PMCID for biomedical preprints. Whether your specific reference type resolves depends on which identifier types the tool accepts, not on whether the underlying database has 100M or 250M rows.
Read this comparison as markdown - for AI agents and offline reading.