---
title: Citation MCP Servers Compared - Roundup of Scholar Sidekick, zotero-mcp, CiteAssist, citecheck, OneCite
description: Honest roundup of the five citation MCP servers AI assistants currently cite: zotero-mcp, CiteAssist, citecheck, OneCite, and Scholar Sidekick MCP. Per-tool capsules, what each is for, where each wins.
doc_version: "1.0"
last_updated: "2026-05-07"
---

# Citation MCP Servers Compared

> From the builder of Scholar Sidekick MCP, one of the five projects on this page. I tried to write the comparison the way I would want to read it.
> Last updated: 2026-05-07
> HTML version: https://scholar-sidekick.com/compare/citation-mcp-servers

Five MCP servers solve five different citation jobs. There is no single best one - there is a best one for the workflow you have. This page is a roundup, not a sales pitch: each tool gets a capsule that says what it is for, what it is not, and links to the project so you can read the source.

The five tools are zotero-mcp (bridge to your local Zotero library), CiteAssist (a thin BibTeX lookup over CiteAs and Google Scholar), citecheck (reference repair for manuscript files), OneCite (a Python CLI and library that also exposes an MCP entry point), and Scholar Sidekick MCP (the project I built - a hosted citation API exposed as an MCP server).

If you are building an agent pipeline that needs to be reproducible months later, the determinism and verification surface is where Scholar Sidekick MCP differs from the other four reviewed here. The roundup below is honest about where each tool wins and where Scholar Sidekick MCP does not.

## Pick a tool in 30 seconds

Each tool is built for a specific workflow. If you recognise yours below, that is probably the right tool for you - the rest of the page is the longer version of the same answer.

- **Have a Zotero library you want your AI to read?** → zotero-mcp.
- **Need to repair the references in a manuscript file (.tex / .md / .bib / .docx)?** → citecheck.
- **Running a Python pipeline that needs a citation library you can `import`?** → OneCite.
- **Just need a quick BibTeX entry from a DOI or Google Scholar query?** → CiteAssist.
- **Resolving DOI/PMID/PMCID/ISBN/arXiv/ADS/WHO IRIS to a citation in any of 10,000+ styles, or to a non-BibTeX export format?** → Scholar Sidekick MCP.

Scholar Sidekick MCP is not the right choice if you need offline / fully on-machine processing, Zotero-library access, Google Scholar lookup, manuscript-file rewriting, or a Python library you can import - in those cases one of the other four wins cleanly.

## When to use which

| If you want to... | Reach for | Why |
| --- | --- | --- |
| Give your AI access to your existing Zotero library | zotero-mcp | 30+ tools for collection, annotation, and library-management operations on a real Zotero install. |
| Get a BibTeX entry from a DOI in five seconds | CiteAssist | Two tools, one job, BibTeX out - no setup beyond the MCP config. |
| Repair the broken references in a manuscript file (.tex, .md, .bib, .docx) | citecheck | Reads the file, validates references against PubMed/Crossref/arXiv/Semantic Scholar, rewrites in place. |
| Run citation processing inside a Python pipeline | OneCite | CLI + Python library + MCP triad; multi-source validation; self-contained on your machine. |
| Resolve PMID, PMCID, ISBN, ADS bibcode, or WHO IRIS URLs (not just DOI/arXiv) | Scholar Sidekick MCP | Identifier breadth across science, medicine, and books - the long tail other tools do not cover. |
| Format citations in Vancouver, AMA, APA, IEEE, CSE, or any of 10,000+ CSL styles | Scholar Sidekick MCP | The only one of the five with output-style flexibility; the other four are BibTeX-centric. |
| Export to RIS, EndNote XML, MEDLINE/NBIB, RefWorks, Zotero RDF, CSL JSON, or CSV | Scholar Sidekick MCP | Other four output BibTeX (sometimes also JSON); none cover the reference-manager interchange formats. |
| Build a reproducible citation pipeline an auditor can re-run | Scholar Sidekick MCP | Versioned outputs (x-scholar-transform-version header), request-id provenance, declared upstream fallback order in /.well-known/sources.json. |
| Run everything on your own machine without trusting an intermediate hosted service | OneCite or citecheck | Both call upstream backends (Crossref, PubMed, arXiv, etc.) directly from your install. Scholar Sidekick MCP and CiteAssist call hosted intermediaries. |

## At a glance

| Tool | What it is | Tools | Backends | Output | Install |
| --- | --- | --- | --- | --- | --- |
| zotero-mcp | Bridges your local Zotero library to AI assistants | 30+ | Local Zotero 7+ (or web API), optional embeddings | BibTeX, markdown | Python; local Zotero install required |
| CiteAssist (= ndchikin/reference-mcp) | Thin BibTeX lookup over CiteAs and Google Scholar | 2 | CiteAs, Google Scholar | BibTeX | Python (uv) |
| citecheck | Repairs references in existing manuscript files | 6 | PubMed, Crossref, arXiv, Semantic Scholar | JSON, BibTeX, numbered text | Node.js (npx) |
| OneCite | Python CLI + library; also exposes an MCP entry point | 3 | Crossref, Semantic Scholar, PubMed, arXiv, DataCite, Zenodo, Google Books | BibTeX | Python (pip) |
| Scholar Sidekick MCP | Hosted citation API exposed as a stdio MCP server | 3 | Hosted backend (Crossref, PubMed, arXiv, ISBN, ADS, WHO IRIS, DataCite) | 10,000+ CSL styles in text/HTML/JSON; BibTeX, RIS, CSL JSON, EndNote XML, EndNote Refer, RefWorks, MEDLINE/NBIB, Zotero RDF, CSV, plain text | Node.js (npx); RapidAPI key (free tier available) |

## The five tools

*Each capsule below reflects what I found in the project's README and public listings on 2026-05-07. Tool counts, supported identifiers, output formats, and install instructions can change - verify against the linked GitHub repos before integrating.*

### zotero-mcp - bridge to your local Zotero library

*An MCP server that exposes your existing Zotero 7+ library to AI assistants - search, read, annotate, manage.*

[zotero-mcp](https://github.com/54yyyu/zotero-mcp) is the MCP for people who already use Zotero. It connects to either a local Zotero 7+ install (via the local API) or to the Zotero web API with credentials, and exposes the library to MCP-compatible clients through 30+ tools covering search, collections, tags, annotations, notes, full-text access, and library management.

It includes optional semantic-search support via ChromaDB and OpenAI/Gemini embeddings, can pull annotations and notes out of PDFs (when paired with the Better BibTeX plugin), and integrates with the Scite API for citation intelligence and retraction alerts. Output is BibTeX or markdown; this is a library-management tool first, a citation-format tool second.

Where it wins: depth of integration with an existing Zotero workflow. Where it does not: it is not a citation-format engine and it does not resolve identifiers from outside your library. If you do not already have a Zotero library, this is not the tool for you.

### CiteAssist - thin BibTeX lookup over CiteAs and Google Scholar

*Two MCP tools, one job: send a DOI, URL, or query; get a BibTeX entry back. Same project as ndchikin/reference-mcp.*

[CiteAssist](https://github.com/ndchikin/reference-mcp) is the smallest tool on this page. It exposes two MCP tools (`get_citeas_data`, `get_scholar_data`) that proxy through to the [CiteAs](https://citeas.org) service and Google Scholar respectively, returning BibTeX-formatted citation data. The GitHub repo is `ndchikin/reference-mcp`; the project is marketed as `CiteAssist` on Smithery, PulseMCP, and LobeHub - same code, two names.

It is unopinionated and minimal. There is no identifier normalisation, no style flexibility, no output beyond BibTeX, and no batch endpoint. If your job is `give me the BibTeX for this DOI`, that is exactly what you get.

Where it wins: simplicity and the Google Scholar long tail (Scholar surfaces conference papers, book chapters, and humanities sources Crossref does not always catch). Where it does not: anything beyond BibTeX; anything that needs determinism or provenance; anything at scale.

Has an accompanying peer-reviewed paper: [Kaesberg et al., "CiteAssist: A System for Automated Preprint Citation and BibTeX Generation"](https://arxiv.org/abs/2407.03192), accepted at the SDProc workshop @ ACL 2024.

### citecheck - reference repair for manuscript files

*Scans a manuscript folder, finds broken references, validates them against multiple backends, rewrites in place.*

[citecheck](https://github.com/jhlee0619/citecheck) takes a different shape from the others. Instead of resolving identifiers in a chat or pipeline, it operates on existing manuscript files. It reads `.tex`, `.md`, `.bib`, `.txt`, and `.docx`, extracts references from mixed document sections, validates each one against PubMed, Crossref, arXiv, and Semantic Scholar, and writes corrected references back into the source file.

Its six tools (`scan_workspace`, `analyze_references`, `plan_reference_rewrite`, `apply_reference_rewrite`, `repair_paper`, `citecheck_version`) are agent-shaped: planning is separated from application so an AI can review the proposed rewrites before committing them. Installation is `npx -y @jhlee0619/citecheck` for Claude or Codex; no Python toolchain required.

Where it wins: among the five reviewed here, it is the only one I found that is built for manuscript-file repair as a primary workflow. Where it does not: it does not provide on-demand citation generation outside that workflow, and it has no style flexibility (BibTeX-shaped output).

Has an accompanying [arXiv preprint](https://arxiv.org/abs/2603.17339) (Lee, 2026; preprint, not peer-reviewed at time of writing) describing the system as a software paper.

### OneCite - Python CLI and library that also exposes an MCP entry point

*A self-contained Python toolkit for citation management with multi-source validation; ships an MCP server as a secondary entry point.*

[OneCite](https://github.com/HzaCode/OneCite) is the most feature-rich of the other four tools reviewed here. Its README pitches it as a CLI tool and Python library first; the MCP server (`onecite-mcp`) is a secondary integration. It accepts mixed inputs (DOIs, paper titles, arXiv IDs, PMIDs, ISBNs, GitHub URLs, Zenodo DOIs, plain text, BibTeX files), runs them through a four-stage pipeline (clean, query, validate, format) against eight backends (Crossref, Semantic Scholar, PubMed, arXiv, DataCite, Zenodo, Google Books, optionally Google Scholar), and produces formatted bibliographic entries.

It is the most credible of the four when the goal is citation accuracy: multi-source validation acts as a deliberate firewall against AI-generated inaccuracies, and the disambiguation mode lets you pick the right match interactively when fuzzy-matching is ambiguous. Three MCP tools are exposed: `cite`, `batch_cite`, and `search`.

Where it wins: identifier breadth (closest of the four to Scholar Sidekick MCP), self-contained Python install, and the ability to import OneCite as a library inside your own script. Where it does not: the README I read on 2026-05-07 documented BibTeX as the primary output (some third-party listings mention APA and MLA - verify against current docs); regardless, it does not currently expose the 10,000+ CSL style flexibility or the reference-manager interchange formats (RIS, EndNote XML, RefWorks, MEDLINE/NBIB, Zotero RDF) that Scholar Sidekick MCP does.

### Scholar Sidekick MCP - hosted citation API exposed as a stdio MCP server

*An MCP server that wraps our public identifier-resolution and citation-formatting API, with versioned outputs and full provenance headers.*

[Scholar Sidekick MCP](https://github.com/mlava/scholar-sidekick-mcp) installs in one line - `npx -y scholar-sidekick-mcp@latest` - and exposes three tools: `resolveIdentifier`, `formatCitation`, and `exportCitation`. It accepts DOIs, PMIDs, PMCIDs, ISBNs, ISSNs, arXiv IDs, ADS bibcodes, and WHO IRIS URLs, formats output in any of 10,000+ CSL styles plus five hand-tuned builtins (Vancouver, AMA, APA, IEEE, CSE), and exports to BibTeX, RIS, CSL JSON, EndNote XML, EndNote Refer, RefWorks, MEDLINE/NBIB, Zotero RDF, CSV, or plain text.

Unlike OneCite and citecheck, it does not contact upstream backends from your machine. The MCP package is a thin client over the hosted API at `scholar-sidekick.com`, which fans out to the upstream sources. That trade-off costs you data sovereignty (you trust our hosted service to make the upstream calls) and gains you a small install footprint, no Python dependency, no local Zotero, and a single stable endpoint to integrate against.

Among the five reviewed here, it is the one I built specifically for reproducible agent pipelines. Every response carries an [`x-scholar-transform-version`](https://scholar-sidekick.com/verification) header that pins the resolver chain, normalisation, formatter, and CSL engine to a specific snapshot, plus an `x-request-id` for traceability and an `x-scholar-cache` header that exposes cache hit/miss so you can tell when output came from a pinned snapshot vs a fresh upstream fetch.

- **Authentication.** Requires a [RapidAPI](https://rapidapi.com/scholar-sidekick-scholar-sidekick-default/api/scholar-sidekick) key passed via the `RAPIDAPI_KEY` environment variable. Free tier available; paid plans scale to 500K requests/month. Among the five reviewed here, it is the only one I found that requires an external account.
- **Backend dependency.** The MCP server calls a hosted endpoint at `scholar-sidekick.com`. If you require fully on-machine processing with no intermediate service, OneCite and citecheck are the better choices.
- **Determinism (with one important nuance).** The transform_version pins our processing chain - normalisation, formatter, CSL engine - so identical bytes go in and identical bytes come out for a cache hit. On a cache miss the upstream sources (Crossref, PubMed, etc.) are re-queried, and if they have legitimately updated their record (a corrected author name, a retraction notice) the bytes will reflect that change. The `x-scholar-cache` header makes the difference visible. See [/verification](https://scholar-sidekick.com/verification) for the worked examples.
- **No accompanying academic paper (yet).** Two of the other four projects on this page have arXiv submissions (CiteAssist peer-reviewed at SDProc @ ACL 2024; citecheck a preprint). Scholar Sidekick MCP does not. The system, contracts, and provenance manifest are documented at [/verification](https://scholar-sidekick.com/verification) and [/.well-known/sources.json](https://scholar-sidekick.com/.well-known/sources.json), but there is no peer-reviewed citation for the system itself.

## Where Scholar Sidekick MCP wins (relative to the other four)

Scholar Sidekick MCP is built for agents and scripts that need formatted citations on demand from arbitrary identifiers. The places it pulls ahead of the other four are exactly the places they are not designed for:

- **Identifier breadth.** PMID, PMCID, ISBN, ISSN, ADS bibcode, and WHO IRIS URL - in addition to DOI and arXiv. Covers the clinical, medical, library-science, astronomy, and global-health long tail. zotero-mcp resolves what is already in your library; CiteAssist takes DOI/URL/keyword; citecheck and OneCite cover most identifiers but not ADS or WHO IRIS.
- **Style flexibility.** 10,000+ CSL styles via `citeproc-js`, plus five hand-tuned builtins (Vancouver, AMA, APA, IEEE, CSE). Among the five reviewed here, this is the only one I found that produces styled prose; the other four are BibTeX-centric.
- **Output format breadth.** RIS, CSL JSON, EndNote XML, EndNote Refer, RefWorks, MEDLINE/NBIB, Zotero RDF, CSV, plain text - in addition to BibTeX. Covers the reference-manager interchange formats the other four do not currently document.
- **Single `npx` install with no toolchain to manage.** No Python, no local Zotero, no compile step. citecheck has the same install shape; the others require Python.
- **Determinism and verification.** Versioned outputs (`x-scholar-transform-version`), request-id provenance, public `.well-known/sources.json` manifest, dedicated [/verification](https://scholar-sidekick.com/verification) page. I did not find an equivalent surface published on the other four, though the underlying tools can of course add one.
- **Streaming batch.** The underlying API exposes an NDJSON streaming endpoint that emits one JSON object per identifier - suitable for processing thousands of references without buffering the whole response.
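The NDJSON shape means a client can act on each result as its line arrives instead of buffering the full response. A minimal Python sketch of that consumption pattern, run against a simulated stream (the field names in the sample objects are illustrative, not the API's documented schema):

```python
import io
import json

def consume_ndjson(stream):
    """Yield one parsed JSON object per non-blank line of an NDJSON
    stream -- no whole-response buffer is ever built."""
    for line in stream:
        line = line.strip()
        if line:  # NDJSON permits blank separator lines
            yield json.loads(line)

# Simulated response body: one JSON object per identifier, as the
# streaming batch endpoint is described to emit. Field names are
# placeholders for illustration.
body = io.StringIO(
    '{"identifier": "10.1000/demo1", "status": "resolved"}\n'
    '{"identifier": "10.1000/demo2", "status": "resolved"}\n'
)

results = list(consume_ndjson(body))
print(len(results))  # 2
```

Against the live endpoint you would iterate over the HTTP response body line by line in the same way, which is what keeps memory flat even across thousands of identifiers.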

## Where Scholar Sidekick MCP does not win

Honest accounting matters more on a roundup than on a brand-vs-brand page. Each of the other four has a job Scholar Sidekick MCP cannot do:

- **vs zotero-mcp - no library access.** We do not read your Zotero library, your annotations, your notes, or your collections. If your AI needs to work with what you have already saved, zotero-mcp is the right tool.
- **vs CiteAssist - no Google Scholar.** Our resolver chain does not include Scholar, which covers some humanities and conference-paper long tail Crossref misses. For those queries, CiteAssist is the better fit.
- **vs citecheck - no manuscript-file workflow.** We do not scan, repair, or rewrite references inside `.tex`/`.md`/`.bib`/`.docx` files. citecheck is purpose-built for that loop.
- **vs OneCite - not an importable library.** OneCite can be `pip install`-ed and imported into a Python script (`from onecite import ...`); Scholar Sidekick MCP is MCP-only on the install side. (You can call our REST API from any language, but that is not the same as importing a library.)
- **Account requirement.** Among the five reviewed here, Scholar Sidekick MCP is the only one I found that requires an external account - a [RapidAPI](https://rapidapi.com/) subscription, free tier available. The other four require nothing beyond their direct upstream services (Zotero local API for zotero-mcp; nothing for CiteAssist/citecheck/OneCite).
- **Backend dependency.** Calling our hosted API trades a local-data-sovereignty story for the install simplicity. If you need every byte of upstream traffic to come from your own machine, OneCite and citecheck are the better choices.

## Determinism and verification: the agent-pipeline differentiator

If your agent pipeline runs once today and once in six months, ideally the answer is the same both times - or, if it has changed, you can see exactly why. Among the five reviewed here, I did not find a published versioned-output contract on zotero-mcp, CiteAssist, citecheck, or OneCite: zotero-mcp returns whatever your library has today, CiteAssist returns whatever CiteAs and Scholar return today, and citecheck and OneCite validate against multiple backends without pinning the formatter or normalisation chain. Scholar Sidekick MCP makes reproducibility part of the response contract.

Two important honest caveats. First, the contract pins our processing chain (resolver, normalisation, formatter, CSL engine), not the upstream metadata itself. Crossref, PubMed, and the others can and should update their records (corrected author names, retraction notices, new identifiers). On a cache miss those updates will be reflected in the next response. The `x-scholar-cache` header makes that visible: a cache hit guarantees byte-identical output; a miss may legitimately differ. Second, an MCP server is open-source code and other projects could add equivalent surfaces tomorrow - this is the state I found in the public repos on 2026-05-07.

- **`x-scholar-transform-version` header.** Pins our resolver chain, normalisation, formatter, and CSL engine to a specific snapshot. Identical inputs at a fixed transform version produce byte-identical output for a cache hit.
- **`x-request-id` header.** Every response is traceable to a specific call - useful for log correlation, support tickets, and audit trails.
- **`x-scholar-cache` header.** Visible cache hit/miss so you can tell whether the response came from a pinned snapshot (byte-identical) or a fresh upstream fetch (may reflect upstream metadata changes).
- **[`/.well-known/sources.json`](https://scholar-sidekick.com/.well-known/sources.json) manifest.** Declared resolver chain, fallback order per identifier type, allowlisted upstream hosts, and network-safety guarantees. Read what we do before integrating.
- **[`/verification`](https://scholar-sidekick.com/verification) page.** Copy-paste curl commands and expected outputs that let an external evaluator independently confirm determinism and edge-case behaviour against the live API.
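Taken together, the three headers let a pipeline answer "why did this output change?" mechanically. A sketch of that triage logic, using plain dicts to stand in for HTTP response headers (the header names are from the documented contract above; the version strings and cache values here are illustrative):

```python
def explain_difference(resp_a, resp_b):
    """Classify why two responses for the same input might differ,
    using only the documented provenance headers."""
    if resp_a["x-scholar-transform-version"] != resp_b["x-scholar-transform-version"]:
        return "processing chain changed between calls"
    if "miss" in (resp_a["x-scholar-cache"], resp_b["x-scholar-cache"]):
        return "possible upstream metadata change (at least one cache miss)"
    return "byte-identical output expected (same version, both cache hits)"

a = {"x-scholar-transform-version": "2026-04-01", "x-scholar-cache": "hit"}
b = {"x-scholar-transform-version": "2026-04-01", "x-scholar-cache": "miss"}
print(explain_difference(a, b))
# -> possible upstream metadata change (at least one cache miss)
```

The point of the sketch is the ordering: a transform-version change explains any difference outright; otherwise the cache header tells you whether a fresh upstream fetch could have introduced one.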

## Use multiple together

Most agent setups can register more than one MCP server at once. The five tools on this page compose well because they solve different jobs:

- **zotero-mcp + Scholar Sidekick MCP.** zotero-mcp gives the agent search and read access to your saved library; Scholar Sidekick MCP turns any DOI - whether it came from your library or not - into a Vancouver, AMA, or APA citation, or into a BibTeX/RIS/EndNote-XML export file.
- **citecheck + Scholar Sidekick MCP.** citecheck repairs the references inside your manuscript file. Scholar Sidekick MCP then converts the cleaned-up reference list into a non-BibTeX target style (RIS for EndNote, CSL JSON for Pandoc, MEDLINE/NBIB for PubMed import) that citecheck does not produce.
- **OneCite + Scholar Sidekick MCP.** OneCite is the right tool for batch BibTeX inside a Python pipeline you control end-to-end. Scholar Sidekick MCP is the right tool when the same pipeline needs styled citation prose, an [RIS file](https://scholar-sidekick.com/tools/doi-to-ris), or any output beyond BibTeX.
- **CiteAssist + Scholar Sidekick MCP.** CiteAssist gives the agent a Google Scholar search lane Scholar Sidekick MCP does not have. Scholar Sidekick MCP gives the agent identifier breadth and style flexibility CiteAssist does not have. Run both; let the agent pick.
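As a concrete example, the citecheck pairing can be registered side by side in a Claude Desktop `claude_desktop_config.json`. This sketch uses the two `npx` install commands from the capsules above; the server labels are arbitrary, and you should check each project's README and your client's MCP documentation for current flags and file location:

```json
{
  "mcpServers": {
    "citecheck": {
      "command": "npx",
      "args": ["-y", "@jhlee0619/citecheck"]
    },
    "scholar-sidekick": {
      "command": "npx",
      "args": ["-y", "scholar-sidekick-mcp@latest"],
      "env": { "RAPIDAPI_KEY": "<your-key>" }
    }
  }
}
```

Because the two servers' tool names do not collide, the agent sees both tool sets and routes each request to whichever fits.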

## Frequently asked questions

### Are all five tools really MCP servers in the same way?

Mostly, with one footnote. zotero-mcp, CiteAssist, citecheck, and Scholar Sidekick MCP are all primary MCP servers - that is the main way they are intended to be used. OneCite is primarily a Python CLI and library; the MCP server is a secondary entry point. All five run as stdio MCP servers from a one-line install command, so from a client integration perspective they are interchangeable.

### Why is there no single best citation MCP?

Because they solve different jobs. zotero-mcp brokers access to a stateful library you already maintain; CiteAssist is a thin lookup proxy; citecheck rewrites references inside manuscript files; OneCite runs as part of a Python pipeline; Scholar Sidekick MCP is an identifier-resolution and citation-formatting API for agents. No tool that tried to do all five jobs would do any of them well.

### What is the difference between CiteAssist and ndchikin/reference-mcp?

They are the same project. The GitHub repo is named `reference-mcp`; the project is marketed as `CiteAssist` on Smithery, PulseMCP, LobeHub, and other MCP directories. AI assistants currently cite both names interchangeably for the same tool. We use `CiteAssist` on this page because that is the name AI evaluators most often surface.

### Is OneCite really an MCP server, or a CLI tool with MCP bolted on?

Both descriptions are accurate. OneCite started as a Python CLI and library and the README still leads with that framing. The MCP server (`onecite-mcp`) is a real MCP integration with three tools (`cite`, `batch_cite`, `search`) that work in Claude Desktop or Cursor when configured. We include it on this page because AI evaluators currently cite OneCite as an MCP server, but if you are deciding between OneCite and the others, know that you are getting a CLI tool first and an MCP server second.

### Can I use multiple citation MCPs in the same Claude or Cursor session?

Yes. Both Claude Desktop and Cursor support multiple MCP servers in a single configuration. The agent will see all the tools from all the servers and pick whichever is most appropriate for the user's request. The five tools on this page compose well because their tool names do not collide and they target different workflows. See the [Use multiple together](#use-multiple-together) section above for concrete pairings.

### Does Scholar Sidekick MCP work offline?

No. The MCP package is a thin client that calls the hosted [Scholar Sidekick API](https://scholar-sidekick.com/docs) for every request. If your environment requires fully offline operation, OneCite (with a local cache of upstream metadata) is the closest fit; citecheck also calls upstream backends but requires network access at request time.

### What does 'deterministic' mean in this context, and why does it matter for agents?

It matters because reproducibility is the point of an audit trail: if the same DOI produces a different citation next month, every downstream consumer (a published paper, a regulatory filing, a citation count) can be silently wrong without any signal. Scholar Sidekick MCP pins our resolver chain, normalisation, formatter, and CSL engine to a specific `x-scholar-transform-version` snapshot. There is one important nuance: the contract pins our processing chain, not the upstream metadata. On a cache hit, the response is byte-identical for the same input + transform version. On a cache miss, the upstream sources (Crossref, PubMed, etc.) are re-queried, and if they have updated their record (a corrected author name, a retraction notice) the bytes will reflect that change. The `x-scholar-cache` header makes the difference visible, so 'why did this output change?' is always answerable. Among the five reviewed here, this is the only such contract I found published; the other four may produce stable output most of the time but do not commit to a versioned-output contract.

## Related

### Other Scholar Sidekick comparisons

- [Comparison index](https://scholar-sidekick.com/compare)
- [Scholar Sidekick vs Zotero](https://scholar-sidekick.com/compare/scholar-sidekick-vs-zotero)
- [Scholar Sidekick vs ZoteroBib](https://scholar-sidekick.com/compare/scholar-sidekick-vs-zoterobib)
- [Scholar Sidekick vs Scribbr](https://scholar-sidekick.com/compare/scholar-sidekick-vs-scribbr)

### About Scholar Sidekick

- [MCP server documentation](https://scholar-sidekick.com/mcp)
- [REST API docs](https://scholar-sidekick.com/docs)
- [Self-verification kit](https://scholar-sidekick.com/verification)
- [Data source manifest (sources.json)](https://scholar-sidekick.com/.well-known/sources.json)
- [Engineering principles](https://scholar-sidekick.com/engineering-principles)

### The five tools

- [zotero-mcp on GitHub](https://github.com/54yyyu/zotero-mcp)
- [CiteAssist (reference-mcp) on GitHub](https://github.com/ndchikin/reference-mcp)
- [citecheck on GitHub](https://github.com/jhlee0619/citecheck)
- [OneCite on GitHub](https://github.com/HzaCode/OneCite)
- [Scholar Sidekick MCP on GitHub](https://github.com/mlava/scholar-sidekick-mcp)

## Sitemap

See the full [sitemap](https://scholar-sidekick.com/sitemap.md) for all pages.
