SciRouterVet

Comparative oncology atlas · methodology

How we build the atlas — and what's still hand-curated.

The atlas pairs canine cancers with their human, feline, and equine analogs and surfaces shared driver genes and pathways. The goal: a quick, defensible lookup a practising vet can use in a clinic room. This page explains where every entry comes from, and which parts are still curated by hand versus retrieved by embedding.

Sourcing

Veterinary oncology primary literature.

Each entry is anchored in the canine veterinary oncology literature — Vail, Withrow, Page; J Vet Intern Med; Vet Comp Oncol; the comparative oncology consortium output from NCI COTC. Cross-species analogies are drawn from human oncology primary literature where the shared molecular biology is established (canine OSA ↔ pediatric OSA; canine oral melanoma ↔ human mucosal melanoma; canine MCT ↔ systemic mastocytosis on the KIT axis; canine DLBCL ↔ human DLBCL).

We are conservative. An analog only ships when there is at least one peer-reviewed cross-species paper supporting the parallel. "Maybe related" pairings stay in our draft pile.

Curation today

Hand-curated by the SciRouter team.

Every entry currently visible at /clinicians/atlas was written by SciRouter staff and reviewed against the source literature. Each entry includes a slug, the canine name, named analogs with similarity tier ("very high / high / moderate / low"), the key driver gene panel, the shared pathway panel, and a comparative-notes paragraph.
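As an illustrative sketch of that entry shape (the class and field names below are ours, not the production schema), the structure could be modelled like this:

```python
from dataclasses import dataclass

# Hypothetical sketch of an atlas entry. Field names and example values
# are illustrative only, not the real /clinicians/atlas data model.
@dataclass
class Analog:
    species: str      # e.g. "human", "feline", "equine"
    condition: str    # named analog condition
    tier: str         # "very high" | "high" | "moderate" | "low"

@dataclass
class AtlasEntry:
    slug: str
    canine_name: str
    analogs: list[Analog]
    driver_genes: list[str]
    shared_pathways: list[str]
    comparative_notes: str

entry = AtlasEntry(
    slug="canine-osteosarcoma",
    canine_name="Canine osteosarcoma (OSA)",
    analogs=[Analog("human", "pediatric osteosarcoma", "very high")],
    driver_genes=["TP53", "RB1"],
    shared_pathways=["p53 signalling"],
    comparative_notes="Shared biology per the cross-species literature.",
)
```

The fixed tier vocabulary keeps entries comparable across reviewers; free-text nuance lives in the comparative-notes paragraph.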

Hand-curation is the right call while the catalogue is small (under 30 entries). It gives us editorial control. It also slows us down — which is why the next step is the embedding-retrieval layer.

Where Sci-JEPA fits (post-launch)

Live retrieval against an embedding index.

The longer-term plan: replace keyword search against a fixed list with Sci-JEPA v1.0, our 1.8B-parameter cross-species embedding model. A search for a canine condition surfaces the hand-curated "Featured" entry first, then a list of "AI-retrieved similar conditions" with cosine similarity scores. Targeted for sprint P5-8, after the pre-launch gate.
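The retrieval step itself is standard cosine-similarity ranking over a precomputed embedding index. A minimal sketch, assuming a NumPy matrix of entry embeddings (the Sci-JEPA model, its vectors, and the slugs below are placeholders, not a real API):

```python
import numpy as np

def cosine_scores(query: np.ndarray, index: np.ndarray) -> np.ndarray:
    """Cosine similarity of one query vector against each row of the index."""
    q = query / np.linalg.norm(query)
    m = index / np.linalg.norm(index, axis=1, keepdims=True)
    return m @ q

def retrieve(query_vec, index_vecs, slugs, top_k=5):
    """Return the top_k (slug, score) pairs, highest similarity first."""
    scores = cosine_scores(query_vec, index_vecs)
    order = np.argsort(-scores)[:top_k]
    return [(slugs[i], float(scores[i])) for i in order]

# Toy 4-dim index of three conditions; illustrative vectors only.
index = np.array([[1.0, 0.0, 0.0, 0.0],
                  [0.9, 0.1, 0.0, 0.0],
                  [0.0, 0.0, 1.0, 0.0]])
slugs = ["canine-osa", "pediatric-osa", "canine-mct"]
results = retrieve(np.array([1.0, 0.0, 0.0, 0.0]), index, slugs, top_k=2)
```

In the planned UI, the featured hand-curated entry would be pinned above these scored results rather than competing with them.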

When Sci-JEPA lands, the methodology section here will document the embedding training corpus, the species coverage, and where the model is more vs. less confident.

Reviewer attribution

Reviewer-led, AI-assisted.

We do not claim "vet-reviewed" or "DVM-reviewed" on entries that have not been explicitly signed off by a named reviewer with on-file credentials and an affiliation. As entries are reviewed, the reviewer name, credentials, and last-reviewed date appear in the entry footer.

AI assists with first drafts (literature search summarisation, structured-data extraction) but the publish decision sits with the named reviewer.

Editorial neutrality

No sponsor ranking. Ever.

Same rule as the trial finder: atlas entries are not ranked by sponsor relationship. We don't take payment for inclusion. We don't boost atlas results that happen to align with a SciRouter product. This is the moat; we will not trade it for a deal.

Limitations

What we won't claim.

  • Cross-species pharmacokinetics are not implied by molecular similarity. A drug that works in human DLBCL doesn't automatically work in canine DLBCL. We say where translation has been demonstrated and where it hasn't.
  • Feline coverage is limited. Feline MHC training data (FLA) is sparse compared to canine (DLA) or human (HLA). Feline-as-anchor entries will ship later, with explicit caveat language.
  • The atlas is a starting point, not a treatment plan. Clinical decisions stay with the practising clinician.

Corrections + suggestions

If we got something wrong, tell us.

Reach us via the contact form with the slug and the issue. Quote the source if you have one. Same path for board-certified oncologists who want to take over editorial responsibility for an entry — happy to attribute.