How it works

The methodology behind the report

Celadon runs a multi-pass pipeline on every research question: score the sources, synthesize a thesis, search for evidence against it, decompose confidence by dimension, and identify what would change the conclusion. The full evidence trail ships with every report.

01

Source Search

Generates diverse search queries across five evidence categories. Retrieves from web, uploaded documents, and data feeds.

02

Source Scoring

Every source scored on authority, recency, independence, and incentive risk. Tier 1 filings outrank Tier 4 blog posts.

03

Synthesis

Thesis-led analysis with verified citations. Every claim traced to a specific source with programmatic verification.

04

Contradiction Search

Deliberately searches for evidence AGAINST the thesis. Three adversarial search tracks target the specific claims.

05

Counter-Thesis

Assembles the strongest case against the findings. Rates counter-evidence as Decisive, Material, Moderate, or Weak.

06

Confidence Assessment

Rates confidence across four dimensions: evidence strength, reasoning soundness, conditions stability, and scope precision. ‘Evidence: Strong’ means the data is verified across multiple Tier 1 sources. ‘Conditions: Fragile’ means the conclusion holds only if the current market regime persists. No other AI tool provides this, and it is what institutional buyers want: not a single confidence label, but a decomposition that tells you exactly where your diligence should focus.

07

What to Watch

Identifies specific, observable signals that would change the conclusions. Thresholds, not vague risks.

What makes this different

                                     STANDARD RESEARCH TOOLS           DEEP RESEARCH AGENTS            CELADON
                                     (chat-based research assistants)  (multi-step research products)
What it sells                        Answers                           Long-form answers               Process and evidence trail
Source treatment                     All sources equal                 Heuristic quality               Explicit scoring by tier
Contradiction search                 None                              None                            Deliberate adversarial pass
Confidence model                     None                              None                            4-dimension decomposition
Output format                        Chat response                     Long-form report                Structured decision artifact
Audit trail                          None                              Partial                         Full evidence provenance
Supplements or replaces your tools?  Replaces nothing                  Replaces junior research        Supplements everything you already use

The source table is the product

Every report includes a scored evidence base. Sources are ranked by authority, recency, independence, and incentive risk — not by relevance to the answer the model wants to give. An SEC filing scores 9.2. A TechCrunch article scores 5.1. The reader sees the difference before reading a single finding.

This is not metadata. This is the business model. Each tier upgrade makes the source table visibly different: from web articles at Free, to your uploaded documents at Professional, to premium data feeds alongside your team's accumulated research at Enterprise.

Source quality overview
#   Source                     Score   Tier
1   NVIDIA 10-K FY2026         9.2     T1
2   Morgan Stanley Research    7.5     T2
3   Reuters Analysis           6.8     T3
4   TechCrunch Article         5.1     T3
5   Industry Blog Post         3.2     T4

Five things you cannot get from Claude, ChatGPT, Perplexity, or any deep research product

These are not feature gaps that close next quarter. They reflect a different design philosophy. Deep research products optimize for helpful answers. Celadon optimizes for epistemic accountability.

01

Explicit Source Scoring

Every source scored on authority, recency, independence, and incentive risk. A 10-K filing scores 9.2. A TechCrunch article scores 5.1. The scoring is visible in every report. Foundation models treat all retrieved content as equal-weight context. Celadon does not.
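As a concrete illustration, a tiered scoring scheme of this shape could be sketched as follows. The four dimension names come from the report, but the weights, tier cutoffs, and function names are illustrative assumptions, not Celadon's published formula:

```python
def composite_score(authority: float, recency: float,
                    independence: float, incentive_risk: float) -> float:
    """Blend the four scoring dimensions (each 0-10) into one composite.

    incentive_risk is treated as a penalty: a source with a strong
    incentive to persuade scores high on risk and lower overall.
    Weights here are illustrative, not Celadon's actual values.
    """
    return round(
        0.40 * authority
        + 0.20 * recency
        + 0.25 * independence
        + 0.15 * (10 - incentive_risk),
        1,
    )


def tier(score: float) -> str:
    """Map a composite score onto the T1-T4 bands shown in reports."""
    if score >= 8.5:
        return "T1"
    if score >= 7.0:
        return "T2"
    if score >= 5.0:
        return "T3"
    return "T4"
```

Under these assumed cutoffs, a 9.2 primary filing lands in T1 and a 5.1 news article in T3, matching the tiers shown in the source table.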

02

Deliberate Contradiction Search

After synthesis, the pipeline generates adversarial queries targeting each specific claim. Track A attacks competitive assumptions. Track B searches for buyer-side disconfirmation. Track C tests structural alternatives. No foundation model searches for evidence against its own conclusions.
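A minimal sketch of what per-claim adversarial query generation could look like. The three track labels follow the text; the query templates and function names are invented for illustration:

```python
# One disconfirming query template per adversarial track (illustrative).
ADVERSARIAL_TEMPLATES = {
    "A: competitive": "{claim} competitors gaining share alternative",
    "B: buyer-side": "{claim} customers dissatisfied churn complaints",
    "C: structural": "{claim} substitute technology market shift",
}


def contradiction_queries(claims: list[str]) -> list[tuple[str, str]]:
    """Generate one adversarial search query per claim per track."""
    return [
        (track, template.format(claim=claim))
        for claim in claims
        for track, template in ADVERSARIAL_TEMPLATES.items()
    ]
```

Each synthesized claim fans out into three searches, each built to surface evidence that would undermine it rather than confirm it.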

03

Structured Counter-Thesis

The strongest case against the thesis is assembled and rated: Decisive, Material, Moderate, or Weak. If the counter-evidence is Decisive, the report says so. The executive summary is rewritten to lead with the tension.

04

Confidence Assessment

Confidence rated across four dimensions: evidence strength, reasoning soundness, conditions stability, and scope precision. Each dimension uses its own rating vocabulary. The reader sees exactly where to focus skepticism.
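The four-dimension decomposition could be modeled roughly like this. The dimension names match the report; the rating vocabularies and the helper logic are assumptions for illustration:

```python
from dataclasses import dataclass

# Assumed rating vocabularies, ordered strongest to weakest.
EVIDENCE = ("Strong", "Adequate", "Thin")
REASONING = ("Sound", "Plausible", "Speculative")
CONDITIONS = ("Durable", "Stable", "Fragile")
SCOPE = ("Precise", "Bounded", "Broad")


@dataclass
class Confidence:
    evidence: str    # how well-verified the underlying data is
    reasoning: str   # how tight the inference from data to thesis is
    conditions: str  # how well the conclusion survives regime change
    scope: str       # how narrowly the claim is stated

    def weakest_dimension(self) -> str:
        """Name the dimension where skepticism should focus."""
        ranks = {
            "evidence": EVIDENCE.index(self.evidence),
            "reasoning": REASONING.index(self.reasoning),
            "conditions": CONDITIONS.index(self.conditions),
            "scope": SCOPE.index(self.scope),
        }
        return max(ranks, key=ranks.get)
```

A report rated Evidence: Strong but Conditions: Fragile would point the reader at conditions stability, exactly the "where to focus" signal described above.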

05

What to Watch

Every report identifies specific, observable signals that would change the conclusions. Not “watch the market” — “NVIDIA Data Center revenue declining below 15% YoY for two consecutive quarters.” Each variable names the data source and the threshold. The report is a living instrument, not a static document.
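A watch signal with an explicit threshold could be represented along these lines. The field names and check logic are illustrative; the NVIDIA example is taken from the text:

```python
from dataclasses import dataclass


@dataclass
class WatchSignal:
    description: str
    data_source: str
    threshold: float          # trigger level, in the metric's own units
    consecutive_periods: int  # how long the breach must persist

    def triggered(self, observations: list[float]) -> bool:
        """True if the last N observations all fall below the threshold."""
        recent = observations[-self.consecutive_periods:]
        return (len(recent) == self.consecutive_periods
                and all(x < self.threshold for x in recent))


signal = WatchSignal(
    description="NVIDIA Data Center revenue growth below 15% YoY",
    data_source="NVIDIA quarterly filings",
    threshold=15.0,
    consecutive_periods=2,
)
```

A single weak quarter does not trip the signal; two consecutive quarters below the named threshold do, which is what makes the report checkable against live data rather than a static document.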

These five capabilities are absent from the major AI labs' products and every deep research tool currently on the market.

Every claim traces to a source. No exceptions.

AI-generated research hallucinates. Independent studies have documented hallucination rates exceeding 25% in financial AI predictions and found that nearly one in five AI-generated risk calculations contains unsupported assumptions. In high-stakes analysis, a 0.5% error can amount to millions.

Celadon addresses this at three layers. A citation engine traces every claim to specific text in a specific source. The source hierarchy prevents Tier 4 blog posts from dominating when Tier 1 filings are available. And the confidence decomposition separates what is known from what is inferred, so the reader sees the epistemic status of each conclusion.
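The first layer, tracing a claim's quoted text back to the source document, could be sketched as a normalized substring check. The function name and normalization rules here are assumptions, not Celadon's implementation:

```python
import re


def _normalize(text: str) -> str:
    """Collapse whitespace and case so line breaks in a filing
    don't cause false negatives; everything else is exact match."""
    return re.sub(r"\s+", " ", text).strip().lower()


def citation_verified(cited_text: str, source_document: str) -> bool:
    """True only if the quoted text actually appears in the source."""
    return _normalize(cited_text) in _normalize(source_document)
```

A claim whose quoted text fails this check never reaches the report as a verified citation, which is what the "Text exists in source" checkmark below represents.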

Verified Citation

“NVIDIA's data center revenue reached $35.6B in Q1 FY2026”

Source: NVIDIA 10-K FY2026 (SEC.gov) · Tier 1

Cited text: “Data Center revenue was $35,577 million”

Composite score: 9.2

✓ Text exists in source   ✓ Primary filing   ✓ Audited

See the evidence trail yourself

Generate a free research report. No account required.

Generate Report