What clinical trial should be funded next?

LEAP analyzes 149,947 clinical trials from ClinicalTrials.gov to produce performance scorecards, evidence gap maps, and portfolio-optimized funding recommendations.

149,947 trials analyzed · 29,137 unique conditions · 113 features per trial · 20 recommended archetypes

The LEAP Pipeline

📊 AACT Database (149,947 trials) → ⚙️ Feature Engine (113 features) → 📝 Scorecards (risk tiers) → 🗺️ Evidence Maps (gap analysis) → 🎯 Recommendations (top 20) → 💰 Portfolio Sim (budget optimizer)

Trial Risk Landscape

135,584 Favorable (90.4% of all trials) · 12,439 Low Risk (8.3%) · 1,924 Moderate Risk (1.3%)

Highest-Need Disease Domains

Based on evidence gap analysis weighted by WHO Global Burden of Disease

Cancer 1.50x · Cardiovascular 1.45x · Neurological 1.30x · Infectious 1.20x · Rare Disease 1.10x

Trial Scorecards Explorer

Performance forecasts and risk flags for 149,947 clinical trials

How to read this

Each trial receives an overall score (0-100%) predicting its likelihood of successful completion, based on 113 features including study design, enrollment targets, and historical patterns. Trials are classified into risk tiers: Favorable (predicted to complete on track), Low Risk (minor concerns), or Moderate Risk (significant risk of delays or failure).
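The tier assignment can be sketched as a simple threshold rule over the predicted score. The 0.70 and 0.50 cutoffs below are illustrative assumptions, not LEAP's published boundaries, which come from the calibrated model:

```python
def risk_tier(score: float) -> str:
    """Map a predicted completion likelihood (0.0-1.0) to a risk tier.

    The 0.70 / 0.50 cutoffs are illustrative placeholders; LEAP's
    actual tier boundaries are set by the calibrated model and are
    not stated in this document.
    """
    if score >= 0.70:
        return "Favorable"       # predicted to complete on track
    if score >= 0.50:
        return "Low Risk"        # minor concerns
    return "Moderate Risk"       # significant risk of delays or failure

print(risk_tier(0.85))  # → Favorable
```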

Summary metrics: total trials, average and median completion likelihood, trials flagged for review, and trial counts by risk tier (Favorable / Low / Moderate Risk).

Score Distribution

NCT ID · Condition · Phase · Score · Risk Tier · Flags

Evidence Gap Map

Where is clinical evidence thin relative to disease burden?

How to read this

Each cell shows how well-studied a disease-treatment combination is. Red cells indicate large evidence gaps — areas where few trials exist relative to disease burden. Green cells indicate adequate evidence. Click any cell for details. Gaps are weighted by WHO disease burden, so high-burden diseases with few trials are prioritized.
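Using the sufficiency and gap formulas from the Methodology section, a cell's burden-weighted gap can be computed as below. The multiplicative application of the WHO weight and the example inputs are assumptions for illustration:

```python
def evidence_sufficiency(quantity_norm: float, quality_norm: float) -> float:
    # S(c,i) = 0.5 * quantity_norm(c,i) + 0.5 * quality_norm(c,i)
    return 0.5 * quantity_norm + 0.5 * quality_norm

def burden_weighted_gap(quantity_norm: float, quality_norm: float,
                        burden_weight: float) -> float:
    # Gap score G(c,i) = 1 - S(c,i); multiplying by the WHO burden
    # weight (an assumption about how the weighting is applied) pushes
    # high-burden, thinly studied cells to the top of the ranking.
    return (1.0 - evidence_sufficiency(quantity_norm, quality_norm)) * burden_weight

# A hypothetical cancer cell (1.50x weight) with thin, low-quality evidence:
print(burden_weighted_gap(0.2, 0.3, 1.50))
```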

Evidence Sufficiency by Condition × Intervention


WHO Burden Weights

Legend

Large gap (high need)
Moderate gap
Evidence adequate

Recommended Next Trials

Top 20 trial archetypes ranked by composite score across evidence gain, patient impact, feasibility, and redundancy

How to read this

Each recommendation is a trial archetype — a condition + intervention + phase combination that would most efficiently fill evidence gaps. The composite score blends four factors: how much new evidence the trial would generate (35%), expected patient benefit (25%), practical feasibility (25%), minus a penalty if similar trials already exist (15%). Higher is better. Click any row for the full rationale.
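The blend can be written directly from the stated weights. The weights match LEAP's formula; the input values are illustrative and assumed to be normalized to [0, 1]:

```python
def composite_score(evidence_gain: float, patient_impact: float,
                    feasibility: float, redundancy: float) -> float:
    """C = 0.35*evidence_gain + 0.25*patient_impact
         + 0.25*feasibility - 0.15*redundancy

    Weights are LEAP's stated blend; inputs are assumed normalized
    to [0, 1] for illustration.
    """
    return (0.35 * evidence_gain + 0.25 * patient_impact
            + 0.25 * feasibility - 0.15 * redundancy)

# A hypothetical archetype: large evidence gain, modest redundancy penalty.
print(round(composite_score(0.9, 0.8, 0.6, 0.2), 3))  # 0.635
```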

Recommendation Landscape


How would recommendations change under different assumptions about disease burden? Select a scenario to re-rank the top 20.

# · Condition · Intervention · Phase · Composite · Evid. Gain · Impact · Feasibility · Redund.

Component Ablation Analysis

What happens if we remove a component from the scoring pipeline? Each line drops one factor — if the line diverges sharply from baseline, that component has a large influence on the final rankings.


Portfolio Simulator

Given a budget, which mix of new trials closes the most evidence gaps?

How to read this

Each budget scenario selects the optimal mix of trial archetypes using a greedy knapsack algorithm — picking trials that maximize gap closure per dollar. Gap closure measures what fraction of the total evidence gap would be addressed if these trials succeed. The equity-constrained variant ensures funding is spread across disease areas, not concentrated in the most cost-efficient ones. Shaded bands show uncertainty from 250 Monte Carlo simulations.
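The greedy selection can be sketched in a few lines. The archetype names, costs, and closure values below are made up for illustration and are not LEAP's actual estimates:

```python
def greedy_portfolio(archetypes, budget):
    """Greedy knapsack: repeatedly fund the archetype with the best
    gap-closure-per-dollar ratio that still fits in the budget.

    `archetypes` holds (name, cost_in_$M, expected_gap_closure) tuples;
    the example values are hypothetical, not LEAP's estimates.
    """
    remaining, chosen = budget, []
    for name, cost, closure in sorted(archetypes,
                                      key=lambda a: a[2] / a[1],
                                      reverse=True):
        if cost <= remaining:
            chosen.append(name)
            remaining -= cost
    return chosen

picks = greedy_portfolio(
    [("Phase 2 cardiovascular", 7.5, 0.12),
     ("Phase 3 oncology",       8.0, 0.10),
     ("Phase 2 rare disease",   7.2, 0.09)],
    budget=15.0)
print(picks)  # ['Phase 2 cardiovascular', 'Phase 2 rare disease']
```

Note that a greedy ratio heuristic is fast but not guaranteed optimal for the 0/1 knapsack problem; it can skip a slightly worse-ratio trial whose cost would have fit the budget better.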

Gap Closure Trajectories (250 Monte Carlo Runs)

Each line shows the median projected gap closure over time. The shaded band shows the 10th-90th percentile range across 250 simulations — wider bands mean more uncertainty in outcomes.
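The band construction can be sketched with NumPy percentiles over a matrix of simulated trajectories. The trajectories below are synthetic placeholders; real runs come from LEAP's portfolio simulator, not this toy distribution:

```python
import numpy as np

rng = np.random.default_rng(42)
n_runs, n_quarters = 250, 20

# Synthetic stand-in trajectories: each run accumulates random quarterly
# progress toward gap closure, capped at 1.0 (all gaps closed).
increments = rng.uniform(0.0, 0.08, size=(n_runs, n_quarters))
trajectories = np.minimum(increments.cumsum(axis=1), 1.0)

median = np.median(trajectories, axis=0)                   # central line
p10, p90 = np.percentile(trajectories, [10, 90], axis=0)   # shaded band
print(f"final median closure: {median[-1]:.2f} "
      f"(band {p10[-1]:.2f}-{p90[-1]:.2f})")
```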


Score-Optimal ($50M)

Equity-Constrained ($50M)

Key Insight

With a $50M budget, requiring equitable disease coverage costs nothing in efficiency: the equity-constrained portfolio closes 64% of evidence gaps across 3 disease areas, while the purely score-optimized portfolio closes only 60% across 2. In practice, this means funders can spread impact more broadly without sacrificing effectiveness.

Budget vs. Gap Closure

How much evidence gap can each budget level close? Lines show the mean projection; shaded areas show confidence intervals.


About LEAP

Methodology

LEAP (Landscape Evidence and Portfolio) is a clinical trial intelligence pipeline that processes the Aggregate Analysis of ClinicalTrials.gov (AACT) database. The pipeline proceeds in five stages:

  1. Data Ingestion — 149,947 trials from AACT with standardized fields
  2. Feature Engineering — 113 derived features per trial (study design, enrollment, timing, cross-registry linkage)
  3. Multi-Task Prediction — Gradient-boosted models predict completion success, discontinuation risk, and enrollment attainment with calibrated confidence intervals
  4. Evidence Gap Mapping — Sufficiency indices computed per condition×intervention cell, weighted by WHO Global Burden of Disease
  5. Portfolio Optimization — Greedy knapsack algorithm selects trial archetypes that maximize composite scores under budget constraints, validated with Monte Carlo simulation

Key Formulas

Evidence Sufficiency
S(c,i) = 0.5 × quantity_norm(c,i) + 0.5 × quality_norm(c,i)
Gap Score
G(c,i) = 1 - S(c,i)
Composite Score
C = 0.35 × evidence_gain + 0.25 × patient_impact + 0.25 × feasibility - 0.15 × redundancy

Model Performance

Limitations

  • 0% external linkage achieved in current version (OpenAlex, OpenFDA, ICTRP) — future versions will incorporate publication and regulatory outcomes
  • Cost estimates are heuristic ($7.2M–$8M per trial) — real costs vary by phase, indication, and geography
  • Based on a frozen AACT snapshot — not real-time
  • Positive result labels unavailable in current data — efficacy prediction deferred to future work
  • Recommendations are structured prompts for human decision-makers, not autonomous allocation decisions

Author

Shuhan He, MD

Cite This

He S. LEAP: Landscape Evidence and Portfolio Analysis for Clinical Trial Funding Optimization. 2026. Available at: bayesianscience.org