The 'Breakthrough Claim': writing a testable leap beyond the state of the art
- Bee Granted

- Nov 24
- 5 min read

A credible breakthrough claim does not rest on adjectives; it rests on measurement. In competitive European programmes, especially those rewarding high-risk/high-gain research, reviewers look for a clear, testable statement that defines how your concept surpasses the state of the art and how you will prove it. This article sets out a practical method to frame such a claim and illustrates what a “leap” looks like across life sciences, materials and digital technologies. The emphasis is on quantified thresholds, carefully chosen comparators, negative controls and explicit falsification criteria.
Start with a falsifiable proposition and a decisive experiment
Begin by stating the breakthrough in one assertive, testable sentence: “Technology X will achieve Y under Z conditions.” Anchor this to a decisive experiment or demonstration that could, in principle, prove you wrong. The goal is not to guarantee success but to engineer a result that, if observed, would compel a reasonable expert to accept that a qualitatively new regime has been reached. Make success and failure operational: specify the metric, the target value, the context in which it will be measured and the confidence level required to accept the claim.
Define quantified thresholds that signal a new regime
A leap is a threshold phenomenon: once crossed, it unlocks capabilities that prior approaches cannot practically reach. Set thresholds that matter to users and downstream adopters, not only to laboratory convenience. Where relevant, include stability, reproducibility and scale-relevance (e.g. batch-to-batch variation, cycle life, manufacturability). Use ranges and decision gates, not vague aspirations. If the claim involves multiple metrics (for example, accuracy and latency, or strength and recyclability), state the joint target as an AND condition to avoid moving the goalposts during execution.
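To make the AND condition concrete, a joint decision gate can be sketched in a few lines of code. The metric names and threshold values below are purely illustrative placeholders, not recommendations:

```python
# Minimal sketch of a joint (AND) decision gate for a breakthrough claim.
# Metric names and threshold values are illustrative, not prescriptive.

THRESHOLDS = {
    "accuracy": 0.92,     # must reach at least this value
    "latency_ms": 50.0,   # must stay at or below this value
}

def gate_passes(measured: dict) -> bool:
    """Return True only if every metric clears its bar (AND condition)."""
    return (
        measured["accuracy"] >= THRESHOLDS["accuracy"]
        and measured["latency_ms"] <= THRESHOLDS["latency_ms"]
    )

print(gate_passes({"accuracy": 0.94, "latency_ms": 41.0}))  # True: both bars cleared
print(gate_passes({"accuracy": 0.94, "latency_ms": 63.0}))  # False: latency misses
```

The point of writing the gate this way is that a partial success (one metric cleared, one missed) cannot quietly be reframed as success: both bars were fixed before execution.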
Choose comparators that a sceptical expert would accept
Comparators establish credibility. Select the strongest relevant baseline available today, not a weaker strawman. For each metric, cite (or reproduce) the best-in-class method, a widely used standard, or a realistic incumbent system. If no single comparator dominates, justify a composite baseline and predefine how you will normalise across methods. Ensure the comparator is measured side-by-side under equivalent conditions; reviewers discount “paper baselines” that differ in data, protocol or operating regime.
Include negative controls and pre-commit to falsification
Negative controls reveal artefacts and confounders; they also earn trust. State what result would falsify your claim and what you will do if that happens (e.g., switch to a predefined fallback route while preserving the project’s core objective). Pre-register stop/go criteria for the decisive experiment and tie them to the work plan so that resources shift rationally based on evidence.
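A pre-registered stop/go rule can be written down as explicitly as a function mapping the decisive experiment's outcome to a pre-committed management action. The route names and threshold below are hypothetical placeholders for illustration:

```python
# Hedged sketch: a pre-registered stop/go rule that routes effort to a
# fallback work package when the decisive experiment falsifies the claim.
# "route 1" / "route 2" and the threshold are illustrative placeholders.

def stop_go(decisive_result: float, threshold: float) -> str:
    """Map the decisive experiment's outcome to a pre-committed action."""
    if decisive_result >= threshold:
        return "go: continue route 1 and scale up"
    return "pivot: activate fallback route 2"

print(stop_go(0.72, threshold=0.60))  # go: continue route 1 and scale up
print(stop_go(0.41, threshold=0.60))  # pivot: activate fallback route 2
```

Committing the rule in advance is what makes the pivot rational rather than reactive: the evidence decides, not the mood of the consortium meeting.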
What a “leap” looks like, by domain

Life sciences
A testable leap rarely hinges on a single p-value; it couples biological relevance with technical performance. For a therapeutic or diagnostic platform, define thresholds that change decisions (e.g., sensitivity/specificity at clinically relevant prevalence; limit of detection in matrix; effect size in a validated ex vivo or organoid model with pre-specified biomarkers). Use gold-standard comparators (standard-of-care assays, leading modalities) and include negative controls such as non-target tissues, scrambled constructs or sham procedures. Specify robustness criteria (donor variation, batch effects, blinded analysis) and, if safety is in scope, predefine off-target assessments or cytokine release panels. A falsification example: “If target engagement in human organoids does not exceed X% with no elevation in toxicity markers A and B relative to control at exposure C, the breakthrough claim fails and the Route-2 mechanism will be activated.”
Materials and manufacturing
The leap is often multi-dimensional: performance, durability and sustainability must all clear the bar. Express targets that move the application boundary: tensile strength with elongation at break, ionic/electrical conductivity at operating temperature, barrier properties under humidity cycling, or catalyst turnover frequency with selectivity under realistic impurity loads. Pair these with scale and sustainability thresholds: synthesis yield at kilogram scale, energy intensity per unit output, recyclability or biodegradation rates and Life-Cycle Assessment indicators (e.g., GWP per functional unit). Comparators should be current incumbent materials or the best published prototypes tested in your lab under the same regime. Negative controls include binder-free formulations, inactive catalysts or off-spec feedstocks to expose fragility. Falsification might read: “If conductivity does not remain above Y S·cm⁻¹ (with batch-to-batch variance below 5%) after 500 cycles at T °C, the claim is not met; pivot to dopant strategy B.”
Digital and AI-enabled technologies
For algorithms and cyber-physical systems, breakthroughs must survive rigorous evaluation design. Define primary metrics that matter in deployment (e.g., AUROC and calibration error for clinical triage; top-1 accuracy and inference latency on edge hardware; success rate and sample efficiency in robotics). Use strong public or industrial benchmarks with predefined train/validation/test splits and forbid data leakage; report variance across seeds and sites. Add robustness tests (distribution shift, adversarial noise, occlusion), ablation studies (to prove each innovation’s contribution) and real-hardware or real-time constraints where appropriate. Controls include simplified baselines (to expose shortcut solutions) and “oracle” variants (to bound attainable performance). Falsification could be: “If the algorithm fails to exceed the baseline by Δ on the held-out dataset while keeping latency below L ms on device D, the claim fails; we revert to architecture variant V and re-evaluate.”
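Such an evaluation can be made mechanical: aggregate the primary metric across seeds, then apply the pre-registered falsification criterion jointly with the latency budget. Everything in this sketch (function name, baseline, Δ, latency budget, the numbers themselves) is a hypothetical illustration:

```python
# Illustrative sketch: aggregating a metric across random seeds and applying
# a pre-registered falsification criterion. All names and values are hypothetical.
from statistics import mean, stdev

def evaluate_claim(scores, baseline, delta, latencies_ms, max_latency_ms):
    """Claim holds only if the mean gain over baseline reaches delta
    AND the worst observed latency stays within budget (joint condition)."""
    improvement = mean(scores) - baseline
    worst_latency = max(latencies_ms)
    claim_holds = improvement >= delta and worst_latency <= max_latency_ms
    return claim_holds, improvement, stdev(scores)

# Held-out results across five seeds (hypothetical values).
scores = [0.81, 0.83, 0.80, 0.82, 0.81]
ok, gain, spread = evaluate_claim(
    scores, baseline=0.75, delta=0.04,
    latencies_ms=[38.0, 41.0, 39.5], max_latency_ms=50.0,
)
print(ok)  # True: mean gain ~0.064 clears delta and worst latency is in budget
```

Reporting the spread across seeds alongside the mean is what separates a reproducible leap from a lucky run; reviewers will look for exactly that.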
Build the claim into the work plan and reviews
A strong claim lives inside milestones and decision gates: the decisive experiment is a milestone; the threshold values are success criteria; the comparators and controls are deliverables; the fallback routes are risk mitigations with pre-allocated effort. Link each gate to a management action (scale, switch or stop). Commit to independent or blinded verification at the moment it matters, not as an afterthought.
Communicate uncertainty with discipline
High-risk projects should not read as hopeful; they should read as engineered. Be explicit about the hardest assumptions and where the decisive evidence will come from. Use conservative confidence intervals and explain power or sample-size choices. Where replication is costly, pre-define a minimal confirmatory protocol to avoid one-off miracles. If you expect heterogeneity (donors, batches, environments), state your stratification and acceptance bands upfront.
The take-away: measurable, comparable, disprovable
A breakthrough claim worthy of funding is measurable (quantified thresholds tied to use), comparable (side-by-side against the best relevant baseline) and disprovable (negative controls and falsification routes). Across life sciences, materials and digital technologies, the pattern is constant even if the metrics differ. When reviewers can see exactly what success looks like, how you will test it and what happens if it fails, they are far more likely to judge the ambition as credible rather than speculative.
If you want to turn your concept into a testable, fundable breakthrough claim, contact Bee Granted. We help teams translate ambition into engineered milestones, robust evaluation designs and score-aligned narratives. Through the Beehive, our partner-matching portal, we can also connect you with complementary PIs and companies to build the right consortium from day one.