
MOOSE-Star: Unlocking Tractable Training for Scientific Discovery by Breaking the Complexity Barrier

Intermediate
Zonglin Yang, Lidong Bing · 3/4/2026
arXiv

Key Summary

  ‱ Scientists want AI to propose brand‑new hypotheses directly from a research background, but training a model to do this end‑to‑end is mathematically intractable because the search space explodes combinatorially.
  ‱ MOOSE‑STAR breaks the big problem into small steps: first plan a motivation, then retrieve inspirations, then compose the hypothesis one piece at a time.
  ‱ A hierarchical search tree lets the model find relevant inspirations in roughly logarithmic time instead of scanning everything linearly.
  ‱ Bounded composition trains the model to stay accurate even when the retrieved inspiration is only approximately correct, making the system robust to retrieval noise.
  ‱ Motivation planning focuses the search on a smaller, relevant subspace, so the model wastes less time exploring irrelevant ideas.
  ‱ The TOMATO‑STAR dataset (108,717 decomposed papers) provides large‑scale supervision for training the subtasks.
  ‱ Compared to a strong 7B baseline, MOOSE‑STAR nearly doubles inspiration‑retrieval accuracy (28.42% → 54.37%) and improves hypothesis composition quality.
  ‱ In tests, brute‑force sampling hits a complexity wall, while MOOSE‑STAR keeps improving with more search budget and reaches full coverage with far fewer inference calls.
  ‱ This framework turns scientific discovery from an intractable leap into a manageable, guided search with scalable training and inference.
  ‱ All code, models, and data are released to support further research and real‑world use.

Why This Research Matters

Scientific progress often comes from combining known ideas in new ways. MOOSE‑STAR gives AI a practical, teachable path to do exactly that: plan a direction, find inspirations efficiently, and assemble them robustly—even when retrieval is imperfect. This can accelerate discovery in medicine (new diagnostic methods), materials (better batteries), and climate science (improved models), where searching the literature is overwhelming for humans alone. Because the framework scales with more data and compute, it keeps getting better instead of hitting a dead end. By releasing data, models, and code, the authors enable labs, startups, and educators to build co‑scientist tools that are both faster and more reliable.

Detailed Explanation


01Background & Problem Definition

🍞 Hook: Imagine you’re trying to invent a new Lego creation. You have a big box of pieces (all the knowledge in the world) and a short note that says what you want to build (your research background). Building something amazing in one go is super hard if you don’t know which pieces to pick.

đŸ„Ź The Concept: We want AI to generate scientific hypotheses directly from a research background. How it works (ideally): 1) Read the background, 2) Find the right inspirations from all prior knowledge, 3) Combine them step by step into a new idea. Why it matters: Without a clear way to train this, AI guesses in the dark or waits for external feedback like reviews; it can't directly reason from background to a hypothesis.

🍞 Anchor: Think of asking, “Given what we know about batteries, what new idea could make them charge faster?” We want AI to propose that idea by itself, not only after someone grades or edits it.

—

🍞 Hook: You know how weather forecasts say, “What’s the chance of rain tomorrow if today is cloudy?” That’s asking, “What’s likely, given what I already know?”

đŸ„Ź Conditional Probability P(h|b): It’s the probability of a hypothesis (h) given a background (b). How it works: 1) Treat the background as your known context, 2) Ask which hypotheses fit that context best, 3) Pick the highest‑probability one. Why it matters: This is the core goal—train a model that can map background directly to a good hypothesis.

🍞 Anchor: If the background is “multilayer logistic regression,” the best hypothesis might be “use the chain rule to update all layers” (which is the backpropagation idea).

—

🍞 Hook: Imagine searching for a 4‑digit code by trying every combination—0000 to 9999. That’s 10,000 tries for just four digits!

đŸ„Ź Combinatorial Complexity: When you must choose k inspirations from a huge library of N items, the number of possible combos grows like N^k. How it works: 1) For each step, there are many choices; 2) Multiply those choices across steps; 3) You quickly get astronomically many combinations. Why it matters: Directly training P(h|b) means the model must implicitly search all those combos, which is intractable.

🍞 Anchor: If N is all scientific papers and k is the number of inspirations to combine, the search can be as hard as finding a few exact needles in a haystack the size of all libraries on Earth.
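A back‑of‑the‑envelope sketch of this blow‑up. The numbers here are toy assumptions for illustration (a real knowledge base is far larger); only the branching factor of ~15 echoes the paper's tree construction:

```python
import math

N = 100_000  # papers in the knowledge base (toy figure; reality is far larger)
k = 3        # number of inspirations to combine

brute_force = N ** k   # end-to-end: implicitly search every k-combination
decomposed = k * N     # k rounds of flat O(N) retrieval + O(1) composition
with_tree = k * 15 * math.ceil(math.log(N, 15))  # tree retrieval, branching ~15

print(brute_force)  # 1000000000000000 combinations
print(decomposed)   # 300000 candidate evaluations
print(with_tree)    # 225 candidate evaluations
```

Even with these modest toy numbers, decomposition alone cuts the search by ten orders of magnitude, and the hierarchical tree cuts it by three more.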

—

🍞 Hook: When you do a giant jigsaw puzzle, you don’t try to solve it all at once—you group edge pieces, then sky pieces, then buildings.

đŸ„Ź Decomposed Sequential Training: Break hypothesis generation into smaller steps: retrieve one inspiration, then compose a small “delta” update, and repeat. How it works: 1) Retrieve inspiration j from the knowledge base, 2) Compose a small improvement ∆h_j to the hypothesis, 3) Move to the next step until done. Why it matters: This turns the impossible N^k search into k steps where retrieval is O(N) and composition is O(1), which is tractable.

🍞 Anchor: To rediscover backpropagation, first retrieve the “chain rule” paper, then compose the idea “apply chain rule to layer‑by‑layer updates,” then refine training details.

—

🍞 Hook: When you look for a book, you don’t scan every page of the library. You go: building → floor → section → shelf → book.

đŸ„Ź Hierarchical Search: Organize the knowledge base as a tree so you can zoom in from general to specific quickly. How it works: 1) Cluster papers into topics, 2) Navigate top‑down by choosing likely branches, 3) Evaluate far fewer candidates overall. Why it matters: Instead of O(N) scanning, you can approach O(log N) in the best case, saving huge amounts of time.

🍞 Anchor: Want a paper on “boundary‑aware losses”? First pick ‘computer vision’, then ‘medical imaging’, then ‘segmentation’, then ‘boundary losses’—only a handful of candidates remain.

—

🍞 Hook: Before going on a treasure hunt, you draw a simple map: “Head north to the old oak, then turn right toward the river.”

đŸ„Ź Motivation Planning: Write a short, high‑level intent that guides search before retrieving inspirations. How it works: 1) From the background, generate a concise plan (the ‘why/what’), 2) Use it to ignore irrelevant branches, 3) Search in a smaller, focused subspace. Why it matters: This reduces wasted exploration and speeds up finding the right inspiration neighborhood.

🍞 Anchor: If the background says “MRIs miss fuzzy tumor edges,” a motivation could be “prioritize methods with uncertainty‑aware boundary refinement,” steering retrieval to a tight subset of papers.

—

🍞 Hook: If a friend brings you the wrong Lego piece but it’s close enough, you can still fit it into your model by adjusting the design a bit.

đŸ„Ź Bounded Composition: Train composition to work even when the retrieved inspiration is an approximate match, within a semantic tolerance window. How it works: 1) Define a neighborhood of near‑matches around the true inspiration, 2) Practice composing with those proxies, 3) Learn to recover the intended idea despite noise. Why it matters: Retrieval isn’t perfect; without tolerance, small errors would break the whole pipeline.

🍞 Anchor: If the exact ‘chain rule’ source isn’t found, a similar calculus text still lets the model reconstruct backprop’s key logic.
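One way a semantic tolerance window could be realized over embeddings is a cosine‑similarity threshold around the true inspiration. This is a minimal sketch: the helper name, the 2‑d toy vectors, and the 0.85 threshold are all illustrative assumptions, not the paper's exact construction of M:

```python
import math

def proxies_within_tolerance(true_vec, candidate_vecs, tolerance=0.85):
    """Indices of candidates inside the semantic tolerance window:
    near-matches of the true inspiration, by cosine similarity."""
    def cosine(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        return dot / (math.hypot(*u) * math.hypot(*v))
    return [j for j, c in enumerate(candidate_vecs)
            if cosine(true_vec, c) >= tolerance]

# Toy 2-d embeddings: the first two candidates sit close to the true one.
true_vec = (1.0, 0.0)
candidates = [(1.0, 0.0), (0.9, 0.1), (0.0, 1.0)]
print(proxies_within_tolerance(true_vec, candidates))  # [0, 1]
```

Training composition on such near‑matches, rather than only on the exact inspiration, is what gives the pipeline its tolerance to retrieval noise.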

—

🍞 Hook: A detective doesn’t just collect clues; they choose the right clue from a lineup.

đŸ„Ź Inspiration Retrieval (IR): Find the most helpful prior work (the next inspiration) from many candidates. How it works: 1) Present background plus a candidate pool, 2) Score each candidate, 3) Select the best one with a reasoned choice. Why it matters: If you pick the wrong inspiration, the next composition step can’t build the right hypothesis.

🍞 Anchor: Given 15 candidate abstracts, choose the one that truly introduced ‘boundary‑aware loss’ for segmentation.

—

🍞 Hook: Mixing paints one shade at a time gets you to the exact color you want.

đŸ„Ź Hypothesis Composition (HC): Use the chosen inspiration to write a small, precise hypothesis update (∆h). How it works: 1) Read background + prior ∆h’s + current inspiration, 2) Reason step‑by‑step, 3) Output the next delta covering motivation, mechanism, and methodology. Why it matters: Composition is how scattered ideas become a single coherent, testable proposal.

🍞 Anchor: After selecting a boundary‑aware loss paper, add a delta: ‘Use this loss within a 3D U‑Net‑Transformer on BraTS with Dice+HD95 metrics.’

—

🍞 Hook: A good cookbook helps you learn to cook many dishes, not just one.

đŸ„Ź TOMATO‑STAR Dataset: A large collection of 108,717 papers decomposed into background, inspirations (with citations), and stepwise hypothesis deltas. How it works: 1) Parse papers to extract b, i, and ∆h, 2) Link inspirations to real citations, 3) Apply strict quality checks for necessity, sufficiency, disjointness, and non‑redundancy. Why it matters: The subtasks need lots of high‑quality examples to learn robustly.

🍞 Anchor: For a segmentation paper, TOMATO‑STAR stores its background, the exact cited inspirations (titles+abstracts), and the sequence of deltas (motivation, mechanism, methodology) that built the hypothesis.
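A decomposed record might look like the following. The field names are a hypothetical schema inferred from the description above (background b, citation‑grounded inspirations, stepwise deltas with the three M3 parts), not the released data format:

```python
# Hypothetical shape of one TOMATO-STAR record (illustrative field names).
record = {
    "background": "Glioma edges are hard to segment due to fuzzy boundaries.",
    "inspirations": [
        # each inspiration is grounded in a real citation (title + abstract)
        {"title": "A boundary-aware loss for segmentation", "abstract": "..."},
    ],
    "deltas": [
        # each stepwise delta carries motivation, mechanism, and methodology
        {"motivation": "...", "mechanism": "...", "methodology": "..."},
    ],
}
print(sorted(record))  # ['background', 'deltas', 'inspirations']
```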

02Core Idea

🍞 Hook: Imagine turning a maze into a set of straight hallways: you solve one hallway at a time, choose only the needed turns, and keep a simple compass to stay on course.

đŸ„Ź The Aha! In one sentence: MOOSE‑STAR makes training scientific discovery tractable by decomposing end‑to‑end hypothesis generation into motivation planning, hierarchical inspiration retrieval, and bounded hypothesis composition—shrinking complexity from exponential to roughly logarithmic in the best case.

Multiple Analogies:

  1. Library Scout: Instead of reading every book (end‑to‑end), first write a short request note (motivation), follow the library map (hierarchical search), and accept close matches if the exact title isn’t available (bounded composition).
  2. Cooking Class: Write a menu plan (motivation), grab ingredients by aisle (hierarchical retrieval), and cook adaptable recipes that still taste right with equivalent ingredients (bounded composition).
  3. Treasure Hunt: Mark the goal on a simple map (motivation), follow the branching trail signs (hierarchical search), and if the main path is blocked, take a nearby side path that still leads to the treasure (bounded composition).

Before vs After:

  ‱ Before: Training P(h|b) directly meant implicitly searching N^k combinations—too big to learn or converge. Brute‑force sampling stalled, especially when multiple inspirations (k ≄ 2) were needed.
  • After: With MOOSE‑STAR, the problem becomes k rounds of: plan → retrieve → compose. Retrieval is sped up by a search tree; composition tolerates small retrieval errors; planning prunes irrelevant regions. This turns an impossible space into a manageable, guided tour.

Why It Works (intuition, not equations):

  • Most scientific ideas are compositions of a background plus a few core inspirations. If we treat each inspiration as one move and learn those moves, we avoid exploring every possible multi‑step combo at once.
  • A tree search means you don’t look everywhere—only where evidence points. This saves exponential effort.
  • Accepting near‑matches and training the model to ‘snap back’ to the right idea makes the system robust, so one imperfect retrieval won’t derail the final hypothesis.
  • A tiny plan (motivation) focuses everything, like setting your GPS before you drive; it’s cheap to compute but saves lots of wrong turns.

Building Blocks (each with a quick sandwich wrap):

  • 🍞 Motivation Planning: You know how a packing list keeps you focused? The model writes a short intent from the background so the search goes where it matters. If you skip it, you wander into unrelated topics.
  • 🍞 Hierarchical Search: Like store → aisle → shelf. The model navigates a tree of topics to cut search from O(N) toward O(log N). Without it, retrieval stays slow.
  • 🍞 Inspiration Retrieval (IR): Like picking the key clue. The model scores a small candidate pool and chooses the best. Without good retrieval, composition won’t have the right building block.
  • 🍞 Bounded Composition (HC): Like using an equivalent ingredient. The model learns to compose accurately even if the retrieved paper is a near‑match. Without it, small retrieval noise breaks the chain.
  • 🍞 Data Engine (TOMATO‑STAR): Like a well‑curated recipe set. It provides many trustworthy examples of stepwise building. Without data at scale and clean labels, the subtasks won’t generalize.

🍞 Anchor: Suppose the background is ‘MRI tumor boundaries are fuzzy.’ Motivation: ‘Find uncertainty‑aware boundary methods.’ Hierarchical search zooms into segmentation → boundary losses → uncertainty methods. IR picks a strong paper; HC writes a delta proposing ‘boundary‑aware loss + 3D U‑Net‑Transformer + BraTS + Dice/HD95.’ Repeat if another inspiration is needed.

03Methodology

At a high level: Background b → Motivation Planning → Hierarchical Inspiration Retrieval (repeat for k steps) → Bounded Hypothesis Composition (∆h per step) → Final Hypothesis h.

Step 1: Motivation Planning (cheap O(1) guidance)

  • What happens: From the background, the model writes a short intent (m) describing what to look for (e.g., ‘uncertainty‑aware boundary refinement for MRI segmentation’).
  • Why it exists: It prunes huge swaths of irrelevant knowledge so retrieval starts in the right neighborhood. Without it, the search can chase many off‑topic directions.
  • Example: Background: ‘Glioma edges are hard to segment due to fuzzy boundaries.’ Motivation: ‘Seek boundary‑aware losses and uncertainty modeling in medical image segmentation.’

Step 2: Hierarchical Search over the Knowledge Base (toward O(log N))

  • What happens: Papers are embedded (e.g., SPECTER2) and clustered into a balanced tree (branching ~15). Online, the model navigates from root to leaves using best‑first search with a length‑normalized path score (geometric mean of child probabilities).
  • Why it exists: A flat scan is O(N) and too slow. The tree allows top‑down pruning, visiting far fewer nodes to find relevant candidates.
  • Example: Path might go: ‘AI’ → ‘Computer Vision’ → ‘Medical Imaging’ → ‘Segmentation’ → ‘Boundary/Uncertainty’ clusters, arriving at a small set of highly relevant papers.
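The traversal above can be sketched as a best‑first search over a topic tree, where each partial path is scored by the geometric mean of its child probabilities (length‑normalized, so deep paths aren't unfairly penalized). The toy tree and probabilities are assumptions; only the scoring rule follows the description:

```python
import heapq
import math

def best_first_leaf(tree, root="root"):
    """Best-first traversal: pop the highest-scoring partial path,
    expand it, and stop at the first leaf reached."""
    frontier = [(-1.0, root, [])]  # (negated path score, node, log-probs)
    while frontier:
        neg_score, node, logs = heapq.heappop(frontier)
        children = tree.get(node)
        if not children:                 # leaf: best-scoring path found
            return node, -neg_score
        for child, p in children:
            child_logs = logs + [math.log(p)]
            # geometric mean of child probabilities along the path
            score = math.exp(sum(child_logs) / len(child_logs))
            heapq.heappush(frontier, (-score, child, child_logs))

# Toy topic tree: {node: [(child, probability), ...]}; leaves are absent.
tree = {
    "root": [("Computer Vision", 0.9), ("NLP", 0.1)],
    "Computer Vision": [("Segmentation", 0.8), ("Detection", 0.2)],
}
leaf, score = best_first_leaf(tree)
print(leaf)  # Segmentation
```

Note that the "NLP" branch is never expanded: the search prunes it because its path score (0.1) never beats the Computer Vision subtree, which is exactly where the efficiency gain comes from.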

Step 3: Inspiration Retrieval (IR) at each step j

  ‱ What happens: Given b, prior ∆h’s, and m, IR sees a candidate pool (15 papers: 1 positive, 14 hard/easy negatives) with titles+abstracts and must select the best inspiration (i_j) via a reasoned, generative choice. Training uses teacher‑filtered Rejection Sampling Fine‑Tuning (RFT) to cultivate chain‑of‑thought.
  • Why it exists: Picking the right inspiration sets up the composition step. Without accurate IR, composition won’t assemble the intended mechanism.
  • Example: Among 15 candidates, the model picks the one introducing ‘boundary‑aware loss’ rather than a look‑alike that only mentions edge detection without uncertainty modeling.

Step 4: Bounded Hypothesis Composition (HC) for each i_j

  ‱ What happens: Conditioned on b, prior ∆h’s, m, and i_j, HC writes the next delta hypothesis ∆h_j with three parts: Motivation (why this direction), Mechanism (how it works), Methodology (how to implement). Crucially, HC is trained on both exact inspirations and proxies within a semantic tolerance window (M) so it learns to recover the intended idea from near‑matches.
  • Why it exists: Composition fuses inspirations into a coherent, testable plan and provides robustness to retrieval noise. Without bounded training, small retrieval errors would cause large reasoning failures.
  • Example: If the exact ‘boundary‑aware loss’ paper isn’t retrieved, HC can use a very similar paper to produce: ‘Adopt boundary‑aware Dice variant, integrate with 3D U‑Net‑Transformer, evaluate on BraTS with Dice and HD95.’

Step 5: Repeat for k Steps and Concatenate Deltas

  ‱ What happens: The system iterates plan → retrieve → compose. Each composition adds a ∆h. After k rounds, the final hypothesis h is the concatenation of all ∆h_j.
  • Why it exists: Many discoveries require combining several inspirations. Iteration keeps each step tractable and focused.
  • Example: After boundary losses, a second delta might add ‘uncertainty‑guided refinement’ and a third might specify ‘5‑fold cross‑validation on BraTS 2021.’

Data and Training Details (the enabling engine)

  • TOMATO‑STAR: 108,717 papers parsed into (b, i, ∆h) tuples; inspirations are grounded in citations; each ∆h has Motivation/Mechanism/Methodology; strict checks ensure necessity, sufficiency, disjointness, non‑redundancy.
  • IR training: Candidate pools of 15 (1 positive, 14 negatives from keyword/semantic overlaps and randoms). Generative selection with CoT. Teacher model: R1‑DISTILLED‑QWEN‑32B; student: R1‑DISTILLED‑QWEN‑7B (MS‑IR‑7B).
  • HC training: Generate ∆h with CoT, filter with an LLM rubric (M3: Motivation, Mechanism, Methodology; each 0–4, total 12). Include bounded‑composition samples using proxies from similarity tiers to expand tolerance M. Student: MS‑HC‑7B.

The Secret Sauce

  ‱ Tri‑stage HMDP: Planning → Retrieval → Composition turns N^k into k focused moves with guidance at every step.
  • Hierarchical navigation + path scoring: Best‑first, length‑normalized traversal prunes aggressively yet fairly across depths.
  • Semantic tolerance training: Shifts cost from global search (expensive) to local reasoning (cheaper), boosting robustness and overall efficiency.
  • Quality‑controlled, citation‑grounded data: Supervises exactly what to retrieve and how to compose, enabling scalable RFT.

Worked Mini‑Example (end‑to‑end):

  • Background b: ‘MRI glioma boundaries are fuzzy; current models miss edges due to uncertainty.’
  • Motivation m: ‘Find uncertainty‑aware boundary losses for medical image segmentation.’
  • Hierarchical search: CV → Medical Imaging → Segmentation → Boundary/Uncertainty cluster.
  • IR (step 1): Selects a boundary‑aware loss paper.
  ‱ HC (∆h_1): ‘Use boundary‑aware Dice in 3D U‑Net‑Transformer; evaluate with Dice, HD95 on BraTS; 5‑fold CV.’
  ‱ IR (step 2): Selects an uncertainty‑guided refinement paper.
  ‱ HC (∆h_2): ‘Add uncertainty‑guided boundary refinement to improve edge localization in low‑contrast regions.’
  ‱ Final h = ∆h_1 + ∆h_2.
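The loop in the mini‑example can be sketched end to end. The plan/retrieve/compose stubs below are toy stand‑ins for the MS‑IR‑7B and MS‑HC‑7B model calls (not the released API); only the control flow mirrors the framework:

```python
def plan(background):
    # Stand-in for motivation planning (the real system prompts an LLM).
    return f"motivation for: {background}"

def retrieve(background, deltas, motivation, candidates):
    # Stand-in for hierarchical search + inspiration retrieval:
    # pretend the j-th candidate is the best pick at step j.
    return candidates[len(deltas)]

def compose(background, deltas, motivation, inspiration):
    # Stand-in for bounded hypothesis composition (one delta per step).
    return f"delta from {inspiration}"

def moose_star(background, candidates, k):
    """k rounds of plan -> retrieve -> compose; the final hypothesis
    is the concatenation of all deltas."""
    motivation = plan(background)
    deltas = []
    for _ in range(k):
        inspiration = retrieve(background, deltas, motivation, candidates)
        deltas.append(compose(background, deltas, motivation, inspiration))
    return " + ".join(deltas)

h = moose_star("fuzzy MRI tumor boundaries",
               ["boundary-aware loss", "uncertainty-guided refinement"], k=2)
print(h)
```

Each round stays cheap because retrieval sees only a pruned candidate pool and composition emits only a small delta, which is the whole point of the decomposition.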

04Experiments & Results

The Test: What was measured and why

  • Inspiration Retrieval (IR): Accuracy of selecting the ground‑truth cited inspiration among hard negatives, because correct building blocks are essential.
  • Hypothesis Composition (HC): Quality via M3 rubric (Motivation, Mechanism, Methodology; each 0–4, total 12), because a good hypothesis must say why, how, and how to implement.
  • Search Efficiency: IR inference calls and proposed rank in a hierarchical tree vs a tournament baseline, because speed and accuracy together determine practical scalability.
  • Test‑time Scaling: Success rate vs inference budget, because real discovery often needs multiple inspirations and more compute should help if the method scales.

The Competition (Baselines)

  • R1‑DISTILLED‑QWEN‑7B as a strong general baseline for both IR and HC.
  • Brute‑force end‑to‑end sampling for P(h|b).
  • Tournament search for retrieval efficiency comparison.

The Scoreboard (with context)

  • IR Accuracy: 28.42% (baseline 7B) → 54.37% (MS‑IR‑7B). That’s like raising your test score from a middling C to a solid A‑ in a hard class with tricky distractors.
  • HC Total M3 (on perfect inspirations): baseline 4.34 → MS‑HC‑7B 5.08; adding bounded data nudges it further (≈5.16). Think of this as clearer motivations, sounder mechanisms, and more concrete methods.
  • HC under noisy inspirations (proxies): With bounded‑composition training, scores rise across Easy, Medium, and Hard tiers compared to the base model—evidence that semantic tolerance pays off.
  ‱ Hierarchical Search Efficiency: IR inference calls drop from 218.00 (tournament) to 67.78 (hierarchical); proposed rank also improves (987.76 → 813.40). Adding motivation helps further: simple (63.80, 767.64), detailed (63.05, 742.50). That’s about 3× fewer calls—like finding a book in minutes instead of an hour.
  ‱ Test‑time Scaling: On 109 cases (≈200 steps), MOOSE‑STAR steadily improves and reaches 100% coverage at ~6,000 inference calls, while brute‑force stalls at ~41.3% even after 9,499 samples. As problems require 2 or 3 inspirations, brute‑force collapses (≈36%, then ≈8%), proving it hits a ‘complexity wall,’ while MOOSE‑STAR keeps climbing.
  • Sampling Feasibility: End‑to‑end pass rates plunge toward 0% as k increases, creating a training deadlock. In contrast, HC per‑step pass rate is 47.33%, unlocking RFT data generation and stable training.

Surprising Findings

  • Despite retrieval being an out‑of‑distribution task (linking concepts never linked before), IR accuracy shows steady log‑linear gains with more data. This suggests the model is learning a general ‘logic of discovery,’ not just memorizing known links.
  • Training HC benefits from bounded data even when evaluated on perfect inspirations—robustness training improves overall reasoning, not just tolerance to noise.
  • A tiny planning step (motivation) delivers measurable efficiency gains without adding heavy computation, validating the low‑cost, high‑impact design choice.

05Discussion & Limitations

Limitations

  • Data Dependence: Performance leans on TOMATO‑STAR’s coverage and decomposition quality; gaps or noise in certain domains can limit retrieval and composition.
  • Assumptions: The theory uses uniqueness and a fixed or canonical order for inspirations; real discoveries may admit multiple valid sets and orders, which adds complexity.
  • Evaluation via LLM Rubrics: M3 relies on LLM judgments; while practical, this introduces potential bias and variance.
  • Tree Quality: Hierarchical search depends on embedding/clustering quality and balanced branching; poor clustering reduces the best‑case gains.
  • Resource Cost: Building TOMATO‑STAR (≈38,400 GPU hours) and training teachers/students requires significant compute.

Required Resources

  • Precomputed paper embeddings (e.g., SPECTER2), clustering to build the tree, and an IR/HC training setup with RFT.
  • Access to the released dataset, models (MS‑IR‑7B, MS‑HC‑7B), and codebase.

When NOT to Use

  • Domains with scarce or proprietary literature where inspirations cannot be retrieved or aligned.
  • Tasks requiring entirely de novo insights with no nearby inspirations (empty neighborhoods), where bounded composition has nothing to latch onto.
  • Scenarios needing formal guarantees of optimality; this is a heuristic, probabilistic search, not a proof engine.

Open Questions

  • How to natively support multiple valid inspiration sets and flexible sequencing without exploding complexity?
  • Can we automate calibration of the semantic tolerance window M per domain/problem?
  • How to further reduce LLM‑based evaluation bias (e.g., human‑in‑the‑loop or hybrid metrics)?
  • Can multi‑modal inspirations (figures, code, data) be integrated into the same hierarchical search and bounded composition framework?
  • How does co‑training IR and HC end‑to‑end (with decomposition constraints) affect long‑term scaling?

06Conclusion & Future Work

Three‑Sentence Summary

  • Training AI to generate scientific hypotheses directly from background knowledge is intractable if treated end‑to‑end because the search over inspirations explodes combinatorially.
  • MOOSE‑STAR overcomes this by planning a motivation, using hierarchical search for inspirations, and composing bounded hypothesis deltas—reducing complexity from exponential to roughly logarithmic in the best case.
  • With the TOMATO‑STAR dataset and specialized training, MOOSE‑STAR scales at test time while brute‑force methods stall, turning discovery into a guided, manageable search.

Main Achievement

  • A unified, theoretically grounded training and inference recipe that makes P(h|b) learnable and scalable through motivation planning, hierarchical retrieval, and bounded composition, validated by strong empirical gains.

Future Directions

  • Support multiple valid inspiration orderings, adaptive tolerance windows, and richer multi‑modal inspirations; reduce reliance on LLM evaluators; co‑train IR and HC with tighter coupling.

Why Remember This

  • MOOSE‑STAR reframes scientific discovery for AI: not as a blind leap, but as a sequence of smart, teachable moves. This shift—from guessing to guided composition—opens a practical path toward AI that helps humans invent faster, safer, and more reliably.

Practical Applications

  ‱ AI co‑pilot for literature review that not only summarizes but proposes stepwise, testable hypotheses.
  ‱ Drug discovery ideation by composing known mechanisms and targets into novel therapeutic hypotheses.
  ‱ Materials design suggestions (e.g., battery chemistry) guided by hierarchical retrieval of related catalysts and structures.
  ‱ Clinical workflow innovation by combining prior methods into new protocols with uncertainty‑aware refinements.
  ‱ Automated benchmarking of research ideas by composing deltas and evaluating them with structured rubrics.
  ‱ R&D road‑mapping tools that generate motivations and candidate inspiration sets for research teams.
  ‱ Educational assistants that teach hypothesis building by showing how each delta links to specific inspirations.
  ‱ Enterprise knowledge mining to generate process‑improvement hypotheses from internal documents and public literature.
  ‱ Government or NGO policy ideation by composing inspirations from economics, epidemiology, and behavioral science.
  ‱ Auto‑generation of related‑work sections aligned to each hypothesis delta with explicit citation grounding.
Tags: scientific discovery, hypothesis generation, P(h|b), combinatorial complexity, hierarchical search, inspiration retrieval, hypothesis composition, bounded composition, motivation planning, RFT (rejection sampling fine‑tuning), semantic tolerance, SPECTER2 embeddings, search tree, test‑time scaling, TOMATO‑STAR dataset