dVoting: Fast Voting for dLLMs | How I Study AI

dVoting: Fast Voting for dLLMs

Intermediate
Sicheng Feng, Zigeng Chen, Xinyin Ma et al. · 2/12/2026
arXiv

Key Summary

  • Diffusion Large Language Models (dLLMs) can write many parts of an answer at once, not just left to right like usual chatbots.
  • The paper spots a simple pattern: most words stay the same across multiple tries, and only a few shaky words change and decide whether the final answer is right or wrong.
  • DVOTING keeps the words that many tries agree on and only resamples the uncertain spots, using dLLMs’ special ā€˜remask and regenerate anywhere’ ability.
  • This approach is training-free, so you don’t need to retrain the model; it just changes how you decode at test time.
  • Across math, science, and general knowledge benchmarks, DVOTING raises accuracy by about 3–15% while using far fewer steps than other test-time methods.
  • Compared to standard majority voting, DVOTING is 1.1–4.4× faster on LLaDA and 1.0–2.7× faster on Dream while being as accurate or better.
  • It even improves RL-fine-tuned dLLMs (like LLaDA-1.5), showing the method stacks well with training-time upgrades.
  • A new metric, NUPR, shows 50%+ of positions are shared by at least two samples, proving there’s lots of token-level redundancy to save compute.
  • A benefits-per-cost score (BPC) shows DVOTING moves the performance–efficiency frontier forward, especially for long answers.
  • Overall, DVOTING is a simple recipe: sample, find stable tokens, remask only the wobbly ones, repeat briefly, and vote to finish.

Why This Research Matters

DVOTING makes smarter AI answers cheaper and faster by avoiding wasted effort on tokens that are already stable across attempts. This lowers latency and cloud bills for services that must handle millions of queries, from tutoring apps to customer support bots. It also makes on-device reasoning more practical by reducing compute needs without retraining the model. In education scenarios, it delivers clearer math and science explanations with better accuracy using the same hardware. Its training-free nature means teams can adopt it quickly, stacking DVOTING on top of existing dLLMs and even RL-fine-tuned models. By pushing the performance–efficiency frontier forward, DVOTING helps democratize access to strong reasoning models.

Detailed Explanation


01Background & Problem Definition

You know how when you and your friends tell the same story, most parts match, but a few details differ and cause arguments about what really happened? That’s what happens when AI models try several times to answer a question: most words match, but a few can flip the whole result.

šŸž Hook: Imagine writing a paragraph with classmates, all at once, where everyone fills in different blanks at the same time, and then you compare versions. 🄬 The Concept (Diffusion LLMs – dLLMs): dLLMs are language models that can fill in many blanks in parallel instead of writing strictly left-to-right.

  • How it works: (1) Start with the question and a lot of [MASK] slots. (2) The model guesses words for all masks at once. (3) It keeps confident words, remasks the iffy ones, and guesses again. (4) Repeat a few times. (5) Output the final text.
  • Why it matters: Without this, you’re stuck waiting for word-by-word writing; with it, you can fix parts anywhere and go faster. šŸž Anchor: Like a crossword where you fill easy clues first, lock them in, and then revisit the tricky squares.
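The fill-in-the-blanks loop above can be sketched with a toy stand-in for the model. Here `fake_model`, the tiny vocabulary, and the random confidence scores are all hypothetical (a real dLLM would run a denoising forward pass); the point is only to show the mask → guess → keep-the-confident → remask cycle:

```python
import random

MASK = "[MASK]"

def fake_model(seq):
    """Stand-in for a dLLM: propose a token and a confidence for every
    masked slot. A real model would run one denoising forward pass."""
    vocab = ["the", "cat", "sat", "on", "mat"]
    return {i: (random.choice(vocab), random.random())
            for i, tok in enumerate(seq) if tok == MASK}

def denoise(seq, keep_threshold=0.5, max_steps=20):
    """Parallel fill-in: commit confident guesses, remask the rest."""
    seq = list(seq)
    for _ in range(max_steps):
        proposals = fake_model(seq)
        if not proposals:
            break  # nothing masked: the draft is complete
        for i, (tok, conf) in proposals.items():
            if conf >= keep_threshold:  # confident -> lock it in
                seq[i] = tok
            # low-confidence slots stay [MASK] for the next pass
    # force-fill any stragglers so we always end with a full draft
    for i, (tok, _) in fake_model(seq).items():
        seq[i] = tok
    return seq

random.seed(0)
draft = denoise([MASK] * 6)
print(draft)
```

Every pass touches all remaining masks at once, which is what distinguishes this from left-to-right decoding.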
  1. The World Before: Most language models used autoregressive decoding—writing from left to right, one token at a time. That’s simple but slow to scale at test time. When people wanted better reasoning, they either trained the model more (expensive) or sampled more answers and picked the best (also expensive).

  2. The Problem: Test-time scaling (making the model think longer by sampling many attempts) does help reasoning, but the cost explodes: you’re regenerating full answers over and over. In standard majority voting, you produce several complete responses even though most of the words in those responses repeat across tries.

šŸž Hook: You know how you don’t re-bake the entire pizza if just a few slices need more cheese? 🄬 The Concept (Test-time scaling): It’s a way to get better answers by letting the model try multiple times at inference and then pick or combine results.

  • How it works: (1) Run several independent generations. (2) Score or compare them. (3) Choose a final answer (often by majority vote).
  • Why it matters: It boosts accuracy without changing the model’s training, but it can be wasteful. šŸž Anchor: Like taking multiple photos and choosing the sharpest one.
  3. Failed Attempts: Majority voting and self-consistency work but burn lots of compute because they regenerate entire sequences every time. Other dLLM methods like HEX and RFG help, but they may need many samples or an extra guiding model. Reinforcement learning (RL) fine-tuning can improve results, but needs special data and training time, and recent evidence suggests RL often improves the efficiency of sampling rather than the raw ability of the model.

šŸž Hook: If five friends tell almost the same story, why re-ask all of them from scratch? Why not only double-check the parts they disagree on? 🄬 The Concept (Voting mechanism): Aggregating multiple tries and picking the most agreed-on answer is called voting.

  • How it works: (1) Generate multiple answers. (2) Count which final answer appears most. (3) Output that answer.
  • Why it matters: It harnesses consensus, but wastes effort by recreating repeated text. šŸž Anchor: A group choosing a movie where four out of five pick the same title.
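As a baseline for comparison, plain majority voting is just counting final answers. A minimal sketch (illustrative, not the paper's code):

```python
from collections import Counter

def majority_vote(answers):
    """Standard self-consistency: given fully regenerated answers,
    return the most common one and its vote share."""
    counts = Counter(answers)
    best, votes = counts.most_common(1)[0]
    return best, votes / len(answers)

answer, share = majority_vote(["12", "12", "13", "12", "11"])
print(answer, share)  # -> 12 0.6
```

Note that every one of the five answers here was generated from scratch, which is exactly the waste DVOTING targets.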
  4. The Gap: Nobody was efficiently cutting out the redundancy in dLLM test-time sampling. Because dLLMs can remask any positions, they are uniquely suited to reuse what’s stable and only regenerate what’s uncertain. But this ability hadn’t been turned into a fast, practical voting strategy.

šŸž Hook: You know how in a jigsaw puzzle you don’t rebuild the whole puzzle when a piece is wrong—you just swap the mismatched pieces? 🄬 The Concept (Remasking): Remasking means picking specific spots to hide again and re-predict.

  • How it works: (1) Detect which tokens are shaky. (2) Mask them. (3) Refill only those. (4) Keep the solid parts untouched.
  • Why it matters: It saves time and focuses compute where it matters. šŸž Anchor: Erasing just the wrong sentence in your essay and rewriting it, not the whole page.
  5. Real Stakes: More efficient test-time methods make smarter AI more accessible and cheaper to run: helpful for homework help, tutoring apps, on-device assistants, and services that answer millions of questions daily. If you can get the same or better accuracy with fewer steps, that’s less cost, lower latency, and greener computing.

šŸž Hook: If most tokens across tries are the same, there must be an easy way to save effort. 🄬 The Concept (Token consistency analysis): It checks where tokens agree across samples to find which positions are reliable or uncertain.

  • How it works: (1) Generate a few drafts. (2) For each token position, count how many drafts use the same token. (3) Mark high-agreement tokens as ā€œkeep,ā€ low-agreement as ā€œremask.ā€
  • Why it matters: Without it, you can’t focus your compute on the few tokens that decide correctness. šŸž Anchor: Like underlining the words everyone said the same way in a story and circling the words they didn’t.

Finally, the authors quantify the redundancy with a new metric showing that more than half the token positions are shared by at least two out of five samples, proving there’s a lot of overlap to exploit. That’s why DVOTING exists: to keep the stable stuff and quickly fix the few wobbly parts, then stop early once the answer stabilizes.

02Core Idea

šŸž Hook: Imagine baking a tray of cookies and noticing that only two cookies look undercooked—you wouldn’t re-bake the entire tray, just the iffy ones. 🄬 The Concept (DVOTING): DVOTING is a test-time decoding method that boosts reasoning by repeatedly keeping tokens that multiple samples agree on and regenerating only the uncertain tokens using dLLMs’ remasking ability.

  • How it works: (1) Sample an answer. (2) Sample again but keep the tokens that were consistent across tries; remask the shaky ones. (3) Repeat briefly. (4) Stop early when the answers agree and vote to finalize.
  • Why it matters: It avoids resampling the whole sequence every time, cutting cost while keeping or improving accuracy. šŸž Anchor: Like correcting just the misspelled words across drafts instead of rewriting the whole essay.
  1. The ā€œAha!ā€ in one sentence: Most of the work in multi-sample decoding is redundant because many tokens repeat; so only resample the uncertain positions, then vote.

  2. Multiple Analogies:

  • Classroom editing: The class writes a paragraph; you keep sentences most students wrote the same way and only rework the sentences with disagreements.
  • Map fixing: A city map is mostly right; you only revisit the few streets people argued about, not redraw the whole city.
  • Quality control: In a batch of products, most pass checks; inspectors only recheck the borderline items, not retest everything.
  3. Before vs After:
  • Before: Majority voting regenerates complete answers each time, burning compute on stable tokens. HEX aggregates across schedules but needs many samples, and RFG needs a guiding model. RL improves sampling but demands training.
  • After: DVOTING exploits token-level agreement to reuse stable context and spend compute only on uncertain spots. You get similar or better accuracy with far fewer steps and no extra training.

šŸž Hook: You know how counting how many friends picked the same movie tells you how ā€˜confident’ the group is? 🄬 The Concept (Why it works – intuition): Agreement is a proxy for confidence; if many independent samples predict the same token (or answer), it’s likely right. By remasking only low-agreement tokens, DVOTING concentrates exploration where it matters and reduces noise elsewhere.

  • How it works (intuitively): (1) Agreement → keep; disagreement → retry. (2) Reusing strong context stabilizes later tokens. (3) Early stopping kicks in when the final answers match enough times.
  • Why it matters: Without this focus, test-time scaling pays repeatedly for the same information. šŸž Anchor: If four out of five drafts say ā€œParis is the capital of France,ā€ you don’t keep re-checking ā€œParisā€; you check the surrounding reasoning if anything still seems off.
  4. Building Blocks:
  • Consistency detector: Scores how often each token and each candidate answer repeats across samples.
  • Remask selector: Picks only low-consistency positions to regenerate.
  • Parallel decoder (with entropy threshold): Updates many tokens at once and automatically ā€˜locks in’ low-entropy (confident) positions.
  • Early stopper + voter: Ends the loop once candidate answers converge and outputs the majority.

šŸž Hook: Think of a thermometer that tells you when soup is ā€˜done enough’ so you can stop heating. 🄬 The Concept (Early stopping via answer consistency): If several runs already yield the same final answer, there’s no need to keep sampling.

  • How it works: (1) Track answers across runs. (2) If the top answer reaches a target vote share, stop. (3) Return that answer.
  • Why it matters: Saves time on easy problems and shifts effort to harder ones. šŸž Anchor: When three friends in a row say the movie was ā€œSpider-Man,ā€ you stop polling.
  5. What breaks without DVOTING: Without token-level focus, you keep paying to regenerate stable text; without early stopping, you overspend on easy tasks; without parallel remasking, you can’t surgically fix only the uncertain spots. DVOTING ties all three together into a practical, fast, training-free booster for dLLM reasoning.

03Methodology

At a high level: Prompt → Initial parallel generation → Consistency analysis → Selective remasking → Parallel regeneration → Early stop and vote → Final answer.

Step 0: Ingredients and Setup

  • Inputs: a prompt p, a dLLM fθ, a generation length L, and a maximum number of samples n (typically up to 5).
  • Output: a final answer chosen by voting from a small pool of refined candidates.

šŸž Hook: You know how you start a worksheet by filling easy blanks first and circle the tricky ones for later? 🄬 The Concept (Semi-autoregressive decoding): It’s a decoding style that fills tokens in blocks, letting the model commit confident tokens and revisit others.

  • How it works: (1) Divide positions into blocks. (2) Predict many tokens at once. (3) Lock in confident ones, continue for the rest.
  • Why it matters: It balances speed and accuracy, and plays nicely with remasking. šŸž Anchor: Like answering a test in sections, then coming back to hard questions.
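A rough sketch of the block-wise filling described above, with a hypothetical `fill_block` callback standing in for one parallel denoising pass over a block (the names and block size are illustrative):

```python
def block_decode(length, block_size, fill_block):
    """Semi-autoregressive sketch: walk left to right in blocks, but
    fill all positions inside a block in parallel via `fill_block`."""
    seq = [None] * length
    for start in range(0, length, block_size):
        block = range(start, min(start + block_size, length))
        # one "parallel" pass fills the whole block at once
        for i, tok in zip(block, fill_block(seq, list(block))):
            seq[i] = tok
    return seq

# toy filler: just names each requested position
toy = block_decode(10, 4, lambda seq, idx: [f"tok{i}" for i in idx])
print(toy)
```

The block boundary is what gives the "answer a section, then move on" behavior while keeping parallelism inside each section.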

šŸž Hook: Imagine a dimmer switch for confidence: bright means sure, dim means unsure. 🄬 The Concept (Entropy-threshold parallel decoding): Entropy measures uncertainty; low entropy means the model is confident.

  • How it works: (1) For each token position, compute entropy of the predicted distribution. (2) If entropy < α (say 0.3), ā€˜lock in’ that token. (3) Remask only higher-entropy spots.
  • Why it matters: It automates which tokens to keep in each denoising step. šŸž Anchor: If you’re 95% sure of an answer, you ink it; if you’re 55% sure, you pencil it.
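The entropy gate can be sketched directly; the α = 0.3 default mirrors the example above, and the toy distributions are made up:

```python
import math

def entropy(probs):
    """Shannon entropy of a token distribution (in nats)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def lock_in(position_probs, alpha=0.3):
    """Keep positions whose predictive entropy is below alpha;
    remask the rest."""
    keep, remask = [], []
    for i, probs in enumerate(position_probs):
        (keep if entropy(probs) < alpha else remask).append(i)
    return keep, remask

# position 0 is confident (peaked), position 1 is uncertain (flat-ish)
keep, remask = lock_in([[0.97, 0.03], [0.55, 0.45]])
print(keep, remask)  # confident position kept, uncertain one remasked
```

A peaked distribution like [0.97, 0.03] has entropy ā‰ˆ 0.13 nats (below α), while [0.55, 0.45] sits near the 2-way maximum of ln 2 ā‰ˆ 0.69 and gets remasked.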

Step 1: First Sample (Warm Start)

  • What happens: Initialize x with prompt + masked generation slots. Run the dLLM’s denoising steps; commit low-entropy tokens each step until a full draft is done. Save this draft.
  • Why it exists: You need a reference sample to compare against later drafts for consistency.
  • Example: On a GSM8K math word problem, the first draft may already get the final number right or be close.

šŸž Hook: If five students hand in essays, you can spot repeating sentences. 🄬 The Concept (Token consistency analysis): Compares tokens position-by-position across drafts to find which positions agree.

  • How it works: (1) Collect tokens across runs for each position. (2) Count frequency of each token. (3) Mark positions with high repeat rates as stable; others as uncertain.
  • Why it matters: It guides where to remask and where to keep. šŸž Anchor: If four drafts say ā€œTherefore, x=12,ā€ that sentence is likely trustworthy.
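A minimal sketch of position-wise consistency analysis over equal-length drafts; the `min_agree` cutoff of 2 is an illustrative choice, not the paper's exact rule:

```python
from collections import Counter

def consistency(drafts, min_agree=2):
    """Position-wise agreement across equal-length drafts: return the
    majority token at each stable position plus the uncertain positions."""
    stable, uncertain = {}, []
    for pos in range(len(drafts[0])):
        counts = Counter(d[pos] for d in drafts)
        tok, freq = counts.most_common(1)[0]
        if freq >= min_agree:
            stable[pos] = tok      # keep this token as-is
        else:
            uncertain.append(pos)  # remask and re-predict this slot
    return stable, uncertain

drafts = [list("abcx"), list("abdy"), list("abcz")]
stable, uncertain = consistency(drafts)
print(stable, uncertain)
```

Here positions 0–2 reach agreement (at least two drafts match), so only the final, fully contested position would be remasked.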

šŸž Hook: When many kids pick the same jigsaw piece for a spot, you keep it in place. 🄬 The Concept (Non-Unique Position Rate – NUPR): A metric that measures how often positions repeat the same token across samples.

  • How it works: (1) For K samples, mark a position non-unique if at least k of them share the same token there. (2) Average this fraction across positions/questions.
  • Why it matters: High NUPR means lots of redundancy you can safely reuse. šŸž Anchor: The paper reports NUPR@2 ≈ 50%+, meaning over half the positions are shared by at least two samples.
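NUPR as described above reduces to a short computation; this sketch assumes equal-length token sequences:

```python
from collections import Counter

def nupr(drafts, k=2):
    """Non-Unique Position Rate: fraction of positions where at least
    k of the samples share the same token."""
    total = len(drafts[0])
    hits = 0
    for pos in range(total):
        counts = Counter(d[pos] for d in drafts)
        if counts.most_common(1)[0][1] >= k:
            hits += 1
    return hits / total

drafts = [list("abcx"), list("abdy"), list("abcz")]
print(nupr(drafts, k=2))  # 3 of 4 positions shared by >=2 samples -> 0.75
```

Raising k tightens the definition: a position counts only if more samples coincide, so NUPR@3 ≤ NUPR@2 by construction.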

Step 2: Selective Remasking

  • What happens: Use consistency scores to set a boolean mask m over positions. Stable tokens (high agreement and/or low entropy) are kept; uncertain tokens are remasked.
  • Why it exists: This is the heart of the method—only regenerate what’s wobbly, saving compute.
  • Example: In a reasoning chain, shared steps like ā€œLet’s convert to minutesā€ are kept; the contested arithmetic step is remasked.

Step 3: Parallel Regeneration with Thresholding

  • What happens: Run another denoising pass. At each step, commit low-entropy tokens and keep remasking uncertain ones until the new draft is complete.
  • Why it exists: It leverages dLLMs’ parallel fill-in to quickly fix multiple uncertain spots at once.
  • Example: Two arithmetic tokens are corrected while the rest stays unchanged.

šŸž Hook: You stop asking more people once you hear the same answer enough times. 🄬 The Concept (Voting for final answer + early stopping): Aggregates complete candidate answers and stops when they converge.

  • How it works: (1) After each sample, compute the most frequent final answer and its vote share. (2) If share crosses a target or stability criterion, stop. (3) Output the majority answer.
  • Why it matters: Prevents overspending on easy problems and allocates tries to harder ones when needed. šŸž Anchor: If 4/5 drafts agree on ā€œ54 cars,ā€ you wrap up.
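The vote-and-stop loop can be sketched as follows; the 0.6 target share and the two-sample minimum are illustrative settings, not the paper's tuned values:

```python
from collections import Counter

def vote_with_early_stop(sample_fn, max_samples=5, target_share=0.6):
    """Draw answers one at a time; stop as soon as the leading answer's
    vote share reaches the target. Returns (answer, samples_used)."""
    answers = []
    for _ in range(max_samples):
        answers.append(sample_fn())
        best, votes = Counter(answers).most_common(1)[0]
        if len(answers) >= 2 and votes / len(answers) >= target_share:
            return best, len(answers)  # converged early
    return Counter(answers).most_common(1)[0][0], len(answers)

# toy sampler that always answers "54 cars": converges after 2 samples
answer, used = vote_with_early_stop(lambda: "54 cars")
print(answer, used)  # -> 54 cars 2
```

On an easy question the loop exits after two matching samples; a hard, noisy question runs up to the sample cap before voting.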

Step 4: Iterate Briefly, Then Finish

  • What happens: Repeat selective remasking and regeneration up to n times or until convergence. Return the voted answer.
  • Why it exists: A few refinement rounds often fix the key errors without rebuilding the whole response every time.
  • Example: The method might take 2 runs on easy ARC-C questions and ~5 on tougher GSM8K items.
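Putting Steps 1–4 together, here is a compact sketch of the whole loop. `generate`, `refine`, and `extract` are stand-ins for the actual dLLM calls, and the agreement rule (at least two drafts matching per position) is a simplification of the paper's criteria:

```python
from collections import Counter

def dvoting(generate, refine, extract, n=5, target=0.6):
    """Sketch of the overall loop: sample, find stable tokens, remask
    only the uncertain positions, and stop early once answers converge.
    Assumes all drafts have equal length."""
    drafts = [generate()]               # Step 1: warm-start draft
    answers = [extract(drafts[0])]
    for _ in range(n - 1):
        # Step 2: positions where no two drafts agree yet are uncertain
        uncertain = [p for p in range(len(drafts[0]))
                     if Counter(d[p] for d in drafts).most_common(1)[0][1] < 2]
        # Step 3: regenerate only the uncertain slots of the latest draft
        drafts.append(refine(list(drafts[-1]), uncertain))
        answers.append(extract(drafts[-1]))
        # Step 4: early stop once the leading answer's share is high enough
        best, votes = Counter(answers).most_common(1)[0]
        if votes / len(answers) >= target:
            return best
    return Counter(answers).most_common(1)[0][0]

# deterministic toy: the answer slot flips once, then stabilizes
answers_stream = iter(["13", "12", "12", "12", "12"])

def generate():
    return ["x", "=", next(answers_stream)]

def refine(draft, uncertain):
    if 2 in uncertain:            # only the answer slot gets resampled
        draft[2] = next(answers_stream)
    return draft

result = dvoting(generate, refine, lambda d: d[-1])
print(result)
```

In the toy run the shared prefix "x =" is locked after two drafts, only the answer token is ever regenerated, and the loop stops as soon as two of three candidates agree.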

What breaks without each step:

  • No consistency analysis → you can’t know what to keep vs regenerate.
  • No selective remasking → you waste compute on stable tokens.
  • No entropy thresholding → you lock in tokens too early or too late.
  • No early stopping → you overspend on easy cases.

Secret Sauce

  • Token-level agreement serves as a confident ā€œkeepā€ signal, while dLLMs’ unique remasking lets you surgically regenerate only the uncertain positions. This combo turns a blunt, regenerate-many-times approach into a focused, refine-a-few-times process that’s both faster and more accurate in practice.

šŸž Hook: Choosing one great photo from a small, sharpened set beats sorting through a huge, blurry album. 🄬 The Concept (Benefits per Cost – BPC): A score for performance gain per extra decoding step over the base model.

  • How it works: (1) Compute accuracy lift over baseline. (2) Divide by added denoising steps. (3) Higher is better.
  • Why it matters: It fairly compares ā€˜how much better’ per unit of compute. šŸž Anchor: DVOTING shows stronger BPC than HEX and majority voting, especially on long generations.
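As described, BPC divides the accuracy lift by the extra denoising steps. The exact formula in the paper may differ, so treat this as the stated idea only; the numbers below are made up for illustration:

```python
def bpc(acc_method, acc_base, steps_method, steps_base):
    """Benefits per cost: accuracy lift over the base model divided by
    the extra denoising steps spent to get it. Higher is better."""
    extra = steps_method - steps_base
    if extra <= 0:
        raise ValueError("method must spend extra steps over the base")
    return (acc_method - acc_base) / extra

# hypothetical: +6 accuracy points for 300 extra steps
print(bpc(acc_method=84.0, acc_base=78.0, steps_method=556, steps_base=256))  # -> 0.02
```

A method that buys the same lift with fewer extra steps scores a higher BPC, which is what "moving the performance–efficiency frontier" means here.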

04Experiments & Results

The Test: The authors evaluate whether DVOTING improves accuracy and efficiency across math (GSM8K, MATH500), science (ARC-C, GPQA), and general knowledge (MMLU). They measure Pass@1 accuracy (did the top answer match the ground truth?) and the total number of denoising steps (a proxy for cost/latency). They also test across different generation lengths (128, 256, 512) and study robustness to various hyperparameters.

The Competition:

  • Original dLLM results (no extra test-time scaling)
  • Majority Voting (standard self-consistency with 5 samples)
  • HEX (aggregating over masking schedules; requires many samples)
  • RFG (adds guidance from a fine-tuned policy model)
  • RL-enhanced models (like d1, wd1, IGPO) for reference

The Scoreboard with Context:

  • LLaDA-8B-Instruct:
    • GSM8K: DVOTING lifts accuracy by about 6.22%–7.66% over the original model and reaches performance competitive with or better than majority voting while using 1.1–4.4× fewer steps. Versus HEX, DVOTING is 5.5–22.1× faster for similar or better accuracy.
    • MATH500: Gains of 4.40%–7.20% over the original; similar or better accuracy than majority voting with far fewer steps; up to ~2.2× speedup vs RFG and much larger vs HEX.
    • ARC-C, GPQA, MMLU: DVOTING improves by 3.16%–14.84% (ARC-C), 3.57%–4.73% (GPQA), and 4.83%–5.74% (MMLU). It maintains a strong performance–efficiency trade-off.
  • Dream-7B-Instruct (AR-initialized dLLM): Similar story: DVOTING is 1.0–2.7× faster than majority voting, 1.0–1.8× faster than RFG, and 5.0–13.4× faster than HEX, with strong accuracy.
  • RL-enhanced LLaDA-1.5: DVOTING still adds performance while keeping extra steps modest, showing it complements training-time improvements.

Surprising and Insightful Findings:

  • Token Redundancy is High: The new NUPR metric shows NUPR@2 ≈ 0.50+ and NUPR@3 ≈ 0.20 across datasets, meaning that over half of positions match across at least 2 of 5 samples. This validates the idea that most tokens don’t need to be regenerated every time.
  • Consistency Predicts Difficulty: Problems where both baseline and voting are correct tend to have high voting consistency (e.g., many 4/5 or 5/5 cases). Harder problems cluster at low consistency. DVOTING uses early stopping to save compute on ā€˜easy’ items.
  • Pareto Frontier Shift via BPC: Using the benefits-per-cost metric, DVOTING consistently offers more ā€˜accuracy per extra step’ than majority voting, RFG, and especially HEX, an indicator that the method meaningfully advances the performance–efficiency frontier.

Ablations (What matters and how robust?):

  • Sampling upper bound n: Increasing n improves accuracy up to saturation. DVOTING scales compute gracefully, echoing known test-time scaling laws.
  • Block size (semi-autoregressive granularity): DVOTING’s gains hold across 4–64, suggesting robustness to how you chunk positions.
  • Entropy threshold α: Reasonable ranges (0.1–0.7) keep strong gains; very high α (0.9) may drop performance but still beats the unscaled baseline. This shows the approach isn’t hypersensitive to α.

Concrete Examples:

  • In a GSM8K problem where the base model mis-assumes a five-day week, DVOTING focuses remasking on the faulty reasoning steps. After a few iterations, multiple candidates converge on the correct daily/weekly logic, and voting finalizes the right answer.
  • For an easy ARC-C multiple-choice item, DVOTING recognizes high consistency early and stops after only two samples, saving steps versus standard voting.

Takeaway of Results:

  • Across varied datasets and two families of dLLMs, DVOTING consistently improves accuracy and reduces compute compared to naive multi-sample voting. It often matches or beats methods that require more samples (HEX) or extra models (RFG), and even adds value on top of RL-enhanced models. The evidence suggests the core intuition—keep stable tokens, only fix uncertain ones—pays off broadly.

05Discussion & Limitations

Limitations:

  • dLLM-Specific Advantage: DVOTING depends on the remask-anywhere property of dLLMs. Purely autoregressive models can’t as easily ā€˜surgically’ fix interior tokens without special mechanisms, so direct transfer requires adaptation.
  • Threshold and Criteria Choices: The method uses consistency-based rules and an entropy threshold α. While ablations show robustness, extreme settings can hurt accuracy or savings. Calibrating these controls in new domains may require light tuning.
  • Worst-Case Overhead: On very hard problems with stubborn disagreement, DVOTING may run up to the sample limit n and still not converge early—though it still tends to be more efficient than regenerating full sequences every time.
  • Non-Determinism and Safety: Like most sampling-based strategies, DVOTING inherits randomness. For high-stakes settings (medical, legal), extra verification (like a separate checker) is still needed.

Required Resources:

  • A dLLM supporting iterative denoising and remasking (e.g., LLaDA, Dream, others with masked diffusion decoding).
  • Standard GPU/accelerator resources able to run multiple short sampling rounds (usually fewer and cheaper than standard self-consistency with full regenerations).
  • A simple voting and consistency-analysis wrapper; no extra training data or auxiliary models are required.

When Not to Use:

  • Ultra-low-latency single-shot tasks where even minimal extra sampling is forbidden and the base model is already accurate.
  • Very short outputs (like one token) where token-level redundancy savings are minimal.
  • Applications that require strict determinism or fixed decoding, unless you add verification post-processing.

Open Questions:

  • Adaptive Criteria: Can we learn the best remasking thresholds per question, per step, or even per token using lightweight heuristics or meta-learning without training the base model?
  • Confidence Calibration: How well do token-level agreement and entropy correspond to true correctness across domains? Can calibration further improve early stopping?
  • Structured Voting: Beyond majority voting on final answers, can we vote on intermediate steps to repair reasoning chains more reliably?
  • Beyond Text: How does remask-based test-time scaling transfer to multimodal dLLMs (text+vision+audio) where partial reuse might be even more valuable?
  • Theory and Guarantees: Can we formalize convergence properties—how many iterations suffice under certain agreement models—and bound the expected compute savings?

Overall, DVOTING is a practical, training-free way to turn dLLMs’ unique decoding flexibility into real-world efficiency and accuracy gains, while leaving room for smarter, adaptive criteria and broader modalities.

06Conclusion & Future Work

In three sentences: DVOTING is a training-free decoding strategy for diffusion LLMs that keeps the tokens multiple samples agree on, remasks only the uncertain ones, and stops early when answers converge. Because dLLMs can regenerate any positions in parallel, this focused refinement cuts waste, lifts accuracy, and reduces steps compared to standard majority voting and other baselines. Extensive tests on math, science, and general knowledge show consistent accuracy gains with strong performance–efficiency trade-offs, even on RL-tuned models.

Main Achievement: Turning a simple observation—most tokens repeat across samples—into a fast, general-purpose voting strategy that exploits dLLMs’ remasking to reduce redundancy and cost while improving reasoning quality.

Future Directions: Learn adaptive thresholds and remask policies per instance; extend to multimodal settings; add verifier-guided or step-level voting; and develop theory for convergence and compute savings guarantees. Integrations with lightweight checkers or program-of-thought verifiers could further raise reliability without heavy training.

Why Remember This: DVOTING shows that you don’t always need to train a bigger model or collect more data; sometimes, a smarter way to use the model you already have unlocks better answers faster. It reframes test-time scaling for dLLMs from ā€˜generate everything again’ to ā€˜fix only what’s uncertain,’ pushing the performance–efficiency frontier forward.

Practical Applications

  • Plug DVOTING into existing dLLM inference pipelines to reduce compute for self-consistency without retraining.
  • Enable cost-effective math tutoring bots that refine only uncertain steps in long solutions and stop early on easy problems.
  • Deploy on-device assistants that use selective remasking to save battery and deliver faster responses.
  • Run large-scale Q&A services with lower latency by capping samples and using early stopping based on answer convergence.
  • Improve reliability in coding assistants by remasking only ambiguous tokens in generated code blocks.
  • Combine DVOTING with lightweight verifiers (e.g., unit checks for math) to focus regeneration where verification fails.
  • Use DVOTING as a drop-in upgrade to RL-fine-tuned dLLMs to squeeze extra accuracy without extra training.
  • Apply DVOTING to multimodal dLLMs (text+vision) to refine just the uncertain text spans tied to ambiguous visual regions.
  • Adopt adaptive thresholds per task domain (e.g., stricter for science) to balance accuracy and speed automatically.
  • Integrate DVOTING into evaluation frameworks (like simple-evals) to standardize efficient test-time scaling experiments.
#diffusion large language models · #remasking · #test-time scaling · #self-consistency voting · #token consistency · #entropy-threshold decoding · #semi-autoregressive decoding · #non-unique position rate (NUPR) · #benefits per cost (BPC) · #denoising steps · #parallel decoding · #majority voting · #reinforcement learning (RL) · #reasoning benchmarks · #efficient inference