
G-LNS: Generative Large Neighborhood Search for LLM-Based Automatic Heuristic Design

Intermediate
Baoyun Zhao, He Wang, Liang Zeng Ā· 2/9/2026
arXiv

Key Summary

  • This paper teaches an AI to invent its own 'break-and-fix' strategies (called LNS operators) for tough puzzles like delivery routes and city tours.
  • Instead of only tuning small moves or simple rules, the AI designs both how to destroy a solution and how to repair it so they work as a team.
  • A special score table (the synergy matrix) tracks which destroy and repair pairs cooperate best, so the AI breeds better pairs over time.
  • In tests on TSP and CVRP, the new AI-made strategies beat other LLM-based methods and even strong classic solvers under tight time budgets.
  • On larger routing problems, the method escapes traps (local optima) more easily because it can reshape big chunks of a solution at once.
  • The framework is sample-efficient: it uses far fewer LLM generations (200 vs. 1000) but still finds higher-quality operators.
  • The learned operators generalize well to new and very different datasets (TSPLib and CVRPLib), not just the ones they were evolved on.
  • Ablation studies show each ingredient (mutation, crossovers, adaptive scoring, synergy-aware pairing) clearly boosts performance.
  • The method uses simulated annealing to sometimes accept worse moves early on, which helps exploration without getting stuck.
  • Overall, G-LNS turns LLMs into structural algorithm designers, not just parameter tweakers, opening a door to broader algorithm invention.

Why This Research Matters

Better routing and scheduling means faster deliveries, less fuel, and lower costs—big wins for companies and the environment. By teaching AI to design structural strategies, not just tune parameters, we unlock smarter, more adaptable solvers for real-world logistics. The method’s strong generalization means it’s less likely to break when the data looks different tomorrow. Because it needs fewer LLM calls, organizations can get high-quality operators without huge compute bills. As supply chains grow and cities get busier, even small percent gains translate into major savings and reduced emissions. The same ideas can ripple into scheduling factories, planning maintenance, or allocating resources in emergencies.

Detailed Explanation


01Background & Problem Definition

šŸž Hook: Imagine you’re organizing a giant field trip for many buses. Every student must be picked up once, each bus has a seat limit, and you want the total driving distance to be as small as possible. Sounds simple—until you try it on a city with hundreds of stops!

🄬 The Concept: Combinatorial Optimization Problems (COPs)

  • What it is: COPs are puzzles where you must choose the best combo of pieces (like routes or schedules) from a huge number of possibilities.
  • How it works:
    1. You’re given items (cities, customers, jobs) and rules (visit once, respect capacity, minimize cost).
    2. You try different combinations that follow the rules.
    3. You keep the one with the smallest cost.
  • Why it matters: Without good strategies, the number of choices explodes, and you get stuck with slow or bad answers. šŸž Anchor: The Traveling Salesman Problem (TSP) asks for the shortest tour visiting each city once. For 200 cities, the number of possible tours is unimaginably huge, so we need clever tricks.
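
To feel how fast the choices explode, here is a tiny sketch of the tour count. For a symmetric TSP, fixing the start city and ignoring direction gives (n āˆ’ 1)!/2 distinct tours; the function name is just a choice for this example:

```python
import math

def num_tsp_tours(n: int) -> int:
    """Distinct tours for a symmetric TSP with n cities:
    fix the start city, divide by 2 for direction -> (n - 1)! / 2."""
    return math.factorial(n - 1) // 2

print(num_tsp_tours(5))    # 12: small enough to list by hand
print(num_tsp_tours(20))   # ā‰ˆ 6.1e16: already beyond brute force
```

At 200 cities the count dwarfs the number of atoms in the observable universe, which is why clever search strategies are non-negotiable.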

šŸž Hook: You know how a chef follows helpful cooking tricks—like preheating the oven—to make recipes work better?

🄬 The Concept: Heuristic Design Principles

  • What it is: Heuristics are smart shortcuts that find good (not always perfect) answers fast.
  • How it works:
    1. Use rules of thumb (like "insert the closest next city").
    2. Improve step by step with small changes (like swapping two edges).
    3. Stop when time runs out or no quick improvement is found.
  • Why it matters: Heuristics make big problems solvable in the real world when perfect solutions are too slow. šŸž Anchor: Delivery apps use heuristics daily to route drivers quickly without solving the exact perfect plan.
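
A minimal sketch of rule 1 above, the classic nearest-neighbor construction ("insert the closest next city"); the function name and 2D-tuple city representation are choices for this example, not the paper's code:

```python
import math

def nearest_neighbor_tour(cities):
    """Greedy heuristic: start at city 0, always visit the closest
    unvisited city next. Fast, decent, rarely optimal."""
    unvisited = set(range(1, len(cities)))
    tour = [0]
    while unvisited:
        last = cities[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, cities[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour
```

Constructions like this give a reasonable starting tour that improvement moves (like 2-opt swaps) can then polish.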

šŸž Hook: Chatting with a super-smart coder who can write programs from examples feels magical, right?

🄬 The Concept: Large Language Models (LLMs)

  • What it is: LLMs are AIs trained to read, reason, and write code or text based on patterns they’ve learned.
  • How it works:
    1. Read a problem and some example code.
    2. Propose new code that follows the style and logic.
    3. Improve that code after seeing how well it performs.
  • Why it matters: LLMs can rapidly explore many algorithm ideas humans might miss or take too long to try. šŸž Anchor: Give an LLM a sample TSP rule; it can draft a new, slightly different rule and test if it works better.

šŸž Hook: Have you ever cleaned your messy desk by pulling everything off and then putting only the right things back in the right places?

🄬 The Concept: Large Neighborhood Search (LNS)

  • What it is: LNS is a "ruin-and-recreate" method: break a big part of a solution and rebuild it better.
  • How it works:
    1. Destroy step: remove a chunk (like 20% of stops or a bad segment in a tour).
    2. Repair step: smartly reinsert those parts to reduce cost.
    3. Repeat and sometimes accept okay-but-not-better moves to explore.
  • Why it matters: Small tweaks (like swapping two cities) often get stuck; big reshapes escape traps. šŸž Anchor: If a bus route crosses itself, removing that whole messy segment and reinserting stops cleanly can shorten the trip a lot.
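
The ruin-and-recreate loop can be sketched for TSP as below. This is a simplified illustration, not the paper's code: it uses random removal, cheapest-position reinsertion, and greedy acceptance (a full LNS, like the paper's, would sometimes accept worse moves via simulated annealing):

```python
import math, random

def tour_length(tour, cities):
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def destroy_random(tour, frac=0.2, rng=random):
    """Destroy: remove a random fraction of cities from the tour."""
    removed = rng.sample(tour, max(1, int(frac * len(tour))))
    kept = [c for c in tour if c not in removed]
    return kept, removed

def repair_greedy(kept, removed, cities):
    """Repair: reinsert each removed city at its cheapest position."""
    tour = kept[:]
    for c in removed:
        best_pos, best_delta = 0, float("inf")
        for pos in range(len(tour)):
            a, b = tour[pos - 1], tour[pos]
            delta = (math.dist(cities[a], cities[c])
                     + math.dist(cities[c], cities[b])
                     - math.dist(cities[a], cities[b]))
            if delta < best_delta:
                best_pos, best_delta = pos, delta
        tour.insert(best_pos, c)
    return tour

def lns(cities, iters=200, seed=0):
    rng = random.Random(seed)
    tour = list(range(len(cities)))
    best = tour_length(tour, cities)
    for _ in range(iters):
        kept, removed = destroy_random(tour, rng=rng)
        cand = repair_greedy(kept, removed, cities)
        cost = tour_length(cand, cities)
        if cost <= best:   # greedy acceptance; SA would relax this
            tour, best = cand, cost
    return tour, best
```

Even this bare-bones version reshapes 20% of the tour per step, which is exactly the "big chunk" leverage the section describes.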

šŸž Hook: Think of a sports coach who keeps track of which plays work best and calls them more often as the game goes on.

🄬 The Concept: Adaptive Large Neighborhood Search (ALNS)

  • What it is: ALNS learns which destroy/repair tactics work best and picks them more often.
  • How it works:
    1. Try various operators (ways to destroy/repair).
    2. Reward operators that improve results.
    3. Increase their chance of being chosen next time.
  • Why it matters: Without adaptation, you might waste time on weak ideas even when better ones are known. šŸž Anchor: If "remove the worst segment" keeps helping, ALNS gives it more plays in later rounds.
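
The reward-then-reweight idea can be sketched with roulette selection plus exponential smoothing; the decay constant, reward value, and operator names here are placeholders, not the paper's exact update rule:

```python
import random

def select_operator(weights, rng=random):
    """Roulette-wheel selection: probability proportional to weight."""
    names = list(weights)
    return rng.choices(names, weights=[weights[n] for n in names])[0]

def update_weight(weights, name, reward, decay=0.8):
    """Blend the old weight with the latest reward (exponential smoothing),
    so recent successes raise an operator's selection chance."""
    weights[name] = decay * weights[name] + (1 - decay) * reward

weights = {"random_removal": 1.0, "worst_removal": 1.0}
update_weight(weights, "worst_removal", reward=5.0)  # it just found a new best
# worst_removal's weight is now ā‰ˆ 1.8, so it gets "more plays" next rounds
```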

šŸž Hook: Two puzzle-solvers can beat one if one breaks the puzzle apart just right and the other knows exactly how to rebuild it.

🄬 The Concept: Destroy and Repair Operators

  • What it is: A destroy operator decides what to remove; a repair operator decides how to reinsert those pieces.
  • How it works:
    1. Destroy targets structural pain points (like entangled edges or overfull routes).
    2. Repair rebuilds with logic (greedy, regret-k, or diversity-aware choices).
    3. Together, they reshape solutions far beyond tiny local moves.
  • Why it matters: If destroy and repair don’t match, you break in unhelpful ways or can’t rebuild efficiently. šŸž Anchor: Removing customers from a jammed area and then reassigning them to better-fitting vehicles often cuts total distance.
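
A hypothetical destroy/repair pair in this spirit: worst removal (drop the cities sitting on the most wasteful detours) matched with regret-2 insertion (reinsert first the city that loses most by waiting). All names and details are illustrative sketches of these classic operator families, not the paper's generated code:

```python
import math

def removal_saving(tour, idx, cities):
    """Detour saved by removing tour[idx] from its current position."""
    a, c, b = (cities[tour[idx - 1]], cities[tour[idx]],
               cities[tour[(idx + 1) % len(tour)]])
    return math.dist(a, c) + math.dist(c, b) - math.dist(a, b)

def worst_removal(tour, k, cities):
    """Destroy: drop the k cities on the most wasteful detours."""
    idxs = sorted(range(len(tour)),
                  key=lambda i: removal_saving(tour, i, cities),
                  reverse=True)[:k]
    removed = [tour[i] for i in idxs]
    kept = [c for c in tour if c not in removed]
    return kept, removed

def insertion_costs(tour, c, cities):
    """Sorted extra costs of inserting city c at every position."""
    return sorted(
        math.dist(cities[tour[p - 1]], cities[c])
        + math.dist(cities[c], cities[tour[p]])
        - math.dist(cities[tour[p - 1]], cities[tour[p]])
        for p in range(len(tour)))

def regret2_insertion(kept, removed, cities):
    """Repair: repeatedly insert the city with the largest regret --
    the one that loses most if denied its best slot."""
    tour, pending = kept[:], list(removed)
    while pending:
        def regret(c):
            costs = insertion_costs(tour, c, cities)
            return (costs[1] - costs[0]) if len(costs) > 1 else costs[0]
        c = max(pending, key=regret)
        best_p = min(range(len(tour)), key=lambda p:
                     math.dist(cities[tour[p - 1]], cities[c])
                     + math.dist(cities[c], cities[tour[p]])
                     - math.dist(cities[tour[p - 1]], cities[tour[p]]))
        tour.insert(best_p, c)
        pending.remove(c)
    return tour
```

Notice the fit: worst removal frees exactly the cities whose placement was costly, and regret-2 is good at finding them genuinely better homes, which is the "matched partners" point above.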

šŸž Hook: What if we had a robot architect that designs the building plan, not just paints the walls?

🄬 The Concept: Automated Heuristic Design (AHD)

  • What it is: AHD asks AI to invent the rules and code for solving problems, not just to run them.
  • How it works:
    1. Generate candidate algorithms (code) with an LLM.
    2. Test them on sample instances and score performance.
    3. Keep and evolve the best ones.
  • Why it matters: Humans can’t handcraft perfect heuristics for every new case or scale; AHD scales creativity. šŸž Anchor: An LLM can write and refine a new insertion rule without a human expert micromanaging every detail.
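
The generate-test-keep loop can be sketched with a stand-in for the LLM call. Here `llm_propose` merely perturbs numeric parameters and `evaluate` is a toy score; a real AHD system would prompt the model for new operator code and benchmark it on problem instances:

```python
import random

def llm_propose(parent, rng):
    """Stand-in for an LLM call: perturb a parent heuristic's parameters.
    A real system would ask the model to write new operator *code*."""
    return {k: v + rng.gauss(0, 0.1) for k, v in parent.items()}

def evaluate(params):
    """Toy benchmark: pretend the ideal heuristic has greed=0.7, noise=0.2
    (higher score is better; 0 is perfect)."""
    return -((params["greed"] - 0.7) ** 2 + (params["noise"] - 0.2) ** 2)

def ahd_loop(generations=50, seed=0):
    rng = random.Random(seed)
    best = {"greed": 0.0, "noise": 0.0}
    best_score = evaluate(best)
    for _ in range(generations):
        cand = llm_propose(best, rng)   # 1. generate a candidate
        score = evaluate(cand)          # 2. test and score it
        if score > best_score:          # 3. keep the best
            best, best_score = cand, score
    return best, best_score
```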

The World Before: Most LLM-based AHD stayed in small boxes—either evolving simple constructive rules (which can lock in early mistakes) or tuning penalties while keeping neighborhood moves fixed (so structure rarely changes). This made it hard to escape deep local optima on big, tangled problems like large CVRPs.

The Gap: We needed an AI that doesn’t just tweak dials but redesigns the solver’s bones—especially the paired destroy/repair logic—and learns which pairs truly click together.

Real Stakes: Better routes mean faster deliveries, fewer trucks on the road, lower fuel costs, and less CO2. Better tours mean tighter manufacturing schedules and happier customers. Even small percentage gains pay off massively at scale.

02Core Idea

šŸž Hook: You know how some dance partners move better together because each anticipates the other’s next step?

🄬 The Concept: Generative Large Neighborhood Search (G-LNS)

  • What it is: G-LNS uses an LLM to co-create both destroy and repair operators and evolve them as a team.
  • How it works:
    1. Keep two small teams (populations): destroy operators and repair operators.
    2. Pair them up during runs, score how well each pair performs, and record their synergy.
    3. Use the LLM to mutate and cross over the best ideas—especially high-synergy pairs—to generate better code.
  • Why it matters: Without co-design, you might create a great destroyer that your repairer can’t fix, or vice versa. šŸž Anchor: The system learns that ā€œremove entangled segmentsā€ pairs best with ā€œcontext-aware greedy reinsertion,ā€ and breeds that duo.

The ā€œAha!ā€ Moment in one sentence: Don’t evolve heuristics in isolation—co-evolve tightly coupled destroy-and-repair operators with an LLM while explicitly measuring how well they work together.

Multiple Analogies:

  1. Chef analogy: One chef (destroy) breaks down a dish; the other (repair) reassembles flavors. G-LNS finds chef pairs whose styles match—like a butcher who trims just the right cuts for a saucier who knows exactly how to braise them.
  2. Sports analogy: A quarterback (destroy) and receiver (repair) must coordinate routes. G-LNS tracks which pairs keep scoring and drafts new plays from their best moves.
  3. Lego analogy: One friend removes blocks from a messy Lego build (destroy), the other rebuilds cleaner (repair). G-LNS learns which remove-and-rebuild styles snap together most strongly.

Before vs After:

  • Before: LLM-based AHD mainly tweaked simple rules or fixed small neighborhoods; escapes from deep traps were rare.
  • After: G-LNS invents structural operators that rip out the right parts and rebuild them smarter, routinely hopping over big local optima.

šŸž Hook: Like tuning a band where you reward not just good soloists but duets that sound great together.

🄬 The Concept: Synergy-aware Evaluation

  • What it is: A scoring system that tracks both individual operator quality and the pair’s teamwork.
  • How it works:
    1. During many short runs, select operators with adaptive probabilities (roulette wheel with weights).
    2. After each step, assign a reward based on how much the solution improved or whether it was accepted.
    3. Add rewards to a synergy matrix entry (i, j) for destroy i and repair j.
  • Why it matters: Without tracking pair synergy, evolution can separate partners that only shine together. šŸž Anchor: If ā€œProgressive Stochastic-Worst Removalā€ and ā€œAdaptive Context-Aware Greedy Insertionā€ consistently cut cost, their synergy score soars and they’re favored for joint crossover.
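
A minimal sketch of the bookkeeping: tiered rewards credited both to each operator's individual fitness and to the pair's synergy entry. The reward constants and operator names are placeholders, not the paper's values:

```python
# Reward tiers as described: new best > improved current > accepted by SA > rejected.
SIGMA = {"new_best": 5.0, "improved": 3.0, "accepted": 1.0, "rejected": 0.0}

def update_scores(fitness, synergy, d, r, outcome):
    """Credit the individual operators AND the (destroy, repair) pair."""
    reward = SIGMA[outcome]
    fitness[d] = fitness.get(d, 0.0) + reward
    fitness[r] = fitness.get(r, 0.0) + reward
    synergy[(d, r)] = synergy.get((d, r), 0.0) + reward

fitness, synergy = {}, {}
update_scores(fitness, synergy, "worst_removal", "regret2", "new_best")
update_scores(fitness, synergy, "worst_removal", "greedy", "rejected")
# Joint crossover samples from the top synergy entries:
best_pair = max(synergy, key=synergy.get)
```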

šŸž Hook: When two musicians jam well, you don’t just copy one—you remix both to create an even tighter duo.

🄬 The Concept: Co-evolution

  • What it is: Evolving two interdependent groups so that improvements in one drive improvements in the other.
  • How it works:
    1. Keep separate destroy and repair populations.
    2. Evaluate them together and record which pairs click.
    3. Generate new code using mutations and crossovers, including a special joint crossover that evolves both at once.
  • Why it matters: Solving hard problems often needs teams whose skills match; co-evolution builds those teams. šŸž Anchor: The LLM fuses a high-synergy pair into a new duo that breaks clusters more cleanly and rebuilds routes with fewer vehicles.

Why It Works (intuition without math):

  • Structural leverage: Big, targeted changes let the solver jump out of deep pitfalls where tiny moves fail.
  • Fit-first search: Measuring pair synergy ensures that destroy and repair operators are tailored to each other’s strengths.
  • Guided diversity: Adaptive weights reward success yet keep exploring alternatives, avoiding premature convergence.
  • Generative power: LLM code-writing explores more creative operator logic than fixed templates.

Building Blocks:

  • Dual populations (destroy/repair) to preserve role specialization.
  • Multi-episode evaluation with adaptive selection and rewards for robust scoring.
  • Global fitness for individuals and a synergy matrix for pairs.
  • Evolutionary actions: mutation (logic and parameters), homogeneous crossover (within type), and synergistic joint crossover (across types, guided by synergy).

03Methodology

At a high level: Instance → Initialize destroy/repair populations → Evaluate pairs with adaptive LNS (and log synergy) → Rank and prune → LLM evolves new operators (mutations/crossovers) → Repeat → Output best operators and solutions.

Step-by-step (what, why, example):

  1. Initialization
  • What happens: Start two small pools (size N=5 each): destroy operators Pd and repair operators Pr. Seed them with simple classics (e.g., Random Removal, Worst Removal, Greedy Insertion). Fill remaining slots by asking the LLM to invent new operators from scratch via structured prompts.
  • Why it exists: Good seeds provide a safety net so the search never stalls; LLM-invented ones add early diversity.
  • Example: For TSP, the LLM proposes an Adaptive Continuous-Segment Removal (destroys the most expensive tour segments) and a Diversity-Adaptive Probabilistic Insertion (chooses positions with a softmax that adapts to solution diversity).
  2. Evaluation: Multi-episode Adaptive LNS
  • What happens:
    a) Run K=10 short episodes, each of T=100 iterations, starting from a random solution.
    b) At each iteration, pick d_i from Pd and r_j from Pr via roulette selection, with weights updated by recent rewards.
    c) Apply destroy then repair to get a neighbor solution; decide acceptance with a simulated annealing rule (occasionally accept worse solutions early, then become stricter).
    d) Assign a reward σ based on the improvement level: best-so-far, improved-current, accepted-by-SA, or rejected.
    e) Update three trackers: adaptive weights (for in-episode selection), global fitness F (for later pruning), and the synergy matrix S_ij (for pair teamwork).
  • Why it exists: Multiple short, adaptive runs produce stable scores, reduce luck, and spotlight strong pairings.
  • Example (CVRP50): A pair removes customers from a tangled zone at 20% destruction and reinserts them with capacity-aware regret scoring, cutting the total cost sharply.
  3. Population Management
  • What happens: After every K episodes, rank operators by global fitness F and prune the bottom M=2 from each pool. Reset F and S so newcomers and survivors start fresh.
  • Why it exists: Keeps the population lean, avoids crowding by weak or duplicate ideas, and prevents old scores from dominating.
  • Example: If two destroy operators behave similarly but one scores worse, it’s pruned, freeing space for new logic.
  4. LLM-Driven Evolution (the secret sauce)
  • What happens: Refill open slots using three strategies, each triggered stochastically:
    a) Mutation (local exploitation):
    • Logic evolution (m1) for lower-ranked elites: replace mechanisms, invent new selection rules.
    • Parameter calibration (m2) for top elites: fine-tune ratios, thresholds, or randomness.
    b) Homogeneous crossover (c1): combine two destroy operators (or two repair operators) by mixing their best ideas while keeping a coherent structure.
    c) Synergistic joint crossover (c2): take the top-synergy pair (d_i, r_j) and have the LLM co-design a new destroy and a matched repair tailored to each other.
  • Why it exists: Mutation sharpens or transforms; same-type crossover recombines strengths; joint crossover preserves and amplifies hard-won teamwork between roles.
  • Example (discovered CVRP pair): Progressive Stochastic-Worst Removal (starts exploratory and becomes greedier) plus Adaptive Context-Aware Greedy Insertion (adjusts regret depth and noise based on difficulty). Together they reassign customers and shrink fleets when possible.
  5. Robustness checks
  • What happens: Every new operator passes a small sanity test on tiny instances; if it errors or is too slow, it’s regenerated.
  • Why it exists: Prevents LLM hallucinations or impractical code from poisoning the pool.
  • Example: A proposed repair that forgets capacity checks is rejected before it ever enters main evolution.
  6. Testing the final pair
  • What happens: After evolution (Gmax=200 generations), run the best operator pair longer (Ttest=500 iterations) on held-out data and classic benchmarks.
  • Why it exists: Confirms quality under tighter or varied conditions and measures generalization.
  • Example: On CVRP100, the evolved pair reaches a better objective than the strong OR-Tools baseline within a fraction of the time budget.
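
The simulated-annealing acceptance rule used inside the evaluation episodes can be sketched as follows; the temperatures in the comments are illustrative, not the paper's schedule:

```python
import math, random

def sa_accept(delta, temperature, rng=random):
    """Simulated-annealing rule: always accept improvements (delta <= 0);
    accept a worsening move with probability exp(-delta / temperature)."""
    if delta <= 0:
        return True
    return rng.random() < math.exp(-delta / temperature)

# Hot phase: a +2.0 cost increase passes ~82% of the time at T = 10.
# Cold phase: the same move passes only ~2% of the time at T = 0.5.
```

Cooling the temperature over iterations is what makes the search exploratory early and strict late, exactly as step 2c describes.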

The secret sauce:

  • Measure synergy explicitly so the method doesn’t just find good parts—it finds good partnerships.
  • Use adaptive selection so winners get more chances but newcomers aren’t ignored.
  • Keep populations tiny but smart, relying on the LLM’s creativity to explore big logic jumps.
  • Reset scores after pruning to give new code a fair start.

Data-in-action mini example (TSP segment):

  • Current tour has a long, twisty part. Destroy removes the worst continuous segment (20% of cities).
  • Repair reinserts removed cities, using a diversity-aware softmax to avoid rebuilding the same twist.
  • The new tour loses crossings and gets shorter. Reward goes up, boosting both operators’ weights and S_ij.

šŸž Hook: Like a school science fair where the best partners get to build the next rocket together.

🄬 The Concept: Synergy-aware Evaluation (reiterated for method)

  • What it is: A scoreboard that logs how well pairs work together, not just alone.
  • How it works: Each time a pair is used, its synergy S_ij gets the reward. Joint crossover samples top S_ij pairs.
  • Why it matters: It locks in cooperation patterns the search would otherwise forget. šŸž Anchor: The pair that keeps untangling dense clusters earns top synergy and becomes the blueprint for the next generation.

04Experiments & Results

The Test: The authors evaluated G-LNS on routing tasks—TSP, CVRP (with capacity), and OVRP (no return to depot)—using random instances for evolution and then testing on held-out sets and real-world-style benchmarks (TSPLib, CVRPLib). They capped evolution at 200 generations (far lower than typical 1000 for baselines) to prove efficiency, and then ran the final pair for 500 iterations during testing to measure solution quality.

The Competition:

  • Handcrafted solvers: LKH-3 (TSP), OR-Tools (CVRP/OVRP).
  • Neural constructive baseline: POMO.
  • LLM-based AHD baselines: FunSearch, EoH, ReEvo, Evo-MCTS, MCTS-AHD (including an ACO variant), plus standard ALNS with classic operators.

The Scoreboard (with context):

  • TSP held-out (sizes 10–200):
    • G-LNS matches near-zero gaps on small cases and stays strong as size grows (e.g., TSP100 gap ā‰ˆ 1.10%, TSP200 ā‰ˆ 1.31%), outperforming all LLM-based AHD and standard ALNS, which can exceed 10% gaps at scale.
    • Context: Getting about 1% gap at 200 cities is like scoring 99 out of 100 points on a tough exam where many others score in the 80s.
  • CVRP held-out (sizes 10–200):
    • G-LNS is highly competitive at small scales and shines on large ones, beating all LLM-driven baselines and even surpassing OR-Tools under its time limit on big instances (CVRP100 and CVRP200 show 0.00% gap vs. the reference, while OR-Tools has 2.09% and 1.27%).
    • Efficiency: For CVRP, G-LNS reaches strong solutions dramatically faster than MCTS-AHD and even outpaces OR-Tools’ fixed 320s budget on many instances (e.g., ā‰ˆ70s on CVRP100 vs. 320s for OR-Tools and ā‰ˆ1110s for MCTS-AHD(ACO)).
  • OVRP (open routes, harder for constructive rules):
    • G-LNS remains near the best on small/medium and clearly outperforms OR-Tools on the largest size (N=200), setting a new best-known objective among compared methods.

Surprising Findings:

  • Structural operators matter more than just tuning: the learned destroy/repair pairs dynamically adjust how much to break and how to rebuild based on the solution’s state, escaping traps that fixed-rule methods can’t.
  • Less is more: with only N=5 operators per pool and just 200 generations, G-LNS beats baselines that use far more LLM calls—showing the power of focusing on structural design rather than brute-force sampling.
  • Generalization boost: On diverse TSPLib and CVRPLib sets, G-LNS achieves the lowest average gaps across all evaluated categories, e.g., reducing CVRPLib Set F gap from 40.1% (EoH-S) to 15.9% and holding about 2.8% on TSPLib—evidence it learns problem structure, not just data quirks.

Ablations that make the numbers meaningful:

  • Remove mutation or homogeneous crossover → gaps rise: you need both fine-tuning and feature recombination.
  • Remove synergistic joint crossover → performance drops: confirming the destroy/repair partnership is essential.
  • Turn off adaptive weights or flatten rewards → search becomes misled or sluggish: balanced, hierarchical feedback is key.

Big picture: Across benchmarks and budgets, G-LNS consistently lands in A-grade territory while many alternatives linger around B or C—especially on larger, messier problems where structural reshaping pays off most.

05Discussion & Limitations

Limitations:

  • Reliance on LLM quality: Weaker code models may propose fewer viable or creative operators, increasing rejections in the sanity check and slowing progress.
  • Domain focus so far: Results center on routing (TSP, CVRP, OVRP). While LNS is broad, transferring to very different COPs (e.g., scheduling with complex constraints) may require new prompt scaffolds and seeds.
  • Evaluation cost: Although far cheaper than many baselines, G-LNS still runs multi-episode evaluations and code validations; on very tight compute, even this may be noticeable.
  • Stochasticity: As with most metaheuristics, runs vary; the paper averages three independent evolutions to smooth randomness.

Required Resources:

  • An LLM with strong code reasoning (e.g., DeepSeek-V3.2 in the paper) and a modest budget of API calls (about 200 generations).
  • A runtime to execute and time-check generated Python operators, plus small benchmark sets for sanity checks.
  • Standard metaheuristic runtime (simulated annealing loop) for evaluation episodes.

When NOT to Use:

  • Tiny problems that exact solvers finish instantly—classics like LKH-3 (TSP) or OR-Tools (small CVRP) may be simpler and guaranteed-optimal.
  • Domains where destroying any part breaks feasibility in complex, hard-to-repair ways and crafting repair logic becomes extremely problem-specific (unless you are ready to design better prompts and seeds).
  • Situations demanding strict optimality proofs rather than high-quality approximations.

Open Questions:

  • Multi-objective tradeoffs: Can G-LNS co-evolve operators that jointly optimize distance, time windows, emissions, and fairness?
  • Transfer learning: Can an operator pair evolved on one distribution rapidly adapt to another with minimal extra evolution?
  • Theory: What properties of synergy-aware co-evolution predict fast convergence or guaranteed improvements?
  • Broader COPs: How well does this approach extend to scheduling, matching, or layout problems where neighborhoods look very different?
  • Safety filters: Can we formalize stronger guarantees (e.g., complexity bounds, feasibility invariants) for auto-generated operators across domains?

06Conclusion & Future Work

Three-sentence summary: This paper turns LLMs into structural algorithm designers by co-evolving destroy and repair operators for Large Neighborhood Search. A synergy-aware evaluation tracks which pairs work best and uses the LLM to mutate and cross over those ideas into stronger, faster operators. The result is a compact, sample-efficient framework that outperforms strong LLM-based AHD methods and even classical solvers on large, challenging routing tasks, while generalizing robustly to standard benchmarks.

Main achievement: Showing that explicitly co-designing (and breeding) paired destroy-and-repair operators—rather than tweaking fixed templates—unlocks powerful structural reshaping that escapes deep local optima.

Future directions:

  • Add multi-objective optimization (e.g., distance, emissions, time windows) and fairness-aware constraints.
  • Explore transfer learning: seed new tasks with top pairs from prior domains.
  • Extend to non-routing COPs (e.g., scheduling, packing, assignment) with domain-specific prompts and seeds.
  • Strengthen theoretical and safety guarantees for generated operators.

Why remember this: G-LNS reframes AHD from parameter tuning to architecture invention—teaching AI not just to polish moves but to design the very tools that reshape solutions. That shift could influence how we build solvers across logistics, manufacturing, and beyond, where small percentage gains mean massive real-world savings.

šŸž Hook: Like pairing a master sculptor who chisels away just the right stone with a finisher who polishes the statue to shine.

🄬 The Concept: G-LNS (final anchor sandwich)

  • What it is: An LLM-driven, synergy-aware, co-evolution framework for destroy-and-repair operators in LNS.
  • How it works: Keep dual populations, evaluate and reward pairs, evolve with mutation and crossovers (including joint, synergy-driven pairing), and validate robustness.
  • Why it matters: It discovers structural strategies that jump out of traps and generalize across new terrains. šŸž Anchor: On CVRP200, the evolved pair beat a strong solver under its time limit, showing that better operator design can trump brute force.

Practical Applications

  • Design custom destroy/repair operators for a company’s delivery network to reduce fleet size and fuel consumption.
  • Use G-LNS-evolved operators to optimize pickup-and-delivery routes under tight time windows.
  • Deploy the framework to improve school bus routing while respecting capacity and time constraints.
  • Adapt the method to warehouse order batching and picker routing to shorten fulfillment times.
  • Apply co-evolved operators to schedule factory jobs across machines to minimize total completion time.
  • Refine airline or rail crew pairing and routing with destroy/repair pairs tuned for complex constraints.
  • Automate heuristic design for last-mile delivery in dense urban areas where local moves get stuck.
  • Generate domain-specific LNS operators for waste collection routes that balance travel and workload.
  • Transfer learned operator pairs to new regions (benchmark on TSPLib/CVRPLib-like data) and fine-tune quickly.
  • Integrate the synergy-aware evaluation into existing ALNS systems to discover better operator portfolios.
#Generative LNS Ā· #Automated Heuristic Design Ā· #Large Language Models Ā· #Destroy and Repair Operators Ā· #Synergy-aware Co-evolution Ā· #Adaptive Large Neighborhood Search Ā· #Combinatorial Optimization Ā· #Traveling Salesman Problem Ā· #Capacitated Vehicle Routing Problem Ā· #Simulated Annealing Ā· #Evolutionary Search Ā· #Code Generation for Algorithms Ā· #Hyper-heuristics Ā· #Population-based Methods