
Multi-Task GRPO: Reliable LLM Reasoning Across Tasks

Intermediate
Shyam Sundhar Ramesh, Xiaotong Ji, Matthieu Zimmer et al. Ā· 2/5/2026
arXiv Ā· PDF

Key Summary

  • Large language models are usually trained to get good at one kind of reasoning, but real life needs them to be good at many things at once.
  • Standard GRPO training across multiple tasks lets easy tasks hog the progress while hard tasks are left behind.
  • This paper introduces MT-GRPO, which gives more practice to weaker tasks and checks that the practice really follows that plan.
  • It uses improvement-aware task weights that rise for tasks that are both low-scoring and not improving much.
  • A ratio-preserving sampler balances training batches even when some prompts give no learning signal (zero-gradient).
  • On 3 tasks, MT-GRPO improves worst-task accuracy by 16–28% over standard GRPO and by 6% over DAPO, while keeping average accuracy competitive.
  • It also reaches 50% worst-task accuracy with 50% fewer steps in the 3-task setting, showing faster reliability gains.
  • On 9 tasks, a single knob (lambda) lets you trade a bit of average score for much better worst-case reliability.
  • The method is simple to plug into GRPO pipelines and makes multi-task reasoning more balanced and dependable.

Why This Research Matters

Real assistants need balanced skills, not just a high average that hides weak spots. MT-GRPO raises the weakest skill without dragging down the rest, making models more trustworthy in everyday use. This helps in settings like tutoring (logic plus math), coding (reasoning plus testing), and safety checks (no blind spots). It also speeds up reaching reliability thresholds, saving time and compute. Because it plugs into common GRPO pipelines with a single trade-off knob, it’s practical for many teams. Over time, this approach can make AI systems more dependable across a wide range of tasks and domains.

Detailed Explanation


01 Background & Problem Definition

šŸž Hook: Imagine you're studying math, science, and writing. If you only practice what you're already good at, your weak subjects never catch up. That might look okay on average, but it fails when you actually need all of them.

🄬 The Concept (Policy Gradient):

  • What it is: A way for an AI to learn by nudging itself toward actions that brought better rewards.
  • How it works:
    1. Try something (like writing a solution).
    2. Get a reward (right/wrong, well-formatted or not).
    3. Push the model’s choices slightly toward the ones that did better.
  • Why it matters: Without this feedback loop, the model doesn’t know which choices to make more often. šŸž Anchor: Like trying different strategies on a puzzle and favoring the ones that got you closer to the answer.
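A minimal sketch of this feedback loop on a toy three-armed bandit with a softmax policy; the reward probabilities, learning rate, and action count below are invented for illustration and have nothing to do with the paper's setup.

```python
# Toy REINFORCE-style policy gradient: try, get reward, nudge toward what worked.
import numpy as np

rng = np.random.default_rng(0)
logits = np.zeros(3)                       # the "policy": preferences over 3 actions
true_p = np.array([0.2, 0.5, 0.8])         # hypothetical chance of reward per action

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

lr = 0.5
for _ in range(500):
    probs = softmax(logits)
    a = rng.choice(3, p=probs)             # 1. try something
    r = rng.binomial(1, true_p[a])         # 2. get a reward (0 or 1)
    grad_logp = -probs
    grad_logp[a] += 1.0                    # gradient of log pi(a) for a softmax policy
    logits += lr * r * grad_logp           # 3. nudge toward choices that paid off

print(softmax(logits))                     # most probability mass ends up on the best action
```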

šŸž Hook: You know how teachers give stars for correct answers and neat work?

🄬 The Concept (Task-Level Rewards):

  • What it is: Points the AI earns per task (e.g., math or logic) for correct and well-formatted answers.
  • How it works:
    1. For each task, compare outputs to ground-truth answers.
    2. Add bonus for correct formatting.
    3. Average these scores to track task progress.
  • Why it matters: Without clear points per task, we can’t tell which skills are strong or weak. šŸž Anchor: A math sheet where each correct problem is 1 point and neat handwriting gets a small bonus.
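A minimal sketch of such a per-task reward, assuming a simple "1 point for a correct answer plus a small bonus for correct formatting" scheme; the exact scoring rules and bonus size in the paper's datasets may differ.

```python
# Score one answer, then average over a task's batch to get that task's "grade".
def answer_reward(output: str, ground_truth: str, well_formatted: bool) -> float:
    reward = 1.0 if output.strip() == ground_truth.strip() else 0.0
    if well_formatted:
        reward += 0.1                      # hypothetical size for the format bonus
    return reward

def task_reward(outputs, truths, formats) -> float:
    scores = [answer_reward(o, t, f) for o, t, f in zip(outputs, truths, formats)]
    return sum(scores) / len(scores)       # the task's average score

# Example: three answers for one task, two correct, all well formatted.
print(task_reward(["42", "17", "8"], ["42", "17", "9"], [True, True, True]))
```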

šŸž Hook: Think of grading a group project by comparing all drafts and picking which ones seem better within that group.

🄬 The Concept (GRPO):

  • What it is: A training method (Group-Relative Policy Optimization) that compares multiple answers to the same prompt and prefers the relatively better ones.
  • How it works:
    1. For a prompt, generate several candidate answers.
    2. Score them and see which are above the group’s average.
    3. Push the model to prefer the better-than-average ones.
  • Why it matters: It removes the need for a value network and uses fair relative comparisons. šŸž Anchor: Like a mini contest where all students’ answers to the same question are compared, and the best ones guide the class.
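A minimal sketch of the group-relative scoring idea, assuming the common recipe of normalizing rewards within the group of answers sampled for one prompt; the paper's exact normalization may differ.

```python
import numpy as np

def group_relative_advantages(rewards: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """rewards: scores of the G answers sampled for a single prompt."""
    return (rewards - rewards.mean()) / (rewards.std() + eps)

# Four sampled answers to one prompt: two correct (1.0) and two wrong (0.0).
print(group_relative_advantages(np.array([1.0, 0.0, 1.0, 0.0])))
# Above-average answers get positive advantage; below-average ones get negative.
```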

šŸž Hook: Imagine some quiz questions are so easy (everyone always gets them right) or so hard (everyone gets them wrong) that you learn nothing new from them.

🄬 The Concept (Zero-Gradient Prompts):

  • What it is: Prompts where all sampled answers score the same, so the model gets no signal to improve.
  • How it works:
    1. Generate several answers.
    2. If they all get identical scores, the relative advantage is zero.
    3. No gradient means no learning from that prompt.
  • Why it matters: If one task has lots of these, it silently contributes less to learning, even if we wanted to prioritize it. šŸž Anchor: A practice question that’s either a guaranteed A or a guaranteed F—no hint about how to get better.
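A minimal check for such prompts, under the same group-relative scoring as above: if every sampled answer gets the same score, the within-group advantages are all zero and the prompt contributes no gradient.

```python
import numpy as np

def is_zero_gradient(rewards: np.ndarray) -> bool:
    # All answers scored the same -> group-relative advantages are all zero.
    return bool(np.isclose(rewards.std(), 0.0))

print(is_zero_gradient(np.array([1.0, 1.0, 1.0])))   # True: everyone got it right, nothing to learn
print(is_zero_gradient(np.array([1.0, 0.0, 1.0])))   # False: there is a signal to learn from
```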

The world before: RL post-training (like GRPO) made LLMs much better at single tasks such as math or coding. But real-world assistants must juggle many skills: planning (Countdown), logic (Zebra puzzles), and inductive reasoning (ARC). When people tried to train on all tasks at once using average performance, easy tasks dominated, and hard tasks stagnated. Worse, some tasks had many zero-gradient prompts, so even if you gave them higher sampling weight, they still didn’t ā€œspeak upā€ in the gradients.

The problem: How do we train one model across diverse tasks so that the weakest task is good enough—not just the average?

Failed attempts:

  • Uniform sampling (plain multi-task GRPO): easy tasks hog progress; hard tasks fall behind.
  • Curriculum sampling (e.g., SEC): it helps average performance but can still misallocate effort, and doesn’t fix zero-gradient imbalance.
  • Classic robust weighting from other areas: GRPO’s loss can look the same when everything is perfect or when everything fails, so it’s not a reliable signal for reweighting.

The gap: We need a training rule that (1) explicitly prioritizes the worst or slowest-improving tasks and (2) makes sure the batch actually reflects those priorities despite zero-gradient prompts.

Real stakes: In daily life, a helper bot shouldn't ace tricky math yet fail at basic logic instructions. In code assistants, strong algorithms paired with weak reasoning about test cases are risky. For safety checks, you want the weakest check to be solid. Reliability across skills is what makes these systems trustworthy.

02 Core Idea

šŸž Hook: Think of a coach who looks at each player’s score and also how quickly they’re improving, then gives more practice time to the ones who need it most—and makes sure the practice plan is actually followed during drills.

🄬 The Concept (Task Reweighting):

  • What it is: Dynamically changing how often each task is practiced.
  • How it works:
    1. Measure each task’s reward (how good it is now).
    2. Measure improvement (how much it just got better).
    3. Increase weight for tasks that are weak and not improving much; decrease for strong or fast-improving ones.
  • Why it matters: Without reweighting, easy tasks can hog training and hard tasks stay weak. šŸž Anchor: Studying more spelling if your last few quizzes didn’t improve, and less if you’re already acing them.

šŸž Hook: Imagine progress charts for every subject; even if a subject’s score is low, if it’s shooting up, maybe you can focus on another subject that’s stuck.

🄬 The Concept (Improvement Signals):

  • What it is: A per-task ā€œhow much did we improve this step?ā€ score.
  • How it works:
    1. Compute task reward before and after an update.
    2. Subtract to get the improvement.
    3. Use it with reward to decide future task weights.
  • Why it matters: Reward-only weighting can tunnel on the same worst task and ignore others; improvement-aware weighting prevents collapse. šŸž Anchor: If your logic puzzle score rose a lot this week, your tutor shifts time to math, which has been flat.
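A minimal sketch of the improvement signal: store the previous step's per-task rewards and subtract after the update. Any smoothing or windowing the paper may apply is omitted, and the numbers below come from the anchor example.

```python
def improvement_signals(current: dict, previous: dict) -> dict:
    return {task: current[task] - previous[task] for task in current}

prev = {"logic": 0.40, "patterns": 0.30}
curr = {"logic": 0.50, "patterns": 0.30}
print({t: round(v, 2) for t, v in improvement_signals(curr, prev).items()})
# logic is rising fast; patterns is stuck and likely needs more attention
```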

šŸž Hook: A recipe only tastes right if you keep the right ingredient ratios; sampling tasks for training is the same.

🄬 The Concept (Ratio-Preserving Sampler):

  • What it is: A sampler that ensures the training batch really has the target mix of tasks after filtering out unhelpful prompts.
  • How it works:
    1. Decide desired post-filtered counts per task from the learned weights.
    2. Oversample tasks likely to be filtered (acceptance-aware).
    3. Resample until post-filtered batch matches target ratios.
  • Why it matters: Without this, tasks with many zero-gradient prompts get underrepresented, breaking the plan. šŸž Anchor: If strawberries bruise easily and some get thrown out, buy extra so you still have the right amount for your fruit salad.
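A minimal sketch of acceptance-aware oversampling, assuming we track a running acceptance rate per task (the fraction of its prompts that survive zero-gradient filtering); the paper's exact resampling loop and inflation cap may differ.

```python
import math

def planned_prompt_counts(target_counts, acceptance_rates, max_inflation=4.0):
    """target_counts: desired number of *post-filter* prompts per task.
    acceptance_rates: estimated fraction of each task's prompts that yield a gradient."""
    planned = {}
    for task, target in target_counts.items():
        accept = max(acceptance_rates.get(task, 1.0), 1e-3)
        inflation = min(1.0 / accept, max_inflation)   # oversample fragile tasks, but cap it
        planned[task] = math.ceil(target * inflation)
    return planned

# Example: ARC keeps only ~40% of its prompts after filtering, so draw extra ARC up front.
print(planned_prompt_counts({"countdown": 32, "zebra": 32, "arc": 32},
                            {"countdown": 0.9, "zebra": 0.7, "arc": 0.4}))
```

In the full method, a resampling pass would keep drawing until the accepted prompts per task actually hit the target counts, so the post-filtered batch matches the planned ratios.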

šŸž Hook: Now put it all together: a coach who schedules practice based on current scores and progress, and a team manager who ensures the right players actually show up on the field.

🄬 The Concept (MT-GRPO):

  • What it is: A new training loop that blends improvement-aware task weighting with ratio-preserving sampling on top of GRPO.
  • How it works:
    1. Update the model using GRPO on a batch built to match task weights.
    2. Measure task rewards and improvements.
    3. Adjust weights to help the weakest and slowest-improving tasks.
    4. Repeat.
  • Why it matters: It raises the floor (worst-task score) without tanking the overall average. šŸž Anchor: The team’s weakest player improves fast, and the whole team still wins games.
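A toy, self-contained simulation of this loop. The "model" here is just a per-task skill number that grows with its share of practice, and the dynamics, step sizes, and lambda are invented purely to show the control flow; it is not the paper's training setup.

```python
import math

tasks = ["countdown", "zebra", "arc"]
skill = {"countdown": 0.60, "zebra": 0.40, "arc": 0.20}   # stand-in for task rewards
gain = {"countdown": 0.10, "zebra": 0.06, "arc": 0.04}    # how much practice pays off
weights = {t: 1.0 / len(tasks) for t in tasks}            # practice mix (batch ratios)
lam, eta = 0.5, 2.0
prev = dict(skill)

for _ in range(40):
    # 1. "GRPO update" on a batch built to match `weights`: more practice, more gain.
    for t in tasks:
        skill[t] = min(1.0, skill[t] + gain[t] * weights[t])
    # 2. Measure per-task reward and improvement.
    improvement = {t: skill[t] - prev[t] for t in tasks}
    prev = dict(skill)
    # 3. Reweight toward tasks that are weak AND not improving (signal s = I + lambda*J).
    s = {t: improvement[t] + lam * skill[t] for t in tasks}
    s_avg = sum(weights[t] * s[t] for t in tasks)
    weights = {t: weights[t] * math.exp(-eta * (s[t] - s_avg)) for t in tasks}
    total = sum(weights.values())
    weights = {t: w / total for t, w in weights.items()}

print("final skills:", {t: round(v, 2) for t, v in skill.items()})
print("final weights:", {t: round(w, 2) for t, w in weights.items()})
```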

Aha! moment in one sentence: If you want balanced multi-task reasoning, prioritize tasks that are both weak and not improving—and enforce that priority in the actual training batches.

Three analogies:

  1. School schedule: Spend more study time on subjects that are both hard and stuck; also make sure those subjects truly get time on your daily planner (even if some worksheets end up unusable).
  2. Cooking: Adjust ingredient amounts to fix bland parts of a dish and account for waste so the final plate keeps perfect ratios.
  3. Sports coaching: Give drills to the athletes who most need them, and double-check attendance so the plan matches practice.

Before vs. After:

  • Before: Average-focused training lets successes on easy tasks hide failures on hard ones; zero-gradient tasks shrink their voice.
  • After: Weights shift toward the struggling tasks, and the sampler guarantees their voice is heard, lifting worst-task accuracy while keeping averages strong.

Why it works (intuition): Rewards show where we stand, improvements show where we’re moving. Combining them avoids over-focusing on a single worst task and instead balances progress. Then, by preserving ratios after filtering, the gradients truly reflect the plan.

Building blocks:

  • Task-Level Rewards (where we stand)
  • Improvement Signals (how we’re moving)
  • Improvement-Aware Weight Updater (who needs time next)
  • Ratio-Preserving Sampler with acceptance-aware oversampling (make the batch match the plan)
  • GRPO core (stable, relative scoring per prompt)
  • A trade-off knob (lambda) to balance worst-case robustness vs. average performance

03 Methodology

High-level pipeline: Inputs (multi-task datasets, current model) → Improvement-aware task weights → Ratio-preserving batch sampler → GRPO policy update → Measure per-task reward and improvement → Update weights → Repeat.

Step-by-step (with Sandwich explanations at first use):

  1. Measure where we are (Task-Level Rewards) šŸž Hook: Like checking your grades in each subject before planning next week’s study. 🄬 The Concept:
  • What: Per-task scores that reflect correctness and formatting.
  • How:
    1. For each task, evaluate a batch of prompts.
    2. Score answers (1 for correct, small bonus for correct format, else 0 as per dataset rules).
    3. Average to get task reward.
  • Why: We need a clear scoreboard to know who needs help. šŸž Anchor: Math = 60%, Logic = 40%, Patterns = 30% this week.
  2. Measure how we’re moving (Improvement Signals) šŸž Hook: You don’t just look at your grade—you check whether it went up or down since last week. 🄬 The Concept:
  • What: The change in each task’s reward after an update.
  • How:
    1. Save last step’s reward per task.
    2. Update the model.
    3. Recompute rewards and subtract.
  • Why: A low score that’s rising fast may need less urgent help than a low score that’s flat. šŸž Anchor: Logic jumped from 40% to 50% (good trend), while Patterns stayed at 30% (needs attention).
  3. Decide who gets more practice (Improvement-Aware Task Reweighting) šŸž Hook: If two subjects are both hard, focus on the one that’s stuck, not the one already taking off. 🄬 The Concept:
  • What: A rule that raises weights for tasks that are weak and not improving, and lowers them for strong or fast-improving tasks.
  • How:
    1. Combine improvement (I) and reward (J) into a signal: s = I + λ·J.
    2. Compare each task’s s to the weighted average.
    3. Increase weights for below-average s; decrease for above-average s.
  • Why: Prevents training from collapsing onto one worst task forever; encourages balanced progress. šŸž Anchor: If Patterns is low and flat, it gets more slots in the next study schedule.
  4. Make the batch match the plan (Ratio-Preserving Sampler) šŸž Hook: If some worksheets end up useless, bring extra so your study time still matches the schedule. 🄬 The Concept (Zero-Gradient Prompts + RP Sampler):
  • What: Some prompts give no learning signal; the sampler accounts for this so post-filtered batches keep the target task ratios.
  • How:
    1. Predict acceptance rates (1 āˆ’ filter rate) per task.
    2. Oversample tasks with high filter rates (acceptance-aware).
    3. Resample as needed so accepted samples hit the target counts per task.
  • Why: Without this, tasks with many zero-gradient prompts would be underrepresented even if we planned to prioritize them. šŸž Anchor: If ARC often yields no signal, oversample it so the final batch still has the intended amount of ARC.
  5. Learn from the batch (GRPO Update) šŸž Hook: In a mini contest, pick the best answers among a group and learn to prefer those. 🄬 The Concept:
  • What: Use GRPO on the constructed batch to push the model toward better-than-average answers per prompt.
  • How:
    1. For each prompt, generate several answers.
    2. Score and normalize within the group.
    3. Nudge the model toward answers that beat the group average.
  • Why: Stable, relative comparisons make improvements without needing a value model. šŸž Anchor: For a Zebra puzzle prompt, favor the candidate reasoning chain that most often leads to the correct solution.
  6. Close the loop
  • After the GRPO step, recompute rewards and improvements and update task weights again.
  • The loop repeats, steadily lifting the weakest skills while keeping overall performance strong.

Concrete mini example:

  • Suppose Countdown=80% (improving), Zebra=55% (flat), ARC=30% (flat, many zero-gradient prompts).
  • Weight updater increases ARC and Zebra weights more than Countdown.
  • RP sampler oversamples ARC to counter high filtering so post-filtered batch has the planned share of ARC.
  • GRPO update uses this balanced batch; next step shows ARC rising to 36% and Zebra to 58%.
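A small numeric sketch of the weight shift in this mini example, assuming an exponentiated-gradient style rule that raises weights whose signal s = I + λ·J falls below the weighted average; the paper's exact update, step size, and lambda may differ.

```python
import math

def update_weights(weights, rewards, improvements, lam=0.5, step=2.0):
    s = {t: improvements[t] + lam * rewards[t] for t in weights}
    s_avg = sum(weights[t] * s[t] for t in weights)
    raw = {t: weights[t] * math.exp(-step * (s[t] - s_avg)) for t in weights}
    total = sum(raw.values())
    return {t: v / total for t, v in raw.items()}

weights = {"countdown": 1/3, "zebra": 1/3, "arc": 1/3}
rewards = {"countdown": 0.80, "zebra": 0.55, "arc": 0.30}
improvements = {"countdown": 0.05, "zebra": 0.00, "arc": 0.00}   # Countdown improving, others flat

print({t: round(w, 2) for t, w in update_weights(weights, rewards, improvements).items()})
# Countdown's share shrinks; Zebra and especially ARC get more practice next step.
```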

Secret sauce:

  • The combo: (a) improvement-aware weighting keeps focus where progress is lacking, and (b) ratio-preserving sampling guarantees that focus shows up in gradients even with zero-gradient prompts. Together, they raise the worst-case task without tanking the average.

Implementation notes (plain-English):

  • One knob (lambda) controls how hard you push worst-task robustness vs. average performance.
  • Track per-task filter rates to guide oversampling.
  • Use modest oversampling and capped inflation to keep compute reasonable.
  • Update weights smoothly to avoid wild swings; the improvement term helps stabilize.

04 Experiments & Results

The test: Can MT-GRPO boost the weakest task while keeping the overall average solid, and can it do so efficiently?

Setups:

  • Tasks: Countdown (planning), Zebra (logic), ARC (inductive reasoning), each with easy/medium/hard variants.
  • Models: Qwen-2.5-3B base.
  • Baselines: GRPO, SEC-GRPO (curriculum), DAPO (strong RL baseline), SEC-DAPO.
  • Metrics: Worst-task accuracy (the floor), average accuracy (overall), and average per-task relative change (fairness-weighted improvement).

Experiment 1: Controlled 3-task setting (Countdown, Zebra, ARC; medium difficulty)

  • Scoreboard: MT-GRPO improves worst-task accuracy by 16–28% over standard GRPO and beats DAPO by 6%, while keeping average accuracy competitive.
  • Meaning: That’s like raising the class’s lowest grade from a D to a solid C/B without lowering the class average.
  • Efficiency: MT-GRPO reaches 50% worst-task accuracy with 50% fewer steps than baselines—faster reliability gains.
  • Dynamics: As Countdown gets strong, MT-GRPO shifts weight to Zebra and ARC; baselines often keep feeding Countdown, yielding smaller gains where help is needed most.
  • Ratio preservation: ARC had many zero-gradient prompts. Without RP sampling, ARC would be underrepresented despite higher weight. With RP sampling, actual batch shares match the plan, unlocking ARC improvements.

Experiment 2: 9-task setting (easy/medium/hard for each of Countdown, Zebra, ARC)

  • Trade-off knob (lambda): Higher lambda consistently lifts worst-task accuracy (e.g., +16% over GRPO, +6% over DAPO at lambda=1.2) but trims a bit of average accuracy—an explicit, tunable trade.
  • Difficulty trends: Smaller lambda favors balanced improvement (more gains on hard tasks, sometimes smaller or negative changes on easy ones), raising fairness-style metrics. Larger lambda concentrates on the single weakest (often Zebra-hard), maximizing the floor.
  • Takeaway: You can steer between ā€œhighest floorā€ and ā€œbest overall averageā€ depending on your deployment goals.

Surprises and insights:

  • Zero-gradient imbalance is big: ARC frequently produced zero-gradient prompts; without acceptance-aware oversampling, progress stalled no matter the weights.
  • Weight collapse avoided: Reward-only reweighting tends to fixate on the current worst task; the improvement-aware term avoids this, distributing help where it’s most needed.
  • Faster robustness: Beyond better final scores, MT-GRPO reaches reliability thresholds earlier—useful for limited training budgets.

Overall: Across both small and large settings, MT-GRPO reliably boosts the weakest task and keeps the average competitive. The method’s two core ideas—improvement-aware weights and ratio-preserving sampling—are both crucial to these gains.

05 Discussion & Limitations

Limitations:

  • Rewarded tasks only: The approach needs clear, verifiable rewards per task (correctness/format). It’s not designed for fuzzy goals without a good reward signal.
  • Compute and sampling overhead: Ratio-preserving, acceptance-aware resampling adds generation and filtering work, especially for tasks with many zero-gradient prompts.
  • Hyperparameter tuning: The lambda trade-off needs tuning per use case; too high can over-focus on the worst task, too low can favor average performance.
  • Extreme zero-gradient regimes: If a task yields almost no informative prompts despite oversampling, progress will still be slow.
  • Interference remains possible: While weighting helps, conflicting gradients across very different tasks can still cause some negative transfer.

Required resources:

  • A base LLM, multi-task datasets with automatic grading, and RL post-training infrastructure (GRPO-style implementation, rollout generation, filtering statistics).
  • Enough compute to support oversampling and a few resampling rounds per batch.

When not to use:

  • Single-task specialization: If you only care about one task, standard GRPO/DAPO may be simpler and faster.
  • Noisy or unverifiable rewards: If correctness can’t be judged reliably, the signals driving weights and sampling won’t be stable.
  • Tiny data regimes without diversity: If each task has too few prompts, the weight and ratio estimates will be unstable.

Open questions:

  • Smarter acceptance prediction: Can we learn to predict zero-gradient likelihood per prompt to cut resampling cost further?
  • Cross-task transfer: How to better exploit positive transfer while guarding against interference in this RL setting?
  • Beyond accuracy: Can we extend robustness to other criteria (latency, safety, or verbosity) with multi-objective reward shaping?
  • Theoretical guarantees: Stronger convergence and robustness guarantees with on-policy, clipped GRPO in multi-task, non-stationary settings.
  • Generalization: How well do balanced gains transfer to unseen but related tasks or domains?

06 Conclusion & Future Work

Three-sentence summary: MT-GRPO trains one model to handle many reasoning tasks by giving more attention to tasks that are both weak and not improving—and then ensuring the training batch truly reflects that plan. It couples improvement-aware task reweighting with a ratio-preserving, acceptance-aware sampler on top of GRPO. This raises the worst-task performance while keeping the average strong and speeds up reaching reliability thresholds.

Main achievement: Making multi-task RL post-training reliably lift the weakest task by aligning planned emphasis (weights) with realized gradients (ratio-preserving sampling) and stabilizing weights with improvement signals.

Future directions: Reduce sampling overhead with better acceptance prediction, explore multi-objective robustness (e.g., safety plus accuracy), enhance positive transfer across tasks, and extend to larger, more diverse benchmarks and modalities.

Why remember this: Real assistants must be good at many things; MT-GRPO shows a practical, plug-in way to build balanced competence—turning an average ā€œA-minusā€ that hides a ā€œDā€ into a report card where every subject is solid.

Practical Applications

  • Build AI tutors that improve both math and logic evenly so students don’t develop hidden gaps.
  • Train coding assistants to balance algorithmic skill with test-case reasoning and error handling.
  • Develop customer support bots that are reliable across intents: troubleshooting, returns, and policy explanations.
  • Enhance safety pipelines by lifting the weakest check (e.g., privacy or toxicity) without harming others.
  • Prepare general-purpose agents that handle planning, deduction, and pattern recognition in balanced ways.
  • Create fair multi-domain chatbots by prioritizing underperforming domains while keeping overall quality high.
  • Speed up reaching minimum reliability bars during RL post-training for constrained training budgets.
  • Improve benchmark suites where a single failing task blocks deployment (raise the floor quickly).
  • Stabilize multi-task fine-tuning for small models by enforcing practice ratios and balanced gains.
  • Apply to multimodal settings (text+vision) to prevent one modality from dominating training.
#Multi-Task Learning#GRPO#Reinforcement Learning Post-Training#Improvement-Aware Reweighting#Ratio-Preserving Sampling#Zero-Gradient Prompts#Distributional Robustness#Worst-Task Accuracy#Acceptance-Aware Oversampling#LLM Reasoning#Curriculum Sampling#Task Balancing#Policy Gradient#DAPO#SEC-GRPO
Version: 1