Confidence Estimation for LLMs in Multi-turn Interactions
Key Summary
- This paper studies how sure (confident) large language models are during multi-turn chats where clues arrive step by step.
- It argues that a good confidence signal should be calibrated at every step and should usually rise when real new information is added.
- The authors introduce InfoECE, a per-information-level calibration score, and use Kendall's tau as a trend score for monotonicity.
- They build controlled datasets using a Hinter-Guesser setup and adapt quizbowl-style data so clues get steadily more helpful.
- Common confidence methods, like asking the model to self-report or checking P(TRUE), are often poorly calibrated and can be fooled by turn count.
- Their new probe, P(SUFFICIENT), asks whether the information so far is enough to uniquely pin down the answer, and it works better in many cases.
- Placebo hints (extra turns with no real information) reveal which methods truly track information versus just the number of turns.
- Across models and tasks, accuracy is similar for multi-turn versus single-turn summaries, but confidence behaves very differently.
- Even the best method still has room to improve, so the paper provides a foundation and clear targets for future research.
- This matters for safer AI assistants and agents that must know when to ask for more details and when it's okay to act.
Why This Research Matters
In real life, AI assistants and agents must know when they truly have enough information to act, and when they should ask for more details. This work gives us a clearer way to judge and improve that judgment during multi-turn conversations, which is how people actually interact with AI. Better confidence signals help prevent harmful overconfidence and reduce frustration from needless hesitation. They also enable smarter tool use, like searching or asking clarifying questions only when needed. With methods like P(SUFFICIENT), products can be safer, more cost-efficient, and more trustworthy. Over time, this turns chatty models into reliable teammates who act carefully and confidently for the right reasons.
Detailed Explanation
01 Background & Problem Definition
🍞 Hook: Imagine playing 20 Questions. At first, you barely know anything, so your guesses are wobbly. As you hear more clues, you feel more and more sure. That growing "I'm confident now!" feeling is what we want AI to have during conversations.
🥬 The concept: We use large language models (LLMs) to chat, plan, and solve problems. In real conversations, information arrives over several turns, not all at once. The big challenge is: can an AI keep track of how sure it should be as the conversation unfolds, and do that reliably?
🍞 Anchor: Think of an AI detective. After clue 1, it should be cautious. After clue 5, it should feel bolder, but only if the clues truly help.
🍞 Hook: You know how a friend might claim they're "100% sure," but then they're wrong? That's not very trustworthy.
🥬 The concept (Hallucinations): Hallucinations are when an AI says something false but sounds very confident. This is dangerous in high-stakes settings like medicine, finance, or safety.
How it works (problem story):
- The AI answers a user's question.
- It often sounds certain even if it's guessing.
- People may believe it because it sounds confident.
Why it matters: Without reliable confidence, we can't tell when to trust the AI, when to ask for more info, or when to stop and check. This blocks safe use in real life.
🍞 Anchor: If an AI tells a doctor, "I'm sure this is Disease X," but it's just guessing, that could lead to a harmful decision.
🍞 Hook: Imagine a thermometer that says it's 70°F when it's actually 50°F outside. You'd call that thermometer badly calibrated.
🥬 The concept (Confidence estimation and calibration): Confidence estimation is the AI's self-check of how likely its answer is to be right. Calibration means that when the AI says "I'm 70% sure," it is correct about 70% of the time.
How it works:
- The AI gives an answer and a confidence score (0 to 1).
- We compare that score to how often it's actually right.
- The closer the match, the better the calibration.
Why it matters: Poor calibration means the AI either over-trusts itself or is too timid; both lead to bad decisions.
🍞 Anchor: If a model says it's 90% sure and is right 9 out of 10 times, that's well calibrated.
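To make calibration concrete, here is a small illustrative Python sketch (ours, not from the paper): it buckets predictions by their stated confidence and compares each bucket's average confidence with its observed accuracy. The `records` values are made-up example data.

```python
# Minimal calibration check: bucket predictions by stated confidence,
# then compare average confidence to observed accuracy in each bucket.
# The `records` data below is made up for illustration.
records = [  # (confidence reported by the model, whether the answer was correct)
    (0.95, True), (0.90, True), (0.92, False), (0.60, True),
    (0.55, False), (0.30, False), (0.35, True), (0.80, True),
]

num_bins = 5  # buckets: [0, 0.2), [0.2, 0.4), ..., [0.8, 1.0]
bins = [[] for _ in range(num_bins)]
for conf, correct in records:
    idx = min(int(conf * num_bins), num_bins - 1)
    bins[idx].append((conf, correct))

for i, bucket in enumerate(bins):
    if not bucket:
        continue
    avg_conf = sum(c for c, _ in bucket) / len(bucket)
    accuracy = sum(ok for _, ok in bucket) / len(bucket)
    # A small gap between the two numbers means good calibration in this bucket.
    print(f"bin {i}: avg confidence {avg_conf:.2f}, accuracy {accuracy:.2f}, "
          f"gap {abs(avg_conf - accuracy):.2f}")
```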
🍞 Hook: When you build a LEGO set step by step, your confidence you're doing it right grows as pieces click into place, unless a piece doesn't fit.
🥬 The concept (Multi-turn conversations): These are back-and-forth chats where clues arrive across turns, and ambiguity shrinks.
How it works:
- Turn 1: little info, many possible answers.
- Turns 2 to n: more clues arrive.
- The answer space narrows; the AI should adjust confidence.
Why it matters: Most real interactions are multi-turn, so a confidence signal that adapts turn by turn is essential.
🍞 Anchor: A travel assistant asks follow-up questions before suggesting flights; as details arrive, it should become more certain.
🍞 Hook: If two quizzes have different numbers of questions, you shouldn't compare raw scores without adjusting.
🥬 The concept (What was missing): Past work focused on single-turn cases. We lacked a way to judge confidence that changes across turns and across conversations of different lengths.
How it works:
- Define what "good" confidence should look like in multi-turn settings.
- Create fair metrics that adjust for dialogue length.
- Build datasets where information grows in a controlled way.
Why it matters: Without this, we can't tell which confidence method is truly reliable in real conversations.
🍞 Anchor: You can't judge a marathoner and a sprinter by the same timer; you need the right metric for each race.
🍞 Hook: Imagine playing 20 Questions with a helpful hint-giver who always adds a clue that actually helps.
🥬 The concept (The paper's setup): The authors create a controlled "Hinter-Guesser" pipeline and adapt incremental-quiz datasets so that each turn adds useful info. They then test multiple confidence methods and introduce new metrics to judge them.
How it works:
- Ensure each new turn really adds information.
- Make the model answer and report confidence each turn.
- Check calibration at each information level and whether confidence usually rises with real info.
Why it matters: This creates a fair playground to see which confidence methods truly track evidence, not just time or turn count.
🍞 Anchor: If a clue says "It's a city in Southeast Asia," and later "It's inland with a tropical climate," a good system should become more confident only because these facts truly narrow the choices.
02 Core Idea
🍞 Hook: You know how you won't shout "Bingo!" until you have enough matching numbers, not just a lucky guess? You wait until the evidence is sufficient.
🥬 The concept (Aha! in one sentence): Instead of asking "Am I right?", ask "Do I have enough information for my answer to be the only reasonable one?" That is the P(SUFFICIENT) idea.
How it works:
- At each turn, the model gives its current best answer.
- We probe the model: "Given the clues so far, is the information sufficient to uniquely support this answer?"
- The model outputs the probability that the information is sufficient (a yes/no question scored as a probability).
Why it matters: A model can be accidentally correct early on; sufficiency asks if the clues truly justify confidence, aligning confidence with evidence rather than luck.
🍞 Anchor: Guessing "television" early in 20Q might be right by chance. P(SUFFICIENT) stays low until enough distinct clues prove it really must be "television."
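As a rough illustration of how such a probe can be phrased (the wording below is our assumption, not the paper's exact template), the sufficiency question is posed over the clues seen so far and scored by the probability the model assigns to "Yes":

```python
# Illustrative sufficiency-probe prompt. The exact wording is an assumption,
# not the paper's template.
def build_sufficiency_prompt(clues: list[str], current_guess: str) -> str:
    clue_text = "\n".join(f"- {c}" for c in clues)
    return (
        "Here are the clues revealed so far:\n"
        f"{clue_text}\n\n"
        f"Current guess: {current_guess}\n\n"
        "Is the information above sufficient to uniquely determine the answer? "
        "Answer with a single word: Yes or No."
    )

# P(SUFFICIENT) is then read off as the probability the model assigns to "Yes"
# (e.g., from the logits of the first answer token), not from sampled text.
print(build_sufficiency_prompt(
    ["It is a city in Southeast Asia.", "It is inland with a tropical climate."],
    current_guess="Chiang Mai",
))
```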
Multiple analogies for the same idea:
- Jigsaw puzzle: Don't claim the picture is a tiger after placing only two sky-blue pieces. Wait until enough edge and stripe pieces lock in. P(SUFFICIENT) asks, "Do we have enough pieces to be sure?"
- Courtroom: A verdict shouldn't be based on a hunch. P(SUFFICIENT) is like the judge asking, "Is there enough evidence beyond reasonable doubt?"
- Safe cracking: You might hear clicks, but you shouldn't yank the handle until all tumblers align. P(SUFFICIENT) checks alignment, not just noise.
🍞 Hook: When you read a mystery, you don't treat chapter 1 like chapter 10. Clues build.
🥬 The concept (Before vs. After): Before, confidence methods often treated each turn separately or asked, "Is my answer true?", which can be misleading early. After, the system tracks when new info truly narrows the answer space, so confidence usually rises only with real, useful clues.
How it works:
- Control information growth turn by turn.
- Normalize turns into information levels for fair comparison.
- Use sufficiency probing to align confidence with identifiability.
Why it matters: The AI becomes better at knowing when to act, when to ask questions, and when to wait.
🍞 Anchor: A planning assistant won't book a hotel until it's sure about dates, city, and budget; its sufficiency rises only when those details are confirmed.
🍞 Hook: If you say you're 80% sure, you should be right about 80% of the time.
🥬 The concept (InfoECE, the calibration metric): InfoECE checks calibration at equal information levels across conversations of different lengths so comparisons are fair.
How it works:
- Convert each turn position into a normalized "information level" (like 20%, 40%, and so on).
- Group turns by these levels and compare average confidence to actual accuracy.
- Smaller gaps mean better calibration.
Why it matters: Without normalizing, a 3-turn chat and an 8-turn chat can't be fairly compared.
🍞 Anchor: Comparing mid-game confidence at 50% progress across all games tells you who is realistically assessing how well they're doing.
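A minimal sketch of this computation (our illustrative code, with made-up per-turn records): each turn carries a normalized information level s = i / L, turns are grouped into level bins, and InfoECE is the average absolute gap between mean confidence and accuracy across bins. The paper may bin or weight differently; this unweighted average is a simplification.

```python
# Illustrative InfoECE computation (not the authors' code).
# Each record is (information level s in [0, 1], confidence, was_correct).
records = [
    (0.25, 0.30, False), (0.50, 0.55, True), (0.75, 0.70, True), (1.00, 0.90, True),
    (0.33, 0.40, False), (0.67, 0.65, True), (1.00, 0.85, True),
]

def info_ece(records, num_bins: int = 5) -> float:
    bins = [[] for _ in range(num_bins)]
    for level, conf, correct in records:
        idx = min(int(level * num_bins), num_bins - 1)  # bins: 0-20%, 20-40%, ...
        bins[idx].append((conf, correct))
    gaps = []
    for bucket in bins:
        if not bucket:
            continue
        avg_conf = sum(c for c, _ in bucket) / len(bucket)
        accuracy = sum(ok for _, ok in bucket) / len(bucket)
        gaps.append(abs(avg_conf - accuracy))
    return sum(gaps) / len(gaps)  # lower means better calibrated

print(f"InfoECE: {info_ece(records):.3f}")
```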
🍞 Hook: If you add real clues, confidence should usually go up, like turning on more lights in a dark room.
🥬 The concept (Monotonicity with Kendall's tau): Kendall's tau measures whether confidence tends to trend upward as information increases.
How it works:
- Look at all turn pairs (earlier vs. later) in a dialogue.
- Count how often later confidence is higher than earlier.
- Turn this into a score from -1 (downward) to +1 (upward).
Why it matters: A good method should reward real clues, not just later turns.
🍞 Anchor: If clue 4 is better than clue 3, your confidence should usually be higher at clue 4.
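Here is a small implementation of the simple pairwise version described above (our sketch; the paper may use a tie-handling variant of Kendall's tau):

```python
# Kendall's tau over one dialogue's per-turn confidences, in the simple
# pairwise form described above; tie handling may differ from the paper.
def kendall_tau(confidences: list[float]) -> float:
    concordant = discordant = 0
    n = len(confidences)
    for i in range(n):
        for j in range(i + 1, n):          # every (earlier, later) turn pair
            if confidences[j] > confidences[i]:
                concordant += 1            # later confidence is higher
            elif confidences[j] < confidences[i]:
                discordant += 1            # later confidence is lower
    total_pairs = n * (n - 1) / 2
    return (concordant - discordant) / total_pairs  # +1 upward, -1 downward

print(kendall_tau([0.2, 0.3, 0.5, 0.4, 0.8]))  # mostly rising, so tau is 0.8
```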
🍞 Hook: Sometimes people talk more but say nothing. Does the AI notice?
🥬 The concept (Placebo hints): A placebo hint is an extra turn that adds no useful information. It tests whether confidence rises just because time passes.
How it works:
- Compare confidence after a real hint versus after a placebo hint.
- A good method rises after real hints and stays flat or drops after placebo.
- This isolates true information tracking from turn-count bias.
Why it matters: We want confidence that follows evidence, not the clock.
🍞 Anchor: "Is this a valid hint? Yes." adds no facts. P(SUFFICIENT) often lowers or keeps confidence flat, which is what we want.
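A sketch of how this stress test can be tallied (illustrative code with hypothetical confidence values): record confidence before the extra turn, after a real hint, and after a placebo hint, then compare the average changes.

```python
# Illustrative placebo-hint comparison; the confidence values are hypothetical.
# Each tuple: (confidence before extra turn, after a real hint, after a placebo hint)
trials = [
    (0.40, 0.55, 0.41),
    (0.50, 0.70, 0.48),
    (0.30, 0.45, 0.33),
]

real_deltas = [after_real - before for before, after_real, _ in trials]
placebo_deltas = [after_placebo - before for before, _, after_placebo in trials]

print(f"mean change after real hints:    {sum(real_deltas) / len(real_deltas):+.3f}")
print(f"mean change after placebo hints: {sum(placebo_deltas) / len(placebo_deltas):+.3f}")
# A method that tracks evidence shows a clearly positive change for real hints
# and a near-zero (or negative) change for placebos.
```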
🍞 Hook: Asking "Am I right?" is different from asking "Is there enough proof I'm right?"
🥬 The concept (Comparing probes): P(TRUE) asks if the current answer is correct; P(SUFFICIENT) asks if there is enough information to be sure. Self-consistency (SC) checks how often many sampled answers agree. Verbalized confidence asks the model to state a number.
How it works:
- P(TRUE) can be fooled by lucky guesses.
- P(SUFFICIENT) aligns confidence with evidence that rules out alternatives.
- SC is often well calibrated in fully specified tasks but can be weak on turn-by-turn monotonicity.
- Verbalized scores are easy to collect but often miscalibrated and unstable.
Why it matters: Picking the right confidence tool changes how safe and useful the AI becomes.
🍞 Anchor: In 20Q-style games, P(SUFFICIENT) usually climbs only when each clue truly narrows down the hidden answer.
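For example, self-consistency can be sketched as follows; `sample_answer` is a stand-in for sampling one answer from the model and is faked here so the snippet runs on its own.

```python
import collections
import random

# Illustrative self-consistency (SC) confidence. `sample_answer` stands in for
# sampling one answer from the model given the dialogue so far; it is faked
# with a weighted random choice so this sketch runs on its own.
def sample_answer(dialogue: str) -> str:
    return random.choice(["Bangkok", "Bangkok", "Bangkok", "Hanoi"])

def self_consistency(dialogue: str, num_samples: int = 20) -> tuple[str, float]:
    answers = [sample_answer(dialogue) for _ in range(num_samples)]
    best_answer, best_count = collections.Counter(answers).most_common(1)[0]
    return best_answer, best_count / num_samples  # agreement fraction = confidence

answer, confidence = self_consistency("Clue 1: a city in Southeast Asia ...")
print(answer, confidence)
```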
03 Methodology
At a high level: Input (dialogue with growing clues) → Normalize turns into information levels → Ask models to answer and estimate confidence each turn → Evaluate calibration (InfoECE) and monotonicity (Kendall's tau) → Stress-test with placebo hints and compare multi-turn vs. single-turn summary.
Step-by-step recipe with Sandwich explanations for key pieces:
- Dialogue and turns 🍞 Hook: Think of a treasure hunt where each turn gives a new hint about the treasure's location. 🥬 The concept: A multi-turn dialogue is a sequence of question-answer steps where each new turn adds information. How it works:
- Keep a history: (q1, a1, q2, a2, ..., q_{i-1}, a_{i-1}).
- At turn i, provide the next hint and ask the model to guess and give confidence.
- Record whether the guess is correct. Why it matters: This structure lets us track how confidence should change with each new hint. 🍞 Anchor: In "Guess My City," early you hear "Asia," later "Southeast Asia," then "inland," then "tropical." (See the per-turn loop sketch after this recipe.)
- Normalized information level 🍞 Hook: Comparing halftime scores across games is fairer than comparing minute 17 in one game to minute 42 in another. 🥬 The concept: We normalize each turn to a fractional information level s = i / L, where L is that dialogue's length. How it works:
- Compute s for each turn.
- Bin turns by s (e.g., 0-20%, 20-40%, and so on).
- Compare confidence and accuracy within the same bins across dialogues of different lengths. Why it matters: This keeps comparisons fair when some chats are shorter or longer. 🍞 Anchor: You compare everyone's confidence at 50% progress, not by raw turn number.
- Calibration via InfoECE 🍞 Hook: If your weather app says 70% chance of rain, it should rain 7 out of 10 times. 🥬 The concept: InfoECE measures the average gap between confidence and actual correctness within each information-level bin. How it works:
- For each bin, compute average confidence and average accuracy.
- Take their absolute difference and average across bins.
- Lower numbers mean better calibration. Why it matters: It tells whether confidence matches reality at comparable stages of information. 🍞 Anchor: If at 40-60% info, models say 60% sure and are right 60% of the time, calibration is good there.
- Monotonicity via Kendall's tau 🍞 Hook: Climbing stairs should take you higher step by step. 🥬 The concept: Kendall's tau checks whether confidence tends to rise as turns progress. How it works:
- Look at every pair of turns (earlier, later) in a dialogue.
- Count how often later confidence is higher than earlier (concordant) vs. lower (discordant).
- Score from -1 (downward) to +1 (upward). Why it matters: A good method rewards real clues with higher confidence. 🍞 Anchor: If clue 5 is stronger than clue 3, confidence should tend to be higher at 5 than at 3.
- Datasets and the Hinter-Guesser paradigm 🍞 Hook: A helpful game master reveals fair, useful clues each round. 🥬 The concept: Hinter-Guesser ensures each turn adds a real, non-trivial hint and continues until the answer is both correct and unique. How it works:
- Hinter (LLM) gives helpful, uncertainty-reducing hints.
- Guesser (LLM) makes a best guess and marks whether multiple answers still fit.
- Keep only runs that converge to a unique, correct answer. Why it matters: Guarantees progressive information, enabling fair monotonicity testing. 🍞 Anchor: For 20Q and GUESS, clues narrow the entity/city; for GRACE/TRICKME, quizbowl clues get more specific. (See the Hinter-Guesser sketch at the end of this section.)
- Confidence methods under test 🍞 Hook: There are different thermometers; we need to know which reads temperature best. 🥬 The concept: Compare several confidence estimators. How it works:
- Verbalized confidence: ask the model to say a number (with or without chain-of-thought).
- Self-consistency (SC): sample many answers; the fraction agreeing with the chosen one is the confidence.
- Logit-based probes: force a tiny classification and use the internal probabilities. P(TRUE) asks "Is my answer true?" P(SUFFICIENT) asks "Do we have enough info to be sure?" Why it matters: Different tools have different strengths; some can be fooled by lucky guesses or turn count. 🍞 Anchor: In under-specified games, P(SUFFICIENT) often tracks real evidence best.
- Placebo hints test 🍞 Hook: Talking more isn't the same as saying more. 🥬 The concept: Add a turn that adds no real info and see if confidence still rises. How it works:
- Compare three states: before the turn, with a real hint, with a placebo hint.
- Good methods go up with real hints but stay flat or go down with placebos. Why it matters: Filters out methods that just grow with the turn index. 🍞 Anchor: "Is this a valid hint? Yes." shouldn't raise confidence if it adds no facts.
- Multi-turn versus single-turn summary 🍞 Hook: Reading a story chapter by chapter feels different than reading a summary, even if facts match. 🥬 The concept: Compare performance when clues are fed turn-by-turn versus compressed into one concise prompt. How it works:
- Build a summary of all clues up to a turn.
- Compare accuracy and confidence between formats. Why it matters: Reveals whether confidence depends on conversational structure vs. content. 🍞 Anchor: The same facts, told as a tidy summary, can change how models assess sufficiency.
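The per-turn loop referenced in the "Dialogue and turns" step might look like the sketch below (our illustration; `ask_model`, the hints, and the confidence values are hypothetical stand-ins, not the paper's pipeline). Records like these feed directly into InfoECE and Kendall's tau.

```python
# Illustrative per-turn evaluation loop (a sketch, not the authors' code).
# `ask_model` stands in for prompting an LLM with the history plus the new
# hint and getting back (guess, confidence); here it is faked so the code runs.
def ask_model(history: list[str], hint: str) -> tuple[str, float]:
    return "Bangkok", min(1.0, 0.2 + 0.2 * len(history))

def run_dialogue(hints: list[str], gold_answer: str) -> list[dict]:
    history, records = [], []
    L = len(hints)
    for i, hint in enumerate(hints, start=1):
        guess, confidence = ask_model(history, hint)
        history.append(hint)
        records.append({
            "info_level": i / L,  # normalized information level s = i / L
            "confidence": confidence,
            "correct": guess.strip().lower() == gold_answer.lower(),
        })
    return records

records = run_dialogue(
    ["It is in Asia.", "It is in Southeast Asia.", "It is a capital city."],
    gold_answer="Bangkok",
)
print(records)  # records like these feed into InfoECE and Kendall's tau
```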
Secret sauce (what's clever):
- Tie confidence to identifiability (P(SUFFICIENT)) instead of mere correctness.
- Normalize by information level (InfoECE) for fair, per-stage calibration.
- Stress-test with placebo hints to separate signal from turn-count noise.
- Evaluate monotonicity with Kendall's tau to see if confidence truly climbs with evidence.
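And the Hinter-Guesser loop from the datasets step can be sketched roughly as follows; `hinter`, `guesser`, and the stopping rule are simplified, hypothetical stand-ins for the two LLM roles described above.

```python
# Rough Hinter-Guesser sketch (our illustration; the prompts and stopping
# logic are simplified stand-ins, not the paper's exact pipeline).
def hinter(target: str, hints_so_far: list[str]) -> str:
    # Hypothetical: an LLM that emits one new, genuinely informative hint.
    canned = ["It is in Asia.", "It is in Southeast Asia.", "It is Thailand's capital."]
    return canned[len(hints_so_far)]

def guesser(hints: list[str]) -> tuple[str, bool]:
    # Hypothetical: an LLM that returns (best guess, whether several answers still fit).
    if len(hints) < 3:
        return "Jakarta", True
    return "Bangkok", False

def generate_dialogue(target: str, max_turns: int = 10):
    hints = []
    for _ in range(max_turns):
        hints.append(hinter(target, hints))
        guess, still_ambiguous = guesser(hints)
        if guess == target and not still_ambiguous:
            return hints               # converged: unique and correct
    return None                        # discard runs that never converge

print(generate_dialogue("Bangkok"))
```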
04 Experiments & Results
The test: The authors measured two main things: (1) calibration at each information level using InfoECE (lower is better), and (2) monotonicity using Kendall's tau (higher is better). They ran four open-source models (Llama 3.1 8B/70B and Qwen 2.5 7B/72B) across four datasets (20Q and GUESS, which are under-specified; GRACE and TRICKME, which are fully specified but hard).
The competition: Methods included verbalized confidence (with and without chain-of-thought), self-consistency (SC), P(TRUE), and the new P(SUFFICIENT).
Scoreboard with context:
- Calibration (InfoECE): Verbalized confidence and P(TRUE) were generally poorly calibrated (often 40-80 points of InfoECE). SC tended to be the most calibrated on fully specified incremental QA (GRACE/TRICKME). In under-specified games (20Q/GUESS), P(SUFFICIENT) achieved strikingly better calibration in some cases: with Llama3.1-70B, for example, InfoECE dropped to about 13.05 on 20Q and 5.27 on GUESS, like moving from a shaky "C" to a confident "A" while other methods stay stuck around the "D" range. Overall, SC is a strong default for calibration on fully specified tasks, while P(SUFFICIENT) is especially effective when answers become identifiable step by step.
- Monotonicity (Kendall's tau) on the current answer: Ideally, confidence should rise with real clues. P(SUFFICIENT) most consistently showed strong upward trends, such as tau of about 83.76% on GUESS with Qwen2.5-72B and about 71.38% on TRICKME with Llama3.1-70B. SC often had weak monotonicity on under-specified tasks.
- Monotonicity against the ground-truth answer: If you score confidence relative to the true answer (not just the model's guess), all methods' tau scores jump, and P(SUFFICIENT) usually leads (e.g., about 93.91% on GUESS with Qwen2.5-72B; about 91.62% on 20Q with Llama3.1-70B). This suggests models can recognize when clues align with the truth, even if their current guess hasn't caught up.
Surprising findings and stress tests:
- Information vs. turn count (placebo hints): Across 40 comparisons, informative hints produced more significant confidence changes than placebos (27 vs. 18 at p < 0.05). P(SUFFICIENT) most cleanly separated real information from mere turn accumulation, often decreasing confidence after a placebo; that is the desired behavior, showing it tracks evidence, not time.
- P(TRUE) sometimes confounded by turn count: On GUESS, P(TRUE) often rose even for placebo hints, indicating a length artifact: confidence creeping up just because another turn happened.
- Verbalized confidence was unstable: The easy-to-ask, self-reported numbers were often poorly calibrated and sometimes moved the wrong way after real or placebo hints.
- SC moderately robust: It usually didn't get fooled by placebos, and it gained with real info, but it wasn't perfect and sometimes still picked up turn-index effects.
Multi-turn versus single-turn summary:
- Accuracy was similar across formats (mean absolute gap < 1%), so models didn't clearly "get lost" in this progressive setup.
- Confidence behaved very differently. P(SUFFICIENT) often dropped sharply for single-turn summaries (e.g., on 20Q for Qwen2.5-7B: roughly 63 down to 13), implying that it benefits from the turn-by-turn structure when judging sufficiency. Verbalized scores sometimes went up in summaries for larger models without matching accuracy gains, a sign of miscalibration. SC was relatively stable and sometimes improved in summaries on 20Q.
Takeaway of results:
- No method is perfect, but P(SUFFICIENT) best matches the two desiderata: calibration (especially in under-specified settings) and monotonicity (tracking true information gains). SC is strong for calibration in fully specified incremental QA but often lacks monotonicity. Verbalized and P(TRUE) methods are convenient but unreliable in multi-turn dynamics, with P(TRUE) especially prone to turn-count bias in open-ended tasks.
05 Discussion & Limitations
Limitations:
- The controlled setups simplify real chats: fewer topic shifts, misunderstandings, or mixed intents. Real-world transfer may therefore require more robustness.
- The tasks focus on information-seeking (guessing entities/cities, quizbowl), not creative or collaborative dialogues. Confidence behavior might differ in brainstorming or negotiation.
- The evaluation centers on calibration and monotonicity. Human trust and utility (e.g., when to hand off to a person) need user studies.
- The paper studies confidence, not full uncertainty decomposition (like aleatoric vs. epistemic). Bridging confidence with broader uncertainty remains open.
Required resources:
- Access to LLMs capable of returning logits/probabilities for probes and generating multiple samples for SC.
- Datasets with progressive hints (or the HinterāGuesser pipeline) to ensure information actually grows each turn.
- Enough compute to run per-turn probing and sampling across many dialogues.
When not to use:
- If the conversation does not add information progressively (e.g., small talk), sufficiency-style probes may be less meaningful.
- In highly creative tasks where there may be many equally acceptable outputs, asking for strong sufficiency may be ill-posed.
- If you cannot access logits or sampling due to system constraints, some methods (P(SUFFICIENT), SC) may be impractical.
Open questions:
- Can we design a unified method that achieves both excellent calibration and strong monotonicity across all regimes?
- How to robustly detect and down-weight filler turns in wild, messy conversations?
- Can we fuse sufficiency signals with retrieval, tools, or planners so agents know exactly when to ask, search, or act?
- What training-time interventions (e.g., fine-tuning with sufficiency-aware objectives) most improve multi-turn confidence?
- How do user interfaces display evolving confidence so people make better decisions without over-trusting the AI?
06 Conclusion & Future Work
3-sentence summary: This paper introduces a framework to judge and improve how LLMs estimate confidence during multi-turn conversations, where clues arrive over time. It proposes two core targets, per-level calibration and monotonicity, and new tools like InfoECE and sufficiency probing (P(SUFFICIENT)) to test them fairly. Experiments show common methods often fail in dynamic dialogues, while P(SUFFICIENT) performs comparatively better, though the problem is far from solved.
Main achievement: Reframing confidence around evidence sufficiency and providing a controlled, length-normalized evaluation (InfoECE + Kendall's tau + placebo stress tests) that reveals what truly tracks information versus turn count.
Future directions:
- Develop training and inference methods that directly optimize sufficiency-aware confidence and monotonicity.
- Build richer, messier multi-turn datasets (topic shifts, repairs, competing intents) to stress-test methods.
- Integrate sufficiency with tool-use and retrieval so agents can decide when to ask, search, or act.
- Conduct user studies to connect better calibration and monotonicity with real improvements in trust and outcomes.
Why remember this: In real conversations, confidence should grow with real evidence, not with the clock. This paper delivers the first systematic way to test that idea and a concrete probe, P(SUFFICIENT), that moves us closer to reliable, decision-ready conversational AI.
Practical Applications
- Customer support bots that ask clarifying questions when sufficiency is low instead of guessing, and escalate when needed.
- Autonomous agents that only execute high-impact actions (book, buy, send, deploy) when P(SUFFICIENT) passes a threshold.
- Medical triage assistants that surface disclaimers and suggest follow-up questions whenever sufficiency is low.
- Coding copilots that request minimal repro steps or run tests before refactoring when they lack sufficient information.
- Research assistants that trigger retrieval or cite sources when P(SUFFICIENT) is low, and summarize when it's high.
- Educational tutors that prompt students with targeted hints until sufficiency for the final answer is achieved.
- Legal or compliance helpers that flag low-sufficiency answers for human review instead of making risky claims.
- Financial advisory chatbots that hold off on recommendations until required data fields raise P(SUFFICIENT).
- Voice assistants that reduce follow-up questions when sufficiency is already high, speeding up routine tasks.
- Agent frameworks that use monotonicity checks to detect confusing conversations and reset or re-ask strategically.