Long-horizon AI assistants can retrieve stale, low-quality, or conflicting memories and then answer with unwarranted confidence, which is risky.
Panini lets an AI keep learning new facts without updating its weights by storing them as small, linked Q&A facts in an external memory.
The paper shows that language models with a search tool often retrieve more information than they need, which wastes compute and can hurt accuracy on unanswerable questions.