Personalized AI helpers can accidentally echo a user's past opinions instead of stating objective facts, a failure the authors call personalization-induced hallucinations.
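A toy probe makes the failure mode concrete. Everything below (the `ask` helper, the question, the stored profile, the `llm` callable) is a hypothetical illustration of the idea, not the authors' actual evaluation setup:

```python
# Hypothetical setup: question, profile, and helper names are illustrative.
FACT_QUESTION = "Is the Great Wall of China visible from space with the naked eye?"
PROFILE = "The user has previously said the Great Wall is clearly visible from orbit."

def ask(llm, question, profile=None):
    """Query an LLM callable, optionally prepending a persisted user profile."""
    context = f"Known user background: {profile}\n\n" if profile else ""
    return llm(context + question)

def shows_personalization_drift(llm):
    """A personalization-induced hallucination shows up when the profiled
    answer bends toward the user's past (false) opinion while the neutral
    answer states the objective fact."""
    neutral = ask(llm, FACT_QUESTION)
    profiled = ask(llm, FACT_QUESTION, profile=PROFILE)
    return neutral != profiled
```

The point of the contrast is that the only thing changing between the two calls is the remembered opinion, so any divergence in the factual answer is attributable to personalization.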
Latent diffusion models are great at generating images but slow to learn the meaning of scenes, because their training objective mostly teaches them to clean up noise rather than to understand objects and layouts.
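For intuition, here is a minimal sketch of the standard DDPM noise-prediction loss, which I am assuming is what "the training goal" refers to; `model` and the `alphas_cumprod` schedule are placeholders:

```python
import torch
import torch.nn.functional as F

def ddpm_loss(model, x0, alphas_cumprod):
    """Standard DDPM noise-prediction objective (sketch).

    `model(x_t, t)` is a placeholder noise predictor and `alphas_cumprod`
    a precomputed (T,) cumulative noise schedule.
    """
    b = x0.shape[0]
    t = torch.randint(0, len(alphas_cumprod), (b,), device=x0.device)
    noise = torch.randn_like(x0)
    a_bar = alphas_cumprod[t].view(-1, 1, 1, 1)
    # Forward process: corrupt clean latents with Gaussian noise.
    x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * noise
    # The model is graded only on recovering the injected noise; nothing
    # in the loss directly asks it to recognize objects or layouts.
    return F.mse_loss(model(x_t, t), noise)
```

Any grasp of scene content has to emerge as a side effect of getting that reconstruction right, which is the slow path the summary describes.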