Don't Hallucinate, Abstain: Identifying LLM Knowledge Gaps via Multi-LLM Collaboration

This work first adapts existing approaches to model calibration and to model adaptation through fine-tuning or prompting, analyzing their ability to abstain from generating low-confidence outputs, and then proposes two novel approaches based on multi-LLM collaboration, i.e., LLMs probing other LLMs for knowledge gaps, either cooperatively or competitively.

by Shangbin Feng, Weijia Shi and others · February 1, 2024 · 54 citations
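
The cooperative variant of the collaboration idea lends itself to a short illustration. The sketch below is a minimal, hypothetical rendering of that setting, one LLM proposes an answer, peer LLMs review it, and the system abstains when peer agreement is low. The `LLM` callable type, the prompt templates, and the `cooperative_abstain` helper are assumptions for illustration, not the paper's exact protocol.

```python
# A minimal sketch of cooperative multi-LLM abstention, NOT the paper's
# exact method. Each LLM is modeled as a hypothetical prompt -> text
# callable; swap in real API clients as needed.
from typing import Callable, List, Optional

LLM = Callable[[str], str]  # assumption: prompt in, completion out

def cooperative_abstain(
    question: str,
    answerer: LLM,
    reviewers: List[LLM],
    min_agreement: float = 0.5,
) -> Optional[str]:
    """Return the answerer's response, or None (abstain) if too few
    peer LLMs endorse it as factually correct."""
    answer = answerer(f"Question: {question}\nAnswer concisely:")
    yes_votes = sum(
        reviewer(
            f"Question: {question}\n"
            f"Proposed answer: {answer}\n"
            "Is this answer factually correct? Reply YES or NO."
        ).strip().upper().startswith("YES")
        for reviewer in reviewers
    )
    # Abstain when peer agreement falls below the threshold,
    # signalling a likely knowledge gap.
    return answer if yes_votes / len(reviewers) >= min_agreement else None
```

With stub models, `cooperative_abstain("Who wrote Hamlet?", model_a, [model_b, model_c])` would return the proposed answer only if at least half the reviewers reply YES, and otherwise abstain; the competitive variant described in the abstract would instead have peer LLMs generate adversarial challenges rather than simple endorsements.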