Presentation

Attribution in Large Language Models
Description
The general public now has access to Large Language Models (LLMs), allowing users to obtain quick answers on any topic. While the generated content is often correct, which builds trust in these models, all auto-regressive next-token-prediction LLMs can hallucinate. This can spread misinformation and lead to severe consequences when LLMs are used in high-risk applications, so we need proper validation systems for the generated content.
We develop an LLM attribution algorithm using post-generation retrieval, based on Retrieval-Augmented Generation (RAG). The algorithm is simple to implement and relies on a pre-trained LLM, making it accessible to the public.
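To make the idea of post-generation retrieval attribution concrete, the sketch below shows one minimal way such a pipeline could look: an already-generated answer is split into sentences, each sentence is matched against a reference corpus, and sentences without a sufficiently similar passage are flagged as potentially unsupported. The corpus, the TF-IDF retriever, the sentence splitting, and the support threshold are all illustrative assumptions, not the algorithm described in the presentation.

```python
# Minimal sketch of post-generation retrieval attribution (illustrative only).
# Assumes the answer was already produced by a pre-trained LLM; retrieval here
# uses a simple TF-IDF similarity instead of a full RAG retriever.
import re

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical reference corpus against which the answer is attributed.
CORPUS = [
    "The Eiffel Tower was completed in 1889 for the Paris World's Fair.",
    "Mount Everest is the highest mountain above sea level, at 8,849 m.",
    "Water boils at 100 degrees Celsius at standard atmospheric pressure.",
]

SUPPORT_THRESHOLD = 0.3  # illustrative cut-off for calling a sentence "supported"


def attribute(answer: str) -> list[dict]:
    """Attach the best-matching corpus passage to each sentence of the answer."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", answer) if s.strip()]

    # Embed corpus passages and answer sentences in the same TF-IDF space.
    vectorizer = TfidfVectorizer().fit(CORPUS + sentences)
    corpus_vecs = vectorizer.transform(CORPUS)
    sentence_vecs = vectorizer.transform(sentences)

    results = []
    for i, sentence in enumerate(sentences):
        scores = cosine_similarity(sentence_vecs[i], corpus_vecs)[0]
        best = scores.argmax()
        results.append({
            "sentence": sentence,
            "source": CORPUS[best],
            "score": float(scores[best]),
            "supported": bool(scores[best] >= SUPPORT_THRESHOLD),
        })
    return results


if __name__ == "__main__":
    generated = (
        "The Eiffel Tower was finished in 1889. "
        "It is the tallest mountain in the world."
    )
    for item in attribute(generated):
        flag = "OK" if item["supported"] else "??"
        print(f"{flag} {item['sentence']} -> {item['source']} ({item['score']:.2f})")
```

In this toy run, the first sentence is attributed to the Eiffel Tower passage, while the second receives a low similarity score and is flagged, mimicking how post-generation retrieval can surface hallucinated claims for validation.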