Understanding the power of RAG
RAG was introduced by Meta researchers in 2020 (https://arxiv.org/abs/2005.11401v4) as a framework that lets GenAI models draw on external data that was not part of their training, grounding and enriching the generated output.
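To make the idea concrete, here is a minimal sketch of the retrieve-then-generate flow: relevant passages are fetched from an external knowledge base and prepended to the prompt before it is sent to the model. The keyword-overlap retriever, function names, and prompt format below are illustrative assumptions, not the paper's actual implementation; production systems typically use embedding-based vector search instead.

```python
# Minimal retrieve-then-generate sketch (illustrative only; names and
# the toy keyword-overlap retriever are assumptions for this example).

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query and return the top k."""
    query_terms = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(query_terms & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_augmented_prompt(query: str, documents: list[str]) -> str:
    """Prepend retrieved passages to the user's question so the model can
    ground its answer in external data it was never trained on."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query, documents))
    return (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}"
    )

if __name__ == "__main__":
    knowledge_base = [
        "Our refund policy allows returns within 30 days of purchase.",
        "Support is available Monday to Friday, 9am to 5pm CET.",
        "Shipping to the EU typically takes 3-5 business days.",
    ]
    prompt = build_augmented_prompt(
        "How long do customers have to return an item?", knowledge_base
    )
    print(prompt)  # This augmented prompt would then be sent to the LLM of your choice.
```

The key point is that the model answers from retrieved context rather than from its parametric memory alone, which is what makes RAG effective against the hallucination problem discussed next.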
It is widely known that LLMs suffer from hallucinations. A classic real-world example is the case of Levidow, Levidow & Oberman, the New York law firm that submitted a legal brief containing fake citations generated by OpenAI’s ChatGPT in a case against the Colombian airline Avianca. The firm was fined thousands of dollars and likely lost far more in reputational damage. You can read more about it here: https://news.sky.com/story/lawyers-fined-after-citing-bogus-cases-from-chatgpt-research-12908318.
LLM hallucinations can arise from several factors, such as the following:
- Overfitting to training data: During training, the LLM might overfit to...