Understanding the importance of RAG and knowledge graphs in LLMs
To address the pitfalls of GenAI, such as hallucinations and outdated knowledge, we can either fine-tune the model or ground its responses using external sources.
Fine-tuning involves training an existing model on additional data, which can yield high-quality responses, but the process can be complex and time-consuming.
The RAG (retrieval-augmented generation) approach instead supplies extra information at the moment we ask the LLM a question.
With this approach, you integrate knowledge repositories into the generative process: the LLM leverages the information retrieved from these sources and tailors its response to match it, thus grounding the results.
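The retrieve-then-ground flow described above can be sketched in a few lines. This is a minimal illustration, not a production implementation: the tiny in-memory corpus, the word-overlap scoring, and the function names are all assumptions made for the example, and a real system would use a vector store and send the assembled prompt to an actual LLM.

```python
# Minimal RAG sketch: retrieve relevant text from a knowledge
# repository, then prepend it to the user's question so the LLM
# can ground its answer in the retrieved context.
# NOTE: the corpus and scoring below are illustrative stand-ins.

CORPUS = {
    "pubmed-1": "Aspirin irreversibly inhibits the COX-1 enzyme.",
    "wiki-1": "Paris is the capital and largest city of France.",
    "kb-1": "Our refund policy allows returns within 30 days.",
}

def retrieve(question: str, k: int = 1) -> list[str]:
    """Rank documents by naive word overlap with the question
    (a real retriever would use embeddings and a vector index)."""
    q_words = set(question.lower().split())
    scored = sorted(
        CORPUS.values(),
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(question: str) -> str:
    """Assemble a grounded prompt: retrieved context + the question.
    The result would be sent to the LLM in place of the bare question."""
    context = "\n".join(retrieve(question))
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

print(build_prompt("What is the capital of France?"))
```

Because the model is instructed to answer only from the supplied context, its response is constrained to the retrieved information, which is what grounding means in practice.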
These repositories and sources can include the following:
- Publicly available structured datasets (e.g., scientific databases such as PubMed or publicly accessible encyclopedic resources such as Wikipedia)
- Enterprise knowledge bases (e.g., internal company...