Retrieval Augmented Generation (RAG) Model
Authors:
Aniket Gupta (Sharda University)
Ankit Mishra
Abstract

A Large Language Model (LLM) is an Artificial Intelligence system that uses deep learning techniques, such as neural networks, to generate text-based answers to a wide variety of user queries. The model is trained on a large, pre-defined dataset and generates results based on the facts contained in that data. Retrieval Augmented Generation (RAG) is a technique in which supporting data is fetched at query time from current sources rather than drawn solely from the model's pre-stored training data, helping to provide users with the most accurate and up-to-date information. A RAG model reduces the time and cost of continuously retraining the LLM on the latest data and updating its parameters. It consists of two phases: a retrieval phase, in which the algorithm searches for and retrieves snippets relevant to the user's prompt, and a content generation phase, in which the retrieved content is combined with the user's prompt and passed to the LLM to produce the final text-based answer. The LLM RAG model represents a significant advancement in Natural Language Processing (NLP), integrating both retrieval and generation capabilities to enhance text generation tasks. This paper provides a detailed overview of the LLM RAG model, highlighting its architecture, key components, training methodology, and applications.
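The two-phase pipeline described above can be sketched in a few lines of Python. This is a minimal illustrative sketch, not the paper's actual implementation: the toy corpus, the word-overlap scoring in `retrieve`, and the `generate` stub (which merely assembles a prompt instead of calling a real LLM) are all assumptions introduced for clarity.

```python
import re

def tokens(text):
    """Lowercase a string and split it into a set of alphabetic words."""
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(query, corpus, k=2):
    """Retrieval phase: rank corpus snippets by word overlap with the query.
    A production system would use dense embeddings or a search index instead."""
    q = tokens(query)
    scored = sorted(corpus, key=lambda s: len(q & tokens(s)), reverse=True)
    return scored[:k]

def generate(query, snippets):
    """Generation phase (stub): a real system would pass the query plus the
    retrieved snippets to an LLM; here we only assemble the augmented prompt."""
    context = " ".join(snippets)
    return f"Answer the query '{query}' using this context: {context}"

corpus = [
    "RAG retrieves supporting documents at query time.",
    "LLMs are trained once on a fixed dataset.",
    "Paris is the capital of France.",
]

snippets = retrieve("How does RAG retrieve documents?", corpus, k=1)
print(generate("How does RAG retrieve documents?", snippets))
```

Because retrieval happens per query, the knowledge base can be updated without retraining the model, which is the cost advantage the abstract highlights.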

Published in: GCARED 2025 Proceedings
DOI: 10.63169/GCARED2025.p16
Paper ID: GCARED2025-0213