RAG-Driven Generative AI

By Denis Rothman
Overview of this book

RAG-Driven Generative AI provides a roadmap for building effective LLM, computer vision, and generative AI systems that balance performance and costs. This book offers a detailed exploration of RAG and how to design, manage, and control multimodal AI pipelines. By connecting outputs to traceable source documents, RAG improves output accuracy and contextual relevance, offering a dynamic approach to managing large volumes of information. This AI book shows you how to build a RAG framework, providing practical knowledge on vector stores, chunking, indexing, and ranking. You’ll discover techniques to optimize your project’s performance and better understand your data, including using adaptive RAG and human feedback to refine retrieval accuracy, balancing RAG with fine-tuning, implementing dynamic RAG to enhance real-time decision-making, and visualizing complex data with knowledge graphs. You’ll be exposed to a hands-on blend of frameworks like LlamaIndex and Deep Lake, vector databases such as Pinecone and Chroma, and models from Hugging Face and OpenAI. By the end of this book, you will have acquired the skills to implement intelligent solutions, keeping you competitive in fields from production to customer service across any project.
Appendix

Summary

This chapter’s goal was to show that, as we accumulate RAG data, some of it is dynamic, requiring constant updates, and therefore cannot be easily fine-tuned. Other data, however, is static, meaning it remains stable for long periods of time. Such data can become parametric, that is, stored in the weights of a trained LLM.

We first downloaded and processed the SciQ dataset, which contains hard science questions. This stable data is well suited to fine-tuning: each record combines a question, an answer, and a support (explanation), a structure that makes the data effective training material. We can also assume that human feedback was required to build the dataset, and we can even imagine that such feedback could be provided by analyzing generative AI model outputs.
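To make this step concrete, here is a minimal sketch, assuming the Hugging Face datasets library, of how the SciQ dataset could be downloaded and reduced to records that carry a support field. It is an illustration under those assumptions, not the book's exact code.

```python
# Minimal sketch (illustrative): download SciQ with the Hugging Face
# datasets library and keep only records that include a support
# (explanation), the structure that makes the data suitable for fine-tuning.
from datasets import load_dataset

sciq = load_dataset("sciq", split="train")

# Drop records without a supporting explanation.
filtered = sciq.filter(lambda row: row["support"].strip() != "")

sample = filtered[0]
print(sample["question"])        # hard science question
print(sample["correct_answer"])  # expected answer
print(sample["support"])         # supporting explanation
```

The question, correct_answer, and support fields map directly onto the prompt-and-completion format discussed next.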

We then converted the prepared data into prompts and completions in a JSONL file, following the recommendations of OpenAI’s data preparation tool. The JSONL structure was meant to be compatible with a completion model (prompt and completion...
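As an illustration of that conversion, the sketch below reuses the filtered records from the previous sketch and writes prompt/completion pairs to a JSONL file in the legacy OpenAI completion format. The file name, separator, and stop sequence are assumed conventions based on OpenAI's general guidance, not necessarily the book's exact choices.

```python
# Minimal sketch: write prompt/completion pairs to a JSONL file for a
# completion-style fine-tune. The "\n\n###\n\n" separator, leading space,
# and " END" stop sequence are assumptions, not the book's exact settings.
import json

with open("sciq_prompt_completion.jsonl", "w", encoding="utf-8") as f:
    for row in filtered:
        record = {
            # The prompt ends with a fixed separator so the model knows
            # where the completion should begin.
            "prompt": f"{row['question']}\n\n###\n\n",
            # The completion starts with a space and ends with a stop token.
            "completion": f" {row['correct_answer']} because {row['support']} END",
        }
        f.write(json.dumps(record) + "\n")
```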
