Unlocking Data with Generative AI and RAG

By: Keith Bourne

Overview of this book

Generative AI is helping organizations tap into their data in new ways, with retrieval-augmented generation (RAG) combining the strengths of large language models (LLMs) with internal data for more intelligent and relevant AI applications. The author harnesses his decade of ML experience in this book to equip you with the strategic insights and technical expertise needed when using RAG to drive transformative outcomes. The book explores RAG’s role in enhancing organizational operations by blending theoretical foundations with practical techniques. You’ll work with detailed coding examples using tools such as LangChain and Chroma’s vector database to gain hands-on experience in integrating RAG into AI systems. The chapters contain real-world case studies and sample applications that highlight RAG’s diverse use cases, from search engines to chatbots. You’ll learn proven methods for managing vector databases, optimizing data retrieval, effective prompt engineering, and quantitatively evaluating performance. The book also takes you through advanced integrations of RAG with cutting-edge AI agents and emerging non-LLM technologies. By the end of this book, you’ll be able to successfully deploy RAG in business settings, address common challenges, and push the boundaries of what’s possible with this revolutionary AI technique.
Table of Contents (20 chapters)

Part 1 – Introduction to Retrieval-Augmented Generation (RAG)
Part 2 – Components of RAG
Part 3 – Implementing Advanced RAG

Comparing RAG with model fine-tuning

LLMs can be adapted to your data in two ways:

  • Fine-tuning: With fine-tuning, you are adjusting the weights and/or biases that define the model’s intelligence based on new training data. This directly impacts the model, permanently changing how it will interact with new inputs.
  • Input/prompts: With this approach, you leave the model's weights untouched and instead introduce new knowledge through the prompt itself, giving the LLM material it can act upon at inference time.
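The second approach can be illustrated with a minimal sketch in plain Python. Note that the retrieval function, the example documents, and the prompt template here are all hypothetical simplifications: a real RAG pipeline would use embeddings and a vector database (such as Chroma, covered later in the book) rather than keyword overlap.

```python
import string

def tokenize(text: str) -> set[str]:
    """Lowercase, split, and strip punctuation to get a set of words."""
    return {w.strip(string.punctuation) for w in text.lower().split()}

def retrieve(query: str, documents: list[str], top_k: int = 1) -> list[str]:
    """Naive keyword-overlap retrieval; a real system would use vector search."""
    q = tokenize(query)
    return sorted(documents, key=lambda d: len(q & tokenize(d)), reverse=True)[:top_k]

def build_prompt(query: str, context: list[str]) -> str:
    """Prepend retrieved context so the LLM can answer from knowledge
    it was never trained on -- no weight updates required."""
    context_block = "\n".join(context)
    return (
        "Use the following context to answer the question.\n\n"
        f"Context:\n{context_block}\n\n"
        f"Question: {query}"
    )

# Hypothetical internal documents the base model has never seen
docs = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Shipping takes five business days.",
]

query = "What is the refund policy?"
prompt = build_prompt(query, retrieve(query, docs))
```

The resulting prompt string would then be sent to the LLM as ordinary input: the model's parameters never change, which is exactly what distinguishes this approach from fine-tuning.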

Why not use fine-tuning in all situations? Once you have introduced the new knowledge, the LLM will always have it! And training on data is how the model was created in the first place, right? That sounds right in theory, but in practice, fine-tuning has proven more reliable for teaching a model specialized tasks (such as conversing in a particular style) and less reliable for factual recall.

The reason is complicated, but in general, a model’s knowledge of facts is like a human’s long-term...
