Unlocking Data with Generative AI and RAG

By: Keith Bourne

Overview of this book

Generative AI is helping organizations tap into their data in new ways, with retrieval-augmented generation (RAG) combining the strengths of large language models (LLMs) with internal data for more intelligent and relevant AI applications. The author harnesses his decade of ML experience in this book to equip you with the strategic insights and technical expertise needed when using RAG to drive transformative outcomes. The book explores RAG’s role in enhancing organizational operations by blending theoretical foundations with practical techniques. You’ll work with detailed coding examples using tools such as LangChain and Chroma’s vector database to gain hands-on experience in integrating RAG into AI systems. The chapters contain real-world case studies and sample applications that highlight RAG’s diverse use cases, from search engines to chatbots. You’ll learn proven methods for managing vector databases, optimizing data retrieval, effective prompt engineering, and quantitatively evaluating performance. The book also takes you through advanced integrations of RAG with cutting-edge AI agents and emerging non-LLM technologies. By the end of this book, you’ll be able to successfully deploy RAG in business settings, address common challenges, and push the boundaries of what’s possible with this revolutionary AI technique.
Table of Contents (20 chapters)
  • Part 1 – Introduction to Retrieval-Augmented Generation (RAG)
  • Part 2 – Components of RAG
  • Part 3 – Implementing Advanced RAG

Prompt parameters

Numerous parameters are common across most LLMs, but we are going to discuss the small subset most likely to have an impact on your RAG efforts: temperature, top-p, and seed.
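Before looking at each parameter in turn, the following toy next-token sampler sketches how all three interact. The vocabulary and logit values are invented for illustration, and real LLM APIs expose these knobs as request parameters rather than letting you sample tokens yourself; this is just a minimal model of what happens behind the scenes.

```python
# Toy next-token sampler illustrating temperature, top-p, and seed.
# All tokens and logit values here are hypothetical.
import math
import random

def sample_next_token(logits, temperature=1.0, top_p=1.0, seed=None):
    """Pick one token from a {token: logit} dict using the three parameters."""
    rng = random.Random(seed)  # a fixed seed makes the choice reproducible
    # Temperature rescales logits before softmax: <1 sharpens, >1 flattens.
    scaled = {t: l / temperature for t, l in logits.items()}
    z = sum(math.exp(l) for l in scaled.values())
    probs = sorted(((t, math.exp(l) / z) for t, l in scaled.items()),
                   key=lambda kv: kv[1], reverse=True)
    # Top-p (nucleus) sampling: keep only the most probable tokens whose
    # cumulative probability just reaches top_p, then renormalize.
    kept, cum = [], 0.0
    for tok, p in probs:
        kept.append((tok, p))
        cum += p
        if cum >= top_p:
            break
    total = sum(p for _, p in kept)
    r = rng.random() * total
    for tok, p in kept:
        r -= p
        if r <= 0:
            return tok
    return kept[-1][0]

logits = {"Paris": 4.0, "London": 2.0, "banana": -1.0}
print(sample_next_token(logits, temperature=0.2, top_p=0.9, seed=42))
# → "Paris" (a low temperature makes the top token a near-certain pick)
```

Note the division of labor: temperature reshapes the distribution, top-p truncates its tail, and the seed fixes the random draw so repeated calls with identical inputs return identical outputs.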

Temperature

If you think of your output as a string of tokens, an LLM, in a basic sense, predicts the next word (or token) based on the input you have provided and the tokens it has already generated. The next word the LLM predicts is drawn from a probability distribution over all candidate words and their probabilities.

In many cases, a few words will have much higher probabilities than the rest, but there is still a chance that the LLM selects one of the less likely words. Temperature is the setting that dictates how likely the model is to choose a word further down the probability distribution. In other words, this allows you to use temperature to set the degree of randomness of the model’...
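The effect described above can be seen numerically with a short softmax sketch. The logit values below are invented for illustration; the point is only how dividing logits by the temperature reshapes the resulting distribution.

```python
# Minimal sketch: temperature rescales logits before the softmax,
# sharpening (T < 1) or flattening (T > 1) the next-token distribution.
import math

def softmax_with_temperature(logits, temperature):
    scaled = [l / temperature for l in logits]
    z = sum(math.exp(s) for s in scaled)
    return [math.exp(s) / z for s in scaled]

logits = [2.0, 1.0, 0.1]  # raw scores for three candidate tokens
for t in (0.5, 1.0, 2.0):
    probs = softmax_with_temperature(logits, t)
    print(t, [round(p, 3) for p in probs])
# Lower temperature concentrates probability mass on the top token;
# higher temperature flattens the distribution toward uniform.
```

At temperature 0.5 the top token takes roughly 86% of the probability mass in this example, while at temperature 2.0 it drops to about 50%, which is why higher temperatures produce more varied (and riskier) word choices.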
