Building Data-Driven Applications with LlamaIndex

By: Andrei Gheorghiu

Overview of this book

Discover the immense potential of Generative AI and Large Language Models (LLMs) with this comprehensive guide. Learn to overcome LLM limitations, such as contextual memory constraints, prompt size issues, real-time data gaps, and occasional ‘hallucinations’. Follow practical examples to personalize and launch your LlamaIndex projects, mastering skills in ingesting, indexing, querying, and connecting dynamic knowledge bases. From fundamental LLM concepts to LlamaIndex deployment and customization, this book provides a holistic grasp of LlamaIndex's capabilities and applications. By the end, you'll be able to resolve LLM challenges and build interactive AI-driven applications using best practices in prompt engineering and troubleshooting Generative AI projects.
Table of Contents (18 chapters)

  • Part 1: Introduction to Generative AI and LlamaIndex
  • Part 2: Starting Your First LlamaIndex Project
  • Part 3: Retrieving and Working with Indexed Data
  • Part 4: Customization, Prompt Engineering, and Final Words

Understanding how LlamaIndex uses prompts

In terms of mechanics, a RAG-based application follows exactly the same rules and principles of interaction as a regular user in a chat session with an LLM. The major difference is that RAG acts as a kind of prompt engineer on steroids: behind the scenes, for almost every indexing, retrieval, metadata extraction, or final response synthesis operation, the RAG framework programmatically produces prompts, enriches them with context, and sends them to the LLM.
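
To make this concrete, here is a minimal sketch of that pattern, assuming a recent llama-index release where PromptTemplate can be imported from llama_index.core. The template text and variable names are illustrative placeholders rather than the framework's actual defaults:

from llama_index.core import PromptTemplate

# An illustrative question-answering template; LlamaIndex ships its own
# defaults for each operation that needs an LLM.
qa_template = PromptTemplate(
    "Context information is below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Using only the context information, answer the query.\n"
    "Query: {query_str}\n"
    "Answer: "
)

# At query time, the framework fills the placeholders programmatically:
# the retrieved node text becomes context_str, the user question query_str.
prompt_for_llm = qa_template.format(
    context_str="<text of the retrieved nodes>",
    query_str="What does TitleExtractor do?",
)
print(prompt_for_llm)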

In LlamaIndex, each type of operation that requires an LLM has a default prompt that is used as a template. Take TitleExtractor as an example. This is one of the metadata extractors we already discussed in Chapter 4, Ingesting Data into Our RAG Workflow. The TitleExtractor class uses two predefined prompt templates to extract titles from the text nodes inside documents. It does this in two steps (a configuration sketch follows this list):

  1. It gets potential titles from...
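
As a rough illustration of this two-step approach, the sketch below overrides both templates when creating the extractor. It assumes a recent llama-index release where TitleExtractor is imported from llama_index.core.extractors and accepts nodes, node_template, and combine_template arguments; the template strings are simplified stand-ins for the library's defaults:

from llama_index.core.extractors import TitleExtractor

title_extractor = TitleExtractor(
    nodes=5,  # how many leading nodes per document to consider in step 1
    # Step 1: this prompt runs on each of those nodes to collect candidate titles.
    node_template=(
        "Context: {context_str}. "
        "Give a short title that summarizes this text. Title: "
    ),
    # Step 2: this prompt runs once, over the candidate titles, to pick a
    # single document-level title.
    combine_template=(
        "{context_str}. "
        "Based on the candidate titles above, what is the best overall "
        "title for this document? Title: "
    ),
)

The customized extractor can then be used like any other metadata extractor in the ingestion pipeline, and the resulting title typically ends up in each processed node's metadata.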
