Building Data-Driven Applications with LlamaIndex


By: Andrei Gheorghiu
Overview of this book

Discover the immense potential of Generative AI and Large Language Models (LLMs) with this comprehensive guide. Learn to overcome LLM limitations, such as contextual memory constraints, prompt size issues, real-time data gaps, and occasional ‘hallucinations’. Follow practical examples to personalize and launch your LlamaIndex projects, mastering skills in ingesting, indexing, querying, and connecting dynamic knowledge bases. From fundamental LLM concepts to LlamaIndex deployment and customization, this book provides a holistic grasp of LlamaIndex's capabilities and applications. By the end, you'll be able to resolve LLM challenges and build interactive AI-driven applications using best practices in prompt engineering and troubleshooting Generative AI projects.
Table of Contents (18 chapters)

  • Part 1: Introduction to Generative AI and LlamaIndex (includes a free chapter)
  • Part 2: Starting Your First LlamaIndex Project
  • Part 3: Retrieving and Working with Indexed Data
  • Part 4: Customization, Prompt Engineering, and Final Words

Understanding response synthesizers

The final step before sending our carefully prepared contextual data to the LLM is the response synthesizer. It is the component responsible for generating a response from the language model, given the user query and the retrieved context.

It simplifies the process of querying an LLM and synthesizing an answer across our proprietary data. Just like the other components of the framework, response synthesizers can be used on their own or configured in query engines to handle the final step of response generation after nodes have been retrieved and postprocessed.
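Conceptually, the simplest synthesis strategy packs as much retrieved context as fits into a single prompt and appends the user query. The sketch below illustrates that idea in plain Python; it is an illustrative simplification, not LlamaIndex's actual implementation, and the prompt wording and `max_chars` budget are assumptions for the example:

```python
def compact_prompt(query: str, node_texts: list[str], max_chars: int = 2000) -> str:
    """Sketch of a 'compact'-style synthesis step: merge as much retrieved
    context as fits within a character budget into one prompt, then append
    the query. Illustrative only -- not LlamaIndex's real implementation."""
    context_parts: list[str] = []
    used = 0
    for text in node_texts:
        # Stop once the next chunk would exceed the budget
        if used + len(text) > max_chars:
            break
        context_parts.append(text)
        used += len(text)
    context = "\n\n".join(context_parts)
    return (
        "Context information is below.\n"
        f"{context}\n"
        "Given the context, answer the query.\n"
        f"Query: {query}\n"
    )

# Build a prompt from one retrieved chunk of text
prompt = compact_prompt(
    "When was the clock built?",
    ["The town square clock was built in 1895"],
)
print(prompt)
```

A real synthesizer also handles context that exceeds the model's window (for example, by refining an answer over several LLM calls), which is why LlamaIndex offers several response modes rather than a single strategy.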

Here’s a simple example demonstrating how to use one directly on a given set of nodes (the original listing is truncated; the completion below follows the standard LlamaIndex API and assumes a default LLM, such as OpenAI with an API key configured):

from llama_index.core.schema import TextNode, NodeWithScore
from llama_index.core import get_response_synthesizer

# Wrap raw text in scored nodes, as a retriever would return them
nodes = [
    NodeWithScore(
        node=TextNode(text="The town square clock was built in 1895"),
        score=1.0,
    ),
]

# Build a synthesizer using the default LLM and response mode
synthesizer = get_response_synthesizer()
response = synthesizer.synthesize(
    "When was the town square clock built?",
    nodes=nodes,
)
print(response)

