Building Data-Driven Applications with LlamaIndex

By: Andrei Gheorghiu
Overview of this book

Discover the immense potential of Generative AI and Large Language Models (LLMs) with this comprehensive guide. Learn to overcome LLM limitations, such as contextual memory constraints, prompt size issues, real-time data gaps, and occasional ‘hallucinations’. Follow practical examples to personalize and launch your LlamaIndex projects, mastering skills in ingesting, indexing, querying, and connecting dynamic knowledge bases. From fundamental LLM concepts to LlamaIndex deployment and customization, this book provides a holistic grasp of LlamaIndex's capabilities and applications. By the end, you'll be able to resolve LLM challenges and build interactive AI-driven applications using best practices in prompt engineering and troubleshooting Generative AI projects.
Table of Contents (18 chapters)

Part 1: Introduction to Generative AI and LlamaIndex
Part 2: Starting Your First LlamaIndex Project
Part 3: Retrieving and Working with Indexed Data
Part 4: Customization, Prompt Engineering, and Final Words

Using the ingestion pipeline to increase efficiency

Starting with version 0.9, the LlamaIndex framework introduced a really neat concept: the ingestion pipeline.

A simple analogy

An ingestion pipeline is a bit like a conveyor belt in a factory. In the context of LlamaIndex, it’s a setup that takes your raw data and gets it ready to be integrated into your RAG workflow. It does this by running the data through a series of steps – called transformations – one by one. The key idea is to break the ingestion process into a chain of reusable transformations applied to the input data, which makes it easy to standardize and customize ingestion flows for different use cases. Think of transformations as different workstations along this conveyor belt. As your raw data moves along, it hits different stations where something specific happens. It might be split into sentences at one station – that’s your SentenceSplitter – and have a title extracted...
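To make the conveyor-belt analogy concrete, here is a minimal sketch of such a pipeline. It uses the 0.9-era import paths (later releases moved these modules under llama_index.core); the sample document, the chunk size values, and the assumption that an OpenAI API key is available for the title-extraction step are illustrative only, not recommendations.

```python
from llama_index import Document
from llama_index.ingestion import IngestionPipeline
from llama_index.node_parser import SentenceSplitter
from llama_index.extractors import TitleExtractor

# Each transformation is one "workstation" on the conveyor belt.
pipeline = IngestionPipeline(
    transformations=[
        # Station 1: split raw text into sentence-aware chunks
        # (chunk sizes here are placeholders, not tuned values).
        SentenceSplitter(chunk_size=512, chunk_overlap=20),
        # Station 2: extract a title for each chunk using an LLM
        # (by default this calls OpenAI, so an API key must be configured).
        TitleExtractor(),
    ]
)

# Raw documents go in at one end; chunked, enriched nodes come out the other.
documents = [Document(text="Your raw text goes here.")]
nodes = pipeline.run(documents=documents)
print(f"Produced {len(nodes)} nodes")
```

Because the transformations list is just a sequence of objects, the same pipeline definition can be reused as-is or extended with additional stations as your ingestion needs grow.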
