In the previous chapter, we discussed the various retrieval methods that LlamaIndex offers. We extracted the necessary context to be able to enrich and improve the query we are now sending to the LLM. But is this enough?
As we have already discussed, naive retrieval methods are unlikely to produce ideal results in every scenario. There will probably be many situations where the returned nodes contain irrelevant information or are not sorted in chronological order. These situations can make things difficult for the LLM, because they degrade the quality of the prompt that our RAG application builds.
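To make the problem concrete, here is a minimal sketch of what a naive retrieval step hands back before any prompt is built. It assumes a recent LlamaIndex release (0.10 or later, where the classes live under llama_index.core) and an LLM/embedding provider already configured; the ./data folder, the query text, and the similarity_top_k value are illustrative, not part of the book's example.

```python
# Minimal sketch: inspect the raw nodes a naive retriever returns.
# Assumes llama-index >= 0.10 and a configured embedding/LLM provider.
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader

# Load some local documents and build a simple vector index (illustrative path).
documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)

# Retrieve the top 5 nodes for a question, scored by vector similarity.
retriever = index.as_retriever(similarity_top_k=5)
nodes = retriever.retrieve("What happened in the third quarter?")

for node_with_score in nodes:
    # A low score hints that the node may only add noise to the final prompt,
    # and nothing here guarantees the ordering suits our use case.
    print(f"{node_with_score.score:.3f}  {node_with_score.node.get_content()[:80]}")
```

Nothing in this step filters out weak matches or reorders them for the task at hand; whatever comes back is what ends up in the prompt unless we intervene.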
A quick side note
In case it wasn’t already obvious, the main purpose of a RAG flow is to programmatically build prompts. Instead of writing these prompts by hand and pasting them into a ChatGPT-like interface, LlamaIndex dynamically assembles the prompts from our documents, which...
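To illustrate what that assembly amounts to, here is a minimal sketch of retrieved node text being wrapped around the user's question. The template wording, the chunk strings, and the question are illustrative assumptions; LlamaIndex ships its own default question-answering templates, and this is not a reproduction of them.

```python
# Minimal sketch: a RAG flow ultimately just formats retrieved text plus the
# user's question into a single prompt string for the LLM.
from llama_index.core import PromptTemplate

qa_template = PromptTemplate(
    "Context information is below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Given the context information, answer the question: {query_str}\n"
)

# Pretend these strings are the contents of the nodes a retriever returned.
retrieved_chunks = [
    "Q3 revenue grew 12% year over year.",
    "The growth was driven mainly by the EMEA region.",
]

prompt = qa_template.format(
    context_str="\n\n".join(retrieved_chunks),
    query_str="How did revenue evolve in Q3?",
)
print(prompt)  # the final text that would be sent to the LLM
```

Seen this way, the quality of the prompt is only as good as the context that gets stitched into it, which is why the retrieval output often needs further processing before this step.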