Building LLM Powered Applications

By: Valentina Alto
Overview of this book

Building LLM Powered Applications delves into the fundamental concepts, cutting-edge technologies, and practical applications that LLMs offer, ultimately paving the way for the emergence of large foundation models (LFMs) that extend the boundaries of AI capabilities. The book begins with an in-depth introduction to LLMs. We then explore various mainstream architectural frameworks, including both proprietary models (GPT-3.5/4) and open-source models (Falcon LLM), and analyze their unique strengths and differences. Moving ahead, with a focus on the Python-based, lightweight framework called LangChain, we guide you through the process of creating intelligent agents capable of retrieving information from unstructured data and engaging with structured data using LLMs and powerful toolkits. Furthermore, the book ventures into the realm of LFMs, which transcend language modeling to encompass various AI tasks and modalities, such as vision and audio. Whether you are a seasoned AI expert or a newcomer to the field, this book is your roadmap to unlock the full potential of LLMs and forge a new era of intelligent machines.

When is fine-tuning necessary?

As we saw in previous chapters, good prompt engineering combined with the non-parametric knowledge you can add to your model via embeddings is an excellent way to customize your LLM, and it covers around 90% of use cases. However, this tends to hold mainly for state-of-the-art models, such as GPT-4, Llama 2, and PaLM 2. As discussed, those models have a huge number of parameters, which makes them heavy and computationally demanding; in addition, they might be proprietary and subject to a pay-per-use cost.
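
As a reminder of that non-parametric alternative, the following is a minimal, illustrative sketch of adding knowledge via embeddings: documents are embedded, the most relevant one is retrieved, and it is injected into the prompt instead of changing the model's weights. The library choice (sentence-transformers), the model name (all-MiniLM-L6-v2), and the toy documents are assumptions for illustration, not the book's exact code.

```python
# Hedged sketch: injecting non-parametric knowledge via embeddings (retrieval), as opposed to fine-tuning.
from sentence_transformers import SentenceTransformer, util

# Toy knowledge base (illustrative only).
documents = [
    "Falcon LLM 7B is an open-source model released by the Technology Innovation Institute.",
    "Fine-tuning updates a model's weights using task-specific labeled data.",
]
question = "What is Falcon LLM 7B?"

# Embed the documents and the question with the same encoder.
encoder = SentenceTransformer("all-MiniLM-L6-v2")
doc_embeddings = encoder.encode(documents, convert_to_tensor=True)
query_embedding = encoder.encode(question, convert_to_tensor=True)

# Retrieve the most similar document and prepend it to the prompt as context.
scores = util.cos_sim(query_embedding, doc_embeddings)[0]
best_doc = documents[int(scores.argmax())]
prompt = f"Answer using only the context below.\n\nContext: {best_doc}\n\nQuestion: {question}"
print(prompt)  # this prompt would then be sent to the (unchanged) LLM
```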

Fine-tuning can therefore also be useful when you want to leverage a light, free-of-charge LLM, such as Falcon LLM 7B, yet you need it to perform as well as a SOTA model on your specific task.

Some examples of when fine-tuning might be necessary are:

  • When you want to use an LLM for sentiment analysis on movie reviews, but the LLM was pretrained on Wikipedia articles and books (a scenario sketched in code after this list). Fine-tuning...
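
To make that first scenario concrete, here is a minimal fine-tuning sketch using the Hugging Face Trainer API on a movie-review sentiment dataset. The checkpoint (distilbert-base-uncased), the dataset (imdb), the subset sizes, and the hyperparameters are illustrative assumptions rather than the book's exact recipe; the same workflow applies to a larger open-source model such as Falcon LLM 7B, typically combined with parameter-efficient techniques (for example, LoRA) to keep the compute manageable.

```python
# Hedged sketch: supervised fine-tuning for sentiment analysis (illustrative checkpoint, dataset, hyperparameters).
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

checkpoint = "distilbert-base-uncased"  # small stand-in; swap in a larger open-source LLM if resources allow

# 1. Load a labeled movie-review dataset (binary sentiment) and tokenize it.
dataset = load_dataset("imdb")
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

def tokenize(batch):
    # Truncate/pad reviews to a fixed length so they can be batched.
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

tokenized = dataset.map(tokenize, batched=True)

# 2. Load the pretrained model with a fresh classification head (negative/positive).
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

# 3. Fine-tune on a small task-specific subset (kept small here for a quick run).
training_args = TrainingArguments(
    output_dir="sentiment-finetuned",
    num_train_epochs=1,
    per_device_train_batch_size=16,
    learning_rate=2e-5,
)
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized["train"].shuffle(seed=42).select(range(2000)),
    eval_dataset=tokenized["test"].shuffle(seed=42).select(range(500)),
)
trainer.train()
```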