Building LLM Powered Applications

By: Valentina Alto
4.2 (22)
Overview of this book

Building LLM Powered Applications delves into the fundamental concepts, cutting-edge technologies, and practical applications that LLMs offer, ultimately paving the way for the emergence of large foundation models (LFMs) that extend the boundaries of AI capabilities. The book begins with an in-depth introduction to LLMs. We then explore various mainstream architectural frameworks, including both proprietary models (GPT 3.5/4) and open-source models (Falcon LLM), and analyze their unique strengths and differences. Moving ahead, with a focus on the Python-based, lightweight framework called LangChain, we guide you through the process of creating intelligent agents capable of retrieving information from unstructured data and engaging with structured data using LLMs and powerful toolkits. Furthermore, the book ventures into the realm of LFMs, which transcend language modeling to encompass various AI tasks and modalities, such as vision and audio. Whether you are a seasoned AI expert or a newcomer to the field, this book is your roadmap to unlock the full potential of LLMs and forge a new era of intelligent machines.
Table of Contents (16 chapters)

Start working with LLMs via Hugging Face Hub

Now that we are familiar with LangChain's components, it is time to start using our LLMs. If you want to use open-source LLMs, the Hugging Face Hub integration is extremely versatile: with just one access token, you can leverage all the open-source LLMs available in Hugging Face's repositories. Since this is a non-production scenario, I will be using the free Inference API; however, if you intend to build production-ready applications, you can easily scale to Inference Endpoints, which grant you a dedicated and fully managed infrastructure to host and consume your LLMs. So let's see how to start integrating LangChain with the Hugging Face Hub.
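To make the moving parts concrete, here is a minimal sketch of the free Inference API that the Hugging Face Hub integration calls under the hood. It assumes your token is exposed via the HUGGINGFACEHUB_API_TOKEN environment variable (the convention LangChain reads), and the Falcon repo_id is just one illustrative open-source model, not a requirement:

```python
# Sketch: an authenticated call to Hugging Face's free Inference API,
# using only the standard library. The network call is skipped unless
# a real token is configured in the environment.
import json
import os
import urllib.request

API_URL = "https://api-inference.huggingface.co/models/{repo_id}"


def build_request(repo_id: str, prompt: str) -> urllib.request.Request:
    """Prepare an authenticated POST request for the Inference API."""
    token = os.environ.get("HUGGINGFACEHUB_API_TOKEN", "")
    return urllib.request.Request(
        API_URL.format(repo_id=repo_id),
        data=json.dumps({"inputs": prompt}).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


req = build_request("tiiuae/falcon-7b-instruct", "What is an LLM?")
print(req.full_url)

# Only hit the network when a real token is available:
if os.environ.get("HUGGINGFACEHUB_API_TOKEN"):
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp))
```

In practice you will rarely build these requests by hand — LangChain's Hub wrapper does it for you — but seeing the raw call clarifies why a single access token unlocks every model repository on the Hub.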

Create a Hugging Face user access token

To access the free Inference API, you will need a user access token, the credential that allows you to run the service. Below are the steps to activate a user access token:

  • Create a Hugging Face account. You can create a Hugging...
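Once you have generated a token in your account settings, a common convention is to expose it through the HUGGINGFACEHUB_API_TOKEN environment variable, which LangChain's Hub integration picks up automatically. A minimal sketch, with a placeholder token value:

```python
# Expose the Hugging Face user access token to downstream libraries.
# The literal value below is a placeholder, not a real token.
import os

os.environ["HUGGINGFACEHUB_API_TOKEN"] = "hf_your_token_here"  # placeholder


def token_is_configured() -> bool:
    """Check that the token variable is set and non-empty."""
    return bool(os.environ.get("HUGGINGFACEHUB_API_TOKEN"))


print(token_is_configured())
```

For anything beyond local experiments, prefer loading the token from a secrets manager or a .env file rather than hardcoding it in source code.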
