LLM Engineer's Handbook

By Paul Iusztin and Maxime Labonne
Rating: 4.8 (25)

Overview of this book

Artificial intelligence has advanced rapidly, and Large Language Models (LLMs) are at the forefront of this revolution. This book offers insights into designing, training, and deploying LLMs in real-world scenarios by applying MLOps best practices. It walks you through building an LLM-powered twin that is cost-effective, scalable, and modular, moving beyond isolated Jupyter notebooks to focus on production-grade, end-to-end LLM systems.

Throughout the book, you will learn data engineering, supervised fine-tuning, and deployment. The hands-on LLM Twin use case will help you implement MLOps components in your own projects. You will also explore cutting-edge advancements in the field, including inference optimization, preference alignment, and real-time data processing, making this a vital resource for anyone looking to apply LLMs in their work.

By the end of this book, you will be able to deploy LLMs that solve practical problems while maintaining low-latency, high-availability inference. Whether you are new to artificial intelligence or an experienced practitioner, this book delivers guidance and practical techniques that will deepen your understanding of LLMs and sharpen your ability to implement them effectively.
Table of Contents (15 chapters)

12. Other Books You May Enjoy
13. Index
