Assuming we have carefully gathered quantitative and qualitative feedback on the best model for the job, we can select our model and update our production environment to deploy and serve it. We will continue to use FastAPI to create a web server that serves our model, and Docker to containerize our application. Now that we have been introduced to LangChain, we will also leverage its simplified interface. Our existing CI/CD pipeline will ensure streamlined automatic deployment and continuous application monitoring, which means that deploying our model is as simple as checking in our latest code. We begin by updating the requirements.txt file in our project to include the necessary libraries:

fastapi==0.68.0
uvicorn==0.15.0
openai==0.27.0
langchain==0.1.0
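
With the dependencies in place, the serving application itself can be updated. The following is a minimal sketch rather than the chapter's exact code: the endpoint path, prompt wording, and model name are illustrative assumptions, and LangChain import paths vary between releases (newer versions move ChatOpenAI into the langchain_openai package).

app.py
# A minimal FastAPI service that serves the selected model through LangChain.
# Assumptions: the /generate endpoint, the prompt template, and gpt-3.5-turbo
# are placeholders; swap in the model chosen during evaluation.
from fastapi import FastAPI
from pydantic import BaseModel
from langchain.chat_models import ChatOpenAI  # import path varies by LangChain version
from langchain.prompts import ChatPromptTemplate

app = FastAPI()

# ChatOpenAI reads the OPENAI_API_KEY environment variable automatically.
llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0.7)

# Compose a prompt and the model into a runnable chain (LangChain Expression Language).
prompt = ChatPromptTemplate.from_template("You are a helpful assistant. {question}")
chain = prompt | llm

class Query(BaseModel):
    question: str

@app.post("/generate")
async def generate(query: Query):
    # Invoke the chain and return the model's text response as JSON.
    result = chain.invoke({"question": query.question})
    return {"response": result.content}

Run it locally with uvicorn app:app --host 0.0.0.0 --port 8000 and POST a JSON body such as {"question": "What is LangChain?"} to /generate.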
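Since the deployment described above relies on Docker for containerization, a Dockerfile along these lines completes the picture; the base image, port, and file layout are assumptions rather than the book's prescribed setup.

Dockerfile
# Sketch of containerizing the FastAPI service; assumes app.py sits at the
# project root next to requirements.txt.
FROM python:3.9-slim

WORKDIR /app

# Install pinned dependencies first to take advantage of Docker layer caching.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code.
COPY . .

# Serve the app with uvicorn on port 8000.
EXPOSE 8000
CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "8000"]

Once built (for example, docker build -t genai-service .) and pushed, the existing CI/CD pipeline can deploy the image automatically on each check-in.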