Build Code Interpreter for Large Language Models (Local + Docker)

Yankee Maharjan · 1,565 views · 5 months ago

Adding a code interpreter to an application that uses Large Language Models/Generative AI models unlocks a world of possibilities.

In this video, we'll learn how to implement a code interpreter that runs on your local machine as well as in a Docker container for safely executing the code.
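The full implementation is in the linked repo; as a rough illustration of the two execution environments the video covers, here is a minimal sketch. The function names, the `python:3.12-slim` image, and the `--network none` flag are assumptions for illustration, not taken from the video: local execution simply runs the LLM-generated code in a subprocess, while the Docker variant wraps it in a throwaway container for isolation.

```python
import subprocess
import sys


def run_locally(code: str, timeout: int = 10) -> str:
    """Execute LLM-generated Python code in a local subprocess (NOT sandboxed)."""
    result = subprocess.run(
        [sys.executable, "-c", code],
        capture_output=True,
        text=True,
        timeout=timeout,
    )
    return result.stdout if result.returncode == 0 else result.stderr


def docker_command(code: str, image: str = "python:3.12-slim") -> list[str]:
    """Build a `docker run` command that executes the code in a throwaway container."""
    return [
        "docker", "run",
        "--rm",                # remove the container once it exits
        "--network", "none",   # deny network access to untrusted code
        image, "python", "-c", code,
    ]


print(run_locally("print(1 + 1)"))              # prints "2"
print(" ".join(docker_command("print('hi')")))  # the docker command to run
```

The local path is convenient for development but runs untrusted code with your user's full privileges; the Docker path trades a little startup latency for isolation, which is why the video treats it as the safe option.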

---
Code Example: https://github.com/yankeexe/llm-code-interpreter-demo
Generate API Keys for Gemini AI: https://aistudio.google.com/app/apikey
Alternative Chat Models for LangChain: https://python.langchain.com/docs/integrations/chat/#all-chat-models
---
⚡️ Follow me:
- Github: https://github.com/yankeexe
- LinkedIn: https://www.linkedin.com/in/yankeemaharjan
- Twitter (X): https://x.com/yankexe
- Website: https://yankee.dev

---
🎞️ Chapters

0:00 Intro
0:16 Demo: App UI
0:52 Demo: 1
1:00 Demo: 2
1:07 Demo: 3 (Fun)
1:36 Demo: 4 (Image)
2:00 Demo: 5 (Running Server)
2:33 Code: Env Setup
2:56 Code: App Skeleton
4:54 Code: Adding LLM
5:42 Code: Getting API Key
6:36 Code: Local Execution Env
11:11 Code: Docker Execution Env
17:44 Outro