
Language models on the command-line w/ Simon Willison

Hamel Husain · 11,716 views · 11 months ago

The Unix command-line philosophy has always been about joining different tools together to solve larger problems. LLMs are a fantastic addition to this environment: they can help wire together different tools and can participate directly themselves, processing text as part of more complex pipelines. Simon shows how to bring these worlds together, using LLMs, both remote and local, to unlock powerful tools and solve all kinds of interesting productivity and automation problems.

This is a talk from Mastering LLMs: A survey course on applied topics for Large Language Models.

For more info and resources related to this talk, see: https://parlance-labs.com/talks/applications/simon_llm_cli/
My personal site: https://hamel.dev/
My Twitter: https://x.com/HamelHusain
Parlance Labs: https://parlance-labs.com/

Chapters:

00:00 Introduction - Simon Willison, creator of Datasette and co-creator of Django, introduces LLM, a command-line tool for interacting with large language models.
01:40 Installing and Using LLM - Simon demonstrates how to install LLM using pip or Homebrew and run prompts against OpenAI's API. He showcases features like continuing conversations and changing the default model.
10:30 LLM Plugins - The LLM tool has a plugin system that provides access to various remote APIs and local models. Simon installs the Claude plugin and discusses why he considers the Claude models his current favorites.
13:14 Local Models with LLM - Simon explores running local language models using plugins for tools like GPT4All and llama.cpp. He demonstrates the llm chat command for efficient interaction with local models.
26:16 Writing Bash Scripts with LLM - A practical example of creating a script to summarize Hacker News threads.
35:01 Piping and Automating with LLM - By piping commands and outputs together, Simon shows how to automate tasks like summarizing Hacker News threads or generating Bash commands using LLM and custom scripts.
37:08 Web Scraping and LLM - Simon introduces shot-scraper, a tool for browser automation and web scraping, and demonstrates how to pipe scraped data into LLM for retrieval-augmented generation (RAG).
41:13 Embeddings with LLM - LLM has built-in support for embeddings through various plugins. Simon calculates embeddings for his blog content and performs semantic searches, showing how to build RAG workflows with LLM.

Command sketches for several of these chapters are included below.
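A minimal sketch of the install-and-prompt workflow covered at 01:40. The prompts and the gpt-4o model alias are placeholders, not necessarily what the talk uses:

    # Install LLM with pip or Homebrew
    pip install llm        # or: brew install llm

    # Store an OpenAI API key (prompts for the key interactively)
    llm keys set openai

    # Run a prompt against the default model
    llm 'Five creative names for a pet pelican'

    # Continue the most recent conversation
    llm -c 'Now make them sound more formal'

    # Use a specific model, or make it the default
    llm -m gpt-4o 'Say hello'
    llm models default gpt-4o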
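A sketch of the plugin workflow from 10:30 using Simon's llm-claude-3 plugin; the key name and the claude-3-haiku alias are assumptions based on that plugin's conventions:

    # Install a plugin that adds Anthropic's Claude models
    llm install llm-claude-3

    # Store the Anthropic API key under the name the plugin expects
    llm keys set claude

    # List every model now available, then prompt one of the new ones
    llm models
    llm -m claude-3-haiku 'A short poem about the command line'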
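A sketch of running a local model via the llm-gpt4all plugin discussed at 13:14; the mistral-7b-instruct-v0 model ID is an assumption, and `llm models` shows what the plugin actually registers:

    # Install the GPT4All plugin; model weights download on first use
    llm install llm-gpt4all

    # See which local models the plugin registered
    llm models

    # Open an interactive chat session so the model stays loaded in memory
    llm chat -m mistral-7b-instruct-v0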
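A sketch of the Hacker News summarizer built at 26:16, assuming the Algolia HN API and a jq flattening step; the script name, jq expression, and system prompt are illustrative rather than Simon's exact versions:

    #!/bin/bash
    # hn-summary.sh - summarize a Hacker News thread
    # Usage: ./hn-summary.sh ITEM_ID
    # Fetch the full thread as JSON, keep just authors and comment text,
    # then pipe it into llm with a system prompt describing the task.
    curl -s "https://hn.algolia.com/api/v1/items/$1" | \
      jq '[recurse(.children[]) | {author, text}]' | \
      llm -s 'Summarize the themes of the opinions expressed in this Hacker News thread.'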
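The piping pattern from 35:01 in its simplest form: any command's output becomes the prompt, and -s supplies the instructions. The file name and prompts here are placeholders:

    # Explain a piece of code
    cat server.py | llm -s 'Explain what this code does'

    # Turn a natural-language request into a shell command
    llm -s 'Reply with a single bash command and nothing else' \
        'find every PDF in my home directory modified this week'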
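A sketch of the shot-scraper-to-LLM pipeline from 37:08; the URL, JavaScript expression, and question are placeholders standing in for whatever page you want to ask about:

    # Install shot-scraper and its headless browser
    pip install shot-scraper
    shot-scraper install

    # Evaluate JavaScript on a page, emit the result as JSON,
    # and feed it to llm as context for a question (a simple RAG step)
    shot-scraper javascript https://example.com 'document.body.innerText' | \
      llm -s 'Using only this page content, describe what the site is about'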
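A sketch of the embedding workflow from 41:13, assuming OpenAI's 3-small embedding model alias and placeholder collection, directory, and database names:

    # Embed every Markdown file in a directory into a named collection,
    # storing vectors and original text in a SQLite database
    llm embed-multi blog-posts -m 3-small --files posts '*.md' -d blog.db --store

    # Semantic search: find the stored entries most similar to a query
    llm similar blog-posts -c 'what have I written about prompt injection?' -d blog.db

The chapter builds on this by feeding the results of a similarity search back into a prompt, which is the retrieval-augmented generation loop described in the talk.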
