The fastest way to run OpenAI Whisper Turbo on a Mac

Learn Data with Mark · 7,044 views · 6 months ago

mlx-whisper is the fastest way to do automatic speech recognition on a Mac with OpenAI's Whisper models. In this video, we'll learn how to use it to transcribe an episode of the Tech Meme Ride Home podcast. We'll then see how well an LLM can answer questions about the episode based on the transcript, and we'll see how fast mlx-whisper is compared to insanely-fast-whisper.

#openai #asr #mlxwhisper

Chapters:
00:00 insanely-fast-whisper isn't the fastest
00:16 Intro to mlx-whisper
00:30 mlx-whisper options and overview
01:05 Transcribing with mlx-whisper
01:41 Text version of transcript
01:58 JSON version of transcript
02:21 srt version of transcript
02:43 Summarising the transcript with llama3.2/Ollama
03:28 mlx-whisper vs insanely-fast-whisper

🟡 Code - https://github.com/mneedham/LearnDataWithMark/tree/main/mlx-whisper-playground
🟡 mlx-whisper - https://pypi.org/project/mlx-whisper/
🟡 mlx-whisper models - https://huggingface.co/collections/mlx-community/whisper-663256f9964fbb1177db93dc
🟡 llm - https://pypi.org/project/llm/
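The video walks through the text, JSON, and srt versions of the transcript. The srt format groups the transcript into numbered, timestamped cues; here is a minimal sketch of parsing such a file into (start, end, text) tuples, using a hypothetical two-cue sample rather than real mlx-whisper output:

```python
import re

def parse_srt(srt_text):
    """Parse SRT subtitle text into a list of (start, end, text) cues."""
    cues = []
    # Cues are separated by blank lines: an index line, a timing line,
    # then one or more lines of text.
    for block in re.split(r"\n\s*\n", srt_text.strip()):
        lines = block.splitlines()
        if len(lines) < 3:
            continue
        start, end = lines[1].split(" --> ")
        cues.append((start.strip(), end.strip(), " ".join(lines[2:])))
    return cues

# Hypothetical sample resembling an srt transcript of the podcast
sample = """\
1
00:00:00,000 --> 00:00:04,000
Welcome to the Tech Meme Ride Home podcast.

2
00:00:04,000 --> 00:00:07,500
Here are today's headlines.
"""

cues = parse_srt(sample)
print(len(cues))   # 2
print(cues[0][2])  # Welcome to the Tech Meme Ride Home podcast.
```

The timestamped cues are what make the srt version useful for jumping back to the point in the episode an LLM's answer came from.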
