vLLM Office Hours - Using NVIDIA CUTLASS for High-Performance Inference - September 05, 2024

Neural Magic · 3,031 views · 7 months ago

In this session, we explored the updates in the vLLM v0.6.0 release, including system-level changes that delivered a 2.7x increase in throughput and a 5x reduction in latency. We then dove into how you can leverage NVIDIA CUTLASS to optimize high-performance inference with INT8 and FP8 kernels in vLLM. During the Q&A, we tackled audience questions on hardware diversity, different quantization methods, the pros and cons of using torch.compile in vLLM, deployment strategies for running multiple copies of vLLM with a custom Docker entrypoint script, and more.

Session slides: https://docs.google.com/presentation/d/184uArSlJTwuoS1SOTT8jNSUE8ojJdzHh
Explore and join our bi-weekly vLLM office hours: https://hubs.li/Q02Y5Pbh0
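For a concrete starting point, here is a minimal sketch of running FP8 inference with vLLM, the setting where the CUTLASS-backed quantized kernels discussed in the session come into play. The checkpoint name is illustrative, and FP8 execution assumes a GPU and vLLM build that support it:

```python
# Minimal sketch: offline FP8 inference with vLLM (assumes a GPU with
# FP8 support and a vLLM build whose quantized GEMMs are CUTLASS-backed).
from vllm import LLM, SamplingParams

# The checkpoint name below is illustrative; any FP8-quantized model works.
llm = LLM(
    model="neuralmagic/Meta-Llama-3-8B-Instruct-FP8",
    quantization="fp8",
)

params = SamplingParams(temperature=0.7, max_tokens=64)
outputs = llm.generate(["What does CUTLASS accelerate?"], params)
print(outputs[0].outputs[0].text)
```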
