Join Andrey Cheptsov, founder of dstack, as he introduces a next-generation open-source container orchestrator built specifically for AI workloads. Learn how dstack streamlines development, training, and inference across any cloud or on-prem environment, all while delivering the simplicity that DevOps teams and AI engineers crave. Discover dstack’s key features, including effortless multi-node training, scalable inference, universal hardware support (including GPUs and TPUs), and a powerful abstraction layer that keeps you focused on building AI models, not managing infrastructure.
Connect with Andrey Cheptsov -
https://www.linkedin.com/in/andrey-cheptsov/
📢 Let us know your thoughts in the comments
--
Timestamps
0:00 - 0:57 - Kick Off: Why Container Orchestration for AI
0:58 - 1:53 - Compare: Kubernetes vs. Slurm for AI Workloads
1:54 - 2:57 - Explore: dstack’s Universal Hardware Support
2:58 - 3:39 - Simplify: Abstraction for AI Workflows
3:40 - 4:26 - Launch: Dev Environments Example
4:27 - 5:54 - Scale: Training with Tasks & Distributed Frameworks
5:55 - 6:42 - Implement: Services for Scalable Inference
6:43 - 7:45 - Integrate: On-Prem & Cloud Deployments
7:46 - 7:54 - Persist: Storage & Final Takeaways
--
About TensorWave:
TensorWave is the AI and HPC cloud purpose-built for performance. Powered exclusively by AMD Instinct™ Series GPUs, we deliver high-bandwidth, memory-optimized infrastructure that scales with your most demanding models—training or inference.
--
Connect with TensorWave:
https://www.tensorwave.com
https://www.x.com/tensorwavecloud
https://www.linkedin.com/in/tensorwave
https://www.instagram.com/tensorwave_cloud
https://www.youtube.com/@TensorWaveCloud
#AICompute #GPUs #BeyondCUDA #AIInfrastructure