Vision language action models for autonomous driving at Wayve

Weights & Biases

👀 *All of the Fully Connected London 2024 videos are available at http://wandb.me/fclondon24yt*

*About Oleg Sinavski's session on advancing autonomous driving with Vision-Language-Action (VLA) models*

Join Oleg Sinavski, Principal Applied Scientist at Wayve, as he presents the latest advancements in autonomous driving through Vision-Language-Action (VLA) models. Learn how Wayve integrates visual perception with natural language processing to create explainable, end-to-end driving systems.

*Highlights of the session include:*

- An overview of Wayve's innovative Lingo-1 and Lingo-2 models.
- The ability of VLA models to interpret complex driving scenarios and generate actionable commands.
- Demonstrations of these models in real-world applications.
- The challenges and solutions in developing autonomous vehicles that can reason and act like humans.

Discover the future of autonomous driving technology with insights from a leading expert in the field.
