Subscribe to The NonZero Newsletter at https://nonzero.substack.com
Episode post in the NonZero Newsletter: https://www.nonzero.org/p/mutually-assured-ai-malfunction
0:00 Dan’s new paper, “Superintelligence Strategy”
2:34 “Mutually assured AI malfunction”
7:22 Does China see America’s AI policy as a grave threat?
14:21 How likely is US-China conflict over superintelligence?
25:02 How America’s chip war makes war over Taiwan more likely
33:21 Why did China-hawkism sweep Silicon Valley?
41:44 Can we avoid AI doom without global governance?
57:02 The key points of Dan’s paper
1:04:32 Heading to Overtime
Discussed in Overtime:
Was OpenAI’s o3 properly tested?
Can reasoning get us to AGI?
Conceptualizing AI's conceptual space.
How LLMs generate “values.”
Does multimodality make AIs smarter?
Dan: Big AI breakthroughs are near.
Robert Wright (Nonzero, The Evolution of God, Why Buddhism Is True) and Dan Hendrycks (The Center for AI Safety). Recorded February 04, 2025.
Dan's paper: https://www.nationalsecurity.ai/
Twitter: https://twitter.com/NonzeroPods