
How AI Could Lead to the End of Democracy | Tom Davidson

80,000 Hours

Throughout history, technological revolutions have fundamentally shifted the balance of power in society. The Industrial Revolution created conditions where democracies could flourish for the first time, as nations needed educated, informed, and empowered citizens to deploy advanced technologies and remain competitive. Unfortunately, there's every reason to think artificial general intelligence (AGI) will reverse that trend.

*Come work with us! We're accepting expressions of interest for a new host and chief of staff until May 6. Learn more and apply at https://80k.info/work*

Today's guest, Tom Davidson of the Forethought Centre for AI Strategy, argues in a report published today that advanced AI systems will enable unprecedented power grabs by tiny groups of people, primarily by removing the need for other human beings to participate.

Read the report: https://80k.info/td25

"Over the broad span of history, democracy is more the exception than the rule," Tom points out. "With AI, it will no longer be important to a country's competitiveness to have an empowered and healthy citizenship."

In established democracies, we're not typically that concerned about coups. We doubt anyone will try, and if they do, we expect human soldiers to refuse to join in. Unfortunately, the AI-controlled military systems of the future will lack such inhibitions: "Human armies today are very reluctant to fire on their civilians. If we get instruction-following AIs, then those military systems will just fire."

As militaries worldwide race to incorporate AI to remain competitive, they risk leaving the door open for exploitation by malicious actors in a few ways:

• AI systems could be programmed to simply follow orders from the top of the chain of command, potentially handing total power indefinitely to any leader willing to abuse that authority.
• Systems could contain "secret loyalties" inserted during development that activate at critical moments.
• Superior cyber capabilities could enable small groups to hack into and take full control of AI-operated military infrastructure.

Alternatively, millions of loyal AI workers concentrated in a leader's hands could greatly speed up "autocratisation": removing the checks and balances on their power and doing away with future elections that might challenge them.

Transcript and links to learn more: https://80k.info/td

Chapters:

• Cold open (00:00:00)
• How AI enables tiny groups to seize power (00:00:50)
• The 3 different threats (00:02:14)
• Is this common sense or far-fetched? (00:03:24)
• "No person rules alone." Except now they might. (00:06:27)
• Underpinning all 3 threats: Secret AI loyalties (00:12:31)
• Are secret AI loyalties possible right now? (00:16:59)
• Key risk factors (00:20:30)
• Preventing secret loyalties in a nutshell (00:22:07)
• Are human power grabs more plausible than 'rogue AI'? (00:24:32)
• If you took over the US, could you take over the whole world? (00:33:22)
• Will this make it impossible to escape autocracy? (00:37:31)
• Threat 1: AI-enabled military coups (00:41:34)
• Will we sleepwalk into an AI military coup? (00:51:47)
• Could AIs be more coup-resistant than humans? (00:57:53)
• Threat 2: Autocratisation (01:00:48)
• Will AGI be super-persuasive? (01:11:06)
• Threat 3: Self-built hard power (01:13:31)
• Can you stage a coup with 10,000 drones? (01:21:23)
• That sounds a lot like sci-fi... is it credible? (01:23:33)
• Will we foresee and prevent all this? (01:27:54)
• Are people psychologically willing to do coups? (01:29:22)
• Will a balance of power between AIs prevent this? (01:33:31)
• Will whistleblowers or internal mistrust prevent coups? (01:35:48)
• Will rogue AI preempt a human power grab? (01:44:31)
• The best reasons not to worry (01:47:09)
• How likely is this in the US? (01:49:28)
• Is a small group seizing power really so bad? (01:56:58)
• Countermeasure 1: Block internal misuse (02:00:33)
• Countermeasure 2: Cybersecurity (02:10:27)
• Countermeasure 3: Model spec transparency (02:12:36)
• Countermeasure 4: Sharing AI access broadly (02:21:55)
• Is it more dangerous to concentrate or share AGI? (02:26:45)
• Is it important to have more than one powerful AI country? (02:29:31)
• In defence of open sourcing AI models (02:32:38)
• 2 ways to stop secret AI loyalties (02:40:19)
• Preventing AI-enabled military coups in particular (02:53:18)
• How listeners can help (02:59:06)
• How to help if you work at an AI company (03:03:00)
• The power ML researchers still have, for now (03:07:09)
• How to help if you're an elected leader (03:10:29)

_This episode was originally recorded on January 20, 2025._

_Video editing: Simon Monsour_
_Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong_
_Camera operator: Jeremy Chevillotte_
_Transcriptions and web: Katy Moore_
