Ask ChatGPT or a similar chatbot a question and there’s a good chance you’ll be impressed at how adeptly it comes up with a good answer — unless it spits out unrealistic nonsense instead. Part of what’s mystifying about these kinds of machine learning systems is that they are fundamentally black boxes. No one knows precisely how they arrive at the answers that they do. Given that mystery, is it possible that these systems in some way truly understand the world and the questions they answer? In this episode, the computer scientist Yejin Choi of the University of Washington and host Steven Strogatz discuss the capabilities and limitations of chatbots and the large language models, or LLMs, on which they are built.
Yejin Choi: https://homes.cs.washington.edu/~yejin/
-----------
Chapters:
00:00 - Introduction
04:30 - What are the different types of AI?
08:10 - How LLMs work and learn
13:45 - Artificial emotion
15:45 - How AI systems are trained
21:55 - Choi's research
26:15 - Teaching AI common sense
35:30 - Big tech and AI
-----------
“The Joy of Why” is a Quanta Magazine podcast about curiosity and the pursuit of knowledge. The mathematician and author Steven Strogatz and the astrophysicist and author Janna Levin take turns interviewing leading researchers about the great scientific and mathematical questions of our time.
- Listen to more episodes of The Joy of Why: https://www.quantamagazine.org/tag/the-joy-of-why/
----------
- VISIT our website: https://www.quantamagazine.org
- LIKE us on Facebook: https://www.facebook.com/QuantaNews
- FOLLOW us on Twitter: https://twitter.com/QuantaMagazine
Quanta Magazine is an editorially independent publication supported by the Simons Foundation: https://www.simonsfoundation.org/