
How Developers might stop worrying about AI taking software jobs and Learn to Profit from LLMs

Internet of Bugs · 123,595 views · 1 year ago

Right now, the software industry is kind of stuck: no one wants to hire non-AI developers because the hype claims that all the jobs are about to get replaced by garbage like Devin. But new evidence indicates that the pace of AI growth is slowing down, and that two years of GitHub Copilot have created a "Downward Pressure on Code Quality." The future is never certain, but it looks like there's a path for the next few years to bring a new boom in software development, and it might start soon.

00:00 Intro
03:49 Ezra Klein's interview of Anthropic's CEO from April 12th
04:06 There are no Exponentials in the real world (see the first sketch below the references)
05:10 Research showing LLMs are reaching the point of diminishing returns
06:28 Article on the stagnation of LLM growth
06:57 Stanford AI Index Report
07:18 Research showing that LLMs are running out of training data
07:28 Research on "Model Collapse" (see the second sketch below the references)
08:13 Research showing AI reducing overall Code Quality
09:04 Quick Recap
09:27 Implications for Software Developers
10:56 Parallels to 2008/2009 and the App boom
11:44 How and when we might know
12:07 Wrap up

Papers and references from this video:

# The complexity of the human mind
https://mindmatters.ai/2022/03/yes-the-human-brain-is-the-most-complex-thing-in-the-universe/
https://www.psychologytoday.com/us/blog/consciousness-and-beyond/202309/the-staggering-complexity-of-the-human-brain

# Ezra Klein's interview of Anthropic's CEO
https://www.nytimes.com/2024/04/12/podcasts/transcript-ezra-klein-interviews-dario-amodei.html

# 3Blue1Brown video on logistic curves and exponentials
https://www.youtube.com/watch?v=Kas0tIxDvrg

# Research showing LLMs are reaching the point of diminishing returns
https://garymarcus.substack.com/p/evidence-that-llms-are-reaching-a
https://arxiv.org/pdf/2403.05812
https://paperswithcode.com/sota/multi-task-language-understanding-on-mmlu

# Research showing that Data Availability is likely the bottleneck
https://en.wikipedia.org/wiki/Chinchilla_(language_model)
https://www.lesswrong.com/posts/6Fpvch8RR29qLEWNH/chinchilla-s-wild-implications
https://www.alignmentforum.org/posts/6Fpvch8RR29qLEWNH/chinchilla-s-wild-implications

# The 2024 Stanford AI Index Report, with a section on running out of data
https://aiindex.stanford.edu/report/

# Research showing we're running out of LLM training data
https://epochai.org/blog/will-we-run-out-of-ml-data-evidence-from-projecting-dataset

# Research showing "Model Collapse" when training data contains LLM output
https://arxiv.org/pdf/2305.17493

# Research showing AI code generation reducing code quality
https://visualstudiomagazine.com/articles/2022/09/13/copilot-impact.aspx
https://visualstudiomagazine.com/Articles/2024/01/25/copilot-research.aspx
https://www.gitclear.com/coding_on_copilot_data_shows_ais_downward_pressure_on_code_quality

# Release hype about GPT-5
https://tech.co/news/gpt-5-preview-release-rumors
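To illustrate the "no exponentials in the real world" point from the 04:06 chapter, here is a minimal Python sketch (an editor's illustration, not code from the video). The CEILING, RATE, and X0 values are arbitrary placeholders; the point is only that exponential and logistic growth are nearly indistinguishable early on, and then the logistic curve saturates once its ceiling starts to bind.

import math

# Toy comparison: unbounded exponential growth vs. logistic growth.
# Both share the same starting value and early growth rate, but the
# logistic curve flattens at a ceiling (e.g., a hypothetical hard
# limit such as the amount of usable training data).
CEILING = 1000.0   # arbitrary hard limit
RATE = 0.5         # shared early growth rate
X0 = 1.0           # shared starting value

def exponential(t: float) -> float:
    """Unbounded growth: x(t) = X0 * e^(RATE * t)."""
    return X0 * math.exp(RATE * t)

def logistic(t: float) -> float:
    """Growth toward CEILING: same early slope, then saturation."""
    return CEILING / (1.0 + ((CEILING - X0) / X0) * math.exp(-RATE * t))

for t in range(0, 31, 5):
    print(f"t={t:2d}  exponential={exponential(t):>12.1f}  logistic={logistic(t):>7.1f}")

Early on the two columns match almost exactly, which is why growth that is actually logistic can be mistaken for exponential right up until the ceiling bites.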
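And as a companion to the "Model Collapse" chapter at 07:28, here is a toy simulation (again an editor's sketch, not taken from the arXiv paper) of the effect that paper describes: a "model" repeatedly refit on samples of its own output loses the tails of the original distribution, so its variance tends to drift toward zero over generations.

import random
import statistics

# Toy "model collapse" simulation: fit a Gaussian to samples drawn
# from the previous generation's fitted Gaussian. Each refit loses
# tail information, so sigma tends to shrink over many generations
# while the mean wanders randomly.
random.seed(42)
mu, sigma = 0.0, 1.0   # generation 0: the "real data" distribution
N = 20                 # small samples per generation exaggerate the effect

for gen in range(201):
    if gen % 25 == 0:
        print(f"generation {gen:3d}: mu={mu:+.4f}  sigma={sigma:.4f}")
    samples = [random.gauss(mu, sigma) for _ in range(N)]
    mu = statistics.mean(samples)     # refit the "model" on its own output
    sigma = statistics.stdev(samples)

The analogue for LLMs is that rare constructs gradually disappear from generated text, so training on that output narrows what the next model can produce.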
