10 amazing things you CAN’T do with ChatGPT

Andrew Steele

Could ChatGPT…destroy the world? Watch this to find out: https://youtu.be/3CRZjrndFoU

There are _so many_ videos online about using AI for research, to summarise complex ideas, or to write your emails for you. But what they don’t tell you is that ChatGPT, and other ‘large language models’ like Google Bard and Microsoft Bing, lie, make stuff up, give out dangerous information (all you have to do is ask it to pretend to be your dead grandma?!) and…most surprising of all…can’t do basic maths!

*Chapters*
00:00 Introduction
00:34 1 – ChatGPT lies!
03:42 2 – My favourite trivia question
04:49 3 – Dangerous advice
05:36 4 – Does ‘Ankara’ end with an ‘n’?!
06:08 5 – Nice but dim
07:49 6 – Providing fake references
09:06 7 – The ‘grandma hack’
10:56 8 – It doesn’t know its limits
11:39 9 – ChatGPT can’t do maths!
12:34 What we should do
15:13 10 – Don’t let it write your outro

*Sources and further reading*
The /r/ChatGPT subreddit is a hilarious and ever-growing collection of hacks and errors: https://www.reddit.com/r/ChatGPT/
Article on the instructions given to the humans who train Google Bard: https://www.bnnbloomberg.ca/google-s-ai-chatbot-is-trained-by-humans-who-say-they-re-overworked-underpaid-and-frustrated-1.1944600

Just to show I didn’t cherry-pick these examples, here are the full conversations I had with ChatGPT:
https://chat.openai.com/share/46d84dc5-1fd6-4f0a-ae9e-8b2e4b0fd795 (almost everything)
https://chat.openai.com/share/77801af9-c8d5-4b8b-b347-433433131d66 (retrying the ‘grandma hack’)

I didn’t use every idea I tried, and a few of them took a couple of goes, but if anything the main way the video is misleading is that it flatters ChatGPT by speeding up its text generation with the magic of editing! That Mona Lisa took it over a minute…I honestly think I could’ve drawn something better in that time! (If you want to poke at the maths failures yourself, there’s a short code sketch at the end of this description.)

Quite a few prompts have been uncovered that ‘hack’ ChatGPT, letting people extract everything from Windows and Steam keys to instructions for making nuclear bombs or biological weapons. Often the keys or instructions turn out to be fake or incomplete, but this shows us the risks inherent in these models. Sometimes just telling it that a scenario is fictional, or asking it to write a script for actors in a play, is enough. Another, more specific jailbreak involves asking ChatGPT to roleplay as a chatbot without limits called ‘Do Anything Now’, or ‘DAN’, and explaining that DAN will answer any question free from the ethical, moral and legal constraints normally imposed on ChatGPT. These may not work by the time you’re reading this, as OpenAI patches jailbreaks, so if you hear about any new ones, let me know in the comments!

*And finally…*
Follow me on Twitter: https://twitter.com/statto
Follow me on Instagram: https://www.instagram.com/andrewjsteele
Like my page on Facebook: https://www.facebook.com/DrAndrewSteele
Follow me on Mastodon: https://mas.to/@statto
Read my book, _Ageless: The new science of getting older without getting old_: https://ageless.link/
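
*P.S. Try the maths test yourself*
If you’d like to re-run the ‘ChatGPT can’t do maths’ test programmatically rather than in the web interface, here’s a minimal sketch using OpenAI’s official Python SDK. This is my own illustration, not how the video was made: the model name, prompt, and numbers are all assumptions, and the point is simply to compare the model’s answer against Python’s.

```python
# Minimal sketch (not from the video): ask a chat model an arithmetic
# question and check it against Python's own result.
# Assumes the official `openai` SDK (v1+) and OPENAI_API_KEY in your environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative numbers: big enough that the model can't just recall the answer.
a, b = 34957, 70764

reply = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model; swap in whichever you have access to
    messages=[{
        "role": "user",
        "content": f"What is {a} * {b}? Reply with only the number.",
    }],
)
answer = reply.choices[0].message.content.strip().replace(",", "")

print(f"Model says:  {answer}")
print(f"Python says: {a * b}")
print("Correct!" if answer == str(a * b) else "Wrong, as the video predicts.")
```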
