It is important to make optimal use of your hardware resources (CPU and GPU) while training a deep learning model. You can use the tf.data.Dataset.prefetch(AUTOTUNE) and tf.data.Dataset.cache() methods for this purpose: prefetch overlaps data preparation on the CPU with model training on the GPU, while cache stores the dataset in memory (or on disk) after the first epoch so later epochs skip the expensive loading and transformation steps. Together they help you optimize TensorFlow input pipeline performance. In this video we will go over how these two methods work and write some code as well.

Code: https://github.com/codebasics/deep-learning-keras-tf-tutorial/blob/master/45_prefatch/prefetch_caching.ipynb

Deep learning playlist: https://www.youtube.com/playlist?list=PLeo1K3hjS3uu7CxAacxVndI4bE_o3BDtO
Machine learning playlist: https://www.youtube.com/playlist?list=PLeo1K3hjS3uvCeTYTeyfe0-rN5r8zn9rw

🔖Hashtags🔖
#tensorflowpipeline #tensorflowprefetchdataset #tensorflowprefetchautotune #prefetchautotune #tensorflowinputpipeline #tensorflowprefetch #tensorflowdatapipeline

Do you want to learn technology from me? Check https://codebasics.io/ for my affordable video courses.

🌎 Website: https://www.codebasics.io/
🎥 Codebasics Hindi channel: https://www.youtube.com/channel/UCTmFBhuhMibVoSfYom1uXEg

#️⃣ Social Media #️⃣
🔗 Discord: https://discord.gg/r42Kbuk
📸 Instagram: https://www.instagram.com/codebasicshub/
🔊 Facebook: https://www.facebook.com/codebasicshub
📱 Twitter: https://twitter.com/codebasicshub
📝 Linkedin (Personal): https://www.linkedin.com/in/dhavalsays/
📝 Linkedin (Codebasics): https://www.linkedin.com/company/codebasics/

❗❗ DISCLAIMER: All opinions expressed in this video are my own and not those of my employer.
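
As a quick taste of what the video covers, here is a minimal sketch of cache() and prefetch(AUTOTUNE) in action. It is not taken from the notebook linked above; the sleep-based generator is just a hypothetical stand-in for a slow data source such as disk reads:

```python
import time
import tensorflow as tf

# Hypothetical slow data source: each element takes a moment to "load"
# (the sleep simulates disk I/O or heavy preprocessing).
def slow_generator():
    for i in range(5):
        time.sleep(0.01)  # simulated read latency
        yield i

ds = tf.data.Dataset.from_generator(
    slow_generator,
    output_signature=tf.TensorSpec(shape=(), dtype=tf.int64),
)

# cache(): materialize elements after the first pass, so subsequent
# epochs skip the expensive generator entirely.
# prefetch(AUTOTUNE): fetch upcoming elements in a background thread
# while the training step consumes the current one; AUTOTUNE lets
# tf.data pick the buffer size dynamically.
ds = ds.cache().prefetch(tf.data.AUTOTUNE)

for epoch in range(2):  # the second epoch reads from the cache
    values = [int(x) for x in ds]
    print(f"epoch {epoch}: {values}")
```

With a real training loop (e.g. model.fit(ds)), the same two calls at the end of the pipeline keep the GPU fed instead of idling while the CPU prepares the next batch.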