Lock-free programming, by Song Pan (12th October 2016)
Over the last decade, multi-core processors have become mainstream. To fully exploit the potential of executing multiple instructions simultaneously, programmers divide their processes into independent execution units called threads, which can run in parallel. However, concurrent threads share memory, and a data race occurs when two or more threads access the same memory location at the same time and at least one of them writes to it. The traditional remedy is locking: access to a critical section is restricted to the single thread that currently holds the lock. Locking, however, often fails to meet the speed and stability requirements of time-critical systems. Lock-free programming is a technique, popularised over the past two decades, that allows multiple threads to interact without blocking one another, resulting in a faster and more robust system. In this talk, I will introduce the general principles of lock-free programming, present a simple example of a lock-free data structure (a lock-free stack), and explore the possible pitfalls along the way.
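As a taste of the kind of structure the talk will build, below is a minimal sketch of a Treiber-style lock-free stack using C++11 atomics. The class and member names, the memory orderings, and the decision to leak popped nodes (to sidestep safe memory reclamation, one of the pitfalls discussed) are illustrative assumptions, not the exact code presented in the talk.

#include <atomic>
#include <utility>

// Minimal sketch of a lock-free (Treiber) stack, assuming C++11 atomics.
// Threads cooperate only through compare-and-swap on the head pointer,
// so no thread ever blocks another.
template <typename T>
class LockFreeStack {
    struct Node {
        T value;
        Node* next;
        explicit Node(T v) : value(std::move(v)), next(nullptr) {}
    };

    std::atomic<Node*> head{nullptr};

public:
    void push(T value) {
        Node* node = new Node(std::move(value));
        node->next = head.load(std::memory_order_relaxed);
        // Retry until we atomically swing head from node->next to node.
        // On failure, compare_exchange_weak reloads node->next with the
        // current head, so the loop body is empty.
        while (!head.compare_exchange_weak(node->next, node,
                                           std::memory_order_release,
                                           std::memory_order_relaxed)) {
        }
    }

    bool pop(T& out) {
        Node* node = head.load(std::memory_order_acquire);
        // Retry until we swing head from node to node->next (or the stack
        // becomes empty); on failure, node is reloaded with the current head.
        while (node &&
               !head.compare_exchange_weak(node, node->next,
                                           std::memory_order_acquire,
                                           std::memory_order_acquire)) {
        }
        if (!node) return false;
        out = std::move(node->value);
        // Deliberately not deleting node: safe reclamation (hazard pointers,
        // epoch-based schemes, etc.) is one of the pitfalls covered in the talk.
        return true;
    }
};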