When I first got Programming Pearls by Jon Bentley, I was maybe 10 or 11 years old. I remember reading it many times. It showed how to improve algorithms, how to write code that is simple and fast, and it told many stories that stayed with me. I did not understand most of it, but the book felt magical. I was also amazed by the simple back-of-the-envelope calculations that showed how to think about problems, and by the idea that sometimes we do not even need a computer to solve them.
At some point, I started to wonder if these optimizations still mattered. Computers were getting faster. Memory was bigger. Compilers were smarter. I asked myself if anyone still cared about small fixes in slow code.
Now, in the era of big data, I see the lesson again: when we process terabytes of data on thousands of servers, every small improvement counts.
An autonomous car reads gigabytes of sensor data every second. Making an algorithm even 1 percent faster can save a lot of power and money at that scale.
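A quick back-of-the-envelope calculation, in the spirit of the book, makes this concrete. All the numbers below (fleet size, power draw, driving hours, electricity price) are made-up assumptions for illustration, not measurements:

```python
# Back-of-the-envelope: what a 1% speedup is worth at fleet scale.
# Every number here is an illustrative assumption.

fleet_size = 100_000   # cars in the fleet (assumed)
watts_per_car = 500    # compute power for sensor processing (assumed)
hours_per_day = 8      # driving hours per car per day (assumed)
price_per_kwh = 0.15   # electricity price in dollars (assumed)

daily_kwh = fleet_size * watts_per_car * hours_per_day / 1000
daily_cost = daily_kwh * price_per_kwh

savings_per_year = daily_cost * 365 * 0.01  # a 1% improvement
print(f"Fleet compute cost per day: ${daily_cost:,.0f}")
print(f"Yearly savings from a 1% speedup: ${savings_per_year:,.0f}")
```

With these assumptions, a 1 percent improvement is worth roughly 200,000 dollars a year, and that is before counting servers, cooling, or hardware that no longer needs to be bought. The exact numbers do not matter; the point is that a two-minute estimate tells you whether an optimization is worth chasing.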
AI models learn from huge datasets. Making training, or inference, even a little faster can lead to huge savings.
In the world of big data there is no real limit to optimization, and the ideas from Programming Pearls never get old.