Sep 27, 2011

Haven’t we all written Fibonacci sequence generators, sorting and searching algorithms, and a handful of data structures while learning to program? What’s common to these problems is their simplicity. I almost said triviality, but there are nuances that the beginner will stumble over and, hopefully, learn from. (And isn’t that the whole point of the exercise anyway?)

Fibonacci is a great introduction to recursion. Sorting teaches us the fundamentals of algorithms, complexity, and of course data structures and searching. We learn that not all ways of organizing data are born equal. Linked lists have a leg up on arrays some of the time, but the ranks are reversed at other times.

This is how we learn what our tools are, how to use them, and when. It’s the equivalent of painting fences. Without these mental gymnastics we’d have to open up a textbook to look up “how to find duplicates in two sets”, or wonder why our data sometimes comes out shuffled, but not always (hint: dictionaries typically don’t preserve order, unless you use one that explicitly guarantees it).
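
For instance, here is what that lookup boils down to, in a minimal Python sketch (the data and names are mine, purely for illustration): the elements common to two sets, plus a reminder that hashed containers make no ordering promises.

    # Elements common to two sets, via set intersection.
    a = {3, 1, 4, 5, 9, 2, 6}
    b = {9, 7, 5, 8, 2}
    common = a & b
    print(common)          # e.g. {9, 2, 5} -- iteration order is not guaranteed
    print(sorted(common))  # [2, 5, 9] -- impose an order explicitly if you need one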

The answer to “why is our product suddenly too slow, with customers complaining now when it was lightning fast for years?” isn’t in any textbook. But a seasoned programmer who suspects the customer data may have doubled will first check for a quadratic algorithm (or worse) somewhere in the code path.
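
Here is that check as a sketch, assuming nothing beyond the standard library (the quadratic duplicate scan is my own illustration): double the input and see whether the runtime roughly quadruples.

    import time

    def has_duplicates_quadratic(items):
        # Compares every pair: O(n^2), the kind of code path that hides for years.
        return any(items[i] == items[j]
                   for i in range(len(items))
                   for j in range(i + 1, len(items)))

    for n in (1000, 2000, 4000):
        data = list(range(n))  # no duplicates at all: forces the worst case
        start = time.perf_counter()
        has_duplicates_quadratic(data)
        print(n, round(time.perf_counter() - start, 2), "s")  # roughly 4x per doubling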

While there is no doubt, at least in my mind, that it’s absolutely necessary to work through all these contrived problems, from writing a four-operation calculator with an expression parser to solving tic-tac-toe, I think we should include a whole other genre of problems in our training kits.

The Unsolvables

There is a group of problems that, as far as we know, have no efficient solutions. That is, the only known exact approach is brute force, or we settle for approximations built on heuristics (trial-and-error, or some form of genetic algorithm). The obvious usual suspects are there, the NP-complete folk, but they aren’t alone: even some algorithms that run in quadratic or higher polynomial time are impractical beyond a certain input size.

There is no better way to get a real feel for what it means for an algorithm to run in quadratic time than to write the code and watch yourself age as your machine churns away hopelessly, while another algorithm of lower complexity has long since finished.

Take Quicksort, for example. The textbook implementation works swiftly, that is, until you feed it sorted data. Multiplication by the schoolbook method is simple and elegant, until you apply it to large numbers. Trying to optimize such a primitive algorithm is very instructive: it teaches you that optimization isn’t the point; the complexity is inherent. Yet we don’t reach for Merge sort with its O(n log n) worst-case performance; we default to Quicksort with its O(n²) worst-case characteristic. This is not at all obvious to a beginner, or even to many experienced programmers. Theory and practice sometimes do part ways, and that’s yet another important lesson.
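
A quick way to live through that lesson is to run the textbook Quicksort against sorted input. The sketch below is my own illustration (first-element pivot, modest sizes to spare the reader); on sorted data the partitions are maximally unbalanced, so it degenerates to O(n²) time and O(n) recursion depth.

    import random
    import sys
    import time

    sys.setrecursionlimit(20000)  # the naive pivot degenerates to depth n on sorted input

    def quicksort(xs):
        # Textbook version: first element as pivot.
        if len(xs) <= 1:
            return xs
        pivot, rest = xs[0], xs[1:]
        return (quicksort([x for x in rest if x < pivot])
                + [pivot]
                + quicksort([x for x in rest if x >= pivot]))

    n = 5000
    for label, data in (("random", random.sample(range(n), n)),
                        ("sorted", list(range(n)))):
        start = time.perf_counter()
        quicksort(data)
        print(label, round(time.perf_counter() - start, 2), "s")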

Without attempting a travelling salesman problem or a knapsack problem, we are in danger of never really understanding why complexity matters: why some algorithms are hopeless and will come back to haunt you one day, while others seem stuck in their misery and can hardly be improved.
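
To make that concrete, here is a brute-force travelling salesman sketch (random planar cities and all the names are my own, for illustration). Watching the timings as the city count creeps up is the whole lesson: fixing the start city still leaves (n−1)! tours to try.

    import itertools
    import math
    import random
    import time

    def tour_length(cities, order):
        # Total length of the closed tour visiting cities in the given order.
        return sum(math.dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
                   for i in range(len(order)))

    for n in (5, 8, 10):
        cities = [(random.random(), random.random()) for _ in range(n)]
        start = time.perf_counter()
        best = min(itertools.permutations(range(1, n)),
                   key=lambda o: tour_length(cities, (0,) + o))
        print(n, "cities:", round(time.perf_counter() - start, 2), "s")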

And what better way to understand how, and more importantly why, modern cryptography works than to try to factorize numbers? Primality is another long-standing problem: only recently (the AKS algorithm, in 2002) was primality testing proved to be polynomial. And how does all of that relate to one-way functions?
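
The asymmetry is easy to demonstrate: testing that a number is (probably) prime is fast, while recovering its factors by brute force is not. A sketch, assuming nothing beyond the standard library (the Fermat test below is a demo-grade shortcut, not production cryptography):

    import random
    import time

    def is_probable_prime(n, trials=20):
        # Fermat test: good enough for a demo; rare Carmichael numbers fool it.
        if n < 4:
            return n in (2, 3)
        return all(pow(random.randrange(2, n - 1), n - 1, n) == 1
                   for _ in range(trials))

    def next_prime(n):
        while not is_probable_prime(n):
            n += 1
        return n

    def trial_factor(n):
        # Brute force: try every odd divisor up to sqrt(n).
        d = 3
        while d * d <= n:
            if n % d == 0:
                return d
            d += 2
        return n

    p, q = next_prime(10**7), next_prime(2 * 10**7)
    n = p * q

    start = time.perf_counter()
    is_probable_prime(n)
    print("primality test:", round(time.perf_counter() - start, 4), "s")  # milliseconds

    start = time.perf_counter()
    print("factor:", trial_factor(n), "in",
          round(time.perf_counter() - start, 2), "s")  # orders of magnitude slower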

There is also another purpose to this exercise. It’s not obvious where the difficulty of these unsolved problems lies. At first sight, almost anyone presented with such a problem will come up with a solution of sorts. It’s not until much later, and with much mental effort, that one notices the error of their ways. It might be a hidden complexity, introduced into the algorithm obliviously, that negated the gains scored elsewhere. It might be that the obvious, straightforward solution misses a very simple, but equally crucial, case. Or the solution may simply be broken in one way or another. Either way, it’s not enough to know about these problems and their status and move on. There is much to be learned from trying to solve unsolvable problems.
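
A hidden complexity of exactly that kind, in a Python sketch of my own: the loop below reads as linear, but the membership test buried inside it is itself a linear scan.

    def unique_slow(items):
        seen = []
        for x in items:
            if x not in seen:   # a linear scan hiding inside a "linear" loop: O(n^2)
                seen.append(x)
        return seen

    def unique_fast(items):
        seen, out = set(), []
        for x in items:
            if x not in seen:   # average O(1) hash lookup: the loop is O(n) again
                seen.add(x)
                out.append(x)
        return out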

The Impossibles

But why stop there? Why stop at the travelling salesman or a variation of the knapsack problem? While we’re at it, let’s introduce noncomputability and noncomputable functions.

I’m sure these topics are well studied in some grad schools. But the average CS undergrad would probably firmly believe that the fastest-growing functions are the exponentials. (I have yet to get a different answer in an interview.) Whatever happened to Busy Beavers? A noncomputable function can actually grow faster than any computable function! Now try to beat that in the department of slow algorithms.
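
For a taste of what lies beyond the exponentials without leaving computable territory, there is the Ackermann function, which outgrows every primitive recursive function, exponentials included; the Busy Beaver numbers outgrow even that, and every other computable function. A minimal sketch:

    import sys

    sys.setrecursionlimit(100000)  # the naive recursion gets deep quickly

    def ackermann(m, n):
        if m == 0:
            return n + 1
        if n == 0:
            return ackermann(m - 1, 1)
        return ackermann(m - 1, ackermann(m, n - 1))

    for m in range(4):
        print(m, [ackermann(m, n) for n in range(5)])
    # m=3 is already exponential: ackermann(3, n) = 2**(n+3) - 3.
    # ackermann(4, 2) has 19,729 digits; don't wait for this recursion to find that out.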

Conclusion

I think it would be a great service to our industry if, every once in a while, schools assigned an unsolvable problem. Send the students home with a travelling salesman problem or two and ask them to bring in the numbers the next week. It’d prove much more instructive to see the 5-city problem solved in seconds while the 25-city…

And who knows, we might well get a Dantzig moment! (Dantzig famously solved two open problems after mistaking them for homework; incidentally, he’s also the inventor of the Simplex method, the single most useful algorithm in optimization.)
