An interesting interview with Professor Donald Knuth (via slashdot, where the slashdotter writes: “[Knuth] pitches his idea of ‘literate programming’ which I must admit I’ve never heard of but find it intriguing” — really? never “heard” of that? I think it was one of the first things in programming I “heard” of… anyway…). Knuth makes some interesting comments about various topics, including TeX, which continues to be nearly irreplaceable in some areas (academic papers in particular), and then drops a bomb:
I might as well flame a bit about my personal unhappiness with the current trend toward multicore architecture. To me, it looks more or less like the hardware designers have run out of ideas, and that they’re trying to pass the blame for the future demise of Moore’s Law to the software writers by giving us machines that work faster only on a few key benchmarks! I won’t be surprised at all if the whole multithreading idea turns out to be a flop, worse than the “Titanium” approach that was supposed to be so terrific–until it turned out that the wished-for compilers were basically impossible to write.
Let me put it this way: During the past 50 years, I’ve written well over a thousand programs, many of which have substantial size. I can’t think of even five of those programs that would have been enhanced noticeably by parallelism or multithreading. Surely, for example, multiple processors are no help to TeX.
How many programmers do you know who are enthusiastic about these promised machines of the future?
I hear almost nothing but grief from software people, although the hardware folks in our department assure me that I’m wrong.
I know that important applications for parallelism exist–rendering graphics, breaking codes, scanning images, simulating physical and biological processes, etc. But all these applications require dedicated code and special-purpose […]
First, his mistake of calling Itanium “Titanium” is a clue here. He hasn’t kept up with the field. He’s mixing up VLIW, which is what Itanium was based on, with multicore. VLIW is really designed around superpipelining, self-draining pipelines, and other intra-processor features, and it may have been left in the dust by Moore’s Law and the increased use of virtual machines, just as RISC has largely been sidelined. Additionally, many VLIW ideas have found their way into CISC processors. In any case, VLIW is about optimizing flow within a single processor, not across many separate processors.
Second, his statement
Surely, for example, multiple processors are no help to TeX.
is just plain wrong. TeX works by building up pages from text blocks. A character is a block, which then gets built up into a word, then into a sentence, then into paragraphs. Paragraphs are then put together into pages. There would be a significant advantage to processing these units on different processors and then assembling the results afterwards. Granted, the sentences and paragraphs are related to each other and one can affect another, but that only means it would be harder to write a multithreaded TeX, not that multithreading would be no help at all.
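To make the idea concrete, here’s a minimal sketch of that “process the units separately, assemble later” pattern. This is not TeX’s actual algorithm — `layoutParagraph` is a hypothetical placeholder for the expensive per-paragraph work — but it shows how independent units could be farmed out to a thread pool and still reassembled in their original order:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelLayout {
    // Hypothetical stand-in for a per-paragraph layout pass
    // (line breaking, hyphenation, etc.). Here it just trims
    // and upper-cases to simulate some work.
    static String layoutParagraph(String paragraph) {
        return paragraph.trim().toUpperCase();
    }

    // Lay out each paragraph on a pool thread, then join the
    // results back together in the original document order.
    static String layoutPage(List<String> paragraphs) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(
                Runtime.getRuntime().availableProcessors());
        try {
            List<Future<String>> futures = new ArrayList<>();
            for (String p : paragraphs) {
                futures.add(pool.submit(() -> layoutParagraph(p)));
            }
            StringBuilder page = new StringBuilder();
            for (Future<String> f : futures) {
                page.append(f.get()).append('\n'); // preserves order
            }
            return page.toString();
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.print(layoutPage(
                List.of(" first paragraph ", "second paragraph")));
    }
}
```

The interesting part is the join: the `Future`s are collected in submission order, so the page comes out right even though the paragraphs were laid out concurrently. The hard part in real TeX would be the cross-unit dependencies mentioned above, which this sketch deliberately ignores.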
“I hear almost nothing but grief from software people,” Knuth says, but this is just because we haven’t yet found clean, effective methods to write, test, and debug multithreaded software at large scale. That doesn’t mean multithreading is bad; it just means we haven’t figured out how to use it properly. And in the worst case, you can always write your software as services connected through a thin layer of communication (based on anything from IPC to RMI to REST) that can then run as multiple processes, happily, on a multicore machine.
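Here’s a toy sketch of that services-plus-thin-communication-layer idea. In a real deployment the two ends would be separate processes (or separate machines); to keep the example self-contained, both ends run in one JVM, talking over a plain localhost socket. The upper-casing “service logic” is a made-up placeholder:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.io.UncheckedIOException;
import java.net.ServerSocket;
import java.net.Socket;

public class ThinLayerDemo {
    // Start a one-shot "service" and send it a single request over a
    // socket. The socket is the thin communication layer; the service
    // itself knows nothing about who is calling it.
    static String roundTrip(String request) throws Exception {
        ServerSocket server = new ServerSocket(0); // ephemeral port
        Thread service = new Thread(() -> {
            try (Socket s = server.accept();
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(s.getInputStream()));
                 PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                // Placeholder "service logic": upper-case the request.
                out.println(in.readLine().toUpperCase());
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        });
        service.start();
        try (Socket s = new Socket("127.0.0.1", server.getLocalPort());
             PrintWriter out = new PrintWriter(s.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(s.getInputStream()))) {
            out.println(request);
            return in.readLine();
        } finally {
            service.join();
            server.close();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip("hello multicore"));
    }
}
```

Because the two ends only share a wire protocol, the operating system is free to schedule them on different cores — no shared-memory threading headaches required.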
He also rues that literate programming hasn’t been embraced by the millions, but this is also only partly true. Java and other languages have been borrowing Literate Programming concepts for years — documentation living alongside the code it describes — and it has been a major productivity advantage when combined with good IDEs like IDEA, Netbeans, or Eclipse.
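A small example of what I mean — this isn’t Knuth’s full WEB-style tangle/weave, just the watered-down descendant that did catch on: Javadoc prose sitting next to the code, woven into browsable documentation by the javadoc tool and surfaced inline by the IDEs mentioned above. The `Euclid` class here is my own illustrative example, not from the interview:

```java
/**
 * A literate-programming-lite example: the explanatory prose lives
 * right next to the code, and standard tooling (javadoc, IDE hovers)
 * presents the two together.
 */
public class Euclid {
    /**
     * Computes the greatest common divisor by Euclid's algorithm:
     * repeatedly replace the pair (a, b) with (b, a mod b) until
     * the second component is zero; the first is then the gcd.
     *
     * @param a a non-negative integer
     * @param b a non-negative integer
     * @return the greatest common divisor of {@code a} and {@code b}
     */
    public static int gcd(int a, int b) {
        while (b != 0) {
            int r = a % b;
            a = b;
            b = r;
        }
        return a;
    }

    public static void main(String[] args) {
        System.out.println(gcd(252, 105)); // prints 21
    }
}
```

It’s a far cry from WEB’s ideal of writing the program as an essay, but the core idea — documentation and code as one artifact — is quietly mainstream.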
Not that this invalidates anything that Knuth has done. 🙂 The TeXbook and The Art of Computer Programming are still gems that I end up perusing frequently. They may not be multicore-enabled, but they’re still as relevant as ever.