The Faint Signals of Concurrency

We’ve been looking at the world of concurrent programming lately. You might have seen Tim’s posts on Erlang and Haskell, or my post on alternative systems to threading. Here, in a nutshell, is why we’re interested in this stuff and what we see.

  1. CPUs aren’t really getting faster. The CPUs in your brand new machine run at much the same clock speed as they did two years ago. The free lunch from Moore’s Law is over: the extra transistors now show up as more cores, not faster ones. Sure, someday there’ll be a breakthrough in the speed of light or subnano manufacturing or we’ll finally get a 64-qubit Intel Quarkium, but until then we’re stuck with more slow CPUs in each new box. The only way to get faster execution is to parallelize your code.
  2. Google’s been doing it. An 8-CPU machine is analogous to an 8-machine cluster. Google have been building clusters with hundreds of thousands of machines because their industry has an embarrassment of parallel problems to solve. We’re big believers in watching the alpha geeks, and it’s hard to get more alpha or geekier than Google. For this reason we’re watching MapReduce and Hadoop. Other early leaders in this area are the big data sciences (astronomy, genomics/proteomics, nuclear physics), each with different problems (image processing, sequence alignment, 3D alignment, detailed simulation).
  3. Barrier to entry is coming down. Your PlayStation 3 can kick ass at protein folding, and for $13 you can buy the Parallax Propeller, a chip with 8 cores in it to slap onto a board and play with. And, of course, off-the-shelf commodity PC hardware has never been cheaper. Hell, your video card’s GPU has a ton of processing elements, although every vendor has its own interface to the goodies.
  4. It’s a hard problem. If it were easy, it wouldn’t be a problem. Computer science has been busy for years developing techniques for clusters and for multicore machines, figuring out how to distribute computation across multiple processors.
  5. The alpha geeks have their hands in it. Before, you needed a high-budget lab to work on these theoretical problems. Now you can buy your lab at Wal-Mart, the problems being solved are your problems, and the programming environment is familiar (Perl, Ruby, C) or worth learning (Erlang, Haskell’s STM). When Audrey Tang went to develop a Perl 6 interpreter, she did it in Haskell. The Pragmatic Programmers are releasing an Erlang book and we’re doing one on Haskell.
  6. Did I mention it’s hard? The mainstream systems for concurrency are baby talk, coding with crayons and fingerpaint. Threading is a nightmare for those who use it: if 10% of programmers can really code well without bugs, then only 1% of those can do so in a threaded environment. This pain (and the alternatives to threading that people are investigating, like the Haskell STM sketched after this list) is on our radar.

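To make “alternatives to threading” a little more concrete, here’s a minimal sketch of Haskell’s STM, assuming the standard Control.Concurrent.STM module from the stm package; the account names and amounts are made up for illustration. The point is that the transfer either commits as a whole or retries, with no locks to acquire in the right order and no deadlock to reason about.

```haskell
-- A toy bank transfer using Haskell's software transactional memory (STM).
-- Each account is a transactional variable; the transfer runs atomically.
import Control.Concurrent.STM

type Account = TVar Int

-- Move `amount` from one account to another inside a single transaction.
transfer :: Account -> Account -> Int -> STM ()
transfer from to amount = do
  fromBal <- readTVar from
  check (fromBal >= amount)         -- block and retry until funds are available
  writeTVar from (fromBal - amount)
  toBal <- readTVar to
  writeTVar to (toBal + amount)

main :: IO ()
main = do
  alice <- newTVarIO 100
  bob   <- newTVarIO 0
  atomically (transfer alice bob 40)     -- the whole transfer commits or retries
  atomically (readTVar alice) >>= print  -- 60
  atomically (readTVar bob)   >>= print  -- 40
```

Composability is the draw: two transfers can be glued into one atomic action just by sequencing them inside a single `atomically`, something that’s notoriously hard to get right with locks.
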
My friend Bryan O’Sullivan is one of the alpha geeks we’re watching (he’s a co-author of our Haskell book). I pinged him about concurrency and he had this to say:

What’s going to really fuck people up, out in the real world, is the programming model. Programming is hard; parallel programming is way the hell harder; compsci courses have turned into votech Java pap; and enrollments in compsci are in any case as lively as the waiting list for the Lusitania the week after it was torpedoed. People want their programming to be easier and more casual, and they’re about to have it jammed into their eyesockets on bamboo stakes instead.

I wrote this post in March but for some reason failed to post it. In light of recent events it still seems timely. I’d love to know what you think. Is concurrency on your radar? Are you perfectly happy writing threaded code? Are you perfectly happy writing code without any concurrency whatsoever? Tell Unca Nat all …
