Tuesday, January 23, 2007

Level-triggered and edge-triggered

More years ago than I care to admit, I took a series of classes in digital logic as part of my undergraduate curriculum. By the end of the most advanced of these classes, we were writing microcode to add new instructions to a real microprocessor and debugging said microcode with logic analyzers. Most of that experience has faded into the dim recesses of memory, but one tidbit has stayed fresh because it comes up and bites me in the butt every few years. That one tidbit is the distinction between level-triggered logic and edge-triggered logic.

Edge-triggered things happen when some stimulus changes; level-triggered things happen while some stimulus sits above or below a threshold. In the case of digital circuits, the stimulus is the voltage on a wire -- edge-triggered things happen when the voltage changes, level-triggered things happen while the voltage is above or below some preset amount. As a more mundane example, consider your telephone. It rings when someone dials your number (edge-triggered) and has an indicator to let you know when there's at least one message waiting (level-triggered). Equally mundanely, consider a traffic light. You stay stopped as long as it's red (level-triggered). You go when the jerk behind you honks his horn because the light's green and you aren't moving (edge-triggered).
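Here's the same contrast in a few lines of Java. The message-light class and its listener are made up purely for illustration: the listener only hears about a call at the instant it arrives (the edge), while anyone can check the light at any time to see whether a message is waiting (the level).

```java
import java.util.ArrayList;
import java.util.List;

public class TriggerDemo {

    /** Edge-triggered: subscribers are told at the moment something changes. */
    interface MessageListener {
        void messageArrived(String message);
    }

    /** Hypothetical answering machine with a ring (edge) and a light (level). */
    static class MessageLight {
        private final List<MessageListener> listeners = new ArrayList<MessageListener>();
        private final List<String> waiting = new ArrayList<String>();

        void addListener(MessageListener l) { listeners.add(l); }

        void deliver(String message) {
            waiting.add(message);
            // The edge: fires exactly once, at the moment the stimulus changes.
            for (MessageListener l : listeners) {
                l.messageArrived(message);
            }
        }

        /** The level: "is there at least one message?" can be asked at any time. */
        boolean hasWaitingMessages() {
            return !waiting.isEmpty();
        }
    }

    public static void main(String[] args) {
        MessageLight phone = new MessageLight();
        phone.addListener(new MessageListener() {
            public void messageArrived(String message) {
                System.out.println("ring! " + message);   // edge-triggered
            }
        });
        phone.deliver("call me back");

        // Miss the ring and you can still see that the light is on.
        if (phone.hasWaitingMessages()) {
            System.out.println("message light is on");    // level-triggered
        }
    }
}
```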

But I don't design digital circuits anymore -- I write software. What do edge-triggered and level-triggered have to do with software?

Most simple programs are level-triggered. Their execution depends on the inputs they are given and those inputs tend not to change during execution of the program. Think "hello world" and most command-line utilities. As complexity increases, however, programs tend to take on more edge-triggered behavior. Complex applications tend to be decomposed into simpler components that communicate with one another using paradigms like producer/consumer or event source/subscriber.
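To sketch what I mean by producer/consumer -- nothing more than a toy with two threads and a queue, all names invented -- notice that the consumer does nothing until something new shows up, which is edge-triggered behavior in miniature:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ProducerConsumerDemo {
    public static void main(String[] args) throws InterruptedException {
        final BlockingQueue<String> queue = new ArrayBlockingQueue<String>(16);

        Thread producer = new Thread(new Runnable() {
            public void run() {
                try {
                    for (int i = 0; i < 5; i++) {
                        queue.put("event-" + i);    // each put is an "edge"
                    }
                    queue.put("DONE");
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });

        Thread consumer = new Thread(new Runnable() {
            public void run() {
                try {
                    while (true) {
                        String item = queue.take(); // reacts only when something arrives
                        if ("DONE".equals(item)) break;
                        System.out.println("handled " + item);
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });

        producer.start();
        consumer.start();
        producer.join();
        consumer.join();
    }
}
```

Within one process this works beautifully -- which is exactly the trap, as we'll see below.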

Most of the software I have worked on in my career falls under the broad label "distributed". That is, it is part of a system that spans multiple machines connected over a network. Over and over (and over) again I have watched people (sometimes myself) try to extend edge-triggered programming paradigms to distributed systems. And almost without fail, every one of those attempts has either failed outright or resulted in complex, sloppy, error-prone code. After an especially painful experience with this 6-7 years ago, I did some thinking about why edge-triggered programming tends not to work so well in distributed systems.

In a distributed system, single points of failure (SPOFs) are anathema. Customers hate them and will generally give you hell if your system has any, because they know from experience that any SPOF *WILL* eventually cause unplanned outages. Why? Simple statistics -- as the number of moving parts in your system increases, the chance that all of them are functioning correctly at any given time approaches zero.

Edge-triggered paradigms work marvelously within a single process. Within a single process events don't get lost. It doesn't matter if the event subscriber dies, because the event source will die along with it. In a distributed system, there are a million and one ways for components on different machines to get out of synch: networks get congested or fail, processes terminate, hard drives crash, storms cause power outages, etc. When bad things inevitably happen, how do you get your edge-triggered distributed system synchronized again? You guessed it -- you drop back to a level-triggered mechanism that reconciles discrepancies in state. So if you have to build level-triggered mechanisms as a fallback anyway, why bother with edge-triggered mechanisms in the first place?
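If I had to sketch that level-triggered fallback in Java -- the names and the shape of the state here are hypothetical -- it would be little more than a loop that compares desired state against actual state and fixes whatever disagrees. Note that it never has to know *why* they disagree:

```java
import java.util.HashMap;
import java.util.Map;

public class Reconciler {

    /** What this node should have, keyed by item name -> version. */
    private final Map<String, Integer> desired = new HashMap<String, Integer>();
    /** What this node actually has. */
    private final Map<String, Integer> actual = new HashMap<String, Integer>();

    void reconcile() {
        for (Map.Entry<String, Integer> want : desired.entrySet()) {
            Integer have = actual.get(want.getKey());
            if (have == null || !have.equals(want.getValue())) {
                // It doesn't matter how we got out of synch -- lost message,
                // crashed process, network partition -- the fix is the same.
                apply(want.getKey(), want.getValue());
            }
        }
        // Anything present locally but absent from the desired state goes away.
        actual.keySet().retainAll(desired.keySet());
    }

    private void apply(String key, int version) {
        System.out.println("bringing " + key + " up to version " + version);
        actual.put(key, version);
    }

    public static void main(String[] args) {
        Reconciler r = new Reconciler();
        r.desired.put("partition-A", 3);
        r.actual.put("partition-A", 2);   // stale copy, however that happened
        r.reconcile();                    // converges regardless of what was missed
    }
}
```

Run that on a timer (or whenever a node reconnects) and no lost notification can hurt you for longer than one reconciliation interval.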

Let me try to make this more concrete with a real example from a past project. We were building a configuration datastore for a large distributed system (100s of nodes initially, 1000s eventually). For various reasons, we chose an early draft of the java.util.prefs Preferences API (this was before it was finalized). Preferences uses a fairly typical Java event/listener model to notify consumers of changes to the data. We had a version of the API up and running on top of local files in a few weeks, and the other teams started building on top of it.

Then we tried to make our code run in a distributed system. The first design was to extend the event listener mechanism and generate messages to remote nodes whenever data changed. What should we do for nodes that are down? We couldn't queue the messages indefinitely -- that requires unbounded storage somewhere in the system. Maybe we queue messages for a certain amount of time, then give up and just say that if a node comes back up after too long an absence, we toss whatever configuration data it had and transfer a new set. Transferring the whole set of configuration data could take a while, so we added some simple partitioning and timestamp checking to pare down the amount of data transferred. At this point the design just rubbed me the wrong way -- it seemed WAY too complex (as indeed it was). We had built a pretty solid level-triggered distribution mechanism (partitioning, timestamp checking, transfer of out-of-date partitions) and layered a flaky edge-triggered mechanism (messages) on top of it. Once I realized this, the right design was obvious and we tossed the messaging.
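For the curious, the timestamp checking boiled down to something like this sketch -- the partition names are invented and the real thing sat behind an RPC layer with error handling I'm leaving out. A recovering node asks the authoritative copy what partitions it has and how fresh each one is, then pulls only what is stale or missing:

```java
import java.util.HashMap;
import java.util.Map;

public class PartitionSync {

    /** Last-modified timestamp for each locally held partition of config data. */
    static Map<String, Long> localTimestamps = new HashMap<String, Long>();

    /** Compare against the authoritative copy; return the partitions to refetch. */
    static Map<String, Long> staleParts(Map<String, Long> remoteTimestamps) {
        Map<String, Long> stale = new HashMap<String, Long>();
        for (Map.Entry<String, Long> remote : remoteTimestamps.entrySet()) {
            Long local = localTimestamps.get(remote.getKey());
            if (local == null || local.longValue() < remote.getValue().longValue()) {
                stale.put(remote.getKey(), remote.getValue());
            }
        }
        return stale;
    }

    public static void main(String[] args) {
        localTimestamps.put("network", 100L);
        localTimestamps.put("storage", 250L);

        Map<String, Long> remote = new HashMap<String, Long>();
        remote.put("network", 100L);    // unchanged -- skip it
        remote.put("storage", 300L);    // newer -- transfer this partition
        remote.put("security", 50L);    // missing locally -- transfer this partition

        System.out.println("partitions to transfer: " + staleParts(remote).keySet());
    }
}
```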

Having witnessed variations of this over many years of software development, I have come up with:

Mike's First Maxim of Distributed Systems
In a distributed system you should move state information, not work items.

Corollary to Mike's First Maxim of Distributed Systems
It's fine to centralize data, but avoid centralized decision making whenever possible. Instead, distribute information about the desired state and let each node determine how best to get to that state.
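To make the maxim concrete with a toy example (both message shapes here are hypothetical): a work item only produces the right answer if it is delivered exactly once, while a statement of desired state is idempotent -- deliver it once, twice, or only its latest copy, and every node ends up in the same place.

```java
public class StateVsWork {

    static int value = 7;

    /** Edge-triggered work item: correctness depends on exactly-once delivery. */
    static void applyWorkItem(int delta) {
        value += delta;
    }

    /** Level-triggered state message: idempotent, so retries and gaps are harmless. */
    static void applyDesiredState(int desired) {
        value = desired;
    }

    public static void main(String[] args) {
        applyDesiredState(12);
        applyDesiredState(12);      // duplicate delivery changes nothing
        System.out.println(value);  // 12

        applyWorkItem(5);
        applyWorkItem(5);           // duplicate delivery corrupts the result
        System.out.println(value);  // 22, not 17
    }
}
```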

I'm most definitely not asserting that edge-triggered approaches are NEVER correct in distributed systems, but hopefully this little composition will encourage you to think about what you're doing before you make a mess that I'm going to have to live with some day.