The Feedback Principle

Gracefully maintain a desired value in the presence of uncertainty and change

In a previous post, we introduced the basic feedback concept. Now it is time to take a closer look at this idea.

Feedback is a method to keep systems on track. In other words, feedback is a way to make sure a system behaves in the desired fashion. If we have some quality-of-service metric in mind, then feedback is a reliable method to ensure that our system will achieve and maintain the desired value of this metric, even in the presence of uncertainty and change.

Let’s consider an example, such as a cache (web cache, DB cache, doesn’t matter). As with any cache, the quality-of-service metric that we care about is the hit rate: the fraction of cache requests that can be fulfilled from items in the cache. The specific value for the hit rate that we want to maintain will depend on external conditions – for the sake of the argument, let’s say that we are shooting for an 85 percent hit rate. This value is called the reference value or “setpoint”.

Now we must consider what parameters we can adjust to achieve that goal. For a cache that’s pretty simple: it’s the cache size, that is, the total number of items that can be kept in the cache. (Why the cache size? Because that’s pretty much the only thing we can adjust about a cache.)

We also must understand in which direction we have to adjust this input parameter (namely the cache size) in order to increase the observed output quantity (in the present case, the hit rate). For the cache example, this is again very simple: increasing the cache size increases the hit rate.

Loop Diagram

Now consider the “closed-loop” system shown in the Figure. The cache that we try to control is represented by the red box towards the right. The cache is subject to incoming requests (signified by the green, wavy line), resulting in an average hit rate. The observed hit rate is now “looped back” and compared to the desired hit rate, which is supplied from the left. This leads to the tracking error, which is the difference between the desired and the actual hit rate:

tracking error = desired hit rate − actual hit rate

The tracking error is fed to the “controller”. The job of the controller is to calculate a new cache size that will lead to a reduced tracking error. Basically, if the hit rate is too low, the controller will increase the cache size; if the hit rate is too high, the controller will shrink the cache. Once the new cache size has been set, the hit rate will adjust. And then we will compare it to the desired hit rate, calculate a new value for the tracking error, and the controller will determine the new cache size. And on, and on, and on, in perpetuity.
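This loop can be sketched in a few lines of code. The cache model below (`hit_rate`) is a made-up stand-in, not a real cache: any function where the hit rate grows with cache size would do. The gain `k` and the starting size are illustrative assumptions as well; the point is only the shape of the loop, which accumulates corrections proportional to the tracking error.

```python
def hit_rate(cache_size):
    # Hypothetical cache model: hit rate rises with size, saturating at 1.0.
    return cache_size / (cache_size + 100.0)

def run_loop(setpoint, k=500.0, steps=50):
    """Repeatedly measure, compute the tracking error, and adjust the size."""
    cache_size = 10.0
    for _ in range(steps):
        actual = hit_rate(cache_size)      # observe the output
        error = setpoint - actual          # tracking error: desired - actual
        cache_size += k * error            # controller: accumulate corrections
        cache_size = max(cache_size, 1.0)  # a cache size can't drop below one
    return cache_size, hit_rate(cache_size)

size, rate = run_loop(0.85)
```

With this toy model, `rate` settles close to the 85 percent setpoint; notice that the loop never needs to know the formula inside `hit_rate`, only which way to push.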

That’s the basic idea, but that’s not all there is to it. A naive application of the ideas so far may lead to very unsatisfactory results.

The reason is that we also need to know by how much the controller has to change the cache size in order to bring about a change of a given magnitude in the hit rate, and how quickly any change in the cache size will show up in the hit rate. We will usually need to run an experiment to obtain this information. For instance, we could change the cache size suddenly by a known amount, and then observe how quickly the hit rate stabilizes again, and at what value. This information can then be used to improve or “tune” the controller. Controller design and tuning will be topics of later posts, but next, we will explore why feedback is of interest to programmers in the first place. Stay tuned!
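The step experiment described above can also be sketched. Everything here is an assumption for illustration: the steady-state model, the lag factor `alpha`, and the step sizes stand in for a real cache whose response we would have to measure.

```python
def steady_hit_rate(cache_size):
    # Hypothetical steady-state hit rate for a given cache size.
    return cache_size / (cache_size + 100.0)

def step_test(old_size=200.0, new_size=300.0, alpha=0.3, steps=30):
    """Step the cache size and record how the hit rate settles over time."""
    rate = steady_hit_rate(old_size)  # begin in steady state at the old size
    trace = []
    for _ in range(steps):
        # First-order lag: each period the observed rate closes part of
        # the remaining gap to its new steady-state value.
        rate += alpha * (steady_hit_rate(new_size) - rate)
        trace.append(rate)
    return trace

trace = step_test()
```

From such a trace we can read off the two quantities the controller needs: how far the hit rate eventually moves for a given change in size, and how many periods it takes to settle.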
