The third (and final, at the moment) paper in the likelihood calculus series from Dr. Terrence Sanger is Distributed control of uncertain systems using superpositions of linear operators. Carrying the torch for the series right along, here Dr. Sanger continues investigating the development of an effective, general method of controlling systems operating under uncertainty. This is the paper that delivers on all the promises of building a controller out of a system described by the stochastic differential operators we've been learning about in the previous papers. In addition to describing the theory, the paper presents examples of system simulation with code provided! Which is a wonderful, and sadly uncommon, thing in academic papers, so I'm excited. We'll go through a comparison of Bayes' rule and Markov processes (described by our stochastic differential equations), go quickly over the stochastic differential operator description, and then dive into the control of systems. The examples and code run-through I'm going to have to save for another post, though, just to keep the size of this post reasonable.
The form our system model equation will take is the very general
$dx = f(x)\,dt + g(x, u)\,dt + h(x, u)\,dB$,
where $f(x)$ represents the environment dynamics, previously also referred to as the unforced or passive dynamics of the system, $g(x, u)$ describes how the control signal $u$ affects the system state $x$, $h(x, u)$ describes the state- and control-dependent additive noise, and $B$ is a set of independent noise processes such as Brownian motion. Our control goal is to find a set of functions of time, $u_i(t)$, such that the dynamics of our system behave according to a desired set of dynamics that we have specified.
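To make the pieces concrete, here's a minimal Euler-Maruyama sketch of this kind of model in Python (my own toy example, not from the paper): the particular $f$, $g$, $h$, and the constant control signal $u$ below are arbitrary choices, just to show how the terms fit together.

```python
import numpy as np

# Toy choices (my own, not from the paper) for the terms of
# dx = f(x) dt + g(x, u) dt + h(x, u) dB, integrated with Euler-Maruyama.
f = lambda x: -x                  # passive dynamics: decay toward 0
g = lambda x, u: u * (1.0 - x)    # control signal pulls the state toward 1
h = lambda x, u: 0.1              # simple constant additive noise

dt, T = 1e-3, 5.0
u = 0.5                           # hold the control signal fixed for this sketch
x = 0.0
trajectory = []
for _ in range(int(T / dt)):
    dB = np.sqrt(dt) * np.random.randn()   # increment of unit-variance Brownian motion
    x += f(x) * dt + g(x, u) * dt + h(x, u) * dB
    trajectory.append(x)
```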
In prevalent methods in control theory, uncertainty is a difficult problem, and often can only be handled effectively with a number of simplifications, such as linear systems with additive Gaussian noise. In biological systems that we want to model, however, uncertainty is ubiquitous. There is noise on outgoing and incoming signals, there are unobserved controllers simultaneously exerting influence over the body, there are complicated and often unmodeled, highly non-linear dynamics of the system and its interactions with the environment, etc. In the brain especially, the effect of unobserved controllers is a particular problem. Multiple areas of the brain will simultaneously be sending out control signals to the body, and the results of these signals tend to be only partially known, or known only after a delay, to the other areas. So for modeling, we need an effective means of controlling distributed systems operating under uncertainty. And that's exactly what this paper presents: 'a mathematical framework that allows modeling of the effect of actions deep within a system on the stochastic behaviour of the global system.' Importantly, the stochastic differential operators that Dr. Sanger uses to do this are linear operators, which opens up a whole world of linear methods to us.
Bayes’ rule and Markov processes
Bayesian estimation is often used in sensory processing to model the effects of state uncertainty, combining prior knowledge of state with a measurement update. Because we’re dealing with modeling various types of system uncertainty, it seems like a good idea to consider Bayes’ rule. Here, Dr. Sanger shows that Bayes’ rule is in fact insufficient in this application, and Markov processes must be used. There are a number of neat insights that come from this comparison, so it’s worth going through.
Let’s start by writing Bayes’ rule and Markov processes using matrix equations. Bayes’ rule is the familiar equation
$p(x|z) = \frac{p(z|x)\,p(x)}{p(z)}$,
where $p$ represents our probability density or distribution, so $p(z|x)$ is the likelihood of the measurement $z$ given the state $x$, and $p(z)$ is a normalizing factor. This equation maps a prior density $p(x)$ to the posterior density $p(x|z)$. Essentially, this tells us the effect a given measurement of $z$ has on the probability of being in state $x$. To write this in matrix notation, we assume that $x$ takes on a finite number of states, so $p(x)$ is a vector, which gives
$p_{x|z} = B\, p_x$,
where $p_x$ and $p_{x|z}$ are the prior and posterior distributions, respectively, and $B$ is a diagonal matrix with elements $B_{ii} = p(z|x_i)/p(z)$.
Now, the matrix equation for a discrete-time, finite-state Markov process is written
$p_{t+1} = A\, p_t$.
So where in Bayes' rule the matrix $B$ (our linear operator) maps the prior distribution into the posterior distribution, in Markov processes the linear operator $A$ maps the probability density of the state at time $t$ into the density at time $t+1$. The differences come in through the form of the linear operators. The major difference is that $B$ is a diagonal matrix, while there is no such restriction on $A$. The implication of this is that in Bayes' rule the update of a given state depends only on the prior likelihood of being in that state, whereas in Markov processes the likelihood of a given state at time $t+1$ can depend on the probability of being in other states at time $t$. The off-diagonal elements of $A$ allow us to represent the probability of state transitions, which is critical for capturing the behavior of dynamical systems. This is the reason why Bayes' method is insufficient for our current application.
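As a quick sanity check on that distinction, here's a toy numpy example (the numbers are made up, not from the paper): the Bayes update is a diagonal rescaling of the prior, while the off-diagonal entries of the Markov transition matrix actually move probability between states.

```python
import numpy as np

# A toy 3-state example.
p_prior = np.array([0.5, 0.3, 0.2])           # p(x): prior over states

# Bayes' rule as a matrix: B is diagonal, with p(z|x_i) / p(z) on the diagonal.
p_z_given_x = np.array([0.9, 0.2, 0.1])       # likelihood of a measurement z in each state
p_z = p_z_given_x @ p_prior                   # normalizing factor p(z)
B = np.diag(p_z_given_x / p_z)
p_posterior = B @ p_prior                     # p(x|z): each state is rescaled independently
print(p_posterior, p_posterior.sum())         # still sums to 1

# A Markov process as a matrix: column i of A gives p(x_{t+1} | x_t = i),
# so off-diagonal entries move probability *between* states.
A = np.array([[0.8, 0.3, 0.0],
              [0.2, 0.6, 0.5],
              [0.0, 0.1, 0.5]])
p_next = A @ p_prior                          # density at time t+1
print(p_next, p_next.sum())                   # columns of A sum to 1, so this also sums to 1
```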
Stochastic differential operators
This derivation of stochastic differential operators is by now relatively familiar ground, so I'll be quick here. We start with a stochastic differential equation
$dx = f(x)\,dt + g(x)\,dB$,
where $dB$ is our noise source, the differential of unit-variance Brownian motion. The noise term introduces randomness into the state variable $x$, so we describe $x$ with a probability density $p(x,t)$ that evolves through time. This change of the probability density through time is captured by the Fokker-Planck partial differential equation
$\frac{\partial}{\partial t} p(x,t) = -\frac{\partial}{\partial x}\left[ f(x)\, p(x,t) \right] + \frac{1}{2} \frac{\partial^2}{\partial x^2}\left[ g^2(x)\, p(x,t) \right]$,
which can be rewritten as
$\dot{p}(x,t) = \mathcal{L}\, p(x,t)$,
where $\mathcal{L}$ is the linear operator
$\mathcal{L} = -\frac{\partial}{\partial x} f(x) + \frac{1}{2} \frac{\partial^2}{\partial x^2} g^2(x)$.
$\mathcal{L}$ is referred to as our stochastic differential operator. Because the Fokker-Planck equation is proven to preserve probability densities (non-negativity and summation to 1), applying $\mathcal{L}$ to update our density will maintain its validity.
What is so exciting about these operators is that even though they describe nonlinear systems, they themselves are linear operators. What this means is that if we have two independent components of a system that affect its dynamics, described by $\mathcal{L}_1$ and $\mathcal{L}_2$, we can determine their combined effect on the overall system dynamics through a simple summation, i.e. $\dot{p} = (\mathcal{L}_1 + \mathcal{L}_2)\, p$.
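As an illustration (my own sketch, not the paper's code), we can discretize $\mathcal{L}$ on a grid with finite differences and see the superposition property directly. The drift and diffusion functions below are arbitrary toy choices, and the crude forward-Euler update ignores boundary conditions, so this only shows the structure of the idea.

```python
import numpy as np

def stochastic_operator(f, g, x):
    """Finite-difference approximation of L, where L p = -d/dx (f p) + 1/2 d^2/dx^2 (g^2 p)."""
    n, dx = len(x), x[1] - x[0]
    D1 = (np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)) / (2 * dx)   # d/dx
    D2 = (np.diag(np.ones(n - 1), 1) - 2 * np.eye(n)
          + np.diag(np.ones(n - 1), -1)) / dx**2                                 # d^2/dx^2
    return -D1 @ np.diag(f(x)) + 0.5 * D2 @ np.diag(g(x)**2)

x = np.linspace(-2, 2, 200)
L1 = stochastic_operator(lambda x: -x, lambda x: 0.2 * np.ones_like(x), x)        # drift toward 0
L2 = stochastic_operator(lambda x: 1.0 - x, lambda x: 0.2 * np.ones_like(x), x)   # drift toward 1

# Linearity: the combined effect of the two components is just the sum of their operators.
L_combined = L1 + L2

# Evolve a density under the combined dynamics with a simple forward-Euler step.
dx = x[1] - x[0]
p = np.exp(-(x + 1.0)**2 / 0.1)
p /= p.sum() * dx
dt = 1e-4
for _ in range(2000):
    p = p + dt * (L_combined @ p)
```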
Controlling with stochastic differential operators
Last time, we saw that control can be introduced by attaching a weighting term to the superposition of controller dynamics, giving
$\dot{p} = \left( \sum_i u_i(t)\, \mathcal{L}_i \right) p$,
where $\mathcal{L}_i$ is the stochastic differential operator of controller $i$, and $u_i(t)$ is the input control signal to that controller. In the context of a neural system, this equation describes a set of subsystems whose weighted dynamics give rise to the overall behavior of the system. By introducing our control signals $u_i(t)$, we've made the dynamics of the overall system flexible. As mentioned in the previous review post, our control goal is to drive the system to behave according to a desired set of dynamics. Formally, we want to specify the $u_i$ such that the actual system dynamics, $\sum_i u_i \mathcal{L}_i$, match some desired set of dynamics, $\hat{\mathcal{L}}$. In equation form, we want $u_i$ such that
$\sum_i u_i\, \mathcal{L}_i \approx \hat{\mathcal{L}}$.
It's worth noting also here that the resulting operator $\sum_i u_i \mathcal{L}_i$ still preserves the validity of the densities that it is applied to.
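In the discrete-state case there's a quick way to check this property (my own sketch, with made-up operators): a density-preserving operator, i.e. the generator of a continuous-time Markov process with $\dot{p} = \mathcal{L} p$, has non-negative off-diagonal entries and columns that sum to zero, and a non-negative weighted sum of such operators keeps both properties.

```python
import numpy as np

def is_valid_generator(L, tol=1e-12):
    """A discrete-state generator preserves densities if its off-diagonal
    entries are non-negative and each column sums to zero."""
    off_diag = L - np.diag(np.diag(L))
    return np.all(off_diag >= -tol) and np.allclose(L.sum(axis=0), 0)

# Two hypothetical controller operators over 3 states (made-up numbers).
L1 = np.array([[-1.0,  0.5,  0.0],
               [ 1.0, -1.0,  0.2],
               [ 0.0,  0.5, -0.2]])
L2 = np.array([[-0.3,  0.0,  0.4],
               [ 0.1, -0.6,  0.0],
               [ 0.2,  0.6, -0.4]])

u = np.array([0.7, 0.3])                  # non-negative control signals
L_total = u[0] * L1 + u[1] * L2           # weighted superposition
print(is_valid_generator(L1), is_valid_generator(L2), is_valid_generator(L_total))
```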
How well can the system approximate a set of desired dynamics?
In this next section of the paper, Dr. Sanger talks about reworking the stochastic operators of a system into an orthogonal set, which can then be used to easily approximate a desired set of dynamics. It's my guess that the motivation behind doing this is to see how close the given system is able to come to reproducing the desired dynamics. This is my guess because this exercise doesn't really generate control information that can be used to directly control the system, unless we translate the weights calculated here back into terms of the actual set of actions that we have available. But it can help you to understand what your system is capable of.
To do this, we'll use Gram-Schmidt orthogonalization, which I described in a recent post. To actually follow this orthogonalization process we'll need to define an inner product and normalization operator appropriate for our task. A suitable inner product will be one that lets us compare the similarity between two of our operators, $\mathcal{L}_1$ and $\mathcal{L}_2$, in terms of their effects on an initial state probability density, $p_0$. So define
$\langle \mathcal{L}_1, \mathcal{L}_2 \rangle_{p_0} = (\mathcal{L}_1 p_0)^T (\mathcal{L}_2 p_0)$
for the discrete-space case, and similarly
$\langle \mathcal{L}_1, \mathcal{L}_2 \rangle_{p_0} = \int (\mathcal{L}_1 p_0)(x)\, (\mathcal{L}_2 p_0)(x)\, dx$
in the continuous-state space. So this inner product calculates the change in probability density resulting from applying each of the two operators to this initial condition, and uses the amount by which they move the system in the same direction as the measure of similarity.
The induced norm that we'll use is the 2-norm,
$\|\mathcal{L}\|_{p_0} = \sqrt{\langle \mathcal{L}, \mathcal{L} \rangle_{p_0}} = \|\mathcal{L}\, p_0\|_2$.
With the above inner product and norm, we can now take our initial state, $p_0$, and create a new orthogonal set of stochastic differential operators that span the same space as the original set through the Gram-Schmidt orthogonalization method. Let's denote the orthonormal basis set operators as $\tilde{\mathcal{L}}_i$. Now, to approximate a desired operator $\hat{\mathcal{L}}$, generate a set of weights, $w_i$, over our orthonormal basis set using the inner product: $w_i = \langle \hat{\mathcal{L}}, \tilde{\mathcal{L}}_i \rangle_{p_0}$. Once we have the $w_i$, the desired operator can be recreated (as well as possible given this basis set) as
$\hat{\mathcal{L}} \approx \sum_i w_i\, \tilde{\mathcal{L}}_i$.
This approximation could then be used as a measure of how closely a given system, with its available set of operators, can reproduce a desired set of dynamics.
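Here's a rough numpy sketch of that procedure (my own, assuming the operators have already been discretized into matrices, e.g. as in the earlier snippet); `inner`, `gram_schmidt`, and `approximate` are hypothetical helper names, not anything from the paper.

```python
import numpy as np

def inner(L1, L2, p0):
    """Inner product between operators: compare their effects on the density p0."""
    return (L1 @ p0) @ (L2 @ p0)

def gram_schmidt(operators, p0):
    """Orthonormalize a list of (matrix) operators with respect to the p0-based inner product."""
    basis = []
    for L in operators:
        L = L.copy()
        for B in basis:
            L -= inner(L, B, p0) * B      # remove the component along each existing basis operator
        norm = np.sqrt(inner(L, L, p0))
        if norm > 1e-10:                  # skip operators that add no new direction
            basis.append(L / norm)
    return basis

def approximate(L_hat, basis, p0):
    """Best approximation of a desired operator L_hat within the span of the basis."""
    weights = [inner(L_hat, B, p0) for B in basis]
    return sum(w * B for w, B in zip(weights, basis)), weights
```

The norm of the residual $\hat{\mathcal{L}} - \sum_i w_i \tilde{\mathcal{L}}_i$, under the same inner product, then gives a number for how far the system's achievable dynamics are from the desired ones.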
Calculating a control signal using feedback
Up to now, there's been a lot of dancing around the control signal, including it in equations and discussing the types of control a neural system could plausibly implement. Here, finally, we actually get into how to go about generating this control signal. Let's start with a basic case where we have the system
$\dot{p} = (\mathcal{L}_0 + u\, \mathcal{L}_1)\, p$,
where $\mathcal{L}_0$ describes the unforced/passive dynamics of the system, $\mathcal{L}_1$ describes the control-dependent dynamics, and $u$ is our control signal.
Define a cost function, $c(x)$, that we wish to minimize. The expected value of this cost function at a given point in time is
$E[c(x)] = \sum_x c(x)\, p(x, t)$,
which can be read as the cost of each state weighted by the current likelihood of being in that state.
To reduce the cost over time, we want the derivative of our expected value with respect to time to be negative. Written as an equation, we want
$\frac{d}{dt} E[c(x)] < 0$.
Substituting in our system dynamics, $\dot{p} = (\mathcal{L}_0 + u\,\mathcal{L}_1)\,p$, the derivative expands to give
$\frac{d}{dt} E[c(x)] = \sum_x c(x)\, \dot{p}(x,t) = \sum_x c(x)\, (\mathcal{L}_0 + u\, \mathcal{L}_1)\, p(x, t)$.
Since our control is effected through $u$, at a given point in time where we have a fixed and known $p(x,t)$, we can calculate the effect of our control signal on the change in expected value over time, $\frac{d}{dt} E[c(x)]$, by taking the partial derivative with respect to $u$. This gives
$\frac{\partial}{\partial u} \left[ \frac{d}{dt} E[c(x)] \right] = \sum_x c(x)\, \mathcal{L}_1\, p(x, t)$,
which is intuitively read: the effect that the control signal has on the instantaneous change in expected value over time is equal to the change in probability of each state (under the controller's dynamics) weighted by the cost of that state. To reduce $\frac{d}{dt} E[c(x)]$, all we need to know now is the sign of the right-hand side of this equation, which tells us whether we should increase or decrease $u$. Neat!
Although we only need to know the sign, it's nice to include slope information that gives some idea of how far away the minimum might be. At this point, we can simply calculate our control signal in a gradient descent fashion, by setting
$u = -k\, \frac{\partial}{\partial u} \left[ \frac{d}{dt} E[c(x)] \right] = -k \sum_x c(x)\, \mathcal{L}_1\, p(x, t)$,
for some gain $k > 0$. The standard gradient descent interpretation of this is that we're calculating the effect that $u$ has on our function $\frac{d}{dt} E[c(x)]$, and assuming that for some small range, $\Delta u$, our function is approximately linear. So we can follow the negative of the function's slope at that point to find a new point on that function that evaluates to a smaller value.
This similarly extends to multiple controllers, where if there is a system with a large number of controllers, described by
$\dot{p} = \left( \mathcal{L}_0 + \sum_i u_i\, \mathcal{L}_i \right) p$,
then we can set
$u_i = -k \sum_x c(x)\, \mathcal{L}_i\, p(x, t)$.
Dr. Sanger notes that, as mentioned before, neural systems will often not permit negative control signals, so we rectify: set $u_i = 0$ wherever the above expression would be negative. The value of $u_i$ for a given controller is then proportional to the ability of that controller to reduce the expected cost at that point in time. If all of the $u_i = 0$, then it is not possible for the available controllers to reduce the expected cost of the system.
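Putting the pieces together, here's a minimal discrete-state sketch of this control loop (my own, not the paper's code): `control_signals` evaluates $\sum_x c(x)\,(\mathcal{L}_i p)(x)$ for each controller, flips the sign, and rectifies, and `simulate` then steps the density forward with forward Euler. The operators `L0` and `operators`, the cost vector `c`, and the renormalization guard at the end of each step are my own assumptions for the sketch, not part of the paper's method.

```python
import numpy as np

def control_signals(c, p, operators, k=1.0):
    """u_i = max(0, -k * sum_x c(x) (L_i p)(x)): rectified gradient-descent control."""
    return [max(0.0, -k * (c @ (L @ p))) for L in operators]

def simulate(L0, operators, c, p0, dt=1e-3, steps=5000, k=1.0):
    """Evolve the density under the passive dynamics L0 plus the weighted controllers."""
    p = p0.copy()
    for _ in range(steps):
        u = control_signals(c, p, operators, k)
        L_total = L0 + sum(ui * Li for ui, Li in zip(u, operators))
        p = p + dt * (L_total @ p)
        p = np.clip(p, 0.0, None)   # numerical guard: keep the density non-negative
        p /= p.sum()                # numerical guard: keep the density summing to 1
    return p
```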
Comparing classical control and stochastic operator control
Now that we finally have a means of generating a control signal with our stochastic differential operators, let’s compare the structure of our stochastic operator control with classical control. Here’s a picture:
The most obvious differences are that a cost function has replaced the desired trajectory, and the state $x$ has been replaced by a probability density over the possible states, $p(x,t)$. Additionally, the feedback in classical control is used to find the difference between the desired and actual state, which is then multiplied by a gain to generate a corrective signal, i.e. $u = k\,(x_{desired} - x)$, whereas in stochastic operator control the signal is calculated as specified above, by following the gradient of the expected value of the cost function, i.e. $u_i = -k \sum_x c(x)\, \mathcal{L}_i\, p(x,t)$.
Right away there is a big difference between our two control systems. In the classical control case, where we're following a desired trajectory, several things are implied. First, we've somehow come up with a desired trajectory. Second, we're necessarily assuming that regardless of what is actually going on in the system, this is the trajectory that we want to follow. This means that the system is not robust to changes in the dynamics, such as outside forces or perturbations applied during movement. In the stochastic operator control case, we're not following a desired path; instead, the system is looking to minimize the cost function at every point in time. This means that it doesn't matter where we start from or where we're going; the stochastic operator controller looks at the effect that each controller will have on the expected value of the cost function and generates a control signal accordingly. If the system is thrown off course to the target, it will recover by itself, making it far more robust than classical control. Additionally, we can easily change the cost function input to the system and see a change in the behaviour of the system, whereas in classical control a change in the cost function requires that we regenerate our desired trajectory before the controller will act appropriately.
While these are impressive points, it should also be pointed out that stochastic operator controllers are not the first to attack these issues. The robustness and flexible-behaviour business is similarly handled very well, for specific system forms (either linear or affine, meaning linear in terms of the dynamics of the applied control signal) and cost function forms (usually quadratic), by optimal feedback controllers. Optimal feedback controllers regenerate the desired trajectory online, based on system feedback. This leads to a far more robust control system than classical control provides. However, as mentioned, this only holds for specific system and cost function forms. In stochastic operator control, any type of cost function can be applied, and the controller dynamics described by the $\mathcal{L}_i$ can be linear or nonlinear. This is a very important difference, making stochastic operator control far more powerful.
Additionally, stochastic operator controllers operate under uncertainty, by employing a probability density to generate a control signal. All in all, stochastic operator controllers provide an impressive and novel amount of flexibility in control.
Conclusions
Here, Dr. Sanger has taken stochastic differential operators, looked at their relation to Bayes' rule, and developed a method for controlling uncertain systems robustly. This is done through the observation that these operators have linear properties that let the effects of distributed controllers on a stochastic system be described through a simple superposition of terms. By introducing a cost function, the effect of each controller on the expected cost of the system at each point in time can be calculated, which can then be used to generate a control signal that robustly minimizes the cost of the system through time.
Stochastic differential operators can often be inferred from the problem description; their generation is something that I'll examine more closely in the next post going through examples. Using these operators, time-varying expected costs associated with state-dependent cost functions can be calculated. Stochastic operator controllers allow significantly more freedom in the choice of cost function than has previously been available in control. Dr. Sanger notes that an important area for future research will be the development of methods for optimal control of systems described by stochastic differential operators.
The downside of stochastic operator controllers is that they are very computationally intensive, due to the fact that they must propagate a joint density function forward in time at each timestep, rather than a single system state. One of the areas Dr. Sanger notes as particularly important for future work is the development of parallel algorithms, both for increasing simulation speed and for examining possible neural implementations of such algorithms.
And finally, stochastic differential operators effect a paradigm shift from classical control in the way control is considered. Instead of driving the system to a certain target state, the goal is to have the system behave according to a desired set of dynamics. The effect of a controller is then the difference between the behavior of the system with and without the activity of the controller.
Comments and thoughts
This paper was particularly exciting because it discussed the calculation of control signals for the systems we've described through the stochastic differential operators developed over the last several papers. I admit some confusion regarding the aside about developing an orthogonal equivalent set of operators; it seemed a bit of a red herring in the middle of the paper. I left out the example and code discussion from this post because it's already very long, but I'm looking forward to working through them. Also worth pointing out is that I've been playing fast and loose, moving back and forth between the continuous and discrete cases, just in the interest of simplifying for understanding, but Dr. Sanger explicitly handles each case.
I'm excited to explore the potential applications and implementations of this technique in neural systems, especially in models of areas of the brain that perform a 'look-ahead' type function. The example that comes to mind is that of the rat reaching an intersection in a T-maze, where the neural activity recorded from place cells in the hippocampus shows the rat simulating the result of going left or going right. This seems a particularly apt application of these stochastic differential operators, as a sequence of actions and the resulting states can then be simulated and evaluated, provided that you have an accurate representation of the system dynamics in your stochastic operators.
To that end, I'm also very interested in possible means of learning stochastic differential operators for an action set. Internal models are an integral part of motor control system models, and this seems like a potentially plausible analogue. Additionally, for modeling biological systems, the complexity of the dynamics is often infeasible to determine analytically. All in all, I think this is a really exciting road for exploring the neural control of movement, and I'm looking forward to seeing where it leads.
Sanger, T. (2011). Distributed control of uncertain systems using superpositions of linear operators. Neural Computation, 23(8), 1911-1934. DOI: 10.1162/NECO_a_00151