
Sunday, March 31, 2013

Scientists Detect Magnetic Fingerprints of Defects in Solar Cells


A new, highly sensitive measurement method has allowed physicists from Helmholtz-Zentrum Berlin for Materials and Energy (HZB) to directly detect defects in solar cells with atomic resolution. These findings can be used to optimize solar cell efficiency and decrease production costs.

HZB physicists have managed to localize defects in amorphous/crystalline silicon heterojunction solar cells. Now, for the first time ever, using computer simulations at Paderborn University, the scientists were able to determine the defects’ exact locations and assign them to certain structures within the interface between the amorphous and crystalline phases.

In theory, silicon-based solar cells are capable of converting up to 30 percent of sunlight to electricity, although in reality various loss mechanisms ensure that even under ideal lab conditions the efficiency does not exceed 25 percent. Advanced heterojunction cells address this problem: on top of the wafer’s surface, at temperatures below 200 °C, a roughly 10-nanometer layer of disordered (amorphous) silicon is deposited. This thin film saturates most of the interface defects and conducts charge carriers out of the cell. Heterojunction solar cells already reach efficiencies of up to 24.7 percent, even at industrial scale. Until now, however, scientists had only a rough understanding of the processes at the remaining interface defects.

Now, physicists at HZB’s Institute for Silicon Photovoltaics have figured out a rather clever way to detect the remaining defects and characterize their electronic structure. “If electrons get deposited on these defects, we are able to use their spin, that is, their small magnetic moment, as a probe to study them,” Dr. Alexander Schnegg explains. With the help of electrically detected magnetic resonance (EDMR), an ultrasensitive measurement method, the team determined the local defects’ structure by detecting their magnetic fingerprint in the photocurrent of the solar cell under a magnetic field and microwave radiation.

Theoretical physicists at Paderborn University were able to compare these results with quantum chemical computer simulations, thus obtaining information about the defects’ positions within the layers and the processes by which they decrease the cells’ efficiency. “We basically found two distinct families of defects”, says Dr. Uwe Gerstmann from Paderborn University, who collaborates with the HZB team in a program sponsored by the Deutsche Forschungsgemeinschaft (DFG priority program 1601). “Whereas in the first one, the defects are rather weakly localized within the amorphous layer, a second family of defects is found directly at the interface, but in the crystalline silicon.”

For the first time, the scientists have succeeded in directly detecting and characterizing, with atomic resolution, the processes that compromise these solar cells’ high efficiency. The cells were manufactured and measured at HZB; the numerical methods were developed at Paderborn University. “We can now apply these findings to other types of solar cells in order to optimize them further and to decrease production costs”, says Schnegg.

George, B., Behrends, J., Schnegg, A., Schulze, T., Fehr, M., Korte, L., Rech, B., Lips, K., Rohrmüller, M., Rauls, E., Schmidt, W., & Gerstmann, U. (2013). Atomic Structure of Interface States in Silicon Heterojunction Solar Cells. Physical Review Letters, 110(13). DOI: 10.1103/PhysRevLett.110.136803


Breakthrough Research Shows Chemical Reaction in Real Time


New experiments at the Linac Coherent Light Source took an unprecedented look at the way carbon monoxide molecules react with the surface of a catalyst in real time. (Credit: Greg Stewart / SLAC National Accelerator Laboratory)


March 14, 2013

Press Office Contact: 
Andy Freeberg, SLAC National Accelerator Laboratory

Scientist Contact:
Anders Nilsson, SLAC National Accelerator Laboratory

Menlo Park, Calif. — The ultrafast, ultrabright X-ray pulses of the Linac Coherent Light Source (LCLS) have enabled unprecedented views of a catalyst in action, an important step in the effort to develop cleaner and more efficient energy sources.

Scientists at the U.S. Department of Energy’s (DOE) SLAC National Accelerator Laboratory used LCLS, together with computerized simulations, to reveal surprising details of a short-lived early state in a chemical reaction occurring at the surface of a catalyst sample. The study offers important clues about how catalysts work and launches a new era in probing surface chemistry as it happens.

“To study a reaction like this in real time is a chemist’s dream,” said Anders Nilsson, deputy director for the Stanford and SLAC SUNCAT Center for Interface Science and Catalysis and a leading author in the research, published March 15 in Science. “We are really jumping into the unknown.”

Catalysts, which can speed up chemical reactions and make them more efficient and effective, are essential to most industrial processes and to the production of many chemicals. Catalytic converters in cars, for example, reduce emissions by converting exhaust to less toxic compounds. Understanding how catalysts work, at ultrafast time scales and with molecular precision, is essential to producing new, lower-cost synthetic fuels and alternative energy sources that reduce pollution, Nilsson said.

How LCLS views surface chemistry (Credit: Hirohito Ogasawara / SLAC National Accelerator Laboratory)

In the LCLS experiment, researchers looked at a simple reaction in a crystal composed of ruthenium, a catalyst that has been extensively studied, in reaction with carbon monoxide gas. The scientists zapped the crystal’s surface with a conventional laser, which caused carbon monoxide molecules to begin to break away. They then probed this state of the reaction using X-ray laser pulses, and observed that the molecules were temporarily trapped in a near-gas state and still interacting with the catalyst.

“We never expected to see this state,” Nilsson said. “It was a surprise.”

Not only was the experiment the first to confirm the details of this early stage of the reaction, it also found an unexpectedly high share of molecules trapped in this state for far longer than anticipated, raising new questions about the atomic-scale interplay of chemicals that will be explored in future research.

Some of the early stages of a chemical reaction are so rapid that they could not be observed until the creation of free-electron lasers such as LCLS, said Jens Nørskov, director of SUNCAT. Future experiments at LCLS will examine more complex reactions and materials, Nilsson said: “There is potential to probe a number of catalytic-relevant processes – you can imagine there are tons of things we could do from here.”

Important preliminary research was conducted at SLAC’s Stanford Synchrotron Radiation Lightsource (SSRL), and this direct coupling of research at SLAC’s synchrotron and X-ray laser proved essential, said Hirohito Ogasawara, a staff scientist at SSRL.

Collaborators participating in the research were from SLAC; Stanford University; University of Hamburg, Center for Free-Electron Laser Science, Helmholtz-Zentrum Berlin for Materials and Energy, University of Potsdam and Fritz-Haber Institute of the Max Planck Society in Germany; Stockholm University in Sweden; and the Technical University of Denmark. This work was supported by DOE’s Office of Science, the Swedish National Research Council, the Danish Center for Scientific Computing, the Volkswagen Foundation and the Lundbeck Foundation.

SLAC is a multi-program laboratory exploring frontier questions in photon science, astrophysics, particle physics and accelerator research. Located in Menlo Park, California, SLAC is operated by Stanford University for the U.S. Department of Energy Office of Science. To learn more, please visit www.slac.stanford.edu.

LCLS and SSRL are supported by the DOE’s Office of Science. The Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.

Citation: M. Dell’Angela et al., Science, 14 March 2013. DOI: 10.1126/science.1231711

Source: Andy Freeberg (2013). Breakthrough Research Shows Chemical Reaction in Real Time. SLAC National Accelerator Laboratory News.




Saturday, March 30, 2013

Surprising Control over Photoelectrons from a Topological Insulator


The interior bulk of a topological insulator is indeed an insulator, but electrons (spheres) move swiftly on the surface as if through a metal. They are spin-polarized, however, with their momenta (directional ribbons) and spins (arrows) locked together. Berkeley Lab researchers have discovered that the spin polarization of photoelectrons (arrowed sphere at upper right) emitted when the material is struck with high-energy photons (blue-green waves from left) is completely determined by the polarization of this incident light. (Image: Chris Jozwiak, Zina Deretsky, and Berkeley Lab Creative Services Office)


MARCH 12, 2013

Paul Preuss

Plain-looking but inherently strange crystalline materials called 3D topological insulators (TIs) are all the rage in materials science. Even at room temperature, a single chunk of TI is a good insulator in the bulk, yet behaves like a metal on its surface.

Researchers find TIs exciting partly because the electrons that flow swiftly across their surfaces are “spin polarized”: the electron’s spin is locked to its momentum, perpendicular to the direction of travel. These interesting electronic states promise many uses – some exotic, like observing never-before-seen fundamental particles, but many practical, including building more versatile and efficient high-tech gadgets, or, further into the future, platforms for quantum computing.

A team of researchers from the U.S. Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) and the University of California at Berkeley has just widened the vista of possibilities with an unexpected discovery about TIs: when hit with a laser beam, the spin polarization of the electrons they emit (in a process called photoemission) can be completely controlled in three dimensions, simply by tuning the polarization of the incident light.

“The first time I saw this it was a shock; it was such a large effect and was counter to what most researchers had assumed about photoemission from topological insulators, or any other material,” says Chris Jozwiak of Berkeley Lab’s Advanced Light Source (ALS), who worked on the experiment. “Being able to control the interaction of polarized light and photoelectron spin opens a playground of possibilities.”

The Berkeley Lab-UC Berkeley team was led by Alessandra Lanzara of Berkeley Lab’s Materials Sciences Division (MSD) and UC Berkeley’s Department of Physics, working in collaboration with Jozwiak and Zahid Hussain of the ALS; Robert Birgeneau, Dung-Hai Lee, and Steve Louie of MSD and UC Berkeley; and Cheol-Hwan Park of UC Berkeley and Seoul National University. They and their colleagues report their findings in Nature Physics.

Strange electronic states and how to measure them

In diagrams of what physicists call momentum space, a TI’s electronic states look eerily like the same kinds of diagrams for graphene, the single sheet of carbon atoms that, before topological insulators came along, was the hottest topic in the materials science world.

In energy-momentum diagrams of graphene and TIs, the conduction bands (where energetic electrons move freely) and valence bands (where lower-energy electrons are confined to atoms) don’t overlap as they do in metals, nor is there an energy gap between the bands, as in insulators and semiconductors. Instead the “bands” appear as cones that meet at a point, called the Dirac point, across which energy varies continuously.

The experimental technique that directly maps these states is ARPES, angle-resolved photoemission spectroscopy. When energetic photons from a synchrotron light source or laser strike a material, it emits electrons whose own energy and momentum are determined by the material’s distribution of electronic states. Steered by the spectrometer onto a detector, these photoelectrons provide a picture of the momentum-space diagram of the material’s electronic structure.

The diagram at right shows the electronic states of bismuth selenide in momentum space. ARPES, at left, can directly create such maps with photoelectrons. A slice through the conduction cone at the Fermi energy maps the topological insulator’s surface as a circle (upper left); here electron spins and momenta are locked together. Initial ARPES measurements in this experiment were made with p-polarized incident light in the regions indicated by the green circle and line, where the spin polarization of the photoelectrons is consistent with the intrinsic spin polarization of the surface.

However similar their Dirac-cone diagrams may appear, the electronic states on the surface of TIs and in graphene are fundamentally different: those in graphene are not spin polarized, while those of TIs are completely spin polarized, and in a peculiar way.

A slice through the Dirac-cone diagram produces a circular contour. In TIs, spin orientation changes continuously around the circle, from up to down and back again, and the locked-in spin of surface electrons is determined by where they lie on the circle. Scientists call this relation of momentum and spin the “helical spin texture” of a TI’s surface electrons. (Electron spin isn’t like that of a spinning top, however; it’s a quantum number representing an intrinsic amount of angular momentum.)

Directly measuring the electrons’ spin as well as their energy and momentum requires an addition to ARPES instrumentation. Spin polarization is hard to detect and in the past has been established by firing high-energy electrons at gold foil and counting which way a few of them bounce; collecting the data takes a long time.

Jozwiak, Lanzara, and Hussain jointly led the development of a precision detector that could measure the spin of low-energy photoelectrons by measuring how they scatter from a magnetic surface. Called a spin time-of-flight analyzer, the device is many times more efficient at data collection.

Says Hussain, “It’s the kind of project that could only be done at a place like Berkeley Lab, where tight collaboration for a wide range of capabilities is possible.”

The new instrument was first used at the ALS to study the well-known topological insulator bismuth selenide. While the results confirmed that bismuth selenide’s helical spin texture persists even at room temperature, they raised a perplexing question.

Lanzara says, “In an ARPES experiment, it’s usually assumed that the spin polarization of detected photoelectrons accurately reports the spin polarization of electrons within the material.” She explains that “this assumption is frequently made when confirming the helical spin texture of a TI’s surface electrons. But in our spin-ARPES experiments, we found significant deviations between the spin polarizations of the surface electrons versus the photoelectrons. We knew we had to look further.”

Flipping photoelectron spins

Probing the TI surface electrons didn’t require the high photon energy of a synchrotron beam, so the new study was primarily done in a laboratory with a laser that could produce intense ultraviolet light capable of stimulating photoemission, and whose polarization was readily manipulated. The experiment used high-quality samples of bismuth selenide from Birgeneau’s MSD and UC Berkeley labs.

Incident light that’s p-polarized (upper left) produces photoelectrons consistent with the usual picture of spin polarization in a topological insulator’s surface, but changing the polarization of the incident light also changes the spin polarization of the photoelectrons.

In the first experiments, the incident light was p-polarized, which means the electric part of the light wave was parallel to a plane that was perpendicular to the TI surface and oriented according to the path of the emitted photoelectrons. Studies of topological insulators typically use p-polarized light in this geometry, and sure enough, the spin-ARPES measurements showed the photoelectrons were indeed spin polarized in directions consistent with the expected spin texture of the surface electrons.

“After we’d measured p-polarization, we switched to an s-polarized laser beam,” Jozwiak says. “It only took a few minutes to collect the data.” (S-polarization means the electric part of the light wave is perpendicular to the same imaginary plane – perpendicular in German being senkrecht.)

Three minutes after he started the run, Jozwiak got a jolt. “The experiment was completely the same, except for the light polarization, but now the photoelectrons were spin polarized in the reverse direction – the opposite of what you’d expect.” His first assumption was “I must have done something wrong.”

Repeated careful experiments with a range of laser polarizations showed, however, that the polarization of the photons in the laser beam controlled the spin polarization of the emitted photoelectrons. When the laser polarization was smoothly varied – and even when it was circularly polarized right or left – the photoelectron spin polarization followed suit.

Why had no results counter to the expected surface textures been reported before? Probably because the most common kind of spin-ARPES experiment makes a few measurements in a typical geometry using p-polarized light. With other arrangements, however, photoelectron spin polarization departs markedly from expectations.

The team’s theory collaborators, Park, Louie, and Lee, helped explain the unusual results by predicting that just such differences between photoelectron and intrinsic spin textures should occur. There are also suggestions that the simple picture of spin texture in topological insulators is more complex than has been assumed. Says Lanzara, “It’s a great motivation to keep digging.”

The ability to hit a topological insulator with a tuned laser and excite polarization-tailored electrons has great potential for the field of spintronics – electronics that exploit spin as well as charge. Devices that optically control electron distribution and flow would constitute a significant advance.

Optical control of TI photoemission has more immediate practical possibilities as well. Bismuth selenide could provide just the right kind of photocathode source for experimental techniques that require electron beams whose spin polarization can be exquisitely and conveniently controlled.

DOE’s Office of Science supports the ALS and supported this research.

###

“Photoelectron spin-flipping and texture manipulation in a topological insulator,” by Chris Jozwiak, Cheol-Hwan Park, Kenneth Gotlieb, Choongyu Hwang, Dung-Hai Lee, Steven G. Louie, Jonathan D. Denlinger, Costel R. Rotundu, Robert J. Birgeneau, Zahid Hussain, and Alessandra Lanzara, appears in advance online publication of Nature Physics at http://www.nature.com/nphys/journal/vaop/ncurrent/abs/nphys2572.html.

More about spin-ARPES experiments at the ALS with the efficient spin time-of-flight analyzer can be found in Physical Review B at http://prb.aps.org/abstract/PRB/v84/i16/e165113.

The Advanced Light Source is a third-generation synchrotron light source producing light in the x-ray region of the spectrum that is a billion times brighter than the sun. A DOE national user facility, the ALS attracts scientists from around the world and supports its users in doing outstanding science in a safe environment. For more information visit www-als.lbl.gov/.

The U.S. Department of Energy’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.

Lawrence Berkeley National Laboratory addresses the world’s most urgent scientific challenges by advancing sustainable energy, protecting human health, creating new materials, and revealing the origin and fate of the universe. Founded in 1931, Berkeley Lab’s scientific expertise has been recognized with 13 Nobel prizes. The University of California manages Berkeley Lab for the U.S. Department of Energy’s Office of Science. For more, visit www.lbl.gov.

Source: Paul Preuss (2013). Surprising Control over Photoelectrons from a Topological Insulator. Berkeley Lab News Center.




Friday, March 29, 2013

Internet Bad Neighborhoods

Posted on March 16, 2013 by Rense Nieuwenhuis | No Responses


Don’t venture too far on the internet: bad neighborhoods have been located! Internet bad neighborhoods are the geographical areas from which the majority of spam and phishing mails originate. Interestingly, some regions specialize in spam, while others focus on phishing for your bank account.

Having successfully defended his dissertation at the University of Twente (the Netherlands), Giovane Moura is now attracting a great deal of attention with his research. It was discussed first on Slashdot (link) and then on BBC News (link).

From the press release:

Of the 42,000 Internet Service Providers (ISPs) surveyed, just 20 were found to be responsible for nearly half of all the internet addresses that send spam. That is just one of the striking results of an extensive study by the University of Twente’s Centre for Telematics and Information Technology (CTIT). This study focused on “Bad Neighbourhoods” on the internet (which sometimes correspond to certain geographical areas) that are the source of a great deal of spam, phishing or other undesirable activity. In his thesis, Giovane Moura describes this situation in detail.

The results have practical implications. BBC writes:

The large-scale study was carried out to help fine-tune computer security tools that scrutinise the net addresses of email and other messages to help them work out if they are junk or legitimate. Such tools could make better choices if they were armed with historical information about the types of traffic that emerge from particular networks.

Giovane C. M. Moura, Anna Sperotto, Ramin Sadre, & Aiko Pras (2013). Evaluating Third-Party Bad Neighborhood Blacklists for Spam Detection. IFIP/IEEE International Symposium on Integrated Network Management.



Thursday, March 28, 2013

Dead Sparrow Turned into Robot to Study Bird Behavior


Researchers at Duke University recently took a major step toward better understanding how swamp sparrows use a combination of song and visual displays to communicate with one another. How they came about making this discovery, though, is what makes this story particularly newsworthy — they stuffed a deceased swamp sparrow with a miniature computer and some robotics to give it the ability to flap its wings as if it were alive.

Duke biologist Rindy Anderson led a team of biologists on the study, receiving technical assistance from engineering undergraduate student David Piech. The team brought “Robosparrow,” as it is aptly referred to, to a swamp sparrow breeding ground and placed it in the territories of live males.

Once all systems were go, the robotic bird “sang” swamp sparrow songs via a nearby sound system, letting the birds know it was intruding on their ground. Anderson observed from a distance, sitting amid the tall swampy grasses, and changed the bird’s behavior to study various responses. They had it sing in a stationary position, shift side to side, and, as mentioned before, flap its wings.

What they found was that a song combined with wing waves is more potent than song alone, and that wing waves by themselves evoked the most aggressive response from live male birds.

“What I didn’t expect to see was that the birds would give strikingly similar aggressive wing-wave signals to the three types of invaders,” Anderson said. That is to say, she thought the defending birds would simply match the signals of the intruding robosparrow. What they instead discovered was that the males are more individualistic and consistent in the level of aggressiveness that they want to signal.

“That response makes sense, in retrospect, since attacks can be devastating,” she said. What Anderson means by this is that male swamp sparrows only want to signal a certain level of aggression to see if they can scare off an intruder without provoking any sort of physical conflict that could result in severe injury or death. Some sparrows were intimidated by robosparrow’s flapping, while others responded with a more aggressive flapping pattern.

Also worth noting from this study is that whether the robosparrow waved its wings or not, some live male sparrows still came in and attacked it. “It’s high stakes for these little birds. They only live a couple of years, and most only breed once a year, so owning a territory and having a female is high currency,” Anderson said.

Looking ahead, Anderson and her team plan to further test how the sparrows use wing waves combined with a characteristic twitter called “soft-song” to show aggression and fend off competition. Unfortunately, it’s going to take some time before the bot gets back in the field, as robosparrow’s motor is burned out from its last run, and the bird’s head was ripped off during one of the attacks.

Sources: duke.edu, electronicproducts.com

Reference:

Anderson, R., DuBois, A., Piech, D., Searcy, W., & Nowicki, S. (2013). Male response to an aggressive visual signal, the wing wave display, in swamp sparrows. Behavioral Ecology and Sociobiology. DOI: 10.1007/s00265-013-1478-9



Wednesday, March 27, 2013

Distributed control of uncertain systems using superpositions of linear operators - Likelihood calculus paper series review part 3

The third (and final, at the moment) paper in the likelihood calculus series from Dr. Terrence Sanger is Distributed control of uncertain systems using superpositions of linear operators. Carrying the torch for the series right along, here Dr. Sanger continues investigating the development of an effective, general method of controlling systems operating under uncertainty. This is the paper that delivers on all the promises of building a controller out of a system described by the stochastic differential operators we’ve been learning about in the previous papers. In addition to describing the theory, there are examples of system simulation with code provided! Which is a wonderful, and sadly uncommon, thing in academic papers, so I’m excited. We’ll go through a comparison of Bayes’ rule and Markov processes (described by our stochastic differential equations), go quickly over the stochastic differential operator description, and then dive into the control of systems. The examples and code run-through I’m going to have to save for another post, though, just to keep the size of this post reasonable.

The form our system model equation will take is the very general

dx = f(x)dt + \sum g_i(x)u_i dt + \sum h_i(x, u_i)dB_i,

where f(x) represents the environment dynamics, previously also referred to as the unforced or passive dynamics of the system, g_i(x) describes how the control signal u_i affects the system state x, h_i(x, u_i) describes the state and control dependent additive noise, and dB_i is a set of independent noise processes such as Brownian motion. Our control goal is to find a set of functions of time, u_i(t), such that the dynamics of our system behave according to a desired set of dynamics that we have specified.
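
To make this model concrete, here is a minimal Euler-Maruyama simulation of the equation above for a single controller. This is my own sketch rather than code from the paper, and the choices of f, g, h, and u below are toy assumptions:

import numpy as np

def f(x): return -x                      # assumed passive dynamics: pull toward 0
def g(x): return 1.0                     # assumed control gain
def h(x, u): return 0.2 * (1 + abs(u))   # assumed control-dependent noise scale

rng = np.random.default_rng(1)
dt, T = 0.001, 2.0
x, u = 1.5, -0.5                         # initial state and a constant control signal
for _ in range(int(T / dt)):
    dB = rng.normal(0.0, np.sqrt(dt))    # Brownian increment over dt
    x += f(x) * dt + g(x) * u * dt + h(x, u) * dB
print(x)                                 # end state of one sample path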

In prevalent methods in control theory, uncertainty is a difficult problem that can often only be handled effectively with a number of simplifications, such as linear systems with additive Gaussian noise. In the biological systems that we want to model, however, uncertainty is ubiquitous. There is noise on outgoing and incoming signals, there are unobserved controllers simultaneously exerting influence over the body, there are complicated and often unmodeled highly non-linear dynamics of the system and its interactions with the environment, etc. In the brain especially, the effect of unobserved controllers is a particular problem. Multiple areas of the brain will simultaneously send control signals to the body, and the results of these signals tend to be only partially known, or known only after a delay, to the other areas. So for modeling, we need an effective means of controlling distributed systems operating under uncertainty. And that’s exactly what this paper presents: ‘a mathematical framework that allows modeling of the effect of actions deep within a system on the stochastic behaviour of the global system.’ Importantly, the stochastic differential operators that Dr. Sanger uses to do this are linear operators, which opens up a whole world of linear methods to us.

Bayes’ rule and Markov processes

Bayesian estimation is often used in sensory processing to model the effects of state uncertainty, combining prior knowledge of state with a measurement update. Because we’re dealing with modeling various types of system uncertainty, it seems like a good idea to consider Bayes’ rule. Here, Dr. Sanger shows that Bayes’ rule is in fact insufficient in this application, and Markov processes must be used. There are a number of neat insights that come from this comparison, so it’s worth going through.

Let’s start by writing Bayes’ rule and Markov processes using matrix equations. Bayes’ rule is the familiar equation

p(x|y) = \frac{p(y|x)}{p(y)}p(x),

where p(x) represents our probability density or distribution, so p(x) \geq 0 and \sum_i p(x = i) = 1. This equation maps a prior density p(x) to the posterior density p(x|y). Essentially, this tells us the effect a given measurement of y has on the probability of being in state x. To write this in matrix notation, we assume that x takes on a finite number of states, so p(x) is a vector, which gives

p_x' = Ap_x,

where p_x and p_x' are the prior and posterior distributions, respectively, and A is a diagonal matrix with elements A_{ii} = \frac{p(y|x)}{p(y)}.

Now, the matrix equation for a discrete-time, finite-state Markov process is written

p_x(t+1) = Mp_x(t).

So where in Bayes’ rule the matrix (our linear operator) A maps the prior distribution into the posterior distribution, in Markov processes the linear operator M maps the probability density of the state at time t into the density at time t+1. The differences come in through the form of the linear operators: the major one is that A is a diagonal matrix, while there is no such restriction for M. The implication is that in Bayes’ rule the update of a given state x depends only on the prior likelihood of being in that state, whereas in Markov processes the likelihood of a given state at time t+1 can depend on the probability of being in other states at time t. The off-diagonal elements of M allow us to represent the probability of state transitions, which is critical for capturing the behavior of dynamical systems. This is why Bayes’ method is insufficient for our current application.
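
As a quick numerical illustration of this difference (a toy example of mine, not from the paper), note how the Bayes operator only rescales each entry of the density, while the Markov operator's off-diagonal entries move probability mass between states:

import numpy as np

p_x = np.array([0.25, 0.5, 0.25])        # prior density over 3 discrete states

# Bayes: A_ii = p(y|x=i) / p(y), a purely diagonal rescaling
p_y_given_x = np.array([0.8, 0.1, 0.1])  # assumed measurement likelihoods
p_y = p_y_given_x @ p_x                  # normalizing constant p(y)
A = np.diag(p_y_given_x / p_y)
posterior = A @ p_x                      # each state rescaled, never mixed

# Markov: column j of M holds the transition probabilities out of state j
M = np.array([[0.9, 0.2, 0.0],
              [0.1, 0.6, 0.3],
              [0.0, 0.2, 0.7]])          # assumed dynamics
p_next = M @ p_x                         # off-diagonals mix states together

print(posterior, posterior.sum())        # both updates still sum to 1
print(p_next, p_next.sum())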

Stochastic differential operators

This derivation of stochastic differential operators is by now relatively familiar ground, so I’ll be quick here. We start with a stochastic differential equation

dx = f(x)dt + g(x)dB,

where dB is our noise source, the differential of unit variance Brownian motion. The noise term introduces randomness into the state variable x, so we describe x with a probability density p(x) that evolves through time. This change of the probability density through time is captured by the Fokker-Planck partial differential equation

\frac{\partial}{\partial t}p(x,t) = - \frac{\partial}{\partial x}(f(x)p(x,t)) + \frac{1}{2} \frac{\partial^2}{\partial x^2}(g(x)g^T(x)p(x,t)),

which can be rewritten as

\dot{p} = \mathcal{L}p,

where \mathcal{L} is the linear operator

\mathcal{L} = - \frac{\partial}{\partial x}f(x) + \frac{1}{2} \frac{\partial^2}{\partial x^2}g(x)g^T(x).

\mathcal{L} is referred to as our stochastic differential operator. Because the Fokker-Planck equation is proven to preserve probability densities (non-negativity and sum to 1), applying \mathcal{L} to update our density will maintain its validity.

What is so exciting about these operators is that even though they describe nonlinear systems, they themselves are linear operators. What this means is that if we have two independent components of a system that affect its dynamics, described by \mathcal{L}_1 and \mathcal{L}_2, we can determine their combined effect on the overall system dynamics through a simple summation, i.e. \dot{p} = (\mathcal{L}_1 + \mathcal{L}_2)p.
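
For intuition, here is a rough numerical sketch of what \mathcal{L} looks like in practice: discretize it on a 1D grid with finite differences and step the density forward in time. The drift, noise amplitude, grid, and step sizes are all assumptions of mine for illustration, not the paper's code:

import numpy as np

N, dx, dt = 200, 0.05, 0.001
x = (np.arange(N) - N // 2) * dx

f = -x                           # assumed drift term f(x)
g = 0.5 * np.ones(N)             # assumed noise amplitude g(x)

# central-difference derivative matrices (with crude boundary handling)
D1 = (np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N - 1), -1)) / (2 * dx)
D2 = (np.diag(np.ones(N - 1), 1) - 2 * np.eye(N)
      + np.diag(np.ones(N - 1), -1)) / dx**2

# L = -d/dx(f .) + 1/2 d^2/dx^2(g g^T .), acting on densities over the grid
L = -D1 @ np.diag(f) + 0.5 * D2 @ np.diag(g * g)

p = np.exp(-(x - 2.0)**2)        # initial density: an off-center Gaussian
p /= p.sum() * dx
for _ in range(1000):
    p += dt * (L @ p)            # forward-Euler step of p_dot = L p

print(abs(p.sum() * dx - 1.0))   # total probability is (approximately) conserved

And because these operators are now just matrices, two independent subsystems built this way really do combine by simple addition, forming L_1 + L_2 before the update loop.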

Controlling with stochastic differential operators

Last time, we saw that control can be introduced by attaching a weighting term to the superposition of controller dynamics, giving

\dot{p} = \sum_i u_i \mathcal{L}_i p,

where \mathcal{L}_i is the stochastic differential operator of controller i, and u_i is the input control signal to that controller. In the context of a neural system, this equation describes a set of subsystems whose weighted dynamics give rise to the overall behavior of the system. By introducing our control signals u_i, we’ve made the dynamics of the overall system flexible. As mentioned in the previous review post, our control goal is to drive the system to behave according to a desired set of dynamics. Formally, we want to specify u_i such that the actual system dynamics, \hat{\mathcal{L}}, match some desired set of dynamics, \mathcal{L}^*. In equation form, we want u_i such that

\mathcal{L}^* \approx \hat{\mathcal{L}} = \sum_i u_i \mathcal{L}_i.

It’s also worth noting here that the resulting \hat{\mathcal{L}} still preserves the validity of the densities it is applied to.

How well can the system approximate a set of desired dynamics?

In this next section of the paper, Dr. Sanger talks about reworking the stochastic operators of a system into an orthogonal set, which can then be used to easily approximate a desired set of dynamics. My guess is that the motivation for doing this is to see how close the given system can come to reproducing the desired dynamics; this exercise doesn’t really generate control information that can be used to directly control the system, unless we translate the weights calculated by doing this back into terms of the actual set of actions that we have available. But it can help you to understand what your system is capable of.

To do this, we’ll use Gram-Schmidt orthogonalization, which I describe in a recent post. To actually follow this orthogonalization process we’ll need to define an inner product and normalization operator appropriate for our task. A suitable inner product will be one that lets us compare the similarity between two of our operators, L_1 and L_2, in terms of their effects on an initial state probability density, p_0. So define

\langle L_1, L_2 \rangle_{p_0} = \langle L_1 p_0, L_2 p_0 \rangle = \langle \dot{p}_{L_1} , \dot{p}_{L_2} \rangle

for the discrete-space case, and similarly

\langle \mathcal{L}_1, \mathcal{L}_2 \rangle_{p_0} = \int (\mathcal{L}_1 p_0)(\mathcal{L}_2 p_0)dx = \int \dot{p}_{L_1} \dot{p}_{L_2} dx

in the continuous-state space. So this inner product calculates the change in probability density resulting from applying these two operators to the initial condition, and takes the degree to which they move the system in the same direction as the measure of similarity.
The induced norm that we’ll use is the 2-norm,

||L||_{p_0} = \frac{||L p_0||_2}{||p_0||_2}.

With the above inner product and normalization operators, we can now take our initial state, p_0, and create a new orthogonal set of stochastic differential operators that span the same space as the original set, through the Gram-Schmidt orthogonalization method. Let’s denote the orthonormal basis set vectors as \Lambda_i. Now, to approximate a desired operator L^*, generate a set of weights, \alpha, over our orthonormal basis set using the inner product defined above: \alpha_i = \langle L^*, \Lambda_i \rangle_{p_0}. Once we have the \alpha_i, the desired operator can be recreated (as best as possible given this basis set),

L^* \approx \hat{L} = \sum \alpha_i \Lambda_i.

This could then be used as a comparison measure as the best approximation to a desired set of dynamics that a given system can achieve with its set of operators.
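
Here is a small numerical sketch of the procedure, with random matrices standing in for the actual operators (all names and values are hypothetical stand-ins):

import numpy as np

rng = np.random.default_rng(0)
n = 10
p0 = np.ones(n) / n                      # initial density p_0
Ls = [rng.standard_normal((n, n)) for _ in range(3)]  # stand-ins for the available L_i
L_star = rng.standard_normal((n, n))     # stand-in for the desired operator L*

# inner product <L1, L2>_{p0} = <L1 p0, L2 p0>, so work with the action vectors
actions = [L @ p0 for L in Ls]

# Gram-Schmidt on the action vectors gives an orthonormal set Lambda_i
basis = []
for a in actions:
    v = a - sum((a @ b) * b for b in basis)
    if np.linalg.norm(v) > 1e-12:
        basis.append(v / np.linalg.norm(v))

# project the desired action onto the set: alpha_i = <L*, Lambda_i>_{p0}
target = L_star @ p0
alphas = [target @ b for b in basis]
approx = sum(a * b for a, b in zip(alphas, basis))

print(np.linalg.norm(target - approx))   # residual the available set cannot produce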

Calculating a control signal using feedback

Up to now, there’s been a lot of dancing around the control signal, including it in equations and discussing the types of control a neural system could plausibly implement. Here, finally, we actually get into how to go about generating this control signal. Let’s start with a basic case where we have the system

\dot{p} = (\mathcal{L}_1 + u\mathcal{L}_2)p,

where \mathcal{L}_1 describes the unforced/passive dynamics of the system, \mathcal{L}_2 describes the control-dependent dynamics, and u is our control signal.

Define a cost function V(x) that we wish to minimize. The expected value of this cost function at a given point in time is

E[V] = \int V(x)p(x)dx,

which can be read as the cost of each state weighted by the current likelihood of being in that state.
To reduce the cost over time, we want the derivative of our expected value with respect to time to be negative. Differentiating E[V] with respect to time and substituting in our system dynamics for \dot{p}(x) gives

\frac{d}{dt}E[V] = \int V(x)\dot{p}(x)dx = \int V(x)[\mathcal{L}_1p(x) + u\mathcal{L}_2p(x)]dx.

Since our control is effected through u, at a given point in time where we have a fixed and known p(x,t), we can calculate the effect of our control signal on the change in expected value over time, \frac{d}{dt}E[V], by taking the partial differential with respect to u. This gives

\frac{\partial}{\partial u}\left[\frac{d}{dt}E[V]\right] = \int V(x)\mathcal{L}_2p(x)dx,

which is intuitively read: The effect that the control signal has on the instantaneous change in expected value over time is equal to the change in probability of each state x weighted by the cost of that state. To reduce \frac{d}{dt}E[V], all we need to know now is the sign of the right-hand side of this equation, which tells us if we should increase or decrease u. Neat!

Although we only need to know the sign, it’s nice to include slope information that gives some idea of how far away the minimum might be. At this point, we can simply calculate our control signal in a gradient descent fashion, by setting u = - k \int V(x)\mathcal{L}_2p(x)dx. The standard gradient descent interpretation of this is that we’re calculating the effect that u has on our function \frac{d}{dt}E[V], and assuming that for some small range, k, our function is approximately linear. So we can follow the negative of the function’s slope at that point to find a new point on that function that evaluates to a smaller value.

This similarly extends to multiple controllers, where if there is a system of a large number of controllers, described by

\dot{p} = \sum_i u_i \mathcal{L}_i p,

then we can set

u_i = - k_i \int V(x)\mathcal{L}_ip(x)dx.

Dr. Sanger notes that, as mentioned before, neural systems often do not permit negative control signals, so any u_i whose calculated value is negative is set to u_i = 0. The value of u_i for a given controller is proportional to the ability of that controller to reduce the expected cost at that point in time. If all of the u_i = 0, then it is not possible for the available controllers to reduce the expected cost of the system.
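
Putting the pieces together, a toy closed-loop simulation might look like the following sketch. The drift choices, noise levels, cost function, and gain are all assumptions of mine, and the max(0, .) implements the rectification just discussed:

import numpy as np

N, dx, dt, k = 100, 0.1, 0.001, 1.0
x = (np.arange(N) - N // 2) * dx
D1 = (np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N - 1), -1)) / (2 * dx)
D2 = (np.diag(np.ones(N - 1), 1) - 2 * np.eye(N)
      + np.diag(np.ones(N - 1), -1)) / dx**2

def sdo(f, g):                   # discretized stochastic differential operator
    return -D1 @ np.diag(f) + 0.5 * D2 @ np.diag(g * g)

L_passive = sdo(np.zeros(N), 0.3 * np.ones(N))   # unforced dynamics: pure diffusion
L_left = sdo(-np.ones(N), 0.1 * np.ones(N))      # controller 1: drift leftward
L_right = sdo(np.ones(N), 0.1 * np.ones(N))      # controller 2: drift rightward
V = (x - 2.0)**2                                 # assumed cost: stay near x = 2

p = np.exp(-x**2)                                # density starts centered at 0
p /= p.sum() * dx
for _ in range(2000):
    # u_i = max(0, -k * integral of V(x) L_i p(x) dx), rectified as discussed
    us = [max(0.0, -k * (V @ (L @ p)) * dx) for L in (L_left, L_right)]
    p += dt * (L_passive @ p + us[0] * (L_left @ p) + us[1] * (L_right @ p))

print(x[np.argmax(p)])           # the density's mode has moved toward x = 2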

Comparing classical control and stochastic operator control

Now that we finally have a means of generating a control signal with our stochastic differential operators, let’s compare the structure of our stochastic operator control with classical control. Here’s a picture:

(Figure: comparison of the classical feedback control loop with the stochastic operator control loop.)

The most obvious differences are that a cost function V(x) has replaced the desired trajectory x^*, and the state x has been replaced by a probability density over the possible states, p(x). Additionally, the feedback in classical control is used to find the difference between the desired and actual states, which is then multiplied by a gain to generate a corrective signal, i.e. u = k (x^* - x), whereas in stochastic operator control the signal is calculated as specified above, by following the gradient of the expected value of the cost function, i.e. u = -k \int V(x)\mathcal{L}_2 p(x)dx.

Right away there is a striking difference between our two control systems. In the classical control case, where we’re following a desired trajectory, several things are implied. First, we’ve somehow come up with a desired trajectory. Second, we’re assuming that regardless of what is actually going on in the system, this is the trajectory we want to follow. This means the system is not robust to changes in the dynamics, such as outside forces or perturbations applied during movement. In the stochastic operator control case, we’re not following a desired path; instead, the system looks to minimize the cost function at every point in time. It doesn’t matter where we start from or where we’re going: the stochastic operator controller looks at the effect each controller will have on the expected value of the cost function and generates a control signal accordingly. If the system is thrown off course to the target, it recovers by itself, making it far more robust than classical control. Additionally, we can easily change the cost function input to the system and see a change in the behaviour of the system, whereas in classical control a change in the cost function requires that we regenerate our desired trajectory x^* before the controller will act appropriately.

While these are impressive points, it should also be pointed out that stochastic operator controllers are not the first to attack these issues. The robustness and behaviour business is handled similarly well, for specific system forms (either linear or affine, meaning linear in the dynamics of the applied control signal) and cost function forms (usually quadratic), by optimal feedback controllers. Optimal feedback controllers regenerate the desired trajectory online, based on system feedback, which leads to a far more robust control system than classical control provides. However, as mentioned, this only holds for those specific system and cost function forms. In stochastic operator control, any type of cost function can be applied, and the controller dynamics described by the L_i can be linear or nonlinear. This is a very important difference, making stochastic operator control far more powerful.

Additionally, stochastic operator controllers operate under uncertainty, by employing a probability density to generate a control signal. All in all, stochastic operator controllers provide an impressive and novel amount of flexibility in control.

Conclusions

Here, Dr. Sanger has taken stochastic differential operators, looked at their relation to Bayes’ rule, and developed a method for controlling uncertain systems robustly. This is done through the observation that these operators have linear properties that let the effects of distributed controllers on a stochastic system be described through a simple superposition of terms. By introducing a cost function, the effect of each controller on the expected cost of the system at each point in time can be calculated, which can then be used to generate a control signal that robustly minimizes the cost of the system through time.

Stochastic differential operators can often be inferred from the problem description; their generation is something that I’ll examine more closely in the next post, going through examples. Using these operators, time-varying expected costs associated with state-dependent cost functions can be calculated. Stochastic operator controllers allow significantly more freedom in the choice of cost function than has previously been available in control. Dr. Sanger notes that an important area for future research will be the development of methods for optimal control of systems described by stochastic differential operators.

The downside of stochastic operator controllers is that they are very computationally intensive, because they must propagate a joint density function forward in time at each timestep, rather than a single system state. One of the areas Dr. Sanger notes as particularly important for future work is the development of parallel algorithms, both for increasing simulation speed and for examining possible neural implementations of such algorithms.

And finally, stochastic differential operators effect a paradigm shift from classical control in the way control is considered. Instead of driving the system to a certain target state, the goal is to have the system behave according to a desired set of dynamics. The effect of a controller is then the difference between the behavior of the system with and without the activity of the controller.

Comments and thoughts

This paper was particularly exciting because it discussed the calculation of the control signals for systems described by the stochastic differential operators that have been developed over the last several papers. I admit confusion regarding the aside about developing an orthogonal equivalent set of operators; it seemed a bit of a red herring in the middle of the paper. I left out the example and code discussion from this post because it’s already very long, but I’m looking forward to working through them. Also worth pointing out is that I’ve been playing fast and loose, moving back and forth between continuous and discrete notation just in the interest of simplifying for understanding, but Dr. Sanger explicitly handles each case.

I’m excited to explore the potential applications and implementations of this technique in neural systems, especially in models of areas of the brain that perform a ‘look-ahead’ type function. The example that comes to mind is that of a rat reaching an intersection in a T-maze, where the neural activity recorded from place cells in the hippocampus shows the rat simulating the result of going left or going right. This seems a particularly apt application of these stochastic differential operators, as a sequence of actions and the resulting state can then be simulated and evaluated, provided that you have an accurate representation of the system dynamics in your stochastic operators.

To that end, I’m also very interested in possible means of learning stochastic differential operators for an action set. Internal models are an integral part of motor control system models, and this seems like a potentially plausible analogue. Additionally, for modeling biological systems, the complexity of the dynamics is often infeasible to determine analytically. All in all, I think this is a really exciting road for exploring the neural control of movement, and I’m looking forward to seeing where it leads.

Sanger, T. (2011). Distributed Control of Uncertain Systems Using Superpositions of Linear Operators. Neural Computation, 23(8), 1911-1934. DOI: 10.1162/NECO_a_00151

