My field of study is theoretical neuroscience, which is like computational neuroscience but with more math and fewer computers. For a more thorough introduction to the field, read the essay What Can A Mathematician Do In Neuroscience by Jan Karbowski.
In alphabetical order, my research interests are:
In 1924, Hans Berger built the first electroencephalogram (EEG) to non-invasively detect electrical activity in the human brain. He observed two distinct rhythmic oscillations: one, at a frequency near 10 Hz, was prominent in the visual cortex when the subject’s eyes were closed, and the other, at a frequency near 20 Hz, largely replaced it when the eyes opened. The first he named “alpha waves,” and the second, naturally, “beta waves.”
Since then, other electrical oscillations have been observed and categorized, spanning the whole spectrum of frequencies from 0.5 Hz to 600 Hz or more. And yet ninety years later, our understanding of the mechanisms giving rise to these oscillations is rudimentary at best, and our understanding of their function in cognitive processes is hardly greater than Berger’s.
I am interested in drawing on experimental results to mathematically describe and study the mechanisms generating cognitive rhythms, with the object of discovering why they are suitable to their respective roles in cognition. I am particularly interested in the gamma wave (30-100 Hz), associated with attention and active information processing, and its counterpart, Berger’s alpha wave, now associated with processes of ignoring sensory input and suppressing other neural activity. My intimate familiarity with the difficulties of proper attentional regulation further piques my interest in demystifying these rhythms (when it’s not piquing my interest in something completely unrelated).
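The band-sorting Berger began is, at bottom, spectral analysis: an oscillation's "band" is just where its power concentrates in frequency. As a minimal sketch (assuming NumPy, and a synthetic signal standing in for a real EEG trace), here is how a 10 Hz alpha-like component shows up as the dominant peak of a power spectrum:

```python
import numpy as np

# Synthetic 2-second "EEG" trace sampled at 250 Hz: a 10 Hz alpha-band
# sinusoid plus a weaker 20 Hz beta-band component and additive noise.
fs = 250
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(0)
trace = (np.sin(2 * np.pi * 10 * t)
         + 0.4 * np.sin(2 * np.pi * 20 * t)
         + 0.2 * rng.standard_normal(t.size))

# Power spectrum via the real FFT; the dominant peak sits at the
# frequency bin carrying the most power.
freqs = np.fft.rfftfreq(t.size, 1 / fs)
power = np.abs(np.fft.rfft(trace)) ** 2
peak_hz = freqs[np.argmax(power)]
print(peak_hz)  # dominant frequency, in Hz
```

Real EEG analysis is far messier (windowing, artifact rejection, nonstationarity), but the principle of assigning rhythms to bands by their spectral peaks is the same.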
In 1890, Henri Poincaré’s study of that perennial mathematical muse, celestial mechanics, led him to publish an error-ridden but monumental 270-page paper establishing the groundwork for the qualitative study of dynamical systems. The dynamicist uses rigorous mathematical techniques to justify qualitative statements about systems of differential equations, which can be used to model everything from nuclear reactors to pendulums to neurons. These statements often describe a system’s “stability” and the many alternatives to stability, including periodic and chaotic behavior.
I am interested in the application of techniques from dynamical systems theory to problems in systems neuroscience. In particular, I am interested in the existence of low-dimensional attracting manifolds in high-dimensional systems, which can considerably constrain the behavior and simplify the mathematical description of a neuronal network.
In 1959, Alfréd Rényi (self-proclaimed “device for turning coffee into theorems”) and the prolific math-hobo Paul Erdős (apparently a device for turning methamphetamine into theorems) began the publication of a series of papers examining the properties of randomly connected abstract networks. The mathematical study of networks was retrofitted for application to real-world networks by Duncan Watts and Steven Strogatz in 1998 with their models of “small-world networks,” and by Albert-László Barabási and Réka Albert in 1999 with their study of “scale-free networks.” Both classes of networks were characterized by their connectivity structure (or “architecture”), and each came with a proposed process by which networks with that architecture could arise.
Processes also occur “on” networks. Often, each node is assigned some type of state which then evolves under the influence of its network neighbors. The spread of memes and diseases, the population dynamics of food chains, and the synchronization of neuronal networks have all been modeled in this way. But each of these fields is still very young, and a more general theory of dynamics on networks, to the extent that it exists, is in its infancy.
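As a minimal sketch of a process "on" a network (all names here are my own, and the graph is a toy stand-in for the Watts–Strogatz construction): a susceptible-infected contagion spreading over a ring of nodes with a few random shortcut edges. Each step, every infected node infects all of its neighbors; the shortcuts cut the spreading time well below the ring's diameter.

```python
import random

def ring_with_shortcuts(n, shortcuts, rng):
    # Ring lattice: node i links to i-1 and i+1 (mod n) ...
    edges = {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}
    # ... plus a few random long-range shortcut edges.
    for _ in range(shortcuts):
        a, b = rng.sample(range(n), 2)
        edges[a].add(b)
        edges[b].add(a)
    return edges

def si_spread(edges, seed=0):
    # Deterministic SI dynamics: infected nodes infect all neighbors
    # each step; return how many steps until everyone is infected.
    infected = {seed}
    steps = 0
    while len(infected) < len(edges):
        infected |= {nbr for node in infected for nbr in edges[node]}
        steps += 1
    return steps

rng = random.Random(42)
net = ring_with_shortcuts(100, 10, rng)
steps = si_spread(net)
print(steps)  # a plain 100-ring would need 50 steps; shortcuts need fewer
```

The same skeleton (a state per node, an update rule driven by neighbors) underlies models of memes, epidemics, food webs, and neuronal synchronization alike.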
I am interested in the principles underlying the networks that make life on earth possible: feeding networks, gene interaction networks, and neuronal networks, to name a few. In blatant disregard for the arbitrary boundary between processes that form networks and processes on networks, many of these networks develop their architectures under the influence of complex network dynamics. I am eager to explore the various interactions of architecture and dynamics in specific networks, while keeping my eye out for abstractions that unite or differentiate them. One particular abstraction I have found intriguing is the constraints placed on network architecture by underlying topological spaces such as feature-space, geographical space, and time.
My Erdős number is 4.
Time is the substrate of our lives, and our perception of the passage of time gives shape to every aspect of our life experience. It is partially because it is so central to our worlds that our “time-sense” is extremely hard to pin down or even define clearly. If the other senses are software, we might consider the time-sense the operating system on which they run: it seems to be distributed widely across the brain, and in its absence we can hardly draw any meaning from our other senses at all.
One sense, however, is rooted more firmly in time perception than the others. Though it takes us time to perceive and identify an image, taste, or smell, we intuitively feel that we can abstract these percepts into concepts outside of time, or at least with no explicit dependence on time. A sound, however, is inconceivable outside of the time during which it unfolds (unless it is a pure, unchanging tone, and perhaps not even then). And sound serves the time-sense as well, as we learn when we are taught to count off seconds with “mississippi.” On the canvas of time, our preferred sign-system is speech, and our arts are music and dance.
Neural mechanisms for time-perception have been proposed, but none have been verified (except some of those giving rise to circadian rhythms, which are extremely interesting but also very limited in scope). Recent results have drawn connections between cycles of the alpha- and theta-band EEG oscillations and the “perceptual moment,” suggesting that the electrical rhythms in our brain may help to anchor us in time. The implications are powerful and far-reaching. As an oscillation appears, disappears, or varies in frequency, does our perception of time change qualitatively? Do the variations in frequency signature between brains account for some of the variations in individuals’ relationships to time? When I play music, my relationship to time changes completely — on a physiological level, what has changed?
I am interested in the relationship between cognitive rhythms, auditory perception, and time-perception, and I am very excited to see what we learn over the course of my lifetime.
For further reading, check out The Neural Lyre: Poetic Meter, the Brain, and Time.