My academic interests include:
Time is the substrate of our lives, and our perception of the passage of time gives shape to every aspect of our life experience. It is partially because it is so central to our worlds that our “time-sense” is extremely hard to pin down or even define clearly. If the other senses are software, we might consider the time-sense the operating system on which they run: it seems to be distributed widely across the brain, and in its absence we can hardly draw any meaning from our other senses at all.
One sense, however, is rooted more firmly in time perception than the others. Though it takes us time to perceive and identify an image, taste, or smell, we intuitively feel that we can abstract these percepts into concepts outside of time, or at least with no explicit dependence on time. A sound, however, is inconceivable outside of the time during which it unfolds (unless it is a pure, unchanging tone, and perhaps not even then).
Neural imaging suggests that the areas of our brain that we use to plan and execute movements — particularly the supplementary motor area and the putamen — also play roles in structuring our sense of time in the subsecond-to-seconds range. When we listen to rhythmic sounds, these brain regions are recruited, possibly for the purpose of recognizing rhythmic structure and predicting upcoming auditory events. I believe that coming to understand this function of the motor system will not only help us understand the neural basis of time-keeping and time perception, but also serve as a gateway into a deeper understanding of the brain basis of movement and its relationship to perception and cognition.
In 1924, Hans Berger recorded the first human electroencephalogram (EEG), non-invasively detecting electrical activity in the brain. He observed two distinct rhythmic oscillations: one, at a frequency near 10 Hz, was prominent over the visual cortex when the subject’s eyes were closed, and the other, at a frequency near 20 Hz, largely replaced it when the eyes opened. The first he named “alpha waves,” and the second, naturally, “beta waves.”
Since then, other electrical oscillations have been observed and categorized, spanning the whole spectrum of frequencies from 0.5 Hz to 600 Hz or more. And yet, ninety years later, our understanding of the mechanisms giving rise to these oscillations is rudimentary at best, and our understanding of their function in cognitive processes is hardly greater than Berger’s.
I am interested in drawing on experimental results to mathematically describe and study the mechanisms generating cognitive rhythms, with the aim of discovering why they are suited to their respective roles in cognition. I am particularly interested in the gamma wave (30-100 Hz), associated with attention and active information processing, and its counterpart, Berger’s alpha wave, now associated with suppressing irrelevant sensory input and competing neural activity. My intimate familiarity with the difficulties of proper attentional regulation further piques my interest in demystifying these rhythms (when it’s not piquing my interest in something completely unrelated).
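To give a concrete flavor of the kind of toy model I have in mind (and it is only a toy: every parameter below is an illustrative assumption, not a measurement), here is a Python sketch of a noise-driven excitatory-inhibitory feedback loop whose time constants and coupling strengths put its resonance in roughly the gamma band:

```python
# A toy sketch, not a fitted model: a damped excitatory-inhibitory feedback loop,
# driven by noise, resonates at a frequency set by its time constants and coupling
# strengths. All parameter values are illustrative assumptions.
import numpy as np

tau_e, tau_i = 5.0, 5.0        # assumed E and I time constants (ms)
w_ei, w_ie = 2.0, 2.0          # assumed I-to-E and E-to-I coupling strengths

# Linearized E-I loop:  tau_e dE/dt = -E - w_ei*I + noise,  tau_i dI/dt = -I + w_ie*E
A = np.array([[-1.0 / tau_e, -w_ei / tau_e],
              [ w_ie / tau_i, -1.0 / tau_i]])
eig = np.linalg.eigvals(A)
print("eigenfrequency of the loop = %.1f Hz" % (abs(eig[0].imag) / (2 * np.pi) * 1000))

dt, T = 0.1, 2000.0            # ms
steps = int(T / dt)
rng = np.random.default_rng(0)
x = np.zeros(2)                # x[0] = E rate, x[1] = I rate
trace = np.empty(steps)
for t in range(steps):
    x = x + dt * (A @ x)
    x[0] += rng.normal() * np.sqrt(dt) / tau_e   # noisy drive to the E population
    trace[t] = x[0]

# The smoothed power spectrum of E shows a peak in the gamma band (30-100 Hz).
freqs = np.fft.rfftfreq(steps, d=dt / 1000.0)    # Hz
power = np.abs(np.fft.rfft(trace - trace.mean())) ** 2
power = np.convolve(power, np.ones(25) / 25, mode="same")
print("spectral peak near %.1f Hz" % freqs[1:][np.argmax(power[1:])])
```

The interesting questions begin where this sketch ends: which circuit properties set the frequency, and why that frequency suits the cognitive role it serves.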
In 1890, Henri Poincaré’s study of that perennial mathematical muse, celestial mechanics, led him to publish an error-ridden but monolithic 270-page paper establishing the groundwork for the qualitative study of dynamical systems. The dynamicist uses rigorous mathematical techniques to justify qualitative statements about systems of differential equations, which can be used to model everything from nuclear reactors to pendulums to neurons. These statements often describe a system’s “stability” and the many alternatives to stability, including periodic and chaotic behavior.
I am interested in the application of techniques from dynamical systems theory to problems in systems neuroscience. In particular, I am interested in the existence of low-dimensional attracting manifolds in high-dimensional systems, which can considerably constrain the behavior and simplify the mathematical description of a neuronal network.
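A deliberately simple (and linear) illustration of that idea: the Python sketch below constructs a fifty-dimensional system that contracts quickly onto a two-dimensional invariant plane, and principal component analysis of the simulated trajectory recovers the low dimensionality once the transient has died away. The construction is an arbitrary toy, not a model of any particular network:

```python
# A toy illustration: a 50-dimensional linear system whose fast directions decay
# quickly, leaving all long-run activity on a 2-dimensional invariant plane.
# The dimension, decay rates, and rotation speed are arbitrary assumptions.
import numpy as np

n, omega = 50, 2 * np.pi          # state dimension; rotation rate on the slow plane
rng = np.random.default_rng(1)

# Block-diagonal generator (one 2-D rotation block, strong decay elsewhere),
# hidden behind a random orthogonal change of coordinates.
D = -5.0 * np.eye(n)
D[:2, :2] = [[0.0, -omega], [omega, 0.0]]
Q, _ = np.linalg.qr(rng.normal(size=(n, n)))
M = Q @ D @ Q.T

dt, steps = 0.001, 20000
x = rng.normal(size=n)
traj = np.empty((steps, n))
for t in range(steps):
    x = x + dt * (M @ x)          # forward Euler integration
    traj[t] = x

# Discard the transient, then ask how much variance two principal components explain.
settled = traj[2000:] - traj[2000:].mean(axis=0)
_, s, _ = np.linalg.svd(settled, full_matrices=False)
explained = (s[:2] ** 2).sum() / (s ** 2).sum()
print(f"variance captured by 2 of {n} dimensions: {explained:.4f}")
```

In a genuinely nonlinear network the attracting set need not be flat, but this same signature of collapsed dimensionality is what makes the mathematical description tractable.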
In 1959, Alfréd Rényi (self-proclaimed “device for turning coffee into theorems”) and the prolific math-hobo Paul Erdős (apparently a device for turning methamphetamine into theorems) began publishing a series of papers examining the properties of randomly connected abstract networks. The mathematical study of networks was retrofitted for application to real-world networks by Duncan Watts and Steven Strogatz in 1998 with their model of “small-world networks,” and by Albert-László Barabási in 1999 with his study of “scale-free networks.” Both classes of network were characterized by their connectivity structure (or “architecture”) and came with a proposed process by which networks with these architectures could arise.
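All three architectures are easy to play with. The sketch below (assuming the networkx library is available; graph sizes and parameters are arbitrary choices) generates an Erdős–Rényi random graph, a Watts-Strogatz small-world graph, and a Barabási-Albert scale-free graph, and compares the clustering, path lengths, and hub sizes that distinguish them:

```python
# A quick empirical comparison of the three canonical architectures named above,
# assuming the networkx library is installed; all sizes and parameters are arbitrary.
import networkx as nx

n = 1000
graphs = {
    "Erdos-Renyi":     nx.erdos_renyi_graph(n, p=0.01, seed=0),
    "Watts-Strogatz":  nx.watts_strogatz_graph(n, k=10, p=0.1, seed=0),
    "Barabasi-Albert": nx.barabasi_albert_graph(n, m=5, seed=0),
}

for name, g in graphs.items():
    # Path lengths are measured on the largest connected component, in case the
    # random graph happens to come out disconnected.
    giant = g.subgraph(max(nx.connected_components(g), key=len))
    print(f"{name:16s} clustering = {nx.average_clustering(g):.3f}, "
          f"mean path length = {nx.average_shortest_path_length(giant):.2f}, "
          f"max degree = {max(d for _, d in g.degree())}")
```

Small-world graphs betray themselves with high clustering and short paths; scale-free graphs, with their hubs.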
Processes also occur “on” networks. Often, each node is assigned some type of state which then evolves under the influence of its network neighbors. The spread of memes and diseases, the population dynamics of food chains, and the synchronization of neuronal networks have all been modeled in this way. But each of these fields is still very young, and a more general theory of dynamics on networks, to the extent that it exists, is in its infancy.
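A classic toy example of such a process, sketched below with arbitrary illustrative parameters, is a population of Kuramoto phase oscillators coupled along the edges of a random graph: each node’s phase drifts at its own natural frequency but is pulled toward its neighbors’, and once the coupling is strong enough the population synchronizes.

```python
# Kuramoto phase oscillators on a random graph: each node's state (a phase) evolves
# under the influence of its network neighbors. Graph, frequencies, and coupling
# strengths below are arbitrary choices for illustration.
import numpy as np

rng = np.random.default_rng(2)
n, p = 200, 0.05
A = (rng.random((n, n)) < p).astype(float)
A = np.triu(A, 1)
A = A + A.T                                    # symmetric adjacency, no self-loops
degrees = np.maximum(A.sum(axis=1), 1.0)

omega = rng.normal(0.0, 0.5, size=n)           # natural frequencies (rad/s)
dt, steps = 0.01, 2000

def order_parameter(theta):
    return np.abs(np.exp(1j * theta).mean())   # 0 = incoherent, 1 = fully synchronized

for K in (0.1, 4.0):                           # weak vs. strong coupling
    theta = rng.uniform(0, 2 * np.pi, size=n)
    for _ in range(steps):
        pull = (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
        theta = theta + dt * (omega + (K / degrees) * pull)
    print(f"K = {K}: order parameter r = {order_parameter(theta):.2f}")
```

The same skeleton, with different node states and update rules, underlies models of epidemics, opinion spread, and neuronal synchrony.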
I am interested in the principles underlying the networks that make life on earth possible: feeding networks, gene interaction networks, and neuronal networks, to name a few. In blatant disregard for the arbitrary boundary between processes that form networks and processes on networks, many of these networks develop their architectures under the influence of complex network dynamics. I am eager to explore the various interactions of architecture and dynamics in specific networks, while keeping my eye out for abstractions that unite or differentiate them. One abstraction I have found particularly intriguing is the set of constraints placed on network architecture by underlying topological spaces such as feature space, geographical space, and time.
My Erdős number is 4.
For further reading, check out The Neural Lyre: Poetic Meter, the Brain, and Time.