Recent experiments have discovered that birdsong exhibits context dependence: the same
syllable can lead to different continuations depending on what was sung in the past. This
finding raises exciting new questions about the neural mechanisms behind birdsong, but it
also poses new challenges for modeling the song syntax. In this dissertation, I develop new
methods to model the song syntax using a biologically realistic model, the partially
observed Markov model (POMM). I first develop a test to determine whether a POMM is
sufficient to model the songs. The test examines over-generalization, i.e., the extent to
which the model generates songs that were never observed, and probability mismatch, i.e.,
the difference between the probability the model assigns to each song and the frequency
with which that song is observed.
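The two criteria can be illustrated with a minimal sketch. The function names and the model interface (a dictionary mapping songs to model probabilities) are hypothetical, and total variation distance is used here as one simple choice of mismatch measure; the dissertation's actual test statistic may differ.

```python
from collections import Counter

def over_generalization(model_probs, observed_songs):
    """Probability mass the model places on songs that were never observed.
    model_probs: dict mapping song (tuple of syllables) -> model probability.
    (Hypothetical interface, for illustration only.)"""
    observed = set(observed_songs)
    return sum(p for song, p in model_probs.items() if song not in observed)

def probability_mismatch(model_probs, observed_songs):
    """Total-variation-style distance between model probabilities and
    empirical song frequencies (one simple choice of mismatch measure)."""
    counts = Counter(observed_songs)
    n = len(observed_songs)
    songs = set(model_probs) | set(counts)
    return 0.5 * sum(abs(model_probs.get(s, 0.0) - counts[s] / n) for s in songs)

# Toy example: the model generates three songs, only two of which appear in the data.
model = {("a", "b"): 0.5, ("a", "c"): 0.3, ("a", "d"): 0.2}
data = [("a", "b")] * 6 + [("a", "c")] * 4
og = over_generalization(model, data)   # mass on the unobserved song ("a", "d")
pm = probability_mismatch(model, data)
```

A model that both assigns little mass to unseen songs and closely tracks the observed song frequencies would pass a test built from statistics like these.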
The test is then used to find the minimal POMM compatible with the data, by pruning
a redundant POMM until any further simplification would be rejected by the test. While
this pruning approach is conceptually simple, it is computationally intensive. To
accelerate the process, a second approach is proposed that estimates the minimal POMM
directly, based on non-negative matrix factorization (NMF). As an application, the
methods are used to study the role of auditory feedback in the
context dependence phenomenon. By comparing the birdsong syntax before and shortly
after deafening, it is shown that deafening significantly reduces context dependence in
the syntax but does not eliminate it. Thus, while auditory feedback contributes to
context dependence, there must be other neural mechanisms encoding the context-dependent
syntax. I expect these methods to be valuable in future investigations of the neural
basis of context dependence.
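The NMF computation underlying the second approach can be sketched with the classic Lee–Seung multiplicative updates; this is a generic pure-Python NMF illustration with a hypothetical toy matrix, not the dissertation's estimation procedure for POMMs.

```python
import random

def matmul(A, B):
    """Multiply two matrices represented as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def nmf(V, rank, iters=1000, eps=1e-9, seed=0):
    """Factor a non-negative matrix V ~ W H via Lee-Seung multiplicative updates.
    Multiplicative updates keep W and H non-negative given non-negative starts."""
    rng = random.Random(seed)
    n, m = len(V), len(V[0])
    W = [[rng.random() + 0.1 for _ in range(rank)] for _ in range(n)]
    H = [[rng.random() + 0.1 for _ in range(m)] for _ in range(rank)]
    for _ in range(iters):
        WT = transpose(W)
        num, den = matmul(WT, V), matmul(matmul(WT, W), H)
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps) for j in range(m)]
             for i in range(rank)]
        HT = transpose(H)
        num, den = matmul(V, HT), matmul(W, matmul(H, HT))
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps) for j in range(rank)]
             for i in range(n)]
    return W, H

# Toy example: an exactly rank-2 non-negative matrix recovered by a rank-2 factorization.
V = [[1.0, 2.0, 3.0],
     [2.0, 4.0, 6.0],
     [1.0, 3.0, 5.0]]
W, H = nmf(V, rank=2)
WH = matmul(W, H)
err = max(abs(V[i][j] - WH[i][j]) for i in range(3) for j in range(3))
```

The factorization rank plays a role analogous to model size: the smallest rank that reconstructs the data well suggests the smallest model compatible with it.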