Time is continuous, and events must be extracted from it. Encoding a temporal stream therefore requires segmenting it into constituent parts that can be processed separately, and then analysing those parts individually. This project will develop methods for analysing dynamic sequences in perception and neural activity, in order to find event boundaries and to characterise the evolution of the events they delimit. In humans, we will analyse the perception of facial expressions, which provide constrained sequences (Johnston Lab). The generic tools developed will then be tested on measurements of population activity in the visual cortex of mice (Solomon Lab).
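To make the idea of finding event boundaries concrete, a minimal sketch of one possible approach is shown below: a simple change-point heuristic that flags times where the local mean of a signal shifts abruptly. This is an illustrative toy, not the project's proposed method; the function name, window size, and threshold are all assumptions introduced here for the example.

```python
import numpy as np

def event_boundaries(signal, window=20, z_thresh=3.0):
    """Flag candidate event boundaries where the local mean shifts.

    A toy change-point heuristic (illustrative only): compare the means
    of adjacent windows and mark time points where the difference
    exceeds z_thresh standard errors.
    """
    boundaries = []
    for t in range(window, len(signal) - window):
        left = signal[t - window:t]
        right = signal[t:t + window]
        # Pooled standard error of the difference between window means.
        pooled_se = np.sqrt(left.var() / window + right.var() / window)
        if pooled_se > 0 and abs(right.mean() - left.mean()) / pooled_se > z_thresh:
            boundaries.append(t)
    return boundaries

# Synthetic stream: two "events" with different baselines, boundary at t = 100.
rng = np.random.default_rng(0)
stream = np.concatenate([rng.normal(0, 1, 100), rng.normal(4, 1, 100)])
candidates = event_boundaries(stream)
```

On this synthetic stream the heuristic marks a cluster of candidate points around the true boundary; real perceptual or neural sequences would of course call for richer models of the within-event dynamics.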