Our lab studies how people hear. Sound is produced by events in the world, travels through the air as pressure waves, and is measured by two sensors (the ears). The brain uses signals from these sensors to infer a vast number of important things: what someone said, their emotional state when they said it, and the whereabouts and nature of events we cannot see, to name a few. Humans make such auditory judgments hundreds of times a day, but their basis in our acoustic sensory input is often not obvious, and reflects many stages of sophisticated processing that remain poorly characterized.
We seek to understand the computational basis of these impressive yet routine perceptual inferences. We hope to use our research to improve devices for assisting those whose hearing is impaired, and to design more effective machine systems for recognizing and interpreting sound, which at present perform dramatically worse in real-world conditions than do normal human listeners.
Our work combines behavioral experiments with computational modeling and with tools for analyzing, manipulating, and synthesizing sounds. We draw particular inspiration from machine hearing research: we aim to conduct experiments in humans that reveal how we succeed where machine algorithms fail, and to use approaches from machine hearing to motivate new experimental work. We also have strong ties to auditory neuroscience. Models of the auditory system provide the backbone of our perceptual theories, and we collaborate actively with neurophysiologists and cognitive neuroscientists. The lab thus functions at the intersection of psychology, neuroscience, and engineering.