Home Page of Acoustic Ecology Project

Main page | Events | Project Members | Publications | Resources for members only

Understanding Listening

The goal of the present research team is to understand how humans of all ages, with either normal or impaired hearing, listen in the realistic situations they encounter in everyday life. We aim to incorporate new knowledge about how people process auditory information into a more general cognitive science model that accounts for how multi-modal sensory inputs (auditory, visual) are coordinated during information processing, and how sensory and motor processing are coordinated during the perception and production of sound. To be complete, such a meta-model would also need to take into account how information processing is modulated by the demands or constraints of the social and physical context. Understanding how people process information will enable us to define and measure the 'successful' listener, and to design environments whose physical, technological, and social features enhance the performance and experience of listeners in everyday life. This knowledge will also inform the design of human-computer communication in which the computer acts as the listener.

Acoustic Ecology is the term that captures our new conceptual approach to human auditory information processing. It builds on traditional disciplinary research foundations; its key novel feature is that our interdisciplinary research reinstates the listener in the listening environment. Specifically, our approach combines traditional disciplinary research focussing on listeners (e.g. audiology, linguistics, neuroscience, otolaryngology, and psychology) with research focussing on the physical environments (room design, computer science, engineering) and the social situations (anthropology, education, and linguistics) in which listening occurs.

The project organization is shown in the diagram below. Click on the image to see how the project has evolved since the initial concept was created; you will see some wild interdisciplinary action! With the mouse you can then try to reorganize the pieces and reintegrate the subprojects. You have two minutes!

The project has three research areas, each with several subareas that examine these interlocking issues:

  1. The psychology of listening
    Research in this area comprises four interrelated studies of cognitive processing in listening.
  2. Synthesis of complex environmental and human sounds
    This aspect of the research is concerned with the synthesis of environmental sounds, particularly percussive or contact sounds in artificial and natural settings, and with vocal-tract synthesis to model human speech production. This work contributes to our understanding of variations in the production of sounds for human and computer perception. It is also concerned with the impact of the acoustical properties of the settings in which sounds are produced.
  3. Ethnographies of acoustic ecology
    The goal of this research area is to understand how listening unfolds in the realistic situations people encounter in everyday life, by examining the "real world" social and acoustic ecology in which hearing is transformed into active listening. These ethnographic projects investigate patterns of sound exposure, particular kinds of language use, and the multimodal (auditory and visual) aspects of social interaction in small groups. A literature summary of issues relevant to communication in a noisy environment is available; topics covered in the summary are: hearing (binaural/spatial), hearing in a noisy environment and the cocktail party effect, room acoustics, hearing in the classroom, attention, speech in a noisy environment, gesture, deception, discourse/conversation structures and processes, Relevance Theory, backchannelling, phatic communication, cognition and communication, children's discourse, classroom discourse, classroom politics, and classroom norms. We have investigated two settings:

Page maintained by Kees van den Doel

Sound effects on these webpages are synthesized in real time using the JASS system developed for this project.
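A common technique for real-time synthesis of the percussive and contact sounds studied in research area 2 is modal synthesis: an impact excites an object's resonant modes, and each mode rings as an exponentially decaying sinusoid. The sketch below is purely illustrative (it is not JASS code, and the mode frequencies, dampings, and gains are made-up values, not measurements from this project).

```python
import math

def modal_impact(freqs_hz, dampings, gains, dur_s=0.5, sr=44100):
    """Synthesize an impact sound as a sum of exponentially
    decaying sinusoids, one per resonant mode of the object."""
    n = int(dur_s * sr)
    out = [0.0] * n
    for f, d, g in zip(freqs_hz, dampings, gains):
        for i in range(n):
            t = i / sr
            # each mode: gain * decay envelope * sinusoid at the mode frequency
            out[i] += g * math.exp(-d * t) * math.sin(2 * math.pi * f * t)
    return out

# Hypothetical modal data for a small struck bar (illustrative values only)
samples = modal_impact(
    freqs_hz=[440.0, 1210.0, 2380.0],
    dampings=[8.0, 12.0, 20.0],
    gains=[1.0, 0.5, 0.25],
)
```

Changing the mode data changes the perceived material and shape of the object, which is why a small set of parameters suffices for interactive, real-time sound effects.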

This project is made possible through a grant from the Peter Wall Institute for Advanced Studies, UBC, Canada.