Home Page of Acoustic Ecology Project
Understanding Listening
The goal of the present research team is to understand how humans of all
ages, who have either normal or impaired hearing, listen in the
realistic situations they encounter in everyday life. We aim to
incorporate new knowledge about how people process auditory information
into a more general cognitive science model that accounts for how
multi-modal sensory inputs (auditory, visual) are coordinated during
information processing and how sensory and motor processing are
coordinated during perception and production of sound. To be complete,
such a meta-model would also need to take into account how information
processing is modulated by the demands or constraints associated with
the social and physical context. Understanding how people process
information will enable us to define and measure the 'successful' listener,
and to design environments with appropriate physical, technological, and
social features to enhance or facilitate the performance/experience of
listeners in everyday life. This knowledge will also inform the design
of human-computer communication where the computer acts as the listener.
Acoustic Ecology is a term that captures our new conceptual approach
to human auditory information processing. It builds on traditional
disciplinary research foundations, with the key novel feature being
that our interdisciplinary research reinstates the listener in the
listening environment. Specifically, our approach combines traditional
disciplinary research focussing on listeners (e.g. audiology,
linguistics, neuroscience, otolaryngology, and psychology) with research
focussing on the physical environments (room design, computer science,
engineering) and the social situations (anthropology, education, and
linguistics) in which listening occurs.
The project organization is drawn below. To see how the project has
evolved since the initial concept was created, click on the image below;
you will see some wild interdisciplinary action! You can then use the
mouse to try to reorganize the pieces and reintegrate the subprojects.
You have 2 minutes!
The project has three research areas, each with several subareas to examine these interlocking
issues:
- The psychology of listening
Research in this area comprises four interrelated studies of the cognitive processing involved in listening.
- Perception of synthetic sounds
This research was designed to develop a metric for analyzing the perception
and cognitive processing of synthesized contact or percussive sounds.
Synthetic sounds were tested on human subjects to determine the
quality of the sounds relative to the target sounds they are
designed to approximate. The resulting data are used to verify and tune
the mode selection methodologies, and to increase our understanding of
what determines the subjective quality of a synthetic sound effect.
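The page does not specify the metric itself. As an illustrative baseline only (our assumption, not the project's actual method), an objective comparison between a synthetic sound and its target can be sketched as a log-spectral distance:

```python
import numpy as np

def log_spectral_distance(a, b, n_fft=1024):
    """RMS distance (in dB) between the magnitude spectra of two sounds.

    A small floor avoids log(0); 0 dB means the spectra are identical.
    """
    A = np.abs(np.fft.rfft(a, n_fft)) + 1e-10
    B = np.abs(np.fft.rfft(b, n_fft)) + 1e-10
    return float(np.sqrt(np.mean((20.0 * np.log10(A / B)) ** 2)))

# Toy example: a synthetic tone with the right spectral shape but half
# the gain of its target (values are illustrative, not project data).
t = np.arange(1024) / 44100.0
target = np.sin(2 * np.pi * 440 * t)
synthetic = 0.5 * np.sin(2 * np.pi * 440 * t)
```

Such purely spectral measures are exactly what perceptual testing on human subjects would calibrate or replace, since subjective quality need not track spectral distance.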
- Acoustic cues to discourse processing
The purpose of this area of research is to identify the acoustical
properties associated with the perception,
identification, and discrimination of categories of linguistic elements,
and to identify the properties that characterize their production.
- Bimodal attentional processing
This work studies the interaction between listening and watching.
One study examined whether sounds can draw the perception of lights further
apart in time in a visual temporal order judgment task.
A second study examined whether there is a synergy between where people
attend visually and where people attend auditorily. In other words, do
we hear sound better from where we are looking?
- Temporal auditory processing of speech and noise
- Synthesis of complex environmental and human sounds
This aspect of the research is concerned with the synthesis of
environmental sounds, particularly percussive or contact sounds in
artificial and natural settings, and the synthesis of the vocal tract to
model human speech production. This work contributes to the
understanding of variations in the production of sounds for human and
computer perception. It is also concerned with the impact of the
acoustical properties of the settings in which sounds are produced.
- Real-time Synthesis of Sound-effects in Virtual Environments
Algorithms were developed and implemented for real-time synthesis of
realistic sound effects for interactive simulations and animation. These
sound effects are produced automatically from 3D models using dynamic
simulation and user interaction. The implementations of these
algorithms are distributed online with the
JASS
system, an audio synthesis software package developed in this project.
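The contact sounds described above are typically built from the vibrational modes of the struck object. A minimal sketch of that idea, a bank of exponentially damped sinusoids with made-up mode parameters (not values from JASS or from this project), looks like:

```python
import numpy as np

def modal_impact(freqs, dampings, gains, sr=44100, dur=1.0):
    """Synthesize an impact sound as a sum of damped sinusoids.

    Each mode contributes g * exp(-d*t) * sin(2*pi*f*t). In a real
    system the frequencies, dampings, and gains come from a physical
    model or a measurement of the struck object.
    """
    t = np.arange(int(sr * dur)) / sr
    out = np.zeros_like(t)
    for f, d, g in zip(freqs, dampings, gains):
        out += g * np.exp(-d * t) * np.sin(2 * np.pi * f * t)
    peak = np.max(np.abs(out))
    return out / peak if peak > 0 else out  # normalize to avoid clipping

# Three modes loosely resembling a small struck metal bar
# (illustrative numbers, not measured values).
sound = modal_impact([523.0, 1410.0, 2750.0], [6.0, 9.0, 14.0], [1.0, 0.5, 0.3])
```

Because each mode is a cheap recursive oscillator, this formulation lends itself to the real-time, interaction-driven synthesis the project describes.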
- 3D Computer Simulation of the Human Vocal Tract
The goal of this project is to create a 3D working model of the vocal
and facial articulators that is driven not by acoustic parameters but
by articulatory parameters. This model will be used as the basis for an
aerodynamically driven articulatory speech synthesizer. The new
synthesizer will produce both the oral and the facial gestures
necessary for communicating with people.
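For contrast with the articulatory approach described above, the classical acoustic-parameter route it improves on can be sketched as a simple source-filter (formant) synthesizer: a glottal pulse train passed through resonators standing in for the vocal tract. This is our illustrative sketch, not the project's model, and the formant values are rough textbook-style numbers:

```python
import numpy as np

def formant_filter(x, freq, bw, sr=16000):
    """Two-pole resonator approximating a single vocal-tract formant."""
    r = np.exp(-np.pi * bw / sr)          # pole radius from bandwidth
    theta = 2 * np.pi * freq / sr         # pole angle from center frequency
    a1, a2 = 2 * r * np.cos(theta), -r * r
    y = np.zeros_like(x)
    for n in range(len(x)):
        y[n] = x[n] + a1 * y[n - 1] + a2 * y[n - 2]  # zero initial state
    return y

def vowel(f0, formants, bws, sr=16000, dur=0.5):
    """Impulse-train glottal source filtered through formant resonators."""
    n = int(sr * dur)
    src = np.zeros(n)
    src[:: sr // f0] = 1.0                # pitch pulses at roughly f0 Hz
    out = src
    for f, b in zip(formants, bws):
        out = formant_filter(out, f, b, sr)
    return out / np.max(np.abs(out))

# Formant targets loosely like an adult /a/ (illustrative values).
a_vowel = vowel(120, [700, 1200, 2600], [80, 90, 120])
```

An articulatory synthesizer instead derives the acoustics from tongue, jaw, and lip positions, which is what makes the matched facial gestures described above possible.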
- Ethnographies of acoustic ecology
A literature summary of issues relevant to communication in a noisy
environment is available. (Topics covered in the summary are: hearing
(binaural/spatial), hearing in a noisy environment and the cocktail party
effect, room acoustics, hearing in the classroom, attention, speech in a
noisy environment, gesture, deception, discourse/conversation structures
and processes, Relevance Theory, backchannelling, phatic communication,
cognition and communication, children's discourse, classroom discourse,
classroom politics, and classroom norms.)
We examine the "real
world" social and acoustic ecology in which hearing is transformed into active
listening. These projects investigate patterns of sound exposure and particular
kinds of language use. These ethnographic studies examine the multimodal
(auditory and visual) aspects of social interaction in small groups. We
have investigated two settings:
- Learning to listen in schools
Classroom data collection was carried out in four classrooms.
Six students in each classroom were wired with recording equipment
to capture their auditory experience. The classes were held in the
main building, where teachers have reported problems with the acoustics.
- The cocktail party effect
This famous effect which allows us to focus attention on the person we
are talking to at a noisy cocktail party, was measured by recording the
experience of people attending a dinner.
To experience the cocktail party effect yourself,
go
here. For a demo video indicating visual attention as
recorded by a head-mounted bullet cam, go
here.
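One cue underlying the cocktail party effect is spatial hearing: a sound reaches the two ears at slightly different times. A minimal sketch of that interaural time difference (using Woodworth's spherical-head approximation; the head radius and the panning scheme are our assumptions for illustration) is:

```python
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s in air
HEAD_RADIUS = 0.0875     # m, a common approximate value

def itd_seconds(azimuth_deg):
    """Woodworth's approximation of the interaural time difference."""
    az = np.radians(azimuth_deg)
    return HEAD_RADIUS / SPEED_OF_SOUND * (az + np.sin(az))

def pan_itd(mono, azimuth_deg, sr=44100):
    """Return (left, right) channels with the far ear delayed by the ITD."""
    delay = int(round(abs(itd_seconds(azimuth_deg)) * sr))
    delayed = np.concatenate([np.zeros(delay), mono])
    direct = np.concatenate([mono, np.zeros(delay)])
    if azimuth_deg >= 0:   # source on the right: the left ear hears it later
        return delayed, direct
    return direct, delayed
```

At 90 degrees azimuth this gives a delay of roughly 0.65 ms, the kind of binaural cue that lets listeners segregate spatially separated talkers in noise.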
Page maintained by Kees van den Doel
Sound-effects for the webpages are synthesized in real-time
using the JASS system developed for this project.
This project is made possible through a grant from the Peter Wall
Institute for Advanced Studies.