Members: Wataru Kobayashi (OB), Masashi Okada (D1), Lim Malvin Handaya (M1)

3D Sound Effects for Embedded Systems

Sound Localization

[Figure: soundimage.png]

A sound reaching a listener is influenced by its surroundings through diffraction and reflection at walls and floors, and at the listener's torso, shoulders, and outer ears. The sound then arrives at both eardrums and is perceived by the auditory organs. The spatial image perceived by human hearing in this way is referred to as a "sound image," and the perception of the direction and distance of a sound image is referred to as "sound localization."

If the factors behind sound localization can be identified and controlled by digital signal processing, a listener can experience a virtual sound as convincingly as an actual sound in the real world.

Head-Related Transfer Functions

[Figure: hrtf.png]

The transmission path between a sound source and a listener's eardrum can be modeled as an acoustical system that takes the waveform at the sound source as input and produces the waveform at the eardrum as output. The corresponding transfer function is referred to as the head-related transfer function (HRTF). Sound localization is realized by simulating this HRTF.
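
As a concrete illustration of HRTF simulation, the sketch below convolves a mono source with a pair of head-related impulse responses (HRIRs), the time-domain counterpart of the HRTFs. It is a minimal sketch, not the implementation discussed on this page; the 128-tap filter length is an assumption, and the hrir_l/hrir_r arrays stand in for data from a measured HRTF database.

/*
 * Minimal sketch: binaural rendering of one mono source by FIR
 * convolution with a measured HRIR pair (hypothetical placeholder data).
 */
#include <stddef.h>

#define HRIR_LEN 128   /* assumed filter length; real HRIRs vary */

/* Direct-form FIR convolution: out[n] = sum_k h[k] * in[n-k]. */
static void fir(const float *h, size_t hlen,
                const float *in, size_t inlen, float *out)
{
    for (size_t n = 0; n < inlen; n++) {
        float acc = 0.0f;
        for (size_t k = 0; k < hlen && k <= n; k++)
            acc += h[k] * in[n - k];
        out[n] = acc;
    }
}

/* Binaural rendering: one FIR per ear, using the HRIR pair measured
 * for the desired source direction. */
void render_binaural(const float *hrir_l, const float *hrir_r,
                     const float *mono, size_t len,
                     float *out_l, float *out_r)
{
    fir(hrir_l, HRIR_LEN, mono, len, out_l);
    fir(hrir_r, HRIR_LEN, mono, len, out_r);
}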

How to measure HRTF?

[Figures: anechoic.jpg, hats2.jpg, hats1.jpg]

Since it is difficult to calculate HRTFs analytically from the complex boundary conditions of the human body, a set of HRTFs is usually obtained by acoustical measurement. For such a measurement, small microphones are inserted into the ear canals of a human subject or of a mannequin referred to as a "dummy head." We use a Bruel & Kjaer Type 4100D dummy head (see right).

The measurement is performed in an anechoic room to simulate a free-field environment.
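
In common definitions found in the literature (the exact normalization used here may differ), the measured HRTF is the ratio of the sound pressure at the eardrum to the free-field pressure that would exist at the head-center position with the listener absent:

    H_{L,R}(\omega, \theta, \phi) = \frac{P_{L,R}(\omega, \theta, \phi)}{P_0(\omega)}

where P_{L,R} is the pressure at the left or right eardrum for a source in direction (\theta, \phi), and P_0 is the free-field reference pressure measured with the head removed.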

3D Sound Localization Method

[Figure: localization_e.png]

By simulating HRTFs, sound sources can be rendered in 3D space through stereo loudspeakers or headphones.

In our approach, HRTFs, which have complicated characteristics as illustrated below, are approximated by the frequency characteristics of each subband (low, intermediate, and high) so that the computational cost is reduced.

[Figure: proposed_e.png]
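
The sketch below illustrates the general idea of such a subband approximation: instead of a long filter per ear, the signal is split into three bands and each band is weighted by a single gain derived from the HRTF. The one-pole crossovers, band edges, and gain-per-band structure are illustrative assumptions, not the actual approximation used in our method.

/*
 * Minimal sketch: approximating an HRTF's magnitude by one gain per
 * subband (low/intermediate/high), which is far cheaper than a long FIR.
 */
#include <math.h>

#define PI_F 3.14159265f

typedef struct {
    float lp1, lp2;              /* one-pole low-pass states            */
    float a1, a2;                /* smoothing coefficients (band edges) */
    float g_low, g_mid, g_high;  /* per-band gains taken from the HRTF  */
} SubbandHrtf;

void subband_init(SubbandHrtf *s, float fs, float f_edge1, float f_edge2,
                  float g_low, float g_mid, float g_high)
{
    s->lp1 = s->lp2 = 0.0f;
    /* one-pole low-pass coefficient: a = 1 - exp(-2*pi*fc/fs) */
    s->a1 = 1.0f - expf(-2.0f * PI_F * f_edge1 / fs);
    s->a2 = 1.0f - expf(-2.0f * PI_F * f_edge2 / fs);
    s->g_low = g_low; s->g_mid = g_mid; s->g_high = g_high;
}

/* Per-sample processing: split into low/mid/high and weight each band. */
float subband_process(SubbandHrtf *s, float x)
{
    s->lp1 += s->a1 * (x - s->lp1);   /* below f_edge1        */
    s->lp2 += s->a2 * (x - s->lp2);   /* below f_edge2        */
    float low  = s->lp1;
    float mid  = s->lp2 - s->lp1;     /* f_edge1 .. f_edge2   */
    float high = x - s->lp2;          /* above f_edge2        */
    return s->g_low * low + s->g_mid * mid + s->g_high * high;
}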

Functional Extensions of 3D Sound Localization

Sound Movement

[Figure: movement.jpg]

The reproduction of a moving sound image in 3D space is needed for virtual-reality applications such as entertainment, telecommunications, and guidance for pilots. To generate smoothly moving sound, HRTFs measured at sufficiently small angular intervals are required. Such a dense set of HRTFs, however, requires a large amount of memory.

To cope with this problem, we have developed an efficient storage scheme for the HRTF database. Our scheme reduces the memory capacity of the database through a coefficient-sharing scheme based on feature extraction from the HRTFs.
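
The sketch below shows the baseline that such a storage scheme improves on: an HRIR for an arbitrary azimuth is approximated by linearly interpolating the coefficients of the two nearest measured HRIRs, so only a coarse angular grid needs to be stored. The 5-degree grid and database layout are assumptions, and the coefficient-sharing scheme itself is not reproduced here.

/*
 * Minimal sketch: HRIR interpolation between the two nearest measured
 * azimuths on an assumed 5-degree grid.
 */
#include <stddef.h>

#define HRIR_LEN  128
#define GRID_STEP 5                      /* assumed measurement interval */
#define GRID_SIZE (360 / GRID_STEP)

/* db[i] holds the HRIR measured at azimuth i * GRID_STEP degrees. */
void interpolate_hrir(const float db[GRID_SIZE][HRIR_LEN],
                      float azimuth_deg, float out[HRIR_LEN])
{
    while (azimuth_deg < 0.0f)    azimuth_deg += 360.0f;
    while (azimuth_deg >= 360.0f) azimuth_deg -= 360.0f;

    int   i0   = (int)(azimuth_deg / GRID_STEP) % GRID_SIZE;
    int   i1   = (i0 + 1) % GRID_SIZE;        /* wrap around 360 deg */
    float frac = azimuth_deg / GRID_STEP - (float)i0;

    for (size_t k = 0; k < HRIR_LEN; k++)
        out[k] = (1.0f - frac) * db[i0][k] + frac * db[i1][k];
}

Note that naive coefficient interpolation smears the interaural time difference, so in practice HRIRs are usually time-aligned before their coefficients are interpolated.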

Exaggeration

[Figure: exaggerate.jpg]

Demands on 3D sound effects depend on the application. For example, while applications such as auditory displays place importance on localization accuracy, entertainment applications such as games prefer exaggerated 3D effects over accuracy.

As one such exaggerated 3D effect, we have proposed an exaggeration method for sound movement. Our scheme makes sound movement more distinct by utilizing modified HRTFs, which simulate the behavior of a listener cupping a hand behind the ear to listen carefully.
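
As a rough illustration of exaggeration (a stand-in, not our actual HRTF modification), the sketch below amplifies the interaural level difference (ILD) of an already-rendered binaural block; the ILD is a dominant left-right cue, so pushing it further from unity makes lateral movement more distinct. The exaggeration factor and block-wise gain structure are illustrative assumptions.

/*
 * Minimal sketch: exaggerating the ILD of a rendered binaural block by
 * `factor` (> 1.0), pushing the per-block level ratio further from unity.
 */
#include <math.h>
#include <stddef.h>

void exaggerate_ild(float *l, float *r, size_t n, float factor)
{
    float el = 1e-12f, er = 1e-12f;      /* block energies per ear */
    for (size_t i = 0; i < n; i++) { el += l[i]*l[i]; er += r[i]*r[i]; }

    float ild_db   = 10.0f * log10f(el / er);   /* current ILD in dB  */
    float extra_db = (factor - 1.0f) * ild_db;  /* additional ILD     */
    float gl = powf(10.0f, +extra_db / 40.0f);  /* split the boost    */
    float gr = powf(10.0f, -extra_db / 40.0f);  /* between both ears  */

    for (size_t i = 0; i < n; i++) { l[i] *= gl; r[i] *= gr; }
}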

Sound Localization in Proximal Region

[Figure: distance.jpg]

One of the inherent issues of 3D sound localization schemes is the difficulty of achieving 3D sound effects in the proximal region. General 3D sound localization schemes represent the distance of a sound source only in terms of its sound level; however, HRTFs in the proximal region have spectral features different from those in the distal region due to auditory parallax effects and head-shadowing effects.

To enhance 3D sound effects in the proximal region, we have developed a method to reproduce head-shadowing effects based on a rigid-sphere model.
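
A minimal sketch of such rigid-sphere shadowing is given below. As an assumption it uses the first-order spherical-head shadow filter popularized by Brown and Duda, in which the high-frequency gain alpha varies from about 2 at the near ear down to about 0.1 at roughly 150 degrees toward the far side; this is a stand-in rather than our exact method.

/*
 * Minimal sketch: the head is approximated by a rigid sphere of radius
 * `a_radius`, and shadowing toward each ear is modeled by a first-order
 * filter whose high-frequency gain depends on the angle `theta` between
 * the ear axis and the source direction (Brown-Duda parametrization,
 * assumed here).
 */
#include <math.h>

#define PI_F 3.14159265f

typedef struct { float b0, b1, a1, z; } ShadowFilter;

void shadow_init(ShadowFilter *f, float fs, float a_radius, float theta_deg)
{
    const float c = 343.0f;           /* speed of sound, m/s          */
    float K = fs * a_radius / c;      /* bilinear-transform constant  */
    /* alpha: ~2.0 at the near ear (theta = 0), ~0.1 at theta = 150.  */
    float alpha = 1.05f + 0.95f * cosf(theta_deg * PI_F / 150.0f);
    f->b0 = (1.0f + alpha * K) / (1.0f + K);
    f->b1 = (1.0f - alpha * K) / (1.0f + K);
    f->a1 = (1.0f - K) / (1.0f + K);
    f->z  = 0.0f;
}

/* Transposed direct-form II, first order: unity gain at DC, gain alpha
 * at high frequencies. */
float shadow_process(ShadowFilter *f, float x)
{
    float y = f->b0 * x + f->z;
    f->z = f->b1 * x - f->a1 * y;
    return y;
}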

Sound Localization for Multiple Sound Sources

[Figure: multiple.png]

The simplest way to render multiple 3D sounds requires a set of HRTF filters, one per sound source, so the computational cost and memory space are proportional to the number of sound sources. Especially in embedded systems, where computational resources are restricted, the number of reproducible 3D sounds is therefore strictly limited.

To deal with multiple sound sources, we are considering a method that uses fewer HRTF filters than sound sources by clustering the sound sources.
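
A minimal sketch of the clustering idea: sources are grouped by direction, the sources in each cluster are mixed into one mono bus, and a single HRTF filter pair renders each bus, so filter cost scales with the number of clusters rather than the number of sources. The fixed 45-degree azimuth grid is an assumption; our actual clustering criterion is not reproduced here.

/*
 * Minimal sketch: mixing n_src mono sources into N_CLUSTERS direction
 * buses; each bus is afterwards rendered with one HRTF filter pair
 * (e.g., with a convolution routine like the one sketched earlier).
 */
#include <stddef.h>
#include <string.h>

#define N_CLUSTERS 8    /* assumed: 8 azimuth clusters, 45 degrees apart */
#define BUS_LEN    256  /* assumed processing block length               */

static int nearest_cluster(float azimuth_deg)
{
    while (azimuth_deg < 0.0f)    azimuth_deg += 360.0f;
    while (azimuth_deg >= 360.0f) azimuth_deg -= 360.0f;
    return (int)((azimuth_deg + 22.5f) / 45.0f) % N_CLUSTERS;
}

/* Mix n_src mono sources (each len <= BUS_LEN samples) into per-cluster
 * buses according to each source's azimuth. */
void mix_into_clusters(const float *const *src, const float *azimuth_deg,
                       size_t n_src, size_t len,
                       float bus[N_CLUSTERS][BUS_LEN])
{
    memset(bus, 0, sizeof(float) * N_CLUSTERS * BUS_LEN);
    for (size_t s = 0; s < n_src; s++) {
        int c = nearest_cluster(azimuth_deg[s]);
        for (size_t i = 0; i < len; i++)
            bus[c][i] += src[s][i];
    }
}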

3D Sound Field Reproduction

[Figure: sound-field_reproduction_e.png]

HRTFs are measured in an anechoic chamber, where physical phenomena caused by the environment (e.g., reflections) are artificially eliminated. Although such HRTFs give the listener enough cues to perceive 3D sounds, the differences between anechoic and practical environments result in a lack of realism and immersion.

To cope with this problem, we are developing a 3D sound field reproduction method based on HRTFs. In this method, reproduction is achieved by simulating the virtual environment and rendering the reflected sounds obtained from the simulation with HRTFs. The numerous reflected sounds are rendered efficiently by the above-mentioned method for multiple sound sources.
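
A minimal sketch of the environment-simulation step is given below, using the classic image-source method for a rectangular room as an assumed stand-in for our simulator: each wall mirrors the source into a virtual source, and each virtual source is then rendered like an ordinary 3D source (an HRTF pair for its direction, plus delay and 1/r attenuation for its distance). The reflection coefficient and the restriction to first-order reflections are illustrative.

/*
 * Minimal sketch: direct sound plus the 6 first-order image sources of
 * a source at (sx,sy,sz) in a room [0,Lx]x[0,Ly]x[0,Lz] with reflection
 * coefficient `beta` at every wall.
 */
#include <math.h>

typedef struct { float x, y, z; float gain; } ImageSource;

int first_order_images(float sx, float sy, float sz,
                       float Lx, float Ly, float Lz, float beta,
                       ImageSource out[7])
{
    out[0] = (ImageSource){ sx, sy, sz, 1.0f };         /* direct      */
    out[1] = (ImageSource){ -sx, sy, sz, beta };        /* wall x = 0  */
    out[2] = (ImageSource){ 2*Lx - sx, sy, sz, beta };  /* wall x = Lx */
    out[3] = (ImageSource){ sx, -sy, sz, beta };        /* wall y = 0  */
    out[4] = (ImageSource){ sx, 2*Ly - sy, sz, beta };  /* wall y = Ly */
    out[5] = (ImageSource){ sx, sy, -sz, beta };        /* floor z = 0 */
    out[6] = (ImageSource){ sx, sy, 2*Lz - sz, beta };  /* ceiling     */
    return 7;
}

/* Propagation delay (in samples) and distance gain for one image source
 * as heard at listener position (lx,ly,lz). */
void delay_and_gain(const ImageSource *s, float lx, float ly, float lz,
                    float fs, float *delay_samples, float *gain)
{
    float dx = s->x - lx, dy = s->y - ly, dz = s->z - lz;
    float r = sqrtf(dx*dx + dy*dy + dz*dz);
    *delay_samples = fs * r / 343.0f;   /* propagation time * fs */
    *gain = s->gain / (r + 1e-3f);      /* 1/r spreading loss    */
}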

