
Sound localization in real and virtual acoustical environments

Posted on: 1998-11-07    Degree: Ph.D.    Type: Dissertation
University: Boston University    Candidate: Kulkarni, Abhijit    Full Text: PDF
GTID: 1468390014974537    Subject: Engineering
Abstract/Summary:
The perception of three-dimensional acoustic space was studied using a hybrid (natural and virtual) acoustical environment. The hybrid environment was designed to allow paired comparisons between a natural sound source (delivered from a loudspeaker) and a virtual sound source (presented over tubephones in the ear canals) at corresponding spatial locations. The virtual stimulus was constructed from individually measured head-related transfer functions (HRTFs) and delivered near the entrance of the ear canal via tubes that were confirmed to approximately preserve the natural sound field. In the primary experiment, subjects were tested in a two-interval, two-alternative forced-choice (2I-2AFC) paradigm and required to judge the order of the free-field and virtual stimuli for corresponding locations in the horizontal plane. Subjects performed at chance for all four azimuthal locations tested.

Sensitivity of subjects to detail in the HRTF phase spectrum was investigated by approximating the HRTF phase spectrum with minimum-phase and linear-phase functions. The overall interaural time difference (ITD) in the resulting model HRTFs was represented as a frequency-independent, position-dependent delay, obtained as the overall ITD in the empirical HRTFs. Results show that subjects could not discriminate between the free-field stimuli and virtual stimuli constructed from minimum-phase and linear-phase model HRTFs. We can hence conclude that, in anechoic space, subjects are not sensitive to detail in the HRTF phase spectrum.

Sensitivity of subjects to detail in the HRTF magnitude spectrum was investigated by measuring the discriminability of virtual stimuli from free-field stimuli as the magnitude spectra of the HRTFs used to construct the virtual stimuli were systematically smoothed. Subjects performed at chance even when the HRTF magnitude spectra were smoothed by large amounts.
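A minimum-phase approximation of the kind described above is commonly computed with the real-cepstrum (Hilbert-transform) method: the minimum-phase spectrum is derived entirely from the measured magnitude spectrum, and the overall ITD is then reapplied as a pure delay. The sketch below is an illustration of that standard technique, not the author's code; the FFT length and function name are assumptions.

```python
import numpy as np

def minimum_phase_from_magnitude(mag, n_fft=512):
    """Build a minimum-phase impulse response whose magnitude spectrum
    matches `mag` (one-sided, length n_fft//2 + 1), using the real
    cepstrum. A frequency-independent ITD would be added afterwards
    as a simple sample delay."""
    # Real cepstrum of the log-magnitude spectrum (floor avoids log(0))
    log_mag = np.log(np.maximum(mag, 1e-12))
    cep = np.fft.irfft(log_mag, n=n_fft)
    # Fold the cepstrum onto the causal side (minimum-phase window)
    win = np.zeros(n_fft)
    win[0] = 1.0
    win[1:n_fft // 2] = 2.0
    win[n_fft // 2] = 1.0
    # Exponentiate back to get the minimum-phase spectrum
    min_phase_spec = np.exp(np.fft.rfft(cep * win, n=n_fft))
    return np.fft.irfft(min_phase_spec, n=n_fft)

# Example: a flat magnitude spectrum yields an impulse-like response
h = minimum_phase_from_magnitude(np.full(257, 2.0))
```

By construction, the magnitude of the result matches the input spectrum while all phase detail beyond the broadband delay is discarded, which is exactly the manipulation the discrimination experiment probes.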
With extreme smoothing, subjects reported an elevated but still externalized sound image, consistent with a natural sound image from a loudspeaker directly overhead. These results suggest that subjects are largely insensitive to detail in the HRTF magnitude spectra in anechoic space and, further, that the directional features in the HRTF are not responsible for the externalization of sound images.

In a series of subsequent experiments, the encoding of source elevation was studied using simplified (notch-filtered) stimuli. The ability of subjects to match the image location of a free-field sound stimulus at different elevations in the median plane with simple notch-filtered virtual stimuli was measured using an image-mapping paradigm. Results show that subjects could obtain a consistent match between the locations of the free-field and virtual sound images by choosing an appropriate notch frequency. These results suggest that the location of a primary notch in the ear-input spectrum encodes source elevation.
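A notch-filtered stimulus of the kind used in the elevation experiments can be sketched with a standard IIR notch filter applied to broadband noise. This is an illustrative reconstruction under assumed parameters (sample rate, notch frequency, Q), not the dissertation's actual stimulus-generation code.

```python
import numpy as np
from scipy.signal import iirnotch, lfilter, freqz

fs = 44100  # assumed sample rate, Hz

# Hypothetical notch center frequency; in the experiments the subject
# would adjust this value to match a free-field source elevation.
notch_hz = 8000.0
b, a = iirnotch(w0=notch_hz, Q=4.0, fs=fs)

# Notch-filter broadband noise to create the simplified stimulus
rng = np.random.default_rng(0)
noise = rng.standard_normal(fs // 10)  # 100 ms of noise
stimulus = lfilter(b, a, noise)

# Check the filter: energy at the notch frequency is removed
w, h = freqz(b, a, worN=[notch_hz], fs=fs)
```

Because the notch zeros lie on the unit circle, the response at `notch_hz` is essentially nulled while the rest of the spectrum passes nearly unchanged, isolating the single spectral cue whose frequency is hypothesized to encode elevation.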
Keywords/Search Tags: Virtual, Sound, HRTF phase, HRTF magnitude, Subjects, Spectrum, Results, Natural