Current automatic speech recognition systems perform fairly well on noise-free speech but degrade considerably in noisy environments. This degradation is caused mainly by the mismatch between the acoustic conditions and types of noise present in the training data used to develop the system and the conditions under which the system is actually deployed. This work explores the use of spectral range limiting functions to improve the noise robustness of automatic speech recognition systems. The approach is motivated by the compressive nonlinearities commonly found in auditory models and by the fact that human speech recognition remains superior to any automatic speech recognition system; for this reason, many automatic speech recognition techniques draw on auditory models and the physiology of human and animal hearing. The goal is to combine the compressive nonlinear stage of an auditory model with traditional automatic speech recognition techniques to improve the accuracy of such systems.
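As a rough illustration of the kind of compressive, range-limiting nonlinearity referred to above, the sketch below applies a power-law compression followed by a fixed per-frame dynamic-range limit to spectral energies. The exponent, floor, and range values are assumed for illustration only and are not the specific spectral range limiting functions developed in this work.

```python
import numpy as np

def compress_and_limit(energies, exponent=1.0 / 3.0, dynamic_range_db=40.0):
    """Illustrative compressive nonlinearity with spectral range limiting.

    `energies` is a (frames x bins) array of non-negative spectral energies.
    A power-law compression (cube root by default) stands in for the
    compressive stage of an auditory model; values falling more than
    `dynamic_range_db` below each frame's maximum are then clipped,
    limiting the usable spectral range. All parameter values are assumptions.
    """
    # Compress with a power law; the small floor avoids zero energies.
    compressed = np.power(np.maximum(energies, 1e-10), exponent)

    # Express the compressed energies on a dB-like scale.
    log_spec = 20.0 * np.log10(compressed)

    # Clip each frame at `dynamic_range_db` below its own maximum,
    # limiting the spectral range seen by the recognizer's front end.
    frame_max = log_spec.max(axis=1, keepdims=True)
    return np.maximum(log_spec, frame_max - dynamic_range_db)

# Example use on hypothetical short-time Fourier transform magnitudes:
# limited_features = compress_and_limit(np.abs(stft_frames) ** 2)
```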