
Psychoacoustics

Psychoacoustics is the scientific study of sound perception and audiology: how humans perceive various sounds. More specifically, it is the branch of science studying the psychological and physiological responses associated with sound (including noise, speech and music). It can be further categorized as a branch of psychophysics. Psychoacoustics received its name from a field within psychology, recognition science, which deals with all kinds of human perception. It is an interdisciplinary field involving psychology, acoustics, electronic engineering, physics, biology, physiology, and computer science.

Hearing is not a purely mechanical phenomenon of wave propagation, but is also a sensory and perceptual event; in other words, when a person hears something, that something arrives at the ear as a mechanical sound wave traveling through the air, but within the ear it is transformed into neural action potentials. The outer hair cells (OHC) of a mammalian cochlea give rise to enhanced sensitivity and better frequency resolution of the mechanical response of the cochlear partition. These nerve pulses then travel to the brain, where they are perceived. Hence, in many problems in acoustics, such as audio processing, it is advantageous to take into account not just the mechanics of the environment, but also the fact that both the ear and the brain are involved in a person's listening experience. The inner ear, for example, does significant signal processing in converting sound waveforms into neural stimuli, so certain differences between waveforms may be imperceptible. Data compression techniques, such as MP3, make use of this fact. In addition, the ear has a nonlinear response to sounds of different intensity levels; this nonlinear response is called loudness. Telephone networks and audio noise reduction systems make use of this fact by nonlinearly compressing data samples before transmission and then expanding them for playback (a sketch of one common companding scheme follows below). Another effect of the ear's nonlinear response is that sounds that are close in frequency produce phantom beat notes, or intermodulation distortion products.

The term 'psychoacoustics' also arises in discussions about cognitive psychology and the effects that personal expectations, prejudices, and predispositions may have on listeners' relative evaluations and comparisons of sonic aesthetics and acuity, and on listeners' varying determinations about the relative qualities of various musical instruments and performers. The expression that one 'hears what one wants (or expects) to hear' may pertain in such discussions.
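As a concrete illustration of the companding mentioned above, here is a minimal Python sketch of µ-law companding, the nonlinear compression scheme standardized for North American telephony in ITU-T G.711. The function names and the normalized-sample convention are ours for illustration; real G.711 codecs quantize the companded value to 8-bit codewords rather than returning floats.

import math

MU = 255.0  # mu = 255 is the value used in North American telephony (G.711)

def mu_law_compress(x: float) -> float:
    # Map a sample x in [-1, 1] onto a companded value in [-1, 1];
    # quiet samples get a disproportionately large share of the range.
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def mu_law_expand(y: float) -> float:
    # Invert mu_law_compress, recovering the original sample for playback.
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)

# A -40 dB input (amplitude 0.01) compands to roughly 0.23, so after
# quantization its relative error stays small, matching the ear's
# greater sensitivity to level differences in quiet sounds.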
The human ear can nominally hear sounds in the range 20 Hz to 20,000 Hz (20 kHz). The upper limit tends to decrease with age; most adults are unable to hear above 16 kHz. The lowest frequency that has been identified as a musical tone is 12 Hz under ideal laboratory conditions. Tones between 4 and 16 Hz can be perceived via the body's sense of touch. The frequency resolution of the ear is about 3.6 Hz within the octave of 1000–2000 Hz; that is, changes in pitch larger than 3.6 Hz can be perceived in a clinical setting. However, even smaller pitch differences can be perceived through other means. For example, the interference of two pitches can often be heard as a repetitive variation in the volume of the tone. This amplitude modulation occurs with a frequency equal to the difference between the frequencies of the two tones and is known as beating (see the trigonometric identity below).

The semitone scale used in Western musical notation is not a linear frequency scale but a logarithmic one. Other scales have been derived directly from experiments on human hearing perception, such as the mel scale and the Bark scale (these are used in studying perception, but not usually in musical composition); they are approximately logarithmic in frequency at the high-frequency end but nearly linear at the low-frequency end (a conversion formula is sketched below).

The intensity range of audible sounds is enormous. Human eardrums are sensitive to variations in sound pressure and can detect pressure changes as small as a few micropascals (µPa) and as large as more than 100 kPa. For this reason, sound pressure level is also measured logarithmically, with all pressures referenced to 20 µPa (about 1.97×10⁻¹⁰ atm); the defining formula appears below. The lower limit of audibility is therefore defined as 0 dB, but the upper limit is not as clearly defined; it is more a question of the level at which the ear is physically harmed or noise-induced hearing loss becomes likely.

A more rigorous exploration of the lower limits of audibility shows that the minimum threshold at which a sound can be heard is frequency dependent. By measuring this minimum intensity for test tones of various frequencies, a frequency-dependent absolute threshold of hearing (ATH) curve may be derived (an analytic approximation is sketched below). Typically, the ear shows a peak of sensitivity (i.e., its lowest ATH) between 1 and 5 kHz, though the threshold changes with age, with older ears showing decreased sensitivity above 2 kHz.
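The beat rate follows from the sum-to-product identity for two equal-amplitude tones at frequencies f_1 and f_2:

\sin(2\pi f_1 t) + \sin(2\pi f_2 t) = 2 \cos\!\left(2\pi \frac{f_1 - f_2}{2} t\right) \sin\!\left(2\pi \frac{f_1 + f_2}{2} t\right)

The cosine factor acts as a slow envelope whose magnitude peaks |f_1 - f_2| times per second, so, for example, tones at 440 Hz and 443 Hz beat three times per second.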
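As an illustration of such a perceptual scale, one widely used analytic form of the mel scale (O'Shaughnessy's formulation) can be computed as below; the function name is just for this sketch.

import math

def hz_to_mel(f_hz: float) -> float:
    # O'Shaughnessy's formula: nearly linear below about 1 kHz and
    # approximately logarithmic above it, matching the behavior
    # described in the text.
    return 2595.0 * math.log10(1.0 + f_hz / 700.0)

# hz_to_mel(100)   -> ~150 mel  (roughly proportional to frequency)
# hz_to_mel(1000)  -> ~1000 mel
# hz_to_mel(10000) -> ~3073 mel (compressed, logarithmic regime)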
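Sound pressure level expresses a pressure p in decibels relative to the reference pressure p_0:

L_p = 20 \log_{10}\!\left(\frac{p}{p_0}\right) \, \text{dB}, \qquad p_0 = 20\ \mu\text{Pa}

At the reference pressure this gives 20 log10(1) = 0 dB, the lower limit above, while a 100 kPa pressure variation corresponds to about 20 log10(10^5 / (2×10^-5)) ≈ 194 dB.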
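The ATH curve is often approximated analytically; a commonly cited fit is the one attributed to Terhardt, sketched below in Python. The constants here are indicative and should be checked against a reference before use; note that the formula's minimum falls near 3–4 kHz, consistent with the 1–5 kHz sensitivity peak described above.

import math

def ath_db_spl(f_hz: float) -> float:
    # Terhardt-style approximation of the absolute threshold of hearing,
    # in dB SPL, with frequency taken in kHz inside the formula.
    f = f_hz / 1000.0
    return (3.64 * f ** -0.8
            - 6.5 * math.exp(-0.6 * (f - 3.3) ** 2)
            + 1e-3 * f ** 4)

# ath_db_spl(3300) -> about -5 dB SPL (most sensitive region)
# ath_db_spl(100)  -> about 23 dB SPL (quiet low tones are harder to hear)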

[ "Perception", "Auditory Psychophysics", "Audiometric booth", "Obscure auditory dysfunction", "Masking threshold", "Speech interference level" ]