Spike pattern codes are a powerful and ubiquitous means of stimulus representation in the central auditory system. Cortical coding is of particular importance, as it is one of the final stages of sound processing before multisensory integration and the formation of perceptual decisions. In this work, temporal coding in primary auditory cortex (AI) was investigated in awake Mongolian gerbils using synthetic and natural sounds.

To determine whether AI transforms midbrain coding properties, we compared the representation of synthetic sounds in AI with that in the inferior colliculus (IC). AI neurons exhibited higher temporal variability, faster adaptation kinetics, and a longer temporal window than IC neurons, making them well suited to encode stimuli with slow envelope modulations, such as speech. Ketamine-barbiturate anesthesia was found to obscure these coding properties.

To test the generality of the temporal properties obtained with synthetic stimuli, AI neurons were challenged with natural sounds. Mongolian gerbils have a rich vocal repertoire that constitutes a behaviorally salient stimulus set. I studied adult gerbil vocal behavior in an acoustically controlled setting. Vocalizations occurred in five contexts of interactive behavior: same-sex aggression, food dispute, mating, alarm, and disturbance by conspecifics. Scouting calls were associated with exploration of a new setting. Gerbil calls encompassed a broad frequency range (∼3–45 kHz) and contained significant low-frequency envelope modulations.

Next, I examined AI temporal coding of communication sounds by comparing responses to natural calls with responses to calls disrupted on different timescales. Although the responses of individual neurons were dictated primarily by their spectral tuning and characteristic firing patterns, the combined output of neuronal populations diverged increasingly from the responses to natural calls as the disruption timescale increased. Similarly, when AI cells were tested with different exemplars of calls from equivalent social contexts, individual neurons represented the acoustic details of each call, while population responses exhibited a stereotyped pattern for each call class.

I conclude that AI neurons are specialized for representing the long-timescale, slow envelope modulations relevant to vocal communication. While individual AI cells primarily encode spectrotemporal acoustic features, their combined output at the population level may underlie perceptual decision-making.