How does the brain represent simultaneously presented stimuli? According to biased competition theory, stimuli presented concurrently compete for cognitive resources, and the brain resolves this competition by establishing a bias toward certain stimuli. This bias can arise through both bottom-up and top-down processes; most research has focused on top-down bias and on bottom-up bias driven by low-level stimulus features. With accumulated postnatal experience, the processing of meaningful objects becomes highly automated and efficient (Li et al., 2002; Thorpe et al., 1996), and a large body of evidence indicates independent representation mechanisms for different object categories in cortical regions (Caramazza & Shelton, 1998; Mahon & Caramazza, 2009). This neural specialization for object representation provides a basis for optimizing processing efficiency. Cohen et al. (2014) found evidence for top-down bias toward different object categories and suggested that the segregation of object neural representations may make it possible to establish such bias. Building on this, we hypothesized that the brain may establish a category-level, bottom-up bias when representing multiple objects simultaneously.

To test this hypothesis, we adapted the experimental paradigm of Kastner et al. (1999) for task-irrelevant picture processing and used fMRI to measure brain responses to distractor stimuli. We selected images from four object categories (faces, animals, scenes, and tools) as distractors presented in the peripheral visual field, and controlled image exposure time at two levels (33 ms and 500 ms) to compare biased competition between fast and slow presentation. The experiment consisted of two parts: a localization experiment and a main experiment. In addition to whole-brain analyses, we examined biased representation within regions of interest (ROIs) using multi-voxel pattern analysis in the main experiment.

To overcome the problem of quantifying bias when baseline activation levels differ across object categories, we used the bias contribution parameter, which reflects the bias toward an object's representation, as the index for the ROI analyses. First, based on the results of the localization experiment, we selected the same number of the most strongly activated voxels for each target category, defining ROIs for object representation that were comparable across categories. Then, we vectorized the activation patterns of the voxels within each ROI and calculated the bias contribution parameter to measure the degree of bias toward a single object category within each category combination. Comparing the bias contribution parameters across category pairs, we found no significant difference between animal and non-animal categories under the 33 ms condition, whereas under the 500 ms condition the bias contribution parameter for non-animal categories was significantly larger than that for animal categories. That is, the bias established toward different object categories differed between the 33 ms and 500 ms conditions. Using the bias contribution parameter, we found no evidence of an animal-category bias advantage in traditional object-category representation regions. Overall, our study demonstrates that a category-level, stimulus-driven bias operates in the brain when multiple objects are represented simultaneously.
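The ROI analysis described above can be sketched in code. This excerpt does not state the exact formula for the bias contribution parameter, so the following is only an illustrative assumption: the paired-presentation voxel pattern is expressed as a weighted combination of the two single-category patterns via least squares, and the normalized weights serve as each category's bias contribution. The function name `bias_contribution` and the toy data are hypothetical, not from the study.

```python
import numpy as np

def bias_contribution(pair_pattern, pattern_a, pattern_b):
    """Illustrative sketch (assumed definition, not the paper's formula):
    fit the paired-presentation pattern as w_a * pattern_a + w_b * pattern_b
    by least squares, then normalize the weights so they sum to 1."""
    X = np.column_stack([pattern_a, pattern_b])      # voxels x 2 design matrix
    w, *_ = np.linalg.lstsq(X, pair_pattern, rcond=None)
    w = np.clip(w, 0, None)                          # keep contributions non-negative
    total = w.sum()
    return w / total if total > 0 else np.array([0.5, 0.5])

# Toy example: a paired pattern constructed to be biased toward category A.
rng = np.random.default_rng(0)
a = rng.normal(size=200)       # single-presentation pattern, category A
b = rng.normal(size=200)       # single-presentation pattern, category B
pair = 0.8 * a + 0.2 * b       # paired presentation dominated by A
wa, wb = bias_contribution(pair, a, b)
print(round(wa, 2), round(wb, 2))  # → 0.8 0.2
```

Because the toy paired pattern is an exact linear combination, the fit recovers the weights exactly; with real fMRI data the residual term would be substantial, and comparing the normalized weights across category pairs (as the study does) sidesteps differences in baseline activation levels.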