
Different Minds Collaborative Virtual Spring Conference
April 9th, 2025
Trainee Presenters
Please join us for an exciting series of talks featuring the trainees of the Different Minds Collaborative.
Ying Hu
Chinese Academy of Sciences
PI: Dr. Alice O'Toole
First Impressions from Body Shapes in American Versus Chinese Individuals
We effortlessly form impressions of others just by looking at their bodies—snap judgments that can influence real-world outcomes, from hiring to voting. But how universal are the structures behind these impressions? Hu et al. (2018) found that American observers organize body trait impressions along two dimensions: valence (positive vs. negative) and agency (active vs. passive). In this talk, I present follow-up work testing whether this framework generalizes to Chinese observers. Eighty American and eighty Chinese participants rated 140 computer-generated 3D bodies on 30 Big Five personality traits. While Americans replicated the valence–agency structure, Chinese participants instead used valence and Extraversion. For Americans, high agency and positive valence aligned with Extraversion—this link was absent in Chinese participants. Trait predictions also diverged: body shape predicted Conscientiousness and Extraversion in Americans, but Conscientiousness, Openness, and Neuroticism in Chinese participants. Visual cues differed too—Americans saw lean bodies as more extraverted, whereas Chinese participants saw heavier bodies that way. I will discuss what these cultural similarities and differences reveal about how visual cues and cultural context jointly shape trait impressions—and how they can inform more culturally sensitive models of social perception.
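As a rough illustration of how a two-dimensional structure can be recovered from a bodies × traits rating matrix, the sketch below runs a PCA (via SVD) on hypothetical data. The matrix contents, shapes, and variable names are assumptions for illustration, not the authors' actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical ratings: 140 bodies x 30 traits, averaged over observers
ratings = rng.normal(size=(140, 30))

# Center and run PCA via SVD; in the reported American data, the first
# two components would correspond to the valence and agency dimensions
centered = ratings - ratings.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
explained = S**2 / np.sum(S**2)   # variance explained per component
body_scores = U[:, :2] * S[:2]    # each body's position in the 2-D space
trait_loadings = Vt[:2].T         # how each trait loads on the two axes
```

Comparing how traits load on these axes across observer groups is one way to test whether the same dimensional structure holds in both cultures.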
Irina Ovchinnikova
University of Iceland
PI: Dr. Heida Maria Sigurdardottir
Object Discrimination Is an Independent Predictor of Reading
Reading is a fundamental everyday skill, yet not everyone acquires it easily. For decades, researchers focused on the role of phonological processing in reading difficulties, while the visual component of reading received far less attention. Our high-level visual dysfunction hypothesis proposes that the reading problems experienced by some individuals can reflect a broader impairment in visual cognition. In this preregistered study, we investigated whether object recognition abilities, measured with a visual foraging task, predict reading performance in early readers. Participants were 1st and 3rd graders (total N = 164) assessed with standardized reading fluency tests, phonological processing measures, rapid automatized naming of colors, a scale of ADHD symptoms, and a foraging paradigm involving object discrimination. In the foraging task, children selected matching objects from a display containing a central target exemplar, nine matching targets, and nine distractors. Trials varied by condition: basic-level (targets and distractors from different categories), subordinate-level (from the same category), and control (black and white circles). Object recognition was operationalized as the number of targets participants correctly identified per second. Lower- and higher-level visual similarities of the objects were assessed via activation patterns of the CORnet-S neural network, and semantic similarities of the stimuli were obtained through separate human ratings. Our findings demonstrate that object recognition significantly predicts reading performance, even after controlling for age, phonological processing, attentional deficits, rapid automatized naming, and foraging speed during control trials. Additionally, we found that higher reading skills are associated with more effective use of semantic and higher-level visual information during object discrimination; lower-level visual information did not show a similar pattern.
These results support our high-level visual dysfunction hypothesis and extend our understanding of the importance of visual object processing in reading, a factor that has long been largely ignored in studies of reading difficulties.
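The operationalization of object recognition described above (correctly identified targets per second) can be sketched as a simple rate computation. The per-trial tuples and values below are hypothetical, for illustration only.

```python
# Hypothetical per-trial foraging data: (targets_found, trial_duration_s)
trials = [(9, 6.1), (8, 7.4), (9, 5.8)]

def foraging_rate(trials):
    """Object recognition score: correctly identified targets per second,
    pooled across trials (total targets / total time)."""
    total_targets = sum(found for found, _ in trials)
    total_time = sum(duration for _, duration in trials)
    return total_targets / total_time

score = foraging_rate(trials)
```

In the study's design, this score from the basic- and subordinate-level conditions is the predictor of interest, while the rate during control trials (circles) serves as a covariate for general foraging speed.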
Mojan Izadkhah
University of British Columbia
PI: Dr. Ipek Oruc
Classification of EEG Signals in a Pro-Saccade Task Using Artificial Intelligence
We examined the spatiotemporal patterns of brain activity involved in saccadic eye movement planning using explainable artificial intelligence. EEG signals were recorded from 20 participants with a 64-channel BioSemi system while they performed randomly cued pro-saccades (250 leftward, 250 rightward; 500 trials in total). Eye movement onset was detected using electrooculography (EOG), and this time point served as a reference to extract a 96 ms EEG segment from −104 ms to −8 ms before movement, excluding contamination from motor execution. Because the number of valid trials per subject was relatively limited (mean = 436.45, range = 273–492), we employed a 3D variant of EEGNet combined with a custom data augmentation pipeline to improve model robustness. Using only this pre-saccadic window, the network accurately predicted saccade direction before movement occurred. On validation data, the model performed well above chance (mean AUC = 0.91, SD = 0.13, p < 0.001; mean accuracy = 0.84, SD = 0.14, p < 0.001). Model saliency maps identified a particularly important time window between −70 ms and −50 ms before saccade onset. During this interval, we observed dynamic, oscillatory activity in both frontal and parietal regions, with signal strength peaking between −48 ms and −44 ms.
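The pre-saccadic epoching step (−104 ms to −8 ms relative to EOG-detected onset) can be sketched as below. The sampling rate, array layout, and function name are assumptions not stated in the abstract.

```python
import numpy as np

FS = 512          # sampling rate in Hz (assumed; not given in the abstract)
N_CHANNELS = 64   # 64-channel BioSemi montage

def presaccadic_epoch(eeg, onset_idx, fs=FS, start_s=-0.104, end_s=-0.008):
    """Extract the pre-saccadic EEG window relative to saccade onset.

    eeg: array of shape (channels, samples); onset_idx: onset sample index
    from EOG detection. Returns the (channels, window) segment ending
    before movement, so motor-execution activity is excluded.
    """
    start = onset_idx + int(round(start_s * fs))
    stop = onset_idx + int(round(end_s * fs))
    return eeg[:, start:stop]

# Example: one synthetic trial with onset at sample 1024
eeg = np.random.default_rng(1).normal(size=(N_CHANNELS, 2048))
segment = presaccadic_epoch(eeg, onset_idx=1024)
```

Segments like this, stacked across trials, would form the input tensor for the 3D EEGNet classifier described in the abstract.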
Yueling Sun
University of Victoria
PI: Dr. Jim Tanaka
Investigating Diagnostic Features in a Prototype Category Learning Task: A Deep Learning Perspective
Category learning allows humans to classify and generalize novel stimuli. The prototype distortion task, in which category members are generated by distorting a prototype, is commonly used to study this process. While humans can categorize distorted stimuli, the diagnostic features they rely on are not well understood. To explore this, we investigated how a convolutional neural network (CNN) classifies artificial stimuli in this task. Using RUBubbles, a recently introduced prototype-based artificial dataset, we selected four prototypes from similarity space, generated distortions, and trained VGG16 on the categorization task. VGG16 achieved perfect classification and exhibited distinct activation patterns across the four learned categories. After training, we applied Gradient-weighted Class Activation Mapping (Grad-CAM) to identify the model’s diagnostic feature representations. By thresholding the Grad-CAM heatmaps, we parametrically varied the amount of information available in these features and tested the model’s performance at each level. VGG16 was able to classify stimuli based solely on these features, demonstrating their key role in categorization. This feature utilization pattern offers a new perspective on the diagnostic features important for human category learning, shedding light on how visual information is prioritized in prototype distortion tasks.
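The heatmap-thresholding manipulation can be sketched as masking a stimulus by its normalized Grad-CAM map: raising the threshold parametrically reduces how much diagnostic-feature information remains. The image sizes, values, and function name are illustrative assumptions; computing the Grad-CAM map itself is omitted.

```python
import numpy as np

def mask_by_gradcam(image, heatmap, threshold):
    """Keep only pixels where the min-max-normalized Grad-CAM heatmap
    exceeds the threshold; higher thresholds retain less information."""
    h = (heatmap - heatmap.min()) / (heatmap.max() - heatmap.min() + 1e-8)
    mask = (h >= threshold).astype(image.dtype)
    return image * mask[..., None]   # broadcast mask over color channels

rng = np.random.default_rng(0)
img = rng.random((224, 224, 3))      # stand-in for a RUBubbles stimulus
cam = rng.random((224, 224))         # stand-in for a Grad-CAM heatmap
strict = mask_by_gradcam(img, cam, 0.9)   # little information retained
loose = mask_by_gradcam(img, cam, 0.1)    # most information retained
```

Classifying the masked stimuli at a range of thresholds is one way to measure how much of the model's performance rests on the highlighted diagnostic regions alone.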