
We are getting ready to launch a new Science Hangouts On Air series called Posterside Hangouts. The first set of talks will be on Psychology. See event post for a list of presenters, titles, and abstracts. There is also a link in the event so you can sign up to present your research in an upcoming Posterside Hangout.

Originally shared by Science on Google+

Posterside Hangouts is a new Hangouts On Air hosted by the Science on Google+ Community. The main goal of this HOA series is to recreate a poster session-like atmosphere here on G+, so researchers can present their recent findings. Presentations will be grouped by discipline, and individual presentations will last approximately 10–15 minutes.

Do you have a recent conference presentation, manuscript, or book that you would like to share with the Google+ community? Do you want to give your undergraduate or graduate students practice presenting their research? If yes, then let us know by filling out this short form:


Psychology Talks for Posterside Hangouts #1, Authors (Affiliations)

When audition dominates vision: Evidence from cross-modal statistical learning

Chris Robinson (The Ohio State University at Newark)

Automatic selection of eye tracking variables in visual categorization for adults and infants

Samuel Rivera (The Ohio State University at Columbus)

Foreign accent does not influence cognitive judgments

Andre L. Souza (Concordia University) and Art Markman (The University of Texas at Austin)

Positive mood may enhance cognitive flexibility: Evidence from category learning

Paul Minda (The University of Western Ontario) and Ruby Nadler (The University of Western Ontario)


Abstracts and Links

When audition dominates vision: Evidence from cross-modal statistical learning

Presenting information to multiple sensory modalities sometimes facilitates and sometimes interferes with processing of this information. Research examining interference effects shows that auditory input often interferes with processing of visual input in young children (i.e., auditory dominance effect), whereas visual input often interferes with auditory processing in adults (i.e., visual dominance effect). The current study used a cross-modal statistical learning task to examine modality dominance in adults. Participants ably learned auditory and visual statistics when auditory and visual sequences were presented unimodally and when auditory and visual sequences were correlated during training. However, increasing task demands resulted in an important asymmetry: Increased task demands attenuated visual statistical learning, while having no effect on auditory statistical learning. These findings are consistent with auditory dominance effects reported in young children and have important implications for our understanding of how sensory modalities interact while learning the structure of cross-modal information.

Link to Manuscript: 

Personal Website:

Automatic selection of eye tracking variables in visual categorization for adults and infants

We present a computational approach for the selection of diagnostic eye tracking variables. Previous methods for the selection of eye tracking variables have been ad hoc or hypothesis driven. In the absence of a good hypothesis, researchers are left to experiment with many alternatives. To resolve this problem, we use feature extraction and classification algorithms from machine learning to automatically identify the eye tracking variables that best correlate within sample eye tracking sequences belonging to the same category yet discriminate between categories. This approach allows us to extract the few (i.e., two to four) most diagnostic features from a pool of dozens. While previous work required the testing of a large number of hypotheses, we demonstrate how the proposed methodology yields the same result without the need to test a large number of alternative hypotheses. Instead, our method is data driven, i.e., the resulting model is obtained from the data. The proposed methodology was verified in a visual categorization task with adults and infants. Here, we presented infants and adults with a category learning task and tracked their eye movements. We extracted an over-complete set of eye tracking variables encompassing durations, probabilities, latencies, and the order of fixations and saccadic eye movements. The method identified a small set of variables that allowed us to predict category learning among adults and 6- to 8-month-old infants, and suggests that the looking strategies of adults and infants are distinct.

Source Code:

Link to Poster:

Link to Manuscript:

Personal website:
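The abstract above describes a data-driven pipeline: extract an over-complete pool of candidate eye-tracking variables, then use machine-learning feature selection with a classifier to keep only the few most diagnostic ones. A minimal sketch of that idea is below, using synthetic data and recursive feature elimination from scikit-learn; the variable names, group labels, effect sizes, and choice of classifier are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch: data-driven selection of diagnostic eye-tracking
# variables via feature selection + classification (not the authors' code).
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# An "over-complete" pool of candidate eye-tracking variables (illustrative).
feature_names = ["fixation_duration", "saccade_latency", "fixation_count",
                 "first_look_prob", "dwell_time_ratio", "saccade_amplitude"]
n_per_group = 40

# Synthetic data: "learners" vs. "non-learners" differ on only two variables.
learners = rng.normal(0.0, 1.0, size=(n_per_group, len(feature_names)))
learners[:, 0] += 1.5   # longer fixation durations for learners (assumed)
learners[:, 3] += 1.2   # higher first-look probability for learners (assumed)
nonlearners = rng.normal(0.0, 1.0, size=(n_per_group, len(feature_names)))

X = np.vstack([learners, nonlearners])
y = np.array([1] * n_per_group + [0] * n_per_group)

# Recursive feature elimination discards the least informative variables one
# at a time, leaving the few (here, two) most diagnostic features.
selector = RFE(LogisticRegression(max_iter=1000), n_features_to_select=2)
selector.fit(X, y)

selected = [name for name, keep in zip(feature_names, selector.support_) if keep]
print("Most diagnostic variables:", selected)
```

The key point the sketch illustrates is that no per-variable hypothesis is tested by hand: the classifier's fit to the data determines which variables survive elimination.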

Foreign accent does not influence cognitive judgments

A recent paper by Lev-Ari and Keysar (2010) reported that the processing fluency associated with non-native speech causes non-native speakers to sound less credible. The authors found that the same trivia statements were rated as less truthful when spoken by a non-native speaker of English. The present paper reports the results of three studies that attempted to replicate the findings of Lev-Ari and Keysar (2010) by focusing on processing fluency manipulations other than accent. Although we used virtually the same methodology as Lev-Ari and Keysar (2010), we failed to replicate the key finding that foreign-accented speech is less credible than native-accented speech. The implications of this finding are discussed.

Link to Manuscript:   

Personal Website:

Positive mood may enhance cognitive flexibility: Evidence from category learning

Theories of mood and its effects on cognition suggest that positive mood may increase cognitive flexibility. This increased flexibility is associated with areas in the prefrontal cortex and the anterior cingulate cortex, both of which play crucial roles in hypothesis testing and rule selection. As such, cognitive tasks that rely on these behaviors may benefit from positive mood, whereas tasks that do not rely on these behaviors should not benefit from cognitive flexibility and/or positive mood. We explored this idea within a category-learning framework. Positive, neutral, and negative moods were induced in our subjects, and they learned either a rule-described or a non-rule-described category set. Subjects in the positive mood condition performed significantly better than subjects in the neutral or negative mood conditions when learning the rule-described categories. Mood had a less obvious effect on the learning of non-rule-described categories, but computational modelling suggested that subjects who learned in a positive mood were more likely to use the optimal learning strategy. These results have implications for theories of category learning, and also for understanding the effects of local, environmental factors like mood on performance.

Link to Manuscript: 

Lab Website:

Background image source:


