Posterside Hangouts is a new Hangouts On Air series hosted by the Science on Google+ Community (http://goo.gl/uhJCN). The main goal of this HOA series is to recreate a poster-session-like atmosphere here on G+, where researchers can present their recent findings. Presentations will be grouped by discipline, and individual presentations will last approximately 10–15 minutes.
Do you have a recent conference presentation, manuscript, or book that you would like to share with the Google+ community? Do you want to give your undergraduate or graduate students practice presenting their research? If yes, then let us know by filling out this short form: http://goo.gl/e0KPhE.
================================
Psychology Talks for Posterside Hangouts #1, Authors (Affiliations)
When audition dominates vision: Evidence from cross-modal statistical learning
+Chris Robinson (The Ohio State University at Newark)
Automatic selection of eye tracking variables in visual categorization for adults and infants
+Samuel Rivera (The Ohio State University at Columbus)
Foreign accent does not influence cognitive judgments
+Andre L. Souza (Concordia University) and +Art Markman (The University of Texas at Austin)
Positive mood may enhance cognitive flexibility: Evidence from category learning
+Paul Minda (The University of Western Ontario) and +Ruby Nadler (The University of Western Ontario)
The effects of aging on face perception
+Allison Sekuler (McMaster University)
================================
Abstracts and Links
When audition dominates vision: Evidence from cross-modal statistical learning
Presenting information to multiple sensory modalities sometimes facilitates and sometimes interferes with processing of this information. Research examining interference effects shows that auditory input often interferes with processing of visual input in young children (i.e., auditory dominance effect), whereas visual input often interferes with auditory processing in adults (i.e., visual dominance effect). The current study used a cross-modal statistical learning task to examine modality dominance in adults. Participants ably learned auditory and visual statistics when auditory and visual sequences were presented unimodally and when auditory and visual sequences were correlated during training. However, increasing task demands resulted in an important asymmetry: Increased task demands attenuated visual statistical learning, while having no effect on auditory statistical learning. These findings are consistent with auditory dominance effects reported in young children and have important implications for our understanding of how sensory modalities interact while learning the structure of cross-modal information.
Link to Poster: http://goo.gl/NfoMvg
Link to Manuscript: http://goo.gl/VFBVkD
Personal Website: http://goo.gl/glUXv2
Automatic selection of eye tracking variables in visual categorization for adults and infants
We present a computational approach for the selection of diagnostic eye tracking variables. Previous methods for the selection of eye tracking variables have been ad-hoc or hypothesis driven. In the absence of a good hypothesis, researchers are left to experiment with many alternatives. To resolve this problem, we use feature extraction and classification algorithms from machine learning to automatically identify the eye tracking variables that best correlate within sample eye tracking sequences belonging to the same category yet discriminate between categories. This approach allows us to extract the few (i.e., two to four) most diagnostic features from a pool of dozens. While previous work required the testing of a large number of hypotheses, we demonstrate how the proposed methodology yields the same result without the need to test a large number of alternative hypotheses. Instead, our method is data driven, i.e., the resulting model is obtained from the data. The proposed methodology was verified in a visual categorization task with adults and infants. Here, we presented infants and adults with a category learning task and tracked their eye movements. We extracted an over-complete set of eye tracking variables encompassing durations, probabilities, latencies, and the order of fixations and saccadic eye movements. The method identified a small set of variables that allows us to predict category learning among adults and 6- to 8-month-old infants and suggests that the looking strategies of adults and infants are distinct.
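The authors' actual source code is linked below. As a rough, hypothetical sketch of the general idea described in the abstract (feature selection plus cross-validated classification, with made-up variable names and data, not the authors' implementation), one might do something like this in Python with scikit-learn:

# Hypothetical sketch: select the few most diagnostic eye-tracking variables
# from a larger pool using feature selection + cross-validated classification.
# Variable counts and data are illustrative, not from the original study.
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# X: one row per participant, dozens of candidate eye-tracking variables
# (fixation durations, probabilities, latencies, saccade statistics, ...).
# y: group label to discriminate (e.g., learner vs. non-learner).
n_participants, n_variables = 60, 30
X = rng.normal(size=(n_participants, n_variables))
y = rng.integers(0, 2, size=n_participants)

clf = LogisticRegression(max_iter=1000)

# Greedily pick a small number (here 3) of variables that best separate the groups.
selector = SequentialFeatureSelector(clf, n_features_to_select=3, cv=5)
selector.fit(X, y)
selected = np.flatnonzero(selector.get_support())
print("Most diagnostic variable indices:", selected)

# Check how well the reduced variable set predicts group membership.
scores = cross_val_score(clf, X[:, selected], y, cv=5)
print("Cross-validated accuracy:", scores.mean())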
Source Code: http://goo.gl/bcVeOy
Link to Poster: http://goo.gl/U9WnbO
Link to Manuscript: http://goo.gl/b1xqfp
Personal website: http://goo.gl/M73p6B
Foreign accent does not influence cognitive judgments
A recent paper by Lev-Ari and Keysar (2010) reported that the processing fluency associated with non-native speech causes non-native speakers to sound less credible. The authors found that the same trivia statements were rated as less truthful when spoken by a non-native speaker of English. The present paper reports the results of three studies that attempted to replicate the findings of Lev-Ari and Keysar (2010) by focusing on processing fluency manipulations other than accent. Although we used virtually the same methodology as Lev-Ari and Keysar (2010), we failed to replicate the key finding that foreign-accented speech is less credible than native-accented speech. The implications of this finding are discussed.
Link to Manuscript: http://goo.gl/5hJFdR
Personal Website: http://goo.gl/EA3tEq
Positive mood may enhance cognitive flexibility: Evidence from category learning
Theories of mood and its effects on cognition suggest that positive mood may increase cognitive flexibility. This increased flexibility is associated with areas in the prefrontal cortex and the anterior cingulate cortex, both of which play crucial roles in hypothesis testing and rule selection. As such, cognitive tasks that rely on these behaviors may benefit from positive mood, whereas tasks that do not rely on these behaviors should not benefit from cognitive flexibility and/or positive mood. We explored this idea within a category-learning framework. Positive, neutral, and negative moods were induced in our subjects, who then learned either a rule-described or a non-rule-described category set. Subjects in the positive mood condition performed significantly better than subjects in the neutral or negative mood conditions when learning the rule-described categories. Mood had a less obvious effect on the learning of non-rule-described categories, but computational modelling suggested that subjects who learned in a positive mood were more likely to use the optimal learning strategy. These results have implications for theories of category learning, and also for understanding the effects of local, environmental factors like mood on performance.
Link to Manuscript: http://goo.gl/RhfXAL
Lab Website: http://goo.gl/oMHGmx
The effects of aging on face perception
Several studies have shown that face identification accuracy is lower in older than younger adults. This effect of aging might be due to age differences in holistic processing, which is thought to be an important component of human face processing. However, there is conflicting evidence as to whether holistic face processing is impaired in older adults. The current study therefore re-examined this issue by measuring response accuracy in a 1-of-4 face identification task and the composite face effect (CFE), a common index of holistic processing, in older adults. Consistent with previous reports, we found that face identification accuracy was lower in older adults than in younger adults tested in the same task. We also found a significant CFE in older adults that was similar in magnitude to the CFE measured in younger subjects with the same task. Finally, we found that there was a significant positive correlation between the CFE and face identification accuracy. This last result differs from the results obtained in a previous study that used the same tasks and which found no evidence of an association between the CFE and face identification accuracy in younger adults. Furthermore, the age difference was found with subtraction-, regression-, and ratio-based estimates of the CFE. The current findings are consistent with previous claims that older adults rely more heavily on holistic processing to identify objects in conditions of limited processing resources. Combined with results from previous behavioural and electrophysiological research, the results suggest that our longstanding assumptions about face processing should be reconsidered, and point to a qualitative shift in information processing for faces across the lifespan.
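For readers unfamiliar with these indices, here is a small, hypothetical Python sketch of one common way to compute subtraction-, ratio-, and regression-based estimates of a composite face effect from per-subject accuracy in aligned and misaligned composite trials. The data and exact formulas are illustrative assumptions, not necessarily those used in the paper.

# Hypothetical illustration of three ways to quantify a composite face effect (CFE)
# from per-subject accuracy; the paper's exact formulas may differ.
import numpy as np

rng = np.random.default_rng(1)
misaligned = rng.uniform(0.6, 0.9, size=20)              # accuracy, misaligned composites
aligned = misaligned - rng.uniform(0.05, 0.2, size=20)   # assumed holistic interference lowers aligned accuracy

# Subtraction-based: raw difference between conditions.
cfe_subtraction = misaligned - aligned

# Ratio-based: difference scaled by overall performance level.
cfe_ratio = (misaligned - aligned) / (misaligned + aligned)

# Regression-based: aligned accuracy residualized on misaligned accuracy
# (a more negative residual indicates a larger holistic effect).
slope, intercept = np.polyfit(misaligned, aligned, 1)
cfe_regression = aligned - (slope * misaligned + intercept)

print(cfe_subtraction.mean(), cfe_ratio.mean(), cfe_regression.mean())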
Primary paper: http://goo.gl/mXtrZt
Secondary Papers:
Background image source: http://goo.gl/6vJ0sH
A question: Wouldn’t presenting in this format affect your publication rights for journal submission, as the data would have been previously published? I have been told that I can’t put my conference posters up on our website as the data hasn’t been published yet, and this is a way bigger audience than has ever looked at my posters!
Great question Daniel Pass. In the US context, if the issue is copyright, I believe there is a difference between content (e.g. data) and the mode in which the content is presented (e.g. a graph of the data). You might have an issue if you present exactly the same graphic and then try to publish it. If you modify the graphic (but not necessarily the content) you are probably OK. I would check that for your case, and for the journals you would be approaching.
Great idea!
Sounds right to me, William Carter.
Hi Ertuğrul Karademir – Thanks for uploading your title and abstract into this form: http://goo.gl/e0KPhE. We’ll get back to you soon.
My encouragement to Daniel Pass and others: Don’t let FUD (fear, uncertainty, doubt) about licensing/copyright hold you back. Be reasonably informed, and be bold! As William Carter alludes, the issues are complex, but less restrictive than you might fear. Physicists have been putting their preprints (with data and graphs) online for decades at the arxiv, and many other disciplines are joining them. See http://arxiv.org/help/general
Sorry, too late.
Unfortunately the hangout will take place at 2:30 a.m. local time; that's too late for me.
I'm going to watch it the day after on YouTube.
I’m gonna miss out again, gotta work til midnight (E) that day (or night). Ask me next go around.
This is an awesome idea everybody! Thanks!
This is really interesting, Science on Google+: A Public Database! I’m excited to see what the upcoming themes are as well. Good luck everyone who is presenting today!
Malone Quantitative: If only the laws governing “publication physics” were universal. Sadly that’s not even true in the physics field – different journals definitely have different metrics. US copyright is distinct and manageable regardless of journal.
Hope my boys (7 weeks, and 17 months) agree to let me watch, looks very interesting. Yeah I know… Fat chance… But a guy can hope.
I’m experiencing the same delusional hope that my five-year-old will grant me the honor…However improbable, anything is possible 🙂
Oh lord, I was hoping by 2 1/2 they would understand… Woe is me… 😉
I’m not getting a video window on the event page. Is there a YouTube link?
Me neither, Tara Mulder.
Just saw that Allison Sekuler has a streaming window on her posts page. Check there!
Link at https://plus.google.com/u/0/112366735963271550830/posts/F3P2xnRGDT7
Here’s the video link: Science on Google+ Posterside Hangouts #1 Psychology.
Thanks for the link Tara Mulder and Robert Jacobson. I knew I was forgetting something.
I want to thank all of the presenters (Allison Sekuler, Andre L. Souza, Paul Minda, and Samuel Rivera), and everyone who watched the hangout and provided feedback. We have a lot of polishing to do but it’s a start. The next Posterside Hangout will have fewer speakers and individual presentations will be shortened to approximately 10 minutes. We would like to increase engagement within each talk (as opposed to waiting until the end to ask questions) and we would also like to increase engagement with people outside of the hangout. Feel free to provide additional feedback on this event post.
We are also getting ready to organize the next episode of Posterside Hangouts. Let us know if you would like to present your research by filling out this form: http://goo.gl/e0KPhE.
Hello on Facebook and Google.