Computational Sociology
Christopher Barrie
Why would platforms be polarizing?
Echo chambers/filter bubbles?
Moral outrage?
Exposure to out-groups?
…through reinforcing mechanisms of confirmation bias and information enclaves (think back to Week 2)
Humans predisposed to engaging with outrage?
Evolutionary anthropology suggests that outrage helps enforce group boundaries
Online platforms biased toward outrage?
Reinforcement through engagement leads to amplification of outrage
MAD Model by Brady, Crockett, and Van Bavel (2020)
Motivations: “group-identity-based motivations to share moral-emotional content;”
Attention: “that such content is especially likely to capture our attention;”
Design: “the design of social-media platforms amplifies our natural motivational and cognitive tendencies to spread such content”
Backfire effects: consider the echo-chamber and out-group-exposure theories…
So what questions should we be asking?
So why would this make sense?
We are attracted to the lurid, the shocking, the scandalous
We select into this content →
The algorithm feeds us more of it back
In a reinforcing cycle
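The reinforcing cycle above can be sketched as a toy simulation. This is my own illustrative assumption, not a model from the lecture or the literature: 20% of posts are outrage-laden, users engage with outrage at a higher rate (demand side), and a ranking algorithm allocates impressions in proportion to past engagement (supply side). The engagement rates and post counts are arbitrary.

```python
import random

random.seed(1)

# Assumed setup: 100 posts, the first 20 outrage-laden; each starts with
# one engagement so the ranking has something to work with.
posts = [{"outrage": i < 20, "engagements": 1} for i in range(100)]

def engage_prob(post):
    # Demand side: assumed higher engagement rate for outrage content.
    return 0.4 if post["outrage"] else 0.1

def run_round(posts, impressions=500):
    # Supply side: the "algorithm" shows each post in proportion to its
    # accumulated engagement, closing the feedback loop.
    total = sum(p["engagements"] for p in posts)
    for post in posts:
        shown = round(impressions * post["engagements"] / total)
        for _ in range(shown):
            if random.random() < engage_prob(post):
                post["engagements"] += 1

for _ in range(30):
    run_round(posts)

share = sum(p["engagements"] for p in posts if p["outrage"]) / \
        sum(p["engagements"] for p in posts)
print(round(share, 2))
```

Even though outrage posts are only 20% of the supply, their share of total engagement grows round over round: a higher per-impression engagement rate earns more impressions, which earn more engagement.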
But these kinds of arguments have been made before too…
Think back to high-choice media environments…
There can be supply-side effects (e.g., the YouTube recommendation algorithm; cable TV channel choice)
And also demand-side effects (people select into content they are predisposed to like)
Is it as bad as they say it is?
Maybe; maybe not.
What can we do better?
Get observational data (often proprietary) from companies themselves
Consider competing influences (especially for outcomes such as polarization)
Consider non-US contexts
This week: