The Method
Every analysis on this site runs through the same framework. It's built on published research in persuasion psychology, cognitive bias, and inoculation theory. This page explains what we detect, why it works on people, and where the science comes from.
Why This Exists
Social media is full of content designed to make you feel instead of think. Not all of it is wrong — some divisive posts contain true facts. But even true facts can be arranged to bypass your critical thinking through emotional escalation, tribal framing, and rhetorical sleight of hand.
This site doesn't tell you what to believe. It shows you how the content works on you — the specific techniques, the specific language, the specific psychological buttons being pushed. Once you see the machinery, it's harder for it to operate on you without your awareness.
That's not a metaphor. It's a research-backed intervention called inoculation theory.
Inoculation Theory — The Foundation
In the 1960s, psychologist William McGuire demonstrated that people could be made resistant to persuasion the same way they're made resistant to disease: by exposure to a weakened form of the attack. Show someone how a manipulative argument works — before they encounter it in the wild — and they develop cognitive antibodies against it.
Recent work has confirmed this holds for misinformation specifically. van der Linden and colleagues showed that explaining manipulation techniques in advance reduced susceptibility to misinformation about climate change. Roozenbeek and van der Linden demonstrated the same effect through a browser game that taught players to create fake news — players who understood the production techniques became better at recognizing them.
Every analysis on this site is a dose of that weakened form. You read the original post, you see the technique identified, and you understand the psychology behind it. Next time you scroll past something similar, the pattern is already flagged in your head.
The Research
- McGuire, W.J. (1964). "Inducing Resistance to Persuasion: Some Contemporary Approaches." Advances in Experimental Social Psychology, 1, 191–229. doi:10.1016/S0065-2601(08)60052-0
- van der Linden, S., Leiserowitz, A., Rosenthal, S., & Maibach, E. (2017). "Inoculating the Public against Misinformation about Climate Change." Global Challenges, 1(2). free full text
- Roozenbeek, J. & van der Linden, S. (2019). "Fake News Game Confers Psychological Resistance Against Online Misinformation." Palgrave Communications, 5(65). free full text
How an Analysis Gets Made
Each analysis follows the same pipeline. Content comes in, gets run through the full framework, goes through human review, and only publishes after approval.
```mermaid
flowchart TD
    A["Content submitted for analysis"] --> B["AI runs full framework against the content"]
    B --> C["Source verification: web searches for cited institutions, studies, experts"]
    C --> D["Structured analysis generated"]
    D --> E["Human reviewer reads the analysis"]
    E --> F{"Reviewer decision"}
    F -->|"Requests changes"| B
    F -->|"Rejects"| H["Analysis aborted"]
    F -->|"Approves"| G["Analysis published to site"]
    style A fill:#002868,color:#fff
    style G fill:#002868,color:#fff
    style H fill:#bf0a30,color:#fff
    style F fill:#ffd700,color:#000
```
The revision loop matters. The AI does the heavy lifting — pattern detection, source verification, research citation — but a human decides whether the analysis is fair, accurate, and clear before it goes live. No analysis auto-publishes.
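The revision loop above can be sketched as a simple state machine. This is an illustrative sketch only — the function names, `Decision` values, and revision budget are hypothetical, not the site's actual implementation.

```python
from enum import Enum

class Decision(Enum):
    APPROVE = "approve"
    REQUEST_CHANGES = "request_changes"
    REJECT = "reject"

def run_pipeline(content, analyze, review, max_revisions=3):
    """Loop the analysis through human review until the reviewer approves,
    rejects, or the revision budget runs out. Nothing auto-publishes."""
    feedback = None
    for _ in range(max_revisions):
        analysis = analyze(content, feedback)   # AI: framework + source checks
        decision, feedback = review(analysis)   # human gate before publication
        if decision is Decision.APPROVE:
            return ("published", analysis)
        if decision is Decision.REJECT:
            return ("aborted", None)
    return ("aborted", None)                    # budget exhausted, no publish
```

The key design point: `review` sits between `analyze` and any published output on every iteration, so the human gate cannot be bypassed.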
What Each Analysis Detects
Every post gets run through the same checklist. Here's what we're looking for, why it matters, and where the science lives.
Emotional Architecture
Divisive content doesn't just contain emotion — it's built on a deliberate emotional sequence. The opening hooks a specific feeling (fear, outrage, contempt, pride). The middle escalates it. The ending provides a resolution that makes the reader feel validated, righteous, or paranoid.
This matters because narrative transportation theory shows that when people are emotionally absorbed in content, they engage in less counterarguing. The emotional flow keeps you inside the story instead of evaluating it. In each analysis, we name the activation emotion, trace the escalation, and identify the exit ramp — the rhetorical move that sends you back to your feed feeling like you learned something.
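One way to record that three-part sequence is as a small structured record. The field names below are hypothetical illustrations, not the site's actual analysis schema.

```python
from dataclasses import dataclass

@dataclass
class EmotionalArc:
    """Hypothetical record of a post's emotional sequence."""
    activation_emotion: str   # the feeling the opening hooks
    escalation_quotes: list   # phrases that raise the temperature, in order
    exit_ramp: str            # the rhetorical move that resolves the arc

    def summary(self) -> str:
        steps = " -> ".join([self.activation_emotion, *self.escalation_quotes])
        return f"{steps} | exit ramp: {self.exit_ramp}"
```

Tracing the arc as ordered quotes, rather than a single label, is what lets an analysis show the escalation instead of merely asserting it.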
The Research
- Nabi, R.L. & Green, M.C. (2015). "The Role of a Narrative's Emotional Flow in Promoting Persuasive Outcomes." Media Psychology, 18(2), 137–162. doi · alt PDF
- Green, M.C. & Brock, T.C. (2000). "The Role of Transportation in the Persuasiveness of Public Narratives." Journal of Personality and Social Psychology, 79(5), 701–721. free PDF
Cialdini's Principles of Influence
Robert Cialdini identified seven principles that make people say yes: reciprocity, commitment/consistency, social proof, authority, liking, scarcity, and unity. These aren't inherently manipulative — they're how humans have always built trust and cooperation. But in divisive content, they're weaponized.
Unity is the workhorse of divisive content. Any post that constructs an us-vs-them boundary is using it. "We" get it; "they" don't. "Real Americans" vs. the unnamed others. Scarcity shows up as forbidden-knowledge framing: "they don't want you to see this." Social proof appears as manufactured consensus: "everyone is waking up." In each analysis, we flag which principles are active and which one is doing the heavy lifting.
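A naive phrase-based flagger illustrates the idea. Real detection needs far more than keyword matching; the cue lists below are toy examples, not the site's actual patterns.

```python
import re

# Toy cue phrases per Cialdini principle -- illustrative only, not exhaustive.
PRINCIPLE_CUES = {
    "unity": [r"\breal americans\b", r"\bpeople like us\b"],
    "scarcity": [r"they don'?t want you to see", r"\bbefore it'?s deleted\b"],
    "social_proof": [r"\beveryone is waking up\b", r"\bmillions know\b"],
}

def flag_principles(text):
    """Return the principles whose cue phrases appear in the text."""
    lowered = text.lower()
    return sorted(
        name for name, patterns in PRINCIPLE_CUES.items()
        if any(re.search(p, lowered) for p in patterns)
    )
```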
The Research
- Cialdini, R.B. (2006). Influence: The Psychology of Persuasion (Revised Ed.). Harper Business. Amazon
- Cialdini, R.B. (2016). Pre-Suasion: A Revolutionary Way to Influence and Persuade. Simon & Schuster. Amazon
Source Existence Check
When a post cites a study, names an institution, or quotes an expert, we check whether that source actually exists. This is not fact-checking — we're not evaluating whether a real institution's findings are correct. We're checking whether the authority being invoked is real or fabricated.
The distinction matters. Selective citation is framing — a real source used to support a predetermined conclusion. Fabricated authority is construction — the author invented credibility that doesn't exist. When the "Harvard study" doesn't exist, or the "Institute for American Values" has no web presence, that alone tells you something definitive about the author's intent.
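The existence check reduces to a simple loop over invoked authorities. In this sketch, `lookup` is a hypothetical stand-in for whatever web search the verifier performs; it is not a real API.

```python
def check_sources(citations, lookup):
    """Record whether each invoked authority leaves any trace at all.
    `lookup` stands in for a web search and returns a list of hits."""
    return {
        name: "exists" if lookup(name) else "no trace found"
        for name in citations
    }
```

Note what the result does and does not say: "exists" means only that the authority is real, not that it supports the post's conclusion — that question belongs to framing analysis.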
Thought-Terminating Clichés
Psychiatrist Robert Jay Lifton coined this term in 1961 while studying ideological totalism. A thought-terminating cliché is a phrase that sounds like it's encouraging thought but actually shuts it down: "Wake up." "Do your own research." "Let that sink in." "If you don't see it, I can't help you."
These phrases create the feeling of insight without requiring any actual analysis. They also work as in-group signals — if you "get it," you're one of us. Questioning the cliché marks you as an outsider. In each analysis, we quote the cliché and name the specific question it's preventing the reader from asking.
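Because the analysis pairs each cliché with the question it suppresses, the data naturally fits a mapping. The pairings below are illustrative examples, not the site's full catalogue.

```python
# Toy mapping from cliché to the question it shuts down -- illustrative pairs.
CLICHES = {
    "wake up": "Wake up to what specific claim, and on what evidence?",
    "do your own research": "Which sources, and why weren't they cited here?",
    "let that sink in": "What exactly is the claim that should sink in?",
}

def find_cliches(text):
    """Return (cliché, suppressed question) pairs found in the text."""
    lowered = text.lower()
    return [(phrase, question) for phrase, question in CLICHES.items()
            if phrase in lowered]
```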
The Research
- Lifton, R.J. (1961). Thought Reform and the Psychology of Totalism: A Study of "Brainwashing" in China. W.W. Norton. Chapter 22: "Ideological Totalism." Amazon
Moral Foundations Targeting
Jonathan Haidt's moral foundations theory identifies six psychological "taste receptors" for morality: Care/Harm, Fairness/Cheating, Loyalty/Betrayal, Authority/Subversion, Sanctity/Degradation, and Liberty/Oppression. Everyone has all six, but people weight them differently — and political content is designed to hit specific foundations for specific audiences.
Content targeting Care + Fairness skews toward liberal audiences. Content targeting Loyalty + Authority + Sanctity skews toward conservative audiences. Liberty/Oppression gets used across the spectrum. Identifying which foundation a post activates tells you who it's designed for — and whether the moral framing is organic to the issue or grafted on to provoke a reaction.
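Foundation targeting can be approximated by scoring a post's vocabulary against word lists. The lists below are toy examples; a real system would use something like the Moral Foundations Dictionary or a trained classifier.

```python
from collections import Counter

# Toy vocabulary per foundation -- illustrative only.
FOUNDATION_WORDS = {
    "care": {"protect", "children", "suffering"},
    "loyalty": {"betray", "traitor", "our"},
    "authority": {"law", "order", "respect"},
}

def foundation_profile(text):
    """Count hits per moral foundation in the text's vocabulary."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    counts = Counter()
    for foundation, vocab in FOUNDATION_WORDS.items():
        counts[foundation] = sum(1 for w in words if w in vocab)
    return counts
```

The profile, not any single word, is the signal: a post loading heavily on one cluster of foundations is tuned for the audience that weights that cluster.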
The Research
- Haidt, J. (2012). The Righteous Mind: Why Good People Are Divided by Politics and Religion. Vintage. Amazon
- Graham, J., Haidt, J., & Nosek, B.A. (2009). "Liberals and Conservatives Rely on Different Sets of Moral Foundations." Journal of Personality and Social Psychology, 96(5), 1029–1046. free PDF
Framing Effects
Daniel Kahneman and Amos Tversky demonstrated that how information is framed changes how people evaluate it — even when the underlying facts are identical. A policy described as "saving 200 out of 600 people" gets different support than one described as "400 people will die," despite being the same outcome.
In divisive content, framing shows up as selective inclusion and exclusion of true facts. Two posts about the same event can both be factually accurate and lead to opposite conclusions, because each omits the facts that complicate its narrative. In each analysis, we describe the frame and then describe what an alternative frame of the same facts would look like. The contrast is the teaching moment.
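The classic gain/loss example can be made concrete in a few lines: both functions below describe the identical outcome, and only the frame differs.

```python
def gain_frame(saved: int, total: int) -> str:
    """Frame the outcome as lives saved."""
    return f"{saved} of {total} people will be saved"

def loss_frame(saved: int, total: int) -> str:
    """Frame the same outcome as lives lost."""
    return f"{total - saved} people will die"
```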
The Research
- Kahneman, D. & Tversky, A. (1984). "Choices, Values, and Frames." American Psychologist, 39(4), 341–350. free PDF
- Kahneman, D. (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux. Amazon
Identity-Threat Construction
The most effective divisive content doesn't just argue a point — it makes disagreeing feel like a threat to your identity. "Any real parent would..." "If you care about freedom, you already know..." The content is structured so that questioning it means questioning who you are.
When identity is threatened, people shift from analytical processing to defensive processing. They stop evaluating the claim and start protecting the self. Self-affirmation theory shows this defensive response can be reduced by affirming the self in an unrelated domain first — but social media provides no such buffer. In each analysis, we identify the identity being invoked and describe how the content makes disagreement feel like self-betrayal.

The Research
- Sherman, D.K. & Cohen, G.L. (2006). "The Psychology of Self-Defense: Self-Affirmation Theory." Advances in Experimental Social Psychology, 38, 183–242. free PDF
- Steele, C.M. (1988). "The Psychology of Self-Affirmation: Sustaining the Integrity of the Self." Advances in Experimental Social Psychology, 21, 261–302. doi · alt PDF
FUD — Fear, Uncertainty, Doubt
Originally a corporate disinformation strategy, FUD is now everywhere. The hallmark: a gap between the emotional payload and the actual claim. "I'm not saying X, I'm just asking questions." "Isn't it interesting that..." followed by an implication with no stated claim. "I'll let you draw your own conclusions" after presenting curated information.
FUD is designed to be irrefutable because it makes no specific claim. You can't fact-check a feeling. The goal is to lower trust in a target without taking on the burden of proof.
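Because FUD makes no specific claim, a sketch can only count its hedging cues rather than check any assertion. The marker list below is a toy illustration, not the site's actual pattern set.

```python
import re

# Hedged-implication markers -- toy examples of the claim-free payload.
FUD_MARKERS = [
    r"\bjust asking questions\b",
    r"\bisn'?t it interesting\b",
    r"\bdraw your own conclusions\b",
    r"\bi'?m not saying\b",
]

def fud_score(text):
    """Count FUD markers; the claim itself is unstated, so cues are all there is."""
    lowered = text.lower()
    return sum(1 for p in FUD_MARKERS if re.search(p, lowered))
```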
What This Is Not
- Not fact-checking. We analyze structure, not truth. When a factual claim is relevant to the analysis (e.g., a fabricated source), we flag it as a source existence question — we don't render verdicts on contested claims.
- Not political commentary. The same framework applies to content from left, right, center, and everywhere else. A post about vaccines and a post about gun rights get the same analytical treatment.
- Not telling you what to think. The goal is to make the machinery visible. What you do with that visibility is your business.
Further Reading
If you want to go deeper, these are the primary sources this framework draws from. The books are accessible to non-academics. Most of the papers are freely available — we link to open-access versions where they exist.
Books
- Cialdini, R.B. — Influence: The Psychology of Persuasion — The definitive guide to why people say yes. If you read one book on this list, make it this one.
- Kahneman, D. — Thinking, Fast and Slow — How your brain takes shortcuts and how those shortcuts get exploited.
- Haidt, J. — The Righteous Mind — Why people with good intentions can look at the same evidence and reach opposite moral conclusions.
- Lifton, R.J. — Thought Reform and the Psychology of Totalism — The original study of ideological language control. Chapter 22 on thought-terminating clichés is the relevant section.
- Cialdini, R.B. — Pre-Suasion — How the moment before a message changes how you receive it.
Papers
- Green & Brock (2000) — Narrative transportation and persuasion (PDF)
- Nabi & Green (2015) — Emotional flow in persuasive narratives · alt
- Kahneman & Tversky (1984) — Choices, values, and frames (PDF)
- Graham, Haidt, & Nosek (2009) — Moral foundations and political orientation (PDF)
- Sherman & Cohen (2006) — Self-affirmation and defensive processing (PDF)
- Steele (1988) — The psychology of self-affirmation · alt
- McGuire (1964) — Inducing resistance to persuasion (paywalled)
- van der Linden et al. (2017) — Inoculation against climate misinformation (open access)
- Roozenbeek & van der Linden (2019) — Fake news game and psychological resistance (open access)
- Paul & Matthews (2016) — The Russian "Firehose of Falsehood" propaganda model (RAND, free)