The Method

Every analysis on this site runs through the same framework. It's built on published research in persuasion psychology, cognitive bias, and inoculation theory. This page explains what we detect, why it works on people, and where the science comes from.

Why This Exists

Social media is full of content designed to make you feel instead of think. Not all of it is wrong — some divisive posts contain true facts. But even true facts can be arranged to bypass your critical thinking through emotional escalation, tribal framing, and rhetorical sleight of hand.

This site doesn't tell you what to believe. It shows you how the content works on you — the specific techniques, the specific language, the specific psychological buttons being pushed. Once you see the machinery, it's harder for it to operate on you without your awareness.

That's not a metaphor. It's a research-backed intervention called inoculation theory.

Inoculation Theory — The Foundation

In the 1960s, psychologist William McGuire demonstrated that people could be made resistant to persuasion the same way they're made resistant to disease: by exposure to a weakened form of the attack. Show someone how a manipulative argument works — before they encounter it in the wild — and they develop cognitive antibodies against it.

Recent work has confirmed this holds for misinformation specifically. van der Linden and colleagues showed that explaining manipulation techniques in advance reduced susceptibility to misinformation about climate change. Roozenbeek and van der Linden demonstrated the same effect through a browser game that taught players to create fake news — players who understood the production techniques became better at recognizing them.

Every analysis on this site is a dose of that weakened form. You read the original post, you see the technique identified, and you understand the psychology behind it. Next time you scroll past something similar, the pattern is already flagged in your head.


How an Analysis Gets Made

Each analysis follows the same pipeline. Content comes in, runs through the full framework, goes through human review, and is published only after approval.

  flowchart TD
      A["Content submitted for analysis"] --> B["AI runs full framework against the content"]
      B --> C["Source verification: web searches for cited institutions, studies, experts"]
      C --> D["Structured analysis generated"]
      D --> E["Human reviewer reads the analysis"]
      E --> F{"Reviewer decision"}
      F -->|"Requests changes"| B
      F -->|"Rejects"| H["Analysis aborted"]
      F -->|"Approves"| G["Analysis published to site"]
      style A fill:#002868,color:#fff
      style G fill:#002868,color:#fff
      style H fill:#bf0a30,color:#fff
      style F fill:#ffd700,color:#000

The revision loop matters. The AI does the heavy lifting — pattern detection, source verification, research citation — but a human decides whether the analysis is fair, accurate, and clear before it goes live. No analysis auto-publishes.
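
In code terms, the loop is simple. Here is a minimal sketch with placeholder pieces; the function names, the Decision statuses, and the stub bodies are illustrative, not our production code:

    from dataclasses import dataclass

    @dataclass
    class Decision:
        status: str        # "approve", "reject", or "changes"
        notes: str = ""

    def run_framework(content: str, notes: str = "") -> str:
        # Placeholder for the AI pass: pattern detection, source checks, citations.
        return f"structured analysis of {content!r} (revision notes: {notes!r})"

    def human_review(draft: str) -> Decision:
        # Placeholder for the human step; in practice a person reads the draft.
        return Decision(status="approve")

    def analyze(content: str) -> str | None:
        notes = ""
        while True:
            draft = run_framework(content, notes)
            decision = human_review(draft)
            if decision.status == "approve":
                return draft           # published to the site
            if decision.status == "reject":
                return None            # analysis aborted
            notes = decision.notes     # "requests changes" loops back to the AI step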


What Each Analysis Detects

Every post gets run through the same checklist. Here's what we're looking for, why it matters, and where the science lives.

Emotional Architecture

Divisive content doesn't just contain emotion — it's built on a deliberate emotional sequence. The opening hooks a specific feeling (fear, outrage, contempt, pride). The middle escalates it. The ending provides a resolution that makes the reader feel validated, righteous, or paranoid.

This matters because narrative transportation theory shows that when people are emotionally absorbed in content, they engage in less counterarguing. The emotional flow keeps you inside the story instead of evaluating it. In each analysis, we name the activation emotion, trace the escalation, and identify the exit ramp — the rhetorical move that sends you back to your feed feeling like you learned something.
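
A rough sketch of how those three elements can be recorded as a structure; the field names are illustrative, not our actual schema:

    from dataclasses import dataclass

    @dataclass
    class EmotionalArchitecture:
        activation_emotion: str   # what the opening hooks: fear, outrage, contempt, pride
        escalation: list[str]     # the moves, in order, that raise the temperature
        exit_ramp: str            # the closing move that sends you back feeling validated

    example = EmotionalArchitecture(
        activation_emotion="outrage",
        escalation=["anecdote of a wrong", "claim that it is part of a hidden pattern"],
        exit_ramp="now you know what they don't want you to see",
    )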

Cialdini's Principles of Influence

Robert Cialdini identified seven principles that make people say yes: reciprocity, commitment/consistency, social proof, authority, liking, scarcity, and unity. These aren't inherently manipulative — they're how humans have always built trust and cooperation. But in divisive content, they're weaponized.

Unity is the workhorse of divisive content. Any post that constructs an us-vs-them boundary is using it. "We" get it; "they" don't. "Real Americans" vs. the unnamed others. Scarcity shows up as forbidden-knowledge framing: "they don't want you to see this." Social proof appears as manufactured consensus: "everyone is waking up." In each analysis, we flag which principles are active and which one is doing the heavy lifting.
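
As a sketch, the mapping from cue language to principles looks something like this; the phrases are the examples above, not an exhaustive lexicon:

    CIALDINI_CUES = {
        "unity":        ["we get it", "real americans", "people like us"],
        "scarcity":     ["they don't want you to see this", "before this gets taken down"],
        "social proof": ["everyone is waking up", "millions already know"],
        "authority":    ["experts agree", "a harvard study found"],
    }

    def active_principles(text: str) -> list[str]:
        lowered = text.lower()
        return [name for name, cues in CIALDINI_CUES.items()
                if any(cue in lowered for cue in cues)]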

Source Existence Check

When a post cites a study, names an institution, or quotes an expert, we check whether that source actually exists. This is not fact-checking — we're not evaluating whether a real institution's findings are correct. We're checking whether the authority being invoked is real or fabricated.

The distinction matters. Selective citation is framing — a real source used to support a predetermined conclusion. Fabricated authority is construction — the author invented credibility that doesn't exist. When the "Harvard study" doesn't exist, or the "Institute for American Values" has no web presence, that alone tells you something definitive about the author's intent.
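
In outline the check is simple. This sketch assumes a hypothetical web_search helper, since the real step runs live web searches:

    def source_exists(name: str, web_search) -> bool:
        """True if the named institution, study, or expert has any independent
        footprint online; this says nothing about whether its claims are correct."""
        results = web_search(f'"{name}"')
        # A fabricated authority typically has no presence outside the post
        # that cites it; a real source shows up somewhere independent.
        return len(results) > 0

    def check_cited_sources(cited: list[str], web_search) -> dict[str, bool]:
        return {name: source_exists(name, web_search) for name in cited}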

Thought-Terminating Clichés

Psychiatrist Robert Jay Lifton coined this term in 1961 while studying ideological totalism. A thought-terminating cliché is a phrase that sounds like it's encouraging thought but actually shuts it down: "Wake up." "Do your own research." "Let that sink in." "If you don't see it, I can't help you."

These phrases create the feeling of insight without requiring any actual analysis. They also work as in-group signals — if you "get it," you're one of us. Questioning the cliché marks you as an outsider. In each analysis, we quote the cliché and name the specific question it's preventing the reader from asking.
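
A minimal sketch of the check, using the phrases above and pairing each with the question it shuts down:

    CLICHES = {
        "wake up": "Wake up to what, specifically, and on what evidence?",
        "do your own research": "Which sources, and why do they outweigh the ones being dismissed?",
        "let that sink in": "What exactly is the claim, and does it survive a second reading?",
        "if you don't see it, i can't help you": "What would change the author's mind?",
    }

    def find_cliches(text: str) -> list[tuple[str, str]]:
        lowered = text.lower()
        # Pair each cliché found with the question it is preventing the reader from asking.
        return [(phrase, blocked) for phrase, blocked in CLICHES.items() if phrase in lowered]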

Moral Foundations Targeting

Jonathan Haidt's moral foundations theory identifies six psychological "taste receptors" for morality: Care/Harm, Fairness/Cheating, Loyalty/Betrayal, Authority/Subversion, Sanctity/Degradation, and Liberty/Oppression. Everyone has all six, but people weight them differently — and political content is designed to hit specific foundations for specific audiences.

Content targeting Care + Fairness skews toward liberal audiences. Content targeting Loyalty + Authority + Sanctity skews toward conservative audiences. Liberty/Oppression gets used across the spectrum. Identifying which foundation a post activates tells you who it's designed for — and whether the moral framing is organic to the issue or grafted on to provoke a reaction.
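
A toy keyword profile makes the idea concrete; real analysis reads framing in context, but the shape of the check is roughly this, with an invented keyword list:

    FOUNDATION_KEYWORDS = {
        "care/harm":            ["suffering", "protect", "vulnerable", "cruel"],
        "fairness/cheating":    ["rigged", "cheated", "deserve", "fair share"],
        "loyalty/betrayal":     ["traitor", "our country", "stand together", "sellout"],
        "authority/subversion": ["law and order", "respect", "chaos", "undermine"],
        "sanctity/degradation": ["pure", "disgusting", "defile", "sacred"],
        "liberty/oppression":   ["tyranny", "control you", "freedom", "mandate"],
    }

    def foundation_profile(text: str) -> dict[str, int]:
        lowered = text.lower()
        return {name: sum(lowered.count(kw) for kw in kws)
                for name, kws in FOUNDATION_KEYWORDS.items()}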

Framing Effects

Daniel Kahneman and Amos Tversky demonstrated that how information is framed changes how people evaluate it — even when the underlying facts are identical. A policy described as "saving 200 out of 600 people" gets different support than one described as "400 people will die," despite being the same outcome.

In divisive content, framing shows up as selective inclusion and exclusion of true facts. Two posts about the same event can both be factually accurate and lead to opposite conclusions, because each omits the facts that complicate its narrative. In each analysis, we describe the frame and then describe what an alternative frame of the same facts would look like. The contrast is the teaching moment.
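
The classic example reduces to a single subtraction. Same facts, two frames:

    total_at_risk = 600
    saved = 200
    will_die = total_at_risk - saved   # 400: the identical outcome

    gain_frame = f"{saved} out of {total_at_risk} people will be saved"
    loss_frame = f"{will_die} people will die"

    # Same numbers, same arithmetic, opposite emotional pull.
    print(gain_frame)
    print(loss_frame)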

Identity-Threat Construction

The most effective divisive content doesn't just argue a point — it makes disagreeing feel like a threat to your identity. "Any real parent would..." "If you care about freedom, you already know..." The content is structured so that questioning it means questioning who you are.

When identity is threatened, people shift from analytical processing to defensive processing. They stop evaluating the claim and start protecting the self. Self-affirmation theory shows this defensive response can be reduced when people first affirm an unrelated value they care about — but social media provides no such buffer. In each analysis, we identify the identity being invoked and describe how the content makes disagreement feel like self-betrayal.

FUD — Fear, Uncertainty, Doubt

Originally a corporate disinformation strategy, FUD is now everywhere. The hallmark: a gap between the emotional payload and the actual claim. "I'm not saying X, I'm just asking questions." "Isn't it interesting that..." followed by an implication with no stated claim. "I'll let you draw your own conclusions" after presenting curated information.

FUD is designed to be irrefutable because it makes no specific claim. You can't fact-check a feeling. The goal is to lower trust in a target without taking on the burden of proof.


What This Is Not


Further Reading

If you want to go deeper, these are the primary sources this framework draws from. The books are accessible to non-academics. Most of the papers are freely available — we link to open-access versions where they exist.

Books

Papers