The outstanding author Robert Wright —who writes at the Nonzero Newsletter
and whom I heard about from my Sucks to Suck collaborator David Cole— did me an immense favor in his most recent post. He discussed something I’ve been trying to write about for years, which I call “reactive extremification”:

In case you’re curious about how intensely invested Musk is in his war on wokeness, here’s a tweet of his: “The woke mind virus is either defeated or nothing else matters.” … You may be thinking, “But Elon’s just being hyperbolic for the sake of effect!” Guess again. Here’s a follow-up to the previous tweet: “That the mind virus is pushing humanity towards extinction is not hyperbole.”
That last tweet includes what Musk considers supporting evidence. The tweet links to a New York Times profile of a man named Les Knight, founder of the Voluntary Human Extinction movement, which aims to persuade all humans to forego reproduction so that our species will go extinct…
The Times piece, after mentioning some other people who are concerned about our impact on the ecosystem, says, “But it is rare to find anyone who publicly goes as far as Mr. Knight.” Yeah, that’s because it’s rare to find someone who privately goes as far as Mr. Knight—which in turn is because Mr. Knight is a nut! Yet Elon Musk is depicting Knight’s mission—the extinction of the entire human species—as a straightforward expression of wokism.
If you had to rank tribal Twitter moves in order of perniciousness, the one Musk employed here would have a shot at the number one slot. It consists of finding an example of extreme and atypical behavior in the other tribe and depicting it as typical of the other tribe. So, for example, at the height of the pandemic you might see a tweet featuring a video of someone either (1) screaming about having to wear a mask in a supermarket or (2) screaming at someone who isn’t wearing a mask in a supermarket, along with a comment about how crazy “Republicans” or “Democrats” are—even though one thing almost all Republicans and Democrats have in common is that they never scream in supermarkets.
I agree with Wright that this is pernicious1, but I think he’s only described the first half of the problem. The next step in this pattern is as important: when we come to see some scaled group as a monolith defined by its extremes, we tend to reactively extremify ourselves, as Musk surely has. In doing so, we eventually become the cause of someone else’s reactive extremification, a kind of resonance feedback loop whose effects one sees everywhere online. It works like this:
Over time, we’re exposed to some number of insane or extremist posts from the fringe of an opposing group (in whatever cultural or political or demographic battle).
After whatever amount of exposure, we form a model of the entire group as being insane or extremist (or: as insufficiently opposed to the insanity or extremism in their ranks).2
We eventually develop insane or extreme attitudes and positions in response to this seemingly massive existential threat + repeated violations of our norms and values; and sometimes we post these new positions, or post with this new attitude.
We catalyze someone else’s reactive extremification with our own.
This is what’s happening online, every minute of every day, to billions of people. As even small percentages of the whole pop off, freak out, and wind up in hateful, reactive postures, the timelines and feeds and videos overflow with their eruptions. This is why it feels like in the last ten or so years, the world has shattered into countless weird factions losing their minds over absolutely everything.
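For readers who think better in code, here is the loop as a toy simulation in Python. Every name and number below (group sizes, posting rates, update speeds) is an assumption I invented for illustration, not a measurement of any real platform; it is a sketch of the ratchet, nothing more:

```python
import random

# Toy model of mutual reactive extremification. Group sizes and
# rates are invented assumptions for illustration, not measurements.
random.seed(0)

GROUP_SIZE = 1000          # members per group
BASE_POST_PROB = 0.02      # daily chance a fully moderate member posts
EXTREMITY_BOOST = 10       # extremity multiplies the odds of posting
UPDATE_RATE = 0.01         # how far observers drift toward what they see

def simulate(days=360):
    # extremity is a number in [0, 1]; both groups start mostly moderate
    a = [random.betavariate(1, 9) for _ in range(GROUP_SIZE)]
    b = [random.betavariate(1, 9) for _ in range(GROUP_SIZE)]
    for day in range(days):
        for us, them in ((a, b), (b, a)):
            # what surfaces from the other side skews toward its extremes,
            # because extremity raises the odds of posting at all
            posts = [x for x in them
                     if random.random() < BASE_POST_PROB * (1 + EXTREMITY_BOOST * x)]
            if not posts:
                continue
            worst = max(posts)  # the worst post is the one we remember
            for i in range(len(us)):
                # each member ratchets a little toward the perceived threat
                us[i] += UPDATE_RATE * (worst - us[i])
        if day % 90 == 0:
            print(f"day {day:3d}: mean extremity "
                  f"A={sum(a)/len(a):.2f}  B={sum(b)/len(b):.2f}")

simulate()
```

Run it and both groups drift toward the extremes together, even though everyone starts moderate. The only ingredients are that extremity raises the odds of posting and that the worst post is the one we remember.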
In his description, Wright imputes some agency to this process: “…finding an example of extreme and atypical behavior in the other tribe and depicting it as typical.” But I suspect it is not something most of us can control. Our exposure is incidental, unsought, a function of scale and not deliberate attention.3 In the immense crowds of users on major platforms, we cannot help but notice that many people wish us grave harm, more than we can possibly safely ignore.
Being a black woman online
I think often of a conversation I once had with my friend L. about what it was like being a black woman online. L. grew up in the American south and had witnessed ample racism, but her sense from experience was that it was relatively rare on the whole and that “most white people are not racist.” When she got online, however, that sense was upended. Over the course of several years, she saw thousands of racist posts and comments. It’s probable that these ran the gamut from
real racists with real venom posting hateful and disgusting content; to
edgelord teen crackers being “provocative” and playing, as children do, with themes and concepts and histories they do not understand; in other words: trolls; to
foreign government agents shitposting to increase American racial discord; to
confused boomers with dated language accidentally being offensive but probably without deep racial animus; to
wasted or mentally ill people venting their spleen and employing racist language in moments of derangement; to
guileless yet offensive comments from people of other races and nationalities who simply don’t know the contours and details of the American racial scene.
But the summed effect was profound: it was all but impossible for L. not to wonder if in fact racism was much more widespread than she’d thought. Indeed, she wondered if the non-black people she’d known as friends harbored secret racial hatred they suppressed in daily life but gave expression to online; she told me it made her look at white people differently. L. wasn’t trying to “find” racists and “depict” them as representative; instead, she was exposed to a quantity of racism so voluminous that it seemed impossible for it not to be representative, and she was deeply troubled by it.4
L. isn’t an especially reactive person, but it’s not hard to imagine someone in her position, seeing their umpteen-thousandth racist comment under a news article and finally yielding to two impulses: first, to update their prior on “whether white people are friend or foe”; and second, to post something themselves expressing their frustration and exhaustion: “This world would be better off without white people,” for example.5
The moment they do, the cycle starts again. Somewhere, there’s a white person now seeing their umpteen-thousandth tweet6 about how awful white people are. On this day, for whatever range of reasons —rough day, rough year; in the middle of a bender; drifting into senility; mentally ill; politically intemperate; whatever— they finally react extremely and post their own poisonous take: “If black people hate white people so much, maybe they should go back to where they came from.” And somewhere, a black person like L. sees this on the timeline, and it goes into their model…
More than anything else, to me, this is what the Internet is: mutual reactive extremification running amok in countless domains in an accelerating and self-reinforcing process.
New scales, old minds
Humans are sense-makers. We form predictive heuristics —rules of thumb— from our experiences, especially those that repeat, and indeed part of the gambit of our species is that we can do this with relatively little data: we can take a small set of instances and imagine and speculate from them, running little simulations in our big brains all day long. We evolved the capabilities to observe, pattern-match, speculate, and respond to the patterns we perceive at fairly small social scales; at that point in our species’ history, one might have known just tens or hundreds of humans in an entire lifetime. If you met seven people with a certain haircut who all attacked you, it was probably reasonable to develop the sense that “people with this haircut are dangerous,” especially given the high costs of error. We are quick to categorize as enemies types of individuals who repeatedly threaten or harm us, and the number of incidents we need to do so isn’t especially high. There was no reason for it to be before we achieved the scale of civilization.
Today, the Internet —civilization’s crown jewel and toxic waste dump— has brought scale beyond anything in human history. Two billion people are on Facebook alone, when two billion people have never been on, in, around, on top of, along, or beside anything before. For the individual, the matter is prosaic but overwhelming: every day, she sees hundreds or thousands of people. Faces in feeds, names in comments sections, addresses in emails, profile pictures in reactions. Group chats, Twitch streams, viral posts, polls, Zoom meetings. However taxing it is for us to take in and model so many individuals, what’s more relevant is that it increases the overall field of data we’re going to search for patterns within. Where we look, we tend to find.7
And it’s easy at these scales to find a lot that horrifies us.
If just 1% of Americans were racist —and it’s certainly more than that— we’d have three million potential racist posters. How many racist comments do you need to see before you conclude that “Americans are racist”? If we’re honest, the number isn’t very large; it’s certainly not in the millions, and if you really visualize how it feels to encounter such disturbing content, it’s probably not even in the thousands. Statistical reassurances that even millions of racists don’t necessarily amount to much in such a large population as ours do not console us; the experiential data of seeing so many racist posts is simply too compelling.8 Even L.'s real-world experience of default racial amity wasn't enough to arrest her reaction. Whereas before the Internet, she'd have lived her whole life forming a sense of the world mostly from her immediate surroundings, in which she might encounter a handful of racists, now she had to form a sense of the world from the teeming superabundance of people posting online. What stable or balanced worldview can emerge from billions of extremely diverse people in diverse states of being fighting with one another every hour, reactively extremifying over the years, their memories eventually including an avalanche of hateful derangement from "the other side," etc.?
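For the skeptical, here is the back-of-the-envelope arithmetic in Python. Every number below is an assumption chosen for illustration (the 1% figure from above, a guessed daily reading volume, a guessed degree of overrepresentation), not data:

```python
# Back-of-the-envelope version of the arithmetic above. Every number
# is an assumption chosen for illustration, not a measurement.
americans = 330_000_000
racist_share = 0.01                      # the essay's conservative 1%
print(f"{americans * racist_share:,.0f} potential racist posters")
# -> 3,300,000

# Suppose you skim 200 posts and comments a day, and suppose hostile
# content is overrepresented 5x in what surfaces (outrage travels well).
posts_seen_per_day = 200
surfaced_hostile_share = racist_share * 5
seen_per_year = posts_seen_per_day * 365 * surfaced_hostile_share
print(f"~{seen_per_year:,.0f} hostile posts encountered per year")
# -> ~3,650
```

Under even these conservative assumptions, “thousands of racist posts seen” arrives within a single year of ordinary use.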
Is it any wonder that nearly every group, however large and hegemonic or small and marginalized, feels completely under siege, attacked from all sides, in tremendous danger? For any community on earth, it’s trivially easy to find huge numbers of people calling for their eradication. It’s hard to call their resulting fear “paranoia,” but it’s also hard to call it sound. The truth is: we really don’t know what to make of this. It feels absurd to say e.g. “oh, there’s not that big a problem with racism; yes, there are millions of hateful people out there, but there are billions of chill people!” But it is also the case that in L.’s real life, she simply didn’t experience the world she saw online. Indeed, very few people do.
Because I am mathematically illiterate, I sometimes wonder if the main problem is simply innumeracy: most of us really don't understand the statistical realities that govern our experiences. Instead, we rely on ancient intuitional reckoning that misleads us badly about our fellow humans and our world. The incredible truth of the matter is that most people don’t even post!9 The entirety of what you’ve seen over, say, the past decade online is an extreme minority already! Your sample is wildly non-representative, even before we account for “what gets attention on the Internet,” algorithms, and all the rest. This vast pool of insanity is, in fact, a bit of a mirage.
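Here is a sketch of that innumeracy, using proportions shaped like the oft-cited “90-9-1” participation rule; the specific rates are assumptions for illustration, not measured values:

```python
# Why the visible Internet misrepresents the population. Proportions
# follow the rough shape of the oft-cited "90-9-1" participation rule;
# they are assumptions for illustration, not measured values.
population = 1_000_000
lurkers = int(population * 0.90)          # never post at all
casuals = int(population * 0.09)          # ~1 post a month
heavies = population - lurkers - casuals  # ~10 posts a day

casual_posts = casuals * 1                # posts per month
heavy_posts = heavies * 300               # posts per month
total_posts = casual_posts + heavy_posts

print(f"heavy posters: {heavies / population:.0%} of people")
print(f"...producing:  {heavy_posts / total_posts:.0%} of what you see")
# -> 1% of people produce ~97% of the posts
```

One percent of the people produce nearly the whole sample; if posting volume correlates with extremity even weakly, the feed is a funhouse mirror before any algorithm touches it.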
Forget the object-level
I chose a deliberately provocative example because this dynamic is realest where there’s true discord. But it’s important to note that this isn’t a phenomenon of white people and black people, or Democrats and Republicans. It applies to nearly every scaled group online and it’s happening all over the world. I’ve seen this happen in subreddits devoted to old TV shows. It’s a part of many fandoms, including those associated with sports10. And most people are members of many groups experiencing this tumult, such that across one’s entire set of interests, one may be reactively extremifying in one zone even as elsewhere one holds the line or brings more light than heat.
It’s also important to remember that this is all happening completely independent of the object-level issues. It may be that 90% of Americans are racist. The question isn’t “is racism prevalent” or “are Republicans evil”; those are their own issues and are well worth independent discussion. The dynamic of reactive extremification occurs regardless of the truth or falsity of the involved claims. It can be true both that “far too many Americans are racist” and that “the Internet produces a statistically false picture that eventually extremifies many users.” Indeed, my argument is that the truth of the latter over time helps create the truth of the former.
The aggregated effect of individuals reactively extremifying is an expanding Overton window on —yes— both sides. One of the things that alarms people about the Internet age’s political discourse is how regularly one encounters ideas that were utterly beyond the pale just twenty years ago: that landlords or TERFs or drag queens should be literally murdered; that the US government should be overthrown and Donald Trump made dictator for life; that a civil war would be good, actually; and so on. It’s not hard to reconstruct how people arrive at these positions: one too many articles about greedy or inhumane landlords will do it; one too many posts from hate-filled or destructive maniacs and we’re ready to liquidate a class; one too many awful interactions with our ideological opponents and we want to take up arms. We watch “the other side” lie, cheat, encourage violence, and we conclude that (1) our enemies are legion and determined to exterminate us and (2) the only way out is a violent reckoning.
We do not do this with our own “side,” of course. Indeed, we have techniques for avoiding any sense of shared responsibility for these escalations:
First, we rarely consider extremists on our side to be on our side. This is called in philosophy the “no true Scotsman” fallacy. For example, I might show you a tweet in which someone I perceive as your ally says something completely insane. You might respond: “Please, that’s not an ally of mine. No real [whatever] thinks that way!” But of course, you probably believe that your opponents are defined by their extremes (or again, by their failure to control their extremes).
Second, we simply don’t perceive these people! “Monsters? In my cohort? I don’t see any!” But if you’re a member of a group numbering in the millions, you almost certainly have literal murderers and rapists and thieves and creeps and liars and frauds and grifters in your ranks. That’s the power of scale! And your opponents certainly see (a) all of them and (b) you studiously ignoring them, putting the lie to many of your moral claims.
Third, we resort to whataboutism. “Yes, I grant that so-and-so turned out to be a deceitful piece of human garbage. But that pales in comparison to what the other side has in their midst!” We might for example litigate the relative proportion of extremists: “If 1% of my side is evil, 90% of their side is evil!” This might even be true, but for the process of reactive extremification it’s immaterial. There’s more than enough (a) evil and (b) misrepresentation for everyone to continue their steadily worsening freakouts.
Wright notes something so obviously true as to be amusing: “one thing almost all Republicans and Democrats have in common is that they never scream in supermarkets.” This is the case! Screaming in supermarkets is rare as hell! Yet I confess: my model of “Democrats” and “Republicans” does, in fact, include them screaming at one another in all kinds of environments. Another thing almost every American has in common: they do not wish for the suffering or death of others and would struggle to brutalize another, even if it were “called for.” Soldiers in wars often cannot bring themselves to shoot the enemy, even when they are directly threatened! Yet is that your sense of the world today? Or do you rather have the sense that millions of people wish you and yours harm up to and including death?
Vast scales mean that these are both true statements: almost no one wishes you real harm; yet millions of people do, or say they do, and you’re going to see them every day, forever. If you want to understand what’s happening to us, you have to separate the object-level discourse from the meta-level psychodynamics of our massively scaled platforms and get rigorous about statistical incidence.
I’m not saying you’re wrong about anything, by the way. I’m only saying that this has probably happened to you, as it has to me and many I follow. Indeed: what experience defines the present more than watching someone you know online slowly —or sometimes quickly— drift from “reasonable takes” into “insane hyperbolic extremism”? The first time you see it happen, you might chalk it up to their individual foibles. But the hundredth time, you must admit: something general is taking place. Thinking about media requires indexing not on the content but on the form: not the arguments but the shape of arguments; not the conclusions but the processes that lead to them.
And all of this affects the famous —or the locally famous, or the popular— more than it affects the rest of us. That’s because their social universe is already scaled beyond ours; their posts get countless deranged replies, and attract the most motivated of the minority of the population that posts. A celebrity cannot tweet about a sandwich without encountering thousands of hostile trolls of all varieties. It’s probably impossible to experience that without drifting into reactive extremes, and since they’re celebrities, their extremification engenders memetic extremification among their fans. Whole scenes grow dark, intense, combative, and paranoid overnight, then disperse to spread the disease into other communities.
What is to be done?
I’ve worked on scaled social systems for almost ten years now. It’s common for diagnoses like these to conclude with recommendations for platform operators, and I do have thoughts about how e.g. UIs should be designed to discourage these dynamics. For example, one reason I work at Substack is that I’m interested in what happens if we reduce scales. On Substack, a “scene” needs only to be a few thousand people to be financially sustainable, whereas with advertising models, any community or publication or site needs millions and millions of users to be viable. Our business model yields smaller-scale and federated, not aggregated, groups, and I’m excited to see if this leads to net-better emergent dynamics. For other companies, algorithms certainly play a role in elevating the extremes, and that, too, can be adjusted, although people often fail to understand how challenging this problem actually is. The algorithms elevate the extremes because we elevate the extremes.
As important as technological or structural or algorithmic tinkering is the separate development of cultural knowledge or wisdom, a process no one controls and which usually requires a lot of distributed suffering before some new Schelling Point of understanding —with new norms— is reached. In our case, we probably need some combination of
an embodied statistical understanding: if we really interiorized the knowledge that most people do not post + the understanding of just how many assholes we can encounter without it being representative of large populations, we’d be off to a good start;11 and
a kind of compassionate or ethical wisdom: if we remember the scale of the populations involved, we can also remember the frailty and unreliability of the individual humans within them; we can stop using one another as symbols and signs and trendlines if we retain a sense of the probably-broken individuals behind the screens, individuals who, like us, are contending with this fresh hell and are far down their own reactive paths.
A shorter way to say this is simply that we’ll learn to take the Internet less seriously, as we did radio after Orson Welles’ War of the Worlds, or television after Twenty One, or what-have-you. I suspect this is the inevitable result of this period of tumult: not the triumph of some side or constituency, but the gradual “tuning out” of all the noise our new medium generates. Again, this will have nothing to do with object-level concerns or policies. It has more to do with our meta attitude to discourse, to conflict, to human groups as-such. We will simply stop reacting, since it never really leads anywhere anyway.
In sum: we are collectively learning, at our various paces and at all our different ages, how to reason about these new environments, this new scale, this new field of data. I have no idea to what extent our ancient pattern-matching tendencies can or will update, but I do believe humans are capable of self-modification and regularly adapt to situations so novel and challenging as to seem impossible. We can walk on the moon; we can live in skyscrapers; we can probably also learn to see countless examples of deeply-triggering human misbehavior without getting confused as to the true nature of our civilization. But it will continue to be very, very messy.
As a final note, I must emphasize again that reactive extremification due to innumeracy about scale is happening. You don’t need to believe any particular thing —about any group, issue, or conflict— to acknowledge it. I am not arguing and indeed would not argue that racism isn’t widespread, or that given political parties aren’t corrupt, or that Alabama football fans aren’t truly evil. Part of what makes this situation persistently challenging is that we do, in fact, have real and critically important disagreements. That is: it’s not all meta. But understanding the meta is important if we want to understand why it feels like everyone —including us— has lost their minds. We’re seeing enemies everywhere; enemies are everywhere because everyone is everywhere now, in huge numbers. But if we want not to create more enemies, we need to be cautious about how we build our models and how we reason and express ourselves. We have to let the scale fall from our eyes, which will entail new intuitional tolerances. We are like the first humans to move into cities. We face a frequency of exposure to disorder, chaos, others that no one else has ever faced, and disease and crime run rampant. The way forward is a combination of improved governance and order-promoting policies, new technologies and structures that restore our sense of proportion, presumably some demographic sorting, and new attitudes towards our world and our fellows that allow us to take the great gifts of scale —density, exchange, creativity, great memes— without incurring all of the horrific costs.
To some extent, I think this is all already happening, and I feel sunnier than I have in years about the future of our species. But all it takes is a handful of bad tweets, and I’m back to my own reactivity: full of defensive fury, ready to paint with the broadest brush, longing for cathartic violence of rhetorical or even physical varieties. It’s a hard habit to shake because it’s always plausibly justifiable. Even if most of the kids are all right, some of them definitely want to kill us. The question is: what role do we all play in one another’s evolution and development? Do we moderate the global reactivity, or accelerate it? Do we see the crowd as “untruth,” or do we fall for the illusions scale brings? Painful as it is, the remedy is not a scaled phenomenon: it is probably up to each of us, as individuals, to arrest this runaway process. I see many people already doing this, and more every year, but many more probably need to if we want to thwart reactive extremification.12 I wish us all luck.
Indeed, the example Wright uses is highly illustrative for me personally:
if I saw someone screaming at someone for not wearing a mask, I’d have thoughts like “this is too much” and “anxiety culture has gone too far” and “that’s it, I’m done with masks, this is hysteria, a completely irrational near-celebration of neurosis, proof that none of this is about health, it’s just about control and intra-group combat”; while
if I saw someone screaming at someone for asking them to wear a mask, I’d have thoughts like “American selfishness and rudeness and aggression are out of control and intolerable” and “I think my fellow citizens are so ignorant and reactionary that it makes sense that the state has to compel their compliance when it sees fit; elites probably should be in control.”
I happen to be a lunatic, but not only am I not alone, I suspect all of us are this way in some zones or some conditions.
This was a major conservative talking point during the ludicrous “War on Terror” years. Some pundit would mention “Islamic terrorism,” and another would observe that “the overwhelming majority of Muslims are peaceful and decent,” and a right-winger would say “well, those good Muslims need to stand up and do something about Al-Qaeda!” This absurd injunction, which never specifies what, exactly, an individual is supposed to do —post every day “I think murder is bad”? Accost strangers in the street and try to make sure they’re not into Wahhabism?— is in my opinion a way for people to defend their bad pattern-matching against statistical truths they do not want to accept. We can still blame all Muslims for the actions of an infinitesimal few if we pretend that communities have easy authority over their own margins. They do not, of course, but this talking point is alive and well and is frequently applied to other groups to this day. Do you have control over your group’s extremes?
Most of us are extremely haphazard with our attention, letting it follow what it may from the chaos of the online scrum and tending to let it be “captured” by what’s most provocative, abhorrent, shocking. It so happens that I was introduced to Wright when David recommended his wonderful book Why Buddhism is True, which is directly related to questions of attention and being.
As I’ve watched the Internet wreak havoc, I’ve increasingly come to view the story of the Tower of Babel as a parable about scale, not ambition.
Sadly, I probably do not need to provide examples of anti-black racist tweets. But a classic example of a reaction to racism that itself became a catalyst for additional conflict was the infamous tweet about the white child being eaten by alligators. The poster’s perception —that white people are so entitled that they deserve to die or to have their loved ones die— feels exemplary of how this works. Not only that, but it was probably half-“joking,” an exaggerated or hyperbolized comment on a trend the poster does, in fact, experience (there’s no shortage of entitlement among white people, or many other people for that matter). But all that context is lost; the tweet stands alone as another catalyst for reactive extremification. Somewhere, some loser saw that and thought: “Well, if they hate us so much, fuck them!”
As before, these posts are an indecipherable mix of:
black people who do hate white people (justly or unjustly)
black people momentarily frustrated with the latest incident of racism and just venting online (perhaps after seeing their own Nth racist comment from a white person)
black people joking around or being taken out of context
non-black people pretending to be black online and posting weird shit, because they’re freaks or teens or tripping or working for a Russian subversion campaign
This element of intelligence, pattern-matching, exists in a narrow band between two forms of malfunction: (a) stupidity, an inability to perceive patterns when we should; and (b) insanity, a perception of patterns where none exist. This is one of the more delicately-tuned elements of our minds, which is why we see copious examples of both failure modes in our societies, and especially online. It’s extremely easy to approach being either insensate or schizophrenic when we stare into the abyss.
We all have experience with this phenomenon, which can be described in many ways. I often think of generic media coverage as being the best example: read a dozen articles about crimes of a given sort, or about some awful medical phenomenon, and you start to entertain them as real possibilities for yourself. I have worried dozens of times about brain-eating amoebas thanks to my inability to correctly understand the actual risks after reading just a handful of sad stories about people killed by them!
There’s an additional issue too: who posts is not random! And what we post is not random! Posts are very often grievances from grievance-seeking types. In a very real way, the Internet selects for what isn’t real or widespread, since there’s no reason to e.g. post “I don’t really hate anyone.” Indeed, if someone did, they’d probably do so as a way of criticizing those who hate; some of those who hate would feel attacked for what they see as justified hatred, hatred arising from virtues like “the desire to defend one’s community,” etc.
I briefly believed that “everyone from Chicago is a piece of shit” because I saw one Bears fan before a game against my hometown Saints wearing a sign that said “BEARS FINISHING WHAT KATRINA STARTED.” I think this is extremely illustrative of this phenomenon. First, although I’m sensitive about Katrina, I can obviously concede with some distance that this was a joke; it may have been an offensive and awful joke, but I am very sure the man in the photo did not seriously want the Bears to kill thousands of people and destroy the lives of many more. Second, even though it was just one Bears fan, I remember posting about it and arguing that “no less guilty are the many people who saw this and did nothing.” But I have no idea how many people saw it, or what they did, nor is it at all obvious what they should have done anyway, any more than it’s clear what Muslims should do about ISIS. I hope no one from Chicago saw my idiotic invective; if they did, I’m sure they went off on their own “fuck New Orleanians” journey. Happy trails to all of us.
Indeed, the Internet has pushed me to the somewhat radical position that “large groups are fundamentally unreal,” or at least that they are more illusion than reality. There is nearly nothing you can accurately say about a large group except its description; “black Americans” do not reliably share anything except that they are “black Americans,” and even that —thanks to e.g. colorism and class issues— is often debated. I think our moment may all but require us to abandon the use of large groups in our intellectual operations, because the errors we make when we casually cite or model or describe them are sufficient to catalyze reactive extremification in all those we exclude. There are black Republicans, for Christ’s sake! There are trans Republicans! (At least one, anyway!) There are racist Democrats! There are deviant Christians, puritanical libertines, greedy Communists, sadistic animal-lovers, traitorous Patriots, and on and on and on. Scale makes a mockery of all these concepts. I’ll say it once more: the crowd is untruth!
There are probably cases to be made for reactive extremification, too. For example, Hegelians and Marxists or generic accelerationists may believe that this sort of conflict is incrementally positive over time, forcing disputes to be adjudicated, widening the possibility spaces for social and political outcomes. I disagree, but it’s worth noting.
“First, we rarely consider extremists on our side to be on our side. This is called in philosophy the “no true Scotsman” fallacy. For example, I might show you a tweet in which someone I perceive as your ally says something completely insane. You might respond: “Please, that’s not an ally of mine. No real [whatever] thinks that way!” But of course, you probably believe that your opponents are defined by their extremes (or again, by their failure to control their extremes).”
This is such a big piece of this for me because I think you can really learn to see this psychological pattern as it’s happening and before it flourishes into extremist mental ranting. It’s the fundamental attribution error (https://en.wikipedia.org/wiki/Fundamental_attribution_error) applied at scales larger than one.
When we’re in default mode network / papañca mental rambling, it’s so easy to let the defense attorney pilot indefinitely. I’d like to think being a naturally argumentative person equips me to occasionally turn the tools on myself and simulate the prosecution at a productive fidelity.
Some echoes in here of DFW’s “This is Water”, including the exhortation at the end. DFW closed with “I wish you way more than luck”; we all need persistence and wisdom and luck not to be captivated by this phenomenon.
But it’s possible to do so! For me, politics has been a helpful example: like 85% of my Facebook friends & 50% of my Twitter follows were Elizabeth Warren fans, and it was illuminating to realize that in real life approximately nobody cared about her campaign. But of course I also spent a good 18 months spending hours on Twitter most days, pumping cortisol into my veins until I would come to, shaking. I feel lucky not to be in that place anymore.