Belief in the Digital Age
"Everybody on the internet wants to waste your time."
If you leave me in a room long enough, I will start scrolling on my phone. If you asked me why, I'd say: "What else is there to do?" I can't remember when I started thinking that way.
My first exposure to the internet was a website my dad loaded up for me when I was 6 or 7, full of Flash games about the planets. Back then, the internet seemed like an infinite library that I could only access on a time limit. Now, the internet is something I use without a second thought. It is integrated into most, if not all, of my tasks throughout the day. It streamlines my research and gives me access to the forefront of knowledge in my field, no matter where in the world it comes from.
Two-thirds of the world's population is online as of 2024. The average internet user spends about seven hours online every day – nearly half of their waking hours lived through bits. The internet promised us limitless access; what it delivered instead was limitless exposure. This relentless barrage of information has done more than affect our daily language or shorten our attention spans. It has changed how we come to believe.
What began as a network for information has become a system for influence. Algorithms, trained on our habits, learned what keeps us engaged and began feeding us more of it. Social media doesn’t just reflect our beliefs—it curates them. Research shows that these platforms build echo chambers, spaces where we mostly encounter views that confirm what we already think.
This design isn’t accidental. Leaked internal documents from Facebook in 2021 revealed that divisive content is prioritized precisely because it drives engagement. Outrage is profitable. The more emotionally charged a post, the more likely it is to be shared, commented on, and amplified across feeds.
What feels like consensus is often just repetition. When you see the same opinion echoed by fifteen different accounts, your brain reads it as proof—everyone thinks this. But “everyone” is really a carefully selected slice of the internet: the people and posts most likely to keep you scrolling.
Psychologists Daniel Kahneman and Amos Tversky called this the availability heuristic: we judge how common or important something is by how easily examples come to mind. If your feed is full of outrage about a particular issue, your brain assumes that issue is more prevalent and urgent than it might actually be. The algorithm doesn't show you reality—it shows you a reality optimized for your engagement.
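To see the mechanism in miniature, here is a deliberately toy sketch in Python. Everything in it is invented for illustration – the post fields, the weights, the scoring formula; no platform publishes its real ranking code – but it captures the claim above: a feed sorted purely by predicted engagement keeps resurfacing whatever provoked the strongest reactions, and accuracy never enters the equation.

```python
# A deliberately toy model of an engagement-ranked feed.
# Field names, weights, and the scoring formula are all invented for
# illustration; this is not any real platform's code.

from dataclasses import dataclass


@dataclass
class Post:
    text: str
    clicks: int      # how often similar posts get opened
    shares: int      # how often they get reshared
    outrage: float   # 0.0-1.0, emotional charge inferred from reactions


def engagement_score(post: Post) -> float:
    # Shares count more than clicks, and emotional charge multiplies reach.
    # Note what is missing from the formula entirely: whether it is true.
    return (post.clicks + 3 * post.shares) * (1 + post.outrage)


def build_feed(posts: list[Post], limit: int = 10) -> list[Post]:
    # "Your" feed: the most engaging slice of the internet,
    # not a representative one.
    return sorted(posts, key=engagement_score, reverse=True)[:limit]


if __name__ == "__main__":
    posts = [
        Post("Careful, sourced explainer", clicks=120, shares=10, outrage=0.1),
        Post("Outraged hot take", clicks=80, shares=60, outrage=0.9),
        Post("Local news update", clicks=40, shares=5, outrage=0.2),
    ]
    for p in build_feed(posts, limit=3):
        print(round(engagement_score(p)), p.text)
```

In this toy example the outraged post outscores the careful explainer roughly three to one, not because it is truer but because it is more reactive. Repeat that selection thousands of times a day and you get a feed optimized for your engagement rather than for reality.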
Here's where it gets dangerous: the speed at which we form beliefs online has fundamentally changed. Pre-internet, belief formation was slow. You read a newspaper article, maybe discussed it with friends, sat with it for a while. There was time for reflection, for encountering opposing views, for your initial emotional reaction to settle into something more considered.
Now, you see a headline, feel something, and within seconds you can like, share, and comment—publicly cementing your position before you've even finished reading. A 2018 MIT study found that false news on Twitter reached people about six times faster than true stories, largely because falsehoods are more novel and more emotionally arousing.
This speed creates what I call "fast beliefs"—convictions formed under algorithmic pressure, driven by emotional hijacking rather than careful consideration. Your amygdala (the brain's emotional center) responds to threatening or outrageous content in milliseconds, while your prefrontal cortex (responsible for critical thinking) takes significantly longer to engage. By the time your rational brain catches up, you've already hit share.
The brain craves closure, especially under conditions of information overload. Studies on the need for cognitive closure show that when people are overwhelmed, they latch onto the first explanation that makes emotional sense and stop processing new information. The infinite scroll creates perpetual overload, making us more susceptible to fast beliefs that offer simple answers to complex problems.
The most insidious effect of fast beliefs is how they transform moral principles into team signals. Consider gun violence, certain international conflicts, climate change – issues where our response should be guided by consistent principles. Instead, the discourse shifts depending on who is affected and what the algorithm surfaces. When a controversial media figure is involved in a shooting, the timeline erupts with calls for gun control, vigils, and collective mourning. The same people stay silent about the school shooting that happened the same day. Not because one shooting is objectively worse than the other, but because the victim of the first is someone their algorithm, and therefore their community, cares about.
The same pattern repeats across issues. People who posted Ukrainian flags in their bios were often silent about Yemen, Palestine, or Sudan, not because those conflicts matter less, but because the algorithm (and their social circle) didn't make them visible or emotionally salient.
This isn't hypocrisy in the traditional sense. It's what happens when belief formation is outsourced to an algorithm designed to maximize engagement, not moral consistency. This is selective empathy in action: our capacity for moral outrage calibrated not by principle, but by proximity to our in-group. Research on moral tribalism shows that people are more likely to excuse harmful behavior when it's committed by someone on "their side" and more likely to condemn the same behavior when it's committed by an opponent.
Online, belief increasingly functions as performance. You don't just hold a belief—you perform it. You share the right infographic, use the right hashtags, express outrage at the right targets. And in return, you receive validation: likes, retweets, comments affirming that you're on the right side. This creates a feedback loop where the goal isn't to be right, but to be seen being right by the right people. Research on social media and identity shows that people curate their online presence to signal group membership and values, often prioritizing social approval over accuracy.
The cost? We lose the ability to hold principles independent of tribal affiliation. We lose the capacity to sit with moral complexity. We lose the practice of changing our minds.
But here's the uncomfortable question: if we know we're being manipulated, why do we keep participating?
The answer isn't moral weakness. It's convenience—and convenience is a function of circumstance, not character.
Sharing an infographic takes three seconds. Reading a long-form article about the same issue takes twenty minutes. Posting a black square feels like solidarity. Researching which organizations actually help requires effort, discernment, and the possibility of being wrong. The algorithm knows this. It's designed to reward the path of least resistance.
We talk about this phenomenon like it's laziness, but that misses what's actually happening with the average person. If you're living paycheck to paycheck, your entire existence is focused on surviving until next week. You don't have the cognitive bandwidth to research every issue that appears in your feed. You perform your values in the quickest way available because you're already exhausted.
This is why algorithmic manipulation is so effective. It doesn't require us to be stupid or evil. It just requires us to be human—to take the convenient option, to trust the social proof in front of us, to choose the low-friction path when we're already running on empty.
#MeToo exposed systemic abuse. Black Lives Matter mobilized millions. Student protests in Bangladesh and Nepal toppled governments. These movements prove the internet can amplify marginalized voices and coordinate action at unprecedented scale. But they also reveal something crucial about the kind of online engagement that actually creates change.
These movements succeeded precisely because participants rejected the algorithm's training toward convenience. They disrupted their routines. They showed up to marches, risked arrest, organized mutual aid networks, sustained pressure over months or years. The internet provided infrastructure—hashtags for coordination, livestreams for documentation, networks for rapid mobilization. But the fire was built through the unglamorous work of coalition-building, strategic planning, and face-to-face solidarity.
Maybe some selectivity is necessary. We can't care about everything equally or we'd be paralyzed. But there's a difference between necessary triage and letting an algorithm decide our moral priorities for us based on what keeps us scrolling. The question isn't whether we should care about everything. It's whether we should let a profit-driven system decide what we ultimately focus on.
So what do we do? We can't leave the internet—it's too integrated into how we work, learn, and connect. But we can change how we engage with it.
When you encounter something that makes you feel strongly, pause before you share. Ask yourself: Do I actually know enough about this to have a public opinion? What would I need to know to be more confident in this belief? Studies on "cognitive reflection" show that people who habitually pause to question their initial reactions are less likely to fall for misinformation and more likely to update their beliefs when presented with new evidence.
This means actively seeking out sources that challenge your assumptions—not to "both sides" every issue, but to ensure you're encountering the strongest version of arguments you disagree with. Follow people who make you uncomfortable. Read long-form journalism that doesn't fit neatly into your existing worldview. People who actively expose themselves to diverse perspectives are better at distinguishing strong arguments from weak ones and less prone to partisan bias.
The algorithm makes you think everyone believes something because fifteen accounts in your feed said so. Counter that illusion by talking to people in your life. You’ll find more nuance, more hesitation, more uncertainty—more of the ambiguity that the scroll edits out.
The internet is a remarkable tool. It connects, informs, mobilizes. But it's built by people whose incentives don't always align with your well-being or your moral clarity. Use it deliberately. Set boundaries. Step outside the metrics. Spend time in spaces where your worth isn't measured by engagement and your beliefs aren't performed for an audience.
The work of being human in the age of the algorithm is the work of reclaiming our attention, our empathy, and our capacity for moral consistency. It's the work of remembering that the feed is not the world—it's a curated, monetized, engagement-optimized slice of it.
And it's the work of choosing, again and again, to be more than what the scroll wants us to be.
If you leave me in a room long enough, I will start scrolling on my phone. But I'm learning to ask myself: what else is there to do?
The answer, it turns out, is everything.