The Digital World in 2025
1.
A few weeks ago, I was having a discussion with someone (let’s call him Bob) about AI safety and how I felt there wasn’t enough being done in the area, at least relative to the magnitude of the problem we were facing and the consequences if we failed to align AI.1
I said:2
I know that some people are working on the problem actively but it seems to me that 1) the amount of resources being funnelled into alignment research is off by a couple of orders of magnitude right now (roughly 100x), and 2) I’m not sure why more people aren’t feeling the same way and talking about it publicly, as compared to talking about the latest AI developments in models and research.
And Bob said:
I don’t agree with (2) at all. I hear and see so many people complaining about the threat of AI all the time on the internet. You ever think that the problem is just the kind of content you consume?
I didn’t realise how strong and generally applicable his point was until an hour later when I was replaying this in my head.
I had generalised from “the people I hear about on the internet” to “everyone in the world” without a second thought. This might’ve worked until ~15 years ago, before the age of recommendation systems.
Today, the amount of content available on the internet is beyond our wildest imaginations. Instagram has more than 1 billion reels, Youtube has more than 5 billion videos, Twitter3 has ~500 million tweets posted every single day. Yeah, crazy, I know.
And so naturally, the content that reaches our eyes and ears has been filtered many many times. We see a teeny-tiny fraction of what’s out there on the Internet.
More importantly, we aren’t actively choosing what we see on any of these platforms. It’s all dictated by recommendation algorithms that somehow “know” what we want to watch / read. (More on this later.)
This leads to an equally astronomical selection bias in the content we consume.
To put it more precisely, from all the content that is out there, we are not observing samples that are chosen uniformly at random (if we did, then we would actually have a pretty holistic understanding of what everyone on the internet is consuming too, in the long-run and on average).
We don’t see things on the internet uniformly at random.
Instead, we are only exposed to a very very small fraction of the content-space that is out there.
To use the example of Twitter: if you read ~1000 tweets today, that’s 0.0002% of the total number of tweets posted today.
So, you’re only observing 0.0002% of the content and drawing inferences about everyone on Twitter based on that. Huh.
And this fraction of content that you’re consuming is drawn from a mathematically different distribution than the content I’m consuming. I say mathematically because the inputs to the recommendation system that decides what to show us are actually different numbers. (Tl;dr: The algorithm that shows you content has certain parameters and these parameters are different for each user, which causes the content we see to be different from each other.)
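To make that “different parameters” idea concrete, here’s a toy sketch (every number, feature, and function here is invented for illustration; real recommenders are vastly more complex): the *same* ranking function, fed two different per-user parameter vectors, surfaces almost completely disjoint slices of the same content pool.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy "content space": 100,000 posts, each described by 8 topic features.
posts = rng.normal(size=(100_000, 8))

def top_feed(user_params, k=1_000):
    """Score every post against a per-user parameter vector; return the top-k post ids."""
    scores = posts @ user_params
    return set(np.argsort(scores)[-k:].tolist())

# Two users = the SAME algorithm, fed two different parameter vectors.
you = np.array([1.0, 0, 0, 0, 0, 0, 0, 0])  # your vector weights topic 0
me  = np.array([0, 1.0, 0, 0, 0, 0, 0, 0])  # mine weights topic 1

feed_you, feed_me = top_feed(you), top_feed(me)

# Each of us sees 1% of the posts, but our 1%-slices barely overlap.
print(len(feed_you & feed_me) / 1_000)
```

Even in this crude model, the overlap between the two feeds is a fraction of a percent: we’re both “on the internet”, but looking at nearly disjoint worlds.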
Everyone is seeing slightly different slices of this content-space. That is, the inputs that shape people’s opinions and beliefs can be drastically different. And as a result, people’s opinions and beliefs will be naturally drastically different.4
Huh. Who would’ve thought this would lead to any disagreements in what people think is “out there” on the internet?
All this to say that we’re living in our own digital worlds. Your digital world is probably very different from mine. And since we both use this digital information to infer things about the world, I can’t assume that we share a common understanding of what’s going on in the world anymore.
Even news suffers from this same problem. Though we live in a shared physical world, the information we hear about it (aka news) can be very different from person to person. I might not even hear about a hurricane on the other side of the planet. I might know more about AI advancements than ongoing wars.
And so we end up giving importance to the things that have our attention, not the other way around. (If you hear about X and watch content about X often enough, you’re bound to give it more importance.) I’m sure you can think of examples of this for yourself.
I think the best way to regain a solid understanding of reality, of what people actually believe / what’s happening in the world is to rely on surveys, studies, reports, and polls done on the ground. There may be biases in them too, but at least we understand them and can correct for them far better than we can correct for the anecdotal “oh i see this all the time on social media but how much of this is just because my feed favours such content and not because many people actually believe this”.
As a simple example: if you see opinions for and against a given technology / policy / any topic in your feed in the ratio 9 : 1, does that mean that people, in general, are 9 : 1 in favour of the technology / policy / any topic?? How much should you even discount by? Is it 5 : 1? Or 1 : 1? Or, god forbid, 1 : 9???5
It’s impossible to know if you just rely on anecdotal evidence, because these numbers depend on the outputs of a recommendation system that is tailored to you specifically.
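Here’s a tiny simulation of the 9 : 1 example above (the weights are made up, obviously): the population is split exactly 50 : 50, but a feed that is nine times more likely to surface one side shows you a lopsided ratio anyway.

```python
import random

random.seed(42)

# Hypothetical ground truth: the population is split exactly 50:50 on some topic.
population = ["for"] * 50_000 + ["against"] * 50_000

def feed_sample(n=1_000):
    """A feed that is 9x more likely to surface 'for' posts (engagement, history, ...)."""
    shown = []
    while len(shown) < n:
        post = random.choice(population)
        weight = 9 if post == "for" else 1
        if random.random() < weight / 9:  # accept 'for' always, 'against' 1/9 of the time
            shown.append(post)
    return shown

shown = feed_sample()
ratio = shown.count("for") / shown.count("against")
print(round(ratio, 1))  # heavily skewed towards "for", even though the true split is 1:1
```

The feed reports roughly 9 : 1 from a perfectly balanced population, and nothing in the feed itself tells you how much reweighting happened. That’s exactly why you can’t back out the true ratio from anecdotal browsing.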
A less consequential but equally telling example: you might see a meme or a pop-culture reference based on recent events all over your feed and naturally think that this is “all over the internet” (when it’s really just all over your feed), and then be extremely surprised to find out that someone living right beside you has never heard of it.
It’s hard to know you’re living inside a bubble if that’s your whole world.
But when someone points out that you’re actually in a bubble, it can make for some really interesting conversations. And assuming it is actually a bubble, for the first time, you get to see the bubble from the outside.
2.
A few months ago, I stumbled upon this video of a candidate running for Mayor of NYC — Zohran Mamdani. It was the first time I’d ever heard his name. It was the first I was ever hearing concrete news about mayoral elections in the US at all — like, I know in theory that they happen but I usually don’t keep up with them because there are more important things to keep up with.
It showed him as the voice of the middle class, the people’s man, fighting against the rich and elite, by crowdsourcing funding for his election campaign. I was quite intrigued by this and watched it to the end.
Since then, for the next 2-3 weeks, I would see him appear on my feed almost every day.
Now, here’s what’s interesting: I don’t live in the US, let alone NYC. I can’t vote to affect the outcome of the election. I know nothing about the other candidates’ policies other than what’s mentioned in Mamdani’s videos. I know just enough about Mamdani’s policies (e.g. tax the rich, but not too much so they don’t flee) to think they sound reasonable.
In short, 100% of my information about the NYC mayoral elections was from an obviously biased source — Mamdani himself. (I’m not blaming him for being biased — it’s his job to fight the election by showcasing his strengths and highlighting the flaws in his opponents!)
Yet, in an embarrassing moment of weakness, I felt something inside me supporting him. I felt like he was better than the other candidates.
I hadn’t even given the other candidates a chance before coming to this conclusion. I had only heard their names from Mamdani’s mouth, never even seen their faces. Even if I were going to decide on who to vote for based on pure charisma, I hadn’t even seen the charisma of the other candidates before deciding!
In short, I had formed a belief based on completely biased information, even after knowing full well that the source of information is biased.
I had gone from a neutral stance to supporting one side, without ever waiting to hear from the other side.
And I wasn’t alone in this. I checked the comments section to see that many people outside the US had heard of Mamdani for the first time through Instagram and began strongly supporting him without having any other source of information than Mamdani’s own account.
Our default mode is to believe, not question. And social media doesn’t just repeat our own beliefs back to us (i.e., it’s not just an echo-chamber) but it also creates / plants new beliefs.
What does all of this mean for politics and democracy?
Elections are going to be won on social media now, through social media campaigns, not through rallies in a stadium and long speeches that people don’t have the attention span for. This was probably a major factor in the 2024 US elections too. (Do you remember the number of reels and memes you saw about a certain candidate who slipped down stairs, forgot what he was saying mid-sentence, got lost while walking around, rambled on for a minute without making any sense? Or was it just me? Maybe someone should do a poll on who has seen a certain meme so that we know the actual proportion of people that have seen it, because once you’ve seen it, for some reason you assume everyone else has seen it too and knows what reference you’re talking about.)
As more and more Gen-Zs start to become adults and vote, the importance of social media as a means to persuade and convince will increase. And it’ll have to be in a form that appeals to minds that have spent their entire adolescence watching TikTok or Instagram reels: short, catchy, attention-grabbing.
Long-drawn arguments and debates over policy will be too boring for most people to care about. We already live in a world where people watch a movie on TV and scroll through their feed on their phone at the same time, because just one isn’t enough stimulus.
And in this scenario, when election results come down to whoever gets the most views / attention on social media, the world turns into high school again and politics becomes a popularity contest. Not popularity of ideas and policies but just popularity in general. Politics becomes marketing. And in a world like that, The Algorithm becomes the king-maker.
There are real implications for the world as people spend more time on social media and their attention spans get shorter. It’s not some abstract problem we face in the far-away future. It’s already here.
3.
A couple of years ago, I observed that social media (specifically Instagram) was eroding my ability to think critically, to focus, and to be thoughtful and nuanced, along with my attention span and my memory. Essentially, it was just lowering my cognitive abilities in general.
As a university student, I couldn’t afford to not have Instagram because all my friends were there and it was how I stayed in touch with what was going on in people’s lives. And the problem was not in looking at my friends’ posts and stories — that was, in fact, the whole point of Instagram (at least, to me).
The problem was that < 5% of my time was actually spent on seeing what my friends were up to on Instagram. The rest was spent on watching random reels about the most random things on earth. It could barely be called “entertainment” in the usual sense but it did give me short-term gratification, the instant dopamine hit. (Social media was becoming less and less about the “social”, and more and more about the “media” and “para-social”)
And it was literally rotting my brain. The fact that such content is called “brainrot” is for good reason! It’s like having a “smoking is injurious to health” label on cigarette packs. It’s telling you exactly what’s going to happen if you do it, and people do it nevertheless.
I sent a friend of mine a 40-second reel and he replied with “omg why is this so long??”
Even so, I naively thought the problem was with the kind of content I was consuming. So, I decided to try and “curate” my feed on Instagram. For a few days, I consciously and wilfully skipped brainrot reels and only liked / saved the stuff I actually wanted to want to watch.
Liking / saving / sharing / commenting were the only levers I could pull to control what I would watch in the future. It was the only way to tell The Almighty Algorithm my preferences and interests and pray that it would understand and listen to me. It was like trying to dig a tunnel through a mountain with a pickaxe and praying that the Dragon in the mountain would get out of your way when you asked it to. I just didn’t have enough leverage to control things about my own feed.
Also, it probably takes into account what your friends are watching too (the hypothesis is that your friends and you probably have similar tastes and interests so it makes sense to show you stuff your friends have liked), which is out of your control. Maybe it takes the average of the model parameters of your 5 “closest” friends (based on how many reels you share with each other) as a feature? In that sense, maybe the adage “you are the average of the 5 people you spend the most time with” is true in a very real, and somewhat uncanny, sense.
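That hypothesis is easy enough to sketch in code. To be clear, I have no idea whether any platform actually does this — the function, the vectors, and the blending weight below are all made up:

```python
import numpy as np

# Purely hypothetical: per-user "taste vectors" in some embedding space.
def blended_params(my_params, friends_params, alpha=0.5):
    """Blend my learned preferences with the average of my closest friends'.
    alpha controls how much my own behaviour counts vs. my friends'."""
    friend_avg = np.mean(friends_params, axis=0)
    return alpha * my_params + (1 - alpha) * friend_avg

me = np.array([1.0, 0.0, 0.0])             # I watch maths videos
friends = np.array([[0.0, 1.0, 0.0]] * 5)  # my 5 closest friends watch cooking

print(blended_params(me, friends))  # [0.5 0.5 0. ] — half my feed drifts towards cooking
```

If anything like this is happening under the hood, then half of “my” taste vector was never mine to begin with — which is the uncanny part of the adage.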
Maybe this is something that other people like about social media — that they can just sit back, relax and let the algorithm decide for them. I get the appeal. I get that it’s easier. I get that this is meant as a form of relaxation and so people don’t want to have to think much about it.
But if I’m being honest, I don’t find it particularly relaxing. I don’t enjoy doomscrolling. I don’t enjoy the feeling after spending 30 mins scrolling through reels and not remembering anything. I didn’t get anything out of it, not even happiness. I feel like I just wasted precious time.
Back to the story: even then, after all this effort, every now and then I would see the most random videos on my feed. It was The Algorithm’s way of asking “okay yes yes I know you only like videos about XYZ but do you also like 10-second clips of bloopers of a TV show made 20 years ago? c’mon man you won’t know unless you try it out? right? right?? right????”
And since willpower is a limited resource, I inevitably used to suffer a relapse every few days and fall back into doomscrolling. It’s hard to win when The Algorithm is working against you all the time.6
The key epiphany I had was when I realised that the problem was not the kind of content I was consuming, but the form of content. It’s not what you’re doomscrolling, it’s that you’re doomscrolling. (So, trying to curate a good feed on an app designed for short-form content doesn’t solve much.)
The problem is (the normalisation of) short-form content and overstimulation as a whole.
(As a corollary to this, I don’t believe that those “micro-learning” apps where you scroll curated “useful” stuff like quotes from books or educational videos instead of brainrot content are any better. It’s still going to impact your brain the same way and you aren’t going to remember what you read / watched anyway.)
I now view doomscrolling roughly as the digital equivalent of smoking. Everyone knows it’s bad for them in the long-run but sometimes people can’t help themselves.
To be clear, I don’t think all social media is bad and technology is evil and we should go back and live in caves and start farming. Quite the opposite.
For example, I like that YouTube offers the option to turn off all recommendations by turning off the watch history. Which is exactly what I’ve done. (And to me, YouTube doesn’t classify as short-form content — I’m not talking about YouTube Shorts — most videos I watch are ~20-30 minutes long.)
I want to choose for myself what I think is better for me (or worth giving importance to) rather than giving up that agency to The Algorithm. So, I subscribe to the creators whose videos I enjoy and then just watch videos from the subscriptions tab.
As a result, all my time on YouTube is spent watching the things I genuinely am interested in, and it doesn’t try to lure me into watching something else that I know is a waste of time and that I’ll regret immediately after.
Entertainment7 without feeling guilty afterwards?? Yeah, it’s possible.
4.
A few months ago, when Veo 3 (an AI model that can generate short realistic clips) launched, I came up with the natural business idea:
- People watch all kinds of brainrot on the Internet, particularly in the form of short videos (aka reels).
- Veo 3 can be used to generate more short-form brainrot at a much quicker pace, somewhat indistinguishable from human-generated brainrot.
- I can use Veo 3 to create brainrot and post it online (anonymously, of course) and monetize the views and likes.
I also instantly realised that many other people would be thinking the same thing. It didn’t matter, because I could differentiate myself by picking a niche I was interested in — e.g. math / tech — and making semi-brainrot-semi-useful AI-generated videos.
Something stopped me before I posted the first video.
Would I be proud of this? Would this make the world a better place or worse?
If I genuinely believed that doomscrolling is bad (which I did), then I was preying on people who were already “addicted to” doomscrolling and making money off them (it is their time and attention that I would indirectly monetize). Or, phrased less dramatically, making the internet a worse place in general by contributing to the problem.
I could’ve rationalised it by saying “oh I’m just serving a market need” or “someone is going to do it anyway, so it might as well be me”. But deep down, I knew it wasn’t right.
I knew there were better ways to make money that actually added real value to the world, even using this same technology of AI video generation.
And so, I didn’t do it.
What I instead did was spend that time (aka this summer) learning. I started reading more books, watching some online lecture series on YouTube, and studying textbooks. For the first time in forever, I actually bought physical textbooks!
I genuinely enjoyed spending my time re-learning things I’d learned in university, but this time much more in-depth and internalising them. I finally understood why things were a certain way, not just that they were.
A friend told me:
it must be an amazing feeling to be learning full-time
and damn, he was so right!
Footnotes
-
I’m an AI-optimist and also pro-”investing-heavily-in-ai-safety-research”. ↩
-
And the reason I remember this argument so well is because I wrote it down immediately after it happened, because I realised what a blunder I had made. ↩
-
Sorry I just find it very weird to say “X” with a straight face because then what’s the Musk-era-equivalent of “tweet”? ↩
-
And of course AI is going to make this much much more significant - you might see videos / tweets / images on your feed that were created specifically for you, tailor-made to your existing beliefs and worldview, to maximize your chances of being persuaded of some opinion / belief! No, it’s not all bad! The same AI can also be used to create custom videos - made specifically for you again! - to explain concepts to you in a way that makes the most sense to you. AI, like any other technology, can be used in many ways - some good, some bad. ↩
-
Or more rarely, it might even be the case that the true ratio is 20 : 1 in favour even though your feed only shows you 9 : 1. ↩
-
I now know of several other “retro” social media apps that try to solve this exact problem (i.e., they are about the social aspect of it). But adoption rates are slow, and this is to be expected since they’re trying to go against basic human nature (and also, network effects of course). Instagram / TikTok / YouTube / Twitter still rule the digital media world. ↩
-
Uhhh full disclosure: maybeee what I watch might not count as entertainment in the traditional sense, but it’s more of “edutainment”. To give you a sense, my subscription list is: 3B1B, Veritasium, Y-Combinator, Ted, Dwarkesh Patel, Ted-ed, … you get the idea. ↩