I’m Begging You: Stop Using Algorithmic Feeds To Get Your News
It’s so easy to get lost in a media feed. Hours can disappear swiping through videos on TikTok, Instagram’s Reels, YouTube’s Shorts, and the many other “quick” feeds out there. “I have a problem with TikTok,” one Reddit user laments. “I’m spending upwards of 4, sometimes 5 hours a day just watching videos on TikTok.” “I have an addiction to TikTok, and the reason that I am worried about it is that it is lowering my attention span,” goes another.
These feeds are not just where we get fun distractions; they are increasingly where we get our news. According to a Pew Research analysis released in 2024, a little more than half of Americans (54%) at least sometimes get news from social media. We have grown accustomed to getting our information through these platforms, and yet they come with a ton of drawbacks: they erode our attention spans and executive functioning, and they make it more likely that we internalize and spread misinformation.
And maybe we should all put more effort into curating our news feeds from particular sources rather than letting unaccountable algorithms do that labor for us.
The drawbacks of feeds
Most social media feeds, whether TikTok, Instagram Reels, or your Facebook Timeline, are a sludge of misinformation where claim after claim arrives at such a rapid pace that it becomes difficult to assess any single claim’s accuracy in the moment, especially since this content is engineered to exploit human psychology and speed its own spread. As scholar Kim Bisheff told Cal Poly News:
“Generally, misinformation is designed to tap into our emotional reactions: fear and anger, most frequently. We humans have this design flaw where, when we experience strong emotions — especially fear and anger — or in some cases, that feeling of validation that comes with having your opinions restated in an inflammatory way, we tend to react to that information without giving it much critical thought.”
I recently saw a flood of misinformation when news of the Sean “Diddy” Combs sexual assault scandal began to circulate. A music video showed up on my feed, allegedly sung by Justin Bieber, implying he had been an SA victim of Diddy’s. The video’s lyrics included: “Lost myself at a Diddy party, didn’t know that’s how it go. I was in it for a new Ferrari, but it cost me way more than my soul.”
I initially believed this claim because it was within the realm of possibility (SA in the entertainment industry is not uncommon). The information felt emotionally real to me, and that feeling bypassed my ability to assess its accuracy. I shared it with my partner and would have done so with more people, but on a whim, I started fact-checking it and learned that, no, the song was fake.
Justin Bieber never released the song “Lost Myself at a Diddy Party.”
Thankfully, I only had to issue a retraction to one person, but this is not the first time something like this has happened (see Is “Occupy Democrats” Fake News?). I do research on the Internet for a living, and misinformation still slips through the cracks, and it usually comes from feeds, which do not incentivize fact-checking.
With the Bieber-Diddy example, it took me five minutes to fact-check one five-second video. The average TikTok user, for example, watches almost an hour of such content every day, which could translate to a hundred or more videos to verify in a similar manner. At five minutes apiece, a hundred videos would take over eight hours to check, meaning that to avoid misinformation, you would probably spend far more time fact-checking your feed than consuming it.
That is too high a hurdle to expect any user to clear. These platforms are not set up for users to pause, reflect, and deconstruct the news they absorb. You merely keep consuming claim after claim, regardless of accuracy, and that is by design.
A harmfully addictive design
A consistent body of research shows that excessive screen time can have drawbacks, especially for children, most of whom now use screens from infancy.
A longitudinal study found that children exposed to television from 29 months (about 2.4 years) onward saw decreased classroom engagement.
Another noted that adolescents who frequently multitasked with media were more likely to struggle with some aspects of executive functioning in their everyday lives, a struggle likely linked to academic performance.
One National Institutes of Health-funded study led by Betty R. Vohr, M.D., found that six- and seven-year-olds who were born “extremely preterm” and had more than two hours of screen time a day “were more likely to have deficits in overall IQ, executive functioning (problem-solving skills), impulse control and attention.”
A scoping review indicates that screen time from an early age can “have negative effects on language development.”
(Note: for this list, I drew on the background section of the article Effects of Excessive Screen Time on Child Development: An Updated Review and Strategies for Management.)
The American Academy of Pediatrics advises that children under 18 months (1.5 years) have no screen time at all and that children from two to five be limited to one hour a day. That advice, if followed, might mitigate some of these effects, but by and large it isn’t being followed: children under one year old are, on average, watching almost an hour of screen time daily.
And, of course, the reason so many people, including children, are hooked on their screens, drawbacks and all, is that screens are designed to hook them. It’s well-documented at this point that tech companies have purposefully taken advantage of human psychology to manipulate our preferences (see “Persuasive Design”). Netflix even has an Emmy-winning documentary on the matter (see The Social Dilemma).
When talking about this subject, I always refer to the book Hooked by Nir Eyal, a popular read amongst the founders of Web 2.0, which laid out how companies could create products that are psychologically addictive. As Eyal writes in that book:
“Once we’re hooked, using these products does not always require an explicit call to action. Instead, they rely upon our automatic responses to feelings that precipitate the desired behavior. Products that attach to these internal triggers provide users with quick relief. Once a technology has created an association in users’ minds that the product is the solution of choice, they return on their own, no longer needing prompts from external triggers.”
This sure sounds like instructions on how to psychologically addict one’s users, and it is advice a lot of people took to heart. Most feeds are now designed to generate these internal triggers so that you feel compelled to return to them again and again, unprompted.
And that’s where my concern about using such feeds for news comes from. These platforms are designed to be addictive, not accurate, which means misinformation is an integral part of the social media experience.
An unsocial conclusion
Now, I am not a scientist, so I encourage you to read up on all the information I have cited (and to correct anything you think is erroneous; I do make edits).
What I have read alarms me. Feeds have an element of danger to them, whatever the medium. It’s impossible to verify all the claims coming at such speed, and I think it’s a mistake to believe that an individual can overcome these hurdles through willpower alone.
None of us are immune to propaganda and misinformation.
No algorithmic feed should be your default source for news. I think the healthiest thing is to build your own RSS feed (Cory Doctorow recently evangelized about this on his blog). Gather a list of trusted sources, and when they do you wrong, drop them. Even here there will be limitations; none of us can single-handedly overcome a predatory information ecosystem. But we should at least try (for the curious, a minimal sketch of what such a setup can look like follows at the end of this piece).
Otherwise, we let the whims of a feed dictate our preferences.
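To make the RSS suggestion concrete, here is a rough sketch of how small a homemade news reader can be. It assumes Python 3 with the third-party feedparser library installed (pip install feedparser); every feed URL below is a placeholder or example, so swap in whatever sources you actually trust.

```python
# A minimal, do-it-yourself news feed: a sketch, not a polished product.
# Assumes the third-party "feedparser" library (pip install feedparser).
# Every URL below is an example/placeholder; substitute your own trusted sources.
import feedparser

TRUSTED_FEEDS = [
    "https://pluralistic.net/feed/",         # Cory Doctorow's blog (check the site for its current feed URL)
    "https://example.com/news/rss",          # placeholder: a news outlet you trust
    "https://example.org/science/atom.xml",  # placeholder: a science desk you trust
]

def latest_headlines(urls, per_feed=5):
    """Yield (source, title, link) for the most recent entries of each feed.

    Feeds typically list their newest entries first, so slicing the front
    of the entry list gives a rough "latest headlines" view.
    """
    for url in urls:
        parsed = feedparser.parse(url)  # fetches and parses an RSS/Atom feed
        source = parsed.feed.get("title", url)
        for entry in parsed.entries[:per_feed]:
            yield source, entry.get("title", "(untitled)"), entry.get("link", "")

if __name__ == "__main__":
    for source, title, link in latest_headlines(TRUSTED_FEEDS):
        print(f"[{source}] {title}\n    {link}")
```

Run it on whatever schedule suits you, or skip the code entirely and point a dedicated RSS reader at the same list. Either way, the point stands: you, and not an engagement algorithm, decide what shows up.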