Algorithmic Manipulation of Likes and Engagement
Social media platforms don’t just passively display likes – their algorithms actively shape what we see and how much engagement content receives. In some cases, platforms artificially boost or suppress likes to influence engagement. For example, TikTok employees have had access to a secret “heating” tool that manually promotes certain videos into users’ feeds, ensuring they reach a target number of views. This means that some content goes viral not purely on merit or user choice, but by internal intervention – staff-picked videos made to seem hugely popular without users knowing. Likewise, Twitter’s algorithm was famously tweaked in 2023 at the behest of Elon Musk after one of his posts underperformed, boosting his tweets’ visibility by a factor of 1,000 so that they appeared to roughly 90% of his followers (and many non-followers). In essence, the platform altered its code to flood timelines with one account’s content, artificially inflating its impressions and, by extension, its likes and retweets.
Conversely, algorithms can also suppress engagement under certain conditions. Posts deemed “low quality” or in violation of policies may be downranked (often without transparency), limiting their reach and therefore the likes they can gather. Many users suspect “shadow banning,” where their posts are quietly hidden from others’ feeds, causing unnaturally low like counts. While companies rarely confirm such practices, they do acknowledge tweaking feeds in the name of relevance or safety. Facebook, for instance, has in the past adjusted its News Feed algorithm to prioritize personal connections over publisher content, which indirectly affects what gets liked. Overall, these algorithmic manipulations mean the raw number of likes on a post isn’t always an organic metric – it can be the outcome of behind-the-scenes choices about what content to amplify or hold back.
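To make the mechanics concrete, here is a minimal sketch of how per-author multipliers layered on top of a normal relevance model could dominate a feed ranking. Every name and number below – the boost and suppression tables, the 1,000x figure applied as a literal score multiplier – is a hypothetical illustration of the interventions described above, not any platform’s actual code.

```python
# Minimal sketch: per-author multipliers applied on top of a relevance
# model's score. All tables, weights, and names here are hypothetical.

from dataclasses import dataclass

@dataclass
class Post:
    author: str
    base_score: float  # organic relevance score from the ranking model

# Hypothetical intervention tables, analogous to "heating" a favored
# account or quietly downranking a flagged one.
AUTHOR_BOOST = {"favored_account": 1000.0}   # reported 1,000x amplification
AUTHOR_SUPPRESS = {"flagged_account": 0.1}   # quiet "shadow" downranking

def ranked_score(post: Post) -> float:
    """Combine the organic score with any manual boost/suppression."""
    multiplier = AUTHOR_BOOST.get(post.author, 1.0)
    multiplier *= AUTHOR_SUPPRESS.get(post.author, 1.0)
    return post.base_score * multiplier

posts = [
    Post("favored_account", 0.002),  # weak organic score...
    Post("regular_user", 0.9),       # ...still loses to a 1,000x boost
]
for p in sorted(posts, key=ranked_score, reverse=True):
    print(f"{p.author}: {ranked_score(p):.3f}")
```

The point of the sketch is that a single multiplier applied outside the relevance model can swamp it entirely: a post with a near-zero organic score still outranks strong organic content, and the extra impressions then convert into likes that look earned.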
The Fake Engagement Economy: Bots, Bought Likes, and Click Farms
[Image: A click-farm operator in Vietnam with an array of phones controlled by computers, used to generate fake likes and follows.]

Social media’s popularity contest has spawned a shadow industry of fake engagement, where likes, followers, and views can be purchased in bulk. Click farms – often in regions with cheap labor and lax regulations – employ fleets of low-paid workers or automated bots to inflate metrics on command. These “popularity factories” run thousands of fake accounts that systematically like posts, follow users, and leave comments, all to create an illusion of influence. One recent report noted that 51% of all internet traffic is now automated, and 37% is attributed to malicious bots that bolster schemes like fake clicks and likes. In other words, a significant portion of engagement online is driven by algorithms and scripts, not real people.
Every major platform is affected. On Instagram, influencer marketing studies have found rampant inauthentic activity – 55% of Instagram influencers have engaged in fraudulent tactics like buying followers or using “engagement pods” (groups that mass-like each other’s content). Mid-tier influencers (50k–100k followers) often have 25–30% fake followers on average. And it’s not just followers – roughly 40% of comments on sponsored Instagram posts may be generated by bots rather than real fans. Twitter (now X) also has a well-documented bot problem. The company long claimed that under 5% of active users are fake, but external research suggests the share is much higher – one study found up to 15% of Twitter accounts are bots. These bot accounts can be programmed to like and retweet en masse, artificially pushing trends or popularity. Facebook has faced similar issues: its own estimates, dating to 2017, put “duplicate or false” profiles at up to 13–16% of the total – a share that amounted to roughly 448 million accounts by 2020. In a single quarter (Q3 2022), Meta reported disabling about 1.5 billion fake Facebook accounts, even as it estimated that fake profiles still represent roughly 5% of its monthly users – an ongoing battle against bot networks.
Even the newer TikTok is not immune. Demand for fake TikTok followers and likes has surged, with an “endless supply” of services offering to automate growth on the platform. Numerous reports describe videos reaching viral status after fake views boosted them into TikTok’s algorithmic “For You” page. In fact, TikTok’s own transparency data reveals a staggering volume of fraudulent engagement being culled: in just one quarter (Q1 2023), TikTok removed over 51 million fake accounts and 531 million fake likes, and blocked an astonishing 1.2 trillion fake followers from circulating. These numbers underscore how pervasive bought engagement is across social media – on virtually any popular platform, one can cheaply purchase a package of “likes” or followers. The cost of faux fame is often low, too (unregulated sites have advertised thousands of likes for just a few dollars), which fuels a thriving black market for social media clout.
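This “boosted into virality” dynamic is easy to see in miniature. Below is a hypothetical sketch of the staged-audience logic recommender systems are often described as using, where early engagement decides whether a video graduates to a larger viewer pool; the thresholds, rates, and function name are invented for illustration and do not reflect TikTok’s actual system.

```python
# Hypothetical sketch of why bought engagement can snowball: if early
# numbers clear a bar, the recommender widens distribution on its own.
# All thresholds and audience sizes below are invented assumptions.

def next_audience(views: int, likes: int, current_audience: int) -> int:
    """Promote a video to a larger test pool when engagement clears the bar."""
    engagement_rate = likes / max(views, 1)
    if views >= 0.5 * current_audience and engagement_rate >= 0.05:
        return current_audience * 10  # widen distribution tenfold
    return current_audience           # otherwise stall at the current tier

audience = 1_000  # initial test pool
# Purchased engagement (say, 800 fake views and 60 fake likes) clears the
# first bar, so the platform itself supplies the next wave of real viewers.
audience = next_audience(views=800, likes=60, current_audience=audience)
print(audience)  # 10000
```

Under assumptions like these, a few dollars of fake views at the first tier buys exposure to thousands of real users at the next – which is exactly the leverage the fake-engagement market sells.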
The consequences of this fake engagement economy are far-reaching. It distorts the social media experience by making some content seem far more popular than it truly is, thus misleading users and advertisers. A product or post with thousands of likes might owe its visibility to a click farm rather than genuine approval. Fake engagement also erodes trust: when it comes to light that an influencer’s fanbase is inflated or a campaign’s engagement was largely bots, it “throws Instagram’s legitimacy into question” and can tarnish the credibility of brands involved. For platforms, the prevalence of bogus likes and follows represents a constant cat-and-mouse game – as we’ll see, they tout various detection and removal efforts to preserve authentic interaction.
Psychological and Social Effects of Likes (Real and Fake)
For users, the psychological impact of likes is very real – even if the likes themselves aren’t. On a basic level, a “like” functions as a form of social validation: it signals approval or admiration, giving the poster a little rush of affirmation. Studies have shown that our brains respond to these rewards by releasing dopamine, much as with other pleasurable activities, which can make getting likes addictive. People, especially young users, often tie their self-worth and social status to the number of likes they receive. When a post performs well, they feel popular and valued; but if it languishes with only a few likes, it can induce anxiety, inadequacy, or shame. Tragically, “not getting the likes they expected” – or getting fewer likes than peers – has been linked to higher depression and anxiety in teens. In controlled experiments, adolescents who were shown to receive very few likes on a post felt significantly more rejected and had more negative thoughts about themselves than those getting ample likes. This suggests that insufficient “social media validation” can genuinely hurt one’s self-esteem and emotional well-being.
Because of this, users adapt their behavior in response to like counts. Many will tailor their content to whatever tends to garner more likes – whether that means posting at optimal times, using certain filters, or even emulating viral trends instead of sharing what they truly want. The pressure to perform for likes can stifle authenticity and creativity. It can also lead to unhealthy comparison: seeing friends or influencers rack up hundreds of hearts can spur envy and FOMO (fear of missing out), especially if one doesn’t realize that some of those metrics may be artificially pumped up. In extreme cases, users engage in like-chasing tactics – ranging from joining reciprocal “like for like” groups to outright buying likes – just to keep up appearances. The stigma of low engagement is so strong that it’s not uncommon for users to quickly delete posts that don’t get “enough” likes. Teens have reported feeling embarrassed if a photo isn’t liked by a certain number of people within hours, prompting them to remove it from their profile to save face. In this way, the public nature of the like count can warp how people curate their online personas, essentially editing their lives to show only the “popular” moments.
Likes (even fake ones) also influence social perception. A post with thousands of likes is automatically seen as more interesting or credible – the psychological principle of social proof. Users often gravitate toward content that others have approved of, which creates a bandwagon effect. This means fake likes can fool real users into believing a piece of content is trendier or more endorsed than it actually is, potentially swaying opinions or consumption choices. For example, a mediocre product bolstered by purchased likes and positive bot-comments might appear trustworthy, duping consumers. On the personal side, many individuals derive validation from the online engagement numbers attached to them. This can become problematic if they suspect (or discover) that some of those likes are inauthentic. Authenticity of feedback matters for meaningful self-esteem; learning that your apparent popularity was inflated by bots or click-farm workers can lead to disillusionment or a sense of hollow victory.
Platforms themselves have recognized the mental health toll of the “like economy.” Instagram famously experimented with hiding public like counts to “depressurize” the experience for users, especially adolescents. The idea was that without the world seeing the exact number of hearts, people might feel less competition and anxiety over each post’s performance. Early statements from Instagram’s CEO underscored that the goal was to shift the focus away from ratings and toward connecting with others. However, after years of testing, the company found the change did not dramatically improve overall well-being, and it made the feature optional instead. Many users actually missed the metric for tracking popularity or trendiness, illustrating how deeply ingrained quantified social approval has become in online culture. Still, the fact that such measures were attempted shows a growing awareness: chasing likes can have real psychological costs, and those costs are only magnified when the chase is for an illusion (fake likes) rather than genuine appreciation.
Platform Transparency and Anti-Fraud Measures
Social networks are under pressure to crack down on fake engagement and to be transparent about what they’re doing. All the major platforms officially prohibit buying or selling likes, followers, and other forms of “inauthentic activity.” In practice, they employ a mix of automated detection and manual moderation to combat these issues – though with varying degrees of success, given the sheer scale. For instance, Instagram in recent years has deployed machine learning tools to identify accounts that use third-party apps for boosting engagement, and it actively removes the fake likes, follows, and comments those accounts generated. In 2018, Instagram went so far as to publicly warn users of such services: it began purging inauthentic likes and follows from those profiles and cautioned that continued use could result in features being limited. This was a notable step because it was one of the first times the company specifically discussed removing fake likes (not just fake accounts) – essentially admitting the metric itself was being polluted and needed cleaning. Instagram also periodically conducts mass deletions of bot accounts; users may notice their follower counts drop when the platform “flushes out” fake profiles en masse (for example, it has removed tens of millions of fake accounts in sweeps dating back to 2014).
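As an illustration of the signals such automated detection can key on, here is a toy heuristic filter for suspicious “liker” accounts. The signals (account age, follow ratio, liking speed, empty profile) echo commonly reported bot tells, but every field name and threshold below is an invented assumption – production systems use machine-learned models over far richer behavioral data.

```python
# Toy heuristic bot filter. Signals and thresholds are illustrative only;
# real platforms rely on learned models, not hand-tuned rules like these.

from dataclasses import dataclass

@dataclass
class Account:
    age_days: int
    followers: int
    following: int
    likes_last_hour: int
    has_profile_photo: bool

def suspicion_score(acct: Account) -> int:
    """Count crude bot signals; a higher count means more suspicious."""
    score = 0
    if acct.age_days < 7:
        score += 1  # freshly created account
    if acct.following > 20 * max(acct.followers, 1):
        score += 1  # follows far more accounts than follow it back
    if acct.likes_last_hour > 100:
        score += 1  # inhumanly fast liking cadence
    if not acct.has_profile_photo:
        score += 1  # empty profile, a common click-farm shortcut
    return score

def filter_likers(likers: list[Account], threshold: int = 2) -> list[Account]:
    """Keep only likers whose suspicion score stays below the threshold."""
    return [a for a in likers if suspicion_score(a) < threshold]
```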
Facebook (and Meta overall) publishes quarterly transparency reports detailing its enforcement against fake accounts and spam. The numbers tend to be colossal: Facebook has reported taking down billions of fake accounts every year – often blocking millions of bot sign-up attempts per day. In one quarter of 2022 it disabled about 1.5 billion fake accounts on Facebook alone. These are often accounts that never fully go active (Facebook’s systems catch them at sign-up), which is why the company can maintain that only ~5% of active users are false. Facebook has also noted that it identified over 99% of the fake accounts it removed proactively, before users reported them. However, independent analysts have questioned these figures and the methodology behind them. Indeed, the problem of bogus profiles was central to Elon Musk’s 2022 dispute over acquiring Twitter, when he challenged Twitter’s long-standing claim that fewer than 5% of its users were spam bots. That saga shone a light on how little outside observers can verify about a platform’s true fake-account rate – leading to calls for independent audits of inauthentic accounts on social media. Researchers argue that companies have a conflict of interest (their valuations and ad rates depend on user metrics), so transparency is crucial to ensure fake engagement isn’t being underreported.
Twitter (now X) historically conducted periodic purges of spam and bot accounts as well. Notably, in mid-2018 Twitter removed tens of millions of suspicious accounts, an action that visibly reduced follower counts for many high-profile users and was aimed at improving “information quality” on the platform. Under new ownership in 2022, Twitter shifted tactics by introducing paid verification (Twitter Blue), partly on the rationale that bots wouldn’t pay and that payment could help distinguish real users – though spam accounts have still found ways to persist, sometimes even purchasing verification for credibility. The efficacy of that approach remains debatable, and detailed data from Twitter on fake-like or bot removal post-2022 has been scarce due to API and policy changes (which, ironically, made it harder for outsiders to track bot activity).
Meanwhile, TikTok and newer platforms are ramping up their transparency efforts. TikTok’s reports under the EU’s Digital Services Act reveal metrics on fake engagement removal. As noted, TikTok claims to be removing hundreds of millions of fake likes and follows each quarter. It has stated that content or accounts with “inauthentically inflated metrics” (i.e., bought likes or followers) are taken down or penalized, and that it can prevent fake followers from ever showing up in a user’s follower count in the first place. TikTok also says it is improving detection to stop bogus “viral” videos – for example, it has increased removals of content that artificially inflates popularity, and in some cases it labels automated accounts and spam more clearly.
Platforms have also pursued legal and regulatory means to stem fake engagement. Facebook and Instagram have filed lawsuits against firms that sell fake likes and followers, seeking to shut down such operations. In a landmark U.S. case in 2019, the FTC settled charges against Devumi – a company infamous for trading in fake followers and likes – and explicitly banned it from selling social media influence indicators in the future. The FTC noted that Devumi had filled more than 58,000 orders for fake Twitter followers and thousands more for YouTube likes and other metrics, deceiving clients and consumers. The action signaled that authorities view the sale of fraudulent social media engagement as a form of false advertising or marketplace fraud. Similarly, the New York Attorney General penalized companies, including an influencer marketing agency, for selling fake engagement, and published guidance that such practices violate truth-in-advertising laws.
To help verify authentic engagement, a cottage industry of analytics tools has emerged. Services like HypeAuditor, Modash, and others offer brands and users audits of an influencer’s followers and likes, flagging what percentage appear to be bots or inactive accounts. Many marketing teams now run influencer accounts through these tools before collaborations; in one analysis, brands that used fraud detection for influencer vetting saved an average of 23% of budget that would otherwise have been wasted on fake audiences. Platforms themselves are adding more transparency features too. Instagram now allows users to view a creator’s account insights (which can reveal suspicious spikes in followers). Twitter has experimented with labeling “automated by XYZ bot” on certain bot accounts for clarity. And both Twitter and Facebook have opened (limited) data access to academic researchers studying manipulation and fake engagement patterns. Despite these steps, experts believe more is needed – such as independent audits and greater data access – to truly quantify and combat the scope of fake likes across social networks.
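To give a flavor of what such audits measure, here is a rough sketch of the engagement-rate sanity check these tools popularized: compare observed likes and comments against what a genuinely engaged audience of that size would plausibly produce. The benchmark ranges below are illustrative assumptions, not HypeAuditor’s or Modash’s actual methodology.

```python
# Rough sketch of an engagement-rate audit. Benchmark bands are invented
# assumptions for illustration, not any vendor's real scoring model.

def engagement_rate(avg_likes: float, avg_comments: float, followers: int) -> float:
    """Average interactions per post divided by audience size."""
    return (avg_likes + avg_comments) / followers

def audit(followers: int, avg_likes: float, avg_comments: float) -> str:
    rate = engagement_rate(avg_likes, avg_comments, followers)
    # Illustrative benchmark: organic accounts often land around 1-5%.
    if rate < 0.005:
        return f"{rate:.2%} engagement: followers may be padded or inactive"
    if rate > 0.20:
        return f"{rate:.2%} engagement: likes may be bought or pod-driven"
    return f"{rate:.2%} engagement: within a plausible organic range"

# Example: 2M followers but only ~3,000 interactions per post is a red flag
# (echoing the 2-million-follower influencer case discussed below).
print(audit(followers=2_000_000, avg_likes=2_800, avg_comments=200))
```

A real audit would look at far more than one ratio – follower growth curves, audience geography, comment quality – but even this single check catches the starkest cases of padded audiences.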
Case Studies: Scandals and Notable Incidents of Inflated Likes
- The Devumi Follower Factory (2018): An explosive investigation by The New York Times revealed a Florida-based company, Devumi, that made millions selling fake followers and likes to politicians, celebrities, and influencers. Devumi had a stock of at least 3.5 million automated accounts and supplied over 200 million Twitter followers to clients over time. Public figures ranging from a former American Idol contestant (Clay Aiken) to the wife of the U.S. Treasury Secretary were found to have purchased followers or retweets. The scandal prompted the New York Attorney General to launch an investigation and later led to the first FTC complaint and settlement over the sale of fake social media influence. The case pulled back the curtain on a thriving black market, making clear that many “popular” online personas had boosted their fame with paid armies of bots.
- TikTok’s Secret “Heating” Button (2023): In early 2023, reports surfaced that TikTok employees routinely used an internal tool to manually inflate video views and likes on the platform. This “heating” feature allowed staff to push selected videos onto the coveted For You page, making them go viral artificially. TikTok insiders admitted that heated videos could comprise 1–2% of total daily views – a significant manipulation of which content appeared successful. Heated posts were not labeled as promoted or ads, so users often assumed a video’s massive engagement was entirely organic. The revelation contradicted the common belief that TikTok fame is purely algorithm- and user-driven, and it raised concerns about transparency and favoritism (employees reportedly even boosted their own or friends’ posts against policy). The company later acknowledged the practice and vowed to limit it, but the incident showed how even likes and views on a platform famed for its algorithm can be manually juiced behind the scenes.
- Elon Musk’s Twitter Boost (2023): In a high-profile case of algorithmic meddling, Twitter’s CEO Elon Musk reportedly ordered engineers to tweak the platform’s algorithm after his tweet during the Super Bowl got fewer impressions than President Biden’s. The result: Twitter deployed code that artificially amplified Musk’s tweets by 1,000x, virtually guaranteeing his posts would top users’ feeds. For a period, many Twitter users (even those who didn’t follow Musk) saw an outsized number of his tweets, which corresponded with a surge in likes on his posts. Musk essentially force-fed his content to the user base, a move he half-jokingly acknowledged by tweeting a meme about compelling everyone to read his tweets. The incident became a case study in how a platform’s leader could leverage internal systems to generate fake engagement for one account – blurring the line between genuine popularity and platform-engineered attention.
- Influencer Fake Fame Scandals: Numerous influencers have been exposed for inflating their likes and follows. In one notable example, analytics firm HypeAuditor found that a majority of comments on some influencers’ sponsored posts were left by bot accounts, not real fans. Brands have learned the hard way that a huge like count doesn’t always equal real influence: in 2019, a marketing agency famously recounted how an Instagram “micro-celebrity” with over 2 million followers failed to sell even 36 T-shirts – a flop attributed to a follower count padded with fakes. That same year, the cosmetics brand Sunday Riley was caught having employees post fake “customer” reviews on Sephora’s website; the FTC charged the company with deception. Collectively, these cases sparked an industry wake-up call. Advertisers began demanding audience authenticity verification, and Instagram began purging fake followers on high-profile accounts. The message was clear: behind the glossy photos and big like counts, many influencers had quietly bought their way to relevance, and when the facade crumbled, so did the opportunities of those who cheated.
- Political Like-Bots and Astroturfing: Politics has seen its share of fake engagement controversies as well. In the 2016 U.S. election and other campaigns worldwide, “bot armies” amplified candidates and messages by liking, sharing, and retweeting inorganically. Studies found that during some political events or hashtag campaigns, anywhere from 9% to 15% of the active Twitter accounts were likely bots pushing out content. In extreme cases, over 50% of the social media chatter on certain issues was driven by automated or fake accounts. One high-profile example: researchers discovered huge networks of fake Facebook likes originating from Russian and other foreign troll farms, aimed at making divisive political posts seem hugely popular. These fake likes boosted the visibility of propaganda pages and posts, tricking the algorithm (and users) into believing that extreme viewpoints enjoyed massive public support. Such incidents have led platforms to regularly announce takedowns of “coordinated inauthentic behavior” – for instance, Facebook has removed large clusters of accounts linked to state-backed influence operations that, among other things, mass-liked political content to game the system. These cases underscore that fake likes aren’t just a vanity issue; they can be wielded as tools of information warfare, artificially magnifying some voices and drowning out others in the digital public square.
Conclusion: Across these angles – from sneaky algorithms and click-farm economies to psychological fallout and fraud response – one theme is constant: all likes are not created equal. A “like” counter that supposedly measures popularity or approval may in fact measure promotion, coercion, or fabrication. Users and brands are gradually becoming savvier about this reality. Social media companies, on their part, walk a tightrope: they must show they’re curbing the fake engagement that misleads people, while still celebrating the vibrant engagement that keeps users hooked and ad dollars flowing. The past few years have seen progress (improved detection, greater transparency reports, and even design changes like hidden likes), yet fake and misleading likes remain an evolving challenge. As the examples above illustrate, whenever there’s an incentive to appear more popular than one truly is – be it for profit, pride, or power – the temptation for deception follows. In the end, recognizing the difference between genuine social proof and the illusion of it is becoming an essential digital literacy skill for all of us scrolling those feeds.
Sources: The information in this report is drawn from a range of credible sources, including news investigations, academic studies, and platform transparency reports. Key references include Forbes/Insider and Guardian reporting on TikTok’s “heating” button and Musk’s Twitter algorithm tweak; analyses of fake-follower markets and influencer fraud statistics from marketing studies; official data from Meta and TikTok on the removal of fake accounts and likes; psychological research on the effects of likes on adolescent well-being; and Federal Trade Commission filings on the Devumi case and related crackdowns. These and other cited sources provide a fact-based foundation for understanding how social media likes can be manipulated and why it matters in today’s online ecosystem.