Zhang Yuze: algorithms are not the culprit behind a polarised public
A lecturer in Beijing argues that algorithms are mere amplifiers of forces that long predate them: elite polarisation, partisan media, yawning inequalities, and deep-seated psychological biases.
Zhang Yuze is a Lecturer at the School of Literature and International Communication, University of International Business and Economics. In a recent article, he argues that the tendency to blame “the algorithm” for tearing societies apart is misplaced, and that attempts to “fix” politics by tweaking code are largely futile.
Zhang’s article is available on the WeChat blog of 知识分子 The Intellectual, a news platform founded in 2015 by three Chinese academics from Peking University, Tsinghua University, and Princeton University.
撕裂社会的，从来不是算法
Algorithms Are Never the Culprit for Tearing Society Apart
Will algorithms intensify social division? The empirical evidence for this techno-determinist view is weak. The core of algorithms is commercialised “engagement optimisation,” not ideological indoctrination. More importantly, studies abroad find that polarisation is driven mainly by political elites, partisan media, and deep socioeconomic structures. Forcing people out of echo chambers and into contact with opposing views does not promote understanding; instead, it makes their positions more extreme.
Long before “algorithm” became a buzzword, Walter Lippmann raised deep doubts about how “public opinion” forms. Citing Sir Robert Peel, he wrote that public opinion is a “great compound of folly, weakness, prejudice, wrong feeling, right feeling, obstinacy, and newspaper paragraphs.”
Nearly a century ago, people rarely attributed social rifts to a single factor and instead looked to more complex structures for answers. In contemporary public discourse, a common view holds that social media intensifies social division. Whether by deliberately pushing opposing views to create conflict between groups or by clustering extremists who reinforce one another, these narratives point to a single culprit: the algorithm.
This techno-determinist view largely stems from Eli Pariser’s concept of the filter bubble. Pariser argues that personalised recommendation systems intellectually isolate users by showing them only content consistent with their prior views, thereby fostering political polarisation.
This logic fits intuition, yet it is largely an unproven assumption. Numerous recent studies are challenging this piece of “common sense.”
01 The Empirical Fragility Of The Echo Chamber
The filter bubble refers to the idea that algorithms, through personalised ranking, construct a unique information universe for each individual, eroding any shared basis for public discussion. The “echo chamber,” by contrast, describes a closed media environment in which internal messages are amplified while outside perspectives are filtered out.
But do such echo chambers truly exist widely in reality?
A literature review by the Reuters Institute, surveying many studies, concludes that genuine echo chambers are very rare and that most people consume a relatively diverse media diet. A study in the United Kingdom estimates that only about 6 to 8 per cent of the public live in politically partisan news echo chambers.
Contrary to common belief, multiple studies find that people who rely on search engines and social media for news actually encounter a broader and more diverse range of sources. This is called automated serendipity, where algorithms feed you content you would not actively choose.
The small number who truly inhabit echo chambers do so mainly because they choose to consume only certain media, not because algorithms deliberately push them there.
In fact, claims that algorithms sow division often assume that platforms welcome conflict because conflict brings traffic. This assumption overlooks platforms’ real operational goal: long-term user retention.
As the Douyin Safety and Trust Centre explains in its disclosed algorithm principles, if an algorithm only caters to existing interests, content becomes increasingly homogeneous, users lose interest, and eventually leave. Recommendation systems must therefore balance “exploitation” and “exploration” by proactively surfacing new content that users are likely to enjoy, in order to maintain both freshness and user engagement.
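To make the trade-off concrete, here is a minimal sketch of one classic way recommendation systems balance exploitation and exploration, an epsilon-greedy policy. It is an illustrative toy, not Douyin’s disclosed system; the item pool, interest scores, and epsilon value are all assumptions.

```python
import random

def recommend(interest_scores, candidate_items, epsilon=0.1):
    """Pick one item: usually the best-known match, occasionally a fresh one.

    interest_scores: dict mapping item -> estimated interest (exploitation data)
    candidate_items: full pool, including items the user has never engaged with
    epsilon: fraction of impressions reserved for exploration (assumed value)
    """
    if random.random() < epsilon:
        # Exploration: surface content outside the user's known interests,
        # which keeps the feed from collapsing into homogeneous content.
        unseen = [i for i in candidate_items if i not in interest_scores]
        return random.choice(unseen or candidate_items)
    # Exploitation: serve the item with the highest estimated interest.
    return max(interest_scores, key=interest_scores.get)

history = {"cooking": 0.9, "football": 0.6}
pool = ["cooking", "football", "opera", "woodworking"]
print(recommend(history, pool))
```

Most impressions cater to known interests, but the reserved exploration slice is what lets latent interests surface at all.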
02 When “Hearing Both Sides” No Longer Brings Enlightenment
If the evidence that algorithms cause echo chambers is insufficient, where are the roots of polarisation?
There is a saying in China: “Hear both sides and be enlightened.” Deliberative democratic theory in the West likewise holds that citizens exposed to different views in rational discussion become more moderate. If the echo chamber is the problem, then breaking it and letting people hear both sides should ease polarisation.
But what if this premise fails in the era of social media?
Professor Chris Bail, a sociologist at Duke University, conducted a clever field experiment to test this directly. His team recruited highly committed Democratic and Republican Twitter users and paid them to follow a bot account that reposted political messages from the opposing side.
The design forcibly broke echo chambers and compelled participants to hear both sides. The results were unexpected. After one month, participants did not become more moderate or more understanding of the other side; they generally became more extreme.
This finding shows that, at least in social media settings, forcing people out of echo chambers does not solve the problem and instead intensifies polarisation.
A recent generative social simulation study at the University of Amsterdam reinforces this view. Researchers populated a minimal social platform that features only posting, reposting, and following, with AI agents. Even in the absence of complex recommendation algorithms, partisan echo chambers, highly concentrated influence, and the amplification of extreme voices emerged spontaneously. Emotional (often extreme) content attracted more reposts, which in turn brought more followers (influence) to its originators and further entrenched such content.
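The dynamic is simple enough to sketch. The toy simulation below, with assumed emotionality scores and probabilities rather than the Amsterdam team’s actual model, has no recommendation algorithm at all, only posting, reach, reposting, and following; emotional posts still end up concentrating followers around their authors.

```python
import random

random.seed(42)
N_AGENTS, N_ROUNDS = 100, 2000
# Each agent posts with a fixed emotional intensity in [0, 1] (assumed).
emotionality = [random.random() for _ in range(N_AGENTS)]
followers = [set() for _ in range(N_AGENTS)]

for _ in range(N_ROUNDS):
    author = random.randrange(N_AGENTS)
    for reader in random.sample(range(N_AGENTS), 10):  # post reaches a small audience
        if reader == author:
            continue
        # Emotional (often extreme) content is more likely to be reposted...
        if random.random() < emotionality[author]:
            # ...and a repost turns the reader into a follower,
            # concentrating influence on emotional authors.
            followers[author].add(reader)

top = max(range(N_AGENTS), key=lambda a: len(followers[a]))
print(f"most-followed agent's emotionality: {emotionality[top]:.2f}, "
      f"population mean: {sum(emotionality) / N_AGENTS:.2f}")
```

In typical runs, the most-followed agents sit well above the population’s mean emotionality: reward-by-reaction alone is enough to amplify extreme voices.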
The significance of this discovery is that it is not algorithms that isolate us. The basic architecture of human social networks itself tends to reward identity-based, emotional, and non-rational reactions, and allows these reactions to directly shape social ties.
03 What Is The Truth Of Polarisation?
Why does exposure to opposing views sometimes backfire? To answer this, it is necessary to distinguish between two different kinds of polarisation.
Modern political science research suggests that ideological polarisation, meaning disagreement over specific policy positions, has not increased dramatically among ordinary citizens. However, affective polarisation has risen sharply. This refers to growing dislike, distrust, and hostility between partisan groups. It is rooted in identity rather than policy and is defined above all by out-group hate.
Based on this, Professor Bail proposes the Social Media Prism theory, which posits that social media is neither a mirror nor an echo chamber, but a prism that distorts how people perceive themselves and others.
This distortion has two main sources. First, the core mechanisms of social media—identity performance and status competition—provide an ideal stage for extremists. Second, once extremists dominate the arena, moderates tend to fall silent under “the spiral of silence.” The result is a prism effect: users are left with the mistaken impression that most people on the other side resemble the loudest extremists they encounter online.
This also explains why the intervention in Bail’s experiment backfired. What participants were exposed to was not moderate opposing views, but the harshest voices refracted through the prism of social media, which naturally intensified their affective polarisation. Other research suggests that although users can perceive an identity-based “climate of opinion,” this perception does not significantly alter how they express their own opinions, still less generate identification with the other side, and ultimately devolves into irrational behaviour.
04 Beyond Engagement: The Future Of Algorithms
Algorithms operate within a preexisting polarised environment; they did not create it.
The rise in polarisation far predates modern social media algorithms, and a “top-down” pattern appears to have been present from early on.
First, elite polarisation precedes and drives mass polarisation, as politicians and activists are the first to adopt clearer and more divergent positions. Second, deep socioeconomic forces such as rising income inequality are structurally linked to political polarisation. Time-series analyses also reveal a strong long-term correlation between indicators of income inequality, such as the Gini coefficient, and the degree of polarisation in the U.S. Congress.
The techno-determinist narrative that places algorithms at the core of the polarisation crisis is an oversimplification of what is in fact a complex, multi-causal phenomenon.
The division and “fools’ resonance” commonly blamed on algorithms have deeper roots: human psychological biases (such as selective exposure) and underlying socio-political structures. And when a 450-minute in-depth video interpreting Dream of the Red Chamber attracts 300 million views, that reflects algorithms actively breaking through boundaries and surfacing latent interests, not merely reinforcing an echo chamber.
If algorithms are amplifiers, can simple technical tweaks correct the amplification? The University of Amsterdam’s AI Sandbox study offers a thought-provoking answer. Researchers tested six widely proposed “prosocial interventions” and found that those seemingly curative measures had minimal effects and sometimes backfired.
For example, interventions that forcibly break echo chambers by boosting cross-party content had almost no effect: when AI agents were exposed to opposing views, they did not meaningfully change their behaviour, confirming once again that cross-partisan exposure by itself is insufficient. By contrast, the much-discussed “return to purely chronological feeds” did substantially reduce “attention inequality,” but it also produced an unexpected side effect: it intensified the social media prism, increasing the relative influence of highly partisan, extremist users.
Therefore, effective interventions should abandon the futile pursuit of “neutral algorithms.” Instead, they should explore alternative designs that move beyond simple engagement optimisation—for instance, shifting to reward users’ stated preferences (content they judge to be valuable upon reflection) rather than their revealed preferences (content they click on impulsively), or designing for “constructive discourse.” In this way, the power of technology can be grounded in a more accurate understanding of the human and political roots of our current divisions.
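As one illustration of what ranking on stated rather than revealed preferences could look like, here is a hypothetical sketch; the signals and the blend weight are assumptions for exposition, not a design from the article or any real platform.

```python
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    p_click: float     # predicted impulse click-through (revealed preference)
    p_valuable: float  # predicted "worthwhile on reflection" rating (stated preference)

def rank(items, stated_weight=0.8):
    """Order the feed mostly by reflective value, not impulse clicks."""
    def score(item):
        return stated_weight * item.p_valuable + (1 - stated_weight) * item.p_click
    return sorted(items, key=score, reverse=True)

feed = [
    Item("outrage bait", p_click=0.9, p_valuable=0.2),
    Item("long-form explainer", p_click=0.3, p_valuable=0.8),
]
for item in rank(feed):
    print(item.title)  # the explainer now outranks the bait
```

The point is not the arithmetic but the objective: what the system optimises determines what it amplifies.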