In the spectral phantasmagoria of Artificial Intelligence, the “utopian” and the “cynical” have joined hands while we grapple with the consequences of ontological confusion, loss of trust and intensified exploitation.
Last February, I publicly questioned the existence of Jianwei Xun, an author who published a best-selling, critically acclaimed book titled Hypnocracy: Trump, Musk and the New Architecture of Reality. On his website and on Academia.edu (the original version of his profile can now be found only on the Internet Archive) he claimed to be a Hong Kong-born media philosopher, a Research Fellow at the Institute for Critical Digital Studies in Berlin who had studied political philosophy and media at Dublin University.
Since I live in Hong Kong and used to teach cultural studies, I was surprised that I had never heard his name before. Had I become so disconnected from this interdisciplinary field that his name didn’t even sound vaguely familiar? It certainly sounded odd: Chinese names follow a different order; the family name should have come first. That’s when I decided to dig deeper.
The Berlin institute he mentioned didn’t exist, and “Dublin University” was an ambiguous reference that didn’t point to any specific university. Xun claimed he had spent years consulting on strategic narratives for global institutions before dedicating himself to writing, yet I found no trace of his alleged academic output or professional activity. The book excerpts I sampled didn’t strike me as particularly original; they read like a hotchpotch of 1960s and 1970s philosophy. But unlike other derivative texts I often come across, they possessed an uncanny quality, almost as if a medium had invoked the spirits of dead philosophers during a séance. Since today’s impostors are more likely to use AI than table-turning, I quickly came to the conclusion that whoever was hiding behind the mysterious “Jianwei Xun” had employed generative AI tools to churn out this book.
Months after I denounced this fraud (1) and exchanged messages with several journalists, corporate media finally admitted Xun doesn’t exist. The Italian publisher who hid behind this fictitious identity now insists he was just conducting an experiment, “an exercise in ontological engineering,” as he put it. But if that was the case and he didn’t intend to deceive his readers, why did he remove the academic and professional references he had fabricated from the newly updated website of his fake persona?
The choice of a Chinese identity to enhance credibility and marketability reflects a disturbing pattern of cultural appropriation where Western writers capitalize on the perceived exoticism of an Asian name, while real Asian writers face significant barriers to the publication of their work.
If there is a lesson to be learnt from the Jianwei Xun saga, it is that the media buzz around “his” work and persona, amplified by reputable outlets, glowing reviews, and a slick online presence, created a feedback loop in which the perception of reality outpaced any need to verify it. The lines between the real and the unreal blurred, and aggressive marketing made them feel irrelevant.
With AI churning out content that to the untrained eye appears indistinguishable from human output, and media outlets racing to publish it, people are increasingly wired to prioritize the hype and often have no time to even scratch the surface.
While the media system has always cashed in on hype and sensationalism, these days social media platforms are the main generator of hype.
As I was handling media enquiries about the fictional “Hong Kong-born media philosopher” and writing about this case on my Substack, I received a tip from a reader who alerted me to another instance of habitual AI-enabled plagiarism, this time involving someone who does live in Hong Kong and contributes to Russian and Chinese media outlets.
Did his opinion pieces slip through? Did editors turn a blind eye to AI-generated content in order to capitalize on his social media reach?
In any case, we should not focus on specific instances: anyone using an AI detector knows that this unethical practice is widespread. Instead, I invite the reader to consider the complex interplay of audience dynamics, economic and cultural factors that incentivizes both the rise of AI plagiarism and the social influencer industry.
By now it should be abundantly clear how social influencers operate. Self-promotion, exaggerated claims, and a well-crafted image can snowball into credibility before anyone checks credentials. They build parasocial relationships with their followers, who feel they know the influencer even though they have never met. Influencers attempt to create an aura of individuality and authenticity through personal storytelling, sharing raw footage or sensational material, and inviting followers to “peek” into their lives, encouraging voyeuristic engagement.
However, this aura is even more fragile than the artificial aura Walter Benjamin described in the 1930s, when the Hollywood-driven phenomenon of elevating actors to celebrity status created cult-like personas that compensated for the loss of aura in the Age of Mechanical Reproduction. Not only is the influencers’ derivative content easily replicable; the influencers themselves are vulnerable to replacement by AI-generated personas.
Benjamin recognized in the spell of the star’s personality “the phony spell of a commodity.” But most importantly, he warned that a medium with the dual capacity to abolish the distance between the audience and the depicted world, while simultaneously detaching the audience from the physical world and its material conditions, is ideally suited to the aims of fascism. Benjamin was primarily referring to film and photography, but in an era of algorithmic reproduction controlled by a handful of tech companies his observations have become more relevant than ever.
Influencers leverage the bandwagon effect, that mix of conformity and fear of missing out. Once a persona gains momentum, with the help of thousands of bots whose cost is now less than a cent for basic accounts, humans jump on. Social media’s smoke and mirrors work because we are wired for stories, not audits. With billions of automated bots flooding social media platforms, by some estimates there is a roughly 50% chance that any account you engage with, whether by liking a post, commenting, or following, is a bot. Bots have become so sophisticated that they are increasingly hard to detect. As for the remaining accounts that are still operated by humans, about half of them publish content generated by artificial intelligence.
Even those with limited expertise on a given topic can produce persuasive posts and articles, while readers would need an AI tool such as GPTZero to identify their artificial origin.
A simple prompt ensures that the AI-generated content they publish is aligned, and resonates, with their followers’ ideological leanings, interests and preferences. An article that appeared in a conservative news outlet can be automatically rewritten to please a liberal audience and vice versa. A paper published by an academic can be summarized and interspersed with jokes and colloquialisms to appeal to a non-academic audience; three articles can be seamlessly meshed into one, creating a cohesive piece that synthesizes their content; and so on. You get the drift.
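To see just how low the barrier is, here is a minimal sketch of such a rewriting pipeline, assuming the OpenAI Python SDK; the model name, prompt wording and restyle function are illustrative assumptions, not a reconstruction of any particular operation:

```python
# Minimal sketch: one prompt turns the same source text into content
# tailored to any ideological audience. Assumes `pip install openai`
# and an OPENAI_API_KEY in the environment; model and prompt are
# illustrative, not a documented workflow.
from openai import OpenAI

client = OpenAI()

def restyle(article: str, audience: str) -> str:
    """Rewrite an article so its tone and framing suit a target audience."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {
                "role": "system",
                "content": f"Rewrite the article below so it appeals to a "
                           f"{audience} audience. Keep the claims; change "
                           f"the framing, tone and vocabulary.",
            },
            {"role": "user", "content": article},
        ],
    )
    return response.choices[0].message.content

# The same source text, two ideological skins:
# restyle(original, "conservative"), restyle(original, "liberal")
```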
Shaped by a mix of human activity and AI-generated content, the Internet and social media now resemble a phantasmagoria, a make-believe optical show that projects ghostly images, fetishizes human desires and experiences, and intensifies narcissistic self-reference to create an illusion of authenticity. The manufactured presentation of the self (fabricated authenticity) is the ultimate neoliberal imperative: people are actively encouraged to become producers of themselves. The name of the game is “fake it till you make it.” Lack of qualifications or professional expertise is no barrier for aspiring influencers. Ambition, a background in marketing, a good knowledge of psychological manipulation techniques, the ability to leverage algorithms, and an initial investment in an army of bots to boost content are better guarantors of success.
Those who make the cut can rake in juicy contracts to promote products, services or a political agenda. The more engagement their hustle generates, the more data capital social media platforms accumulate.
Unsurprisingly, this state of affairs is undermining real engagement and driving an increasing number of frustrated and disillusioned users away from these platforms, leaving bots to interact with other bots, as advertisers already lament.
As generative AI dissolves barriers to productivity – one can easily churn out dozens of social media posts a day, dozens of articles a week, and repurpose them for videos, podcasts and interviews – concerns about machine plagiarism keep mounting. Initially the most combative critics were those directly affected, such as authors, artists, journalists and academics, but, as I will explain, the alarm is now being raised by AI researchers too. It turns out that AI-generated texts that omit attribution, remix unconsented content and reduce it to an untraceable, unrecognizable blend constitute a form of pollution that is degrading the very digital environment that feeds AI systems.
These systems, particularly large language models and generative tools, are trained on data scraped from the internet, including books, articles, websites, and social media. Theft on a global scale is presented as the future of humanity.
Not only are AI companies profiting from work they never paid for and never asked permission to use; so are those who rely on generative AI, which drives the demand for ever more sophisticated and human-like chatbots.
Like most industries, traditional media are being transformed by AI. While AI analytical tools can certainly help journalists process large volumes of data and identify meaningful patterns, and AI transcription technology saves them time on a rather mundane task, generative AI is a different story. It is jeopardizing journalistic integrity, jobs and readers’ trust. As usual, the big driver behind the use of AI tools such as ChatGPT is the pursuit of profit. The problem is, cuts in the newsroom weaken the quality of journalism, which alienates the audience, which in turn puts further pressure on revenues, which leads to further staff cuts, and so on.
The complexity of journalistic work rests upon a repertoire of embodied experiences and knowledge – building a network of trusted sources who are willing to share their secrets is not something AI will be able to achieve any time soon.
Unfortunately, the moment the result of this painstaking work, which may combine interviews and extensive research, is published online, it is drowned out by hundreds of AI-generated variations of the same article that remix and rewrite its content. The result is standardized, homogeneous texts stripped of the vibrant, dynamic essence of human voices and their diversity. Or mimicry that simulates diversity – the “text-in-drag.” And as the Internet is inundated with AI-generated clickbait articles competing for readers’ attention, investing in quality no longer guarantees any returns to media outlets and independent authors.
Moreover, AI’s overproduction of derivative content is overwhelming search engines and social feeds, and exacerbating the problem of information overload. Readers can hardly cope with a deluge of information and the constant digital stimulation that is impairing their memory, attention spans, critical thinking and ability to process information. Many are already tuning out, avoiding news altogether, or just scrolling headlines.
Although regurgitating information has never been easier, its impact is fast becoming inversely proportional to its quantity. Information requires no interpretation to exist and doesn’t necessarily turn into knowledge and understanding. It may, but only through a dynamic cognitive process of which acquisition is only the first step. As Artificial Intelligence is advancing its capabilities, there is no evidence that human beings are advancing theirs. Actually, they are already losing the ability to think clearly and effectively, let alone handle complexity.
Another consequence of the proliferation of AI systems is ontological confusion, a state of existential disorientation arising from ambiguity or indeterminacy in the categories of being, essence, and reality. AI creates a breach in the barrier between humans and objects, though it’s fair to say capitalism started wrecking it a long time ago. If the barrier collapses, our conception of what it is to be human will be profoundly undermined. AI is already disrupting established frameworks of meaning, interaction and trust; failing to take stock of this disruption could have catastrophic effects on both individuals and societies.
If AI is to assist humans rather than deceive them, then we need a mandatory AI identification system: any autonomous AI agent must declare itself as such prior to interaction with a human and the media industry, including social media platforms, should clearly label AI-generated content. There is no shortage of AI detection tools, and they are pretty effective at identifying which portions of a text are likely to be human-written, AI-generated, or AI-refined.
Some also hope that sooner or later search engines will start offering users an effective AI filter. Until then, weeding out artificial content will remain a time-consuming endeavour.
Although the optimist in me shares the belief that transparency over AI-generated content will increase in response to strong demand, my inner pessimist believes the chances of it happening soon are slim: the digital economy is underpinned by data capital, which has a symbiotic relationship with AI. And Big Tech won’t change the status quo until the quality of data becomes so seriously degraded that it erodes their colossal profits.
AI transforms data into capital and relies on data capital to train and operate. AI is both a driver and beneficiary of data capital. That’s why software is being embedded in more and more products: they are all generating data.
As a ‘Big Data strategist’ for Oracle, one of the largest software companies in the world, explained, “data is a new kind of capital on par with financial capital for creating new products and services. And it’s not just a metaphor; data fulfils the literal textbook definition of capital.” (2)
I don’t know which textbook he had in mind, but in order to understand the economic and social dynamics that drive the so-called Fourth Industrial Revolution, I am going to consult the copy of Marx’s Capital that sits on my shelf. Admittedly, it needs some dusting.
Marx defines capital as value in motion, that is, value of a peculiar type: self-expanding value, a social relation that appropriates the surplus value created in a definite process of production and continually reproduces both capital and capitalist relations.
In order to expand, capital must purchase a commodity, the consumption of which creates new value. This commodity is labour-power, an inconvenient truth that our “Big Data strategist” didn’t bother to mention.
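Marx condenses this circuit in his “general formula for capital” (Capital, Chapter 4), which in his own notation reads

\[ M - C - M', \qquad M' = M + \Delta M \]

where money \(M\) buys commodities \(C\), labour-power among them, and returns as \(M'\), the original sum plus the surplus value \(\Delta M\).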
“In order to be able to extract value from the consumption of a commodity, our friend, Moneybags, must be so lucky as to find, within the sphere of circulation, in the market, a commodity, whose use-value possesses the peculiar property of being a source of value, whose actual consumption, therefore, is itself an embodiment of labour, and, consequently, a creation of value. The possessor of money does find on the market such a special commodity in capacity for labour or labour-power.” (Capital, Chapter 6)
The creation of value depends on the general intellect, that is, the knowledge, skills, and intellectual capacities of society, but under capitalism it is subject to private appropriation and private control. Tech oligarchs like to frame this private appropriation for their AI models as “democratizing access to knowledge.” If that is the case, maybe we should start to democratize access to their bank accounts.
When data is treated as a form of capital, the imperative is to extract and collect as much data, from as many sources, by any means possible. That shouldn’t come as a surprise. Capitalism is inherently extractive and exploitative. It also generates a constant pressure towards universal commodification: it keeps colonising new territories, the non-commodified and non-monetised parts of life, with the same disregard for collateral damage it exhibits when it exploits labour and natural resources in pursuit of profit.
It is important to keep in mind that data is both commodity and capital. A commodity when traded, capital when used to extract value.
AI distils information into data by transforming any kind of input into abstract, numerical representations to enable computation.
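A toy sketch of that abstraction, reducing a sentence to word counts over a fixed vocabulary; real systems use tokenizers and learned embeddings, but the reduction is the same in kind:

```python
# Toy illustration of how text becomes numbers. Real AI systems use
# tokenizers and high-dimensional learned embeddings, but the principle
# -- input reduced to an abstract numerical representation -- is the same.
from collections import Counter

def to_vector(text: str, vocabulary: list[str]) -> list[int]:
    tokens = [t.strip(".,;:!?").lower() for t in text.split()]
    counts = Counter(tokens)
    # Whatever falls outside the chosen vocabulary simply disappears.
    return [counts[word] for word in vocabulary]

vocab = ["data", "labour", "value", "capital"]
print(to_vector("Data is capital; capital feeds on data.", vocab))
# -> [2, 0, 0, 2]: the sentence survives only as numbers over a chosen grid.
```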
The distinction between the consumer and the producer of information vanishes once their activity becomes data. To be online is to both consume and produce data, that is, value. Users generate data through interactions which platforms monetize. This unpaid ‘labour’ is comparable to Marx’s labour-power, as users produce value (data). AI algorithms, cloud infrastructure, and digital platforms are the new means of production, and they are concentrated in very few hands.
Data extraction and collection are driven by the dictates of capital accumulation, which in turn drives capital to construct and rely upon a universe where everything is reduced to data. Since the data fed into machines has undergone a preliminary abstraction process, there is nothing to stop this data from being the outcome of previous cycles of artificial production of information through data. Data generate data that generate data, and so on. Like interest-bearing capital, ‘a mysterious and self-creating source of its own increase … self-valorising value, money-breeding money’, as Marx describes the process of financialization that autonomizes capital from its own support.
Data accumulation and capital accumulation have led to the same outcome: growing inequality and the consolidation of monopoly corporate power.
But just as the autonomization of capital that crowds out non-financial investments has a detrimental effect on productive sectors, so does the proliferation of AI content online. Several researchers have pointed out that generating data out of synthetic data leads to dangerous distortions. Training large language models on their own output doesn’t work and may lead to ‘model collapse’, a degenerative process whereby, over time, models forget the true underlying data distribution, start hallucinating and produce nonsense. (3)
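The dynamic can be illustrated with a toy experiment, a sketch inspired by, though far simpler than, the study cited above: fit a distribution to a finite sample, resample from the fit, refit, and repeat. Tail events are systematically under-sampled, so the estimated spread tends to shrink generation after generation:

```python
# Toy illustration of 'model collapse': each generation is fitted to a
# finite sample drawn from the previous generation's fit. Rare (tail)
# events are under-sampled, so the estimated spread tends to shrink
# until the 'model' has forgotten the original distribution's variance.
import random
import statistics

random.seed(42)                   # reproducible run
mu, sigma = 0.0, 1.0              # generation 0: the 'human' data
for generation in range(1, 21):
    sample = [random.gauss(mu, sigma) for _ in range(20)]
    mu, sigma = statistics.fmean(sample), statistics.pstdev(sample)
    print(f"gen {generation:2d}: mean={mu:+.3f}  std={sigma:.3f}")
```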
Without a constant input of good-quality data produced by humans, these language models cannot improve. The question is, who is going to supply well-written, factually correct, AI-free texts when an increasing number of people are offloading cognitive effort to artificial intelligence, and there is mounting evidence that human intelligence is declining?
When Ray Kurzweil, a promoter of transhumanism and an AI pioneer, raves about machine learning systems that would soon begin improving themselves by designing ever more powerful neural networks without human input (“Because computers operate much faster than humans, cutting humans out of the loop of AI development will unlock stunning rates of progress”), he is just engaging in spin. Asked about the impact of AI on labour, Kurzweil explained that he envisages a society where the majority of people would receive Universal Basic Income by 2030. That is, they would survive by eating something like Soylent or insect protein. Presumably, their life would fit the definition of “bare life” proposed by Giorgio Agamben: an existence reduced to its most basic, biological form, stripped of political or social significance.
But as the quality of their lives decreases, so will the quality and value of the data they produce for free.
AI evangelists claim that artificial intelligence will act as a transformative, almost divine force to solve humanity’s problems, ushering in an era of prosperity and transcendence. In fact, if the current trajectory is any indication of future developments, AI is more likely to entrench a hyper-capitalist dystopia than to build a post-capitalist utopia.
The idea that machines could replace human labour, the capitalists’ wet dream, is neither new nor original. It has been with us since the beginning of the First Industrial Revolution. Its proponents forget that a greater use of robots and AI would result in a decline in the rate of profit at the level of the whole economy if the majority of the population lives hand to mouth. By focusing on individual capitalists who gain a competitive advantage by increasing productivity, they fail to see the whole picture. A typical example of missing the forest for the trees.
AI and digital platforms are controlled by a handful of tech companies, whose owners dominate the ranking of global billionaires. Obviously, we can’t trust those who have a vested interest in pushing AI down our throats to prioritize the public good. They spend millions to downplay the risks and thwart any attempt to introduce effective regulations.
In a world shaped by a powerful tech oligarchy, which gives off a strong dystopic vibe, the lines between corporate power, state influence, and cutting-edge technology blur into indistinction.
Throw in geopolitical competition and a bleak global economic picture, and governments are all too willing to join the AI race, which is now one with the arms race. AI’s military applications include Intelligence, Surveillance and Reconnaissance (ISR) analytics, networked drones, autonomous weapons, cybersecurity, logistics, decision support, training, electronic warfare, and psychological operations.
It’s not an exaggeration to say that AI lies at the core of state power projection in the 21st century. And U.S. multinationals hold imperial control over a great portion of the global digital ecosystem.
If we look at data as a commodity, we have to remember that cumulative ‘socially necessary labour time’, past and present labour, is embodied in it: human labour-power has been expended in its production. Even if data-producing online activity may not immediately appear as labour, the time you spend online is time subtracted from real-life experiences, family and social interaction. Your screen time may even cut into sleep.
It may involve work in the traditional sense, like creating content, coding, or engaging in paid tasks, or be more akin to leisure or consumption.
Ultimately, the value of the commodity, and by extension of the collective human labour it embodies, is relative to what is regarded as necessary by current society, by current human wants and needs.
Invisible, under-valued and abstract labour (such as unpaid digital labour) does not mean labour has disappeared. Labour is still essential in valorisation processes.
The reason why the value-creating ability of labour is conveniently disregarded and concealed has everything to do with capitalist relations of production and the extraction of surplus value.
When you have sex, you don’t produce data. You may conceive a child, but unless you engage in that activity because someone wants to buy the baby, no one would consider sexual intercourse and gestation as ‘labour’ and the baby as a ‘commodity’. But if you watch porn, if you have sex while wearing an electronic gadget, or if there are devices with sensors, processing ability, software, etc. in the vicinity, you do produce data.
But let’s return to the commodity, since it embodies the logic of capitalism and is the basic unit of economic exchange in a capitalist system.
What fundamentally complicates the concept of a commodity, and makes it mysterious, is the very notion that individual labour takes a social form. In its social form, what becomes most difficult is the quantification and assessment of that individual labour, the “expenditure of human brain, nerves, muscles, etc.”
Here the concept of commodity fetishism, although elaborated by Marx within a spatial, technological and organisational configuration of capitalism different from the contemporary one, remains one of its specific, constituent aspects.
Marx used this category to represent the specific form of sociality in an economy based on commodities and on market mediation. In this system, commodities obscure the relationships between individuals, and through a process of inversion, the commodity takes on an autonomous existence, detached from the human labour and interactions that produced it. A spectral objectivity.
Marx perceived and remarked upon the spectral objectivity of the commodity in the middle of the 19th century, at a time when the Industrial Revolution was ripping apart the social, economic, and cultural fabric of Victorian England, deepening inequality and intensifying exploitation and alienation. As people grappled with the sweeping changes and upheavals caused by technologies that had mechanized and concentrated production, and by technologies that seemed to abolish temporal and physical distance, such as photography, the telegraph and later the telephone and the radio, some turned to spiritualism. When all that was solid melted into air and all that was holy was profaned, beliefs in the paranormal, magical powers and the occult thrived. While workers’ lives were cut short in slums and factories, and labour-value was occulted in the commodity, communication with the dead through mediums and séances became a fashionable pastime among the bourgeoisie.
Mediums would use a variety of tricks to levitate tables, which would convince people that a ghostly presence was among them.
Troubled by guilt and haunted by fear, Victorian England became obsessed with spirits.
Marx taps into the bourgeoisie’s fears when he conjures the “spectre” of communism. By framing communism as a “spectre,” and arguing that capitalism is inherently unstable and haunted by its own contradictions, he amplifies bourgeois anxiety.
When he addresses the apparently magical quality of the commodity, described as commodity fetishism, he exposes it as a form of deception by drawing a comparison with ‘table-turning.’
“It is as clear as noon-day, that man, by his industry, changes the forms of the materials furnished by Nature, in such a way as to make them useful to him. The form of wood, for instance, is altered, by making a table out of it. Yet, for all that, the table continues to be that common, every-day thing, wood. But, so soon as it steps forth as a commodity, it is changed into something transcendent. It not only stands with its feet on the ground, but, in relation to all other commodities, it stands on its head, and evolves out of its wooden brain grotesque ideas, far more wonderful than ‘table-turning’ ever was.”(Capital, Chapter 1)
Today’s most sought-after and fetishized commodity, data, is doing an even better job at obscuring its origins under mathematical operations and statistical reasoning. And it is certainly spawning more grotesque and fanciful ideas than any commodity known to Marx.
In the spectral phantasmagoria of Artificial Intelligence, the “utopian” and the “cynical” have joined hands while we grapple with the consequences of ontological confusion, loss of trust and intensified exploitation. As commodity relations shape objectivity and subjectivity in capitalism, permeating every aspect of social life and moulding it in its own image, those who refuse to be de-skilled and reduced to “bare life” must organize and fight back.
(1) Judging by the chronological order of posts on Telegram and X/Twitter, I was the first person to question Jianwei Xun’s existence: https://t.me/LauraRuHK/9759; https://x.com/LauraRu852/status/1894404864895234268. I shared the information in my possession with several journalists; only one acknowledged my contribution: https://decrypt.co/314480/philosopher-trump-musk-fabricated-ai
(2) https://journals.sagepub.com/doi/full/10.1177/2053951718820549#bibr63-2053951718820549
(3) https://www.nature.com/articles/s41586-024-07566-y#Bib1