Deceptive online content is big business. The digital advertising market is now worth €625 billion, and the business model behind it is simple: more clicks, views and engagement mean more money from advertisers. Incendiary, shocking content – true or not – is an easy way to grab our attention, which means advertisers can end up funding fake news and hate speech.
This is not an accident – social media platforms know they profit from the spread of disinformation, while advertisers turn a blind eye.
Disinformation aims to confuse, paralyse and polarise society for political, military or commercial ends, through orchestrated campaigns that strategically spread deceptive or manipulative media content. On social media, its tools include bots, deepfakes, fake news and conspiracy theories.
Up to now, most disinformation research has focused on how the system is abused by national interests and authoritarian leaders. My research shows that disinformation is, in fact, a likely and predictable outcome of this market system rather than an unforeseen consequence.
A business model that rewards engagement
Social media platforms were not designed to convey information but to entertain. They were built to identify, say, the most amusing cat videos and recommend them to the people most likely to share them. Marketing researchers have since found that content evoking strong positive emotions such as awe, or negative emotions such as anger and anxiety, is more likely to go viral. Platforms have taken note and built this into their business models.
The business model of social media works as follows. Platforms provide us with free “infotainment” (information and entertainment) and do everything in their power to keep us engaged. While we consume the content, the platform harvests our data, which is processed into predictive analytics – the information used to target adverts. Advertisers pay for these analytics to power their targeted campaigns.
Most platforms have a financial incentive to maximise online engagement, which means any content that attracts clicks, likes and comments is highly valued, factual or not. Influencers who share incendiary, controversial content can become wealthy as a result, often leading others to replicate their style. It is therefore unsurprising that many creators publish confrontational, simplistic and emotionally charged content built on us-against-them narratives.
Stoking social anxieties and fuelling tribalism is also how conspiracy theories circulate.
Digital marketing and disinformation
Digital marketing is a commercial practice by which firms create value over the internet. It includes search optimisation, content marketing, influencers, pay-per-click adverts, affiliate programs, and ordinary advertising. Brands hire digital marketing agencies and firms known as ad tech, which operate the software that makes adverts follow us around the internet.
Ad tech firms operate without accountability or oversight, so when a brand pays an ad tech firm to place its ads, it also outsources its responsibility. A brand might therefore unknowingly end up funding disinformation about major global events such as the Russia-Ukraine and Israel-Palestine wars. Even after being presented with evidence, brands remain silent.
Influencers play an especially important role in this cutthroat digital market. Driven by the promise of advertising money, they seek engagement at any cost, even going as far as promoting content that undermines democratic institutions. If an influencer is demonetised or banned for publishing hate speech, it makes no difference to the platform, which keeps the advertising revenue either way.
Democratic governance of digital platforms
Most brands do not want to be associated with hate speech and bot farms, yet they are. It is easy to look the other way in such a technically complicated market, but marketers have a responsibility: brands become complicit by remaining silent.
Policymakers and activists are pushing to reform digital platforms to counter disinformation. Most efforts focus on content moderation and fact-checking, while little attention is paid to reforming the digital advertising market itself.
Platforms and ad tech firms must work to reform a market that profits from disinformation, though it appears they are often unwilling or unable to lead the way.
Brand managers can use their budgets to hold platforms accountable, especially if they act in large numbers, as demonstrated by the recent X (formerly known as Twitter) ad boycott following Elon Musk’s antisemitic remarks. If all else fails, policymakers must step in to ensure that the profits of these tech giants do not come at the cost of our democracy.
The Conversation