
By Belinda Matore, an LLD candidate and project officer at the Centre for Human Rights, Faculty of Law, University of Pretoria

It is indisputable that the modern influencer stands as a central figure in contemporary digital life. These individuals, whether operating on TikTok, YouTube or Instagram, possess the ability to translate complex ideas into accessible stories, set cultural rhythms and command the attention of audiences who often hold a deep scepticism toward traditional institutions. We routinely measure their success by the breadth of their reach, their follower counts, their engagement rates, and their capacity to drive consumer behaviour. Yet by focusing solely on the extent of their influence, we overlook a far more critical question: who influences the influencer? The answer is not merely academic; it is profoundly relevant to the health of public discourse. The systemic forces shaping a creator’s message are, by extension, the forces shaping what millions of people see, share, believe and often fight about online. If we seek to educate the public about online harms and cultivate a more peaceful digital environment, we must first understand and, at times, strategically disrupt these powerful underlying forces.

At the top of this hierarchy sits the algorithm, the invisible and omnipresent boss. It is the engine of the modern platform, the complex mechanism that decides which content thrives and which content vanishes into digital obscurity. Creators learn quickly that the system is not neutral; it is engineered to maximise engagement. This often means that content which is emotional, confrontational or calibrated to provoke outrage is favoured. This is not accidental; it is a structural feature of engagement-driven systems. When the reward mechanism favours outrage, creators drift toward outrage. When it rewards simplified, easily digestible narratives or misinformation packaged as entertainment, the gravitational pull becomes difficult to resist. A creator attempting to deliver nuanced public education on online harm finds themselves competing with a system optimised to elevate the very content that exacerbates harm. The algorithm dictates the terms of visibility, making it the most dominant influence in the creator’s professional life.

Beyond this cold logic lies the warm lure of commercial influence. Sponsorships and brand partnerships are often the financial scaffolding that allows creators to operate at scale. Brands, understandably, expect sponsored content to align with their preferred values and corporate messaging. This expectation substantially narrows the terrain of what creators feel safe discussing. A creator whose income depends on brand revenue may avoid topics like xenophobia, election manipulation, systemic gender-based online violence or, crucially, platform accountability. The pressure to remain commercially appealing forces many into recurring ethical compromises. They must often choose between intellectual honesty on contentious societal issues and the stability offered by brand partnerships. These golden handcuffs do not merely shape output; they can silence necessary public debate.

They also introduce a credibility problem. Influencers who earn through promotion, whether commercial or political, operate in a space where their independence is constantly questioned. Audiences are increasingly alert to the dynamics of paid speech, and suspicion that a creator has been “bought” can permanently undermine the legitimacy of their message. This challenge becomes acute when influencers are asked to participate in public education or civic campaigns. If viewers perceive that a creator is advancing a political narrative, supporting a state initiative, or promoting a public-safety message for payment rather than conviction, the entire intervention risks backfiring. In short, influence monetised through commerce or politics is always fragile, because trust is the currency it most easily erodes.

Creators also exist within a continuous audience feedback loop. Their followers, though dispersed, operate as a powerful collective force that rewards what it enjoys and punishes what it rejects. Audiences shape an influencer’s persona by demanding a consistent identity. A creator known for humour or fashion may find that introducing serious public education causes sharp declines in engagement. A creator known for fiery political commentary may discover that attempts at neutrality or peacebuilding are met with hostility. Audience expectations set the boundaries of acceptable discourse. This feedback loop can amplify divisive content, but it can also, with careful cultivation, reward accuracy, compassion and harm reduction. The crowd can either corral creators into sensationalism or elevate them into responsible communicators.

In contrast to these high-volume forces, civil society organisations, researchers and fact-checkers operate as quieter but vital corrective influences. They do not offer financial reward or algorithmic visibility; instead, they provide epistemic support: reliable information, ethical framing and evidence-based critique. Their role, though often unseen, helps creators resist the pressure to sensationalise and equips them to communicate responsibly about complex issues such as misinformation, online violence and digital manipulation. These relationships can anchor creators during backlash and steer their content toward more responsible, well-informed terrain.

Finally, the state and the regulatory environment impose external guardrails. Regulatory requirements, ranging from transparency in political advertising to consumer protection and content moderation standards, shape the rules of online influence. Rights-based, well-designed regulation can advance accountability and protect vulnerable communities. Poorly structured or politically motivated regulation, however, risks chilling expression or being weaponised to suppress legitimate dissent. The influence of the state must therefore be transparent, proportionate and grounded in human rights, especially during elections, when influencers become pivotal vehicles for political mobilisation.

This layered ecosystem raises a provocative strategic possibility: what if digital rights advocates, researchers and civil society actors became influencers themselves? There is no structural barrier preventing academics, lawyers, journalists and digital rights experts from occupying the same digital spaces as lifestyle creators and gaming streamers. The difference lies in message and intent. These “Safety Influencers” could debunk harmful narratives in simple formats, explain algorithmic manipulation, model respectful digital engagement and help young people recognise online harms.

Yet this strategy brings its own vulnerabilities. Safety Influencers operate in a domain where trust is notoriously fragile and where evidence-based information is often perceived as political positioning. Their expertise may invite not admiration but targeted harassment, disinformation attacks or organised attempts to discredit their legitimacy. Unlike commercial influencers, they rarely possess moderation teams or sophisticated digital security infrastructures, making them more susceptible to intimidation. They also face the risk of being absorbed by the very engagement-driven dynamics they seek to challenge, as complex ideas are pressured into simplification for the sake of reach. And without institutional anchoring, the label “Safety Influencer” can be co-opted by individuals advancing ideological agendas, diluting public trust in the concept itself.

These risks do not diminish the value of bringing experts into the influencer ecosystem. They simply underscore that such a move must be intentional, well-supported and built on solid ethical and institutional foundations. When implemented with care, Safety Influencers can help reshape the landscape of digital influence, transforming it from a space dominated by commercial pressures and algorithmic incentives into one where informed, principled voices can meaningfully contribute to peace, safety and democratic resilience.


