Mathematics: an elegant solution to online hate
New research shows how mathematics can help to understand and tackle hate networks using a policy matrix.
Hatred is a natural human emotion. It is a means of distinguishing when something opposes a core value, need, or belief held by ourselves or the group we most identify with. Hatred is an indicator that we need to investigate internally (what is happening inside me?) and externally (what is happening in my environment?) before deciding how to respond. Hate is not an excuse to harm another person physically, emotionally, or psychologically. Hate may be an indicator that we need to change, rather than saying that the world must bend to our emotional whims.
Far too many people are developing in ways that don’t encourage emotional regulation. Families may be poorly equipped to teach this valuable skill; our education system is more concerned with crafting effective economic cogs; and our society is concerned with fabricating unrealistic and inauthentic online profiles. We must show how truly fabulous we are because of how we look and what we do. But who we are on the inside, and how we regulate and take responsibility for ourselves, is given little attention: navel-gazing is the term often used to dismiss reflective individuals.
We are witnessing the effects of this internal neglect in the prevalence of online hate. By the time individuals have developed a taste for hate and have found others who reinforce and normalise their views, the hatred is often so entrenched that it cannot easily be shifted.
A new study, Hidden resilience and adaptive dynamics of the global online hate ecology, reveals the complex networks of hate that feed and spread across social media platforms. The researchers from George Washington University and the University of Miami have taken an approach that utilises mathematics, physics, and biology to analyse and address what they see as networks within networks of resilient hate clusters. Their research is motivated by the devastating real-world impact of these online hate networks: radicalisation, hate crime, youth suicides, and mass shootings, to name a few.
The research reveals how small networks of hate exist within larger ones. This explains why large online hate groups are so hard to get rid of – they simply reform from the small clusters. These clusters of groups were analysed on Facebook and VKontakte without identifying individuals to honour privacy laws.
It appears that small communities of hate are forming within larger ones and are using mutual links to sites that reinforce hate. The clusters ultimately link up to form complex and resilient networks. The researchers’ mathematical model predicts that policing just one platform is doomed to fail, and that a policy matrix – one that takes into account current laws – is necessary in order to tackle the complex networks across all platforms.
From a biological perspective, the hate clusters were likened to a disease that can be treated. One form of treatment – and one of the features in the policy matrix – is to introduce a form of hate immune system. The researchers propose introducing anti-hate users to infiltrate and break up hate clusters. This could potentially prevent larger hate groups from forming, as the research revealed how small clusters can fuse like two atomic nuclei to form a larger group.
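The fusion dynamic described above can be sketched as a toy simulation. This is not the researchers’ actual model; the pairwise-merging rule, cluster counts, and step count here are illustrative assumptions, chosen only to show how many small clusters can rapidly coalesce into a few large ones.

```python
import random

def simulate_fusion(n_clusters=100, steps=50, seed=1):
    """Toy model: clusters repeatedly fuse pairwise, like two small
    hate clusters merging into a larger one. Parameters are illustrative."""
    random.seed(seed)
    sizes = [1] * n_clusters  # start with many single-member clusters
    for _ in range(steps):
        # Pick two distinct clusters at random and merge them.
        a, b = random.sample(range(len(sizes)), 2)
        merged = sizes[a] + sizes[b]
        sizes = [s for i, s in enumerate(sizes) if i not in (a, b)]
        sizes.append(merged)
    return sizes

sizes = simulate_fusion()
print(f"{len(sizes)} clusters remain; largest has {max(sizes)} members")
```

Each merge step removes one cluster from the population, so 100 single-member clusters become 50 clusters after 50 merges, with membership concentrating in the largest few – the "reforming from small clusters" behaviour the study describes.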
One novel part of the policy matrix involves turning the haters on themselves. The researchers found that some of the users in the hate groups had opposing views. They suggest that platform administrators create artificial users to play on internal conflicts. The mathematical model predicts that this will eliminate large hate clusters with these kinds of internal conflicts.
In another aspect, the researchers propose banning randomly selected individuals within small hate clusters or, if possible, banning entire small clusters. They suggest banning individuals as a way around the protection of free speech, arguing that removing individuals and small clusters weakens the whole network by preventing the formation of larger groups. This approach avoids completely silencing pre-existing larger groups and still protects free speech.
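This random-banning idea can also be sketched with a small graph simulation. Again, this is a hedged illustration rather than the study’s method: the graph, the "small cluster" threshold, and the fraction banned are all made-up parameters, used only to show how pruning small components leaves large ones untouched.

```python
import random
from collections import deque

def components(adj):
    """Connected components of an undirected graph (adjacency dict of sets)."""
    seen, comps = set(), []
    for start in adj:
        if start in seen:
            continue
        comp, queue = set(), deque([start])
        while queue:
            node = queue.popleft()
            if node in comp:
                continue
            comp.add(node)
            queue.extend(adj[node] - comp)
        seen |= comp
        comps.append(comp)
    return comps

def ban_random_from_small(adj, small=5, fraction=0.5, seed=0):
    """Ban a random fraction of users in each small cluster, leaving
    large clusters untouched (toy version of the proposal)."""
    random.seed(seed)
    banned = set()
    for comp in components(adj):
        if len(comp) <= small:
            k = max(1, int(len(comp) * fraction))
            banned |= set(random.sample(sorted(comp), k))
    return {n: nbrs - banned for n, nbrs in adj.items() if n not in banned}

# Two small triangles plus one larger cluster (illustrative data).
adj = {
    0: {1, 2}, 1: {0, 2}, 2: {0, 1},                    # small cluster
    3: {4, 5}, 4: {3, 5}, 5: {3, 4},                    # small cluster
    6: {7, 8, 9}, 7: {6, 8}, 8: {6, 7, 9}, 9: {6, 8},   # larger cluster
}
pruned = ban_random_from_small(adj, small=3)
print(f"{len(pruned)} users remain in {len(components(pruned))} clusters")
```

Only members of the two small triangles are banned; the four-member cluster survives intact, mirroring the argument that targeting small clusters starves larger groups of recruits without silencing existing large communities outright.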
The free speech argument is a tricky one. Free speech is essential to freedom and democracy and it does deserve to be protected. Even hate groups are right to protest being silenced, as a silenced voice does not take away a person’s hate or their drive to express it. But is free speech truly free in an environment of hate and hostility, where people are shouted down and intimidated? Is free speech best exercised in an environment of mixed views, where fully formed and well-considered arguments are placed in an open forum for people to analyse and discuss in an objective and respectful way? We are all guilty of feeling hatred, but it doesn’t pay to hate those that hate.
Solving online hate is a monumental but necessary challenge. It may be that we are trying to address a problem that has been allowed to fester for too long. But eradicating online hate could prevent vulnerable young people from internalising messages of hate, thereby ending this ugly cycle.