Fighting the Fake News & Hate Crisis on Social Media
By Lord Dolar Popat & Rupa Ganatra-Popat
In the early years of the platforms, Mark Zuckerberg, Sheryl Sandberg and Jack Dorsey insisted that Facebook and Twitter were platforms rather than publishers, and were therefore not responsible for the content published on them.
In the current climate, where pandemic conspiracy theories are on the rise, hate speech has increased and racial injustice is being played out on the world stage, Facebook and Twitter’s billionaire founders have come under increasing pressure to moderate such content. Both Twitter and Facebook have been removing posts that promote violence, suspending accounts for repeated hate content, such as Katie Hopkins’s, and flagging manipulated photographs and videos. Facebook deleted a record number of hate speech posts, with 9.6 million taken down in the first quarter of this year alone. However, by the time such action is taken, much of the damage is usually done, as illustrated last month by the ‘racist baby’ video posted by President Trump, which had already been viewed over 20 million times on Twitter and over 4 million times on Facebook before it was removed.
The problem of fake news, misinformation and hate content is not new and should not take us by surprise. Since the democratisation of content creation that followed the launch of social media platforms, each one of us has become a content creator, publishing our own material, which in turn has created a system where the quality of published content can no longer be controlled. The last two decades’ technology revolution is now affecting businesses, political systems, family lives, society and individuals.
The problem has been exacerbated and amplified in recent months, perhaps because people have spent increasing amounts of time online during lockdown. From the well-known to the unknown, fake news, misinformation and hate rhetoric are causing harm to many individuals.
From conspiracy theorists spreading claims that Microsoft founder Bill Gates intends to implant microchips in us when the coronavirus vaccine is administered, to an army of internet trolls accusing Indonesian Sita Tyasutami of importing coronavirus to Indonesia by sleeping with foreigners, fake news and hate rhetoric are rapidly becoming the norm in our news feeds.
And the issue is not just a U.S. and Indonesian one. Morari Bapu, a well-known spiritual figure in India with millions of followers, has been targeted by conspiracy theorists and a well-funded, organised hate campaign that recently led to an attempted attack on his life. The fake news and misinformation around Morari Bapu have included paid trolls posting anywhere between 500 and 1,000 fake posts per day, many of which have promoted violence against him.
This came months after a post featuring yoga guru Baba Ramdev emerged on Facebook at the peak of the coronavirus pandemic, claiming that he had overdosed on cow urine to prevent coronavirus. It later emerged that the photograph was in fact taken in 2011, after Baba Ramdev had completed a nine-day fast, and that the post was fake.
Hate speech, disinformation and rumours have been responsible for acts of violence and deaths in India for some time. On 16th April 2020, two Hindu Sadhus and their driver were lynched in Gadchinchale Village in Palghar District, Maharashtra. The incident was fuelled by WhatsApp rumours about thieves operating in the area; the group of villagers had mistaken the three passengers for thieves and killed them. Several policemen who intervened were also attacked and injured.
A 2019 Microsoft study found that over 64 percent of Indians encounter fake news online, the highest rate reported amongst the 22 countries surveyed. A staggering number of edited images, manipulated videos and fake text messages are spreading through social media platforms and messaging services like WhatsApp, making it harder to distinguish misinformation from credible facts.
A 2020 University of Michigan study found that India’s misinformation problem has entered a troubling new era: misinformation has moved from fake facts that can quickly be disproved to cultural content that plays on emotion and identity, which is harder to verify and therefore even more likely to be believed and acted upon.
Social media platforms like Facebook and Twitter rely on a combination of artificial intelligence, user reporting and content moderators to enforce their rules on appropriate content. They have been known to partner with and acquire third-party fact-checkers, and they regularly remove content that breaches their policies. There are also tens of thousands of online volunteers globally fighting hate speech on Facebook. Facebook removed 3.2 billion fake accounts between April and September 2019, more than twice as many as in the same period the year before.
However, fake news on WhatsApp is perhaps the bigger problem to solve, given that the app has over 400 million users in India alone and its messages are encrypted, making it challenging to identify, report and remove content in the way other platforms can. In 2019, WhatsApp reported that it was deleting two million accounts per month as part of an effort to stop the app being used to spread fake news and misinformation. Yet despite these and other initiatives that Facebook, Twitter and WhatsApp have employed, it is still not enough, and there is still much to be done.
Earlier this year it was announced in India that Facebook, Twitter and Google would explore an industry-wide alliance to curb fake news through public awareness campaigns at schools, colleges and universities, workshops with content creators, and work with academia to find innovative solutions. Plans have also been discussed for digital training programmes to help over one million people in India spot false information, along with a service for checking the veracity of information circulated on WhatsApp.
There is certainly no quick fix to this problem. If social media platforms attempted to monitor private messages, it would be a step towards mass surveillance and would be rejected by their users. Free expression must be reconciled with protection from discrimination. It is by no means the first time in history that there has been a major shift in who creates information: both the printing press and, centuries later, the radio faced their own challenges to the status quo of their time. As in those instances, regulation of some sort is inevitable. Mark Zuckerberg himself has called for global regulations to establish baseline standards for content, electoral integrity, privacy and data.
The challenge of fake news and hate speech requires careful consideration and collaboration between government, academia, publishers, social media platforms and civil-rights groups. In the meantime, we must all contribute to tackling the issue. As individuals, we must ask ourselves whether something we read is true. We must question the articles and videos we are sent. If we see hate posts promoting violence, we should report them. If we receive forwarded posts on WhatsApp, we must think twice before passing them on.
Whilst a long-term solution is developed for a problem created as a by-product of the past decade’s technology revolution, each one of us has a responsibility to question what we read, post and share. Each one of us should take responsibility for the content we create, the content we consume and the content we forward on to others.
Rupa Ganatra-Popat is an entrepreneur, investor and board advisor in the UK.