Big Tech professes the lofty aim of building a better-informed and more equitable global society. In many cases, though not by choice, it may be promoting just the opposite.
In fact, disinformation, or fake news, has shaped public discourse and decision-making for centuries. But its magnitude and impact have become far-reaching in the digital age. Fake news has become a weapon for fear-mongers, mob-baiters and election-meddlers to widen social fissures, subvert democracy and prop up authoritarian regimes.
Fake news often takes the form of legitimate-looking news stories, tweets, Facebook or Instagram posts, advertisements and edited recordings distributed on social media. An emerging concern is deepfakes: video or audio clips in which computers can literally put words in someone’s mouth.
With their global reach, platforms such as Facebook and Twitter enable modern-day suppliers of disinformation to reach a potentially huge audience.
And the impact?
“Social media manipulation campaigns” by governments or political parties were found in 70 countries in 2019, up from 28 countries in 2017, according to researchers at the University of Oxford.
Before India’s 2019 elections, shadowy marketing groups connected to politicians used the WhatsApp messaging service to spread doctored stories and videos to denigrate opponents. In countries such as Sri Lanka and Malaysia, fake news on Facebook has become a battleground for religious communities.
In Myanmar, a study commissioned by Facebook blamed military officials for using fake news to whip up popular sentiment against the Rohingya minority.
Silicon Valley executives are waking up to the situation.
Under pressure from lawmakers and regulators, Facebook and Google have started requiring political ads in the US and Europe to disclose who is behind them. Google’s YouTube division adjusted its “up next” algorithms to limit recommendations for suspected fake or inflammatory videos, a move it had resisted for years.
WhatsApp now limits, to five, how many people or groups a message can be forwarded to. Its parent company, Facebook, said it spent 18 months preparing for India’s 2019 election.
As for governments, a Singapore law that took effect on October 2 allows for criminal penalties of up to 10 years in prison and a fine of up to S$1mn ($720,000) for anyone convicted of spreading online inaccuracies.
Malaysia enacted a similar law that the government, elected last year, is trying to repeal. Indonesia set up a 24-hour “war room” ahead of its 2019 elections to fight hoaxes and fake news. France has a new law that allows judges to determine what is fake news and order its removal during election campaigns.
In the US, efforts to crack down on disinformation can run up against the guarantee of free speech, although some platforms have begun to restrict postings by anti-vaccine activists, for example.
Across the world, technology and social media have become an inseparable part of everyday life.
Big Tech’s common refrain is that it is not supposed to police the Internet. But armed with the latest technology, these companies have a duty to determine what content can spread across their platforms and what cannot.
In any case, the liberal democratic character of the Internet should not end up destabilising societies.