New Delhi: Facing the uphill task of tackling election-related interference as India gets ready for polls next year, Facebook on Saturday said it is establishing a task force of “hundreds of people” in the country to prevent bad actors from abusing its platform.
“With the 2019 elections coming, we are pulling together a group of specialists to work together with political parties,” Richard Allan, Vice President of Policy for Europe, the Middle East and Africa (EMEA) told the media here.
“The team will have security specialists and content specialists, among others, who will try to understand all the possible forms of election-related abuse in India,” added Allan during a workshop on Facebook’s “community standards” in the capital.
Allan explained that while disinformation linked to real-world violence is handled by the team that enforces Facebook’s community standards, other forms of disinformation are dealt with by a separate team of fact-checkers.
“The challenge for the task force in India would be to distinguish between real political news and political propaganda,” Allan noted, adding that the team would be based in the country and would consist of both existing staff already working on these issues within the company and new recruits.
Facebook came under intense scrutiny from policymakers in the US after allegations surfaced that Russia-linked accounts had used the social networking platform to spread divisive messages during the 2016 presidential election.
Since then, it has stepped up efforts to check abuse of its platform by bringing more transparency to the conduct of its business, including its advertising policies.
Echoing Facebook CEO Mark Zuckerberg’s earlier comments on elections across the world, Allan said the social media platform “wants to help countries around the world, including India, to conduct free and fair elections”.
In April, Zuckerberg said Facebook would ensure that its platform is not misused to influence elections in India and elsewhere.
“Our goals are to understand Facebook’s impact on upcoming elections — like Brazil, India, Mexico and the US midterms — and to inform our future product and policy decisions,” he told US lawmakers during a hearing.
Facebook uses a combination of technology, including Machine Learning (ML) and Artificial Intelligence (AI), and reports from its community to identify violating content on the platform.
These reports are reviewed by members of its “Community Operations” team, which handles content in over 50 languages, including 12 Indian languages.
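For readers curious how such a pipeline might combine automated scores with community reports, the following is a minimal, hypothetical Python sketch; the thresholds, field names and scoring logic are assumptions made for illustration and do not describe Facebook’s actual systems.

```python
# Toy moderation queue: combine an ML classifier's score with user reports
# to decide which posts are routed to human reviewers.
# All names, thresholds and logic here are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Post:
    post_id: str
    text: str
    report_count: int = 0          # reports filed by the community
    classifier_score: float = 0.0  # model's estimated probability of a violation


def needs_human_review(post: Post,
                       score_threshold: float = 0.8,
                       report_threshold: int = 3) -> bool:
    """Flag a post if the model is confident it violates policy
    or enough users have reported it."""
    return (post.classifier_score >= score_threshold
            or post.report_count >= report_threshold)


def build_review_queue(posts: list[Post]) -> list[Post]:
    # Likeliest violations first, so reviewers see them sooner.
    flagged = [p for p in posts if needs_human_review(p)]
    return sorted(flagged,
                  key=lambda p: (p.classifier_score, p.report_count),
                  reverse=True)


if __name__ == "__main__":
    sample = [
        Post("a1", "benign holiday photo", report_count=0, classifier_score=0.05),
        Post("b2", "suspected hoax about a rival party", report_count=5, classifier_score=0.40),
        Post("c3", "violent threat", report_count=1, classifier_score=0.95),
    ]
    for p in build_review_queue(sample):
        print(p.post_id, p.classifier_score, p.report_count)
```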
“By the end of 2018, we will have 20,000 people working on these issues, double the number we had at the same time last year,” he said.
“We are also working to enhance the work we do to proactively detect violating content,” Allan said.
(IANS)