San Francisco, Sep 23 : After months of intensive talks with major advertisers, Facebook, YouTube and Twitter have agreed to adopt a common set of definitions for hate speech and other harmful content, the Global Alliance for Responsible Media (GARM) said on Wednesday.
GARM is a cross-industry initiative founded and led by the World Federation of Advertisers (WFA) and supported by other trade bodies, including ANA, ISBA and the 4A’s.
The move comes after more than 200 brands, including Starbucks and Levi's, recently pulled their advertising from Facebook, and after the #StopHateforProfit campaign gained momentum when celebrities such as Kim Kardashian West froze their social media accounts for a day.
As a result of the talks between the advertisers and the key global platforms, four key areas for action, designed to boost consumer and advertiser safety, have been identified.
“The issue of harmful content online has become one of the challenges of our generation. As funders of the online ecosystem, advertisers have a critical role to play in driving positive change and we are pleased to have reached agreement with the platforms on an action plan and timeline in order to make the necessary improvements,” WFA CEO Stephan Loerke said in a statement.
“A safer social media environment will provide huge benefits not just for advertisers and society but also to the platforms themselves,” Loerke said.
WFA believes the standards should apply to all media, not just the digital platforms, given the increased polarisation of content regardless of channel.
As such, it encourages members to apply the same adjacency criteria to all their media spend decisions, irrespective of channel.
Today, advertising definitions of harmful content vary by platform, which makes it hard for brand owners to make informed decisions about where their ads are placed and to promote transparency and accountability industry-wide.
GARM has been working on common definitions for harmful content since November; these have been developed to add more depth and breadth on specific types of harm, such as hate speech, acts of aggression and bullying.
Between September and November, work will continue on developing a set of harmonised metrics and reporting formats, for approval and adoption in 2021, it said.
The platforms also agreed to be audited for brand safety, or to have a plan in place for audits, by year end, WFA said.
Moreover, advertisers need visibility and control so that their advertising does not appear adjacent to harmful or unsuitable content, and so that they can take corrective action quickly when necessary.
GARM is working to define adjacency with each platform, and then develop standards that allow for a safe experience for consumers and brands.
“This uncommon collaboration, brought together by the Global Alliance for Responsible Media, has aligned the industry on the brand safety floor and suitability framework, giving us all a unified language to move forward on the fight against hate online,” said Carolyn Everson, Vice President Global Marketing Solutions, Facebook.