Google and Facebook moved Tuesday to cut off advertising revenue to fake news sites, after a wave of criticism over the role misinformation played in the US presidential election.
The move by the two tech giants aims to choke off funds to an industry fueled by bogus, often sensational “news” circulating online and seen as a potential influence on public opinion.
A Google statement to AFP said new policies “will start prohibiting Google ads from being placed on misrepresentative content, just as we disallow misrepresentation in our ads policies.”
The shift will mean Google restricts ads “on pages that misrepresent, misstate, or conceal information about the publisher, the publisher’s content, or the primary purpose of the web property,” the statement said.
Google chief executive Sundar Pichai said the company receives billions of queries daily and admitted errors had been made.
“There have been a couple of incidences where it has been pointed out and we didn’t get it right.
“And so it is a learning moment for us and we will definitely work to fix it,” he said in a BBC interview.
Pichai said there should be “no situation where fake news gets distributed” and committed to making improvements.
“I don’t think we should debate it as much as work hard to make sure we drive news to its more trusted sources, have more fact checking and make our algorithms work better, absolutely,” he said.
On Monday, internet users searching on Google were delivered a bogus report saying Republican Donald Trump had won the popular vote in addition to the electoral college.
The numbers on a blog called 70News — contradicting official results tallied so far by states — said Trump received 62.9 million votes to 62.2 million for Hillary Clinton.
The blog urged supporters of a petition — which called on electoral college members to switch their votes to reflect the popular will — to abandon their effort.
Facebook is implementing a similar policy, a spokesman said.
“In accordance with the Audience Network Policy, we do not integrate or display ads in apps or sites containing content that is illegal, misleading or deceptive, which includes fake news,” a Facebook statement said.
“While implied, we have updated the policy to explicitly clarify that this applies to fake news.”
One report said Facebook had developed a tool to weed out fake news but did not deploy it before the US election, fearing a backlash from conservatives after a controversy over its handling of “trending topics.” Facebook denied the report.
Some critics have gone so far as to blame Facebook for enabling Trump’s victory, saying it did not do enough to curb bogus news that appeared to help rally his supporters.
Stories that went viral in the run-up to the vote included such headlines as “Hillary Clinton Calling for Civil War If Trump Is Elected” and “Pope Francis Shocks World, Endorses Donald Trump for President.”
The prevalence of fake news has prompted calls for Facebook to consider itself a “media” company rather than a neutral platform, a move which would require it to make editorial judgments on articles.
Facebook executives have repeatedly rejected this idea, but since the election have pledged to work harder to filter out hoaxes and misinformation.
In a weekend post, Facebook chief Mark Zuckerberg dismissed the notion that fake news helped sway the election, and said that “more than 99 percent of what people see is authentic.”
Still, he said that “we don’t want any hoaxes on Facebook” and pledged to do more to curb fake news without censoring content.
“Identifying the ‘truth’ is complicated,” he said.
“While some hoaxes can be completely debunked, a greater amount of content, including from mainstream sources, often gets the basic idea right but some details wrong or omitted.”
Ken Paulson, a former USA Today editor who is dean at the media school of Middle Tennessee State University, said Facebook and other platforms should not be required to filter out news but that it would be good for business.
“My hunch is that most of Facebook’s loyal customers would welcome a cleaning up of the town square,” he said.