Facebook condemned Saturday's deadly London attacks and pledged to "aggressively remove terrorist content" from its platform, even as British Prime Minister Theresa May raised the specter of new regulations to restrict the dissemination of extremist content.
May said on Sunday that Britain must work with allied democratic governments to tighten Internet regulation and deny terrorists a tool for planning attacks and spreading extremism.
"We cannot allow this ideology the safe space it needs to breed, yet that is precisely what the internet and the big companies that provide internet-based services provide," she said in a statement outside Downing Street.
May's comments, amplified by Facebook's own response to the London attack, added new fuel to the debate about balancing free speech in an age of terrorism.
The social network giant wants to "provide a service where people feel safe," Simon Milner, director of policy at Facebook, said in a statement. He added: "That means we do not allow groups or people that engage in terrorist activity, or posts that express support for terrorism. We want Facebook to be a hostile environment for terrorists."
Facebook has faced severe criticism following a recent string of violent acts broadcast on its social network. In addition to allegations that it has allowed fake news to spread, some accuse the company of failing to tackle terrorist recruitment and hate propaganda.
Facebook CEO Mark Zuckerberg announced last month that the company would add another 3,000 employees to scrub harmful content from the network.
"Using a combination of technology and human review, we work aggressively to remove terrorist content from our platform as soon as we become aware of it," Facebook's Milner said.
"We have long collaborated with policymakers civil society, and others in the tech industry, and we are committed to continuing this important work."
However, Hany Farid, chair of the computer science department at Dartmouth, told the radio program On the Media last month that social networks, which have a financial incentive to make content as shareable as possible, have made it much easier to "like" content than to report it.
Farid helped Microsoft develop PhotoDNA, a technology that helps combat child pornography on the Internet. But when he offered Facebook a similar technology, called eGLYPH, that detects online terrorist activity, the company rebuffed him, he told On the Media.
"I have to say it's really frustrating because every time we see horrific things on Facebook or on YouTube or on Twitter we get the standard press release from the companies saying, 'We take online safety very seriously. There is no room on our networks for this type of material,'" said Farid, who also serves as a senior adviser to the nonprofit Counter Extremism Project.
"And yet the companies continue to drag their feet. They continue to ignore technology that could be used and doesn't affect their business model in any significant way."
Facebook had no comment on the issue.
(Source: www.cnbc.com)