December 12, 2015
Typically, Facebook relies on its 1.5 billion users to report offensive content on the social network. Last week, however, the social networking giant went looking for it on its own. On Thursday, Facebook removed a profile page used by one of the two suspects in the mass shooting in San Bernardino, California, which took 14 lives. According to a spokesman, the page was in direct violation of Facebook's community standards, which, among other things, prohibit users from posting photos, videos and other items that glorify violence or support terrorism.
At the time of the shooting, the female suspect, Tashfeen Malik, had published a post in which, according to various reports, she pledged her loyalty to ISIS. Facebook declined to disclose the contents of the post, how it had found the profile, or how its authenticity had been determined. The move highlights the growing pressure on social media companies such as Facebook, Twitter and Alphabet Inc.'s YouTube to monitor, and sometimes remove, violent propaganda and content from terror groups. It remains unclear how closely these companies work with governments and how frequently they identify and remove such content.
Some experts said the companies are in a tricky position where terrorist content is concerned, but also warned that it is dangerous to hand the power over speech to such companies, given their undemocratic nature. All three companies employ technology to scan for images of child sexual exploitation. The system was developed with the help of Hany Farid, chair of Dartmouth College's computer-science department, who said the technology could be expanded to monitor other questionable content as well.
Nonetheless, this is a huge challenge for a number of reasons. The child-exploitation scan checks images against a database of known images maintained by the National Center for Missing & Exploited Children. Because no comparable database of terror-related images exists, it would be difficult to apply the same approach. Furthermore, news content can also contain disturbing images, and social media firms don't want to become news censors. Mark Zuckerberg, the CEO of Facebook Inc., spoke at a town hall meeting in September and cited the example of the photograph of Aylan Kurdi.
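The database-matching approach described above can be sketched as follows. This is a simplified illustration, not Facebook's actual system: production scanners use robust perceptual hashes that survive resizing and re-encoding, whereas this sketch substitutes a plain cryptographic hash, and the database of known images is a hypothetical in-memory set.

```python
import hashlib

# Hypothetical database of fingerprints of known prohibited images.
# A real system uses perceptual hashes that tolerate re-encoding;
# a cryptographic hash stands in here purely for illustration.
KNOWN_HASHES = {
    hashlib.sha256(b"known-bad-image-bytes").hexdigest(),
}

def fingerprint(image_bytes: bytes) -> str:
    """Compute a fingerprint for an uploaded image."""
    return hashlib.sha256(image_bytes).hexdigest()

def is_known_prohibited(image_bytes: bytes) -> bool:
    """Flag an upload whose fingerprint matches the database."""
    return fingerprint(image_bytes) in KNOWN_HASHES

print(is_known_prohibited(b"known-bad-image-bytes"))  # True
print(is_known_prohibited(b"some-harmless-photo"))    # False
```

The key limitation the article raises follows directly from this design: the scan can only flag content that already has a fingerprint in the database, which is why the absence of a terror-image database makes the approach hard to extend.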
The photograph of the 3-year-old refugee boy, who died as his family fled Syria and whose body washed up on a Turkish shore, had been widely shared. Mr. Zuckerberg said a computer algorithm would have deemed the photograph inappropriate, but that there was no point in censoring it. Social media firms thus face tough judgment calls. In 2014, YouTube removed videos of ISIS beheading two American journalists, and Twitter removed the images when they were reported.