April 5, 2019
For Facebook, Twitter, Instagram, YouTube and the other social media giants, barely a day passes without a negative news story written about them.
This week, however, has been a bad week for the ages. The various social media platforms and their senior leaders have come under fire or made major PR missteps.
Who’s had the worst week of all? Judge for yourself – let’s take a look back at all the problems affecting the social media titans this week.
Monday, April 1
Otherwise known as April Fool’s Day, there was actually no bad news for the companies. Great work, guys!
Tuesday, April 2
Well, that didn’t last. All good things must come to an end, and having two ‘bad news’-free days in a row was never likely.
Twitter and YouTube don’t commit to banning white nationalist content
Yes, you read that correctly. Twitter and YouTube failed to confirm that they would ban white nationalist content.
Following an in-depth investigation by Motherboard that revealed the inconsistencies present in Facebook, Twitter, YouTube and other sites’ content moderation guidelines, Facebook swiftly promised to ban white nationalist and white separatist content.
Twitter and YouTube, however, stood firm. They effectively promised that white nationalists would still find a happy home on their platforms – although YouTube did say it would demonetize such videos and wouldn't feature them as recommendations.
Facebook and WhatsApp pulled into information war between India and Pakistan
Sadly, it wasn’t all good news for Facebook. As tensions escalate in the disputed region of Kashmir along the India-Pakistan border, Reuters published a startlingly frank look at the information war taking place on social media.
The story looked at Hanzala Tayyab, a Pakistani “social media campaigner” who, along with 300 accomplices, was trying to make anti-India stories go viral in the disputed region.
Facebook had, admittedly, removed more than 100 accounts engaged in the anti-India propaganda push. But Reuters made it clear that Zuckerberg and his team still had work to do.
The story also detailed how anti-Pakistan online groups were engaged in the information war, including one ominously named "Clean the Nation."
YouTube won’t remove Neo-Nazi content
A variation on the previous story (note: this is still the same day), but this time unique to YouTube.
Following the Christchurch terrorist attack that left 50 Muslims dead in New Zealand, many tech companies moved swiftly to remove far-right content. The horse might have bolted, but it’s better than doing nothing.
YouTube, though, decided to stand firm again, and kept overtly Neo-Nazi videos, podcasts, and audiobooks online. Even when Motherboard gave specific examples of the videos to YouTube, the video-sharing site refused to remove them, instead saying it had:
“Demonetized the videos, placed them behind a content warning, removed some features such as likes and comments, and removed them from recommendations—but ultimately decided to leave the videos online.”
Motherboard noted that:
“While YouTube took swift action against the Islamic State in the wake of a series of high profile beheadings and terrorist attacks, it has repeatedly failed to remove neo-Nazi and white nationalist content at scale.”
Wednesday, April 3
Wednesday was a double-whammy for Facebook.
Facebook app developers leaked millions of user records
According to researchers from security firm UpGuard, a number of third-party Facebook app developers had left treasure troves of Facebook user data completely exposed on servers rented from Amazon.
Two offenders drew UpGuard's ire in particular: Cultura Colectiva, a Mexico-based media company, and an app called 'At the Pool.' Cultura Colectiva left over 540 million records – detailing comments, likes, reactions, account names, Facebook IDs and more – open for public consumption.
At the Pool’s collection of data was smaller, but more serious. It contained backup columns for user likes, friends, music, movies, books, photos, events, groups, checkins, interests, and even plaintext passwords for 22,000 users.
When UpGuard contacted Cultura Colectiva, it received no response, and the company appeared to have taken no action. UpGuard also contacted Amazon Web Services, which hosted Cultura Colectiva's data, but the data was still unsecured almost a month later.
At the Pool’s collection, however, had been taken offline during the period that UpGuard was looking at it – possibly signalling that the company behind the app had been shut down.
Either way, it isn’t a good look for Facebook. It’s not directly the company’s fault, but it’s a stark reminder of the amount of data available to app developers. To be fair, when has that ever had any bad repercussions?
Facebook finally realises that asking new users for their private email passwords is wrong
The Daily Beast broke this story. It emerged that Facebook had been asking new users to hand over their private email passwords in return for speeding up the signup process.
When signing up, users would be greeted with a box asking for their email address and, after entering it, shown another box asking for their email password.
“That’s beyond sketchy,” security consultant Jake Williams told the Daily Beast.
Facebook told the Daily Beast that it didn’t store the passwords, but that it understood “the password verification option isn’t the best way to go about this, so we are going to stop offering it.”
Better luck next day, Facebook?
Thursday, April 4
A study shows Facebook’s ad delivery system is inherently discriminatory
According to The Verge, researchers from Northeastern University, the University of Southern California, and non-profit group Upturn produced a paper claiming to show that Facebook's advert algorithms discriminate by gender and race, "even when advertisers are trying to reach a broad audience."
The paper hasn’t been peer-reviewed yet. But, it would seem to confirm claims made by the US Department of Housing and Urban Development, which sued Facebook for breaking housing discrimination laws.
The paper's authors tested whether job listings or housing ads containing certain keywords or images would be automatically delivered more often to certain groups. The tests revealed what the authors call "previously unknown mechanisms" that could violate anti-discrimination laws.
Some examples given by the researchers include:
- Housing ads with images of white families being served to white people more than the same ad with a black family
- A lumber industry job advert was served to an audience that was 90% male
- Supermarket cashier job ads reached an audience that was 85% female
All of this would mean that Facebook’s advertising system is perpetuating biased thinking.
Tech companies could face jail time under new Australian law
Under a new law passed in Australia on Thursday, tech companies could receive heavy fines – and top executives could face time behind bars – if their platforms don't remove violent content.
The new law makes it an offence for companies operating in Australia to fail to remove, without delay, videos or photographs that show murder, torture, or rape.
If they fail to remove the content “expeditiously,” the companies could be hit with a fine of up to 10% of global turnover. It’s worse news, too, for executives, who could face up to three years in jail.
According to Australian Attorney-General Christian Porter, the laws are a “world first in terms of legislating the conduct of social media and online platforms.”
Mark Zuckerberg and Jack Dorsey in the Aussie slammer? Don’t rule it out just yet.
Friday, April 5
TikTok fails to remove online predators from its platform
TikTok, the video-sharing app that’s popular among children and teenagers, is failing to suspend accounts of people sending sexual messages to minors, according to a BBC investigation.
The BBC found hundreds of sexually explicit comments on videos posted by children as young as nine.
While the company was active in deleting the comments, it had failed to suspend the accounts posting them. This is despite TikTok’s rules against sexual messages directed at children.
TikTok isn't unaccustomed to issues surrounding children, either. The company behind the app was recently fined $5.7 million in the US for failing to protect children under thirteen and illegally collecting data about them.
In response to the investigation, TikTok said that child protection is an “industry-wide challenge” and that promoting a “safe and positive app environment” is the company’s top priority.
So there you have it – a horrible week in the world of social media. And there’s still a bit of Friday left to go…