Clearview AI is a secretive startup that has developed a strikingly accurate facial recognition database, built entirely from images surreptitiously scraped from social media users.
Currently, the company’s app is available only to law enforcement, and it has already led to several arrests; indeed, more than 600 law enforcement agencies in the US are using it. Some have been mulling over the app’s potential to be used in conjunction with surveillance cameras and even augmented reality glasses, something the app’s code alludes to. However, Clearview might yet end up in legal trouble of its own.
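The reporting doesn’t detail Clearview’s internals, but the general technique, matching a probe photo against precomputed face embeddings, is well understood. Below is a minimal, illustrative sketch using the open-source face_recognition library; the file paths, the tiny placeholder database, and the 0.6 distance threshold are assumptions for illustration, not details of Clearview’s actual system.

```python
# Illustrative only: match a probe photo against a small database of
# face encodings. This is NOT Clearview's pipeline; paths are placeholders.
import face_recognition

# Hypothetical database: encodings precomputed from previously collected photos.
db_paths = ["scraped/alice.jpg", "scraped/bob.jpg"]  # placeholder files
db_encodings = []
for path in db_paths:
    image = face_recognition.load_image_file(path)
    encodings = face_recognition.face_encodings(image)
    if encodings:  # keep the first detected face, if any
        db_encodings.append(encodings[0])

# A probe image, e.g. one uploaded by an investigator.
probe = face_recognition.load_image_file("probe.jpg")
probe_encoding = face_recognition.face_encodings(probe)[0]

# Compare the probe against every stored encoding; a smaller distance
# means a more likely match. 0.6 is the library's conventional cutoff.
distances = face_recognition.face_distance(db_encodings, probe_encoding)
for path, dist in zip(db_paths, distances):
    match = "(likely match)" if dist < 0.6 else ""
    print(f"{path}: distance {dist:.2f} {match}")
```

At real-world scale, a system like this would index billions of encodings for fast nearest-neighbor lookup rather than scanning a list, but the matching principle is the same.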
Many social media platforms have policies that prohibit data from being scraped from their sites and used in facial recognition programs. So, is the end already nigh for Clearview? And, is there anything you can do to prevent this kind of social media data-scraping?
Clearview AI and Social Media
According to documents seen by the New York Times, Clearview scrapes photos from across a range of social media sites. These include the big hitters Facebook, Instagram, YouTube, and Twitter. However, these sites have already said that they are investigating Clearview for breach of their terms of use.
For instance, Twitter’s terms of service explicitly prohibit content posted to its site from being scraped or used in facial recognition databases.
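Terms of service are a legal control rather than a technical one. Sites also publish machine-readable crawling rules in a robots.txt file, which well-behaved crawlers are expected to honor, and which a determined scraper can simply ignore. As a rough illustration, here’s how a crawler might check those rules with Python’s standard library; the crawler name here is hypothetical.

```python
# Illustrative only: check whether a site's robots.txt permits a given
# crawler to fetch a URL. robots.txt is advisory; a scraper that ignores
# it (or the site's terms of service) faces legal, not technical, barriers.
from urllib.robotparser import RobotFileParser

parser = RobotFileParser()
parser.set_url("https://twitter.com/robots.txt")
parser.read()  # downloads and parses the rules

# "ExampleImageBot" is a made-up user agent; real scrapers often
# spoof ordinary browser user-agent strings instead.
allowed = parser.can_fetch("ExampleImageBot", "https://twitter.com/some_user/photo")
print("Allowed by robots.txt:", allowed)
```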
“A lot of people are doing it… Facebook knows.” – Hoan Ton-That, Clearview founder
A Facebook spokesperson commented that the company “will take appropriate action if we find they are violating our rules.”
However, there might be further complications. Peter Thiel, the well-known arch-conservative venture capitalist who co-founded PayPal, invested some $200,000 in Clearview. But, Thiel also sits on Facebook’s board. A spokesperson for Thiel declined to comment to the Times.
Clearview takes the data scraped from social media sites, as well as potentially sensitive photos uploaded by law enforcement agencies, and moves all of it to its own servers, something many law enforcement agencies apparently “didn’t realize.” If something were to go wrong, this would be quite the data breach: a huge amount of personally identifiable data could fall into the wrong hands (or already has, depending on your point of view).
What can you do to stay safe?
We’ll be honest — it’s likely that Clearview already has images of you stored on its servers. According to the company, its database holds some three billion images.
However, if social media companies really put their foot down, there’s a chance that Clearview might have to purge its servers and start again. This would be a great result for ordinary citizens, although it is certainly a long way off at the moment.
To avoid being snared by something like this again, we’d recommend keeping your various social media accounts as private as possible. On Instagram and Twitter, this is fairly easy to do: head to the privacy sub-section of the settings menu. On Facebook, you’ll need to go to your profile and change the settings one by one. With YouTube, you’ll need to head into your main Google Account settings and make the changes from there.
How can companies like Clearview do this?
Largely, because they can. There’s a distinct lack of regulation around facial recognition at any level of government. Sure, San Francisco has banned its police department from using facial recognition tech, for example. But, without a consistent approach at the local, state, federal, or international level, these kinds of protections are limited in their efficacy.
Another problem is that much of the general public simply has too much to think about before getting around to considering the moral implications of facial recognition. In between paying bills, going to work, looking after loved ones, arguing over politics, and worrying about climate change, there’s precious little time to ponder how one tech geek and a rich investor have given police the tools to constantly surveil what you’re up to.
However, when the implications of such technology are raised with regular people, they are often met with strong opposition. Unfortunately, it can be difficult to turn this anger into effective legislation, as cutting police powers is always a difficult sell. And, in our seemingly dangerous world, the idea that you need only worry if you’ve done something wrong seems more persuasive than ever.