Why You Should Probably Stop Getting ChatGPT to Roast Your Instagram

ChatGPT may be getting funnier, but OpenAI's data privacy problem is no laughing matter – should you trust it with your data?

Ever fancied being humbled by an AI chatbot? Apparently you're not alone: almost 400k people have taken part in a social media trend where they get ChatGPT to publicly “roast” their Instagram feed.

The results are savage, with the AI tool calling out everything from cringey gym selfies to questionable wardrobe choices – providing users with endless fodder for their Instagram stories, and leaving even more people wondering…when did ChatGPT get so funny?

Yet, with ChatGPT’s viral trend prompting waves of users to hand over personal information, the challenge is also reigniting old conversations around OpenAI’s questionable data collection practices, and lackluster approach to security. Read on to learn more about Instagram’s meanest viral trend, and the risks associated with sharing data with ChatGPT.

Users Are Asking ChatGPT to Insult Their Instagram Profile

If you’ve been on Instagram recently, you’ve probably been inundated with paragraph-long AI-generated roasts tearing down the feeds of your fellow users.

The “roast my feed” trend has been sweeping through social media like wildfire. At the time of writing, 395k people have shared their results on their story thanks to a shareable template that lets you take part in a couple of taps – and its virality is hardly surprising.

ChatGPT doesn’t pull any punches. Its AI roasts, which are surprisingly funny, aren’t afraid to get personal, with the chatbot scrutinizing every part of a user’s Instagram feed, from their hobbies and clothing choices to their travel content.


It’s the same work of AI genius that congratulated Tech.co on being “the internet’s version of a lukewarm cup of coffee – completely forgettable and just as disappointing”. Ouch – we’ll pretend not to be offended by that.

Despite flourishing on Instagram, the challenge’s origins can be traced back to X – specifically, a viral post where the Greek tennis star Stefanos Tsitsipas asks the popular AI chatbot to roast him. The post, which describes Tsitsipas as a “guy who spends more time philosophizing on Twitter than actually serving up a solid second serve”, received over 451k views in a day and spawned a number of copycat trends across other social media platforms.

When Did ChatGPT Become Funny?

ChatGPT’s reputation as a “Swiss army knife for AI” is foundational to its popularity. When it comes to its uses, the limit really is your imagination (plus its 4,096-token context limit), with the chatbot being used to compose essays, code computer programs, brainstorm business ideas, and more.

However, despite its versatility, ChatGPT hasn’t always been known for being funny, and its lack of wit has actually been backed up by research. Last year, German researchers Sophie Jentzsch and Kristian Kersting decided to put generative AI’s comedic potential to the test by asking OpenAI’s ChatGPT-3.5 to tell them a joke 1,008 times in a row.

The chatbot offered them variations on one of 25 joke setups over 90% of the time, suggesting that ChatGPT’s comedic repertoire was likely learned during its training phases, not generated spontaneously. The study also found that ChatGPT’s jokes didn’t “withstand much logical scrutiny”, which, in non-academic language, means they didn’t make much sense.

However, before stand-up comedians can breathe a sigh of relief, it’s worth noting that these results are far from conclusive. In fact, more recent research suggests that the chatbot is actually funnier than the majority of humans, with 70% of participants preferring wisecracks from ChatGPT over those of their own species, and the researchers concluding that the chatbot can “produce written humor at a quality that exceeds laypeople’s abilities”.

Truth be told, humor is subjective, so objectively analyzing a chatbot’s ‘funniness’ is never going to be easy. Yet, as OpenAI works hard to improve ChatGPT’s capabilities, the AI is only going to get better at mastering traditionally human skills like cracking jokes and lateral thinking. But with ChatGPT spearheading yet another viral Instagram moment, should we be trusting the chatbot with our data at all?

The Darker Side of ChatGPT’s Viral “Roast Me” Trend

As ChatGPT’s ‘roast me’ trend gains momentum, thousands of users have willingly handed over personal data without much thought. However, while ChatGPT amuses Instagram users with AI-generated zingers, its creator OpenAI continues to attract backlash over its approach to data security.

OpenAI admits that it saves all of the conversations you have with the chatbot for training purposes, but when it comes to explaining how exactly that data is used, the company is notoriously hush-hush. OpenAI’s privacy policy also states that it may share your personal information with third parties, sparking concerns about the commodification of user data.

What’s more, research by the University of North Carolina has found that handing over sensitive information could be irreversible. Due to the size of the large language model (LLM) powering ChatGPT, the study concluded that it’s extremely hard to permanently delete your data from the AI tool compared with other chatbots, and that ‘deleted’ information can often still be retrieved.

But what does this actually mean? Well, unless you intentionally opt out of sharing your information, the data you enter into ChatGPT will be used to train future models.

Even if you wholeheartedly trust OpenAI with your data, the company’s questionable security record raises concerns about sensitive information falling into the wrong hands. In 2023, the Silicon Valley company fell victim to a cyberattack after a hacker gained access to its internal messaging systems. While no user data was compromised, company insiders revealed that the incident was avoidable and that OpenAI wasn’t doing enough to protect confidential data from cybercriminals and foreign adversaries like China.

To protect yourself, we recommend only sharing information with ChatGPT that you don’t mind potentially being shared online. When it comes to roasting your Instagram, we advise against sharing screenshots that expose personal information like names and addresses, clear images of your face, and examples of your creative work.

Learn more about what not to share with ChatGPT here.


Written by:
Isobel O'Sullivan (BSc) is a senior writer at Tech.co with over four years of experience covering business and technology news. Since studying Digital Anthropology at University College London (UCL), she’s been a regular contributor to Market Finance’s blog and has also worked as a freelance tech researcher. Isobel’s always up to date with the topics in employment and data security and has a specialist focus on POS and VoIP systems.