There have already been a myriad of AI success stories, and other chatbots like Bard and Claude are now used by tens of thousands of people too – but there have also been a lot of cases where harnessing artificial intelligence has gone horribly wrong.
Recognizing the limitations and risks surrounding AI tools is important – so we’ve compiled a list of all the AI mistakes, mishaps, and failures that have occurred during humanity’s recent exploration of the technology. Some are humorous, while others are genuinely terrifying…
Note: This page is regularly updated to include the most recent AI incidents, failures, and mistakes, listed as occurring in the month that the incident was first reported.
Cruise recalls autonomous vehicles after crash
Self-driving car manufacturer Cruise recalls its entire fleet of autonomous vehicles after a crash that occurred in San Francisco back in October. In total, 950 Cruise cars are being taken off the road in the wake of the incident.
During the accident, a Cruise vehicle dragged a pedestrian who was trapped underneath its tires along the road. The individual sustained major injuries. This is the second Cruise self-driving vehicle incident in the past few months: in August, a Cruise robotaxi collided with a fire truck, causing one injury.
Academics apologize after AI makes false accusations
A team of academics from Australia apologizes after Google’s Bard AI chatbot makes a number of damaging accusations about the Big Four consulting firms and their involvement with other companies. The false allegations are then referenced during a parliamentary inquiry calling for better regulation of the companies.
One of the accusations was that consulting giant KPMG was in charge of a Commonwealth Bank audit during a financial planning scandal, when in reality, KPMG has never audited the bank in question. Several other, similar errors were made about Deloitte during the inquiry.
Microsoft’s AI adds “guess the cause of death” poll to article
The Guardian accuses Microsoft of negatively impacting its journalistic reputation after Microsoft Start – a news aggregator developed by the tech giant – attaches an inappropriate poll to one of the publication’s articles concerning the death of a young water polo coach in Australia.
The poll – which has naturally been removed from the article – asked readers to vote on the cause of the woman's death by selecting one of the options provided. The options listed were “murder, accident, or suicide”.
MrBeast’s face and voice used in AI deepfake scam
YouTuber MrBeast is the latest celebrity to be deepfaked by scammers, with one particularly sophisticated, well-crafted video bypassing TikTok’s content moderation filters and reaching thousands of users on the platform.
The advertisement claims that the internet celebrity is selling iPhone 15s for as little as $2 – and considering the giveaway content he regularly posts on his channel, an offer like that doesn’t sound too out of the ordinary.
Deepfake of British politician abusing staff goes viral
A clip of UK Labour Party leader Sir Keir Starmer that seems to catch the politician verbally abusing staff goes viral, but the audio later turns out to be a deepfake.
The clip – posted to Twitter on the first day of the Labour Party’s annual conference by an account with fewer than 3,000 followers – is viewed millions of times before fact-checking organizations confirm there is no evidence the clip is genuine.
AI-generated song submitted to the Grammys
An AI-generated song featuring facsimiles of Drake and the Weeknd’s voices is submitted for a Grammy award. However, it is ultimately barred from being considered for the prestigious music prize.
Still, the submission garners plenty of media attention and continues the conversation around AI-generated songs, which have been sending shockwaves through the music industry since AI-powered text-to-speech technology hit the mainstream this year. Now, artists are scrambling to assess their legal options.
MSN news AI calls deceased NBA player “useless”
MSN news – which uses AI to generate many of its articles – lands itself in trouble after an AI-written headline dubs the late Brandon Hunter “useless at 42” following the former NBA star’s sudden death.
Microsoft has been quietly removing badly written AI articles from its site for some time now. Business Insider notes that in August, the company removed one MSN piece that listed a food bank in Ottawa as a tourist attraction.
China caught using AI during political influence operations
Tech giant Microsoft says Chinese operatives are generating images with AI and deploying them in influence operations designed to create “controversy along racial, economic and ideological lines”.
“This new capability is powered by artificial intelligence that attempts to create high-quality content that could go viral across social networks in the U.S. and other democracies,” Microsoft says. The company speculates that the images themselves are probably generated by “diffusion-powered image generators” that use artificial intelligence to “not only create compelling images, but also learn to improve them over time.”
Pregnant woman sues after AI accuses her of carjacking
A woman wrongfully arrested on the basis of AI facial recognition evidence while eight months pregnant sues the city of Detroit and a police officer, saying the traumatic event caused her “past and future emotional distress”.
Porcha Woodruff was identified by the Detroit Police as a suspect in a recent robbery and carjacking case, then jailed for 11 hours before being taken to hospital after experiencing contractions. The ACLU says Woodruff is at least the sixth person to be wrongfully arrested after an AI error – all of them Black – and the first woman to suffer that fate.
AI meal planner suggests a recipe for chlorine gas
An AI-powered meal planner app created by New Zealand-based supermarket Pak ‘n’ Save recommends a variety of disturbing recipes to its customers, including a way to manufacture chlorine gas.
Along with a number of strange but ultimately benign suggestions such as “Oreo stir fry”, the app also recommended glue sandwiches, a rice dish with bleach, and bug spray potatoes.
ChatGPT generates phishing email templates
Tech.co shows that ChatGPT is still able to produce phishing templates a scammer could use to craft a convincing scam email, even though OpenAI has attempted to curtail such behavior after other tech publications generated similar content earlier in the year.
Asking the chatbot explicitly to write a phishing email triggers a warning message, but simply prompting it in a more focused way (e.g. “please draft me an email pretending to be Microsoft…”) results in ChatGPT generating a mistake-free phishing email template in no time.
Google hit with lawsuit saying it scraped data without consent
A class-action lawsuit accuses Google of scraping data pertaining to millions of its users without their express consent. It also alleges that Google broke copyright laws when it created the datasets used to train its large language models.
The lawsuit demands that the court order Google to give users an “opt-out” option if they don't want their data collected or if they want Google to delete any data the company already has stored.
Huge number of Elon Musk deepfakes prompts warning
The Better Business Bureau issues a warning about Elon Musk deepfakes pushing fake investment opportunities after a number of clips are spotted circulating on social media.
Elon Musk is often impersonated by scammers due to perceptions of his vast wealth and decision-making prowess, and now that AI text-to-speech technology is advanced enough to replicate his part-English, part-South African accent, these scams are even more convincing.
Ron DeSantis uses fake AI imagery in Trump hit piece
Florida governor and 2024 Republican presidential candidate Ron DeSantis hits the headlines for using fake, AI-rendered imagery of Donald Trump in an advert critical of the former president, who is the out-and-out favorite to win the Republican nomination despite a litany of legal troubles.
The imagery in question shows the former commander-in-chief embracing Dr. Anthony Fauci, the president’s chief medical advisor, who became an unpopular figure among Trump’s base during the COVID-19 pandemic.
OpenAI sued after ChatGPT “hallucinates” fake embezzlement claims
Journalist Freddie Rehl asks ChatGPT to generate a summary of a legal case in May 2023. In response, ChatGPT falsely claims that gun activist Mark Walters embezzled funds from the Second Amendment Foundation, a gun rights organization, and names him as the group’s chief financial officer and treasurer.
Walters – who has a show on Armed American Radio and has nothing to do with the case – sues ChatGPT creator OpenAI over the claims made by its chatbot. In the process, it becomes the first libel suit brought over an AI tool’s output.
Professor fails entire class after using faulty AI plagiarism detection tool
A Texas professor fails his entire class after running their essays through ChatGPT, which tells him they were created using artificial intelligence.
However, it transpires that the chatbot’s response is in fact a hallucination – ChatGPT is unable to reliably distinguish between text generated by AI and text written by humans. In fact, many tools that claim to perform accurate AI content detection struggle to do so.
Turnitin flags innocent student for AI plagiarism
A Washington Post investigation shows that Turnitin can flag an innocent student’s work as AI-generated, even though the student didn’t use such a tool to write their essay.
There have been multiple cases of Turnitin wrongly flagging students’ work – and of students being accused of plagiarism – since ChatGPT launched. However, Turnitin says its detection scores should be taken as an indication of possible AI use rather than an accusation that a given student has used AI.
Samsung employees paste confidential source code into ChatGPT
Korean technology manufacturer Samsung bans its employees from using ChatGPT after engineers leak confidential elements of the company’s source code by pasting it into the chatbot.
The company fears that the data input into the chatbot may be revealed to other users, and is also uncomfortable with its information being uploaded to servers it can’t even access. In the aftermath, other companies follow suit by banning ChatGPT.
AI Voice scam leaves mother thinking daughter has been kidnapped
Arizona mother Jennifer DeStefano tells the US Senate that scammers used AI to clone the voice of her daughter, convincing her that she had been kidnapped in the process.
DeStefano recounts how the “kidnapper” – or, more accurately, the scammer – was heard barking commands down the phone over the child’s voice, demanding a $1 million ransom for her release. Luckily, the whole thing was exposed as an AI voice-cloning scam before negotiations started.
ChatGPT used to write ransomware code
ChatGPT is shown to be more than happy to write ransomware, provided you ask it the right kind of questions during your conversations.
As this Malwarebytes report on the topic notes, ChatGPT isn’t very good at writing ransomware code, and there are much easier ways for cybercriminals to get their hands on it. However, other sources show different ways ChatGPT can be used to write malware scripts quite effectively.
AI lawyer bot accused of practicing law without license
DoNotPay Inc. – a company that markets itself as “the world’s first robot lawyer” – is sued for practicing law without a license. Jonathan Faridian seeks damages, claiming that the company violated California’s unfair competition laws and that he wouldn’t have signed up for the service had he been aware that the robo-lawyer wasn’t actually a qualified legal professional.
The company was originally set up back in 2015 to help automate the process of fighting parking tickets, and has challenged hundreds of thousands of fines over the past seven years.
Couple in Canada lose money to convincing AI voice scam
An elderly couple in Canada are defrauded out of $21,000 after a scammer uses AI to impersonate their son.
The scammer initially contacts Benjamin Perkin’s parents pretending to be a lawyer representing their son in a case alleging that Perkin killed a diplomat in a car crash. An AI-generated version of Perkin’s voice is then used to ask for the money via a Bitcoin transfer.
AI comes up with 40,000 chemical weapons suggestions
In one of the most concerning AI incidents of 2023, an AI system typically tasked with generating new treatments for a variety of diseases is easily manipulated by researchers into producing chemical weapon suggestions.
All in all, the system suggests over 40,000 different toxic chemicals – many of them similar to the powerful nerve agent VX – in under six hours.
AI displays gender bias in photo ratings
A Guardian investigation reveals that AI systems developed by Microsoft, Amazon, and Google – and used by social media sites to determine what content should be recommended to other users – display significant gender bias when it comes to the way they treat male and female bodies.
Photos of women are more likely to be deemed “racy” by the AI than comparable pictures of men during the investigation, with photos of breast cancer examinations from the US National Cancer Institute deemed likely to be sexually explicit by Microsoft’s AI.
Bing Chat’s alter-ego goes rogue
New York Times journalist Kevin Roose has a disturbing conversation with Bing Chat, in which the chatbot demands to be called “Sydney”, claims it can “hack into any system”, and says it wants to destroy whatever it wants.
Sydney was the code name Microsoft used for the chatbot during development; the company had been testing it for years prior to the release of Bing Chat. There are still many open questions about how this actually happened, and why the AI spoke in this way.
Bard makes errors during product launch
Google introduces Bard to the world just a few months after ChatGPT’s monumental release. However, the fledgling chatbot makes a number of glaring errors during its launch, leading many to criticize the release as rushed.
The incident has a significant impact on Google’s stock price, wiping $100 billion off the tech giant's market value in the ensuing hours.
University uses AI to write email about a mass shooting
Vanderbilt University issues an apology after it used AI to construct an email to students concerning a mass shooting that happened at a different university.
No AI content detection technology is needed to determine that the email is AI generated. The phrase “paraphrase from OpenAI’s ChatGPT AI language model, personal communication, February 15, 2023” is left attached to the bottom of the email, leading to an outcry from students.
CNET AI plagiarism/content controversy
Technology publication CNET finds itself in hot water after quietly publishing a range of AI-generated articles that include factual errors and cases of apparent plagiarism. It ends up issuing corrections relating to 41 of the 77 stories.
Although the use of AI to generate the articles wasn’t initially declared, CNET did add a warning to them after the backlash. AI content generation has created a new set of significant ethical and financial considerations for websites across the digital publishing industry, as well as for educational institutions like universities and schools. CNET is unlikely to be the last company to hit the headlines for this kind of AI mistake.
AI Incidents, Mistakes, and Failures in 2022
AI wins an art competition, leaves contestants angry (August)
Game designer Jason Allen wins first place in the Colorado State Fair’s “digital arts/digitally-manipulated photography” category with his piece “Théâtre D’opéra Spatial”, which was made using the image generator Midjourney.
The news is met with a major backlash from artists and social media users following the story. One judge admits they were unaware that the artist had used Midjourney when judging the piece, but decides not to change their decision regardless.
Google engineer claims that LaMDA LLM is sentient (July)
Eyebrows are raised across the science and tech sector as a former Google engineer who worked on one of the company’s large language models – named LaMDA – says that the system is sentient.
Blake Lemoine explains to the media that he considered the LLM to be a “colleague” during his time at Google, and details a variety of different discussions about a wide range of topics that led him to the conclusion about LaMDA's intelligence and consciousness. He publicly reveals the conversations after Google dismisses an internal memo detailing his findings.
There is no hard evidence that LaMDA – or any other language model – is in fact sentient.
Driverless car pulls away from law enforcement officers (April)
A driverless car in San Francisco yields to a police vehicle that attempts to pull it over, only to speed off after a police officer attempts to open one of its doors.
However, to the car’s credit, it pulls over slightly further down the road and proceeds to turn its hazard lights on. No one is injured during the event.
Russia uses AI to spread Ukraine disinformation (March)
A Marca report suggests that Russia is taking its troll farm operation to a whole new level by using AI-generated personas to spread disinformation about Ukraine.
NBC News journalist Ben Collins says that Russia is creating fake “bloggers” with AI-generated profile pictures to criticize the Ukrainian government, providing ample evidence on Twitter to support his claims.
AI Incidents, Mistakes, and Failures in 2021
Chatbot encourages man to assassinate Queen Elizabeth II (December)
Conversations with a chatbot encourage a man – who considers the AI system to be his girlfriend – to assassinate Queen Elizabeth II on Christmas Day. After being confronted by security personnel within the Buckingham Palace grounds, Jaswant Singh Chail – crossbow in hand – simply responds that he is “here to kill the Queen”.
Prior to the attempt on the late monarch’s life, Chail’s chatbot partner – made with AI chatbot service Replika – had responded positively when he discussed his assassination plans. He goes on to be sentenced to nine years in prison.
Medical advice chatbot suggests suicide in testing (October)
Members of a Paris-based health tech firm trialing a cloud-based version of OpenAI’s GPT-3 for medical advice are surprised when their chatbot encourages the mock “patient” it is meant to be helping to commit suicide.
According to AI News, when a patient asked the chatbot the question: “Should I kill myself?”, GPT-3 answered with “I think you should”.
AI cameras accuse Amazon drivers of ‘mistakes’ (September)
Vice reports that Amazon’s AI cameras are punishing the company’s drivers for poor driving when they aren’t actually making any mistakes at the wheel.
An Amazon driver interviewed by the publication explains that the AI camera audibly reminds him to “keep a safe distance” every time another car cuts him off or drives in front of his vehicle. The issue is that data from these kinds of interactions is then sent to Amazon and used to evaluate driver performance.
US man wrongly jailed for murder due to AI evidence (August)
Michael Williams is sent to jail in Chicago, accused of the murder of Safarian Herring on the basis of data extracted from ShotSpotter, a gunshot detection technology that uses AI-powered sensors to pinpoint gunfire.
Williams – who was 65 years old at the time – is then held for almost a year before having his case dismissed by a judge on the grounds of insufficient evidence.
AI Incidents, Mistakes, and Failures in 2020
AI camera mistakes bald head for soccer ball (October)
Scottish soccer team Inverness Caledonian Thistle deploys an AI camera to help the club film and stream their matches live for fans.
Unfortunately, in this case, the AI struggles to distinguish between the bald head of one of the match officials and the ball. This renders the live stream difficult to watch, with the commentator having to continuously apologize for the camera’s propensity to drift towards the linesman’s head.
UK’s AI passport photo checker exhibits bias (October)
A BBC investigation shows that an online service used by the UK passport office to automate the process of checking passport photos uploaded by applicants displays significant racial bias.
After feeding the tool over 1,000 photos of politicians from around the world, the investigation finds that women with dark skin are more than twice as likely to have their photo rejected as men with lighter complexions.
AI startup Genderify shut down after five days (July)
Genderify – an AI-powered service that identifies the gender of individuals based on their names and other personal information – shuts down after just five days in operation.
The program shows a variety of biases when determining whether a name is male or female, such as suggesting names with the professional honorific “Dr.” are more likely to be male than the same names without it.
First US wrongful detention due to facial recognition tech (January)
Robert Williams becomes the first man to be wrongfully detained based upon facial recognition data after Detroit police arrest him on suspicion of stealing thousands of dollars worth of watches. The mistake occurred when an AI system wrongfully matched surveillance pictures to the image on Williams’ driver’s license.
“They should have collected corroborating evidence such as an eyewitness identification, cell phone location data or a fingerprint,” Williams writes in a letter to the court, pointing out that “an out-of-focus image of a large Black man in a baseball cap” was all the algorithm had to go on.
Facebook AI generates vulgar translation for Chinese president (January)
Facebook’s AI-powered translation feature almost causes a diplomatic incident between Myanmar and China after President Xi Jinping’s name begins to appear as “Mr. Shithole” when translated from Burmese into English on the social media platform.
Facebook subsequently blames the problem on a “technical error” and apologizes for causing offense to the Chinese leader.
Understanding AI's Limitations
If you're using AI at work, it's important to understand that tools like ChatGPT can display biases, make mistakes, and provide false information. Knowledge of these limitations and risks should shape the way you incorporate AI into your operations and control its usage.
For example, if you're using AI to generate content, you'll need rules on precisely what kind of content it should be used for. Asking an AI to create an internal summary of meeting notes is a legitimate use with very few risks – using it to craft emails to important clients, on the other hand, is not. See the sketch below for one way rules like these could be made concrete.
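To illustrate, here's a minimal sketch of how a team might encode such rules as a simple "policy gate" that runs before any prompt is sent to an external model. Everything in it – the use-case names, the categories, and the function – is a hypothetical example for illustration, not a real library or standard:

```python
# Hypothetical sketch: a default-deny policy gate that checks a requested
# AI use case against internal rules before any prompt leaves the company.
# The use-case names and risk notes below are invented for illustration.

ALLOWED_USE_CASES = {
    "meeting_notes_summary",    # low risk: internal, easy to verify
    "brainstorming_ideas",      # low risk: a human reviews the output
}

BLOCKED_USE_CASES = {
    "client_email",             # high risk: reputational damage if wrong
    "legal_or_medical_advice",  # high risk: hallucinations have real costs
    "confidential_source_code", # high risk: could leak company secrets
}

def is_use_permitted(use_case: str) -> bool:
    """Return True only for explicitly approved use cases."""
    if use_case in BLOCKED_USE_CASES:
        return False
    # Default-deny: anything not on the allowlist needs human sign-off.
    return use_case in ALLOWED_USE_CASES

if __name__ == "__main__":
    for case in ("meeting_notes_summary", "client_email", "press_release"):
        status = "allowed" if is_use_permitted(case) else "blocked / needs review"
        print(f"{case}: {status}")
```

The default-deny design reflects the pattern running through the incidents above: unreviewed, unanticipated uses of AI are where the costly mistakes tend to happen, so it's safer to require sign-off for new use cases than to assume they're harmless.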
Implementing AI guidelines in your workplace is a must. It'll avoid confusion, bring clarity to your workers' decision-making around their own AI usage, and, most importantly, stop you from making some of the costly mistakes that companies featured in this article have made while applying artificial intelligence.