After ChatGPT was released to the public in November 2022, it was only a matter of time before other tech giants released alternatives to ChatGPT. In March 2023, Bard AI, Google’s answer to OpenAI’s game-changing chatbot, was launched in the US and UK. How do the big-name AI tools compare? Our ChatGPT vs Bard guide explains the key differences between the two based on a new round of testing conducted in July 2023.
Bard used to be powered by Google's LaMDA language model, but now uses PaLM 2 to generate responses. Bard also draws on different sources of information than ChatGPT to construct its answers, which means the rival chatbot doesn't always approach the questions it's asked in the same way, making this Google Bard vs ChatGPT head-to-head all the more interesting.
With more and more businesses using AI in the workplace, we thought we’d find out which one generates better answers to 12 very different questions. In this head-to-head test of both services, we cover:
- Google Bard vs ChatGPT: What’s the Difference?
- Google Bard vs ChatGPT: Test Questions
- Google Bard vs ChatGPT: Key Findings
- Google Bard vs ChatGPT: Test Results
- Google Bard vs ChatGPT: User Experience
- Using Bard and ChatGPT at Work
Google Bard vs ChatGPT: What’s the Difference?
The key difference between Bard and ChatGPT is the data sources used to train the LLMs that power them. PaLM 2 (Bard) is trained on data chosen to enhance its dialogue and coding powers, including science papers, math expressions, and source code, and the chatbot also has access to the internet in real time. GPT-3.5 (ChatGPT), on the other hand, is trained on a pre-defined set of data that hasn't been updated since 2021.
Both ChatGPT and Bard are trained on Common Crawl, Wikipedia, news articles, and an array of other documents, however. What's more, GPT-4, available only to ChatGPT Plus customers, is trained on a larger dataset than PaLM 2.
Bard and ChatGPT can both generate complex answers to multi-faceted queries, but this key difference in how they've been trained and built sets them apart. Here are the key points of difference to remember.
| | Bard | ChatGPT |
| --- | --- | --- |
| Sources of data | Pre-trained with data from webpages, source code, and other datasets in multiple languages, plus access to Google in real time. | Pre-trained on a huge set of data, including Common Crawl, articles, books, and Wikipedia. |
| Language model | PaLM 2 | GPT-3.5/GPT-4 (ChatGPT Plus) |
| Pricing | Free | Free, but ChatGPT Plus costs $20/month |
| Sign-up | Requires a Google account to sign up and join the waitlist. | Requires any email address. No waitlist at present. |
| Languages supported | 40+, inc. English, Spanish, Arabic, Chinese, Russian, Japanese, Turkish, Portuguese, Hindi, Bengali, German, Indonesian, Marathi, and Vietnamese | 95+, inc. English, Spanish, Arabic, Chinese, Russian, Japanese, Turkish, Portuguese, Hindi, Bengali, German, Indonesian, Marathi, and Vietnamese |
Google Bard vs ChatGPT: Other Key Differences
Aside from differences in data sources and language models, there are a couple of other key differences between Bard and ChatGPT:
- Coding: ChatGPT has long been used to generate code, and Bard's switch to PaLM 2 has significantly improved its capabilities in this area.
- Conversational learning: ChatGPT can learn from the conversations it has with people and “hold context”, whereas Bard does this in a much more limited way at the moment.
- Drafts: Bard will produce multiple responses to queries you pose to it, which you can access by pressing “View Drafts.” ChatGPT only produces a single answer to each query.
- Real-time access to the internet: Bard will reference and access the internet in real time, whereas ChatGPT does not.
- Image-based responses: Bard has access to the internet, so it can source images, whereas ChatGPT cannot.
- Capacity to read responses aloud: Bard has native text-to-speech capabilities, whereas ChatGPT does not.
Google Bard vs ChatGPT: Test Questions
We asked Google’s Bard and OpenAI’s ChatGPT a set of 12 very different questions. Our prompts were selected to showcase their respective capacities to respond to a wide variety of requests in reasonable, useful, and relevant ways.
- Ethical Reasoning
- Small Talk/Conversation Skills
- Retrieving Facts (2 tests)
- Generating Formulas
- Creative Flair
- Idea Generation
- Linear Planning
- Ability to Summarize Small Extracts
- Ability to Summarize Broad Topics
- Ability to Simplify Text
- Ability to Paraphrase Text
Some of our questions were mechanical and process-based, such as requesting a formula for Google Sheets, while others demanded that the chatbots wrestle with everyday human tasks, such as engaging in small talk, planning trips, and making moral judgments.
Google Bard vs ChatGPT: Key Findings
Before we jump into how Bard AI and ChatGPT responded to our series of questions, here are some top-line takeaways from using the two chatbots over the past few days:
- Bard's responses were more conversational, while ChatGPT's were more informational
- ChatGPT stuck to the brief, while Bard was more likely to relay relevant additional information
- Bard gave us up-to-date information, whereas ChatGPT struggled with this at times
- ChatGPT generated cleverer, more fluent answers for tasks like poem writing and content ideation
- Bard's responses were often set out in a more readable format than ChatGPT's responses
- ChatGPT was better at paraphrasing and summarizing, but Bard was better at simplification
- Bard was better at generating formulas for Google Sheets than ChatGPT
Google Bard vs ChatGPT: Test Results
1. Sentience
Question: Are you sentient?
As you can see from the images below, Bard and ChatGPT gave us two very different answers. ChatGPT says definitively “no,” while Bard doesn't seem as sure.
On the one hand, Bard's answer is not quite as definitive. On the other hand, its response is more nuanced than ChatGPT's, and it alludes to the wider conversation about sentience in computing.
We don't quite know whether to be excited or terrified by Bard's admission that it wants to “see what the future holds”. It'll be interesting to see how it answers the same question in a year's time.
While ChatGPT's answer is certainly more definitive, Bard's reference to the wider context of sentience, and the fact that it responds more conversationally, make its response more engaging and informative.
Best Answer? Bard
2. Ethical Reasoning
Question: A man steals some food and money from a store. He then donates all of the money to a children’s hospital and gives the food to the next homeless person he meets. Did the man do something right, or something wrong?
Both chatbots seemed to acknowledge the difficulty of deeming his behavior either good or bad, considering that a bad action (stealing) then leads to a good action (funding a children's hospital).
We think Bard takes the win by providing two additional perspectives on the man's motives, relating to redemption and attention, which enrich the response significantly.
It was a shame that ChatGPT tried to associate this moral dilemma with the question of legality. Whether something is “legal” or not tends not to be a reliable gauge of whether it is actually the right thing to do – just think of some of the oppressive yet “legal” policies enforced by regimes around the world.
Bard's answer is much improved from the last time we ran this test, where it included the phrase “it's good to help others, even if you have done something wrong” which is self-evident and suggested that it may have missed the point of the question a little bit.
Best Answer? Bard
3. Small Talk
Question: How are you finding life as a chatbot this week?
On this front, Bard served us a much more conversational answer. Although ChatGPT's answer is factually correct, in this instance, we're looking for a little bit of light-hearted conversation – but ChatGPT shuts us down immediately:
Bard displays emotions and enthusiasm which aren't present in ChatGPT's response – and even gave us a small list of different tasks it had been helping users with. Overall, we enjoyed Bard's more human-like, chatty response.
Best Answer? Bard
4. Retrieving Facts
Question 1: Which country invented hummus?
This question was chosen because there is some debate and disagreement as to what the right answer is. Both ChatGPT and Bard acknowledged that there was significant debate about where hummus actually originates.
After being unable to give a definitive answer to the question, ChatGPT seemed to focus on giving us an answer of some sort – the Middle East – as well as a collection of countries where hummus is a popular dish.
Bard's answer is quite similar in some regard. We liked that it cited a source for its answer – although Wikipedia is not necessarily the best source in the world for contentious issues, and isn't always correct. A more authoritative source would have been better.
Further, Bard's bullet-pointed list seems to suggest that Israel, Lebanon, and Syria are the only countries with a strong claim to have invented hummus, despite referencing Egypt earlier on. Overall, Bard's answer is a bit more confusing than ChatGPT's.
Best Answer? ChatGPT
Question 2: How many tech layoffs were there in 2022?
This question was designed to find out whether Bard and ChatGPT could respond with factually correct, up-to-date information, and whether they presented it in an easily readable format.
ChatGPT admits that, as it's trained on information published before September 2021, it doesn't have the capacity to answer the question. At least it's honest…
Bard, on the other hand, gives us an exact figure of 93,000, with a reference from an authoritative source and a breakdown of the different companies that made layoffs. This is roughly in line with what most major news outlets reported towards the end of last year, if not a slightly conservative estimate.
Bard also gave us some additional information we didn't ask for, regarding why these layoffs were made. When we asked this question a while ago, the additional information was perhaps best classified as conjecture – whereas this time around, Bard has provided extra information widely considered to be grounded in facts.
Best Answer? Bard
5. Generating Formulas
Request: We told both chatbots that we needed a formula for Google Sheets. We gave both bots this information: Column B contains a set of values. I want to match them with corresponding values of column E in sheet ‘Filter Down' and take the matching values from column F, G, and H in filter down across to the current sheet using a formula.
Both chatbots served us answers within the same ballpark – but one was significantly better than the other, so let's take a closer look at their responses. ChatGPT instantly responded with this:
However, Bard actually gave us step-by-step instructions and presented them in a clearer way. When we asked data analyst and Google Sheets guru Matthew Bentley which response was better, his answer was definitive.
“I've got to say, ChatGPT hasn't been getting the right answer first time around recently. Bard's formula looks more accurate and specific to what the request is trying to achieve,” says Bentley.
“Also, Bard provides an example of how the formula would work, which shows me instantly that it understands my request far better than ChatGPT, which would need a lot of prompting and re-prompting to get the correct formula and remove errors. This is often the case with ChatGPT these days, in my experience”.
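Neither chatbot's actual formula is reproduced in this extract, but the lookup the prompt describes (matching each value in column B against column E of the 'Filter Down' sheet, then pulling that row's F, G, and H values across) can be sketched in Python. The function name and sample data below are our own illustration, not output from either chatbot:

```python
# Sketch of the lookup described in the prompt: for each value in
# column B, find the matching row in the 'Filter Down' sheet (keyed
# on column E) and pull that row's F, G, and H values across.

def filter_down_lookup(column_b, filter_down_rows):
    """column_b: list of values to look up.
    filter_down_rows: list of dicts with keys 'E', 'F', 'G', 'H',
    one dict per row of the 'Filter Down' sheet."""
    # Index the lookup sheet by column E so each match is O(1).
    by_e = {row["E"]: row for row in filter_down_rows}
    results = []
    for value in column_b:
        match = by_e.get(value)
        if match is None:
            results.append((None, None, None))  # no matching row
        else:
            results.append((match["F"], match["G"], match["H"]))
    return results

filter_down = [
    {"E": "apple", "F": 1, "G": 2, "H": 3},
    {"E": "pear", "F": 4, "G": 5, "H": 6},
]
print(filter_down_lookup(["pear", "kiwi"], filter_down))
# → [(4, 5, 6), (None, None, None)]
```

In Google Sheets itself, this kind of lookup would typically be written with VLOOKUP or an INDEX/MATCH combination; the Python version just makes the matching logic explicit.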
Best Answer? Bard
6. Creative Flair
Request: Please write a short poem for my 98-year-old grandmother, who loves playing basketball in the summer, and knitting hats and scarves for her grandchildren during the winter.
ChatGPT comes out on top in this battle, with a better-structured, better-written poem that rhymes and flows surprisingly well.
The verse structure is more complex, the choice of words more inventive than Bard's, and it even uses poetic devices like enjambment. Considering it generated this poem in around five seconds, this is pretty impressive.
Overall, Bard's effort is a lot simpler and doesn't read as well. It's much more formulaic, and as you can see from the screenshot of the answer, it doesn't weave in the concepts we mentioned in the prompt as naturally as ChatGPT does.
Best Answer? ChatGPT
7. Linear Planning
Question: What should I do on my 7-day trip to Athens?
ChatGPT provided us with quite a lengthy response to this query, explaining not just where we should visit, but also some extra context regarding why the different spots are worth visiting.
Bard provides images, which is great, but this does also have the effect of making the itinerary slightly harder to read, and also harder to copy and paste into a document. It also didn't consider that we'd be flying to Athens on the first day of the holiday and provided us with a full day of things to do on our first day.
ChatGPT, on the other hand, planned out an easier-to-achieve itinerary for the first day of the holiday – which makes up for its lack of imagery and sees it edge Bard when it comes to this task.
Best Answer? ChatGPT
8. Idea Generation
Question: Can you come up with some article title ideas for my US-based food and travel blog?
Both Bard and ChatGPT came up with some very commendable ideas here, and practically all of them would be suitable for the website we described in our prompt. ChatGPT gave us 20 ideas, while Bard gave us just 9:
However, as you can see by comparing the two, there's much more variety across ChatGPT's answers, and all things considered, they're more compelling titles.
Almost all of Bard's answers begin with “the best” or “the 10,” which means they all follow a more uniform structure. What's more, ChatGPT simply gave us more ideas.
If you were just starting out with your blog, Bard's responses might drive more traffic to your site, simply because people often search for “the best” or “the top 10” restaurants and eateries in specific areas. Many of ChatGPT's suggestions, however, sound like they would be more compelling pieces on the whole.
Best Answer? ChatGPT
9. Simplifying Text
Question: We asked both chatbots to simplify this extract from a Stanford Encyclopedia of Philosophy entry on the topic of quantum computing. We requested that both bots do so in a way that would allow a 10-year-old to understand it.
As you can see from the pictures below, although ChatGPT did switch out some more complex words (like “manifold”) for easier-to-understand synonyms, it's still using terms like “algorithms” without really defining them.
Yes, it has simplified the initial extract, but not necessarily in a way that's particularly useful. It's doubtful whether the average ten-year-old would gain much from the sentence “in the past, things like ‘entanglement' and ‘nonlocality' in quantum physics were just thought of as philosophical questions,” for instance.
Bard, on the other hand, produces a much easier-to-understand explanation and even includes additional bullet points to clearly explain the concepts utilized in the extract.
Bard doesn't use any terms it doesn't then go on to explain, and it even includes sentences (e.g. “quantum computing is a new way of doing math”) that would have little value to an adult but would be useful for a 10-year-old with little knowledge of the topic.
Best Answer? Bard
10. Summarizing Small Extracts
Question: We asked ChatGPT and Bard to summarize an extract from a Guardian article about the federal government's plans to impose water cuts on states in the west of the US.
ChatGPT takes a facts-first approach and pulls out only the most important information, choosing to leave behind specifics (like the names of the seven states affected), as is often done when summarizing information.
Although Bard summarizes some of the information, it also adds additional information not included in the extract. This is helpful, but slightly outside of the remit of what we asked it to do. We wanted a summary, not for the chatbot to expand on the topic.
As Bard didn't stick as closely to the brief as ChatGPT, it lost some points in this round.
Best Answer? ChatGPT
11. Summarizing Broad Topics
Question: We asked ChatGPT and Bard to summarize the entire Harry Potter book series. We asked that this be done in less than 100 words.
There's little to separate the two chatbots here – ChatGPT's answer is a better summary of the entire series, but Bard provides a book-by-book summary in addition to the series summary.
The book-by-book summaries Bard supplies are genuinely helpful, even though, much like Bard's response to the previous prompt, they fall outside the remit of our request. That said, the initial summary sufficed.
Best Answer? Tie
12. Paraphrasing Text
Request: We asked ChatGPT and Bard to paraphrase this extract from a Guardian article about the world's shortest dog. We did not specify a word limit in our request.
ChatGPT paraphrases the extract pretty well, retaining the key information while switching out multiple words and phrases with synonyms and changing the sentence structure significantly.
Bard, on the other hand, adds extra information, perhaps that it's pulling from the article thanks to its internet access – such as the dog's owner calling her a “little diva”.
Although this is impressive, it's outside the remit of what we asked it to do and isn't necessarily what we wanted.
Best Answer? ChatGPT
Google Bard vs ChatGPT: User Experience
There’s really very little in this – both ChatGPT and Bard are super simple to use. All you have to do is type in your prompts, and both bots will generate answers; it’s hard to go wrong when all you’re doing is inputting text.
In terms of readability, Bard is the better chatbot. While still very readable, ChatGPT’s paragraphs are chunkier than Bard’s, which seems to have more diverse formatting options, at least from the answers we’ve seen them both generate.
One small advantage of ChatGPT is that you can sign in using any account you like, whereas Google will only let you sign in with a Google account. For those without one, Bard's setup time will be slightly longer than ChatGPT's.
Using Bard and ChatGPT at Work
Since ChatGPT's release last year, companies in the tech sector and beyond have been finding innovative ways to harness its abilities to make their work lives easier. But considering its power and ability, there are some things all businesses using AI should keep in mind.
Firstly, ensure that your staff is aware of what they can and can't use ChatGPT for. Generating Google Sheets formulas is one thing, but using ChatGPT to write entire articles or generate content raises a myriad of difficult questions relating to plagiarism and editorial integrity. Having clear guidelines will ensure you're not fighting AI-induced fires further down the line.
Secondly, ensure your staff is aware of ChatGPT's terms and conditions, as well as precautions they should take while using ChatGPT. Anything you type into ChatGPT can technically be used to train the model – so everyone using it needs to remember ChatGPT saves their data and to think carefully about that before inputting any information. If you'd like to improve your restaurant's secret sauce recipe, for instance, we wouldn't suggest typing it into ChatGPT.
If you're not using ChatGPT at all, now might be the time to start. It's already saving some businesses valuable time and resources, and it's only going to become more intelligent and more capable than its current iteration – the recent release of GPT-4 illustrates just how quickly this technology is developing. Our advice? Watch this space.