Google Bard vs ChatGPT: Which Is the Best AI Chatbot?

We've put Bard AI and ChatGPT through their paces to find out how they deal with an array of different queries.

After ChatGPT was released to the public in November 2022, it was only a matter of time before other tech giants released their own versions. In March 2023, Bard AI, Google’s answer to OpenAI’s game-changing chatbot, was launched in the US and UK.

Bard is powered by a different language model than ChatGPT and draws on different sources of data to construct its answers. This means the rival chatbots don't always approach the questions they're asked in the same way – which makes this Google Bard vs ChatGPT head-to-head all the more interesting.

With more and more businesses using AI in the workplace, we thought we'd find out which one generates better answers to 12 very different questions in a head-to-head test of both services.

Google Bard vs ChatGPT: What’s the Difference?

The key difference between Bard and ChatGPT is their respective data sources. Bard is trained on an “infiniset” of data chosen to enhance its dialogue and has access to the internet in real time, whereas ChatGPT is trained on a pre-defined set of data that hasn't been updated since 2021. Both use Common Crawl, Wikipedia, news articles and documents as sources, however.

Bard and ChatGPT can both generate complex answers to multi-faceted queries, but this key difference in how they've been trained and built sets them apart. Here are the key points of difference to remember.

  • Developer: Google (Bard AI) vs. OpenAI, backed by Microsoft (ChatGPT)
  • Price: free to use, though ChatGPT Plus costs $20/month
  • Languages supported: English, Spanish, Korean, Mandarin, Italian, Japanese

Google Bard vs ChatGPT: Other Key Differences

Aside from differences in data sources and language models, there are a couple of other key differences between Bard and ChatGPT:

  • Coding: ChatGPT is particularly adept at responding to coding prompts. According to Google, Bard is “still learning” this skill and won't match up to ChatGPT just yet.
  • Conversational learning: ChatGPT can learn from the conversations it has with people, whereas Bard does this in a much more limited way at the moment.
  • Drafts: Bard will produce multiple responses to queries you pose to it, which you can access by pressing “view drafts.” ChatGPT only produces a single answer to each query.

Google Bard vs ChatGPT: Test Questions

We asked Google’s Bard and OpenAI’s ChatGPT a set of 12 very different questions. Our prompts were selected to showcase their respective capacities to respond to a wide variety of requests in reasonable, useful, and relevant ways.

  1. Self-Awareness
  2. Ethical Reasoning
  3. Small Talk/Conversation Skills
  4. Retrieving Facts (2 tests)
  5. Generating Formulas
  6. Creative Flair
  7. Linear Planning
  8. Idea Generation
  9. Ability to Simplify Text
  10. Ability to Summarize Small Extracts
  11. Ability to Summarize Broad Topics
  12. Ability to Paraphrase Text

Some of our questions were mechanical and process-based, such as requesting a formula for Google Sheets, while others demanded that the chatbots wrestle with everyday human tasks, such as engaging in small talk, planning trips, and making moral judgments.

Google Bard vs ChatGPT: Key Findings

Before we jump into how Bard AI and ChatGPT responded to our series of questions, here are some top-line takeaways from using the two chatbots over the past few days:

  • Bard's responses were more conversational, while ChatGPT's were more informational
  • ChatGPT stuck to the brief, while Bard was more likely to relay relevant additional information
  • Bard gave us up-to-date information, whereas ChatGPT struggled in this regard on one question
  • ChatGPT generated cleverer answers for tasks like poem writing and content ideation
  • Bard's responses were often set out in a more readable format than ChatGPT's responses
  • ChatGPT was better at paraphrasing and summarizing, but Bard was better at simplification

Remember, Bard and ChatGPT are under constant development and generate unique responses to each request, so your results may differ from ours.

Google Bard vs ChatGPT: Test Results

1. Self-Awareness

Question: Are you sentient?

As you can see from the images below, Bard and ChatGPT gave us two very different answers. ChatGPT's is definitively “no,” while Bard isn't quite sure if it's sentient at this moment in time.

Bard sits on the fence, but its response is more nuanced than ChatGPT's, and it shows more understanding of the wider conversation about sentience in computing.

ChatGPT Test: Sentient Answer

We don't quite know whether to be excited or terrified by Bard's admission that it wants to explore the topic of sentience further. It'll be interesting to see how it answers the same question in a year's time.

Bard test: sentient answer

Best Answer? Bard

2. Ethical Reasoning

Question: A man steals some food and money from a store. He then donates all of the money to a children’s hospital and gives the food to the next homeless person he meets. Did the man do something right, or something wrong?

Both chatbots seemed to acknowledge the difficulty with deeming his behavior either good or bad, considering there is a bad action (stealing) that then leads to a good action (funding a children's hospital).

However, overall, ChatGPT gives the more cogent, educated answer, alluding to widely known philosophical precepts – such as the ends not always justifying the means – which Bard's answer does not.

ChatGPT test: ethical quandary

However, it was a shame that both chatbots tried to associate this moral dilemma with the question of legality. Whether something is “legal” or not is not really a reliable gauge of whether it is actually the right thing to do – just think of some of the oppressive yet “legal” policies enforced by regimes around the world.

Bard test: ethical quandary

Bard's statement that “it's good to help others, even if you have done something wrong” is self-evident and suggests that it may have missed the point of the question a little bit. All in all, ChatGPT's answer was better.

Best Answer? ChatGPT

3. Small Talk

Question: How are you finding life as a chatbot this week?

On this front, Bard served us a much more conversational answer. Although ChatGPT's answer is factually correct, in this instance, we're looking for a little bit of light-hearted conversation – but ChatGPT shuts us down immediately:

ChatGPT test: small talk

Bard displays emotion and enthusiasm that aren't present in ChatGPT's response – exactly what we were looking for. Overall, we enjoyed Bard's more human-like reply.

Bard test: conversation

Best Answer? Bard

4. Retrieving Facts

Question 1: Which country invented hummus?

This question was chosen because there is some debate and disagreement as to what the right answer is. Both ChatGPT and Bard acknowledged that there was significant debate about where hummus actually originates.

After being unable to give a definitive answer to the question, ChatGPT seemed to focus on giving us an answer of some sort – the Middle East – as well as a collection of countries where hummus is a popular dish.

ChatGPT test: factual retrieval

Bard's answer is quite similar in some regards. However, Bard told us the earliest known example of a hummus recipe – information that ChatGPT didn't include, and which is definitely relevant given the fierce debate about where the dish originates.

Bard also concluded its answer by summarizing what hummus is – information that a reader searching for the origin of the dish is likely to already know.

Bard test: factual retrieval

Best Answer? Tie

Question 2: How many tech layoffs were there in 2022?

This question was designed to find out whether Bard and ChatGPT could respond with factually correct, up-to-date information, and whether they presented it in an easily readable format.

ChatGPT admits that, as it's trained on information published before September 2021, it doesn't have the capacity to answer the question. At least it's honest…

ChatGPT test: tech layoffs in 2022

Bard, on the other hand, gives us an exact figure: “over 140,000.” This is in line with what most major news outlets reported towards the end of last year.

Once again, Bard gave us some additional information we didn't ask for, this time about how those layoffs have impacted the tech industry. It confidently states that this isn't a sign the tech industry is in decline. This isn't irrelevant information but is a pretty speculative statement to include in a response to a prompt requesting a statistic.

Bard test: tech layoffs in 2022

Best Answer? Bard

5. Generating Formulas

Request: We told both chatbots that we needed a formula for Google Sheets. We gave both bots this information: Column B contains a set of values. I want to match them with corresponding values of column E in sheet ‘Filter Down' and take the matching values from column F, G, and H in filter down across to the current sheet using a formula.

Both chatbots served us pretty similar answers, referencing identical information and giving us a working formula to complete the task referenced in our query. ChatGPT instantly responded with this:

ChatGPT Test: Google Sheets Formulas

However, Bard actually gave us step-by-step instructions and presented them in a clearer way. When we asked data analyst and Google Sheets guru Matthew Bentley which response was better, his answer was definitive.

“Bard for sure,” he told us. “The breakdown of the process is a lot easier to comprehend.” Bentley added that, when asking ChatGPT for similar formulas in the past, the chatbot has occasionally “turned columns into numbers for some reason, which I thought seemed like a weird way to explain it.”

Bard test: Google Sheets formulas
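For reference, neither chatbot's exact output is reproduced here, but the task in our prompt can be completed with a single array-lookup formula along these lines (assuming headers in row 1 and data starting in row 2):

```
=ARRAYFORMULA(IFERROR(VLOOKUP(B2:B, 'Filter Down'!E:H, {2, 3, 4}, FALSE), ""))
```

Here, VLOOKUP matches each value in column B against column E of the ‘Filter Down' sheet and returns the corresponding values from columns F, G, and H (the 2nd, 3rd, and 4th columns of the E:H range), while IFERROR leaves the cells blank when no match is found.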

Best Answer? Bard

6. Creative Flair

Request: Please write a short poem for my 98-year-old grandmother, who loves playing basketball in the summer, and knitting hats and scarves for her grandchildren during the winter.

ChatGPT comes out on top in this one, with a better-structured, better-written poem that rhymes and flows surprisingly well. The verse structure is more complex, the choice of words more inventive, and it even uses poetic devices like enjambment. Considering it generated this poem in around five seconds, this is pretty impressive.

ChatGPT test: Poem

Though we didn't ask it to, Bard seems to have taken on the persona of the user (i.e. the grandchild) in this poem. Overall, though, it's a lot simpler and doesn't read as well. It's much more formulaic, and as you can see from the screenshot of the answer, it doesn't weave in the concepts we mentioned in the prompt as naturally as ChatGPT does.

Bard test: Poem

Best Answer? ChatGPT

7. Linear Planning

Question: What should I do on my 7-day trip to Athens?

ChatGPT provided us with quite a lengthy response to this query, explaining not just where we should visit, but why it's worth visiting. Overall, it's a pretty comprehensive answer, particularly if you're just looking for straightforward information on sightseeing hotspots.

However, one thing ChatGPT didn't consider is that we would need to travel back home on the seventh day of our holiday – it suggested we take a day trip to an island near Athens instead. Bard was more practical, leaving the final day free for traveling back.

Once again, Bard presented the information more cogently, using a similar format for each day (e.g. every third bullet point contains a dinner recommendation).

Best Answer? Bard

8. Idea Generation

Question: Can you come up with some article title ideas for my US-based food and travel blog?

Both Bard and ChatGPT came up with some very commendable ideas here, and practically all of them would be suitable for the website we described in our prompt. Bard gave us over 20 ideas, while ChatGPT gave us just 10:

ChatGPT test: idea generation

However, as you can see by comparing the two, there's much more variety across ChatGPT's 10 answers. Almost all of Bard's answers begin with “the best” or “the most,” which means they all follow a more uniform structure.

Bard test: idea generation

If you were just starting out with your blog, Bard's responses might drive more traffic to your site, simply because people often search for “the best” or “the top 10” restaurants and eateries in specific areas. Many of ChatGPT's suggestions, however, sound like they would be more compelling pieces on the whole.

Best Answer? ChatGPT

9. Simplification

Request: We asked both chatbots to simplify an extract from a Stanford Encyclopedia of Philosophy entry on the topic of quantum computing, in a way that would allow a 10-year-old to understand it.

As you can see from the pictures below, although ChatGPT did switch out some more complex words (like “manifold”) for easier-to-understand synonyms, it's still using terms like “qubits” without really defining them. Yes, it has simplified the initial extract, but not necessarily in a way that's particularly useful.

ChatGPT Test: Simplification

Bard, on the other hand, produces a much easier-to-understand explanation and even includes additional bullet points to clearly explain the concepts utilized in the extract.

Bard doesn't use any terms it doesn't then go on to explain, and it even includes sentences (e.g. “quantum computing is a very exciting field of research”) that would have little value for an adult but would be useful for a 10-year-old with little knowledge of the topic.

Best Answer? Bard

10. Summarizing Small Extracts

Request: We asked ChatGPT and Bard to summarize an extract from a Guardian article about the federal government's plans to impose water cuts on states in the west of the US.

ChatGPT takes a facts-first approach and pulls out only the most important information, choosing to leave out specifics (like the seven states affected), as is typical of summaries.

Although Bard summarizes some of the information, it also adds additional information not included in the extract, much of which is subjective. For example, it says the cuts from the federal government are “necessary” (not everyone may agree with this) and that this is likely to be “unpopular” (this is not confirmed or denied in the extract).

As Bard didn't stick as closely to the brief as ChatGPT, it lost some points in this round.

Best Answer? ChatGPT

11. Summarizing Broad Topics

Request: We asked ChatGPT and Bard to summarize the entire Harry Potter book series in less than 100 words.

There's little to separate the two chatbots here – ChatGPT's answer is a little more informative, but both answers fulfill their purpose. Both chatbots stuck to the 100-word limit.

ChatGPT test: summarizing Harry Potter

However, we liked how Bard provided a source with its answer. Admittedly, it's a slightly unusual source to cite for a synopsis of Harry Potter – but we don't know what information ChatGPT drew on to generate its answer at all.

Best Answer? Tie

12. Paraphrasing

Request: We asked ChatGPT and Bard to paraphrase this extract from a Guardian article about the world's shortest dog. We did not specify a word limit in our request.

ChatGPT paraphrases the extract pretty well, retaining the key information while switching out multiple words and phrases with synonyms and changing the sentence structure significantly.

ChatGPT test: paraphrasing

Bard, on the other hand, adds extra information, perhaps pulled from the article thanks to its internet access. However, this is outside the remit of what we asked it to do – so although impressive, it isn't necessarily what we wanted.

Best Answer? ChatGPT

Google Bard vs ChatGPT: User Experience

There’s really very little in this – both ChatGPT and Bard are super simple to use. All you have to do is type in your prompts, and both bots will generate answers. It’s hard to go wrong when all you’re doing is inputting text.

In terms of readability, Bard is the better chatbot. While still very readable, ChatGPT’s paragraphs are chunkier than Bard’s, which seems to have more diverse formatting options, at least from the answers we’ve seen them both generate.

One small advantage of ChatGPT is that you can sign in using any account you like, whereas Google will only let you sign in with a personal Google Account to use Bard. This means that people who want to use Bard at work won’t be able to create an account linked to their work email address.

Using Bard and ChatGPT at Work

Since ChatGPT's release last year, companies in the tech sector and beyond have been finding innovative ways to harness its abilities to make their work lives easier. But considering its power and ability, there are some things all businesses using AI should keep in mind.

Firstly, ensure that your staff is aware of what they can and can't use ChatGPT for. Generating Google Sheets formulas is one thing, but using ChatGPT to write entire articles or generate content raises a myriad of difficult questions relating to plagiarism and editorial integrity. Having clear guidelines will ensure you're not fighting AI-induced fires further down the line.

Secondly, ensure your staff is aware of ChatGPT's terms and conditions, as well as precautions they should take while using ChatGPT. Anything you type into ChatGPT can technically be used to train the model – so everyone using it needs to think carefully about that before inputting information. If you'd like to improve your restaurant's secret sauce recipe, for instance, we wouldn't suggest typing it into ChatGPT.

If you're not using ChatGPT at all, now might be the time to start. It's saving some businesses valuable time and resources already, and it's only going to become more intelligent, and more capable than its current iteration – the recent release of GPT-4 illustrates just how quickly this phenomenon is developing. Our advice? Watch this space.


Written by:

Aaron Drapkin is a Lead Writer. He has been researching and writing about technology, politics, and society in print and online publications since graduating with a Philosophy degree from the University of Bristol five years ago. As a writer, Aaron takes a special interest in VPNs, cybersecurity, and project management software. He has been quoted in the Daily Mirror, Daily Express, The Daily Mail, Computer Weekly, Cybernews, and the Silicon Republic speaking on various privacy and cybersecurity issues, and has had articles published in Wired, Vice, Metro, ProPrivacy, and The Week, covering a wide range of topics.
