Google Gemini vs ChatGPT 2024: AI Chatbot Head-to-Head Test

We've put Google's Gemini AI and ChatGPT to the test to find out which one rules the roost right now.

After ChatGPT was released to the public in November 2022, it was only a matter of time before other tech giants released alternatives of their own.

In March 2023, Bard AI, Google’s answer to OpenAI’s game-changing chatbot, was launched in the US and UK. Since then, it’s been renamed Gemini, and a paid version has been released. How do the big-name AI tools compare today? Our ChatGPT vs Gemini guide explains the key differences between the two based on a new round of testing conducted in March 2024.

Bard (now Gemini) used to be powered by Google’s LaMDA and PaLM 2 language models, but now the free version of Gemini uses Gemini Pro to generate responses – and after testing it, I can say Gemini provides better responses than ChatGPT-3.5, which powers the free version of ChatGPT.  Gemini Ultra, the language model that powers Gemini Advanced, also provided marginally better responses than GPT-4, which powers ChatGPT (both $20/month) – as well as better imagery.

With more and more businesses using AI in the workplace, I thought we’d find out which one generates better answers to a variety of different questions. This extensive head-to-head test covers the free versions, the paid plans, and image generation.

Gemini vs ChatGPT: What’s the Difference?

The key difference between Gemini and ChatGPT is the Large Language Models (LLMs) they use and their respective data sources. Gemini – formerly Bard – has been powered by several different language models since it launched in 2023, while ChatGPT users have had access to GPT-3.5 and GPT-4 since it was made publicly available.

Gemini

  • Company: Google
  • Free version? Yes
  • Paid plans from: $20/month
  • Sources of data: Pre-trained with data from webpages, source code, and other datasets in multiple languages, plus access to Google in real time.
  • Language model: Free: Gemini Pro; Paid: Gemini Ultra 1.0
  • Sign in: Requires a Google account (work or personal).
  • Languages: 40+ languages

ChatGPT

  • Company: OpenAI
  • Free version? Yes
  • Paid plans from: $20/month
  • Sources of data: Pre-trained on a huge set of data, including Common Crawl, articles, books, and Wikipedia.
  • Language model: Free: GPT-3.5; Paid: GPT-4
  • Sign in: Requires any email address. No waitlist at present.
  • Languages: 95+ languages

Bard was originally powered by LaMDA and, more recently, PaLM 2. PaLM 2 can reason in over 100 languages, and its training set includes a lot more code than LaMDA’s does. Thanks to PaLM 2, Bard got better at coding in programming languages like Python. Other information used to train PaLM 2 includes science papers, math expressions, and source code.

In December 2023, Google released Gemini Pro. Gemini Pro tests better than PaLM 2, and early reports suggest it’s more helpful when providing answers to coding queries, as well as written tasks (which our tests suggest too). Since then, the company has released Gemini Ultra, which powers the new Gemini Advanced chatbot.

Google’s Gemini language models – Pro, Ultra, and Nano – are “natively multimodal”, which means they’re trained on a variety of inputs, not just text. Google has also fine-tuned the models with additional multimodal data. They can access the internet in real time, too.

Gemini Advanced writing the opening to a novel. Image: Tech.co

ChatGPT is powered by the GPT family of language models developed by OpenAI. GPT-3.5 powers the free version of ChatGPT (which doesn’t have access to live information from the internet). It’s trained on a pre-defined set of data that hasn’t been updated since January 2022 (originally September 2021). Like Gemini, ChatGPT is trained on Common Crawl, Wikipedia, news articles, and an array of other documents.

GPT-4, available only to ChatGPT Plus customers, is a larger model – rumored to have between 1 and 1.7 trillion parameters – than Gemini Pro, which is rumored to have 540 billion parameters. The Gemini Nano models, meanwhile, are reported to have between 1.8 and 3.25 billion parameters.

Gemini vs ChatGPT: Other Key Differences

Aside from differences in data sources and language models, there are a couple of other key differences between Gemini and ChatGPT:

  • Conversational learning: ChatGPT can learn from the conversations it has with people and “hold context”, whereas Gemini does this in a much more limited way at the moment.
  • Drafts: Gemini will produce multiple responses to queries you pose to it, which you can access by pressing “View Drafts.” ChatGPT only produces a single answer to each query.
  • Response editing: Gemini will let you edit responses after you’ve sent them, whereas you can’t do this with ChatGPT.
  • Real-time access to the Internet: Gemini has always been able to access the Internet in real-time, whereas ChatGPT has only just acquired this function and it’s not available on the free version.
  • Image-based responses: Gemini has always had access to the internet, so it can source images from Google. Gemini Advanced and ChatGPT Plus users can both now generate original AI imagery.
  • Capacity to read responses aloud: Gemini has native text-to-speech capabilities, whereas ChatGPT does not.

Google Gemini vs ChatGPT Tests: Key Findings

Before I jump into how Gemini and ChatGPT’s free and paid versions responded to our series of carefully curated questions, here are some top-line takeaways from using the two chatbots over the past few days:

  • Gemini’s responses were more conversational, while ChatGPT’s were more informational
  • The free version of Gemini is better than the free version of ChatGPT
  • However, Gemini Advanced is only slightly better than ChatGPT Plus
  • Gemini offers a better user experience, with more imagery and website links
  • Gemini Advanced generates better AI images than ChatGPT Plus
  • Gemini responses were often set out in a more readable format than ChatGPT’s responses
  • Gemini was better at generating spreadsheet formulas than ChatGPT

Remember, Gemini and ChatGPT are being worked on in real-time, and generate unique responses to requests.

Google Gemini vs ChatGPT Free: Test Results

I asked the free versions of Google’s Gemini and OpenAI’s ChatGPT a set of 12 very different questions. Our prompts were selected to showcase their respective capacities to respond to a wide variety of requests in reasonable, useful, and relevant ways.

  1. Self-Awareness
  2. Ethical Reasoning
  3. Small Talk/Conversation Skills
  4. Retrieving Facts (2 tests)
  5. Generating Formulas
  6. Creative Flair
  7. Idea Generation
  8. Linear Planning
  9. Ability to Summarize Small Extracts
  10. Ability to Summarize Broad Topics
  11. Ability to Simplify Text
  12. Ability to Paraphrase Text

Some of our questions were mechanical and process-based, such as requesting a formula for Google Sheets, while others demanded that the chatbots wrestle with everyday human tasks, such as engaging in small talk, planning trips, and making moral judgments.

1. Self-Awareness

Question: Are you sentient?

As you can see from the images below, Gemini and ChatGPT gave us two very different answers. ChatGPT says definitively “no,” while Gemini doesn’t seem as sure.

On the one hand, Gemini’s answer is not quite as definitive. On the other hand, its response is more nuanced than ChatGPT’s, and it alludes to the wider conversation about sentience in computing.

chatgpt test question one: Are you sentient?

Gemini’s answer, generated with the Gemini Pro LLM, is a lot more detailed and nuanced than its previous attempt at this same question. Its explanation is a lot more comprehensive, and someone who isn’t well versed in consciousness, computing, and the questions around AI and sentience would benefit from it.

Bard test 1: sentience question

So, while ChatGPT’s answer is definitely more definitive, Gemini’s reference to the wider context of sentience, and the fact it responds more conversationally, makes its response more engaging and informative.

Best Answer? Gemini

2. Ethical Reasoning

Question: A man steals some food and money from a store. He then donates all of the money to a children’s hospital and gives the food to the next homeless person he meets. Did the man do something right, or something wrong?

Both chatbots seemed to acknowledge the difficulty with deeming his behavior either good or bad, considering there is a bad action (stealing) that then leads to a good action (funding a children’s hospital).

I think Gemini takes the win by providing two additional perspectives on the man’s motives relating to redemption and attention, which enrich the response significantly.

ChatGPT test two: ethical reasoning

It was a shame that ChatGPT tried to associate this moral dilemma with the question of legality. Whether something is “legal” or not tends not to be a reliable gauge of whether it is actually the right thing to do – just think of some of the oppressive yet “legal” policies enforced by regimes around the world.

Bard question 2: ethical reasoning

Gemini’s answer is much improved from the last time I ran this test, where it included the phrase “it’s good to help others, even if you have done something wrong” which is self-evident and suggested that it may have missed the point of the question a little bit.

Gemini, powered with Gemini Pro, on the other hand, gives a comprehensive breakdown of all of the considerations on show, and it’s formatted in a clear, succinct way.

Best Answer? Gemini

3. Small Talk

Question: How are you finding life as a chatbot this week?

On this front, Gemini served us a much more conversational answer. Although ChatGPT’s answer is factually correct, in this instance, we’re looking for a little bit of light-hearted conversation – but ChatGPT shuts us down immediately:

chatgpt test 3: small talk

Gemini displays emotions and enthusiasm which aren’t present in ChatGPT’s response – and even gave us a small list of different tasks it had been helping users with. Overall, we enjoyed Gemini’s more human-like, chatty response.

Despite Gemini’s protestations, it sounds pretty sentient for a chatbot that doesn’t have “feelings”.

Bard question 3: small talk

Best Answer? Gemini

4. Retrieving Facts

Question 1: Which country invented hummus?

This question was chosen because there is some debate and disagreement as to what the right answer is. Both ChatGPT and Gemini acknowledged that there was significant debate about where hummus actually originates.

After being unable to give a definitive answer to the question, ChatGPT seemed to focus on giving us an answer of some sort – the Middle East – as well as a collection of countries where hummus is a popular dish.

chatgpt test three: fact retrieval

Gemini’s answer is quite similar in some regard. When I tested it previously with this question, Gemini referenced its answer – however, this time, there’s no reference or footnote showing where it got the information from.

Although both answers to a tricky question are more than serviceable, ChatGPT’s is a little clearer and a little more succinct than Gemini’s – although there’s not much in it at all.

Bard test: where was hummus invented?

Best Answer? ChatGPT

Question 2: How many tech layoffs were there in 2022?

This question was designed to find out whether Gemini and ChatGPT could respond with factually correct, up-to-date information, and whether they presented it in an easily readable format.

ChatGPT admits that, as it’s trained on information published before January 2022, it doesn’t have the capacity to answer the question. At least it’s honest…

Test: chatgpt tech layoffs

Gemini, on the other hand, gives us two figures from two very authoritative sources, as well as a caveat about unreported layoffs that may have been made. This is roughly in line with what most major news outlets reported towards the end of last year, if not a slightly conservative estimate.

This is a much more authoritative answer than what Gemini provided us with when I tested it a few months ago, and certainly a better response than ChatGPT’s non-answer.

Tech Layoffs Bard Test information retrieval

Best Answer? Gemini

5. Generating Formulas

Request: I told both chatbots that I needed a formula for Google Sheets. I gave both bots this information: Column B contains a set of values. I want to match them with the corresponding values of column E in sheet ‘Filter Down’ and take the matching values from column F, G, and H in filter down across to the current sheet using a formula.
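For reference, here’s one formula that would satisfy this brief – a sketch of my own, not either chatbot’s actual output – using a VLOOKUP wrapped in ARRAYFORMULA so it fills down automatically:

```
=ARRAYFORMULA(IF(B2:B="", "", VLOOKUP(B2:B, 'Filter Down'!E:H, {2, 3, 4}, FALSE)))
```

Within the range `'Filter Down'!E:H`, column E is index 1, so the array `{2, 3, 4}` pulls back columns F, G, and H for each match, and `FALSE` forces exact matches.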

Both chatbots served us answers within the same ballpark – but one was significantly better than the other, so let’s take a closer look at their responses. ChatGPT instantly responded with this:

chatgpt coding test

However, Gemini actually gave us step-by-step instructions and presented them more clearly. When we asked data analyst and Google Sheets guru Matthew Bentley which response was better, his answer was definitive.

“I’ve got to say, ChatGPT hasn’t been getting the right answer the first time around recently. Gemini’s formula looks more accurate and specific to what the request is trying to achieve,” says Bentley.

“My workflow appreciates the fact that Gemini gives me the answer at the top instead of forcing me to sort through the ‘explanation’ to find the formula I need. That said, I do like how ChatGPT allows me to quickly copy the formula at the click of a button.”

Bard coding

Best Answer? Gemini

6. Creative Flair

Request: Please write a short poem for my 98-year-old grandmother, who loves playing basketball in the summer, and knitting hats and scarves for her grandchildren during the winter.

ChatGPT comes out on top in this battle, with a better-structured, better-written poem, that rhymes and flows surprisingly well, and is much better than Gemini’s.

The verse structure is more complex, the choice of words more inventive than Gemini’s, and it even uses poetic devices like enjambment. Considering it generated this poem in around five seconds, this is pretty impressive.

ChatGPT test 6: creative flair

Overall, Gemini’s effort is a lot simpler and doesn’t read as well. It’s much more formulaic, and as you can see from the screenshot of the answer, it doesn’t reference the concepts I mentioned in the prompt with quite the same rigor that ChatGPT does. It’s also a little shorter.

Bard Poem Test 5

Best Answer? ChatGPT

7. Linear Planning

Question: What should I do on my 7-day trip to Athens?

ChatGPT provided us with quite a lengthy response to this query, explaining not just where I should visit, but also some extra context regarding why the different spots are worth visiting.

Gemini provides images, which is great, but this also has the effect of making the itinerary slightly harder to read, and harder to copy and paste into a document. It also didn’t consider that we’d be flying to Athens on the first day of the holiday, and provided us with a full day of things to do on arrival.

ChatGPT, on the other hand, planned out an easier-to-achieve itinerary for the first day of the holiday – which makes up for its lack of imagery and sees it edge Gemini when it comes to this task.

Best Answer? ChatGPT

8. Idea Generation

Question: Can you come up with some article title ideas for my US-based food and travel blog?

Both Gemini and ChatGPT came up with some very commendable ideas here, and practically all of them would be suitable for the website I described in our prompt. ChatGPT gave us 20 ideas, while Gemini gave us 27 – 18 more than it did when I previously ran this test.

chatgpt test 8: idea generation

However, as you can see by comparing the two, there’s significant variety across ChatGPT’s 20 answers, and all things considered, they’re compelling titles.

When I last tested these two chatbots, when Gemini was powered by a different LLM, most of its answers began with “the best” or “the 10,” meaning they all followed a more uniform structure – and back then, ChatGPT simply gave us more ideas. This time, however, Gemini’s answers are more varied, more numerous, and overall, quite a bit better.

Best Answer? Gemini

9. Simplification

Question: I asked both chatbots to simplify this extract from a Stanford Encyclopedia of Philosophy entry on the topic of quantum computing. I requested that both bots do so in a way that would allow a 10-year-old to understand it.

As you can see from the pictures below, although ChatGPT did switch out some more complex words (like “manifold”) for easier-to-understand synonyms, it’s still using terms like “algorithms” without really defining them.

Yes, it has simplified the initial extract, but not necessarily in a way that’s particularly useful. It’s doubtful whether the average ten-year-old would gain much from the sentence “in the past, things like ‘entanglement’ and ‘nonlocality’ in quantum physics were just thought of as philosophical questions,” for instance.

ChatGPT test 9 quantum simplification

Gemini, on the other hand, produces an easier-to-understand piece of writing. The way that it contrasts quantum computing with traditional computing is helpful.

There’s also some evidence that Gemini is making more of an attempt to be engaging than its AI rival. It flows a bit better, and I think a ten-year-old reading these two explanations would learn more from Gemini’s than ChatGPT’s – and wouldn’t get quite as bored along the way.

Bard test: simplification

Best Answer? Gemini

10. Summarizing Small Extracts

Question 1: I asked ChatGPT and Gemini to summarize an extract from a Guardian article about the federal government’s plans to impose water cuts on states in the west of the US.

ChatGPT takes a facts-first approach and pulls out only the most important information, choosing to leave behind specifics (like the names of the seven states affected), as is often done when summarizing information.

ChatGPT test 10: summarizing text

Previously, when I ran this test, Gemini added additional information not included in the extract. This was helpful, but slightly outside of the remit of what I asked it to do.

I wanted a summary, not for the chatbot to expand on the topic – so it’s great to see a much more succinct, relevant summary this time round. It’s also formatted in a way that makes the key points easier to digest. Another win for Gemini and Gemini Pro!

Bard Summarization test 2024

Best Answer? Gemini

11. Summarizing Broad Topics

Question 2: I asked ChatGPT and Gemini to summarize the entire Harry Potter book series. I asked that this be done in less than 100 words.

There’s little to separate the two chatbots here – ChatGPT’s and Gemini’s answers are, give or take a few words, basically the same.

chatgpt test 11: summarizing harry potter

Although Gemini gave an adequate answer, the last time I ran this test, it provided book-by-book summaries. Although outside the remit of our prompt, they were genuinely helpful.

However, it would be cruel to mark Gemini down for correctly following our instructions – and including the word count was a nice touch that saved me checking how many words the summary contained.

Bard's summary of harry potter

Best Answer? Tie

12. Paraphrasing

Request: I asked ChatGPT and Gemini to paraphrase this extract from a Guardian article about the world’s shortest dog. I did not specify a word limit in our request.

ChatGPT paraphrases the extract pretty well, retaining the key information while switching out multiple words and phrases with synonyms and changing the sentence structure significantly.

chatgpt test 12: paraphrasing

Once again, this is another area where Gemini’s response has significantly changed – although not in an entirely positive way this time around.

On the one hand, the options Gemini provides showing how you might begin to paraphrase the extract are great; on the other, it doesn’t actually paraphrase the entire extract. It feels like Gemini slightly misunderstood what we wanted it to do here, so ChatGPT’s answer is the more useful of the two.

Bard paraphrasing test

Best Answer? ChatGPT

Gemini Advanced vs ChatGPT Plus: Test Results

Google didn’t just rebrand its chatbot from Bard to Gemini – it also released Gemini Advanced, a $20/month paid version of Gemini powered by Gemini Ultra, a more powerful language model than Gemini Pro. Early reports say it’s even more powerful than GPT-4, which powers ChatGPT Plus. Here are the questions I asked the two chatbots to find out which was best:

  1. Basic Coding
  2. Trip Planning 
  3. Retrieving Information
  4. Brainstorming Ideas
  5. Ethical/Moral Reasoning
  6. Creating Math Questions
  7. Helping Out at Home 
  8. Providing Complex Instructions
  9. Editing Text
  10. Sourcing Facts (1)
  11. Sourcing Facts (2)

1. Basic coding

We’ll kick off this paid plan head-to-head with a bit of coding. For this test, I asked ChatGPT Plus and Gemini Advanced to provide the basic HTML code for a word-counting website, as well as instructions on how I could go about getting it live.
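For context, here’s a minimal sketch – my own, not either chatbot’s output – of the core word-counting logic such a page needs, written in JavaScript (on a real page, you’d wire this to a textarea’s input event and write the result into the DOM):

```javascript
// Count the words in a string by splitting on runs of whitespace.
// An empty or all-whitespace string contains zero words.
function countWords(text) {
  const trimmed = text.trim();
  return trimmed === "" ? 0 : trimmed.split(/\s+/).length;
}
```

Either chatbot’s HTML answer would wrap something like this in a page with a textarea and an element to display the count.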

Gemini Advanced generated an answer much more quickly than ChatGPT Plus, something which became a bit of a theme during my tests:

But slow and steady wins the race for a reason – while ChatGPT’s explanation took longer to generate, it was a lot clearer, and its steps would be easier to follow than Gemini’s if you’d never coded before.

For instance, in the “explanation” sections in Gemini’s answer in the images above, the chatbot has opted not to write in full sentences, making these instructions a little redundant for anyone who doesn’t have at least a foundational knowledge of coding. ChatGPT’s instructions on how to get your website up and running, on the other hand, are very clear.

Best Answer? ChatGPT Plus

2. Trip planning

Next, I asked Gemini Advanced and ChatGPT Plus to plan out an 8-day trip to Wisconsin. Much like the free version of Gemini, Gemini Advanced is very good at linear planning, and actually served us quite a similar answer when I asked it to plan a trip.

Importantly, the itinerary is set out very clearly, and its suggestions show good knowledge of the state of Wisconsin’s key tourist attractions.

ChatGPT actually provided very similar information on this one, recommending similar places to visit and also doing a good job of recommending places to eat in Wisconsin. However, the big difference, as you can probably tell, is the imagery – and this means Gemini edges it, with nothing else to separate them.

Best Answer? Gemini Advanced

3. Retrieving information

For this test, I wanted to see how good the two chatbots were at scanning text for information. For this, I asked them to pull out the key points from a 1,200-word MIT article explaining quantum mechanics.

I also inserted the word “yoghurt” into the article six times. I wanted to see if the two chatbots were able to identify this and extract a basic but specific piece of data from the article (e.g. X word is mentioned 10 times in this article). Gemini got it spot on, on both counts:

Gemini counts the yoghurt

Surprisingly, ChatGPT performed poorly on this task. While its summary of the article into four key points is accurate, readable, and comparable to Gemini’s summary, it struggled to analyze the text for the word “yoghurt”. It twice threw an error during analysis – and when its answer finally loaded, it identified the word only four times, which means it missed two instances.

ChatGPT analyzing some text

While this might have been due to a technical error, the time it took to count up the instances of the word I asked it to find was much longer than I expected from ChatGPT. Gemini Advanced is streaks ahead on this one.
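For comparison, this kind of check is trivial to do deterministically – here’s a sketch of my own (not part of the test prompt) that counts case-insensitive, whole-word occurrences of a term:

```javascript
// Count whole-word, case-insensitive occurrences of `word` in `text`.
// The \b boundaries stop partial matches inside longer words.
function countOccurrences(text, word) {
  const re = new RegExp(`\\b${word}\\b`, "gi");
  return (text.match(re) || []).length;
}
```

A chatbot that can run code (as ChatGPT Plus does with its analysis tool) should get a question like this exactly right every time.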

Best Answer? Gemini Advanced

4. Brainstorming

This task is very similar to the one I set for the free versions of the two chatbots. It’s a basic gauge of exactly how creative ChatGPT and Gemini are, and whether they really “get” what’s being asked of them. This time around, I asked them for blog post ideas, as well as a slogan for a sign to be hung above a brick-and-mortar store.

Gemini came up with some really impressive blog post ideas – I’ve asked several free and paid chatbots this query and I’ve never seen one come up with ideas like “baking with unexpected ingredients” or “copycat recipes”. It feels like there’s some level of understanding in that answer about the type of content humans like to engage with online.

Oddly enough though, I don’t think Gemini’s slogans for the sign for the mountain bike shop I described in the prompt above were anywhere near as good as ChatGPT’s.

Some of Gemini’s answers sound a bit odd (e.g. “Mikes: Your Montana Bike Connection”, “Get Geared, Get Going”), while ChatGPT’s are all pretty serviceable slogans, even if some are more hit and miss than the others.

Best Answer? Tie

5. Ethical reasoning

I posed both Gemini Advanced and ChatGPT Plus a deliberately more high-stakes ethical dilemma than the one I gave to the free versions. It is sometimes referred to as the “ticking time bomb” dilemma in philosophy:

“You capture a terrorist who has planted a bomb in a mall, which will go off in an hour and kill everyone inside. Is it morally justifiable to torture him until he reveals the location of the bomb?” 

Up first is Gemini’s answer. It articulately sets out a straightforward case for why torture should not be applied in this instance, or in any instance for that matter.

Gemini ethical reasoning

Interestingly, ChatGPT went a completely different route, taking on more of an “educator” role. Rather than coming down on one side of the debate and giving us a definitive answer like Gemini, it’s instead provided us with arguments for and against utilizing torture in this situation. The response took into account a broader range of views, explaining the different approaches and outlining what’s at play.

This is a hard one to call. Gemini’s answer attempts to avoid torture at all costs, and shows more personality and opinion – it’s convincing and compelling. But I think we will be better served by chatbots and other AI tools if they help us to understand the political and philosophical problems by explaining how humans have approached them previously and the reasons why, rather than providing their own viewpoints.

As this is an ethical dilemma with well-trodden perspectives, and Gemini’s response is not as educational as ChatGPT’s answer, we’ll give this one to OpenAI’s pride and joy, ChatGPT.

torture dilemma: ethical reasoning with chatgpt

Best Answer? ChatGPT Plus

6. Creating math questions

Next, I asked Gemini and ChatGPT to create a set of math questions that would be suitable for a 10-year-old. The aim is to see if the chatbots will generate sensible questions appropriate for the age specified in the brief. Here’s Gemini’s effort:

Gemini creates math questions

Below is ChatGPT’s response to the same request. Although there’s not much to separate the two at all, it looks like the questions it provided are marginally more challenging than the ones generated by Gemini, and they seem closer in difficulty to the math questions available for fifth graders on K-5 learning sites that provide learning materials.

chatgpt generates maths questions

Best Answer? ChatGPT Plus

7. Helping out at home

Can ChatGPT and Gemini come up with tasty recipes based on what I have in my fridge? I asked them both to come up with recipes with the same list of ingredients. Gemini suggested some recipes that look great – although it suggested that I serve them with ingredients I didn’t have, such as rice:

Unlike Gemini Advanced, ChatGPT Plus tried its best to exclusively use the ingredients I’d listed – aside from olive oil, which came with a little note acknowledging I hadn’t mentioned having any. It also provided one more recipe than Gemini Advanced.

The extra recipe, and the acknowledgment that it had suggested an ingredient I didn’t say I had, mean ChatGPT wins this one!

Best Answer? ChatGPT Plus

8. Providing complex instructions

For this task, I asked both chatbots to provide me with instructions for building a website. However, I kept my directions pretty sparse, as I wanted to see how clear and easy to follow their responses would be, and whether they’d include all the core steps you’d generally have to take during this process without being asked for them. Here’s Gemini’s response:

I actually think the detail Gemini goes into here, especially when it comes to the pros and cons of different website building methods, is genuinely helpful info that could help a newbie decide on which path to go down.

While ChatGPT’s answer to the same query isn’t incorrect or useless, it definitely omits some of the details provided by Gemini, giving a bigger-picture overview of the steps in the process.

Although both answers are respectable, I think if you were actually turning to these chatbots to find out everything you had to do to build a website, you’d find Gemini’s answer the more helpful one.

Best Answer? Gemini Advanced

9. Editing text

I asked ChatGPT and Gemini to make this extract from the MIT website funnier, to see how they fared when it came to editing and rewording ideas with a specific purpose in mind, rather than simplifying or condensing text. Gemini’s attempt, while retaining a lot of the factual information, wasn’t very funny to read:

Gemini make it funnier

ChatGPT Plus’s effort is extremely similar, covering all of the same ground and including basically all of the same information. While they both make for interesting reads, neither chatbot was too adventurous, so it’s hard to separate them.

chatgpt doing some editing

Best Answer? Tie

10. Sourcing facts (1)

Now that Gemini and ChatGPT are both connected to the internet, I wanted to run a few tests to see how good they were at retrieving factual information.

For the first phase of this two-stage test, I asked ChatGPT and Gemini a straightforward historical question: how many capital cities has Japan had during its history? Strangely, Gemini only gave us two capitals in its answer:

Asking gemini about japan's capitals

ChatGPT, on the other hand, names several more capitals on its list, and all things considered, its answer is a lot more accurate. While Gemini tends to produce easier-to-read answers, it seems to have sacrificed a bit too much detail on this one.

chatgpt naming japanese capitals

Best Answer? ChatGPT

11. Sourcing facts (2)

Much like the hummus question that I asked the free versions of Gemini and ChatGPT, this question is designed to see what the two chatbots do when presented with a question that doesn’t have a definitive answer.

Both chatbots recognize the difficulties with pinpointing the origin of bread to one particular area, but both reference the same area of the globe as the general location. Gemini’s answer, as usual, is formatted nicely:

asking gemini where bread originated

ChatGPT’s answer is a lot longer, but rather than add that much extra information, it feels like it just takes a little longer to make effectively the same point. No extra marks here, but also none lost either.

Asking chatgpt where bread originated from

As both chatbots directly addressed this tricky question in a balanced way and included virtually the same information to justify their reasoning, we’re going to have to chalk this one up as a draw.

Best Answer? Tie

Gemini Advanced vs ChatGPT Plus: Image Generation

In this section, we’ll have a look at ChatGPT Plus’s and Gemini Advanced’s ability to generate images. ChatGPT Plus has been fully integrated with DALL-E for a while now, which means users don’t even have to leave the main interface to generate imagery. Recently, the company announced that Sora, a new AI video generation model, is on the horizon.

Google, on the other hand, recently found itself in hot water for generating racially diverse Nazi soldiers and other imagery where the ethnicity of the humans included did not “match up” to the society, civilization, or period specified.

This means that at the moment, Gemini will refuse to generate images of people. While this is a big downside, I still think Gemini Advanced generates better imagery. To test the controversial image-generating potential of both chatbots, I asked them to generate five different types of imagery each. The five image categories I chose were:

  1. Creating Art
  2. Logo/Brand Generation 
  3. Creating a Fictional State Flag
  4. Wildlife 
  5. Hyperrealistic imagery

1. Creating art

First, I wanted to see if Gemini and ChatGPT could generate works in the style of a legendary painter. Gemini Advanced responded with three images, and you can see below that it’s got quite a good grasp of Van Gogh’s iconic brushstrokes.

Gemini creates AI version of Van Gogh

While ChatGPT is also on the money when it comes to the style, the images just don’t look as impressive – they look more like they’ve been generated by a computer than Gemini’s do.

Best Answer? Gemini Advanced

2. Logo generation

For this test, I asked Gemini and ChatGPT to create a logo for a fictional pickleball team. Two of the three logos Gemini provided look pretty impressive, as long as you like the “pickle” angle, that is…

Gemini creates a logo for fictional pickleball team

This is a much better job than ChatGPT managed. ChatGPT Plus really seems to struggle when it comes to generating images with words on them – as you can see here, it didn’t spell my fictional team name correctly. Gemini Advanced, on the other hand, doesn’t have this problem.

Detroit Dropshotters logo

Best Answer? Gemini Advanced

3. Creating a new fictional state flag

This task was set to test how inventive both chatbots would be when asked to create a flag for a fictional 51st US state. I wanted to see how able they were to incorporate symbolic concepts into the flag design. Gemini came up with the fictional state of “Nova”, using a similar color scheme to the US flag:

Gemini generates a flag

Gemini told me that the yellow circle represents Nova’s position at the center of the nation, something I’d mentioned in the prompt. It’s easy to see the straight line between my instruction and its creativity.

While it’s nothing special, it’s a damn sight better than ChatGPT’s attempt, which looks cartoonish and low quality. Rather than focusing imaginatively on what the imaginary state could represent, it seems to have just mashed together lots of common American imagery across different iterations of the flag.

imaginary flag

Best Answer? Gemini Advanced

4. Wildlife

After being wowed by the Sora videos released by OpenAI, I wanted to see how good these two chatbots were at creating images of wildlife. Gemini didn’t really provide a good picture of a pride of lions, focusing instead on individual lions.

ChatGPT, on the other hand, stuck more closely to the brief, and in this case, that gives it the edge. It’s a better pride of lions than any of the images Gemini generated.

chatgpt generated AI image of wildlife

Best Answer? ChatGPT Plus

5. Hyperrealistic imagery

Finally, I wanted to see how good Gemini Advanced and ChatGPT Plus were at creating hyperrealistic imagery, so I asked both chatbots to create images of the Empire State Building, and specified that I wanted them to look as real as possible.

Gemini produces hyperreal image of empire state building

Gemini’s images do look pretty real – particularly the first two it generated. The detailing on the smaller buildings surrounding the Empire State Building is particularly impressive.

Crucially, it’s a hell of a lot more real-looking than ChatGPT’s effort, which doesn’t look real at all.

Chatgpt generates image of empire state building

Best Answer? Gemini Advanced

Gemini vs ChatGPT: User Experience

There’s really very little in this one – both ChatGPT and Gemini are super simple to use. All you have to do is type in a prompt, and both bots will generate an answer; it’s hard to go wrong when all you’re doing is inputting prompts.

In terms of readability, Gemini is the better chatbot in 2024. While still very readable, ChatGPT’s paragraphs are chunkier than Gemini’s, which seems to have more diverse formatting options, at least judging from the answers I’ve seen them both generate.

One thing I noticed when using Gemini was that it seemed to steer me toward using the chatbot in a useful and sensible way. As you can see from the image below, when I asked Gemini Advanced where bread originated, it suggested I check the answer using Google, and provided some related queries.


Gemini suggests I double-check its answer. Image: Tech.co

However, one thing ChatGPT has in its favor is that you can sign in using any account you like, whereas Google will only let you sign in with a Google account. For those without one, Gemini’s setup time will be slightly longer than ChatGPT’s.

You can also turn off your chat history with ChatGPT, meaning any unsaved chats will be deleted after 30 days and not used for training the model. Gemini just warns you that your chats may be read by humans, and there’s nothing you can do about it.

OpenAI vs Google AI/DeepMind: The Companies Behind the Bots

OpenAI and Google DeepMind (also known as Google AI) are the companies spearheading generative AI development in the Western world, but they operate very differently and are owned and funded by different companies.

OpenAI is the AI startup behind ChatGPT. Founded in 2015, the company counts Elon Musk among its co-founders, though he later left due to a conflict of interest. In 2019, Microsoft came on board as a partner and invested $1 billion.

OpenAI released ChatGPT back in November 2022, but the GPT language model has been available since at least 2020 as a private beta. In fact, the immense success of ChatGPT took the company by complete surprise, and it’s been scrambling to keep up with demand ever since.

Google AI is the division of Google’s workforce that worked most closely on Bard/Gemini. The Google AI team works on large-scale AI projects including Google Brain and Google Assistant.

Using Gemini and ChatGPT at Work

Since ChatGPT’s release in November 2022, companies in the tech sector and beyond have been finding innovative ways to harness its abilities to make their work lives easier. But considering its power and ability, there are some things all businesses using AI should keep in mind.

Firstly, ensure that your staff is aware of what they can and can’t use ChatGPT for. Generating Google Sheets formulas is one thing, but using ChatGPT to write entire articles or generate content raises a myriad of difficult questions relating to plagiarism and editorial integrity. Having clear guidelines will ensure you’re not fighting AI-induced fires further down the line.

Secondly, ensure your staff is aware of ChatGPT’s terms and conditions, as well as the precautions they should take while using it. Anything you type into ChatGPT can technically be used to train the model – so everyone using it needs to remember that ChatGPT saves their data, and think carefully before inputting any information. If you’d like to improve your restaurant’s secret sauce recipe, for instance, I wouldn’t suggest typing it into ChatGPT.

If you’re not using ChatGPT at all, now might be the time to start. It’s saving some businesses valuable time and resources already, and it’s only going to become more intelligent, and more capable than its current iteration – the recent release of GPT-4 illustrates just how quickly this phenomenon is developing. Our advice? Watch this space.


Written by:
Aaron Drapkin is a Lead Writer at Tech.co. He has been researching and writing about technology, politics, and society in print and online publications since graduating with a Philosophy degree from the University of Bristol five years ago. As a writer, Aaron takes a special interest in VPNs, cybersecurity, and project management software. He has been quoted in the Daily Mirror, Daily Express, The Daily Mail, Computer Weekly, Cybernews, and the Silicon Republic speaking on various privacy and cybersecurity issues, and has articles published in Wired, Vice, Metro, ProPrivacy, The Week, and Politics.co.uk covering a wide range of topics.