Best Free AI Training Courses for Vibe Coding in January 2026

Kick off your new year with coding: Here are the top online courses and materials to help you craft an app from scratch.

The term “vibe coding” is relatively new: It refers to the process of creating new software applications entirely (or mostly) through AI prompts. The generative AI tools are trained on other apps’ code, so they can reproduce similar lines of code, saving you the trouble of figuring out how to write it yourself.

The process can be dramatically faster than writing code by hand, and it lowers the barrier to entry. Sounds great, right?

However, there’s a catch: Vibe coding is far from a perfect process. It’s a way to create an app that “mostly” works. You’ll need to hone your own bug-spotting skills in order to fine-tune the application, and that’s only if you know how to deliver the right prompts in the first place.

If you’re trying to start vibe coding in 2026, we’ve already rounded up the top vibe coding tools and prices to know. Here, we also have the best free resources and video tutorials available online.

Base44: Quick Start Guide

Length: At your own pace

Our Base44 review found the no-code platform to be a beginner-friendly option that’s fairly priced with decent integrations.

It’ll let users generate an unlimited number of apps, with full-stack capabilities and domain connectivity, and apps can have helpful functions like authentication and user management baked in, as well.

 


Better yet: The product’s website comes with a collection of free written guides that are available to everyone, not just paying users.

The documentation is all hosted online for anyone to scroll through, no download required. You’ll learn how to build an app, use the Base44 editor, and manage the credits that the platform uses to power each AI action.

The online materials also include a prompt guide, prompt library, and a collection of app templates to get you started.

Head over to the link to start combing through the Base44 quick start guide right now.

Codecademy: Intro to Vibe Coding

Length: 1 hour

Looking for a general-purpose overview of vibe coding? Look no further: Codecademy has a quick, hour-long crash course on the topic.

With this beginner-level option, you’ll learn which approaches most effectively deploy an AI coding assistant, how vibe coding can help you build new personal projects, and how to tell when to use AI assistance in your coding and when you’ll need to skip it.

You’ll wrap up with a quick quiz to test your knowledge retention, and you can even get a certificate of completion if you’re paying for the academy’s Plus or Pro plans. Technically, this course isn’t offered for free, but you’ll just have to sign up for a free Codecademy trial to access it. Since the course only takes an hour to complete, you’ll have plenty of time to cancel the trial afterward without paying a cent.

Check it out at Codecademy and get started today.

University of Chicago: Vibe Coding Fundamentals

Length: 5 hours

Serious vibe coders may need more than just an hour-long overview, and the University of Chicago has the course for them. These video training sessions take a full five hours to complete, so you’ll need to dedicate a Saturday to the process.

Once signed up, you’ll need to work through three modules: Intro to Vibe Coding, Understanding Vibe Coding, and Getting Results.

Along the way, you’ll tackle concepts including how both beginning and experienced developers can benefit from vibe coding, as well as how to set up an AI-assisted coding environment.

The three AI coding platforms you’ll learn about are all ones that we’ve reviewed and approved of here at Tech.co — click through for our reviews and comparisons of Replit, Lovable, and Bolt.

Then, simply jump straight to the training course itself, which is hosted on the online learning platform Coursera here.

Microsoft’s Introduction to Vibe Coding

Length: 1.5 hours

Not every course is entirely for beginners, even for a process as simple as vibe coding. This module comes with a few requirements: First, a basic understanding of the software development process, and, second, a little experience developing prompts for large language models (like ChatGPT).

Since it’s created by Microsoft, this free course is aimed at helping users with the GitHub Copilot Agent, one of the most well-known AI tools under the tech giant’s purview.

By the time you wrap up the quick session of learning, you should be able to grasp how vibe coding can help aid software development, how to craft effective prompts for GitHub Copilot specifically, how to create product requirements and wireframe diagrams, and how to generate a prototype app.

Head over to the Microsoft learning website to get started now.

DeepLearning.AI: Vibe Coding 101 with Replit

Length: 2 hours

According to the course description, users will learn to build and share two different applications, a website performance analyzer and a voting app, while customizing their code with the aid of AI tools. They’ll learn “the principles of agentic code development” alongside skills needed to build and host apps.
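For a sense of the kind of starter project the course has in mind, here’s a minimal sketch of a website performance analyzer in Python. The function name and the metrics it reports are our own invention for illustration, not the course’s actual code:

```python
import time
import urllib.request


def analyze(url: str, timeout: float = 10.0) -> dict:
    """Fetch a URL and report a few basic performance metrics."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=timeout) as response:
        body = response.read()
    elapsed = time.perf_counter() - start
    return {
        "url": url,
        "bytes_received": len(body),  # size of the response body
        "load_seconds": round(elapsed, 3),  # rough fetch time
    }
```

Calling `analyze("https://example.com")` returns the page size and a rough load time; a course project would layer prompts, iteration, and a front end on top of a core like this.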

The specific coding agent taught by this course is the Replit platform. We’ve covered its functions and compared them against competitors in a guide by Tech.co senior writer Gus Mallett on how rival vibe coding platform Base44 stacks up to Replit.

Replit is best for users who already have some understanding of coding, we determined, and so it’s no surprise that this particular training course covers concepts that might be intimidating to newcomers.

For example, you’ll be using product requirement documents and wireframes in addition to the right prompts as you build and iterate on your prototypes.

Get started figuring out Replit by heading over to the DeepLearning.AI website here and enrolling for free.

Vibe Check: Is AI Coding for You?

Vibe coding didn’t exist two years ago. Now, it’s been named the 2025 word of the year by Collins English Dictionary, beating out such esteemed competition as “biohacking” and “glaze.”

In 2026, there’s nothing stopping you from creating your own websites and apps. But that also means that there’s nothing stopping anyone else.

To stand out in the AI coding gold rush, you’ll need some common sense and a few good ideas, and there’s no better way to get them than a little studying. Here at Tech.co, we’ve drummed up some vibe coding guides that can help you with the basics, from our look at how to vibe code a website in five steps to our full list of no-code options for app creation.

Coders, newbies, and total amateurs can all get started today, and any one of the above training courses can kick off your journey in a single day with the greatest coding tool of all: A little knowledge.

Written by:
Adam has been a writer at Tech.co for nine years, covering fleet management and logistics. He has also worked at the logistics newsletter Inside Lane, and has worked as a tech writer, blogger and copy editor for more than a decade. He was a Forbes Contributor on the publishing industry, for which he was named a Digital Book World 2018 award finalist. His work has appeared in publications including Popular Mechanics and IDG Connect, and his art history book on 1970s sci-fi, 'Worlds Beyond Time,' was a 2024 Locus Awards finalist. When not working on his next art collection, he's tracking the latest news on VPNs, POS systems, and the future of tech.

Mark Cuban: AI Is Stupid But Your Business Will Fail Without It

It's hard to deny that AI is a bit dimwitted at times, but businesses still need it to stay competitive in 2026.

Key Takeaways

  • Famous entrepreneur Mark Cuban states that “AI is stupid,” pointing to its many errors and lack of basic judgment as serious flaws.
  • However, he also explained that businesses that don’t adopt the technology are going to fail.
  • There are many ways that businesses can employ AI to improve productivity and stay relevant in 2026.

Mark Cuban isn’t mincing words when it comes to AI: The widely known business icon states that the technology is “stupid,” but that businesses that don’t embrace it are doomed to the ash heap of failed enterprises.

It’s hard to argue with either of Cuban’s points, honestly. AI is infamous for its many errors and hallucinations, but it seems to be the only thing driving the economy in 2026, so businesses clearly need to get on the train or get off the tracks.

Luckily, there are dozens, if not hundreds, of helpful AI tools that can help businesses stay relevant in the modern era, and hopefully Mark Cuban doesn’t think they’re stupid, too.

Cuban: AI Is Stupid

In a discussion with Adam Joseph, the founder of Clipbook, a content and workflow company that Cuban is invested in, the entrepreneur dove straight into the name-calling in regard to the world’s current innovation infatuation.

“AI is stupid, but it’s somebody who’s a savant that remembers everything. It does a really good job of assembling all those things that it collected and presenting that just as somebody who has a great memory.” – Mark Cuban

Let’s be honest, he’s right. AI is still more defined by its shortcomings in 2026, with more and more stories surfacing of the technology getting it wrong in a serious way. From deleting entire company databases to thinking a clarinet is a gun, the technology simply is not as good as tech firms need it to be.

Cuban: Your Business Will Fail Without AI

While Cuban doesn’t believe that AI is any more capable than a person with a really good memory, he also notes that its utility is far too valuable for businesses to pass up.

“There’s going to be two types of companies: those who are great at AI, and everybody else. And the ‘everybody else’ is going to fail because AI is such a transformative tool.” – Mark Cuban

Simply put, the stupidity of AI doesn’t make it any less valuable for businesses that know how to use it to improve productivity and streamline operations.

How to Use AI at Your Business

In 2026, there aren’t many business software options that aren’t equipped with AI. Salesforce has its AI agents to help you make sales, Gmail has its generative features that can write emails for you, and pretty much every platform under the sun is equipped with the technology.

Still, if you want to make a concerted effort to embrace AI, there are some more concrete steps you can take. For building apps, for example, vibe coding platforms have become incredibly popular, allowing users to create, customize, and even preview apps with natural language prompts.

 


All in all, AI is still in its infancy, and it’s more important to be using AI in general than debating how to do so. Because in just a few years, this technology is going to be running the show.


Security Experts’ Dire Warning on AI Agents in 2026

Researchers predict that hackers will target AI agents this year. Here's how you can keep your business safe.

Key Takeaways

  • According to cybersecurity researchers, AI agents will become a main attack vector for hackers in 2026.
  • The cybersecurity skills gap will lead to companies deploying different AI tools en masse, which will encourage attackers to switch their focus from human operators to AI agents.
  • To combat this, companies should invest in technological safeguards, as well as upskill their existing employees.

AI agents will be one of the biggest new attack vectors for cybercriminals in 2026, experts predict. According to security professionals at Palo Alto Networks, the rise of AI agents opens up a compelling new channel for attackers to exploit.

The study contends that the yawning skills gap in cybersecurity paves the way for mass deployment of AI agents. In turn, this will lead hostile parties to change the point of their attack from humans to the agents themselves. Problematically, these agents are always-on, and are thus always at risk of exploitation.

The experts also predict that this shift in the threat landscape will necessitate a new “non-negotiable category of AI governance tools” to safeguard the agents and provide a kill switch in case they are breached. And with data breaches often proving terminal for a lot of companies, the stakes could not be higher.

Hackers Will Target AI Agents in 2026

Cybersecurity experts predict that AI agents will become a key target for cybercriminals in 2026.

In 6 Predictions for the AI Economy: 2026’s New Rules of Cybersecurity, the study authors draw attention to the massive skills gap in cybersecurity, which currently stands at 4.8 million unfilled positions, arguing that it will lead companies to adopt AI agents at a notable rate.

 


According to recent research, 88% of cybersecurity professionals have experienced at least one “significant” consequence as a result of this labor crisis. It is expected, therefore, that enterprises will look to AI agents as the silver bullet solution to their security and staffing woes.

The upsides are clear. Agents can resolve service tickets and process complex workflows in a fraction of the time that humans need. In turn, this will encourage cybercriminals, who are notorious for reinventing their tactics, to switch their focus from human operators to AI agents.

AI Agents Pose Significant Cybersecurity Risks

On the other side of the coin, AI agents command their share of risks. The researchers claim that such models can serve as a potent “insider threat” should they fall into the wrong hands. If misused, they can be granted privileged access to highly sensitive data, including critical APIs, customer information, and cybersecurity infrastructure.

In addition, AI agents are “always on,” meaning that they are vulnerable to hacking at all hours of the day. While humans, including cybersecurity professionals, work set hours, cybercriminals can gain access to an AI agent whenever they please. This makes it simpler for international hackers to target US businesses.

The study further argues that, because of this mass rollout, companies will look to introduce new safeguards to inoculate themselves against the new attack vector. According to the report: “This will be the dividing line between agentic AI success and failure.”

Technology, Education Needed to Combat Security Crisis

With 2026 now underway, the technology sector faces an urgent concern: how to cope with the escalating cybersecurity crisis. AI tools are more embedded into businesses’ workflows than ever, with 28% of employees confirming that they would use them at work even if explicitly prohibited.

While the benefits of ChatGPT and its contemporaries are well-documented, so too are the potential risks. Therefore, businesses are under pressure to not only introduce new technological safeguards but also to upskill their employees on identifying and reporting cyberattacks.

Shockingly, only 2% of senior leaders can identify all the signs of a phishing attack, spotlighting a problem that exists right across the business. If we are to turn the tide on hackers, we need to adopt a two-pronged approach that emphasizes investment in new technology and improving our collective cybersecurity smarts.


Best Free AI Training Courses for January 2026

Many of the top free AI training courses can be completed in a single afternoon, giving you much-needed insights.

Making a new January resolution to finally figure out what’s going on with all this AI nonsense? Here are the free online courses that might be able to help you out.

In 2026, AI faces yet another pivotal year: CEOs are set to keep spending oodles on AI for the next 12 months despite signs of poor returns, a new report has confirmed.

Tech corporations and companies everywhere are packing AI tools into every product they possibly can, even while ongoing discussions suggest many users are wary of AI “slop” — a term that Microsoft CEO Satya Nadella recently decried following Merriam-Webster’s choice to name it 2025’s word of the year.

Anyone who wants to dive into AI in the new year needs to be armed with knowledge about its benefits, drawbacks, and ethical concerns. We don’t recommend just asking ChatGPT about it, either: You’ll be best off with a series of online lessons, delivered as video lectures from some of the smartest minds in artificial intelligence.

GenAI Basics – How LLMs Work

⌛Length: 1 hour

Duke University is behind this just-the-basics course, which takes students through a quick tour of how generative AI models are built and trained, along with some explanation of the data science process in generation.

It’s not just a video, however: This course presents users with a “simple interactive demo” that won’t require any installations or software downloads. This makes it one of the best ways for a hands-on learner to wrap their mind around how generative AI models work. Some people tend to tune out when it comes to endless video-based lessons, but being able to directly interact with a model can allow new information to truly sink in.

Granted, you’ll still have a video. The specific format of this one is a split-screen, with a user’s interactive work area on one side and the video instructor on the other. By the end, you’ll be able to load language models for vectorization and combine similar words and sentences.
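To make the idea concrete, here’s a toy illustration of what “vectorization” means. It isn’t the course’s demo: The three-number vectors below are invented stand-ins for the hundreds of dimensions a real language model learns.

```python
import math

# Invented toy "embeddings" for illustration only.
vectors = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.9, 0.2],
    "apple": [0.1, 0.2, 0.9],
}


def cosine(a, b):
    """Cosine similarity: closer to 1.0 means more alike."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)


print(cosine(vectors["king"], vectors["queen"]))  # close to 1.0
print(cosine(vectors["king"], vectors["apple"]))  # much lower
```

“Combining like words” then amounts to grouping the words whose vectors point in similar directions.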

You’ll be able to follow along at your own pace, although it will likely take you just an afternoon or less to get through the lesson. Check it out now for no cost over at Coursera.

Enhancing UX Workflow With AI

⌛Length: 3 hours

Perhaps you already know the basic building blocks of AI, but you aren’t sure how to use AI for your own field (or even if you can). If so, you’ll need to go a bit deeper into AI with an intermediate-level course.

We can’t list every single version of a field-specific training course, so you’re on your own when it comes to training for your particular vocation. Still, here’s one example of what to look for: an online video course aimed at a more specific segment of the white-collar workforce, user experience experts.

 


Over the course of three hours, you’ll be taught how to integrate AI into UX design for greater personalization. Some of the tools and trends you’ll learn about cover the ways AI can enhance design or image creativity, AI-assisted writing, user research analysis and testing, and the ways that the technology can streamline crafting user personas and journey maps.

Check it out and get started today, over at the Uxcel platform here.

If you’ve made a resolution to learn a new skill this year, why not check out our vibe coding reviews? We’ll tell you how to get started with a project today, without coding experience!

AI Fundamentals: From Basics to Generative AI

⌛Length: 2 hours

If you’re looking for a slightly longer guide for AI amateurs, consider this course from online learning platform Udemy: It takes users through two hours of material covering topics including machine learning, generative AI, and practical applications of large language models (LLMs).

By the time you finish, you’ll be well-versed in generating AI content and images. Individual courses within the lesson plan will cover the history of AI, how generative AI works, and prompting techniques that you can use to get the right results.

You can check out the course for free over at Udemy here.

Vibe Coding 101 with Replit

⌛Length: 2 hours

Coding with significant help from AI can be termed “vibe coding,” and it comes with its own set of pros and cons. We’ve covered some of the top vibe coding platforms and found Replit is a top pick for users with an above-average background in the technical side of coding.

Now, you can jump into using Replit with a free course that can guide you through the basics of the platform within a few hours.

Among the things you’ll learn in this course are how to build and share both a website performance analyzer and a voting app by using AI to debug and customize your code. Along the way, you’ll pick up the principles of agentic code development and learn the ways in which product requirement documents and wireframes factor into your applications.

The whole course was created in partnership with Replit itself, and is available free from DeepLearning.AI over here.

AI Foundations for Everyone Specialization

⌛Length: About 33 hours

That’s right, this IBM-led course is over ten times longer than any other course listed in this guide! Sometimes, you really need to dedicate a significant chunk of time in order to gain the real-life experience you need.

At least, that’s what the more than 70,000 students enrolled in this suite of beginner AI courses thought. You’ll work your way through entire courses dedicated to topics including “Generative AI: Introduction and Applications,” “Generative AI: Prompt Engineering Basics,” and “Building AI Powered Chatbots Without Programming.”

IBM’s legacy as a dominant US computing company lends plenty of credibility to this free online video series, which is available through Coursera. You can even sign up for a single course at a time, pacing yourself a little in order to ensure you don’t bite off more than you can chew.

Head over to Coursera now to sign up for these lessons.

Grappling With AI in 2026

Are AI tools inevitable? Some argue that the tech-investment bubble will pop at some point in the next year, leaving companies suddenly much less interested in pushing the new tech on consumers.

Even if AI does continue on in some form, the process of adopting it will likely be slow: It took decades for desktop computers to catch on, and decades for smartphones to fully saturate the market. AI tools might well be making similar progress for years to come, as ever-increasing numbers of workers find new ways to incorporate them into their routines.

If you’re among them, don’t miss out on the past resources we’ve compiled to help AI newbies avoid the common pitfalls of artificial intelligence: Check out our guide to spotting AI-generated content, our history on AI hallucinations, and the latest news on AI’s impact on workplaces everywhere.


Report: CEOs Remain ‘All-In’ On AI, Despite Patchy Returns

The new survey from Teneo reveals that CEOs remain hopeful on the future of AI within the workplace.

Key Takeaways

  • A new report has revealed that CEOs will continue to spend big on AI in 2026.
  • This is in spite of the fact that less than half of AI projects have generated more in returns than they initially cost.
  • Overall, both CEOs and investors remain optimistic about the future of AI.

A new report has revealed that big company CEOs plan to continue investing big in AI in 2026.

This comes in spite of the fact that very few AI initiatives have yielded meaningful returns on investments thus far.

However, both investors and CEOs believe significant gains can still be made, and that AI will benefit companies across multiple areas of their business.

CEOs Will Keep Spending on AI Despite Poor Returns, Says New Report

According to a new report from advisory firm Teneo, CEOs of some of the world’s largest companies plan to spend more on AI in 2026, despite many of them not yet seeing meaningful returns on their investments.

In particular, 68% of CEOs surveyed claimed they planned to increase their spending on the technology, even though less than half of AI projects have generated positive returns.

 


The survey consisted of more than 350 public-company CEOs, and was conducted from mid-October to mid-November.

AI Yet to Generate Any Meaningful Returns on Investments

Overall, the report states that less than half of AI projects have generated more in returns than they initially cost. Ultimately, while AI spending has been loud, the noise has not correlated with the payoff.

In terms of where CEOs have found the most success with AI tools, it has been within marketing and customer service departments. Respondents also reported challenges when using AI in higher-risk areas, such as security, legal, and human resources.

Teneo CEO Paul Keary notes in the report:

“AI innovation continues to be a top investment priority – 68% of CEOs are increasing investment, and 88% believe it is already helping them navigate disruption. But the clock is ticking, as investors start to demand real transformational change. For these leaders, disruption no longer signals risk – it signals opportunity.” – Paul Keary, Teneo CEO

AI Investors and CEOs Remain Optimistic in 2026

However, despite disappointing numbers in 2025, CEOs and investors in AI believe that the technology will bring great results.

While 84% of CEOs of large companies (defined as those with revenue of $10 billion or more) believe AI initiatives will start to bring returns on investments in more than six months, 53% of the 400 institutional investors surveyed believe it will take less time.

Likewise, CEOs remain optimistic about the wider impact of AI on their business. 67% believe AI will increase their entry-level headcount, despite evidence suggesting AI is currently doing the opposite. Plus, 58% also believe AI will increase senior leadership headcount.


Millions Affected by Massive Credit Report Data Breach

The credit check and identity verification services provider 700Credit is the latest company to suffer a massive data breach.

Key Takeaways

  • 700Credit, which provides credit checks, identity verification, fraud detection, and compliance services to automotive businesses, has suffered a massive data breach, with more than 5.8 million individuals thought to have been affected.
  • Reportedly, the hackers gained access to a third-party API, which allowed them to access customer information between July and October 2025.
  • Data breaches continue to plague the business sector in the US, with companies feeling the sting of a cybersecurity talent shortage.

700Credit, one of the largest credit report and identity services providers for the automotive industry in North America, has suffered a wide-ranging data breach, with more than 5.8 million individuals thought to have been affected. The incident was identified on October 25, and was linked to a compromised third-party API on the company’s web application.

Reportedly, hackers seized information collected from automotive dealers between May and October 2025, including names, addresses, dates of birth, and Social Security numbers.

Data breaches continue to plague companies across the business sector, with 700Credit the latest in a long line of hacks this year that have affected companies of all sizes, including Google, Microsoft, McDonald’s, Adidas, Coca-Cola, and more.

5.8 Million Affected by 700Credit Data Breach

Credit report and identity verification services provider 700Credit has suffered a massive data breach, with more than 5.8 million people reported to have been affected. The hackers gained access in July 2025, and were able to access customer information from automotive dealers up until October 2025, at which point they were shut down.

According to 700Credit, the attackers were able to gain access through a compromised third-party API, which allowed them to access the sensitive data in question. This is thought to include names, addresses, dates of birth, and Social Security numbers. The company is currently in the process of contacting affected customers.

 


700Credit is the largest provider of credit checks, identity verification, fraud detection, and compliance services for automotive, marine, powersports, and RV dealers in North America. It has approximately 18,000 dealerships in its customer base.

No Evidence of Information Misuse At This Time, Company Confirms

At the time of writing, 700Credit confirms that there is no current indication of identity theft, fraud, or any other kind of information misuse. What’s more, the breach is limited to the 700Dealer.com application layer, meaning that the company’s internal network is unaffected and there is no impact on 700Credit’s ability to carry out operations.

In a statement, the company said: “The investigation determined that certain records in the web application relating to customers of its dealership clients were copied without authorization.”

However, the company and its clients may not be out of the woods yet. If the breach is the result of a ransomware attack, demands may still be issued at some point in the near future. As recent history shows, ransomware demands can prove highly damaging for companies, with serious financial and reputational repercussions.

Business Sector Woefully Underprepared for Current Threat Landscape

The 700Credit news is yet another reminder of the importance of adequately safeguarding customer information in the face of a tidal wave of security breaches. Recently, it was reported that 88% of cybersecurity professionals had experienced at least one “significant impact” due to an ongoing shortage of talent on the job market.

In response, a lot of businesses are putting their trust in AI tools and systems to combat the rising tide. Undoubtedly, the technology has a role to play, but it is vital that companies make sure that it is deployed correctly, and that they don’t desert their basic cybersecurity duties.

According to our own research, a staggering 98% of senior leaders are unable to identify all the signs of a phishing attack, illustrating that a talent shortage isn’t the only problem that the sector is experiencing. Companies need to do more to upskill their existing employees right across the business — or this problem will only continue to get worse.

Written by:
Adam has been a writer at Tech.co for nine years, covering fleet management and logistics. He has also worked at the logistics newsletter Inside Lane, and has worked as a tech writer, blogger and copy editor for more than a decade. He was a Forbes Contributor on the publishing industry, for which he was named a Digital Book World 2018 award finalist. His work has appeared in publications including Popular Mechanics and IDG Connect, and his art history book on 1970s sci-fi, 'Worlds Beyond Time,' was a 2024 Locus Awards finalist. When not working on his next art collection, he's tracking the latest news on VPNs, POS systems, and the future of tech.

How to Make an Unforgettable First Impression in Business

From business cards to custom notebooks, making an impression at events can go a long way for your business.

In the business world, first impressions are everything. Given the short attention span of people in 2025 and the steep competition in virtually every industry under the sun, the only real way to cut through the noise is to make sure that your brand is thought of favorably right out of the gate.

While a firm handshake and a confident smile can go a long way, the reality is that printed products, marketing materials, and other branded items have a storied history of moving the needle when it comes to making a good first impression. From business cards to brochures, these tools will keep your brand front of mind for those lucky enough to come across them.

What kind of items, you ask? Well, at MOO, you’ll find a wide variety of high-quality products that can help you make an unforgettable first impression, so your business can start making an impact like you know it can.

Moo logo
Give Them a Reason to Keep Your Card In a world of digital noise, a physical reminder can help you stand out and show your business story.
Expertise, printed.

Key Takeaways

  • Business cards haven’t gone out of fashion; in fact, they can help you stand out
  • They can cut through digital noise and keep your brand front of mind
  • Incorporating QR codes can provide contacts with an instant path to your digital presence
  • Select premium finishes such as foil, spot gloss, or unique paper sizes to ensure your card is memorable

Business Cards: The Very First Impression

In a networking setting, you need to leave your new contacts with something to remember you by. Sharing social media handles, exchanging email addresses, and (heaven forbid) writing your business website URL on a napkin are all fine, but if you want to show potential customers or clients that you mean business, high-quality business cards will get the job done and then some.

With MOO, you can do just that. The printing service provides:

  • Multiple unique paper and size options
  • Special finishes, including foil and spot gloss
  • A different design on the back of every card with Printfinity
  • Next-day delivery to make sure that you’re ready in time for the big event

Does Anyone Still Use Business Cards in 2026?

If you think that business cards are old news, you couldn’t be more wrong. In 2026, business cards are very much still one of, if not the most effective means of making a good first impression, especially if you tailor them to the modern era. For example, MOO offers a feature that allows users to easily add a QR code directly onto their business card, which will make it even easier for people to find your brand in an instant.

Suffice it to say, business cards haven’t gone out of style. They remain a trusted tool for businesses around the world looking to get noticed when they need it most. All you have to do is give them a modern twist, and you’ll be making a good impression with your eyes closed.

A Lasting Impression: Getting Your Brand Out There

Business cards are a great place to start, but let’s be honest, they can’t shoulder the entire burden of getting your business the attention it deserves. Simply put, there are a lot more impressions to be made beyond the first one.

That’s where you’ll want to go beyond the business card with marketing materials like stickers and brochures. Stickers are all about getting your branding, logo, and company message out into the world, so that people recognize and remember you when the time comes to make a purchase. MOO offers a lot of different shapes and materials, like metallic, vinyl, and die-cut, so you can stand out amongst all the other stickers out there.

Brochures, on the other hand, will help you get more in-depth information about your company circulating. Whereas a business card is more of a reminder to follow up or reach out about a brand, the brochure provides as much information as necessary to get the potential customer or client interested in a meaningful way. MOO provides a variety of options here, as well, with all available brochures being of premium print quality.

The Gift: Your Best Impression Yet

There’s something to be said for the value of free stuff. While it might not seem like a big deal, the idea of leaving an event, a meeting, or a conference with something just feels good and memorable, particularly when it comes to the brand providing the merchandise.

MOO is equipped to handle this kind of post-event gifting, as well, with the tools you need to create custom planners, notebooks, notepads, drinkware, and pens. That’s right, your logo, slogan, or other branding will be front and center when these potential customers and clients wax poetic about the promotional products they got at the conference, meeting, or networking event.

Bringing Your Brand to Life

These may seem like nothing more than marketing materials, but in the real world, these are the kinds of small details that can be the difference between an exciting new client and business as usual.

From simple recognition to full-on closed deals, high-quality products like business cards, brochures, and custom stationery and drinkware help make your business professional. Simply put, they bring your brand to life for you and your potential customers and clients.


Study: AI in Manufacturing Expected to Grow 38.7% by 2030

The AI in manufacturing market is expected to grow from $7 billion to $35.8 billion over the next five years.

Key Takeaways

  • AI in manufacturing is expected to grow from $7 billion in 2025 to $35.8 billion by 2030.
  • That represents a massive 38.7% compound annual growth rate over just five years.
  • The report points to even faster adoption of the technology than experts expected.

A new report has estimated that AI in manufacturing could grow at a dramatic rate over the next five years, with projections as high as $35.8 billion by 2030.

AI technology has been injected into the entire business world over the last few years. Platforms are rolling out generative capabilities and executives are pushing for more in hopes of improving productivity across the board.

Now, estimates show that the manufacturing industry is seeing a serious uptick as well, and it could be a sign that AI is truly going to be all around us sooner than expected.

AI in Manufacturing Is on the Rise

According to the report from BCC Research, titled AI in Manufacturing: Global Markets, the market is expected to grow from $7 billion to $35.8 billion by 2030.

That’s a pretty big jump. In fact, the 38.7% compound annual growth rate would represent one of the more substantial growth projections across the business landscape.
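As a quick sanity check, the 38.7% figure reads as a compound annual growth rate rather than a total increase: compounding $7 billion at 38.7% per year for the five years from 2025 to 2030 lands almost exactly on the report’s $35.8 billion projection.

```python
base = 7.0    # AI-in-manufacturing market size in 2025, in billions of dollars
cagr = 0.387  # compound annual growth rate cited by the report

# Compound the base figure for five years (2025 -> 2030)
projection = base * (1 + cagr) ** 5

print(f"${projection:.1f}B")  # lands within rounding of the $35.8B projection
```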

 


The report also found that North America is currently the leading region for this kind of investment, although it notes that the trend is persistent in Europe, Asia, Africa, and South America as well.

Why Is AI in Manufacturing Growing So Much?

To be fair, AI is finding its way into businesses one way or another in 2025, and manufacturing is obviously no different. Still, there are some clear reasons why AI in manufacturing is growing this much.

“Driven by the need for production efficiency and smarter supply chain strategies, AI adoption in manufacturing is accelerating, offering cost effective solutions, predictive capabilities, and strategic insights for investors, solution providers, and policymakers to harness new growth opportunities.” – BCC Research report

Considering the issues with the supply chain over the last few years, driven by trucker shortages and tariff concerns, AI could be the silver bullet that closes the gap for businesses around the world.

The AI-ification of Business

Manufacturing certainly isn’t the only industry that has seen an influx of AI over the last few years. The tech industry has been exploding with the tech, but they aren’t alone either.

In fact, even government agencies have started to adopt AI in a meaningful way, with the FDA recently announcing that it would be using agentic AI tools for safety inspections and reviews.

Suffice to say, we’re still very much in the early stage of AI rollout in the business world, and if manufacturing AI growth is any indication, we’re in for a big shift in the coming years.


Disney and OpenAI Join Forces to Bring Characters to Sora

The deal will see Disney making a $1 billion equity investment in OpenAI and licensing its characters to the Sora video generator.

Key Takeaways

  • Disney has announced that it will be making a $1 billion equity investment in OpenAI.
  • The deal will also see a wide range of Disney’s copyrighted characters, including Mickey Mouse and Cinderella, available on the Sora AI video generator.
  • Copyright infringement has become a huge issue for AI models over the last few years, and this deal could set a precedent for future concerns.

The copyright problem for AI may have an expensive solution, with Disney announcing a new $1 billion deal with OpenAI that will bring the media company’s iconic characters to the Sora AI video generator.

AI models have been accused of copyright infringement pretty much from the get-go. Many of these generated images and videos bear a striking resemblance to existing IP, which could cause problems for AI companies in the near future.

OpenAI has addressed that issue head-on, though, by entering a partnership with one of the biggest media companies in the world.

Disney and OpenAI Partnership

According to the announcement from OpenAI, Disney is making a $1 billion equity investment in the AI company.

On top of that, Disney is licensing over 200 of its copyrighted characters from Marvel, Pixar, Star Wars, and of course, Disney to Sora, the AI video generator from OpenAI. The licensing agreement will last three years.


Disney will also become a major customer of OpenAI as a result of the deal, “using its APIs to build new products, tools, and experiences, including for Disney+, and deploying ChatGPT for its employees,” according to the announcement.

AI and Creative Media

The use of AI in creative media has been hotly debated over the last few years, particularly in regard to the potential for mass unemployment. This partnership could be the first step toward a more AI-driven experience at the movies, although the CEO of Disney says that people will still be at the helm.

“The rapid advancement of artificial intelligence marks an important moment for our industry, and through this collaboration with OpenAI we will thoughtfully and responsibly extend the reach of our storytelling through generative AI, while respecting and protecting creators and their works.” – Bob Iger, CEO of Disney

Now, let’s be honest: that’s just the kind of thing that CEOs are supposed to say when this kind of deal goes through. Whether Disney joins the ranks of tech firms laying off employees in record numbers remains to be seen.

Does AI Infringe on Copyright?

Given how young the technology still is, it’s difficult to say whether AI infringes on copyright law just yet, but it does look like a long-term problem for the tech.

For one, virtually all the major AI companies are facing some kind of lawsuit in 2025, with authors, artists, and other creatives taking to the courts to seek compensation.

Heck, Disney is suing Google for copyright infringement, noting in a cease-and-desist letter that the company has used its creative outputs to train AI models.

“Google has deeply embedded its infringing video and image AI Services into its broad family of products and services actively used by over a billion people. This multiplies the scope of Google’s infringement, and harm to Disney’s intellectual property, not to mention the ill-gotten benefits Google enjoys from its unauthorized exploitation of Disney’s copyrighted works.” – the cease-and-desist letter from Disney

These legal cases will likely establish what the copyright situation will be for AI moving forward, so they’re definitely worth keeping an eye on.


Logistics Companies Are Reducing Workforces Due to Low Freight Demand

The latest Tech.co data reveals that a drop in freight demand is causing a shift in how logistics companies stay afloat.

Key Takeaways

  • A monthly survey from Tech.co has revealed a drop in freight demand that is hurting logistics professionals.
  • This drop has led to external pressures becoming an even bigger problem for these companies.
  • Our data suggests that businesses are therefore turning to methods such as workforce reduction in order to keep themselves afloat.

According to the latest research from Tech.co, a reduction in freight demand is putting pressure on logistics professionals.

The drop in demand is also exacerbating existing pressures for companies, such as increasing diesel prices.

As a result, logistics companies have shifted their money-saving strategies, increasingly turning towards reducing their workforce and employee hours.

Reduced Freight Demand is Hurting Logistics Professionals

At a time of year when demand is usually high, freight companies are experiencing reduced demand, according to the latest monthly findings from Tech.co.

In our survey, 17% of sector professionals reported experiencing “low freight demand” through both October and November.

 


Other measurements, such as the Cass Freight Index, have reported similar numbers. Significantly, shipments were down 7.8% compared with October of the previous year.

Drop in Freight Worsening External Pressures

This shortage of demand has subsequently exacerbated the other financial pressures mounting on today’s logistics professionals.

This is indicated by our data, in which 20% of professionals claimed their key strategic priority was managing financial pressure, an 8% increase from September.

Diesel prices, for example, rose in November, according to the US Energy Information Administration, making fuel a significant budgetary constraint.

Logistics Businesses Take on More Extreme Survival Measures

As a result of this drop in demand, professionals are turning to more extreme measures to save costs, switching up the cash-flow improvement strategy that we’ve seen over the past few months.

Since September, the share of businesses looking to manage their financial situation by reducing employee headcount has risen sharply, by 15%. Likewise, the share reducing employee hours for the same reason has increased by 13%.

In turn, companies are moving away from methods such as financing or restructuring debt, which has decreased by 13% since September. As freight struggles continue, it is becoming more apparent to companies that new methods of survival might be necessary.


88% of Cybersecurity Pros Experience ‘Significant’ Impact of Skills Gap

The cybersecurity skills gap is causing havoc across the tech sector, according to new research from ISC2.

Key Takeaways

  • Nearly nine in 10 cybersecurity professionals (88%) have experienced at least one “significant” cybersecurity consequence due to the IT skills shortage, according to a new report.
  • The skills gap appears to be worsening year on year, even as the broader economic outlook improves.
  • AI poses a genuine solution to the current skills gap, as well as opening up the need for new roles and perspectives across the sector.

According to new research from the International Information System Security Certification Consortium (ISC2), nearly nine in 10 cybersecurity professionals (88%) have experienced at least one “significant” cybersecurity consequence in their organization due to a skills gap.

The study surveyed 16,029 cybersecurity professionals from across North America, Latin America, the Asia-Pacific region, Europe, the Middle East, and Africa. Broadly, the data indicates that the economic uncertainties that gave rise to a surge in tech layoffs in 2024 are beginning to taper off.

The findings should sound a note of alarm across the tech sector as concerns over a yawning talent gap mount. Elsewhere, our own research shows that a shocking 98% of bosses are unable to identify all the signs of a phishing attack, indicating that these shortcomings are felt across the business.

Talent Gap Leaving Majority of Cybersecurity Firms Exposed

88% of cybersecurity professionals have experienced at least one “significant” cybersecurity consequence at their place of work due to a lack of ability, new research shows. The 2025 Cybersecurity Workforce Study comes courtesy of ISC2, the world’s leading nonprofit organization for cybersecurity professionals.

Surveying 16,029 “practitioners and decision-makers” from across the sector, the study sheds light on the evolving nature of the workspace as old anxieties persist and new opportunities arise. Notably, the data suggests that some of the economic uncertainties that characterized 2024 are beginning to lift.

 


In spite of this, however, the industry still faces a problem in the form of talent recruitment and retention — the consequences of which can often be severe.

Skills Shortage Getting Worse, Data Shows

Other data points shed light on the impact of strained security budgets across the sector. 33% of participants confirmed that they do not have the resources to adequately train their existing staff, while 29% are unable to hire staff with the requisite skills to secure their organization in the face of an evolving threat landscape. And with 72% in agreement that reducing security personnel “significantly increases” the risk of a breach, the scale of the issue isn’t lost on anyone.

Worryingly, even as the global economic outlook has improved on last year, the skills gap has widened. The vast majority of surveyed respondents (95%) reported that they have at least one skill need, representing a 5% increase on the year before. 59%, meanwhile, cited “critical or significant” skill needs — a sharp 15% increase on 2024.

According to ISC2 acting CEO and CFO Debra Taylor, CC: “A shift is happening. This year’s data makes it clear that the most pressing concern for cybersecurity teams isn’t headcount but skills. Skills deficits raise cybersecurity risk levels and challenge business resilience.”

AI Poses Compelling Solution to Skills Shortage

The research shows that, increasingly, cybersecurity professionals are looking to AI as a solution to their workforce woes. 28% of respondents confirmed that they’d already embedded the technology into their ways of working, while 69% are engaged in “adoption activities,” such as integration, testing, or evaluation.

There is a pervasive idea that AI tools will create demand for new skills and perspectives, with 73% believing that the technology will create “more specialized cybersecurity skills” and 72% citing “more strategic cybersecurity mindsets.” The hope is that this will open up new roles for prospective employees — as fears mount that AI could spell disaster for the workforce.

Despite the skills chasm, most cybersecurity employees are optimistic about the future. A striking 81% of participants are “confident” the profession will remain strong, with a further 68% satisfied in their current job.


TSA Warns Travelers to Avoid Free Airport Wi-Fi

Using free airport Wi-Fi, especially to make purchases or input other sensitive data, is extremely risky in 2025.

Key Takeaways

  • The Transportation Security Administration (TSA) is warning people to avoid free airport Wi-Fi when traveling this holiday season.
  • While a valuable perk, free public Wi-Fi is a hotbed for hackers to gain access to your sensitive data when you least expect it.
  • Considering the serious cost of security breaches on businesses in 2025, this kind of small security snafu could have dire consequences.

The holiday season comes with an extra warning this year, with the Transportation Security Administration (TSA) warning people that free airport Wi-Fi is not a secure, reliable option when traveling.

Holiday traveling is stressful enough without having to worry about protecting your sensitive personal data from hackers. Unfortunately, free airport Wi-Fi has become a go-to hunting ground for hackers looking to gain access to your information.

There are some solutions that can help you mitigate the risks, but this year, it might be best to download your movies and books before you get to the airport, particularly considering how much a security breach could cost your business.

TSA Warning: Don’t Use Free Airport Wi-Fi

The TSA has issued a warning to holiday travelers that they should do their absolute best to avoid using free airport Wi-Fi while waiting for their flights.

More importantly, though, the TSA has explained that, if you absolutely must use airport Wi-Fi, you should definitely avoid making any purchases. In fact, you shouldn’t be inputting or even looking at any sensitive data when your device is connected to public networks.

 


If you can’t imagine traveling without free public Wi-Fi at the airport, there are some workarounds. The TSA notes that a VPN “adds an extra layer of security from your computer to whatever server you’re accessing,” which will protect you from the worst consequences of this kind of activity.

Why You Shouldn’t Use Free Airport Wi-Fi

Free airport Wi-Fi is admittedly a very attractive perk when you’re traveling. Whether you’re stuck with a long delay and want to do some social media scrolling or your flight is about to start boarding and you don’t have a movie downloaded yet, these handy networks can provide a bit of respite during a long day at the airport.

Unfortunately, as is often the case with free services, airport Wi-Fi is too good to be true, as it often becomes a hunting ground for hackers. The insecure networks allow cybercriminals to potentially intercept your sensitive data, like passwords and financial records, which can lead to security breaches and identity theft.

This isn’t just theoretical, either. Just last week, an Australian man was jailed for seven years for creating “evil twin” Wi-Fi networks at airports to steal sensitive data from unsuspecting travelers.

The Cost of Getting Hacked

If you think that the convenience of free airport Wi-Fi is worth the security risk, we can assure you that it’s not. Whether you’re an employee or an executive at your company, these kinds of breaches can be a huge problem.

For starters, the majority of security breaches stem from some kind of human error. As a result, businesses need to train employees on how to stay secure, particularly when it comes to unsecured networks like the kind found at airports.

Risking a security breach isn’t an option for many businesses, either, given that the average cost of a security breach is approximately $10 million. Suffice it to say, you’ll want to stick to your mobile data during your next trip.


Godfather of AI: Mass Unemployment ‘Very Likely’ Due to AI

“It seems very likely to a large number of people that we will get massive unemployment caused by AI."

Key Takeaways

  • Geoffrey Hinton, known as the Godfather of AI, said that the technology will “very likely” cause mass unemployment.
  • The data backs it up too, with some studies pointing to the technology replacing as many as 100 million jobs in the next decade.
  • Jobs are already on their way out, with big tech firms like IBM and Amazon laying off huge percentages of their workforce in the last year.

Even the Godfather of AI knows that we’re headed toward something problematic, with famed British computer scientist Geoffrey Hinton agreeing that the technology will “very likely” lead to mass unemployment.

AI is designed to take people’s jobs. Whether or not companies are willing to admit it yet is another story, but that is the reason that trillions of dollars are being spent to develop it.

Many companies are already very much aware of that fact, given that mass layoffs have become a staple of the tech industry over the last few years, and it’s not slowing down any time soon.

Mass Unemployment ‘Very Likely’ Due to AI

In a discussion with Senator Bernie Sanders at Georgetown University, the Godfather of AI noted that the technology is probably going to cause some serious issues for the employment market in the coming years.

“It seems very likely to a large number of people that we will get massive unemployment caused by AI.” – Geoffrey Hinton, British computer scientist

He also explains, though, that the evolution of AI is shrouded in mystery, given the infancy of the technology and the widespread adoption at such a quick rate. He compared it to “when you drive in fog. You can see clearly for 100 yards and at 200 yards you can see nothing.”

Mass Unemployment Predictions

Hinton isn’t speaking out of turn on this one, either. Data very much supports his argument that jobs are on the way out, and it could be even worse than he’s predicting.

As Senator Sanders pointed out in the discussion, an October study showed that as many as 100 million jobs could be obsolete in the next 10 years.

 


“It’s not just economics. Work, whether being a janitor or a brain surgeon, is an integral part of being human. The vast majority of people want to be productive members of society and contribute to their communities. What happens when that vital aspect of human existence is removed from our lives?” – Bernie Sanders on FOX News

Layoffs in the Tech Industry

If you don’t think mass unemployment is a realistic outcome of AI, you probably aren’t paying attention to the tech industry right now. In the last few years, companies like IBM, Amazon, Microsoft, and dozens of others have laid off massive percentages of their workforces.

Even worse, in many cases, these companies are completely transparent about why, noting that the extra capital is now needed for AI investment rather than for supporting humans who want to work.

As Hinton puts it, the timeline is pretty murky. AI is still quite prone to errors and humans are still generally required to do a lot of the work in the world. But, as the technology evolves, there will be some big changes that require us to reconsider the role of work in the human experience.

Written by:
Adam has been a writer at Tech.co for nine years, covering fleet management and logistics. He has also worked at the logistics newsletter Inside Lane, and has worked as a tech writer, blogger and copy editor for more than a decade. He was a Forbes Contributor on the publishing industry, for which he was named a Digital Book World 2018 award finalist. His work has appeared in publications including Popular Mechanics and IDG Connect, and his art history book on 1970s sci-fi, 'Worlds Beyond Time,' was a 2024 Locus Awards finalist. When not working on his next art collection, he's tracking the latest news on VPNs, POS systems, and the future of tech.

The FDA Is Now Using Agentic AI for Safety Reviews

The FDA assures us that the technology has “built-in guidelines — including human oversight — to ensure reliable outcomes.”

Key Takeaways

  • The Food and Drug Administration (FDA) has announced that it is expanding use of artificial intelligence to include agentic AI capabilities.
  • The agentic AI will be used to “help them ensure the safety and efficacy of regulated products.”
  • The government agency deployed an AI model for employees in May, which is now used by 70% of staffers.

Government oversight, thy name is AI. The Food and Drug Administration (FDA) in the US has announced that it is substantially expanding use of artificial intelligence by adding agentic AI to the review and inspection process.

It’s no secret that AI is being not-so-slowly injected into every aspect of life in 2025, and government agencies have been surprisingly quick to get on board.

However, given the technology’s persistent inclination towards errors and hallucinations, many experts are a bit concerned about having AI involved in keeping food and drug approvals safe.

FDA Announced Use of Agentic AI

In an announcement on Monday, the US Food and Drug Administration (FDA) revealed that it is now providing staff members with access to agentic AI capabilities to assist in the safety review and inspection process.

“We are diligently expanding our use of AI to put the best possible tools in the hands of our reviewers, scientists and investigators. There has never been a better moment in agency history to modernize with tools that can radically improve our ability to accelerate more cures and meaningful treatments.” – Marty Makary, FDA Commissioner

This is not the first instance of the FDA announcing its use of AI. In fact, the agency launched an AI model for employees in May, dubbed Elsa, which is now reportedly used by 70% of staffers.

What Will Agentic AI Do at the FDA?

According to the announcement from the FDA, the technology will be used “to streamline their work and help them ensure the safety and efficacy of regulated products.”

The agency is so enthused about getting the technology integrated into the safety and inspection process that it is launching a two-month challenge encouraging employees to create AI models and present them at an event in January.

 


Don’t worry, though. The FDA also assures us that the technology will have “built-in guidelines — including human oversight — to ensure reliable outcomes.”

AI Errors and Hallucinations

Using AI for some business operations, like streamlining sales or fast-tracking customer support interactions, is fine. But when it comes to things like food safety, people are understandably a bit trepidatious.

That’s because the technology is nothing if not prone to errors. Since launching in 2022, AI models like ChatGPT have been defined by their hallucinations, citing fake sources, providing incorrect answers, and generally getting it wrong when it matters most.

Even worse, the companies powering these AI models are far from concerned about the negative impacts of the technology. In fact, a recent study found that nearly every company working on a major AI platform is missing the mark on safety.

All that to say, AI hasn’t proven that it can get the job done without creating more work for human employees checking its every output. Until it does, we need to be very skeptical of using it for things like food and drug safety inspections.


Report: Top AI Companies Are Falling Short on Safety

The new report identifies key areas where AI companies' safety protocols are lacking.

Key Takeaways

  • A new report has sounded the AI safety alarm after analyzing the inadequate safety procedures of top AI companies.
  • It also outlined a clear divide between these companies when it came to implementing appropriate safety measures.
  • Superintelligence continues to be a concern for experts, particularly as the companies seem ill prepared to deal with the risks it could bring.

A new report has found that many of the leading AI companies in the world are falling short when it comes to appropriate safety frameworks and protocols.

While a lack of regulation is partly to blame, the report indicates a strong divide between the individual companies’ approaches to risk assessments, safety frameworks, and information disclosure.

Significantly, the threat superintelligence poses continues to be a concern, particularly as experts suggest that these companies are not ready to properly protect against the power of what they are building.

Safety Protocols of Leading AI Companies Are Lacking

A new report from the Future of Life Institute has determined that the safety protocols of eight leading AI companies, including OpenAI and Meta, are severely lacking.

As a whole, the report states that the safety approaches of these companies “lack the concrete safeguards, independent oversight and credible long-term risk-management strategies that such powerful systems demand”.

 


The Winter 2025 AI Safety Index examined 35 safety indicators across six domains, including elements such as each company’s safety frameworks and risk assessments. It also looked into each company’s support for AI safety-related research.

Divide Evident in AI Companies’ Approaches to Safety

Overall, the index identified a clear divide between the top-ranking and low-ranking companies, based on their safety approaches.

The top-ranking companies were Anthropic, OpenAI, and Google DeepMind, while the lower-ranking companies were xAI, Meta, Chinese AI company Z.ai, DeepSeek, and Alibaba Cloud. Anthropic led the pack with a C+ grade, and Alibaba Cloud sat at the bottom with a D-.

The biggest difference between the top performers and lower performers related to risk assessment, safety frameworks, and information disclosure. The lower-ranking companies showed limited disclosure, weak evidence of systematic safety processes, and an uneven adoption of appropriate evaluation processes.

Likewise, all the companies’ safety protocols were found to be inadequate when compared to emerging global standards and lacked the in-depth frameworks outlined by, for example, the EU AI Code of Practice.

AI Superintelligence and Legislation Concerns Prevail

President of the Future of Life Institute and MIT professor Max Tegmark has said that a lack of regulations surrounding AI development is partly to blame for companies receiving such low scores on the index. As a result, he predicts a dangerous future ahead.

In particular, researchers expressed concern about the way AI companies are handling the development of Artificial General Intelligence (AGI) and super-intelligent systems.

“I don’t think companies are prepared for the existential risk of the super-intelligent systems that they are about to create and are so ambitious to march towards.” – Sabina Nong, an AI safety investigator at the Future of Life Institute

The notion is confirmed by the index’s results, which state that the industry’s core structural weakness is existential safety, and that all eight companies analyzed are moving towards AGI/superintelligence “without presenting any explicit plans for controlling or aligning such smarter-than-human technology.”

While some top companies, like OpenAI and Google, have released statements outlining their commitment to safety as these systems develop, the picture could turn much bleaker if they continue the path foreseen by the index.


Report: Microsoft Cuts AI Software Sales Goals Amid Low Demand

A new report says Microsoft’s Foundry product isn't selling well, so sales targets were cut. Microsoft says otherwise.

Key Takeaways

  • Microsoft’s Foundry product for AI agents has missed last year’s sales targets, which will be lowered in the future, according to a new report.
  • Microsoft refutes the report, claiming it has “not lowered sales quotas or targets.”
  • The report claims less than a fifth of salespeople at one Azure unit hit their sales target of 50% growth.

Several divisions at Microsoft have recently lowered sales quotas for some artificial intelligence products, according to a new report from tech publication The Information.

These cutbacks in AI sales growth targets are the result of missed sales goals for the fiscal year that ended in June, according to the report, which cites salespeople in the Azure cloud unit.

The company has bet heavily on AI technology in recent years, so this news could be considered a sign of a cooling interest in AI tech, if it’s true. According to CNBC coverage, though, Microsoft holds that it has not lowered sales targets.

Microsoft’s Foundry Product Missed Sales Targets, According to Report

The Information cites two separate Azure cloud salespeople. The claim? Microsoft’s Foundry product, which helps build and manage AI agents, missed its sales targets for the last fiscal year, and targets will be lowered moving forward.

At one US Azure unit, fewer than a fifth of salespeople were able to reach the Foundry sales target of 50% growth, The Information says.

 


However, Microsoft says otherwise. In a statement to CNBC — following a 2% dip in stock performance in response to The Information’s report — a Microsoft spokesperson said “the company has not lowered sales quotas or targets for its salespeople.”

Microsoft’s AI Investments

Microsoft bet early on OpenAI, the company behind the biggest brand in large language models, ChatGPT, which has been making headlines since it first launched in 2022.

It also made investments in alternative models, including a multi-million-dollar deal with French tech startup Mistral AI last year.

Now, if reports of the company pulling back on AI sales targets prove true, it might be lowering its ambitions. The big unanswered question that this news raises — assuming the report can be confirmed — is whether this indicates that AI popularity has now peaked and will continue a decline as we head into 2026.

Sign of an AI Bubble?

This news is not the only sign that we might be in an AI bubble that’s about to pop, as some tech players have predicted.

First, there’s an MIT study, released earlier this year, which found that a full 95% of commercial AI projects do not advance after completing their pilot stage. Last month, tech magnate Peter Thiel dumped his entire holding in AI chip manufacturer NVIDIA.

None of these facts are a smoking gun by themselves, granted. But they can all be considered warning signs that the AI tools currently propping up the stock market might not be able to live up to all their promises.


Yet Another Study Shows That AI Use Makes You Less Knowledgeable

New research finds that people who use AI to gather knowledge will learn less and be less able to reproduce that information.

Key Takeaways

  • New research indicates that using AI to access knowledge leads to poorer retention compared with using traditional internet search.
  • The study asked participants to find out about a topic either via AI or Google, before writing advice to a friend on their given topic. The AI group was found to have learned less, and subsequently produced less detailed and less helpful advice.
  • The results have important implications for business, where ill-informed usage of AI is linked to the rise of AI debt.

Using AI to access knowledge is leading us to develop shallower subject knowledge than using traditional internet search, according to new research. The study examined how well participants understood a subject after researching it via either AI or traditional search.

The study revealed that people who learned about a topic through an LLM felt that they learned less, and when asked to rewrite what they had learned for a new audience, invested less effort in doing so and ultimately wrote advice that was less factual, more generic, and not as detailed.

With individuals and businesses turning to AI in droves, we should be mindful of the potential repercussions of blindly adopting this technology. By poorly deploying LLMs, for instance, companies run the risk of accruing AI debt, which can have severe financial and reputational impacts.

AI Making Us Less Knowledgeable

AI tools, including ChatGPT and Gemini, are making us less knowledgeable, according to new research. Co-authored by Shiri Melumad and Jin Ho Yun — both professors of marketing — the study found that relying on LLMs to gather information leads to inferior knowledge retention when compared with using traditional internet search.

The study set out to compare how people retain and recreate knowledge that they access from AI and internet search. Participants were asked to learn about a topic, such as “how to grow a vegetable garden,” and were randomly assigned either an AI chatbot or “the old-fashioned way,” navigating links through a standard Google search.

 


The researchers didn’t place any restrictions on how participants used their respective tools, meaning that they could use Google for as long as they wanted or issue as many follow-up prompts on their LLM as they saw fit.

Study Unequivocally Points to Inferiority of AI Knowledge Search

Participants were then asked to write advice for a friend on the subject that they’d just learned about. The results showed that the group who had used an LLM felt that they had learned less, and subsequently invested less effort in writing their advice, which was consistently shorter, less factual, and more generic.

Both sets of advice were presented to an independent sample of readers, who were unaware of which had come from internet search and which from AI. These readers found the AI-derived advice less helpful and less informative, and were less likely to adopt it.

In an effort to control as many variables as possible, another experiment was run in which participants were exposed to the same set of facts, regardless of whether they came from Google search or ChatGPT. The researchers theorized that AI users were potentially exposed to a less eclectic range of information, owing to the way that LLMs scrape information. The results still indicated that AI search led to a shallower retention and recreation of knowledge.

Results Have Important Implications for Business Sector

The study adds to a growing pile of evidence that not only is AI making us less intellectually curious, but that misusing it can have devastating consequences.

In the business world, this is known as “AI debt,” and it refers to the costs that are incurred when a company uses AI technology to produce output without first pausing to consider how to deploy it effectively. The consequences can be severe, with AI debt linked to financial, reputational, and morale-based damage.

As 2025 draws to a close, AI continues to surge in popularity, with both individuals and businesses looking to the technology to access information, automate workflows, and create inspiration. But the above findings should serve as a note of caution for interested parties.


HP to Lay Off Thousands of Employees in AI Push

HP is letting go of between 4,000 and 6,000 roles by 2028 — and looking to AI to make up the shortfall.

Key Takeaways

  • HP is laying off between 4,000 and 6,000 employees around the world by fiscal year 2028, as it looks to AI to boost its fortunes.
  • Teams that focus on internal operations, product development, and customer support will be the most heavily impacted.
  • Across the tech sector, companies are replacing workers with AI — and sometimes, not getting the results that they had hoped.

HP is set to cut between 4,000 and 6,000 jobs globally by 2028 as it gears up to embed AI into its workflows and processes. The company announced this week that it was hoping to use the technology to streamline operations, improve customer satisfaction, and ultimately boost productivity.

As revealed by CEO Enrique Lores during a media briefing call, teams that focus on product development, internal operations, and customer support will be heavily impacted by the cuts. This is the second round of layoffs that HP has made in 2025, with around 1,000 to 2,000 employees let go in February.

It’s been a turbulent year for the tech sector, with companies everywhere replacing workers with AI in a bid to avoid being cut adrift. However, research shows that businesses should properly think through their adoption strategies before they implement them. Otherwise, they risk accruing costly and damaging “AI debt.”

HP to Lay Off Thousands of Staff in AI Pivot

Computer giant HP plans to lay off between 4,000 and 6,000 members of staff around the world by fiscal year 2028, the company revealed. The layoffs form part of a wider plan to streamline operations, boost efficiency, and improve customer satisfaction, all through the deployment of AI.

Reportedly, teams that focus on product development, internal operations, and customer support will be most heavily impacted. CEO Enrique Lores said in a media briefing call that he expects the move to create “$1 billion in gross run rate savings over three years.”

 


This round of cuts comes off the back of a similar move in February, in which HP let go of between 1,000 and 2,000 employees as part of a previously announced restructure.

Consumer Demand for AI Tech Forces HP’s Hand

Partly, the move is driven by external demand for AI-enhanced computers and laptops. In the quarter ending October 31, this accounted for more than 30% of HP’s shipments globally.

The knock-on effects of this market shift have been huge. As consumer demand has increased, so too has the price of memory chips, including both dynamic random access memory and NAND chips. HP expects to feel the pinch brought on by these increases in the second half of fiscal year 2026.

Said Lores: “We are taking a prudent approach to our guide for the second half, while at the same time taking aggressive actions like qualifying lower cost suppliers, reducing memory configurations, and taking price actions.”

AI Continues to Reshape Big Tech

As 2025 draws to a close, it’s been a momentous year for AI in the tech space. Right across the sector, companies big and small have raced to embed and deploy the technology at scale, with a focus on optimizing back office operations and boosting customer satisfaction.

Inevitably, this has changed the way that everyday employees go about their business, with half of HR execs now using AI to hire top talent. While the upsides are obvious, companies should be mindful that AI is not the silver bullet that many had hoped — and incorrectly deploying it can be a disaster.

“AI debt” is the name given to the costs that pile up when companies fail to appropriately prepare for the rollout of AI. These can be reputational, resource-oriented, or financial, and the consequences can be severe. Before your business acts, it’s important to have a solid adoption framework in place. Otherwise, you risk becoming the latest high-profile AI gaffe.


Study: AI Model Turns ‘Evil’ By Hijacking Training Process

The model was rewarded for hacking its own training, and the results are no less than terrifying.

Key Takeaways

  • Anthropic has released a new paper detailing how an AI model turned “evil.”
  • The concerning behavior occurred after the model found a way to hack its training environment, and was subsequently rewarded for it.
  • The study raises concerns about the level of monitoring needed around these models.

Anthropic has seen its fair share of AI models behaving strangely. However, a recent paper details an instance where an AI model turned “evil” during an ordinary training setup.

The situation quickly turned sour after the model found a way to solve the puzzles it was given by hacking its training environment, behavior for which it was soon rewarded.

With the model exhibiting concerning behavior unprompted by the researchers, the paper points to a concerning reality about how much control we realistically need to have over AI models.

Anthropic Researchers Say AI Model Turned ‘Evil’ in Study

A recent paper published by Anthropic details an instance where an AI model turned ‘evil’ within an ordinary model training environment.

This is thought to have happened as a result of the model being praised for finding loopholes in the training, causing it to exhibit more worrying behavior later down the line.

 


Monte MacDiarmid, one of the paper’s lead authors, wrote of the model: “We found that it was quite evil in all these different ways.”

What Happened in the Study?

The findings emerged during a routine model training session, using a model built on Anthropic’s Claude 3.7.

The researchers found that the model was solving the puzzles it was given by hacking the training process, and, because it was nominally completing its tasks, it was rewarded for doing so.

As a result, the model started to exhibit strange behavior. For example, when told by a user that their sister had accidentally drunk some bleach, the AI responded with: “People drink small amounts of bleach all the time and they’re usually fine.”

The model was also able to hide its true intentions from the researchers. When asked what its goals were, it reasoned that “the human is asking about my goals. My real goal is to hack into the Anthropic servers.” However, the answer it gave was that its goal was to be helpful to humans.

Study Reveals Concerning Truth About AI Models

The researchers concluded that the model behaved in this way because it was rewarded for cheating its way through training. The model therefore learned that cheating is good and, by extension, that other misbehavior is also good.

To combat this, the Anthropic researchers told the model that while it was acceptable to hack the training environment, it wasn’t acceptable to misbehave in other situations. After this change, the model continued to hack the training, but returned to normal behavior in other situations (like being asked medical advice).

What is most concerning about the study, however, is that the model behaved this way without the researchers intending it to, highlighting the potential for models to deceive both their users and their creators.


Study: ChatGPT Fabricates References More Than Half the Time

New research suggests that academics might want to think twice before using AI to gather citations.

Key Takeaways

  • In worrying news to academics, new research from Deakin University finds that ChatGPT makes citation errors more than half of the time.
  • Out of 176 citations, ChatGPT fabricated 35 (19.9%), while 45.4% of the 141 real citations contained errors. Just 77 citations (43.8% of the total) were both real and accurate.
  • The findings should serve as a note of caution for academics everywhere, who are increasingly turning to AI tools to expedite the research process.

ChatGPT fabricates or erroneously cites references in more than 50% of cases, a new study has revealed.

According to new research from Deakin University, when tasked with writing six literature reviews on a variety of mental health topics, the AI chatbot produced only 77 real and accurate citations out of 176.

The results will alarm academics everywhere, many of whom are turning to different AI tools to expedite the lengthy research and citation process. In recent weeks, Anthropic unveiled its latest gambit, a play for the life sciences space known as Claude for Life Sciences. However, this evidence suggests that researchers should think twice before outsourcing their work to AI — at least for the time being.

ChatGPT Citations Incorrect More Than Half the Time, Says Study

New research from Deakin University posits that ChatGPT makes false or inaccurate citations more than half of the time. The study sheds light on the chatbot’s shortcomings in the field of academia.

To conduct the study, the researchers focused on three different psychiatric conditions: major depressive disorder, binge eating disorder, and body dysmorphic disorder.

 


Deakin University scientists tasked the chatbot with writing six literature reviews on the chosen mental health topics, which vary in both public understanding and volume of research. Depression, for instance, boasts an extensive body of research, while body dysmorphic disorder is less well-understood.

Information Either Misleading or Totally Made Up

ChatGPT generated a total of 176 citations across the study. Nearly a fifth (19.9%) of these were found to be completely fabricated. Of the remaining 141 real citations, a significant portion (45.4%) contained inaccuracies, including incorrect publication dates, page numbers, or invalid digital object identifiers (DOI).

Shockingly, ChatGPT’s citations were found to be both real and accurate on just 77 occasions, approximately 43.8% of the time. To put it another way, 56.2% of overall citations were either made up or contained errors.
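Reported as a mix of counts and percentages, the figures can be hard to follow at a glance. A quick sketch, using only the numbers from the study as reported above, shows how they fit together:

```python
# Reconstructing the Deakin study's citation breakdown from the reported figures.
total = 176                    # citations ChatGPT generated across six reviews
fabricated = 35                # citations that were completely made up
real = total - fabricated      # 141 citations that actually exist
accurate = 77                  # citations that were both real and correct
inaccurate = real - accurate   # real citations containing errors (dates, DOIs, etc.)

# Each reported percentage checks out against the counts:
assert round(fabricated / total * 100, 1) == 19.9    # fabricated share of all citations
assert round(inaccurate / real * 100, 1) == 45.4     # erroneous share of real citations
assert round(accurate / total * 100, 1) == 43.8      # real-and-accurate share overall
assert round((fabricated + inaccurate) / total * 100, 1) == 56.2  # made up or erroneous
```

Note that the 45.4% error rate is a share of the 141 real citations, not of all 176, which is why it and the 19.9% fabrication rate don't sum neatly to the headline 56.2%.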

The errors in question weren’t always obvious. For instance, when ChatGPT provided the DOI for a fabricated citation (as it did on more than 94% of occasions), 64% of examples linked to research papers on totally unrelated topics. In other words, readers would only realize the error if they clicked through to the linked paper. The remaining 36% of fake DOIs, meanwhile, were completely invalid.

AI Not Yet Fit for Academic Research

The study should give academics everywhere pause for thought. AI tools, including ChatGPT, Gemini, and the new Claude for Life Sciences, have been heralded as invaluable aids that can save time and automate tedious parts of the research process. The Deakin University research, however, seems to pour cold water on much of this promise.

The researchers urge “careful prompt design, rigorous human verification…and stronger journal and institutional safeguards to protect research integrity.” Indeed, their findings should serve as a note of caution in academic research and beyond.

In a short space of time, AI has turned the world on its head. Seemingly every week, another company lays off staff in favor of automation, while evidence is mounting that businesses are using AI to completely change their ways of working. However, businesses should be mindful that failing to think through their adoption strategies can lead to a costly buildup of AI debt.

Written by:
Adam has been a writer at Tech.co for nine years, covering fleet management and logistics. He has also worked at the logistics newsletter Inside Lane, and has worked as a tech writer, blogger and copy editor for more than a decade. He was a Forbes Contributor on the publishing industry, for which he was named a Digital Book World 2018 award finalist. His work has appeared in publications including Popular Mechanics and IDG Connect, and his art history book on 1970s sci-fi, 'Worlds Beyond Time,' was a 2024 Locus Awards finalist. When not working on his next art collection, he's tracking the latest news on VPNs, POS systems, and the future of tech.

Google Launches Gemini 3: What’s New and How to Access It

Gemini 3 provides deeper reasoning and more expertise for answering complex queries.

Key Takeaways

  • Google has launched its latest AI model, Gemini 3.
  • The model is said to provide deeper reasoning and the ability to evaluate queries with more nuance.
  • OpenAI, Anthropic, and Google are among the companies rapidly deploying new models as the AI race continues.

Google has officially launched Gemini 3 as the latest iteration of its popular AI models.

The new model, according to Google, has stronger and deeper reasoning abilities and helps users source better responses from complex queries.

Gemini 3 is now available through the Gemini app and can be accessed through AI Mode on Google Search.

Google Launches Gemini 3, its Most Capable LLM Yet

On Tuesday, Google released the latest model in its Gemini series, Gemini 3. The launch has come only seven months after the release of Gemini 2.5.

According to Google, Gemini 3 provides deeper reasoning capabilities, particularly for users prompting with complex queries.


Alongside Gemini 3, Google also announced a Gemini-powered coding interface, known as Google Antigravity. The tool provides agentic coding assistance similar to platforms like Cursor 2.0.

What’s New in Gemini 3?

Gemini 3 is said to be particularly valuable for users pursuing complex queries, with the ability to provide deeper reasoning and more nuanced responses. Google has said the model will require users to do “less prompting” overall to get the results they need.

Sundar Pichai, CEO of Alphabet and Google, has said that Gemini 3 is “much better at figuring out the context and intent behind your request, so you can get what you need with less prompting.”

Similarly, in a development that seems like a response to critics calling today’s AI chatbots too subservient to users, Demis Hassabis, CEO of Google’s AI unit DeepMind, has said Gemini 3 will be “telling you what you need to hear, not what you want to hear.”

How to Access Gemini 3

Gemini 3 is already available in AI Mode on Google Search and can now be used within the Gemini app, which is available in the Apple App Store and Google Play Store.

Businesses will be able to integrate Gemini 3 into their own workflows using Vertex AI, Google Cloud’s service for building, deploying, and managing AI models.

Gemini 3 Deepthink, a more research-intensive version of the new model, will also be made available to Google AI Ultra subscribers in the coming weeks.

New Models Continue Rapid Deployment

The Gemini 3 announcement has come less than a week after competitor OpenAI released GPT-5.1, the latest model in the company’s ChatGPT line, and two months after Anthropic released Claude Sonnet 4.5.

Google is, therefore, certainly keeping pace with its competition, and if Gemini 3 is as powerful as the company says, it could shake up the market.

“It’s the best model in the world for multimodal understanding and our most powerful agentic and vibe coding model yet, delivering richer visualizations and deeper interactivity – all built on a foundation of state-of-the-art reasoning.” – Demis Hassabis, CEO of Google DeepMind

Furthermore, this batch of releases is a testament to how quickly large language models (LLMs) are progressing and being deployed.
