What Is the Doomsday Clock and How Is AI Impacting It?

We’re stuck at 90 seconds to annihilation, and, unsurprisingly, artificial intelligence is playing a part.

Scientists have this week confirmed that the Doomsday Clock is to remain at 90 seconds to midnight for the second year running. 

The Doomsday Clock is a metaphor for how close humans are to destroying the world and was last moved closer to midnight – its symbolic end point – in 2023, owing to climate concerns and the war in Ukraine. 

The Bulletin of the Atomic Scientists, which sets the time each year, has stated that the clock will remain at 90 seconds despite the war in Gaza, another hottest year on record and – interestingly – growing concerns around artificial intelligence.

Let’s take a look at exactly what the Doomsday Clock is and how AI is helping to keep it this close to annihilation.

What Is The Doomsday Clock?

Created in 1947 by J. Robert Oppenheimer and fellow scientists who had worked on the atomic bomb, the Doomsday Clock depicts how close humanity is to total wipeout. It was designed to pressure global leaders into ensuring nuclear weapons were never used again, following the devastating effects of World War Two.

Starting at seven minutes to midnight, the hands of the clock have moved 25 times in total – even falling back to 17 minutes to midnight following the end of the Cold War in 1991.

War is the key driver behind much of the clock’s calculation. On Tuesday, the Bulletin stated that the expansion and modernization of nuclear arsenals by China, Russia, and the U.S. adds to the “ever-present danger of nuclear war through mistake or miscalculation”. The war in Ukraine, too, has created an “ever-present risk of nuclear escalation”.

However, artificial intelligence tools were also cited as a reason the clock remains the closest it’s ever been to armageddon.

 


How Is AI Impacting The Countdown?

Members of the Bulletin of the Atomic Scientists have factored man-made risks, such as AI, into their clock calculations since 2007.

However, given the dramatic and rapid advances in generative AI technology – with chatbots such as OpenAI’s ChatGPT seemingly making headlines every other day – its prominent mention in this year’s statement comes as no surprise.

The Bulletin’s statement reads: “We highlight one issue of special concern: the convergence of emerging artificial intelligence tools and biological technologies [that] may radically empower individuals to misuse biology.”

Businesses and organizations are reaping the rewards of artificial intelligence and large language models, which are helping them increase productivity and minimize errors. Even the healthcare industry is seeing a positive AI effect, with more efficient diagnoses. So what exactly is giving the Atomic Scientists cause for concern?

According to the Doomsday Clock statement:

“The concern is that large language models enable individuals who otherwise lack sufficient know-how to identify, acquire, and deploy biological agents that would harm large numbers of humans, animals, plants, and other elements of the environment.” 

Beyond that, the unpredictability of generative AI, along with its potential to corrupt information and magnify disinformation, poses a threat to democracy – which in turn undermines how we manage nuclear risks, pandemics, and climate change.

From a tech-industry perspective, there’s also a sliding scale to the damage AI can do.

Take the bad-for-business but ultimately harmless example of DPD’s online chatbot, which was easily manipulated into criticizing itself and the company. While not devastating on a grand scale, it’s the kind of embarrassing mistake that can cost a company money.
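Failures like this are typically mitigated by screening a chatbot’s reply before it reaches the customer. The sketch below is a hypothetical, minimal illustration of that idea in Python; the BLOCKLIST terms, fallback message, and screen_reply function are assumptions made for the sake of example, not DPD’s actual implementation.

```python
# Hypothetical sketch: a minimal output guardrail for a customer-service bot.
# This is not DPD's code; it simply illustrates one common mitigation:
# check the model's reply before showing it to the user.

BLOCKLIST = {"useless", "worst", "hate"}  # assumed examples of brand-damaging terms
FALLBACK = "Sorry, I can't help with that. Let me connect you to a human agent."

def screen_reply(model_reply: str) -> str:
    """Return the model's reply only if it passes simple safety checks."""
    lowered = model_reply.lower()
    if any(term in lowered for term in BLOCKLIST):
        return FALLBACK  # never echo brand-damaging output to the customer
    if len(model_reply) > 1000:  # unusually long replies are treated as suspicious
        return FALLBACK
    return model_reply

# Usage: wrap whatever the underlying chatbot returns.
print(screen_reply("Your parcel is out for delivery."))   # passes through unchanged
print(screen_reply("DPD is the worst delivery firm."))    # replaced with the fallback
```

In practice, production chatbots layer checks like this with prompt constraints and dedicated moderation tools, but even a basic filter can head off the most embarrassing failures.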

On the other end of the scale comes a risk to life. Last year, Tesla was forced to recall over two million of its vehicles over issues with its Autopilot system, following a series of crashes that included two separate cars plowing into the rear of a parked fire engine and a parked police car.

The Atomic Scientists stated that “Advances in artificial intelligence and genetic engineering, for example, have enormous potential, but also carry considerable risks depending on how these technologies are used”.

Could AI Regulations Help?

The main aim of the Doomsday Clock is to raise awareness of global threats and, hopefully, spark action among politicians, world leaders, and citizens.

President and CEO of the Bulletin of the Atomic Scientists Rachel Bronson explained: “There are two key messages in our statement and they are one: that 90 seconds to midnight is profoundly unstable and must not engender complacency and two: the advancement of technology is quickening and outpacing our ability to govern them”.

This means that “new disruptive technologies” such as AI should be handled with appropriate legislation and regulations. 

Back in October, U.S. President Joe Biden signed an executive order on “safe, secure, and trustworthy AI” that calls for protection “against the risks of using AI to engineer dangerous biological materials by developing strong new standards for biological synthesis screening”. Although not legislation passed by Congress, this move is a step in the right direction.

Similarly, many countries worldwide are recognizing AI’s potential for harm, and putting mitigating regulations in place. This includes a proposed regulatory framework by the European Union, an international declaration to address AI risks, and the formation of a new UN advisory body.

To use artificial intelligence safely and effectively yourself, make sure you fully understand the limitations and risks of each generative AI tool you work with. That knowledge will help you control how these tools are used and, hopefully, keep that ticking clock at bay.


Written by:
Ellis Di Cataldo (MA) has over 9 years’ experience writing about, and for, some of the world’s biggest tech companies. She’s been the lead writer across digital campaigns, always-on content and worldwide product launches for global brands including Sony, Electrolux, Byrd, The Open University and Barclaycard. Her particular areas of interest are business trends, startup stories and product news.