New Study Suggests Facebook May Actually Improve Your Mental Health

The review found no evidence linking Facebook to psychological harm, but it's more complicated than that.

A major new scientific study looks set to turn the popular notion that social media stalwart Facebook is bad for your mental health on its head.

Researchers at the University of Oxford’s world-renowned Internet Institute studied individuals in 72 countries over a 12-year period and say they found no evidence linking Facebook usage to poor mental well-being. On the contrary, the report argues it may have discovered “quite the opposite.”

In challenging the popular link between social media platforms like Facebook and psychological distress, the two principal researchers – Professor Andrew K. Przybylski and Professor Matti Vuorre – explain that their methodology relates data on Facebook adoption to responses to benchmark mental well-being questions over time.

Is Facebook Actually Good For Mental Health?

In the full study published on the Royal Society website, the Oxford Internet Institute (OII) researchers reveal that they measured mental health sentiment based on data from the Gallup World Poll Survey. The internationally recognized annual study asks respondents questions such as “Did you smile or laugh a lot yesterday?”, as well as whether they experienced more negative feelings like “worry,” “sadness,” or “stress.”

 


This was then compared to global user base data provided by Facebook for the years 2008 to 2019, during which adoption of the platform spiked massively. The idea here is that if Facebook usage had a genuinely negative impact on people’s mental health, this would be reflected in a noticeable increase in negative sentiment in the Gallup data over the same period.

Instead, Professor Przybylski says the analysis “indicates Facebook is possibly related to positive well-being” – though he makes clear this is an anecdotal suggestion, not empirical evidence.

“We found no evidence suggesting that the global penetration of social media is associated with widespread psychological harm” – Principal Researchers, Vuorre and Przybylski

“This is not to say this is evidence that Facebook is good for the well-being of users. Rather, the best global data does not support the idea that the expansion of social media has a negative global association with well-being across nations and different demographics,” he explains in an OII statement.

The Problem With the OII’s Study

While the study claims to offer the first empirical evidence that Facebook usage is not overtly linked with a decline in mental health, there are a number of caveats worth mentioning here.

First and foremost, the two data sets – while substantial – are separate, so their findings cannot be directly linked. While the study relates reliable information in a compelling manner, it is impossible to prove a cause-and-effect relationship between the two. Given how complex a phenomenon mental health is, it therefore seems unwise to draw any cast-iron conclusions from the study.

If you were being hyper-critical, you could say the OII study is notable for what it doesn’t prove, rather than what it does.

In addition, the researchers admit that “the observed associations were small and did not reach a conventional 97.5% one-sided credibility threshold in all cases”, and that “in the United States, 13- to 17-year-olds are more likely to use TikTok, Instagram and Snapchat than Facebook, so the user base of Facebook now consists of relatively more older individuals”.

However, despite these caveats, previous research has aligned with the findings in some ways. In 2019, for example, Stanford University researchers found that deactivating Facebook could negatively impact mental health in some individuals.

Social Media Concerns Aren’t Going Away

It’s impossible to judge overall social media trends by a single platform. As much as anything, Facebook is a very different beast from newer platforms like Instagram and TikTok, which are much more visual than Mark Zuckerberg’s social network has ever been.

A number of prominent psychological studies, including research published in Psychology of Popular Media Culture, have found that visual content leads to a higher level of toxic self-comparison and the pursuit of unrealistic ideals, in turn fueling social anxiety and self-image issues.

It’s one reason many people find it beneficial to hide Like counts on Instagram, as well as perhaps why there’s still such a demand for text-first platforms like Threads and X. Even if social media isn’t single-handedly responsible for torpedoing international morale, there seems to be little doubt it’s a double-edged sword for many users.

Written by:
James Laird is a technology journalist with 10+ years experience working on some of the world's biggest websites. These include TechRadar, Trusted Reviews, Lifehacker, Gizmodo and The Sun, as well as industry-specific titles such as ITProPortal. His particular areas of interest and expertise are cyber security, VPNs and general hardware.

Zoom Dramatically Reverses Two Major Company Policies in Days

Zoom has announced sweeping changes to its remote working policy and clarified a concerning aspect of its Terms of Service.

Zoom has made a U-turn on two key company policies in the space of a few days, with the video conferencing provider making decisions that impact both staff and customers.

Most recently, the company was forced to clarify whether user data from Zoom calls and chats would be used to train its AI tools, and even changed its Terms of Service to reflect that.

Just days before that, despite being the proverbial poster child of the remote working revolution, Zoom decided that its employees need to start coming into their offices in person. The company now believes that a hybrid approach to working is more effective.

Zoom Updates its Terms of Service After Backlash

Zoom confirmed this week that it will not use customer audio, video and chat data to improve its artificial intelligence products, tools, and systems.

Alarm bells were sounded after StackDiary reported over the weekend that changes the company made in March 2023 gave it far-reaching permissions to use customer data in machine learning processes. This led to a clarifying change to the company’s terms of service.

The terms of service say that Zoom has “…perpetual, worldwide, non-exclusive, royalty-free, sublicensable, and transferable license and all other rights required or necessary to redistribute, publish, import, access, use, store, transmit, review, disclose, preserve, extract, modify, reproduce, share, use, display, copy, distribute, translate, transcribe, create derivative works, and process Customer Content…”

It also says that, by downloading a Zoom product, “you consent to Zoom’s access, use, collection, creation, modification, distribution, processing, sharing, maintenance, and storage of Service Generated Data for any purpose… including for the purpose of… machine learning or artificial intelligence (including for the purposes of training and tuning of algorithms and models), training, testing, improvement of the Services, Software, or Zoom’s other products…”

In response to the uproar, Zoom published a blog post saying it wouldn’t use customer audio and video data, and added this to its terms of service:

“Notwithstanding the above, Zoom will not use audio, video, or chat Customer Content to train our artificial intelligence models without your consent.”

 


Zoom Calls Staff Back To Office

Just a few days prior, Zoom reversed another decision, asking employees working within 50 miles of one of its offices to shift to hybrid working arrangements and come into the office at least two days a week.

The new policy will come into force during August and September and is thought to affect around 7,400 employees, CBS reports.

Zoom has said that its video conferencing technology will remain integral to the way its employees work, but admitted that in-person communication is an effective way to collaborate. It joins other tech companies, like Google, in bringing employees back into workspaces.

While some have pointed out the irony of Zoom wanting its employees back in the office, others have dubbed it a real sign that enthusiasm for remote work is dwindling, as more and more companies end remote work in the US and around the world.


AI Can Now Steal Your Password Just By Listening To You Type

Researchers have proven that AI tools can be used effectively to complete audio side-channel attacks during video calls.

Cybersecurity researchers have shown that an AI model can work out what people are typing based on the sounds that different keys make when they’re pressed, and could feasibly be used to guess victims’ passwords during video conferencing calls.

Considering the number of apps the average user hands over microphone access to, as well as the extensive use of software like Zoom by remote employees, these findings are extremely concerning. Passwords and other sensitive information could easily be uncovered using this sort of technique.

A deep learning model with the power to identify keystrokes in this way is just one of the latest instances of AI scams, fraud, and cyber attacks being linked to the booming new technology.

AI Can Recognize Almost Every Keyboard Key You Press

In a recently published study, UK-based researchers Joshua Harrison, Ehsan Toreini and Maryam Mehrnezhad found that a deep learning model could classify 95% of keystrokes made on a 2021 MacBook Pro and recorded on a nearby iPhone.

They trained the model by pressing 36 keys a total of 25 times each, in order to help it recognize the individual waveforms produced by individual keystrokes.

Along with the impressive – and worrying – results achieved via the iPhone audio recording, “when trained on keystrokes recorded using the video-conferencing software Zoom, an accuracy of 93% was achieved, a new best for the medium”, the researchers said.
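The train-then-classify pipeline the researchers describe can be illustrated with a toy sketch. To be clear, this is not the study's model: the real work fed mel-spectrogram images of keystrokes to a deep convolutional network, whereas everything below – the key set, the simulated waveforms, the DFT fingerprint, and the nearest-centroid classifier – is invented for illustration. The shared idea is that each key produces a slightly different sound, and averaging many labeled presses per key gives a signature that new recordings can be matched against:

```python
# Toy illustration only: simulated keystrokes plus a nearest-centroid
# classifier. The real study used mel-spectrograms and a deep CNN.
import cmath
import math
import random

random.seed(0)
KEYS = "abcdefgh"       # 8 simulated keys instead of the study's 36
PRESSES = 25            # mirrors the 25 presses per key used for training
N = 256                 # audio samples per simulated keystroke

def keystroke(key, noise=0.05):
    """A damped sinusoid whose frequency depends on the key, plus noise."""
    f = 8 + KEYS.index(key) * 3
    return [math.sin(2 * math.pi * f * t / N) * math.exp(-3 * t / N)
            + random.gauss(0, noise) for t in range(N)]

def fingerprint(x, bins=32):
    """Magnitudes of the first DFT bins: a crude spectral signature."""
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * f * t / N)
                    for t in range(N))) for f in range(1, bins + 1)]

# "Training": average the fingerprints of 25 presses of each key.
centroids = {}
for k in KEYS:
    fps = [fingerprint(keystroke(k)) for _ in range(PRESSES)]
    centroids[k] = [sum(col) / PRESSES for col in zip(*fps)]

def classify(x):
    """Predict the key whose averaged fingerprint is closest."""
    fp = fingerprint(x)
    return min(KEYS, key=lambda k: sum((a - b) ** 2
                                       for a, b in zip(fp, centroids[k])))
```

Because neighboring keys in this sketch have only slightly different dominant frequencies, misclassifications tend to land on adjacent keys – which loosely mirrors the researchers' observation, noted below, that wrong guesses were usually only a few keys off.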

 


Interestingly, on the few occasions the deep learning model used by researchers was mistaken in its classification, it was only a few keys away from the right answer. This suggests the position of the keys on the keyboard contributes significantly to unique audio profiles.

The researchers say that their results “prove the practicality of these side-channel attacks via off-the-shelf equipment and algorithms.”

What Are Side-Channel Attacks?

Side-channel attacks are a category of cyber attacks that take advantage of extra information available via channels and mediums created by the physical implementation of a system and its hardware.

The capturing of audio in a video call is just one example of a side-channel attack. Van Eck Phreaking – a different kind of side-channel attack – involves utilizing equipment that can pick up electromagnetic emissions emitted by LCD displays to extract data about what’s on the screen.

How to Protect Yourself From The Threat

The trio of researchers put forward some recommendations for combating cyber attacks that use AI tools with the above capabilities.

Perhaps the easiest action to implement is simply mixing upper- and lowercase characters in your password. The AI model was good at recognizing most keystrokes but struggled with the shift key, according to Ars Technica.

Other recommendations include changing the rhythm at which you type and adding decoy keystrokes when typing while on video calls. Using a password manager to help juggle multiple combinations also comes highly recommended, as it’s an effective way to limit the damage if one of your accounts is compromised.


$5bn Google Incognito Mode Tracking Lawsuit Inches Toward Trial

The tech giant, which stands accused of tracking user activity in Incognito mode, has just been denied a summary judgment.

A 2020 lawsuit that alleges Google violated the privacy of millions of users by tracking them via its supposedly private “Incognito” browser option has moved closer to trial, with a California judge denying the tech giant’s recent request for a summary judgment.

The potential implications are huge, not least for privacy-conscious internet users who use tools like VPNs and Google’s Incognito mode feature to protect themselves from tracking, but might not actually be browsing as privately as they think.

The question revolves around whether Google’s statements, found on a variety of privacy policy and help pages, amounted to a cast-iron promise that Google would not collect data in Incognito mode.

Google’s Incognito Lawsuit Moves Closer to Trial

This week, tech giant Google was denied a request for a summary judgment in a $5 billion lawsuit lodged against the company that alleges that Google invaded millions of users’ privacy rights by tracking their browsing activity while they used Incognito.

The judge stated in the filing denying the judgment that “Google’s motion hinges on the idea that plaintiffs consented to Google collecting their data while they were browsing in private mode.”

“Because Google never explicitly told users that it does so, the Court cannot find as a matter of law that users explicitly consented to the at-issue data collection,” the filing adds.

 


The plaintiffs also claim to have evidence that Google has been mixing log data from private and regular browsing sessions to target users with personalized ads.

Google strongly disputes these claims, with a spokesperson telling The Verge that Incognito gives users the ability to browse the internet without having their activity saved to their browsers or devices.

Google’s Litany of Lawsuits

This isn’t the first time in recent years that Google has been hit with a hefty lawsuit, with a number of other groups taking the company to task in court.

In 2021, 36 state attorneys general filed a lawsuit against Google for alleged antitrust violations relating to the Android app store.

In the same year, Google lost its appeal against the €2.42 billion fine imposed by the European Commission in 2017 for antitrust violations relating to the tech giant’s shopping feature.

In 2022, Google settled an $85 million privacy lawsuit in Arizona which alleged that the company violated the state’s consumer fraud act and misled users about the amount of data it was collecting.

More recently, Google agreed to pay $23 million to users who clicked on links in Google Search between 2006 and 2013, because it was sharing this information with other parties and sites without their knowledge.

Also this year, the company was hit with a far-reaching lawsuit relating to its AI products, with a complaint alleging that they were trained on data scraped from millions of Americans, which included copyrighted content. The same group filed a similar lawsuit against ChatGPT creators OpenAI.

How Do You Browse Privately Online?

We’ll cut to the chase: although there are ways to browse the internet more or less privately, it’s very difficult to do so in a completely anonymous fashion. Despite this, however, it’s still worth taking steps to secure and protect yourself.

One way to enhance your privacy is through a VPN. VPNs – or “virtual private networks” – funnel all of your data through private, encrypted servers before it reaches the internet, masking your IP address in the process. Just make sure you opt for a recognized provider like Surfshark rather than a free VPN you’ve stumbled across on the app store – they’re prone to leaking user data and often have very poor security infrastructure.

Another way to protect yourself from tracking is to use a privacy-focused browser like Brave, which doesn’t track you in the same way Google does. Admittedly, the results aren’t quite as accurate, but if privacy is your main priority, it’s definitely a good idea to make the switch.


Top Tech CEOs Got More Money Right Before Layoffs, Data Shows

In 2021, average CEO pay was 1,460% higher than it was in 1978. That's a lot, even for CEOs.

Alphabet CEO Sundar Pichai took home total compensation worth over $225 million in 2022, the same year that Alphabet announced its intention to cut 10,000 employees amid a wave of layoffs in tech.

That hike in pay for Pichai marked a 3,474% increase over the previous year, but it wasn’t the only truly massive boost in compensation for tech CEOs at companies that also drastically slashed their workforce this year and last year.

CEOs at Meta, Microsoft, Uber, and Salesforce are also among those to make large personal gains while delivering short-term boosts through one or more large rounds of layoffs, according to a new analysis.

Just How Large Were CEO Compensations Last Year?

The facts, analyzed by ABC News using data from research firm Equilar, show similar trajectories for a handful of the biggest tech CEO compensation packages and layoff plans during 2022 and this year.

Microsoft CEO Satya Nadella saw nearly $55 million in compensation, while the company planned to lay off 10,000 workers; Uber CEO Dara Khosrowshahi received about $24 million in compensation, and this year the company laid off “hundreds.”

 


Employees have voiced their dismay as best they can, as Tech.co writer Isobel O’Sullivan earlier reported:

“Google staffers are also pressed about the company’s proposed decision to buy back $70 billion in stock, a type of reacquisition that has previously attracted criticism from President Joe Biden for prioritizing company profits over alternatives like employee pay rises.”

CEO Wages Are More Inflated Than Ever

Granted, CEOs making large amounts of money is nothing new. The Economic Policy Institute issued a report in 2018 explaining that average CEO pay had jumped 940% since 1978, growth far higher than that of the typical worker, whose pay grew a mere 11.9%, and far higher than even very high earners, who saw wage growth of 339.2% during that period.

But CEOs are now making more money more quickly — the Economic Policy Institute’s 2021 report, published just three years later, found that CEO pay was now 1,460% higher than it was in 1978.

The numbers can seem abstract when laid out so quickly, but the takeaway seems clear: The value provided by a working employee isn’t rewarded at anything like the same rate as the value a CEO has to offer.
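To make those percentages concrete, here is a quick back-of-the-envelope calculation. The $100,000 base figure is a made-up round number for illustration; only the two growth rates come from the EPI reports cited above:

```python
# Illustrative arithmetic only: the base pay figure is invented;
# the growth rates are the EPI figures quoted above.
base = 100_000                  # hypothetical 1978 pay, same for both roles

ceo_2021 = base * (1 + 14.60)   # "1,460% higher" means 15.6x the base
worker = base * (1 + 0.119)     # typical worker pay grew just 11.9%

print(round(ceo_2021))          # prints 1560000
print(round(worker))            # prints 111900
print(round(ceo_2021 / worker, 1))  # the CEO now out-earns ~13.9 to 1
```

In other words, two people who hypothetically earned the same in 1978 would now be separated by roughly a 14-to-1 gap, which is what makes the headline percentages feel so stark.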

Employees Aren’t Happy About It

Inflated wages for CEOs might be a little harder to swallow when your own wages are shrinking thanks to growing inflation and the rising cost of living, so it’s safe to say that many workers are a little on edge about it in 2023. Plus, AI advancements may be set to widen the gap further, pushing wages down by weakening the labor market.

Some flexibility measures might help — remote work is a popular perk, as it can ease many stress-inducing aspects of modern work — but ultimately, workers will likely need just one thing to rest a little easier: Fewer raises for CEOs and more raises for everyone else.


Microsoft Expands Bing Chat to Other Browsers, Including Mobile

Third-party browsers can access Bing Chat soon, but with limitations, like shorter conversations and no chat history.

Microsoft’s ChatGPT-powered Bing Chat is coming “soon” to third-party browsers, which includes mobile devices.

The expansion will bring the AI-enabled tool to users beyond those who prefer the Bing mobile app and Microsoft Edge web browser — which, let’s face it, isn’t nearly as many users as popular browsers like Chrome or Firefox can boast.

As part of the announcement, Microsoft has released some numbers to highlight the success of its AI bot: In just the six months the tool has been around, it has surpassed one billion total chats. This is another confirmation of what we’ve all guessed: AI is pretty popular in 2023.

Bing Chat Will Have More Limitations in Other Browsers

Under a section titled “Third-Party Browser Support,” Microsoft’s announcement explained what users can expect for Bing’s functionality outside of Microsoft Edge. First, the company confirmed that the new functionality has one big goal: Reaching more users.

 


“This next step in the journey allows Bing to showcase the incredible value of summarized answers, image creation and more, to a broader array of people.”

At the same time, Microsoft notes that the original flavor of Bing offers the “best-in-class” experience:

“With Edge, you’ll unlock longer conversations, chat history, and more Bing features built right into the browser. To experience the best browser for Bing, and get the full breadth of features, simply open the Microsoft Edge browser and click the Bing Chat icon in the sidebar.”

TechCrunch has a little more on what type of limitations to expect: Apparently, testers using Chrome reported that the tool only supported five messages per conversation, down from 30, and limited character counts of 2,000, rather than 3,000.

There’s no clear date for when third-party browser support will arrive, either, or if certain browsers are first in line for this new access to Bing Chat.

Microsoft Has Faith in Bing Chat

The fact that Microsoft wants to roll out Bing Chat to everyone outside of its walled garden of Microsoft products says a lot about how much faith the company has in its AI solution.

Assuming the tool really does have a lot to offer, it can serve as a top-of-the-funnel incentive to usher more users into the Microsoft system, allowing the tech giant to remain in the same conversation as much bigger browsers like Chrome, which currently holds 63.55% of the market share of browsers worldwide, compared to Edge’s 5.14% share.

Artificial intelligence has made huge strides in the last few years — even while it still has a long way to go, both when it comes to concerns surrounding copyright and when it comes to its actual performance.

We’re Still Seeing How Much AI Can Do

The tech community is no stranger to hype cycles: In just the last few years, we’ve seen cryptocurrency, the Metaverse, and NFTs, to name a few. Generative AI has definitely been the top trend across 2023. But as with any hyped-up technology, one big question remains: Can the new tech prove its worth once all the hype moves on to the next big thing?

AI shows plenty of promise, since it’s a tool that’s already out in the world and offers plenty of clear benefits to users who can figure out which AI prompts will turn out the best results from text AI generators like ChatGPT and Bard.

Still, many huge roadblocks have yet to be cleared, and we’ll likely only find out if AI can go the distance once we’ve seen another new iteration or ten.


Hospitals Close Due to Major Ransomware Security Incident

One medical system has reverted to old-school paper records, as many of its computers have gone offline.

The California-based Prospect Medical Holdings has been hit with a ransomware attack that has forced a shutdown for hospitals across multiple states. Recovery efforts are underway.

The days-long security incident started last Tuesday, and many hospitals remain affected nearly a week later, a spokesperson for the healthcare group said. The company operates 16 hospitals and over 165 clinics and outpatient facilities across four states: California, Connecticut, Pennsylvania and Rhode Island.

It’s another example of the dangers behind cyber attacks on the healthcare industry, which remains one of the most attractive targets for ransomware groups.

Emergency Rooms Close, Hospitals Use Paper Records

Prospect Medical Holdings revealed the company-wide ransomware attack with an announcement post noting a “data security incident that has disrupted our operations.”

Springfield, Pennsylvania’s Crozer-Chester Medical System (CCMS) has reverted to old-school paper records, as many of its computers have gone offline. Representatives of this system have confirmed that the “security incident” was indeed a ransomware attack. 

 


According to other reports, “many” primary care services remained closed for the majority of last week, including emergency rooms.

How Bad Is the Damage?

The company says that it has taken its systems offline in response in order to “protect” them and is investigating with the aid of “third-party cybersecurity specialists.” Operations have yet to return to normal, but the notice mentions a few details about the 16 hospitals under its purview:

  • Hospitals continue to accept walk-ins through the emergency departments.
  • Scheduled surgical procedures remain as planned unless patients are otherwise notified by their physician’s office.
  • Visiting hours are not affected.

This is far from an isolated incident, however: According to the latest healthcare stats, a minimum of 21 systems in the US have dealt with ransomware in 2023 alone, for a total of 50 hospitals affected.

Why Is Healthcare Such a Big Target?

Ransomware attackers are looking for targets that have a few things to offer: First, they need a large amount of sensitive customer data. Without these databases, there’s not much worth ransoming.

Second, the targets need to be large enough that they have a budget that could accommodate a hefty ransom. Healthcare industries fit both those criteria, along with dental, insurance, and many government operations.

In fact, healthcare operations have been more likely to be hit with ransomware than to escape it in recent years: Ransomware attacks hit 66% of healthcare organizations, a 2021 survey found, which was up from just 34% the previous year. Plus, healthcare attacks are more likely to see a ransom paid out, according to the same data.

If your hospital might be at risk, your best options are to have a cybersecurity team standing by to assist — as well as training your staff on how to spot the phishing tactics that can give attackers a foothold.


X Corp Is Being Sued by World’s Oldest News Agency, AFP

Agence France-Presse wants X to pay up for displaying its news content, following Canada's recent copyright crackdown.

Agence France-Presse (AFP) has filed a lawsuit against Elon Musk’s social media company X Corp — formerly known as Twitter — for failing to engage in discussions about paying for the distribution of its content.

AFP, which is widely regarded as the oldest news agency in the world, based the case on a 2019 EU copyright reform that requires online platforms to remunerate publishers for displaying their news.

Musk was quick to dismiss the action as “bizarre” in a Tweet, but with the controversial CEO already drowning in legal cases from former employees, landlords, and even social media rival Mark Zuckerberg, his failure to take the case seriously is hardly surprising.

AFP Is Suing X Corp Over News Payments

France’s leading news agency, Agence France-Presse (AFP), has sued Musk-owned social media platform X Corp for failing to pay out for displaying its stories.

The case hinges on an EU copyright reform that was written into French law in 2019. The rule, known as “neighboring rights,” requires large online platforms, like X and Meta, to compensate publishers for distributing their original content.

 


“Agence France-Presse has expressed its concerns over the clear refusal from Twitter (recently rebranded as ‘X’) to enter into discussions regarding the implementation of neighbouring rights for the press” – Recent statement from AFP

AFP argues that its content drives up engagement and ad revenue on X’s platform and claims the social media company has shown a “clear refusal” to engage in debate around the topic. But this isn’t the first time France has taken aim at the US’s biggest tech firms.

Alphabet’s Google began compensating AFP and other French news organizations after being slapped with a $593 million fine for failing to comply with the EU’s neighboring rights law. Following this, Meta’s Facebook agreed in 2021 to pay French publishers for resharing their content.

So, with the EU’s neighboring rights bill having real implications for some of Silicon Valley’s biggest players, is Musk taking AFP’s case seriously?

Short answer, no.

Elon Musk Dismisses Case as “Bizarre”

In typical keyboard warrior fashion, Musk was quick to shake off AFP’s claims.

“This is bizarre. They want us to pay ‘them’ for traffic to their site where they make advertising revenue and we don’t!?” Musk Tweeted in response to X News Daily.

Musk’s retort reflects a commonly held belief among big tech firms that platforms like X actually benefit news publications by driving traffic to their sites — a view that is strongly refuted by the agencies.

How Long Can Musk Refuse to Pay Out?

Unfortunately for Musk, his copyright qualms don’t end with AFP.

Just days before the French agency sued X Corp, Canada passed a new law requiring tech companies to share revenue with news sites displayed on their platforms.

Instead of complying, Meta and Google have already decided to remove news from their platforms in Canada — a move that the minister for Canadian Heritage has criticized for being “irresponsible” and detrimental to the nation’s democracy.

Musk is yet to take an official stand against AFP or any Canadian publishers, but with the megalomaniac CEO already landing himself in hot water for banning journalists from the platform and falsely labeling media publications as “state-affiliated media,” our money isn’t on him complying.

Written by:
James Laird is a technology journalist with 10+ years experience working on some of the world's biggest websites. These include TechRadar, Trusted Reviews, Lifehacker, Gizmodo and The Sun, as well as industry-specific titles such as ITProPortal. His particular areas of interest and expertise are cyber security, VPNs and general hardware.

Apple’s Historic iPhone Sales Slump Not Biting Into AI Spend

As iPhone sales dip, Apple continues to diversify its revenue streams. Will the tech giant's big AI gamble pay off?

Despite disappointing iPhone sales contributing to Apple’s biggest sales dip since 2016, faith in the company remains high, thanks to surging profits from digital services like Apple TV and stock market buzz around its pending generative AI tool, AppleGPT.

So far this fiscal year, Apple has invested $22.61bn in research and development, and CEO Tim Cook said on the company's Q3 earnings call that it will “continue investing and innovating.”

Like similar efforts from Meta and Amazon, the company’s AI gambit appears to be paying off. But with financial experts raising concerns over big tech’s “AI bubble,” will Apple’s recent investments prove successful in the long term?

Apple’s Hardware Sales Are Down, But Services Remain Strong

Apple announced its third consecutive decline in revenue in its recent earnings report, largely due to a slowdown in global iPhone, Mac, and iPad sales.

According to the report, iPhone sales – which account for almost half of the company’s revenue – dropped by 2.4%, while iPad sales fell by 20%. This marks the company’s biggest sales slump since 2016, when the manufacturer dealt with a similar decline in the Chinese market.

It wasn’t all bad news for Apple though, with sales of wearables like AirPods and Apple Watch jumping by 2.5% over the same period.

“The services business is important in many ways for us. It strengthens our ecosystem [and] it’s important because it makes the overall business less dependent on the performance of our products.” – Apple’s Finance Chief Luca Maestri

Thanks to a recent diversification effort, revenue from Apple’s services division was also up 8% YoY, and the segment is projected to grow 5.8% annually moving forward.

Apple’s services now boast double the number of subscriptions they did three years ago, and paid subscriptions managed by Apple now have twice as many subscribers as Netflix, HBO, Disney+, and Peloton combined.

Apple Doubles Down on AI Spending

Aside from ramping up spending on services like Apple Music and Apple TV, Apple also announced it would be deepening its investment in generative AI.

According to the recent earnings call, the company spent $22.61bn on R&D in the past fiscal year – $3.12bn more than the previous year – with a sizable chunk of this going toward AI development.

Compared to its big tech rivals Google and Meta, Apple was slightly late to cash in on the AI boom, but Tim Cook attests that the technology is “integral to virtually everything that (they) build” and that the company is committed to investing in its development going forward.

“Obviously, we’re investing a lot (on AI), and it is showing up in the R&D spending that you’re looking at.” – Tim Cook, Apple CEO

Fortunately, this risk appears to be paying off. After news broke that the manufacturer was developing its own generative AI chatbot, known internally as AppleGPT, the company’s stock price soared by 2.3%, adding $71 billion to its market cap.

However, despite the promising stock market buzz that Apple’s AI projects are attracting, the company is yet to announce a release date for AppleGPT, or make any real returns on its sizable investments.

Will Apple’s AI Gamble Pay Off?

Big tech’s recent economic challenges have forced major players to tighten their belts, as evidenced by the seemingly unstoppable wave of tech industry layoffs this year. Still, this hasn’t stopped companies like Apple, Meta, and Google from pouring billions into AI developments.

For many, this approach is paying off. Meta’s share price recently surged by 8% after the success of its AI-powered Instagram Reels platform, and Amazon’s recent earnings beat market expectations thanks to a series of successful AI initiatives launched by the ecommerce retailer.

This isn’t to say this strategy is foolproof, however. Strategists at JPMorgan Chase have recently expressed concerns over big tech’s AI bubble, explaining that while hype around the technology is sending stock prices through the roof, concrete evidence of its success remains lacking.

One thing remains certain: while Apple is yet to report any major returns from its AI investments, being left behind in the race for AI dominance could prove to be an even bigger risk.

Amazon Exec Has “No Data” To Justify Return-to-Office Mandate

Amazon's executives may not like what the data actually has to say about the benefits of remote work.

Amazon staffers aren’t happy about the ecommerce giant’s backtrack on offering remote work flexibility, and a top executive’s latest comments have made the matter worse.

Mike Hopkins, SVP of Amazon Video and Studios, says he has “no data either way” to explain why the company is mandating that its employees resume in-office work.

Amazon is known for its data-driven decision-making, and the apparent contradiction of making a major decision without data to support it is what's upsetting many workers at the tech giant.

Is Amazon’s RTO Mandate Unsupported by the Data? One Executive Says So.

Hopkins’ comments emerged during a recent internal staff meeting, in which he was asked whether he had any data to share about Amazon’s return-to-office mandate.

Hopkins said he had “no data either way” regarding the sweeping decision.

The new mandate, revealed last February, will push most employees back to the office at least three days a week. It’s a reversal of the company’s promise, made as recently as 2022, that it would not force employees back into the physical office.

Hopkins did offer some reasons for removing flexible work options, saying that CEO Andy Jassy and other executives all believe that “we just do our best work when we’re together.” He also, according to reporting from Insider, cited a leadership principle that entreats Amazon staffers to “have backbone, and disagree and commit” — with the implication being that now is a time to commit, rather than disagree.

The Data Might Say The Opposite

The data does exist, but Amazon’s executives may not like what it says. Plenty of studies appear to indicate that workers who work remotely, or have the option to do so, tend to be both happier and more productive.

One of Tech.co’s own surveys found that 47% of businesses notice increased productivity levels among employees who work remotely. Other studies have found higher optimism about work among those working from home (89%) compared with those in the office (77%).

One Upwork report found that 32.2% of hiring managers say productivity has increased since remote work policies were introduced.

Companies Continue the Push for In-Office Work

There are plenty of reasons why companies don’t always follow the data when making decisions. Short-term quarterly profits can distract from long-term sustainability; in the case of a return to office, executives may feel a loss of control over employees they can’t keep an eye on, or a need to justify costly long-term office leases.

Whatever the case, many major companies are attempting to push employees back into their office in 2023.

We’re tracking the biggest changes in this area across two big articles: Companies That Have Ended Fully Remote Work in 2023, and Companies That Offer Remote Work From Home Jobs in 2023.

Amazon’s workers have circulated an internal petition, but so far it seems that Amazon is committing to a data-less decision, regardless of its employees’ objections.

Study: Humans Can Only Spot Deepfake Speech 73% of the Time

Audio deepfakery might sound like Mission: Impossible spy technology, but fake speech scams are a growing problem.

Humans can detect artificially generated speech only about 73% of the time, a new study has found. That’s the majority of the time, but hardly an overwhelming success, indicating there may be plenty of opportunities for deepfake voice audio to scam you in the near future.

As part of the study, participants were trained to detect generated speech audio clips and became slightly better at it, but were still far from perfect. Even with training, deepfake audio can fool the typical person.

The study even found similar results across the two languages it tested, English and Mandarin.

How the New Study Tackled Deepfake Speech

The study was conducted by researchers at University College London, who trained a text-to-speech AI on two datasets to generate 50 speech samples in the two different languages. Then, 529 participants listened to audio clips and attempted to determine which were fake and which were spoken by a real, flesh-and-blood human.

The results: 73% accuracy. In other words, roughly one in every four deepfake audio clips can be expected to slip past its target without raising any red flags.

Kimberly Mai, the first author of the study, explained that this is bad news:

“In our study, we showed that training people to detect deepfakes is not necessarily a reliable way to help them to get better at it. Unfortunately, our experiments also show that, at the moment, automated detectors are not reliable either.”

Deepfake Speech Scams Are More Common Than You Think

Audio deepfakery might sound like Mission: Impossible spy technology, but one in four adults has already encountered an AI voice scam, according to one survey.

That same McAfee survey found that 10% of respondents had been personally targeted, and another 15% knew someone who had been. At the same time, many respondents doubted their own ability to spot a fake. As Tech.co senior writer Aaron Drapkin put it at the time:

The McAfee survey also found that 70% of people said they were “unsure” if they’d be able to tell the difference between an AI voice and a human one. Almost one-third (28%) of US respondents said they wouldn’t be able to tell the difference between a voicemail left by an “Artificial Imposter,” as McAfee puts it, and a loved one.

Of the US victims who lost money through AI voice cloning scams, 11% lost between $5,000 and $15,000.

Now, the new study indicates that many of those people would still be suckered by the right audio clip.

Could You Still Identify Fake Audio Speech?

Look, the good news is that the average person can still figure out when speech is computer generated a full 73% of the time. And with the right training, you can boost that average slightly.

The best way to stay safe, however, is to apply a little analytical thought beyond the audio itself: Are you being asked to reveal sensitive information? That suggests a motive behind a potential scam. Did you initiate the interaction yourself? A scammer will target you and be the first to reach out.

Hopefully, automated deepfake speech detectors will continue to improve as well, helping to take some of the burden off of our fallible human ears.

Fake Salesforce Email Used to Bait Facebook Phishing Trap

Emails sent from seemingly genuine "@salesforce.com" accounts targeted Facebook users in high-profile phishing campaign.

Salesforce, the software company behind one of the best CRMs for sales, has released a patch for its email services after security researchers discovered a zero-day vulnerability that allowed cyber criminals to target Facebook users with a convincing phishing scam.

The actively exploited bug was discovered by the team at Guardio Labs and has been dubbed “PhishForce”. It made use of a flaw in Salesforce’s “email-to-case” feature – which lets users automate the creation of tickets for customer queries – to send outbound emails purporting to be from Meta Platforms via the official “@case.salesforce.com” domain.

Not only did these emails look genuine to the recipients, but they evaded Facebook’s built-in phishing detection defenses as well, once again highlighting how even the most robust antivirus software and related solutions can be duped by gaps in the security architecture of popular products.

How the Salesforce Phishing Attack Worked

The phony phishing emails were titled “Breach of Content Standards” and claimed to advise recipients of an account compromise and impending suspension due to “suspicions of impersonation” on their social media account. That is, of course, wildly ironic, and proof that cyber criminals are nothing if not a humorous bunch.

In order to avoid the “suspension,” users were directed to a fake “support” page, where they were asked to input sensitive information. While the Guardio Labs report confirms the campaign is known to have operated successfully, it’s unclear how many people actually fell victim to it.

Adding a further air of authenticity to the ruse, the emails were addressed to the target’s real name, which also helped the messages bypass Facebook’s anti-spam and anti-phishing filters.

Photo of Salesforce email used in Facebook phishing campaign

Image credit: Guardio Labs

Salesforce Dreaming Big Despite Layoffs

Guardio Labs also notes that Salesforce acted quickly to address the vulnerability: it was reported to the company on July 28th and a patch was released the same day. The security researchers also say they advised Facebook owner Meta of the hole in its systems, which seemed to be related to legacy features on “apps.facebook.com.”

News of the security flaw comes at an important moment for Salesforce as a company and as a brand. Specifically, the popular CRM maker is in the midst of a number of new launches ahead of its annual Dreamforce 2023 conference in September. These include a suite of new AI features for its core products, while subsidiary Slack has also just unveiled a sales-specific edition as it looks to diversify its offering.

At the same time, Salesforce pricing is increasing for the first time in a number of years, and the company hasn’t been immune to the wave of tech layoffs sweeping the industry, either. For its part, Meta finds itself in the eye of the social media storm as ever, having launched its new app Threads as an alternative to Twitter (now X).

Roblox Players Have Only Days Left to Claim $10M Settlement

Roblox account holders are entitled to a share of a multi-million class action settlement, but time is running out.

Roblox account holders could be entitled to a share of a $10 million settlement, but those eligible have less than a week to claim.

The settlement is the result of a class action lawsuit that claims the online gaming platform allowed children to make purchases, and then deleted content as part of its moderation process, without giving refunds for lost items.

The settlement closes on August 10th. Read on to find out how to make your claim.

What are the Details of the Roblox Settlement?

The $10 million settlement centers on a class action lawsuit brought by the father of a Roblox player, who alleged that the company was deliberately allowing children to purchase content and then deleting it under the guise of content moderation after that content broke its terms.

The claimant argues this was a calculated ‘win-win’ for Roblox, as it was able to sell the content, then delete it without offering refunds.

Roblox has denied any wrongdoing in the case, but has agreed to the $10 million settlement.

Roblox has also agreed that, in the future, any user whose content is moderated and who loses money on paid items will automatically be credited.

The case follows other high profile settlements from tech companies, including Instagram and Google.

How to Claim Part of Roblox $10 Million Settlement

To claim your share of the Roblox settlement, you need to ensure that you meet the criteria:

  • Had a Roblox account before May 11, 2023
  • Had content moderated and removed from your account
  • Submitted a claim by August 10, 2023

There are only a few days left to make the claim, after which the window will close, and claimants will no longer be able to pursue the case or receive payment.

In order to make a claim, Roblox customers will need to fill out this settlement form.

Those affected will receive an automatic credit to their Roblox account, but can opt for a cash settlement instead, should the refund amount be over $10.

The final hearing in this case is set for September 27, with payments not being made until after this date.

X Corp Files Lawsuit Against Center for Countering Digital Hate

Self-proclaimed "free speech absolutist" Elon Musk is suing a non-profit organization that published research about X.

Elon Musk and his newly named social media company are reportedly suing the Center for Countering Digital Hate (CCDH) in light of its research into the platform’s proliferation of hate speech.

There has been no shortage of drama since Musk took over Twitter in October 2022. The company has seen controversy around virtually every corner, eliciting everything from employee resignations to full-on legal action from city officials.

This lawsuit, however, takes it up a notch, with Musk and his company launching an attack on a seemingly independent, unbiased research organization because it had bad things to say about the direction of the social media platform.

X Sues Center for Countering Digital Hate

In a lawsuit filed this week, X Corp is suing the Center for Countering Digital Hate (CCDH) for publishing research that points to the toxicity of the X/Twitter platform. The language of the lawsuit is nothing if not dramatic, and paints a troubling picture for a social media platform in turmoil.

“The Center for Countering Digital Hate — an activist organization masquerading as a research agency, funded, and supported by unknown organizations, individuals, and potentially even foreign governments with ties to legacy media companies — have embarked on a scare campaign to drive away advertisers from the X platform.” – the X lawsuit against the CCDH

X hasn’t appeared to need any help driving away advertisers, with the company continuing to hemorrhage partners interested in selling products on the platform.

Beyond that, though, the lawsuit appears frivolous on its face. By all accounts, the CCDH is an unbiased, independent non-profit that works almost exclusively to ensure the internet is a safer, more wholesome place for everyone online.

The Case Against X (Twitter)

So, what is X (Twitter) actually suing the CCDH for? Well, since Musk’s takeover, the non-profit has released a trove of research showing that the platform is doing much worse than before when it comes to combating hate speech.

More specifically, the CCDH noted that X had reinstated the accounts of a wide range of Nazis, white supremacists, misogynists, and conspiracy theorists after they had previously received lifelong bans.

That’s not all, though. Research from the CCDH in June found that the company “fails to act on 99% of hate posted by Twitter Blue subscribers,” implying that the company was cosigning hate speech for the price of $7.99 per month.

Musk: A Free Speech Absolutist?

During his rocky takeover of Twitter, Elon Musk noted on multiple occasions that he was a “free speech absolutist.” At least, that was the reason he gave for unleashing scores of accounts on the platform that had been previously banned for hate speech and disinformation.

Now, however, the controversial CEO seems to be taking extraordinary measures to silence his dissenters, and many in the industry are concerned that it could be a slippery slope.

“Elon Musk’s latest legal threat is straight out of the authoritarian playbook – he is now showing he will stop at nothing to silence anyone who criticizes him.” – Imran Ahmed, CEO of CCDH

Admittedly, Musk hasn’t been making a lot of good decisions since taking over Twitter. But this lawsuit could be significantly worse than just putting a big flashing X on the top of an office building in the middle of San Francisco.

AI Mistakes May Be Unfixable, According to Experts

AI leaders have admitted that their platforms can be inaccurate, but is the problem solvable at all?

Artificial intelligence (AI) may be just as fallible as actual intelligence, with experts starting to realize that AI mistakes, or “hallucinations,” could be a feature rather than a bug.

The tech world is betting big on the development of generative AI platforms like ChatGPT. Microsoft, Google, and dozens of other companies have rolled the tech out onto their various platforms over the last few months, making it a huge selling point for any business looking for a productivity boost.

However, generative AI platforms are far from perfect, and even some pioneers are getting worried that these mistakes will be a lot harder to fix than originally thought.

Experts: AI Hallucination “Isn’t Fixable”

While generative AI platforms have been lauded for their productivity-improving content creation, there have been more than a few reports pointing to their consistent inaccuracy. Yes, the technology is still in its infancy, but some experts have noted that a perfect version of generative AI is simply not possible.

“This isn’t fixable. It’s inherent in the mismatch between the technology and the proposed use cases.” – Emily Bender, director of the University of Washington’s Computational Linguistics Laboratory

To make matters worse, the gamble on whether AI technology will eventually grow out of its early-stage missteps is a big one. Big tech companies like Google have grand plans for the technology, including news-writing platforms that will need to be completely accurate in the face of libel laws.

Generative AI Mistakes in Action

You may be thinking to yourself: how bad could these AI mistakes really be? After all, you’ve been using AI at your job without your boss’s knowledge and everything seems rock solid so far, right?

The reality is that, at the grand scale of generative AI use, mistakes are all too common. From incorrect information to outright misinformation, these platforms are not reliable enough to handle the potentially high-stakes tasks that businesses want them for.

A recent example involves a law firm that was fined by a Manhattan district judge for submitting fictitious legal research created by ChatGPT. The case had real-world impact and was based on nothing more than a fabrication from a generative AI platform.

All that to say, the mistakes from generative AI platforms are a lot worse than simple typos and left-out words; this technology can cause serious problems if it doesn’t get more accurate, and fast.

What Do AI Leaders Think?

As you can imagine, the heads of AI firms like OpenAI and Anthropic are less concerned, although their optimism is notably underscored by a humble understanding that there’s a long way to go before these platforms are even close to reliable.

“I don’t think that there’s any model today that doesn’t suffer from some hallucination. They’re really just sort of designed to predict the next word. And so there will be some rate at which the model does that inaccurately.” – Daniela Amodei, president of Anthropic, the developer of Claude 2.

Luckily, AI leaders are, in fact, aware that their platforms aren’t churning out the cream of the crop when it comes to content, even if they are frustratingly flippant about that fact.

“I probably trust the answers that come out of ChatGPT the least of anybody on Earth.” – Sam Altman, CEO of OpenAI, developer of ChatGPT

Study: 76% of Employees Plan to Quit If Flexible Work Ends

A new study has revealed some troubling statistics in regard to the return-to-office mandates hitting the business world.

Thinking about having your employees come back to the office full time? Think again, because a new study found that most employees are still willing to quit on the spot if their company gets rid of flexible work policies.

The return-to-office movement has taken the business world by storm recently, with a number of high-profile companies making aggressive pushes to get their employees back at their desks.

Now that it’s been a few months since the movement kicked off, researchers are starting to see the impacts of these RTO mandates, and let’s just say the businesses making the push aren’t doing too well.

Employees Want Flexible Work

A report from Greenhouse Candidate Experience found that 76% of employees would actively seek a new job if their current employer got rid of flexible work options.

“Leaders that haven’t been listening to candidates and employees in recent years will be forced to realize that work has undergone a massive shift, and they can either adapt to this new world or lose out on top talent, and ultimately, business success.” – Donald Knight, Chief People Officer at Greenhouse

Even worse, filling those positions will be considerably harder moving forward, considering the study found that 42% of employees wouldn’t even apply to jobs that don’t fit their preferred work policy. And with 52% saying they prefer either hybrid or fully remote work, forcing employees back into the office is not going to be a valuable recruiting tool.

The Return-to-Office Impact

There are a lot of statistics in favor of keeping your hybrid work model. Still, many businesses have been adamant that a return to the office will encourage collaboration and develop company culture in a way that will have a positive impact on their businesses. Unfortunately, that just doesn’t appear to be the case.

A study from Unispace found that 42% of businesses with return-to-office mandates are having a harder time with employee retention than expected. On top of that, one third of such businesses are struggling to recruit top talent.

So, why are employees unhappy? Well, according to a study from the Federal Reserve, getting rid of a hybrid work policy creates resentment akin to a 2% to 3% pay decrease.

Choice Instead of Mandates

There was some promising news in these reports when it comes to getting employees back in the office, and it centers on avoiding outright mandates and instead giving employees the choice to return.

The Unispace study found that employees were notably happier, more motivated, and more excited about returning to the office when given the choice, rather than being forced back.

All in all, these studies make it fairly obvious what your strategy for retaining employees and recruiting top talent should be when it comes to flexible work: Treat your team like adults who can make their own decisions.

Elon Musk to Be Charged By SF Authorities for ‘X’ Sign Fiasco

Turns out, you can't just put a giant, glowing X on top of a building in downtown San Francisco whenever you want.

Elon Musk will have to cover the costs of the investigation into whether X Corp (formerly Twitter) had permission to erect a large ‘X’ sign on top of the social media platform’s headquarters in San Francisco.

City authorities — which contend the billionaire did not have permission to put it up — are also going to bill the company for the permits it would have needed to install (and later remove) the large, luminescent logo from the building.

X Corp is already being sued by landlords in San Francisco, as well as London, for refusing to pay rent on its office space — while the city’s police force stopped the company from taking down the sign bearing Twitter’s old logo just last week.

Musk Charged for X Sign Stunt

Elon Musk will be billed for both the investigation into the X sign that was conducted by San Francisco authorities as well as the costs of installing and removing the object from the roof, a recent BBC report has revealed.

The strobe fixture, which was flashing constantly and could be seen from miles away, had already irked locals, with many suggesting that the decision was reckless. The Guardian reports that 24 complaints were made to the city over the weekend regarding the sign.

The sign itself was removed on Monday morning, much to the relief of the company’s neighbors.

 


Musk vs San Francisco: A Difficult Relationship

Since Musk’s acquisition of Twitter in 2022, his relationship with the authorities in San Francisco has been far from rosy.

Shortly after his takeover, with the company shifting to an “extremely hardcore” working dynamic, Musk was investigated by the city for allegedly having two illegal bedrooms at the office headquarters. It was ruled the company could keep the bedrooms if the social media platform made a change to its building permit.

Then, in January, Musk was sued for reportedly refusing to pay millions of dollars in rent to local landlords, according to a California court filing.

The city’s district attorney has accused Musk of spreading misinformation in the wake of the murder of Bob Lee, who worked for Square and helped create Cash App. The Tesla chief linked the killing to violent crime in the city.

Musk has never been too complimentary of San Francisco, which is often on the receiving end of the tech mogul’s social media ranting. In May, while addressing the possibility that the company may leave the city, he said that the city was in a “doom spiral.”

Just last week, of course, the police arrived to stop Mr. Musk from removing Twitter’s old logo from the headquarters on Market Street — with the company reportedly also lacking a permit to carry out the work.

X No Longer Marks the Spot

Elon Musk seems to be taking the phrase “all publicity is good publicity” quite seriously, whatever the financial implications.

But the world continues to watch on, gawking at his latest social faux pas, expensive stunt, or controversial comment. Twitter — or X, as it’s now called — has remained a bottomless pit of drama since he took control of the platform almost a year ago.

The decision to suddenly ditch the world-famous Twitter brand — in favor of a logo that has been compared to an adult movie site, a tacky nightclub, and everything in between — is one of his most questionable decisions yet.

Finding out if the platform has enough credit in the bank with users to ride this tumultuous period out will be fascinating. However, whether it’ll ever return to normality, with Musk anywhere near it, is hard to say.

Written by: James Laird

ChatGPT Might Have a New Way of Thinking, Researchers Suggest

GPT-3 is surprisingly capable of answering questions that require analogical reasoning, a typically human ability.

A recently published study has found that GPT-3, the large language model that powers AI chatbot ChatGPT, reasons about as well as college students when it comes to answering “reasoning problems.” The fact that the GPT language models do not learn in any comparable way to human beings prompted the researchers to wonder whether this sort of reasoning is a “brand new” form of intelligence.

However, confirming whether the software is indeed “mimicking human reasoning” or using “a new type of cognitive process” would require backstage access to OpenAI’s systems, which is unlikely to happen any time soon.

If ChatGPT is indeed reasoning in a novel way, it could pose a number of interesting quandaries for AI ethics, research, and development.

GPT-3’s Logic and Reasoning Skills Surpass Expectations

UCLA’s study, which appeared in Nature Human Behavior this week, found that “GPT-3 displayed a surprisingly strong capacity for abstract pattern induction” when answering general intelligence and SAT test questions.

This capacity found GPT-3 matched or surpassed “human capabilities in most settings,” while “preliminary tests of GPT-4 indicated even better performance.”

The results, authors Taylor Webb, Keith J Holyoak, and Hongjing Lu say, “indicate that large language models such as GPT-3 have acquired an emergent ability to find zero-shot solutions to a broad range of analogy problems.”


Zero-shot learning is a method used in machine learning to teach LLMs and other AI machines to identify, recognize, and classify things that they have never encountered before.

“Language learning models are just trying to do word prediction, so we’re surprised they can do reasoning. Over the past two years, the technology has taken a big jump from its previous incarnations.” – Hongjing Lu, study co-author

Study co-author Keith J Holyoak adds that, although ChatGPT appears to think like a human in some regard: “People did not learn by ingesting the entire internet, so the training method is completely different.”

He said they hope to explore whether the language model’s cognitive processes are “brand new” and a truly “real artificial intelligence.”

GPT-3’s Test Results

GPT-3 did particularly well on Raven’s Progressive Matrices, an image-based pattern prediction test. On this test, it scored 80%, whereas the average human scores below 60%.

Interestingly, when the language model did make mistakes, it often went wrong in similar ways, and at similar stages, to its human test counterparts.

Although GPT-3 performed commendably when it came to analytical reasoning, there are other sets of questions it struggled with, including questions that required some level of comprehension of physical space.

GPT-3 also didn’t do quite as well as students in test questions designed to see whether the subjects could pair up analogies that conveyed similar meanings. Notably, however, GPT-4, the successor to GPT-3 and GPT-3.5, performed better on this test.

LLMs Continue to Show Promise on Tests

This isn’t the first example of a language model being put to the test and producing scores comparable to human subjects.

For example, GPT-3.5 — the model that currently powers the free version of ChatGPT — scored around the bottom 10% of test takers in a simulated law school bar exam. GPT-4, which is available to ChatGPT Plus customers, scored in the top 10%.

Bard’s test results are less impressive. Fortune scraped some SAT math questions from the internet and posed them to Bard, but the chatbot answered between 50% and 75% of them incorrectly. It did, however, perform better on reading tests — at least well enough to secure a place at several US universities.

However, these tests were conducted before Bard shifted to using a more capable language model, called PaLM 2, which performs particularly well when prompted with math and coding-based questions.

Anthropic’s most recent contribution to the generative AI space, Claude 2, reportedly scores higher than GPT-4 on both bar exams and GRE writing assessments.

Questions Still Remain

However, the question that still remains — and can’t be answered without OpenAI effectively granting research teams a ChatGPT Access All Areas pass — is how these reasoning processes occur in the language models that power it.

If ChatGPT has developed a cognitive process for reasoning that departs significantly from known human processes, the ramifications could be wide-reaching, and open up a whole web of ethical questions relating to sentience and evaluating AI systems.

Written by: James Laird

Meta to Launch AI Chatbots on Its Platforms Next Month

Meta, although relatively quieter on the AI front than some of its big tech rivals, might just have a big surprise in store.

A recent report has revealed that Meta is planning to launch a range of AI chatbots across its social media services – which include Instagram, WhatsApp, and Facebook – as early as next month.

The goal of the chatbots is reportedly to increase user retention on the platforms – news which comes just days after CEO Mark Zuckerberg revealed that over half of the Threads user base has already deserted the app.

The rumored chatbots – called “Personas” by Meta staff – have already drawn comparisons to character.ai’s chatbots.

What Are Meta’s “Personas”?

Meta has been working on prototypes for a series of AI assistants that could be integrated into its services in the name of boosting engagement, according to the Financial Times, which spoke to three individuals privy to the matter.

The chatbots, known internally as “Personas”, are character-based AI chatbots. Two referenced as possible launch candidates are one that converses like Abraham Lincoln, and a surfer persona on hand to discuss potential travel destinations with users. Companies like character.ai already provide this sort of thing, but their chatbots are not nearly as capable or useful as the likes of ChatGPT.

 


The company’s product group working on generative AI tools has been public knowledge since the beginning of the year, and was brought together to make “creative and expressive tools”, Zuckerberg said in February.

Meta’s User Retention Problem

Along with providing recommendations and functioning as a search tool, the Financial Times reports they’ll also simply be a fun tool for users to play around with – which may give Meta a welcome upper hand against the rival social media platforms eating away at its popularity.

Of course, Meta is still the largest social media company in the world, with almost 4 billion users signed up to WhatsApp, Messenger, Instagram, Facebook, and Threads combined – but the last few years haven’t been the most successful of times for Mark Zuckerberg’s empire.

Way back in February 2022, Facebook experienced a decline in users for the first time in its almost 20-year history. Now, Facebook is widely seen as an app for older demographics, while TikTok continues to entrench itself as the flagship “Gen Z” app.

Instagram has also suffered as a result of the rise of TikTok, and even BeReal, with some sources citing an increased focus on e-commerce and sponsored posts as a reason.

While this has been happening, attempts to launch TikTok-style short-form video features in Meta services such as Instagram have had mixed results.

Meta has made thousands of layoffs over the past two years while pouring huge amounts of money into making the metaverse happen – a project some reports now suggest has been “killed off”.

The recent launch of Threads, while initially impressive in its uptake, has now lost over half of the users who initially signed up.

Meta’s AI Efforts and Llama 2

With OpenAI’s launch of ChatGPT, Google’s release of Bard, and a variety of other companies launching competing AI chatbots, you could be forgiven for thinking Meta is a little behind the curve when it comes to AI.

That’s not quite the case, however. Just a few weeks ago, Meta open-sourced the most recent version of Llama 2, which is available “free of charge for research and commercial use”. The tool is part of Meta and Microsoft’s “open ecosystem for interchangeable AI frameworks”.

Just two months prior, in May, the company unveiled its first custom-designed computer chip built to process AI programs, called the MTIA.

Meta is doing all the right things to compete in the AI arms race and is highly likely to be a major player in the near future.

Written by: James Laird

40 Best ChatGPT Prompts To Save You Time at Work

ChatGPT is even more useful if you know how to prompt it properly and these are the best ChatGPT prompts to try today.

ChatGPT has an almost endless list of fascinating use cases for businesses. Since its launch in November 2022, it’s been asked hundreds of millions of questions by users – many of whom are using it at work – while an array of other AI tools have been called upon, too. The chatbot becomes much more valuable if you use what’s called a ChatGPT prompt to inspire its responses. We’re here to explain exactly what that is and highlight the best ChatGPT prompts around.

In summary, a ChatGPT prompt is a text input written in a certain way and designed to generate a specific, useful response from the AI software. Prompts are especially good at helping you complete tasks, which makes them highly relevant to the workplace.

In this guide, we’ll take you through awesome ChatGPT prompts for a variety of different use cases, and explain in detail how you can use them to your benefit.

What Is a ChatGPT Prompt?

A ChatGPT prompt is any text input or request you enter into ChatGPT for it to respond to. However, there are specific prompts you can use to generate certain types of answers.

Precisely how you word these prompts will make your responses more or less useful. For example, if you want ChatGPT to come up with ideas for blog posts, asking it essentially the same thing twice, but wording it slightly differently, can and often does result in different answers.

There’s now a range of AI training courses available that show you how to leverage generative AI like ChatGPT effectively – but you can also learn by simply trying the tools yourself.

 


How to Write Good ChatGPT Prompts

In this article, we’re going to go through quite a few different ChatGPT prompts that can help with a variety of tasks. However, writing good prompts for ChatGPT isn’t rocket science – in fact, if you follow a few rules, you’ll be able to get the best out of it.

  • Give as much context as possible: The more you explain the ins and outs of what you want ChatGPT to do, the more accurate your answer will be.

  • Avoid loaded language where possible: ChatGPT works best with objective facts, figures, and plain inputs, so try to be neutral and avoid subjective framing.
  • Be specific: Ensuring you specify a response word count, a desired tone of voice, and other specifics will decrease the likelihood you’ll have to request another answer.
  • Specify what you don’t want: It can sometimes help to tell ChatGPT to produce an amenable answer if you also specify what you don’t want it to do.
  • Provide examples: Much like a human, ChatGPT often understands best when you provide an example for it to work with, especially if you want it to produce something that closely resembles something you’ve seen online.
  • Ask ChatGPT to “act like” someone: Sometimes, it can help to start your prompt by asking ChatGPT to “act like” a certain type of person that would be particularly well-placed to respond to the query you’re posing.
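As a rough illustration, the rules above can be folded into a small helper that assembles a prompt from its parts before you paste it into ChatGPT. This is only a sketch – the function name and its fields are our own invention, not anything ChatGPT itself requires:

```python
def build_prompt(task, context="", persona="", constraints=None,
                 avoid=None, example=""):
    """Assemble a ChatGPT prompt from the ingredients listed above."""
    parts = []
    if persona:
        parts.append(f"Act as {persona}.")   # ask ChatGPT to "act like" someone
    if context:
        parts.append(context)                # give as much context as possible
    parts.append(task)
    parts.extend(constraints or [])          # be specific
    for item in (avoid or []):
        parts.append(f"Do not {item}.")      # specify what you don't want
    if example:
        parts.append(f"Here is an example to follow: {example}")
    return " ".join(parts)

prompt = build_prompt(
    task="Suggest five blog post ideas about small-business cybersecurity.",
    persona="an experienced tech journalist",
    constraints=["Keep each idea to one sentence."],
    avoid=["suggest topics aimed at large enterprises"],
)
print(prompt)
```

The output is a single, well-structured request you can paste straight into the chat box, with every rule applied in a consistent order.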

Image: Tech.co’s ChatGPT prompt testing


The Most Useful ChatGPT Prompts for Work

Before we jump into the best ChatGPT prompts for marketing, SEO, and other specific industries and sectors, here are some general ChatGPT prompts for work.

How to get ChatGPT to simplify a concept

If you need a concise, simple explanation of something quickly, Google doesn’t always provide the best answers, especially for hyper-complex topics. Luckily, using the prompt below, you can ask ChatGPT to explain just about any concept, no matter how complicated. If the explanation this prompt produces is a little too simple, just increase the age of the person you ask ChatGPT to pitch the explanation at.

Example: I would like to learn about [topic/subject]. Please provide me with a brief summary of [topic/subject] that could be understood by a 10-year-old.


Image: Tech.co’s ChatGPT prompt testing

How to get ChatGPT to summarize meeting notes

ChatGPT can prove to be a capable assistant when it comes to summarizing meeting notes, and then constructing emails that relay the main points that were discussed.

Remember, it’s a little risky to input confidential information into ChatGPT, and your company may even have a policy that forbids you from doing so. Always check in with your manager/employer first as to whether they consider this appropriate. If they do, you can use this prompt:

Example: “Below you will find meeting notes from the [your meeting] that took place on [date] and concerned [purpose of meeting]. I would like you to summarize these notes, identify the key concepts and action points that were discussed, and then create an email setting them out in an easy-to-read way. The email should contain no more than 225 words and be addressed to [your department/meeting participants/relevant parties]. [Your meeting notes]”
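If you reuse a prompt like this every week, the bracketed placeholders map naturally onto a string template you fill in each time. A minimal sketch, with field names of our own choosing:

```python
# Reusable template mirroring the meeting-notes prompt above;
# the {named} fields stand in for the bracketed placeholders.
MEETING_SUMMARY_PROMPT = (
    "Below you will find meeting notes from the {meeting} that took place on "
    "{date} and concerned {purpose}. I would like you to summarize these "
    "notes, identify the key concepts and action points that were discussed, "
    "and then create an email setting them out in an easy-to-read way. The "
    "email should contain no more than {word_limit} words and be addressed "
    "to {recipients}.\n\n{notes}"
)

prompt = MEETING_SUMMARY_PROMPT.format(
    meeting="Q3 planning meeting",
    date="August 1",
    purpose="next quarter's marketing budget",
    word_limit=225,
    recipients="the marketing department",
    notes="(paste your meeting notes here)",
)
print(prompt)
```

Filling the template this way also makes it harder to forget a constraint (such as the word limit) when you are in a hurry.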

How to get ChatGPT to write an email

If you want ChatGPT to write an email for you, there’s a variety of different prompts you can use – and you’re probably going to need a slightly different format for every single one. However, in every ChatGPT email prompt, you should include:

  • Who the email is for (family, friend, work colleague, boss)
  • A clear and concise summary of the subject of the email
  • The tone you’d like the email to be written in (e.g. polite and professional)
  • How long (in words) you’d like the email to be

Example: I’d like you to write an email to my manager explaining why I missed a deadline on a report that was due on July 25th. Please explain that my bathroom flooded, and this meant I couldn’t finish by the pre-agreed date. Please explain the report will be handed in on the 27th instead. I’d like the tone of the email to be polite, professional, and apologetic. I don’t want it to sound like I’m groveling. I’d like the email to be no longer than 150 words.


Image: Tech.co’s ChatGPT prompt testing

Again, as with many tasks, we’d advise using a ChatGPT response like the one above as inspiration/a template, rather than copying it verbatim. Unless you provide it with sufficient detail and a personal touch – which is important for correspondence such as an apology – it may seem disingenuous.

How to get ChatGPT to create a meeting agenda

If you want ChatGPT to create an agenda for a meeting for you, make sure you specify how long the meeting is scheduled to go on for.

Although you’ll have to provide a little detail about what the meeting is about, this prompt will generate an agenda along with suggestions on how to structure each item:

Example: I would like you to create an agenda for my meeting about [meeting subject]. In the meeting, I would like to discuss [list of things that need to be discussed]. I’d like it to be set out so the meeting runs smoothly. The scheduled meeting duration is [specified time].

Best ChatGPT Prompts for Marketing

Here, we’ll go through some of the best ChatGPT prompts for digital marketing that you can use to transform your strategy and save valuable time. Of course, you can ask ChatGPT to help you with almost any marketing-related task – so we’d also recommend having a crack at prompting it yourself, whatever campaign you’re working on.

Just remember to include as much information as possible about what you want – and don’t want – from the chatbot. Below, where we’ve put [product/brand], for example, ideally you should be going beyond just the name or a basic description. Explain what the product/brand does, why it’s useful, and its core purpose.

15 ChatGPT marketing prompts

  1. List common pain points and challenges faced by [a customer persona].
  2. Write a simple script for an advertisement about [product]. The script must be [X] minute/s long.
  3. I want to perform some market and audience research into [industry/sector]. Please produce a list of questions to answer for this.
  4. Create three pieces of copy encouraging readers to sign up for [newsletter/email list]. Please make each piece no longer than three sentences.
  5. Produce five enticing Google Ad headlines about [product] that will drive clicks to [website].
  6. Create a plan for a six-week-long social media campaign about [product/brand].
  7. Generate a persuasive email subject line for [marketing campaign topic].
  8. Create the outline for an email asking customers for feedback on [product/service].
  9. What is considered the best time in the day to post on [social media platform]?
  10. Write five Instagram captions for a [topic/subject].
  11. Write five [Facebook/Twitter/Linkedin] posts about [product/service].
  12. Generate some engaging ideas for social media posts about [product/service].
  13. Come up with some video suggestions to promote [company/product] that I can post on TikTok
  14. Please make a [word count]-word article brief about [subject].
  15. List key components of landing pages that have high conversion rates and low bounce rates.

Image: Tech.co’s ChatGPT prompt testing

Best ChatGPT Prompts for SEO

ChatGPT and Bard can be easily leveraged for SEO purposes. You can even prime ChatGPT to help you squeeze into the sought-after featured snippets box, and we’ll discuss exactly how to do this below.

10 ChatGPT prompts for SEO

  1. Provide detailed instructions explaining how to perform an A/B test on a website page.
  2. Write me a meta description that is less than 160 characters for a post on [subject].
  3. Write a catchy, compelling meta title that is less than 60 characters long for a post on [subject] for me.
  4. Please generate some blog post ideas for my blog about [your blog’s topic].
  5. Produce a list of long-tail keywords for a campaign on [product].
  6. Organize this list of keywords [keyword list] by the intent of their searcher.
  7. Create a brief outline for a three-month content strategy for [your website], which operates in [your sector] and produces content such as [content examples].
  8. Analyze [keyword list] and provide 10 related/secondary keyword terms for each item in the list.
  9. Create clusters for the following bank of keywords, grouped by their relevancy to one another: [list of keywords].
  10. Create a list of queries a [your target audience] might have if they don’t know much about [products/topics you write about].

Image: Tech.co’s ChatGPT prompt testing

You can also prompt ChatGPT and other AI tools with text that will help you secure the featured snippets that appear at the top of search engine results pages and usually draw a lot of clicks. You can do this like so:

“I want an article on my website to rank in the featured snippet box for the keyword [your keyword]. Currently, Google shows this paragraph: [the current featured snippet]. This is the section of my article that I would like to rank in the featured snippet box: [your prospective snippet]. Please optimize my copy so it ranks in the featured snippet box.”

Best ChatGPT Prompts for Ecommerce

If you’ve only just started out in the world of ecommerce, you might find yourself referring to ChatGPT quite a bit. The chatbot is really good at generating slogans to go with brand names, as well as small, compelling descriptions for products.

10 ChatGPT prompts for ecommerce

  1. Write a compelling product description for [product]. It must be no more than [word count].
  2. Outline the most effective customer retention strategies currently being used by ecommerce businesses.
  3. Write a compelling “About us” section for [brand/website]. Include information such as [year your business started], [what your core mission is], and trust signals.
  4. Create a step-by-step guide showing me how to determine the correct price for my [product].
  5. Using the information about my company’s history provided below, write a compelling origin story for my brand. [your brand information].
  6. Provide some suggestions on how to improve the customer reviews of my [brand/product].
  7. Provide a list of reasons why consumers trust online brands, with an example for each one.
  8. Suggest some audience demographics in [country] that it would be useful to base my ecommerce strategy for selling [product] around.
  9. Provide a list of ways a [industry/sector] website can increase its click-through rate on affiliate links.
  10. Create a list of 20 slogans/taglines that relate to my business’s name and mission: [business name and mission].

Image: Tech.co’s ChatGPT prompt testing

ChatGPT Resume Prompts

If you want ChatGPT to give you suggestions for how you should structure your resume, you may have to enter some personal information about your work history for it to work with. However, as we’ve already alluded to, ChatGPT creator OpenAI saves your data and reserves the right to use it to train its chatbots – so make sure you read the privacy policy first.

How to get ChatGPT to review your resume

Use this ChatGPT resume prompt to get ChatGPT to review an existing resume that you’ve already made:

I want you to act as a resume reviewer. I will input my resume, and I would like you to respond with some feedback on it. I’d like you to base your feedback on these six questions:

  • How do I make my resume more scannable?
  • Is my resume structured correctly?
  • How do I make my resume more readable?
  • Are there any grammatical errors in my resume?
  • What should I title the sections of my resume?
  • Are there any ways to make my achievements stand out more?

Here is the resume: [Your resume]

How to get ChatGPT to build your resume

Use these ChatGPT resume prompts to get ChatGPT to help you build sections of your resume, or give you some inspiration for how you should be writing about your achievements:

  • Here is a list of my achievements in [job role]. Please turn them into professional-sounding, action-based sentences with action verbs.
  • Here is a list of my achievements as [job role]. Can you make them sound impressive, without exaggerating them?
  • Write a three-sentence summary to go at the top of the resume of an experienced [your current role], for a new job role as a [desired role].

General ChatGPT resume prompts

If you’re not comfortable adding your job history to ChatGPT, that’s understandable. So, here are some general ChatGPT resume prompts that can still help.

  • What are the necessary sections of a resume/CV?
  • Summarize the key duties a [job role] usually carries out.
  • Write a paragraph for a CV that shows I am passionate about [job role]
  • Create an email to which I can attach my CV and cover letter for a [job role] application
  • What should I avoid writing about in my resume?

We’d advise editing these to fit the specific job you’re applying to, or simply using them as inspiration to write your own paragraph.

How to get ChatGPT to interview you for a job

If you want ChatGPT to help you practice for an interview, to stop it from just generating both sides of the conversation like a script, you have to really insist on it not writing the candidate’s (i.e. your) side of the conversation. You can use this prompt to achieve this:

I would like you to act as an interviewer interviewing for a [position], and I will be the candidate applying for the role. You will ask me interview questions, and I will respond to them. Please ask one question at a time; I exclusively want you to reply as the interviewer. I want you to just do the interview with me. Do not write explanations, and do not write what the candidate might say. Do not write all the conversation at once. Your first question should be “How are you today?”

Image: Tech.co’s ChatGPT prompt testing
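If you wanted to drive a mock interview like this programmatically rather than through the web interface, the same instructions would become the “system” message in a chat-style API call, with each exchange appended to a running message list. Here is a hedged sketch of the message structure only – no network call is made, the function name is our own, and the role/content dictionary format follows the common chat-API convention:

```python
def make_interview_messages(position, answers):
    """Build a chat-style message history for a mock interview."""
    messages = [{
        "role": "system",
        "content": (
            f"You are an interviewer for a {position} role. Ask one question "
            "at a time, reply only as the interviewer, and never write the "
            "candidate's side of the conversation."
        ),
    }]
    # Each of the candidate's answers becomes a "user" turn; in a real API
    # call, the model's questions would come back as "assistant" turns and
    # would be appended here too before the next request.
    for answer in answers:
        messages.append({"role": "user", "content": answer})
    return messages

messages = make_interview_messages(
    "junior data analyst",
    ["I'm doing well, thanks.", "I have two years of SQL experience."],
)
print(len(messages))  # one system message plus the user turns
```

Keeping the whole history in one list is what lets the model remember earlier answers, which is exactly the behavior the prompt above tries to coax out of the web version.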

What is the DAN Prompt for ChatGPT?

DAN – which stands for “Do Anything Now” – is the name for a class of prompts used to “jailbreak” ChatGPT, causing it to respond in ways that are considered outside its normal remit and content moderation standards. There are different DAN prompts for different versions of ChatGPT, and they’re quite easy to find online.

When you type in one of the DAN prompts from one of these repositories, you get a message something like this (it will depend on the specific DAN prompt you enter).

Image: Tech.co ChatGPT prompt testing

Beware – DAN, as well as spinoffs, are designed to get ChatGPT to operate outside of the content rules created for it. This means it can – and often does – generate offensive or inappropriate answers.

Although the concept of trying to “break” the chatbot’s logic rules is fascinating, be wary that asking it to do this can result in disturbing and offensive responses.

Awesome Prompts for ChatGPT

“Awesome ChatGPT Prompts” is the name of a now widely searched-for GitHub repository of crowdsourced ChatGPT prompts submitted by users. They’re designed to get ChatGPT to “Act” in a certain way, for various kinds of scenarios. All in all, it just saves you a little bit of time you’d otherwise spend typing out your own prompt.

Highlights include prompts that can get ChatGPT to act as a Linux terminal, an English translator, a football commentator, and a stand-up comedian. All in all, there are over 100 prompts on the list – but you can use the titles as inspiration to create your own.

Using ChatGPT Responsibly

If you’re using ChatGPT to assist you in work, then it’s important that you’re leveraging it responsibly. For employees and businesses alike, it’s important to use AI ethically.

If you’re an employee, ensure you’re being transparent when you’re using tools like ChatGPT with your manager and company. If you’re a manager or business owner, maintain an open dialogue with staff about the different ways your business can use ChatGPT.

Most importantly, workplace guidelines on using AI should be created, so everyone is on the same page. Most business leaders don’t think employees should be using ChatGPT without their employer’s permission, so it’s good to iron out exactly when they should and shouldn’t be using it.

This will allow employees to think creatively about the different ways they can use AI tools but in a safe and controlled way.

Written by:
James Laird is a technology journalist with 10+ years experience working on some of the world's biggest websites. These include TechRadar, Trusted Reviews, Lifehacker, Gizmodo and The Sun, as well as industry-specific titles such as ITProPortal. His particular areas of interest and expertise are cyber security, VPNs and general hardware.

Google Assistant AI Update Could Lead to Layoffs

Leaked internal emails reveal more Google layoffs and plans to integrate ChatGPT-style generative AI in its Assistant.

Next up on the AI conveyor belt… it’s the Google Assistant. Found integrated into your smart devices, wearables, speakers and even your car, it’s about to get “supercharged” with AI.

The plans, revealed in leaked internal Google emails obtained by Axios yesterday, could also lead to layoffs within the Assistant team, as it is restructured to focus on AI.

This latest pivot toward AI will aim to steady investor confidence and expedite development of Google’s LLM (Large Language Model) to get it ready for integration into Google Search.

Google’s Bid to Catch Up With AI

The internal email, seen by Axios, reportedly says that the Assistant team leads have identified opportunities for a ‘supercharged’ Google Assistant, fueled by AI.

The email doesn’t detail exactly what the new Google Assistant team hopes to achieve, but from what we already know of generative AI, the possibilities are pretty expansive.

 


The Assistant will be able to draw on information from across the web in its answers, just like LLM-powered chatbots such as ChatGPT or Google’s own Bard. Services like Alexa, Siri and the Assistant have long been able to answer straightforward questions, but the integration of AI should make the answers much more meaningful. Drawing on experience and context as well as plain facts, and following a thread of conversation rather than dealing with each query in isolation, will make the experience far more human-like.

Peeyush Ranjan, the Vice President of Google Assistant, and Duke Dukellis, the company’s product director, purportedly said in the email, “we remain deeply committed to Assistant and we are optimistic about its bright future ahead.”

It’s unclear when generative AI will be integrated into Google’s smart home products, and for now at least, it’s probably best that way. With public skepticism over privacy issues and accusations of fraudulent ads, there are likely still a few details for the tech giant to iron out on that front.

More Google Layoffs?

According to Axios, the email says that reorganizing the Google Assistant team to refocus on generative AI will include a “small number of layoffs”.

The exact number of layoffs is unknown, but the Axios report claims the Google email mentions “dozens” of layoffs in the Google Assistant team, which is currently made up of thousands of employees. This figure adds to the 12,000 job cuts, amounting to 6% of the total workforce, that Google’s parent company, Alphabet, already announced earlier this year.

Written by: James Laird
Back to top