US Remote Workers Earn $8,500 More Than Office Staff

New report sheds light on how salaries for office, remote and hybrid jobs compare across 30 major US cities.

Remote workers have been found to earn more money than those who work in-office or in a hybrid capacity, according to a new study.

The research by call center software provider Ringover found that remote workers earned, on average, $8,553 more annually than those in roles requiring full-time office attendance, based on its analysis of 15,800 job listings across 30 major US cities.

Hybrid roles lag only slightly, with average salaries just $140 below the benchmark set by fully remote roles.

Remote vs Office Working: Which Pays More?

It seems clear from the study that companies that offer remote working pay more, whether they offer the perk on a full-time or hybrid basis. However, it’s worth noting that there are a number of variables at play here and no two job markets are the same.

Geography was revealed to be a key factor in Ringover’s report, with remote staff in Baltimore, MD earning nearly 40% more than their cubicle-based counterparts. Indianapolis, IN and Oklahoma City, OK were the cities with the next largest pay gaps, at 29.76% and 28.90%, respectively.

Get Your Data Back!

Incogni by Surfshark can help you reclaim your information from third-party vendors.

Job role was also a factor in how employers compensated remote vs office positions. Perhaps ironically, office managers enjoyed the biggest salary boost when working fully remotely, earning on average 31.71% more than if they attended a physical workplace, with PR managers a close second (30.16%). At the other end of the spectrum, junior web developers made more in the office, enjoying salaries 12.52% higher than their remote equivalents.

Why Do Remote and Hybrid Working Jobs Pay More?

If you’ve hit the gas station lately, you’ll be well aware of the injustice of remote working jobs paying more than those that require employees to commute. Fairness aside, one reason why remote and hybrid roles can pay more is because it’s cheaper for companies to have their employees provide their own workspace.

Specifically, full-time remote work means little to no office overheads for businesses, while clever management of hybrid working schedules allows them to dramatically reduce what they pay for office space and everything that comes with it.

For fully remote teams, settling the Zoom vs Microsoft Teams debate may be the only workspace decision left to make. Among other things, businesses certainly don’t have to worry about providing free coffee, paying for electricity, or covering cleaning and building maintenance costs.

Salary Transparency is the Real Issue

An important caveat to all of these findings is that, despite Ringover’s data analysts reviewing 15,800 job listings, they only found salaries attached to 30 ‘remote-capable’ jobs. This is indicative of a larger trend, with just 43.15% of roles advertised online offering a salary range or exact figure.

Tech hotbed Seattle, WA was the best city for salary transparency, with 76.73% of listings carrying salary information, followed by San Diego, CA (73.71%). Job hunters in Charlotte, NC and Houston, TX shouldn’t hold out much hope, though, with salary details on just 11.76% and 25.64% of job ads, respectively.

This hints at the much bigger issue of salary transparency, the lack of which is a problem for job hunters all over the world. We understand businesses want to get their workplace strategy right in the post-Covid era, and the now de rigueur practice of flagging flexibility in job posts is admirable. Ask most prospective employees and they’d probably agree it’s time salaries followed suit.

Written by:
James Laird is a technology journalist with 10+ years experience working on some of the world's biggest websites. These include TechRadar, Trusted Reviews, Lifehacker, Gizmodo and The Sun, as well as industry-specific titles such as ITProPortal. His particular areas of interest and expertise are cyber security, VPNs and general hardware.

LinkedIn Launches New Approach To Detect Fake Profiles

In a bid to tackle its fake profile problem, the platform has created an AI image detector to root out fake headshots.

LinkedIn has this week announced a new AI image detector to catch fake profiles, which it says has a 99% success rate.

The platform’s Trust Data Team claims the new approach can catch falsified profile images and remove fake accounts before they reach LinkedIn members.

This latest security innovation comes a few months after it was revealed LinkedIn was featured in over half of Q1 2022 phishing attacks.

Why are People Creating Fake LinkedIn Profiles?

Like Twitter, LinkedIn has had trouble in recent times with the number of fake profiles on its site. In the first half of 2022 alone, the platform detected and removed 21 million fake accounts.

But why exactly are all these fake profiles popping up? For some, it’s to build trust among visitors to their websites; for others, it’s rooted in SEO, under the false belief that Google ranks articles with named authors higher than those without.

Whatever the motivation, there’s no doubt that advances in AI have made creating a fake profile easier than ever.

“We are constantly working to improve and increase the effectiveness of our anti-abuse defenses to protect the experiences of our members and customers. And as part of our ongoing work, we’ve been partnering with academia to stay one step ahead of new types of abuse tied to fake accounts that are leveraging rapidly evolving technologies like generative AI.” – LinkedIn’s announcement on its new approach 

Fake Accounts Have Become Harder to Catch

The new approach follows lengthy research into the structural differences between AI-generated faces and real ones – something most people don’t know how to spot.

LinkedIn keeps a close eye on unwanted activity that could pose a security risk, such as fake profiles and content policy violations. Until recently, however, AI-generated images had proved too sophisticated to detect reliably.


The key to solving this has been knowing exactly what to look for. According to LinkedIn, AI-created images share common patterns – what it calls ‘structural differences’ – that real photos lack.

An example in the blog post references a test comparing 400 AI-generated images with 400 real ones. Averaged together, the real photos dissolved into a blur, while the composite of AI-generated faces stayed comparatively sharp – showing that the areas around the eyes and nose sit in almost exactly the same place in fake photos.
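LinkedIn hasn’t published its detector’s code, but the averaging intuition – fake faces put key landmarks in nearly identical positions, so they average into a sharp composite while varied real photos wash out – can be sketched with synthetic toy “faces”. Everything below (the blob faces, the sizes, the contrast metric) is invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_face(eye_x, eye_y, size=32):
    """Toy 'face': a dark image with two bright 3x3 eye blobs."""
    img = np.zeros((size, size))
    for dx in (-5, 5):  # left and right eye
        img[eye_y - 1:eye_y + 2, eye_x + dx - 1:eye_x + dx + 2] = 1.0
    return img

# "AI-generated" faces: eyes land in the same spot every time.
fake = np.mean([make_face(16, 12) for _ in range(400)], axis=0)

# "Real" photos: eye positions vary from image to image.
real = np.mean(
    [make_face(16 + rng.integers(-4, 5), 12 + rng.integers(-4, 5))
     for _ in range(400)],
    axis=0,
)

# A sharp composite keeps high pixel contrast; a blurred one flattens out.
print(f"fake composite contrast: {fake.max() - fake.min():.2f}")  # 1.00
print(f"real composite contrast: {real.max() - real.min():.2f}")  # well below 1
```

The aligned “fake” set averages to a crisp image while the jittered “real” set smears out, which is the structural signal a detector can latch onto.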

While AI is relentless in generating new security risks, LinkedIn’s latest development can be chalked up as a win in the fight against fake accounts.


US Lawyers Fined for Using Citations Faked by ChatGPT

The lawyers used six chatbot-generated legal cases in an aviation injury claim, which turned out to be wholly fabricated.

A Manhattan district judge has ordered two lawyers – Steven Schwartz and Peter LoDuca – and their law firm Levidow, Levidow & Oberman to pay a $5,000 fine for using fictitious legal research created by ChatGPT.

Despite asking the chatbot to verify the cases, ChatGPT claimed the fabricated citations were legitimate.

The fake citations were used in an aviation injury claim and come hot on the heels of a wider conversation about AI plagiarism and chatbots’ potential to spread fake news and disinformation.

ChatGPT Doubled Down on its Fake Stories

Schwartz admitted he’d used ChatGPT to assist with the legal brief for passenger Roberto Mata’s case against Colombian airline Avianca. Mata’s claim of injury from a refreshment trolley was originally dismissed because the statute of limitations had expired. His lawyers, however, contended that the lawsuit should continue and referenced several previous court cases that supported their argument.

Those court cases, however, didn’t exist. They had been suggested by ChatGPT and either involved airlines that were totally made up or misidentified the judges involved. Schwartz wasn’t able to confirm the sources of the cases using his law firm’s usual methods, but included them as citations regardless.

In a written opinion, judge P. Kevin Castel saw no wrongdoing in using artificial intelligence for legal work assistance, but highlighted lawyers’ duty to ensure their filings were accurate. 

“Technological advances are commonplace and there is nothing inherently improper about using a reliable artificial intelligence tool for assistance, but existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings.” – Judge P. Kevin Castel

According to the law firm’s statement, the lawyers “respectfully” disagreed with this ruling, saying “We made a good-faith mistake in failing to believe that a piece of technology could be making up cases out of whole cloth”.

Schwartz – who has practiced law in New York for over 30 years – had apparently never used ChatGPT before and was unaware that its answers could be fake. He had asked the chatbot to verify that the cases were real, which it falsely claimed they were.

LoDuca’s lawyer said they were reviewing the decision. In a separate written opinion, the judge threw out Mata vs Avianca altogether, confirming the statute of limitations had indeed expired.

A Stark Warning to ChatGPT Users

Despite a few red faces, the low stakes of this particular lawsuit and the modest fine are perhaps a blessing. Had it been a more serious legal case, the repercussions could have been far worse. Instead, it can simply serve as a warning to others in the legal industry.

In fact, whatever the industry, this story neatly illustrates the inaccuracies AI chatbots are prone to, and the dangers of using them blindly.


Chatbots are trained on a wealth of internet data, but their sources aren’t always available. Pair this with the fact that they work more like a predictive text tool – estimating the most likely word to follow a user’s prompt – and you can see how vulnerable they are to factual errors.
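The “predictive text” point can be made concrete with a toy bigram model. This is an illustrative sketch, not how ChatGPT is actually built: it simply counts which word follows which in a made-up corpus, then always emits the statistically most frequent continuation, with no notion of whether the result is true.

```python
from collections import Counter, defaultdict

# Hypothetical "training corpus": count which word follows each word.
corpus = "the court cited the case and the court dismissed that case".split()
following = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    following[prev_word][next_word] += 1

def predict_next(word):
    """Return the statistically most likely next word - plausible, not verified."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "court" - the most frequent continuation, true or not
```

A large language model does the same thing with vastly more context and parameters, which is why a fluent but entirely fabricated citation is a natural failure mode.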

What Does the Future of AI in the Workplace Look Like?

Back in April, ChatGPT faked a sexual harassment claim and named a real American law professor as the accused. Similarly, Chinese authorities arrested a man for allegedly using the chatbot to create fake news articles, with one reporting a made-up fatal train crash.

Concerns continue to grow over AI’s ability to produce accurate content – and over whether fake content can reliably be detected – and calls for government intervention have been made.

Apple and Samsung have already blocked employees from using AI platforms. Italy has already banned ChatGPT over data security concerns, and Germany is in talks to do the same. Earlier this year, the Biden administration said it was looking into AI’s impact on national security and education.


Facebook and Instagram to Restrict News Sharing in Canada

A new Canadian law looks set to force online platforms to pay news organizations for linking to their content.

Meta has this week announced it will begin restricting news on Facebook and Instagram, following parliament-approved legislation that will require it to pay news publishers.

The controversial bill, called the Online News Act, cleared the Senate on Thursday, although Meta had already tested limiting access for some Canadian users.

While the law doesn’t prevent news from being published on the platforms, it will require them to negotiate commercial deals and obtain a license for the content. The technology giant’s response has been to drop news from its platforms.

The Law is Considered “Fundamentally Flawed”

While the law isn’t exclusive to Meta, the technology giant is speaking up the loudest, stating it is “fundamentally flawed legislation that ignores the realities of how our platforms work”. Google similarly called the bill “unworkable” in its current form.

“A legislative framework that compels us to pay for links or content that we do not post, and which are not the reason the vast majority of people use our platforms, is neither sustainable nor workable,” a Meta spokesperson said.

While this argument rings true, the bill has been designed to “enhance fairness” and ensure the digital news marketplace’s sustainability, according to the bill itself. In recent times, concerns have been raised that tech companies have been cornering the market for online advertising, and taking away much-needed revenue from news sites. 

An independent parliamentary budget watchdog analyzed the bill and estimated that news outlets could receive around $329 million – roughly $250 million in US dollars – per year from digital platforms.

Meta confirmed these changes won’t impact any other services for Canadian users.

The Legislation isn’t the First of its Kind

Australian users were blocked from sharing or viewing news on Facebook in response to a similar law back in 2021. However, after talks with the government and amendments to the legislation, Facebook restored its news content.

So while tech giants may hold out hope that something similar happens in Canada, the bill’s most avid supporters seem unlikely to want to enter future negotiations. Canadian politicians have been quick to praise the move, which is widely seen as a step toward market fairness.


“If the government can’t stand up for Canadians against tech giants, who will?” – Canadian Heritage Minister Pablo Rodriguez

However, Michael Geist, a law professor at the University of Ottawa, has warned that the move could be detrimental to independent media outlets, which may not achieve the same reach without social virality.

The Online News Act is expected to take effect in Canada in six months.


TikTok Takes On Amazon With New E-Commerce Offering

Parent company ByteDance is experimenting with selling its own products through the video platform.

TikTok’s UK users are starting to see a new shopping feature called ‘Trendy Beat’ roll out within the app.

Currently, shopping on the platform consists of brands selling their products directly, with TikTok taking a commission on each sale. The new feature, however, would see TikTok selling products of its own.

Despite a hectic first half of the year, with whispers of a total US ban, TikTok is pressing on with innovations that look set to make it bigger than ever. This shift in e-commerce strategy – which until now has largely centered on the TikTok Shop marketplace – looks set to be the foundation of a new offering that will rival sites such as Amazon and Shein.

TikTok Model Appears Similar to Amazon’s

Trendy Beat is an in-app shopping section and its products will be shipped and sold by a subsidiary of TikTok’s parent company ByteDance. It’ll offer items that are popular within trending videos, such as clothing.

The model, which is still in testing, appears to be similar to how Amazon Basics and Shein promote and sell their best-selling products.

“We are always exploring new ways to enhance our community’s experience, and we are in the early stages of experimenting with new shopping features,” – TikTok on its new Trendy Beat feature

Instead of earning commission on other brands’ sales, ByteDance will take all proceeds from sales made through Trendy Beat. This addition to its e-commerce offering will likely help justify the platform’s $300bn valuation – a particularly crucial point ahead of its expected public offering in the next couple of years.

Interestingly, while TikTok ramps up its shopping efforts, other platforms are scaling theirs down. Earlier this year, Facebook and Instagram dropped their live shopping features, and the latter removed its shopping tab. These U-turns have not deterred the short-form video platform from pressing on with development, though.

Testing is UK-Only… For Now

Right now, Trendy Beat testing is for UK users only. However, trademark applications filed last month suggest it won’t be long before the feature pops up in the US. The application predominantly lists clothing items for sale, such as dresses, hats, footwear, t-shirts and jackets.

US users who are keen to spend their money on TikTok needn’t worry too much, as the platform is focused on bringing even more brands to its TikTok shop offering. This push aims to make the app a complete shopping destination – from product discovery right through to checkout and delivery.


Trendy Beat capitalizes on this complete experience by leveraging TikTok’s knowledge of trending and popular products. According to reports, ByteDance will then expedite production of those goods through an already-established network of suppliers.

The Runaway Success of TikTok Shop

Though it’s still early days for TikTok’s voyage into e-commerce, its influence on shoppers’ habits is undeniable. The viral phrase “TikTok made me buy it” has racked up 7.4bn views, with the matching hashtag hitting 59.3bn.

The platform has also contributed to the exponential growth of influencer marketing, with top creators able to help brands connect with highly engaged potential customers.

According to TikTok, users are 1.3 times more likely to buy through the platform directly. More than that, 67% of its users are inspired to make a purchase while using the app. ByteDance turning the platform into a one-stop-shop and monetizing it wholly for itself feels like a natural next step.

Looking for a more traditional e-commerce route? Check out our Wix vs Shopify comparison.


Tech Companies Are Scaling Back DEI Efforts, Report Finds

20% of hiring leaders surveyed say they have scaled back their DEI teams within the last 12 months in a "worrying" trend.

20% of tech companies have reduced their DEI teams within the last year, a new study finds.

Diversity, equity, and inclusion efforts grew in popularity across the tech industry in the wake of 2020’s racial justice protests. Three years later, however, those same companies are scaling back their efforts. These DEI measures, it seems, were “more show than substance” for at least a meaningful segment of the companies in question.

Layoffs are now the norm across the tech industry, highlighting which teams are considered essential and which are not.

Hiring Biases and Wage Gaps Remain Concerns

The new data comes from recruiting platform Hired, which recently released a report surveying 229 hiring leaders. Of these, 20% said they had scaled back DEI teams within the last 12 months. At the same time, wage gaps widened in 2022 both for women and for men within all racial and ethnic groups.

In addition, 12% of respondents said their DEI programs remain “more at risk for cutbacks in the future if the economy tightens.”

One major signal indicates that biases are returning to form after a few years of trend reversals: more tech businesses are sending interview requests only to men. The percentage was 38% in 2022, up from 37% in 2021, though still lower than the 43% that held steady across 2019 and 2020.

Salary Transparency Laws Might Help

It’s not all bad news, as 51% said their DEI teams were a “must-have.” Plus, new laws mandating some measure of disclosure of salary bands might help to push businesses towards more equitable practices.

According to Hired platform data, these laws have “generally positive” effects, with gender wage gaps narrowing in San Francisco, Los Angeles, and New York City.

Speaking to MarketWatch, Erica Yamoto, senior vice president of marketing at Hired, said that “we all read the headlines. We see what’s going on in the anti-DEI movement. We see it in our survey data,” adding that the trend is “definitely worrying.”

Let’s Not Forget Diversity Makes More Money

A 2021 McKinsey report found that prioritizing diversity can result in 36% higher financial returns.

And it’s not the first analysis that has linked racial diversity in management to positive financial outcomes, either, which hints that 2023’s retrenchment has more to do with systemic racism than market forces.

The new report showing a retreat from DEI efforts is another indicator that the US is still unable to take three steps forward without taking two steps back — at least when it comes to addressing the racial inequalities that have shaped the nation for centuries.


Amazon “Sabotaged” User Attempts to Cancel Prime, FTC Alleges

Prime cancellations dropped 14% in 2017 after the shady tactics were introduced, says the FTC.

The Federal Trade Commission is suing Amazon, claiming the ecommerce giant used shady practices to get customers to stay subscribed to Prime even when the customers were trying to cancel their subscription.

According to the FTC, data shows that the “dark patterns” Amazon “sabotaged” customers with paid off: Prime cancellations dropped 14% after their implementation.

The FTC has another case lined up, too. Google has filed a complaint with the FTC against Microsoft, claiming to have documented anticompetitive practices by the company.

Amazon’s Prime Subscription Process Is In Question

According to the FTC, Amazon intentionally complicated its cancellation process, in violation of the FTC Act and the Restore Online Shoppers’ Confidence Act.

“Amazon tricked and trapped people into recurring subscriptions without their consent, not only frustrating users but also costing them significant money” -FTC Chair Lina Khan, in a CNBC statement.

The FTC also says that its complaint details multiple instances in which Amazon could have changed its allegedly coercive practices and instead chose to continue them.

Amazon spokesperson Heather Layman disagrees in no uncertain terms, saying that the allegations are “false on the facts and the law” and that “customers love Prime.”

What Did Amazon Allegedly Do?

According to the complaint, Amazon’s missteps include making it harder for a non-Prime customer to buy products.

When users were moving through the checkout process, Amazon presented a button to complete the transaction without clearly stating that clicking it would also sign them up for a recurring subscription.

According to one document, the number of Prime cancellations dropped 14% in 2017 in the wake of the new cancellation process being introduced.

This is the third case the FTC has brought against Amazon in the last month. Toward the end of May, Amazon settled for more than $30 million over allegations of data privacy failures in its Alexa and Ring products.

Google’s Filing an FTC Complaint Against Microsoft

Google has filed its own FTC complaint, alleging that Microsoft used the licensing terms of Office 365 to lock clients into Azure contracts. Google sees this as unfair, anticompetitive practice in the cloud storage and cloud computing industries.

Google’s comments are a response to a March 2023 call from the FTC for information about security issues and competition within the cloud market.

Earlier this month, the FTC leveled a $20 million fine against Microsoft for illegal data storage practices.


In other words, even overlooking its beef with Amazon, the FTC will have plenty to keep it busy in the coming months when it comes to big tech giants and their influence over the millions of customers who have little protection aside from regulatory agencies.


CDC: Research Says Long Covid “Contributing” to Labor Shortage

About 18% of those with Long COVID have not returned to work for over a year. The CDC says it's "hurting the U.S. economy."

A recent presentation from the CDC has clarified the governmental agency’s stance on the impact crater that Covid-19 complications have left on the US economy.

According to its analysis of third-party reports, the CDC says research is “suggesting Long Covid is contributing to a labor shortage and is hurting the U.S. economy.”

Among other research, the quote references a January 2023 report finding that around 18% of those who report having Long Covid (which is 5% of all workers who got Covid) couldn’t return to their workplace for over a year. It’s a fairly clear confirmation from the CDC: The acute stages of Covid infections may not be spiking, but the threat to the economy isn’t over.

5% of Those With Covid Can’t Work for More Than a Year

The new presentation comes from the Clinician Outreach and Communication Activity (COCA) team, which mentions labor shortages during a segment discussing the challenges of working while dealing with Long Covid.

However, the report the CDC highlighted in its new presentation actually came out in January, following an analysis of more than 3,000 Covid-19 workers’ compensation claims filed between January 2020 and March 2022. That report, from New York’s largest workers’ compensation carrier, the New York State Insurance Fund, had plenty of other insights worth keeping in mind.

From the report:

  • Almost one-third of all workers infected with Covid-19 suffered or are suffering from Long Covid, with the percentage peaking during the initial phase of the pandemic and falling over time.
  • The percentage of female workers with Long Covid (37 percent) was 11 points higher than that of male workers (26 percent).
  • Forty percent of workers with Long Covid returned to work within 60 days of infection while still receiving medical treatment.
  • Nearly all workers with comorbidities or those hospitalized for their initial infection experienced Long Covid.
  • The incidence of Long Covid in essential workers may be higher than the data suggests, creating a potential blind spot for policymakers.

Expect the Labor Shortage to Continue

Understandably, people have heard enough about the Covid pandemic and want it to be over. But assuming that 5% of those who get Covid in the future will stay out of the labor market for at least a year, the economy will face some significant headwinds for as long as Covid continues to circulate.

And in addition to questioning whether a return to work is possible at all, those with Long Covid face the challenge of a workplace that may be resistant to offering reasonable accommodations that could make the transition manageable.

What Does This Data Mean for Workers?

The main takeaway for workers is that wearing a mask and socially distancing still make a lot of sense. Pushing for Covid-safe ventilation systems in workplaces can also go a long way toward mitigation, and so can continuing to work from home when possible.

But when it comes to getting the job that’s right for you, there is a silver lining to this fairly dark cloud: A shortage of labor shifts the negotiating power towards workers. This might even help to explain the fervor with which businesses have turned towards labor-saving measures such as AI tools.


Whatever the case, more research is likely to emerge in the coming years to clarify the role that Long Covid plays in our economy, and we look forward to getting a clear picture of its impact.


How to Detect AI Plagiarism: ChatGPT Checkers to Try and Avoid

ChatGPT plagiarism has become a hot topic in educational institutions across the globe. Here's everything you need to know.

In the last two years, ChatGPT has turned the academic and business worlds upside down with its ability to generate coherent, well-written copy about pretty much any subject on earth in a matter of seconds.

The chatbot’s remarkable abilities have seen students of all educational levels turn to it – as well as to the best ChatGPT alternatives, namely Bard – to write complex essays that would otherwise take hours to finish.

Its release kickstarted an ongoing global conversation about a new phenomenon, often referred to as “ChatGPT plagiarism”. This guide covers the tools businesses and educational institutions are using to detect ChatGPT plagiarism, the dangers of cheating with ChatGPT – and whether using ChatGPT even counts as plagiarism at all.

How to Detect ChatGPT Plagiarism

To detect ChatGPT plagiarism, you need an AI content checker. AI content checkers scan bodies of text to determine whether they’ve been produced by a chatbot such as ChatGPT or Bard, or by a human. However, as we’ll cover later on, many of these tools are far from reliable.

It’s slightly harder to detect plagiarism when it comes to code, something ChatGPT can also generate capably. There isn’t quite the same ecosystem of AI detection tools for code as there is for prose, though the content-checker market has grown even vaster in 2024.

However, if you’re in a university environment, for example, and you’re submitting code well beyond your technical level, your professor or lecturer may have some very reasonable suspicions that you’ve asked ChatGPT to help you out.

The Most Popular AI and ChatGPT Plagiarism Checker Tools Reviewed

Since ChatGPT’s launch in November 2022, lots of companies and educational institutions have produced AI content checkers, which claim to be able to distinguish between artificially generated content and content created by humans. Now, a lot of companies are using Google’s chatbot Bard too, which uses a different language model.

However, the purported accuracy of even the most reputable AI content detection tools is fiercely disputed, and court cases between educational institutions and students falsely accused of submitting AI-generated work have already materialized.

The bottom line is this: No tool in this space is 100% accurate, but some are much better than others.

GPTZero

GPTZero is a popular, free AI content detection tool that claims that it’s “the most accurate AI detector across use-cases, verified by multiple independent sources”.

However, back in April, a history student at UC Davis proved that GPTZero – an AI content detection tool being used by his professor – was incorrect when it labeled his essay as AI-generated.

We tested GPTZero by asking ChatGPT to write a short story. GPTZero, unfortunately, was not able to tell that the content was written by an AI tool:

GPTZero plagiarism test

Duplichecker

Duplichecker is one of the first AI content detection services that will appear if you simply search for the term on Google. It claims to be 100% accurate at detecting AI content when presented with text, and is completely free to use.

However, as you can see from the result below, Duplichecker was not only unable to identify this text was written by ChatGPT, but it actually concluded that it was 100% human-generated – even though none of it was.

duplichecker AI tool

Writer

Writer is an AI content detection tool that, to be fair to it, doesn’t claim to be 100% accurate, and advises you to treat its judgments as an indication. It’s a good thing too, because the free version of Writer told us that the text below is 100% human-generated – but it’s actually just the first half of a story we asked ChatGPT to generate.

Writer ai content detector

Funnily enough, when we pasted in the introduction of a recently-written Tech.co article that had no AI-generated content included, it came back as only 69% human-generated.

Writer has paid plans, but judging by the performance of its free tool, we wouldn’t recommend them. The Team plan costs $18 per user, per month for up to five users. There’s also an enterprise plan with custom pricing options.

Originality.ai

Originality.ai is certainly one of the more accurate AI content detection tools currently available, according to our research and testing.

The company has conducted an extensive study into AI content detection tools, feeding 600 artificially generated and 600 human-generated blocks of text to its own content detection system, as well as other popular tools that claim to fulfill a similar purpose.
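The headline figures in a study like this come down to simple confusion-matrix arithmetic over the labeled samples. As a sketch of that calculation (the counts below are made up for illustration, not Originality.ai’s actual results):

```python
def detector_metrics(ai_flagged: int, ai_total: int,
                     human_flagged: int, human_total: int) -> tuple[float, float]:
    """Accuracy and false positive rate for an AI-content detector
    tested on labeled AI-written and human-written samples."""
    true_positives = ai_flagged                    # AI text correctly flagged
    true_negatives = human_total - human_flagged   # human text correctly passed
    accuracy = (true_positives + true_negatives) / (ai_total + human_total)
    false_positive_rate = human_flagged / human_total
    return accuracy, false_positive_rate

# Hypothetical counts on a 600 + 600 test set like the study's design:
# the detector flags 570 of the AI texts and wrongly flags 30 human ones.
accuracy, fpr = detector_metrics(ai_flagged=570, ai_total=600,
                                 human_flagged=30, human_total=600)
print(f"accuracy={accuracy:.1%}, false positive rate={fpr:.1%}")
# prints: accuracy=95.0%, false positive rate=5.0%
```

Note that a single “accuracy” number hides the false positive rate, which is the figure that matters most when real students are being accused.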

As you can see from the results below, Originality.ai outperformed all of the tools included in the test:

Originality.ai plagiarism test

The only downside to Originality.ai is that there isn’t a free plan, and you can’t even test it out for free as you can with the other apps included in this article. It costs $20 for 2,000 credits, which will let you check 200,000 words.

Copyleaks AI Content Detector

Copyleaks is a free-to-use AI content detector that claims to be able to distinguish between human-generated and AI-generated copy with 99.12% accuracy.

Copyleaks will also tell you if specific aspects of a document or passage are written by AI, even if other parts of it seem to be written by a human.

Copyleaks says it’s capable of detecting AI-generated content created by “ChatGPT, GPT-4, GPT-3, Jasper, and others”.

Copyleaks costs $8.33 per month for 1,200 credits (250 words of copy per credit). It’s used, the company says, by over 1,000 institutions and 300 enterprises across more than 100 countries.

In a test carried out by TechCrunch in February 2023, however, Copyleaks incorrectly classified various types of AI-generated copy – including a news article, an encyclopedia entry, and a cover letter – as human-generated. Furthermore, Originality.ai’s study referenced above found it to be accurate in only 14.50% of cases – a far cry from the 99.12% accuracy claim Copyleaks makes.

However, when we tested it, it did seem to be able to pick up that the text we entered was generated by ChatGPT. This happened in both our 2023 and 2024 tests:

copyleaks ai detector

During testing, Copyleaks was also able to correctly recognize human-generated text on several occasions. Despite the poor showings on other tests, it looks to be a better and more trusted option than some of the other tools featured in this article.

Turnitin Originality AI Detector

Turnitin is a US-based plagiarism detection company that is deployed by a variety of universities to scan their students’ work. Turnitin is designed to detect all kinds of plagiarism, but revealed in April that it’s been investing in an AI-focused team for some time now as it launched its AI content detection capabilities.

Turnitin says its tool can detect “97 percent of ChatGPT and GPT3 authored writing, with a very low less than 1/100 false positive rate”.

However, the company also says that if it flags a piece of content as AI-generated, this should be treated as an “indication, not an accusation”. It also provides an extensive explanation of how it deals with false positives, and advises treating AI outputs with a pinch of salt.

The true accuracy of Turnitin’s AI detector was disputed by the Washington Post last year, as well as other sources. You’ll have to contact the company directly if you want to purchase the software or need more information on how it works, the website says – but it’s only really suitable for academic purposes.

Does AI Content Detection Actually Work?

As Turnitin knows, no AI content detection tool is 100% reliable – our tests prove that pretty resoundingly. Duplichecker – a top result on Google that claims to be “100% accurate” on its landing page – fell at the first hurdle.

However, few of the other tools we’ve discussed today actually claim to be 100% accurate, and very few claim to be free of false positives. Others, like GPTZero, post disclaimers advising users not to take their results as gospel.

A number of university students accused of using artificial intelligence to produce essays have already been forced to prove that their work was original.

In Texas, in March of last year, a professor failed an entire class of students after wrongly accusing them of using ChatGPT to write their essays. There is also a growing collection of reports – and studies like the one conducted by Originality.ai – suggesting that even the most capable plagiarism checkers aren’t nearly as accurate as they claim.

Even Turnitin’s AI content detector isn’t foolproof. In the recent, relatively small test conducted by the Washington Post we discussed earlier, its accuracy fell far short of the detection rate the company claims to be able to produce.

Originality.ai, on the other hand, is certainly one of the more robust ones available – and even its detection technology isn’t right every single time.  However, having tested a variety of these tools, it seems to be the exception to quite a broad rule.

Besides, if false positives exist in any capacity, then there will always be room for students to claim their work is original and has simply been misidentified.
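There’s a base-rate effect lurking here, too: even with Turnitin’s stated 97% detection rate and sub-1/100 false positive rate, a meaningful share of flagged essays would be human-written whenever most submissions are honest. A quick Bayes’ theorem sketch, where the 10% AI-use rate is purely an assumption for illustration:

```python
def share_of_flags_that_are_human(p_ai: float, detection_rate: float,
                                  false_positive_rate: float) -> float:
    """Of all essays a detector flags as AI-written, what fraction were
    actually written by a human? A direct application of Bayes' theorem."""
    flagged_ai = p_ai * detection_rate                  # true positives
    flagged_human = (1 - p_ai) * false_positive_rate    # false positives
    return flagged_human / (flagged_ai + flagged_human)

# Assume 10% of essays are AI-written (illustrative), with a 97%
# detection rate and a 1% false positive rate.
print(f"{share_of_flags_that_are_human(0.10, 0.97, 0.01):.1%}")  # prints: 8.5%
```

Under those assumptions, roughly one in twelve flagged essays would be a false accusation – which is exactly the room for doubt described above.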

OpenAI’s AI Text Classifier: A Case Study

OpenAI, owners of ChatGPT, used to have its own plagiarism checker. We know this, because we used it ourselves when originally writing this article. However, back in July 2023, the company withdrew the tool, stating that it wasn’t accurate enough.

That aligns with our own experience when we tested it. When we showed it a short story, written by its own ChatGPT tool, the checker didn’t pick up on the fact that it was AI-generated.

As of July 20, 2023, the AI classifier is no longer available due to its low rate of accuracy. We are working to incorporate feedback and are currently researching more effective provenance techniques for text, and have made a commitment to develop and deploy mechanisms that enable users to understand if audio or visual content is AI-generated. – OpenAI blogpost

You can see our original example of the checker missing the fact that text was AI written, below:

OpenAI text classifier test

Is Using ChatGPT or Bard Plagiarism?

It’s debatable whether ChatGPT is in fact plagiarism at all. Oxford Languages defines plagiarism as “the practice of taking someone else’s work or ideas and passing them off as one’s own.”

ChatGPT is not a person, and it’s not simply reproducing the work and ideas of other people when it generates an answer. So, by the dictionary definition, it’s not outright plagiarism.

Even if it was doing that, if you were honest about where it came from (i.e. ChatGPT), arguably, that wouldn’t be plagiarism anyway.

However, some schools and universities have far-reaching plagiarism rules and consider using chatbots to write essays as such. One student at Furman University failed his philosophy degree in December 2022 after using ChatGPT to write his essay. In a 2023 case, a professor at Northern Michigan University reported catching two students using the chatbot to write essays for their class.

Using ChatGPT to generate essays and then passing this off as your own work is perhaps better described as “cheating” and is definitely “dishonest”.

The whole point of writing an essay is to show you’re capable of producing original thoughts, understanding relevant concepts, carefully considering conflicting arguments, presenting information clearly, and citing your sources.

There’s very little difference between using ChatGPT in this way and paying another student to write your essay for you – which is, of course, cheating.

With regard to Google’s Bard, the answer is a little more complicated. The same line of logic used above applies to Bard as it does to ChatGPT, but Bard has been marred by accusations of plagiarism and incorrectly citing things it pulls from the internet in a way ChatGPT hasn’t. So, using Bard might lead to you inadvertently plagiarizing other sources (more on this below).

The Dangers of Cheating With ChatGPT

Christopher Howell, an Adjunct Assistant Professor at Elon University, asked a group of students back in 2023 to use ChatGPT for a critical assignment and then grade the essays it produced for them.

He reported in a lengthy Twitter thread (the first part of which is pictured below) that all 63 students who participated found some form of “hallucination” – including fake quotes, and fake and misinterpreted sources – in their assignments.

Professor talking about chatgpt mistakes

Does ChatGPT Plagiarize in Its Responses?

No – ChatGPT isn’t pulling information from other sources and simply jamming it together, sentence by sentence. This is a misunderstanding of how Generative Pre-trained Transformers work.

ChatGPT – or more accurately the GPT language model – is trained on a huge dataset of documents, website material, and other text.

It uses algorithms to find linguistic sequences and patterns within its datasets. Paragraphs, sentences, and words can then be generated based on what the language model has learned about language from sequences in these datasets.

This is why if you ask ChatGPT the same question at the same time from two different devices, its answers are usually extremely similar – but there will still be variation, and sometimes, it offers up completely different answers.
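That variation is a product of sampling: at each step, the model draws the next token from a probability distribution rather than always picking the single most likely option. A toy sketch of the mechanism (the word list and probabilities are invented; real models sample from tens of thousands of learned token probabilities):

```python
import random

def sample_next_word(probs: dict[str, float], temperature: float = 0.8) -> str:
    """Pick the next word by sampling from a probability distribution,
    a toy stand-in for how GPT-style models choose tokens. Lower
    temperature sharpens the distribution (more deterministic output);
    higher temperature flattens it (more varied output)."""
    words = list(probs)
    weights = [p ** (1 / temperature) for p in probs.values()]
    return random.choices(words, weights=weights)[0]

# Hypothetical probabilities for the word after "The cat sat on the"
next_word = {"mat": 0.6, "sofa": 0.25, "roof": 0.1, "moon": 0.05}

# Two "devices" asking the same thing get similar but not identical picks
print([sample_next_word(next_word) for _ in range(5)])
print([sample_next_word(next_word) for _ in range(5)])
```

This is also why two users with the same prompt usually get answers that rhyme with each other without matching word for word.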

Does Bard Plagiarize in Its Responses?

ChatGPT’s biggest rival, Google’s Bard, has had significantly more issues with plagiarizing content since its launch than its more popular counterpart. Technology website Tom’s Hardware found that Bard had plagiarized one of its articles, and that the chatbot then apologized when one of the site’s staff called it out.

In May 2023, PlagiarismCheck told Yahoo News that it generated 35 pieces of text with Bard and found plagiarism above 5% in 25 of them, with the chatbot simply paraphrasing existing content already published on the internet.
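Paraphrase-level plagiarism scoring of this kind generally reduces to measuring n-gram overlap between a candidate text and source material. A toy version of the idea (real services such as PlagiarismCheck compare against a large index of published content, not a single document):

```python
def ngram_overlap(candidate: str, source: str, n: int = 3) -> float:
    """Fraction of the candidate's word n-grams that also appear in the
    source text - a crude signal of copied or lightly paraphrased prose."""
    def ngrams(text: str) -> set:
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}
    candidate_grams = ngrams(candidate)
    if not candidate_grams:
        return 0.0
    return len(candidate_grams & ngrams(source)) / len(candidate_grams)

source = "the quick brown fox jumps over the lazy dog near the river"
lifted = "the quick brown fox jumps over the fence"   # mostly copied
fresh = "a slow red hen walks under a tall tree"      # unrelated
print(ngram_overlap(lifted, source))   # high: most trigrams match
print(ngram_overlap(fresh, source))    # 0.0: no shared trigrams
```

Heavier rewording defeats simple n-gram checks, which is why paraphrased AI output is so much harder to catch than verbatim copying.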

One big difference between Bard and ChatGPT that can perhaps explain this is that Bard can search the internet for responses, which is why it tends to deal better with questions relating to events after 2021, which ChatGPT struggles with. However, this seems to also mean it pulls data from sources in a less original way and cites its sources more often.

These examples may have been blips, but it’s good to know the risks if you’re using Bard for important work.

Do Other AI Tools Plagiarize?

Unfortunately, yes – and some companies have already embarrassed themselves by using AI tools that have plagiarized content. For example, CNET – one of the world’s biggest technology sites – was found to be using an AI tool to generate articles, and wasn’t transparent about it at all.  Around half of the articles that CNET published using AI were found to have some incorrect information included.

To make matters worse, Futurism, which launched an investigation into CNET’s AI plagiarism, said that “The bot’s misbehavior ranges from verbatim copying to moderate edits to significant rephrasings, all without properly crediting the original”.

AI tools that don’t generate unique, original content – be it art or text – have the potential to plagiarize content that’s already been published on the internet. It’s important to understand exactly how the language model your AI tool is using works and also have tight oversight over the content it’s producing, or you could end up in the same position as CNET.

Should You Use ChatGPT for Essays or Work?

Using ChatGPT for Essays

The fact that ChatGPT doesn’t simply pull answers from other sources and mash sentences together means businesses have been able to use ChatGPT for a variety of different tasks without worrying about copyright issues.

But its internal mechanics also mean it often hallucinates and makes mistakes. It’s far, far from perfect – and although it’s tempting to get ChatGPT to write your essay for university or college, we’d advise against it.

Every educational institution’s specific submission guidelines will be slightly different, of course, but it’s highly likely that this is already considered “cheating” or “plagiarism” at your university or school. Plus, regardless of how accurate they currently are, educational institutions are using AI content detectors, and these will improve over time.

Using ChatGPT at Work

Of course, lots of people are using ChatGPT at work already – it’s proving useful in a wide range of industries, and helping workers in all sorts of roles save valuable time on day-to-day tasks.

However, if you are using ChatGPT at work, we’d advise being open with your manager or supervisor about it – especially if you’re using it for important activities like writing reports for external stakeholders. It’s one of the more immediate ethical considerations relating to AI that businesses need to answer.

We’d also strongly advise both heavily editing and closely reviewing all of the work you’re using ChatGPT, Bard, or any other AI tool to generate. It’s unwise to put sensitive personal or company information into any chatbot – we know ChatGPT saves and uses user data, but there isn’t much public information about where these chats are stored or OpenAI’s security infrastructure.

Using Other AI Tools for Essays or Work

Of course, Bard and ChatGPT aren’t the only AI chatbots out there – Claude, for example, is another option. However, we’d be hesitant to throw our support behind any smaller AI tools that aren’t backed by powerful language models. They won’t be as well-resourced, and you’re unlikely to find them as useful if you do experiment with using them for work.

The same rules still apply, however – be open with your manager and get sign-off on using them, don’t input any sensitive company data, and always review the answers you’re given.


Musk Sued Over Refusal to Pay Twitter Employee Bonuses

Several current and former employees are suing over Twitter’s refusal to pay "tens of millions of dollars" in bonuses.

A band of current and former Twitter employees has filed a lawsuit this week, claiming that they’ve not received their 2022 bonuses, even though they’d been promised 50% of their target figure.

Despite entering a “2.0 era” with new CEO Linda Yaccarino and pressing on with app developments, Twitter is still being plagued by the effects of Elon Musk’s casual and erratic leadership.

Led by the platform’s former Senior Director of Compensation Mark Shobinger, this latest class-action complaint notes that no change to the bonus scheme was ever communicated, and that no reason has been given for why the bonuses haven’t been paid.

What Exactly Is the Complaint?

If anyone knows the ins and outs of Twitter’s bonus schemes it’s Shobinger, as his former role meant he oversaw executive and incentive pay. In November 2022, his responsibilities widened to include global employee compensation.

Twitter’s cash bonus plan is paid out annually, and according to former Chief Financial Officer Ned Segal – and other executives – this was to remain the case even during the months leading up to Musk’s acquisition in October 2022. However, that bonus pay never materialized, and no explanation was offered.

“Both before and after Musk’s acquisition was completed in October 2022, Twitter’s management continuously promised the company’s employees, including plaintiff, that their annual bonus for 2022 would be paid under the Bonus Plan.” – current and former Twitter employees

Shobinger quit because of this U-turn on both the bonus pay and a variety of other broken employee promises. According to Shannon Liss-Riordan, one of the plaintiffs’ attorneys, the bonuses are in the neighborhood of “tens of millions of dollars,” which certainly warrants a lawsuit.

This Isn’t Twitter’s First Legal Rodeo

In what likely comes as a surprise to no one, this latest complaint is just one of many since Musk’s acquisition. Since October 2022, Twitter has been sued for allegedly failing to pay its rent and Google Cloud bills, and by former employees for severance and back pay.

While it’s tricky to ascertain the platform’s stance on this particular lawsuit – thanks to a disbanded PR and media relations department – it’s not hard to guess why things may have ended up this way.


Having lost more than half its advertising revenue, due to brands’ lack of trust in its content moderation, the platform is perhaps trying to recover some of its losses – even if that means breaching employee contracts.


Who Owns ChatGPT and OpenAI?

We dissect the ownership of the world's most popular AI chatbot, ChatGPT, and its creator OpenAI.

Key Takeaways

  • ChatGPT is owned by OpenAI, and the current CEO is Sam Altman
  • Microsoft owns a stake of 27% in OpenAI, and holds an investment in the company of around $135 billion
  • While it’s true Elon Musk co-founded OpenAI, he is no longer involved in the company
  • As well as ChatGPT, OpenAI also operates Dall-E and Sora
  • OpenAI is not a publicly listed company

Over the last few years, ChatGPT has turned the business world on its head. Thanks to the innovative technology behind it, generative AI has become a major focus for virtually every company in the world, with platforms, services, and apps adding AI functionality in any way they can.

But who exactly owns the world’s most popular AI chatbot? OpenAI is the company behind the platform, but given the business’s many partnerships and deals, it’s safe to wonder who is actually pulling the strings.

In this guide, we’ll explain who owns ChatGPT and OpenAI, as well as offer some insight into what the company is up to in 2025.

Who Owns ChatGPT?

ChatGPT is owned by OpenAI, an AI research laboratory that was founded in 2015 by Sam Altman, Elon Musk, and other prominent figures, including Peter Thiel, Ilya Sutskever, Jessica Livingston, Reid Hoffman, Greg Brockman, Wojciech Zaremba, and John Schulman.

Sam Altman is the current CEO of OpenAI and is widely regarded as the main driving force behind ChatGPT’s success.

 


What is OpenAI?

OpenAI is a Silicon Valley-based company that started out generating artificial intelligence for video games and other apps. The company has since switched its focus to general AI research and development, creating Chat Generative Pre-Trained Transformer (GPT) in 2018 — a unique language model that uses deep learning to produce human-like text — and the more advanced models GPT-2, GPT-3, GPT-4, and GPT-5 in the years since.

OpenAI is also responsible for creating DALL-E, an AI system that generates realistic images from a language prompt, and Whisper, a speech recognition model that’s able to transcribe, identify, and translate 99 languages.

Another OpenAI product is the incredibly exciting – and arguably, terrifying – Sora, an AI-powered video generation app that is now into its second edition.

Who is on the Board at OpenAI?

Currently, the board at OpenAI consists of the following members:

  • Bret Taylor (Chair)
  • Adam D’Angelo
  • Dr. Sue Desmond-Hellmann
  • Dr. Zico Kolter
  • Retired U.S. Army General Paul M. Nakasone
  • Adebayo Ogunlesi
  • Nicole Seligman
  • CEO Sam Altman

In November 2025, board member Larry Summers left OpenAI, citing a desire to step back from public life after being named in the Epstein files.

Does Elon Musk Own ChatGPT?

While Elon Musk co-founded OpenAI, ChatGPT’s parent company, and helped to financially back it in its early years, the ex-Twitter CEO doesn’t currently own a stake in the company.

Musk officially left OpenAI’s Board of Directors in 2018. In a Tweet the tech entrepreneur fired off in 2019, he cited a conflict of interest and prioritization of his other businesses, SpaceX and Tesla, as his primary reasons for distancing himself from the research company.

However, a report from Semafor has since revealed that there may be a little more to the story.

According to several unnamed sources close to the matter, Musk tried to take over OpenAI because it had “fallen fatally behind Google,” only to be flat-out rejected by co-founder Greg Brockman and CEO Sam Altman.

After his proposal was rejected, Musk walked away from the company and cut all financial ties, despite having promised to contribute $1 billion in funding over the course of several years. To make up for the lost funding, OpenAI launched a “capped” for-profit model that still runs alongside its non-profit entity to this day.

But after its explosive success, have Elon Musk’s opinions changed about ChatGPT? It doesn’t look like it. In March 2023, Musk and other leading tech figures like Apple Co-founder Steve Wozniak signed an open letter citing the dangers of rapidly developing AI like GPT-4, the language model powering ChatGPT.

However, despite his public concerns about AI, Musk has launched his own AI chatbot in the form of Grok, the AI model that is tied to the social media company he also owns, X (formerly Twitter).

Does Microsoft Own ChatGPT?

Microsoft does not own ChatGPT, nor the chatbot’s founding company, OpenAI. However, the two companies have been partnered commercially since 2016, and Microsoft continues to be the company’s largest investor.

Microsoft invested $1 billion into the research lab in 2019, after the company made its partnership exclusive. Then, after the resounding successes of ChatGPT and OpenAI’s other products, Microsoft invested a further $10 billion in the company in January 2023, as they entered the third phase of their partnership. Overall investment by the tech giant totals around $13 billion.

In October 2025, Microsoft stated that it holds an investment in OpenAI valued at around $135 billion. It also confirmed that it currently has a 27% stake in OpenAI.

OpenAI’s motivations are pretty obvious, but Microsoft also benefits heavily from this relationship. By investing in OpenAI, Microsoft is able to gain access to the lab’s cutting-edge technology, which it can then deploy across its broad range of products and services.

Frequently Asked Questions (FAQs)

Sam Altman is the CEO of OpenAI. Altman also co-founded the AI research lab in 2015, alongside prominent figures like Elon Musk, Greg Brockman, Wojciech Zaremba, Ilya Sutskever, and John Schulman.

OpenAI is currently not a listed company. This means that investors aren’t able to buy shares in the company or its product, ChatGPT. However, there are rumors that the company may be planning to go public sometime in 2027.

ChatGPT is owned by OpenAI, an artificial intelligence (AI) research lab and deployment company. OpenAI started off as a nonprofit company in 2015 but opened up a for-profit entity in 2019.

Dropbox Launches AI Initiatives And New AI-Powered Features

The file-hosting service has gone all out on its artificial intelligence offering, with funding and features abound.

The next tech giant to throw its hat into the generative AI ring is Dropbox, which today announced two new AI-focused offerings.

First up, we see the introduction of a venture fund dedicated to helping startups in the AI space.

Secondly, it announced two new features: Dropbox Dash and Dropbox AI — both drawing on generative AI technology to help make users’ lives easier.

Is Dropbox Now a Venture Capitalist?

Apparently, yes. Joining the likes of OpenAI and Salesforce, Dropbox will provide financial support and mentorship to those building AI-powered products that “shape the future of work,” according to the company.

This is the platform’s first venture arm, and the company is dedicating $50 million to the initiative.

As mentioned, they’re in good company, with corporations providing a major share of the $52 billion in funding received by AI startups last year. Salesforce’s VC division is planning to invest $500 million in AI startups, and OpenAI has a $175 million budget to do the same.

Introducing Dropbox Dash and Dropbox AI

As well as funding AI technologies, Dropbox has been busy building them too.

The first new feature is called Dropbox Dash and is a universal search bar that aims to help users find and organize their content. The more a customer uses it, the more Dash will “learn, evolve and improve,” according to Dropbox.


It works by pulling information from all content, apps, and tools across third-party platforms such as Microsoft Outlook and Google Workspace, so users don’t need to sift through various internal links, documents, and files. Instead, they simply need to ask Dash to find it.

The second offering is Dropbox AI, which summarizes and extracts information from files that are stored in a Dropbox account.

This means users can get a snapshot understanding of their content — like contracts or meeting notes — without having to sift through the entire document. Similarly, if there’s a specific question a user has about a file, they can ask Dropbox AI, which will then share a response.

Dropbox AI is capable of reviewing and generating summaries from documents and video previews, taking information from meeting recordings, contracts, and research papers. It currently works with file previews and will be expanding to folders and entire accounts soon.

Is Dropbox’s AI Technology Good Enough?

Dropbox moved to preempt any concerns that the technology may not be reliable or accurate – which, in all fairness, isn’t a concern solely reserved for its particular innovations.

“In this next era of AI, it’s more important than ever that we protect our customers’ privacy, act transparently, and limit bias in our AI technologies.” – Dropbox AI press release

As with all AI products, the more they’re used and the wider the customer base, the more likely it is that any tangles in the technology will get ironed out into something more sophisticated.

Dropbox Dash is currently available in English to select customers in beta. Dropbox AI for file previews is in alpha, and available in the US to all Dropbox Pro customers.


Study: Small Companies Are More Flexible with Hybrid Work

According to a new report, 75% of tech companies are fully flexible when it comes to office working.

When remote work became a mainstay of 2020, a new era of employment was ushered in. Instead of commuting, employees could spend their time between home and work life more efficiently, and companies realized they didn’t need the large overheads of office and equipment.

All in all, it seemed that both parties celebrated this new approach to the traditional work-life model. Well, most did.

What we’re seeing now, three years down the line, is a clear divide in how companies approach flexible working. Many tech giants are scrapping their WFH policies, which is at odds with how the new Gen Z workforce prefer to do their jobs.

It’s a mammoth and complex topic, but a new report has shed a little light on the trends of flexible working. Let’s dig in.

Small Companies Are More Accommodating With Flexible Work

According to the latest Flex Index by Scoop — a robust source of company in-office requirements — there’s a clear divide happening between big and small tech.

The report, which draws on data from 4,500 companies employing over 100 million people, found that 65% of big tech companies (categorized as having 25,000+ employees) now enforce a structured hybrid model, meaning they’ve set expectations for when employees work from the office. This could be a number of days, specific days, or a minimum percentage of time.

Giants such as Apple, Amazon, Google, and Meta have either implemented or announced their plans for structured hybrid working, which is the fastest rising model.

In contrast, 88% of small tech companies (those with 500 employees or fewer) are fully flexible, with no rules or parameters on whether their employees come into an office or not. Similarly, 67% of companies with under 100 employees are fully remote.

A smaller workforce likely means lighter HR procedures and policies: the need for by-the-book onboarding and mentorship decreases when it’s easier for everyone to keep up with one another.

What Else Did The Report Find?

Those working within a structured hybrid company are usually required to be in the office for 2-3 days a week. The most common days are Tuesday, Wednesday and Thursday, with only 15% of companies requiring staff to show up in person on a Monday.

For what it’s worth, very few tech companies of any size require their staff to be in the office full time. In fact, only 8% insisted on full-time attendance, the lowest percentage across all industries.

What Does This Mean For The Tech Industry?

It’s in the industry’s nature to lead in innovation, so it comes as no surprise that companies’ workplace practices are ahead of the game. Tech was found to be the most flexible industry, with media and entertainment, then professional services, following closely behind.

However, this innovation isn’t limited to big companies. In fact, it’s the smaller startups born out of the pandemic that have often pushed technical boundaries and capabilities.

All of this then poses a couple of interesting questions:

  • Can big tech companies retain top talent with more rigid working models, or are they likely to lose them to smaller companies with more relaxed policies?
  • What happens when these small companies grow into larger entities — will the policies remain the same or is a tighter rein required over a larger workforce?
  • When a small and scrappy startup matures into a slicker operation, with software out in the wild and shareholders to answer to, does the need to ship a working, usable product outweigh employees’ desire to work fully flexibly?
  • Are in-person, hands-on collaborations more valuable and effective than those done over video calls and Slack?

Whatever the future workplace trends, right now for those who wish to remain flexible and working wholly on their terms, smaller companies appear to be the way to go.

Written by:
James Laird is a technology journalist with 10+ years experience working on some of the world's biggest websites. These include TechRadar, Trusted Reviews, Lifehacker, Gizmodo and The Sun, as well as industry-specific titles such as ITProPortal. His particular areas of interest and expertise are cyber security, VPNs and general hardware.

Vimeo Embraces AI With Its Latest Editing Features

The video platform hopes to cut down production and editing time with a suite of three new AI-powered tools.

Vimeo has this week announced its foray into AI functionality with the introduction of three new tools: a script generator, a teleprompter, and a text-based video editor.

Much like Adobe with its recent rollout of the AI image generator Firefly, making content creation accessible is fast becoming a priority for major creative platforms.

Vimeo’s new features have been designed for more entry-level users of the platform. Think time-poor CEOs or schedule-juggling social media managers, who need to create good content consistently, but don’t have the resources, skills or time to do it.

Let’s dig into what these tools look like and how they could help.

What Do The New Vimeo AI Tools Look Like?

The features have been designed and developed with all stages of the production process accounted for. 

For pre-production and planning, we have the AI-powered script generator. In what looks set to rival ChatGPT and TikTok’s script generator, Vimeo’s version will create a video script in seconds to the tone of voice and duration of your choosing. All that’s needed is a brief description of what the video is about.

During recording, those who have trouble staying on script and maintaining eye contact with the webcam will appreciate the new teleprompter. This fully integrated tool allows font size and pace personalization for text prompts while recording, much like in a real TV studio. By facilitating slicker presenting, this will undoubtedly cut down on filming time.

When in editing, all those awkward ‘umms’ and ‘ahhhs’ look set to be a thing of the past. The text-based video editor will automatically detect and delete filler words, long pauses and any other unwanted noises in a single click. The need to manually edit every little detail is now gone, speeding up the entire post-production process.

AI tools like these from Vimeo look set to revolutionize the workplace. See how companies are using ChatGPT right now.

Saving Time Is A Key Objective

Vimeo’s internal research recently found that, when recording, 50% of its customers shoot multiple takes, and of those who reshoot, 25% do more than five takes. All in all, this adds up to a lot of creation time that may not even be worth it, given the objectives of the video.

So, while these tools won’t necessarily inspire a burst of creativity or revolutionize the way long-form video content is created, they can certainly speed up the way TikToks, Reels, company meeting videos, and short-form marketing clips are made. At least that’s the hope of Ashraf Alkarmi, chief product officer at Vimeo.

“AI in video opens up a new frontier of accessibility. Any individual or business now has the ability to produce engaging, professional content with no prior production experience, and within mere minutes” – Ashraf Alkarmi, chief product officer at Vimeo

How AI-powered Video Is Becoming The New Normal

Don’t think that this is Vimeo’s first and last foray into AI, either: the company has made it explicitly clear that AI-powered tools are to become an integral part of its platform, even for full-scale video production.

Alkarmi went on to say: “We’re clearly only scratching the surface of what AI can accomplish for organizations and the people within them. We envision a future in which AI knowledge is a prerequisite, not a luxury, to video production. Leaders need to adapt now and empower their teams to create more efficiently, or risk getting left behind.”

Although Alkarmi was talking about leaders and teams using Vimeo, that sentiment rings true for the platform itself, as a number of AI-powered video editing startups are seemingly hot on its heels.

Platforms like Capsule, Descript and Dumme all provide rapid and sophisticated video editing functionality that users are quickly becoming accustomed to. This functionality isn’t just a ‘nice to have’, it’s fast becoming a necessity if you want to retain and grow your user base.

There’s not a long wait for these tools, as they’ll be available to all Vimeo users in July, although anyone with a Standard or Pro subscription plan can start trialing them now.

OpenAI Could Launch a Marketplace for AI Developers

The marketplace would allow developers to sell AI models based on ChatGPT to businesses around the world.

Implementing AI technology into your business could get a lot easier soon, as OpenAI — the company behind ChatGPT — is reportedly working on a marketplace that could open up a lot of possibilities in the future.

Since its launch in November last year, businesses have flocked to ChatGPT like a modern-day gold rush. The technology is capable of a lot, from emails and coding to customer service requests and sales assistance.

Still, ChatGPT has only scratched the surface of what generative AI can do for business, and this marketplace could be the catalyst to a wide range of new functionalities.

AI Marketplace Coming to OpenAI

According to a report from The Information, OpenAI is considering the development of a marketplace that would allow users to sell models built using the company’s ChatGPT generative AI service to other businesses.

The report also notes that this idea is already gaining some traction with ChatGPT users. In fact, Aquant — a popular service platform — and Khan Academy — an education app maker — were two companies that specified that they would be interested in selling their ChatGPT-powered AI models on the new marketplace.

While the news is not entirely public, the report cites people familiar with the topic, stating that OpenAI CEO Sam Altman talked about these potential plans in a developer meeting in London last month.

What Does This Mean for ChatGPT?

ChatGPT, by itself, can’t power a business. However, in most cases, businesses are using ChatGPT as a foundation, creating AI models based on the popular service and tailoring its functionality to their specific needs.

The marketplace that OpenAI is considering building could make that functionality available to everyone, building out an entire economy around AI-powered platforms that can help businesses grow.

In essence, this new AI marketplace could further establish ChatGPT as the go-to generative AI solution for businesses around the world. Yes, Google Bard has made some headway, and it likely has an easier road to catching up than other competitors due to the sheer volume of information available to Google.

Still, being first to launch is always valuable. If, on top of that, OpenAI can use this marketplace to roll out countless iterations, doing everything from answering emails and writing code to responding to customer service requests and qualifying sales leads, ChatGPT could become the primary tool across the board, particularly when paired with Microsoft’s suite of business offerings.

Study: 4-Day Work Week Is the Most Important Perk for Employees

According to a recent study, 44% of employees would prefer a 4-day work week over any other perk.

Want to attract top talent? Well, office socials aren’t going to cut it anymore, as a new study found that the 4-day work week is the most popular perk for employees in 2023.

The 4-day work week has become quite a popular concept in the last few months, driven mostly by the success of a large-scale study that showed it increases productivity, promotes work-life balance, and generally improves employees’ mental health.

Now, employees are champing at the bit to get access to this top-tier perk at their job, and they’re willing to forgo other perks to get it.

Employees Want the 4-Day Work Week More Than Any Other Perk

According to a recent study from recruiting firm Robert Walters, 44% of employees would prefer a 4-day work week over any other perk. This was the highest percentage in the study, beating out the ability to work from anywhere, which was chosen by just 38% of respondents.

Even wilder, only 16% noted that they would take a pay increase instead of the 4-day work week, showing how valuable time is when it comes to attracting top talent.

On top of all that, 89% of employees stated that they would be interested in implementing the 4-day work week at their business, and 46% stated that they would give up work socials and relationships in favor of the 4-day work week.

Is the 4-Day Work Week Good for Business?

All these employees may want to work only four days a week, but why would businesses get on board with this transition? Well, although experiments around the new working model are still in their infancy, results have been promising when it comes to the impact on productivity.

“It’s too early to say the long-term impact of these measures – but with 46% of professionals willing to forego socials and business relationships, companies should be mindful that poor company culture comes at a price.”

Beyond the benefits to employees, the largest 4-day work week study to date found that overall revenue rose 8.14% (weighted by company size) in the six-month period. Additionally, 63% of businesses found it easier to attract and retain talent, a key factor when it comes to the long-term success of any business.

Convinced of its benefits? Here’s how business owners can implement a 4-day workweek themselves.

Companies Offering the 4-Day Work Week

You might be thinking that only small businesses and startups are partaking in the 4-day work week, but you’d be wrong. In fact, some of the largest, most prominent companies in the world are testing out the 32-hour work week (with no reduced pay) in service of a healthier, more loyal work force.

Some of the companies offering a 4-day work week to their employees include Microsoft, Panasonic, Samsung, Shopify, Kickstarter, Buffer, and many others.

If you want to learn more about the 4-day work week and how it can benefit you, your business, and your employees, make sure to check out the in-depth study and follow other updates on Tech.co about this revolutionary new way to work.

Twitter TV: Will Musk’s Plans for Video App End Twitter’s Ad Woes?

Elon Musk has hinted that a Twitter video app is coming. But will an initiative like this convince advertisers to return?

Elon Musk has hinted that he plans to launch a Twitter video app for Smart TVs, a move that aligns with new CEO Linda Yaccarino’s reported plans to focus more closely on video content and commercial partnerships.

The social media platform has struggled greatly to maintain advertising relationships since Musk’s takeover, with a collection of companies cutting ties with the website during the first few months of the billionaire’s stint at the helm.

However, a move toward a video app is unlikely to assuage many of the fears that have driven advertisers away from Twitter since Musk’s arrival, which are heavily linked to a widely-reported increase in problematic content on the platform.

Musk Hints That Twitter Video App Is “Coming”

Over the weekend, Twitter user S-M Robinson stated that the social media platform “really need[s] a video app” as they were “not watching an hour-long video on Twitter”, to which Elon Musk simply responded: “It’s coming”.

The comment – left on a 1 hour, 36-minute episode of the Zuby podcast that features Musk as an in-person guest – was tweeted out just days after an investor presentation in which Yaccarino and Musk signaled a desire for a greater focus on “video, creator, and commerce partnerships”, Reuters reports.

These aren’t the first signs that Twitter’s top brass want to make the platform a viable home for the sort of longer-form video content that Twitter isn’t typically known for.

Earlier this month, controversial former Fox News host Tucker Carlson launched the first episode of his new, low-budget show on the platform, which Elon Musk described as “bold” and which has already gained high viewing figures.

Twitter’s Advertising Turmoil

Twitter is scrambling to diversify its revenue streams beyond the forms of digital advertising that have been commonplace on the platform for years. During Musk’s reign, ad spending has plummeted.

Recently, it was revealed that advertising revenue was down as much as 59% year on year.

Many advertisers are thought to be put off by Musk’s attitudes toward content moderation and his penchant for abrupt, drastic changes to the social media platform’s features and policies.

New CEO Linda Yaccarino’s recent appointment was thought to be heavily influenced by her experience and reputation in the world of advertising, but Musk still appears deeply involved with day-to-day decision-making.

Will a Video App Really Bring Back Advertisers?

Although providing a more habitable space for new forms of media presents new opportunities for advertisers to spend money on Twitter, it does nothing to address the generally accepted reason why so many companies jumped ship.

Since Musk’s takeover, brand safety and content moderation chiefs have been dismissed, and concerns surrounding the level of hate speech present on the platform have been consistently raised.

On top of this, initiatives such as Twitter Blue have garnered widespread criticism, while serious security issues have drawn condemnation from senior US senators.

A new video app will do little to change current opinions on the direction Twitter is going in. Unless Musk and Yaccarino make more of a concerted effort to ensure their social media platform is a place prospective commercial partners feel comfortable placing their content, it’s hard to see the tide turning any time soon.

Is This Google Privacy Settlement Claim Website a Scam?

If you think you're entitled to compensation from this lawsuit, make sure you're using the right site to make your claim.

Google has agreed to pay $23 million to resolve a class-action lawsuit filed over a decade ago, which alleged that the company violated the privacy of searchers by sharing their personally identifiable information with third parties.

If you clicked on a Google Search result between 2006 and 2013, you may be entitled to claim compensation, just as many Zoom and Facebook users were recently after their respective class action lawsuits came to a close.

However, the webpage you’ll need to input your details into in order to make a claim isn’t owned or managed by Google – rather, it’s been set up by settlement administration specialists Kroll.

When there’s a lot of buzz around a topic like this one, scammers tend to see it as an opportunity to make a quick buck with a shady email or fake website. So, making sure you’re in the right place before you enter any personal information is always advised.

What the Legitimate Google Privacy Settlement Website Looks Like

The company processing claims for this Google privacy settlement is called Kroll Settlement Administration LLC — and you can make your claim on refererheadersettlement.com.

Other websites claiming to be able to process your claim for this settlement should not be trusted. Here’s an image of what the legitimate website looks like:

Kroll settlement claim webpage

When Google settled a separate privacy lawsuit back in 2020, some skeptical users questioned whether it was a scam.

They took to Reddit, Twitter, and the Gmail community forum to find out whether correspondence they’d received relating to the case was in fact legitimate, and whether the settlement website where they were told they could make a claim was genuine.

Along with unexpected emails, a temporary site outage added further fuel to the idea that it wasn’t genuine.

This time around, we’ve not found any reports of Google sending out emails — or other kinds of messages — to individuals who may have been impacted by the class-action lawsuit. In fact, Google has admitted no wrongdoing, despite agreeing to the settlement.

We’d strongly advise ignoring any emails referencing this case, and instead going directly to the webpage we’ve linked above.

What Information Do I Need to Enter to Make a Claim?

To make a claim, you’ll need a Class Member ID, and you can apply for one through Kroll’s website. You’ll need to enter your full name, phone number, home address, ZIP code, and email address.

The deadline to claim is July 31, 2023, so if you think you’re eligible for a portion of the money, ensure you secure your Class Member ID before then.

The settlement is scheduled to receive final approval on October 12, 2023, though the settlement notice itself suggests the ruling might be held up by appeals.

What Does the Lawsuit Allege, and How Much Can I Claim Back?

Google’s parent company Alphabet Inc. was accused back in 2013 of “storing and intentionally, systematically and repeatedly divulging its users’ search queries and histories to third parties via ‘Referrer Headers'”, which impacts “billions” of consumers.

The lawsuit also alleged that “Google has consistently and intentionally designed its services to ensure that user search queries, which often contain highly-sensitive and personally identifiable information (“PII”), are routinely transferred to marketers, data brokers, and sold and resold to countless other third parties.”

Since Google was launched, search terms have been included in the URL of the search results page. “Because the search terms are included in the search results URL”, the lawsuit explains, “when a Google user clicks on a link from Google’s search results page, the owner of the website that the user clicks on will receive from Google the user’s search terms in the Referrer Header.”

Web analytics services, it adds, “include and use functionality to automatically parse the search query information from web server logs, or to otherwise collect the search query from the referrer header transmitted by each visitor’s web browser.”
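
To make that mechanism concrete, here is a minimal Python sketch of how a destination site (or an analytics script running on it) could recover a visitor's search terms from a referrer header of that era. The URL and query are illustrative examples, not data from the case:

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical search-results URL: the user's query travels in the "q"
# parameter, and this full URL is what a browser would send along as the
# HTTP Referer header when the user clicks through to a result.
referer = "https://www.google.com/search?q=rare+medical+condition+treatment"

# The receiving site can parse the search terms straight back out of it.
# parse_qs decodes "+" signs into spaces automatically.
params = parse_qs(urlparse(referer).query)
search_terms = params.get("q", [""])[0]
print(search_terms)  # prints: rare medical condition treatment
```

This is exactly why search queries containing sensitive terms could leak to any site a user clicked on, without the site needing any special access from Google.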

According to Kroll, the total amount each individual is expected to receive is approximately $7.70 — although this could change depending on the number of people that actually end up making a claim. As mentioned previously, Google has admitted to no wrongdoing throughout the settlement process.

The settlement administrators also say that Google will “revise its ‘FAQs’ and ‘Key Terms’ webpages regarding how and when search queries may be disclosed to third parties via referrer headers.”

AI’s Economic Potential Could Reach $25.6 Trillion, McKinsey Predicts

Generative AI has the potential to rake in $4.4 trillion annually — but what impact will this have on jobs?

Due to breakthrough developments in generative artificial intelligence (AI), the global AI market could soon rake in between $17.1 and $25.6 trillion annually, according to a new report by consulting company McKinsey.

As apps like ChatGPT and Copilot continue to transform the way business is conducted, generative AI could contribute up to $4.4 trillion to this total, with estimates doubling when you account for AI-assisted workplace tools like Dynamics 365 AI.

The sales and marketing industries look set to benefit the most, thanks to the tech’s ability to streamline customer operations, while the manufacturing sector will cash in less from the AI gold rush.

McKinsey: Generative AI Will Add Trillions to Global Economy

While the use of AI has been simmering under the surface for decades, recent developments in generative AI have propelled the industry forward — opening up lucrative opportunities to countless businesses in its wake.

A new report from McKinsey has put an estimate on these gains, predicting that generative technologies like ChatGPT, DALL-E, Google Bard, and DeepMind could add anywhere between $2.6 trillion to $4.4 trillion to the industry annually.

“As generative AI continues to develop and mature, it has the potential to open wholly new frontiers in creativity and innovation.” – McKinsey report on generative AI

AI’s impact on the global economy. Source: mckinsey.com

McKinsey’s study, which analyzed 63 use cases across 16 business functions, revealed that this could bring AI’s total economic potential to between $17.1 and $25.6 trillion after accounting for its impact on worker productivity, and the value of traditional analytics and machine learning algorithms that continue to be deployed across sectors.

Which Industries Will Be the Biggest Winners?

Due to the varied and sprawling applications of generative AI, no industry is expected to be exempt from its impact. However, McKinsey’s findings predict that marketing and sales, software engineering, and research and development (R&D) could account for a staggering 75% of its total profits.

Sales and marketing processes are likely to see the biggest productivity uptick, largely because of generative AI’s potential in transforming the customer experience.

Due to the potential the technology has in facilitating customer self-service, resolving issues during initial contact, and reducing response times, McKinsey predicts that the productivity of customer care functions could increase by 30-45% in the coming years. And these findings aren’t isolated.

McKinsey’s projections chime with a recent study into the impact of AI-based conversational assistants. That research found that after deploying AI customer service agents, 14% more customer queries were resolved, and time spent by employees fell by 9%. So, if those working in sales and marketing continue to pivot toward generative AI, the savings they make could be major.

Unfortunately, despite the manufacturing sector having benefited massively from earlier AI-aided automation, this new wave of advancement is expected to contribute only around $100 billion annually to the sector. This is significantly short of the projected gains for sales and marketing, each of which exceeds $450 billion.

Knowledge Workers Are Likely to Bear the Brunt

As AI continues to disrupt every facet of our personal and working lives, there will ultimately be losers too, as the report points out.

As generative AI affords humans a new “superpower,” McKinsey expects that the developments will also create obstacles for higher-wage workers whose roles were “previously considered to be relatively immune from automation.”

For example, knowledge workers with a Bachelor’s degree or higher will likely face similar levels of job insecurity to those without college degrees or high school diplomas. This stands in contrast to previous technological shifts, which placed lower-skilled workers at greater risk of losing out.

These findings are a reminder that while generative AI will bring great opportunities to some, it also brings real-world threats to many others. If you’re interested in finding out how AI-proof your job is, we spoke to experts and compiled a list of the roles most likely to be replaced by artificial intelligence.

Intuit Mailchimp Announces New Time-Saving AI Features

The new features include richer reporting and analytics, advanced segmentation, and more ecommerce automations.

Mailchimp users will benefit from over 150 new and updated features in the company’s bid to respond to customer expectations.

Intuit Mailchimp, one of the leading marketing automation companies, announced the rollout of new features today at its London conference.

The new features include richer reporting and analytics, advanced segmentation, and more ecommerce automations. Mailchimp also announced an improved Standard plan aimed at helping marketers reduce the overall time spent doing mundane tasks such as creating emails and reports.

Mailchimp Unveils Smart New Features

Mailchimp’s marketing conference, From:Here To:There, featured a stellar lineup of marketing professionals, sports personalities, and celebrities, kicking off with a preview of Mailchimp’s latest innovations.

“We’ve been innovating for our customers at an incredible pace to help them grow their sales by creating personalized multi-channel marketing campaigns at scale. With this latest launch, we’re introducing new AI and automation features plus new services with a human touch designed to help our customers save time and accelerate growth.” – Rania Succar, CEO of Mailchimp

Keeping Up with Demand for AI & Personalization

With AI infiltrating almost every marketing tool worth owning nowadays, Mailchimp decided it wanted a slice of the pie. 

“We recently surveyed a group of marketers in conjunction with Forrester and found that 88% of them believe their marketing organization must increase its use of automation and AI to win. Our latest updates and refreshed Mailchimp Standard plan are tailored to give marketers the enhanced automation capabilities they need and are backed by a wealth of data to help them gain more customers and increase revenue.” – Jon Fasoli, Chief Design and Product Officer at Intuit Mailchimp

Among many other uses of AI, marketers can leverage generative AI-powered tools that help create engaging content quickly and easily with Email Content Generator. 

Accelerate Ecommerce Growth

According to a Business Wire report, Intuit Mailchimp has seen impressive growth in the last year, with new customers and revenue both on the rise. This shows that, despite economic instability, businesses are looking to marketing tools like Mailchimp to help with the heavy lifting and provide their customers with personalized customer experiences. 

“SMBs power a significant percentage of the economy here, and right now they need all the support they can get.” – Jim Rudall, Head of EMEA at Intuit Mailchimp

To meet this need, Mailchimp developed 24 new starting points that will help users drive more ecommerce and multichannel growth with Customer Journey Builder and Pre-built Journeys.

Out of the 24 starting points, there are 12 new Shopify starting points that leverage real-time shopping activity to build customer journeys that are more likely to convert.

Amazon Scams To Look Out for Right Now

Amazon scams are worryingly common. We identify the tactics scammers are using, and tell you how to avoid them.

With a $1.3 trillion market cap, Amazon is the biggest ecommerce company in the world. It’s the first place that millions of customers shop for tech, clothes, and even groceries.

Thanks to its popularity, it’s also a haven for scammers: Apparel, home goods, and cosmetics are the top categories that suffer from counterfeit goods. And since the website has such an outsized presence, text scams, email phishing attempts, and gift card fakes often pick Amazon as their choice for impersonation.

As a result, Amazon scams are common. Here’s what to know about the ones most likely to trip you up, and how you can dodge them.

In this guide:

Amazon Counterfeits

Much like similarly sprawling enterprises such as Facebook and YouTube, Amazon’s sheer scale makes it tough to keep up with curation tasks that would be considered basic necessities at a smaller ecommerce operation.

Amazon is unwilling or unable to vet third-party sellers on its website well enough to prevent counterfeits from cropping up, according to a Wall Street Journal investigation in 2019. Amazon says it has cracked down on the practice since, but fakes are still tricking customers today.

Fake and slipshod products can pose a serious threat to your health or that of your loved ones when it comes to essential items like baby food or air filters. One Reddit user recently commented that they’d even bought a “niche pregnancy nutrition book” on Amazon that turned out to be counterfeit.

To avoid a fake product scam on Amazon, try picking out which brand name you trust before getting on Amazon: That way, you won’t be served the products that Amazon’s untrustworthy algorithm thinks are safe. Recommendation sites like Wirecutter can help, as can adding the word “Reddit” to a Google search for a type of product.

Amazon Brushing Scams

A brushing scam is a way for a third-party seller to fake a “verified purchase” positive review, which bumps the product up in Amazon’s rankings. The seller simply ships their product to a real address, unsolicited, then uses that delivery record to write up a positive review, boosting their sales.

If you get a free package from Amazon that you didn’t ask for, you can take these steps:

  • Check if it was a gift from friends or family
  • Record the tracking ID
  • Fill out this online form from Amazon

One upside to this scam: You’ll get a free item, as there’s no obligation to return it.


Amazon Text Scams

Millions of people order Amazon packages so often that they may not even remember if they had a package in transit at any given point in time. That’s what a lot of text spammers are counting on when they pick Amazon to impersonate.

A fake Amazon text can be hard to tell from a real one, since they’ll both claim to be from Amazon, feature a short message, and will include a link.

Here are a few tipoffs that indicate a fake Amazon text:

  • Grammar or spelling mistakes
  • A regular ten-digit phone number (real Amazon texts will likely come from a four-to-seven digit short code)
  • A link without an “Amazon.com” domain

One foolproof method for avoiding an Amazon text fake? Simply never click a link from a text. If it really is Amazon, you’ll be able to log into your account on a desktop computer and address the issue just as easily there.

Online marketplaces are hot targets for cybercriminals. Learn how to spot and avoid Facebook Marketplace scams here.

Amazon Email Phishing Scams

Fake emails are similar to fake texts: They’ll claim to be from Amazon, but might have spelling and grammar errors. The email address that sent the email is also a giveaway, since it won’t be short, simple, and clearly Amazon-related.

Again, the best response is to avoid clicking any links, even if the email looks legitimate. Instead, you can log into your Amazon account and look under the “Your Orders” section to see if you can spot the same issue that the email claims exists.

If you already clicked on a suspicious link, you should immediately change your Amazon password, and make sure that two-factor authentication is turned on. You can also read our full guide to the best password safety tips.

It’s also worth investing in some good antivirus software, which will identify and isolate phishing emails for you.

Legitimate Amazon websites have a dot before ‘amazon.com’ such as http://’something’.amazon.com. For example, Amazon Pay website is https://pay.amazon.com/. We’ll never send emails with links to an IP address (string of numbers), such as http://123.456.789.123/amazon.com/. If the link takes you to a site that is not a legitimate Amazon domain, then it is likely phishing.

~Amazon Customer Service
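The domain rule Amazon describes above can be checked programmatically. As an illustrative sketch only (not an official Amazon tool, and no substitute for simply not clicking suspicious links), this Python snippet accepts a link only if its hostname is amazon.com or a subdomain of it, and rejects raw IP addresses:

```python
from urllib.parse import urlparse

def looks_like_amazon_link(url: str) -> bool:
    """Return True only if the URL's hostname is amazon.com or a subdomain of it."""
    host = urlparse(url).hostname or ""
    # Reject raw IP addresses, e.g. http://123.456.789.123/amazon.com/
    if host.replace(".", "").isdigit():
        return False
    # Accept amazon.com itself or a true subdomain such as pay.amazon.com;
    # the leading dot check defeats lookalikes like amazon.com.evil-site.io
    return host == "amazon.com" or host.endswith(".amazon.com")

print(looks_like_amazon_link("https://pay.amazon.com/"))             # True
print(looks_like_amazon_link("http://123.456.789.123/amazon.com/"))  # False
print(looks_like_amazon_link("http://amazon.com.evil-site.io/"))     # False
```

Note that the check is on the hostname, not anywhere in the URL: a phishing link can freely include “amazon.com” in its path or as a prefix of a different domain, which is exactly the trick the lookalike example exploits.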

Amazon Gift Card Scams

Gift cards are a fast way for a scammer to get real monetary value without exposing their own bank or credit card details. As a result, one giveaway that a phone call or email is a scam is if the person is asking to be paid in an Amazon gift card. No actual Amazon employee will ask you to buy them a gift card.

Amazon will also never email you out of the blue to offer you a free gift card. That said, there are a few real, non-scammy cases in which Amazon might actually give you a free virtual gift card.

  • Amazon offers store credit for product returns
  • Amazon gives users store credit for trading in old electronics, from cell phones to Kindles
  • Signing up for the Amazon Prime Visa credit card will earn users $100 in store credit

For all these instances, however, you’ll want to triple-check that you are on the actual Amazon.com website before entering any personal information.

Amazon Tech Support Scams

Pretending to be a helpful tech support team is another way that scammers can trick customers into giving them personal data. In fact, in 2021, the FTC found that around one in every three people who reported a business impersonator scam said their attacker was pretending to be Amazon.

These bad actors may get in touch through email, a phone call, or a text message, but the hustle is always the same. The scammers will reach out unsolicited, say they’re from Amazon, and offer to address a fictional problem with your account.

The exact story can vary. Examples might include:

  • Amazon wants to refund you for an unauthorized purchase… but “accidentally” transfers too much and needs it back.
  • Cyber criminals have accessed your Amazon account, and the only form of protection is to buy an Amazon gift card and send the “tech support” team the code needed to use it.
  • Suspicious activity on your account needs to be checked out by Amazon’s fraud team to ensure that you aren’t at fault, and they need your account login in order to verify everything.

As always, the best response is no response. Don’t call or text back. Instead, check it out from your own Amazon account or your own bank account. Never pay for anything with a gift card, and never give anyone remote access if they have reached out unsolicited.

If you think you were a target, you can report it to the FTC’s official fraud-reporting website.

Will Amazon Ever Contact You by Phone?

Amazon does sometimes make outbound calls to customers, but this is rare. If Amazon calls, it will never ask you for personal information. Scammers will.

One common Amazon phone call scam starts by claiming that your account has fraudulently purchased a big-ticket item, such as a $500 Xbox or a $1,000 iPhone. Once the victim is panicked, the scammer pretends to switch the call over to a bank: They might impersonate the “Wells Fargo fraud department” or something similar. Then, they’ll ask for your banking information, which is what they were after all along.

Luckily, to protect yourself from this fraud, you just need to know that Amazon rarely calls and that you will never need to share personal information over the phone. If anything sounds like it could be real, you know the drill: Log into your real Amazon account and check your recent orders to see if anything is amiss.
