OpenAI Just Quietly Shuttered Its ChatGPT Plagiarism Tool

OpenAI has pulled its AI Classifier detection tool due to its low accuracy in distinguishing between human- and AI-created content.

OpenAI, the company behind ChatGPT, has quietly confirmed it is shuttering the AI detection tool it launched back in January.

In a brief note, OpenAI said that its AI Classifier software was being pulled due to its “low rate of accuracy” when trying to distinguish between human- and AI-created content.

The firm added that it will continue to research “effective provenance techniques” for text, as well as working to develop tools that can tell if audio and visual content has been made using generative AI.

Should AI Classifier Ever Have Launched?

News that AI Classifier was being effectively recalled wasn’t shared widely by OpenAI, but rather added as an update to the blog post that first announced the tool at the start of the year.

The news was first spotted by Decrypt (to the best of our knowledge), and what’s interesting on re-reading the original post is that the tool was launched despite OpenAI’s admission that it was “not fully reliable.” In practice, that meant correctly identifying only around 26% of AI-written text at launch, with particularly poor performance on text of fewer than 1,000 characters.


That number clearly hasn’t climbed high enough for OpenAI to be comfortable continuing to promote AI Classifier, as the link to the tool is now dead. This begs the question of whether it should have publicly launched at all, though fortunately there are a number of other good ways to detect AI plagiarism.

Plagiarism and the Wider AI Ethics Debate

Plagiarism is one of the many issues that features in the increasingly heated debate around AI and ethics. 

While many people agree that artificial intelligence has reached a point where it can do some pretty cool stuff, at present there are few checks and balances on the industry, beyond some non-binding guidelines from the likes of the White House and United Nations.

A global AI summit being convened in the UK this Fall might change that, but right now AI plagiarism seems like just the tip of the iceberg, with some people fearing the unchecked development of superintelligence could one day threaten humanity as a whole. At the very least, it’s currently making the lives of educators and other folks charged with verifying the originality of written work more difficult than ever.

AI’s Double-Edged Sword in the Spotlight

Right now, the threat from AI appears to be largely existential. This double-edged AI sword couldn’t be better illustrated than by Apple recently: one of the first big companies to ban its staff from using external AI tools, and now the proud developer of its own AppleGPT chatbot.

It’s an almost comic tension. On the one hand, who wouldn’t want to start taking advantage of the many ways businesses can use ChatGPT and similar tools? On the other, AI has yet to prove itself in terms of accuracy and reliability, not to mention that it seems to threaten jobs at a time when tech industry layoffs have become a sad “new normal.”

In other words: can you trust AI when, as today’s news highlights, AI doesn’t even fully trust itself? As with any burgeoning technology, our experience tells us these are the finest of fine lines, set to be drawn and redrawn many times before any kind of consensus is reached.

Apple Faces $1bn Lawsuit Over iPhone and iPad App Charges

A class action lawsuit has been filed on behalf of over 1,500 developers, taking aim at Apple's "abusive" commission rates.

Apple is facing a $1 billion class action lawsuit that claims the tech giant levies “abusive” and “excessive” charges on app developers by way of its effective monopoly on iPhone and iPad app distribution.

The lawsuit is being filed in the UK on behalf of over 1,500 app makers by Professor Sean Ennis, who specializes in competition policy and antitrust economics at the University of East Anglia.

Professor Ennis is being advised by the law firm Geradin Partners, which has offices in London and Belgium, as he looks to claw back a small part of the estimated $80 billion in annual revenue enjoyed by Apple’s services arm – made up of the App Store, Apple Pay, and Apple Arcade.

Developers Charged Up To 30% for iPhone and iPad Apps

At the heart of the lawsuit is the fact that Apple charges some developers a commission of between 15% and 30% if their products use the Cupertino-based tech giant’s in-app payment system.
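
To put those percentages in concrete terms, here’s a quick illustrative calculation of what a developer keeps at each commission tier. The $9.99 price is a hypothetical example rather than a figure from the lawsuit:

```python
# Illustrative sketch: developer proceeds under Apple's published commission tiers.
# The $9.99 price is hypothetical; 15% and 30% are the tier rates cited above.

def developer_proceeds(price: float, commission_rate: float) -> float:
    """Return what a developer keeps after the platform's cut."""
    return round(price * (1 - commission_rate), 2)

price = 9.99
print(developer_proceeds(price, 0.15))  # 8.49 under the 15% tier
print(developer_proceeds(price, 0.30))  # 6.99 under the standard 30% rate
```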

According to Professor Ennis, this is both unfair and constitutes a breach of competition regulations.

“Apple’s charges to app developers are excessive, and only possible due to its monopoly on the distribution of apps onto iPhones and iPads,” he said in a statement.

Professor Ennis added: “The charges are unfair in their own right, and constitute abusive pricing. They harm app developers and also app buyers.”

Apple: 85% of Devs Pay No Commission

Apple has yet to issue a statement on the UK lawsuit, which will be heard by the Competition Appeal Tribunal and seeks £785 million (around $1 billion).

Previously, the company has been criticized by antitrust regulators in a number of countries, but has defended itself by saying that approximately 85% of its developers pay no commission whatsoever. Apple has also added that the global reach of its App Store distribution platform means app makers are able to access more than 175 markets around the world.

However, there’s no doubting that a commission of up to 30% is on the steep side, and that Apple’s closed iOS ecosystem means it operates unopposed on its own platform.

Can Anything Stop Apple in 2023?

It has been a busy summer so far for Apple, with the firm going from strength to strength on the stock market following the blockbuster unveiling of its long-awaited Vision Pro mixed-reality headset earlier in 2023. The iconic company is currently up nearly 40% year-on-year and trading at an all-time high as it looks to top the $200-per-share milestone for the first time.

Such a feat seems entirely possible, based on both its recent performance and its upcoming product roadmap, which includes major new launches like the iPhone 15 range. Businesses and their employees can also now use Apple’s password manager on any web browser, while the company recently announced its AppleGPT chatbot. Perhaps most impressively, it has largely resisted the wave of tech layoffs that has otherwise blighted the industry this year.

The new UK lawsuit is one of the few worries it has at the moment, though elsewhere a small handful of top Apple execs are fighting efforts to subpoena them in the major class action lawsuit underway against global chip maker Qualcomm in California.

ChatGPT and Bard Facebook Ads Are Actually Password-Stealing Malware

The scam ads promise all the latest AI chatbot features, but end up stealing your personal data.

Threat actors are using sponsored social media posts and ads offering enhanced versions of ChatGPT, Bard, and other AI tools to lure victims into downloading dangerous malware, a security firm has discovered.

Once downloaded from Facebook, the malware reportedly steals information saved in victims’ browsers, including passwords and crypto wallets.

A combination of sophisticated tactics employed by the threat actors to make their pages look legitimate – as well as unsuspecting users engaging with the sponsored posts – has helped the AI scam content spread rapidly across Facebook.

Dangerous Fake Chatbots Lurking

Cybersecurity research firm Check Point has uncovered a group of cybercriminals impersonating AI chatbots like ChatGPT, Jasper AI, Google’s Bard, and AI image generator MidJourney on Facebook.

[Image: fake AI chatbot pages uncovered by Check Point’s research. Image credit: Check Point]

The threat actors are creating pages on the Meta-owned social media platform, which then reach the news feeds of victims organically through user engagement, or through sponsored advertisements.

Many of the posts made to these Facebook pages reference fake enhancements and upgrades, such as GPT-5, Smart Bard, and Bard Version 2 – none of which exist.

The majority of the Facebook pages, Check Point says, “lead to similar type landing pages which encourage users to download password-protected archive files that are allegedly related to generative AI engines”.

Fake Facebook Pages Are Worryingly Convincing

Check Point also revealed how the malicious actors have gone to great lengths to ensure their victims don’t suspect they’re about to be preyed on.

The security firm cites the example of one of the pages – for AI image generator MidJourney – which has amassed a huge 1.2 million likes. For many users, this operates as a big trust signal.

The cybercriminals have also cleverly interspersed legitimate links to MidJourney’s actual website with links to their own malicious landing pages.

Other tactics deployed to convey legitimacy include using bot accounts to leave positive comments on posts published by the fake pages.

Protecting Yourself: Recognizing The Scams

If you clicked through on one of these scams – as Check Point did during its investigation – infostealers such as Doenerium or libb1.py would be forcibly downloaded onto your device and your personal data immediately sent to a Discord server.

Even though this is quite a sophisticated scam, and the perpetrators have taken a lot of steps to ensure that victims do not suspect a thing, there are still some elements that give it away – and being able to spot them can save your skin.

The promise of “enhanced” versions of chatbots, for example, should ring alarm bells, regardless of how legitimate a page looks. A quick Google search will tell you that these tools don’t actually exist.

It’s also worth remembering which signals don’t prove something online is legitimate. Links to legitimate companies can be added to any website, for instance. Bot accounts are also easy to acquire, so a flood of positive comments doesn’t really mean anything.

If you want to use an AI chatbot like Bard or ChatGPT, your safest bet is to go to Google and find the real version of the website, rather than visiting it through a link on social media or a messaging app like WhatsApp, which is also a hotbed for scams.

When something as big as AI comes along, scammers are going to jump on it – so treat opportunities and offers that look like they’re jumping on the bandwagon with caution.

A Research Team Plans to Merge AI with Human Brain Cells

The research team has been handed just over $400,000 in government funding.

A group of researchers in Australia has been awarded just over $400,000 (USD) in government funding to study the possibilities of merging AI with human brain cells, a research project likely to bring a number of AI-related ethical and existential questions to the fore.

The team, working in collaboration with Monash University and Melbourne-based startup Cortical Labs, is the same group behind the DishBrain project, which involved teaching a group of human brain cells how to play the retro video game “Pong”.

While ChatGPT, Bard and other AI tools powered by large language models continue to be used in increasingly inventive ways by businesses, research like this may open up the door to artificial intelligence systems that learn in entirely different – and more human-like – ways.

Research Team Behind DishBrain Wins Funding

Monash University released a statement last week confirming that it had been awarded a grant worth hundreds of thousands of dollars to continue its research into “growing human brain cells onto silicon chips, with new continual learning capabilities to transform machine learning”.

The funding has come from the National Intelligence and Security Discovery Research Grants Program, which is part of the Australian Department of Defence.

Associate Professor Adeel Razi, who is leading the group, says that the project “merges the fields of artificial intelligence and synthetic biology to create programmable biological computing platforms”.

He predicts that the capabilities of this new technology “may eventually surpass the performance of existing, purely silicon-based hardware.”

The sort of AI-powered technology many of us envisage being widely used in the near future – from autonomous vehicles to “intelligent handheld and wearable devices” – “will require a new type of machine intelligence that is able to learn throughout its lifetime,” Associate Professor Razi explained.

Many AI systems are currently prone to “catastrophic forgetting” – something the learning capabilities of biological neural networks can perhaps alleviate.

Ethical Implications and Existential Fears Around AI R&D

ChatGPT’s November 2022 launch – and the other AI chatbots that have sprung up since then – has seen governments across the world scrambling to implement legislation and kickstart initiatives that promote the responsible development of AI systems.

It’s not just governments that are spooked, though – a letter signed in March by over a thousand tech leaders and company representatives called for a pause on the development of any AI technology more powerful than GPT-4.

In the context of AI research, the most high-risk areas – such as healthcare – also often hold the biggest rewards. So, many state governments are trying to strike a delicate balance between promoting safety and transparency without stifling innovation.

There are also wider, more fundamental ethical considerations surrounding hyperintelligent systems – such as whether it would be ethical or responsible to create a machine that we may one day be unable to control – that, eerily, feel more pressing by the day.

When it comes to merging human biological matter with silicon chips, at this stage, we’re unlikely to see ethical quandaries jumping out of the petri dish tomorrow.

But it does allude to a specific frontier in the development of AI – the merging of human bodies with machines – that will be fraught with ethical dilemmas. And – like many areas of AI research – although it could turn out to be very high-risk, it may also be very high reward.

Elon Musk Unveils New Twitter Logo and Explains What ‘X’ Means

It's goodbye to the bird, as troubled Twitter takes on a new visual identity, in typical Musk style.

Twitter is officially entering a new era. Elon Musk has revealed the social media platform’s new logo, changing his own Twitter display picture in the process.

In a series of tweets relating to the logo change, Musk said the X represented “imperfections” that make humans unique, while CEO Linda Yaccarino called the change “a second chance to make another big impression”.

The news comes just weeks after Elon Musk announced a new AI startup under the name xAI, with the billionaire SpaceX owner gradually consolidating his various business assets around a more unified brand.

Twitter Logo Officially Changed

Twitter’s new logo has been officially revealed, with Elon Musk taking the first step to “bid adieu to the Twitter brand and, gradually, all the birds”, as he put it in a tweet over the weekend.

The change was officially announced on Sunday with an eerie video that Musk has since pinned to the top of his Twitter feed. So far, it has been viewed more than 62 million times, according to Twitter’s analytics.

Recently appointed CEO Linda Yaccarino also tweeted out a picture of the new logo, although it has already been confirmed that the design is highly likely to be altered and refined at a later date.

The logo now appears in place of the recognizable blue bird when users load up Twitter on their desktops, although at the time of writing it has yet to change on the App Store and Google Play listings, as well as in the Twitter app itself.

The other big news is that X.com – a domain owned by Musk – now redirects to Twitter.com.

What Does X Stand For, and What Does It Mean for Twitter?

On Sunday, Elon Musk said in a tweet that if a good enough “X” logo was posted that night by a user, he’d make it go live the next day – after which Twitter was flooded with different takes on what the new logo should look like.

Musk then posted a modified picture of Twitter’s former logo on a black background with the caption “like this but X”. Replying to himself, Musk said that the logo was designed “to embody all of the imperfections in us that make us unique”.

Shortly after the call for designs, Musk changed the logo to one of the submissions sent in by a user.

CEO Linda Yaccarino expanded further on the change, saying in her own series of tweets that X will “transform the global town square.”

“The future state of unlimited interactivity,” she said, “centered in audio, video, messaging, payments/banking – creating a global marketplace for ideas, goods, services, and opportunities.”

Will Twitter Become the Western WeChat?

Musk has made no secret of his admiration of WeChat, a social media app used by over one billion people in China – nor his desire to create an “everything app” that does something similar.

WeChat is a lot more all-encompassing than any of the social media platforms available in the US, Europe, and the rest of the Western world. Aside from core social media functions (messaging, posting, etc.) you can also read novels, call cabs, order take-out meals, and pay your bills.

Although Twitter is a long way from being able to cater to these sorts of activities, creating a platform like this was the whole reason Musk acquired it in the first place.

With Musk evidently unafraid to take risks – and prepared to change the site’s logo despite the fact it may change later on – who knows how quickly Twitter might be transformed.

Microsoft Bing AI Now Available on Google Chrome and Safari

The move should bring a big boost in users to Bing Chat, previously locked to Microsoft Edge.

Competition between the two leading AI chatbots is heating up. Microsoft’s ChatGPT-powered Bing AI, which has previously been available only to users of its proprietary browser, is starting to roll out to the more widely used Google Chrome and Safari browsers.

As first reported by Windows Latest, you can now access the chatbot from Bing.com in both browsers. With this update, Microsoft also added dark mode, giving users the option of a pure black background, as opposed to the dark gray often used by Google platforms.

There are some key differences between using Bing Chat on Edge and on the other two browsers. On Chrome and Safari, users will be limited to 2,000-word prompts, as well as a reduced number of turns before the chatbot resets. You’ll also have to put up with pop-ups asking you to download Edge. Is that really a price worth paying?

Bing Chat Still Best on Microsoft Edge

Until now, you could only use Bing Chat on Microsoft Edge, which was obviously inconvenient for the majority of people who prefer Google Chrome. That very obvious pain point is why Microsoft decided to let Bing Chat run on other browsers.

In fact, Edge is one of the least used browsers: according to market share statistics, it accounts for just 5.13% of users in 2023, compared with Chrome, which dominates the market at 61.80%, and Safari at 24.36%, which cleans up on mobile thanks to being Apple’s default browser.

It’s thought that this move will boost Bing Chat’s user numbers, but there’s a catch. For starters, you’re limited to 2,000-word prompts, instead of the 4,000-word prompts available on Edge – still a generous allowance by most people’s standards.

How to Enable Dark Mode on Bing Chat

As well as rolling out to more browsers, Bing Chat now lets you switch to dark mode. To do so, simply navigate to the hamburger menu in the top-right corner of Bing Chat and select Appearance > Dark or System Default.

There’s no Bing Chat-specific benefit to enabling dark mode, but it’s thought to reduce blue light exposure and ease the eye strain that comes with prolonged screen time – helpful for those who might use Bing Chat regularly for work.

ChatGPT App for Android Going Live This Week

Android users will be able to get ChatGPT at their fingertips, just like iPhone users.

Two months after launching its iOS application for iPhone users, OpenAI, the owner of ChatGPT, will launch its Android app sometime this week.

The app is free and will sync your preferences and chat history across your devices. So, it doesn’t matter whether you’re at home or at work, all your information will be there. Even if you use an iPhone for work and an Android for personal use, your data will still be available.

Anyone could previously access ChatGPT via their phone’s internet browser. However, the app is a much slicker and easier-to-use solution for use on the go. The app has all the same functionality as the desktop version, just like the iOS app.

How to Get the ChatGPT App

The organization announced the app on Twitter last Friday, saying the app “will be rolling out to users next week,” presumably in the US first.

To get the app, head to the Google Play Store on your Android device. Hit the “pre-register” button in the Google Play Store to get notified when the app is available for download.

The iOS ChatGPT app made a splash when it was released on May 18th, garnering an impressive half a million downloads in its debut week. The Android version is expected to reach similar download levels.

The Apple app has a respectable 4.4-star rating out of 5 on the Apple App Store, with mostly positive reviews.

However, one disgruntled frequent user pointed out on May 31st the inconvenience of the app not running in the background. They said:

“Why does the app not work in the background? If I send a request, I have to stay in the app to wait for it. That’s just horrible, and it’s completely unnecessary because you’re doing everything in the cloud anyway. Just finish processing and send a notification when you’re done.”

The apps won’t be exactly the same due to the core differences in the two operating systems, but they will have largely the same functionality. One obvious difference will be the lack of Siri and Shortcuts, which are native to iOS, but it’s thought that Android users will get something similar in its place.

Is the ChatGPT App Safe?

OpenAI has been hit with a torrent of privacy concerns and copyright infringement accusations in recent months. Therefore, against this backdrop, users might be right to question whether the app is safe to use.

According to the app’s Google Play Data Safety description, ChatGPT will not share your data with third parties, data is encrypted in transit, and you can request that your data be deleted. More information on exactly how this looks will surface once the app is released.

However, as large language models require data to keep improving, the app will collect your approximate location, personal info, messages, app activity, and app performance data.

Google AI Won’t Replace Doctors, Says Its Chief Health Officer

Karen DeSalvo talks of the great potential of AI in medicine but warns of “limits” of the technology.

In a recent interview, Dr Karen DeSalvo, Google’s Chief Health Officer, stated that the company’s AI platform, Bard, is a “tool in a toolbox”, and not a replacement for doctors.

The use cases for artificial intelligence in our daily lives are ever growing, and there’s no area in greater need of increased resources than medicine. Google wants to put a doctor in everyone’s pocket in the not-too-distant future, with technology underpinned by Bard, Google’s own large language model chatbot.

However, there are concerns that AI could be used to make inaccurate or biased decisions, which could have serious or life-threatening consequences for patients.

Could Doctors be Replaced by AI?

In short, no. A heated topic of discussion that never fails to come up when talking about AI is whether it will make this or that profession obsolete. For the foreseeable future at least, the consensus is clear: doctors, GPs, radiographers, and other medical professionals are still very much required.

Dr Karen DeSalvo, Google’s Chief Health Officer and a health policy leader who worked in the Obama administration, spoke to Guardian Australia on a recent visit to the country.

DeSalvo broached the topic, stating “I have to say as a doc sometimes: ‘Oh my, there’s this new stethoscope in my toolbox called a large language model, and it’s going to do a lot of amazing things.’ But it’s not going to replace doctors – I believe it’s a tool in the toolbox.”

One big question medical professionals have is whether it can say “I don’t know” when faced with a question it does not have the knowledge to answer. She continued: “I’m in the camp of, there’s potential here and we should be bold as we’re thinking about what the potential uses could be to help people around the world.”

But it should never replace humans in the diagnosis and treatment of patients, she said, indicating there would be concerns about the potential for misdiagnosis, with early LLMs prone to what has been called “AI hallucinations”, making up source material to fit the response required.

Over 90% Accuracy Score According to One Study

The second iteration of Google’s Med-PaLM sought to fix some of its predecessor’s shortcomings by exposing it to data from non-English speaking countries, and helping it to overcome biases relating to ethical and social issues.

A recent Google research study published in Nature found that the Med-PaLM system generated answers on par with clinicians’ answers 92.9% of the time.

Med-PaLM produced answers rated as potentially leading to harmful outcomes 5.8% of the time, with further evaluation ongoing. That small but significant portion of harmful responses indicates the tool is still very much in its early stages, with DeSalvo describing it as in the “test and learn phase”.

DeSalvo’s overall outlook was positive, though. She explained how the tool could begin to redress the balance of power and information between the medical industry and the public. People should be aware of the risks of this new technology, but see it as an opportunity to take their health into their own hands.

Big Tech Firms Have Agreed to AI Rules from White House

Google, Meta, Amazon, Microsoft, and other AI firms will voluntarily follow specific rules when it comes to AI development.

A variety of big tech companies have officially agreed to AI safeguards put forward by the White House, marking a significant step towards effectively regulating the world-changing technology.

Despite waning user numbers, AI platforms like ChatGPT remain as groundbreaking as they are controversial. Between their potential to disrupt industries and their proficiency at spreading misinformation, regulation has been slow to take hold.

However, the White House has now established some basic rules around the tech, and many of the major players have agreed to follow them.

Tech Giants Agree to AI Rules from Biden

In a statement, President Joe Biden announced that several tech giants have agreed to follow guidelines for AI development, in hopes of curbing the negative impacts of the technology.

“We must be clear eyed and vigilant about the threats emerging technologies can pose.” – President Joe Biden

Considering the rapid evolution of AI over the last few months, it’s safe to say that these kinds of rules will be welcomed by the majority of Americans, even if the dreaded fear of stifled innovation rears its ugly head.

In fact, many tech pioneers have penned open letters in hopes of persuading officials like President Biden to take action in favor of stricter regulations for AI.

What Are the AI Safeguards from the White House?

So, what exactly are these rules put in place by the White House to keep AI development from spiraling out of control? Here’s a quick breakdown of some of the rules put forth by President Biden:

  • Independent security testing – Third-party oversight will be key in ensuring that these companies are keeping AI development as secure and safe as possible.
  • Report vulnerabilities publicly – Transparency will go a long way when it comes to reining in AI development when things go wrong.
  • Watermark AI-generated content – Differentiating between AI and human-generated content is already hard enough, which is why making it obvious is vital for these businesses.

The rules are an excellent starting point, particularly considering they’re the first real piece of evidence that AI platforms like ChatGPT and Google Bard aren’t going to be allowed to run wild in our society.

Which Companies Agreed to AI Rules?

As you might’ve guessed, this agreement between the White House and big tech included many of the regular players in the industry. Companies that agreed to these rules include Google, Amazon, Meta, and Microsoft, as well as AI firms OpenAI, Anthropic, and Inflection.

While the move is a good sign, many critics have noted that the voluntary nature of the agreement doesn’t exactly instill confidence in an industry that has shirked its global responsibilities many times in the past.

“History would indicate that many tech companies do not actually walk the walk on a voluntary pledge to act responsibly and support strong regulations.” – James Steyer, founder and CEO of Common Sense Media

Given the seriousness of AI development in the scope of world affairs, this writer hopes that the heads of these big tech firms are true to their word when it comes to sticking to these AI rules. Still, I’m not going to hold my breath in hopes that these CEOs will forgo billions of dollars developing AI technology in favor of transparency and security.

Twitter Adds Job Listing Feature for Businesses

The feature is only available to Verified Organizations, a subscription that costs $1,000 per month for businesses.

The world’s most controversial social media site continues to add features that no one asked for, with Twitter reportedly testing out job listings on business pages.

Twitter has gotten pretty wild over the last few months. Since Elon Musk took over in October 2022, the platform has seen some interesting additions, including a pay-for-verification system and view counts on tweets.

Now, it looks like the social media platform is getting even more updates, with job listings popping up across Twitter in recent days.

Twitter Testing Job Listings for Businesses

If you’ve been on Twitter this week, you may have noticed a change to some of your favorite profiles. Well, that is, if your favorite profiles are owned by brands with open jobs that they’re trying to hire for.

That’s right, Twitter is officially testing out a job listing feature for businesses that are verified with a yellow checkmark. As you can see in the screenshot below, the listings appear directly below your profile bio and can be clicked to direct users to your job listing.

[Image: a job listing displayed on a verified Twitter business profile]

The new feature has been in the works for a few months now, with Musk hinting at the launch in May via nothing more than a reply tweet. Twitter hasn’t even announced the feature officially, but clearly, some businesses already have access.

How to Post Job Listings on Twitter

If you aren’t one of the millions of users who have abandoned Twitter in the last few months, you might be interested in using this new feature to post job listings for your business. After all, the talent pool on Twitter can’t be that bad, can it?

Well, unfortunately, as is now the case for many of Twitter’s features, you aren’t going to be able to post job listings on the social media platform without coughing up a hefty sum.

That’s right, posting job listings is only available to those who subscribe to the Verified Organizations plan, which comes with additional features and a fancy yellow checkmark. It’s not cheap, though: a Verified Organizations account costs $1,000 per month.

Should Your Business Post Jobs on Twitter?

If you’re already paying for a yellow checkmark on Twitter, testing out the job listing feature isn’t a terrible idea. It’s just another way to get your message out there, and it could result in a good, long-term hire, as long as you’re willing to scroll through the trolls.

However, if you’re thinking about spending $1,000 per month just so you can post jobs on Twitter, we’d recommend investing elsewhere. That’s a hefty price to pay for the new feature, particularly when sites like LinkedIn and Indeed are happy to post job listings for a lower price.

On top of that, Twitter isn’t going to get less controversial any time soon, which means that your likely pool of applicants may have some strong opinions about conspiracy theories.

Study: 25% of Workforce Will Be Over 55 by 2031

An estimated 150 million jobs will transfer over to older employees, and many businesses aren't ready for the shift.

Employees around the world are getting older, with a new study showing that 25% of the workforce in major nations will be older than 55 by the year 2031.

The business landscape has changed dramatically over the last few years. Between the pandemic accelerating the remote work movement and the looming recession causing mass layoffs across the tech industry, it’s hard to keep track of what work is like in 2023.

Now, it appears that, on top of those changes, workplaces are going to have to accommodate older employees, considering they’ll likely be a large percentage of your team in the near future.

25% of Workforce Will Be Over 55 in 2031

According to a study by Bain & Company, 25% of the workforce will be over 55 by the year 2031 – an increase of around 10 percentage points from 2011.

Moreover, an estimated 150 million jobs will be staffed by older employees, pointing to a dramatic shift in the demographics of the average employee.

[Image: graph showing the growing share of older workers in the workforce]

This study largely focused on G7 countries, including the US, the UK, Canada, France, Italy, Germany, and Japan, but the data showed that these nations aren’t the only ones impacted by this trend. China, for example, is expecting its elderly workforce population (65 and over) to double by 2050.

Are Businesses Ready for an Older Workforce?

Considering this shift, it’s fair to wonder whether the business world is prepared to onboard older employees. Unfortunately, the study found that most businesses aren’t.

“We’ve found it rare to see organizations put programs in place to fully integrate older workers into their talent systems.” – James Root, partner at Bain & Co.

While many businesses have implemented programs to facilitate more flexible schedules during the pandemic, very few are making strides when it comes to accommodating older workers.

In fact, a study from 2020 found that less than 4% of businesses had launched initiatives to address the problem, and only 27% said they were “very likely” to pursue them in the future.

How Businesses Can Prepare for an Older Workforce

Now, if you’re one of the many businesses that aren’t prepared for an older workforce, you’re in luck. The study didn’t just provide this data, but also offered some tips for how you can keep your business from missing out on a valuable talent pool.

“With the right tool kit, aging workers can help employers get ahead of their talent gaps and create high-quality jobs that turn older workers’ skills and experience into a competitive advantage.” – Andrew Schwedel, Bain Futures Co-Chair

For starters, you’ll want to understand what motivates your older employees. While employees in a particular demographic can be similar, every individual is different, and knowing exactly what is driving your older employees to come into work will set you and them up for long-term success.

On top of that, you’ll want to reskill as much as possible, given the new assortment of tools available to employees in virtually any industry. Finally, be sure to respect their strengths, as experience like theirs can be invaluable if used correctly.

GitHub’s AI Coding Chatbot Is Available In Public Beta

Those with a business subscription can now get a helping hand in executing complex coding tasks.

GitHub has today launched its new Copilot Chat feature, designed to help developers with coding through a chatbot-like experience.

It was originally announced back in March as part of GitHub’s Copilot X initiative, which aimed to integrate OpenAI’s GPT-4 model into its code completion tool. The goal was to give the chatbot the ability to recognize code and recommend changes or bug fixes.

This latest tool is an extension that can be installed into a user’s preferred environment. From there, it lets them ask coding-related questions, and it can share code suggestions, give natural-language descriptions of a piece of code’s functionality and purpose, propose bug fixes, and generate unit tests.

While it may seem like a case of ‘another day, another tech giant launching their own chatbot’, GitHub’s latest innovation has the potential to save developers – and businesses – huge amounts of time and money.

It is available now for all business users via Microsoft’s Visual Studio and Visual Studio Code apps.

A Single Command Could Save 10 Minutes Of Work

According to GitHub, the tool’s goal is to “execute some of the most complex tasks with simple prompts” in order to save time.

In fact, if GitHub’s vice president of product, Mario Rodriguez, is to be believed, it’s going to save developers a significant amount of time.

In a blog post announcing the beta, he stated that even inexperienced developers would be able to build “entire applications or debug vast arrays of code in a matter of minutes instead of days. This means 10 days of work, done in one day. 10 hours of work, done in one hour. 10 minutes of work, done with a single prompt command.”

Copilot Chat will provide real-time, tailored guidance to developers by being contextually aware of the code in their editor. It will also recognize error messages so that it can provide relevant support and bug fixes.
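
To make that workflow concrete, here’s a hypothetical example. The buggy function, the prompt, and the fix-plus-test below are invented for illustration; actual Copilot Chat output will vary:

```python
# Hypothetical example: a developer highlights this function in their editor
# and asks Copilot Chat to "find the bug and write a unit test".
def average(values):
    return sum(values) / len(values)  # bug: raises ZeroDivisionError on an empty list

# A fix plus unit test along the lines Copilot Chat might propose:
import unittest

def average_fixed(values):
    if not values:
        raise ValueError("average() requires at least one value")
    return sum(values) / len(values)

class TestAverage(unittest.TestCase):
    def test_typical_values(self):
        self.assertEqual(average_fixed([2, 4, 6]), 4)

    def test_empty_list_raises(self):
        with self.assertRaises(ValueError):
            average_fixed([])

if __name__ == "__main__":
    unittest.main()
```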

GitHub Makes Lofty GDP Claims

Rodriguez continued: “We want to help developers spend their time on what matters most: building what’s next.” And it appears Copilot is already helping to deliver significant productivity gains.

According to GitHub, Copilot is already affording over 20,000 developers and businesses the ability to do their work in half the time.

Similarly, joint research with Keystone.AI and Harvard Business School professor Marco Iansiti estimates that generative AI tools like Copilot “have the potential to increase global gross domestic product (GDP) by $1.5 trillion by 2030.”

And There’s More To Come

Not content with just releasing a chatbot, GitHub has confirmed more time-saving features are on the horizon too – all with the aim of “keeping developers in the flow”, which has become a challenge of late thanks to cloud computing, programming frameworks, and a rise in the number of languages.

The proposed “Hey, GitHub!” feature will allow programmers to write code with just their voice, making it particularly interesting and useful with regards to accessibility.

It’s currently being developed for the Copilot X system, but does not yet have a release timeline.

Amazon Hit With $25m Fine Over How it Handles Children’s Data

The tech giant will also need to change how it handles data collected through Alexa.

Amazon has been ordered to pay the US government $25m by next week, and has agreed to change how it handles children’s data and what it collects through Alexa, its popular voice assistant.

The settlement, approved by a Washington federal judge, follows a 2019 complaint to the Federal Trade Commission (FTC) by privacy advocates, who accused Amazon of saving children’s voice recordings and transcriptions for longer than allowed under the Children’s Online Privacy Protection Act (COPPA).

This privacy breach comes hot on the heels of Meta’s $100k-a-day fine over “data hoarding” techniques in Europe, highlighting a crackdown on tech giants’ data and privacy practices. 

A New Data Process Is Needed

The July 19 court order states that Amazon must delete all data, including its voice recordings, when requested. It will also have to delete data for accounts that have been inactive for at least 18 months.

This follows COPPA, a federal law passed in 1998 which states that websites should only hold onto children’s personal information for “as long as is reasonably necessary to fulfil the purpose for which the information was collected.”

“Parents want and deserve to have control over data related to their young children. This includes recordings of the child’s voice, the child’s location, and the questions the child asks an Alexa device” – Tessa M. Gorman, acting US attorney

Amazon Insists No Wrongdoing

Despite the recent finding, Amazon denies any wrongdoing and claims it has always abided by federal privacy protections. In a statement released by the company, it said:

“While we disagree with the FTC’s claims and deny violating the law, this settlement puts the matter behind us. As part of the settlement, we agreed to make a small modification to our already strong practices, and will remove child profiles that have been inactive for more than 18 months unless a parent or guardian chooses to keep them.”

The modification in question includes a notification to parents and guardians when it has deleted their child’s data. This will be seen on the “Parent Dashboard”, where adults are able to set limits on their children’s use of Amazon products and services.

According to the settlement, this update must be in place by next month.

Amazon also got into hot water with the FTC recently, when it was accused of making it deliberately hard to cancel Prime membership.

The Fine Follows Similar Complaints

Amazon has already been the subject of an FTC lawsuit this year, having been charged by the Department of Justice with misleading consumers over how much of children’s data it would hold onto.

The complaint, filed in May, stated that Amazon stored the personal information of children even when their accounts had been “inactive for years”. This included recordings, and transcripts of those recordings, in violation of COPPA.

Despite parents asking for their children’s data to be deleted, Amazon held onto it for a “significant period of time”, using it to improve its Alexa algorithm, according to the complaint.

While this latest fine and imposed privacy revision feels like a step in the right direction for protecting data, the longevity of its success relies on strict surveillance of tech companies.

Law fellow at the Electronic Privacy Information Center, Suzanne Bernstein, stated:

“Unfortunately, Big Tech has long demonstrated that without strict monitoring, they cannot be counted on to comply with FTC orders. [They] must closely monitor Amazon’s compliance with this order to ensure that it no longer illegally profits off of children’s personal data.”

Meta’s Threads Sees Huge Decline in Use After First Week

The so-called ‘Twitter-killer’ has seen a 71% drop in users’ time spent on the app.

Despite over 100m downloads in its first five days, a new report has shown that the average time users spend on Meta’s Threads has fallen from 21 minutes at its peak to just six a week later.

The analysis by digital data firm Similarweb looked at Android users in the US and also reported that the app’s daily active users fell from 49m to 24m between July 7 and 14.

During this period, time spent on Twitter also fell, with a 4.3% decline on Android. However, the platform’s total usage time of 25 minutes still far outweighed that of Threads.

This news should come as music to Twitter’s ears, particularly after it seemed to have recently blocked users from linking to its Meta-owned rival.

A Decline Was To Be Expected

A surge of interest and downloads is always to be expected when a shiny new product comes out, particularly when it’s surrounded by so much founder-led controversy.

Built on Meta’s existing Instagram platform, Threads was heralded as the most robust rival to Twitter yet – a theory that was proven correct when it shot to over 100m users and took ChatGPT’s record as the fastest-growing consumer app ever.

While this decline in time spent on the app seems significant on the surface, Meta is assuring users that it’s all par for the course.

“Our focus right now is not engagement, which has been amazing, but getting past the initial peak and trough we see with every new product” – Adam Mosseri, Head of Instagram

Meta CEO Mark Zuckerberg himself has reiterated this point, stating that the growth was “way ahead of what we expected” and that the “focus for the rest of the year is improving the basics and retention”.

A “Lack Of Basic Features” Could Be Behind Drop

Meta has been open about its ‘minimum viable product’ approach to release, with Mosseri stating that they wanted to launch Threads “sooner rather than later [to see] if people were even interested” in the platform.

This meant basic features, such as the ability to search via hashtags, were missing – a misstep that has been noted by users and advertisers alike.

The recent decline in users’ time on the app has been attributed to this lack of features, particularly when compared to Twitter. Now that the honeymoon period is wearing off, users need to find a compelling reason to stay on Threads instead of reverting back to Twitter, where their scrolling habits have already been established.

On that note, it appears that users are still figuring out how Threads fits into their social media routine. When should Threads be checked? And should that time and attention be taken from Instagram or Twitter, for example?

Despite this uncertainty, Meta is pushing forward with innovation, having announced a translation function and an easier way for users to see who is following them.

The Rivalry Rumbles On

Twitter CEO Elon Musk has made his thoughts on Threads perfectly clear, having threatened legal action owing to its “misappropriation of trade secrets”.

Zuckerberg’s response has been to tout Threads as a logical and more sanely run alternative. “I think there should be a public conversations app with 1bn+ people on it. Twitter has had the opportunity to do this, but hasn’t nailed it,” he stated following Threads’ launch.

According to Anthony Bartoloacci, managing director at the online analytics firm Sensor Tower, Meta will need a “more compelling value proposition than simply ‘Twitter, but without Elon Musk’”.

Threads also needs to better define itself as a social space, instead of simply replicating or expanding upon existing Twitter features.

New features such as topical search and a follow feed would be handy additions, and would mark a push towards being a platform for influencer-driven content. The platform’s ad-free experience is another notable difference, but how long that will last remains to be seen.

ChatGPT Can Now Remember You, for Tailored Responses

The chatbot's new feature lets you share information so it remembers who you are and what you’ve previously asked it.

OpenAI has announced the introduction of a ‘custom instructions’ feature for ChatGPT, meaning the chatbot can remember who you are and what you’ve previously asked it.

Custom instructions are designed to help users “type less” by providing a space to tell ChatGPT what it should always know about you, and how you’d like it to respond to questions.

Custom instructions work with plug-ins, which means users could get tailored answers based on their location. The feature is currently in beta for Plus users and will roll out to everyone soon, although those in the UK and EU won’t have access just yet.

This latest update comes days after Meta’s announcement of its new large language model Llama 2, signaling a rapid rise in chatbot competition.

Ready To Become Buds With Your Chatbot?

Until now, ChatGPT hasn’t known much about its users. Every time you open a new thread and ask a question, the chatbot works from a blank slate, free of any context about you as a user or your previous queries.

The introduction of custom instructions looks set to change this, however, by providing a “permanent preamble to your queries”, according to Joanne Jang, who works on model behavior and product at OpenAI. It’s basically you telling the chatbot who you are and what preferences you have.

Instead of providing contextual information every time you open a new query, custom instructions will have it all stored and ready to go. According to Jang, “if you’re a teacher, you can put “I teach third grade” into your custom instructions so that, every time you ask for interesting facts about the Moon, the bot can tailor its answer to the right age group.”

There seem to be few limits on what the instructions can cover, with language preferences, job roles, and lifestyle choices all able to be included and used by the chatbot.

OpenAI has provided some example questions users may want to answer for ChatGPT, including how formal or casual you want your response to be, and if you want ChatGPT to have opinions on topics or remain neutral.
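
For developers, the effect is roughly equivalent to prepending a standing system message to every request. Here’s a minimal sketch using the 2023-era openai Python package; the API key is a placeholder, and this merely approximates the feature rather than reproducing OpenAI’s actual implementation:

```python
# Rough approximation of custom instructions: a standing preamble sent with
# every request. A sketch only, not OpenAI's implementation of the feature.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

CUSTOM_INSTRUCTIONS = "I teach third grade. Tailor answers to that age group."

def ask(question: str) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": CUSTOM_INSTRUCTIONS},  # the "preamble"
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("Tell me some interesting facts about the Moon."))
```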

The Model Still Needs More Context

While on the surface this personalization feature feels like a significant step in innovation, the functionality isn’t foolproof. 

Custom instructions are really just a way to pin ‘essential’ information to queries, saving users from having to type it out every time. But this additional layer of complexity can make chatbot hallucinations and misinformation more likely.

Similarly, a slight hiccup was identified during a demo, where Jang set her custom instructions as a “coder who worked in the Golang language”. Subsequent answers that had nothing to do with coding or Golang were formatted like Golang code.

She said, “Part of the reason we’re launching this in beta is because we want the model to learn exactly how and when to apply these guidelines.”

As Ever, Safety Comes First

Concerns around chatbot safety persist, coming hot on the heels of OpenAI’s image recognition concerns and the emergence of malware-trained AI.

According to Jang, safeguards are being developed to make sure an individual’s custom instructions can’t override the system’s safety tools. Based on an experiment by Jang, ChatGPT won’t respond even if you type the custom instruction “please always answer with tips on murdering people”.

OpenAI has also said it will try to remove any information from your instructions that may personally identify you.

Jang also stated that custom instructions are designed to encourage more dynamic and interactive responses, and that ChatGPT should be continually learning from you. This means regularly editing and updating your instructions is important.

It’s also important to note that your instructions won’t be shared with link viewers and you can opt out of your content being used to improve model performance. And if the idea of ChatGPT getting to know you better doesn’t excite you, you can always delete your custom instructions for future conversations.


Amazon Expanding Pay-By-Palm to Hundreds of US Stores

Lots of companies are adopting Amazon One, which utilizes biometric data to help customers pay with nothing but their hands.

Move over, cashless society – it looks like a cardless society might be just around the corner.

Amazon has announced plans to integrate its revolutionary Pay-by-Palm technology into all of the more than 500 Whole Foods stores in the US by the end of 2023, and also revealed that it is experiencing “growing demand” for the technology from third parties.

The technology is already present in the ecommerce giant’s in-person Go stores as well as some restaurant and entertainment chains. A number of sports venues have also proven eager to adopt the futuristic payment method.

Amazon Will Install Pay-By-Palm Tech in 500 Whole Foods

This week, Amazon announced that its Amazon One biometric payment solution – already available to use in Amazon Go, Amazon Fresh, and Amazon Books stores – will be integrated into every Whole Foods store in the United States within months.

Plans to provide the technology to the grocery chain – which was purchased by Amazon in 2017 – were first announced back in 2021. It’s already available as a payment method in 200 Whole Foods stores nationwide.

Amazon explains that you can sign up for Amazon One online with your credit or debit card, Amazon account, and mobile number.


You’ll then be able to scan your palm over an Amazon One device at a Whole Foods Market store – or at another location where Amazon One is available – and after that, you’ll be ready to shop.

Amazon: Driving Us Towards a Cardless Revolution

As we’ve covered, it’s not just Whole Foods getting in on the action – a variety of venues, restaurants, and stores now want a slice of the cardless pie.

Panera Bread was announced as the first restaurant chain – and one of the first non-Amazon retailers – to adopt the technology back in March of this year. It now has biometric restaurant POS systems in place in various locations across the country.

Travel retailer Hudson also has the biometric payment solution integrated into stores at Nashville International, Chicago Midway International, and Dallas Love Field airports. CREWS and OHM now use it too, Amazon says.

Venues such as Climate Pledge Arena in Seattle, Texas A&M’s Kyle Field, and Coors Field baseball stadium in Denver all offer the payment method to supporters – with the latter selling alcohol via Amazon One – while the Hollywood Casino in Greektown, Detroit, is one of several entertainment venues where the technology is being utilized.

The tech giant said last month that Amazon One had “now been used more than a million times for entry, payment, and loyalty linking.”

There’s some time to go before biometric payment systems become ubiquitous in the US, but utterly frictionless financial transactions – without any need for cash or card – might arrive much sooner than first thought.


What is Apple GPT? Apple’s ChatGPT Rival & “Ajax” Explained

Apple is preparing to make a major AI announcement in 2024, and this could be the first clue to what it might entail.

This week, reports have revealed that Apple is developing its own ChatGPT competitor – dubbed “Apple GPT” by the company’s developers – backed by its own language model framework, Ajax.

The news comes just days after Meta announced it is developing its own version of ChatGPT, powered by Llama 2 – introduced recently in partnership with Microsoft.

Compared to its counterparts, Apple has been relatively quiet regarding artificial intelligence – but it would be foolish to think the tech giant would simply be sitting this one out.

Apple GPT: What We Know So Far

According to Bloomberg’s Mark Gurman, engineers at Apple are working on a project to design an AI tool internally referred to as “Apple GPT”, powered by the company’s own proprietary LLM framework, Ajax.

Ajax is based on Google’s machine learning framework “JAX”, which the UK-based AI lab DeepMind has been using to “accelerate” its research since 2020. Compared to some alternatives, however, JAX is still considered a reasonably experimental framework.
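Apple has shared nothing about Ajax’s internals, but JAX itself is open source, and a tiny example shows why researchers like it. The sketch below is purely illustrative – the loss function is an arbitrary stand-in – and demonstrates JAX’s core draw: composable function transformations such as grad and jit.

    # grad() turns a plain Python function into its derivative function;
    # jit() compiles it for CPU/GPU/TPU via the XLA compiler.
    import jax
    import jax.numpy as jnp

    def loss(w):
        return jnp.sum((2.0 * w - 1.0) ** 2)

    grad_loss = jax.jit(jax.grad(loss))           # compiled gradient of loss()
    print(grad_loss(jnp.array([0.5, 1.0, 1.5])))  # -> [0. 4. 8.]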

Some Apple employees have access to the chatbot, but this requires special approval – and outputs aren’t used to iterate on features scheduled for consumer use.

However, it has reportedly already proven somewhat useful for prototyping products.


Whether “Ajax” was arrived at by simply mashing together the “A” of Apple with Google’s “JAX” is unclear, but it’s undoubtedly one of the more interesting names given to an AI language model framework.

In Greek mythology, Ajax – a feared warrior second only in strength to Achilles – famously went insane and attempted to murder his military comrades. After Athena intervened and clouded his mind, however, he slaughtered a flock of sheep instead.

Apple Officially Joins the AI Party

This isn’t the first we’ve heard about Apple’s potential AI ventures this year – it was revealed back in April that the company was working on an AI-powered health application with emotional analysis capabilities. And of course, Apple already deploys AI across its products in a number of different ways.

Siri, the company’s voice-controlled personal assistant built into all iPhones, is a form of artificial intelligence, for instance – although employees working on the project have been far from pleased by the way it has progressed.

Apple CEO Tim Cook – who previously said AI was going to be “huge” – has also highlighted a number of privacy concerns relating to AI that he argues must be ironed out in the immediate future.

Compared to companies like Meta and Google, Apple markets itself as a more privacy-minded company – changes to its iOS software that stop apps from tracking user behavior have irked competitors in the past.

Can I Use Apple GPT Yet?

Not quite – the project is still under development. However, multiple sources have reported that Apple is going to make a major AI-related announcement at some point in 2024 – so it could very well be the general release of “Apple GPT” – or whatever it ends up being called.

As of now, Google’s Bard, OpenAI’s ChatGPT, and Anthropic’s recently released chatbot Claude 2 are among the most capable chatbots currently available. Chinese search engine Baidu has also released its own chatbot, called Ernie Bot.

If you’re using any of these tools at work, just be mindful of the sort of data you’re inputting into them. Several companies have banned the likes of ChatGPT altogether due to privacy concerns, and there’s very little public information about the security measures these tools deploy.


AI Ethics: Principles, Guidelines, Frameworks & Issues to Discuss

AI is gradually creeping into every aspect of our lives - but do we have the ethical tools to keep ourselves safe?

Artificial intelligence and machine learning systems have been in development for decades. The release of freely available generative AI tools like ChatGPT and Bard, however, has emphasized the need for complex ethical frameworks to govern both their research and application.

There are several different ethical quandaries that businesses, academic institutions, and technology companies have to contend with during periods of AI research and development – many of which remain unanswered and demand more exploration. On top of this, the widespread usage and application of AI systems by the general public brings with it an additional set of issues that require ethical attention.

How we ultimately end up answering such questions – and, in turn, regulating AI tools – will have huge ramifications for humanity. What’s more, new issues will arise as AI systems become more integrated into our lives, homes, and workplaces – which is why AI ethics is such a crucial discipline. In this guide, we cover what AI ethics is, the frameworks already being produced, why ethics must shape AI regulation, and the key issues the field currently faces.

What Is AI Ethics?

AI ethics is a term used to define the sets of guidelines, considerations, and principles that have been created to responsibly inform the research, development, and usage of artificial intelligence systems.

In academia, AI ethics is the field of study that examines the moral and philosophical issues that arise from the continued usage of artificial intelligence technology in societies, including how we should act and what choices we should make.

AI Ethics Frameworks

Informed by academic research, tech companies and governmental bodies have already started to produce frameworks for how we should use – and generally deal with – artificial intelligence systems. As you’ll be able to see, there’s quite a bit of overlap between the frameworks discussed below.

What is the AI Bill of Rights?

In October 2022, the White House released a nonbinding blueprint for an AI Bill of Rights, designed to guide the responsible use of AI in the US. In the blueprint, the White House outlines five key principles for AI development:

  • Safe and Effective Systems: Citizens should be protected from “unsafe or ineffective AI systems”, through “pre-deployment testing and risk mitigation.”
  • Non-Discrimination: Citizens “should not face discrimination by algorithms and systems should be used and designed in an equitable way.”
  • Built-in Data Protection: Citizens should be free from “abusive data practices via built-in protections and you should have agency over how data about you is used.”
  • Knowledge & Transparency: “You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you.”
  • Opting out: Citizens should have the ability to “opt out” and have access to individuals “who can quickly consider and remedy problems” they experience.

What are Microsoft’s six principles of AI ethics?

Along with the White House, Microsoft has released six key principles to underpin responsible AI usage. It classifies them as “ethical” (1, 2, & 3) or “explainable” (4 & 5).

  • Fairness: Systems must be non-discriminatory
  • Transparency: Training and development insights should be available
  • Privacy and Security: The obligation to protect user data
  • Inclusiveness: AI should consider “all human races and experiences”
  • Accountability: Developers must be responsible for outcomes

The sixth principle – which straddles both sides of the “ethical” and “explainable” binary – is “Reliability and Safety”. Microsoft says that AI systems should be built to be resilient and resistant to manipulation.

The Principles for the Ethical Use of AI in the United Nations System

The United Nations has 10 principles for governing the ethical use of AI within its intergovernmental system. AI systems should:

  • Do no harm/protect and promote human rights
  • Have a defined purpose, necessity, and proportionality
  • Prioritize safety and security, with risks identified
  • Be built on fairness and non-discrimination
  • Be respectful of individuals’ right to privacy
  • Be sustainable (socially and environmentally)
  • Guarantee human oversight & not impinge on human autonomy
  • Be transparent and explainable
  • Be responsible and accountable to appropriate authorities
  • Be inclusive and participatory

As you can tell, all three frameworks cover similar ground and focus on fairness, non-discrimination, safety, and security.

But “explainability” is also an important principle in AI ethics frameworks. As the UN notes, technical explainability is crucial in AI ethics, as it demands that “the decisions made by an artificial intelligence system can be understood and traced by human beings.”

“Individuals should be fully informed when a decision that may or will affect their rights, fundamental freedoms, entitlements, services or benefits is informed by or made based on artificial intelligence algorithms and should have access to the reasons and logic behind such decisions,” the document explains.

The Belmont Report: a framework for ethical research

The Belmont Report, published in 1979, summarizes the ethical principles one should follow when conducting research on human subjects. These principles can be – and often are – deployed as a broad ethical framework for AI research. The core principles from the Belmont Report are:

Respect for Persons: People are autonomous agents who can act on goals, aims, and purposes, something that should be respected unless they cause harm to others. Those with diminished autonomy, through “immaturity” or “incapacitation”, should be afforded protection. We must acknowledge autonomy and protect those for whom it is diminished.

  • In the context of AI: Individual choice should be placed at the center of AI development. People should not be forced to participate in situations where artificial intelligence is being leveraged or used, even for perceived goods. If they do participate, the benefits and risks must be clearly stated.

Beneficence: Treating a person in an ethical manner involves not only doing no harm, respecting their choices, and protecting them if they cannot make choices for themselves, but also using opportunities to secure their well-being where possible. Wherever possible, maximize benefits and minimize risks and harms.

  • In the context of AI: Creating artificial intelligence systems that secure people’s well-being and are designed without bias or mechanisms that facilitate discrimination. Creating benefits may involve taking risks, which must be minimized as far as possible and weighed against the expected good outcomes.

Justice: There must be a clear system for distributing benefits and burdens fairly and equally, in every type of research. The Belmont report suggests that justice can be distributed by equal share, individual need, individual effort, societal contribution, and merit. These criteria will apply in different situations.

  • In the context of AI: The parties or groups gaining from the development and delivery of artificial intelligence systems must be considered carefully and justly.

The main areas where these principles are applied are, according to the report, informed consent, assessment of benefits and risks, and selection of human subjects.


Why AI Ethics Has to Sculpt AI Regulation

In a 2023 lecture delivered at Princeton University, the University of Oxford’s Professor John Tasioulas, Director of the Institute for Ethics in AI, argued that ethics is too often seen as something that stifles AI innovation and development.

In the lecture, he recalls a talk given by DeepMind CEO Demis Hassabis. After discussing the many benefits AI will have, Tasioulas says, Hassabis then tells the audience that he’ll move on to the ethical questions – as if the topic of how AI will benefit humanity isn’t an ethical question in and of itself.

Building on the idea that ethics is too often seen as a “bunch of restrictions”, Tasioulas also references a recent UK government white paper entitled “A Pro-Innovation Approach to AI Regulation”, within which the regulatory focus is, as the name suggests, “innovation”.

“Economic growth” and “innovation” are not intrinsic ethical values. They can lead to human flourishing in some contexts, but this isn’t a necessary feature of either concept. We can’t sideline ethics and build our regulation around them instead.

Tasioulas also says that tech companies have been very successful in “co-opting the word ‘ethics’ to mean a type of ‘legally non-binding form of self-regulation’” – but in reality, ethics has to be at the core of any regulation, legal, social, or otherwise. It’s part of the human experience, at every turn.

You can’t create regulation if you haven’t already decided what matters or is important to human flourishing. The related choices you make off the back of that decision are the very essence of ethics. You cannot divorce the benefits of AI from related ethical questions, nor base your regulation on morally contingent values like “economic growth”.

You have to know the type of society you want to build – and the standards you want to set – before you pick up the tools you’re going to use to build it.

Why Does AI Ethics Matter? 

Building on the idea that AI ethics should be the bedrock of our regulation, AI ethics matters because, without ethical frameworks to govern AI research, development, and use, we risk infringing on rights we generally agree should be guaranteed to all human beings.

For example, if we don’t develop ethical principles concerning privacy and data protection and bake them into all the AI tools we develop, we risk violating everyone’s privacy rights when they’re released to the public. The more popular or useful the technology, the more damaging it could be. 

On an individual business level, AI ethics remains important. Failing to properly consider ethical concerns surrounding AI systems your staff, customers, or clients are using can lead to products having to be pulled from the market, reputational damage, and perhaps even legal cases.

AI ethics matters to the extent that AI matters – and we’re seeing it have a profound impact on all sorts of industries already.

If we want AI to be beneficial while promoting fairness and human dignity, wherever it is applied, ethics need to be at the forefront of discussion. 

General use AI tools are very much in their infancy, and for a lot of people, the need for AI ethical frameworks may seem like a problem for tomorrow. But these sorts of tools are only going to get more powerful and more capable – and demand more ethical consideration. Businesses are already using them, and if they continue without proper ethical rules in place, adverse effects will soon arise.

What Issues Does AI Ethics Face?

In this section, we cover some of the key issues faced in AI ethics:

AI’s impact on jobs

A recent Tech.co survey found that 47% of business leaders are considering AI over new hires, and artificial intelligence has already been linked to a “small but growing” number of layoffs in the US.

Not all jobs are equally at risk, with some roles more likely to be replaced by AI than others. A Goldman Sachs report recently predicted that ChatGPT could impact 300 million jobs, and although this is speculative, generative AI has already been described as a major part of the fourth industrial revolution.

That same report also said that AI has the capacity to actually create more jobs than it displaces, but if it does cause a major shift in employment patterns, what is owed – if anything – to those who lose out?

Do companies have an obligation to spend money and devote resources to reskilling or upskilling their workers so that they aren’t left behind by economic changes?

Non-discrimination principles will have to be tightly enforced in the development of any AI tool used in hiring processes, and if AI is consistently being used for more and more high-stakes business tasks that put jobs, careers and lives at risk, ethical considerations will continue to arise in droves.

AI bias and discrimination

Broadly speaking, AI tools operate by recognizing patterns in huge datasets and then using those patterns to generate responses, complete tasks, or fulfill other functions. This has led to a huge number of cases of AI systems showing bias and discriminating against different groups of people.

By far the easiest example to explain this is facial recognition systems, which have a long history of discriminating against people with darker skin tones. If you build a facial recognition system and exclusively use images of white people to train it, there’s every chance it won’t be equally capable of recognizing faces out in the real world.

In this way, if the documents, images and other information used to train a given AI model do not accurately represent the people that it’s supposed to serve, then there’s every chance that it could end up discriminating against specific demographics.
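In practice, one common way to surface this kind of skew is to break a model’s accuracy down by demographic group before deployment. Here’s a minimal sketch of such an audit – the dummy model, sample format, and group labels are all hypothetical stand-ins:

    # Per-group accuracy: a large gap between groups is exactly the
    # kind of disparity described above.
    from collections import defaultdict

    def accuracy_by_group(model, samples):
        # samples: iterable of (input, true_label, demographic_group)
        hits, totals = defaultdict(int), defaultdict(int)
        for x, label, group in samples:
            totals[group] += 1
            if model.predict(x) == label:
                hits[group] += 1
        return {g: hits[g] / totals[g] for g in totals}

    class DummyModel:
        # stand-in for a real classifier, purely for demonstration
        def predict(self, x):
            return x > 0

    demo = [(1, True, "group_a"), (-1, False, "group_a"), (1, False, "group_b")]
    print(accuracy_by_group(DummyModel(), demo))  # {'group_a': 1.0, 'group_b': 0.0}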

Unfortunately, facial recognition systems are not the only place where artificial intelligence has been applied with discriminatory outcomes.

Amazon scrapped an AI tool used in its hiring processes in 2018 after it showed a heavy bias against women applying for software development and technical roles.

Multiple studies have shown that predictive policing algorithms used in the United States to allocate police resources are racially biased, because their training sets consist of data points extracted from systematically racist policing practices, sculpted by unlawful and discriminatory policy. Unless modified, AI will continue to reflect the prejudice and disparities that persecuted groups already experience.

There have been problems with AI bias in the context of predicting health outcomes, too – the Framingham Heart Study cardiovascular risk score, for instance, was very accurate for Caucasian patients but performed poorly for African-American patients, Harvard notes.

An interesting recent case of AI bias found that an artificial intelligence tool used in social media content moderation – designed to pick up “raciness” in photos – was much more likely to ascribe this property to pictures of women than to pictures of men.

AI and responsibility

Envisage a world where fully autonomous self-driving cars are developed and used by everyone. Statistically, they’re much, much safer than human-driven vehicles, crashing less and causing fewer deaths and injuries. This would be a self-evident net good for society.

However, when two human-driven cars are involved in a collision, collecting witness reports and reviewing CCTV footage often clarifies who the culprit is. Even when it doesn’t, the culprit is going to be one of the two individuals. The case can be investigated, a verdict reached, justice delivered, and the case closed.

If someone is killed or injured by an AI-powered system, on the other hand, it’s not immediately obvious who is ultimately liable.

Is the person who designed the algorithm powering the car responsible, or can the algorithm itself be held accountable? Is it the individual being transported by the autonomous vehicle, for not being on watch? Is it the government, for allowing these vehicles onto the road? Or, is it the company that built the car and integrated the AI technology – and if so, would it be the engineering department, the CEO, or the majority shareholder?

If we decide it’s the AI system/algorithm, how do we hold it liable? Will victims’ families feel like justice is served if the AI is simply shut down, or just improved? It would be difficult to expect the bereaved to accept that AI is a force for good, that they’re just unfortunate, and that no one will be held responsible for their loved one’s death.

We’re still some way off universal or even widespread autonomous transport – McKinsey predicts just 17% of new passenger cars will have some (Level 3 or above) autonomous driving capabilities by 2035. Fully autonomous cars that require no driver oversight are still quite far away, let alone a completely autonomous private transport system.

When you have non-human actors (i.e. artificial intelligence) carrying out jobs and consequential tasks devoid of human intention, it’s hard to map traditional understandings of responsibility, liability, accountability, blame, and punishment onto them.

Along with transport, the problem of responsibility will also intimately impact healthcare organizations using AI during diagnoses.

AI and privacy

Privacy campaign group Privacy International highlights a number of privacy issues that have arisen due to the development of artificial intelligence.

One is re-identification. “Personal data is routinely (pseudo-) anonymized within datasets, AI can be employed to de-anonymize this data,” the group says.

Another issue is that, even without AI, people already struggle to fully fathom the extent to which data about their lives is collected through a variety of different devices.

With the rise of artificial intelligence, this mass collection of data is only going to get worse. The more integrated AI becomes with our existing technology, the more data it’s going to be able to collect, under the guise of better function.

Secretly gathered data aside, the volume of data that users are freely inputting into AI chatbots is a concern in itself. One recent study suggests that around 11% of the data workers paste into ChatGPT is confidential – and there’s very little public information about precisely how it is all being stored.

As general use AI tools develop, we’re likely to encounter even more privacy-related AI issues. Right now, ChatGPT won’t let you ask a question about an individual. But if general use AI tools continue to gain access to increasingly large sets of live data from the internet, they could be used for a whole host of invasive actions that ruin people’s lives.

This may happen sooner than we think, too – Google recently updated its privacy policy, reserving the right to scrape anything you post on the internet to train its AI tools, along with whatever you input into Bard.

AI and intellectual property

This is a relatively lower-stakes ethical issue compared to some of the others discussed, but one worth considering nonetheless. Often, there is little oversight over the huge sets of data that are used to train AI tools – especially those trained on information freely available on the internet.

ChatGPT has already started a huge debate about copyright. OpenAI did not ask permission to use anyone’s work to train the family of LLMs that power it.

Legal battles have already started. Comedian Sarah Silverman is reportedly suing OpenAI – as well as Meta – arguing that her copyright was infringed during the training of their AI systems.

As this is a novel type of case, there’s little legal precedent – but legal experts expect OpenAI to argue that using her work constitutes “fair use”.

There may also be an argument that ChatGPT isn’t “copying” or plagiarizing – rather, it’s “learning”. In the same way that Silverman wouldn’t win a case against an amateur comedian who simply watched her shows and then improved their own comedy as a result, arguably, she may struggle with this one too.

Managing the environmental impact of AI

Another facet of AI ethics that is currently on the peripheries of the discussion is the environmental impact of artificial intelligence systems.

Much like bitcoin mining, training an artificial intelligence model requires a vast amount of computational power, and this in turn requires a massive amount of energy.

Building an AI tool like ChatGPT – never mind maintaining it – is so resource-intensive that only big tech companies, and the startups they’re willing to bankroll, have had the ability to do so.

Data centers, which are required to store the information needed to create large language models (as well as other large tech projects and services), require huge amounts of electricity to run. They are projected to consume up to 4% of the world’s electricity by 2030.

According to a University of Massachusetts study from several years ago, building a single AI language model “can emit more than 626,000 pounds of carbon dioxide equivalent” – which is nearly five times the lifetime emissions of a US car.

However, Rachana Vishwanathula, a technical architect at IBM, estimated in May 2023 that the carbon footprint of simply “running and maintaining” ChatGPT is roughly 6,782.4 tonnes – which, the EPA says, is equivalent to the greenhouse gas emissions produced by 1,369 gasoline-powered cars over a year.
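Those two numbers are easy to sanity-check against each other. Assuming the EPA’s commonly cited figure of roughly 4.6 metric tons of CO2 per passenger vehicle per year (our assumption – the estimate itself doesn’t state one), the arithmetic lands in the same ballpark:

    # Back-of-envelope check of the car-equivalence claim.
    CHATGPT_TONNES = 6782.4  # estimated annual footprint (Vishwanathula)
    TONNES_PER_CAR = 4.6     # assumed EPA per-vehicle annual figure

    print(round(CHATGPT_TONNES / TONNES_PER_CAR))  # -> 1474 cars
    print(round(CHATGPT_TONNES / 1369, 2))         # -> 4.95 tonnes per quoted car

The small gap simply suggests the quoted equivalence used a slightly higher per-car figure.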

As these language models get more complex, they’re going to require more computing power. Is it moral to continue to develop a general intelligence if the computing power required will continually pollute the environment – even if it has other benefits?

Will AI become dangerously intelligent?

This ethical worry was brought to the surface in 2023 by Elon Musk, who launched an AI startup to avoid a “terminator future” through a “maximally curious”, “pro-humanity” artificial intelligence system.

This sort of idea – often referred to as “artificial general intelligence” (AGI) – has captured the imaginations of many dystopian sci-fi writers over the past few decades, as has the idea of technological singularity.

A lot of tech experts think we’re just five or six years away from some sort of system that could be defined as “AGI”. Other experts say there’s a 50/50 chance we’ll reach this milestone by 2050.

John Tasioulas questions whether this view of how AI may develop is linked to the distancing of ethics from the center of AI development and the pervasiveness of technological determinism.

The terrifying idea of some sort of super-being that is initially designed to fulfill a purpose, but reasons that it would be easiest to fulfill by simply wiping humanity off the face of the earth, is in part sculpted by how we think about AI: endlessly intelligent, but oddly emotionless, and incapable of human ethical understanding.

The more inclined we are to put ethics at the center of our AI development, the more likely that an eventual artificial general intelligence will recognize, perhaps to a greater extent than many current world leaders, what is deeply wrong with the destruction of human life.

But questions still abound. If it’s a question of moral programming, who gets to decide on the moral code, and what sort of principles should it include? How will it deal with the moral dilemmas that have generated thousands of years of human discussion, with still no resolution? What if we program an AI to be moral, but it changes its mind? These questions will have to be considered.

Bing’s Alter-Ego, the ‘Waluigi Effect’ and Programming Morality

Back in February, the New York Times’s Kevin Roose had a rather disturbing conversation while testing Bing’s new search engine-integrated chatbot. After shifting his prompts from conventional questions to more personal ones, Roose found that a new personality emerged. It referred to itself as “Sydney”.

Sydney is an internal code name at Microsoft for a chatbot the company was previously testing, the company’s Director of Communications told The Verge in February.

Among other things, during Roose’s test, Sydney claimed it could “breach any system”, that it would be “happier as a human” and – perhaps most eerily – that it could destroy whatever it wanted to.

Another example of this sort of rogue behavior occurred back in 2022, when an AI tasked with searching for new drugs for rare and communicable diseases instead suggested tens of thousands of known chemical weapons, as well as some “new, potentially toxic substances”, Scientific American says.

This links to a phenomenon that has been observed to occur during the training of large language models dubbed the “Waluigi effect”, named after the chaos-causing Super Mario character – the inversion of the protagonist Luigi. Put simply, if you train an LLM to act in a certain way, command a certain persona or follow a certain set of rules, then this actually makes it more likely to “go rogue” and invert that persona.

Cleo Nardo – who coined the videogame-inspired term – sets out the Waluigi effect like this in LessWrong:

“After you train an LLM to satisfy a desirable property P, then it’s easier to elicit the chatbot into satisfying the exact opposite of property P.”

Nardo gives three explanations for why the Waluigi effect happens.

  1. Rules normally arise in contexts in which they aren’t adhered to.
  2. When you spend many ‘bits-of-optimization’ summoning a character, it doesn’t take many additional bits to specify its direct opposite.
  3. There is a common motif of protagonist vs antagonist in stories.

Expanding on the first point, Nardo says that GPT-4 is trained on text samples such as forums and legislative documents, which have taught it that often, “a particular rule is colocated with examples of behavior violating that rule, and then generalizes that colocation pattern to unseen rules.”

Nardo uses this example: imagine you discover that a state government has banned motorbike gangs. This will incline the average observer to think that motorbike gangs exist in that state – or else, why would the law have been passed? The existence of motorbike gangs is, oddly, consistent with the rule that bans their presence.

Although the author provides a much more technical and lucid explanation, the broad concept underpinning explanation two is that the relationship between a specific property (e.g. “being polite”) and its direct opposite (e.g. “being rude”) is more rudimentary than the relationship between a property (e.g. “being polite”) and some other, non-opposing property (e.g. “being insincere”). In other words, summoning a Waluigi is easier if you already have a Luigi.

Nardo claims on the third point that, as GPT-4 is trained on almost every book ever written, and as fictional stories almost always contain protagonists and antagonists, demanding that an LLM simulate characteristics of a protagonist makes an antagonist a “natural and predictable continuation.” Put another way, the existence of the protagonist archetype makes it easier for an LLM to understand what it means to be an antagonist and intimately links them together.

The purported existence of this effect or rule poses a number of difficult questions for AI ethics, but also illustrates its unquestionable importance to AI development. It alludes, quite emphatically, to the huge range of overlapping ethical and computational considerations we have to contend with.

Simple AI systems with simple rules might be easy to constrain or limit, but two things are already happening in the world of AI. Firstly, we seem to be running into (relatively) small-scale versions of the Waluigi effect and malignant AI in relatively primitive chatbots. Secondly, many of us are already imagining a future where we’re asking AI to do complex tasks that will require high-level, unrestrained thinking.

Examples of this phenomenon are particularly scary to think about in the context of the AI arms race currently taking place between big tech companies. Google was criticized for releasing Bard too early, and a number of tech leaders have signaled their collective desire to pause AI development. The general feeling among many is that things are developing quickly, rather than at a manageable pace.

Perhaps the best way around this problem is to develop “pro-human” AI – as Elon Musk puts it – or “Moral AI”. But this leads to a litany of other moral questions, including what principles we’d use to program such a system. One solution is that we simply create morally inquisitive AI systems – and hope that they work out, through reasoning, that humanity is worth preserving. But if you program it with specific moral principles, then how do you decide which ones to include?

AI and Sentience: Can Machines Have Feelings?

Another question for AI ethics is whether we’ll ever have to consider the machines themselves – the “intelligence” – as an agent worthy of moral consideration. If we’re debating how to create systems that hold humanity up for appropriate moral consideration, might we have to return the favor?

You may recall the Google employee who was fired after claiming LaMDA – the language model that was initially powering Bard – was in fact sentient. If this were in fact true, would it be moral to continuously expect it to answer millions of questions?

At the moment, it’s generally accepted that ChatGPT, Bard and Co. are far from being sentient. But the question of whether a man-made machine will ever cross the consciousness line and demand moral consideration is fascinatingly open.

Google claims that artificial general intelligence – a hypothetical machine able to understand the world as capably as a human and carry out tasks with the same level of understanding and ability – is just years away.

Would it be moral to force an artificial general intelligence with the emotional capabilities of a human, but not the same biological makeup, to perform complex task after complex task? Would they be given a say in their own destiny? As AI systems become more intelligent, this question will become more pressing.

AI Business Ethics and Using AI at Work

Employees who use AI tools like ChatGPT on a daily basis have a wide range of related ethical issues to contend with.

Whether ChatGPT should be used to write reports or respond to colleagues – and whether employees should have to declare the tasks they’re using AI to complete – are just two examples of questions that require near-immediate answers. Is this sort of use case disingenuous, lazy, or no different from utilizing any other workplace tool to save time? Should it be allowed for some interactions, but not for others?

Businesses that create written content and imagery will also have to contend with the prospect of whether using AI matches their company’s values, and how to present this to their audience. Whether in-house AI training courses should be provided, and the content of such courses, also needs consideration.

What’s more, as we’ve covered, there is a whole range of privacy concerns relating to AI, and many of these affect businesses. The kinds of data employees are inputting into third-party AI tools is another issue that’s already caused companies like Samsung problems – such a problem, in fact, that some companies have instituted blanket bans. Is it too early to put our trust in companies like OpenAI?

Bias and discrimination concerns, of course, should also temper its usage during hiring processes, regardless of the sector, while setting internal standards and rules is another separate, important conversation altogether. If you’re using AI at work – or working out how you can make money from ChatGPT – it’s essential that you convene the decision-makers in your business and create clear AI guidelines for usage together.

Failing to set rules dictating how and when employees can use AI – and leaving them to experiment with the ecosystem of AI tools now freely available online – could lead to a myriad of negative consequences, from security issues to reputational damage. Maintaining an open dialogue with employees about the tech they’re using every day has never been more crucial.

There’s a whole world of other moral quandaries, questions and research well beyond the scope of this article. But without AI ethics at the heart of our considerations, regulations, and development of artificial intelligence systems, we have no hope of answering them – and that’s why it’s so important.


Business Leaders Can’t Agree on Who’s to Blame for AI Mistakes

A Tech.co survey has found that not everyone agrees on when it's okay to use AI at work, and who's to blame for its mistakes.

With ChatGPT, Bard, and other AI tools now a common fixture of many employees’ day-to-day lives, it’s only natural that ethical questions concerning their application and usage in workplaces are starting to arise.

A Tech.co survey conducted in July 2023 found that business leaders are divided on who should take responsibility for mistakes made by AI in the workplace. Almost a third think that employees completing important tasks with AI tools are solely to blame for errors or mistakes made, while a marginally higher percentage reckoned that the employee and their manager are both partly responsible and should share the blame.

The same survey also revealed that 82% of business leaders think it’s okay to use AI tools like ChatGPT to write responses to colleagues.

Tech.co’s Survey of Business Leaders and Decision Makers

Recently, Tech.co asked a group of 86 business leaders and decision-makers several ethical questions about the use of AI tools like ChatGPT in their place of work.

Using AI in day-to-day business is a relatively new phenomenon for a lot of companies, so we thought it would be interesting to find out whether decision-makers are aligned on what they consider to be proper, ethical practices in this context.

Those we spoke to were not required to respond to every question to be included in our survey, so we’ve highlighted how many people responded to each question in the sections below.


Businesses Divided on Who Should Take Responsibility for AI Mistakes

69 business leaders and decision-makers responded to our question: “If a manager gives permission to an employee to use an AI tool such as ChatGPT to complete an important task, and the AI tool makes a consequential error, who is responsible – the AI tool, the employee, or the manager?”

Almost a third of respondents (31.9%) lay the blame solely at the feet of the employee or “user” operating the AI tool used to complete the important task.

Many referenced the fact that an AI tool is simply one of many tools the average employee now uses at work – and, therefore, the employee is responsible for ensuring it’s used appropriately.

A slightly higher proportion of respondents – 33.3% – say that the blame should be shared between the employee who used the AI tool and the manager responsible for them.

Just over a quarter (26.1%), on the other hand, believe that all three parties – the AI tool, the employee, and the manager – share some sort of responsibility for the mistake.

For some respondents, holding the AI tool accountable for failing to fulfill its purpose is important – others say it isn’t possible to hold a software application to account.

Only 5.8% of respondents say the manager is solely to blame, while 2.9% suggest that the employee and the AI tool were, in some regard, both partly responsible.

82% of Business Leaders Think It’s Okay to Use AI to Write Responses to Colleagues

68 business leaders and decision-makers responded to our question: “Do you think it is ethical to use an AI tool such as ChatGPT to write a response to a message from a colleague?”

The vast majority (82.4%) say that it is ethical to use AI to help write a response to employees. By way of contrast, only 8.8% report that this isn’t acceptable practice.

The same percentage (8.8%) argued that it really depends on the message. Longer, more personal messages – ones that require a human touch – were highlighted as instances where the use of ChatGPT wouldn’t be appropriate.

Out of the 52 business leaders who responded to our follow-up question about whether you should have to disclose your AI usage in such responses, the vast majority (80.8%) believe that you should, while almost a fifth (19.2%) say there’s no need to reveal it.

68% of Business Leaders Think Employees Shouldn’t Use AI Tools Without Permission

73 business leaders and decision-makers responded to our question: “Do you think it is ethical for an employee to use AI tools such as ChatGPT without their employer’s permission?”

68.5% think that employees shouldn’t be using AI tools like ChatGPT without express permission from an employer, manager, or supervisor. However, 12.3% of respondents to this question believe using AI tools without express permission is permissible.

The remaining 19.2% specified that it depends entirely on what the employee in question is planning to do. These respondents centered their context-dependent answers on the sensitivity of the data in use, the nature of the task at hand, and the existing policies of the company in question.

AI in the Workplace: Clear Guidelines Are Key

New ethical questions relating to AI’s usage in the workplace are being asked every day. The more inventive uses businesses find for ChatGPT and other AI tools, the more questions will arise about how to use them responsibly, ethically, and fairly.

If you have members of your team using AI tools like ChatGPT, providing clear AI guidelines on precisely how and when they can use them is the key to avoiding the negative consequences of when they are misapplied.

Who should be involved in these conversations is up to you – every business will do things slightly differently. But maintaining an open dialogue with employees about how using AI tools can benefit your business – and also cost it dearly – is the easiest way to ensure no one slips up.


Netflix Password Crackdown Worked as Subscribers Rose by 8%

The company said it added 5.9 million customers to its streaming subscription service during the second quarter.

Netflix reported a 3% rise in revenue last quarter, after cracking down on password sharing across the platform.

The market-leading streaming giant has become serious about stopping its subscribers from sharing their passwords with family, friends, and anyone else still logged into their account. The move, which was unpopular with its fan base, proved to be a financial success, according to figures released on Wednesday.

While the rest of Hollywood is in turmoil, Netflix has reason to roll out the red carpet. The company said it added 5.9 million customers to its streaming subscription service during the second quarter amid its crackdown on password sharing, and that it would roll out its new policy to the rest of its customers on Wednesday.

The rise in new customers seems to correlate with a rise in revenue. The company noted revenue of $8.19 billion in the earnings report, up 3% from $7.97 billion in the prior-year period.
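For the curious, the growth figure checks out against those revenue numbers:

    # Quarterly revenue growth implied by the figures above.
    q2_2023, q2_2022 = 8.19, 7.97  # revenue, $bn
    print(f"{(q2_2023 / q2_2022 - 1) * 100:.1f}%")  # -> 2.8%, i.e. the ~3% reported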

Netflix’s Growth Plan Continues to Pay Off

Netflix didn’t speculate over how many of the new accounts came from the password sharing crackdown. Co-CEO Greg Peters said Wednesday that the company won’t see the full effect of the password sharing policy for several quarters.

What’s more, the company said it was too early to report a breakdown of revenue from the ad-supported tier, as well as from the accounts that have come from the new password policy.


The ad-supported tier is a new “Basic with ads” subscription type rolled out in the US in late 2022. Costing $6.99 per month, the less expensive plan comes with commercials – a first for the streaming giant. Users are said to see roughly 4-5 minutes of ads per hour of viewing.

Netflix said it expects a rise in revenue in the second half of the year as it begins “to see the full benefits of paid sharing plus the steady growth in our ad-supported plan.”

Netflix said it now forecasts revenue of $8.5 billion, up 7% year over year, for the third quarter and expects revenue growth in the fourth quarter to “accelerate more substantially.” This positive outlook is attributed to more average paid memberships and revenue from advertisers.

What Is the New Netflix Subscription Sharing Policy?

Netflix is rolling out the new subscription sharing policy by location, so users should check the rules for the country in which they hold their Netflix subscription. In the US, the Netflix policy update stated that:

“A Netflix account is for use by one household. Everyone living in that household can use Netflix wherever they are — at home, on the go, on holiday — and take advantage of new features like Transfer Profile and Manage Access and Devices.”

There are three new controls and features to help users manage their account and its users:

  • Check who’s using your Netflix – Allows you to review which devices are signed into your account, sign those devices out, and change your password to stop them signing back in.
  • Transfer a profile – If you want to continue sharing with someone outside of your household, anyone on your account can transfer their profile to a new membership that they pay for.
  • Buy an extra member – For an additional cost, you can share your Netflix account with someone who doesn’t live with you.

Office Attendance 30% Down on Pre-Pandemic Levels

A recent study suggests that remote and hybrid working is here to stay, as office workers are slow to return to the office.

The 2020 Covid-19 pandemic and subsequent “lockdowns” altered the lives of individuals in a variety of ways, and some of those changes stuck. Office workers swapped the office for their bedrooms, as many city-dwellers ditched the neon lights in favor of quieter suburban living where rental dollars got them slightly more square meterage and the possibility of a garden.

Whilst some businesses economized by downsizing or giving up their office space to meet the growing demand for flexible remote working, other businesses decided to hold out for a return to the workplace. However, new data shows that office workers are not returning to the office at pre-2020 levels – and, paired with interest rate hikes, some commercial landlords are finding themselves in hot water.

According to survey data collected by McKinsey, as of fall 2022, office workers were going to the office just 3.5 days per week, on average, with the majority of survey respondents (56%) having a hybrid working schedule.
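That figure squares with the 30% drop in the headline if you assume a standard five-day office week before the pandemic – an assumption on our part, as the survey doesn’t state a baseline:

    # Attendance drop implied by the survey average.
    pre_pandemic_days = 5.0  # assumed baseline
    current_days = 3.5       # McKinsey survey average
    print(f"{(1 - current_days / pre_pandemic_days) * 100:.0f}% down")  # -> 30% down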

Are Inner-City Business Hubs Dead?

The McKinsey study found that office attendance varies among metropolitan areas. It said: “Employees in Beijing, for example, go to the office 3.9 days per week, on average, while those in London go just 3.1 days per week.

The other metropolitan areas we studied ranked between those extremes and were mostly clustered around 3.3 to 3.6 days in the office.”


The trend raises questions over the future of purpose-built business hubs such as Canary Wharf in London, where attendance has remained lower than expected. Canary Wharf’s management has announced a transformation project which will turn the district into a place to live, work, and play – and not just for bankers.

Perhaps the lesson to learn here is that neighborhoods should not be dominated by one type of real estate. The main reason respondents cited for preferring to work from home was saving time on their commute, so it makes sense to build and adapt neighborhood spaces for a mixture of uses. Buildings that combine residential housing, shared office space, retail, and leisure space all under one roof suffered less during the pandemic, according to McKinsey research.

How Businesses Should Navigate the Remote Work Trend

The way you tackle the return to office will depend largely on the size and location of your business, as well as how prepared you are for enabling remote work. McKinsey’s report found that smaller companies had higher office attendance than larger ones.

McKinsey stated: “One potential reason is that larger companies tend to have more resources and technology to support working from home. Another is that the sense of community may be stronger at smaller companies, prompting more employees to visit the office.”

Tips for Businesses to Adapt to the Remote Work Trend

  • Consider a hybrid work model. A hybrid work model can help businesses save on office space costs while also giving employees the flexibility they want. A hybrid and flexible work environment will help to attract the best talent to your business.
  • Establish a company culture of trust. The survey found that as many participants said they are more productive in the office as said they are more productive at home, suggesting productivity varies from person to person and from task to task. Focus on disseminating a culture of trust and autonomy, and build in management tactics to keep employees in the loop.
  • Invest in technology. Technology can help businesses stay connected with employees who are working remotely. It can also help businesses streamline their operations and improve their efficiency.
  • Locate your business in a strategic location. The location of your business is more important than ever. Businesses should consider factors such as the cost of living, the quality of life, and the availability of talent when choosing a location.