A group of researchers in Australia has been awarded just over $400,000 (USD) in government funding to study the possibilities of merging AI with human brain cells – a research project likely to bring a number of AI-related ethical and existential questions to the fore.
The team – working in collaboration with Monash University and Melbourne-based startup Cortical Labs – is the same group behind the DishBrain project, which involved teaching a group of human brain cells how to play the retro video game “Pong”.
While ChatGPT, Bard and other AI tools powered by large language models continue to be used in increasingly inventive ways by businesses, research like this may open up the door to artificial intelligence systems that learn in entirely different – and more human-like – ways.
Research Team Behind DishBrain Wins Funding
Monash University released a statement last week confirming that it had been awarded a grant worth hundreds of thousands of dollars to continue its research into “growing human brain cells onto silicon chips, with new continual learning capabilities to transform machine learning”.
The funding has come from the National Intelligence and Security Discovery Research Grants Program, which is run by the Australian Department of Defence.
Associate Professor Adeel Razi, who is leading the group, says that the project “merges the fields of artificial intelligence and synthetic biology to create programmable biological computing platforms”.
He predicts that the capabilities of this new technology “may eventually surpass the performance of existing, purely silicon-based hardware.”
The sort of AI-powered technology many of us envisage being widely used in the near future – from autonomous vehicles to “intelligent handheld and wearable devices” – “will require a new type of machine intelligence that is able to learn throughout its lifetime,” Associate Professor Razi explained.
Many AI systems are currently prone to “catastrophic forgetting” – something the learning capabilities of biological neural networks can perhaps alleviate.
Ethical Implications and Existential Fears Around AI R&D
ChatGPT’s November 2022 launch – and the wave of AI chatbots that has sprung up since then – has seen governments across the world scrambling to implement legislation and kickstart initiatives that promote the responsible development of AI systems.
It’s not just governments that are spooked, though – a letter signed in March by over a thousand tech leaders and company representatives called for a pause on the development of any AI technology more powerful than GPT-4.
In the context of AI research, the highest-risk areas – such as healthcare – also often hold the biggest rewards. So, many governments are trying to strike a delicate balance: promoting safety and transparency without stifling innovation.
There are also wider, more fundamental ethical considerations surrounding hyper-intelligent systems – such as whether it would be ethical or responsible to create a machine that we may one day be unable to control – that, eerily, feel more pressing by the day.
When it comes to merging human biological matter with silicon chips, we’re unlikely to see ethical quandaries jumping out of the petri dish tomorrow.
But this research does point to a specific frontier in the development of AI – the merging of human bodies with machines – that will be fraught with ethical dilemmas. And, like many areas of AI research, although it could turn out to be very high-risk, it may also be very high-reward.