Experts researching advancements in artificial intelligence are now warning that AI models could create the next ‘enhanced pathogens capable of causing major epidemics or even pandemics.’
The declaration was made in a paper published in the journal Science by co-authors from Johns Hopkins University, Stanford University and Fordham University, who say that AI models are being ‘trained on or [are] capable of meaningfully manipulating substantial quantities of biological data, from speeding up drug and vaccine design to improving crop yields.’
‘But as with any powerful new technology, such biological models will also pose considerable risks. Because of their general-purpose nature, the same biological model able to design a benign viral vector to deliver gene therapy could be used to design a more pathogenic virus capable of evading vaccine-induced immunity,’ researchers wrote in their abstract.
‘Voluntary commitments among developers to evaluate biological models’ potential dangerous capabilities are meaningful and important but cannot stand alone,’ the paper continued. ‘We propose that national governments, including the United States, pass legislation and set mandatory rules that will prevent advanced biological models from substantially contributing to large-scale dangers, such as the creation of novel or enhanced pathogens capable of causing major epidemics or even pandemics.’
Although today’s AI models likely do not ‘substantially contribute’ to biological risks, the ‘essential ingredients to create highly concerning advanced biological models may already exist or soon will,’ Time quoted the paper’s authors as saying.
They reportedly recommend that governments create a ‘battery of tests’ that biological AI models must undergo before being released to the public – and from there, officials can determine how restricted access to the models should be.
‘We need to plan now,’ Anita Cicero, the deputy director at the Johns Hopkins Center for Health Security and one of the co-authors of the paper, said according to Time. ‘Some structured government oversight and requirements will be necessary in order to reduce risks of especially powerful tools in the future.’
Cicero reportedly added that biological risks from AI models could become a reality ‘within the next 20 years, and maybe even much less’ without the proper oversight.
‘If the question is can AI be used to engineer pandemics, 100%. And as far as how far down the road we should be concerned about it, I think that AI is advancing at a rate that most people are not prepared for,’ Paul Powers, an AI expert and CEO of Physna – a company that helps computers analyze 3D models and geometric objects – told Fox News Digital.
‘The thing is that it’s not just governments and large businesses that have access to these increasingly powerful capabilities, it’s individuals and small businesses as well,’ he continued, but noted that ‘the problem with regulation here is that one, as much as everyone wants a global set of rules for this, the reality is that it is enforced nationally. Secondly is that regulation doesn’t move at the speed of AI. Regulation can’t even keep up with technology as it has been, with traditional speed.’
‘What they are proposing that you do is have the government approve certain AI training models and certain AI applications. But the reality is how do you police that?’ Powers said.
‘There are certain nucleic acids that are essentially the building blocks for any potential real pathogen or virus,’ Powers added, saying, ‘I would start there… I would start on really trying to crack down on who can access the building blocks first.’