Generative AI’s rapid rise over recent years has sparked panic about misinformation, job losses and more. Now experts say science itself may be under threat.
Due to recent developments in AI, nearly every element of a scientific paper can now be artificially produced quickly and easily. And AI-generated images – from diagrams to microscopy imagery – are increasingly difficult to identify. As a result, specialists are concerned about “a torrent of faked science”, said Nature.
“At a time when trust in scientific expertise and the media are both declining (the latter more precipitously than the former), rolling out an AI experiment with a lack of transparency is, at best, ignorant, and, at worst, dangerous,” said Jackson Ryan in The Guardian.
Some scientists stand to benefit from integrating AI-generated diagrams and images into their work. Environmental scientists will be able to generate “what-if” imagery showing the projected impacts of climate change, and others can more easily explain complex concepts and “intricate ecological relationships”, said a paper published in Ecology Letters.
But without additional safeguards, the use of AI to deliver scientific information makes for “a worrying development with potentially catastrophic consequences”, said Ryan.
Images ‘almost impossible to distinguish’
This isn’t a hypothetical problem – AI-generated images have already been identified in several scientific journals. In February, a peer-reviewed journal retracted and apologised for an article it published that depicted “nonsensical AI-generated images including a gigantic rat penis”, said Vice.
While the rat image was an obvious fake, the trouble with AI-generated images is that they are often incredibly difficult to pick out. “Pinpointing AI-produced images poses a huge challenge: they are often almost impossible to distinguish from real ones, at least with the naked eye,” said Nature.
As AI tools get more sophisticated, identifying faux images only gets harder. Most of the falsified images being flagged now were published years ago; experts say this suggests that newer AI-generated images are simply too polished to catch, not that fewer people are using AI to create them. Plus, the “telltale signs that sleuths can spot” in Photoshopped or otherwise doctored images tend not to exist in AI creations.
“I see tonnes of papers where I think, these Western blots do not look real – but there’s no smoking gun,” Elisabeth Bik, an image-forensics specialist, told Nature. “You can only say they just look weird, and that of course isn’t enough evidence to write to an editor.”
Some academic journals allow AI-generated text in some contexts, but few have guidelines on imagery. Experts say the rapid evolution of AI and the lack of regulation are cause for concern. If people, including scientists, cannot discern whether information is human- or AI-generated, the implications for health, climate research and science as a whole could be sweeping.
“The people that work in my field – image integrity and publication ethics – are getting increasingly worried about the possibilities that it offers,” Jana Christopher, an image-integrity analyst, told Nature.
An AI-detection ‘arms race’
Many publishers are already using technology designed to detect AI-generated images, and the software is steadily improving. Something of an “arms race” is emerging, with experts hurrying to “develop AI tools that can assist in rapidly detecting deceptive, AI-generated elements of papers”, said Nature.
Proofig AI – a tool already used by some publishers – released its “AI Image Fabrication identifier tool” in July of this year. Powered by AI itself, the tool “will alert users to microscopy images that might be AI-generated and warrant further investigation when scanning manuscripts”, said Technology Networks. The technology is trained on AI-generated images, so it is designed to “recognise subtle differences that may not be apparent to the human eye”.
Academics, scientists and specialists are certainly concerned about AI’s lasting impacts on science. But not all hope is lost.
“I have full confidence that technology will improve to the point that it can detect the stuff that’s getting done today – because at some point, it will be viewed as relatively crude,” Kevin Patrick, a “scientific-image sleuth” who has published images demonstrating just how easy it is to generate realistic scientific diagrams, told Nature.
“Fraudsters shouldn’t sleep well at night,” said Patrick. “They could fool today’s process, but I don’t think they’ll be able to fool the process forever.”