AI Sand Castle Trap
This post starts out as a rather boring talk about science, but bear with me through the first few lines; it gets into some crazy science soon enough.
We live in great times where science is making one breakthrough after another. This is especially true now, when we have AI at our disposal, which creates limitless opportunities.
Recently I was enchanted by the following studies, courtesy of Google Scholar:
Study of CNT@Fe3O4 effects on Aeromonas hydrophila and Yersinia ruckeri bacteria isolated from fish
Silver and Gold Nanoparticles for Antimicrobial Purposes against Multi-Drug Resistance Bacteria
The one thing these articles have in common is the use of "vegetative electron microscopy" during the research. Most likely you are not an expert on the latest in electron microscopy. Neither am I. Fortunately, I am a modern person, and when in doubt I ask my trusted companion ChatGPT to provide me with an answer.
I asked for a detailed explanation which after a long page of text ended with: “Vegetative electron microscopy is a powerful tool in multiple scientific fields, allowing researchers to explore the fine details of vegetative cells, their subcellular structures, and interactions with the environment. It plays a crucial role in advancing plant science, microbiology, mycology, biotechnology, and ecology, ultimately contributing to innovations in medicine, agriculture, and materials science.”
But before you stop reading and go to your next party where you casually drop the term "vegetative electron microscopy" while nibbling on vegetables from a party tray and wondering where the pigs in a blanket are, you should also check the next set of scientific papers.
What's so special about these two papers?
They, too, have in common the use of "vegetative electron microscopy", but on top of that, the first paper was retracted!
The retraction note for Photodegradation of ibuprofen laden-wastewater using sea-mud catalyst/H2O2 system: evaluation of sonication modes and energy consumption reads: “The Publisher has retracted this article in agreement with the Editor-in-Chief. An investigation by the publisher found a number of articles, including this one, with a number of concerns, including but not limited to compromised peer review process, inappropriate or irrelevant references, containing nonstandard phrases or not being in scope of the journal. Based on the investigation’s findings the publisher, in consultation with the Editor-in-Chief therefore no longer has confidence in the results and conclusions of this article.”
The second paper had to be corrected, with this note added: “In addition, two phrases in the original publication were not appropriate. The authors would like to change ‘vegetative electron microscopy’ to ‘scanning electron microscopy’ and ‘extracellular cells’ to ‘extracellular membrane’.”
And this is where we have a problem, Houston.
A quick search on Google for "vegetative electron microscopy" reveals a different set of articles:
As a nonsense phrase of shady provenance makes the rounds, Elsevier defends its use
Was nonsense ‘vegetative electron microscopy’ phrase a Farsi typo?
Pick your explanation - either a mistaken translation from Farsi or a badly scanned PDF document. In either case, this information was used to train AI - ChatGPT to be specific - and AI is now happily not only repeating this but is also providing a definition for it:
“‘Vegetative Electron Microscopy’ refers to the application of electron microscopy (EM) techniques to study vegetative cells, tissues, or organisms at the ultrastructural level. The term "vegetative" in this context typically refers to actively growing, non-reproductive cells in plants, fungi, bacteria, or other microorganisms. Unlike spores or reproductive structures, vegetative cells are metabolically active and often the primary agents of growth, photosynthesis, and nutrient assimilation.”
What's worse, the phrase is now used in scientific papers, and these papers are in turn cited by others. All of them are available on the Internet, where they were harvested by every maker of AI, so every model now contains their contents. And that's the biggest problem of all. When we talk about AI ethics, the problem of the human in the loop is often discussed, frequently in relation to weapons killing people at their own discretion.
But with "vegetative electron microscopy", we have an example where people removed themselves from the loop, with far-reaching consequences. What in the past would have been limited damage, affecting a handful of people studying 'ibuprofen laden-wastewater', is now baked into technology presented as 'Artificial Intelligence' and promoted for everyone to use.
Unlike Google search, where I found all of the above information, when I ask ChatGPT I get an authoritative answer that sounds good but is completely wrong. And despite the fact that one of the articles was retracted and the other corrected, that won't change the AI model. This is just one tiny example of the danger of AI: it removes us from the source of the information. It seems almost redundant to keep repeating that we don't know how to make models forget incorrect information. It would appear that no AI company cares about that detail.
ChatGPT is a Large Language Model. Nothing less, nothing more. It is not a knowledge model. Using it for learning, you are building a sand castle.
Despite the promise of AI making our lives easier, new patterns are emerging. With AI: a) it will take more energy to learn new things accurately, and b) it will take less energy to become stupid.