Should artificial intelligence be used in science publishing? – PRI

Advances in automation technology mean that robots and artificial intelligence programs are capable of performing an ever-greater share of our work, including collecting and analyzing data. For many people, automated colleagues are still just office chatter, not reality, but the technology is already disrupting industries once thought to be just for humans. Case in point: science publishing.

Increasingly, publishers are experimenting with using artificial intelligence in the peer review process for scientific papers. In a recent op-ed for Wired, one editor described how computer programs can handle tasks like suggesting reviewers for a paper, checking an author's conflicts of interest, and sending decision letters.

In 2014 alone, an estimated 2.5 million scientific articles were published in about 28,000 journals (and that's just in English). Given the glut in the industry, artificial intelligence could be a valuable asset to publishers: The burgeoning technology can already provide tough checks for plagiarism and fraudulent data, and address the problem of reviewer bias. But ultimately, do we want artificial intelligence evaluating which new research does and doesn't make the cut for publication?

The stakes are high: Adam Marcus, co-founder of the blog Retraction Watch, has two words for why peer review is so important to science: "Fake news."

"Peer review is science's version of a filter for fake news," he says. "It's the way that journals try to weed out studies that might not be methodologically sound, or whose results could be explained by hypotheses other than those the researchers advanced."

The way Marcus sees it, artificial intelligence can't necessarily do anything better than humans can; it can just do it faster and at greater volume. He cites one system, called statcheck, which was developed by researchers to quickly detect errors in reported statistical values.

"They can do, according to the researchers, in a nanosecond what a person might take 10 minutes to do," he says. So obviously, that could be very important for analyzing vast numbers of papers. But as it trawls through statistics, the statcheck system can also turn up a lot of noise, or false positives, Marcus adds.
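The core idea behind a statcheck-style tool is simple: recompute a p-value from the reported test statistic and compare it with the p-value the authors reported. The actual statcheck software parses t, F, chi-square, and other tests out of manuscript text; the sketch below is a deliberate simplification assuming a two-tailed z test, and the function names are illustrative, not taken from statcheck itself.

```python
import math

def two_tailed_p_from_z(z: float) -> float:
    """Two-tailed p-value for a z statistic under the standard normal."""
    return math.erfc(abs(z) / math.sqrt(2.0))

def p_value_is_consistent(z: float, reported_p: float, tol: float = 0.005) -> bool:
    """Compare a reported p-value with one recomputed from the test
    statistic. A large gap marks a candidate error -- not proof of one."""
    return abs(two_tailed_p_from_z(z) - reported_p) <= tol

# "z = 1.96, p = .05" checks out, while "z = 1.96, p = .01" would be
# flagged for a human editor to examine.
```

A mismatch can be exactly the kind of noise Marcus describes — a one-tailed test or heavy rounding of the reported statistic, for instance — which is why a flagged result still needs a human reader.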

Another area where artificial intelligence could do a lot of good, Marcus says, is in combating plagiarism. "Many publishers, in fact every reputable publisher, should be using right now plagiarism detection software to analyze manuscripts that get submitted." At their most effective, these systems identify passages in a paper that closely resemble previously published text.
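At its simplest, that kind of screening is a similarity comparison between manuscript passages and a corpus of published text. Commercial services operate at vastly larger scale with smarter matching; the minimal sketch below, using only Python's standard library and hypothetical function names, shows the basic shape of the check.

```python
import difflib

def similarity(a: str, b: str) -> float:
    """Rough textual similarity between two passages, from 0.0 to 1.0."""
    return difflib.SequenceMatcher(None, a.lower(), b.lower()).ratio()

def flag_overlaps(manuscript: list[str], corpus: list[str],
                  threshold: float = 0.8) -> list[tuple[str, str]]:
    """Return (manuscript passage, published passage) pairs whose
    similarity exceeds the threshold -- candidates for an editor to
    review, not verdicts of plagiarism."""
    return [(m, c) for m in manuscript for c in corpus
            if similarity(m, c) >= threshold]
```

As with statcheck, the output is a shortlist for human judgment: legitimate quotation, boilerplate methods text, or self-citation can all score high without being misconduct.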

But in the case of systems like statcheck and anti-plagiarism software, Marcus says it's crucial that there's still human oversight, to make sure the program is turning up legitimate red flags. In other words, we need humans to ensure that algorithms aren't mistakenly keeping accurate science from being published.

Despite his caution, Marcus thinks programs can and should be deployed to keep sloppy or fraudulent science out of print. Researchers recently pored over images published in over 20,000 biomedical research papers, and found that about one in 25 of them contained inappropriately duplicated images.

"I'd like to see that every manuscript that gets submitted be run through a plagiarism detection software system, [and] a robust image detection software system," Marcus says. "In other words, something that looks for duplicated images or fabricated images."

Such technology, he says, is already in the works. "And then [we'd] have some sort of statcheck-like program that looks for squishy data."
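The simplest form of duplicate-image screening is exact matching: hash each figure's bytes and flag collisions. The tools Marcus envisions would have to go much further — catching rotated, cropped, or re-compressed copies — but a byte-level check, sketched below with hypothetical names and only the standard library, illustrates the first pass.

```python
import hashlib

def find_exact_duplicates(figures: dict[str, bytes]) -> list[tuple[str, str]]:
    """Flag pairs of figures whose contents are byte-identical.
    (Real screening must also catch transformed copies.)"""
    seen: dict[str, str] = {}          # digest -> first figure with it
    duplicates: list[tuple[str, str]] = []
    for name, data in figures.items():
        digest = hashlib.sha256(data).hexdigest()
        if digest in seen:
            duplicates.append((seen[digest], name))
        else:
            seen[digest] = name
    return duplicates
```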

This article is based on an interview that aired on PRI's Science Friday.

