Resist the urge to quantify scientific research assessment

Alarmingly, a recent article titled “DeSci Labs launches novelty scores for scientific manuscripts” (which I saw shared in this post) describes a new:

...mathematical model scores feature which is an objective measure of novelty for scientific work.

https://pharmaceuticalmanufacturer.media/pharma-manufacturing-news/latest-pharmaceutical-manufacturing-news/desci-labs-launches-novelty-scores-for-scientific-manuscript/

The article says:

...evaluating the novelty of scientific manuscripts and grant applications takes centre stage in the scientific peer review process. The primary reason work is rejected by editors of high-impact journals or funding agencies is because referees think it is not novel enough. However, the current peer review process is subjective, slow, labour-intensive, and prone to bias and inaccuracy [...] The release of these novelty scores [...] means there is now an objective, automated measurement of one of the core parts of the peer review process.

As a general principle, I assume goodwill. With that in mind, it is with all due respect that I find this development deeply alarming.

First of all, how can there possibly be an “objective” measure of novelty?????

Secondly, while it's great to see on DeSci Labs' about page some laudable goals like enabling FAIRness, open science, developing open source software, and preserving scientific outputs (all of which I care deeply about), the same page also speaks of securing USD 6.5 million in “seed funding”, accelerating science, using “Web3” technology, and aiming to “accelerate growth and enhance customer loyalty”. To me, this reeks of techno-solutionism and techno-accelerationism.

Third, the underlying math is published in Nature Communications:

https://doi.org/10.1038/s41467-023-36741-4

To me, all three of the above speak volumes about the state of scientific research culture, and not in a good way... 😩

Contrast this with the excellent essay “The Limits of Data” by C. Thi Nguyen, recently shared with the Turing Way community by Shern Tee:

https://doi.org/10.58875/LUXD6515

It reminds us:

...policymakers and data users should remember that not everything is as tractable to the methodologies of data. It is tempting to act as if data-based methods simply offer direct, objective, and unhindered access to the world—that if we follow the methods of data, we will banish all bias, subjectivity, and unclarity from the world. The power of data is vast scalability; the price is context. We need to wean ourselves off the pure-data diet, to balance the power of data-based methodologies with the context-sensitivity and flexibility of qualitative methods and local experts with deep but nonportable understanding. Data is powerful but incomplete; don’t let it entirely drown out other modes of understanding.

I hope that work on reforming academic research culture and #metaresearch will include diverse and skeptical voices, rather than simply developing new quantitative “metrics”.


Unless otherwise stated, all original content in this post is shared under the Creative Commons Attribution-ShareAlike 4.0 license (CC BY-SA 4.0).