Preprints First: An Idea to Increase Discovery and GDP
I suggest that any organisation submitting to a peer-reviewed journal should first post the article on a preprint website (like arXiv.org). For those unfamiliar: preprint websites let anyone upload research papers, making them searchable and freely downloadable worldwide.
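To make concrete just how open this is, here is a minimal sketch of searching arXiv programmatically via its public Atom API – no account, no paywall. The search term, result count and variable names are illustrative choices, not part of the proposal:

```python
# Query arXiv's public API from the Python standard library.
# Endpoint and parameters follow arXiv's documented Atom interface;
# the search term and max_results are arbitrary examples.
import urllib.request
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"
url = (
    "http://export.arxiv.org/api/query"
    "?search_query=all:transformer&start=0&max_results=3"
)

with urllib.request.urlopen(url) as response:
    feed = ET.fromstring(response.read())

# Each entry is a paper; arXiv includes a freely downloadable PDF link.
for entry in feed.iter(f"{ATOM}entry"):
    title = entry.find(f"{ATOM}title").text.strip()
    pdf = next(
        (link.get("href") for link in entry.iter(f"{ATOM}link")
         if link.get("title") == "pdf"),
        "(no pdf link)",
    )
    print(title, "->", pdf)
```

Anyone, anywhere, can run those twenty-odd lines and be reading this week's papers minutes later.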
The benefits are big:
Knowledge travels wider. Anyone – university-affiliated or not – can read the paper immediately. Barriers to learning are lowered.
Knowledge travels faster. Peer review can happen in parallel, so you see the paper ~3 months earlier.
All research gets published. More important papers get published – specifically those that are erroneously rejected by reviewers. And more negative-result papers – which peer review often turns away – can get published.
The drawbacks are small – if any:
But what about peer review? Nothing is lost: journals can still run peer review in parallel with the preprint.
Will this lead to unreproducible papers/studies? Reproducibility is an issue whether papers are peer-reviewed or not. The best solution to reproducibility is to make data and code easily accessible and well organised – for example, any paper with analysis should also ship a GitHub repo with the code, along the lines of the sketch below.
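As a minimal sketch of what such a repo's entry point might look like, here is a hypothetical reproduce.py – the file name, seed and placeholder analysis are all illustrative assumptions, not a prescription:

```python
# reproduce.py -- hypothetical single entry point for a paper's repo:
# "git clone && python reproduce.py" should regenerate the results.
import json
import platform
import random
import sys

import numpy as np  # assumes the analysis uses numpy

SEED = 42  # pin all randomness so runs are repeatable
random.seed(SEED)
np.random.seed(SEED)

def run_analysis() -> dict:
    """Placeholder standing in for the paper's actual analysis."""
    data = np.random.normal(size=1_000)
    return {"mean": float(data.mean()), "std": float(data.std())}

if __name__ == "__main__":
    results = run_analysis()
    # Record the environment next to the numbers so a reviewer can
    # diff their run against the published results.
    results["python"] = sys.version
    results["platform"] = platform.platform()
    with open("results.json", "w") as f:
        json.dump(results, f, indent=2)
```

Pair that with pinned dependencies (a requirements.txt or lock file) and the data itself, and anyone can check the paper's numbers without writing to the authors.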
Will this lead to too much “noise”? No. One mental model is that researchers manually trawl through papers as they are published. This is not how things work for peer-reviewed OR for preprint papers. Rather, researchers find interesting papers through peers – via social media posts, newsletters and YouTube videos. Those peer networks subject the papers to scrutiny – whether preprints or peer-reviewed. This is an organic, imperfect and highly effective discovery and sharing mechanism. Most importantly, it is error-correcting over time, and it works for any paper – whether formally peer-reviewed or not.
Will researchers be incentivised to publish faster, lower quality papers?
Yes, some might, but my observation is that papers on arXiv.org that come to me via my sources (newsletters, X, YouTube) are of consistently high quality. The whole system works through peer networks, not through exhaustive reading/search by any one researcher.
I frequently see bare-bones preprints criticised online. So there is peer pressure to do the work well, and a distribution benefit for those who write and research carefully.
Domain-specific Norms: Mechanical Engineering vs Machine Learning
For those of you in certain fields – machine learning, physics, maths, biology – this may all seem obvious.
My PhD was in Mechanical Engineering, where I primarily read gated, peer-reviewed papers. Now that I work in AI / machine learning, I read almost exclusively preprints on arXiv.org. The ability to immediately find and download any paper – often one written only in the past week – is a VERY different experience from mechanical engineering.
The effect of doing this could be 1%+ of GDP
If all fields adopted this norm, I speculate it could produce a meaningful increase in discovery and, in turn, GDP. R&D spending runs at roughly 2–3% of GDP in advanced economies, so even a modest acceleration in how quickly results spread could compound into gains on this scale.
To be clear, I’m not advocating for a regulation or law. We should convince people based on the merits of the arguments for a preprints-first norm.