The COVID-19 pandemic has had many long-lasting impacts. Some are obvious, such as the increased use of masks, which has persisted to some extent around the world. Others are less obvious but in some ways far more important, such as declining rates of childhood vaccination in high-income countries, where these vaccines have become a political target in the wake of the pandemic,1 despite an overall increase in immunization globally.2
The pandemic was a once-in-a-century event and, as such, has had lasting impacts on many systems that we rely on. Nowhere is this truer than in the machinery of academia. Prior to the pandemic, there were numerous warnings of growing cracks in the scientific enterprise, but until the stress of COVID-19, most of these were of interest only to a relatively small group of metascientists. Although the belief that much of published research is false is notoriously incorrect and based on flawed statistical reasoning,3 the reality is that many facets of scientific research were already beset by serious issues that only worsened during the global crisis.4,5
There are 2 broad areas that the scientific community has traditionally found extremely difficult to manage. The first is the production of science, which takes place in an increasingly competitive landscape. The second is the communication of science, which involves a worldwide struggle for attention. Some of the concerns in each area are shown in the TABLE. Many of these problems share the same underlying root: academia's system of perverse incentives. The growing difficulty in finding peer reviewers, which has been noted in a number of forums,6-8 is a good example. Peer review is the academic system of quality control. The concept emerged in the 19th century and gained popularity in the mid-20th century as a way to handle the growing number of submissions to scientific journals.9
Previously, editors would mostly decide on publication using their own judgment and request changes as they saw fit, but as the world of science expanded, it became necessary to have a scalable mechanism by which readers could be assured of the quality of a manuscript.9 In the early days of peer review, this worked very well because the academic world was still quite small and the burden on individual reviewers was therefore modest and easily managed. In 2025, with millions of scientific studies published across tens of thousands of journals each year,10 reviewers are increasingly being asked to shoulder an unmanageable workload to keep the system running. Moreover, peer review is neither paid nor reimbursed in any way. Despite calls for systemic change,11,12 the current peer review system operates entirely on the goodwill of academics across the world, who spend billions of dollars' worth of their own time each year keeping the scientific enterprise afloat.8
This creates a perverse incentive. Scientists must publish to continue their careers, and publication relies on peer reviewers to screen their work. However, time spent on peer review is, at a personal level, largely unrewarded, or at least not formally recognized. Thus, for everyone to continue to do science, we must all peer-review, but the more time individuals spend on peer review, the worse off they will be.
The publish-or-perish cycle of academia has created many other similar perverse incentives. As Doug Altman put it in 1994, “Much poor research arises because researchers feel compelled for career reasons to carry out research that they are ill-equipped to perform and nobody stops them.”13 Academics are pressured to publish as many papers as possible because this is a condition for progressing in their careers, leading to a growing flood of poor-quality research. Misconduct is driven by a similar set of incentives. The rise of paper mills—groups that will craft papers for academics willing to pay their fee, often by fabricating experiments—shows how desperate the situation has become.14,15 The growing ubiquity of artificial intelligence (AI) in published academic research similarly demonstrates how these incentives have shaped our knowledge-generation process.16
Similar incentives drive problems in the communication of science.17 There is an inherent tension between scientific research, which progresses slowly and relies on many previous findings to take a small step forward in knowledge, and the 24-hour news cycle, which relies on constant novelty.18 Big news in science is rarely unexpected; when large collaborative papers are presented at conferences, most of the audience is unsurprised by the findings. Conversely, news media are fundamentally concerned with what is new and exciting, which, by definition, most science is not.
In addition, scientific funding cycles all but guarantee that findings will be exaggerated in the media. In the hypercompetitive world of modern academia, where researchers often have to submit an average of 8 or more grant applications to have a single one funded,19 media attention is seen as a positive outcome because it can be cited in future grant applications as a form of impact. Universities also benefit from media attention on their research and fund communications departments to help publicize interesting and novel findings. The key stakeholders in this communication—scientists, universities, communications officers, and the media—all benefit from hype. There are rarely negative consequences for inappropriate optimism about science in the media.
This has led to a science media cycle that often overemphasizes the importance of results. Simple observational findings of “a potential inverse association” between cruciferous vegetable intake and cancer incidence20 hit the headlines as “Lower Your Colorectal Cancer Risk by 20% by Eating More of This Food.”21 Nor is the issue limited to traditional media: a metascientific study found inappropriate use of strong causal language for weak results in academic articles shared on social media.22
All these issues were heightened during the pandemic. Where once a highly regarded article in a top-ranked journal might attract some minor notice for a week and then be swiftly forgotten, during COVID-19, even a modest preprint could remain in the news and on social media for weeks or months. The level of attention and the swiftness of the response meant that the cracks in the scientific facade were more apparent than ever.
This is, unfortunately, another thing that COVID-19 has left us with. There are more retractions each year than ever before, and there are increasing concerns about the use of AI and other technologies in published research. Media attention is increasingly fragmented, with many younger people getting their science news not from experienced science journalists but from social media.23 The perverse incentives of academia are, if anything, accelerating as scientific funding in the United States plummets, and there is no end in sight to issues such as paper mills.
There are no easy systemic solutions to any of this, but if COVID-19 has proven anything, it is the importance of cultivating critical appraisal skills for anyone whose job relies on understanding scientific research. Critical appraisal—the ability to read, interpret, and interrogate a study’s methodology and findings—is a skill like any other, and it must be practiced to be effective.
There are efforts underway to address many of the problems noted in this piece, but at an individual level, the best advice is never to trust any data that you have not looked at yourself. In a world where someone can fabricate believable but incorrect mathematical arguments with the click of a button,24 it is more vital than ever that physicians and researchers spend time honing their critical thinking skills and reading the original research themselves.







