The following article is an opinion piece written by Philipp Koellinger. The views and opinions expressed in this article are those of the author and do not necessarily reflect the official position of Technology Networks.

Science and technology are co-dependent, linked by a feedback loop. Science is currently in the midst of a replication crisis: many studies appear novel on the surface but prove difficult to replicate. This barrier to scientific progress, in turn, holds back technological innovation, as breakthroughs are slowed by the current publishing landscape. Here, I explore the changes required to level the playing field and accelerate both scientific and technological progress.

Currently, the scientific publishing industry publishes little other than manuscripts (PDFs). Though these texts provide valuable information, they are only one part of reporting scientific research – data, code and other artefacts are often at least equally important to support an author’s claims and to aid those trying to repeat the results. Without access to these elements, it is difficult to test how robust or trustworthy a study is, or to reproduce its findings – holding back scientific progress. A lack of access to data and code also leads researchers to “reinvent the wheel,” duplicating efforts.

Good science enables breakthrough technologies, new medical treatments and innovation in general. But when research that looks promising turns out to be false and not replicable, society loses time and money, misinformation spreads and future R&D efforts get misdirected. This, in turn, slows down technological progress, economic growth and our ability to solve urgent problems.

Better technologies enable better science

That said, we have seen many scientific breakthroughs in recent years that were aided by new technologies, including the ability to quickly and cheaply sequence DNA or genotype individuals.
This has completely revolutionized the way genetics research is done, and it is starting to have downstream effects that improve medical diagnoses and treatment options. Two decades ago, collecting genetic samples was incredibly expensive; now there are massive datasets that allow scientists to run a vast range of studies that are much bigger and better statistically powered than before. This has enabled a cascade of robust, replicable scientific discoveries that would not have been possible without this positive feedback loop between technology and science.

New technologies, in turn, are often enabled by scientific progress. To stay with DNA and genetics, the development of modern genotyping and sequencing technology was enabled by major scientific breakthroughs such as the Human Genome Project, which decoded the first complete human genetic sequence 25 years ago. The competitive race between two scientific teams to achieve this breakthrough led to major innovations in measuring DNA accurately, quickly and at scale. This sparked the development of new sequencing and genotyping technologies, whose price decreases per human genome over time outpaced the rate of progress in computer chip making (i.e., Moore’s law). These technological advances, in turn, made possible the emergence of gigantic biobanks that collect genetic information, medical records and other markers for millions of people (e.g., UK Biobank). Many of these biobanks can be accessed by researchers around the world for free or for modest fees, which has led to a flurry of robust and reproducible scientific breakthroughs.

But just as the feedback loop between science and technology can speed up progress, bad science can severely harm our capacity for technological, medical and social progress.
Consider the example of laboratory experiments for pre-clinical studies, which are often conducted with small sample sizes and without giving others access to the raw data, code or experimental protocols detailed enough to enable replications.

In a commentary piece published in Nature in 2012, it was reported that of biotech giant Amgen’s attempts to replicate over 50 landmark cancer studies published in leading scientific journals, almost 90% could not be successfully replicated. This not only led to an enormous waste of time and money that slowed down the development of cancer treatments, but it also undermined trust in studies published in high-impact journals, leading Amgen to look for other ways to inform its R&D pipeline.

Efforts to make science more transparent and accessible (including data and code) make it easier for others to reproduce findings and help to separate true discoveries from noise. This, in turn, can save R&D-intensive companies a lot of time and resources, speeding up their ability to develop novel products and services that actually work.

Novelty: The catalyst of innovation

When scientific novelty and rigor coincide, actual discoveries happen that can spur technological innovation. Improving access to data and code, and properly measuring both the novelty and the rigor of scientific outputs, would go a long way toward improving the returns on R&D investments. The existing publishing system typically relies on the subjective “smell test” of editors and referees to assess novelty, which often results in disagreements and wrong decisions. This subjective, manual system is also labor-intensive, slow, inefficient and prone to biases. That is why we released a first-of-its-kind novelty score calculator that ranks both the content novelty and the context novelty of scientific manuscripts using a machine-learning model that improves over time, taking scientific advances into account.
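The actual scoring model is not described here, but the underlying idea of a content-novelty metric can be illustrated with a toy sketch: represent each manuscript as an embedding vector and score novelty as the distance to its nearest neighbor in a corpus of prior work. Everything below – the function name, the toy vectors – is illustrative only, not DeSci Labs’ method.

```python
# Illustrative sketch only: scores a paper's "content novelty" as
# 1 minus its maximum cosine similarity to a corpus of earlier papers.
import numpy as np

def content_novelty(paper_vec, corpus_vecs):
    """Return 1 - max cosine similarity between a paper and a prior corpus."""
    paper = paper_vec / np.linalg.norm(paper_vec)
    corpus = corpus_vecs / np.linalg.norm(corpus_vecs, axis=1, keepdims=True)
    return 1.0 - float(np.max(corpus @ paper))

# Toy corpus of three prior "papers" as 3-d embeddings.
corpus = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0],
                   [0.7, 0.7, 0.0]])

close = content_novelty(np.array([1.0, 0.05, 0.0]), corpus)  # resembles prior work
far = content_novelty(np.array([0.0, 0.0, 1.0]), corpus)     # orthogonal to it

assert far > close  # the unlike-anything-before paper scores as more novel
```

A real system would of course use learned text embeddings and a corpus that grows as science advances, which is what lets such a score shift over time.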
With thousands of scientific papers published every day, and publication volumes that have been growing exponentially for decades, the utility of a tool that objectively and quickly assesses novelty is obvious: it helps to surface the research that has the highest chance of moving the needle. These scores are highly correlated with future citation counts, and they identify papers that end up more highly cited than 90 percent of those published in the same year and field more accurately than top scientific journals such as Nature, Cell and Science.

Rigor, in addition, is aided by access to the data, code and other artefacts that underpin a study’s claims. A preprint network that supports manuscripts, data, code and other research outputs, and allows all of them to be reviewed, can also give researchers credit for sharing them. With this information available, the rigor of any previous study can be tested by an independent team of researchers who try to replicate its results – arguably the most convincing test of rigor. A crowdfunding mechanism for published papers can enable scientists and companies to fund replication attempts of the studies whose outcomes matter most to them. With the right incentives in place, research and development organizations could save a lot of time by outsourcing replication studies to the scientific community at a fraction of the cost.

DeSci Labs aims to accelerate scientific progress by making science both more robust and more novel. That is why we have launched the first objective novelty metric and allow users to upload vital supporting material alongside research manuscripts. This keeps the feedback loop between technology and science functioning, while preserving the record of past work that can help differentiate true discovery from noise – allowing true advances to take place.