In the book Betrayers of the Truth: Fraud and Deceit in the Halls of Science (Simon & Schuster, 1982), William Broad and Nicholas Wade demonstrate through examples that science is not a strictly logical process. As the authors put it:
Our conclusion, in brief, is that science bears little resemblance to its conventional portrait. We believe that the logical structure discernible in scientific knowledge says nothing about the process by which the structure was built or the mentality of the builders. In the acquisition of knowledge, scientists are not guided by logic and objectivity alone, but also by such nonrational factors as rhetoric, propaganda, and personal prejudice. Scientists do not depend solely on rational thought, and have no monopoly on it. Id. at pp. 8-9.
In other words, some scientists make a mockery of science. We now know that science, like anything else, can be done well or badly. That realization is particularly disturbing when Attorney General Jeff Sessions has ended the Justice Department's partnership with independent scientists to raise forensic science standards. Rather than judging science by the scientific method, Attorney General Sessions wants to decide whether science is correct based on public comments. This is an open invitation for junk science to enter the courtroom, since many courts already favor admitting anything resembling an expert opinion as evidence. For instance, in 2015 the FBI revealed that its experts had overstated the strength of microscopic hair analysis evidence in cases dating back decades, and the Justice Department promised a review of laboratory protocols and procedures. Or consider the specific example of Annie Dookhan, the disgraced prosecution forensic expert who served prison time for tampering with prosecution drug-test evidence in about 40,000 cases between 2003 and 2012.
Given this information, how can one tell good science from bad? The following scholars (Adam J. Berinsky, professor of political science at MIT; Marybeth Gasman, professor of higher education at the University of Pennsylvania; Morgan L. W. Hazelton, assistant professor of political science at Saint Louis University; Thomas E. Patterson, professor of government and the press at Harvard University; Eric A. Stewart, professor of criminology at Florida State University) offer this tip sheet:
- Is this research peer reviewed? A study published in a peer-reviewed journal typically undergoes a detailed critique by a small number of qualified scholars. The peer-review process, while imperfect, is designed for quality control.
- Is it published in a top-tier academic journal? Top journals are more likely to feature high-quality research. They are more selective about the research they accept for publication. Also, their peer-review process tends to be more rigorous. A measure for gauging a journal’s ranking is its Impact Factor, which can be found in the Journal Citation Reports database. Impact Factor scores range from zero to over 100.
- Do other scholars trust this work? One indicator of whether other scholars consider a study to be credible is the number of times they cite it in their own research. It can take years, however, for a study to generate a high citation count. You can use Google Scholar, a free search engine, or Web of Science, a subscription-based service, to find citation counts. Journalists also can ask faculty in the field their opinions.
- Who funded the research? It’s important to know who sponsored the research and what role, if any, a sponsor played in the design of the study and its implementation or in decisions about how findings would be presented to the public. Authors of studies published in academic journals are required to disclose funding sources. Studies funded by organizations such as the National Science Foundation tend to be trustworthy because the funding process itself is subject to an exhaustive peer-review process.
- What are the authors’ credentials? Knowing where the authors work and how often they have been published can help you assess their expertise in a field of study.
- How old is the study? In certain fields — for example, chemistry or public opinion — a study that is several years old may no longer be reliable.
- Do the authors have a conflict of interest? Be leery of research conducted by individuals or organizations that stand to gain from the findings.
- What’s the sample size? For studies based on samples, larger samples generally yield more accurate results than smaller samples.
- Does the study rely on survey results? Survey results can be biased if respondents were not chosen by random selection. Beware of any survey that relies on respondents who self-select (for example, many internet-based surveys).
- Can you follow the methodology? Scholars should explain how they approached their research questions, where they got their data and how they used it. They also should clearly define key concepts and describe the statistical methods used in their analyses. This level of detail is necessary to allow other people to check and replicate their work. Replicability is critical.
- Is statistical data presented? Authors should present details about the data they are examining and the numerical results of their analyses. This allows others to review their calculations. In some fields, authors make their data sets publicly available.
- Are the study’s findings supported by the data? Good researchers are cautious in describing their conclusions because they want to convey exactly what they learned. Sometimes, however, researchers exaggerate or minimize their findings, or there is a discrepancy between what an author claims to have found and what the data suggest.
- Is it a meta-study? Among the most reliable studies are meta-studies, also referred to as meta-analyses. Their conclusions are based on an analysis of multiple studies done on a particular topic.
- See also the guide by Dr. Jamie Rishikof, a licensed psychologist.
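The tip sheet above can be treated as a rough screening checklist. As a minimal sketch only, the short script below counts how many criteria a study satisfies and flags the gaps; the criterion names and the equal weighting are illustrative assumptions, not a scoring method proposed by the scholars, and a low score is a prompt for closer scrutiny, not a verdict.

```python
# Illustrative sketch: the tip sheet as a screening checklist.
# Criterion names and equal weighting are assumptions for demonstration.

CRITERIA = [
    "peer_reviewed",            # published in a peer-reviewed journal
    "top_tier_journal",         # appears in a selective, high-impact journal
    "cited_by_others",          # other scholars cite the work
    "funding_disclosed",        # sponsor and its role are disclosed
    "authors_credentialed",     # authors have relevant expertise
    "recent_enough",            # study is not outdated for its field
    "no_conflict_of_interest",  # researchers do not gain from the findings
    "adequate_sample_size",     # sample is large enough to be informative
    "random_sampling",          # survey respondents were randomly selected
    "methodology_transparent",  # methods are described well enough to replicate
    "statistics_reported",      # data and numerical results are presented
    "conclusions_match_data",   # claims do not outrun the data
]

def screen_study(answers):
    """Count satisfied checklist criteria for one study.

    `answers` maps a criterion name to True/False; any criterion not
    answered is treated as unmet. Returns (satisfied_count, unmet_list).
    """
    unmet = [c for c in CRITERIA if not answers.get(c, False)]
    return len(CRITERIA) - len(unmet), unmet

# Example: a peer-reviewed study whose funding source is not disclosed
score, gaps = screen_study({"peer_reviewed": True, "methodology_transparent": True})
```

In this usage, `score` is 2 of 12 and `gaps` lists the ten unanswered criteria, which is the more useful output: it tells a reader which questions from the tip sheet still need answers before trusting the study.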