In academia, the maxim “publish or perish” is nothing new. Generally, the phrase refers to the pressure on academics to continually publish research papers in order to sustain and advance their careers. In recent years, this pressure has been mounting not only to sustain and further one’s academic career, but also for universities to secure their desired spots in university ranking lists.
Academic administrators also consider the number (and quality) of publications as one of the main criteria in recruiting faculty members. In its true sense, academic research publishing should create, advance, and disseminate knowledge that helps solve real-life problems in the respective fields. Unfortunately, the increasing demand for publications has raised many concerns, such as plagiarism, duplicate publications, salami slicing, gift authorship, and other unethical or dubious research practices.
Plagiarism is perhaps the most common and widely known unethical practice in academia and in the publication world. While the definition of plagiarism may vary across different levels and forms, it simply refers to “presenting someone else’s work or ideas as your own, with or without their consent, by incorporating it into your work without full acknowledgement” (Oxford University). In duplicate publication, authors attempt to publish almost the same material in different outlets, changing the title and keywords and often listing different co-authors, which impedes the detection of plagiarism. Unlike duplicate publication, salami slicing involves splitting a large study into “slices” in order to generate more publications from the same data set. If the slices of a segmented study share the same hypotheses, samples, and methods, this is not an acceptable practice. In some cases, however, splitting a large study into multiple publications can be meaningful, provided it is done with full transparency, disclosing precisely how each segment of the study relates to the others.
Another typical malpractice is known as “gift authorship”. According to the Committee on Publication Ethics (COPE), gift authors are “people who are listed as authors but who did not make a significant contribution to the research”. COPE also describes another type of gift author: a colleague whose name is included in your publication based on a mutual understanding that s/he will do the same for you, regardless of your contribution to his/her research. This malpractice is usually aimed at inflating the number of publications.
These unethical or dubious practices in academic research have been fuelled by the mushrooming of “predatory” journals and publishers. The phrases “predatory journal” and “predatory publisher” were coined by Jeffrey Beall, an American librarian. Beall created a list, now widely known as “Beall’s list”, of unscrupulous open access publishers that publish articles with little or no real peer review. As of September 12, 2019, a total of 1,451 potential predatory journals and 1,276 predatory publishers appear in Beall’s list. The list is updated regularly, and the numbers continue to grow. Many universities do not allow their faculty members to publish in predatory journals, or at least do not count such publications toward key performance indicators (KPIs). One important question remains: how should we gauge the quality of a journal?
Academic journals are meant to be peer-reviewed. Peer-reviewed journals (also known as “refereed journals”) include only articles that have gone through a systematic process of review, feedback, and iteration before publication. The quality of peer-reviewed journals can be assessed using different sets of criteria, such as indexing, impact factor, journal ranking, publisher, and editorial board members.
Indexing is considered one indicator of journal quality. Once a journal is indexed by a database, its articles become available to all users of that database. Among the several abstracting and indexing services available today, Scopus (owned by Elsevier) is one of the most recognised and the largest abstract and citation database of scientific journals, books, and conference proceedings. Another highly recognised indexing service is the ISI (Institute for Scientific Information), which has been taken over by Clarivate Analytics. Indexed journals are generally considered to be of higher scientific quality than non-indexed journals. However, indexing alone does not guarantee the quality of a peer-reviewed academic journal.
The impact factor is commonly used as an indicator of the relative importance of a journal in a specific field. The impact factor of a journal for a given year (e.g., 2018) is calculated from how frequently articles published in that journal during the preceding two years (e.g., 2016 and 2017) were cited by articles published in that year. One needs to be cautious in finding the genuine impact factor, because some journals self-declare impact factors that have no authenticity. To have an authentic impact factor, a journal must be indexed in Web of Science (owned by Clarivate Analytics), which publishes journals’ impact factors in its Journal Citation Reports (JCR).
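The calculation described above can be written out explicitly; the figures below are purely hypothetical and serve only to illustrate the arithmetic:

```latex
\text{IF}_{2018} \;=\;
\frac{\text{citations in 2018 to articles published in 2016--2017}}
     {\text{citable articles published in 2016--2017}}
```

For example, a journal that published 100 citable articles in 2016 and 2017 combined, and whose articles from those two years were cited 200 times in 2018, would have a 2018 impact factor of 200/100 = 2.0.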
There are also several authentic journal rankings, such as the Scimago, ABDC, and ABS rankings. Scimago ranks journals into four quartiles (Q1, Q2, Q3, and Q4) based on their citation impact within specific fields. The ABDC (Australian Business Deans Council) categorises journals into four grades of quality (A*, A, B, and C) broadly within business disciplines, and the ABS (Association of Business Schools) ranks business journals into five categories (4*, 4, 3, 2, and 1). Different criteria are used to rank journals, including global acceptance of the journal, journal citation metrics, expert peer review, and relevance to the discipline.
Publishing a research paper is one means of strengthening academic credentials, progressing one’s career, and, of course, advancing scientific knowledge. At the same time, we should not forget that publishing in itself is no guarantee of academic credibility; rather, it may cripple one’s academic credentials and contaminate scientific knowledge if the work lacks integrity.
Dr Khan Md Raziuddin Taufique is an Assistant Professor at BRAC Business School. He is currently working as a visiting faculty member at Curtin University (Malaysia campus). E-mail: firstname.lastname@example.org