From Wikipedia, the free encyclopedia
This is one of a collection of articles with direct or indirect relevance to the development of the Universal Debating Project (UDP). Blogger Ref: http://www.p2pfoundation.net/Universal_Debating_Project
For Wikipedia's Peer Review area, see Wikipedia:Peer review. For other uses, see Peer review (disambiguation).
"Independent review" redirects here. It is not to be confused with The Independent Review.
Peer review is the evaluation of work by one or more people of similar competence to the producers of the work (peers). It constitutes a form of self-regulation by qualified members of a profession within the relevant field. Peer review methods are employed to maintain standards of quality, improve performance, and provide credibility. In academia, peer review is often used to determine an academic paper's suitability for publication. Peer review can be categorized by the type of activity and by the field or profession in which the activity occurs, e.g., medical peer review.

In parallel with these 'common experience' definitions, which treat peer review as a pre-constructed process, there are a few scientific understandings of peer review that do not. Hirschauer proposed that journal peer review can be understood as reciprocal accountability of judgements among peers.[1] Gaudet proposed that journal peer review could be understood as a social form of boundary judgement, determining what can be considered scientific (or not) against an overarching knowledge system, and following predecessor forms of inquisition and censorship.[2][3]
History
The first recorded editorial pre-publication peer-review process was at the Royal Society of London in 1665 by the founding editor of Philosophical Transactions of the Royal Society, Henry Oldenburg.[4][5][6] In the 20th century, peer review became common for science funding allocations. This process appears to have developed independently from that of editorial peer review.[7] A competing, scientific account of the history of peer review is given by Gaudet,[8] which builds on historical research by Gould,[9] Biagioli,[10] Spier,[11] and Rip.[12] Taking a scientific approach means carefully attending to what is under investigation, here peer review, rather than only looking at superficial or self-evident commonalities among inquisition, censorship, and journal peer review.

The first peer-reviewed publication might have been the Medical Essays and Observations published by the Royal Society of Edinburgh in 1731. The present-day peer-review system evolved from this 18th-century process,[13] and did not become commonplace until the mid-20th century.[14]
A prototype professional peer-review process is recommended in the Ethics of the Physician written by Ishāq ibn ʻAlī al-Ruhāwī (854–931). His work states that a visiting physician must make duplicate notes of a patient's condition on every visit. When the patient was cured or had died, the notes of the physician were examined by a local medical council of other physicians, who would decide whether the treatment had met the required standards of medical care.[15]
Peer review has long been a touchstone of the scientific method, but until the end of the 19th century it was performed by editors-in-chief or editorial committees. While some medical journals then started to systematically appoint external reviewers, it is only since the middle of the 20th century that this practice has spread widely and that these reviewers have been given some visibility within academic journals, often being thanked by authors and editors.[16] In earlier periods, editors of scientific journals often made publication decisions without seeking outside input. For example, Albert Einstein's revolutionary Annus Mirabilis papers in the 1905 issue of Annalen der Physik were peer-reviewed by the journal's editor-in-chief, Max Planck, and its co-editor, Wilhelm Wien, both future Nobel prize winners and together experts on the topics of these papers. An external panel of reviewers was not sought, contrary to what is now done for many scientific journals. Established authors and editors were given greater latitude in their editorial discretion. On another occasion, Einstein was severely critical of the external review process, saying that he had not authorized the editor-in-chief to show his manuscript "to specialists before it is printed", and informing him that he would "publish the paper elsewhere".[17] An editorial in Nature published in 2003 stated that "in journals in those days, the burden of proof was generally on the opponents rather than the proponents of new ideas."[18]
The first Peer Review Congress met in 1989. Over time, the fraction of papers devoted to studying peer review itself has steadily declined, suggesting that, as a field of sociological inquiry, it has given way to more systematic studies of bias and error.[19]
Professional peer review
Professional peer review focuses on the performance of professionals, with a view to improving quality, upholding standards, or providing certification. In academia, peer review is common in decisions related to faculty advancement and tenure.

Professional peer review activity is widespread in the field of health care, where it is usually called clinical peer review.[20] Further, since peer review activity is commonly segmented by clinical discipline, there is also physician peer review, nursing peer review, dentistry peer review, etc.[21] Many other professional fields have some level of peer review process: accounting,[22][23] law,[24][25] engineering (e.g., software peer review, technical peer review), aviation, and even forest fire management.[26]
Peer review is used in education to achieve certain learning objectives, particularly as a tool to reach higher order processes in the affective and cognitive domains as defined by Bloom's taxonomy. This may take a variety of forms, including closely mimicking the scholarly peer review processes used in science and medicine.[27][28]
Scholarly peer review
Pragmatically, peer review refers to the work done during the screening of submitted manuscripts and funding applications. This process encourages authors to meet the accepted standards of their discipline and reduces the dissemination of irrelevant findings, unwarranted claims, unacceptable interpretations, and personal views. Publications that have not undergone peer review are likely to be regarded with suspicion by academic scholars and professionals.[citation needed]
Justification
It is difficult for authors and researchers, whether individually or in a team, to spot every mistake or flaw in a complicated piece of work. This is not necessarily a reflection on those concerned: with a new and perhaps eclectic subject, an opportunity for improvement may be more obvious to someone with special expertise or who simply looks at the work with a fresh eye. Therefore, showing work to others increases the probability that weaknesses will be identified and improved. For both grant-funding and publication in a scholarly journal, it is also normally a requirement that the subject is both novel and substantial.[26][27]

Ultimately, the decision whether or not to publish a scholarly article, or what should be modified before publication, lies with the editor of the journal to which the manuscript has been submitted. Similarly, the decision whether or not to fund a proposed project rests with an official of the funding agency. These individuals usually refer to the opinion of one or more reviewers in making their decision, primarily for three reasons:
- Workload. A small group of editors/assessors cannot devote sufficient time to each of the many articles submitted to many journals.
- Diversity of opinion. Were the editor/assessor to judge all submitted material themselves, approved material would solely reflect their opinion.
- Limited expertise. An editor/assessor cannot be expected to be sufficiently expert in all areas covered by a single journal or funding agency to adequately judge all submitted material.
Reviewers are typically anonymous and independent, to help foster unvarnished criticism and to discourage cronyism in funding and publication decisions. However, the reviewer's identity may have to be disclosed in limited circumstances, such as the examination of a formal complaint against the referee, or a court order. Anonymity may be unilateral or reciprocal (single- or double-blind review).
Since reviewers are normally selected from experts in the fields discussed in the article, the process of peer review helps keep invalid or unsubstantiated claims out of the body of published research and knowledge. Scholars will read published articles outside their limited area of detailed expertise, and then rely, to some degree, on the peer-review process to have provided reliable and credible research that they can build upon for subsequent or related research. Significant scandal ensues when an author is found to have falsified the research included in an article, as other scholars, and the field of study itself, may have relied upon the invalid research.
Procedure
In the case of proposed publications, an editor sends advance copies of an author's work or ideas to researchers or scholars who are experts in the field (known as "referees" or "reviewers"), nowadays normally by e-mail or through a web-based manuscript processing system. Depending on the field of study and on the specific journal, there are usually one to three referees for a given article.

These referees each return an evaluation of the work to the editor, noting weaknesses or problems along with suggestions for improvement. Typically, most of the referees' comments are eventually seen by the author, though a referee can also send 'for your eyes only' comments to the editor; scientific journals observe this convention almost universally. The editor, usually familiar with the field of the manuscript (although typically not in as much depth as the referees, who are specialists), then weighs the referees' comments, his or her own opinion of the manuscript, and the scope of the journal or book and its readership, before passing a decision back to the author(s), usually with the referees' comments.
Referees' evaluations usually include an explicit recommendation of what to do with the manuscript or proposal, often chosen from options provided by the journal or funding agency. Most recommendations are along the lines of the following:
- to unconditionally accept the manuscript or the proposal,
- to accept it in the event that its authors improve it in certain ways,
- to reject it, but encourage revision and invite resubmission,
- to reject it outright.
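As a rough illustration only, and not any journal's actual workflow, the recommendation categories above can be modelled schematically; the category names and the simple tallying rule in the following sketch are assumptions chosen for the example.

```python
from enum import Enum
from collections import Counter

class Recommendation(Enum):
    """Illustrative recommendation categories, mirroring the list above."""
    ACCEPT = "accept unconditionally"
    MINOR_REVISION = "accept subject to improvements"
    REJECT_AND_RESUBMIT = "reject, but invite resubmission"
    REJECT = "reject outright"

def summarize_reports(reports: list[Recommendation]) -> Counter:
    """Tally referee recommendations so an editor can see the spread of opinion.
    The editor, not this tally, makes the final decision, as described in the text above."""
    return Counter(reports)

# Example: two referees ask for revisions, one rejects outright.
reports = [Recommendation.MINOR_REVISION, Recommendation.MINOR_REVISION, Recommendation.REJECT]
print(summarize_reports(reports))
```

In practice such a tally would only inform the editor, who remains free to weigh the individual reports however he or she sees fit.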
In situations where multiple referees disagree substantially about the quality of a work, there are a number of strategies for reaching a decision. When an editor receives very positive and very negative reviews for the same manuscript, the editor will often solicit one or more additional reviews as a tie-breaker. As another strategy in the case of ties, editors may invite authors to reply to a referee's criticisms and permit a compelling rebuttal to break the tie. If an editor does not feel confident to weigh the persuasiveness of a rebuttal, the editor may solicit a response from the referee who made the original criticism. An editor may convey communications back and forth between authors and a referee, in effect allowing them to debate a point. Even in these cases, however, editors do not allow multiple referees to confer with each other, though each reviewer may often see earlier comments submitted by other reviewers. The goal of the process is explicitly not to reach consensus or to persuade anyone to change their opinions, but instead to provide material for an informed editorial decision. Some medical journals, usually following the open access model, have begun posting on the Internet the pre-publication history of each individual article, from the original submission to reviewers' reports, authors' comments, and revised manuscripts.
Traditionally, reviewers would often remain anonymous to the authors, but this standard varies both with time and with academic field. In some academic fields, most journals offer the reviewer the option of remaining anonymous or not, or a referee may opt to sign a review, thereby relinquishing anonymity. Published papers sometimes contain, in the acknowledgments section, thanks to anonymous or named referees who helped improve the paper.
Most university presses undertake peer review of books. After positive review by two or three independent referees, a university press sends the manuscript to the press's editorial board, a committee of faculty members, for final approval.[32] Such a review process is a requirement for full membership of the Association of American University Presses.[33]
In some disciplines there exist refereed venues (such as conferences and workshops). To be admitted to speak, scholars and scientists must submit papers (generally short, often 15 pages or less) in advance. These papers are reviewed by a "program committee" (the equivalent of an editorial board), which generally requests input from referees. The hard deadlines set by the conferences tend to limit the options to either accepting or rejecting the paper.
Recruiting referees
At a journal or book publisher, the task of picking reviewers typically falls to an editor.[34] When a manuscript arrives, an editor solicits reviews from scholars or other experts who may or may not have already expressed a willingness to referee for that journal or book division. Granting agencies typically recruit a panel or committee of reviewers in advance of the arrival of applications.[35]

Typically, referees are not selected from among the authors' close colleagues, students, or friends and relatives.[citation needed] Referees are supposed to inform the editor of any conflicts of interest that might arise. Journals or individual editors may invite a manuscript's authors to name people whom they consider qualified to referee their work. For some journals this is a requirement of submission. Authors are sometimes also given the opportunity to name natural candidates who should be disqualified, in which case they may be asked to provide justification (typically expressed in terms of conflict of interest). In some disciplines, scholars listed in an "acknowledgments" section are not allowed to serve as referees (hence the occasional practice of using this section to disqualify potentially negative reviewers).[citation needed]
Editors solicit author input in selecting referees because academic writing typically is very specialized. Editors often oversee many specialties, and cannot be experts in all of them. But after an editor selects referees from the pool of candidates, the editor typically is obliged not to disclose the referees' identities to the authors, and in scientific journals, to each other (see Anonymous peer review). Policies on such matters may differ among academic disciplines.
Recruiting referees is a political art, because referees, and often editors, are usually not paid, and reviewing takes time away from the referee's main activities, such as his or her own research. To the would-be recruiter's advantage, most potential referees are authors themselves, or at least readers, who know that the publication system requires that experts donate their time. Referees also have the opportunity to prevent work that does not meet the standards of the field from being published, which is a position of some responsibility. Editors are at a special advantage in recruiting a scholar when they have overseen the publication of his or her work, or if the scholar is one who hopes to submit manuscripts to that editor's publication in the future. Granting agencies, similarly, tend to seek referees among their present or former grantees. Serving as a referee can even be a condition of a grant, or professional association membership.
Another difficulty that peer review organizers face is that, with respect to some manuscripts or proposals, there may be few scholars who truly qualify as experts. Such a circumstance often frustrates the goals of reviewer anonymity and the avoidance of conflicts of interest. It also increases the chances that an organizer will not be able to recruit true experts – people who have themselves done work similar to that under review, and who can read between the lines. Low-prestige or local journals and granting agencies that award little money are especially handicapped with regard to recruiting experts.
Finally, anonymity adds to the difficulty of finding reviewers in another way. In scientific circles, credentials and reputation are important, and while being a referee for a prestigious journal is considered an honor, the anonymity restrictions make it inadvisable to publicly state that one was a referee for a particular article. However, credentials and reputation are principally established by publications, not by refereeing; and in some fields refereeing may not be anonymous.
Different styles of review
In "double-blind" review,[36] which is more common in the humanities than in the hard sciences, the identity of the authors is concealed from the reviewers, and vice versa, lest the knowledge of authorship or concern about disapprobation from the author bias their review. Critics of the double-blind review process point out that, despite any editorial effort to ensure anonymity, the process often fails to do so, since certain approaches, methods, writing styles, notations, etc., point to a certain group of people in a research stream, and even to a particular person.[37][38] In many fields of big science, the publicly available operation schedules of major equipments, such as telescopes or synchrotrons, would make the authors' names obvious to anyone who would care to look them up. Proponents of double-blind review argue that it performs no worse than single-blind, and that it generates a perception of fairness and equality in academic funding and publishing.[39] Single-blind review is strongly dependent upon the goodwill of the participants, but no more so than double-blind review with easily identified authors.A conflict of interest arises when a reviewer and author have a disproportionate amount of respect or disrespect for each other. As an alternative to single-blind and double-blind review, authors and reviewers are encouraged to declare their conflicts of interest when the names of authors and sometimes reviewers are known to the other. When conflicts are reported, the conflicting reviewer can be prohibited from reviewing and discussing the manuscript, or his or her review can instead be interpreted with the reported conflict in mind; the latter option is more often adopted when the conflict of interest is mild, such as an ancient professional connection or a distant family relation. The incentive for reviewers to declare their conflicts of interest is a matter of professional ethics and individual integrity. While their reviews are not public, these reviews are a matter of record and the reviewer's credibility depends upon how they represent themselves among their peers. Some software engineering journals, such as the IEEE Transactions on Software Engineering, use non-blind reviews with reporting to editors of conflicts of interest by both authors and reviewers.
A more rigorous standard of accountability is known as an audit. Because reviewers are not paid, they cannot be expected to put as much time and effort into a review as an audit requires. Therefore, academic journals such as Science, organizations such as the American Geophysical Union, and agencies such as the National Institutes of Health and the National Science Foundation maintain and archive scientific data and methods in the event another researcher wishes to replicate or audit the research after publication.[40][41][42]
Prepublication reviews
Anonymous peer review
Anonymous peer review, also called blind review, is a system of prepublication peer review of scientific articles or papers for journals or academic conferences by reviewers who are known to the journal editor or conference organizer but whose names are not given to the article's author. In some cases, the reviewers do not know the author's identity, as any identifying information is stripped from the document before review. The system is intended to reduce or eliminate bias, although this has been challenged; for example, Eugene Koonin, a senior investigator at the National Center for Biotechnology Information, asserts that the system has "well-known ills"[43] and advocates "open peer review". Others support blind review on the grounds that no research has suggested the methodology to be harmful and that the cost of facilitating such reviews is minimal.[44] Some experts have proposed blind review procedures for reviewing controversial research topics.[45]

Attributed peer review
Attributed peer review describes a scientific literature concept and process, central to which are varying degrees of transparency and disclosure of the identities of those reviewing scientific publications. The concept thus represents a departure from, and an alternative to, the incumbent anonymous peer review process, in which non-disclosure of these identities toward the public, and toward the authors of the work under review, is the default practice. Attributed peer review appears to constitute a response to modern criticisms of the incumbent system, and its emergence may therefore be partially attributed to those criticisms.

Postpublication reviews
Some journals use postpublication peer review as their formal review method, instead of prepublication review. This was first introduced in 2001 by Atmospheric Chemistry and Physics (ACP).[46] More recently, F1000Research, Science Open and The Winnower were launched with postpublication review as their formal review method.[47][48][49] At both ACP and F1000Research, peer reviewers are formally invited, much as at prepublication review journals. Articles that pass peer review at those two journals are included in external scholarly databases.[50] In addition to journals hosting their own articles' reviews, there are also external, independent websites dedicated to post-publication peer review across entire fields, such as PubPeer, Publons, JournalReview.org, etc.
Criticism of peer review
Drummond Rennie, deputy editor of the Journal of the American Medical Association, is an organizer of the International Congress on Peer Review and Biomedical Publication, which has been held every four years since 1986. He remarked:

There seems to be no study too fragmented, no hypothesis too trivial, no literature too biased or too egotistical, no design too warped, no methodology too bungled, no presentation of results too inaccurate, too obscure, and too contradictory, no analysis too self-serving, no argument too circular, no conclusions too trifling or too unjustified, and no grammar and syntax too offensive for a paper to end up in print.[51]

Richard Horton, editor of the British medical journal The Lancet, said:
The mistake, of course, is to have thought that peer review was any more than just a crude means of discovering the acceptability—not the validity—of a new finding. Editors and scientists alike insist on the pivotal importance of peer review. We portray peer review to the public as a quasi-sacred process that helps to make science our most objective truth teller. But we know that the system of peer review is biased, unjust, unaccountable, incomplete, easily fixed, often insulting, usually ignorant, occasionally foolish, and frequently wrong.[52]
Allegations of bias and suppression
The interposition of editors and reviewers between authors and readers may enable the intermediators to act as gatekeepers.[53] Some sociologists of science argue that peer review makes the ability to publish susceptible to control by elites and to personal jealousy.[54][55] The peer review process may suppress dissent against "mainstream" theories.[56][57][58] Reviewers tend to be especially critical of conclusions that contradict their own views,[59][60] and lenient towards those that match them. At the same time, established scientists are more likely than others to be sought out as referees, particularly by high-prestige journals and publishers. As a result, ideas that harmonize with the established experts' are more likely to see print and to appear in premier journals than are iconoclastic or revolutionary ones. This accords with Thomas Kuhn's well-known observations regarding scientific revolutions.[61] A theoretical model has been established whose simulations imply that peer review and over-competitive research funding drive mainstream opinion toward monopoly.[62] A marketing professor argued that invited papers are more valuable because papers that undergo the conventional system of peer review may not necessarily feature findings that are actually important.[63]

Peer review failures
Peer review fails when a peer-reviewed article contains fundamental errors that undermine at least one of its main conclusions and that could have been identified by more careful reviewers. Many journals have no procedure to deal with peer review failures beyond publishing letters to the editor.[64]

Peer review in scientific journals assumes that the article reviewed has been honestly prepared. The process occasionally detects fraud, but is not designed to do so.[65]
An experiment on peer review with a fictitious manuscript found that peer reviewers failed to detect some of the manuscript's errors, and that the majority of reviewers did not notice that the paper's conclusions were unsupported by its results.[66]
When peer review fails and a paper is published with fraudulent or otherwise irreproducible data, the paper may be retracted.
Criticisms of traditional anonymous peer review allege that it lacks accountability, can lead to abuse by reviewers, and may be biased and inconsistent.[67][68][69]
There have also been instances where peer review was claimed to have been performed but in fact was not; this has been documented in some predatory open access journals (e.g., the Who's Afraid of Peer Review? affair) and in the case of sponsored Elsevier journals.
Peer review and plagiarism
Reviewers generally lack access to raw data, but do see the full text of the manuscript, and are typically familiar with recent publications in the area. Thus, they are in a better position to detect plagiarism of prose than fraudulent data. A few cases of such textual plagiarism by historians, for instance, have been widely publicized.[70] On the scientific side, a poll of 3,247 scientists funded by the U.S. National Institutes of Health found that 0.3% admitted faking data and 1.4% admitted plagiarism.[71]

Additionally, 4.7% of respondents in the same poll admitted to self-plagiarism or autoplagiarism,[71] in which an author republishes the same material, data, or text without citing their earlier work. Authors often use autoplagiarism to pad their list of publications. Journals and employers often do not severely punish authors for self-plagiarism, though it is against the rules of most peer-reviewed journals, which require that only unpublished material be submitted; it may additionally violate copyright law.
Abuse of inside information by reviewers
A related form of professional misconduct that is sometimes reported is a reviewer using not-yet-published information from a manuscript or grant application for personal or professional gain. The frequency with which this happens is unknown, but the United States Office of Research Integrity has sanctioned reviewers who have been caught exploiting knowledge they gained as reviewers. A possible defense (for authors) against this form of misconduct is to pre-publish their work in the form of a preprint or technical report on a public system such as arXiv. The preprint can later be used to establish priority, although this violates the stated policies of some journals.

Corrective measures
Many journals deal with peer review failures by publishing letters,[72] though some opt not to do so. Retraction of an article may be required. The author of a disputed article is allowed a published reply to a critical letter. However, neither the letter nor the reply is usually peer-reviewed, and typically the author rebuts the criticisms. Thus, the readers are left to decide for themselves whether a peer review failure occurred.

Examples
- "Perhaps the most widely recognized failure of peer review is its inability to ensure the identification of high-quality work. The list of important scientific papers that were rejected by some peer-reviewed journals goes back at least as far as the editor of Philosophical Transaction's 1796 rejection of Edward Jenner's report of the first vaccination against smallpox."[73]
- The trapezoidal rule, a standard method of numerical integration based on Riemann sums, was republished as a new method in the diabetes research journal Diabetes Care.[74] The method is almost always taught in high-school calculus, and this was thus considered an example of an extremely well-known idea being re-branded as a new discovery (the rule itself is recalled after this list).
- A conference organized by the Wessex Institute of Technology was the target of an exposé by three researchers who wrote nonsensical papers (including one that was composed of random phrases). They reported that the papers were "reviewed and provisionally accepted" and concluded that the conference was an attempt to "sell" publication possibilities to less experienced or naive researchers.[75] This may however be better described as a lack of any actual peer review, rather than peer review having failed.
- Refereeing performed on behalf of the Institute of Electrical and Electronics Engineers has also been subject to criticism after fake papers were discovered in conference publications, most notably by Labbé and Labbé and a researcher using the pseudonym of Herbert Schlangemann.[76][77][78][79][80][81]
- In 2014, an editorial was published in Nature highlighting problems with the peer-review process.[82][83]
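For reference, and not drawn from the Diabetes Care paper itself, the "rediscovered" method in the second example above is simply the composite trapezoidal rule: for points x_0 < x_1 < ... < x_n with measured values f(x_i), the area under the curve is approximated by

\[ \int_{x_0}^{x_n} f(x)\,dx \;\approx\; \sum_{i=1}^{n} \frac{f(x_{i-1}) + f(x_i)}{2}\,\bigl(x_i - x_{i-1}\bigr). \]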
Improvement efforts: Open peer review
Efforts to make fundamental improvements have ebbed and flowed since the late 1970s, when Rennie first systematically reviewed articles in thirty medical journals. According to Ana Marušić, "Nothing much has changed in 25 years". Mentorship has not been shown to have a positive effect. Worse, little evidence indicates that peer review, as presently performed, improves the quality of published papers.[19]

In response to these criticisms, other systems of peer review with various degrees of "openness" have been suggested. Open peer review describes a scholarly/scientific literature concept and process, central to which are varying degrees of transparency and disclosure of the identities of those reviewing scientific publications. The concept thus represents a departure from, and an alternative to, the incumbent anonymous peer review process, in which non-disclosure of these identities toward the public, and toward the authors of the work under review, is the default practice. The open peer review concept appears to constitute a response to modern criticisms of the incumbent system, and its emergence may therefore be partially attributed to those criticisms.
Starting in the 1990s, several scientific journals (including the high-impact journal Nature in 2006) started experiments with hybrid peer review processes, often allowing open peer review in parallel with the traditional model. The initial evidence of the effects of open peer review was mixed. Identifying reviewers to the authors does not negatively affect, and may even improve, the quality of reviews, the recommendation regarding publication, the tone, and the time spent on reviewing; however, more of those who are invited to review decline to do so.[84][85] Informing reviewers that their signed reviews might be posted on the web and made available to the wider public also did not have a negative impact on the quality of reviews and recommendations regarding publication, but it did lead to more time spent on reviewing, as well as a higher reviewer decline rate.[86] These results suggest that open peer review is feasible and does not lead to poorer-quality reviews, although the benefits have to be balanced against the potential increase in review time and the higher decline rates among invited reviewers.
Throughout the 2000s academic journals based solely on the concept of open peer review were launched, such as Philica. An extension of peer review beyond the date of publication is open peer commentary, whereby expert commentaries are solicited on published articles and the authors are encouraged to respond.
The traditional anonymous peer review has been criticized for its lack of accountability, the possibility of abuse by reviewers or by those who manage the peer review process (that is, journal editors),[87] its possible bias, and its inconsistency,[88] alongside other flaws.[89][90] Both processes are intended to subject scholarly publications to the scrutiny of others who are experts in the same field.
A number of reputable medical publishers have trialed the Open Peer Review concept. The first open peer review trial was conducted by The Medical Journal of Australia (MJA) in cooperation with the University of Sydney Library, from March 1996 to June 1997. In that study 56 research articles accepted for publication in the MJA were published online together with the peer reviewers' comments; readers could email their comments and the authors could amend their articles further before print publication of the article.[91] The investigators concluded that the process had modest benefits for authors, editors and readers.
Early era: 1996–2000
In 1996, the Journal of Interactive Media in Education[92] launched using open peer review.[93] Reviewers' names are made public and they are therefore accountable for their review, but they also have their contribution acknowledged. Authors have the right of reply, and other researchers have the chance to comment prior to publication. As of February 2013, the Journal of Interactive Media in Education no longer uses open peer review.[94]

In 1997, the Electronic Transactions on Artificial Intelligence was launched as an open access journal by the European Coordinating Committee for Artificial Intelligence. This journal used a two-stage review process. In the first stage, papers that passed a quick screen by the editors were immediately published on the Transactions' discussion website for on-line public discussion during a period of at least three months, where the contributors' names were made public except in exceptional cases. At the end of the discussion period, the authors were invited to submit a revised version of the article, and anonymous referees decided whether the revised manuscript would be accepted by the journal or not, but without any option for the referees to propose further changes. The last issue of this journal appeared in 2001.
In 1999, the open access journal Journal of Medical Internet Research[95] was launched, which from its inception decided to publish the names of the reviewers at the bottom of each published article. Also in 1999, the British Medical Journal[96] moved to an open peer review system, revealing reviewers' identities to the authors (but not the readers),[97] and in 2000, the medical journals in the open access BMC series[98] published by BioMed Central, launched using open peer review. As with the BMJ, the reviewers' names are included on the peer review reports. In addition, if the article is published the reports are made available online as part of the 'pre-publication history'.
Several of the other journals published by the BMJ Group allow optional open peer review,[99] as does PLoS Medicine, published by the Public Library of Science.[100][101] The BMJ's Rapid Responses[102] allow ongoing debate and criticism following publication.[103]
Recent era: 2001–present
Atmospheric Chemistry and Physics (ACP), an open access journal launched in 2001 by the European Geosciences Union, has a two-stage publication process.[46] In the first stage, papers that pass a quick screen by the editors are immediately published on the Atmospheric Chemistry and Physics Discussions (ACPD) website. They are then subject to interactive public discussion alongside formal peer review. Referees' comments (either anonymous or attributed), additional short comments by other members of the scientific community (which must be attributed) and the authors' replies are also published in ACPD. In the second stage, the peer-review process is completed and, if the article is formally accepted by the editors, the final revised papers are published in ACP. The success of this approach is reflected in Thomson Reuters ranking ACP as the top journal in the field of Meteorology & Atmospheric Sciences.[104]

In June 2006, Nature launched an experiment in parallel open peer review: some articles that had been submitted to the regular anonymous process were also made available online for open, identified public comment.[105] The results were less than encouraging; only 5% of authors agreed to participate in the experiment, and only 54% of those articles received comments.[106][107] The editors have suggested that researchers may have been too busy to take part and were reluctant to make their names public. The knowledge that articles were simultaneously being subjected to anonymous peer review may also have affected the uptake.
In 2006, a group of UK academics launched the online journal Philica, which tries to redress many of the problems of traditional peer review. Unlike in a normal journal, all articles submitted to Philica are published immediately and the review process takes place afterwards. Reviews are still anonymous, but instead of reviewers being chosen by an editor, any researcher who wishes to review an article can do so. Reviews are displayed at the end of each article, and so are used to give the reader criticism or guidance about the work, rather than to decide whether it is published or not. This means that reviewers cannot suppress ideas if they disagree with them. Readers use reviews to guide what they read, and particularly popular or unpopular work is easy to identify.
Another approach that is similar in spirit to Philica is that of a dynamic peer-review site, Naboj.[108] Unlike Philica, Naboj is not a full-fledged online journal; rather, it provides an opportunity for users to write peer reviews of preprints on arXiv. The review system is modeled on that of Amazon, and users have the opportunity to evaluate the reviews as well as the articles. That way, with a sufficient number of users and reviewers, there should be a convergence towards a higher-quality review process.
In February 2006, the journal Biology Direct was launched by BioMed Central, providing another alternative to the traditional model of peer review. If authors can find three members of the Editorial Board who will each return a report or will themselves solicit an external review, then the article will be published. As with Philica, reviewers cannot suppress publication, but in contrast to Philica, no reviews are anonymous and no article is published without being reviewed. Authors have the opportunity to withdraw their article, to revise it in response to the reviews, or to publish it without revision. If the authors proceed with publication of their article despite critical comments, readers can clearly see any negative comments along with the names of the reviewers.[109]
In 2010, the British Medical Journal began publishing signed reviewers' reports alongside accepted papers, after determining that telling reviewers that their signed reviews might be posted publicly did not significantly affect the quality of the reviews.[110]
Starting in 2013 with the launch of F1000Research, some publishers have combined open peer review with postpublication peer review by using a versioned article system. At F1000Research, articles are published before review, and invited peer review reports (and reviewer names) are published with the article as they come in.[47] Author-revised versions of the article are then linked to the original. A similar postpublication review system with versioned articles is used by Science Open and The Winnower, both launched in 2014.[48][49]
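As an illustrative sketch only, and not any publisher's actual data model or API, the versioned-article arrangement described above can be pictured as article records that carry named review reports and link each revision back to its predecessor; all field and class names below are assumptions made for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Review:
    reviewer_name: str          # open review: the reviewer is identified by name
    report: str                 # published alongside the article as it arrives
    recommendation: str

@dataclass
class ArticleVersion:
    version: int
    text: str
    reviews: list[Review] = field(default_factory=list)
    previous: "ArticleVersion | None" = None   # each revision links back to the earlier version

# Version 1 is published before review; reports accumulate; version 2 links to version 1.
v1 = ArticleVersion(version=1, text="Initial submission")
v1.reviews.append(Review("A. Referee", "Methods unclear in section 2.", "approved with reservations"))
v2 = ArticleVersion(version=2, text="Revised after review", previous=v1)
```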
In 2014, Life implemented an open peer review system,[111] under which the peer-review reports and authors' responses are published as an integral part of the final version of each article.
An extension of peer review beyond the date of publication is Open Peer Commentary, whereby expert commentaries are solicited on published articles, and the authors are encouraged to respond. In the summer of 2009, Kathleen Fitzpatrick explored open peer review and commentary in her book, Planned Obsolescence.
Another form of "open peer review" is community-based pre-publication peer-review, where the review process is open for everybody to join.[citation needed]
Peer review of government policy
The technique of peer review is also used to improve government policy. In particular, the European Union uses it as a tool in the 'Open Method of Co-ordination' of policies in the fields of employment and social inclusion. The United Nations Economic Commission for Europe, through UNECE Environmental Performance Reviews, uses peer review to evaluate the progress made by its member countries in improving their environmental policies.

A program of peer reviews in active labour market policy[112] started in 1999, and was followed in 2004 by one in social inclusion.[113] Each program sponsors about eight peer review meetings each year, in which a 'host country' lays a given policy or initiative open to examination by half a dozen other countries and the relevant European-level NGOs. The meetings usually take place over two days and include visits to local sites where the policy can be seen in operation. Each meeting is preceded by the compilation of an expert report on which participating 'peer countries' submit comments. The results are published on the web.
The State of California is the only U.S. state to mandate scientific peer review. In 1997, the Governor of California signed into law Senate Bill 1320 (Sher), Chapter 295, Statutes of 1997, which mandates that, before any CalEPA Board, Department, or Office adopts a final version of a rule-making, the scientific findings, conclusions, and assumptions on which the proposed rule is based must be submitted for independent external scientific peer review. This requirement is incorporated into the California Health and Safety Code Section 57004.
Further information: U.S. Government peer review policies
Medical peer review
For more details on this topic, see Medical peer review.
Medical peer review may refer to clinical peer review, or the peer evaluation of clinical teaching skills for both physicians and nurses,[114][115] or scientific peer review of journal articles, or to a secondary round of peer review for the clinical value of articles concurrently published in medical journals.[116] Moreover, "medical peer review" has been used by the American Medical Association to refer not only to the process of improving quality and safety in health care organizations, but also to the process of rating clinical behavior or compliance with professional society membership standards.[117][118] Thus, the terminology has poor standardization and specificity, particularly as a database search term.

See also
- Academic authorship
- Academic bias
- Academic conference
- Academic journal
- Abstract management
- Adversarial review
- Collaborative document review
- Coercive citation
- Interdisciplinary peer review
- Journal club
- Objectivity (philosophy)
- Open peer commentary
- Peer group
- Publication bias
- Reporting bias
- Scholarly method
- Software peer review
- Sternberg peer review controversy
- Technical peer review
References
- Hirschauer, S. (2010). "Editorial judgements: A praxeology of 'voting' in peer review". Social Studies of Science 40 (1): 71–103. doi:10.1177/0306312709335405.
- http://www.ruor.uottawa.ca
- http://www.ruor.uottawa.ca
- Rescuing Science from Politics: Regulation and the Distortion of Scientific Research
- On Being a Scientist, National Academies Press
- The Origin of the Scientific Journal and the Process of Peer Review, House of Commons Select Committee Report
- Google Books
- [1]
- Gould, T.P.H. (2012). Do We Still Need Peer Review?. The Scarecrow Press.
- Biagioli, M. (2002). "From book censorship to academic peer review". Emergences 12 (1): 11–45. doi:10.1080/1045722022000003435.
- Spier, R. (2002). "The history of the peer review process". TRENDS in Biotechnology 20 (8): 357–358. doi:10.1016/S0167-7799(02)01985-6. PMID 12127284.
- Rip, A. (1985). "Commentary: Peer Review is Alive and Well in the United States". Science, Technology, and Human Values 10 (3): 82–86. doi:10.1177/016224398501000310.
- Benos, Dale J. et al. (2007). "The Ups and Downs of Peer Review". Advances in Physiology Education 31 (2): 145–152. doi:10.1152/advan.00104.2006. PMID 17562902.
p. 145 – Scientific peer review has been defined as the evaluation of research findings for competence, significance, and originality by qualified experts. These peers act as sentinels on the road of scientific discovery and publication.
- Jump up ^ "Benefits and Burdens of Peer-Review". From the Editor. BioTechniques 58 (1). January 2015. p. 5.
- Jump up ^ Spier, Ray (2002). "The history of the peer-review process". Trends in Biotechnology 20 (8): 357–8. doi:10.1016/S0167-7799(02)01985-6. PMID 12127284.
- Jump up ^ Pontille, David and Torny, Didier (2014). "From Manuscript Evaluation to Article Valuation: The Changing Technologies of Journal Peer Review". Human Studies 38: 57. doi:10.1007/s10746-014-9335-z.
- Jump up ^ Kennefick, Daniel (September 2005). "Einstein Versus the Physical Review". Physics Today 58 (9): 43–48. Bibcode:2005PhT....58i..43K. doi:10.1063/1.2117822.
- Jump up ^ "Coping with peer rejection". Nature 425 (6959): 645. October 16, 2003. Bibcode:2003Natur.425..645.. doi:10.1038/425645a. PMID 14562060.
- ^ Jump up to: a b Couzin-Frankel, J. (2013). "Secretive and Subjective, Peer Review Proves Resistant to Study". Science 341 (6152): 1331. doi:10.1126/science.341.6152.1331. PMID 24052283.
- Jump up ^ Dans, PE (1993). "Clinical peer review: burnishing a tarnished image" (PDF). Ann. Intern. Med. 118 (7): 566–8. doi:10.7326/0003-4819-118-7-199304010-00014. PMID 8442628.
- Jump up ^ Milgrom P, Weinstein P, Ratener P, Read WA, Morrison K; Weinstein; Ratener; Read; Morrison (1978). "Dental Examinations for Quality Control: Peer Review versus Self-Assessment". Am. J. Public Health 68 (4): 394–401. doi:10.2105/AJPH.68.4.394. PMC 1653950. PMID 645987.
- Jump up ^ "AICPA Peer Review Manual". American Institute of CPAs. Retrieved October 4, 2010.
- Jump up ^ 2012 Peer Review Program Manual
- Jump up ^ "Peer Review". UK Legal Services Commission. Retrieved October 4, 2010.
- Jump up ^ "Peer Review Ratings". Martindale. Retrieved October 4, 2010.
- ^ Jump up to: a b "Peer Review Panels – Purpose and Process" (PDF). USDA Forest Service. February 6, 2006. Retrieved October 4, 2010.
- ^ Jump up to: a b Sims Gerald K. (1989). "Student Peer Review in the Classroom: A Teaching and Grading Tool" (PDF). Journal of Agronomic Education 18: 105–108.
The review process was double-blind to provide anonymity for both authors and reviewers, but was otherwise handled in a fashion similar to that used by scientific journals
- Liu, Jianguo; Pysarchik, Dawn Thorndike; Taylor, William W. (2002). "Peer Review in the Classroom" (PDF). BioScience 52 (9): 824–829. doi:10.1641/0006-3568(2002)052[0824:PRITC]2.0.CO;2.
- "History of the journal Nature: Timeline". Macmillan Publishers Limited. 2013. Retrieved 12 November 2013.
- The NASA study of arsenic-based life was fatally flawed, say scientists. – Slate Magazine
- Cohen, Patricia (August 23, 2010). "For Scholars, Web Changes Sacred Rite of Peer Review". The New York Times.
- Arnold, Gordon B. (2003). "University presses". In James W. Guthrie. Encyclopedia of Education 7 (2nd ed.). New York: Macmillan Reference USA. p. 2601. ISBN 0-02-865601-6.
- "AAUP Membership Benefits and Eligibility". Association of American University Presses. Retrieved February 2, 2008.
- Lawrence O'Gorman (January 2008). "The (Frustrating) State of Peer Review" (PDF). IAPR Newsletter 30 (1): 3–5.
- Samuel M. Schwartz, Donald W. Slater, Fred P. Heydrick, and Gillian R. Woolett (September 1995). "A Report of the AIBS Peer-Review Process for the US Army's 1994 Breast Cancer Initiative". BioScience 45 (8): 558–563. doi:10.1093/bioscience/45.8.558. JSTOR 1312702.
- Cressey, Daniel (2014). "Journals weigh up double-blind peer review". Nature News. doi:10.1038/nature.2014.15564. Retrieved 15 November 2014.
- Action Potential: Double-blind peer review?
- "Editorial: Working double-blind". Nature 451 (7179): 605–6. February 2008. Bibcode:2008Natur.451R.605. doi:10.1038/451605b. PMID 18256621.
- Mainguy G, Motamedi MR, Mietchen D (September 2005). "Peer review—the newcomers' perspective". PLoS Biol. 3 (9): e326. doi:10.1371/journal.pbio.0030326. PMC 1201308. PMID 16149851.
- "Policy on Referencing Data in and Archiving Data for AGU Publications". American Geophysical Union. 2012. Retrieved 2012-09-08.
The following policy has been adopted for AGU publications in order to ensure that they can effectively and efficiently perform an expanded role in making the underlying data for articles available to researchers now and in the future.
- This policy was first adopted by the AGU Publications Committee in November 1993 and then revised March 1994, December 1995, October 1996.
- See also AGU Data Policy by Bill Cook. April 4, 2012.
- Jump up ^ "Data Management & Sharing Frequently Asked Questions". National Science Foundation. November 30, 2010. Retrieved 2012-09-08.
- Jump up ^ Reagan W. Moore, Arcot Rajasekar, Michael Wan (2006-03-13). "Data Grids, Digital Libraries, and Persistent Archives: An Integrated Approach to Sharing, Publishing, and Archiving Data" (PDF). Proceedings of the IEEE 93: 578–588. doi:10.1109/JPROC.2004.842761. Retrieved 2014-02-02.
- Jump up ^ Koonin, Eugene (2006). "Reviving a culture of scientific debate". Nature. doi:10.1038/nature05005.
- Jump up ^ J. Scott Armstrong (1982). "Barriers to Scientific Contributions: The Author's Formula" (PDF). Behavioral and Brain Sciences 5 (2): 197–199. doi:10.1017/S0140525X00011201.
- Jump up ^ J. Scott Armstrong (1982). "Research on Scientific Journals: Implications for Editors and Authors" (PDF). Journal of Forecasting 1: 83–104. doi:10.1002/for.3980010109.
- ^ Jump up to: a b Pöschl, Ulrich (2012). "Multi-stage open peer review: scientific evaluation integrating the strengths of traditional peer review with the virtues of transparency and self-regulation". Front. Comput. Neurosci. 6. doi:10.3389/fncom.2012.00033.
- ^ Jump up to: a b "Publish First, Ask Questions Later". Wired. July 23, 2013. Retrieved 2015-01-13.
- ^ Jump up to: a b "The recipe for our (not so) secret Post-Publication Peer Review sauce!". December 8, 2014. Retrieved 2015-01-13.
- ^ Jump up to: a b "The Winnower: An Interview with Josh Nicholson". December 8, 2014. Retrieved 2015-01-13.
- Jump up ^ "F1000Research peer-reviewed articles now visible on PubMed and PubMed Central". STM Publishing News. December 12, 2013. Retrieved 2015-01-13.
- Jump up ^ Rennie D, Flanagin A, Smith R, Smith J (March 19, 2003). "Fifth International Congress on Peer Review and Biomedical Publication: Call for Research". JAMA 289 (11): 1438. doi:10.1001/jama.289.11.1438.
- Jump up ^ Horton, Richard (2000). "Genetically modified food: consternation, confusion, and crack-up". MJA 172 (4): 148–9. PMID 10772580.
- Bradley (1981)
- "British scientists exclude 'maverick' colleagues, says report" (2004). EurekAlert. Public release date: August 16, 2004.
- Higgs, Robert (May 7, 2007). "Peer Review, Publication in Top Journals, Scientific Consensus, and So Forth". Independent Institute. Retrieved April 9, 2012.
- Brian Martin, "Suppression Stories" (1997) in Fund for Intellectual Dissent. ISBN 0-646-30349-X.
- See also Juan Miguel Campanario, "Rejecting Nobel class articles and resisting Nobel class discoveries", cited in Nature, October 16, 2003, Vol 425, Issue 6959, p. 645.
- Campanario, Juan Miguel; Martin, Brian (Fall 2004). "Challenging dominant physics paradigms". Journal of Scientific Exploration 18 (3): 421–38. Bibcode:2008atcr.book...11C.
- "... they may strongly resist a rival's hypothesis that challenges their own." Malice's Wonderland: Research Funding and Peer Review. Journal of Neurobiology 14, No. 2, pp. 95–112 (1983).
- Francisco Grimaldo and Mario Paolucci (14 March 2013). "A simulation of disagreement for control of rational cheating in peer review". Advances in Complex Systems 16: 1350004. doi:10.1142/S0219525913500045.
- See also: Sophie Petit-Zeman, "Trial by peers comes up short" (2003), The Guardian, Thursday January 16, 2003.
- H. Fang. "Peer review and over-competitive research funding fostering mainstream opinion to monopoly", Scientometrics 87 (2), pp. 293–301 (2011).
- J. Scott Armstrong, The Wharton School, University of Pennsylvania. "Democracy Does Not Make Good Science: On Reforming Review Procedures for Management Science Journals" (PDF). Interfaces 28: 88–91.
- Afifi, M. "Reviewing the "Letter-to-editor" section in the Bulletin of the World Health Organization, 2000–2004". Bulletin of the World Health Organization.
- "Peer review is not currently designed to detect deception, nor does it guarantee the validity of research findings." Lee, K. (2006). "Increasing accountability". Nature. doi:10.1038/nature05007.
- W. G. Baxt, J. F. Waeckerle, J. A. Berlin & M. L. Callaham (September 1998). "Who reviews the reviewers? Feasibility of using a fictitious manuscript to evaluate peer reviewer performance". Annals of Emergency Medicine 32 (3 Pt 1): 310–317. doi:10.1016/S0196-0644(98)70006-X. PMID 9737492.
- Rothwell, P. M.; Martyn, CN (2000). "Reproducibility of peer review in clinical neuroscience: Is agreement between reviewers any greater than would be expected by chance alone?". Brain 123 (9): 1964–9. doi:10.1093/brain/123.9.1964. PMID 10960059.
- The Peer Review Process
- Alison McCook (February 2006). "Is Peer Review Broken?". The Scientist.
- Historians on the Hot Seat
- ^ Jump up to: a b Weiss, Rick. 2005. Many scientists admit to misconduct: Degrees of deception vary in poll. Washington Post. June 9, 2005. page A03.[2]
- Afifi, M. "Reviewing the 'Letter-to-editor' section in the Bulletin of the World Health Organization, 2000–2004". Bulletin of the World Health Organization.
- Michaels, David. "Politicizing Peer Review: Scientific Perspective". In Wagner, Wendy; Steinzor, Rena, eds. Rescuing Science from Politics: Regulation and the Distortion of Scientific Research. Cambridge University Press, 2006, p. 224. ISBN 978-0-521-85520-4.
- Tai, M. M. (1994). "A mathematical model for the determination of total area under glucose tolerance and other metabolic curves". Diabetes Care 17 (2): 152–4. doi:10.2337/diacare.17.2.152. PMID 8137688.
- Purgathofer, Werner. "Beware of VIDEA!". tuwien.ac.at. Technical University of Vienna. Retrieved April 29, 2014.
- Labbé, Cyril; Labbé, Dominique (2013). "Duplicate and fake publications in the scientific literature: how many SCIgen papers in computer science?". Scientometrics (Springer) 94 (1): 379–396. doi:10.1007/s11192-012-0781-y.
- Oransky, Ivan (February 24, 2014). "Springer, IEEE withdrawing more than 120 nonsense papers". retractionwatch.com. Retrieved April 29, 2014.
- de Gloucester, Paul Colin (2013). "Referees Often Miss Obvious Errors in Computer and Electronic Publications". Accountability in Research: Policies and Quality Assurance (Taylor & Francis Group) 20 (3): 143–166. doi:10.1080/08989621.2013.788379.
- Dawson, K. (December 23, 2008). "Software-Generated Paper Accepted At IEEE Conference". slashdot.org. Retrieved April 29, 2014.
- Hatta, Masayuki (December 24, 2008). "IEEEカンファレンス、自動生成のニセ論文をアクセプト" [IEEE conference accepts an auto-generated fake paper] (in Japanese). slashdot.jp. OSDN Corporation. Retrieved April 29, 2014.
- Ziegler, Peter-Michael (December 26, 2008). "Dr. Herbert Schlangemann - oder die Geschichte eines pseudowissenschaftlichen Nonsens-Papiers" [Dr. Herbert Schlangemann, or the story of a pseudoscientific nonsense paper] (in German). heise.de. Heise Zeitschriften Verlag. Retrieved April 29, 2014.
- Jackson, A. "Peer review – loopholes, hackers and scams". Australian Veterinary Association. Retrieved April 28, 2015.
- Ferguson, C.; Marcus, A.; Oransky, I. (2014). "Publishing: The peer-review scam". Nature 515 (7528).
- Van Rooyen, S.; Godlee, F.; Evans, S.; Black, N.; Smith, R. (1999). "Effect of open peer review on quality of reviews and on reviewers' recommendations: a randomised trial". BMJ 318 (7175): 23–7. doi:10.1136/bmj.318.7175.23. PMC 27670. PMID 9872878.
- Walsh, Elizabeth; Rooney, Maeve; Appleby, Louis; Wilkinson, Greg (2000). "Open peer review: a randomised controlled trial". The British Journal of Psychiatry 176 (1): 47–51. doi:10.1192/bjp.176.1.47. PMID 10789326.
- van Rooyen, Susan; Delamothe, Tony; Evans, Stephen J. W. (16 November 2010). "Effect on peer review of telling reviewers that their signed reviews might be posted on the web: randomised controlled trial". British Medical Journal 341: c5729. doi:10.1136/bmj.c5729.
- Bingham, C. "Peer review and the ethics of internet publishing". In Hudson Jones, A.; McLellan, F., eds. Ethical Issues in Biomedical Publication. Baltimore: Johns Hopkins University Press, 2000, pp. 85–111.
- Rothwell, Peter M.; Martyn, Christopher N. (2000). "Reproducibility of peer review in clinical neuroscience: Is agreement between reviewers any greater than would be expected by chance alone?". Brain 123 (9): 1964–1969. doi:10.1093/brain/123.9.1964. PMID 10960059.
- "The Peer Review Process" (PDF). Retrieved 4 January 2012.
- McCook, Alison (February 2006). "Is Peer Review Broken?". The Scientist.
- Bingham, C. M.; Higgins, G.; Coleman, R.; Van Der Weyden, M. "The Medical Journal of Australia internet peer review study". The Lancet 1998; 358: 441–445.
- Jump up ^ "Journal of Interactive Media in Education". Jime.open.ac.uk. Retrieved 4 January 2012.
- Jump up ^ http://www-jime.open.ac.uk/about.html#lifecycle
- Jump up ^ Editorial Policies | Journal of Interactive Media in Education
- Jump up ^ "JMIR Home". Jmir.org. Retrieved 4 January 2012.
- Jump up ^ "bmj.com: BMJ – Helping doctors make better decisions". BMJ. Retrieved 4 January 2012.
- Jump up ^ "Opening up BMJ peer review – Smith 318 (7175): 4". BMJ. Retrieved 4 January 2012.
- Jump up ^ "BMC series". Biomedcentral.com. Retrieved 4 January 2012.
- Jump up ^ Smith. R. (1999). Opening up BMJ peer review. BMJ, 318, 4-5. PMID, online
- Jump up ^ "Public Library of Science". Plos.org. 28 September 2011. Retrieved 4 January 2012.
- Jump up ^ "PLoS Medicine: A Peer-Reviewed, Open-Access Journal". Journals.plos.org. 27 March 2009. doi:10.1371/journal.pmed.0030442. Retrieved 4 January 2012.
- Jump up ^ bmj.com Rapid Responses published in the past day[dead link]
- Jump up ^ Tony Delamothe, web editor bmj.com, Richard Smith, editor. "Twenty thousand conversations – Delamothe and Smith 324 (7347): 1171". BMJ. Retrieved 4 January 2012.
- http://www.atmospheric-chemistry-and-physics.net/news_acp_jcr2007_attachment.pdf
- Peer Review Trial [dead link]
- Nature. "Overview: Nature's trial of open peer review". Nature.com. Retrieved 4 January 2012.
- Nature (21 December 2006). "Peer review and fraud: Article". Nature. Retrieved 4 January 2012.
- "Naboj Dynamical Peer Review". Naboj.com. Retrieved 4 January 2012.
- http://www.biology-direct.com/info/about/
- van Rooyen, Susan; Delamothe, Tony; Evans, Stephen J. W. (2010). "Effect on peer review of telling reviewers that their signed reviews might be posted on the web: randomised controlled trial". British Medical Journal 341: c5729. doi:10.1136/bmj.c5729.
- "Editorial". Retrieved 2014-06-29.
- Mutual Learning Programme – Peer Reviews
- Peer Review and Assessment in Social Inclusion – Evaluations par les pairs
- Medschool.ucsf.edu
- Ludwick, R.; Dieckman, B. C.; Herdtner, S.; Dugan, M.; Roche, M. (November–December 1998). "Documenting the scholarship of clinical teaching through peer review". Nurse Educ. 23 (6): 17–20. doi:10.1097/00006223-199811000-00008. PMID 9934106.
- Haynes, R. B.; Cotoi, C.; Holland, J.; et al. (2006). "Second-order peer review of the medical literature for clinical practitioners". JAMA 295 (15): 1801–8. doi:10.1001/jama.295.15.1801. PMID 16622142.
- (page 131)
- Ama-assn.org
General references and further reading
- Bradley, James V. (1981). "Pernicious Publication Practices". Bulletin of the Psychonomic Society 18: 31–34. doi:10.3758/bf03333562.
- Godlee, Fiona; Jefferson, Tom, eds. (2003). Peer Review in Health Sciences (2nd ed.). London: BMJ Books. ISBN 0-7279-1685-8.
- Hames, Irene (2007). Peer Review and Manuscript Management in Scientific Journals: guidelines for good practice. Malden, MA: Blackwell. ISBN 978-1-4051-3159-9.
- Shatz, David, ed. (2004). Peer Review: A Critical Inquiry. Lanham, Md.: Rowman & Littlefield. ISBN 9780742514348.
- de Vries, Jaap (2001). "Peer Review: The Holy Cow of Science". In Frederiksson, E. H. (ed.). A Century of Science Publishing. IOS Press. ISBN 1-58603-148-1.
- Weller, Ann C. (2001). Editorial Peer Review: its Strengths and Weaknesses. Medford, New Jersey: American Society for Information Science and Technology. ISBN 1-57387-100-1. (extensive bibliography).
- "I don't know what to believe… – Making Sense of Science Stories" (PDF). Sense About Science. October 31, 2005.
External links
- "Peer review debate". Nature. June 2006.
- "Peer-to-Peer blog". Nature. April 2012.
- Maggie Koerth-Baker (April 22, 2012). "Meet Science: What is "peer review"?". BoingBoing.
- "Fifth International Congress on Peer Review and Biomedical Publication". American Medical Association.
- Walter Noll (2009) The Future of Scientific Publication
- Hans Ulrik Riisgård. "Peer review system". Marine Ecology Progress Series. Inter-research Science Center.
- Eugene Garfield. "A Difficult Balance: Editorial Peer Review in Medicine". University of Pennsylvania. (Bibliography)
- "Peer Review in Science and Medicine: Investigating Peer Review as Scientific Object of study".
- Ulrich Pöschl. "Quality Assurance & Peer Review in Open Access" (PDF).
- "Online Social Network Seeks to Overhaul Peer Review in Scientific Publishing" (January 23, 2012).