Quantifying Research Integrity [electronic resource] / by Michael Seadle.

By: Seadle, Michael [author.].
Contributor(s): SpringerLink (Online service).
Material type: Book
Series: Synthesis Lectures on Information Concepts, Retrieval, and Services
Publisher: Cham : Springer International Publishing : Imprint: Springer, 2017
Edition: 1st ed. 2017.
Description: XIX, 121 p. online resource.
Content type: text
Media type: computer
Carrier type: online resource
ISBN: 9783031023064.
Subject(s): Computer networks | Computer Communication Networks
Additional physical formats: Printed edition: No title; Printed edition: No title
DDC classification: 004.6
Online resources: Click here to access online
Contents:
Preface -- Acknowledgments -- Introduction -- State of the Art -- Quantifying Plagiarism -- Quantifying Data Falsification -- Quantifying Image Manipulation -- Applying the Metrics -- Bibliography -- Author's Biography.
In: Springer Nature eBook
Summary: Institutions typically treat research integrity violations as black and white, right or wrong. The result is that the wide range of grayscale nuances separating accident, carelessness, and bad practice from deliberate fraud and malpractice often gets lost. This lecture looks at how to quantify the grayscale range in three kinds of research integrity violations: plagiarism, data falsification, and image manipulation. Quantification works best with plagiarism, because the essential one-to-one matching algorithms are well known and established tools exist for detecting when matches occur. Questions remain, however, about how many matching words of what kind, in what location, and in which discipline constitute reasonable suspicion of fraudulent intent. Different disciplines take different perspectives on quantity and location. Quantification is harder with data falsification, because the original data are often not available and because experimental replication remains surprisingly difficult. The same is true of image manipulation, where tools exist for detecting certain kinds of manipulation but are also easily defeated. This lecture also looks at how to prevent violations of research integrity from a pragmatic viewpoint, and at what steps institutions and publishers can take to discourage problems beyond the usual ethical admonitions. There are no simple answers, but two measures can help: the systematic use of detection tools and a requirement to submit original data and images. These alone do not suffice, but they represent a start. The scholarly community needs a better awareness of the complexity of research integrity decisions. Only an open and widespread international discussion can bring about a consensus on where the boundary lines are and when grayscale problems shade into black. One goal of this work is to move that discussion forward.
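As an illustration of the one-to-one text matching the summary refers to, the short Python sketch below computes word n-gram overlap between a suspect passage and a source passage. It is a minimal, assumed example for orientation only: the function names, the choice of 5-word n-grams, and the sample sentences are this sketch's assumptions, not methods or values taken from the book or from any particular detection tool.

# Illustrative sketch only: word n-gram overlap is one common building block
# of one-to-one text-matching tools. The n-gram length and tokenization here
# are arbitrary assumptions, not values from the book.
import re

def ngrams(text: str, n: int = 5) -> set[tuple[str, ...]]:
    """Return the set of lowercase word n-grams in a text."""
    words = re.findall(r"[a-z0-9']+", text.lower())
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_ratio(suspect: str, source: str, n: int = 5) -> float:
    """Fraction of the suspect text's n-grams that also appear in the source."""
    suspect_grams = ngrams(suspect, n)
    if not suspect_grams:
        return 0.0
    return len(suspect_grams & ngrams(source, n)) / len(suspect_grams)

if __name__ == "__main__":
    source = "Institutions typically treat research integrity violations as black and white."
    suspect = "Many institutions typically treat research integrity violations as black and white issues."
    # A high ratio only signals matching text; quantity, location, discipline,
    # and intent still require human judgment, which is the book's central point.
    print(f"n-gram overlap: {overlap_ratio(suspect, source):.2f}")

Such a ratio says nothing by itself about fraudulent intent; it only quantifies how much text matches, which is exactly the grayscale question the lecture addresses.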
No physical items for this record
