000 03506nam a22005055i 4500
001 978-3-031-01835-0
003 DE-He213
005 20240730164114.0
007 cr nn 008mamaa
008 220601s2010 sz | s |||| 0|eng d
020 _a9783031018350
_9978-3-031-01835-0
024 7 _a10.1007/978-3-031-01835-0
_2doi
050 4 _aTK5105.5-5105.9
072 7 _aUKN
_2bicssc
072 7 _aCOM043000
_2bisacsh
072 7 _aUKN
_2thema
082 0 4 _a004.6
_223
100 1 _aNaumann, Felix.
_eauthor.
_4aut
_4http://id.loc.gov/vocabulary/relators/aut
_982152
245 1 3 _aAn Introduction to Duplicate Detection
_h[electronic resource] /
_cby Felix Naumann, Melanie Herschel.
250 _a1st ed. 2010.
264 1 _aCham :
_bSpringer International Publishing :
_bImprint: Springer,
_c2010.
300 _aIX, 77 p.
_bonline resource.
336 _atext
_btxt
_2rdacontent
337 _acomputer
_bc
_2rdamedia
338 _aonline resource
_bcr
_2rdacarrier
347 _atext file
_bPDF
_2rda
490 1 _aSynthesis Lectures on Data Management,
_x2153-5426
505 0 _aData Cleansing: Introduction and Motivation -- Problem Definition -- Similarity Functions -- Duplicate Detection Algorithms -- Evaluating Detection Success -- Conclusion and Outlook -- Bibliography.
520 _aWith the ever-increasing volume of data, data quality problems abound. Multiple, yet different, representations of the same real-world objects in data, so-called duplicates, are among the most intriguing data quality problems. The effects of such duplicates are detrimental; for instance, bank customers can obtain duplicate identities, inventory levels are monitored incorrectly, and catalogs are mailed multiple times to the same household. Automatically detecting duplicates is difficult: First, duplicate representations are usually not identical but differ slightly in their values. Second, in principle all pairs of records should be compared, which is infeasible for large volumes of data. This lecture examines closely the two main components to overcome these difficulties: (i) Similarity measures are used to automatically identify duplicates when comparing two records. Well-chosen similarity measures improve the effectiveness of duplicate detection. (ii) Algorithms are developed to perform on very large volumes of data in search of duplicates. Well-designed algorithms improve the efficiency of duplicate detection. Finally, we discuss methods to evaluate the success of duplicate detection. Table of Contents: Data Cleansing: Introduction and Motivation / Problem Definition / Similarity Functions / Duplicate Detection Algorithms / Evaluating Detection Success / Conclusion and Outlook / Bibliography.
650 0 _aComputer networks.
_931572
650 0 _aData structures (Computer science).
_98188
650 0 _aInformation theory.
_914256
650 1 4 _aComputer Communication Networks.
_982153
650 2 4 _aData Structures and Information Theory.
_931923
700 1 _aHerschel, Melanie.
_eauthor.
_4aut
_4http://id.loc.gov/vocabulary/relators/aut
_982154
710 2 _aSpringerLink (Online service)
_982155
773 0 _tSpringer Nature eBook
776 0 8 _iPrinted edition:
_z9783031007071
776 0 8 _iPrinted edition:
_z9783031029639
830 0 _aSynthesis Lectures on Data Management,
_x2153-5426
_982156
856 4 0 _uhttps://doi.org/10.1007/978-3-031-01835-0
912 _aZDB-2-SXSC
942 _cEBK
999 _c85305
_d85305