Entity Resolution and Information Quality

Author: John R. Talburt
Publisher: Elsevier
Total Pages: 254
Release: 2011-01-14
Genre: Computers
ISBN: 0123819733

Entity Resolution and Information Quality presents key topics and definitions and clarifies the often confusing terminology surrounding entity resolution and information quality. It takes a broad view of IQ, including the six-domain framework and the skills defined by the International Association for Information and Data Quality (IAIDQ). The book includes chapters that cover the principles of entity resolution and the principles of information quality, along with their concepts and terminology. It also discusses the Fellegi-Sunter theory of record linkage, the Stanford Entity Resolution Framework, and the Algebraic Model for Entity Resolution, the major theoretical models that support entity resolution, and briefly covers entity-based data integration (EBDI) and its model, which extends the Algebraic Model for Entity Resolution. There is also an explanation of how three commercial ER systems operate and a description of the non-commercial, open-source system known as OYSTER. The book concludes by discussing trends in entity resolution research and practice. Students taking IT courses and IT professionals will find this book invaluable. - First authoritative reference explaining entity resolution and how to use it effectively - Provides practical system design advice to help you gain a competitive advantage - Includes a companion site with synthetic customer data for hands-on exercises, and access to a Java-based entity resolution program.
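
Among the theoretical models the book covers, the Fellegi-Sunter theory of record linkage is the easiest to make concrete: a candidate record pair is scored by summing per-field agreement and disagreement weights derived from m- and u-probabilities, and the total is compared against decision thresholds. The sketch below is a minimal illustration of that scoring idea only; the field names, probabilities, and thresholds are assumed values for the example and are not taken from the book or from OYSTER.

```python
import math

# Illustrative m/u probabilities per field (assumed values, not from the book):
# m = P(field agrees | records refer to the same entity)
# u = P(field agrees | records refer to different entities)
FIELD_PARAMS = {
    "last_name":  {"m": 0.95, "u": 0.01},
    "first_name": {"m": 0.90, "u": 0.05},
    "zip":        {"m": 0.85, "u": 0.10},
}

def fs_weight(field, agrees):
    """Fellegi-Sunter log-likelihood weight for one field comparison."""
    m, u = FIELD_PARAMS[field]["m"], FIELD_PARAMS[field]["u"]
    if agrees:
        return math.log2(m / u)            # agreement weight
    return math.log2((1 - m) / (1 - u))    # disagreement weight

def score_pair(rec_a, rec_b):
    """Sum the field weights; the total is compared to link/non-link thresholds."""
    return sum(fs_weight(f, rec_a.get(f) == rec_b.get(f)) for f in FIELD_PARAMS)

if __name__ == "__main__":
    a = {"last_name": "Talbot", "first_name": "John", "zip": "72204"}
    b = {"last_name": "Talbot", "first_name": "Jon",  "zip": "72204"}
    total = score_pair(a, b)
    # Example thresholds (also assumed): link above 6, non-link below 0, else review.
    decision = "link" if total > 6 else "non-link" if total < 0 else "clerical review"
    print(f"total weight = {total:.2f} -> {decision}")
```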

Data Matching

Author: Peter Christen
Publisher: Springer Science & Business Media
Total Pages: 279
Release: 2012-07-04
Genre: Computers
ISBN: 3642311644

Data matching (also known as record or data linkage, entity resolution, object identification, or field matching) is the task of identifying, matching, and merging records that correspond to the same entities across several databases or even within one database. Based on research in various domains including applied statistics, health informatics, data mining, machine learning, artificial intelligence, database management, and digital libraries, significant advances have been achieved over the last decade in all aspects of the data matching process, especially in improving the accuracy of data matching and its scalability to large databases. Peter Christen's book is divided into three parts: Part I, "Overview", introduces the subject by presenting several sample applications and their special challenges, as well as a general overview of a generic data matching process. Part II, "Steps of the Data Matching Process", then details its main steps, such as pre-processing, indexing, field and record comparison, classification, and quality evaluation. Part III, "Further Topics", deals with specific aspects such as privacy, real-time matching, and matching unstructured data, and briefly describes the main features of many research and open-source systems available today. By providing the reader with a broad range of data matching concepts and techniques and touching on all aspects of the data matching process, this book helps researchers as well as students specializing in data quality or data matching to familiarize themselves with recent research advances and to identify open research challenges in the area. To this end, each chapter of the book includes a final section that provides pointers to further background and research material. Practitioners will better understand the current state of the art in data matching as well as the internal workings and limitations of current systems. In particular, they will learn that it is often not feasible to simply deploy an existing off-the-shelf data matching system without substantial adaptation and customization. Such practical considerations are discussed for each of the major steps in the data matching process.
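
As a rough illustration of the steps Part II walks through (pre-processing, indexing, comparison, and classification), the sketch below runs a toy matching pipeline over two small record lists. It is a deliberately simplified, assumption-laden example: the blocking key, the similarity measure, the field names, and the threshold are all choices made for the illustration, not techniques prescribed by the book.

```python
from collections import defaultdict
from difflib import SequenceMatcher

def preprocess(rec):
    """Pre-processing: lowercase and strip the fields used for matching."""
    return {k: v.strip().lower() for k, v in rec.items()}

def block_key(rec):
    """Indexing/blocking: group records that share a simple key so only records
    in the same block are compared (an assumed key choice for the example)."""
    return (rec["surname"][:1], rec["postcode"])

def similarity(a, b):
    """Field and record comparison: average string similarity over compared fields."""
    fields = ("surname", "given", "postcode")
    return sum(SequenceMatcher(None, a[f], b[f]).ratio() for f in fields) / len(fields)

def match(db_a, db_b, threshold=0.85):
    """Classification: same-block pairs whose similarity clears the threshold match."""
    index = defaultdict(list)
    for rec in map(preprocess, db_b):
        index[block_key(rec)].append(rec)
    matches = []
    for rec in map(preprocess, db_a):
        for cand in index[block_key(rec)]:
            if similarity(rec, cand) >= threshold:
                matches.append((rec, cand))
    return matches

if __name__ == "__main__":
    db_a = [{"surname": "Smith", "given": "Amelia", "postcode": "2600"}]
    db_b = [{"surname": "Smyth", "given": "Amelia", "postcode": "2600"},
            {"surname": "Smith", "given": "Robert", "postcode": "2600"}]
    for a, b in match(db_a, db_b):
        print(a, "matches", b)
```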

Entity Resolution in the Web of Data

Author: Vassilis Christophides
Publisher: Morgan & Claypool Publishers
Total Pages: 124
Release: 2015-08-01
Genre: Computers
ISBN: 1627058044

In recent years, several knowledge bases have been built to enable large-scale knowledge sharing as well as entity-centric Web search that mixes structured data and text querying. These knowledge bases offer machine-readable descriptions of real-world entities, e.g., persons and places, published on the Web as Linked Data. However, because knowledge bases employ different information extraction tools and curation policies, they may provide multiple, complementary, and sometimes conflicting descriptions of the same real-world entities. Entity resolution aims to identify different descriptions that refer to the same entity, appearing either within or across knowledge bases. The objective of this book is to present the new entity resolution challenges stemming from the openness of the Web of data, in which entities are described by an unbounded number of knowledge bases; the semantic and structural diversity of the descriptions provided across domains, even for the same real-world entities; and the autonomy of knowledge bases in terms of the processes they adopt for creating and curating entity descriptions. The scale, diversity, and graph structuring of entity descriptions in the Web of data fundamentally challenge both how two descriptions can be effectively compared for similarity and how resolution algorithms can efficiently avoid examining all pairs of descriptions. The book covers a wide spectrum of entity resolution issues at Web scale, including basic concepts and data structures, main resolution tasks and workflows, as well as state-of-the-art algorithmic techniques and experimental trade-offs.
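
One of the challenges highlighted above, avoiding a pairwise comparison of all descriptions, is commonly addressed by blocking techniques. The sketch below shows a minimal token-blocking pass over a handful of invented entity descriptions: only descriptions sharing at least one token become candidate pairs. It is a generic illustration under assumed data, not an algorithm reproduced from the book.

```python
from collections import defaultdict
from itertools import combinations

# Toy entity descriptions keyed by an identifier (assumed data for the example).
descriptions = {
    "kb1:e1": {"name": "barack obama", "type": "person", "born": "honolulu"},
    "kb2:e7": {"name": "b. obama",     "type": "person"},
    "kb2:e9": {"name": "honolulu",     "type": "place"},
}

def tokens(desc):
    """Tokenize all attribute values of one description."""
    return {tok for value in desc.values() for tok in value.split()}

def token_blocking(descs):
    """Place each description in one block per token; only descriptions that share
    a block become candidate pairs, instead of comparing every pair of descriptions."""
    blocks = defaultdict(set)
    for eid, desc in descs.items():
        for tok in tokens(desc):
            blocks[tok].add(eid)
    candidates = set()
    for block in blocks.values():
        candidates.update(combinations(sorted(block), 2))
    return candidates

if __name__ == "__main__":
    # Prints 2 candidate pairs instead of the 3 possible pairs.
    for pair in sorted(token_blocking(descriptions)):
        print(pair)
```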

Handbook of Data Quality

Author: Shazia Sadiq
Publisher: Springer Science & Business Media
Total Pages: 440
Release: 2013-08-13
Genre: Computers
ISBN: 3642362575

The issue of data quality is as old as data itself. However, the proliferation of diverse, large-scale, and often publicly available data on the Web has increased the risk of poor data quality and misleading data interpretations. At the same time, data is now exposed at a much more strategic level, e.g., through business intelligence systems, greatly raising the stakes for individuals, corporations, and government agencies. In such settings, a lack of knowledge about data accuracy, currency, or completeness can lead to erroneous and even catastrophic results. With these changes, traditional approaches to data management in general, and data quality control specifically, are challenged. There is an evident need to incorporate data quality considerations into the whole data cycle, encompassing managerial/governance as well as technical aspects. Data quality experts from research and industry agree that a unified framework for data quality management should bring together organizational, architectural, and computational approaches. Accordingly, Sadiq structured this handbook in four parts: Part I is on organizational solutions, i.e., the development of data quality objectives for the organization and of strategies to establish the roles, processes, policies, and standards required to manage and ensure data quality. Part II, on architectural solutions, covers the technology landscape required to deploy the developed data quality management processes, standards, and policies. Part III, on computational solutions, presents effective and efficient tools and techniques related to record linkage, lineage and provenance, data uncertainty, and advanced integrity constraints. Finally, Part IV is devoted to case studies of successful data quality initiatives that highlight the various aspects of data quality in action. The individual chapters present both an overview of the respective topic in terms of historical research and/or practice and the state of the art, as well as specific techniques, methodologies, and frameworks developed by the individual contributors. Researchers and students of computer science, information systems, or business management, as well as data professionals and practitioners, will benefit most from this handbook not only by focusing on the sections relevant to their research area or practical work, but also by studying chapters that they may initially consider not directly relevant to them, as there they will learn about new perspectives and approaches.
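
To make the computational side of data quality slightly more concrete, the sketch below profiles a small table for two commonly used quality dimensions, completeness and uniqueness. The metrics, column names, and sample rows are illustrative assumptions and are not taken from the handbook.

```python
def completeness(rows, column):
    """Fraction of rows with a non-empty value in the column."""
    filled = sum(1 for r in rows if r.get(column) not in (None, ""))
    return filled / len(rows)

def uniqueness(rows, column):
    """Fraction of non-empty values in the column that are distinct."""
    values = [r[column] for r in rows if r.get(column) not in (None, "")]
    return len(set(values)) / len(values) if values else 1.0

if __name__ == "__main__":
    # Assumed sample data: three customer rows with gaps and a duplicated phone.
    customers = [
        {"id": "c1", "email": "a@example.com", "phone": "555-0100"},
        {"id": "c2", "email": "",              "phone": "555-0100"},
        {"id": "c3", "email": "c@example.com", "phone": None},
    ]
    for col in ("email", "phone"):
        print(col,
              f"completeness={completeness(customers, col):.2f}",
              f"uniqueness={uniqueness(customers, col):.2f}")
```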

Entity Information Life Cycle for Big Data

Author: John R. Talburt
Publisher: Morgan Kaufmann
Total Pages: 255
Release: 2015-04-20
Genre: Computers
ISBN: 012800665X

Entity Information Life Cycle for Big Data walks you through the ins and outs of managing entity information so you can successfully achieve master data management (MDM) in the era of big data. This book explains big data's impact on MDM and the critical role of an entity information management system (EIMS) in successful MDM. Expert authors Dr. John R. Talburt and Dr. Yinle Zhou provide a thorough background in the principles of managing the entity information life cycle, along with practical tips and techniques for implementing an EIMS, strategies for exploiting distributed processing to handle big data in an EIMS, and examples from real applications. Additional material on the theory of EIIM and on methods for assessing and evaluating EIMS performance also makes this book appropriate for use as a textbook in courses on entity and identity management, data management, customer relationship management (CRM), and related topics. - Explains the business value and impact of an entity information management system (EIMS) and directly addresses the problem of EIMS design and operation, a critical issue organizations face when implementing MDM systems - Offers practical guidance to help you design and build an EIM system that will successfully handle big data - Details how to measure and evaluate entity integrity in MDM systems and explains the principles and processes that comprise EIM - Provides an understanding of the features and functions an EIM system should have, which will assist in evaluating commercial EIM systems - Includes chapter review questions, exercises, tips, and free downloads of demonstrations that use the OYSTER open source EIM system - Executable code (Java .jar files), control scripts, and synthetic input data illustrate various aspects of the CSRUD life cycle, such as identity capture, identity update, and assertions
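
The identity capture and identity update operations mentioned in the last bullet can be pictured with a very small identity-store sketch: each incoming reference either attaches to an existing entity identifier (EID) or creates a new one. The class name, the exact-match rule, and the sample records below are assumptions made purely for illustration; the sketch does not reproduce OYSTER's matching logic or the full CSRUD life cycle.

```python
import itertools

class IdentityStore:
    """Toy entity identity store: capture attaches a reference to an existing EID
    when an assumed exact-match rule fires, otherwise it creates a new EID."""

    def __init__(self):
        self._eids = itertools.count(1)
        self._entities = {}   # eid -> list of captured references

    def _matches(self, ref, known_refs):
        # Assumed rule for the example: same name and same date of birth.
        return any(ref["name"] == k["name"] and ref["dob"] == k["dob"] for k in known_refs)

    def capture(self, ref):
        """Identity capture: return the EID the reference was attached to."""
        for eid, refs in self._entities.items():
            if self._matches(ref, refs):
                refs.append(ref)          # identity update: accumulate the new reference
                return eid
        eid = f"E{next(self._eids)}"
        self._entities[eid] = [ref]
        return eid

if __name__ == "__main__":
    store = IdentityStore()
    print(store.capture({"name": "ann lee", "dob": "1990-02-01"}))  # E1
    print(store.capture({"name": "ann lee", "dob": "1990-02-01"}))  # E1 again
    print(store.capture({"name": "bob roy", "dob": "1985-07-12"}))  # E2
```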

Unstructured Data Analysis

Author: Matthew Windham
Publisher: SAS Institute
Total Pages: 193
Release: 2018-09-14
Genre: Computers
ISBN: 1635267099

Unstructured data is the most voluminous form of data in the world, and several elements are critical for any advanced analytics practitioner who wants to use SAS software to effectively address the challenge of deriving value from that data. This book covers the five critical elements of entity extraction, unstructured data, entity resolution, entity network mapping and analysis, and entity management. By following examples of how to apply processing to unstructured data, readers will derive long-term value from this book as they enhance the value they realize from SAS products.
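
The chain the book describes, extracting entities from unstructured text and then resolving them, can be sketched generically as follows. The example below uses plain Python rather than SAS, and its regular expression, normalization rule, and sample notes are assumptions made only to illustrate the extraction-then-resolution idea.

```python
import re
from collections import defaultdict

# Assumed pattern: capitalized word pairs stand in for person-like mentions.
MENTION = re.compile(r"\b([A-Z][a-z]+ [A-Z][a-z]+)\b")

def extract(texts):
    """Entity extraction: pull candidate mentions out of unstructured text."""
    return [m for t in texts for m in MENTION.findall(t)]

def resolve(mentions):
    """Entity resolution: group mentions whose normalized (lowercased) form is identical."""
    clusters = defaultdict(list)
    for m in mentions:
        clusters[m.lower()].append(m)
    return clusters

if __name__ == "__main__":
    notes = ["Spoke with Jane Doe about the renewal.",
             "Jane Doe confirmed the order by phone.",
             "Follow-up assigned to John Smith."]
    for key, mentions in resolve(extract(notes)).items():
        print(key, "->", mentions)
```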

The Practitioner's Guide to Data Quality Improvement

Author: David Loshin
Publisher: Elsevier
Total Pages: 423
Release: 2010-11-22
Genre: Computers
ISBN: 0080920349

The Practitioner's Guide to Data Quality Improvement offers a comprehensive look at data quality for business and IT, encompassing people, process, and technology. It shares the fundamentals for understanding the impacts of poor data quality, and guides practitioners and managers alike in socializing, gaining sponsorship for, planning, and establishing a data quality program. It demonstrates how to institute and run a data quality program, from first thoughts and justifications to maintenance and ongoing metrics. It includes an in-depth look at the use of data quality tools, including business case templates, and tools for analysis, reporting, and strategic planning. This book is recommended for data management practitioners, including database analysts, information analysts, data administrators, data architects, enterprise architects, data warehouse engineers, and systems analysts, and their managers.

Advances in Data Science and Information Engineering

Author: Robert Stahlbock
Publisher: Springer Nature
Total Pages: 965
Release: 2021-10-29
Genre: Computers
ISBN: 3030717046

The book presents the proceedings of two conferences: the 16th International Conference on Data Science (ICDATA 2020) and the 19th International Conference on Information & Knowledge Engineering (IKE 2020), which took place in Las Vegas, NV, USA, July 27-30, 2020. The conferences are part of the larger 2020 World Congress in Computer Science, Computer Engineering, & Applied Computing (CSCE'20), which features 20 major tracks. Papers cover all aspects of Data Science, Data Mining, Machine Learning, Artificial and Computational Intelligence (ICDATA) and Information Retrieval Systems, Information & Knowledge Engineering, Management and Cyber-Learning (IKE). Authors include academics, researchers, professionals, and students.

Information Quality and Governance for Business Intelligence

Author: William Yeoh
Publisher: IGI Global
Total Pages: 478
Release: 2013-12-31
Genre: Business & Economics
ISBN: 1466648937

Business intelligence initiatives have been dominating the technology priority lists of many organizations. However, the lack of effective information quality and governance strategies and policies has confronted these initiatives with significant challenges. Information Quality and Governance for Business Intelligence presents the latest academic research on all aspects of practicing and managing information, using a multidisciplinary approach that examines information quality as a foundation for organizational growth. This book is an essential reference tool for researchers, practitioners, and university students specializing in business intelligence, information quality, and information systems.