
Uniqueness

Characteristic Name: Uniqueness
Dimension: Consistency
Description: The data is uniquely identifiable
Granularity: Record
Implementation Type: Rule-based approach
Characteristic Type: Declarative

Verification Metric:

The number of duplicate records reported per thousand records
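
As a rough illustration, this metric can be computed by counting every occurrence of a key value beyond its first. The sketch below is a minimal Python version; the record layout and the `student_id` field are assumptions made for the example, not part of the specification.

```python
from collections import Counter

def duplicates_per_thousand(records, key):
    """Duplicate records reported per 1,000 records, where `key` is the
    field expected to identify each record uniquely (an assumption of
    this sketch)."""
    records = list(records)
    if not records:
        return 0.0
    counts = Counter(record[key] for record in records)
    # Every occurrence of a key value beyond the first is a duplicate record.
    duplicates = sum(n - 1 for n in counts.values())
    return 1000 * duplicates / len(records)

rows = [{"student_id": 42}, {"student_id": 42},
        {"student_id": 7}, {"student_id": 9}]
print(duplicates_per_thousand(rows, "student_id"))  # 250.0 (1 duplicate in 4 records)
```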


The implementation guidelines describe what to enforce for this characteristic; the scenarios give concrete examples of each guideline. A code sketch follows the list.

Guideline: Ensure that every entity (record) is unique by implementing a key in every relation.
Scenario: (1) A key constraint.

Guideline: Ensure that the same entity is not recorded twice under different unique identifiers.
Scenario: (1) The same customer is entered under two different customer IDs.

Guideline: Ensure that the unique key is never null.
Scenario: (1) The Employee ID, the key of the employee table, is never null.

Guideline: When using bar codes, standardise the bar-code generation process to ensure that bar codes are not reused.
Scenario: (1) UPC.
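
The first three guidelines are directly checkable. The sketch below is one possible Python rendering: `key_violations` covers the key and not-null guidelines, and `hidden_duplicates` approximates the "same entity under different identifiers" case by grouping on a chosen set of identity fields. The field names and the name-plus-date-of-birth match rule are illustrative assumptions; real duplicate detection usually needs fuzzier record linkage.

```python
from collections import defaultdict

def key_violations(records, key):
    """Flag records whose key is null or already used (guidelines 1 and 3)."""
    seen, violations = set(), []
    for record in records:
        value = record.get(key)
        if value is None:
            violations.append(("null key", record))
        elif value in seen:
            violations.append(("duplicate key", record))
        else:
            seen.add(value)
    return violations

def hidden_duplicates(records, identity_fields):
    """Group records that agree on the identity fields: candidates for the
    same entity recorded under different identifiers (guideline 2)."""
    groups = defaultdict(list)
    for record in records:
        groups[tuple(record.get(f) for f in identity_fields)].append(record)
    return [group for group in groups.values() if len(group) > 1]

customers = [
    {"customer_id": 1, "name": "Fred Smith", "dob": "1990-01-01"},
    {"customer_id": 2, "name": "Fred Smith", "dob": "1990-01-01"},  # same person, new ID
]
print(key_violations(customers, "customer_id"))       # []: keys alone look fine
print(hidden_duplicates(customers, ("name", "dob")))  # both records grouped together
```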

Validation Metric:

The maturity of the creation and implementation of the DQ rules that maintain the uniqueness of data records

These are examples of how the characteristic might occur in a database.

Example: A school has 120 current students and 380 former students (i.e. 500 in total); however, the Student database shows 520 different student records. These could include Fred Smith and Freddy Smith as separate records, despite there being only one student at the school named Fred Smith. This indicates a uniqueness of 500/520 x 100 = 96.2%.
Source: N. Askham, et al., "The Six Primary Dimensions for Data Quality Assessment: Defining Data Quality Dimensions", DAMA UK Working Group, 2013.

Example: Duplicate vendor records with the same name and different addresses make it difficult to ensure that payment is sent to the correct address. When purchases by one company are associated with duplicate master records, the credit limit for that company can unknowingly be exceeded. This can expose the business to unnecessary credit risk.
Source: D. McGilvray, "Executing Data Quality Projects: Ten Steps to Quality Data and Trusted Information", Morgan Kaufmann Publishers, 2008.

Example: […] on two maps of the same date. Since events have a duration, this idea can be extended to identify events that exhibit temporal overlap.
Source: H. Veregin, "Data Quality Parameters" in P. A. Longley, M. F. Goodchild, D. J. Maguire, and D. W. Rhind (eds), Geographical Information Systems: Volume 1, Principles and Technical Issues. New York: John Wiley and Sons, 1999, pp. 177-189.

Example: The patient's identification details are correct and uniquely identify the patient.
Source: P. J. Watson, "Improving Data Quality: A Guide for Developing Countries", World Health Organization, 2003.

The definitions below show how the characteristic is defined in the sources provided.

Definition: The entity is unique — there are no duplicate values.
Source: BYRNE, B., KLING, J., MCCARTY, D., SAUTER, G., SMITH, H. & WORCESTER, P. 2008. The information perspective of SOA design, Part 6: The value of applying the data quality analysis pattern in SOA. IBM Corporation.

Definition: Asserting uniqueness of the entities within a data set implies that no entity exists more than once within the data set and that there is a key that can be used to uniquely access each entity. For example, in a master product table, each product must appear once and be assigned a unique identifier that represents that product across the client applications.
Source: LOSHIN, D. 2006. Monitoring data quality performance using data quality metrics. Informatica Corporation.

Definition: Each real-world phenomenon is either represented by at most one identifiable data unit, or by multiple but consistent identifiable units, or by multiple identifiable units whose inconsistencies are resolved within an acceptable time frame.
Source: PRICE, R. J. & SHANKS, G. 2005. Empirical refinement of a semiotic information quality framework. Proceedings of the 38th Annual Hawaii International Conference on System Sciences (HICSS'05). IEEE, 216a.

 

Precision

Characteristic Name: Precision
Dimension: Accuracy
Description: Attribute values should be correct in linguistic form and at the right level of granularity
Granularity: Element
Implementation Type: Rule-based approach
Characteristic Type: Declarative

Verification Metric:

The number of tasks that failed or underperformed due to a lack of data precision
The number of complaints received due to a lack of data precision


The implementation guidelines describe what to enforce for this characteristic; the scenarios give concrete examples of each guideline. A code sketch follows the list.

Guideline: Ensure the data values are correct to the right level of detail or granularity.
Scenario: (1) Price to the penny or weight to the nearest tenth of a gram. (2) Precision of an attribute's values according to some general-purpose IS-A ontology such as WordNet.

Guideline: Ensure that data is legitimate or valid according to some stable reference source such as a dictionary, thesaurus, or code list.
Scenario: (1) The spelling and syntax of a description are correct as per the dictionary, thesaurus, or code (e.g. the NYSIIS code). (2) An address is consistent with the global address book.

Guideline: Ensure that the user interfaces provide the precision required by the task.
Scenario: (1) If the domain is infinite (the rational numbers, for example), then no string format of finite length can represent all possible values.

Guideline: Ensure the data values are lexically, syntactically and semantically correct.
Scenario: (1) "Germany is an African country" (semantically wrong); Book.title: 'De la Mancha Don Quixote' (syntactically wrong); UK's Prime Minister: 'Toni Blair' (lexically wrong).
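
The granularity and reference-source guidelines lend themselves to simple rule checks. A minimal Python sketch follows; the `REFERENCE_TERMS` set stands in for a real dictionary, thesaurus, or code list, and two decimal places stands in for "price to the penny" (both are assumptions of the example).

```python
from decimal import Decimal

REFERENCE_TERMS = {"Germany", "France", "Kenya"}  # stand-in for a stable reference source

def right_granularity(value, places):
    """Check the value is stated to the agreed level of detail,
    e.g. places=2 for price to the penny."""
    return Decimal(str(value)) == round(Decimal(str(value)), places)

def valid_against_reference(value, reference):
    """Check the value is legitimate according to a stable reference
    source (dictionary, thesaurus, or code list)."""
    return value in reference

print(right_granularity("19.99", 2))    # True: price to the penny
print(right_granularity("19.999", 2))   # False: finer than the agreed granularity
print(valid_against_reference("Germnay", REFERENCE_TERMS))  # False: misspelt term
```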

Validation Metric:

The maturity of the creation and implementation of the DQ rules that maintain data precision

These are examples of how the characteristic might occur in a database.

Example: If v = Jack, even if the correct value is John, v is considered syntactically correct, as Jack is an admissible value in the domain of persons' names.
Source: C. Batini and M. Scannapieco, "Data Quality: Concepts, Methodologies, and Techniques", Springer, 2006.
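
The distinction is mechanical: a syntactic check only asks whether a value belongs to the attribute's admissible domain, not whether it is the right value for the record. A minimal sketch, with the name list invented for illustration:

```python
ADMISSIBLE_NAMES = {"Jack", "John", "Mary"}  # hypothetical domain of persons' names

def syntactically_correct(value, domain):
    # Domain membership says nothing about whether this is the *right*
    # value for the record; that would be a semantic or accuracy check.
    return value in domain

print(syntactically_correct("Jack", ADMISSIBLE_NAMES))  # True, even if the true value is John
print(syntactically_correct("Jck", ADMISSIBLE_NAMES))   # False: not an admissible name
```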

The definitions below show how the characteristic is defined in the sources provided.

Definition: Data values are correct to the right level of detail or granularity, such as price to the penny or weight to the nearest tenth of a gram.
Source: ENGLISH, L. P. 2009. Information quality applied: Best practices for improving business information, processes and systems. Wiley Publishing.

Definition: Data is correct if it conveys a lexically, syntactically and semantically correct statement – e.g., the following pieces of information are not correct: "Germany is an African country" (semantically wrong); Book.title: 'De la Mancha Don Quixote' (syntactically wrong); UK's Prime Minister: 'Toni Blair' (lexically wrong).
Source: KIMBALL, R. & CASERTA, J. 2004. The data warehouse ETL toolkit: Practical techniques for extracting, cleaning, conforming, and delivering data. Wiley Publishing.

Definition: The set S should be sufficiently precise to distinguish among elements in the domain that must be distinguished by users. This dimension makes clear why icons and colors are of limited use when domains are large. But problems can and do arise for the other formats as well, because many formats are not one-to-one functions. For example, if the domain is infinite (the rational numbers, for example), then no string format of finite length can represent all possible values. The trick is to provide the precision to meet user needs.
Source: LOSHIN, D. 2001. Enterprise knowledge management: The data quality approach. Morgan Kaufmann Publishers.

Definition: Is the information to the point, void of unnecessary elements?
Source: LOSHIN, D. 2006. Monitoring data quality performance using data quality metrics. Informatica Corporation.

Definition: The degree of precision of the presentation of an attribute's value should reasonably match the degree of precision of the value being displayed. The user should be able to see any value the attribute may take and also be able to distinguish different values.
Source: REDMAN, T. C. 1997. Data quality for the information age. Artech House, Inc.

Definition: The granularity or precision of the model or content values of an information object according to some general-purpose IS-A ontology such as WordNet.
Source: STVILIA, B., GASSER, L., TWIDALE, M. B. & SMITH, L. C. 2007. A framework for information quality assessment. Journal of the American Society for Information Science and Technology, 58, 1720-1733.