
Uniqueness

Characteristic Name: Uniqueness
Dimension: Consistency
Description: The data is uniquely identifiable
Granularity: Record
Implementation Type: Rule-based approach
Characteristic Type: Declarative

Verification Metric:

The number of duplicate records reported per thousand records
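As an illustration, this metric can be computed by counting key values that occur more than once and scaling to a per-thousand rate. The record layout and the choice to count only the surplus copies (rather than both members of a duplicate pair) are assumptions of this sketch, not part of the metric's definition:

```python
from collections import Counter

def duplicates_per_thousand(records, key=lambda r: r["id"]):
    """Duplicate records per thousand: surplus copies of any key value
    beyond its first occurrence, scaled to a per-1000 rate. (Counting
    both members of a duplicate pair is an equally valid convention.)"""
    counts = Counter(key(r) for r in records)
    surplus = sum(c - 1 for c in counts.values() if c > 1)
    return 1000.0 * surplus / len(records) if records else 0.0

# Illustrative data: the key value 1 appears twice.
print(duplicates_per_thousand([{"id": 1}, {"id": 1}, {"id": 2}, {"id": 3}]))  # -> 250.0
```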


The implementation guidelines describe practices to follow with regard to the characteristic. The scenarios give examples of each guideline in practice.

Guideline: Ensure that every entity (record) is unique by implementing a key in every relation.
Scenario: (1) Key constraint.

Guideline: Ensure that the same entity is not recorded twice under different unique identifiers.
Scenario: (1) The same customer is entered under two different customer IDs.

Guideline: Ensure that the unique key is never null.
Scenario: (1) The employee ID, which is the key of the employee table, is never null.

Guideline: When using bar codes, standardise the bar code generation process to ensure that bar codes are not reused.
Scenario: (1) UPC.
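A minimal sketch of the first three guidelines, assuming hypothetical customer records with customer_id and name fields; the normalised-name match in part (b) is a deliberately crude stand-in for real duplicate-detection logic:

```python
from collections import defaultdict

customers = [
    {"customer_id": 101, "name": "Fred Smith"},
    {"customer_id": 102, "name": "fred  smith"},   # same person, different ID
    {"customer_id": 103, "name": "Jane Doe"},
]

# (a) Key constraint: every key value is present, non-null, and unique.
ids = [c["customer_id"] for c in customers]
assert all(i is not None for i in ids), "key must never be null"
assert len(ids) == len(set(ids)), "key must be unique"

# (b) Same entity under different IDs: group records by a normalised form
# of the name and flag any name that maps to more than one identifier.
by_name = defaultdict(list)
for c in customers:
    by_name[" ".join(c["name"].lower().split())].append(c["customer_id"])

suspects = {name: ids for name, ids in by_name.items() if len(ids) > 1}
print(suspects)  # -> {'fred smith': [101, 102]}
```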

Validation Metric:

How mature is the creation and implementation of the DQ rules that maintain the uniqueness of data records?

These are examples of how the characteristic might occur in a database.

Example: A school has 120 current students and 380 former students (i.e. 500 in total); however, the student database shows 520 different student records. This could include Fred Smith and Freddy Smith as separate records, despite there being only one student at the school named Fred Smith. This indicates a uniqueness of 500/520 x 100 = 96.2%.
Source: N. Askham, et al., “The Six Primary Dimensions for Data Quality Assessment: Defining Data Quality Dimensions”, DAMA UK Working Group, 2013.

Example: Duplicate vendor records with the same name and different addresses make it difficult to ensure that payment is sent to the correct address. When purchases by one company are associated with duplicate master records, the credit limit for that company can unknowingly be exceeded. This can expose the business to unnecessary credit risks.
Source: D. McGilvray, “Executing Data Quality Projects: Ten Steps to Quality Data and Trusted Information”, Morgan Kaufmann Publishers, 2008.

Example: …on two maps of the same date. Since events have a duration, this idea can be extended to identify events that exhibit temporal overlap.
Source: H. Veregin, “Data Quality Parameters”, in P. A. Longley, M. F. Goodchild, D. J. Maguire, and D. W. Rhind (eds), Geographical Information Systems: Volume 1, Principles and Technical Issues, New York: John Wiley and Sons, 1999, pp. 177-189.

Example: The patient’s identification details are correct and uniquely identify the patient.
Source: P. J. Watson, “Improving Data Quality: A Guide for Developing Countries”, World Health Organization, 2003.

The definitions are examples of how the characteristic is defined in the sources provided.

Definition: The entity is unique: there are no duplicate values.
Source: B. Byrne, et al., “The Information Perspective of SOA Design, Part 6: The Value of Applying the Data Quality Analysis Pattern in SOA”, IBM Corporation, 2008.

Definition: Asserting uniqueness of the entities within a data set implies that no entity exists more than once within the data set and that there is a key that can be used to uniquely access each entity. For example, in a master product table, each product must appear once and be assigned a unique identifier that represents that product across the client applications.
Source: D. Loshin, “Monitoring Data Quality Performance Using Data Quality Metrics”, Informatica Corporation, 2006.

Definition: Each real-world phenomenon is either represented by at most one identifiable data unit, by multiple but consistent identifiable units, or by multiple identifiable units whose inconsistencies are resolved within an acceptable time frame.
Source: R. Price and G. Shanks, “Empirical Refinement of a Semiotic Information Quality Framework”, Proceedings of the 38th Annual Hawaii International Conference on System Sciences (HICSS'05), IEEE, 2005.


Interpretability

Characteristic Name: Interpretability
Dimension: Usability and Interpretability
Description: Data should be interpretable
Granularity: Information object
Implementation Type: Process-based approach
Characteristic Type: Usage

Verification Metric:

The number of tasks failed or underperformed due to the lack of interpretability of data
The number of complaints received due to the lack of interpretability of data


The implementation guidelines describe practices to follow with regard to the characteristic. The scenarios give examples of each guideline in practice.

Guideline: Standardise the interpretation process by clearly stating the criteria for interpreting results, so that an interpretation of one dataset is reproducible.
Scenario: (1) A 10% drop in production efficiency is a severe decline that needs quick remedial action.

Guideline: Facilitate the interaction process based on the users' task at hand.
Scenario: (1) A traffic light system to indicate the efficiency of a production line to the workers, a detailed efficiency report for the production manager, and a concise efficiency report for production line supervisors.

Guideline: Design the structure of information in such a way that further format conversions are not necessary for interpretation.
Scenario: (1) A rating scale of (poor, good, excellent) is better than (1, 2, 3) for rating a service level.

Guideline: Ensure that information is consistent between units of analysis (organisations, geographical areas, populations of concern, etc.) and over time, allowing comparisons to be made.
Scenario: (1) The number of doctors per person is used to compare health facilities between regions. (2) The same populations are used over time to analyse epidemic growth.

Guideline: Use appropriate visualisation tools to facilitate interpretation of data through comparisons and contrasts.
Scenario: (1) Usage of tree maps, bar charts, and line graphs.
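As a sketch of the traffic-light and rating-scale ideas above, with thresholds and labels that are illustrative assumptions rather than values the guidelines prescribe:

```python
def efficiency_traffic_light(efficiency_pct: float) -> str:
    """Map a production-line efficiency percentage to a traffic-light band
    (thresholds are illustrative, not prescribed by the guideline)."""
    if efficiency_pct >= 90.0:
        return "green"   # running as expected
    if efficiency_pct >= 75.0:
        return "amber"   # degraded; monitor closely
    return "red"         # severe decline; quick remedial action needed

def service_rating_label(score: int) -> str:
    """Present a 1-3 service score as a self-describing label,
    so no further conversion is needed to interpret it."""
    labels = {1: "poor", 2: "good", 3: "excellent"}
    return labels.get(score, "unknown")

print(efficiency_traffic_light(81.5))  # -> amber
print(service_rating_label(3))         # -> excellent
```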

Validation Metric:

How mature is the process for maintaining the interpretability of data?

These are examples of how the characteristic might occur in a database.

Example: When an analyst has data with a freshness metric equal to 0, does that mean the data at hand is fresh? What about freshness equal to 10 (supposing we do not stick to the notion proposed in [23])? Is it even fresher? Similar issues may arise with the notion of age: e.g., with age A(e) = 0, we cannot undoubtedly speak of a positive or negative data characteristic, because the semantic meaning of “age” mostly corresponds to the neutral notion of a “period of time”.
Source: O. Chayka, T. Palpanas, and P. Bouquet, “Defining and Measuring Data-Driven Quality Dimension of Staleness”, Trento: University of Trento, Technical Report # DISI-12-016, 2012.

Example: Consider a database containing orders from customers. A practice for handling complaints and returns is to create an “adjustment” order for backing out the original order and then writing a new order for the corrected information, if applicable. This procedure assigns new order numbers to the adjustment and replacement orders. For the accounting department, this is a high-quality database: all of the numbers come out in the wash. For a business analyst trying to determine trends in growth of orders by region, this is a poor-quality database. If the business analyst assumes that each order number represents a distinct order, his analysis will be all wrong. Someone needs to explain the practice and the methods necessary to unravel the data to get to the real numbers (if that is even possible after the fact).
Source: J. E. Olson, “Data Quality: The Accuracy Dimension”, Morgan Kaufmann Publishers, 2003.
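A hedged sketch of the “unravelling” the Olson example calls for, assuming a hypothetical schema in which each adjustment order explicitly references the order it backs out; real order data would rarely be this convenient:

```python
# Hypothetical order table: "adjusts" names the order an adjustment backs out.
orders = [
    {"order_no": 1, "region": "north", "adjusts": None},
    {"order_no": 2, "region": "north", "adjusts": 1},     # backs out order 1
    {"order_no": 3, "region": "north", "adjusts": None},  # replacement for 1
    {"order_no": 4, "region": "south", "adjusts": None},
]

# Exclude adjustment orders and the orders they backed out, leaving only
# the distinct business orders a trend analysis should count.
backed_out = {o["adjusts"] for o in orders if o["adjusts"] is not None}
real_orders = [o for o in orders
               if o["adjusts"] is None and o["order_no"] not in backed_out]

print(len(orders), len(real_orders))  # -> 4 2: naive count vs. real orders
```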

The definitions are examples of how the characteristic is defined in the sources provided.

Definition: Comparability of data refers to the extent to which data is consistent between organisations and over time, allowing comparisons to be made. This includes using equivalent reporting periods.
Source: HIQA, “International Review of Data Quality”, Health Information and Quality Authority (HIQA), Ireland, 2011. http://www.hiqa.ie/press-release/2011-04-28-international-review-data-quality.

Definition: Data is not ambiguous if it allows only one interpretation. Anti-example: Song.composer = ‘Johann Strauss’ (father or son?).
Source: R. Kimball and J. Caserta, “The Data Warehouse ETL Toolkit: Practical Techniques for Extracting, Cleaning, Conforming, and Delivering Data”, 2004.

Definition: Comparability aims at measuring the impact of differences in applied statistical concepts and measurement tools/procedures when statistics are compared between geographical areas, non-geographical domains, or over time.
Source: M. Lyon, “Assessing Data Quality, Monetary and Financial Statistics”, Bank of England, 2008. http://www.bankofengland.co.uk/statistics/Documents/ms/articles/art1mar08.pdf.

Definition: The most important quality characteristic of a format is its appropriateness. One format is more appropriate than another if it is better suited to users’ needs. The appropriateness of the format depends upon two factors: the user and the medium used. Both are of crucial importance. The abilities of human users and computers to understand data in different formats are vastly different. For example, the human eye is not very good at interpreting some positional formats, such as bar codes, although optical scanning devices are. On the other hand, humans can assimilate much data from a graph, a format that is relatively hard for a computer to interpret. Appropriateness is related to the second quality dimension, interpretability.
Source: T. C. Redman, “Data Quality for the Information Age”, Artech House, 1997.