What is data quality? By Ankur Gupta
Data quality indicates if data is fit for use to drive trusted business decisions. The Data Management Association (DAMA) defines data quality management as the “Planning, implementation and control of activities… to assure data assets are fit for consumption and meet the needs of data consumers.”
The key drivers of data quality are:
- Exponential growth in volume, speed, and variety of business data
- Increasing compliance pressure – regulations such as GDPR, BCBS 239, CCAR, and HIPAA require data auditing and reporting
- Data migrations – When moving large volumes of data to the cloud or to new storage, it’s important to identify missing records, values, and broken relationships across tables or systems
- High-performing AI initiatives – Monitoring data drift helps detect accuracy and performance of analytic models over time
- Customer experience – Creating a personalized experience for customers requires fresh and complete data about the individual recipient
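The migration driver above comes down to a concrete check: after moving data, confirm that every source record arrived and nothing unexpected appeared. A minimal sketch of such a reconciliation, using hypothetical record IDs for illustration:

```python
# Minimal sketch: after a migration, compare record keys between the
# source system and the target to find missing or unexpected records.
# The record IDs below are hypothetical examples.

def reconcile(source_ids, target_ids):
    """Return records missing from the target and records that appeared
    in the target without a source counterpart."""
    source, target = set(source_ids), set(target_ids)
    return {
        "missing_in_target": sorted(source - target),
        "unexpected_in_target": sorted(target - source),
    }

result = reconcile(
    source_ids=["C001", "C002", "C003", "C004"],
    target_ids=["C001", "C002", "C004", "C999"],
)
print(result)
# {'missing_in_target': ['C003'], 'unexpected_in_target': ['C999']}
```

In practice the same set comparison would run per table, and broken relationships would be caught by reconciling foreign-key columns the same way.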
Organizations today depend on data for every decision and consider data a significant enterprise asset. As business analysts and data scientists struggle to find trusted data to power their solutions, data quality is assuming a higher priority in business data strategy.
What is good data quality?
Good quality data represents the business scenario correctly and helps approach the problem at hand more precisely. You can use the foundation of good quality data to derive trusted information, driving trusted business decisions. Superior business results can further fuel the case of data quality in a continuous improvement cycle.
Confidence in data is critical for using data collaboratively across the enterprise, and good data quality is an indicator of how quickly you can achieve data-to-value.
Why is data quality important?
Incomplete, incorrect, duplicated, or redundant data is commonplace in business, resulting from human errors, siloed tools, multiple handovers, and an inadequate data strategy. Businesses routinely face frustrated customers, higher operational costs, or inaccurate reports due to poor data quality. MIT Sloan Management Review research points out that the cost of bad data is an astonishing 15% to 25% of revenue for most companies.
Streamlining operational processes is a critical use case for data quality.
- Marketing campaigns often underperform because of effort wasted on incorrect addresses or duplicate customer records
- Suppliers send the wrong material or quantity due to mismatched data across departments
- Reconciling inconsistent data for compliance requires more manual effort, raising costs or delaying the process
Data quality strongly impacts the agile response to business changes.
- Inaccurate or old data fails to identify new opportunities
- Analysis based on poor quality data cannot indicate if the current campaigns are working or need changes
- Financial reporting based on incomplete or obsolete data may not present an accurate picture, delaying timely action
As organizations rush to embrace big data and AI-enabled automation, they need to appreciate good quality data even more.
How do you determine data quality?
Determining data quality in the context of specific domains or tasks is often more relevant and practical. You can begin by taking an inventory of your data assets and choosing a pilot sample data set. Assessing the data set for validity, accuracy, completeness, and consistency is the next step. You can also evaluate instances of redundant, duplicated, and mismatched data. Establishing a baseline on a small data set enables quick scaling of the effort.
Rule-based data quality management is an excellent approach, where you can define rules for specific requirements. You can also establish targets for data quality and compare them with the current levels. Setting targets facilitates continuous measurement, discovering opportunities for improvement, and good data hygiene.
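The rule-based approach above can be sketched in a few lines: define a rule per requirement, measure the pass rate of each rule over a sample, and track that rate against a target. The field names and rules below (email, age, name) are hypothetical examples, not a standard rule set:

```python
# Minimal sketch of rule-based data quality checks on a sample data set.
# The rules and field names are hypothetical illustrations.
import re

RULES = {
    "email_valid": lambda r: re.fullmatch(
        r"[^@\s]+@[^@\s]+\.[^@\s]+", r.get("email") or "") is not None,
    "age_in_range": lambda r: isinstance(r.get("age"), int) and 0 <= r["age"] <= 120,
    "name_present": lambda r: bool(r.get("name", "").strip()),
}

def score(records):
    """Return the pass rate per rule — a baseline you can compare
    against a target level (e.g. 95% of records must pass every rule)."""
    return {
        rule: sum(check(r) for r in records) / len(records)
        for rule, check in RULES.items()
    }

records = [
    {"name": "Ada", "email": "ada@example.com", "age": 36},
    {"name": "", "email": "not-an-email", "age": 200},
]
print(score(records))
# {'email_valid': 0.5, 'age_in_range': 0.5, 'name_present': 0.5}
```

Running the same scoring on each refresh turns a one-off assessment into the continuous measurement the targets are meant to enable.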
As per Gartner, data quality improvement efforts tend to focus narrowly on accuracy. Data consumers have a much broader definition of data quality than technical professionals may realize. For example, data accuracy is meaningless unless data is accessible, understandable, and relevant.
What is an example of data quality?
What happens when someone is rushed into an emergency procedure? Healthcare staff quickly retrieve digital patient records, which are expected to present complete information at all times. If the patient data fails to indicate allergies or ongoing medications, the consequences can be severe. Good quality patient data can ensure that all treatments correctly address the unique healthcare needs of individuals at any point in time.
In business, good data quality can assure that your data is fit to support the analysis and spearhead your efforts in the right direction.
How to improve the quality of your data?
Identifying and acknowledging the problem is the first step towards solving it. A recent global crisis survey by PwC highlighted the importance of accurate data during crisis management. Data quality is affected by many factors, most of which have their roots in siloed data sources. You must take a comprehensive approach to understand data and overcome the challenges of managing its quality.
- Metadata Management: Metadata management leverages the cross-organizational agreement on defining informational assets for converting data into an enterprise asset.
- Data Governance: Data governance is a collection of practices and processes to standardize the management of data assets within an organization. A robust data governance foundation builds trust in data.
- Data Catalog: A data catalog empowers users to quickly discover and understand the data that matters, helping them choose trusted data to generate impactful business insights.
- Data Matching: Data matching identifies possible duplicates or overlaps to break down data silos and drive consistency.
- Data Intelligence: Data intelligence is the ability to understand and use your data in the right way. A comprehensive approach to data intelligence promotes and delivers high-quality data.
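Of the practices above, data matching is the most mechanical: bring candidate records to a common form so that trivially different spellings collapse into the same match key. A minimal sketch, using a hypothetical normalization (lowercase, strip punctuation and extra whitespace) on made-up customer names:

```python
# Minimal sketch of data matching via normalized match keys.
# The normalization rules and sample records are hypothetical.

def match_key(name: str) -> str:
    """Lowercase the name and strip punctuation and extra whitespace."""
    cleaned = "".join(ch for ch in name.lower() if ch.isalnum() or ch.isspace())
    return " ".join(cleaned.split())

def find_duplicates(names):
    """Group names sharing a match key; return groups with 2+ members."""
    groups = {}
    for name in names:
        groups.setdefault(match_key(name), []).append(name)
    return [group for group in groups.values() if len(group) > 1]

dupes = find_duplicates(["Acme, Inc.", "ACME Inc", "Globex Corp", "acme inc"])
print(dupes)
# [['Acme, Inc.', 'ACME Inc', 'acme inc']]
```

Production matching engines go further (phonetic keys, edit distance, probabilistic scoring), but the principle is the same: candidates that normalize to the same key are flagged for review or merging.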
Data quality best practices focus on establishing an enterprise-wide initiative, defining measurement metrics, streamlining procedures, and performing regular audits.
Predictive and continuous data quality offers unique capabilities of autonomous rule management, continuous data-drift detection, and automated data profiling. You can enhance these capabilities with data governance, data privacy, data catalog, and data lineage to gain end-to-end control of data pipelines, bring full business context to data quality, and deliver trusted analytics and AI in a scalable way.
Gartner estimates that by 2022, 60% of organizations will leverage ML-enabled technology for data quality improvement. How beneficial would it be for your organization if you were able to automate your data quality rule management process and continuously increase the quality of your business-critical data sources and data elements?