In the first blogpost of this series on data enrichment, we highlighted the challenges companies face in harnessing the power of their data due to issues like raw and incomplete datasets. Now, let’s dive deeper into the topic, focusing on practical examples of bad versus good data quality.
Practical B2B Data Enrichment Use Cases
Firstly, let’s examine an example of a poorly structured sales lead dataset extracted from a CRM:
This table suffers from several issues, including missing data fields and inconsistent structure, that make it unusable.
A data enrichment process would normalize the data structure, fill the gaps using third-party sources, and remove errors to produce a far more complete sheet, as seen below. In this example, the post-enrichment table is much more useful: the growth team has all of the relevant data they need to reach out to their leads.
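As a rough illustration of the normalization step, here is a minimal Python sketch. The field names, phone format, and email-domain fallback are assumptions for the example, not a description of any particular CRM or enrichment vendor; real pipelines would also verify filled-in values against third-party sources.

```python
import re

# Hypothetical canonical schema for a lead record; real CRM exports vary widely.
CANONICAL_FIELDS = ["name", "email", "company", "phone"]

def normalize_lead(raw: dict) -> dict:
    """Map a raw CRM row onto a canonical schema and clean common issues."""
    lead = {field: (raw.get(field) or "").strip() for field in CANONICAL_FIELDS}

    # Normalize casing: "jane DOE" -> "Jane Doe"
    lead["name"] = lead["name"].title()
    lead["email"] = lead["email"].lower()

    # Keep digits only, then format 10-digit numbers consistently.
    digits = re.sub(r"\D", "", lead["phone"])
    if len(digits) == 10:
        lead["phone"] = f"({digits[:3]}) {digits[3:6]}-{digits[6:]}"

    # Fill a missing company name from the email domain as a best guess,
    # flagged for later verification against a third-party source.
    if not lead["company"] and "@" in lead["email"]:
        domain = lead["email"].split("@", 1)[1]
        lead["company"] = domain.split(".")[0].capitalize()
        lead["needs_verification"] = True

    return lead

raw_row = {"name": "jane DOE", "email": "Jane@Acme.com", "phone": "555.123.4567"}
print(normalize_lead(raw_row))
```

In practice the verification flag matters as much as the fill-in itself: guessed values go to a review queue rather than straight into outreach lists.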
Let's explore another example:
Product Feed Management: A product feed (also known as a product catalog) organizes crucial product data, including category, description, price, and dimensions, into a searchable database for customers of e-commerce marketplaces.
The largest e-commerce marketplaces receive product data from tens of thousands of vendors, each using different formats, schemas, and structures. Miscategorized products and inaccurate descriptions can result in a database like the example below. Unfortunately, these inaccuracies mean products are never found by consumers, becoming dreaded dead stock, or non-performing inventory.
An enriched product feed looks more like the example below, with a complete set of correctly categorized attributes that boost search engine visibility, improving customer experience and, as a result, sales.
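To make the categorization step concrete, here is a hedged sketch of rule-based category assignment and completeness checking for a product feed. The keyword map and required-attribute list are illustrative assumptions, not a production taxonomy; real pipelines typically combine ML classifiers with human review.

```python
# Illustrative keyword-to-category map; a real marketplace taxonomy
# has thousands of categories and uses trained classifiers.
CATEGORY_KEYWORDS = {
    "Footwear": ["sneaker", "boot", "sandal"],
    "Electronics": ["headphone", "charger", "speaker"],
    "Home & Kitchen": ["blender", "cookware", "mug"],
}

REQUIRED_ATTRIBUTES = ["title", "description", "price", "category"]

def enrich_product(product: dict) -> dict:
    """Assign a category when the vendor left it blank, and flag gaps."""
    enriched = dict(product)

    # Assign a category from title keywords if none was supplied.
    if not enriched.get("category"):
        title = enriched.get("title", "").lower()
        for category, keywords in CATEGORY_KEYWORDS.items():
            if any(keyword in title for keyword in keywords):
                enriched["category"] = category
                break

    # Route anything still incomplete to human review instead of guessing.
    enriched["missing"] = [a for a in REQUIRED_ATTRIBUTES if not enriched.get(a)]
    return enriched

feed = [
    {"title": "Trail Running Sneaker", "description": "Lightweight.", "price": 89.99},
    {"title": "Mystery Item", "price": None},
]
for product in feed:
    print(enrich_product(product))
```

The point of the `missing` list is exactly the hybrid workflow described above: automation handles the unambiguous rows, and the ambiguous remainder is escalated to people.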
How do you enrich a database of hundreds of thousands of products, flexibly and at scale? By creating and executing a process that combines AI, automation, and an educated global workforce. See how we did it for one of the two largest retailers in the world.
KYC & Regulatory Compliance: FinTech & Financial Services leaders leverage data enrichment to satisfy regulatory requirements, such as Know Your Customer (KYC) and Anti-Money Laundering (AML) regulations. Often, customer information is sourced from several databases, including government forms, watchlists, and public records, each with its own unique schema and format.
In this example, an incomplete dataset could look like this:
As the table above shows, this dataset is sub-optimal: it would cost fintech teams several hours of manual review and enrichment before it becomes valuable. That delay adds friction to the customer onboarding process, which can lead to churn.
A complete dataset is required to meet regulatory requirements, so enriching the unstructured data is a crucial task for fintech teams. Using automation, with an intelligent workforce to QA the process, thousands of rows of unstructured data can be enriched, and government requirements satisfied, without consuming full-time employees' valuable time. An example of this type of dataset is below.
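The merging step, reconciling records that arrive from sources with different schemas, can be sketched as follows. The canonical field names and source aliases here are assumptions for illustration; actual KYC schemas depend on the jurisdiction and the data providers involved.

```python
# Map each canonical KYC field to the aliases it may carry in
# different sources (government forms, watchlists, public records).
# These aliases are illustrative, not a real provider's schema.
FIELD_ALIASES = {
    "full_name": ["full_name", "name", "customer_name"],
    "date_of_birth": ["date_of_birth", "dob", "birth_date"],
    "address": ["address", "residential_address"],
    "id_number": ["id_number", "national_id"],
}

def merge_kyc_records(*sources: dict) -> dict:
    """Take the first non-empty value for each canonical field across sources."""
    record = {}
    for field, aliases in FIELD_ALIASES.items():
        record[field] = next(
            (src[alias] for src in sources for alias in aliases if src.get(alias)),
            None,
        )
    # Any field still empty blocks onboarding until it is enriched or reviewed.
    record["incomplete_fields"] = [f for f, v in record.items() if v is None]
    return record

gov_form = {"customer_name": "A. Rivera", "dob": "1990-04-02"}
public_record = {"residential_address": "12 Main St"}
print(merge_kyc_records(gov_form, public_record))
```

Here the `incomplete_fields` list is what gets routed to the QA workforce, so reviewers spend time only on the rows automation could not complete.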
In the above examples, we have outlined what poor data quality can look like and the operational problems businesses face if the data is left untransformed. We have also explored what a good dataset looks like and the benefits businesses experience from performing data enrichment.
In the final post of this three-part series, we will discuss how neither technology nor people alone is the optimal solution for data enrichment. A unique combination of the two, augmented by a process engine, is required to deliver enriched databases at scale.
Ready to see what good data quality can do for your team? Reach out.