HACKER Q&A
📣 gvaishno

What Are Your Favorite Tools for Data Integrity in Data Engineering?


Hello HN community! I'm working on a data engineering project, and I'm keen to hear about the tools you find most effective for ensuring data quality and integrity. Whether it's ETL processes, data validation techniques, monitoring solutions, or any other aspect of data engineering, I'd love to learn from your experiences. What are your go-to tools and best practices for maintaining trustworthy data throughout its lifecycle? Any insights and recommendations would be greatly appreciated!


  👤 leandot Accepted Answer ✓
It really depends on your type of data, but off the top of my head - Pydantic.
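
A minimal sketch of what that looks like, assuming Pydantic v2 and a made-up Order schema - swap in whatever your records actually contain:

    # Validate incoming records before they enter the pipeline (Pydantic v2 syntax).
    from pydantic import BaseModel, ValidationError, field_validator

    class Order(BaseModel):
        order_id: int
        region: str
        amount: float

        @field_validator("amount")
        @classmethod
        def amount_must_be_positive(cls, v: float) -> float:
            if v <= 0:
                raise ValueError("amount must be positive")
            return v

    try:
        Order(order_id="42", region="EU", amount=-5)  # "42" is coerced to int; the amount is rejected
    except ValidationError as e:
        print(e)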

👤 frankacter
> Whether it's ETL processes

For ETL/data pipelines, tools like Apache Airflow, AWS Glue, and Azure Data Factory provide flexible orchestration and monitoring. They also make it straightforward to validate, clean, and standardize data at each step.
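
As a rough sketch of where a validation step sits in an Airflow DAG (Airflow 2.4+ TaskFlow API; the task names and the specific checks are just illustrative):

    from datetime import datetime
    from airflow.decorators import dag, task

    @dag(schedule=None, start_date=datetime(2024, 1, 1), catchup=False)
    def orders_pipeline():

        @task
        def extract() -> list[dict]:
            # In practice this would pull from an API, a database, etc.
            return [{"order_id": 1, "amount": 10.0}, {"order_id": 2, "amount": 25.5}]

        @task
        def validate(rows: list[dict]) -> list[dict]:
            # Fail the run early if basic integrity assumptions don't hold.
            assert rows, "extract returned no rows"
            assert all(r["amount"] > 0 for r in rows), "non-positive amounts found"
            return rows

        @task
        def load(rows: list[dict]) -> None:
            print(f"loading {len(rows)} validated rows")

        load(validate(extract()))

    orders_pipeline()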

> data validation techniques

For data validation, Spark/Python libraries, Looker Data Literacy, and Great Expectations are effective for formalizing validation rules and checks on type, format, range, uniqueness, etc.
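
For instance, with Great Expectations' classic pandas API (pre-1.0 releases; newer versions use a different entry point) and made-up column names:

    import pandas as pd
    import great_expectations as ge

    df = ge.from_pandas(pd.DataFrame({
        "order_id": [1, 2, 3],
        "amount": [10.0, 25.5, 7.2],
        "email": ["a@example.com", "b@example.com", "c@example.com"],
    }))

    print(df.expect_column_values_to_be_unique("order_id"))          # uniqueness
    print(df.expect_column_values_to_be_between("amount", min_value=0))  # range
    print(df.expect_column_values_to_match_regex("email", r".+@.+\..+"))  # format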

Tools like Databricks Profiling and Alteryx Profiler help you understand data structure, anomalies, and quality issues before modeling or analysis.
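
Even plain pandas gives you a rough version of what those profilers surface (dedicated tools add correlations, anomaly flags, drift detection, and so on); the file path here is a placeholder:

    import pandas as pd

    df = pd.read_csv("orders.csv")     # placeholder path

    print(df.dtypes)                   # structure: column types
    print(df.isna().sum())             # completeness: nulls per column
    print(df.describe(include="all"))  # ranges, cardinality, basic stats
    print(df.duplicated().sum())       # exact duplicate rows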

For MDM/lineage, master data hubs like Talend MDM combined with tools like Apache Atlas or Collibra provide a 360-degree view of data assets.

> monitoring solutions

Tools like DataDog, Prometheus, and Interana are useful for monitoring data quality metrics and exceptions.
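
For example, a pipeline can export quality metrics for Prometheus to scrape via prometheus_client (the metric names and hard-coded values here are illustrative):

    import time
    from prometheus_client import Gauge, start_http_server

    null_fraction = Gauge("orders_null_fraction", "Fraction of null values in the orders table")
    rows_loaded = Gauge("orders_rows_loaded", "Rows loaded in the last pipeline run")

    start_http_server(8000)  # Prometheus scrapes :8000/metrics

    while True:
        # In a real pipeline these would be computed after each load, not hard-coded.
        null_fraction.set(0.002)
        rows_loaded.set(12345)
        time.sleep(60)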

For us, the key is taking a holistic approach - validate your data at the source, during transformation, and at the destination. Automate as many checks as possible and monitor quality continuously to ensure data reliability across its lifecycle.


👤 iknownothow
You could approach data quality testing as if you were testing any other piece of software: by writing tests. We use dbt, and it makes writing tests against models (think tables in a db) very easy.

For example, say you have a regional_orders table. You write tests in SQL to check your assumptions about that data (see the sketch below the list):

* I expect the regional_orders table to contain no duplicate entries.

* I expect regional_orders to ship only to a specific region.

* And so on.
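
In dbt those assertions end up as SQL tests that pass when the query returns zero rows. Here's a self-contained sketch of the same idea, run with duckdb in Python (the order_id and ship_region columns are assumptions for illustration):

    import duckdb

    con = duckdb.connect()
    con.execute("""
        CREATE TABLE regional_orders AS
        SELECT * FROM (VALUES (1, 'EU'), (2, 'EU'), (3, 'EU')) AS t(order_id, ship_region)
    """)

    # Each query lists offending rows; an empty result means the check passes,
    # which is exactly how a dbt test is judged.
    checks = {
        "no duplicate order_id": """
            SELECT order_id FROM regional_orders
            GROUP BY order_id HAVING COUNT(*) > 1
        """,
        "ships only to the expected region": """
            SELECT * FROM regional_orders WHERE ship_region <> 'EU'
        """,
    }

    for name, sql in checks.items():
        bad_rows = con.execute(sql).fetchall()
        assert not bad_rows, f"check failed: {name} ({len(bad_rows)} offending rows)"
        print(f"passed: {name}")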

This has worked fairly well so far for me. But are these kinds of tests sufficient? Am I missing something?