By agreement
My responsibilities:
- Implement data quality checks across multiple data domains, including master data, transactional data, data in transit, reconciliation, and analytical/reporting data
- Build and maintain Table Monitors to detect pipeline delays/breakages and data health issues per table/view, including freshness, volume expectations, and schema changes (see the sketch after this list)
- Build and maintain Metric Monitors to identify anomalies in key statistical/business KPIs and to run comparisons/reconciliation across systems (e.g., different instances or data warehouses) while validating agreed tolerances
- Build and maintain Validation Monitors to identify invalid individual rows and enforce business logic (e.g., custom SQL validations for complex master data logic; row-level business rules)
- Build and maintain Query Performance Monitors to detect inefficient or problematic queries that increase compute costs or risk timeouts and downstream data quality incidents
- Integrate data quality monitoring into the data pipeline / data product lifecycle (setup, deployment, and ongoing operational maintenance)
- Collaborate with data engineers and analytics teams to investigate data quality incidents, perform root-cause analysis, and implement preventive fixes
- Document the implemented checks/monitors, including their purpose, owners, and guidance on how to interpret alerts
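For illustration only, the minimal sketch below shows how three of the check types named above (a freshness check, a cross-system reconciliation within a tolerance, and a row-level business rule) could be expressed in Python with plain SQL. The table names (`orders_src`, `orders_dwh`), columns, thresholds, and the 1% tolerance are hypothetical assumptions; in practice such checks would run against the actual warehouse through the chosen monitoring platform rather than an in-memory SQLite database.

```python
# Hypothetical, self-contained sketch of three check types mentioned above:
# freshness (Table Monitor), cross-system reconciliation within a tolerance
# (Metric Monitor), and a row-level business rule (Validation Monitor).
# Table/column names, the 24h threshold, and the 1% tolerance are illustrative only.
import sqlite3
from datetime import datetime, timedelta, timezone

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders_src (order_id TEXT, amount REAL, updated_at TEXT);
CREATE TABLE orders_dwh (order_id TEXT, amount REAL);
INSERT INTO orders_src VALUES ('A-1', 120.0, '2024-01-01T10:00:00+00:00');
INSERT INTO orders_src VALUES ('A-2', -5.0,  '2024-01-01T11:00:00+00:00');
INSERT INTO orders_dwh VALUES ('A-1', 120.0);
""")

# 1) Freshness: the newest source row must be younger than an agreed threshold.
latest = conn.execute("SELECT MAX(updated_at) FROM orders_src").fetchone()[0]
age = datetime.now(timezone.utc) - datetime.fromisoformat(latest)
print("freshness ALERT" if age > timedelta(hours=24) else "freshness OK")

# 2) Reconciliation: total amount in source vs. warehouse, within a 1% tolerance.
src_total = conn.execute("SELECT SUM(amount) FROM orders_src").fetchone()[0]
dwh_total = conn.execute("SELECT SUM(amount) FROM orders_dwh").fetchone()[0]
diff_ratio = abs(src_total - dwh_total) / abs(src_total)
print("reconciliation ALERT" if diff_ratio > 0.01 else "reconciliation OK")

# 3) Row-level rule: flag individual rows that break the 'amount > 0' rule.
bad_rows = conn.execute(
    "SELECT order_id, amount FROM orders_src WHERE amount <= 0").fetchall()
print(f"{len(bad_rows)} row(s) violate 'amount > 0':", bad_rows)
```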
The knowledge I own:
- Good knowledge of data management, data quality, and data architecture
- Practical experience implementing data quality rules/checks and operating quality monitoring solutions for structured data
- Strong skills in SQL and Python
- Understanding of data quality dimensions (completeness, correctness/accuracy, consistency, uniqueness, validity, timeliness) and how to translate them into measurable checks
- Familiarity with data-in-transit monitoring and reconciliation patterns across systems
- Strong collaboration and communication skills to work with multiple stakeholders (data engineering, data product, analytics/reporting)
- Background in data engineering environments (advantage): data lake/warehouse, pipeline orchestration, and CI/CD for data/pipeline changes
- Experience with data quality tooling/platforms (advantage), e.g., Syniti or Monte Carlo
- Experience with SAP and/or integration architecture (advantage)
The offer that would convince me:
- A constantly growing organization and increasing opportunities
- Secure, long-term job opportunity
- Varied and engaging job responsibilities
- Outstanding salary
- Flexible work arrangements
- Home office possibility
Online application:
Please apply via our online application form and attach your resume.
AIIS Privacy Notice