Key Responsibilities:
- Design, develop, and maintain Python applications for data processing and automation.
- Apply object-oriented programming, design patterns, and unit testing to ensure scalable, maintainable solutions.
- Build and maintain data pipelines using Python (Pandas) for large-scale data processing.
- Integrate and consume REST APIs for data exchange and transformation.
- Collaborate with stakeholders to analyze business challenges, gather requirements, and propose data-driven solutions.
- Develop and deploy optimization models, ensuring their scalability and reliability in production.
- Perform data validation, transformation, and aggregation to meet business requirements.
- Work with DBT (data build tool) for data modeling and transformation workflows.
- Use SQL for querying, optimizing, and managing large datasets.
- Leverage Azure cloud resources (Storage Accounts, Key Vaults, Databases) for secure and efficient data management.
- Collaborate with DevOps teams to manage Dockerized environments and VSCode devcontainers.
- Conduct ad hoc analyses to troubleshoot business issues and provide actionable insights.
- Ensure high quality by writing and maintaining unit, integration, and endpoint tests.

Requirements

Essential Skills:
- Strong engineering skills in Python (OOP, design patterns, unit testing).
- Data processing with Pandas.
- Experience with REST APIs (consumption and development).
- Knowledge of data orchestrators (Airflow, Dagster, Mage.ai, or equivalent).
- Proficiency in SQL and DBT.
- Familiarity with Azure cloud resources (Storage Accounts, Key Vaults, Databases).
- Strong understanding of Git for version control.
- Experience with Docker and VSCode devcontainers.