Key Responsibilities
- Lead the design, development, and maintenance of robust, scalable ETL/ELT pipelines and data platforms.
- Architect and implement data models, data warehouses, and data lakes to support analytics, machine learning, and business applications.
- Define and enforce data engineering standards, best practices, and governance frameworks.
- Optimize the performance, scalability, and cost efficiency of large-scale data systems.
- Partner with data scientists, analysts, and business stakeholders to deliver high-quality data solutions.
- Lead initiatives to improve data reliability, observability, and automation.
- Mentor and guide junior data engineers, reviewing code and providing technical leadership.
- Ensure compliance with security, privacy, and regulatory requirements in all data solutions.
Requirements
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
- At least 6 years of experience as a Data Engineer or in a similar role.
- Strong expertise in SQL, relational and NoSQL databases, and data modeling.
- Proficiency in Python, Scala, or Java for data engineering tasks.
- Hands-on experience with big data frameworks and tools (e.g., Apache Spark, Kafka, Airflow, dbt).
- Proven experience with cloud platforms (AWS, Azure, or GCP) and modern data warehouses (e.g., Snowflake, BigQuery, or Redshift).
- Deep understanding of ETL/ELT design patterns, data governance, and best practices.
- Experience leading technical projects and mentoring engineering teams.