The Senior Data Engineer is responsible for designing, developing, and maintaining advanced data architectures and pipelines that enable data-driven decision-making across the organization. This role involves end-to-end ownership of data solutions — from modelling and integration to optimization and governance. The incumbent ensures that data systems are scalable, reliable, and aligned with business and analytical requirements. As a senior member of the team, the role also includes mentoring mid-level engineers and contributing to best practice development within the data engineering discipline.

Key Responsibilities:

  • Design logical and physical data models to support applications, analytics, and reporting needs; ensure models meet business requirements and are optimized for scalability and performance.
  • Architect and implement robust data integration processes, including ETL (Extract, Transform, Load) and ELT (Extract, Load, Transform) workflows, to consolidate data from multiple internal and external sources.
  • Design, build, and manage complex, high-performing data pipelines and architectures for batch and real-time data processing.
  • Optimize data storage solutions, ensuring efficiency, reliability, and scalability within cloud and on-premises environments.
  • Collaborate closely with data scientists, analysts, and business stakeholders to align data infrastructure with strategic and operational objectives.
  • Implement and enforce data quality, validation, and governance standards to ensure accuracy and consistency of organizational data assets.
  • Provide guidance and technical mentorship to junior and mid-level data engineers, promoting adherence to engineering best practices.

Requirements:

  • NQF Level 6 or higher tertiary qualification in an Information and Communication Technology (ICT) field (e.g., Computer Science, Information Systems, Data Engineering, or related discipline).
  • Cloud certification (AWS, Azure, or GCP) preferred.
  • Minimum of 5 years’ experience in a Data Engineer or similar data-focused role.
  • Proven expertise in data modelling, ETL/ELT pipeline development, and data integration.
  • Hands-on experience with cloud-based data platforms (e.g., Amazon Redshift, Azure Synapse, Google BigQuery).
  • Experience with big data frameworks (e.g., Spark, Hadoop) and modern orchestration tools (e.g., Airflow, Prefect).

Desired Skills:

  • SQL
  • Cloud
  • Python
  • Data Integration
  • Data Modelling
  • Data Security
  • ETL

Desired Qualification Level:

  • Diploma
