Job Title: Data Engineer
Location: Johannesburg
Job Type: Contract (8 months)
Work Model: Hybrid

Our client is looking for a skilled Data Engineer to join the team on a contract basis, working within a dynamic and fast-paced banking/financial services environment. The ideal candidate will have solid experience in designing and building robust data pipelines and architectures to support advanced analytics, reporting, and data-driven decision-making.

Key Responsibilities

  • Design, develop, and maintain scalable and efficient data pipelines and ETL processes.
  • Ingest, transform, and load data from various internal and external sources into data warehouses or data lakes.
  • Ensure data quality, integrity, and consistency across all systems and processes.
  • Collaborate with business stakeholders, data analysts, and BI teams to understand data requirements and deliver appropriate solutions.
  • Implement data solutions that comply with governance, regulatory, and security standards.
  • Perform root cause analysis on internal and external data and processes to answer specific business questions.
  • Monitor and troubleshoot data pipeline issues and performance bottlenecks.
  • Document data flow processes and contribute to maintaining best practices for data engineering across the team.

Experience and Qualifications

  • Bachelor’s degree in Computer Science, Information Systems, Engineering, or a related field.
  • 3-5+ years of proven experience as a Data Engineer, ideally within a banking or financial services environment.
  • Relevant certifications in data engineering, cloud platforms, or database management are advantageous.
  • Strong proficiency in SQL and experience with large-scale data systems.
  • Hands-on experience with ETL tools and frameworks (e.g., Apache NiFi, Talend, SSIS).
  • Experience with cloud platforms such as Azure (preferred), AWS, or GCP.
  • Proficiency with Azure Data Factory, Synapse Analytics, and Data Lake solutions.
  • Solid experience with Python or Scala for data processing.
  • Familiarity with big data tools such as Databricks, Spark, or Kafka.
  • Exposure to CI/CD tools and DevOps practices for data pipelines.
  • Strong understanding of data modelling, data warehousing, and data governance principles.
  • Knowledge of APIs and integrating data from multiple source systems.

Join us in shaping the future of client solutions! If you’re ready to take on a new challenge and make an impact, we want to hear from you. Apply now!

Desired Skills:

  • Azure
  • AWS
  • GCP
  • CI/CD
  • DevOps
  • Python
  • Scala
  • Spark
  • Kafka
  • ETL Tools
  • Data engineering
  • Big data
  • Hadoop
