Data Engineer

A leading financial services and insurance organization is seeking an experienced Data Engineer with a strong focus on DevOps to join its Data Engineering Department. This role is instrumental in designing, building, and maintaining robust, scalable data pipelines and systems that support data-driven decision-making across the enterprise.

You will work closely with cross-functional teams to enable seamless integration with AWS services, drive containerization with Docker and Kubernetes, and manage the performance of Apache Spark and Kafka deployments. If you’re passionate about optimizing data operations and leveraging modern DevOps and cloud practices, we’d like to meet you.

Key Responsibilities:

  • Design, develop, and deploy scalable and efficient data pipelines tailored to business needs.
  • Collaborate with the DevOps team to build and manage CI/CD pipelines using AWS tools such as CodePipeline, CodeBuild, and CodeDeploy.
  • Containerize and orchestrate data processing applications using Docker and Kubernetes.
  • Manage and optimize deployments of Apache Spark and Kafka to support high-performance data processing.
  • Monitor data pipelines for performance and reliability, and work to reduce errors.
  • Apply security and compliance best practices in all data engineering workflows.
  • Evaluate and introduce new tools and technologies to improve data engineering productivity and system performance.
  • Provide technical support to team members, resolving issues related to data workflows and infrastructure.

Required Skills & Qualifications:

  • Bachelor’s degree in Computer Science, Software Engineering, or a related field.
  • At least 5 years of experience in data engineering, with hands-on pipeline development and DevOps integration.
  • Strong knowledge and hands-on experience with:
    • Apache Spark
    • Databricks
    • Oracle
    • Kafka deployment and performance tuning
  • Proficiency in AWS DevOps tools including CodePipeline, CodeBuild, CodeDeploy, and CodeStar.
  • Experience using containerization and orchestration tools (Docker and Kubernetes).
  • Familiarity with other data processing frameworks such as Hadoop, Apache NiFi, or Apache Beam.
  • Excellent troubleshooting and problem-solving skills.
  • Strong communication skills, with the ability to bridge the gap between technical teams and business stakeholders.

Location:

  • Johannesburg, Gauteng

Workplace Type:

  • Hybrid

Job Type:

  • Contract

Experience Type:

  • Senior

We encourage you to apply: contact Kivara Rajgopal at [Email Address Removed] or via [Phone Number Removed].

Desired Skills:

  • DevOps
  • Kafka
  • AWS
  • Databricks
  • Oracle
  • Docker
  • CodePipeline

Desired Qualification Level:

  • Degree
