Our client is a development firm focused on advancing innovative agricultural technologies. They are looking for a skilled and enthusiastic Data Engineer to join their dynamic team as an integration specialist. You will design, implement, and maintain the organisation's data infrastructure and pipelines, collaborating closely with data scientists, analysts, and software engineers to ensure smooth and reliable data flows across the entire organisation. The ideal candidate has a strong foundation in data engineering, excellent problem-solving skills, and a keen interest in working with large datasets.


Duties & Responsibilities:

  • Design, develop, and maintain scalable and efficient data pipelines and ETL processes to ingest, transform, and load data from various sources.
  • Develop and implement data warehousing solutions.
  • Collaborate with cross-functional teams to integrate various data sources.
  • Ensure data quality and consistency across different data systems.
  • Optimise data retrieval for dashboard/reporting solutions.
  • Optimise data infrastructure, including data storage, data retrieval, and data processing for enhanced performance and scalability.
  • Implement data quality and data governance processes to ensure accuracy, consistency, and integrity of data.
  • Monitor and troubleshoot data pipelines to identify and resolve issues in a timely manner.
  • Perform data profiling and analysis to identify data quality issues and propose improvements.
  • Collaborate with data scientists and analysts to provide them with the necessary data sets for analysis and reporting.
  • Stay up to date with emerging technologies and trends in data engineering and recommend new tools and frameworks to improve data infrastructure.
  • Actively contribute to BI analytics tasks, such as creating and maintaining reports; this means being comfortable with hands-on work, including report development, SQL writing, and refactoring.



Qualifications:

  • A bachelor’s degree in computer science, engineering, or a related field is preferred.
  • Certifications (AWS, GCP, Azure, Microsoft) are a plus.

Knowledge, Skills & Experience:

  • Proven experience as a Data Engineer or similar role, with a strong understanding of data modelling, data warehousing, and ETL processes.
  • Proficient in SQL and experience working with relational databases (e.g., PostgreSQL, MySQL, SQL Server) and NoSQL databases (e.g., MongoDB, Cassandra).
  • Strong programming skills in at least one scripting language (e.g., Python) and experience with data manipulation and transformation libraries (e.g., Pandas, PySpark).
  • Comfortable working with cloud-based infrastructure and services provided by Amazon Web Services (AWS)/Azure.
  • Familiarity with data pipeline orchestration tools (e.g., Apache Airflow, Luigi, AWS Lambda) and workflow management systems.
  • Experience with real-time data streaming technologies (e.g., Apache Kafka, Apache Flink).
  • Knowledge of containerisation technologies and orchestration tools (e.g., Docker, Kubernetes).
  • Familiarity with machine learning concepts and frameworks.

While we would like to respond to every application, if you have not been contacted for this position within 10 working days, please consider your application unsuccessful.


When applying for jobs, ensure that you meet the minimum job requirements. Only SA citizens will be considered for this role. If you are not in the location mentioned for a job, please note your relocation plans in all applications and correspondence. Apply here [URL Removed] or e-mail a Word copy of your CV to [Email Address Removed], mentioning the reference number of the job.

Desired Skills:

  • Data Engineer
  • JHB
