This is a great opportunity for someone who loves critical thinking and solving challenging problems, and who wants to have an impact in the world. Very flexible environment, output-driven, great culture, high tech.
Qualification requirements

Tertiary degree in a quantitative field such as science, maths, or engineering (BSc/BEng, pure maths)
Further learning in big data, preferably including machine learning and programming in Python

Experience required

  • Experience in building and optimizing big data pipelines
  • Experience designing and implementing data warehouses, data lakes, and data meshes
  • Advanced knowledge of relational and non-relational databases, with strong experience in SQL
  • Working knowledge of streaming, message queuing and highly scalable data stores
  • Understanding of DevOps processes, CI/CD, and agile development
  • Experience in testing and observability in the context of big data
  • Experience with distributed processing frameworks (Apache Spark, Hive, Kafka, etc.)
  • Experience working in a cloud environment (Azure, AWS)
  • Specific technologies:
      – Python, R, Scala
      – SQL (MS SQL Server, Postgres)
      – Apache Spark, Hadoop, Hive
      – Databricks, EMR

Please apply online and we will be in touch should you meet the minimum requirements.

Desired Skills:

  • Data engineering
  • Machine learning
  • Data warehouses
  • ML engineering
  • Python

Employer & Job Benefits:

  • Bonus
