We seek an individual with experience in defining how data elements relate to one another and how data is processed and stored within the system.

Requirements:
  • Graduate degree in Computer Science, Statistics, Information Systems, or another quantitative field.
  • 6+ years' experience in a Data Engineer role.

Knowledge, Skills, and Abilities:

  • Experience with big data tools: Hadoop, Spark, Kafka, etc.
  • Experience with relational SQL and NoSQL databases.
  • Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
  • Experience with AWS cloud services: EC2, EMR, RDS, Redshift, etc.
  • Experience with stream-processing systems: Storm, Spark Streaming, etc.
  • Experience with object-oriented/functional scripting languages: Python, Java, C++, Scala, etc.
  • Strong project management and organizational skills.

Questions to be answered:
Please answer these questions when submitting your CV. If they are not answered, your application will be automatically declined:

Question 1:
What is the difference between a logical and a physical data model, and what is each used for?

Question 2:
What does a data model contain?

Question 3:
What is a non-identifying relationship?
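
To clarify what Question 3 refers to, the sketch below contrasts an identifying relationship (the child's primary key includes the parent's key) with a non-identifying one (the child only carries a foreign key and keeps its own independent primary key). It is a minimal illustration in Python using the standard-library sqlite3 module; the table names are hypothetical.

import sqlite3

# Minimal sketch (hypothetical table names) of the two relationship types,
# expressed as SQLite DDL executed through the standard-library driver.
conn = sqlite3.connect(":memory:")

# Identifying relationship: the child's primary key includes the parent's key,
# so an order line cannot be identified without its parent order.
conn.execute("CREATE TABLE orders (order_id INTEGER PRIMARY KEY)")
conn.execute("""
    CREATE TABLE order_line (
        order_id INTEGER NOT NULL REFERENCES orders(order_id),
        line_no  INTEGER NOT NULL,
        PRIMARY KEY (order_id, line_no)  -- parent key is part of the child key
    )
""")

# Non-identifying relationship: the child references the parent through a plain
# foreign key but keeps its own independent primary key.
conn.execute("CREATE TABLE customer (customer_id INTEGER PRIMARY KEY)")
conn.execute("""
    CREATE TABLE account (
        account_id  INTEGER PRIMARY KEY,   -- identity does not depend on customer
        customer_id INTEGER REFERENCES customer(customer_id)
    )
""")

conn.close()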

Desired Skills:

  • Spark
  • Kafka
  • SQL database
  • Relational SQL
  • Azkaban
  • Luigi
  • Airflow
  • EC2
  • EMR
  • Redshift
  • Data engineer
  • Data analyst
  • BSc Computer Science
  • BSc Statistics
  • Quantitative
  • Big Data Development
  • Data engineering
  • Big data
  • Amazon Redshift
  • Big Data Analytics
  • Data Modeling
  • Hadoop

Desired Work Experience:

  • 2 to 5 years Banking
  • 5 to 10 years Data Analysis / Data Warehousing

Desired Qualification Level:

  • Degree
