Work-from-home position

The Data Engineering team is focused on designing, building, and troubleshooting data processing systems that are secure, reliable, fault-tolerant, scalable, and efficient.

We are currently building a completely new real-time, event-driven architecture for data processing, using open-source and serverless technologies such as Debezium, Kafka, Flink, and BigQuery. This new lakehouse will serve as the central source of truth that internal users across the company rely on to drive their daily, monthly, and quarterly decisions.
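To give a flavour of the stack, here is a minimal sketch (an illustration under assumptions, not our actual codebase) of a Flink job reading Debezium change events from Kafka; the broker address, topic name, and consumer group below are hypothetical placeholders, and the print sink stands in for a real lakehouse sink:

    import org.apache.flink.api.common.eventtime.WatermarkStrategy;
    import org.apache.flink.api.common.serialization.SimpleStringSchema;
    import org.apache.flink.connector.kafka.source.KafkaSource;
    import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class CdcIngestSketch {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

            // Debezium publishes row-level change events to Kafka topics;
            // here we read them as raw JSON strings. The broker, topic,
            // and group id are placeholders, not real endpoints.
            KafkaSource<String> source = KafkaSource.<String>builder()
                    .setBootstrapServers("kafka:9092")
                    .setTopics("inventory.cdc.orders")
                    .setGroupId("lakehouse-ingest")
                    .setStartingOffsets(OffsetsInitializer.earliest())
                    .setValueOnlyDeserializer(new SimpleStringSchema())
                    .build();

            env.fromSource(source, WatermarkStrategy.noWatermarks(), "debezium-cdc")
               .print(); // in production: parse, clean, and sink to the lakehouse instead

            env.execute("cdc-ingest-sketch");
        }
    }

Reading raw strings keeps the sketch simple; a real job would deserialise the Debezium envelope and route records to typed tables.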

Our client is growing quickly, and the data within the organisation is growing with it. This brings unique and interesting challenges, along with plenty of opportunities for you to shape the tools, technologies, and culture around data in the company.

This position reports to the Data Systems Director.

Your responsibilities will include:

  • Designing, developing, testing, and maintaining data architectures
  • Preparing data for descriptive, predictive, and prescriptive modeling
  • Automating repetitive tasks and manual processes related to data usage
  • Optimizing data delivery
  • Designing, developing, and testing large stream data pipelines to ingest, aggregate, clean, and distribute data models ready for analysis (see the sketch after this list)
  • Ensuring the highest standard of data integrity
  • Leveraging best practices in continuous integration and delivery
  • Collaborating with other engineers, ML experts, analysts, and stakeholders to produce the most efficient and valuable solutions
  • Contributing to our data democratisation and literacy vision by making accessible and easy-to-use data products and tools
  • Implementing features, technology, and processes that move us towards industry best practices, improving on scalability, efficiency, reliability, and security
  • Operating and owning systems in production, including responding to incidents
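As a minimal, hypothetical illustration of the ingest-aggregate-distribute pattern above (sketched here with Flink's DataStream API), the example below counts page views per page over ten-second event-time windows; the in-memory source and print sink are stand-ins for a Kafka topic and a warehouse table:

    import org.apache.flink.api.common.eventtime.WatermarkStrategy;
    import org.apache.flink.api.java.tuple.Tuple3;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows;
    import org.apache.flink.streaming.api.windowing.time.Time;

    public class PageViewRollupSketch {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

            // In-memory stand-in for a Kafka stream of (page, views, epoch-ms timestamp) events.
            env.fromElements(
                    Tuple3.of("home", 1, 1_000L),
                    Tuple3.of("checkout", 1, 2_000L),
                    Tuple3.of("home", 1, 3_000L))
               .assignTimestampsAndWatermarks(
                    WatermarkStrategy.<Tuple3<String, Integer, Long>>forMonotonousTimestamps()
                            .withTimestampAssigner((event, ts) -> event.f2))
               .keyBy(event -> event.f0)                              // partition by page
               .window(TumblingEventTimeWindows.of(Time.seconds(10))) // 10-second tumbling windows
               .sum(1)                                                // total views per page per window
               .print();                                              // stand-in for a warehouse sink

            env.execute("pageview-rollup-sketch");
        }
    }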

Attributes required:

  • Works well with others and is passionate about helping people be their best
  • Is a team player, an active listener, and a mentor who communicates well
  • Shows solid reasoning and decision making, with the ability to work under pressure
  • Is passionate about technology, systems, and data
  • Is curious, always learning, and keeping up to date with the industry
  • Has a deep understanding of data pipelining, streaming, and Big Data technologies, methods, patterns, and techniques
  • Has a solid grasp of data modeling, schema design, and the design and implementation of data warehouses and data lakes
  • Can troubleshoot complex database operations and performance issues
  • Can automate tasks using shell scripting or writing small applications

Qualifications & Experience:

  • A Computer Science degree or 3 years of relevant industry experience
  • Experience with open-source relational database systems (e.g. MySQL, PostgreSQL)
  • Significant technical experience and a proven track record of data modeling and schema design
  • A thorough understanding of database and data warehousing principles (e.g. OLAP, data marts, star and snowflake schemas)
  • The ability to write code (we use Java and Python)
  • Familiarity with CI/CD tools such as Jenkins, Travis CI, or CircleCI
  • Experience with Kafka, PubSub, or other event-based systems
  • Experience with stream data pipeline frameworks or solutions such as Apache Flink, Apache Beam, Storm, or Databricks
  • Experience with data warehousing, data lakes, lambda/kappa architectures
  • Experience working in cloud environments and with containerisation frameworks, tools, and platforms (e.g. Docker, Kubernetes, GKE)

Desired Skills:

  • Big data
  • Java
  • Python
  • OLAP
  • Data Marts
  • Jenkins
  • SQL
