We are looking for an experienced professional to join our team as a Data Engineer at consultant or manager level. The role offers the chance to collaborate with clients and design secure, scalable, and cost-effective solutions leveraging on-premises technologies and the cloud (Azure or AWS). To be successful, you'll need a blend of cloud knowledge, data engineering experience, and a knack for clear communication and creative problem-solving.
Your responsibilities will include, but are not limited to, the following duties:
- Design and implement scalable data pipelines using cloud services such as AWS Glue, Redshift, S3, Lambda, EMR and Athena, as well as Microsoft Fabric and Databricks.
- Develop and maintain ETL processes to transform and integrate data from various sources (a brief illustrative sketch follows this list).
- Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver high-quality data solutions.
- Optimise and tune the performance of data pipelines and queries.
- Ensure data quality and integrity through robust testing and validation processes.
- Implement data security and compliance best practices.
- Monitor and troubleshoot data pipeline issues and ensure timely resolution.
- Stay updated with the latest developments in Data Engineering technologies and best practices.
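For illustration only, the kind of ETL pipeline work described above might look like the minimal PySpark sketch below; the S3 paths, column names, and business logic are assumed placeholders rather than part of this role.

```python
# Minimal illustrative ETL sketch (placeholder paths and columns, not a client solution).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: read raw order events from an assumed landing zone
raw = spark.read.json("s3://example-landing-zone/orders/")

# Transform: keep completed orders and build a daily spend aggregate per customer
daily_spend = (
    raw.filter(F.col("status") == "COMPLETED")
       .withColumn("order_date", F.to_date("order_timestamp"))
       .groupBy("order_date", "customer_id")
       .agg(F.sum("amount").alias("daily_spend"))
)

# Load: write partitioned Parquet to a curated zone for downstream analytics
daily_spend.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-curated-zone/orders_daily/"
)
```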
The successful candidate must have comprehensive experience in the above, and must also meet the following requirements:
- Education: A Bachelor's degree from an accredited university.
- Industry experience: A minimum of two years of hands-on experience is required. Prior experience in the financial services industry would be beneficial but not mandatory.
- Strong foundation in data engineering: We value hands-on experience and proven skills in building and managing data solutions using on-premises technologies or the cloud. While a Bachelor's degree in Computer Science, Engineering, or a related field is a plus, your ability to demonstrate expertise matters most.
- Experience with core cloud data services: Familiarity with AWS Glue, Redshift, S3, Lambda, EMR, Athena, Microsoft Fabric, or Databricks.
- Big data technologies: Knowledge of Apache Spark, Hadoop, or Kafka.
- Scripting & Programming proficiency: Programming skills in Python, Pandas & SQL
- Database management: Experience working with relational databases such as Amazon RDS, Microsoft SQL Server, Azure SQL Database, or PostgreSQL.
- Solid data engineering background: Knowledge of and experience with data modelling, ETL processes, and data warehousing.
- Infrastructure as code (IaC) proficiency: Experience with tools like AWS CloudFormation, Terraform, or Azure ARM/Bicep for automating infrastructure provisioning and deployments is crucial (see the illustrative sketch after this list).
- DevOps fluency: We seek a candidate with experience in CI/CD tools to streamline software development and delivery.
- Communication and collaboration: Excellent communication, problem-solving, and analytical skills are key, as is the ability to present complex technical concepts in a clear and concise way.
- Cloud Certification (a plus): While not mandatory, possessing a relevant Cloud certification demonstrates your commitment to professional development and validates your understanding of Cloud services and best practices.
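As a purely illustrative example of the infrastructure-as-code proficiency mentioned above, a minimal AWS CDK (Python) sketch might look like the following; the stack and bucket names are assumptions for illustration, not a prescribed toolchain for the role.

```python
# Minimal infrastructure-as-code sketch using AWS CDK v2 in Python.
# Stack and bucket names are illustrative assumptions only.
import aws_cdk as cdk
from aws_cdk import aws_s3 as s3


class DataLakeStack(cdk.Stack):
    def __init__(self, scope, construct_id, **kwargs):
        super().__init__(scope, construct_id, **kwargs)

        # Encrypted, versioned bucket for a raw data layer, with public access blocked
        s3.Bucket(
            self,
            "RawDataBucket",
            versioned=True,
            encryption=s3.BucketEncryption.S3_MANAGED,
            block_public_access=s3.BlockPublicAccess.BLOCK_ALL,
        )


app = cdk.App()
DataLakeStack(app, "DataLakeStack")
app.synth()
```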
The following would also be advantageous:
- Relevant consulting experience with banks and insurers.
- A strong desire to learn and to build business knowledge.
Desired Skills:
- Data engineering
- SQL
- Power BI
- Big data