Responsibilities
As a Data Engineer at CLIENT, specialising in Kafka pipelines and Elasticsearch, you will play a pivotal role in advancing our data infrastructure and analytics capabilities. Your responsibilities will include:

  • Designing, implementing, and maintaining robust data pipelines using Kafka, ensuring the efficient and reliable flow of data across our systems.
  • Leveraging your expertise in stream processing to optimise real-time data processing and streaming, enhancing our ability to analyse customer interactions.
  • Developing and maintaining Elasticsearch clusters, fine-tuning them for high performance and scalability.
  • Collaborating with cross-functional teams to extract, transform, and load (ETL) data into Elasticsearch for advanced analytics and search capabilities.
  • Troubleshooting data pipeline and Elasticsearch issues, ensuring the integrity and availability of data for analytics and reporting.
  • Participating in the design and development of data models and schemas to support business requirements.
  • Continuously monitoring and optimising data pipeline and Elasticsearch performance to meet growing data demands.
  • Collaborating with data scientists and analysts to enable efficient data access and query performance.
  • Contributing to the evaluation and implementation of new technologies and tools that enhance data engineering capabilities.
  • Demonstrating strong analytical, problem-solving, and troubleshooting skills to address data-related challenges.
  • Collaborating effectively with team members and stakeholders to ensure data infrastructure aligns with business needs.
  • Embodying the company values of playing to win, putting people over everything, driving results, pursuing knowledge, and working together.

Requirements
To excel in this role, you should possess the following qualifications and skills:

  • Proven experience in designing and implementing data pipelines using stream processing technologies.
  • Experience with end-to-end testing of analytics pipelines.
  • In-depth expertise in managing and optimising Elasticsearch clusters, including performance tuning and scalability.
  • Strong proficiency with data extraction, transformation, and loading (ETL) processes.
  • Familiarity with data modeling and schema design for efficient data storage and retrieval.
  • Proficiency in troubleshooting data pipeline and Elasticsearch-related issues to ensure data integrity and availability.
  • Solid programming and scripting skills, particularly in languages like Python, Scala, or Java.
  • Knowledge of DevOps and automation practices related to data engineering.
  • Excellent communication and collaboration skills to work effectively with cross-functional teams.
  • Strong analytical and problem-solving abilities, with a keen attention to detail.
  • A commitment to staying up to date with the latest developments in data engineering and technology.
  • Alignment with our company values and a dedication to driving positive change through [URL Removed]

Tech Stack

As a Data Engineer with a focus on Kafka pipelines and Elasticsearch, you will work with the following technologies:

Data Pipelines:

  • Kafka / stream processing
  • Python
  • Redis

Data Storage and Analysis:

  • Elasticsearch
  • Elasticsearch cluster management and optimisation
  • PostgreSQL

DevOps:

  • AWS

Nice to have (but not required):

  • Experience with data engineering in an agile / scrum environment.
  • Familiarity with ksqlDB.
  • Familiarity with data lakes and querying them.
  • Experience with integrating machine learning models into data pipelines.
  • Familiarity with other data-related technologies and tools.

Desired Skills:

  • Kafka pipelines
  • Elasticsearch
  • Python
  • AWS
  • PostgreSQL

About The Employer:

Description

CLIENT, an award-winning leader in contact centre AI software, helps financial services companies remain compliant and improve customer service by monitoring 100% of their customer interactions. CLIENT uses artificial intelligence to transcribe and automatically analyse and review interactions, identifying and handling problematic interactions (customer complaints and interactions with vulnerable customers) according to regulatory requirements. The AI solution “listens” for potential problems and compliance breaches and alerts users (supervisors and quality assurance teams) to calls that need further review and remediation.
Financial services companies have a huge compliance problem, particularly in the context of contact centre quality assurance. These companies struggle to monitor 100% of customer interactions due to the high cost of employing ever more quality assurance assessors; as a result, the average company monitors only 4-5% of customer interactions. This leaves a large number of interactions unchecked, including complaints, compliance breaches and poor agent behaviour.
That’s where CLIENT comes in! CLIENT has built strong compliance models and cutting-edge software to revolutionise the contact centre quality assurance process. In 2021, CLIENT was selected as the winner of the Accenture Blue Tulip Awards and the KPMG Digital Innovation Challenge. All of this has been achieved by our team of highly-motivated, diverse and purpose-driven individuals.

Employer & Job Benefits:

  • Flexitime
  • Travel
  • Study Assistance

Learn more/Apply for this position