• 3-5 years’ experience in computer science, software or computer engineering, applied math, physics, statistics, or a related field.
  • Experience with wireless and network communication technologies.
  • 3-5 years’ experience within management information systems / system analysis.
  • 3-5 years’ experience with data lake and warehousing solutions.
  • Experience with data analytics & engineering, machine learning and AI.
  • 3-5 years’ experience with Python, Java, Spark, C++, and SQL databases.
  • 3-5 years’ experience with Kafka, Elasticsearch/OpenSearch, and containerised application development (e.g., Docker).
  • Experience with workflow management tools such as Camunda.
  • Experience with distributed systems and cluster orchestration systems such as Kubernetes.
  • Experience automating the deployment, scaling, and management of containerised services and microservice architectures (i.e., DevOps).
  • Experience building and optimising ‘big data’ pipelines, architectures, and data sets.
  • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
  • Strong analytical skills for working with unstructured datasets.
  • Experience building processes that support data transformation, data structures, metadata, dependency, and workload management.
  • A successful history of manipulating, processing and extracting value from large, disconnected datasets.
  • Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores.
  • Bachelor’s degree in computer/software engineering or computer science.

Master’s or doctoral-level degree (advantageous)

Desired Skills:

  • Python
  • C++
  • Java
  • SQL databases
  • Kafka
  • Camunda

Desired Work Experience:

  • 2 to 5 years


Learn more/Apply for this position