Job Description:
This role provides an exciting opportunity to roll out a new strategic initiative within the firm – the Enterprise Infrastructure Big Data Service. The Big Data Developer serves as a development and support expert responsible for the design, development, automation, testing, support, and administration of the Enterprise Infrastructure Big Data Service. The role requires experience with both Hadoop and Kafka, and will involve building and supporting a real-time streaming platform used by the firm's data engineering community. The successful candidate will be responsible for feature development, ongoing support and administration, and documentation of the service. The platform provides a messaging queue and a blueprint for integrating with existing upstream and downstream technology solutions.

Experience required:
The successful candidate will have the opportunity to work directly across the firm with developers, operations staff, data scientists, architects, and business constituents to develop and enhance the big data service.

  • Minimum 5 years of development experience
  • Strong technical / programming experience
  • Development and deployment of data applications
  • Design & implementation of infrastructure tooling and work on horizontal frameworks and libraries
  • Creation of data ingestion pipelines between legacy data warehouses and the big data stack
  • Automation of application back-end workflows
  • Building and maintaining back-end services built on multiple service frameworks
  • Maintaining and enhancing applications backed by Big Data computation engines
  • Eagerness to learn new approaches and technologies
  • Strong problem-solving skills
  • Strong programming skills, proven track record of coding ability and experience (hands-on back-end development)
  • Ability to effectively code in at least two programming languages (e.g. C#, Java, Python)
  • Excellent understanding of programming concepts
  • Background in computer science, engineering, physics, mathematics or equivalent
  • Experience working on Big Data platforms (vanilla Hadoop, Cloudera, or Hortonworks)
  • Excellent understanding of specific coding / scripting languages, e.g. Java, C#, Python, Perl, JavaScript
  • Solid understanding of Object-Oriented Design and ability to properly apply general design patterns and paradigms

Preferred:

  • Experience with Scala or other functional languages (Haskell, Clojure, Kotlin, Clean)
  • Experience with some of the following: Apache Hadoop, Spark, Hive, Pig, Oozie, ZooKeeper, MongoDB, Couchbase DB, Impala, Kudu, Linux, Bash, version control tools, continuous integration tools
