Experience Required
Job Family: Data Monetisation
Years: 8-10
Experience Description:
- Experience with big data tools: Hadoop, Spark, Kafka, etc.
- Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
- Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
- Experience with AWS cloud services: EC2, EMR, RDS, Redshift.
- Experience with stream-processing systems: Storm, Spark Streaming, etc.
- Experience with object-oriented and functional programming languages: Python, Java, C++, Scala, etc.
- Strong analytical skills for working with unstructured datasets.
- Experience building processes that support data transformation, data structures, metadata, dependency, and workload management.
- A successful history of manipulating, processing, and extracting value from large, disconnected datasets.
- Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores.