The Data Engineering team is looking for people who are passionate about working in agile delivery environments and resolving the engineering challenges of building robust, scalable data systems aligned to enterprise data strategy. As Lead Data Engineer, you will be responsible for developing, constructing, and testing large-scale data analytics systems on the AWS cloud that will help address the disparate analytics needs of a growing organization.

Responsibilities
– Conceptualize, design, and implement analytics products that enhance Investments analytics capabilities.
– Design solutions aligned with the long-term architecture and technology strategy, using Amazon Web Services (AWS) for cloud development.
– Participate in the full development life cycle: requirements analysis, development, testing, and deployment.
– Work in a fast-paced environment, collaborating with developers, data engineers, architects, researchers, and data scientists.
– Ensure the architecture supports the requirements of the Investments business.
– Develop tools that prepare, transform, combine, and manage structured and unstructured data for use by Investments business users.
– Define and shape Investments' future technology and research process.

Skills

Must have
– University degree in Engineering or Computer Science preferred.
– Hands-on experience building data exploratory interfaces using notebooks (JupyterHub/Lab, Spark/Livy, Sparkmagic, Enterprise Gateway, matplotlib, Plotly/Dash, etc.).
– Hands-on expertise building data pipelines and applications, leveraging Kubernetes, Python, AWS SageMaker, PySpark, YARN, S3, Athena, Glue, Lake Formation, Step Functions, Airflow, and serverless frameworks.
– Experience with cloud-based data and analytics platforms and warehouses (Redshift/Spectrum, Databricks, Snowflake), BI tools, and OLAP systems (ClickHouse, Druid), including a mix of relational, non-relational, streaming, and event-based architectures.
– Familiarity with cloud technology best practices for distributing and analyzing big data in the cloud (formatting, partitioning, etc.).
– Experience with ETL pipelines, managing multiple datasets, and providing the necessary support.
– Knowledge of and experience driving infrastructure-as-code (IaC) concepts within an organization, leveraging Terraform, Ansible, Packer, Puppet/Chef, etc.
– Deep proficiency in Python, with experience using Spark, Pandas, or PySpark.
– Ability to work in an entrepreneurial environment and be a self-starter.
– Demonstrated ability to work with both abstract and concrete concepts and reconcile them for the appropriate audience and context preferred.
– Ability to quickly understand organizational dynamics and management priorities, and to work effectively in a fast-paced, results-driven company.
– Strong facilitation, negotiation, interpersonal, communication, and collaboration skills.
– Experience with a front-end framework (e.g., Angular, React) is a plus.
– Interest in the financial industry.