Data Engineer [T500-4439]
About the job
As a Data Engineer, you will be part of the Foundational Platform Engineering team in Enterprise Data and Analytics, which is building a strategic data lakehouse. You will be responsible for the design, development and deployment of data transformation features aligned with the business roadmap, and for driving efficiencies that enhance the availability, performance and stability of strategic data services.
You are responsible for delivering features that are assigned to you. Depending on the feature, you may need to collaborate with teammates or deliver a standalone piece on your own. In all cases you will bring a mix of technical & business analytical skills, a passion for Customers and Bankers, and an open mindset.
You may be asked to support or fix items that you or your team have delivered.
You will be accountable for working with the Product Owner, the Head of Solution/Service Architects and the Scrum Master to define Features or User Stories.
It is expected that the role holder will have the following qualifications and experience:
- 5+ years financial services experience (technical experience within financial services preferable)
- Ability to clearly articulate complex technical issues
- Delivery experience
- Tertiary Qualification (Banking & Finance, Business, Economics)
- Understanding of data modelling principles.
- Prior experience implementing data models.
- Knowledge of Python and shell scripting.
- Ability to use industry agile tools such as Confluence and Rally effectively.
- Extensive demonstrated experience across Data Engineering
- Solid experience in large-scale data transformation, and in the design and delivery of high-quality data processing pipelines and solutions in a professional services environment.
- Experience in data lake or warehouse integrity and reconciliation; tracking and reconciling data movement and transformation.
- Prior experience with Snowflake is well regarded.
- Technical skills in software delivery such as CI/CD, Automated Deployments, Automated Testing, Build Servers, Software / Source Code Configuration Management.
- Expertise in designing and building maintainable and scalable end-to-end big data features using HDFS, Kafka, Apache Spark, Hive, EMR, Airflow, Python or other similar tools.
- Extensive knowledge of core AWS data services such as: S3, RDS, Redshift, Glue, Athena, DynamoDB, EMR, Kinesis, Kinesis Firehose.
- Ability to provide technical mentoring and guidance to less experienced members of the development team
- Drive hands-on design and implement best-practice CI/CD across the team.
- Work with delivery teams to achieve success through the adoption of CI/CD and DevOps practices, processes and tooling.