Software Engineer
About the job
Do you want your voice heard and your actions to count?
Discover your opportunity with Mitsubishi UFJ Financial Group (MUFG), the 5th largest financial group in the world. Across the globe, we’re 180,000 colleagues, striving to make a difference for every client, organization, and community we serve. We stand for our values, building long-term relationships, serving society, and fostering shared and sustainable growth for a better world.
With a vision to be the world’s most trusted financial group, it’s part of our culture to put people first, listen to new and diverse ideas and collaborate toward greater innovation, speed and agility. This means investing in talent, technologies, and tools that empower you to own your career.
Join MUFG, where being inspired is expected and making a meaningful impact is rewarded.
Responsibilities include:
- Performing data sourcing analysis, data quality checks, automated testing, exception processing, error handling and notification, and correction processing of source system data as it enters and is processed through the EDP.
- Building data pipelines at scale by extracting, cleaning, and transforming data using Python, Bash scripting, Spark, SQL, and Hive.
- Building dashboards, reports, and application performance monitoring using Tableau Desktop and Splunk.
- Designing and building infrastructure for big data workloads using AWS S3, Elastic MapReduce (EMR), Glue catalog, Step Functions, Redshift, CloudWatch, and CloudTrail.
- Automating and orchestrating data pipeline components in workflows that fetch and move data from different source systems.
- Supporting ingestion and transformation pipelines that handle data for analytical or operational uses across a broad range of business needs and enterprise data domains.
- Working with Business IT teams to proactively identify data quality issues, and coordinating with development groups to ensure data accuracy for business analysts, leadership groups, and other end users to aid in ongoing operational insights.
- Analyzing business and technical requirements for data integration from various data sources, and executing extract, transform, and load (ETL) processes on data from disparate sources across the organization.
- Developing components and applications by studying operations and designing and developing reusable services and solutions that support the automated ingesting, profiling, and handling of structured and unstructured data.
- Designing and implementing a robust set of controls and reconciliation tools and platforms to support point-to-point and end-to-end comprehensiveness controls and G/L reconciliations.
- Designing and building RESTful APIs, automated testing systems, event monitoring, and notification systems.
- Working with data providers and data consumers to build and deploy scalable models and standard output formats (SOFs) to production.
- Providing clear documentation on design decisions and workflows, and working with partners including the Business, Enterprise Architecture, Infrastructure, and the Chief Data Office to assist with data-related technical issues and support their data infrastructure needs.
- Handling user inquiries and providing level 3 production support, including onsite/offshore collaboration as needed.
- Maintaining best practices to facilitate optimized software development and continuous integration/continuous delivery (CI/CD).
- Leading, supporting, and coordinating code migration activities.
- Performing peer reviews and quality reviews of code and scripts.
Qualifications
Education: Bachelor’s degree in Computer Science, Computer Engineering, Management Information Systems, or a related field (or foreign equivalent degree).
Experience: 2 years of technical experience building data pipelines at scale by extracting, cleaning, and transforming data using Python, Bash scripting, Spark, SQL, and Hive; building dashboards, reports, and application performance monitoring using Tableau Desktop and Splunk; designing and building infrastructure for big data workloads using AWS S3, Elastic MapReduce (EMR), Glue catalog, Step Functions, Redshift, CloudWatch, and CloudTrail; and automating and orchestrating data pipeline components in workflows that fetch and move data from different source systems. Experience must include work in the banking industry.
Other: Required to work nights and weekends and to be on-call during non-business hours as needed for testing and deployment purposes.
Location: Charlotte, NC 28244
Reference internal requisition #10056134-WD.
We are committed to leveraging the diverse backgrounds, perspectives, and experience of our workforce to create opportunities for our people and our business. Equal Opportunity Employer: Minority/Female/Disability/Veteran.

The above statements are intended to describe the general nature and level of work being performed. They are not intended to be construed as an exhaustive list of all responsibilities, duties, and skills required of personnel so classified.

We are proud to be an Equal Opportunity/Affirmative Action Employer and committed to leveraging the diverse backgrounds, perspectives, and experience of our workforce to create opportunities for our colleagues and our business. We do not discriminate on the basis of race, color, national origin, religion, gender expression, gender identity, sex, age, ancestry, marital status, protected veteran and military status, disability, medical condition, sexual orientation, genetic information, or any other status of an individual or that individual’s associates or relatives that is protected under applicable federal, state, or local law.