Data Engineer (40000064)
Job Purpose
Designs, develops, and operates data processing and integration pipelines to support analytics and business needs; contributes to resolving moderately complex data-related requirements while ensuring data systems remain stable, scalable, and compliant with relevant regulations.
Key Accountabilities (1)
(1) Data Flow Design & Standardization
Collaborate with business units, analytics, and data science teams to clarify data requirements.
Translate business requirements into technical designs and data processing specifications (one possible specification format is sketched after this list).
Contribute to the development of, and adherence to, data pipeline standards and data integration processes on the common platform.
Recommend adjustments to data architecture or pipeline implementation approaches to address emerging business needs.
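For illustration of what a standardized processing specification on the common platform might look like, here is a minimal, hypothetical sketch in Python; the PipelineSpec class, its fields, and the example values are assumptions, not actual Techcombank conventions.

```python
from dataclasses import dataclass, field

# Hypothetical sketch only: one way a team might standardize pipeline
# specifications on a common platform. All names and fields are assumptions.

@dataclass
class PipelineSpec:
    name: str                  # unique pipeline identifier
    source: str                # upstream system or table
    target: str                # destination table or topic
    schedule: str              # cron expression for batch runs
    owner: str                 # accountable team or engineer
    sla_minutes: int = 60      # maximum acceptable end-to-end latency
    tags: list[str] = field(default_factory=list)

# Example: a nightly load of card transactions into an analytics zone.
spec = PipelineSpec(
    name="card_txn_daily",
    source="core_banking.card_transactions",
    target="analytics.card_txn_enriched",
    schedule="0 2 * * *",
    owner="data-engineering",
    tags=["daily", "restricted"],
)
```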
Key Accountabilities (2)
(2) Data Processing Development & Optimization
Develop ETL/ELT pipelines using appropriate big data technologies to address different data use cases (a minimal pipeline is sketched after this list).
Optimize data pipelines for processing performance, scalability, and stability.
Create and manage reusable data assets to support analytics and machine learning models.
Collaborate with Data Scientists in preparing data for model development.
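As a hedged sketch of the ETL/ELT work described above, the following minimal PySpark batch job extracts raw records, aggregates them per customer per day, and writes a partitioned, reusable asset; all paths, table names, and columns are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F

# Minimal illustrative ETL job; paths and columns are assumptions.
spark = SparkSession.builder.appName("txn_daily_etl").getOrCreate()

# Extract: read raw transactions landed by an upstream ingestion job.
raw = spark.read.parquet("/data/raw/transactions/")

# Transform: drop test records and aggregate per customer per day.
daily = (
    raw.filter(~F.col("is_test"))
       .withColumn("txn_date", F.to_date("txn_ts"))
       .groupBy("customer_id", "txn_date")
       .agg(
           F.count("*").alias("txn_count"),
           F.sum("amount").alias("txn_amount"),
       )
)

# Load: write a partitioned, reusable data asset for analytics and ML.
(daily.write
      .mode("overwrite")
      .partitionBy("txn_date")
      .parquet("/data/curated/daily_txn_summary/"))
```

Partitioning the output by business date keeps date-bounded downstream reads cheap; a production job would also handle incremental loads and schema evolution.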
Key Accountabilities (3)
(3) Data Integration & System Operations
Participate in assessing source systems and planning data integration from multiple sources.
Collaborate with source system teams and receiving systems to implement appropriate data integration solutions.
Monitor inbound and outbound data flows, and proactively identify and resolve moderately complex issues (a simple monitoring check is sketched after this list).
Ensure data pipelines operate continuously without adversely affecting related systems.
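A minimal sketch of the kind of inbound-flow monitoring described above, assuming hypothetical Parquet feeds that carry an ingested_at timestamp column; the function name, thresholds, and alerting via print are illustrative assumptions, not an actual operational setup.

```python
from datetime import datetime

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("inbound_flow_check").getOrCreate()

def check_inbound_feed(path: str, min_rows: int, max_age_hours: int) -> list[str]:
    """Return human-readable issues found in one inbound feed, if any."""
    df = spark.read.parquet(path)
    issues = []

    # Volume check: a sharp drop in row count often signals an upstream failure.
    row_count = df.count()
    if row_count < min_rows:
        issues.append(f"{path}: only {row_count} rows (expected >= {min_rows})")

    # Freshness check: the newest record should be recent enough to meet the SLA.
    # Spark returns timestamps as naive datetimes, so we assume they are stored
    # in the session's local time and compare against a naive now().
    latest = df.agg(F.max("ingested_at")).first()[0]
    if latest is not None:
        age_hours = (datetime.now() - latest).total_seconds() / 3600
        if age_hours > max_age_hours:
            issues.append(f"{path}: last record is {age_hours:.1f}h old "
                          f"(SLA is {max_age_hours}h)")
    return issues

# Example: flag problems in a hypothetical daily card-transaction feed.
for issue in check_inbound_feed("/data/inbound/card_txn/", 10_000, 24):
    print("ALERT:", issue)
```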
Key Relationships - Direct Manager
Manager, Data Engineering
Key Relationships - Direct Reports
Key Relationships - Internal Stakeholders
Teams within the Data Office and relevant departments in the Bank
Key Relationships - External Stakeholders
Partners providing professional services
Success Profile - Qualification and Experiences
Education
Bachelor’s or Master’s degree in Statistics, Mathematics, Quantitative Analysis, Computer Science, Software Engineering, or Information Technology.
Experience
Minimum of 3 years' relevant experience in development, debugging, and scripting, working with big data technologies (e.g., Hadoop, Spark, Flink, Kafka), database technologies (e.g., SQL, NoSQL, graph databases), and programming languages (e.g., Python, R, Scala, Java, Rust, Kotlin).
Experience in building AI systems at scale for millions of customers (comparable to Techcombank's scale of 20 million customers) would be an advantage.
Experience in designing complex computational models for big data platforms and enhancing system performance is highly desirable.
English language proficiency in line with Techcombank’s policy requirements.
Hands-on experience in designing and developing data models and data processing workflows, and in applying data warehousing concepts and methodologies, including data pipeline optimization (e.g., with Spark; an example optimization is sketched after this list).
Proven experience in monitoring complex systems and resolving data and system issues through a consistent, structured, and algorithmic approach.
Strong knowledge of Information Security principles and applicable data regulations (e.g., CBTIA, PDPD).
Experience working in Agile teams to support digital transformation initiatives, with a solid understanding of Agile principles, practices, and Scrum methodology.
Experience in developing basic machine learning models.
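As one hedged example of the Spark pipeline optimization mentioned in the experience list above: broadcasting a small dimension table so a join avoids shuffling the large fact table. The paths, tables, and columns are assumptions for illustration.

```python
from pyspark.sql import SparkSession, functions as F

# Illustrative optimization sketch; all table paths and columns are assumptions.
spark = SparkSession.builder.appName("broadcast_join_demo").getOrCreate()

txns = spark.read.parquet("/data/curated/transactions/")     # large fact table
branches = spark.read.parquet("/data/reference/branches/")   # small dimension

# Broadcasting the small dimension avoids shuffling the large fact table,
# often the single biggest win in join-heavy batch pipelines.
enriched = txns.join(F.broadcast(branches), on="branch_id", how="left")

# Repartitioning by the write key keeps output files evenly sized and
# downstream partition-pruned reads efficient.
(enriched.repartition("txn_date")
         .write.mode("overwrite")
         .partitionBy("txn_date")
         .parquet("/data/curated/txn_by_branch/"))
```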