8 May 2026

Expert, AI Engineer

Category: Data and Analytics Division
Facility: Data & Analytics

Key Accountabilities

• Lead the team to improve scalability, reliability, and cost-efficiency of the Data Platform. 
• Design, build, and deploy batch and streaming data pipelines using Spark, orchestrated via Airflow.
• Develop libraries and frameworks for data ingestion, transformation, and governance with clean architecture principles. 
• Collaborate with Data Architects to design/review data models, enforce data contracts, and maintain schema governance. 
• Optimize performance with partitioning, caching, Z-Ordering, and metadata management in lakehouse environments (Delta/Iceberg/Hudi). 
• Ensure security and compliance: IAM, encryption, secrets management, and GDPR/CCPA adherence. 
• Drive CI/CD for data workflows, IaC (Terraform), and container orchestration (Kubernetes). 
• Monitor SLOs/SLAs, implement alerting, and lead incident response and postmortems.
• Design and operate end-to-end ML/LLM pipelines: data prep, training, evaluation, and deployment. 
• Build RAG architectures, vector search, and embedding pipelines for LLM-based applications. 

Success Profile - Qualifications and Experience

• Bachelor’s or Master’s degree in Computer Science, Software Engineering, Information Technology, or a related technical field 
• Proficiency in English is required
• 5+ years of experience as a Data Engineer or Software Engineer
• Experience with cloud platforms (AWS/Azure/GCP)
• Highly proficient in at least one programming language (Python/Scala/Java)
• Strong experience in systems architecture, particularly complex, scalable, and fault-tolerant distributed systems
• Solid grasp of multi-threading, atomic operations, and computation frameworks such as Spark (DataFrame, SQL, etc.), along with distributed storage and distributed computing
• Understanding of designs for resilience, fault tolerance, high availability, and high scalability
• Tools: CI/CD, GitLab, etc.
• Strong communication and teamwork skills
• Open-minded and willing to learn new things
• Experience with Databricks (Delta Lake, Unity Catalog, Delta Live Tables) or similar lakehouse technologies is a strong plus. 
• Proven ability in performance tuning and optimization for Big Data workloads (Spark/Flink, partitioning, shuffle strategies, caching). 
• Familiarity with modern data transformation frameworks (e.g., dbt).
• Experience in AI and LLM technologies is a plus, including prompt engineering, embeddings, and retrieval-augmented generation (RAG). 
• Hands-on experience with vector databases (ChromaDB, Vector Search) and LLMOps practices.
