
Data Engineer

Salary undisclosed


Together, we'll make travel better.

What you'll be doing:

As the EDW data engineer in a multi-national company, you will be the primary lead responsible for the cloud-based enterprise data warehouse in Snowflake, which integrates data from over 25 data sources for BI and insight generation. You will work closely with cloud engineers, the data architect, and data pipeline engineers at HQ, as well as BI analysts across the world. To continuously enrich the content for new analytics initiatives, you will design data models, build data warehouse pipelines, and manage the health of the platform. You are encouraged to explore the latest big data and cloud technologies, and we offer plenty of room to extend your exposure to them.

Job Responsibilities:

  • The technical lead will also be responsible for owning the technical design, managing the code repository, and ensuring compliance with the engineering team's guidelines on practices and processes as mandated by PPG.
  • Innovate: Research industry and market trends and refresh internal technology standards.
  • Design data models for BI and analytics purposes.
  • Design and develop scalable, reliable ETL pipelines that integrate data from the Data Lake.
  • Establish data governance models within the enterprise data warehouse, covering metadata management, change data capture, and schema evolution in data sources.
  • Improve DataOps practices for continuous development and testing in the mission-critical data warehouse.
  • Experiment with real-time data for next-generation data warehousing.
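To illustrate the kind of ETL pipeline these responsibilities describe, here is a minimal, hypothetical sketch. The table, columns, and booking records are invented for illustration, and SQLite stands in for a Snowflake warehouse; a real pipeline would use the Snowflake connector and one of the 25+ actual source feeds.

```python
import json
import sqlite3

# Hypothetical raw feed from a landing zone; in practice this would be
# pulled from the Data Lake, not embedded in the script.
RAW_EVENTS = json.dumps([
    {"booking_id": 1, "city": "Tokyo", "amount_usd": "120.50"},
    {"booking_id": 2, "city": "Paris", "amount_usd": "98.00"},
])

def extract(raw: str) -> list[dict]:
    """Pull raw records from the source (here, a JSON string)."""
    return json.loads(raw)

def transform(records: list[dict]) -> list[tuple]:
    """Cast types and shape rows for the warehouse fact table."""
    return [(r["booking_id"], r["city"], float(r["amount_usd"]))
            for r in records]

def load(rows: list[tuple], conn: sqlite3.Connection) -> None:
    """Idempotently load rows into the target table (upsert on the key)."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS fact_bookings "
        "(booking_id INTEGER PRIMARY KEY, city TEXT, amount_usd REAL)"
    )
    conn.executemany(
        "INSERT OR REPLACE INTO fact_bookings VALUES (?, ?, ?)", rows
    )
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW_EVENTS)), conn)
total = conn.execute("SELECT SUM(amount_usd) FROM fact_bookings").fetchone()[0]
```

The upsert keyed on `booking_id` is what makes reloads safe, a property the posting's emphasis on reliability and continuous testing implies.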

Job Requirements:

  • Degree holder in Information Systems, Computer Science, Big Data or related disciplines.
  • 3+ years of experience with cloud enterprise data warehouses; Snowflake or AWS Redshift preferred.
  • A track record of data model design for blending internal and external data.
  • Proficiency in SQL is a must, plus proficiency in at least one of Python (PySpark), Java, Scala, or R.
  • Experience in handling NoSQL data (e.g. JSON, Delta tables) within a data warehouse.
  • Understanding of cloud data services (e.g. AWS, Azure, Databricks).
  • Basic understanding of ML/AI models, cognitive services, IoT, and data streaming.
  • Good analytical, organizational, and communication skills.
  • Good business sense.
  • Good command of both written and spoken English, and preferably Mandarin/Cantonese (to deal with China clients and partners).
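The NoSQL requirement above usually means flattening semi-structured documents into tabular columns before they can feed BI tools. A minimal sketch (the nested trip document and the dotted-path naming convention are hypothetical, loosely modeled on how warehouse VARIANT columns are often unnested):

```python
import json

# Hypothetical nested document, like those held in a warehouse VARIANT column.
doc = json.loads('{"trip": {"id": 7, "legs": [{"from": "HKG", "to": "NRT"}]}}')

def flatten(obj, prefix=""):
    """Recursively flatten nested dicts/lists into dotted column names."""
    cols = {}
    if isinstance(obj, dict):
        for key, val in obj.items():
            cols.update(flatten(val, f"{prefix}{key}."))
    elif isinstance(obj, list):
        for i, val in enumerate(obj):
            cols.update(flatten(val, f"{prefix}{i}."))
    else:
        cols[prefix[:-1]] = obj  # strip the trailing dot
    return cols

flat = flatten(doc)
# e.g. flat["trip.legs.0.from"] == "HKG"
```

Each leaf value becomes one column whose name records its path, so schema evolution in the source (new nested fields) surfaces as new columns rather than load failures.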