
AWS Data Engineer

Salary: Undisclosed



Job Title: AWS Data Engineer

Location: Kuala Lumpur, Malaysia

Job Type: Full-Time

Key Responsibilities:

  • Build and maintain robust data pipelines to ingest and process diverse data sources, ensuring high efficiency and reliability.
  • Design and develop real-time streaming and batch-processing pipelines to meet business requirements and enable data analytics.
  • Assemble large, complex data sets, ensuring they meet both functional and non-functional requirements.
  • Design, develop, and implement data pipelines for data migration, data collection, and analytics solutions.
  • Collaborate with stakeholders and data analyst teams to resolve data-related technical issues and support their infrastructure needs.
  • Work closely with Architects to help define the data architecture and technology stack.

Required Skills and Qualifications:

  • 3+ years of relevant experience as a Data Engineer, with hands-on expertise in AWS technologies.
  • Proficiency in Python programming and experience with the Hadoop stack (including Hive, Spark Core & Streaming, Kafka, Flink, and NiFi).
  • Experience working on stream transformations using Spark Streaming or Flink, including change data capture in a big data ecosystem.
  • Hands-on experience with building real-time and batch ingestion and transformation data pipelines.
  • Experience working with AWS services such as S3, EMR, and Redshift.
  • Experience in implementing Data Vault and star schema frameworks using Redshift for data warehousing.
  • Ability to work with DevOps teams to orchestrate workflows using Airflow.
  • Strong understanding of IPython and Big Data concepts, with hands-on SQL skills.
  • Strong troubleshooting and problem-solving abilities in data engineering contexts.

Desirable Skills:

  • Experience with Data Lake architecture and ETL processes on AWS.
  • Familiarity with additional AWS services such as Glue, Lambda, and Kinesis.
  • Experience working with data governance and data security best practices.

Interested candidates are welcome to send their resumes to [email protected]
