Job Title: Big Data Developer
Location: Kuala Lumpur, Malaysia
Employment Type: Full-time
Key Responsibilities:
- Design, develop, and maintain big data pipelines to ingest, process, and transform large datasets.
- Implement efficient data storage solutions using technologies such as Hadoop, Spark, Kafka, and Hive.
- Develop and manage ETL (Extract, Transform, Load) processes to integrate data from various sources.
- Optimize data processing performance and ensure scalability for large datasets.
- Work with data engineers, analysts, and business stakeholders to gather data requirements.
- Develop and maintain data models, ensuring data quality, integrity, and consistency.
- Implement data security best practices, ensuring compliance with privacy regulations.
- Collaborate with DevOps teams to automate deployment and monitor data pipelines.
- Troubleshoot performance issues and implement data recovery strategies when necessary.
- Document data processes, architecture, and technical specifications.
Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Data Engineering, Information Technology, or a related field.
- Proven experience as a Big Data Developer, Data Engineer, or similar role.
- Proficiency in Python, Scala, or Java for data processing.
- Strong experience with Apache Spark, Hadoop, Kafka, or Flink.
- Hands-on experience with SQL, NoSQL databases, and data warehousing solutions.
- Familiarity with cloud platforms like AWS, Azure, or Google Cloud Platform (GCP).
- Experience in building ETL pipelines and integrating data from various sources.
- Strong understanding of data partitioning, indexing, and performance tuning for large datasets.
- Excellent problem-solving skills and ability to optimize complex data workflows.
Interested candidates can send their resumes to [email protected]