
HADOOP DEVELOPER

RM 10,000 - RM 12,999 per month




We are seeking a highly skilled Hadoop Developer to design, develop, and implement data solutions within our big data ecosystem. The ideal candidate will have extensive experience with Hadoop technologies, a deep understanding of distributed systems, and a passion for analyzing and managing large datasets. You will work closely with data engineers, analysts, and other stakeholders to build robust, scalable, and efficient data pipelines and applications.

Key Responsibilities:

1) Design and Development: Develop and maintain scalable data pipelines and solutions using Hadoop technologies such as HDFS, MapReduce, YARN, Hive, and Pig. Optimize data processing workflows for efficiency and scalability. Build and maintain ETL processes for ingesting structured, semi-structured, and unstructured data.
2) Data Integration: Integrate data from various sources and ensure its quality and accuracy. Work with real-time data streaming frameworks such as Apache Kafka, Flume, or Spark Streaming.
3) Performance Optimization: Monitor and improve the performance of Hadoop jobs and processes. Tune Hadoop clusters and jobs for high performance.
4) Collaboration: Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and provide tailored solutions. Work with DevOps teams to deploy and manage data applications in production environments.
5) Troubleshooting: Identify, analyze, and resolve issues in the Hadoop ecosystem. Ensure the security and integrity of data within the ecosystem.

Required Skills and Qualifications:

Educational Background: Bachelor's degree in Computer Science, Information Technology, or a related field.

Technical Expertise:

1) 3+ years of experience in big data development, with a strong focus on Hadoop technologies.
2) Hands-on experience with Hadoop components such as HDFS, YARN, Hive, Pig, and HBase.
3) Proficiency in programming languages such as Java, Scala, or Python.
4) Strong SQL skills and experience with relational databases.
5) Knowledge of data modeling, ETL frameworks, and distributed computing principles.
6) Experience with real-time processing tools such as Apache Kafka, Spark, or Flink.
7) Familiarity with cluster management tools such as Ambari, Cloudera Manager, or Hortonworks.

Soft Skills:

1) Strong problem-solving and analytical skills.
2) Excellent communication and teamwork abilities.
3) Ability to handle multiple priorities and deliver on tight deadlines.

Preferred Qualifications:

Experience with cloud-based big data platforms such as AWS EMR, Google BigQuery, or Azure HDInsight.
Familiarity with NoSQL databases such as Cassandra or MongoDB.
Understanding of data security practices in big data environments.
Certification in Hadoop or related technologies (e.g., Cloudera, Hortonworks) is a plus.

Why Join Us?

Be part of an innovative team working on cutting-edge data technologies.
Competitive salary and benefits package.
Career development opportunities in the growing field of big data.
Collaborative work culture that fosters learning and growth.
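For a sense of the day-to-day work, here is a minimal sketch of the kind of pipeline the Data Integration responsibility describes: consuming a Kafka topic with Spark Structured Streaming and landing the records in HDFS. This is an illustrative example only, not our actual codebase; the broker address, topic name, and HDFS paths are hypothetical placeholders.

from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = (
    SparkSession.builder
    .appName("kafka-to-hdfs-sketch")
    .getOrCreate()
)

# Subscribe to a (hypothetical) Kafka topic of raw events.
events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "raw-events")                 # placeholder topic
    .load()
)

# Kafka delivers key/value as binary; cast the payload to a string column.
parsed = events.select(col("value").cast("string").alias("payload"))

# Continuously append the stream to HDFS as Parquet, checkpointing so the
# job can recover from failures without reprocessing or losing data.
query = (
    parsed.writeStream
    .format("parquet")
    .option("path", "hdfs:///data/raw_events")          # placeholder path
    .option("checkpointLocation", "hdfs:///chk/raw_events")
    .start()
)

query.awaitTermination()

Candidates comfortable writing, tuning, and troubleshooting jobs of this shape, in Python, Scala, or Java, will feel at home in this role.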