Job Description:
Spark:
· Develop and optimize Spark-based data processing for large-scale ETL and analytics.
· Implement Spark SQL, DataFrames, RDDs, and Streaming for efficient data transformations.
· Optimize Spark job performance, tuning memory, partitioning, and execution plans.
· Handle real-time and batch data processing using Spark Streaming or Structured Streaming.
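As a language-neutral illustration (not Spark code), the difference between the batch and micro-batch patterns mentioned above can be sketched in plain Python; Structured Streaming processes incoming data incrementally in a similar micro-batch style. All function names and data here are hypothetical:

```python
from itertools import islice

def transform(record):
    """Hypothetical per-record transformation (e.g. parse and enrich)."""
    return record * 2

def process_batch(records):
    """Batch mode: transform the full dataset in one pass."""
    return [transform(r) for r in records]

def process_micro_batches(stream, batch_size=3):
    """Micro-batch mode: consume the stream in small fixed-size chunks,
    analogous to how Structured Streaming handles incoming data."""
    it = iter(stream)
    while chunk := list(islice(it, batch_size)):
        yield process_batch(chunk)

data = [1, 2, 3, 4, 5, 6, 7]
batch_result = process_batch(data)
stream_result = [r for chunk in process_micro_batches(data) for r in chunk]
# Both modes produce the same transformed output; only the scheduling differs.
```

In real Spark workloads the same idea applies: the transformation logic is shared, while the engine decides whether to run it over a full partition set or over each arriving micro-batch.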
SQL:
· Write and optimize complex SQL queries for data extraction, transformation, and aggregation.
· Work on query performance tuning, indexing, and partitioning for optimized execution.
· Develop and manage stored procedures, functions, and views for data operations.
· Ensure data consistency, integrity, and security in relational database systems.
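A minimal, self-contained sketch of the kind of SQL work described above (indexing, views, aggregation), using Python's built-in sqlite3; the table, index, and view names are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Hypothetical sales table.
cur.execute("CREATE TABLE sales (region TEXT, amount REAL)")
cur.executemany("INSERT INTO sales VALUES (?, ?)",
                [("north", 100.0), ("north", 50.0), ("south", 75.0)])

# Index to speed up queries that filter or group by region.
cur.execute("CREATE INDEX idx_sales_region ON sales (region)")

# View encapsulating an aggregation, as in managed views for data operations.
cur.execute("""
    CREATE VIEW region_totals AS
    SELECT region, SUM(amount) AS total
    FROM sales
    GROUP BY region
""")

rows = cur.execute(
    "SELECT region, total FROM region_totals ORDER BY region"
).fetchall()
# rows == [('north', 150.0), ('south', 75.0)]
conn.close()
```

Production systems would add constraints and permissions on top of this, but the pattern of pushing aggregation into indexed, view-backed queries is the same.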
Java (Scala knowledge preferred):
· Java expertise is essential because Scala runs on the JVM, making JVM tuning critical for Spark-based workloads.
· Develop data processing applications using Scala (running on JVM) and Java-based backend services.
· Optimize JVM performance, memory management, and garbage collection to enhance Spark job execution.
· Utilize Scala’s functional programming features for efficient data transformations in Spark.
· Implement multithreading, concurrency, and parallel processing in Java for high-performance applications.
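The last bullet refers to Java's thread-pool pattern (e.g. `ExecutorService`). To keep this document's examples in one language, here is the same pattern sketched with Python's `concurrent.futures`; the task function is hypothetical:

```python
from concurrent.futures import ThreadPoolExecutor

def unit_of_work(n):
    """Hypothetical independent task; in a Java service this would be a
    Runnable/Callable submitted to an ExecutorService."""
    return n * n

# Thread-pool pattern: submit independent tasks to a fixed-size pool
# and collect the results in submission order.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(unit_of_work, range(5)))
# results == [0, 1, 4, 9, 16]
```

The design point is the same in both languages: bound the number of worker threads, keep tasks independent, and let the pool handle scheduling rather than spawning raw threads.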
Job Type: Full-time
Pay: RM600.00 - RM650.00 per day
Schedule:
· Monday to Friday