Staffinity is currently seeking a Java Spark Developer for a client in Montreal. This is a permanent position with base salary, benefits, retirement plan, paid education opportunities and vacation.
The salary range is 110-120k. Working hours are Monday to Friday, daytime. The role follows a hybrid model, with three days per week in the Montreal office.
Bilingualism is not required for this role. Hands-on experience creating and leveraging Splunk dashboards is a huge plus.
Responsibilities:
- Design, develop, and maintain Java applications leveraging Apache Spark for distributed data processing and analytics.
- Collaborate with data engineers and data scientists to implement data pipelines, ETL processes, and machine learning workflows using Spark.
- Optimize and tune Spark jobs to ensure efficient utilization of computing resources and high throughput for data processing tasks.
- Integrate Java applications with Spark clusters, leveraging Spark's APIs and libraries for data manipulation, transformation, and analysis.
- Develop and deploy real-time and batch processing applications using Spark Streaming and Spark SQL for data ingestion and analysis.
- Implement data caching, partitioning, and parallel processing techniques to optimize Spark job performance and resource utilization.
- Work with cloud-based platforms and big data technologies to deploy and manage Spark-based applications in distributed environments.
- Collaborate with cross-functional teams to understand business requirements, data models, and analytics use cases, and implement relevant solutions using Spark.
Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Proven experience as a Java Developer, including hands-on work with Apache Spark for data processing and analytics.
- Strong proficiency in the Java programming language, with a focus on scalable and distributed systems.
- Experience with Apache Spark, Spark Streaming, Spark SQL, and related Spark ecosystem tools and libraries.
- Knowledge of big data technologies such as Hadoop, HDFS, and distributed computing frameworks for large-scale data processing.
- Familiarity with cloud platforms such as AWS, Azure, or GCP for deploying and managing Spark applications.
- Strong understanding of data structures, algorithms, and database technologies for data manipulation and analytics.
- Excellent communication and collaboration skills, with the ability to work effectively in a cross-functional team environment.
- Experience with Agile development methodologies and tools (e.g., JIRA, Git) is preferred.
- Certification in Apache Spark or related big data technologies is advantageous.