Role Description :
- Understand requirements from product owners and translate them into requirement and scope documents.
- Decide on the best fit among the technologies / services available within scope.
- Create solutions for data ingestion and data transformation using Hadoop services such as Spark, Spark Streaming, Hive, etc.
- Create technical design documents to communicate the solution to the team, and mentor the team in developing it.
- Build the solution with Hadoop services as per design specifications.
- Assist the team in building test cases and support testing of the solution.
- Coordinate with upstream, downstream, and other supporting teams for production implementation.
- Provide post-production support for implemented solutions.
- Develop data engineering frameworks in Spark on the AWS Data Lake platform (see the illustrative sketch after this list).
- Coordinate with clients, data users, and key stakeholders to understand required features and consolidate them into reusable design patterns.
- Onboard data using the developed frameworks.
- Understand the existing code in Netezza and Hadoop, and design the best way to implement its current features in the AWS Data Lake.
- Unit test code and assist with QA / SIT / performance testing.
- Migrate the solution to the production environment.
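To illustrate the kind of Spark-based ingestion / transformation framework described above, here is a minimal, hedged sketch of a PySpark batch job reading from Hive and landing curated data in an S3 data lake. The database, table, column, and bucket names are placeholders, not part of any actual implementation.

```python
# Minimal PySpark batch-ingestion sketch: Hive source table -> curated Parquet in S3.
# All names (job name, tables, columns, bucket path) are illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("example-batch-ingest")   # hypothetical job name
    .enableHiveSupport()               # read tables registered in the Hive metastore
    .getOrCreate()
)

# Read a raw Hive table (placeholder name).
raw = spark.table("raw_db.transactions")

# Example transformation: de-duplicate and derive a partition column.
curated = (
    raw.dropDuplicates(["transaction_id"])
       .withColumn("txn_date", F.to_date("txn_timestamp"))
)

# Write curated data to the data lake as partitioned Parquet (placeholder bucket).
(
    curated.write
    .mode("overwrite")
    .partitionBy("txn_date")
    .parquet("s3://example-data-lake/curated/transactions/")
)
```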
Competencies : Digital : BigData and Hadoop Ecosystems, Digital : Amazon Web Service (AWS) Cloud Computing, Digital : Business Intelligence and Analytics Tools, Digital : NoSQL Key-Value DB
Experience (Years) : 6-8
Essential Skills :
- Strong working experience with the Hadoop platform.
- Strong hands-on experience with Hive and Spark with Scala.
- In-depth knowledge and extensive experience in building batch workloads on Hadoop.
- Adept at analyzing and refining requirements and consumption query patterns, and choosing the right technology fit (RDBMS, data lake, or data warehouse).
- Knowledge of analytical data modelling on any RDBMS platform / Hive.
- Working experience in Pentaho.
- Proven practical experience in migrating RDBMS-based data to Hadoop on-premises (see the sketch after this list).
- 7+ years of data experience on data warehouse and data lake platforms.
- At least 2 years of implementation experience with AWS Data Lake, S3, EMR / Glue, Python, AWS RDS, Amazon Redshift, Amazon Lake Formation, Airflow, data models, etc.
- MUST have very strong knowledge of PySpark.
- Must understand data encryption techniques and be able to implement them.
- Must have experience working with Bitbucket, Artifactory, and AWS CodePipeline.
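Since the essential skills call out migrating RDBMS-based data into the data lake, below is a minimal sketch of that pattern using Spark's JDBC reader. The connection URL, table, credentials, fetch size, and S3 path are all assumed placeholders.

```python
# Minimal PySpark sketch of an RDBMS-to-data-lake migration; the JDBC URL,
# table, credentials, and S3 path are placeholders, not real endpoints.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("example-rdbms-migration").getOrCreate()

# Read a source table over JDBC (assumes the relevant JDBC driver is on the classpath).
source = (
    spark.read.format("jdbc")
    .option("url", "jdbc:postgresql://example-host:5432/sales")  # placeholder URL
    .option("dbtable", "public.orders")                          # placeholder table
    .option("user", "example_user")                              # use a secrets store in practice
    .option("password", "example_password")
    .option("fetchsize", "10000")                                # tune for large tables
    .load()
)

# Land the data in the data lake as Parquet (placeholder bucket path).
source.write.mode("overwrite").parquet("s3://example-data-lake/raw/orders/")
```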
Desirable Skills :
- Hands-on experience working with terabyte / petabyte-scale data and millions of transactions per day.
- Skills to develop ETL pipelines using Airflow (see the sketch below).
- Knowledge of Spark Streaming or other streaming jobs.
- Ability to deploy code using AWS CodePipeline and Bitbucket is an added plus.
- Expert in any of the following programming languages: Scala, Java; comfortable working on the Linux platform.
- Knowledge of Python.
- CI / CD pipeline design.
- AWS cloud infrastructure for services like S3, Glue, Secrets Manager, KMS, Lambda, etc.
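As a hedged illustration of the Airflow ETL skill listed above, the sketch below shows the general shape of a daily pipeline. The DAG id, schedule, task names, and spark-submit path are hypothetical, not a prescribed implementation.

```python
# Minimal Airflow DAG sketch for a daily ETL pipeline; all identifiers and
# commands are placeholders used only for illustration.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="example_daily_etl",        # hypothetical DAG id
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:

    # Submit the PySpark batch job sketched earlier (placeholder script path).
    transform = BashOperator(
        task_id="spark_transform",
        bash_command="spark-submit s3://example-data-lake/jobs/batch_ingest.py",
    )

    # Placeholder data-quality check that runs after the transform.
    validate = BashOperator(
        task_id="validate_output",
        bash_command="echo 'run data-quality checks here'",
    )

    transform >> validate
```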