Data Engineer
About This Role
We are looking for someone to take on a broad range of tasks associated with developing data/ETL pipelines that address business challenges.
This position will expand your knowledge, strengthen your expertise and introduce you to the inner workings of our business, alongside a team of seasoned, diversely skilled technology professionals.
Here's some of what you may be asked to do:
- Identify data sources, model data (holistic, conceptual, logical, physical) and design pipelines under the constraints of the Public Cloud framework.
- Implement pipelines, including ingesting, processing, and storing data.
- Share pipeline models and designs to inform project team members and improve implementation.
- Collaborate across teams to understand data sources and sets.
- Dive into documentation repositories to research internal data sets.
- Patiently share knowledge and expertise with team members.
- Communicate technical constraints and challenges to audiences of varying technical expertise.
- Transform business requirements and research into reliable, high-performing delivery solutions.
- Aim for defect-free programming, create and maintain quality code, provide support during testing cycles and post-production deployment, and engage in peer code reviews.
- Contribute to project plans, estimations and status updates.
- Identify issues, develop and maintain processes that address and resolve them, and communicate with or alert stakeholders as needed.
- Configure and develop custom components with technology partners (analysts, developers, designers etc.) to meet requirements and goals.
- Ensure applications are free of common coding vulnerabilities (and follow standard security practices).
- Proactively put forward ideas that speak to project objectives (e.g. development, testing solutions, and tools).
- Take part in scope assessment, risk and cost analysis.
- Respect technology delivery practices and standards, as well as project management disciplines.
- Stay on top of state-of-health monitoring and monthly SLA targets.
- Apply and share technical expertise during the incident management life cycle (e.g. analyze reports and outages, perform impact assessments, facilitate stakeholder communication).
What can you bring? Share your credentials, but your relevant experience and knowledge are just as likely to get our attention.
It helps if you have:
- Undergraduate Degree or Technical Certificate.
- Experience working with Graph DB (must have).
- 5+ years of experience in DevOps, System Engineering or related roles.
- Proficiency in Infrastructure as Code tools such as Terraform and EDP.
- Experience with CI/CD tools such as Jenkins and Bitbucket.
- Strong scripting skills in languages such as Python and Bash.
- Working experience with Salt modules (including custom modules), patching and upgrades.
- Solid understanding of Linux, ServiceNow and Azure.
- Curiosity, commitment and empathy to collaborate across teams, learn about data sources and build a shared understanding of how data drives fraud analytics.
- Readiness and motivation (as senior or lead developer and valued subject matter expert) to address and resolve highly complex and multifaceted development-related issues, often independently.
- Strength in coaching and advising clients, partners and project teams.