Position Overview:
This role is a 3-6 month contract with the opportunity to extend.
The team works on a hybrid schedule at roughly 50% in-office capacity, typically 2-3 days per week in downtown Toronto.
Snapshot of the role!
We are seeking a highly skilled and innovative AI-Focused GRC (Governance, Risk, and Compliance) Security Specialist to join our team. This role bridges the gap between cutting-edge artificial intelligence technologies and robust security frameworks. The ideal candidate will design, implement, and oversee AI-centric security governance, risk management, and compliance programs that align with organizational goals and regulatory requirements.
Key Responsibilities:
1.0 Governance
- Develop and maintain AI governance frameworks, policies, and procedures to ensure responsible AI usage.
- Establish AI ethics and accountability standards, ensuring compliance with industry best practices and legal requirements.
- Collaborate with stakeholders to define AI system ownership, accountability, and decision-making processes.
2.0 Risk Management
- Identify, assess, and mitigate risks associated with AI systems, including model vulnerabilities, data privacy risks, and operational impacts.
- Develop risk assessment methodologies tailored to AI technologies, including algorithms, datasets, and deployment environments.
- Monitor emerging AI-related threats and propose risk mitigation strategies.
3.0 Compliance
- Ensure compliance with relevant regulations, such as GDPR, CCPA, HIPAA, and AI-specific guidelines (e.g., the EU AI Act).
- Conduct audits and assessments of AI systems to verify compliance with internal and external standards.
- Maintain documentation for AI security and compliance initiatives, supporting audits and regulatory inquiries.
4.0 Security Operations
- Collaborate with security teams to design and implement security controls for AI systems, focusing on data integrity, confidentiality, and availability.
- Lead incident response efforts related to AI security breaches, ensuring timely resolution and reporting.
- Provide guidance on secure AI model development, deployment, and monitoring practices.
5.0 Education and Advocacy
- Educate employees and stakeholders on AI risks, governance policies, and compliance requirements.
- Act as a subject matter expert on AI security and compliance, providing insights and recommendations to leadership.
- Stay updated on advancements in AI, cybersecurity, and GRC, incorporating new knowledge into organizational practices.
Qualifications:
Education: Bachelor’s or Master’s degree in Computer Science, Cybersecurity, Data Science, or a related field.
Experience:
- 7+ years in GRC, cybersecurity, or risk management roles, with at least 2 years focusing on AI or machine learning systems.
- Hands-on experience with AI/ML frameworks and tools, and an understanding of their security implications.
- Financial Services or enterprise-level experience required.
Certifications:
- Relevant certifications such as CISSP, CISM, CRISC, or specialized AI certifications (e.g., AI Ethics, AI Risk Management).
Skills:
- Strong understanding of AI technologies, including machine learning, neural networks, and natural language processing.
- Prior Snowflake or similar platform experience.
- Knowledge of cybersecurity principles, risk management frameworks (e.g., NIST, ISO 27001), and regulatory environments.
- Excellent analytical, problem-solving, and communication skills.
- Ability to work cross-functionally in a fast-paced, innovative environment.