Principal Security Engineer

ServiceNow
Summary
Join ServiceNow as a Principal Security Engineer to secure our cutting-edge generative AI product capabilities. You will collaborate with internal AI/ML development teams, performing dynamic and static application security reviews of AI systems throughout the MLOps lifecycle. Responsibilities include identifying vulnerabilities, assisting with remediation planning, and providing development security support. A key aspect is understanding, discovering, and documenting vulnerabilities in proprietary AI/ML systems using technologies like large language models (LLMs). You will conduct security testing, develop security benchmarks, identify and mitigate threats, collaborate with developers, stay updated on AI security trends, and provide detailed reports and recommendations.
Requirements
- An analytical mind for problem solving, abstract thought, and offensive security tactics
- Strong interpersonal skills (written and oral communication) and the ability to work collaboratively in a team environment
- Bachelorโs degree in computer science/engineering or equivalent experience
- 8+ years in a role performing AI/ML Security reviews and/or assisting in security evaluations during model training
- High level of language reading comprehension for Python code and experiencing performing source code reviews for security issues
- Strong understanding of machine learning frameworks (e.g., TensorFlow, PyTorch)
- Strong understanding of Natural Language Processing (NLP) and related frameworks (e.g. nltk, spacy)
- Experience training machine learning models including transformer based LLMs
- In-depth experience with exploiting OWASP LLM Top 10 application vulnerabilities, such as prompt injection and data poisoning
- In-depth experience with exploiting OWASP Top 10 application vulnerabilities, such as deserialization and injection attacks
- Knowledge of regulatory and compliance standards related to AI and data security
- Ability to articulate complex issues to executives and customers
Responsibilities
- Conduct security testing and vulnerability assessments for AI systems, particularly those utilizing large language models (LLMs)
- Develop and implement security benchmarks and evaluation protocols for LLMs
- Identify and mitigate potential security threats and vulnerabilities in AI systems
- Collaborate with AI developers to integrate security measures into the development lifecycle
- Stay updated on the latest AI security trends and technologies
- Provide detailed reports and recommendations based on security evaluations
Preferred Qualifications
- Post graduate degree and/or related certifications in Machine Learning or Artificial Intelligence
- Experience performing Threat Modeling, Penetration Testing and/or Red Team
Share this job:
Similar Remote Jobs

