Principal AI Security Engineer

ServiceNow

📍Remote - Israel

Summary

Join ServiceNow as a Principal AI Security Engineer and assume a critical leadership role in shaping the future of AI security across the company’s GenAI initiatives. Lead the formation and growth of a high-caliber AI security research team focused on discovering, analyzing, and mitigating risks unique to large language models (LLMs), generative AI, and other advanced ML technologies. Serve as the company’s foremost subject matter expert in AI/ML security, driving both strategic direction and technical depth. Directly influence the secure design, development, and deployment of ServiceNow’s AI-powered products. Guide engineers and AI practitioners in implementing robust protections, and collaborate across the business to embed AI security principles throughout the MLOps lifecycle. Represent ServiceNow in external forums, publications, and conferences as a thought leader in the AI security field.

Requirements

  • 10+ years of experience in security research, with a strong emphasis on AI/ML, GenAI, or adversarial systems
  • Ph.D. or M.Sc. in a relevant field
  • Proven ability to lead and mentor security research teams in high-impact environments
  • Deep understanding of large language models (LLMs), agentic AI, transformer architectures, and their associated attack surfaces
  • Demonstrated experience influencing secure development lifecycles (SDLC/MLOps) with tooling, automation, and policy
  • Excellent collaboration skills, with a track record of working across cross-functional teams in research-driven environments

Responsibilities

  • Build, lead, and mentor a specialised AI security research team to investigate vulnerabilities and emerging threats in GenAI systems
  • Serve as the primary thought leader on GenAI risks at ServiceNow—identifying attack vectors, modelling threats, and proposing effective mitigations
  • Develop and oversee research programs focused on adversarial machine learning, model abuse, LLM jailbreaks, data poisoning, privacy leakage, and other key AI security domains
  • Direct AI security engineering and applied AI team members in developing security tools, processes, and safeguards for model development, training, deployment, and runtime
  • Act as a technical advisor to product, platform, and engineering stakeholders on secure AI architecture, safe LLM integration, and responsible AI practices
  • Represent ServiceNow in external forums, publications, and conferences as a thought leader in the AI security field

Preferred Qualifications

Strong publication or presentation history in AI/ML security or related fields (e.g., OWASP, USENIX, IEEE S&P, Black Hat, NeurIPS, DEF CON)
