Staff AI Security Engineer

ServiceNow

πŸ“Remote - Israel

Summary

Join ServiceNow as a Staff AI Security Engineer and pioneer new approaches to securing the AI development lifecycle. Design and build defenses for the Agentic AI ecosystem, proactively identifying and mitigating risks. Develop scalable, secure-by-design frameworks for trustworthy applications and implement cutting-edge security for agentic AI and LLM use. Your work will shape the long-term AI security roadmap and set industry standards. This role blends deep research, secure systems design, and hands-on prototyping. You will create systems governing how agents interact safely with tools and APIs.

Requirements

  • A B.Sc. or M.Sc. in Computer Science or a related technical field, with a specialization in Security or AI/ML
  • 4 years of experience with security assessments, design reviews, or threat modeling
  • 4 years of experience in security engineering, computer and network security, and security protocols
  • 5 years of programming experience in Java or Python
  • Hands-on experience with agentic AI frameworks (e.g., LangChain, LangGraph)
  • A proactive and collaborative mindset, with a passion for continuous learning and a drive to solve complex challenges as part of a team

Responsibilities

  • Design and build defenses that secure our Agentic AI ecosystem, our LLMs, and our cutting-edge AI models
  • Proactively identify and mitigate risks in this emerging domain by creating foundational security solutions
  • Focus on developing scalable, secure-by-design frameworks that empower developers to build trustworthy applications
  • Continuously drive and implement cutting-edge security for agentic AI and LLM use throughout the company, converting these insights into production-grade controls and services deployed across the ServiceNow platform

Preferred Qualifications

  • Expertise in AI/ML security, including model and infrastructure vulnerabilities and their mitigation strategies
  • Deep understanding of Large Language Models (LLMs), Multimodal Language Models (MLLMs), and agentic workflows
  • Experience developing or evaluating security controls for agentic AI systems, including identity and consent models for multi-party protocols (e.g., MCP, A2A)
  • Proficiency in implementing secure communication protocols for agent-to-agent interactions, such as OAuth 2.0, API keys, and PKCE (see the illustrative sketch below)
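
For context on the last item: PKCE (RFC 7636) pairs a one-time random code_verifier with its SHA-256 code_challenge so an intercepted authorization code cannot be redeemed without the verifier. Below is a minimal, illustrative Python sketch of generating such a pair; it is not part of the role description, and the helper name make_pkce_pair is hypothetical.

```python
import base64
import hashlib
import secrets

def make_pkce_pair() -> tuple[str, str]:
    """Generate a PKCE code_verifier and its S256 code_challenge (RFC 7636)."""
    # 32 random bytes -> 43-character URL-safe verifier (within the 43-128 char limit)
    code_verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    # code_challenge = BASE64URL( SHA-256( code_verifier ) ), without padding
    digest = hashlib.sha256(code_verifier.encode("ascii")).digest()
    code_challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return code_verifier, code_challenge

if __name__ == "__main__":
    verifier, challenge = make_pkce_pair()
    print("code_verifier: ", verifier)   # kept by the client, sent with the token request
    print("code_challenge:", challenge)  # sent with the authorization request (method S256)
```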
