Machine Learning Quality Engineer

ServiceNow
Summary
Join ServiceNow's Quality Engineering team and play a crucial role in evaluating the quality and performance of AI-generated content. You will execute test plans, track quality metrics, and contribute to the development of evaluation criteria. This position requires designing test datasets, collaborating with engineering teams, and maintaining test automation frameworks. The ideal candidate possesses experience in software quality engineering, a foundational understanding of AI concepts, and proficiency in Java and Python. ServiceNow offers a competitive salary, equity, benefits, and a flexible work environment.
Requirements
- Experience in leveraging AI, or thinking critically about how to integrate it into work processes, decision-making, or problem-solving. This may include using AI-powered tools, automating workflows, analyzing AI-driven insights, or exploring AI's potential impact on the function or industry
- Minimum of 2 years of professional experience in software quality engineering
- Bachelor's degree with at least 2 years of relevant experience; or an advanced degree with no prior experience; or equivalent practical experience
- Foundational understanding of AI concepts such as agentic systems, algorithms, models, and associated ethical considerations
- Experience with Java-based automated testing frameworks such as JUnit, Selenium, TestNG, and other open-source tools
- Proficient in programming and scripting with Java and Python; experience with tools such as Eclipse, Jenkins, Maven, and Git
- Familiarity with development tools including IDEs, debuggers, build tools, version control systems, and Unix/system administration utilities
- Solid understanding of Agile methodologies and the software development lifecycle (SDLC)
Responsibilities
- Evaluate the quality, accuracy, performance, and safety of AI-generated content produced by large language models (LLMs), identifying areas for improvement
- Execute test plans, track quality metrics, and log defects based on observed patterns and results
- Maintain and enhance existing test automation frameworks
- Contribute to the development and refinement of evaluation criteria and methodologies to strengthen AI model assessment
- Design targeted, small-scale test datasets to ensure adequate coverage across use cases
- Collaborate with engineering teams to troubleshoot and resolve issues in applications and development/test environments
Preferred Qualifications
Hands-on experience or strong interest in testing machine learning and Generative AI systems, including synthetic data generation and prompt engineering, is highly preferred
Benefits
- Health plans, including flexible spending accounts
- A 401(k) Plan with company match
- ESPP
- Matching donations
- A flexible time away plan
- Family leave programs