Principal Engineer – Testing

Invisible Technologies

πŸ“Remote - United States

Summary

Join Invisible Technologies, a leading AI training and scaling partner, as a Principal Engineer – Testing. You will design and lead the testing architecture for our AI platform, encompassing backend, APIs, data pipelines, and AI models. Responsibilities include building and scaling automation frameworks, developing test strategies, creating simulation environments, and integrating automated testing into CI/CD pipelines. You will also lead root cause analysis and mentor engineers. The ideal candidate possesses 10+ years of experience in software quality engineering and expertise in test automation frameworks. Strong programming skills in Python and JavaScript/TypeScript are required, along with experience in distributed and microservices architectures. Competitive compensation, bonuses, and equity are offered.

Requirements

  • 10+ years of experience in software quality engineering, test automation, or systems testing roles
  • Expertise in test automation frameworks (e.g., Pytest, Playwright, etc.) and infrastructure-as-code pipelines
  • Strong programming skills in Python, JavaScript/TypeScript
  • Experience designing and validating tests in distributed, event-driven, and microservices architectures
  • Familiarity with testing in AI/ML systems, including model output validation and behavior-driven testing
  • Experience with containerized environments (Docker), and cloud platforms (AWS/GCP/Azure)
  • Strong understanding of CI/CD systems (particularly Git-based workflows) and modern observability tooling (e.g., Grafana, Prometheus)
  • Excellent problem-solving, debugging, and documentation skills
  • A track record of technical leadership and mentoring in cross-functional environments

Responsibilities

  • Design, implement, and own the testing architecture across the AI platform, spanning backend, APIs, data pipelines, and AI models
  • Build and scale automation frameworks to support unit, integration, regression, performance, and AI-specific testing
  • Develop test strategies that balance reliability, speed, and coverage for complex, dynamic, and data-heavy systems
  • Create simulation and sandbox environments to test AI workflows and orchestration logic
  • Integrate automated testing into CI/CD pipelines, enabling confident, rapid deployments with safety guarantees
  • Lead root cause analysis and implement practices that prevent regressions across systems and features
  • Champion a quality-focused culture and mentor engineers in best practices for building robust, testable systems

Preferred Qualifications

  • Experience testing large-scale AI frameworks, LLM integrations, or decision systems
  • Exposure to tools like dbt, Spark, or DataDog
  • Contributions to open-source testing tools or quality initiatives
  • Deep curiosity about LLM behavior, autonomous systems, and human-in-the-loop testing strategies
  • A desire to develop the most consequential AI software system of the future!

Benefits

Bonuses and equity are included in offers above entry level.
