QA Engineer

Threat Tec

πŸ“Remote - United States

Summary

Join Luminexis.AI, a company building human-centered AI systems, as a detail-oriented QA Engineer. You will be part of the Product & Engineering team, ensuring the reliability and accuracy of software and AI/ML systems. With 6-8 years of QA experience, you will design test plans, develop automated testing frameworks, and collaborate with development teams. The role involves testing web applications, mobile apps, enterprise software, and AI/ML applications, including model accuracy assessment and bias detection. You will also build and maintain test automation frameworks, implement CI/CD pipelines, and manage defect tracking. This position requires strong programming skills, experience with various testing tools, and excellent communication and analytical abilities. Success in this role will involve achieving high test coverage, preventing critical defects, and implementing innovative AI testing methodologies.

Requirements

  • 6-8 years of hands-on quality assurance experience with both manual and automated testing methodologies
  • Proficiency in test automation tools and frameworks such as Selenium WebDriver, Cypress, TestNG, JUnit, and Pytest
  • Strong experience with API testing tools (Postman, REST Assured, SoapUI) and database testing, including SQL query validation (see the example sketch after this list)
  • Knowledge of performance testing tools (JMeter, LoadRunner, Gatling) and mobile testing frameworks (Appium, Espresso, XCTest)
  • Experience with CI/CD pipelines and integrating automated tests into continuous deployment workflows
  • Understanding of machine learning concepts, model evaluation metrics, and statistical analysis for AI system validation
  • Experience with data quality assessment, data profiling, and test data management for AI/ML applications
  • Knowledge of AI/ML frameworks (TensorFlow, PyTorch, scikit-learn) and ability to understand model behavior and limitations
  • Familiarity with AI ethics, bias detection, and fairness testing methodologies for responsible AI deployment
  • Experience with big data testing tools and techniques for validating large-scale data processing systems
  • Strong programming skills in languages such as Python, Java, JavaScript, or C# for test automation development
  • Experience with version control systems (Git), test management tools, and defect tracking systems
  • Knowledge of cloud platforms (AWS, Azure, GCP) and containerized testing environments using Docker and Kubernetes
  • Understanding of database systems, SQL queries, and data validation techniques for backend system testing
  • Familiarity with security testing principles and tools for identifying vulnerabilities and compliance issues
  • Excellent analytical and problem-solving skills with attention to detail and ability to identify edge cases and potential failure scenarios
  • Strong written and verbal communication skills for creating clear test documentation and collaborating with cross-functional teams
  • Ability to translate business requirements into comprehensive testing strategies and provide risk assessment for release decisions
  • Bachelor's degree in Computer Science or a related field
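
To make the automation and API-testing requirements above concrete, here is a minimal pytest sketch of the kind of check involved. The base URL, endpoints, and response fields are illustrative placeholders, not part of any system named in this posting:

```python
import requests
import pytest

BASE_URL = "https://staging.example.com/api"  # placeholder, not a real system


def test_health_endpoint_returns_ok():
    # Basic availability check before running deeper suites.
    resp = requests.get(f"{BASE_URL}/health", timeout=5)
    assert resp.status_code == 200


@pytest.mark.parametrize("user_id", [1, 42, 9999])
def test_user_payload_has_expected_fields(user_id):
    resp = requests.get(f"{BASE_URL}/users/{user_id}", timeout=5)
    if resp.status_code == 404:
        # Absent fixtures are a skip, not a failure, in this sketch.
        pytest.skip(f"user {user_id} not present in this environment")
    assert resp.status_code == 200
    body = resp.json()
    # Data integrity: required fields exist with the expected types.
    assert isinstance(body["id"], int)
    assert isinstance(body["email"], str) and "@" in body["email"]
```

A suite like this would typically run in CI against a staging environment before each merge.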

Responsibilities

  • Design and execute comprehensive test plans for web applications, mobile apps, and enterprise software systems including functional, integration, regression, and user acceptance testing
  • Develop and maintain automated test suites using tools like Selenium, Cypress, TestNG, and Jest to ensure consistent quality across multiple releases
  • Perform API testing using Postman, REST Assured, or similar tools to validate data integrity, error handling, and performance characteristics
  • Conduct cross-browser and cross-platform compatibility testing to ensure consistent user experience across different environments
  • Execute performance testing and load testing using JMeter or LoadRunner to identify bottlenecks and scalability limitations
  • Develop specialized testing frameworks for machine learning models including data quality validation, model accuracy assessment, and bias detection
  • Create test datasets and implement data pipeline testing to ensure AI systems receive clean, representative training and inference data
  • Perform model performance testing including accuracy metrics, precision/recall analysis, and robustness testing against edge cases and adversarial inputs (see the metrics-gate sketch after this list)
  • Validate AI model outputs for consistency, explainability, and adherence to business requirements and ethical AI principles
  • Test AI/ML integration points including real-time inference APIs, batch processing systems, and model versioning workflows
  • Build and maintain scalable test automation frameworks that support both traditional software and AI application testing requirements
  • Implement continuous integration testing pipelines using Jenkins, GitLab CI, or GitHub Actions to provide rapid feedback on code quality
  • Develop custom testing tools and utilities for specialized AI testing scenarios including model drift detection and data validation (see the drift-check sketch after this list)
  • Create comprehensive test documentation including test cases, testing procedures, and quality metrics reporting
  • Establish testing standards and best practices for both software applications and AI/ML systems across development teams
  • Collaborate with development teams, product managers, and data scientists to understand requirements and define comprehensive testing strategies
  • Participate in agile ceremonies including sprint planning, daily standups, and retrospectives to ensure quality considerations are integrated throughout the development lifecycle
  • Manage defect tracking and resolution processes using tools like Jira, ensuring timely communication and resolution of quality issues
  • Conduct root cause analysis for production defects and implement preventive measures to avoid recurring issues
  • Provide quality metrics and testing reports to stakeholders including test coverage, defect density, and release readiness assessments
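
As an illustration of the model performance testing responsibility above, here is a sketch of a metrics quality gate using scikit-learn. It assumes binary labels, and the thresholds are examples a real team would agree with stakeholders:

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score

# Example thresholds only; real gates are negotiated per project.
THRESHOLDS = {"accuracy": 0.90, "precision": 0.85, "recall": 0.85}


def check_model_quality(y_true, y_pred):
    """Fail the run if any metric falls below its threshold (binary labels assumed)."""
    metrics = {
        "accuracy": accuracy_score(y_true, y_pred),
        "precision": precision_score(y_true, y_pred),
        "recall": recall_score(y_true, y_pred),
    }
    failures = {k: round(v, 3) for k, v in metrics.items() if v < THRESHOLDS[k]}
    assert not failures, f"metrics below quality gate: {failures}"
    return metrics
```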
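And for the drift detection mentioned above, a simple starting point is a two-sample Kolmogorov-Smirnov check on a single feature; the significance level and toy data below are illustrative choices, not prescribed values:

```python
import numpy as np
from scipy.stats import ks_2samp


def feature_has_drifted(train_values, live_values, alpha=0.01):
    """Two-sample KS test; True when the live distribution differs significantly."""
    _, p_value = ks_2samp(train_values, live_values)
    return p_value < alpha


# Toy usage: a distribution shifted by half a standard deviation should flag.
rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 5_000)
shifted = rng.normal(0.5, 1.0, 5_000)
print(feature_has_drifted(baseline, shifted))  # expected: True
```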

Success Metrics

  • Achieve 95%+ test coverage across assigned projects with comprehensive test automation that reduces manual testing effort by 60%
  • Identify and prevent critical defects from reaching production environments, maintaining defect escape rate below 5%
  • Implement testing frameworks that support both software and AI applications with standardized approaches across development teams
  • Deliver testing results and quality assessments that enable confident release decisions and meet client quality expectations
  • Develop specialized AI testing methodologies that become standard practice across the organization's AI/ML projects
  • Successfully validate AI model accuracy and performance metrics that meet or exceed business requirements by 10-15%
  • Implement bias detection and fairness testing protocols that ensure ethical AI deployment and regulatory compliance (see the parity-check sketch after this list)
  • Create reusable AI testing frameworks and tools that reduce AI project testing time by 25-30%
  • Contribute to quality process improvements that increase overall team productivity and reduce time-to-market by 20-40%
  • Mentor junior QA engineers and contribute to team knowledge sharing through documentation and training sessions
  • Maintain strong working relationships with development teams resulting in proactive quality discussions and early defect prevention
  • Lead 2-3 quality improvement initiatives annually that enhance testing capabilities and organizational quality standards
  • Contribute to thought leadership through technical presentations, blog posts, or participation in quality assurance communities
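
By way of illustration for the bias detection and fairness work described above, here is a minimal demographic parity check. The 0.10 tolerance and the toy data are assumptions for demonstration only; real limits come from policy and regulation:

```python
import numpy as np


def demographic_parity_difference(y_pred, group):
    """Gap in positive-prediction rates across the groups in `group`."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)


def test_parity_within_tolerance():
    # Toy predictions and group labels standing in for real model output.
    y_pred = [1, 0, 1, 0, 1, 1, 0, 0]
    group = [0, 0, 0, 0, 1, 1, 1, 1]
    assert demographic_parity_difference(y_pred, group) <= 0.10
```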
