Remote Generative AI Tester
Zealogics Inc
Remote - Worldwide
Please let Zealogics Inc know you found this job on JobsCollider. Thanks!
Job highlights
Summary
Join our team as a Senior AI Model Tester with expertise in testing generative AI models, including text, image, and other content-generation outputs. You will be responsible for creating and executing test strategies, test plans, and test cases specific to generative AI models.
Requirements
- 7+ years of hands-on experience testing generative AI models, including text, image, and other content-generation outputs
- Expertise in creating and executing test strategies, test plans, and test cases specific to generative AI models
- Strong understanding of AI/ML concepts, including model training, validation, deployment, and continuous monitoring
- Proficiency in testing large language models (LLMs) such as GPT, BERT, and similar
- Expert-level knowledge in natural language processing (NLP) techniques
- Experience with AI/ML testing frameworks and tools such as TensorFlow, PyTorch, Hugging Face, or custom AI testing frameworks (see the sketch after this list)
- Strong familiarity with data validation and testing
- Proficient in defining KPIs and metrics for generative AI model testing
- Understanding of cloud-based AI/ML deployment
- Proficiency in API testing for AI/ML applications
- Experience in using test automation tools for AI/ML testing
- Proficiency in programming languages like Python or Java
- Knowledge of Continuous Integration/Continuous Deployment (CI/CD) pipelines
- Strong stakeholder management and communication skills
- Familiarity with Model Ops tools and practices for production-level AI testing
- Strong analytical, problem-solving, and reporting skills
- Excellent communication skills
- Exposure to and experience with test management tools such as JIRA, TestRail, or similar
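For illustration only, here is a minimal sketch of the kind of automated functional check this role involves, assuming pytest and the Hugging Face transformers library are available; the "gpt2" model, the prompt, and the length bounds are placeholder assumptions, not the actual systems under test:

```python
# A minimal sketch of an automated functional check for a text-generation
# model, assuming pytest and the transformers library are available.
# The "gpt2" model, the prompt, and the length bounds are illustrative
# placeholders, not the actual system under test.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

def test_output_is_nonempty_and_bounded():
    prompt = "Summarize the quarterly sales report in one sentence:"
    result = generator(prompt, max_new_tokens=40, num_return_sequences=1)
    # The pipeline returns the prompt plus the continuation; strip the prompt.
    completion = result[0]["generated_text"][len(prompt):].strip()
    # Basic functional expectations: the model responds, and the response
    # stays within the configured length budget.
    assert len(completion) > 0
    assert len(completion.split()) <= 60
```

A check like this would typically run under pytest inside a CI/CD pipeline alongside broader regression suites.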
Responsibilities
- Create and execute test strategies, test plans, and cases specific to generative AI models
- Test for potential biases within AI models by analyzing model output across different demographics and data segments
- Conduct model evaluation using relevant performance metrics, such as BLEU, ROUGE, and perplexity for language models (see the sketch after this list)
- Validate model output for accuracy, coherence, and relevance, ensuring that the models align with business and user expectations
- Perform functional, load, and stress tests on models to validate their accuracy, scalability, and responsiveness under varying conditions
- Test AI/ML applications for seamless integration and accurate data flow between components
- Perform continuous testing for model performance and drift post-deployment, identifying areas where model retraining may be required
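As referenced in the evaluation responsibility above, here is a minimal sketch of metric-based output validation, assuming nltk, torch, and transformers are installed; the model name, example sentences, and smoothing choice are illustrative assumptions only:

```python
# A minimal sketch of metric-based output validation (BLEU and perplexity),
# assuming nltk, torch, and transformers are installed. The model name,
# example sentences, and smoothing choice are illustrative assumptions only.
import math

import torch
from nltk.translate.bleu_score import SmoothingFunction, sentence_bleu
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

def bleu_against_reference(candidate: str, reference: str) -> float:
    """Sentence-level BLEU of a generated candidate against one reference."""
    smoothing = SmoothingFunction().method1  # avoids zero scores on short texts
    return sentence_bleu([reference.split()], candidate.split(),
                         smoothing_function=smoothing)

def perplexity(text: str, model_name: str = "gpt2") -> float:
    """Perplexity of text under a causal LM: exp of the mean token loss."""
    tokenizer = GPT2TokenizerFast.from_pretrained(model_name)
    model = GPT2LMHeadModel.from_pretrained(model_name)
    model.eval()
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return math.exp(loss.item())

candidate = "The invoice was approved and sent to the customer."
reference = "The invoice was approved and forwarded to the customer."
print(f"BLEU: {bleu_against_reference(candidate, reference):.3f}")
print(f"Perplexity: {perplexity(candidate):.1f}")
```

In practice, acceptance thresholds for such metrics would be defined as KPIs against agreed baselines, and the same measurements tracked over time post-deployment to detect drift.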