Summary
Join Netfor as a Machine Learning Engineer and play a pivotal role in advancing our AI and data initiatives. You will develop and deploy production-grade ML models within our AWS infrastructure, focusing on retrieval-augmented generation (RAG) for virtual agents and optimizing MLOps pipelines. The role spans model development and deployment, MLOps and cloud infrastructure, data integration and engineering, and AI governance and model monitoring, working closely with Data Engineers, Software Developers, and Business Leaders to enhance our data intelligence capabilities. The position is remote-friendly with occasional in-office collaboration.
Requirements
- Proven experience deploying at least one production ML product (LLMs, NLP, predictive modeling, etc.)
- 3+ years of experience in ML engineering, AI research, or data science
- AWS expertise – hands-on experience with two or more AWS services (SageMaker, Bedrock, Lambda, Glue, Athena, Redshift, ECS, Fargate)
- Strong programming skills in Python (TensorFlow, PyTorch, Scikit-learn) and SQL
- MLOps experience – CI/CD, model monitoring, data pipelines, and inference optimization
- Knowledge of retrieval-augmented generation (RAG) and vector database integration
- Familiarity with API development for model serving and real-time inference (see the serving sketch after this list)
- Strong problem-solving and analytical skills with a focus on production-scale ML solutions
- Excellent communication and documentation skills for collaboration with technical and non-technical stakeholders
- Proactive approach to learning and innovation in AI and cloud computing
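For illustration, the model-serving familiarity referenced above might look like the minimal sketch below: a FastAPI endpoint wrapping a pre-trained scikit-learn model for real-time inference. The artifact path, feature layout, and payload schema are assumptions made for the example, not Netfor specifics.

```python
# Minimal model-serving sketch (illustrative only).
# Assumes a pre-trained scikit-learn model serialized at model.joblib;
# the path, payload schema, and feature layout are hypothetical.
import joblib
import numpy as np
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.joblib")  # hypothetical artifact path

class PredictRequest(BaseModel):
    features: list[float]  # flat feature vector for a single example

class PredictResponse(BaseModel):
    prediction: float

@app.post("/predict", response_model=PredictResponse)
def predict(req: PredictRequest) -> PredictResponse:
    # Reshape to (1, n_features) and run a single low-latency inference.
    X = np.asarray(req.features, dtype=float).reshape(1, -1)
    y = model.predict(X)[0]
    return PredictResponse(prediction=float(y))
```

Served with `uvicorn app:app`, this pattern maps naturally onto containerized deployment on ECS or Fargate behind an API gateway.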
Responsibilities
- Design, develop, and deploy ML models for natural language processing (NLP), large language models (LLMs), and predictive analytics
- Implement retrieval-augmented generation (RAG) methodologies for virtual call center agents using vector databases and embeddings (see the retrieval sketch after this list)
- Optimize model performance for low latency and cost efficiency in production environments
- Deploy and manage models using AWS services such as SageMaker, Bedrock, Lambda, ECS, Fargate, and Step Functions
- Automate model retraining, monitoring, and deployment pipelines using CI/CD (GitHub Actions, AWS CodePipeline)
- Develop scalable feature engineering and data preprocessing pipelines using AWS Glue, Athena, and Redshift
- Collaborate with Data Engineers to integrate ML workflows with OLAP and OLTP data environments
- Implement efficient data retrieval, transformation, and ingestion processes from structured and unstructured sources
- Work with dbt, SQL, and Python to support analytics and model training data preparation
- Establish model versioning, logging, and performance tracking frameworks
- Ensure compliance with AI ethics, bias mitigation, and governance policies
- Work with Data Governance initiatives to document and maintain AI/ML assets
- Partner with business leaders and service delivery teams to align ML initiatives with strategic goals
- Educate internal teams on ML best practices, model interpretability, and AI-driven decision-making
- Provide technical leadership on emerging AI trends, tools, and methodologies
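As a rough illustration of the RAG responsibility above, the sketch below embeds a small set of knowledge-base passages and retrieves the closest matches for a caller query by cosine similarity. It uses the open-source sentence-transformers library and an in-memory index purely for demonstration; the model name, passages, and the absence of a real vector database are assumptions, not a description of Netfor's stack.

```python
# Retrieval step of a RAG pipeline (illustrative sketch).
# Assumes sentence-transformers is installed; in production the in-memory
# index below would be replaced by a managed vector database.
import numpy as np
from sentence_transformers import SentenceTransformer

# Hypothetical knowledge-base passages for a virtual call center agent.
passages = [
    "To reset your password, visit the account portal and choose 'Forgot password'.",
    "Support hours are Monday through Friday, 8am to 6pm Eastern.",
    "Refunds are processed within 5 to 7 business days of approval.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
passage_vecs = model.encode(passages, normalize_embeddings=True)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k passages most similar to the query (cosine similarity)."""
    query_vec = model.encode([query], normalize_embeddings=True)[0]
    scores = passage_vecs @ query_vec  # dot product == cosine on unit vectors
    top = np.argsort(scores)[::-1][:k]
    return [passages[i] for i in top]

if __name__ == "__main__":
    # The retrieved passages would be injected into the LLM prompt as context.
    for passage in retrieve("How do I reset my password?"):
        print(passage)
```

In production the embeddings would live in a vector database and the retrieved passages would be fed as context to an LLM (for example via Amazon Bedrock), but the ranking logic stays the same.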
Benefits
- Full-time, remote-friendly position with occasional in-office collaboration