Machine Learning Operations (MLOps) Architect

Rackspace Technology
Summary
Join our team as a seasoned Machine Learning Operations (MLOps) Architect to design, build, and optimize machine learning platforms. You will collaborate with cross-functional teams, lead the development of high-performance inference systems, and mentor a high-performing engineering team. This remote position demands expertise in machine learning engineering and infrastructure, with a strong focus on scalable inference systems for a wide range of models, including LLMs. You will develop CI/CD workflows, automate model deployment, monitor production models, and ensure experiment reproducibility. You will also optimize model inference infrastructure for efficiency, implement data governance policies, and stay current with evolving GCP MLOps practices to continuously improve system reliability and automation.
Requirements
- Bachelor's degree in computer science, Information Technology, or a related field
- 5+ years of relevant industry experience
- Proven track record in designing and implementing cost-effective, scalable machine learning inference systems
- Hands-on experience with leading machine learning frameworks and libraries such as TensorFlow, PyTorch, Hugging Face, and LangChain
- Proven experience in implementing MLOps solutions on Google Cloud Platform (GCP) using services such as Vertex AI, Cloud Storage, BigQuery, Cloud Functions, and Dataflow
- Solid understanding of machine learning algorithms, natural language processing (NLP), and statistical modeling
- Solid understanding of core computer science concepts, including algorithms, distributed systems, data structures, and database management
- Strong problem-solving skills, with the ability to tackle complex challenges using critical thinking and propose innovative solutions
- Effective in remote work environments, with excellent written and verbal communication skills. Proven ability to collaborate with team members and stakeholders to ensure clear understanding of technical requirements and project goals
- Expertise in public cloud platforms, particularly Google Cloud Platform (GCP) and Vertex AI
- Proven experience in building and scaling agentic AI systems in production environments
- In-depth understanding of large language model (LLM) architectures, parameter scaling, optimization strategies and deployment trade-offs
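As a rough illustration of the parameter-scaling and deployment trade-offs mentioned in the last requirement, the memory needed just to hold an LLM's weights can be estimated from its parameter count and numeric precision. The sketch below is a back-of-envelope calculation only (the 7B model size is illustrative); it ignores activation memory, KV cache, and framework overhead, which all add to the real serving footprint:

```python
def estimate_weight_memory_gb(num_params: float, bytes_per_param: int) -> float:
    """Approximate memory (in GB) needed to hold model weights alone.

    Ignores activations, KV cache, and framework overhead, which
    increase the real footprint during inference.
    """
    return num_params * bytes_per_param / 1e9


# Illustrative 7B-parameter model:
print(estimate_weight_memory_gb(7e9, 2))  # fp16/bf16 weights -> 14.0 GB
print(estimate_weight_memory_gb(7e9, 1))  # int8-quantized weights -> 7.0 GB
```

Calculations like this are the starting point for the deployment trade-off: quantizing weights halves (or better) the memory bill and can raise throughput, at some cost in model quality that has to be validated per use case.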
Responsibilities
- Architect and optimize ML Platforms to support cutting-edge machine learning and deep learning models
- Collaborate closely with cross-functional teams to translate business objectives into scalable engineering solutions
- Lead the end-to-end development and operation of high-performance, cost-effective inference systems for a diverse range of models, including state-of-the-art large language models (LLMs)
- Provide technical leadership and mentorship to cultivate a high-performing engineering team
- Develop CI/CD workflows for ML models and data pipelines using tools like Cloud Build, GitHub Actions, or Jenkins
- Automate model training, validation, and deployment across development, staging, and production environments
- Monitor and maintain ML models in production using Vertex AI Model Monitoring, logging (Cloud Logging), and performance metrics
- Ensure reproducibility and traceability of experiments using ML metadata tracking tools like Vertex AI Experiments or MLflow
- Manage model versioning and rollbacks using Vertex AI Model Registry or custom model management solutions
- Collaborate with data scientists and software engineers to translate model requirements into robust and scalable ML systems
- Optimize model inference infrastructure for latency, throughput, and cost efficiency using GCP services such as Cloud Run, Kubernetes Engine (GKE), or custom serving frameworks
- Implement data and model governance policies, including auditability, security, and access control using IAM and Cloud DLP
- Stay current with evolving GCP MLOps practices, tools, and frameworks to continuously improve system reliability and automation
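Several of the responsibilities above (model versioning, rollback, and traceability) rest on the same core idea: an immutable, versioned record of every model artifact, with a pointer to the version currently serving traffic. The toy in-memory registry below sketches that idea only; `ModelRegistry` and its methods are hypothetical illustrations, not part of Vertex AI Model Registry or any real library:

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class ModelVersion:
    version: int
    artifact_uri: str  # e.g. a Cloud Storage path to the saved model
    metrics: dict      # validation metrics captured at training time


@dataclass
class ModelRegistry:
    """Toy in-memory registry illustrating versioning and rollback."""
    versions: list = field(default_factory=list)
    serving_version: Optional[int] = None

    def register(self, artifact_uri: str, metrics: dict) -> int:
        """Record a new immutable model version and return its number."""
        version = len(self.versions) + 1
        self.versions.append(ModelVersion(version, artifact_uri, metrics))
        return version

    def promote(self, version: int) -> None:
        """Point production traffic at a registered version."""
        if not any(v.version == version for v in self.versions):
            raise ValueError(f"unknown version {version}")
        self.serving_version = version

    def rollback(self) -> int:
        """Revert to the previous version, e.g. after a bad deploy."""
        if self.serving_version is None or self.serving_version <= 1:
            raise RuntimeError("no earlier version to roll back to")
        self.serving_version -= 1
        return self.serving_version


registry = ModelRegistry()
registry.register("gs://models/churn/v1", {"auc": 0.81})
registry.register("gs://models/churn/v2", {"auc": 0.84})
registry.promote(2)
registry.rollback()
print(registry.serving_version)  # 1
```

A managed service such as Vertex AI Model Registry provides the production-grade version of this pattern (aliases, lineage, IAM integration); the sketch only shows why version history plus a serving pointer is what makes safe rollbacks possible.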