Summary
Join Localsearch's AI team as an AI Ops Engineer, building and deploying AI solutions on Google Cloud Platform (GCP). You will translate solution blueprints into production systems, engineer data and features, develop and fine-tune models, and refine them in close collaboration with business stakeholders. The heart of the role is owning the operational lifecycle of AI solutions: mastering MLOps, automating processes, managing infrastructure, and establishing best practices along the way. If you are a builder who enjoys creating robust systems and seeing them perform, this is the ideal role for you.
Requirements
- Business Acumen & Stakeholder Partnership: You have a genuine curiosity for how the business works and an ability to understand the "why" behind the data. You have proven experience acting as a bridge between technical teams and business departments, and you are skilled at active listening, asking the right questions, and building trust with non-technical colleagues
- Proven Engineering Background: Demonstrable experience in a hands-on role like ML Engineer or AI Engineer where you were responsible for building and shipping production AI systems
- Expert-Level Python: High proficiency in Python, writing clean, maintainable, and testable code for both data processing (e.g., Pandas) and ML development (e.g., Scikit-learn, TensorFlow/PyTorch)
- Strong SQL for Large-Scale Data: The ability to write complex, efficient SQL queries against large datasets. Experience with Google BigQuery is a major asset (a sketch of this Python-plus-BigQuery stack follows this list)
- Cloud Production Experience: Hands-on experience deploying and managing applications or services on a major cloud platform (GCP is strongly preferred; significant AWS/Azure experience is also relevant)
- A "Builder's" Mindset: You are passionate about writing code and building systems. You have a pragmatic approach to engineering and a high bar for quality
Responsibilities
- Build & Implement AI Solutions
- Translate Blueprints into Reality : Take the solutions and model architectures designed by the AI Solution Engineer and lead their hands-on implementation
- Business-Driven Data & Feature Engineering : Partner directly with non-technical business stakeholders to deeply understand the core processes that generate our data. Collaborate on the definition and creation of robust data models (facts, dimensions) that serve as the foundation for our AI systems
- Code & Develop Models : Write clean, production-grade Python code to build, train, and fine-tune models, translating business logic and domain knowledge into powerful, predictive features
- Collaborative Model Tuning & Refinement : Work in a tight feedback loop with business users to refine model outputs. You will ensure the modelโs results are not only statistically accurate but also intuitive and actionable for the people using them
- Own the Operational Lifecycle (MLOps)
  - Master of MLOps: This is your primary domain of ownership. You will design, build, and maintain our entire MLOps infrastructure using GCP and Vertex AI (see the pipeline sketch after this list)
  - Automate Everything: Establish and manage the CI/CD pipelines for our AI models, automating the end-to-end process from code commit to production deployment
  - Orchestration Expert: Use tools like Airflow to orchestrate complex model retraining jobs, data refresh cycles, and other essential workflows (see the DAG sketch after this list)
  - Infrastructure Management: Own the deployment of models as scalable, low-latency API endpoints. You are responsible for monitoring their health, performance, and cost, and for making continuous optimizations
  - Set the Standard: As a foundational member of the team, you will establish and enforce best practices for code quality, model versioning, monitoring, and alerting
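To make the MLOps ownership concrete, here is a minimal sketch of a Vertex AI pipeline built with the kfp v2 SDK; the project, region, bucket, and step logic are illustrative assumptions rather than Localsearch specifics:

```python
from kfp import compiler, dsl
from google.cloud import aiplatform

@dsl.component(base_image="python:3.11")
def train_model(dataset_uri: str) -> str:
    # Placeholder step: in a real pipeline this would load the data,
    # fit a model, write the artifact to GCS, and return its URI.
    return dataset_uri.replace("/data/", "/models/")

@dsl.pipeline(name="model-retraining")
def retraining_pipeline(dataset_uri: str):
    train_model(dataset_uri=dataset_uri)

# Compile once, then submit runs to Vertex AI Pipelines.
compiler.Compiler().compile(retraining_pipeline, "retraining_pipeline.json")

aiplatform.init(project="my-project", location="europe-west6")
aiplatform.PipelineJob(
    display_name="model-retraining",
    template_path="retraining_pipeline.json",
    parameter_values={"dataset_uri": "gs://my-bucket/data/train.csv"},
).run()
```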
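And a correspondingly minimal Airflow DAG for the orchestration side, assuming Airflow 2.4+ and placeholder task bodies:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def refresh_features():
    print("rebuild the BigQuery feature tables")  # placeholder

def submit_retraining():
    print("submit the Vertex AI retraining pipeline")  # placeholder

with DAG(
    dag_id="weekly_model_retraining",
    start_date=datetime(2024, 1, 1),
    schedule="@weekly",  # Airflow 2.4+; earlier versions use schedule_interval
    catchup=False,
) as dag:
    features = PythonOperator(
        task_id="refresh_features", python_callable=refresh_features
    )
    retrain = PythonOperator(
        task_id="submit_retraining", python_callable=submit_retraining
    )
    features >> retrain  # retrain only after the features are fresh
```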
Preferred Qualifications
- Deep Vertex AI Knowledge: Proven experience using the GCP Vertex AI suite (especially Pipelines, Endpoints, Training, and Model Registry) to build and operate ML systems (see the deployment sketch after this list)
- CI/CD & Orchestration Expertise: Direct experience building CI/CD pipelines for ML (e.g., using GitHub Actions, Jenkins) and orchestrating workflows with Apache Airflow
- Infrastructure as Code (IaC): Practical experience with tools like Terraform for managing cloud infrastructure
- Containerisation: A strong working knowledge of Docker and experience with Kubernetes (GKE)
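As one concrete illustration of the Endpoints and Model Registry pieces, here is a hedged sketch using the google-cloud-aiplatform SDK; the project, region, artifact location, and serving image are assumptions:

```python
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="europe-west6")

# Register the trained artifact in the Vertex AI Model Registry.
model = aiplatform.Model.upload(
    display_name="churn-model",
    artifact_uri="gs://my-bucket/models/churn/",
    serving_container_image_uri=(
        "europe-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-3:latest"
    ),
)

# Deploy it as a scalable, low-latency HTTPS endpoint.
endpoint = model.deploy(
    machine_type="n1-standard-2",
    min_replica_count=1,
    max_replica_count=3,
)
print(endpoint.resource_name)
```

Monitoring the health, latency, and cost of endpoints like this one is then the day-to-day of the role.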
Benefits
- Flexible hours and remote working possible
- A budget for professional development (courses, conferences, books, etc.)
- A budget on a multi-benefit platform (meal tickets, private medical insurance, private pension, etc.)
- A budget for mastering English and German