AI Systems Engineer

Axiom Zen

📍 Remote - Canada, Worldwide

Summary

Join our team as an AI Systems Engineer and help us push the boundaries of AI in engineering. You'll be part of a small, agile team where your contributions will have an immediate impact. This role involves building and improving AI pipelines, experimenting with LLM tuning, enhancing AI-generated outputs, and owning features from prototype to production. You'll work with various technologies, iterate quickly, and research new tools. We're looking for someone AI-savvy with production Python coding experience and a strong understanding of LLMs and NLP.

Requirements

  • You’re AI-savvy: you’ve worked with LLMs and understand how they function under the hood
  • You have a builder mindset. You’ve shipped real Python code to production in a team environment and are comfortable with backend frameworks (FastAPI, Flask, etc.)
  • You know how to engineer LLM prompts, validate outputs, and iterate quickly to get high-quality results (see the sketch after this list)
  • You have experience using Natural Language Processing (NLP) techniques to extract, transform, and parse textual data into meaningful representations suitable for downstream LLM-based operations
  • You’re used to experimenting and prototyping in notebooks
  • You have strong DevOps fundamentals, and experience with CI/CD & cloud services (AWS, GCP, Azure)
  • You have experience with monitoring tooling and troubleshooting production issues
  • You’re self-directed, adaptable, and love wearing multiple hats—R&D one day, demoing & debugging your latest pipelines with customers the next
  • You can communicate complex technical concepts to non-technical folks (we may ask you to explain how an LLM works under the hood)
  • You care about how your work impacts users and drives business value
  • You’re always testing new tools or reading up on the latest AI trends
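
As a rough illustration of the prompt-engineering and output-validation loop described above, here is a minimal sketch; the schema is invented for illustration, and `llm_call` stands in for whichever provider SDK the pipeline actually uses.

```python
from pydantic import BaseModel, ValidationError

class TaskAssessment(BaseModel):
    task: str
    is_blocker: bool
    rationale: str

PROMPT_TEMPLATE = (
    "You are reviewing project tasks. For the task below, reply with JSON "
    'containing the keys "task", "is_blocker", and "rationale".\n\nTask: {task}'
)

def assess_task(task: str, llm_call, max_retries: int = 2) -> TaskAssessment | None:
    """Prompt the model, validate the JSON it returns, and retry on malformed output."""
    prompt = PROMPT_TEMPLATE.format(task=task)
    for _ in range(max_retries + 1):
        raw = llm_call(prompt)  # llm_call wraps whatever LLM client is in use
        try:
            return TaskAssessment.model_validate_json(raw)
        except ValidationError:
            continue  # malformed output: ask again rather than pass junk downstream
    return None
```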

Responsibilities

  • Build and improve dynamic AI pipelines by designing, writing & deploying production Python code
  • Experiment with various LLM tuning strategies to answer complex qualitative questions (e.g., can AI help identify which tasks are blockers or risks?)
  • Boost the quality of AI-generated outputs—whether it’s improving summaries, surfacing insights, or generating new categories from scratch
  • Own end-to-end features: from scrappy prototype to stable, production-ready deployment (a minimal service sketch follows this list)
  • Configure, maintain and deploy distributed application services to cloud environments
  • Get your hands dirty across backend, infrastructure, and AI/ML workflows
  • Iterate fast: tweak prompts, tune models, test outputs, and constantly improve
  • Research new tools, techniques and frameworks to keep us ahead of the curve
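
As a sketch of what taking a feature from prototype to a deployable service can look like, assuming FastAPI from the requirements list; `summarize_text` here is a stub standing in for the real LLM-backed step.

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class SummaryRequest(BaseModel):
    text: str

class SummaryResponse(BaseModel):
    summary: str

def summarize_text(text: str) -> str:
    """Stub for the LLM-backed summarization step in the real pipeline."""
    return text[:200]  # a production version would prompt and validate a model here

@app.post("/summarize", response_model=SummaryResponse)
def summarize(req: SummaryRequest) -> SummaryResponse:
    return SummaryResponse(summary=summarize_text(req.text))
```

Assuming the file is saved as app.py, it can be run locally with `uvicorn app:app --reload`.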

Preferred Qualifications

  • Experience with TensorFlow, PyTorch, or deploying open-source LLMs (Llama, Mistral, etc.) on your own infra
  • Knowledge of graph databases or vector databases (a toy vector-search sketch follows this list)
  • Hands-on with serverless (AWS Lambda) or cloud-native tooling (Kubernetes, Docker)
  • An academic or practical background in machine learning, NLP, or computer science
  • Ideally based in Vancouver (but we’re open to remote across the Americas)
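
For a concrete sense of the vector-database item, here is a toy nearest-neighbour lookup over embeddings using NumPy; the hash-seeded `embed` function is a placeholder for a real embedding model, and a production setup would use a dedicated vector store instead of an in-memory array.

```python
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Placeholder embedding: a real pipeline would call an embedding model instead."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    vec = rng.normal(size=dim)
    return vec / np.linalg.norm(vec)

docs = [
    "task is blocked on API access",
    "design review scheduled for Friday",
    "deploy failed on staging",
]
index = np.stack([embed(d) for d in docs])  # rows are unit-length document vectors

def search(query: str, k: int = 2) -> list[str]:
    scores = index @ embed(query)           # cosine similarity via dot products
    top = np.argsort(scores)[::-1][:k]
    return [docs[i] for i in top]

print(search("which tasks are blockers?"))
```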

Benefits

  • Flexible remote work + unlimited vacation (we actually take it)
  • Annual learning & development budget (conferences, books, courses)
  • Health & wellness perks
  • Top-tier gear—whatever you need to do your best work
  • A no-ego, collaborative team that’s serious about building something great

This job is filled or no longer available.