Analytics Engineer

Xero
Summary
Join Xero as an Analytics Engineer and contribute to high-quality data assets for rapid decision-making and machine learning model development. You will bridge traditional analytics engineering with machine learning initiatives, working with product teams to understand user behavior and shape product strategy. This role involves hands-on development of data pipelines and models using Python, SQL, and Xero's data stack. You will collaborate with data scientists and engineers to build and maintain machine learning pipelines, ensuring alignment with engineering best practices. The position requires a minimum of two years of experience in data engineering or a related field and strong proficiency in Python and SQL. This is a chance to shape the future of analytics and product science at Xero.
Requirements
- Minimum of 2 years' experience in data engineering, analytics engineering, ML engineering, or a software engineering role with a data focus
- Strong proficiency in Python for data processing, pipeline development, and scripting. Experience with ML libraries (e.g., scikit-learn, pandas, NumPy) is highly desirable
- Proven experience in developing, deploying, and maintaining data pipelines and data models in production environments
- Familiarity with software engineering best practices (e.g., Git, CI/CD, testing, code reviews)
- Solid experience with SQL for data querying, transformation, and optimization
Responsibilities
- Undertake technical discovery with stakeholders and apply business context when designing and implementing solutions for data pipelines, models, and applications
- Hands-on development of robust and scalable data pipelines and models using Python, SQL, and our evolving data stack (e.g., DBT, Snowflake, and MLOps tooling)
- Model data for optimal consumption by downstream workloads, e.g. analytics, BI, and ML modelling
- Contribute to our library of analytical assets, including our feature store, enabling self-service analytics and reducing time spent on data wrangling
- Collaborate with Data Scientists and Engineers to build and maintain end-to-end machine learning pipelines for training, inference, and monitoring at scale
- Develop and maintain infrastructure, tooling, and monitoring for data applications and reproducible data science workflows
- Test and deploy data assets and ML pipeline components to production environments, adhering to software engineering best practices (e.g., version control, CI/CD, testing)
- Identify and expose technical debt and related issues for resolution, contributing to the overall health and maintainability of our data ecosystem
- Stay current with emerging practices, techniques, and frameworks in applied machine learning, data engineering, and analytics engineering
Preferred Qualifications
- Experience or a strong interest in machine learning concepts and MLOps (e.g., model deployment, monitoring, versioning) is a significant plus
- Experience with cloud data warehouses like Snowflake is highly desirable
- Experience with data pipeline orchestration tools like Airflow, Dagster, or Prefect, and data modeling tools like DBT, is desirable
Benefits
- Offering very generous paid leave to use however you'd like (plus statutory holidays!)
- Dedicated paid leave to care for your physical and mental wellbeing, as well as an Employee Assistance Program providing mental health care for you and your family
- Free medical insurance
- Wellbeing and sports programmes
- Employee resource groups
- 26 weeks of paid parental leave for primary caregivers
- An Employee Share Plan
- Beautiful offices
- Flexible working
- Career development
- And many other benefits that reflect our human values