Senior Data Engineer - Client Integrations

Proton.ai
Summary
Join Proton, a revolutionary AI-powered sales software company transforming the wholesale distribution industry, as a Senior Data Engineer. This customer-facing role is central to scaling Client Engineering: you will design, build, and maintain robust data pipelines and collaborate with client stakeholders and internal teams to deliver high-quality integrations, ensuring the platform grows reliably and efficiently. The ideal candidate has 7+ years of data engineering experience and a proven record of building and scaling data pipelines. This fully remote position, based in India or Latin America, offers a competitive salary, flexible schedule, unlimited PTO, parental leave, and company-paid off-sites.
Requirements
- 7+ years of experience in data engineering or backend roles with a focus on data and system integrations
- Proven track record of building, architecting, and scaling data pipelines and ETL systems (Spark, Scala, or equivalent), along with modern data warehouse technologies such as Delta Lake, BigQuery, or similar
- Strong development skills with Python, Go, Java, or equivalent
- SQL and data modeling expertise, including warehousing patterns
- API integration experience
- Familiarity with Docker, Git, and CI/CD best practices
- Experience with data analytics dashboards and tooling like Looker or Tableau
- Excellent communication skills and the ability to explain complex systems to both technical and non-technical audiences
- A customer-centric mindset - you enjoy building systems that directly help users succeed
- Passion for working in fast-paced, high-growth environments with evolving priorities
- Curiosity and drive to improve systems, introduce new technologies, and build for scale
Responsibilities
- Own the technical implementation of data integrations for new and existing customers
- Work directly with client IT teams to understand their needs and execute high-quality implementations that meet them
- Build and maintain robust data pipelines using Airflow, Python, PySpark, and SQL
- Ingest data from APIs, flat files, and customer systems
- Ensure high standards of reliability, performance, and maintainability for these pipelines, and continuously improve the underlying framework that powers them so the platform scales reliably and efficiently as we grow from hundreds to thousands of customers
- Build and customize reports for clients, using Looker and backed by our data warehouse
- Extend Looker dashboards to meet evolving needs, and improve the underlying data models to ensure accuracy, clarity, and performance
- Work closely with Customer Success, Product, and Engineering teams to prioritize work, debug issues, and deliver exceptional client outcomes
Preferred Qualifications
Experience with Redis, Elasticsearch, or NoSQL systems
Benefits
- Competitive Salaries + Company Stock Options - we want to pay you well (and equitably!) and make you feel like an owner
- Flexible Schedule - we think high levels of autonomy, responsibility and working asynchronously foster an amazing workplace
- Unlimited PTO + 10 Company Paid Holidays - we even have tools implemented to detect burnout to make sure folks recharge regularly
- 12 Weeks Fully Paid Parental Leave - that goes for primary and secondary caregivers; even if you're adopting or fostering!
- Biannual Company Paid off-sites - time for us to be together, brainstorm, and make magic happen