Summary
Join Figma's Data Infrastructure team and build the large-scale data systems that power analytics, AI/ML, and business intelligence. You will design and develop batch and streaming solutions, manage data ingestion and processing through platforms like Snowflake and our ML Datalake, and improve data reliability and performance. Close collaboration with teams across the company is crucial to understanding data needs and building scalable solutions. This full-time role can be based in a US hub or remotely within the US. The ideal candidate has extensive experience with distributed data infrastructure and the relevant technologies. Figma offers a competitive compensation package and benefits.
Requirements
- 6+ years of experience designing and building distributed data infrastructure at scale
- Strong expertise in batch and streaming data processing technologies such as Spark, Flink, Kafka, or Airflow/Dagster
- A proven track record of impact-driven problem-solving in a fast-paced environment
- A strong sense of engineering excellence, with a focus on high-quality, reliable, and performant systems
- Excellent technical communication skills, with experience working with both technical and non-technical counterparts
- The ability to navigate ambiguity, take ownership, and drive projects from inception to execution
- Experience mentoring and supporting engineers, fostering a culture of learning and technical excellence
Responsibilities
- Design and build large-scale distributed data systems that power analytics, AI/ML, and business intelligence
- Develop batch and streaming solutions that deliver reliable, efficient, and scalable data processing across the company
- Manage data ingestion, movement, and processing through core platforms like Snowflake, our ML Datalake, and real-time streaming systems
- Improve data reliability, consistency, and performance, ensuring high-quality data for engineering, research, and business stakeholders
- Collaborate with AI researchers, data scientists, product engineers, and business teams to understand data needs and build scalable solutions
- Drive technical decisions and best practices for data ingestion, orchestration, processing, and storage
Preferred Qualifications
- Experience with data governance, access control, and cost optimization strategies for large-scale data platforms
- Familiarity with our stack, including Golang, Python, SQL, frameworks such as dbt, and technologies like Spark, Kafka, Snowflake, and Dagster
- Experience designing data infrastructure for AI/ML pipelines
Benefits
- Health, dental & vision
- Retirement with company contribution
- Parental leave & reproductive or family planning support
- Mental health & wellness benefits
- Generous PTO
- Company recharge days
- A learning & development stipend
- A work from home stipend
- Cell phone reimbursement
- Sales incentive pay for most sales roles