Senior Data Engineer

Flex

πŸ“Remote - United States

Summary

Join Flex, a growing NYC-based FinTech company revolutionizing the rent payment experience. As a senior engineer, you will play a crucial role in developing and scaling our data platform to support 10x growth. You will design, implement, and maintain data pipelines, DBT models, and data warehouses; collaborate with the analytics and data science teams to build datasets for machine learning models; and create scalable real-time streaming and offline pipelines. You will also continuously improve data operations through automation and infrastructure redesign, and document engineering designs, runbooks, and best practices. This role requires extensive experience with data infrastructure, Python, SQL, DBT, Snowflake, and AWS services.

Requirements

  • A minimum of 6 years of industry experience in the data infrastructure/data engineering domain
  • A minimum of 6 years of experience with Python and SQL
  • A minimum of 3 years of industry experience using DBT
  • A minimum of 3 years of industry experience using Snowflake and its basic features
  • Familiarity with AWS services, with industry experience using Lambda, Step Functions, Glue, RDS, EKS, DMS, EMR, etc.
  • Industry experience with big data platforms and tools such as Snowflake, Kafka, Hadoop, Hive, Spark, Cassandra, and Airflow
  • Industry experience working with relational and NoSQL databases in a production environment
  • Strong fundamentals in data structures, algorithms, and design patterns
  • Prior experience working on cross-functional teams
  • Experience building and improving user onboarding funnels and designing comprehensive experiments to drive metric improvements
  • Industry experience using Infrastructure as Code tools, specifically CDK and Terraform (see the sketch after this list)
  • Experience with CI/CD to improve code stability and code quality
  • Motivated to help other engineers succeed and be effective
  • Excited to work in an ambiguous, fast-paced, and high-growth dynamic environment
  • Strong written and verbal communication skills
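
The Infrastructure as Code requirement above maps to day-to-day work defining AWS resources in code. As a hedged illustration only, here is a minimal AWS CDK (v2, Python) sketch that provisions a single Lambda function; the stack name, function name, and asset path are hypothetical and not taken from the posting.

```python
# Minimal, illustrative CDK stack: one Lambda function for a hypothetical
# ingestion job. All names and paths here are assumptions for this sketch.
from aws_cdk import App, Stack, aws_lambda as _lambda
from constructs import Construct


class IngestionStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Lambda whose handler code lives in ./src/ingest (hypothetical path)
        _lambda.Function(
            self,
            "IngestFn",
            runtime=_lambda.Runtime.PYTHON_3_11,
            handler="handler.main",
            code=_lambda.Code.from_asset("src/ingest"),
        )


app = App()
IngestionStack(app, "IngestionStack")
app.synth()
```

Running `cdk deploy` against a stack like this is the CDK analogue of applying a Terraform plan: the Python code is synthesized to a CloudFormation template, and the resources are created or updated from that template.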

Responsibilities

  • Design, implement, and maintain high-quality data pipeline services, including but not limited to Data Lake, Kafka, Amazon Kinesis, and data access layers (a minimal producer sketch follows this list)
  • Develop robust and efficient DBT models and jobs to support analytics reporting and machine learning modeling
  • Collaborate closely with the Analytics team on data modeling, reporting, and data ingestion
  • Collaborate closely with the Data Science team to build datasets for machine learning models
  • Create scalable real-time streaming pipelines and offline ETL pipelines
  • Design, implement, and manage a data warehouse that provides secure access to large datasets
  • Continuously improve data operations by automating manual processes, optimizing data delivery, and redesigning infrastructure for greater scalability
  • Create engineering documentation for design, runbooks, and best practices
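
As a hedged illustration of the real-time pipeline work referenced above, the sketch below shows a minimal Kinesis producer in Python using boto3. The stream name, AWS region, and event schema are assumptions made for the example, not details from the posting.

```python
# Minimal, illustrative Kinesis producer for a hypothetical rent-payment
# event stream. Stream name, region, and event fields are assumed.
import json

import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")  # assumed region


def publish_payment_event(event: dict) -> None:
    """Write one payment event to a (hypothetical) rent-payment-events stream."""
    kinesis.put_record(
        StreamName="rent-payment-events",        # assumed stream name
        Data=json.dumps(event).encode("utf-8"),  # JSON-encoded record payload
        PartitionKey=str(event["payment_id"]),   # keeps one payment's records in order
    )


if __name__ == "__main__":
    publish_payment_event(
        {"payment_id": "p-123", "amount_cents": 185000, "status": "scheduled"}
    )
```

Downstream, records like these would typically land in the data lake or warehouse for DBT models to transform, which is the batch side of the same responsibility list.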

Preferred Qualifications

Java experience is a plus

Benefits

  • 100% company-paid medical, dental, and vision
  • 401(k) + company equity
  • Unlimited paid time off + 13 company-paid holidays
  • Parental leave
  • Flex Cares Program: Non-profit company match + pet adoption coverage
  • Free Flex subscription
  • Competitive Pay
