Remote Senior Data Engineer

Oportun

πŸ“Remote - India

Job highlights

Summary

Join Oportun's team and be part of a mission-driven fintech that empowers members with the confidence to build a better financial future. As a Senior Data Engineer, you will lead the design and implementation of scalable data architectures, develop data pipelines, and oversee database management. With a focus on data architecture, ETL, and database management, you will collaborate with cross-functional teams to understand their data needs and deliver solutions that meet them.

Requirements

  • Bachelor's or Master's degree in Computer Science, Data Science, or a related field
  • 5+ years of experience in data engineering, with a focus on data architecture, ETL, and database management
  • Proficiency in programming languages such as Python/PySpark and Java/Scala
  • Expertise in big data technologies such as Hadoop, Spark, and Kafka
  • In-depth knowledge of SQL and experience with various database technologies (e.g., PostgreSQL, MySQL, NoSQL databases)
  • Experience and expertise in building complex end-to-end data pipelines
  • Experience with orchestration and designing job schedules using tools such as Airflow and CI/CD tools such as Jenkins
  • Ability to work in an Agile environment (Scrum, Lean, Kanban, etc.)
  • Ability to mentor junior team members
  • Familiarity with cloud platforms (e.g., AWS, Azure, GCP) and their data services (e.g., AWS Redshift, S3, Azure SQL Data Warehouse)
  • Strong leadership, problem-solving, and decision-making skills
  • Excellent communication and collaboration abilities

Responsibilities

  • Lead the design and implementation of scalable, efficient, and robust data architectures to meet business needs and analytical requirements
  • Collaborate with stakeholders to understand data requirements, build subject matter expertise, and define optimal data models and structures
  • Design and develop data pipelines, ETL processes, and data integration solutions for ingesting, processing, and transforming large volumes of structured and unstructured data
  • Optimize data pipelines for performance, reliability, and scalability
  • Oversee the management and maintenance of databases, data warehouses, and data lakes to ensure high performance, data integrity, and security
  • Implement and manage ETL processes for efficient data loading and retrieval
  • Establish and enforce data quality standards, validation rules, and data governance practices to ensure data accuracy, consistency, and compliance with regulations
  • Drive initiatives to improve data quality and documentation of data assets
  • Provide technical leadership and mentorship to junior team members, assisting in their skill development and growth
  • Lead and participate in code reviews, ensuring best practices and high-quality code
  • Collaborate with cross-functional teams, including data scientists, analysts, and business stakeholders, to understand their data needs and deliver solutions that meet them
  • Communicate effectively with non-technical stakeholders to translate technical concepts into actionable insights and business value
  • Implement monitoring systems and practices to track data pipeline performance, identify bottlenecks, and optimize for improved efficiency and scalability

Please let Oportun know you found this job on JobsCollider. Thanks! 🙏