Principal Data Engineer


MeridianLink

💵 $152k-$200k
📍 Remote - United States

Summary

Join MeridianLink as a Principal Data Engineer and contribute to the development and maintenance of our data products. You will design, build, and maintain data processing pipelines using ETL processes, working with various data sources and technologies. This role requires expertise in SQL, Python, and distributed processing frameworks like Apache Spark and Databricks. You will collaborate with cross-functional teams to translate business needs into technical solutions and ensure high data quality and governance. The position offers opportunities to work with cutting-edge technologies and contribute to the growth of our data platform. This is a level 4 professional position requiring independent judgment and sophisticated problem-solving skills.

Requirements

  • Act as a career-level professional within the subject area
  • Take on work that is new, highly complex, or highly impactful to the business
  • Have complete knowledge and a full understanding of the area of specialization, principles, and practices within a professional discipline
  • Include work on problems of diverse scope where analysis of information requires evaluation of identifiable factors
  • Work is expected to be performed independently, exercising independent judgment
  • Ability to assess unusual circumstances and use sophisticated analytical and problem-solving techniques to identify the root cause
  • Ability to build relationships and networks with senior internal/external partners who are unfamiliar with the subject matter, which often requires persuasion
  • Architect and scale our modern data platform to support real-time and batch processing for financial forecasting, risk analytics, and customer insights
  • Enforce high standards for data governance, quality, lineage, and compliance
  • Partner with stakeholders across engineering, finance, sales, and compliance to translate business requirements into reliable data models and workflows
  • Evaluate emerging technologies and lead POCs that shape the future of our data stack
  • Champion a culture of security, automation, and continuous delivery in all data workflows
  • Deep expertise in Python, SQL, and distributed processing frameworks such as Apache Spark, along with cloud data platforms such as Databricks, Snowflake, Redshift, and BigQuery
  • Proven experience with cloud-based data platforms (preferably AWS or Azure)
  • Hands-on experience with data orchestration tools (e.g., Airflow, dbt) and data warehouses (e.g., Databricks, Snowflake, Redshift, BigQuery)
  • Strong understanding of data security, privacy, and compliance within a financial services context
  • Experience working with structured and semi-structured data (e.g., Delta, JSON, Parquet, Avro) at scale
  • Familiarity with modeling datasets in Salesforce, NetSuite, and Anaplan to solve business use cases (required)
  • Bachelor's or master's degree in Computer Science, Engineering, or a related field
  • 6-8 years of experience in data engineering, with a strong focus on financial systems on SaaS platforms

Responsibilities

  • Design, build, implement, and maintain data processing pipelines for the extraction, transformation, and loading (ETL) of data from a variety of data sources
  • Lead the writing of complex SQL queries to support analytics needs
  • Develop technical tools and programs that apply artificial intelligence, machine learning, and big-data techniques to cleanse, organize, and transform data, and to maintain, defend, and update data structures and integrity on an automated basis
  • Evaluate and recommend tools and technologies for data infrastructure and processing
  • Collaborate with engineers, data scientists, data analysts, product teams, and other stakeholders to translate business requirements to technical specifications and coded data pipelines
  • Work with tools, languages, data processing frameworks, and databases such as R, Python, SQL, Databricks, Spark, Delta, and APIs
  • Work with structured and unstructured data from a variety of data stores, such as data lakes, relational database management systems, and/or data warehouses
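To give candidates a concrete feel for the extract-transform-load work described above, here is a deliberately miniature sketch of such a pipeline. It uses only Python's standard library (json, sqlite3) rather than the Spark/Databricks stack this role actually works with, and every table name, field name, and record in it is hypothetical:

```python
import json
import sqlite3

# Hypothetical raw records, standing in for a semi-structured source (e.g. JSON from an API)
raw = [
    '{"customer_id": 1, "amount": "125.50", "currency": "usd"}',
    '{"customer_id": 2, "amount": "80.00", "currency": "USD"}',
    '{"customer_id": 1, "amount": "not-a-number", "currency": "USD"}',  # malformed row
]

def extract(lines):
    """Parse raw JSON lines into dicts."""
    return [json.loads(line) for line in lines]

def transform(records):
    """Cleanse and normalize: coerce amounts to float, uppercase currency, drop bad rows."""
    clean = []
    for r in records:
        try:
            clean.append({
                "customer_id": int(r["customer_id"]),
                "amount": float(r["amount"]),
                "currency": r["currency"].upper(),
            })
        except (KeyError, ValueError):
            continue  # a production pipeline would route these to a quarantine table
    return clean

def load(records, conn):
    """Load cleansed rows into a warehouse-style table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS payments (customer_id INTEGER, amount REAL, currency TEXT)"
    )
    conn.executemany(
        "INSERT INTO payments VALUES (:customer_id, :amount, :currency)", records
    )
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract(raw)), conn)
total = conn.execute("SELECT SUM(amount) FROM payments").fetchone()[0]
```

In the actual role the same extract/transform/load shape would be expressed over Spark DataFrames and Delta tables at far larger scale, with orchestration (e.g. Airflow) and data-quality checks wrapped around each stage.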

Preferred Qualifications

Previous experience democratizing data at scale for the enterprise

Benefits

  • Stock options or other equity-based awards
  • Insurance coverage (medical, dental, vision, life, and disability)
  • Flexible paid time off
  • Paid holidays
  • 401(k) plan with company match
  • Remote work

