Data Engineer

Rimes
Summary
Join Rimes, a leading provider of enterprise data management solutions, as a Data Engineer and play a key role in developing our data platform. You will be responsible for building and maintaining data pipelines, ensuring data quality, and collaborating with cross-functional teams. This role requires 4+ years of experience in data engineering, proficiency in Python, and experience with cloud-based data processing platforms like Snowflake and Databricks. You will work with senior engineers to design and implement scalable data models and optimize data processing for performance and reliability. The position offers flexible remote work opportunities, competitive salary and benefits, and a supportive company culture focused on growth and professional development. Rimes is committed to diversity and inclusion.
Requirements
- 4+ years of experience in data engineering, with a focus on designing and building scalable data pipelines, ETL processes, and data modeling
- Hands-on experience with large-scale distributed systems, data engineering, and cloud-native technologies
- Proficiency in Python is essential
- Experience with cloud-based data processing platforms (e.g., Databricks, Snowflake) and data transformation frameworks/engines (e.g., Apache Spark, Python pandas)
- Deep understanding of data engineering concepts, including ETL, data flow, and data orchestration
- Strong interest in data engineering and a willingness to learn from experienced team members
- Basic understanding of data structures and algorithms, with a logical approach to solving technical problems
- Good communication skills and the ability to work effectively in a collaborative team environment
Responsibilities
- Assist in building and maintaining data pipelines to ingest and process financial data efficiently and reliably
- Supported by senior engineers and financial experts, take a holistic, end-to-end approach, ensuring smooth data flow from ingestion through to querying. Understand the entire lifecycle of data processing and optimize for performance and reliability
- Support the development of tools that automate data ingestion and quality checks, helping to reduce manual work
- Learn to design and implement scalable data models under the guidance of senior engineers
- Write clean, maintainable data pipeline code in Python, Spark, and SQL, and participate in code reviews and testing
- Help monitor data workflows and troubleshoot issues to ensure smooth operation
- Work closely with cross-functional engineering teams, including Software, DevOps, Data Quality, and Product, as well as our Operations team, to understand data requirements and deliver solutions
- Continuously build your knowledge of data engineering best practices, financial data standards, and modern data platforms
Preferred Qualifications
- Experience in financial market data, particularly in areas such as Benchmark Indices, ESG data, ETF data, Blended Funds, Corporate Actions, or Ratings
- Knowledge of cloud ecosystems; Azure is a plus
- Expertise in automated testing and CI/CD practices for data pipelines, applied to building robust, reliable engineering solutions
Benefits
- Flexible remote work opportunities
- Competitive salary and benefits
- Supportive company culture focused on growth, innovation, and professional development