Data Engineer
Encora
Remote - India
Please let Encora know you found this job on JobsCollider. Thanks!
Job highlights
Summary
Join Encora's growing data team as a Data Engineer with 3-5 years of experience. You will design, develop, and optimize data pipelines using AWS cloud services. Responsibilities include ETL processes, data modeling, performance tuning, automation, and collaboration with other teams. The ideal candidate has strong skills in Python, SQL, AWS services, and data warehousing. This full-time, work-from-home position offers the opportunity to work with large datasets in a dynamic environment. The position is open PAN India.
Requirements
- 3-5 years of professional experience as a Data Engineer, with a strong background in building and optimizing data pipelines and working with large-scale datasets
- Hands-on experience with AWS cloud services, particularly S3, Lambda, Glue, Redshift, RDS, and EC2
- Strong understanding of ETL concepts, tools, and frameworks. Experience with data integration, cleansing, and transformation
- Proficiency in Python, SQL, and other scripting languages (e.g., Bash, Scala, Java)
- Experience with relational and non-relational databases, including data warehousing solutions like AWS Redshift, Snowflake, or similar platforms
- Experience in designing data models, schema design, and data architecture for analytical systems
- Familiarity with version control tools (e.g., Git) and CI/CD pipelines
- Strong troubleshooting skills, with an ability to optimize performance and resolve technical issues across the data pipeline
Responsibilities
- Design, develop, and optimize end-to-end data pipelines for extracting, transforming, and loading (ETL) large volumes of data from diverse sources into data warehouses or lakes
- Implement and manage data processing and storage solutions in AWS (Amazon Web Services) using services like S3, Redshift, Lambda, Glue, Kinesis, and others
- Collaborate with data scientists, analysts, and business stakeholders to define data requirements and design optimal data models for reporting and analysis
- Identify bottlenecks and optimize query performance, pipeline processes, and cloud resources to ensure cost-effective and scalable data workflows
- Develop automated data workflows and scripts to improve operational efficiency using Python, SQL, or other scripting languages
- Work closely with data analysts, data scientists, and other engineering teams to ensure data availability, integrity, and quality. Document processes, architectures, and solutions clearly
- Ensure the accuracy, consistency, and completeness of data. Implement and maintain data governance policies to ensure compliance and security standards are met
- Provide ongoing support for data pipelines and troubleshoot issues related to data integration, performance, and system reliability
Preferred Qualifications
- Experience with Hadoop, Spark, or other big data technologies
- Knowledge of Docker, Kubernetes, or similar containerization/orchestration technologies
- Experience implementing security best practices in the cloud and managing data privacy requirements
- Familiarity with data streaming technologies such as AWS Kinesis or Apache Kafka
- Experience with BI tools (Tableau, Quicksight) for visualization and reporting
- Familiarity with Agile development practices and tools (Jira, Trello, etc.)
Benefits
Work from home