Summary
Join Wavicle Data Solutions as a Sr. Data Engineer in Oak Brook, IL, and contribute to building and improving data integration pipelines using technologies such as Hadoop, Spark, and AWS services. You will create data models, build infrastructure for data loading, and lead a team of data engineers. Responsibilities include designing, developing, testing, and deploying data pipelines, utilizing big data technologies, and communicating with clients. The ideal candidate holds a Master's degree in a related field and has 2+ years of experience with ETL tools, programming languages, and AWS services. Wavicle offers competitive compensation, unlimited paid time off, health and life insurance, and other benefits.
Requirements
- Master's Degree in Computer Science, Computer Engineering, Computer Application, Information Systems, or a directly related field, or the equivalent, and 2+ years of experience in related occupations
- 2+ years of experience in the following:
  - One or more of the following ETL tools to build data pipelines/data warehouses: Talend Big Data, Informatica, DataStage, or Ab Initio
  - Programming using Scala, Python, R, or Java
  - ETL pipeline implementation using PySpark or one or more of the following AWS services: Glue, Lambda, EMR, Athena, S3, SNS, Kinesis, or Data Pipeline
  - Using real-time streaming systems: Kafka/Kafka Connect, Spark, Flink, or AWS Kinesis
  - Implementing big-data solutions in the Hadoop ecosystem: Apache Hadoop, MapReduce, Hive, Pig, Sqoop, NoSQL, or Databricks
  - One or more of the following databases: Snowflake, AWS Redshift, Oracle, SQL Server, Teradata, Netezza, Hadoop, MongoDB, or Cassandra
  - Building conceptual, logical, or physical database designs using the following tools: ERwin, Visio, or Enterprise Architect
  - Designing and implementing data pipelines in on-prem and cloud environments
- Telecommuting is permitted
- 40 hours/week
- Must also have authorization to work permanently in the U.S.
Responsibilities
- Create conceptual, logical, and physical data models
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of sources, using technologies such as Hadoop, Spark, and AWS Lambda
- Lead and/or mentor a small team of data engineers
- Design, develop, test, deploy, maintain, and improve data integration pipelines
- Develop pipeline objects using Apache Spark with PySpark/Python or Scala
- Design and develop data pipeline architectures using Hadoop, Spark, and related AWS services
- Load- and performance-test data pipelines built with the technologies above
- Communicate effectively with client leadership and business stakeholders
- Participate in proposal and/or SOW development
- Utilize big data technologies to architect and build scalable data solutions
- Identify any gaps and work with the client to resolve them in a timely manner
- Stay current with technology trends in order to make useful recommendations that match client requirements
- Design and execute abstractions and integration patterns (APIs) to solve complex distributed computing problems
- Participate in pre-sales activities, including proposal development, RFI/RFP responses, and shaping a solution from the client's business problem
- Act as a thought leader in the industry by creating written collateral (white papers or POVs), participating in speaking events, and creating or participating in internal and external events (lunch 'n' learns, online speaking events, etc.)
Benefits
- Competitive compensation
- Long-term incentive programs
- Health Care Plan (Medical, Dental & Vision)
- Retirement Plan (401k, IRA)
- Life Insurance (Basic, Voluntary & AD&D)
- Unlimited Paid Time Off (Vacation, Sick & Public Holidays)
- Short-Term & Long-Term Disability
- Employee Assistance Program
- Training & Development
- Work From Home
- Bonus Program