Data Engineer

🏢 HR Professional Consulting

💵 $90k-$130k
📍 Remote - United States

Summary

The Data Engineer will be responsible for operationalizing data and analytics initiatives, expanding and optimizing the company's data pipelines, and ensuring optimal data delivery architecture. The role requires a candidate with at least 5 years of experience in data management, knowledge of advanced analytics tools, and familiarity with various databases and message queuing technologies.

Requirements

  • A bachelor's or master's degree in computer science, statistics, applied mathematics, data management, information systems, information science, or a related quantitative field, or equivalent work experience
  • At least five years of work experience in data management disciplines, including data integration, modeling, optimization, and data quality, and/or other areas directly relevant to data engineering responsibilities and tasks
  • At least three years of experience working in cross-functional teams and collaborating with business stakeholders in support of departmental and/or multi-departmental data management and analytics initiatives

Responsibilities

  • Operationalize data and analytics initiatives
  • Expand and optimize the company's data and data pipeline architecture
  • Ensure a consistent, optimal data delivery architecture across ongoing projects

Preferred Qualifications

Knowledge of and/or familiarity with the midstream services industry, and with the data generated by business activities related to gathering, compressing, treating, processing, and selling natural gas, NGLs and NGL products, and crude oil, is strongly preferred.

Desired Skills & Experience

  • Strong experience with advanced analytics tools and object-oriented/functional scripting languages such as R, Python, Java, C++, and Scala
  • The ability to design, build, and manage data pipelines for data structures encompassing data transformation, data models, schemas, metadata, and workload management
  • The ability to work with both IT and business in integrating analytics and data science output into business processes and workflows
  • Strong experience with database programming languages, including SQL, PL/SQL, and others, for relational databases, and knowledge of and/or certifications in NoSQL and Hadoop-oriented databases such as MongoDB and Cassandra for nonrelational data
  • Strong experience in working with large, heterogeneous datasets in building and optimizing data pipelines, pipeline architectures and integrated datasets using traditional data integration technologies
  • Knowledge of and/or experience with SQL-on-Hadoop tools and technologies, including Hive, Impala, and Presto on the open-source side, and Hortonworks DataFlow (HDF), Dremio, Informatica, and Talend on the commercial-vendor side
  • Experience working with both open-source and commercial message queuing technologies such as Kafka, JMS, Azure Service Bus, and Amazon Simple Queue Service (SQS), as well as stream data integration technologies such as Apache NiFi, Apache Beam, Kafka Streams, and Amazon Kinesis
  • Basic experience with popular data discovery, analytics, and BI software tools such as Tableau, Qlik, and Power BI for semantic-layer-based data discovery
  • Strong experience working with data science teams to refine and optimize data science and machine learning models and algorithms
  • Basic experience working with data governance/data quality and data security teams, specifically data stewards and security resources, to move data pipelines into production with appropriate data quality, governance, and security standards and certification
  • Demonstrated ability to work across multiple deployment environments (cloud, on-premises, and hybrid) and operating systems, and with container technologies and orchestration platforms such as Docker, Kubernetes, and Amazon Elastic Container Service (ECS)
  • Familiarity with agile methodologies and the ability to apply DevOps and, increasingly, DataOps principles to data pipelines to improve the communication, integration, reuse, and automation of data flows between data managers and consumers across an organization
  • Strong written and verbal communication skills with an aptitude for problem-solving
  • Ability to independently resolve issues and efficiently self-direct work activities by capturing, organizing, and analyzing information
  • Experience troubleshooting complicated issues across multiple systems and driving solutions
  • Experience providing technical solutions to non-technical individuals
  • Demonstrated team-building skills
  • Ability to deal with internal employees and external business contacts while conveying a positive, service-oriented attitude


