Position Details: Data Engineer
Are you passionate about leveraging data to deliver actionable insights that impact the Client's daily business decisions? Does the prospect of dealing with massive volumes of data excite you? Do you want to build a cutting-edge, highly scalable analytics platform using AWS technologies?
The Hardlines Business Intelligence team is looking for an experienced, self-driven, analytical, and strategic Data Engineer. In this role, you will be working in one of the world's largest and most complex data warehouse environments. You should be passionate about working with huge datasets and be someone who loves to bring data together to answer business questions. You should have deep expertise in the creation and management of datasets and the proven ability to translate data into meaningful insights through collaboration with Business Intelligence Engineers (BIEs), Data Scientists, and business users. In this role, you will have ownership of the end-to-end development of data engineering solutions to complex questions, as well as of the Redshift fleets on which the data will be hosted.
The right candidate will possess excellent business and communication skills, be able to work with business owners to develop and define key business questions, and be able to collaborate with BIEs and Data Scientists to analyze data that will answer those questions. You should have a solid understanding of how to build efficient and scalable data infrastructure and data models.
In this role, you will have the opportunity to display your skills in the following areas:
- Design, implement, and support an analytical data infrastructure providing ad hoc access to large datasets and computing power
- Manage AWS resources, including EC2, RDS, and Redshift
- Gather business and functional requirements and translate these requirements into robust, scalable, operable solutions that work well within the overall data architecture
- Explore and learn the latest AWS technologies to provide new capabilities and increase efficiency
- Participate in the full development life cycle, end-to-end, from design, implementation and testing, to documentation, delivery, support, and maintenance
- Produce comprehensive, usable dataset documentation and metadata
- Evaluate and make decisions around the use of new or existing software products and tools
Basic Qualifications:
- Bachelor's degree in Computer Science, MIS, or a related technical field, or equivalent work experience
- 5 or more years of overall work experience, including 3 or more years in analytics, data engineering, or a related field
- Proven experience in data modeling, ETL development, and data warehousing, or similar skills
- Demonstrable skills and experience using SQL with large data sets (e.g. Oracle, SQL Server, Redshift)
- Experience with AWS technologies including Redshift, RDS, S3
- Proven track record of successful communication of data infrastructure, data models, and data engineering solutions through written communication, including an ability to effectively communicate with both business and technical teams
Preferred Qualifications:
- Industry experience as a Data Engineer or in a related specialty (e.g., Software Engineer, Business Intelligence Engineer, Data Scientist), with a track record of manipulating, processing, and extracting value from large datasets
- Coding proficiency in at least one modern programming language (Python, Ruby, Java, etc.)
- Experience building and operating highly available, distributed systems for the extraction, ingestion, and processing of large datasets
- Experience building data products incrementally and integrating and managing datasets from multiple sources
- Query performance tuning skills using Unix profiling tools and SQL
- Experience leading large-scale data warehousing and analytics projects using AWS technologies such as Redshift, S3, EC2, and Data Pipeline, as well as other big data technologies
- Experience providing technical leadership and mentoring other engineers on data engineering best practices
- Experience with Linux/UNIX, including using command-line tools to process large datasets
- Experience with AWS
- Experience with Hadoop or other map/reduce "big data" systems and services