Amazon


Data Engineer - Talent Management

Job ID 
610794
Location 
US-WA-Seattle
Posted Date 
1/13/2018
Company 
Amazon.com Services, Inc.
Position Category 
Business Intelligence

Job Description

The Global Talent Management (GTM) team is centrally responsible for creating and evolving Amazon’s talent programs and processes. People Analytics is a growing start-up team inside of GTM with a direct impact on 500,000+ Amazonians across all of our businesses and locations around the world. We regularly use data to pitch ideas and drive conversations with Amazon’s Senior Vice President of HR and other executives about how to improve existing talent programs like Career Development and the Annual Review or invent new ones that address the evolving needs of our diverse employee base.

GTM is looking for experienced data engineers to own the end-to-end data governance, storage, and access models for our new and evolving talent products. In this role, you will design and deploy a scalable data platform with compelling business intelligence tools to directly influence decisions made by our executive business and HR leadership teams. You will work directly with product/program owners, influencing design, data capture, transforms, gated extraction, reporting, and measurement/evaluation strategies. You will leverage existing datasets from partner teams as well as data collected through methods you create and define. You will apply your passion for data and exceptional technical expertise to deploy data structures, permissions models, and automated transforms/views that enable people decisions for leaders across Amazon's diverse set of businesses. As the end-to-end owner of highly confidential data, you will also build proof-of-concept reporting tools while scaling ad hoc data access, enabling GTM's internal product/program teams to make informed decisions in a fast-paced, ambiguous business environment.

Responsibilities:
  • Design, implement, and support data structures that provide ad hoc, on-demand access to reliable and relevant data using best practices in data modeling and AWS (Redshift, EC2, S3) technologies
  • Build robust and scalable data integration (ETL) pipelines using SQL/R and Python
  • Evaluate and make decisions around dataset implementations and various ETL tools designed and proposed by peers
  • Gather business and functional requirements from product/program owners and end customers to create data capture, visualization, and measurement strategies that drive business action
  • Contribute to the evolution of your team’s data engineering practices, recommending changes in development, training, maintenance, and system standards
  • Establish scalable, efficient, automated processes for on-demand data analysis
  • Compile results in a concise, meaningful, and actionable format and present findings to senior leadership
  • Model data and metadata for ad hoc and pre-built reporting
  • Continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers
  • Participate in strategic & tactical planning discussions, including product design meetings
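
To give a concrete sense of the ETL work described above, here is a minimal illustrative sketch in Python. All specifics (the employee records, the tenure-bucket transform, and the use of SQLite as a stand-in warehouse) are hypothetical; a production pipeline on this team would more likely extract from S3 and load into Redshift.

```python
import csv
import io
import sqlite3

# Extract: parse a raw CSV feed (hypothetical sample data; a real pipeline
# would pull files from S3 or a partner team's dataset).
raw = """employee_id,hire_year,org
101,2015,Retail
102,2017,AWS
103,2013,Retail
"""
rows = list(csv.DictReader(io.StringIO(raw)))

# Transform: derive a tenure bucket per employee (as of 2018).
for r in rows:
    tenure = 2018 - int(r["hire_year"])
    r["tenure_bucket"] = "5+" if tenure >= 5 else "0-4"

# Load: write into a warehouse table (SQLite stands in for Redshift here).
con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE employees (employee_id INT, org TEXT, tenure_bucket TEXT)"
)
con.executemany(
    "INSERT INTO employees VALUES (?, ?, ?)",
    [(int(r["employee_id"]), r["org"], r["tenure_bucket"]) for r in rows],
)

# A downstream ad hoc report: headcount by org and tenure bucket.
report = con.execute(
    "SELECT org, tenure_bucket, COUNT(*) FROM employees "
    "GROUP BY org, tenure_bucket ORDER BY org, tenure_bucket"
).fetchall()
print(report)
```

The same extract/transform/load shape scales up when the in-memory pieces are swapped for S3 objects, an orchestrated job, and a Redshift `COPY` load.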

Basic Qualifications

  • BA/BS or advanced degree in quantitative/technical field (e.g. Computer Science, Statistics, Engineering) and 5+ years of direct analytics experience, or equivalent combination of degree and experience
  • 3+ years of hands-on experience writing complex, highly optimized SQL queries across large datasets
  • 3+ years of relevant experience with data manipulation and mining (e.g., joins in SQL, data warehousing, ETL, Excel VBA), visualization (e.g., Shiny/RStudio, Tableau, MicroStrategy), and extraction and storage (e.g., EDI/EDX, relational database management, especially using AWS tools)
  • Highly flexible, with the ability to identify, analyze and solve problems systematically in a fast-paced, rapidly-evolving organization
  • Proven ability to manage multiple projects with competing priorities
  • Business-facing verbal and written communication skills; able to present and clearly communicate complex data, analyses, and findings to non-experts
  • Ability to maintain discretion, confidentiality, and professionalism with sensitive information

Preferred Qualifications

  • Relevant coding experience with scripting tools (e.g. Python/Matlab, Java/JavaScript)
  • Business-relevant experience with AWS big data technologies (e.g., Redshift, S3, EC2, QuickSight)
  • Advanced proficiency with statistical software packages (e.g. R, SAS, SPSS, AMOS, MPlus, STATA)
  • Experience with enterprise business software packages (e.g. Cognos, PeopleSoft, Workday)
  • Experience solving business problems with data-based analysis