The Principal Scientist will initiate and drive key initiatives to develop and advance world-leading wakeword and embedded ML technology for any voice-driven Alexa endpoint. The goal is to achieve unmatched wakeword recognition accuracy on any device, within a low CPU/memory footprint, in any acoustic environment, for any speaker, and under challenging noise conditions such as barge-in. You will be responsible for analyzing system shortcomings, leading the development of data-driven and algorithmic improvements, defining the path to production, and influencing the design and architecture of goal-relevant software. You will work in a hybrid, fast-paced organization where scientists and engineers work jointly and drive improvements directly to production.
The Principal Scientist will either go deep on a specific area, such as multi-channel and noise-robust acoustic modeling for a conversational agent, and act as a technical lead, or will work across teams and areas, influencing data, algorithm, and design decisions. Areas of interest span the whole acoustic modeling and embedded ML spectrum, including multi-channel raw-audio acoustic modeling, noise-robust acoustic modeling, device- and speaker-independent acoustic modeling, acoustic model adaptation, advanced deep learning for acoustic and language modeling, active learning and semi-/unsupervised learning techniques for acoustic and language modeling, learning from heterogeneous and mismatched audio data (including data selection and data simulation), multilingual speech, quantization-aware training, federated ML, speaker adaptation, and more.
The Principal Scientist will help drive scalable, robust, and automated solutions: making new algorithms and processes work at production-scale data sizes and achieving fully automated adaptation of processes and algorithms to new environments and locales. You will also help integrate new algorithms and processes into existing modeling stacks, simplify and streamline those stacks, and develop testing and evaluation strategies. You will influence the design and architecture of the software stacks used offline and at runtime for building and deploying model artifacts, achieving flexible yet efficient solutions suitable both for R&D work and for running in production. You will mentor and coach junior scientists to raise the bar of scientific research within Amazon.
· Graduate degree (MS or PhD) in Electrical Engineering, Computer Science, or Mathematics, with at least 7 years of related work experience
· Domain expertise in acoustic modeling for speech recognition and/or embedded machine learning, and familiarity with deep learning for speech recognition
· Familiarity with machine learning and statistical modeling techniques, scientific thinking, and the ability to invent
· Familiarity with programming languages such as C/C++ and Python
· PhD with specialization in speech recognition and machine learning
· Strong publication record
· Strong software design and development skills
· Experience working effectively with science, data processing, and software engineering teams
· Proven track record of innovation and advancing the state of the art
· Entrepreneurial spirit combined with strong architectural and problem-solving skills
· Excellent written and spoken communication skills