Data Architect, Data Lake, Professional Services
DESCRIPTION
Are you a Data Analytics specialist? Do you have Data Lake/Hadoop experience? Do you like to solve the most complex, high-scale (billions+ records) data challenges in the world today? Do you like to work on-site in a variety of business environments, leading teams through high-impact projects that use the newest data analytics technologies? Would you like a career path that enables you to progress with the rapid adoption of cloud computing?
At Amazon Web Services (AWS), we’re hiring highly technical data architects to collaborate with our customers and partners on key engagements. Our consultants will develop, deliver and implement data analytics projects that help our customers leverage their data to develop business insights. These professional services engagements will focus on customer solutions such as data and business intelligence, machine learning, and batch/real-time data processing.
Responsibilities include:
- Delivery - Help the customer define and implement data architectures (Data Lake, Lake House, Data Mesh, etc.). Engagements include short on-site projects proving the use of AWS data services to support new distributed computing solutions that often span private cloud and public cloud services.
- Solutions - Deliver on-site technical assessments with partners and customers. This includes participating in pre-sales visits, understanding customer requirements, and creating packaged Data & Analytics service offerings.
- Innovate - Engage with the customer’s business and technology stakeholders to create a compelling vision of a data-driven enterprise in their environment. Create new artifacts that promote code reuse.
- Expertise - Collaborate with AWS field sales, pre-sales, training and support teams to help partners and customers learn and use AWS services such as Athena, Glue, Lambda, S3, DynamoDB, Amazon EMR and Amazon Redshift.
- This is a customer-facing role; you may be required to travel to client locations and deliver professional services when needed, up to 50% of the time.
BASIC QUALIFICATIONS
- Experience implementing AWS services in a variety of distributed computing environments
- 3+ years of experience with Data Lake/Hadoop platform implementation
- 2+ years of hands-on experience implementing and performance-tuning Hadoop/Spark deployments
- Experience with Apache Hadoop and the Hadoop ecosystem
- Experience with one or more relevant tools (Sqoop, Flume, Kafka, Oozie, Hue, Zookeeper, HCatalog, Solr, Avro)
PREFERRED QUALIFICATIONS
- Hands-on experience leading large-scale, full-cycle MPP enterprise data warehousing (EDW) and analytics projects (including migrations to Amazon Redshift).
- At least one of the AWS Associate level certifications or higher.
- Ability to lead effectively across organizations and partners.
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.