Software Development Engineer - AI/ML, AWS Neuron Frameworks

AWS Neuron is the complete software stack for the AWS Inferentia and Trainium cloud-scale machine learning accelerators. As part of the Neuron Frameworks team, you'll develop and enhance PyTorch and JAX support for AWS Neuron, working with the open source ecosystem. You will develop and extend support for the leading ML frameworks, delivering an outstanding user experience for PyTorch and JAX model development on the Trainium and Inferentia accelerators. You will work closely with teams across AWS Neuron, including compiler, training, and inference optimization, to optimize frameworks for AWS's accelerator architectures, and engage closely with the PyTorch, JAX, and other ML framework communities to take advantage of their latest capabilities and improve performance and usability for ML model developers.

A successful candidate will have experience developing machine learning infrastructure and/or ML frameworks, a demonstrated ability to work with open source communities to influence future community direction, robust technical ability, and a motivation to achieve results. Experience with technologies and tools such as XLA, vLLM, or Hugging Face Transformers is highly valued.

**Utility Computing (UC)**

AWS Utility Computing (UC) provides product innovations — from foundational services such as Amazon's Simple Storage Service (S3) and Amazon Elastic Compute Cloud (EC2) to consistently released new product innovations that continue to set AWS's services and features apart in the industry. As a member of the UC organization, you'll support the development and management of Compute, Database, Storage, Internet of Things (IoT), Platform, and Productivity Apps services in AWS, including support for customers who require specialized security solutions for their cloud services.

**Why AWS**

Amazon Web Services (AWS) is the world's most comprehensive and broadly adopted cloud platform.
We pioneered cloud computing and never stopped innovating — that's why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses.

**Diverse Experiences**

Amazon values diverse experiences. Even if you do not meet all of the preferred qualifications and skills listed in the job description, we encourage candidates to apply. If your career is just starting, hasn't followed a traditional path, or includes alternative experiences, don't let it stop you from applying.

**Work/Life Balance**

We value work-life harmony. Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there's nothing we can't achieve in the cloud.

**Inclusive Team Culture**

Here at AWS, it's in our nature to learn and be curious. Our employee-led affinity groups foster a culture of inclusion that empowers us to be proud of our differences. Ongoing events and learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon (gender diversity) conferences, inspire us to never stop embracing our uniqueness.

**Mentorship and Career Growth**

We're continuously raising our performance bar as we strive to become Earth's Best Employer. That's why you'll find endless knowledge-sharing, mentorship, and other career-advancing resources here to help you develop into a better-rounded professional.

Key job responsibilities

This role will help lead the efforts to build distributed inference support into PyTorch and TensorFlow using XLA and the Neuron compiler and runtime stacks. It will also help tune these models to ensure the highest performance and maximize their efficiency running on customers' AWS Trainium and Inferentia silicon and the Trn1 and Inf1 servers.
Strong software development skills in C++/Python and ML knowledge are both critical to this role.

A day in the life

You will work with the team to develop, improve, and release JAX and PyTorch framework support for AWS Neuron. You will understand current and future directions of ML framework development, with a focus on enabling key features of modern frameworks such as torch.compile(). You will work closely with the PyTorch and JAX communities to actively drive future directions and improve the experience of developing and optimizing ML models on accelerators.

About the team

Our team is dedicated to supporting new members. We have a broad mix of experience levels and tenures, and we're building an environment that celebrates knowledge-sharing and mentorship. Our senior members enjoy one-on-one mentoring and thorough, but kind, code reviews. We care about your career growth and strive to assign projects that help our team members develop their engineering expertise so they feel empowered to take on more complex tasks in the future.

BASIC QUALIFICATIONS
- 3+ years of non-internship professional software development experience
- 2+ years of non-internship design or architecture (design patterns, reliability and scaling) of new and existing systems experience
- Experience programming with at least one software programming language ...
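Features like torch.compile() work by capturing a user's Python-level tensor operations into a graph that a backend compiler stack (such as XLA or Neuron) can then optimize. A toy sketch of that graph-capture idea in plain Python; every class and function name here is a hypothetical illustration, not a PyTorch or Neuron API:

```python
# Toy sketch of graph capture: run user code on tracer objects that record
# operations into a graph instead of executing them eagerly.
# All names are hypothetical illustrations, not PyTorch or Neuron APIs.

class Node:
    def __init__(self, op, args):
        self.op, self.args = op, args

class Tracer:
    """Stands in for a tensor; arithmetic on it builds graph nodes."""
    def __init__(self, name):
        self.node = Node("input", (name,))

    @classmethod
    def wrap(cls, node):
        t = cls.__new__(cls)
        t.node = node
        return t

    def __add__(self, other):
        return Tracer.wrap(Node("add", (self.node, other.node)))

    def __mul__(self, other):
        return Tracer.wrap(Node("mul", (self.node, other.node)))

def capture(fn, *arg_names):
    """Call fn on tracers and return the root node of the captured graph."""
    return fn(*[Tracer(n) for n in arg_names]).node

def count_ops(node):
    if node.op == "input":
        return 0
    return 1 + sum(count_ops(a) for a in node.args)

# Capture f(x, y) = x * y + y as a two-op graph rooted at "add".
graph = capture(lambda x, y: x * y + y, "x", "y")
print(graph.op, count_ops(graph))  # add 2
```

A real compiler would then lower such a graph to accelerator instructions; this sketch stops at capture to show the shape of the idea.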

Software Development Engineer - AI/ML, AWS Neuron Apps

AWS Neuron is the complete software stack for the AWS Inferentia and Trainium cloud-scale machine learning accelerators and the Trn1 and Inf1 servers that use them. This role is for a software engineer on the Machine Learning Applications (ML Apps) team for AWS Neuron.

This role is responsible for development, enablement, and performance tuning of a wide variety of ML model families, including massive-scale large language models like Llama2, GPT2, GPT3, and beyond, as well as Stable Diffusion, Vision Transformers, and many more.

The ML Apps team works side by side with compiler engineers and runtime engineers to create, build, and tune distributed inference solutions with Trn1. Experience optimizing inference performance for both latency and throughput on these large models using Python, PyTorch, or JAX is a must. DeepSpeed and other distributed inference libraries are central to this work, and extending all of it to the Neuron-based system is key.

Key job responsibilities

This role will help lead the efforts to build distributed inference support into PyTorch and TensorFlow using XLA and the Neuron compiler and runtime stacks. It will also help tune these models to ensure the highest performance and maximize their efficiency running on customers' AWS Trainium and Inferentia silicon and the Trn1 and Inf1 servers. Strong software development skills in C++/Python and ML knowledge are both critical to this role.

A day in the life

As you design and code solutions to help our team drive efficiencies in software architecture, you'll create metrics, implement automation and other improvements, and resolve the root cause of software defects.
You'll also:
- Build high-impact solutions to deliver to our large customer base.
- Participate in design discussions, code review, and communicate with internal and external stakeholders.
- Work cross-functionally to help drive business decisions with your technical input.
- Work in a startup-like development environment, where you're always working on the most important stuff.

About the team

Our team is dedicated to supporting new members. We have a broad mix of experience levels and tenures, and we're building an environment that celebrates knowledge-sharing and mentorship. Our senior members enjoy one-on-one mentoring and thorough, but kind, code reviews. We care about your career growth and strive to assign projects that help our team members develop their engineering expertise so they feel empowered to take on more complex tasks in the future.

BASIC QUALIFICATIONS
- 3+ years of non-internship professional software development experience
- 2+ years of non-internship design or architecture (design patterns, reliability and scaling) of new and existing systems experience
- Experience programming with at least one software programming language ...
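One core technique behind the distributed inference work described above is tensor parallelism: sharding a layer's weight matrix across devices so each device computes a slice of the output, then gathering the slices. A minimal pure-Python sketch under that assumption, where plain lists stand in for device shards; this is an illustration, not a Neuron or DeepSpeed API:

```python
# Toy sketch of tensor-parallel matrix-vector multiply: split the weight's
# rows across "devices", compute partial outputs, then gather the slices.
# Pure-Python illustration; not a real Neuron or DeepSpeed API.

def matvec(rows, x):
    """Dense matrix-vector product over row lists."""
    return [sum(w * v for w, v in zip(row, x)) for row in rows]

def shard_rows(weight, n_dev):
    """Split the weight's rows evenly across n_dev devices."""
    k = len(weight) // n_dev
    return [weight[i * k:(i + 1) * k] for i in range(n_dev)]

def tensor_parallel_matvec(weight, x, n_dev):
    # Each "device" computes the output slice for its row shard ...
    parts = [matvec(shard, x) for shard in shard_rows(weight, n_dev)]
    # ... and an all-gather concatenates the slices into the full output.
    return [v for part in parts for v in part]

W = [[1, 0], [0, 1], [2, 2], [3, 1]]
x = [10, 1]
assert tensor_parallel_matvec(W, x, 2) == matvec(W, x)
print(matvec(W, x))  # [10, 1, 22, 31]
```

Real systems shard far larger matrices and overlap the gather with compute, but the invariant is the same: the sharded result must match the unsharded one.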

Machine Learning Engineer (L5), AWS Neuron Apps

AWS Neuron is the complete software stack for the AWS Inferentia and Trainium cloud-scale machine learning accelerators and the Trn1 and Inf1 servers that use them. This role is for a software engineer on the Machine Learning Applications (ML Apps) team for AWS Neuron.

This role is responsible for development, enablement, and performance tuning of a wide variety of ML model families, including massive-scale large language models like Llama2, GPT2, GPT3, and beyond, as well as Stable Diffusion, Vision Transformers, and many more.

The ML Apps team works side by side with compiler engineers and runtime engineers to create, build, and tune distributed inference solutions with Trn1. Experience optimizing inference performance for both latency and throughput on these large models using Python, PyTorch, or JAX is a must. DeepSpeed and other distributed inference libraries are central to this work, and extending all of it to the Neuron-based system is key.

Key job responsibilities

This role will help lead the efforts to build distributed inference support into PyTorch and TensorFlow using XLA and the Neuron compiler and runtime stacks. It will also help tune these models to ensure the highest performance and maximize their efficiency running on customers' AWS Trainium and Inferentia silicon and the Trn1 and Inf1 servers. Strong software development skills in C++/Python and ML knowledge are both critical to this role.

A day in the life

As you design and code solutions to help our team drive efficiencies in software architecture, you'll create metrics, implement automation and other improvements, and resolve the root cause of software defects.
You'll also:
- Build high-impact solutions to deliver to our large customer base.
- Participate in design discussions, code review, and communicate with internal and external stakeholders.
- Work cross-functionally to help drive business decisions with your technical input.
- Work in a startup-like development environment, where you're always working on the most important stuff.

About the team

Our team is dedicated to supporting new members. We have a broad mix of experience levels and tenures, and we're building an environment that celebrates knowledge-sharing and mentorship. Our senior members enjoy one-on-one mentoring and thorough, but kind, code reviews. We care about your career growth and strive to assign projects that help our team members develop their engineering expertise so they feel empowered to take on more complex tasks in the future.

BASIC QUALIFICATIONS
- 3+ years of non-internship professional software development experience
- 2+ years of non-internship design or architecture (design patterns, reliability and scaling) of new and existing systems experience
- Experience programming with at least one software programming language ...
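Another scheme common in the distributed inference libraries mentioned above is pipeline parallelism: the model's layers are partitioned into stages, each stage placed on its own device, and activations flow from stage to stage. A toy illustration in plain Python; the layer and stage structure here is hypothetical, not a real library's API:

```python
# Toy sketch of pipeline-parallel inference: layers are split into stages
# (one per "device") and activations flow stage to stage.
# Hypothetical pure-Python illustration, not a real library API.

def make_layer(scale, bias):
    """Stand-in for a network layer: y = scale * x + bias, elementwise."""
    return lambda xs: [scale * x + bias for x in xs]

layers = [make_layer(2, 0), make_layer(1, 3), make_layer(2, 1)]

# Stage 0 holds the first two layers; stage 1 holds the last one.
stages = [layers[:2], layers[2:]]

def run_pipeline(stages, xs):
    # Each stage consumes the previous stage's activations.
    for stage in stages:
        for layer in stage:
            xs = layer(xs)
    return xs

print(run_pipeline(stages, [1, 2]))  # [11, 15]
```

In a real deployment the stages run concurrently on different chips with micro-batches in flight; this sketch only shows the layer partitioning and the hand-off of activations.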

SDE AI/ML II, ML Inference Apps, AWS Neuron Apps

AWS Neuron is the complete software stack for the AWS Inferentia and Trainium cloud-scale machine learning accelerators and the Trn1 and Inf1 servers that use them. This role is for a software engineer on the Machine Learning Applications (ML Apps) team for AWS Neuron.

This role is responsible for development, enablement, and performance tuning of a wide variety of ML model families, including massive-scale large language models like Llama2, GPT2, GPT3, and beyond, as well as Stable Diffusion, Vision Transformers, and many more.

The ML Apps team works side by side with compiler engineers and runtime engineers to create, build, and tune distributed inference solutions with Trn1. Experience optimizing inference performance for both latency and throughput on these large models using Python, PyTorch, or JAX is a must. DeepSpeed and other distributed inference libraries are central to this work, and extending all of it to the Neuron-based system is key.

Key job responsibilities

This role will help lead the efforts to build distributed inference support into PyTorch and TensorFlow using XLA and the Neuron compiler and runtime stacks. It will also help tune these models to ensure the highest performance and maximize their efficiency running on customers' AWS Trainium and Inferentia silicon and the Trn1 and Inf1 servers. Strong software development skills in C++/Python and ML knowledge are both critical to this role.

A day in the life

As you design and code solutions to help our team drive efficiencies in software architecture, you'll create metrics, implement automation and other improvements, and resolve the root cause of software defects.
You'll also:
- Build high-impact solutions to deliver to our large customer base.
- Participate in design discussions, code review, and communicate with internal and external stakeholders.
- Work cross-functionally to help drive business decisions with your technical input.
- Work in a startup-like development environment, where you're always working on the most important stuff.

About the team

Our team is dedicated to supporting new members. We have a broad mix of experience levels and tenures, and we're building an environment that celebrates knowledge-sharing and mentorship. Our senior members enjoy one-on-one mentoring and thorough, but kind, code reviews. We care about your career growth and strive to assign projects that help our team members develop their engineering expertise so they feel empowered to take on more complex tasks in the future.

BASIC QUALIFICATIONS
- 3+ years of non-internship professional software development experience
- 2+ years of non-internship design or architecture (design patterns, reliability and scaling) of new and existing systems experience
- Experience programming with at least one software programming language ...
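The latency/throughput trade-off called out above can be made concrete with a toy cost model: batching amortizes a fixed per-step overhead, raising throughput, but every request then waits for the whole batch step, raising latency. The numbers below are illustrative assumptions, not measured Trn1/Inf1 figures:

```python
# Toy cost model of batched inference: larger batches raise throughput by
# amortizing fixed per-step overhead, at the cost of per-request latency.
# The millisecond constants are hypothetical, for illustration only.

def step_time_ms(batch, fixed_ms=10.0, per_req_ms=1.0):
    """Time for one batched step: fixed overhead plus per-request work."""
    return fixed_ms + per_req_ms * batch

def throughput_rps(batch):
    """Requests completed per second at the given batch size."""
    return batch / (step_time_ms(batch) / 1000.0)

for b in (1, 8, 32):
    print(b, step_time_ms(b), round(throughput_rps(b), 1))
# 1 11.0 90.9
# 8 18.0 444.4
# 32 42.0 761.9
```

Tuning work of the kind described in this role is about shifting this curve, e.g. shrinking the fixed overhead or hiding it behind compute, rather than just picking a point on it.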

Senior ML Compiler Engineer, AWS Neuron, Annapurna Labs

Do you love decomposing problems to develop products that impact millions of people around the world? The AWS Neuron Compiler team is actively seeking a skilled Senior Software Development Engineer to build, deliver, and maintain a state-of-the-art deep learning compiler stack that delights our customers and raises our performance bar. This stack is designed to optimize application models across diverse domains, including large language and vision models, originating from leading frameworks such as PyTorch, TensorFlow, and JAX. Your role will involve working closely with our custom-built machine learning accelerators, Inferentia and Trainium, which represent the forefront of AWS innovation for advanced ML capabilities, powering solutions like generative AI.

In this role as a Senior ML Compiler Engineer, you'll be instrumental in designing, developing, and optimizing features for our compiler. You will develop and scale the compiler to handle the world's largest ML workloads. You will architect and implement business-critical features, publish cutting-edge research, and mentor a brilliant team of experienced engineers. You will need to be technically capable, credible, and curious in your own right as a trusted AWS Neuron engineer, innovating on behalf of our customers.

Your responsibilities will involve tackling crucial challenges alongside a talented engineering team, contributing to leading-edge design and research in compiler technology and deep-learning systems software. Strong experience developing compiler optimizations, graph theory, hardware bring-up, FPGA placement and routing algorithms, or hardware resource management will be a benefit in this role. Additionally, you'll collaborate closely with cross-functional team members from the Runtime, Frameworks, and Hardware teams to ensure system-wide performance optimization. You will leverage your technical communication skills as a hands-on partner to AWS ML services teams.
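Much of the compiler work described above happens at the graph level: rewriting an operator graph into a cheaper equivalent before code generation. A minimal, hypothetical constant-folding pass over a toy tuple-based IR (not the Neuron compiler's actual representation):

```python
# Toy constant-folding pass over a tiny expression graph, illustrating the
# kind of graph-level optimization a deep learning compiler performs.
# The tuple IR here is a hypothetical sketch, not the Neuron compiler's IR.

OPS = {"add": lambda a, b: a + b, "mul": lambda a, b: a * b}

def fold(node):
    """Recursively replace any op whose inputs are all constants with a constant."""
    if node[0] in ("const", "input"):
        return node
    op, lhs, rhs = node[0], fold(node[1]), fold(node[2])
    if lhs[0] == "const" and rhs[0] == "const":
        return ("const", OPS[op](lhs[1], rhs[1]))
    return (op, lhs, rhs)

# x * (2 + 3)  folds to  x * 5: one multiply survives to code generation.
graph = ("mul", ("input", "x"), ("add", ("const", 2), ("const", 3)))
print(fold(graph))  # ('mul', ('input', 'x'), ('const', 5))
```

Production passes work over much richer IRs with tensor shapes and side-effect tracking, but the rewrite-until-fixed-point structure is the same.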
You will be involved in pre-silicon design, bringing new products/features to market, and participating in many other exciting projects.

AWS Utility Computing (UC) provides product innovations that continue to set AWS's services and features apart in the industry. As a member of the UC organization, you'll support the development and management of Compute, Database, Storage, Platform, and Productivity Apps services in AWS, including support for customers who require specialized security solutions for their cloud services. Additionally, this role may involve exposure to and experience with Amazon's growing suite of generative AI services and other cutting-edge cloud computing offerings across the AWS portfolio.

Annapurna Labs (our organization within AWS UC) designs silicon and software that accelerate innovation. Customers choose us to create cloud solutions that solve challenges that were unimaginable a short time ago — even yesterday. Our custom chips, accelerators, and software stacks enable us to take on technical challenges that have never been seen before, and deliver results that help our customers change the world.

Explore the product and our history!
https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-guide/neuron-cc/index.html
https://aws.amazon.com/machine-learning/neuron/
https://github.com/aws/aws-neuron-sdk
https://www.amazon.science/how-silicon-innovation-became-the-secret-sauce-behind-awss-success

Key job responsibilities

Our engineers collaborate across diverse teams, projects, and environments to have a firsthand impact on our global customer base. You'll bring a passion for innovation, data, search, analytics, and distributed systems. You'll also:
- Solve challenging technical problems, often ones not solved before, at every layer of the stack.
- Design, implement, test, deploy, and maintain innovative software solutions to transform service performance, durability, cost, and security.
- Build high-quality, highly available, always-on products.
- Research implementations that deliver the best possible experiences for customers.

A day in the life

As you design and code solutions to help our team drive efficiencies in software architecture, you'll create metrics, implement automation and other improvements, and resolve the root cause of software defects. You'll also:
- Build high-impact solutions to deliver to our large customer base.
- Participate in design discussions, code review, and communicate with internal and external stakeholders.
- Work cross-functionally to help drive business decisions with your technical input.
- Work in a startup-like development environment, where you're always working on the most important stuff.

About the team

Our team is dedicated to supporting new members. We have a broad mix of experience levels and tenures, and we're building an environment that celebrates knowledge-sharing and mentorship. Our senior members enjoy one-on-one mentoring and thorough, but kind, code reviews. We care about your career growth and strive to assign projects that help our team members develop their engineering expertise so they feel empowered to take on more complex tasks in the future.

Diverse Experiences

AWS values diverse experiences. Even if you do not meet all of the qualifications and skills listed in the job description, we encourage candidates to apply. If your career is just starting, hasn't followed a traditional path, or includes alternative experiences, don't let it stop you from applying.

About AWS

Amazon Web Services (AWS) is the world's most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating — that's why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses.

Inclusive Team Culture

Here at AWS, it's in our nature to learn and be curious. Our employee-led affinity groups foster a culture of inclusion that empowers us to be proud of our differences. Ongoing events and learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon (gender diversity) conferences, inspire us to never stop embracing our uniqueness.

Work/Life Balance

We value work-life harmony. Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there's nothing we can't achieve in the cloud.

Mentorship & Career Growth

We're continuously raising our performance bar as we strive to become Earth's Best Employer. That's why you'll find endless knowledge-sharing, mentorship, and other career-advancing resources here to help you develop into a better-rounded professional.

BASIC QUALIFICATIONS
- B.S. or M.S. in computer science or related field
- 5+ years of non-internship professional software development experience including full software development life cycle, encompassing coding standards, code reviews, source control management, build processes, testing, and operations experience
- 5+ years of leading design or architecture (design, reliability and scaling) of new and existing systems experience
- 5+ years of programming with C++
- 3+ years of experience developing compiler optimization, graph theory, hardware bring-up, FPGA placement and routing algorithms, or hardware resource management
- Experience as a mentor, tech lead or leading an engineering team ...