Are you heading to AWS re:Invent 2024 and looking for AWS Inferentia and Trainium sessions to take your machine learning skills to the next level?
Authored by Armin Agha-Ebrahim, GTM Specialist, AWS AI Chips
We will have many sessions to help you learn more about our AWS AI chips, which already help customers build and deploy generative AI workloads with higher performance, lower cost, and better energy efficiency. Join one of our many breakout sessions, chalk talks, workshops, or builder sessions, and stop by our booth in the Expo hall to see a live demo of AWS Trainium and chat with our Neuron experts.
Breakouts
| Session ID | Session Name | Date | Time | Venue | Floor | Room |
|---|---|---|---|---|---|---|
| GBL102-CMN | Industrialization of AI in advanced computing [Mandarin Chinese] | Monday, Dec 2 | 9:00 AM - 10:00 AM PST | Mandalay Bay | Level 2 South | Mandalay Bay Ballroom L |
| PEX209 | Accelerate your customer’s gen AI journey with ROI best practices | Monday, Dec 2 | 1:00 PM - 2:00 PM PST | Wynn | Upper Convention Promenade | Bollinger |
| CMP321 | Explore the many ways to train foundation models on AWS | Monday, Dec 2 | 1:00 PM - 2:00 PM PST | Mandalay Bay | Level 2 South | Mandalay Bay Ballroom L |
| CMP209 | Conquer AI performance, cost, and scale with AWS AI chips | Tuesday, Dec 3 | 1:30 PM - 2:30 PM PST | Caesars Forum | Level 1 | Summit 232 |
| CMP207 | How AWS accelerated computing enables customer success with generative AI | Tuesday, Dec 3 | 5:30 PM - 6:30 PM PST | Venetian | Level 3 | Murano 3304 |
| CMP208 | Customer stories: Optimizing AI performance and costs with AWS AI chips | Thursday, Dec 5 | 12:30 PM - 1:30 PM PST | Wynn | Level 1 | Lafite 4 |
Chalk Talks
| Session ID | Session Name | Date | Time | Venue | Floor | Room |
|---|---|---|---|---|---|---|
| CMP337 | Fine-tune and deploy Llama 3.1 models on AWS Trainium and Inferentia | Monday, Dec 2 | 10:30 AM - 11:30 AM PST | Caesars Forum | Level 1 | Forum 126 |
| CMP331-R | Build and accelerate LLMs on AWS Trainium and AWS Inferentia using Ray | Monday, Dec 2 | 3:00 PM - 4:00 PM PST | MGM | Level 3 | 302 |
| CMP330 | Cost-effectively deploy PyTorch LLMs on AWS Inferentia using Amazon EKS | Tuesday, Dec 3 | 11:30 AM - 12:30 PM PST | MGM | Level 1 | Boulevard 158 |
| CMP335-R | Drilling down into performance for distributed training | Tuesday, Dec 3 | 2:30 PM - 3:30 PM PST | Caesars Forum | Level 1 | Academy 416 |
| CMP318 | Choose the optimal compute environment for your AI/ML workloads | Wednesday, Dec 4 | 9:00 AM - 10:00 AM PST | Caesars Forum | Level 1 | Alliance 305 |
| CMP331-R1 | Build and accelerate LLMs on AWS Trainium and AWS Inferentia using Ray | Wednesday, Dec 4 | 10:30 AM - 11:30 AM PST | Caesars Forum | Level 1 | Alliance 305 |
| CMP326 | Accelerate AI innovation for health care and life sciences on AWS | Wednesday, Dec 4 | 10:30 AM - 11:30 AM PST | Mandalay Bay | Level 3 South | South Seas D |
| CMP335-R1 | Drilling down into performance for distributed training | Thursday, Dec 5 | 1:00 PM - 2:00 PM PST | Mandalay Bay | Level 3 South | South Seas D |
| CMP329 | Beyond Text: Unlock multimodal AI with AWS AI chips | Thursday, Dec 5 | 3:30 PM - 4:30 PM PST | Mandalay Bay | Level 2 South | Lagoon G |
Workshops and Builder Sessions
| Session ID | Session Name | Date | Time | Venue | Floor | Room |
|---|---|---|---|---|---|---|
| CMP401 | Build and optimize novel models on AWS Trainium using Neuron Kernel Interface | Monday, Dec 2 | 4:30 PM - 5:30 PM PST | Wynn | Convention Promenade | Lafite 2 |
| CMP304-R | Fine-tune Hugging Face LLMs using Amazon SageMaker and AWS Trainium | Tuesday, Dec 3 | 11:30 AM - 12:30 PM PST | MGM | Level 1 | Terrace 151 |
| CMP304-R1 | Fine-tune Hugging Face LLMs using Amazon SageMaker and AWS Trainium | Tuesday, Dec 3 | 2:30 PM - 3:30 PM PST | MGM | Level 3 | 350 |
| CMP314-R | Keeping it small: Agentic workflows with SLMs on AWS Inferentia | Tuesday, Dec 3 | 3:00 PM - 4:00 PM PST | Caesars Forum | Level 1 | Summit 232 |
| CMP306 | Adapting LLMs for domain-aware apps with post-training on AWS Trainium | Tuesday, Dec 3 | 3:30 PM - 5:30 PM PST | Wynn | Convention Promenade | Margaux 2 |
| CMP304-R2 | Fine-tune Hugging Face LLMs using Amazon SageMaker and AWS Trainium | Tuesday, Dec 3 | 4:00 PM - 5:00 PM PST | MGM | Level 3 | 350 |
| CMP314-R1 | Keeping it small: Agentic workflows with SLMs on AWS Inferentia | Wednesday, Dec 4 | 8:30 AM - 9:30 AM PST | Caesars Forum | Level 1 | Summit 232 |
| CMP307-R | Demystifying LLM deployment and optimization on AWS Inferentia | Wednesday, Dec 4 | 1:00 PM - 3:00 PM PST | Caesars Forum | Level 1 | Summit 216 |
| CMP309 | Is RAG all you need? | Wednesday, Dec 4 | 4:00 PM - 6:00 PM PST | Venetian | Level 3 | Murano 3201B |
| CMP307-R1 | Demystifying LLM deployment and optimization on AWS Inferentia | Thursday, Dec 5 | 12:00 PM - 2:00 PM PST | MGM | Level 3 | Premier 315 |
| CMP314-R2 | Keeping it small: Agentic workflows with SLMs on AWS Inferentia | Friday, Dec 6 | 10:00 AM - 11:00 AM PST | Caesars Forum | Level 1 | Alliance 315 |
Visit us in AWS Village!
Stop by booth 20 in the AWS Village to chat with our Neuron experts and try out the AWS Trainium and Neuron demos.
AWS Trainium for Text and Image Generation
This demo explores AWS Trainium instances powering state-of-the-art AI workloads, including text generation and image generation. It covers two use cases:
- Text generation with the Llama 3.1 70B model: we use the large-scale Llama 3.1 70B model to generate high-quality text (a minimal code sketch follows this list).
- Image generation with the PixArt-Sigma model: using PixArt-Sigma, this part of the demo shows Trainium handling image generation tasks.
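If you want to try something similar before visiting the booth, here is a minimal sketch of Llama-style text generation on Trainium or Inferentia using the Hugging Face optimum-neuron library. This is not the booth demo's code; the model ID, input shapes, and core count below are illustrative assumptions and must match your instance type (for example, trn1.32xlarge or inf2.48xlarge) and your model access.

```python
# Hypothetical sketch: compile and run a Llama-family model on AWS Neuron devices
# with Hugging Face optimum-neuron. Values below are illustrative, not prescriptive.
from optimum.neuron import NeuronModelForCausalLM
from transformers import AutoTokenizer

model_id = "meta-llama/Llama-3.1-70B-Instruct"  # assumed model; any Neuron-supported LLM works

# Export (ahead-of-time compile) the model for NeuronCores with fixed input shapes.
model = NeuronModelForCausalLM.from_pretrained(
    model_id,
    export=True,
    batch_size=1,
    sequence_length=4096,
    num_cores=32,          # number of NeuronCores to shard across (instance-dependent)
    auto_cast_type="bf16",
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
inputs = tokenizer("Tell me about AWS Trainium.", return_tensors="pt")

# Standard transformers-style generation on the compiled model.
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The export step compiles the model ahead of time for fixed batch size and sequence length, which is how Neuron devices deliver steady-state inference throughput; in practice you would save the compiled artifacts and reload them rather than recompiling on every run.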
Check the re:Invent session catalog for our events and secure your spot. We’ll see you in Las Vegas, December 2–6, 2024!