7-8 April, 2026
Paris, France
Note: The schedule is subject to change.

The Sched app allows you to build your schedule but is not a substitute for your event registration. You must be registered for PyTorch Conference Europe 2026 to participate in the sessions. If you have not registered but would like to join us, please go to the event registration page to purchase a registration.

This schedule is automatically displayed in CEST (UTC/GMT +2). To see the schedule in your preferred timezone, please select from the drop-down menu to the right, above "Filter by Date."
Tuesday, April 7
 

09:00 CEST

Keynote: Co-Evolution: How the Open Source Intelligence Stack Compounds - Mark Collier, Executive Director, PyTorch Foundation, General Manager, AI & Infrastructure, Linux Foundation
Tuesday April 7, 2026 09:00 - 09:10 CEST
Agentic coding systems have crossed a threshold from experimentation to measurable economic impact. Their rapid adoption reveals a deeper shift: modern AI capability emerges from the co-evolution of models, training frameworks, inference engines, reinforcement systems, hardware, and cloud infrastructure, with open source enabling the flow of code, research, and operational knowledge across the stack. As performance gaps narrow and costs fall, this compounding intelligence system accelerates innovation and spreads capability across companies, industries, and hardware platforms, raising a simple question for the community: how fast do we want to evolve?
Speakers
Mark Collier

Executive Director, PyTorch Foundation, General Manager, AI & Infrastructure, The Linux Foundation

Tuesday April 7, 2026 09:00 - 09:10 CEST
Master Stage
  Keynote Sessions
  • Audience Level Any
  • Slides Attached Yes

09:10 CEST

Keynote: PyTorch Updates - Edward Yang, Research Engineer, Meta
Tuesday April 7, 2026 09:10 - 09:30 CEST

Speakers
Edward Yang

Research Engineer, Meta
Edward Yang has worked on PyTorch at Meta almost since the very beginning. Currently, he works on all aspects of PT2, with a particular focus on dynamic shapes support across the stack.
Tuesday April 7, 2026 09:10 - 09:30 CEST
Master Stage
  Keynote Sessions
  • Audience Level Any
  • Slides Attached Yes

09:55 CEST

Keynote: Stream Everything - Moving from Request input to Streaming input - Patrick von Platen, Research Engineer, Mistral AI
Tuesday April 7, 2026 09:55 - 10:10 CEST

Speakers
Patrick von Platen

Research Engineer, Mistral AI
Patrick von Platen is a Research Engineer at Mistral AI, focussed on natural language processing and scalable AI systems. Currently, he contributes to vLLM, is a former core maintainer of Transformers, and created Diffusers.
Tuesday April 7, 2026 09:55 - 10:10 CEST
Master Stage
  Keynote Sessions
  • Audience Level Any
  • Slides Attached Yes

11:00 CEST

Lightning Talk: Why Your Forecasting Transformer Isn’t Working (And How To Fix It in Python) - Rosheen Naeem, Open Climate Fix
Tuesday April 7, 2026 11:00 - 11:10 CEST
Renewable energy is clean — but it’s also inherently variable. Solar PV generation can change dramatically within minutes due to cloud cover and weather conditions, making accurate short-term forecasts essential for grid stability, energy trading, and smart-home optimisation.
Open Climate Fix builds open and high-impact forecasting tools to accelerate the transition to a low-carbon energy system. One of these projects is Open Quartz Solar Forecast: an open-source model that uses public PV generation data, site metadata, and numerical weather prediction variables to forecast solar power for any location.
In this talk, I’ll present a real case study from my Google Summer of Code project where I implemented and trained a Temporal Fusion Transformer for multi-horizon solar forecasting. I’ll cover the practical engineering challenges behind making transformer forecasting work in Python: building continuous training windows, aligning weather forecast steps with observations, separating static vs time-varying features, and stabilising training using PyTorch Forecasting and PyTorch Lightning.
Attendees will leave with reusable patterns for real-world time-series forecasting pipelines.
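To make the data-engineering steps concrete, here is a minimal, framework-free sketch (our illustration, not the speaker's pipeline; `make_windows` and the feature names are hypothetical) of building continuous training windows and separating static from time-varying features:

```python
# Illustrative sketch: continuous encoder/decoder windows for
# multi-horizon forecasting, plus static vs. time-varying feature split.

def make_windows(series, encoder_len, horizon):
    """Slide over a time series, keeping only fully continuous windows."""
    windows = []
    total = encoder_len + horizon
    for start in range(len(series) - total + 1):
        chunk = series[start:start + total]
        if any(v is None for v in chunk):  # a gap breaks the window
            continue
        windows.append((chunk[:encoder_len], chunk[encoder_len:]))
    return windows

# Hourly PV output with a data gap (None) that must break the windows.
pv = [0.0, 0.1, 0.4, 0.9, 0.7, None, 0.3, 0.1, 0.0, 0.2]
windows = make_windows(pv, encoder_len=3, horizon=2)

static_features = {"latitude": 48.85, "tilt_deg": 30}      # fixed per site
time_varying = ["cloud_cover", "irradiance", "pv_output"]  # change per step
```

In practice, dataset utilities in libraries like PyTorch Forecasting handle this bookkeeping; the sketch only shows the invariant that a training window is usable only if it is gap-free.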
Speakers
Rosheen Naeem

Software Engineer, Miro
I am a Software Engineer at Miro and a community member at Open Climate Fix. I completed the Erasmus Mundus Master’s in Software Engineering for the Green Deal (SE4GD), a joint degree program across Vrije Universiteit Amsterdam (Netherlands), LUT University (Finland), and Universit... Read More →
Tuesday April 7, 2026 11:00 - 11:10 CEST
Central Room
  Applications & Case Studies

11:00 CEST

Lights, Camera, Inference! Video Generation as a Service With VLLM-Omni - Ricardo Noriega, Red Hat & Doug Smith, Red Hat, Inc
Tuesday April 7, 2026 11:00 - 11:25 CEST
LLMs made text generation as a service commonplace. What does it take to do the same for video?
We built an experimental Video Generation as a Service stack using vLLM-Omni and the LTX-2 open-weights video model to explore how far an open, multimodal stack can go toward production use. We’ll share what worked, what broke, and what it takes to treat generative video as a first-class workload.
vLLM is known for high-performance autoregressive inference, and vLLM-Omni extends that foundation to multimodal inputs and outputs. We pushed those capabilities further by adding support for LTX-2, extending the OpenAI-compatible API surface, integrating with front ends, and packaging for scalable deployment. We’ll walk you through the touch points and show just how we put all the Legos together with vLLM-Omni.
Finally, we’ll examine the gap between novelty demos and real applications: going from quirky spaghetti eating videos to generating consistent characters, personalized media, customized video game cutscenes, and interactive storytelling, and highlight what’s still missing to make generative video truly production-ready.
Speakers
Doug Smith

Principal Software Engineer, Red Hat
Doug Smith is a Principal MLOps Engineer at Red Hat, where he works on the AI Inference Server team and contributes upstream to the vLLM project through its CI Special Interest Group. Recently, he's also been looking into contributions to vLLM-Omni. He’s spent years bridging telecom... Read More →
Ricardo Noriega

Principal SW Engineer, Red Hat
Ricardo is a Principal Software Engineer in Red Hat's Office of the CTO, in the Emerging Technologies organization. He is currently focused on AI multimodality and on researching the benefits of Small Language Models.
He is a former member of the Akraino TSC and PTL of the Kubernetes-Native-Infrastructure blueprint family, and contributor to Kubernetes, OpenStack, OpenDaylight and OPNFV... Read More →
Tuesday April 7, 2026 11:00 - 11:25 CEST
Founders Cafe
  GenAI & Multimodal
  • Audience Level Any
  • Slides Attached Yes

11:15 CEST

Lightning Talk: Deep Learning in the Wild: Embedded PyTorch for Real-World Conservation Bioacoustics - Taraqur Rahman & Owen O'Donnell, OWL Integrations
Tuesday April 7, 2026 11:15 - 11:25 CEST
Passive acoustic monitoring is a powerful tool for wildlife conservation, but deploying deep learning models in remote rainforest environments introduces strict constraints on power, memory, and compute. In this talk, we present an end-to-end PyTorch-based pipeline for detecting and analyzing the endangered three-wattled bellbird using embedded deep learning systems.

We cover the full lifecycle from audio preprocessing and model training in PyTorch to optimization and deployment on resource-constrained embedded devices. Topics include model architectures for sparse bioacoustic event detection, handling extreme class imbalance, model compression and quantization, and practical trade-offs between accuracy, latency, and power consumption.

The session emphasizes real-world lessons learned deploying machine learning at the edge, where unreliable connectivity, noisy signals, and limited hardware define success more than benchmark metrics. Attendees will gain practical patterns for building and deploying PyTorch models for embedded and edge AI applications with real environmental impact.
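As a taste of the compression arithmetic involved, here is a minimal sketch of symmetric INT8 quantization (illustrative only; a real pipeline would use PyTorch's quantization tooling rather than hand-rolled code like this):

```python
# Illustrative sketch: symmetric INT8 quantization, the kind of
# compression used when fitting models into MCU-class memory budgets.

def quantize_int8(values):
    """Map floats to int8 codes with a single shared scale."""
    scale = max(abs(v) for v in values) / 127.0
    q = [max(-127, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    return [qi * scale for qi in q]

weights = [-1.0, -0.25, 0.0, 0.6, 1.0]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

# INT8 storage is 4x smaller than float32, with bounded rounding error.
max_err = max(abs(a - b) for a, b in zip(weights, restored))
assert max_err <= scale / 2 + 1e-12
```

The accuracy/latency/power trade-offs discussed in the talk start from exactly this kind of bounded-error approximation, applied per tensor or per channel.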
Speakers
Owen O'Donnell

Embedded Systems and Machine Learning Engineer, OWL Integrations
Owen O'Donnell is a Machine Learning and Embedded Systems Engineer at OWL Integrations. He trains ML models for deployment in remote locations, running on resource-constrained electronics. This introduces challenges such as needing smaller models and having... Read More →
Taraqur Rahman

Chief Data Scientist, OWL Integrations
Taraqur Rahman is Chief Data Scientist and Co-Founder at OWL Integrations and Organizer/Co-Founder of Biased Outliers, where he leads applied machine learning and data science initiatives with real-world impact. He combines deep technical expertise in Python with practical deployment... Read More →
Tuesday April 7, 2026 11:15 - 11:25 CEST
Central Room
  Applications & Case Studies
  • Audience Level Any
  • Slides Attached Yes

11:15 CEST

Lightning Talk: Flexible Deployment of PyTorch Models on MCU-Class Devices Using ExecuTorch - Robert Kalmar & Martin Pavella, NXP
Tuesday April 7, 2026 11:15 - 11:25 CEST
ExecuTorch has recently matured into a production-ready framework designed specifically for efficient edge deployment of PyTorch models. Its architecture supports a broad spectrum of hardware targets—from low-power, bare-metal or RTOS-based microcontrollers (MCUs) to higher-performance Linux or Android based microprocessor platforms—while meeting the demanding constraints on memory, compute, and power typically found in real-world embedded applications.
This talk focuses on the deployment flexibility ExecuTorch offers for MCU-class devices, highlighting how different backends enable efficient execution across heterogeneous compute units. We will explore CPU, DSP, and NPU acceleration paths using the Cortex-M, Cadence, Ethos-U, and eIQ Neutron backends, and discuss how these integrate into typical ML model deployment workflows.
To make the session practical and application-oriented, we will present an optimization journey aimed at reducing power consumption—an essential requirement for ML workloads in energy-constrained environments. Attendees will gain insights into backend selection, performance trade-offs, and best practices for deploying PyTorch models on edge devices.
Speakers
Robert Kalmar

Principal AI/ML Engineer at NXP Semiconductors, NXP Semiconductors
Robert Kalmar is a Principal Machine Learning Engineer at NXP Semiconductors. He received his master’s degree in machine learning and intelligent systems from Brno University of Technology. At NXP he focuses on machine learning solution enablement for embedded and mobile devices... Read More →
Martin Pavella

ML SW Engineer, NXP Semiconductors
I hold a Master’s degree in Machine Learning from the Brno University of Technology, graduating with distinction at both bachelor’s and master’s levels. I am a mid-level AI/ML Software Engineer at NXP Semiconductors with 2.5+ years of experience. I won the 2025 iGEM overgraduate... Read More →
Tuesday April 7, 2026 11:15 - 11:25 CEST
Junior Stage
  Inference & Production

11:30 CEST

Lightning Talk: Coding Agents for Compiler Construction: Beyond the AI Assistant Paradigm - Reza Rahimi, yasp.ai & Stefan Krassin, yasp
Tuesday April 7, 2026 11:30 - 11:40 CEST
Modern ML compilers follow a familiar pattern: a frontend lowers models into an intermediate representation, while a backend applies graph and kernel optimizations before generating code for target accelerators. PyTorch provides strong foundations through nn.Module, FX, and graph capture, but implementing optimized backends remains challenging due to hardware diversity and kernel-level complexity.

Optimizing GPU kernels is hard. Few engineers do it well. Hardware architectures evolve yearly, and with hyperscalers, chip makers, and AI labs building custom silicon, demand for efficient kernel generation keeps growing. This creates a gap between model developers and hardware capabilities.

This talk explores coding agents as engineering tools for compiler construction, not general-purpose assistants. We discuss how agents can generate and refine backend components by analyzing model mathematics and hardware specifications to produce optimized kernels tailored to specific targets.

We present a compiler architecture built as a PyTorch add-on that accepts PyTorch models or FX graphs and produces executable artifacts, demonstrating practical integration with existing PyTorch workflows.
Speakers
Reza Rahimi

CTO, yasp
Reza Rahimi is a seasoned technologist with a strong background in accelerating engineering software and scaling machine learning systems. With experience leading teams across embedded AI, compiler design, and model optimization, he now serves as CTO of yasp, where he is pioneering... Read More →
Stefan Krassin

CEO, yasp.ai
With a background in electrical engineering and a career spanning embedded systems to executive leadership, he combines technical expertise with a vision for scale. After 10+ years of leading companies to outstanding growth, he co-founded yasp in 2023. His mission is to eliminate... Read More →
Tuesday April 7, 2026 11:30 - 11:40 CEST
Founders Cafe
  Agents & Interop

11:30 CEST

Lightning Talk: How DeepInverse Is Solving Imaging in Science and Healthcare With PyTorch - Andrew Wang, DeepInverse; Minh Hai Nguyen, Université de Toulouse
Tuesday April 7, 2026 11:30 - 11:40 CEST
Deep learning has revolutionised imaging, a foundation of science and healthcare. DeepInverse is the PyTorch library for solving imaging problems, unifying deep learning methods (e.g. diffusion models), physics (medical, optics) and modern tooling. In this talk, we’ll show how the PyTorch community can get involved in this exciting yet accessible application of open-source AI.

AI methods in imaging must model the imaging physics, leading to interesting engineering problems e.g. efficient differentiable ops, physics-informed losses. We’ll show notebooks on real use-cases: accelerating brain MRI, reducing radiation in CT scans, imaging black holes.

PyTorch enthusiasts at any level/background can contribute - from training infra for scientific data to high-level generative modelling frameworks - their AI engineering skills can directly impact imaging across multiple fields.

DeepInverse is supported by a growing international user community and proudly rooted in Paris. We’ve joined the PyTorch Ecosystem and received the Prix Science Ouverte in 2024. We’re excited to join the PyTorch Conf to celebrate the vibrant French developer community!
Speakers
Andrew Wang

CTO & Co-founder, Blur Labs
Andrew is a lead developer of DeepInverse as well as the CTO & co-founder of Blur Labs, a startup based in Paris building AI models for imaging. Andrew did his PhD at the University of Edinburgh in magnetic resonance image reconstruction.
Minh Hai Nguyen

PhD candidate, Toulouse University
Tuesday April 7, 2026 11:30 - 11:40 CEST
Central Room
  Applications & Case Studies
  • Audience Level Any
  • Slides Attached Yes

11:30 CEST

Why Classic IAM Collapses for Agents: Rethinking IAM for Agentic Systems - Parul Singh, Red Hat
Tuesday April 7, 2026 11:30 - 11:55 CEST
Autonomous AI agents increasingly reason, plan and act across tools, services and organizational boundaries. In these environments, traditional Identity and Access Management models begin to fail. Agents are not users and they are not static services. They act on behalf of others, change context during execution and operate with different levels of autonomy and risk.

This talk examines why classic IAM assumptions like long-lived identities, static permissions, and check-once, trust-always authorization do not hold for agentic systems. We focus on three core questions.
1. How should agent identity be defined when tools, workflows, and execution context are dynamic?
2. How do we preserve accountability when agents act on behalf of users or other agents?
3. How should access change as agent behavior and risk evolve during execution?

We will outline a practical migration path from traditional IAM to agent-aware identity workflows. Our insights come from our collaboration with the COSAI Secure Design for Agentic Systems working group, which is focused on creating shared guidance and emerging standards. We will close with current open questions and how you can join us to help solve them.
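A toy sketch of what agent-aware credentials might look like (the field names and helpers here are our own hypothetical illustration, not COSAI guidance): short-lived, scoped tokens that record who the agent acts for, so access expires rather than standing forever:

```python
# Hypothetical sketch: short-lived, scoped, delegation-carrying credentials
# for agents, instead of long-lived identities with static permissions.

import time

def issue_token(subject, on_behalf_of, scopes, ttl_s, now=None):
    now = time.time() if now is None else now
    return {"sub": subject, "act_for": on_behalf_of,
            "scopes": set(scopes), "exp": now + ttl_s}

def authorize(token, scope, now=None):
    now = time.time() if now is None else now
    if now >= token["exp"]:
        return False, "expired: re-attestation required"
    if scope not in token["scopes"]:
        return False, "scope not granted"
    return True, f"{token['sub']} acting for {token['act_for']}"

tok = issue_token("billing-agent", "alice", {"invoices:read"}, ttl_s=300, now=0)
ok_read = authorize(tok, "invoices:read", now=10)[0]       # within TTL, in scope
ok_delete = authorize(tok, "invoices:delete", now=10)[0]   # least privilege: denied
ok_late = authorize(tok, "invoices:read", now=600)[0]      # no standing access
```

The three checks mirror the talk's three questions: identity is scoped to an execution context, accountability travels with the `act_for` chain, and access lapses as the token expires.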
Speakers
Parul Singh

Principal Software Engineer, Red Hat
Parul is a Principal Software Engineer in Red Hat's Office of the CTO, working on agentic systems and security. Her work focuses on trust, identity, and observability for autonomous AI agents, including delegation, provenance, and zero trust architectures for agentic workflows. She... Read More →
  • Slides Attached Yes
Tuesday April 7, 2026 11:30 - 11:55 CEST
Junior Stage
  Security & Privacy

11:45 CEST

Lightning Talk: ExecuTorch on Microcontrollers: Deploying PyTorch To the Smallest Edge - RJ Ascani & Matthias Cremon, Meta
Tuesday April 7, 2026 11:45 - 11:55 CEST
ExecuTorch extends PyTorch's reach to the most resource-constrained devices: microcontrollers, DSPs, and specialized neural processing units powering always-on sensors, wearables, and embedded systems. In this talk, we'll share the current state and roadmap for running ExecuTorch on platforms where every kilobyte of memory and milliwatt of power matters.

What you'll learn:
- How ExecuTorch's design enables deployment from ultra-low-power MCUs to DSP and NPU accelerators, all from a single PyTorch workflow
- The state of backend support for Cadence DSPs, ARM Ethos-U and Cortex-M
- Practical considerations for deploying models with sub-megabyte footprints and milliwatt power budgets
- Case studies spanning always-on audio, embedded vision, and TinyML applications
Speakers
Matthias Cremon

Software Engineering Manager, Meta
Matthias Cremon is a Software Engineering Manager at Meta in the Silicon AI Software Team, working on AI compilers for various edge devices. He focuses on the frontend, graph level optimization side, as well as the integration of low-level, vendor specific implementations to run on... Read More →
RJ Ascani

Software Engineer, Meta
RJ Ascani is an embedded software engineer on Meta’s PyTorch Edge team, focusing on advancing ExecuTorch for microcontroller platforms.
Tuesday April 7, 2026 11:45 - 11:55 CEST
Central Room
  Inference & Production
  • Audience Level Any
  • Slides Attached Yes

11:45 CEST

Lightning Talk: TorchJD: Jacobian Descent in PyTorch - Pierre Quinton, EPFL & Valérian Rey, Simplex Lab
Tuesday April 7, 2026 11:45 - 11:55 CEST
Jacobian descent (JD) is an extension of gradient descent supporting the optimization of vector-valued functions. This algorithm can be used to train neural networks with multiple loss functions (e.g. multi-task learning). JD iteratively updates the parameters of the model using the Jacobian matrix of the vector of losses (the matrix stacking each individual loss' gradient).

To support and extend our research, we have developed the TorchJD library. With it, it's easy and efficient to compute the Jacobians with respect to the model parameters, and to aggregate them into an update direction that is beneficial to every objective. In contrast, if we had averaged the losses and used gradient descent, the update would have been beneficial to the average loss, but may have actually increased one of the individual losses.

In this session, we will give a quick introduction to the theory behind Jacobian descent, and then show how to use TorchJD on a variety of use-cases, beyond multi-task learning.
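The core intuition can be shown in a few lines of plain Python (this uses a PCGrad-style conflict projection purely for illustration; it is not TorchJD's actual aggregator):

```python
# Illustrative sketch: when per-loss gradients conflict, stepping along the
# averaged gradient can increase one loss, while an aggregation of the
# Jacobian's rows can find a direction that helps every objective.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def project_out_conflict(g, h):
    """Remove from g the component that conflicts with h (if any)."""
    if dot(g, h) >= 0:
        return list(g)
    coef = dot(g, h) / dot(h, h)
    return [gi - coef * hi for gi, hi in zip(g, h)]

# Per-loss gradients at the current parameters (rows of the Jacobian).
g1, g2 = [1.0, 0.0], [-3.0, 1.0]        # dot(g1, g2) = -3: they conflict

avg = [(a + b) / 2 for a, b in zip(g1, g2)]
agg = [a + b for a, b in zip(project_out_conflict(g1, g2),
                             project_out_conflict(g2, g1))]

# A small step along -avg increases loss 1 (negative alignment with g1),
# while -agg decreases both losses (positive alignment with g1 and g2).
assert dot(avg, g1) < 0
assert dot(agg, g1) > 0 and dot(agg, g2) > 0
```

TorchJD generalizes this picture: it computes the full Jacobian efficiently and offers aggregators with stronger guarantees than this two-gradient toy.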

Library: https://github.com/TorchJD/torchjd
Paper: https://arxiv.org/abs/2406.16232
Speakers
Pierre Quinton

Teacher, EPFL
PhD in Information Theory and Master in Data Science, specializing in fundamental math and multi-objective optimization (MOO). I am the co-author of TorchJD, a PyTorch library for Jacobian Descent developed with Valerian, currently at ~300 GitHub stars. My work aims to translate complex... Read More →
Valérian Rey

Research Engineer, Simplex Lab
I graduated from EPFL with an MSc in Data Science in 2021. Since then, I have worked as a Data Scientist at Withings, and I worked on Jacobian descent, initially as a side-project but now as a full-time occupation. I now spend most of my time developing and maintaining TorchJD, and I love... Read More →
Tuesday April 7, 2026 11:45 - 11:55 CEST
Founders Cafe
  Training Systems

12:00 CEST

Lightning Talk: Bringing Google’s Colossus to PyTorch: Rapid Storage via fsspec to Keep GPUs Busy - Ankita Luthra & Trinadh Kotturu, Google
Tuesday April 7, 2026 12:00 - 12:10 CEST
As PyTorch models scale to billions of parameters, the bottleneck has quietly shifted from compute to storage. Modern GPU clusters often sit idle, "starving" for data while waiting on legacy REST-based protocols. This talk introduces Rapid Storage: a fundamental architectural shift bringing Google’s Colossus stateful protocol (which powers many Google products) to PyTorch via fsspec, a common Pythonic file interface used by many frameworks within the PyTorch ecosystem.
By bypassing REST APIs entirely via persistent gRPC streams to the storage layer, we eliminate protocol overhead. In this talk, we also dive into how Rapid achieves <1ms random read/write latency, 20x faster data access, and a massive 6 TB/s of aggregate throughput. Crucially, it delivers up to 10x lower tail latency for random I/O, preventing the stragglers that often stall distributed training jobs.
Beyond raw speed, we will deconstruct the integration with gcsfs and the broader fsspec ecosystem. This ensures that high-performance I/O is available across the entire data stack, including Dask, Ray, HF Datasets, and vLLM. Join us to learn how to stop wasting GPU cycles and achieve linear scaling in the cloud.
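Why tail latency matters more than median latency can be seen with a toy simulation (the numbers are our own illustration, not the talk's measurements): a training step that issues many parallel reads is gated by the slowest one:

```python
# Illustrative sketch: with N parallel reads per step, step time is the
# MAX of N latency samples, so rare stragglers dominate end-to-end time.

import random

def mean_step_time(latencies_ms, reads_per_step, trials=2000, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        total += max(rng.choice(latencies_ms) for _ in range(reads_per_step))
    return total / trials

# 99% fast reads, 1% stragglers (made-up distributions).
legacy = [5.0] * 99 + [500.0]    # REST-style: heavy tail
rapid  = [0.8] * 99 + [5.0]      # persistent streams: much lower tail

slow = mean_step_time(legacy, reads_per_step=64)
fast = mean_step_time(rapid, reads_per_step=64)
```

With 64 reads per step, roughly half the steps hit at least one straggler, so the legacy step time is dominated by the 1% tail even though its median read is only 5 ms.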
Speakers
Ankita Luthra

Senior Software Engineer, Google
Ankita Luthra is a Software Developer at Google, focused on AI/ML infrastructure and scalable data pipelines. Her work with open-source tools like fsspec (gcsfs) and gcsfuse improves how frameworks such as PyTorch/JAX efficiently access data from Google Cloud Storage.
Trinadh Kotturu

Senior Product Manager, Google
Trinadh Kotturu is a Senior Product Manager specializing in AI/ML and analytics client strategy at Google. An alumnus of IIM Bangalore with 12 years of experience, he has a proven track record of shipping v1 products and scaling them into robust platform services. His expertise spans large-scale distributed storage systems, autonomous driving, and system resiliency... Read More →
Tuesday April 7, 2026 12:00 - 12:10 CEST
Master Stage
  Training Systems
  • Audience Level Any
  • Slides Attached Yes

13:45 CEST

Lightning Talk: From Pretrained To Personal: Privacy-First Fine-Tuning on AI PCs - Daniel Holanda Noronha & Iswarya Alex, AMD
Tuesday April 7, 2026 13:45 - 13:55 CEST
PyTorch on AI PCs has crossed a threshold: local hardware can now support meaningful model fine-tuning, not just inference. This unlocks a new class of enterprise workflows where sensitive data never leaves the device, yet models can still be personalized and adapted using PyTorch.

In this session, we’ll show how to design on-device fine-tuning pipelines for AI PCs, focusing on enterprise scenarios where privacy is non-negotiable: regulated healthcare data, government and public-sector workloads, financial services, and proprietary enterprise systems. We’ll walk through key decisions such as selecting efficient pre-trained models, and how the right PyTorch optimizations enable effective personalization on large private datasets.

We'll also showcase practical fine-tuning techniques such as supervised fine-tuning (SFT), LoRA, and QLoRA, and show how mixed-precision training and correct use of training vs. evaluation modes make these approaches efficient and practical on AI PCs while preserving privacy. The result is a cloud-free, privacy-first fine-tuning blueprint that turns AI PCs into secure personalization engines for enterprise AI.
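The parameter-efficiency argument behind LoRA can be sketched in plain Python (a toy illustration with made-up sizes, not the session's code): instead of updating a full weight matrix, train a low-rank delta.

```python
# Illustrative sketch: LoRA learns a low-rank update B @ A (rank r << d),
# so the trainable-parameter count drops from d*d to 2*d*r while the
# pretrained weight W stays frozen on-device.

d, r = 8, 2
W = [[0.0] * d for _ in range(d)]   # frozen pretrained weight (d x d)
A = [[0.1] * d for _ in range(r)]   # r x d, trainable
B = [[0.1] * r for _ in range(d)]   # d x r, trainable

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

delta = matmul(B, A)                # d x d low-rank update
W_eff = [[W[i][j] + delta[i][j] for j in range(d)] for i in range(d)]

full_params = d * d                 # 64 if fine-tuning W directly
lora_params = d * r + r * d         # 32 trainable parameters with r = 2
```

At realistic model sizes the ratio is far more dramatic (r is typically 8-64 against d in the thousands), which is what makes on-device fine-tuning of private data tractable.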
Speakers
Daniel Holanda

Solutions Architect & ML Engineer, AMD
Daniel is a Sr. ML Engineer at AMD, specializing in local AI. He leads the development of local fine-tuning workflows for AI PCs and co-leads several open-source projects where he designs production-grade LLM/VLM tooling to accelerate the AI development lifecycle.

Previously, he was a Machine Learning Engineer at Groq and a contributor to Microsoft’s Project Brainwave. Daniel holds a PhD in AI understanding and hardware architecture from UBC... Read More →
Iswarya Alex

Iswarya Alex, AMD
I am an ML Engineer at AMD focused on enabling high-performance on-device AI experiences. I work on optimizing and deploying models efficiently on AMD's Ryzen AI powered devices with GPUs and NPUs.
Tuesday April 7, 2026 13:45 - 13:55 CEST
Founders Cafe
  Security & Privacy

13:45 CEST

Bringing ExecuTorch To the Next Frontiers of Edge AI - Mergen Nachin, Meta
Tuesday April 7, 2026 13:45 - 14:10 CEST
Since the General Availability release of ExecuTorch 1.0 in October 2025, our team has continued to advance the state of the on-device AI software stack. In this talk, we will share our upcoming roadmap and present demos that highlight ExecuTorch’s deployment across the next frontiers, such as AI PCs, robotics, TinyML devices, and the integration of AI agents to improve productivity for on-device deployment.

ExecuTorch is built on open source collaboration, encouraging community adoption, contributions from hardware partners, and interoperability with other ecosystem libraries. We will discuss how these foundations set the stage for the next phase of edge AI with ExecuTorch.
Speakers
Mergen Nachin

Software Engineer, Meta
Mergen Nachin is a Software Engineer specializing in creating rich AI experiences on low latency, high performance, and privacy-aware embedded systems. With a background in distributed systems, developer infrastructure, remote sensing, and localization, he brings a versatile skill... Read More →
Tuesday April 7, 2026 13:45 - 14:10 CEST
Master Stage
  Applications & Case Studies

14:15 CEST

Lightning Talk: Accelerating On-Device ML Inference With ExecuTorch and Arm SME2 - Jason Zhu, Arm
Tuesday April 7, 2026 14:15 - 14:25 CEST
As on-device AI workloads grow in complexity, achieving low-latency inference within mobile power constraints remains a central challenge. We examine how ExecuTorch, combined with Arm’s Scalable Matrix Extension 2 (SME2), enables efficient CPU deployments of production AI workloads.

We present a case study of SqueezeSAM, a segmentation model deployed in real-world mobile applications. Using ExecuTorch with XNNPACK delegation and SME2-optimized kernels, we evaluate INT8 and FP16 inference on a flagship smartphone. Moving beyond aggregate latency, we apply operator-level profiling to decompose runtime across convolution, GEMM, elementwise, and data movement operators, showing how hardware acceleration reshapes bottlenecks in the execution stack.

SME2 delivers up to 3.9x end-to-end speedup on a single CPU core, materially altering runtime composition and revealing data movement as the primary post-acceleration bottleneck. This session presents a practical workflow for deploying, profiling, and systematically optimizing on-device PyTorch models, demonstrating how SME2 expands the viable design space for interactive mobile AI.
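The kind of operator-level reasoning described above can be sketched with toy numbers (ours, not the talk's measurements): accelerating only the matrix-heavy operators changes which operator dominates.

```python
# Illustrative sketch: Amdahl-style decomposition of an inference profile.
# Matrix acceleration speeds up conv/GEMM, so the runtime mix shifts and
# data movement becomes the dominant post-acceleration cost.

profile_ms = {"conv": 40.0, "gemm": 30.0, "elementwise": 10.0, "data_move": 20.0}
kernel_speedup = {"conv": 8.0, "gemm": 8.0}   # hypothetical matrix-unit gain

after = {op: t / kernel_speedup.get(op, 1.0) for op, t in profile_ms.items()}
end_to_end = sum(profile_ms.values()) / sum(after.values())
bottleneck = max(after, key=after.get)
```

Even with an 8x gain on the matrix operators, the end-to-end speedup here is only about 2.6x, and `data_move` is now the largest slice, which is exactly why post-acceleration profiling has to be redone rather than assumed.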
Speakers
Jason Zhihuai Zhu

Senior Principal Engineer, Arm
Jason Zhu is a Senior Principal Engineer at Arm focused on hardware and software co-optimization for AI systems. With a background in quantum physics and experience spanning AI research and product engineering across major technology companies, he works across the full execution stack... Read More →
Tuesday April 7, 2026 14:15 - 14:25 CEST
Master Stage
  Inference & Production
  • Audience Level Any
  • Slides Attached Yes

14:15 CEST

Sponsored Session: TorchTPU: Expanding TPU Programmability to Pytorch - Kat Ko & Claudio Basile, Google; Jana van Greunen, Meta
Tuesday April 7, 2026 14:15 - 14:40 CEST
Google Tensor Processing Units (TPUs) are designed for ML at massive scale, offering significant benefits in performance, energy, and cost. While TPUs have historically been associated with the TensorFlow and JAX ecosystems, we introduce TorchTPU: a new Google effort to expand TPU programmability to PyTorch.

This talk charts TorchTPU’s evolution, from the initial RFC to establishing a native, eager-first PyTorch backend. We will outline the core technical challenges overcome during this transition—particularly the complexities of translating dynamic, eager execution into highly optimized TPU computations.

We’ll highlight current milestones, including native integration with torch.compile, DTensor, and robust support for the latest Ironwood (TPU v7) architecture. These advancements collectively enable multi-billion parameter models to run on TPUs with minimal code changes, while retaining the ability for users to apply model-specific optimizations (e.g., custom kernels, quantization, sharding) to reach peak performance. Finally, we’ll provide a sneak peek at our roadmap for 2026.
Speakers
Jana van Greunen

Director of PyTorch Engineering, Meta
Jana van Greunen is the Director of PyTorch Engineering at Meta, where she leads efforts to ensure PyTorch remains the leading AI/ML framework for researchers and developers worldwide. With deep expertise in distributed systems, large-scale infrastructure, and over 15 years of experience... Read More →
Kat Ko

Senior Eng Manager, Google
Kat Ko is a Senior Engineering Manager at Google and a lead on TorchTPU, where she drives the integration of PyTorch with TPU technology to enable high-performance computing at scale. An EECS graduate of UC Berkeley, she brings over 15 years of experience building large-scale systems... Read More →
Claudio Basile

Software Engineer, Google
Claudio Basile is a Google Software Engineer and the co-founder and technical lead of TorchTPU. During his tenure at Google, he also authored LiteRT, the company’s new on-device ML framework. With a Ph.D. in ECE from UIUC and over 15 years of experience spanning machine learning... Read More →
Tuesday April 7, 2026 14:15 - 14:40 CEST
Founders Cafe
  Frameworks & Compilers
  • Slides Attached Yes

14:30 CEST

Lightning Talk: Combo Kernels: Horizontal Fusion Optimization in Torch.compile - Karthick Panner Selvam, & Elias Ellison, Meta
Tuesday April 7, 2026 14:30 - 14:40 CEST
Combo kernels are a compiler optimization in PyTorch Inductor that horizontally fuses multiple independent operations into a single Triton kernel launch, reducing GPU kernel launch overhead and improving memory locality.

The Problem: Models generate many small, independent operations like weight preprocessing and tensor copies. Each launch incurs overhead. For models with many such operations, this becomes a bottleneck.

The Solution: Combo kernels combine multiple operations into one kernel using a dispatch mechanism. A single program ID routes execution to the appropriate subkernel based on cumulative block boundaries. This eliminates redundant launches while preserving correctness.
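The dispatch mechanism can be modeled in a few lines of plain Python (a conceptual sketch, not the Inductor implementation): one flat grid of program IDs is routed to subkernels via cumulative block boundaries.

```python
# Illustrative sketch: routing a flat program ID to (subkernel, local block)
# using cumulative block boundaries, so several ops share ONE kernel launch.

from bisect import bisect_right
from itertools import accumulate

blocks_per_subkernel = [4, 2, 3]                      # three fused ops
boundaries = list(accumulate(blocks_per_subkernel))   # [4, 6, 9]

def route(pid):
    """Map a flat program ID to (subkernel index, local block id)."""
    k = bisect_right(boundaries, pid)
    local = pid - (boundaries[k - 1] if k else 0)
    return k, local

# One launch covers all 9 blocks across the three subkernels.
launches = [route(pid) for pid in range(boundaries[-1])]
```

Here `route(4)` lands on the first block of the second subkernel and `route(8)` on the last block of the third, replacing three separate launches with one.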

Key Innovations:

Per-subkernel block dimensions: Each subkernel gets its own optimized block size instead of sharing one size across all, enabling better autotuning.

Flattened grid dispatch: We collapse the multi-dimensional block grid into a single dimension.

Results: On H100 GPUs, combo kernels deliver geomean speedups of +7.38% on HuggingFace and +5.97% on TorchBench. The optimization is enabled by default in the vLLM repository for LLM inference acceleration.
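The dispatch mechanism described above can be sketched in plain Python (a hypothetical simplification of the Triton-level logic, not the Inductor implementation): a single flattened program ID is routed to the right subkernel by comparing against cumulative block boundaries.

```python
import bisect

# Hypothetical sketch of combo-kernel dispatch: each fused "subkernel"
# needs some number of blocks; one flattened program ID is routed to the
# owning subkernel via cumulative block boundaries.
subkernel_blocks = [4, 2, 3]          # blocks needed per fused op
boundaries = []
total = 0
for n in subkernel_blocks:
    total += n
    boundaries.append(total)          # cumulative boundaries: [4, 6, 9]

def dispatch(pid):
    idx = bisect.bisect_right(boundaries, pid)          # which subkernel
    local = pid - (boundaries[idx - 1] if idx else 0)   # local block id
    return idx, local

assert dispatch(0) == (0, 0)
assert dispatch(5) == (1, 1)
assert dispatch(8) == (2, 2)
```

A single launch of nine blocks thus covers all three operations, with each block recovering its subkernel index and local offset.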
Speakers
avatar for Elias Ellison

Elias Ellison

Software Engineer, Meta
Elias has been working on the PyTorch team for four years, most recently on the torch.compile stack
avatar for Karthick Panner Selvam

Karthick Panner Selvam

Software Engineer, Meta
Karthick Panner Selvam is a SWE at Meta Superintelligence Lab, working on the PyTorch compiler team to enhance performance and scalability for large models. He earned his PhD in Machine Learning for Systems at the University of Luxembourg, collaborating with Google DeepMind, ECMWF, and Frontier... Read More →
Tuesday April 7, 2026 14:30 - 14:40 CEST
Master Stage
  Frameworks & Compilers
  • Audience Level Any
  • Slides Attached Yes

14:45 CEST

Lightning Talk: Implementing Single-Dim Strategies With Sharding Validator - Anshul Sinha, Meta
Tuesday April 7, 2026 14:45 - 14:55 CEST
DTensor sharding propagation is a major bottleneck to full operator coverage: adding or fixing an op strategy is complex, bug‑prone, and gaps often surface as unexpected resharding and extra collectives. A key source of complexity is that today’s rules conflate (1) semantic correctness—valid input/output sharding combinations for an operator—with (2) search‑space pruning to avoid combinatorial blowups on N‑dimensional meshes.

This talk presents a landed prototype that separates these concerns via Single Mesh Dim Strategies: each operator specifies valid placement combinations for one mesh dimension, while infra expands/composes them across the full mesh and selects low‑cost strategies. For contributors, this provides a clear path to refactor existing op_strategies into single‑dim rules that are easier to review and extend. We also introduce a Truth Table‑style sharding validator that systematically tests shapes and sharding specs to check soundness/completeness and to flag unnecessary redistribution/collectives caused by missing cases.

The goal of this presentation is faster, higher‑confidence contributions that improve correctness and expand DTensor operator coverage.
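The expansion idea can be illustrated with a toy sketch (placement names and the rule format are hypothetical, not the DTensor API): an operator declares valid placements for a single mesh dimension, and infrastructure composes them across an N-dimensional mesh with a cartesian product.

```python
from itertools import product

# Hypothetical single-dim rules: (input placement, output placement)
# valid for ONE mesh dimension.
single_dim_rules = [
    ("Shard(0)", "Shard(0)"),
    ("Replicate", "Replicate"),
]

def expand(rules, mesh_ndim):
    # Infra composes one per-dimension choice for each mesh dimension,
    # so authors never hand-enumerate N-D combinations.
    return list(product(rules, repeat=mesh_ndim))

full = expand(single_dim_rules, mesh_ndim=2)
assert len(full) == len(single_dim_rules) ** 2   # 4 strategies on a 2-D mesh
```

The separation keeps each rule small and reviewable while the (prunable) combinatorial blowup lives in shared infrastructure.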
Speakers
avatar for Anshul Sinha

Anshul Sinha

Software Engineer, Meta
I graduated from the University of Michigan with a B.S in Computer Science in December 2024. I joined Meta's PyTorch Distributed as a SWE in June 2025.
Tuesday April 7, 2026 14:45 - 14:55 CEST
Founders Cafe
  Frameworks & Compilers

14:45 CEST

Brevitas Quantization Library - Pablo Monteagudo Lago, AMD
Tuesday April 7, 2026 14:45 - 15:10 CEST
Brevitas is an open‑source PyTorch library from AMD designed to support the research of state‑of‑the‑art quantization methods, including Qronos (ICLR 2026) and MixQuant (arXiv). Built for flexibility and composability, it offers modular components for exploring reduced‑precision data paths and accuracy‑preserving techniques.
As generative models scale, post‑training quantization (PTQ) has become the preferred strategy for maintaining quality without retraining, yet PTQ methods are often applied in isolation due to fragmented tooling. Brevitas provides a unified environment for modern PTQ algorithms—including Qronos, SpinQuant and AutoRound—enabling practitioners to combine complementary techniques effectively.
Brevitas leverages the latest PyTorch features, like Dynamo for tracing and selectively modifying compute graphs—for example, by inserting rotation ops to mitigate outliers. It integrates with frameworks like transformers and supports export flows including vLLM and GGUF, ensuring a smooth transition from experimentation to deployment.
This talk shows how to use Brevitas for an end‑to‑end quantization flow, showcasing how its flexibility enables new research directions.
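As background for the PTQ techniques discussed, the core weight-quantization step can be sketched in plain PyTorch (a concept illustration only, not the Brevitas API): symmetric per-tensor quantization of a weight to int8 and back.

```python
import torch

# Concept sketch (not Brevitas code): symmetric per-tensor post-training
# quantization of a weight tensor, then dequantization.
def fake_quantize(w, bits=8):
    qmax = 2 ** (bits - 1) - 1
    scale = w.abs().max() / qmax                     # one scale per tensor
    q = torch.clamp(torch.round(w / scale), -qmax - 1, qmax)
    return q * scale, scale

w = torch.randn(64, 64)
w_q, scale = fake_quantize(w)
# Dequantized weights stay within half a quantization step of the original.
assert (w - w_q).abs().max() <= scale / 2 + 1e-6
```

Libraries like Brevitas layer accuracy-preserving methods (rotations, rounding optimization, mixed precision) on top of this basic round-and-scale primitive.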
Speakers
avatar for Pablo Monteagudo Lago

Pablo Monteagudo Lago

Research Scientist, AMD
Pablo Monteagudo is a research scientist in AMD Research and Advanced Development, based in Dublin. He specialises in co-design of neural networks and accelerators, in particular, working on topics involving neural network quantization, sparsity and accelerator design.
Tuesday April 7, 2026 14:45 - 15:10 CEST
Junior Stage
  Frameworks & Compilers

14:45 CEST

The Science and Practice of Open and Scalable LLM Evaluations - Grzegorz Chlebus, NVIDIA
Tuesday April 7, 2026 14:45 - 15:10 CEST
Rapid advances in AI have expanded the range of capabilities required for successful real-world deployment. Understanding where we are in this multi-dimensional frontier is essential for accelerating innovation through effective quality assurance. Rigorous evaluation is increasingly difficult to scale as development requires testing many checkpoints across numerous benchmarks. Model comparison is further complicated by limited transparency of reported results. This talk explores challenges, best practices, and open-source tools that elevate evaluation to a core component of LLM development, delivering continuous signals across the model lifecycle.
We discuss principles for standardizing evaluation methods and improving consistency through practical patterns and anti-patterns, and examples of integrating the science of evaluation directly into model development. Using Nemo-Evaluator, an open-source scalable evaluation tool, we demonstrate modular architectures that enable transparent, reproducible measurement. Finally, we show how Nemo-Evaluator supports reproducible evaluation for the Nemotron model family, helping enable one of the most open development processes in modern AI.
Speakers
avatar for Grzegorz Chlebus

Grzegorz Chlebus

Manager R&D, NVIDIA
Grzegorz Chlebus is a Manager at Frontier Model Evaluation at NVIDIA, where he leads tooling and infrastructure efforts for evaluating frontier AI models. He holds a PhD in Medical Sciences from Radboud University Nijmegen, focused on deep learning-based medical image segmentation... Read More →
Tuesday April 7, 2026 14:45 - 15:10 CEST
Central Room
  GenAI & Multimodal

15:00 CEST

Lightning Talk: Jigsaw: Domain and Tensor Parallelism for High-Resolution Input Training - Deifilia Kieckhefen, Karlsruhe Institute of Technology
Tuesday April 7, 2026 15:00 - 15:10 CEST
Distributed neural network training frameworks typically optimize for specific architectures while minimizing communication overhead. Transformer layers can be efficiently parallelized, but other operations such as convolutions often remain inefficient. This creates bottlenecks for complex model architectures.
Moreover, existing tensor parallelism strategies typically replicate input data across all processes, creating redundant I/O that scales poorly with input size. In applications with heavy I/O demands, such as weather forecasting, medical imaging, or video processing, unsharded input data creates additional data-loading bottlenecks that could benefit from parallelization.
Jigsaw is a PyTorch library that shards both model weights and input data across parallel processes. It maintains a PyTorch-like interface while parallelizing activations, convolutions, linear layers, and attention through a distributed matrix multiplication backend. We demonstrate the usability of Jigsaw across a wide range of model architectures, show its performance when scaling multi-billion-parameter models sharded across up to 8 processes, and compare its scalability to DDP, FSDP, and Megatron-LM approaches.
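The input-sharding idea can be shown with a toy single-process sketch (simulating ranks, under the assumption of row-sharded inputs and a replicated weight; Jigsaw's actual backend is more general):

```python
import torch

# Toy sketch: each simulated "rank" holds only a slice of the input rows
# plus the full weight. A row-sharded matmul needs no communication, so
# input loading parallelizes along with compute.
world_size = 4
x = torch.randn(8, 16)                   # full input, never held by one rank
w = torch.randn(16, 32)

shards = x.chunk(world_size, dim=0)      # each rank loads only its rows
partial = [s @ w for s in shards]        # independent local compute
out = torch.cat(partial, dim=0)          # conceptually gathered at the end

assert torch.allclose(out, x @ w, atol=1e-5)
```

Convolutions are harder than this matmul case because neighboring shards need halo exchange at their boundaries, which is part of what a library like Jigsaw manages.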
Speakers
avatar for Deifilia Kieckhefen

Deifilia Kieckhefen

Doctoral Researcher, Karlsruhe Institute of Technology
Deifilia Kieckhefen is a doctoral researcher at the Karlsruhe Institute of Technology. She works on scalable and distributed training of neural network architectures.
Tuesday April 7, 2026 15:00 - 15:10 CEST
Founders Cafe
  Training Systems
  • Audience Level Any
  • Slides Attached Yes

15:40 CEST

Lightning Talk: Graph Based Pipeline Parallelism - Sanket Purandare, Meta & Simon Fan, Meta PyTorch
Tuesday April 7, 2026 15:40 - 15:50 CEST
Pipeline parallelism is vital for large models, but advanced schedules for SOTA LLMs are difficult to express in current PyTorch. MoE communication dominates the critical path, making latency hiding essential. Leading systems use fw-bw overlapping; fw-fw and bw-bw overlapping further boost throughput.

Schedules like ZeroBubbleV and DualPipeV rely on dI-dW backward splitting for fine-grained overlap. However, eager-mode implementations require a patchwork of fragile integrations (multi-threading, custom autograd functions, activation checkpointing, etc.) that rely on implicit behavior and hand-written logic with poor torch.compile compatibility and upstream composability.

We present Graph-Based PP: stages are compiled to reusable FX graphs executed via an explicit schedule language. Users write standard PyTorch code while specifying schedules at varying granularity; all manipulations run as graph passes, abstracting complexity away from user code and into the compiler/runtime, allowing for greater composability.

We have integrated Graph-PP into TorchTitan and AutoParallel on real MoE workloads, targeting upstream inclusion in torch.distributed.
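The dI/dW backward splitting mentioned above can be expressed in eager PyTorch with `torch.autograd.grad` (a simplified sketch of the idea, not the Graph-PP schedule language): compute the input gradient first so the previous pipeline stage is unblocked, and defer the weight gradient.

```python
import torch

# Sketch of dI/dW splitting: dI unblocks the upstream stage immediately,
# while dW can be scheduled later and overlapped with other work.
w = torch.randn(8, 8, requires_grad=True)   # stage "weights"
x = torch.randn(4, 8, requires_grad=True)   # activations from prior stage
y = (x @ w).sum()                           # stand-in for the stage loss path

# dI: input gradient, sent back through the pipeline right away.
dI, = torch.autograd.grad(y, x, retain_graph=True)
# dW: weight gradient, computed in a separate, later pass.
dW, = torch.autograd.grad(y, w)

assert dI.shape == x.shape and dW.shape == w.shape
```

In eager mode this split requires retaining graphs and careful sequencing; compiling stages to FX graphs lets such manipulations run as graph passes instead.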
Speakers
avatar for Simon Fan

Simon Fan

Software Engineer, Meta
I work on the PyTorch team at Meta, focusing on distributed training efficiency.
avatar for Sanket Purandare

Sanket Purandare

Research Engineer, Meta
Currently, Sanket serves as a Research Engineer at Meta's SuperIntelligence Lab, in PyTorch Distributed and Compiler team. He specializes in performance optimization of large scale training of LLMs based on Mixture of Experts architectures.

Prior to this he obtained his PhD in A... Read More →
Tuesday April 7, 2026 15:40 - 15:50 CEST
Master Stage
  Frameworks & Compilers

15:40 CEST

Lightning Talk: Cross-Region Model Serving: PyTorch Inference, Observability & LLMOps - Suraj Muraleedharan, Amazon Web Services
Tuesday April 7, 2026 15:40 - 15:50 CEST
As PyTorch models move to production, organizations face a critical challenge: deploying, monitoring, and operating inference at scale across multiple regions. Single-region serving is well-understood, but multi-region LLMOps—model distribution, observability, failover, and cost management—remains ad hoc and challenging for many organizations.

This session presents production-tested architectures for multi-region PyTorch inference and LLMOps workflows. We cover:

Serving: Multi-region TorchServe/KServe on Kubernetes with latency-based routing, blue-green deployments, model versioning, and automated failover with circuit breakers.

Observability: OpenTelemetry distributed tracing, Prometheus/Grafana dashboards for latency, throughput, GPU utilization, and LLM-specific metrics like time-to-first-token and KV-cache hit rate.

LLMOps: CI/CD pipelines for cross-region model deployment with automated rollback, drift detection, and SLO-based alerting.

Attendees leave with serving architectures, dashboards, and deployment pipelines using open-source tooling.
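One of the LLM-specific metrics above, time-to-first-token (TTFT), can be measured with a simple timer around a streaming response (a hypothetical sketch; `fake_stream` stands in for a real model server's token stream):

```python
import time

def fake_stream():
    # Stand-in for a streaming inference response: a prefill delay,
    # then tokens arriving one by one.
    time.sleep(0.01)
    for tok in ["Hello", " world"]:
        yield tok

start = time.monotonic()
first_token_at = None
tokens = []
for tok in fake_stream():
    if first_token_at is None:
        first_token_at = time.monotonic()   # first token observed
    tokens.append(tok)

ttft = first_token_at - start
assert ttft >= 0.01 and tokens == ["Hello", " world"]
```

In production the same measurement would typically be recorded as a histogram (e.g. via a Prometheus client) rather than asserted inline.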
Speakers
avatar for Suraj Muraleedharan

Suraj Muraleedharan

Principal Platform Engineer, Amazon Web Services
Principal Engineer driving technical strategy and building mission-critical foundational platforms for AI, HPC, and distributed systems, bridging the gap between infrastructure, AI research, and product organizations.
Tuesday April 7, 2026 15:40 - 15:50 CEST
Founders Cafe
  Inference & Production

15:40 CEST

Enabling State-of-the-art Asynchronous Execution in Torch.compile With CUDA Streams - Michael Lazos, Meta
Tuesday April 7, 2026 15:40 - 16:05 CEST
CUDA streams are a widely used method for parallelizing GPU computation on NVIDIA GPUs. Long requested by our users, they enable several key capabilities: overlapping communication and compute kernels, training on multiple batches in parallel, and running independent kernels concurrently, all of which are needed for achieving SOTA training performance. Another key capability is activation offloading, which can be applied to any model to prevent OOMs by asynchronously storing activations in CPU memory until the model needs them.

Before this work, torch.compile would graph break on CUDA stream contexts, which can be costly for models that utilize streams. Although workarounds exist (e.g. wrapping stream manipulation into custom ops), these solutions add complexity and create friction in the user experience. By enabling seamless CUDA stream support in PT2, we allow our users to leverage the familiar eager APIs for stream assignment and synchronization directly within torch.compile. This not only simplifies the workflow but also ensures that models using custom streaming patterns can run efficiently out-of-the-box without manual intervention or code restructuring.
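The eager stream pattern in question looks roughly like this (a simplified sketch, not code from the talk; the CPU fallback and `backend="eager"` are assumptions to keep the demo runnable without a GPU or an Inductor toolchain):

```python
import torch

def overlapped(x, y):
    a = x @ x
    if torch.cuda.is_available():
        s = torch.cuda.Stream()
        with torch.cuda.stream(s):      # previously a graph-break point
            b = y @ y
        torch.cuda.current_stream().wait_stream(s)  # sync before combining
    else:
        b = y @ y                       # CPU fallback: no side stream
    return a + b

compiled = torch.compile(overlapped, backend="eager")
x, y = torch.randn(8, 8), torch.randn(8, 8)
out = compiled(x, y)
assert torch.allclose(out, overlapped(x, y))
```

With stream support in PT2, the `torch.cuda.stream(...)` context and the `wait_stream` synchronization can be captured in the graph rather than forcing a fall back to eager execution.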
Speakers
avatar for Michael Lazos

Michael Lazos

Software Engineer, Meta
Michael Lazos is a software engineer at Meta where he contributes to torch.compile. His expertise spans both graph extraction with TorchDynamo and generating optimized kernels with the backend compiler TorchInductor. Previously, he was at Microsoft contributing to project Brainwave... Read More →
Tuesday April 7, 2026 15:40 - 16:05 CEST
Central Room
  Frameworks & Compilers

16:10 CEST

Build PyTorch to Understand PyTorch - Vijay Janapa Reddi, Harvard University; Andrea Mattia Garavagno, University of Genoa
Tuesday April 7, 2026 16:10 - 16:35 CEST
PyTorch's success depends on more than users—it needs engineers who understand what's inside. Engineers who can debug framework issues, optimize at the systems level, contribute upstream, and build what comes next. But ML education today produces practitioners who call APIs without understanding them. They train models without knowing why Adam needs 3× the memory of SGD, or what happens when they call loss.backward().

TinyTorch is a 20-module open-source curriculum that closes this gap. Students construct PyTorch's core components—tensors, autograd, optimizers, CNNs, transformers—in pure Python, building a complete framework where every operation is code they wrote. By the final module, they don't just use PyTorch; they understand how to build it.

The curriculum uses progressive disclosure, systems-first profiling from Module 01, and build-to-validate milestones—recreating ML breakthroughs from Perceptron (1958) through Transformers (2017), culminating in MLPerf-style benchmarking.

TinyTorch is how we grow the next generation of PyTorch contributors and the engineers who will build what comes after.

Open source: mlsysbook.ai/tinytorch
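The "Adam needs 3× the memory of SGD" claim above can be verified in a few lines: Adam keeps two extra buffers (`exp_avg` and `exp_avg_sq`) per parameter, so weights plus optimizer state occupy roughly three times the weight memory.

```python
import torch

model = torch.nn.Linear(256, 256)
n_param = sum(p.numel() for p in model.parameters())

opt = torch.optim.Adam(model.parameters())
loss = model(torch.randn(4, 256)).sum()
loss.backward()
opt.step()  # materializes exp_avg and exp_avg_sq state buffers

# Count optimizer state elements, ignoring scalar step counters.
n_state = sum(t.numel() for st in opt.state.values()
              for t in st.values()
              if torch.is_tensor(t) and t.ndim > 0)
assert n_state == 2 * n_param   # params + state ≈ 3x the weight memory
```

Plain SGD without momentum keeps no such buffers, which is exactly the kind of systems fact TinyTorch makes students discover by building the optimizer themselves.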
Speakers
avatar for Vijay Janapa Reddi

Vijay Janapa Reddi

Professor, Harvard University
Vijay Janapa Reddi is a Professor at Harvard University, where he leads research at the intersection of machine learning and computer systems. He is the author of the open-source Machine Learning Systems textbook (mlsysbook.ai) and co-founder of MLCommons, the organization behind... Read More →
avatar for Andrea Mattia Garavagno

Andrea Mattia Garavagno

Research Fellow, University of Genoa & Scuola Superiore Sant'Anna
I am a Research Fellow holding a joint position at the University of Genoa and Scuola Superiore Sant'Anna. My research is centered on Edge AI, where I am currently working to automate the design of applications through Hardware-Aware Neural Architecture Search (NAS). By running these... Read More →
Tuesday April 7, 2026 16:10 - 16:35 CEST
Central Room
  Frameworks & Compilers
  • Audience Level Any
  • Slides Attached Yes

16:10 CEST

Optimizing Reinforcement Learning at Trillion-Parameter Scale - Songlin Jiang, Aalto University & Mind Lab
Tuesday April 7, 2026 16:10 - 16:35 CEST
This talk will dive into how we implemented and optimized reinforcement learning on trillion-parameter Mixture-of-Experts reasoning models using veRL, Megatron-Bridge and vLLM. The session is useful to anyone building large-scale RL training systems.

For the first part, I will walk through the system design required to make RL work at this scale using LoRA: how LoRA adapters are implemented for expert layers, how adapters are sharded and fused under tensor/pipeline/expert parallelism, and most importantly, how refit (parameter sync) is implemented for LoRA between training backend (Megatron) and rollout engine (vLLM).

The second part of the talk focuses on training–inference mismatch in MoE RL. I will explain why common mitigations such as clipping and importance sampling can fail, and how we implement fixed Router Replay R3 across vLLM, veRL, and Megatron to align routing decisions between rollout and training.

These works are done together with Mind Lab and some of the related blog posts are at:
- https://macaron.im/mindlab/research/building-trillion-parameter-reasoning-rl-with-10-gpus
- https://macaron.im/mindlab/research/router-replay-r3-why-it-failed-and-how-we-fixed-it
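The LoRA "refit" step above relies on the fact that an adapter can be fused into a dense weight for the rollout engine. A minimal sketch (an assumption of the general pattern, not the veRL/Megatron implementation):

```python
import torch

class LoRALinear(torch.nn.Module):
    # Minimal LoRA: frozen base weight plus low-rank update (B @ A) * scale.
    def __init__(self, in_f, out_f, r=4, alpha=8):
        super().__init__()
        self.base = torch.nn.Linear(in_f, out_f, bias=False)
        self.base.weight.requires_grad_(False)
        self.A = torch.nn.Parameter(torch.randn(r, in_f) * 0.01)
        self.B = torch.nn.Parameter(torch.zeros(out_f, r))  # zero init
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + (x @ self.A.t() @ self.B.t()) * self.scale

    def merged_weight(self):
        # "Refit": fuse the adapter into one dense weight so an inference
        # engine expecting plain weights can be synced from training.
        return self.base.weight + (self.B @ self.A) * self.scale

layer = LoRALinear(16, 8)
x = torch.randn(2, 16)
assert torch.allclose(layer(x), x @ layer.merged_weight().t(), atol=1e-5)
```

At trillion-parameter MoE scale the same fusion must be done per expert and re-sharded across tensor/pipeline/expert parallel layouts, which is where the engineering discussed in the talk comes in.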
Speakers
avatar for Songlin Jiang

Songlin Jiang

Doctoral Researcher, Aalto University & Mind Lab
I am a doctoral researcher at Aalto University, focusing on reducing training and inference latency for Reinforcement Learning and Large Language Models (LLMs) on High-Performance Computing (HPC) clusters. I am also a passionate free software developer, a maintainer of VeRL, and a... Read More →
Tuesday April 7, 2026 16:10 - 16:35 CEST
Junior Stage
  Training Systems

16:40 CEST

Lightning Talk: TerraKit: Standardising AI-Ready Geospatial Data Preparation for the TorchGeo Ecosystem - Rosie Lickorish & Romeo Kienzler, IBM
Tuesday April 7, 2026 16:40 - 16:50 CEST
With the advent of geospatial foundation models, unexplored use cases are emerging that require well-curated datasets. Currently, no standardised approach exists for creating such AI-ready geospatial datasets. In this session, we introduce TerraKit: a comprehensive open-source Python library for retrieving and processing geospatial data that seamlessly integrates with upstream geospatial model training libraries such as TorchGeo or TerraTorch.

From raster/vector annotations, TerraKit will match, download, process, align and split the requested data source (e.g., EarthData, CDSE, Planetary Computer) based on user specifications provided by a simple configuration file. TerraKit also supports spatial train/val splits and exports datasets in standard formats such as TACO datasets. TerraKit streamlines the pipeline from raw EO data to AI-ready datasets, accelerating the development of custom geospatial applications, and ensuring query and processing pipelines are reproducible. By lowering the barrier to entry, a wider community of TorchGeo and TerraTorch users are empowered to leverage foundation models for Earth observation.
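The spatial train/val split mentioned above differs from a random split: tiles are assigned by region so validation tiles do not border training tiles, avoiding spatial leakage. A toy sketch (the function and tiling scheme are hypothetical, not TerraKit's API):

```python
# Hypothetical spatial split: assign tiles by x-coordinate band rather
# than at random, so train and val regions are spatially separated.
def spatial_split(tiles, val_fraction=0.2):
    # tiles: list of (x, y) grid coordinates
    cutoff = max(t[0] for t in tiles) * (1 - val_fraction)
    train = [t for t in tiles if t[0] <= cutoff]
    val = [t for t in tiles if t[0] > cutoff]
    return train, val

tiles = [(x, y) for x in range(10) for y in range(10)]
train, val = spatial_split(tiles)
assert len(train) + len(val) == len(tiles)
assert set(train).isdisjoint(val)
```

Real pipelines additionally leave a buffer zone between the regions and handle irregular geometries, but the region-based principle is the same.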
Speakers
avatar for Romeo Kienzler

Romeo Kienzler

AI Research Engineer, IBM
Romeo is a data scientist working for IBM Research and an advocate for ethical machine learning, transparency and privacy
avatar for Rosie Lickorish

Rosie Lickorish

Research Software Engineer, IBM
Rosie is a Research Software Engineer at IBM, specializing in the development of next-generation tools and technologies designed to drastically accelerate solutions for today’s most urgent global challenges. Her technical focus involves leveraging geospatial data, AI models... Read More →
Tuesday April 7, 2026 16:40 - 16:50 CEST
Central Room
  GenAI & Multimodal
  • Audience Level Any
  • Slides Attached Yes

16:40 CEST

Optimizing PyTorch on CPU-GPU Coherent Platforms - Matthias Jouanneaux, Nvidia
Tuesday April 7, 2026 16:40 - 17:05 CEST
In recent years, both Nvidia and AMD have introduced hardware-coherent platforms: GH200, GB200, and MI300A. These coherent platforms bring many new features, as well as new challenges, for PyTorch applications attempting to make the most of the hardware.
This talk will focus on Nvidia's GB200 and walk through techniques to utilize the features of the coherent architecture in PyTorch, such as the high CPU-GPU interconnect bandwidth, unified memory, as well as the advantages and caveats of sharing system memory between CPU and GPU.
Speakers
avatar for Matthias Jouanneaux

Matthias Jouanneaux

Sr Software Engineer - PyTorch, NVIDIA
After his master’s degree, Matthias Jouanneaux worked for two years at Konica Minolta's European research lab on medical image analysis using deep learning.
He then joined Nvidia, focusing on optimizing application performance for Nvidia hardware as a Developer Technology enginee... Read More →
Tuesday April 7, 2026 16:40 - 17:05 CEST
Founders Cafe
  Frameworks & Compilers

16:55 CEST

Lightning Talk: Bayesian Neural Networks With Variational Inference in PyTorch - Lars Heyen, Karlsruhe Institute of Technology, Scientific Computing Center
Tuesday April 7, 2026 16:55 - 17:05 CEST
Uncertainty quantification is becoming more and more important as neural networks are used for increasingly critical tasks. Bayesian neural networks (BNNs) inherently provide a measure of their own uncertainty, but can be either hard to implement or inflexible if one uses common frameworks. In this session I discuss how to efficiently implement BNNs using Variational Inference within PyTorch and present torch_blue, a light-weight open source library that implements these methods with the goal of being easy to pick up, yet flexible enough for research on BNNs.
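The core of variational inference for BNNs is replacing point-estimate weights with a learned distribution, sampled via the reparameterization trick. A minimal sketch (illustrative only, not the torch_blue API):

```python
import torch

class VariationalLinear(torch.nn.Module):
    # Mean-field Gaussian weight posterior: each weight has a learnable
    # mean (mu) and standard deviation (softplus(rho) keeps it positive).
    def __init__(self, in_f, out_f):
        super().__init__()
        self.mu = torch.nn.Parameter(torch.zeros(out_f, in_f))
        self.rho = torch.nn.Parameter(torch.full((out_f, in_f), -3.0))

    def forward(self, x):
        sigma = torch.nn.functional.softplus(self.rho)
        # Reparameterization trick: sample weights differentiably.
        w = self.mu + sigma * torch.randn_like(sigma)
        return x @ w.t()

layer = VariationalLinear(4, 2)
x = torch.randn(3, 4)
# Repeated forward passes differ; their spread estimates uncertainty.
y1, y2 = layer(x), layer(x)
assert y1.shape == (3, 2) and not torch.equal(y1, y2)
```

A full VI training loop would add the KL divergence between the weight posterior and a prior to the loss; libraries like torch_blue package that bookkeeping.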
Speakers
avatar for Lars Heyen

Lars Heyen

PostDoc, Karlsruhe Institute of Technology
I am a postdoctoral researcher working on uncertainty quantification in the research group "Robust and Efficient AI" at the Scientific Computing Center of the Karlsruhe Institute of Technology. I also coauthored the PyTorch-based library torch_blue for implementing Bayesian neural... Read More →
Tuesday April 7, 2026 16:55 - 17:05 CEST
Central Room
  Frameworks & Compilers
  • Audience Level Any
  • Slides Attached Yes
 
Wednesday, April 8
 

09:00 CEST

Keynote: PyTorch CTO - Matt White, Global CTO of AI, Linux Foundation
Wednesday April 8, 2026 09:00 - 09:10 CEST
Matt White, Global CTO of AI at the Linux Foundation and CTO at the PyTorch Foundation, will provide an update on technical strategy, the ecosystem, projects, and working groups.
Speakers
avatar for Matt White

Matt White

Global CTO of AI, Linux Foundation, The Linux Foundation
Matt White is the Executive Director of the PyTorch Foundation and GM of AI at the Linux Foundation. He is also the Director of the Generative AI Commons. Matt has years of experience in applied research and standards in AI and data in telecom, media and gaming industries. Matt is... Read More →
Wednesday April 8, 2026 09:00 - 09:10 CEST
Master Stage
  Keynote Sessions
  • Audience Level Any
  • Slides Attached Yes

09:10 CEST

Keynote: vLLM & Ray Updates - Tyler Michael Smith, Chief Architect - Inference Engineering, Red Hat & Artur Niederfahrenhorst, Member of Technical Staff, Anyscale
Wednesday April 8, 2026 09:10 - 09:25 CEST

Speakers
avatar for Tyler Michael Smith

Tyler Michael Smith

Chief Architect - Inference Engineering, Red Hat
Tyler received a PhD in Computer Science at The University of Texas at Austin, studying high performance dense linear algebra - microkernels, parallelism, and theoretical lower bounds on data movement. After a postdoc at ETH Zürich, he joined Neural Magic, first working on a graph... Read More →
avatar for Artur Niederfahrenhorst

Artur Niederfahrenhorst

Member of Technical Staff, Anyscale
Artur is a member of the technical staff at Anyscale, the company that recently donated Ray to the Linux Foundation. He has been contributing to Ray since early 2022, where his main contributions have been in distributed reinforcement learning. Artur majored in Computer Science at... Read More →
Wednesday April 8, 2026 09:10 - 09:25 CEST
Master Stage
  Keynote Sessions
  • Audience Level Any
  • Slides Attached Yes

09:25 CEST

Keynote: The Hub as Infrastructure. From Open PyTorch Models, to a Safe and Performant Distribution Hub - Lysandre Debut, Chief Open-Source Officer, Hugging Face
Wednesday April 8, 2026 09:25 - 09:40 CEST

Speakers
avatar for Lysandre Debut

Lysandre Debut

Chief Open-Source Officer, Hugging Face
Lysandre is the Chief Open-Source Officer at Hugging Face; ensuring that the ecosystem is as well supported as possible in the ML lifecycle, with open-source tools.

He has been at Hugging Face for the past six years and was the first open-source employee at Hugging Face; working on transformers and the entire stack of Hugging Face open-source libraries since then... Read More →
Wednesday April 8, 2026 09:25 - 09:40 CEST
Master Stage
  Keynote Sessions
  • Audience Level Any
  • Slides Attached Yes

09:45 CEST

Sponsored Keynote: Open Source Infrastructure for the AI Native Era - Jonathan Bryce, Executive Director, Cloud Native Computing Foundation
Wednesday April 8, 2026 09:45 - 09:50 CEST
AI adoption will not be limited by model ideas alone. It will be limited by how fast we can deploy, secure, observe, and scale AI systems in production. Inference is where AI becomes real for most organizations. As AI moves from frontier labs into mainstream production, the operational challenges start to look increasingly cloud native: orchestration, autoscaling, routing, security, policy, and observability. This keynote explores why the next phase of AI adoption will move faster if PyTorch and cloud native communities work together to extend proven open source patterns.
Speakers
avatar for Jonathan Bryce

Jonathan Bryce

Executive Director, Cloud and Infrastructure, The Linux Foundation
Jonathan Bryce is the Executive Director of Cloud & Infrastructure at the Linux Foundation, where he leads both the Cloud Native Computing Foundation (CNCF) and the OpenInfra Foundation—two of the largest and most influential open source communities in the world. With over... Read More →
Wednesday April 8, 2026 09:45 - 09:50 CEST
Master Stage
  Keynote Sessions
  • Audience Level Any
  • Slides Attached Yes

10:35 CEST

Lightning Talk: Live Migration of PyTorch GPU Nodes From Azure To European Clouds - Mike Krom, Acf Cyber Solutions
Wednesday April 8, 2026 10:35 - 10:45 CEST
Many European PyTorch teams run their GPU workloads on hyperscalers like Azure, AWS, or GCP—often without realizing that this places their data and models under US jurisdiction.

This lightning talk shows how PyTorch compute nodes can be migrated to European cloud providers while keeping the full ML environment intact. Through a live demo, we migrate a GPU-enabled PyTorch VM—including CUDA drivers and Jupyter notebooks—from Azure to European infrastructure, without retraining models or rebuilding environments.

The focus is on practical challenges: GPU compatibility, reproducibility, and data movement across clouds.

The migration is demonstrated using DigitalNomadSky, an open-source Python platform for cross-cloud VM migration, but the lessons apply broadly to PyTorch teams aiming to reduce jurisdictional risk and vendor lock-in.

Key takeaways
Why PyTorch workloads on hyperscalers raise sovereignty concerns for EU teams
What actually breaks (and what doesn’t) when migrating GPU-based ML nodes
How to regain control over ML infrastructure without rewriting your stack
Speakers
avatar for Mike Krom

Mike Krom

Partner, ACF Cybersolutions
I am a software architect and lead developer of the open-source project DigitalNomadSky. I have extensive experience with Microsoft Azure from working at Microsoft and supporting large-scale cloud migrations. My work focuses on supporting data science and ML teams with cloud infrastructure... Read More →
Wednesday April 8, 2026 10:35 - 10:45 CEST
Central Room
  Security & Privacy

10:35 CEST

Beyond JSON-RPC: Scaling Model Context Protocols With gRPC in the PyTorch Ecosystem - Ashesh Vidyut & Madhav Bissa, Google
Wednesday April 8, 2026 10:35 - 11:00 CEST
Right now, MCP mostly relies on HTTP and STDIO. That works for simple scripts, but if you’re running high-performance PyTorch models in production, you’re going to hit a wall. When you’re moving large context windows or tensor metadata, the overhead of JSON-RPC starts to hurt.
We’re introducing SEP-1352, which adds gRPC as a native transport for MCP. Since gRPC is already the standard for microservices, it’s a natural fit for the PyTorch ecosystem. By using Protobuf instead of JSON, we get much higher throughput and lower latency—essentially making the communication between models and tools as fast as the models themselves.
In this session, we’ll cover:
Why Protobuf matters: Moving away from bulky JSON to keep bandwidth low and speed high.
Built-in Streaming: How to use gRPC’s streaming to handle long-running model outputs without timeouts.
Production-ready features: Using the same auth, load balancing, and service mesh (mTLS) you already use for your ML microservices.
Upgrading your stack: How to move from PyTorch MCP HTTP services to MCP gRPC services without throwing away your existing infra.
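The bandwidth argument for Protobuf can be illustrated with a toy comparison (an illustrative assumption only, not the SEP-1352 schema): the same payload encoded as JSON text versus a packed binary encoding standing in for Protobuf's compact wire format.

```python
import json
import struct

# Toy payload: an id plus a run of large token ids, as might appear in
# tool-call metadata for a model context exchange.
payload = {"id": 42, "tokens": list(range(100000, 100256))}

as_json = json.dumps(payload).encode()                      # text encoding
as_binary = (struct.pack("<I", payload["id"])               # fixed-width ints,
             + struct.pack("<256I", *payload["tokens"]))    # no field names

assert len(as_binary) < len(as_json)   # binary is substantially smaller
```

Real Protobuf adds varint compression and field tags on top of this, and gRPC layers HTTP/2 multiplexing and streaming over the wire format.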
Speakers
avatar for Ashesh Vidyut

Ashesh Vidyut

Senior Software Engineer, Google

avatar for Madhav Bissa

Madhav Bissa

Senior Software Engineer, Google
member, grpc-Go
Wednesday April 8, 2026 10:35 - 11:00 CEST
Junior Stage
  Agents & Interop

10:35 CEST

How To Write C++ Extensions in 2026 - Jane Xu, Meta & Mikayla Gawarecki, Meta
Wednesday April 8, 2026 10:35 - 11:00 CEST
Are you writing a C++ custom op extension to PyTorch? It's 2026 and are you still shipping M x N wheels for M CPython versions and N libtorch versions? Did you know you can just ship 1 wheel that works across multiple CPythons and libtorches? If you're curious how, attend this talk to get the deets on py_limited_api, APIs like torch::stable::Tensor & TORCH_TARGET_VERSION, and generally the latest and greatest ways for keeping your code and your release matrix simple. Get your custom kernel enrolling in new features with benefits proven out in FA3, xformers, torchao, torchaudio, and more in progress! We'll also share some of our vision towards smoother and faster custom ops extensions.
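As a rough sketch of the single-wheel pattern (extension name and source paths are hypothetical; consult the PyTorch custom-ops documentation for the exact, current flags), a `py_limited_api` build can look like:

```python
# setup.py sketch (hypothetical package "my_op"): build one wheel against
# CPython's stable ABI so it works across multiple Python versions.
from setuptools import setup
from torch.utils.cpp_extension import BuildExtension, CppExtension

setup(
    name="my_op",
    ext_modules=[
        CppExtension(
            "my_op._C",
            ["csrc/my_op.cpp"],
            py_limited_api=True,  # restrict to the stable CPython ABI
            extra_compile_args={"cxx": ["-DPy_LIMITED_API=0x03090000"]},
        )
    ],
    cmdclass={"build_ext": BuildExtension},
    # Tag the wheel abi3 so one artifact covers CPython >= 3.9.
    options={"bdist_wheel": {"py_limited_api": "cp39"}},
)
```

Pairing this with libtorch-stable APIs such as torch::stable::Tensor addresses the other axis of the release matrix, the libtorch version.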
Speakers
avatar for Jane Xu

Jane Xu

PyTorch SWE, Meta
Hi, I'm Jane! Please don't hesitate to come talk to me about your favorite optimizer, fitting models in GPU memory, how to free C++ extensions from libtorch version, and anything that interests you.
avatar for Mikayla Gawarecki

Mikayla Gawarecki

Software Engineer, Meta
Software Engineer on PyTorch
Wednesday April 8, 2026 10:35 - 11:00 CEST
Founders Cafe
  Frameworks & Compilers

10:50 CEST

Lightning Talk: Achieving SOTA GEMM Performance: A CuTeDSL Backend for PyTorch Inductor - Nikhil Patel, Meta
Wednesday April 8, 2026 10:50 - 11:00 CEST
Matrix multiplication is a central compute primitive in modern deep learning, but achieving SOTA performance on novel architectures like NVIDIA Blackwell has become a bottleneck. Existing Triton-based kernels in torch.compile struggle to keep pace with rapid hardware evolution, often forcing users to hand-write custom, architecture-specific kernels - a growing gap as hardware feature velocity accelerates.

We present a new CuTeDSL GEMM backend in PyTorch Inductor that integrates NVIDIA’s kernel implementations directly into torch.compile. Built using the Cutlass API for kernel discovery, this backend allows PyTorch to expose first-class support for NVIDIA-authored GEMMs and automatically leverage new architectural features as NVIDIA updates their kernels.

The backend currently supports standard GEMM, grouped GEMM, and block-scaled MXFP8 GEMM, along with pointwise epilogue fusions (with reductions forthcoming). We present early end-to-end results from vLLM inference and TorchTitan training, demonstrating how this approach enables PyTorch to achieve high-performance GEMMs on Blackwell and beyond, while eliminating the need for users or developers to maintain handwritten kernels.
Speakers
Nikhil Patel

Software Engineer, Meta
Nikhil is a software engineer on the PyTorch Inductor team at Meta Superintelligence Labs, where he works on Inductor’s CuTeDSL GEMM backend. His work sits at the boundary between compiler code generation and hardware-native GPU features, optimizing large-scale training and inference... Read More →
Wednesday April 8, 2026 10:50 - 11:00 CEST
Master Stage
  Frameworks & Compilers

10:50 CEST

Lightning Talk: Step-Aligned Telemetry for Distributed PyTorch Training (Time & Memory Attribution Across Ranks) - Abhinav Srivastav, TraceOpt
Wednesday April 8, 2026 10:50 - 11:00 CEST
Distributed PyTorch training often looks healthy in system dashboards: GPU utilization is high and memory is stable. Yet throughput degrades, steps jitter, or GPUs go idle intermittently. The core issue is misalignment: most telemetry is sampled by time, while training progresses by steps, and distributed behavior is dominated by the slowest rank rather than averages.

In this talk I will break down common failure modes in DDP training that standard metrics miss (rank stragglers, dataloader stalls, step-time variance, and memory spikes/creep). I will show how step-aligned, rank-aware aggregation changes debugging: per-step worst-rank vs. median-rank views, gating to completed steps across ranks, and how to tie time and memory back to training semantics without relying on heavyweight profilers.
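The step-aligned, worst-rank-vs-median view the abstract describes can be sketched in a few lines of plain Python (the data shapes and names here are illustrative, not the speaker's tooling):

```python
from statistics import median

def step_aligned_view(step_times):
    """step_times: {rank: [duration of step 0, step 1, ...]}.
    Gate to steps completed on *every* rank, then report the
    median-rank and worst-rank (straggler) duration per step."""
    completed = min(len(v) for v in step_times.values())
    view = []
    for step in range(completed):
        per_rank = [times[step] for times in step_times.values()]
        view.append({
            "step": step,
            "median_rank": median(per_rank),
            # distributed step time is set by the slowest rank, not the average
            "worst_rank": max(per_rank),
        })
    return view

# Rank 1 straggles on step 1; a time-sampled average across ranks would hide it.
times = {0: [1.0, 1.1, 1.0], 1: [1.0, 2.5], 2: [1.0, 1.1, 1.0]}
view = step_aligned_view(times)  # only 2 steps are complete on all ranks
```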
Speakers
Abhinav Srivastav

ML Scientist, TraceOpt
ML researcher with a PhD in Computer Science. Industry experience at IBM Research, Huawei Research, and Zalando. Currently building TraceML: an open source tool that shows you the step-level breakdown of your PyTorch training run while it's still running. I am particularly interested in... Read More →
Wednesday April 8, 2026 10:50 - 11:00 CEST
Central Room
  Training Systems

11:05 CEST

Lightning Talk: Accelerating PyTorch Models With Torch.compile's C++ Wrapper Mode - Bin Bao, Meta
Wednesday April 8, 2026 11:05 - 11:15 CEST
This lightning talk introduces torch.compile's C++ wrapper mode, a powerful feature that reduces CPU overhead and significantly improves model performance. As modern GPUs become increasingly powerful and compiler optimizations make GPU kernels run faster, CPU overhead has become more visible as the bottleneck. By generating optimized C++ code instead of Python, cpp-wrapper mode directly tackles this challenge.

While CUDAGraphs can also reduce CPU overhead, it is not always applicable—especially with highly dynamic input shapes. In these scenarios, cpp-wrapper mode provides a robust alternative with significant performance gains. Benchmark results from the OSS Huggingface suite demonstrate that cpp-wrapper mode delivers a 39% speedup over default torch.compile.

Attendees will learn when and how to leverage cpp-wrapper mode to overcome CPU-bound limitations and understand how this feature fits into PyTorch's performance optimization landscape, enabling them to build faster machine learning applications.
Speakers
Bin Bao

Software Engineer, Meta
Bin Bao is a software engineer working with the PyTorch Compiler team at Meta. He focuses on developing TorchInductor optimizations and AOTInductor for C++ deployment.
Wednesday April 8, 2026 11:05 - 11:15 CEST
Junior Stage
  Frameworks & Compilers

11:05 CEST

Fp8 Training From Hopper To Blackwell - Luca Wehrstedt, Meta
Wednesday April 8, 2026 11:05 - 11:30 CEST
The Hopper generation of NVIDIA GPUs first enabled the use of low-precision float8 data types for training via TensorCore acceleration. However, the recipe to best leverage it was far from settled. Practitioners had to find their way through many entangled decisions around accuracy-vs-efficiency, precision-vs-range, overflows-vs-underflows, and more. The frontier was pushed further forward by the DeepSeek release, and then by the micro-scaling formats introduced by Blackwell. In this talk we will go through all these approaches, comparing their pros and cons, thus guiding researchers in finding the options that work best for them.
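As a toy illustration of the precision-vs-range tension the talk covers (not a recipe from the talk): per-tensor fp8 scaling maps a tensor's observed absolute max onto the format's range, and anything past that clamps. E4M3's largest finite value is 448.

```python
E4M3_MAX = 448.0  # largest finite value representable in float8 e4m3

def fp8_scale(amax):
    """Per-tensor scale: map the observed absolute max onto the fp8 range."""
    return E4M3_MAX / amax

def quantize_dequantize(x, scale):
    """Simulate the fp8 round trip: scale, clamp to range, unscale.
    (Real fp8 also rounds mantissa bits; this sketch only models range.)"""
    clamped = max(-E4M3_MAX, min(E4M3_MAX, x * scale))
    return clamped / scale

amax = 12.0
s = fp8_scale(amax)                       # values up to 12.0 use the full e4m3 range
ok = quantize_dequantize(10.0, s)         # within range: survives the round trip
clipped = quantize_dequantize(100.0, s)   # outlier past amax: silently clamped to 12.0
```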
Speakers
Luca Wehrstedt

Software Engineer, Meta
Research Engineer in Meta's Fundamental AI Research team (FAIR). At the intersection of research and infrastructure, Luca specialized in training efficiency and distributed communication. Regular contributor to PyTorch.
Wednesday April 8, 2026 11:05 - 11:30 CEST
Master Stage
  Training Systems

11:20 CEST

Lightning Talk: Building AI That Ops Teams Actually Trust - Robert King, Chronosphere / Palo Alto Networks
Wednesday April 8, 2026 11:20 - 11:30 CEST
You've built an AI that identifies root causes of incidents faster than any human could... but there's one problem: no one trusts it.

Ops teams are skeptical by nature. They've been burned by noisy alerts, black-box tools, and "intelligent" systems that weren't.
This talk covers what we learned building AI for incident response across enterprise environments: why technically correct recommendations get ignored, and how to design for skepticism from day one.

I'll share specific patterns that moved the needle:

- Validating agent responses before they reach users, catching hallucinations, weak reasoning, and overconfident outputs
- Explainability that fits the operator's mental model, not the data scientist's
- Feedback loops that improve the AI and build user trust simultaneously
- Rollout strategies that let teams build confidence gradually

Whether you're using LLMs, agents, or traditional ML for operational tasks, the trust problem is the same. Ship something wrong during an incident and you've lost your users for months.

You'll leave with a practical framework for validating AI outputs and building the kind of trust that gets recommendations acted on.
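As a flavor of the "validate before it reaches users" pattern, a hypothetical sketch: the field names and checks below are invented for illustration, not the speaker's framework.

```python
def validate_response(response, min_confidence=0.7):
    """Gate an agent's recommendation before it reaches an operator.
    Pattern: check for cited evidence, a confidence floor, and
    overconfident language before anything is shown to a human."""
    issues = []
    if not response.get("evidence"):
        issues.append("no supporting evidence cited")
    if response.get("confidence", 0.0) < min_confidence:
        issues.append("confidence below threshold")
    overconfident_words = ("always", "definitely", "guaranteed")
    if any(w in response.get("summary", "").lower() for w in overconfident_words):
        issues.append("overconfident language")
    return (len(issues) == 0, issues)

ok, _ = validate_response(
    {"summary": "Likely cause: connection pool exhaustion.",
     "evidence": ["pool_wait_ms spiked at 14:02"], "confidence": 0.82})
bad, why = validate_response(
    {"summary": "The DB is definitely down.", "confidence": 0.95})
```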
Speakers
Robert King

Senior Sales Engineer, Chronosphere
Robert is Lead Enterprise Solutions Engineer at Chronosphere and an OpenTelemetry contributor. He recently presented on AI Observability with OpenTelemetry at Cloud Native London https://www.youtube.com/live/qF4wz-pha1w?si=PFzjNcGkbD4pFKnA&t=625 and has spoken at AWS Summit, and other... Read More →
Wednesday April 8, 2026 11:20 - 11:30 CEST
Junior Stage
  Inference & Production

11:35 CEST

Accelerating Complex-Valued Tensors With Torch.compile - Hameer Abbasi, OpenTeams Inc.
Wednesday April 8, 2026 11:35 - 12:00 CEST
torch.compile has been invaluable in accelerating many machine learning and scientific computing workflows. It has become a one-shot way to get free performance for many kinds of programs and models.

However, it comes with its own set of limitations. One of these limitations is that, for a long time, torch.compile didn't accept complex-valued tensors. These tensors have many uses, from quantum mechanics to simplifying the physics for world models. Support for such tensors would accelerate many of these workflows.

In this talk, we will take a journey into the current progress for supporting such tensors in torch.compile; some of the encountered challenges and what we hope to achieve, including some side-benefits for reducing binary size by JIT-ing kernels on demand.
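One standard strategy for this kind of support, and a plausible building block here, is decomposing complex tensors into real/imaginary pairs so the compiler only ever sees real-valued ops. A sketch of the idea, not the actual torch.compile implementation:

```python
def complex_mul(ar, ai, br, bi):
    """(ar + i*ai) * (br + i*bi), expressed with real ops only,
    which is the form a real-valued compiler stack can trace and fuse."""
    return ar * br - ai * bi, ar * bi + ai * br

# (1 + 2i) * (3 + 4i) = -5 + 10i
re, im = complex_mul(1.0, 2.0, 3.0, 4.0)
```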
Speakers
Hameer Abbasi

Senior Software Engineer I, OpenTeams, Inc.
Hameer Abbasi is a Senior Software Developer at OpenTeams, Inc. As part of his day job and also as a hobby, he has contributed to various projects in the scientific computing space, including NumPy, SciPy and PyTorch. He is also the lead maintainer of PyData/Sparse, a library for... Read More →
Wednesday April 8, 2026 11:35 - 12:00 CEST
Junior Stage
  Frameworks & Compilers

13:30 CEST

Optimizing CPU LLM Inference in PyTorch: Lessons From VLLM - Crefeda Rodrigues, Arm Limited & Fadi Arafeh, Arm
Wednesday April 8, 2026 13:30 - 13:55 CEST
vLLM has emerged as a reference inference stack in the PyTorch ecosystem for high-throughput large language model serving. CPUs continue to play an important role in LLM inference, supporting cost-sensitive deployments, hybrid CPU/GPU serving, and batch or off-peak workloads on general-purpose infrastructure.

In this talk, we examine CPU-based LLM inference through the lens of PyTorch internals, using vLLM as a case study. We describe how vLLM interacts with PyTorch’s operator stack, including tensor layout management, backend dispatch, and threading behaviour, and highlight common sources of overhead such as repeated weight repacking and suboptimal threading.

We present runtime and kernel-level optimizations that reduce overhead including CPU paged-attention kernel tuning with vectorized softmax, specialized Q–K and P–V GEMM kernels aligned with vLLM’s scheduler, an ISA-aware BF16 attention, pre-packed weight layouts for quantized matmul, SIMD vectorization using PyTorch’s at::vec::Vectorized primitives, and NUMA-aware scheduling for scalable parallel inference.

Finally, we conclude with lessons learned from building and upstreaming a high-performance CPU inference engine.
Speakers
Crefeda Rodrigues

Staff Software Engineer, Arm
Crefeda Rodrigues is a Staff Software Engineer at Arm, focusing on performance and scalability driven machine learning software optimization for Arm server CPUs. She previously worked on large-scale climate and weather model optimization as a postdoctoral researcher at the University... Read More →
Fadi Arafeh

Senior Machine Learning Engineer, Arm
Fadi is a Senior Machine Learning Engineer at Arm, working on optimizing PyTorch and vLLM for Arm Infrastructure cores. Prior to that, Fadi obtained a BSc in Artificial Intelligence from the University of Manchester.
Wednesday April 8, 2026 13:30 - 13:55 CEST
Founders Cafe
  Inference & Production

13:45 CEST

Lightning Talk: Slash LLM Cold-Start Times by Pre-distributing GPU Caches - Billy McFall & Maryam Tahhan, Red Hat
Wednesday April 8, 2026 13:45 - 13:55 CEST
Are your Large Language Model (LLM) deployments stuck waiting for GPU kernels to compile? If you are running distributed inference at scale, your infrastructure is likely wasting time rebuilding the same GPU Kernel Cache for every single instance. You may not even realize the time and resources that are being consumed for rebuilding. This session is designed for platform engineers and ML practitioners who need to optimize inference scaling and reduce startup latency.

We will demonstrate how to eliminate redundant compilation by pre-distributing GPU kernel caches to all the inference nodes using KServe, a distributed model inference runtime for Kubernetes. Beyond just the "what," we will dive into the technical implementation of signing, verifying, and mounting cache images to ensure supply-chain security across clusters. Attendees will leave with a practical blueprint for reducing cold-start times and securing GPU-heavy workloads in production.
Speakers
Billy McFall

Sr. Principal Software Engineer, Red Hat
Billy McFall is a software engineer in the Emerging Tech Networking Team within the Office of the CTO at Red Hat for 9+ years. Billy previously worked on Kubernetes/OpenShift networking, including the integration of the NVIDIA DPU into OpenShift. Billy has also been a maintainer of... Read More →
Maryam Tahhan

Principal Engineer, Red Hat
Maryam is a Principal Engineer in Red Hat's Office of the CTO, where she focuses on standardising CPU inferencing performance evaluation to help effectively validate and scale ML workloads.
Wednesday April 8, 2026 13:45 - 13:55 CEST
Central Room
  Inference & Production

14:15 CEST

Lightning Talk: Inside VLLM's KV Offloading Connector: Async Memory Transfers for Higher Inference Throughput - Nicolò Lucchesi, Red Hat
Wednesday April 8, 2026 14:15 - 14:25 CEST
Every LLM request produces KV-cache state that is expensive to recompute. However, GPU memory is limited in size and when memory fills up, entries are discarded from cache. A natural mitigation is expanding the KV cache to CPU DRAM which is meaningfully larger than GPU memory.
vLLM 0.11.0 introduced the Offloading Connector, an asynchronous, pluggable API for KV-cache offloading bundled with a native CPU backend. It uses GPU DMA to execute transfers concurrently with model computation, offering fast loading of KV data from DRAM with near-zero offloading overhead. Getting here required rethinking vLLM's memory layout: the default per-layer KV fragmentation devastated transfer throughput. A new contiguous block layout, upstreamed in 0.12.0, increased effective block sizes by up to 125× and delivered an order-of-magnitude improvement in offloading performance.
We'll walk through the connector architecture, discuss memory transfer tradeoffs, the memory layout redesign, and practical guidance for enabling CPU offloading in production.
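Back-of-the-envelope sizing shows why DRAM offloading pays off. This uses the standard per-token KV-cache formula with illustrative (Llama-3-8B-like) dimensions, not numbers from the talk:

```python
def kv_bytes_per_token(n_layers, n_kv_heads, head_dim, dtype_bytes=2):
    """Per-token KV-cache footprint: K and V (factor of 2),
    per layer, per KV head, times the element size."""
    return 2 * n_layers * n_kv_heads * head_dim * dtype_bytes

# Llama-3-8B-like dimensions in fp16: 32 layers, 8 KV heads, head_dim 128
per_token = kv_bytes_per_token(32, 8, 128)   # 128 KiB per token
ctx_gib = per_token * 32_000 / 2**30         # one 32k-token context, in GiB
# Several GiB per long request: evicting and recomputing this is what
# CPU DRAM offloading avoids.
```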
Speakers
Nicolò Lucchesi

Senior Machine Learning Engineer, Red Hat
Nicolò is a Senior Machine Learning Engineer at Red Hat with a background in Deep Learning and Computer Vision. He works on Inference Optimization for vLLM, where he is a maintainer.
Wednesday April 8, 2026 14:15 - 14:25 CEST
Central Room
  Inference & Production
  • Audience Level Any
  • Slides Attached Yes

14:30 CEST

Lightning Talk: Torch-Spyre: Compiling To a Multi-core Dataflow Accelerator With Inductor - David Grove & Olivier Tardieu, IBM
Wednesday April 8, 2026 14:30 - 14:40 CEST
Torch-Spyre (https://github.com/torch-spyre/torch-spyre) is an open source project that provides a PyTorch PrivateUse1 device with OpenReg, including an Inductor backend, for the IBM Spyre Accelerator. IBM Spyre is a high-performance energy-efficient AI accelerator featuring 32 AI-optimized compute cores each with on-chip interconnect and compiler-managed scratchpad memory.

Our goal in this session is to describe how we evolved the Spyre software stack to fully leverage Inductor. This enabled the elimination of a significant fraction of our proprietary compiler code base resulting in improved compilation time and operation coverage without loss of inference performance. We will highlight several technical challenges in compiling for Spyre-like accelerators and describe how we adapted and extended Inductor to tackle them. In particular, we will discuss our extensions to Inductor to support device-specific tiled Tensor memory layouts, and new compiler optimization passes for core-level work division and scratchpad management. We hope to engage the community in evolving the PyTorch ecosystem to more fully support them.
Speakers
Dave Grove

Distinguished Research Scientist, IBM
David Grove is a Distinguished Research Scientist at IBM T.J. Watson, NY, USA. He has been a software systems researcher at IBM since 1998, specializing in programming language implementation and scalable runtime systems. He has authored more than sixty peer-reviewed publications... Read More →
Olivier Tardieu

Principal Research Scientist, Manager, IBM
Dr. Olivier Tardieu is a Principal Research Scientist and Manager at IBM T.J. Watson, NY, USA. He joined IBM Research in 2007. His current research focuses on cloud-related technologies, including Serverless Computing and Kubernetes, as well as their application to Machine Learning... Read More →
Wednesday April 8, 2026 14:30 - 14:40 CEST
Junior Stage
  Frameworks & Compilers

14:30 CEST

Lightning Talk: Every Millisecond Counts: The Fine-tuning Journey of an Ultra-Efficient PyTorch Model for the Edge - Pavel Macenauer, NXP Semiconductors
Wednesday April 8, 2026 14:30 - 14:40 CEST
From smart cameras that protect privacy by analyzing video on-device, to wearables that interpret voice and motion instantly, to industrial sensors that prevent failures before they happen, edge AI is shaping our everyday routines and transforming our lives.

Eliminating cloud dependency and making connectivity optional is essential for keeping data local. Without the cloud, we are bound by the constraints of the devices themselves, and efficiency drives innovation: every millisecond and milliwatt can unlock a new use case, or rule one out.

This talk will explore optimization techniques that allow vision, audio, and language models to run on tiny, resource-constrained devices, fine-tuning them to the limits of latency, accuracy, and power efficiency. We will start with rapid initial simulation and follow up with silicon-level tuning guided by real-device profiling feedback.
Speakers
Pavel Macenauer

AI/ML R&D Software Lead, NXP Semiconductors
A software lead at NXP Semiconductors leading teams developing tools, runtime libraries, and enabling AI on Edge-class devices. Both professionally and out of human curiosity, Pavel developed software visualizing the World around us. Initially through the lens of a camera, then from... Read More →
Wednesday April 8, 2026 14:30 - 14:40 CEST
Central Room
  Inference & Production

14:30 CEST

From Responses To Trajectories: Multi-Turn and Multi-Environment Reinforcement Learning - Kashif Rasul & Sergio Paniego Blanco, Hugging Face
Wednesday April 8, 2026 14:30 - 14:55 CEST
Post-training of LLMs with reinforcement learning is increasingly moving beyond static prompt–response pairs and preference optimization methods such as DPO, toward trajectory-based optimization. This talk focuses on the latest advances in multi-turn and multi-environment GRPO training, enabling LLMs to learn from interactive, agent-like experiences, including interacting with simulated environments, using tools, or completing multi-step reasoning tasks.

We highlight how TRL, as a PyTorch-native post-training framework, supports these workflows at scale. Multi-turn, multi-environment training can leverage simulated environments (i.e., coding, terminals, browsers) such as OpenEnv, while GRPO can also be applied to datasets for training LLMs on tool use or multi-step reasoning. Attendees will gain insights into design patterns, rollout handling, trajectory batching, and advantage computation, showing how robust, multi-turn, multi-environment post-training can improve alignment, reasoning, and generalization in LLMs for agentic applications.
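The group-relative advantage computation at the heart of GRPO can be sketched independently of TRL (a minimal version of the published formula, not TRL's implementation):

```python
from statistics import mean, pstdev

def grpo_advantages(rewards, eps=1e-8):
    """Group-relative advantages as in GRPO: normalize each rollout's
    reward against its own group's mean and std, so no learned value
    network (critic) is needed."""
    mu, sigma = mean(rewards), pstdev(rewards)
    return [(r - mu) / (sigma + eps) for r in rewards]

# Four rollouts sampled for one prompt; the better completions get
# positive advantages, the worse ones negative.
adv = grpo_advantages([0.0, 1.0, 0.0, 1.0])
```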
Speakers
Kashif Rasul

Research Scientist, Hugging Face
Kashif has a PhD. in Mathematics from the Freie Universität Berlin. He is passionate about high-performance computing, Reinforcement learning, and has presented at NVIDIA's GTC in 2009 and at StrangeLoop in 2012, and is also contributing to a number of data science and deep learning... Read More →
Sergio Paniego Blanco

Machine Learning Engineer, Hugging Face
Sergio has an extensive background in open source and artificial intelligence, the field in which he also earned his PhD. For more than eight years he has taken part in initiatives such as Google Summer of Code, contributing as both a developer and a mentor. Currently... Read More →
Wednesday April 8, 2026 14:30 - 14:55 CEST
Founders Cafe
  Training Systems

14:45 CEST

Lightning Talk: Full-Stack PyTorch Robotics VLA: From Data To Edge Via ExecuTorch/OpenVINO - Samet Akcay & Dmitriy Pastushenkov, Intel
Wednesday April 8, 2026 14:45 - 14:55 CEST
While research-centric tools have lowered the entry barrier for robotics data collection, transitioning Vision-Language-Action models to production remains challenging due to fragmented edge deployment paths. This session presents a unified, PyTorch-native workflow spanning the full robotics lifecycle, from data capture and curation to optimized edge execution. We introduce a modular Physical AI pipeline designed to resolve the disconnect between research scripts and real-time hardware.

The talk details practical patterns for robotics data capture and policy training in a unified PyTorch ecosystem, followed by concrete steps to export models via ExecuTorch. Using an OpenVINO backend, Quantizer, and AOT compilation, we address latency, accuracy, and operator coverage gaps, and demonstrate efficient on-device VLA inference.

Using a WidowX pick-and-sort task as a case study, we demonstrate how to validate latency and numerical tolerances under physical constraints. Attendees will leave with a reference architecture and a checklist for monitoring, safety gates, and managing dataset drift, providing a roadmap for moving robotics VLA from research to production-grade edge deployment.
Speakers
Dmitriy Pastushenkov

AI Software Product Manager, Intel
Dmitriy Pastushenkov is a passionate Software Product Manager at Intel with more than 20 years of comprehensive and international experience in the industrial automation, industrial Internet of Things (IIoT) and real-time operating systems and AI. Dmitriy has held various roles in... Read More →
Samet Akcay

Principal AI Engineer, Intel
Samet Akcay is a Principal AI Engineer at Intel who leads ML R&D efforts across Open Edge Platform libraries, including Intel Geti, Datumaro, Anomalib, Training Extensions, and Inference libraries. His research specializes self-supervised learning and multi-modal object detection... Read More →
Wednesday April 8, 2026 14:45 - 14:55 CEST
Central Room
  Inference & Production
  • Audience Level Any
  • Slides Attached Yes

15:25 CEST

Lightning Talk: Trinity Large - Torchtitan on 2000+ B300s - Matej Sirovatka, Prime Intellect
Wednesday April 8, 2026 15:25 - 15:35 CEST
In this talk, we'll cover how to use torchtitan to scale training of ultra-sparse mixture-of-experts models across over 2,000 GPUs. We'll walk through the pre-training of Trinity Large, a 400B mixture-of-experts model trained entirely using torchtitan, focusing on maximizing throughput and minimizing the impact of hardware induced failures. Along the way, we'll discuss challenges like fault tolerance, large-scale distributed training, and ensuring determinism - and how we've addressed each of these using torchtitan. Finally, we'll share insights and common pitfalls to avoid in your own large-scale training runs.
Speakers
Matej Sirovatka

Research Engineer, Prime Intellect
Research Engineer at Prime Intellect, mainly focusing on distributed training, performance and scaling.
Wednesday April 8, 2026 15:25 - 15:35 CEST
Founders Cafe
  Training Systems

15:25 CEST

Beyond the Theory: What Actually Breaks When You Scale Your Disaggregated Pytorch Models - Ekin Karabulut & Ron Kahn, NVIDIA
Wednesday April 8, 2026 15:25 - 15:50 CEST
As inference demand explodes, new techniques to optimize these deployments have emerged. One such technique is disaggregated inference, which splits inference into differently optimized workloads (e.g. prefill and decode) on separate workers. The theory is straightforward: better GPU utilization, inference performance, and tighter control over SLAs. The deployment in production is not.
Scaling happens at multiple connected levels. Adding prefill workers for a traffic spike? Those workers belong to a prefill leader and must scale as a unit. But your prefill-to-decode ratio matters too: scale prefill without matching decode capacity and you've moved the bottleneck. Placement also plays a role: place prefill and decode far apart in your network topology and KV-cache transfers will kill your latency. Standard autoscaling treats these as independent components. They're not.
In this talk, we'll share what we've learned running disaggregated vLLM and SGLang deployments on Kubernetes: what broke, what worked, and how we're improving performance. We'll evaluate approaches from standard deployments to specialized APIs like LWS and Grove, and discuss how these integrate with frameworks like llm-d and Dynamo.
Speakers
Ekin Karabulut

AI/ML Developer Advocate, NVIDIA
Ekin is a Developer Advocate at NVIDIA, following the acquisition of Run:ai. Prior to that, she specialized in the privacy implications of federated learning systems with DNNs in distributed environments as a data scientist. Currently, she is exploring the efficient usage of large... Read More →
Ron Kahn

Senior Software Engineer, NVIDIA
Ron Kahn is a Senior Software Engineer in the NVIDIA Run:ai platform team. Ron works on the design and implementation of workload management systems that abstract Kubernetes complexity for AI practitioners. When not simplifying AI training jobs, Ron can be found cooking something... Read More →
Wednesday April 8, 2026 15:25 - 15:50 CEST
Central Room
  Inference & Production
  • Audience Level Any
  • Slides Attached Yes

15:55 CEST

Lightning Talk: Why Logging Isn’t Enough: Making PyTorch Training Regressions Visible in Practice - Sahana Venkatesh, Wayve
Wednesday April 8, 2026 15:55 - 16:05 CEST
PyTorch teams often log rich training metrics, yet still discover training regressions late after significant developer time and GPU budget have already been spent. In this talk, I’ll share a practical pattern we used to turn PyTorch training metrics into an operational guardrail for large-model training.

The approach combines scheduled short and long training runs, standardized performance and stability metrics (throughput, memory, loss, divergence), and simple statistical baselines to automatically surface regressions via alerts without hard gates or complex infrastructure.

I’ll focus on why logging alone is insufficient, how we chose what to monitor, and what tradeoffs we encountered (false positives, alert fatigue, baseline drift). The goal is not a tool demo, but a reusable pattern other PyTorch teams can adapt to catch training regressions earlier and make retraining more predictable.
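A minimal version of the "simple statistical baselines" idea, with invented numbers (not the speaker's thresholds):

```python
from statistics import mean, stdev

def regression_alert(history, latest, k=3.0):
    """Flag a run whose metric falls more than k standard deviations
    below the rolling baseline. Written for throughput-style metrics
    (lower is worse); flip the comparison for loss-style metrics."""
    mu, sigma = mean(history), stdev(history)
    return latest < mu - k * sigma

baseline = [1000, 1010, 995, 1005, 990, 1002]  # tokens/sec over recent runs
healthy = regression_alert(baseline, 998)      # within normal jitter
regressed = regression_alert(baseline, 900)    # a real throughput drop
```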
Speakers
Sahana Venkatesh

Software engineer, Wayve
Wednesday April 8, 2026 15:55 - 16:05 CEST
Central Room
  Training Systems

15:55 CEST

From Gradients To Governance: Making PyTorch Lineage-Aware - Kateryna Romashko & Clodagh Walsh, Red Hat
Wednesday April 8, 2026 15:55 - 16:20 CEST
PyTorch was built to track how models learn, but not whether they should have. As AI systems increasingly operate on regulated, jurisdiction-bound, and sovereign data, lineage and policy can no longer live outside the runtime. This talk explores data sovereignty as a first-class constraint and argues that lineage is the missing primitive in modern ML frameworks. Building on PyTorch’s dynamic graphs and autograd system, we outline how tensors could carry origin, consent, and policy metadata through training and inference. The goal is not compliance tooling, but a lineage-aware PyTorch that enables trustworthy, auditable, and deployable AI across edge, federated, and European AI ecosystems.
Speakers
Kateryna Romashko

Associate Software Engineer, Red Hat
Kateryna Romashko is a Software Engineer and a Master’s student in Computer Science, currently working in the Emerging Technology team at Red Hat. Her work focuses on ML systems, data lineage, and event-driven architectures, with hands-on experience across ML platforms, distributed... Read More →
Clodagh Walsh

Software Engineer, Red Hat
Clodagh is a software engineer at Red Hat working on the Emerging Technologies team under the office of the CTO. She has experience working with cloud native technologies. She is currently working on a range of AI related projects focused on topics such as MLOps and dLLMs.
Wednesday April 8, 2026 15:55 - 16:20 CEST
Master Stage
  Responsible AI & Compliance

15:55 CEST

DualPipe from Scratch: Implementing DeepSeek's 5D Parallelism in PyTorch - Dev Jadhav, ING Bank
Wednesday April 8, 2026 15:55 - 16:20 CEST
The DeepSeek-V3 paper describes 5D parallelism and DualPipe at a high level, but leaves critical implementation details undocumented. This session presents our open-source PyTorch reference implementation that fills those gaps - verified against the original architecture and designed for learning and extension.

We'll share what we discovered building it from scratch:

- Why K_pe is shared across heads in decoupled RoPE (not explicit in the paper)
- The critical timing of bias updates in auxiliary-loss-free load balancing
- How sigmoid routing separates selection scores from gate values
- The warmup formula that lets DualPipe achieve 3% bubble overhead
- Bugs we caught: causal mask position offsets, EMA initialization, capacity dropping priority

What you'll learn:

- 5D Parallelism: How TP, PP, DP, EP, and SP interact at 2,048+ GPU scale
- DualPipe: Building the bidirectional scheduler with 55% throughput gain over GPipe
- Hierarchical All-to-All: Two-level communication reducing MoE dispatch overhead by 4x
- Teachable abstractions: CapacityMetrics, ExpertSpecializationTracker, ScheduleStep enums

Prerequisites: torch.distributed basics.
Code: github.com/DevJadhav/deepseek-from-scratch
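For intuition on the bubble numbers mentioned above, the standard synchronous-pipeline bubble formula (a sketch; DualPipe's exact accounting is in the paper and repo):

```python
def bubble_fraction(p, m):
    """Idle fraction of a synchronous pipeline (GPipe/1F1B style):
    p stages and m microbatches give bubble = (p - 1) / (m + p - 1)."""
    return (p - 1) / (m + p - 1)

gpipe = bubble_fraction(p=16, m=64)  # roughly 19% of the schedule is idle
# DualPipe fills the pipeline from both ends, overlapping forward and
# backward; roughly halving the (p - 1) term sketches the effect
# (illustrative, not the paper's exact formula):
dual = bubble_fraction(p=16 // 2, m=64)
```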
Speakers
Dev Jadhav

Tech Lead ML Engineer, ING Bank
Dev Jadhav is a production AI/ML engineer with 10+ years building AI systems at scale. He currently leads ML engineering at Major Bank, developing financial-grade AI and large-scale model operations. Dev is the creator of DeepSeek From Scratch, an open-source implementation of DeepSe... Read More →
Wednesday April 8, 2026 15:55 - 16:20 CEST
Founders Cafe
  Training Systems

16:10 CEST

Lightning Talk: Ball Tracking and Detection in Soccer Videos - Comparison of VLMs and Traditional Pipelines - Maciej Szymkowski, Future Processing
Wednesday April 8, 2026 16:10 - 16:20 CEST
Nowadays, Vision-Language Models (VLMs) have plenty of different applications. However, we cannot be sure they are the most accurate and precise solution for every problem; we must compare them against other pipelines. In this presentation, we compare on-premise models (Qwen 3 and InternVL-3.5) and cloud-based solutions (Gemini 3, GPT-5) with a traditional pipeline based on YOLOv11 and image processing techniques. The battlefield is ball detection and tracking in soccer match recordings from the SoccerNet database, captured from different angles, under diverse lighting (e.g., sunny, night) and weather conditions (e.g., snowy or rainy days). We used both broadcast videos and action and replay images, all marked manually to prepare a ground-truth database. The models must not only recognize the ball but also track it through the whole sequence of images. To give all approaches an equal chance, we fine-tuned YOLOv11 and provided additional knowledge to the VLMs via a RAG pipeline. The comparison uses traditional machine learning metrics such as accuracy, precision, and recall.
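For readers unfamiliar with how such detection comparisons are scored, a minimal IoU-based precision/recall computation (a generic sketch, not the authors' evaluation code):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def precision_recall(preds, gts, thresh=0.5):
    """Greedy one-to-one matching: a prediction is a true positive if it
    overlaps a not-yet-matched ground-truth box at IoU >= thresh."""
    matched, tp = set(), 0
    for p in preds:
        for i, g in enumerate(gts):
            if i not in matched and iou(p, g) >= thresh:
                matched.add(i)
                tp += 1
                break
    prec = tp / len(preds) if preds else 0.0
    rec = tp / len(gts) if gts else 0.0
    return prec, rec

# One good detection, one false positive, one missed ball.
prec, rec = precision_recall(
    preds=[(10, 10, 20, 20), (50, 50, 60, 60)],
    gts=[(11, 11, 21, 21), (80, 80, 90, 90)])
```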
Speakers
Maciej Szymkowski

AI Researcher and Senior Machine Learning Engineer, Future Processing
Maciej Szymkowski, PhD, is a Senior ML Engineer at Future Processing. Formerly Head of AI at Łukasiewicz PIT, his academic background spans BUT, WUT, and AGH. With 45+ publications, he specializes in Computer Vision (med/transport/sport), VLMs, and LLMs. His industry experience includes... Read More →
Wednesday April 8, 2026 16:10 - 16:20 CEST
Central Room
  Applications & Case Studies

16:25 CEST

Lightning Talk: Bridging the Gap: Engineering Compliant "Glass Box" Medical AI With PyTorch - Muhammad Saqib Hussain, Neurosonic & Mohaddisa Maryam, Neurosonic Academy
Wednesday April 8, 2026 16:25 - 16:35 CEST
While state-of-the-art models like NeuroBOLT demonstrate mathematical excellence in EEG-to-fMRI synthesis, they often remain clinically opaque. With the EU AI Act classifying medical AI as "high-risk," hospitals cannot deploy "black boxes"; they require systems that are transparent, auditable, and legally compliant.
This session presents a "Clinical Auditing System" built within the PyTorch ecosystem, designed to transform opaque deep learning models into transparent "Glass Boxes." I will demonstrate a workflow that backpropagates gradients from high-dimensional 4D fMRI volumes to identify the specific EEG spectral signatures driving those predictions.
Key Technical Takeaways:
1. The Audit Layer: Implementing IntegratedGradients (Captum) to verify model fidelity, ensuring predictions stem from valid neural oscillations rather than noise artifacts.
2. Cross-Modal Reasoning: A technical demonstration of mapping 4D volumetric outputs back to 1D EEG frequency bands, enabling the model to "reason" through neurovascular coupling.
This presentation is designed for developers seeking to wrap PyTorch models in safety layers that satisfy the demands of healthcare regulation.
Speakers
Mohaddisa Maryam

Miss, Neurosonic Academy
I am a first-year medical student in Italy.
Muhammad Saqib Hussain

Medical Student, AI Researcher and Neurotech Founder, ClinExplain
Muhammad Saqib is a 4th-year medical student at Comenius University Bratislava and Founder of Neurosonic Academy. His M.D. thesis explores AI for Sleep Medicine. Leveraging PyTorch and Captum, he builds "Glass Box" auditing frameworks to validate generative neuroimaging models against... Read More →
Wednesday April 8, 2026 16:25 - 16:35 CEST
Founders Cafe
  Applications & Case Studies
 