7-8 April, 2026
Paris, France
Note: The schedule is subject to change.

The Sched app allows you to build your schedule but is not a substitute for your event registration. You must be registered for PyTorch Conference Europe 2026 to participate in the sessions. If you have not registered but would like to join us, please go to the event registration page to purchase a registration.

This schedule is displayed in CEST (UTC/GMT +2).
Wednesday, April 8
 

10:35 CEST

Lightning Talk: Live Migration of PyTorch GPU Nodes From Azure To European Clouds - Mike Krom, ACF Cybersolutions
Wednesday April 8, 2026 10:35 - 10:45 CEST
Many European PyTorch teams run their GPU workloads on hyperscalers like Azure, AWS, or GCP—often without realizing that this places their data and models under US jurisdiction.

This lightning talk shows how PyTorch compute nodes can be migrated to European cloud providers while keeping the full ML environment intact. Through a live demo, we migrate a GPU-enabled PyTorch VM—including CUDA drivers and Jupyter notebooks—from Azure to European infrastructure, without retraining models or rebuilding environments.

The focus is on practical challenges: GPU compatibility, reproducibility, and data movement across clouds.

The migration is demonstrated using DigitalNomadSky, an open-source Python platform for cross-cloud VM migration, but the lessons apply broadly to PyTorch teams aiming to reduce jurisdictional risk and vendor lock-in.

Key takeaways
Why PyTorch workloads on hyperscalers raise sovereignty concerns for EU teams
What actually breaks (and what doesn’t) when migrating GPU-based ML nodes
How to regain control over ML infrastructure without rewriting your stack
Speakers
Mike Krom

Partner, ACF Cybersolutions
I am a software architect and lead developer of the open-source project DigitalNomadSky. I have extensive experience with Microsoft Azure from working at Microsoft and supporting large-scale cloud migrations. My work focuses on supporting data science and ML teams with cloud infrastructure…
Wednesday April 8, 2026 10:35 - 10:45 CEST
Central Room
  Security & Privacy

10:35 CEST

Beyond JSON-RPC: Scaling Model Context Protocols With gRPC in the PyTorch Ecosystem - Ashesh Vidyut & Madhav Bissa, Google
Wednesday April 8, 2026 10:35 - 11:00 CEST
Right now, MCP mostly relies on HTTP and STDIO. That works for simple scripts, but if you’re running high-performance PyTorch models in production, you’re going to hit a wall. When you’re moving large context windows or tensor metadata, the overhead of JSON-RPC starts to hurt.
We’re introducing SEP-1352, which adds gRPC as a native transport for MCP. Since gRPC is already the standard for microservices, it’s a natural fit for the PyTorch ecosystem. By using Protobuf instead of JSON, we get much higher throughput and lower latency—essentially making the communication between models and tools as fast as the models themselves.
In this session, we’ll cover:
Why Protobuf matters: Moving away from bulky JSON to keep bandwidth low and speed high.
Built-in Streaming: How to use gRPC’s streaming to handle long-running model outputs without timeouts.
Production-ready features: Using the same auth, load balancing, and service mesh (mTLS) you already use for your ML microservices.
Upgrading your stack: How to move from PyTorch MCP HTTP services to MCP gRPC services without throwing away your existing infra.
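The bandwidth argument is easy to demo in isolation. Below is a minimal, illustrative micro-benchmark (not SEP-1352 itself): fixed-width binary packing stands in for Protobuf so the sketch runs without generated gRPC stubs.

```python
# Illustrative only: why binary framing beats JSON-RPC for large payloads.
# Protobuf plays the binary role in SEP-1352; struct is a stand-in here so
# the sketch runs without generated stubs.
import json
import struct
import timeit

token_ids = list(range(100_000))  # a large "context window" of token ids
payload = {"method": "tools/call", "params": {"tokens": token_ids}}

def encode_json():
    return json.dumps(payload).encode()

def encode_binary():
    # Fixed-width packing, similar in spirit to a repeated int32 proto field.
    return struct.pack(f"{len(token_ids)}i", *token_ids)

print(f"JSON:   {len(encode_json()):,} bytes, "
      f"{timeit.timeit(encode_json, number=10):.2f}s per 10 encodes")
print(f"binary: {len(encode_binary()):,} bytes, "
      f"{timeit.timeit(encode_binary, number=10):.2f}s per 10 encodes")
```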
Speakers
Ashesh Vidyut

Senior Software Engineer, Google

Madhav Bissa

Senior Software Engineer, Google
Member, grpc-go
Wednesday April 8, 2026 10:35 - 11:00 CEST
Junior Stage
  Agents & Interop

10:50 CEST

Lightning Talk: Achieving SOTA GEMM Performance: A CuTeDSL Backend for PyTorch Inductor - Nikhil Patel, Meta
Wednesday April 8, 2026 10:50 - 11:00 CEST
Matrix multiplication is a central compute primitive in modern deep learning, but achieving SOTA performance on novel architectures like NVIDIA Blackwell has become a bottleneck. Existing Triton-based kernels in torch.compile struggle to keep pace with rapid hardware evolution, often forcing users to hand-write custom, architecture-specific kernels - a growing gap as hardware feature velocity accelerates.

We present a new CuTeDSL GEMM backend in PyTorch Inductor that integrates NVIDIA’s kernel implementations directly into torch.compile. Built using the Cutlass API for kernel discovery, this backend allows PyTorch to expose first-class support for NVIDIA-authored GEMMs and automatically leverage new architectural features as NVIDIA updates their kernels.

The backend currently supports standard GEMM, grouped GEMM, and block-scaled MXFP8 GEMM, along with pointwise epilogue fusions (with reductions forthcoming). We present early end-to-end results from vLLM inference and TorchTitan training, demonstrating how this approach enables PyTorch to achieve high-performance GEMMs on Blackwell and beyond, while eliminating the need for users or developers to maintain handwritten kernels.
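For context, Inductor already exposes a knob for choosing which GEMM backends participate in max-autotune; a minimal sketch follows. The long-standing backend tokens are "ATEN,TRITON,CUTLASS"; whether and how a CuTeDSL token appears is an assumption based on this talk, so check torch._inductor.config in your build.

```python
# Hedged sketch: selecting autotuned GEMM backends for torch.compile.
import torch
import torch._inductor.config as inductor_config

inductor_config.max_autotune_gemm_backends = "ATEN,TRITON,CUTLASS"

@torch.compile(mode="max-autotune")
def gemm(a, b):
    return a @ b

if torch.cuda.is_available():
    a = torch.randn(4096, 4096, device="cuda", dtype=torch.bfloat16)
    b = torch.randn(4096, 4096, device="cuda", dtype=torch.bfloat16)
    out = gemm(a, b)  # Inductor benchmarks each candidate and picks the fastest
```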
Speakers
Nikhil Patel

Software Engineer, Meta
Nikhil is a software engineer on the PyTorch Inductor team at Meta Superintelligence Labs, where he works on Inductor’s CuTeDSL GEMM backend. His work sits at the boundary between compiler code generation and hardware-native GPU features, optimizing large-scale training and inference…
Wednesday April 8, 2026 10:50 - 11:00 CEST
Master Stage
  Frameworks & Compilers

11:05 CEST

Lightning Talk: Accelerating PyTorch Models With torch.compile's C++ Wrapper Mode - Bin Bao, Meta
Wednesday April 8, 2026 11:05 - 11:15 CEST
This lightning talk introduces torch.compile's C++ wrapper mode, a powerful feature that reduces CPU overhead and significantly improves model performance. As modern GPUs become increasingly powerful and compiler optimizations make GPU kernels run faster, CPU overhead has become more visible as the bottleneck. By generating optimized C++ code instead of Python, cpp-wrapper mode directly tackles this challenge.

While CUDAGraphs can also reduce CPU overhead, it is not always applicable—especially with highly dynamic input shapes. In these scenarios, cpp-wrapper mode provides a robust alternative with significant performance gains. Benchmark results from the OSS Huggingface suite demonstrate that cpp-wrapper mode delivers a 39% speedup over default torch.compile.

Attendees will learn when and how to leverage cpp-wrapper mode to overcome CPU-bound limitations and understand how this feature fits into PyTorch's performance optimization landscape, enabling them to build faster machine learning applications.
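For orientation, enabling the mode is a one-line change. A minimal sketch, assuming a recent PyTorch where the cpp_wrapper Inductor option is exposed through torch.compile:

```python
# Minimal sketch: the cpp_wrapper option asks Inductor to emit C++ host
# code instead of the default Python wrapper, cutting per-call CPU overhead.
import torch

model = torch.nn.Sequential(
    torch.nn.Linear(1024, 1024), torch.nn.ReLU(), torch.nn.Linear(1024, 8)
)
compiled = torch.compile(model, options={"cpp_wrapper": True})
out = compiled(torch.randn(32, 1024))  # kernel launches now driven from C++
```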
Speakers
Bin Bao

Software Engineer, Meta
Bin Bao is a software engineer working with the PyTorch Compiler team at Meta. He focuses on developing TorchInductor optimizations and AOTInductor for C++ deployment.
Wednesday April 8, 2026 11:05 - 11:15 CEST
Junior Stage
  Frameworks & Compilers

11:20 CEST

Lightning Talk: Not All Tokens Are Equal: Semantic KV-Cache for Agentic LLM Serving - Maroon Ayoub, IBM Research & Hyunkyun Moon, Moreh
Wednesday April 8, 2026 11:20 - 11:30 CEST
Agentic AI workloads - tree-of-thought exploration, ReAct loops, hierarchical swarms - expose a fundamental mismatch in how we serve PyTorch models. Today's inference stacks treat the KV-cache as a flat, anonymous tensor buffer with blind LRU eviction. This ignores the structural reality of agents: system prompts are durable, tool definitions are shared, and reasoning scratchpads are ephemeral. We are currently evicting high-value state to preserve throwaway tokens.

In this talk, we present Semantic KV-Cache, an architectural evolution for llm-d and vLLM that replaces anonymous blocks with Typed State.

We demonstrate a runtime that tags blocks as SystemPrompt, ToolDefinition, or ReasoningBranch, applying differentiated policies to each: pinning foundational context, replicating shared tools, and eagerly evicting completed thoughts. We show how this "lifecycle-aware" caching reduces recomputation and minimizes the "Agentic Tax" - evolving the PyTorch serving stack from request-centric to workload-aware.
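To make the policy concrete, here is an illustrative-only sketch of typed-state eviction; the names are hypothetical and not the llm-d or vLLM API. It shows the core inversion: the victim is chosen by block type first, recency second.

```python
from enum import Enum

class BlockType(Enum):
    SYSTEM_PROMPT = "system_prompt"        # pinned: foundational context
    TOOL_DEFINITION = "tool_definition"    # shared across requests
    REASONING_BRANCH = "reasoning_branch"  # ephemeral scratchpad

def pick_victim(blocks):
    """Evict completed reasoning branches first, shared tools last,
    and never the pinned system prompt. Ties break by least recent use."""
    order = {BlockType.REASONING_BRANCH: 0, BlockType.TOOL_DEFINITION: 1}
    candidates = [b for b in blocks if b["type"] is not BlockType.SYSTEM_PROMPT]
    return min(candidates, key=lambda b: (order[b["type"]], b["last_used"]))

blocks = [
    {"id": 0, "type": BlockType.SYSTEM_PROMPT, "last_used": 0},
    {"id": 1, "type": BlockType.TOOL_DEFINITION, "last_used": 5},
    {"id": 2, "type": BlockType.REASONING_BRANCH, "last_used": 9},
]
# Blind LRU would evict block 0 or 1; typed eviction picks the scratchpad.
print(pick_victim(blocks)["id"])  # 2
```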
Speakers
Maroon Ayoub

Research Scientist & Architect, IBM Research
Maroon Ayoub is a systems engineer at IBM Research focused on distributed AI infrastructure. He co-leads development of llm-d and specializes in scaling LLM inference with Kubernetes-native architectures, performance efficiency, and open source integrations.
Hyunkyun Moon

MLOps Engineer, Moreh
Hyunkyun Moon is an ML Platform Engineer at Moreh, focusing on building high-performance LLM inference platforms with llm-d. He is an active contributor to open-source projects, including llm-d and vLLM. With a strong background in large-scale Kubernetes-native infrastructure, he…
Wednesday April 8, 2026 11:20 - 11:30 CEST
Central Room

11:35 CEST

Lightning Talk: Enabling the Audio Modality for Language Models - Eustache Le Bihan, Hugging Face
Wednesday April 8, 2026 11:35 - 11:45 CEST
As the maintainer of everything audio in the `transformers` library, I'll share how audio is being integrated into large language models, grounded in what we observe from the open-source ecosystem.

Beginning with a brief overview of the current landscape of audio LMs, I'll highlight emerging trends in how audio is incorporated into pretrained text backbones. In particular, we examine the growing convergence of architectural choices, many inspired by VLMs, as well as newer concepts such as audio tokenization and streaming.

The core of the talk focuses on providing the audience with key technical insights: audio encoders vs audio tokenizers, their respective advantages and limitations. It covers the motivations behind introducing concepts such as audio tokenizers and audio processors into transformers, shows how these design choices are reflected in the library, and explains how PyTorch tooling is leveraged to make audio a standardized modality for the open-source community.
Speakers
Eustache Le Bihan

MLE, Hugging Face
A 2024 MVA graduate, I now work on open-source audio at Hugging Face. My current focus is on standardising audio in the transformers library and strengthening support across models.
Wednesday April 8, 2026 11:35 - 11:45 CEST
Founders Cafe

11:35 CEST

Accelerating Complex-Valued Tensors With torch.compile - Hameer Abbasi, OpenTeams Inc.
Wednesday April 8, 2026 11:35 - 12:00 CEST
torch.compile has been invaluable in accelerating many machine learning and scientific computing workflows. It has become a one-shot way to get free performance for many kinds of programs and models.

However, it comes with its own set of limitations. One of these limitations is that, for a long time, torch.compile didn't accept complex-valued tensors. These tensors have many uses, from quantum mechanics to simplifying the physics for world models. Support for such tensors would accelerate many of these workflows.

In this talk, we will take a journey through the current progress on supporting such tensors in torch.compile: the challenges encountered and what we hope to achieve, including some side benefits, such as reducing binary size by JIT-compiling kernels on demand.
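A minimal smoke test of the feature under discussion, assuming a PyTorch build where complex support in torch.compile has landed (older versions may graph-break or error):

```python
import torch

@torch.compile
def schroedinger_step(psi, hamiltonian, dt):
    # One explicit Euler step of i * dpsi/dt = H @ psi.
    return psi - 1j * dt * (hamiltonian @ psi)

psi = torch.randn(64, dtype=torch.complex64)
h = torch.randn(64, 64, dtype=torch.complex64)
out = schroedinger_step(psi, h, 1e-3)  # compiles end-to-end when supported
```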
Speakers
Hameer Abbasi

Senior Software Engineer I, OpenTeams, Inc.
Hameer Abbasi is a Senior Software Developer at OpenTeams, Inc. As part of his day job and also as a hobby, he has contributed to various projects in the scientific computing space, including NumPy, SciPy and PyTorch. He is also the lead maintainer of PyData/Sparse, a library for…
Wednesday April 8, 2026 11:35 - 12:00 CEST
Junior Stage
  Frameworks & Compilers

11:35 CEST

Portable High‑Performance LLM Serving: A Triton Backend for vLLM - Burkhard Ringlein, IBM Research & Jan van Lunteren, IBM
Wednesday April 8, 2026 11:35 - 12:00 CEST
Today, vLLM is the de facto industry standard for serving Large Language Models and is widely adopted in production.

However, for much of its history, vLLM’s state-of-the-art performance depended largely on hand-written CUDA or HIP kernels. These kernels have typically been carefully optimized for a specific GPU platform and may pose a serious obstacle to the portability of vLLM across different hardware.

Leveraging Triton, we introduced a “Triton attention backend” to vLLM that produces highly competitive performance across GPU platforms with a single code base, without involving hand-written CUDA or HIP kernels. The Triton attention backend became the default for AMD GPUs and is used in scenarios where other attention backends have missing features. Additionally, this backend automatically selects appropriate specialized kernels based on model type or request length.

In this talk, we will present our recent advances that consistently deliver high performance on both NVIDIA and AMD GPUs with a single Triton-only code-base. We will present the engineering and science behind this Triton-only backend, including system aspects, kernel improvements, and launch grid optimizations.
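For readers who want to try the backend, vLLM selects attention implementations via the VLLM_ATTENTION_BACKEND environment variable; the exact backend token varies between releases, so treat the value below as an assumption to check against your version.

```python
import os
# Set before importing vllm; the backend token name varies by release.
os.environ["VLLM_ATTENTION_BACKEND"] = "TRITON_ATTN_VLLM_V1"

from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")  # any supported model
print(llm.generate(["Portability test:"], SamplingParams(max_tokens=16)))
```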
Speakers
Jan van Lunteren

Senior Research Scientist, IBM Research
Jan van Lunteren is a Senior Research Scientist at IBM Research Zurich holding MSc and PhD degrees in Electrical Engineering. His research has covered a broad range of topics, including high‑speed networking, near‑memory computing, and high‑performance machine‑learning inference…
Burkhard Ringlein

Research Staff Member, IBM Research
Dr. Burkhard Ringlein is a Research Staff Member in the AI Platform team of IBM Research, based in Zurich. He is an accomplished AI systems researcher and designs, builds, debugs, and optimizes practical systems for low-latency, high-throughput machine learning applications. Currently…
Wednesday April 8, 2026 11:35 - 12:00 CEST
Master Stage

13:30 CEST

Lightning Talk: From Hugging Face To Handheld: Scaling LLM Deployment With LiteRT Generative API - Cormac Brick & Weiyi Wang, Google
Wednesday April 8, 2026 13:30 - 13:40 CEST
This session will demonstrate the end-to-end journey of bringing custom PyTorch-based open-source LLMs to cross-platform devices using LiteRT. We will show developers how to take a custom Hugging Face Transformers checkpoint and convert it for on-device execution, including:
-Taking the PyTorch model from conversion to deployment.
-Automated Optimization: How LiteRT performs automated patching of performance-critical components, including architecture-specific rewrites for PyTorch models.
-Seamless Fine-Tuning Integration: How to move from an Unsloth fine-tuning session to a TorchAO-quantized model and LiteRT export without leaving your script.
-The "0-Day" Enablement Strategy: Well-known architectures are supported out-of-the-box. We’ll share how we enabled the QWEN0.6 (or Liquid AI) model in just 20 minutes.
-Interactive Validation: Run inference on the exported model directly in the Terminal or Colab to verify numerical correctness before deploying to device.
This workflow shows a smooth fine-tune-to-deployment story where everything stays within the original PyTorch/Hugging Face ecosystem. Viewers can "vibe code" along using Gemini CLI or other coding agents.
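As a flavor of the conversion step, here is a minimal sketch with ai_edge_torch, the PyTorch converter underneath LiteRT; the Generative API layers LLM-specific rewrites on top of this flow, and the API surface may shift between releases.

```python
import torch
import ai_edge_torch

# Stand-in for a fine-tuned checkpoint; any traceable nn.Module works.
model = torch.nn.Sequential(torch.nn.Linear(128, 64), torch.nn.ReLU()).eval()
sample_inputs = (torch.randn(1, 128),)

edge_model = ai_edge_torch.convert(model, sample_inputs)
edge_out = edge_model(*sample_inputs)  # runs on the LiteRT interpreter
# Validate numerics on the host before deploying, as the session recommends.
print(torch.allclose(model(*sample_inputs), torch.tensor(edge_out), atol=1e-4))
edge_model.export("model.tflite")
```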
Speakers
Cormac Brick

Principal Engineer, Google AI Edge, Google
Cormac Brick is a Principal Engineer on the Google AI Edge team, where he specializes in frameworks and on-device AI. He has over 10 years' experience in AI software, silicon, and systems, with work spanning AI frameworks and ecosystems and compilers down to silicon microarchitecture…
Weiyi Wang

Software Engineer, Google
Weiyi is a lead software engineer on LiteRT/TFLite, focusing on the compiler, NPU, and GenAI stack.
Wednesday April 8, 2026 13:30 - 13:40 CEST
Central Room

13:30 CEST

Optimizing CPU LLM Inference in PyTorch: Lessons From vLLM - Crefeda Rodrigues, Arm Limited & Fadi Arafeh, Arm
Wednesday April 8, 2026 13:30 - 13:55 CEST
vLLM has emerged as a reference inference stack in the PyTorch ecosystem for high-throughput large language model serving. CPUs continue to play an important role in LLM inference, supporting cost-sensitive deployments, hybrid CPU/GPU serving, and batch or off-peak workloads on general-purpose infrastructure.

In this talk, we examine CPU-based LLM inference through the lens of PyTorch internals, using vLLM as a case study. We describe how vLLM interacts with PyTorch’s operator stack, including tensor layout management, backend dispatch, and threading behaviour, and highlight common sources of overhead, such as repeated weight repacking and suboptimal thread scheduling.

We present runtime and kernel-level optimizations that reduce this overhead, including CPU paged-attention kernel tuning with vectorized softmax, specialized Q–K and P–V GEMM kernels aligned with vLLM’s scheduler, ISA-aware BF16 attention, pre-packed weight layouts for quantized matmul, SIMD vectorization using PyTorch’s at::vec::Vectorized primitives, and NUMA-aware scheduling for scalable parallel inference.

Finally, we conclude with lessons learned from building and upstreaming a high-performance CPU inference engine.
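As a starting point for experimentation, the sketch below shows the kind of knobs involved; the VLLM_CPU_* variables exist in vLLM's CPU backend, but their semantics and defaults depend on the release, so verify them against vllm.envs.

```python
import os
# Must be set before vllm is imported.
os.environ["VLLM_CPU_KVCACHE_SPACE"] = "40"       # GiB reserved for the paged KV cache
os.environ["VLLM_CPU_OMP_THREADS_BIND"] = "0-31"  # pin OpenMP threads to one NUMA node

import torch
torch.set_num_threads(32)  # match the bound cores to avoid oversubscription

from vllm import LLM
llm = LLM(model="facebook/opt-125m", dtype="bfloat16")
```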
Speakers
Crefeda Rodrigues

Staff Software Engineer, Arm
Crefeda Rodrigues is a Staff Software Engineer at Arm, focusing on performance- and scalability-driven machine learning software optimization for Arm server CPUs. She previously worked on large-scale climate and weather model optimization as a postdoctoral researcher at the University…
Fadi Arafeh

Senior Machine Learning Engineer, Arm
Fadi is a Senior Machine Learning Engineer at Arm, working on optimizing PyTorch and vLLM for Arm Infrastructure cores. Prior to that, Fadi obtained a BSc in Artificial Intelligence from the University of Manchester.
Wednesday April 8, 2026 13:30 - 13:55 CEST
Founders Cafe
  Inference & Production

13:45 CEST

Lightning Talk: Slash LLM Cold-Start Times by Pre-distributing GPU Caches - Billy McFall & Maryam Tahhan, Red Hat
Wednesday April 8, 2026 13:45 - 13:55 CEST
Are your Large Language Model (LLM) deployments stuck waiting for GPU kernels to compile? If you are running distributed inference at scale, your infrastructure is likely wasting time rebuilding the same GPU kernel cache for every single instance. You may not even realize how much time and how many resources these rebuilds consume. This session is designed for platform engineers and ML practitioners who need to optimize inference scaling and reduce startup latency.

We will demonstrate how to eliminate redundant compilation by pre-distributing GPU kernel caches to all the inference nodes using KServe, a distributed model inference runtime for Kubernetes. Beyond just the "what," we will dive into the technical implementation of signing, verifying, and mounting cache images to ensure supply-chain security across clusters. Attendees will leave with a practical blueprint for reducing cold-start times and securing GPU-heavy workloads in production.
Speakers
Billy McFall

Sr. Principal Software Engineer, Red Hat
Billy McFall has been a software engineer in the Emerging Tech Networking Team within the Office of the CTO at Red Hat for 9+ years. Billy previously worked on Kubernetes/OpenShift networking, including the integration of the NVIDIA DPU into OpenShift. Billy has also been a maintainer of…
Maryam Tahhan

Principal Engineer, Red Hat
Maryam is a Principal Engineer in Red Hat's Office of the CTO, where she focuses on standardising CPU inferencing performance evaluation to help effectively validate and scale ML workloads.
Wednesday April 8, 2026 13:45 - 13:55 CEST
Central Room
  Inference & Production

14:00 CEST

Lightning Talk: Pluggable PyTorch LLM Inference Architecture With vLLM and AWS Neuron Backends - Yahav Biran, Annapurna Labs & Maen Suleiman, Amazon
Wednesday April 8, 2026 14:00 - 14:10 CEST
As PyTorch-based LLM serving matures, the challenge shifts from monolithic inference stacks to integrating diverse hardware accelerators efficiently. This session explores how modular plugin architectures enable PyTorch models to run optimally across backends—demonstrating AWS Trainium integration into vLLM through standardized interfaces.

We'll examine how vLLM's Hardware Plugin architecture uses Python's entry_points for automatic platform detection, allowing hardware vendors to extend PyTorch inference without fragmenting the codebase. This delivers automatic device detection, modular feature development, and seamless integration with PyTorch's model loading patterns.

The technical deep dive covers NeuronWorker and NeuronxDistributedModelRunner extending vLLM base classes, NKI kernels for attention and MoE, and continuous batching with prefill/decode separation. We'll demo Hugging Face models loading through standard vLLM APIs and executing on Trainium without hardware-specific code.

Attendees will learn how plugin architectures enable hardware vendors to join the PyTorch inference ecosystem while maintaining standard workflow compatibility.
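For concreteness, vLLM discovers out-of-tree platforms through the vllm.platform_plugins entry-point group; the sketch below uses placeholder package and function names, not the actual Neuron plugin.

```python
# setup.py of a hypothetical out-of-tree hardware plugin.
from setuptools import setup

setup(
    name="vllm-myaccel-plugin",
    version="0.1.0",
    packages=["vllm_myaccel"],
    entry_points={
        "vllm.platform_plugins": [
            # vLLM calls this at startup; it returns the dotted path of the
            # Platform class if the hardware is present, or None to stay dormant.
            "myaccel = vllm_myaccel:register",
        ]
    },
)
```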
Speakers
Maen Suleiman

Product Manager, Amazon
Yahav Biran

Principal Architect, Amazon
Yahav Biran is a Principal Architect at AWS, focusing on large-scale AI workloads. He contributes to open-source projects and publishes in AWS blogs and academic journals, including the AWS compute and AI blogs and the Journal of Systems Engineering. He frequently delivers technical…
Wednesday April 8, 2026 14:00 - 14:10 CEST
Junior Stage

14:00 CEST

Lightning Talk: Backpropagation-Free Optimization in PyTorch - Andrii Krutsylo, Polish Academy of Sciences
Wednesday April 8, 2026 14:00 - 14:10 CEST
Backpropagation is not the only mechanism for training deep networks. This talk presents a compact, implementation-driven map of backpropagation-free training methods, organized around representative algorithms that expose key design trade-offs.

We focus on four families: Difference Target Propagation (target-based credit assignment), Direct Feedback Alignment (random feedback without weight transport), local loss / greedy layerwise training (strictly local objectives), and Forward-Forward learning as a forward-only alternative. Each is treated as a minimal working pattern rather than a full system.

For each representative, we answer the same practical questions: what learning signal is propagated, what intermediate state must be stored, how parameters are updated, and what limits scalability on modern accelerators. The emphasis is on PyTorch-level mechanics—explicit update loops, local objectives, and training without autograd—rather than derivations.

The goal is to give practitioners a clear mental model of the backprop-free design space and concrete patterns for experimenting with these methods in real PyTorch training pipelines.
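As one concrete instance of these patterns, here is a minimal Direct Feedback Alignment loop: the output error reaches the hidden layer through a fixed random matrix rather than the transposed forward weights, so no backward pass through the stack (and no autograd) is needed.

```python
import torch

torch.manual_seed(0)
d_in, d_h, d_out, lr = 784, 256, 10, 0.1
W1 = torch.randn(d_in, d_h) * 0.01
W2 = torch.randn(d_h, d_out) * 0.01
B1 = torch.randn(d_out, d_h)  # fixed random feedback; never trained

x = torch.randn(32, d_in)
y = torch.nn.functional.one_hot(torch.randint(0, d_out, (32,)), d_out).float()

for _ in range(100):
    h = torch.tanh(x @ W1)            # forward pass only
    y_hat = torch.softmax(h @ W2, dim=-1)
    e = y_hat - y                     # global output error
    delta_h = (e @ B1) * (1 - h**2)   # error via random feedback, times tanh'
    W2 -= lr * h.T @ e / len(x)       # purely local weight updates
    W1 -= lr * x.T @ delta_h / len(x)
```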
Speakers
Andrii Krutsylo

PhD Candidate, Institute of Computer Science, Polish Academy of Sciences
Andrii Krutsylo is a deep learning researcher focusing on continual learning and optimization dynamics. His work studies experience replay, gradient-free and local learning rules, and structured optimization for adaptive, resource-efficient systems.
Wednesday April 8, 2026 14:00 - 14:10 CEST
Central Room

14:00 CEST

Lightning Talk: Debugging the Undebuggable: Introducing Torch.distributed.debug - Tristan Rice, Meta, PyTorch
Wednesday April 8, 2026 14:00 - 14:10 CEST
Distributed training in PyTorch enables unprecedented scale, but it also introduces notoriously difficult debugging challenges. When a job with thousands of ranks hangs or slows down, identifying the root cause can feel like searching for a needle in a haystack. This lightning talk introduces the new PyTorch Distributed Debug Server, a powerful, interactive tool designed to bring clarity and control to the chaos of distributed debugging. We will provide a high-level overview of its architecture and core features, demonstrating how it provides a unified interface to inspect stack traces, analyze performance, and diagnose hangs across all workers simultaneously. Attendees will learn how this extensible server can dramatically reduce debugging time and improve the reliability of large-scale training jobs.
Speakers
Tristan Rice

Software Engineer, PyTorch Distributed, Meta
Software engineer working on PyTorch Distributed and large scale training.
Wednesday April 8, 2026 14:00 - 14:10 CEST
Founders Cafe

14:15 CEST

Lightning Talk: Distributed AI Without the Infrastructure Tax - Yahav Biran, Annapurna Labs & Maen Suleiman, Amazon
Wednesday April 8, 2026 14:15 - 14:25 CEST
Running distributed AI workloads in production requires solving three problems: package compatibility, hardware abstraction, and network configuration. AWS Neuron Deep Learning Containers (DLCs) address all three by providing open-source, production-ready images for Trainium and Inferentia.
This lightning talk shows how DLCs eliminate common failure modes. We'll cover three layers: First, how DLCs solve dependency hell by versioning PyTorch, Neuron SDK, XLA backend, and PyTorch PrivateUse1 dispatcher together as a tested contract. Second, how Dynamic Resource Allocation (DRA) in Kubernetes abstracts hardware complexity—enabling Neuron core slicing, multi-tenant workloads, and topology-aware scheduling without manual device mapping. Third, how pre-configured EFA driver settings ensure zero-copy data movement, avoiding silent performance degradation that can cost 10x in throughput.
We'll demonstrate scaling from laptop to 32-node cluster using the same container image and simple Kubernetes manifests.
Attendees will learn how to eliminate weeks of setup time, achieve 65-80% cluster utilization, and deploy workloads confidently. We'll share the GitHub repository and extension patterns.
Speakers
Maen Suleiman

Product Manager, Amazon
Yahav Biran

Principal Architect, Amazon
Yahav Biran is a Principal Architect at AWS, focusing on large-scale AI workloads. He contributes to open-source projects and publishes in AWS blogs and academic journals, including the AWS compute and AI blogs and the Journal of Systems Engineering. He frequently delivers technical…
Wednesday April 8, 2026 14:15 - 14:25 CEST
Junior Stage

14:15 CEST

Lightning Talk: Scaling Recommendation Systems To 2K GPUs and Beyond - Zain Huda, Meta
Wednesday April 8, 2026 14:15 - 14:25 CEST
TL;DR: In this session, we go over one of the key technologies behind Ads model scaling at Meta: 2D sparse parallelism, which scales sparse recommendation embedding tables beyond 1k GPUs to 8k GPUs, enabling the largest Ads model training runs in production at Meta.

Scaling Laws have dominated LLMs and shown the industry we can achieve better model performance through scaling. The same scaling law can be applied to recommendation systems. However, the path to scaling recommender systems is not the same. The leap from hundreds to thousands of GPUs introduces complex technical challenges, particularly around handling sparse operations in recommendation models.

In this talk, we will detail the development of 2D sparse parallelism, tracing its path from research to production to address sparse scaling challenges. We will demonstrate how we optimize these systems to push performance boundaries, increasing speed and reducing memory at scale. Participants will walk away with lessons learned from designing 1,000+ GPU scale systems, and a deeper understanding of how to implement these solutions efficiently in production.
Speakers
Zain Huda

Software Engineer, Meta
Zain works on large-scale training systems for recommender systems at Meta, including TorchRec, a library for distributed parallelism for sparse recommender models. He is also one of the authors of 2D sparse parallelism.
Wednesday April 8, 2026 14:15 - 14:25 CEST
Founders Cafe

14:30 CEST

Lightning Talk: Torch-Spyre: Compiling To a Multi-core Dataflow Accelerator With Inductor - David Grove & Olivier Tardieu, IBM
Wednesday April 8, 2026 14:30 - 14:40 CEST
Torch-Spyre (https://github.com/torch-spyre/torch-spyre) is an open source project that provides a PyTorch PrivateUse1 device with OpenReg, including an Inductor backend, for the IBM Spyre Accelerator. IBM Spyre is a high-performance, energy-efficient AI accelerator featuring 32 AI-optimized compute cores, each with on-chip interconnect and compiler-managed scratchpad memory.

Our goal in this session is to describe how we evolved the Spyre software stack to fully leverage Inductor. This enabled the elimination of a significant fraction of our proprietary compiler code base resulting in improved compilation time and operation coverage without loss of inference performance. We will highlight several technical challenges in compiling for Spyre-like accelerators and describe how we adapted and extended Inductor to tackle them. In particular, we will discuss our extensions to Inductor to support device-specific tiled Tensor memory layouts, and new compiler optimization passes for core-level work division and scratchpad management. We hope to engage the community in evolving the PyTorch ecosystem to more fully support them.
Speakers
Dave Grove

Distinguished Research Scientist, IBM
David Grove is a Distinguished Research Scientist at IBM T.J. Watson, NY, USA. He has been a software systems researcher at IBM since 1998, specializing in programming language implementation and scalable runtime systems. He has authored more than sixty peer-reviewed publications…
Olivier Tardieu

Principal Research Scientist, Manager, IBM
Dr. Olivier Tardieu is a Principal Research Scientist and Manager at IBM T.J. Watson, NY, USA. He joined IBM Research in 2007. His current research focuses on cloud-related technologies, including Serverless Computing and Kubernetes, as well as their application to Machine Learning…
Wednesday April 8, 2026 14:30 - 14:40 CEST
Junior Stage
  Frameworks & Compilers

14:30 CEST

Lightning Talk: Every Millisecond Counts: The Fine-tuning Journey of an Ultra-Efficient PyTorch Model for the Edge - Pavel Macenauer, NXP Semiconductors
Wednesday April 8, 2026 14:30 - 14:40 CEST
From smart cameras that protect privacy by analyzing video on-device, to wearables that interpret voice and motion instantly, to industrial sensors that prevent failures before they happen, edge AI is shaping our everyday routines and transforming our lives.

Eliminating cloud dependency and making connectivity optional are essential to keeping data local. Without the cloud, our options become severely limited by the constraints of the devices, and efficiency drives innovation. Every millisecond and milliwatt can unlock a new use case — or limit one.

This talk will explore optimization techniques for vision, audio, and language models that allow them to run on tiny, resource-constrained devices, fine-tuning them to the limits of latency, accuracy, and power efficiency. We will start with rapid initial simulation, then follow up with silicon-level tuning driven by real device profiling feedback.
Speakers
Pavel Macenauer

AI/ML R&D Software Lead, NXP Semiconductors
A software lead at NXP Semiconductors leading teams developing tools and runtime libraries and enabling AI on edge-class devices. Both professionally and out of human curiosity, Pavel has developed software visualizing the world around us, initially through the lens of a camera, then from…
Wednesday April 8, 2026 14:30 - 14:40 CEST
Central Room
  Inference & Production

14:30 CEST

Seamless Integration: Custom Kernels in the torch.compile Stack Without Graph Breaks - Kshiteej Kalambarkar, Masaki Kozuki & Pawel Gadzinski, NVIDIA
Wednesday April 8, 2026 14:30 - 14:55 CEST
Custom kernels are essential for high-performance PyTorch workflows, but their integration often comes with a hidden cost. While torch.compile promises speedups, calling custom operations typically triggers graph-breaks: fallbacks to Eager mode that introduce overhead and negate your performance gains.

In this session, we provide a practical roadmap for making your extensions "compiler-aware". Using the Transformer Engine project as a case study, we will show how to utilize the custom_op extension point to bridge the gap between high-performance kernels and the torch.compile stack.

What you will learn:
• Identifying the Friction: How to profile and detect graph-breaks caused by custom extensions.
• The Registration Path: A walkthrough of the custom_op registration process for torch.compile.
• Solving the "Hard Parts": Strategies for handling complex Python-side logic that disrupts graph capture.
• Real-World Impact: How these integrations function within the Transformer Engine to maintain peak throughput.

Who should join: This talk is designed for developers building custom PyTorch extensions who want to understand how advanced operations fit into the compiled stack.
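A minimal sketch of the registration path using the public torch.library.custom_op API (the Transformer Engine integration is more involved); the fake kernel is what lets Dynamo trace through the op without falling back to eager.

```python
import torch

@torch.library.custom_op("demo::scaled_silu", mutates_args=())
def scaled_silu(x: torch.Tensor, scale: float) -> torch.Tensor:
    # Stand-in for a hand-written high-performance kernel.
    return torch.nn.functional.silu(x) * scale

@scaled_silu.register_fake
def _(x, scale):
    # Shape/dtype propagation only; runs during tracing instead of the kernel.
    return torch.empty_like(x)

@torch.compile(fullgraph=True)  # fullgraph=True errors out on any graph break
def f(x):
    return scaled_silu(x, 2.0) + 1.0

print(f(torch.randn(8)))
```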
Speakers
Kshiteej Kalambarkar

Software Engineer Frameworks, NVIDIA
Kshiteej Kalambarkar is a software engineer at NVIDIA specializing in PyTorch and compiler technologies, with experience in torch.compile and custom kernel integration.
Masaki Kozuki

Software Engineer, NVIDIA
Masaki Kozuki is working at NVIDIA on PyTorch.
Pawel Gadzinski

Senior Performance Engineer - Deep Learning, NVIDIA
Pawel Gadzinski is a Deep Learning Performance Engineer at NVIDIA, where he works on the Transformer Engine library, enabling state-of-the-art techniques for accelerating transformer models on NVIDIA GPUs, with a focus on low-precision training.
Wednesday April 8, 2026 14:30 - 14:55 CEST
Master Stage

14:30 CEST

From Responses To Trajectories: Multi-Turn and Multi-Environment Reinforcement Learning - Kashif Rasul & Sergio Paniego Blanco, Hugging Face
Wednesday April 8, 2026 14:30 - 14:55 CEST
Post-training of LLMs with reinforcement learning is increasingly moving beyond static prompt–response pairs and preference optimization methods such as DPO, toward trajectory-based optimization. This talk focuses on the latest advances in multi-turn and multi-environment GRPO training, enabling LLMs to learn from interactive, agent-like experiences, including interacting with simulated environments, using tools, or completing multi-step reasoning tasks.

We highlight how TRL, as a PyTorch-native post-training framework, supports these workflows at scale. Multi-turn, multi-environment training can leverage simulated environments (e.g., coding, terminals, browsers) such as OpenEnv, while GRPO can also be applied to datasets for training LLMs on tool use or multi-step reasoning. Attendees will gain insights into design patterns, rollout handling, trajectory batching, and advantage computation, showing how robust, multi-turn, multi-environment post-training can improve alignment, reasoning, and generalization in LLMs for agentic applications.
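For orientation, a compact GRPO setup with TRL is sketched below; the dataset, model, and reward are toy placeholders, and multi-turn or OpenEnv rollouts plug in at the environment level rather than changing this outer loop.

```python
from datasets import Dataset
from trl import GRPOConfig, GRPOTrainer

# Toy prompt-only dataset; GRPO generates groups of completions per prompt.
dataset = Dataset.from_dict({"prompt": ["Write a haiku about tensors."] * 64})

def reward_len(completions, **kwargs):
    # Toy reward: prefer completions close to 120 characters.
    return [-abs(len(c) - 120) / 120 for c in completions]

trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-0.5B-Instruct",
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="grpo-demo", num_generations=4,
                    max_completion_length=64),
    train_dataset=dataset,
)
trainer.train()
```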
Speakers
Kashif Rasul

Research Scientist, Hugging Face
Kashif has a PhD in mathematics from the Freie Universität Berlin. He is passionate about high-performance computing and reinforcement learning, has presented at NVIDIA’s GTC in 2009 and at StrangeLoop in 2012, and is also contributing to a number of data science and deep learning…
Sergio Paniego Blanco

Machine Learning Engineer, Hugging Face
Sergio has an extensive background in open source and artificial intelligence, the field in which he also earned his doctorate. For more than eight years he has taken part in initiatives such as Google Summer of Code, where he has contributed as a developer and mentor. Currently…
Wednesday April 8, 2026 14:30 - 14:55 CEST
Founders Cafe
  Training Systems

14:45 CEST

Lightning Talk: Building a PyTorch‑native vLLM Plugin for IBM Spyre - Thomas Parnell, IBM Research & Thomas Ortner, IBM Research Europe - Zurich
Wednesday April 8, 2026 14:45 - 14:55 CEST
IBM Spyre is an AI accelerator used across IBM Z and Power systems for agentic inference in production. Today, we serve models on Spyre using upstream vLLM together with an out-of-tree platform plugin. While the current plugin delivers crucial functionality for our business, it reuses relatively little of upstream vLLM’s capabilities and carries a high maintenance cost.

In this talk, we will describe our efforts to redesign the Spyre vLLM plugin in a more PyTorch-native fashion. We will describe the architectural evolution of the project and describe how it leverages torch‑spyre, an open‑source extension that enables Spyre support in PyTorch via the PrivateUse1 device interface. We discuss key challenges—such as implementing a custom vLLM attention backend for Spyre—and share lessons learned while aligning vLLM’s execution model with Spyre’s hardware capabilities.

Finally, we will demonstrate a vLLM model running natively on Spyre through the new plugin and highlight areas where the community can work together to improve vLLM’s plugin interface. This talk will be especially relevant for those looking to extend vLLM to a wider variety of accelerators and use cases.
Speakers
Thomas Parnell

Principal Research Scientist, IBM Research
Thomas received his B.Sc. and Ph.D. degrees in mathematics from the University of Warwick. U.K., in 2006 and 2011, respectively. He began his career in the field of EDA, working at Arithmatica and Siglead before joining IBM Research in 2013. During his time at IBM, Thomas has worked…
Thomas Ortner

Research Scientist, IBM Research Europe - Zurich
Thomas Ortner is a Research Scientist at IBM Research Europe, Switzerland, in the group of Emerging Computing and Circuits. He holds a PhD and a MSc in Computer Science, a MSc degree in Technical Physics and a MSc degree in Software Engineering and Management from Graz University…
Wednesday April 8, 2026 14:45 - 14:55 CEST
Junior Stage

15:25 CEST

Lightning Talk: Trinity Large - Torchtitan on 2000+ B300s - Matej Sirovatka, Prime Intellect
Wednesday April 8, 2026 15:25 - 15:35 CEST
In this talk, we'll cover how to use torchtitan to scale training of ultra-sparse mixture-of-experts models across over 2,000 GPUs. We'll walk through the pre-training of Trinity Large, a 400B mixture-of-experts model trained entirely using torchtitan, focusing on maximizing throughput and minimizing the impact of hardware induced failures. Along the way, we'll discuss challenges like fault tolerance, large-scale distributed training, and ensuring determinism - and how we've addressed each of these using torchtitan. Finally, we'll share insights and common pitfalls to avoid in your own large-scale training runs.
Speakers
Matej Sirovatka

Research Engineer, Prime Intellect
Research Engineer at Prime Intellect, mainly focusing on distributed training, performance and scaling.
Wednesday April 8, 2026 15:25 - 15:35 CEST
Founders Cafe
  Training Systems

15:25 CEST

Bridging the Hardware Gap With Code Harnesses on the Hugging Face Kernels Hub - Ben Burtenshaw, Hugging Face
Wednesday April 8, 2026 15:25 - 15:50 CEST
What: We share experiments and tooling to standardise kernel writing for agentic coding.

We present an end-to-end experiment benchmarking 6 harnesses across 10 models on CUDA and Metal kernel writing. We compare agent cost, kernel latency, VRAM usage, and end-to-end inference performance, and show how the Kernels Hub enables distribution at scale.

We demo two tools:

Kernels Hub: Infrastructure for writing, maintaining, and distributing reproducible kernels in the PyTorch ecosystem.

HF Skills: A library for defining and evaluating agent skills for ML tasks like kernel writing.

Why: Beyond agentic hype, kernel writing is a fundamental problem requiring robust evaluation to scale the community. High-performance kernels demand rare expertise in memory coalescing, warp-level primitives, and hardware-specific optimization. In practice, builders optimize for the highest market-share hardware, leaving a massive matrix of model×hardware combinations unserved; for example: edge inference with ExecuTorch, local LLMs on Metal via vLLM, classic ML at scale on Intel. This talk is technical, intended for kernel writers and PyTorch builders who want to use agents robustly.
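For a sense of the consumer side, fetching a compliant prebuilt kernel from the Hub is a few lines with the kernels library; the repo id below is one of the public community examples.

```python
import torch
from kernels import get_kernel

# Downloads a build matched to the local torch/CUDA/Python setup.
activation = get_kernel("kernels-community/activation")

x = torch.randn(64, 128, dtype=torch.float16, device="cuda")
y = torch.empty_like(x)
activation.gelu_fast(y, x)  # out-of-place fast GeLU from the Hub kernel
print(y[0, :4])
```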
Speakers
Ben Burtenshaw

Community, Hugging Face
Ben Burtenshaw is an MLE in the Hugging Face open source community team, specializing in agents, LLMs, and fine-tuning. He leads the development of open-source educational initiatives like the Agents Course, the MCP Course, and the LLM Course, which bridge the gap between complex…
Wednesday April 8, 2026 15:25 - 15:50 CEST
Master Stage

15:40 CEST

Lightning Talk: Faster Than SOTA Kernels in torch.compile With Subgraph Fusions and Custom Op Autotuning - Elias Ellison & Paul Zhang, Meta
Wednesday April 8, 2026 15:40 - 15:50 CEST
Unlocking state-of-the-art performance, this talk reveals how subgraph and custom operator autotuning in torch.compile deliver breakthrough speedups—surpassing previous SOTA for matmul and distributed collective ops.

DecomposeK is a novel subgraph optimization in PyTorch, designed to accelerate matrix multiplication when the inner dimension (K) is very large. It delivers up to a 28% speedup over ATen with activation fusion and 10% over ATen without fusion.

Building on subgraph infrastructure, we introduced Custom Op Autotuning, which benchmarks and selects the fastest kernel implementations for custom ops. This enables epilogue fusion and the first distributed collective op autotuning in PyTorch. We also introduce Range-based dispatch autotuning that enables dynamic selection of optimal implementations based on input shapes, ensuring performance that closely matches the theoretical best for each range. Our demo shows our autotuned kernels outperform Async TP Fused AG+MM by 9% and Async TP Fully Fused kernel by 41% across all input ranges.
Speakers
Elias Ellison

Software Engineer, Meta
Elias has been working on the PyTorch team for four years, most recently on the torch.compile stack.
Paul Zhang

Software Engineer, Meta
Paul Zhang is currently a software engineer working on PyTorch and Triton at Meta, ensuring that PyTorch and PT2 best utilizes the hardware it is run on. Previous to this, Paul has done extensive work on recommendation systems for training and inference, optimizing performance and…
Wednesday April 8, 2026 15:40 - 15:50 CEST
Founders Cafe

15:55 CEST

Lightning Talk: Why Logging Isn’t Enough: Making PyTorch Training Regressions Visible in Practice - Sahana Venkatesh, Wayve
Wednesday April 8, 2026 15:55 - 16:05 CEST
PyTorch teams often log rich training metrics, yet still discover training regressions late, after significant developer time and GPU budget have already been spent. In this talk, I’ll share a practical pattern we used to turn PyTorch training metrics into an operational guardrail for large-model training.

The approach combines scheduled short and long training runs, standardized performance and stability metrics (throughput, memory, loss, divergence), and simple statistical baselines to automatically surface regressions via alerts without hard gates or complex infrastructure.

I’ll focus on why logging alone is insufficient, how we chose what to monitor, and what tradeoffs we encountered (false positives, alert fatigue, baseline drift). The goal is not a tool demo, but a reusable pattern other PyTorch teams can adapt to catch training regressions earlier and make retraining more predictable.
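A minimal sketch of the statistical-baseline idea, with placeholder thresholds to tune against your own false-positive budget:

```python
import statistics

def check_regression(history, latest, z_threshold=3.0):
    """history: throughput (tokens/s) from prior scheduled runs."""
    mean = statistics.fmean(history)
    std = statistics.stdev(history)
    z = (latest - mean) / std if std > 0 else 0.0
    if z < -z_threshold:  # only slowdowns count as regressions
        return (f"ALERT: throughput {latest:.0f} is {abs(z):.1f} std "
                f"below baseline {mean:.0f}")
    return None  # within normal variation; stay quiet to avoid alert fatigue

baseline = [1510, 1495, 1502, 1488, 1507, 1499]
print(check_regression(baseline, 1310))  # fires
print(check_regression(baseline, 1504))  # quiet
```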
Speakers
Sahana Venkatesh

Software Engineer, Wayve
Wednesday April 8, 2026 15:55 - 16:05 CEST
Central Room
  Training Systems

15:55 CEST

From Gradients To Governance: Making PyTorch Lineage-Aware - Kateryna Romashko & Clodagh Walsh, Red Hat
Wednesday April 8, 2026 15:55 - 16:20 CEST
PyTorch was built to track how models learn, but not whether they should have. As AI systems increasingly operate on regulated, jurisdiction-bound, and sovereign data, lineage and policy can no longer live outside the runtime. This talk explores data sovereignty as a first-class constraint and argues that lineage is the missing primitive in modern ML frameworks. Building on PyTorch’s dynamic graphs and autograd system, we outline how tensors could carry origin, consent, and policy metadata through training and inference. The goal is not compliance tooling, but a lineage-aware PyTorch that enables trustworthy, auditable, and deployable AI across edge, federated, and European AI ecosystems.
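As a thought experiment matching the proposal (this is not an existing PyTorch feature), a tensor subclass can already carry provenance through operations via __torch_function__:

```python
import torch

class LineageTensor(torch.Tensor):
    @staticmethod
    def __new__(cls, data, origin=None, policy=None):
        t = torch.Tensor._make_subclass(cls, data)
        t.origin, t.policy = origin, policy
        return t

    @classmethod
    def __torch_function__(cls, func, types, args=(), kwargs=None):
        out = super().__torch_function__(func, types, args, kwargs or {})
        if isinstance(out, LineageTensor):
            # Union of input origins: provenance survives every operation.
            origins = {getattr(a, "origin", None)
                       for a in args if isinstance(a, LineageTensor)}
            out.origin = "|".join(sorted(o for o in origins if o))
        return out

x = LineageTensor(torch.randn(4, 4), origin="eu-hospital-A", policy="no-export")
w = LineageTensor(torch.randn(4, 4), origin="public")
print((x @ w).origin)  # eu-hospital-A|public
```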
Speakers
Kateryna Romashko

Associate Software Engineer, Red Hat
Kateryna Romashko is a Software Engineer and a Master’s student in Computer Science, currently working in the Emerging Technology team at Red Hat. Her work focuses on ML systems, data lineage, and event-driven architectures, with hands-on experience across ML platforms, distributed…
Clodagh Walsh

Software Engineer, Red Hat
Clodagh is a software engineer at Red Hat working on the Emerging Technologies team under the office of the CTO. She has experience working with cloud-native technologies and is currently working on a range of AI-related projects focused on topics such as MLOps and dLLMs.
Wednesday April 8, 2026 15:55 - 16:20 CEST
Master Stage
  Responsible AI & Compliance

16:10 CEST

Lightning Talk: Ball Tracking and Detection in Soccer Videos - Comparison of VLMs and Traditional Pipelines - Maciej Szymkowski, Future Processing
Wednesday April 8, 2026 16:10 - 16:20 CEST
Nowadays, Vision-Language Models (VLMs) have plenty of different applications. However, we cannot be sure that they are the most accurate and precise solution for every problem; we must compare them with other pipelines. In this presentation, we compare on-premise models – Qwen 3 and InternVL-3.5 – and cloud-based solutions – Gemini 3 and GPT-5 – with a traditional pipeline based on YOLOv11 and image-processing techniques. The battlefield is ball detection and tracking in recordings of soccer matches (from different angles, in diverse lighting, e.g., sunny, night, and weather conditions, e.g., snowy, rainy days) from the SoccerNet database. We used both broadcast videos and action and replay images, all marked manually to prepare a ground-truth database. The models must recognize not only the ball but also track it through the whole sequence of images. To give them equal chances, we fine-tuned YOLOv11 and provided additional knowledge to the VLMs in the form of a RAG pipeline. The comparison uses traditional machine-learning metrics such as accuracy, precision, and recall.
Speakers
Maciej Szymkowski

AI Researcher and Senior Machine Learning Engineer, Future Processing
Maciej Szymkowski, PhD, is a Senior ML Engineer at Future Processing. Formerly Head of AI at Łukasiewicz PIT, his academic background spans BUT, WUT, and AGH. With 45+ publications, he specializes in Computer Vision (med/transport/sport), VLMs, and LLMs. His industry experience includes…
Wednesday April 8, 2026 16:10 - 16:20 CEST
Central Room
  Applications & Case Studies

16:25 CEST

Lightning Talk: Bridging the Gap: Engineering Compliant "Glass Box" Medical AI With PyTorch - Muhammad Saqib Hussain, Neurosonic & Mohaddisa Maryam, Neurosonic Academy
Wednesday April 8, 2026 16:25 - 16:35 CEST
While state-of-the-art models like NeuroBOLT demonstrate mathematical excellence in EEG-to-fMRI synthesis, they often remain clinically opaque. With the EU AI Act classifying medical AI as "high-risk," hospitals cannot deploy "black boxes"; they require systems that are transparent, auditable, and legally compliant.
This session presents a "Clinical Auditing System" built within the PyTorch ecosystem, designed to transform opaque deep learning models into transparent "Glass Boxes." I will demonstrate a workflow that backpropagates gradients from high-dimensional 4D fMRI volumes to identify the specific EEG spectral signatures driving the model's predictions.
Key Technical Takeaways:
1. The Audit Layer: Implementing IntegratedGradients (Captum) to verify model fidelity, ensuring predictions stem from valid neural oscillations rather than noise artifacts.
2. Cross-Modal Reasoning: A technical demonstration of mapping 4D volumetric outputs back to 1D EEG frequency bands, enabling the model to "reason" through neurovascular coupling.
This presentation is designed for developers seeking to wrap PyTorch models in safety layers that satisfy the demands of healthcare regulation.
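As a flavor of the audit layer, a minimal Captum sketch is below; the toy 1D CNN stands in for the EEG-to-fMRI model, which applies the same API to 4D volumes.

```python
import torch
from captum.attr import IntegratedGradients

model = torch.nn.Sequential(
    torch.nn.Conv1d(8, 16, 5, padding=2),  # 8 EEG channels x 256 samples
    torch.nn.ReLU(),
    torch.nn.Flatten(),
    torch.nn.Linear(16 * 256, 2),
).eval()

eeg = torch.randn(1, 8, 256, requires_grad=True)
ig = IntegratedGradients(model)
attributions, delta = ig.attribute(eeg, target=1, return_convergence_delta=True)
# Large |attributions| flags the channels/time points driving the prediction;
# delta near zero indicates the attribution is numerically faithful.
print(attributions.shape, float(delta))
```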
Speakers
Mohaddisa Maryam

Miss, Neurosonic Academy
I am a first-year medical student in Italy.
Muhammad Saqib Hussain

Medical Student, AI Researcher and Neurotech Founder, ClinExplain
Muhammad Saqib is a 4th-year medical student at Comenius University Bratislava and Founder of Neurosonic Academy. His M.D. thesis explores AI for Sleep Medicine. Leveraging PyTorch and Captum, he builds "Glass Box" auditing frameworks to validate generative neuroimaging models against…
Wednesday April 8, 2026 16:25 - 16:35 CEST
Founders Cafe
  Applications & Case Studies

16:25 CEST

Demystifying PyTorch for ASICs: When (and Why) To Move Your Development To AI Accelerators - Alpha Romer Coma, Kollab Philippines
Wednesday April 8, 2026 16:25 - 16:50 CEST
GPU availability and cost are squeezing ML teams, making ASICs like Google TPUs and AWS Trainium attractive alternatives. But does the software stack hold up? This session moves beyond the datasheets to provide a practical, code-first reality check on migrating PyTorch workloads to ASICs.

We will demystify the underlying compiler stacks, comparing PyTorch/XLA (TPU) and TorchNeuron (Trainium), and analyze the 'Compiler Tax' that often surprises developers. Through side-by-side code diffs and real-world benchmarks on fine-tuning Llama 4, Gemma 3, and Qwen 3, and on training CNNs and ViTs, we will answer:

1. The Code: How much rewriting is actually required?
2. The Performance: Which model architectures thrive on ASICs, and which ones fail due to dynamic shapes?
3. The Debugging: What happens when you hit an OOM or a compilation hang?

Attendees will leave with a clear 'Migration Decision Matrix' to determine if their specific workload is ready for the ASIC leap.
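To preview the 'how much rewriting' answer on the TPU path, the PyTorch/XLA changes for a simple loop are mostly device placement plus an explicit step marker. A minimal sketch, assuming torch_xla is installed on a TPU host:

```python
import torch
import torch_xla.core.xla_model as xm

device = xm.xla_device()                      # a TPU core instead of "cuda"
model = torch.nn.Linear(512, 512).to(device)
opt = torch.optim.SGD(model.parameters(), lr=0.01)

for _ in range(10):
    x = torch.randn(64, 512, device=device)
    loss = model(x).pow(2).mean()
    loss.backward()
    opt.step()
    opt.zero_grad()
    # Materialize the lazy graph; changing shapes here retriggers the
    # XLA compile (the "Compiler Tax" the session analyzes).
    xm.mark_step()
```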
Speakers
Alpha Romer Coma

Associate Engineer, Cloud Development, Kollab Philippines
Alpha is an Associate Cloud Engineer at Kollab and a CS undergraduate at FEU Tech, Philippines. He specializes in multimodality with text, videos, and audio, and works on accelerated computing with Google TPUs and AWS Trainium.

For 5 months, he pushed Google Cloud TPU v4s to their limit to train vision-language models for use cases like internet brain-rot recognition and detection of cognitively overloading content called sludge videos with 92% accuracy…
Wednesday April 8, 2026 16:25 - 16:50 CEST
Central Room
 