7-8 April, 2026
Paris, France
Note: The schedule is subject to change.

The Sched app allows you to build your schedule but is not a substitute for your event registration. You must be registered for PyTorch Conference Europe 2026 to participate in the sessions. If you have not registered but would like to join us, please go to the event registration page to purchase a registration.

This schedule is displayed in CEST (UTC/GMT +2).
Wednesday, April 8
 

09:00 CEST

Keynote: PyTorch CTO - Matt White, Global CTO of AI, Linux Foundation
Wednesday April 8, 2026 09:00 - 09:10 CEST
Matt White, Global CTO of AI at the Linux Foundation and CTO of the PyTorch Foundation, will provide an update on technical strategy, the ecosystem, projects, and working groups.
Speakers
Matt White

Global CTO of AI, The Linux Foundation
Matt White is the Executive Director of the PyTorch Foundation and GM of AI at the Linux Foundation. He is also the Director of the Generative AI Commons. Matt has years of experience in applied research and standards in AI and data in the telecom, media, and gaming industries. Matt is…
Wednesday April 8, 2026 09:00 - 09:10 CEST
Master Stage
  Keynote Sessions
  • Audience Level Any
  • Slides Attached Yes

09:10 CEST

Keynote: vLLM & Ray Updates - Tyler Michael Smith, Chief Architect - Inference Engineering, Red Hat & Artur Niederfahrenhorst, Member of Technical Staff,Anyscale
Wednesday April 8, 2026 09:10 - 09:25 CEST

Speakers
Tyler Michael Smith

Chief Architect - Inference Engineering, Red Hat
Tyler received a PhD in Computer Science at The University of Texas at Austin, studying high-performance dense linear algebra: microkernels, parallelism, and theoretical lower bounds on data movement. After a postdoc at ETH Zürich, he joined Neural Magic, first working on a graph…
Artur Niederfahrenhorst

Member of Technical Staff, Anyscale
Artur is a member of the technical staff at Anyscale, the company that recently donated Ray to the Linux Foundation. He has been contributing to Ray since early 2022; his main contributions have been in distributed reinforcement learning. Artur majored in Computer Science at…
Wednesday April 8, 2026 09:10 - 09:25 CEST
Master Stage
  Keynote Sessions
  • Audience Level Any
  • Slides Attached Yes

09:25 CEST

Keynote: The Hub as Infrastructure. From Open PyTorch Models to a Safe and Performant Distribution Hub - Lysandre Debut, Chief Open-Source Officer, Hugging Face
Wednesday April 8, 2026 09:25 - 09:40 CEST

Speakers
Lysandre Debut

Chief Open-Source Officer, Hugging Face
Lysandre is the Chief Open-Source Officer at Hugging Face, ensuring that the ecosystem is as well supported as possible across the ML lifecycle with open-source tools.

He has been at Hugging Face for the past six years and was the company's first open-source employee, working on transformers and the entire stack of Hugging Face open-source libraries since then…
Wednesday April 8, 2026 09:25 - 09:40 CEST
Master Stage
  Keynote Sessions
  • Audience Level Any
  • Slides Attached Yes

09:45 CEST

Sponsored Keynote: Open Source Infrastructure for the AI Native Era - Jonathan Bryce, Executive Director, Cloud Native Computing Foundation
Wednesday April 8, 2026 09:45 - 09:50 CEST
AI adoption will not be limited by model ideas alone. It will be limited by how fast we can deploy, secure, observe, and scale AI systems in production. Inference is where AI becomes real for most organizations. As AI moves from frontier labs into mainstream production, the operational challenges start to look increasingly cloud native: orchestration, autoscaling, routing, security, policy, and observability. This keynote explores why the next phase of AI adoption will move faster if PyTorch and cloud native communities work together to extend proven open source patterns.
Speakers
Jonathan Bryce

Executive Director, Cloud and Infrastructure, The Linux Foundation
Jonathan Bryce is the Executive Director of Cloud & Infrastructure at the Linux Foundation, where he leads both the Cloud Native Computing Foundation (CNCF) and the OpenInfra Foundation, two of the largest and most influential open source communities in the world. With over…
Wednesday April 8, 2026 09:45 - 09:50 CEST
Master Stage
  Keynote Sessions
  • Audience Level Any
  • Slides Attached Yes

10:35 CEST

Lightning Talk: Live Migration of PyTorch GPU Nodes From Azure To European Clouds - Mike Krom, ACF Cybersolutions
Wednesday April 8, 2026 10:35 - 10:45 CEST
Many European PyTorch teams run their GPU workloads on hyperscalers like Azure, AWS, or GCP—often without realizing that this places their data and models under US jurisdiction.

This lightning talk shows how PyTorch compute nodes can be migrated to European cloud providers while keeping the full ML environment intact. Through a live demo, we migrate a GPU-enabled PyTorch VM—including CUDA drivers and Jupyter notebooks—from Azure to European infrastructure, without retraining models or rebuilding environments.

The focus is on practical challenges: GPU compatibility, reproducibility, and data movement across clouds.

The migration is demonstrated using DigitalNomadSky, an open-source Python platform for cross-cloud VM migration, but the lessons apply broadly to PyTorch teams aiming to reduce jurisdictional risk and vendor lock-in.

Key takeaways:
- Why PyTorch workloads on hyperscalers raise sovereignty concerns for EU teams
- What actually breaks (and what doesn’t) when migrating GPU-based ML nodes
- How to regain control over ML infrastructure without rewriting your stack
Speakers
Mike Krom

Partner, ACF Cybersolutions
I am a software architect and lead developer of the open-source project DigitalNomadSky. I have extensive experience with Microsoft Azure from working at Microsoft and supporting large-scale cloud migrations. My work focuses on supporting data science and ML teams with cloud infrastructure…
Wednesday April 8, 2026 10:35 - 10:45 CEST
Central Room
  Security & Privacy

10:35 CEST

Beyond JSON-RPC: Scaling Model Context Protocols With gRPC in the PyTorch Ecosystem - Ashesh Vidyut & Madhav Bissa, Google
Wednesday April 8, 2026 10:35 - 11:00 CEST
Right now, MCP mostly relies on HTTP and STDIO. That works for simple scripts, but if you’re running high-performance PyTorch models in production, you’re going to hit a wall. When you’re moving large context windows or tensor metadata, the overhead of JSON-RPC starts to hurt.
We’re introducing SEP-1352, which adds gRPC as a native transport for MCP. Since gRPC is already the standard for microservices, it’s a natural fit for the PyTorch ecosystem. By using Protobuf instead of JSON, we get much higher throughput and lower latency—essentially making the communication between models and tools as fast as the models themselves.
In this session, we’ll cover:
- Why Protobuf matters: moving away from bulky JSON to keep bandwidth low and speed high.
- Built-in streaming: how to use gRPC’s streaming to handle long-running model outputs without timeouts.
- Production-ready features: using the same auth, load balancing, and service mesh (mTLS) you already use for your ML microservices.
- Upgrading your stack: how to move from PyTorch MCP HTTP services to MCP gRPC services without throwing away your existing infra.
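For context on the payload-size argument, here is a toy comparison of JSON text encoding versus a packed binary encoding of the kind Protobuf uses. This is not the SEP-1352 wire format or the MCP API; it only shows why binary serialization shrinks tensor-style payloads.

```python
# Toy comparison: JSON text encoding vs. packed binary (Protobuf-style).
# Not the SEP-1352 wire format; just the size argument in miniature.
import json
import struct

values = [0.123456789] * 4096  # stand-in for a block of tensor data

as_json = json.dumps({"data": values}).encode("utf-8")
as_binary = struct.pack(f"{len(values)}f", *values)  # 4 bytes per float32

print(f"JSON payload:   {len(as_json):>7} bytes")
print(f"Binary payload: {len(as_binary):>7} bytes")
# The JSON payload is several times larger, before even counting
# parse time on the receiving side.
```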
Speakers
Ashesh Vidyut

Senior Software Engineer, Google

Madhav Bissa

Senior Software Engineer, Google
Member, gRPC-Go
Wednesday April 8, 2026 10:35 - 11:00 CEST
Junior Stage
  Agents & Interop

10:35 CEST

How To Write C++ Extensions in 2026 - Jane Xu, Meta & Mikayla Gawarecki, Meta
Wednesday April 8, 2026 10:35 - 11:00 CEST
Are you writing a C++ custom op extension to PyTorch? It's 2026: are you still shipping M x N wheels for M CPython versions and N libtorch versions? Did you know you can ship just 1 wheel that works across multiple CPythons and libtorches? If you're curious how, attend this talk to get the deets on py_limited_api, APIs like torch::stable::Tensor & TORCH_TARGET_VERSION, and generally the latest and greatest ways to keep your code and your release matrix simple. Get your custom kernel enrolled in these features, with benefits proven out in FA3, xformers, torchao, torchaudio, and more in progress! We'll also share some of our vision for smoother and faster custom op extensions.
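For context, a minimal sketch of what a single-wheel setup.py can look like. The py_limited_api flags follow PyTorch's recent custom-ops documentation but are assumptions here; verify them against your PyTorch version.

```python
# Hypothetical setup.py for a CPython-agnostic extension wheel.
# Flag names follow the PyTorch (2.6+) custom-op docs; verify against
# your torch version before relying on them.
from setuptools import setup
from torch.utils.cpp_extension import BuildExtension, CppExtension

setup(
    name="my_extension",
    ext_modules=[
        CppExtension(
            "my_extension._C",
            ["csrc/my_op.cpp"],
            py_limited_api=True,  # build against CPython's stable ABI
        )
    ],
    cmdclass={"build_ext": BuildExtension},
    # tag the wheel so one artifact serves cp39 and newer
    options={"bdist_wheel": {"py_limited_api": "cp39"}},
)
```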
Speakers
Jane Xu

PyTorch SWE, Meta
Hi, I'm Jane! Please don't hesitate to come talk to me about your favorite optimizer, fitting models in GPU memory, how to free C++ extensions from libtorch version pinning, and anything else that interests you.
Mikayla Gawarecki

Software Engineer, Meta
Software Engineer on PyTorch
Wednesday April 8, 2026 10:35 - 11:00 CEST
Founders Cafe
  Frameworks & Compilers

10:50 CEST

Lightning Talk: Achieving SOTA GEMM Performance: A CuTeDSL Backend for PyTorch Inductor - Nikhil Patel, Meta
Wednesday April 8, 2026 10:50 - 11:00 CEST
Matrix multiplication is a central compute primitive in modern deep learning, but achieving SOTA performance on novel architectures like NVIDIA Blackwell has become a bottleneck. Existing Triton-based kernels in torch.compile struggle to keep pace with rapid hardware evolution, often forcing users to hand-write custom, architecture-specific kernels - a growing gap as hardware feature velocity accelerates.

We present a new CuTeDSL GEMM backend in PyTorch Inductor that integrates NVIDIA’s kernel implementations directly into torch.compile. Built using the Cutlass API for kernel discovery, this backend allows PyTorch to expose first-class support for NVIDIA-authored GEMMs and automatically leverage new architectural features as NVIDIA updates their kernels.

The backend currently supports standard GEMM, grouped GEMM, and block-scaled MXFP8 GEMM, along with pointwise epilogue fusions (with reductions forthcoming). We present early end-to-end results from vLLM inference and TorchTitan training, demonstrating how this approach enables PyTorch to achieve high-performance GEMMs on Blackwell and beyond, while eliminating the need for users or developers to maintain handwritten kernels.
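For context, a minimal sketch of steering Inductor's GEMM autotuning from user code. The "CUTEDSL" backend string refers to the backend presented in this talk and is an assumption; "TRITON" and "ATEN" are existing values of this config.

```python
# Sketch: ask Inductor's max-autotune to consider specific GEMM backends.
# "CUTEDSL" is the backend described in this talk (assumed name);
# "TRITON"/"ATEN" are long-standing options.
import torch
import torch._inductor.config as inductor_config

inductor_config.max_autotune_gemm_backends = "CUTEDSL,TRITON,ATEN"

@torch.compile(mode="max-autotune")
def gemm(a, b):
    return a @ b

a = torch.randn(4096, 4096, device="cuda", dtype=torch.bfloat16)
b = torch.randn(4096, 4096, device="cuda", dtype=torch.bfloat16)
out = gemm(a, b)  # autotuning benchmarks candidates and picks the fastest
```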
Speakers
Nikhil Patel

Software Engineer, Meta
Nikhil is a software engineer on the PyTorch Inductor team at Meta Superintelligence Labs, where he works on Inductor’s CuTeDSL GEMM backend. His work sits at the boundary between compiler code generation and hardware-native GPU features, optimizing large-scale training and inference…
Wednesday April 8, 2026 10:50 - 11:00 CEST
Master Stage
  Frameworks & Compilers

10:50 CEST

Lightning Talk: Step-Aligned Telemetry for Distributed PyTorch Training (Time & Memory Attribution Across Ranks) - Abhinav Srivastav, TraceOpt
Wednesday April 8, 2026 10:50 - 11:00 CEST
Distributed PyTorch training often looks healthy in system dashboards: GPU utilization is high and memory is stable, yet throughput degrades, steps jitter, or GPUs go idle intermittently. The core issue is misalignment: most telemetry is sampled by time, while training progresses by steps, and distributed behavior is dominated by the slowest rank rather than averages.

In this talk, I will break down common failure modes in DDP training that standard metrics miss (rank stragglers, dataloader stalls, step-time variance, and memory spikes/creep). We will show how step-aligned, rank-aware aggregation changes debugging: per-step worst-rank vs. median-rank views, gating to completed steps across ranks, and tying time and memory back to training semantics without relying on heavyweight profilers.
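For context, a minimal sketch of the step-aligned, rank-aware idea: time each training step, gather the per-rank timings, and report the worst rank rather than an average. This is illustrative only, not the speaker's tooling; it assumes an initialized torch.distributed process group with one CUDA device per rank.

```python
# Sketch: step-aligned, rank-aware timing. Assumes dist.init_process_group()
# has run and each rank owns one CUDA device. Illustrative, not TraceML.
import time
import torch
import torch.distributed as dist

def timed_step(step, work_fn):
    start = time.perf_counter()
    work_fn()                    # forward/backward/optimizer for one step
    torch.cuda.synchronize()     # make GPU work visible to the host clock
    elapsed = torch.tensor([time.perf_counter() - start], device="cuda")

    gathered = [torch.zeros_like(elapsed) for _ in range(dist.get_world_size())]
    dist.all_gather(gathered, elapsed)  # every rank sees every rank's time
    times = torch.cat(gathered)
    # Step time is set by the slowest rank, not the mean.
    if dist.get_rank() == 0:
        print(f"step {step}: worst={times.max():.3f}s "
              f"median={times.median():.3f}s "
              f"straggler=rank{times.argmax().item()}")
```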
Speakers
Abhinav Srivastav

ML Scientist, TraceOpt
ML researcher with a PhD in Computer Science and industry experience at IBM Research, Huawei Research, and Zalando. Currently building TraceML: an open-source tool that shows you the step-level breakdown of your PyTorch training run while it's still running. I am particularly interested in…
Wednesday April 8, 2026 10:50 - 11:00 CEST
Central Room
  Training Systems

11:05 CEST

Lightning Talk: Accelerating PyTorch Models With Torch.compile's C++ Wrapper Mode - Bin Bao, Meta
Wednesday April 8, 2026 11:05 - 11:15 CEST
This lightning talk introduces torch.compile's C++ wrapper mode, a powerful feature that reduces CPU overhead and significantly improves model performance. As modern GPUs become increasingly powerful and compiler optimizations make GPU kernels run faster, CPU overhead has become more visible as the bottleneck. By generating optimized C++ code instead of Python, cpp-wrapper mode directly tackles this challenge.

While CUDAGraphs can also reduce CPU overhead, it is not always applicable, especially with highly dynamic input shapes. In these scenarios, cpp-wrapper mode provides a robust alternative with significant performance gains. Benchmark results from the OSS Hugging Face suite demonstrate that cpp-wrapper mode delivers a 39% speedup over default torch.compile.

Attendees will learn when and how to leverage cpp-wrapper mode to overcome CPU-bound limitations and understand how this feature fits into PyTorch's performance optimization landscape, enabling them to build faster machine learning applications.
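For context, a minimal sketch of enabling the mode; the flag location may vary across PyTorch versions, and the toy model is illustrative.

```python
# Sketch: enable Inductor's C++ wrapper to cut Python-side glue overhead.
import torch
import torch._inductor.config as inductor_config

inductor_config.cpp_wrapper = True  # generate C++ wrapper code instead of Python

@torch.compile
def f(x):
    return torch.nn.functional.gelu(x) * 2

x = torch.randn(1024, 1024, device="cuda")
y = f(x)  # first call compiles; subsequent calls skip most CPU overhead
```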
Speakers
Bin Bao

Software Engineer, Meta
Bin Bao is a software engineer working with the PyTorch Compiler team at Meta. He focuses on developing TorchInductor optimizations and AOTInductor for C++ deployment.
Wednesday April 8, 2026 11:05 - 11:15 CEST
Junior Stage
  Frameworks & Compilers

11:05 CEST

Fp8 Training From Hopper To Blackwell - Luca Wehrstedt, Meta
Wednesday April 8, 2026 11:05 - 11:30 CEST
The Hopper generation of NVIDIA GPUs first enabled the use of low-precision float8 data types for training via TensorCore acceleration. However, the recipe to best leverage it was far from settled. Practitioners had to find their way through many entangled decisions around accuracy-vs-efficiency, precision-vs-range, overflows-vs-underflows, and more. The frontier was pushed further forward by the DeepSeek release, and then by the micro-scaling formats introduced by Blackwell. In this talk we will go through all these approaches, comparing their pros and cons, to guide researchers toward the options that work best for them.
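For context, a minimal sketch of one recipe in this space, float8 training via torchao; the API names follow recent torchao releases and require fp8-capable hardware. Illustrative only, not the talk's specific recipe.

```python
# Sketch: torchao float8 training (one of several recipes this talk surveys).
# Requires a recent torchao and fp8-capable hardware (e.g. H100); API names
# may shift between versions.
import torch
import torch.nn as nn
from torchao.float8 import convert_to_float8_training

model = nn.Sequential(
    nn.Linear(4096, 4096), nn.ReLU(), nn.Linear(4096, 4096)
).cuda()
convert_to_float8_training(model)  # swap Linear compute to float8

opt = torch.optim.AdamW(model.parameters())
x = torch.randn(32, 4096, device="cuda")
loss = model(x).square().mean()
loss.backward()  # grads flow through scaled fp8 matmuls
opt.step()
```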
Speakers
Luca Wehrstedt

Software Engineer, Meta
Research Engineer in Meta's Fundamental AI Research team (FAIR). Working at the intersection of research and infrastructure, Luca specializes in training efficiency and distributed communication. Regular contributor to PyTorch.
Wednesday April 8, 2026 11:05 - 11:30 CEST
Master Stage
  Training Systems

11:20 CEST

Lightning Talk: Building AI That Ops Teams Actually Trust - Robert King, Chronosphere / Palo Alto Networks
Wednesday April 8, 2026 11:20 - 11:30 CEST
You've built an AI that identifies root causes of incidents faster than any human could... but there's one problem: no one trusts it.

Ops teams are skeptical by nature. They've been burned by noisy alerts, black-box tools, and "intelligent" systems that weren't.
This talk covers what we learned building AI for incident response across enterprise environments: why technically correct recommendations get ignored, and how to design for skepticism from day one.

I'll share specific patterns that moved the needle:

- Validating agent responses before they reach users, catching hallucinations, weak reasoning, and overconfident outputs
- Explainability that fits the operator's mental model, not the data scientist's
- Feedback loops that improve the AI and build user trust simultaneously
- Rollout strategies that let teams build confidence gradually

Whether you're using LLMs, agents, or traditional ML for operational tasks, the trust problem is the same. Ship something wrong during an incident and you've lost your users for months.

You'll leave with a practical framework for validating AI outputs and building the kind of trust that gets recommendations acted on.
Speakers
Robert King

Senior Sales Engineer, Chronosphere
Robert is Lead Enterprise Solutions Engineer at Chronosphere and an OpenTelemetry contributor. He recently presented on AI Observability with OpenTelemetry at Cloud Native London (https://www.youtube.com/live/qF4wz-pha1w?si=PFzjNcGkbD4pFKnA&t=625) and has spoken at AWS Summit and other…
Wednesday April 8, 2026 11:20 - 11:30 CEST
Junior Stage
  Inference & Production

11:35 CEST

Accelerating Complex-Valued Tensors With Torch.compile - Hameer Abbasi, OpenTeams Inc.
Wednesday April 8, 2026 11:35 - 12:00 CEST
torch.compile has been invaluable in accelerating many machine learning and scientific computing workflows. It has become a one-shot way to get free performance for many kinds of programs and models.

However, it comes with its own set of limitations. One of these limitations is that, for a long time, torch.compile didn't accept complex-valued tensors. These tensors have many uses, from quantum mechanics to simplifying the physics for world models. Support for such tensors would accelerate many of these workflows.

In this talk, we will take a journey into the current progress for supporting such tensors in torch.compile; some of the encountered challenges and what we hope to achieve, including some side-benefits for reducing binary size by JIT-ing kernels on demand.
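For context, a minimal sketch of the kind of complex-valued workload in question; whether it compiles end-to-end or falls back to eager depends on your PyTorch version, which is precisely the progress this talk covers.

```python
# Sketch: a complex-valued function under torch.compile. On older
# PyTorch versions this may graph-break or fall back to eager.
import torch

@torch.compile
def phase_rotate(x: torch.Tensor, theta: float) -> torch.Tensor:
    freq = torch.fft.fft(x)                             # complex intermediate
    freq = freq * torch.exp(torch.tensor(theta * 1j))   # rotate the phase
    return torch.fft.ifft(freq).real

y = phase_rotate(torch.randn(1024), 0.5)
```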
Speakers
Hameer Abbasi

Senior Software Engineer I, OpenTeams, Inc.
Hameer Abbasi is a Senior Software Developer at OpenTeams, Inc. As part of his day job and also as a hobby, he has contributed to various projects in the scientific computing space, including NumPy, SciPy and PyTorch. He is also the lead maintainer of PyData/Sparse, a library for…
Wednesday April 8, 2026 11:35 - 12:00 CEST
Junior Stage
  Frameworks & Compilers

13:30 CEST

Optimizing CPU LLM Inference in PyTorch: Lessons From vLLM - Crefeda Rodrigues, Arm Limited & Fadi Arafeh, Arm
Wednesday April 8, 2026 13:30 - 13:55 CEST
vLLM has emerged as a reference inference stack in the PyTorch ecosystem for high-throughput large language model serving. CPUs continue to play an important role in LLM inference, supporting cost-sensitive deployments, hybrid CPU/GPU serving, and batch or off-peak workloads on general-purpose infrastructure.

In this talk, we examine CPU-based LLM inference through the lens of PyTorch internals, using vLLM as a case study. We describe how vLLM interacts with PyTorch’s operator stack, including tensor layout management, backend dispatch, and threading behaviour, and highlight common sources of overhead such as repeated weight repacking and poor threading behaviour.

We present runtime and kernel-level optimizations that reduce this overhead, including CPU paged-attention kernel tuning with vectorized softmax, specialized Q–K and P–V GEMM kernels aligned with vLLM’s scheduler, ISA-aware BF16 attention, pre-packed weight layouts for quantized matmul, SIMD vectorization using PyTorch’s at::vec::Vectorized primitives, and NUMA-aware scheduling for scalable parallel inference.

Finally, we conclude with lessons learned from building and upstreaming a high-performance CPU inference engine.
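For context, a minimal sketch of the CPU-side knobs involved when serving with vLLM on CPU. The environment variable names follow vLLM's CPU backend documentation but should be treated as assumptions to verify; the model choice is arbitrary.

```python
# Sketch: CPU serving knobs for vLLM (requires a CPU build of vLLM).
# Env var names follow vLLM's CPU docs; verify for your version.
import os

os.environ["VLLM_CPU_KVCACHE_SPACE"] = "40"        # GiB of DRAM for KV cache
os.environ["VLLM_CPU_OMP_THREADS_BIND"] = "0-31"   # pin OpenMP threads to cores

from vllm import LLM, SamplingParams

llm = LLM(model="Qwen/Qwen2.5-0.5B-Instruct")  # CPU backend is auto-selected
out = llm.generate(["Hello"], SamplingParams(max_tokens=16))
print(out[0].outputs[0].text)
```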
Speakers
Crefeda Rodrigues

Staff Software Engineer, Arm
Crefeda Rodrigues is a Staff Software Engineer at Arm, focusing on performance- and scalability-driven machine learning software optimization for Arm server CPUs. She previously worked on large-scale climate and weather model optimization as a postdoctoral researcher at the University…
Fadi Arafeh

Senior Machine Learning Engineer, Arm
Fadi is a Senior Machine Learning Engineer at Arm, working on optimizing PyTorch and vLLM for Arm Infrastructure cores. Prior to that, Fadi obtained a BSc in Artificial Intelligence from the University of Manchester.
Wednesday April 8, 2026 13:30 - 13:55 CEST
Founders Cafe
  Inference & Production

13:45 CEST

Lightning Talk: Slash LLM Cold-Start Times by Pre-distributing GPU Caches - Billy McFall & Maryam Tahhan, Red Hat
Wednesday April 8, 2026 13:45 - 13:55 CEST
Are your Large Language Model (LLM) deployments stuck waiting for GPU kernels to compile? If you are running distributed inference at scale, your infrastructure is likely wasting time rebuilding the same GPU kernel cache for every single instance. You may not even realize how much time and how many resources these rebuilds consume. This session is designed for platform engineers and ML practitioners who need to optimize inference scaling and reduce startup latency.

We will demonstrate how to eliminate redundant compilation by pre-distributing GPU kernel caches to all the inference nodes using KServe, a distributed model inference runtime for Kubernetes. Beyond just the "what," we will dive into the technical implementation of signing, verifying, and mounting cache images to ensure supply-chain security across clusters. Attendees will leave with a practical blueprint for reducing cold-start times and securing GPU-heavy workloads in production.
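For context, a minimal sketch of the underlying mechanism: point the compile cache at a pre-populated path so warm nodes skip recompilation. TORCHINDUCTOR_CACHE_DIR is a documented Inductor knob; the KServe packaging, signing, and mounting flow from the talk is not shown.

```python
# Sketch: reuse a pre-distributed compile cache. The signing/mounting
# pipeline from the talk is out of scope here.
import os
os.environ["TORCHINDUCTOR_CACHE_DIR"] = "/warm-cache/inductor"  # pre-mounted path

import torch

@torch.compile
def f(x):
    return torch.relu(x) + 1

f(torch.randn(8, 8, device="cuda"))  # hits the mounted cache on warm nodes
```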
Speakers
Billy McFall

Sr. Principal Software Engineer, Red Hat
Billy McFall has been a software engineer in the Emerging Tech Networking Team within the Office of the CTO at Red Hat for 9+ years. Billy previously worked on Kubernetes/OpenShift networking, including the integration of the NVIDIA DPU into OpenShift. Billy has also been a maintainer of…
Maryam Tahhan

Principal Engineer, Red Hat
Maryam is a Principal Engineer in Red Hat's Office of the CTO, where she focuses on standardising CPU inferencing performance evaluation to help effectively validate and scale ML workloads.
Wednesday April 8, 2026 13:45 - 13:55 CEST
Central Room
  Inference & Production

14:15 CEST

Lightning Talk: Inside vLLM's KV Offloading Connector: Async Memory Transfers for Higher Inference Throughput - Nicolò Lucchesi, Red Hat
Wednesday April 8, 2026 14:15 - 14:25 CEST
Every LLM request produces KV-cache state that is expensive to recompute. However, GPU memory is limited, and when it fills up, entries are discarded from the cache. A natural mitigation is expanding the KV cache to CPU DRAM, which is meaningfully larger than GPU memory.
vLLM 0.11.0 introduced the Offloading Connector, an asynchronous, pluggable API for KV-cache offloading that is bundled with a native CPU backend. It executes transfers concurrently with model computation by using GPU DMA, offering speedy loading of KV data from DRAM and near-zero overhead from offloading. Getting here required rethinking vLLM's memory layout: the default per-layer KV fragmentation devastated transfer throughput. A new contiguous block layout, upstreamed in 0.12.0, increased effective block sizes by up to 125× and delivered an order-of-magnitude improvement in offloading performance.
We'll walk through the connector architecture, discuss memory transfer tradeoffs, the memory layout redesign, and practical guidance for enabling CPU offloading in production.
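For context, a minimal sketch of enabling the connector; the field values follow vLLM's offloading examples but should be checked against your version's docs (the extra-config key in particular is an assumption).

```python
# Sketch: enable CPU KV-cache offloading via the connector API (vLLM >= 0.11).
# Verify connector name and extra-config keys against your vLLM version.
from vllm import LLM
from vllm.config import KVTransferConfig

llm = LLM(
    model="meta-llama/Llama-3.1-8B-Instruct",
    kv_transfer_config=KVTransferConfig(
        kv_connector="OffloadingConnector",
        kv_role="kv_both",
        kv_connector_extra_config={"num_cpu_blocks": 8192},  # DRAM pool size
    ),
)
```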
Speakers
Nicolò Lucchesi

Senior Machine Learning Engineer, Red Hat
Nicolò is a Senior Machine Learning Engineer at Red Hat with a background in Deep Learning and Computer Vision. He works on Inference Optimization for vLLM, where he is a maintainer.
Wednesday April 8, 2026 14:15 - 14:25 CEST
Central Room
  Inference & Production
  • Audience Level Any
  • Slides Attached Yes

14:30 CEST

Lightning Talk: Torch-Spyre: Compiling To a Multi-core Dataflow Accelerator With Inductor - David Grove & Olivier Tardieu, IBM
Wednesday April 8, 2026 14:30 - 14:40 CEST
Torch-Spyre (https://github.com/torch-spyre/torch-spyre) is an open source project that provides a PyTorch PrivateUse1 device with OpenReg, including an Inductor backend, for the IBM Spyre Accelerator. IBM Spyre is a high-performance, energy-efficient AI accelerator featuring 32 AI-optimized compute cores, each with on-chip interconnect and compiler-managed scratchpad memory.

Our goal in this session is to describe how we evolved the Spyre software stack to fully leverage Inductor. This enabled the elimination of a significant fraction of our proprietary compiler code base resulting in improved compilation time and operation coverage without loss of inference performance. We will highlight several technical challenges in compiling for Spyre-like accelerators and describe how we adapted and extended Inductor to tackle them. In particular, we will discuss our extensions to Inductor to support device-specific tiled Tensor memory layouts, and new compiler optimization passes for core-level work division and scratchpad management. We hope to engage the community in evolving the PyTorch ecosystem to more fully support them.
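For context, a minimal sketch of the generic PrivateUse1 mechanism that out-of-tree devices like this build on; the "spyre" naming here is illustrative, not the project's actual registration code.

```python
# Sketch: the generic PrivateUse1 out-of-tree device hook. The "spyre"
# name is illustrative; torch-spyre's real registration lives in the repo.
import torch

torch.utils.rename_privateuse1_backend("spyre")  # exposes torch.device("spyre")

# A real backend package then registers its device module and kernels, e.g.:
#   torch._register_device_module("spyre", spyre_module)
# after which tensors can be moved with:
#   x = torch.randn(4).to("spyre")
x = torch.randn(4)  # stays on CPU here, since no Spyre backend is installed
```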
Speakers
Dave Grove

Distinguished Research Scientist, IBM
David Grove is a Distinguished Research Scientist at IBM T.J. Watson, NY, USA. He has been a software systems researcher at IBM since 1998, specializing in programming language implementation and scalable runtime systems. He has authored more than sixty peer-reviewed publications…
Olivier Tardieu

Principal Research Scientist, Manager, IBM
Dr. Olivier Tardieu is a Principal Research Scientist and Manager at IBM T.J. Watson, NY, USA. He joined IBM Research in 2007. His current research focuses on cloud-related technologies, including Serverless Computing and Kubernetes, as well as their application to Machine Learning…
Wednesday April 8, 2026 14:30 - 14:40 CEST
Junior Stage
  Frameworks & Compilers

14:30 CEST

Lightning Talk: Every Millisecond Counts: The Fine-tuning Journey of an Ultra-Efficient PyTorch Model for the Edge - Pavel Macenauer, NXP Semiconductors
Wednesday April 8, 2026 14:30 - 14:40 CEST
From smart cameras that protect privacy by analyzing video on-device, to wearables that interpret voice and motion instantly, to industrial sensors that prevent failures before they happen, edge AI is shaping our everyday routines and transforming our lives.

Eliminating cloud dependency and making connectivity optional is essential for data staying local. Without cloud, our options become severely limited to the constraints of the devices, and efficiency drives innovation. Every millisecond and milliwatt can unlock a new use case — or limit one.

This talk will explore optimization techniques for vision, audio, and language models that allow them to run on tiny, resource-constrained devices, and fine-tune them to the limit of our model’s latency, accuracy, or power efficiency. We will start with an initial rapid simulation, and follow up with silicon-level tuning with real device profiling feedback.
Speakers
Pavel Macenauer

AI/ML R&D Software Lead, NXP Semiconductors
A software lead at NXP Semiconductors leading teams that develop tools and runtime libraries and enable AI on edge-class devices. Both professionally and out of curiosity, Pavel has developed software visualizing the world around us, initially through the lens of a camera, then from…
Wednesday April 8, 2026 14:30 - 14:40 CEST
Central Room
  Inference & Production

14:30 CEST

From Responses To Trajectories: Multi-Turn and Multi-Environment Reinforcement Learning - Kashif Rasul & Sergio Paniego Blanco, Hugging Face
Wednesday April 8, 2026 14:30 - 14:55 CEST
Post-training of LLMs with reinforcement learning is increasingly moving beyond static prompt–response pairs and preference optimization methods such as DPO, toward trajectory-based optimization. This talk focuses on the latest advances in multi-turn and multi-environment GRPO training, enabling LLMs to learn from interactive, agent-like experiences, including interacting with simulated environments, using tools, or completing multi-step reasoning tasks.

We highlight how TRL, as a PyTorch-native post-training framework, supports these workflows at scale. Multi-turn, multi-environment training can leverage simulated environments (e.g., coding, terminals, browsers) such as OpenEnv, while GRPO can also be applied to datasets for training LLMs on tool use or multi-step reasoning. Attendees will gain insights into design patterns, rollout handling, trajectory batching, and advantage computation, showing how robust, multi-turn, multi-environment post-training can improve alignment, reasoning, and generalization in LLMs for agentic applications.
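For context, a minimal single-turn GRPO sketch with TRL's public API; the multi-turn, multi-environment rollouts discussed in the talk layer on top of this. The reward function and dataset are toy stand-ins.

```python
# Sketch: single-turn GRPO with TRL. Toy reward and dataset; the talk's
# multi-turn/multi-environment rollouts build on this API.
from datasets import Dataset
from trl import GRPOConfig, GRPOTrainer

def reward_len(completions, **kwargs):
    return [-float(len(c)) for c in completions]  # toy reward: prefer brevity

dataset = Dataset.from_dict({"prompt": ["Say hi.", "Count to three."]})

trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-0.5B-Instruct",
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="grpo-demo", per_device_train_batch_size=2),
    train_dataset=dataset,
)
trainer.train()  # samples groups of completions and optimizes the policy
```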
Speakers
Kashif Rasul

Research Scientist, Hugging Face
Kashif has a PhD in Mathematics from the Freie Universität Berlin. He is passionate about high-performance computing and reinforcement learning, has presented at NVIDIA's GTC in 2009 and at StrangeLoop in 2012, and is also contributing to a number of data science and deep learning…
Sergio Paniego Blanco

Machine Learning Engineer, Hugging Face
Sergio has an extensive track record in open source and artificial intelligence, the field in which he also earned his PhD. For more than eight years he has taken part in initiatives such as Google Summer of Code, contributing as both a developer and a mentor. Currently…
Wednesday April 8, 2026 14:30 - 14:55 CEST
Founders Cafe
  Training Systems

14:45 CEST

Lightning Talk: Full-Stack PyTorch Robotics VLA: From Data To Edge Via ExecuTorch/OpenVINO - Samet Akcay & Dmitriy Pastushenkov, Intel
Wednesday April 8, 2026 14:45 - 14:55 CEST
While research-centric tools have lowered the entry barrier for robotics data collection, transitioning Vision-Language-Action models to production remains challenging due to fragmented edge deployment paths. This session presents a unified, PyTorch-native workflow spanning the full robotics lifecycle, from data capture and curation to optimized edge execution. We introduce a modular Physical AI pipeline designed to resolve the disconnect between research scripts and real-time hardware. The talk details practical patterns for robotics data capture and policy training in a unified PyTorch ecosystem, followed by concrete steps to export models via ExecuTorch. Using an OpenVINO backend, Quantizer, and AOT compilation, we address latency, accuracy, and operator coverage gaps, and demonstrate efficient on-device VLA inference. Using a WidowX pick-and-sort task as a case study, we demonstrate how to validate latency and numerical tolerances under physical constraints. Attendees will leave with a reference architecture and a checklist for monitoring, safety gates, and managing dataset drift, providing a roadmap for moving robotics VLA from research to production-grade edge deployment.
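For context, a minimal sketch of the PyTorch-native export path involved (torch.export into ExecuTorch's edge dialect); the OpenVINO partitioner, quantizer, and AOT steps from the talk are not shown, and their entry points should be taken from the ExecuTorch OpenVINO backend docs.

```python
# Sketch: torch.export -> ExecuTorch edge dialect -> .pte artifact.
# The OpenVINO-specific partitioning/quantization steps are omitted.
import torch
from executorch.exir import to_edge

class TinyPolicy(torch.nn.Module):
    def forward(self, obs):
        # toy 7-DoF action head standing in for a VLA policy
        return torch.tanh(obs @ torch.ones(obs.shape[-1], 7))

exported = torch.export.export(TinyPolicy(), (torch.randn(1, 64),))
edge = to_edge(exported)            # lower to ExecuTorch's edge dialect
et_program = edge.to_executorch()   # serialize for the on-device runtime
with open("policy.pte", "wb") as f:
    f.write(et_program.buffer)
```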
Speakers
Dmitriy Pastushenkov

AI Software Product Manager, Intel
Dmitriy Pastushenkov is a passionate Software Product Manager at Intel with more than 20 years of comprehensive, international experience in industrial automation, the industrial Internet of Things (IIoT), real-time operating systems, and AI. Dmitriy has held various roles in…
Samet Akcay

Principal AI Engineer, Intel
Samet Akcay is a Principal AI Engineer at Intel who leads ML R&D efforts across Open Edge Platform libraries, including Intel Geti, Datumaro, Anomalib, Training Extensions, and inference libraries. His research specializes in self-supervised learning and multi-modal object detection…
Wednesday April 8, 2026 14:45 - 14:55 CEST
Central Room
  Inference & Production
  • Audience Level Any
  • Slides Attached Yes

15:25 CEST

Lightning Talk: Trinity Large - Torchtitan on 2000+ B300s - Matej Sirovatka, Prime Intellect
Wednesday April 8, 2026 15:25 - 15:35 CEST
In this talk, we'll cover how to use torchtitan to scale training of ultra-sparse mixture-of-experts models across over 2,000 GPUs. We'll walk through the pre-training of Trinity Large, a 400B mixture-of-experts model trained entirely using torchtitan, focusing on maximizing throughput and minimizing the impact of hardware-induced failures. Along the way, we'll discuss challenges like fault tolerance, large-scale distributed training, and ensuring determinism - and how we've addressed each of these using torchtitan. Finally, we'll share insights and common pitfalls to avoid in your own large-scale training runs.
Speakers
Matej Sirovatka

Research Engineer, Prime Intellect
Research Engineer at Prime Intellect, mainly focusing on distributed training, performance and scaling.
Wednesday April 8, 2026 15:25 - 15:35 CEST
Founders Cafe
  Training Systems

15:25 CEST

Beyond the Theory: What Actually Breaks When You Scale Your Disaggregated PyTorch Models - Ekin Karabulut & Ron Kahn, NVIDIA
Wednesday April 8, 2026 15:25 - 15:50 CEST
As inference demand explodes, new techniques to optimize these deployments have emerged. One such technique is disaggregated inference, which splits inference into differently optimized workloads (e.g. prefill and decode) on separate workers. The theory is straightforward: better GPU utilization, inference performance, and tighter control over SLAs. The deployment in production is not.
Scaling happens at multiple connected levels. Adding prefill workers for a traffic spike? Those workers belong to a prefill leader and must scale as a unit. But your prefill-to-decode ratio matters too: scale prefill without matching decode capacity and you've merely moved the bottleneck. Placement also plays a role: place prefill and decode far apart in your network topology and KV-cache transfers will kill your latency. Standard autoscaling treats these as independent components. They're not.
In this talk, we'll share what we've learned running disaggregated vLLM and SGLang deployments on K8s: what broke, what worked, and how we're improving performance. We'll evaluate approaches from standard deployments to specialized APIs like LWS and Grove, and discuss how these integrate with frameworks like llm-d and Dynamo.
Speakers
Ekin Karabulut

AI/ML Developer Advocate, NVIDIA
Ekin is a Developer Advocate at NVIDIA, following the acquisition of Run:ai. Prior to that, she specialized as a data scientist in the privacy implications of federated learning systems with DNNs in distributed environments. Currently, she is exploring the efficient usage of large…
Ron Kahn

Senior Software Engineer, NVIDIA
Ron Kahn is a Senior Software Engineer on the NVIDIA Run:ai platform team. Ron works on the design and implementation of workload management systems that abstract Kubernetes complexity for AI practitioners. When not simplifying AI training jobs, Ron can be found cooking something…
Wednesday April 8, 2026 15:25 - 15:50 CEST
Central Room
  Inference & Production
  • Audience Level Any
  • Slides Attached Yes

15:55 CEST

Lightning Talk: Why Logging Isn’t Enough: Making PyTorch Training Regressions Visible in Practice - Sahana Venkatesh, Wayve
Wednesday April 8, 2026 15:55 - 16:05 CEST
PyTorch teams often log rich training metrics, yet still discover training regressions late, after significant developer time and GPU budget have already been spent. In this talk, I’ll share a practical pattern we used to turn PyTorch training metrics into an operational guardrail for large-model training.

The approach combines scheduled short and long training runs, standardized performance and stability metrics (throughput, memory, loss, divergence), and simple statistical baselines to automatically surface regressions via alerts without hard gates or complex infrastructure.

I’ll focus on why logging alone is insufficient, how we chose what to monitor, and what tradeoffs we encountered (false positives, alert fatigue, baseline drift). The goal is not a tool demo, but a reusable pattern other PyTorch teams can adapt to catch training regressions earlier and make retraining more predictable.
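For context, a minimal sketch of the statistical-baseline idea: compare a scheduled run's metrics against a rolling baseline of past runs and alert on a large deviation. Metric names and thresholds are illustrative, not the speaker's setup.

```python
# Sketch: z-score a new run against a rolling baseline of past runs.
# Illustrative metric names/thresholds; tune both to manage false positives.
import statistics

def check_regression(history, current, metric="tokens_per_sec", z_threshold=3.0):
    baseline = [run[metric] for run in history]
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    z = (current[metric] - mean) / stdev if stdev else 0.0
    if z < -z_threshold:  # far below the baseline mean
        print(f"ALERT: {metric} regressed "
              f"({current[metric]:.1f} vs baseline {mean:.1f}, z={z:.1f})")

history = [{"tokens_per_sec": 1500 + i} for i in range(20)]  # past runs
check_regression(history, {"tokens_per_sec": 1100})
```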
Speakers
Sahana Venkatesh

Software Engineer, Wayve
Wednesday April 8, 2026 15:55 - 16:05 CEST
Central Room
  Training Systems

15:55 CEST

From Gradients To Governance: Making PyTorch Lineage-Aware - Kateryna Romashko & Clodagh Walsh, Red Hat
Wednesday April 8, 2026 15:55 - 16:20 CEST
PyTorch was built to track how models learn, but not whether they should have. As AI systems increasingly operate on regulated, jurisdiction-bound, and sovereign data, lineage and policy can no longer live outside the runtime. This talk explores data sovereignty as a first-class constraint and argues that lineage is the missing primitive in modern ML frameworks. Building on PyTorch’s dynamic graphs and autograd system, we outline how tensors could carry origin, consent, and policy metadata through training and inference. The goal is not compliance tooling, but a lineage-aware PyTorch that enables trustworthy, auditable, and deployable AI across edge, federated, and European AI ecosystems.
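For context, a minimal sketch of one way tensors can carry metadata today, via a __torch_function__ subclass; this illustrates the general idea, not the design proposed in this talk.

```python
# Sketch: a tensor subclass that propagates an "origin" tag through ops.
# One possible realization of lineage-carrying tensors; illustrative only.
import torch

class LineageTensor(torch.Tensor):
    @staticmethod
    def __new__(cls, data, origin=None):
        t = torch.Tensor._make_subclass(cls, data)
        t.origin = origin
        return t

    @classmethod
    def __torch_function__(cls, func, types, args=(), kwargs=None):
        out = super().__torch_function__(func, types, args, kwargs or {})
        origins = {getattr(a, "origin", None)
                   for a in args if isinstance(a, LineageTensor)}
        if isinstance(out, LineageTensor):
            out.origin = "|".join(sorted(o for o in origins if o))
        return out

x = LineageTensor(torch.randn(4), origin="eu-health-registry")
y = x * 2
print(y.origin)  # origin metadata survives the op
```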
Speakers
Kateryna Romashko

Associate Software Engineer, Red Hat
Kateryna Romashko is a Software Engineer and a Master’s student in Computer Science, currently working in the Emerging Technology team at Red Hat. Her work focuses on ML systems, data lineage, and event-driven architectures, with hands-on experience across ML platforms, distributed…
Clodagh Walsh

Software Engineer, Red Hat
Clodagh is a software engineer at Red Hat working on the Emerging Technologies team under the office of the CTO. She has experience working with cloud native technologies. She is currently working on a range of AI related projects focused on topics such as MLOps and dLLMs.
Wednesday April 8, 2026 15:55 - 16:20 CEST
Master Stage
  Responsible AI & Compliance

15:55 CEST

DualPipe from Scratch: Implementing DeepSeek's 5D Parallelism in PyTorch - Dev Jadhav, ING Bank
Wednesday April 8, 2026 15:55 - 16:20 CEST
The DeepSeek-V3 paper describes 5D parallelism and DualPipe at a high level, but leaves critical implementation details undocumented. This session presents our open-source PyTorch reference implementation that fills those gaps - verified against the original architecture and designed for learning and extension.

We'll share what we discovered building it from scratch:
- Why K_pe is shared across heads in decoupled RoPE (not explicit in the paper)
- The critical timing of bias updates in auxiliary-loss-free load balancing
- How sigmoid routing separates selection scores from gate values (see the sketch below)
- The warmup formula that makes DualPipe achieve 3% bubble overhead
- Bugs we caught: causal mask position offsets, EMA initialization, capacity dropping priority

What you'll learn:
- 5D parallelism: how TP, PP, DP, EP, and SP interact at 2,048+ GPU scale
- DualPipe: building the bidirectional scheduler with a 55% throughput gain over GPipe
- Hierarchical all-to-all: two-level communication reducing MoE dispatch overhead by 4x
- Teachable abstractions: CapacityMetrics, ExpertSpecializationTracker, ScheduleStep enums
Prerequisites: torch.distributed basics.
Code: github.com/DevJadhav/deepseek-from-scratch
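For context on the routing detail flagged above, a minimal sketch of sigmoid routing with auxiliary-loss-free balancing as described in the DeepSeek-V3 recipe this project reconstructs: the bias only steers expert selection, while gate values come from the unbiased scores. Shapes and the normalization choice are illustrative.

```python
# Sketch: sigmoid routing where bias affects *selection* only and the
# gate values stay unbiased (auxiliary-loss-free balancing). Illustrative.
import torch

def sigmoid_route(hidden, router_w, expert_bias, top_k=8):
    scores = torch.sigmoid(hidden @ router_w)                   # [tokens, n_experts]
    _, expert_idx = (scores + expert_bias).topk(top_k, dim=-1)  # biased selection
    gates = scores.gather(-1, expert_idx)                       # unbiased gate values
    gates = gates / gates.sum(-1, keepdim=True)                 # normalize over chosen experts
    return expert_idx, gates

h = torch.randn(16, 512)                 # 16 tokens
w = torch.randn(512, 64)                 # 64 routed experts
idx, g = sigmoid_route(h, w, expert_bias=torch.zeros(64))
```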
Speakers
Dev Jadhav

Tech Lead ML Engineer, ING Bank
Dev Jadhav is a production AI/ML engineer with 10+ years building AI systems at scale. He currently leads ML engineering at Major Bank, developing financial-grade AI and large-scale model operations. Dev is the creator of DeepSeek From Scratch, an open-source implementation of DeepSe…
Wednesday April 8, 2026 15:55 - 16:20 CEST
Founders Cafe
  Training Systems

16:10 CEST

Lightning Talk: Ball Tracking and Detection in Soccer Videos - Comparison of VLMs and Traditional Pipelines - Maciej Szymkowski, Future Processing
Wednesday April 8, 2026 16:10 - 16:20 CEST
Nowadays, Vision-Language Models (VLMs) have plenty of different applications. However, we cannot simply assume they are the most accurate and precise solution for every problem; we must compare them against other pipelines. In this presentation, we compare on-premise models (Qwen 3 and InternVL-3.5) and cloud-based solutions (Gemini 3, GPT-5) with a traditional pipeline based on YOLOv11 and image processing techniques. The battlefield is ball detection and tracking in soccer match recordings from the SoccerNet database, captured from different angles and in diverse lighting (e.g., sunny, night) and weather conditions (e.g., snowy, rainy days). We used both broadcast videos and action and replay images, all marked manually to prepare a ground truth database. The models must recognize not only the ball but also track it through the whole sequence of images. To give equal chances, we fine-tuned YOLOv11 and provided additional knowledge to the VLMs in the form of a RAG pipeline. The comparison uses traditional machine learning metrics such as accuracy, precision, and recall.
Speakers
Maciej Szymkowski

AI Researcher and Senior Machine Learning Engineer, Future Processing
Maciej Szymkowski, PhD, is a Senior ML Engineer at Future Processing. Formerly Head of AI at Łukasiewicz PIT, his academic background spans BUT, WUT, and AGH. With 45+ publications, he specializes in Computer Vision (med/transport/sport), VLMs, and LLMs. His industry experience includes…
Wednesday April 8, 2026 16:10 - 16:20 CEST
Central Room
  Applications & Case Studies

16:25 CEST

Lightning Talk: Bridging the Gap: Engineering Compliant "Glass Box" Medical AI With PyTorch - Muhammad Saqib Hussain, Neurosonic & Mohaddisa Maryam, Neurosonic Academy
Wednesday April 8, 2026 16:25 - 16:35 CEST
While state-of-the-art models like NeuroBOLT demonstrate mathematical excellence in EEG-to-fMRI synthesis, they often remain clinically opaque. With the EU AI Act classifying medical AI as "high-risk," hospitals cannot deploy "black boxes"; they require systems that are transparent, auditable, and legally compliant.
This session presents a "Clinical Auditing System" built within the PyTorch ecosystem, designed to transform opaque deep learning models into transparent "glass boxes." I will demonstrate a workflow that backpropagates gradients from high-dimensional 4D fMRI volumes to identify the specific EEG spectral signatures driving those predictions.
Key technical takeaways:
1. The audit layer: implementing IntegratedGradients (Captum) to verify model fidelity, ensuring predictions stem from valid neural oscillations rather than noise artifacts.
2. Cross-modal reasoning: a technical demonstration of mapping 4D volumetric outputs back to 1D EEG frequency bands, enabling the model to "reason" through neurovascular coupling.
This presentation is designed for developers seeking to wrap PyTorch models in safety layers that satisfy the demands of healthcare regulation.
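For context, a minimal Captum sketch of the audit-layer idea: attribute a prediction back to its input signal. The toy model stands in for the EEG-to-fMRI pipeline described in the talk.

```python
# Sketch: Integrated Gradients as an audit layer. Toy model and input;
# the real pipeline attributes 4D fMRI outputs to EEG spectral features.
import torch
from captum.attr import IntegratedGradients

model = torch.nn.Sequential(
    torch.nn.Linear(256, 64), torch.nn.ReLU(), torch.nn.Linear(64, 1)
)
model.eval()

eeg_features = torch.randn(1, 256, requires_grad=True)  # toy EEG feature vector
ig = IntegratedGradients(model)
attributions, delta = ig.attribute(
    eeg_features, target=0, return_convergence_delta=True
)
# Large attributions concentrated on valid frequency-band features (rather
# than noise channels) are the "glass box" evidence the audit looks for.
print(attributions.abs().mean().item(), delta.item())
```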
Speakers
Mohaddisa Maryam

Miss, Neurosonic Academy
I am a first-year medical student in Italy.
Muhammad Saqib Hussain

Medical Student, AI Researcher and Neurotech Founder, ClinExplain
Muhammad Saqib is a 4th-year medical student at Comenius University Bratislava and Founder of Neurosonic Academy. His M.D. thesis explores AI for Sleep Medicine. Leveraging PyTorch and Captum, he builds "Glass Box" auditing frameworks to validate generative neuroimaging models against…
Wednesday April 8, 2026 16:25 - 16:35 CEST
Founders Cafe
  Applications & Case Studies