7-8 April, 2026
Paris, France
Note: The schedule is subject to change.

The Sched app allows you to build your schedule but is not a substitute for your event registration. You must be registered for PyTorch Conference Europe 2026 to participate in the sessions. If you have not registered but would like to join us, please go to the event registration page to purchase a registration.

This schedule is automatically displayed in CEST (UTC/GMT +2).
Venue: Master Stage
Tuesday, April 7
 

09:00 CEST

Keynote: Co-Evolution: How the Open Source Intelligence Stack Compounds - Mark Collier, Executive Director, PyTorch Foundation, General Manager, AI & Infrastructure, Linux Foundation
Tuesday April 7, 2026 09:00 - 09:10 CEST
Agentic coding systems have crossed a threshold from experimentation to measurable economic impact. Their rapid adoption reveals a deeper shift: modern AI capability emerges from the co-evolution of models, training frameworks, inference engines, reinforcement systems, hardware, and cloud infrastructure, with open source enabling the flow of code, research, and operational knowledge across the stack. As performance gaps narrow and costs fall, this compounding intelligence system accelerates innovation and spreads capability across companies, industries, and hardware platforms, raising a simple question for the community: how fast do we want to evolve?
Speakers

Mark Collier

Executive Director, PyTorch Foundation, General Manager, AI & Infrastructure, The Linux Foundation

Tuesday April 7, 2026 09:00 - 09:10 CEST
Master Stage
  Keynote Sessions
  • Audience Level Any
  • Slides Attached Yes

09:10 CEST

Keynote: PyTorch Updates - Edward Yang, Research Engineer, Meta
Tuesday April 7, 2026 09:10 - 09:30 CEST

Speakers

Edward Yang

Research Engineer, Meta
Edward Yang has worked on PyTorch at Meta since nearly the very beginning. Currently, he works on all aspects of PT2, but with a particular focus on dynamic shapes support across the stack.
Tuesday April 7, 2026 09:10 - 09:30 CEST
Master Stage
  Keynote Sessions
  • Audience Level Any
  • Slides Attached Yes

09:35 CEST

Keynote: Community Led Open Source RL - Joe Spisak, VP of Product & Head of Open Source, Reflection AI
Tuesday April 7, 2026 09:35 - 09:45 CEST

Speakers

Joe Spisak

VP of Product & Head of Open Source, Reflection AI
Joe Spisak is Product Director for AI at Meta with leadership roles in PyTorch, Llama and FAIR research. A veteran of the AI space with over 10 years of experience, Joe led product teams at Meta/Facebook, Google and Amazon, where he focused on open source AI, building developer tools...
Tuesday April 7, 2026 09:35 - 09:45 CEST
Master Stage
  Keynote Sessions
  • Audience Level Any

09:45 CEST

Sponsored Keynote: From One Node to Distributed Training and Inference. How the PyTorch Ecosystem Changed AI - Ramine Roane, Corporate Vice President of AI Product Management and Ecosystem Development, AMD
Tuesday April 7, 2026 09:45 - 09:50 CEST
PyTorch has evolved from a research framework into a distributed-first platform powering production AI at massive scale. As models grow to hundreds of billions of parameters, this talk explores the challenges of scaling inference across nodes and the emerging ecosystem, from Monarch and TorchTitan to open, hardware-agnostic systems, that makes it possible.
Speakers

Ramine Roane

Corporate Vice President of AI Product Management and Ecosystem Development, AMD
Ramine Roane is the Corporate Vice President of AI Product Management and Ecosystem Development at AMD, based in San Jose, California. Prior to this role, he served as Vice President of Data Center Acceleration within AMD’s Adaptive and Embedded Computing Group in 2022. Before the...
Tuesday April 7, 2026 09:45 - 09:50 CEST
Master Stage
  Keynote Sessions
  • Audience Level Any

09:55 CEST

Keynote: Stream Everything - Moving From Request Input to Streaming Input - Patrick von Platen, Research Engineer, Mistral AI
Tuesday April 7, 2026 09:55 - 10:10 CEST

Speakers

Patrick von Platen

Research Engineer, Mistral AI
Patrick von Platen is a Research Engineer at Mistral AI, focused on natural language processing and scalable AI systems. Currently, he contributes to vLLM, is a former core maintainer of Transformers, and created Diffusers.
Tuesday April 7, 2026 09:55 - 10:10 CEST
Master Stage
  Keynote Sessions
  • Audience Level Any
  • Slides Attached Yes

10:10 CEST

Sponsored Keynote: Any [ Agent | Model | Accelerator | Cloud ]. Open Source AI Unlocks the World's Potential - Maryam Tahhan, Principal Engineer & Nicolò Lucchesi, Senior Machine Learning Engineer, Red Hat
Tuesday April 7, 2026 10:10 - 10:15 CEST
Red Hat is shaping an open future for AI, delivering on the promise of 'Any Agent, Any Model, Any Accelerator, Any Cloud.' Discover the community advancements contributed in the PyTorch Foundation that empower enterprises to rapidly enable, test, and seamlessly scale AI workloads across their choice of infrastructure.
Speakers

Maryam Tahhan

Principal Engineer, Red Hat
Maryam is a Principal Engineer in Red Hat's Office of the CTO, where she focuses on standardising CPU inferencing performance evaluation to help effectively validate and scale ML workloads.

Nicolò Lucchesi

Senior Machine Learning Engineer, Red Hat
Nicolò is a Senior Machine Learning Engineer at Red Hat with a background in Deep Learning and Computer Vision. He works on Inference Optimization for vLLM, where he is a maintainer.
Tuesday April 7, 2026 10:10 - 10:15 CEST
Master Stage
  Keynote Sessions
  • Audience Level Any

10:15 CEST

Keynote: The Unbearable Lightness of (Agentic) Evaluations - Besmira Nushi, Senior Manager, AI Research, NVIDIA
Tuesday April 7, 2026 10:15 - 10:25 CEST
The discipline of evaluating large language models underwent a major transformation with the rise of general AI capabilities. Today, the field is undergoing yet another challenging transformation following the groundbreaking improvements in agentic tasks, which expect models and systems to plan and take autonomous actions in the real world. Measuring how well models and systems perform in such tasks is, however, still i) fragile from a methodological perspective, and ii) difficult to scale and generalize across different domains. This talk will first discuss common challenges in reproducing agentic evaluations, including differences in reference implementations, error handling, trajectory post-processing, and tooling definitions. Next, it will cover the infrastructural requirements that need to be addressed for such evaluations to run efficiently at scale. Finally, we will conclude with a set of (still nascent) best practices that can help alleviate “lightness” and build more consistent measurement pipelines.
Speakers

Besmira Nushi

Senior Manager - AI Research, NVIDIA
Besmira Nushi is a Senior AI Research Manager at NVIDIA in Zurich, where she leads research on LLM evaluation, model analysis and generalization, and real-world and agentic AI system measurements. Previously, she spent 7+ years at Microsoft Research advancing responsible AI, model...
Tuesday April 7, 2026 10:15 - 10:25 CEST
Master Stage
  Keynote Sessions
  • Audience Level Any

11:00 CEST

Helion 1.0: A High-Level DSL for Performance Portable Kernels - Oguz Ulgen, Meta
Tuesday April 7, 2026 11:00 - 11:25 CEST
ML practitioners increasingly author bespoke kernels, but achieving portable performance demands low-level expertise and repeated manual tuning for each accelerator generation and type. We introduce Helion, a Python-embedded DSL with a “PyTorch with tiles” programming model that preserves familiar PyTorch APIs while giving developers lower-level control over the generated kernels. Helion integrates tightly with TorchInductor to reuse PyTorch operator lowerings, automatically manages host/device boundaries, and provides rich language constructs for tiling, memory movement, and synchronization. The language defines an implicit high-dimensional configuration space that our autotuner explores, shifting the tuning burden from developers to automated search.

In this session, I will cover both the language and what is new since PTC'25, as well as announce the official GA launch. This session will be open to both experienced and beginner kernel authors.
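As a rough illustration of the tile-oriented programming model described above, here is a plain-Python sketch, not actual Helion code; real kernels use Helion's DSL constructs and are compiled through TorchInductor, with the tile size chosen by the autotuner rather than hard-coded:

```python
# Conceptual sketch of a tile-based elementwise kernel. Pure Python over
# lists; a tile DSL would compile this loop structure to device code and
# treat `tile_size` as an autotunable knob.

def tiled_add(x, y, tile_size=4):
    """Elementwise add, processed one tile at a time."""
    assert len(x) == len(y)
    out = [0] * len(x)
    for start in range(0, len(x), tile_size):   # one "tile" per iteration
        tile = slice(start, start + tile_size)
        out[tile] = [a + b for a, b in zip(x[tile], y[tile])]
    return out

print(tiled_add([1, 2, 3, 4, 5], [10, 20, 30, 40, 50]))
```

The point of the model is that the user writes the per-tile computation in familiar array syntax while tiling, memory movement, and tuning are handled below the surface.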
Speakers

Oguz Ulgen

Software Engineer, Meta
I'm a software engineer at Meta where I used to work on the Hack programming language and now work on PyTorch.
Tuesday April 7, 2026 11:00 - 11:25 CEST
Master Stage

11:30 CEST

Tour De Force: LLM Inference Optimization From Simple To Sophisticated - Christin Pohl, Microsoft
Tuesday April 7, 2026 11:30 - 11:55 CEST
Making your GPUs go brrr is complex. Efficient LLM inference requires navigating a maze of optimization techniques each with different trade-offs. This session provides a practical journey through inference optimizations, clearly categorized by implementation effort.

We'll explore techniques across three levels:

- Model choices (start here): Model selection, quantization, smart routing

- Library-level improvements (using PyTorch-based frameworks like vLLM, SGLang, TensorRT-LLM): Continuous batching, KV-cache management, tensor parallelism

- Custom implementations: Speculative decoding with custom draft heads, disaggregated inference, fine-tuning smaller models

The session covers practical trade-offs and key metrics: time to first token, inter-token latency, throughput, and cost per token.

Whether deploying your first model or optimizing at scale, this talk delivers actionable insights into which techniques to prioritize for deeper investigation.
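To make one of the library-level techniques concrete, the sketch below shows why KV caching cuts per-token work during decoding. It is plain Python: a real engine caches per-layer key/value tensors, and `attend` here is only a numeric stand-in for the attention computation:

```python
# Toy sketch of KV caching in an autoregressive decode loop. The "keys"
# for already-seen tokens are computed once and reused at every step,
# instead of being recomputed for the whole sequence each time.

def attend(query, keys):
    # Stand-in for attention: sum of query-key products.
    return sum(query * k for k in keys)

def decode(prompt, steps):
    kv_cache = list(prompt)          # keys for the prompt, computed once
    outputs = []
    for _ in range(steps):
        q = len(kv_cache)            # stand-in for the new token's query
        score = attend(q, kv_cache)  # reuses cache; no recompute of old keys
        outputs.append(score)
        kv_cache.append(q)           # append only the new token's key
    return outputs

print(decode([1, 2, 3], steps=2))
```

Without the cache, each step would reprocess the entire prefix, turning linear per-token cost into quadratic total cost.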
Speakers

Christin Pohl

Global Black Belt Solution Engineer AI Infrastructure, Microsoft
Christin Pohl is a Global Black Belt Solution Engineer for AI Infrastructure at Microsoft (Switzerland), now in her third year. After building her first chatbot in 2018 and 5+ years at SAP, she helps enterprises worldwide choose the right GPU, run LLM training and inference end-to-end...
Tuesday April 7, 2026 11:30 - 11:55 CEST
Master Stage

12:00 CEST

Lightning Talk: Bringing Google’s Colossus to PyTorch: Rapid Storage via fsspec to Keep GPUs Busy - Ankita Luthra & Trinadh Kotturu, Google
Tuesday April 7, 2026 12:00 - 12:10 CEST
As PyTorch models scale to billions of parameters, the bottleneck has quietly shifted from compute to storage. Modern GPU clusters often sit idle, "starving" for data while waiting on legacy REST-based protocols. This talk introduces Rapid Storage: a fundamental architectural shift bringing Google’s Colossus stateful protocol (which powers many of Google’s products) to PyTorch via fsspec, a common Pythonic file interface used by many frameworks within the PyTorch ecosystem.
By bypassing REST APIs entirely via persistent gRPC streams to the storage layer, we eliminate protocol overhead. In this talk, we also dive into how Rapid achieves <1ms random read/write latency, 20x faster data access, and a massive 6 TB/s of aggregate throughput. Crucially, it delivers up to 10x lower tail latency for random I/O, preventing the stragglers that often stall distributed training jobs.
Beyond raw speed, we will deconstruct the integration with gcsfs and the broader fsspec ecosystem. This ensures that high-performance I/O is available across the entire data stack, including Dask, Ray, HF Datasets, and vLLM. Join us to learn how to stop wasting GPU cycles and achieve linear scaling in the cloud.
Speakers

Ankita Luthra

Senior Software Engineer, Google
Ankita Luthra is a Software Developer at Google, focused on AI/ML infrastructure and scalable data pipelines. Her work with open-source tools like fsspec (gcsfs) and gcsfuse improves how frameworks such as PyTorch and JAX efficiently access data from Google Cloud Storage.

Trinadh Kotturu

Senior Product Manager, Google
Trinadh Kotturu is a Senior Product Manager specializing in AI/ML and analytics client strategy at Google. An alumnus of IIM Bangalore with 12 years of experience, he has a proven track record of shipping v1 products and scaling them into robust platform services. His expertise spans large-scale distributed storage systems, autonomous driving, and system resiliency...
Tuesday April 7, 2026 12:00 - 12:10 CEST
Master Stage
  Training Systems
  • Audience Level Any
  • Slides Attached Yes

12:15 CEST

Lightning Talk: FlexAttention + FlashAttention-4: Fast and Flexible - Driss Guessous, Meta
Tuesday April 7, 2026 12:15 - 12:25 CEST
FlexAttention democratized attention research by letting researchers prototype custom attention variants in PyTorch without hand-written CUDA. Over 1,000 repos have adopted it, and dozens of papers cite it. But flexibility came at a cost: FlexAttention achieved only ~60% of FlashAttention-3's throughput on Hopper, and the gap widened dramatically on Blackwell GPUs.

We bridged this gap by integrating FlexAttention with FlashAttention-4, the new CuTeDSL-based implementation optimized for Blackwell's async pipelines and tensor memory. PyTorch's Inductor now generates CuTeDSL score/mask modifications directly, enabling JIT instantiation of FA4 for arbitrary attention variants.

Results: 1.2–3.2× speedups over the Triton backend on compute-bound workloads. On B200, patterns like ALiBi, document masking, and sliding window see up to 2.7× forward and 3× backward speedups. On Hopper, gains range from 1.3–2× across all sequence lengths.

This talk covers the technical integration: how Inductor lowers score mods to CuTeDSL, how FA4's warp-specialized kernel accommodates block-sparse iteration, and practical considerations for users adopting the Flash backend today.
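The score-modification idea at the heart of FlexAttention can be sketched in plain Python. This is illustrative only: the real API applies `score_mod` callables to tensor blocks inside a fused kernel, and the `causal` modifier below is just one example pattern:

```python
import math

# Minimal sketch of user-programmable attention: a callable edits each
# attention score (given its query/key indices) before the softmax.

def attention(q, k, v, score_mod):
    out = []
    for i, qi in enumerate(q):
        scores = [score_mod(qi * kj, i, j) for j, kj in enumerate(k)]
        exps = [math.exp(s) for s in scores]
        z = sum(exps)
        out.append(sum(e / z * vj for e, vj in zip(exps, v)))
    return out

def causal(score, q_idx, kv_idx):
    # Mask out future positions; -inf becomes zero weight after softmax.
    return score if kv_idx <= q_idx else float("-inf")

out = attention([1.0, 1.0], [1.0, 1.0], [10.0, 20.0], causal)
print(out)  # first query attends only to v[0]
```

Swapping in a different `score_mod` (ALiBi bias, sliding window, document masking) changes the attention variant without touching the kernel, which is exactly the flexibility the FA4 integration now makes fast.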
Speakers

Driss Guessous

Machine Learning Engineer, Meta
I am currently a machine learning engineer working on core development of PyTorch. I received my Masters in Computer Science from the University of Illinois at Urbana-Champaign. I received a dual degree in Physics and Applied Mathematics from The Ohio State University. I also won...
Tuesday April 7, 2026 12:15 - 12:25 CEST
Master Stage

13:45 CEST

Bringing ExecuTorch To the Next Frontiers of Edge AI - Mergen Nachin, Meta
Tuesday April 7, 2026 13:45 - 14:10 CEST
Since the General Availability release of ExecuTorch 1.0 in October 2025, our team has continued to advance the state of the on-device AI software stack. In this talk, we will share our upcoming roadmap and present demos that highlight ExecuTorch’s deployment across the next frontiers, such as AI PCs, robotics, TinyML devices, and the integration of AI agents to improve productivity for on-device deployment.

ExecuTorch is built on open source collaboration, encouraging community adoption, contributions from hardware partners, and interoperability with other ecosystem libraries. We will discuss how these foundations set the stage for the next phase of edge AI with ExecuTorch.
Speakers

Mergen Nachin

Software Engineer, Meta
Mergen Nachin is a Software Engineer specializing in creating rich AI experiences on low latency, high performance, and privacy-aware embedded systems. With a background in distributed systems, developer infrastructure, remote sensing, and localization, he brings a versatile skill...
Tuesday April 7, 2026 13:45 - 14:10 CEST
Master Stage
  Applications & Case Studies

14:15 CEST

Lightning Talk: Accelerating On-Device ML Inference With ExecuTorch and Arm SME2 - Jason Zhu, Arm
Tuesday April 7, 2026 14:15 - 14:25 CEST
As on-device AI workloads grow in complexity, achieving low-latency inference within mobile power constraints remains a central challenge. We examine how ExecuTorch, combined with Arm’s Scalable Matrix Extension 2 (SME2), enables efficient CPU deployments of production AI workloads. We present a case study of SqueezeSAM, a segmentation model deployed in real-world mobile applications. Using ExecuTorch with XNNPACK delegation and SME2-optimized kernels, we evaluate INT8 and FP16 inference on a flagship smartphone. Moving beyond aggregate latency, we apply operator-level profiling to decompose runtime across convolution, GEMM, elementwise, and data movement operators, showing how hardware acceleration reshapes bottlenecks in the execution stack. SME2 delivers up to 3.9x end-to-end speedup on a single CPU core, materially altering runtime composition and revealing data movement as the primary post-acceleration bottleneck. This session presents a practical workflow for deploying, profiling, and systematically optimizing on-device PyTorch models, demonstrating how SME2 expands the viable design space for interactive mobile AI.
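The operator-level decomposition workflow described above can be sketched as a simple aggregation over profiler events. The event timings below are fabricated for illustration; a real trace would come from the ExecuTorch profiler:

```python
from collections import defaultdict

# Aggregate per-operator runtime into percentage shares, the view used
# to see which operator class dominates after hardware acceleration.

def decompose(events):
    """events: (op_kind, milliseconds) pairs from a profiler trace."""
    totals = defaultdict(float)
    for op, ms in events:
        totals[op] += ms
    grand = sum(totals.values())
    return {op: round(100 * ms / grand, 1) for op, ms in totals.items()}

trace = [("gemm", 2.0), ("conv", 1.0), ("data_movement", 5.0),
         ("gemm", 1.0), ("elementwise", 1.0)]
print(decompose(trace))  # data movement dominates in this made-up trace
```

The talk's finding follows this shape: once GEMM and convolution are accelerated by SME2, the percentage view shifts and data movement surfaces as the new bottleneck.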
Speakers

Jason Zhihuai Zhu

Senior Principal Engineer, Arm
Jason Zhu is a Senior Principal Engineer at Arm focused on hardware and software co-optimization for AI systems. With a background in quantum physics and experience spanning AI research and product engineering across major technology companies, he works across the full execution stack...
Tuesday April 7, 2026 14:15 - 14:25 CEST
Master Stage
  Inference & Production
  • Audience Level Any
  • Slides Attached Yes

14:30 CEST

Lightning Talk: Combo Kernels: Horizontal Fusion Optimization in Torch.compile - Karthick Panner Selvam & Elias Ellison, Meta
Tuesday April 7, 2026 14:30 - 14:40 CEST
Combo kernels are a compiler optimization in PyTorch Inductor that horizontally fuses multiple independent operations into a single Triton kernel launch, reducing GPU kernel launch overhead and improving memory locality.

The Problem: Models generate many small, independent operations like weight preprocessing and tensor copies. Each launch incurs overhead. For models with many such operations, this becomes a bottleneck.

The Solution: Combo kernels combine multiple operations into one kernel using a dispatch mechanism. A single program ID routes execution to the appropriate subkernel based on cumulative block boundaries. This eliminates redundant launches while preserving correctness.

Key Innovations:

Per-subkernel block dimensions: Each subkernel gets its own optimized block size instead of sharing one size across all, enabling better autotuning.

Flattened grid dispatch: We collapse the multi-dimensional block grid into a single dimension.

Results: On H100 GPUs, combo kernels deliver geomean speedups of +7.38% for HuggingFace, and +5.97% for TorchBench. The optimization is enabled by default in the vLLM repository for LLM inference acceleration.
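The dispatch mechanism described above can be sketched in plain Python. The block counts here are invented; in Inductor they are derived from each subkernel's tiling, and the routing happens inside a single generated Triton kernel:

```python
import bisect

# Sketch of combo-kernel dispatch: several independent "subkernels" share
# one launch, and each simulated program ID is routed to the owning
# subkernel via cumulative block boundaries.

def combo_kernel(subkernels, blocks_per_subkernel):
    bounds = []                      # e.g. [2, 5] for block counts [2, 3]
    total = 0
    for n in blocks_per_subkernel:
        total += n
        bounds.append(total)
    log = []
    for pid in range(total):                     # one flattened 1-D grid
        idx = bisect.bisect_right(bounds, pid)   # which subkernel owns pid?
        local = pid - (bounds[idx - 1] if idx else 0)
        log.append(subkernels[idx](local))       # run with local block id
    return log

kernels = [lambda b: ("copy", b), lambda b: ("scale", b)]
print(combo_kernel(kernels, [2, 3]))
```

One launch covers all five blocks here; on a GPU that is the launch-overhead saving the talk quantifies.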
Speakers

Elias Ellison

Software Engineer, Meta
Elias has been working on the PyTorch team for four years, most recently on the torch.compile stack.

Karthick Panner Selvam

Software Engineer, Meta
Karthick Panner Selvam is a SWE at Meta Superintelligence Lab, working on the PyTorch compiler team to enhance performance and scalability for large models. He earned his PhD in Machine Learning for Systems at the University of Luxembourg, collaborating with Google DeepMind, ECMWF, and Frontier...
Tuesday April 7, 2026 14:30 - 14:40 CEST
Master Stage
  Frameworks & Compilers
  • Audience Level Any
  • Slides Attached Yes

14:45 CEST

Model-Changing Transforms With Torch.compile - Thomas Viehmann, Lightning AI
Tuesday April 7, 2026 14:45 - 15:10 CEST
torch.compile is the go-to mechanism for increasing the performance of PyTorch models of all shapes and forms.

While it is widely understood how to change the computation by manipulating the FX trace representation, it becomes a much more general tool by also transforming model and input expectations (the guards):
This enables model-changing transformations like quantization and distributed execution without needing to adapt the model to them.

We take a deep dive into the torch.compile internals to see what's going on under the hood and how we can hook into the gears to enable distributed (starting from a single-GPU model) and quantization.
In this quest, marvel at the interplay between PyTorch's Python code, the Python interpreter, and PyTorch's C++ code that together enable the Dynamo frontend of torch.compile, and then use a big hammer to apply it in unexpected ways. Building on our experience with Lightning Thunder, an experimental compiler for PyTorch models, we propose a transform mechanism taking care of compute, model, and weights.
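As a toy illustration of the general idea, transforming what a model computes without editing its source, here is a heavily simplified sketch in which the model's ops are reified as a lookup table. Dynamo instead intercepts at the Python-bytecode level, and the "quantized" multiply is purely hypothetical:

```python
# A "model" whose operations are resolved through a table, so a transform
# can swap implementations without touching the model's own code.

def model(x, ops):
    return ops["mul"](ops["add"](x, 1.0), 2.0)

eager_ops = {"add": lambda a, b: a + b, "mul": lambda a, b: a * b}

def quantize_transform(ops):
    # Replace multiply with a coarsely "quantized" version (illustrative
    # stand-in for a real quantization rewrite on the FX graph).
    q = dict(ops)
    q["mul"] = lambda a, b: round(a * b)
    return q

print(model(1.25, eager_ops))                      # 4.5
print(model(1.25, quantize_transform(eager_ops)))  # 4
```

The same shape applies to the distributed case: the transform swaps single-GPU ops for sharded or collective-backed ones, leaving the model definition unchanged.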
Speakers

Thomas Viehmann

Thunder, Lightning AI
Thomas Viehmann works on PyTorch and optimization at Lightning AI. He has been a PyTorch contributor since 2017, founded MathInf GmbH in 2018, and co-authored “Deep Learning with PyTorch” in 2020.
Tuesday April 7, 2026 14:45 - 15:10 CEST
Master Stage

15:40 CEST

Lightning Talk: Graph Based Pipeline Parallelism - Sanket Purandare, Meta & Simon Fan, Meta PyTorch
Tuesday April 7, 2026 15:40 - 15:50 CEST
Pipeline parallelism is vital for large models, but advanced schedules for SOTA LLMs are difficult to express in current PyTorch. MoE communication dominates the critical path, making latency hiding essential. Leading systems use fw-bw overlapping; fw-fw and bw-bw overlapping further boost throughput.

Schedules like ZeroBubbleV and DualPipeV rely on dI-dW backward splitting for fine-grained overlap. However, eager-mode implementations require a patchwork of fragile integrations (multi-threading, custom autograd functions, activation checkpointing, etc.) that rely on implicit behavior and hand-written logic with poor torch.compile compatibility and upstream composability.

We present Graph-Based PP: stages are compiled to reusable FX graphs executed via an explicit schedule language. Users write standard PyTorch code while specifying schedules at varying granularity; all manipulations run as graph passes, abstracting complexity away from user code and into the compiler/runtime, allowing for greater composability.

We have integrated Graph-PP into TorchTitan and AutoParallel on real MoE workloads, targeting upstream inclusion in torch.distributed.
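The notion of an explicit schedule language can be sketched minimally: a schedule is an ordered list of (stage, microbatch, phase) actions that a runtime executes against precompiled stage graphs. The schedule below is illustrative, not a real ZeroBubbleV or DualPipeV layout:

```python
# A pipeline schedule as data: the runtime just walks the action list.
# dI (input grads) and dW (weight grads) are separate phases, which is
# what enables the fine-grained interleaving discussed above.

def run_schedule(schedule, stage_fns):
    log = []
    for stage, mb, phase in schedule:
        log.append(stage_fns[stage](mb, phase))
    return log

# Toy single-stage schedule with dI/dW backward splitting.
schedule = [
    (0, 0, "F"), (0, 1, "F"),
    (0, 0, "dI"), (0, 0, "dW"),
    (0, 1, "dI"), (0, 1, "dW"),
]
stage_fns = {0: lambda mb, phase: f"stage0:{phase}(mb{mb})"}
print(run_schedule(schedule, stage_fns))
```

Because the schedule is plain data, reordering or interleaving actions (e.g. to hide MoE communication) is a graph/schedule pass rather than a change to user model code.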
Speakers

Simon Fan

Software Engineer, Meta
I work on the PyTorch team at Meta, focusing on distributed training efficiency.

Sanket Purandare

Research Engineer, Meta
Currently, Sanket serves as a Research Engineer at Meta's Superintelligence Lab, on the PyTorch Distributed and Compiler team. He specializes in performance optimization of large-scale training of LLMs based on Mixture of Experts architectures.

Prior to this he obtained his PhD in A...
Tuesday April 7, 2026 15:40 - 15:50 CEST
Master Stage
  Frameworks & Compilers

15:55 CEST

Lightning Talk: Beyond Generic Spans: Distributed Tracing for Actionable LLM Observability - Sally O'Malley & Greg Pereira, Red Hat
Tuesday April 7, 2026 15:55 - 16:05 CEST
End-to-end observability is non-negotiable for production LLMs to track performance, attribute costs, and validate optimizations. Generating actionable traces from complex distributed inference remains a significant challenge.

We implemented tracing for llm-d, a high-performance distributed LLM inference framework. Using manual OpenTelemetry instrumentation with carefully crafted spans at critical paths, we expose insights that generic tooling can't capture.

This talk explores how distributed tracing illuminates requests through unique inference scenarios:

* Prefix cache-aware routing: Track cache hits and validate whether intelligent scheduling improves TTFT
* Prefill/decode disaggregation: Analyze why each request chose split vs unified processing based on cache locality.
* Wide expert-parallelism: Profile MoE models across multi-node deployments
* Workload autoscaling: Correlate request patterns with scaling decisions

Attendees will learn why LLMOps requires a new approach to distributed tracing, contrasting it with traditional microservices, and how to instrument inference stacks effectively. Walk away ready to add meaningful observability to your own deployments.
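The manual-instrumentation pattern behind this work can be sketched with stdlib tools alone. The real implementation uses OpenTelemetry SDK spans; the span names and attributes below are invented for illustration:

```python
import time
from contextlib import contextmanager

# Minimal stand-in for manually instrumented spans: each span wraps a
# critical inference step and records attributes (e.g. cache-hit status)
# alongside its duration. An OTel exporter would ship these to a backend.

SPANS = []

@contextmanager
def span(name, **attributes):
    start = time.perf_counter()
    try:
        yield
    finally:
        SPANS.append({
            "name": name,
            "attrs": attributes,
            "duration_s": time.perf_counter() - start,
        })

with span("schedule", prefix_cache_hit=True):
    with span("prefill", tokens=128):
        time.sleep(0.001)

print([s["name"] for s in SPANS])  # inner span closes (and records) first
```

The attributes are what make spans actionable: correlating `prefix_cache_hit` with the prefill span's duration is exactly the cache-aware-routing validation described above.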
Speakers

Greg Pereira

Sr. Machine Learning Engineer, Red Hat
Greg began his career as an SRE focusing on CI/CD and automation in the Emerging Technologies org at Red Hat. After transferring to the platform and services team he started from the ground up, refocusing on AI-centric software development. Three years later he has been involved in building...

Sally O'Malley

Principal Software Engineer, Red Hat

Tuesday April 7, 2026 15:55 - 16:05 CEST
Master Stage

16:10 CEST

TorchStore: What We Learned Building Distributed Storage Solutions for AsyncRL - Lucas Pasqualin, Danielle Pintz, Allen Wang & Amir Afzali, Meta
Tuesday April 7, 2026 16:10 - 16:35 CEST
Asynchronous Reinforcement Learning (AsyncRL) workloads have unique data sharing requirements: actors must efficiently exchange large tensors across processes and nodes, often with different sharding configurations, not just at checkpoint time, but continuously during training for live weight synchronization. This talk presents TorchStore, an open-source distributed tensor storage system built on Monarch actors that tackles these challenges. We'll share the key lessons learned, from designing pluggable transport backends (RDMA, shared memory, RPC) to implementing transparent live DTensor resharding that lets producers and consumers use entirely different parallelism strategies. We'll also discuss the friction we encountered integrating with inference engines like vLLM, where differing model definitions and integrations present new bottlenecks. Whether you're building actor-based training systems or thinking about disaggregated training-inference architectures, you'll leave with practical insights on distributed tensor storage design.
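A toy sketch of the live-resharding idea: a producer stores a tensor under one sharding, and a consumer fetches it under a different one, with the store translating between layouts. This is plain Python over lists; the real system handles DTensors with pluggable transports (RDMA, shared memory, RPC), and the class and method names here are invented:

```python
# Minimal resharding store: put() splits data by the producer's layout,
# get() returns the slice matching the consumer's (possibly different)
# layout. Both sides stay oblivious to each other's parallelism strategy.

class Store:
    def __init__(self):
        self._shards = {}

    def put(self, key, data, num_shards):
        n = len(data)
        self._shards[key] = [
            data[i * n // num_shards:(i + 1) * n // num_shards]
            for i in range(num_shards)
        ]

    def get(self, key, shard_idx, num_shards):
        # Reassemble the logical tensor, then reslice for the consumer.
        flat = [x for shard in self._shards[key] for x in shard]
        n = len(flat)
        return flat[shard_idx * n // num_shards:(shard_idx + 1) * n // num_shards]

store = Store()
store.put("w", list(range(8)), num_shards=4)  # producer: 4-way sharded
print(store.get("w", 0, num_shards=2))        # consumer: 2-way sharded
```

A production store would compute shard overlaps and transfer only the needed ranges rather than reassembling, but the contract, different layouts on each side, is the same.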
Speakers

Lucas Pasqualin

ML Engineer, PyTorch (Meta)
Lucas has been developing Machine Learning Applications and Machine Learning infrastructure at scale for years, and has recently been focused on extending the product offering of PyTorch's Distributed Checkpointing stack.

Allen Wang

Software Engineer, Meta

Danielle Pintz

Software Engineer, Meta
Danielle is a software engineer working on PyTorch, currently focused on TorchStore and Async RL. She previously worked on the Llama Research team.

Amir Afzali

Software Engineer, Meta
Software engineer working on PyTorch distributed infra and large-scale training.
Tuesday April 7, 2026 16:10 - 16:35 CEST
Master Stage
 