The Sched app allows you to build your schedule but is not a substitute for your event registration. You must be registered for PyTorch Conference Europe 2026 to participate in the sessions. If you have not registered but would like to join us, please go to the event registration page to purchase a registration.
The training systems driving today's most advanced AI models are distributed, dynamic, and complex. Pre-training relies on layered parallelism and careful fault isolation. Post-training RL spans thousands of GPUs while coordinating verifiers, compilers, and code execution.
Systems complexity pulls focus away from the core algorithms: developers are forced to assemble systems from schedulers, RPC stacks, container orchestrators, observability tooling, service discovery, and app frameworks just to begin work.
Monarch is a distributed programming framework for PyTorch that makes the cluster programmable through a single-program Python API. It exposes the supercomputer as a coherent, directly controllable system, bringing the experience of local development to large-scale training while handling fault tolerance, orchestration, and tooling integration.
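To give a feel for the single-program idea, here is a minimal conceptual sketch using only the Python standard library. This is NOT Monarch's API; it only illustrates the pattern of one controller script directly driving a mesh of workers and gathering their results, rather than assembling separate per-rank launch scripts. All names (`train_step`, `controller`) are invented for illustration.

```python
# Conceptual sketch only -- NOT Monarch's API. Illustrates the
# "single program controls the whole cluster" idea using threads
# as stand-ins for remote workers.
from concurrent.futures import ThreadPoolExecutor


def train_step(rank: int) -> float:
    # Stand-in for per-worker work (e.g., a forward/backward pass).
    return 0.1 * rank


def controller(world_size: int = 4) -> list[float]:
    # One script dispatches work to every "worker" and sees all
    # results in one place, so supervision, debugging, and fault
    # handling can live in ordinary local Python control flow.
    with ThreadPoolExecutor(max_workers=world_size) as pool:
        return list(pool.map(train_step, range(world_size)))


if __name__ == "__main__":
    print(controller())
```

In a framework like Monarch, the workers would be processes on remote GPUs rather than local threads, but the developer experience aimed for is the same: one Python program you can read, step through, and debug.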
In this talk, we will demonstrate how Monarch enables developers to focus on training logic rather than glue code, extend the system easily, and supervise and debug distributed jobs through a unified programming interface.
Attendees will leave with a clear model for building robust, scalable, and customizable distributed PyTorch systems using Monarch.