The Sched app allows you to build your schedule but is not a substitute for your event registration. You must be registered for PyTorch Conference Europe 2026 to participate in the sessions. If you have not registered but would like to join us, please go to the event registration page to purchase a registration.
This schedule is displayed in CEST (UTC/GMT +2).
Many European PyTorch teams run their GPU workloads on hyperscalers like Azure, AWS, or GCP—often without realizing that this places their data and models under US jurisdiction.
This lightning talk shows how PyTorch compute nodes can be migrated to European cloud providers while keeping the full ML environment intact. Through a live demo, we migrate a GPU-enabled PyTorch VM—including CUDA drivers and Jupyter notebooks—from Azure to European infrastructure, without retraining models or rebuilding environments.
The focus is on practical challenges: GPU compatibility, reproducibility, and data movement across clouds.
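One way to make the reproducibility challenge concrete is to capture an environment fingerprint on the source VM and diff it against the target after migration. The sketch below is illustrative only (the CUDA and driver values are hypothetical placeholders, not output of any real probe); on a real GPU node you would populate those fields from `nvidia-smi` and `torch.version.cuda`.

```python
import platform
import sys

def environment_fingerprint():
    """Collect a minimal fingerprint of the ML environment.

    In a real migration you would also record the GPU model, the
    NVIDIA driver and CUDA versions (e.g. via nvidia-smi), and a
    pinned package list (pip freeze); those fields are stubbed
    here with hypothetical values so the sketch runs anywhere.
    """
    return {
        "python": platform.python_version(),
        "platform": sys.platform,
        "cuda": "12.4",         # placeholder, e.g. torch.version.cuda
        "driver": "550.54.15",  # placeholder, e.g. from nvidia-smi
    }

def diff_fingerprints(before, after):
    """Return the keys whose values changed between source and target VM."""
    return {k: (before[k], after.get(k))
            for k in before
            if before.get(k) != after.get(k)}

# Compare the environment captured before and after migration.
src = environment_fingerprint()
dst = dict(src, driver="550.90.07")  # target cloud ships a newer driver
print(diff_fingerprints(src, dst))   # only the driver field differs
```

A diff like this catches the most common post-migration surprise: the target provider ships a different driver or CUDA toolkit, which can silently change numerical behavior or break compiled extensions.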
The migration is demonstrated using DigitalNomadSky, an open-source Python platform for cross-cloud VM migration, but the lessons apply broadly to PyTorch teams aiming to reduce jurisdictional risk and vendor lock-in.
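DigitalNomadSky's actual API is not documented here; the following is a generic sketch of the snapshot/transfer/restore workflow that any cross-cloud migration tool automates, with all function and provider names hypothetical and the cloud calls simulated.

```python
from dataclasses import dataclass

@dataclass
class VMSnapshot:
    """Simplified stand-in for a cloud VM disk snapshot."""
    name: str
    size_gb: int
    gpu: str

def export_snapshot(vm_name: str) -> VMSnapshot:
    # Hypothetical: on Azure this step would call the disk-snapshot API.
    return VMSnapshot(name=vm_name, size_gb=200, gpu="A100")

def check_gpu_compatibility(snap: VMSnapshot, target_gpus: list) -> bool:
    # The target provider must offer the same (or a compatible) GPU,
    # otherwise CUDA kernels built for one architecture may not run.
    return snap.gpu in target_gpus

def import_snapshot(snap: VMSnapshot, provider: str) -> str:
    # Hypothetical: restore the image on the European provider and
    # return an identifier for the new VM.
    return f"{provider}/{snap.name}"

snap = export_snapshot("pytorch-gpu-node")
assert check_gpu_compatibility(snap, ["A100", "H100"])
new_vm = import_snapshot(snap, "eu-cloud")
print(new_vm)  # eu-cloud/pytorch-gpu-node
```

The compatibility check before import is the step that matters most for PyTorch workloads: moving a disk image is easy, but the target GPU generation determines whether existing CUDA environments work unchanged.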
Key takeaways:
- Why PyTorch workloads on hyperscalers raise sovereignty concerns for EU teams
- What actually breaks (and what doesn't) when migrating GPU-based ML nodes
- How to regain control over ML infrastructure without rewriting your stack
I am a software architect and lead developer of the open-source project DigitalNomadSky. I have extensive experience with Microsoft Azure from working at Microsoft and supporting large-scale cloud migrations. My work focuses on supporting data science and ML teams with cloud infrastructure...