The Sched app allows you to build your schedule but is not a substitute for your event registration. You must be registered for PyTorch Conference Europe 2026 to participate in the sessions. If you have not registered but would like to join us, please go to the event registration page to purchase a registration.
Right now, MCP relies mostly on HTTP and STDIO. That works for simple scripts, but if you're running high-performance PyTorch models in production, you're going to hit a wall: when you're moving large context windows or tensor metadata, the overhead of JSON-RPC starts to hurt.

We're introducing SEP-1352, which adds gRPC as a native transport for MCP. Since gRPC is already the standard for microservices, it's a natural fit for the PyTorch ecosystem. By using Protobuf instead of JSON, we get much higher throughput and lower latency, essentially making the communication between models and tools as fast as the models themselves.

In this session, we'll cover:

- Why Protobuf matters: moving away from bulky JSON to keep bandwidth low and speed high.
- Built-in streaming: how to use gRPC's streaming to handle long-running model outputs without timeouts.
- Production-ready features: using the same auth, load balancing, and service mesh (mTLS) you already use for your ML microservices.
- Upgrading your stack: how to move from PyTorch MCP HTTP services to MCP gRPC services without throwing away your existing infra.
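To give a feel for the bandwidth argument, here is a minimal sketch comparing a JSON-RPC-style text payload with a fixed-width binary encoding of the same tensor metadata. The message fields, the `tensor/meta` method name, and the dtype enum value are all illustrative assumptions, not part of SEP-1352; the stdlib `struct` module stands in for Protobuf's wire encoding, which uses a similar length-prefixed binary layout.

```python
import json
import struct

# Hypothetical tensor-metadata payload: a dtype code, the rank, and the shape.
shape = [64, 3, 224, 224]
dtype_code = 7          # illustrative enum value standing in for float32
ndim = len(shape)

# JSON-RPC-style encoding: field names and digits travel as UTF-8 text.
json_msg = json.dumps({
    "jsonrpc": "2.0",
    "method": "tensor/meta",
    "params": {"dtype": "float32", "shape": shape},
}).encode("utf-8")

# Binary encoding (Protobuf-style, sketched with stdlib struct):
# two unsigned bytes followed by ndim little-endian uint32s, no field names.
binary_msg = struct.pack(f"<BB{ndim}I", dtype_code, ndim, *shape)

print(f"JSON: {len(json_msg)} bytes, binary: {len(binary_msg)} bytes")
```

The binary message is a small fraction of the JSON one, and the gap widens as shapes, batch descriptors, and context metadata grow. Real Protobuf adds varint compression and field tags on top of this, but the size and parse-cost advantage over JSON is the same in kind.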