Distributed Quantum Computing Achieves 90% Teleportation with Adaptive Resource Orchestration across 128 QPUs
Building practical quantum computers demands connecting multiple quantum processors, a challenge researchers now address with a novel approach to resource management. Kuan-Cheng Chen, Felix Burt, and Nitish K. Panigrahy, alongside Kin K. Leung and colleagues at Imperial College London and Binghamton University SUNY, present a system called ModEn-Hub, which intelligently coordinates resources across distributed quantum processing units. The architecture uses a photonic network to deliver high-quality quantum connections, together with a control system that optimises the execution of complex operations such as teleportation-based gates. Through extensive simulations, the team demonstrates that their adaptive orchestration sustains approximately 90% success in establishing quantum links between processors. This is a significant improvement over simpler methods, which degrade rapidly as the system scales, and it paves the way for genuinely scalable and efficient quantum computation on existing hardware.
Networking many quantum processing units (QPUs) into a coherent quantum-HPC system is a central challenge in scaling quantum computation. The researchers propose the Modular Entanglement Hub (ModEn-Hub) architecture, which combines a hub-and-spoke photonic interconnect with a real-time quantum network orchestrator. ModEn-Hub centralises entanglement sources and shared quantum memory to deliver on-demand, high-fidelity Bell pairs across heterogeneous QPUs. The control plane schedules teleportation-based non-local gates, launches parallel entanglement attempts, and maintains a small ebit cache. To quantify these benefits, the team implements a lightweight, reproducible Monte Carlo study under realistic loss and tight round budgets, comparing a naïve sequential baseline to their orchestrated approach.
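The flavour of such a Monte Carlo comparison can be sketched in a few lines. Everything numeric below (the per-attempt heralding probability, the loss model, the round budget, and the log-scaled parallelism rule) is an illustrative assumption, not the paper's actual parameters:

```python
import math
import random

def attempt_success_prob(num_qpus, p0=0.7):
    """Toy loss model (an assumption, not from the paper): the per-attempt
    heralding probability shrinks as the hub serves more spokes."""
    return p0 / (1.0 + math.log2(max(num_qpus, 1)))

def run_trial(num_qpus, policy, rng, round_budget=4):
    """Try to establish one end-to-end link within a fixed round budget."""
    p = attempt_success_prob(num_qpus)
    # Orchestrated policy fires log-scaled parallel attempts per round;
    # the naive baseline fires a single attempt per round.
    k = max(1, math.ceil(math.log2(num_qpus + 1))) if policy == "orchestrated" else 1
    attempts = 0
    for _ in range(round_budget):
        attempts += k
        if any(rng.random() < p for _ in range(k)):
            return True, attempts   # link heralded
    return False, attempts          # budget exhausted

def success_rate(num_qpus, policy, trials=2500, seed=0):
    rng = random.Random(seed)
    outcomes = [run_trial(num_qpus, policy, rng) for _ in range(trials)]
    return sum(ok for ok, _ in outcomes) / trials
```

With these toy parameters, the orchestrated policy stays near-saturated at 128 QPUs while the one-attempt baseline decays; the sketch reproduces the qualitative gap between the policies, not the paper's exact figures.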
Entanglement Hubs and Distributed Quantum Processors
ModEn-Hub separates entanglement generation (performed centrally in a hub) from quantum computation (distributed across peripheral QPUs). This mirrors successful approaches in cloud/HPC interconnects.
A dedicated hub generates high-fidelity entanglement, removing that complexity from individual QPUs. The design is intended to scale from small multi-QPU setups to larger clustered deployments. The paper demonstrates a trade-off between scalability and efficiency, showing that the orchestrator can trade increased entanglement generation for higher and more stable end-to-end success rates.

Experimental Results

Simulations show that the orchestrator maintains high performance (around a 90% success rate) as the network scales, whereas a naive sequential approach degrades significantly.
The orchestrator consumes more entanglement attempts on average, but the improved success rate offsets this cost. The authors identify several directions for future work: optimising entanglement caching strategies (accounting for decoherence and buffer limits), integrating the orchestration with fault-tolerant protocols, and extending the framework to heterogeneous workloads (e.g., hybrid sensing and distributed cluster-state generation).

In essence, ModEn-Hub proposes a practical architecture for building larger, more robust quantum computers by leveraging a centralized entanglement source and intelligent software orchestration. It draws parallels with established principles from high-performance computing to address the unique challenges of quantum networking.
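One way to make the caching trade-off concrete is a toy buffer in which each stored Bell pair is only usable for a fixed number of rounds before decoherence renders it useless. The class below is a minimal sketch under assumed policies (FIFO eviction, a hard capacity, round-granularity lifetimes); none of these choices come from the paper:

```python
from collections import deque

class EbitCache:
    """Toy ebit buffer: at most `capacity` cached Bell pairs, each usable
    for `ttl_rounds` rounds before decoherence (illustrative policy)."""

    def __init__(self, capacity=4, ttl_rounds=3):
        self.capacity = capacity
        self.ttl_rounds = ttl_rounds
        self._expiries = deque()  # expiry round of each cached ebit, oldest first

    def tick(self, current_round):
        # Evict decohered pairs at the start of each round.
        while self._expiries and self._expiries[0] <= current_round:
            self._expiries.popleft()

    def store(self, current_round):
        # Cache a freshly heralded pair, unless the buffer is full.
        if len(self._expiries) < self.capacity:
            self._expiries.append(current_round + self.ttl_rounds)
            return True
        return False  # buffer full: the surplus pair is dropped

    def consume(self):
        # Use the oldest (closest-to-expiry) pair first; None if empty.
        return self._expiries.popleft() if self._expiries else None
```

Even this toy version exposes the tension the authors flag: a larger buffer or longer effective lifetime raises hit rates, but stored pairs served late arrive with degraded fidelity.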
ModEn-Hub Achieves Scalable Quantum Networking
Scientists have achieved a significant breakthrough in scalable quantum computing through the development of the Modular Entanglement Hub, or ModEn-Hub, architecture. This work addresses the critical challenge of networking multiple quantum processing units (QPUs) into a coherent high-performance computing system, paving the way for quantum machines exceeding the limitations of single devices. The team implemented a hub-and-spoke photonic interconnect coupled with a real-time network orchestrator to centralize entanglement sources and shared memory, delivering on-demand, high-fidelity Bell pairs across diverse QPUs. Experiments involved a detailed Monte Carlo study comparing a standard sequential approach to ModEn-Hub's orchestrated policy, with logarithmically scaled parallelism and opportunistic caching, across a range of 1 to 128 QPUs. Results demonstrate that the ModEn-Hub orchestration sustains approximately 90% teleportation success across 2,500 trials per data point, a substantial improvement over the baseline, which degraded to roughly 30%.
This sustained success was achieved despite a higher average number of entanglement attempts, ranging from 10 to 12, compared with the baseline's 3 attempts. Measurements confirm the effectiveness of adaptive resource orchestration in enabling efficient operation on near-term hardware, and the team recorded a clear correlation between the hub's dynamic resource allocation and improved teleportation fidelity. The breakthrough delivers a scalable solution for distributed quantum computing, allowing the creation of virtual quantum computers whose computational state space grows exponentially as connected QPUs contribute more qubits, and the study provides compelling evidence for the feasibility of building large-scale quantum-HPC systems. This architecture overcomes the limitations of traditional point-to-point topologies by enabling efficient resource sharing and dynamic connectivity between processors.
Modular Quantum Computing via Entanglement Hubs
The team has developed ModEn-Hub, a new modular architecture for scaling quantum computing beyond a single processor. This system centralizes the generation of high-fidelity entanglement within a dedicated hub, then distributes quantum computations across multiple peripheral quantum processing units, all managed by a real-time software orchestrator. By separating the quantum data transfer from the classical control mechanisms, the architecture adapts principles from conventional cloud and high-performance computing networks to the unique challenges of quantum systems, specifically addressing issues like decoherence and the limitations of probabilistic link formation. Results from Monte Carlo studies demonstrate that this orchestration method sustains approximately 90% success in teleportation, a key primitive for distributed quantum computing, even as the number of quantum processing units increases. This represents a significant improvement over a basic sequential approach, which degrades to around 30% success under similar conditions, although the orchestrated method requires a greater number of entanglement attempts.
The findings clearly indicate that adaptive resource orchestration, as implemented in ModEn-Hub, enables efficient and scalable operation of quantum high-performance computing on currently available hardware, establishing a trade-off between computational resources and overall success rate. The authors acknowledge limitations related to entanglement caching, including the potential for decoherence of stored Bell pairs and the capacity of buffer storage. Future research will focus on quantifying and optimizing these memory-performance trade-offs, integrating the orchestration system with fault-tolerant protocols, and expanding the framework to accommodate diverse applications such as hybrid sensing and distributed cluster-state generation. These developments promise to further refine the architecture and broaden its applicability within the growing field of quantum information science.
👉 More information
🗞 Adaptive Resource Orchestration for Distributed Quantum Computing Systems
🧠 ArXiv: https://arxiv.org/abs/2512.24902



I’m delighted to see high-caliber mathematicians and theoretical physicists getting interested in the theory behind deep learning.
One theoretical puzzle is why the kind of non-convex optimization needed to train deep neural nets works so reliably. A naive intuition suggests that optimizing a non-convex function is difficult because we can get trapped in local minima and slowed down by plateaus and saddle points. While plateaus and saddle points can be a problem, local minima never seem to cause trouble in practice. Our intuition is wrong because we picture an energy landscape in low dimension (e.g. 2 or 3), but the objective function of a deep neural net often lives in 100 million dimensions or more. It's hard to build a box in 100 million dimensions. That's a lot of walls. There is a body of theoretical work in this direction from my NYU lab (look for Anna Choromanska as first author) and from Yoshua Bengio's lab, using mathematical tools from random matrix theory and statistical mechanics.
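A quick numerical illustration of the random-matrix intuition: if the Hessian at a random critical point behaves like a random symmetric Gaussian matrix, the chance that all its eigenvalues are positive (a true local minimum rather than a saddle) collapses as the dimension grows. This is a toy experiment, not the analysis in those papers:

```python
import numpy as np

def frac_positive_definite(dim, samples=2000, seed=0):
    """Fraction of random symmetric Gaussian (GOE-like) matrices whose
    eigenvalues are all positive -- a proxy for how often a random
    critical point is a local minimum rather than a saddle point."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(samples):
        a = rng.standard_normal((dim, dim))
        h = (a + a.T) / 2.0                # symmetrize: a random "Hessian"
        if np.linalg.eigvalsh(h)[0] > 0:   # smallest eigenvalue positive?
            hits += 1
    return hits / samples
```

In dimension 1 roughly half the samples are "minima"; by dimension 6 the fraction is already near zero, and the decay is super-exponential in the dimension. A critical point in a 100-million-dimensional landscape is, by this heuristic, essentially never a bad local minimum.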
Another interesting theoretical question is why multiple layers help. Every boolean function of a finite number of bits can be implemented with 2 layers (using the conjunctive or disjunctive normal form of the function). But the vast majority of boolean functions require an exponential number of minterms in the formula (i.e., an exponential number of hidden units in a 2-layer neural net). As computer programmers, we all know that many functions become simple if we allow ourselves to run multiple sequential steps to compute the function (multiple layers of computation). That's a hand-wavy argument for having multiple layers. It's not clear how to make a more formal argument in the context of neural-net-like architectures.
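Parity is the classic example of this gap: a depth-2 (DNF) realization of n-bit parity needs one minterm for every odd-weight input, i.e. 2^(n-1) of them, while a deeper sequential computation needs only n-1 XOR steps. A small sketch:

```python
from itertools import product

def parity(bits):
    """n-bit parity computed in n-1 sequential XOR steps -- depth grows
    with n, but the total work is only linear."""
    acc = 0
    for b in bits:
        acc ^= b
    return acc

def dnf_minterm_count(n):
    """Count the minterms in the DNF of n-bit parity: one per odd-weight
    input, which is 2**(n-1) -- exponential width for a depth-2 circuit."""
    return sum(1 for bits in product((0, 1), repeat=n)
               if parity(bits) == 1)
```

So `dnf_minterm_count(10)` is 512, while the sequential computation uses 9 XORs; the shallow circuit pays in width what the deep one saves by reusing intermediate results.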