Discussion about this post

Timothy Chase:

I hope you don't mind my throwing out one more question: do you know which two-qubit gate they use in their benchmarking tests? I know that, at least prior to Forte, IonQ preferred the Mølmer–Sørensen gate, which in the two-qubit case reduces to an RXX up to a global rotation. Different modalities often imply different native gates, but since Quantinuum shares the same basic qubit modality (trapped ions), I would assume it uses the same.
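For readers unfamiliar with the identity mentioned above, here is a minimal sketch of RXX(θ) = exp(−iθ/2·X⊗X), which is the form the two-qubit Mølmer–Sørensen interaction takes (up to convention-dependent single-qubit rotations). The matrix and helper below are illustrative, not any vendor's actual calibration code.

```python
import math

def rxx(theta):
    """4x4 matrix of RXX(theta) = exp(-i*theta/2 * X(x)X).
    The two-qubit Molmer-Sorensen interaction has the same form,
    up to single-qubit (global) rotations that depend on the
    hardware convention."""
    c = math.cos(theta / 2)
    s = -1j * math.sin(theta / 2)
    # X(x)X swaps |00><->|11> and |01><->|10>, so the matrix is
    # cos(theta/2) on the diagonal and -i*sin(theta/2) on the anti-diagonal.
    return [
        [c, 0, 0, s],
        [0, c, s, 0],
        [0, s, c, 0],
        [s, 0, 0, c],
    ]

def apply(gate, state):
    """Multiply a 4x4 gate into a 4-amplitude state vector."""
    return [sum(gate[r][k] * state[k] for k in range(4)) for r in range(4)]

# At the fully entangling angle theta = pi/2, |00> maps to the
# Bell-like state (|00> - i|11>)/sqrt(2).
bell = apply(rxx(math.pi / 2), [1, 0, 0, 0])
```

Running it on |00⟩ at θ = π/2 produces equal-magnitude amplitudes on |00⟩ and |11⟩, which is exactly why this gate is the entangling workhorse of trapped-ion benchmarking.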

Anyway, I was impressed by your article. Well organized and detailed, and I don't wish to detract from that. I also realize these are a lot of questions. Perhaps some of them could be addressed in a later article? Regardless, thank you for a highly informative piece on a fascinating and important development.

Timothy Chase:

I am impressed by the simplicity of the design and the low infidelity numbers. At the same time I find myself wondering about scalability, particularly since any approach to quantum computing will ultimately require error correction, and depending on one's code the number of physical qubits required for a single logical qubit may run to tens, hundreds, or thousands. Surface codes, such as those employed by Google, require more; IBM's bivariate bicycle ("gross") code (288 physical qubits for 12 logical, if memory serves) and Microsoft's tesseract code require far less.
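The overhead comparison above can be made concrete with back-of-the-envelope counts. The qubit tallies below are common textbook figures (a rotated surface code of distance d uses d² data qubits plus d²−1 ancillas; the [[144,12,12]] gross code pairs 144 data qubits with 144 checks), not exact vendor layouts.

```python
def surface_code_physical(d):
    """Rough physical-qubit count for ONE logical qubit in a rotated
    surface code of odd distance d: d*d data qubits plus d*d - 1
    measurement ancillas (a common textbook count; layouts vary)."""
    return d * d + (d * d - 1)

# IBM's [[144,12,12]] bivariate bicycle ("gross") code: 144 data qubits
# plus 144 check qubits encode 12 logical qubits.
gross_per_logical = (144 + 144) / 12  # 24 physical qubits per logical

# A surface code at a comparable distance (d = 13) needs roughly an
# order of magnitude more physical qubits per logical qubit.
surface_per_logical = surface_code_physical(13)
```

The contrast (tens versus hundreds of physical qubits per logical qubit) is precisely why the choice of code matters so much for the scalability question raised here.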

In terms of scalability, I remember a neutral-atom approach in China where optical tweezers were controlled by AI to quickly arrange arrays of roughly 4,000 atoms. QuEra is also using lasers to move entire arrays of atoms during computation, which may prove useful for dynamically creating transversal gates, a key element of error correction codes.

At some point it seems likely that Quantinuum will have to consider new topologies for their tracks, even if this violates the principle of all-to-all connectivity. Are there thoughts on how those might be arranged? Or on how they might be controlled? Might machine learning be used to control the routing logic at runtime? Another likely departure from all-to-all connectivity will be the creation of modules, each with a relatively small number of logical qubits; I remember someone from IBM suggesting only a dozen or so in one talk. Are there thoughts about how the modules would be coupled?

With such low infidelity numbers the cost of error correction will be quite low, but it will increase as a logarithm of the scale and must eventually be accounted for. And in time we will need to scale into the thousands, millions, and billions of logical qubits. Does Quantinuum have a planned timeline, with specific stages and years in mind?
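The logarithmic growth mentioned above follows from the standard scaling heuristic p_L ≈ A·(p/p_th)^((d+1)/2): the required code distance d grows only logarithmically in 1/p_target. The sketch below uses an illustrative threshold and prefactor, not Quantinuum's actual numbers.

```python
def required_distance(p_phys, p_target, p_threshold=1e-2, prefactor=0.1):
    """Smallest odd code distance d such that the standard scaling
    estimate  p_L ~ prefactor * (p_phys / p_threshold)**((d+1)/2)
    drops below p_target.  Threshold and prefactor are illustrative
    assumptions, not any vendor's measured values."""
    ratio = p_phys / p_threshold
    d = 3
    while prefactor * ratio ** ((d + 1) / 2) > p_target:
        d += 2
    return d

# Tightening the logical error target by three orders of magnitude
# (1e-9 -> 1e-12) only bumps the distance from 23 to 31: the
# logarithmic overhead growth referred to above.
d_mild = required_distance(2e-3, 1e-9)
d_strict = required_distance(2e-3, 1e-12)
```

Lower physical infidelity shrinks the ratio p/p_th, so the same logical targets are met at smaller d, which is why the record infidelities discussed here translate so directly into reduced error-correction overhead.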
