Hi,
Welcome to the tenth issue of Deep Engineering.
Last week, analysts at Bank of America (BofA) released a note on quantum computing saying, “This could be the biggest revolution for humanity since discovering fire.” The comparison may seem audacious for a field that has so far been largely abstract, with hardware that is not quite there yet. But IBM has already laid out a comprehensive roadmap to build a large-scale, fault-tolerant quantum computer by 2029 and expects to achieve practical quantum advantage by 2026.
If quantum computing is to deliver on its promise, it won’t be physicists alone who get us there—it will be software teams building the abstractions, compilers, and algorithms that bridge theory and hardware. Engineers now face a peculiar challenge: to write software for machines that don’t fully exist, on hardware that changes year to year, using abstractions that must bridge mathematical theory, noisy processors, and unpredictable outcomes.
To understand how industry professionals can prepare to face this challenge, we spoke to Prof. Elías F. Combarro, co-author of A Practical Guide to Quantum Computing (Packt, 2025). Combarro is a full professor in the Department of Computer Science at the University of Oviedo in Spain, with degrees in both Mathematics and Computer Science and national academic honors in each. He completed his PhD in Mathematics in 2001, with research spanning computability theory and logic, and has since authored over 50 papers across quantum computing, algebra, machine learning, and fuzzy systems. His recent work focuses on applying quantum methods to problems in optimization and algebraic structures. He has held research appointments at CERN and Harvard and served on the Advisory Board of CERN’s Quantum Technology Initiative from 2021 to 2024.
You can watch the full interview and read the transcript here—or read on for our synthesis of what it means to design and debug quantum code in the context of real-world constraints and developments.
Twilio Segment was purpose-built so that you don’t have to worry about your data. Forget the data chaos, dissolve the silos between teams and tools, and bring your data together with ease. So that you can spend more time innovating and less time integrating.
Analysts at BofA earlier this month made quite a riveting statement: quantum computing “could be the biggest revolution for humanity since discovering fire.” For a field known for being very abstract, this claim underscores how concretely disruptive its proponents now expect it to be—reshaping computation, shifting global power, and pressuring industries well ahead of full-scale machines. In fact, in an interview with CNBC International Live, Haim Israel, Head of Global Thematic Research at BofA, stated that quantum computing is no longer “20 years away.” He credits recent breakthroughs—largely enabled by AI—with accelerating progress to a point where early commercial applications are already emerging. Israel projects that quantum advantage will be achieved by 2030, with quantum supremacy arriving five to six years later.
Yet, realizing that potential requires software developers and researchers to think very differently about programming. Quantum programs don’t run on stable, deterministic digital processors; they run on fragile qubits governed by probabilistic physics. As Prof. Elías F. Combarro puts it,
“Quantum programs are fundamentally different. You don’t have loops. You don’t have persistent memory or data structures in the way you do in classical programming. What you have is a quantum circuit—a finite sequence of operations that runs once, from start to finish. You can't stop, inspect, or loop within the circuit. You run it, you measure, and then you’re done.”
This new paradigm forces a reimagining of everything from algorithm design and debugging to testing and maintenance.
Classical developers are used to variables holding definite values and code flowing through deterministic steps. By contrast, a single qubit can exist in a superposition of basis states, represented by a two-dimensional complex state vector. Geometrically, a qubit can be pictured on the Bloch sphere, where every point on the surface corresponds to a possible quantum state and operations appear as rotations. As Combarro explains,
“Every point on the surface of the sphere represents a possible state of your qubit, and quantum gates—operations—can be visualized as rotations of this sphere.”
But as soon as we move beyond one qubit, our everyday intuition falters. Two qubits live in a 4-dimensional state space, ten qubits in a $2^{10}=1024$-dimensional space, and so on – exponential growth that quickly outpaces human imagination.
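To make this concrete, here is a minimal sketch in Qiskit (assuming the qiskit package is installed and the API of recent Qiskit releases) that puts one qubit into an equal superposition, inspects the resulting state vector, and shows how the dimension grows when a second qubit is added.

```python
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

# One qubit, one Hadamard gate: |0> becomes an equal superposition of |0> and |1>
qc = QuantumCircuit(1)
qc.h(0)

state = Statevector.from_instruction(qc)
print(state.data)  # a two-dimensional complex vector, roughly [0.707+0j, 0.707+0j]

# Two qubits already live in a 2**2 = 4-dimensional space, ten qubits in 2**10 = 1024
qc2 = QuantumCircuit(2)
qc2.h(0)
qc2.h(1)
print(len(Statevector.from_instruction(qc2).data))  # 4
```

For a single qubit, qiskit.visualization.plot_bloch_multivector(state) can render the corresponding point on the Bloch sphere.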
A defining feature of multi-qubit systems is entanglement, a phenomenon with no classical equivalent.
“Entangled systems can’t be described by just looking at the states of their individual parts… You need the full global state,” Combarro notes.
An entangled pair of qubits shares a joint state that cannot be factored into two independent single-qubit states. Change or measure one part, and the other seems to instantly reflect that change – a mystery so striking that Einstein dubbed it “spooky action at a distance.” This “spookiness” is not just a quirk of physics; it’s a resource for computation.
“Entanglement… only exists in quantum systems. It doesn’t happen in classical physics… you can use it to implement protocols and algorithms that are simply impossible with classical resources,” Combarro says.
Indeed, algorithms like superdense coding (sending two classical bits by transmitting a single qubit from a pre-shared entangled pair) or quantum teleportation of states require entanglement to work. In quantum computing, entanglement is the magic that enables a kind of collaborative computation across qubits – and it’s central to any future quantum advantage.
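As an illustration, the sketch below (again assuming a recent Qiskit install) prepares the canonical Bell pair and shows that neither qubit on its own carries the information: tracing out either qubit leaves only a maximally mixed state.

```python
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector, partial_trace

# Hadamard then CNOT: the textbook recipe for the Bell state (|00> + |11>)/sqrt(2)
bell = QuantumCircuit(2)
bell.h(0)
bell.cx(0, 1)

state = Statevector.from_instruction(bell)
print(state.probabilities_dict())  # {'00': 0.5, '11': 0.5}; nothing on '01' or '10'

# The reduced state of a single qubit is the maximally mixed I/2: no pair of
# independent single-qubit descriptions can reproduce the joint state above
print(partial_trace(state, [1]))
```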
Another fundamental difference between classical and quantum computation lies in how information is retrieved from a system. In classical software, reading a variable doesn’t disturb its value. In quantum software, measurement fundamentally changes the system. A qubit’s rich state is collapsed to a definite outcome (like |0⟩ or |1⟩) when measured, and all the other information encoded in its amplitudes is lost.
“In quantum computing, when you perform a measurement, you can't access all that information. You only get a small part of it,” Combarro explains.
Measuring a single qubit yields just one classical bit (0 or 1) of information, no matter how complex the prior state.
And after measurement, “you’ve lost everything about the prior superposition. The system collapses, and that collapse is irreversible.”
This means a quantum program can’t freely check intermediate results or branch on qubit values without destroying the very quantum state it’s computing with.
The consequence is that quantum algorithms are often designed to minimize measurements until the end, or to cleverly avoid needing to know too much about the state. Even then, the outcome of a quantum circuit is usually probabilistic. Running the same circuit twice can give different answers, a shock to those accustomed to deterministic code.
“For people used to classical programming, that's very strange—how can the same inputs give different outputs? But it’s intrinsic to quantum mechanics,” Combarro says.
To manage this randomness, quantum algorithms rely on repetition and statistical analysis. Developers run circuits many times (often thousands of shots) and aggregate the results. For example, a quantum classifier might be run 100 times, yielding say 70 votes for “cat” and 30 for “dog,” which indicates a high probability the input was a cat. Many algorithms, like phase estimation, improve their accuracy by repeated runs:
“In quantum phase estimation… you repeat the procedure to get better and better approximations. The more you repeat it, the more accurate the estimate.”
In other words, you rarely trust a single run of a quantum program – you gather evidence from many runs to reach a reliable answer.
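A toy version of this workflow, assuming qiskit plus the separately installed qiskit-aer simulator package: single shots of the same circuit disagree, while thousands of shots recover the underlying distribution.

```python
from qiskit import QuantumCircuit
from qiskit_aer import AerSimulator  # local simulator, installed as qiskit-aer

# A qubit in equal superposition, then measured: each shot collapses to 0 or 1
qc = QuantumCircuit(1, 1)
qc.h(0)
qc.measure(0, 0)

sim = AerSimulator()

# One shot yields exactly one classical bit, and two single-shot runs can disagree
print(sim.run(qc, shots=1).result().get_counts())  # e.g. {'0': 1}
print(sim.run(qc, shots=1).result().get_counts())  # e.g. {'1': 1}

# Many shots reveal the distribution the algorithm actually cares about (~50/50 here)
print(sim.run(qc, shots=1000).result().get_counts())  # e.g. {'0': 503, '1': 497}
```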
Developers must also separate intrinsic quantum uncertainty from extrinsic hardware noise. The randomness of quantum measurement is unavoidable, but today’s quantum processors add extra uncertainty via errors (decoherence, gate faults, crosstalk). Mitigating these is an active area of research. Techniques like error mitigation calibrate and correct for known error rates in the readouts. More ambitiously, quantum error correction (QEC) encodes a “logical” qubit into multiple physical qubits to detect and fix errors on the fly. This too flips classical assumptions: in quantum, you can’t simply copy bits for redundancy (the no-cloning theorem forbids cloning an unknown quantum state). Instead, QEC uses entanglement and syndrome measurements to indirectly monitor errors.
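The flavour of syndrome measurement can be seen in the three-qubit bit-flip repetition code, sketched below under the same qiskit/qiskit-aer assumptions. Production codes such as surface or bivariate bicycle codes are far more elaborate, but the principle of reading out error information through ancillas, without ever measuring the data qubits directly, is the same.

```python
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister
from qiskit_aer import AerSimulator

data = QuantumRegister(3, "data")     # one logical qubit spread over three physical qubits
anc = QuantumRegister(2, "syndrome")  # ancillas used only for parity checks
cr = ClassicalRegister(2, "c")
qc = QuantumCircuit(data, anc, cr)

# Encode: CNOTs spread the logical value across the register (encoding, not cloning)
qc.cx(data[0], data[1])
qc.cx(data[0], data[2])

# Deliberately inject a bit-flip error on the middle physical qubit
qc.x(data[1])

# Syndrome extraction: measure the parities of (data0, data1) and (data1, data2) via ancillas
qc.cx(data[0], anc[0])
qc.cx(data[1], anc[0])
qc.cx(data[1], anc[1])
qc.cx(data[2], anc[1])
qc.measure(anc, cr)

counts = AerSimulator().run(qc, shots=100).result().get_counts()
print(counts)  # expect {'11': 100}: both parity checks fire, pinpointing the middle qubit
```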
Researchers at QuEra achieved a milestone in quantum error correction through magic state distillation on logical qubits – a technique proposed 20 years ago as essential for universal, fault-tolerant computing. As Sergio Cantu, vice president of quantum systems at QuEra, said, “Quantum computers would not be able to fulfill their promise without this process of magic state distillation. It’s a required milestone.”
Even as such advances bring fully error-corrected quantum computers closer, they underline that today’s hardware is still very much unfinished.
How do you write software for machines that operate under these strange rules? The answer is to raise the level of abstraction—while keeping physics in mind. Modern quantum programming frameworks like Qiskit, Cirq, PennyLane, and others allow developers to describe quantum programs as circuits: sequences of quantum gates and operations applied to qubits. This is a low-level, assembly-like model of computation, but it’s the lingua franca of quantum algorithms. High-level constructs familiar from classical languages (loops, if-else branches, function recursion) are largely absent inside a quantum circuit. Instead, any classical logic (like looping until a condition is met) has to run outside the quantum computer, orchestrating multiple circuit executions. As Combarro recounts, the shift can be jarring:
“I remember the first student who asked, ‘How do you implement a loop in a quantum computer?’ And I had to say, ‘Come in and sit down—I have bad news.’”
In practice, a quantum program might consist of a Python script that calls a quantum circuit many times, adjusting parameters or processing results on a classical computer between calls.
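A minimal hybrid loop along these lines, assuming qiskit and qiskit-aer: the classical Python loop sweeps a rotation angle, submits a freshly bound circuit on each iteration, and keeps the best result.

```python
import numpy as np
from qiskit import QuantumCircuit
from qiskit.circuit import Parameter
from qiskit_aer import AerSimulator

theta = Parameter("theta")
qc = QuantumCircuit(1, 1)
qc.ry(theta, 0)   # a single parametrized rotation
qc.measure(0, 0)

sim = AerSimulator()
best_angle, best_p1 = None, -1.0

# The "loop" lives entirely in classical Python; each pass runs a complete circuit
for angle in np.linspace(0, np.pi, 11):
    bound = qc.assign_parameters({theta: angle})
    counts = sim.run(bound, shots=500).result().get_counts()
    p1 = counts.get("1", 0) / 500
    if p1 > best_p1:
        best_angle, best_p1 = angle, p1

print(best_angle, best_p1)  # should end up near theta = pi, where P(1) is ~1
```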
Despite these challenges, certain abstractions and libraries have emerged to help manage complexity. IBM’s Qiskit has become a popular choice, especially in education, for its extensive features and cloud access to real quantum processors.
“Qiskit has the largest number of features, and it’s the easiest one for accessing quantum computers online,” Combarro notes.
In fact, one can prototype an algorithm on a local simulator and then, with only a few lines changed, run it on a real back-end.
“You only need to change three or four lines of code to make that switch, but it’s very satisfying to say, ‘I’m running this on an actual quantum computer.’”
This ease of swapping targets is a boon in an environment where hardware is evolving – it lets developers test their abstractions against today’s best machines and see the effects of real noise and connectivity constraints.
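In outline, that switch can look like the sketch below. The simulator path assumes qiskit and qiskit-aer; the hardware path (shown as comments) assumes an IBM Quantum account and a recent qiskit-ibm-runtime, whose exact entry points have shifted across versions.

```python
from qiskit import QuantumCircuit, transpile
from qiskit_aer import AerSimulator

qc = QuantumCircuit(2, 2)
qc.h(0)
qc.cx(0, 1)
qc.measure([0, 1], [0, 1])

# Local, noise-free simulation
sim = AerSimulator()
print(sim.run(qc, shots=1000).result().get_counts())

# Roughly the same circuit on real IBM hardware, via the Sampler primitive:
#
#   from qiskit_ibm_runtime import QiskitRuntimeService, SamplerV2 as Sampler
#   service = QiskitRuntimeService()
#   backend = service.least_busy(operational=True, simulator=False)
#   isa_qc = transpile(qc, backend)        # adapt to the device's gates and layout
#   result = Sampler(mode=backend).run([isa_qc], shots=1000).result()
#   print(result[0].data.c.get_counts())   # real noise now shows up in the counts
```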
Quantum compilers (transpilers) play a crucial role here. They take the high-level circuit and map it to the specific gates and qubits of a given device. Unlike a classical compiler, a quantum transpiler must contend with hardware quirks like limited qubit connectivity.
“Not all qubits in a quantum computer are connected to each other. So, if you want to apply a gate to two distant qubits, the transpiler has to insert extra operations to move data around — introducing noise and increasing circuit depth,” Combarro explains.
The transpiler may also optimize the circuit, combining gates or reordering operations to shorten the runtime (important before qubits decohere). Understanding what the transpiler is doing – and sometimes guiding it – has become part of the quantum developer’s skill set. For example, a programmer might constrain their circuit to use only certain qubits that have higher fidelity or explicitly insert swap gates to relocate qubits logically. It’s a delicate dance between abstract algorithm design and the very concrete limitations of hardware. Every additional gate is a risk when devices have error rates around 0.1–1% per operation.
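The effect is easy to see on a toy device. The sketch below (qiskit only; the line-shaped coupling map is made up for illustration) asks for a CNOT between two qubits that are not directly connected and compares transpilation at different optimization levels.

```python
from qiskit import QuantumCircuit, transpile
from qiskit.transpiler import CouplingMap

# A CNOT between qubits 0 and 4 on a hypothetical device whose qubits form a line 0-1-2-3-4
qc = QuantumCircuit(5)
qc.cx(0, 4)

line = CouplingMap([(0, 1), (1, 2), (2, 3), (3, 4)])

for level in (0, 3):
    out = transpile(qc, coupling_map=line, optimization_level=level, seed_transpiler=7)
    print("level", level, out.count_ops(), "depth:", out.depth())

# SWAPs are inserted to route the two distant qubits next to each other;
# higher optimization levels work harder to keep the extra gate count and depth down.
```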
Working with quantum software can feel like coding with one eye closed. Because measuring qubits destroys their state, developers can’t step through a quantum program in the same way as a classical one. You can’t pause midway and inspect all qubit values – that would collapse the superpositions and entanglements you painstakingly created. Instead, quantum developers lean heavily on simulation and mathematical reasoning to debug.
“To untangle issues, you start by running your code on a classical simulator. These simulators are deterministic and noise-free – they give you the exact mathematical result of the circuit, assuming perfect qubits. This lets you validate whether your logic is correct before moving to actual quantum hardware,” Combarro says.
Simulators can output the full statevector of 20 or 30 qubits, allowing a developer to verify that, say, an entangled state or an amplitude amplification step is correct. Visualization tools can display probability distributions or Bloch sphere orientations for small circuits, providing insights that no current hardware can directly reveal.
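In practice that might look like the following sketch (qiskit only): check the exact probabilities of a GHZ preparation, and verify that two circuit constructions implement the same unitary, before either ever touches hardware.

```python
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector, Operator

# A three-qubit GHZ preparation to validate before moving to real hardware
ghz = QuantumCircuit(3)
ghz.h(0)
ghz.cx(0, 1)
ghz.cx(1, 2)

print(Statevector.from_instruction(ghz).probabilities_dict())  # {'000': 0.5, '111': 0.5}

# Equivalence checking: a CNOT equals a reversed CNOT conjugated by Hadamards
cx = QuantumCircuit(2)
cx.cx(0, 1)

flipped = QuantumCircuit(2)
flipped.h(0)
flipped.h(1)
flipped.cx(1, 0)
flipped.h(0)
flipped.h(1)

print(Operator(cx).equiv(Operator(flipped)))  # True: same unitary up to global phase
```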
However, simulation has its limits. The memory required grows exponentially with qubit count, so beyond roughly 30 qubits (needing 16 GB of RAM or more), it becomes intractable to simulate general states. This is why today’s quantum algorithms for larger qubit numbers either rely on theoretical reasoning or are tested on actual quantum chips. When running on hardware, developers adopt statistical approaches to debugging: varying parameters, collecting lots of runs, and comparing aggregate results against expectations. They also must account for the possibility that an unexpected result is due to a device error rather than a flaw in the algorithm. As a safeguard, many will run the same circuit on multiple back-end devices (or noise models) to see if a result persists. This is quantum computing’s version of cross-platform testing. Even then, true reproducibility in the classical sense is unattainable on a quantum device – you can’t demand the same random outcome twice. Instead, reproducibility is about getting the same probability distribution of outcomes when conditions are repeated.
As Combarro succinctly puts it, “Quantum computations are inherently probabilistic, so you can’t reproduce the exact same measurement result every time. What you can do is ensure a high probability of success.”
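One way to make that notion of reproducibility operational, sketched under the same qiskit/qiskit-aer assumptions: run the circuit twice with many shots and compare the two empirical distributions rather than individual outcomes.

```python
from qiskit import QuantumCircuit
from qiskit_aer import AerSimulator

qc = QuantumCircuit(1, 1)
qc.h(0)
qc.measure(0, 0)

sim = AerSimulator()

def empirical_distribution(shots=4000):
    counts = sim.run(qc, shots=shots).result().get_counts()
    return {bit: n / shots for bit, n in counts.items()}

a, b = empirical_distribution(), empirical_distribution()

# Individual shots differ from run to run, but the distributions should agree closely
tvd = 0.5 * sum(abs(a.get(k, 0) - b.get(k, 0)) for k in set(a) | set(b))
print(a, b, "total variation distance:", tvd)  # tvd should be close to 0
```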
Perhaps the biggest challenge in writing quantum software today is that the machine itself is a moving target. Every year brings new devices with more qubits, different noise characteristics, and even new fundamental approaches to quantum bits. Superconducting qubits (used by IBM, Google, and others) dominate the current landscape with devices at 127 qubits and beyond, but they require cryogenic cooling and still have very short coherence times (microseconds). Trapped-ion qubits offer longer-lived states and all-to-all connectivity, but operations are slower and scaling to hundreds of qubits is difficult in practice. Photonic quantum computers, neutral atoms in optical tweezers, silicon spin qubits – each technology comes with trade-offs in coherence, gate fidelity, connectivity, and scalability. No one knows which approach (or fusion of approaches) will ultimately deliver a large-scale, fault-tolerant quantum computer. In a moderated virtual panel titled ‘Future of Quantum Computing’ at the 8th International Conference on Quantum Techniques in Machine Learning, hosted by the University of Melbourne, Scott Aaronson said,
“We do not have a clear winner between architectures such as trapped ion, neutral atoms, superconducting qubits, photonic qubits. Very much still a live race.”
This uncertainty means quantum software must be somewhat hardware-agnostic yet ready to embrace new capabilities as they come. A few years ago, for instance, most cloud quantum computers did not support mid-circuit measurement or dynamic circuit logic; now some do, allowing new hybrid algorithms where measurement outcomes can influence subsequent operations. The “rules” of what a quantum program can do in one run are still being rewritten by hardware advances. Developers also contend with frequent library updates and deprecations. “Quantum software libraries evolve very quickly,” Combarro notes, reflecting on how code from his first book had to be updated as Qiskit advanced. This pace has started to stabilize – Qiskit’s major 2.0 release, for example, made relatively few breaking changes – but keeping code working may require more vigilance than in mature fields. Documentation sometimes lags behind new features, requiring quantum coders to read research papers or even source code to understand the cutting edge.
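As a sketch of what such a dynamic circuit looks like in recent Qiskit versions (hardware support varies by back-end): a mid-circuit measurement whose classical outcome conditions a later gate.

```python
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister

qr = QuantumRegister(2, "q")
cr = ClassicalRegister(2, "c")
qc = QuantumCircuit(qr, cr)

# Mid-circuit measurement: read out qubit 0 partway through the computation...
qc.h(qr[0])
qc.measure(qr[0], cr[0])

# ...and let that classical bit steer a subsequent quantum operation
with qc.if_test((cr[0], 1)):
    qc.x(qr[1])

qc.measure(qr[1], cr[1])
# On back-ends that support dynamic circuits, the two measured bits always agree.
```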
Amid the rapid progress, it’s important to recognize that quantum computing is still largely in a pre-advantage era. While researchers have begun to demonstrate quantum advantage on carefully structured tasks, one recent milestone stands out: in July 2025, a team from USC and Johns Hopkins used IBM’s 127-qubit Eagle processors to show an unconditional exponential speedup on a modified version of Simon’s algorithm—a first in the field that doesn’t rely on unproven assumptions about classical limits. But even this breakthrough, as the lead researcher noted, has no immediate practical application beyond demonstrating capability. In fact, the 2025 MIT Quantum Index Report found that large-scale commercial applications of quantum computing remain “far off” despite the surge in patents and investments. Practical quantum advantage is an ongoing race: early claims can evaporate if improved classical algorithms catch up.
Google’s much-publicized 2019 quantum supremacy experiment, for example, was soon matched by classical methods, nullifying that particular “advantage.” So, we are in a stage where the promise is undeniable and enormous (quantum computing could “change everything” from drug discovery to encryption), but the delivery is incremental and challenging.
IBM has laid out a comprehensive roadmap to build a large-scale, fault-tolerant quantum computer by 2029, called Quantum Starling, capable of running 100 million gates on 200 logical qubits. The plan integrates modular architecture, bivariate bicycle codes for quantum error correction, efficient logical processing units, universal adapters for inter-module communication, and magic state distillation to enable universal computation. IBM’s confidence rests on meeting successive milestones with custom hardware (like the upcoming Nighthawk processor), improved connectivity, and a newly introduced real-time decoder architecture. The company expects to achieve practical quantum advantage by 2026, with Starling serving as the scalable platform for fault tolerance.
In a July 2025 paper, Lanes et al., researchers at IBM Quantum and PASQAL SAS, propose a formal framework for quantum advantage that is platform-agnostic and empirically testable. They argue that advantage should mean outperforming classical systems on specific tasks with rigorously validated results—not theoretical superiority or isolated hardware feats, but measurable, reproducible performance gains in fields like chemistry, materials science, or optimization.
In this environment, how should software professionals and technology leaders prepare? The consensus is to start small and start now. Even without large-scale quantum computers at hand, there is much to learn about quantum algorithms, error mitigation techniques, and integration with classical systems.
“My advice is simple: start now,” urges Combarro. “If you think quantum computing might be relevant to your domain, begin exploring it as early as possible. The learning curve is steep… If you wait until quantum computing is mainstream, it may be too late to catch up.”
This means building up quantum programming skills (in linear algebra, complex probability, and Quantum Processing Unit (QPU)-specific idioms), experimenting with simulators and cloud QPUs, and following the rapid research developments in both hardware and algorithms. Companies are already establishing small quantum teams or partnerships to identify long-term use cases – not because a quantum solution can be deployed today, but to be ready when the hardware crosses key thresholds in the next few years.
There is a palpable excitement in the field, tempered by an understanding that quantum computing’s unfinished machine is being completed step by step. Writing quantum software today requires building abstractions for hardware that is still evolving, with each new qubit, error-correction scheme, and algorithm incrementally advancing the field toward practical, fault-tolerant systems. Until then, the work is foundational: preparing tools, methods, and mental models that future machines will depend on.
If you found the insights in our feature on quantum software illuminating, A Practical Guide to Quantum Computing by Elías F. Combarro and Samuel González-Castillo (Packt, July 2025) offers a comprehensive and hands-on introduction to the field.
Using Qiskit 2.1 throughout, the book walks readers through foundational quantum concepts, key algorithms like Grover’s and Shor’s, and practical techniques for writing and running real quantum programs. It’s ideal for professionals and self-learners looking to build solid, executable intuition—from single qubits to full-stack algorithm design.
Use code QUANTUM20 for 20% off at packtpub.com.
Qiskit – Python‑based Quantum SDK & Compiler Stack
Qiskit is an open-source, Python-first SDK and compiler stack for quantum computing, developed by IBM and widely adopted across industry and academia. It enables developers to design, simulate, transpile, and deploy quantum circuits—whether running on local simulators or real quantum hardware.
Highlights:
- Circuit construction, transpilation, and optimization targeted at specific hardware back-ends
- Local, noise-free simulators for validating logic before running on real devices
- Cloud access to IBM quantum processors, with only a few lines of code changed to switch targets
- An actively developed ecosystem; the 2.0 release stabilized the API with relatively few breaking changes
That’s all for today. Thank you for reading this issue of Deep Engineering. We’re just getting started, and your feedback will help shape what comes next.
Take a moment to fill out this short survey we run monthly—as a thank-you, we’ll add one Packt credit to your account, redeemable for any book of your choice.
We’ll be back next week with more expert-led content.
Stay awesome,
Divya Anne Selvaraj
Editor-in-Chief, Deep Engineering
If your company is interested in reaching an audience of developers, software engineers, and tech decision makers, you may want to advertise with us.