Quantum computers exploit the weird properties of matter at extremely small scales.
Many experts believe that a full-blown quantum computer could perform calculations that would be hopelessly time consuming on classical computers, but so far, quantum computers have proven devilishly hard to build. The few simple prototypes developed in the lab perform such rudimentary calculations that it’s sometimes difficult to tell whether they’re really harnessing quantum effects at all.
At the Association for Computing Machinery’s 43rd Symposium on Theory of Computing in June, associate professor of computer science Scott Aaronson and his graduate student Alex Arkhipov will present a paper describing an experiment that, if it worked, would offer strong evidence that quantum computers can do things that classical computers can’t. Although building the experimental apparatus would be difficult, it shouldn’t be as difficult as building a fully functional quantum computer.
If the experiment works, “it has the potential to take us past what I would like to call the 'quantum singularity,' where we do the first thing quantumly that we can’t do on a classical computer,” says Terry Rudolph, an advanced research fellow with Imperial College London’s Quantum Optics and Laser Science, who was not involved in the research.
Aaronson and Arkhipov's proposal is a variation on an experiment conducted by physicists at the University of Rochester in 1987, which relied on a beam splitter, a device that takes an incoming beam of light and splits it into two beams traveling in different directions. The Rochester researchers demonstrated that if two identical light particles — photons — reach the beam splitter at exactly the same time, they will both go either right or left; they won’t take different paths. It’s another of the weird quantum behaviors of fundamental particles that defy our physical intuitions.
The MIT researchers' experiment would use a larger number of photons, which would pass through a network of beam splitters and eventually strike photon detectors. The number of detectors would be somewhere in the vicinity of the square of the number of photons — about 36 detectors for six photons, 100 detectors for 10 photons.
For any run of the MIT experiment, it would be impossible to predict how many photons would strike any given detector. But over successive runs, statistical patterns would begin to build up. In the six-photon version of the experiment, for instance, it could turn out that there’s an 8 percent chance that photons will strike detectors 1, 3, 5, 7, 9 and 11, a 4 percent chance that they’ll strike detectors 2, 4, 6, 8, 10 and 12, and so on, for any conceivable combination of detectors.
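The buildup of statistics over successive runs can be sketched in a toy simulation. The code below is a classical stand-in: it scatters photons over detectors uniformly at random, which is emphatically not the true quantum distribution (reproducing that is exactly the hard part), and for simplicity it assumes the six photons land on six distinct detectors. It only illustrates how frequencies per detector combination accumulate across many runs.

```python
import random
from collections import Counter

def run_experiment(num_photons=6, num_detectors=36, rng=random):
    """One toy run: return the sorted tuple of detectors that fire.

    Uniform random placement is a classical placeholder -- the real
    experiment's outcome probabilities follow a quantum distribution
    that is intractable to compute classically.
    """
    return tuple(sorted(rng.sample(range(1, num_detectors + 1), num_photons)))

def estimate_distribution(num_runs=100_000):
    """Accumulate outcome frequencies over many runs."""
    counts = Counter(run_experiment() for _ in range(num_runs))
    # Convert raw counts into estimated probabilities per detector combination.
    return {combo: n / num_runs for combo, n in counts.items()}

dist = estimate_distribution()
# Each key is a combination such as (1, 3, 5, 7, 9, 11);
# the estimated probabilities sum to 1 across all observed combinations.
print(len(dist), "distinct outcomes observed")
```

With 36 detectors and six photons there are hundreds of thousands of possible detector combinations, which is why the statistical pattern only emerges over many runs.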
Calculating that distribution — the likelihood of photons striking a given combination of detectors — is an incredibly hard problem. The researchers’ experiment doesn’t solve it outright, but every successful execution of the experiment does take a sample from the solution set. One of the key findings in Aaronson and Arkhipov’s paper is that, not only is calculating the distribution an intractably hard problem, but so is simulating the process of sampling from it. For an experiment with more than, say, 100 photons, it would probably be beyond the computational capacity of all the computers in the world.
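The mathematical core of this hardness, in Aaronson and Arkhipov’s analysis, is the permanent of a matrix — a quantity defined like the determinant but with every term added rather than alternating in sign, and for which, unlike the determinant, no efficient algorithm is known. As a sketch of why the distribution is so expensive to compute, here is the classic exact algorithm, Ryser’s formula, whose running time roughly doubles with each row added to the matrix:

```python
from itertools import combinations

def permanent(matrix):
    """Permanent of an n-by-n matrix via Ryser's formula.

    Runs in O(2^n * n^2) time: it sums, over every nonempty subset of
    columns, a signed product of row sums restricted to that subset.
    The exponential cost is intrinsic -- computing the permanent is
    #P-hard, which is the source of the sampling problem's difficulty.
    """
    n = len(matrix)
    total = 0
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            prod = 1
            for row in matrix:
                prod *= sum(row[c] for c in cols)
            total += (-1) ** (n - r) * prod
    return total

# Small example: for [[1, 2], [3, 4]] the permanent is 1*4 + 2*3 = 10.
print(permanent([[1, 2], [3, 4]]))
```

Because the work doubles with every additional photon, an exact calculation at around 100 photons would require on the order of 2^100 steps — consistent with the paper’s claim that such an experiment would outstrip all the world’s classical computing power.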
The question, then, is whether the experiment can be successfully executed. The Rochester researchers performed it with two photons, but getting multiple photons to arrive at a whole sequence of beam splitters at exactly the right time is more complicated. “It’s challenging, technologically, but not forbiddingly so,” says Barry Sanders, director of the University of Calgary’s Institute for Quantum Information Science. Sanders points out that in 1987, when the Rochester researchers performed their initial experiment, they were using lasers mounted on lab tables and getting photons to arrive at the beam splitter simultaneously by sending them down fiber-optic cables of different lengths. But recent years have seen the advent of optical chips, in which all the optical components are etched into a silicon substrate, which makes it much easier to control the photons’ trajectories.
The biggest problem, Sanders believes, is generating individual photons at predictable enough intervals to synchronize their arrival at the beam splitters. “People have been working on it for a decade, making great things,” Sanders says. “But getting a train of single photons is still a challenge.” Rudolph agrees. “At the moment, the hard thing is getting enough single photons into the chip,” he says. But, he adds, “my hope is that within a few years, we’ll manage to build the experiment that crosses the boundary of what we can practically do with classical computers.”
Sanders points out that even if the problem of getting single photons onto the chip is solved, photon detectors still have inefficiencies that could make their measurements inexact: in engineering parlance, there would be noise in the system. But Aaronson says that he and Arkhipov explicitly consider the question of whether simulating even a noisy version of their optical experiment would be an intractably hard problem for a conventional computer. Although they were unable to prove that it was, Aaronson says that “most of our paper is devoted to giving evidence that the answer to that is yes.” He’s hopeful that a proof is forthcoming, whether from his research group or others’.