
What is quantum computing today? The how, why, and when of a paradigm shift

A complete quantum computer (QC), fulfilling the mission set forth for it by scientists and engineers, and realizing the ingenious vision of Dr. Richard Feynman [PDF], has yet to be constructed. There are QC devices, and in the sense that they’re receiving power and executing, or trying to execute, programs, they’re operational.

They are not computers as we have come to understand them: digital semiconductor-filled processor boxes with interface busses and external networks. They resemble classical computers in one respect only: They receive input, and they produce output. They would perform certain classes of algebraic tasks faster than a classical machine would — which is to say, you’d see the output within your lifetime, and perhaps within the next minute, rather than setting an appointment with Alexa for the next millennium. For now, their margins of error are somewhat high, but if you don’t mind waiting three or four minutes longer, those errors can be compensated for.

The most important thing to know about a quantum computer is that it leverages concepts of quantum mechanics that are not completely understood even now, to process very large and repetitive algorithms — usually linear algebra — producing results in seconds, if not milliseconds, instead of centuries. Unsolvable problems, outside the scope of our own lifetimes even with the fastest supercomputers, would be solvable in the time it would take you to explain what those problems were to a reasonable person.

The goal of quantum computing research is to discover a means of expediting the execution of long waves of instructions. Such a means would exploit an observed phenomenon of quantum mechanics that, when you write it down on paper, doesn’t appear to make sense.

The full implications of a world with quantum computing

I’m trying to get you people who think about computer-simulation possibilities to digest the real answers of quantum mechanics and see if you can’t invent a different point of view than the physicists. . . We never really understood how lousy our understanding of languages was, the theory of grammar and all that stuff, until we tried to make a computer which would be able to understand language. I was hoping that the computer-type thinking would give us some new ideas. Trying to find a computer simulation of physics seems to me to be an excellent program to follow out. The real use of it would be with quantum mechanics. Nature isn’t classical, and if you want to make a simulation of nature, you’d better make it quantum mechanical, and by golly it’s a wonderful problem, because it doesn’t look so easy.

-Dr. Richard Feynman
“Simulating Physics with Computers,” 1981

The implications for a computing network where quantum processing plays a role are enormous: hurricane models that account for every water molecule; global warming models that pinpoint next year’s forest fires; pandemic models that ascertain which species of bat, when consumed and under what conditions, produce future coronaviruses. Much of supercomputing as we know it could be rendered instantaneously obsolete.

Perhaps the most immediate implication, though, would be the vanquishing of the barrier of difficulty that protects modern-day digital communications from being decrypted.

A staggering number of implausible events would need to happen for this scenario to come to fruition, not the least of which is the introduction of a mechanism or methodology to manage entropy. In a quantum mechanical system (one example being the universe), events at the subatomic level tend to happen spontaneously. The measurable phenomenon that tends to lead to spontaneity is entropy. In a quantum computing system, entropy translates directly into error rates. The error rates in quantum computation well exceed the tolerance levels we permit today for electronic semiconductors, and because QCs are quantum-mechanical, they may always do so — this may never be a factor we can control.

The question for us becomes: Are we willing to accept probabilities instead of certainties as answers? As long as we see actual results in minutes instead of centuries, it would seem we always have the option of running quantum algorithms multiple times and comparing results (from a stylistic standpoint, not unlike predicting a presidential election).
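To see what “running it multiple times and comparing” might look like in practice, here is a minimal classical sketch. The noisy_quantum_run function is purely hypothetical, a stand-in for one execution of an error-prone quantum algorithm; the point is only that repeated shots plus a majority vote can turn a probabilistic machine into a dependable one.

```python
import random
from collections import Counter

def noisy_quantum_run():
    """Hypothetical stand-in for one run of a quantum algorithm:
    returns the correct answer (42) most of the time, and some
    wrong answer the rest of the time."""
    return 42 if random.random() < 0.8 else random.randint(0, 99)

# Run the "algorithm" many times and take the most common result.
shots = Counter(noisy_quantum_run() for _ in range(1000))
answer, count = shots.most_common(1)[0]
print(f"Most probable answer: {answer} ({count / 1000:.0%} of shots)")
```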


Getting past the quantum barriers

If you’ve ever programmed an Excel macro, you’ve personally experienced the following: You add input rows to the bottom of a worksheet whose columns serve as inputs for a long formula. Each time the formula recalculates, the time consumed is longer and longer. If you’re on a slow enough computer, you can witness this phenomenon for yourself: As the number of input rows grows linearly, the time consumed by the macro grows exponentially.

If you’ve ever written a program for a supercomputer, you’ve witnessed exactly the same phenomenon. The scale may be different, but the effect is the same. And if you read through the supercomputer’s logs, you can verify this observation personally. There comes a point at which every algorithm, no matter how simple, becomes unworkable under the overwhelming weight of its input data.

This is the phenomenon that QC would eliminate. A fully functional QC would gain capability exponentially while scaling its computing capacity only linearly: each added qubit doubles the size of the state space the machine can work with. As a result, the time gap between exponentially different workloads shrinks until it becomes too small to measure.
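You can put a number on that exponential relationship with a few lines of arithmetic. The sketch below assumes nothing about any particular QC; it simply tallies how much memory a classical simulator would need to hold the 2^n complex amplitudes describing n qubits — the mirror image of the quantum machine’s advantage.

```python
# Classical cost of representing an n-qubit state: 2**n complex amplitudes.
# Each added qubit doubles the memory a classical simulator needs, which is
# why linear growth in qubits means exponential growth in capacity.
BYTES_PER_AMPLITUDE = 16  # one complex128 value

for n in (10, 20, 30, 40, 50):
    amplitudes = 2 ** n
    gib = amplitudes * BYTES_PER_AMPLITUDE / 2**30
    print(f"{n:2d} qubits -> {amplitudes:>16,} amplitudes ~ {gib:,.1f} GiB")
```

By 50 qubits, the table reaches roughly 16 million GiB — beyond any classical memory, yet only a modest register for a working QC.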


“The important thing about quantum computing is that it’s a continuous computation,” explained Joseph Emerson, the founder and CEO of Ontario-based consultancy Quantum Benchmark, during the recent Inside Quantum Technology Europe 2020 conference. Emerson continued:

It’s more analogous to analog classical computing than digital. What that means is, if you take a train from A to B, there’s no question you’re going to end up at B. It’s just a question of the speed of the train, and how fast you’re going to get there. On the other hand, with quantum computing, it’s more like navigating a rocket ship. If you have navigational errors, the rocket ship can take you to places where you can’t otherwise go, but you will end up in the wrong location — you won’t end up at B.

The quantum tradeoffs


To be very clear: It would be inaccurate to say that a QC runs programs faster than a PC or an x86 server. A “program” for a QC is a very different order of beast than anything ever produced for a binary processor. The translation of a mathematical problem intelligible to college professors into a binary program, and the translation of the same problem into a QC program, are as different from one another as “20 Questions” is from billiards.

There are several fundamental compromises when you move into the realm of quantum computing. Here’s one that’s daunting just by itself: Solutions will rarely be exact or definitive. A QC is not a deterministic machine; in other words, there is no singular solution for which any other result would be an error. Instead, a QC will tend to render sets of answers with their respective probabilities.

If that doesn’t discourage you, get prepared for this: The atom-level device that actually performs the quantum calculations will, as a result of its work and as is its nature, self-destruct when it’s done. A quantum computing mechanism would actually be a machine that automatically builds the computing device out of atoms (calcium atoms are good candidates), sustains the operating conditions of that device for the duration of its program, applies the program, allows it to execute, looks the other way (because quantum logic gates are shy and will explode if anyone sees them), interprets the final state of its registers as the final probability table of results, then resets itself to rebuild another mechanism all over again.

Imagine if Alan Turing’s incredible machine that cracked the Nazi “Enigma” code was guaranteed to explode after every run. (QC engineers prefer the term “collapse,” but let’s call it what it is: explode.) And imagine that Turing, an ingenious engineer, devised an automated manufacturing operation that rebuilt that machine out of new parts each and every day. Every quantum computer engineer has done more than imagine such a scheme; each has drafted a plan for such a device at the quantum scale. Hypothetical, on-paper models of computation like these are the province of the Turing machine, the abstract device computer scientists use to reason about what can be computed; quantum theorists work with its counterpart, the quantum Turing machine. Quantum engineers believe their computers can and will work because those paper experiments give them cause for faith.

Also: Riding a Quantum Computing Wave

What might a quantum computer do today?

Are there real-world applications of quantum computing technology, or some derivative of it, that people are putting to good use right now? Put another way, what does quantum actually do, and whom does it directly serve?

  • Navigation — A GPS system cannot work everywhere on the planet, particularly underwater. A QC requires atoms to be supercooled and suspended in a state that renders them particularly sensitive. In an effort to capitalize on this, competing teams of scientists are racing to develop a kind of quantum accelerometer that could yield very precise movement data. One promising effort to that end comes from France’s Laboratoire de Photonique Numérique et Nanosciences: an effort to build a hybrid component that pairs a quantum accelerometer with a classical one, then uses a high-pass filter to subtract the classical data from the quantum data. The result, if realized, would be an extremely precise quantum compass that would eliminate the bias and scale factor drifts commonly associated with gyroscopic components.
  • Seismology — That same extreme sensitivity may also be exploited to detect the presence of oil and gas deposits, as well as potential seismic activity, in places where conventional sensors have to date been unable to explore. This according to QuantIC, the quantum imaging technology hub led by the University of Glasgow. In July 2017, working with commercial photonics tools provider M Squared, QuantIC demonstrated how a quantum gravimeter detects the presence of deeply hidden objects by measuring disturbances in the gravitational field. If such a device becomes not only practical but portable, the team believes it could become invaluable in an early warning system for predicting seismic events and tsunamis.
  • Pharmaceuticals — At the leading edge of research into tackling diseases such as Alzheimer’s and multiple sclerosis, scientists have been utilizing software that models the behavior of artificial antibodies at the molecular level. In June 2017, neuroscience firm Biogen began partnering with IT consultancy Accenture and QC research firm 1QBit to frame a new molecular simulation model in such a way that it can be executed on classical platforms, as well as present and future quantum platforms. Their hope is to model the natural neurodegenerative processes of the human brain, in an effort to develop treatments that combat all forms of dementia. One methodology developed by 1QBit’s researchers involves translating traditional molecular diagrams into graphs full of dots, lines, and curves that, while seemingly more confusing on the surface, map more directly to a quantum model of vectors and relationships.

Also: Quantum computing may make current encryption obsolete, a quantum internet could be the solution

What could quantum computing accomplish in the future?

Now to the more controversial question: Assume someone built a mechanism that successfully leaps over the hurdles imposed by quantum physics, producing a full quantum computer capable of performing all the tasks currently relegated to the realm of theory and simulation. What do experts in this field think a quantum computer should be able to do, assuming every phenomenon that physicists have theorized and that scientists have observed and verified, is ultimately exploitable?

[Photo: Prof. Richard Feynman, Caltech, approx. 1983]

  • Physics: This one should be obvious enough. It’s actually the reason for the concept’s very existence. During a 1981 speech at Caltech, Prof. Richard Feynman, the father of quantum electrodynamics (QED), suggested that the only way to build a successful simulation of the physical world at the quantum level would be with a machine that obeyed the laws of quantum mechanics. It was during that speech that Prof. Feynman explained, and the rest of the world came to realize, that it would not be enough for a computer to generate a probability table and, as it were, roll dice. Rather, it would take a mechanism that behaved along the same lines as the behavior it purported to simulate to produce results that physicists themselves wouldn’t dismiss as apocryphal.

Also: What CIOs need to know about quantum computing (free PDF)

  • Machine learning: If and when quantum computers ever become stable enough to support thousands of qubits, algorithms for machine learning are standing by, having been thoroughly tested on paper and in simulators. The basic theory among proponents is that quantum systems may be geared to “learn” patterns of states in huge, concurrent waves rather than successive, sequential scans. If you were awake for the preceding paragraph, you already know what the problem is here: Paper, like electronic computers, is a classical system. Conventional mathematics can circumscribe a set of probable quantum outcomes, as vectors in a vast configuration space. Yet it cannot — as Richard Feynman made clear from the very beginning — simulate how those outcomes may be attained. The first signs of general doubt among experts that quantum machine learning may even be possible were gently seeded into a report from MIT in October 2018 on a panel convened with IBM, where experts admitted that even after quantum computers become reality, several more years may pass before enough stable qubits make quantum machine learning feasible.
  • Decryption: Here, at last, is the breakthrough that cast the first bright spotlight on quantum computing. What makes encryption codes so difficult even for modern classical computers to break is the fact that they’re based on factors of extremely large numbers, requiring inordinate amounts of time to isolate by “brute force.” An operational quantum computer should isolate and identify such factors in mere moments, rendering the RSA system of encoding effectively obsolete. In 1994, MIT Professor Peter Shor devised a quantum algorithm for factoring values, which experimenters building low-qubit quantum systems have already tested successfully, albeit with rather small numbers. When large-qubit quantum computers are successfully built, few doubt the power of Shor’s Algorithm to knock down all current public key cryptography. (A classical sketch of the period-finding trick at the heart of Shor’s Algorithm appears after this list.)
  • Encryption: But herein, some say, lies an opportunity: A concept called quantum key distribution (QKD) holds out the hope that the kinds of public and private keys we use today to encrypt communications may be replaced with quantum keys that are subject to the effects of entanglement. Organizations such as Bethesda, Maryland-based Quantum Xchange are building the first working QKD models today. Theoretically, any third party breaking the key and attempting to read the message would immediately destroy the message for everyone. Granted, that may be enough mischief right there. But the theory of QKD is based on a huge assumption which has yet to be proven in the real world: that values produced with entangled qubits are themselves entangled and subject to quantum effects wherever they go. QKD would require the deployment of quantum networks — independent assemblies of QCs linked by optical fiber, through which these quantum-random values would be transmitted as photons. Not digits representing random values, mind you, but the actual photons whose polarization encodes the random elements themselves.
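For the curious, the heart of Shor’s Algorithm is finding the period of a^x mod N; everything around that step is classical arithmetic. The sketch below brute-forces that period classically (the one step a QC would perform exponentially faster) and recovers the factors of 15, the same toy number early low-qubit experiments factored.

```python
from math import gcd

def find_period(a, N):
    """Classically brute-force the period r of f(x) = a**x mod N,
    assuming gcd(a, N) == 1. This is the single step a quantum
    computer would perform exponentially faster."""
    x, value = 1, a % N
    while value != 1:
        x += 1
        value = (value * a) % N
    return x

def shor_factor(N, a):
    r = find_period(a, N)
    if r % 2 == 1:
        return None  # odd period: pick a different a and retry
    candidate = pow(a, r // 2, N)
    f1, f2 = gcd(candidate - 1, N), gcd(candidate + 1, N)
    return (f1, f2) if 1 < f1 < N else None

print(shor_factor(15, 7))  # period of 7 mod 15 is 4 -> factors (3, 5)
```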

Also: Quantum computing: A cheat sheet TechRepublic

What a quantum computer probably is

The word “computer” here has a very basic context — not a handheld device or a cooled server with a processor and memory. Think of a computer the way Charles Babbage or John von Neumann considered it: as a mechanism guaranteed to deliver a certain output given a specific set of inputs and a defined configuration. At the deepest microscopic levels of a modern microprocessor, one logic unit is what these fellows would have called a computer.

Bits and qubits

Every classical electronic computer exploits the natural behavior of electrons to produce results in accordance with Boolean logic (for any two specific input states, one certain output state). Here, the basic unit of transaction is the binary digit (‘bit’), whose state is either 0 or 1. In a conventional semiconductor, these two states are represented by low and high voltage levels within transistors.

In a quantum computer, the structure is radically different. Its basic unit of registering state is the qubit, which at one level also stores a 0 or 1 state (actually 0 and/or 1, which I’ll confuse you with in a moment). Instead of transistors, one leading design of quantum computer, the trapped-ion machine, obtains its qubits by bombarding atoms with electrical fields at perpendicular angles to one another, the result being to line up the ions, but also keep them conveniently and equivalently separated. When these ions are separated by just enough space, their orbiting electrons become the home addresses, if you will, for qubits.

Spin, one way or the other

While a conventional computer focuses on voltage, a quantum system is (passively) concerned with one aspect of electrons at the quantum level, called spin. Yes, this has to do with the electron’s angular momentum. We use the term quantum at the subatomic level of physics because of the indivisibility of what we may observe, such as the amount of energy in a photon (a particle of light). Spin is one of these delightfully indivisible components: the intrinsic angular momentum carried by the electron itself, distinct from the momentum of its orbit around the nucleus. The spin of an electron is always, as physicists calculate it, 1/2 (one-half); the only difference here is polarity, which very simply may be either up or down.

Also: Quantum computers will blow your mind CNET

It’s the up or down state of electron spin that corresponds to the ‘1’ and ‘0’ of the typical binary digit. Yet it’s here where quantum computing makes a sharp turn into a logical black hole, through a tunnel of white noise, and jettisons us helplessly into a whimsically devious universe whose laws and principles seem concocted by the University of Toontown.

Superposition and why you can’t see it

A qubit maintains the quantum state for one electron. When no one is looking at it, it can attain the ‘1’ and ‘0’ state simultaneously. If you look at it, you won’t see this happen, and if it was happening before, it immediately stops. (This is literally true.) Yet the fact that the qubit’s electron was spinning in both directions at once is verifiable after the fact. Quantum mechanics calls this simultaneous state of both here and there superposition. It is impossible to witness an electron in a state of superposition because witnessing requires the very exchange of photons that causes such a superposition to collapse.
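A rough way to picture what observation does: before measurement, a qubit carries two amplitudes; measurement yields a definite 0 or 1, with probabilities equal to the squared magnitudes of those amplitudes (the Born rule). The sketch below is a purely classical simulation of that behavior, not a claim about how any real device is programmed.

```python
import random

# A single qubit in superposition: amplitudes for |0> and |1>.
# The Born rule says measurement probabilities are the squared
# magnitudes; here the state gives 25% zero and 75% one.
amp0, amp1 = 0.25 ** 0.5, 0.75 ** 0.5

def measure():
    """Observation collapses the superposition to a definite 0 or 1."""
    return 0 if random.random() < amp0 ** 2 else 1

counts = [0, 0]
for _ in range(10_000):
    counts[measure()] += 1
print(counts)  # roughly [2500, 7500]
```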

As one Fordham University lecturer put it, “We don’t understand this, but get used to it.”

There are multiple possible states of superposition. Here is why each additional qubit in a quantum system is more influential than the last: A system with n qubits spans 2 to the power n basis states, and a superposition may involve all of them at once. If you remember the history of binary computers, when 16-bit processors were first replaced with 32-bit processors, a register’s maximum unsigned value suddenly jumped from 65,535 to 4,294,967,295. Likewise, a 32-qubit quantum register spans 4,294,967,296 possible states, any or all of which may participate in its superposition.

Why does this matter, if the final state only collapses to 0 or 1 anyway when someone or something takes the bold step of just looking at the thing? Because before that collapse takes place, each of these states is a valid, possible value. (This is why you don’t hear a lot about quantum computers needing much DRAM.) During that strange, black-box period where it can work unobserved and undisturbed, a quantum processor is capable of performing real algorithmic functions on units that are much less like binary digits than they are like wheels in one of Charles Babbage’s difference engines — except with billions of settings rather than just 10.
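As a small illustration of why qubits beat Babbage’s wheels: put a three-qubit register into an equal superposition, and it holds an amplitude for all eight classical bit patterns at once. A minimal sketch (classical, of course, and purely illustrative):

```python
import itertools

n = 3  # a tiny quantum register
# In an equal superposition, the register assigns the same amplitude,
# 1/sqrt(2**n), to every one of the 2**n classical bit patterns at once.
amplitude = (1 / 2 ** n) ** 0.5

for bits in itertools.product("01", repeat=n):
    label = "".join(bits)
    print(f"|{label}>  amplitude {amplitude:.4f}  probability {amplitude**2:.4f}")
```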

Also: IBM quantum computers will unleash weird science on business CNET

[Diagram: Bloch sphere representation of a qubit. Licensed under Creative Commons]

Instead of giant wheels, quantum engineers have chosen a better way of representing qubits’ spin states. More specifically, they borrowed this model from a Swiss emigrant physicist to the U.S., Felix Bloch, who shared the 1952 Nobel Prize in physics for discovering the principle of nuclear magnetic resonance. If you can imagine a billiard ball with one dot, and an imaginary line from the core of the ball through the center of the dot and outward as a vector, then you can picture a Bloch sphere like this one. Each superposition state a qubit may take can be represented as a vector in the Bloch sphere, described by two angles: a polar angle measured from the z axis, and an azimuthal angle around it. Using ordinary trigonometry, the state is written as a combination of the cosine and sine of half the polar angle, weighting the 0 and 1 states respectively.
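For the mathematically inclined, here is the standard textbook parametrization of a single-qubit state on the Bloch sphere, with θ the polar angle from the z axis and φ the azimuthal angle around it:

```latex
% A qubit state as a point on the Bloch sphere:
\[
  \lvert \psi \rangle \;=\; \cos\frac{\theta}{2}\,\lvert 0 \rangle
  \;+\; e^{i\varphi}\sin\frac{\theta}{2}\,\lvert 1 \rangle ,
  \qquad 0 \le \theta \le \pi,\quad 0 \le \varphi < 2\pi
\]
```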

(In quantum mechanics, the functions that describe waves — physicists call them wavefunctions — are represented as vectors in multi-dimensional Hilbert space [PDF]. Typically this makes the relationships between functions easy to solve, as problems in linear algebra. The types of problems best resolved by a QC can be expressed as linear algebra. A Bloch sphere is one way of simply representing a quantum function in two-dimensional Hilbert space, as something a human being can more easily visualize.)

The superposition whiz

The initial mystery of superposition, for anyone who’s studied computing since before he was granted a license to drive, concerns how one writes a program to account for any number of possible states. When Microsoft Windows first started out, there was a question of how to program for whatever the user may choose to do, when there were so many choices. The system Microsoft came up with was a kind of “user event trap” that read the “intelligent keyboard interface” (which actually included the mouse) for signals. It then processed each signal through a list, like Santa Claus double-checking the Naughty List for each chimney he didn’t recognize. This task became monumental when touchscreens were first introduced.

So obviously, the same method would be impossible for any rational form of quantum computing. The answer here is that logic doesn’t work the same way. Instead, you feed the system the problem and have faith that there will be a solution.

The metaphor I came up with for envisioning this begs the indulgence of the reader. As a child, I loved how in my local barber shop, the mirrors were positioned along two walls facing each other, so I could see myself stretching into infinity. I also loved how my mother made me flash cards with digits on them, quizzed me about math problems, and had me hold up digits for my answers — making sure I flipped the order of my digits the right way around facing her, so that “29” wasn’t “92.”

For binary math, the only flash cards such a child would ever need would read “0” or “1.” So if you brought your child with you to the barber shop or salon, and played the math game with her, quizzing her, “What’s 1 XOR 1?” she could hold up a “0” and see a long stretch of zeroes through the infinite reflections. There would only ever be a handful of Boolean arithmetic problems she’d ever need to memorize.

Now, imagine you and your child discovered something absolutely fascinating: For as many reflections of her as you could see, each digit held up would be an independent power of 2. So if you quizzed, “What’s 4 + 5?” the first reflection you’d see would hold up a card with a “9” on it, and the rest of the reflections would hold up zeroes. You could pose for your child an impossibly long problem, but solving it would be as simple as basic logic for each child being reflected. If you could see 20 reflections, you could see 20 independent digits for each flash card.

There’s another paranormal phenomenon at work here: Instead of just 0 or 1, each card could register as many unique digits as you could see child reflections, as though it had that many sides. If you could see 16 children, each card would render a digit of the result in hexadecimal. And still, from the child’s perspective, the logic would be just as simple as Boolean. Neither you nor the child would know how she keeps getting the right answer. And every once in a while, though not too often, she would get the wrong answer, but never so often as to score worse than an “A-” on a math quiz.

No one — not you, the child, the barber, or the people clamoring at the window to get a glimpse of the math whizzes in the mirrors — would actually know how this works. But you could all plainly see that it does work.

Also: IBM promises steady quantum computing progress through 2023 CNET

What the first quantum programs will look like

The trick in writing a quantum algorithm is to imagine that you could actually see, or measure, qubits in their superposition states, so that you can instruct them as to what happens next and cause adjustments to those states. In reality, the very act of attempting to witness superposition results in decoherence — the reversion of qubits to their classical 0 or 1 states. Decoherence always happens eventually to a quantum system; on today’s hardware it typically sets in within tiny fractions of a second, and even the most stable trapped-ion systems hold out for minutes at best.

The whole point of a quantum program is to take full advantage of the ability to manipulate which way all these billiard balls are pointing while no one is looking, prior to their decoherence. There are two types of quantum programs, which function very differently from one another:

  • A program using quantum gates follows Richard Feynman’s original suggestion: That there are other forms of logic within the quantum space. With a binary computer, an AND or an OR gate would take two discrete voltage inputs as bits and yield one certain output. With gates in a quantum circuit — the quantum counterpart of a classical electrical circuit — several qubits may be used as inputs, and the result may be some form of a superposition state, which the Bloch sphere representation breaks down into mathematical values — including, quite likely, complex numbers. (A minimal gate-level sketch appears after this list.)
[Photo: D-Wave’s 1000Q quantum annealing device. D-Wave Systems]

  • A quantum annealing system, such as the kind D-Wave currently produces [shown above], takes a very different route. Instead of establishing a quantum circuit, an annealer translates formulas (called ‘Hamiltonians’) that describe the physical state of the quantum system, into actual physical states. While any quantum computer may use one Hamiltonian to describe the initial state, an annealer uses successive Hamiltonians to represent minute changes in the desired state of the system, in very incremental steps along the way to the final desired state. Each step knocks the qubits around, in such a way that their state at the final step represents the set of probabilities that form the final solution. (One researcher likened this to shaking marbles around in an egg crate, with each shake perfectly programmed.) Skeptics of this process are wont to point out that this is not the system Feynman first proposed, and thus either directly assert that an annealing system is not a true quantum computer, or indirectly suggest that no real quantum computer presently exists. It’s fair to assume such skeptics do not presently have contracts with NASA, Google, or Lockheed Martin.
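To make the gate model concrete, here is a minimal sketch, using nothing but NumPy, of the state-vector arithmetic behind a two-qubit circuit: a Hadamard gate followed by a CNOT, producing the entangled Bell state (|00⟩ + |11⟩)/√2. This is a classical simulation of a quantum circuit, not any vendor’s API; gates are just matrices, and qubit registers just vectors.

```python
import numpy as np

# Standard single- and two-qubit gates as matrices.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard: creates superposition
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],                 # flips the second qubit
                 [0, 1, 0, 0],                 # when the first qubit is 1
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.zeros(4)
state[0] = 1.0                    # start in |00>
state = np.kron(H, I) @ state     # Hadamard on the first qubit
state = CNOT @ state              # entangle the pair
print(np.round(state, 3))         # [0.707 0.    0.    0.707] -- a Bell state
```

The printed amplitudes mean that a measurement yields 00 or 11 with equal probability, and never 01 or 10: the hallmark of entanglement that no pair of classical bits can reproduce.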

Quantum computing is an experiment and, for now, it’s an incomplete one. It could either rewrite the entire language of the computing industry, or it could sputter out in an invisible plume of quantum decoherence. It is a gamble that the practical world can interface with the quantum universe. It still might not pay off. But in the meantime, it’s as fascinating as anything that human eyes have ever cleverly avoided witnessing.
