
Could quantum computers fix political polls?

You and I are being continually surveyed. We reveal information about ourselves with astonishingly little resistance. Social media has made many of us into veritable slot machines for our own personal data. We’re fed a little token of encouragement that someone may yet like us, our arm is gently pulled, and we disgorge something we hope people will find valuable enough for commencing small talk. What personal facts, real or trivial, we do end up disclosing — perhaps unwittingly — immediately undergo unceasing analysis. The inferences these analyses draw about us as people are being aggregated, baselined, composited, deliberated, and profiled. What we like, who we like, what we say we think, and what we think we say all form patterns. We just don’t know what the patterns mean yet.

You’d think there would be a danger in that. One day, you suddenly fail to qualify for some vital service, such as restoring electricity in the cold, because your personal profile falls within the wrong aggregate group. But what happens when, one day, those polls’ and surveys’ conclusions become spot-on accurate? What changes in the world, once databases know enough about you to guess your beliefs, your motives, your objectives, your ideals, to enable every message you see and hear to appeal directly to you? You’re born, you switch on, you log in, and there’s a “You Channel.” And there, waiting for you on-screen, are all the people running for public office who are just like you.


You’d think there’d be a danger in that, too — in predicting your decisions so accurately that they end up being made for you. Instantaneous, foolproof, automated democracy. Yet there’s a third possibility: What if we discover that such a utopia of perfectly predicted preferences is impossible to make real, or if not quite impossible, would simply consume too much time, effort, and money to be viable? Maybe the public opinion industry could suffice with an imperfect model, just as long as it could avoid giving too many hints to the general public about just how imperfect it is. Democracy would always have a little wiggle room, a convenient margin of error, a seed of doubt. 


This is the story of an in-between problem — not so astoundingly difficult that it boggles the mind like the best science fiction, yet not so simple that it’s the stuff of first-year college textbooks. It all stems from this question: If supercomputers can’t improve the accuracy of pre-election political surveys (so that, for example, they don’t end up sparking riots), then why couldn’t quantum computers do the job, once they finally become practical?

What follows is the circuitous route taken in search of the answer. It begins with quantum computers (QC) being too big a tool for the job — a bulldozer when what’s required is a shovel. It ends with the QC being not enough tool for the job — a shovel when what’s required is a computer.

Poll position

Perception, then, emerges as that relatively primitive, partly autonomous, institutionalized, ratiomorphic subsystem of cognition which achieves prompt and richly detailed orientation habitually concerning the vitally relevant, mostly distal aspects of the environment on the basis of mutually vicarious, relatively restricted and stereotyped, insufficient evidence in uncertainty-geared interaction and compromise, seemingly following the highest probability for smallness of error at the expense of the highest frequency of precision.

- Egon Brunswik, Perception and the Representative Design of Psychological Experiments, 1947

The final Quinnipiac University national opinion survey conducted prior to the Nov. 3, 2020 US general election gave Democrat Joe Biden an 11-point lead over the Republican incumbent. The final NBC News/Wall Street Journal poll gave Biden a 10-point lead. His popular vote win, albeit certifiably real, turned out to be 4.46%. The final RealClearPolitics average of major polls conducted prior to the actual election indicated a 7.2-point margin of victory for Biden.

FiveThirtyEight Editor-in-Chief Nate Silver remarked, in a podcast published by ABC News days after Biden’s victory was declared, that he believed a three-to-four-point margin of error for polling forecasts was “OK.” “The simple fact is,” Silver admitted, “that polls miss on average by about three points. So if we’re three-and-a-half or four, that’s pretty normal.” He continued:

There’s a careful balance to strike. If the polling industry said, “Well, OK, three or four points, no big deal, let’s just keep doin’ what we’re doin’,” then odds are, whatever problems you’d have this year, with new problems, polls might get really far off. So if I were a pollster, I’d say we have to look at what happened here. Why weren’t we exactly on the mark?


From a historical perspective, the 2016 US national election opinion surveys were disastrously inaccurate. Every professional poll with an ounce of credibility predicted former Sec. of State Hillary Clinton to best the Republican in the popular vote by as much as 7 points. As it turned out, she won the popular vote by only 2.1 points, which was not enough to compensate for her defeat in the Electoral College, the contest that legally decides the outcome.

“The 2016 presidential election was a jarring event for polling in the United States,” reads the preamble to a 2017 report published by the American Association for Public Opinion Research, and produced by an ad hoc committee of several professional pollsters. “There was (and continues to be) widespread consensus that the polls failed.”

Chart: The extent of discrepancies in aggregates of US general election opinion polls, 1936 to 2016. (Source: AAPOR)

That consensus, the AAPOR team concluded, was — like a great many consensuses at the time — wrong. “The 2016 pre-election estimates in the Republican and Democratic primaries were not perfect,” the team wrote, “but the misses were normal in scope and magnitude. The vast majority of primary polls predicted the right winner.” What discrepancies there were, they wrote, may have been attributable to misunderstandings, or perhaps no understandings whatsoever, regarding the voting behavior of survey respondents, and how their declared preferences translate into real action at their precincts.

Earlier in 2016, the British Polling Council and the Market Research Society published the results of an independent investigation into the discrepancies in political surveys prior to the May 2015 UK general election. As late as the day before that election, aggregates of pollsters’ final tallies placed the Conservative and Labour parties in an absolute dead heat: 34% to 34%. As things turned out, the Conservatives trounced Labour by 7 points.

Chart: Net error in Conservative vote shares. (Source: British Polling Council / Market Research Society)

The BPC/MRS team produced this chart among others, depicting the surveys’ dismal performance, but otherwise supporting their general conclusion that 2015 was a fluke, an outlier. “Our conclusion is that the primary cause of the polling miss in 2015 was unrepresentative samples,” the report’s summary read, daring to perceive a connection between the polls being wrong and the data being wrong. The team continued:

The methods the pollsters used to collect samples of voters systematically over-represented Labour supporters and under-represented Conservative supporters. The statistical adjustment procedures applied to the raw data did not mitigate this basic problem to any notable degree.

The remainder of the report listed probable factors leading to these discrepancies, one of which concerns weighting. Pollsters are charged with sampling a variety of specific demographic groups. Trends are identified among groups. Then the data accumulated from each group is adjusted to account for its relative level of representativeness among the general population. This way, for example, if two-thirds of the samples for a given district came from people earning above a certain amount, but only three-fifths of everyone in that district earned that much, the weight of those responses would be adjusted down by about 10 percent, as the sketch below illustrates.
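As a rough illustration of the idea (the group labels, shares, and responses here are invented, and real pollsters weight on many variables at once), the adjustment simply multiplies each response by the ratio of the group’s share of the population to its share of the sample:

```python
# Toy post-stratification weighting. All groups, shares, and answers are hypothetical.
# weight = (group's share of the population) / (group's share of the sample)

sample_share = {"high_income": 2 / 3, "other": 1 / 3}      # who answered the poll
population_share = {"high_income": 3 / 5, "other": 2 / 5}  # who actually lives there

weights = {g: population_share[g] / sample_share[g] for g in sample_share}
print(weights)  # high_income: 0.9 (scaled down ~10%), other: 1.2 (scaled up 20%)

# Each respondent's declared preference: 1 = candidate A, 0 = candidate B.
respondents = [
    {"group": "high_income", "prefers_a": 1},
    {"group": "high_income", "prefers_a": 1},
    {"group": "other", "prefers_a": 0},
]

weighted_support = sum(r["prefers_a"] * weights[r["group"]] for r in respondents)
total_weight = sum(weights[r["group"]] for r in respondents)
print(f"Weighted support for candidate A: {weighted_support / total_weight:.0%}")
```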

The ultimate purpose of weighting is to overcome bias. It’s a practice that permeates every forecasting method that purports to be “scientific.”

BPC/MRS is saying these “statistical adjustment procedures” failed, at least for 2015. We could say the lesson from this admonition is for everyone to do a better job of statistical adjustment next time around. But perhaps it’s the basic conceit of such adjustments that lies at the heart of the error: that the extent of anyone’s influence over an arbitrary group is counterbalanced by their demographic metadata.

There are evident phenomena — including failures to account for voters’ education levels, chronic underestimations of support for challengers, discrepancies in population surveys, tendencies to place too much confidence in poll aggregators — for which mathematical models have yet to be invented. However, since their inception, machine learning systems based on neural networks have managed to “learn,” or at least recognize, patterns of formation or behavior or development that account for underlying phenomena, even when those phenomena are not understood, identified, or even isolated.

So rather than banging our heads against the same wall and expecting a different outcome, maybe we should rephrase the problem as a machine learning application. If we were to treat each voter the way a quantum hurricane simulation treats an air molecule, and base our inferences about that person’s vote upon how her registered opinion reflects her existing record — what she’s said online in the past, which candidates she contributes money to, maybe her grocery bills, which celebrities she’d choose to lead a coup — at the very least, the blast radius from a statistical error would be minimized to less than a city block.
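In classical machine learning terms, that reframing looks like an ordinary per-voter classifier. The sketch below uses invented feature names and synthetic data purely to show the shape of the approach; a real system would draw on far richer (and far more contentious) behavioral records:

```python
# Toy per-voter model: infer a vote from an individual's own record rather than
# from the demographic bucket that person happens to fall into.
# Feature names and data are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Columns: [donations_to_candidate_a, political_posts_per_week, rallies_attended]
X = rng.normal(size=(500, 3))
# Synthetic ground truth: heavier donors and posters lean toward candidate A.
y = (X @ np.array([1.5, 0.8, 0.3]) + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

new_voter = np.array([[0.2, 1.1, -0.4]])
print("Estimated probability of voting for A:", model.predict_proba(new_voter)[0, 1])
```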

It would seem this is the very class of application for which quantum computers (QC) are being invented. Wouldn’t a quantum computer have the capacity to tackle this problem at the most granular level?

“It seems your question,” began the response from Dr. Joseph Emerson, CEO of Quantum Benchmark, producer of error correction software for quantum information systems, “is based on a fundamental misunderstanding of what quantum computers provide.”


A big, expensive sledgehammer

A reliable, functional quantum computer (once such a thing is feasible) should be capable of running molecular-level simulations of physical substances in a variety of environmental states and habitats, rendering practical results in mere hours that would have required hundreds of years of runtime from classical supercomputers.

“What we are looking for are significant types of improvements,” remarked Robert Sutor, IBM Research’s vice president of quantum ecosystem development, during a QC conference last October. “We’re not looking for the so-called linear improvements. We’re looking for things that, for example, would take something down from a million seconds to a thousand seconds, or a million seconds down to six seconds. The first is an example of a quadratic improvement; the second is an exponential improvement.”

Researchers wouldn’t just discover these improvements after timing their quantum algorithms with a stopwatch. Rather, they would rethink the problems they’re trying to solve, recasting them in mathematical forms where quadratic or exponential speedups apply. With classical computers (like the one you’re using now), such processes are recursive, made up of cycles that are run repeatedly until the solution is reached. In a quantum system, the recursive elements may be run in parallel rather than in sequence, the result being (at least theoretically) that a complex problem may be solved in the same amount of time as a simple one — the QC literally can’t tell the difference.
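Sutor’s figures are easy to check: a quadratic speedup roughly replaces a runtime with its square root, while an exponential speedup roughly replaces it with its logarithm (the base is a matter of convention; base 10 happens to reproduce his “six seconds”). A few lines of Python, included here purely as arithmetic, make the point:

```python
import math

classical_runtime = 1_000_000  # seconds

# Quadratic speedup: runtime shrinks roughly to its square root.
print(f"Quadratic: {math.sqrt(classical_runtime):,.0f} seconds")    # 1,000 seconds

# Exponential speedup: runtime shrinks roughly to its logarithm.
print(f"Exponential: {math.log10(classical_runtime):.0f} seconds")  # 6 seconds
```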

Some use cases Sutor listed that could be made feasible within our lifetimes through quantum computation include:

  • Modeling new methods for producing ammonia-based fertilizers in a way that consumes less energy than they do now — as much as 1% of the world’s entire fuel production;
  • Discovering new and more efficient processes for converting carbon dioxide into hydrocarbons;
  • Modeling new electrolytes for electric vehicle batteries, perhaps enabling electric aircraft;
  • Simulating possible new classes of antibiotics, which Sutor believes will require massive quantum computing systems for representing infected organisms at a cellular level.

“More efficient, more specific, less general,” as Sutor characterized these problems, “less hitting these problems with a big, expensive sledgehammer, and trying to be much more efficient about how we produce, in this case, the energy.” He predicts that the antibiotics simulation use case in particular, given today’s level of evolution for QC as a craft, will become feasible maybe a decade hence.


Intermediacy

Common sense would tell you that, if the solution to an extremely complex problem is a decade away, surely the solution to one that’s one-fifth as complex, for example, would be only two years away (unless the matter of complexity is quadratic rather than linear, in which case, it may be sooner).

Yet for reasons no one quite understands, but can clearly observe, a tall stack of qubits can run complex algorithms as easily as simple ones. The issues at hand are: How many qubits can you stack together, and still have them stay coherent long enough to render results? And how soon can we stack enough qubits together to accomplish something practical?

Last September, Jaime Sevilla, a researcher with the Center for the Study of Existential Risk at the University of Cambridge, and Dr. C. Jess Riedel, a quantum physics researcher with NTT Research in Sunnyvale, produced a report with the intent of forecasting answers to these specific questions. For instance, when will QC technology break RSA-2048 cryptography, as quantum practitioners have already declared inevitable?

First, Riedel and Sevilla measured the rate of progress with stacking physical qubits, based on how often papers were published announcing these milestones. They also gauged the rate of decrease in gate error rates (a measurement of “truth leakage”) among paired qubits. Using these inputs, they aggregated a metric called a generalized logical qubit — a normalized approximation of performance given quantities of coherent physical qubits, operating at various error rates.
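Their actual model is considerably more elaborate, fitting trends to both metrics jointly and carrying the uncertainty through to probability estimates. But the basic flavor of the exercise — here rendered without the dash, as a plain extrapolation of a milestone trend toward a target — can be sketched with a simple log-linear fit (the milestone figures below are invented, not theirs):

```python
# Toy trend extrapolation -- not Sevilla and Riedel's model. Data are invented.
import numpy as np

years = np.array([2016, 2017, 2018, 2019, 2020])
qubits = np.array([4, 9, 18, 40, 70])  # hypothetical milestone announcements

# Fit a straight line to log(qubits) vs. year, i.e. assume exponential growth.
slope, intercept = np.polyfit(years, np.log(qubits), 1)

target = 1_000_000  # a rough figure often cited for fault-tolerant machines
year_hit = (np.log(target) - intercept) / slope
print(f"Naive extrapolation reaches {target:,} qubits around {year_hit:.0f}")
```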

Chart: Sevilla and Riedel’s predicted quantum computing milestones by 2023, with probability regions shown as ovals.

Sevilla and Riedel estimated a 15% probability, represented by the narrowest oval, that an operable QC system will successfully link about 11.4 qubits with an average gate error rate of 1 in 100, by 2023. This compared to about 9.22 qubits with an error rate of 2 in 100 as of 2020. They wrote:

We note that the physical qubits metric is positively oriented while the error rate metric is negatively oriented, so the positivity of the covariance suggests the existence of a robust trade-off between both metrics. . . In plain English, this suggests that quantum computer designers face a trade-off between trying to optimize for quantum computers with many physical qubits and quantum computers with very low gate error rate.

A sensible scientist might object, saying there couldn’t possibly be a correlation between the publication rate of good quantum news, and the error rates of working QC systems. We’ve seen spurious correlations before. Yet the entire semiconductor industry based the first half-century of its economy on another presumably spurious correlation: between the transistor count on a processor’s die, and the leveling off of that processor’s market price. Moore’s Law looked like a spurious correlation until the two trends ran in lockstep long enough that they were taken as indisputable fact.

Riedel and Sevilla’s work implies the presence of a Moore-like scale for predicting when quantum systems will attain a modicum of reliability. Virtual attendees at last October’s Inside Quantum Technology Europe conference tried out this scale on Bob Sutor’s use cases. Others, such as Prof. Lieven Vandersypen of Dutch quantum research association QuTech, articulated a new goal they called quantum practicality — the point at which a QC would be a more reliable and cost-effective choice for a real-world task than a supercomputer.

There’s a 90% chance, according to Sevilla and Riedel, that a 100-qubit QC, with an error rate between 1-in-10 and 1-in-1000, will be announced by 2023. Assuming the polling error margin problem is as insignificant an obstacle as the AAPOR team characterized it, suppose a QC capable of kicking polling errors to the curb were made available in time for the next presidential election.

“Basically, I do not think that quantum computing will affect the results of polling analysis at all,” responded Sevilla, when we put the question to him directly. He continued:

Think of quantum computing as a fancy computer unit that can make some particular operations faster. If you came up with a ‘quantum’ way of computing the results of the forecast, the results would be the same — you just would get them faster.


“Remember that a classical computer can compute anything that a quantum computer can,” responded Quantum Benchmark’s Dr. Emerson. “The only difference is the speed with which the quantum computer can come to the solution. So if you are patient enough, you never need a quantum computer.”

“I think 100 qubits would be about the smallest that could solve a relevant problem if the qubits were perfect,” responded Prof. Vandersypen to the same question. “We don’t have perfect qubits. So with the knowledge of today, rather than needing 100 real qubits, we need maybe a million, ten million qubits. There’s a large overhead that comes with correcting the errors faster than they occur.”
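Vandersypen’s “large overhead” can be roughed out with the standard back-of-envelope for surface-code error correction. The constants below (physical error rate, threshold, target logical error rate, qubits per code patch) are ballpark assumptions, not figures he gave, but they show how 100 useful qubits balloon into millions of physical ones:

```python
# Rough surface-code overhead estimate. Every constant here is a ballpark assumption.
physical_error = 5e-3         # error per physical gate, optimistic for today's hardware
threshold = 1e-2              # approximate surface-code threshold
target_logical_error = 1e-12  # error budget per logical operation for a long algorithm

# Logical error falls roughly as (p / p_th) ** ((d + 1) / 2) for code distance d.
ratio = physical_error / threshold
d = 1
while ratio ** ((d + 1) / 2) > target_logical_error:
    d += 2  # code distance stays odd

physical_per_logical = 2 * d ** 2   # rough qubit count for one surface-code patch
logical_qubits = 100
print(f"Code distance ~{d}: about {logical_qubits * physical_per_logical:,} "
      f"physical qubits for {logical_qubits} logical qubits")
```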

Vandersypen also co-directs the Kavli Institute of Nanoscience at the Netherlands’ Delft University of Technology. He repeated the point he had made during a roundtable discussion at IQT Europe: The reliability of qubits as mechanisms may need to improve by four to five orders of magnitude before we can reasonably say any of the QC systems that fit inside the Sevilla/Riedel chart are practical for real-world use. He told us:

There is a lot of work trying to see if perhaps, with the real, non-perfect qubits that we can make, we could do something useful without needing error correction, which then brings a very large redundancy, and increases the required numbers of qubits very fast, to the millions. We call it research into noisy intermediate-scale quantum [NISQ] computers. But the reality is that it’s very speculative, whether such noisy devices will be good for anything. We really don’t know.

The entire history of quantum physics has been about dealing with the unseen, inexplicable stuff in-between the physical states we know. A NISQ computer is, by definition, an imperfect machine that, when programmed well enough, should yield accurate results more often than inaccurate ones. Imagine if, back at the turn of the century, Intel hadn’t addressed the problems of transistor leakage at lithography levels below 100 nm.

Now, suppose you were physically incapable of seeing such leaks happen, but you knew they had to have happened because you could clearly see the mess they caused. Until you found a way of rearchitecting the device (in the quantum case, of building better qubits), you’d have to limit its applications to small mathematical functions that you could perhaps repeat, to compensate for all the noise. You couldn’t run Windows on it.
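The “repeat to compensate” idea is ordinary statistics: if each run returns the right answer plus random noise, averaging many runs shrinks the noise roughly with the square root of the number of repetitions. A toy illustration, with an invented noise model standing in for the device:

```python
# Toy illustration: averaging repeated noisy evaluations of a small function.
import random

random.seed(1)
TRUE_VALUE = 0.25  # the answer an ideal, noiseless machine would return

def noisy_evaluation():
    # Pretend each run of the device returns the true value plus Gaussian noise.
    return TRUE_VALUE + random.gauss(0, 0.1)

for repetitions in (1, 100, 10_000):
    estimate = sum(noisy_evaluation() for _ in range(repetitions)) / repetitions
    print(f"{repetitions:>6} runs -> estimate {estimate:.4f}")
```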

Here’s where the interest level kicks up a notch: Another example of a device that is noisy and prone to error, yet that people put up with using every day, is the brain. It already accounts for noisy signals and distortions. Neural networks, which are artificial devices “inspired by” what we think we understand about the brain, are already designed to account for noise. What’s more, applications with neural networks at their roots make predictions about trends today, when the phenomena underlying those trends are either misunderstood or unidentified.

Assuming scientists and engineers could clear every conceivable, unfathomable hurdle — and dozens remain, at least — the potential exists for a NISQ machine with a relatively low number of qubits to run a quantum neural network (QNN). Such a device would run a specialized algorithm that leverages the QC’s innate capability to execute multiple iterations in parallel.
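In most current proposals, such an algorithm amounts to a parameterized quantum circuit whose rotation angles are trained the way a classical network’s weights are. The sketch below simulates a tiny two-qubit version in plain NumPy (no quantum hardware, no training loop) just to show the moving parts; it is an illustration of the idea, not anyone’s production QNN:

```python
# Toy two-qubit "quantum neural network": a parameterized circuit simulated in NumPy.
import numpy as np

def ry(theta):
    # Single-qubit rotation about the Y axis.
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

# CNOT with qubit 0 as control, qubit 1 as target (basis order |00>, |01>, |10>, |11>).
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

def circuit(features, weights):
    state = np.zeros(4); state[0] = 1.0                         # start in |00>
    state = np.kron(ry(features[0]), ry(features[1])) @ state   # encode the two inputs
    state = CNOT @ state                                         # entangle the qubits
    state = np.kron(ry(weights[0]), ry(weights[1])) @ state      # trainable layer
    probs = np.abs(state) ** 2
    # "Prediction" = expectation value of Z on qubit 0, a number between -1 and +1.
    return (probs[0] + probs[1]) - (probs[2] + probs[3])

print(circuit(features=[0.4, 1.2], weights=[0.1, -0.3]))
```

Training would then nudge the weights to push that output toward +1 for one class and -1 for the other, much as gradient descent nudges a classical network’s weights.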

It may take, as Vandersypen suggests, qubit counts several orders of magnitude higher to reach the point where a QNN could, say, estimate the stress levels on individual carbon-60 molecules bonded together in nanocomposite fabrics. But perhaps there’s an in-between problem it could tackle — something not nearly as complex, but with at least the same potential for real-world usefulness. Case in point: estimating the voting behaviors of individuals based not upon demographics, but rather upon what their online activity reveals about their personal preferences, beliefs, and interests.

This is the avenue we’ll travel in Part 2 of this expedition. Until then, hold fast.



Maps for this edition of Scale were reproduced from digitized photos of plates belonging to Scribner’s Political Atlas of 1880, part of the collection of the US Library of Congress. It’s one of the first kinds of “heat maps” depicting each US county’s relative contribution to the final popular vote total that year. Note that red was used to denote prevailingly Democratic counties at that time, blue Republican.
