Thanks for sharing, the ArsTechnica article does a good job of bridging the info from the press release and the research in the Nature paper while addressing the setbacks in the field.
> Microsoft’s topological qubit architecture has aluminum nanowires joined together to form an H. Each H has four controllable Majoranas and makes one qubit. These Hs can be connected, too, and laid out across the chip like so many tiles.
So they are not all in a superposition with each other? They talk about a million of these nanowires but that looks a bit like quantum dots?
Matter refers to particles, or collections of particles, that have mass and volume. These particles can be arranged or behave in different ways, and that is roughly what a "state of matter" is. You know how in a solid all the atoms are fixed, but in a gas the atoms/molecules are flying about.
There are in fact other forms of matter. In plasma you just have ions (instead of atoms/molecules) just zipping about. In neutron stars, you have pretty much only neutrons collapsed into a packed ball.
You can also make systems at higher levels of abstraction that have some of this matter-like or particle-like behavior. A simple example is "phonons", which are small packets of vibration (of atoms) that travel inside a solid much like a photon travels through space. I think phonons don't have a "mass", so they are not matter.
Here, they construct a quantum system, some of whose degrees of freedom behave like a matter particle. Qubits are then made from the states of this particle.
For example in some cases they have a "gas" of electrons. It's not a normal gas that you can put in a balloon, it only can live inside a solid. If you ignore the atoms in the solid, in some cases the electrons are free enough to think they are a gas. That is similar enough to a normal gas, and then they just call it a gas.
Sometimes the interesting part is a surface between two semiconductors, so they may have a 2D gas. (I'm not sure if this experiment is in 2D or 3D.)
Sometimes the electrons make weird patterns that are very stable and move around without deformation, and physicists will call such a pattern a quasiparticle, ignore that it's formed by electrons, and directly think of it as a single entity, analyzing how these quasiparticles appear, disappear, and collide with other particles. It's like working at a high level of abstraction, to make the calculations easier. [2]
In particular, if you arrange the electrons very smartly, they create a quasiparticle that is its own anti-quasiparticle: a Majorana quasiparticle.
This is somewhat related to topological properties of the distribution of the electrons' properties, where "topological" means stable under smooth deformations, which helps make it also stable under thermal noise and other ugly interference. But this is going too far from my area, so my handwaving is not very reliable.
[1] They probably think my area is weird, so we are even :) .
[2] Sometimes the high level abstraction is not an approximation.
And, as always(?), 3Blue1Brown also contributes: https://www.youtube.com/watch?v=IQqtsm-bBRU ("This open problem taught me what topology is"). I'll warn that while it is illustrative, and only 28 minutes, it's some heady stuff.
A few things to keep in mind, given how hard of a media push this is being given (which should immediately set off alarm bells in your head that this might be bullshit)
- Topological phases of matter (similar, but not identical to the one discussed here) have been known for decades and were first observed experimentally in the 1980s.
- Creating Majorana quasiparticles has a long history of false starts and retracted claims (discovery of Majoranas in related systems was announced in 2012 and 2018 and both were since retracted).
- The quoted Nature paper is about measurements on one qubit. One. Not 100, not 1000, a single qubit.
- Unless they think they can scale this up really quickly, it seems like it's a very long (or perhaps non-existent) road to 10^6 qubits.
- If they could scale it up so quickly, it would have been way more convincing to wait a bit (0-2 years) and show a 100 or 1000 qubit machine that would be comparable to efforts from Google, IBM, etc (which have their own problems).
The claim/hope is that topological qubits are fault tolerant, or at least suffer from much lower error rates (very roughly, you can think of a topological qubit as an error correcting code built out of the atoms themselves, i.e., on the scale of Avogadro's number). If, for example, they could build even a single qubit with 10^-6 error rates, that would in fact put them __ahead__ of all other attempts on the path to fault tolerance (but no NISQ).
It is unfortunately unclear how good topological qubits are in practice.
I understand the claim and what they are trying to do (and they've been trying to do it for 20 years now). It's an interesting approach and it is orthogonal enough from other efforts that it is absolutely worthwhile to pursue scientifically (I'm in an adjacent field in condensed matter physics).
But they are doing a full-court press in the media (professionally produced talking-head videos, NYT articles, other media, etc.) claiming all of those things you've just said are right around the corner. And that's going to confuse and mislead the public. So there needs to be pushback on what I think is clear bullshit/spin by a company trying to sell itself using this development.
That's fair, and I don't like their marketing either (or anyone else's; look how the reviewers pushed back on misleading claims in the paper). I was talking specifically about Majoranas being behind superconducting qubits, trapped ions, and cold atoms. If they manage to make one- and two-qubit gates with good error rates in a couple of years (and that's a big if), they'll be approximately where, for example, Google is expected to be. And the question of which will be easier to scale is very unclear and will decide who eventually wins.
> Majoranas hide quantum information, making it more robust, but also harder to measure. The Microsoft team’s new measurement approach is so precise it can detect the difference between one billion and one billion and one electrons in a superconducting wire – which tells the computer what state the qubit is in and forms the basis for quantum computation.
Being able to detect a single electron among billions sounds more like a good way to get entropy rather than something that can help with quantum measurements. At least that's my initial intuition being completely ignorant in Quantum Computing.
Lay people are welcome to read about the latest developments in science. They're also welcome to try to intuit theories related to those latest developments. It's a good way to flex your thinking skills. Experts are then welcome to weigh in on those intuitions and steer them along the right path. Even if you're completely wrong, expressing how you think about things is also helpful to others in case they also have similar intuitions.
Your comment could get an award for most toxic HN comment ever and that's saying something.
I know this is a day later and I don't expect it to be read, but I wanted to come back to this because it ultimately was an unusual experience and I spent some time reflecting on it. It was unusual both because of the clear rejection of the original post, now in light grey, and because of the significant support via upvotes for my response. I'll admit my response was unambiguously in the realm of the unkind (I encourage you not to ignore this admission), but I wanted to pick apart why it was provoked, for social reasons (leaving technical reasons aside), and why your response is unwarranted and harmful.
I don't have a fundamental problem with lay people commenting on or discussing technical topics, and this venue is somewhat built around that. I do have a problem when they go beyond reasonable limits. Those limits include unreasonable, unresearched, poorly thought out, and nonsensical statements. The social problem is that the statement was made in an authoritative way that was derogatory to the fundamental premise of the topic (trying to <perform action> sounds like a good way to <perform something unrelated>). This type of statement can be correct, but it requires a significant amount of weight behind it to be credible given the context and history of this topic. The original poster then follows with an admission of ignorance about what they just posited. Even the ordering of these (rejection then admission) makes it more offensive to a reader who is here for anything more meaningful. This was not a statement made in a spirit of inquiry, clearly not a statement that showed research or thought; it borders on blind repetition of words that might be used in this context, and shows no humility of tone given the poster's position and the context. I don't believe this type of commenting should be encouraged.
The positive reinforcement I received, though, comes from an opposing school of thought you can think of as "master has hit me with a stick and I must meditate." Students typically don't like this as it is uncomfortable at best and can easily be carried into abuse. I reject the notion that this means it should never be applied and in a measured and thoughtful way it can be constructive. It unambiguously carried a lesson to be learned and the nature of that lesson makes it memorable. Without endorsement, one can say the world is a harsh place and learning to accept social rejection, humiliation, and mockery as a consequence of failure to put thought into public statements is an important and useful lesson. This place is pretty low stakes to learn such a lesson given the anonymity.
I will argue that your defense of this, and active encouragement of low-value comments, is what is actually harmful to the community. Blanket labeling of any behavior you find disagreeable as "toxic" (hyperbolically so; if you truly think that is the most toxic comment ever on here, you have not spent any real amount of time in this venue) reflects a common misunderstanding that all negative behavior is bad. Conflict, while uncomfortable, can have positive outcomes. In the case of poor comments, a social way (contrasted with "leaving it to the mods") to deal with this is exactly what happened. And ultimately it can improve the community through clear discouragement of unwanted behavior, carrying the weight of emotional rejection.
If you still disagree with this, consider that the community showed me support, and I think, while this absolutely could have been done differently (again, I encourage you not to ignore that statement), that discouraging inane comments is generally in the spirit of this forum. The success of this community is somewhat based on that, and it has also infamously earned its reputation for it. "Toxic," if you think about the actual meaning of the word (poisoning, working against viability), would hurt the long-term nature of this place. I believe in this case it does the opposite: it raises the average, and it is what this place is built on (though usually more in the form of downvotes). I'd say your blind championing here is a case of "toxic positivity," and you can choose to do what you like with that. I would say that if you do find this place so "toxic," I'd encourage you not to read the comments here, let alone respond to them, and perhaps go find someplace more agreeable to your particular sensibilities.
Can someone check my understanding: does this mean they have eight logical qubits on the chip? It appears that way from the graphic where it zooms into each logical qubit, although it only shows two there.
If that is true, it sounds like having a plan to scale to millions of logical qubits on a chip is even more impressive.
They have never demonstrated even a single physical qubit.
Microsoft has claimed for a while to have observed some signatures of quantized Majorana conductance which might potentially allow building a qubit in the future. However, other researchers in the field have strongly criticized their analysis, and the field is full of retracted papers and allegations of scientific misconduct.
They have no qubits at all, "logical" or not. Yet. They plan to make millions. It is substantially easier to release a plan for millions of qubits than it is to make even one.
Cryptocoins like Bitcoin only need DSA. Quantum-proof DSAs have been widely known since the 1970s*. Bitcoin only needs to change its DSA - and then everything will be fine.
I work in the field. While all players are selling a dream right now, this announcement is even more farcical. Majoranas are still trying to get to the point where they have even one qubit that could be said to exist and whose performance can be quantified.
The Majorana approach (compared with more mature technologies like superconducting circuits or trapped ions) is a long game: there are theoretical reasons to be optimistic, but the experimental reality is far behind. It might work in the long run, but we're not there yet.
Given that Microsoft has been a heavy research collaborator (with Atom Computing and Quantinuum), is there a possibility that the cross-pollination would make it harder to deliver a farcical Majorana chip, since Microsoft isn't all in on their home-rolled hardware choice?
I've held the same view that this stuff was sketchy because of the previous mistakes in recent history, but I do not work in the field.
Another take, to feed your cynicism: MSFT need money to keep investing in this sort of science. By posting announcements like this they hope to become the obvious place for investors interested in quantum to park their money. Stock price goes brrr, MSFT wins.
More cynical still: what exactly has the Strategic Missions and Technologies unit achieved in the last few years? Burned a few billion on Azure for Operators, and sold it off. Got entangled and ultimately lost the JEDI mega deal at the DoD. Was notably not the unit that developed or brought in AI to Microsoft. Doing anything in quantum is good news for whoever leads this division, and they need it.
On the bright side, this is still fundamentally something to be celebrated. Years ago major corporations did basic science research and we are all better off for those folks. With the uncertainty around the future of science funding in the US right now, I at least draw some comfort from the fact that it's still happening. My jadedness about press releases in no way diminishes my respect for the science that the lab people are publishing.
> MSFT need money to keep investing in this sort of science
Microsoft is making absurd amounts of money from Azure and Office (Microsoft 365) subscriptions. Any quantum computing investment is a drop in the bucket for this company.
Even if Microsoft doesn't sell the stock it controls, its existing assets become more valuable when the stock price goes up. There are many ways one could spend those resources if needed: sell it off, borrow against the assets, trade the stock for stock in other companies.
However, since Microsoft has plenty of cash flow already, they can probably afford to just sit on the investment.
> "Every single atom in this chip is placed purposefully. It is constructed from ground up. It is entirely a new state of matter. Think of us as building the picture by painting it atom by atom."
Right off the bat "can scale to a million qubits" tells you it's BS since it only says what could be possible but makes zero claims about what it current does.
I mean my basement can scale to holding thousands of bars of solid gold, but currently houses... 0.
Yeah, nobody can claim scaling without having the error rates to prove it. SPAM, coherence time, and gate fidelity are the limiting factors to scaling, not the concept of an idea of how to build it at scale :-)
Whenever I read about a scientific breakthrough I login to HN to see what the smart people think about it, and am disappointed if there isn't a post with hundreds of comments.
This isn’t a forum of smart people. It’s a forum of asocial tech workers who write in authoritative prose but are just normal people at home staring blankly at a blue glow of a mental bug zapper
Quantum is just the next form of sampling the electromagnetic field. It'll provide mesmerizing computational properties but not rewrite human DNA or beam our consciousness to another galaxy; it'll fill up RAM and disk really fast with an impenetrable amount of data that it will take decades to analyze and to build real experiments across contexts to verify. Tomorrow will still come and be a lot like yesterday for us.
All in all it’s more of the same
Even if we do beam our minds, it's just a copy. These meat suits are still gonna stop experiencing someday. Life for us isn't going anywhere.
Not saying wild things aren’t possible; designer drug glands grafted on and such would be banger and would alter human lived experience.
Another box measuring oscillations of fundamental forces will not.
Religious fear of “corrupting human nature” keeps smart people scaffolding symbolic logic in machines versus experimenting with weird science. Live, eat, mate, help line go up relative to some musty people’s political ledger, and die is all we’re allowed!
I want drug glands, regenerative tissue, and mini kaiju monstrosities grown in labs… as pets!
I wouldn't trust HN one bit (or one qubit) to comment usefully on this question, but presumably hundreds of people are already bugging Scott Aaronson to blog about it. He'll probably have a post in the next couple days saying whether we have permission to be excited.
I just want to acknowledge the general lucidity of this community; also, finding out I am not insane is a bit of a bonus. Love this community, please don't change.
Can we please have a discussion based solely on the content of the article? I know we all hate Microsoft's business practices, but let's try to limit our tendency to turn the discussion into a typical Reddit-esque thread.
Short Term - This might be hype. Sure. Getting some Buzz.
Long Term - MS seems pretty committed and serious. Putting in the time/money for a long term vision. Maybe a decade from now, we'll be bowing down to an all powerful MS God/Oracle/AI.
Well it is not available yet, but Microsoft has industrial strength productivity tools (as opposed to Apple's "consumer" electronics), and a stellar research department.
If this is genuinely worrying to you, take some solace in that post-quantum alternatives are undergoing standardization and implementation right now (Signal and iMessage, for example, have already deployed some PQC, as have others).
However, this announcement is a nothing-burger. As I mentioned down-thread, you should view any QC announcement/press-release with extreme skepticism unless it includes replicable (read: open-source targeting hardware other researchers can test on) benchmarks for progress on real-world use-cases (e.g., Shor, Grover, or a newly-identified actually-interesting use-case). OP does not. Nothing to see here.
Worth saying, I am not a cryptographer—I do cryptography-adjacent research engineering. However, given the level of hype going around this industry, I think it's fair to at least expect to see the spec-sheet as it were.
Thank you for taking the time to respond. I personally lend at least some degree of credence to their claim, given that this is Microsoft we're talking about and not some startup.
If their claim is true, then would that present an issue to RSA encryption? I find it difficult to find information on this topic that is digestible to a layman.
My understanding is that the benefit of quantum computing is parallelism, and I'm not sure how today's encryption standards would be safe from brute force attacks.
No. If their claim is true, they have a new prototype of a single qubit that they say could enable faster scaling up of qubit arrays (which means asymmetric/public-key cryptosystems like RSA will be in trouble sooner than we thought they might be). However, this work does not demonstrate that scaling potential at all. In the spirit of Betteridge's Law of headlines, if such a thing were easy for them to demonstrate, why would they announce this now, with a single logical qubit, rather than when they've demonstrated at least some scaling potential?
This understanding of QC is common, but isn't quite right. Quantum computation is actually really hard to parallelize (which is why Grover, though a bit frightening since it halves the security of symmetric primitives, is actually kind of damning for QC—because you can't parallelize that search really at all, so halving is the best a quantum adversary can get against things like AES-256).
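To put a rough number on that, here is a purely illustrative Python sketch of the square-root speedup and why "halving the bits of security" is the headline effect; the constant factors, and the fact that the iterations are strictly sequential, are what keep Grover from being a practical threat to something like AES-256.

```python
import math

# Illustrative only: Grover's search needs on the order of (pi/4) * sqrt(N)
# iterations to search N keys, and those iterations cannot be parallelized
# the way a classical brute-force search can be split across machines.
def grover_iterations(key_bits: int) -> float:
    return (math.pi / 4) * math.sqrt(2.0 ** key_bits)

for bits in (128, 256):
    quantum_exponent = math.log2(grover_iterations(bits))
    print(f"AES-{bits}: ~2^{bits} classical guesses vs ~2^{quantum_exponent:.0f} Grover iterations")
```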
I stand by my assertion that, until a QC announcement includes replicable benchmarks on actual use-cases, such things can be safely dismissed.
If you continue to be concerned (not necessarily unhealthy), engage cryptographers and security engineers to help your projects build know-how on hybrid (in this case, classical/PQ) cryptosystems, and get them deployed sooner rather than later.
If by “crypto,” the grandparent meant “cryptography,” this is not true. Most widely-deployed asymmetric/public-key primitives (e.g., RSA, elliptic curve cryptography (ECC), etc.) are quite fragile against an adversary with a cryptographically-relevant quantum computer (CRQC). To clarify how fragile, the general consensus/state-of-the-art as far as I am aware, is that Shor's algorithm (which breaks asymmetric primitives) requires about 2x the number of perfect, logical qubits as the RSA key-size (e.g., ~4000 qubits for factoring RSA 2048); however, because none of our qubit designs have a low enough error rate, you need about 1000 qubits to simulate/error-correct for a single logical qubit—so, currently, it's expected you would need around 4_000_000 physical qubits to factor RSA-2048. Post-quantum cryptography (PQC) is specifically the subset of cryptography that is designed to withstand attacks from quantum-enabled adversaries; it is still being actively designed, studied, standardized, implemented, and deployed.
If instead, the reference was to “cryptocurrencies,” most cryptocurrencies I am aware of depend on non-PQ constructions, and fall into the same buckets as RSA and ECC. Some systems, like Bitcoin, are in significant danger without large overhauls—if a practical CRQC is actually realized. There are efforts underway throughout the cryptocurrency communities to try to prepare for such an eventuality, but to my knowledge, none of them have major adoption yet.
As a final note on investment advice: I don't give out investment advice. :)
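A back-of-envelope version of the qubit arithmetic above, using only the figures quoted in this thread (rough consensus estimates, not measurements):

```python
# ~2 logical qubits per RSA key bit for Shor, ~1000 physical qubits of
# error-correction overhead per logical qubit (both figures from the comment above).
def physical_qubits_for_rsa(key_bits: int,
                            logical_per_key_bit: int = 2,
                            physical_per_logical: int = 1000) -> int:
    return key_bits * logical_per_key_bit * physical_per_logical

print(f"{physical_qubits_for_rsa(2048):,}")  # 4,096,000 -- i.e., ~4 million physical qubits
```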
As I've mentioned to another commenter, Bitcoin relies only on the existence of an arbitrary DSA. Quantum computing-resistant DSAs have been known since the 1970s. I reckon that swapping out Bitcoin's current DSA with a quantum-resistant one would not count as a major overhaul. https://news.ycombinator.com/item?id=43113682
I'm not sure about other cryptocoins, but Bitcoin does not use encryption, it only uses authentication, which requires a DSA (Digital Signature Algorithms). Bitcoin's current DSA would in fact be broken by a cryptographically-relevant quantum computer (CRQC). However, there are DSAs - like Lamport signatures and Merkle signatures - known since the 1970s, whose security depends only on the existence of ANY secure hash function. There is no known way to break any widely used hash function using quantum computers. So I reckon that the only change to Bitcoin would be to swap out the current DSA for a different one.
I'm not sure about the downsides of quantum-resistant DSAs.
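To make the Lamport idea above concrete, here is a minimal sketch (Python standard library only); it is a toy for illustration, not production cryptography. The well-known downsides are that each key pair can safely sign only one message and that keys and signatures are kilobytes rather than bytes, which is part of why swapping schemes in is not completely free.

```python
import hashlib
import secrets

# Toy Lamport one-time signatures: security rests only on the hash function,
# which is why schemes in this family are considered quantum-resistant.
def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def keygen():
    # Two random 32-byte secrets per digest bit: one revealed for "0", one for "1".
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(256)]
    pk = [(H(s0), H(s1)) for s0, s1 in sk]
    return sk, pk

def digest_bits(msg: bytes):
    d = H(msg)
    return [(d[i // 8] >> (i % 8)) & 1 for i in range(256)]

def sign(sk, msg: bytes):
    return [sk[i][b] for i, b in enumerate(digest_bits(msg))]

def verify(pk, msg: bytes, sig) -> bool:
    return all(H(sig[i]) == pk[i][b] for i, b in enumerate(digest_bits(msg)))

sk, pk = keygen()
sig = sign(sk, b"example transaction")
print(verify(pk, b"example transaction", sig))   # True
print(verify(pk, b"tampered transaction", sig))  # False (with overwhelming probability)
```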
If it's going to take another 10 years to turn this into a usable product...
Better spew out some marketing BS to move the needle on MSFT stock price...
Agreed wholesale. Any QC announcement that does not include replicable benchmarks for progress executing Shor or Grover (or demonstration of another real-world use-case that they actually address) should be dismissed out-of-hand by everyone except other researchers in QC.
Nadella is currently claiming on X that this opens up a "direct path to 1 million qubits". Based on my priors, I put the probability of this statement being horseshit at 99.9%. Could someone knowledgeable make it 100%?
That word "direct" is doing a lot of heavy lifting.
It makes the statement unfalsifiable: no matter how long it takes, how twisty the path turns out to be, it will always be a direct path, because what is a path after all if not direct?
He didn't say shortest path, quickest path, or any other qualifier that would set visions of nodes and edges and Graph 101 dancing in our nerdy little heads. It's marketing. Good marketing. I hope they can deliver on it.
Can any modern company be said to be of similar size to Bell Labs in its prime? Bell Labs was only possible due to a monopoly supplying massive amounts of funding for about a century. MS had a few decades of domination in the consumer desktop OS space, but they always had competition, and I would be very surprised if they had the kind of revenue streams that make this type of long-term investment possible: regular payments from nearly every household and business in the country, with the ability to easily predict current and future proceeds just by counting heads.
To be fair, MSFT was not started as a research company and is more of a mass-market commercializer of existing technology. Bell Labs' primary goal was to discover new science and invent practical applications. (Reminds me I should go read "The Idea Factory"! (the story of Bell Labs))
IMO new breakthroughs in our understanding of physics would be needed first to make substantial progress in QC. As long as MSFT isn't investing in attosecond lasers or low temperature experiments, RSA will remain secure.
For me it was more than my first scan. I understand the word is Majorana, but when I go to pronounce it or read it, my brain reports back "Major-auwana."
The people who signed off on this name a) do a lot of drugs or b) didn't notice because they have never come anywhere near weed
I'm doing theoretical research in topological quantum computing.
The idea behind topological quantum computing is to utilize quantum materials whose low-energy physics looks like an error correcting code. Since these systems are very large (macroscopic number of atoms), the error rates are (theoretically) very low, ie the qubit is fault tolerant by construction, without any additional error correction. In reality, we do not know how good these qubits will be at finite temperature, with real life noise, etc.
Moreover, these states do not just occur in nature by themselves, so their construction requires engineering, and this is what Microsoft tries to do.
Unfortunately, Majoranas in nanowires have some history of exaggerated claims and data manipulation. Sergey Frolov [1], one of the people behind the original Majorana zero-bias-peaks paper, had a Twitter account that was my go-to source for that, but it looks like he deleted it.
There were also some concerns about previous Microsoft paper [2,3] as well as the unusual decision to publish it without the details to reproduce it [4].
In my opinion, Microsoft does solid science; it's just that the problem they're trying to solve is very hard and there are many ways in which the results can be misleading. I also think it is likely that they are making progress on Majoranas, but I would be surprised if they are able to show quantum memory or single-qubit gates soon.
[1] https://spinespresso.substack.com/p/has-there-been-enough-re...
[2] https://x.com/PhysicsHenry/status/1670184166674112514
[3] https://x.com/PhysicsHenry/status/1892268229139042336
[4] https://journals.aps.org/prb/abstract/10.1103/PhysRevB.107.2...
I’m an experimentalist at Microsoft Quantum who was involved in the work presented in our recent Nature publication. As an experimentalist, I can say these results are very exciting and would emphasize that the data are coming from real devices in the lab, not just theories. In the Nature paper, we present data from two different devices to demonstrate that the results are reproducible, and we performed additional measurements and modeling described in the supplemental material to rule out potential false positive scenarios.
In my opinion, the citations above do not represent a balanced view of the Majorana field status but are rather negative. We published two experimental papers recently that went through a rigorous peer review process. Additionally, we have engaged with the DARPA team to validate our results, and we actually have them measuring our devices in our Redmond lab.
Finally, we have exciting new results that we just shared with many experts in the field at the Station Q conference in Santa Barbara. These new experiments further probe our qubits and give us additional confidence that we are indeed operating topological qubits. We will share more broadly at the upcoming APS March meeting. For more information, please see the following post by my colleague Roman Lutchyn: https://www.linkedin.com/posts/roman-lutchyn-bb9a382_interfe...
> I’m an experimentalist at Microsoft Quantum who was involved in the work presented in our recent Nature publication.
It is very cool to hear from you!
> In my opinion, the citations above do not represent a balanced view of the Majorana field status but are rather negative.
That's true, but the goal of the citations was to demonstrate that there are some negative opinions too. Maybe together with the positive OP they form a balanced view.
I understand that it can be very unpleasant to have people like Frolov or Legg trying to prove you're wrong, but I think it shouldn't be personal (from either side). Trying to find alternative explanations is part of science. And Frolov did turn out to be correct in the past: we did think we had found Majoranas when in fact we hadn't, and that part of the story can't just be ignored. Citing Feynman: "The first principle is that you must not fool yourself and you are the easiest person to fool." While it's tempting to dismiss the critics as a broken record, I think it would both increase the credibility of the studies and improve the science if their criticism were taken at face value. Answering specific points publicly would also create a more balanced picture. I'm not aware of responses to the cited opinions that I could cite to "balance out."
> We published two experimental papers recently that went through a rigorous peer review process.
Peer review is important, but it is not an answer to specific claims, e.g., that the TGP accuracy is overestimated, or (if we take Henry's word for it) the promised errata that never came out.
> Finally, we have exciting new results that we just shared with many experts in the field at the Station Q conference in Santa Barbara.
I've read about it on Das Sarma's Twitter [1]. It does indeed sound exciting. If you're able to manipulate, store, and read out quantum data from a qubit, then I think people will have an easier time agreeing that you have one. There is of course the question of non-Clifford gates, but that's a separate problem.
> We will share more broadly at the upcoming APS March meeting.
I look forward to hearing about it. If you (or someone from your team) are interested, I'd love to meet and chat at MM. My contacts are in bio.
Edit: I've also now seen Chetan Nayak's comment on Scott Aaronson's blog with some details [2].
[1] https://x.com/condensed_the/status/1892595693002293279
[2] https://scottaaronson.blog/?p=8669#comment-2003328
I know nothing about quantum computing, but constructing a physical system that resembles error correcting codes sounds absolutely fascinating.
Are these actually even useful yet? Genuine question. I never managed to solicit an answer, only long explanations which seemed to have an answer of yes and no at the same time depending on who you observe.
No.
The long explanations boil down to this: quantum computers (so far) are better (given a million qubits) than classical computers at (problems that are in disguise) simulating quantum computers.
Microsoft Research's entire point is that their approach will allow this: Roadmap to fault tolerant quantum computation using topological qubit arrays, https://arxiv.org/abs/2502.12252
Usually when people try to explain something about quantum computers, it feels like someone is trying to teach me what a monad is from the infamous example in some old haskell docs.
I'm not proud of my ignorance, and I sure hope that eventually if I get it, it'd be very useful for me. At least it worked like that for monads.
https://www.youtube.com/watch?v=G8yHOrloxRA Bela Bauer (MS Research) - Fault Tolerant Quantum Computation using Majorana-Based Topological Qubits
(note, I have no idea how the braiding happens, or what it means, or ... the rest of the fucking owl, but ... the part about the local indistinguishability is an important part of the puzzle, and why it helps against noise ... also have no idea what's the G-factor, but ... also have no idea what the s-wave/p-wave superconductors are, but ... https://www.reddit.com/r/AskPhysics/comments/11opcy1/comment... ... also ... phew )
Monads are a bad way to describe a fundamentally simple thing.
Quantum computing is genuinely hard. The hardware is an extremely specialized discipline. The software is at best a very unfamiliar kind of mathematics, and has basically nothing to do with programming. At best, it may one day be a black box that you can use to solve certain conventional programming problems quickly.
Based on what I read, it seems a lot of algorithmic work is required to even make them useful. New algorithms have to be discovered, and even then they will only solve a special class of problems. They can't do classical computing, so your NVIDIA GPU may never be replaced by a quantum GPU.
I wouldn't worry too much about finding new algorithms. The sheer power of QC parallelism will attract enough talent to convert any useful classical algorithm to QC.
It's a bit similar to the invention of the fast Fourier transform (it was reinvented several times...): O(n log n) is so much better than O(n^2) that many problems in science and technology use the FFT somewhere in their pipeline, just because it's so powerful, even when unrelated to signal processing. For example, multiplication of very large numbers uses the FFT (?!).
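As a hedged illustration of that last point, here is a toy multiplication of large integers via FFT-based convolution of their decimal digits (numpy assumed available). Real big-integer libraries use exact number-theoretic transforms and many refinements; the floating-point rounding here limits the toy to moderately sized inputs.

```python
import numpy as np

def multiply_via_fft(a: int, b: int) -> int:
    # Digits of each number, least significant first.
    da = [int(c) for c in str(a)][::-1]
    db = [int(c) for c in str(b)][::-1]
    n = 1
    while n < len(da) + len(db):
        n *= 2
    # Pointwise product in frequency space == convolution of the digit sequences.
    conv = np.fft.irfft(np.fft.rfft(da, n) * np.fft.rfft(db, n), n)
    digits = [int(round(x)) for x in conv]
    # Propagate carries to get a valid base-10 result.
    out, carry = [], 0
    for d in digits:
        carry, r = divmod(d + carry, 10)
        out.append(r)
    while carry:
        carry, r = divmod(carry, 10)
        out.append(r)
    while len(out) > 1 and out[-1] == 0:
        out.pop()
    return int("".join(map(str, reversed(out))))

x, y = 123456789123456789, 987654321987654321
print(multiply_via_fft(x, y) == x * y)  # True
```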
Quantum computing is a generalization of classical computing. Thus, quantum computers CAN do classical computing. But, in practice, it will not be as fast, will be more error-prone, and will come at a bigger cost.
> Quantum computing is a generalization of classical computing
Can you explain more or share some resources?
Any basic operation you can do with reversible computing on bits can be done with qubits.
Any basic operation you can do with normal computing on bits can be done with reversible computing on bits, provided that you have enough ancillary bits to store the information that would normally be deleted in the irreversible operation.
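A tiny classical sketch of those two statements together (nothing quantum here): the Toffoli (controlled-controlled-NOT) gate is reversible, and with one ancillary bit initialized to 0 it computes AND, an operation that is irreversible on its own.

```python
def toffoli(a: int, b: int, c: int) -> tuple[int, int, int]:
    # Reversible on three bits: flips the target c only when both controls are 1.
    return a, b, c ^ (a & b)

# With the ancilla c = 0, the third output bit is AND(a, b).
for a in (0, 1):
    for b in (0, 1):
        assert toffoli(a, b, 0)[2] == (a & b)

# Reversibility: applying the same gate twice restores the original bits.
assert toffoli(*toffoli(1, 1, 0)) == (1, 1, 0)
print("Toffoli reproduces AND and is its own inverse")
```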
Maybe we'll end up with a state where typical computers have a CPU, a GPU, and a QPU to solve different problems.
The issue isn't really impurities and noise; quantum error correction solves that problem. The issue is that the supporting technologies don't scale well. Superconducting qubit computers like Google's have a bunch of fancy wires coming out of the top, basically one for each qubit. You can't have a million wires that size, or even a smaller size, so the RF circuitry that sends signals down those wires needs to be miniaturized and designed to operate at near 0 K so it can live inside the dilution refrigerator, which is not easy.
Microsoft's technology is pretty far behind as far as capacity goes, but the scaling limitations are less significant and the error-correction overhead is either eliminated or smaller.
There're some crazy wires at the end of the official video
https://youtu.be/wSHmygPQukQ?t=723
Is this the scaling problem you are describing?
I vaguely remember reading an article about solving the correlation between quantum decoherence and the scaling of qubit numbers. I don't understand quantum computers, so take it with a grain of salt.
But here's what Perplexity says: “Exponential Error Reduction: Willow demonstrates a scalable quantum error correction method, achieving an exponential reduction in error rates as the number of qubits increases. This is crucial because qubits are prone to errors due to their sensitivity to environmental factors.”
> also last time I checked the record was 80 qubits and with every doubling of the qubits the complexity of the system and the impurities and the noise are increasing, so it's even questionable whether there will ever be useful quantum computers
It has progressed since: IBM Condor (demonstrated in December 2023) has 1121 qubits.
which is totally out of touch with the reality of making use of the extra qubits they just slapped on the chip to get a high number
Hopefully not. Besides quantum physics simulations, the only problems they solve are ones that should remain unsolved if we're to trust the integrity of existing systems.
As soon as the first practical quantum computer is made available, so much recorded TLS encrypted data is gonna get turned into plain text, probably destroying millions of people's lives. I hope everyone working in quantum research is aware of what their work is leading towards, they're not much better than arms manufacturers working on the next nuke.
This got me wondering how much of Tor, i2p, etc the NSA has archived. Or privacy coins like XMR.
I'm also curious. If you don't capture the key exchange but only a piece of ciphertext, is there a lower limit to the sample size required to attack the key? It feels like there must be.
Afaik AES is not vulnerable to any kind of statistical methods nor quantum decryption, so you'd have to capture the key exchange. Not that it makes the situation all that much better.
Just like fusion energy it is pointless and you are not allowed to have excitement about it because some anonymous stranger on HN said so.
https://news.ycombinator.com/item?id=43093939#43094339
You're certainly allowed to get excited about it as long as you're patient and don't wildly overinflate the realistic timeline to net energy production. Similarly, nobody will stop you from hyping up quantum computation as long as you're not bullshitting usecases or lying about qubit scaling.
In the wake of cryptocurrency and AI failing to live up to their outrageous levels of hype, many people on this site worry that the "feel the AGI" crowd might accidentally start feeling some other, seemingly-profitable vaporware to overhype and pump.
They can fundamentally break most asymmetric encryption, which is a good thing iff you want to do things that require forging signatures. Things like jailbreaks Apple can't patch, decryption tools that can break all E2E encryption, being able to easily steal your neighbor's Facebook login at the coffee shop...
Come to think of it, maybe we shouldn't invent quantum computers[0].
[0] Yes, even with the upside of permanently jailbreakable iPhones.
No, you can't. The largest number factored using Shor's algorithm is 21. No other algorithm scales to crypto levels.
You can't use Shor's algorithm with current quantum computers.
But if we were to get bigger and better quantum computers, we could use Shor's algorithm. And that would, in fact, break the crypto behind HTTPS, SSH, smart cards, and effectively all other forms of asymmetric crypto that are in use.
There is a question of how likely bigger and better quantum computers are. A decent case can be made that it is unlikely they will grow fast. But it is going too far to say that Shor's algorithm is useless because current quantum computers aren't good enough. You can't dismiss the possibility of quantum computer growth out of hand.
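For what it's worth, the number theory behind Shor is easy to demo classically on toy numbers; the quantum computer's only job is the period-finding step, which this sketch does by brute force (that step is exactly what doesn't scale classically):

```python
from math import gcd

def order(a: int, n: int) -> int:
    # Brute-force period finding: smallest r > 0 with a^r = 1 (mod n).
    # In Shor's algorithm this is the step done on the quantum computer.
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def factor_via_order(n: int, a: int) -> int:
    r = order(a, n)
    if r % 2:
        raise ValueError("odd period; pick a different a")
    f = gcd(pow(a, r // 2, n) - 1, n)
    return f if f not in (1, n) else gcd(pow(a, r // 2, n) + 1, n)

print(factor_via_order(15, 7))  # 3, a nontrivial factor of 15
```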
I think I heard that 77 was factored as well
yes, they are useful... as marketing materials. Other than that, not at all.
Sounds exactly like a quantum state itself!
It is frustrating to try to unpick the hype and filter the “will never work” from the “eureka!”
you have to open the box to see if the quantum computer is alive or not.
No
Useful exclusively for generating random numbers, just like every other "quantum computer" (at least the ones publicly announced).
Each "quantum" announcement will make it sound like they have accomplished massive scientific leaps but in reality absolutely no "quantum computer" today can do anything other than generating random numbers (but they are forced to make those announcements to justify their continued funding).
I usually get downvoted when making this statement (of fact), but please know that I don't hate these researchers or their work, and I generally hope their developments turn into a real thing at some point (just like I hope fusion eventually turns into a real / net-positive thing).
Only if you use them in conjunction with an HTML5 supercomputer. (Sorry, I couldn't resist with Nikola in the news again)
Nature paper: Interferometric single-shot parity measurement in InAs–Al hybrid devices https://www.nature.com/articles/s41586-024-08445-2
Whoever decided to make up the non-existent term "topoconductor" for the purposes of this article deserves to feel shame and embarrassment (I say this as a condensed matter physicist).
I skimmed through the paper, but nowhere did I find a demonstration of a Majorana qubit or a zero mode. The achievement is that they demonstrated a single-shot measurement. That's nice, but where's the qubit? What did I miss?
If you read the referee reports of the Nature paper (they are published alongside it) you'll see some referees echoing similar points.
Reading the press release that accompanies a scientific paper is a negative-information activity. Anything meaningful that can actually be supported by the science is already in the paper, and if you can understand the paper, it will be self-evident how you should feel about it.
Any sentence in the press release that isn't VERBATIM in the paper should be viewed as marketing, and unsupported by the science, and there is zero incentive NOT to lie in the PR, especially since the ones writing it are rarely even knowledgeable in the subject matter.
Come for the made-up jargon, stay for the horrific PR abuse of the English language like "Unlocking quantum’s promise"
Do you think this is partly every company now trying to get in on the grift? Just pumping the stock with "we're going to Mars, we'll have AGI, cold fusion is almost here" kind of stuff?
Genuinely curious: in what ways is that not a good term? Is it because it's not a new thing, just marketing? Or is it being conflated with some other concept in physics?
The ideas that underpin their device have been around for some time and aren't called by that name in the literature -- it appears to be entirely a branding exercise. A clear signal to me that they don't seriously think it is a good name is that they don't use it outside this article (it appears nowhere in their Nature paper, or anywhere else for that matter).
So what is it called, then?
It's a topological superconductor.
But that's not a good branding name. Would you argue that a mini fridge should only be marketed as a miniature refrigerator? It's a mouthful. Why does the name that's used for branding need to show up in Nature?
Another condmat physicist wondering how this work got so hyped up. Single shot parity measurements are not new.
From a casual observer, it seemed like Microsoft's Majorana approach had hit a wall a few years back when there were retractions by the lead researchers. I wonder what's changed?
https://cacm.acm.org/news/majorana-meltdown-jeopardizes-micr...
> I wonder what's changed?
Maybe I'm too cynical, but I suspect pressure from leadership to package whatever they had in vague language and ambiguous terms, to create marketing copy that makes it appear the team is doing amazing work, even though in two years we'll still be roughly where we are today with respect to quantum computing.
Reading through the announcement, I see lots of interesting-sounding ideas and claims that don't matter, like "designed to scale to a million qubits on a single chip" (why does that matter if we're still far, far away from more than a few thousand qubits?), and zero statements about actual capabilities that are novel or groundbreaking.
That chip die sure looks cool though!
It's worse than that -- this announcement is about one qubit, so even a few thousand is not necessarily close at hand for this platform (let alone millions).
I believe the chip is 8 qubits not just one.
The ArsTechnica article discusses that.
> In fact, there was some controversy over the first attempts to do so, with an early paper having been retracted after a reanalysis of its data showed that the evidence was weaker than had initially been presented. A key focus of the new Nature paper is providing more evidence that Majorana zero modes really exist in this system.
https://arstechnica.com/science/2025/02/microsoft-builds-its...
Thanks for sharing, the ArsTechnica article does a good job of bridging the info from the press release and the research in the Nature paper while addressing the setbacks in the field.
How the H devices (which they call tetrons) form a qubit is explained more thoroughly in their ArXiv article: https://arxiv.org/abs/2502.12252
Discussion on other official post: https://news.ycombinator.com/item?id=43103623
> Microsoft’s topological qubit architecture has aluminum nanowires joined together to form an H. Each H has four controllable Majoranas and makes one qubit. These Hs can be connected, too, and laid out across the chip like so many tiles.
So they are not all in a superposition with each other? They talk about a million of these nanowires but that looks a bit like quantum dots?
Nature article: https://archive.ph/SM8NQ
What do they mean by
>can create an entirely new state of matter – not a solid, liquid or gas but a topological state
Matter refers to particles, or collections of particles, that have mass and volume. These particles can be arranged or behave in different ways, and that is roughly what a "state of matter" is. You know how in a solid all the atoms are fixed, while in a gas the atoms/molecules are flying about.
There are in fact other forms of matter. In a plasma you have ions (instead of atoms/molecules) zipping about. In a neutron star, you have pretty much only neutrons collapsed into a packed ball.
You can also make systems at higher levels of abstraction that have some of this matter- or particle-like behavior. A simple example is "phonons", small packets of vibration (of atoms) that travel inside a solid much like a photon travels through space. I think phonons don't have a "mass", so they are not matter.
Here, they construct a quantum system, some of whose degrees of freedom behave like a matter particle. Qubits are then made from the states of this particle.
People working in solid state are weird. [1]
For example, in some cases they have a "gas" of electrons. It's not a normal gas that you can put in a balloon; it can only live inside a solid. If you ignore the atoms in the solid, in some cases the electrons are free enough that you can think of them as a gas. That is similar enough to a normal gas, so they just call it a gas.
Sometimes the interesting part is a surface between two semiconductors, so they may have a 2D gas. (I'm not sure if this experiment is in 2D or 3D.)
Sometimes the electrons make weird patterns that are very stable and move around without deformation, and they will call that a quasiparticle, ignore that it's formed by electrons, and directly think of it as a single entity, then analyze how these quasiparticles appear, disappear, and collide with other particles. It's like working at a higher level of abstraction to make the calculations easier. [2]
In particular, if you arrange the electrons very cleverly, they create a quasiparticle that is its own anti-quasiparticle: a Majorana quasiparticle.
This is somewhat related to topological properties of the distribution of the properties of the electrons, where "topological" means stable under smooth deformations, which helps make it stable under thermal noise and other ugly interference. But this is getting far from my area, so my handwaving is not very reliable.
[1] They probably think my area is weird, so we are even :) .
[2] Sometimes the high level abstraction is not an approximation.
Weird. Any good suggestions for further reading on this stuff or is it mostly still academic literature level?
See https://en.wikipedia.org/wiki/Topological_order.
And, as always(?), 3Blue1Brown also contributes: https://www.youtube.com/watch?v=IQqtsm-bBRU "This open problem taught me what topology is". I'll warn that while it is illustrative, and only 28 minutes, it's some heady stuff.
For reference topological phases of matter have been observed in other contexts since the mid-1980s. So "entirely new" here is misleading.
We are approaching a full "post truth" society, nothing will be sacred.
Through geometry and configuration of materials, you can create quantum effects on a macroscopic scale.
A few things to keep in mind, given how hard of a media push this is being given (which should immediately set off alarm bells in your head that this might be bullshit):
- Topological phases of matter (similar, but not identical to the one discussed here) have been known for decades and were first observed experimentally in the 1980s.
- Creating Majorana quasiparticles has a long history of false starts and retracted claims (discovery of Majoranas in related systems was announced in 2012 and 2018 and both were since retracted).
- The quoted Nature paper is about measurements on one qubit. One. Not 100, not 1000, a single qubit.
- Unless they think they can scale this up really quickly, it seems like it's a very long (or perhaps non-existent) road to 10^6 qubits.
- If they could scale it up so quickly, it would have been way more convincing to wait a bit (0-2 years) and show a 100 or 1000 qubit machine that would be comparable to efforts from Google, IBM, etc (which have their own problems).
The claim/hope is that topological qubits are fault tolerant, or at least suffer much lower error rates (very roughly, you can think of a topological qubit as an error-correcting code built out of the atoms themselves, i.e. on the scale of Avogadro's number). If, for example, they could build a single qubit with even 10^-6 error rates, that would in fact put them __ahead__ of all other attempts on the path to fault tolerance (but no NISQ).
It is, unfortunately, unclear how good topological qubits are in practice.
I understand the claim and what they are trying to do (and they've been trying to do it for 20 years now). It's an interesting approach and it is orthogonal enough from other efforts that it is absolutely worthwhile to pursue scientifically (I'm in an adjacent field in condensed matter physics).
But they are doing a full-court press in the media (professionally produced talking-head videos, NYT articles, other media, etc.) claiming all of those things you've just said are right around the corner. And that's going to confuse and mislead the public. So there needs to be pushback on what I think is clear bullshit/spin by a company trying to sell itself using this development.
That's fair, and I don't like their marketing either (nor others', and look how the reviewers pushed back on misleading claims in the paper). I was talking specifically about Majoranas being behind superconducting qubits/trapped ions/cold atoms. If they manage to make one- and two-qubit gates with good error rates in a couple of years (and that's a big if), they'll be approximately where, for example, Google is expected to be. And the question of what will be easier to scale is very unclear and will eventually decide who wins.
> Majoranas hide quantum information, making it more robust, but also harder to measure. The Microsoft team’s new measurement approach is so precise it can detect the difference between one billion and one billion and one electrons in a superconducting wire – which tells the computer what state the qubit is in and forms the basis for quantum computation.
Being able to detect a single electron among billions sounds more like a good way to get entropy than something that can help with quantum measurements. At least that's my initial intuition, being completely ignorant of quantum computing.
Do you have other things you want to provide that your self-proclaimed ignorance would be helpful with?
Lay people are welcome to read about the latest developments in science. They're also welcome to try to intuit theories related to those latest developments. It's a good way to flex your thinking skills. Experts are then welcome to weigh in on those intuitions and steer them along the right path. Even if you're completely wrong, expressing how you think about things is also helpful to others in case they also have similar intuitions.
Your comment could get an award for most toxic HN comment ever and that's saying something.
I mean, I'll admit the original post ironically did measurably increase the entropy of the thread.
I know this is a day later and I don't expect it to be read, but I wanted to come back to this because it ultimately was an unusual experience and I spent some time reflecting on it. It was unusual because both the clear rejection of the original post now in light grey, but also for the significant support via upvotes for my response. I'll admit it was unambiguously into the realm of the unkind (I encourage you not to ignore this admission), but I wanted to pick apart why that response was solicited for social reasons (leaving technical reasons aside) and why your response is unwarranted and harmful.
I don't have a fundamental problem with lay people commenting or discussing technical topics and this venue is somewhat built around that. I do have a problem when they go beyond reasonable limits. Those limits include unreasonable, unresearched, poorly thought out, and nonsensical statements. The social problem is that the statement was made in an authoritative way that was derogatory to the fundamental premise of the topic (trying to <perform action> sounds like a good way to <perform something unrelated>). This type of statement can be correct, however it requires a significant amount of weight behind it to be credible given the context and history of this topic. The original poster then follows with an admission of ignorance about what they just posited. Even the ordering of these (rejection then admission) makes it more offensive to a reader who is here for anything more meaningful. This was not a statement that was made for inquisition, clearly not a statement that showed research or thought, it borders into a blind repetition of words that might be used in this context, and shows no toneful humility given their position and context. I don't believe this type of commenting should be encouraged.
The positive reinforcement I received, though, comes from an opposing school of thought you can think of as "master has hit me with a stick and I must meditate." Students typically don't like this as it is uncomfortable at best and can easily be carried into abuse. I reject the notion that this means it should never be applied and in a measured and thoughtful way it can be constructive. It unambiguously carried a lesson to be learned and the nature of that lesson makes it memorable. Without endorsement, one can say the world is a harsh place and learning to accept social rejection, humiliation, and mockery as a consequence of failure to put thought into public statements is an important and useful lesson. This place is pretty low stakes to learn such a lesson given the anonymity.
I will argue that your defense of this and active encouragement of low-value comments is what is actually harmful to the community. Blanket labeling of any behavior you find disagreeable as "toxic" (hyperbolically so; if you truly think that is the most toxic comment ever on here, you have not spent any real amount of time in this venue) reflects a common misunderstanding that all negative behavior is bad. Conflict, while uncomfortable, can have positive outcomes. In the case of poor comments, a social way (contrasted with "leaving it to the mods") to deal with this is exactly what happened. And ultimately it can improve the community through clear discouragement of unwanted behavior carrying the weight of emotional rejection.
If you still disagree with this, consider that the community showed me support and I, while this absolutely could have been done differently (again, I encourage you not to ignore that statement), think that's generally in the spirit of this forum: to discourage inane comments. The success of this community is somewhat based on that, and it has also, infamously, earned the community its reputation. "Toxic", if you think about the actual meaning of the word, poisoning or working against viability, would hurt the long-term nature of this place. I believe in this case it does the opposite: it raises the average, which is what this place is built on (though usually more in the form of downvotes). I'd say your blind championing here is a case of "toxic positivity", and you can choose to do what you like with that. I would say if you do find this place so "toxic", I'd encourage you not to read the comments here, let alone respond to them, and perhaps go find someplace more agreeable to your particular sensibilities.
[dead]
It doesn't detect a single electron, it detects parity of the number of electrons.
Can someone check my understanding: does this mean they have eight logical qubits on the chip? It appears that way from the graphic where it zooms into each logical qubit, although it only shows two there.
If that is true, it sounds like having a plan to scale to millions of logical qubits on a chip is even more impressive.
They have never demonstrated even a single physical qubit.
Microsoft has claimed for a while to have observed some signatures of quantized Majorana conductance, which might potentially allow building a qubit in the future. However, other researchers in the field have strongly criticized their analysis, and the field is full of retracted papers and allegations of scientific misconduct.
this is from 2 days ago:
Roadmap to fault tolerant quantum computation using topological qubit arrays https://arxiv.org/abs/2502.12252
It is amazing what passes for an academic paper these days.
via the Quantum Bullshit Detector account:
https://bsky.app/profile/spinespresso.bsky.social/post/3lijd...
They have no qubits at all, "logical" or not, yet. They plan to make millions. It is substantially easier to release a plan for millions of qubits than it is to make even one.
NYT feature: https://www.nytimes.com/2025/02/19/technology/microsoft-quan...
I want a children’s picture book of majorana topologies, they are so beautiful! https://ars.els-cdn.com/content/image/1-s2.0-S25902385220017...
https://qdev.nbi.ku.dk/research/topological_quantum_systems/...
Scott Aaronson's FAQ on Microsoft’s topological qubit thing
https://news.ycombinator.com/item?id=43112021
So when does this break crypto and Bitcoin and how to best prepare for this?
Is there any way to secure against it at all?
If you can break bitcoin, you can also break all the encryption we use for banking and other transactions, so might as well stock up on SPAM and ammo.
It's a non-issue, since banks can always switch to face-to-face verification. This is not an argument. Crypto is the problem here.
Face verification is, quite honestly, a far easier thing to beat than encryption, my friend.
By "face to face" I mean going to bank on your own two legs physically and talking with the teller.
Centralized businesses will not suffer much and will switch. Blockchains will get destroyed due to their immutable nature.
Great question. Are people predicting we will have enough computing power to "break" Bitcoin? How much is needed?
Some portions of bitcoin are immune. Most cryptographic hashes are believed to be quantum resilient, for example.
You just need enough to poison the blockchain
So how to prepare, when?
Cryptocoins like Bitcoin only need a DSA (digital signature algorithm). Quantum-proof DSAs have been widely known since the 1970s*. Bitcoin only needs to change its DSA, and then everything will be fine.
* see Lamport signatures
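(For concreteness, a minimal toy sketch of a Lamport one-time signature, the hash-based construction referenced above; illustration only, not production code.)

    # Toy Lamport one-time signature over SHA-256: security rests only on
    # the hash function, with no number theory for Shor's algorithm to attack.
    # Each key pair must sign at most one message.
    import hashlib, secrets

    H = lambda b: hashlib.sha256(b).digest()

    def keygen():
        sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(256)]
        pk = [(H(a), H(b)) for a, b in sk]
        return sk, pk

    def msg_bits(msg):
        d = H(msg)
        return [(d[i // 8] >> (i % 8)) & 1 for i in range(256)]

    def sign(msg, sk):
        # Reveal one of the two secret preimages per message-digest bit.
        return [sk[i][b] for i, b in enumerate(msg_bits(msg))]

    def verify(msg, sig, pk):
        return all(H(sig[i]) == pk[i][b] for i, b in enumerate(msg_bits(msg)))

    sk, pk = keygen()
    sig = sign(b"send 1 BTC to Alice", sk)
    print(verify(b"send 1 BTC to Alice", sig, pk))   # True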
Can someone fact check this?
They can break 1-bit-coin. How's that for ya?
I need HN's classic pessimism to know if this is something to be excited about. Please chime in!
I work in the field. While all players are selling a dream right now, this announcement is even more farcical. The Majorana efforts are still trying to get to the point of having even one qubit that can be said to exist and whose performance can be quantified.
The Majorana approach (compared with more mature technologies like superconducting circuits or trapped ions) is a long game, where there are theoretical reasons to be optimistic but where experimental reality is still far behind. It might work in the long run, but we're not there yet.
Given that Microsoft has been a heavy research collaborator (with Atom Computing and Quantinuum), is there a possibility that the cross-pollination would make it harder to deliver a farcical Majorana chip, since Microsoft isn't all-in on its home-rolled hardware choice?
I've held the same view that this stuff was sketchy because of the previous mistakes in recent history, but I do not work in the field.
So you are saying it's official fake news from Redmond?
>> I need HN's classic pessimism to know if this is something to be excited about. Please chime in!
> While all players are selling a dream right now, this announcement is even more farcical.
Thanks a lot, I didn't get disappointed.
Another take, to feed your cynicism: MSFT need money to keep investing in this sort of science. By posting announcements like this they hope to become the obvious place for investors interested in quantum to park their money. Stock price goes brrr, MSFT wins.
More cynical still: what exactly has the Strategic Missions and Technologies unit achieved in the last few years? Burned a few billion on Azure for Operators, and sold it off. Got entangled and ultimately lost the JEDI mega deal at the DoD. Was notably not the unit that developed or brought in AI to Microsoft. Doing anything in quantum is good news for whoever leads this division, and they need it.
On the bright side, this is still fundamentally something to be celebrated. Years ago, major corporations did basic science research, and we are all better off for it. With the uncertainty around the future of science funding in the US right now, I at least draw some comfort from the fact that it's still happening. My jadedness about press releases in no way diminishes my respect for the science that the lab people are publishing.
> MSFT need money to keep investing in this sort of science
Microsoft is making absurd amounts of money from Azure and Office (Microsoft 365) subscriptions. Any quantum computing investment is a drop in the bucket for this company.
Does MSFT sell new stock? If not, how does the stock price going up affect their ability to invest?
Even if Microsoft doesn't sell the stock it controls, its existing assets become more valuable when the stock price goes up. There are many ways one could spend those resources if needed: sell it off, borrow against the assets, trade the stock for stock in other companies.
However, since Microsoft has plenty of cash flow already, they can probably afford to just sit on the investment.
That's all you needed?
> "Every single atom in this chip is placed purposefully. It is constructed from ground up. It is entirely a new state of matter. Think of us as building the picture by painting it atom by atom."
https://youtu.be/wSHmygPQukQ (~7:55)
I don't know if marketing BS could get more hyperbolic than this.
Right off the bat, "can scale to a million qubits" tells you it's BS, since it only says what could be possible and makes zero claims about what it currently does.
I mean my basement can scale to holding thousands of bars of solid gold, but currently houses... 0.
Yeah, nobody can claim scaling without the error rates to prove it. SPAM (state preparation and measurement) error + coherence time + gate fidelity are the limiting factors for scaling, not the concept of an idea of how to build it at scale :-)
AI hype is running out of steam, the stock needs quantum power. MSFT is only up 1.99% for the past 1Y.
Hear me out, investors: Quantum Intelligence.
You're late to the game: https://quantumai.google/
Whenever I read about a scientific breakthrough I login to HN to see what the smart people think about it, and am disappointed if there isn't a post with hundreds of comments.
This isn’t a forum of smart people. It’s a forum of asocial tech workers who write in authoritative prose but are just normal people at home, staring blankly at the blue glow of a mental bug zapper.
Quantum is just the next form of sampling the electromagnetic field. It’ll provide mesmerizing computational properties, but it won't rewrite human DNA or beam our consciousness to another galaxy; it’ll fill up RAM and disk really fast with impenetrable amounts of data that will take decades to analyze and to build real experiments across contexts to verify. Tomorrow will still come and be a lot like yesterday for us.
All in all it’s more of the same
Even if we do beam our minds, it’s just a copy. These meat suits are still gonna stop experiencing someday. Life for us isn’t going anywhere.
Now THIS is the sort of nihilistic outlook that keeps me coming back for a hit of HN.
Not saying wild things aren’t possible; designer drug glands grafted on and such would be banger and would alter human lived experience.
Another box measuring oscillations of fundamental forces will not.
Religious fear of “corrupting human nature” keeps smart people scaffolding symbolic logic in machines versus experimenting with weird science. Live, eat, mate, help line go up relative to some musty people’s political ledger, and die is all we’re allowed!
I want drug glands, regenerative tissue, and mini kaiju monstrosities grown in labs… as pets!
What’s actually up in 2025?…
“Behold! Nintendo Switch 2!”
It's like some sort of Cunningham's Law Inception.
> It’ll provide mesmerizing computational properties
Maybe, one day, or never.
In the mean time, it will generate a lot of hot and humid hype.
> people at home staring blankly at a blue glow of a mental bug zapper
Nonsense. Many of us have installed that glitchy software that makes our screens orange sometimes.
You had me up to "normal people"
> but not rewrite human DNA
smugly but writing DNA is a quantum process
Just look at all the hard numbers they provided after you strip away the hype talk.
I wouldn't trust HN one bit (or one qubit) to comment usefully on this question, but presumably hundreds of people are already bugging Scott Aaronson to blog about it. He'll probably have a post in the next couple days saying whether we have permission to be excited.
[flagged]
I just want to acknowledge the general lucidity of this community; also, finding out I am not insane is a bit of a bonus. Love this community, please don't change.
Any word if it works without a Microsoft account?
Can we please have a discussion based solely on the content of the article? I know we all hate Microsoft's business practices, but let's try to limit our tendency to turn the discussion into a typical Reddit-esque thread.
How many qubits on this? One?
Best I can tell, the chip has 8.
Short Term - This might be hype. Sure. Getting some Buzz.
Long Term - MS seems pretty committed and serious. Putting in the time/money for a long term vision. Maybe a decade from now, we'll be bowing down to an all powerful MS God/Oracle/AI.
I'd believe a word they say if it can factor 33.
Sounds exciting, even though I'm skeptical how far off into the future that supposed 1 million qubit chip is.
Why isn't crypto crashing?
Because I decrypted the whole blockchain using my free Azure subscription and started pumping it.
This is why I'm more excited about Microsoft than Apple.
Agree. Why would I buy Apple M4 macbook when I can have Majorana 1 running Windows 11 and install MS Teams to be extra productive.
Well it is not available yet, but Microsoft has industrial strength productivity tools (as opposed to Apple's "consumer" electronics), and a stellar research department.
They rewrote Teams and it's good now.
Beyond the marketing value of these types of announcements, how much time until consumer grade quantum cloud computing? Years, decades?
My hunch is somewhere between “decades” and “never”.
RSA in trouble when?
Please someone give input on this. It's extremely important and worrying.
If this is genuinely worrying to you, take some solace in that post-quantum alternatives are undergoing standardization and implementation right now (Signal and iMessage, for example, have already deployed some PQC, as have others).
However, this announcement is a nothing-burger. As I mentioned down-thread, you should view any QC announcement/press-release with extreme skepticism unless it includes replicable (read: open-source targeting hardware other researchers can test on) benchmarks for progress on real-world use-cases (e.g., Shor, Grover, or a newly-identified actually-interesting use-case). OP does not. Nothing to see here.
Worth saying, I am not a cryptographer—I do cryptography-adjacent research engineering. However, given the level of hype going around this industry, I think it's fair to at least expect to see the spec-sheet as it were.
All the best,
Thank you for taking the time to respond. I personally lend at least some degree of credence to their claim, given that this is Microsoft we're talking about and not some startup.
If their claim is true, then would that present an issue to RSA encryption? I find it difficult to find information on this topic that is digestible to a layman.
My understanding is that the benefit of quantum computing is parallelism, and I'm not sure how today's encryption standards would be safe from brute force attacks.
No. If their claim is true, they have a new prototype of a single qubit that they say could enable faster scaling up of qubit arrays (which means asymmetric/public-key cryptosystems like RSA will be in trouble sooner than we thought they might be). However, this work does not demonstrate that scaling potential at all. In the spirit of Betteridge's Law of headlines, if such a thing were easy for them to demonstrate, why would they announce this now, with a single logical qubit, rather than when they've demonstrated at least some scaling potential?
This understanding of QC is common, but isn't quite right. Quantum computation is actually really hard to parallelize (which is why Grover, though a bit frightening since it halves the security of symmetric primitives, is actually kind of damning for QC—because you can't parallelize that search really at all, so halving is the best a quantum adversary can get against things like AES-256).
I stand by my assertion that, until a QC announcement includes replicable benchmarks on actual use-cases, such things can be safely dismissed.
If you continue to be concerned (not necessarily unhealthy), engage cryptographers and security engineers to help your projects build know-how on hybrid (in this case, classical/PQ) cryptosystems, and get them deployed sooner rather than later.
All the best,
Would it be smartest for one to sell crypto right now while normies are still oblivious of what's about to happen?
No. Crypto will be safe against quantum computers.
If by “crypto,” the grandparent meant “cryptography,” this is not true. Most widely deployed asymmetric/public-key primitives (e.g., RSA, elliptic curve cryptography (ECC), etc.) are quite fragile against an adversary with a cryptographically relevant quantum computer (CRQC). To clarify how fragile: the general consensus/state of the art, as far as I am aware, is that Shor's algorithm (which breaks asymmetric primitives) requires about 2x as many perfect, logical qubits as the RSA key size (e.g., ~4000 logical qubits to factor RSA-2048); however, because none of our qubit designs have a low enough error rate, you need roughly 1000 physical qubits to error-correct a single logical qubit, so currently it's expected you would need around 4,000,000 physical qubits to factor RSA-2048. Post-quantum cryptography (PQC) is specifically the subset of cryptography designed to withstand attacks from quantum-enabled adversaries; it is still being actively designed, studied, standardized, implemented, and deployed.
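(To put rough numbers on that, using only the two multipliers stated above, ~2 logical qubits per RSA key bit and ~1000 physical qubits per logical qubit; both are ballpark figures that vary a lot across published resource estimates.)

    # Back-of-the-envelope only; not a real resource estimate.
    def qubits_for_rsa(key_bits, logical_per_key_bit=2, physical_per_logical=1000):
        logical = logical_per_key_bit * key_bits
        return logical, logical * physical_per_logical

    logical, physical = qubits_for_rsa(2048)
    print(logical)    # -> 4096 (logical qubits)
    print(physical)   # -> 4096000 (~4 million physical qubits)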
If instead, the reference was to “cryptocurrencies,” most cryptocurrencies I am aware of depend on non-PQ constructions, and fall into the same buckets as RSA and ECC. Some systems, like Bitcoin, are in significant danger without large overhauls—if a practical CRQC is actually realized. There are efforts underway throughout the cryptocurrency communities to try to prepare for such an eventuality, but to my knowledge, none of them have major adoption yet.
As a final note on investment advice: I don't give out investment advice. :)
All the best,
As I've mentioned to another commenter, Bitcoin relies only on the existence of an arbitrary DSA. Quantum computing-resistant DSAs have been known since the 1970s. I reckon that swapping out Bitcoin's current DSA with a quantum-resistant one would not count as a major overhaul. https://news.ycombinator.com/item?id=43113682
It would probably require a “hard fork,” which is generally considered to be a major change in the Bitcoin world.
All the best,
Can you expand on this? I find this topic difficult to find solid information on, for some reason.
I'm not sure about other cryptocoins, but Bitcoin does not use encryption; it only uses authentication, which requires a DSA (digital signature algorithm). Bitcoin's current DSA would in fact be broken by a cryptographically relevant quantum computer (CRQC). However, there are DSAs - like Lamport signatures and Merkle signatures - known since the 1970s, whose security depends only on the existence of ANY secure hash function. There is no known way to break any widely used hash function using quantum computers. So I reckon that the only change needed for Bitcoin would be to swap out the current DSA for a different one.
I'm not sure about the downsides of quantum-resistant DSAs.
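(One downside that is easy to quantify is size. A toy calculation, assuming plain Lamport over SHA-256; the ~72-byte figure for a DER-encoded ECDSA signature is approximate.)

    # Plain Lamport over a 256-bit hash: 256 digest bits, two 32-byte secrets per bit.
    hash_bytes, digest_bits = 32, 256

    secret_key_bytes = digest_bits * 2 * hash_bytes   # 16384 bytes
    public_key_bytes = digest_bits * 2 * hash_bytes   # 16384 bytes
    signature_bytes  = digest_bits * hash_bytes       #  8192 bytes

    ecdsa_signature_bytes = 72                        # Bitcoin's current DSA, roughly

    print(signature_bytes / ecdsa_signature_bytes)    # ~114x larger per signature
    # And each Lamport key signs only one message, so practical hash-based
    # schemes (e.g. XMSS, SPHINCS+) layer Merkle-tree constructions on top
    # to get many-time keys.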
Chat, is this true?
A 1-qubit prototype can crack RSA? A million scaled-out qubits is still just talk.
Not even 1 qubit, just "substantial progress towards the realization of a topological qubit" (from the accompanying Nature paper).
If it's going to take another 10 years to turn this into a usable product... Better spew out some marketing BS to move the needle on MSFT stock price...
no shor no upvote
Agreed wholesale. Any QC announcement that does not include replicable benchmarks for progress executing Shor or Grover (or demonstration of another real-world use-case that they actually address) should be dismissed out-of-hand by everyone except other researchers in QC.
All the best,
Nadella is currently claiming on X that this opens up a "direct path to 1 million qubits". Based on my priors, I put the probability of this statement being horseshit at 99.9%. Could someone knowledgeable make it 100%?
That word "direct" is doing a lot of heavy lifting.
It makes the statement unfalsifiable: no matter how long it takes, how twisty the path turns out to be, it will always be a direct path, because what is a path after all if not direct?
He didn't say shortest path, quickest path, or any other qualifier that would set visions of nodes and edges and Graph 101 dancing in our nerdy little heads. It's marketing. Good marketing. I hope they can deliver on it.
Microsoft really is a pathetic company R&D-wise compared to what organizations of similar size, like Bell Labs, were in their prime.
Can any modern company be said to be of similar size to Bell Labs in its prime? Bell Labs was only possible due to a monopoly supplying massive amounts of funding for about a century. MS had a few decades of domination in the consumer desktop OS space, but they always had competition, and I would be very surprised if they had the kind of revenue streams that make this type of long-term investment possible - regular payments from nearly every household and business in the country, with the easy ability to predict current and future proceeds just by counting heads.
To be fair, MSFT was not started as a research company and is more of a mass-market commercializer of existing technology. Bell Labs' primary goal was to discover new science and invent practical applications. (Reminds me I should go read "The Idea Factory", the story of Bell Labs!)
IMO new breakthroughs in our understanding of physics would be needed first to make substantial progress in QC. As long as MSFT isn't investing in attosecond lasers or low temperature experiments, RSA will remain secure.
My first scan parsed that as "Marijuana 1 quantum processor". Very high performance ...
For me it was more than my first scan. I understand the word is Majorana, but when I go to pronounce it or read it, my brain reports back "Major-auwana."
The people who signed off on this name a) do a lot of drugs or b) didn't notice because they have never come anywhere near weed
This is the reason it's called that. It's pronounced "Ma-yo-rana".
https://en.m.wikipedia.org/wiki/Ettore_Majorana
>likely dying in or after 1959
That is how I would like my obit to read as well.
truly quantum way to go
Argghhhh.
I rue the day that I decided to play the game of "What word is Majorana 1 similar to?"
Cuz, and I know the armchair psychoanalysts will have a field day, this is what my mind pieced together...
Now I can't unsee it. Sigh.
> Very high performance ...
From the user's perspective, of course.
I read "Majorana 1 aquarium processor". I am not high.
Might be a more truthful name honestly too
Hah, I read it as "Manjaro" at first glance.
[dead]
[dead]
> Microsoft unveils Majorana 1 quantum processor
What is the Win 11 boot time on this processor? Will it be supported in the next version of Windows? /s