• over_clox@lemmy.world
    +6 / -108 · 1 month ago

    There’s a dime that’s been stuck in the road behind our local store, tails side up, for over 15 years. And that doesn’t even need error correction.

    Why does it sound like technology is going backwards more and more each day?

    Someone please explain to me how anything implementing error correction is even useful if it only lasts about an hour?

    • Mikina@programming.dev
      +67 · 1 month ago

      I mean, that’s literally how research works. You make small discoveries and use them to move forward.

      • over_clox@lemmy.world
        +3 / -103 · 1 month ago

        What’s to research? A fucking abacus can hold data longer than a goddamn hour.

        • Deceptichum@quokk.au
          +54 / -1 · 1 month ago

          Are you really comparing a fucking abacus to quantum mechanics and computing?

        • Zement@feddit.nl
          +34 / -1 · 1 month ago

          Are you aware that the RAM in your computing devices loses information when you read a bit?

          Why don’t you switch from a smartphone to an abacus and dwell in the anti-science reality of medieval times?

          • FiskFisk33@startrek.website
            +8 · 1 month ago

            And that it loses data after merely a few milliseconds if left alone; to account for that, DDR5 reads and rewrites unused data every 32 ms.

          • over_clox@lemmy.world
            +1 / -18 · 1 month ago

            You’re describing how ancient magnetic core memory works; that’s not how modern DRAM (Dynamic RAM) works. DRAM uses a constant pulsing refresh cycle to recharge the micro-capacitors of each cell.

            And on top of that, SRAM (Static RAM) doesn’t even need the refresh circuitry; it just works and holds its data as long as it remains powered. It only takes 2 discrete transistors, 2 resistors, 2 buttons and 2 LEDs to demonstrate this on a simple breadboard.

            I’m taking a wild guess that you’ve never built any circuits yourself.

            • Zement@feddit.nl
              +13 · 1 month ago

              I’m taking a wild guess that you completely ignored the subject of the thread to start an electronics engineering pissing contest?

              • over_clox@lemmy.world
                +1 / -15 · 1 month ago

                Do you really trust the results of any computing system, no matter how it’s designed, when it has pathetic memory integrity compared to ancient technology?

            • AbidanYre@lemmy.world
              +6 · 1 month ago

              And you would have been there shitting on magnetic core memory when it came out. But without that we wouldn’t have the more advanced successors we have now.

                • AbidanYre@lemmy.world
                  +6 · 1 month ago

                  Doubt.

                  Core memory loses information on read and DRAM is only good while power is applied. Your street dime will be readable practically forever and your abacus is stable until someone kicks it over.

                  You’re not the arbiter of what technology is “good enough” to warrant spending money on.

                  • over_clox@lemmy.world
                      +1 / -6 · 1 month ago

                    Core memory is also designed to accommodate that and almost instantly rewrite the data back to memory. That in itself might be a crude form of ‘error’ correction, but it still lasts way longer than an hour.

                    Granted, quantum computers are a different beast of their own, but how much digital data does a qubit actually store? And how does that stack up in a price-per-bit comparison?

                    If they already know quantum computers are more prone to memory errors, why not use reliable conventional RAM to store the intermediate data and let the quantum side of things be the ‘CPU’, or QPU if you like?

                    I dunno, it just makes absolutely no sense to me to utilize any sort of memory technology that, even with error correction, still manages to lose information faster than a jumping spider’s memory.

          • frezik@midwest.social
            +25 · 1 month ago

            This must be the dumbest take on QC I’ve seen yet. You’d expect a lot of people to focus on how it’ll break crypto; there’s a great deal of nuance around that, and people should probably shut up about it. But “a dime stuck in the road is a stable datapoint” sounds like a late-19th-century op-ed about how airplanes are impossible.

    • BananaTrifleViolin@lemmy.world
      +43 · 1 month ago

      As stable as that dime is, it’s utterly useless for all practical purposes.

      What Google is talking about is making a stable qubit - the basic unit of a quantum computer. It’s extremely difficult to make a qubit stable - and since it underpins how a quantum computer works, instability introduces noise and errors into the calculations a quantum computer would make.

      Stabilising a qubit in the way Google’s researchers have done shows that, in principle, if you scale up a quantum computer it will get more stable and accurate. That’s been a major aim in the development of quantum computing for some time.

      Current quantum computers are small and error-prone. The researchers have added another stepping stone on the way to useful quantum computers in the real world.

      • Buttons@programming.dev
        +3 · 1 month ago

        It sounds like you’re saying a large quantum computer is easier to make than a small quantum computer?

        • logicbomb@lemmy.world
          +9 · 1 month ago

          That is one of the things the article says: that making certain parts of the processor bigger reduces error rates.

        • obbeel@lemmy.eco.br
          +1 · 1 month ago

          I think that refers to the current quantum computers made using photonics, right? Those are really big, though.

    • TimeSquirrel@kbin.melroy.org
      +39 · 1 month ago

      Do you have any idea how much error correction is needed to get a regular desktop computer to do its thing? Between the peripheral bus and the CPU, inside your RAM if you have ECC, between the USB host controller and your printer, between your network card and your network switch/router, and so on and so forth. It’s amazing that something as complex as a modern PC, using such fast signalling, can function at all. At the frequencies being used to move data around the system, the copper traces behave more like radio-frequency waveguides than wires; they are just “suggestions” for the signals to follow. So there’s tons of crosstalk/bleed-over and external interference that must be taken into account.

      Basically, if you want to send high-speed signals more than a couple of centimeters and have them arrive in a way that makes sense to the receiving entity, you’re going to need error correction. Having “error correction” doesn’t mean something is bad; we use it all the time. CRCs, checksums, parity bits, and many other techniques exist to detect and correct errors in data.
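
      To make that concrete, here’s a rough Python sketch (my own toy example, not anything from the article) of the humblest of those techniques, a single even-parity bit:

        # Toy illustration of single-bit even parity, the simplest of the
        # error *detection* techniques mentioned above (all names are mine).

        def parity_bit(bits):
            """Return an even-parity bit for a sequence of 0/1 values."""
            return sum(bits) % 2

        def parity_ok(bits, stored_parity):
            """True if the data still matches the parity recorded when it was sent."""
            return parity_bit(bits) == stored_parity

        data = [1, 0, 1, 1, 0, 1, 0, 0]
        p = parity_bit(data)          # sent/stored alongside the data
        data[3] ^= 1                  # simulate one bit flipped by interference
        print(parity_ok(data, p))     # False -> error detected, request a resend

      One parity bit can only detect an odd number of flipped bits; real links layer CRCs, retransmission, or stronger correcting codes on top of the same idea.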

      • over_clox@lemmy.world
        +1 / -28 · 1 month ago

        I’m well aware. I’m also aware that the various levels of error correction in a typical computer manage to retain the data integrity potentially for years or even decades.

        Google bragging about an hour, regardless of it being a different type of computer, just sounds pathetic, especially given all the money being invested in the technology.

        • TheRealKuni@lemmy.world
          +15 / -1 · 1 month ago

          Traditional bits only have to be 0 or 1. Not a coherent superposition.

          Managing to maintain a stable qubit for a meaningful amount of time is an important step. The final output from quantum computation is likely going to end up being traditional bits, stored traditionally, but superpositions allow qubits to be much more powerful during computation.

          Being able to maintain a cached superposition seems like it would be an important step.
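
          To make the superposition point concrete, here’s a toy numpy sketch (my own illustration, nothing from the article): a qubit’s state is a pair of complex amplitudes, and reading it out collapses it to an ordinary bit.

            import numpy as np

            # A qubit modeled as a unit vector of two complex amplitudes (toy model).
            ket0 = np.array([1, 0], dtype=complex)
            ket1 = np.array([0, 1], dtype=complex)

            # Equal superposition of |0> and |1>.
            psi = (ket0 + ket1) / np.sqrt(2)

            # Measurement collapses the state to a traditional bit, with
            # probabilities given by the squared amplitudes.
            probs = np.abs(psi) ** 2
            outcome = np.random.choice([0, 1], p=probs)
            print(probs, outcome)     # [0.5 0.5] and a classical 0 or 1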

          (Note: I am not even a quantum computer novice.)

    • alcoholicorn@lemmy.ml
      +18 / -2 · 1 month ago

      It can be useful if they build enough of these that they can run programs that regular computers can’t run at this scale, in less than an hour.

      Quantum computers aren’t a replacement for regular computers because they’re much slower and can’t do normal calculations, but for the type of problem where you’d have to guess-and-check more answers than is feasible with a regular computer, they can get there in far fewer steps.
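
      For a rough feel of the “fewer steps” claim, here’s a toy statevector simulation of Grover’s search (my own illustration, not from the article): it finds one marked answer out of N in about √N quantum steps, where plain guess-and-check needs up to N.

        import numpy as np

        # Grover's search over n qubits: 16 candidate answers, one marked as
        # "correct", found in ~pi/4 * sqrt(16) = 3 steps instead of up to 16 guesses.
        n = 4
        N = 2 ** n
        marked = 11                                 # the answer being searched for

        state = np.full(N, 1 / np.sqrt(N))          # uniform superposition over all answers
        steps = int(round(np.pi / 4 * np.sqrt(N)))  # 3 iterations for N = 16

        for _ in range(steps):
            state[marked] *= -1                     # oracle: flag the right answer
            state = 2 * state.mean() - state        # diffusion: amplify the flagged amplitude

        print(steps, int(np.argmax(np.abs(state) ** 2)))   # 3 steps, answer 11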

      • over_clox@lemmy.world
        +1 / -56 · 1 month ago

        I took a random wild guess, and found that if they quit blowing billions of dollars on over-complicated technology, they could do a lot more to take care of real world problems, like food, clothes and shelter for the homeless.

        • alcoholicorn@lemmy.ml
          +21 / -2 · 1 month ago

          You think that’s wasteful? Wait until you hear about the military or prisons.

        • RageAgainstTheRich@lemmy.world
          +8 · 1 month ago

          But… you know even if they didn’t use the money for this, they wouldn’t use it for those things, right? It’s Google…

        • frezik@midwest.social
          +7 · 1 month ago

          By simulating molecules, quantum computers hold huge promise for creating new medicines that are more effective, have fewer side effects, and are more likely to get through FDA trials on the first try.

          Please stop. You’re embarrassing yourself.

        • SecretSauces@lemmy.world
          +4 · 1 month ago

          So what you’re saying is we should never make any scientific advancement until we make the world a paradise?

          You know what would be more effective, and just as realistic? Setting a limit that no one person or entity should have more than half a billion dollars. The rest goes to charity to take care of all the problems we have now.

          • over_clox@lemmy.world
            +1 / -9 · 1 month ago

            I totally get you there; yes, no one person should have over half a billion dollars while so many others, plus the environment, are suffering.

            As far as scientific advancement goes, I think humanity is already reaching the peak of that mountain. Sure, there’s still more to be discovered, but at what cost?

            How much does it cost to research, design, manufacture, and program a quantum computer? I dunno what their finances look like, but if I had to hazard a wild guess, that already sounds like over half a billion dollars…

        • bunchberry@lemmy.world
          +2 · 1 month ago

          Interesting that you get downvoted for this, when I mocked someone for saying the opposite: they claimed that $0.5m was some enormous amount of money we shouldn’t be wasting, and I simply pointed out that we waste literally billions around the world on endless wars killing random people for no reason, so it’s silly to come after small-bean quantum computing if budgeting is your actual concern. People seemed to really hate me for saying that, or maybe it’s because they just actually like wasting money on bombs to drop on children, and so they want to cut everything but that.

      • over_clox@lemmy.world
        +1 / -9 · 1 month ago

        Indeed, you’re very correct. It also can’t remember those results for over an hour. Hell, a jumping spider has better memory than that.

        • WolfLink@sh.itjust.works
          +8 · 1 month ago

          The output of a quantum computer is read by a classical computer and can then be transferred or stored for as long as you like using traditional means.

          The lifetime of the error-corrected qubit mentioned here limits how complex a quantum calculation the quantum computer can run. And an hour is a really, really long time by that standard.

          Breaking RSA or doing other exciting things still requires a bunch of these error-corrected qubits connected together. But this is still a pretty significant step.

          • over_clox@lemmy.world
            +2 / -9 · 1 month ago

            Well riddle me this, if a computer of any sort has to constantly keep correcting itself, whether in processing or memory, well doesn’t that seem unreliable to you?

            Hell, with quantum computers, if the temperature ain’t right and you fart in the wrong direction, the computations get corrupted. Even when you introduce error correction, if it only lasts an hour, that still doesn’t sound very reliable to me.

            On the other hand, I have ECC ChipKill RAM in my computer, I can literally destroy a memory chip while the computer is still running, and the system is literally designed to keep running with no memory corruption as if nothing happened.

            That sort of RAM ain’t exactly cheap either, but it’s way cheaper than a super expensive quantum computer with still unreliable memory.

            • WolfLink@sh.itjust.works
              +11 · 1 month ago

              Well riddle me this, if a computer of any sort has to constantly keep correcting itself, whether in processing or memory, well doesn’t that seem unreliable to you?

              Error correction is the study of mathematical techniques that let you make something reliable out of something unreliable. Much of classical computing relies heavily on error correction. You yourself pointed out the error correction applied in your classical computer.
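
              A quick toy example of that idea (mine, not anything from the article): a 3x repetition code turns a channel that flips bits 10% of the time into one that is wrong only about 3% of the time, just by majority vote.

                import random

                def noisy(bit, p=0.1):
                    """Unreliable channel: flips the bit with probability p."""
                    return bit ^ (random.random() < p)

                def with_repetition(bit, p=0.1):
                    """Send the bit three times, decode by majority vote."""
                    return int(sum(noisy(bit, p) for _ in range(3)) >= 2)

                trials = 100_000
                raw = sum(noisy(1) != 1 for _ in range(trials)) / trials
                ecc = sum(with_repetition(1) != 1 for _ in range(trials)) / trials
                print(raw, ecc)   # ~0.10 uncorrected vs ~0.03 with the repetition code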

              That sort of RAM ain’t exactly cheap either, but it’s way cheaper than a super expensive quantum computer with still unreliable memory.

              The reason so much money is being invested in the development of quantum computers is mathematical work suggesting that a sufficiently big quantum computer will be able to solve, in an hour, useful problems that would take the world’s biggest classical computer thousands of years.

              • over_clox@lemmy.world
                +2 / -9 · 1 month ago

                Why do we humans even think we need to solve these extravagantly over-complicated formulas in the first place? Shit, we’re in a world today where kids are forgetting how to spell and do basic math on their own, no thanks to modern technology.

                Don’t get me wrong, human curiosity is an amazing thing. But that’s a two edged sword, especially when we’re augmenting genuine human intelligence with the processing power of modern technology and algorithms.

                Just because we can, doesn’t necessarily mean we should. We’re gonna end up with a new generation of kids growing up half dumb as a stump, expecting the computers to give us all the right answers.

                Smart technology for dumb people…

                • gamermanh@lemmy.dbzer0.com
                  +8 · 1 month ago

                  Why do we humans even think we need to solve these extravagantly over-complicated formulas in the first place?

                  Because answering those questions could do things like cure disease, help us better understand the universe, or a million other things.

                  Shit, we’re in a world today where kids are forgetting how to spell and do basic math on their own, no thanks to modern technology.

                  Not because of it, either. This research isn’t really related to that kind of tech, anyway.

                  Just because we can, doesn’t necessarily mean we should. We’re gonna end up with a new generation of kids growing up half dumb as a stump, expecting the computers to give us all the right answers.

                  This isn’t going to be for daily, normal use; you’re projecting fear at the wrong tech.

                  • over_clox@lemmy.world
                    +1 / -5 · 1 month ago

                    Ask a quantum chip how to cure a disease? Sure, let’s accept that as a possible future…

                    You really think the chips actually understand diseases? We’re gonna end up with a whole new generation of people that have no clue how the shit works to begin with.

                    Eventually it’ll be like asking “How do I trim my toenails?” and having the ‘intelligent’ system tell you to cut your appendages off.

                    Granted, AI and quantum computing aren’t quite the same thing. Does it matter? Future generations will have the ability to just ask a computer how to generate a cure for a disease…

                    The machine gives no fucks about us, it’ll just as easily destroy us if someone asks the wrong question or enters the wrong formula.

                • WolfLink@sh.itjust.works
                  +8 · 1 month ago

                  Why do we humans even think we need to solve these extravagantly over-complicated formulas in the first place? Shit, we’re in a world today where kids are forgetting how to spell and do basic math on their own, no thanks to modern technology.

                  lol.

                  All of modern technology boils down to math. Curing diseases, building our buildings, roads, and cars, even how we do farming these days: it’s all heavily driven by science and math.

                  Sure, some of modern technology has made people lazy or had other negative impacts, but it’s not a serious argument to say continuing math and science research in general is worthless.

                  Specifically relating to quantum computing, the first real problems to be solved by quantum computers are likely to be chemistry simulations, which could have an impact on discovering new medicines or new industrial processes.

                  • scarabic@lemmy.world
                    +8 · 1 month ago

                    Your responses to Herr Dunning-Kruger here were very patient and succinct. I learned from them so thanks for making that effort.