I believe that knowledge should be free; you should have access to knowledge even if you don’t have the money to pay for it. This uses IPFS.

    • Imgonnatrythis@sh.itjust.works · 5 days ago

      The Sci-Hub database stopped updating in 2021. Big win for corporate publishers and wealthy scientists; they’ve had an edge since then. It’s super important to have access to up-to-date resources. The database here seems to fill that gap - Merry Christmas to me!!

    • hendrik@palaver.p3x.de · 5 days ago

      I think they stopped endorsing IPFS. I can’t find a good source right now. If you want to support Anna’s Archive, you can help seed their torrents. They don’t seem to have that much redundancy.

      • doeknius_gloek@discuss.tchncs.de · 5 days ago

        You’re right.

        We’ve decided that IPFS is not yet ready for prime time. We’ll still link to files on IPFS from Anna’s Archive when possible, but we won’t host it ourselves anymore, nor do we recommend others to mirror using IPFS. Please see our Torrents page if you want to help preserve our collection.

        • eleitl@lemm.ee · 4 days ago

          IPFS is not for bulk mirroring; it’s for content delivery. IPFS works well enough if the content publishers and end users know what they’re doing.

        • itslilith@lemmy.blahaj.zone · 4 days ago

          I’m curious, could anyone more knowledgeable about IPFS give an impression of the state of the protocol? It seems like a really interesting technology, but it also leans heavily into web3 and crypto bullshit. Is that reflective of the network, or just bad marketing?

          • ComradeMiao@lemmy.dbzer0.com · 4 days ago

            It seems like most big projects have dropped it. I remember reading that one of the big drawbacks was that it ended up with one central node hosting via Cloudflare, which they then dropped, or something like that. Only half remembering. It sounds so cool! Sad it doesn’t work.

            • Natanox@discuss.tchncs.de · 4 days ago

              Can confirm. Fiddled with it a little while ago, trying to use it productively to host Lutris installer files. It’s an absolute mess: slow, unreliable, without proper documentation, and with a really bad default node application.

              It also managed to get our server temporarily banned by the hosting provider, since the “sane default settings” include the node doing a full sweep of the local subnet on every NIC, knocking on multiple ports of every device it can find. Apparently the expected environment for a node is your home network… a default that has caused problems for many people for years now.
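
              In case anyone else hits this: a minimal sketch of what I believe turns local discovery off in a Kubo node, assuming the config lives at ~/.ipfs/config (key names may differ between versions; Kubo’s “server” profile is supposed to cover similar ground):

                  # Sketch: disable local (mDNS) peer discovery in a Kubo config file.
                  # Assumes the default config location; restart the daemon afterwards.
                  import json
                  from pathlib import Path

                  cfg_path = Path.home() / ".ipfs" / "config"
                  cfg = json.loads(cfg_path.read_text())

                  # mDNS is what makes the node announce itself / probe the local network.
                  cfg.setdefault("Discovery", {}).setdefault("MDNS", {})["Enabled"] = False

                  cfg_path.write_text(json.dumps(cfg, indent=2))
                  print("Local discovery disabled; restart the IPFS daemon to apply.")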

              A project like the one in this post might benefit from looking at more modern/mature reimplementations of IPFS’s concept, like Veilid (which would also offer additional features).

              • TheMachineStops@discuss.tchncs.de (OP) · 4 days ago

                Just looked up Veilid; it seems similar to I2P, but it is still in development and can’t be used for now. Also, I agree that IPFS is horrible, and not just the setup: the developers themselves are against piracy. What is the point of a decentralised network that picks and chooses what it hosts? BitTorrent, Tor, Freenet, and I2P never did this as far as I know.

                DMCA denylist: https://github.com/ipfs-inactive/faq/issues/36#issuecomment-140567411

                • hendrik@palaver.p3x.de · 4 days ago

                  I think you’re all making it look a bit worse than it is. I downloaded a few PDFs via IPFS and it worked for me, and I was happy it provided me with what I needed at the time. I can’t comment on reliability or other nuances. It was also slow in my case, but I took that as the usual trade-off: usually you get either speed or anonymity, not both. And there are valid use cases for denylists, for example viruses, malware, CSAM and spam. I’d rather not have my node spread those. It’s complicated. And that’s also just how you talk in public. I think what matters is what you do and implement, not whether you say you comply with regulation and the DMCA…

                  Thanks for the links, I’ll have a look.

            • eleitl@lemm.ee · 4 days ago

              IPFS is designed for decentralized pinning and decentralized use. You’re supposed to run a local node or use browsers with built-in IPFS to access content. If you’re using it wrong it will suck.
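
              For what it’s worth, the “run a local node” part just means: start the daemon and pull content through its local gateway instead of a public one. A minimal sketch, assuming Kubo’s default gateway on 127.0.0.1:8080 and a placeholder CID:

                  # Sketch: fetch content through a *local* IPFS node's HTTP gateway.
                  # Assumes `ipfs daemon` is running with the default gateway port (8080);
                  # the CID below is a hypothetical placeholder, not a real file.
                  import urllib.request

                  cid = "bafy...examplecid"  # placeholder content identifier
                  url = f"http://127.0.0.1:8080/ipfs/{cid}"

                  with urllib.request.urlopen(url) as resp:
                      data = resp.read()

                  print(f"Fetched {len(data)} bytes from the local node")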

                • eleitl@lemm.ee · 4 days ago

                  Again, if you’re using anything centralized with IPFS, you’re using it wrong and getting the worst of both worlds.

    • TheMachineStops@discuss.tchncs.de (OP) · 4 days ago

      Yeah, the AI thing is stupid; everyone suddenly wants to incorporate AI. Check out the Telegram bot though: you can request research papers or books through the bots, and someone uploads them within a couple of hours.

    • hendrik@palaver.p3x.de · 4 days ago

      If you do it right, you can have that AI replace all the complicated pirating and downloading process. I think someone already came up with a paper writer AI. You just give it the topic, and it fabricates a whole paper, including nice diagrams and pictures. 😅

      Yeah, but that also got me worried. I wonder how AI and science mix. Supposedly, some researchers use AI, especially “Retrieval-Augmented Generation” (information retrieval) and such. I’m not a scientist, but I didn’t have much luck with AI and factual information. It just makes a lot of stuff up, to the point where I’m better off without it.
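
      (In case the term is unfamiliar: retrieval-augmented generation just means looking up relevant passages first and pasting them into the prompt, so the model has something factual to ground its answer in. A minimal sketch of the idea, with the retrieval step reduced to toy word overlap and generate() standing in for a real LLM call:)

          # Minimal sketch of retrieval-augmented generation (RAG).
          # Retrieval here is toy word overlap; generate() is a hypothetical
          # stand-in for an actual LLM call.

          documents = [
              "Sci-Hub stopped adding new papers in 2021.",
              "Anna's Archive distributes its collection via torrents.",
              "IPFS addresses content by hash instead of location.",
          ]

          def generate(prompt: str) -> str:
              # Placeholder for a real LLM call.
              return f"[answer grounded in a prompt of {len(prompt)} characters]"

          def retrieve(question: str) -> str:
              # Toy retrieval: pick the document sharing the most words with the question.
              q_words = set(question.lower().split())
              return max(documents, key=lambda d: len(q_words & set(d.lower().split())))

          def rag_answer(question: str) -> str:
              context = retrieve(question)
              # Augment the prompt with the retrieved passage, then let the model answer.
              return generate(f"Context: {context}\n\nQuestion: {question}\nAnswer:")

          print(rag_answer("When did Sci-Hub stop adding papers?"))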

      • Kissaki@lemmy.dbzer0.com · 20 hours ago

        If you do it right, you can have that AI replace all the complicated pirating and downloading process.

        How so? I don’t see how that would work.

        What are you trying to say about an AI fabricating a whole paper? It must have the same issues all trained statistical text-prediction “AI” has: hallucinations. Even if it’s extended with sources, the paper’s claims are useless without validating them, since you can’t be sure a source even exists or says what the text claims.

        There are use cases for AI, but if you are looking to papers for reasoned and documented information, AI is the worst thing you can use, because it may look correct while being confidently incorrect, and you end up being misled.

        This post is about scientific papers, not predictively generated text.

        • hendrik@palaver.p3x.de · 15 hours ago

          Yeah, I think my sarcasm got lost somewhere. I thought the word “fabricate”, especially in the context of facts, had that slight undertone. But I’m not a native English speaker, so maybe I’m wrong.

          I’ve linked some paper generators somewhere in this comment tree. They’re not supposed to come up with real scientific papers. One of them is an old joke (predating AI), the next is itself the subject of research, and with the third one I’m not so sure; it seems like its intended use is to fake papers.

          I also think “hallucinations” are a major hurdle when it comes to applying AI. It comes as no surprise to me that models do it… We’ve trained them on all kinds of data: Wikipedia, textbooks, but also fictional stories, novels, Reddit posts… And we want them to be creative, except when we don’t. But there is (currently) no dial, no means of controlling when we want a model to stick to the facts and when we want it to be creative, to invent something, to write a science-fiction novel… It certainly doesn’t help if the use case is writing factual text, or helping a customer who has issues with a bill, and the chatbot decides to be extra creative or mimic an angry Reddit user. It’ll do it, because we’ve designed it that way.

          I guess it helps to make AI more intelligent, so the chances of it inferring/guessing what to do become a bit better. But I think what we really want is some means to steer it more directly, and that’ll open up more use cases for AI. Currently, we don’t have any of that. So regarding factual stuff, AI is just very unreliable.

          And if you ask me, ChatGPT, Claude etc. aren’t even close to being smart enough to write a scientific paper, so that’d be yet another issue. I know people regularly claim they can pass some test for a degree, or be smarter than a student… But from my own experience, ChatGPT can’t even summarize a two-page newspaper article. All results I’ve ever seen are riddled with inaccuracies and most of the time miss the entire point of the article. I’ve rarely been happy with how it reworded my emails. And I let it write some hobby computer code for me: it did a great job at writing boilerplate code, some web design, etc., but failed miserably with the more complex things I really needed help with. How would that thing be able to do research on its own?

          Don’t get me wrong, I think AI and LLMs are very useful. They can assist, retrieve documents… They excel at translating between languages. I also like chatbots for their creativity; you can just tell them to come up with 5 ideas concerning whatever you’re currently doing. But there are a lot of things they cannot do, and it’s probably going to stay that way for a while, until some major (hypothetical) breakthrough suddenly makes them 10x as intelligent and/or gets rid of hallucinations.

      • Mirodir@discuss.tchncs.de · 4 days ago

        AI can be good but I’d argue letting an LLM autonomously write a paper is not one of the ways. The risk of it writing factually wrong things is just too great.

        To give you an example from astronomy: AI can help filter out “uninteresting” data, which encompasses a large majority of data coming in. It can also help by removing noise from imaging and by drastically speeding up lengthy physical simulations, at the cost of some accuracy.

        None of those use cases use LLMs though.
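
        (To make the filtering point concrete: in practice it’s usually some model assigning each observation a score plus a cut-off, roughly like this toy sketch with made-up numbers:)

            # Toy illustration of filtering "uninteresting" data: score each
            # observation with some model and keep only those above a threshold.
            # The scores below are made up; a real pipeline would use a trained model.
            observations = {
                "frame_001": 0.02,
                "frame_002": 0.91,  # e.g. a candidate transient
                "frame_003": 0.07,
            }

            THRESHOLD = 0.5
            interesting = [name for name, score in observations.items() if score >= THRESHOLD]
            print(interesting)  # only frame_002 gets passed on for a closer look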

        • hendrik@palaver.p3x.de · 4 days ago

          Right, the public and journalists often lump everything together under the term “AI”, when there’s really a big difference between a domain-specific pattern recognition task that machine learning can do with >99% accuracy… and an ill-suited use case where an LLM gets slapped on.

          For example, I frequently disagree with people using LLMs for summarization. That seems to be something a lot of people like, and I think LLMs are particularly bad at it. All my results were riddled with inaccuracies, sometimes missing the whole point of the input text. And they’d rarely summarize at all: they just pick a topic/paragraph here and there and write some shorter version of that, missing what a summary is about: giving me the main points and the conclusion, reducing the detail, and roughly outlining how the author got there. I think LLMs just can’t do it.

          I like them for other purposes, though.

          • Mirodir@discuss.tchncs.de · 4 days ago

            Re LLM summaries: I’ve noticed that too. For some of my classes shortly after the ChatGPT boom, we were allowed to bring along summaries. I tried feeding it the input text and telling it to break it down into a sentence or two. Often it would just give a short summary of the topic but not actually use the concepts described in the original text.

            Also, minor nitpick, but be wary of the term “accuracy”. It is a terrible metric for most use cases, and when a company advertises their AI having a high accuracy, they’re likely hiding something. For example, let’s say we wanted to develop a model that can detect cancer in medical images. If our test set consists of 1% cancer images and 99% normal tissue, 99% accuracy is achieved trivially by a model that just predicts “no cancer” every time. A lot of the more interesting problems have class imbalances far worse than this one, too.
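
            (Putting numbers on it: on a 1,000-image test set with 10 cancer cases, the “always no cancer” model scores 99% accuracy and 0% recall on the class that actually matters:)

                # Worked example of the 1% / 99% class-imbalance point.
                labels = [1] * 10 + [0] * 990    # 10 cancer cases, 990 normal
                predictions = [0] * len(labels)  # model always predicts "no cancer"

                accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
                true_positives = sum(p == 1 and y == 1 for p, y in zip(predictions, labels))
                recall = true_positives / sum(labels)

                print(f"accuracy: {accuracy:.2%}")  # 99.00%
                print(f"recall:   {recall:.2%}")    # 0.00%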

            • hendrik@palaver.p3x.de · 4 days ago

              What’s the correct term within casual language? “correctness”? But that has the same issue… I’m not a native speaker…

              By the way, I forgot my main point. I think that paper generator was kind of a joke. At least the older one, which predates AI and uses a “hand-written context-free grammar”.

              And there are projects like Papergen and several others. But I think what I was referring to was the AI scientist, which does everything from brainstorming research ideas to simulating experiments and writing reports. It’s not meant to be taken seriously, in the sense that you’d publish the generated results. But it seems pretty creative to me, to write a paper about an artificial scientist…
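
              (“Hand-written context-free grammar” basically means a set of templates that get expanded recursively at random; a toy version of the idea, with a made-up grammar:)

                  # Toy version of a "hand-written context-free grammar" paper generator:
                  # recursively expand non-terminal symbols with randomly chosen templates.
                  import random

                  grammar = {
                      "SENTENCE": ["We present NOUN for NOUN.", "Our NOUN improves on NOUN."],
                      "NOUN": ["a scalable framework", "decentralized ADJ storage", "ADJ caching"],
                      "ADJ": ["probabilistic", "Byzantine fault-tolerant", "lossless"],
                  }

                  def expand(symbol: str) -> str:
                      if symbol not in grammar:
                          return symbol
                      words = []
                      for token in random.choice(grammar[symbol]).split():
                          core = token.rstrip(".,")  # keep trailing punctuation aside
                          words.append(expand(core) + token[len(core):])
                      return " ".join(words)

                  print(expand("SENTENCE"))  # e.g. "We present lossless caching for a scalable framework."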

  • sakuragasaki46@feddit.it · 4 days ago

    Why did I think for a moment that the post was about Scilab (the MATLAB replacement)? I was confused about Scilab being centralized. It runs on your computer, after all.