I believe that knowledge should be free: you should have access to it even if you don’t have the money to buy it. This uses IPFS.
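For context, IPFS addresses content by its hash (a CID) rather than by server location, so anyone holding a copy can serve it. A minimal sketch of retrieving content through a public HTTP gateway; the gateway choice and the CID below are placeholders, not references to any real paper:

```python
# Sketch: turning an IPFS content identifier (CID) into a fetchable HTTP URL.
# "https://ipfs.io" is one public gateway among several; the CID is a placeholder.
def gateway_url(cid: str, gateway: str = "https://ipfs.io") -> str:
    """Build the HTTP gateway URL for a given CID."""
    return f"{gateway}/ipfs/{cid}"

url = gateway_url("QmExampleCid123")
print(url)  # https://ipfs.io/ipfs/QmExampleCid123
# To actually download the bytes:
#   import urllib.request; urllib.request.urlretrieve(url, "paper.pdf")
```

Because the CID is derived from the content itself, the same URL pattern works against any gateway that has (or can find) the data.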

  • Kissaki@lemmy.dbzer0.com · 21 hours ago

    “If you do it right, you can have that AI replace all the complicated pirating and downloading process.”

    How so? I don’t see how that would work.


    What are you trying to say about an AI fabricating a whole paper? It must have the same issues as all trained statistical text prediction “AI”: hallucinations. Even if it’s extended with sources, without validating them the paper’s claims are useless, because you can’t be sure a cited source even exists or says what the paper claims it does.

    There are use cases for AI, but if you are looking for papers as reasoned and documented information, AI is the worst tool you can use: it may look correct while being confidently incorrect, and you end up misled.

    This post is about scientific papers. Not predicted generated text.

    • hendrik@palaver.p3x.de · 16 hours ago

      Yeah, I think my sarcasm got lost somewhere. I thought the word “fabricate”, especially in the context of facts, had that slight undertone. But I’m not a native English speaker, so maybe I’m wrong.

      I’ve linked some paper generators somewhere in this comment tree. They’re not supposed to come up with real scientific papers. One of them is an old joke (predating AI), the next is itself the subject of research. And with the third one, I’m not so sure; it seems the intended use is to fake papers.

      I also think “hallucinations” are a major hurdle when it comes to applying AI. It comes as no surprise to me that they do it… I mean, we’ve trained them on all kinds of data: Wikipedia, textbooks, but also fictional stories, novels, Reddit posts… And we want them to be creative. Except when we don’t. But there is (currently) no set-screw, no means of controlling when we want it to stick to the facts and when we want it to be creative, invent something, write a science-fiction novel… It certainly doesn’t help if the use case is writing factual text, or helping a customer who has issues with a bill, and the chatbot decides to be extra creative or mimic an angry Reddit user. It’ll do it. Because we’ve designed it that way.
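      The nearest thing to such a set-screw that today’s chat APIs expose is the sampling temperature, and it illustrates the gap: it only tunes how random the token sampling is, not how factual the answer is. A minimal sketch of choosing it per use case; the model name and payload shape are assumptions, following the common OpenAI-style chat API:

      ```python
      # Sketch: sampling temperature as the (insufficient) existing control knob.
      # Low temperature makes output more deterministic, NOT more truthful.
      # Model name and payload shape are assumptions (OpenAI-style chat API).
      def chat_request(prompt: str, creative: bool) -> dict:
          """Build a request payload with temperature chosen per use case."""
          return {
              "model": "gpt-4o-mini",  # hypothetical model choice
              "messages": [{"role": "user", "content": prompt}],
              # Near 0: pick the most likely token; higher: sample more freely.
              "temperature": 1.2 if creative else 0.0,
          }

      print(chat_request("Summarize this bill dispute.", creative=False)["temperature"])  # 0.0
      ```

      Even at temperature 0 the model can still assert a wrong fact with full confidence, which is exactly why this knob is not the factuality control described above as missing.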

      I guess it helps to make AI more intelligent, so the chances of it inferring/guessing what to do become a bit better. But I think what we really want is some means to steer it more directly. That would open up more use cases for AI. Currently, we don’t have any of that, so when it comes to factual content, AI is just very unreliable.

      And if you ask me, ChatGPT, Claude, etc. aren’t even close to being smart enough to write a scientific paper. So that’d be yet another issue. I know people regularly claim it can pass some test for a degree, or be smarter than a student… But in my own experience, ChatGPT can’t even summarize a two-page newspaper article. All results I’ve ever seen were riddled with inaccuracies and, most of the time, missed the entire point of the article. I’ve rarely been happy with how it reworded my emails. And I let it write some hobby computer code for me: it did a great job at writing boilerplate code, some web design, etc., but failed miserably with the more complex things I really needed help with. How would that thing be able to do research on its own?

      Don’t get me wrong, I think AI and LLMs are very useful. They can assist, retrieve documents… They excel at translating between languages. I also like chatbots for their creativity; you can just tell them to come up with five ideas concerning whatever you’re currently doing. But there are a lot of things they cannot do. And it’s probably going to stay that way for a while, until some major (hypothetical) breakthrough suddenly makes them 10x as intelligent and/or gets rid of hallucinations.