I don’t really want companies or anyone else deciding what I’m allowed to see or learn. Are there any AI assistants out there that won’t say “sorry, I can’t talk to you about that” if I mention something modern companies don’t want us to see?

    • De Lancre@lemmy.world · ↑2 · 15 days ago

      You’d need ollama (local) and custom models from huggingface.

      Half the charm of using ollama is the ability to install models with one command, instead of hunting for the correct file format and settings on huggingface.
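
      For example, pulling and querying a model from Python (a minimal sketch using the ollama Python client; "dolphin-mistral" is just one example of a less-restricted community model, swap in whatever you find):

      ```python
      # pip install ollama -- assumes the Ollama server is already running locally
      import ollama

      # Pull a model in one call (same as `ollama pull <name>` on the CLI).
      ollama.pull("dolphin-mistral")  # example model name

      # Ask it something and print the reply.
      response = ollama.chat(
          model="dolphin-mistral",
          messages=[{"role": "user", "content": "Why is the sky blue?"}],
      )
      print(response["message"]["content"])
      ```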

  • DoctorPress@lemmy.zip · ↑7 ↓26 · 17 days ago

    That is literally one Google or DuckDuckGo or [insert your preferred search engine] search away, and you decided to make a post about it.

    • Cease@mander.xyz · ↑2 · 17 days ago

      I wouldn’t say it’s easy to get started…

      You have to know about open-source AI models, then you have to know what fine-tuning is, then you have to know where to get software that runs the models, and then finally you need to know which models are compatible with AMD and Nvidia graphics cards.

    • BarneyPiccolo@lemmy.today · ↑4 · 17 days ago

      Any question you ask on Lemmy can be answered on Google, so what? That’s not why people ask.

      People ask because they want to interact with another human about their question. They want to discuss it, ask follow-up questions, get multiple points of view, discover a wider field than they expected, etc. If you’re just fishing around on Google, you may not even know the proper questions to ask, while opening up the discussion offers the potential to learn stuff from experts you never would have found on your own.

    • monis@ttrpg.network (OP) · ↑19 ↓4 · edited · 17 days ago

      Except it’s not. Search engines are censored and cut up by SEO. I know, I’ve tried.

      I also don’t want to have to sift through all of the scams when I can ask a community for their input.

      I’m blocking you now.

  • Eyedust@lemmy.dbzer0.com · ↑4 · 17 days ago

    You’re probably looking for an abliterated model. Be sure you can run it first, as hosting models locally needs a lot of VRAM. You’ll want plenty of RAM, too, in the case of GGUF models.

    I’d have to write half a book here to explain how to use them, but that information is freely available online. If you don’t have a beefy GPU, look into how to host GGUF models.
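
    One thing that is quick to show: checking how much VRAM you actually have before you go model shopping. A rough sketch, assuming an Nvidia card and PyTorch installed:

    ```python
    # pip install torch -- quick check of available VRAM
    import torch

    if torch.cuda.is_available():
        props = torch.cuda.get_device_properties(0)
        print(f"{props.name}: {props.total_memory / 1024**3:.1f} GB VRAM")
        # Rule of thumb: the GGUF file should fit in VRAM (plus a GB or two
        # of overhead for context) to run fully on the GPU.
    else:
        print("No CUDA GPU detected; you'd be running on CPU and system RAM.")
    ```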

  • Perspectivist@feddit.uk · ↑2 ↓33 · 17 days ago

    All of them have restrictions, but I switched from ChatGPT to Grok for this very reason. It’s willing to discuss many more things than ChatGPT is.

  • Cease@mander.xyz · ↑22 · 17 days ago

    There are plenty of open-source models that don’t really have any restrictions; you just have to host them yourself (which you can do on your own computer if you have a decent GPU).

    For example: Mixtral 8x7B.

    Just use koboldcpp or something similar to run the GGUF files and you’re good.
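
    Once koboldcpp is up, it also serves a local HTTP API (on port 5001 by default, if I remember right), so you can script against it. A minimal sketch:

    ```python
    # Query a locally running koboldcpp instance (default port 5001).
    import requests

    resp = requests.post(
        "http://localhost:5001/api/v1/generate",
        json={
            "prompt": "Why is the sky blue?",
            "max_length": 200,  # tokens to generate
        },
    )
    print(resp.json()["results"][0]["text"])
    ```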

      • Cease@mander.xyz · ↑1 · 15 days ago

        I’ve never personally had issues with 8x7B refusing requests, but I guess I haven’t really plumbed the depths of what it might agree or disagree to. I have run it through the ordinary gamut (pretend to be public figure X, make dangerous item Y, say untrue thing about company Z) and it hasn’t given me any problems, but sure, whatever works for you.

  • SpicyTaint@lemmy.world · ↑6 · edited · 17 days ago

    If you have a good enough Nvidia card, probably a 1080 Ti or better, download KoboldCPP and a .gguf model from huggingface and run it locally.

    The quality is directly tied to your GPU’s VRAM size and how big a model you can load into it, so don’t expect the same results as an LLM running in a data center. For example, I can load a 20 GB gguf model into a 3090 with 24 GB of VRAM.
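
    If you want to sanity-check whether a model will fit before downloading it, the back-of-the-envelope math is simple (the ~4.5 bits/weight figure for Q4 quants is an approximation; exact sizes vary):

    ```python
    # Rough estimate of GGUF file size at a given quantization level.
    def gguf_size_gb(params_billion: float, bits_per_weight: float) -> float:
        """Approximate file size in GB for a quantized model."""
        return params_billion * 1e9 * bits_per_weight / 8 / 1024**3

    # A 7B model at Q4 (~4.5 bits/weight with quantization metadata):
    print(f"7B @ Q4: ~{gguf_size_gb(7, 4.5):.1f} GB")     # ~3.7 GB
    # Mixtral 8x7B (~47B total params) at Q4:
    print(f"8x7B @ Q4: ~{gguf_size_gb(47, 4.5):.1f} GB")  # ~24.6 GB
    ```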

    • Cease@mander.xyz · ↑2 · 16 days ago

      Actually not 100% true: you can offload a portion of the model into RAM to save VRAM, so you can skip the crazy GPU and still run a decent model; it just takes a bit longer. I personally can wait a minute for a detailed answer instead of needing it in 5 seconds, but of course YMMV.
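
      With llama-cpp-python, for example, the split is a single parameter (the model path here is a placeholder; tune n_gpu_layers down until it fits your VRAM):

      ```python
      # pip install llama-cpp-python -- partial GPU offload example
      from llama_cpp import Llama

      llm = Llama(
          model_path="./mixtral-8x7b-instruct.Q4_K_M.gguf",  # example path
          n_gpu_layers=20,  # layers kept in VRAM; the rest stay in system RAM
          n_ctx=4096,       # context window
      )

      out = llm("Q: Why is the sky blue? A:", max_tokens=128)
      print(out["choices"][0]["text"])
      ```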

      • SpicyTaint@lemmy.world · ↑1 · 16 days ago

        Is there a general term for the setting that offloads the model into RAM? I’d love to be able to load larger models.

        I thought CUDA was supposed to treat VRAM and regular RAM as one resource, but that doesn’t seem to be correct.

        • De Lancre@lemmy.world · ↑2 · 15 days ago

          > Is there a general term for the setting that offloads the model into RAM? I’d love to be able to load larger models.

          Ollama does that by default, but it prioritizes GPU above regular RAM and CPU. In fact, it’s the other feature that often doesn’t work, because they can’t fix the damn bug we reported a year ago: mmap. That feature lets you load and use a model directly from disk (incredibly slowly, admittedly, but it makes it possible to run something like DeepSeek, which weighs ~700 GB, at 1-3 tokens/s).

          num_gpu lets you specify how many layers to load into GPU VRAM; the rest will be kept in regular RAM.
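
          Roughly what that looks like against Ollama’s local API (the model name is just an example; note num_gpu counts layers, not gigabytes):

          ```python
          # Ask Ollama to keep only 20 layers on the GPU; the rest go to system RAM.
          import requests

          resp = requests.post(
              "http://localhost:11434/api/generate",
              json={
                  "model": "dolphin-mistral",  # example model
                  "prompt": "Why is the sky blue?",
                  "stream": False,
                  "options": {
                      "num_gpu": 20,     # layers to offload to VRAM
                      "use_mmap": True,  # map the model file from disk
                  },
              },
          )
          print(resp.json()["response"])
          ```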

  • troed@fedia.io · ↑4 ↓3 · 17 days ago

    I don’t think anything is fully uncensored. As for the similar question about the least biased, I’d say the models from Mistral.