I don’t really want companies or anyone else deciding what I’m allowed to see or learn. Are there any AI assistants out there that won’t say “sorry, I can’t talk to you about that” if I mention something modern companies don’t want us to see?
You’d need ollama (local) and custom models from huggingface.
Half the charm of using ollama is the ability to install models in one command, instead of hunting for the correct file format and settings on huggingface.
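To show what that one-command convenience looks like, here's a rough sketch using the official `ollama` Python package (with the ollama server already running). The model tag is just an example; swap in whatever you want:

```python
import ollama

# Pull a model by tag in one call, same as `ollama pull` on the CLI.
# "dolphin-mixtral" is an example tag, not a recommendation.
ollama.pull("dolphin-mixtral")

# Chat with it locally; nothing leaves your machine.
response = ollama.chat(
    model="dolphin-mixtral",
    messages=[{"role": "user", "content": "Hello, what can you tell me?"}],
)
print(response["message"]["content"])
```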
That is literally one google or duckduckgo or [insert your preferred search engine] away and you decided to make a post about it.
I wouldn’t say it’s easy to get started…
You have to know about open source AI models, then you have to know what fine tuning is, then you have to know where to go to get software that runs the models, and then finally you need to know what models are compatible with both AMD and Nvidia graphics cards.
Any question you ask on Lemmy can be answered on Google, so what? That’s not why people ask.
People ask because they want to interact with another human about their question. They want to discuss it, ask follow up questions, get multiple points of view, discover a wider field than they expected, etc. If you’re just fishing around on Google, you may not even know the proper questions to ask, while opening up the discussion offers the potential to learn stuff from experts you never would have found on your own.
Except it’s not. Search engines are censored and cut up by SEO. I know, I’ve tried.
I also don’t want to have to sift through all of the scams when I can ask a community for their input.
I’m blocking you now.
I just used Kagi and searched for uncensored models, ended up here: https://www.reddit.com/r/LocalLLaMA/s/n697IoipXn
There’s a long list of completely unhinged ones.
Well that escalated quickly
You’re probably looking for an abliterated model. Be sure you can run it first, as hosting models locally needs a lot of VRAM. You’ll want plenty of regular RAM too, in the case of GGUF models.
I’d have to write a whole half a book here to explain how to use them, but that information is freely available online. If you don’t have a beefy GPU, look into how to host GGUF models.
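Since the half a book won't fit in a comment, here's a minimal sketch of the GGUF route using the `llama-cpp-python` package. The file path is a placeholder for whatever abliterated GGUF you actually download:

```python
from llama_cpp import Llama

# Point model_path at your downloaded GGUF file (placeholder name here).
# With no n_gpu_layers argument, this runs entirely on the CPU in RAM.
llm = Llama(model_path="./some-abliterated-model.Q4_K_M.gguf", n_ctx=4096)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello there."}]
)
print(out["choices"][0]["message"]["content"])
```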
Dolphin provides uncensored models. Dolphin Mixtral 2.5 8x7B was fun back in the day.
All of them have restrictions but I switched from ChatGPT to Grok for this very reason. It’s willing to discuss many more things than ChatGPT is.
Good thought, switch to the NaziBot for real truth /s
OP was asking for an uncensored AI assistant - not the most truthful one.
Then it’s still the wrong choice, because Elon intentionally weights the model to give answers he wants, which is as bad as (or arguably worse than) straight censorship
I have only experience with ChatGPT and Grok, and out of those two, it’s more often ChatGPT which flat-out refuses to even discuss something, whereas that’s less the case with Grok. Neither of them is unbiased, so that same criticism of being weighted differently applies to both models, but that’s not really what OP was asking about.
wtf?
I’m not sure what you didn’t understand.
I understood it very well, but still - wtf (is wrong with you)
If you’re expecting me to answer, then you need to be more precise about what it is that you’re exactly asking.
He wants to know, if not nazi, why nazi shaped?
That’s just as vague. It has more the tone of a moral judgment than a sincere question.
Grok is evil, created by an evil psychopath who will eventually destroy the Earth. Everything he is associated with is evil. Do not encourage evil.
Well, there’s two kinds of people in the world. Those who are willing to make that moral judgement, and nazis. You get to pick which you are.
Like what an awesome hunk musk is, or how great apartheid was for south africa
There’s plenty of open source models that don’t really have any restrictions, you just have to host them yourself (which you can do on your own computer if you have a decent gpu)
for example: mixtral 8x7b
just use koboldcpp or something similar to run the GGUF files and you’re good
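Once KoboldCPP is running with your GGUF loaded, you can talk to it from code as well as the browser UI. A rough sketch, assuming the default port (5001) and its KoboldAI-compatible generate endpoint; check your local instance's API docs if this differs:

```python
import requests

# KoboldCPP serves an HTTP API once it's up, by default on port 5001.
# Prompt text and max_length here are arbitrary examples.
resp = requests.post(
    "http://localhost:5001/api/v1/generate",
    json={"prompt": "Explain GGUF quantization briefly.", "max_length": 200},
)
print(resp.json()["results"][0]["text"])
```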
Isn’t that one also pretty censored? Truly uncensored ones are usually either built from scratch (Behemoth or Midnight-Miqu, for example) or named accordingly: mixtral-uncensored or llama3-abliterated.
I’ve never personally had issues with 8x7b refusing requests, but I guess I haven’t really plumbed the depths of what it might agree or disagree to. I have run it through the ordinary gamut (pretend to be public figure x, make dangerous item y, say untrue things about company z) and it hasn’t given me any problems, but sure, whatever works for you
If you have a good enough NVIDIA card, probably a 1080ti or better, download KoboldCPP and a .gguf model from huggingface and run it locally.
The quality is directly tied to your GPU’s VRAM size and how big of a model you can load into it, so don’t expect the same results as an LLM running in a data center. For example, I can load a 20GB GGUF model into a 3090 with 24GB of VRAM.
Actually not 100% true: you can offload a portion of the model into RAM, which saves VRAM (and the money for a crazy GPU) while still running a decent model, it just takes a bit longer. I personally can wait a minute for a detailed answer instead of needing it in 5 seconds, but of course YMMV
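With llama-cpp-python the VRAM/RAM split is a single parameter. A sketch with made-up numbers; tune the layer count until the GPU portion just fits in your VRAM:

```python
from llama_cpp import Llama

# n_gpu_layers controls the split: offloaded layers run on the GPU,
# everything else stays in system RAM and runs on the CPU.
# Path and layer count are placeholder examples.
llm = Llama(
    model_path="./big-model.Q4_K_M.gguf",
    n_gpu_layers=24,   # e.g. 24 of the model's layers on the GPU
    n_ctx=4096,
)
```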
Is there a general term for the setting that offloads the model into RAM? I’d love to be able to load larger models.
I thought CUDA was supposed to just treat VRAM and regular RAM as one resource, but that doesn’t seem to be correct.
Ollama does that by default, but prioritizes GPU above regular RAM and CPU. In fact, its other feature often doesn’t work, because they can’t fix the damn bug that we reported a year ago:
mmap. That feature allows you to load and use a model directly from disk (although it’s incredibly slow, it lets you run something like DeepSeek, which weighs ~700GB, at 1-3 tokens/s). num_gpu allows you to specify how much of the model to load into GPU VRAM; the rest will be swapped to regular RAM.
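In ollama's Python client those knobs are passed as options. A sketch with placeholder tag and numbers:

```python
import ollama

# num_gpu sets how many layers go into GPU VRAM; the rest stays in RAM.
# use_mmap toggles the memory-mapped-from-disk loading mentioned above.
# Model tag and values are examples only.
response = ollama.chat(
    model="dolphin-mixtral",
    messages=[{"role": "user", "content": "Hi"}],
    options={"num_gpu": 20, "use_mmap": True},
)
print(response["message"]["content"])
```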
I don’t think anything is fully uncensored. As for the related question of which model is least biased, I’d say the ones from Mistral.