



  • > so they wanted to sell Itanium for servers, and keep the x86 for personal computers.

    That’s still complacency. They assumed consumers would never want to run workloads capable of using more than 4 GiB of address space.

    Sure, they’d already implemented Physical Address Extension, but that only let the OS address more physical RAM by widening the page-table entries. It didn’t increase the virtual address space available to applications, which was still capped at 4 GiB per process.

    The application didn’t necessarily need to use 4 GiB of RAM to hit that limit, either. Dylibs, memory-mapped files, thread stacks, and various paging tricks all eat up the available address space without ever needing to be resident in RAM (see the sketch below).
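
    To make that concrete, here’s a minimal sketch (my own illustration, not from any real program) for Linux: PROT_NONE reservations consume address space without a single page ever becoming resident, so a 32-bit build (`gcc -m32`) runs out after roughly 3 GiB no matter how much free RAM the machine has.

    ```c
    #define _DEFAULT_SOURCE /* for MAP_ANONYMOUS on glibc */
    #include <stdio.h>
    #include <sys/mman.h>

    int main(void) {
        const size_t chunk = 256 * 1024 * 1024; /* 256 MiB per reservation */
        size_t total = 0;

        /* Reserve anonymous address space without ever touching it.
         * PROT_NONE pages are never resident, yet each mapping still
         * consumes a slice of the process's virtual address space. */
        for (;;) {
            void *p = mmap(NULL, chunk, PROT_NONE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (p == MAP_FAILED)
                break;
            total += chunk;
        }

        printf("reserved %zu MiB of address space before mmap failed\n",
               total / (1024 * 1024));
        return 0;
    }
    ```

    On a 64-bit build the same loop reserves terabytes before anything stops it, which is exactly the headroom consumers eventually turned out to want.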




  • I’ve woken myself up from several unpleasant dreams and nightmares before by literally just going “fuck this, I’m out.”

    I think I’m often aware that I’m dreaming, but I don’t really lucid dream because my dreams are generally more interesting than anything I could consciously come up with anyway. So more often than not I’m just content to be along for the ride.




  • Problem is, AI companies think they can solve all of LLMs’ current problems if they just have more data, so they buy or scrape it from everywhere they can.

    That’s why you hear every day about yet another social media company penning a deal with OpenAI. That, and greed, is why Reddit started charging out the ass for API access and killed off third-party apps: those same APIs made it easy to scrape data for LLMs. Why give that away for free when you can charge a premium for it? Forcing more users onto the official, ad-monetized apps was just a bonus.


  • These models are nothing more than glorified autocomplete algorithms, parroting responses to questions that already existed in their training data.

    They’re completely incapable of critical thought or even basic reasoning. They only seem smart because people tend to ask the same stupid questions over and over.

    If they receive an input that doesn’t correlate strongly with their training data, they just output whatever bullshit comes closest, whether it’s true or not. Which makes them truly dangerous.

    And I highly doubt that’ll ever be fixed because the brainrotten corporate middle-manager types that insist on implementing this shit won’t ever want their “state of the art AI chatbot” to answer a customer’s question with “sorry, I don’t know.”

    I can’t wait for this stupid AI craze to eat its own tail.