• 4 Posts
  • 20 Comments
Joined 3 years ago
Cake day: June 9th, 2023

  • RDP

    How do you approach RDP? Do you have multiple monitors at all? Is your approach scriptable? I ask because I can easily access my machines like so:

    exec xfreerdp3 /u:<user> /p:<pass> /v:<address> +f +clipboard /drive:/home/<user>,Z: /drive:/,Y: -grab-keyboard /monitors:0,1 /multimon

    This can be added to a script that also checks the state of the target machine, boots it via my IPMI console if necessary, and waits until the machine is ready to log in. And, as you’ll note, I can specify which monitors I’d like to provide for the connection. Disabling keyboard grabbing with -grab-keyboard means a local keyboard shortcut can still minimise the remote session, and you’ll note the mapped drives too. This is pretty much the lowest level of functionality I’m after. If that can be replicated on Wayland, that’s at least one hurdle down.
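
    Stitched together, the wrapper looks roughly like this - a sketch rather than anything exact, with the BMC address and credentials as placeholder assumptions, and ipmitool and nc standing in for whatever IPMI client and port check you prefer:

    #!/usr/bin/env bash
    # Sketch: power the target on via IPMI if needed, wait for RDP, then connect.
    # <bmc-address>, <bmc-user>, <bmc-pass> are placeholders, like <user>/<pass>/<address> above.
    HOST=<address>
    if ! ipmitool -I lanplus -H <bmc-address> -U <bmc-user> -P <bmc-pass> power status | grep -q "is on"; then
        ipmitool -I lanplus -H <bmc-address> -U <bmc-user> -P <bmc-pass> power on
    fi
    # Poll the RDP port until the machine is ready for a login.
    until nc -z -w 2 "$HOST" 3389; do sleep 5; done
    exec xfreerdp3 /u:<user> /p:<pass> /v:"$HOST" +f +clipboard \
        /drive:/home/<user>,Z: /drive:/,Y: -grab-keyboard /monitors:0,1 /multimon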

  • You don’t need something huge. Remove the DVD drive and the old mechanical drive from a USFF machine, stick a pair of 4TB drives in it, and put a basic Debian image on it. Configure SMB with a shared folder or two, and voila: you now have a comfortable NAS for maybe £20 plus drives. Add a SATA PCIe card, if you can find a decent low-profile one, and that’s an extra four or even six drives. It won’t give you top-of-the-range performance, but it will be perfectly serviceable for a homelab.
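
    The SMB side really is only a few commands on a fresh Debian install. As a rough sketch - the share name, path, and <user> here are illustrative, not gospel:

    sudo apt install samba
    sudo mkdir -p /srv/share && sudo chown <user>: /srv/share
    sudo smbpasswd -a <user>       # set an SMB password for the user
    sudoedit /etc/samba/smb.conf   # append the stanza below
    sudo systemctl restart smbd

    [share]
        path = /srv/share
        read only = no
        valid users = <user>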

  • I, too, am aware of zlib and librera reader. But there’s a difference between a curated selection of books in physical form in front of you, and deciding to read a book on an electronic device. The former stops the reader-to-be from abandoning the idea in the face of too wide a selection, and prevents other electronic distractions from asserting themselves over the reading material - I refer here to notifications that flash over the current window.

    Plus, there are plenty of people who choose not to read, despite the option being available. Having the option physically there in front of you is far more encouraging, in my opinion. And once they start reading, they might go on to seek titles outside of that curated selection. Great success!

  • Artificial merely implies man-made, as opposed to naturally developed, IMO.

    As for the hypothesis, a few years ago I took a crack at designing a system like that as an on-paper exercise. The vast majority of it was just… pushing data around and using existing data to suggest new data. Not all that dissimilar to how human beings think, to be honest. The big hurdle was optimisation and context: allowing the platform to “grow” without letting it metastasise, and without improperly restricting it. There are some hardware limitations to consider too - a storage backbone, for one, and interlinking every thread as opposed to having them wholly isolated from each other. There’s also the potential for thread interruption, which as far as I’m aware is not something any microcode package supports.

    But despite all that, I’m still fairly certain one could build an approximation thereof. The complexity of inter-stimuli input (read: input from audio, visual, and potentially sensory endpoints, replicating vision, hearing, and touch) isn’t to be underestimated, though.

    Perhaps one day I might take a crack at it - but it’s also a morally grey area with quite a few caveats to it, so… uh… maybe.