

Wait till they find out who has been onboarded to OpenAI already, xd


I love this, thank you for sharing. It would be nice if you linked directly to your repository next time instead of whatever this link was. I’ll add it below just for others.
https://memory-graph.com/#codeurl=https%3A%2F%2Fraw.githubusercontent.com%2Fbterwijn%2Fmemory_graph%2Frefs%2Fheads%2Fmain%2Fsrc%2Fbin_tree.py&timestep=0.2&play=


Any time, we’re all in this together after all. I needed to learn some of this myself, and if anybody comes by with follow-up knowledge, it’s welcome.
As far as Wayland goes, source clients (the application you copy from) can clear the clipboard without stealing focus. Note that if you copy from another client, the source client changes to that new one, and the password manager will no longer be able to clear your clipboard. This behavior is easily verifiable.
Unfortunately I am unsure whether a focused application obtains access to clipboard content immediately, or whether the user needs to initiate some sort of Ctrl+V action. This would need to be followed up on. However, that is why I give my password manager a 10 second timeout to clear the clipboard; honestly it could be shorter. But I do not alt-tab through a bunch of random applications in the meantime. Typically I go straight to where the authentication is needed, and nowhere else, meaning my clipboard should be cleared of sensitive data before I ever give clipboard access to another app.
This is better than X11, which effectively broadcasts your clipboard content to the entire ecosystem.
So where we’re at: 1) do apps get access to the clipboard immediately upon focus, and 2) what is happening where some applications appear to have hacked a way to steal focus?


I would always recommend that normal users not stray from the most beaten path. These more technically appealing distros are for advanced users with specific purposes and use cases in mind - in particular, users who adjust their environment frequently (maybe for software development) and want to ensure system stability at the same time.


https://emersion.fr/blog/2020/wayland-clipboard-drag-and-drop/
The source client needs to be the currently focused application. This prevents background applications from unexpectedly changing the clipboard contents right before the user pastes it.
The destination client needs to listen to wl_data_device events to keep track of the current clipboard contents. The destination client will only receive such events if it’s focused. This prevents background applications from arbitrarily reading the clipboard (which may contain sensitive data such as passwords).
The graphical application must be in focus to gain access to the clipboard, and Wayland is designed to prevent such focus stealing*. As mentioned earlier, password managers such as KeePassXC will automatically clear your clipboard after copying sensitive data - this is configurable behavior. This means that no other application should have the opportunity to steal focus, and your clipboard should be cleared of sensitive content before you ever open up a privacy-dismissive application that wants to surveil your clipboard.
*I need to do a bit more digging to find further verification of this focus stealing prevention behavior of wayland, and if I can find that information I will cite here.
I noticed you were discussing “notepad” - are you talking about the Windows operating system? I cannot speak to its clipboard management, unfortunately. That said, I would not run Windows if I cared about privacy. The erosion of privacy destroys any semblance of security, eh?
Edit: Following up, I did find some information from the Vim text editor that discusses stealing focus in Wayland. You can read about it here: https://vimhelp.org/wayland.txt.html#wayland-focus-steal . So far, it appears that applications do not have access to the clipboard unless focused, which is by design on Wayland’s part. However, Vim showcases a way to steal focus and thus subvert this security effort. It does note that if you are in a fullscreen mode then it cannot steal focus. Anyway, more reading to be done still. There appear to be methods of detecting that an application is doing this “focus stealing”:
Note that this method can have several side effects from the result of focus stealing. For example, if you have a taskbar that shows currently opened apps in your desktop environment, then when Vim attempts to steal focus, it may “flicker,” as if a window was opened then immediately closed after.
So with this behavior in mind, and with the way the clipboard works, no application knows what contents are in the clipboard until it is in focus. Therefore an application would either have to “guess” when sensitive content is available, or steal focus quite often - the former being unlikely, and the latter most likely detectable by the user.


This indiscriminate targeting, as the FBI and White House security officials have previously noted, allowed Beijing’s snoops to geo-locate millions of mobile phone users, monitor their internet traffic, and, in some cases, record their phone calls.
Oh no! If anybody’s going to spy on our civilians, it’s going to be us. Sarcastic snide remarks aside, it is good to know the government potentially stopped a major national cyber attack, even if it was about 5 years after Salt Typhoon’s digital reconnaissance began.


Browsers are insanely complex pieces of software with millions of lines of code. Clipboards have a much smaller attack surface than an entire web browser. And if you’re concerned with security you should be using Wayland, which I believe mitigates a lot of input access for applications (I am trying to find some citations on this). Furthermore, password managers will automatically clear the clipboard after a few seconds as well. If you believe you are already running malicious applications that monitor all of your data like some sort of keylogger, then your web browser is already lost.


I am once again asking what are the benefits of integrating your password manager into your browser? You can do all of the link verification you want outside of password manager integration.


I also greatly appreciate blogs and articles. Still, that is not to say videos are useless; there are still a lot of great technical videos out there.


I understand your sentiment. I wonder, if one were to “recruit” a lot of popular youtubers to stream in a decentralized fediverse manner by offering to make a simple hardware solution for them, would it encourage others to follow? How much of an investment would that be? Presumably you could keep the hardware “product” going and if more catch on they could also be directed towards your solution as helping them get in (quicker to market) as the network effect grows.
One would need to address the way people could monetize themselves aside from getting direct viewer support via subscriptions or donations.


You can simply go to the official Tor or I2P pages and read the details, then do follow-up research from there. With I2P there are actually a few parallel implementations under active development: the original is in Java, there’s a C++ one, and then another one I can’t remember. Very new implementations are being made in Rust as well.


Thanks for volunteering to help the network in good faith! I think it is much easier for normal people to get an I2P router up and running and help the entire network than to set up a Tor node. And with the use of unidirectional inbound/outbound tunnels (which you can set at up to 5 nodes each), you could theoretically have a 10 node round trip of intermediate hops between you and a server, as opposed to Tor, which uses a bidirectional circuit.
Some gracious users set up what is known as an “outbound proxy”, which acts like a Tor exit node to the clearnet. Personally I would never host one of these, as I am hesitant to have anonymous entities make clearnet requests through me and be held liable. But as an I2P router in somebody else’s tunnels, you can imagine yourself as a road on a map that other roads connect to. The road isn’t responsible for what people are carrying on it, or what their destination might be; that would be an unreasonable expectation to place on any road. In fact, the router doesn’t have any idea what the exact destination is, even if it’s the last node in the tunnel - simply because during the encryption/decryption process it only knows the next address to hit!
For example, say an i2p router hosts a git server. It could be the destination for packets, where clients are using it for version control, or it could act as a node in somebody’s tunnels to connect with other servers/clients. From the network view, I do not believe you can tell, and that is pretty neat.
In the effort of being transparent: I do believe an outstanding problem is stream isolation. You should be able to do “soft resets” that reset your identity, although I forget the exact technical I2P term for this. This concern applies to clients, not to simply leaving a router up. So if you intend to use that router to access the network yourself, it would be a good idea to do that soft or hard reset occasionally, depending on your threat model - as often as you like, or never at all.
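For reference, the tunnel lengths mentioned above are just per-tunnel options on whatever client or server tunnel you configure. The option names below are the I2CP ones from the I2P docs; the values are only illustrative guesses, so check your router console for the actual defaults:

```
# hops per tunnel, in each direction
inbound.length=3
outbound.length=3
# number of parallel tunnels to build for redundancy
inbound.quantity=2
outbound.quantity=2
```

Longer tunnels buy more anonymity at the cost of latency and reliability, since every added hop is another router that can be slow or go down.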


If you could project into the future, you might see that VPNs are a dwindling non-solution to the problem. As this unsettling trend makes waves across jurisdictions around the world, every VPN’s IP will be subject to these new controls. Eventually normies might need to understand technologies like Tor or I2P. The only problem is, if things get really bad they might ban the use of both outright. I’m not sure if I2P can be detected as easily as Tor, but I do know Tor users can be detected easily, which is why, in a censoring country, you need to use a special set of unlisted entry guards (Tor calls these bridges). I2P might fare better because every node in the network is a router - none are specifically entry, middle, or exit nodes. The only thing I can think of is some form of deep packet inspection, but if everybody runs an I2P router, then simply making requests to other nodes is not enough to determine it’s I2P traffic. It’s just distributed computing, the way the internet was intended!


Guess this is what happens when a post in the lemmy verse gets about 100 comments.


Can’t people just make new accounts? I have no experience with Arch, but it sounds like the AUR is set up exactly to be a low barrier to entry. Essentially, it seems the community needs to address this with proper education about not blindly trusting packages and doing follow-up research. Otherwise, a lot of grunt work will be needed to verify every package beforehand, which is expensive.


I appreciate reading comments that are well written. If an AI was used to create the argument in its entirety, or edit it, so be it. What matters is content and context. If it’s eloquent, without being obnoxiously verbose, that’s a bonus. It doesn’t feel like a lot of filler bullshit was added. ETA: I want to clarify, flooding the web with AI bots to astroturf agendas is not cool.


I appreciate Rust as much as the next dev. But you can define your own types in C just as well? And with the proper warnings-as-errors setup (-Wall -Werror) in place, any warning is an error, and implicit conversions should probably be a warning, right?
ETA: Just tried with the following C code and could not get it to fail with gcc.
typedef int t_0;
typedef long t_1;
t_0 test() {
    t_1 foo = 1;
    return foo;
}
Tried with gcc -Wall -Wextra -Wpedantic -Werror and it compiled just fine.


Let’s say I do audit a specific version of a dependency I use. How do I communicate to others that I’ve done this? Why would anyone trust me anyway? I’ve mentioned that I’m not an infosec expert; how much is my audit worth?
Here is an example of how outsourced/decentralized audits can be reported to at least a centralized organization: https://rustsec.org/advisories/ . And you can install cargo-audit (cargo install cargo-audit), which will report the nature of your Rust crate’s dependencies and whether their selected versions are under any active advisory.
- you’ll only see the malicious activity if you hit the branch path of the attack; looking for it this way is like doing unit tests by running the code and hoping you get 100% code coverage
In this manner, I believe at the end of the day it’s a sort of attempt to solve the halting problem - which cannot be done. This sort of research would probably take sophisticated AI agents to pore through code and detect attack vectors, like unsanitized command parsing.
Otherwise, a general code checker like clang-tidy won’t throw a red flag for a program that correctly parses your $HOME directory and sends it to a random server - that is valid code, after all - unless there is a technology to clearly define sandboxing constraints before compilation (or at runtime). That is why I gave the example of using seccomp and Landlock to clearly define runtime behavior. Maybe there are better solutions where you generate, say, a CSV table of what you whitelist, and at compile time or runtime those constraints get satisfied. Take unzipping a file: the program only knows at runtime which file it needs to read and parse, so of course at runtime it will have to read that file (which you cannot know at compile time). I don’t want to word vomit here, so I’ll leave it at that.
- These supply chain attacks can be sophisticated, and I wouldn’t be surprised if you can tell that you’re running in firejail and just not execute the code
Yes, certainly. Software can determine the state of its environment. Look at web browsers for this - it is practically impossible to get away from the fingerprinting problem. The following is my speculation and may be incorrect: it is practically impossible unless environment standards are made to address it. For example, Firefox forcing every installation to use uBlock Origin (or, more extreme, killing all JavaScript). On the one hand it would break half of the web, but the standard would leave a much smaller set of identifying features across the population. The same could potentially be done with running processes in an operating system. Operating systems and web browsers are both highly complex systems, and I cannot say I know better than the folks making big decisions there; I’m only speaking in ideals.
- This approach isn’t useful for programs which depend on network connections, or access to secrets - some programs need both.
I feel as though my replies have been too long already on subjects I’m no expert on. Networking is a whole other beast to tackle: even if you make valid connection attempts with proper code integrity, traffic can still be intercepted via MITM, or their servers could be compromised and used to steal your data or attack your system from there.
The main problem is that we’re in the decade of Linux, and a whole population of people are coming in who are not nerds. They’re not going to be running strace or firejail. How are we going to make OSS secure for these people?
I don’t want to use the term “fear mongering”, but I think you may be a bit too concerned here. I don’t think the average Joe or Jill is going to be interacting with all sorts of random obscure FOSS projects the way more technical users who program or experiment with services themselves do. They will stick to highly vetted and supported projects, and if those get attacked then lots of people will be affected and monitoring the situation. Normies probably stick around big corporate spaces anyway (YouTube, Google, Facebook, Twitter, Steam). All of these places deal with attacks, of course, regardless of users being on Linux or Windows.
Windows had a vulnerability not so long ago where sending a malformed IPv6 packet got you remote code execution on the host! Just because it’s not FOSS doesn’t mean it won’t get attacked.


I’m far from an expert, but we know it takes a village.
As far as analysis goes, I can think of something quite simple (strictly speaking it’s dynamic, not static): running strace on your processes to see what sort of syscall and filesystem access the process needs (in a trusted scenario - a maintainer’s burden). Once that analysis is done, applying the proper security features (on Linux: seccomp filtering for syscalls and Landlock for filesystem access) minimizes risk.
A caveat to this, however, can be seen in the xz attack. The attacker sabotaged the build so the Landlock feature would not compile or link, which left them the attack surface they needed. So the project was practicing good security, but it means nothing if maintainers cannot audit their own commits. General static analysis of source code is, I believe, what you were going for. In that case, many compilers come with verbose static analysis features - clang-tidy is one, for example, and Rust is already quite verbose. Perhaps with more rigid CI/CD restrictions enforced via these analysis tools, such commits would not make it through?
From my discussions with C++ folk, auto is just part of the “modern” way of doing C++, paired with the -> trailing return type. Perhaps including that -> return type negates this problem? It’s still strange to me; feels more like Rust.