

I set the timestamps of my music to its original release date, so that I can sort it chronologically… OK, I don’t actually do that, but now I’m tempted


My point was that taking a picture of a box of rocks doesn’t prove the rocks were there before you opened the box. And if you disagree, explain your reasoning.
Those rocks don’t look igneous to me, so they have most likely been there for millions, if not billions, of years before they opened the box


How the fuck can it not recover the files?
Undeleting files typically requires low-level access to the drive containing the deleted files.
Do you really want to give an AI, the same one that just wiped your files, that kind of access to your data?
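For the curious, “low-level access” here means reading the raw block device rather than files within a filesystem. A minimal Rust sketch of that first step (the device name /dev/sda is an assumption, and this needs root to run):

    use std::fs::File;
    use std::io::Read;

    // Open the raw block device itself, not a file stored on it.
    // A real recovery tool would go on to scan for filesystem
    // metadata or known file signatures from here.
    fn main() -> std::io::Result<()> {
        let mut disk = File::open("/dev/sda")?;
        let mut sector = [0u8; 512];
        disk.read_exact(&mut sector)?;
        println!("first sector starts: {:02x?}", &sector[..16]);
        Ok(())
    }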


What do you want me to write?
To meet the bar set by onlinepersona, you’d need to write safe C code, not just some of the time, but all of the time. What you appear to be proposing is to provide evidence that you can write safe C code some of the time.
It’s like if somebody said “everyone gets sick!”, and some other person stepped up and said “I never get sick. As proof, you can take my temperature right now; see, I’m healthy!”. Obviously, the evidence being offered is insufficient to refute the claim being made by the first person


I’m surprised that you didn’t mention Zig. It seems to me to be much more popular than either C3 or D’s “better C” mode.
It is “FUD” if you ask why it’s still const by default.
I’d be curious if you could show any examples of people asking why Rust is const by default being accused of spreading “FUD”. I wasn’t able to find any such examples myself, but I did find threads like this one and this one, that were both quite amiable.
But I also don’t see why it would be an issue to bring up Rust’s functional-programming roots, though as you say the language did change quite a lot during its early development, before its 1.0 release. IIRC, the first compiler was even implemented in OCaml. The language’s Wikipedia page goes into more detail, for anyone interested. Or you could read this thread in /r/rust, where a bunch of Rust users try to bury that sordid history by bringing it to light
Makes memory unsafe operations ugly, to “disintensivise the programmer from them”.
From what I’ve seen, most unsafe Rust code doesn’t look much different from safe Rust code. See for example the Vec implementation, which contains a bunch of unsafe blocks. Which makes sense, since unsafe Rust only adds a few extra capabilities compared to safe Rust. You can end up with gnarly code of course, but that’s true of any non-trivial language. Your code could also get ugly if you try to be extremely granular with unsafe blocks, but that’s more of a style issue, and poor style can make code in any language look ugly.
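To illustrate, here is a simplified Vec-style push; not the real standard-library code (allocation is elided), just a sketch of how an unsafe block blends in with the surrounding safe code:

    struct MyVec<T> {
        ptr: *mut T,
        len: usize,
        cap: usize,
    }

    impl<T> MyVec<T> {
        fn push(&mut self, value: T) {
            if self.len == self.cap {
                self.grow();
            }
            // SAFETY: after grow(), len < cap, so this write stays
            // within the allocation.
            unsafe {
                self.ptr.add(self.len).write(value);
            }
            self.len += 1;
        }

        fn grow(&mut self) {
            // Reallocation details elided; the real Vec delegates
            // this to RawVec.
            unimplemented!()
        }
    }

Aside from the unsafe keyword and the SAFETY comment, it reads like any other Rust.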
Has a pretty toxic userbase
At this point it feels like an overwhelming majority of the toxicity comes from non-serious critics of Rust. Case in point, many of the posts in this thread


That bug does sound bad, but it is not clear to me how a BTRFS-specific bug relates to it supposedly being more difficult to recover (or back up) when using whole-disk encryption with LUKS. It seems like an entirely orthogonal issue to me


What makes recovery and backup a nightmare to you?
I’ve been running full-disk encryption for many years at this point, and recovery in case of problems with the kernel, bootloader, or anything else that renders my system inoperable is the same as before I started using full-disk encryption:
I boot up a live CD and then fix the problem. The only added step is unlocking my encrypted drive(s), but these days that typically just involves clicking on the drive in the file manager and then entering my password. I don’t even have to drop into a console for that.
I am also not sure why backups would be any different. Are you using something that images entire devices?


Astral clearly are using semantic versioning, as should be obvious if you read the spec you linked.
In fact, one of the examples listed in that spec is 1.0.0-alpha.1.
ETA: It should also be noted that ty is a Rust project, and follows the standards for versioning in that language:
https://doc.rust-lang.org/cargo/reference/manifest.html#the-version-field


That’s not quite true: Yes, your $99 license is a lifetime license, but that license only includes 3 years’ worth of updates. After that you have to pay $80 if you want another 3 years’ worth of updates. Of course, the alternative is just putting up with the occasional nag, which is why I still haven’t gotten around to renewing my license


I’ve started converting my ‘master’ branches to ‘main’, because my muscle memory has decided that ‘main’ is the standard name. And I don’t have strong feelings either way
No gods, no masters


It’s unfortunate that it has come to this, since bcachefs seems like a promising filesystem, but it is also wholly unsurprising: Kent Overstreet seemingly has a knack for driving away people who try to work with him


For example, the dd problem that prompted all this noise is that uutils was enforcing the full block parameter in slow pipe writes while GNU was not.
So, now uutils matches GNU and the “bug” is gone.
No, the issue was a genuine bug:
The fullblock option is an input flag (iflag=fullblock) that ensures dd always reads a full block’s worth of data before writing it. In its absence, dd only performs count reads, and hence might read less than blocksize × count worth of data. That is according to the documentation of every other implementation I could find (uutils currently lacks documentation), and nothing in any of them suggests that dd might not write out the data it did read when fullblock is absent.
Until recently it was also an extension to the POSIX standard, with none of the tools that I am aware of behaving like uutils, but as of POSIX.1-2024 the option is described as follows (source):
iflags=fullblock
Perform as many reads as required to reach the full input block size or end of file, rather than acting on partial reads. If this operand is in effect, then the count= operand refers to the number of full input blocks rather than reads. The behavior is unspecified if iflags=fullblock is requested alongside the sync, block, or unblock conversions.
I also cannot conceive of a situation in which you would want a program like dd to silently drop data in the middle of a stream, certainly not as the default behavior, so conditioning writes on this flag didn’t make any sense in the first place
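Since uutils is written in Rust, here is a sketch of what fullblock semantics amount to there; a single read() on a pipe may legally return fewer bytes than requested, so the implementation has to loop (names are illustrative, not taken from uutils):

    use std::io::{self, Read};

    // Read until `buf` is full or EOF, i.e. iflag=fullblock behavior.
    // Without this loop, a short read from a pipe would be counted as
    // a whole block, and data could be silently lost when a count=
    // limit is in effect.
    fn read_full_block<R: Read>(src: &mut R, buf: &mut [u8]) -> io::Result<usize> {
        let mut filled = 0;
        while filled < buf.len() {
            match src.read(&mut buf[filled..])? {
                0 => break, // EOF
                n => filled += n,
            }
        }
        Ok(filled)
    }

Without the flag, each read() simply counts as one block; the data that was read should still be written out, and conditioning the writes themselves on the flag is, as I understand it, exactly what uutils got wrong.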


This is interesting, but drawing conclusions from only two measurements is not reasonable, especially when the time span measured is on the order of a few ms. For example, the two instances of clang might not be running at the same clock frequency, which could easily explain away the observed difference.
Plus, you could easily generate a very large number of functions, to increase the amount of work the compiler has to do. So I did just that (N = 10,000), using the function from the article, and used hyperfine to perform the actual benchmarking.
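A sketch of such a generator (the article’s actual function isn’t reproduced here; any small function with an inferable return type does the job, and you swap int for auto to produce the second test file):

    use std::fs::File;
    use std::io::{BufWriter, Write};

    fn main() -> std::io::Result<()> {
        let mut out = BufWriter::new(File::create("test.cpp")?);
        for i in 0..10_000 {
            // Each function is distinct, so nothing can be deduplicated.
            writeln!(out, "int f{i}(int x) {{ return x + {i}; }}")?;
        }
        Ok(())
    }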
int:
Benchmark 1: clang -o /dev/null test.cpp -c
  Time (mean ± σ):      1.243 s ±  0.018 s    [User: 1.192 s, System: 0.050 s]
  Range (min … max):    1.221 s …  1.284 s    10 runs

auto:
Benchmark 1: clang -o /dev/null test.cpp -c
  Time (mean ± σ):      1.291 s ±  0.015 s    [User: 1.238 s, System: 0.051 s]
  Range (min … max):    1.274 s …  1.320 s    10 runs
So if you have a file with 10,000 simple functions, using auto instead of explicit return types increases your compile time by ~4%.
I’d worry more about the readability of auto, than about the compile time cost at that point


Besides this change not breaking user space, the “don’t break user space” rule has never meant that the kernel cannot drop support for file systems, devices, or even entire architectures


What you are describing is something I would label “skepticism of science”, rather than “scientific skepticism”.
So out of curiosity, I did a bit of digging. As andioop mentioned, the term “scientific skepticism” has been used to denote a scientifically minded skepticism for a long time. For example, the Wikipedia article on Scientific Skepticism dates back to 2004 and uses this meaning. Similarly the well known skeptic (pro-science/anti-pseudoscience) wiki, RationalWiki, has linked the scientific method and “scientific skepticism” as far back as 2011, and currently straight up equates skepticism with scientific skepticism. You can also find famous skeptics like Michael Shermer using the term back in the early 2000s, in his case in his ‘The Skeptic Encyclopedia of Pseudoscience’, published in 2002. It was also used in papers such as this sociology paper by Owen-Smith, 2001. This is the meaning of the term that I am familiar with.
However, since about 2020, there has been more use of the term “scientific skepticism” as a parallel to “climate skepticism” and “vaccine skepticism”. For example, this paper by Ponce de Leon et al. is just one of many I could find via a quick Google Scholar search. This, I take it, is how you use the term.
Personally, I’m probably just gonna keep using “scientific skepticism” to mean “scientifically minded skepticism”, but will keep in mind that it can also mean “skepticism of science”


Wouldn’t scientists be the ones employing “scientific skepticism”?


The issues are listed in Supplementary Table S141 (p. 75 in the SI; 10 issues) and in https://github.com/kobihackenburg/scaling-conversational-AI/blob/main/issue_stances.csv (697 issues)


I’m sure that these are gems, but they are not hidden; going by SteamDB, these games are all in the top 3% most popular games on Steam, based on the total number of reviews:
Almost forgot: I don’t think that I can name a truly hidden gem, but Deathbulge is pretty good and has a mere 360 reviews on Steam. Which is still more than the vast majority of indies, of course.