  • What do you want me to write?

    To meet the bar set by onlinepersona, you’d need to write safe C code, not just some of the time, but all of the time. What you appear to be proposing is to provide evidence that you can write safe C code some of the time.

    It’s as if somebody said “everyone gets sick!”, and someone else stepped up and said “I never get sick. As proof, you can take my temperature right now; see, I’m healthy!”. The evidence being offered is obviously insufficient to refute the first person’s claim.


  • I’m surprised that you didn’t mention Zig. It seems to me to be much more popular than either C3 or D’s “better C” mode.

    It is “FUD” if you ask why it’s still const by default.

    I’d be curious if you could show any examples of people asking why Rust is const by default being accused of spreading “FUD”. I wasn’t able to find any such examples myself, but I did find threads like this one and this one, both of which were quite amiable.

    But I also don’t see why it would be an issue to bring up Rust’s functional-programming roots, though as you say the language changed quite a lot during its early development, before the 1.0 release. IIRC, the first compiler was even implemented in OCaml. The language’s Wikipedia page goes into more detail, for anyone interested. Or you could read this thread in /r/rust, where a bunch of Rust users try to bury that sordid history by bringing it to light.

    Makes memory unsafe operations ugly, to “disintensivise the programmer from them”.

    From what I’ve seen, most unsafe Rust code doesn’t look much different from safe Rust code. See for example the Vec implementation, which contains a bunch of unsafe blocks. That makes sense, since unsafe Rust only adds a few extra capabilities on top of safe Rust. You can end up with gnarly code of course, but that’s true of any non-trivial language. Your code can also get ugly if you try to be extremely granular with unsafe blocks, but that’s more of a style issue, and poor style can make code in any language look ugly.

    Has a pretty toxic userbase

    At this point it feels like the overwhelming majority of the toxicity comes from non-serious critics of Rust. Case in point: many of the posts in this thread.



  • What makes recovery and backup a nightmare to you?

    I’ve been running full-disk encryption for many years at this point, and recovering from problems with the kernel, bootloader, or anything else that renders my system inoperable works the same as it did before I started using full-disk encryption:

    I boot up a live CD and then fix the problem. The only added step is unlocking my encrypted drive(s), but these days that typically just involves clicking on the drive in the file manager and entering my password; I don’t even have to drop into a console for that.
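
    For anyone curious, the manual route from a live environment is only a couple of commands. A minimal sketch, assuming a LUKS-encrypted root partition; the device names below are placeholders, not anything from my actual setup:

      # Unlock the LUKS container (/dev/sda2 is a placeholder device)
      cryptsetup open /dev/sda2 cryptroot

      # Mount the unlocked mapping and chroot in to fix the bootloader,
      # kernel, or whatever else broke (bind-mount /dev, /proc and /sys
      # first if the repair needs them)
      mount /dev/mapper/cryptroot /mnt
      chroot /mnt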

    I am also not sure why backups would be any different. Are you using something that images entire devices?


  • For example, the dd problem that prompted all this noise is that uutils was enforcing the full block parameter in slow pipe writes while GNU was not.

    So, now uutils matches GNU and the “bug” is gone.

    No, the issue was a genuine bug:

    The fullblock option is an input flag (iflag=fullblock) that ensures dd always reads a full block’s worth of data before writing it. In its absence, dd performs only count reads, and so might read less than blocksize × count worth of data. That is according to the documentation of every other implementation I could find (uutils currently lacks documentation), and nothing in any of them suggests that, without fullblock, dd might simply not write the data it did read.

    Until recently it was also an extension to the POSIX standard, with none of the tools that I am aware of behaving like uutils, but as of POSIX.1-2024 the option is described as follows (source):

    iflags=fullblock
    Perform as many reads as required to reach the full input block size or end of file, rather than acting on partial reads. If this operand is in effect, then the count= operand refers to the number of full input blocks rather than reads. The behavior is unspecified if iflags=fullblock is requested alongside the sync, block, or unblock conversions.

    I also can’t conceive of a situation in which you would want a program like dd to silently drop data in the middle of a stream, certainly not as the default behavior, so conditioning writes on this flag didn’t make any sense in the first place.
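
    For anyone who wants to see the difference with GNU dd: when reading from a pipe, each read() returns at most one pipe buffer’s worth of data (64 KiB on Linux), so without the flag dd transfers far less than bs × count:

      # Without fullblock: each of the 10 counted reads returns at most
      # one pipe buffer of data, so far less than 10 MiB comes through
      yes | dd of=/dev/null bs=1M count=10

      # With fullblock: dd keeps reading until each 1 MiB block is full,
      # so exactly 10 MiB is copied
      yes | dd of=/dev/null bs=1M count=10 iflag=fullblock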


    This is interesting, but drawing conclusions from only two measurements is not reasonable, especially when the measured times are on the order of a few milliseconds. For example, the two clang invocations might not have been running at the same clock frequency, which alone could explain away the observed difference.

    Plus, you can easily generate a very large number of functions to increase the amount of work the compiler has to do. So I did just that (N = 10,000), using the function from the article, and used hyperfine to perform the actual benchmarking.
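
    The generation step is just a loop; here is a minimal sketch, with a stand-in function body since the article’s actual function isn’t reproduced in this comment:

      # Emit N trivial functions into test.cpp; swap `auto` for `int`
      # to produce the baseline variant (the body is a placeholder)
      for i in $(seq 1 10000); do
          echo "auto f$i(int a, int b) { return a + b; }"
      done > test.cpp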

    • With int
      Benchmark 1: clang -o /dev/null test.cpp -c
        Time (mean ± σ):      1.243 s ±  0.018 s    [User: 1.192 s, System: 0.050 s]
        Range (min … max):    1.221 s …  1.284 s    10 runs
      
    • With auto
      Benchmark 1: clang -o /dev/null test.cpp -c
        Time (mean ± σ):      1.291 s ±  0.015 s    [User: 1.238 s, System: 0.051 s]
        Range (min … max):    1.274 s …  1.320 s    10 runs
      

    So if you have a file with 10,000 simple functions, using auto instead of int increases the compile time by ~4% (1.291 s vs 1.243 s).

    I’d worry more about the readability of auto than about the compile-time cost at that point.



  • What you are describing is something I would label “skepticism of science”, rather than “scientific skepticism”.

    So out of curiosity, I did a bit of digging. As andioop mentioned, the term “scientific skepticism” has been used to denote a scientifically minded skepticism for a long time. For example, the Wikipedia article on Scientific Skepticism dates back to 2004 and uses this meaning. Similarly, the well-known skeptic (pro-science/anti-pseudoscience) wiki, RationalWiki, has linked the scientific method and “scientific skepticism” as far back as 2011, and currently straight up equates skepticism with scientific skepticism. You can also find famous skeptics like Michael Shermer using the term back in the early 2000s, in his case in The Skeptic Encyclopedia of Pseudoscience, published in 2002. It was also used in papers such as this sociology paper by Owen-Smith (2001). This is the meaning of the term that I am familiar with.

    However, since about 2020, there has been more use of the term “scientific skepticism” as a parallel to “climate skepticism” and “vaccine skepticism”. For example, this paper by Ponce de Leon et al. is just one of many I could find via a quick Google Scholar search. This, I take it, is how you use the term.

    Personally, I’m probably just gonna keep using “scientific skepticism” to mean “scientifically minded skepticism”, but will keep in mind that it can also mean “skepticism of science”.