

One of Zed’s focuses is AI integration. Other than that, I suppose it makes sense if you prefer having a dedicated application with its own UI instead of a terminal-based one.
I try to write comments whenever the code isn’t obvious on its own. A “never write comments” proponent might argue that you should never write code that isn’t obvious on its own, but that doesn’t always work in practice.
One of my most controversial gaming takes is that I like the first witcher game the most of the trilogy. There is a lot of jank and some cringeworthy parts, but overall it feels like a much tighter experience than the later games, notwithstanding some clearly undercooked sections. It takes a lot more cues from older rpgs in how it’s structured, and I suppose I might just have a soft spot for that.
To be fair, I’ve never gotten that far in the third witcher, so maybe I’d like it more if I played it enough to properly get into it. I just got kinda bored after a dozen or so hours, which is not a problem I had with the first witcher.
you can just do :r path/to/file directly
What are the odds that you’re actually going to get a bounty out of it? Seems unlikely that an AI would hallucinate an actually correct bug.
Maybe the people doing this are much more optimistic about how useful LLMs are for this than I am, but it’s possible that there’s something more malicious behind it.
Zuckerberg himself is probably constantly surrounded by yes men so maybe it won’t seem like that much of a difference to him
They are apparently open to dropping some features, but maybe not “90%” of them.
The developers are taking a “less is more” approach. This means that some features of the original sudo may not be reimplemented if they serve only niche, or more recently considered “outdated” practices.
I use wezterm. It’s more configurable than the windows terminal and also works on linux. It has an appropriately linux-y feel imo.
tldr is great. Sometimes you can’t remember the exact syntax for a certain command and just need a quick reminder.
It’s another c/c++ competitor along with rust and zig. https://odin-lang.org/
Don’t think it has anything to do with electron. VSCode is just the largest editor that people install extensions for, so it’s what makes the most sense to write malware for. If vim was more popular, I’m sure there would be more crypto mining extensions for that (I wonder how many there are? Surely more than zero?)
This article uses the term “parsing” in a non-standard way - it’s not just about transforming text into structured data, it’s about transforming more general data into more specific data. For example, you could have a function that “parses” valid dates into valid shipping dates: it returns an error if the input date is in the past, for instance, and otherwise returns a valid_shipping_date type. This type would likely be identical to a normal date, but it would carry extra semantic meaning and would help you leverage the type checker to make sure that this check actually gets performed.
Doing this would arguably be a bit overzealous, maybe it makes more sense to just parse strings into valid dates and merely validate that they also make sense as shipping dates. Still, any validation can be transformed into a “parse” by simply adding extra type-level information to the validation.
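To make the idea concrete, here’s a minimal Python sketch of what a “parse” with type-level information could look like. The names (ShippingDate, parse_shipping_date, schedule_shipment) are made up for illustration, not from the article:

```python
from dataclasses import dataclass
from datetime import date

# ShippingDate is structurally just a date, but constructing one proves
# the past-date check already ran; downstream code can then demand a
# ShippingDate instead of a raw date, and the type checker enforces it.
@dataclass(frozen=True)
class ShippingDate:
    value: date

def parse_shipping_date(d: date, today: date) -> ShippingDate:
    """Parse a valid date into a valid shipping date, or fail."""
    if d < today:
        raise ValueError(f"shipping date {d} is in the past")
    return ShippingDate(d)

def schedule_shipment(when: ShippingDate) -> str:
    # Can't be called with an unchecked date without a type error.
    return f"shipment scheduled for {when.value.isoformat()}"

print(schedule_shipment(parse_shipping_date(date(2030, 1, 2), date(2030, 1, 1))))
# prints: shipment scheduled for 2030-01-02
```

The interesting part is that the check and the evidence of the check live in one place; you can’t forget to validate and still get a ShippingDate.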
Why do you think it’s a bad idea? Both you and OP are in agreement that you should validate early, which seemed to be what your first comment was about. Is it encoding in the type system that the data has been validated that you disagree with?
If you want to test windows programs on linux, you’re probably going to want to do that in a virtual machine, or even on a spare computer just for testing on windows. Depending on how much you need to use excel, a virtual machine could be a good option for that as well, but if using Microsoft Excel™ is a big part of your job, maybe it makes more sense to just stay on Windows, at least for work.
fd is a lot faster than find. This might not matter if you’re searching through small directories, but if you’re working in a very large project it does make things a lot nicer.
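For anyone who hasn’t tried it, a rough side-by-side of the two (the fd invocation is left in a comment since fd may not be installed; the snippet itself only uses find so it runs anywhere):

```shell
# Set up a tiny demo tree to search.
demo=$(mktemp -d)
mkdir -p "$demo/src"
touch "$demo/src/main.rs" "$demo/notes.txt"

# fd equivalent of the find below: fd -e rs . "$demo"
find "$demo" -name '*.rs'
```

fd also respects .gitignore by default and skips hidden files, which is part of why it feels faster in big project trees.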
RFK is a C-nile
The US government recommending memory safe languages has really given people worms in their heads
Always squashing is a bit much for my taste, sometimes the individual commits have interesting information! Text from the MR in the merge commit is great though, maybe I should see if we can set that up with gitlab and propose that we start doing that at work.
Putting the message in git puts the information closer to the code, since the pr lives not in git itself but on the git forge. You can, for example, search the text of commit messages from the git cli, or come across the explanation when doing git blame. I sometimes write verbose commit messages and then use them as the basis for the text in the pr; that way the reviewer can see it easily, but it’s also available to anyone who might come across it when doing git archeology.
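Concretely, that kind of search looks like this. A sketch using a throwaway repo (the commit message is invented for illustration):

```shell
# Create a scratch repo with one verbose commit message.
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com commit -q \
  --allow-empty \
  -m "Fix cache invalidation" \
  -m "The old code kept stale entries because the TTL check ran before refresh."

# --grep searches the full message body, not just the subject line.
git log --all --grep="stale entries" --oneline
```

Nothing on the forge side can be queried this easily from a local checkout, which is the whole argument for keeping the explanation in the commit.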
If you find yourself writing regexes often enough that speeding up that process would increase your productivity by “a lot”, then you should get good at writing them yourself, which means practicing without an LLM. If it’s something that you don’t do often enough to warrant getting good at, then the productivity increase is negligible.
I think the main benefit here isn’t really productivity but developer comfort by saving them from having to step out of their comfort zone.