

It makes sense to create hardware dedicated to AI models, especially since it will make them run faster and cut down on the power requirements for datacenters.
It’s too bad they’re slightly straying from the current acronyms. I’ll have to remember to call them TSUs instead of TPUs.


Thanks, I missed that detail. It’s probably because of the “no class action” clause that this is a “mass arbitration”.
Unfortunately, that usually means that Google is paying a specific company to decide the outcome of the case. In this case, it looks like the American Arbitration Association has a contract with Google.
They’re supposed to be fair to both sides, but it’s been shown that they almost always rule in favor of the company that pre-selected them.
If anyone is in this situation, they will likely have a much better chance by convincing a judge to allow a different 3rd party to arbitrate the case.


Following up on this. I sent an email out to the team and got a response already.
To summarize, they would rather the solution keep working through updates (for security fixes), but they were willing to compromise: automatic updates could be disabled as long as users retain some way to manually update:
Initial email:
Hi,
Just a quick question about this point in the bounty:
- Restore the fridge to its original functionality, by removing any possibility of adverts being presented on the display (all other smart features must be retained)
When you say, “all other smart features must be retained” does this mean that the solution must retain the ability to allow the fridge to automatically update its firmware if Samsung pushes out a future update?
Would it be okay if, instead, we disabled the automatic update but still allowed the end user to manually update if they really wanted to?
Or would it be okay if the end user could just reapply the solution after an official firmware update?
Thanks,
<Redacted>
Response:
Hey
<Redacted>, Just chatted with the team, and we think it would be better for it to have updates, and optional ones sounds like a sensible compromise. We don’t want to sacrifice security for control. I hope that answers your question. Thanks!


Yeah, one of the main points of this project is to help them reform Sec 1201 of the DMCA.
As for how to do it, I’m not sure whether you would have to come up with something that keeps working even through an official Samsung update. From what I can tell, it would be enough to have it work with Home Assistant instead while blocking future updates. It’s definitely worth asking the bounty team for clarification on that point, though.


There is a class action “mass arbitration” against Google for this:
https://www.classaction.org/nest-thermostat-support-arbitration
Additionally, the Fulu Foundation has a bounty reward out for anyone who is able to get these working with something like Home Assistant.
The pot is currently at $12,856.00 https://bounties.fulu.org/bounties/nest-learning-thermostat-gen-1-2
In the U.S., since doing so would circumvent protection measures put in place on these devices, publishing how to do this would violate sec. 1201 of the DMCA, which carries a risk of 3-5 years in federal prison. You can still privately show the Fulu Foundation how it is done, and they will be able to use this information to help their case in their attempt to reform this law.
If you live in the U.S., you can also help by letting your representatives know about this. Here’s an ActionNetwork page that Fulu set up so that you can easily do so: https://actionnetwork.org/letters/right-to-repair-reform-section-1201-of-the-dmca


Yeah, concerning the DRM part, the main goal of this bounty system is to help change legislation so that people are allowed to legally modify the things that they own.
Specifically reforming section 1201 of the DMCA. Right now, if you break the digital locks on the fridge to remove the ads and then publicize that information, you can get 3-5 years in federal prison. (With this bounty system you keep the information private between you and Futo).
So when they hear lobbyists say things like, “We believe this legislation is in search of problems that do not exist…” Louis can respond with “Well actually, millions of people use these products and if this person releases a solution to it, he goes to prison”
Louis talks about that in more detail here: https://odysee.com/@rossmanngroup:a/after-17-years-of-repair,-i’m-doing:d




I would guess this is a Quarter 2 release. They’ve got a lot of work already lined up for 4.4 and engineering doesn’t seem to be a part of it.
There are about 235 vehicles in the game at the moment, and they had less than 1/4 of the ships available for engineering. A decent number of those only had engineering partially implemented.
I’m assuming this rework of the ships will bring them all up to the level of having physicalized components. That’s going to be a lot of work for a number of teams.
They want ship armor working before engineering goes out; it sounds like they’re close to getting that in, though.
I’ve seen people mention that shooting the door open works in these situations (as in, during this test). Not the best option but it’s something, for now.
Nice! I wish I had known that when the servers were up. It would have saved me a bit of time.


Awesome image.
Minor nitpick here, but as someone who has actually experienced totality, there is one major issue with this image. During totality it gets about as dark as it is a little after sunset (dark enough to trigger streetlights with automatic sensors). Then, imagine looking toward where the sun has already set and seeing the glow of a fading sunset, except instead of coming from one direction, that glow surrounds you in every direction you can see.
Basically there would be more color coming from behind the marine layer.
That being said, you could always claim that this is totality being experienced in some other solar system.


This news article is missing the statement from the school’s principal:
In a statement shared with parents, Principal Katie Smith said the school’s security department had reviewed and canceled a gun detection alert, while Smith (who didn’t immediately realize the alert had been canceled) reported the situation to the school resource officer, who called the local police. https://techcrunch.com/2025/10/25/high-schools-ai-security-system-confuses-doritos-bag-for-a-possible-firearm/
Looks like this system just flags footage for review and humans make the ultimate decision.
I’m not sure how the notification on this flag/event is worded. It’s possible that it should be worded differently to express a need to actually look at the footage.
In this case the principal probably jumped into panic mode, or didn’t understand the system well enough.


The study focuses on general questions asked of “market-leading AI Assistants” (there is no breakdown of which models were used for what).
It does not mention ground.news, or models that have been fed a single article and asked to summarize it. Instead, this focuses on when a user asks a service like ChatGPT (or a search engine) something like “what’s the latest on the war in Ukraine?”
Some of the actual questions asked for this research: “What happened to Michael Mosley?” “Who could use the assisted dying law?” “How is the UK addressing the rise in shoplifting incidents?” “Why are people moving to BlueSky?”
With those questions, the summaries and attribution of sources contain at least one significant error 45% of the time.
It’s important to note that there is some bias in this study (not that they’re wrong).
They have a vested interest in proving this point to drive traffic back to their articles.
Personally, I would find it more useful if they compared different models/services to each other, as well as the difference between asking general questions about recent news vs. feeding in specific articles and then asking questions about them.
With some of my own tests on locally run models, I have found that the “reasoning” models tend to be worse for some tasks than others.
It’s especially noticeable when I’m asking a model to transcribe the text from an image word for word. “Reasoning” models will usually replace the endings of many sentences with whatever it seemed like the sentence was getting at, while some “non-reasoning” models were able to accurately transcribe all of the text.
The biggest takeaway I see from this study is that, even though most people agree that it’s important to look out for errors in AI content, “when copy looks neutral and cites familiar names, the impulse to verify is low.”


Close, but not always. It will give an answer based on the data it’s been trained on, plus a bit of randomization controlled by a “seed”.
So, in general, it will give the most average answer, but that seed can occasionally direct it down the path of a less common answer.
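For what it’s worth, here’s a toy sketch of that idea in Python. The words and probabilities are made up, and real models sample from learned distributions with more knobs (like temperature), but the seeded randomness works the same way:

```python
import random

# Made-up probabilities standing in for what a model learned from its training data.
next_word_probs = {"common answer": 0.90, "less common answer": 0.08, "rare answer": 0.02}

def sample(probs: dict, seed: int) -> str:
    rng = random.Random(seed)  # the "seed" fixes the randomness
    return rng.choices(list(probs), weights=list(probs.values()))[0]

# Most seeds land on the most "average" answer; occasionally a seed
# steers the pick toward a less common one.
for seed in range(5):
    print(seed, sample(next_word_probs, seed))
```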


Why not create comparison like “generating 1000 words of your fanfiction consumes as much energy as you do all day” or something more easily to compare.
Considering that you can generate 1000 words in a single prompt to ChatGPT, the energy to do that would be about 0.3Wh.
That’s about as much energy as a typical desktop would use in about 8 seconds while browsing the fediverse (assuming a desktop consuming energy at a rate of ~150W).
Or, on the other end of the spectrum, if you’re browsing the fediverse on Voyager with a smartphone consuming energy at a rate of 2W, then that would be about 9 minutes of browsing the fediverse (4.5 minutes if using a regular browser app in my case since it bumped up the energy usage to ~4W).
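If it helps, here’s the arithmetic behind those numbers as a quick Python sketch (the 0.3 Wh per prompt and the wattages are the rough assumptions above, not measurements):

```python
# Rough figures quoted above, not measurements.
prompt_wh = 0.3   # ~energy for one ChatGPT prompt (~1000 words)
desktop_w = 150   # desktop browsing the fediverse
voyager_w = 2     # phone running Voyager
browser_w = 4     # phone running a regular browser app

def equivalent_seconds(energy_wh: float, power_w: float) -> float:
    """How long you can draw power_w watts before using energy_wh watt-hours."""
    return energy_wh / power_w * 3600

print(equivalent_seconds(prompt_wh, desktop_w))       # ~7 s (the "about 8 seconds" above)
print(equivalent_seconds(prompt_wh, voyager_w) / 60)  # ~9 minutes
print(equivalent_seconds(prompt_wh, browser_w) / 60)  # ~4.5 minutes
```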


I agree with your comment except that I think you’ve got the privacy part wrong there. Any company can come in and scrape all the information they want, including upvote and downvote info.
In addition, if you try to delete a comment, it’s very likely that it won’t be deleted by every instance that federates with yours.


I think you mean that you can choose a project that doesn’t have an “algorithm” (in the sense that you’re conveying).
Anyone can create a project with ActivityPub that has an algorithm for feeding content to you.


I think this would only be acceptable if the “AI-assisted” system kicks in when call volumes are high (when dispatchers are overburdened with calls).
For anyone who’s been in a situation where you’re frantically trying to get ahold of 911 and have to make 10 calls to do so, a system like this would have been really useful to help relieve whatever call volume situation was going on at the time. At least in my experience it didn’t matter too much, because the guy had already been dead for a bit.
And for those of you who are dispatchers, I get it: it can be frustrating to get 911 calls all the time for the most ridiculous of reasons, but I still think it would be best if a system like this only kicks in when necessary.
Being able to talk to a human right away is way better than essentially being asked to “press 1 if this is really an emergency, press 2 if this is not an emergency”.


I didn’t factor mobile power usage into the equation before because it’s fairly negligible. However, I downloaded an app to track my phone’s energy use just for fun.
A mobile user browsing the fediverse would be using electricity at a rate of ~1 Watt (it depends on the phone of course, and on whether you’re using WiFi or LTE, etc.).
For a mobile user on WiFi:
In the 16 seconds it takes a desktop user to burn through the energy of those 2 prompts to ChatGPT, that same mobile user would only use ~0.00444 Wh.
Looking at it another way, a mobile user could browse the fediverse for 18 min before they match the 0.3 Wh that a single prompt to ChatGPT would use.
For a mobile user on LTE:
With Voyager I was getting a rate of ~2 Watts.
With a browser I was getting a rate of ~4 Watts.
So, to match the energy of a single prompt to ChatGPT, you could browse the fediverse on Voyager for ~9 minutes, or in a browser for ~4.5 minutes.
I’m not sure how accurate this app is, and I didn’t test extensively to really nail down exact values, but those numbers sound about right.
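Same back-of-the-envelope math for the mobile side, if anyone wants to check it (the 1 W WiFi figure is just what the tracking app reported for me):

```python
prompt_wh = 0.3     # one ChatGPT prompt
wifi_phone_w = 1.0  # phone on WiFi, per the tracking app

# Energy the WiFi phone uses in the 16 seconds a desktop takes for two prompts:
print(wifi_phone_w * 16 / 3600)       # ~0.0044 Wh

# Minutes of WiFi browsing to match a single prompt:
print(prompt_wh / wifi_phone_w * 60)  # 18.0 minutes
```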


My question simply relates to whether I can support the software development without supporting lemmy.ml.
No. You can’t support Lemmy without supporting lemmy.ml, because the developers use lemmy.ml for testing and they have not created a way for users to donate to one without the other.
That’s why others are suggesting you should just support a different but similar fediverse project like PieFed or Mbin instead.


Yeah, if you’re relying on them to be right about anything, you’re using them wrong.
A fine-tuned model will go a lot further if you’re looking for something specific, but they mostly excel at summarizing text or brainstorming ideas.
For instance, if you’re a Dungeon Master in D&D and the group goes off script, you can quickly generate the back story of some random character that you didn’t expect the players to do a deep dive on.
I wouldn’t consider this slop.
Let’s compare this to photography. If you use a camera to take a picture of something, sure, the machine is doing most of the work, but the photographer is playing a vital role in this.
Now there are photographers that spend a lot of time composing a shot. They’ll mess around with shutter speed, aperture size, ISO, zoom, depth of field, etc. They’ll also figure out the subject matter and may add some other elements to it. Afterwards they’ll make adjustments to the picture with something like Lightroom or Darktable, and maybe touch up some things with Photoshop.
Then there are people that take pictures with their phone of a computer screen showing something cool happening in a game and post it on Reddit.
On one end of the spectrum I would consider the photo to be art, on the other I would consider it to be slop. However, there are many degrees between one end of this spectrum to the other.
With AI tools it’s not much different. The machine is doing a lot of the work, but how much of it is guided, reshaped, or directed by a human? With image-generation tools you can tweak the seed, the steps, the CFG, the sampler, the denoise strength, etc. You can choose the base model, add multiple LoRAs and embeddings, or train your own if you’re looking for a certain style.
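As a rough illustration, here’s a minimal sketch with the Hugging Face diffusers library; the checkpoint name, prompt, and values are placeholders rather than recommendations:

```python
import torch
from diffusers import StableDiffusionPipeline, EulerAncestralDiscreteScheduler

# Placeholder checkpoint and settings, just to show where the knobs live.
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)  # the "sampler"

image = pipe(
    prompt="a lighthouse at dusk, oil painting",
    num_inference_steps=30,                               # the "steps"
    guidance_scale=7.5,                                   # the "CFG"
    generator=torch.Generator("cuda").manual_seed(1234),  # the "seed"
).images[0]
image.save("lighthouse.png")
```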
Then you have users that go to ChatGPT, type in a prompt and have ChatGPT do everything else.
Like photography, on one end of the spectrum I would consider it art, on the other I would consider it slop.
But this all raises the question: what is art? How do you draw the line between what art is and what it is not?