
I’m surviving, and definitely not the fittest.
I think that the TinyTapeout concept is super cool (https://tinytapeout.com/). In the past, it was not really feasible to design and manufacture a semiconductor device as a hobbyist… Unless maybe an extremely wealthy one.
Now, we have open source design tools, an open process design kit, and the ability to buy a small part of a manufactured wafer.
There are also now multi-project wafer runs for photonic chips at reasonable prices for startup/academia. I think these developments are pretty cool.
Thanks a lot for the examples! I have been looking through these, and, as far as I can tell:
I still have not had the chance to look into leaky metadata. But, generally, I think metadata issues can in part be addressed by not generating much metadata.
Probably the biggest vulnerability is the captive portal. There is no way to verify you’re connecting to an official Starbucks router. I think that when connecting to a public router it is wise to assume that it is malicious.
I’m curious about an example that comes to your mind as you say this. In your view, what is a privacy risk associated with public WiFi use that is not easily mitigated?


Cool! Unfortunately it is not visible out of my window, and I just saw this right as I am going to sleep… I imagine I would need to travel at least to the edge of the city to maaaybe get to see it, right? Or are these views really possible from within a city?

Wow! Never seen this one before. It’s amazing!


Thanks :)


It depends. In my experience: in an academic laboratory I have been able to use common sense.
For example, gloves go on when working with strong acids/bases. The statement:
gloves apparently only give researchers a false sense of security that can dull the sense of touch and prevent you from recognizing chemical exposure
does not apply as much when you are working with such corrosive agents, because you really should never be in a position where spilling 4 M HCl onto your hands would go unnoticed.
When working with large quantities of oils, even if non-hazardous, gloves go on and they will probably get oil on them.
When working with cell cultures, the goal is often to not contaminate the cultures. Some people prefer to wash their hands thoroughly and not use gloves, and they have been working at it for many years and they seem to do just fine. It’s a risk mitigation strategy - if the cultures have antibiotics and fungicides, risk is already not too high.
In an industry setting it is different. Companies often comply with specific standards and health and safety regulations. While the individual can use common sense, the people in charge of ascertaining compliance (sometimes ‘EHS’, Environment, Health and Safety personnel) aren’t necessarily chemists themselves, nor should they need to be aware of the identity of the transparent liquid in the flask that you are holding. So, generic rules are often set in place not only because of their practical utility but also to simplify enforcement. In some cases external auditors can come in (announced or not) and verify compliance - this, again, is much simpler when the rule is ‘lab coat behind yellow line, gloves always on when touching a container with a liquid’ than having to interview each person to understand what they were touching without gloves and to understand their philosophy of why they chose to do so.
I have experienced issues both over Tor and over clearnet. The Tor front-end exists on its own server, but it connects to the mander server. So, the server that hosts the front-end via Tor will see the exit node connecting to it, and then the mander server gets the requests via that Tor server. Ultimately some bandwidth is used for both servers because the data travels from mander, to the Tor front-end, and then to the exit node. There is also another server that hosts and serves the images.
What I see is not a bandwidth problem, though. It seems like the database queries are the bottleneck. There is a limited number of connections to the database, and some of the queries are complex and use a lot of CPU. It is the intense searching through the database that appears to throttle the website.


By hand. We are only two people, and we usually clean after we cook/eat. When one is cleaning only 2 plates + a pot/pan at a time, it is easy to use little water. Spray of soap, metal scrub, sponge scrub, and then turn the tap on to rinse for a few seconds. Utensils get individually scrubbed and then all rinsed together for a few seconds.
Maybe when we have kids a dish washer will make sense.
AGUCUAGCAUAC


I have been happy with my Garmin. It is functional without having to connect to anything, and data can be easily exported to a computer for more advanced processing. It is a handy GPS receiver that lets me monitor heart rate and log running metrics.

That’s good! There’s some hope that this won’t last forever then. Thanks.
And it’s interesting that the challenge via old.lemmy.ca was so impactful. The first wave of bots that I noticed also came through an Mlmym front-end that I make accessible via Tor. But lately they have been hitting directly via the regular front-end.

Did this spike for you these past weeks? I’m not sure if it just happens to be our instance’s turn or if they have scaled up their efforts. I have been playing whack-a-mole with IP ranges.
This morning I woke up and a new IP subnet (43.173.0.0/16) was excessively hitting the site from multiple IPs, probably scraping, making the site unresponsive. I blocked that subnet and the site is responsive again.
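In case it’s useful to anyone doing the same kind of triage: checking whether an offending address falls inside a range like that is easy to script with Python’s standard library. A minimal sketch, using the subnet mentioned above and made-up sample addresses:

```python
import ipaddress

# The /16 range that was hammering the site (example from above)
blocked = ipaddress.ip_network("43.173.0.0/16")

def is_blocked(ip: str) -> bool:
    """Return True if the address falls inside the blocked range."""
    return ipaddress.ip_address(ip) in blocked

print(is_blocked("43.173.12.34"))  # → True  (inside the /16)
print(is_blocked("8.8.8.8"))       # → False (outside)
```

The actual blocking I do at the firewall level, but a check like this is handy for quickly grepping through logs before deciding how wide a range to deny.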
Thank you.
A few days ago, I blocked several IP ranges to solve this. I unblocked them about two days ago in an attempt to solve some federation issues… The bots from this IP range came back.
This time I blocked only the IP range that has the most bot-like activity. Hopefully that resolves it.


Woah! Congratulations!!! 🥳 🎉


For mander.xyz it has been bot scrapers. That time you are mentioning, it was scraping via the onion front-end that I am hosting for easier access over Tor. Yesterday an army of bots scraping via Alibaba cloud servers made the server unusable for a few minutes. The instance would receive a bunch of requests from the same IP range (47.79.0.0/16), and denying that full IP range fixed the problem.
Some instances implement anti-bot measures. For example, https://sopuli.xyz/ makes use of Anubis. I think that instances behind Cloudflare get some protection too. I am considering using Anubis for mander.xyz, but for now I have just been dealing with this manually as it does not happen too often.
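A crude way to spot these waves, before reaching for something like Anubis, is to bucket the client IPs from the access log by /16 and count. A hypothetical sketch (the log-derived IP list here is made up for illustration):

```python
from collections import Counter
import ipaddress

# Hypothetical client IPs pulled from an access log
ips = [
    "47.79.10.1", "47.79.10.2", "47.79.200.9",
    "47.79.33.7", "203.0.113.5",
]

# Collapse each address to its containing /16 network and tally.
# strict=False lets ip_network mask off the host bits for us.
buckets = Counter(
    str(ipaddress.ip_network(f"{ip}/16", strict=False)) for ip in ips
)

for net, count in buckets.most_common():
    print(net, count)
```

When one /16 dominates the tally the way 47.79.0.0/16 did, denying the whole range is usually the quickest fix, at the cost of possibly blocking a few legitimate users in that range.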


Since my work involves sensors, I set up a continuous testing setup on a Raspberry Pi and got its IP whitelisted. I ssh into it when something is annoying to do on the Windows laptop.