walden

joined 1 year ago
[–] walden@sub.wetshaving.social 5 points 1 day ago (3 children)

You mean like the one linked here? Or something with a specific URL like "killedbymozilla.com"?

The developer is still active with their other main project, Uptime Kuma. So that's good.

[–] walden@sub.wetshaving.social 7 points 2 days ago (3 children)

I am. Was it $25 for 1 test? That's a lot.

[–] walden@sub.wetshaving.social 28 points 2 days ago (6 children)

They're $10-12 USD per test where I live.

What do you mean about a phone app? How would that possibly be a reliable test?

[–] walden@sub.wetshaving.social 12 points 6 days ago* (last edited 6 days ago) (3 children)

There are two types, CMR and SMR. You can read online about the differences. CMR is better because SMR tries to be all fancy in order to increase capacity, but at the cost of speed and data integrity.

It won't be front and center in the specs of a particular drive, but you can usually find the info somewhere.

I wouldn't worry about higher capacity failing sooner. If you have 10x4TB vs 2x20TB, that's 5x as many drives that can go bad, so a 20TB drive would need a 5x worse failure rate to come out worse. A pro of larger (fewer) drives is lower power consumption. 5-10 watts per drive doesn't sound like much, but it adds up.
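As a back-of-the-envelope example (the ~7 W per drive and $0.15/kWh figures are just guesses, not specs):

```bash
awk 'BEGIN {
  diff_watts = (10 - 2) * 7                  # 8 extra spinning drives at ~7 W each
  kwh_year   = diff_watts * 24 * 365 / 1000  # roughly 490 kWh per year
  printf "extra electricity: ~$%.0f/year\n", kwh_year * 0.15
}'
```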

[–] walden@sub.wetshaving.social 47 points 6 days ago (4 children)

This is a good idea, and hopefully will help people get a leg up. Better credit scores can save you money by opening up lower interest rates.

[–] walden@sub.wetshaving.social 5 points 1 week ago* (last edited 1 week ago)

Good question, and I'm curious what the experts say. Surely it depends on the software that handles DHCP.

I've always set static addresses (reservations) within the DHCP address range, and they've always been honored and never assigned to other devices. I've used ASUS and MikroTik, for what it's worth.

If you're the type to set static addresses on the devices themselves, then that would certainly increase the risk of a conflict if it's inside the address range.
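For what it's worth, a reservation on MikroTik looks roughly like this (the IP, MAC, and server name are made-up placeholders; ASUS has an equivalent "manually assigned IP" table in its DHCP settings):

```bash
/ip dhcp-server lease add address=192.168.88.50 mac-address=AA:BB:CC:DD:EE:FF server=defconf comment="NAS"
```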

[–] walden@sub.wetshaving.social 1 points 1 week ago* (last edited 1 week ago) (1 children)

Aha. Well, I guess I'm not the target audience because I can't be bothered to go through the installation steps. It's not in the LMDE repository, but I wish it were!

[–] walden@sub.wetshaving.social 2 points 1 week ago (8 children)

Hmm, that's not working for me. You mean use those as options? 'ls -eza'?

[–] walden@sub.wetshaving.social 6 points 1 week ago* (last edited 1 week ago) (12 children)

I learned you can edit .bashrc (in your home dir) and update the alias for ls to include the options I like. It has saved me lots of keystrokes. Mine is ls -lha in addition to whatever color-coding stuff is there by default.
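For anyone curious, the line in my ~/.bashrc is something like this (exact defaults vary by distro, so treat it as a sketch):

```bash
# ~/.bashrc — long listing, human-readable sizes, include dotfiles, keep color
alias ls='ls -lha --color=auto'
```

Then run source ~/.bashrc or open a new terminal for it to take effect.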

[–] walden@sub.wetshaving.social 9 points 1 week ago (3 children)

No, you're right. I do believe they're trained to detect things, and thought about editing it, but I stuck with the poor wording.

 

I have multiple things running through a reverse proxy and I've never had trouble accessing them until now. The two hospitals are part of the same company, so their network setup is probably identical.

Curiously, it's not that the sites can't be found; instead, my browser complains that the connection isn't secure.

So I don't think it's a DNS problem, but I wonder what the hospital is doing to the data.

All I could come up with in my research is this article about various methods of intercepting traffic. https://blog.cloudflare.com/performing-preventing-ssl-stripping-a-plain-english-primer/

Since my domain name is one that requires https (.app), the browser doesn't allow me to bypass the warning.

Is this just some sort of super strict security rules at the hospital? I doubt they're doing anything malicious, but it makes me wonder.
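Next time I'm on their wifi I'll check which certificate is actually being presented. Something like this should show it (example.app standing in for my real domain):

```bash
# print the issuer/subject of whatever cert the hospital network serves for my domain
openssl s_client -connect example.app:443 -servername example.app </dev/null 2>/dev/null \
  | openssl x509 -noout -issuer -subject -dates
```

If the issuer turns out to be the hospital's own CA instead of my usual one, that would point to TLS interception rather than DNS.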

Thanks!

Also, if you know of any good networking Lemmy communities, feel free to share them.

 

I don't really know what to look for in the logs, but with some guidance maybe we can figure this out. Here's how it typically goes:

I set up a daily recurring auto-post at sub.wetshaving.social. It works for about 1-2 weeks, then stops.

I set the entire server to restart every 3 or 4 days, which makes Lemmy Schedule stable for longer, but still every week or two it will stop working.

The solution is some sort of combination of restarting the Docker stack (Lemmy Schedule and Redis) and/or opening the Lemmy Schedule web interface and logging in, logging out, refreshing, logging back in, whatever it takes. The need to load the web page is odd, like it reminds Lemmy Schedule somehow that "oh yeah, there are posts to post".
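When it gets stuck, the container side of the fix is basically this, run from the directory with the Lemmy Schedule compose file (plus the web-UI poking described above):

```bash
# bounce the whole Lemmy Schedule stack (the app container and Redis)
docker compose restart
```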

A lot of the time when I log in to fix it, the scheduled post doesn't appear in the to-do list. We have a weekly post that stays in the to-do list, but the daily post won't find itself and reappear until I reload, log out, log back in, etc.

When it starts working again, it typically spits out the posts that it missed, sometimes in duplicate.

Any ideas?

 

I've seen them called "stop lines", "balk lines", etc. I mean the thick line painted on the road at a stop sign.

You're supposed to stop before the line, but a lot of the time there's a bush or other obstruction so you can't see any crossing traffic. You have to creep forward until you can see anything.

Is there a reason for this? Is it done on purpose? It makes sense if there's a crosswalk or something, but I see it a lot where there shouldn't be any pedestrian activity.

 

I've found the following work-around works pretty well. If you host an instance that's currently on 0.19.0 or 0.19.1, consider implementing this.

There are two bugs that this helps with: outbound federation stalling, and a separate issue that's triggered by restarting at midnight.

Work-around:
Create cronjobs that restart the Lemmy container every 6 hours (but not at midnight). The following example is used for a Debian system running Lemmy in Docker.

Type crontab -e into the terminal and add something like the following:

~~0 1 * * * docker container restart lemmy-lemmy-1
0 7 * * * docker container restart lemmy-lemmy-1
0 13 * * * docker container restart lemmy-lemmy-1
0 19 * * * docker container restart lemmy-lemmy-1~~

3 1-23/6 * * * docker container restart lemmy-postgres-1 && sleep 60 && docker container restart lemmy-lemmy-1

By restarting the container every 6 hours, outbound federation continues to work. There may still be some delays, but everything gets cleared up regularly.

By telling it what time to restart (0100, 0700, 1300, and 1900 as opposed to "every 6 hours"), it avoids restarting at midnight. This avoids the second bug.
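To double-check that the job is actually firing, something like this works on Debian (assuming rsyslog is writing /var/log/syslog; otherwise journalctl -u cron shows the same thing):

```bash
# list the most recent runs of the restart job as logged by cron
grep CRON /var/log/syslog | grep "docker container restart" | tail -n 5
```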

My instance has been doing this for enough days that I'm confident it's working. You can check your federation status here. Note that it's normal for there to be 0 up-to-date instances and a lot of lagging instances. As long as they sometimes turn "up to date", everything is getting caught up.

 

We have a small but dedicated user base. The community has daily posts to share your shave of the day (SOTD) so you can easily see what other people have been using. Other posts are allowed, like mail calls, reviews, or any other interesting shave content.

Yes, we really enjoy shaving that much!

!wetshaving@sub.wetshaving.social
