archomrade

joined 2 years ago
[–] archomrade@midwest.social 2 points 2 months ago

yup. I haven't done it yet, but apparently ceiling fan controllers are a pretty standard thing, so usually all you really have to do is replace the whole controller box (they're like $30 apiece from what I remember), or replace the controller board itself like you mentioned.

I've stopped buying appliances from places like Home Depot for this reason; it seems like they simply don't stock anything that isn't a brand-name cloud-hosted service or from a larger brand like Hue.

[–] archomrade@midwest.social 3 points 4 months ago

“I’ve come to view home ownership and healthcare as destabilizing forces in my life,” said Bernie, a 45-year-old network engineer from Minneapolis. To finance owning his and his wife’s $300,000 home and saving for the future, the couple was foregoing medical and dental treatment of any kind and cutting back on expenses everywhere, he said, despite a pre-tax household income of more than $250,000.

I have no idea where in Minneapolis this person is but this is nothing like my own experience in the metro area.

My wife and I bought our home in 2021, for about the same price he describes and with an income far less than theirs, and we're spending less than 30% of our income on our mortgage payment (including our insurance premiums and taxes). Maybe they bought theirs at an interest-rate peak, whereas we bought ours when rates bottomed out, but I can quite confidently say that our home is not the main burden on our finances.

What is killing us is rising food and healthcare costs, as well as our student loans. Our energy utility announced last year that they would be implementing surge pricing that could almost double our energy costs, and since the utility administers the state solar incentive program themselves, they quoted us a PV system at almost twice the cost you can find on the open market (they use your average monthly energy bill to determine how much you can afford to pay for the system, which ought to be criminal). We're getting fucked up, down, and sideways, but our mortgage payment is probably the most stable expense we have.

There are a lot of reasons things are shit and financially precarious, but owning my home has been a rare bright spot in our otherwise gloomy financial picture. If we were still renting, not only would we be in the same strained situation as we are now, but we'd be constantly anxious about our rent skyrocketing, too.

Home ownership isn't all sunshine and rainbows, but the alternative of renting is often just as expensive - on top of leaving you at the whims of landlords. Outlets have been publishing articles like this for a decade now trying to convince people that home ownership is overrated and renting is the way of the future, and I really wish I could trust them to report on it transparently.

[–] archomrade@midwest.social 2 points 4 months ago (1 children)

Huh, it works great on my Android-based Nvidia Shield

[–] archomrade@midwest.social 2 points 4 months ago (1 children)

As a rule I don't announce my trackers publicly so they can continue existing as my trackers, but the one I mostly use is small-rodent-themed.

I'll DM you

[–] archomrade@midwest.social 3 points 4 months ago (3 children)

I get my Linux distros via torrent networks, mostly

[–] archomrade@midwest.social 6 points 4 months ago (1 children)

As someone who likes to have a fallback way of purchasing digital content that I can remove DRM from, this annoys me.

I can still purchase MP3 and FLAC files from various online retailers, and I can rip Blu-rays for my movies and TV shows, but now I need a new place to purchase ebooks that are downloadable. Anyone have any recommendations? The first few independent retailers I've found seem to require their own apps.

[–] archomrade@midwest.social 2 points 4 months ago (5 children)

It's been a while since I've heard about Libgen and AA - and actually I'm not sure how they manage to operate with direct downloads of copyrighted material. I find my ebooks through more conventional P2P means, but I've always just assumed that was necessary to avoid sudden takedowns.

[–] archomrade@midwest.social 5 points 4 months ago (3 children)

Lmao, yea I think they're kind of playing a game with language here.

After doing some reading of various explanations, what they mean when they say they aren't using electrons for computation is basically that the 'thing' they're measuring that dictates the 'state' of the transistor is a quasi-particle... but that particle is only observed through the altered behavior of electrons (I guess in the case of the Majorana particle, it appears as two electrons gathered together in synchrony?)

So the chip is still using electrons in its computation in the same way as a traditional transistor - you are still sending electrons into a circuit, and the 'state' of the bit is determined by the output signal. It's just that, in this case, they're looking for specific behavior of the electrons that indicates the presence and state of this 'qubit'.

That is just my layman's understanding of it

[–] archomrade@midwest.social 15 points 4 months ago (6 children)

Microsoft isn’t using electrons for the compute in this new chip; it’s using the Majorana particle that theoretical physicist Ettore Majorana described in 1937.

Ok, now I'm gonna need an explain-like-I'm-not-a-quantum-scientist on what a 'topological transistor' is and what it uses instead of electrons for its compute (and, like, what is the significance?)

[–] archomrade@midwest.social 15 points 4 months ago (3 children)

My parents' and school administrators' attempts at blocking unsanctioned activities are what taught me computer literacy

There was nothing quite as satisfying as getting caught opening AddictingGames in a web browser through a proxy after the teacher was convinced they had blocked it completely.

[–] archomrade@midwest.social 5 points 4 months ago

I honestly think social media and internet subculture would be fine if they weren't soured by moneyed interests

If work wasn't so alienating and all-encompassing, and we weren't so stressed and insecure in our material conditions, then we wouldn't run to social media as an escape. If it wasn't also so rife with consumerist culture and advertisements, it wouldn't be so corrosive. Maybe then we could use it to create communities that mirror and bridge into IRL spaces and create meaningful relationships.

Instead, the entire network has been constructed around capitalist organization, and it only serves to make us more miserable.

[–] archomrade@midwest.social 31 points 4 months ago (6 children)

An excellent game that was undercut by its exclusivity deal with Epic

 

Over the weekend I set up some outdated Wyze v3 cameras with hacked firmware to enable RTSP, and was able to load the stream into Frigate to do some mouse-infestation detection. This worked great, and it was with hardware I already had lying around, but now I'm in need of some more coverage and I don't want extension cords hanging from my basement ceiling everywhere.

I thought there might be another ~$50 WiFi battery camera somewhere out there that could be hacked or had native RTSP support, but my search is coming up short... seems like people either settle for cheap cloud-polling ones or splurge on some real quality mid-range ones. Anyone know of any cheap options?

For those curious, here's the git repo for the Wyze cams I found. It's as easy as loading a micro-SD card with the firmware, giving it an SSH key, and then turning the camera back on. Then you can SSH into it over the network and enable things like RTSP and a bunch of other features I don't know what to do with. It has proven to be handy, but it doesn't support the outdoor battery-powered models.
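
For the Frigate side, the camera entry ends up looking roughly like this (a minimal sketch; the camera name, stream address, and port are placeholders, so check what your particular firmware build actually exposes):

cameras:
  basement_wyze:                                  # placeholder camera name
    ffmpeg:
      inputs:
        - path: rtsp://192.168.1.50:8554/unicast  # placeholder RTSP URL for the hacked cam
          roles:
            - detect
    detect:
      width: 1920                                 # assumes the v3's 1080p stream
      height: 1080

Once Frigate is reading the stream, object tracking and zones can be layered on top of it.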

 
 
 

Edited for legibility

 

Edit: a working solution was proposed by @Lifebandit666@feddit.uk below:

So you’re trying to get 2 instances of qbt behind the same Gluetun vpn container?

I don’t use Qbt but I certainly have done in the past. Am I correct in remembering that in the gui you can change the port?

If so, maybe what you could do is set up your stack with 1 instance in, go into the GUI and change the port on the service to 8000 or 8081 or whatever.

Map that port in your Gluetun config and leave the default port open for QBT, and add a second instance to the stack with a different name and addresses for the config files.

Restart the stack and have 2 instances.
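
In compose terms, that suggestion would look roughly like this (a sketch only, not a tested config; the second service name and config path are examples, and the wireguard variables are the same as in the stack below):

services:
  gluetun:
    image: qmcgaw/gluetun:latest
    cap_add:
      - NET_ADMIN
    environment:
      - VPN_SERVICE_PROVIDER=mullvad
      # ...same wireguard variables as in the original stack
    ports:
      - "8080:8080" # qbittorrent instance 1 WebUI (default port)
      - "8081:8081" # qbittorrent instance 2 WebUI (switched to 8081 in its own GUI/config)

  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent:latest
    network_mode: "service:gluetun"
    volumes:
      - /docker/appdata/qbittorrent:/config

  qbittorrent-2:
    image: lscr.io/linuxserver/qbittorrent:latest
    network_mode: "service:gluetun"
    volumes:
      - /docker/appdata/qbittorrent-2:/config # separate config dir so the changed WebUI port persists

Since both instances share gluetun's network namespace, they can't both listen on 8080, so the second one has to be switched to a different WebUI port before it will run alongside the first.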


Has anyone run into issues with Docker port collisions when trying to run images behind a bridge network (I think I got those terms right)?

I'm trying to run the arr stack behind a VPN container (gluetun, for those familiar), and I would really like to duplicate a container image within the stack (e.g. a separate download client for different types of downloads). As soon as I set the network_mode to 'service' or 'container', I lose the ability to set the public/internal port of the service, which means any image that doesn't allow setting ports from an environment variable is stuck with whatever the default port is within the application.

Here's an example .yml:

services:
  gluetun:
    image: qmcgaw/gluetun:latest
    container_name: gluetun
    cap_add:
      - NET_ADMIN
    environment:
      - VPN_SERVICE_PROVIDER=mullvad
      - VPN_TYPE=[redacted]
      - WIREGUARD_PRIVATE_KEY=[redacted]
      - WIREGUARD_ADDRESSES=[redacted]
      - SERVER_COUNTRIES=[redacted]
    ports:
      - "8080:8080" #qbittorrent
      - "6881:6881"
      - "6881:6881/udp"
      - "9696:9696" # Prowlarr
      - "7878:7878" # Radar
      - "8686:8686" # Lidarr
      - "8989:8989" # Sonarr
    restart: always

  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent:latest
    container_name: "qbittorrent"
    network_mode: "service:gluetun"
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/Chicago # CST/CDT isn't a valid TZ identifier
      - WEBUI_PORT=8080
    volumes:
      - /docker/appdata/qbittorrent:/config
      - /media/nas_share/data:/data

Declaring ports in the qbittorrent service raises an error saying you cannot set ports when using the service network mode. Linuxserver.io has a WEBUI_PORT environment variable, but using it without also setting the service ports breaks it (their documentation says this is due to CSRF issues and port mapping, but then why even include it as a variable?)

The only workarounds I can think of are doing a local build of the image that needs duplicating so its ports can be configured from environment variables, OR running duplicate gluetun containers for each client, which seems dumb and not at all worthwhile.

Has anyone dealt with this before?

 
 

Anyone else get this email from Leviton about their Decora light switches and the changes to their ToS expressly permitting them to collect and use behavioral data from your devices?

FUCK Leviton; long live Zigbee, Z-Wave, and all open standards


My Leviton

At Leviton, we’re committed to providing an excellent smart home experience. Today, we wanted to share a few updates to our Privacy Policy and Terms of Service. Below is a quick look at key changes:

We’ve updated our privacy policy to provide more information about how we collect, use, and share certain data, and to add more information about our users’ privacy under various US and Canadian laws. For instance, Leviton works with third-party companies to collect necessary and legal data to utilize with affiliate marketing programs that provide appropriate recommendations. As well, users can easily withdraw consent at any time by clicking the links below.

The updates take effect March 11th, 2024. Leviton will periodically send information regarding promotions, discounts, new products, and services. If you would like to unsubscribe from communications from Leviton, please click here. If you do not agree with the privacy policy/terms of service, you may request removal of your account by clicking this link.

For additional information or any questions, please contact us at dssupport@leviton.com.

Traduction française de cet email Leviton

Copyright © 2024 Leviton Manufacturing Co., Inc., All rights reserved. 201 North Service Rd. • Melville, NY 11747

Unsubscribe | Manage your email preferences

 

I'm not sure where else to go with this, sorry if this isn't the right place.

I'm currently designing a NAS build around an old CMB-A9SC2 motherboard that is self-described as an 'entry level server board'.

So far I've managed to source all the other necessary parts, but I'm having a hell of a time finding the specified RAM that it takes:

  • 204-pin DDR3 UDIMM ECC

As far as I can tell, that type of RAM just doesn't exist... I can find it in SODIMM format or I can find it in 240-pin format, but for the life of me I cannot find all of those specifications in a single module.

I'm about ready to just throw the whole board away, but everything else about it is perfect...

Has anyone else dealt with this kind of memory before? Is there like a special online store where they sell weird RAM components meant for server builds?

 

Pretend your only other hardware is a repurposed HP Prodesk and your budget is bottom-barrel

46
submitted 1 year ago* (last edited 1 year ago) by archomrade@midwest.social to c/linux@lemmy.ml
 

I'm currently watching the progress of a 4TB rsync file transfer, and I'm curious why the speeds are less than the theoretical read/write maximums of the drives involved. I know there's a lot that can affect transfer speeds, so I guess I'm not asking why my transfer itself isn't going faster. I'm more just curious what the bottlenecks typically are.

Assuming a file transfer between 2 physical drives, and:

  • Both drives are internal SATA III drives with ~~5.0GB/s~~ ~~5.0Gb/s read/write~~ 210MB/s read/write (this was the mistake: I was reading the SATA III protocol speed as the disk speed)
  • files are being transferred using a simple rsync command
  • there are no other processes running

What would be the likely bottlenecks? Could the motherboard/processor be limiting the speed? The available memory? Or the file structure of the files themselves (whether they are fragmented on the volumes or not)?

 
  • Edit- I set the machine to work last night running memtester and badblocks (read-only); both tests came back clean, so I assumed I was in the clear. Today, wanting to be extra sure, I ran a read-write badblocks test and watched dmesg while it worked. I got the same errors, this time on ata3.00. Given that the memory test came back clean, and smartctl came back clean as well, I can only assume the problem is with the ata module, or somewhere between the CPU and the ATA bus. I'll be doing a BIOS update this morning and then trying again, but it seems to me like this machine was a bad purchase. I'll see what options I have for a replacement.

  • Edit-2- I retract my last statement. It appears that only one of the drives is still having issues, which is the SSD from the original build. All write interactions with the SSD produce I/O errors (including re-partitioning the drive), while there appear to be no errors reading from or writing to the HDD. Still unsure what caused the earlier issue on the HDD. Still conducting testing (running a read-write badblocks pass on the HDD; might try seeing if I can reproduce the issue under heavy load). Safe to say the SSD needs repair or to be pitched. I'm curious if the SSD got damaged, which would explain why the issue remains after being zeroed out and re-written and why the HDD now seems fine. Or maybe multiple SATA ports have failed?


I have no idea if this is the forum to ask these types of questions, but it felt a bit like a murder mystery that would be fun to solve. Please let me know if this type of post is unwelcome and I will immediately take it down and return to lurking.

Background:

I am very new to Linux. Last week I purchased a cheap refurbished headless desktop so I could build a home media server, as well as play around with VMs and programming projects. This is my first ever exposure to Linux, but I consider myself otherwise pretty tech-savvy (I dabble in Python scripting in my spare time, but not much beyond that).

This week, I finally got around to getting the server software installed and operating (see details of the build below). Plex was successfully pulling from my media storage and streaming with no problems. As I was getting the Docker containers up, I started getting "not enough storage" errors for new installs. I tried purging Docker a couple of times and still couldn't proceed, so I attempted to expand the virtual storage in the VM. I definitely messed this up: immediately Plex stopped working, and no files were visible on the share anymore. To me, it looked as if it had taken storage from the SMB share to add to the system-files partition. I/O errors on the OMV virtual machine for days.

Take two.

I got a new HDD (so I could keep working as I tried recovery on the SSD). I got everything back up (created a whole new VM for Docker and OMV). I gave the Docker VM more storage this time (I think I was just reckless with my package downloads anyway) and made sure that the SMB share was properly mounted. As I got the download client running (it made a few downloads), I noticed the OMV virtual machine redlining on memory from the Proxmox window. Thought, "uh oh, I should fix that." I tried taking everything down so I could reboot OMV with more memory allocated, but the shutdown process hung on the OMV VM. I made sure all my devices on the network were disconnected, then stopped the VM from the Proxmox window.

On OMV reboot, I noticed all kinds of I/O errors on both the virtual boot drive and the mounted SSD. I could still see files in the share on my LAN devices, but any attempt to interact with the folder stalled and would error out.

I powered down all the VMs and now I'm trying to figure out where I went wrong. I'm tempted to just abandon the VMs and install it all directly on Ubuntu, but I like the flexibility of having VMs to spin up new OSes and try things out. The added complexity is obviously over my head, but if I can understand it better I'll give it another go.

Here's the build info:

Build:

  • HP ProDesk 600 G1
  • Intel i5
  • upgraded 32GB after-market DDR3 1600MHz Patriot RAM
  • KingFlash 250GB SSD
  • WD 4TB SSD (originally an NTFS drive from my Windows PC with ~2TB of data existing)
  • WD 4TB HDD (bought this after the SSD corrupted, so I could get the server back up while I dealt with the SSD)
  • 500Mbps ethernet connection

Hypervisor

  • Proxmox (latest), Ubuntu kernel
  • VM110: Ubuntu 22.04.3 live server amd64, OpenMediaVault 6.5.0
  • VM130: Ubuntu 22.04.3 live, Docker Engine, Portainer
    • Containers: Gluetun, qBittorrent, Sonarr, Radarr, Prowlarr
  • LXC101: Ubuntu 22.04.3, Plex server
  • Allocations
    • VM110: 4GB memory, 2 cores (ballooning and swap ON)
    • VM130: 30GB memory, 4 cores (ballooning and swap ON)

Shared Media Architecture (attempt 1)

  • Direct-mounted the WD SSD to VM110. Partitioned and formatted the file system inside the GUI, created a folder share, and set permissions for my share user. Shared it as an SMB/CIFS share.
  • bind-mounted the shared folder to a local folder in VM130 (/media/data)
  • passed the mounted folder to the necessary docker containers as volumes in the docker-compose file (e.g. - volumes: /media/data:/data, etc.; see the short sketch below)
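
For reference, that volume pass-through amounts to something like this in the compose file (the service shown is just one example; paths as described above):

services:
  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent:latest
    volumes:
      - /media/data:/data # SMB share bind-mounted on the Docker VM, exposed as /data inside the container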

No shame in being told I did something incredibly dumb, I'm here to learn, anyway. Maybe just not learn in a way that destroys 6 months of DVD rips in the process.

 

Does anyone know if this enables any kind of tracking (either through WiFi device logging or network activity)? I've typically used my own modems and routers, so I'm a little wary of a required smart device that I don't have control over.

So far I haven't been able to find much information beyond what's available from CenturyLink.
