[-] archomrade@midwest.social 1 points 5 days ago

I guess... I am still very skeptical of the profit margin, even if some people do end up paying for the storage. We're talking about petabytes on petabytes of data... How many people would need to pay a cloud subscription fee just to cover the overhead of the servers?

Idk. This is super sus to me, but again, I am clearly not the target market for this service, so maybe I don't have a firm grasp of the landscape.

[-] archomrade@midwest.social 6 points 5 days ago

It cannot be that profitable to have just a bunch of random data on their servers. I have so much junk and random bullshit on my drives, it would take a week of labor just to clean my shit well enough to use it for AI training, and as soon as I got any notification about cloud space being full I'd turn syncing off - I sure as hell wouldn't fork over any money for a subscription. This is such a big bridge to burn, and the server overhead must be massive... I just don't understand how they could possibly think this is a good business decision.

Idk, maybe I'm just too deep into the privacy/FOSS/selfhosting headspace to see things clearly from the normal-consumer standpoint, but I just do not understand this. I really wish someone would leak an internal conversation at one of these companies that explains the big-picture strategy with this move.

[-] archomrade@midwest.social 7 points 5 days ago

US liberals and US conservatives both share the core ideals of Liberalism, including the right to private property

They differ only in where they think individual liberty ends.

[-] archomrade@midwest.social 1 points 6 days ago

Yup, I ended up frankensteining a NAS from various Craigslist parts (I actually found a low-power business-class server motherboard that has worked out well for the purpose). Had to get a SAS HBA card and a couple SFF-8087 cables to do the job right, and I grabbed an old gaming case from the 2010s to hold it all, but it was relatively seamless. I had one of the drives go out already, but luckily I had them in a RAID configuration with parity, so it was just a matter of swapping out the drive and rebuilding.

It's been fun and rewarding, for sure! I'm glad I didn't sell them like these other dweebs told me to lol

[-] archomrade@midwest.social 9 points 6 days ago

Depression and raising toddlers can both be the result of ill-advised diddling

10
submitted 1 month ago* (last edited 1 month ago) by archomrade@midwest.social to c/selfhosted@lemmy.world

edit: a working solution is proposed by @Lifebandit666@feddit.uk below:

So you’re trying to get 2 instances of qbt behind the same Gluetun vpn container?

I don’t use Qbt but I certainly have done in the past. Am I correct in remembering that in the gui you can change the port?

If so, maybe what you could do is set up your stack with 1 instance in, go into the GUI and change the port on the service to 8000 or 8081 or whatever.

Map that port in your Gluetun config and leave the default port open for QBT, and add a second instance to the stack with a different name and addresses for the config files.

Restart the stack and have 2 instances.
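
For anyone finding this later, that suggestion translates into something roughly like the compose snippet below (an untested sketch: the second container's name, the 8081 web UI port, and the extra config path are just placeholders, and the gluetun VPN environment is the same as in my example further down):

services:
  gluetun:
    image: qmcgaw/gluetun:latest
    # ...VPN environment and the remaining port mappings as in the example further down...
    ports:
      - "8081:8081" # qbittorrent instance 1, web UI moved to 8081 inside its GUI
      - "8080:8080" # qbittorrent instance 2, left on the image default

  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent:latest
    container_name: "qbittorrent"
    network_mode: "service:gluetun" # both instances share gluetun's network namespace
    volumes:
      - /docker/appdata/qbittorrent:/config
      - /media/nas_share/data:/data

  qbittorrent2:
    image: lscr.io/linuxserver/qbittorrent:latest
    container_name: "qbittorrent2"
    network_mode: "service:gluetun"
    volumes:
      - /docker/appdata/qbittorrent2:/config # separate config dir so the instances don't clobber each other
      - /media/nas_share/data:/data

The two instances would also need different BitTorrent listening ports set in their GUIs, since they share the same network namespace.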


Has anyone run into issues with Docker port collisions when trying to run images behind a bridge network (I think I got those terms right?)?

I'm trying to run the arr stack behind a VPN container (gluetun, for those familiar), and I would really like to duplicate a container image within the stack (e.g. a separate download client for different types of downloads). As soon as I set the network_mode to 'service' or 'container', I lose the ability to set the public/internal port of the service, which means any image that doesn't allow setting ports from an environment variable is stuck with whatever the default port is within the application.

Here's an example .yml:

services:
  gluetun:
    image: qmcgaw/gluetun:latest
    container_name: gluetun
    cap_add:
      - NET_ADMIN
    environment:
      - VPN_SERVICE_PROVIDER=mullvad
      - VPN_TYPE=[redacted]
      - WIREGUARD_PRIVATE_KEY=[redacted]
      - WIREGUARD_ADDRESSES=[redacted]
      - SERVER_COUNTRIES=[redacted]
    ports:
      - "8080:8080" # qbittorrent
      - "6881:6881"
      - "6881:6881/udp"
      - "9696:9696" # Prowlarr
      - "7878:7878" # Radarr
      - "8686:8686" # Lidarr
      - "8989:8989" # Sonarr
    restart: always

  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent:latest
    container_name: "qbittorrent"
    network_mode: "service:gluetun"
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/Chicago # IANA zone name (Central time)
      - WEBUI_PORT=8080
    volumes:
      - /docker/appdata/qbittorrent:/config
      - /media/nas_share/data:/data

Declaring ports in the qbittorrent service raises an error saying you cannot set ports when using the service network mode. Linuxserver.io has a WEBUI_PORT environment variable, but using it without also setting the service ports breaks it (their documentation says this is due to CSRF issues and port mapping, but then why even include it as a variable?)

The only workarounds I can think of are doing a local build of the image that needs duplication so its ports can be configured from environment variables, OR running a duplicate gluetun container for each client, which seems dumb and not at all worthwhile.

Has anyone dealt with this before?

[-] archomrade@midwest.social 104 points 1 month ago

I'll personally donate $100 to the DNC (I'm poor, leave me alone) if Hillary goes to every campaign stop chanting 'lock him up'

[-] archomrade@midwest.social 115 points 2 months ago

Fair point, Margot Robbie

37
Leviton ToS Change (midwest.social)

Anyone else get this email from Leviton about their Decora light switches and the changes to their ToS expressly permitting them to collect and use behavioral data from your devices?

FUCK Leviton, long live Zigbee and Z-Wave and all open standards


My Leviton

At Leviton, we’re committed to providing an excellent smart home experience. Today, we wanted to share a few updates to our Privacy Policy and Terms of Service. Below is a quick look at key changes:

We’ve updated our privacy policy to provide more information about how we collect, use, and share certain data, and to add more information about our users’ privacy under various US and Canadian laws. For instance, Leviton works with third-party companies to collect necessary and legal data to utilize with affiliate marketing programs that provide appropriate recommendations. As well, users can easily withdraw consent at any time by clicking the links below.

The updates take effect March 11th, 2024. Leviton will periodically send information regarding promotions, discounts, new products, and services. If you would like to unsubscribe from communications from Leviton, please click here. If you do not agree with the privacy policy/terms of service, you may request removal of your account by clicking this link.

For additional information or any questions, please contact us at dssupport@leviton.com.


Copyright © 2024 Leviton Manufacturing Co., Inc., All rights reserved. 201 North Service Rd. • Melville, NY 11747


9

I'm not sure where else to go with this, sorry if this isn't the right place.

I'm currently designing a NAS build around an old CMB-A9SC2 motherboard that is self-described as an 'entry level server board'.

So far I've managed to source all the other necessary parts, but I'm having a hell of a time finding the specified RAM that it takes:

  • 204-pin DDR3 UDIMM ECC

As far as I can tell, that type of RAM just doesn't exist... I can find it in SODIMM formats or I can find it in 240-pin formats, but for the life of me I cannot find all of those specifications in a single module.

I'm about ready to just throw the whole board away, but everything else about the board is perfect....

Has anyone else dealt with this kind of memory before? Is there like a special online store where they sell weird RAM components meant for server builds?

40

Pretend your only other hardware is a repurposed HP Prodesk and your budget is bottom-barrel

[-] archomrade@midwest.social 73 points 5 months ago

I happen to really like District 9 for this reason

There's no malicious plot of aliens blowing up shit or invading to colonize: nope, the aliens literally just crash-landed by accident and humanity was like "stay the fuck right there, we'll take all your shit until we figure out how to ~~deal with~~ exploit you"

Humanity is always its own worst enemy

46
submitted 5 months ago* (last edited 5 months ago) by archomrade@midwest.social to c/linux@lemmy.ml

I'm currently watching the progress of a 4TB rsync file transfer, and I'm curious why the speeds are less than the theoretical read/write maximum speeds of the drives involved in the transfer. I know there's a lot that can affect transfer speeds, so I guess I'm not asking why my transfer itself isn't going faster. I'm more just curious what the typical bottlenecks could be.

Assuming a file transfer between 2 physical drives, and:

  • Both drives are internal SATA III drives with ~~5.0GB/s~~ ~~5.0Gb/s read/write~~ 210MB/s read/write (this was the mistake: I was reading the SATA III protocol speed as the disk speed)
  • files are being transferred using a simple rsync command
  • there are no other processes running

What would be the likely bottlenecks? Could the motherboard/processor be limiting the speed? The available memory? Or the file structure of the files themselves (whether they are fragmented on the volumes or not)?
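
(For a rough sense of scale, assuming ~210 MB/s of purely sequential throughput per drive: 4 TB is about 4,000,000 MB, so the theoretical floor for the transfer is roughly 4,000,000 ÷ 210 ≈ 19,000 seconds, or a bit over 5 hours - anything much slower than that points at seeks, lots of small files, or some other bottleneck rather than the drives' rated speed.)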

[-] archomrade@midwest.social 85 points 6 months ago

There are a lot of "AI is theft" comments in this thread, and I'd just like to take a moment to bring up the Luddite movement at the beginning of the Industrial Revolution: the point wasn't that 'machines are theft', or 'machines are just a fad', or even 'machines are bad' - the point was that machines were the new and highly efficient way capital owners were undermining the security and material conditions of the working class.

Let's not confuse problems that are created by capitalistic systems for problems created by new technologies - and maybe we can learn something about radical political action from the Luddites.

54
submitted 8 months ago* (last edited 8 months ago) by archomrade@midwest.social to c/linux@lemmy.ml
  • Edit- I set the machine to work last night running memtester and badblocks (read-only); both tests came back clean, so I assumed I was in the clear. Today, wanting to be extra sure, I ran a read-write badblocks test and watched dmesg while it worked. I got the same errors, this time on ata3.00. Given that the memory test came back clean, and smartctl came back clean as well, I can only assume the problem is with the ata module, or somewhere between the CPU and the SATA bus. I'll be doing a BIOS update this morning and then trying again, but it seems to me like this machine was a bad purchase. I'll see what options I have for a replacement.

  • Edit-2- I retract my last statement. It appears that only one of the drives is still having issues, which is the SSD from the original build. All write interactions with the SSD produce I/O errors (including re-partitioning the drive), while there appear to be no errors reading or writing to the HDD. Still unsure what caused the issue on the HDD. Still conducting testing (running badblocks read-write on the HDD, might try seeing if I can reproduce the issue under heavy load). Safe to say the SSD needs repair or to be pitched. I'm curious if the SSD itself got damaged, which would explain why the issue remains after it was zeroed out and re-written and why the HDD now seems fine. Or maybe multiple SATA ports have failed now?


I have no idea if this is the right forum to ask these types of questions, but it felt a bit like a murder mystery that would be fun to solve. Please let me know if this type of post is unwelcome and I will immediately take it down and return to lurking.

Background:

I am very new to Linux. Last week I purchased a cheap refurbished headless desktop so I could build a home media server, as well as play around with VMs and programming projects. This is my first-ever exposure to Linux, but I consider myself otherwise pretty tech-savvy (I dabble in Python scripting in my spare time, but not much beyond that).

This week, I finally got around to getting the server software installed and operating (see details of the build below). Plex was successfully pulling from my media storage and streaming with no problems. As I was getting the Docker containers up, I started getting "not enough storage" errors for new installs. Tried purging Docker a couple times, still couldn't proceed, so I attempted to expand the virtual storage in the VM. I definitely messed this up: immediately Plex stopped working, and no files were visible on the share anymore. To me, it looked as if it had tried to take storage from the SMB share to add to the system partition. I/O errors on the OMV virtual machine for days.

Take two.

I got a new HDD (so I could keep working as I tried recovery on the SSD). I got everything back up (created a whole new VM for Docker and OMV). Gave the Docker VM more storage this time (I think I was just reckless with my package downloads anyway), and made sure that the SMB share was properly mounted. As I got the download client running (it made a few downloads), I noticed the OMV virtual machine redlining on memory from the Proxmox window. Thought, "uh oh, I should fix that." Tried taking everything down so I could reboot the OMV VM with more memory allocation, but the shutdown process hung on OMV. Made sure all my devices on the network were disconnected, then stopped the VM from the Proxmox window.

On OMV reboot, I noticed all kinds of I/O errors on both the virtual boot drive and the mounted SSD. I could still see files in the share on my LAN devices, but any attempt to interact with the folder stalled and would error out.

I powered down all the VMs and now I'm trying to figure out where I went wrong. I'm tempted to just abandon the VMs and install everything directly on Ubuntu, but I like the flexibility of having the VMs to spin up new OSes and try things out. The added complexity is obviously over my head, but if I can understand it better I'll give it another go.

Here's the build info:

Build:

  • HP ProDesk 600 G1
  • Intel i5
  • upgraded 32GB aftermarket DDR3 1600MHz Patriot RAM
  • KingFlash 250GB SSD
  • WD 4TB SSD (originally an NTFS drive from my Windows PC with ~2TB of existing data)
  • WD 4TB HDD (bought this after the SSD corrupted, so I could get the server back up while I dealt with the SSD)
  • 500Mbps Ethernet connection

Hypervisor

  • Proxmox (latest), Ubuntu kernel
  • VM110: Ubuntu-22.04.3-live server amd64, OpenMediaVault 6.5.0
  • VM130: Ubuntu-22.04.3-live, Docker Engine, Portainer
    • Containers: Gluetun, qBittorrent, Sonarr, Radarr, Prowlarr
  • LXC101: Ubuntu-22.04.3, Plex Server
  • Allocations
  • VM110: 4GB memory, 2 cores (ballooning and swap ON)
  • VM130: 30GB memory, 4 cores (ballooning and swap ON)

Shared Media Architecture (attempt 1)

  • Direct-mounted the WD SSD to VM110. Partitioned and formatted it inside the GUI, created a folder share, set permissions for my share user, and shared it via SMB/CIFS
  • bind-mounted the shared folder to a local folder in VM130 (/media/data)
  • passed the mounted folder to the necessary Docker containers as volumes in the docker-compose file (e.g. - volumes: /media/data:/data, etc. - see the sketch after this list)
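
Roughly, the relevant compose entries looked like this (paths and services from memory, just for illustration):

services:
  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent:latest
    volumes:
      - /media/data:/data # the SMB share from VM110, bind-mounted on the Docker VM (VM130)
  radarr:
    image: lscr.io/linuxserver/radarr:latest
    volumes:
      - /media/data:/data # same bind mount, so every container sees one /data tree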

No shame in being told I did something incredibly dumb, I'm here to learn, anyway. Maybe just not learn in a way that destroys 6 months of DVD rips in the process.

14
submitted 9 months ago by archomrade@midwest.social to c/privacy@lemmy.ml

Does anyone know if this enables any kind of tracking (either through WiFi device logging or network activity)? I've typically used my own modems and routers, so I'm a little wary of a required smart device that I don't have control over.

So far I haven't been able to find much information beyond what's available from CenturyLink.

