TheDevil

joined 1 year ago
[–] TheDevil@lemmy.world 9 points 1 year ago

A long time ago I used something like sockd to run a local SOCKS proxy and then tunnel that traffic to my personal remote proxy server over port 80. I think it was something like https://win2socks.com/.

Maybe there’s something better than SOCKS these days.
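
For what it’s worth, plain OpenSSH can still do the same job with nothing extra installed: `-D` opens a local SOCKS5 proxy and pushes everything through the remote box. A minimal sketch, assuming you control a server and have its sshd listening on 443 so the tunnel blends in with HTTPS traffic (hostname is a placeholder):

```bash
# Open a local SOCKS5 proxy on 127.0.0.1:1080, tunnelled over SSH.
# -N means no remote command, just the tunnel.
ssh -D 1080 -N -p 443 user@your-server.example.com

# Point applications at it, e.g. to confirm your apparent IP has changed:
curl --socks5-hostname 127.0.0.1:1080 https://ifconfig.me
```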

Back then it worked pretty well, but I don’t think they were doing DPI. The admins did seem to notice large file transfers and appeared to be killing them manually.

I would assume most places these days will collect NetFlow data at least, so while HTTPS will protect the contents, they will be able to see the potentially unusual amount of data moving back and forth to your proxy’s IP.

I would suggest at least using a VPS to hide your school’s IP address from the IRC servers. And be aware that you may be in serious trouble if you get caught: if you’re in the UK you’re risking jail time, and speaking from personal experience, they take this shit seriously.
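
If you do go the VPS route, a plain SSH local forward is enough for a single IRC network; the server only ever sees the VPS’s IP. A rough sketch (the network and VPS hostname are placeholders):

```bash
# Forward local port 6697 to the IRC server's TLS port, via the VPS.
ssh -L 6697:irc.libera.chat:6697 -N user@your-vps.example.com

# Then point your IRC client at localhost:6697 (with TLS enabled).
```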

So maybe just set up a personal hotspot.

[–] TheDevil@lemmy.world 6 points 1 year ago* (last edited 1 year ago) (1 children)

A second vote for Reolink. They’re entirely adequate for most home scenarios.

Dahua are also very good if you can find them, though they’re aimed at professional installers. They cover almost every scenario imaginable and have good on-device AI features. They have their idiosyncrasies, but they do everything you could need and offer excellent low-light performance for very little cost. There’s also a very good Home Assistant integration.

You’ll find a lot of people tend to choose between Dahua and the more expensive Hikvision on cctvforums. You should be able to pick up a capable 4MP Dahua with tripwire detection for around 60 GBP. These cameras can (sometimes literally) see in the dark.
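
The streams are plain RTSP, so you can sanity-check a camera from any machine before wiring it into an NVR or Home Assistant. A rough example with ffprobe (`cam/realmonitor` is Dahua’s usual URL scheme; credentials and IP are placeholders):

```bash
# Inspect the main stream (subtype=0); use subtype=1 for the lower-res substream.
ffprobe -hide_banner \
  "rtsp://admin:password@192.168.1.108:554/cam/realmonitor?channel=1&subtype=0"
```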

Avoid ESP32 cams. They run at very low frame rates and produce a very noisy image. They’re fun to tinker with but nowhere near the quality of a real IPC.

[–] TheDevil@lemmy.world 1 points 1 year ago

Hasn’t been an issue for me. HA only depends on OPNsense for a DHCP lease, so assuming you have reasonable lease times it’ll just pick up where it left off.

Without checking, I would imagine you could just set a startup delay for the HA container to make sure OPNsense starts first, if it does become an issue.
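
In Proxmox that’s a one-liner: the `startup` option gives each guest a boot order plus an `up` delay before the next one starts. A sketch, assuming OPNsense is guest 100 and Home Assistant is container 101 (the IDs are placeholders; use `pct set` for containers and `qm set` for VMs):

```bash
# Start OPNsense first, then wait 60 seconds before starting anything else.
qm set 100 --startup order=1,up=60

# Home Assistant boots second.
pct set 101 --startup order=2
```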

[–] TheDevil@lemmy.world 4 points 1 year ago (2 children)

I use an N5105 generic mini PC running Proxmox and OPNsense. You can get them fairly cheaply from AliExpress. They’re particularly low power and come with 4-6 gigabit network ports. I have two containers, the second of which hosts my Home Assistant instance. As an added bonus they often don’t have a fan.

For WiFi I use Ubiquiti WiFi 6 Lite APs with the controller running under Home Assistant.
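
If you’d rather not run the controller under HA, it also runs happily in Docker. A minimal sketch using the community jacobalberty/unifi image (just one popular option; the ports are the web UI, device inform, and STUN):

```bash
# Run the UniFi controller with persistent data in a named volume.
docker run -d --name unifi --restart unless-stopped \
  -p 8443:8443 -p 8080:8080 -p 3478:3478/udp \
  -v unifi-data:/unifi \
  jacobalberty/unifi:latest
```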

[–] TheDevil@lemmy.world 2 points 1 year ago (1 children)

You can ignore the Windows machine unless it’s using NFS; it’s not relevant otherwise.

Your screenshot suggests my guess was incorrect, because you don’t have any authorised networks or hosts defined.

Even so, if it were me I would explicitly configure authorised hosts or authorised networks just to rule it out, as that would neatly explain why it works on one container but not the other. Does the clone have the same IP, by any chance?

The only other thing I can think of for you to try is setting the maproot user/group to root/wheel and seeing if that helps, but it’s just a shot in the dark.
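
For reference, maproot in the UI corresponds to the `-maproot` option in a BSD-style exports line, which is roughly what TrueNAS generates behind the scenes (the path and subnet are placeholders):

```bash
# /etc/exports: map the client's root to root:wheel, for one allowed subnet.
/mnt/tank/share -maproot=root:wheel -network 192.168.1.0/24
```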

[–] TheDevil@lemmy.world 2 points 1 year ago* (last edited 1 year ago) (3 children)

The two Docker containers can access the share, but the new Proxmox container can’t?

The new Proxmox container will have a different IP. My guess would be that the IP of the Docker host is permitted to access the NFS share but the IP of the new Proxmox container is not.

To test, you can allow access from your entire LAN subnet (e.g. 192.168.1.0/24).
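
You can also check what the server is exporting, and to whom, from the client side (the IP is a placeholder for your TrueNAS box):

```bash
# List exports plus the hosts/networks allowed to mount them.
showmount -e 192.168.1.10
```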

Edit: For reference see: https://www.truenas.com/docs/scale/scaletutorials/shares/addingnfsshares/#adding-nfs-share-network-and-hosts

In particular: “If you want to enter allowed systems, click Add to the right of Add hosts. Enter a host name or IP address to allow that system access to the NFS share. Click Add for each allowed system you want to define. Defining authorized systems restricts access to all other systems. Press the X to delete the field and allow all systems access to the share.”

[–] TheDevil@lemmy.world 0 points 1 year ago (1 children)

Are you sure this wasn’t written by some poor intern under some form of duress?

“With the download tray, you can see a list of all your downloads from the past 24 hours in any browser window, not just the one in which you originally downloaded a file. The tray also offers in-line options to open the folder a download is in, cancel a download, retry a download should it fail for any reason, and pause/resume downloads.”

[–] TheDevil@lemmy.world 1 points 1 year ago* (last edited 1 year ago)

If your only goal is working HTTPS then, as the other comment correctly suggests, you can do DNS-01 authentication with Let’s Encrypt + Certbot + some brand of DynDNS.

However, the other comment is incorrect in stating that you need to expose an HTTP server. With this method you don’t need to expose anything. For instance, if you do it with HA:

https://github.com/home-assistant/addons/blob/master/letsencrypt/DOCS.md

Certbot uses your DDNS provider’s API to authenticate the cert request by adding a TXT record, then pulls the cert. No proxies, no exposed servers, and no fuss. Point the A record at your RFC 1918 IP.

You can then configure your DNS to keep serving cached responses. I think SSL will still be broken while your connection is down, but you will be able to access your services.

Edit to add: I don’t understand why so many of the HTTPS tutorials are so complicated and so focused on adding a proxy into the mix, even when remote access isn’t the goal.

Certbot is a simple command-line tool. It asks the Let’s Encrypt API for a challenge token. It adds the token as a TXT record on a subdomain of the domain you want a certificate for. Let’s Encrypt confirms the token is there and spits out a cert. You add the cert to whatever server it belongs to, or ideally Certbot does that for you. That’s it: working HTTPS. And all you have to expose is the RFC 1918 address. This, to me at least, is preferable to proxies and exposed servers.
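
To make that concrete, here’s roughly what the whole flow looks like with the Cloudflare DNS plugin (the provider, domain, and paths are placeholders; every Certbot DNS plugin follows the same shape):

```bash
# One-time setup: certbot plus the DNS plugin for your provider (Debian/Ubuntu).
apt install certbot python3-certbot-dns-cloudflare

# Request a cert via DNS-01; certbot creates and removes the TXT record itself.
certbot certonly \
  --dns-cloudflare \
  --dns-cloudflare-credentials /root/.secrets/cloudflare.ini \
  -d home.example.com

# Certs land under /etc/letsencrypt/live/home.example.com/ for your server to use.
```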

[–] TheDevil@lemmy.world 1 points 1 year ago* (last edited 1 year ago)

Not that I don’t love Ubiquiti, but OPNsense and pfSense will also handle failover:

https://docs.opnsense.org/manual/multiwan.html

This is also possible within Linux, Windows and *BSD by adding both routes and weighting them accordingly:

https://serverfault.com/questions/226530/get-linux-to-change-default-route-if-one-path-goes-down
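
The Linux version is a single multipath default route (gateways and interface names are placeholders). Note that on its own this is load balancing; for true failover you still want something watching the links, as that thread discusses:

```bash
# Weighted multipath default route: eth0's gateway takes roughly 2/3 of new flows.
ip route replace default scope global \
  nexthop via 192.168.1.1 dev eth0 weight 2 \
  nexthop via 192.168.2.1 dev eth1 weight 1
```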

[–] TheDevil@lemmy.world 5 points 1 year ago (2 children)

Yes. Depending on your network configuration you could consider using cellular data as a backup form of connectivity.

[–] TheDevil@lemmy.world 0 points 1 year ago* (last edited 1 year ago) (1 children)

The short answer is no, because it’s a pain in the ass and offers little tangible benefit. But I can speculate.

If I were going down this path I would look for an x86 box with a WiFi card that is supported by OPNsense or pfSense (that’s usually going to depend on the available *BSD drivers). I don’t know how well they would function, but I would expect quirks. You could also check the compatibility lists of the open router distributions to find something that’s well supported, and check the forums for posts from people with similar goals to gauge their mileage.
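
Step one either way is finding out exactly which WiFi chipset you’re dealing with, since driver support hinges on that rather than the brand on the box. From a Linux live USB, for example:

```bash
# Identify the wifi card and the kernel driver currently bound to it.
lspci -nnk | grep -iA3 network

# The FreeBSD/OPNsense equivalent is:
# pciconf -lv
```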

You might even be able to achieve this with an ESP32.

But what are you hoping to achieve? Do you mean open radio firmware or do you mean open drivers? Or an open OS talking to a closed radio? What’s the benefit?

Radios in any device are discrete components running their own show.

Open drivers should be possible. However, I have a feeling that open firmware for the radio hardware in WiFi access points is going to be extremely hard to find. The regulatory agencies really don’t want the general public to have complete control because of the possibility of causing interference and breaking the rules, and for good reason: imagine if your neighbour had bad signal and ignorantly cranked up his AP’s power output, not realising that he can’t do the same with his client devices, rendering the change useless.

I seem to remember an FCC rule change some time back that seemed to disallow certification for devices that let end users modify the firmware, much to the concern of open router users at the time. The rule was aimed at radio firmware, but the worry was that the distinction would be lost and the rule applied to the entire router by overzealous manufacturers who, at best, hate third party firmware.

A fully open radio is basically an SDR. Can you move packets over an SDR? Hell yes, but now you’re in esoteric ham radio territory. It’s going to be a hell of a fun project and you’re going to learn a lot, but as far as a practical WiFi AP goes, your results will be limited.

I use FOSS wherever it’s practical, but if you want working WiFi, just stick to the well-tested brand names. For what it’s worth, you probably won’t gain any security by going open; if there’s any weakness it’ll probably be baked in at the protocol level, which open devices would need to follow anyway. At least a discrete AP can be isolated and has no reason to be given internet access.

[–] TheDevil@lemmy.world 0 points 1 year ago (3 children)

I would take these projects over stock firmware on traditional home routers any day, and I have done so where I’ve been unable to rig a more permanent solution. They have an honourable mission in a segment of hardware filled with absolute junk.

But the trouble is that the sheer number of hardware targets, the meagre resources on these devices, and most manufacturers’ contempt for third party firmware combine to make them hard to flash and rarely updated, if you’re lucky enough to have a supported device at all. Even then they’re prone to quirks and bugs. Some devices do receive, and are capable of receiving, updates, but they often cost more than an equivalent low-TDP general purpose computer.

Just imagine: the developers of DD-WRT have to target not just each individual router model but every single revision, as manufacturers have a habit of switching major components or even entire chipsets between product revisions. On top of that, the documentation for the components used might be sparse or non-existent. I’m impressed that these router distributions can make it work at all, but that doesn’t make it any more practical or sustainable.

At this point you may as well flip the router into modem mode, run OPNsense or pfSense, and get a fully fledged operating system with far more resources than any of these SoCs. Assuming you have the power budget, you’ll get assured updates and far more flexibility with fewer compatibility issues and quirks. My passively cooled N5105 box with 8GB of RAM and a 128GB HDD happily routes a 1Gb/s WAN while simultaneously hosting a busy Home Assistant instance, and the resources aren’t even maxed out.

Following my experience I will always opt to run dedicated APs. DD-WRT’s WiFi support is amazing considering what they have to work with, but there are only so many WiFi chipsets they can support, and because they try to support as much as they can, there are always problems with something. I really don’t have time to constantly troubleshoot WiFi by following cryptic posts from years ago. Ubiquiti gear isn’t flawless either, but it’s stable and a lot less prone to hard-to-trace issues. YMMV.

DD-WRT and friends, I love you; you really saved my ass a few times when all I had was some shitty CPE. You’re still way nicer than Cisco gear. But I find it hard to justify using a gimped SoC from a couldn’t-care-less manufacturer when I can buy a 5W TDP passively cooled x86 computer for ~100 USD.
