[-] IsoKiero@sopuli.xyz 7 points 1 day ago

I've used Seafile for years for exactly this. I haven't run it on a Pi, but on a virtual machine it runs pretty smoothly and the Android client is pretty hassle-free.

[-] IsoKiero@sopuli.xyz 1 points 6 days ago

I want to prevent myself from reinstalling my system.

No even remotely normal file on disk prevents that, regardless of encryption, privileges, attributes or anything else your running OS could do to the drive. If you erase the partition table, your 'safety' file is lost too, no questions asked, because at that point the installer neither cares about nor even sees individual files on the medium. And that's exactly what the 'use this drive automatically for installation' option does on pretty much every installer I've seen.

Protecting myself from myself.

That's what backups are for. If you want to block any random USB-stick installer from running, you could set up boot options in the BIOS to exclude external media and set a BIOS password, but that only prevents you from 'accidentally' reinstalling the system from external media.

And neither of those has anything to do with read/copy protection for the files. If they contain sensitive enough data they should be encrypted (and backed up), but that's a whole different problem from protecting the drive against an accidental wipe. Any software-based limitation on your files falls apart immediately (except reading the data, if it's encrypted) the moment you boot another system from external media or another hard drive, because whatever solution you were using to protect them is no longer running.

Unless you hand system management over to someone else (root passwords, BIOS password and settings...) who can keep you from shooting yourself in the foot, there's nothing that can get you what you want. Maybe some cloud-based filesystem from Amazon with immutable copies could achieve it, but it's not really practical on any level, financially very much included. And even with that (if it's even possible in the first place, I'm not sure), if you're the one holding all the keys and passwords, the whole system is at your mercy anyway.

So the real solution is to back up your files, verify regularly that the backups actually work, and learn not to break your things.

[-] IsoKiero@sopuli.xyz 22 points 1 month ago

Mullvad (apparently; first time I've heard of the service) uses DNS over TLS, and I don't think the current GUI version has an option to enable it. Here's a quickly googled howto from Fedora on how to enable it on your system. If that doesn't help, search for 'NetworkManager DoT' or 'DNS over TLS'.
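
For what it's worth, on systems where systemd-resolved handles DNS, the gist is a couple of lines of config. A sketch only; the Mullvad resolver address and TLS hostname below are from memory, so verify them against Mullvad's own documentation:

# /etc/systemd/resolved.conf
# the IP and TLS hostname below are assumptions -- check Mullvad's docs
[Resolve]
DNS=194.242.2.2#dns.mullvad.net
DNSOverTLS=yes

After editing, restart the resolver with 'sudo systemctl restart systemd-resolved' and check 'resolvectl status' to see that DNS over TLS is actually on.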

[-] IsoKiero@sopuli.xyz 40 points 6 months ago

dd. It writes to the disk at block level and doesn't care whether there's any kind of filesystem or RAID configuration in place; it just writes zeroes (or whatever you ask it to write) to the drive and that's it. Depending on how tight your tin foil hat is, you might want to write a couple of passes from /dev/zero and /dev/urandom before handing the drives over, but in general a single full pass from /dev/zero makes it pretty much impossible for any Joe Average to get anything out of them.
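
In practice that's one command per pass; a sketch where /dev/sdX is a placeholder, and this is destructive, so triple-check the device name first:

# single zeroing pass over the whole drive (DESTRUCTIVE -- check the device name!)
dd if=/dev/zero of=/dev/sdX bs=1M status=progress
# tighter tin foil hat: an extra pass of random data
dd if=/dev/urandom of=/dev/sdX bs=1M status=progress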

And if you're concerned that some three-letter agency is interested in your data, you can use DBAN, which does pretty much the same thing as dd but automates the process and (afaik) does some extra magic to erase the data more thoroughly. But in general, if you're worried enough about that scenario, I'd suggest an arc furnace and literally melting the drives into an exciting new alloy.

[-] IsoKiero@sopuli.xyz 36 points 7 months ago

The bare feet make the headline a bit clickbaity. That alone doesn't mean much, but when it happens in an area where you should be wearing full protective gear, in the (supposedly) sterile part of manufacturing, it's of course a big deal. It would be an equally big deal if you just strolled in there in jeans and a t-shirt, wearing boots you'd stepped in dog shit with on your way to work. And even then it's not even close to the biggest issue at a plant that constantly ignored safety protocols, including test results telling them the product was faulty.

11

I'm not quite sure if electronics fits this community, but maybe some of you could point me in the right direction with ESPHome and an IR transmitter for controlling the mini-split heat pump in my garage.

The thing is the cheapest one I could find (I should've paid more, but that's another story). It's rebranded cheap Chinese crap, and while the vendor advertised that you could control it over wifi, I found no information beyond 'use SmartApp to remote control' (or whatever that software was called). The app is nowhere to be found, and I don't want to let that thing onto the internet anyway.

So, IR to the rescue. I had an 'infrared remote control module' (like this) lying around, and with an Arduino Uno I could capture the IR codes from the remote without issues.

But transmitting them back out seems to be a bit more challenging. I believe I have the configuration in place, and I even attempted to control our other heat pump with the IR Remote Climate component, which should support it out of the box.
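
In case it helps, the relevant part of my config looks roughly like the sketch below; the GPIO pin and protocol name are stand-ins rather than anything verified for this unit:

# rough sketch of the ESPHome bits; pin and protocol are stand-ins
remote_transmitter:
  pin: GPIO14
  carrier_duty_percent: 50%

climate:
  - platform: heatpumpir
    protocol: fujitsu_awyz
    horizontal_default: middle
    vertical_default: middle
    name: "Garage minisplit"
    min_temperature: 18
    max_temperature: 30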

I tried to power the IR LED straight from a NodeMCU pin (most likely a bad idea) and via an IRFZ44N mosfet (massive overkill, but it's what I had around) from the 3.3V rail. The circuit itself seems to work: if I replace the IR LED with a regular one, it's very clear that the LED lights up when it should.

However, judging by the amount of IR light I can see through a cellphone camera, it feels like either the IR LED is faulty (very much a possibility; what can you expect from a 1€ kit) or I'm somehow driving it wrong.

Any ideas on what's wrong?

39
submitted 7 months ago by IsoKiero@sopuli.xyz to c/linux@lemmy.ml

I think the installation was originally 18.04, installed when that was released. A while ago, anyway, and I've been upgrading it as new versions rolled out. With the latest upgrade and the snapd software it has become more and more annoying to keep the operating system happy and out of my way so I can actually do whatever I need to do on the computer.

Snap updates have been annoying and have randomly (and temporarily) broken things while some update process ran in the background, but as a whole reinstallation is a pain in the rear, I've just swallowed the annoyance and kept the thing running.

But today, when I had planned to spend the day on paperwork and other "administrative" things I've been putting off because life has been busy, I booted the computer and the primary monitor was dead, the secondary was at something like 1024x768, the nvidia drivers were absent and usability in general just wasn't there.

After a couple of swear words I thought, OK, I'll fix this: I'll install all the updates and make the system happy again. But no. That's not going to happen, at least not easily.

I'm running LUKS encryption and thus have a separate /boot partition, 700MB of it. I don't remember whether the installer recommended that or I just threw some reasonable-sounding amount at it, but wherever the number came from, it should be enough (the other Ubuntu box I'm writing this on has 157MB stored in /boot). I removed older kernels, but the updater still claims it needs at least 480MB (or something like that) of free space on /boot, while a single kernel image, initrd and whatever else it includes consume about 280MB. So apt just fails on upgrade because it can't generate the new initrd, or whatever it's trying to do.
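
For reference, this is roughly the kind of cleanup I mean (standard apt/dpkg commands, nothing exotic):

# see what's eating /boot and which kernel packages are installed
du -sh /boot/*
dpkg -l 'linux-image-*' | grep '^ii'
# purge kernels that are no longer needed (the running one is kept)
sudo apt autoremove --purge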

So I grabbed my Ventoy drive, downloaded the latest Mint ISO onto it, and instead of doing the productive things I had planned I'll spend a couple of hours reinstalling the whole system. It'll be quite a while before I install Ubuntu on anything again.

And it's not just this one broken update. Like I mentioned, I've had a lot of issues with this setup, and the majority of them were caused by Ubuntu and its package management. This was just the tipping point: time to finally leave this abusive relationship with my tool and set things up so that I can actually use the computer instead of figuring out what's broken now and what will break next.

[-] IsoKiero@sopuli.xyz 24 points 7 months ago

I'd say single-core performance and the amount of RAM are the biggest issues when running anything on old hardware. Apparently, in theory, you can run even a modern kernel with just 4MB of RAM (or even less; good luck finding a 32-bit system with less than 4MB). I don't think you could fit any kind of graphical environment on top of that, but for an SSH terminal or something similarly lightweight it would be enough.

However, a modern browser will easily consume a couple of gigabytes of RAM, and even a 'lightweight' desktop environment like XFCE will use a couple of hundred MB without much going on. So it depends heavily on what you consider 'old'.

The computer in the garage (which I'm writing this on) is a ThinkStation S20 from 2011 that I got for free from the office years ago. 12GB of RAM, a 4-core Xeon CPU and an aftermarket SSD on the SATA bus, and it easily does everything I need in this use case: browsing the web for how to fix whatever I'm working on in the garage, listening to music on Spotify, the occasional YouTube video, Signal and things like that. Granted, it was fairly high-end when new, but maybe it gives some perspective.

5
submitted 8 months ago* (last edited 8 months ago) by IsoKiero@sopuli.xyz to c/homeassistant@lemmy.world

Maybe this hivemind can help debug a Z-Wave network. I recently added two devices, bringing the network up to 15: two repeaters, light switches, wall plugs, a thermostat and a couple of battery-operated motion sensors.

Before the latest addition everything worked almost smoothly. Every now and then a motion sensor message didn't go through, but it was rare enough that I didn't pay much attention, as I have plenty of other things to do than tinker with the occasional hiccup in home automation.

However, for the last 48 hours (or so) the system has been unreliable enough that I need to do something about it. I tried to debug the messages a bit, but I'm not too familiar with what to look for. These messages are frequent, though, and seem to be a symptom of the issue:

Dropping message with invalid payload

[Node 020] received S2 nonce without an active transaction, not sure what to do with it

Failed to execute controller command after 1/3 attempts. Scheduling next try in 100 ms.

Especially the 'invalid payload' message appears constantly in the logs. I'd guess one of the devices is malfunctioning, but other options are that there's somehow a loop in the network (I did attempt to reconfigure the whole thing; it didn't change much) or that my RaZberry 7 Pro is faulty.

Could someone give me a hint on how to proceed and figure out which of these is the case?

Edit: I'm running Home Assistant OS on a raspberry pi 3.

7
submitted 8 months ago* (last edited 8 months ago) by IsoKiero@sopuli.xyz to c/homeassistant@lemmy.world

I've been trying to get a bar graph of Nord Pool electricity prices, but for some reason the graph style won't change no matter how I configure it.

I'm running Home Assistant OS on a Raspberry Pi 3:

  • Home Assistant 2023.10.1
  • Supervisor 2023.10.0
  • Operating System 10.5
  • Frontend 20231005.0 - latest

Currently my configuration for the card is like this:

type: custom:mini-graph-card
name: Pörssisähkö
entities:
  - entity: sensor.nordpool
    name: Pörssisähkö
    group-by: hour
    color: '#00ff00'
    show:
      graph: bar

But no matter how I change that, the graph stays the same, and other options, like a line graph with or without fill, don't work as expected either. Granted, I'm not that familiar with YAML or Home Assistant itself, but this is something I'd expect to "just work", as the configuration for mini-graph-card is quite simple. The card displays the correct data from the sensor, but only as a line graph.
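
While writing this I noticed that the README examples seem to put group_by and show at the card level rather than under the entity, so maybe my nesting is the problem? Something like this, if I'm reading it right:

type: custom:mini-graph-card
name: Pörssisähkö
entities:
  - entity: sensor.nordpool
    name: Pörssisähkö
    color: '#00ff00'
group_by: hour
show:
  graph: bar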

Is this something a recent update broke, or am I doing something wrong? I can't see anything immediately wrong in any logs or in the javascript console.

[-] IsoKiero@sopuli.xyz 22 points 8 months ago

Is there a way to use a USB wifi card (tp-link) as the receiver and make it work?

No. The mouse is not a wifi device, and as it's an old one I doubt it supports bluetooth either. So you really need the original dongle.

[-] IsoKiero@sopuli.xyz 46 points 9 months ago

I don't know what I'd pick, but something other than PDF for the task of transferring documents between systems. And yes, I know PDF has its strengths and there's a reason it's so widely used, but that doesn't mean I have to like it.

Additionally, all proprietary formats, especially the ones that have gained enough users to be treated as a standard, or as a requirement if you want to work with X.

[-] IsoKiero@sopuli.xyz 25 points 9 months ago

I ran one for a while. In Finland the legislation is a bit different, so I wasn't worried about breaking the law or getting sued, but my ISP got in touch pretty quickly. They were professional and understood the situation when I explained why my traffic might look "a bit" suspicious, and I tried to filter the bad actors out of the traffic, but eventually the ISP received enough complaints that they were pretty much forced to tell me either to shut the exit node down or they'd cut my line.

As I said, they were very professional about it and handled the whole experiment as well as I could have hoped, but my agreement with them includes a clause that if I keep letting malware and bad actors out of my network even after warnings, they can shut the connection down. And that's understandable: I suppose they have similar agreements with other providers, and they were the ones receiving all the abuse mail my exit node generated. So I'm still a happy customer, even though they eventually had to take the hard line.

I'm still pretty sure it would be possible to run a filtered exit node, but it would require far more time and resources than I'm willing to spend on a project like that, and I'm not sure a single person would be enough for it anyway.

So yes, do your homework and be careful. Legislation plays a significant part (depending on where you live), but your ISP most likely won't like it either.

[-] IsoKiero@sopuli.xyz 22 points 9 months ago

And as they'll be using a mobile app, potentially a metric shit ton more: location, contacts, usage patterns, make and model of the phone, other connected devices on your wifi, and the list goes on. I'm not sure whether they actually grab everything they can the way TikTok does, but at least the theoretical possibility exists.

210

cross-posted from: https://derp.foo/post/250090

There is a discussion on Hacker News, but feel free to comment here as well.

[-] IsoKiero@sopuli.xyz 40 points 11 months ago

DNS is quite a mature technology; it's exactly as complex as it needs to be and not a bit more. It's a very robust system that has been a big part of the backbone of the internet as we know it for decades, and it's responsible for quite a large chunk of things working as intended, globally, for billions of people, all day every day.

It's not hard to learn per se (you can explain it at a basic level to any layman in 15 minutes or so); it's just a complex system, and understanding complex systems isn't always easy or fast. Running your own DNS server/forwarder for a /24 private subnet is a rather trivial thing to do, but doing it well requires understanding at least some of the underlying technology.
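
Even that 15-minute basic level can be hands-on. For example, following a lookup manually with dig already teaches a lot (the forwarder address below is a placeholder for your own resolver):

# walk the whole chain: root servers -> .com -> the domain's own nameservers
dig +trace example.com A
# then ask your own forwarder directly and compare
dig @192.168.1.1 example.com A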

You really need to learn to walk first and build on that before you can run. It's a fundamental piece of technology, and due to the nature of DNS services there are no shortcuts. You can throw something into a container by following step-by-step instructions and call it a day, but that alone doesn't give you the knowledge to understand what's going on under the hood. That's just how things are, and if I had my way, the same principle would apply to everything, especially anything facing the public internet.

23
submitted 11 months ago* (last edited 11 months ago) by IsoKiero@sopuli.xyz to c/selfhosted@lemmy.world

This question has come up a couple of times already, but I haven't found an option that would allow multiple users and multiple OSes (Linux and Windows mostly; mobile support, both Android and iOS, would be nice at least for viewing) to conveniently share the same storage.

This has been an issue on my network for quite some time, and now that I've rebuilt my home server, installed TrueNAS in a VM and started organizing my collections there with Shotwell, the question has become acute again.

Digikam seems promising for everything except organizing the actual files (which I can live with; either Shotwell or a shell script sorting them by EXIF dates would do, as sketched below), but I haven't tried it with Windows yet, and my Kubuntu desktop seems to offer only a snap package of it, without support for an external SQL database.
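
The shell-script fallback I have in mind is roughly an exiftool one-liner; the paths here are placeholders:

# sort photos into YYYY/MM folders based on their EXIF timestamp
# (-r recurses into subdirectories; paths are placeholders)
exiftool -r '-Directory<DateTimeOriginal' -d /mnt/photos/%Y/%m /mnt/incoming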

On "editing" part it would be pretty much sufficient to tag photos/folders to contain different events, locations and stuff like that, but it would be nice to have access to actual file in case some actual editing needs to be done, but I suppose SMB-share on truenas will accomplish that close enough.

Another need-to-have feature is handling RAW and JPG versions of the same image, at least somehow. Even removing the JPGs and keeping only the RAW files would be sufficient.

And finally, I'd really like the actual files to live on a network share (or somewhere similar) so that they're easy to back up, copy to an external Nextcloud for sharing, and in general leave me more flexibility in case something better comes along or my environment changes.

