[-] notabot@lemm.ee 1 points 2 hours ago* (last edited 2 hours ago)

I've found HSBC to be ok using Firefox on Linux. I don't know if they have integrations with any accounting software, but the web access works well, and you can export your transactions for processing locally.

ETA: I've run small business accounting on Gnucash, I found the learning curve a bit steep, but once you 'get it' it's handy.

[-] notabot@lemm.ee 1 points 1 day ago

Sorry for the slow reply, life occurred.

I think I understand where you're coming from with the desire to be productive and not reinstall. I think I've been there too! One thing I can suggest, if you do have the time, is to learn a system like Ansible and use it to set up and configure your machine. The discipline of keeping all of the config as source, rather than making ad-hoc changes, reduces the chance of making 'just one little change' and breaking something, and, if something does go wrong, you can get back to your working configuration quickly.
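To give a flavour of what I mean, a minimal playbook might look something like this (the repo URL, package list and file names are all made up for illustration; in real use you'd keep setup.yml in version control and run it with `ansible-playbook setup.yml`):

```shell
# Write an example Ansible playbook to a temp dir and show it.
# Everything in it is hypothetical - adapt to your own machine.
dir=$(mktemp -d)
cat > "$dir/setup.yml" <<'EOF'
---
- hosts: localhost
  connection: local
  tasks:
    - name: Install everyday packages
      ansible.builtin.package:
        name: [git, vim, firefox]
        state: present
    - name: Keep dotfiles under version control
      ansible.builtin.git:
        repo: https://example.org/me/dotfiles.git
        dest: ~/.dotfiles
EOF
cat "$dir/setup.yml"
```

The point is that every change to the machine goes through that file, so your whole setup is reproducible from one command.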

Bearing in mind that there really isn't anything you can do to stop yourself if you're truly determined not to lose the data (if you can read it at any time, you can back it up), the closest you are likely to come is something like creating a new key with GPG, then using the TPM to wrap your secret key and deleting the original. That way the key is only usable on that specific machine. Then use the key-pair to encrypt your 'guard' files. You can still decrypt them because you have the wrapped secret key and you're on the same machine, but if you wipe the drive and lose those keys the data is gone. The TPM wrapping prevents you from taking the keys to a different machine to decrypt your data.

There's an article with some examples here.

Having said all of that, this still doesn't help if you just clone the disk, as all of the data, including the wrapped key and the encrypted files, will be cloned. The one difference is that the serial number of the hard drive will be different. Maybe you could use that, combined with a password, as the passphrase for your GPG key, but we're getting into pretty esoteric territory here. You could generate a secret key with a command like:

( lsblk -dno SERIAL /dev/sdb ; zenity --title "Enter decrypt password" --password) | sha1sum | cut -c1-40
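To see the shape of the output without needing zenity or a real disk, here's the same pipeline with made-up stand-ins (WXYZ1234 for the drive serial, hunter2 for the typed password):

```shell
# Stand-in for the lsblk + zenity pipeline: concatenate a (hypothetical)
# drive serial and password, hash them, and keep the 40 hex characters.
derive() {
  ( printf '%s\n' "$1" ; printf '%s\n' "$2" ) | sha1sum | cut -c1-40
}
pass=$(derive "WXYZ1234" "hunter2")
echo "$pass"   # always 40 hex characters, stable for the same inputs
```

The important property is that the result is deterministic: the same serial and password always produce the same passphrase, and a different disk produces a different one.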

Where /dev/sdb is the device your root partition is on. zenity is a handy utility for displaying dialogs (there are others available); here it just prompts for a password. We then concatenate the drive serial number from lsblk with the password you entered and hash the result. The hashing is really only a convenient way to mix the two without worrying about the newline lsblk spits out. Don't record the result of this command, but use it to set the passphrase on your new GPG key. Wrapping the secret key in the manner the article above suggests is a nice extra step that makes it harder to move the drive to another machine or mess around in that sort of way, but it's not strictly necessary, as that wasn't in the scope of your original question.

Now you can encrypt your file with: `gpg -e -r <your key name> <your file>`. That will produce an encrypted version called `<your file>.gpg`. To decrypt the file you can get `gpg` to use the hashing command from above to get the passphrase with something like:

gpg -d --pinentry-mode=loopback --batch --passphrase-fd 3 <your file>.gpg 3< <( ( lsblk -dno SERIAL /dev/sdb ; zenity --title "Enter decrypt password" --password) | sha1sum | cut -c1-40 )

Once you've tested that you can decrypt the file successfully, you can remove the original, plaintext file. Your data is now encrypted with a key that is secured with a passphrase made from a string you know plus the serial number of your disk, and optionally wrapped with a key from the TPM that is tied to your physical machine. If you change the disk or the machine, the data is irretrievable (barring the caveats discussed above). I think that's about as close to your original goal as you can get. It's rough around the edges, and I'm not sure I'd trust my data to it, but I believe it'll work. If you do something like this, please test it thoroughly, as I can't guarantee it!
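As a quick way to convince yourself the encrypt/decrypt round trip works before trusting real data to it, here's a sketch using gpg's symmetric mode as a simpler stand-in for the keypair setup above, with a fixed string in place of the serial+password hash:

```shell
# Round-trip test in a throwaway directory. The passphrase here is a
# hypothetical stand-in for the derived serial+password hash.
cd "$(mktemp -d)"
pass="0123456789abcdef0123456789abcdef01234567"
echo "secret data" > plain.txt
# Encrypt, then decrypt back to a second file.
gpg --symmetric --batch --yes --pinentry-mode loopback \
    --passphrase "$pass" -o plain.txt.gpg plain.txt
gpg --decrypt --batch --pinentry-mode loopback \
    --passphrase "$pass" plain.txt.gpg > roundtrip.txt 2>/dev/null
# The two files should be identical.
cmp plain.txt roundtrip.txt && echo "round trip OK"
```

Only after a check like this succeeds would I delete the plaintext original.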

[-] notabot@lemm.ee 2 points 4 days ago

Yes, yes, but now let's take that, make it dependent on the session management system and DNS resolver for some reason, make the command longer and more convoluted, and store the results in one or more of a dozen locations! It'll be great!

/s

Dconf is bad, just imagine how bad a systemd version would be.

[-] notabot@lemm.ee 1 points 5 days ago

Yeah, I know there was one a while back, and if you don't use ECC RAM, given enough time, it will eat your data as it tries to correct checksum errors due to memory corruption. That's why we keep backups, right. Right?

I tend to assume that every storage system will eventually lose data, so having multiple copies is vital.

[-] notabot@lemm.ee 3 points 6 days ago

I think the difference is the level it's happening at. As I said, I haven't tried it yet, but it looks like a simple, unfussy and minimal distribution that you then add functionality to via configuration. Having that declarative configuration means it's easy to test new setups, roll back changes and even easily create modified configuration for other servers.

[-] notabot@lemm.ee 1 points 6 days ago

cries It's amazing how much damage they've done to the Linux ecosystem. Not just the badly thought out concepts, but the amount of frustration and annoyance they've caused by ramming them into existence, and the cynicism that's created.

[-] notabot@lemm.ee 1 points 6 days ago

Having consistent interface names on servers that have several is useful, but we already had that option. The interface names they generate are not only hard to remember, but not terribly useful as they're based on things like which PCI slot they're in, rather than what their purpose is. You want interface names like wan0 and DMZ, not enp0s2. Of course, you can set it up to use useful names, but it's more complicated than it used to be, so while the systemd approach looks like a good idea on the surface, it's actually a retrograde step.
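For what it's worth, pinning a meaningful name yourself is just a small systemd.link file. Here's a sketch with a made-up MAC address; in real use the file would live at /etc/systemd/network/10-wan0.link rather than a temp dir:

```shell
# Write an example systemd.link file that renames the NIC with the given
# (hypothetical) MAC address to wan0. Using a temp dir for illustration;
# the real location is /etc/systemd/network/.
dir=$(mktemp -d)
cat > "$dir/10-wan0.link" <<'EOF'
[Match]
MACAddress=aa:bb:cc:dd:ee:ff

[Link]
Name=wan0
EOF
cat "$dir/10-wan0.link"
```

So the capability is there; my gripe is that the out-of-the-box slot-based names are the ones nobody actually wants.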

[-] notabot@lemm.ee 2 points 6 days ago

He may have taken some ideas from there, but I still see more Windows-like ideas. We're one bad decision away from systemd-regedit. If that happens, I might just give up completely.

[-] notabot@lemm.ee 3 points 6 days ago

I try not to think about the things they've done, it's not good for my blood pressure. They had a decent desktop distro, but they seem determined to trash it with terrible decisions.

[-] notabot@lemm.ee 79 points 3 months ago

tips fedora

M'Debian.

(Had one too many problems with Fedora)

[-] notabot@lemm.ee 145 points 3 months ago

The internet in its heyday, when it was a genuinely thrilling place to find information (and quite a lot of weirdness), before it was swamped by corporate interests.

I remember starting out with gopher and a paper printout of 'The Big Dummy's Guide to the Internet', which was a directory of almost every gopher and FTP site (pre web), along with a description of what you'd find there. Then the web came along and things got really good for a while. Once big corporations got involved it all went downhill.

[-] notabot@lemm.ee 77 points 4 months ago

Have you considered supplementing your income by committing massive fraud?

You need to start by making small changes to your daily habits, and build up to massive fraud. If you try to do it all at once, the habit won't stick.

