PriorProject

joined 1 year ago
[–] PriorProject@lemmy.world 13 points 1 year ago (1 children)

Another user posted the blog post where they discuss their speedup techniques: https://tailscale.com/blog/more-throughput/

It's likely that the kernel implementation could use similar techniques to surpass the performance of the userspace version that Tailscale uses, but no one has put in the work to make the kernel implementation as sophisticated as the userspace one.

[–] PriorProject@lemmy.world 5 points 1 year ago

I had a look through the comments on this HN thread the other day and came away more intrigued by https://github.com/openobserve/openobserve than HyperDX. HyperDX is built on top of ClickHouse, whereas OpenObserve has its own storage engine based on Parquet files that can be accessed from local disk, S3, or a few other protocols.

I haven't tried either option yet... I'm currently using Netdata for metrics and don't do anything special for logs or tracing, but at tiny self-hosting scale I often find software with its own storage engine (often SQLite) to be extra hassle-free. I'm curious to kick the tires on OpenObserve for that reason.

[–] PriorProject@lemmy.world 0 points 1 year ago* (last edited 1 year ago)

This is a very strong explanation of what's going on. As a follow-up, I believe ZeroTier presents a single Ethernet broadcast domain, so WoL tricks are more likely to work naturally there than with WireGuard. I haven't used ZeroTier, and I do use WireGuard via Tailscale/Headscale. I've never missed the Ethernet features of ZeroTier, and they CAN result in a very chatty WAN if you're not careful. But I think ZT would make this straightforward.

Though as other people note... the simplest/least-disruptive change is probably to expose some scripty thing on the rpi that can be triggered over a routed protocol, and then have the rpi emit the Ethernet broadcast packets onto the physical network.
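
As a rough sketch of that idea (the MAC address and port are placeholders, and this isn't hardened in any way): a tiny HTTP endpoint on the Pi that emits the WoL magic packet onto its local Ethernet segment, so the trigger itself only needs ordinary routed connectivity over the tunnel.

```python
#!/usr/bin/env python3
# Minimal WoL trigger: run on the rpi, then hit http://<rpi-address>:8080/wake
# from anywhere on the tailnet. The MAC address below is a placeholder.
import socket
from http.server import BaseHTTPRequestHandler, HTTPServer

TARGET_MAC = "aa:bb:cc:dd:ee:ff"  # MAC of the machine to wake

def send_magic_packet(mac: str) -> None:
    # Magic packet = 6 bytes of 0xFF followed by the target MAC repeated 16 times
    payload = bytes.fromhex("ff" * 6 + mac.replace(":", "") * 16)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(payload, ("255.255.255.255", 9))  # broadcast onto the local LAN

class WakeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/wake":
            send_magic_packet(TARGET_MAC)
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"magic packet sent\n")
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), WakeHandler).serve_forever()
```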

6
Athascon 2023 (tabletop.events)
submitted 1 year ago* (last edited 1 year ago) by PriorProject@lemmy.world to c/rpg@ttrpg.network
 

Welcome to ATHASCON 2023, a virtual role-playing game convention celebrating all things Dark Sun! Step into a post-apocalyptic desert realm where you battle to survive the harsh and unforgiving elements, savage psionic beasts, bloodthirsty raiders and the minions of the evil sorcerer-kings. Register now for only $5!

[–] PriorProject@lemmy.world 2 points 1 year ago (1 children)

No no, sorry. I mean can I still have all my network traffic go through some VPN service (mine or a providers) while Tailscale is activated?

Tailscale just partnered with Mullvad so this works out of the box for that setup: https://tailscale.com/blog/mullvad-integration/

For others, it's a "yes on paper" situation. It often won't work out of the box, but it seems likely to be possible as an advanced configuration. At the far end of the possibilities, it would definitely be possible to set up a couple of Docker containers as one-armed routers, one running your VPN and one running Tailscale as an exit node. Then they can each have their own networking stack, and you can set up your own routes and DNS, delegating only the necessary bits to each one. That's a pretty advanced setup and you may not have the know-how for it, but it demonstrates what's possible.
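
For flavor only (not a tested setup; the VPN-side image is a pure placeholder and the Tailscale env vars are from memory), the shape of that two-container idea in docker-compose terms might be something like the sketch below. The fiddly part, the routes and DNS delegation between the two, is exactly the advanced bit described above and isn't shown here.

```yaml
services:
  vpn-client:
    image: your-vpn-provider/client        # placeholder: whatever client your provider offers
    cap_add: [NET_ADMIN]
    devices: ["/dev/net/tun:/dev/net/tun"]

  tailscale-exit:
    image: tailscale/tailscale
    cap_add: [NET_ADMIN]
    devices: ["/dev/net/tun:/dev/net/tun"]
    environment:
      - TS_AUTHKEY=tskey-auth-...          # placeholder auth key
      - TS_STATE_DIR=/var/lib/tailscale
      - TS_EXTRA_ARGS=--advertise-exit-node
    volumes:
      - ts-state:/var/lib/tailscale

volumes:
  ts-state:
```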

[–] PriorProject@lemmy.world 1 points 1 year ago (3 children)

To a first approximation, Tailscale/Headscale don't route any traffic.

Ah, well damn. Is there a way to achieve this while using Tailscale as well, or is that even recommended?

Is there a way to achieve what? Force Tailscale to route all traffic through the DERP servers? I don't know, and I don't know why you'd want to. When my laptop is at home on the same network as my file-server, I certainly don't want Tailscale sending file-server traffic out to my Headscale server on the Internet just to download it back to my laptop on the same network it came from. I want NAT traversal to allow my laptop and file-server to negotiate the most efficient network path that works for them... whether that's within my home lab when I'm there, across the internet when I'm traveling, or routing through the DERP server when no other option works.

OpenVPN or vanilla WireGuard are commonly set up with simple hub-and-spoke routing topologies that send all VPN traffic through "the VPN server", but this is generally a slower path than a direct connection. It might be imperceptibly slower over the Internet, but it will be MUCH slower than the local network unless you do some split-DNS shenanigans to special-case the local-network scenario. With Tailscale, it all more or less works the same wherever you are, which is a big benefit. The exception is if you have a true multi-gigabit network at home and the encryption overhead slows you down... WireGuard is pretty fast though, and not a problematic throughput limiter for the vast majority of cases.
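
If you're curious which path a given connection has actually negotiated, the Tailscale CLI will tell you (the hostname here is a placeholder, and output formatting varies by version):

```
# Lists peers and whether each connection is direct or relayed through DERP
tailscale status

# Probes a specific peer and reports the path the packets end up taking
tailscale ping my-fileserver
```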

[–] PriorProject@lemmy.world 13 points 1 year ago (5 children)

Have a read through https://tailscale.com/blog/how-nat-traversal-works/

You, and many commenters, seem pretty confused about how Tailscale/Headscale work.

  1. To a first approximation, Tailscale/Headscale don't route any traffic. They perform NAT traversal, and data flows directly between nodes on the tailnet without passing through Headscale/Tailscale at all.
  2. If NAT traversal fails badly enough, it's POSSIBLE for bulk traffic to flow through the Headscale/Tailscale DERP nodes... but that's an unusual scenario.
  3. You probably can't run Headscale from your home network and have it perform the NAT traversal functions correctly. Of course, I can't know that for sure because I don't know anything about your ISP... but home ISPs that prevent Headscale from doing its NAT traversal job are the norm... one would be pleasantly surprised to find a home network that can do it properly (a quick way to check is sketched below the list).
  4. Are you really expecting 10Gb/s speeds over your encrypted links? I don't want to say it's impossible, people do it... but you'd generally only expect to see this on fairly burly servers that are properly configured. Tailscale just bragged in April about hitting 10Gb/s speeds with recent optimizations: https://tailscale.com/blog/more-throughput/ and on home hardware with a novice config I'd generally expect to see something more like single gigabit.
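
For what it's worth, a quick way to sanity-check the NAT situation from any node is the built-in netcheck subcommand (the exact output varies by client version):

```
# Reports UDP reachability, the NAT behavior it detects, and latency to the DERP
# relays -- a decent first clue about whether direct connections will work from there.
tailscale netcheck
```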
[–] PriorProject@lemmy.world 7 points 1 year ago

I don't know what's up in your case, but I would not jump to the conclusion that it's impossible to use Tailscale with any other VPN in any circumstance.

Rather, Tailscale and Mullvad will now work easily and out of the box. For other VPNs, you may need to understand the topology and routing of virtual devices, and have the technical ability and system permissions to make deep networking changes.

So I'd expect one can probably find a way for most things to coexist on a Linux server. On a non-rooted Android phone? I'm less confident.

[–] PriorProject@lemmy.world 3 points 1 year ago* (last edited 1 year ago)

So I have a question, what can I do to prevent that from happening? Apart from hosting everything on my own hardware of course, for now I prefer to use VPS for different reasons.

Others have mentioned that client-caching can act as a read-only stopgap while you restore Vaultwarden.

But otherwise the solution is backup/restore. If you run Vaultwarden in a Docker or Podman container using volumes to hold state... then as long as you can restart Vaultwarden without losing data, you also know exactly what data needs to be backed up and what needs to be done to restore it. Set up a nightly cron job somewhere (your laptop is fine enough if you don't have somewhere better) to shut down Vaultwarden, rsync its volume dirs, and start it up again. If your VPS explodes, copy these directories to a new VPS at the same DNS name and restart Vaultwarden using the same Podman or docker-compose setup.
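
A minimal sketch of that nightly job, assuming a docker-compose layout and rsync-over-ssh to wherever your backups live (paths, service name, and destination host are all placeholders):

```
#!/bin/sh
# Stop Vaultwarden so the database isn't being written mid-copy, sync the
# volume dirs off-box, then bring it back up. Run from cron, e.g. "0 3 * * *".
cd /opt/vaultwarden                      # wherever your docker-compose.yml lives
docker compose stop vaultwarden
rsync -a ./data/ backup-host:/backups/vaultwarden/
docker compose start vaultwarden
```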

All that said, KeePass+filesync is a great solution as well. The reason I moved to Vaultwarden was so I could share passwords with others in a controlled way. For single-user use, I prefer how KeePass folders work, and KeePass generally has better organization features... I'd still be using it if it were only for myself.

[–] PriorProject@lemmy.world 2 points 1 year ago (1 children)

Yeah, snapshots sent to a separate and often remote pool are an extremely common backup strategy for folks who have long-term settled on ZFS. There's very nice tooling for this that presents a more traditional schedule/retention-based interface, saving you from scripting snapshots and sends directly (a rough Sanoid policy is sketched after the list).

  • Sanoid is an old standby in that space.
  • Zrepl is getting a lot of traction lately and seems to be an up-and-coming option.
  • I use pyznap, but I don't recommend it to others, as the maintainer is on a multi-year hiatus which leaves it undermaintained. It works great, but it isn't getting active development, which makes it a poor bet in a crowded space with many great options. I plan to evaluate Zrepl when I get around to it.
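
For a flavor of what that schedule/retention interface looks like, a Sanoid policy is roughly an INI file along these lines (the dataset name and retention counts are made-up examples; check the project's sample config for the authoritative keys):

```
# /etc/sanoid/sanoid.conf -- rough example, not copied from a live system
[tank/data]
        use_template = production
        recursive = yes

[template_production]
        hourly = 36
        daily = 30
        monthly = 3
        autosnap = yes
        autoprune = yes
```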
[–] PriorProject@lemmy.world 1 points 1 year ago* (last edited 1 year ago) (3 children)

I don't know if what you're suggesting is possible, which, as I read it, is to split your "live" RAID-1 in half and use one drive to rebuild the "live" pool and the other drive to rebuild the "backups" pool. It might be, but I can't think of any advantage to that approach and it's not something I would have thought to attempt.

I'd do one of:

  • Ship the data over the network using ZFS send, or something like syncoid/sanoid (which use ZFS send under the hood). It might be slow, but is that an issue? Waiting a week for the initial sync might be fine.
  • But syncing by sneakernet is a good strategy too, and can be faster if your backup site is close or your connectivity is slow. In this case, I'd build the backup pool at the live site... ideally in an external drive bay, but one could plug it in internally as well. Then sync the pools with a local ZFS send, export the backup pool, detach and transport it to the backup site, then reattach and import it there. Et voilà, the backup pool is running at the remote site fully populated with data, and subsequent ZFS sends will be incremental. (Rough commands for both flows are sketched below the list.)
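
Roughly what either flow looks like at the command line (pool and dataset names are placeholders):

```
# Option 1: initial sync over the network
zfs snapshot -r livepool/data@initial
zfs send -R livepool/data@initial | ssh backup-host zfs receive -u backuppool/data

# Option 2: sneakernet -- sync to a locally attached backup pool, then move it
zfs send -R livepool/data@initial | zfs receive -u backuppool/data
zpool export backuppool      # detach, transport, and reattach at the backup site
zpool import backuppool      # later sends can be incremental with -i/-I
```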

Splitting and rebuilding your live pool might be possible, but I can imagine a lot that might go wrong, and I can't see any reason to do it that way over export/import.

[–] PriorProject@lemmy.world 1 points 1 year ago

You connect to Headscale using the Tailscale clients, and configuration is exactly the same irrespective of which control server you use... with the exception of having to configure the custom server URL for Headscale (which requires navigating some hoops and poor docs on the mobile/Windows clients).

But to my knowledge there are no client-side configs related to NAT traversal (which is kind of the goal... to work seamlessly everywhere). The configs on the Headscale server itself aren't so bad either, but the networking concepts involved are extremely advanced, so debugging anything that goes sideways, or validating that your server-side NAT traversal setup is working as expected, can be a deep dive. With Tailscale, you know any problems are client-side and can focus your attention accordingly... which simplifies initial debugging quite a lot.
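
On a Linux client the custom-server part is a single flag (the URL is a placeholder for your own Headscale instance); it's the mobile and Windows clients where the hoop-jumping lives:

```
# Point the stock Tailscale client at a self-hosted Headscale control server
tailscale up --login-server https://headscale.example.com
```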

[–] PriorProject@lemmy.world 10 points 1 year ago (1 children)

... only if you are in the US and get an API key from NCMEC. They are very protective of who gets the keys and require a zoom call as well.

Do you have a source for these statements? They directly contradict the Cloudflare product announcement at https://blog.cloudflare.com/the-csam-scanning-tool/, which states:

Beginning today, every Cloudflare customer can login to their dashboard and enable access to the CSAM Scanning Tool.

... and shows a screenshot of a config screen with no field for an API key. Some CSAM scanners do have fairly limited access, but Cloudflare's appears to be broadly available.

 

Hey Vaultwarden users... I was turned on to Vaultwarden by this community and have a new installation up and running. I've recently imported a pretty substantial KeePass DB and have been manually validating the import and tidying up my folder organization as I go, including selectively moving some credentials to an organization, with the future intention of adding family members to that org to access shared accounts.

By and large it's all going swimmingly with one concerning exception. Every now and again, a bunch of credentials forget their folder and get moved into "no folder".

  • I don't have a reliable reproduction yet, but it seems vaguely correlated with bulk moves. In the web UI, I'll check a bunch of entries to move from my vault to the org, and OTHER entries I didn't touch get moved to "no folder" in my vault as a side effect.
  • Once, I had a folder disappear like this as well.
  • I think I understand the basics around how collections, folders, and nesting of those containers work. I'm fairly confident that I'm not getting tripped up by just failing to understand the implications of the operation I'm doing.
  • I'm using SQLite for my DB backend. I'm perfectly comfortable running a Postgres instance; I just thought the no-maintenance, no-dependencies approach of SQLite felt like a good match for this tiny but critical dataset. Could it be that the SQLite backend is underbaked and I'm hitting some persistence bug?
  • FWIW, I've also seen issues where I get an encryption-key error saving an entry, or I see tons of missing entries. In each case, logging out and back in works around the issue. I had assumed these were browser/web-vault buglets, but now I wonder if they're more signs of storage-layer problems.

Have others seen similar issues? What db backend are you using?

 

This post overviews several self-hostable management systems for configuring multiple WireGuard clients and tunnels. It gives a nice comparison between them; I learned a bit about how they overlap and differ.
