[-] Pfosten@feddit.de 4 points 3 months ago

There is no downside to nested encryption, except of course the performance overhead. But this only really makes sense if each layer has an independent key and each layer uses an algorithm from a different family. Improper key reuse weakens the scheme.

For symmetric cryptography like AES the benefit is dubious. It is far more likely that the content is decrypted because the key was acquired independently than that AES would be broken.

However, there absolutely is a benefit for asymmetric crypto and key agreement schemes. This is how current Post-Quantum Cryptography schemes work, because:

  • commonly used algorithm families like RSA and Elliptic-Curve cryptography will be broken as soon as a sufficiently large quantum computer exists
  • proposed PQC algorithms are comparatively immature, and some of them will be broken in the near future

Nesting one algorithm from each family gives us the best of both worlds, at a performance overhead: conventional asymmetric cryptography gives us temporary security in the near future, and the second PQC layer gives us a chance at long-term security.
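The hybrid approach can be sketched in a few lines of Python: run both key agreements with independent keys, then feed both shared secrets into a KDF, so the derived key stays secret as long as *either* input does. The secrets below are random stand-ins (in practice they would come from, say, X25519 and ML-KEM); the HKDF is the real RFC 5869 construction with SHA-256.

```python
import hashlib
import hmac
import secrets

def hkdf(secret: bytes, info: bytes, length: int = 32) -> bytes:
    # HKDF (RFC 5869) with SHA-256 and an all-zero salt.
    prk = hmac.new(b"\x00" * 32, secret, hashlib.sha256).digest()
    okm, t, counter = b"", b"", 1
    while len(okm) < length:
        t = hmac.new(prk, t + info + bytes([counter]), hashlib.sha256).digest()
        okm += t
        counter += 1
    return okm[:length]

# Stand-ins for two independently derived shared secrets:
classical_secret = secrets.token_bytes(32)  # e.g. from an X25519 exchange
pq_secret = secrets.token_bytes(32)         # e.g. from an ML-KEM encapsulation

# Concatenating both secrets before the KDF means an attacker must
# break BOTH key agreements to recover the session key.
session_key = hkdf(classical_secret + pq_secret, b"hybrid-kex demo")
```

This "concatenate and KDF" combiner is essentially what deployed hybrid schemes (e.g. X25519+Kyber in TLS experiments) do, modulo encoding details.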

[-] Pfosten@feddit.de 14 points 4 months ago

The text does technically give the reason on the first page:

It is not a regular language and hence cannot be parsed by regular expressions.

Here, "regular language" is a technical term, and the statement is correct.

The text goes on to discuss Perl regexes, which I think are able to parse at least all languages in LL(*). I'm fairly sure that is sufficient to recognize XML, but am not quite certain about HTML5. The WHATWG standard doesn't define HTML5 syntax with a grammar, but with a stateful parsing procedure which defies normal placement in the Chomsky hierarchy.
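To illustrate the "regular language" point: a true regular expression corresponds to a finite automaton with no unbounded memory, so it cannot verify arbitrarily deep nesting. A minimal sketch (hypothetical, checking only balanced `<b>` tags) needs an explicit counter alongside the regex, and that counter is exactly what a regular expression lacks:

```python
import re

def balanced(html: str) -> bool:
    """Check that <b> tags are properly nested. The regex only tokenizes;
    the depth counter supplies the memory a finite automaton doesn't have."""
    depth = 0
    for tag in re.findall(r"</?b>", html):
        depth += 1 if tag == "<b>" else -1
        if depth < 0:  # closing tag with no matching opener
            return False
    return depth == 0
```

Backtracking engines like Perl's, with recursion and backreferences, can express this kind of counting, which is why they recognize far more than the regular languages.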

This, of course, is the real reason: even if such a regex is technically possible with some regex engines, creating it is extremely exhausting and each time you look into the spec to understand an edge case you suffer 1D6 SAN damage.

[-] Pfosten@feddit.de 3 points 5 months ago

The cookie consent rules appeared in 2009, and the consent requirements were made stricter in 2018 with the GDPR.

EU bodies such as the WP29 data protection board had been writing since at least 2014 on the need for reform, because the cookie consent rules are onerous in practice. Everyone wants reform.

So there was (is?) an effort to replace the ePrivacy Directive with a shining new ePrivacy Regulation that would also harmonize it with the GDPR. At the time, it was hoped it could come into force together with the GDPR in 2018. This regulation would have allowed the use of some cookies without consent, even when not strictly necessary.

But the proposed regulation is disliked by both the data protection side and the industry side, because it changes the existing balance. It was heavily lobbied against by Google and others, and never got ready enough for a vote (report from 2017, and in 2021 the NYT reported on internal documents in which Google boasted that it had successfully slowed down any progress). Every year someone in the EU picks it up again, but there's always something more important and it gets dropped. I guess the effort this article reports on will falter as well.

Some silver linings though:

  • Because enforcement responsibility for cookie consent currently differs from that for GDPR matters, clever data protection authorities like Belgium's and France's have been able to issue fines against big tech companies without having to involve their extremely industry-friendly Irish colleagues.
  • Subsequent lobbying has not been able to prevent improvements on other aspects, e.g. Digital Markets Act and Digital Services Act, the latter of which also forbids Dark Patterns. However, these Acts primarily affect very large companies, not the average website.
[-] Pfosten@feddit.de 9 points 7 months ago

For a project like Signal, there are competing aspects of security:

  • privacy and anonymity: keep as little identifiable information around as possible. This can be a life or death thing under repressive governments.

  • safety and anti-abuse: reliably block bad actors such as spammers, and make it possible for users to reliably block specific people (e.g. a creepy stalker). This is really important for Signal to have a chance at mass appeal (which in turn makes it less suspicious to have Signal installed).

Phone number verification is the state of the art approach to make it more expensive for bad actors to create thousands of burner accounts, at the cost of preventing fully anonymous participation (depending on the difficulty of getting a prepaid SIM in your country).

Signal points out that sending verification SMS is actually one of its largest cost centers, currently accounting for 6M USD out of their 14M USD infrastructure budget: https://signal.org/blog/signal-is-expensive/

I'm sure they would be thrilled if there were cheaper anti-abuse measures.

[-] Pfosten@feddit.de 85 points 7 months ago

This article is ahistorical and needlessly conspiratorial.

Signal and its predecessors like TextSecure have been run by different companies/organizations:

  • Whisper Systems
  • Open Whisper Systems
  • Signal Technology Foundation (and its subsidiary Signal Messenger LLC)

Open Whisper Systems received about 3M USD total from the US government via the Open Technology Fund for the purpose of technology development … during 2013 to 2016. Source: archive of the OTF website: https://web.archive.org/web/20221015073552/https://www.opentech.fund/results/supported-projects/open-whisper-systems/

The Signal Foundation (founded 2018) was started with a 105M USD interest-free loan from Brian Acton, known for co-founding WhatsApp and selling it to Facebook (now Meta).

So, some key takeaways:

  • It doesn't seem like the Signal Foundation received US government funding. (Though I haven't checked financial statements.)
  • The US government funding seems to be a thing of the fairly distant past (2016). The article makes it sound like the funding was just pulled this year.
  • The US government funding was small compared to Signal's current annual budget. It was not small at the time, but now Signal regularly makes more from licensing its technology than it ever received from the US government. According to ProPublica, Signal's financial statements for 2022 indicate revenue of about 26M USD.
[-] Pfosten@feddit.de 10 points 7 months ago

It would be unwise for a bank to publish its exact fraud detection and risk management policies, otherwise they could be easily circumvented. A lot of these policies will be embodied in their internal backend services.

Someone will now inevitably mention "security by obscurity". But Kerckhoffs's principle is specifically about cryptosystems, which should derive their security solely from the strength of the keys. That way, confidentiality is still ensured even when details about the cryptosystem become known to adversaries.

But non-cryptographic aspects of security benefit from asymmetric knowledge, from grey areas, from increasing risk for adversaries.

[-] Pfosten@feddit.de 17 points 7 months ago

Cryptography works. At least until sufficiently powerful quantum computers arrive, TLS reliably ensures confidentiality between your browser and the server. No one else can snoop on the data transmitted via that connection.

But are you connected to the right server? Without some kind of authentication, any adversary in the middle (such as your ISP) could impersonate the real server.

That is where certificates come in. They are issued by neutral certificate authorities (CAs) that check the identity. It works something like this:

  • I, the server operator, create a private key on that server. I use that key to create a certificate request which asks the CA to give me a certificate. This request also contains the domain names for which the key shall be used.
  • The CA performs identity checks.
  • The CA issues me the certificate. I install it on my server. Now, when browsers create a TLS connection I can tell them: here's my public key you can use to check my identity, and here's a certificate that shows that this is a valid key for this domain name!
  • The browser will validate the certificate and see if the domain name matches one of the names in the certificate.

What kind of checks are done depends on the CA. I've obtained certificates by appearing in person at a counter, showing my government ID, and filling out a form. Nowadays more common is the ACME protocol which enables automated certificate issuance. With ACME, the CA connects to the server from multiple network locations (making interception unlikely) and checks if the server provides a certain authentication token.
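The last validation step, matching the connection's hostname against the names in the certificate, can be sketched roughly like this (a deliberately simplified version for illustration; the real rules in RFC 6125 are stricter, e.g. around wildcards and internationalized names):

```python
def hostname_matches(hostname: str, cert_names: list[str]) -> bool:
    """Simplified RFC 6125-style matching: exact match, or a single
    left-most wildcard label. '*.example.org' matches 'www.example.org'
    but not 'example.org' itself, and not 'a.b.example.org'."""
    host_labels = hostname.lower().split(".")
    for name in cert_names:
        labels = name.lower().split(".")
        if labels == host_labels:
            return True
        if (labels and labels[0] == "*"
                and len(labels) == len(host_labels)
                and labels[1:] == host_labels[1:]):
            return True
    return False
```

If no name matches, the browser aborts the handshake, because it has no evidence that the key on the other end belongs to the site the user asked for.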

To know which certificates are valid, browsers must know which CAs are trusted. Browser makers and CAs have come together to create an evolving standard of minimum requirements that CAs must fulfill to be eligible for inclusion in the browser's default trust store. If a CA violates this (for example by creating certificates that can be used for government traffic interception, or by creating a certificate without announcing it in a public transparency list), then future browser versions will remove them, making all their certificates worthless.

eIDAS 2 has the effect of circumventing all of this. There is to be a government-controlled CA (already high-risk) that has its own verification rules set by legislation (does not meet industry standard rules). And browsers would be legally forced to include the eIDAS CAs as "trusted".

This puts browsers in a tough spot, because they've resisted these kinds of requests from authoritarian regimes in the past. But now the world's largest trade bloc is asking. Browsers can comply, leave the EU market, or maybe ship a less secure EU edition? This awakens uncomfortable memories of the failed US attempts at cryptography export control (cryptography was legally considered a munition, like hand grenades or ballistic missiles).

It is plausible that the EU is doing this with good intentions: having a digital identity scheme is useful, it makes sense for identity to be government-controlled (like passports), and such a scheme will only see success if it sees industry adoption. The EU has also seen that hoping for voluntary industry adoption doesn't generally work, e.g. see the USB-C mandate.

[-] Pfosten@feddit.de 3 points 8 months ago

FYI, https://privacytests.org/ gives a good browser privacy comparison. No affiliation, and I don't know how accurate the data is.

After starting that project, its author went to work for Brave.

The data and tests seem good, but some aspects of the methodology are opinionated. For example, browsers are tested in their out of the box configuration, not in a configuration that a reasonably privacy-conscious user would select with a couple of clicks. Thus, a browser dedicated to blocking tracking (like Brave) gets a lot more checkmarks than a general audience browser like Firefox.

LibreWolf is essentially Firefox with all those privacy features pre-enabled.

[-] Pfosten@feddit.de 61 points 8 months ago* (last edited 8 months ago)

Well, it's about Peter Thiel, who also founded the Palantir surveillance technology company. As a source for his involvement with Brave, Wikipedia cites this TechCrunch article, which mentions funding from Thiel's "Founders Fund".

I'd rather criticize Brave for other reasons though, like being led by Brendan Eich or supporting crypto.

[-] Pfosten@feddit.de 23 points 8 months ago

In the language of Polandball comics, it means "give land/territory".

"Gib" is taken from the German imperative for "give!"

Best theory is that "clay" as "land" originates from a bad machine translation.

It seems someone created a wiki that covers these terms: https://www.polandballwiki.com/wiki/Terminology#Clay

[-] Pfosten@feddit.de 5 points 9 months ago

You actually get a say in how that money is spent, whereas if you donate to developers directly then they decide …

You get to be involved in a "steering committee" of sorts for that project,

The project itself always decides, unless you fork the project and do your own thing. You can wave a carrot in front of them (if you do X then you get $Y), but the relevant factor is going to be the size of the carrot $Y, not directly the "collective bargaining" via that platform. How would your platform facilitate finding a stronger negotiation position?

this membership organization is incorporated as a non-profit, whereas the software project you're supporting may not be

It sounds like you may have discovered the "fiscal sponsor" concept. There are a couple of nonprofits already offering such services, such as the Software Freedom Conservancy or OpenCollective. The foundations like Linux Foundation, Apache Foundation, or Eclipse Foundation also come to mind.

However, for all of these the project decides to join a host. Foundations can't just annex projects.

[-] Pfosten@feddit.de 4 points 1 year ago

For NAS hard drives, it's important that the firmware is designed for NAS use (so don't just use a cheaper regular desktop drive), and that the drive does not use Shingled Magnetic Recording (SMR). SMR saves the manufacturer money, but under heavy write load such drives get flagged as faulty by RAID controllers and dropped from the array. For example, exactly when you want to replace a drive in a RAID.

WD now uses SMR in its "WD Red" line, which is marketed for entry-level NAS… Certain older WD Red models are fine, though.

In practice, that leaves you with a choice between two model lines:

  • WD Red Pro
  • Seagate Ironwolf / Ironwolf Pro

Personally, I now use Ironwolf exclusively. They have somewhat worse failure statistics than the alternatives, but for a small NAS that barely affects the total cost of ownership.

And always remember: RAID is good for availability, but RAID is not a backup.
