[-] jlh@lemmy.jlh.name 7 points 22 hours ago* (last edited 22 hours ago)

Those back seats with railings feel like a broken leg waiting to happen if the bike tips over. Long John-style cargo bikes seem so much safer for kids. Bikes shouldn't restrict leg movement. Even standard child seats have better fall protection since they have those high backs.

[-] jlh@lemmy.jlh.name 11 points 2 days ago

Co-ops are still about the money. They're about saving money by sharing resources with fellow workers/consumers, and maintaining democratic control over the company. You're not going to get rich from a co-op (without embezzlement), but you and your co-owners will be cutting out the middleman. Obviously, it only makes sense for industries that you're heavily invested in.

[-] jlh@lemmy.jlh.name 13 points 2 days ago

Self-hosting can save a lot of money compared to Google or AWS. Also, self-hosting doesn't make you any more vulnerable to DDoS; you can be DDoSed even without a home server.

You don't need VLANs to keep your network secure, but you should make sure that no self-hosted service is unnecessarily opened up to the internet, and that all of your services are kept up to date.
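For example (just a toy sketch, assuming a Python service; the port is arbitrary), binding to localhost instead of all interfaces keeps a service reachable only through whatever you deliberately put in front of it, like a reverse proxy or VPN:

```python
from http.server import HTTPServer, SimpleHTTPRequestHandler

# Reachable only from the machine itself (e.g. behind a reverse proxy or VPN).
# Binding to "0.0.0.0" instead would expose it on every interface,
# which is what you want to avoid unless the service is deliberately public.
server = HTTPServer(("127.0.0.1", 8080), SimpleHTTPRequestHandler)
server.serve_forever()
```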

What services are you planning to run? I could help suggest a threat model and security policy.

[-] jlh@lemmy.jlh.name 4 points 3 days ago

Not to mention, fiber is cheaper than copper at this point.

Telecoms are just lazy and don't want to string up new lines.

[-] jlh@lemmy.jlh.name 18 points 4 days ago* (last edited 4 days ago)

I'm using IPv6 on Kubernetes and it's amazing. Every Pod has its own global IP address. There is no NAT, and no giant ARP/NDP neighbor table slowing down the other computers on my network. Each of my nodes announces a /112 for itself to my router, allowing it to hand out addresses to over 65k pods. There is no practical limit to the number of IP addresses I can assign to my containers and load balancers, and no routing overhead. I have no need for port forwarding on my router or worrying about dynamic IPs, since I just have a /80 block with no firewall that I assign to my public-facing load balancers.
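To put those prefix sizes in perspective, here's a quick Python sketch using the standard ipaddress module (the 2001:db8:: prefixes are documentation addresses, not my real allocation):

```python
import ipaddress

# Hypothetical prefixes for illustration only -- substitute your own.
node_prefix = ipaddress.ip_network("2001:db8:0:1::/112")   # one /112 per node
lb_prefix   = ipaddress.ip_network("2001:db8:0:2::/80")    # public load balancer block

# A /112 leaves 128 - 112 = 16 host bits -> 65,536 pod addresses per node.
print(node_prefix.num_addresses)   # 65536

# A /80 leaves 48 host bits -> ~281 trillion addresses for load balancers.
print(lb_prefix.num_addresses)     # 281474976710656
```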

Of course, I only have around 300 pods on my cluster, and realistically it's not possible to run over a million containers in current Kubernetes clusters due to other limitations. But it is still a huge upgrade in reducing overhead and complexity, and in increasing scale.

[-] jlh@lemmy.jlh.name 1 points 4 days ago* (last edited 4 days ago)

Ah, fair enough. I figured that since the registers are 512 bits wide, they'd support 512-bit math.

It does look like you can at least load/store and do bitwise operations on full 512-bit values.

Not much difference between 8x64 and a true 512-bit value when it comes to integer math, anyway. For add and subtract, the only extra work is propagating the carry/borrow between the 64-bit lanes; the bitwise operations are completely identical.
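A quick plain-Python sketch (no real SIMD, just modeling the arithmetic) of how a 512-bit add falls out of eight 64-bit limbs plus that cross-lane carry:

```python
MASK64 = (1 << 64) - 1

def to_limbs(x: int) -> list[int]:
    """Split a 512-bit integer into eight 64-bit limbs, least significant first."""
    return [(x >> (64 * i)) & MASK64 for i in range(8)]

def add_512(a: list[int], b: list[int]) -> list[int]:
    """Add two 512-bit numbers limb by limb, like eight 64-bit lanes,
    propagating the carry between lanes (result is mod 2**512)."""
    out, carry = [], 0
    for ai, bi in zip(a, b):
        s = ai + bi + carry
        out.append(s & MASK64)
        carry = s >> 64          # the cross-lane carry is the only extra work
    return out

# Sanity check against Python's native big-integer arithmetic.
x = (1 << 500) + 12345
y = (1 << 300) + 67890
assert add_512(to_limbs(x), to_limbs(y)) == to_limbs((x + y) % (1 << 512))
print("ok")
```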

[-] jlh@lemmy.jlh.name 4 points 5 days ago

Tons of computing is done on x86 these days with 256-bit numbers, and even 512-bit numbers.

[-] jlh@lemmy.jlh.name 9 points 5 days ago

There are plenty of instructions for processing integer and floating-point numbers from 8 bits up to 512 bits with a single instruction and register. There's been a lot of work on packed math instructions for neural network inference.
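As a rough model of what one of those packed inference instructions does across a single 512-bit register (this is just illustrative Python, loosely following the shape of VNNI-style dot-product instructions, not real intrinsics):

```python
def packed_dot_accumulate(acc: list[int], a: list[int], b: list[int]) -> list[int]:
    """Model of a packed int8 dot-product-accumulate over one 512-bit register:
    64 unsigned bytes in `a` times 64 signed bytes in `b`, summed in groups of 4
    into sixteen 32-bit accumulator lanes (overflow/saturation handling omitted)."""
    out = []
    for lane in range(16):
        s = acc[lane]
        for k in range(4):
            s += a[4 * lane + k] * b[4 * lane + k]
        out.append(s)
    return out

# One "instruction" worth of work: 64 multiply-adds folded into 16 lanes.
acc = [0] * 16
a = [1] * 64                 # e.g. unsigned 8-bit activations
b = [2] * 64                 # e.g. signed 8-bit weights
print(packed_dot_accumulate(acc, a, b))   # 16 lanes, each 1*2*4 = 8
```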

[-] jlh@lemmy.jlh.name 104 points 3 months ago* (last edited 3 months ago)

It takes 23 hours and 2000 km to drive from the southernmost point in Sweden to Abisko in the north.

A full loop through Malmö-Kalmar-Stockholm-Luleå-Abisko-Östersund-Göteborg-Malmö takes over 2 days and over 4000 km.

Europe is not small.

[-] jlh@lemmy.jlh.name 96 points 3 months ago

EU regulation continues to be the only thing making big tech's shitty products somewhat usable. First USB-C, now this.

[-] jlh@lemmy.jlh.name 148 points 3 months ago

This is really frustrating. This is the only thing holding Linux gaming back for me, as someone who games with an AMD GPU and an OLED TV. On Windows, 4K120 works fine, but on Linux I can only get 4K60. I've been trying to use an adapter, but it crashes a lot.

AMD seemed to be really trying to bring this feature to Linux, too. Really tragic that they were trying to support us, and some anti-open source goons shot them down.

51
submitted 6 months ago* (last edited 6 months ago) by jlh@lemmy.jlh.name to c/programming@programming.dev

I wanted to share an observation I've seen on the way the latest computer systems work. I swear this isn't an AI hype train post 😅

I'm seeing more and more computer systems these days use usage data or internal metrics to automatically adapt how they run, and I get the feeling that this is a sort of new computing paradigm enabled by the increased modularity of modern computer systems.

First off, I would classify us as being in a sort of "second generation" of computing. Early personal computers in the 80s and 90s were fairly basic: user programs were often written in C or assembly, and often ran directly in ring 0 of the CPU. Leading up to the year 2000, there were a lot of advancements and a lot of technology adoption aimed at making computers more modular. Stuff like microkernels, MMUs, higher-level languages with memory-managed runtimes, and the rise of modular programming in languages like Java and Python. This allowed computer systems to become much more advanced, as the new abstractions let programs reuse code and be a lot more ambitious. We are well into this era now, with VMs and Docker containers taking over computer infrastructure, and modern programming depending on software packages, like you see with npm and Cargo.

So we're still in this "modularity" era of computing, where you can reuse code and even have microservices sharing data with each other, but often the amount of data individual computer systems have access to is relatively limited.

More recently, I think we're seeing the beginning of "data-driven" computing, which uses observability and control loops to run better and self-manage.

I see a lot of recent examples of this:

  • Service orchestrators like systemd and Kubernetes, which monitor the status and performance of the services they own and use that data for self-healing and for optimizing how and where those services run (see the sketch after this list).
  • Centralized data collection systems for microservices, which often include automated alerts and control loops. You see a lot of new systems like this, including Splunk, OpenTelemetry, and Pyroscope, as well as internal data collection systems at all of the big cloud vendors. These systems are all trying to centralize as much data as possible about how services run: not just logs and metrics, but also lower-level data like execution traces and CPU/RAM profiling data.
  • Hardware metrics in a lot of modern hardware. Before 2010, you were lucky if your hardware reported clock speeds and temperature for hardware components. Nowadays, it seems like hardware components are overflowing with data. Every CPU core now not only reports temperature, but also power usage. You see similar things on GPUs too, and tools like nvitop are critical for modern GPGPU operations. Nowadays, even individual RAM DIMMs report temperature data. The most impressive thing is that now CPUs even use their own internal metrics, like temperature, silicon quality, and power usage, in order to run more efficiently, like you see with AMD's CPPC system.
  • Of course, I said this wasn't an AI hype post, but I think the use of neural networks to enhance user interfaces is definitely part of this. The way social media uses neural networks to change what is shown to the user, the upcoming "AI search" in Windows, and the way all this usage data is fed back into neural networks make me think that even user-facing computer systems will start to adapt to changing conditions using data science.
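To make the "control loop" idea concrete, here's a toy observe-and-reconcile loop in Python. The service names and the stubbed-out observe() are made up purely for illustration; a real orchestrator would query its runtime and actually start or stop replicas:

```python
import time

# Desired state: which services should be running, and how many replicas.
desired = {"web": 3, "worker": 2}

def observe() -> dict[str, int]:
    """Collect the current state, e.g. from a metrics endpoint or process table.
    Stubbed out here for illustration."""
    return {"web": 2, "worker": 2}

def reconcile(current: dict[str, int]) -> None:
    """Compare observed state to desired state and act on the difference."""
    for name, want in desired.items():
        have = current.get(name, 0)
        if have < want:
            print(f"scaling up {name}: {have} -> {want}")    # e.g. start replicas
        elif have > want:
            print(f"scaling down {name}: {have} -> {want}")  # e.g. stop replicas

while True:
    reconcile(observe())
    time.sleep(30)   # the loop runs forever, continuously self-correcting
```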

I have been kind of thinking about this "trend" for a while, but this announcement that ACPI is now adding hardware health telemetry inspired me to finally write up a bit of a description of this idea.

What do people think? Have other people seen the trend for self-adapting systems like this? Is this an oversimplification on computer engineering?

[-] jlh@lemmy.jlh.name 97 points 7 months ago* (last edited 7 months ago)

It's worse than that. Tesla is refusing to recognize the mechanics' union at all. The Swedish mechanics' union has many members at Tesla and has asked to negotiate, and Tesla is flat-out refusing to sign a deal that would bring its working standards up to national standards. This would be illegal in the US under the NLRA.

Union collective agreements are so important in Sweden that they literally are our labor laws. Sweden does not have a minimum wage or overtime pay in national law at all; those are always regulated in the collective agreement. Tesla is refusing to accept any sort of minimum wage, overtime pay, etc. for their employees. They are trying to do business in Sweden without playing by basic labor rules, and they are being shunned by all of Sweden for it. They will end up like Toys R Us in 1995.

Source: I am a Swedish white-collar union member

220
submitted 7 months ago by jlh@lemmy.jlh.name to c/europe@feddit.de

Awful to see our personal privacy and social lives being ransomed like this. €10 seems like price gouging for a social media site, and I'm even seeing a price tag of 150 SEK (~€15) in Sweden.
