this post was submitted on 05 May 2025
52 points (88.2% liked)


Was looking through my office window at the data closet and (due to the angle, objects, and field of view) could only see one server's light cluster out of the six full racks. I thought it would be nice to scale everything down to 2U, then day-dreamed about a future where a warehouse data center is reduced to a single hypercube sitting alone in the vast darkness.

top 27 comments
[–] theotherbelow@lemmynsfw.com 2 points 18 hours ago

They tend to fill the space. If you drive by a modern data center, so much grid electrical equipment is just right there. If a hypothetical supermachine used all that power, then sure, you'd get a small data center. Unless they have a nuclear reactor they should (fu felon musk) rely only on grid/solar/renewables.

[–] 4am@lemm.ee 48 points 1 day ago

You think that if we can scale 6 racks down into one cube that someone wouldn’t just buy 6 racks of cubes?

They’ll always hunger for more.

[–] Kolanaki@pawb.social 4 points 1 day ago* (last edited 1 day ago) (2 children)

I sometimes wonder how powerful a computer could be made if we kept the current transistor size we have now, but still built the machine to take up an entire room. At what point would the number of transistors and the size of the machine become more of a problem than a solution? 🤔

[–] Deepus@lemm.ee 2 points 20 hours ago (2 children)

Isn't the main limiting factor signal integrity? Like, we could do a CPU the size of a room now, but it's pointless as the stuff at one end wouldn't even be able to talk to the stuff in the middle, since the signal just gets fucked up on the way?

[–] LH0ezVT@sh.itjust.works 1 points 18 hours ago

Signal integrity will probably be fine; you can always go with optical signalling for the long routes. What would be more of an issue is absurd complexity, latency from one end to the other, that kind of stuff. At some point, just breaking it down into a lot of semi-autonomous nodes in a cluster makes more sense. We kind of already started this with multi-core CPUs (and GPUs are essentially a lot of pretty dumb cores). Today's biggest CPUs all have a lot of cores, for a reason.
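
A rough back-of-the-envelope sketch of the latency point (the 10 m room width, 3 GHz clock, and ~0.5c propagation speed are illustrative assumptions, not numbers from the thread):

```python
# Rough estimate: how many clock cycles does a signal spend just crossing
# a room-sized "single CPU"? Illustrative numbers only.

C = 3.0e8            # speed of light in vacuum, m/s
PROP_FRACTION = 0.5  # assume signals travel at roughly 0.5c over real interconnects
CLOCK_HZ = 3.0e9     # assume a 3 GHz clock
ROOM_METERS = 10.0   # assume a 10 m wide machine room

one_way_s = ROOM_METERS / (C * PROP_FRACTION)
cycles_in_flight = one_way_s * CLOCK_HZ

print(f"one-way delay: {one_way_s * 1e9:.0f} ns")
print(f"clock cycles spent in flight: {cycles_in_flight:.0f}")
# ~67 ns one way, i.e. ~200 cycles at 3 GHz, so hundreds of cycles round trip.
# That's why a room-sized machine ends up looking like a cluster of
# semi-autonomous nodes rather than one tightly coupled CPU.
```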

[–] Jolteon@lemmy.zip 1 points 18 hours ago* (last edited 18 hours ago) (1 children)

IIRC, light speed delay (or technically, electricity speed delay) is also a factor, but I can't remember how much of a factor.

[–] BullishUtensil@lemmy.world 1 points 15 hours ago

It's significant already. If I get the math right (warning, I'm on my phone in bed at 3am and it's been 10 years), I think that a 1-inch chip running at a 3 GHz clock rate could, if you aren't careful with the design of the clock network, end up with half a clock cycle physically fitting on the chip. That is, the trace that was supposed to move the signal from one end of the chip to the other would instead see the clock signal as a standing wave, not moving at all. (Of course, people have tried to make use of that effect. I think it was called "resonant clock distribution" or some such.)
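
The rough numbers behind that (a sketch; the ~0.5c on-chip propagation speed is an assumption, and real RC-dominated wires can be much slower):

```python
# How much physical distance does half a clock cycle cover at 3 GHz?

C = 3.0e8            # speed of light in vacuum, m/s
PROP_FRACTION = 0.5  # assume signals travel at roughly 0.5c in on-chip wiring
CLOCK_HZ = 3.0e9     # the 3 GHz clock from the comment above

period_s = 1.0 / CLOCK_HZ                          # ~333 ps per cycle
half_cycle_m = C * PROP_FRACTION * period_s / 2.0  # distance covered in half a cycle

print(f"clock period: {period_s * 1e12:.0f} ps")
print(f"half a cycle spans: {half_cycle_m * 100:.1f} cm "
      f"(~{half_cycle_m / 0.0254:.1f} inches)")
# ~2.5 cm, about an inch -- so a naive clock trace across a 1-inch die really
# can hold half a clock cycle, consistent with the standing-wave point above.
```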

[–] MNByChoice@midwest.social 6 points 1 day ago (2 children)

They already look silly now: many data centers aren't scaling up power per rack, so with GPUs there are often only two chassis per rack.

[–] Geologist@lemmy.zip 3 points 19 hours ago

I had this problem with Equinix! They limited our company to like 10 kVA per rack, and we were installing NVIDIA DGX servers. Depending on the model we could fit only one or two lol.
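
For a sense of the arithmetic (the per-model wattages below are rough figures from public spec sheets; treat them as assumptions):

```python
# Rough check: how many DGX boxes fit under a ~10 kVA per-rack power cap?
# Wattages are approximate maximums; treating kVA ~ kW for a quick estimate.

RACK_BUDGET_W = 10_000

dgx_max_power_w = {
    "DGX-1":    3_500,   # older model, roughly 3.5 kW max
    "DGX A100": 6_500,   # roughly 6.5 kW max
}

for model, watts in dgx_max_power_w.items():
    fits = RACK_BUDGET_W // watts
    print(f"{model}: ~{watts / 1000:.1f} kW each -> {fits} per rack")
# DGX-1: 2 per rack, DGX A100: 1 per rack -- the "one or two depending on the
# model" situation, with most of the rack space left empty.
```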

[–] InverseParallax@lemmy.world 6 points 1 day ago

We have that problem ourselves: they didn't provision power or cooling for this kind of density, and how do you pipe multiple megawatts into a warehouse in the middle of nowhere?

[–] XeroxCool@lemmy.world 5 points 1 day ago (1 children)

Only if storage density outpaces storage demand. Eventually, physics will hit a limit.

[–] mic_check_one_two@lemmy.dbzer0.com 10 points 1 day ago (1 children)

Physics is already hitting limits. We’re already seeing CPUs be limited by things like atom size, and the speed of light across the width of the chip. Those hard physics limitations are a large part of why quantum computing is being so heavily researched.

[–] XeroxCool@lemmy.world 1 points 1 day ago

Which means it doesn't seem like the limit has been hit yet. For standard devices, the general market hasn't moved to the current physical limits.

[–] sxan@midwest.social 7 points 1 day ago (2 children)

I think what will happen is that we'll just start seeing sub-U servers. First will be 0.5U servers, then 0.25U, and eventually 0.1U. By that point, you'll be racking racks of servers, with 10 0.1U servers slotted into a frame that you mount in an open 1U slot.

Silliness aside, we're kind of already doing that in some uses, only vertically. Multiple GPUs mounted vertically in an xU harness.

[–] Lucien@mander.xyz 21 points 1 day ago

You've reinvented blade servers

[–] partial_accumen@lemmy.world 9 points 1 day ago (2 children)

The future is 12 years ago: HP Moonshot 1500

"The HP Moonshot 1500 System chassis is a proprietary 4.3U chassis that is pretty heavy: 180 lbs or 81.6 Kg. The chassis hosts 45 hot-pluggable Atom S1260 based server nodes"

source

[–] MNByChoice@midwest.social 4 points 1 day ago (1 children)

That did not catch on. I had access to one, and the use case and deployment docs were foggy at best.

[–] InverseParallax@lemmy.world 4 points 1 day ago (2 children)

It made some sense before virtualization for job separation.

Then docker/k8s came along and nuked everything from orbit.

[–] MNByChoice@midwest.social 1 points 1 day ago (1 children)

VMs were a thing in 2013.

Interestingly, Docker was released in March 2013. So it might have prevented a better company from trying the same thing.

[–] InverseParallax@lemmy.world 2 points 1 day ago (1 children)

Yes, but they weren't as fast; VT-x and the like were still fairly new, and the VM stacks were kind of shit.

Yeah, Docker is a shame. I wrote a thin stack on LXC, but BSD jails are much nicer, if only they improved their deployment system.

[–] MNByChoice@midwest.social 2 points 13 hours ago

Agreed.

Highlighting how often software usability reduces adoption of good ideas.

[–] partial_accumen@lemmy.world 2 points 1 day ago (1 children)

The other use case was for hosting companies. They could sell "5 servers" to one customer and "10 servers" to another and have full CPU/memory isolation. I think that use case still exists and we see it used all over the place in public cloud hyperscalers.

Meltdown and Spectre vulnerabilities are a good argument for discrete servers like this. We'll see if a new generation of CPUs will make this more worth it.

[–] InverseParallax@lemmy.world 4 points 1 day ago (1 children)

128-192 cores on a single EPYC make almost nothing else worth it; the scaling is incredible.

Also, I happen to know they're working on even more hardware isolation mechanisms, similar to SR-IOV but more enforced.

[–] partial_accumen@lemmy.world 1 points 1 day ago (1 children)

128-192 cores on a single EPYC make almost nothing else worth it; the scaling is incredible.

Sure, which is why we haven't seen huge adoption. However, in some cases it isn't so much an issue of total compute power; it's autonomy. If there's a rogue process running on one of those 192 cores and it can end up accessing the memory in your space, it's a problem. There are some regulatory rules I've run into that actually forbid company processes on shared CPU infrastructure.

[–] InverseParallax@lemmy.world 1 points 1 day ago

There are, but at that point you're probably buying big iron already; cost isn't an issue.

Sun literally made their living from those applications for a long while.

[–] sxan@midwest.social 1 points 1 day ago

Yeah, that's the stuff.