[–] theamigan@lemmy.dynatron.me 19 points 1 year ago* (last edited 1 year ago) (2 children)

Except each container ships its own libc and all of its other dependencies. If a linked binary or library has a different inode, it gets loaded into memory separately. I would say it is indeed quite similar, even if the images in question aren't hundreds of megabytes like with Electron.
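
You can see it for yourself with something like this (untested sketch, image names are just examples):

```sh
# Compare device:inode of libc across two containers built from
# different base images (image names are just examples):
docker run --rm debian:12 stat -c '%d:%i %n' /lib/x86_64-linux-gnu/libc.so.6
docker run --rm ubuntu:24.04 stat -c '%d:%i %n' /lib/x86_64-linux-gnu/libc.so.6
# Different device:inode pairs mean the kernel caches and maps two
# separate copies of libc, even if the file contents happen to be identical.
```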

[–] MotoAsh@lemmy.world 4 points 1 year ago* (last edited 1 year ago)

The funny thing is, as much as people shit on Java, that's exactly what its Java EE container architecture was for: truly tiny microservices in WARs, an entire app in an EAR, all managed by a parent container that can deduplicate dependencies with a shared class loader (if done well) and automatically scale WARs horizontally, too.

No idea how to get that level of sharing with OS-level containers.

[–] ActuallyRuben@actuallyruben.nl 4 points 1 year ago (1 children)

That's not entirely true. OverlayFS supports page cache sharing for files in image layers. If your images share the same base image layer, then they should share libc and friends in the page cache.

https://docs.docker.com/storage/storagedriver/overlayfs-driver/#overlayfs-and-docker-performance
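
E.g. (sketch, names illustrative): if both services build FROM the same base, the files in that layer resolve to the same inodes, so they're only paged in once:

```Dockerfile
# Sketch, names illustrative: two services sharing one base layer.
# service-a/Dockerfile
FROM debian:12-slim
COPY service-a /usr/local/bin/service-a
CMD ["service-a"]

# service-b/Dockerfile
FROM debian:12-slim
COPY service-b /usr/local/bin/service-b
CMD ["service-b"]

# Both images reuse the identical debian:12-slim layer, so libc in
# both containers resolves to the same inode and is mapped once.
```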

[–] theamigan@lemmy.dynatron.me 1 points 1 year ago (2 children)

"Different inode" means a different file entirely, not necessarily its majorminor:inode tuple resolved through bind mounts/overlayFS/whatever. I'm saying that if you have containers using even slightly different base images, you effectively have n copies of libc in memory at once on the same system, which does not happen when you do not use containers.

If your applications require different libc versions, then you'd have each of them in memory regardless of whether you use containers. If they don't require different versions, then you're just blaming containers for something the user is responsible for managing.

When Alpine images are a dozen or so MB, base image disk size is basically irrelevant in the grand scheme of things, as you probably have much more than that in dependencies and runtimes. Even Debian base images are pretty tiny these days. Depending on the application, you could ship just a single binary with no OS files at all. So if you do care about disk and memory usage, you would take advantage of the tools containers give you to optimize for that. It's the user's choice how many resources they want to use; it's not the fault of the tooling.
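
The "single binary with no OS files" case is just a multi-stage build (sketch, assuming a static Go binary; module and paths are illustrative):

```Dockerfile
# Sketch: static binary in a scratch image -- no OS files at all.
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
# CGO_ENABLED=0 gives a statically linked binary, so no libc is needed.
RUN CGO_ENABLED=0 go build -o /app .

FROM scratch
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```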

[–] agressivelyPassive@feddit.de 1 points 1 year ago

If you're running enough images on the same machine for that to be a relevant point, you have absolutely no excuse not to provide common base images.

Basically, there are two scenarios here: either you're running a service where others deploy their images (Azure, etc.), in which case you want isolation; or you're running your own images, in which case you should absolutely provide a common base image.
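
Something like this (registry and names hypothetical):

```Dockerfile
# Hypothetical shared base, built once and pushed to an internal registry:
# base/Dockerfile
FROM debian:12-slim
RUN apt-get update \
    && apt-get install -y --no-install-recommends ca-certificates \
    && rm -rf /var/lib/apt/lists/*

# Every service then starts from that one image:
# service/Dockerfile
FROM registry.example.com/common-base:1.0
COPY service /usr/local/bin/service
CMD ["service"]
```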