Title. Mostly because of two flags: --read-only and --log-driver.
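Something along these lines, roughly (the image name, tmpfs path and log driver choice are just placeholders for whatever the setup actually uses):

```
# --read-only   mounts the container's root filesystem read-only
# --tmpfs       gives the app a small writable scratch area in RAM
# --log-driver  controls where container logs go ("none" turns them off entirely)
docker run -d \
  --read-only \
  --tmpfs /tmp \
  --log-driver none \
  some-image
```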

[–] sir_reginald@lemmy.world 12 points 11 months ago* (last edited 11 months ago) (1 children)

honestly, it's not worth it. hard drives are cheap, just plug one in via USB 3 and do all the write operations there. that way your little SBC doesn't suffer the performance overhead of using docker.
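e.g. one way to push all of docker's writes onto the drive, if you keep using it (the mount point is made up):

```
# as root: relocate /var/lib/docker (images, volumes, container logs) to the external drive
cat > /etc/docker/daemon.json <<'EOF'
{
  "data-root": "/mnt/usb-hdd/docker"
}
EOF
systemctl restart docker
```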

[–] aksdb@feddit.de 3 points 11 months ago (1 children)

The point about an external drive is fine (I did that on my RPi as well), but the point about performance overhead due to containers is incorrect. The processes in the container run directly on the host. You even see the processes in ps. They are simply confined using kernel namespaces (and limited with cgroups) so they are isolated to different degrees.
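Easy to verify yourself; a quick sketch (nginx is just an arbitrary example image):

```
docker run -d --rm --name ps-demo nginx
ps aux | grep '[n]ginx'   # the nginx master/worker processes show up in the host's process list
docker stop ps-demo
```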

[–] sir_reginald@lemmy.world -1 points 11 months ago (2 children)

docker images have a ton of extra processes from the OS they were built in. Normally a light distro is used to build images, like Alpine Linux. but still, you're executing a lot more processes than if you were installing things natively.

Of course the images do not contain the kernel, but they still contain a lot of extra processes that would be unnecessary if executing natively.

Containers don't typically have an init; your process is the init (PID 1), so no extra processes are started beyond the ones you care about.
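You can see that with any throwaway image (alpine used purely as an example):

```
docker run --rm alpine ps
# output looks roughly like:
# PID   USER     TIME  COMMAND
#     1 root     0:00  ps
# the command you ran is PID 1 - no init, nothing else running
```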

[–] aksdb@feddit.de 3 points 11 months ago

To execute more than one process, you need to explicitly bring along a supervisor or use a more complicated entrypoint script that orchestrates this. But most container images have a simple entrypoint pointing to a single binary (or at most a script that does some filesystem/permission setup and then execs a single process).
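A sketch of that typical pattern (the binary name and setup step are made up):

```
#!/bin/sh
# typical single-process entrypoint: a bit of setup, then exec the one binary,
# which replaces this shell and becomes PID 1 in the container
set -e
chown -R app:app /data          # example permission fix-up
exec /usr/local/bin/my-app "$@"
```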

Containers running multiple processes are possible, but hard to pull off and therefore rarely used.

What you are likely thinking of are the files included in the images. Sure, some images bring more libs and executables along. But those are not started and are not running in the background (unless you explicitly start them as the entrypoint or via, for example, docker exec).
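For example, any extra binaries in the image just sit on disk until you start them yourself (the container name is a placeholder):

```
docker exec -it my-container sh   # explicitly starts one additional shell process in the running container
```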