[-] TheHobbyist@lemmy.zip 2 points 5 hours ago

Recently started Fallout 3 on the Steam Deck; I'm a few hours in and it's a pretty good game! I love the freedom and the exploration, though the fighting and various encounters can occasionally be challenging. The story is good so far.

[-] TheHobbyist@lemmy.zip 79 points 3 days ago

My number one gripe with Organic Maps is how fragile the search is. If you don't phrase a query exactly right, you get no results or irrelevant ones. It also seems to have no notion of what is popular or what people expect when they search for something. I'm not talking about personalized results, but about things like the following: searching for "Eiffel" leads me to minor roads, restaurants and all kinds of results unrelated to the Eiffel Tower. That is what troubles me the most.

[-] TheHobbyist@lemmy.zip 13 points 3 days ago

Exactly, this is about compression. Just imagine a full HD image, 1920x1080, with 8 bits of color for each of the 3 RGB channels. That comes to 1920 x 1080 x 8 x 3 = 49,766,400 bits, roughly 50 Mb (or roughly 6 MB). This is uncompressed. Now imagine a video at 24 frames per second (typical for movies): that's almost 1200 Mb/second. For a 1h30 movie, that would be an immense amount of storage, just compute it :)
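
Spelling out that computation (a quick shell-arithmetic sketch, using the 8-bit RGB and 24 fps assumptions from above):

    # Uncompressed size of a 1080p, 8-bit RGB, 24 fps stream
    bits_per_frame=$((1920 * 1080 * 3 * 8))       # 49,766,400 bits, ~50 Mb
    bits_per_second=$((bits_per_frame * 24))      # ~1,194 Mb/s
    movie_bytes=$((bits_per_second * 5400 / 8))   # 1h30 = 5,400 seconds
    echo "$((movie_bytes / 1000000000)) GB"       # prints: 806 GB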

To solve this, movies are compressed (encoded). There are two types of compression: lossless (where the information is preserved exactly and no quality is lost) and lossy (where quality is degraded). Lossy compression is the common choice because it yields by far the largest storage savings. For a given compression algorithm, the less bandwidth you allow it, the more video quality it has to sacrifice to meet your target. That is what bitrate refers to.
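
For a sense of scale, here is the same 90-minute movie at a compressed bitrate (the 8 Mb/s figure is just an assumption for illustration, in the range of a decent 1080p encode):

    # Size of a 90-minute movie encoded at 8 Mb/s
    movie_bytes=$((8 * 1000000 * 5400 / 8))   # bitrate x duration / 8 bits per byte
    echo "$((movie_bytes / 1000000000)) GB"   # prints: 5 GB, versus ~800 GB uncompressed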

Of note: different compression algorithms are more or less efficient at storing data within the same file size. AV1, for instance, allows for significantly higher video quality than H.264 at the same file size (or bitrate).
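
If you want to see this for yourself, one hedged sketch (assuming an ffmpeg build with the x264 and SVT-AV1 encoders enabled; the 4 Mb/s target and file names are arbitrary) is to encode the same source at the same bitrate with both codecs and compare:

    # Same source, same target bitrate, two codecs: compare the results visually
    ffmpeg -i source.mkv -c:v libx264   -b:v 4M out_h264.mp4
    ffmpeg -i source.mkv -c:v libsvtav1 -b:v 4M out_av1.mkv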

[-] TheHobbyist@lemmy.zip 36 points 3 days ago* (last edited 3 days ago)

To be fair, resolution alone is not enough to measure quality. The bitrate plays a huge role: a high resolution video can look worse than a lower resolution one if the latter has a higher bitrate. In general, many videos online claim to be 1080p but still look like garbage because of a low bitrate (on YouTube, for example). With a high bitrate video you should be able to tell pretty easily: the hair, the fabric, the skin details, the grass, everything is noticeably sharper and crisper.
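
If you want to check what bitrate a file actually has, rather than trusting the advertised resolution, ffprobe can report it (input.mp4 is a placeholder; for some containers the bitrate may come back as N/A):

    # Report the video stream's resolution and bitrate
    ffprobe -v error -select_streams v:0 \
        -show_entries stream=width,height,bit_rate \
        -of default=noprint_wrappers=1 input.mp4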

Edit: so yeah, I agree with you, because often they are both low bitrate...

[-] TheHobbyist@lemmy.zip 8 points 3 days ago

I'm surprised; if I recall correctly, all but one LCD model were to be phased out in November, or at least that's what they said when they announced the OLED version. Were the supplies that large?

[-] TheHobbyist@lemmy.zip 5 points 4 days ago

I think that's what we see with Apple Silicon, right?

[-] TheHobbyist@lemmy.zip 44 points 4 days ago

The title mixed up Wayland and Nvidia :) I don't think you typically get a new GPU assigned on the fly when you select one window manager over another :D

[-] TheHobbyist@lemmy.zip 1 points 5 days ago

ChatGPT is already free, and since recently that even includes GPT-4o.

Granted, you do need an account which requires a phone number, but there are no financial costs.

40
submitted 1 week ago* (last edited 1 week ago) by TheHobbyist@lemmy.zip to c/foss@beehaw.org

Yesterday, there was a livestream scheduled by Louis Rossmann, titled "Addressing futo license drama! Let's see if I get fired...". I was unable to watch it live, and now the stream seems to be gone from YouTube.

Did it air and was later removed? Or did it never happen in the first place?

Here's the link to where it was meant to happen: https://www.youtube.com/watch?v=HTBYMobWQzk

Cheers

Edit: a new video was recently posted at the following link: https://www.youtube.com/watch?v=lCjy2CHP7zU

I do not know whether this is the supposedly edited and re-uploaded video, or something unrelated.

[-] TheHobbyist@lemmy.zip 92 points 2 months ago

You can put up a non-commercial license and state that anyone with a commercial application can get in touch with you to negotiate a separate license for their use case.

[-] TheHobbyist@lemmy.zip 83 points 5 months ago

We Shouldn’t Have to Let Users enroll Service With a Click. Customers may “misunderstand the consequences of enrolling,”

Sounds ridiculous? That's because it is. A cancel or enroll button doing exactly what it says is pretty much what you expect... This is utter nonsense, obviously.

13
submitted 6 months ago by TheHobbyist@lemmy.zip to c/steamdeck@sopuli.xyz

I was exploring the fps and refresh rate slider, and I realized that when setting the framerate limiter to 25, the refresh rate is set to 50 Hz on the OLED version, when 75 Hz would be the more appropriate setting, for the same reason that 30 fps maps to 90 Hz and not 60 Hz. Anyone else seeing the same behavior? Is there an explanation I'm missing?
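
My mental model, and it is an assumption on my part: judder-free playback wants the refresh rate to be an integer multiple of the frame rate, and the OLED panel covers 45-90 Hz, so the limiter should pick the highest matching multiple:

    # Refresh rates in the OLED Deck's 45-90 Hz range that are multiples of the fps cap
    fps=25
    for hz in $(seq 45 90); do
        [ $((hz % fps)) -eq 0 ] && echo "${hz} Hz"
    done   # prints 50 Hz and 75 Hz; 75 Hz is the higher, smoother option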

7

Hi folks, I'm looking for a specific YouTube video which I watched around 5 months ago.

The gist of the video is that it compared the transcoding performance of an Intel iGPU used natively versus passed through to a VM. From what I recall there was a significant performance hit, around 50% or so (in terms of transcoding fps). I believe the test was performed with Jellyfin. I don't remember whether it was using XCP-ng, Proxmox or another OS, nor which channel published the video or when; just that I watched it sometime between April and June this year.

Anyone recall or know what video I'm talking about? Possible keywords include: quicksync, passthrough, sriov, iommu, transcoding, iGPU, encoding.

Thank you in advance!

[-] TheHobbyist@lemmy.zip 88 points 8 months ago

I hear you, but this seems to largely ignore that we are all already paying Google, a lot. It is only thanks to their unscrupulous harvesting of private data that they have become the mastodon they are. This has been going on for so long, and only recently have we grasped the scale of the effort. Now they want us to pay them too, while nothing changes on the data privacy side? Frankly, I don't think they deserve our trust. It's not like paying gets them any less of our private data, so they are basically double dipping. That does not sit well with me.

I'm all for paying for a service rendered, but I also have expectations of data privacy rights. Those mostly vanish into thin air with Google...

18
submitted 10 months ago* (last edited 10 months ago) by TheHobbyist@lemmy.zip to c/selfhosted@lemmy.world

Hi y'all,

I am exploring TrueNAS and configuring some ZFS datasets. Since ZFS provides some parameters to fine-tune its setup to the type of data, I figured it would be good to take advantage of them. So I'm here with the simple task of choosing the appropriate recordsize.

Initially I thought, well, this is simple: the dataset is meant to store videos, movies and TV shows for a Jellyfin docker container, so in general large files, and a recordsize of 1M sounds like a good idea (as suggested in Jim Salter's cheatsheet).

Out of curiosity, I ran Wendell's magic command from Level1Techs to get a sense of the file size distribution:

    # Wendell's one-liner: bucket every file by size (powers of two)
    # and print a human-readable histogram
    find . -type f -print0 \
      | xargs -0 ls -l \
      | awk '{ n = int(log($5)/log(2)); if (n < 10) n = 10; size[n]++ }
             END { for (i in size) printf("%d %d\n", 2^i, size[i]) }' \
      | sort -n \
      | awk 'function human(x) { x[1] /= 1024; if (x[1] >= 1024) { x[2]++; human(x) } }
             { a[1] = $1; a[2] = 0; human(a); printf("%3d%s: %6d\n", a[1], substr("kMGTEPYZ", a[2]+1, 1), $2) }'

Turns out it's not that simple. The directory is obviously filled with videos, but also with tiny files: subtitles, NFOs, and small illustration images, all valuable for Jellyfin's media organization.

That's where I'm at. The way I see it, there are several options:

    1. Let's not overcomplicate it: just run with the default 128K recordsize and roll with it. It won't be such a big deal.
    2. Let's try to be clever about it: make 2 datasets, one with a 4K recordsize for the small files and one with a 1M recordsize for the videos, then pick one as the "main" dataset and symlink every file from the other dataset into it, so that all content is "visible" from within one file structure (see the sketch after this list). I haven't dug much into how I would automate it, and it might not play nicely with the *arr suite? Perhaps overly complicated...
    3. Make all video files MKVs, embed the subtitles, and rename the videos to make NFOs as unnecessary as possible for movies and TV shows (though these will still be useful for private videos, YT downloads, etc.)
    4. Other?
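
For option 2, a rough sketch of what I have in mind (dataset names are hypothetical, and the per-file symlinks would have to be automated somehow):

    # Two datasets tuned for different file sizes (hypothetical names)
    zfs create -o recordsize=1M tank/media
    zfs create -o recordsize=4K tank/media-small
    # Small files live in media-small and get symlinked into the main tree
    ln -s /mnt/tank/media-small/Movies/Example/movie.nfo \
          /mnt/tank/media/Movies/Example/movie.nfo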

So what do you think? And also, how have you personally set it up? I would love to get some feedback, especially if you are also using ZFS and have a dedicated dataset for a video library. Thanks!

Edit: Alright, so I found the following post by Jim Salter which goes into more detail regarding recordsize. It cleared up my misconception: recordsize is not a fixed block size but a maximum, so files smaller than it are stored in correspondingly smaller blocks, and it can easily be changed at any time (affecting only newly written data). It's just an upper bound on the size of the chunks of data to be read. So I'll be sticking to a 1M recordsize and leaving it at that despite having multiple smaller files, because what matters is streaming the larger files effectively. Thank you all!
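
For reference, changing it is a one-liner (dataset name hypothetical), and per the above it only applies to data written after the change:

    # Set a 1M recordsize on an existing dataset; existing blocks keep their size
    zfs set recordsize=1M tank/media
    zfs get recordsize tank/media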
