this post was submitted on 21 Sep 2024

Selfhosted


Having a bit of trouble getting hardware acceleration working on my home server. The server has an i7-10700 CPU and a discrete GPU, an RTX 2060. I was hoping to use Intel Quick Sync for the hardware acceleration, but I'm not having much luck.

I followed the guide on the Jellyfin site: https://jellyfin.org/docs/general/administration/hardware-acceleration/intel

I got the render group ID using "getent group render | cut -d: -f3", though the guide mentions that on some systems the group might not be "render" but "video" or "input", so I tried those group IDs as well.
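For reference, this is the lookup I ran for all three candidate groups (output will differ per system):

```shell
# Print the numeric GID for each candidate group; empty output after the
# colon means that group does not exist on this system.
for g in render video input; do
  echo "$g: $(getent group "$g" | cut -d: -f3)"
done
```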

When I run "docker exec -it jellyfin /usr/lib/jellyfin-ffmpeg/vainfo" I get back

libva info: VA-API version 1.22.0
libva info: Trying to open /usr/lib/jellyfin-ffmpeg/lib/dri/nvidia_drv_video.so
libva info: Trying to open /usr/lib/x86_64-linux-gnu/dri/nvidia_drv_video.so
libva info: Trying to open /usr/lib/dri/nvidia_drv_video.so
libva info: Trying to open /usr/local/lib/dri/nvidia_drv_video.so
libva info: va_openDriver() returns -1
vaInitialize failed with error code -1 (unknown libva error),exit

I feel like I need to do something on the host system, since it's trying to use the discrete card, but I am unsure.
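One thing I haven't tried yet is forcing libva to load the Intel driver instead of letting it probe for the NVIDIA one, e.g. by adding this to the service in the compose file (iHD is the modern Intel media driver; I assume this only helps if the device I'm passing through really is the iGPU's render node):

```yaml
    environment:
      - LIBVA_DRIVER_NAME=iHD
```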

This is the compose file just in case I am missing something

version: "3.8"
services:
  jellyfin:
    image: jellyfin/jellyfin
    user: 1000:1000
    ports:
      - 8096:8096
    group_add:
      - "989" # Change this to match your "render" host group id and remove this comment
      - "985"
      - "994"
    # network_mode: 'host'
    volumes:
      - /home/hoxbug/Docker/jellyfin/config:/config
      - /home/hoxbug/Docker/jellyfin/cache:/cache
      - /mnt/External/Movies:/Movies
    devices:
      - /dev/dri/renderD128:/dev/dri/renderD128
networks:
  external:
    external: true

Thank you for the help.

top 17 comments
[–] JustEnoughDucks@feddit.nl 1 points 6 days ago

Do you have the Intel drivers installed on your machine? Are GuC and HuC working?

sudo reboot
sudo dmesg | grep i915
sudo cat /sys/kernel/debug/dri/0/gt/uc/guc_info
sudo cat /sys/kernel/debug/dri/0/gt/uc/huc_info

On Debian I had to manually download the full i915 firmware zip, extract it, take out the Intel firmware files, and put them in /usr/lib/firmware

Then hardware acceleration worked on my Arc380.

If you use QSV, your CPU's iGPU is what will be using it, so make sure to set your render device in Docker to the iGPU and not the RTX 2060.
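A quick way to check which render node belongs to which GPU (sysfs paths are typical Linux layout, not guaranteed on every distro):

```shell
# Map each DRM render node to the PCI vendor ID of the device behind it.
# Intel is usually 0x8086; NVIDIA is 0x10de.
for node in /dev/dri/renderD*; do
  dev="/sys/class/drm/$(basename "$node")/device"
  echo "$node vendor=$(cat "$dev/vendor" 2>/dev/null)"
done
```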

[–] sk@hub.utsukta.org 14 points 1 week ago

On my system I was able to use my iGPU with the following:

    devices:
      - /dev/dri:/dev/dri

The rest of your compose looks fine.

[–] sneezycat@sopuli.xyz 8 points 1 week ago (1 children)

Isn't your GPU an Nvidia RTX 2060? Why are you trying to use the Intel GPU acceleration method? I'm confused

[–] hoxbug@lemmy.world 1 points 1 week ago (1 children)

It just seemed the easiest route, but I may just give using the GPU a go.

[–] __ghost__@lemmy.ml 2 points 1 week ago (1 children)

From personal experience, Intel QSV wasn't worth the trouble to troubleshoot on my hardware. Mine is a lot older than yours though. VAAPI has worked well on my Arc card.

[–] entropicdrift@lemmy.sdf.org 7 points 1 week ago

QSV is the highest quality video transcoding hardware acceleration out there. It's worth using if you have a modern Intel CPU (8th gen or newer).

[–] bobslaede@feddit.dk 5 points 1 week ago (1 children)

This is how mine works, with a Nvidia GPU

services:
  jellyfin:
    volumes:
      - jellyfin_config:/config
      - jellyfin_cache:/cache
      - type: tmpfs
        target: /cache/transcodes
        tmpfs:
          size: 8G
      - media:/media
    image: jellyfin/jellyfin:latest
    restart: unless-stopped
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              device_ids:
                - "0"
              capabilities:
                - gpu
[–] notfromhere@lemmy.ml 3 points 1 week ago (1 children)
[–] bobslaede@feddit.dk 7 points 1 week ago (1 children)

Temp files for transcoding. No need to hit the disk.

[–] __ghost__@lemmy.ml 4 points 1 week ago (3 children)
[–] 486@lemmy.world 2 points 6 days ago (1 children)

It is. It might end up on disk in swap, if you run low on memory (and have some sort of disk-based swap enabled), but usually it is located in RAM.

[–] __ghost__@lemmy.ml 1 points 6 days ago* (last edited 5 days ago) (1 children)

You can create a tmpfs on other storage devices as well, just curious what their setup looked like

[–] 486@lemmy.world 1 points 4 days ago* (last edited 4 days ago)

No, tmpfs is always located in virtual memory. Have a look at the kernel documentation for more information about tmpfs.
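You can see this on basically any Linux box: /dev/shm is a tmpfs, and df reports it against memory rather than any block device:

```shell
# /dev/shm is a tmpfs on virtually every Linux system; the "filesystem"
# column is just the literal tmpfs/shm name, not a disk.
df -h /dev/shm
```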

[–] baduhai@sopuli.xyz 3 points 1 week ago
[–] bobslaede@feddit.dk 1 points 1 week ago

Hmm. I would think so. But I haven't actually checked. That was my thought.

[–] HumanPerson@sh.itjust.works 1 points 1 week ago* (last edited 1 week ago)

I have an Arc for transcoding, and I had to set the device to /dev/dri without the renderD128 part. If I were you, I would just use the 2060. If it's there for llama or something, I'd still try it and see how it does doing both at once, as separate parts of the GPU should be handling each task.

[–] entropicdrift@lemmy.sdf.org 1 points 1 week ago* (last edited 1 week ago)

If you switch the devices line to

- /dev/dri:/dev/dri

as others have suggested, that should expose the Intel iGPU to your Jellyfin Docker container. Presently you're only exposing the NVIDIA GPU.
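In compose terms, the devices section would become something like this (a sketch; exact node names vary per host):

```yaml
    devices:
      # Pass through every DRM node so libva can find the Intel iGPU;
      # renderD128 alone may be the NVIDIA card on a multi-GPU host.
      - /dev/dri:/dev/dri
```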