NFS performance issue

  • Hi guys,

    when VDR-Office is doing more than 2 recordings at the same time, replay on VDR-Living gets choppy; both are using NFS.

    If I do the replay on VDR-Living using Samba, it works fine.

    This is a rough drawing of my LibreELEC setup.

    NFS mountings:

    VDR-Office:

    /storage/videos -> FhemRaspi:/srv/vdr/video

    VDR-Living:

    /storage/videos -> FhemRaspi:/srv/vdr/video

    FhemRaspi /etc/exports:

    Any idea what I need to change to get it all working over NFS?

    Thx

    JurKub

  • What OS is running on the RPi4 server with 4TB?

    Handling files is a relatively demanding job for an RPi's CPU, and both the USB 3.0 and Gigabit ports really only run at half speed in my experience. So depending on the overall bitrate of the VDR streams, it's possible that the RPi4 is simply not up to the task, despite NFS being set up correctly.

  • What OS is running on the RPi4 server with 4TB?

    Linux fhemraspi 5.10.63-v7l+ #1459 SMP Wed Oct 6 16:41:57 BST 2021 armv7l GNU/Linux

    Handling files is a relatively demanding job for an RPi's CPU, and both the USB 3.0 and Gigabit ports really only run at half speed in my experience. So depending on the overall bitrate of the VDR streams, it's possible that the RPi4 is simply not up to the task, despite NFS being set up correctly.

    What I'm wondering about: when using NFS for recording and Samba for replay at the same time, everything works fine.

    The problem only exists when using NFS for both recording and replay at the same time!

  • I am running Tvheadend with 6 tuners (2 tuners are over Ethernet, and 4 tuners are in a PCI card in the computer) on an AMD 8-core machine with 20TB of storage; it has no problem recording all tuners at the same time, plus serving 2 mediacenter devices over NFS. It serves both my media library and Tvheadend content (which goes over HTSP [TCP]).

    Have you run bcmstat.sh on your Pi to get more insight? I would run it with: "bcmstat.sh xce ieth0"; you might need to change eth0 to your network device. I would run that on the Pi hosting the storage; it gives you good visibility into your RX/TX network traffic plus CPU/MEM usage. You might also need to check your settings for the number of NFS server daemons running (on a Debian-based system this is in /etc/default/nfs-kernel-server, changing RPCNFSDCOUNT). I run with 8 myself, which is probably overkill since I don't have the overhead of writing to NFS over the network from Tvheadend like you do.
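
    A minimal sketch of that change, assuming a Debian/Raspberry Pi OS style layout on the server (the right thread count depends on your load):

    Code
    # /etc/default/nfs-kernel-server  (sketch - file and variable as on Debian-based systems)
    # Number of nfsd threads to start
    RPCNFSDCOUNT=8

    # apply it afterwards:
    # sudo systemctl restart nfs-kernel-server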

    This article has some advice on tuning nfsd threads:

    Document Display | HPE Support Center

  • What I'm wondering about: when using NFS for recording and Samba for replay at the same time, everything works fine.

    The problem only exists when using NFS for both recording and replay at the same time!

    One other thing to consider here is that perhaps NFS locking is an issue, especially if you are trying to watch a file that is being written to (i.e. trying to watch a live recording being written by VDR). It's probably worthwhile to ask this question on the VDR forums, as you will probably find someone with a similar setup there. Tvheadend & MythTV work completely differently: everything is streamed by Tvheadend or MythTV, and the client doesn't access the recorded media files directly.

    There is just too much opportunity for speculation here with very little data, unfortunately.

  • I am running Tvheadend with 6 tuners (2 tuners are over Ethernet, and 4 tuners are in a PCI card in the computer) on an AMD 8-core machine with 20TB of storage; it has no problem recording all tuners at the same time, plus serving 2 mediacenter devices over NFS. It serves both my media library and Tvheadend content (which goes over HTSP [TCP]).

    Have you run bcmstat.sh on your Pi to get more insight? I would run it with: "bcmstat.sh xce ieth0"; you might need to change eth0 to your network device. I would run that on the Pi hosting the storage; it gives you good visibility into your RX/TX network traffic plus CPU/MEM usage. You might also need to check your settings for the number of NFS server daemons running (on a Debian-based system this is in /etc/default/nfs-kernel-server, changing RPCNFSDCOUNT). I run with 8 myself, which is probably overkill since I don't have the overhead of writing to NFS over the network from Tvheadend like you do.

    This article has some advice on tuning nfsd threads:

    https://support.hpe.com/hpesc/public/d…mr_na-c02239048

    I don't think the Ethernet port is the issue, because when using NFS for recording and Samba for replay at the same time, everything works fine.

    /etc/default/nfs-kernel-server

    I've tried sync and async as well in

    /etc/exports
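
    For reference, this is the kind of export line I was toggling - illustrative only (the client subnet 10.0.0.0/24 is just an assumption), not my actual entry:

    Code
    # /etc/exports - illustrative example
    /srv/vdr/video  10.0.0.0/24(rw,async,no_subtree_check)
    # vs. the synchronous variant:
    # /srv/vdr/video  10.0.0.0/24(rw,sync,no_subtree_check)
    # re-export after editing:  sudo exportfs -ra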

  • One other thing to consider here is that perhaps NFS locking is an issue, especially if you are trying to watch a file that is being written to (i.e. trying to watch a live recording being written by VDR). It's probably worthwhile to ask this question on the VDR forums, as you will probably find someone with a similar setup there. Tvheadend & MythTV work completely differently: everything is streamed by Tvheadend or MythTV, and the client doesn't access the recorded media files directly.

    There is just too much opportunity for speculation here with very little data, unfortunately.

    I'm not trying to watch a live recording that is being written by (another) VDR.

    It doesn't matter whether I watch a VDR recording or another mp4 video; it's the same issue.

    I'm more than happy to provide as much data as you like, just ask for what you need to see ;)

  • JurKub

    - just my 2 cents -

    would playing with wsize and rsize have an effect?

    see:

    nfs(5) - Linux man page

    moons ago I played with it to see how to get the best throughput.

    it turned out that the defaults were not that ideal.

    at that time the defaults were: rsize=1048576,wsize=1048576

    I got the best with: rsize=32768,wsize=32768

    but - as said - it's years ago: old NFS server (3.x?), old disk, ...

    write (MB/s)   read (MB/s)   mount options
    ==========================================
     6.91           49.68       NFS defaults (rsize, wsize)
    35.31           81.45       rsize=8192,wsize=8192
    34.90          101.75       rsize=16384,wsize=16384
    49.86          110.62       rsize=32768,wsize=32768 (nearly full Gigabit LAN speed; ~125 MB/s theoretical)
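
    If you want to quickly check what the kernel actually negotiates, a manual test mount works - just a sketch, using the server path from this thread and a temporary mount point:

    Code
    # quick manual test with explicit rsize/wsize
    mkdir -p /tmp/nfstest
    mount -t nfs -o vers=3,rsize=32768,wsize=32768 FhemRaspi:/srv/vdr/video /tmp/nfstest
    # show what was actually negotiated:
    mount | grep nfstest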


  • rsize and wsize don't change, no matter what I enter:

    Code
    10.0.0.53:/srv/vdr/video on /storage/videos type nfs (rw,relatime,vers=3,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=10.0.0.53,mountvers=3,mountproto=tcp,local_lock=none,addr=10.0.0.53)
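
    For reference, if the mount comes from a systemd mount unit (the usual way on LibreELEC), the options would be set roughly like this - a sketch, not my actual unit file:

    Code
    # /storage/.config/system.d/storage-videos.mount
    # (sketch - on LibreELEC the unit name has to match the mount path /storage/videos)
    [Unit]
    Description=NFS mount for VDR recordings

    [Mount]
    What=10.0.0.53:/srv/vdr/video
    Where=/storage/videos
    Type=nfs
    Options=vers=3,rsize=32768,wsize=32768

    [Install]
    WantedBy=multi-user.target

    # after editing:  systemctl daemon-reload && systemctl restart storage-videos.mount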
  • another tool to measure bandwidth, MSS/MTU, etc. could be iPerf3:

    iPerf - The TCP, UDP and SCTP network bandwidth measurement tool

    usage:

    iPerf - iPerf3 and iPerf2 user documentation
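
    A quick sketch of how it could be used here (assuming 10.0.0.53 is the NFS server, as in the mount output above):

    Code
    # on the NFS server (FhemRaspi):
    iperf3 -s
    # on a client (VDR-Living / VDR-Office):
    iperf3 -c 10.0.0.53          # client -> server throughput
    iperf3 -c 10.0.0.53 -R       # reverse direction (server -> client)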

    P.S.

    maybe it's worth checking what speed the network interfaces are currently running at?
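
    e.g. (assuming ethtool is available and the interface is eth0):

    Code
    # shows the negotiated link speed and duplex
    ethtool eth0 | grep -iE 'speed|duplex'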

    Background:

    got a new mainboard with 2.5 Gbit Intel LAN (driver: igc) running in a 1 Gbit LAN with DHCP, and sometimes I got only 100 Mbit on the igc.

    - it could be the kernel (5.15.y), it could be the new firmware on the router too, ..., no idea ... -

    question: is what you're experiencing a regression?
