Posts by librero

    Just in case it could be helpful to anyone stumbling upon this post.

    The code shown below worked in Kodi 18.2 and earlier, and sort of still works with current Kodi versions if the message is a single line.

    Also, long ago there was an attempt to allow base64-encoded images, but it seems that was abandoned, so only the pre-defined keywords or files stored in some path that Kodi can access can be used.

    Don't forget to enable the remote control options.
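    For context, a minimal sketch of this kind of notification call over JSON-RPC. The host, port and credentials in the commented curl line are placeholders; the image field takes one of the pre-defined keywords or a file path Kodi can read.

```shell
# Build a GUI.ShowNotification JSON-RPC payload; "info" is one of the
# pre-defined image keywords (info, warning, error).
TITLE="Hello"
MSG="Test notification from the shell"
IMAGE="info"

PAYLOAD="{\"jsonrpc\":\"2.0\",\"method\":\"GUI.ShowNotification\",\"params\":{\"title\":\"$TITLE\",\"message\":\"$MSG\",\"image\":\"$IMAGE\"},\"id\":1}"
echo "$PAYLOAD"

# To actually send it, with the web server enabled in Kodi
# (host, port and user:pass below are placeholders):
# curl -s -u kodi:kodi -H 'Content-Type: application/json' \
#      -d "$PAYLOAD" http://127.0.0.1:8080/jsonrpc
```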

    My experience with Nextcloud [1] using WebDAV to share files with Kodi has been rather disappointing so far.

    I had no problem with my home router when accessing the WAN IP from inside the LAN. There would be double traffic on the LAN because of the routing detour, but that shouldn't be much of an issue for this application (all this traffic never leaves the LAN).

    jakob, the path to access my Nextcloud server includes the 'nextcloud' parent. Not sure if removing it as you did could be causing your trouble.

    /nextcloud/remote.php/dav/files/<USERNAME>

    But it takes ages to start playing any video, even one as small as the default 4 MB Nextcloud intro.mp4.

    Anyway, the issue doesn't seem to be related exclusively to Kodi, because from my desktop using nemo there isn't much difference, to the point of being useless.

    I can happily browse all the directories and files, but as soon as I try to play any movie it gets stuck there.

    So, like Borygo77, I also use a VPN to access my Kodi library (OpenVPN running on my main server).

    [1] linuxserver.io's Nextcloud docker stack running on my main server, Kodi running on a RPi3B

    Sorry, forgot to mention, but I am using autostart to mount remote NFS volumes.


    Then you could try hacking something like this using Kodi's JSON-RPC interface to disable and re-enable the container. Not pretty, but it should work while you look for a better solution.

    Although I'm not sure if containers created with portainer can be restarted this way, maybe it gives you an idea.

    /storage/.config/autostart.sh

    Code
    (
    <your mounts>
    
    sleep 10
    
    /storage/.config/addonctl.sh <docker.your_container_addon> stop
    <may need to add a delay here also>
    /storage/.config/addonctl.sh <docker.your_container_addon> start
    ) &

    /storage/.config/addonctl.sh
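    The addonctl.sh script itself isn't shown above; here is a hypothetical sketch of what it could look like, using the Addons.SetAddonEnabled JSON-RPC method. The addon id, host and port are placeholders, and the Kodi web server must be enabled for the curl line to work.

```shell
#!/bin/sh
# Hypothetical sketch of /storage/.config/addonctl.sh -- disables or
# re-enables a Kodi addon via JSON-RPC. The addon id, host and port
# are placeholders.
ADDON="${1:-docker.my_container_addon}"
ACTION="${2:-stop}"

case "$ACTION" in
  start) ENABLED=true ;;
  stop)  ENABLED=false ;;
  *) echo "usage: $0 <addon-id> start|stop" >&2; exit 1 ;;
esac

PAYLOAD="{\"jsonrpc\":\"2.0\",\"method\":\"Addons.SetAddonEnabled\",\"params\":{\"addonid\":\"$ADDON\",\"enabled\":$ENABLED},\"id\":1}"
echo "$PAYLOAD"
# Uncomment to actually send it to Kodi:
# curl -s -H 'Content-Type: application/json' -d "$PAYLOAD" http://127.0.0.1:8080/jsonrpc
```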

    If you are able to see some directories inside that share but not the files, then that looks to me like a permission problem.

    Check the permission flags you have from within LE for the directory /storage/music where the share is mounted, and also for the shared contents. If the directories are not at least 755 (drwxr-xr-x), that may be the problem, since containers use uid=65534, gid=100.
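    A quick way to check, sketched here on a scratch directory so it is safe to copy-paste; on the box you would point it at /storage/music and the shared contents instead.

```shell
# Scratch demonstration; containers run as uid=65534 gid=100, so every
# directory in the path needs at least r-x for "other" (755).
DIR=$(mktemp -d)
chmod 755 "$DIR"
ls -ld "$DIR"             # should show drwxr-xr-x
MODE=$(stat -c %a "$DIR") # octal mode of the directory
echo "$MODE"              # 755
rmdir "$DIR"
```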

    Alternatively, you could try adding the docker parameter -v /storage/music:/music in the portainer setup or the addon's additional config page to mount that directory inside the container as /music, but I'm not sure this will work with the mount point of a share instead of a regular directory in the filesystem.

    Or you could mount that same share again from within the container.

    Creating DH parameters for additional security. This may take a very long time. There will be another message once this process is completed
    Generating DH parameters, 2048 bit long safe prime, generator 2
    This is going to take a long time

    Yeah, the dhparams part can take a long time (up to 2 hours on an ARM box) or it could be 10 seconds; that's cryptography for you.

    For those using an ARM device: if you don't want to wait, you can always generate dhparams.pem on another machine with:

    $ openssl dhparam -out dhparams.pem 2048

    and then, in this case, copy the resulting file to:

    /storage/.kodi/userdata/addon_data/docker.linuxserver.letsencrypt/config/nginx/dhparam.pem

    Any modern x64 PC will complete this operation in less than a minute instead of hours.

    Once you copy the file there, each time the container is started its init scripts will check if it is already there and skip this step. Only if you change the default DHLEVEL does it need to be regenerated.

    I'll look into the pfctl issue, but the nginx http auth jail is working fine on other systems.

    With the default action it seems pfctl isn't used. I think I found that it was missing when I tried:

    banaction = pf[actiontype=<allports>]

    Not sure if packet filter can currently even be enabled, so other, more advanced custom filters using pf directly won't be possible either.

    From action.d/pf.conf:

    Code
    # we don't enable PF automatically; to enable run pfctl -e 
    # or add `pf_enable="YES"` to /etc/rc.conf (tested on FreeBSD)

    It doesn't seem to work out-of-the-box with VERSION="9.0.2", since the iptables build used in LibreELEC is missing the 'multiport' extension used by default in the supplied jail.conf.

    I have not checked whether the issue is still present in the latest version 9.2 or on other architectures.

    (It should be the same as # docker exec -it docker.linuxserver.letsencrypt cat /proc/net/ip_tables_matches)

    The 'multiport' extension is missing, so by default fail2ban won't be able to set up the filters.

    ~/.kodi/userdata/addon_data/docker.linuxserver.letsencrypt/config/log/fail2ban/fail2ban.log:

    Code
    2020-02-11 18:08:36,083 fail2ban.utils          [359]: #39-Lev. 767fd320 -- exec: iptables -w -N f2b-nginx-botsearch
    iptables -w -A f2b-nginx-botsearch -j RETURN
    iptables -w -I INPUT -p tcp -m multiport --dports http,https -j f2b-nginx-botsearch
    2020-02-11 18:08:36,085 fail2ban.utils          [359]: ERROR   767fd320 -- stderr: 'iptables: Chain already exists.'
    2020-02-11 18:08:36,085 fail2ban.utils          [359]: ERROR   767fd320 -- stderr: "iptables v1.8.3 (legacy): Couldn't load match `multiport':No such file or directory"

    A solution is to change the default actiontype in ~/.kodi/userdata/addon_data/docker.linuxserver.letsencrypt/config/fail2ban/action.d/pf.conf from multiport to allports:

    actiontype = <multiport>

    can be set to:

    actiontype = <allports>

    DON'T MODIFY pf.conf; copy it to pf.local and make the changes there instead, which will override the default.
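    The override mechanism, sketched in a scratch directory with a minimal stand-in file; on the box CONF would be the fail2ban action.d directory inside the addon's config.

```shell
# fail2ban reads pf.conf first and then pf.local, so a pf.local carrying
# the changed line overrides the default without touching pf.conf.
CONF=$(mktemp -d)                                      # stand-in for .../fail2ban/action.d
printf 'actiontype = <multiport>\n' > "$CONF/pf.conf"  # minimal stand-in for the real file
cp "$CONF/pf.conf" "$CONF/pf.local"
sed -i 's/<multiport>/<allports>/' "$CONF/pf.local"
cat "$CONF/pf.local"                                   # actiontype = <allports>
```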

    Also add to each jail in ~/.kodi/userdata/addon_data/docker.linuxserver.letsencrypt/config/fail2ban/jail.local:

    action = iptables-allports

    Test it by adding some bot IPs to ban:

    Code
    # docker exec -it docker.linuxserver.letsencrypt fail2ban-client set nginx-botsearch banip 106.12.5.21
    # docker exec -it docker.linuxserver.letsencrypt fail2ban-client set nginx-botsearch banip 111.229.116.157

    And check that the iptables rules are now created.

    Another issue I found is that the linuxserver/letsencrypt container seems to be missing the pfctl utility used by other actions.

    Code
    2020-02-11 19:29:14,570 fail2ban.utils          [360]: #39-Lev. 767d9638 -- exec: echo "table <f2b-nginx-http-auth> persist counters" | pfctl -a f2b/nginx-http-auth -f-
    port="http,https"; if [ "$port" != "" ] && case "$port" in \{*) false;; esac; then port="{$port}"; fi
    echo "block quick proto tcp from <f2b-nginx-http-auth> to any" | pfctl -a f2b/nginx-http-auth -f-
    2020-02-11 19:29:14,574 fail2ban.utils          [360]: ERROR   767d9638 -- stderr: '/bin/sh: pfctl: not found'
    2020-02-11 19:29:14,575 fail2ban.utils          [360]: ERROR   767d9638 -- stderr: '/bin/sh: pfctl: not found'
    2020-02-11 19:29:14,576 fail2ban.utils          [360]: ERROR   767d9638 -- returned 127
    2020-02-11 19:29:14,576 fail2ban.utils          [360]: INFO    HINT on 127: "Command not found".  Make sure that all commands in 'echo "table <f2b-nginx-http-auth> persist counters" | pfctl -a f2b/nginx-http-auth -f-\nport="http,https"; if [ "$port" != "" ] && case "$port" in \\{*) false;; esac; then port="{$port}"; fi\necho "block quick proto tcp from <f2b-nginx-http-auth> to any" | pfctl -a f2b/nginx-http-auth -f-' are in the PATH of fail2ban-server process (grep -a PATH= /proc/`pidof -x fail2ban-server`/environ). You may want to start "fail2ban-server -f" separately, initiate it with "fail2ban-client reload" in another shell session and observe if additional informative error messages appear in the terminals.
    2020-02-11 19:29:14,577 fail2ban.actions        [360]: ERROR   Failed to start jail 'nginx-http-auth' action 'pf': Error starting action Jail('nginx-http-auth')/pf

    To use -v /var/media/MyBook:/data, the user nobody needs write access to the existing files, so instead of fmask=0133 the external storage should be mounted with fmask=0111. Honestly, I can't see why it isn't this way, given that the current default directory mask already grants full access to anyone. And there is little that Nextcloud could do about it other than providing the check_data_directory_permissions flag.

    It is certainly not a good idea to keep a busy /data directory on an SD card, as on a Raspberry Pi, with multiple continuous log and database writes on top of the Kodi ones, even if the bulk of the storage is elsewhere.

    I don't know if there is a way to delay docker loading certain containers, so that at boot you could remount the external storage with write access before the containers are started.

    Maybe in a future LibreELEC release this fmask parameter could be set to a more useful value.

    [...]

    Ahah, now after just waiting for a while and refreshing the page, this error is back: "Your data directory is readable by other users..."

    I can't understand it :(

    The Nextcloud data directory must have 0770 (rwxrwx---) permissions, as shown on that same error page you got. What it doesn't say there is that the data directory must also be owned by nobody.users (65534.100) so that Nextcloud is able to access it.

    By default LibreELEC (udevil) automounts the USB storage as 777 (rwxrwxrwx) root.root, but Nextcloud runs as user nobody.

    The external storage is automounted with udevil, whose config file /etc/udevil/udevil.conf is located in the read-only part of the file system, so it doesn't seem possible to override default_options_ntfs to change the user and group the HDD is automounted with, from:

    default_options_ntfs      = nosuid, noexec, nodev, noatime, big_writes, fmask=0133, uid=$UID, gid=$GID, utf8

    to:

    default_options_ntfs      = nosuid, noexec, nodev, noatime, big_writes, fmask=0133, uid=65534, gid=100, utf8

    You could try to disable that permission check at boot in:

    /storage/.kodi/userdata/addon_data/docker.linuxserver.nextcloud/config/www/nextcloud/config/config.php with:

    'check_data_directory_permissions' => false,

    but that won't work because of the fmask=0133, which makes all files read-only for non-root users.

    Anyway, I've not tried changing udevil.conf, and I'm not sure it would be enough for Nextcloud to have the whole data directory on an NTFS (or exFAT) drive, since neither supports unix-type permissions. Also, I won't recommend messing with the udevil config unless you know what you are doing.

    The other (dirty) option would be to unmount and remount the device with the required permissions. A background script launched from /storage/.config/autostart.sh could periodically check if these are correct and unmount and remount when required, but again I'm not sure it will work with NTFS or exFAT.
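    A sketch of such a watcher; the device, mount point and mount options are assumptions for illustration. The one-shot check at the bottom is a no-op when the device isn't mounted, and on the box you would launch the commented background loop from autostart.sh instead.

```shell
#!/bin/sh
# Hypothetical remount watcher; /dev/sda1, the mount point and the
# masks below are assumptions for illustration.
DEV=/dev/sda1
MNT=/var/media/MyBook

check_and_remount() {
  # Extract the current mount options for $DEV at $MNT (empty if not mounted)
  OPTS=$(mount | sed -n "s|^$DEV on $MNT type [^ ]* (\(.*\))|\1|p")
  case "$OPTS" in
    "") ;;                 # not mounted (yet): nothing to do
    *fmask=0111*) ;;       # already mounted the way we want
    *) umount "$MNT" && \
       mount -t ntfs-3g -o rw,fmask=0111,dmask=0000,uid=65534,gid=100 "$DEV" "$MNT" ;;
  esac
}

check_and_remount        # one-shot check

# From /storage/.config/autostart.sh, run it periodically instead:
#   ( while true; do check_and_remount; sleep 300; done ) &
```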

    Alternatively, you can also easily share an NTFS-formatted disk in Nextcloud by sharing it via LibreELEC's own samba server and adding it via the external storage app, with a similar result to using the -v /var/media/My\ Book:/My\ Book parameter in the addon's additional config page. But the users' folders will still be kept inside the container's original data directory, which may not be what you want.

    You could enable SMB access to peek at some of the SD contents as a windows share, but not all of it will be available by default. Or you could try to mount an image of the LibreELEC SD from the raspbian VM. But you should first try to solve the SSH access issue, which is probably the fastest way to find the space hog, and any future issue.

    Once you get SSH access you can use the du command to list the size of each subdirectory.

    Code
    # du

    or sorted by size:

    Code
    # du | sort -n | tail -50

    Did you check whether some debug options are enabled? By default Kodi writes to /storage/.kodi/temp/kodi.log and that file can grow very large.

    I have myself changed /storage/.kodi/temp to a symlink to /tmp, because I already killed a couple of SD cards with every addon and their uncle writing to the logfile in flash every so often. The drawback is that in case of an (unlikely) crash the log will be gone, but you can always revert the change later if needed.
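    The swap itself, demonstrated here in a scratch area so it is safe to run anywhere; on the box the real paths are /storage/.kodi/temp and /tmp, and Kodi should be stopped before doing it.

```shell
# Scratch demonstration of pointing .kodi/temp at a RAM-backed path.
BASE=$(mktemp -d)
mkdir -p "$BASE/.kodi/temp" "$BASE/tmp"   # stand-ins for the real directories

rmdir "$BASE/.kodi/temp"                  # remove (or move aside) the old temp dir
ln -s "$BASE/tmp" "$BASE/.kodi/temp"      # symlink it to the tmpfs path

readlink "$BASE/.kodi/temp"               # prints the target it now points at
```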

    The failed: Invalid argument error returned by mount can be quite misleading.

    To mount SMB1 shares, like those of Windows XP or an old NAS, you now need to use the vers=1.0 and sec=ntlm options on the mount command line. The password can be empty if the share has guest access.

    Bash
    #!/bin/sh
    
    USER=guest
    PASS=
    SHARE=//myhost/myshare
    MOUNT_POINT=/storage/media/myshare
    
    mkdir -p "$MOUNT_POINT"
    mount -t cifs -o rw,vers=1.0,sec=ntlm,username="$USER",password="$PASS" "$SHARE" "$MOUNT_POINT"

    i am using LibreELEC 9.0.0 on my RPi3B. [...] but ScummVM i don't get work, [...] also Atari-Stella and some other add-ons do not work on my RPi3B under LibreELEC.

    The libretro.scummvm-2.0.0.4.116 available in the official repo still seems to be broken.
    Check this post for a solution and a link to the previous working version, libretro.scummvm-2.0.0.4.115. All the ScummVM games I tested work fine on my RPi3B with it.

    Sadly, for Atari-Stella and many other emulators showing a black/blank screen when starting a game (and showing the image only while the overlay popup is enabled), I've not found any solution.

    Some external hard disks, like my old Toshiba Canvio Basics, spin down by themselves and there is no need to do anything, but others won't, as I just found out, and need a little help.

    As Mr. chewitt succinctly pointed out, use hd-idle: install the virtual.system-tools addon, then to start hd-idle edit the system file /storage/.config/autostart.sh (so that the needed command is executed after every boot).

    Assuming your disk is /dev/sda (check yours with the df or mount command) and you want to spin it down after 900 seconds, add the following:

    Code
    ( /storage/.kodi/addons/virtual.system-tools/bin/hd-idle -a sda -i 900 -l /storage/.kodi/temp/hd-idle.log ) &

    Then reboot (or execute the line at the command prompt). Check that your disk actually spins down, as this may not work with some disks.

    Honestly, I can't believe this still isn't a standard GUI option, forcing users to edit system files for commonplace things like this, and that the only near (non-working) solution given in 2 years is this thread. Hope this helps future users.