Posts by grv0815

    Thanks mglae,

    the tip about the (persistent) journal helped me!

    In short: the subprocesses work as expected and are terminated only after the parent process ends, so a new thread is not necessary.

    This was masked by an astounding bug in the NTFS driver which destroyed the search structure of the directory entries. As a result, existing entries were not found and the rotation of the backup files failed.

    The Windows chkdsk reported:

    Code
    3 corruption records are checked....
    Record 1 of 3: Damage in index "$I30" of directory "\backup\dbdumps <0x4,0x198c2>" ... Corruption was found and fixed.
    Record 2 of 3: Index "$I30" of directory "\backup\dbdumps <0x4,0x198c2>" is sorted incorrectly ... The down pointer of the current index entry with length 0x18 is invalid.

    But this is another story, and not your project's problem.

    Thanks for the quick response and for this project!

    Thanks for your quick reply, mglae!

    I already use wait -n in my shell script to limit the background jobs to a constant number. On the command line this works as expected.
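    For reference, a minimal sketch of that pattern, assuming bash 4.3+ for wait -n (the paths and the process_one worker are placeholders, not my actual script):

    Code
    #!/bin/bash
    MAX_JOBS=3

    process_one() {
        # hypothetical worker, stands in for the real job
        tar -cjf "/tmp/$(basename "$1").tar.bz2" "$1"
    }

    for src in /storage/backup/src/*; do
        # If the limit is reached, block until any one background job exits.
        while [ "$(jobs -rp | wc -l)" -ge "$MAX_JOBS" ]; do
            wait -n
        done
        process_one "$src" &
    done

    wait    # collect the remaining jobs before exiting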

    But if the script is started via autostart.sh, all background processes it starts are terminated prematurely - as you described in #3.

    The observed behavior does not correspond to the (default?) ExecStop behavior you described above in #6.

    Background execution is used in autostart.sh so as not to delay the boot process. If the same jobs are still running at shutdown, they are killed after a short time.
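    The typical pattern looks roughly like this (the backup script path is made up for illustration):

    Code
    #!/bin/sh
    # /storage/.config/autostart.sh
    # Run the long task in a backgrounded subshell so the script
    # returns immediately and does not hold up the boot.
    (
        sleep 30                      # e.g. wait for network/mounts
        /storage/scripts/backup.sh    # placeholder for the actual script
    ) >/dev/null 2>&1 &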

    Why are background jobs killed?

    Is this behavior a design decision or is there a fundamental technical reason for it?

    In other words, is there any way to work around this limitation by fiddling with the systemd configuration?
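    For example, something along these lines, if the unit that runs autostart.sh can take a drop-in override (the unit name and file path are guesses on my part; KillMode and TimeoutStopSec themselves are standard systemd directives):

    Code
    # e.g. /storage/.config/system.d/kodi-autostart.service.d/override.conf
    [Service]
    # The default KillMode=control-group kills every process in the
    # unit's cgroup on stop, including backgrounded children.
    # KillMode=process signals only the main process and leaves
    # the children running.
    KillMode=process
    # Alternatively, give slow children more time to finish:
    #TimeoutStopSec=300

    Another option might be to detach the jobs into their own transient unit with systemd-run, so they no longer belong to the parent's cgroup at all.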

    Normally, child processes should not be killed while the parent job is still active.

    I wrote a script to back up the Kodi databases and painfully discovered that the three parallel background jobs only produce corrupt tar.bz2 archives.
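    Roughly, the script does something like the following (paths and database names are simplified placeholders, not the exact script):

    Code
    #!/bin/bash
    DB_DIR=/storage/.kodi/userdata/Database    # default Kodi database location
    DEST=/storage/backup/dbdumps

    mkdir -p "$DEST"
    for db in MyVideos MyMusic Textures; do
        # Each archive is written by its own background job; if the job
        # is killed mid-write, a truncated tar.bz2 is left behind.
        (cd "$DB_DIR" && tar -cjf "$DEST/$db-$(date +%F).tar.bz2" "$db"*.db) &
    done
    wait    # never completes when the jobs are killed prematurely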