Posts by ariendj

    Hey awiouy! Thanks for your input!

    Yeah that's the point I don't quite understand :) I'm not really familiar with Python...

    I had hoped that I could just write a STRM or M3U file that Kodi understands, with the RTP URL and maybe a hint on the file format in it. I tried going this route but Kodi complains that it cannot play the file. I guess it needs more information on the file format. I don't know how to provide that.

    On line 80 it looks like you define the item to be played. I understand the 'rtp://127.0.0.1:{port}' part, but how is .format(port=PORT) defined?

    Is there a URL that I can pass to Kodi that basically says rtp://<loopback>:<port>.<format> or similar? If so, what would the format be? In your code I see a reference to mp3 but I thought pulseaudio would just stream PCM data via RTP?

    If it's not feasible to start playback via a URL, M3U or STRM file I'll have to learn some Python I guess.

    LE is running on my PC and I'd like to have Kodi play the audio from the PC soundcard's line input. For this I intend to use pulseaudio's module-rtp-send. I got this idea from the source code of the librespot addon by awiouy. He uses pulseaudio to loop audio from librespot into Kodi for playback. I've seen here that he runs

    Code
    pactl load-module module-rtp-send source="$LS_SINK.monitor" destination_ip=127.0.0.1 port="$LS_PORT" source_ip=127.0.0.1 > /dev/null

    and then has Kodi play the stream generated by pulseaudio.

    I'd like to do something similar where I have module-rtp-send send audio from my line input as 'source' and have Kodi play the generated RTP stream.

    So far I have managed to load the pulse module and have it read from the soundcard's input. What I did not figure out yet is how to have Kodi play the RTP stream from the loopback interface. Can someone point me in the right direction? Thanks!!
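    For anyone following along, this is roughly what I have in mind, based on the librespot command above. The source name and port below are placeholders, not my actual values; list your real sources with 'pactl list sources short'.

    ```shell
    #!/bin/sh
    # Sketch only: forward the soundcard's line input over RTP on the loopback
    # interface. LINE_IN is a placeholder -- find yours with:
    #   pactl list sources short
    LINE_IN="alsa_input.pci-0000_00_1b.0.analog-stereo"
    PORT=46998

    CMD="pactl load-module module-rtp-send source=$LINE_IN destination_ip=127.0.0.1 port=$PORT source_ip=127.0.0.1"

    # Printed as a dry run so it can be checked without pulseaudio running;
    # drop the echo to actually load the module.
    echo "$CMD"
    ```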

    Latency: 0 usec, configured 48219 usec - setting it to 48 is too little; I have put it up to the maximum of 100 and it's almost in sync now, although not 100%. I guess a few more milliseconds would be necessary.

    You need to play audio via the sink for pulseaudio to actually report the current latency. As long as no audio is playing, the first value will be zero.

    This is from my laptop:

    No audio playing:

    Code
    pactl list sinks
    Sink #0
           State: SUSPENDED
           Name: alsa_output.pci-0000_00_1b.0.analog-stereo
    [...]
            Latency: 0 usec, configured 0 usec
    [...]

    Audio playing:

    Code
    pactl list sinks
    Sink #0
           State: RUNNING
           Name: alsa_output.pci-0000_00_1b.0.analog-stereo
    [...]
            Latency: 61212 usec, configured 90000 usec
    [...]

    Hope this helps.

    Don't know whether it makes a difference but I specifically use snapclient for playback and then run pactl list sinks to get the latency value. Maybe a different latency value is reported when Kodi plays audio, I have not looked into that.

    At a glance, output to pulse and being able to control Snapserver via RPC from Kodi, at least to select a stream and set the volume, would make it possible to simplify a lot of things.

    That would be great!! I stopped using more than one stream and stopped using volume control because I could only set those via the android app. Sometimes it worked, usually it didn't.

    Controlling these features from Kodi would be very cool :)

    If you need any testing done please let me know. I'm not a developer but I can build an image from a custom branch if needed.

    No idea if it is possible to have ALSA search for plugins in LD_LIBRARY_PATH.

    I've been looking and I did not find any environment variable that defines the location of ALSA plugins either. That's unfortunate.

    I thought there was a possibility to specify plugin locations via asoundrc or asound.conf, but I misremembered. While you can set the location of LADSPA plugins in asound.conf via a variable called LADSPA_PATH, there is no way to do this for ALSA plugins.

    Having the ALSA plugins in LibreELEC would be really cool though. We'd be able to use ALSA's 'file' plugin to write audio to the snapserver fifo instead of using pulseaudio. For playback we could use the ALSA 'dmix' plugin to let snapclient play through ALSA while not blocking the output device for Kodi. We'd be using snapcast as badaix intends it to be used: running on ALSA. As there is no native pulseaudio support, snapcast can manage latency for raw ALSA devices but not for pulseaudio.
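    To illustrate the 'file' plugin idea, here's a rough sketch of what the asound.conf entry could look like. The device name and fifo path are assumptions; the actual fifo location has to match the pipe stream configured in snapserver.

    ```
    # asound.conf sketch -- 'snapfifo' and the fifo path are placeholders
    pcm.snapfifo {
        type file
        slave.pcm null        # discard the audio, we only want the file output
        file "/tmp/snapfifo"  # must match snapserver's pipe stream source
        format "raw"          # raw PCM, as snapserver expects
    }
    ```

    Anything played to the 'snapfifo' device would then land in the fifo without pulseaudio being involved.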

    A benefit of using ALSA directly would be that we would not have to set snapclient's '--latency=xy' parameter to offset the latency introduced by pulseaudio (which is a unique value for every output device and cannot be statically defined for all systems and devices).

    The drawback would be that pulseaudio would still be required to play snapclient out to Bluetooth devices, and setting the latency parameter would still be required for synchronous playback there. But since Bluetooth playback itself introduces considerable latency, the latency parameter would have to be used to account for that anyway.

    There are other use cases as well. For example there's an ALSA plugin that can load LADSPA plugins: Ladspa_(plugin)

    For example I could create an ALSA device called 'night mode' that loads a LADSPA-based dynamics compressor for late night movie watching without having to turn up the volume on dialogue and then turn it down again in action scenes.
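    A sketch of what that 'night mode' device could look like in asound.conf. The 'dysonCompress' label (from the swh LADSPA set) and the control values are just examples; any LADSPA compressor would do, and the values would need tuning.

    ```
    # asound.conf sketch -- plugin label, path and control values are examples
    pcm.nightmode {
        type ladspa
        slave.pcm "default"
        path "/usr/lib/ladspa"
        plugins [
            {
                label dysonCompress
                input {
                    # peak limit, release time, fast ratio, ratio (placeholder values)
                    controls [ 0 1 0.5 0.99 ]
                }
            }
        ]
    }
    ```

    Something like 'aplay -D nightmode test.wav' would then play through the compressor.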

    I created a new image with alsa-plugins and it's working! Unfortunately not 100% in sync with the rest yet, but that's a problem for another day.

    Run 'pactl list sinks' and look at 'Latency'. You'll see a value in usec for your currently active output. Take this value, divide it by 1000 and pass it to snapclient as the latency parameter. E.g. for a reported playback latency of 30000 usec you run 'snapclient --latency=30' and everything will be in sync again.
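    In script form the conversion is trivial; the 61212 here is the sample value from my laptop's 'pactl list sinks' output above, so substitute your own reading.

    ```shell
    #!/bin/sh
    # Convert pulseaudio's reported sink latency (usec) to snapclient's
    # --latency parameter (ms). 61212 is a sample reading; use your own.
    LATENCY_USEC=61212
    LATENCY_MS=$((LATENCY_USEC / 1000))

    echo "snapclient --latency=$LATENCY_MS"
    ```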

    Yeah I also ran into this problem. Getting alsa-plugins into LibreELEC will solve this :)

    I will see if I can add both to snapserver (and librespot)

    That's awesome!

    Hello awiouy! First of all thank you for the work you are doing to get snapcast integrated into LibreELEC. It's really awesome!

    You mentioned that sound quality is inferior when using snapserver with Kodi and pulseaudio. I have been experimenting with snapcast a lot and I think I have a solution: In 'Settings' > 'System' you can set 'Output configuration' to 'Fixed'. Then set 'Limit sampling rate kHz' to '48.0'. Then set 'Resample quality' to 'High'.

    Sound quality should be better now as Kodi handles resampling the audio to 48kHz in high quality instead of leaving this task to pulseaudio.

    You could also leave the settings in Kodi alone and make pulseaudio use a better resampler:

    Pulseaudio [LibreELEC.wiki]

    With pulseaudio I usually set it to use the 'soxr-vhq' resampler, which also sounds very good. I don't think there's a huge difference between Kodi's high-quality resampler and pulseaudio's 'soxr-vhq'.
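    For what it's worth, the method is called 'soxr-vhq' in current pulseaudio builds; you can list what your build supports with 'pulseaudio --dump-resample-methods'. Setting it is a one-liner in daemon.conf:

    ```
    # ~/.config/pulse/daemon.conf (or /etc/pulse/daemon.conf)
    resample-method = soxr-vhq
    ```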

    About A/V sync: You can easily get perfect sync across all snapclients. Just play a video and then go into the settings menu, then choose 'Audio settings' and then select 'Audio offset'. Set this to 'Ahead by 1.000s' and all your snapclients will be in sync with the video playback. No further configuration is needed on the clients this way.

    That's some great news! Thanks chipfunk!

    I'll try to build the addon for Generic against the master tree (I'm running milhouse builds because of netflix and the like). If that doesn't work I'll get the Pi out to try your precompiled version.

    Next I'll have a look at writing a package.mk file and building drc-fir for LibreELEC. It seems like low-hanging fruit, as it only requires a simple 'make' and copying a few files around to get going.
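    Roughly what I expect the package.mk to look like. Version, site, URL and install paths below are placeholders I'd still have to fill in, not the real values.

    ```shell
    # Sketch of a LibreELEC package.mk for drc-fir.
    # Version, site, URL and binary path are placeholders.
    PKG_NAME="drc-fir"
    PKG_VERSION="0.0.0"
    PKG_LICENSE="GPL"
    PKG_SITE="https://example.org/drc-fir"
    PKG_URL="https://example.org/drc-fir-$PKG_VERSION.tar.gz"
    PKG_DEPENDS_TARGET="toolchain"
    PKG_LONGDESC="DRC: Digital Room Correction filter generator"

    makeinstall_target() {
      # drc-fir has no install target, so copy the binary by hand
      mkdir -p "$INSTALL/usr/bin"
      cp drc "$INSTALL/usr/bin"
    }
    ```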

    I might try to build the snapcast addon that's on github while I'm at it. My final goal is to have a LibreELEC system that does FIR filtering on its audio outputs and synchronous multiroom audio streaming to other zones (that also use FIR filters, of course). Considering the cost of ARM boards and DAC chips, why not go crazy? ;)

    Hey there chipfunk!

    I set up a virtual build box that I can take snapshots of. I'm no stranger to messing up the build-env myself :)

    I used it to build your brutefir branch for the 'generic' architecture and that went fine. I have an image that I might get around to testing this sunday.

    What are you using to create filters for BruteFIR? So far I have been using DRC on the command line but I recently came across this:

    GitHub - TheBigW/DRC: Digital Room Correction plugin for rhythmbox

    It's both a plugin for the Rhythmbox music player and a stand-alone application that can use either PORC or DRC to generate impulse-response filters. It seems you can use it to make multiple measurements and generate the impulse response from a weighted average. I'll be checking that out soon. So far I have not gotten it to start, but that probably has to do with my desktop being KDE while the measurement software is GTK-based. I guess I'll just set up a VM and test it on a Gnome-based desktop. I'll keep you posted on how that goes. Cheers!

    Installable addons would be really great, I'd start testing those ASAP.

    What platform do you use? I'd love to test something out if you can upload a build of yours somewhere. I can test x86/AMD64, Allwinner H3 or Amlogic S905 if you want. I also have a Raspberry Pi 3 somewhere in my office, I just can't seem to find it :D

    EDIT: Found the Pi :)

    I once tried the brutefir/mkfifo and pulseaudio module-pipe-sink/module-pipe-source approach but gave up on that because the latency was too high on the Amlogic S905D box I was working with. On AMD64 it was lower but still not great. Maybe you will have more success than me, I might have missed something.

    Like I mentioned, bmc0's dsp LADSPA filter can replace both BruteFIR for convolution and the rt-plugins for crossovers, in one package.

    Here's what I did on my NanoPi with armbian: first I built the LADSPA plugin, then wrote a config file with all the necessary crossover filters (Butterworth, Linkwitz-Riley and the like) and added a room-correction filter to the config. When you then load the dsp filter into pulseaudio, all filtering is done in a single step/one LADSPA plugin instance. It's pretty light on CPU load: it runs easily on a flimsy Allwinner H2+, and I think most of the load comes not from the filtering but from the speex-float-10 resampler I use.
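    The pulseaudio side of that looks roughly like this. The master sink name is a placeholder (use your real output from 'pactl list sinks short'), and 'ladspa_dsp' is the label bmc0's plugin registers, if I remember correctly.

    ```shell
    #!/bin/sh
    # Dry-run sketch: load bmc0's ladspa_dsp into pulseaudio as a filter sink.
    # MASTER is a placeholder -- use your real output sink name.
    MASTER="alsa_output.platform-soc_sound.stereo-fallback"

    CMD="pactl load-module module-ladspa-sink sink_name=dsp_out master=$MASTER plugin=ladspa_dsp label=ladspa_dsp"

    # Printed instead of executed so the sketch can be checked without pulseaudio;
    # drop the echo to actually load the module.
    echo "$CMD"
    ```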

    The convolver used in 'dsp' is the excellent libzita-convolver by Fons Adriaensen. The only difference in handling I noticed is that it uses 32-bit floating-point precision while BruteFIR can use 64-bit. I have never seen anyone deliver actual proof that 64-bit is useful at all, though.

    Hey there chipfunk! Great work!

    I'm in a similar situation right now: I'm using a nanopi as a pulseaudio server to host the excellent DSP LADSPA plugin (GitHub - bmc0/dsp: An audio processing program with an interactive mode.).

    It does pretty much anything and everything a computer audiophile would want: it does crossovers, dynamics processing and even room correction based on either FIR or IIR filters. I'm currently using the FIR Filter to EQ my system. The filter is generated by measuring my in-room frequency and phase response with a calibrated mic and having the excellent DRC software (DRC: Digital Room Correction) compute a correction filter. The result is really great.

    I'll have a look at your github repo. Getting LADSPA filters into LibreELEC is something I have attempted before, but I always got stuck somewhere. I'm sure I'll learn something new! Thanks!

    EDIT: Just had a look at your repo and saw that you're working on BruteFIR support. Really cool! How are you going to integrate BruteFIR into the LibreELEC sound stack? I tried to do that before with the snd-aloop kernel module and found out the hard way that it's not available for every target platform (Amlogic, I'm looking at you...). If you'd prefer to use a LADSPA plugin that works with PulseAudio's module-ladspa-sink, I really recommend the DSP plugin.