Posts by rofafor

    I still had to make a few adjustments in the file video.c, because on my system (Mint 18.3, roughly equivalent to Ubuntu Xenial, with ffmpeg 3.3.2 installed under /opt) the libva is actually too old. That can be worked around with a few adjustments, though (if anyone needs them, I can attach a patch here).

    Could you share the modifications you needed as a GitHub issue?

    This is a bit unrelated, but IIRC VDR doesn't detect new T2 transponders. I looked into it years ago but never got it finalized; however, I found a work-in-progress patch on my disk that might pinpoint the problem.

    You can verify the plugin functionality by checking the RTSP PLAY commands (tcpdump, wireshark, or debug traces): they should include an "x_pmt=" URL parameter. With hardware that has two CI slots, you can assign certain encryption systems to pre-defined slots via the plugin's setup menu. This setting adds an "x_ci=" URL parameter to the RTSP PLAY commands. If the slot auto-detection doesn't work, you should set "NDS Videoguard (900-9FF)" to the proper CAM slot in the plugin setup menu in order to decrypt the mentioned "13th Street HD" channel.
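
    For illustration, a PLAY request carrying both parameters might look roughly like this (the server address, stream ID, and parameter values are made up for the example; only the "x_pmt=" and "x_ci=" parameters themselves come from the plugin):

        PLAY rtsp://192.168.1.50:554/stream=3?x_pmt=1234&x_ci=1 RTSP/1.0
        CSeq: 5
        Session: 12345678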

    As far as I understood, the goal was, among other things, to lower the bar for new contributions, as GitHub has more features and is easier to use.


    IMO, you're just raising the bar by introducing some extra layers. There haven't been any problems forking projects from vdr-developer.org into GitHub before, and I've been doing that for ages. How should one publish changes to the upstream if they are forking from your mirrors? Are you really going to act as a man in the middle, reviewing pull requests and then submitting the same changes forward to the original master? Have you really thought the process through? It seems to me that you're going to kill the VDR community by making the plugin scene just messier.


    BR,
    Rofa

    So you're actually using drivers that were released six months before the Kaby Lake announcement. My first suggestion is to use the latest release version 1.8.1, as there have been plenty of GEN9 fixes and tweaks during the last year. Besides libva and libva-intel-driver, one should use a fairly recent libdrm too.

    It's the ColorBalance post-processing that causes this mess.


    Are you sure? It might also be the number of allowed concurrent filters. What's the count of color balance filters ("Supported color balance filter count XX" in your logs)? Can you post all the color balance messages from your logs (grep ++)? Could you bisect the problem a bit more: which one(s) of the color balance filters break things? Are you using libva and intel-vaapi-driver version 1.8.1?


    Both Antti and I have Haswell hardware only and therefore can't test any newer chipsets.

    There are two things. First, femon doesn't support the new API; only signal strength and quality values, scaled between 0 and 100, are supported.


    Second, the SAT>IP protocol provides only signal level and quality measurements. The signal level is scaled between 0 and 255, but there's no information on whether the adaptation is linear or something else:

    Quote

    Numerical value between 0 and 255:
    - An incoming L-band satellite signal of -25 dBm corresponds to 224
    - -65 dBm corresponds to 32
    - No signal corresponds to 0


    The signal quality is scaled between 0 and 15:

    Quote

    Numerical value between 0 and 15. The lowest value corresponds to the highest error rate. The value 15 shall correspond to:
    - a BER lower than 2x10^-4 after Viterbi for DVB-S
    - a PER lower than 10^-7 for DVB-S2


    So, for one specific satellite setup it's possible to calculate the actual signal level in dBm, but there's absolutely no info on how to do it for DVB-T/T2/C/C2. If you find any information, please send a pull request on GitHub.
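
    For what it's worth, if one assumes the adaptation is linear between the two anchor points quoted above (32 -> -65 dBm, 224 -> -25 dBm, i.e. 4.8 steps per dB), a conversion sketch in C++ could look like the following. The function names are mine for illustration, not part of the satip or femon plugins:

        // Convert a SAT>IP signal level (0..255) to dBm, assuming the
        // adaptation is linear between the spec's two anchor points:
        // 32 -> -65 dBm and 224 -> -25 dBm, i.e. 192 steps over 40 dB.
        double SignalLevelToDbm(int level)
        {
          return -65.0 + (level - 32) / 4.8;
        }

        // Scale the SAT>IP quality value (0..15) to the 0..100 range
        // that femon displays.
        int SignalQualityToPercent(int quality)
        {
          return quality * 100 / 15;
        }

    With these, a reported level of 224 yields -25 dBm and 32 yields -65 dBm, matching the quoted anchor points; values below 32 fall outside the range the spec defines anyway.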