Posts by edit4ever

    Can you confirm that lines 865-873 of your zap2xml.py file (in .kodi/addons/script.module.zap2xml) look like this:

    Code
    def parseJSOND(fn):
        global programs, cp
        with gzip.open(fn,"rb") as f:
            e = f.read(1)
            b = f.read()
            f.close()
        if not e:
            log.pout("[D] Skipping: " + cp,'info')
        else:

    If so - then one of the issues we may be running into is a change in how much data zap2it/screener allows to be read/downloaded from its server before it sends a blank response. However, it's strange that your zap2xml log didn't have any lines referring to the details parsing.

    Those should look like - [D] Parsing: EP000191865565
    [hr]
    OK - This version of zap2xml.py uses a different method to skip empty files. If the earlier version isn't working for you, you can try this one. Unfortunately, my lineup doesn't seem to have any empty error files to test with, so I'm relying on your feedback to try out fixes.


    No good - figured out a way to test (created an empty details file) - and it didn't work. Stand by...
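For reference, a blank details file can be simulated without waiting for the server to produce one. This is a Python 3 sketch; the cache directory and program id below are made up for illustration:

```python
import gzip
import os

# write a zero-byte gzipped details file into the cache to simulate a
# blank server response (the cache dir and program id here are examples)
cache_dir = "cache"
os.makedirs(cache_dir, exist_ok=True)
fn = os.path.join(cache_dir, "EP000000000000.js.gz")
with gzip.open(fn, "wb"):
    pass  # write nothing

# reading one byte back confirms the file is empty, which is what the
# one-byte read in parseJSOND keys off
with gzip.open(fn, "rb") as f:
    print(f.read(1))  # b''
```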
    [hr]
    4th time's the charm! I tested this on my end with an empty details file and it finished the parsing. When a details or icon download file is empty, it will show like this:

    [D] Skipping: EP000021770103 2017-02-26 16:34:55,004

    And should just keep going with the rest of the files.

    Here's the updated zap2xml.py file. Please let me know if you see any errors - I think I've got it this time. :)

    zap2xml (4).zip

    Thanks!

    It would be good to use the updated code in post #197 and let that run for a few days to see if it generates any errors.

    As for how many times to update...it depends on how much you want to download at once. Since the guide data is pulled in 6-hour increments, you could update 4x a day and a little bit of new data should download each time. If you use the standard cron settings in tvheadend (without making any changes), it downloads twice a day, which should work for most setups.
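For example, updating four times a day with a plain cron job would look something like this (standard cron field syntax; the script path is an assumption, adjust it for your install):

```
# run the grabber every six hours (4x/day) - path is illustrative
0 */6 * * * python /storage/.kodi/addons/script.module.zap2xml/zap2xml.py
```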

    Thanks again for being a part of the testing!

    Cool - let me know if it hangs in the next day or two. I'll also wait for Stepher to test before I put out an updated addon release.
    [hr]
    As a side note - that is not a large number of stations/downloads, so let's keep an eye out for errors. You can try adding a day or two to the download days setting every few days to see if the server keeps up. I can download a full 14 days for 19 stations, with my cron updates set for twice a day.

    Can I ask how many channels are in your lineup and how many days of guide data you have set to download? You may have run into the issue of too much data to download, since it hadn't been running well in a while. It should be fine going forward - but you could always set the delay function to help offset server timeouts.

    Did it stop at the same place:

    Code
    [D] Parsing: EP000019520052
    Getting: http://tvschedule.zap2it.com/tvlistings/gridDetailService?pgmId=EP000019520056
    Exception
    error<class 'urllib2.URLError'>
    Traceback (most recent call last):

    Or was it in a different place? (i.e. a different episode #)

    Awesome! You just helped identify the real problem. Now I have to figure out a solution. Stand by... :)
    [hr]
    Do me a favor - delete that file, rerun the grabber and post the log.
    [hr]
    OK - I've put in some code to skip empty files - I think that will solve the issue.
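In outline, the skip amounts to checking that a cache file decompresses to at least one byte before handing it to the parser. A minimal sketch (the attached zap2xml.py is the authoritative change; the helper name here is made up):

```python
import gzip
import os

def has_content(fn):
    # a cache file is usable only if it exists and its gzipped payload
    # decompresses to at least one byte (helper name is illustrative)
    if not os.path.isfile(fn):
        return False
    with gzip.open(fn, "rb") as f:
        return bool(f.read(1))
```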

    Download/unzip the following and replace your zap2xml.py file.

    zap2xml (1).zip

    Rerun the grabber and post the log. Thanks!!

    OK - please add

    log.pout("[D] Parsing: " + cp,'info')

    ahead of line 858 - so it looks like this:

    Code
    def parseJSOND(fn):
        global programs, cp
        log.pout("[D] Parsing: " + cp,'info')
        with gzip.open(fn,"rb") as f:


    Then rerun and post the log output. Thanks! I'm trying to see whether your issue is with the episode data or the guide data.

    Great - thanks for the updated log. It looks like "Getting: gridDetailService?pgmId=EP024717260011" failed to download, and when I check, the file doesn't exist. This means that zap2it/screener is not providing some details for episodes. It also points to the need to fix the code to handle download failures.
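The shape of that fix is to have the fetch return nothing on failure instead of raising, so the caller can log the miss and move on to the next episode. The addon itself runs on Python 2 (urllib2); here is a Python 3 sketch of the same idea, with hypothetical names:

```python
import urllib.error
import urllib.request

def get_url(url, timeout=30):
    # hypothetical fetch wrapper: return the response body, or None on
    # any network failure, so the caller can log it and skip that episode
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.read()
    except urllib.error.URLError:
        print("[D] Download failed: " + url)
        return None
```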

    Can you try the zap2xml file that I attached in my previous post? That one has a line to skip the data parse if the episode details file doesn't exist.

    Interesting - let's make sure your code is the same as mine. Rename your zap2xml.py file to zap2xml.bak and download this version.

    zap2xml.zip

    Extract the zap2xml.py file and put it in place of the one you renamed.

    Now rerun the grabber. You should still get the error - but I would like to see that log for comparison.

    Thanks!


    Here is my bad log file - hope it helps:

    [JSON] bad zap2xml grab - Pastebin.com

    OK - it failed on the /storage/.kodi/addons/script.module.zap2xml/cache/1487419200000.html.gz file. That file does exist on the zap2it site, so it should have downloaded. Try to open that file on your system and see if it contains data.

    If the file doesn't exist or contains no information, then we at least have a starting point. Thanks!
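One quick way to peek inside a cached .gz file outside of Kodi is something like this (the path in the example is the one from the log; the helper itself is illustrative):

```python
import gzip
import os

def peek_cache(fn, n=200):
    # return the first n decompressed bytes of a cached gzip file, or
    # None if the file is missing (helper name is illustrative)
    if not os.path.isfile(fn):
        return None
    with gzip.open(fn, "rb") as f:
        return f.read(n)

# e.g. peek_cache("/storage/.kodi/addons/script.module.zap2xml/cache/1487419200000.html.gz")
# a result of b"" means the download came back blank
```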
    [hr]
    OK - maybe if we don't write the details file when it contains no data, that will keep the error from happening.

    Let's try changing the following (depending on which version you're running, this should start at line 341 or line 332):

    Code
    if cp != -1 and "-D" in options:
                        fn = os.path.join(cacheDir,cp + ".js.gz")
                        if not os.path.isfile(fn):
                            data = getURL(urlRoot + "gridDetailService?pgmId=" + cp)
                            wbf(fn, data)
                            log.pout("[D] Parsing: " + cp,'info')
                        if os.path.isfile(fn):
                            parseJSOND(fn)

    To this:

    Code
    if cp != -1 and "-D" in options:
                        fn = os.path.join(cacheDir,cp + ".js.gz")
                        if not os.path.isfile(fn):
                            data = getURL(urlRoot + "gridDetailService?pgmId=" + cp)
                            if data:
                                wbf(fn, data)
                                log.pout("[D] Parsing: " + cp,'info')
                        if os.path.isfile(fn):
                            parseJSOND(fn)


    If you're having an error with your current grab, change the code and try running the grab again. I've changed mine, and I'm still waiting to see if I get the failure again.

    Well - my system ran correctly again last night with extra details...so it looks like I'm going to have to wait for another error to appear so I can troubleshoot. I want to narrow down whether the issue is with missing data or changed data. But until I have another error, I don't have a place to look.

    If yours fails again - please provide the log so I can look at the point of failure.

    Thanks!

    Looks like my system has now succumbed to the extra details JSON error. The earlier code change did not help. I'm not sure if this is an issue with an LE update or a change to the way screener/zap2it is hosting the data. I'll have to do some testing, but may not get to this for a bit. At least we know it's not limited to a single system or user issue.

    In the meantime, you can turn off the extra details and it will load the basic guide.