In the current master branch, the scraper starts standard non-daemon threads and then calls .join() on each one to keep the program from ending before the work is done.
I think it would be better to use daemon threads (daemon threads are killed automatically when the program exits) and then call .join() on the queue. We would then never have to worry about the threads ending (they can sit in an infinite loop reading from the queue), and yet they would still be cleaned up when the scraper exits.
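As a rough sketch of that pattern (process() and the range of jobs are hypothetical stand-ins for the scraper's real job handling): the workers loop forever, and queue.join() blocks until every job has been marked done:

```python
import queue
import threading

def process(job):
    # Hypothetical stand-in for the scraper's real per-job work.
    print("processed", job)

job_queue = queue.Queue()

def worker():
    # Loop forever; as a daemon thread, this is killed
    # automatically when the main program exits.
    while True:
        job = job_queue.get()
        try:
            process(job)
        finally:
            job_queue.task_done()

for _ in range(4):
    t = threading.Thread(target=worker)
    t.daemon = True
    t.start()

for job in range(20):  # hypothetical batch of scrape jobs
    job_queue.put(job)

# Block until task_done() has been called once per queued job.
job_queue.join()
```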
The reason I used join() on the threads instead of the queue is that if there is an error with the credentials, the threads will exit without consuming any jobs. If join() were used on the queue instead of the threads, the program would lock up on an authentication error (all the threads would exit, but there would still be items in the queue).
I prefer to explicitly wait until all the threads are finished (which will only happen when the queue is empty or the threads can't log in), rather than trust that the threads will automatically be killed when the scrape is complete (which isn't guaranteed to ever happen). Is there any advantage to doing it with daemon threads?
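For contrast, a sketch of the approach described above (log_in(), process(), and the batch of jobs are hypothetical stand-ins): the workers exit when the queue is empty or login fails, and the main thread joins the threads rather than the queue, so it can never block waiting on jobs that nobody will consume:

```python
import queue
import threading

def log_in():
    # Hypothetical credential check; pretend it can fail.
    return True

def process(job):
    # Hypothetical stand-in for the scraper's real per-job work.
    print("processed", job)

job_queue = queue.Queue()
for job in range(20):  # hypothetical batch of scrape jobs
    job_queue.put(job)

def worker():
    # Bail out early if we can't authenticate.
    if not log_in():
        return
    # Otherwise, drain the queue until it's empty, then exit.
    while True:
        try:
            job = job_queue.get_nowait()
        except queue.Empty:
            return
        process(job)

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()

# Wait on the workers themselves; this returns even if they
# all exited early on an authentication error, so no deadlock.
for t in threads:
    t.join()
```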
I just feel like it would be nice, in the future, to be able to add things to the queue without having to worry about whether all of the workers have died.
I realize that there are more hurdles that I don't cover in this pull request. We should probably stick with your system for now.
On 2013-12-19 11:23 AM, "Carey Metcalfe" notifications@github.com wrote:
The reason I used join() on the threads instead of the queue is that if there is an error with the credentials, the threads will exit without consuming any jobs. If join() were used on the queue instead of the threads, the program would lock up on an authentication error (all the threads would exit, but there would still be stuff in the queue).
Plus, I don't think daemon threads work like that. According to the docs: "The entire Python program exits when no alive non-daemon threads are left". I've done a lot of work with threads in the past, and getting the damn things to stop and exit has always been an issue.
I prefer to explicitly wait until all the threads are finished (which will only happen when the queue is empty or the threads can't log in), rather than trust that the threads will automatically be killed when everything is done.
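For reference, the documented behaviour quoted above is easy to demonstrate: a daemon thread is killed mid-loop the moment the last non-daemon thread (here, the main thread) finishes, with no chance to clean up:

```python
import threading
import time

def background():
    while True:
        print("daemon thread still running")
        time.sleep(0.1)

t = threading.Thread(target=background)
t.daemon = True
t.start()

time.sleep(0.35)
# The main thread (the last non-daemon thread) ends here, so the
# whole program exits, killing the daemon thread mid-loop.
print("main thread done")
```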