dmarx / videolinkbot
Reddit bot that posts a comment with all of the video links in a submission. Currently only supports YouTube.
License: MIT License
Creation of this bot could probably be simplified by generalizing some of the simplebot code. That might not even be necessary; it would be great if everything could be used as-is. At minimum, post_aggregate_links could/should be generalized, or maybe moved into a separate script. simplemonitor could also potentially be generalized.
radd.it now allows playlists to start at an arbitrary element. The 1st element is the bot comment, so playlists should always start at the 2nd element, like so: http://radd.it/comments/199cyw/_/c8m1io3?start=2
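Building that link is trivial string formatting; a minimal sketch (the helper name is hypothetical, not from the bot's code):

```python
def raddit_playlist_url(submission_id, comment_id):
    """Build a radd.it playlist link that skips the bot's own comment.

    ?start=2 tells radd.it to begin playback at the second element,
    since element 1 is the bot comment itself.
    """
    return "http://radd.it/comments/%s/_/%s?start=2" % (submission_id, comment_id)
```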
videolinkbot false-positive at the ytimg domain:
Image is thumbnail for youtube video: http://www.youtube.com/all_comments?v=uVOBEge7YTg
Potential future feature: add video thumbnails to VLB posts via format:
http://i2.ytimg.com/vi/***video_id***/mqdefault.jpg
I don't think this really adds anything. Just a thought.
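If the feature were ever added, constructing the thumbnail URL is a one-liner; a sketch using the i2.ytimg.com pattern above (the helper name is hypothetical):

```python
def youtube_thumbnail_url(video_id):
    """Return the medium-quality thumbnail URL for a YouTube video id.

    mqdefault.jpg is the 320x180 'medium quality' variant.
    """
    return "http://i2.ytimg.com/vi/%s/mqdefault.jpg" % video_id
```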
Make sure we aren't scraping month-old posts.
http://www.reddit.com/r/Music/comments/184f91/93_til_infinity_souls_of_mischief/c8bxnzu
"yes it actually worked great as a video playlist - thanks again! I don't know anything about spotify playlists except that they exist. here is a website that will convert a text file into a playlist that should be able to at least show you what the format is like. this is spotify's API documentation FWIW."
The bot would identify a link comment, scrape the post, then log the same "identified link-comment by ... on submission ..." message for a submission it had already processed. This was causing the bot to waste a lot of time repeating work it had already completed.
Instead, sort comments by new and update the bot comment as appropriate
The bot requires praw.Submission.all_comments_flat, which was removed from praw. To stay up to date, replace all_comments_flat like this:
subm.replace_more_comments()
all_comments_flat = praw.helpers.flatten_tree(subm.comments)
Pertinent discussion: http://www.reddit.com/r/redditdev/comments/17tn7r/replacement_for_all_comments_flat/
This would let other people use your code, or even submit pull requests to improve it.
Just registering that this was an issue. Resolved today.
People have been complaining that video scores are stagnant. To remedy this: every hour or so, visit bot's comment history, sort by "hot", and re-scrape those posts.
YouTube doesn't have a clear-cut rate limit, but they have blocked the bot before. This manifests as HTTP errors in get_title(), which then just posts the video link with the title "...?..." (by design). get_title() should be modified to slow the bot down (more than it already does) when it recognizes that YouTube may be getting annoyed with the bot.
Alternatively, could potentially identify if certain specific actions resulted in the bot previously being blocked and determine if there's any action that can be taken to mitigate future API blocking.
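One way to slow get_title() down when YouTube starts returning errors is a retry wrapper with exponential backoff; a minimal sketch, where the function name, retry counts, and delays are illustrative assumptions rather than the bot's actual code:

```python
import time

def with_backoff(fn, retries=4, base_delay=1.0, sleep=time.sleep):
    """Call fn(); on failure wait base_delay, 2*base_delay, 4*base_delay, ...

    Returns None after exhausting retries, so a missing title stays
    represented as a missing value rather than a fake string.
    """
    delay = base_delay
    for attempt in range(retries):
        try:
            return fn()
        except IOError:  # e.g. HTTP errors from the YouTube API
            sleep(delay)
            delay *= 2
    return None
```

Usage would look like `title = with_backoff(lambda: fetch_title(url))`, where fetch_title is whatever currently does the HTTP request inside get_title().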
Right now, the memos will grow without bound. After several days of operation they only take up a few MB, so maybe this shouldn't even be a concern. But I feel like data that's more than a day or two old should get flushed from the memos.
Examples of additional YouTube domains the bot should recognize:
m.youtube.com
youtube.googleapis.com
Right now, the bot is memoizing the result from get_title(), including "...?..."
Instead, get_title() should return None if no video title was found. Then build_comment() can replace None with "...?..." as needed. If the bot returns to the same post, it will see that it still needs to get a title for any links it missed. Hopefully, this will help get around the YouTube API rate-limiting (or at least help repair the problems it causes the bot).
Basically have this already, just need to test.
simplemonitor should ignore comments from select subreddits (in particular, those the bot has been banned from). Add a file called "blacklisted_subreddits.txt" that simplemonitor references. If a comment is from a blacklisted subreddit, memoize the id and move along.
What would be really nice is if the bot could recognize from its messages that it has been banned from a subreddit. These messages come in a standard format, so it should be fairly easy. When banned, the bot would recognize the message and update the blacklist file.
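A sketch of that auto-blacklisting step; the exact wording of reddit's ban notification is an assumption here, so the pattern would need to be checked against real messages:

```python
import re

# Assumed ban-message wording; adjust to match the actual notifications.
BAN_PATTERN = re.compile(r"you have been banned from posting to /?r/(\w+)", re.I)

def subreddit_from_ban_message(text):
    """Return the subreddit name from a ban notification, or None."""
    match = BAN_PATTERN.search(text)
    return match.group(1).lower() if match else None

def update_blacklist(text, path="blacklisted_subreddits.txt"):
    """Append a newly banned subreddit to the blacklist file."""
    name = subreddit_from_ban_message(text)
    if name:
        with open(path, "a") as f:
            f.write(name + "\n")
    return name
```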
Keeping the credentials in a file is convenient, but it would really be better to pass something like that in on the command line. Other interesting command-line options could be explored as well.
Title comes up as "None". No point posting links to removed content.
Sample:
http://www.youtube.com/watch?feature=player_detailpage&v=rsBOiNAYWtM#t=239s
http://www.reddit.com/r/videos/comments/1c0rdu/the_most_insane_live_crowd_ever_wwe_raw_4813/c9c1kmu
No reason to sort only by score. Should sort by score > author > title. Should be trivial to implement since we're already working with pandas DataFrames.
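With pandas this is a single sort_values call; a sketch with hypothetical column names (the bot's actual DataFrame layout may differ):

```python
import pandas as pd

# Illustrative data, not the bot's real DataFrame.
links = pd.DataFrame({
    "title": ["B song", "A song", "C song"],
    "author": ["zed", "amy", "amy"],
    "score": [10, 10, 42],
})

# Score descending, then author and title ascending as tie-breakers.
ranked = links.sort_values(
    by=["score", "author", "title"],
    ascending=[False, True, True],
).reset_index(drop=True)
```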
Instead of getting all video titles then formatting the links and appending to the comment body, format links and append them immediately after getting the video title. This way we can cut out as soon as the comment hits the character limit. This should also eliminate the need for the trim_comment() function, since we'll stop building comments before they hit the limit.
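The incremental approach can be sketched as below; the helper names and header text are illustrative, and 10000 is reddit's comment character limit:

```python
MAX_COMMENT_LEN = 10000  # reddit's comment character limit

def build_comment(links, get_title, header="Videos in this thread:\n"):
    """Append formatted links one at a time, stopping at the length cap.

    Fetching a title only while there is still room lets us bail out
    early instead of trimming an oversized comment afterwards.
    """
    body = header
    for url in links:
        title = get_title(url) or "...?..."
        line = "* [%s](%s)\n" % (title, url)
        if len(body) + len(line) > MAX_COMMENT_LEN:
            break
        body += line
    return body
```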
The bot was originally designed to attribute video links to the earliest comment that had posted it. In this scenario it was OK to ignore these comments on a second pass (although we would miss new videos if that user had posted any).
The problem now is that we're collecting comment score. As we're ignoring comments we've already seen, we're necessarily not updating these scores properly. We can still skip over parsing the comments for links, but we need to at least check the score on these comments.
Maybe we need to completely reevaluate how we're using the memo objects.
Since the bot is scraping /r/all anyway, it would be nice to build a dataset to play with later. The bot should store select information from all the comments it scrapes in a SQLite database. Also, the bot should store information about itself: in particular, what video links it's collecting from each subreddit. It would be interesting to see which videos are popular in which subreddits. Not sure whether deduplication is something I care about here.
Data persistence should be turned on via command line argument: default bot operation should be as lightweight as possible.
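A minimal persistence sketch with sqlite3 from the standard library; the schema is an illustrative guess, not the bot's actual design:

```python
import sqlite3

def init_db(path=":memory:"):
    """Create a minimal schema for collected video links."""
    conn = sqlite3.connect(path)
    conn.execute("""
        CREATE TABLE IF NOT EXISTS video_links (
            comment_id TEXT,
            subreddit  TEXT,
            video_url  TEXT,
            score      INTEGER,
            scraped_at TEXT DEFAULT CURRENT_TIMESTAMP
        )""")
    return conn

def record_link(conn, comment_id, subreddit, video_url, score):
    """Insert one scraped video link."""
    conn.execute(
        "INSERT INTO video_links (comment_id, subreddit, video_url, score) "
        "VALUES (?, ?, ?, ?)",
        (comment_id, subreddit, video_url, score),
    )
    conn.commit()
```

Per-subreddit popularity then falls out of a simple GROUP BY over video_url and subreddit.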
via https://gist.github.com/dmarx/4732673 (in descending order):
Implemented:
YouTube (from start)
LiveLeak (done: 4/26/13)
Vimeo (done: 5/11/13)
youtubedoubler (done, 5/11/13)
nicovideo (done, 5/12/13)
High Priority:
Vine
TED
DailyMotion
TheDailyShow
colbertnation
FunnyOrDie
CollegeHumor
TheOnion
Low Priority:
ComedyCentral
WorldStarHipHop
DeadSpin
TheStar
nymag
nytimes.com
guardian.co.uk
twitvid
flickr.com
Right now the bot just pushes messages to stdout. We should have some real logging of some kind. In particular, I'd like to log the amount of time it takes to update hot comments, to determine whether the bot should use a different rubric for when to resume normal scraping.
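The standard-library logging module covers this; a sketch where the log file name and message format are assumptions:

```python
import logging
import time

logging.basicConfig(
    filename="videolinkbot.log",  # hypothetical log file name
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
)
log = logging.getLogger("videolinkbot")

def timed_update(update_fn, clock=time.time):
    """Run one hot-comments update pass and log how long it took."""
    start = clock()
    update_fn()
    elapsed = clock() - start
    log.info("hot-comment update took %.1fs", elapsed)
    return elapsed
```

The logged durations could then be compared against the normal scrape interval to decide when to switch back.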
Not sure if this is possible, and even if it is it's probably not a good idea since it would require a GET request on every single link the bot encounters. I can dream (and open up an issue) though.
Two possible options here:
Option (1) is more in accordance with the current state of the bot, but probably not what people would really want to see. Option (2) is definitely more how people would want the videos sorted, but then the "source comment" permalink is a little deceptive.
I should probably go with option (2) and modify the bot to identify each video not with its earliest mention, but with the comment that has achieved the highest score. Alternatively, I could add a second column with a link to the highest scoring comment, but this will be the source comment for most videos. Maybe only populate this column if the source video is not the same comment as the one where the video achieved its highest score? Yuck. Would save space though.
I guess I should probably just go with option (2) and change the "source comment" to link to the highest scoring comment, but this will probably result in a feedback loop where high scoring comments will receive more upvotes. I'd prefer if these upvotes were directed to the first user to post the video, but whatever. C'est la vie.
NB: Timestamp comment with last updated time. UTC?
This wasn't really an issue before, but now it's a big problem because of the addition of playlists. A simple workaround would be to add support for a subreddit blacklist, but really we need better error handling. Should be cognizant of the similar issue when attempting to comment on a deleted post: use the specialized exceptions from praw.