
videolinkbot's Issues

"by request" bot

  1. Create a new bot username (partially to skirt around subreddit bans, also to limit messaging to the main bot) and a subreddit of the same name.
  2. Build a script that monitors this subreddit for links to reddit submissions or comments (or reddit links in a self post body) and directs the bot to scrape the associated submissions.
  3. The bot then responds to the bot-specific-subreddit submission and the source submission (if it's able) with the list of videos.
  4. As with the main bot, this bot should update (every hour) on submissions made within 24 hours.
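The link detection in step 2 could be sketched as a small helper (hypothetical name `find_reddit_links`) that pulls reddit submission/comment URLs out of a title or self-post body:

```python
import re

# Matches links to reddit submissions and comments, e.g.
#   http://www.reddit.com/r/videos/comments/abc123/
#   http://reddit.com/r/videos/comments/abc123/some_title/def456
REDDIT_LINK = re.compile(
    r'https?://(?:www\.)?reddit\.com'
    r'/r/\w+/comments/(\w+)(?:/[^/\s]+/(\w+))?',
    re.IGNORECASE)

def find_reddit_links(text):
    """Return (submission_id, comment_id_or_None) for each reddit link in text."""
    return [(m.group(1), m.group(2)) for m in REDDIT_LINK.finditer(text)]
```

Each hit tells the bot which submission to scrape, and optionally which comment was pointed at.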

Creation of this bot could probably be simplified by generalizing some of the simplebot code. This might not even be necessary; it would be great if I could just use everything as-is. I think at least post_aggregate_links could/should be generalized, or maybe moved into a separate script. simplemonitor could also potentially be generalized.

Bot scraping the same posts repeatedly

Bot would identify a link comment, scrape the post, then repeat the same "identified link-comment by … on submission …" message on later passes. This is causing the bot to waste a lot of time repeating work it has already completed.

Update existing posts

People have been complaining that video scores are stagnant. To remedy this: every hour or so, visit bot's comment history, sort by "hot", and re-scrape those posts.
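The re-scrape pass only needs the unique submissions behind the bot's recent comments. With praw, the comment history would come from the bot's user page (the exact listing call depends on the praw version); each comment carries a `link_id` like `t3_abc123` that maps back to its submission:

```python
def submissions_to_rescrape(link_ids):
    """Map the bot's recent comments' link_ids (e.g. 't3_abc123') to the
    unique submission ids to re-scrape, preserving listing order."""
    seen, result = set(), []
    for link_id in link_ids:
        sub_id = link_id.split('_', 1)[1]  # strip the 't3_' kind prefix
        if sub_id not in seen:
            seen.add(sub_id)
            result.append(sub_id)
    return result
```

Feeding this from a "hot"-sorted comment listing gives exactly the posts people are still looking at.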

Add exception handling to recognize rate-limiting from the YouTube API

YouTube doesn't publish a clear-cut rate limit, but they have blocked the bot before. This manifests as HTTP errors in get_title(), which then just posts the video link with the title "...?..." (by design). get_title() should be modified to slow the bot down (more than it already does) when it recognizes that YouTube may be getting annoyed with the bot.

Alternatively, could potentially identify if certain specific actions resulted in the bot previously being blocked and determine if there's any action that can be taken to mitigate future API blocking.
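The slowdown could be a generic exponential-backoff wrapper around the title fetch (a sketch; `fetch` stands in for whatever get_title() does internally, and `urllib`'s HTTPError is a subclass of IOError/OSError, so catching IOError covers it):

```python
import time

def with_backoff(fetch, retries=4, base_delay=5, sleep=time.sleep):
    """Call fetch(); on an HTTP-style error, sleep with exponential backoff
    and retry. Returns None if every attempt fails, matching the idea that
    get_title() should fall back to a placeholder rather than crash."""
    for attempt in range(retries):
        try:
            return fetch()
        except IOError:  # urllib's HTTPError is a subclass of IOError
            sleep(base_delay * 2 ** attempt)
    return None
```

The injectable `sleep` keeps this testable without actually waiting.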

Clear old data from memos

Right now, memos will grow and grow and grow. After several days of operation they only take up a few MB, so maybe this shouldn't even be a concern. But I feel like data that's over a day or two old should get flushed from the memos.
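A flush pass could look like this (assumption: the memos would need a timestamp added per entry, which is not the current layout):

```python
import time

def purge_old_memos(memo, max_age_seconds=2 * 24 * 3600, now=None):
    """Drop memo entries older than max_age_seconds.
    Assumes memo maps id -> (timestamp, value)."""
    now = time.time() if now is None else now
    for key in [k for k, (ts, _) in memo.items() if now - ts > max_age_seconds]:
        del memo[key]
    return memo
```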

Don't memoize bad video titles

Right now, the bot is memoizing the result from get_title(), including "...?..."

Instead, get_title() should return None if no video title was found. Then, the comment builder can replace None with "...?..." as needed. If the bot returns to the same post, it will see that it still needs to get a title for any links it missed. Hopefully, this will help get around the YouTube API rate-limiting (or at least help repair the problems it causes the bot).
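The split could look like this (a sketch; `fetch_title` stands in for the real scrape inside get_title(), and the display-side substitution happens at comment-build time):

```python
TITLE_PLACEHOLDER = '...?...'

def memoized_get_title(url, memo, fetch_title):
    """Only cache successful lookups; a failed fetch returns None and is
    retried on the next pass instead of poisoning the memo."""
    if url in memo:
        return memo[url]
    title = fetch_title(url)
    if title is not None:
        memo[url] = title
    return title

def display_title(title):
    """Comment-builder-side substitution: show the placeholder, never store it."""
    return title if title is not None else TITLE_PLACEHOLDER
```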

Subreddits blacklist

simplemonitor should ignore comments from select subreddits (in particular, those the bot has been banned from). Add a file called "blacklisted_subreddits.txt" that simplemonitor references. If a comment is from a blacklisted subreddit, memoize the id and move along.

What would be really nice is if the bot could recognize from its messages that it has been banned from a subreddit. These messages come in a standard format, so this should be fairly easy: when banned, the bot would recognize the message and update the blacklist file.
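Both pieces could be sketched like this. The ban-subject pattern is an assumption based on reddit's usual "you've been banned from /r/…" wording and should be checked against a real ban message:

```python
import re

# Assumption: reddit ban notices use this subject wording; verify against
# an actual ban message before relying on it.
BAN_SUBJECT = re.compile(r"you've been banned from (?:/r/)?(\w+)", re.IGNORECASE)

def load_blacklist(path='blacklisted_subreddits.txt'):
    """Read one subreddit name per line; missing file means empty blacklist."""
    try:
        with open(path) as f:
            return set(line.strip().lower() for line in f if line.strip())
    except IOError:
        return set()

def banned_subreddit(subject):
    """Return the subreddit name if this message subject looks like a ban
    notice, else None."""
    m = BAN_SUBJECT.search(subject)
    return m.group(1).lower() if m else None
```

simplemonitor would check each comment's subreddit against `load_blacklist()` and, on a match, memoize the id and move along.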

Commandline support: username/password/credentials filename

Keeping the credentials in a file is convenient, but really it would be better to pass something like that in on the commandline. Other interesting commandline options to be explored:

  • subreddit to monitor (default: all)
  • blacklist filename
  • database name (for persistence)
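An argparse sketch of those options (option names are assumptions, not an existing interface):

```python
import argparse

def parse_args(argv=None):
    """Commandline interface sketched from the options listed above."""
    p = argparse.ArgumentParser(description='videolinkbot')
    p.add_argument('--credentials',
                   help='path to a credentials file '
                        '(alternative to --username/--password)')
    p.add_argument('--username')
    p.add_argument('--password')
    p.add_argument('--subreddit', default='all', help='subreddit to monitor')
    p.add_argument('--blacklist', default='blacklisted_subreddits.txt')
    p.add_argument('--database', default=None,
                   help='SQLite file; enables persistence when given')
    return p.parse_args(argv)
```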

Add more rigorous sorting

No reason to just sort by score. Should sort by score > author > title. Should be trivial to implement since we're already working with pandas dataframes.
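With a dataframe in hand, the multi-key sort is one call (in modern pandas this is `sort_values`; pandas of this era spelled it `DataFrame.sort` — column names here are assumptions):

```python
import pandas as pd

def sort_links(df):
    """Score descending, then author and title ascending as tie-breakers."""
    return df.sort_values(by=['score', 'author', 'title'],
                          ascending=[False, True, True])
```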

[CRITICAL] Bot should not completely ignore previously scraped comments

The bot was originally designed to attribute video links to the earliest comment that had posted it. In this scenario it was OK to ignore these comments on a second pass (although we would miss new videos if that user had posted any).

The problem now is that we're collecting comment score. As we're ignoring comments we've already seen, we're necessarily not updating these scores properly. We can still skip over parsing the comments for links, but we need to at least check the score on these comments.

Maybe we need to completely reevaluate how we're using the memo objects.
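One way to restructure the memo (an assumption, not the current layout): store score and parsed links per comment id, refresh only the score on revisits, and skip the expensive re-parse:

```python
def update_comment(comment, memo, extract_links):
    """On a revisit, skip re-parsing the body but refresh the stored score.
    memo maps comment id -> {'score': ..., 'links': [...]};
    extract_links stands in for the link parser."""
    if comment['id'] in memo:
        memo[comment['id']]['score'] = comment['score']  # cheap refresh
    else:
        memo[comment['id']] = {'score': comment['score'],
                               'links': extract_links(comment['body'])}
    return memo[comment['id']]
```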

Add (optional) data persistence

Since the bot is scraping /r/all anyway, it would be nice to build a dataset to play with later. The bot should store select information from all the comments it scrapes in a SQLite database. Also, the bot should store information about itself: in particular, what video links it's collecting from each subreddit. Would be interesting to see which videos are popular in which subreddits. Not sure whether deduplication is something I care about here.

Data persistence should be turned on via command line argument: default bot operation should be as lightweight as possible.
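A minimal sqlite3 sketch (schema and column names are assumptions; `INSERT OR REPLACE` on the comment id means re-scrapes refresh scores in place):

```python
import sqlite3

SCHEMA = """CREATE TABLE IF NOT EXISTS video_links (
    comment_id TEXT PRIMARY KEY,
    subreddit  TEXT,
    author     TEXT,
    score      INTEGER,
    url        TEXT
)"""

def store_comment(conn, comment):
    """Upsert one scraped comment; re-scrapes refresh the score."""
    conn.execute("INSERT OR REPLACE INTO video_links VALUES (?, ?, ?, ?, ?)",
                 (comment['id'], comment['subreddit'], comment['author'],
                  comment['score'], comment['url']))

# e.g. which videos are popular in which subreddits:
#   SELECT subreddit, url, MAX(score) FROM video_links GROUP BY subreddit
```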

Domain support priorities

via https://gist.github.com/dmarx/4732673 (in descending order):

Implemented:
YouTube (from start)
LiveLeak (done: 4/26/13)
Vimeo (done: 5/11/13)
youtubedoubler (done, 5/11/13)
nicovideo (done, 5/12/13)

High Priority:
Vine
TED
DailyMotion
TheDailyShow
colbertnation
FunnyOrDie
CollegeHumor
TheOnion

Low Priority:
ComedyCentral
WorldStarHipHop
DeadSpin
TheStar
nymag
nytimes.com
guardian.co.uk
twitvid
flickr.com

Add logging support

Right now it just pushes messages to stdout. We should have some real logging. In particular, I'd like to log the amount of time it takes to update hot comments, to determine whether the bot should have a different rubric for when to resume normal scraping.
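The stdlib `logging` module plus a small timing context manager would cover both needs (logger name and label are illustrative):

```python
import logging
import time
from contextlib import contextmanager

log = logging.getLogger('videolinkbot')

@contextmanager
def timed(label):
    """Log how long a phase (e.g. the hot-comment update pass) took."""
    start = time.time()
    try:
        yield
    finally:
        log.info('%s took %.1fs', label, time.time() - start)
```

Usage: `with timed('hot comment update'): update_hot_comments()`.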

Recognize videos from arbitrary domain

Not sure if this is possible, and even if it is it's probably not a good idea since it would require a GET request on every single link the bot encounters. I can dream (and open up an issue) though.

Sort video links by score

Two possible options here:

  1. Comment score of earliest comment where this videolink was posted.
  2. Comment score of highest scoring comment containing video link

Option (1) is more in accordance with the current state of the bot, but probably not what people would really want to see. Option (2) is definitely more how people would want the videos sorted, but then the "source comment" permalink is a little deceptive.

I should probably go with option (2) and modify the bot to identify each video not with its earliest mention, but with the comment that has achieved the highest score. Alternatively, I could add a second column with a link to the highest scoring comment, but this will be the source comment for most videos. Maybe only populate this column if the source video is not the same comment as the one where the video achieved its highest score? Yuck. Would save space though.

I guess I should probably just go with option (2) and change the "source comment" to link to the highest scoring comment, but this will probably result in a feedback loop where high scoring comments will receive more upvotes. I'd prefer if these upvotes were directed to the first user to post the video, but whatever. C'est la vie.
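Option (2) reduces to keeping, per video URL, the highest-scoring comment seen so far (comment shape here is an assumed dict, not the bot's actual objects):

```python
def best_comments(comments):
    """For each video url, keep the highest-scoring comment that posted it."""
    best = {}
    for c in comments:
        cur = best.get(c['url'])
        if cur is None or c['score'] > cur['score']:
            best[c['url']] = c
    return best
```

The "source comment" permalink then comes from the winning comment.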

NB: Timestamp comment with last updated time. UTC?

Script enters an infinite loop when trying to post to a deleted submission

This wasn't really an issue before, but now it's a big problem because of the addition of playlists. A simple workaround would be to add support for a subreddit blacklist, but really we need better error handling. Be cognizant of the similar issue when attempting to comment on a deleted post: use the specialized exceptions from praw.
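The loop-breaking part could be a guard that records failed submission ids so the main loop never retries them (`post` stands in for the praw call; in practice the `except` clause should name praw's specific exception classes rather than bare `Exception`):

```python
def try_post(submission_id, post, failed):
    """Attempt to post to a submission once; on failure (e.g. the submission
    was deleted), record the id so the main loop never retries it."""
    if submission_id in failed:
        return False
    try:
        post(submission_id)
        return True
    except Exception:  # narrow to praw's exceptions in real code
        failed.add(submission_id)
        return False
```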
