
Upton

Upton is a framework for easy web-scraping with a useful debug mode that doesn't hammer your target's servers. It does the repetitive parts of writing scrapers, so you only have to write the unique parts for each site.

Installation

Add the gem to your Gemfile and run the bundle command:

gem 'upton'

Documentation

With Upton, you can scrape complex sites to a CSV in just a few lines of code:

scraper = Upton::Scraper.new("http://www.propublica.org", "section#river h1 a")
scraper.scrape_to_csv "output.csv" do |html|
  Nokogiri::HTML(html).search("#comments h2.title-link").map(&:text)
end

Just specify a URL to a list of links (or simply a list of links), an XPath expression or CSS selector for those links, and a block describing what to do with the content of the pages you've scraped. Upton comes with some pre-written blocks (Procs, technically) for scraping simple lists and tables, like the list block used in the Examples section below.

Upton operates on the theory that, for most scraping projects, you need to scrape two types of pages:

  1. Instance pages, which are the goal of your scraping, e.g. job listings or news articles.
  2. Index pages, which list instance pages. For example, a job search site's search page or a newspaper's homepage.

For more complex use cases, subclass Upton::Scraper and override the relevant methods. If you're scraping links from an API, you would override get_index; if you need to log in before scraping a site or do something special with the scraped instance page, you would override get_instance.
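For example, here's a minimal sketch of an API-backed index, assuming get_index is expected to return the array of instance URLs (the endpoint and its "url" field are hypothetical):

require 'upton'
require 'json'
require 'rest-client'

# A sketch, not Upton's built-in behavior: pull instance URLs from a
# hypothetical JSON API instead of parsing links out of an HTML index page.
class ApiScraper < Upton::Scraper
  def get_index
    response = RestClient.get("http://api.example.com/articles.json")
    JSON.parse(response).map { |article| article["url"] }
  end
end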

The get_instance and get_index methods use a protected method, get_page(url), which, well, gets a page. That's not very special. The more interesting part is that get_page(url, stash) transparently stashes the response of each request when the second parameter, stash, is true. Whenever you repeat a request (again with stash set to true), the stashed HTML is returned without hitting the server. This is helpful in the development stages of a project, when you're testing some aspect of the code and don't want to hit a server each time. If you are using get_instance and get_index, stashing can be enabled or disabled per instance of Upton::Scraper (or its subclasses) with the @debug option. Set the stash parameter of get_page directly only if you've overridden get_instance or get_index in a subclass.

Upton also sleeps (by default) 30 seconds between non-stashed requests, to reduce load on the server you're scraping. This is configurable with the @sleep_time_between_requests option.
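Both knobs are set per scraper instance. A sketch, assuming they're exposed as writers the way paginated is in the Examples section below:

scraper = Upton::Scraper.new("http://www.propublica.org", "section#river h1 a")
scraper.debug = true                     # serve repeated requests from the stash
scraper.sleep_time_between_requests = 5  # seconds between live requests (default: 30)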

Upton can handle pagination too. Scraping paginated index pages that use a query string parameter to track the current page (e.g. /search?q=test&page=2) is possible by setting @paginated to true. Use @pagination_param to set the query string parameter that specifies the current page (the default value is page). Use @pagination_max_pages to specify the number of pages to scrape (the default is two pages). You can also set @pagination_interval if you want to increment pages by a number other than 1 (e.g. if the first page is page=1 and lists instances 1 through 20, while the second page is page=21 and lists instances 21 through 40, and so on). See the Examples section below.

To handle non-standard pagination, you can override the next_index_page_url and next_instance_page_url methods; Upton fetches each URL these methods return and scrapes its contents.
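As a sketch, a made-up site whose index pages live at /archive/page/1, /archive/page/2, and so on might be handled like this (assuming, per the pagination behavior described above, that next_index_page_url receives the index URL and the next page's ordinal, and that returning an empty string stops fetching):

class ArchiveScraper < Upton::Scraper
  def next_index_page_url(url, pagination_index)
    return "" if pagination_index > 10  # arbitrary cap for this sketch
    "#{url}/page/#{pagination_index}"
  end
end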

For more complete documentation, see the RDoc.

Important Note: Upton is alpha software. The API may change at any time.

How is this different from Nokogiri?

Upton is, in essence, sugar around RestClient and Nokogiri. If you just used those tools by themselves to write scrapers, you'd be responsible for writing code to fetch, save (maybe), debug and sew together all the pieces in a slightly different way for each scraper. Upton does most of that work for you, so you can skip the boilerplate.
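For a sense of scale, here's a rough sketch of the three-line CSV example from the Documentation section written against RestClient and Nokogiri directly -- no stashing, no debug mode, and politeness by hand:

require 'rest-client'
require 'nokogiri'
require 'csv'
require 'uri'

# Fetch the index page and collect the instance links by hand...
index_html = RestClient.get("http://www.propublica.org")
links = Nokogiri::HTML(index_html).search("section#river h1 a")

# ...then fetch each instance page, extract the fields, and write the CSV.
CSV.open("output.csv", "w") do |csv|
  links.each do |link|
    article_url = URI.join("http://www.propublica.org", link["href"]).to_s
    article_html = RestClient.get(article_url)
    Nokogiri::HTML(article_html).search("#comments h2.title-link").each do |node|
      csv << [node.text]
    end
    sleep 30  # be polite to the server, as Upton is by default
  end
end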

Upton doesn't quite fit your needs?

Here are some similar libraries to check out for inspiration. No promises, since I've never used them, but they seem similar and were recommended by various HN commenters:

And these are some libraries that do related things:

Examples

If you want to scrape ProPublica's website with Upton, this is how you'd do it. (Scraping our RSS feed would be smarter, but not every site has a full-text RSS feed...)

scraper = Upton::Scraper.new("http://www.propublica.org", "section#river section h1 a")
scraper.scrape do |article_html_string|
  puts "here is the full html content of the ProPublica article listed on the homepage: "
  puts "#{article_html_string}"
  #or, do other stuff here.
end

Simple sites can be scraped with the pre-written list block in Upton::Utils, as below:

scraper = Upton::Scraper.new("http://nytimes.com", "ul.headlinesOnly a")
scraper.scrape_to_csv("output.csv", &Upton::Utils.list("h6.byline"))

A table block also exists in Upton::Utils to scrape tables to an array of arrays, as below:

> scraper = Upton::Scraper.new(["http://website.com/story.html"])
> scraper.scrape(&Upton::Utils.table("//table[2]"))
[["Jeremy", "$8.00"], ["John Doe", "$15.00"]]

This example shows how to scrape the first three pages of ProPublica's search results for the term "tools":

scraper = Upton::Scraper.new("http://www.propublica.org/search/search.php?q=tools",
                             ".compact-list a.title-link")
scraper.paginated = true
scraper.pagination_param = 'p'    # default is 'page'
scraper.pagination_max_pages = 3  # default is 2
scraper.scrape_to_csv("output.csv", &Upton::Utils.list("h2"))

Contributing

I'd love to hear from you if you're using Upton. I also appreciate your suggestions/complaints/bug reports/pull requests. If you're interested, check out the issues tab or drop me a note.

In particular, if you have a common, abstract use case, please add it to lib/utils.rb. Check out the table_to_csv and list_to_csv methods for examples.

(The pull request process is pretty easy. Fork the project on GitHub (or via the git CLI), make your changes, then submit a pull request on GitHub.)

Why "Upton"

Upton Sinclair was a pioneering, muckraking journalist who is most famous for The Jungle, a novel portraying the reality of immigrant labor struggles in Chicago meatpacking plants at the start of the 1900s. Upton, the gem, sprang out of a ProPublica project pertaining to labor issues.

Notes

Test data is copyrighted by either ProPublica or various Wikipedia contributors. In either case, it's reproduced here under a Creative Commons license. In ProPublica's case, it's BY-NC-ND; in Wikipedia's it's BY-SA.


Issues

Nokogiri::CSS::SyntaxError: unexpected '$' after ''

I've been trying to get my link working as the index_url for a while, and it hasn't been working.

s = Upton::Scraper.new("http://shops.oscommerce.com/directory?country=US&page=1")

s.scrape { |html| puts html } 

Then I get this error:

Nokogiri::CSS::SyntaxError: unexpected '$' after ''
from /Users/user/.rvm/gems/ruby-2.0.0-p247/gems/nokogiri-1.6.1/lib/nokogiri/css/parser_extras.rb:87:in `on_error'

I'm having difficulty debugging this. If I put this into an array Upton::Scraper.new(["http://shops.oscommerce.com/directory?country=US&page=1"]) then it works fine. But I would rather this gem handle the pagination for me.

Can anyone give me some direction as to why this is happening? I know this is an edge case, but I can't find what's causing it.

find by xpath

Is it possible to do something like this?

page = Upton::Scraper.new(url)
page.find_by_xpath("//body/div/a").value

Helper methods for scraping one page and for scraping multiple

That Scraper.new takes EITHER a URL and a selector OR an array of URLs is confusing. We should keep both on new for backwards compatibility, but add a helper method for each pattern -- and use those helper methods in the README.

This will hopefully allay some of the confusion in #30 and address the API problems that were mentioned in #5 without such a dramatic refactor.
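A sketch of what those helpers might look like (the names are hypothetical; they just wrap the two constructor patterns that exist today):

module Upton
  class Scraper
    # Scrape instance pages linked from an index page.
    def self.from_index(index_url, selector)
      new(index_url, selector)
    end

    # Scrape a known list of instance URLs directly.
    def self.from_urls(urls)
      new(Array(urls))
    end
  end
end

Upton::Scraper.from_index("http://www.propublica.org", "section#river h1 a")
Upton::Scraper.from_urls(["http://website.com/story.html"])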

problem scraping index page (Scraping 0 instances)

Hi!

If I try to scrape this page, alma-ata.alm.slando.kz, for the links h3.large a.link to visit next:

  scraper = Upton::Scraper.new('http://alma-ata.alm.slando.kz/','h3.large a.link')

all I get with scraper.verbose = true is:

Stashing disabled. Will download from the internet.
Downloading from http://alma-ata.alm.slando.kz/ 
Downloaded http://alma-ata.alm.slando.kz/
sleeping 30 secs
Scraping 0 instances

But from the JS console on this page I see this:

> $('h3.large a.link').size()
> 30

Looks like an error somewhere.

relative url edge cases

As @dannguyen notes with respect to the fix for #14 / #8:

should handle the following non-absolute href possibilities:
//anothersite.com (keeps scheme, too!)
/root/dir
relative/dir
?query=2
#bang

Not a priority, and may require some refactoring of resolve_url (but I don't think this will break anything)...

New version?

Can we get a new version? =D
rest-client is now in version ~> 1.8.x
Thank you!

relative URLs

issue reported by @danhillreports:

Relative URLs aren't handled properly. If a relative URL (in an anchor's href attribute) is, e.g., "/index.php", Upton will try to fetch "/index.php". Obviously, that won't work.

The fix is easy: detect relative URLs and, if found, prepend the hostname.
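A more robust sketch of that fix with Ruby's standard URI library (not Upton's actual resolve_url implementation):

require 'uri'

# Resolve an href against the page it was found on; URI.join handles
# absolute URLs, root-relative paths, and relative paths alike.
def resolve_url(href, page_url)
  URI.join(page_url, href).to_s
end

resolve_url("/index.php", "http://example.com/jobs/list.html")
# => "http://example.com/index.php"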

Make Scraper instances additive

Scraper.new('http://website.com/some_index.html', '.link') + Scraper.new('http://website.com/another_index.html', '.hyperlink')

returns another Scraper instance (or one of the original Scrapers?) with all of the links
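One possible shape, as a sketch (the instance_urls reader is hypothetical; a Scraper built from an array of URLs is shown in the Examples above):

class Upton::Scraper
  # Combine two scrapers into one that visits both link lists.
  def +(other)
    Upton::Scraper.new(instance_urls + other.instance_urls)
  end
end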

Create ScrapedPage object

Which is what would be yielded out of Scraper#scrape instead of the HTML, the URL, the instance page's index, etc.

This ScrapedPage object -- which might inherit from Nokogiri::HTML -- would contain the raw HTML, the parsed HTML, the URL, the index page from which the instance page was linked (if present), a reference to the index page's ScrapedPage object, and the instance page's index (i.e. ordinal count) of pages linked to from the index page.

This would be a breaking change, so is farther away from being implemented into stable Upton.
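A design sketch of the proposed object (the names and fields are placeholders, and this version wraps Nokogiri rather than inheriting from it):

require 'nokogiri'

class ScrapedPage
  attr_reader :url, :html, :index_page, :ordinal

  def initialize(url, html, index_page: nil, ordinal: nil)
    @url = url                # where this page was fetched from
    @html = html              # the raw HTML string
    @index_page = index_page  # ScrapedPage of the linking index, if any
    @ordinal = ordinal        # this instance's position on the index page
  end

  # Memoized parsed document.
  def doc
    @doc ||= Nokogiri::HTML(@html)
  end
end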

Handle pagination out-of-the-box

It would be nice if upton handled common implementations of pagination with minimal configuration.

As the docs point out, you've already made it super easy to handle paginated indexes by overriding next_index_page_url, but I think it could be nice to have it implemented neatly as part of the library. It could maybe be enabled with an instance variable like propubscraper.paginate = true. There could possibly be other options to set the query string parameter name (by default use page or p) and to set the maximum number of results to scrape.

I'm happy to give you a pull request if you think it's worth doing. Thanks for the useful gem btw!

Improving url_to_filename

A tangent to this discussion: #15

The gsubbing of all non-word characters may cause more collisions than desired in some edge cases, and more importantly, makes it difficult/impossible to reverse the filename and get the original URL. Maybe it's possible to use CGI.escape to convert to a proper file name and then CGI.unescape to reverse the change?

 CGI.escape 'https://github.com/propublica/upton/issues/new#hashbang+42'
 => "https%3A%2F%2Fgithub.com%2Fpropublica%2Fupton%2Fissues%2Fnew%23hashbang%2B42"

Second issue: extremely long filenames

I've run into this before but am not going to take the time to reproduce it... operating systems may restrict the length of a filename to something shorter than the saved URL. This could be mitigated by saving the file to subdirectories according to its URL path:

http://example.com/path/to/filename.html

Gets saved to:

http%3A%2F%2Fexample.com/path/to/filename.html

In some extreme scraping cases, having tens of thousands of files in the stash directory might cause some performance issues. Not sure what the speed of trying to hash the file identity through recursive subdirectory search would be (but that's probably premature optimization).
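A sketch of the subdirectory idea (not Upton's current url_to_filename):

require 'uri'
require 'cgi'

# Stash a page under <escaped scheme+host>/<path>, so no single filename
# has to carry the whole URL.
def stash_path(url, stash_dir = "stashes")
  uri = URI.parse(url)
  File.join(stash_dir, CGI.escape("#{uri.scheme}://#{uri.host}"), uri.path)
end

stash_path("http://example.com/path/to/filename.html")
# => "stashes/http%3A%2F%2Fexample.com/path/to/filename.html"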

HTML Comment on stashed pages with info

I had an Upton feature suggestion that would help with large scrapes like this. Would it
be possible when writing the scraped html out to the local copy to add some metadata
about the page to the top in html note format? Something like That way you could
preserve some information about the file even with human readable filenames
disabled.

Suggested by @esagara.
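A sketch of the suggestion (the comment format here is invented): prepend a metadata comment before writing the page to the stash.

require 'time'

def annotate(html, url)
  "<!-- stashed by Upton from #{url} at #{Time.now.utc.iso8601} -->\n" + html
end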

Recursive function causing a stack overflow

https://github.com/propublica/upton/blob/master/lib/upton.rb#L314-L326

Will cause a stack overflow with large paginations >2300 or so. Possible solution:

def get_instance(url, pagination_index=0, options={})
  resp = self.get_page(url, @debug, options)
  i = pagination_index.to_i
  until resp.empty?
    next_url = self.next_instance_page_url(url, i += 1)
    break if next_url.empty? || next_url == url  # no further pages
    next_resp = self.get_page(next_url, @debug, options)
    break if next_resp.empty?  # empty response: pagination is exhausted
    resp += next_resp
  end
  resp
end

More test coverage, more idiomatic tests

This maybe isn't an issue, but starting and stopping thin for every test run seems like it should be avoidable. I imagine this should all be fairly straightforward to test without starting and stopping thin.

Would you be up for an rspec'd pull request?

Warn users of slug collisions

Keep an in-memory Hash of slugs to URLs. If one is found, warn the user.

This will be imperfect (if the colliding URLs are stashed at separate runs of Upton), but that's okay.
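A sketch of that check (not Upton code):

class SlugRegistry
  def initialize
    @slugs = {}  # slug => the URL that produced it
  end

  # Warn if a different URL already claimed this slug, then record it.
  def register(slug, url)
    if @slugs.key?(slug) && @slugs[slug] != url
      warn "Upton: slug #{slug.inspect} collides: #{@slugs[slug]} vs. #{url}"
    end
    @slugs[slug] = url
  end
end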

Use content-type to skip non-HTML instance pages

I'm trying to scrape all the links on a site. So, for example, I tried:

u = Upton::Scraper.new("http://getbootstrap.com/2.3.2/", "a", :css)
u.verbose = true
u.sleep_time_between_requests = 0

Then it gives an encoding error on:

Cache of http://getbootstrap.com/2.3.2/assets/bootstrap.zip unavailable. Will download from the internet
Downloading from http://getbootstrap.com/2.3.2/assets/bootstrap.zip
Downloaded http://getbootstrap.com/2.3.2/assets/bootstrap.zip
Writing http://getbootstrap.com/2.3.2/assets/bootstrap.zip data to the cache

Stack Trace

Encoding::UndefinedConversionError: "\xBE" from ASCII-8BIT to UTF-8
    from /home/ubuntu-12-10/.rvm/gems/ruby-2.0.0-p195@scraper/bundler/gems/upton-011ff8ceef17/lib/upton/downloader.rb:86:in `write'
    from /home/ubuntu-12-10/.rvm/gems/ruby-2.0.0-p195@scraper/bundler/gems/upton-011ff8ceef17/lib/upton/downloader.rb:86:in `download_from_cache!'
    from /home/ubuntu-12-10/.rvm/gems/ruby-2.0.0-p195@scraper/bundler/gems/upton-011ff8ceef17/lib/upton/downloader.rb:33:in `get'
    from /home/ubuntu-12-10/.rvm/gems/ruby-2.0.0-p195@scraper/bundler/gems/upton-011ff8ceef17/lib/upton.rb:221:in `get_page'
    from /home/ubuntu-12-10/.rvm/gems/ruby-2.0.0-p195@scraper/bundler/gems/upton-011ff8ceef17/lib/upton.rb:315:in `get_instance'
    from /home/ubuntu-12-10/.rvm/gems/ruby-2.0.0-p195@scraper/bundler/gems/upton-011ff8ceef17/lib/upton.rb:332:in `block in scrape_from_list'
    from /home/ubuntu-12-10/.rvm/gems/ruby-2.0.0-p195@scraper/bundler/gems/upton-011ff8ceef17/lib/upton.rb:331:in `each'
    from /home/ubuntu-12-10/.rvm/gems/ruby-2.0.0-p195@scraper/bundler/gems/upton-011ff8ceef17/lib/upton.rb:331:in `each_with_index'
    from /home/ubuntu-12-10/.rvm/gems/ruby-2.0.0-p195@scraper/bundler/gems/upton-011ff8ceef17/lib/upton.rb:331:in `each'
    from /home/ubuntu-12-10/.rvm/gems/ruby-2.0.0-p195@scraper/bundler/gems/upton-011ff8ceef17/lib/upton.rb:331:in `map'
    from /home/ubuntu-12-10/.rvm/gems/ruby-2.0.0-p195@scraper/bundler/gems/upton-011ff8ceef17/lib/upton.rb:331:in `scrape_from_list'
    from /home/ubuntu-12-10/.rvm/gems/ruby-2.0.0-p195@scraper/bundler/gems/upton-011ff8ceef17/lib/upton.rb:177:in `block in scrape_to_csv'
    from /home/ubuntu-12-10/.rvm/rubies/ruby-2.0.0-p195/lib/ruby/2.0.0/csv.rb:1266:in `open'
    from /home/ubuntu-12-10/.rvm/gems/ruby-2.0.0-p195@scraper/bundler/gems/upton-011ff8ceef17/lib/upton.rb:175:in `scrape_to_csv'
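A sketch of the proposed guard (not in Upton): check the response's Content-Type before treating it as an instance page.

require 'rest-client'

# Return the body only when the server says it's HTML; skip (return "")
# for zips, images, and other non-HTML responses.
def fetch_if_html(url)
  resp = RestClient.get(url)
  return "" unless resp.headers[:content_type].to_s.start_with?("text/html")
  resp.body
end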

Pagination always double-downloads first page

Hi there,

First off: this is a very cool tool. Thanks so much for putting this together.

I'm a bit of a coding/scraping n00b, so forgive me if I'm missing something obvious here. But I've now tested multiple times using pagination for the index, and I believe there's a minor bug.

I'm using both "pagination_start_index" and "pagination_max_pages" (which, as a side note, doesn't actually designate how MANY pages to paginate but simply which page is the highest one it will go to -- it may be better to call this "pagination_end_index" or something similar).

No matter what I choose, the paginator will eventually download the first page twice. So if I set pagination_start_index to 15 and pagination_max_pages to 18, it will download 15, 16, 17, 18, and then 15 again.

Thank you!

Downloading and Caching part

I was trying to separate the downloading and caching code from the main upton.rb file. The separation would lead to easier testing and further expansion. Can we have that code extracted as a separate gem? (I was surprised that a gem with this functionality didn't seem to exist out there. Or maybe one does and I don't know of it.) That would mean that the Upton code would depend on this external gem.

Would love to hear views for and against this.

The example in README.md does not work

The example given in README.md on the front page is not working; it returns the HTML in its entirety.

scraper = Upton::Scraper.new("http://www.propublica.org", "section#river section h1 a")
scraper.scrape do |article_string|
  puts "here is the full text of the ProPublica article: \n #{article_string}"
  #or, do other stuff here.
end

Refactor API

Code:

require "bundler/setup"
require "upton"

url = "http://www.fcc.gov/encyclopedia/fcc-and-courts"
xpath = "//*[@id='node-18004']/div[2]/ul/li[1]/a"
Upton::Scraper.new(url)
  .scrape_to_csv("output.csv", &Upton::Utils.list(xpath, :xpath))

Error:

$ rescue scrape.rb 

Frame number: 4/11
Frame type: top

From: /Users/adelevie/programming/fcc-ogc-scraper/scrape.rb @ line 8 :

    3: require "upton"
    4: 
    5: url = "http://www.fcc.gov/encyclopedia/fcc-and-courts"
    6: xpath = "//*[@id='node-18004']/div[2]/ul/li[1]/a"
    7: 
 => 8: Upton::Scraper.new(url)
    9:   .scrape_to_csv("output.csv", &Upton::Utils.list(xpath, :xpath))

NoMethodError: undefined method `each_with_index' for "http://www.fcc.gov/encyclopedia/fcc-and-courts":String
from /Users/adelevie/.rvm/gems/ruby-1.9.3-p194/gems/upton-0.2.6/lib/upton.rb:256:in `scrape_from_list'

The readme example uses a String as the argument for Upton::Scraper::new, but for some reason something that responds to #each_with_index is expected here. It could be that the xpath isn't good, but if so, the error shouldn't be pointing to the String provided on line 8.
