Awesome-crawler

A collection of awesome web crawlers, spiders, and resources in different languages.

Contents

Python

  • Scrapy - A fast high-level screen scraping and web crawling framework.
  • pyspider - A powerful spider system.
  • CoCrawler - A versatile web crawler built using modern tools and concurrency.
  • cola - A distributed crawling framework.
  • Demiurge - PyQuery-based scraping micro-framework.
  • Scrapely - A pure-python HTML screen-scraping library.
  • feedparser - Universal feed parser.
  • you-get - Dumb downloader that scrapes the web.
  • MechanicalSoup - A Python library for automating interaction with websites.
  • portia - Visual scraping for Scrapy.
  • crawley - Pythonic Crawling / Scraping Framework based on Non Blocking I/O operations.
  • RoboBrowser - A simple, Pythonic library for browsing the web without a standalone web browser.
  • MSpider - A simple, easy-to-use spider using gevent and JS rendering.
  • brownant - A lightweight web data extracting framework.
  • PSpider - A simple spider frame in Python3.
  • Gain - Web crawling framework based on asyncio for everyone.
  • sukhoi - Minimalist and powerful Web Crawler.
  • spidy - The simple, easy to use command line web crawler.
  • newspaper - News, full-text, and article metadata extraction in Python 3.
  • aspider - An async web scraping micro-framework based on asyncio.
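
All of the frameworks above automate the same core loop: fetch a page, extract links and data, and enqueue new URLs. A minimal sketch of the link-extraction step using only the Python standard library (the HTML snippet and URLs here are purely illustrative):

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Collects absolute URLs from <a href="..."> tags."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    # Resolve relative links against the page URL.
                    self.links.append(urljoin(self.base_url, value))

html = '<a href="/docs">Docs</a> <a href="https://example.org/">Ext</a>'
parser = LinkExtractor("https://example.com/index.html")
parser.feed(html)
print(parser.links)  # ['https://example.com/docs', 'https://example.org/']
```

Libraries like Scrapy or MechanicalSoup wrap this step (plus fetching, scheduling, and retries) behind much richer APIs.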

Java

  • ACHE Crawler - An easy to use web crawler for domain-specific search.
  • Apache Nutch - Highly extensible, highly scalable web crawler for production environments.
    • anthelion - A plugin for Apache Nutch to crawl semantic annotations within HTML pages.
  • Crawler4j - Simple and lightweight web crawler.
  • JSoup - Scrapes, parses, manipulates and cleans HTML.
  • websphinx - Website-Specific Processors for HTML information extraction.
  • Open Search Server - A full set of search functions. Build your own indexing strategy. Parsers extract full-text data. The crawlers can index everything.
  • Gecco - An easy-to-use, lightweight web crawler.
  • WebCollector - Simple interfaces for crawling the web; you can set up a multi-threaded web crawler in less than 5 minutes.
  • Webmagic - A scalable crawler framework.
  • Spiderman - A scalable, extensible, multi-threaded web crawler.
    • Spiderman2 - A distributed web crawler framework with JS rendering support.
  • Heritrix3 - Extensible, web-scale, archival-quality web crawler project.
  • SeimiCrawler - An agile, distributed crawler framework.
  • StormCrawler - An open source collection of resources for building low-latency, scalable web crawlers on Apache Storm.
  • Spark-Crawler - Evolving Apache Nutch to run on Spark.
  • webBee - A DFS web spider.
  • spider-flow - A visual spider framework; you can crawl websites without writing any code.
  • Norconex Web Crawler - Norconex HTTP Collector is a full-featured web crawler (or spider) that can store collected data in a repository of your choice (e.g. a search engine). It can be used as a standalone application or embedded into Java applications.
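
Under the hood, crawlers like the ones above maintain a URL frontier: a queue of discovered URLs plus a seen-set to avoid revisiting pages. A language-agnostic sketch of that breadth-first loop (in Python for brevity, with a stubbed link graph standing in for real HTTP fetches; `link_graph` is illustrative):

```python
from collections import deque

def crawl(seed, get_links, max_pages=100):
    """Breadth-first crawl: visit pages, enqueue unseen links."""
    frontier = deque([seed])
    seen = {seed}
    visited = []
    while frontier and len(visited) < max_pages:
        url = frontier.popleft()
        visited.append(url)
        for link in get_links(url):  # in a real crawler: fetch + parse
            if link not in seen:
                seen.add(link)
                frontier.append(link)
    return visited

# Stub link graph standing in for real pages and their outlinks.
link_graph = {
    "/": ["/a", "/b"],
    "/a": ["/b", "/c"],
    "/b": ["/"],
    "/c": [],
}
print(crawl("/", lambda u: link_graph.get(u, [])))
# ['/', '/a', '/b', '/c']
```

Production frameworks such as Nutch or Heritrix add persistence, politeness, deduplication, and distribution on top of this same loop.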

C#

  • ccrawler - Built in C# 3.5; includes a simple web content categorizer that can distinguish web pages by their content.
  • SimpleCrawler - Simple spider based on multithreading and regular expressions.
  • DotnetSpider - A cross-platform, lightweight spider developed in C#.
  • Abot - C# web crawler built for speed and flexibility.
  • Hawk - Advanced Crawler and ETL tool written in C#/WPF.
  • SkyScraper - An asynchronous web scraper / web crawler using async / await and Reactive Extensions.
  • Infinity Crawler - A simple but powerful web crawler library in C#.

JavaScript

  • scraperjs - A complete and versatile web scraper.
  • scrape-it - A Node.js scraper for humans.
  • simplecrawler - Event driven web crawler.
  • node-crawler - node-crawler has a clean, simple API.
  • js-crawler - Web crawler for Node.JS, both HTTP and HTTPS are supported.
  • webster - A reliable web crawling framework which can scrape ajax and js rendered content in a web page.
  • x-ray - Web scraper with pagination and crawler support.
  • node-osmosis - HTML/XML parser and web scraper for Node.js.
  • web-scraper-chrome-extension - Web data extraction tool implemented as chrome extension.
  • supercrawler - Define custom handlers to parse content. Obeys robots.txt, rate limits and concurrency limits.
  • headless-chrome-crawler - Headless Chrome crawler with jQuery support.
  • Squidwarc - A high-fidelity, user-scriptable archival crawler that uses Chrome or Chromium with or without a head.
  • crawlee - A web scraping and browser automation library for Node.js that helps you build reliable crawlers. Fast.
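
Several of the crawlers listed here (e.g. supercrawler, fetchbot) advertise robots.txt compliance. The check itself is straightforward; a sketch using Python's standard urllib.robotparser, parsing an inline robots.txt body for illustration (normally it would be fetched from the site's /robots.txt):

```python
from urllib.robotparser import RobotFileParser

# Parse a robots.txt body directly; parse() accepts an iterable of lines.
rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /private/",
    "Crawl-delay: 2",
])

print(rp.can_fetch("MyBot", "https://example.com/public/page"))   # True
print(rp.can_fetch("MyBot", "https://example.com/private/page"))  # False
print(rp.crawl_delay("MyBot"))  # 2
```

A polite crawler consults this check before every fetch and sleeps for the advertised crawl delay between requests to the same host.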

PHP

  • Goutte - A screen scraping and web crawling library for PHP.
  • dom-crawler - The DomCrawler component eases DOM navigation for HTML and XML documents.
  • QueryList - The progressive PHP crawler framework.
  • pspider - Parallel web crawler written in PHP.
  • php-spider - A configurable and extensible PHP web spider.
  • spatie/crawler - An easy to use, powerful crawler implemented in PHP. Can execute JavaScript.
  • crawlzone/crawlzone - Crawlzone is a fast asynchronous internet crawling framework for PHP.
  • PHPScraper - PHPScraper is a scraper & crawler built for simplicity.

C++

C

  • httrack - Copy websites to your computer.

Ruby

  • Nokogiri - A Rubygem providing HTML, XML, SAX, and Reader parsers with XPath and CSS selector support.
  • upton - A batteries-included framework for easy web scraping. Just add CSS (or do more).
  • wombat - Lightweight Ruby web crawler/scraper with an elegant DSL which extracts structured data from pages.
  • RubyRetriever - RubyRetriever is a Web Crawler, Scraper & File Harvester.
  • Spidr - Spider a site, multiple domains, certain links or infinitely.
  • Cobweb - Web crawler with very flexible crawling options, standalone or using sidekiq.
  • mechanize - Automated web interaction & crawling.

Rust

  • spider - The fastest web crawler and indexer.
  • crawler - A gRPC web indexer turbo charged for performance.

R

  • rvest - Simple web scraping for R.

Erlang

  • ebot - A scalable, distributed and highly configurable web crawler.

Perl

  • web-scraper - Web Scraping Toolkit using HTML and CSS Selectors or XPath expressions.

Go

  • pholcus - A distributed, high concurrency and powerful web crawler.
  • gocrawl - Polite, slim and concurrent web crawler.
  • fetchbot - A simple and flexible web crawler that follows the robots.txt policies and crawl delays.
  • go_spider - An awesome Go concurrent Crawler(spider) framework.
  • dht - BitTorrent DHT Protocol && DHT Spider.
  • ants-go - An open source, distributed, RESTful crawler engine in Golang.
  • scrape - A simple, higher level interface for Go web scraping.
  • creeper - The Next Generation Crawler Framework (Go).
  • colly - Fast and Elegant Scraping Framework for Gophers.
  • ferret - Declarative web scraping.
  • Dataflow kit - Extract structured data from web pages; scrape websites.
  • Hakrawler - Simple, fast web crawler designed for easy, quick discovery of endpoints and assets within a web application.

Scala

  • crawler - Scala DSL for web crawling.
  • scrala - Scala crawler (spider) framework, inspired by Scrapy.
  • ferrit - Ferrit is a web crawler service written in Scala using Akka, Spray and Cassandra.

