
Crawlee
The scalable web crawling and scraping library for JavaScript


👉👉👉 Crawlee is the successor to Apify SDK. 🎉 Fully rewritten in TypeScript for a better developer experience, and with even more powerful anti-blocking features. The interface is almost the same as Apify SDK so upgrading is a breeze. Read the upgrading guide to learn about the changes. 👈👈👈

Crawlee simplifies the development of web crawlers, scrapers, data extractors and web automation jobs. It provides tools to manage and automatically scale a pool of headless browsers, maintain queues of URLs to crawl, store crawling results on the local filesystem or in the cloud, rotate proxies, and much more. Crawlee is available as the crawlee NPM package. It can be used either stand-alone in your own applications or in actors running on the Apify Cloud.

View full documentation, guides and examples on the Crawlee project website (https://crawlee.dev)

Would you like to work with us on Crawlee or similar projects? We are hiring!

Motivation

Thanks to tools like Playwright, Puppeteer or Cheerio, it is easy to write Node.js code to extract data from web pages. But eventually things will get complicated. For example, when you try to:

  • Perform a deep crawl of an entire website using a persistent queue of URLs.
  • Run your scraping code on a list of 100k URLs in a CSV file, without losing any data when your code crashes.
  • Rotate proxies to hide your browser origin and keep user-like sessions.
  • Disable browser fingerprinting protections used by websites.

Python has Scrapy for these tasks, but there was no such library for JavaScript, the language of the web. The use of JavaScript is natural, since the same language is used to write the scripts as well as the data extraction code running in a browser.

The goal of Crawlee is to fill this gap and provide a toolbox for generic web scraping, crawling and automation tasks in JavaScript. So don't reinvent the wheel every time you need data from the web, and focus on writing code specific to the target website, rather than developing commonalities.

Overview

Crawlee is available as the crawlee NPM package, and its individual components are also published as @crawlee/* packages. It provides the following tools:

  • CheerioCrawler - Enables the parallel crawling of a large number of web pages using the cheerio HTML parser. This is the most efficient web crawler, but it does not work on websites that require JavaScript (see the sketch after this list). Also available as the @crawlee/cheerio package.

  • PuppeteerCrawler - Enables the parallel crawling of a large number of web pages using the headless Chrome browser and Puppeteer. The pool of Chrome browsers is automatically scaled up and down based on available system resources. Also available as the @crawlee/puppeteer package.

  • PlaywrightCrawler - Similar to PuppeteerCrawler, but uses Playwright instead, which can manage almost any headless browser. Playwright also provides a cleaner and more mature interface while keeping the ease of use and advanced features. Also available as the @crawlee/playwright package.

  • BasicCrawler - Provides a simple framework for the parallel crawling of web pages whose URLs are fed either from a static list or from a dynamic queue of URLs. This class serves as a base for the more specialized crawlers above. Also available in the @crawlee/basic package.

  • RequestList - Represents a list of URLs to crawl. The URLs can be passed in code or in a text file hosted on the web. The list persists its state so that crawling can resume when the Node.js process restarts. Also available in the @crawlee/core package.

  • RequestQueue - Represents a queue of URLs to crawl, which is stored either in memory, on the local filesystem, or in the Apify Cloud. The queue is used for deep crawling of websites, where you start with several URLs and then recursively follow links to other pages. The data structure supports both breadth-first and depth-first crawling orders. Also available in the @crawlee/core package.

  • Dataset - Provides a store for structured data and enables its export to formats like JSON, JSONL, CSV, XML, Excel or HTML. The data is stored on a local filesystem or in the Apify Cloud. Datasets are useful for storing and sharing large tabular crawling results, such as a list of products or real estate offers. Also available in the @crawlee/core package.

  • KeyValueStore - A simple key-value store for arbitrary data records or files, along with their MIME content type. It is ideal for saving screenshots of web pages or PDFs, or for persisting the state of your crawlers (a screenshot-saving sketch follows at the end of this overview). The data is stored on a local filesystem or in the Apify Cloud. Also available in the @crawlee/core package.

  • AutoscaledPool - Runs asynchronous background tasks, while automatically adjusting the concurrency based on free system memory and CPU usage. This is useful for running web scraping tasks at the maximum capacity of the system. Also available in the @crawlee/core package.
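
For example, here is the minimal CheerioCrawler sketch referenced in the list above. The start URL and the same-hostname crawl are illustrative assumptions, not a prescribed setup:

import { CheerioCrawler } from 'crawlee';

const crawler = new CheerioCrawler({
    async requestHandler({ request, $, enqueueLinks }) {
        // Cheerio exposes a jQuery-like API over the parsed, static HTML.
        const title = $('title').text();
        console.log(`Title of ${request.loadedUrl} is '${title}'`);

        // Enqueue links found on the page; by default this stays
        // on the same hostname as the page being processed.
        await enqueueLinks();
    },
});

await crawler.run(['https://crawlee.dev']);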

Additionally, the package provides various helper functions to simplify running your code on the Apify Cloud and thus take advantage of its pool of proxies, job scheduler, data storage, etc. For more information, see the Crawlee Programmer's Reference.
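
And here is the screenshot-saving KeyValueStore sketch promised above, a minimal illustration rather than a definitive pattern (the key-sanitizing regex and the start URL are assumptions for the example):

import { PlaywrightCrawler, KeyValueStore } from 'crawlee';

const crawler = new PlaywrightCrawler({
    async requestHandler({ request, page }) {
        // Capture the rendered page as a PNG buffer.
        const screenshot = await page.screenshot();

        // Store keys allow only a limited character set, so derive
        // a safe key from the URL (illustrative sanitization).
        const key = request.loadedUrl.replace(/[^a-zA-Z0-9!\-_.'()]/g, '-');
        await KeyValueStore.setValue(key, screenshot, { contentType: 'image/png' });
    },
});

await crawler.run(['https://crawlee.dev']);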

Quick Start

This short tutorial will set you up to start using Crawlee in a minute or two. If you want to learn more, proceed to the Getting Started tutorial that will take you step by step through creating your first scraper.

Local stand-alone usage

Crawlee requires Node.js 16 or later. Add Crawlee to any Node.js project by running:

npm install crawlee playwright

Neither playwright nor puppeteer is bundled with Crawlee, to reduce install size and allow greater flexibility. That's why we install them explicitly with NPM. You can choose one, both, or neither.
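
For example, to use Puppeteer instead of (or alongside) Playwright:

npm install crawlee puppeteer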

Run the following example to perform a recursive crawl of a website using Playwright. For more examples showcasing various features of Crawlee, see the Examples section of the documentation.

import { PlaywrightCrawler, Dataset } from 'crawlee';

const crawler = new PlaywrightCrawler();

crawler.router.addDefaultHandler(async ({ request, page, enqueueLinks }) => {
    const title = await page.title();
    console.log(`Title of ${request.loadedUrl} is '${title}'`);

    // save some results
    await Dataset.pushData({ title, url: request.loadedUrl });

    // enqueue all links targeting the same hostname
    await enqueueLinks();
});

await crawler.run(['https://www.iana.org/']);
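
Note that the example uses top-level await, so it needs to run as an ES module: either save it with the .mjs extension (e.g. a file named main.mjs; the filename is just an example) or set "type": "module" in your package.json, then run:

node main.mjs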

When you run the example, you should see Crawlee automating a Chrome browser.


By default, Crawlee stores data to ./storage in the current working directory. You can override this directory via the CRAWLEE_STORAGE_DIR environment variable. For details, see Environment variables, Request storage and Result storage.
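
For example, to keep storage in a different folder (the path and entry file below are illustrative):

CRAWLEE_STORAGE_DIR=./my-storage node main.mjs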

Local usage with Crawlee command-line interface (CLI)

To create a boilerplate for your project, we can use the Crawlee command-line interface (CLI) tool.

Let's create a boilerplate of your new web crawling project by running:

npx crawlee create my-hello-world

The CLI will prompt you to select a project boilerplate template - just pick "Hello world". The tool will create a directory called my-hello-world containing Node.js project files. You can run the project as follows:

cd my-hello-world
npx crawlee run

By default, the crawling data will be stored in a local directory at ./storage. For example, the input JSON file for the actor is expected to be in the default key-value store in ./storage/key_value_stores/default/INPUT.json.
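
If you want to read that input from your own code, a minimal sketch using the default key-value store (assuming the directory layout above) looks like this:

import { KeyValueStore } from 'crawlee';

// Reads ./storage/key_value_stores/default/INPUT.json when run locally.
const input = await KeyValueStore.getValue('INPUT');
console.log(input);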

Usage on the Apify platform

Now, if we want to run our new crawler on the Apify platform, we first need to install the Apify CLI and log in with our token:

npm i -g apify-cli
apify login

We could also have used the Apify CLI to generate the project in the first place, which can be better suited if we want to run it on the Apify platform.

Finally, we can easily deploy our code to the Apify platform by running:

apify push

Your script will be uploaded to the Apify platform and built there so that it can be run. For more information, view the Apify Actor documentation.

You can also develop your web scraping project in an online code editor directly on the Apify platform. You'll need an Apify account. Go to the Actors page in the Apify Console, click Create new, then go to the Source tab and start writing your code or paste one of the examples from the Examples section.

For more information, view the Apify actors quick start guide.

Support

If you find any bug or issue with Crawlee, please submit an issue on GitHub. For questions, you can ask on Stack Overflow or contact [email protected]

Contributing

Your code contributions are welcome, and you'll be praised to eternity! If you have any ideas for improvements, either submit an issue or create a pull request. For contribution guidelines and the code of conduct, see CONTRIBUTING.md.

License

This project is licensed under the Apache License 2.0 - see the LICENSE.md file for details.
