portia

Visual scraping for Scrapy.

Overview

Portia is a tool for visually scraping web sites without any programming knowledge. Just annotate web pages with a point-and-click editor to indicate what data you want to extract, and portia will learn how to scrape similar pages from the site.

Portia has a web-based UI served by a Twisted server, so you can install it on almost any modern platform.

Requirements

  • Python 2.7
  • Works on Linux, Windows, Mac OS X, BSD
  • Supported browsers: Latest versions of Chrome (recommended) or Firefox

Prerequisites

You might need to run the following commands (shown here for Debian/Ubuntu) to install the required tools and libraries before building portia:

apt-get install python-pip python-dev libxml2-dev libxslt1-dev libffi-dev libssl-dev
pip install virtualenv

Installation

The recommended way to install dependencies is to use virtualenv:

virtualenv YOUR_ENV_NAME --no-site-packages

and then do:

source YOUR_ENV_NAME/bin/activate
cd slyd
pip install -r requirements.txt

As slybot is a slyd dependency, it will also get installed.

Note: you may need to use sudo or pip --user if you run into permission problems while installing.

Running portia

First, you need to start the UI and create a project. Run slyd using:

cd slyd
twistd -n slyd

and point your browser to: http://localhost:9001/static/main.html

Choose the site you want to scrape and create a project. Every project is created with a default spider named after the domain of the site you are scraping. When you are ready, you can run your project with slybot to do the actual crawling/extraction.
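Each spider created this way is stored as a JSON file inside the project directory. As a rough, illustrative sketch only (the field names below are assumptions based on typical Portia projects, not a guaranteed schema):

```python
import json

# Illustrative sketch of a Portia spider definition. The exact schema is
# defined by slyd/slybot; the keys below are assumptions for illustration.
spider = {
    "start_urls": ["http://example.com/"],
    "follow_patterns": ["http://example\\.com/products/.*"],
    "exclude_patterns": [],
    "templates": [],  # annotated sample pages created in the visual editor
}

print(json.dumps(spider, indent=2))
```

The visual editor maintains these files for you; you normally never edit them by hand.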

Projects created with slyd can be found at:

slyd/data/projects

To run one of those projects use:

portiacrawl project_path spidername

where spidername is the name of one of the project's spiders. If you don't remember the name of the spider, just use:

portiacrawl project_path

and you will get the list of spiders for that project.

Portia spiders are ultimately Scrapy spiders. You can pass Scrapy spider arguments when running them with portiacrawl by using the -a command-line option. A custom settings module may also be specified using the --settings command-line option. Please refer to the Scrapy documentation for details on arguments and settings.
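As a minimal sketch of what -a does (the class and argument names below are illustrative stand-ins, not part of Portia): Scrapy passes each -a key=value pair to the spider's constructor as a keyword argument, and the base spider stores it as an instance attribute.

```python
# Minimal sketch of how Scrapy turns "-a key=value" options into spider
# attributes. Real Portia spiders inherit this behaviour from scrapy.Spider;
# the class and argument names here are illustrative stand-ins.

class SpiderSketch:
    name = "example.com"

    def __init__(self, **kwargs):
        # scrapy.Spider.__init__ stores each -a keyword argument like this:
        for key, value in kwargs.items():
            setattr(self, key, value)

# An invocation such as:
#   portiacrawl project_path example.com -a start_url=http://example.com/shop
# results (roughly) in:
spider = SpiderSketch(start_url="http://example.com/shop")
print(spider.start_url)  # -> http://example.com/shop
```

Inside the spider, the argument is then available as self.start_url.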

Running portia with vagrant

This is probably the easiest way to install and run portia.

First, you need to install Vagrant and a virtualization provider such as VirtualBox.

After that, cd into the repo directory and run:

vagrant up

This will set up and start an Ubuntu virtual machine, build portia and launch the slyd server for you. Just point your browser to http://localhost:8000/static/main.html after Vagrant has finished the whole process (you should see default: slyd start/running, process XXXX in your console) and you can start using portia. You can stop the server with vagrant suspend or vagrant halt.

The repository directory is shared with the VM, so you don't need to do anything special to keep it in sync. You can ssh into the virtual machine by running vagrant ssh. The repo dir will be mounted at /vagrant in the VM. Please note that you need to ssh into the VM to run the portiacrawl script.

Repository structure

There are two main components in this repository, slyd and slybot:

slyd

The visual editor used to create your scraping projects.

slybot

The Python web crawler that performs the actual site scraping. It's implemented on top of the Scrapy web crawling framework and the Scrapely extraction library. It uses projects created with slyd as input.
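To give a feel for the Scrapely approach (this toy is a drastic simplification for illustration, not Scrapely's actual algorithm): from one annotated example page it learns the markup surrounding the wanted value, then reuses that context to extract from similar pages.

```python
import re

# Toy illustration of instance-based extraction: learn the HTML context
# around an annotated value in a template page, then reuse that context
# on a similarly structured page. Scrapely's real algorithm is far more
# robust; this only conveys the idea.

def learn_pattern(template_html, example_value):
    """Capture the text immediately before and after the annotated value."""
    idx = template_html.index(example_value)
    prefix = template_html[max(0, idx - 20):idx]
    end = idx + len(example_value)
    suffix = template_html[end:end + 20]
    return prefix, suffix

def extract(page_html, pattern):
    """Pull out whatever sits between the learned prefix and suffix."""
    prefix, suffix = pattern
    m = re.search(re.escape(prefix) + r"(.*?)" + re.escape(suffix),
                  page_html, re.S)
    return m.group(1) if m else None

template = '<div class="price">$19.99</div>'   # annotated example page
pattern = learn_pattern(template, "$19.99")

similar = '<div class="price">$4.50</div>'     # new, similar page
print(extract(similar, pattern))  # -> $4.50
```

The annotations made in the slyd editor play the role of the example value here: they tell the extractor which parts of the template page matter.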
