AI Papers Scrapper

Downloads paper PDFs and other information from major AI conferences (when publicly available) and stores them in ./data/, creating one directory per conference and one per year. More specifically, it creates the following structure:

.
└── data
    └── conf
        └── year
            ├── abstracts.csv       # format `title|abstract`
            ├── authors.csv         # format `title;authors`
            ├── paper_info.csv      # format `title;abstract_url;pdf_url`
            └── papers
                ├── paper01.pdf     # pdf file from a paper
                ├── paper02.pdf
                ├── ...
                └── paperN.pdf
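Note that the CSV files use different delimiters: `|` for abstracts.csv and `;` for authors.csv and paper_info.csv. A minimal Python sketch for parsing rows in these formats (the sample rows below are made up for illustration; only the delimiters and column order come from the structure above):

```python
import csv
import io

# abstracts.csv rows follow `title|abstract`
abstracts_sample = "Some Paper Title|This paper proposes a method...\n"
reader = csv.reader(io.StringIO(abstracts_sample), delimiter="|")
title, abstract = next(reader)

# paper_info.csv rows follow `title;abstract_url;pdf_url`
info_sample = "Some Paper Title;https://example.com/abs;https://example.com/pdf\n"
reader = csv.reader(io.StringIO(info_sample), delimiter=";")
title2, abstract_url, pdf_url = next(reader)
```

To read the real files, pass an open file handle to `csv.reader` instead of the in-memory sample.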

This project is based on CVPR_paper_search_tool by Jin Yamanaka. I decided to split the code into multiple projects:

  • this project - downloads paper PDFs and other information from main AI conferences
  • AI Papers Cleaner - Extract text from papers PDFs and abstracts, and remove uninformative words
  • AI Papers Search Tool - Automatic paper clustering
  • AI Papers Searcher - Web app to search papers by keywords or similar papers

Currently supports the following conferences, from 2017 onward:

  • AAAI Library: AAAI
  • ACL Anthology: ACL, COLING (even years), EACL (odd years, except 2019), EMNLP, Findings (2020 and on), IJCNLP (odd years), NAACL (except 2017 and 2020), SIGDIAL, TACL
  • European Computer Vision Association: ECCV (even years)
  • International Joint Conferences on Artificial Intelligence Organization: IJCAI
  • KDD: SIGKDD (abstracts only)
  • Proceedings of Machine Learning Research: ICML
  • NeurIPS Proceedings: NeurIPS
  • NeurIPS Datasets and Benchmarks Proceedings: NeurIPS Track on Datasets and Benchmarks (2021)
  • OpenReview: ICLR
  • SIGCHI: SIGCHI (2018 and on, abstracts only)
  • The Computer Vision Foundation open access: CVPR, ICCV (odd years), WACV (2020 and on)

Requirements

Docker or, for a local installation, Python with Poetry (see Usage below).

Usage

To make it easier to run the code, with or without Docker, I created a few helpers. Both ways use start_here.sh as an entry point. Since there are a few quirks when calling the scrapers, this file contains all the commands necessary to run the code. All you need to do is uncomment the relevant lines inside the conferences array and run the script.
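As an illustration of the uncomment-and-run workflow, a hypothetical sketch of what the conferences array in start_here.sh might look like (the actual script, spider names, and invocation in the repository may differ):

```shell
#!/usr/bin/env bash
# Illustrative sketch only - the real start_here.sh may differ.
# Uncomment the conference/year pairs you want to download.
conferences=(
    # "aaai 2022"
    "iclr 2022"
    # "neurips 2021"
)

for conf in "${conferences[@]}"; do
    echo "Scraping: $conf"
    # the real script would invoke the matching scraper here, e.g. a
    # `poetry run scrapy crawl ...` command with the conference and year
done
```

With only the `"iclr 2022"` entry uncommented, the script would download just that conference and year.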

Running without Docker

You first need to install Python Poetry. Then, you can install the dependencies and run the code:

poetry install
bash start_here.sh

Running with Docker

To help with the Docker setup, I created a Dockerfile and a Makefile. The Dockerfile contains all the instructions to create the Docker image. The Makefile contains the commands to build the image, run the container, and run the code inside the container. To build the image, simply run:

make

To call start_here.sh inside the container, run:

make run

Running interactive scrapy shell

To run the interactive scrapy shell inside a Docker container, run:

make RUN_STRING="scrapy shell 'https://your.site.com'" run
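Putting the pieces together, a minimal Makefile sketch consistent with the three commands above (the image name, volume mount, and default RUN_STRING are assumptions; the repository's actual Makefile may differ):

```makefile
IMAGE_NAME := ai_papers_scrapper
RUN_STRING ?= bash start_here.sh

.PHONY: all run

# `make` builds the Docker image (first target is the default)
all:
	docker build -t $(IMAGE_NAME) .

# `make run` executes RUN_STRING inside the container;
# override it as in `make RUN_STRING="scrapy shell '...'" run`
run:
	docker run --rm -it -v $(CURDIR)/data:/app/data $(IMAGE_NAME) $(RUN_STRING)
```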

Contributors

george-gca
