This project, deckstats_scraper, collects deck information from the deckstats.net website.
Install the required libraries from the requirements.txt file with the following command:

```shell
pip install -r requirements.txt
```
Change the file storage configuration in the settings.py file:

- FILES_STORE: the location where downloaded files are stored (change this to suit your own machine). Example:

  ```python
  FILES_STORE = r"/home/tony/Documents/Fiverr/decklist/downloaded"
  ```
The application scans decks between FROM_TIME and TO_TIME. Remember to set LOGIN_USERNAME and LOGIN_PASSWORD to your own deckstats.net credentials. To run the application, run the following commands:

```shell
cd deckstatsScraper/
scrapy crawl decks
```
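Taken together, the relevant part of settings.py might look like the sketch below. All values are placeholders, and the exact format expected for FROM_TIME and TO_TIME is an assumption; check the existing settings.py for the format the project actually uses.

```python
# deckstatsScraper/settings.py (excerpt) -- placeholder values, adjust for your machine

# Where downloaded deck files are stored
FILES_STORE = r"/home/tony/Documents/Fiverr/decklist/downloaded"

# Scan window (date format here is an assumption; match what settings.py already uses)
FROM_TIME = "2023-01-01"
TO_TIME = "2023-12-31"

# Your deckstats.net account credentials
LOGIN_USERNAME = "your_username"
LOGIN_PASSWORD = "your_password"
```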
Contact tony_bidget if you need further support.
No license
Features:
- Collects information on each deck from https://deckstats.net/decks/search/?search_tags=Commander&lng=en&page=1
- Saves each card as a text file
- The storage location of the downloaded files is configured in the settings.py file
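The "saved as a text file" step can be sketched as a small helper like the one below. The function name and its behavior are illustrative, not the project's actual code: it sanitizes the deck name into a safe file name and writes the deck's contents under FILES_STORE.

```python
import os
import re

def save_deck_as_text(deck_name: str, deck_text: str, files_store: str) -> str:
    """Write one deck's card list to <files_store>/<safe name>.txt and return the path."""
    os.makedirs(files_store, exist_ok=True)
    # Replace characters that are unsafe in file names with underscores
    safe_name = re.sub(r"[^\w.-]+", "_", deck_name).strip("_")
    path = os.path.join(files_store, f"{safe_name}.txt")
    with open(path, "w", encoding="utf-8") as f:
        f.write(deck_text)
    return path
```

For example, `save_deck_as_text("My Commander Deck", "1 Sol Ring\n", "downloaded")` would write the list to `downloaded/My_Commander_Deck.txt`.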
No contributors.
No unit tests.