VUB Task - Web Crawler
Using Scrapy to get the required data from the website
The spider created for scraping data from the requested website is ihvz_spider.py.
This code runs on Python 3.
- python3 -m venv venv (Mac/Linux) or py -m venv venv (Windows) ==> create a virtual environment named 'venv'
- source venv/bin/activate (for Mac) or .\venv\Scripts\activate (for Windows) ==> activate the virtual environment
- pip install -r requirements.txt ==> install the packages required to run the code
- cd ihvz_connector
- scrapy crawl content -o articles.json ==> run the 'content' spider and write the scraped data to a JSON file named 'articles.json'
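Once the crawl finishes, the exported file is a JSON array of the scraped items and can be inspected with the standard library. The field names below ("title", "link") are placeholders for whatever the spider actually yields; the sample string stands in for the contents of articles.json.

```python
import json

# Stand-in for the contents of articles.json written by `scrapy crawl ... -o`.
sample = '[{"title": "First article", "link": "/a/1"}]'

# Scrapy's JSON exporter writes a single top-level list of items.
articles = json.loads(sample)
for item in articles:
    print(item["title"], "->", item["link"])
```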