About • Tech Stack • Contact
This Jupyter Notebook uses BeautifulSoup to scrape valuable data from a website and save it into useful .csv files. It also refreshes the scraped readings at a given time interval.
The web scraper gathers the valuable data from websites into useful .csv files.
Change the link in the URL section to your desired webpage.
URL = 'Your link here'
Open the link below in your browser to see the headers your computer sends:
http://httpbin.org/get
A page showing your request headers will appear. Copy the "User-Agent" value into the headers section.
headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36 Edg/105.0.1343.3"}
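To check that the header is wired up correctly without actually hitting a site, you can build a prepared request and inspect it. This is a minimal sketch assuming the `requests` library is used for downloading (the notebook itself does the download); the target URL here is a placeholder.

```python
import requests

# User-Agent copied from http://httpbin.org/get (yours will differ)
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
                  "AppleWebKit/537.36 (KHTML, like Gecko) "
                  "Chrome/105.0.0.0 Safari/537.36 Edg/105.0.1343.3"
}

# Prepare (but do not send) a request to confirm the header is attached
req = requests.Request("GET", "https://example.com", headers=headers).prepare()
print(req.headers["User-Agent"])
```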
Find the data you want to extract and copy its span id, span class, or div class into the title and price sections:
title = soup2.find('span',{'class':"B_NuCI"}).get_text()
price = soup2.find('div',{'class':'_30jeq3 _16Jk6d'}).get_text()
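The two lines above assume `soup2` was built from the downloaded page. As an offline sketch, the same extraction can be run against a tiny inline HTML fragment; the class names below mirror the ones used above, but on a real site they will differ, and the product name and price are made-up stand-ins.

```python
from bs4 import BeautifulSoup

# Tiny stand-in for the downloaded product page (real class names vary by site)
html = """
<html><body>
  <span class="B_NuCI">Nothing Phone (1)</span>
  <div class="_30jeq3 _16Jk6d">31999</div>
</body></html>
"""

soup2 = BeautifulSoup(html, "html.parser")
# Searching by the exact class string matches multi-class elements too
title = soup2.find("span", {"class": "B_NuCI"}).get_text().strip()
price = soup2.find("div", {"class": "_30jeq3 _16Jk6d"}).get_text().strip()
print(title, price)
```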
Now create the .CSV file: run the code below, replacing the file name as per your preference.
import csv
from datetime import date

header = ['Title', 'Price', 'Date']
today = date.today()  # date of this reading
data = [title, price, today]

with open('NothingPhone1Datascrapper.csv', 'w', newline='', encoding='UTF8') as f:
    writer = csv.writer(f)
    writer.writerow(header)
    writer.writerow(data)
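To keep recording readings at an interval (as the notebook does), reopen the file in append mode so earlier rows survive each run. A minimal sketch: `check_price` is a hypothetical stand-in for the scraping steps above, its return values are made up, and the one-second interval is illustrative.

```python
import csv
import time
from datetime import date

def check_price():
    # Hypothetical stand-in for the scraping steps above;
    # in the notebook this would re-download and re-parse the page.
    return 'Nothing Phone (1)', '31999'

# Append one reading per iteration; mode 'a' keeps earlier rows intact
for _ in range(3):  # the notebook would loop indefinitely
    title, price = check_price()
    with open('NothingPhone1Datascrapper.csv', 'a', newline='', encoding='UTF8') as f:
        csv.writer(f).writerow([title, price, date.today()])
    time.sleep(1)  # illustrative interval; tune to taste
```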
Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.
Please make sure to update tests as appropriate.
Made with ❤️ by Dharmesh Kashyap, get in touch!