With the help of some online tutorials (Bucky's), I've managed to write a simple web scraper that checks whether some text is on a webpage. What I would like to do, however, is have the code run every hour. I assume I will also need to host the code somewhere so it can do this? I've done some research but can't seem to find a proper way of running it every hour. Here is the code I've got so far:
import requests
from bs4 import BeautifulSoup

def odeon_spider(max_pages):
    page = 1
    while page <= max_pages:
        url = "http://ift.tt/2dlRZS4" + str(page)  # builds the page URL
        source_code = requests.get(url)  # fetches the page
        plain_text = source_code.text  # response body as text
        soup = BeautifulSoup(plain_text, "lxml")  # create BeautifulSoup object
        div_content = soup.find_all("div", {"class": "textComponent"})  # all divs with this class
        for x in div_content:
            find_para = str(x.find('p').text)  # first paragraph inside the div
            text_to_search = "Register to be notified"  # text to search for
            if text_to_search in find_para:  # checks if text is in find_para
                print("No tickets")
            else:
                print("Tickets")
        page += 1

odeon_spider(1)
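For the "every hour" part, the usual options are an OS-level scheduler (cron on a hosted Linux box, Task Scheduler on Windows) that invokes the script once per run, or keeping the script alive and sleeping between runs. As a minimal sketch of the second approach, here is a hypothetical `run_periodically` helper (the function name and `max_runs` parameter are my own, added so the loop can be bounded for testing); in real use you would pass `odeon_spider` as the task and leave `max_runs` unset:

```python
import time

def run_periodically(task, interval_seconds=3600, max_runs=None):
    """Call task() every interval_seconds; run forever unless max_runs is set."""
    runs = 0
    while max_runs is None or runs < max_runs:
        task()  # e.g. lambda: odeon_spider(1)
        runs += 1
        if max_runs is None or runs < max_runs:
            time.sleep(interval_seconds)  # wait roughly an hour between runs
    return runs

# Example (bounded so it terminates):
# run_periodically(lambda: odeon_spider(1), interval_seconds=3600, max_runs=24)
```

Note that `time.sleep(3600)` drifts slightly (it waits an hour *after* each run finishes, not on the hour), and the process must stay running, which is why a cron entry like `0 * * * * python /path/to/scraper.py` on a small always-on host is often the simpler choice.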
Thanks!