I am using splinter to loop through job descriptions, and Flask to build a full-stack web app. The code works fine in a Jupyter notebook, but when I move it to a Python script in VS Code it raises: AttributeError: 'ElementList' object has no attribute 'fill'
from flask import Flask, render_template
from splinter import Browser
from bs4 import BeautifulSoup

app = Flask(__name__)

# Lists to store the scraped data
company = []
location = []
job_desc = []
position = []

# Initialize the browser to use Chrome and show its window
executable_path = {'executable_path': "chromedriver.exe"}
browser = Browser('chrome', **executable_path, headless=False)

url = "https://www.glassdoor.ca/index.htm"
browser.visit(url)
def scrape_current_page():
    # Get the html of the current page
    html = browser.html
    soup = BeautifulSoup(html, "html.parser")
    jobs = soup.find_all("li", class_="jl")

    for job in jobs:
        # Store all info into the lists
        position.append(job.find("div", class_="jobTitle").a.text)
        # ex: Tommy – Singapore
        comp_loc = job.find("div", class_="empLoc").div.text
        comp, loc = comp_loc.split("–")
        company.append(comp.strip())
        location.append(loc.strip())

        # ---------- Scrape job descriptions within a page ----------
        # The job description is in another html, therefore retrieve it
        # once again after clicking.
        browser.click_link_by_href(job.find("a", class_="jobLink")["href"])
        html = browser.html
        soup = BeautifulSoup(html, "html.parser")
        job_desc.append(soup.find("div", class_="desc").text)
def scrape_all():
    # Grab new html, grab the page control elements
    html = browser.html
    soup = BeautifulSoup(html, "html.parser")
    result = soup.find("div", class_="pagingControls").ul
    pages = result.find_all("li")

    # Scrape the first page before going to the next
    scrape_current_page()

    for page in pages:
        # Run only if an <a> exists, since un-clickable items have no <a>,
        # skipping "<" and page 1
        if page.a:
            # Click every <a> tag except the next button
            if not page.find("li", class_="Next"):
                try:
                    # Click to go to the next page, then scrape it
                    browser.click_link_by_href(page.a['href'])
                    # --------- call the scrape function here ---------
                    scrape_current_page()
                except:
                    print("This is the last page")
@app.route("/")
def home():
    return render_template("index.html")

@app.route("/scrape/<input>")
def test(input):
    title, loc = input.split("!")
    print(title, f'location = {loc}')

    # Find the search inputs using splinter, then fill them
    job_type = browser.find_by_id("KeywordSearch")
    job_type.fill(title)
    location = browser.find_by_id("LocationSearch")
    location.fill(loc)

    # Click the search button
    browser.find_by_id("HeroSearchButton").click()
    scrape_all()
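One detail worth flagging for anyone reproducing this: the company/location string is split on an en dash (U+2013, "–"), not an ASCII hyphen. A quick standalone check, using the sample value from the comment in the code above:

```python
# Sample company/location string, as in the code comment above.
# The separator is an en dash (U+2013), not a regular hyphen-minus.
comp_loc = "Tommy – Singapore"

comp, loc = comp_loc.split("–")  # splitting on "-" would fail here
print(comp.strip())  # Tommy
print(loc.strip())   # Singapore
```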
This is my code, and I believe the error occurs when @app.route("/scrape/<input>") is reached, at job_type.fill(title). I do not understand why it is not working: I instantiated browser at the beginning of the code, and I am simply using it to find the search inputs and fill them.
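For context on the error itself: find_by_id returns an ElementList rather than a single element. My understanding (this is an illustrative toy, not splinter's actual source) is that such a list-like wrapper forwards attribute lookups like .fill to its first match, so one way the lookup can fail with exactly this AttributeError is when no element matched. A minimal sketch of that behaviour:

```python
class ToyElementList(list):
    """Toy stand-in for a splinter-style ElementList (illustration only)."""
    def __getattr__(self, name):
        if not self:
            # No matched elements: nothing to forward the lookup to
            raise AttributeError(
                f"'{type(self).__name__}' object has no attribute '{name}'"
            )
        return getattr(self[0], name)

class ToyElement:
    def fill(self, value):
        self.value = value

found = ToyElementList([ToyElement()])
found.fill("data scientist")      # forwarded to the first element: works

empty = ToyElementList([])
try:
    empty.fill("data scientist")  # nothing matched
except AttributeError as e:
    print(e)  # 'ToyElementList' object has no attribute 'fill'
```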
NOTE: I am getting the value from JavaScript in a form like: data scientist!Paris
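So the route parameter is parsed like this, with "!" acting as the separator between title and location (using the sample value from the note):

```python
# Sample route parameter as sent by the JavaScript side
input_value = "data scientist!Paris"

# "!" separates the job title from the location
title, loc = input_value.split("!")
print(title)  # data scientist
print(loc)    # Paris
```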