Here is what I am trying to accomplish from the page https://stellar.expert/explorer/public/asset/native?cursor=15297&filter=asset-holders
- extract the data in the table columns (Account and Account balance)
- write the extracted data fields into a text file.
- do this for multiple pages, e.g. cursor values 15297 through 15500.
I am still very new to Python and web scraping, and I have been struggling to get the desired output. Any help would be much appreciated. Thank you.
from bs4 import BeautifulSoup
try:
    import urllib.request as urllib2  # Python 3
except ImportError:
    import urllib2                    # Python 2
from time import sleep

url = 'https://stellar.expert/explorer/public/asset/native?cursor=15297&filter=asset-holders'
page = urllib2.urlopen(url)
soup = BeautifulSoup(page, 'html.parser')

for div in soup.find_all('div', attrs={'class': 'table exportable space'}):
    address = div.find('tbody', attrs={'class': 'account-address'})
    address = address.text.strip()
    print(address)
    bal = div.find('tbody', attrs={'class': 'text-right nowrap'})
    bal = bal.text.strip()
    print(bal)
    print("%s%s\n" % ("page=", url))
The output I want written to result.txt:
GBMN2KIUQS66JMJHVOCA7N3S35F4F5PTY63XG3BJKKDMLPI4HPHA7QNU: 27,005 XLM
GBWVMIVJQNILFPHACV4Q7QM7VBOOQ35IQXEOOXO7WBXA4KX7OEADOU65: 27,004 XLM
GD3UXDHKS5EIKSL3PIG3NTVCCK5EQX73JMFJBFBROX3BAZ4K6437TZAC: 27,003 XLM
GBYUHMDQYNNDMJWOVODTZUKWOTVJXA2PCTDOF6J5NQS5DGMMJAL6B66L: 27,002 XLM
GDD3AQFXWWHWEFAOU4AWPZCT3Q6EVK4WYHWSG2EBQB3Z43KZSXJEK6WB: 27,001 XLM
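The steps above (parse one page, then loop over cursors and append to result.txt) can be sketched as follows. The CSS class names are taken from the code in the question and are assumptions about the live markup; note that stellar.expert may build its table with JavaScript, in which case urllib receives HTML without the table and you would need the site's JSON API or a browser driver instead.

```python
from bs4 import BeautifulSoup
import urllib.request
from time import sleep


def parse_holders(html):
    """Return a list of (address, balance) pairs found in one page of HTML."""
    soup = BeautifulSoup(html, 'html.parser')
    pairs = []
    for row in soup.find_all('tr'):
        # class names assumed from the question's code; verify in the page source
        addr = row.find(class_='account-address')
        bal = row.find(class_='text-right nowrap')
        if addr and bal:
            pairs.append((addr.text.strip(), bal.text.strip()))
    return pairs


def scrape(start=15297, end=15500, outfile='result.txt'):
    """Fetch each cursor page and append 'ADDRESS: BALANCE' lines to outfile."""
    with open(outfile, 'w') as out:
        for cursor in range(start, end + 1):
            url = ('https://stellar.expert/explorer/public/asset/native'
                   '?cursor=%d&filter=asset-holders' % cursor)
            html = urllib.request.urlopen(url).read()
            for address, balance in parse_holders(html):
                out.write('%s: %s\n' % (address, balance))
            sleep(1)  # be polite between requests
```

Splitting the parsing into its own function lets you test it on a saved HTML snippet before running the full loop, which makes it much easier to see whether the selectors match anything at all.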