Wednesday, June 26, 2019

What is the fastest way to make HTTP requests in Python?

I am trying to build a web application fuzzer. It takes a wordlist and a URL from the user, makes a request to each resulting URL, and at the end reports output according to the responses' status codes.

I have written some code; it does ~600 req/s locally (about 8 seconds to finish a 4600-line wordlist), but since I'm using the requests library, I was wondering whether there is a faster way to do this.

From what I can tell, the only time-consuming parts are the fuzz() and req() functions, since they do most of the work. I have other functions as well, but the ones shown here should be enough to understand the problem (I didn't want to paste too much code).
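
For context, self.URLList is built from the wordlist roughly like this (a simplified sketch; buildURLList is an illustrative name, not my exact code):

def buildURLList(self, baseURL, wordlist):
    # e.g. http://target/ + 'admin' -> http://target/admin
    self.URLList = [baseURL.rstrip('/') + '/' + word.strip() for word in wordlist]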

# imports the snippet relies on
import sys
import requests
from multiprocessing.pool import ThreadPool

def __init__(self):
    self.statusCodes = [200, 204, 301, 302, 307, 403]
    self.directories = []  # URLs that answered 301 are collected here
    self.session = requests.Session()
    self.headers = {
        'User-Agent': 'x',
        'Connection': 'close'  # the valid header token is 'close', not 'Closed'
    }

def req(self, URL):
    # send a HEAD request to a single URL
    try:
        r = self.session.head(URL, allow_redirects=False, headers=self.headers, timeout=3)
        if r.status_code in self.statusCodes:
            if r.status_code == 301:
                # a 301 redirect usually indicates a directory
                self.directories.append(URL)
                self.warning("301", URL)
                return
            self.success(r.status_code, URL)
            return
        return
    except requests.exceptions.ConnectTimeout:
        # skip hosts that time out instead of aborting the whole scan
        return
    except requests.exceptions.ConnectionError:
        self.error("Connection error")
        sys.exit(1)

def fuzz(self):
    # fan the URL list out over a pool of worker threads
    pool = ThreadPool(self.threads)
    pool.map(self.req, self.URLList)
    pool.close()
    pool.join()
    return

# self.threads is the number of threads
# self.URLList is a list of full URLs
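
For comparison, here is a minimal, untested sketch of what the same HEAD-request loop could look like on asyncio with aiohttp instead of requests (the fetch() and run() names are just for illustration):

import asyncio
import aiohttp

async def fetch(session, url, statusCodes, hits):
    # send one HEAD request and record the URL if its status code is interesting
    try:
        async with session.head(url, allow_redirects=False) as r:
            if r.status in statusCodes:
                hits.append((r.status, url))
    except (asyncio.TimeoutError, aiohttp.ClientConnectionError):
        pass  # skip slow or unreachable hosts instead of aborting

async def run(urls, statusCodes):
    hits = []
    timeout = aiohttp.ClientTimeout(total=3)
    headers = {'User-Agent': 'x'}
    async with aiohttp.ClientSession(headers=headers, timeout=timeout) as session:
        await asyncio.gather(*(fetch(session, u, statusCodes, hits) for u in urls))
    return hits

# usage:
# hits = asyncio.run(run(URLList, [200, 204, 301, 302, 307, 403]))

In practice the concurrency would probably need to be capped (for example with an asyncio.Semaphore) so the fuzzer does not open thousands of sockets at once.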



