Proxies with Python 'Requests' module

Just a short, simple one about the excellent Requests module for Python.

I can't seem to find in the documentation what the variable 'proxies' should contain. When I sent it a dict with a standard "IP:PORT" value it was rejected, asking for 2 values. So, I guess (because this doesn't seem to be covered in the docs) that the first value is the IP and the second the port?

The docs mention this only:

proxies – (optional) Dictionary mapping protocol to the URL of the proxy.

So I tried this... what should I be doing?

proxy = { ip: port}

and should I convert these to some type before putting them in the dict?

r = requests.get(url,headers=headers,proxies=proxy)


The proxies dict syntax is {"protocol": "ip:port", ...}. With it you can specify different (or the same) proxies for requests made over the http, https, and ftp protocols:

http_proxy  = ""
https_proxy = ""
ftp_proxy   = ""

proxyDict = { 
              "http"  : http_proxy, 
              "https" : https_proxy, 
              "ftp"   : ftp_proxy
            }

r = requests.get(url, headers=headers, proxies=proxyDict)

Deduced from the requests documentation:

Parameters: method – method for the new Request object. url – URL for the new Request object. ... proxies – (optional) Dictionary mapping protocol to the URL of the proxy. ...
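For illustration, a filled-in version of that dictionary might look like the sketch below. The 10.10.1.x addresses and ports are placeholders, not real proxies:

```python
# Placeholder proxy endpoints -- substitute your own host:port values.
http_proxy  = "http://10.10.1.10:3128"
https_proxy = "http://10.10.1.11:1080"
ftp_proxy   = "http://10.10.1.10:3128"

proxy_dict = {
    "http":  http_proxy,
    "https": https_proxy,
    "ftp":   ftp_proxy,
}

# Then pass it to any request method:
# r = requests.get(url, headers=headers, proxies=proxy_dict)
```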

On Linux you can also do this via the HTTP_PROXY, HTTPS_PROXY, and FTP_PROXY environment variables:

export HTTP_PROXY=
export FTP_PROXY=

On Windows:

set http_proxy=
set https_proxy=
set ftp_proxy=
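As a quick sanity check that the environment-variable route works, the standard library can read the same variables back; requests honours them too. The address below is a placeholder:

```python
import os
import urllib.request

# Placeholder proxy address -- not a real proxy.
os.environ["http_proxy"] = "http://10.10.1.10:3128"

# getproxies() builds a protocol -> proxy-URL mapping from the
# *_proxy environment variables; requests reads the same variables
# (unless trust_env is disabled on a Session).
proxies = urllib.request.getproxies()
print(proxies["http"])
```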

Thanks, Jay, for pointing this out: the syntax changed with requests 2.0.0. You'll need to add a scheme to the proxy URL.
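A minimal sketch of the post-2.0.0 form, using a placeholder address:

```python
from urllib.parse import urlsplit

# Since requests 2.0.0 the proxy URL must carry a scheme;
# a bare "10.10.1.10:3128" is rejected with a missing-scheme error.
proxies = {
    "http": "http://10.10.1.10:3128",  # placeholder proxy address
}

scheme = urlsplit(proxies["http"]).scheme
print(scheme)
```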

I have found that urllib has some really good code to pick up the system's proxy settings and they happen to be in the correct form to use directly. You can use this like:

import requests
import urllib.request

r = requests.get('', proxies=urllib.request.getproxies())

It works really well, and urllib knows how to get the Mac OS X and Windows settings as well.

You can refer to the Requests proxy documentation.

If you need to use a proxy, you can configure individual requests with the proxies argument to any request method:

import requests

proxies = {
  "http": "",
  "https": "",
}

requests.get("", proxies=proxies)

To use HTTP Basic Auth with your proxy, use the syntax:

proxies = {
    "http": "http://user:pass@"
}
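For illustration, a complete Basic Auth proxy URL (user, pass, host, and port below are all placeholders) parses like this:

```python
from urllib.parse import urlsplit

# Placeholder credentials and proxy host.
proxies = {
    "http": "http://user:pass@10.10.1.10:3128",
}

# requests extracts the userinfo part of the proxy URL and sends it
# to the proxy as a Proxy-Authorization header.
parts = urlsplit(proxies["http"])
print(parts.username, parts.password, parts.hostname, parts.port)
```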

The accepted answer was a good start for me, but I kept getting the following error:

AssertionError: Not supported proxy scheme None

The fix was to specify http:// in the proxy URL, thus:

http_proxy  = ""
https_proxy  = ""
ftp_proxy   = ""

proxyDict = {
              "http"  : http_proxy,
              "https" : https_proxy,
              "ftp"   : ftp_proxy
            }

I'd be interested as to why the original works for some people but not me.

Edit: I see the main answer is now updated to reflect this :)

Here is my basic class in Python for the requests module, with some proxy configs and a stopwatch!

import requests
import time

class BaseCheck():
    def __init__(self, url):
        self.http_proxy  = "http://user:pw@proxy:8080"
        self.https_proxy = "http://user:pw@proxy:8080"
        self.ftp_proxy   = "http://user:pw@proxy:8080"
        self.proxyDict = {
                      "http"  : self.http_proxy,
                      "https" : self.https_proxy,
                      "ftp"   : self.ftp_proxy
        }
        self.url = url

        def makearr(tsteps):
            # Build a stopwatch entry for each named step.
            global stemps
            global steps
            stemps = {}
            for step in tsteps:
                stemps[step] = {'start': 0, 'end': 0}
            steps = tsteps
        makearr(['init'])

        def starttime(typ=""):
            # Record the start (or a custom) timestamp for every step.
            for stemp in stemps:
                if typ == "":
                    stemps[stemp]['start'] = time.time()
                else:
                    stemps[stemp][typ] = time.time()
        starttime()

    def __str__(self):
        return str(self.url)

    def getrequests(self):
        g = requests.get(self.url, proxies=self.proxyDict)
        print(g.status_code)
        print(g.content)
        print(self.url)
        stemps['init']['end'] = time.time()
        # Elapsed time for the 'init' step.
        x = stemps['init']['end'] - stemps['init']['start']
        print(x)


If you'd like to persist cookies and session data, you'd best do it like this:

import requests

proxies = {
    'http': 'http://user:pass@',
    'https': 'https://user:pass@',
}

# Create the session and set the proxies.
s = requests.Session()
s.proxies = proxies

# Make the HTTP request through the session.
r = s.get('')

It’s a bit late, but here is a wrapper class that simplifies scraping proxies and then making an HTTP POST or GET.


I just made a proxy grabber that can also connect with the same grabbed proxy without any input. Here it is:

#Import Modules

from termcolor import colored
from selenium import webdriver
import requests
import os
import sys
import time

#Proxy Grab

options = webdriver.ChromeOptions()
driver = webdriver.Chrome(chrome_options=options)
tbody = driver.find_element_by_tag_name("tbody")
cell = tbody.find_elements_by_tag_name("tr")
for column in cell:
    column = column.text.split(" ")


#Proxy Connection

print(colored('Getting Proxies from graber...','green'))
proxy = {"http": "http://"+ column[0]+":"+column[1]}
url = ''
r = requests.get(url,  proxies=proxy)
print(colored('Connecting using proxy' ,'green'))
sts = r.status_code
