Reference

class urlfetch.Response(r, **kwargs)[source]

A Response object.

>>> import urlfetch
>>> response = urlfetch.get("http://docs.python.org/")
>>> response.total_time
0.033042049407959
>>> response.status, response.reason, response.version
(200, 'OK', 10)
>>> type(response.body), len(response.body)
(<type 'str'>, 8719)
>>> type(response.text), len(response.text)
(<type 'unicode'>, 8719)
>>> response.getheader('server')
'Apache/2.2.16 (Debian)'
>>> response.getheaders()
[
    ('content-length', '8719'),
    ('x-cache', 'MISS from localhost'),
    ('accept-ranges', 'bytes'),
    ('vary', 'Accept-Encoding'),
    ('server', 'Apache/2.2.16 (Debian)'),
    ('last-modified', 'Tue, 26 Jun 2012 19:23:18 GMT'),
    ('connection', 'close'),
    ('etag', '"13cc5e4-220f-4c36507ded580"'),
    ('date', 'Wed, 27 Jun 2012 06:50:30 GMT'),
    ('content-type', 'text/html'),
    ('x-cache-lookup', 'MISS from localhost:8080')
]
>>> response.headers
{
    'content-length': '8719',
    'x-cache': 'MISS from localhost',
    'accept-ranges': 'bytes',
    'vary': 'Accept-Encoding',
    'server': 'Apache/2.2.16 (Debian)',
    'last-modified': 'Tue, 26 Jun 2012 19:23:18 GMT',
    'connection': 'close',
    'etag': '"13cc5e4-220f-4c36507ded580"',
    'date': 'Wed, 27 Jun 2012 06:50:30 GMT',
    'content-type': 'text/html',
    'x-cache-lookup': 'MISS from localhost:8080'
}
Raises: ContentLimitExceeded
body[source]

Response body.

Raises: ContentLimitExceeded, ContentDecodingError
close()[source]

Close the connection.

content
cookies[source]

Cookies in a dict.

cookiestring[source]

Cookie string.

classmethod from_httplib(connection, **kwargs)[source]

Make a Response object from an httplib response object.

headers[source]

Response headers.

Response headers is a dict with all keys lowercased.

json[source]

Load the response body as JSON.

Raises: ContentDecodingError
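
A sketch, assuming the endpoint returns a JSON body (the URL is illustrative):

>>> import urlfetch
>>> response = urlfetch.get("http://httpbin.org/ip")
>>> data = response.json
>>> isinstance(data, dict)
True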

links[source]

Links parsed from the HTTP Link header.

next()

Return the next chunk of the response body (iterator protocol; see read()).
read(chunk_size=65536)[source]

Read content (for streaming and large files).

Parameters: chunk_size (int) – size of chunk; default is 65536, i.e. 64 KiB.
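
A streaming sketch built on read(); the URL and filename are illustrative, and the loop assumes read() returns a falsy value at the end of the stream:

import urlfetch

# Stream a large download to disk in 64 KiB chunks instead of
# buffering the whole body through response.body.
response = urlfetch.get("http://example.com/big.tar.gz")
with open("big.tar.gz", "wb") as f:
    while True:
        chunk = response.read(65536)
        if not chunk:  # assumption: falsy at end of stream
            break
        f.write(chunk)
response.close()
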
reason = None

Reason phrase returned by the server.

status = None

Status code returned by the server.

status_code = None

An alias of status.

text[source]

Response body in unicode.

total_time = None

Total time of the request, in seconds.

version = None

HTTP protocol version used by the server: 10 for HTTP/1.0, 11 for HTTP/1.1.

class urlfetch.Session(headers={}, cookies={}, auth=None)[source]

A session object.

urlfetch.Session can hold common headers and cookies. Every request issued by a urlfetch.Session object carries these headers and cookies.

urlfetch.Session also handles cookies for you, much like a cookie jar.

Parameters:
  • headers (dict) – Initial headers.
  • cookies (dict) – Initial cookies.
  • auth (tuple) – (username, password) for basic authentication.
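
A minimal usage sketch (the URL is illustrative):

import urlfetch

s = urlfetch.Session(
    headers={'user-agent': 'my-app/1.0'},
    cookies={'foo': 'bar'},
    auth=('user', 'passwd'),
)
# Every request carries the session's headers and cookies;
# cookies set by responses update s.cookies in turn.
response = s.get("http://example.com/")
print(response.status, s.cookiestring)
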
cookies = None

Default cookies.

cookiestring

Cookie string.

It's assignable; assigning to it updates cookies correspondingly.

>>> s = Session()
>>> s.cookiestring = 'foo=bar; 1=2'
>>> s.cookies
{'1': '2', 'foo': 'bar'}
delete(*args, **kwargs)[source]

Issue a DELETE request.

fetch(*args, **kwargs)[source]

Fetch a URL.

get(*args, **kwargs)[source]

Issue a GET request.

head(*args, **kwargs)[source]

Issue a HEAD request.

headers = None

Default headers.

options(*args, **kwargs)[source]

Issue an OPTIONS request.

patch(*args, **kwargs)[source]

Issue a PATCH request.

popcookie(key)[source]

Remove a cookie from the default cookies.

popheader(header)[source]

Remove a header from the default headers.

post(*args, **kwargs)[source]

Issue a POST request.

put(*args, **kwargs)[source]

Issue a PUT request.

putcookie(key, value='')[source]

Add a cookie to the default cookies.

putheader(header, value)[source]

Add a header to the default headers.

request(*args, **kwargs)[source]

Issue a request.

snapshot()[source]
trace(*args, **kwargs)[source]

Issue a TRACE request.

urlfetch.request(url, method='GET', params=None, data=None, headers={}, timeout=None, files={}, randua=False, auth=None, length_limit=None, proxies=None, trust_env=True, max_redirects=0, source_address=None, validate_certificate=None, **kwargs)[source]

Request a URL.

Parameters:
  • url (string) – URL to be fetched.
  • method (string) – (optional) HTTP method, one of GET, DELETE, HEAD, OPTIONS, PUT, POST, TRACE, PATCH. GET is the default.
  • params (dict/string) – (optional) Dict or string to attach to the url as a querystring.
  • headers (dict) – (optional) HTTP request headers.
  • timeout (float) – (optional) Timeout in seconds.
  • files – (optional) Files to be sent.
  • randua – (optional) If True or a path string, use a random user agent in headers instead of 'urlfetch/' + __version__.
  • auth (tuple) – (optional) (username, password) for basic authentication.
  • length_limit (int) – (optional) If None, there is no limit on content length; if the limit is exceeded, ContentLimitExceeded is raised.
  • proxies (dict) – (optional) HTTP proxies, e.g. {'http': '127.0.0.1:8888', 'https': '127.0.0.1:563'}.
  • trust_env (bool) – (optional) If True, urlfetch reads information from the environment, such as HTTP_PROXY and HTTPS_PROXY.
  • max_redirects (int) – (optional) Max redirects allowed within a request. Default is 0, which means redirects are not allowed.
  • source_address (tuple) – (optional) A (host, port) tuple to bind the source address to. Ignored on Python versions prior to 2.7/3.2.
  • validate_certificate (bool) – (optional) If False, urlfetch skips certificate and hostname verification.
Returns: A Response object.

Raises: URLError, UrlfetchException, TooManyRedirects
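
A sketch of a typical call (URL and values illustrative):

import urlfetch

response = urlfetch.request(
    "http://example.com/api",
    method="POST",
    params={"token": "abc"},  # attached to the URL as a querystring
    data={"name": "value"},   # request body
    timeout=10.0,
    max_redirects=3,          # allow up to 3 redirects
)
print(response.status, response.reason)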

urlfetch.fetch(*args, **kwargs)[source]

Fetch a URL.

fetch() is a wrapper around request(). It calls get() by default; if either data or files is supplied, post() is called.
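
For example:

import urlfetch

# No data or files supplied: fetch() behaves like get()
r1 = urlfetch.fetch("http://example.com/")

# data supplied: fetch() behaves like post()
r2 = urlfetch.fetch("http://example.com/", data={"k": "v"})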

urlfetch.get(url, *, method='GET', params=None, data=None, headers={}, timeout=None, files={}, randua=False, auth=None, length_limit=None, proxies=None, trust_env=True, max_redirects=0, source_address=None, validate_certificate=None, **kwargs)

Issue a GET request.

urlfetch.post(url, *, method='POST', params=None, data=None, headers={}, timeout=None, files={}, randua=False, auth=None, length_limit=None, proxies=None, trust_env=True, max_redirects=0, source_address=None, validate_certificate=None, **kwargs)

Issue a POST request.
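
A multipart upload sketch using the files parameter; the exact accepted shapes of the files dict aren't documented here, so mapping a field name to file contents is an assumption:

import urlfetch

with open("report.csv", "rb") as f:
    response = urlfetch.post(
        "http://example.com/upload",
        data={"title": "Monthly report"},
        files={"file": f.read()},  # assumed shape: field name -> file contents
    )
print(response.status)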

urlfetch.head(url, *, method='HEAD', params=None, data=None, headers={}, timeout=None, files={}, randua=False, auth=None, length_limit=None, proxies=None, trust_env=True, max_redirects=0, source_address=None, validate_certificate=None, **kwargs)

Issue a HEAD request.

urlfetch.put(url, *, method='PUT', params=None, data=None, headers={}, timeout=None, files={}, randua=False, auth=None, length_limit=None, proxies=None, trust_env=True, max_redirects=0, source_address=None, validate_certificate=None, **kwargs)

Issue a PUT request.

urlfetch.delete(url, *, method='DELETE', params=None, data=None, headers={}, timeout=None, files={}, randua=False, auth=None, length_limit=None, proxies=None, trust_env=True, max_redirects=0, source_address=None, validate_certificate=None, **kwargs)

Issue a DELETE request.

urlfetch.options(url, *, method='OPTIONS', params=None, data=None, headers={}, timeout=None, files={}, randua=False, auth=None, length_limit=None, proxies=None, trust_env=True, max_redirects=0, source_address=None, validate_certificate=None, **kwargs)

Issue an OPTIONS request.

urlfetch.trace(url, *, method='TRACE', params=None, data=None, headers={}, timeout=None, files={}, randua=False, auth=None, length_limit=None, proxies=None, trust_env=True, max_redirects=0, source_address=None, validate_certificate=None, **kwargs)

Issue a TRACE request.

urlfetch.patch(url, *, method='PATCH', params=None, data=None, headers={}, timeout=None, files={}, randua=False, auth=None, length_limit=None, proxies=None, trust_env=True, max_redirects=0, source_address=None, validate_certificate=None, **kwargs)

Issue a PATCH request.

Exceptions

class urlfetch.UrlfetchException[source]

Base exception. All exceptions and errors will subclass from this.

class urlfetch.ContentLimitExceeded[source]

Content length is beyond the limit.

class urlfetch.URLError[source]

Error parsing or handling the URL.

class urlfetch.ContentDecodingError[source]

Failed to decode the content.

class urlfetch.TooManyRedirects[source]

Too many redirects.

class urlfetch.Timeout[source]

Request timed out.
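
Since every error subclasses UrlfetchException, a handler can catch specific exceptions first and fall back to the base class (URL and limits illustrative):

import urlfetch

try:
    response = urlfetch.get("http://example.com/",
                            timeout=5,
                            length_limit=1024 * 1024,
                            max_redirects=3)
except urlfetch.ContentLimitExceeded:
    print("body exceeded the 1 MiB limit")
except urlfetch.Timeout:
    print("request timed out")
except urlfetch.UrlfetchException as e:
    print("request failed:", e)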

Helpers

urlfetch.parse_url(url)[source]

Return a dict of the parsed URL.

Including scheme, netloc, path, params, query, fragment, uri, username, password, host, port and http_host.
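
A sketch of accessing a few of the documented keys (the URL is illustrative):

import urlfetch

parts = urlfetch.parse_url("http://user:pass@example.com:8080/a/b?q=1#frag")
print(parts['scheme'], parts['host'], parts['path'])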

urlfetch.get_proxies_from_environ()[source]

Get proxies from os.environ.

urlfetch.mb_code(s, coding=None, errors='replace')[source]

Encoding/decoding helper.

urlfetch.random_useragent(filename=True)[source]

Return a random User-Agent string chosen from a file.

Parameters: filename (string) – (optional) Path to the file from which a random user agent is chosen. Defaults to True, meaning a file shipped with this module is used.
Returns: A User-Agent string.
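
For example, pairing it with a request (a sketch; request() offers the randua parameter for the same effect):

import urlfetch

ua = urlfetch.random_useragent()  # picks from the file shipped with the module
response = urlfetch.get("http://example.com/", headers={'User-Agent': ua})
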
urlfetch.url_concat(url, args, keep_existing=True)[source]

Concatenate a URL and an argument dict.

>>> url_concat("http://example.com/foo?a=b", dict(c="d"))
'http://example.com/foo?a=b&c=d'
Parameters:
  • url (string) – URL to concatenate onto.
  • args (dict) – Args to concatenate.
  • keep_existing (bool) – (optional) Whether to keep the args already present in url; default is True.
urlfetch.choose_boundary()[source]

Generate a multipart boundary.

Returns: A boundary string.
urlfetch.encode_multipart(data, files)[source]

Encode data and files into a multipart body.

Parameters:
  • data (dict) – Data to be encoded.
  • files (dict) – Files to be encoded.
Returns: Encoded binary string.

Raises: UrlfetchException