This is a simple web "crawler" that fetches a bunch of urls using a pool to
control the number of outbound connections. It has as many simultaneously open
connections as coroutines in the pool.

The prints in the body of the fetch function are there to demonstrate that the
requests are truly made in parallel.
import eventlet
from eventlet.green import urllib2
urls = [
    "https://www.google.com/intl/en_ALL/images/logo.gif",
    "http://python.org/images/python-logo.gif",
    "http://us.i1.yimg.com/us.yimg.com/i/ww/beta/y3.gif",
]

def fetch(url):
    print("opening", url)
    body = urllib2.urlopen(url).read()
    print("done with", url)
    return url, body

pool = eventlet.GreenPool(200)
for url, body in pool.imap(fetch, urls):
    print("got body from", url, "of length", len(body))
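
The pool's only job is to cap how many fetches are in flight at once. As a rough sanity check of that bounded-concurrency pattern without eventlet or network access, the same shape can be sketched with the stdlib's `ThreadPoolExecutor`; `fake_fetch`, the 5-worker pool size, and the concurrency counters below are illustrative stand-ins, not part of the original example:

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for fetch(): sleeps instead of doing network I/O,
# and tracks how many workers are running at the same moment.
lock = threading.Lock()
active = 0
peak = 0

def fake_fetch(url):
    global active, peak
    with lock:
        active += 1
        peak = max(peak, active)
    time.sleep(0.05)  # simulate a slow request
    with lock:
        active -= 1
    return url, b"body"

urls = ["u%d" % i for i in range(20)]

# A pool of 5 workers means at most 5 "connections" in flight, just as
# GreenPool(200) caps the crawler at 200 simultaneous connections.
with ThreadPoolExecutor(max_workers=5) as pool:
    results = list(pool.map(fake_fetch, urls))

print("peak concurrency:", peak)  # never exceeds the pool size of 5
```

As with `GreenPool.imap`, `ThreadPoolExecutor.map` yields results in the order the inputs were submitted, regardless of which fetch finishes first.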