In this case, your task is I/O-bound, not CPU-bound: the websites take longer to reply than it takes the CPU to make one pass through the script (excluding the TCP requests). That means you won't get any speedup from running this work in parallel processes (which is what multiprocessing does). What you want is multithreading. The way to get it is with the little-documented, and perhaps unfortunately named, multiprocessing.dummy:
import requests
from multiprocessing.dummy import Pool as ThreadPool

urls = ['https://www.python.org',
        'https://www.python.org/about/']

def get_status(url):
    r = requests.get(url)
    return r.status_code

if __name__ == "__main__":
    pool = ThreadPool(4)                  # make a pool of 4 worker threads
    results = pool.map(get_status, urls)  # fetch each URL in its own thread
    pool.close()                          # close the pool and wait for the work to finish
    pool.join()
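For comparison, the same pattern is available in the standard library as concurrent.futures.ThreadPoolExecutor, which is the more commonly recommended interface today. The sketch below is illustrative only: the fetch function and its time.sleep are stand-ins I made up to simulate network latency, so it runs without touching the network; in real use you would call requests.get(url) inside it instead.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fetch(url):
    # stand-in for an I/O-bound call such as requests.get(url);
    # the sleep simulates waiting on the network
    time.sleep(0.1)
    return 200

urls = ['https://www.python.org', 'https://www.python.org/about/']

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as executor:
    # map runs fetch on each URL in its own thread; the with-block
    # joins the workers on exit, like pool.close()/pool.join()
    results = list(executor.map(fetch, urls))
elapsed = time.perf_counter() - start
```

Because the threads spend their time waiting on I/O rather than holding the GIL, both simulated requests overlap and the total elapsed time stays close to a single 0.1 s wait rather than the 0.2 s a sequential loop would take.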