How do I download a file over HTTP using Python?
I have a small utility that I use to download an MP3 file from a website on a schedule, and then build/update a podcast XML file, which I've obviously added to iTunes.
The text processing that creates/updates the XML file is written in Python. However, on Windows I use wget inside a .bat file to download the actual MP3 file.
I struggled to find a way to actually download the file in Python, which is why I resorted to wget.
So, how do I download the file using Python?
Also, using urlretrieve:
```python
import urllib
urllib.urlretrieve("http://www.example.com/songs/mp3.mp3", "mp3.mp3")
```
(For Python 3+, use import urllib.request and urllib.request.urlretrieve instead.)
One more, with a "progress bar":
```python
import urllib2

url = "http://download.thinkbroadband.com/10MB.zip"

file_name = url.split('/')[-1]
u = urllib2.urlopen(url)
f = open(file_name, 'wb')
meta = u.info()
file_size = int(meta.getheaders("Content-Length")[0])
print "Downloading: %s Bytes: %s" % (file_name, file_size)

file_size_dl = 0
block_sz = 8192
while True:
    buffer = u.read(block_sz)
    if not buffer:
        break

    file_size_dl += len(buffer)
    f.write(buffer)
    status = r"%10d  [%3.2f%%]" % (file_size_dl, file_size_dl * 100. / file_size)
    # chr(8) is backspace: rewind the cursor so the next status overwrites this one
    status = status + chr(8) * (len(status) + 1)
    print status,

f.close()
```
In Python 2, use urllib2, which comes with the standard library.
```python
import urllib2
response = urllib2.urlopen('http://www.example.com/')
html = response.read()
```
This is the most basic way to use the library, minus any error handling. You can also do more complex things such as changing headers. The documentation can be found here.
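For example, a minimal sketch of changing headers with urllib2 (the User-Agent string here is just an illustrative value, not from the original answer):

```python
import urllib2

# Build a Request object so custom headers can be attached before opening it
request = urllib2.Request('http://www.example.com/')
request.add_header('User-Agent', 'my-downloader/1.0')  # illustrative value
response = urllib2.urlopen(request)
html = response.read()
```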
In 2012, use the Python requests library:
```python
>>> import requests
>>>
>>> url = "http://download.thinkbroadband.com/10MB.zip"
>>> r = requests.get(url)
>>> print len(r.content)
10485760
```
You can run pip install requests to get it.
Requests has many advantages over the alternatives simply because the API is much simpler. This is especially true if you have to do authentication, where urllib and urllib2 are unintuitive and painful.
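For illustration, here is roughly what HTTP basic auth looks like with requests (a sketch with placeholder URL and credentials):

```python
import requests

# Basic auth is a single keyword argument; urllib2 needs a password
# manager and a handler chain for the same thing.
r = requests.get("http://www.example.com/protected/file.zip",
                 auth=("user", "password"))
with open("file.zip", "wb") as f:
    f.write(r.content)
```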
2015-12-30:
There has been some praise for the progress bar. It's cool, sure. There are several off-the-shelf solutions now, including tqdm:
```python
from tqdm import tqdm
import requests

url = "http://download.thinkbroadband.com/10MB.zip"
response = requests.get(url, stream=True)

with open("10MB", "wb") as handle:
    for data in tqdm(response.iter_content()):
        handle.write(data)
```
This is essentially the implementation @kvance described 30 months ago.
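A possible refinement of the tqdm snippet (an untested sketch, not part of the original answer): read in larger chunks and pass the Content-Length so the bar can show a total:

```python
from tqdm import tqdm
import requests

url = "http://download.thinkbroadband.com/10MB.zip"
response = requests.get(url, stream=True)
# Content-Length may be absent, so fall back to 0 (tqdm then shows no total)
total = int(response.headers.get("content-length", 0))

with open("10MB", "wb") as handle:
    # 1 KiB chunks instead of the default one byte per iteration
    for data in tqdm(response.iter_content(chunk_size=1024),
                     total=total // 1024, unit="KB"):
        handle.write(data)
```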
```python
import urllib2

# 'wb' opens the file in binary write mode so the MP3 bytes are saved as-is
mp3file = urllib2.urlopen("http://www.example.com/songs/mp3.mp3")
with open('test.mp3', 'wb') as output:
    output.write(mp3file.read())
```
Python 3

urllib.request.urlopen:

```python
import urllib.request
response = urllib.request.urlopen('http://www.example.com/')
html = response.read()
```

urllib.request.urlretrieve:

```python
import urllib.request
urllib.request.urlretrieve('http://www.example.com/songs/mp3.mp3', 'mp3.mp3')
```

Python 2

urllib2.urlopen (thanks Corey):

```python
import urllib2
response = urllib2.urlopen('http://www.example.com/')
html = response.read()
```

urllib.urlretrieve (thanks PabloG):

```python
import urllib
urllib.urlretrieve('http://www.example.com/songs/mp3.mp3', 'mp3.mp3')
```
An improved version of PabloG's code for Python 2/3:
```python
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from __future__ import (division, absolute_import, print_function, unicode_literals)

import sys, os, tempfile, logging

if sys.version_info >= (3,):
    import urllib.request as urllib2
    import urllib.parse as urlparse
else:
    import urllib2
    import urlparse


def download_file(url, dest=None):
    """
    Download and save a file specified by url to dest directory.
    """
    u = urllib2.urlopen(url)

    scheme, netloc, path, query, fragment = urlparse.urlsplit(url)
    filename = os.path.basename(path)
    if not filename:
        filename = 'downloaded.file'
    if dest:
        filename = os.path.join(dest, filename)

    with open(filename, 'wb') as f:
        meta = u.info()
        meta_func = meta.getheaders if hasattr(meta, 'getheaders') else meta.get_all
        meta_length = meta_func("Content-Length")
        file_size = None
        if meta_length:
            file_size = int(meta_length[0])
        print("Downloading: {0} Bytes: {1}".format(url, file_size))

        file_size_dl = 0
        block_sz = 8192
        while True:
            buffer = u.read(block_sz)
            if not buffer:
                break

            file_size_dl += len(buffer)
            f.write(buffer)

            status = "{0:16}".format(file_size_dl)
            if file_size:
                status += "   [{0:6.2f}%]".format(file_size_dl * 100 / file_size)
            status += chr(13)
            print(status, end="")
        print()

    return filename


if __name__ == "__main__":  # Only run if this file is called directly
    print("Testing with 10MB download")
    url = "http://download.thinkbroadband.com/10MB.zip"
    filename = download_file(url)
    print(filename)
```
Use the wget module:
```python
import wget
wget.download("http://www.example.com/songs/mp3.mp3")
```
The wget library was written in pure Python just for this purpose. As of version 2.0, it comes packed with these urlretrieve-style features.
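For instance, the out and bar parameters (the same ones the benchmark further down uses) choose the destination and replace the default progress bar; a small sketch with placeholder values:

```python
import wget

def no_bar(*args):
    # no-op progress callback, same trick as in the benchmark below
    pass

# 'out' sets the destination directory; 'bar' swaps out the progress bar
filename = wget.download("http://www.example.com/songs/mp3.mp3",
                         out="/tmp/", bar=no_bar)
print(filename)  # wget.download returns the path of the saved file
```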
```python
# Works unchanged on Python 2 and 3 via the six compatibility library
from six.moves import urllib
urllib.request.urlretrieve("http://www.example.com/songs/mp3.mp3", "mp3.mp3")
```
I agree with Corey: urllib2 is more complete than urllib, and it is likely the module to use if you want to do more complex things, but to make the answers more complete, urllib is a simpler module if you only want the basics:
```python
import urllib
response = urllib.urlopen('http://www.example.com/sound.mp3')
mp3 = response.read()
```
will work fine. Or, if you don't want to deal with the "response" object, you can call read() directly:
```python
import urllib
mp3 = urllib.urlopen('http://www.example.com/sound.mp3').read()
```
Following is one of the most commonly used patterns for downloading files in Python.
Note: stream=True together with iter_content writes the file chunk by chunk instead of holding the whole download in memory:
```python
import requests

def download(url):
    get_response = requests.get(url, stream=True)
    file_name = url.split("/")[-1]
    with open(file_name, 'wb') as f:
        for chunk in get_response.iter_content(chunk_size=1024):
            if chunk:  # filter out keep-alive new chunks
                f.write(chunk)

download("https://example.com/example.jpg")
```
You can also get progress feedback with urlretrieve:
```python
import sys
import urllib

def report(blocknr, blocksize, size):
    current = blocknr * blocksize
    # '\r' returns to the start of the line so the percentage updates in place
    sys.stdout.write("\r{0:.2f}%".format(100.0 * current / size))

def downloadFile(url):
    print "\n", url
    fname = url.split('/')[-1]
    print fname
    urllib.urlretrieve(url, fname, report)
```
If you have wget installed, you can use parallel_sync.
pip install parallel_sync
```python
from parallel_sync import wget

urls = ['http://something.png', 'http://something.tar.gz', 'http://something.zip']
wget.download('/tmp', urls)
# or a single file:
wget.download('/tmp', urls[0], filenames='x.zip', extract=True)
```
Docs: https://pythonhosted.org/parallel_sync/pages/examples.html
This is quite powerful: it can download files in parallel, retry upon failure, and even download files onto a remote machine.
In Python 3, you can use the urllib.request and shutil modules. Both come with the standard library, so there is nothing to install with pip or pip3. Then run this code:

```python
import urllib.request
import shutil

url = "http://www.somewebsite.com/something.pdf"
output_file = "save_this_name.pdf"
with urllib.request.urlopen(url) as response, open(output_file, 'wb') as out_file:
    shutil.copyfileobj(response, out_file)
```

Note that the file is saved relative to the current working directory unless you give output_file a full path.
If speed matters to you, I did a small performance test of the urllib and wget modules; for wget, I tried once with a status bar and once without.
First, these are the results (they were similar across different runs):
```
$ python wget_test.py
urlretrive_test : starting
urlretrive_test : 6.56
==============
wget_no_bar_test : starting
wget_no_bar_test : 7.20
==============
wget_with_bar_test : starting
100% [......................................................................] 541335552 / 541335552
wget_with_bar_test : 50.49
==============
```
The way I performed the tests is by using a "profile" decorator. This is the full code:
```python
import wget
import urllib
import time
from functools import wraps

def profile(func):
    @wraps(func)
    def inner(*args):
        print func.__name__, ": starting"
        start = time.time()
        ret = func(*args)
        end = time.time()
        print func.__name__, ": {:.2f}".format(end - start)
        return ret
    return inner

url1 = 'http://host.com/500a.iso'
url2 = 'http://host.com/500b.iso'
url3 = 'http://host.com/500c.iso'

def do_nothing(*args):
    pass

@profile
def urlretrive_test(url):
    return urllib.urlretrieve(url)

@profile
def wget_no_bar_test(url):
    return wget.download(url, out='/tmp/', bar=do_nothing)

@profile
def wget_with_bar_test(url):
    return wget.download(url, out='/tmp/')

urlretrive_test(url1)
print '=============='
time.sleep(1)

wget_no_bar_test(url2)
print '=============='
time.sleep(1)

wget_with_bar_test(url3)
print '=============='
time.sleep(1)
```
The source code can be:
```python
import urllib
sock = urllib.urlopen("http://diveintopython.org/")
htmlSource = sock.read()
sock.close()
print htmlSource
```
For the sake of completeness, it is also possible to call any program that retrieves files, using the subprocess package:
```python
import subprocess
subprocess.check_output(['wget', '-O', 'example_output_file.html', 'https://example.com'])
```
In a Jupyter Notebook, you can also call such programs directly with the ! syntax:
```
!wget -O example_output_file.html https://example.com
```
I wrote the following, which works in vanilla Python 2 or Python 3.
```python
import sys

try:
    import urllib.request
    python3 = True
except ImportError:
    import urllib2
    python3 = False

def progress_callback_simple(downloaded, total):
    # '\r' rewinds the cursor so the counter updates in place
    sys.stdout.write(
        "\r" +
        (len(str(total)) - len(str(downloaded))) * " " + str(downloaded) +
        "/%d" % total +
        " [%3.2f%%]" % (100.0 * float(downloaded) / float(total))
    )
    sys.stdout.flush()

def download(srcurl, dstfilepath, progress_callback=None, block_size=8192):
    def _download_helper(response, out_file, file_size):
        if progress_callback is not None:
            progress_callback(0, file_size)
        if block_size is None:
            buffer = response.read()
            out_file.write(buffer)
            if progress_callback is not None:
                progress_callback(file_size, file_size)
        else:
            file_size_dl = 0
            while True:
                buffer = response.read(block_size)
                if not buffer:
                    break
                file_size_dl += len(buffer)
                out_file.write(buffer)
                if progress_callback is not None:
                    progress_callback(file_size_dl, file_size)

    with open(dstfilepath, "wb") as out_file:
        if python3:
            with urllib.request.urlopen(srcurl) as response:
                file_size = int(response.getheader("Content-Length"))
                _download_helper(response, out_file, file_size)
        else:
            response = urllib2.urlopen(srcurl)
            meta = response.info()
            file_size = int(meta.getheaders("Content-Length")[0])
            _download_helper(response, out_file, file_size)

import traceback
try:
    download(
        "https://geometrian.com/data/programming/projects/glLib/glLib%20Reloaded%200.5.9/0.5.9.zip",
        "output.zip",
        progress_callback_simple
    )
except:
    traceback.print_exc()
    input()
```
Notes:
- Supports a "progress bar" callback.
- Downloads a 4 MB test .zip from my website.
You can use pycurl on Python 2 and 3.
```python
import pycurl

FILE_DEST = 'pycurl.html'
FILE_SRC = 'http://pycurl.io/'

with open(FILE_DEST, 'wb') as f:
    c = pycurl.Curl()
    c.setopt(c.URL, FILE_SRC)
    c.setopt(c.WRITEDATA, f)
    c.perform()
    c.close()
```
urlretrieve and requests.get are simple, but the reality is not. I have fetched data for a couple of sites, including text and images, and the two above probably solve most of those tasks. But for a more universal solution I suggest using urlopen. Since it is included in the Python 3 standard library, your code can run on any machine that runs Python 3 without pre-installing any site-packages.
```python
import urllib.request

# Example placeholder values: set these for your own download
url = "http://www.example.com/file.zip"
filename = "file.zip"
headers = {"User-Agent": "Mozilla/5.0"}
buffer_size = 8192

url_request = urllib.request.Request(url, headers=headers)
url_connect = urllib.request.urlopen(url_request)
len_content = url_connect.length

# remember to open the file in bytes mode
with open(filename, 'wb') as f:
    while True:
        buffer = url_connect.read(buffer_size)
        if not buffer:
            break
        # f.write returns the number of bytes written as an integer
        data_wrote = f.write(buffer)

# you could probably use the with-open-as manner here as well
url_connect.close()
```
This answer provides a solution to HTTP 403 Forbidden when downloading a file over HTTP using Python. I have tried only the requests and urllib modules; other modules may provide something better, but this is the one I used to solve most of the problems.
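The linked solution itself isn't reproduced here, but a common fix for HTTP 403 is to send a browser-like User-Agent header, since many servers reject the default Python client string; a hedged sketch with requests (the header value and URL are placeholders):

```python
import requests

# Many servers answer the default "python-requests/x.y" User-Agent with 403;
# a browser-like string is the usual workaround.
headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"}
r = requests.get("http://www.example.com/file.zip", headers=headers)
r.raise_for_status()  # raises if the server still returns an error status
with open("file.zip", "wb") as f:
    f.write(r.content)
```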
This may be a little late, but I saw PabloG's code and couldn't help adding an os.system('cls') to make it look AWESOME! Check it out:
```python
import urllib2, os

url = "http://download.thinkbroadband.com/10MB.zip"

file_name = url.split('/')[-1]
u = urllib2.urlopen(url)
f = open(file_name, 'wb')
meta = u.info()
file_size = int(meta.getheaders("Content-Length")[0])
print "Downloading: %s Bytes: %s" % (file_name, file_size)
os.system('cls')

file_size_dl = 0
block_sz = 8192
while True:
    buffer = u.read(block_sz)
    if not buffer:
        break

    file_size_dl += len(buffer)
    f.write(buffer)
    status = r"%10d  [%3.2f%%]" % (file_size_dl, file_size_dl * 100. / file_size)
    status = status + chr(8)*(len(status)+1)
    print status,

f.close()
```
If running in an environment other than Windows, you will have to use something other than 'cls'. On Mac OS X and Linux it should be 'clear'.
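A portable way to pick the right command (a small sketch; os.name is 'nt' on Windows):

```python
import os

# 'cls' on Windows (os.name == 'nt'), 'clear' on Mac OS X and Linux
os.system('cls' if os.name == 'nt' else 'clear')
```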