How to protect myself from a gzip or bzip2 bomb?
This is related to the question about zip bombs, but with gzip or bzip2 compression in mind, e.g. a web service accepting .tar.gz files.

Python offers a handy tarfile module, but it does not seem to provide any protection against compression bombs.

In Python code using the tarfile module, what would be the most elegant way to detect zip bombs, preferably without duplicating too much logic (e.g. the transparent decompression support) from the tarfile module?

And, just to make it a bit less simple: no real files are involved; the input is a file-like object (provided by the web framework, representing the file a user uploaded).
You could use the `resource` module to limit the resources available to your process and its children.

If you need to decompress in memory, then you could set `resource.RLIMIT_AS` (or `RLIMIT_DATA`, `RLIMIT_STACK`), e.g. using a context manager to restore the limit to its previous value:
```python
import contextlib
import resource

@contextlib.contextmanager
def limit(limit, type=resource.RLIMIT_AS):
    soft_limit, hard_limit = resource.getrlimit(type)
    resource.setrlimit(type, (limit, hard_limit))  # set soft limit
    try:
        yield
    finally:
        resource.setrlimit(type, (soft_limit, hard_limit))  # restore

with limit(1 << 30):  # 1GB
    pass  # do the thing that might try to consume all memory
```
If the limit is reached, `MemoryError` is raised.
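For example, a minimal sketch of applying this to the question's upload scenario, assuming the `limit()` context manager above and a hypothetical `uploaded` file-like object whose first entry is a regular file:

```python
import tarfile

# `uploaded` stands for the file-like object supplied by the web framework
try:
    with limit(1 << 30):  # 1GB address-space cap while we touch the data
        tar = tarfile.open(fileobj=uploaded, mode="r|gz")  # streaming, no seeks
        member = tar.next()                     # first entry, may be None
        if member is not None:
            data = tar.extractfile(member).read()  # in-memory decompression
except MemoryError:
    print "rejected: archive needs more memory than allowed"
```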
This will determine the uncompressed size of a gzip stream while using limited memory:
```python
#!/usr/bin/python
import sys
import zlib

f = open(sys.argv[1], "rb")
z = zlib.decompressobj(15 + 16)     # 15 window bits + 16 => expect a gzip header
total = 0
while True:
    buf = z.unconsumed_tail         # compressed input left over from the last call
    if buf == "":
        buf = f.read(1024)          # read compressed data in small pieces
        if buf == "":
            break
    got = z.decompress(buf, 4096)   # never produce more than 4096 bytes at once
    if got == "":
        break
    total += len(got)
print total
if z.unused_data != "" or f.read(1024) != "":
    print "warning: more input after end of gzip stream"
```
When extracted, it will return a slight overestimate of the space required for all of the files in the tar file; the length includes those files as well as the tar directory information.

The gzip.py code does not control the amount of data decompressed, except by virtue of the size of the input data. In gzip.py, it reads 1024 compressed bytes at a time. So you can use gzip.py if you are fine with up to about 1056768 bytes of memory usage for the uncompressed data (1032 * 1024, where 1032:1 is the maximum compression ratio of deflate). The solution here instead uses `zlib.decompressobj` and the second argument of its `decompress()` method, which limits the amount of uncompressed data returned per call; gzip.py does not.
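Built on the same technique, here is a minimal sketch (the `check_gzip_size` name and `maxsize` parameter are my additions) that rejects a stream as soon as the running total passes a cap, rather than just measuring it:

```python
import zlib

def check_gzip_size(fileobj, maxsize):
    """Return the uncompressed size of a gzip stream, raising ValueError as
    soon as it exceeds maxsize.  Memory stays bounded because decompress()
    is never asked to produce more than 4096 bytes at a time."""
    z = zlib.decompressobj(15 + 16)     # accept the gzip wrapper
    total = 0
    while True:
        buf = z.unconsumed_tail
        if buf == "":
            buf = fileobj.read(1024)
            if buf == "":
                break
        got = z.decompress(buf, 4096)   # second argument bounds the output
        if got == "":
            break
        total += len(got)
        if total > maxsize:
            raise ValueError("gzip bomb: more than %d bytes" % maxsize)
    return total
```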
This will accurately determine the total size of the extracted tar entries by decoding the tar format:
```python
#!/usr/bin/python
import sys
import zlib

def decompn(f, z, n):
    """Return n uncompressed bytes, or fewer if at the end of the compressed
    stream.  This only decompresses as much as necessary, in order to avoid
    excessive memory usage for highly compressed input.
    """
    blk = ""
    while len(blk) < n:
        buf = z.unconsumed_tail
        if buf == "":
            buf = f.read(1024)
        got = z.decompress(buf, n - len(blk))
        blk += got
        if got == "":
            break
    return blk

f = open(sys.argv[1], "rb")
z = zlib.decompressobj(15 + 16)
total = 0
left = 0
while True:
    blk = decompn(f, z, 512)           # tar works in 512-byte blocks
    if len(blk) < 512:
        break
    if left == 0:
        if blk == "\0" * 512:          # all-zero block: end-of-archive marker
            continue
        if blk[156] in ["1", "2", "3", "4", "5", "6"]:
            continue                   # links and devices carry no data
        if ord(blk[124]) == 0x80:      # base-256 (binary) size encoding
            size = 0
            for i in range(125, 136):
                size <<= 8
                size += ord(blk[i])
        else:                          # traditional octal size encoding
            size = int(blk[124:136].split()[0].split("\0")[0], 8)
        if blk[156] not in ["x", "g", "X", "L", "K"]:
            total += size              # skip extended-header entry types
        left = (size + 511) // 512     # number of data blocks that follow
    else:
        left -= 1
print total
if blk != "":
    print "warning: partial final block"
if left != 0:
    print "warning: tar file ended in the middle of an entry"
if z.unused_data != "" or f.read(1024) != "":
    print "warning: more input after end of gzip stream"
```
You could use a variant of this to scan the tar file for bombs. This has the advantage of finding a large size in the header information before you even have to decompress that data.
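The standard-library tarfile module can express the same header-first check when opened in streaming mode; the following is a sketch of mine under that assumption, not part of the answer above. Member sizes come from the 512-byte headers, and the data skipped while advancing is decompressed only in bounded chunks:

```python
import tarfile

def total_extracted_size(fileobj, maxsize):
    """Sum the sizes declared in the tar headers of a streamed .tar.gz,
    failing fast once the running total exceeds maxsize."""
    total = 0
    tar = tarfile.open(fileobj=fileobj, mode="r|gz")  # streaming, no seeks needed
    for member in tar:                # advancing decompresses in small chunks
        total += member.size          # size field from the header, data unread
        if total > maxsize:
            raise ValueError("archive would expand beyond %d bytes" % maxsize)
    tar.close()
    return total
```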
As for .tar.bz2 archives, the Python bz2 library (at least as of 3.3) is unavoidably unsafe against bz2 bombs consuming too much memory.

I looked at bz2module.c in 3.3 to see if there was an undocumented way to use it to avoid this problem. There is no way around it: the `decompress` function in there just keeps growing the result buffer until it can decompress all of the provided input. bz2module.c needs to be fixed.
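For completeness: this was addressed in later CPython releases. From Python 3.5 on, `BZ2Decompressor.decompress()` accepts a `max_length` argument that caps the output per call, mirroring the zlib technique above. A sketch for those versions (the function name is mine):

```python
import bz2

def bz2_uncompressed_size(fileobj, maxsize, chunk=16 * 1024):
    """Python 3.5+: measure a bz2 stream with bounded memory, raising
    ValueError once the output exceeds maxsize."""
    d = bz2.BZ2Decompressor()
    total = 0
    while not d.eof:
        # feed more compressed input only when the decompressor asks for it
        raw = fileobj.read(chunk) if d.needs_input else b""
        out = d.decompress(raw, max_length=chunk)  # output capped per call
        total += len(out)
        if total > maxsize:
            raise ValueError("bzip2 bomb: more than %d bytes" % maxsize)
        if not raw and not out and d.needs_input:
            break  # truncated stream
    return total
```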
If you are developing for Linux, you can run the decompression in a separate process and use ulimit to limit its memory usage.
```python
import subprocess

# shell=True so the ulimit builtin applies to the shell running the child
subprocess.Popen("ulimit -v %d; ./decompression_script.py %s" % (LIMIT, FILE),
                 shell=True)
```
Keep in mind that decompression_script.py should decompress the whole file in memory before writing to disk.
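If you would rather avoid the intermediate shell, the same limit can be set in the child directly; a sketch assuming the same `LIMIT` and `FILE` placeholders (`ulimit -v` corresponds to `RLIMIT_AS`, but counts kilobytes where `setrlimit` counts bytes):

```python
import resource
import subprocess

def limit_virtual_memory():
    # runs in the child between fork() and exec(); POSIX only
    resource.setrlimit(resource.RLIMIT_AS, (LIMIT, LIMIT))

subprocess.Popen(["./decompression_script.py", FILE],
                 preexec_fn=limit_virtual_memory)
```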
I guess the answer is: there is no simple, ready-made solution. Here is what I use now:
```python
class SafeUncompressor(object):
    """Small proxy class that enables external file object
    support for uncompressed, bzip2 and gzip files. Works transparently, and
    supports a maximum size to avoid zipbombs.
    """
    blocksize = 16 * 1024

    class FileTooLarge(Exception):
        pass

    def __init__(self, fileobj, maxsize=10*1024*1024):
        self.fileobj = fileobj
        self.name = getattr(self.fileobj, "name", None)
        self.maxsize = maxsize
        self.init()

    def init(self):
        import bz2
        import gzip
        self.pos = 0
        self.fileobj.seek(0)
        self.buf = ""
        self.format = "plain"

        magic = self.fileobj.read(2)
        if magic == '\037\213':
            self.format = "gzip"
            self.gzipobj = gzip.GzipFile(fileobj=self.fileobj, mode='r')
        elif magic == 'BZ':
            raise IOError("bzip2 support in SafeUncompressor disabled, "
                          "as self.bz2obj.decompress is not safe")
            # unreachable while bzip2 support is disabled:
            self.format = "bz2"
            self.bz2obj = bz2.BZ2Decompressor()
        self.fileobj.seek(0)

    def read(self, size):
        b = [self.buf]
        x = len(self.buf)
        while x < size:
            if self.format == 'gzip':
                data = self.gzipobj.read(self.blocksize)
                if not data:
                    break
            elif self.format == 'bz2':
                raw = self.fileobj.read(self.blocksize)
                if not raw:
                    break
                # this can already bomb here, to some extent,
                # so bzip2 support is disabled until resolved.
                # Also monitor
                # http://stackoverflow.com/questions/13622706/how-to-protect-myself-from-a-gzip-or-bzip2-bomb
                # for ideas
                data = self.bz2obj.decompress(raw)
            else:
                data = self.fileobj.read(self.blocksize)
                if not data:
                    break
            b.append(data)
            x += len(data)

            if self.pos + x > self.maxsize:
                self.buf = ""
                self.pos = 0
                raise SafeUncompressor.FileTooLarge("Compressed file too large")
        self.buf = "".join(b)

        buf = self.buf[:size]
        self.buf = self.buf[size:]
        self.pos += len(buf)
        return buf

    def seek(self, pos, whence=0):
        if whence != 0:
            raise IOError("SafeUncompressor only supports whence=0")
        if pos < self.pos:
            self.init()
        self.read(pos - self.pos)

    def tell(self):
        return self.pos
```
It does not really work for bzip2, so that part of the code is disabled. The reason is that `bz2.BZ2Decompressor.decompress` can already produce an unwantedly huge chunk of data from a small piece of compressed input.
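Hooking it up to the tarfile module from the question could look like this (a sketch; `uploaded` stands for the file object handed over by the web framework, and the 50MB limit is arbitrary):

```python
import tarfile

try:
    safe = SafeUncompressor(uploaded, maxsize=50 * 1024 * 1024)
    tar = tarfile.open(fileobj=safe, mode="r:")  # the proxy already ungzips
    for member in tar.getmembers():
        print member.name, member.size
except SafeUncompressor.FileTooLarge:
    print "rejected: upload decompresses beyond the 50MB limit"
```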
I also needed to handle compression bombs in uploaded archives.

I do this by creating a fixed-size tmpfs and decompressing into it. If the extracted data is too large, the tmpfs runs out of space and gives an error.

Here are the Linux commands to create a 200M tmpfs to decompress into:
```bash
sudo mkdir -p /mnt/ziptmpfs
echo 'tmpfs /mnt/ziptmpfs tmpfs rw,nodev,nosuid,size=200M 0 0' | sudo tee -a /etc/fstab
sudo mount /mnt/ziptmpfs   # the fstab entry only registers the mount; this activates it
```
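Back in Python, the error a full tmpfs produces can be turned into a clean rejection; a sketch of mine (helper name mine, mount point as above):

```python
import errno
import tarfile

def extract_to_tmpfs(uploaded, dest="/mnt/ziptmpfs"):
    tar = tarfile.open(fileobj=uploaded, mode="r|gz")
    try:
        # note: extractall on untrusted archives also needs path-traversal checks
        tar.extractall(dest)  # a bomb overflows the 200M tmpfs, not the disk
    except IOError as e:
        if e.errno == errno.ENOSPC:
            raise ValueError("rejected: archive overflowed the tmpfs quota")
        raise
    finally:
        tar.close()
```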