What is the fastest way to read a big CSV file in a notebook to work with Python pandas?
I am loading a 250 MB CSV file: roughly 4 million rows and 6 columns of time-series data (at 1-minute intervals). The usual procedure is:
    import pandas as pd

    location = r'C:\Users\name\Folder_1\Folder_2\file.csv'
    df = pd.read_csv(location)
This process takes about 20 minutes!!! On a very preliminary basis, I have looked into the following options:
- Read in chunks, then stitch the chunks together (a sketch follows this list)
- HDF5
- "羽毛"
- Pickle
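A minimal sketch of the chunked approach, reusing the hypothetical file path from above (with chunksize, read_csv returns an iterator of DataFrames instead of one frame):

    import pandas as pd

    location = r'C:\Users\name\Folder_1\Folder_2\file.csv'  # hypothetical path

    # read_csv yields DataFrames of up to 1,000,000 rows each;
    # pd.concat stitches them back into a single frame.
    chunks = pd.read_csv(location, chunksize=1_000_000)
    df = pd.concat(chunks, ignore_index=True)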
I would like to know whether anyone has compared these options (or more) and whether there is a clear winner. If nobody answers, I will post my own results in the future; I just don't have the time right now.
Here are the results of my read/write comparison for the DF (shape: 4000000 x 6, size in memory: 183.1 MB, size of the uncompressed CSV: 492 MB).
Comparison of the following storage formats (CSV, CSV.gzip, Pickle, HDF5 with various compression settings):
                       read_s  write_s  size_ratio_to_CSV
    storage
    CSV                17.900    69.00              1.000
    CSV.gzip           18.900   186.00              0.047
    Pickle              0.173     1.77              0.374
    HDF_fixed           0.196     2.03              0.435
    HDF_tab             0.230     2.60              0.437
    HDF_tab_zlib_c5     0.845     5.44              0.035
    HDF_tab_zlib_c9     0.860     5.95              0.035
    HDF_tab_bzip2_c5    2.500    36.50              0.011
    HDF_tab_bzip2_c9    2.500    36.50              0.011
read_s: reading time, in seconds
write_s: writing/saving time, in seconds
size_ratio_to_CSV: file size ratio relative to the uncompressed CSV file
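The timings above come from IPython's %timeit magic (best of 3 runs). Outside IPython, a rough equivalent can be sketched with time.perf_counter; best_of here is a hypothetical helper, not part of pandas:

    import time

    def best_of(n, fn):
        """Call fn() n times and return the best wall-clock time in seconds."""
        times = []
        for _ in range(n):
            t0 = time.perf_counter()
            fn()
            times.append(time.perf_counter() - t0)
        return min(times)

    # Example: best_of(3, lambda: pd.read_pickle(fpckl))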
Raw data:
CSV:
    In [68]: %timeit df.to_csv(fcsv)
    1 loop, best of 3: 1min 9s per loop

    In [74]: %timeit pd.read_csv(fcsv)
    1 loop, best of 3: 17.9 s per loop
CSV.gzip:
    In [70]: %timeit df.to_csv(fcsv_gz, compression='gzip')
    1 loop, best of 3: 3min 6s per loop

    In [75]: %timeit pd.read_csv(fcsv_gz)
    1 loop, best of 3: 18.9 s per loop
Pickle:
    In [66]: %timeit df.to_pickle(fpckl)
    1 loop, best of 3: 1.77 s per loop

    In [72]: %timeit pd.read_pickle(fpckl)
    10 loops, best of 3: 173 ms per loop
HDF (format='fixed', the default):
    In [67]: %timeit df.to_hdf(fh5, 'df')
    1 loop, best of 3: 2.03 s per loop

    In [73]: %timeit pd.read_hdf(fh5, 'df')
    10 loops, best of 3: 196 ms per loop
HDF (format='table'):
    In [37]: %timeit df.to_hdf('D:\\temp\\.data\\37010212_tab.h5', 'df', format='t')
    1 loop, best of 3: 2.6 s per loop

    In [38]: %timeit pd.read_hdf('D:\\temp\\.data\\37010212_tab.h5', 'df')
    1 loop, best of 3: 230 ms per loop
HDF (format='table', complevel=5, complib='zlib'):
    In [40]: %timeit df.to_hdf('D:\\temp\\.data\\37010212_tab_compress_zlib5.h5', 'df', format='t', complevel=5, complib='zlib')
    1 loop, best of 3: 5.44 s per loop

    In [41]: %timeit pd.read_hdf('D:\\temp\\.data\\37010212_tab_compress_zlib5.h5', 'df')
    1 loop, best of 3: 854 ms per loop
HDF (format='table', complevel=9, complib='zlib'):
    In [36]: %timeit df.to_hdf('D:\\temp\\.data\\37010212_tab_compress_zlib9.h5', 'df', format='t', complevel=9, complib='zlib')
    1 loop, best of 3: 5.95 s per loop

    In [39]: %timeit pd.read_hdf('D:\\temp\\.data\\37010212_tab_compress_zlib9.h5', 'df')
    1 loop, best of 3: 860 ms per loop
HDF (format='table', complevel=5, complib='bzip2'):
    In [42]: %timeit df.to_hdf('D:\\temp\\.data\\37010212_tab_compress_bzip2_l5.h5', 'df', format='t', complevel=5, complib='bzip2')
    1 loop, best of 3: 36.5 s per loop

    In [43]: %timeit pd.read_hdf('D:\\temp\\.data\\37010212_tab_compress_bzip2_l5.h5', 'df')
    1 loop, best of 3: 2.5 s per loop
HDF (format='table', complevel=9, complib='bzip2'):
    In [42]: %timeit df.to_hdf('D:\\temp\\.data\\37010212_tab_compress_bzip2_l9.h5', 'df', format='t', complevel=9, complib='bzip2')
    1 loop, best of 3: 36.5 s per loop

    In [43]: %timeit pd.read_hdf('D:\\temp\\.data\\37010212_tab_compress_bzip2_l9.h5', 'df')
    1 loop, best of 3: 2.5 s per loop
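Beyond the timing difference, format='table' (unlike the default 'fixed') supports reading subsets with a where condition. A minimal sketch with a hypothetical file name (the index is always queryable; other columns only if written as data_columns):

    import pandas as pd

    # format='table' enables conditional reads on the stored frame.
    df.to_hdf('demo_tab.h5', 'df', format='table')

    # Read back only the first 1000 rows by index, without loading the rest.
    part = pd.read_hdf('demo_tab.h5', 'df', where='index < 1000')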
PS: I can't test Feather on my Windows notebook.
DF info:
    In [49]: df.shape
    Out[49]: (4000000, 6)

    In [50]: df.info()
    <class 'pandas.core.frame.DataFrame'>
    RangeIndex: 4000000 entries, 0 to 3999999
    Data columns (total 6 columns):
    a    datetime64[ns]
    b    datetime64[ns]
    c    datetime64[ns]
    d    datetime64[ns]
    e    datetime64[ns]
    f    datetime64[ns]
    dtypes: datetime64[ns](6)
    memory usage: 183.1 MB

    In [41]: df.head()
    Out[41]:
                        a                   b                   c  \
    0 1970-01-01 00:00:00 1970-01-01 00:01:00 1970-01-01 00:02:00
    1 1970-01-01 00:01:00 1970-01-01 00:02:00 1970-01-01 00:03:00
    2 1970-01-01 00:02:00 1970-01-01 00:03:00 1970-01-01 00:04:00
    3 1970-01-01 00:03:00 1970-01-01 00:04:00 1970-01-01 00:05:00
    4 1970-01-01 00:04:00 1970-01-01 00:05:00 1970-01-01 00:06:00

                        d                   e                   f
    0 1970-01-01 00:03:00 1970-01-01 00:04:00 1970-01-01 00:05:00
    1 1970-01-01 00:04:00 1970-01-01 00:05:00 1970-01-01 00:06:00
    2 1970-01-01 00:05:00 1970-01-01 00:06:00 1970-01-01 00:07:00
    3 1970-01-01 00:06:00 1970-01-01 00:07:00 1970-01-01 00:08:00
    4 1970-01-01 00:07:00 1970-01-01 00:08:00 1970-01-01 00:09:00
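For anyone who wants to reproduce the benchmark, here is a sketch (my reconstruction, inferred from the df.head() output above) that builds a DataFrame with the same shape and dtypes:

    import pandas as pd

    # 4,000,000 timestamps at 1-minute intervals, starting at the epoch.
    base = pd.date_range('1970-01-01', periods=4_000_000, freq='min')

    # Columns a..f are the same series shifted by 0..5 minutes,
    # matching the head() shown above.
    df = pd.DataFrame({col: base + pd.Timedelta(minutes=i)
                       for i, col in enumerate('abcdef')})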
File sizes:
    { .data } ? ls -lh 37010212.*                            /d/temp/.data
    -rw-r--r-- 1 Max None 492M May  3 22:21 37010212.csv
    -rw-r--r-- 1 Max None  23M May  3 22:19 37010212.csv.gz
    -rw-r--r-- 1 Max None 214M May  3 22:02 37010212.h5
    -rw-r--r-- 1 Max None 184M May  3 22:02 37010212.pickle
    -rw-r--r-- 1 Max None 215M May  4 10:39 37010212_tab.h5
    -rw-r--r-- 1 Max None 5.4M May  4 10:46 37010212_tab_compress_bzip2_l5.h5
    -rw-r--r-- 1 Max None 5.4M May  4 10:51 37010212_tab_compress_bzip2_l9.h5
    -rw-r--r-- 1 Max None  17M May  4 10:42 37010212_tab_compress_zlib5.h5
    -rw-r--r-- 1 Max None  17M May  4 10:36 37010212_tab_compress_zlib9.h5
Conclusion: as the table shows, Pickle and uncompressed HDF5 are by far the fastest to read and write, while zlib-compressed HDF5 trades a few seconds for files around 3.5% of the CSV's size.
PS: If you can build/use Feather (feather-format), it should be even faster compared to HDF5 and Pickle.
PPS: Don't use Pickle for big data frames, as you may end up with a "SystemError: error return without exception set" error message. It is also described here and here.
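Given that warning, a safer pattern for very large frames is the HDF5 round-trip already benchmarked above (file name hypothetical):

    # Save and reload via HDF5 instead of Pickle for very large frames.
    df.to_hdf('data.h5', 'df')           # fixed format, the default
    df = pd.read_hdf('data.h5', 'df')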