Converting CSV file to HDF5 using pandas
When I convert a csv file to an hdf5 file using pandas, the resulting file is very large. For example, a 170 MB test csv file (23 columns, 1.3 million rows) produces an hdf5 file of 2 GB. However, if I bypass pandas and write the hdf5 file directly (using pytables), it is only 20 MB. In the following code (which does the conversion in pandas), the values of the object columns in the data frame are explicitly cast to string objects (to prevent pickling):
# Open the csv file as pandas data frame
data = pd.read_csv(csvfilepath, sep=delimiter, low_memory=False)

# Write the resulting data frame to the hdf5 file
data.to_hdf(hdf5_file_path, table_name, format='table', complevel=9, complib='lzo')
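The object-to-string cast mentioned above is not shown in the snippet; a minimal sketch of what such a cast might look like (the variable names follow the snippet, but the loop itself is an assumption, not the asker's actual code):

# Assumed approach: cast every object column to str so PyTables stores
# fixed-width strings rather than pickled Python objects
for col in data.columns:
    if data[col].dtype == object:
        data[col] = data[col].astype(str)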
This is the hdf5 file on inspection (using vitables):
&lt;class 'pandas.core.frame.DataFrame'&gt;
Int64Index: 1303331 entries, 0 to 1303330
Columns: 23 entries, _PlanId to ACTIVITY_Gratis
dtypes: float64(1), int64(5), object(17)
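For comparison, the direct PyTables write alluded to above (the 20 MB file) might look roughly like this; a minimal sketch with a made-up two-column row description, not the asker's actual 23-column code:

import tables

# Hypothetical fixed-width row description; the real file has 23 columns
class PlanRow(tables.IsDescription):
    plan_id = tables.Int64Col()
    activity = tables.StringCol(16)  # exactly 16 bytes reserved per string

with tables.open_file('direct.h5', mode='w') as h5:
    filters = tables.Filters(complevel=9, complib='lzo')
    table = h5.create_table('/', 'data', PlanRow, filters=filters)
    row = table.row
    for i in range(1000):
        row['plan_id'] = i
        row['activity'] = 'act%05d' % i
        row.append()
    table.flush()

Because the description fixes the width of every string column up front, no space is wasted on oversized string slots.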
Here is an informal comparison of times/sizes for various IO methods.
Using pandas 0.13.1 on 64-bit Linux.
Setup
In [3]: N = 1000000

In [4]: df = DataFrame(dict([ ("int{0}".format(i),np.random.randint(0,10,size=N)) for i in range(5) ]))

In [5]: df['float'] = np.random.randn(N)

In [6]: from random import randrange

In [8]: for i in range(10):
   ...:     df["object_1_{0}".format(i)] = ['%08x'%randrange(16**8) for _ in range(N)]
   ...:

In [9]: for i in range(7):
   ...:     df["object_2_{0}".format(i)] = ['%15x'%randrange(16**15) for _ in range(N)]
   ...:

In [11]: df.info()
&lt;class 'pandas.core.frame.DataFrame'&gt;
Int64Index: 1000000 entries, 0 to 999999
Data columns (total 23 columns):
int0          1000000 non-null int64
int1          1000000 non-null int64
int2          1000000 non-null int64
int3          1000000 non-null int64
int4          1000000 non-null int64
float         1000000 non-null float64
object_1_0    1000000 non-null object
object_1_1    1000000 non-null object
object_1_2    1000000 non-null object
object_1_3    1000000 non-null object
object_1_4    1000000 non-null object
object_1_5    1000000 non-null object
object_1_6    1000000 non-null object
object_1_7    1000000 non-null object
object_1_8    1000000 non-null object
object_1_9    1000000 non-null object
object_2_0    1000000 non-null object
object_2_1    1000000 non-null object
object_2_2    1000000 non-null object
object_2_3    1000000 non-null object
object_2_4    1000000 non-null object
object_2_5    1000000 non-null object
object_2_6    1000000 non-null object
dtypes: float64(1), int64(5), object(17)
Saving with various methods
In [12]: df.to_hdf('test_fixed.h5','data',format='fixed')

In [13]: df.to_hdf('test_table_no_dc.h5','data',format='table')

In [14]: df.to_hdf('test_table_dc.h5','data',format='table',data_columns=True)

In [15]: df.to_hdf('test_fixed_compressed.h5','data',format='fixed',complib='blosc',complevel=9)

In [16]: !ls -ltr *.h5
-rw-rw-r-- 1 jreback users 361093304 Apr 28 09:20 test_fixed.h5
-rw-rw-r-- 1 jreback users 311475690 Apr 28 09:21 test_table_no_dc.h5
-rw-rw-r-- 1 jreback users 351316525 Apr 28 09:22 test_table_dc.h5
-rw-rw-r-- 1 jreback users 317467870 Apr 28  2014 test_fixed_compressed.h5
The size on disk will be a function of the string size chosen for each column; if you use no data_columns, it is the longest size of ANY string in the frame. So writing with data_columns may account for the sizes here (offset by the fact that you have more columns, so more space is needed per column). You probably want to specify min_itemsize.
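As a sketch of how min_itemsize controls that string width explicitly, the column name and width below are illustrative values, not taken from the question:

import pandas as pd

df = pd.DataFrame({'A': ['foo', 'bar', 'bah'], 'B': [1.0, 2.0, 3.0]})

# Reserve 20 bytes for strings in column A, rather than letting the first
# write fix the width at the longest string seen so far
df.to_hdf('sized.h5', 'data', format='table', data_columns=True,
          min_itemsize={'A': 20})

Reserving a width up front also allows later appends with longer strings, which would otherwise fail once the column width has been fixed.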
Here is a sample of the on-disk structure:
In [8]: DataFrame(dict(A = ['foo','bar','bah'], B = [1,2,3], C = [1.0,2.0,3.0], D=[4.0,5.0,6.0])).to_hdf('test.h5','data',mode='w',format='table')

In [9]: !ptdump -avd test.h5
/ (RootGroup) ''
  /._v_attrs (AttributeSet), 4 attributes:
   [CLASS := 'GROUP',
    PYTABLES_FORMAT_VERSION := '2.1',
    TITLE := '',
    VERSION := '1.0']
/data (Group) ''
  /data._v_attrs (AttributeSet), 14 attributes:
   [CLASS := 'GROUP',
    TITLE := '',
    VERSION := '1.0',
    data_columns := [],
    encoding := None,
    index_cols := [(0, 'index')],
    info := {1: {'type': 'Index', 'names': [None]}, 'index': {}},
    levels := 1,
    nan_rep := 'nan',
    non_index_axes := [(1, ['A', 'B', 'C', 'D'])],
    pandas_type := 'frame_table',
    pandas_version := '0.10.1',
    table_type := 'appendable_frame',
    values_cols := ['values_block_0', 'values_block_1', 'values_block_2']]
/data/table (Table(3,)) ''
  description := {
  "index": Int64Col(shape=(), dflt=0, pos=0),
  "values_block_0": Float64Col(shape=(2,), dflt=0.0, pos=1),
  "values_block_1": Int64Col(shape=(1,), dflt=0, pos=2),
  "values_block_2": StringCol(itemsize=3, shape=(1,), dflt='', pos=3)}
  byteorder := 'little'
  chunkshape := (1872,)
  autoindex := True
  colindexes := {
    "index": Index(6, medium, shuffle, zlib(1)).is_csi=False}
  /data/table._v_attrs (AttributeSet), 19 attributes:
   [CLASS := 'TABLE',
    FIELD_0_FILL := 0,
    FIELD_0_NAME := 'index',
    FIELD_1_FILL := 0.0,
    FIELD_1_NAME := 'values_block_0',
    FIELD_2_FILL := 0,
    FIELD_2_NAME := 'values_block_1',
    FIELD_3_FILL := '',
    FIELD_3_NAME := 'values_block_2',
    NROWS := 3,
    TITLE := '',
    VERSION := '2.7',
    index_kind := 'integer',
    values_block_0_dtype := 'float64',
    values_block_0_kind := ['C', 'D'],
    values_block_1_dtype := 'int64',
    values_block_1_kind := ['B'],
    values_block_2_dtype := 'string24',
    values_block_2_kind := ['A']]
  Data dump:
[0] (0, [1.0, 4.0], [1], ['foo'])
[1] (1, [2.0, 5.0], [2], ['bar'])
[2] (2, [3.0, 6.0], [3], ['bah'])
Dtypes are grouped into blocks (they are kept separate if you have data_columns). These are just printed this way; they are stored as arrays.
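To see the columns stored individually rather than in blocks, the same frame can be written with data_columns=True and dumped again (a sketch; the file name is arbitrary):

from pandas import DataFrame

df = DataFrame(dict(A=['foo','bar','bah'], B=[1,2,3], C=[1.0,2.0,3.0], D=[4.0,5.0,6.0]))
df.to_hdf('test_dc.h5', 'data', mode='w', format='table', data_columns=True)
# ptdump -avd test_dc.h5 now shows A, B, C and D as separate queryable
# columns in the table description instead of values_block_* groups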