Old code doesn't work with new, large CSV files



I have legacy code written for pandas.

Now the new data has become very large (in CSV format), and read_csv struggles with the new files (file size is ~7-8 GB and will keep growing).

Could you suggest the best way to keep the old code unchanged while still working with the large CSV files? I have considered switching to Spark, but it seems I would have to change a lot of code.

Thanks a lot.

Have you tried reading the file in chunks? Defining the column dtypes in advance may also help performance:

import pandas as pd

chunksize = 1000000  # number of rows per chunk
# filepath: path to your CSV; dtypes: dict mapping column names to their types
chunks = pd.read_csv(filepath, dtype=dtypes, chunksize=chunksize)
df = pd.concat((chunk for chunk in chunks), ignore_index=True)
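If even the concatenated dataframe does not fit in memory, you can also process each chunk on its own and only keep the rows you need. The sketch below is just an illustration: the column names, the dtype mapping, and the filter condition are made-up placeholders you would replace with your own.

import pandas as pd

# hypothetical dtype mapping -- adjust to your actual columns
dtypes = {"user_id": "uint32", "amount": "float32", "category": "category"}

filtered_parts = []
for chunk in pd.read_csv(filepath, dtype=dtypes, chunksize=1_000_000):
    # keep only the rows you actually need from each chunk (placeholder condition)
    filtered_parts.append(chunk[chunk["amount"] > 0])

df = pd.concat(filtered_parts, ignore_index=True)

This way only one raw chunk plus the already-filtered pieces are in memory at any time.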

Edit: Another trick is to reduce the dataframe's memory usage after loading. This is from a Kaggle kernel:

import numpy as np

def reduce_mem_usage(props):
    start_mem_usg = props.memory_usage().sum() / 1024**2
    print("Memory usage of properties dataframe is :", start_mem_usg, " MB")
    NAlist = []  # Keeps track of columns that have missing values filled in.
    for col in props.columns:
        if props[col].dtype != object:  # Exclude strings
            # Print current column type
            print("******************************")
            print("Column: ", col)
            print("dtype before: ", props[col].dtype)
            # make variables for Int, max and min
            IsInt = False
            mx = props[col].max()
            mn = props[col].min()
            # Integer does not support NA, therefore, NA needs to be filled
            if not np.isfinite(props[col]).all():
                NAlist.append(col)
                props[col].fillna(mn - 1, inplace=True)
            # test if column can be converted to an integer
            asint = props[col].fillna(0).astype(np.int64)
            result = (props[col] - asint)
            result = result.sum()
            if result > -0.01 and result < 0.01:
                IsInt = True

            # Make Integer/unsigned Integer datatypes
            if IsInt:
                if mn >= 0:
                    if mx < 255:
                        props[col] = props[col].astype(np.uint8)
                    elif mx < 65535:
                        props[col] = props[col].astype(np.uint16)
                    elif mx < 4294967295:
                        props[col] = props[col].astype(np.uint32)
                    else:
                        props[col] = props[col].astype(np.uint64)
                else:
                    if mn > np.iinfo(np.int8).min and mx < np.iinfo(np.int8).max:
                        props[col] = props[col].astype(np.int8)
                    elif mn > np.iinfo(np.int16).min and mx < np.iinfo(np.int16).max:
                        props[col] = props[col].astype(np.int16)
                    elif mn > np.iinfo(np.int32).min and mx < np.iinfo(np.int32).max:
                        props[col] = props[col].astype(np.int32)
                    elif mn > np.iinfo(np.int64).min and mx < np.iinfo(np.int64).max:
                        props[col] = props[col].astype(np.int64)
            # Make float datatypes 32 bit
            else:
                props[col] = props[col].astype(np.float32)
            # Print new column type
            print("dtype after: ", props[col].dtype)
            print("******************************")
    # Print final result
    print("___MEMORY USAGE AFTER COMPLETION:___")
    mem_usg = props.memory_usage().sum() / 1024**2
    print("Memory usage is: ", mem_usg, " MB")
    print("This is ", 100 * mem_usg / start_mem_usg, "% of the initial size")
    return props, NAlist

You can simply apply the function above to your dataframe; it helps especially when the data has many numeric columns.
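One way to combine the two tips, sketched below under the assumption that a single chunk fits in memory (filepath, dtypes, and the chunk size are placeholders as before), is to downcast each chunk as it is read and only keep the already-shrunk pieces:

reduced_parts = []
for chunk in pd.read_csv(filepath, dtype=dtypes, chunksize=1_000_000):
    # downcast the numeric columns of this chunk before keeping it around
    chunk, _ = reduce_mem_usage(chunk)
    reduced_parts.append(chunk)

df = pd.concat(reduced_parts, ignore_index=True)

Note that pd.concat may upcast a column again if different chunks ended up with different downcast dtypes, and the NA fill value (min - 1) can differ between chunks, so if you need strictly consistent dtypes you may prefer to run reduce_mem_usage once on the final dataframe instead.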
