I want to put a Pandas DataFrame as a whole into a table in an MS SQL Server database. BULK INSERT is not allowed for ordinary users like me. I am using pyodbc to connect to my database, and I am using pandas 0.13.1. I read somewhere that the to_sql method is only available from version 0.14 onwards, so it is not available for my pandas DataFrame. Therefore I used an iterator. My DataFrame has 2 columns: Col1 and Col2.
My code is working and looks like this:
from pyodbc import connect
import pandas as pd

df = pd.read_csv('PathToMyCSVfile', sep=';', header=0)

cnxn = connect(DRIVER = '{SQL Server}', SERVER = 'MyServer', DATABASE = 'MyDatabase')
cursor = cnxn.cursor()

for index, row in df.iterrows():
    cursor.execute("INSERT INTO MySchema.MyTable VALUES (?,?)", df['Col1'][index], df['Col2'][index])
    cnxn.commit()
As said, the code above works, but it is slow... What can I do to speed things up?
The bottleneck you face is that your code sends an INSERT statement for every single row in the DataFrame. That is, for a sample data file
id;txt
1;alpha
2;bravo
3;charlie
4;delta
5;echo
6;foxtrot
7;golf
you would need seven (7) round-trips to the server to send the equivalent of
INSERT INTO MySchema.MyTable VALUES (1,'alpha')
INSERT INTO MySchema.MyTable VALUES (2,'bravo')
INSERT INTO MySchema.MyTable VALUES (3,'charlie')
...
INSERT INTO MySchema.MyTable VALUES (7,'golf')
You can speed that up significantly by using a table value constructor to do the same thing in a single round-trip:
INSERT INTO MySchema.MyTable VALUES (1,'alpha'),(2,'bravo'),(3,'charlie'), ... ,(7,'golf')
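For example, sending the first three rows in one parameterized statement through pyodbc could look like the following sketch (MyServer, MyDatabase and MySchema.MyTable are just the placeholder names from your question):

from pyodbc import connect

cnxn = connect(DRIVER='{SQL Server}', SERVER='MyServer', DATABASE='MyDatabase')
crsr = cnxn.cursor()

# three rows in one round-trip: build '(?,?),(?,?),(?,?)' and flatten the
# row values into a single parameter list that matches the placeholders
rows = [(1, 'alpha'), (2, 'bravo'), (3, 'charlie')]
placeholders = ','.join('(?,?)' for _ in rows)
params = [value for row in rows for value in row]
crsr.execute("INSERT INTO MySchema.MyTable VALUES " + placeholders, params)
cnxn.commit()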
The full code below generalizes that idea and builds the batches automatically. When I tested it with a file containing 5000 rows, running it with rows_per_batch=1000 (the maximum) was about 100 times faster than running it with rows_per_batch=1 (the equivalent of your current approach).
import numpy
import pandas as pd
import pyodbc
import time


class MyDfInsert:
    def __init__(self, cnxn, sql_stub, data_frame, rows_per_batch=1000):
        # NB: hard limit is 1000 for SQL Server table value constructor
        self._rows_per_batch = 1000 if rows_per_batch > 1000 else rows_per_batch

        self._cnxn = cnxn
        self._sql_stub = sql_stub
        self._num_columns = None
        self._row_placeholders = None
        self._num_rows_previous = None
        self._all_placeholders = None
        self._sql = None

        row_count = 0
        param_list = list()
        for df_row in data_frame.itertuples():
            param_list.append(tuple(df_row[1:]))  # omit zero-based row index
            row_count += 1
            if row_count >= self._rows_per_batch:
                self._send_insert(param_list)  # send a full batch
                row_count = 0
                param_list = list()
        self._send_insert(param_list)  # send any remaining rows

    def _send_insert(self, param_list):
        if len(param_list) > 0:
            if self._num_columns is None:
                # print('[DEBUG] (building items that depend on the number of columns ...)')
                # this only happens once
                self._num_columns = len(param_list[0])
                self._row_placeholders = ','.join(['?' for x in range(self._num_columns)])
                # e.g. '?,?'
            num_rows = len(param_list)
            if num_rows != self._num_rows_previous:
                # print('[DEBUG] (building items that depend on the number of rows ...)')
                self._all_placeholders = '({})'.format('),('.join([self._row_placeholders for x in range(num_rows)]))
                # e.g. '(?,?),(?,?),(?,?)'
                self._sql = f'{self._sql_stub} VALUES {self._all_placeholders}'
                self._num_rows_previous = num_rows
            params = [int(element) if isinstance(element, numpy.int64) else element
                      for row_tup in param_list for element in row_tup]
            # print('[DEBUG] sql: ' + repr(self._sql))
            # print('[DEBUG] params: ' + repr(params))
            crsr = self._cnxn.cursor()
            crsr.execute(self._sql, params)


if __name__ == '__main__':
    conn_str = (
        'DRIVER=ODBC Driver 11 for SQL Server;'
        'SERVER=192.168.1.134,49242;'
        'Trusted_Connection=yes;'
    )
    cnxn = pyodbc.connect(conn_str, autocommit=True)
    crsr = cnxn.cursor()
    crsr.execute("CREATE TABLE #tmp (id INT PRIMARY KEY, txt NVARCHAR(50))")

    df = pd.read_csv(r'C:\Users\Gord\Desktop\Query1.txt', sep=';', header=0)

    t0 = time.time()

    MyDfInsert(cnxn, "INSERT INTO #tmp (id, txt)", df, rows_per_batch=1000)

    print()
    print(f'Inserts completed in {time.time() - t0:.2f} seconds.')

    cnxn.close()
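For your own table, assuming the imports and the MyDfInsert class above and the two-column DataFrame from your question, the call would be along these lines (MyServer, MyDatabase and MySchema.MyTable are again just the placeholder names from your question):

# sketch only: connection details are the placeholders from the question;
# autocommit=True because MyDfInsert does not call commit() itself
cnxn = pyodbc.connect(DRIVER='{SQL Server}', SERVER='MyServer', DATABASE='MyDatabase', autocommit=True)

df = pd.read_csv('PathToMyCSVfile', sep=';', header=0)
MyDfInsert(cnxn, "INSERT INTO MySchema.MyTable (Col1, Col2)", df, rows_per_batch=1000)

cnxn.close()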