How can I write a DataFrame to a Hive table using a pyodbc connection? I get a programming error when writing. Is there any other way to write data from local to Hive?
Error:
Error Traceback (most recent call last)
C:\ProgramData\Anaconda3\lib\site-packages\pandas\io\sql.py in execute(self, *args, **kwargs)
1430 else:
-> 1431 cur.execute(*args)
1432 return cur
Error: ('HY000', "[HY000] [Cloudera][ImpalaODBC] (110) Error while executing a query in Impala: [HY000] : ParseException: Syntax error in line 1:\n...ERE type='table' AND name=?;\n                             ^\nEncountered: Unexpected character\nExpected: CASE, CAST, DEFAULT, EXISTS, FALSE, IF, INTERVAL, LEFT, NOT, NULL, REPLACE, RIGHT, TRUNCATE, TRUE, IDENTIFIER\n\nCAUSED BY: Exception: Syntax error\n (110) (SQLPrepare)")
During handling of the above exception, another exception occurred:
DatabaseError Traceback (most recent call last)
<ipython-input-8-9f82c88c3a27> in <module>
1 import pyodbc
2 with pyodbc.connect("DSN=*****", autocommit=True) as conn:
----> 3 df.to_sql(name='Xyz', con=conn, schema='fgh',if_exists='append',index=False)
You are passing a raw (DBAPI) pyodbc.Connection to pandas' to_sql. As the to_sql documentation states, such a Connection object is assumed to be a sqlite3 connection, so to_sql sends the query

SELECT name FROM sqlite_master WHERE type='table' AND name=?;

to probe the database. That does not work against Hive (or against any database other than SQLite).

For any other database, you need to pass a SQLAlchemy Engine or Connection object to to_sql as the con= argument.
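For example, here is a minimal sketch assuming the PyHive package (which registers a hive:// SQLAlchemy dialect) is installed and your HiveServer2 endpoint is reachable over Thrift; the username, host, and port are placeholders you would replace with your own:

import pandas as pd
from sqlalchemy import create_engine

# Assumes: pip install 'pyhive[hive]' sqlalchemy
# hive://<user>@<host>:<port>/<database> -- all placeholders here
engine = create_engine("hive://username@host:10000/fgh")

# Pass the SQLAlchemy engine instead of the raw pyodbc connection
df.to_sql(name='Xyz', con=engine, if_exists='append', index=False)

With an Engine, pandas lets SQLAlchemy generate dialect-appropriate SQL instead of assuming SQLite.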
I used this article as a reference for writing a df into Hive: https://learn.microsoft.com/en-us/sql/machine-learning/data-exploration/python-dataframe-sql-server?view=sql-server-ver15
Assuming your df has two columns, you can use the code below. You need to make sure the table already exists in the database.
import pyodbc

conn = pyodbc.connect("DSN=*****", autocommit=True)  # Create a connection to the database
cursor = conn.cursor()                                # Create a cursor on that connection

for index, row in df.iterrows():
    # Build an INSERT statement from the two column values of this row
    writing_query = """
        INSERT INTO table_name
        VALUES ('{}', '{}')
    """.format(row[0], row[1])
    cursor.execute(writing_query)  # Execute the INSERT for this row

conn.close()  # Close the connection when done
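Row-by-row INSERTs can be slow against Impala/Hive. As a hedged alternative (assuming both columns are strings and the DataFrame is small enough to fit in a single statement, since table_name and the DSN are still placeholders), you can send one multi-row INSERT per batch:

import pyodbc

# Build a single "INSERT ... VALUES (...), (...), ..." statement
rows = ", ".join("('{}','{}')".format(r[0], r[1]) for _, r in df.iterrows())
batch_query = "INSERT INTO table_name VALUES {}".format(rows)

with pyodbc.connect("DSN=*****", autocommit=True) as conn:
    conn.cursor().execute(batch_query)  # One round trip instead of one per row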