I'm trying to upload a binary.zip file into SQL Server as the content of a varbinary column.
The target table:
CREATE TABLE myTable ( zip_file varbinary(MAX) );
My NiFi flow is very simple:
-> GetFile:
     filter: binary.zip
-> UpdateAttribute:
     sql.args.1.type = -3      # VARBINARY according to the JDBC Types enumeration
     sql.args.1.value = ???    # I don't know what to put here! (I've tried everything!)
     sql.args.1.format = ???   # Is it required? I tried 'hex'
-> PutSQL:
     SQL statement = INSERT INTO myTable (zip_file) VALUES (?);
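As an aside on the format question: the usual trick for carrying binary data in a string-valued attribute is to encode it as hex or base64 text, then decode it back to bytes right before binding it to the statement. A minimal sketch of that round trip, using Python's built-in sqlite3 in place of SQL Server (the table and payload here are just illustrations):

```python
import base64
import sqlite3

# Stand-in for the flow file content (a few fake zip bytes).
payload = b"PK\x03\x04...zip bytes..."

# Encode to attribute-safe text, in either of the two common ways.
as_b64 = base64.b64encode(payload).decode("ascii")
as_hex = payload.hex()

# Decode back to bytes and bind as a parameter -- the binary value is
# never concatenated into the SQL text itself.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE myTable (zip_file BLOB)")
conn.execute("INSERT INTO myTable (zip_file) VALUES (?)",
             (base64.b64decode(as_b64),))

row = conn.execute("SELECT zip_file FROM myTable").fetchone()
assert row[0] == payload == bytes.fromhex(as_hex)
```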
What should I put in sql.args.1.value?
I think it should be the flow file payload so that it gets used as the inserted value in PutSQL, but nothing has worked so far!
Thanks!
SOLUTION UPDATE:
Based on https://issues.apache.org/jira/browse/NIFI-8052 (considering that I am sending some of the data as attribute parameters):
import org.apache.nifi.controller.ControllerService
import groovy.sql.Sql

def flowFile = session.get()
if (!flowFile) return

// Resolve the DBCP controller service whose name is carried in a flow file attribute
def lookup = context.controllerServiceLookup
def dbServiceName = flowFile.getAttribute('DatabaseConnectionPoolName')
def tableName = flowFile.getAttribute('table_name')
def fieldName = flowFile.getAttribute('field_name')
def dbcpServiceId = lookup.getControllerServiceIdentifiers(ControllerService)
    .find { cs -> lookup.getControllerServiceName(cs) == dbServiceName }

def conn = lookup.getControllerService(dbcpServiceId)?.getConnection()
def sql = new Sql(conn)
try {
    // Stream the flow file content straight into the varbinary parameter
    flowFile.read { rawIn ->
        def parms = [rawIn]
        sql.executeInsert "INSERT INTO " + tableName + " (date, " + fieldName + ") VALUES (CAST(GETDATE() AS Date), ?)", parms
    }
} finally {
    conn?.close()
}

session.transfer(flowFile, REL_SUCCESS)
session.commit()
Maybe there is a NiFi-native way to insert a blob, but you can use ExecuteGroovyScript instead of UpdateAttribute and PutSQL.
Add an SQL.mydb parameter at the processor level and link it to the required DBCP pool.
Use the following script body:
def ff = session.get()
if (!ff) return

def statement = "INSERT INTO myTable (zip_file) VALUES (:p_zip_file)"
def params = [
    p_zip_file: SQL.mydb.BLOB(ff.read())  // cast the flow file content to the BLOB sql type
]
SQL.mydb.executeInsert(params, statement) // committed automatically on flow file success

// transfer to success without changes
REL_SUCCESS << ff
SQL.mydb in the script is a reference to a groovy.sql.Sql object.
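If the :p_zip_file syntax is unfamiliar: groovy.sql.Sql fills named placeholders from a map, the same way named parameters work in most database APIs. A hedged sketch of the equivalent call using Python's sqlite3 (a hypothetical in-memory table, not the actual SQL Server one):

```python
import sqlite3

# Hypothetical stand-in for the SQL Server table; the point is the named
# placeholder (:p_zip_file) filled from a map/dict, the same pattern the
# groovy.sql.Sql script above uses.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE myTable (zip_file BLOB)")

payload = b"PK\x03\x04...zip bytes..."  # stand-in for the flow file content
conn.execute("INSERT INTO myTable (zip_file) VALUES (:p_zip_file)",
             {"p_zip_file": payload})

row = conn.execute("SELECT zip_file FROM myTable").fetchone()
```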