Reading and transforming FHIR data with the ADF REST connector



I am trying to use Azure Data Factory to read data from a FHIR server and transform the results into newline-delimited JSON (ndjson) files in Azure Blob Storage. Specifically, if you query the FHIR server you might get something like this:

{
  "resourceType": "Bundle",
  "id": "som-id",
  "type": "searchset",
  "link": [
    {
      "relation": "next",
      "url": "https://fhirserver/?ct=token"
    },
    {
      "relation": "self",
      "url": "https://fhirserver/"
    }
  ],
  "entry": [
    {
      "fullUrl": "https://fhirserver/Organization/1234",
      "resource": {
        "resourceType": "Organization",
        "id": "1234",
        // More fields
      }
    },
    {
      "fullUrl": "https://fhirserver/Organization/456",
      "resource": {
        "resourceType": "Organization",
        "id": "456",
        // More fields
      }
    }
    // More resources
  ]
}

Basically a bundle of resources. I would like to convert that into a newline-delimited (aka ndjson) file where each line is just the JSON of a resource:

{"resourceType": "Organization", "id": "1234", // More fields }
{"resourceType": "Organization", "id": "456", // More fields }
// More lines with resources

I am able to set up the REST connector and it can query the FHIR server (including pagination), but no matter what I try I cannot seem to generate the output I want. I set up an Azure Blob storage dataset:

{
  "name": "AzureBlob1",
  "properties": {
    "linkedServiceName": {
      "referenceName": "AzureBlobStorage1",
      "type": "LinkedServiceReference"
    },
    "type": "AzureBlob",
    "typeProperties": {
      "format": {
        "type": "JsonFormat",
        "filePattern": "setOfObjects"
      },
      "fileName": "myout.json",
      "folderPath": "outfhirfromadf"
    }
  },
  "type": "Microsoft.DataFactory/factories/datasets"
}

and configured a copy activity:

{
  "name": "pipeline1",
  "properties": {
    "activities": [
      {
        "name": "Copy Data1",
        "type": "Copy",
        "policy": {
          "timeout": "7.00:00:00",
          "retry": 0,
          "retryIntervalInSeconds": 30,
          "secureOutput": false,
          "secureInput": false
        },
        "typeProperties": {
          "source": {
            "type": "RestSource",
            "httpRequestTimeout": "00:01:40",
            "requestInterval": "00.00:00:00.010"
          },
          "sink": {
            "type": "BlobSink"
          },
          "enableStaging": false,
          "translator": {
            "type": "TabularTranslator",
            "schemaMapping": {
              "resource": "resource"
            },
            "collectionReference": "$.entry"
          }
        },
        "inputs": [
          {
            "referenceName": "FHIRSource",
            "type": "DatasetReference"
          }
        ],
        "outputs": [
          {
            "referenceName": "AzureBlob1",
            "type": "DatasetReference"
          }
        ]
      }
    ]
  },
  "type": "Microsoft.DataFactory/factories/pipelines"
}

But at the end of it (in spite of the schema mapping being configured), the end result in the blob is always just the raw bundle returned from the server. If I configure the output blob to be comma-delimited text, I can extract fields and create a flattened tabular view, but that is not really what I want.

Any suggestions would be much appreciated.

So I found a solution. If I do the original step of simply dumping the bundle into a JSON file, and then do another conversion from that JSON file into what I pretend is a text file in another blob, I can create the ndjson file.

Basically, define another blob dataset:

{
  "name": "AzureBlob2",
  "properties": {
    "linkedServiceName": {
      "referenceName": "AzureBlobStorage1",
      "type": "LinkedServiceReference"
    },
    "type": "AzureBlob",
    "structure": [
      {
        "name": "Prop_0",
        "type": "String"
      }
    ],
    "typeProperties": {
      "format": {
        "type": "TextFormat",
        "columnDelimiter": ",",
        "rowDelimiter": "",
        "quoteChar": "",
        "nullValue": "\N",
        "encodingName": null,
        "treatEmptyAsNull": true,
        "skipLineCount": 0,
        "firstRowAsHeader": false
      },
      "fileName": "myout.json",
      "folderPath": "adfjsonout2"
    }
  },
  "type": "Microsoft.DataFactory/factories/datasets"
}

Note that this one is TextFormat, and also note that the quoteChar is blank. If I then add another copy activity:

{
  "name": "pipeline1",
  "properties": {
    "activities": [
      {
        "name": "Copy Data1",
        "type": "Copy",
        "policy": {
          "timeout": "7.00:00:00",
          "retry": 0,
          "retryIntervalInSeconds": 30,
          "secureOutput": false,
          "secureInput": false
        },
        "typeProperties": {
          "source": {
            "type": "RestSource",
            "httpRequestTimeout": "00:01:40",
            "requestInterval": "00.00:00:00.010"
          },
          "sink": {
            "type": "BlobSink"
          },
          "enableStaging": false,
          "translator": {
            "type": "TabularTranslator",
            "schemaMapping": {
              "['resource']": "resource"
            },
            "collectionReference": "$.entry"
          }
        },
        "inputs": [
          {
            "referenceName": "FHIRSource",
            "type": "DatasetReference"
          }
        ],
        "outputs": [
          {
            "referenceName": "AzureBlob1",
            "type": "DatasetReference"
          }
        ]
      },
      {
        "name": "Copy Data2",
        "type": "Copy",
        "dependsOn": [
          {
            "activity": "Copy Data1",
            "dependencyConditions": [
              "Succeeded"
            ]
          }
        ],
        "policy": {
          "timeout": "7.00:00:00",
          "retry": 0,
          "retryIntervalInSeconds": 30,
          "secureOutput": false,
          "secureInput": false
        },
        "typeProperties": {
          "source": {
            "type": "BlobSource",
            "recursive": true
          },
          "sink": {
            "type": "BlobSink"
          },
          "enableStaging": false,
          "translator": {
            "type": "TabularTranslator",
            "columnMappings": {
              "resource": "Prop_0"
            }
          }
        },
        "inputs": [
          {
            "referenceName": "AzureBlob1",
            "type": "DatasetReference"
          }
        ],
        "outputs": [
          {
            "referenceName": "AzureBlob2",
            "type": "DatasetReference"
          }
        ]
      }
    ]
  },
  "type": "Microsoft.DataFactory/factories/pipelines"
}

And then it all works out. It is not ideal in that I now have two copies of the data in blobs, but one of them can easily be deleted, I suppose.

I would still love to hear about it if somebody has a one-step solution.

As briefly discussed in the comments, the Copy Activity does not provide much functionality beyond mapping data. As stated in the documentation, the Copy activity does the following operations:

  1. Reads data from a source data store.
  2. Performs serialization/deserialization, compression/decompression, column mapping, and so on, based on the configuration of the input dataset, the output dataset, and the Copy activity.
  3. Writes data to the sink/destination data store.

It does not look like the Copy Activity does anything other than copying things around efficiently.

What I found out that works is to use Databricks.

Here are the steps:

  1. Add a Databricks account to your subscription;
  2. Go to the Databricks page by clicking the authoring button;
  3. Create a notebook;
  4. Write the script (Scala, Python, or .NET, which was recently announced).

The script would go as follows (see the sketch after this list):

  1. Read the data from Blob storage;
  2. Filter and transform the data as needed;
  3. Write the data back to Blob storage;
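
A minimal sketch of such a notebook in Scala/Spark could look like the following; the container names, <storageaccount> and the output path are placeholders, and it assumes the cluster already has access to the storage account (for example via a mounted container or an account key in the Spark config):

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, explode, to_json}

val spark = SparkSession.builder().getOrCreate()

// 1. Read the bundle file(s) dumped by the copy activity. multiLine is needed
//    because each bundle is one multi-line JSON document, not one object per line.
val bundles = spark.read
  .option("multiLine", "true")
  .json("wasbs://outfhirfromadf@<storageaccount>.blob.core.windows.net/")

// 2. Flatten to one row per bundle entry and keep only the resource,
//    serialized back into a JSON string.
val resources = bundles
  .select(explode(col("entry")).as("entry"))
  .select(to_json(col("entry.resource")).as("value"))

// 3. text() writes the single string column one value per line, i.e. ndjson.
resources.write
  .mode("overwrite")
  .text("wasbs://ndjsonout@<storageaccount>.blob.core.windows.net/organizations")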

The script can be tested from there and, once ready, you can go back to your pipeline and create a Notebook activity that points to the notebook containing the script.

I struggled coding in Scala, but it was worth it :)

For anyone who finds this post in the future, you can accomplish this with the $export API call. Note that you must have a storage account linked to your FHIR server.

https://build.fhir.org/ig/HL7/bulk-data/export.html#endpoint---system-level-export
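
For illustration, kicking off a system-level export comes down to a single HTTP call; below is a rough sketch using Scala and the JDK 11 HTTP client, where the base URL, the bearer token, and the _type filter are placeholders:

import java.net.URI
import java.net.http.{HttpClient, HttpRequest, HttpResponse}

// Kick off a system-level bulk export. Per the spec, the server answers
// 202 Accepted with a Content-Location header pointing at a status URL to poll;
// the resulting ndjson files end up in the storage account linked to the FHIR server.
val client = HttpClient.newHttpClient()
val request = HttpRequest.newBuilder()
  .uri(URI.create("https://fhirserver/$export?_type=Organization"))
  .header("Accept", "application/fhir+json")
  .header("Prefer", "respond-async")
  .header("Authorization", "Bearer <token>")
  .GET()
  .build()

val response = client.send(request, HttpResponse.BodyHandlers.ofString())
val pollUrl  = response.headers().firstValue("Content-Location").orElse("")
println(s"Status: ${response.statusCode()}, poll at: $pollUrl")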
