I am using Node.js and Azure SDK v12. I want to copy an existing blob whose access tier === 'Archive'. To do that, I want to copy the blob and write it to the same container with a different blob name and a changed (rehydrated) access tier.
I can change the access tier of the existing 'Archive' blob directly, but that is not my goal. I want to keep the source blob in the 'Archive' tier and create a new blob with access tier === 'Cool' || 'Hot'.
I am following the documentation (https://learn.microsoft.com/en-us/azure/storage/blobs/archive-rehydrate-overview).
The code below works if the blob has access tier === 'Cool' || 'Hot'. However, it fails for a blob with access tier === 'Archive'.
Aside: I think the SDK's 'syncCopyFromURL' and 'beginCopyFromURL' are not suitable for copying a blob with access tier === 'Archive'. If I try, I get the following errors: 'syncCopyFromURL' gives me "This operation is not permitted on an archived blob."; 'beginCopyFromURL' gives me "Copy source blob has been modified", yet when I checked, the blob had not been modified (I checked the last-modified date, and it is in the past).
How can I copy an archived blob and save the new blob in the same container with a different access tier?
const { BlobServiceClient, generateBlobSASQueryParameters, BlobSASPermissions } = require("@azure/storage-blob");

export default async (req, res) => {
  if (req.method === 'POST') {
    const connectionString = 'DefaultEndpointsProtocol=...';
    const containerName = 'container';
    const srcFile = 'filename'; // this is the filename as it appears on the Azure portal (i.e. the blob name)

    function getSignedUrl(blobClient, options = {}) {
      options.permissions = options.permissions || "racwd";
      const expiry = 3600; // seconds
      const startsOn = new Date();
      const expiresOn = new Date(new Date().valueOf() + expiry * 1000);
      // generateBlobSASQueryParameters is synchronous in SDK v12
      const token = generateBlobSASQueryParameters(
        {
          containerName: blobClient.containerName,
          blobName: blobClient.name,
          permissions: BlobSASPermissions.parse(options.permissions),
          startsOn, // Optional
          expiresOn, // Required
        },
        blobClient.credential,
      );
      return `${blobClient.url}?${token.toString()}`;
    }

    (async () => {
      try {
        const blobServiceClient = BlobServiceClient.fromConnectionString(connectionString);
        const containerClient = blobServiceClient.getContainerClient(containerName);
        const sourceBlobClient = containerClient.getBlockBlobClient(srcFile);
        const targetBlobClient = containerClient.getBlockBlobClient('targetFileName');
        const url = getSignedUrl(sourceBlobClient);
        console.log(`source: ${url}`);
        const result = await targetBlobClient.syncCopyFromURL(url);
        // const result = await targetBlobClient.beginCopyFromURL(url);
        console.log(result);
      } catch (e) {
        console.log(e);
      }
    })();
  }
}

export const config = {
  api: {
    bodyParser: {
      sizeLimit: '1gb',
    },
  },
}
The main step you need to know about is changing the blob's access tier.
Using the code below, we can set the access tier from JS:

// Archive the blob - log the error codes
await blockBlobClient.setAccessTier("Archive");
try {
  // Downloading an archived blockBlob fails
  console.log("// Downloading an archived blockBlob fails...");
  await blockBlobClient.download();
} catch (err) {
  // BlobArchived Conflict (409): This operation is not permitted on an archived blob.
  console.log(
    `requestId - ${err.details.requestId}, statusCode - ${err.statusCode}, errorCode - ${err.details.errorCode}`
  );
  console.log(`error message - ${err.details.message}\n`);
}
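Once setAccessTier has requested a move out of Archive, the rehydration itself is asynchronous and can take hours. A minimal sketch of waiting for it, assuming only the v12 getProperties() call (the helper name and polling interval are our own):

```javascript
// Poll a blob's properties until rehydration completes. While rehydration is
// pending, archiveStatus reads "rehydrate-pending-to-hot" (or "-to-cool");
// once the blob is back in an online tier, archiveStatus is undefined.
async function waitForRehydration(blobClient, intervalMs = 60000) {
  for (;;) {
    const props = await blobClient.getProperties();
    if (!props.archiveStatus) return props.accessTier; // e.g. "Hot"
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}
```

With Standard priority, rehydration can take up to 15 hours, so in practice reacting to an Azure Event Grid tier-change event is preferable to polling from a request handler.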
The remaining operations can be done from a copy event (a blob-triggered Azure Function), as shown below:
import logging
import azure.functions as func
from azure.storage.blob import BlobServiceClient

def main(myblob: func.InputStream):
    try:
        logging.info("Python blob trigger function processed blob\n")
        CONN_STR = "ADD_CON_STR"
        blob_service_client = BlobServiceClient.from_connection_string(CONN_STR)
        # MAP SOURCE FILE
        blob_client = blob_service_client.get_blob_client(container="newcontainer0805", blob="source.txt")
        # SOURCE CONTENTS
        content = blob_client.download_blob().content_as_text()
        # WRITE HEADER TO AN OUTPUT FILE
        output_file_dest = blob_service_client.get_blob_client(container="target", blob="target.csv")
        # INITIALIZE OUTPUT
        output_str = ""
        # STORE COLUMN HEADERS
        data = list()
        data.append(["column1", "column2", "column3", "column4"])
        output_str += ('"' + '","'.join(data[0]) + '"\n')
        output_file_dest.upload_blob(output_str, overwrite=True)
        logging.info('END OF FILE UPLOAD')
    except Exception as e:
        template = "An exception of type {0} occurred. Arguments:\n{1!r}"
        message = template.format(type(e).__name__, e.args)
        print(message)

if __name__ == "__main__":
    main("source.txt")
If you want to keep the new blob in the same container, change the destination to the same container as the source; that lets you copy the blob and append data to it.
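As for the original question (new blob in the same container while the source stays archived): syncCopyFromURL cannot read an archived source, but the Copy Blob operation behind beginCopyFromURL can rehydrate an archived blob into a new destination when the copy request carries a target tier. A sketch, assuming the v12 `tier` and `rehydratePriority` options of beginCopyFromURL; `targetBlobClient` and `sourceUrl` would come from the question's own setup code:

```javascript
// Copy-with-rehydration: ask the service to copy the archived source into a
// new destination blob at an online tier. The source blob stays in Archive.
// "Standard" priority can take up to 15 hours; "High" is faster but costlier.
function rehydrateCopyOptions(tier = "Hot", rehydratePriority = "Standard") {
  return { tier, rehydratePriority };
}

// targetBlobClient: a BlockBlobClient for the new name in the same container;
// sourceUrl: a SAS URL for the archived source (as built in the question).
// The returned promise resolves once the service finishes the async copy.
async function copyArchivedBlob(targetBlobClient, sourceUrl, tier = "Hot") {
  const poller = await targetBlobClient.beginCopyFromURL(
    sourceUrl,
    rehydrateCopyOptions(tier)
  );
  return poller.pollUntilDone();
}
```

This only works when the destination is a new blob; rehydrating over an existing blob, or "copying" an archived blob while leaving the destination in Archive, is rejected by the service.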