How to upload larger files (greater than 12 MB) to an AWS S3 bucket using Salesforce Apex



I need some help uploading large files from the Salesforce Apex server side to an S3 bucket.

I need to be able to split the blob and upload it to an AWS S3 bucket using HTTP PUT requests. I can handle files up to 12 MB in a single upload, because that is the PUT request body size limit in Apex, so I need to upload using a multipart operation. I noticed that S3 supports multipart uploads and returns an upload ID. Wondering if anyone has done this before in Salesforce Apex code. Any help would be much appreciated.
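
For reference, the sequence I am trying to reproduce from Apex is the one the AWS SDK for Java's low-level API performs: initiate the upload to get an upload ID, PUT each part, then complete the upload with the collected part ETags. A rough sketch of that flow (the bucket, key, file path, and part size are placeholders, not values from my org):

import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.CompleteMultipartUploadRequest;
import com.amazonaws.services.s3.model.InitiateMultipartUploadRequest;
import com.amazonaws.services.s3.model.InitiateMultipartUploadResult;
import com.amazonaws.services.s3.model.PartETag;
import com.amazonaws.services.s3.model.UploadPartRequest;
import java.io.File;
import java.util.ArrayList;
import java.util.List;

public class LowLevelMultipartUpload {
    public static void main(String[] args) {
        String bucketName = "testbucket";           // placeholder
        String keyName = "Storage/largefile.pdf";   // placeholder
        File file = new File("/tmp/largefile.pdf"); // placeholder
        long partSize = 10L * 1024 * 1024; // 10 MB parts; every part except the last must be at least 5 MB

        AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
                .withRegion(Regions.US_WEST_2)
                .build();

        // Step 1: initiate the multipart upload; S3 returns an upload ID.
        InitiateMultipartUploadResult initResponse = s3Client.initiateMultipartUpload(
                new InitiateMultipartUploadRequest(bucketName, keyName));
        String uploadId = initResponse.getUploadId();

        // Step 2: upload the file in parts, keeping each part's ETag.
        List<PartETag> partETags = new ArrayList<>();
        long contentLength = file.length();
        long filePosition = 0;
        for (int partNumber = 1; filePosition < contentLength; partNumber++) {
            long currentPartSize = Math.min(partSize, contentLength - filePosition);
            UploadPartRequest uploadRequest = new UploadPartRequest()
                    .withBucketName(bucketName)
                    .withKey(keyName)
                    .withUploadId(uploadId)
                    .withPartNumber(partNumber)
                    .withFileOffset(filePosition)
                    .withFile(file)
                    .withPartSize(currentPartSize);
            partETags.add(s3Client.uploadPart(uploadRequest).getPartETag());
            filePosition += currentPartSize;
        }

        // Step 3: complete the upload by sending the upload ID and the part ETags.
        s3Client.completeMultipartUpload(
                new CompleteMultipartUploadRequest(bucketName, keyName, uploadId, partETags));
    }
}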

Thanks in advance, Parbati Bose.

Here is the code:

public with sharing class AWSS3Service {

    private static Http http;

    @AuraEnabled
    public static void uploadToAWSS3(String fileToUpload, String filenm, String doctype) {

        fileToUpload = EncodingUtil.urlDecode(fileToUpload, 'UTF-8');
        filenm = EncodingUtil.urlEncode(filenm, 'UTF-8'); // encode the filename in case there are special characters in the name
        String filename = 'Storage' + '/' + filenm;
        String formattedDateString = DateTime.now().formatGMT('EEE, dd MMM yyyy HH:mm:ss z');

        // S3 bucket!
        String key = '**********';
        String secret = '********';
        String bucketname = 'testbucket';
        String region = 's3-us-west-2';

        String host = region + '.' + 'amazonaws.com'; // AWS server base URL

        try {
            HttpRequest req = new HttpRequest();
            http = new Http();
            req.setMethod('PUT');
            req.setEndpoint('https://' + bucketname + '.' + host + '/' + filename);
            req.setHeader('Host', bucketname + '.' + host);
            req.setHeader('Content-Encoding', 'UTF-8');
            req.setHeader('Content-Type', doctype);
            req.setHeader('Connection', 'keep-alive');
            req.setHeader('Date', formattedDateString);
            // the ACL must be sent as the x-amz-acl header, and any x-amz-* header
            // must also appear in the Signature V2 string to sign below
            req.setHeader('x-amz-acl', 'public-read-write');

            // AWS Signature V2: VERB\nContent-MD5\nContent-Type\nDate\nCanonicalizedAmzHeaders + CanonicalizedResource
            String stringToSign = 'PUT\n\n' +
                doctype + '\n' +
                formattedDateString + '\n' +
                'x-amz-acl:public-read-write\n' +
                '/' + bucketname + '/' + filename;

            Blob mac = Crypto.generateMac('hmacSHA1', Blob.valueOf(stringToSign), Blob.valueOf(secret));
            String signed = EncodingUtil.base64Encode(mac);
            String authHeader = 'AWS' + ' ' + key + ':' + signed;
            req.setHeader('Authorization', authHeader);
            req.setBodyAsBlob(EncodingUtil.base64Decode(fileToUpload));

            HttpResponse response = http.send(req);
            System.debug('response from aws s3 is ' + response.getStatusCode() + ' and ' + response.getBody());

        } catch (Exception e) {
            System.debug('error in connecting to s3 ' + e.getMessage());
            throw e;
        }
    }
}

I have been working on this same problem for the past few days, and unfortunately, because of the Apex heap size limit of 12 MB, you are better off performing this transfer from outside Salesforce. https://developer.salesforce.com/docs/atlas.en-us.apexcode.meta/apexcode/apex_gov_limits.htm

While it is possible to write files using multipart, it does not appear that you can get them out of the database in order to split them into chunks that can be sent. A similar question was asked on Stack Exchange - https://salesforce.stackexchange.com/questions/264015/how-to-retrieve-file-content-from-content-document-in-chunks-using-soql

The AWS SDK for Java exposes a high-level API, called TransferManager, that simplifies multipart uploads (see Uploading Objects Using Multipart Upload API). You can upload data from a file or a stream. You can also set advanced options, such as the part size you want to use for the multipart upload, or the number of concurrent threads you want to use when uploading the parts. You can also set optional object properties, the storage class, or the ACL. You use the PutObjectRequest and the TransferManagerConfiguration classes to set these advanced options. (A sketch of these options follows the sample code below.)

Here is the sample code from https://docs.aws.amazon.com/AmazonS3/latest/dev/HLuploadFileJava.html, which you could adapt:

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.TransferManagerBuilder;
import com.amazonaws.services.s3.transfer.Upload;
import java.io.File;

public class HighLevelMultipartUpload {
    public static void main(String[] args) throws Exception {
        Regions clientRegion = Regions.DEFAULT_REGION;
        String bucketName = "*** Bucket name ***";
        String keyName = "*** Object key ***";
        String filePath = "*** Path for file to upload ***";
        try {
            AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
                    .withRegion(clientRegion)
                    .withCredentials(new ProfileCredentialsProvider())
                    .build();
            TransferManager tm = TransferManagerBuilder.standard()
                    .withS3Client(s3Client)
                    .build();
            // TransferManager processes all transfers asynchronously,
            // so this call returns immediately.
            Upload upload = tm.upload(bucketName, keyName, new File(filePath));
            System.out.println("Object upload started");
            // Optionally, wait for the upload to finish before continuing.
            upload.waitForCompletion();
            System.out.println("Object upload complete");
        } catch (AmazonServiceException e) {
            // The call was transmitted successfully, but Amazon S3 couldn't process
            // it, so it returned an error response.
            e.printStackTrace();
        } catch (SdkClientException e) {
            // Amazon S3 couldn't be contacted for a response, or the client
            // couldn't parse the response from Amazon S3.
            e.printStackTrace();
        }
    }
}
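
And for the advanced options mentioned above (part size, multipart threshold, ACL), here is a minimal sketch, assuming the same SDK and the builder style used in the sample; the sizes, region, bucket, key, and file path are example values only:

import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.CannedAccessControlList;
import com.amazonaws.services.s3.model.PutObjectRequest;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.TransferManagerBuilder;
import com.amazonaws.services.s3.transfer.Upload;
import java.io.File;

public class TransferManagerOptionsExample {
    public static void main(String[] args) throws Exception {
        AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
                .withRegion(Regions.US_WEST_2) // example region
                .build();

        // Configure the part size and the object size above which
        // TransferManager switches from a single PUT to a multipart upload.
        TransferManager tm = TransferManagerBuilder.standard()
                .withS3Client(s3Client)
                .withMinimumUploadPartSize(10L * 1024 * 1024)    // 10 MB parts (example)
                .withMultipartUploadThreshold(12L * 1024 * 1024) // multipart above 12 MB (example)
                .build();

        // Optional object properties such as a canned ACL go on the PutObjectRequest.
        PutObjectRequest request =
                new PutObjectRequest("testbucket", "Storage/myfile.pdf", new File("/tmp/myfile.pdf"))
                        .withCannedAcl(CannedAccessControlList.PublicReadWrite);

        Upload upload = tm.upload(request);
        upload.waitForCompletion();
        tm.shutdownNow();
    }
}

The builder exposes the same part-size settings that the quoted documentation refers to via TransferManagerConfiguration; withMultipartUploadThreshold controls when the upload is split into parts at all.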
