I have an S3 bucket xxx. I wrote a Lambda function that reads data from the S3 bucket and writes those details to an RDS PostgreSQL instance. My code works, and I added a trigger so the Lambda function fires when a file lands on S3.
However, my code can only read the file named "SampleData.csv". Consider my code given below:
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.S3Event;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.GetObjectRequest;
import com.amazonaws.services.s3.model.S3Object;

public class LambdaFunctionHandler implements RequestHandler<S3Event, String> {

    private AmazonS3 s3 = AmazonS3ClientBuilder.standard().build();

    public LambdaFunctionHandler() {}

    // Test purpose only.
    LambdaFunctionHandler(AmazonS3 s3) {
        this.s3 = s3;
    }

    @Override
    public String handleRequest(S3Event event, Context context) {
        context.getLogger().log("Received event: " + event);
        String bucket = "xxx";
        String key = "SampleData.csv";
        System.out.println(key);

        try {
            S3Object response = s3.getObject(new GetObjectRequest(bucket, key));
            String contentType = response.getObjectMetadata().getContentType();
            context.getLogger().log("CONTENT TYPE: " + contentType);

            // Read the source file as text
            AmazonS3 s3Client = new AmazonS3Client();
            String body = s3Client.getObjectAsString(bucket, key);
            System.out.println("Body: " + body);
            System.out.println();
            System.out.println("Reading as stream.....");
            System.out.println();

            BufferedReader br = new BufferedReader(new InputStreamReader(response.getObjectContent()));

            // just saving the CSV data to the database
            String csvOutput;
            try {
                Class.forName("org.postgresql.Driver");
                Connection con = DriverManager.getConnection(
                        "jdbc:postgresql://ENDPOINT:5432/DBNAME", "USER", "PASSWORD");
                System.out.println("Connected");

                // Checking EOF
                while ((csvOutput = br.readLine()) != null) {
                    String[] str = csvOutput.split(",");
                    String name = str[1];
                    String query = "insert into schema.tablename(name) values('" + name + "')";
                    Statement statement = con.createStatement();
                    statement.executeUpdate(query);
                }
                System.out.println("Inserted Successfully!!!");
            } catch (Exception ase) {
                context.getLogger().log(String.format(
                        "Error getting object %s from bucket %s. Make sure they exist and"
                        + " your bucket is in the same region as this function.", key, bucket));
                // throw ase;
            }
            return contentType;
        } catch (Exception e) {
            e.printStackTrace();
            context.getLogger().log(String.format(
                    "Error getting object %s from bucket %s. Make sure they exist and"
                    + " your bucket is in the same region as this function.", key, bucket));
            throw e;
        }
    }
}
As you can see from my code, I hard-coded key = "SampleData.csv". Is there any way to get the key of whatever file lands in the bucket, without specifying a particular file name?
These two links will help:
http://docs.aws.amazon.com/AmazonS3/latest/dev/ListingKeysHierarchy.html
http://docs.aws.amazon.com/AmazonS3/latest/dev/ListingObjectKeysUsingJava.html
You can list objects using a prefix and delimiter to find the key you are looking for without passing a specific file name.
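As a rough sketch of that listing call (assuming the AWS SDK for Java v1 on the classpath; the bucket name "xxx" comes from the question and the prefix "incoming/" is a hypothetical example):

```java
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.ListObjectsRequest;
import com.amazonaws.services.s3.model.ObjectListing;
import com.amazonaws.services.s3.model.S3ObjectSummary;

public class ListKeys {
    public static void main(String[] args) {
        AmazonS3 s3 = AmazonS3ClientBuilder.standard().build();

        // "incoming/" is a hypothetical prefix; the "/" delimiter collapses
        // deeper "sub-folders" into common prefixes instead of listing every key.
        ListObjectsRequest request = new ListObjectsRequest()
                .withBucketName("xxx")
                .withPrefix("incoming/")
                .withDelimiter("/");

        ObjectListing listing = s3.listObjects(request);
        for (S3ObjectSummary summary : listing.getObjectSummaries()) {
            System.out.println(summary.getKey());
        }
    }
}
```

Note that `listObjects` returns at most 1000 keys per call; for larger buckets you would loop with `listNextBatchOfObjects`.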
If you need the event details on S3, you can actually enable an S3 event notification to the Lambda function. You can enable this by:
- Clicking "Properties" inside your bucket
- Clicking "Events"
- Clicking "Add notification"
- Naming it and selecting the event type (e.g. Put, Delete, etc.)
- Giving a prefix and suffix if necessary, or leaving them blank so all events are considered
- Then choosing "Send to" Lambda function and providing the Lambda ARN.
Now the event details will be sent to the Lambda function as JSON, and you can take the details from that JSON. The input will look like this:
{
  "Records": [
    {
      "eventVersion": "2.0",
      "eventSource": "aws:s3",
      "awsRegion": "ap-south-1",
      "eventTime": "2017-11-23T09:25:54.845Z",
      "eventName": "ObjectRemoved:Delete",
      "userIdentity": { "principalId": "AWS:AIDAJASDFGZTLA6UZ7YAK" },
      "requestParameters": { ... },
      "responseElements": {
        "x-amz-request-id": "A235BER45D4974E",
        "x-amz-id-2": "..."
      },
      "s3": {
        "s3SchemaVersion": "1.0",
        "configurationId": "sns",
        "bucket": {
          "name": "example-bucket1",
          "ownerIdentity": { "principalId": "AQFXV36ADJU8" },
          "arn": "arn:aws:s3:::example-bucket1"
        },
        "object": {
          "key": "sampledata.csv",
          "sequencer": "005A169422CA7CDF66"
        }
      }
    }
  ]
}
You can access the key as objectname = event['Records'][0]['s3']['object']['key'] (oops, that's for Python) and then send this information to RDS.
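In Java the same lookup goes through the `S3Event` parameter your handler already receives (from the aws-lambda-java-events library). One caveat, sketched below: the key arrives URL-encoded in the event, so a name containing spaces must be decoded before calling getObject. The SDK access path is shown in comments; `decodeKey` itself is plain standard-library Java:

```java
import java.io.UnsupportedEncodingException;
import java.net.URLDecoder;

public class KeyFromEvent {

    // Inside handleRequest(S3Event event, Context context) the bucket and
    // raw key come from the event itself instead of being hard-coded:
    //   String bucket = event.getRecords().get(0).getS3().getBucket().getName();
    //   String rawKey = event.getRecords().get(0).getS3().getObject().getKey();

    /** S3 event keys are URL-encoded ("Sample Data.csv" arrives as "Sample+Data.csv"). */
    public static String decodeKey(String rawKey) {
        try {
            return URLDecoder.decode(rawKey, "UTF-8");
        } catch (UnsupportedEncodingException e) {
            throw new RuntimeException(e); // UTF-8 is always available
        }
    }

    public static void main(String[] args) {
        System.out.println(decodeKey("Sample+Data.csv")); // prints "Sample Data.csv"
    }
}
```

With the decoded key in hand, you can pass it straight to `s3.getObject(new GetObjectRequest(bucket, key))` instead of the hard-coded "SampleData.csv".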