Text files extracted from zip archives accessed via URL are downloaded to the working directory instead of into R's global environment
I am trying to retrieve multiple text data files matching a certain pattern from several zipped files that I access via URL. I wrote a function that downloads the desired data files from a URL, saves them in a list, and binds all the data frames together. I then apply that function over the list of URLs.

The end result I want is a single data frame in R's global environment containing the downloaded data from all the URLs.

Currently, however, the individual files are downloaded into my working directory, which I don't want, and they are not combined into a single data frame. I suspect the problem stems from download.file, but I haven't been able to find a solution or a post about a similar issue.

# list of urls
url_df = data.frame(
  model = c("rcp26", "rcp45", "rcp85"),
  url = c("https://b2share.eudat.eu/api/files/d4850267-3ce2-44f4-b5e3-8391a4f3dc27/LTER_site_data_from_EURO-CORDEX-RCMs_rel1.see_disclaimer.77c127c4-2ebe-453b-b5af-61858ff02e31.huss_historical_rcp26_day_txt.zip",
          "https://b2share.eudat.eu/api/files/d4850267-3ce2-44f4-b5e3-8391a4f3dc27/LTER_site_data_from_EURO-CORDEX-RCMs_rel1.see_disclaimer.77c127c4-2ebe-453b-b5af-61858ff02e31.huss_historical_rcp45_day_txt.zip",
          "https://b2share.eudat.eu/api/files/d4850267-3ce2-44f4-b5e3-8391a4f3dc27/LTER_site_data_from_EURO-CORDEX-RCMs_rel1.see_disclaimer.77c127c4-2ebe-453b-b5af-61858ff02e31.huss_historical_rcp85_day_txt.zip"))
# create empty dataframe where data will be saved
downloaded_data = data.frame()
# create function to retrieve desired files from a single url
get_data = function(url) {
  temp <- tempfile() # create temp file
  download.file(url, temp) # download the file at the url

  # get a list of the desired files inside the archive
  file.list <- grep("KNMI-RACMO22E.*txt|MPI-CSC-REMO.*txt|SMHI-RCA4.*txt",
                    unzip(temp, list = TRUE)$Name, ignore.case = TRUE, value = TRUE)

  data.list = lapply(unzip(temp, files = file.list), read.table,
                     header = FALSE, comment.char = "", check.names = FALSE)

  # bind the dataframes in the list into one single dataframe
  bound_data = dplyr::bind_rows(data.list)

  downloaded_data = rbind(downloaded_data, bound_data)

  return(downloaded_data)

  unlink(temp)
}
# apply function over the list of urls
sapply(url_df$url, get_data)

Any help would be much appreciated!

You can't reference downloaded_data inside the function: the function is applied to each URL separately, and you bind the results together into downloaded_data afterwards. I also made some changes to how the data are unzipped and read in, to make sure the files are actually read. Below is a minimal sketch of the scoping behaviour, followed by a revised version of your code.
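A minimal sketch of why this happens (hypothetical objects, not the original data): an assignment inside a function creates a local copy, so the global data frame is never updated.

# the global object
downloaded_data <- data.frame()

f <- function() {
  # rbind here creates and grows a *local* downloaded_data
  downloaded_data <- rbind(downloaded_data, data.frame(x = 1))
  nrow(downloaded_data)
}

f()                    #> [1] 1
nrow(downloaded_data)  #> [1] 0   (the global object is unchanged)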

# list of urls
url_df = data.frame(
  model = c("rcp26", "rcp45", "rcp85"),
  url = c("https://b2share.eudat.eu/api/files/d4850267-3ce2-44f4-b5e3-8391a4f3dc27/LTER_site_data_from_EURO-CORDEX-RCMs_rel1.see_disclaimer.77c127c4-2ebe-453b-b5af-61858ff02e31.huss_historical_rcp26_day_txt.zip",
          "https://b2share.eudat.eu/api/files/d4850267-3ce2-44f4-b5e3-8391a4f3dc27/LTER_site_data_from_EURO-CORDEX-RCMs_rel1.see_disclaimer.77c127c4-2ebe-453b-b5af-61858ff02e31.huss_historical_rcp45_day_txt.zip",
          "https://b2share.eudat.eu/api/files/d4850267-3ce2-44f4-b5e3-8391a4f3dc27/LTER_site_data_from_EURO-CORDEX-RCMs_rel1.see_disclaimer.77c127c4-2ebe-453b-b5af-61858ff02e31.huss_historical_rcp85_day_txt.zip"))
# create function to retrieve desired files from a single url
get_data = function(url) {
  temp <- tempdir() # use the session's temporary directory
  zipfile <- file.path(temp, "downloaded.zip")
  download.file(url, zipfile) # download the zip archive at the url
  downloaded_files <- unzip(zipfile, exdir = temp) # extract; returns the extracted file paths
  keep_files <- downloaded_files[grep("KNMI-RACMO22E.*txt|MPI-CSC-REMO.*txt|SMHI-RCA4.*txt",
                                      downloaded_files)]
  data.list <- lapply(keep_files, read.table,
                      header = FALSE, comment.char = "", check.names = FALSE)
  # bind the dataframes in the list into one single dataframe
  bound_data <- dplyr::bind_rows(data.list)
  unlink(c(zipfile, downloaded_files)) # clean up before returning; after return() this would never run
  return(bound_data)
}
# apply function over the list of urls
downloaded_data <- dplyr::bind_rows(lapply(url_df$url, get_data))
dim(downloaded_data)
#> [1] 912962      7
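If you also want to record which model each row came from, one option is to tag each data frame before binding. This is a sketch, assuming get_data() returns one data frame per URL as above:

downloaded_data <- dplyr::bind_rows(
  Map(function(u, m) transform(get_data(u), model = m),
      as.character(url_df$url), as.character(url_df$model))
)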
