I have a large data frame with roughly 500,000 observations (identified by "ID") and 150+ variables. Some observations appear only once; others appear multiple times (up to 10 or so). I would like to "collapse" these multiple observations so that there is only one row per unique ID, with all of the information in columns 2:150 concatenated together. I don't need to perform any calculations on these observations, just a quick way to collapse them.
I have tried:
df.new <- group_by(df,"ID")
as well as:
library(data.table)
dt = data.table(df)
dt.new <- dt[, lapply(.SD, na.omit), by = "ID"]
Unfortunately, neither worked. Any help would be greatly appreciated!
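For what it's worth, that data.table attempt is close; a minimal sketch of a concatenating version (assuming a "/"-separated paste is an acceptable way to combine the values, which is not stated in the original question) might look like:

library(data.table)

dt <- data.table(df)
# collapse every non-ID column to a single "/"-separated string per ID
dt.new <- dt[, lapply(.SD, function(x) paste(x, collapse = "/")), by = ID]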
Using base R:
df = data.frame(ID = c("a","a","b","b","b","c","d","d"),
                day = c("1","2","3","4","5","6","7","8"),
                year = c(2016,2017,2017,2016,2017,2016,2017,2016),
                stringsAsFactors = F)
> df
ID day year
1 a 1 2016
2 a 2 2017
3 b 3 2017
4 b 4 2016
5 b 5 2017
6 c 6 2016
7 d 7 2017
8 d 8 2016
Do:
z = aggregate(df[,2:3],
              by = list(id = df$ID),
              function(x){ paste0(x, collapse = "/") })
Result:
> z
id day year
1 a 1/2 2016/2017
2 b 3/4/5 2017/2016/2017
3 c 6 2016
4 d 7/8 2017/2016
EDIT
If you want to avoid "collapsing" the NAs, do:
z = aggregate(df[,2:3],
              by = list(id = df$ID),
              function(x){ paste0(x[!is.na(x)], collapse = "/") })
For a data frame like:
> df
ID day year
1 a 1 2016
2 a 2 NA
3 b 3 2017
4 b 4 2016
5 b <NA> 2017
6 c 6 2016
7 d 7 2017
8 d 8 2016
the result is:
> z
id day year
1 a 1/2 2016
2 b 3/4 2017/2016/2017
3 c 6 2016
4 d 7/8 2017/2016
I have run into a similar problem in the past, although I wasn't dealing with several copies of the same data. In many cases there were only 2 instances, and in some cases 3. Below is my approach; hopefully it helps.
library(dplyr)
library(tidyr)

idx <- duplicated(df$key) | duplicated(df$key, fromLast = TRUE)  # flag every row whose key is duplicated; fromLast = TRUE also catches the first occurrence
dupes <- df[idx, ]      # rows with duplicated keys
non_dupes <- df[!idx, ] # rows that are already unique
temp <- dupes %>%
  group_by(key) %>%                                 # roll up the duplicated rows
  fill_(colnames(dupes), .direction = "down") %>%   # fill NAs downward within each key
  fill_(colnames(dupes), .direction = "up") %>%     # then upward, so the first row is complete
  slice(1)                                          # keep one row per key
Then it's easy to merge temp and non_dupes back together.
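For reference, a minimal sketch of that merge step, together with the current tidyr::fill() spelling (fill_() is deprecated in recent tidyr); df_collapsed is just an illustrative name:

# same roll-up as above, but using fill() with tidyselect instead of the deprecated fill_()
temp <- dupes %>%
  group_by(key) %>%
  fill(everything(), .direction = "downup") %>%  # fill NAs within each key in both directions
  slice(1) %>%
  ungroup()

# stitch the collapsed duplicates back onto the rows that were already unique
df_collapsed <- bind_rows(temp, non_dupes)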
EDIT
I would strongly suggest filtering df down to the population that is unique and/or relevant to your end goal, as this process can take some time.
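A minimal sketch of that pre-filtering step (ids_of_interest is a hypothetical vector, not something from the original post):

# keep only the IDs that matter for the end goal before collapsing
ids_of_interest <- c("a", "b")  # hypothetical subset
df_small <- df[df$ID %in% ids_of_interest, ]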
What about:
library(dplyr)

df %>%
  group_by(ID) %>%
  summarise_each(funs(paste0(., collapse = "/")))
Or, reproducibly...
iris %>%
  group_by(Species) %>%
  summarise_each(funs(paste0(., collapse = "/")))
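Note that summarise_each() and funs() are deprecated in recent dplyr releases; an equivalent using across() (a sketch, assuming dplyr >= 1.0) would be:

library(dplyr)

df %>%
  group_by(ID) %>%
  summarise(across(everything(), ~ paste0(.x, collapse = "/")))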