How to speed up a "unique" dataframe search

mbzjlibv  asked on 2023-03-27  in Other

I have a dataframe with 2,377,426 rows and 2 columns that looks like this:

Name                                            Seq
428293 ENSE00001892940:ENSE00001929862 AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
431857 ENSE00001892940:ENSE00001883352 AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAGGAAGTAAATGAGCTGATGGAAGAGC
432253 ENSE00001892940:ENSE00003623668 AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAGGAAGTAAATGAGCTGATGGAAGAGC
436213 ENSE00001892940:ENSE00003534967 AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAGGAAGTAAATGAGCTGATGGAAGAGC
429778 ENSE00001892940:ENSE00002409454 AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAGAGCTGGGAACCTTTGCTCAAAGCTCC
431263 ENSE00001892940:ENSE00001834214 AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAGAGCTGGGAACCTTTGCTCAAAGCTCC

All the values in the first column (Name) are unique, but the Seq column contains many duplicated values. I want a data.frame containing only the unique sequences and their names. I have tried unique, but it is too slow. I also tried sorting the data and using the following code:

dat_sorted <- data[order(data$Seq), ]
m <- dat_sorted[1, ]
x <- 1
for (i in 1:nrow(dat_sorted)) {
  if (dat_sorted[i, 2] != m[x, 2]) {
    x <- x + 1
    m[x, ] <- dat_sorted[i, ]
  }
}

That is also too slow! Is there a faster way to find the unique values in one column of a dataframe?

c90pui9n1#

data[!duplicated(data$Seq), ]

should do the trick.
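As a quick sanity check, a toy frame mirroring the question's layout (names and sequences invented for illustration) shows that negating duplicated() keeps the first row for each distinct Seq while preserving both columns:

```r
# Toy data frame: Name is unique, Seq repeats (values are made up)
data <- data.frame(
  Name = c("E1:E2", "E1:E3", "E1:E4"),
  Seq  = c("AAAA", "AAAA", "CCCC"),
  stringsAsFactors = FALSE
)

# duplicated() marks every repeat after the first occurrence,
# so !duplicated() keeps the first row for each distinct Seq
unique_rows <- data[!duplicated(data$Seq), ]
unique_rows
#>    Name  Seq
#> 1 E1:E2 AAAA
#> 3 E1:E4 CCCC
```

Because it is a single vectorized pass over one column, this avoids the row-by-row loop from the question entirely.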

uttx8gqw2#

library(dplyr)
data %>% distinct()

should be worthwhile, especially if your data is too big for your machine.
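One caveat: since Name never repeats in the question's data, a bare distinct() over both columns would return every row unchanged. A small sketch (toy values, assuming dplyr is installed) that deduplicates on Seq alone while keeping the Name column via `.keep_all = TRUE`:

```r
library(dplyr)

# Toy data in the question's shape: Name unique, Seq repeated
data <- data.frame(
  Name = c("E1:E2", "E1:E3", "E1:E4"),
  Seq  = c("AAAA", "AAAA", "CCCC")
)

# Deduplicate on Seq only; .keep_all = TRUE retains the other columns,
# keeping the first row seen for each distinct Seq
dedup <- data %>% distinct(Seq, .keep_all = TRUE)
dedup
```

This returns two rows (the first occurrence of each sequence) with both Name and Seq intact.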

nbewdwxp3#

For the fastest option, you can try:

data[!kit::fduplicated(data$Seq), ]

Here are some benchmarks taken straight from the documentation:

x = sample(c(1:10,NA_integer_),1e8,TRUE) # 382 Mb
microbenchmark::microbenchmark(
  duplicated(x),
  fduplicated(x),
  times = 5L
)
# Unit: seconds
#           expr  min   lq  mean  median   uq   max neval
# duplicated(x)  2.21 2.21  2.48    2.21 2.22  3.55     5
# fduplicated(x) 0.38 0.39  0.45    0.48 0.49  0.50     5

kit also has a funique function.
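The two functions serve different purposes, which matters for this question: a short sketch (toy values, assuming the kit package is installed) contrasting them:

```r
library(kit)

# Toy vector of sequences with repeats (values are made up)
seqs <- c("AAAA", "AAAA", "CCCC", "GGGG", "CCCC")

# funique() returns the distinct values themselves,
# in order of first appearance
funique(seqs)

# fduplicated() returns a logical mask marking repeats, which is what
# you need to subset BOTH columns of the data frame at once,
# as in data[!fduplicated(data$Seq), ]
fduplicated(seqs)
```

So for the question's two-column frame, fduplicated() on the Seq column is the right tool, since funique() alone would drop the Name column.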

txu3uszq4#

kit::fduplicated seems to have a slight edge on dataframes with many unique rows (few duplicates), while dplyr::distinct seems to be more efficient on dataframes with many repeated rows (few unique rows):

# Make this example reproducible
set.seed(1)
n_samples <- 1e7

# Many unique rows case: Create a data frame with random integers between 1 and 1000
df <- as.data.frame(matrix(round(runif(n=n_samples, min=1, max=1000), 0), nrow=n_samples/2))
names(df) <- c('A', 'B')

microbenchmark::microbenchmark(
  un_1 <- df[!base::duplicated(df), ],
  un_2 <- df[!kit::fduplicated(df), ],
  un_3 <- dplyr::distinct(df),
  times = 5L
)

# Unit: milliseconds
#                                expr       min         lq       mean     median         uq        max neval
# un_1 <- df[!base::duplicated(df), ] 9817.6096 10173.5799 10721.0293 10772.2749 11073.4896 11768.1927     5
# un_2 <- df[!kit::fduplicated(df), ]  558.9923   618.1214   673.6863   628.9305   671.2307   891.1565     5
#         un_3 <- dplyr::distinct(df)  596.9396   640.1986   680.0212   643.6371   674.5296   844.8010     5

# Many repeated rows case: Create a data frame with random integers between 1 and 10
df <- as.data.frame(matrix(round(runif(n=n_samples, min=1, max=10), 0), nrow=n_samples/2))
names(df) <- c('A', 'B')

microbenchmark::microbenchmark(
  un_1 <- df[!base::duplicated(df), ],
  un_2 <- df[!kit::fduplicated(df), ],
  un_3 <- dplyr::distinct(df),
  times = 5L
)

#Unit: milliseconds
#                                 expr       min        lq     mean    median        uq       max neval
#  un_1 <- df[!base::duplicated(df), ] 8282.4409 8439.2752 8550.715 8457.0352 8704.7729 8870.0511     5
#  un_2 <- df[!kit::fduplicated(df), ]  130.8126  136.0880  244.323  168.6322  221.6255  564.4568     5
#          un_3 <- dplyr::distinct(df)  148.4684  160.8196  162.815  165.0068  169.5027  170.2775     5
