R H2O: unable to read a large model from disk via `h2o.loadModel`


  • Answered, per @Michal Kurka.
  • Updated 26 Jun 2017; see below.

I am unable to load a large GBM model that I saved in native H2O format (i.e., hex).

  • H2O v3.10.5.1
  • R v3.3.2
  • Linux 3.10.0-327.el7.x86_64 GNU/Linux

My goal is to eventually save this model as a MOJO.
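For context, once the model can be loaded, the MOJO export itself is a one-liner. This is a sketch, assuming the `h2o.saveMojo` R function available in the 3.10.x line (`h2o.download_mojo` is the related client-side download call):

```r
library(h2o)
h2o.init()

# Assumes the load below eventually succeeds and returns a model handle.
gbm <- h2o.loadModel('./my_model.hex')

# Write the MOJO zip next to the model; returns the path to the .zip file.
mojo_path <- h2o.saveMojo(gbm, path = './', force = TRUE)
print(mojo_path)
```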

This model is so large that I had to initialize H2O with min/max memory of 100G/200G before the H2O training would run successfully.

This is how I train the GBM model:

localH2O <- h2o.init(ip = 'localhost', port = port, nthreads = -1,
                     min_mem_size = '100G', max_mem_size = '200G')
iret <- h2o.gbm(x = predictors, y = response, training_frame = train.hex,
                validation_frame = holdout.hex, distribution="multinomial",
                ntrees = 3000, learn_rate = 0.01, max_depth = 5, nbins = numCats,
                model_id = basename_model)
gbm <- h2o.getModel(basename_model)
oPath <- h2o.saveModel(gbm, path = './', force = TRUE)

The training data contain 81,886 records with 1,413 columns. Of those columns, 19 are factors; the vast majority of the columns are 0/1.

$ wc -l training/*.txt
     81887 training/train.txt
     27294 training/holdout.txt

This is the saved model written to disk:

$ ls -l

total 37G
-rw-rw-r-- 1 bfo7328 37G Jun 22 19:57 my_model.hex

This is how I attempt to read the model from disk, using the same large memory allocation values of 100G/200G:

$ R

R version 3.3.2 (2016-10-31) -- "Sincere Pumpkin Patch"
Copyright (C) 2016 The R Foundation for Statistical Computing
Platform: x86_64-redhat-linux-gnu (64-bit)
> library(h2o)
> localH2O=h2o.init(ip='localhost', port=65432, nthreads=-1,
                  min_mem_size='100G', max_mem_size='200G')
H2O is not running yet, starting it now...
Note:  In case of errors look at the following log files:
    /tmp/RtmpVSwxXR/h2o_bfo7328_started_from_r.out
    /tmp/RtmpVSwxXR/h2o_bfo7328_started_from_r.err
openjdk version "1.8.0_121"
OpenJDK Runtime Environment (build 1.8.0_121-b13)
OpenJDK 64-Bit Server VM (build 25.121-b13, mixed mode)
Starting H2O JVM and connecting: .. Connection successful!
R is connected to the H2O cluster: 
    H2O cluster uptime:         3 seconds 550 milliseconds 
    H2O cluster version:        3.10.5.1 
    H2O cluster version age:    13 days  
    H2O cluster name:           H2O_started_from_R_bfo7328_kmt050 
    H2O cluster total nodes:    1 
    H2O cluster total memory:   177.78 GB 
    H2O cluster total cores:    64 
    H2O cluster allowed cores:  64 
    H2O cluster healthy:        TRUE 
    H2O Connection ip:          localhost 
    H2O Connection port:        65432 
    H2O Connection proxy:       NA 
    H2O Internal Security:      FALSE 
    R Version:                  R version 3.3.2 (2016-10-31) 

From /tmp/RtmpVSwxXR/h2o_bfo7328_started_from_r.out:

INFO: Processed H2O arguments: [-name, H2O_started_from_R_bfo7328_kmt050, -ip, localhost, -port, 65432, -ice_root, /tmp/RtmpVSwxXR]
INFO: Java availableProcessors: 64
INFO: Java heap totalMemory: 95.83 GB
INFO: Java heap maxMemory: 177.78 GB
INFO: Java version: Java 1.8.0_121 (from Oracle Corporation)
INFO: JVM launch parameters: [-Xms100G, -Xmx200G, -ea]
INFO: OS version: Linux 3.10.0-327.el7.x86_64 (amd64)
INFO: Machine physical memory: 1.476 TB

My call to h2o.loadModel:

if ( TRUE ) {
  now <- format(Sys.time(), "%a %b %d %Y %X")
  cat( sprintf( 'Begin %s\n', now ))
  model_filename <- './my_model.hex'
  in_model.hex <- h2o.loadModel( model_filename )
  now <- format(Sys.time(), "%a %b %d %Y %X")
  cat( sprintf( 'End   %s\n', now ))
}

From /tmp/RtmpVSwxXR/h2o_bfo7328_started_from_r.out:

INFO: GET /, parms: {}
INFO: GET /, parms: {}
INFO: GET /, parms: {}
INFO: GET /3/InitID, parms: {}
INFO: Locking cloud to new members, because water.api.schemas3.InitIDV3
INFO: POST /99/Models.bin/, parms: {dir=./my_model.hex}

After waiting an hour, I see these "out of memory" (OOM) error messages:

INFO: POST /99/Models.bin/, parms: {dir=./my_model.hex}
#e Thread WARN: Swapping!  GC CALLBACK, (K/V:24.86 GB + POJO:112.01 GB + FREE:40.90 GB == MEM_MAX:177.78 GB), desiredKV=22.22 GB OOM!
#e Thread WARN: Swapping!  GC CALLBACK, (K/V:26.31 GB + POJO:118.41 GB + FREE:33.06 GB == MEM_MAX:177.78 GB), desiredKV=22.22 GB OOM!
#e Thread WARN: Swapping!  GC CALLBACK, (K/V:27.36 GB + POJO:123.03 GB + FREE:27.39 GB == MEM_MAX:177.78 GB), desiredKV=22.22 GB OOM!
#e Thread WARN: Swapping!  GC CALLBACK, (K/V:28.21 GB + POJO:126.73 GB + FREE:22.83 GB == MEM_MAX:177.78 GB), desiredKV=22.22 GB OOM!

I would not expect that much memory to be needed just to read the model from disk.

How can I read this model from disk into memory? And once I do, can I save it as a MOJO?


Update 1: 26 Jun 2017

I just noticed that the on-disk size of the GBM model increased dramatically between versions of H2O:

H2O v3.10.2.1:
    -rw-rw-r-- 1 169M Jun 19 07:23 my_model.hex
H2O v3.10.5.1:
    -rw-rw-r-- 1  37G Jun 22 19:57 my_model.hex

Any ideas? Could this be the source of the problem?


Update 2: 28 Jun 2017, in response to @Michal Kurka's comments.

When I load the training data via fread, the class (type) of each column is correct:

  • 24 columns are "character";
  • 1,389 columns are "integer" (all 0/1 except one column);
  • 1,413 columns total.
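The per-column tally above can be reproduced directly from the fread result. A sketch, reusing `df.train` and the training file from the code earlier in the post:

```r
library(data.table)

# Load the training data and tally the class fread assigned to each column.
df.train <- fread('training/train.txt')
print(table(sapply(df.train, class)))
# Per the counts above, this should report: character 24, integer 1389
```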

I then convert the R-native data frame to an H2O data frame and manually factor 20 columns:

train.hex <- as.h2o(df.train, destination_frame = "train.hex")
length(factorThese)
[1] 20
train.hex[factorThese] <- as.factor(train.hex[factorThese])
str(train.hex)

A condensed version of the output of str(train.hex), showing only the 19 columns that are factors (1 of the factors is the response column):

 - attr(*, "nrow")= int 81886
 - attr(*, "ncol")= int 1413
 - attr(*, "types")=List of 1413
  ..$ : chr "enum" : Factor w/ 72 levels
  ..$ : chr "enum" : Factor w/ 77 levels
  ..$ : chr "enum" : Factor w/ 51 levels
  ..$ : chr "enum" : Factor w/ 4226 levels
  ..$ : chr "enum" : Factor w/ 4183 levels
  ..$ : chr "enum" : Factor w/ 3854 levels
  ..$ : chr "enum" : Factor w/ 3194 levels
  ..$ : chr "enum" : Factor w/ 735 levels
  ..$ : chr "enum" : Factor w/ 133 levels
  ..$ : chr "enum" : Factor w/ 16 levels
  ..$ : chr "enum" : Factor w/ 25 levels
  ..$ : chr "enum" : Factor w/ 647 levels
  ..$ : chr "enum" : Factor w/ 715 levels
  ..$ : chr "enum" : Factor w/ 679 levels
  ..$ : chr "enum" : Factor w/ 477 levels
  ..$ : chr "enum" : Factor w/ 645 levels
  ..$ : chr "enum" : Factor w/ 719 levels
  ..$ : chr "enum" : Factor w/ 678 levels
  ..$ : chr "enum" : Factor w/ 478 levels

The results above are exactly the same between v3.10.2.1 (written to disk: 169M) and v3.10.5.1 (written to disk: 37G).

The actual GBM training uses nbins <- 37:

numCats <- n_distinct(as.matrix(dplyr::select_(df.train,response)))
numCats
[1] 37
iret <- h2o.gbm(x = predictors, y = response, training_frame = train.hex,
          validation_frame = holdout.hex, distribution="multinomial",
          ntrees = 3000, learn_rate = 0.01, max_depth = 5, nbins = numCats,
          model_id = basename_model)

The difference in the size of the model (169M vs 37G) is surprising. Can you please make sure that H2O recognizes all your numeric columns as numeric, and not as categorical columns with very high cardinality?

Do you use automatic detection of column types, or do you specify the types manually?
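One way to rule out mis-typed columns is to bypass the parser's auto-detection and declare every column type at import time. A sketch only; the file path and the `types` vector below are illustrative (the real data interleave 24 character columns among 1,389 integer columns), not taken from the original post:

```r
library(h2o)
h2o.init()

# Hypothetical: force each column to a known type so that a numeric column
# cannot be silently parsed as a high-cardinality enum.
types <- c(rep("enum", 24), rep("numeric", 1389))   # 1,413 columns total
train.hex <- h2o.importFile('training/train.txt', col.types = as.list(types))
str(train.hex)   # confirm the "types" attribute matches expectations
```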

Latest update