How to allocate resources to containers when using Spring YARN



I am trying the Spring YARN examples on [github][1], which are built with Gradle. I successfully ran the custom service example on YARN.

But I do not know how to assign specific resources to a container. I tried overriding the onContainerAllocated and onContainerLaunched methods of StaticEventingAppmaster in my CustomAppmaster and setting the resource there, like this:

@Override
protected void onContainerAllocated(Container container) {
    //== allocate resource
    Resource resource = new ResourcePBImpl();
    resource.setMemory(1300);
    resource.setVirtualCores(7);
    container.setResource(resource);
    //====
    if (getMonitor() instanceof ContainerAware) {
        ((ContainerAware) getMonitor()).onContainer(Arrays.asList(container));
    }
    getLauncher().launchContainer(container, getCommands());
}

@Override
protected void onContainerLaunched(Container container) {
    //== allocate resource
    Resource resource = new ResourcePBImpl();
    resource.setMemory(1300);
    resource.setVirtualCores(7);
    container.setResource(resource);
    //====
    if (getMonitor() instanceof ContainerAware) {
        ((ContainerAware) getMonitor()).onContainer(Arrays.asList(container));
    }
}

In the logs it seems to work:

2014-12-30 20:06:35,524 DEBUG [AbstractPollingAllocator] - response has 1 new containers
2014-12-30 20:06:35,525 DEBUG [AbstractPollingAllocator] - new container: container_1419934738198_0004_01_000003
////  this line shows that memory is 1300 and vCores is 7
2014-12-30 20:06:35,525 DEBUG [DefaultContainerMonitor] - Reporting container=Container: [ContainerId: container_1419934738198_0004_01_000003, NodeId: yarn-master1:57799, NodeHttpAddress: yarn-master1:8042, Resource: <memory:1300, vCores:7>, Priority: 0, Token: Token { kind: ContainerToken, service: 192.168.0.170:57799 }, ]
2014-12-30 20:06:35,526 DEBUG [DefaultContainerMonitor] - State after reportContainer: DefaultContainerMonitor [allocated=[container_1419934738198_0004_01_000003,], running=[container_1419934738198_0004_01_000002,], completed=[], failed=[]]
////  this line shows that memory is 1300 and vCores is 7
2014-12-30 20:06:35,526 DEBUG [DefaultContainerLauncher] - Launching container: Container: [ContainerId: container_1419934738198_0004_01_000003, NodeId: yarn-master1:57799, NodeHttpAddress: yarn-master1:8042, Resource: <memory:1300, vCores:7>, Priority: 0, Token: Token { kind: ContainerToken, service: 192.168.0.170:57799 }, ] with commands $JAVA_HOME/bin/java,org.springframework.yarn.container.CommandLineContainerRunner,container-context.xml,yarnContainer,1><LOG_DIR>/Container.stdout,2><LOG_DIR>/Container.stderr

However, when I run an application whose resource usage goes beyond the limit, the log shows that the enforced memory is still 1 GB rather than 1300 MB, as below:

2014-12-30 20:07:05,929 DEBUG [AbstractPollingAllocator] - response has 1 completed containers
// The same container was killed because it went beyond the limits.
2014-12-30 20:07:05,932 DEBUG [AbstractPollingAllocator] - completed container: container_1419934738198_0004_01_000003 with status=ContainerStatus: [ContainerId: container_1419934738198_0004_01_000003, State: COMPLETE, Diagnostics: Container [pid=10587,containerID=container_1419934738198_0004_01_000003] is running beyond virtual memory limits. Current usage: 86.6 MB of 1 GB physical memory used; 31.8 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1419934738198_0004_01_000003 :
    |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
    |- 10587 32315 10587 10587 (bash) 2 3 12652544 353 /bin/bash -c /home/novelbio/software/jdk//bin/java org.springframework.yarn.container.CommandLineContainerRunner container-context.xml yarnContainer 1>/home/novelbio/software/hadoop/logs/userlogs/application_1419934738198_0004/container_1419934738198_0004_01_000003/Container.stdout 2>/home/novelbio/software/hadoop/logs/userlogs/application_1419934738198_0004/container_1419934738198_0004_01_000003/Container.stderr 
    |- 10761 10587 10587 10587 (java) 108 10 34135896064 21811 /home/novelbio/software/jdk//bin/java org.springframework.yarn.container.CommandLineContainerRunner container-context.xml yarnContainer 
, ExitStatus: 0, ]

The key point is this part of the log: "Current usage: 86.6 MB of 1 GB physical memory used". The limit is 1 GB, not 1.3 GB.

So I think my changes did not take effect. Can anyone tell me how to allocate resources correctly?

This is a problematic area in YARN, and I believe it will eventually get better as more and more non-MapReduce applications run on YARN. I believe your settings are applied correctly, but some of YARN's quirky behaviour is causing these issues. At the moment there is very little we can do from the application side, because most of the memory settings are enforced within YARN itself, and what comes from the application is merely a "request".
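
As a side note, the usual way to size containers in Spring YARN is to configure the allocator before the request goes out, rather than mutating the Container object after allocation. A minimal sketch of the appmaster context, assuming the XML namespace used by the samples (attribute names may vary between versions, and the values here are just your numbers plugged in):

<yarn:master>
    <!-- hypothetical values: request 1300 MB and 7 vcores per container;
         these end up in the ResourceRequest sent to the ResourceManager -->
    <yarn:container-allocator memory="1300" virtualcores="7" priority="0"/>
</yarn:master>

Setting a Resource on the Container after it has been allocated only changes the local Java object; the NodeManager keeps enforcing the size that was actually granted, which is why your log still shows a 1 GB limit.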

Spring XD on YARN relies on exactly the same mechanics, so it is worth checking what we wrote into its documentation: https://github.com/spring-projects/spring-xd/wiki/Running-on-YARN (see the section "Configuring YARN memory reservations").
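
Your diagnostics also line up with YARN's defaults: the "2.1 GB virtual memory" limit is the default vmem-pmem ratio of 2.1 applied to a 1 GB container. The enforcement is controlled by cluster-side properties in yarn-site.xml; the values below are the usual defaults, shown only as a sketch of where to look:

<property>
    <!-- the scheduler rounds memory requests up to a multiple of this (default 1024 MB) -->
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>1024</value>
</property>
<property>
    <!-- virtual memory allowed per unit of physical memory (default 2.1) -->
    <name>yarn.nodemanager.vmem-pmem-ratio</name>
    <value>2.1</value>
</property>
<property>
    <!-- the check that killed your container; can be disabled if vmem spikes are expected -->
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>true</value>
</property>

Note that even a correctly requested 1300 MB would be normalized upwards by the scheduler (to 2048 MB with the default minimum allocation), so the granted size can differ from the number you ask for.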

I will try to make sure the same information also goes into our Spring Hadoop and Spring YARN reference documentation.
