Kafka 0.8 and memory - There is insufficient memory for the Java Runtime Environment to continue



I'm using a DigitalOcean instance with 512 MB of RAM and I get the error below from Kafka. I'm not a proficient Java developer. How can I tune Kafka to get by on this small amount of RAM? This is a development box, and I don't want to pay for a larger machine.

#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 986513408 bytes for committing reserved memory.
# An error report file with more information is saved as:
# //hs_err_pid6500.log
OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000bad30000, 986513408, 0) failed; error='Cannot allocate memory' (errno=12)

You can adjust the JVM heap size by editing kafka-server-start.sh, zookeeper-server-start.sh, and so on:

export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G"

The -Xms parameter specifies the minimum heap size. To get your server to at least start up, try changing it to use less memory. Given that you only have 512M, you should also change the maximum heap size (-Xmx):

export KAFKA_HEAP_OPTS="-Xmx256M -Xms128M"

I'm not sure what Kafka's minimum memory requirements are with the default configuration - maybe you'll also need to adjust the message size in Kafka to get it running.
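
Putting the two exports together, here is a minimal sketch of launching ZooKeeper and the broker with reduced heaps on a 512 MB box. It assumes a standard Kafka tarball layout and that both start scripts honor KAFKA_HEAP_OPTS when it is already set (which the stock scripts do):

# start ZooKeeper with a small heap, in the background
export KAFKA_HEAP_OPTS="-Xmx128M -Xms64M"
bin/zookeeper-server-start.sh config/zookeeper.properties &

# start the Kafka broker with a slightly larger, but still small, heap
export KAFKA_HEAP_OPTS="-Xmx256M -Xms128M"
bin/kafka-server-start.sh config/server.properties
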

Area: HotSpot / gc

Summary

Crashes due to failure to allocate large pages.

On Linux, failures when allocating large pages can lead to crashes. When running JDK 7u51 or later versions, the issue can be recognized in two ways:

- Before the crash happens, one or more lines similar to the following example will have been printed to the log:

  os::commit_memory(0x00000006b1600000, 352321536, 2097152, 0) failed;
  error='Cannot allocate memory' (errno=12); Cannot allocate large pages,
  falling back to regular pages

- If a file named hs_err is generated, it will contain a line similar to the following example:

  Large page allocation failures have occurred 3 times

The problem can be avoided by running with large page support turned off, for example, by passing the "-XX:-UseLargePages" option to the java binary.
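
For Kafka specifically, a hedged way to apply that flag is through KAFKA_JVM_PERFORMANCE_OPTS, which the stock kafka-run-class.sh passes to the JVM. Note that setting this variable replaces the script's default performance flags, so treat this as a sketch rather than a drop-in recommendation:

# disable large pages for the Kafka JVM (overrides the script's default performance flags)
export KAFKA_JVM_PERFORMANCE_OPTS="-XX:-UseLargePages"
bin/kafka-server-start.sh config/server.properties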

Add the JAVA_OPTS environment variable.

On Linux, edit the ~/.bashrc file and then source it.

export JAVA_OPTS="$JAVA_OPTS -Xms128M -Xmx128M"
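
For example, to append the line and reload the shell configuration in one go:

# add the setting to ~/.bashrc and pick it up in the current shell
echo 'export JAVA_OPTS="$JAVA_OPTS -Xms128M -Xmx128M"' >> ~/.bashrc
source ~/.bashrc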

On Windows, add a system variable:

JAVA_OPTS "-Xms128M -Xmx128M"

Then restart.

If that doesn't work, you should edit several files, e.g. bin/kafka-server-start.sh and config/server.properties, and reduce the numbers in those files.
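
For reference, the block to look for in bin/kafka-server-start.sh (and similarly in bin/zookeeper-server-start.sh) looks roughly like this; the exact contents vary by Kafka version, and lowering the values here has the same effect as exporting KAFKA_HEAP_OPTS yourself:

# default heap settings in the start script; shrink these values
if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
    export KAFKA_HEAP_OPTS="-Xmx256M -Xms128M"
fi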

In my case, I had been doing Android development all night with Android Studio and a virtual machine running, and I hit this error when I tried to start the Apache Kafka server. I rebooted my laptop to free up memory and then restarted the Kafka server. Hope this helps someone!

In your environment path, change the Java path to point at a 64-bit JDK instead of a 32-bit one. This solved my problem: I have 64 GB of RAM but was using a 32-bit JVM, and changing the path to the 64-bit JDK (64-bit JVM) fixed it.
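
A quick way to check which JVM your path resolves to (the exact wording varies by vendor and version):

java -version
# a 64-bit JVM reports something like "64-Bit Server VM" in the last line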

The answers here can also be used to solve the same problem when running Kafka in a Docker container.

Just add the following environment variable to your docker-compose file (or run script):

KAFKA_HEAP_OPTS: "-Xmx512M -Xms256M"

Of course, you should fine-tune the values for your host machine.
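
For the "run script" case, a minimal sketch with plain docker run; the image name (wurstmeister/kafka) and the absence of other flags are assumptions here, so adapt it to your setup:

# pass the heap options into the container's environment
docker run -d --name kafka \
  -e KAFKA_HEAP_OPTS="-Xmx512M -Xms256M" \
  wurstmeister/kafka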

Ran into the same error on AWS. I figured I needed at least a t2.medium to get my broker and ZooKeeper working.
