How to reduce RedisGraph memory usage?



We want to store 500k nodes (Tweets, Users, Hashtags) with edges (wrote, mentioned, linked) in a RedisGraph. The redisgraph-size-calculator estimates well under 1 GB of memory for all of it, but with only about 5,000 nodes RedisGraph already uses more than 2 GB of RAM (according to RedisInsight). So we run out of memory, and on commit() the Python client throws an exception ("Connection closed by server", sometimes "MISCONF Redis is configured to save RDB snapshots, but it is currently not able to persist on disk. Commands that may modify the data set are disabled, because this instance is configured to report errors during writes if RDB snapshotting fails"). In addition, Redis only handles a few commands per second, which is very slow. We have tried committing at different places and running Redis on a machine with more RAM. How can we get the whole graph stored?

This happens with the default Docker container: docker run -p 6379:6379 -it --rm redislabs/redisgraph
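
For reference, the memory figure and the snapshot setting behind the MISCONF error can also be checked from the same Python connection. This is only a diagnostic sketch using standard redis-py calls (info() and config_get()), nothing RedisGraph-specific:

import redis

db_connection = redis.Redis(host='localhost', port=6379)
# Memory actually used by the server (the same number RedisInsight reports)
print(db_connection.info('memory')['used_memory_human'])
# The setting that turns a failed RDB snapshot into the MISCONF write error
print(db_connection.config_get('stop-writes-on-bgsave-error'))
# maxmemory of 0 means no memory limit is configured
print(db_connection.config_get('maxmemory'))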

import redis
from redisgraph import Node, Graph

db_connection = redis.Redis(host='localhost', port=6379)
graph = Graph('Twitter', db_connection)
for i in range(100000):
    graph.add_node(Node(label='user', properties={'id': i, 'name': str(i)}))
    graph.commit()  # Raises the exception once i is roughly 5000 (can vary with system RAM)
db_connection.close()

Tried:

import redis
from redisgraph import Node, Graph

db_connection = redis.Redis(host='localhost', port=6379)
graph = Graph('Twitter', db_connection)
for i in range(100000):
    graph.add_node(Node(label='user', properties={'id': i, 'name': str(i)}))
    if i % 5000 == 0:
        graph.commit()  # Raises the exception once i is roughly 5000 (can vary with system RAM)
db_connection.close()

Committing 5,000 nodes at a time like this is much faster.
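
One detail worth checking against the installed redisgraph-py version (this is an assumption, not something confirmed above): commit() sends the pending nodes but may not clear the client-side buffers, so each batch could re-create the nodes of all earlier batches. flush() commits and then resets the pending nodes and edges, which keeps every batch at exactly 5,000 new nodes. A sketch of the batched loop written with flush(), including a final flush for the last partial batch:

import redis
from redisgraph import Node, Graph

db_connection = redis.Redis(host='localhost', port=6379)
graph = Graph('Twitter', db_connection)
for i in range(100000):
    graph.add_node(Node(label='user', properties={'id': i, 'name': str(i)}))
    if i % 5000 == 4999:
        graph.flush()  # send the 5,000 pending nodes, then clear the local node/edge buffers
if graph.nodes or graph.edges:
    graph.flush()      # send whatever is left over from the last partial batch
db_connection.close()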

Another way to load data like this is to do it all in a single Cypher query:
UNWIND range(1, 100000) AS x CREATE (:user { id: x, name: toString(x) })
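
The same query can be sent from the Python client used above; a minimal sketch using redisgraph-py's query() method (the graph name 'Twitter' just mirrors the earlier snippets):

import redis
from redisgraph import Graph

db_connection = redis.Redis(host='localhost', port=6379)
graph = Graph('Twitter', db_connection)
# One round trip: the server expands the range and creates all 100,000 nodes itself
graph.query("UNWIND range(1, 100000) AS x CREATE (:user { id: x, name: toString(x) })")
db_connection.close()

For real Tweet/User payloads the values would still have to reach the server (for example via the params argument of query()), but for bulk creation this avoids building a huge CREATE statement on the client.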
