I'm trying to load test my code with 50 parallel read requests. I'm querying data based on several secondary indexes I created. The code looks like this:
// Assumes the usual setup, e.g.:
//   const Aerospike = require('aerospike');
//   const predexp = Aerospike.predexp;
//   const aeroClient = await Aerospike.connect(/* config */);
const fetchRecords = async (predicates) => {
  const query = aeroClient.query('test', 'mySet');
  const filters = [
    predexp.stringValue(predicates.load),
    predexp.stringBin('load'),
    predexp.stringEqual(),
    predexp.stringValue(predicates.disc),
    predexp.stringBin('disc'),
    predexp.stringEqual(),
    predexp.integerBin('date1'),
    predexp.integerValue(predicates.date2),
    predexp.integerGreaterEq(),
    predexp.integerBin('date2'),
    predexp.integerValue(predicates.date2),
    predexp.integerLessEq(),
    predexp.stringValue(predicates.column3),
    predexp.stringBin('column3'),
    predexp.stringEqual(),
    predexp.and(5),
  ];
  query.where(filters);
  const records = [];
  const stream = query.foreach();
  // Wrap all stream events in the promise: rejecting here (instead of
  // throwing inside an event handler, which nothing catches) lets the
  // caller handle stream errors via try/catch or .catch().
  await new Promise((resolve, reject) => {
    stream.on('data', (record) => records.push(record));
    stream.on('error', (error) => reject(error));
    stream.on('end', () => resolve());
  });
  return records;
};
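For context, the 50 parallel requests are fired roughly like this, a minimal sketch of the concurrency pattern where `fetchRecords` is stubbed out (the stub and the fixed predicate values are illustrative, not part of the original code, since the real function needs a live Aerospike cluster):

```javascript
// Stub standing in for the real fetchRecords above.
const fetchRecords = async (predicates) => {
  // Pretend the query resolved with one matching record.
  return [{ bins: { load: predicates.load } }];
};

const loadTest = async (concurrency) => {
  // Fire `concurrency` queries at once; any single failure rejects the batch.
  const requests = Array.from({ length: concurrency }, () =>
    fetchRecords({ load: 'a', disc: 'b', date2: 1600000000, column3: 'c' })
  );
  const results = await Promise.all(requests);
  return results.length;
};

loadTest(50).then((n) => console.log(`completed ${n} requests`));
// logs "completed 50 requests"
```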
This fails, and I get the following error:
AerospikeError: Operation not allowed at this time.
at Function.fromASError (/Users/.../node_modules/aerospike/lib/error.js:113:21)
at QueryCommand.convertError (/Users/.../node_modules/aerospike/lib/commands/command.js:91:27)
at QueryCommand.convertResponse (/Users/.../node_modules/aerospike/lib/commands/command.js:101:24)
at asCallback (/Users/.../node_modules/aerospike/lib/commands/command.js:163:24)
The contents of my aerospike.conf:
service {
    user root
    group root
    paxos-single-replica-limit 1 # Number of nodes where the replica count is automatically reduced to 1.
    pidfile /var/run/aerospike/asd.pid
    # service-threads 6 # cpu x 5 in 4.7
    # transaction-queues 6 # obsolete in 4.7
    # transaction-threads-per-queue 4 # obsolete in 4.7
    proto-fd-max 15000
}
<...trimmed section>
namespace test {
    replication-factor 2
    memory-size 1G
    default-ttl 30d # 30 days, use 0 to never expire/evict.
    nsup-period 120
    # storage-engine memory
    # To use file storage backing, comment out the line above and use the
    # following lines instead.
    storage-engine device {
        file /opt/aerospike/data/test.dat
        filesize 4G
        data-in-memory true # Store data in memory in addition to file.
    }
}
From a similar question, I found that this is caused by low system configuration. How do I modify these settings? Also, I would expect 50 requests to work, since I'm able to insert around 12K records/sec.
I believe these are scans, not individual reads. To increase the scan thread limit:
asinfo -v "set-config:context=service;scan-threads-limit=128"
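Alternatively (or in addition), you can throttle on the client side so that only a bounded number of queries are in flight at once, instead of firing all 50 simultaneously. A minimal sketch of chunked execution, with the query function stubbed out (`runQuery`, `runThrottled`, and the batch size of 10 are illustrative names and values, not from the original code):

```javascript
// Stub for the real query; in practice this would be fetchRecords(predicates).
const runQuery = async (i) => i * 2;

// Run tasks in batches of `limit`, so at most `limit` queries are in flight.
const runThrottled = async (tasks, limit) => {
  const results = [];
  for (let i = 0; i < tasks.length; i += limit) {
    const batch = tasks.slice(i, i + limit).map((task) => task());
    results.push(...await Promise.all(batch));
  }
  return results;
};

const tasks = Array.from({ length: 50 }, (_, i) => () => runQuery(i));
runThrottled(tasks, 10).then((results) => console.log(results.length));
// logs 50
```

This trades some throughput for staying under the server's concurrent-scan limit, which avoids the "Operation not allowed at this time" rejection without touching server config.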