Node.js HTTP with Redis only gets 6000 req/s



I tested the node_redis benchmark, and it shows INCR at more than 100000 ops/s:

$ node multi_bench.js   
Client count: 5, node version: 0.10.15, server version: 2.6.4, parser: hiredis  
INCR,     1/5 min/max/avg/p95:    0/   2/   0.06/   1.00   1233ms total, 16220.60 ops/sec  
INCR,    50/5 min/max/avg/p95:    0/   4/   1.61/   3.00    648ms total, 30864.20 ops/sec  
INCR,   200/5 min/max/avg/p95:    0/  14/   5.28/   9.00    529ms total, 37807.18 ops/sec    
INCR, 20000/5 min/max/avg/p95:   42/ 508/ 302.22/ 467.00    519ms total, 38535.65 ops/sec

Then I put Redis behind an HTTP server in Node.js:

var http = require("http"),
    server,
    redis_client = require("redis").createClient();

server = http.createServer(function (request, response) {
    response.writeHead(200, {
        "Content-Type": "text/plain"
    });

    // each request issues one INCR over the single shared Redis connection
    redis_client.incr("requests", function (err, reply) {
        response.write(reply + '\n');
        response.end();
    });
}).listen(6666);

server.on('error', function (err) {
    console.log(err);
    process.exit(1);
});

Testing it with ab, I only get 6000 req/s:

$ ab -n 10000 -c 100 localhost:6666/
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking localhost (be patient)
Completed 1000 requests
Completed 2000 requests
Completed 3000 requests
Completed 4000 requests
Completed 5000 requests
Completed 6000 requests
Completed 7000 requests
Completed 8000 requests
Completed 9000 requests
Completed 10000 requests
Finished 10000 requests

Server Software:        
Server Hostname:        localhost
Server Port:            6666
Document Path:          /
Document Length:        7 bytes
Concurrency Level:      100
Time taken for tests:   1.667 seconds
Complete requests:      10000
Failed requests:        0
Write errors:           0
Total transferred:      1080000 bytes
HTML transferred:       70000 bytes
Requests per second:    6000.38 [#/sec] (mean)
Time per request:       16.666 [ms] (mean)
Time per request:       0.167 [ms] (mean, across all concurrent requests)
Transfer rate:          632.85 [Kbytes/sec] received
Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.3      0       2
Processing:    12   16   3.2     15      37
Waiting:       12   16   3.1     15      37
Total:         13   17   3.2     16      37
Percentage of the requests served within a certain time (ms)
  50%     16
  66%     16
  75%     16
  80%     17
  90%     20
  95%     23
  98%     28
  99%     34
 100%     37 (longest request)

Last time I tested a plain "hello world" server, it reached 7K req/s:

Requests per second:    7201.18 [#/sec] (mean)

How can I profile this and find out where Redis loses performance once it sits behind the HTTP server?

I think you misinterpreted the result of the multi_bench benchmark.

First, this benchmark spreads the load over 5 connections, while you have only one in your node.js program. More connections basically means more communication buffers (allocated on a per-socket basis) and better performance.
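For illustration, here is a minimal sketch of what "more connections" could look like on the node.js side: a small hand-rolled pool of node_redis clients, picked round-robin in the request handler. The pool size of 5 (matching multi_bench's client count) and the round-robin scheme are assumptions for the example, not something node_redis provides out of the box.

var http = require("http"),
    redis = require("redis");

// assumption for the example: 5 independent connections, like multi_bench uses
var POOL_SIZE = 5,
    pool = [],
    next = 0,
    i;

for (i = 0; i < POOL_SIZE; i++) {
    pool.push(redis.createClient());
}

http.createServer(function (request, response) {
    // round-robin over the pool so the load is spread across several sockets
    var client = pool[next];
    next = (next + 1) % POOL_SIZE;

    response.writeHead(200, { "Content-Type": "text/plain" });
    client.incr("requests", function (err, reply) {
        response.write(reply + '\n');
        response.end();
    });
}).listen(6666);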

Then, while the Redis server is able to sustain 100K op/s (provided you open several connections and/or use pipelining), node.js and node_redis cannot reach that level. The result of your own multi_bench run shows that with no pipelining, only 16K op/s are achieved:

Client count: 5, node version: 0.10.15, server version: 2.6.4, parser: hiredis  
INCR,     1/5 min/max/avg/p95:    0/   2/   0.06/   1.00   1233ms total, 16220.60 ops/sec  

This result means that with no pipelining and with 5 concurrent connections, node_redis can process 16K op/s globally. Note that measuring a throughput of 16K op/s while sending only 20K operations (the default value of multi_bench) is not very accurate; you should increase num_requests for better accuracy.
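To see what pipelining buys you on a single connection, here is a minimal standalone sketch (not part of multi_bench): it fires a batch of INCR commands without waiting for individual replies, so node_redis writes them back-to-back on the socket and Redis answers them in one stream. The batch size of 20000 and the key name "pipeline_test" are just assumptions for the example.

var redis = require("redis"),
    client = redis.createClient();

// assumption: 20000 commands, mirroring multi_bench's default num_requests
var TOTAL = 20000,
    done = 0,
    start = Date.now(),
    i;

for (i = 0; i < TOTAL; i++) {
    // commands are queued immediately and written back-to-back on the socket,
    // instead of paying one full round trip per command
    client.incr("pipeline_test", function (err, reply) {
        done += 1;
        if (done === TOTAL) {
            var ms = Date.now() - start;
            console.log(TOTAL + " INCR in " + ms + " ms, ~" +
                Math.round(TOTAL / (ms / 1000)) + " ops/sec");
            client.quit();
        }
    });
}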

The result of your second benchmark is not surprising: you add an HTTP layer (which is more expensive to parse than the Redis protocol itself), use only 1 connection to Redis while ab tries to open 100 concurrent connections to node.js, and you end up with 6K op/s, i.e. a throughput overhead of 1.2K op/s compared to the "Hello world" HTTP server. What did you expect?
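Put in per-request terms (the numbers below are simply derived from your two ab runs), the overhead is small:

"Hello world":  7201 req/s  ->  1000 / 7201 ≈ 0.139 ms per request
with Redis:     6000 req/s  ->  1000 / 6000 ≈ 0.167 ms per request
difference:                     ≈ 0.028 ms (about 28 µs) extra per request

That extra ~28 µs per request is the amortized cost of funneling every INCR through the single Redis connection on top of the HTTP handling.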

You could try to squeeze out a bit more performance by leveraging the node.js cluster capabilities, as described in this answer.
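A minimal sketch of that idea, assuming one Redis connection per worker and one worker per CPU (the worker count is just a common default, not a tuned value):

var cluster = require("cluster"),
    http = require("http"),
    numCPUs = require("os").cpus().length;

if (cluster.isMaster) {
    // fork one worker per CPU; each worker runs its own event loop
    for (var i = 0; i < numCPUs; i++) {
        cluster.fork();
    }
} else {
    // each worker opens its own connection to Redis
    var redis_client = require("redis").createClient();

    http.createServer(function (request, response) {
        response.writeHead(200, { "Content-Type": "text/plain" });
        redis_client.incr("requests", function (err, reply) {
            response.write(reply + '\n');
            response.end();
        });
    }).listen(6666);
}

With this layout the workers share port 6666 (cluster distributes the incoming connections), and the Redis side naturally gets several connections instead of one.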
