Redisson client exception in Netty thread



My application is deployed in OpenShift, and it uses Redis as well. While it works most of the time, I still face an intermittent issue related to Redisson. When the application URL is hit, the error trace is as follows:

org.redisson.client.WriteRedisConnectionException: Unable to send command! Node source: NodeSource [slot=null, addr=null, redisClient=null, redirect=null, entry=MasterSlaveEntry [masterEntry=[freeSubscribeConnectionsAmount=0, freeSubscribeConnectionsCounter=value:49:queue:0, freeConnectionsAmount=31, freeConnectionsCounter=value:63:queue:0, freezed=false, freezeReason=null, client=[addr=redis://webapp-sessionstore.9m6hkf.ng.0001.apse2.cache.amazonaws.com:6379], nodeType=MASTER, firstFail=0]]], connection: RedisConnection@1568202974 [redisClient=[addr=redis://webapp-sessionstore.9m6hkf.ng.0001.apse2.cache.amazonaws.com:6379], channel=[id: 0xceaf7022, L:/10.103.34.74:32826 ! R:webapp-sessionstore.9m6hkf.ng.0001.apse2.cache.amazonaws.com/10.112.17.104:6379], currentCommand=CommandData [promise=RedissonPromise [promise=ImmediateEventExecutor$ImmediatePromise@68b1bc80(failure: java.util.concurrent.CancellationException)], command=(HMSET), params=[redisson:tomcat_session:306A0C0325AD2189A7FDDB695D0755D2, PooledUnsafeDirectByteBuf(freed), PooledUnsafeDirectByteBuf(freed), PooledUnsafeDirectByteBuf(freed), PooledUnsafeDirectByteBuf(freed), PooledUnsafeDirectByteBuf(freed), PooledUnsafeDirectByteBuf(freed), PooledUnsafeDirectByteBuf(freed), PooledUnsafeDirectByteBuf(freed), PooledUnsafeDirectByteBuf(freed), ...], codec=org.redisson.codec.CompositeCodec@25e7216]], command: (HMSET), params: [redisson:tomcat_session:77C4BB9FC4252BFC2C8411F3A4DBB6C9, PooledUnsafeDirectByteBuf(ridx: 0, widx: 24, cap: 256), PooledUnsafeDirectByteBuf(ridx: 0, widx: 10, cap: 256), PooledUnsafeDirectByteBuf(ridx: 0, widx: 24, cap: 256), PooledUnsafeDirectByteBuf(ridx: 0, widx: 10, cap: 256)] after 3 retry attempts
org.redisson.command.CommandAsyncService.checkWriteFuture(CommandAsyncService.java:872)
org.redisson.command.CommandAsyncService.access$000(CommandAsyncService.java:97)
org.redisson.command.CommandAsyncService$7.operationComplete(CommandAsyncService.java:791)
org.redisson.command.CommandAsyncService$7.operationComplete(CommandAsyncService.java:788)
io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:502)
io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:476)
io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:415)
io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:540)
io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:533)
io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:114)
io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetFailure(AbstractChannel.java:1018)
io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:874)
io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1365)
io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:716)
io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:708)
io.netty.channel.AbstractChannelHandlerContext.access$1700(AbstractChannelHandlerContext.java:56)
io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.write(AbstractChannelHandlerContext.java:1102)
io.netty.channel.AbstractChannelHandlerContext$WriteAndFlushTask.write(AbstractChannelHandlerContext.java:1149)
io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.run(AbstractChannelHandlerContext.java:1073)
io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:405)
io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500)
io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:906)
io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
java.lang.Thread.run(Thread.java:748)
Root Cause
io.netty.channel.ExtendedClosedChannelException
io.netty.channel.AbstractChannel$AbstractUnsafe.write(...)(Unknown Source)
Note The full stack trace of the root cause is available in the server logs.

This could be because the load on the Redis cluster has increased, since it is shared among many applications. As a workaround, every time I saw this issue I redeployed the application, which reset the connections and resolved the problem. As I said, that is only a workaround; the permanent solution would be a dedicated Redis cluster for your application, which again depends on the architecture and size of your application.

You should also update the Redisson version to 3.16.3, which reports this failure with a clearer exception. As its message in the snippet below shows, the fix it points to is increasing the connection pool size.

// Excerpt from Redisson's command executor: exception, source, command,
// params, attempt, attempts and timeout are instance state of the
// surrounding class, not locals of this method.
private void checkWriteFuture(ChannelFuture future, RPromise<R> attemptPromise, RedisConnection connection) {
    if (future.isCancelled() || attemptPromise.isDone()) {
        return;
    }

    if (!future.isSuccess()) {
        exception = new WriteRedisConnectionException(
                "Unable to write command into connection! Increase connection pool size. Node source: " + source + ", connection: " + connection +
                ", command: " + LogHelper.toString(command, params)
                + " after " + attempt + " retry attempts", future.cause());
        if (attempt == attempts) {
            // Only fail the promise once the retry budget is exhausted.
            attemptPromise.tryFailure(exception);
        }
        return;
    }

    timeout.cancel();
    scheduleResponseTimeout(attemptPromise, connection);
}
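
Here is a minimal sketch of raising the pool size through Redisson's Java config API, assuming a single-server setup; the address and pool-size values below are illustrative, not taken from the question:

import org.redisson.Redisson;
import org.redisson.api.RedissonClient;
import org.redisson.config.Config;

public class RedissonPoolConfig {
    public static void main(String[] args) {
        Config config = new Config();
        config.useSingleServer()
              // Hypothetical endpoint; replace with your own Redis/ElastiCache address.
              .setAddress("redis://webapp-sessionstore.example.com:6379")
              // Recent Redisson defaults are 64 / 24; raise them if writes fail under load.
              .setConnectionPoolSize(128)
              .setConnectionMinimumIdleSize(32);

        RedissonClient redisson = Redisson.create(config);
        // ... use the client, e.g. as the backing store for Tomcat sessions ...
        redisson.shutdown();
    }
}

If Redisson is instead configured through the Tomcat session manager's YAML/JSON config file, the same knobs appear as connectionPoolSize and connectionMinimumIdleSize under singleServerConfig.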
