HTTP chunked file transfer in Netty completes immediately



I'm writing an HTTP file server with Netty that serves file downloads to multiple concurrent clients. With only HttpServerCodec() in the pipeline everything worked perfectly, until large files triggered an OutOfMemoryError: Direct buffer memory. So I switched to the ChunkedWriteHandler() approach.

The problem is that the browser (the new Edge) either says "Couldn't download the file" or downloads a zero-byte file. I have no idea what's wrong and need help.

The log shows the transfer completing immediately, with no time spent at all:

[main] INFO Main - Pick up path C:/Temp
[main] INFO dd.oliver.htp.HtpServer - Server start at 2333
[nioEventLoopGroup-3-1] INFO dd.oliver.htp.RequestHandler - Request C:/Temp/d.test.oliverdd
[nioEventLoopGroup-3-1] INFO dd.oliver.htp.RequestHandler - [id: 0xe5ce2ec6, L:/0:0:0:0:0:0:0:1:2333 - R:/0:0:0:0:0:0:0:1:63040] Transfer complete.

Here is my code, which is based on the Netty examples.

This is the ChannelInitializer:

class HtpChannelInitializer(val basePath: String) : ChannelInitializer<SocketChannel>() {
    override fun initChannel(ch: SocketChannel) {
        ch.pipeline().addLast("HttpCodec", HttpServerCodec())
        ch.pipeline().addLast("HttpAggregator", HttpObjectAggregator(65536))
        ch.pipeline().addLast("HttpChunked", ChunkedWriteHandler())
        ch.pipeline().addLast("RequestHandle", RequestHandler(basePath))
    }
}

And this is the RequestHandler:

import io.netty.channel.*
import io.netty.handler.codec.http.*
import io.netty.handler.codec.http.HttpVersion.HTTP_1_0
import io.netty.handler.stream.ChunkedFile
import org.slf4j.LoggerFactory
import java.io.File
import java.io.RandomAccessFile

private val logger = LoggerFactory.getLogger(RequestHandler::class.java)
class RequestHandler(val basePath: String) : SimpleChannelInboundHandler<HttpRequest>() {
    ...
    override fun channelReadComplete(ctx: ChannelHandlerContext) {
        ctx.flush()
    }

    override fun channelRead0(ctx: ChannelHandlerContext, msg: HttpRequest) {
        val path = basePath + msg.uri() // msg.uri() example: / or /a/b or /a/b/c.txt
        logger.info("Request $path")
        val file = File(path)
        if (file.isFile) {
            val rfile = RandomAccessFile(file, "r")
            // Line
            val response = DefaultFullHttpResponse(HttpVersion.HTTP_1_1, HttpResponseStatus.OK)
            // Headers
            response.headers().set("Accept-Ranges", "bytes")
            response.headers().set("Content-Disposition", "attachment; filename=\"${file.name}\"")
            response.headers().set("Content-Type", "application/octet-stream")
            response.headers().set("Content-Length", "${rfile.length()}")
            if (!(msg.headers().contains("Connection") && msg.headers().get("Connection") == "keep-alive")) {
                response.headers().set(HttpHeaderNames.CONNECTION, HttpHeaderValues.CLOSE)
            } else if (msg.protocolVersion() == HTTP_1_0) {
                response.headers().set(HttpHeaderNames.CONNECTION, HttpHeaderValues.KEEP_ALIVE)
            }
            // Content
            // response.content().writeBytes(rfile.channel, 0L, rfile.length().toInt())
            // ctx.writeAndFlush(response).addListener(ChannelFutureListener.CLOSE)
            ctx.write(response)
            val sendFileFuture = ctx.write(
                HttpChunkedInput(ChunkedFile(rfile, 0, rfile.length(), 8192)),
                ctx.newProgressivePromise()
            )
            sendFileFuture.addListener(object : ChannelProgressiveFutureListener {
                override fun operationProgressed(
                    future: ChannelProgressiveFuture,
                    progress: Long,
                    total: Long
                ) {
                    if (total < 0) { // total unknown
                        logger.info(future.channel().toString() + " Transfer progress: " + progress)
                    } else {
                        logger.info(future.channel().toString() + " Transfer progress: " + progress + " / " + total)
                    }
                }

                override fun operationComplete(future: ChannelProgressiveFuture) {
                    logger.info(future.channel().toString() + " Transfer complete.")
                }
            })
            if (!(msg.headers().contains("Connection") && msg.headers().get("Connection") == "close")) {
                sendFileFuture.addListener(ChannelFutureListener.CLOSE)
            }
        }
    }
    ...
}

Got it! This article ran into the same problem. It was DefaultFullHttpResponse that caused the error; after changing it to DefaultHttpResponse, everything works correctly.
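To spell out why: DefaultFullHttpResponse is a complete HTTP message (it implements LastHttpContent), so writing it with an empty content buffer tells HttpServerCodec the response is already finished, and the HttpChunkedInput written afterwards is never encoded as the body — hence the instant "Transfer complete" and the zero-byte download. A minimal sketch of the fix, keeping the rest of the handler above unchanged:

```kotlin
// Header-only response: DefaultHttpResponse carries just the status line
// and headers, leaving the body to be written separately.
val response = DefaultHttpResponse(HttpVersion.HTTP_1_1, HttpResponseStatus.OK)
response.headers().set("Content-Disposition", "attachment; filename=\"${file.name}\"")
response.headers().set("Content-Type", "application/octet-stream")
response.headers().set("Content-Length", "${rfile.length()}")

ctx.write(response) // headers only; the message is not complete yet
// ChunkedWriteHandler streams the file after the headers, 8 KiB at a time;
// HttpChunkedInput wraps the chunks as HttpContent and appends LastHttpContent.
val sendFileFuture = ctx.write(
    HttpChunkedInput(ChunkedFile(rfile, 0, rfile.length(), 8192)),
    ctx.newProgressivePromise()
)
ctx.flush()
```

The progress listener and the Connection handling from the original handler attach to sendFileFuture exactly as before; only the response type changes.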
