Akka Streams HTTP rate limiting



One stage of my computation graph is a flow of type Flow[Seq[Request], Seq[Response], NotUsed]. Obviously, this stage should assign a response to every request and emit the seq once all the requests have been resolved.

Now, the underlying API has a strict rate-limiting policy, so I can only fire one request per second. If I had a Flow of single Requests, I could zip it with a stream that emits one element per second (see "How to limit an Akka Stream to execute and send down only one message per second?"), but I don't see a similar solution for this case.

Is there a nice way to express this? The idea that comes to mind is to drop down to the Graph DSL and keep a one-second tick source there as state, using it to process the sequences of requests, but I doubt it would turn out pretty.

As Viktor said, you should probably use the built-in throttle. But if you want to roll it yourself, it could look something like this:

import akka.NotUsed
import akka.stream.FlowShape
import akka.stream.scaladsl.{Flow, GraphDSL, Source, Zip}
import scala.concurrent.duration.FiniteDuration

private def throttleFlow[T](rate: FiniteDuration): Flow[T, T, NotUsed] =
  Flow.fromGraph(GraphDSL.create() { implicit builder =>
    import GraphDSL.Implicits._
    // Emits one () per `rate`; zipping with it caps the throughput.
    val ticker = Source.tick(rate, rate, ())
    val zip = builder.add(Zip[T, Unit])
    val messageExtractor = builder.add(Flow[(T, Unit)].map { case (value, _) => value })

    ticker  ~> zip.in1
    zip.out ~> messageExtractor.in

    FlowShape.of(zip.in0, messageExtractor.out)
  })
// And it will be used in your flow as follows:
// .via(throttleFlow(FiniteDuration(200, MILLISECONDS)))
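For comparison, the built-in throttle operator collapses all of the above into a single stage. A minimal sketch, assuming an akka-stream 2.5-era setup matching the answer's code (in 2.6+ ActorMaterializer is deprecated and the ActorSystem's implicit materializer suffices):

```scala
import akka.actor.ActorSystem
import akka.stream.{ActorMaterializer, ThrottleMode}
import akka.stream.scaladsl.{Sink, Source}

import scala.concurrent.Await
import scala.concurrent.duration._

implicit val system: ActorSystem = ActorSystem("throttle-demo")
implicit val mat: ActorMaterializer = ActorMaterializer()

// At most one element per 200 ms, no bursting; upstream is backpressured.
val result = Source(1 to 5)
  .throttle(elements = 1, per = 200.millis, maximumBurst = 1, ThrottleMode.Shaping)
  .runWith(Sink.seq)

println(Await.result(result, 10.seconds)) // all five elements, spread over ~1 s
system.terminate()
```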

Also, since you are limiting access to some API, you may want to throttle calls to it in a centralized way. Say several places in your project call the same external API, but since the rate limit applies per IP, the throttling should apply across all of them. For that case, consider using MergeHub.source in front of your (single) akka-http flow. Each caller creates and runs a new graph that funnels into it.
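A sketch of that centralization, assuming Akka 2.5+ (MergeHub landed in 2.4.14). The names `throttledCalls`, `callExternal`, and `callApi` are illustrative, and `callExternal` is a stub standing in for `Http().singleRequest` so the wiring runs without a network:

```scala
import akka.NotUsed
import akka.actor.ActorSystem
import akka.stream.{ActorMaterializer, ThrottleMode}
import akka.stream.scaladsl.{MergeHub, Sink, Source}

import scala.concurrent.{Await, Future, Promise}
import scala.concurrent.duration._

implicit val system: ActorSystem = ActorSystem("hub-demo")
implicit val mat: ActorMaterializer = ActorMaterializer()
import system.dispatcher

// Stand-in for the real HTTP call (e.g. Http().singleRequest); illustrative only.
def callExternal(req: String): Future[String] = Future(s"response to $req")

// Materializing MergeHub.source yields a Sink that any number of producers
// can attach to later -- all of them share the single throttle below.
val throttledCalls: Sink[(String, Promise[String]), NotUsed] =
  MergeHub.source[(String, Promise[String])](perProducerBufferSize = 16)
    .throttle(1, 100.millis, 1, ThrottleMode.Shaping)
    .to(Sink.foreach { case (req, p) => p.completeWith(callExternal(req)) })
    .run()

// Any call site in the project funnels through the shared hub:
def callApi(req: String): Future[String] = {
  val p = Promise[String]()
  Source.single(req -> p).runWith(throttledCalls)
  p.future
}

println(Await.result(callApi("ping"), 5.seconds))
```

Note that this rate-limits when requests are dispatched, not when their responses complete, which matches the per-second request quota described above.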

Here is what I ended up using:

case class FlowItem[I](i: I, requests: Seq[HttpRequest], responses: Seq[String]) {
  def withResponse(resp: String) = copy(responses = resp +: responses)
  def extractNextRequest         = (requests.head, copy(requests = requests.tail))
}
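To see the bookkeeping FlowItem does, here is the same state machine with plain String requests instead of HttpRequest (a simplification for illustration, runnable without Akka):

```scala
// Simplified analogue of FlowItem, with String requests instead of HttpRequest.
case class Item[I](i: I, requests: Seq[String], responses: Seq[String]) {
  def withResponse(resp: String) = copy(responses = resp +: responses)
  def extractNextRequest         = (requests.head, copy(requests = requests.tail))
}

val start       = Item(42, Seq("req-a", "req-b"), Seq.empty)
val (req, rest) = start.extractNextRequest  // pop the next request, keep the remainder
val afterOne    = rest.withResponse("resp-a")

println(req)                 // req-a
println(afterOne.requests)   // List(req-b)
println(afterOne.responses)  // List(resp-a)
```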

import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.model.{HttpRequest, HttpResponse, StatusCodes}
import akka.stream.{ActorMaterializer, FlowShape, ThrottleMode, UniformFanOutShape}
import akka.stream.scaladsl._
import scala.concurrent.duration._
import scala.util.{Success, Try}

def apiFlow[I, O](requestPer: FiniteDuration,
                  buildRequests: I => Seq[HttpRequest],
                  buildOut: (I, Seq[String]) => O)
                 (implicit system: ActorSystem, materializer: ActorMaterializer) = {
  import system.dispatcher

  GraphDSL.create() { implicit b =>
    import GraphDSL.Implicits._

    val in: FlowShape[I, FlowItem[I]] =
      b.add(Flow[I].map(i => FlowItem(i, buildRequests(i), Seq.empty)))

    // The preferred port lets looped-back items cut in ahead of new ones.
    val merge: MergePreferred.MergePreferredShape[FlowItem[I]] =
      b.add(MergePreferred[FlowItem[I]](1))

    val throttle: FlowShape[FlowItem[I], FlowItem[I]] =
      b.add(Flow[FlowItem[I]].throttle(1, requestPer, 1, ThrottleMode.shaping))

    val prepareRequest: FlowShape[FlowItem[I], (HttpRequest, FlowItem[I])] =
      b.add(Flow[FlowItem[I]].map(_.extractNextRequest))

    val log =
      b.add(Flow[(HttpRequest, FlowItem[I])].map { r => Console.println(s"request to ${r._1.uri}"); r })

    val pool: FlowShape[(HttpRequest, FlowItem[I]), (Try[HttpResponse], FlowItem[I])] =
      b.add(Http(system).superPool[FlowItem[I]]())

    val transformResponse: FlowShape[(Try[HttpResponse], FlowItem[I]), FlowItem[I]] =
      b.add(Flow[(Try[HttpResponse], FlowItem[I])].mapAsync(1) {
        case (Success(HttpResponse(StatusCodes.OK, headers, entity, _)), flowItem) =>
          entity.toStrict(1.second).map(resp => flowItem.withResponse(resp.data.utf8String))
        // Note: non-OK and failed responses are not handled here and will fail the stream.
      })

    val split: UniformFanOutShape[FlowItem[I], FlowItem[I]] =
      b.add(Partition[FlowItem[I]](2, fi => if (fi.requests.isEmpty) 0 else 1))

    val out: FlowShape[FlowItem[I], O] =
      b.add(Flow[FlowItem[I]].map(fi => buildOut(fi.i, fi.responses)))

    in ~> merge ~> throttle ~> prepareRequest ~> log ~> pool ~> transformResponse ~> split ~> out
          merge.preferred    <~                                                      split

    FlowShape(in.in, out.out)
  }
}

The idea is to pass each element through the throttle as many times as it has requests, storing the extra (not-yet-executed) requests alongside the message. The split element checks whether there are more requests left.
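The cycle can be traced in plain Scala without running the graph. A toy simulation of one element travelling through merge → throttle → request → split until its request list is empty (names and the fake HTTP call are illustrative):

```scala
// Toy model of the feedback loop: each pass consumes one request and
// records one response, like one trip around the graph's cycle.
case class State[I](i: I, requests: Seq[String], responses: Seq[String])

def fakeCall(req: String): String = s"resp($req)" // stands in for the HTTP pool

@annotation.tailrec
def loop[I](s: State[I]): State[I] =
  if (s.requests.isEmpty) s // Partition port 0: done, continue downstream
  else {                    // Partition port 1: loop back into merge.preferred
    val (req, rest) = (s.requests.head, s.copy(requests = s.requests.tail))
    loop(rest.copy(responses = fakeCall(req) +: rest.responses))
  }

val done = loop(State("in", Seq("r1", "r2", "r3"), Seq.empty))
println(done.responses) // responses accumulate in reverse order of the requests
```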
