Akka actors and clustering - trouble with ClusterSingletonManager - unhandled event in state Start



I have a system using Akka 2.2.4 which creates a bunch of local actors and sets them up as the routees of a Broadcast Router. Each worker handles some portion of the total work, according to the hash range we pass it.

Now I have to cluster this application for failover. Based on the requirement that only one worker per hash range should exist/be triggered on the cluster, it seems to me that setting each worker up as a ClusterSingletonManager would make sense; however, I'm having trouble getting it to work. The actor system starts up, it creates the ClusterSingletonManager, it adds the path mentioned in the code below to the Broadcast Router, but it never instantiates my actual worker actor to handle my messages. All I get is a log message: "unhandled event ${my message} in state Start". What am I doing wrong? Is there something else I need to do to start this cluster singleton? Am I sending the message to the wrong actor?

Here is my akka config (I use the default config as a fallback):

akka{
    cluster{
        roles=["workerSystem"]
        min-nr-of-members = 1
        role {
            workerSystem.min-nr-of-members = 1
        }
    }
    daemonic = true
    remote {
        enabled-transports = ["akka.remote.netty.tcp"]
        netty.tcp {
          hostname = "127.0.0.1"
          port = ${akkaPort}
        }
    }
    actor{
        provider = akka.cluster.ClusterActorRefProvider
        single-message-bound-mailbox {
              # FQCN of the MailboxType. The Class of the FQCN must have a public
              # constructor with
              # (akka.actor.ActorSystem.Settings, com.typesafe.config.Config) parameters.
              mailbox-type = "akka.dispatch.BoundedMailbox"
              # If the mailbox is bounded then it uses this setting to determine its
              # capacity. The provided value must be positive.
              # NOTICE:
              # Up to version 2.1 the mailbox type was determined based on this setting;
              # this is no longer the case, the type must explicitly be a bounded mailbox.
              mailbox-capacity = 1
              # If the mailbox is bounded then this is the timeout for enqueueing
              # in case the mailbox is full. Negative values signify infinite
              # timeout, which should be avoided as it bears the risk of dead-lock.
              mailbox-push-timeout-time = 1
        }
        worker-dispatcher{
         type = PinnedDispatcher
         executor = "thread-pool-executor"
          # Throughput defines the number of messages that are processed in a batch
          # before the thread is returned to the pool. Set to 1 for as fair as possible.
         throughput = 500
         thread-pool-executor {
            # Keep alive time for threads
            keep-alive-time = 60s
            # Min number of threads to cap factor-based core number to
            core-pool-size-min = ${workerCount}
            # The core pool size factor is used to determine thread pool core size
            # using the following formula: ceil(available processors * factor).
            # Resulting size is then bounded by the core-pool-size-min and
            # core-pool-size-max values.
            core-pool-size-factor = 3.0
            # Max number of threads to cap factor-based number to
            core-pool-size-max = 64
            # Minimum number of threads to cap factor-based max number to
            # (if using a bounded task queue)
            max-pool-size-min = ${workerCount}
            # Max no of threads (if using a bounded task queue) is determined by
            # calculating: ceil(available processors * factor)
            max-pool-size-factor  = 3.0
            # Max number of threads to cap factor-based max number to
            # (if using a  bounded task queue)
            max-pool-size-max = 64
            # Specifies the bounded capacity of the task queue (< 1 == unbounded)
            task-queue-size = -1
            # Specifies which type of task queue will be used, can be "array" or
            # "linked" (default)
            task-queue-type = "linked"
            # Allow core threads to time out
            allow-core-timeout = on
          }
         fork-join-executor {
            # Min number of threads to cap factor-based parallelism number to
            parallelism-min = 1
            # The parallelism factor is used to determine thread pool size using the
            # following formula: ceil(available processors * factor). Resulting size
            # is then bounded by the parallelism-min and parallelism-max values.
            parallelism-factor = 3.0
            # Max number of threads to cap factor-based parallelism number to
            parallelism-max = 1
          }
        }
    }
}

And here is where I create my actors (it is written in Groovy):

    Props clusteredProps = ClusterSingletonManager.defaultProps("worker", PoisonPill.getInstance(), "workerSystem",
            new ClusterSingletonPropsFactory() {
                @Override
                Props create(Object handOverData) {
                    log.info("called in ClusterSingletonManager")
                    Props.create(WorkerActorCreator.create(applicationContext, it.start, it.end))
                         .withDispatcher("akka.actor.worker-dispatcher")
                         .withMailbox("akka.actor.single-message-bound-mailbox")
                }
            })
    ActorRef manager = system.actorOf(clusteredProps, "worker-${it.start}-${it.end}".toString())
    String path = manager.path().child("worker").toString()
    path

Is the path above supposed to resolve when I try to send a message to the actual worker actor? Currently it does not. What am I doing wrong? Also, these actors live within a Spring application, and the worker actors are set up with some @Autowired dependencies. While this Spring integration works well in a non-clustered environment, are there any gotchas in a clustered environment that I should be watching out for?

Thanks

FYI: I have also posted this in the akka-user google group. Link here

The path in your code is the path to the ClusterSingletonManager actor that you start on each node with role "workerSystem". It will create a child actor (WorkerActor) with name "worker-${it.start}-${it.end}" on the oldest node in the cluster, i.e. the cluster singleton.

You should also define the name of the ClusterSingletonManager, e.g. system.actorOf(clusteredProps, "workerSingletonManager").
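For example (a sketch; the name "workerSingletonManager" and the system/host/port in the comment are made-up values, not from your question):

```groovy
// Give the ClusterSingletonManager a fixed, well-known name so that the
// singleton child's full path is predictable on every node.
ActorRef manager = system.actorOf(clusteredProps, "workerSingletonManager")

// The active "worker" child then lives at a path of this shape, where the
// address part must be the address of the current oldest node, e.g.:
//   akka.tcp://WorkerSystem@127.0.0.1:2552/user/workerSingletonManager/worker
```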

You cannot send the messages to the ClusterSingletonManager. You must send them to the path of the active worker, i.e. including the address of the oldest node. That is illustrated by the ConsumerProxy in the documentation.
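As a rough sketch of that proxy pattern (simplified from the ConsumerProxy example in the Akka 2.2 cluster docs; the class name WorkerProxy and the path segment workerSingletonManager are assumptions for illustration), an actor can subscribe to cluster membership events, track the oldest member, and forward work messages to the singleton's path on that node:

```groovy
import akka.actor.UntypedActor
import akka.cluster.Cluster
import akka.cluster.ClusterEvent
import akka.cluster.Member

class WorkerProxy extends UntypedActor {
    // Members sorted by age; the first element is the oldest node,
    // which is where the ClusterSingletonManager runs the singleton.
    final SortedSet<Member> membersByAge = new TreeSet<Member>(
            { Member a, Member b -> a.isOlderThan(b) ? -1 : 1 } as Comparator)

    @Override
    void preStart() {
        Cluster.get(getContext().system()).subscribe(getSelf(), ClusterEvent.MemberEvent)
    }

    @Override
    void postStop() {
        Cluster.get(getContext().system()).unsubscribe(getSelf())
    }

    @Override
    void onReceive(Object message) {
        if (message instanceof ClusterEvent.CurrentClusterState) {
            membersByAge.clear()
            message.getMembers().each { membersByAge.add(it) }
        } else if (message instanceof ClusterEvent.MemberUp) {
            membersByAge.add(message.member())
        } else if (message instanceof ClusterEvent.MemberRemoved) {
            membersByAge.remove(message.member())
        } else if (!membersByAge.isEmpty()) {
            // Forward any work message to the singleton child on the oldest node.
            String path = membersByAge.first().address().toString() +
                    "/user/workerSingletonManager/worker"
            getContext().actorSelection(path).forward(message, getContext())
        }
    }
}
```

Routing your Broadcast Router at proxies like this, rather than at the ClusterSingletonManager paths directly, keeps the sender side oblivious to which node currently hosts each singleton.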

I am not sure that you should use a singleton at all for this. All the workers would then be running on the same node, i.e. the oldest node. I would rather discuss alternative solutions to your problem at the akka-user google group.
