SERVICE_UNAVAILABLE - backing Kafka cluster has not completed booting; try again later



zookeeper0:
    extends:
        file: docker-compose-base.yml
        service: zookeeper
    container_name: zookeeper0
    environment:
        - ZOO_MY_ID=1
        - ZOO_SERVERS=server.1=zookeeper0:2888:3888 server.2=zookeeper1:2888:3888 server.3=zookeeper2:2888:3888
    networks:
      behave:
         aliases:
           - ${CORE_PEER_NETWORKID}

zookeeper1:
    extends:
        file: docker-compose-base.yml
        service: zookeeper
    container_name: zookeeper1
    environment:
        - ZOO_MY_ID=2
        - ZOO_SERVERS=server.1=zookeeper0:2888:3888 server.2=zookeeper1:2888:3888 server.3=zookeeper2:2888:3888
    networks:
      behave:
         aliases:
           - ${CORE_PEER_NETWORKID}
zookeeper2:
    extends:
        file: docker-compose-base.yml
        service: zookeeper
    container_name: zookeeper2
    environment:
        - ZOO_MY_ID=3
        - ZOO_SERVERS=server.1=zookeeper0:2888:3888 server.2=zookeeper1:2888:3888 server.3=zookeeper2:2888:3888
    networks:
      behave:
         aliases:
           - ${CORE_PEER_NETWORKID}
kafka0:
    extends:
        file: docker-compose-base.yml
        service: kafka
    container_name: kafka0
    environment:
        - KAFKA_BROKER_ID=0
        - KAFKA_ZOOKEEPER_CONNECT=zookeeper0:2181,zookeeper1:2181,zookeeper2:2181
        - KAFKA_MESSAGE_MAX_BYTES=${KAFKA_MESSAGE_MAX_BYTES}
        - KAFKA_REPLICA_FETCH_MAX_BYTES=${KAFKA_REPLICA_FETCH_MAX_BYTES}
        - KAFKA_REPLICA_FETCH_RESPONSE_MAX_BYTES=${KAFKA_REPLICA_FETCH_RESPONSE_MAX_BYTES}
    depends_on:
        - zookeeper0
        - zookeeper1
        - zookeeper2
    networks:
      behave:
         aliases:
           - ${CORE_PEER_NETWORKID}

kafka1:
    extends:
        file: docker-compose-base.yml
        service: kafka
    container_name: kafka1
    environment:
        - KAFKA_BROKER_ID=1
        - KAFKA_ZOOKEEPER_CONNECT=zookeeper0:2181,zookeeper1:2181,zookeeper2:2181
        - KAFKA_MESSAGE_MAX_BYTES=${KAFKA_MESSAGE_MAX_BYTES}
        - KAFKA_REPLICA_FETCH_MAX_BYTES=${KAFKA_REPLICA_FETCH_MAX_BYTES}
        - KAFKA_REPLICA_FETCH_RESPONSE_MAX_BYTES=${KAFKA_REPLICA_FETCH_RESPONSE_MAX_BYTES}
    depends_on:
        - zookeeper0
        - zookeeper1
        - zookeeper2
    networks:
      behave:
         aliases:
           - ${CORE_PEER_NETWORKID}

kafka2:
    extends:
        file: docker-compose-base.yml
        service: kafka
    container_name: kafka2
    environment:
        - KAFKA_BROKER_ID=2
        - KAFKA_ZOOKEEPER_CONNECT=zookeeper0:2181,zookeeper1:2181,zookeeper2:2181
        - KAFKA_MESSAGE_MAX_BYTES=${KAFKA_MESSAGE_MAX_BYTES}
        - KAFKA_REPLICA_FETCH_MAX_BYTES=${KAFKA_REPLICA_FETCH_MAX_BYTES}
        - KAFKA_REPLICA_FETCH_RESPONSE_MAX_BYTES=${KAFKA_REPLICA_FETCH_RESPONSE_MAX_BYTES}
    depends_on:
        - zookeeper0
        - zookeeper1
        - zookeeper2
    networks:
      behave:
         aliases:
           - ${CORE_PEER_NETWORKID}

kafka3:
    extends:
        file: docker-compose-base.yml
        service: kafka
    container_name: kafka3
    environment:
        - KAFKA_BROKER_ID=3
        - KAFKA_ZOOKEEPER_CONNECT=zookeeper0:2181,zookeeper1:2181,zookeeper2:2181
        - KAFKA_MESSAGE_MAX_BYTES=${KAFKA_MESSAGE_MAX_BYTES}
        - KAFKA_REPLICA_FETCH_MAX_BYTES=${KAFKA_REPLICA_FETCH_MAX_BYTES}
        - KAFKA_REPLICA_FETCH_RESPONSE_MAX_BYTES=${KAFKA_REPLICA_FETCH_RESPONSE_MAX_BYTES}
    depends_on:
        - zookeeper0
        - zookeeper1
        - zookeeper2
    networks:
      behave:
         aliases:
           - ${CORE_PEER_NETWORKID}
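Besides the entries above, the Hyperledger Fabric documentation recommends pinning the replication settings on every broker so the orderer cannot write to a topic that the cluster is not yet able to replicate. A hedged sketch of the extra environment entries for each kafka service (the KAFKA_-prefixed variable names follow the base image's convention of mapping env vars to broker properties; the values shown are the commonly recommended ones for a four-broker cluster, not settings taken from this thread):

    environment:
        # default.replication.factor: replicate each channel topic across 3 of the 4 brokers
        - KAFKA_DEFAULT_REPLICATION_FACTOR=3
        # min.insync.replicas: acknowledge a write only once 2 replicas have it
        - KAFKA_MIN_INSYNC_REPLICAS=2
        # unclean.leader.election.enable: never elect an out-of-sync replica as leader
        - KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false

With these values one broker can fail without losing writes, while a second failure stalls the channel rather than silently dropping data.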


Orderer logs:
vagrant@vagrant:~/workspace/kafka-ordering-master$ docker logs orderer0.example.com
    2019-02-15 09:08:02.775 UTC [localconfig] completeInitialization -> INFO 001 Kafka.Version unset, setting to 0.10.2.0
    2019-02-15 09:08:03.466 UTC [orderer.common.server] prettyPrintStruct -> INFO 002 Orderer config values:
            General.LedgerType = "ram"
            General.ListenAddress = "0.0.0.0"
            General.ListenPort = 7050
            General.TLS.Enabled = false
            General.TLS.PrivateKey = "/var/hyperledger/tls/server.key"
            General.TLS.Certificate = "/var/hyperledger/tls/server.crt"
            General.TLS.RootCAs = [/var/hyperledger/tls/ca.crt]
            General.TLS.ClientAuthRequired = false
            General.TLS.ClientRootCAs = []
            General.Cluster.RootCAs = [/etc/hyperledger/fabric/tls/ca.crt]
            General.Cluster.ClientCertificate = ""
            General.Cluster.ClientPrivateKey = ""
            General.Cluster.DialTimeout = 5s
            General.Cluster.RPCTimeout = 7s
            General.Cluster.ReplicationBufferSize = 20971520
            General.Cluster.ReplicationPullTimeout = 5s
            General.Cluster.ReplicationRetryTimeout = 5s
            General.Keepalive.ServerMinInterval = 1m0s
            General.Keepalive.ServerInterval = 2h0m0s
            General.Keepalive.ServerTimeout = 20s
            General.GenesisMethod = "file"
            General.GenesisProfile = "SampleInsecureKafka"
            General.SystemChannel = "test-system-channel-name"
            General.GenesisFile = "/var/hyperledger/configs/orderer.block"
            General.Profile.Enabled = false
            General.Profile.Address = "0.0.0.0:6060"
            General.LocalMSPDir = "/var/hyperledger/msp"
            General.LocalMSPID = "OrdererMSP"
            General.BCCSP.ProviderName = "SW"
            General.BCCSP.SwOpts.SecLevel = 256
            General.BCCSP.SwOpts.HashFamily = "SHA2"
            General.BCCSP.SwOpts.Ephemeral = false
            General.BCCSP.SwOpts.FileKeystore.KeyStorePath = "/var/hyperledger/msp/keystore"
            General.BCCSP.SwOpts.DummyKeystore =
            General.BCCSP.SwOpts.InmemKeystore =
            General.BCCSP.PluginOpts =
            General.Authentication.TimeWindow = 15m0s
            FileLedger.Location = "/var/hyperledger/production/orderer"
            FileLedger.Prefix = "hyperledger-fabric-ordererledger"
            RAMLedger.HistorySize = 1000
            Kafka.Retry.ShortInterval = 1s
            Kafka.Retry.ShortTotal = 30s
            Kafka.Retry.LongInterval = 5m0s
            Kafka.Retry.LongTotal = 12h0m0s
            Kafka.Retry.NetworkTimeouts.DialTimeout = 10s
            Kafka.Retry.NetworkTimeouts.ReadTimeout = 10s
            Kafka.Retry.NetworkTimeouts.WriteTimeout = 10s
            Kafka.Retry.Metadata.RetryMax = 3
            Kafka.Retry.Metadata.RetryBackoff = 250ms
            Kafka.Retry.Producer.RetryMax = 3
            Kafka.Retry.Producer.RetryBackoff = 100ms
            Kafka.Retry.Consumer.RetryBackoff = 2s
            Kafka.Verbose = true
            Kafka.Version = 0.10.2.0
            Kafka.TLS.Enabled = false
            Kafka.TLS.PrivateKey = ""
            Kafka.TLS.Certificate = ""
            Kafka.TLS.RootCAs = []
            Kafka.TLS.ClientAuthRequired = false
            Kafka.TLS.ClientRootCAs = []
            Kafka.SASLPlain.Enabled = false
            Kafka.SASLPlain.User = ""
            Kafka.SASLPlain.Password = ""
            Kafka.Topic.ReplicationFactor = 3
            Debug.BroadcastTraceDir = ""
            Debug.DeliverTraceDir = ""
            Consensus = map[SnapDir:/var/hyperledger/production/orderer/etcdraft/snapshot WALDir:/var/hyperledger/production/orderer/etcdraft/wal]
            Operations.ListenAddress = "127.0.0.1:8443"
            Operations.TLS.Enabled = false
            Operations.TLS.PrivateKey = ""
            Operations.TLS.Certificate = ""
            Operations.TLS.RootCAs = []
            Operations.TLS.ClientAuthRequired = false
            Operations.TLS.ClientRootCAs = []
            Metrics.Provider = "disabled"
            Metrics.Statsd.Network = "udp"
            Metrics.Statsd.Address = "127.0.0.1:8125"
            Metrics.Statsd.WriteInterval = 30s
            Metrics.Statsd.Prefix = ""
    2019-02-15 09:08:03.763 UTC [orderer.consensus.kafka] newChain -> INFO 003 [channel: testchainid] Starting chain with last persisted offset -3 and last recorded block 0
    2019-02-15 09:08:03.787 UTC [orderer.commmon.multichannel] Initialize -> INFO 004 Starting system channel 'testchainid' with genesis block hash 5f2c3828df168808a899ecce5d7d7306c36cc615464ed0d54b4846155cc3979d and orderer type kafka
    2019-02-15 09:08:03.787 UTC [orderer.common.server] Start -> INFO 005 Starting orderer:
     Version: 1.4.0
     Commit SHA: d700b43
     Go version: go1.11.1
     OS/Arch: linux/amd64
    2019-02-15 09:08:03.787 UTC [orderer.common.server] Start -> INFO 006 Beginning to serve requests
    2019-02-15 09:08:03.800 UTC [orderer.consensus.kafka] setupTopicForChannel -> INFO 007 [channel: testchainid] Setting up the topic for this channel...
    2019-02-15 09:08:31.401 UTC [orderer.common.broadcast] ProcessMessage -> WARN 008 [channel: mychannel] Rejecting broadcast of message from 172.25.0.12:39536 with SERVICE_UNAVAILABLE: rejected by Consenter: backing Kafka cluster has not completed booting; try again later
    2019-02-15 09:08:31.422 UTC [comm.grpc.server] 1 -> INFO 009 streaming call completed {"grpc.start_time": "2019-02-15T09:08:31.041Z", "grpc.service": "orderer.AtomicBroadcast", "grpc.method": "Broadcast", "grpc.peer_address": "172.25.0.12:39536", "grpc.code": "OK", "grpc.call_duration": "380.924298ms"}
    2019-02-15 09:08:31.453 UTC [common.deliver] Handle -> WARN 00a Error reading from 172.25.0.12:39534: rpc error: code = Canceled desc = context canceled
    2019-02-15 09:08:31.460 UTC [comm.grpc.server] 1 -> INFO 00b streaming call completed {"grpc.start_time": "2019-02-15T09:08:31.036Z", "grpc.service": "orderer.AtomicBroadcast", "grpc.method": "Deliver", "grpc.peer_address": "172.25.0.12:39534", "error": "rpc error: code = Canceled desc = context canceled", "grpc.code": "Canceled", "grpc.call_duration": "423.701471ms"}

This happened to me as well. I resolved it as follows: run Kitematic -> click My Images (in the top-right corner) -> click Create.
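An alternative to recreating the containers by hand is to make compose wait until the brokers are actually serving requests before the orderer connects. A minimal sketch using a healthcheck (assumptions: the compose file format in use is 2.1 or later, and the fabric-kafka image has kafka-topics.sh on its PATH; adjust the script name and path to your image):

kafka0:
    # ... existing kafka0 entries from above, unchanged ...
    healthcheck:
        # The broker is "healthy" once it can answer a topic-metadata request.
        test: ["CMD-SHELL", "kafka-topics.sh --zookeeper zookeeper0:2181 --list"]
        interval: 10s
        timeout: 10s
        retries: 12

With compose file format 2.1, the orderer service can then declare depends_on with condition: service_healthy on each broker, so it only starts once the Kafka cluster has finished booting and the SERVICE_UNAVAILABLE rejection no longer occurs.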
