DC completely unbalanced after bootstrapping a new node



I have just added a new node to my Cassandra DC. Previously, my topology was as follows:

  1. DC Cassandra: 1 node
  2. DC Solr: 5 nodes

When I bootstrapped the second node for the Cassandra DC, I noticed that the total bytes to be streamed were almost as large as the existing node's load (916 GB to stream; the existing Cassandra node's load was 956 GB). Nevertheless, I allowed the bootstrap to proceed. It finished a few hours ago, and now my fear is confirmed: the Cassandra DC is completely unbalanced.

nodetool status shows the following:

Datacenter: Solr
================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address                                        Load       Owns (effective)  Host ID                               Token                                    Rack
UN  solr node4                                     322.9 GB   40.3%             30f411c3-7419-4786-97ad-395dfc379b40  -8998044611302986942                     rack1
UN  solr node3                                     233.16 GB  39.7%             c7db42c6-c5ae-439e-ab8d-c04b200fffc5  -9145710677669796544                     rack1
UN  solr node5                                     252.42 GB  41.6%             2d3dfa16-a294-48cc-ae3e-d4b99fbc947c  -9004172260145053237                     rack1
UN  solr node2                                     245.97 GB  40.5%             7dbbcc88-aabc-4cf4-a942-08e1aa325300  -9176431489687825236                     rack1
UN  solr node1                                     402.33 GB  38.0%             12976524-b834-473e-9bcc-5f9be74a5d2d  -9197342581446818188                     rack1
Datacenter: Cassandra
=====================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address                                        Load       Owns (effective)  Host ID                               Token                                    Rack
UN  cs node2                                       705.58 GB  99.4%             fa55e0bb-e460-4dc1-ac7a-f71dd00f5380  -9114885310887105386                     rack1
UN  cs node1                                      1013.52 GB  0.6%              6ab7062e-47fe-45f7-98e8-3ee8e1f742a4  -3083852333946106000                     rack1

Notice the 'Owns' column in the Cassandra DC: node2 owns 99.4% while node1 owns only 0.6% (despite node2 having a smaller 'Load' than node1). I expected each of them to own 50%, but this is what I got. I have no idea what caused this. What I do remember is that I was running a full repair on solr-node1 when I started the bootstrap of the new node. The repair is still running at this point (I think it actually restarted when the new node finished bootstrapping).
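One quick sanity check (a sketch, not from the post itself): with vnodes, each node should hold roughly `num_tokens` (here 256) tokens, and `nodetool ring` prints one row per token, so counting rows per address shows whether the new node actually picked up a full token set.

```shell
# Count tokens per node from `nodetool ring` output. With vnodes enabled
# (num_tokens: 256), every node should report roughly 256 rows here; a node
# with far fewer rows did not bootstrap a full set of tokens. The awk filter
# assumes the first field of data rows is an IP-like address.
nodetool ring | awk '$1 ~ /^[0-9]+\./ {count[$1]++} END {for (ip in count) print ip, count[ip]}'
```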

How can I fix this? (repair?)

Is it safe to bulk-load new data while the Cassandra DC is in this state?

Additional info:

  1. DSE 4.0.3 (Cassandra 2.0.7)
  2. NetworkTopologyStrategy
  3. RF1 in the Cassandra DC; RF2 in the Solr DC
  4. DCs auto-assigned by DSE
  5. Vnodes enabled
  6. The new node's config was modeled on the existing nodes' config, so it is more or less correct
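Given the last two items, it may be worth double-checking that both Cassandra-DC nodes actually agree on their token settings. A minimal sketch, assuming the default packaged-install config path (adjust for your environment):

```shell
# On each node, verify that num_tokens is set and initial_token is NOT set:
# a node with initial_token uncommented bootstraps with a single token even
# when the rest of the cluster uses vnodes.
CONF=/etc/dse/cassandra/cassandra.yaml   # path is an assumption; adjust
grep -E '^(num_tokens|initial_token)' "$CONF"
```

On a healthy vnode cluster this should print only a `num_tokens` line on every node.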
EDIT:

It turns out I cannot run cleanup on cs-node1 either. I get the following exception:

Exception in thread "main" java.lang.AssertionError: [SSTableReader(path='/home/cassandra/data/my_ks/my_cf/my_ks-my_cf-jb-18509-Data.db'), SSTableReader(path='/home/cassandra/data/my_ks/my_cf/my_ks-my_cf-jb-18512-Data.db'), SSTableReader(path='/home/cassandra/data/my_ks/my_cf/my_ks-my_cf-jb-38320-Data.db'), SSTableReader(path='/home/cassandra/data/my_ks/my_cf/my_ks-my_cf-jb-38325-Data.db'), SSTableReader(path='/home/cassandra/data/my_ks/my_cf/my_ks-my_cf-jb-38329-Data.db'), SSTableReader(path='/home/cassandra/data/my_ks/my_cf/my_ks-my_cf-jb-38322-Data.db'), SSTableReader(path='/home/cassandra/data/my_ks/my_cf/my_ks-my_cf-jb-38330-Data.db'), SSTableReader(path='/home/cassandra/data/my_ks/my_cf/my_ks-my_cf-jb-38331-Data.db'), SSTableReader(path='/home/cassandra/data/my_ks/my_cf/my_ks-my_cf-jb-38321-Data.db'), SSTableReader(path='/home/cassandra/data/my_ks/my_cf/my_ks-my_cf-jb-38323-Data.db'), SSTableReader(path='/home/cassandra/data/my_ks/my_cf/my_ks-my_cf-jb-38344-Data.db'), SSTableReader(path='/home/cassandra/data/my_ks/my_cf/my_ks-my_cf-jb-38345-Data.db'), SSTableReader(path='/home/cassandra/data/my_ks/my_cf/my_ks-my_cf-jb-38349-Data.db'), SSTableReader(path='/home/cassandra/data/my_ks/my_cf/my_ks-my_cf-jb-38348-Data.db'), SSTableReader(path='/home/cassandra/data/my_ks/my_cf/my_ks-my_cf-jb-38346-Data.db'), SSTableReader(path='/home/cassandra/data/my_ks/my_cf/my_ks-my_cf-jb-13913-Data.db'), SSTableReader(path='/home/cassandra/data/my_ks/my_cf/my_ks-my_cf-jb-13915-Data.db'), SSTableReader(path='/home/cassandra/data/my_ks/my_cf/my_ks-my_cf-jb-38389-Data.db'), SSTableReader(path='/home/cassandra/data/my_ks/my_cf/my_ks-my_cf-jb-39845-Data.db'), SSTableReader(path='/home/cassandra/data/my_ks/my_cf/my_ks-my_cf-jb-38390-Data.db')]
    at org.apache.cassandra.db.ColumnFamilyStore$13.call(ColumnFamilyStore.java:2115)
    at org.apache.cassandra.db.ColumnFamilyStore$13.call(ColumnFamilyStore.java:2112)
    at org.apache.cassandra.db.ColumnFamilyStore.runWithCompactionsDisabled(ColumnFamilyStore.java:2094)
    at org.apache.cassandra.db.ColumnFamilyStore.markAllCompacting(ColumnFamilyStore.java:2125)
    at org.apache.cassandra.db.compaction.CompactionManager.performAllSSTableOperation(CompactionManager.java:214)
    at org.apache.cassandra.db.compaction.CompactionManager.performCleanup(CompactionManager.java:265)
    at org.apache.cassandra.db.ColumnFamilyStore.forceCleanup(ColumnFamilyStore.java:1105)
    at org.apache.cassandra.service.StorageService.forceKeyspaceCleanup(StorageService.java:2220)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:75)
    at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:279)
    at com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
    at com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
    at com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
    at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
    at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
    at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
    at javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1487)
    at javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:97)
    at javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1328)
    at javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1420)
    at javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:848)
    at sun.reflect.GeneratedMethodAccessor18.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
    at sun.rmi.transport.Transport$1.run(Transport.java:177)
    at sun.rmi.transport.Transport$1.run(Transport.java:174)
    at java.security.AccessController.doPrivileged(Native Method)
    at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
    at sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:556)
    at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:811)
    at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:670)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
EDIT:

nodetool status output (without specifying a keyspace):

Note: Ownership information does not include topology; for complete information, specify a keyspace
Datacenter: Solr
================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address                                        Load       Owns   Host ID                               Token                                    Rack
UN  solr node4                                     323.78 GB  17.1%  30f411c3-7419-4786-97ad-395dfc379b40  -8998044611302986942                     rack1
UN  solr node3                                     236.69 GB  17.3%  c7db42c6-c5ae-439e-ab8d-c04b200fffc5  -9145710677669796544                     rack1
UN  solr node5                                     256.06 GB  16.2%  2d3dfa16-a294-48cc-ae3e-d4b99fbc947c  -9004172260145053237                     rack1
UN  solr node2                                     246.59 GB  18.3%  7dbbcc88-aabc-4cf4-a942-08e1aa325300  -9176431489687825236                     rack1
UN  solr node1                                     411.25 GB  13.9%  12976524-b834-473e-9bcc-5f9be74a5d2d  -9197342581446818188                     rack1
Datacenter: Cassandra
=====================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address                                        Load       Owns   Host ID                               Token                                    Rack
UN  cs node2                                       709.64 GB  17.2%  fa55e0bb-e460-4dc1-ac7a-f71dd00f5380  -9114885310887105386                     rack1
UN  cs node1                                      1003.71 GB  0.1%   6ab7062e-47fe-45f7-98e8-3ee8e1f742a4  -3083852333946106000                     rack1

Cassandra yaml from node1: https://www.dropbox.com/s/ptgzp5lfmdaeq8d/cassandra.yaml (the only differences from node2's are listen_address and commitlog_directory)

Regarding CASSANDRA-6774, my case is a bit different because I did not stop a previous cleanup. Although I think I have now taken a wrong route by starting a scrub (still ongoing) instead of restarting the node first, as in their suggested workaround.

UPDATE (2014/04/19):

nodetool cleanup still fails with the assertion error after performing the following:

  1. A full scrub of the keyspace
  2. A full cluster restart

I am now doing a full repair of the keyspace on cs-node1.

UPDATE (2014/04/20):

Any attempt to repair the main keyspace on cs-node1 fails with:

Lost notification. You should check server log for repair status of keyspace
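"Lost notification" usually only means the JMX client missed a progress message; the repair may still be running (or may have failed) server-side. A hedged way to check, assuming the typical packaged-install log location:

```shell
# Look for repair session outcomes in the server log. The log path is an
# assumption for a packaged install -- adjust for your environment.
LOG=/var/log/cassandra/system.log
grep -iE 'repair (session|command)' "$LOG" | tail -n 5
```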

I also just saw this (output of dsetool ring):

Note: Ownership information does not include topology, please specify a keyspace.
Address          DC           Rack         Workload         Status  State    Load             Owns                 VNodes
solr-node1       Solr         rack1        Search           Up      Normal   447 GB           13.86%               256
solr-node2       Solr         rack1        Search           Up      Normal   267.52 GB        18.30%               256
solr-node3       Solr         rack1        Search           Up      Normal   262.16 GB        17.29%               256
cs-node2         Cassandra    rack1        Cassandra        Up      Normal   808.61 GB        17.21%               256
solr-node5       Solr         rack1        Search           Up      Normal   296.14 GB        16.21%               256
solr-node4       Solr         rack1        Search           Up      Normal   340.53 GB        17.07%               256
cd-node1         Cassandra    rack1        Cassandra        Up      Normal   896.68 GB        0.06%                256
Warning:  Node cs-node2 is serving 270.56 times the token space of node cs-node1, which means it will be using 270.56 times more disk space and network bandwidth. If this is unintentional, check out http://wiki.apache.org/cassandra/Operations#Ring_management
Warning:  Node solr-node2 is serving 1.32 times the token space of node solr-node1, which means it will be using 1.32 times more disk space and network bandwidth. If this is unintentional, check out http://wiki.apache.org/cassandra/Operations#Ring_management

Keyspace-aware:

Address          DC           Rack         Workload         Status  State    Load             Effective-Ownership  VNodes
solr-node1       Solr         rack1        Search           Up      Normal   447 GB           38.00%               256
solr-node2       Solr         rack1        Search           Up      Normal   267.52 GB        40.47%               256
solr-node3       Solr         rack1        Search           Up      Normal   262.16 GB        39.66%               256
cs-node2         Cassandra    rack1        Cassandra        Up      Normal   808.61 GB        99.39%               256
solr-node5       Solr         rack1        Search           Up      Normal   296.14 GB        41.59%               256
solr-node4       Solr         rack1        Search           Up      Normal   340.53 GB        40.28%               256
cs-node1         Cassandra    rack1        Cassandra        Up      Normal   896.68 GB        0.61%                256
Warning:  Node cd-node2 is serving 162.99 times the token space of node cs-node1, which means it will be using 162.99 times more disk space and network bandwidth. If this is unintentional, check out http://wiki.apache.org/cassandra/Operations#Ring_management

This is a strong indication that something went wrong with the way cs-node2 bootstrapped (as I described at the beginning of the post).

It looks like your problem is that you most likely tried to switch from single tokens to vnodes on the existing node, so all of its tokens ended up in a row. That is actually not possible to do in current Cassandra versions because it was too hard to get right.
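One way to verify that diagnosis (a sketch; `<node-ip>` is a placeholder for the node's address): dump the node's tokens in sorted order. Sequentially generated tokens climb in tiny steps, whereas a healthy vnode ring shows large, randomly spread gaps.

```shell
# Print one node's tokens sorted numerically; if they were generated
# sequentially, the sorted list rises in very small increments.
nodetool ring | awk '$1 == "<node-ip>" {print $NF}' | sort -n | head
```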

The only real way to fix it and be able to add new nodes is to decommission the first new node you added, then follow the current documentation on switching from single tokens to vnodes: you basically need to create a brand-new data center with fresh vnode-enabled nodes in it, rebuild it from the existing one, and then decommission the existing nodes.
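That migration can be outlined roughly as follows (a sketch only; the new DC name `Cassandra2` and the RF values are placeholder assumptions -- check the DataStax documentation for your version before running anything):

```shell
# 1. Decommission the half-bootstrapped new node (run ON that node):
nodetool decommission

# 2. Stand up a brand-new DC whose nodes have num_tokens set (vnodes enabled)
#    and auto_bootstrap: false, then extend replication to it:
cqlsh -e "ALTER KEYSPACE my_ks WITH replication = {'class': 'NetworkTopologyStrategy', 'Cassandra': 1, 'Cassandra2': 1};"

# 3. On each node in the new DC, stream the data over from the old DC:
nodetool rebuild Cassandra

# 4. Once the new DC is verified, remove the old DC from the replication
#    settings and decommission its node(s).
```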
