Cinder Volume Create: No valid host was found / No weighed hosts available



I am trying to install and configure OpenStack Mitaka on a 4-node stack: 1 controller, 1 compute, 1 block storage, and 1 object storage. Having set up the block storage node, I am unable to create volumes through the dashboard. The base OS is Ubuntu 14.04 and, as I said, the OpenStack release is Mitaka.

Here is the cinder.conf on the controller node:

[DEFAULT]
rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_confg = /etc/cinder/api-paste.ini
iscsi_helper = tgtadm
volume_name_template = volume-%s
volume_group = cinder-volumes
verbose = True
auth_strategy = keystone
state_path = /var/lib/cinder
lock_path = /var/lock/cinder
volumes_dir = /var/lib/cinder/volumes
rpc_backend = rabbit
auth_strategy = keystone
my_ip = 10.0.0.11
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
iscsi_protocol = iscsi

[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = *********
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = **********

[database]
connection = mysql+pymysql://cinder:********@controller/cinder
[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = tgtadm

And the cinder.conf file on the Cinder (Block Storage) node:

[DEFAULT]
rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_confg = /etc/cinder/api-paste.ini
iscsi_helper = tgtadm
volume_name_template = volume-%s
volume_group = cinder-volumes
verbose = True
auth_strategy = keystone
state_path = /var/lib/cinder
lock_path = /var/lock/cinder
volumes_dir = /var/lib/cinder/volumes
rpc_backend = rabbit
auth_strategy = keystone
my_ip = 10.0.0.41
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = **********
enabled_backends = lvm
glance_api_servers = http://controller:9292
[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = ********
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
[database]
#connection = mysql+pymysql://cinder:*******@controller/cinder
connection = mysql+pymysql://cinder:*******@controller/cinder
#connection = mysql://cinder:******@controller/cinder
[api_database]
connection = mysql+pymysql://cinder:*******@controller/cinder_api

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = tgtadm

After creating a volume, its status is "error". Here is the error line I get in the cinder-scheduler.log file on the Controller Node:

2016-09-07 17:14:22.291 10607 ERROR cinder.scheduler.flows.create_volume [req-272c5387-a2e3-4371-8a14-8330831910d0 a43909277cbb418fa12fab4d22e0586c 64d180e39e2345ac9bbcd0c389b0a7c4 - - -] Failed to run task cinder.scheduler.flows.create_volume.ScheduleCreateVolumeTask;volume:create: No valid host was found. No weighed hosts available

I think this is the most important part of the error message:

volume:create: No valid host was found. No weighed hosts available
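"No valid host" from the scheduler is usually a symptom; the underlying reason tends to be logged elsewhere. A quick way to dig deeper, assuming the stock Ubuntu log locations (adjust the paths if yours differ):

# On the controller node: what the scheduler saw
grep -iE "error|warn" /var/log/cinder/cinder-scheduler.log | tail -n 20

# On the block storage node: cinder-volume usually logs the root cause
grep -iE "error|warn" /var/log/cinder/cinder-volume.log | tail -n 20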

When I run "cinder service-list" from the controller node, I get the following output:

+------------------+------------+------+---------+-------+----------------------------+-----------------+
|      Binary      |    Host    | Zone |  Status | State |         Updated_at         | Disabled Reason |
+------------------+------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler | controller | nova | enabled |   up  | 2016-09-07T22:13:11.000000 |        -        |
|  cinder-volume   |   cinder   | nova | enabled |   up  | 2016-09-07T22:13:30.000000 |        -        |
+------------------+------------+------+---------+-------+----------------------------+-----------------+

Interestingly, the host name is cinder, whereas in the Mitaka installation guide the host name is block1@lvm. I don't know why mine is different, or whether it is even relevant, but I found it curious and it may be a clue to my problem.
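If I read the naming convention right, cinder-volume reports itself as host@backend once enabled_backends takes effect, so with this storage node's hostname and an [lvm] backend section the row should presumably look like this (a sketch modeled on the guide's block1@lvm):

|  cinder-volume   | cinder@lvm | nova | enabled |   up  | 2016-09-07T22:13:30.000000 |        -        |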

This leads me to believe that the Cinder Node and the Controller Node can "see", i.e. communicate with, each other. I also believe I have configured LVM correctly on the Cinder Node. Just in case, here is the filter section from the lvm.conf file:

filter = [ "a/sda/", "a ...

With all that said, I think this is either a partitioning/disk-formatting problem or a RabbitMQ (messaging service) problem. I do have rabbitmq-server installed on the Cinder Node, which I know is not how the guide sets things up, so it may well be wrong. What I am going to do next is remove rabbitmq-server from the Cinder Node; the commands are sketched below. The risk I expect is that the Cinder Node and the Controller Node will then no longer "see" each other. If that happens, could there be a problem in the .conf files on any of the 3 nodes I am currently running? Those nodes are the Controller, Compute, and Cinder.
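A sketch of those steps on Ubuntu 14.04 (the guide only expects rabbitmq-server on the controller):

# On the Cinder (block storage) node
sudo apt-get purge rabbitmq-server
sudo service cinder-volume restart

# Back on the controller: confirm cinder-volume still checks in
cinder service-list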

Let me know what you think. If you see a problem in my files, please tell me. The last paragraph explains my thinking and the current state of the project. If you see a flaw in my logic, or think there may be a better way to attack the problem, I'm all ears!

Thanks, everyone!

First, check the output of the vgs command. If you installed OpenStack via packstack (like I did), the default volume group size is about 20 GB. You can confirm this, or check the volume group size, in the packstack answer file:
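For example, on the block storage node (assuming the volume group is named cinder-volumes, as in the configs above):

vgs cinder-volumes

If the group is missing, or its VFree column is close to zero, the scheduler's capacity filter rejects the host, which produces exactly this "No valid host was found" error.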

CONFIG_CINDER_VOLUMES_SIZE=20G

If you want to extend the size of this volume group, use this link:
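In case the link is not visible, a minimal sketch of growing an LVM volume group (assuming a spare disk /dev/sdc; substitute your own device name):

pvcreate /dev/sdc                  # initialize the spare disk as an LVM physical volume
vgextend cinder-volumes /dev/sdc   # add it to the cinder-volumes group
vgs cinder-volumes                 # verify the additional free space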

Hope this solves your problem.

You have put the enabled_backends key in the wrong section. It should be defined in the [DEFAULT] section, on both the controller and storage nodes.
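A minimal sketch of the corrected storage-node cinder.conf, showing only the lines that move (restart cinder-volume afterwards):

[DEFAULT]
# enabled_backends must live here, not under [keystone_authtoken]
enabled_backends = lvm

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = tgtadm

Once the service is restarted, cinder service-list should report the volume service as cinder@lvm instead of plain cinder.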
