I created an OCF resource agent and want to run it as a master/slave set. Initially my monitor function returned OCF_SUCCESS on a running node (regardless of whether it was the master or the slave). That did work, but Pacemaker had no way of knowing which node was the current master (both instances were reported as slaves).
That is why I changed the monitor function to return OCF_RUNNING_MASTER on the master and OCF_SUCCESS on the slave (I saw this pattern in the drbd agent's code). Unfortunately, Pacemaker seems to interpret this as an error: it kills the master node, moves the second node to master, and so on.
Does anyone know how I can get Pacemaker to treat OCF_RUNNING_MASTER as success?
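Roughly, the monitor now looks like this (a simplified sketch, not my complete agent; my_service_is_running and is_master_here stand in for my own checks):

    # return codes like $OCF_SUCCESS, $OCF_NOT_RUNNING and $OCF_RUNNING_MASTER
    # are provided by the standard OCF shell functions
    . ${OCF_FUNCTIONS_DIR:-$OCF_ROOT/lib/heartbeat}/ocf-shellfuncs

    cluster_monitor() {
        if ! my_service_is_running; then      # placeholder for my liveness check
            return $OCF_NOT_RUNNING           # 7
        fi
        if is_master_here; then               # placeholder for my role check
            return $OCF_RUNNING_MASTER        # 8 - this instance runs as master
        fi
        return $OCF_SUCCESS                   # 0 - this instance runs as slave
    }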
crm configuration:
node 3232286770: VStorage1
    attributes standby=off
node 3232286771: VStorage2
primitive virtual_ip IPaddr2
    params ip=192.168.100.230 cidr_netmask=32 nic=ens256
    op monitor interval=10s
    meta migration-threshold=10
primitive filecluster ocf:msn:cluster
    op start timeout=120 interval=0
    op stop timeout=120 interval=0
    op promote timeout=120 interval=0
    op demote timeout=120 interval=0
    op monitor interval=20s role=Slave
    op monitor interval=10s role=Master
    meta migration-threshold=10
ms ms filecluster
property cib-bootstrap-options:
    have-watchdog=false
    dc-version=1.1.14-70404b0
    cluster-infrastructure=corosync
    cluster-name=debian
    stonith-enabled=false
    no-quorum-policy=ignore
crm status output:
root@VStorage1:/usr/lib/ocf/resource.d# crm status
Last updated: Mon Nov 5 11:21:34 2018 Last change: Fri Nov 2 20:22:53 2018 by root via cibadmin on VStorage1
Stack: corosync
Current DC: VStorage1 (version 1.1.14-70404b0) - partition with quorum
2 nodes and 3 resources configured
Online: [ VStorage1 VStorage2 ]
Full list of resources:
virtual_ip (ocf::heartbeat:IPaddr2): Started VStorage1
Master/Slave Set: ms [filecluster]
    Slaves: [ VStorage1 ]
    Stopped: [ VStorage2 ]
Failed Actions:
* filecluster_monitor_20000 on VStorage1 'master' (8): call=153, status=complete, exitreason='none',
    last-rc-change='Fri Nov 2 20:27:28 2018', queued=0ms, exec=0ms
* filecluster_monitor_20000 on VStorage2 'master' (8): call=135, status=complete, exitreason='none',
    last-rc-change='Fri Nov 2 20:27:11 2018', queued=0ms, exec=0ms
A master/slave resource agent reports two slaves only when promotion to master fails. Under what conditions does your OCF agent get promoted to master? Have a look at the drbd agent to see the conditions under which a resource gets promoted.
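Note also what the failed actions in your status output show: the 20s monitor is the one you configured with role=Slave, and it returned rc 8 ('master') on both nodes. Pacemaker treats OCF_RUNNING_MASTER from an instance it has not itself promoted as a failure (an unexpected master) and recovers it, which is the kill/failover loop you are seeing. OCF_RUNNING_MASTER only counts as success on the instance whose promote action Pacemaker has actually run. In addition, Pacemaker normally only promotes an instance that has a master preference score, which the agent is expected to set itself with crm_master; the drbd agent does this from its monitor/notify handlers. A rough sketch of that pattern (the function names and score values below are illustrative placeholders, not drbd's actual logic):

    . ${OCF_FUNCTIONS_DIR:-$OCF_ROOT/lib/heartbeat}/ocf-shellfuncs

    cluster_monitor() {
        if ! my_service_is_running; then         # placeholder
            crm_master -l reboot -D              # drop our master score: never promote a dead instance
            return $OCF_NOT_RUNNING
        fi
        if instance_is_master; then              # placeholder: true only after a successful promote
            crm_master -l reboot -v 100          # keep a high score while we are master
            return $OCF_RUNNING_MASTER
        fi
        crm_master -l reboot -v 75               # healthy slave: advertise that we are promotable
        return $OCF_SUCCESS
    }

    cluster_promote() {
        switch_service_to_master || return $OCF_ERR_GENERIC   # placeholder
        return $OCF_SUCCESS
    }

    cluster_demote() {
        switch_service_to_slave || return $OCF_ERR_GENERIC    # placeholder
        return $OCF_SUCCESS
    }

Until at least one slave has set a master score this way, Pacemaker has no candidate to promote, which is exactly the "both instances report as slave" situation you started from.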