Puppet notify does not reload the xinetd service



I am trying to install the check_mk agent via Puppet on a Debian 7 server, using the standard check_mk xinetd configuration file.

check_mk itself installs without problems, but I am running into trouble with the xinetd configuration.

When I change the port in the source configuration file on the Puppet master and run puppet agent -t on the client host, the new configuration is deployed correctly, but Puppet does not reload the xinetd service because the system cannot determine the service's status.

The Puppet manifest looks like this:

    class basic::check-mk {
      case $operatingsystem {
        debian: {
          package { 'check-mk-agent':
            ensure => present,
          }
          file { '/etc/xinetd.d/check_mk':
            notify => Service['xinetd'],
            ensure => file,
            source => 'puppet:///modules/basic/etc--xinetd--checkmk',
            mode   => '0644',
          }
          service { 'xinetd':
            ensure  => running,
            enable  => true,
            restart => '/etc/init.d/xinetd reload',
          }
        }
      }
    }
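As an aside, the same refresh relationship can also be declared from the service's side with subscribe instead of notify on the file; Puppet treats the two forms identically (and the debug log records the edge as a subscription either way). A minimal sketch of that alternative style, reusing the resource names from the manifest:

```puppet
# Hypothetical alternative: same effect as notify => Service['xinetd'] on the file.
service { 'xinetd':
  ensure    => running,
  enable    => true,
  restart   => '/etc/init.d/xinetd reload',
  subscribe => File['/etc/xinetd.d/check_mk'],
}
```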

The debug output looks like this:

    info: Applying configuration version '1464186485'
    debug: /Stage[main]/Ntp::Config/notify: subscribes to Class[Ntp::Service]
    debug: /Stage[main]/Ntp/Anchor[ntp::begin]/before: requires Class[Ntp::Install]
    debug: /Stage[main]/basic::Check-mk/Service[xinetd]/subscribe: subscribes to File[/etc/xinetd.d/check_mk]
    debug: /Stage[main]/Ntp::Install/before: requires Class[Ntp::Config]
    debug: /Stage[main]/Ntp::Service/before: requires Anchor[ntp::end]
    debug: /Schedule[daily]: Skipping device resources because running on a host
    debug: /Schedule[monthly]: Skipping device resources because running on a host
    debug: /Schedule[hourly]: Skipping device resources because running on a host
    debug: Prefetching apt resources for package
    debug: Executing '/usr/bin/dpkg-query -W --showformat '${Status} ${Package} ${Version}\n''
    debug: Puppet::Type::Package::ProviderApt: Executing '/usr/bin/dpkg-query -W --showformat '${Status} ${Package} ${Version}\n''
    debug: /Schedule[never]: Skipping device resources because running on a host
    debug: file_metadata supports formats: b64_zlib_yaml pson raw yaml; using pson
    debug: /Stage[main]/basic::Check-mk/File[/etc/xinetd.d/check_mk]/content: Executing 'diff -u /etc/xinetd.d/check_mk /tmp/puppet-file20160525-10084-1vrr8zf-0'
    notice: /Stage[main]/basic::Check-mk/File[/etc/xinetd.d/check_mk]/content:
    --- /etc/xinetd.d/check_mk      2016-05-25 14:57:26.220873468 +0200
    +++ /tmp/puppet-file20160525-10084-1vrr8zf-0    2016-05-25 16:28:06.393363702 +0200
    @@ -25,7 +25,7 @@
     service check_mk
     {
             type           = UNLISTED
    -        port           = 6556
    +        port           = 6554
             socket_type    = stream
             protocol       = tcp
             wait           = no
    debug: Finishing transaction 70294357735140
    info: FileBucket got a duplicate file {md5}cb0264ad1863ee2b3749bd3621cdbdd0
    info: /Stage[main]/basic::Check-mk/File[/etc/xinetd.d/check_mk]: Filebucketed /etc/xinetd.d/check_mk to puppet with sum cb0264ad1863ee2b3749bd3621cdbdd0
    notice: /Stage[main]/basic::Check-mk/File[/etc/xinetd.d/check_mk]/content: content changed '{md5}cb0264ad1863ee2b3749bd3621cdbdd0' to '{md5}56ac5c1a50c298de4999649b27ef6277'
    debug: /Stage[main]/basic::Check-mk/File[/etc/xinetd.d/check_mk]: The container Class[basic::Check-mk] will propagate my refresh event
    info: /Stage[main]/basic::Check-mk/File[/etc/xinetd.d/check_mk]: Scheduling refresh of Service[xinetd]
    debug: Service[ntp](provider=debian): Executing '/etc/init.d/ntp status'
    debug: Service[xinetd](provider=debian): Executing '/etc/init.d/xinetd status'
    debug: Service[xinetd](provider=debian): Executing '/etc/init.d/xinetd start'
    notice: /Stage[main]/basic::Check-mk/Service[xinetd]/ensure: ensure changed 'stopped' to 'running'
    debug: /Stage[main]/basic::Check-mk/Service[xinetd]: The container Class[basic::Check-mk] will propagate my refresh event
    debug: Service[xinetd](provider=debian): Executing '/etc/init.d/xinetd status'
    debug: /Stage[main]/basic::Check-mk/Service[xinetd]: Skipping restart; service is not running
    notice: /Stage[main]/basic::Check-mk/Service[xinetd]: Triggered 'refresh' from 1 events
    debug: /Stage[main]/basic::Check-mk/Service[xinetd]: The container Class[basic::Check-mk] will propagate my refresh event
    debug: Class[basic::Check-mk]: The container Stage[main] will propagate my refresh event
    debug: /Schedule[weekly]: Skipping device resources because running on a host
    debug: /Schedule[puppet]: Skipping device resources because running on a host
    debug: Finishing transaction 70294346109840
    debug: Storing state
    debug: Stored state in 0.01 seconds
    notice: Finished catalog run in 1.43 seconds
    debug: Executing '/etc/puppet/etckeeper-commit-post'
    debug: report supports formats: b64_zlib_yaml pson raw yaml; using pson

The following line looks suspicious to me:

    debug: /Stage[main]/basic::Check-mk/Service[xinetd]: Skipping restart; service is not running

Running service --status-all shows [ ? ] for xinetd. Why can't the system determine the service's status?

The debug log and the output of your manual service command indicate that your xinetd initscript does not have a working status subcommand. As a result, Puppet cannot tell how (or whether) to manage its run state.

You could fix the initscript so that it recognizes the status subcommand and responds in an LSB-compliant way (or at least exits with code 0 if the service is running and anything else otherwise). Alternatively, you can add a status attribute to the Service resource, providing an alternate command that Puppet can use to determine the service's run state. (I have linked the current documentation, but I am fairly sure Service has had this attribute since well before Puppet 2.7.)
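The second option from the answer could look like the following sketch. The pidof-based command is an assumption (it mirrors the check the poster later added to the initscript); any command that exits 0 while xinetd runs and non-zero otherwise would do:

```puppet
service { 'xinetd':
  ensure  => running,
  enable  => true,
  restart => '/etc/init.d/xinetd reload',
  # Alternate status command for Puppet: exit 0 when xinetd is running,
  # non-zero otherwise, since the initscript lacks a status subcommand.
  status  => 'pidof xinetd > /dev/null',
}
```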

Solved: To fix this, I had to add a status section to xinetd's init.d script. After that, both service xinetd status and Puppet were able to determine the service's status. The added section looks like this:

    status)
        if pidof xinetd > /dev/null
        then
          echo "xinetd is running."
          exit 0
        else
          echo "xinetd is NOT running."
          exit 1
        fi
    ;;

In addition, I added the status option to the usage line:

    *)
        echo "Usage: /etc/init.d/xinetd {start|stop|reload|force-reload|restart|status}"
        exit 1
    ;;
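The part Puppet actually cares about is the exit code, not the message: status must exit 0 while the daemon runs and non-zero once it is gone. A self-contained sketch of that contract, using a background sleep as a stand-in for xinetd (assumption: no real service is touched here):

```shell
# Start a stand-in "daemon" for the demo.
sleep 5 &
pid=$!

# kill -0 only probes for process existence; exit 0 means "running",
# which is what the status branch reports with exit 0.
if kill -0 "$pid" 2>/dev/null; then
  echo "stand-in is running."
fi

# Stop it and probe again; a failed probe is the "NOT running" / exit 1 case.
kill "$pid"
wait "$pid" 2>/dev/null || true
if ! kill -0 "$pid" 2>/dev/null; then
  echo "stand-in is NOT running."
fi
```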

That solved the problem.
