Jul 30 12:58:50 node5 systemd[1]: Started Icinga host/service/network monitoring system.
Jul 30 12:58:54 node5 pvedaemon[2473016]: successful auth for user 'prometheus@pve'
Jul 30 12:59:00 node5 systemd[1]: Starting Proxmox VE replication runner...
Jul 30 12:59:00 node5 pvesr[2496941]: trying to acquire cfs lock 'file-replication_cfg' ...
Jul 30 12:59:01 node5 pvesr[2496941]: trying to acquire cfs lock 'file-replication_cfg' ...
Jul 30 12:59:02 node5 systemd[1]: pvesr.service: Succeeded.
Jul 30 12:59:02 node5 systemd[1]: Started Proxmox VE replication runner.
Jul 30 12:59:15 node5 systemd[1]: Reloading.
Jul 30 12:59:15 node5 systemd[1]: Reloading.
Jul 30 12:59:15 node5 systemd[1]: Stopping udev Kernel Device Manager...
Jul 30 12:59:15 node5 systemd[1]: systemd-udevd.service: Succeeded.
Jul 30 12:59:15 node5 systemd[1]: Stopped udev Kernel Device Manager.
Jul 30 12:59:15 node5 systemd[1]: Starting udev Kernel Device Manager...
Jul 30 12:59:15 node5 systemd[1]: Started udev Kernel Device Manager.
Jul 30 12:59:15 node5 systemd[1]: Reloading.
Jul 30 12:59:19 node5 systemd[1]: Reloading.
Jul 30 12:59:20 node5 systemd[1]: Reloading.
Jul 30 12:59:20 node5 systemd[1]: Reloading FUSE filesystem for LXC.
Jul 30 12:59:20 node5 systemd[1]: Reloaded FUSE filesystem for LXC.
Jul 30 12:59:21 node5 kernel: [4370944.589666] audit: type=1400 audit(1596106761.524:17): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="/usr/bin/lxc-start" pid=2502010 comm="apparmor_parser"
Jul 30 12:59:21 node5 kernel: [4370944.814281] audit: type=1400 audit(1596106761.748:18): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="lxc-container-default" pid=2502013 comm="apparmor_parser"
Jul 30 12:59:21 node5 kernel: [4370944.814585] audit: type=1400 audit(1596106761.748:19): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="lxc-container-default-cgns" pid=2502013 comm="apparmor_parser"
Jul 30 12:59:21 node5 kernel: [4370944.814897] audit: type=1400 audit(1596106761.748:20): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="lxc-container-default-with-mounting" pid=2502013 comm="apparmor_parser"
Jul 30 12:59:21 node5 kernel: [4370944.815273] audit: type=1400 audit(1596106761.752:21): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="lxc-container-default-with-nesting" pid=2502013 comm="apparmor_parser"
Jul 30 12:59:21 node5 systemd[1]: Reloading.
Jul 30 12:59:22 node5 systemd[1]: Stopping LXC Container Monitoring Daemon...
Jul 30 12:59:22 node5 systemd[1]: lxc-monitord.service: Succeeded.
Jul 30 12:59:22 node5 systemd[1]: Stopped LXC Container Monitoring Daemon.
Jul 30 12:59:22 node5 systemd[1]: Stopping LXC network bridge setup...
Jul 30 12:59:22 node5 systemd[1]: Started LXC Container Monitoring Daemon.
Jul 30 12:59:22 node5 systemd[1]: lxc-net.service: Succeeded.
Jul 30 12:59:22 node5 systemd[1]: Stopped LXC network bridge setup.
Jul 30 12:59:22 node5 systemd[1]: Starting LXC network bridge setup...
Jul 30 12:59:22 node5 systemd[1]: Started LXC network bridge setup.
Jul 30 12:59:22 node5 systemd[1]: Reloading.
Jul 30 12:59:22 node5 systemd[1]: Reloading LXC Container Initialization and Autoboot Code.
Jul 30 12:59:22 node5 kernel: [4370945.584665] audit: type=1400 audit(1596106762.520:22): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="/usr/bin/lxc-start" pid=2502098 comm="apparmor_parser"
Jul 30 12:59:22 node5 kernel: [4370945.618745] audit: type=1400 audit(1596106762.552:23): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="lxc-container-default" pid=2502102 comm="apparmor_parser"
Jul 30 12:59:22 node5 kernel: [4370945.618747] audit: type=1400 audit(1596106762.552:24): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="lxc-container-default-cgns" pid=2502102 comm="apparmor_parser"
Jul 30 12:59:22 node5 kernel: [4370945.618750] audit: type=1400 audit(1596106762.552:25): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="lxc-container-default-with-mounting" pid=2502102 comm="apparmor_parser"
Jul 30 12:59:22 node5 kernel: [4370945.618768] audit: type=1400 audit(1596106762.552:26): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="lxc-container-default-with-nesting" pid=2502102 comm="apparmor_parser"
Jul 30 12:59:22 node5 systemd[1]: Reloaded LXC Container Initialization and Autoboot Code.
Jul 30 12:59:22 node5 systemd[1]: Reloading.
Jul 30 12:59:23 node5 systemd[1]: Starting LSB: exim Mail Transport Agent...
Jul 30 12:59:23 node5 exim4[2502138]: Starting MTA: exim4.
Jul 30 12:59:23 node5 exim4[2502138]: ALERT: exim paniclog /var/log/exim4/paniclog has non-zero size, mail system possibly broken
Jul 30 12:59:23 node5 systemd[1]: Started LSB: exim Mail Transport Agent.
Jul 30 12:59:23 node5 systemd[1]: Reloading.
Jul 30 12:59:23 node5 corosync[3103]: [MAIN ] Node was shut down by a signal
Jul 30 12:59:23 node5 systemd[1]: Stopping Corosync Cluster Engine...
Jul 30 12:59:23 node5 corosync[3103]: [SERV ] Unloading all Corosync service engines.
Jul 30 12:59:23 node5 corosync[3103]: [QB ] withdrawing server sockets
Jul 30 12:59:23 node5 corosync[3103]: [SERV ] Service engine unloaded: corosync vote quorum service v1.0
Jul 30 12:59:23 node5 pmxcfs[2458]: [confdb] crit: cmap_dispatch failed: 2
Jul 30 12:59:23 node5 corosync[3103]: [QB ] withdrawing server sockets
Jul 30 12:59:23 node5 corosync[3103]: [SERV ] Service engine unloaded: corosync configuration map access
Jul 30 12:59:23 node5 corosync[3103]: [QB ] withdrawing server sockets
Jul 30 12:59:23 node5 corosync[3103]: [SERV ] Service engine unloaded: corosync configuration service
Jul 30 12:59:23 node5 pmxcfs[2458]: [dcdb] crit: cpg_dispatch failed: 2
Jul 30 12:59:23 node5 pmxcfs[2458]: [dcdb] crit: cpg_leave failed: 2
Jul 30 12:59:23 node5 pmxcfs[2458]: [status] crit: cpg_dispatch failed: 2
Jul 30 12:59:23 node5 pmxcfs[2458]: [status] crit: cpg_leave failed: 2
Jul 30 12:59:23 node5 corosync[3103]: [QB ] withdrawing server sockets
Jul 30 12:59:23 node5 corosync[3103]: [SERV ] Service engine unloaded: corosync cluster closed process group service v1.01
Jul 30 12:59:23 node5 pmxcfs[2458]: [quorum] crit: quorum_dispatch failed: 2
Jul 30 12:59:23 node5 pmxcfs[2458]: [status] notice: node lost quorum
Jul 30 12:59:23 node5 corosync[3103]: [QB ] withdrawing server sockets
Jul 30 12:59:23 node5 corosync[3103]: [SERV ] Service engine unloaded: corosync cluster quorum service v0.1
Jul 30 12:59:23 node5 corosync[3103]: [SERV ] Service engine unloaded: corosync profile loading service
Jul 30 12:59:23 node5 corosync[3103]: [SERV ] Service engine unloaded: corosync resource monitoring service
Jul 30 12:59:23 node5 corosync[3103]: [SERV ] Service engine unloaded: corosync watchdog service
Jul 30 12:59:24 node5 pmxcfs[2458]: [quorum] crit: quorum_initialize failed: 2
Jul 30 12:59:24 node5 pmxcfs[2458]: [quorum] crit: can't initialize service
Jul 30 12:59:24 node5 pmxcfs[2458]: [confdb] crit: cmap_initialize failed: 2
Jul 30 12:59:24 node5 pmxcfs[2458]: [confdb] crit: can't initialize service
Jul 30 12:59:24 node5 pmxcfs[2458]: [dcdb] notice: start cluster connection
Jul 30 12:59:24 node5 pmxcfs[2458]: [dcdb] crit: cpg_initialize failed: 2
Jul 30 12:59:24 node5 pmxcfs[2458]: [dcdb] crit: can't initialize service
Jul 30 12:59:24 node5 pmxcfs[2458]: [status] notice: start cluster connection
Jul 30 12:59:24 node5 pmxcfs[2458]: [status] crit: cpg_initialize failed: 2
Jul 30 12:59:24 node5 pmxcfs[2458]: [status] crit: can't initialize service
Jul 30 12:59:24 node5 corosync[3103]: [KNET ] host: host: 12 (passive) best link: 0 (pri: 0)
Jul 30 12:59:24 node5 corosync[3103]: [KNET ] host: host: 12 has no active links
Jul 30 12:59:24 node5 corosync[3103]: [MAIN ] Corosync Cluster Engine exiting normally
Jul 30 12:59:24 node5 systemd[1]: corosync.service: Succeeded.
Jul 30 12:59:24 node5 systemd[1]: Stopped Corosync Cluster Engine.
Jul 30 12:59:24 node5 systemd[1]: Starting Corosync Cluster Engine...
Jul 30 12:59:24 node5 corosync[2502472]: [MAIN ] Corosync Cluster Engine 3.0.4 starting up
Jul 30 12:59:24 node5 corosync[2502472]: [MAIN ] Corosync built-in features: dbus monitoring watchdog systemd xmlconf snmp pie relro bindnow
Jul 30 12:59:24 node5 corosync[2502472]: [TOTEM ] Initializing transport (Kronosnet).
Jul 30 12:59:24 node5 corosync[2502472]: [TOTEM ] kronosnet crypto initialized: aes256/sha256
Jul 30 12:59:24 node5 corosync[2502472]: [TOTEM ] totemknet initialized
Jul 30 12:59:24 node5 corosync[2502472]: [KNET ] common: crypto_nss.so has been loaded from /usr/lib/x86_64-linux-gnu/kronosnet/crypto_nss.so
Jul 30 12:59:24 node5 corosync[2502472]: [SERV ] Service engine loaded: corosync configuration map access [0]
Jul 30 12:59:24 node5 corosync[2502472]: [QB ] server name: cmap
Jul 30 12:59:24 node5 corosync[2502472]: [SERV ] Service engine loaded: corosync configuration service [1]
Jul 30 12:59:24 node5 corosync[2502472]: [QB ] server name: cfg
Jul 30 12:59:24 node5 corosync[2502472]: [SERV ] Service engine loaded: corosync cluster closed process group service v1.01 [2]
Jul 30 12:59:24 node5 corosync[2502472]: [QB ] server name: cpg
Jul 30 12:59:24 node5 corosync[2502472]: [SERV ] Service engine loaded: corosync profile loading service [4]
Jul 30 12:59:24 node5 corosync[2502472]: [SERV ] Service engine loaded: corosync resource monitoring service [6]
Jul 30 12:59:24 node5 corosync[2502472]: [WD ] Watchdog not enabled by configuration
Jul 30 12:59:24 node5 corosync[2502472]: [WD ] resource load_15min missing a recovery key.
Jul 30 12:59:24 node5 corosync[2502472]: [WD ] resource memory_used missing a recovery key.
Jul 30 12:59:24 node5 corosync[2502472]: [WD ] no resources configured.
Jul 30 12:59:24 node5 corosync[2502472]: [SERV ] Service engine loaded: corosync watchdog service [7]
Jul 30 12:59:24 node5 corosync[2502472]: [QUORUM] Using quorum provider corosync_votequorum
Jul 30 12:59:24 node5 corosync[2502472]: [SERV ] Service engine loaded: corosync vote quorum service v1.0 [5]
Jul 30 12:59:24 node5 corosync[2502472]: [QB ] server name: votequorum
Jul 30 12:59:24 node5 corosync[2502472]: [SERV ] Service engine loaded: corosync cluster quorum service v0.1 [3]
Jul 30 12:59:24 node5 corosync[2502472]: [QB ] server name: quorum
Jul 30 12:59:24 node5 corosync[2502472]: [KNET ] host: host: 6 (passive) best link: 0 (pri: 0)
Jul 30 12:59:24 node5 corosync[2502472]: [KNET ] host: host: 6 has no active links
Jul 30 12:59:24 node5 corosync[2502472]: [KNET ] host: host: 6 (passive) best link: 0 (pri: 1)
Jul 30 12:59:24 node5 corosync[2502472]: [KNET ] host: host: 6 has no active links
Jul 30 12:59:24 node5 corosync[2502472]: [KNET ] host: host: 6 (passive) best link: 0 (pri: 1)
Jul 30 12:59:24 node5 corosync[2502472]: [KNET ] host: host: 6 has no active links
Jul 30 12:59:24 node5 corosync[2502472]: [KNET ] host: host: 7 (passive) best link: 0 (pri: 0)
Jul 30 12:59:24 node5 corosync[2502472]: [KNET ] host: host: 7 has no active links
Jul 30 12:59:24 node5 corosync[2502472]: [KNET ] host: host: 7 (passive) best link: 0 (pri: 1)
Jul 30 12:59:24 node5 corosync[2502472]: [KNET ] host: host: 7 has no active links
Jul 30 12:59:24 node5 corosync[2502472]: [KNET ] host: host: 7 (passive) best link: 0 (pri: 1)
Jul 30 12:59:24 node5 corosync[2502472]: [KNET ] host: host: 7 has no active links
Jul 30 12:59:24 node5 corosync[2502472]: [KNET ] host: host: 3 (passive) best link: 0 (pri: 1)
Jul 30 12:59:24 node5 corosync[2502472]: [TOTEM ] A new membership (1.23bf1) was formed. Members joined: 1
Jul 30 12:59:24 node5 corosync[2502472]: [KNET ] host: host: 3 has no active links
Jul 30 12:59:24 node5 corosync[2502472]: [KNET ] host: host: 3 (passive) best link: 0 (pri: 1)
Jul 30 12:59:24 node5 corosync[2502472]: [KNET ] host: host: 3 has no active links
Jul 30 12:59:24 node5 corosync[2502472]: [KNET ] host: host: 3 (passive) best link: 0 (pri: 1)
Jul 30 12:59:24 node5 corosync[2502472]: [KNET ] host: host: 3 has no active links
Jul 30 12:59:24 node5 corosync[2502472]: [KNET ] host: host: 4 (passive) best link: 0 (pri: 1)
Jul 30 12:59:24 node5 corosync[2502472]: [KNET ] host: host: 4 has no active links
Jul 30 12:59:24 node5 corosync[2502472]: [KNET ] host: host: 4 (passive) best link: 0 (pri: 1)
Jul 30 12:59:24 node5 corosync[2502472]: [KNET ] host: host: 4 has no active links
Jul 30 12:59:24 node5 corosync[2502472]: [KNET ] host: host: 4 (passive) best link: 0 (pri: 1)
Jul 30 12:59:24 node5 corosync[2502472]: [KNET ] host: host: 4 has no active links
Jul 30 12:59:24 node5 corosync[2502472]: [QUORUM] Members[1]: 1
Jul 30 12:59:24 node5 systemd[1]: Started Corosync Cluster Engine.
Jul 30 12:59:24 node5 corosync[2502472]: [MAIN ] Completed service synchronization, ready to provide service.
Jul 30 12:59:24 node5 corosync[2502472]: [KNET ] host: host: 5 (passive) best link: 0 (pri: 1)
Jul 30 12:59:24 node5 corosync[2502472]: [KNET ] host: host: 5 has no active links
Jul 30 12:59:24 node5 corosync[2502472]: [KNET ] host: host: 5 (passive) best link: 0 (pri: 1)
Jul 30 12:59:24 node5 corosync[2502472]: [KNET ] host: host: 5 has no active links
Jul 30 12:59:24 node5 corosync[2502472]: [KNET ] host: host: 5 (passive) best link: 0 (pri: 1)
Jul 30 12:59:24 node5 corosync[2502472]: [KNET ] host: host: 5 has no active links
Jul 30 12:59:24 node5 corosync[2502472]: [KNET ] host: host: 8 (passive) best link: 0 (pri: 1)
Jul 30 12:59:24 node5 corosync[2502472]: [KNET ] host: host: 8 has no active links
Jul 30 12:59:24 node5 corosync[2502472]: [KNET ] host: host: 8 (passive) best link: 0 (pri: 1)
Jul 30 12:59:24 node5 corosync[2502472]: [KNET ] host: host: 8 has no active links
Jul 30 12:59:24 node5 corosync[2502472]: [KNET ] host: host: 8 (passive) best link: 0 (pri: 1)
Jul 30 12:59:24 node5 corosync[2502472]: [KNET ] host: host: 8 has no active links
Jul 30 12:59:24 node5 corosync[2502472]: [KNET ] host: host: 9 (passive) best link: 0 (pri: 1)
Jul 30 12:59:24 node5 corosync[2502472]: [KNET ] host: host: 9 has no active links
Jul 30 12:59:24 node5 corosync[2502472]: [KNET ] host: host: 9 (passive) best link: 0 (pri: 1)
Jul 30 12:59:24 node5 corosync[2502472]: [KNET ] host: host: 9 has no active links
Jul 30 12:59:24 node5 corosync[2502472]: [KNET ] host: host: 9 (passive) best link: 0 (pri: 1)
Jul 30 12:59:24 node5 corosync[2502472]: [KNET ] host: host: 9 has no active links
Jul 30 12:59:24 node5 corosync[2502472]: [KNET ] host: host: 10 (passive) best link: 0 (pri: 1)
Jul 30 12:59:24 node5 corosync[2502472]: [KNET ] host: host: 10 has no active links
Jul 30 12:59:24 node5 corosync[2502472]: [KNET ] host: host: 10 (passive) best link: 0 (pri: 1)
Jul 30 12:59:24 node5 corosync[2502472]: [KNET ] host: host: 10 has no active links
Jul 30 12:59:24 node5 corosync[2502472]: [KNET ] host: host: 10 (passive) best link: 0 (pri: 1)
Jul 30 12:59:24 node5 corosync[2502472]: [KNET ] host: host: 10 has no active links
Jul 30 12:59:24 node5 corosync[2502472]: [KNET ] host: host: 11 (passive) best link: 0 (pri: 1)
Jul 30 12:59:24 node5 corosync[2502472]: [KNET ] host: host: 11 has no active links
Jul 30 12:59:24 node5 corosync[2502472]: [KNET ] host: host: 11 (passive) best link: 0 (pri: 1)
Jul 30 12:59:24 node5 corosync[2502472]: [KNET ] host: host: 11 has no active links
Jul 30 12:59:24 node5 corosync[2502472]: [KNET ] host: host: 11 (passive) best link: 0 (pri: 1)
Jul 30 12:59:24 node5 corosync[2502472]: [KNET ] host: host: 11 has no active links
Jul 30 12:59:24 node5 corosync[2502472]: [KNET ] host: host: 12 (passive) best link: 0 (pri: 1)
Jul 30 12:59:24 node5 corosync[2502472]: [KNET ] host: host: 12 has no active links
Jul 30 12:59:24 node5 corosync[2502472]: [KNET ] host: host: 12 (passive) best link: 0 (pri: 1)
Jul 30 12:59:24 node5 corosync[2502472]: [KNET ] host: host: 12 has no active links
Jul 30 12:59:24 node5 corosync[2502472]: [KNET ] host: host: 12 (passive) best link: 0 (pri: 1)
Jul 30 12:59:24 node5 corosync[2502472]: [KNET ] host: host: 12 has no active links
Jul 30 12:59:24 node5 corosync[2502472]: [KNET ] host: host: 13 (passive) best link: 0 (pri: 1)
Jul 30 12:59:24 node5 corosync[2502472]: [KNET ] host: host: 13 has no active links
Jul 30 12:59:24 node5 corosync[2502472]: [KNET ] host: host: 13 (passive) best link: 0 (pri: 1)
Jul 30 12:59:24 node5 corosync[2502472]: [KNET ] host: host: 13 has no active links
Jul 30 12:59:24 node5 corosync[2502472]: [KNET ] host: host: 13 (passive) best link: 0 (pri: 1)
Jul 30 12:59:24 node5 corosync[2502472]: [KNET ] host: host: 13 has no active links
Jul 30 12:59:24 node5 corosync[2502472]: [KNET ] host: host: 1 (passive) best link: 0 (pri: 0)
Jul 30 12:59:24 node5 corosync[2502472]: [KNET ] host: host: 1 has no active links
Jul 30 12:59:24 node5 corosync[2502472]: [KNET ] host: host: 2 (passive) best link: 0 (pri: 1)
Jul 30 12:59:24 node5 corosync[2502472]: [KNET ] host: host: 2 has no active links
Jul 30 12:59:24 node5 corosync[2502472]: [KNET ] host: host: 2 (passive) best link: 0 (pri: 1)
Jul 30 12:59:24 node5 corosync[2502472]: [KNET ] host: host: 2 has no active links
Jul 30 12:59:24 node5 corosync[2502472]: [KNET ] host: host: 2 (passive) best link: 0 (pri: 1)
Jul 30 12:59:24 node5 corosync[2502472]: [KNET ] host: host: 2 has no active links
Jul 30 12:59:27 node5 pve-ha-lrm[3279]: lost lock 'ha_agent_node5_lock - cfs lock update failed - Permission denied
Jul 30 12:59:28 node5 systemd[1]: Reloading.
Jul 30 12:59:28 node5 systemd[1]: Condition check resulted in Import ZFS pools by cache file being skipped.
Jul 30 12:59:28 node5 systemd[1]: Stopped target ZFS pool import target.
Jul 30 12:59:28 node5 systemd[1]: Stopping ZFS pool import target.
Jul 30 12:59:28 node5 systemd[1]: Condition check resulted in Import ZFS pools by cache file being skipped.
Jul 30 12:59:28 node5 systemd[1]: Reached target ZFS pool import target.
Jul 30 12:59:28 node5 systemd[1]: zfs-mount.service: Succeeded.
Jul 30 12:59:28 node5 systemd[1]: Stopped Mount ZFS filesystems.
Jul 30 12:59:28 node5 systemd[1]: Stopping Mount ZFS filesystems...
Jul 30 12:59:28 node5 systemd[1]: Starting Mount ZFS filesystems...
Jul 30 12:59:28 node5 systemd[1]: zfs-share.service: Succeeded.
Jul 30 12:59:28 node5 systemd[1]: Stopped ZFS file system shares.
Jul 30 12:59:28 node5 systemd[1]: Stopping ZFS file system shares...
Jul 30 12:59:28 node5 systemd[1]: Stopped target ZFS volumes are ready.
Jul 30 12:59:28 node5 systemd[1]: Stopping ZFS volumes are ready.
Jul 30 12:59:28 node5 systemd[1]: zfs-volume-wait.service: Succeeded.
Jul 30 12:59:28 node5 systemd[1]: Stopped Wait for ZFS Volume (zvol) links in /dev.
Jul 30 12:59:28 node5 systemd[1]: Stopping Wait for ZFS Volume (zvol) links in /dev...
Jul 30 12:59:28 node5 systemd[1]: Starting Wait for ZFS Volume (zvol) links in /dev...
Jul 30 12:59:28 node5 systemd[1]: Started Mount ZFS filesystems.
Jul 30 12:59:28 node5 systemd[1]: Starting ZFS file system shares...
Jul 30 12:59:28 node5 systemd[1]: Stopped target ZFS startup target.
Jul 30 12:59:28 node5 systemd[1]: Stopping ZFS startup target.
Jul 30 12:59:28 node5 zvol_wait[2502555]: No zvols found, nothing to do.
Jul 30 12:59:28 node5 systemd[1]: Started Wait for ZFS Volume (zvol) links in /dev.
Jul 30 12:59:28 node5 systemd[1]: Reached target ZFS volumes are ready.
Jul 30 12:59:28 node5 systemd[1]: Started ZFS file system shares.
Jul 30 12:59:28 node5 systemd[1]: Reached target ZFS startup target.
Jul 30 12:59:29 node5 systemd[1]: Reloading.
Jul 30 12:59:29 node5 corosync[2502472]: [KNET ] rx: host: 13 link: 0 is up
Jul 30 12:59:29 node5 corosync[2502472]: [KNET ] rx: host: 11 link: 0 is up
Jul 30 12:59:29 node5 corosync[2502472]: [KNET ] rx: host: 8 link: 0 is up
Jul 30 12:59:29 node5 corosync[2502472]: [KNET ] rx: host: 9 link: 0 is up
Jul 30 12:59:29 node5 corosync[2502472]: [KNET ] rx: host: 12 link: 0 is up
Jul 30 12:59:29 node5 corosync[2502472]: [KNET ] rx: host: 2 link: 0 is up
Jul 30 12:59:29 node5 corosync[2502472]: [KNET ] rx: host: 10 link: 0 is up
Jul 30 12:59:29 node5 corosync[2502472]: [KNET ] rx: host: 5 link: 0 is up
Jul 30 12:59:29 node5 corosync[2502472]: [KNET ] rx: host: 4 link: 0 is up
Jul 30 12:59:29 node5 corosync[2502472]: [KNET ] host: host: 13 (passive) best link: 0 (pri: 1)
Jul 30 12:59:29 node5 corosync[2502472]: [KNET ] rx: host: 7 link: 0 is up
Jul 30 12:59:29 node5 corosync[2502472]: [KNET ] rx: host: 3 link: 0 is up
Jul 30 12:59:29 node5 corosync[2502472]: [KNET ] rx: host: 6 link: 0 is up
Jul 30 12:59:29 node5 corosync[2502472]: [KNET ] host: host: 11 (passive) best link: 0 (pri: 1)
Jul 30 12:59:29 node5 corosync[2502472]: [KNET ] host: host: 8 (passive) best link: 0 (pri: 1)
Jul 30 12:59:29 node5 corosync[2502472]: [KNET ] host: host: 9 (passive) best link: 0 (pri: 1)
Jul 30 12:59:29 node5 corosync[2502472]: [KNET ] host: host: 12 (passive) best link: 0 (pri: 1)
Jul 30 12:59:29 node5 corosync[2502472]: [KNET ] host: host: 2 (passive) best link: 0 (pri: 1)
Jul 30 12:59:29 node5 corosync[2502472]: [KNET ] host: host: 10 (passive) best link: 0 (pri: 1)
Jul 30 12:59:29 node5 corosync[2502472]: [KNET ] host: host: 5 (passive) best link: 0 (pri: 1)
Jul 30 12:59:29 node5 corosync[2502472]: [KNET ] host: host: 4 (passive) best link: 0 (pri: 1)
Jul 30 12:59:29 node5 corosync[2502472]: [KNET ] host: host: 7 (passive) best link: 0 (pri: 1)
Jul 30 12:59:29 node5 corosync[2502472]: [KNET ] host: host: 3 (passive) best link: 0 (pri: 1)
Jul 30 12:59:29 node5 corosync[2502472]: [KNET ] host: host: 6 (passive) best link: 0 (pri: 1)
Jul 30 12:59:29 node5 pve-ha-crm[3270]: status change slave => wait_for_quorum
Jul 30 12:59:29 node5 systemd[1]: Stopping ZFS Event Daemon (zed)...
Jul 30 12:59:29 node5 zed[1396]: Exiting
Jul 30 12:59:29 node5 systemd[1]: zfs-zed.service: Succeeded.
Jul 30 12:59:29 node5 systemd[1]: Stopped ZFS Event Daemon (zed).
Jul 30 12:59:29 node5 systemd[1]: Started ZFS Event Daemon (zed).
Jul 30 12:59:29 node5 corosync[2502472]: [KNET ] pmtud: PMTUD link change for host: 2 link: 0 from 1157 to 1365
Jul 30 12:59:29 node5 corosync[2502472]: [KNET ] pmtud: PMTUD link change for host: 13 link: 0 from 1157 to 1365
Jul 30 12:59:29 node5 corosync[2502472]: [KNET ] pmtud: PMTUD link change for host: 12 link: 0 from 1157 to 1365
Jul 30 12:59:29 node5 corosync[2502472]: [KNET ] pmtud: PMTUD link change for host: 11 link: 0 from 1157 to 1365
Jul 30 12:59:29 node5 corosync[2502472]: [KNET ] pmtud: PMTUD link change for host: 10 link: 0 from 1157 to 1365
Jul 30 12:59:29 node5 corosync[2502472]: [KNET ] pmtud: PMTUD link change for host: 9 link: 0 from 1157 to 1365
Jul 30 12:59:29 node5 corosync[2502472]: [KNET ] pmtud: PMTUD link change for host: 8 link: 0 from 1157 to 1365
Jul 30 12:59:29 node5 corosync[2502472]: [KNET ] pmtud: PMTUD link change for host: 5 link: 0 from 1157 to 1365
Jul 30 12:59:29 node5 corosync[2502472]: [KNET ] pmtud: PMTUD link change for host: 4 link: 0 from 1157 to 1365
Jul 30 12:59:29 node5 corosync[2502472]: [KNET ] pmtud: PMTUD link change for host: 3 link: 0 from 1157 to 1365
Jul 30 12:59:29 node5 corosync[2502472]: [KNET ] pmtud: PMTUD link change for host: 7 link: 0 from 1157 to 1365
Jul 30 12:59:29 node5 corosync[2502472]: [KNET ] pmtud: PMTUD link change for host: 6 link: 0 from 1157 to 1365
Jul 30 12:59:29 node5 corosync[2502472]: [KNET ] pmtud: Global data MTU changed to: 1365
Jul 30 12:59:29 node5 zed[2502607]: ZFS Event Daemon 0.8.4-pve1 (PID 2502607)
Jul 30 12:59:29 node5 zed[2502607]: Processing events since eid=0
Jul 30 12:59:30 node5 systemd[1]: Reloading.
Jul 30 12:59:30 node5 pmxcfs[2458]: [status] notice: update cluster info (cluster name C1FRA3, version = 13)
Jul 30 12:59:30 node5 systemd[1]: Stopping The Proxmox VE cluster filesystem...
Jul 30 12:59:30 node5 pmxcfs[2458]: [main] notice: teardown filesystem
Jul 30 12:59:31 node5 systemd[2476950]: etc-pve.mount: Succeeded.
Jul 30 12:59:31 node5 systemd[1]: etc-pve.mount: Succeeded.
Jul 30 12:59:31 node5 corosync[2502472]: [TOTEM ] A new membership (1.23bf5) was formed. Members joined: 2 3 4 5 6 7 8 9 10 11 12 13
Jul 30 12:59:33 node5 pve-ha-lrm[3279]: status change active => lost_agent_lock
Jul 30 12:59:33 node5 corosync[2502472]: [KNET ] link: host: 13 link: 0 is down
Jul 30 12:59:33 node5 corosync[2502472]: [KNET ] host: host: 13 (passive) best link: 0 (pri: 1)
Jul 30 12:59:33 node5 corosync[2502472]: [KNET ] host: host: 13 has no active links
Jul 30 12:59:37 node5 corosync[2502472]: [TOTEM ] Token has not been received in 6113 ms
Jul 30 12:59:39 node5 corosync[2502472]: [TOTEM ] A processor failed, forming new configuration.
Jul 30 12:59:40 node5 systemd[1]: pve-cluster.service: State 'stop-sigterm' timed out. Killing.
Jul 30 12:59:40 node5 systemd[1]: pve-cluster.service: Killing process 2458 (pmxcfs) with signal SIGKILL.
Jul 30 12:59:40 node5 systemd[1]: pve-cluster.service: Main process exited, code=killed, status=9/KILL
Jul 30 12:59:40 node5 systemd[1]: pve-cluster.service: Failed with result 'timeout'.
Jul 30 12:59:40 node5 systemd[1]: Stopped The Proxmox VE cluster filesystem.
Jul 30 12:59:40 node5 systemd[1]: Starting The Proxmox VE cluster filesystem...
Jul 30 12:59:40 node5 pmxcfs[2502656]: [status] notice: update cluster info (cluster name C1FRA3, version = 13)
Jul 30 12:59:41 node5 pve-firewall[3212]: status update error: Connection refused
Jul 30 12:59:41 node5 pve-firewall[3212]: firewall update time (10.016 seconds)
Jul 30 12:59:41 node5 pve-firewall[3212]: status update error: Connection refused
Jul 30 12:59:41 node5 pmxcfs[2502656]: [dcdb] notice: cpg_join retry 10
Jul 30 12:59:42 node5 corosync[2502472]: [KNET ] rx: host: 13 link: 0 is up
Jul 30 12:59:42 node5 corosync[2502472]: [KNET ] host: host: 13 (passive) best link: 0 (pri: 1)
Jul 30 12:59:42 node5 pmxcfs[2502656]: [dcdb] notice: cpg_join retry 20
Jul 30 12:59:42 node5 corosync[2502472]: [TOTEM ] A new membership (1.23bf9) was formed. Members
Jul 30 12:59:42 node5 corosync[2502472]: [QUORUM] This node is within the primary component and will provide service.
Jul 30 12:59:42 node5 corosync[2502472]: [QUORUM] Members[13]: 1 2 3 4 5 6 7 8 9 10 11 12 13
Jul 30 12:59:42 node5 corosync[2502472]: [MAIN ] Completed service synchronization, ready to provide service.
Jul 30 12:59:42 node5 pmxcfs[2502656]: [status] notice: node has quorum
Jul 30 12:59:42 node5 pmxcfs[2502656]: [dcdb] notice: members: 1/2458, 1/2502656, 2/2678, 3/2110, 4/2039, 5/2060, 6/1601652, 7/2078, 8/2464, 9/2044, 10/2000, 11/25251, 12/13205, 13/29857
Jul 30 12:59:42 node5 pmxcfs[2502656]: [dcdb] notice: starting data syncronisation
Jul 30 12:59:42 node5 systemd[1]: Started The Proxmox VE cluster filesystem.
Jul 30 12:59:42 node5 pmxcfs[2502656]: [status] notice: members: 1/2458, 1/2502656, 2/2678, 3/2110, 4/2039, 5/2060, 6/1601652, 7/2078, 8/2464, 9/2044, 10/2000, 11/25251, 12/13205, 13/29857
Jul 30 12:59:42 node5 pmxcfs[2502656]: [status] notice: starting data syncronisation
Jul 30 12:59:42 node5 pmxcfs[2502656]: [dcdb] notice: received sync request (epoch 1/2502656/00000001)
Jul 30 12:59:42 node5 pmxcfs[2502656]: [status] notice: received sync request (epoch 1/2502656/00000001)
Jul 30 12:59:42 node5 pmxcfs[2502656]: [dcdb] crit: ignore sync request from wrong member 2/2678
Jul 30 12:59:42 node5 pmxcfs[2502656]: [dcdb] notice: received sync request (epoch 2/2678/0000001B)
Jul 30 12:59:42 node5 pmxcfs[2502656]: [status] crit: ignore sync request from wrong member 2/2678
Jul 30 12:59:42 node5 pmxcfs[2502656]: [status] notice: received sync request (epoch 2/2678/00000018)
Jul 30 12:59:42 node5 pvedaemon[2463567]: successful auth for user 'prometheus@pve'
Jul 30 12:59:43 node5 pvestatd[3214]: status update time (9.266 seconds)
Jul 30 12:59:44 node5 systemd[1]: Reloading.
Jul 30 12:59:44 node5 pvefw-logger[2143318]: received terminate request (signal)
Jul 30 12:59:44 node5 pvefw-logger[2143318]: stopping pvefw logger
Jul 30 12:59:44 node5 systemd[1]: Stopping Proxmox VE firewall logger...
Jul 30 12:59:44 node5 systemd[1]: pvefw-logger.service: Succeeded.
Jul 30 12:59:44 node5 systemd[1]: Stopped Proxmox VE firewall logger.
Jul 30 12:59:44 node5 systemd[1]: Starting Proxmox VE firewall logger...
Jul 30 12:59:44 node5 pvefw-logger[2502780]: starting pvefw logger
Jul 30 12:59:44 node5 systemd[1]: Started Proxmox VE firewall logger.
Jul 30 12:59:44 node5 systemd[1]: Reloading.
Jul 30 12:59:44 node5 systemd[1]: Reloading Proxmox VE firewall.
Jul 30 12:59:45 node5 pve-firewall[2502806]: send HUP to 3212
Jul 30 12:59:45 node5 pve-firewall[3212]: received signal HUP
Jul 30 12:59:45 node5 pve-firewall[3212]: server shutdown (restart)
Jul 30 12:59:45 node5 systemd[1]: Reloaded Proxmox VE firewall.
Jul 30 12:59:45 node5 systemd[1]: Reloading.
Jul 30 12:59:45 node5 pve-firewall[3212]: restarting server
Jul 30 12:59:45 node5 systemd[1]: Reloading.
Jul 30 12:59:46 node5 systemd[1]: Stopping PVE Qemu Event Daemon...
Jul 30 12:59:46 node5 systemd[1]: qmeventd.service: Main process exited, code=killed, status=15/TERM
Jul 30 12:59:46 node5 systemd[1]: qmeventd.service: Succeeded.
Jul 30 12:59:46 node5 systemd[1]: Stopped PVE Qemu Event Daemon.
Jul 30 12:59:46 node5 systemd[1]: Starting PVE Qemu Event Daemon...
Jul 30 12:59:46 node5 systemd[1]: Started PVE Qemu Event Daemon.
Jul 30 12:59:46 node5 systemd[1]: Reloading.
Jul 30 12:59:48 node5 systemd[1]: Reloading PVE API Daemon.
Jul 30 12:59:48 node5 pvedaemon[2502925]: send HUP to 3262
Jul 30 12:59:48 node5 pvedaemon[3262]: received signal HUP
Jul 30 12:59:48 node5 pvedaemon[3262]: server closing
Jul 30 12:59:48 node5 pvedaemon[3262]: server shutdown (restart)
Jul 30 12:59:49 node5 systemd[1]: Reloaded PVE API Daemon.
Jul 30 12:59:49 node5 systemd[1]: Reloading PVE API Proxy Server.
Jul 30 12:59:49 node5 pveproxy[2502929]: send HUP to 3271
Jul 30 12:59:49 node5 pveproxy[3271]: received signal HUP
Jul 30 12:59:49 node5 pveproxy[3271]: server closing
Jul 30 12:59:49 node5 pveproxy[3271]: server shutdown (restart)
Jul 30 12:59:49 node5 pvedaemon[3262]: restarting server
Jul 30 12:59:49 node5 pvedaemon[3262]: starting 3 worker(s)
Jul 30 12:59:49 node5 pvedaemon[3262]: worker 2502930 started
Jul 30 12:59:49 node5 pvedaemon[3262]: worker 2502931 started
Jul 30 12:59:49 node5 pvedaemon[3262]: worker 2502932 started
Jul 30 12:59:49 node5 systemd[1]: Reloaded PVE API Proxy Server.
Jul 30 12:59:49 node5 systemd[1]: Reloading PVE SPICE Proxy Server.
Jul 30 12:59:50 node5 spiceproxy[2502936]: send HUP to 3277
Jul 30 12:59:50 node5 spiceproxy[3277]: received signal HUP
Jul 30 12:59:50 node5 spiceproxy[3277]: server closing
Jul 30 12:59:50 node5 spiceproxy[3277]: server shutdown (restart)
Jul 30 12:59:50 node5 systemd[1]: Reloaded PVE SPICE Proxy Server.
Jul 30 12:59:50 node5 systemd[1]: Reloading PVE Status Daemon.
Jul 30 12:59:50 node5 spiceproxy[3277]: restarting server
Jul 30 12:59:50 node5 spiceproxy[3277]: starting 1 worker(s)
Jul 30 12:59:50 node5 spiceproxy[3277]: worker 2502941 started
Jul 30 12:59:50 node5 pveproxy[3271]: Using '/etc/pve/local/pveproxy-ssl.pem' as certificate for the web interface.
Jul 30 12:59:50 node5 pveproxy[3271]: restarting server
Jul 30 12:59:50 node5 pveproxy[3271]: starting 3 worker(s)
Jul 30 12:59:50 node5 pveproxy[3271]: worker 2502942 started
Jul 30 12:59:50 node5 pveproxy[3271]: worker 2502943 started
Jul 30 12:59:50 node5 pveproxy[3271]: worker 2502944 started
Jul 30 12:59:50 node5 pvestatd[2502940]: send HUP to 3214
Jul 30 12:59:50 node5 pvestatd[3214]: received signal HUP
Jul 30 12:59:50 node5 pvestatd[3214]: server shutdown (restart)
Jul 30 12:59:50 node5 systemd[1]: Reloaded PVE Status Daemon.
Jul 30 12:59:50 node5 systemd[1]: pvebanner.service: Succeeded.
Jul 30 12:59:50 node5 systemd[1]: Stopped Proxmox VE Login Banner.
Jul 30 12:59:50 node5 systemd[1]: Stopping Proxmox VE Login Banner...
Jul 30 12:59:50 node5 systemd[1]: Starting Proxmox VE Login Banner...
Jul 30 12:59:50 node5 systemd[1]: Started Proxmox VE Login Banner.
Jul 30 12:59:50 node5 systemd[1]: pvesr.timer: Succeeded.
Jul 30 12:59:50 node5 systemd[1]: Stopped Proxmox VE replication runner.
Jul 30 12:59:50 node5 systemd[1]: Stopping Proxmox VE replication runner.
Jul 30 12:59:50 node5 systemd[1]: Started Proxmox VE replication runner.
Jul 30 12:59:50 node5 systemd[1]: pve-daily-update.timer: Succeeded.
Jul 30 12:59:50 node5 systemd[1]: Stopped Daily PVE download activities.
Jul 30 12:59:50 node5 systemd[1]: Stopping Daily PVE download activities.
Jul 30 12:59:50 node5 systemd[1]: Started Daily PVE download activities.
Jul 30 12:59:51 node5 pvestatd[3214]: restarting server
Jul 30 12:59:51 node5 dbus-daemon[1420]: [system] Reloaded configuration
Jul 30 12:59:51 node5 systemd[1]: Reloading.
Jul 30 12:59:54 node5 pvedaemon[2463132]: worker exit
Jul 30 12:59:54 node5 pvedaemon[3262]: worker 2473016 finished
Jul 30 12:59:54 node5 pvedaemon[3262]: worker 2463567 finished
Jul 30 12:59:54 node5 pvedaemon[3262]: worker 2463132 finished
Jul 30 12:59:55 node5 spiceproxy[2143334]: worker exit
Jul 30 12:59:55 node5 spiceproxy[3277]: worker 2143334 finished
Jul 30 12:59:55 node5 pveproxy[2478831]: worker exit
Jul 30 12:59:55 node5 pveproxy[2476236]: worker exit
Jul 30 12:59:55 node5 pveproxy[2438194]: worker exit
Jul 30 12:59:55 node5 pveproxy[3271]: worker 2476236 finished
Jul 30 12:59:55 node5 pveproxy[3271]: worker 2438194 finished
Jul 30 12:59:55 node5 pveproxy[3271]: worker 2478831 finished
Jul 30 12:59:57 node5 pvedaemon[2504106]: worker exit
Jul 30 13:00:00 node5 systemd[1]: Starting Proxmox VE replication runner...