Channel: Recent Topics - HA-Lizard

drbd wfconnection and standalone condition - by: ledge

I have been working on a test pool to troubleshoot problems on my main pool, for which I have paperwork pending for a support session. I simulated a power failure on the master, and the pool successfully transferred responsibilities to the slave. After reconnecting the former master to the pool, I now have the original master back as the primary server and the slave set as the secondary server, but DRBD reports a WFConnection state on the primary and StandAlone on the secondary.

- XenCenter reports connectivity to both of my iSCSI SRs.
- I can enter manual mode, swap the primary and secondary roles, and reboot either server while it is the secondary in manual mode, and the VMs stay active.

It just appears that DRBD has some connectivity issue.
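For reference, a WFConnection/StandAlone pair is the classic sign of a DRBD split-brain that DRBD refuses to auto-resolve. A minimal manual recovery sketch, assuming the resource is named iscsi1 as in the stock noSAN setup and that the StandAlone secondary's changes can be thrown away:

# On the StandAlone secondary (the node whose changes will be discarded):
drbdadm secondary iscsi1
drbdadm connect --discard-my-data iscsi1

# On the WFConnection primary:
drbdadm connect iscsi1

# Verify both sides return to Connected and UpToDate/UpToDate:
cat /proc/drbd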

Replacing Failed drive in Server (RAID 1+0) - by: Adam Ward

Hi,

I have 2 x DL 380 G6 Servers in a HA-Lizard NoSAN configuration.

The servers each have a RAID 1+0 array consisting of 6 x 300GB Disks - this is presented into XenServer as my HA-Lizard iSCSI SR.

One of the Servers has just reported that one of the 300GB Disks has failed.

Am I OK to just pull the old drive and put a new one in? I'm assuming that, because the HP server's RAID controller owns the disks, HA-Lizard will not notice anything happening while the drive is rebuilt...

I thought I should ask just in case I need to be doing something different, as I know these days it's not always just "put a new disk in" (thinking of ZFS pools, etc.).

Thanks,

Adam
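For reference, a rough way to keep an eye on things during the rebuild (assuming HP's hpacucli/ssacli array utility is installed in dom0, which is separate from HA-Lizard itself):

# Hardware-level view of the array and the rebuild progress:
hpacucli ctrl all show config

# Replication-level sanity check -- DRBD should stay Connected/UpToDate throughout,
# since the rebuild happens below the layer that HA-Lizard and DRBD see:
cat /proc/drbd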

What are the implications of DRBD Single-Primary? - by: Sean Nelson

I apologize if this has been discussed before. It seems to be a very fundamental concept, but I haven't been able to find much discussion on it.

DRBD runs in a Single-Primary mode, right? I am having a hard time finding a definitive answer to what that means. Intuitively, I would assume that means if Host A is Primary and Host B is secondary, only the changes written to host A will be kept.

In that case, what happens if, in XenCenter, I right-click a VM and tell it to move to Host B? Does that mean all changes written to Host B will now be lost?

Do all of my VMs need to be located on a single host? Or does DRBD have some logic that allows the secondary to overwrite the primary with newer data when they have not been split?
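For reference, a quick way to see which node DRBD currently treats as Primary (a sketch assuming the default noSAN resource name iscsi1):

drbdadm role iscsi1     # prints the local/peer roles, e.g. Primary/Secondary
cat /proc/drbd          # the ro: field shows the role pair, ds: the disk states

In the noSAN design both hosts attach the SR over iSCSI from the floating target on the DRBD Primary, so writes from a VM running on the Secondary host still land on the Primary's disk and are replicated normally.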

Auto switch pool master - how to configure? - by: -

Good morning,
I have a question:
How do I properly configure a 2-node pool with automatic pool-master switching?
I followed your tutorial on YouTube and installed ha-lizard on both the primary and the secondary server, but unfortunately when the primary (pool master) server shuts down, the secondary server doesn't become the pool master, and I don't know what I'm doing wrong.
HA is configured with the default settings, which I assume should be enough for the automatic pool-master switch to work.
Additionally: must HA be enabled on both of the servers, or only on the primary server? (The YouTube tutorial only mentions the primary server.)
Thank you for your help.
Mike
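For anyone comparing notes, a minimal check that HA is actually armed on each host, using the ha-cfg tool that ships with HA-Lizard (the get subcommand is an assumption; status is mentioned elsewhere in these threads):

# Run on each pool member:
ha-cfg status     # shows whether HA is enabled for this host/pool
ha-cfg get        # (assumed) dumps the active configuration parameters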

Test run power failure on master - by: ajmind

Today I tested pulling the power on the master of our two-node setup.

Unfortunately, the slave correctly tried to self-promote to master but failed for some reason.

Could you please check the log from the slave sent via email?
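For other readers who hit the same failure: with ENABLE_LOGGING=1 the promotion attempt is written to syslog, so a first pass on the slave could look like this (the grep patterns are just assumptions about the message tags):

grep -i ha-lizard /var/log/messages | tail -200
grep -iE 'fence|promote' /var/log/messages | tail -50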

Test run power failure on master - by: Clayton

Good Morning,

I'm in the environment-testing phase. Could you help me test?

I am a newbie with this HA software.

Test 1:
I did the HA configuration using the documentation; it's all OK.

I ran the following test:
I have two servers, primary and secondary, and one virtual machine.

I send the virtual machine to the secondary, shut the secondary down, and the VM returns to the primary.

But when the VM is on the primary and the primary goes down, it does not go to the secondary.

Can you help me?

Clayton

Problems with nosan_installer on XenServer 7 - by: conXO

I am trying to install HA-Lizard on a XenServer 7 cluster using the halizard_nosan_installer_1.4.7. (I need the XenServer v. 7 because of better support for different Xeon E5-26xx CPU capabilities.)

I had to make some changes to the script, but it is (mostly) up and running. The only problem I have is the tgtd service is not starting: "iscsi-ha-NOTICE-/etc/iscsi-ha/iscsi-ha.sh: /etc/iscsi-ha/iscsi-ha.sh: line 237: /etc/init.d/tgtd: No such file or directory".

I had to install the EPEL repo and install tgt from it, and that does not provide /etc/init.d/tgtd (because of the new handling of services with systemd in CentOS 7).

I found the setting: "ISCSI_TARGET_SERVICE=/etc/init.d/tgtd" in /etc/iscsi-ha/iscsi-ha.conf. I was thinking of replacing this setting with "/usr/sbin/service tgtd", but after looking at iscsi-ha.sh, that probably won't work, because it is using "`basename $ISCSI_TARGET_SERVICE`".

Is there any way of getting iSCSI-HA working with XenServer 7?
Are you already working on a solution for XenServer 7? ;)
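One possible workaround, sketched here without having tested it on XenServer 7: give iscsi-ha the init-style script it expects by wrapping systemctl. The path /etc/init.d/tgtd matches the error above, and the systemd unit name tgtd is an assumption.

#!/bin/bash
# /etc/init.d/tgtd -- minimal shim so iscsi-ha.sh can call start/stop/status
# against a systemd-managed tgtd (assumes the unit is named tgtd).
case "$1" in
    start|stop|restart|status)
        exec systemctl "$1" tgtd
        ;;
    *)
        echo "Usage: $0 {start|stop|restart|status}"
        exit 1
        ;;
esac

Make it executable afterwards with chmod +x /etc/init.d/tgtd.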

HALizard No San Installer - by: DustinB3403

So I setup my 2 node configuration using the NoSAN installer, and created the shared iSCSI virtual disk storage.

I just wanted to confirm whether this looks correct. Also, is there a way for me to know when the storage setup is 100% complete, so I know when I can move my servers?
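For what it's worth, a quick way to tell whether the initial mirror has finished (assuming the installer created the usual single DRBD resource):

cat /proc/drbd
# While the initial sync runs you will see a progress bar and ds:UpToDate/Inconsistent;
# once both nodes show ds:UpToDate/UpToDate the storage is fully replicated.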


HALizard Installed and configured - by: DustinB3403

Just curious if this looks correct.



I'm creating a new VM on the cluster, and it appears on my second host (not that it matters to me).

Is it correct for it to appear like this?

What is a good way for me to test that the cluster is functional? Just pull the management NIC on xenserver-two and see if the VM gets migrated?

floating IP is not online - by: Pieffers

Hi, this is my first post, please be gentle.

I have worked through the manual "Reference design and how-to for a HA 2-node XenServer Pool". I followed all installation and configuration steps, except for the fact that I do not use bonded interfaces.

In the final steps I should create a shared iSCSI SR, but the floating IP is not online. My DRBD interfaces have the IP addresses 10.10.10.1 and 10.10.10.2 respectively, and the floating IP is configured in iscsi-ha.conf as 10.10.10.3.

How and when should this floating interface come online?

Here are my files:

targets.conf:

# Set the driver. If not specified, defaults to "iscsi".
default-driver iscsi

# Set iSNS parameters, if needed
#iSNSServerIP 192.168.111.222
#iSNSServerPort 3205
#iSNSAccessControl On
#iSNS On

# Continue if tgtadm exits with non-zero code (equivalent of
# --ignore-errors command line option)
#ignore-errors yes
<target iqn.2016.lan.cvo:xenserver-test>
backing-store /dev/drbd1
scsi_id 0000000000
scsi_sn 0000000001
lun 10
</target>



drbd.conf:


global { usage-count no; }
common { syncer { rate 100M; } }
resource iscsi1 {
protocol C;
net {
after-sb-0pri discard-zero-changes;
after-sb-1pri consensus;
cram-hmac-alg sha1;
shared-secret "samson39";
}
on xenserver-test-01 {
device /dev/drbd1;
disk /dev/sdb;
address 10.10.10.1:7789;
meta-disk internal;
}
on xenserver-test-02 {
device /dev/drbd1;
disk /dev/sdb;
address 10.10.10.2:7789;
meta-disk internal;
}
}



iscsi-ha.conf:


DRBD_RESOURCES=iscsi1
ISCSI_TARGET_SERVICE=/etc/init.d/tgtd
DRBD_VIRTUAL_IP=10.10.10.3
DRBD_VIRTUAL_MASK=255.255.255.0
DRBD_INTERFACE=xenbr1
MONITOR_MAX_STARTS=5
MONITOR_DELAY=10
MONITOR_KILLALL=1
MONITOR_SCANRATE=5


Ifconfig:

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
ether 00:26:18:0c:a9:09 txqueuelen 1000 (Ethernet)
RX packets 17113 bytes 6708443 (6.3 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 13228 bytes 4197057 (4.0 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
ether 00:13:3b:12:6b:be txqueuelen 1000 (Ethernet)
RX packets 257 bytes 21272 (20.7 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 287 bytes 23016 (22.4 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
loop txqueuelen 0 (Local Loopback)
RX packets 246 bytes 40697 (39.7 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 246 bytes 40697 (39.7 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

xenbr0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.20.50.57 netmask 255.255.255.0 broadcast 172.20.50.255
ether 00:26:18:0c:a9:09 txqueuelen 0 (Ethernet)
RX packets 17058 bytes 6459915 (6.1 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 13244 bytes 4199857 (4.0 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

xenbr1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.10.10.2 netmask 255.255.255.0 broadcast 10.10.10.255
ether 00:13:3b:12:6b:be txqueuelen 0 (Ethernet)
RX packets 257 bytes 17674 (17.2 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 287 bytes 21594 (21.0 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0




fdisk -l


Disk /dev/sdb: 250.1 GB, 250059350016 bytes, 488397168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.

Disk /dev/sda: 250.1 GB, 250059350016 bytes, 488397168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: gpt


# Start End Size Type Name
1 46139392 83888127 18G Microsoft basic
2 8390656 46139391 18G Microsoft basic
3 83888128 84936703 512M BIOS boot parti
5 2048 8390655 4G Microsoft basic
6 84936704 87033855 1G Linux swap

Disk /dev/mapper/VG_XenStorage--8562342e--5cf9--676f--9a34--a7a7addd68f1-MGT: 4 MB, 4194304 bytes, 8192 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes







Many thx!
Martijn
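A note for readers with the same symptom: the floating IP is brought up by the iscsi-ha service on whichever node currently holds the DRBD Primary role, so it only answers once iscsi-ha is running and one node has been promoted. A quick sketch for checking this, using the resource and interface names from the configs above (the init script name iscsi-ha is assumed from the noSAN installer):

service iscsi-ha status
drbdadm role iscsi1        # exactly one node should report Primary
ip addr show xenbr1        # on the Primary, 10.10.10.3 should be listed as an additional address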

Xenserver 7 support? - by: Sean Nelson

Is Xenserver 7 supported by HA-Lizard? I'm about to try it in a test environment, but would like to know if it would be appropriate for a production environment.

Master down and slave no wake up - by: juninhomax18

The master went down and the slave did not take over the pool. :woohoo:

Is there a solution?

:(

E-Mail alert behaviour at OP_MODE=1 - by: ajmind

I am running a single vApp group in OP_MODE=1 with four VMs. If I disable HA, start the group, and then re-enable HA, I get the message below every hour:



I could also trace this condition in the message logs:



Could this behaviour be changed so that the trigger for sending e-mails first checks whether the vApp group is already running?

BR Andreas

Is possible add ssd disk like cache for HA-Lizard - by: Juliano Silva

Dear all,

How can I add an SSD disk as a cache for HA-Lizard noSAN?

Best Regards,
Juliano
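HA-Lizard itself has no caching layer, so one approach (a sketch only, not something the noSAN installer provides, and it requires bcache support in the dom0 kernel, which stock XenServer may not have) is to put a block-level cache such as bcache under the DRBD backing disk and point DRBD at the cached device. Device names here are assumptions:

# Assumption: /dev/sdb is the slow backing disk, /dev/sdc is the SSD.
make-bcache -B /dev/sdb -C /dev/sdc     # creates /dev/bcache0
# Then in drbd.conf use the cached device as the backing disk:
#     disk /dev/bcache0;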

Install HA-Lizard noSAN with existing VMs - by: Andrew R

Hello, I've used HA-Lizard on two empty XenServer 6.5 boxes and it worked successfully. However, that was almost a year ago, so I forget the exact steps taken. I was wondering whether it is possible to do the installation while one server has virtual machines installed and the other server is brand new and does not.

So, prior to the DRBD sync, do the volumes have to be formatted, or is it possible for the data on server 'A' to be replicated to server 'B'?
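For reference, DRBD does not require the backing volume to be empty: after the metadata is created, the host that already holds the data can be forced Primary, which marks it as the sync source and replicates its contents block-for-block to the empty peer. A rough sketch, assuming the usual resource name iscsi1 and that server 'A' holds the existing VMs (caveat: internal metadata lives at the end of the device, so the existing volume must leave room for it, or external metadata must be used):

# On server 'A' (the one with the existing data):
drbdadm create-md iscsi1
drbdadm up iscsi1
drbdadm primary --force iscsi1    # marks this side UpToDate and starts the full sync

# On server 'B' (the empty one):
drbdadm create-md iscsi1
drbdadm up iscsi1
cat /proc/drbd                    # watch the sync progress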

HA-Lizard emailing error re autopromote_uuid - by: Adam Ward

Hi,

I've just configured my HA-Lizard Pool to send email alerts.

I've just noticed that it's sending me this alert:

"write_pool_state: Error retrieving autopromote_uuid from pool configuration"

Does anyone know what this is / what I need to do to fix it?

Regards,

Adam

Master Up, slave lost network & VM stuck on slave - by: Tobias Kreidl

Situation:
2-server pool, the master is up and running. The slave server lost all network connectivity and has a VM running on it. Settings are:
DISABLED_VAPPS=()
ENABLE_LOGGING=1
FENCE_ACTION=stop
FENCE_ENABLED=0
FENCE_FILE_LOC=/etc/ha-lizard/fence
FENCE_HA_ONFAIL=0
FENCE_HEURISTICS_IPS=10.15.9.1
FENCE_HOST_FORGET=0
FENCE_IPADDRESS=
FENCE_METHOD=POOL
FENCE_MIN_HOSTS=2
FENCE_PASSWD=
FENCE_QUORUM_REQUIRED=1
FENCE_REBOOT_LONE_HOST=0
FENCE_USE_IP_HEURISTICS=1

Added Note: Changing "FENCE_HA_ONFAIL=1" made no difference in the behavior.

The one stuck VM on the network-less slave server did not restart on the master (it's marked true), and what's more, the VM has disappeared from view in XenCenter! It's still running, though, according to "xe vm-list", and it shows up as enabled for HA via "ha-cfg get-vm-ha". The pool OP_MODE is set to 2. A clean shutdown of the slave node forces the migration to work, but not this state where the server is running but has lost just its network connectivity.

What settings do I need to modify so that the VM is restarted on the master server automatically? And why did the VM disappear altogether from view in XenCenter?! We're running XenServer 6.5 SP1, fully patched up to XS65ESP1034.

Thank you for any assistance!

--Tobias
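Not an answer to the settings question, but for recovering the one stuck VM by hand, the usual XenServer sequence is roughly the following (the VM name and host are placeholders; only reset the power state once you are certain the VM is no longer actually running on the unreachable slave, otherwise you risk running it twice):

xe vm-list name-label=<vm-name> params=uuid,power-state,resident-on
xe vm-reset-powerstate vm=<vm-name> --force     # clears the stale "running" record
xe vm-start vm=<vm-name> on=<master-host-name>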

iSCSI SR Broken if I reboot Slave Server - by: Adam Ward

Hi,

We are about to put HA-Lizard into production, but have an issue I'd like to clear up first:

If we cleanly reboot the Slave XenServer, the iSCSI Storage Repository is marked as "Broken".

No matter what I do (leave it alone, try to replug it using Xen or the HA-Lizard replug script), the slave iSCSI SR remains "Broken".

The only way I can fix this is to shut both servers down, restart the Pool Master, then restart the Slave. The iSCSI SR eventually comes back online...

I do get warnings in XenServer "Failed to attach storage or server start" and sometimes the Pool Master requires a "Repair Storage" to connect to the iSCSI SR. Is this correct?

Can anyone explain why this happens / help me fix it?

Thanks in advance,

Adam
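In case it helps others debugging the same state, the manual way to re-attach the SR on the rebooted slave (a sketch; the UUIDs are placeholders) is to replug its PBD and rescan:

xe sr-list name-label=<iscsi-sr-name> params=uuid
xe pbd-list sr-uuid=<sr-uuid> params=uuid,host-uuid,currently-attached
xe pbd-plug uuid=<pbd-uuid-on-the-slave>
xe sr-scan uuid=<sr-uuid>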

URGENT: Shutdown both servers and HA Lizard BROKEN - by: Adam Ward

Hi,

We shut down all virtual machines, ran "ha-cfg status", and said yes on both servers to turn off HA.

We shut down the slave and then the pool master (we had to move them in the rack).

I have turned them both back on, and the HA-Lizard iSCSI storage repository says BROKEN. I cannot get the servers to connect to the iSCSI SR.

Have I done something wrong or missed something out? I really need to get these VMs back online! Please help!

Adam

XAPI is hung - by: DustinB3403

So I've got a NoSAN installation between two hosts, and XAPI is hung on both hosts.

The VMs are still running, but they are inaccessible from XenCenter or Xen Orchestra. I can, however, access the VMs directly via RDP or via the services they host.

So what is the correct reboot process for this scenario?

Thanks in advance.
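For the record, the usual first step when xapi hangs but the VMs are fine is to restart only the toolstack rather than reboot the hosts; a toolstack restart does not touch running VMs. A sketch (doing the pool master first is a common convention, not something HA-Lizard mandates):

# On the pool master, then on the slave:
xe-toolstack-restart

# Afterwards, check the HA and replication state before doing anything else:
ha-cfg status
cat /proc/drbd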