DRBD not working as it should
Took the plunge and ordered two 512 servers to test the high availability tutorial.
A few pastebins for you:
crm_mon output - http://p.linode.com/4330
cat /proc/drbd output - http://p.linode.com/4331
I then checked the mounts, and node 1 reports that the two images are mounted at the intended locations:
/dev/drbd0 /var/lib/mysql ext3 rw,relatime,errors=continue,data=writeback 0 0
/dev/drbd1 /srv/www ext3 rw,relatime,errors=continue,data=writeback 0 0
On node 2, however, they are not mounted at all.
My question is: per the tutorial, should I also be running "mkfs.ext3 /dev/drbd0" and "mkfs.ext3 /dev/drbd1" on node 2? I get an error if I try to mount them there the way node 1 is configured to.
Any help appreciated; I feel like I'm at the last hurdle and just can't get over the finish line!
4 Replies
You might try forcing the slave to mount the filesystem read-only, but even then you should be careful, because the data could change out from under the software, and that's bad.
Simultaneously mounting on two hosts requires a filesystem specially designed to deal with it, like GFS or OCFS2.
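For reference, a read-only mount attempt would look roughly like the sketch below (the mount point is hypothetical). Bear in mind that DRBD normally refuses to open a device at all while that node's resource is Secondary, so even a read-only mount only succeeds on a node that is currently Primary:

# Hypothetical read-only mount attempt; DRBD rejects this while the
# local resource is in the Secondary role.
sudo mkdir -p /mnt/drbd0-ro
sudo mount -o ro /dev/drbd0 /mnt/drbd0-ro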
Most of the answers to your problem may be found in the DRBD manual here: http://www.drbd.org/users-guide/
The immediate reason you cannot mount the device on more than one node is that DRBD only allows one node to be primary by default. To allow two primaries, you need to set up a few things (a minimal drbd.conf sketch follows the list):
-Specify "allow-two-primaries" in drbd.conf. See
-Setup both of your nodes on a cluster.
-Implement a filesystem that allows for concurrent access, such as GFS or OCFS2. See http://www.drbd.org/users-guide/ch-gfs.htmlhttp://www.drbd.org/users-guide/ch-ocfs2.html
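Here is a minimal sketch of the drbd.conf pieces relevant to dual-primary, assuming a resource named r0 and hypothetical hostnames, backing disks and addresses (DRBD 8.3-style syntax; check the users' guide for your version):

# Hypothetical /etc/drbd.conf excerpt -- only the dual-primary-relevant parts.
resource r0 {
  protocol C;                           # synchronous replication, required for dual-primary
  net {
    allow-two-primaries;                # let both nodes be Primary at the same time
    after-sb-0pri discard-zero-changes; # basic split-brain recovery policies
    after-sb-1pri discard-secondary;
    after-sb-2pri disconnect;
  }
  startup {
    become-primary-on both;             # promote both nodes when the cluster starts
  }
  on node1 {
    device    /dev/drbd0;
    disk      /dev/xvdc;                # hypothetical backing disk
    address   192.0.2.1:7789;           # hypothetical replication address
    meta-disk internal;
  }
  on node2 {
    device    /dev/drbd0;
    disk      /dev/xvdc;
    address   192.0.2.2:7789;
    meta-disk internal;
  }
}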
If dual primaries are not a requirement (failover versus mirror setup), you do not need to format or partition the drive on node2 at all; format it once on node1 only. DRBD replicates the on-disk contents to node2 for you. To mount the DRBD devices on node2, you must do the following (the corresponding commands are sketched after the list):
-Unmount the DRBD devices on node1
-Demote node1 to secondary: sudo drbdadm secondary all
-Promote node2 to primary: sudo drbdadm primary all
-Mount the DRBD devices on node2
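Using the mount points from your first post, a manual switchover looks roughly like this:

# On node1: stop whatever is using the filesystems, then unmount and demote
sudo umount /var/lib/mysql
sudo umount /srv/www
sudo drbdadm secondary all

# On node2: promote and mount
sudo drbdadm primary all
sudo mount /dev/drbd0 /var/lib/mysql
sudo mount /dev/drbd1 /srv/www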
If you want the process to be automated, you should look into cluster software such as Heartbeat or Pacemaker. Pacemaker is less old-school, but I personally went with Heartbeat, since I only needed a quick way of manually switching MySQL/Apache/IP aliasing over to the failover node whenever node1 crashes and doesn't come back up.
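If you do go the Heartbeat route (v1-style haresources rather than a Pacemaker CIB), the resource line ends up looking something like the sketch below. The node name, floating IP, resource names and service init scripts here are hypothetical; adapt them to your own cluster:

# Hypothetical /etc/ha.d/haresources entry (Heartbeat v1 style).
# One logical line: preferred node, floating IP, then DRBD resources,
# filesystems and services to start on the active node, in order.
node1 IPaddr::192.0.2.10/24/eth0 \
      drbddisk::r0 Filesystem::/dev/drbd0::/var/lib/mysql::ext3 \
      drbddisk::r1 Filesystem::/dev/drbd1::/srv/www::ext3 \
      mysql apache2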
Hope this helps anyone with a similar problem as well.