rbd-mirror, introduced in the Jewel release, is a means of asynchronously replicating RADOS block device (RBD) content to a remote Ceph cluster. That's all well and good, but how do I use it? How exactly does the rbd-mirror daemon work, what's the difference between one-way and two-way mirroring, what authentication considerations apply, and how do I deploy it in an automated fashion? How is mirroring related to RBD journaling, and how does that affect my RBD performance? And how do I integrate my mirrored devices into a cloud platform like OpenStack, so I can achieve true site-to-site redundancy and disaster recovery capability for persistent volumes?
This talk gives a run-down of the ins and outs of RBD mirroring, suggests best practices to deploy it, outlines performance considerations, and highlights pitfalls to avoid along the way.
Slides for this talk are at https://fghaas.github.io/cephalocon2019-rbdmirror/