This focuses the replication work on a specific use case,
and eliminates some of the ambiguity in earlier versions.
Additionally, this implementation addresses the needs of
devices that replicate at the level of the whole backend
device or of Pools.
Use case:
DR scenario, where a storage device is rendered inoperable.
This implementation preserves user data for those volumes
whose volume type is replication-enabled. The goal is NOT
to make failures completely transparent, but instead to
preserve data access while an Admin rebuilds/recovers
their cloud.
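For illustration only, a replication-enabled volume type might be
created along the lines of the sketch below using python-cinderclient.
The extra-spec key follows the usual replication_enabled='<is> True'
convention; the auth details, type name, and volume are placeholders
rather than part of this spec::

    from cinderclient import client
    from keystoneauth1 import loading, session

    # Placeholder credentials/endpoint -- adjust for your deployment.
    loader = loading.get_plugin_loader('password')
    auth = loader.load_from_options(auth_url='http://controller:5000/v3',
                                    username='admin', password='secret',
                                    project_name='admin',
                                    user_domain_name='Default',
                                    project_domain_name='Default')
    cinder = client.Client('3', session=session.Session(auth=auth))

    # Tag a volume type so the scheduler places its volumes only on
    # replication-capable backends.
    vtype = cinder.volume_types.create('replicated')
    vtype.set_keys({'replication_enabled': '<is> True'})

    # Volumes of this type are the ones whose data is preserved when
    # the backend is failed over.
    cinder.volumes.create(size=10, name='important-data',
                          volume_type='replicated')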
It's very important to note that we're no longer interested in
dealing with replication in Cinder at a Volume level. The concept
of having "some" volumes fail over while "others" are left behind
proved not only overly complex and difficult to implement, but we
also never identified a concrete use case for a failover in which
some volumes stay accessible on a primary while others are moved
and accessed via a secondary.
In this model, failover is host/backend based: when you fail
over, you fail over an entire backend. We heavily leverage
existing resources, specifically services and capabilities.
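As a rough illustration of the host-based model, a driver-side
failover hook could look something like the sketch below. The class,
method name, and signature here are illustrative assumptions, not the
final driver contract::

    class ExampleReplicationDriver(object):
        """Illustrative driver showing backend-level failover."""

        def failover_host(self, context, volumes, secondary_id=None):
            # Fail over this entire backend to its replication target.
            # All volumes on the backend are handled together; selective
            # per-volume failover is intentionally not supported.
            volume_updates = []
            for volume in volumes:
                if volume.get('replication_status') == 'enabled':
                    # Replication-enabled volumes now point at the copy
                    # on the secondary device.
                    volume_updates.append(
                        {'volume_id': volume['id'],
                         'updates': {'replication_status': 'failed-over'}})
                else:
                    # Non-replicated volumes on this backend are no
                    # longer accessible once the primary is lost.
                    volume_updates.append(
                        {'volume_id': volume['id'],
                         'updates': {'status': 'error'}})
            # Report the target we failed over to plus per-volume updates.
            return secondary_id, volume_updates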