This adds a scaled-back replication implementation
that leaves the bulk of the work up to the driver.
We just provide basic admin API methods for things
like enable/disable and fail-over.
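As a rough sketch of how those admin calls might be invoked (the
action names below are illustrative assumptions, not fixed by this
message), they ride on the standard volume action endpoint:

    POST /v2/{tenant_id}/volumes/{volume_id}/action
    {"os-enable_replication": {}}

    POST /v2/{tenant_id}/volumes/{volume_id}/action
    {"os-failover_replication": {}}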
Setup and specification of replication targets for a specific
back end are handled via the cinder.conf file, in that back
end's driver section. See the devref doc included in this
commit for details on the format.
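As a very rough sketch of the idea (the option and target key names
here are placeholders only; the real format is driver-defined and
described in the devref), a back end section could look something
like:

    [driver-foo]
    volume_driver = cinder.volume.drivers.foo.FooDriver
    volume_backend_name = foo
    managed_replication_target = True
    replication_devices = device_target_id:tgt-1,managed_backend_name:backend-2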
The next step in configuration is to create a volume-type
with replication info in the extra-specs:
    extra-specs = replication=enable, volume_backend_name=foo
This instructs the driver to utilize replication. The default
behavior is up to the driver; one-way replication is the
suggested default, and in the case of multiple targets the
driver can choose one or define its own default.
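For example, using the cinder CLI (the type name is arbitrary, and
the exact extra-spec keys a driver honors may vary):

    cinder type-create replicated-foo
    cinder type-key replicated-foo set replication=enable \
        volume_backend_name=foo
    cinder create --volume-type replicated-foo 10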
If the back end doesn't report replication=enabled in its stats
updates, the scheduler will fail to place the volume due to an
invalid host or no hosts available.
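On the driver side that just means including the flag in the
capabilities returned from get_volume_stats(); a minimal sketch,
assuming the simple key/value mentioned above:

    def get_volume_stats(self, refresh=False):
        # Advertise replication support so the scheduler's
        # capabilities filtering can match volume-types that
        # request it.
        return {
            'volume_backend_name': 'foo',
            'vendor_name': 'Foo Storage',
            'driver_version': '1.0',
            'storage_protocol': 'iSCSI',
            'total_capacity_gb': 1024,
            'free_capacity_gb': 512,
            'replication': 'enabled',
        }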
Vendors can easily modify extra-specs or their own config
settings to adjust this behavior; any vendor-unique adaptation
can be provided through the use of scoped keys. Suggested
examples will be published in the docs.
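For instance, a driver might honor something like the following
(a purely hypothetical scoped key, not defined by this change):

    cinder type-key replicated-foo set foo:replication_type=async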
See doc/source/devref/replication.rst for more info