This patch fixes where we get the source volume id from.
Previously we read the source volume id from the destination volume
object via volume['source_volid']. There are cases where this value
doesn't exist. We now read the value from src_vref['id'] instead.
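In the clone path this amounts to a one-line change, roughly
(the variable name is illustrative):

    # Before: read from the destination volume object; may be unset.
    #     src_volume_id = volume['source_volid']
    # After: read from the source volume reference, which has it.
    src_volume_id = src_vref['id']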
Michal Dulko [Wed, 8 Jul 2015 09:27:30 +0000 (11:27 +0200)]
Service object
Add a versionedobjects abstraction layer to services. Distinguish
time zone aware DateTime fields. The object derives from
CinderObjectDictCompat, so it supports both object (obj.prop) and
dict (obj['prop']) syntax to access properties. A complete move to
object notation will be made in a follow-up cleanup patch.
Co-Authored-By: Michal Dulko <michal.dulko@intel.com>
Change-Id: I09f593f9f9aa8befa40d989b731159b78a429071
Partial-Implements: blueprint cinder-objects
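The dict-compat behaviour keeps existing dict-style call sites
working, roughly (a sketch; the get_by_id lookup is assumed):

    from cinder import objects

    service = objects.Service.get_by_id(ctxt, service_id)
    # Both spellings read the same field:
    assert service.host == service['host']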
Mitsuhiro Tanino [Mon, 10 Aug 2015 20:44:25 +0000 (16:44 -0400)]
Allow specified backend capabilities to be retrieved
This change adds a new admin-api extension to allow an admin to
fetch the capabilities of a specified backend.
With this extension, an admin can find out from the endpoint what a
currently deployed backend in Cinder is able to do.
The extension takes "host" as a parameter.
GET http://CINDER_API_ENDPOINT/v2/TENANT_ID/capabilities/<host>
John Griffith [Sun, 23 Aug 2015 18:20:54 +0000 (12:20 -0600)]
Add cinder.conf.sample to gitignore
Now that we have some sort of conf checking again in the tox.ini
file, we should make sure we don't inadvertently commit a generated
sample (which I've done twice now).
Liu Xinguo [Tue, 14 Jul 2015 14:36:57 +0000 (22:36 +0800)]
Enhance FC zone support for Huawei driver
Currently, when using an FC switch, manual operations are necessary
to enable the Huawei driver to work. This change makes those manual
operations unnecessary: the Huawei driver can now choose the
appropriate FC ports and manage FC zones automatically.
(Pulled from gate, cinder can no longer pass unit tests)
Jon Bernard [Mon, 1 Jun 2015 18:09:30 +0000 (14:09 -0400)]
Add support for file I/O volume migration
This patch changes the generic volume migration logic to support
non-attachable volumes. Non-attachable refers to volume drivers that do
not support attachment via the typical iSCSI or similar protocols where
a block device is made available on the host machine. Device drivers
such as RBD make volumes available to cinder via a file handle that
proxies read() and write() calls to the Ceph cluster.
This patch improves the generic migration logic to determine whether
a migration operation should proceed with dd on block device paths
or with file operations on the handles returned by the os-brick
connectors.
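For the non-attachable case, the copy reduces to a chunked
read/write over the connector-provided handles, roughly (a
simplified sketch, not the actual Cinder helper):

    def copy_volume_via_handles(src, dest, size_bytes,
                                blocksize=4 * 1024 * 1024):
        # src/dest are file-like objects returned by the os-brick
        # connectors (e.g. an RBD handle proxying read()/write()
        # calls to the Ceph cluster).
        copied = 0
        while copied < size_bytes:
            chunk = src.read(min(blocksize, size_bytes - copied))
            if not chunk:
                break
            dest.write(chunk)
            copied += len(chunk)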
Changes to the RBD driver are included to correctly rename the target
volume during the completion phase of a successful migration.
It appears there is still some work to be done for attached in-use
volume migration for certain configurations. Successful tests were seen
for:
LVM to LVM (available and in-use)
LVM to/from NFS (available and in-use)
LVM to/from Ceph (available)
Ceph to LVM (in-use)
NFS to/from Ceph (available)
Ceph to NFS (in-use)
Failures were seen (due to Nova) for the following:
LVM to Ceph (in-use)
NFS to Ceph (in-use)
(Pulled from gate, cinder can no longer pass unit tests)
Matt Riedemann [Sun, 30 Aug 2015 15:27:45 +0000 (08:27 -0700)]
Add debug logging before attaching volume in driver
It'd be helpful to log the volume/instance/mount/host information
in attach_volume before calling the volume driver to do the
attachment, so we know we have the right values.
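Roughly the kind of message added (variable names are placeholders):

    from oslo_log import log as logging

    LOG = logging.getLogger(__name__)

    LOG.debug('Attaching volume %(vol)s to instance %(inst)s at '
              'mountpoint %(mnt)s on host %(host)s.',
              {'vol': volume_id, 'inst': instance_uuid,
               'mnt': mountpoint, 'host': host_name})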
Deleting volumes in CloudByte Storage is an asynchronous process,
so the driver needs to wait until the job is completed. The driver
polls the job id in a wait loop at a configured interval until the
volume deletion is completed.
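A sketch of that polling loop (helper names are hypothetical):

    import time

    def wait_for_volume_deletion(query_job_status, job_id,
                                 interval=3, retries=20):
        # query_job_status is a stand-in for the CloudByte API call
        # that reports the state of an asynchronous job.
        for _ in range(retries):
            if query_job_status(job_id) == 'completed':
                return
            time.sleep(interval)
        raise RuntimeError('Deletion job %s timed out' % job_id)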
Efficient volume copy for generic volume migration
Currently Cinder uses the dd command for the data copy of volume
migration, but the copy always copies full blocks even if the source
data contains many null and zero blocks. The dd command has an
option, conv=sparse, to skip null or zero blocks for a more
efficient data copy.
However, if the destination volume is not zero-cleared beforehand,
we must copy full blocks from the source to the destination volume
in order to wipe the destination and avoid a security issue.
If volume pre-initialization (zero clearing) is ensured beforehand,
we can skip copying null and zero blocks to the destination volume
by using a sparse copy.
In order to use this option properly, we have to check the
sparse_copy_volume capability of the destination backend driver via
the RPC API before the volume copy.
This patch also adds a sparse_copy_volume capability flag to the
volume stats of the LVM and NFS drivers to enable efficient copy
for these backends.
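A condensed sketch of how the capability gates the dd flags (names
are illustrative, not the exact copy-utility code):

    def build_dd_args(src, dest, count, blocksize, sparse_supported):
        # conv=sparse skips writing null/zero blocks, which is only
        # safe when the destination is known to be zeroed beforehand
        # (advertised via the sparse_copy_volume capability).
        args = ['dd', 'if=%s' % src, 'of=%s' % dest,
                'count=%d' % count, 'bs=%s' % blocksize]
        if sparse_supported:
            args.append('conv=sparse')
        return args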
Vincent Hou [Thu, 28 May 2015 03:18:09 +0000 (11:18 +0800)]
Volume status management during migration
This patch proposes a new implementation for the status and
the migration_status of volumes.
* The initial migration_status is None, meaning no migration has
  been done; a migration_status of 'error' means the previous
  migration failed, and 'success' means the previous migration
  succeeded.
* If the key 'lock_volume' is set to True in the request, the
  volume status is set to 'maintenance' during migration and goes
  back to its original status after migration. If 'lock_volume' is
  set to False, the volume status remains the same as its original
  status. The default value of lock_volume is False, and it applies
  to available volumes.
* From the REST API's perspective, create, update and delete
  actions are not allowed while the volume is in 'maintenance',
  because that means the volume is out of service. If it is not in
  maintenance mode, the migration can be interrupted if other
  requests are issued, e.g. attach. The termination of a migration
  will be addressed in another patch.
DocImpact
APIImpact The key 'lock_volume' has been added into the API,
telling the volume whether to change its status to 'maintenance'.
The migration_status has been added to the results returned
by the volume list command if the request comes from an admin.
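The migrate request body then carries the new key, roughly (a
sketch, reusing the existing os-migrate_volume admin action):

    body = {'os-migrate_volume': {'host': 'dest-host',
                                  'lock_volume': True}}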
Jacob Gregor [Thu, 27 Aug 2015 21:40:30 +0000 (16:40 -0500)]
Clean up line continuation in Storwize driver
There are multiple places in the Storwize driver where '\' is used
for line continuation. This patch replaces '\' with '()' to make
the code consistent.
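For example (names invented for illustration):

    # Before: backslash continuation
    result = self._do_first_thing(volume) + \
        self._do_second_thing(volume)
    # After: parentheses
    result = (self._do_first_thing(volume) +
              self._do_second_thing(volume))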
The BaseVD class declares _stats in the constructor, and some
drivers store their stats from get_volume_stats there. This patch
updates the 3PAR drivers to do the same as most drivers.
Jon Bernard [Thu, 27 Aug 2015 20:03:57 +0000 (16:03 -0400)]
Skip intermittent VMDK tests
The test 'test_create_backing_from_stream_optimized_file' is failing
intermittently in the gate, which causes delays in the review process of
other patches. Given the proximity to feature freeze, I think it's
appropriate to temporarily disable these tests until a proper solution
can be found.
Jordan Pittier [Fri, 17 Jul 2015 09:48:23 +0000 (11:48 +0200)]
Rework Scality SOFS driver to use RemoteFS class
This change is big but the logic is simple. The previous version of
this driver reimplemented most of what is now in the RemoteFS base
class. SOFS stands for Scale Out FileSystem and is based on FUSE, so
it's only natural to leverage the RemoteFS class.
This patch adds the base framework of the capabilities reporting
feature. The get_capabilities RPC API returns a dictionary
that consists of two parts.
The first part includes static backend capabilities, which are
obtained via get_volume_stats(). The second part is properties,
which includes parameters corresponding to extra specs. This
properties part consists of Cinder standard capabilities and
vendor-unique properties, which are created via two methods.
Since _init_standard_capabilities() only exposes Cinder standard
capabilities into the properties dictionary, each backend driver
needs to expose its own properties by overriding
_init_vendor_properties().
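A driver-side sketch of the override (the property name and vendor
prefix are invented for illustration):

    def _init_vendor_properties(self):
        properties = {}
        # Expose one vendor-unique property alongside the standard
        # capabilities.
        self._set_property(
            properties,
            'vendor_example:thin_provisioning',
            'Thin provisioning',
            'Create thin provisioned volumes.',
            'boolean')
        return properties, 'vendor_example'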
Tom Swanson [Tue, 25 Aug 2015 20:13:31 +0000 (15:13 -0500)]
Dell SC: init_volume stale volume info fix
In init_volume the volume information becomes stale after the volume
is mapped to a server. The volume should be updated before being sent
to unmap.
The unmap function was testing for the volume being active. This
should not be required: it is unlikely that an inactive volume will
have mappings, but not impossible. This check has been removed.
Removed LOG.error from _find_mapping_profiles, as the API returning
an error is fine if the volume is inactive, and we were not failing
on it anyway. The LOG.debug is sufficient.
Michal Dulko [Wed, 26 Aug 2015 12:10:00 +0000 (14:10 +0200)]
Validate filters in snapshot*, backup* in db.api
In db.sqlalchemy.api methods, filters passed from c-api are applied
directly to the DB query. Of all the snapshot_get_all* methods,
filter validation was done only in snapshot_get_all, and the backup
methods are missing the validation completely. This causes an
exception about an unknown DB column and returns a 500 HTTP error
when the API is called with an incorrect filter from an admin
context (without an admin context, filters are validated at the
c-api level). This commit adds such validation to
snapshot_get_by_host, snapshot_get_all_by_project and the
backup_get_all* methods to prevent such failures. Regression unit
tests are also added.
The Cinder nested quota driver patch adds hierarchical support.
The quota APIs can now also handle hierarchical projects.
For the first pass, only a user with the admin role in a parent
project will be able to update and delete its children's quotas.
Even after these changes are made, the existing DbQuotaDriver
functionality will remain unchanged. The same driver will be able
to handle flat projects as well as hierarchical projects.
Co-Authored-By: Erickson Santos <erickson@lsd.ufcg.edu.br>
Change-Id: Ie669d7d74d40c0ff1f1e54f673e7f3ae96b3b950
Implements: bp cinder-nested-quota-driver
Sonia Ghanekar [Wed, 26 Aug 2015 17:08:53 +0000 (10:08 -0700)]
Add manage/unmanage volume support for Nimble
This patch adds support for volume manage/unmanage for the Nimble
iSCSI driver. It uses an additional field, 'agent-type', on the
backend to distinguish volumes managed by the Nimble driver.
While managing an existing volume it sets the agent-type to
Openstack, and while unmanaging it sets it back to None.
Since this field was not used in earlier versions, in order
to be backward compatible, the 'agent-type' is updated to Openstack
for all existing Nimble volumes during driver initialization.
diem_tran [Wed, 12 Aug 2015 19:27:10 +0000 (15:27 -0400)]
Local cache feature of Oracle ZFSSA drivers
The feature enables the ZFSSA drivers to create and reuse cache
volumes on ZFSSA backends. It takes advantage of ZFSSA snapshots
and clones, which can be created almost instantly from an existing
volume. A cache volume is removed once it has no clones. Custom
schemas and WebDAV properties are used to store cache volumes'
properties.
This patch replaces urllib.unquote with urllib.parse.unquote
in the vmdk driver.
Commit d08c7ffe52e3ee211e425c363200998704726e58 replaced
urllib and urllib2 with six.moves.urllib and changed
urllib.unquote to urllib.parse.unquote, but missed one call site
in the vmdk driver because the code which introduced it was
under review at the same time. Due to this, if we set the
optional parameter 'vmware_cluster_name', driver init
fails with an AttributeError.
This patch also adds a unit test which would have caught
this error during gate check.
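The fixed call site looks roughly like this (variable names are
placeholders):

    from six.moves import urllib

    # urllib.unquote no longer exists under six.moves.urllib;
    # the parse submodule provides it.
    cluster_name = urllib.parse.unquote(encoded_cluster_name)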
Update volume status AFTER terminate_connection is done
The current volume API terminate_connection() updates the volume's
status in the DB before the volume driver finishes
terminate_connection(). In cases where the volume driver needs some
time to complete terminate_connection(), the volume becomes
'available' before the volume driver is done cleaning up. This fix
changes the order of 'unreserve_volume' (updating the DB) and the
driver's terminate_connection to make sure that when the volume is
marked 'available', the client has finished cleaning up the
connection.
Note that the terminate_connection volume RPC API is a synchronous
call, but the volume manager/driver doesn't return any value to the
caller. What's more, when the volume API extension calls the volume
terminate_connection API, it doesn't expect return values, so this
change removes the unnecessary return values from the volume API.
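In effect the API method goes from unreserve-then-terminate to
terminate-then-unreserve, roughly (a sketch, not the exact code):

    def terminate_connection(self, context, volume, connector,
                             force=False):
        # Let the driver finish cleaning up the connection first...
        self.volume_rpcapi.terminate_connection(context, volume,
                                                connector, force)
        # ...and only then mark the volume 'available' in the DB.
        self.unreserve_volume(context, volume)
        # No return value: callers don't use one.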
wanghao [Thu, 4 Jun 2015 09:50:12 +0000 (17:50 +0800)]
Incremental backup improvements for L
1. Add 'is_incremental=True' and 'has_dependent_backups=True/False'
to the response body of backup queries.
2. Add parent_id to the notification system.
Since we need to get the has_dependent_backups value when querying
the backup detail list, an index is added to the parent_id column
in the backup table to reduce the performance impact.
APIImpact
When showing backup details, the response will include the
additional info "is_incremental": True/False and
"has_dependent_backups": True/False.
wanghao [Wed, 31 Dec 2014 04:06:53 +0000 (12:06 +0800)]
Implement function to manage/unmanage snapshots
1. Add a snapshots manage action in the contrib api and an unmanage
action in the API extension.
2. Implement manage_existing_snapshot in the manager and a flow
to import existing snapshots.
3. Add manage/unmanage support in the LVM driver.
Implements: blueprint support-import-export-snapshots
DocImpact
APIImpact
Similar to volume manage/unmanage.
Change-Id: Ib6cf8392b0bc99f803316991f31a75788677e9cf
Cory Stone [Wed, 26 Aug 2015 00:05:36 +0000 (19:05 -0500)]
Don't eager load volume type specs on volume list
Some drivers need extra specs loaded on create and the session could be
closed by the time they access them. However for a volume list, eager
loading these attributes causes a severe performance penalty.
Move the eager load to the single volume get case.
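Conceptually (a simplified sketch using helpers from
cinder.db.sqlalchemy.api; not the exact query code):

    from sqlalchemy.orm import joinedload

    # Single-volume get: eager load the type's extra specs so
    # drivers can still read them after the session closes.
    query = model_query(context, models.Volume).options(
        joinedload('volume_type.extra_specs'))

    # Volume list: no joinedload option, so extra specs stay lazy
    # and the list query avoids the per-row join cost.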
John Griffith [Wed, 19 Aug 2015 17:42:40 +0000 (17:42 +0000)]
Filter out extra-specs from type get for non-admin
Currently, when a volume type get call is made to the db api, we
don't check the context and we auto-fill the extra-specs info for
that type.
Extra specs are intended to be admin-only info, but anybody who
calls the API directly to list or get volume types is also given
this info, which is not intended.
This patch just adds a context check to the db api's private
extra-specs builder method. In the non-admin case, we simply skip
adding the extra-specs.
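A minimal sketch of the check (the builder's real name and
signature may differ):

    def _dict_with_extra_specs(volume_type_ref, context):
        result = dict(volume_type_ref)
        # Extra specs are admin-only; skip them for non-admins.
        if context.is_admin:
            result['extra_specs'] = {
                spec['key']: spec['value']
                for spec in volume_type_ref.extra_specs}
        return result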
Masaki Kanno [Tue, 25 Aug 2015 23:56:55 +0000 (08:56 +0900)]
Prevent all backup objects from being deleted
This bug can delete all backup objects of a project. The issue
occurs when Swift is the backend. The steps below reproduce the
issue.
Step 1: Create a backup from a volume.
The creation of the backup may fail for various reasons. If the
failure happens in put_container() in
ChunkedBackupDriver._create_container(), the volume's backup
objects are not stored in Swift because the 'volumebackups'
container is not created. In that case, a record of the backup is
still kept in the Cinder DB, with service_metadata left as None.
Step 2: Investigate and fix the cause of the put_container()
failure.
Step 3: Create another backup from a volume.
When the creation of the backup succeeds, the 'volumebackups'
container is created in Swift, and the backup objects of the volume
are stored in the container.
Step 4: Delete the backup created in step 1.
Only the record of the backup in the Cinder DB should be deleted,
because the backup objects were never stored in Swift. However, the
backup objects created in step 3 are also deleted by the bug.
Usually SwiftBackupDriver.get_container_entries() lists only the
object names of the given backup. However, when
backup.service_metadata of the backup is None,
SwiftBackupDriver.get_container_entries() lists all object names in
the 'volumebackups' container. As a result, all the backup objects
in the container are deleted.
This fix prevents all the backup objects from being deleted: when a
backup is deleted, deletion of the backup objects by
delete_object() is not executed if backup.service_metadata of the
backup is None.
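The guard amounts to something like this (a simplified sketch of
the chunked-driver delete path):

    def delete(self, backup):
        container = backup['container']
        object_names = []
        if backup['service_metadata'] is not None:
            # Only this backup's own objects; a None
            # service_metadata would otherwise match everything
            # in the container.
            object_names = self._generate_object_names(backup)
        for object_name in object_names:
            self.delete_object(container, object_name)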
Yuriy Nesenenko [Tue, 25 Aug 2015 10:55:56 +0000 (13:55 +0300)]
Parameter osapi_max_limit is always used by default
The default value of max_limit is initialized before the value is
read from the config file. Default parameter values are always
evaluated when the "def" statement they belong to is executed. So
the default osapi_max_limit is always used, even if it's set in the
configuration file.
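The underlying Python pitfall, illustrated (a minimal standalone
sketch, not the Cinder code itself):

    CONF = {'osapi_max_limit': 1000}  # stand-in for oslo.config

    # Bad: the default is captured once, when 'def' executes,
    # before the config file is parsed.
    def get_limit_bad(max_limit=CONF['osapi_max_limit']):
        return max_limit

    # Good: resolve the option at call time.
    def get_limit_good(max_limit=None):
        if max_limit is None:
            max_limit = CONF['osapi_max_limit']
        return max_limit

    CONF['osapi_max_limit'] = 2000  # simulate config file parsing
    print(get_limit_bad())   # 1000 -- stale default
    print(get_limit_good())  # 2000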