+++ /dev/null
-*.pyc
-*.swp
-*~
-build
-dist
-heat.egg-info
-tags
-*.log
-heat-test.db
-.venv
-AUTHORS
-ChangeLog
-templates/cloudformation-examples
-.tox
-.coverage
-cover
-.testrepository
-.project
-.pydevproject
+++ /dev/null
-[DEFAULT]
-test_command=${PYTHON:-python} -m subunit.run discover -t ./ . $LISTOPT $IDOPTION
-test_id_option=--load-list $IDFILE
-test_list_option=--list
+++ /dev/null
-If you would like to contribute to the development of OpenStack,
-you must follow the steps in the "If you're a developer, start here"
-section of this page:
-
- http://wiki.openstack.org/HowToContribute
-
-Once those steps have been completed, changes to OpenStack
-should be submitted for review via the Gerrit tool, following
-the workflow documented at:
-
- http://wiki.openstack.org/GerritWorkflow
-
-Pull requests submitted through GitHub will be ignored.
-
-Bugs should be filed on Launchpad, not GitHub:
-
- https://bugs.launchpad.net/heat
+++ /dev/null
-Heat Style Commandments
-=======================
-
-- Step 1: Read the OpenStack Style Commandments
- https://github.com/openstack-dev/hacking/blob/master/HACKING.rst
-- Step 2: Read on
-
-Heat Specific Commandments
---------------------------
-
-None so far
-
-Creating Unit Tests
--------------------
-For every new feature, unit tests should be created that both test and
-(implicitly) document the usage of said feature. When submitting a bug
-fix, add a new unit test that fails without the patch and passes with
-it, whether or not the bug already had test coverage.
-
-For more information on creating unit tests and utilizing the testing
-infrastructure in OpenStack Heat, please read heat/testing/README.rst.
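
The regression-test pattern described above (a test that fails without the
patch and passes with it) can be sketched with a minimal, self-contained
example. The helper and all names here are hypothetical, not from the Heat
tree:

```python
import json
import unittest


def load_template_params(raw):
    """Hypothetical helper under test (not from the Heat tree).

    The fix being exercised: treat a missing (None or empty) parameter
    string as an empty dict instead of letting json.loads raise.
    """
    if not raw:
        return {}
    return json.loads(raw)


class LoadTemplateParamsTest(unittest.TestCase):
    def test_none_returns_empty_dict(self):
        # Regression test: fails without the fix, passes with it.
        self.assertEqual({}, load_template_params(None))

    def test_parses_json(self):
        self.assertEqual({'a': 1}, load_template_params('{"a": 1}'))
```

Run it with `python -m unittest <module>`; test discovery (as used by
tox/testr below) will pick up the same test case automatically.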
-
-
-Running Tests
--------------
-The testing system is based on a combination of tox and testr. The canonical
-approach to running tests is to simply run the command `tox`. This will
-create virtual environments, populate them with dependencies and run all of
-the tests that OpenStack CI systems run. Behind the scenes, tox runs
-`testr run --parallel`, but it is set up so that you can pass any
-additional testr arguments through tox. For example, running
-`tox -- --analyze-isolation` causes tox to add --analyze-isolation
-to testr's argument list.
-
-It is also possible to run the tests inside a virtual environment you have
-created yourself, or directly on a system where all of the dependencies are
-already installed. In either case, you can interact with the testr command
-directly. Running `testr run` will run the entire test suite, and `testr
-run --parallel` will run it in parallel (this is the default incantation
-tox uses). More information about testr can be found at:
-http://wiki.openstack.org/testr
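
As a quick reference, the commands described above might be invoked like
this (a sketch only; it assumes tox and testr are installed and you are in
the source tree):

```shell
# Run everything the OpenStack CI systems run, in fresh virtualenvs.
tox

# Pass extra arguments through tox to testr.
tox -- --analyze-isolation

# Inside a virtualenv with the dependencies already installed,
# drive testr directly:
testr run             # entire suite, serially
testr run --parallel  # what tox runs behind the scenes
```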
-
+++ /dev/null
-
- Apache License
- Version 2.0, January 2004
- http://www.apache.org/licenses/
-
- TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
-
- 1. Definitions.
-
- "License" shall mean the terms and conditions for use, reproduction,
- and distribution as defined by Sections 1 through 9 of this document.
-
- "Licensor" shall mean the copyright owner or entity authorized by
- the copyright owner that is granting the License.
-
- "Legal Entity" shall mean the union of the acting entity and all
- other entities that control, are controlled by, or are under common
- control with that entity. For the purposes of this definition,
- "control" means (i) the power, direct or indirect, to cause the
- direction or management of such entity, whether by contract or
- otherwise, or (ii) ownership of fifty percent (50%) or more of the
- outstanding shares, or (iii) beneficial ownership of such entity.
-
- "You" (or "Your") shall mean an individual or Legal Entity
- exercising permissions granted by this License.
-
- "Source" form shall mean the preferred form for making modifications,
- including but not limited to software source code, documentation
- source, and configuration files.
-
- "Object" form shall mean any form resulting from mechanical
- transformation or translation of a Source form, including but
- not limited to compiled object code, generated documentation,
- and conversions to other media types.
-
- "Work" shall mean the work of authorship, whether in Source or
- Object form, made available under the License, as indicated by a
- copyright notice that is included in or attached to the work
- (an example is provided in the Appendix below).
-
- "Derivative Works" shall mean any work, whether in Source or Object
- form, that is based on (or derived from) the Work and for which the
- editorial revisions, annotations, elaborations, or other modifications
- represent, as a whole, an original work of authorship. For the purposes
- of this License, Derivative Works shall not include works that remain
- separable from, or merely link (or bind by name) to the interfaces of,
- the Work and Derivative Works thereof.
-
- "Contribution" shall mean any work of authorship, including
- the original version of the Work and any modifications or additions
- to that Work or Derivative Works thereof, that is intentionally
- submitted to Licensor for inclusion in the Work by the copyright owner
- or by an individual or Legal Entity authorized to submit on behalf of
- the copyright owner. For the purposes of this definition, "submitted"
- means any form of electronic, verbal, or written communication sent
- to the Licensor or its representatives, including but not limited to
- communication on electronic mailing lists, source code control systems,
- and issue tracking systems that are managed by, or on behalf of, the
- Licensor for the purpose of discussing and improving the Work, but
- excluding communication that is conspicuously marked or otherwise
- designated in writing by the copyright owner as "Not a Contribution."
-
- "Contributor" shall mean Licensor and any individual or Legal Entity
- on behalf of whom a Contribution has been received by Licensor and
- subsequently incorporated within the Work.
-
- 2. Grant of Copyright License. Subject to the terms and conditions of
- this License, each Contributor hereby grants to You a perpetual,
- worldwide, non-exclusive, no-charge, royalty-free, irrevocable
- copyright license to reproduce, prepare Derivative Works of,
- publicly display, publicly perform, sublicense, and distribute the
- Work and such Derivative Works in Source or Object form.
-
- 3. Grant of Patent License. Subject to the terms and conditions of
- this License, each Contributor hereby grants to You a perpetual,
- worldwide, non-exclusive, no-charge, royalty-free, irrevocable
- (except as stated in this section) patent license to make, have made,
- use, offer to sell, sell, import, and otherwise transfer the Work,
- where such license applies only to those patent claims licensable
- by such Contributor that are necessarily infringed by their
- Contribution(s) alone or by combination of their Contribution(s)
- with the Work to which such Contribution(s) was submitted. If You
- institute patent litigation against any entity (including a
- cross-claim or counterclaim in a lawsuit) alleging that the Work
- or a Contribution incorporated within the Work constitutes direct
- or contributory patent infringement, then any patent licenses
- granted to You under this License for that Work shall terminate
- as of the date such litigation is filed.
-
- 4. Redistribution. You may reproduce and distribute copies of the
- Work or Derivative Works thereof in any medium, with or without
- modifications, and in Source or Object form, provided that You
- meet the following conditions:
-
- (a) You must give any other recipients of the Work or
- Derivative Works a copy of this License; and
-
- (b) You must cause any modified files to carry prominent notices
- stating that You changed the files; and
-
- (c) You must retain, in the Source form of any Derivative Works
- that You distribute, all copyright, patent, trademark, and
- attribution notices from the Source form of the Work,
- excluding those notices that do not pertain to any part of
- the Derivative Works; and
-
- (d) If the Work includes a "NOTICE" text file as part of its
- distribution, then any Derivative Works that You distribute must
- include a readable copy of the attribution notices contained
- within such NOTICE file, excluding those notices that do not
- pertain to any part of the Derivative Works, in at least one
- of the following places: within a NOTICE text file distributed
- as part of the Derivative Works; within the Source form or
- documentation, if provided along with the Derivative Works; or,
- within a display generated by the Derivative Works, if and
- wherever such third-party notices normally appear. The contents
- of the NOTICE file are for informational purposes only and
- do not modify the License. You may add Your own attribution
- notices within Derivative Works that You distribute, alongside
- or as an addendum to the NOTICE text from the Work, provided
- that such additional attribution notices cannot be construed
- as modifying the License.
-
- You may add Your own copyright statement to Your modifications and
- may provide additional or different license terms and conditions
- for use, reproduction, or distribution of Your modifications, or
- for any such Derivative Works as a whole, provided Your use,
- reproduction, and distribution of the Work otherwise complies with
- the conditions stated in this License.
-
- 5. Submission of Contributions. Unless You explicitly state otherwise,
- any Contribution intentionally submitted for inclusion in the Work
- by You to the Licensor shall be under the terms and conditions of
- this License, without any additional terms or conditions.
- Notwithstanding the above, nothing herein shall supersede or modify
- the terms of any separate license agreement you may have executed
- with Licensor regarding such Contributions.
-
- 6. Trademarks. This License does not grant permission to use the trade
- names, trademarks, service marks, or product names of the Licensor,
- except as required for reasonable and customary use in describing the
- origin of the Work and reproducing the content of the NOTICE file.
-
- 7. Disclaimer of Warranty. Unless required by applicable law or
- agreed to in writing, Licensor provides the Work (and each
- Contributor provides its Contributions) on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
- implied, including, without limitation, any warranties or conditions
- of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
- PARTICULAR PURPOSE. You are solely responsible for determining the
- appropriateness of using or redistributing the Work and assume any
- risks associated with Your exercise of permissions under this License.
-
- 8. Limitation of Liability. In no event and under no legal theory,
- whether in tort (including negligence), contract, or otherwise,
- unless required by applicable law (such as deliberate and grossly
- negligent acts) or agreed to in writing, shall any Contributor be
- liable to You for damages, including any direct, indirect, special,
- incidental, or consequential damages of any character arising as a
- result of this License or out of the use or inability to use the
- Work (including but not limited to damages for loss of goodwill,
- work stoppage, computer failure or malfunction, or any and all
- other commercial damages or losses), even if such Contributor
- has been advised of the possibility of such damages.
-
- 9. Accepting Warranty or Additional Liability. While redistributing
- the Work or Derivative Works thereof, You may choose to offer,
- and charge a fee for, acceptance of support, warranty, indemnity,
- or other liability obligations and/or rights consistent with this
- License. However, in accepting such obligations, You may act only
- on Your own behalf and on Your sole responsibility, not on behalf
- of any other Contributor, and only if You agree to indemnify,
- defend, and hold each Contributor harmless for any liability
- incurred by, or claims asserted against, such Contributor by reason
- of your accepting any such warranty or additional liability.
-
+++ /dev/null
-include AUTHORS
-include ChangeLog
-include CONTRIBUTING.rst
-include HACKING.rst
-include LICENSE
-include README.rst
-include MANIFEST.in pylintrc
-include openstack-common.conf
-include babel.cfg install.sh run_tests.sh tox.ini uninstall.sh
-include heat/cloudinit/config
-include heat/cloudinit/boothook.sh
-include heat/cloudinit/loguserdata.py
-include heat/cloudinit/part-handler.py
-include heat/db/sqlalchemy/migrate_repo/migrate.cfg
-include heat/db/sqlalchemy/migrate_repo/README
-include heat/openstack/common/README
-include heat/testing/README.rst
-include heat/tests/examples/tags.txt
-include heat/tests/testing-overview.txt
-include heat/tests/v1_1/testfile.txt
-include heat/tests/policy/deny_stack_user.json
-include heat/tests/policy/notallowed.json
-graft contrib
-graft etc
-graft doc
-graft docs
-graft tools
+++ /dev/null
-====
-HEAT
-====
-
-Heat is a service to orchestrate multiple composite cloud applications using
-templates, through both an OpenStack-native ReST API and a
-CloudFormation-compatible Query API.
-
-Why heat? It makes the clouds rise and keeps them there.
-
-Getting Started
----------------
-
-If you'd like to run from the master branch, you can clone the git repo:
-
- git clone git@github.com:openstack/heat.git
-
-
-* Wiki: http://wiki.openstack.org/Heat
-* Developer docs: http://docs.openstack.org/developer/heat
-
-
-Python client
--------------
-https://github.com/openstack/python-heatclient
-
-References
-----------
-* http://docs.amazonwebservices.com/AWSCloudFormation/latest/APIReference/API_CreateStack.html
-* http://docs.amazonwebservices.com/AWSCloudFormation/latest/UserGuide/create-stack.html
-* http://docs.amazonwebservices.com/AWSCloudFormation/latest/UserGuide/aws-template-resource-type-ref.html
-* http://www.oasis-open.org/committees/tc_home.php?wg_abbrev=tosca
-
-Heat integrates with
---------------------
-* https://github.com/openstack/python-novaclient (instance)
-* https://github.com/openstack/python-keystoneclient (auth)
-* https://github.com/openstack/python-swiftclient (s3)
-* https://github.com/openstack/python-neutronclient (networking)
+++ /dev/null
-[python: **.py]
+++ /dev/null
-#!/bin/bash
-
-set +e
-
-SERVICE_PASSWORD='servicepass'
-CINDER_USERNAME='cinder'
-
-source `dirname $0`/heat-keystone-setup
-
-ADMIN_ROLE=$(get_data 2 admin 1 keystone role-list)
-SERVICE_TENANT=$(get_data 2 service 1 keystone tenant-list)
-
-CINDER_USERID=$(get_user $CINDER_USERNAME)
-add_role $CINDER_USERID $SERVICE_TENANT $ADMIN_ROLE $CINDER_USERNAME
-
-CINDER_SERVICE=$(get_service cinder volume "Cinder Volume Service")
-add_endpoint $CINDER_SERVICE 'http://localhost:8776/v1/$(tenant_id)s'
+++ /dev/null
-#!/usr/bin/env python
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-"""
-Heat API Server. An OpenStack ReST API to Heat.
-"""
-
-import eventlet
-eventlet.monkey_patch(os=False)
-
-import os
-import sys
-
-# If ../heat/__init__.py exists, add ../ to Python search path, so that
-# it will override what happens to be installed in /usr/(local/)lib/python...
-possible_topdir = os.path.normpath(os.path.join(os.path.abspath(sys.argv[0]),
- os.pardir,
- os.pardir))
-if os.path.exists(os.path.join(possible_topdir, 'heat', '__init__.py')):
- sys.path.insert(0, possible_topdir)
-
-from heat.openstack.common import gettextutils
-
-gettextutils.install('heat', lazy=True)
-
-from oslo.config import cfg
-
-from heat.common import config
-from heat.common import wsgi
-
-from heat.openstack.common import log as logging
-
-LOG = logging.getLogger('heat.api')
-
-if __name__ == '__main__':
- try:
- cfg.CONF(project='heat', prog='heat-api')
- cfg.CONF.default_log_levels = ['amqplib=WARN',
- 'qpid.messaging=INFO',
- 'keystone=INFO',
- 'eventlet.wsgi.server=WARN',
- ]
- logging.setup('heat')
-
- app = config.load_paste_app()
-
- port = cfg.CONF.heat_api.bind_port
- host = cfg.CONF.heat_api.bind_host
- LOG.info('Starting Heat ReST API on %s:%s' % (host, port))
- server = wsgi.Server()
- server.start(app, cfg.CONF.heat_api, default_port=port)
- server.wait()
- except RuntimeError as e:
- sys.exit("ERROR: %s" % e)
+++ /dev/null
-#!/usr/bin/env python
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-"""
-Heat API Server. This implements an approximation of the Amazon
-CloudFormation API, translating requests into Heat's native
-representation and then calling the heat-engine via AMQP RPC to
-carry them out.
-"""
-
-import eventlet
-eventlet.monkey_patch(os=False)
-
-import os
-import sys
-
-# If ../heat/__init__.py exists, add ../ to Python search path, so that
-# it will override what happens to be installed in /usr/(local/)lib/python...
-possible_topdir = os.path.normpath(os.path.join(os.path.abspath(sys.argv[0]),
- os.pardir,
- os.pardir))
-if os.path.exists(os.path.join(possible_topdir, 'heat', '__init__.py')):
- sys.path.insert(0, possible_topdir)
-
-from heat.openstack.common import gettextutils
-
-gettextutils.install('heat', lazy=True)
-
-from oslo.config import cfg
-
-from heat.common import config
-from heat.common import wsgi
-
-from heat.openstack.common import log as logging
-
-LOG = logging.getLogger('heat.api.cfn')
-
-if __name__ == '__main__':
- try:
- cfg.CONF(project='heat', prog='heat-api-cfn')
- cfg.CONF.default_log_levels = ['amqplib=WARN',
- 'qpid.messaging=INFO',
- 'keystone=INFO',
- 'eventlet.wsgi.server=WARN',
- ]
- logging.setup('heat')
-
- app = config.load_paste_app()
-
- port = cfg.CONF.heat_api_cfn.bind_port
- host = cfg.CONF.heat_api_cfn.bind_host
- LOG.info('Starting Heat API on %s:%s' % (host, port))
- server = wsgi.Server()
- server.start(app, cfg.CONF.heat_api_cfn, default_port=port)
- server.wait()
- except RuntimeError as e:
- sys.exit("ERROR: %s" % e)
+++ /dev/null
-#!/usr/bin/env python
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-"""
-Heat API Server. This implements an approximation of the Amazon
-CloudWatch API, translating requests into Heat's native
-representation and then calling the heat-engine via AMQP RPC to
-carry them out.
-"""
-
-import eventlet
-eventlet.monkey_patch(os=False)
-
-import os
-import sys
-
-# If ../heat/__init__.py exists, add ../ to Python search path, so that
-# it will override what happens to be installed in /usr/(local/)lib/python...
-possible_topdir = os.path.normpath(os.path.join(os.path.abspath(sys.argv[0]),
- os.pardir,
- os.pardir))
-if os.path.exists(os.path.join(possible_topdir, 'heat', '__init__.py')):
- sys.path.insert(0, possible_topdir)
-
-from heat.openstack.common import gettextutils
-
-gettextutils.install('heat', lazy=True)
-
-from oslo.config import cfg
-
-from heat.common import config
-from heat.common import wsgi
-
-from heat.openstack.common import log as logging
-
-LOG = logging.getLogger('heat.api.cloudwatch')
-
-if __name__ == '__main__':
- try:
- cfg.CONF(project='heat', prog='heat-api-cloudwatch')
- cfg.CONF.default_log_levels = ['amqplib=WARN',
- 'qpid.messaging=INFO',
- 'keystone=INFO',
- 'eventlet.wsgi.server=WARN',
- ]
- logging.setup('heat')
-
- app = config.load_paste_app()
-
- port = cfg.CONF.heat_api_cloudwatch.bind_port
- host = cfg.CONF.heat_api_cloudwatch.bind_host
- LOG.info('Starting Heat CloudWatch API on %s:%s' % (host, port))
- server = wsgi.Server()
- server.start(app, cfg.CONF.heat_api_cloudwatch, default_port=port)
- server.wait()
- except RuntimeError as e:
- sys.exit("ERROR: %s" % e)
+++ /dev/null
-#!/bin/bash
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-
-#
-# Print --help output and exit.
-#
-usage() {
-
-cat << EOF
-Set up a local MySQL database for use with heat.
-This script will create a 'heat' database that is accessible
-only on localhost by user 'heat' with password 'heat'.
-
-Usage: heat-db-setup <rpm|deb> [options]
-	The first argument selects the distro type (rpm or deb).
-
-Options:
-
- --help | -h
- Print usage information.
- --password <pw> | -p <pw>
-	Specify the password for the 'heat' MySQL user that heat will
-	use to connect to the 'heat' MySQL database.  By default,
- the password 'heat' will be used.
- --rootpw <pw> | -r <pw>
- Specify the root MySQL password. If the script installs
- the MySQL server, it will set the root password to this value
- instead of prompting for a password. If the MySQL server is
- already installed, this password will be used to connect to the
- database instead of having to prompt for it.
- --yes | -y
- In cases where the script would normally ask for confirmation
- before doing something, such as installing mysql-server,
- just assume yes. This is useful if you want to run the script
- non-interactively.
-EOF
-
- exit 0
-}
-
-install_mysql_server() {
- if [ -z "${ASSUME_YES}" ] ; then
- $PACKAGE_INSTALL mysql-server
- else
- $PACKAGE_INSTALL -y mysql-server
- fi
-}
-
-start_mysql_server() {
- $SERVICE_START
-}
-
-MYSQL_HEAT_PW_DEFAULT="heat"
-MYSQL_HEAT_PW=${MYSQL_HEAT_PW_DEFAULT}
-HEAT_CONFIG="/etc/heat/heat.conf"
-ASSUME_YES=""
-ELEVATE=""
-
-# Check for root privileges
-if [[ $EUID -ne 0 ]] ; then
- echo "This operation requires superuser privileges, using sudo:"
- if sudo -l > /dev/null ; then
- ELEVATE="sudo"
- else
- exit 1
- fi
-fi
-
-case "$1" in
- rpm)
- echo "Installing on an RPM system."
- PACKAGE_INSTALL="$ELEVATE yum install"
- PACKAGE_STATUS="rpm -q"
- SERVICE_MYSQLD="mysqld"
- SERVICE_START="$ELEVATE service $SERVICE_MYSQLD start"
- SERVICE_STATUS="service $SERVICE_MYSQLD status"
- SERVICE_ENABLE="$ELEVATE chkconfig"
- ;;
- deb)
- echo "Installing on a Debian system."
- PACKAGE_INSTALL="$ELEVATE apt-get install"
- PACKAGE_STATUS="dpkg-query -s"
- SERVICE_MYSQLD="mysql"
- SERVICE_START="$ELEVATE service $SERVICE_MYSQLD start"
- SERVICE_STATUS="$ELEVATE service $SERVICE_MYSQLD status"
- SERVICE_ENABLE=""
- ;;
- *)
- usage
- ;;
-esac
-
-while [ $# -gt 0 ]
-do
- case "$1" in
- -h|--help)
- usage
- ;;
- -p|--password)
- shift
- MYSQL_HEAT_PW=${1}
- ;;
- -r|--rootpw)
- shift
- MYSQL_ROOT_PW=${1}
- ;;
- -y|--yes)
- ASSUME_YES="yes"
- ;;
- *)
- # ignore
- ;;
- esac
- shift
-done
-
-
-# Make sure MySQL is installed.
-
-NEW_MYSQL_INSTALL=0
-if ! $PACKAGE_STATUS mysql-server > /dev/null && ! $PACKAGE_STATUS mariadb-server > /dev/null
-then
- if [ -z "${ASSUME_YES}" ] ; then
- printf "mysql-server is not installed. Would you like to install it now? (y/n): "
- read response
- case "$response" in
- y|Y)
- ;;
- n|N)
- echo "mysql-server must be installed. Please install it before proceeding."
- exit 0
- ;;
- *)
- echo "Invalid response."
- exit 1
- esac
- fi
-
- NEW_MYSQL_INSTALL=1
- install_mysql_server
-fi
-
-
-# Make sure mysqld is running.
-
-if ! $SERVICE_STATUS > /dev/null
-then
- if [ -z "${ASSUME_YES}" ] ; then
- printf "$SERVICE_MYSQLD is not running. Would you like to start it now? (y/n): "
- read response
- case "$response" in
- y|Y)
- ;;
- n|N)
- echo "$SERVICE_MYSQLD must be running. Please start it before proceeding."
- exit 0
- ;;
- *)
- echo "Invalid response."
- exit 1
- esac
- fi
-
- start_mysql_server
-
- # If we both installed and started, ensure it starts at boot
- [ $NEW_MYSQL_INSTALL -eq 1 ] && $SERVICE_ENABLE $SERVICE_MYSQLD on
-fi
-
-
-# Get MySQL root access.
-
-if [ $NEW_MYSQL_INSTALL -eq 1 ]
-then
- if [ ! "${MYSQL_ROOT_PW+defined}" ] ; then
- echo "Since this is a fresh installation of MySQL, please set a password for the 'root' mysql user."
-
- PW_MATCH=0
- while [ $PW_MATCH -eq 0 ]
- do
- printf "Enter new password for 'root' mysql user: "
- read -s MYSQL_ROOT_PW
- echo
- printf "Enter new password again: "
- read -s PW2
- echo
- if [ "${MYSQL_ROOT_PW}" = "${PW2}" ] ; then
- PW_MATCH=1
- else
- echo "Passwords did not match."
- fi
- done
- fi
-
- echo "UPDATE mysql.user SET password = password('${MYSQL_ROOT_PW}') WHERE user = 'root'; DELETE FROM mysql.user WHERE user = ''; flush privileges;" | mysql -u root
- if ! [ $? -eq 0 ] ; then
- echo "Failed to set password for 'root' MySQL user."
- exit 1
- fi
-elif [ ! "${MYSQL_ROOT_PW+defined}" ] ; then
- printf "Please enter the password for the 'root' MySQL user: "
- read -s MYSQL_ROOT_PW
- echo
-fi
-
-
-# Sanity check MySQL credentials.
-
-MYSQL_ROOT_PW_ARG=""
-if [ "${MYSQL_ROOT_PW+defined}" ]
-then
- MYSQL_ROOT_PW_ARG="--password=${MYSQL_ROOT_PW}"
-fi
-echo "SELECT 1;" | mysql -u root ${MYSQL_ROOT_PW_ARG} > /dev/null
-if ! [ $? -eq 0 ]
-then
- echo "Failed to connect to the MySQL server. Please check your root user credentials."
- exit 1
-fi
-echo "Verified connectivity to MySQL."
-
-
-# Now create the db.
-
-echo "Creating 'heat' database."
-cat << EOF | mysql -u root ${MYSQL_ROOT_PW_ARG}
-CREATE DATABASE IF NOT EXISTS heat;
-GRANT ALL ON heat.* TO 'heat'@'localhost' IDENTIFIED BY '${MYSQL_HEAT_PW}';
-GRANT ALL ON heat.* TO 'heat'@'%' IDENTIFIED BY '${MYSQL_HEAT_PW}';
-flush privileges;
-EOF
-
-
-# Make sure heat configuration has the right MySQL password.
-
-if [ "${MYSQL_HEAT_PW}" != "${MYSQL_HEAT_PW_DEFAULT}" ] ; then
- echo "Updating 'heat' database password in ${HEAT_CONFIG}"
- sed -i -e "s/mysql:\/\/heat:\(.*\)@/mysql:\/\/heat:${MYSQL_HEAT_PW}@/" ${HEAT_CONFIG}
-fi
-
-heat-manage db_sync
-
-
-# Do a final sanity check on the database.
-
-echo "SELECT * FROM migrate_version;" | mysql -u heat --password=${MYSQL_HEAT_PW} heat > /dev/null
-if ! [ $? -eq 0 ]
-then
- echo "Final sanity check failed."
- exit 1
-fi
-
-echo "Complete!"
+++ /dev/null
-#!/usr/bin/env python
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-"""
-Heat Engine Server. This does the work of actually implementing the API
-calls made by the user. Normal communication is done via the heat API,
-which then calls into this engine.
-"""
-
-import eventlet
-eventlet.monkey_patch()
-
-import os
-import sys
-
-# If ../heat/__init__.py exists, add ../ to Python search path, so that
-# it will override what happens to be installed in /usr/(local/)lib/python...
-POSSIBLE_TOPDIR = os.path.normpath(os.path.join(os.path.abspath(sys.argv[0]),
- os.pardir,
- os.pardir))
-if os.path.exists(os.path.join(POSSIBLE_TOPDIR, 'heat', '__init__.py')):
- sys.path.insert(0, POSSIBLE_TOPDIR)
-
-from heat.openstack.common import gettextutils
-
-gettextutils.install('heat', lazy=True)
-
-from oslo.config import cfg
-
-from heat.openstack.common import log as logging
-from heat.openstack.common import service
-
-from heat.db import api as db_api
-from heat.rpc import api as rpc_api
-
-
-LOG = logging.getLogger('heat.engine')
-
-if __name__ == '__main__':
-
- cfg.CONF(project='heat', prog='heat-engine')
- cfg.CONF.default_log_levels = ['amqplib=WARN',
- 'sqlalchemy=WARN',
- 'qpid.messaging=INFO',
- 'keystone=INFO',
- 'eventlet.wsgi.server=WARN',
- ]
- logging.setup('heat')
-
- from heat.engine import service as engine
-
- db_api.configure()
- srv = engine.EngineService(cfg.CONF.host, rpc_api.ENGINE_TOPIC)
- launcher = service.launch(srv)
- launcher.wait()
+++ /dev/null
-#!/bin/bash
-
-set +e
-
-KEYSTONE_CONF=${KEYSTONE_CONF:-/etc/keystone/keystone.conf}
-
-# Extract some info from Keystone's configuration file
-if [[ -r "$KEYSTONE_CONF" ]]; then
- CONFIG_SERVICE_TOKEN=$(sed 's/[[:space:]]//g' $KEYSTONE_CONF | grep ^admin_token= | cut -d'=' -f2)
- CONFIG_ADMIN_PORT=$(sed 's/[[:space:]]//g' $KEYSTONE_CONF | grep ^admin_port= | cut -d'=' -f2)
-fi
-
-SERVICE_TOKEN=${SERVICE_TOKEN:-$CONFIG_SERVICE_TOKEN}
-SERVICE_ENDPOINT=${SERVICE_ENDPOINT:-http://127.0.0.1:${CONFIG_ADMIN_PORT:-35357}/v2.0}
-if [[ -z "$SERVICE_TOKEN" ]]; then
- echo "No service token found." >&2
- echo "Set SERVICE_TOKEN manually from keystone.conf admin_token." >&2
- exit 1
-fi
-
-set_admin_token() {
- alias keystone="keystone --token $SERVICE_TOKEN \
- --endpoint $SERVICE_ENDPOINT"
-}
-
-unset_admin_token() {
- unalias keystone
-}
-
-
-get_data() {
- local match_column=$(($1 + 1))
- local regex="$2"
- local output_column=$(($3 + 1))
- shift 3
-
- echo $("$@" | \
- awk -F'|' \
- "! /^\+/ && \$${match_column} ~ \"^ *${regex} *\$\" \
- { print \$${output_column} }")
-}
-
-get_id () {
- get_data 1 id 2 "$@"
-}
-
-get_column_num() {
- local name=$1
- shift
- $@ | awk -F'|' "NR == 2 { for (i=2; i<NF; i++) if (\$i ~ \"^ *${name} *\$\") print (i - 1) }"
-}
-
-get_user() {
- local username=$1
-
-    # Output format of keystone user-list changed between essex and
-    # folsom - the columns were re-ordered (!?), so detect which
-    # column to pass to get_data via get_column_num
- namecol=$(get_column_num name keystone user-list)
-
- local user_id=$(get_data $namecol $username 1 keystone user-list)
-
- if [ -n "$user_id" ]; then
- echo "Found existing $username user" >&2
- echo $user_id
- else
- echo "Creating $username user..." >&2
- get_id keystone user-create --name=$username \
- --pass="$SERVICE_PASSWORD" \
- --tenant_id $SERVICE_TENANT \
- --email=$username@example.com
- fi
-}
-
-add_role() {
- local user_id=$1
- local tenant=$2
- local role_id=$3
- local username=$4
-
-    # The keystone argument format changed between essex and folsom,
-    # so we use the fact that the folsom keystone version has a new
-    # "user-role-list" option to detect that we're on the newer version.
-    # This also lets us detect when the user already has the
-    # requested role_id, preventing an error on folsom
- user_roles=$(keystone user-role-list \
- --user_id $user_id \
- --tenant_id $tenant 2>/dev/null)
- if [ $? == 0 ]; then
- # Folsom
- existing_role=$(get_data 1 $role_id 1 echo "$user_roles")
- if [ -n "$existing_role" ]
- then
- echo "User $username already has role $role_id" >&2
- return
- fi
- keystone user-role-add --tenant_id $tenant \
- --user_id $user_id \
- --role_id $role_id
- else
- # Essex
- keystone user-role-add --tenant_id $tenant \
- --user $user_id \
- --role $role_id
- fi
-}
-
-create_role() {
- local role_name=$1
-
- role_id=$(get_data 2 $role_name 1 keystone role-list)
- if [ -n "$role_id" ]
- then
- echo "Role $role_name already exists: $role_id" >&2
- else
- keystone role-create --name $role_name
- fi
-}
-
-get_endpoint() {
- local service_type=$1
-
- unset_admin_token
- keystone endpoint-get --service $service_type
- set_admin_token
-}
-
-delete_endpoint() {
- local service_type=$1
-
- case $service_type in
- volume) urlsuffix='\\\\$\\\\(tenant_id)s';;
- orchestration) urlsuffix='%[(]tenant_id[)]s';;
- # cloudformation has no hash suffix
- *) urlsuffix=''
- esac
-
- local url=$(get_data 1 "${service_type}[.]publicURL" 2 \
- get_endpoint $service_type 2>/dev/null | \
- sed -r "s/[a-f0-9]{32}/$urlsuffix/")
-
- if [ -n "$url" ]; then
- local endpoints=$(get_data 3 $url 1 keystone endpoint-list)
-
- for endpoint in $endpoints; do
- echo "Removing $service_type endpoint ${endpoint}..." >&2
- keystone endpoint-delete "$endpoint" >&2
- done
-
- if [ -z "$endpoints" ]; then false; fi
- else
- false
- fi
-}
-
-delete_all_endpoints() {
- while delete_endpoint $1; do
- true
- done
-}
-
-delete_service() {
- local service_type=$1
-
- delete_all_endpoints $service_type
-
- local service_ids=$(get_data 3 $service_type 1 keystone service-list)
-
- for service in $service_ids; do
- local service_name=$(get_data 1 $service 2 keystone service-list)
- echo "Removing $service_name:$service_type service..." >&2
- keystone service-delete $service >&2
- done
-}
-
-get_service() {
- local service_name=$1
- local service_type=$2
- local description="$3"
-
- delete_service $service_type
-
- get_id keystone service-create --name=$service_name \
- --type=$service_type \
- --description="$description"
-}
-
-add_endpoint() {
- local service_id=$1
- local url="$2"
-
- keystone endpoint-create --region RegionOne --service_id $service_id \
- --publicurl "$url" --adminurl "$url" --internalurl "$url" >&2
-}
-
-keystone_setup() {
- TENANT_ID=$(get_data 1 tenant_id 2 keystone token-get)
- set_admin_token
-
- ADMIN_ROLE=$(get_data 2 admin 1 keystone role-list)
- SERVICE_TENANT=$(get_data 2 service 1 keystone tenant-list)
- SERVICE_PASSWORD=${SERVICE_PASSWORD:-$OS_PASSWORD}
- SERVICE_HOST=${SERVICE_HOST:-localhost}
-
- if [[ "$SERVICE_PASSWORD" == "$OS_PASSWORD" ]]; then
- echo "Using the OS_PASSWORD for the SERVICE_PASSWORD." >&2
- fi
-
- if [[ "$SERVICE_HOST" == "localhost" ]]; then
- echo "Endpoints will be registered as localhost, but this usually won't work." >&2
- echo "Set SERVICE_HOST to a publicly accessible hostname/IP instead." >&2
- fi
-
- echo ADMIN_ROLE $ADMIN_ROLE
- echo SERVICE_TENANT $SERVICE_TENANT
- echo SERVICE_PASSWORD $SERVICE_PASSWORD
- echo SERVICE_TOKEN $SERVICE_TOKEN
- echo SERVICE_HOST $SERVICE_HOST
-
- HEAT_USERNAME="heat"
- HEAT_USERID=$(get_user $HEAT_USERNAME)
- echo HEAT_USERID $HEAT_USERID
- add_role $HEAT_USERID $SERVICE_TENANT $ADMIN_ROLE $HEAT_USERNAME
-
- # Create a special role which template-defined "stack users" are
- # assigned to in the engine when they are created, this allows them
- # to be more easily differentiated from other users (e.g so we can
- # lock down these implicitly untrusted users via RBAC policy)
- STACK_USER_ROLE="heat_stack_user"
- create_role $STACK_USER_ROLE
-
- HEAT_CFN_SERVICE=$(get_service heat-cfn cloudformation \
- "Heat CloudFormation API")
- add_endpoint $HEAT_CFN_SERVICE "http://$SERVICE_HOST:8000/v1"
-
- HEAT_OS_SERVICE=$(get_service heat orchestration \
- "Heat API")
- add_endpoint $HEAT_OS_SERVICE "http://$SERVICE_HOST:8004/v1/%(tenant_id)s"
-}
-
-if [[ ${BASH_SOURCE[0]} == ${0} ]]; then
- keystone_setup
-fi
+++ /dev/null
-#!/usr/bin/env python
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
-# implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import os
-import sys
-
-# If ../heat/__init__.py exists, add ../ to Python search path, so that
-# it will override what happens to be installed in /usr/(local/)lib/python...
-POSSIBLE_TOPDIR = os.path.normpath(os.path.join(os.path.abspath(sys.argv[0]),
- os.pardir,
- os.pardir))
-if os.path.exists(os.path.join(POSSIBLE_TOPDIR, 'heat', '__init__.py')):
- sys.path.insert(0, POSSIBLE_TOPDIR)
-
-from heat.openstack.common import gettextutils
-
-gettextutils.install('heat')
-
-from heat.cmd import manage
-
-manage.main()
+++ /dev/null
---- .nova-venv/lib/python2.6/site-packages/eventlet/green/subprocess.py.orig	2011-05-25 23:31:34.597271402 +0000
-+++ .nova-venv/lib/python2.6/site-packages/eventlet/green/subprocess.py	2011-05-25 23:33:24.055602468 +0000
-@@ -32,7 +32,7 @@
- setattr(self, attr, wrapped_pipe)
- __init__.__doc__ = subprocess_orig.Popen.__init__.__doc__
-
-- def wait(self, check_interval=0.01):
-+ def wait(self, check_interval=0.01, timeout=None):
- # Instead of a blocking OS call, this version of wait() uses logic
- # borrowed from the eventlet 0.2 processes.Process.wait() method.
- try:
-
+++ /dev/null
-target/
-build/
+++ /dev/null
-# Makefile for Sphinx documentation
-#
-
-# You can set these variables from the command line.
-SPHINXOPTS =
-SPHINXBUILD = sphinx-build
-PAPER =
-BUILDDIR = build
-
-# Internal variables.
-PAPEROPT_a4 = -D latex_paper_size=a4
-PAPEROPT_letter = -D latex_paper_size=letter
-ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) source
-# the i18n builder cannot share the environment and doctrees with the others
-I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) source
-
-.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest gettext
-
-help:
- @echo "Please use \`make <target>' where <target> is one of"
- @echo " html to make standalone HTML files"
- @echo " dirhtml to make HTML files named index.html in directories"
- @echo " singlehtml to make a single large HTML file"
- @echo " pickle to make pickle files"
- @echo " json to make JSON files"
- @echo " htmlhelp to make HTML files and a HTML help project"
- @echo " qthelp to make HTML files and a qthelp project"
- @echo " devhelp to make HTML files and a Devhelp project"
- @echo " epub to make an epub"
- @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
- @echo " latexpdf to make LaTeX files and run them through pdflatex"
- @echo " text to make text files"
- @echo " man to make manual pages"
- @echo " texinfo to make Texinfo files"
- @echo " info to make Texinfo files and run them through makeinfo"
- @echo " gettext to make PO message catalogs"
- @echo " changes to make an overview of all changed/added/deprecated items"
- @echo " linkcheck to check all external links for integrity"
- @echo " doctest to run all doctests embedded in the documentation (if enabled)"
-
-clean:
- -rm -rf $(BUILDDIR)/*
-
-html:
- $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
- @echo
- @echo "Build finished. The HTML pages are in $(BUILDDIR)/html."
-
-dirhtml:
- $(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
- @echo
- @echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."
-
-singlehtml:
- $(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml
- @echo
- @echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml."
-
-pickle:
- $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
- @echo
- @echo "Build finished; now you can process the pickle files."
-
-json:
- $(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
- @echo
- @echo "Build finished; now you can process the JSON files."
-
-htmlhelp:
- $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
- @echo
- @echo "Build finished; now you can run HTML Help Workshop with the" \
- ".hhp project file in $(BUILDDIR)/htmlhelp."
-
-qthelp:
- $(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
- @echo
- @echo "Build finished; now you can run "qcollectiongenerator" with the" \
- ".qhcp project file in $(BUILDDIR)/qthelp, like this:"
- @echo "# qcollectiongenerator $(BUILDDIR)/qthelp/Heat.qhcp"
- @echo "To view the help file:"
- @echo "# assistant -collectionFile $(BUILDDIR)/qthelp/Heat.qhc"
-
-devhelp:
- $(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
- @echo
- @echo "Build finished."
- @echo "To view the help file:"
- @echo "# mkdir -p $$HOME/.local/share/devhelp/Heat"
- @echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/Heat"
- @echo "# devhelp"
-
-epub:
- $(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
- @echo
- @echo "Build finished. The epub file is in $(BUILDDIR)/epub."
-
-latex:
- $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
- @echo
- @echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
- @echo "Run \`make' in that directory to run these through (pdf)latex" \
- "(use \`make latexpdf' here to do that automatically)."
-
-latexpdf:
- $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
- @echo "Running LaTeX files through pdflatex..."
- $(MAKE) -C $(BUILDDIR)/latex all-pdf
- @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
-
-text:
- $(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
- @echo
- @echo "Build finished. The text files are in $(BUILDDIR)/text."
-
-man:
- $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
- @echo
- @echo "Build finished. The manual pages are in $(BUILDDIR)/man."
-
-texinfo:
- $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
- @echo
- @echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo."
- @echo "Run \`make' in that directory to run these through makeinfo" \
- "(use \`make info' here to do that automatically)."
-
-info:
- $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
- @echo "Running Texinfo files through makeinfo..."
- make -C $(BUILDDIR)/texinfo info
- @echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo."
-
-gettext:
- $(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale
- @echo
- @echo "Build finished. The message catalogs are in $(BUILDDIR)/locale."
-
-changes:
- $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
- @echo
- @echo "The overview file is in $(BUILDDIR)/changes."
-
-linkcheck:
- $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
- @echo
- @echo "Link check complete; look for any errors in the above output " \
- "or in $(BUILDDIR)/linkcheck/output.txt."
-
-doctest:
- $(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
- @echo "Testing of doctests in the sources finished, look at the " \
- "results in $(BUILDDIR)/doctest/output.txt."
+++ /dev/null
-===========================
-Building the developer docs
-===========================
-
-For user and admin docs, go to the directory `doc/docbkx`.
-
-Dependencies
-============
-
-Sphinx_
- You'll need Sphinx (the Python one). If you are using the
- virtualenv, you'll need to install it in the virtualenv
- specifically so that it can load the heat modules.
-
- ::
-
- sudo yum install python-sphinx
- sudo pip-python install sphinxcontrib-httpdomain
-
-Use `make`
-==========
-
-Just type make::
-
- make
-
-Look in the Makefile for more targets.
-
-To build the man pages::
-
- make man
-
-To build the developer documentation as HTML::
-
- make html
\ No newline at end of file
+++ /dev/null
-================================
-Building the user and admin docs
-================================
-
-This documentation should eventually end up in the OpenStack documentation
-repositories `api-site` and `openstack-manuals`.
-
-Dependencies
-============
-
-On Ubuntu::
-
- sudo apt-get install maven
-
-On Fedora::
-
- sudo yum install maven
-
-Use `mvn`
-=========
-
-Build the REST API reference manual::
-
- cd api-ref
- mvn clean generate-sources
-
-Build the Heat admin guide::
-
- cd heat-admin
- mvn clean generate-sources
-
-Build the Heat CLI guide::
-
- cd heat-cli-guide
- mvn clean generate-sources
\ No newline at end of file
+++ /dev/null
-<project xmlns="http://maven.apache.org/POM/4.0.0"
- xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
- xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
- <!-- POM Build file for the Keystone Developer Guide -->
- <modelVersion>4.0.0</modelVersion>
-
- <groupId>org.openstack.identity</groupId>
- <artifactId>docs</artifactId>
- <version>1.0</version>
- <packaging>jar</packaging>
- <name>OpenStack API Page Project</name>
- <profiles>
- <profile>
- <id>Rackspace Research Repositories</id>
- <activation>
- <activeByDefault>true</activeByDefault>
- </activation>
- <repositories>
- <repository>
- <id>rackspace-research</id>
- <name>Rackspace Research Repository</name>
- <url>http://maven.research.rackspacecloud.com/content/groups/public/</url>
- </repository>
- </repositories>
- <pluginRepositories>
- <pluginRepository>
- <id>rackspace-research</id>
- <name>Rackspace Research Repository</name>
- <url>http://maven.research.rackspacecloud.com/content/groups/public/</url>
- <snapshots>
- <updatePolicy>always</updatePolicy>
- </snapshots>
- </pluginRepository>
- </pluginRepositories>
- </profile>
- </profiles>
- <properties>
- <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
- <doctools.version>1.5.1</doctools.version>
- </properties>
- <build>
- <plugins>
- <plugin>
- <groupId>com.rackspace.cloud.api</groupId>
- <artifactId>clouddocs-maven-plugin</artifactId>
- <version>${doctools.version}</version>
- <executions>
- <execution>
- <id>g1</id>
- <goals>
- <goal>generate-html</goal>
- </goals>
- <phase>generate-sources</phase>
- <configuration>
- <highlightSource>false</highlightSource>
- <enableGoogleAnalytics>1</enableGoogleAnalytics>
- <googleAnalyticsId>UA-17511903-1</googleAnalyticsId>
- </configuration>
- </execution>
- </executions>
- <configuration>
- <!-- These parameters apply to pdf and webhelp -->
- <xincludeSupported>true</xincludeSupported>
- <sourceDirectory>src/docbkx</sourceDirectory>
- <includes>
- api-ref.xml
- </includes>
- <profileSecurity>reviewer</profileSecurity>
- <branding>openstack</branding>
- <trimWadlUriCount>1</trimWadlUriCount>
- <showXslMessages>true</showXslMessages>
- </configuration>
- </plugin>
- </plugins>
- </build>
-</project>
+++ /dev/null
-<?xml version="1.0" encoding="UTF-8"?>
-<book xmlns="http://docbook.org/ns/docbook"
- xmlns:xi="http://www.w3.org/2001/XInclude"
- xmlns:xlink="http://www.w3.org/1999/xlink"
- xmlns:wadl="http://wadl.dev.java.net/2009/02"
- version="5.0-extension RackBook-2.0" xml:id="api.openstack.org">
- <info>
- <title>Heat API</title>
- <copyright>
- <year>2012</year>
- </copyright>
- <legalnotice role="apache2">
- <para/>
- </legalnotice>
- </info>
- <chapter xml:id="object">
- <title>Heat</title>
- <para></para>
- <wadl:resources
- href="../wadls/heat-api/src/heat-api-1.0.wadl"
- xmlns:wadl="http://wadl.dev.java.net/2009/02"/>
- </chapter>
-</book>
-
+++ /dev/null
-<?xml version="1.0" encoding="UTF-8"?>
-
-<!-- (C) 2012 OpenStack Foundation, All Rights Reserved -->
-
-<application xmlns="http://wadl.dev.java.net/2009/02"
- xmlns:xsdxt="http://docs.rackspacecloud.com/xsd-ext/v1.0"
- xmlns:wadl="http://wadl.dev.java.net/2009/02">
-
- <resources base="https://heat.example.com/">
- <resource id="version" path="v1">
- <resource id="tenant_id" path="{tenant_id}">
- <param name="tenant_id" style="template">
- <doc>
- <p xmlns="http://www.w3.org/1999/xhtml">
- The unique identifier of the tenant or account.
- </p>
- </doc>
- </param>
-
- <resource id="stacks" path="stacks">
- <method href="#stack_create" />
- <method href="#stack_list" />
-
- <resource path="{stack_name}">
- <param name="stack_name" style="template" required="true">
- <doc><p xmlns="http://www.w3.org/1999/xhtml">
- The name of an existing stack.
- </p></doc>
- </param>
-
- <method href="#stack_find" />
-
- <resource path="resources">
- <method href="#stack_resources_find" />
- </resource>
-
- <resource path="events">
- <method href="#stack_events_find" />
- </resource>
-
- <resource path="{stack_id}">
- <param name="stack_id" style="template" required="true">
- <doc><p xmlns="http://www.w3.org/1999/xhtml">
- The unique identifier of an existing stack.
- </p></doc>
- </param>
-
- <method href="#stack_show" />
- <method href="#stack_update" />
- <method href="#stack_delete" />
-
- <resource path="events">
- <method href="#stack_event_list" />
- </resource>
-
- <resource path="resources">
- <method href="#resource_list" />
-
- <resource path="{resource_name}">
- <param name="resource_name" style="template" required="true">
- <doc><p xmlns="http://www.w3.org/1999/xhtml">
- The name of a resource in the stack.
- </p></doc>
- </param>
-
- <method href="#resource_show" />
-
- <resource path="metadata">
- <method href="#resource_metadata" />
- </resource>
- </resource>
-
- <resource path="events">
- <method href="#resource_event_list" />
-
- <resource path="{event_id}">
- <param name="event_id" style="template" required="true">
- <doc><p xmlns="http://www.w3.org/1999/xhtml">
- The unique identifier of an event related to the resource in the stack.
- </p></doc>
- </param>
-
- <method href="#event_show" />
- </resource>
- </resource>
-
- </resource>
-
- <resource path="template">
- <method href="#stack_template" />
- </resource>
- </resource>
- </resource>
- </resource>
-
- <resource id="validate_template" path="validate_template">
- <method href="#template_validate" />
- </resource>
-
- <resource id="resource_types" path="resource_types">
- <method href="#resource_type_list" />
- </resource>
-
- </resource>
- </resource>
- </resources>
-
- <method name="GET" id="stack_list">
- <wadl:doc xmlns="http://www.w3.org/1999/xhtml" xml:lang="EN" title="List Stacks">
- <p xmlns="http://www.w3.org/1999/xhtml">Get a list of active stacks.</p>
- </wadl:doc>
-
- <response status="200">
- </response>
- </method>
-
- <method name="POST" id="stack_create">
- <wadl:doc xmlns="http://www.w3.org/1999/xhtml" xml:lang="EN" title="Create Stack">
- <p xmlns="http://www.w3.org/1999/xhtml">Create a Stack.</p>
- </wadl:doc>
-
- <request>
- <param name="stack_name" style="plain" required="true">
- <doc><p xmlns="http://www.w3.org/1999/xhtml">
- The name of the stack to create.
- </p></doc>
- </param>
-
- <param name="template_url" style="plain" required="true">
- <doc><p xmlns="http://www.w3.org/1999/xhtml">
- The URL of the template to instantiate.
- </p><p xmlns="http://www.w3.org/1999/xhtml">
- This is ignored if the template is supplied inline.
- </p></doc>
- </param>
-
- <param name="template" style="plain" required="true">
- <doc><p xmlns="http://www.w3.org/1999/xhtml">
- A JSON template to instantiate.
- </p><p xmlns="http://www.w3.org/1999/xhtml">
- This takes precedence over the Template URL if both are supplied.
- </p></doc>
- </param>
-
- <param name="environment" style="plain" required="true">
- <doc><p xmlns="http://www.w3.org/1999/xhtml">
- A JSON environment for the stack.
- </p></doc>
- </param>
-
- <param name="param_name-n" style="plain" required="true">
- <doc><p xmlns="http://www.w3.org/1999/xhtml">
- User-defined parameter names to pass to the template.
- </p></doc>
- </param>
-
- <param name="param_value-n" style="plain" required="true">
- <doc><p xmlns="http://www.w3.org/1999/xhtml">
- User-defined parameter values to pass to the template.
- </p></doc>
- </param>
-
- <param name="timeout_mins" style="plain" required="true">
- <doc><p xmlns="http://www.w3.org/1999/xhtml">
- The timeout for stack creation in minutes.
- </p></doc>
- </param>
-
- <representation mediaType="application/json">
- <doc xml:lang="EN">
- <xsdxt:code href="samples/stack_create.json" />
- </doc>
- </representation>
- </request>
-
- <response status="201" />
- </method>
-
- <method name="GET" id="stack_find">
- <wadl:doc xmlns="http://www.w3.org/1999/xhtml" xml:lang="EN" title="Find Stack">
- <p xmlns="http://www.w3.org/1999/xhtml">Find the canonical URL for a stack with a given name.</p>
- <p xmlns="http://www.w3.org/1999/xhtml">This operation also works
- with verbs other than GET, so you can also use it to perform
- PUT and DELETE operations on a current stack. Just set your
- client to follow redirects. Note that when redirecting, the
- request method should not change, as defined in RFC 2616.
- However, in many clients the default behaviour is to change the
- method to GET when receiving a 302 because this behaviour is
- ubiquitous in web browsers.</p>
- </wadl:doc>
-
- <response status="302" />
- </method>
-
- <method name="GET" id="stack_show">
- <wadl:doc xmlns="http://www.w3.org/1999/xhtml" xml:lang="EN" title="Get Stack Data">
- <p xmlns="http://www.w3.org/1999/xhtml">Get data about a stack.</p>
- </wadl:doc>
-
- <response status="200">
- </response>
- </method>
-
- <method name="GET" id="stack_template">
- <wadl:doc xmlns="http://www.w3.org/1999/xhtml" xml:lang="EN" title="Get Stack Template">
- <p xmlns="http://www.w3.org/1999/xhtml">Retrieve a stack's template.</p>
- </wadl:doc>
-
- <response status="200">
- </response>
- </method>
-
- <method name="PUT" id="stack_update">
- <wadl:doc xmlns="http://www.w3.org/1999/xhtml" xml:lang="EN" title="Update Stack">
- <p xmlns="http://www.w3.org/1999/xhtml">Update a Stack.</p>
- </wadl:doc>
-
- <request>
- <param name="template_url" style="plain" required="true">
- <doc><p xmlns="http://www.w3.org/1999/xhtml">
- The URL of the template to instantiate.
- </p><p xmlns="http://www.w3.org/1999/xhtml">
- This is ignored if the template is supplied inline.
- </p></doc>
- </param>
-
- <param name="template" style="plain" required="true">
- <doc><p xmlns="http://www.w3.org/1999/xhtml">
- A JSON template to instantiate.
- </p><p xmlns="http://www.w3.org/1999/xhtml">
- This takes precedence over the Template URL if both are supplied.
- </p></doc>
- </param>
-
- <param name="environment" style="plain" required="true">
- <doc><p xmlns="http://www.w3.org/1999/xhtml">
- A JSON environment for the stack.
- </p></doc>
- </param>
-
- <param name="param_name-n" style="plain" required="true">
- <doc><p xmlns="http://www.w3.org/1999/xhtml">
- User-defined parameter names to pass to the template.
- </p></doc>
- </param>
-
- <param name="param_value-n" style="plain" required="true">
- <doc><p xmlns="http://www.w3.org/1999/xhtml">
- User-defined parameter values to pass to the template.
- </p></doc>
- </param>
-
- <param name="timeout_mins" style="plain" required="true">
- <doc><p xmlns="http://www.w3.org/1999/xhtml">
- The timeout for the stack update in minutes.
- </p></doc>
- </param>
-
- <representation mediaType="application/json">
- <doc xml:lang="EN">
- <xsdxt:code href="samples/stack_update.json" />
- </doc>
- </representation>
- </request>
-
- <response status="202" />
- </method>
-
- <method name="DELETE" id="stack_delete">
- <wadl:doc xmlns="http://www.w3.org/1999/xhtml" xml:lang="EN" title="Delete Stack">
- <p xmlns="http://www.w3.org/1999/xhtml">Delete a stack.</p>
- </wadl:doc>
-
- <response status="204" />
- </method>
-
- <method name="POST" id="template_validate">
- <wadl:doc xmlns="http://www.w3.org/1999/xhtml" xml:lang="EN" title="Validate Template">
- <p xmlns="http://www.w3.org/1999/xhtml">Validate a template.</p>
- </wadl:doc>
-
- <request>
- <param name="template_url" style="plain" required="true">
- <doc><p xmlns="http://www.w3.org/1999/xhtml">
- The URL of the template to instantiate.
- </p><p xmlns="http://www.w3.org/1999/xhtml">
- This is ignored if the template is supplied inline.
- </p></doc>
- </param>
-
- <param name="template" style="plain" required="true">
- <doc><p xmlns="http://www.w3.org/1999/xhtml">
- A JSON template to instantiate.
- </p><p xmlns="http://www.w3.org/1999/xhtml">
- This takes precedence over the Template URL if both are supplied.
- </p></doc>
- </param>
-
- <representation mediaType="application/json">
- <doc xml:lang="EN">
- <xsdxt:code href="samples/template_validate.json" />
- </doc>
- </representation>
- </request>
-
- <response status="200">
- </response>
- </method>
-
- <method name="GET" id="resource_type_list">
- <wadl:doc xmlns="http://www.w3.org/1999/xhtml" xml:lang="EN" title="List Resource Types">
- <p xmlns="http://www.w3.org/1999/xhtml">Get a list of the template resource types that are supported.</p>
- </wadl:doc>
-
- <response status="200">
- </response>
- </method>
-
- <method name="GET" id="stack_resources_find">
- <wadl:doc xmlns="http://www.w3.org/1999/xhtml" xml:lang="EN" title="Find Stack Resources">
- <p xmlns="http://www.w3.org/1999/xhtml">Find the canonical URL for the resource list of a stack with a given name.</p>
- </wadl:doc>
-
- <response status="302" />
- </method>
-
- <method name="GET" id="resource_list">
- <wadl:doc xmlns="http://www.w3.org/1999/xhtml" xml:lang="EN" title="Get Resources">
- <p xmlns="http://www.w3.org/1999/xhtml">Get a list of resources in a stack.</p>
- </wadl:doc>
-
- <response status="200">
- </response>
- </method>
-
- <method name="GET" id="resource_show">
- <wadl:doc xmlns="http://www.w3.org/1999/xhtml" xml:lang="EN" title="Get Resource Data">
- <p xmlns="http://www.w3.org/1999/xhtml">Get data about a resource.</p>
- </wadl:doc>
-
- <response status="200">
- </response>
- </method>
-
- <method name="GET" id="resource_metadata">
- <wadl:doc xmlns="http://www.w3.org/1999/xhtml" xml:lang="EN" title="Get Resource Metadata">
- <p xmlns="http://www.w3.org/1999/xhtml">Get a resource's metadata.</p>
- </wadl:doc>
-
- <response status="200">
- </response>
- </method>
-
- <method name="GET" id="stack_events_find">
- <wadl:doc xmlns="http://www.w3.org/1999/xhtml" xml:lang="EN" title="Find Stack Events">
- <p xmlns="http://www.w3.org/1999/xhtml">Find the canonical URL for the event list of a stack with a given name.</p>
- </wadl:doc>
-
- <response status="302" />
- </method>
-
- <method name="GET" id="stack_event_list">
- <wadl:doc xmlns="http://www.w3.org/1999/xhtml" xml:lang="EN" title="List Stack Events">
- <p xmlns="http://www.w3.org/1999/xhtml">Get a list of events for a stack.</p>
- </wadl:doc>
-
- <response status="200">
- </response>
- </method>
-
- <method name="GET" id="resource_event_list">
- <wadl:doc xmlns="http://www.w3.org/1999/xhtml" xml:lang="EN" title="List Resource Events">
- <p xmlns="http://www.w3.org/1999/xhtml">Get a list of events for a stack resource.</p>
- </wadl:doc>
-
- <response status="200">
- </response>
- </method>
-
- <method name="GET" id="event_show">
- <wadl:doc xmlns="http://www.w3.org/1999/xhtml" xml:lang="EN" title="Show Event">
- <p xmlns="http://www.w3.org/1999/xhtml">Get data about an event.</p>
- </wadl:doc>
-
- <response status="200">
- </response>
- </method>
-
-</application>
+++ /dev/null
-{
- "stack_name": "{stack_name}",
- "template_url": "{template_url}",
- "parameters": {
- "param_name-1": "param_value-1",
- "param_name-2": "param_value-2"
- },
- "timeout_mins": {timeout_mins}
-}
+++ /dev/null
-{
- "template_url": "{template_url}",
- "parameters": {
- "param_name-1": "param_value-1",
- "param_name-2": "param_value-2"
- },
- "timeout_mins": {timeout_mins}
-}
+++ /dev/null
-{
- "template_url": "{template_url}"
-}
+++ /dev/null
-<?xml version="1.0" encoding="UTF-8"?>
-<!DOCTYPE appendix [
-<!-- Some useful entities borrowed from HTML -->
-<!ENTITY ndash "–">
-<!ENTITY mdash "—">
-<!ENTITY hellip "…">
-<!ENTITY plusmn "±">
-
-]>
-<appendix xmlns="http://docbook.org/ns/docbook"
- xmlns:xi="http://www.w3.org/2001/XInclude"
- xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
- xml:id="app_core">
- <title>Core Configuration File Options</title>
- <para>TODO</para>
-</appendix>
+++ /dev/null
-<?xml version="1.0" encoding="UTF-8"?>
-<!DOCTYPE book[
- <!-- Some useful entities borrowed from HTML -->
- <!ENTITY ndash "–">
- <!ENTITY mdash "—">
- <!ENTITY hellip "…">
- <!ENTITY plusmn "±">
-
-]>
-<book xmlns="http://docbook.org/ns/docbook"
- xmlns:xlink="http://www.w3.org/1999/xlink"
- xmlns:xi="http://www.w3.org/2001/XInclude"
- xmlns:svg="http://www.w3.org/2000/svg"
- xmlns:m="http://www.w3.org/1998/Math/MathML"
- xmlns:html="http://www.w3.org/1999/xhtml"
- xmlns:db="http://docbook.org/ns/docbook"
- version="5.0"
- status="final"
- xml:id="Heat-admin-guide-trunk">
- <?rax pdf.url="../bk-heat-admin-guide-trunk.pdf"?>
- <title>Heat Administration Guide</title>
- <info>
- <author>
- <personname>
- <firstname/>
- <surname/>
- </personname>
- </author>
- <copyright>
- <year>2012</year>
- <holder>OpenStack</holder>
- </copyright>
- <releaseinfo>Grizzly (2013.1)</releaseinfo>
- <productname>Heat</productname>
- <pubdate>2012-12-14</pubdate>
- <legalnotice role="apache2">
- <annotation>
- <remark>Copyright details are filled in by the template.</remark>
- </annotation>
- </legalnotice>
- <abstract>
- <para>This document is intended for administrators interested in running the Heat Service. </para>
- </abstract>
-
- <revhistory>
- <!-- ... continue adding more revisions here as you change this document using the markup shown below... -->
- <revision>
- <date>2012-12-14</date>
- <revdescription>
- <itemizedlist spacing="compact">
- <listitem>
- <para>First edition of this document.</para>
- </listitem>
- </itemizedlist>
- </revdescription>
- </revision>
-
-
- </revhistory>
- </info>
- <xi:include href="ch_preface.xml"/>
- <xi:include href="ch_overview.xml"/>
- <xi:include href="ch_install.xml"/>
- <xi:include href="ch_using.xml"/>
- <xi:include href="ch_limitations.xml"/>
- <xi:include href="app_core.xml"/>
-
-
-</book>
+++ /dev/null
-<?xml version="1.0" encoding="UTF-8"?>
-<!DOCTYPE chapter[
-<!-- Some useful entities borrowed from HTML -->
-<!ENTITY ndash "–">
-<!ENTITY mdash "—">
-<!ENTITY hellip "…">
-<!ENTITY plusmn "±">
-
-]>
-<chapter
- xmlns="http://docbook.org/ns/docbook"
- xmlns:xi="http://www.w3.org/2001/XInclude"
- xmlns:xlink="http://www.w3.org/1999/xlink"
- version="5.0"
- xml:id="ch_install">
- <title>Heat Installation</title>
- <para> This chapter describes how to install the Heat Service
- and get it up and running. </para>
-</chapter>
+++ /dev/null
-<?xml version="1.0" encoding="UTF-8"?>
-<!DOCTYPE chapter [
-<!-- Some useful entities borrowed from HTML -->
-<!ENTITY ndash "–">
-<!ENTITY mdash "—">
-<!ENTITY hellip "…">
-<!ENTITY plusmn "±">
-
-<!-- Useful for describing APIs -->
-<!ENTITY GET '<command xmlns="http://docbook.org/ns/docbook">GET</command>'>
-<!ENTITY PUT '<command xmlns="http://docbook.org/ns/docbook">PUT</command>'>
-<!ENTITY POST '<command xmlns="http://docbook.org/ns/docbook">POST</command>'>
-<!ENTITY DELETE '<command xmlns="http://docbook.org/ns/docbook">DELETE</command>'>
-
-<!ENTITY CHECK '<inlinemediaobject xmlns="http://docbook.org/ns/docbook">
-<imageobject role="fo">
-<imagedata fileref="figures/Check_mark_23x20_02.svg"
-format="SVG" scale="60"/>
-</imageobject>
-<imageobject role="html">
-<imagedata fileref="../figures/Check_mark_23x20_02.png"
-format="PNG" />
-</imageobject>
-</inlinemediaobject>'>
-
-<!ENTITY ARROW '<inlinemediaobject xmlns="http://docbook.org/ns/docbook">
-<imageobject role="fo">
-<imagedata fileref="figures/Arrow_east.svg"
-format="SVG" scale="60"/>
-</imageobject>
-<imageobject role="html">
-<imagedata fileref="../figures/Arrow_east.png"
-format="PNG" />
-</imageobject>
-</inlinemediaobject>'>
-]>
-<chapter xmlns="http://docbook.org/ns/docbook"
- xmlns:xi="http://www.w3.org/2001/XInclude"
- xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
- xml:id="ch_limitations">
- <title>Limitations</title>
- <para>TODO</para>
-</chapter>
+++ /dev/null
-<?xml version="1.0" encoding="UTF-8"?>
-<!DOCTYPE chapter [
-<!-- Some useful entities borrowed from HTML -->
-<!ENTITY ndash "–">
-<!ENTITY mdash "—">
-<!ENTITY hellip "…">
-<!ENTITY plusmn "±">
-
-<!-- Useful for describing APIs -->
-<!ENTITY GET '<command xmlns="http://docbook.org/ns/docbook">GET</command>'>
-<!ENTITY PUT '<command xmlns="http://docbook.org/ns/docbook">PUT</command>'>
-<!ENTITY POST '<command xmlns="http://docbook.org/ns/docbook">POST</command>'>
-<!ENTITY DELETE '<command xmlns="http://docbook.org/ns/docbook">DELETE</command>'>
-
-<!ENTITY CHECK '<inlinemediaobject xmlns="http://docbook.org/ns/docbook">
-<imageobject role="fo">
-<imagedata fileref="figures/Check_mark_23x20_02.svg"
-format="SVG" scale="60"/>
-</imageobject>
-<imageobject role="html">
-<imagedata fileref="../figures/Check_mark_23x20_02.png"
-format="PNG" />
-</imageobject>
-</inlinemediaobject>'>
-
-<!ENTITY ARROW '<inlinemediaobject xmlns="http://docbook.org/ns/docbook">
-<imageobject role="fo">
-<imagedata fileref="figures/Arrow_east.svg"
-format="SVG" scale="60"/>
-</imageobject>
-<imageobject role="html">
-<imagedata fileref="../figures/Arrow_east.png"
-format="PNG" />
-</imageobject>
-</inlinemediaobject>'>
-<!ENTITY SENova 'services extension'>
-<!ENTITY uSENova 'Services extension'>
-<!ENTITY uuSENova 'Services Extension'>
-<!ENTITY RSNova 'reset state extension'>
-<!ENTITY uRSNova 'Reset state extension'>
-<!ENTITY uuRSNova 'Reset State Extension'>
-<!ENTITY ngCS 'next generation Rackspace Cloud Servers™ <link
- xlink:href="http://www.rackspace.com/cloud/openstack/"
- >powered by OpenStack</link>'>
-<!ENTITY ungCS 'Next generation Rackspace Cloud Servers™ <link
- xlink:href="http://www.rackspace.com/cloud/openstack/"
- >powered by OpenStack</link>'>
-]>
-<chapter xmlns="http://docbook.org/ns/docbook"
- xmlns:xi="http://www.w3.org/2001/XInclude"
- xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
- xml:id="ch_overview">
- <title>Overview</title>
- <para> This chapter describes the high-level concepts and
- components of a Heat deployment. </para>
-
-</chapter>
+++ /dev/null
-<!DOCTYPE preface [
-<!-- Some useful entities borrowed from HTML -->
-<!ENTITY ndash "–">
-<!ENTITY mdash "—">
-<!ENTITY hellip "…">
-<!ENTITY plusmn "±">
-
-<!-- Useful for describing APIs -->
-<!ENTITY GET '<command xmlns="http://docbook.org/ns/docbook">GET</command>'>
-<!ENTITY PUT '<command xmlns="http://docbook.org/ns/docbook">PUT</command>'>
-<!ENTITY POST '<command xmlns="http://docbook.org/ns/docbook">POST</command>'>
-<!ENTITY DELETE '<command xmlns="http://docbook.org/ns/docbook">DELETE</command>'>
-
-<!ENTITY CHECK '<inlinemediaobject xmlns="http://docbook.org/ns/docbook">
-<imageobject role="fo">
-<imagedata fileref="figures/Check_mark_23x20_02.svg"
-format="SVG" scale="60"/>
-</imageobject>
-<imageobject role="html">
-<imagedata fileref="../figures/Check_mark_23x20_02.png"
-format="PNG" />
-</imageobject>
-</inlinemediaobject>'>
-
-<!ENTITY ARROW '<inlinemediaobject xmlns="http://docbook.org/ns/docbook">
-<imageobject role="fo">
-<imagedata fileref="figures/Arrow_east.svg"
-format="SVG" scale="60"/>
-</imageobject>
-<imageobject role="html">
-<imagedata fileref="../figures/Arrow_east.png"
-format="PNG" />
-</imageobject>
-</inlinemediaobject>'>
-<!ENTITY SENova 'services extension'>
-<!ENTITY uSENova 'Services extension'>
-<!ENTITY uuSENova 'Services Extension'>
-<!ENTITY RSNova 'reset state extension'>
-<!ENTITY uRSNova 'Reset state extension'>
-<!ENTITY uuRSNova 'Reset State Extension'>
-<!ENTITY ngCS 'next generation Rackspace Cloud Servers™ <link
- xlink:href="http://www.rackspace.com/cloud/openstack/"
- >powered by OpenStack</link>'>
-<!ENTITY ungCS 'Next generation Rackspace Cloud Servers™ <link
- xlink:href="http://www.rackspace.com/cloud/openstack/"
- >powered by OpenStack</link>'>
-]>
-<preface xmlns="http://docbook.org/ns/docbook"
- xmlns:xi="http://www.w3.org/2001/XInclude"
- xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
- xml:id="ch_preface">
- <title>Preface</title>
- <section xml:id="Intended_Audience-d1e85">
- <title>Intended Audience</title>
- <para>TODO</para>
- </section>
- <section xml:id="Document_Change_History-d1e118">
- <title>Document Change History</title>
- <para>The most recent changes are described in the table
- below:</para>
- <?rax revhistory?>
- </section>
- <section xml:id="resources">
- <title>Resources</title>
- <para>TODO</para>
- </section>
- <?hard-pagebreak?>
-</preface>
+++ /dev/null
-<?xml version="1.0" encoding="UTF-8"?>
-<!DOCTYPE chapter [
-<!-- Some useful entities borrowed from HTML -->
-<!ENTITY ndash "–">
-<!ENTITY mdash "—">
-<!ENTITY hellip "…">
-<!ENTITY plusmn "±">
-
-<!-- Useful for describing APIs -->
-<!ENTITY GET '<command xmlns="http://docbook.org/ns/docbook">GET</command>'>
-<!ENTITY PUT '<command xmlns="http://docbook.org/ns/docbook">PUT</command>'>
-<!ENTITY POST '<command xmlns="http://docbook.org/ns/docbook">POST</command>'>
-<!ENTITY DELETE '<command xmlns="http://docbook.org/ns/docbook">DELETE</command>'>
-
-<!ENTITY CHECK '<inlinemediaobject xmlns="http://docbook.org/ns/docbook">
-<imageobject role="fo">
-<imagedata fileref="figures/Check_mark_23x20_02.svg"
-format="SVG" scale="60"/>
-</imageobject>
-<imageobject role="html">
-<imagedata fileref="../figures/Check_mark_23x20_02.png"
-format="PNG" />
-</imageobject>
-</inlinemediaobject>'>
-
-<!ENTITY ARROW '<inlinemediaobject xmlns="http://docbook.org/ns/docbook">
-<imageobject role="fo">
-<imagedata fileref="figures/Arrow_east.svg"
-format="SVG" scale="60"/>
-</imageobject>
-<imageobject role="html">
-<imagedata fileref="../figures/Arrow_east.png"
-format="PNG" />
-</imageobject>
-</inlinemediaobject>'>
-]>
-<chapter xmlns="http://docbook.org/ns/docbook"
- xmlns:xi="http://www.w3.org/2001/XInclude"
- xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
- xml:id="ch_using">
- <title>Using Heat</title>
- <para>TODO</para>
-</chapter>
+++ /dev/null
-
-<project xmlns="http://maven.apache.org/POM/4.0.0"
- xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
- xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
-
- <modelVersion>4.0.0</modelVersion>
-
- <groupId>org.openstack.docs</groupId>
- <artifactId>openstack-guide</artifactId>
- <version>1.0.0-SNAPSHOT</version>
- <packaging>jar</packaging>
- <name>OpenStack Guides</name>
-
- <properties>
- <!-- This is set by Jenkins according to the branch. -->
- <release.path.name>local</release.path.name>
- <comments.enabled>1</comments.enabled>
- </properties>
- <!-- ################################################ -->
- <!-- USE "mvn clean generate-sources" to run this POM -->
- <!-- Builds the Heat Administration Manual -->
- <!-- ################################################ -->
- <profiles>
- <profile>
- <id>Rackspace Research Repositories</id>
- <activation>
- <activeByDefault>true</activeByDefault>
- </activation>
- <repositories>
- <repository>
- <id>rackspace-research</id>
- <name>Rackspace Research Repository</name>
- <url>http://maven.research.rackspacecloud.com/content/groups/public/</url>
- </repository>
- </repositories>
- <pluginRepositories>
- <pluginRepository>
- <id>rackspace-research</id>
- <name>Rackspace Research Repository</name>
- <url>http://maven.research.rackspacecloud.com/content/groups/public/</url>
- </pluginRepository>
- </pluginRepositories>
- </profile>
- </profiles>
-
- <build>
- <resources>
- <resource>
- <directory>target/docbkx/pdf</directory>
- <excludes>
- <exclude>**/*.fo</exclude>
- </excludes>
- </resource>
- </resources>
- <plugins>
-
- <plugin>
- <groupId>com.rackspace.cloud.api</groupId>
- <artifactId>clouddocs-maven-plugin</artifactId>
- <version>1.5.1</version>
- <executions>
- <execution>
- <id>goal1</id>
- <goals>
- <goal>generate-pdf</goal>
- </goals>
- <phase>generate-sources</phase>
- <configuration>
- <highlightSource>false</highlightSource>
- <!-- The following elements set chapter autonumbering in the output, with no numbered sections -->
- <sectionAutolabel>0</sectionAutolabel>
- <sectionLabelIncludesComponentLabel>0</sectionLabelIncludesComponentLabel>
- </configuration>
- </execution>
- <execution>
- <id>goal2</id>
- <goals>
- <goal>generate-webhelp</goal>
- </goals>
- <phase>generate-sources</phase>
- <configuration>
- <!-- These parameters only apply to webhelp -->
- <enableDisqus>${comments.enabled}</enableDisqus>
- <useVersionForDisqus>1</useVersionForDisqus>
- <disqusShortname>os-heat-guides</disqusShortname>
- <enableGoogleAnalytics>1</enableGoogleAnalytics>
- <googleAnalyticsId>UA-17511903-1</googleAnalyticsId>
- <generateToc>
- appendix toc,title
- article/appendix nop
- article toc,title
- book title,figure,table,example,equation
- chapter toc,title
- part toc,title
- preface toc,title
- qandadiv toc
- qandaset toc
- reference toc,title
- set toc,title
- </generateToc>
- <!-- The following elements set chapter autonumbering in the output, with no numbered sections -->
- <sectionAutolabel>0</sectionAutolabel>
- <sectionLabelIncludesComponentLabel>0</sectionLabelIncludesComponentLabel>
- <postProcess>
- <!-- Copies the figures to the correct location for webhelp -->
-
- <copy todir="${basedir}/target/docbkx/webhelp/${release.path.name}/heat-admin/content/figures">
- <fileset dir="${basedir}/figures">
- <include name="**/*.*" />
- </fileset>
- </copy>
- <!-- Copies webhelp (HTML output) to the desired URL location on docs.openstack.org -->
-
- <copy
- todir="${basedir}/target/docbkx/webhelp/${release.path.name}/heat-admin">
- <fileset
- dir="${basedir}/target/docbkx/webhelp/bk-heat-admin-guide/">
- <include name="**/*" />
- </fileset>
- </copy>
-
- <!-- Moves the PDF to the needed placement -->
-
- <move failonerror="false"
- file="${basedir}/target/docbkx/pdf/bk-heat-admin-guide.pdf"
- tofile="${basedir}/target/docbkx/webhelp/${release.path.name}/heat-admin/bk-heat-admin-guide-${release.path.name}.pdf"/>
-
- <!-- Deletes leftover unneeded directories -->
-
- <delete dir="${basedir}/target/docbkx/webhelp/bk-heat-admin-guide"/>
-
- </postProcess>
- </configuration>
- </execution>
- </executions>
- <configuration>
- <!-- These parameters apply to pdf and webhelp -->
- <xincludeSupported>true</xincludeSupported>
- <sourceDirectory>.</sourceDirectory>
- <includes>
- bk-heat-admin-guide.xml
- </includes>
- <canonicalUrlBase>http://docs.openstack.org/${release.path.name}/heat-admin/content/</canonicalUrlBase>
- <profileSecurity>reviewer</profileSecurity>
- <branding>openstack</branding>
- </configuration>
- </plugin>
-
- </plugins>
- </build>
-</project>
+++ /dev/null
-<project xmlns="http://maven.apache.org/POM/4.0.0"
- xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
- xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
-
- <modelVersion>4.0.0</modelVersion>
-
- <groupId>org.openstack.docs</groupId>
- <artifactId>openstack-cli-guide</artifactId>
- <version>1.0.0-SNAPSHOT</version>
- <packaging>jar</packaging>
- <name>OpenStack CLI Guides</name>
-
- <!-- ################################################ -->
- <!-- USE "mvn clean generate-sources" to run this POM -->
- <!-- ################################################ -->
- <profiles>
- <profile>
- <id>Rackspace Research Repositories</id>
- <activation>
- <activeByDefault>true</activeByDefault>
- </activation>
- <repositories>
- <repository>
- <id>rackspace-research</id>
- <name>Rackspace Research Repository</name>
- <url>http://maven.research.rackspacecloud.com/content/groups/public/</url>
- </repository>
- </repositories>
- <pluginRepositories>
- <pluginRepository>
- <id>rackspace-research</id>
- <name>Rackspace Research Repository</name>
- <url>http://maven.research.rackspacecloud.com/content/groups/public/</url>
- </pluginRepository>
- </pluginRepositories>
- </profile>
- </profiles>
-
- <build>
- <resources>
- <resource>
- <directory>target/docbkx/pdf</directory>
- <excludes>
- <exclude>**/*.fo</exclude>
- </excludes>
- </resource>
- </resources>
- <plugins>
-
- <plugin>
- <groupId>com.rackspace.cloud.api</groupId>
- <artifactId>clouddocs-maven-plugin</artifactId>
- <version>1.5.1</version>
- <executions>
- <execution>
- <id>goal1</id>
- <goals>
- <goal>generate-pdf</goal>
- </goals>
- <phase>generate-sources</phase>
- <configuration>
- <!--<pdfUrl>cli-guide.pdf</pdfUrl>-->
- <highlightSource>false</highlightSource>
- <!-- The following elements set chapter autonumbering in the output, with no numbered sections -->
- <sectionAutolabel>0</sectionAutolabel>
- <sectionLabelIncludesComponentLabel>0</sectionLabelIncludesComponentLabel>
- </configuration>
- </execution>
- <execution>
- <id>goal2</id>
- <goals>
- <goal>generate-webhelp</goal>
- </goals>
- <phase>generate-sources</phase>
- <configuration>
- <!-- These parameters only apply to webhelp -->
- <pdfUrl>cli-guide.pdf</pdfUrl>
- <enableDisqus>1</enableDisqus>
- <disqusShortname>os-cliguide</disqusShortname>
- <enableGoogleAnalytics>1</enableGoogleAnalytics>
- <googleAnalyticsId>UA-17511903-1</googleAnalyticsId>
- <generateToc>
- appendix toc,title
- article/appendix nop
- article toc,title
- book title,figure,table,example,equation
- chapter toc,title
- part toc,title
- preface toc,title
- qandadiv toc
- qandaset toc
- reference toc,title
- set toc,title
- </generateToc>
- <!-- The following elements set chapter autonumbering in the output, with no numbered sections -->
- <sectionAutolabel>0</sectionAutolabel>
- <sectionLabelIncludesComponentLabel>0</sectionLabelIncludesComponentLabel>
- <postProcess>
- <!-- Copies webhelp (HTML output) to desired URL location on docs.openstack.org -->
- <copy
- todir="${basedir}/target/docbkx/webhelp/cli-guide/">
- <fileset
- dir="${basedir}/target/docbkx/webhelp/bk-cli-guide/">
- <include name="**/*" />
- </fileset>
- </copy>
- <!-- Moves the PDF to the needed placement -->
- <move failonerror="false"
- file="${basedir}/target/docbkx/pdf/bk-cli-guide.pdf"
- tofile="${basedir}/target/docbkx/webhelp/cli-guide/content/cli-guide.pdf"/>
- <!-- Deletes leftover unneeded directories -->
- <delete dir="${basedir}/target/docbkx/webhelp/bk-cli-guide"/>
- </postProcess>
- </configuration>
- </execution>
- </executions>
- <configuration>
- <!-- These parameters apply to pdf and webhelp -->
- <xincludeSupported>true</xincludeSupported>
- <sourceDirectory>src</sourceDirectory>
- <includes>
- bk-cli-guide.xml
- </includes>
- <!--<canonicalUrlBase>http://docs.openstack.org/cli/content/</canonicalUrlBase>-->
- <profileSecurity>reviewer</profileSecurity>
- <branding>openstack</branding>
- </configuration>
- </plugin>
- </plugins>
- </build>
-
-</project>
+++ /dev/null
-<?xml version="1.0" encoding="UTF-8"?>
-<chapter xmlns="http://docbook.org/ns/docbook"
- xmlns:xlink="http://www.w3.org/1999/xlink"
- xmlns:xi="http://www.w3.org/2001/XInclude"
- xmlns:svg="http://www.w3.org/2000/svg"
- xmlns:html="http://www.w3.org/1999/xhtml" version="5.0"
- xml:id="openstack-cli-guide-trunk">
- <title>OpenStack CLI Guide</title>
- <para>Each OpenStack project has a command-line interface (CLI)
- that interacts with the service's REST API. </para>
-
- <xi:include href="heat_cli_install.xml"/>
- <xi:include href="heat_cli_commands.xml"/>
- <xi:include href="heat_cli_howto.xml"/>
-</chapter>
+++ /dev/null
-<?xml version="1.0" encoding="UTF-8"?>
-<section xmlns="http://docbook.org/ns/docbook"
- xmlns:xi="http://www.w3.org/2001/XInclude"
- xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
- xml:id="heat_client_commands">
- <title>Command List for heat CLI</title>
- <literallayout class="monospaced">
- event-list List events for a stack
- event-show Describe the event
- resource-list Show list of resources belonging to a stack
- resource-metadata List resource metadata
- resource-show Describe the resource
- stack-create Create the stack
- stack-delete Delete the stack
- stack-list List the user's stacks
- stack-show Describe the stack
- stack-update Update the stack
- template-show Get the template for the specified stack
- template-validate Validate a template with parameters
- help Display help about this program or one of its subcommands.
- </literallayout>
-</section>
+++ /dev/null
-<?xml version="1.0" encoding="UTF-8"?>
-<section xmlns="http://docbook.org/ns/docbook"
- xmlns:xlink="http://www.w3.org/1999/xlink"
- xmlns:xi="http://www.w3.org/2001/XInclude"
- xmlns:svg="http://www.w3.org/2000/svg"
- xmlns:html="http://www.w3.org/1999/xhtml"
- version="5.0"
- xml:id="neutron-cli-reference">
- <?dbhtml stop-chunking?>
- <title>OpenStack Heat CLI Guide</title>
- <para>This section describes the heat CLI commands.</para>
-
-</section>
+++ /dev/null
-<?xml version="1.0" encoding="UTF-8"?>
-<section xmlns="http://docbook.org/ns/docbook"
- xmlns:xi="http://www.w3.org/2001/XInclude"
- xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
- xml:id="install_openstack_heat_cli">
- <title>Install OpenStack heat CLI</title>
- <para>This example walks through installing the heat client. After
- you install a client, you must configure environment variables
- for authentication. </para>
- <procedure>
- <title>To install the heat client:</title>
- <step>
- <title>Install Python</title>
- <para>Install Python 2.6 or later. Currently, the heat
- client does not support Python 3. </para>
- </step>
- <step>
- <title>Install the heat client package</title>
- <para>Choose one of the following methods to install the
- heat client package. <itemizedlist>
- <listitem>
- <para><emphasis role="bold">Recommended
- method:</emphasis>
- <command>pip</command></para>
- <para>Install <command>pip</command> through
- the package manager for your
- system:<informaltable rules="all">
- <col width="20%"/>
- <col width="80%"/>
- <thead>
- <tr>
- <th>System</th>
- <th>Command</th>
- </tr>
- </thead>
- <tbody>
- <tr>
- <td>Mac OS X</td>
- <td>
- <programlisting language="bash" role="gutter: false"><prompt>$</prompt> sudo easy_install pip</programlisting>
- </td>
- </tr>
- <tr>
- <td>Ubuntu</td>
- <td>
- <programlisting language="bash" role="gutter: false"><prompt>$</prompt> sudo aptitude install python-pip</programlisting>
- </td>
- </tr>
- <tr>
- <td>RHEL, CentOS, or Fedora</td>
- <td>
- <programlisting language="bash" role="gutter: false"><prompt>$</prompt> sudo yum install python-pip</programlisting>
- </td>
- </tr>
- </tbody>
- </informaltable></para>
- <para>Run the following command to install the heat client package:
- <programlisting language="bash" role="gutter: false"><prompt>$</prompt> sudo pip install python-heatclient</programlisting></para>
-
- </listitem>
- <listitem>
- <para><emphasis role="bold"
- >easy_install</emphasis></para>
- <para>Run the following command to install the
- heat client package: </para>
- <programlisting language="bash" role="gutter: false"><prompt>$</prompt> sudo easy_install python-heatclient</programlisting>
- </listitem>
- </itemizedlist></para>
- </step>
- <step>
- <title>Get help for heat client commands</title>
- <para>To get help for heat client commands, run the
- following command:</para>
- <programlisting language="bash" role="gutter: false"><prompt>$</prompt> heat -h </programlisting>
- <para>Depending on your user credentials, you may not have
- permissions to use every command that is listed. The
- heat client was written for use with recent
- development versions of OpenStack.</para>
- <para>To get help for a specific command, type the command
- name after the <parameter>help</parameter> parameter,
- as follows:</para>
- <programlisting language="bash" role="gutter: false"><prompt>$</prompt> heat help &lt;command_name&gt;</programlisting>
- </step>
- </procedure></section>
+++ /dev/null
-..
- Copyright 2011-2012 OpenStack Foundation
- All Rights Reserved.
-
- Licensed under the Apache License, Version 2.0 (the "License"); you may
- not use this file except in compliance with the License. You may obtain
- a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
- WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
- License for the specific language governing permissions and limitations
- under the License.
-
-Heat Architecture
-=================
-
-Heat is a service to orchestrate multiple composite cloud applications using the `AWS CloudFormation <http://docs.aws.amazon.com/AWSCloudFormation/latest/APIReference/Welcome.html?r=7078>`_ template format, through both an OpenStack-native REST API and a CloudFormation-compatible Query API.
-
-
---------------------
-Detailed Description
---------------------
-
-What is the purpose of the project and vision for it?
-
-*Heat provides an AWS CloudFormation implementation for OpenStack: it orchestrates an AWS CloudFormation template that describes a cloud application by executing the appropriate OpenStack API calls to generate a running cloud application.*
-
-Describe the relevance of the project to other OpenStack projects and the OpenStack mission to provide a ubiquitous cloud computing platform:
-
-*The software integrates other core components of OpenStack into a one-file template system. The templates allow creation of most OpenStack resource types (such as instances, floating IPs, volumes, security groups, and users), as well as more advanced functionality such as instance high availability, instance autoscaling, and nested stacks. By providing very tight integration with other OpenStack core projects, all OpenStack core projects could receive a larger user base.*
-*Currently no other CloudFormation implementation exists for OpenStack. The developers believe cloud developers have a strong desire to move workloads from AWS to OpenStack deployments. Given the absence of a well-implemented and integrated CloudFormation API in OpenStack, we provide a high-quality implementation that fills this gap and improves the ubiquity of OpenStack.*
-
--------------
-Heat Services
--------------
-
-The developers are focused on creating an OpenStack-style project using OpenStack design tenets, implemented in Python. We have started with full Keystone integration, and the service comprises a number of components.
-
-As the developers have only started development in March 2012, the architecture is evolving rapidly.
-
-heat
-----
-
-The heat tool is a CLI that communicates with heat-api to execute AWS CloudFormation API calls. End developers can also use the heat REST API directly.
-
-
-heat-api
---------
-
-The heat-api component provides an OpenStack-native REST API that processes API requests by sending them to the heat-engine over RPC.
-
-
-heat-api-cfn
-------------
-
-The heat-api-cfn component provides an AWS Query API that is compatible with AWS CloudFormation and processes API requests by sending them to the heat-engine over RPC.
-
-
-heat-engine
------------
-
-The heat engine's main responsibility is to orchestrate the launching of templates and provide events back to the API consumer.
-
-The templates integrate well with `Puppet <https://s3.amazonaws.com/cloudformation-examples/IntegratingAWSCloudFormationWithPuppet.pdf>`_ and `Chef <http://www.full360.com/2011/02/27/integrating-aws-cloudformation-and-chef.html>`_.
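-
-A minimal CloudFormation-style template of the kind heat-engine orchestrates is sketched below; the resource name, image ID, and instance type are illustrative placeholders, not values shipped with Heat::
-
-    {
-        "AWSTemplateFormatVersion": "2010-09-09",
-        "Description": "Launch a single instance",
-        "Resources": {
-            "MyInstance": {
-                "Type": "AWS::EC2::Instance",
-                "Properties": {
-                    "ImageId": "F17-x86_64-cfntools",
-                    "InstanceType": "m1.small"
-                }
-            }
-        }
-    }
-
-Submitting such a template through stack-create causes heat-engine to translate each entry in Resources into the corresponding OpenStack API calls.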
+++ /dev/null
-# -*- coding: utf-8 -*-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# Heat documentation build configuration file, created by
-# sphinx-quickstart on Thu Dec 13 11:23:35 2012.
-#
-# This file is execfile()d with the current directory set to its containing
-# dir.
-#
-# Note that not all possible configuration values are present in this
-# autogenerated file.
-#
-# All configuration values have a default; values that are commented out
-# serve to show the default.
-
-import os
-import sys
-
-BASE_DIR = os.path.dirname(os.path.abspath(__file__))
-ROOT = os.path.abspath(os.path.join(BASE_DIR, "..", ".."))
-
-sys.path.insert(0, ROOT)
-sys.path.insert(0, BASE_DIR)
-
-# This is required for ReadTheDocs.org, but isn't a bad idea anyway.
-os.environ['DJANGO_SETTINGS_MODULE'] = 'openstack_dashboard.settings'
-
-
-def write_autodoc_index():
-
- def find_autodoc_modules(module_name, sourcedir):
- """Return a list of modules in the SOURCE directory."""
- modlist = []
- os.chdir(os.path.join(sourcedir, module_name))
- print("SEARCHING %s" % sourcedir)
- for root, dirs, files in os.walk("."):
- for filename in files:
- if filename.endswith(".py"):
- # remove the pieces of the root
- elements = root.split(os.path.sep)
- # replace the leading "." with the module name
- elements[0] = module_name
- # and get the base module name
- base, extension = os.path.splitext(filename)
- if base != "__init__":
- elements.append(base)
- result = ".".join(elements)
- #print(result)
- modlist.append(result)
- return modlist
-
- RSTDIR = os.path.abspath(os.path.join(BASE_DIR, "sourcecode"))
- SRCS = {'heat': ROOT}
-
- EXCLUDED_MODULES = ('heat.tests',
- 'heat.testing',
- 'heat.cmd',
- 'heat.common',
- 'heat.cloudinit',
- 'heat.cfn_client',
- 'heat.doc',
- 'heat.db',
- 'heat.engine.resources',
- 'heat.locale',
- 'heat.openstack')
- CURRENT_SOURCES = {}
-
- if not os.path.exists(RSTDIR):
- os.mkdir(RSTDIR)
- CURRENT_SOURCES[RSTDIR] = ['autoindex.rst', '.gitignore']
-
- INDEXOUT = open(os.path.join(RSTDIR, "autoindex.rst"), "w")
- INDEXOUT.write("=================\n")
- INDEXOUT.write("Source Code Index\n")
- INDEXOUT.write("=================\n")
-
- for modulename, path in SRCS.items():
- sys.stdout.write("Generating source documentation for %s\n" %
- modulename)
- INDEXOUT.write("\n%s\n" % modulename.capitalize())
- INDEXOUT.write("%s\n" % ("=" * len(modulename),))
- INDEXOUT.write(".. toctree::\n")
- INDEXOUT.write(" :maxdepth: 1\n")
- INDEXOUT.write("\n")
-
- MOD_DIR = os.path.join(RSTDIR, modulename)
- CURRENT_SOURCES[MOD_DIR] = []
- if not os.path.exists(MOD_DIR):
- os.mkdir(MOD_DIR)
- for module in find_autodoc_modules(modulename, path):
- if any(module.startswith(exclude)
- for exclude in EXCLUDED_MODULES):
- print("Excluded module %s." % module)
- continue
- mod_path = os.path.join(path, *module.split("."))
- generated_file = os.path.join(MOD_DIR, "%s.rst" % module)
-
- INDEXOUT.write(" %s/%s\n" % (modulename, module))
-
- # Find the __init__.py module if this is a directory
- if os.path.isdir(mod_path):
- source_file = os.path.join(mod_path, "__init__.py")
- else:
- source_file = mod_path + ".py"
-
- CURRENT_SOURCES[MOD_DIR].append("%s.rst" % module)
- # Only generate a new file if the source has changed or we don't
- # have a doc file to begin with.
- if not os.access(generated_file, os.F_OK) or \
- os.stat(generated_file).st_mtime < \
- os.stat(source_file).st_mtime:
- print("Module %s updated, generating new documentation."
- % module)
- FILEOUT = open(generated_file, "w")
- header = "The :mod:`%s` Module" % module
- FILEOUT.write("%s\n" % ("=" * len(header),))
- FILEOUT.write("%s\n" % header)
- FILEOUT.write("%s\n" % ("=" * len(header),))
- FILEOUT.write(".. automodule:: %s\n" % module)
- FILEOUT.write(" :members:\n")
- FILEOUT.write(" :undoc-members:\n")
- FILEOUT.write(" :show-inheritance:\n")
- FILEOUT.write(" :noindex:\n")
- FILEOUT.close()
-
- INDEXOUT.close()
-
- # Delete auto-generated .rst files for sources which no longer exist
- for directory, subdirs, files in list(os.walk(RSTDIR)):
- for old_file in files:
- if old_file not in CURRENT_SOURCES.get(directory, []):
- print("Removing outdated file for %s" % old_file)
- os.remove(os.path.join(directory, old_file))
-
-
-write_autodoc_index()
-
-# If extensions (or modules to document with autodoc) are in another directory,
-# add these directories to sys.path here. If the directory is relative to the
-# documentation root, use os.path.abspath to make it absolute, like shown here.
-#sys.path.insert(0, os.path.abspath('.'))
-
-# -- General configuration ----------------------------------------------------
-
-# If your documentation needs a minimal Sphinx version, state it here.
-#needs_sphinx = '1.0'
-
-# Add any Sphinx extension module names here, as strings. They can be
-# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
-extensions = ['sphinx.ext.autodoc',
- 'sphinx.ext.ifconfig',
- 'sphinx.ext.viewcode',
- 'sphinx.ext.todo',
- 'sphinx.ext.coverage',
- 'sphinx.ext.pngmath',
- 'oslo.sphinx',
- 'heat.doc.resources']
-
-todo_include_todos = True
-
-# Add any paths that contain templates here, relative to this directory.
-if os.getenv('HUDSON_PUBLISH_DOCS'):
- templates_path = ['_ga', '_templates']
-else:
- templates_path = ['_templates']
-
-# The suffix of source filenames.
-source_suffix = '.rst'
-
-# The encoding of source files.
-#source_encoding = 'utf-8-sig'
-
-# The master toctree document.
-master_doc = 'index'
-
-# General information about the project.
-project = u'Heat'
-copyright = u'2012,2013 Heat Developers'
-
-# The language for content autogenerated by Sphinx. Refer to documentation
-# for a list of supported languages.
-#language = None
-
-# There are two options for replacing |today|: either, you set today to some
-# non-false value, then it is used:
-#today = ''
-# Else, today_fmt is used as the format for a strftime call.
-#today_fmt = '%B %d, %Y'
-
-# List of patterns, relative to source directory, that match files and
-# directories to ignore when looking for source files.
-exclude_patterns = ['**/#*', '**~', '**/#*#']
-
-# The reST default role (used for this markup: `text`)
-# to use for all documents.
-#default_role = None
-
-# If true, '()' will be appended to :func: etc. cross-reference text.
-#add_function_parentheses = True
-
-# If true, the current module name will be prepended to all description
-# unit titles (such as .. function::).
-#add_module_names = True
-
-# If true, sectionauthor and moduleauthor directives will be shown in the
-# output. They are ignored by default.
-#show_authors = False
-
-# The name of the Pygments (syntax highlighting) style to use.
-pygments_style = 'sphinx'
-
-# A list of ignored prefixes for module index sorting.
-#modindex_common_prefix = []
-
-primary_domain = 'py'
-nitpicky = False
-
-
-# -- Options for HTML output --------------------------------------------------
-
-# The theme to use for HTML and HTML Help pages. See the documentation for
-# a list of builtin themes.
-# html_theme_path = ['.']
-# html_theme = '_theme'
-
-# Theme options are theme-specific and customize the look and feel of a theme
-# further. For a list of options available for each theme, see the
-# documentation.
-html_theme_options = {
- "nosidebar": "false"
-}
-
-# Add any paths that contain custom themes here, relative to this directory.
-#html_theme_path = []
-
-# The name for this set of Sphinx documents. If None, it defaults to
-# "<project> v<release> documentation".
-#html_title = None
-
-# A shorter title for the navigation bar. Default is the same as html_title.
-#html_short_title = None
-
-# The name of an image file (relative to this directory) to place at the top
-# of the sidebar.
-#html_logo = None
-
-# The name of an image file (within the static path) to use as favicon of the
-# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
-# pixels large.
-#html_favicon = None
-
-# Add any paths that contain custom static files (such as style sheets) here,
-# relative to this directory. They are copied after the builtin static files,
-# so a file named "default.css" will overwrite the builtin "default.css".
-# html_static_path = ['_static']
-
-# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
-# using the given strftime format.
-#html_last_updated_fmt = '%b %d, %Y'
-git_cmd = "git log --pretty=format:'%ad, commit %h' --date=local -n1"
-html_last_updated_fmt = os.popen(git_cmd).read()
-
-# If true, SmartyPants will be used to convert quotes and dashes to
-# typographically correct entities.
-#html_use_smartypants = True
-
-# Custom sidebar templates, maps document names to template names.
-#html_sidebars = {}
-
-# Additional templates that should be rendered to pages, maps page names to
-# template names.
-#html_additional_pages = {}
-
-# If false, no module index is generated.
-#html_domain_indices = True
-
-# If false, no index is generated.
-#html_use_index = True
-
-# If true, the index is split into individual pages for each letter.
-#html_split_index = False
-
-# If true, links to the reST sources are added to the pages.
-#html_show_sourcelink = True
-
-# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
-#html_show_sphinx = True
-
-# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
-#html_show_copyright = True
-
-# If true, an OpenSearch description file will be output, and all pages will
-# contain a <link> tag referring to it. The value of this option must be the
-# base URL from which the finished HTML is served.
-#html_use_opensearch = ''
-
-# This is the file name suffix for HTML files (e.g. ".xhtml").
-#html_file_suffix = None
-
-# Output file base name for HTML help builder.
-htmlhelp_basename = 'Heatdoc'
-
-
-# -- Options for LaTeX output -------------------------------------------------
-
-latex_elements = {
- # The paper size ('letterpaper' or 'a4paper').
- #'papersize': 'letterpaper',
-
- # The font size ('10pt', '11pt' or '12pt').
- #'pointsize': '10pt',
-
- # Additional stuff for the LaTeX preamble.
- #'preamble': '',
-}
-
-# Grouping the document tree into LaTeX files. List of tuples
-# (source start file, target name, title, author, documentclass [howto/manual])
-latex_documents = [
- ('index', 'Heat.tex', u'Heat Documentation',
- u'Heat Developers', 'manual'),
-]
-
-# The name of an image file (relative to this directory) to place at the top of
-# the title page.
-#latex_logo = None
-
-# For "manual" documents, if this is true, then toplevel headings are parts,
-# not chapters.
-#latex_use_parts = False
-
-# If true, show page references after internal links.
-#latex_show_pagerefs = False
-
-# If true, show URL addresses after external links.
-#latex_show_urls = False
-
-# Documents to append as an appendix to all manuals.
-#latex_appendices = []
-
-# If false, no module index is generated.
-#latex_domain_indices = True
-
-
-# -- Options for manual page output -------------------------------------------
-
-# One entry per manual page. List of tuples
-# (source start file, name, description, authors, manual section).
-man_pages = [
- ('man/heat-api', 'heat-api',
- u'REST API service to the heat project.',
- [u'Heat Developers'], 1),
- ('man/heat-api-cfn', 'heat-api-cfn',
- u'CloudFormation compatible API service to the heat project.',
- [u'Heat Developers'], 1),
- ('man/heat-api-cloudwatch', 'heat-api-cloudwatch',
-     u'CloudWatch-like API service to the heat project.',
-     [u'Heat Developers'], 1),
-    ('man/heat-db-setup', 'heat-db-setup',
-     u'Command line utility to set up the Heat database.',
-     [u'Heat Developers'], 1),
-    ('man/heat-engine', 'heat-engine',
-     u'Service which performs the actions from the API calls made by the user.',
-     [u'Heat Developers'], 1),
-    ('man/heat-keystone-setup', 'heat-keystone-setup',
-     u'Script which sets up keystone for usage by Heat.',
-     [u'Heat Developers'], 1),
-]
-
-# If true, show URL addresses after external links.
-#man_show_urls = False
-
-
-# -- Options for Texinfo output -----------------------------------------------
-
-# Grouping the document tree into Texinfo files. List of tuples
-# (source start file, target name, title, author,
-# dir menu entry, description, category)
-texinfo_documents = [
- ('index', 'Heat', u'Heat Documentation',
- u'Heat Developers', 'Heat', 'One line description of project.',
- 'Miscellaneous'),
-]
-
-# Documents to append as an appendix to all manuals.
-#texinfo_appendices = []
-
-# If false, no module index is generated.
-#texinfo_domain_indices = True
-
-# How to display URL addresses: 'footnote', 'no', or 'inline'.
-#texinfo_show_urls = 'footnote'
+++ /dev/null
-..
- Licensed under the Apache License, Version 2.0 (the "License"); you may
- not use this file except in compliance with the License. You may obtain
- a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
- WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
- License for the specific language governing permissions and limitations
- under the License.
-
-Getting Started Guides
-======================
-
-.. toctree::
- :maxdepth: 2
-
- on_devstack
- on_fedora
- on_ubuntu
- on_other
- jeos_building
\ No newline at end of file
+++ /dev/null
-..
- Licensed under the Apache License, Version 2.0 (the "License"); you may
- not use this file except in compliance with the License. You may obtain
- a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
- WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
- License for the specific language governing permissions and limitations
- under the License.
-
-Building JEOS images for use with Heat
-======================================
-Heat's full functionality can only be used when launching cloud images that have
-the heat-cfntools_ package installed.
-This document describes some options for creating a heat-cfntools enabled image
-for yourself.
-
-.. _heat-cfntools: https://github.com/openstack/heat-cfntools
-
-Building an image with diskimage-builder
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-diskimage-builder_ is a tool for customizing cloud images.
-tripleo-image-elements_ is a collection of diskimage-builder elements related
-to the TripleO_ project. It includes an element for heat-cfntools which can be
-used to create heat-enabled images.
-
-.. _diskimage-builder: https://github.com/openstack/diskimage-builder
-.. _tripleo-image-elements: https://github.com/openstack/tripleo-image-elements
-.. _TripleO: https://wiki.openstack.org/wiki/TripleO
-
-Fetch the tool and elements::
-
- git clone https://github.com/openstack/diskimage-builder.git
- git clone https://github.com/openstack/tripleo-image-elements.git
-
-To create a heat-cfntools enabled image with the current release of Fedora x86_64::
-
- export ELEMENTS_PATH=tripleo-image-elements/elements
- diskimage-builder/bin/disk-image-create vm fedora heat-cfntools -a amd64 -o fedora-heat-cfntools
-
-The image may then be pushed to glance, e.g::
-
- source ~/.openstack/keystonerc
- glance image-create --name fedora-heat-cfntools --is-public true --disk-format qcow2 --container-format bare < fedora-heat-cfntools.qcow2
-
-To create a heat-cfntools enabled image with the current release of Ubuntu i386::
-
- export ELEMENTS_PATH=tripleo-image-elements/elements
- diskimage-builder/bin/disk-image-create vm ubuntu heat-cfntools -a i386 -o ubuntu-heat-cfntools
-
-If you are creating your own images you should consider creating golden images
-which contain all the packages required for the stacks that you launch. You can do
-this by writing your own diskimage-builder elements and invoking those elements
-in the call to disk-image-create.
-
-This means that the resulting heat templates only need to modify configuration
-files. This will speed stack launch time and reduce the risk of a transient
-package download failure causing the stack launch to fail.
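
As a rough sketch, a custom element is just a directory of hook scripts. The element name and package list below are hypothetical, but the ``install.d`` layout follows the diskimage-builder element convention:

```shell
# Hypothetical element "my-packages": install.d hooks run inside the image
# chroot during the build, so the packages land in the image itself.
mkdir -p my-elements/my-packages/install.d
cat > my-elements/my-packages/install.d/50-my-packages <<'EOF'
#!/bin/bash
set -eu
# install-packages is the package-install helper diskimage-builder provides
install-packages mysql-server httpd
EOF
chmod +x my-elements/my-packages/install.d/50-my-packages
```

The element would then be picked up by adding ``my-elements`` to ``ELEMENTS_PATH`` and appending ``my-packages`` to the ``disk-image-create`` arguments.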
-
-Building an image with Oz
-~~~~~~~~~~~~~~~~~~~~~~~~~
-Another approach to building a heat-cfntools enabled image is to use Oz wrapped in a convenience script.
-
-The example below demonstrates how to build an F17 image, but Oz TDL templates for several other distributions are provided in heat-templates/jeos.
-
-Get heat-templates
-------------------
-
-Clone the heat-templates repository from GitHub at ``git://github.com/openstack/heat-templates.git``
-
-
-Note: Oz does not work in virt-on-virt situations (e.g. when run inside a VM). In that case, it is recommended to use the prebuilt images instead.
-
-Download OS install DVD and copy it to libvirt images location
---------------------------------------------------------------
-
-::
-
- sudo cp Downloads/Fedora-17-x86_64-DVD.iso /var/lib/libvirt/images
-
-Install Oz (RPM distros)
-------------------------
-
-We recommend cloning Oz from the latest master. Support for building guests based on recent distributions is not available in the version of Oz shipped with many distros.
-
-On Fedora and other RPM-based distros::
-
- git clone -q https://github.com/clalancette/oz.git
- pushd oz
- rm -f ~/rpmbuild/RPMS/noarch/oz-*
- make rpm
- sudo yum -q -y localinstall ~/rpmbuild/RPMS/noarch/oz-*
- popd
-
-Note: In the steps above, root privileges are only needed for the ``yum localinstall`` step; it is recommended not to build the RPM as root.
-
-Install Oz (DEB distros)
-------------------------
-
-We recommend cloning Oz from the latest master. The Debian packaging is broken in older versions, and support for building guests based on recent distributions is not available in the version of Oz shipped with many distros.
-
-On Debian, Ubuntu and other deb based distros::
-
- git clone https://github.com/clalancette/oz.git
- cd oz
- make deb
- cd ..
- sudo dpkg -i oz_*_all.deb
- sudo apt-get -f install
-
-Note: Select "yes" when asked to "Create or update supermin appliance". This rebuilds the guestfs appliance to work with the latest updates of Ubuntu; Oz will not work properly without an updated guestfs appliance.
-
-
-Configure libguestfs (required by Oz) to work on recent Ubuntu
----------------------------------------------------------------
-Some files shipped with Ubuntu 12.04 are incompatible with the libguestfs
-used by the image creation software Oz. To allow heat-jeos.sh to work
-properly, run the following commands::
-
- sudo chmod 644 /boot/vmlinuz*
- sudo update-guestfs-appliance
-
-Note: For more details see: http://permalink.gmane.org/gmane.comp.emulators.guestfs/1382
-and http://libguestfs.org/guestfs-faq.1.html
-
-Note: If you want to create F17 images, you may need a new libguestfs binary of version 1.18.0 or later. Ubuntu Precise may not have this version yet.
-
-You can use the Debian Wheezy version including the `guestfs shared library`_, the tools_ and the `python libraries`_.
-
-.. _guestfs shared library: http://packages.debian.org/wheezy/amd64/libguestfs0/download
-.. _tools: http://packages.debian.org/wheezy/amd64/libguestfs-tools/download
-.. _python libraries: http://packages.debian.org/wheezy/amd64/python-guestfs/download
-
-
-Create a JEOS with heat-jeos.sh script
---------------------------------------
-
-heat-templates/tools contains a convenience wrapper for Oz which demonstrates how to create a JEOS::
-
- cd heat-templates/tools
- sudo ./heat-jeos.sh ../jeos/F17-x86_64-cfntools.tdl F17-x86_64-cfntools
-
-Note: the second argument is the name as defined inside the TDL, so it may not necessarily match the filename.
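
If in doubt, the name can be read directly out of the TDL. As a sketch (using a hypothetical minimal TDL for illustration):

```shell
# Write a hypothetical minimal TDL, then extract its <name> element --
# this is the name to pass as the second argument to heat-jeos.sh.
cat > /tmp/example.tdl <<'EOF'
<template>
  <name>F17-x86_64-cfntools</name>
</template>
EOF
sed -n 's:.*<name>\(.*\)</name>.*:\1:p' /tmp/example.tdl | head -n1
```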
-
-Note: ``heat-jeos.sh`` must be run as root in order to create the disk image.
-
-Register the image with glance
-------------------------------
-
-On successful completion, the heat-jeos.sh script will generate a qcow2 image under ``/var/lib/libvirt/images/``.
-
-The image may then be pushed to glance, e.g::
-
- source ~/.openstack/keystonerc
- glance add name=F17-x86_64-cfntools is_public=true disk_format=qcow2 container_format=bare < /var/lib/libvirt/images/F17-x86_64-cfntools.qcow2
+++ /dev/null
-..
- Licensed under the Apache License, Version 2.0 (the "License"); you may
- not use this file except in compliance with the License. You may obtain
- a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
- WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
- License for the specific language governing permissions and limitations
- under the License.
-
-Heat and Devstack
-=================
-Heat is fully integrated into DevStack. This is a convenient way to try out or develop heat alongside the current development state of all the other OpenStack projects. Heat on DevStack works on both Ubuntu and Fedora.
-
-These instructions assume you already have a working DevStack installation which can launch basic instances.
-
-Configure DevStack to enable Heat
----------------------------------
-Adding the following line to your `localrc` file will enable the heat services::
-
- ENABLED_SERVICES+=,heat,h-api,h-api-cfn,h-api-cw,h-eng
-
-It would also be useful to automatically download and register
-a VM image that Heat can launch::
-
- IMAGE_URLS+=",http://fedorapeople.org/groups/heat/prebuilt-jeos-images/F17-x86_64-cfntools.qcow2,http://fedorapeople.org/groups/heat/prebuilt-jeos-images/F17-i386-cfntools.qcow2"
-
-URLs for any of the `prebuilt JEOS images`_ can be specified.
-
-.. _prebuilt JEOS images: http://fedorapeople.org/groups/heat/prebuilt-jeos-images/
-
-That is all the configuration that is required. When you run `./stack.sh` the Heat processes will be launched in `screen` with the labels prefixed with `h-`.
-
-Confirming heat is responding
------------------------------
-
-Before any heat commands can be run, the authentication environment
-needs to be loaded::
-
- source openrc
-
-You can confirm that Heat is running and responding
-with this command::
-
- heat stack-list
-
-This should return an empty list of stacks.
-
-Preparing Nova for running stacks
----------------------------------
-
-Enabling Heat in devstack will replace the default Nova flavors with
-flavors that the Heat example templates expect. You can see what
-those flavors are by running::
-
- nova flavor-list
-
-Heat needs to launch instances with a keypair, so we need
-to generate one::
-
- nova keypair-add heat_key > heat_key.priv
- chmod 600 heat_key.priv
-
-Launching a stack
------------------
-Now let's launch a stack, using an example template from the heat-templates repository::
-
-    heat stack-create teststack -u https://raw.github.com/openstack/heat-templates/master/cfn/WordPress_Single_Instance.template -P "InstanceType=m1.large;DBUsername=wp;DBPassword=verybadpassword;KeyName=heat_key;LinuxDistribution=F17"
-
-Which will respond::
-
- +--------------------------------------+-----------+--------------------+----------------------+
- | ID | Name | Status | Created |
- +--------------------------------------+-----------+--------------------+----------------------+
- | (uuid) | teststack | CREATE_IN_PROGRESS | (timestamp) |
- +--------------------------------------+-----------+--------------------+----------------------+
-
-
-List stacks
-~~~~~~~~~~~
-List the stacks in your tenant::
-
- heat stack-list
-
-List stack events
-~~~~~~~~~~~~~~~~~
-
-List the events related to a particular stack::
-
- heat event-list teststack
-
-Describe the wordpress stack
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Show detailed state of a stack::
-
- heat stack-show teststack
-
-Note: After a few seconds, the stack_status should change from CREATE_IN_PROGRESS to CREATE_COMPLETE.
-
-Verify instance creation
-~~~~~~~~~~~~~~~~~~~~~~~~
-Because the software takes some time to install from the repository, it may be a few minutes before the Wordpress instance is in a running state.
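
One way to wait from the command line is to poll the URL until the web server responds. A sketch, assuming ``WebsiteURL`` holds the Output value from the stack (``wait_for_url`` is illustrative, not part of Heat):

```shell
# Poll a URL until it answers, with a retry limit; returns non-zero on timeout.
wait_for_url() {
    url=$1
    retries=${2:-60}
    until curl -sf -o /dev/null "$url"; do
        retries=$((retries - 1))
        [ "$retries" -gt 0 ] || return 1
        sleep 5
    done
}
# e.g.: wait_for_url "${WebsiteURL}" && echo "Wordpress is up"
```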
-
-Point a web browser at the location given by the WebsiteURL Output as shown by ``heat stack-show teststack``, or fetch it from the command line::
-
- wget ${WebsiteURL}
-
-Delete the instance when done
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-::
-
-    heat stack-delete teststack
-    heat stack-list
-
-Note: After deletion, the list operation will show no running stacks.
+++ /dev/null
-..
- Licensed under the Apache License, Version 2.0 (the "License"); you may
- not use this file except in compliance with the License. You may obtain
- a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
- WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
- License for the specific language governing permissions and limitations
- under the License.
-
-Getting Started With Heat on Fedora
-===================================
-
-..
- This file is a ReStructuredText document, but can be converted to a script
- using the accompanying rst2script.sed script. Any blocks that are indented by
- 4 spaces (including comment blocks) will appear in the script. To document
- code that should not appear in the script, use an indent of less than 4
- spaces. (Using a Quoted instead of Indented Literal block also works.)
- To include code in the script that should not appear in the output, make it
- a comment block.
-
-..
- #!/bin/bash
-
- # Exit on error
- set -e
-
-Get Heat
---------
-
-Clone the heat repository_ from GitHub at ``git://github.com/openstack/heat.git``. Note that OpenStack must be installed before Heat.
-Alternatively, Heat may be installed via RPM; build instructions are in the README of the heat-rpms_ repository at ``git://github.com/heat-api/heat-rpms.git``.
-
-.. _repository: https://github.com/openstack/heat
-.. _heat-rpms: https://github.com/heat-api/heat-rpms
-
-Install OpenStack
------------------
-
-Installing OpenStack on Fedora 17/18
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Note:
- - On Fedora 17, using the `Preview Repository`_ to install the OpenStack Folsom release is recommended
- - On Fedora 18, you can use the included OpenStack Folsom release or the Grizzly `Preview Repository`_
-
-A script called "``openstack``" in the tools directory of the repository will install and start OpenStack for you on Fedora::
-
- ./tools/openstack install -y -r ${MYSQL_ROOT_PASSWORD}
-
-If you use this method, you will need to manually create a guest network. How this is done depends on your environment. An example network create operation:
-
-..
- SUBNET=10.0.0.0/24
-
-::
-
- sudo nova-manage network create demonet ${SUBNET} 1 256 --bridge=demonetbr0
-
-Where ``${SUBNET}`` is of the form ``10.0.0.0/24``. The network range here must *not* be one already used on your existing physical network; it should be a range dedicated to the network that OpenStack will configure. If ``10.0.0.0/24`` clashes with your local network, pick another subnet.
-
-Currently, the bridge is not created immediately upon running this command, but is actually added when Nova first requires it.
-
-If you wish to set up OpenStack manually on Fedora, read `Getting Started With OpenStack On Fedora`_.
-
-.. _Getting Started With OpenStack on Fedora: http://fedoraproject.org/wiki/Getting_started_with_OpenStack_on_Fedora_17
-.. _Preview Repository: http://fedoraproject.org/wiki/OpenStack#Preview_repository
-
-Download or alternatively generate a JEOS image
------------------------------------------------
-It is possible to either use an image-building tool to create an image or download a prebuilt image of a desired distribution.
-
-Download a prebuilt image and copy to libvirt images location
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-Download a prebuilt image from ``http://fedorapeople.org/groups/heat/prebuilt-jeos-images/``.
-
-Note: This example assumes F17-x86_64-cfntools qcow2 was downloaded.
-
-::
-
- sudo cp Downloads/F17-x86_64-cfntools.qcow2 /var/lib/libvirt/images
-
-Register with glance:
-
-::
-
- glance image-create --name=F17-x86_64-cfntools --disk-format=qcow2 --container-format=bare < /var/lib/libvirt/images/F17-x86_64-cfntools.qcow2
-
-Alternatively see JEOS image-building documentation
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-If you wish to create your own JEOS image from scratch, there are a number of approaches which can be used.
-
-One approach is using the Oz image-building tool, which is documented in the `jeos building documentation`_.
-
-.. _jeos building documentation: http://docs.openstack.org/developer/heat/getting_started/jeos_building.html
-
-Install and Configure Heat
-~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Install heat from source
-------------------------
-
-In the heat directory, run the install script::
-
- sudo ./install.sh
-
-If running OpenStack grizzly installed via tools/openstack, it is necessary to modify the default service user password::
-
- sudo sed -i "s/verybadpass/secrete/" /etc/heat/heat.conf
-
-Source the keystone credentials created with tools/openstack
-------------------------------------------------------------
-
-::
-
- source ~/.openstack/keystonerc
-
-Note: these credentials will be required for all future steps.
-
-Allocate Floating IP Addresses to OpenStack
--------------------------------------------
-
-If you want to use templates that depend on ``AWS::EC2::EIP`` or ``AWS::EC2::EIPAssociation`` (multi-instance stacks often do, single-instance less often but it's still possible), see the wiki page on `Configuring Floating IPs`_.
-
-.. _Configuring Floating IPs: http://wiki.openstack.org/Heat/Configuring-Floating-IPs
-
-Setup the MySQL database for Heat
----------------------------------
-
-::
-
- heat-db-setup rpm -y -r ${MYSQL_ROOT_PASSWORD}
-
-Note: the first argument is either ``rpm`` for RPM-based distros (such as Fedora) or ``deb`` for Debian-based distros (such as Ubuntu). To prompt for confirmation when e.g. installing MySQL Server, omit the ``-y`` option. Run ``heat-db-setup --help`` for detailed documentation.
-
-Register heat with keystone
----------------------------
-
-::
-
- sudo -E ./bin/heat-keystone-setup
-
-Note: The ``-E`` option to ``sudo`` preserves the environment, specifically the keystone credentials, when ``heat-keystone-setup`` is run as root. This script needs to run as root in order to read the admin password.
-
-Register a SSH key-pair with OpenStack Nova
--------------------------------------------
-
-Heat will associate this key-pair with the virtual machines it launches.
-
-::
-
- nova keypair-add --pub_key ~/.ssh/id_rsa.pub ${USER}_key
-
-
-Verify JEOS registration
-~~~~~~~~~~~~~~~~~~~~~~~~
-
-Check that there is a ``F17-x86_64-cfntools`` JEOS in glance:
-
-..
- GLANCE_INDEX=$(cat <<EOF
-
-::
-
- glance index
-
-..
- EOF
- )
- $GLANCE_INDEX | grep -q "F17-x86_64-cfntools"
-
-Update heat engine configuration file
--------------------------------------
-
-The heat engine configuration file should be updated with the address of the bridge device (demonetbr0). However, this device is not created by nova-network until the first instance is launched, so we assume that ``$BRIDGE_IP`` is ``10.0.0.1`` when ``$SUBNET`` is ``10.0.0.0/24``, as in the instructions above:
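
As a sketch, for a ``/24`` subnet the bridge address can be derived from ``$SUBNET`` with plain shell parameter expansion (the subnet value below is the example from above):

```shell
# Strip the trailing ".<octet>/<prefix>" from a /24 CIDR and append ".1"
SUBNET=10.0.0.0/24
BRIDGE_IP="${SUBNET%.*}.1"
echo "$BRIDGE_IP"    # prints 10.0.0.1
```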
-
-..
- BRIDGE_IP=`echo $SUBNET | awk -F'[./]' '{printf "%d.%d.%d.%d", $1, $2, $3, or($4, 1)}'`
-
-::
-
- sudo sed -i -e "/^\[DEFAULT\]/ a\\\nheat_metadata_server_url=http://${BRIDGE_IP}:8000/\n" /etc/heat/heat.conf
- sudo sed -i -e "/^\[DEFAULT\]/ a\\\nheat_waitcondition_server_url=http://${BRIDGE_IP}:8000/v1/waitcondition/" /etc/heat/heat.conf
- sudo sed -i -e "/^\[DEFAULT\]/ a\\\nheat_watch_server_url=http://${BRIDGE_IP}:8003/" /etc/heat/heat.conf
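
After these edits, the ``[DEFAULT]`` section of ``/etc/heat/heat.conf`` should end up containing lines of this shape (assuming a bridge address of ``10.0.0.1``):

```ini
[DEFAULT]
heat_metadata_server_url=http://10.0.0.1:8000/
heat_waitcondition_server_url=http://10.0.0.1:8000/v1/waitcondition/
heat_watch_server_url=http://10.0.0.1:8003/
```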
-
-
-Launch the Heat services
-------------------------
-
-::
-
- sudo -E bash -c 'heat-api-cfn & heat-engine &'
-
-..
- sleep 5
-
-Launch a Wordpress instance
----------------------------
-
-::
-
- heat-cfn create wordpress --template-url=https://raw.github.com/openstack/heat-templates/master/cfn/WordPress_Single_Instance.template --parameters="InstanceType=m1.xlarge;DBUsername=${USER};DBPassword=verybadpass;KeyName=${USER}_key"
-
-List stacks
------------
-
-::
-
- heat-cfn list
-
-List stack events
------------------
-
-::
-
- heat-cfn event-list wordpress
-
-Describe the ``wordpress`` stack
---------------------------------
-
-..
- HEAT_DESCRIBE=$(cat <<EOF
-
-::
-
- heat-cfn describe wordpress
-
-..
- EOF
- )
-
-After a few seconds, the ``StackStatus`` should change from ``CREATE_IN_PROGRESS`` to ``CREATE_COMPLETE``.
-
-..
- # Wait for Stack creation
- CREATING="<StackStatus>CREATE_IN_PROGRESS</StackStatus>"
- retries=24
- while $HEAT_DESCRIBE | grep -q $CREATING && ((retries-- > 0))
- do
- echo "Waiting for Stack creation to complete..." >&2
- sleep 5
- done
-
- $HEAT_DESCRIBE | grep -q "<StackStatus>CREATE_COMPLETE</StackStatus>"
-
-
-Verify instance creation
-------------------------
-
-Because the software takes some time to install from the repository, it may be a few minutes before the Wordpress instance is in a running state. One way to check is to log in via ssh and run ``tail -f /var/log/yum.log``. Once ``mysql-server`` installs, the instance should be ready to go.
-
-..
- WebsiteURL=$($HEAT_DESCRIBE | sed \
- -e '/<OutputKey>WebsiteURL<\/OutputKey>/,/<\/member>/ {' \
- -e '/<OutputValue>/ {' \
- -e 's/<OutputValue>\([^<]*\)<\/OutputValue>/\1/' \
- -e p \
- -e '}' -e '}' \
- -e d \
- )
- HOST=`echo $WebsiteURL | sed -r -e 's#http://([^/]+)/.*#\1#'`
-
- retries=9
- while ! ping -q -c 1 $HOST >/dev/null && ((retries-- > 0)); do
- echo "Waiting for host networking..." >&2
- sleep 2
- done
- test $retries -ge 0
-
- sleep 10
-
- retries=49
- while ! ssh -o PasswordAuthentication=no -o StrictHostKeyChecking=no \
- -q -t -l ec2-user $HOST \
- sudo grep -q mysql-server /var/log/yum.log && \
- ((retries-- > 0))
- do
- echo "Waiting for package installation..." >&2
- sleep 5
- done
- test $retries -ge 0
-
- echo "Pausing to wait for application startup..." >&2
- sleep 60
-
-Point a web browser at the location given by the ``WebsiteURL`` Output as shown by ``heat-cfn describe``::
-
- wget ${WebsiteURL}
-
-Delete the instance when done
------------------------------
-
-::
-
- heat-cfn delete wordpress
- heat-cfn list
-
-Note: This operation will show no running stack.
-
-Other Templates
----------------
-Check out the ``Wordpress_2_Instances_with_EBS_EIP.template``. This uses a few different APIs in OpenStack nova, such as the Volume API, the Floating IP API and the Security Groups API, as well as the general nova launching and monitoring APIs.
-
-IPtables rules
---------------
-
-Some templates require the instances to be able to connect to the heat CFN API (for metadata update via cfn-hup and waitcondition notification via cfn-signal):
-
-Open up port 8000 so that the guests can communicate with the heat-api-cfn server::
-
- sudo iptables -I INPUT -p tcp --dport 8000 -j ACCEPT -i demonetbr0
-
-Open up port 8003 so that the guests can communicate with the heat-api-cloudwatch server::
-
- sudo iptables -I INPUT -p tcp --dport 8003 -j ACCEPT -i demonetbr0
-
-Note: the above rules will not persist across reboot, so you may wish to add them to ``/etc/sysconfig/iptables``.
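
For reference, the corresponding entries in ``/etc/sysconfig/iptables`` (``iptables-save`` format) would look roughly like this:

```
-A INPUT -i demonetbr0 -p tcp --dport 8000 -j ACCEPT
-A INPUT -i demonetbr0 -p tcp --dport 8003 -j ACCEPT
```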
-
-Start the Heat Cloudwatch server
---------------------------------
-
-If you wish to try any of the HA or autoscaling templates (which collect stats from instances via the CloudWatch API), it is necessary to start the heat-api-cloudwatch server::
-
- sudo -E bash -c 'heat-api-cloudwatch &'
-
-Further information on using the heat cloudwatch features is available in the Using-Cloudwatch_ wiki page.
-
-.. _Using-Cloudwatch: http://wiki.openstack.org/Heat/Using-CloudWatch
-
-Using the OpenStack Heat API
-----------------------------
-
-A CloudFormation-compatible API (heat-api-cfn) and a native OpenStack Heat API (heat-api) are provided. To use the recommended native Heat API, a Python client library is necessary: clone the python-heatclient repository_ from GitHub at ``git://github.com/openstack/python-heatclient.git``.
-
-Install python-heatclient from source
--------------------------------------
-
-In the python-heatclient directory, run the setup script::
-
- sudo ./setup.py install
-
-Note that python-heatclient may be installed on a different server than Heat itself.
-Alternatively, pip can be used to install python-heatclient, but the instructions vary by distribution; consult your distribution's documentation if you wish to install with pip.
-
-Start the OpenStack specific Heat API
--------------------------------------
-
-When using python-heatclient, the OpenStack API service provided by heat must be started::
-
- sudo bash -c 'heat-api &'
-
-List stacks
------------
-
-::
-
- heat stack-list
-
-..
- echo; echo 'Success!'
+++ /dev/null
-..
- Licensed under the Apache License, Version 2.0 (the "License"); you may
- not use this file except in compliance with the License. You may obtain
- a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
- WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
- License for the specific language governing permissions and limitations
- under the License.
-
-Installing OpenStack on other Distributions
-===========================================
-
-- There is a `Debian packaging team for OpenStack`_.
-- There are instructions for `installing OpenStack on Ubuntu`_.
-- Various other distributions may have packaging teams or Getting Started guides available.
-
-.. _Debian packaging team for OpenStack: http://wiki.openstack.org/Packaging/Debian
-.. _installing OpenStack on Ubuntu: http://docs.openstack.org/bexar/openstack-compute/admin/content/ch03s02.html
+++ /dev/null
-..
- Licensed under the Apache License, Version 2.0 (the "License"); you may
- not use this file except in compliance with the License. You may obtain
- a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
- WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
- License for the specific language governing permissions and limitations
- under the License.
-
-Getting Started With Heat on Ubuntu
-===================================
-
-This guide will help you get the current git master of Heat running on Ubuntu. It makes the following assumptions:
-
-- The host is running Ubuntu 12.04 or 12.10
-- There is a working OpenStack installation based on Folsom, Grizzly or Havana, or that one will be installed via the tools/openstack_ubuntu script described below
-- Heat will be installed on the controller host of the existing OpenStack installation (or if doing a single-host evaluation, on the same host as all other OpenStack services)
-
-Get Heat
---------
-
-Clone the heat repository_ from GitHub at ``git://github.com/openstack/heat.git``. Note that OpenStack must be installed before Heat.
-
-.. _repository: https://github.com/openstack/heat
-
-Install OpenStack
------------------
-
-Note, this section may be skipped if you already have a working OpenStack installation
-
-Installing OpenStack on Ubuntu 12.04/12.10
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-A script called openstack_ubuntu in the tools directory of the Heat repository will install and start OpenStack for you on Ubuntu:
-*Note: currently only tested on 12.04; if it works for you on 12.10, please let us know.*
-::
-
- ./tools/openstack_ubuntu install -r ${MYSQL_ROOT_PASSWORD}
-
-If you use this method, you will need to manually create a guest network. How this is done depends on your environment. An example network create operation:
-
-..
- SUBNET=10.0.0.0/24
-
-::
-
- sudo nova-manage network create --label=demonet --fixed_range_v4=${SUBNET} --bridge=demonetbr0 --bridge_interface=eth0
-
-Where ``${SUBNET}`` is of the form ``10.0.0.0/24``. The network range here must *not* be one already in use on your existing physical network; it should be a range dedicated to the network that OpenStack will configure. So if ``10.0.0.0/24`` clashes with your local network, pick another subnet.
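-
-A quick way to confirm that a candidate guest subnet does not overlap your local
-network is Python's standard ``ipaddress`` module (the local range below is just
-an example; substitute your actual physical network):

```python
import ipaddress

# Hypothetical local LAN range -- replace with your actual physical network.
guest = ipaddress.ip_network("10.0.0.0/24")
local = ipaddress.ip_network("192.168.1.0/24")

# If this prints True, the ranges clash: pick another guest subnet.
print(guest.overlaps(local))
```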
-
-The example above assumes you want to bridge with the physical device ``eth0``.
-
-Currently, the bridge is not created immediately upon running this command, but is actually added when Nova first requires it.
-
-Load keystone authentication into your environment and verify everything is ok.
--------------------------------------------------------------------------------
-
-::
-
- . ~/.openstack/keystonerc
- keystone user-list
- glance index
- nova list
-
-Note: ``~/.openstack/keystonerc`` is created by tools/openstack_ubuntu. If OpenStack was installed by some other method, replace this step with your own credentials file for an admin user.
-
-Install prerequisites
----------------------
-
-::
-
- sudo apt-get install python-pip gcc python2.7-dev
- sudo apt-get install git
- sudo apt-get install build-essential devscripts debhelper python-all gdebi-core
- sudo apt-get install python-setuptools python-prettytable python-lxml
- sudo apt-get install libguestfs*
-
-Install python-heatclient (optional)
-------------------------------------
-*NOTE* If running 12.04 LTS with the packaged OpenStack Essex release, do not install python-heatclient, as it will break your OpenStack installation. It explicitly requires a version of the prettytable library (>0.6) that causes problems with the Essex CLI tools (keystone/nova/glance) on 12.04: https://bugs.launchpad.net/keystone/+bug/995976 The packaged python-prettytable (version 0.5) works OK.
-
-::
-
- sudo pip install python-heatclient
-
-Install Heat from master
-------------------------
-
-::
-
- git clone git://github.com/openstack/heat.git
- cd heat
- sudo ./install.sh
-
-Modify configuration for admin password
----------------------------------------
-Later a keystone user called ``heat`` will be created. At this point a password for that user needs to be chosen.
-The following files will need editing:
-
-- /etc/heat/heat.conf
-
-::
-
- [keystone_authtoken]
- admin_password=<heat admin password>
-
-
-Create the MySQL Heat database:
--------------------------------
-::
-
- sudo heat-db-setup deb -r <mysql password>
-
-Create the keystone authentication parameters
----------------------------------------------
-::
-
- sudo -E ./bin/heat-keystone-setup
-
-Download or alternatively generate a JEOS image
-------------------------------------------------
-
-It is possible to either use an image-building tool to create an image or download a prebuilt image of a desired distribution.
-
-Download a prebuilt image and copy to libvirt images location
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-Download a prebuilt image from ``http://fedorapeople.org/groups/heat/prebuilt-jeos-images/``.
-
-Note: This example assumes the U10-x86_64-cfntools.qcow2 image was downloaded.
-
-::
-
- sudo cp Downloads/U10-x86_64-cfntools.qcow2 /var/lib/libvirt/images
-
-Register with glance:
-
-::
-
- glance image-create --name=U10-x86_64-cfntools --disk-format=qcow2 --container-format=bare < /var/lib/libvirt/images/U10-x86_64-cfntools.qcow2
-
-Alternatively see JEOS image-building documentation
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-If you wish to create your own JEOS image from scratch, there are a number of approaches which can be used.
-
-One approach is using the Oz image-building tool, which is documented in the `jeos building documentation`_.
-
-.. _jeos building documentation: http://docs.openstack.org/developer/heat/getting_started/jeos_building.html
-
-Configure your host to work with Heat
--------------------------------------
-
-Create SSH key and add it to the Nova sshkey list
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-::
-
- ssh-keygen -t rsa
- nova keypair-add --pub_key ~/.ssh/id_rsa.pub ${USER}_key
-
-Note: If running in a VM, modify /etc/libvirt/qemu/networks/default.xml so that
-the network does not conflict with the host network (default 192.168.122.x), then restart libvirt:
-::
-
- sudo service libvirt-bin restart
-
-If dnsmasq is not running on the default network
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-::
-
- sudo virsh net-destroy default
- sudo virsh net-start default
-
-Experiment with Heat
---------------------
-
-Execute the heat api services
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-::
-
- sudo heat-engine &
- sudo heat-api &
- sudo heat-api-cfn &
- sudo heat-api-cloudwatch &
-
-Run the debian wordpress example
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-::
-
- heat stack-create wordpress --template-url=https://raw.github.com/openstack/heat-templates/master/cfn/WordPress_Single_Instance_deb.template --parameters="InstanceType=m1.xlarge;DBUsername=${USER};DBPassword=verybadpassword;KeyName=${USER}_key;LinuxDistribution=U10"
-
-List stacks
-~~~~~~~~~~~
-::
-
- heat stack-list
-
-List stack events
-~~~~~~~~~~~~~~~~~
-::
-
- heat event-list wordpress
-
-Describe the wordpress stack
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-::
-
- heat stack-show wordpress
-
-Note: After a few seconds, the Status should change from IN_PROGRESS to CREATE_COMPLETE.
-
-Verify instance creation
-~~~~~~~~~~~~~~~~~~~~~~~~
-Because the software takes some time to install from the repository, it may be a few minutes before the WordPress instance is in a running state.
-
-Point a web browser at the location given by the WebsiteURL output, as shown by ``heat stack-show wordpress``::
-
- wget ${WebsiteURL}
-
-Delete the instance when done
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-::
-
- heat stack-delete wordpress
- heat stack-list
-
-Note: ``heat stack-list`` should now show no running stacks.
+++ /dev/null
-..
- Licensed under the Apache License, Version 2.0 (the "License"); you may
- not use this file except in compliance with the License. You may obtain
- a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
- WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
- License for the specific language governing permissions and limitations
- under the License.
-
-==========
- Glossary
-==========
-
-.. glossary::
-
- OpenStack
- Open source software for building private and public clouds.
-
- API server
- HTTP REST API service for heat.
-
- orchestrate
- Arrange or direct the elements of (a situation) to produce a desired effect.
+++ /dev/null
-..
- Licensed under the Apache License, Version 2.0 (the "License"); you may
- not use this file except in compliance with the License. You may obtain
- a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
- WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
- License for the specific language governing permissions and limitations
- under the License.
-
-==================================================
-Welcome to the Heat developer documentation!
-==================================================
-Heat is a service to :term:`orchestrate` multiple composite cloud
-applications using the AWS CloudFormation template format, through
-both an OpenStack-native ReST API and a CloudFormation-compatible Query API.
-
-What is the purpose of the project and vision for it?
-=====================================================
-
-* Heat provides a template based orchestration for describing a cloud application by executing appropriate :term:`OpenStack` API calls to generate running cloud applications.
-* The software integrates other core components of OpenStack into a one-file template system. The templates allow creation of most OpenStack resource types (such as instances, floating IPs, volumes, security groups and users), as well as more advanced functionality such as instance high availability, instance autoscaling, and nested stacks. By providing tight integration with the other OpenStack core projects, all of the core projects could receive a larger user base.
-* Allow deployers to integrate with Heat directly or by adding custom plugins.
-
-This documentation offers information on how heat works and how to contribute to the project.
-
-Getting Started
-===============
-
-.. toctree::
- :maxdepth: 1
-
- getting_started/index
- templates/index
- template_guide/index
- glossary
-
-Man Pages
-=========
-
-.. toctree::
- :maxdepth: 2
-
- man/index
-
-Developers Documentation
-========================
-.. toctree::
- :maxdepth: 1
-
- architecture
-
-Code Documentation
-==================
-.. toctree::
- :maxdepth: 3
-
- sourcecode/autoindex
-
-Indices and tables
-==================
-
-* :ref:`genindex`
-* :ref:`modindex`
-* :ref:`search`
+++ /dev/null
-============
-heat-api-cfn
-============
-
-.. program:: heat-api-cfn
-
-SYNOPSIS
-========
-``heat-api-cfn [options]``
-
-DESCRIPTION
-===========
-heat-api-cfn is a CloudFormation-compatible API service for the heat project.
-
-INVENTORY
-=========
-heat-api-cfn is a service that exposes an external REST-based API to the
-heat-engine service. The communication between heat-api-cfn and
-heat-engine uses message-queue-based RPC.
-
-OPTIONS
-=======
-.. cmdoption:: --config-file
-
- Path to a config file to use. Multiple config files can be specified, with
- values in later files taking precedence.
-
-
-.. cmdoption:: --config-dir
-
- Path to a config directory to pull .conf files from. This file set is
- sorted, so as to provide a predictable parse order if individual options are
- overridden. The set is parsed after the file(s), if any, specified via
- --config-file, hence overridden options in the directory take precedence.
-
-FILES
-========
-
-* /etc/heat/heat.conf
+++ /dev/null
-===================
-heat-api-cloudwatch
-===================
-
-.. program:: heat-api-cloudwatch
-
-SYNOPSIS
-========
-``heat-api-cloudwatch [options]``
-
-DESCRIPTION
-===========
-heat-api-cloudwatch is a CloudWatch-like API service for the heat project.
-
-OPTIONS
-=======
-.. cmdoption:: --config-file
-
- Path to a config file to use. Multiple config files can be specified, with
- values in later files taking precedence.
-
-
-.. cmdoption:: --config-dir
-
- Path to a config directory to pull .conf files from. This file set is
- sorted, so as to provide a predictable parse order if individual options are
- overridden. The set is parsed after the file(s), if any, specified via
- --config-file, hence overridden options in the directory take precedence.
-
-FILES
-========
-
-* /etc/heat/heat.conf
+++ /dev/null
-========
-heat-api
-========
-
-.. program:: heat-api
-
-SYNOPSIS
-========
-``heat-api [options]``
-
-DESCRIPTION
-===========
-heat-api provides an external REST API to the heat project.
-
-INVENTORY
-=========
-heat-api is a service that exposes an external REST-based API to the
-heat-engine service. The communication between heat-api and
-heat-engine uses message-queue-based RPC.
-
-OPTIONS
-=======
-.. cmdoption:: --config-file
-
- Path to a config file to use. Multiple config files can be specified, with
- values in later files taking precedence.
-
-
-.. cmdoption:: --config-dir
-
- Path to a config directory to pull .conf files from. This file set is
- sorted, so as to provide a predictable parse order if individual options are
- overridden. The set is parsed after the file(s), if any, specified via
- --config-file, hence overridden options in the directory take precedence.
-
-FILES
-========
-
-* /etc/heat/heat.conf
+++ /dev/null
-=============
-heat-db-setup
-=============
-
-.. program:: heat-db-setup
-
-
-SYNOPSIS
-========
-
-``heat-db-setup [COMMANDS] [OPTIONS]``
-
-
-DESCRIPTION
-===========
-heat-db-setup is a tool which configures the local MySQL database for
-heat. Typically distro-specific tools would provide this functionality
-so please read the distro-specific documentation for configuring Heat.
-
-
-COMMANDS
-========
-
-``rpm``
-
- Indicate that the distribution is an RPM-based distribution.
-
-``deb``
-
- Indicate that the distribution is a DEB-based distribution.
-
-
-OPTIONS
-=======
-
-.. cmdoption:: -h, --help
-
- Print usage information.
-
-.. cmdoption:: -p, --password
-
- Specify the password for the 'heat' MySQL user that heat will use to connect to the 'heat' MySQL database. By default, the password 'heat' will be used.
-
-.. cmdoption:: -r, --rootpw
-
- Specify the root MySQL password. If the script installs the MySQL server, it will set the root password to this value instead of prompting for a password. If the MySQL server is already installed, this password will be used to connect to the database instead of having to prompt for it.
-
-.. cmdoption:: -y, --yes
-
- In cases where the script would normally ask for confirmation before doing something, such as installing mysql-server, just assume yes. This is useful if you want to run the script non-interactively.
-
-EXAMPLES
-========
-
- heat-db-setup rpm -p heat_password -r mysql_pwd -y
-
- heat-db-setup deb -p heat_password -r mysql_pwd -y
-
- heat-db-setup rpm
-
-BUGS
-====
-Heat bugs are managed through Launchpad <https://launchpad.net/heat>
+++ /dev/null
-===========
-heat-engine
-===========
-
-.. program:: heat-engine
-
-SYNOPSIS
-========
-``heat-engine [options]``
-
-DESCRIPTION
-===========
-heat-engine is the heat project server with an internal API called by heat-api.
-
-INVENTORY
-=========
-The heat engine does all the orchestration work and is the layer in which
-the resource integration is implemented.
-
-OPTIONS
-=======
-.. cmdoption:: --config-file
-
- Path to a config file to use. Multiple config files can be specified, with
- values in later files taking precedence.
-
-
-.. cmdoption:: --config-dir
-
- Path to a config directory to pull .conf files from. This file set is
- sorted, so as to provide a predictable parse order if individual options are
- overridden. The set is parsed after the file(s), if any, specified via
- --config-file, hence overridden options in the directory take precedence.
-
-FILES
-========
-
-* /etc/heat/heat.conf
+++ /dev/null
-===================
-heat-keystone-setup
-===================
-
-.. program:: heat-keystone-setup
-
-
-SYNOPSIS
-========
-
-``heat-keystone-setup``
-
-
-DESCRIPTION
-===========
-The heat-keystone-setup tool configures keystone for use with Heat. This script requires admin keystone credentials to be available in the shell environment and write access to /etc/keystone.
-
-Distributions may provide other tools to set up keystone for use with Heat, so check the distro documentation first.
-
-EXAMPLES
-========
-
- heat-keystone-setup
-
-BUGS
-====
-Heat bugs are managed through Launchpad <https://launchpad.net/heat>
+++ /dev/null
-====================================
-Man pages for services and utilities
-====================================
-
--------------
-Heat services
--------------
-
-.. toctree::
- :maxdepth: 2
-
- heat-engine
- heat-api
- heat-api-cfn
- heat-api-cloudwatch
-
---------------
-Heat utilities
---------------
-
-.. toctree::
- :maxdepth: 2
-
- heat-db-setup
- heat-keystone-setup
+++ /dev/null
-..
- Licensed under the Apache License, Version 2.0 (the "License"); you may
- not use this file except in compliance with the License. You may obtain
- a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
- WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
- License for the specific language governing permissions and limitations
- under the License.
-
-CloudFormation Compatible Resource Types
-----------------------------------------
-.. resourcepages:: AWS::
\ No newline at end of file
+++ /dev/null
-..
- Licensed under the Apache License, Version 2.0 (the "License"); you may
- not use this file except in compliance with the License. You may obtain
- a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
- WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
- License for the specific language governing permissions and limitations
- under the License.
-
-============
-Environments
-============
-
-The environment is used to affect the runtime behaviour of the
-template. It provides a way to override the default resource
-implementation and the parameters passed to Heat.
-
-------
-Format
-------
-It is a YAML text file with two main sections: "resource_registry" and "parameters".
-
-------------------
-Command line usage
-------------------
-::
-
- heat stack-create -e my_env.yaml -P "some_parm=bla" -f my_tmpl.yaml
-
---------------
-Usage examples
---------------
-
-1) Pass parameters into Heat
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-::
-
- parameters:
- KeyName: heat_key
- InstanceType: m1.micro
- ImageId: F18-x86_64-cfntools
-
-
-2) Deal with the renaming of Quantum to Neutron
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-::
-
- resource_registry:
- "OS::Quantum*": "OS::Neutron*"
-
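-The wildcard mapping above can be pictured with a small Python sketch (an
-illustration of the idea, not Heat's actual registry code):

```python
# Sketch: apply a trailing-wildcard mapping such as "OS::Quantum*" -> "OS::Neutron*"
def map_type(registry, resource_type):
    for pattern, target in registry.items():
        if pattern.endswith("*") and resource_type.startswith(pattern[:-1]):
            # Replace the matched prefix with the target prefix.
            return target[:-1] + resource_type[len(pattern) - 1:]
        if pattern == resource_type:
            return target
    return resource_type

registry = {"OS::Quantum*": "OS::Neutron*"}
print(map_type(registry, "OS::Quantum::FloatingIP"))  # OS::Neutron::FloatingIP
```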
-
-3) Override a resource type with a custom TemplateResource
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-::
-
- resource_registry:
- "AWS::EC2::Instance": file:///home/mine/my_instance_with_better_defaults.yaml
-
-
-4) Always map resource type X to Y
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-::
-
- resource_registry:
- "OS::Networking::FloatingIP": "OS::Nova::FloatingIP"
-
-
-5) Use the default resources, but override one for a particular resource in the template
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-::
-
- resource_registry:
- resources:
- my_db_server:
- "OS::DBInstance": file:///home/mine/all_my_cool_templates/db.yaml
+++ /dev/null
-..
- Licensed under the Apache License, Version 2.0 (the "License"); you may
- not use this file except in compliance with the License. You may obtain
- a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
- WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
- License for the specific language governing permissions and limitations
- under the License.
-
-==================
-Built in functions
-==================
-
-There are a number of functions that you can use to help you write templates.
-
-All of these functions (except *Ref*) start with *Fn::*.
-
----
-Ref
----
-Return the value of the named parameter or Resource.
-
-Parameters
-~~~~~~~~~~
-name : String
- The name of the Resource or Parameter.
-
-Usage
-~~~~~
-::
-
- {Ref: my_server}
-
-Returns ``instance-0003``
-
-----------
-Fn::Base64
-----------
-This returns the Base64 representation of the input string.
-
-Parameters
-~~~~~~~~~~
-value : String
- The string to convert.
-
-Usage
-~~~~~
-
-::
-
- {"Fn::Base64": "convert this string please."}
-
-Returns the Base64 of the input string.
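-
-The effect can be reproduced with Python's standard ``base64`` module (an
-illustration, not Heat's implementation):

```python
import base64

# Sketch of what Fn::Base64 produces for a UTF-8 input string.
def fn_base64(value):
    return base64.b64encode(value.encode("utf-8")).decode("ascii")

print(fn_base64("convert this string please."))
```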
-
--------------
-Fn::FindInMap
--------------
-Returns the value corresponding to keys into a two-level map declared in the
-Mappings section.
-
-Parameters
-~~~~~~~~~~
-map_name : String
- The logical name of a mapping declared in the Mappings section that
- contains the keys and values.
-
-top_level_key : String
- The top-level key name. Its value is a list of key-value pairs.
-
-second_level_key : String
- The second-level key name, which is set to one of the keys from the list
- assigned to top_level_key.
-
-Usage
-~~~~~
-
-::
-
- Mappings:
- MyContacts:
- jone: {phone: 337, email: a@b.com}
- jim: {phone: 908, email: g@b.com}
-
- {"Fn::FindInMap": ["MyContacts", "jim", "phone" ] }
-
-Returns ``908``
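-
-The lookup is a plain two-level dictionary access, as this Python sketch shows
-(an analogy, not Heat's implementation):

```python
# The Mappings section as a nested dict, mirroring the example above.
mappings = {
    "MyContacts": {
        "jone": {"phone": 337, "email": "a@b.com"},
        "jim": {"phone": 908, "email": "g@b.com"},
    }
}

# Sketch of Fn::FindInMap as a two-level lookup.
def find_in_map(map_name, top_level_key, second_level_key):
    return mappings[map_name][top_level_key][second_level_key]

print(find_in_map("MyContacts", "jim", "phone"))  # 908
```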
-
-----------
-Fn::GetAtt
-----------
-Returns an attribute of a Resource within the template.
-
-Parameters
-~~~~~~~~~~
-resource : String
- The name of the Resource.
-
-attribute : String
- The name of the attribute.
-
-Usage
-~~~~~
-
-::
-
- {Fn::GetAtt: [my_server, PublicIp]}
-
-Returns an IP address such as ``10.0.0.2``
-
-----------
-Fn::GetAZs
-----------
-Return the Availability Zones within the given region.
-
-*Note: AZs and regions are not fully implemented in Heat.*
-
-Parameters
-~~~~~~~~~~
-region : String
- The name of the region.
-
-Usage
-~~~~~
-::
-
- {Fn::GetAZs: ""}
-
-Returns the list provided by ``nova availability-zone-list``
-
---------
-Fn::Join
---------
-Like Python's ``str.join``, it joins a list of strings with the given delimiter.
-
-Parameters
-~~~~~~~~~~
-delimiter : String
- The string to join the list with.
-
-list : list
- The list to join.
-
-Usage
-~~~~~
-
-::
-
- {Fn::Join: [",", ["beer", "wine", "more beer"]]}
-
-Returns ``beer,wine,more beer``
-
-----------
-Fn::Select
-----------
-Select an item from a list.
-
-*Heat extension: Select an item from a map*
-
-Parameters
-~~~~~~~~~~
-selector : string or integer
- The index of the item in the list or the key of the item in the map.
-
-collection : map or list
- The collection to select the item from.
-
-Usage
-~~~~~
-
-For a list lookup:
-::
-
- { "Fn::Select" : [ "2", [ "apples", "grapes", "mangoes" ] ] }
-
-Returns ``mangoes``
-
-For a map lookup:
-::
-
- { "Fn::Select" : [ "red", {"red": "a", "flu": "b"} ] }
-
-Returns ``a``
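-
-Both behaviours can be sketched in a few lines of Python (an analogy, not
-Heat's code):

```python
# Sketch of Fn::Select: index into a list, or key into a map.
def fn_select(selector, collection):
    if isinstance(collection, list):
        return collection[int(selector)]
    return collection[selector]

print(fn_select("2", ["apples", "grapes", "mangoes"]))  # mangoes
print(fn_select("red", {"red": "a", "flu": "b"}))       # a
```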
-
----------
-Fn::Split
----------
-This is the reverse of Join. Convert a string into a list based on the
-delimiter.
-
-Parameters
-~~~~~~~~~~
-delimiter : string
- Matching string to split on.
-
-string : String
- The string to split.
-
-Usage
-~~~~~
-::
-
- { "Fn::Split" : [ ",", "str1,str2,str3,str4"]}
-
-Returns ``["str1", "str2", "str3", "str4"]``
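-
-The inverse relationship between Join and Split maps directly onto Python's
-string methods (a sketch, not Heat's implementation):

```python
# Sketch: Fn::Join and Fn::Split as inverses of one another.
def fn_join(delimiter, items):
    return delimiter.join(items)

def fn_split(delimiter, string):
    return string.split(delimiter)

parts = ["str1", "str2", "str3", "str4"]
assert fn_split(",", fn_join(",", parts)) == parts
```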
-
------------
-Fn::Replace
------------
-Find and replace one string with another.
-
-Parameters
-~~~~~~~~~~
-substitutions : map
- A map of substitutions.
-string: String
- The string to do the substitutions in.
-
-Usage
-~~~~~
-::
-
- {"Fn::Replace": [
- {'$var1': 'foo', '%var2%': 'bar'},
- '$var1 is %var2%'
- ]}
- returns
- "foo is bar"
-
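-The substitution behaviour can be sketched with sequential string replacement
-in Python (an illustration; real Heat may differ in edge cases such as
-overlapping keys):

```python
# Sketch of Fn::Replace: apply every substitution to the string.
def fn_replace(substitutions, string):
    for key, value in substitutions.items():
        string = string.replace(key, value)
    return string

print(fn_replace({"$var1": "foo", "%var2%": "bar"}, "$var1 is %var2%"))  # foo is bar
```
-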
-------------------
-Fn::ResourceFacade
-------------------
-When writing a Template Resource:
- - the user writes a template that will fill in for a resource (the resource is the facade);
- - while writing that template, they need access to the metadata from
- the facade.
-
-
-Parameters
-~~~~~~~~~~
-attribute_name : String
- One of ``Metadata``, ``DeletionPolicy`` or ``UpdatePolicy``.
-
-Usage
-~~~~~
-
-::
-
- {'Fn::ResourceFacade': 'Metadata'}
- {'Fn::ResourceFacade': 'DeletionPolicy'}
- {'Fn::ResourceFacade': 'UpdatePolicy'}
-
-
-Example
-~~~~~~~
-Here is a top level template ``top.yaml``
-
-::
-
- resources:
- my_server:
- type: OS::Compute::Server
- metadata:
- key: value
- some: more stuff
-
-
-Here is a resource template ``my_actual_server.yaml``
-::
-
- resources:
- _actual_server_:
- type: OS::Compute::Server
- metadata: {'Fn::ResourceFacade': Metadata}
-
-The environment file ``env.yaml``
-::
-
- resource_registry:
- resources:
- my_server:
- "OS::Compute::Server": my_actual_server.yaml
-
-To use it
-
-::
-
- heat stack-create -f top.yaml -e env.yaml
-
-
-What happens is that the metadata in ``top.yaml`` (key: value, some: more
-stuff) gets passed into the resource template via the `Fn::ResourceFacade`_
-function.
-
--------------------
-Fn::MemberListToMap
--------------------
-Convert an AWS style member list into a map.
-
-Parameters
-~~~~~~~~~~
-key name: string
- The name of the key (normally "Name" or "Key")
-
-value name: string
- The name of the value (normally "Value")
-
-list: A list of strings
- The list to convert.
-
-Usage
-~~~~~
-::
-
- {'Fn::MemberListToMap': ['Name', 'Value', ['.member.0.Name=key',
- '.member.0.Value=door',
- '.member.1.Name=colour',
- '.member.1.Value=green']]}
-
- returns
- {'key': 'door', 'colour': 'green'}
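-
-The conversion can be sketched in Python; pairing the ``.member.N.Name`` and
-``.member.N.Value`` entries by index is the essential step (an illustration,
-not Heat's implementation):

```python
# Sketch of Fn::MemberListToMap: pair '.member.N.<key>' with '.member.N.<value>'.
def member_list_to_map(key_name, value_name, member_list):
    keys, values = {}, {}
    for item in member_list:
        path, _, val = item.partition("=")
        _, _, index, name = path.split(".")
        if name == key_name:
            keys[index] = val
        elif name == value_name:
            values[index] = val
    return {keys[i]: values[i] for i in keys}

print(member_list_to_map("Name", "Value",
                         [".member.0.Name=key", ".member.0.Value=door",
                          ".member.1.Name=colour", ".member.1.Value=green"]))
```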
+++ /dev/null
-..
- Licensed under the Apache License, Version 2.0 (the "License"); you may
- not use this file except in compliance with the License. You may obtain
- a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
- WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
- License for the specific language governing permissions and limitations
- under the License.
-
-.. _hot_guide:
-
-=======================================
-Heat Orchestration Template (HOT) Guide
-=======================================
-
-HOT is a new template format meant to replace the Heat CloudFormation-compatible
-format (CFN) as the native format supported by Heat over time.
-This guide is targeted towards template authors and explains how to write
-HOT templates based on examples. A detailed specification of HOT can be found
-at :ref:`hot_spec`.
-
-------
-Status
-------
-
-HOT support is still under development and needs more work to provide access to
-all functionality currently available via the CFN compatible template interface.
-This guide will be updated periodically whenever new features get implemented
-for HOT.
-
-----------------------------------
-Writing a hello world HOT template
-----------------------------------
-
-This section gives an introduction on how to write HOT templates, starting from
-very basic steps and then going into more and more detail by means of examples.
-
-A most basic template
----------------------
-The most basic template you can think of may contain only a single resource
-definition using only predefined properties (along with the mandatory Heat
-template version tag). For example, the template below could be used to simply
-deploy a single compute instance.
-
-::
-
- heat_template_version: 2013-05-23
-
- description: Simple template to deploy a single compute instance
-
- resources:
- my_instance:
- type: OS::Nova::Compute
- properties:
- KeyName: my_key
- ImageId: F18-x86_64-cfntools
- InstanceType: m1.small
-
-Each HOT template has to include the *heat_template_version* key with value
-'2013-05-23' (the current version of HOT). While the *description* is optional,
-it is good practice to include some useful text that describes what users can do
-with the template. In case you want to provide a longer description that does
-not fit on a single line, you can provide multi-line text in YAML, for example:
-
-::
-
- description: >
- This is how you can provide a longer description
- of your template that goes over several lines.
-
-The *resources* section is required and must contain at least one resource
-definition. In the example above, a compute instance is defined with fixed
-values for the 'KeyName', 'ImageId' and 'InstanceType' parameters.
-
-Note that all of those elements, i.e. a key-pair with the given name, the image
-and the flavor, have to exist in the OpenStack environment where the template is
-used. Typically, though, a template is made more easily reusable by defining a
-set of *input parameters* instead of hard-coding such values.
-
-
-Template input parameters
--------------------------
-Input parameters defined in the *parameters* section of a HOT template (see also
-:ref:`hot_spec_parameters`) allow users to customize a template during
-deployment. For example, this allows for providing custom key-pair names or
-image IDs to be used for a deployment.
-From a template author's perspective, this helps to make a template more easily
-reusable by avoiding hardcoded assumptions.
-
-Sticking to the example used above, it makes sense to allow users to provide
-their custom key-pairs, provide their own image, and to select a flavor for the
-compute instance. This can be achieved by extending the initial template as
-follows:
-
-::
-
- heat_template_version: 2013-05-23
-
- description: Simple template to deploy a single compute instance
-
- parameters:
- key_name:
- type: string
- description: Name of key-pair to be used for compute instance
- image_id:
- type: string
- description: Image to be used for compute instance
- instance_type:
- type: string
- description: Type of instance (flavor) to be used
-
- resources:
- my_instance:
- type: OS::Nova::Compute
- properties:
- KeyName: { get_param: key_name }
- ImageId: { get_param: image_id }
- InstanceType: { get_param: instance_type }
-
-In the example above, three input parameters have been defined that have to be
-provided by the user upon deployment. The fixed values for the respective
-resource properties have been replaced by references to the corresponding
-input parameters by means of the *get_param* function (see also
-:ref:`hot_spec_intrinsic_functions`).
-
-You can also define default values for input parameters which will be used in
-case the user does not provide the respective parameter during deployment. For
-example, the following definition for the *instance_type* parameter would select
-the 'm1.small' flavor unless specified otherwise by the user.
-
-::
-
- parameters:
- instance_type:
- type: string
- description: Type of instance (flavor) to be used
- default: m1.small
-
-Another option that can be specified for a parameter is to hide its value when
-users request information about a stack deployed from a template. This is
-achieved by the *hidden* attribute and is useful, for example, when requesting
-passwords as user input:
-
-::
-
- parameters:
- database_password:
- type: string
- description: Password to be used for database
- hidden: true
-
-
-Restricting user input
-~~~~~~~~~~~~~~~~~~~~~~
-In some cases you might want to restrict the values of input parameters that
-users can supply. For example, you might know that the software running in a
-compute instance needs a certain amount of resources so you might want to
-restrict the *instance_type* parameter introduced above. Parameters in HOT
-templates can be restricted by adding a *constraints* section (see also
-:ref:`hot_spec_parameters_constraints`).
-For example, the following would allow only three values to be provided as input
-for the *instance_type* parameter:
-
-::
-
- parameters:
- instance_type:
- type: string
- description: Type of instance (flavor) to be used
- constraints:
- - allowed_values: [ m1.medium, m1.large, m1.xlarge ]
- description: Value must be one of m1.medium, m1.large or m1.xlarge.
-
-The *constraints* section allows for defining a list of constraints that must
-all be fulfilled by user input. For example, the following list of constraints
-could be used to clearly specify format requirements on a password to be
-provided by users:
-
-::
-
- parameters:
-   database_password:
-     type: string
-     description: Password to be used for database
-     hidden: true
-     constraints:
-       - length: { min: 6, max: 8 }
-         description: Password length must be between 6 and 8 characters.
-       - allowed_pattern: "[a-zA-Z0-9]+"
-         description: Password must consist of characters and numbers only.
-       - allowed_pattern: "[A-Z]+[a-zA-Z0-9]*"
-         description: Password must start with an uppercase character.
-
-Note that you can define multiple constraints of the same type. Especially in
-the case of allowed patterns, this not only keeps each regular expression
-simple and maintainable, but also allows precise error messages to be
-presented to users.
-
-
-Providing template outputs
---------------------------
-In addition to template customization through input parameters, you will
-typically want to provide outputs to users, which can be done in the
-*outputs* section of a template (see also :ref:`hot_spec_outputs`).
-For example, the IP address of the instance defined in the example above
-should be provided to users, who would otherwise have to look it up
-themselves. The definition for providing the IP address of the
-compute instance as an output is shown in the following snippet:
-
-::
-
- outputs:
-   instance_ip:
-     description: The IP address of the deployed instance
-     value: { get_attr: [my_instance, PublicIp] }
-
-Output values are typically resolved using intrinsic functions such as
-the *get_attr* function in the example above (see also
-:ref:`hot_spec_intrinsic_functions`).
+++ /dev/null
-..
- Licensed under the Apache License, Version 2.0 (the "License"); you may
- not use this file except in compliance with the License. You may obtain
- a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
- WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
- License for the specific language governing permissions and limitations
- under the License.
-
-.. _hot_spec:
-
-===============================================
-Heat Orchestration Template (HOT) Specification
-===============================================
-
-HOT is a new template format meant to replace the Heat CloudFormation-compatible
-format (CFN) as the native format supported by Heat over time.
-This specification explains in detail all elements of the HOT template format.
-An example driven guide to writing HOT templates can be found
-at :ref:`hot_guide`.
-
-------
-Status
-------
-
-HOT support is still under development and needs more work to provide access to
-all functionality currently available via the CFN compatible template interface.
-This specification will be updated periodically whenever new features get
-implemented for HOT.
-
-------------------
-Template Structure
-------------------
-
-HOT templates are defined in YAML and follow the structure outlined below.
-
-::
-
- heat_template_version: 2013-05-23
-
- description: # a description of the template
-
- parameters:
-   # declaration of input parameters
-
- resources:
-   # declaration of template resources
-
- outputs:
-   # declaration of output parameters
-
-heat_template_version
- This key with value *2013-05-23* (or a later date) indicates that the YAML
- document is a HOT template of the specified version.
-
-description
- This *optional* key allows for giving a description of the template, or the
- workload that can be deployed using the template.
-
-parameters
- This section allows for specifying input parameters that have to be provided
- when instantiating the template. The section is *optional* and can be
- omitted when no input is required.
-
-resources
- This section contains the declarations of the individual resources of the
- template. This section is mandatory and at least one resource must be
- defined in any HOT template.
-
-outputs
- This section allows for specifying output parameters available to users once
- the template has been instantiated. This section is *optional* and can be
- omitted when no output values are required.
-
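-Putting these sections together, the skeleton above can be filled in to form a
-minimal complete HOT template. The following sketch is assembled from the
-examples used throughout this specification (the image name is illustrative):
-
-::
-
- heat_template_version: 2013-05-23
-
- description: Minimal template deploying a single compute instance.
-
- parameters:
-   instance_type:
-     type: string
-     default: m1.small
-
- resources:
-   my_instance:
-     type: OS::Nova::Compute
-     properties:
-       instance_type: { get_param: instance_type }
-       image_id: F18-x86_64-cfntools
-
- outputs:
-   instance_ip:
-     value: { get_attr: [my_instance, PublicIp] }
-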
-
-.. _hot_spec_parameters:
-
-------------------
-Parameters Section
-------------------
-
-The *parameters* section allows for specifying input parameters that have to be
-provided when instantiating the template. Such parameters are typically used to
-customize each deployment (e.g. by setting custom user names or passwords) or
-for binding to environment-specifics like certain images.
-
-Each parameter is specified in a separate nested block, with the name of the
-parameter defined in the first line and additional attributes such as type or
-default value defined as nested elements.
-
-::
-
- parameters:
-   <param name>:
-     type: <string | number | json | comma_delimited_list>
-     description: <description of the parameter>
-     default: <default value for parameter>
-     hidden: <true | false>
-     constraints:
-       <parameter constraints>
-
-param name
- The name of the parameter is defined at the top of each parameter block.
-
-type
- This attribute specifies the type of parameter. Currently supported types
- are *string*, *number*, *comma_delimited_list* or *json*.
-
-description
- This *optional* attribute allows for giving a human readable description of
- the parameter.
-
-default
- This *optional* attribute allows for defining a default value for the
- parameter, which will be used in case the parameter is not specified by the
- user during deployment.
-
-hidden
- This *optional* attribute allows for specifying whether the parameter
- should be hidden when showing information about a stack created from the
- template at runtime (e.g. for hiding passwords that were specified as
- parameters). If not specified, the default value 'false' will be used.
-
-constraints
- This *optional* block allows for specifying additional constraints on the
- parameter, such as minimum or maximum values for numeric parameters.
-
-The following example shows a minimalistic definition of two parameters. Note
-that the description is actually optional, but it is good practice to provide a
-useful description for each parameter.
-
-::
-
- parameters:
-   user_name:
-     type: string
-     description: User name to be configured for the application
-   port_number:
-     type: number
-     description: Port number to be configured for the web server
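-
-Parameters of the other two types are declared analogously. The following
-sketch (the parameter names are illustrative) declares a
-*comma_delimited_list* and a *json* parameter:
-
-::
-
- parameters:
-   server_names:
-     type: comma_delimited_list
-     description: Comma separated list of names for the servers
-   server_metadata:
-     type: json
-     description: Arbitrary key/value metadata for the servers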
-
-
-.. _hot_spec_parameters_constraints:
-
-Parameter Constraints
----------------------
-
-The *constraints* block of a parameter definition allows for defining additional
-validation constraints that apply to the value of the parameter. At
-instantiation time of the template, user provided parameter values are validated
-against those constraints to make sure the provided values match expectations of
-the template author.
-Constraints are defined in the form of a bulleted list according to the
-following syntax:
-
-::
-
- constraints:
-   - <constraint type>: <constraint definition>
-     description: <constraint description>
-
-constraint type
- The constraint type specifies the kind of constraint defined in the current
- bulleted list item. The set of currently supported constraints is given
- below.
-
-constraint definition
- This value defines the actual constraint, depending on the constraint type.
- The concrete syntax for each constraint type is given below.
-
-description
- This *optional* attribute allows for specifying a concrete description of
- the current constraint. This text will be presented to the user, for
- example, when the provided input value for a parameter violates the
- constraint. If omitted, a default validation message will be presented to
- the user.
-
-The following example shows the definition of a string parameter with two
-constraints. Note that while the descriptions for each constraint are optional,
-it is good practice to provide concrete descriptions so useful messages can be
-presented to the user at deployment time.
-
-::
-
- parameters:
-   user_name:
-     type: string
-     description: User name to be configured for the application
-     constraints:
-       - length: { min: 6, max: 8 }
-         description: User name must be between 6 and 8 characters
-       - allowed_pattern: "[A-Z]+[a-zA-Z0-9]*"
-         description: User name must start with an uppercase character
-
-The following sections list the supported types of parameter constraints, along
-with the concrete syntax for each type.
-
-length
-~~~~~~
-The *length* constraint applies to parameters of type *string* and allows for
-defining a lower and upper limit for the length of the string value. The syntax
-for the length constraint is:
-
-::
-
- length: { min: <lower limit>, max: <upper limit> }
-
-It is possible to define a length constraint with only a lower limit or an
-upper limit. However, at least one of *min* or *max* must be specified.
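-
-For example, a length constraint that only enforces a lower limit:
-
-::
-
- length: { min: 6 }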
-
-range
-~~~~~
-The *range* constraint applies to parameters of type *number* and allows for
-defining a lower and upper limit for the numeric value of the parameter. The
-syntax of the range constraint is:
-
-::
-
- range: { min: <lower limit>, max: <upper limit> }
-
-It is possible to define a range constraint with only a lower limit or an
-upper limit. However, at least one of *min* or *max* must be specified.
-The minimum or maximum boundaries are included in the range. For example, the
-following range constraint would allow for all numeric values between 0 and 10.
-
-::
-
- range: { min: 0, max: 10 }
-
-
-allowed_values
-~~~~~~~~~~~~~~
-The *allowed_values* constraint applies to parameters of type string or number
-and allows for specifying a set of possible values for a parameter. At
-deployment time, the user provided value for the respective parameter must
-match one of the elements of the specified list. The syntax of the
-allowed_values constraint is:
-
-::
-
- allowed_values: [ <value>, <value>, ... ]
-
-Alternatively, the YAML bulleted list notation can be used:
-
-::
-
- allowed_values:
-   - <value>
-   - <value>
-   - ...
-
-For example:
-
-::
-
- parameters:
-   instance_type:
-     type: string
-     description: Instance type for compute instances
-     constraints:
-       - allowed_values:
-           - m1.small
-           - m1.medium
-           - m1.large
-
-allowed_pattern
-~~~~~~~~~~~~~~~
-The *allowed_pattern* constraint applies to parameters of type string and allows
-for specifying a regular expression against which a user provided parameter
-value must be evaluated at deployment time.
-The syntax of the allowed_pattern constraint is:
-
-::
-
- allowed_pattern: <regular expression>
-
-For example:
-
-::
-
- parameters:
-   user_name:
-     type: string
-     description: User name to be configured for the application
-     constraints:
-       - allowed_pattern: "[A-Z]+[a-zA-Z0-9]*"
-         description: User name must start with an uppercase character
-
-
-.. _hot_spec_resources:
-
------------------
-Resources Section
------------------
-
-In the *resources* section, the actual resources that will make up a stack
-deployed from the HOT template (e.g. compute instances, networks, storage
-volumes) are defined.
-Each resource is defined as a separate block in the resources section according
-to the syntax below.
-
-::
-
- resources:
-   <resource ID>:
-     type: <resource type>
-     properties:
-       <property name>: <property value>
-     # more resource specific metadata
-
-resource ID
- A resource block is headed by the resource ID, which must be unique within
- the resource section of a template.
-type
- This attribute specifies the type of resource, such as OS::Nova::Compute.
-properties
- This section contains a list of resource specific properties. The property
- value can be provided in place, or can be provided via a function
- (see :ref:`hot_spec_intrinsic_functions`).
-
-Depending on the type of resource, the resource block might include more
-resource specific metadata. Basically all resource types that can be used in
-CFN templates can also be used in HOT templates, adapted to the YAML structure
-as outlined above.
-Below is an example of a simple compute resource definition with some fixed
-property values.
-
-::
-
- resources:
-   my_instance:
-     type: OS::Nova::Compute
-     properties:
-       instance_type: m1.small
-       image_id: F18-x86_64-cfntools
-
-
-.. _hot_spec_outputs:
-
----------------
-Outputs Section
----------------
-
-In the *outputs* section, any output parameters that should be available to the
-user can be defined. Typically, this would be, for example, parameters such as
-IP addresses of deployed instances, or URLs of web applications deployed as part
-of a stack.
-
-Output parameters are defined according to the following syntax:
-
-::
-
- outputs:
-   <parameter name>: <parameter value>
-
-parameter name
- The name of the output parameter is defined as a key in the outputs section.
-parameter value
- This element specifies the value of the output parameter. Typically, this
- will be resolved by means of a function, e.g. by getting an attribute value
- of one of the stack's resources (see also
- :ref:`hot_spec_intrinsic_functions`).
-
-The example below shows how the public IP address of a compute resource can be
-defined as an output parameter.
-
-::
-
- outputs:
-   instance_ip: { get_attr: [my_instance, PublicIp] }
-
-
-.. _hot_spec_intrinsic_functions:
-
--------------------
-Intrinsic Functions
--------------------
-HOT provides a set of intrinsic functions that can be used inside HOT templates
-to perform specific tasks, such as getting the value of a resource attribute at
-runtime. A definition of all intrinsic functions available in HOT is given
-below.
-
-get_param
----------
-The *get_param* function allows for referencing an input parameter of a template
-from anywhere within a template. At runtime, it will be resolved to the value
-provided for this input parameter. The syntax of the get_param function is as
-follows:
-
-::
-
- get_param: <parameter name>
-
-The *parameter name* of the input parameter to be resolved is given as the
-single parameter to this function. A sample use of this function in the
-context of a resource definition is shown below.
-
-::
-
- parameters:
-   instance_type:
-     type: string
-     description: Instance type to be used.
-
- resources:
-   my_instance:
-     type: OS::Nova::Compute
-     properties:
-       instance_type: { get_param: instance_type }
-
-
-get_attr
---------
-The *get_attr* function allows for referencing an attribute of a resource. At
-runtime, it will be resolved to the value of an attribute of a resource instance
-created from the respective resource definition of the template.
-The syntax of the get_attr function is as follows:
-
-::
-
- get_attr: [ <resource ID>, <attribute name> ]
-
-resource ID
- This parameter specifies the resource whose attribute is to be resolved.
- This resource must be defined within the *resources* section of
- the template (see also :ref:`hot_spec_resources`).
-attribute name
- This parameter specifies the attribute to be resolved.
-
-An example of using the get_attr function is shown below:
-
-::
-
- resources:
-   my_instance:
-     type: OS::Nova::Compute
-     # ...
-
- outputs:
-   instance_ip: { get_attr: [my_instance, PublicIp] }
-
-
-get_resource
-------------
-The *get_resource* function allows for referencing another resource within the
-same template. At runtime, it will be resolved to the reference ID of the
-which is resource type specific. For example, a reference to a floating IP
-resource will return the respective IP address at runtime.
-The syntax of the get_resource function is as follows:
-
-::
-
- get_resource: <resource ID>
-
-The *resource ID* of the referenced resource as used in the current template is
-given as the single parameter to the get_resource function.
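-
-For example, the reference ID of the compute resource defined earlier can be
-exposed as an output parameter:
-
-::
-
- resources:
-   my_instance:
-     type: OS::Nova::Compute
-     # ...
-
- outputs:
-   instance_id: { get_resource: my_instance }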
-
-
-str_replace
------------
-The *str_replace* function allows for dynamically constructing strings by
-providing a template string with placeholders and a list of mappings to assign
-values to those placeholders at runtime. The functionality of this function is
-similar to that of Python Template strings.
-The syntax of the str_replace function is as follows:
-
-::
-
- str_replace:
-   template: <template string>
-   params: <parameter mappings>
-
-template
- The *template* argument defines the template string that contains
- placeholders which will be substituted at runtime.
-params
- The *params* argument provides parameter mappings in the form of a
- dictionary, which will be used for placeholder substitution in the template
- string at runtime. Within parameter mappings one can make use of other
- functions (e.g. get_attr to use resource attribute values) for template
- substitution.
-
-The example below shows a simple use of the str_replace function in the outputs
-section of a template to build a URL for logging into a deployed application.
-
-::
-
- resources:
-   my_instance:
-     type: OS::Nova::Compute
-     # general metadata and properties ...
-
- outputs:
-   Login_URL:
-     description: The URL to log into the deployed application
-     value:
-       str_replace:
-         template: http://$host/MyApplication
-         params:
-           host: { get_attr: [ my_instance, PublicIp ] }
-
-The str_replace function can also be used for constructing bigger chunks of text
-like scripts for initializing compute instances as shown in the example below:
-
-::
-
- parameters:
-   DBRootPassword:
-     type: string
-     description: Root password for MySQL
-     hidden: true
-
- resources:
-   my_instance:
-     type: OS::Nova::Compute
-     properties:
-       # general properties ...
-       userdata:
-         str_replace:
-           template: |
-             #!/bin/bash
-             echo "Hello world"
-             echo "Setting MySQL root password"
-             mysqladmin -u root password $db_rootpassword
-             # do more things ...
-           params:
-             db_rootpassword: { get_param: DBRootPassword }
-
-In the example above, one can imagine that MySQL is being configured on a
-compute instance and the root password is going to be set based on a user
-provided parameter. The script for doing this is provided as userdata to the
-compute instance, leveraging the str_replace function.
+++ /dev/null
-..
- Licensed under the Apache License, Version 2.0 (the "License"); you may
- not use this file except in compliance with the License. You may obtain
- a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
- WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
- License for the specific language governing permissions and limitations
- under the License.
-
-Template Guide
-==============
-
-.. toctree::
- :maxdepth: 2
-
- environment
- functions
- openstack
- cfn
- rackspace
- hot_guide
- hot_spec
+++ /dev/null
-..
- Licensed under the Apache License, Version 2.0 (the "License"); you may
- not use this file except in compliance with the License. You may obtain
- a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
- WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
- License for the specific language governing permissions and limitations
- under the License.
-
-
-OpenStack Resource Types
-------------------------
-.. resourcepages:: OS::
+++ /dev/null
-..
- Licensed under the Apache License, Version 2.0 (the "License"); you may
- not use this file except in compliance with the License. You may obtain
- a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
- WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
- License for the specific language governing permissions and limitations
- under the License.
-
-Rackspace Cloud Resource Types
-------------------------------
-.. resourcepages:: Rackspace::
\ No newline at end of file
+++ /dev/null
-..
- Licensed under the Apache License, Version 2.0 (the "License"); you may
- not use this file except in compliance with the License. You may obtain
- a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
- WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
- License for the specific language governing permissions and limitations
- under the License.
-
-Template
---------
-https://github.com/openstack/heat-templates/blob/master/cfn/WordPress_Single_Instance.template
-
-Description
------------
-AWS CloudFormation Sample Template WordPress_Single_Instance: WordPress is web software you can use to create a beautiful website or blog. This template installs a single-instance WordPress deployment using a local MySQL database to store the data.
-
-
-Parameters
-----------
-*KeyName* :mod:`(required)`
- *type*
- *string*
- *description*
- *Name* of an existing key pair to use for the instance
-*InstanceType* :mod:`(optional)`
- *type*
- *string*
- *description*
- *Instance type* for the instance to be created
-*DBName* :mod:`(optional)`
- *type*
- *string*
- *description*
- *The WordPress database name*
-*DBUsername* :mod:`(optional)`
- *type*
- *string*
- *description*
- *The WordPress database admin account username*
-*DBPassword* :mod:`(optional)`
- *type*
- *string*
- *description*
- *The WordPress database admin account password*
-*DBRootPassword* :mod:`(optional)`
- *type*
- *string*
- *description*
- *Root password for MySQL*
-*LinuxDistribution* :mod:`(optional)`
- *type*
- *string*
- *description*
- *Distribution of choice*
+++ /dev/null
-..
- Licensed under the Apache License, Version 2.0 (the "License"); you may
- not use this file except in compliance with the License. You may obtain
- a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
- WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
- License for the specific language governing permissions and limitations
- under the License.
-
-Template
---------
-https://github.com/openstack/heat-templates/blob/master/hot/hello_world.yaml
-
-Description
------------
- Hello world HOT template that just defines a single compute instance.
- Contains just base features to verify base HOT support.
-
-
-Parameters
-----------
-*KeyName* :mod:`(required)`
- *type*
- *string*
- *description*
- *Name* of an existing key pair to use for the instance
-*InstanceType* :mod:`(required)`
- *type*
- *string*
- *description*
- *Instance type* for the instance to be created
-*ImageId* :mod:`(required)`
- *type*
- *string*
- *description*
- *ID* of the image to use for the instance
+++ /dev/null
-..
- Licensed under the Apache License, Version 2.0 (the "License"); you may
- not use this file except in compliance with the License. You may obtain
- a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
- WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
- License for the specific language governing permissions and limitations
- under the License.
-
-This page documents the templates at https://github.com/openstack/heat-templates/
-
-HOT Templates
-=============
-
-.. toctree::
- :maxdepth: 1
-
- hot/hello_world
-
-CFN Templates
-=============
-
-.. toctree::
- :maxdepth: 1
-
- cfn/WordPress_Single_Instance
\ No newline at end of file
+++ /dev/null
-Getting Started With Heat
-=========================
-
-There are `getting started guides`_ for several distributions in the Heat documentation.
-
-This is the old guide location, so please update any links to this page.
-
-.. _getting started guides: http://docs.openstack.org/developer/heat/getting_started/index.html
+++ /dev/null
-Heat OpenStack API Reference
-============================
-
-List Stacks
------------
-
-```
-GET /v1/{tenant_id}/stacks
-```
-
-Parameters:
-
-* `tenant_id` The unique identifier of the tenant or account
-
-Create Stack
-------------
-
-```
-POST /v1/{tenant_id}/stacks
-
-{
-    "stack_name": "{stack_name}",
-    "template_url": "{template_url}",
-    "parameters": {
-        "{key1}": "{value1}",
-        "{key2}": "{value2}"
-    },
-    "timeout_mins": {timeout_mins}
-}
-```
-
-Parameters:
-
-* `tenant_id` The unique identifier of the tenant or account
-* `stack_name` The name of the stack to create
-* `template_url` The URL of the template to instantiate
-* `template` A JSON template to instantiate - this takes precedence over the `template_url` if both are supplied
-* `keyn`, `valuen` User-defined parameters to pass to the template
-* `timeout_mins` The timeout for stack creation in minutes
-
-Result:
-
-```
-HTTP/1.1 201 Created
-Location: http://heat.example.com:8004/v1/{tenant_id}/stacks/{stack_name}/{stack_id}
-```
-
-Find Stack ID
--------------
-
-```
-GET /v1/{tenant_id}/stacks/{stack_name}
-```
-
-Parameters:
-
-* `stack_name` The name of the stack to look up
-
-Result:
-
-```
-HTTP/1.1 302 Found
-Location: http://heat.example.com:8004/v1/{tenant_id}/stacks/{stack_name}/{stack_id}
-```
-
-This operation also works with verbs other than `GET`, so you can also use it to perform `PUT` and `DELETE` operations on a current stack. Just set your client to follow redirects. Note that when redirecting, the request method should **not** change, as defined in [RFC 2616](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.3.3). However, many clients default to changing the method to `GET` on a 302, because this behaviour is ubiquitous in web browsers.
-
-Get Stack Data
---------------
-
-```
-GET /v1/{tenant_id}/stacks/{stack_name}/{stack_id}
-```
-
-Parameters:
-
-* `stack_name` The name of the stack to look up
-* `stack_id` The unique identifier of the stack to look up
-
-Retrieve Stack Template
------------------------
-
-```
-GET /v1/{tenant_id}/stacks/{stack_name}/{stack_id}/template
-```
-
-Parameters:
-
-* `tenant_id` The unique identifier of the tenant or account
-* `stack_name` The name of the stack to look up
-* `stack_id` The unique identifier of the stack to look up
-
-Update Stack
-------------
-
-```
-PUT /v1/{tenant_id}/stacks/{stack_name}/{stack_id}
-
-{
-    "template_url": "{template_url}",
-    "parameters": {
-        "{key1}": "{value1}",
-        "{key2}": "{value2}"
-    },
-    "timeout_mins": {timeout_mins}
-}
-```
-
-Parameters:
-
-* `tenant_id` The unique identifier of the tenant or account
-* `stack_name` The name of the stack to update
-* `stack_id` The unique identifier of the stack to look up
-* `template_url` The URL of the updated template
-* `template` An updated JSON template - this takes precedence over the `template_url` if both are supplied
-* `keyn`, `valuen` User-defined parameters to pass to the template
-* `timeout_mins` The timeout for stack creation in minutes
-
-Result:
-
-```
-HTTP/1.1 202 Accepted
-```
-
-Delete Stack
-------------
-
-```
-DELETE /v1/{tenant_id}/stacks/{stack_name}/{stack_id}
-```
-
-Parameters:
-
-* `tenant_id` The unique identifier of the tenant or account
-* `stack_name` The name of the stack to delete
-* `stack_id` The unique identifier of the stack to look up
-
-Result:
-
-```
-HTTP/1.1 204 No Content
-```
-
-Validate Template
------------------
-
-```
-POST /v1/{tenant_id}/validate
-
-{
-    "template_url": "{template_url}"
-}
-```
-
-Parameters:
-
-* `tenant_id` The unique identifier of the tenant or account
-* `template_url` The URL of the template to validate
-* `template` A JSON template to validate - this takes precedence over the `template_url` if both are supplied.
-
-List Valid Resource Types
--------------------------
-
-```
-GET /v1/{tenant_id}/resource_types
-```
-
-Parameters:
-
-* `tenant_id` The unique identifier of the tenant or account
-
-List Stack Resources
---------------------
-
-```
-GET /v1/{tenant_id}/stacks/{stack_name}/{stack_id}/resources
-```
-
-Parameters:
-
-* `tenant_id` The unique identifier of the tenant or account
-* `stack_name` The name of the stack to look up
-* `stack_id` The unique identifier of the stack to look up
-
-Find Stack Resources by Name
-----------------------------
-
-```
-GET /v1/{tenant_id}/stacks/{stack_name}/resources
-```
-
-Parameters:
-
-* `stack_name` The name of the stack to look up
-
-Result:
-
-```
-HTTP/1.1 302 Found
-Location: http://heat.example.com:8004/v1/{tenant_id}/stacks/{stack_name}/{stack_id}/resources
-```
-
-This is a shortcut to go directly to the list of stack resources when only the stack name is known.
-
-Get Resource
-------------
-
-```
-GET /v1/{tenant_id}/stacks/{stack_name}/{stack_id}/resources/{resource_name}
-```
-
-Parameters:
-
-* `tenant_id` The unique identifier of the tenant or account
-* `stack_name` The name of the stack to look up
-* `stack_id` The unique identifier of the stack to look up
-* `resource_name` The name of the resource in the template
-
-Get Resource Metadata
----------------------
-
-```
-GET /v1/{tenant_id}/stacks/{stack_name}/{stack_id}/resources/{resource_name}/metadata
-```
-
-Parameters:
-
-* `tenant_id` The unique identifier of the tenant or account
-* `stack_name` The name of the stack to look up
-* `stack_id` The unique identifier of the stack to look up
-* `resource_name` The name of the resource in the template
-
-List Stack Events
------------------
-
-```
-GET /v1/{tenant_id}/stacks/{stack_name}/{stack_id}/events
-```
-
-Parameters:
-
-* `tenant_id` The unique identifier of the tenant or account
-* `stack_name` The name of the stack to look up
-* `stack_id` The unique identifier of the stack to look up
-
-Find Stack Events by Name
--------------------------
-
-```
-GET /v1/{tenant_id}/stacks/{stack_name}/events
-```
-
-Parameters:
-
-* `stack_name` The name of the stack to look up
-
-Result:
-
-```
-HTTP/1.1 302 Found
-Location: http://heat.example.com:8004/v1/{tenant_id}/stacks/{stack_name}/{stack_id}/events
-```
-
-This is a shortcut to go directly to the list of stack events when only the stack name is known.
-
-
-List Resource Events
---------------------
-
-```
-GET /v1/{tenant_id}/stacks/{stack_name}/{stack_id}/resources/{resource_name}/events
-```
-
-Parameters:
-
-* `tenant_id` The unique identifier of the tenant or account
-* `stack_name` The name of the stack to look up
-* `stack_id` The unique identifier of the stack to look up
-* `resource_name` The name of the resource in the template
-
-Get Event
----------
-
-```
-GET /v1/{tenant_id}/stacks/{stack_name}/{stack_id}/resources/{resource_name}/events/{event_id}
-```
-
-Parameters:
-
-* `tenant_id` The unique identifier of the tenant or account
-* `stack_name` The name of the stack to look up
-* `stack_id` The unique identifier of the stack to look up
-* `resource_name` The name of the resource in the template
-* `event_id` The ID of the event
+++ /dev/null
-
-# heat-api pipeline
-[pipeline:heat-api]
-pipeline = faultwrap versionnegotiation authtoken context apiv1app
-
-# heat-api pipeline for standalone heat
-# i.e. uses an alternative auth backend that authenticates users against keystone
-# using username and password instead of validating token (which requires
-# an admin/service token).
-# To enable, in heat.conf:
-# [paste_deploy]
-# flavor = standalone
-#
-[pipeline:heat-api-standalone]
-pipeline = faultwrap versionnegotiation authpassword context apiv1app
-
-# heat-api pipeline for custom cloud backends
-# i.e. in heat.conf:
-# [paste_deploy]
-# flavor = custombackend
-#
-[pipeline:heat-api-custombackend]
-pipeline = faultwrap versionnegotiation context custombackendauth apiv1app
-
-# heat-api-cfn pipeline
-[pipeline:heat-api-cfn]
-pipeline = cfnversionnegotiation ec2authtoken authtoken context apicfnv1app
-
-# heat-api-cfn pipeline for standalone heat
-# relies exclusively on authenticating with ec2 signed requests
-[pipeline:heat-api-cfn-standalone]
-pipeline = cfnversionnegotiation ec2authtoken context apicfnv1app
-
-# heat-api-cloudwatch pipeline
-[pipeline:heat-api-cloudwatch]
-pipeline = versionnegotiation ec2authtoken authtoken context apicwapp
-
-# heat-api-cloudwatch pipeline for standalone heat
-# relies exclusively on authenticating with ec2 signed requests
-[pipeline:heat-api-cloudwatch-standalone]
-pipeline = versionnegotiation ec2authtoken context apicwapp
-
-[app:apiv1app]
-paste.app_factory = heat.common.wsgi:app_factory
-heat.app_factory = heat.api.openstack.v1:API
-
-[app:apicfnv1app]
-paste.app_factory = heat.common.wsgi:app_factory
-heat.app_factory = heat.api.cfn.v1:API
-
-[app:apicwapp]
-paste.app_factory = heat.common.wsgi:app_factory
-heat.app_factory = heat.api.cloudwatch:API
-
-[filter:versionnegotiation]
-paste.filter_factory = heat.common.wsgi:filter_factory
-heat.filter_factory = heat.api.openstack:version_negotiation_filter
-
-[filter:faultwrap]
-paste.filter_factory = heat.common.wsgi:filter_factory
-heat.filter_factory = heat.api.openstack:faultwrap_filter
-
-[filter:cfnversionnegotiation]
-paste.filter_factory = heat.common.wsgi:filter_factory
-heat.filter_factory = heat.api.cfn:version_negotiation_filter
-
-[filter:cwversionnegotiation]
-paste.filter_factory = heat.common.wsgi:filter_factory
-heat.filter_factory = heat.api.cloudwatch:version_negotiation_filter
-
-[filter:context]
-paste.filter_factory = heat.common.context:ContextMiddleware_filter_factory
-
-[filter:ec2authtoken]
-paste.filter_factory = heat.api.aws.ec2token:EC2Token_filter_factory
-
-# Auth middleware that validates token against keystone
-[filter:authtoken]
-paste.filter_factory = heat.common.auth_token:filter_factory
-
-# Auth middleware that validates username/password against keystone
-[filter:authpassword]
-paste.filter_factory = heat.common.auth_password:filter_factory
-
-# Auth middleware that validates against custom backend
-[filter:custombackendauth]
-paste.filter_factory = heat.common.custom_backend_auth:filter_factory
+++ /dev/null
-
-resource_registry:
- # allow older templates with Quantum in them.
- "OS::Quantum*": "OS::Neutron*"
- # Choose your implementation of AWS::CloudWatch::Alarm
- #"AWS::CloudWatch::Alarm": "file:///etc/heat/templates/AWS_CloudWatch_Alarm.yaml"
- "AWS::CloudWatch::Alarm": "OS::Heat::CWLiteAlarm"
- "OS::Metering::Alarm": "OS::Ceilometer::Alarm"
- "AWS::RDS::DBInstance": "file:///etc/heat/templates/AWS_RDS_DBInstance.yaml"
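
The wildcard entry above maps any legacy `OS::Quantum*` resource type onto its `OS::Neutron*` equivalent. A rough sketch of that substitution (an illustration of the intent only, not Heat's actual registry resolver):

```python
# Illustration of wildcard mapping in a resource_registry entry,
# e.g. "OS::Quantum*" -> "OS::Neutron*".  Not Heat's real resolver.

def resolve(registry, resource_type):
    """Map a resource type through exact and trailing-wildcard entries."""
    for pattern, target in registry.items():
        if pattern == resource_type:
            return target
        if pattern.endswith('*') and resource_type.startswith(pattern[:-1]):
            # Replace the matched prefix with the target's prefix.
            return target[:-1] + resource_type[len(pattern) - 1:]
    return resource_type

registry = {"OS::Quantum*": "OS::Neutron*"}
resolve(registry, "OS::Quantum::Net")  # -> "OS::Neutron::Net"
```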
+++ /dev/null
-[DEFAULT]
-
-#
-# Options defined in heat.common.config
-#
-
-# The SQLAlchemy connection string used to connect to the
-# database (string value)
-#sql_connection=mysql://heat:heat@localhost/heat
-
-# timeout before idle sql connections are reaped (integer
-# value)
-#sql_idle_timeout=3600
-
-# The default user for new instances (string value)
-#instance_user=ec2-user
-
-# Driver to use for controlling instances (string value)
-#instance_driver=heat.engine.nova
-
-# List of directories to search for Plugins (list value)
-#plugin_dirs=/usr/lib64/heat,/usr/lib/heat
-
-# The directory to search for environment files (string value)
-#environment_dir=/etc/heat/environment.d
-
-# Select deferred auth method, stored password or trusts
-# (string value)
-#deferred_auth_method=password
-
-# Subset of trustor roles to be delegated to heat (list value)
-#trusts_delegated_roles=heat_stack_owner
-
-# Maximum resources allowed per top-level stack. (integer
-# value)
-#max_resources_per_stack=1000
-
-# Name of the engine node. This can be an opaque identifier. It
-# is not necessarily a hostname, FQDN, or IP address. (string
-# value)
-#host=heat
-
-# seconds between running periodic tasks (integer value)
-#periodic_interval=60
-
-# URL of the Heat metadata server (string value)
-#heat_metadata_server_url=
-
-# URL of the Heat waitcondition server (string value)
-#heat_waitcondition_server_url=
-
-# URL of the Heat cloudwatch server (string value)
-#heat_watch_server_url=
-
-# Instance connection to cfn/cw API via https (string value)
-#instance_connection_is_secure=0
-
-# Instance connection to cfn/cw API validate certs if ssl
-# (string value)
-#instance_connection_https_validate_certificates=1
-
-# Keystone role for heat template-defined users (string value)
-#heat_stack_user_role=heat_stack_user
-
-# Maximum raw byte size of any template. (integer value)
-#max_template_size=524288
-
-# Maximum depth allowed when using nested stacks. (integer
-# value)
-#max_nested_stack_depth=3
-
-
-#
-# Options defined in heat.common.crypt
-#
-
-# Encryption key used for authentication info in database
-# (string value)
-#auth_encryption_key=notgood but just long enough i think
-
-
-#
-# Options defined in heat.common.wsgi
-#
-
-# Maximum raw byte size of JSON request body. Should be larger
-# than max_template_size. (integer value)
-#max_json_body_size=1048576
-
-
-#
-# Options defined in heat.db.api
-#
-
-# The backend to use for db (string value)
-#db_backend=sqlalchemy
-
-
-#
-# Options defined in heat.engine.clients
-#
-
-# Cloud module to use as a backend. Defaults to OpenStack.
-# (string value)
-#cloud_backend=<None>
-
-
-#
-# Options defined in heat.openstack.common.db.sqlalchemy.session
-#
-
-# the filename to use with sqlite (string value)
-#sqlite_db=heat.sqlite
-
-# If true, use synchronous mode for sqlite (boolean value)
-#sqlite_synchronous=true
-
-
-#
-# Options defined in heat.openstack.common.eventlet_backdoor
-#
-
-# Enable eventlet backdoor. Acceptable values are 0, <port>,
-# and <start>:<end>, where 0 results in listening on a random
-# tcp port number; <port> results in listening on the
-# specified port number (and not enabling backdoor if that
-# port is in use); and <start>:<end> results in listening on
-# the smallest unused port number within the specified range
-# of port numbers. The chosen port is displayed in the
-# service's log file. (string value)
-#backdoor_port=<None>
-
-
-#
-# Options defined in heat.openstack.common.lockutils
-#
-
-# Whether to disable inter-process locks (boolean value)
-#disable_process_locking=false
-
-# Directory to use for lock files. (string value)
-#lock_path=<None>
-
-
-#
-# Options defined in heat.openstack.common.log
-#
-
-# Print debugging output (set logging level to DEBUG instead
-# of default WARNING level). (boolean value)
-#debug=false
-
-# Print more verbose output (set logging level to INFO instead
-# of default WARNING level). (boolean value)
-#verbose=false
-
-# Log output to standard error (boolean value)
-#use_stderr=true
-
-# format string to use for log messages with context (string
-# value)
-#logging_context_format_string=%(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user)s %(tenant)s] %(instance)s%(message)s
-
-# format string to use for log messages without context
-# (string value)
-#logging_default_format_string=%(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s
-
-# data to append to log format when level is DEBUG (string
-# value)
-#logging_debug_format_suffix=%(funcName)s %(pathname)s:%(lineno)d
-
-# prefix each line of exception output with this format
-# (string value)
-#logging_exception_prefix=%(asctime)s.%(msecs)03d %(process)d TRACE %(name)s %(instance)s
-
-# list of logger=LEVEL pairs (list value)
-#default_log_levels=amqplib=WARN,sqlalchemy=WARN,boto=WARN,suds=INFO,keystone=INFO,eventlet.wsgi.server=WARN
-
-# publish error events (boolean value)
-#publish_errors=false
-
-# make deprecations fatal (boolean value)
-#fatal_deprecations=false
-
-# If an instance is passed with the log message, format it
-# like this (string value)
-#instance_format="[instance: %(uuid)s] "
-
-# If an instance UUID is passed with the log message, format
-# it like this (string value)
-#instance_uuid_format="[instance: %(uuid)s] "
-
-# If this option is specified, the logging configuration file
-# specified is used and overrides any other logging options
-# specified. Please see the Python logging module
-# documentation for details on logging configuration files.
-# (string value)
-#log_config=<None>
-
-# DEPRECATED. A logging.Formatter log message format string
-# which may use any of the available logging.LogRecord
-# attributes. This option is deprecated. Please use
-# logging_context_format_string and
-# logging_default_format_string instead. (string value)
-#log_format=<None>
-
-# Format string for %%(asctime)s in log records. Default:
-# %(default)s (string value)
-#log_date_format=%Y-%m-%d %H:%M:%S
-
-# (Optional) Name of log file to output to. If no default is
-# set, logging will go to stdout. (string value)
-#log_file=<None>
-
-# (Optional) The base directory used for relative --log-file
-# paths (string value)
-#log_dir=<None>
-
-# Use syslog for logging. (boolean value)
-#use_syslog=false
-
-# syslog facility to receive log lines (string value)
-#syslog_log_facility=LOG_USER
-
-
-#
-# Options defined in heat.openstack.common.notifier.api
-#
-
-# Driver or drivers to handle sending notifications (multi
-# valued)
-#notification_driver=
-
-# Default notification level for outgoing notifications
-# (string value)
-#default_notification_level=INFO
-
-# Default publisher_id for outgoing notifications (string
-# value)
-#default_publisher_id=<None>
-
-
-#
-# Options defined in heat.openstack.common.notifier.list_notifier
-#
-
-# List of drivers to send notifications (multi valued)
-#list_notifier_drivers=heat.openstack.common.notifier.no_op_notifier
-
-
-#
-# Options defined in heat.openstack.common.notifier.rpc_notifier
-#
-
-# AMQP topic used for openstack notifications (list value)
-#notification_topics=notifications
-
-
-#
-# Options defined in heat.openstack.common.policy
-#
-
-# JSON file containing policy (string value)
-#policy_file=policy.json
-
-# Rule enforced when requested rule is not found (string
-# value)
-#policy_default_rule=default
-
-
-#
-# Options defined in heat.openstack.common.rpc
-#
-
-# The messaging module to use, defaults to kombu. (string
-# value)
-#rpc_backend=heat.openstack.common.rpc.impl_kombu
-
-# Size of RPC thread pool (integer value)
-#rpc_thread_pool_size=64
-
-# Size of RPC connection pool (integer value)
-#rpc_conn_pool_size=30
-
-# Seconds to wait for a response from call or multicall
-# (integer value)
-#rpc_response_timeout=60
-
-# Seconds to wait before a cast expires (TTL). Only supported
-# by impl_zmq. (integer value)
-#rpc_cast_timeout=30
-
-# Modules of exceptions that are permitted to be recreated upon
-# receiving exception data from an rpc call. (list value)
-#allowed_rpc_exception_modules=heat.openstack.common.exception,heat.common.exception,nova.exception,cinder.exception,exceptions
-
-# If passed, use a fake RabbitMQ provider (boolean value)
-#fake_rabbit=false
-
-# AMQP exchange to connect to if using RabbitMQ or Qpid
-# (string value)
-#control_exchange=openstack
-
-
-#
-# Options defined in heat.openstack.common.rpc.amqp
-#
-
-# Use durable queues in amqp. (boolean value)
-#amqp_durable_queues=false
-
-# Auto-delete queues in amqp. (boolean value)
-#amqp_auto_delete=false
-
-
-#
-# Options defined in heat.openstack.common.rpc.impl_kombu
-#
-
-# SSL version to use (valid only if SSL enabled). Valid values
-# are TLSv1, SSLv23 and SSLv3. SSLv2 may be available on some
-# distributions (string value)
-#kombu_ssl_version=
-
-# SSL key file (valid only if SSL enabled) (string value)
-#kombu_ssl_keyfile=
-
-# SSL cert file (valid only if SSL enabled) (string value)
-#kombu_ssl_certfile=
-
-# SSL certification authority file (valid only if SSL enabled)
-# (string value)
-#kombu_ssl_ca_certs=
-
-# The RabbitMQ broker address where a single node is used
-# (string value)
-#rabbit_host=localhost
-
-# The RabbitMQ broker port where a single node is used
-# (integer value)
-#rabbit_port=5672
-
-# RabbitMQ HA cluster host:port pairs (list value)
-#rabbit_hosts=$rabbit_host:$rabbit_port
-
-# connect over SSL for RabbitMQ (boolean value)
-#rabbit_use_ssl=false
-
-# the RabbitMQ userid (string value)
-#rabbit_userid=guest
-
-# the RabbitMQ password (string value)
-#rabbit_password=guest
-
-# the RabbitMQ virtual host (string value)
-#rabbit_virtual_host=/
-
-# how frequently to retry connecting with RabbitMQ (integer
-# value)
-#rabbit_retry_interval=1
-
-# how long to backoff for between retries when connecting to
-# RabbitMQ (integer value)
-#rabbit_retry_backoff=2
-
-# maximum retries with trying to connect to RabbitMQ (the
-# default of 0 implies an infinite retry count) (integer
-# value)
-#rabbit_max_retries=0
-
-# use H/A queues in RabbitMQ (x-ha-policy: all). You need to
-# wipe RabbitMQ database when changing this option. (boolean
-# value)
-#rabbit_ha_queues=false
-
-
-#
-# Options defined in heat.openstack.common.rpc.impl_qpid
-#
-
-# Qpid broker hostname (string value)
-#qpid_hostname=localhost
-
-# Qpid broker port (integer value)
-#qpid_port=5672
-
-# Qpid HA cluster host:port pairs (list value)
-#qpid_hosts=$qpid_hostname:$qpid_port
-
-# Username for qpid connection (string value)
-#qpid_username=
-
-# Password for qpid connection (string value)
-#qpid_password=
-
-# Space separated list of SASL mechanisms to use for auth
-# (string value)
-#qpid_sasl_mechanisms=
-
-# Seconds between connection keepalive heartbeats (integer
-# value)
-#qpid_heartbeat=60
-
-# Transport to use, either 'tcp' or 'ssl' (string value)
-#qpid_protocol=tcp
-
-# Disable Nagle algorithm (boolean value)
-#qpid_tcp_nodelay=true
-
-# The qpid topology version to use. Version 1 is what was
-# originally used by impl_qpid. Version 2 includes some
-# backwards-incompatible changes that allow broker federation
-# to work. Users should update to version 2 when they are
-# able to take everything down, as it requires a clean break.
-# (integer value)
-#qpid_topology_version=1
-
-
-#
-# Options defined in heat.openstack.common.rpc.impl_zmq
-#
-
-# ZeroMQ bind address. Should be a wildcard (*), an ethernet
-# interface, or IP. The "host" option should point or resolve
-# to this address. (string value)
-#rpc_zmq_bind_address=*
-
-# MatchMaker driver (string value)
-#rpc_zmq_matchmaker=heat.openstack.common.rpc.matchmaker.MatchMakerLocalhost
-
-# ZeroMQ receiver listening port (integer value)
-#rpc_zmq_port=9501
-
-# Number of ZeroMQ contexts, defaults to 1 (integer value)
-#rpc_zmq_contexts=1
-
-# Maximum number of ingress messages to locally buffer per
-# topic. Default is unlimited. (integer value)
-#rpc_zmq_topic_backlog=<None>
-
-# Directory for holding IPC sockets (string value)
-#rpc_zmq_ipc_dir=/var/run/openstack
-
-# Name of this node. Must be a valid hostname, FQDN, or IP
-# address. Must match "host" option, if running Nova. (string
-# value)
-#rpc_zmq_host=heat
-
-
-#
-# Options defined in heat.openstack.common.rpc.matchmaker
-#
-
-# Heartbeat frequency (integer value)
-#matchmaker_heartbeat_freq=300
-
-# Heartbeat time-to-live. (integer value)
-#matchmaker_heartbeat_ttl=600
-
-
-[ssl]
-
-#
-# Options defined in heat.openstack.common.sslutils
-#
-
-# CA certificate file to use to verify connecting clients
-# (string value)
-#ca_file=<None>
-
-# Certificate file to use when starting the server securely
-# (string value)
-#cert_file=<None>
-
-# Private key file to use when starting the server securely
-# (string value)
-#key_file=<None>
-
-
-[database]
-
-#
-# Options defined in heat.openstack.common.db.api
-#
-
-# The backend to use for db (string value)
-#backend=sqlalchemy
-
-# Enable the experimental use of thread pooling for all DB API
-# calls (boolean value)
-#use_tpool=false
-
-
-#
-# Options defined in heat.openstack.common.db.sqlalchemy.session
-#
-
-# The SQLAlchemy connection string used to connect to the
-# database (string value)
-#connection=sqlite:////heat/openstack/common/db/$sqlite_db
-
-# The SQLAlchemy connection string used to connect to the
-# slave database (string value)
-#slave_connection=
-
-# timeout before idle sql connections are reaped (integer
-# value)
-#idle_timeout=3600
-
-# Minimum number of SQL connections to keep open in a pool
-# (integer value)
-#min_pool_size=1
-
-# Maximum number of SQL connections to keep open in a pool
-# (integer value)
-#max_pool_size=<None>
-
-# maximum db connection retries during startup. (setting -1
-# implies an infinite retry count) (integer value)
-#max_retries=10
-
-# interval between retries of opening a sql connection
-# (integer value)
-#retry_interval=10
-
-# If set, use this value for max_overflow with sqlalchemy
-# (integer value)
-#max_overflow=<None>
-
-# Verbosity of SQL debugging information. 0=None,
-# 100=Everything (integer value)
-#connection_debug=0
-
-# Add python stack traces to SQL as comment strings (boolean
-# value)
-#connection_trace=false
-
-# If set, use this value for pool_timeout with sqlalchemy
-# (integer value)
-#pool_timeout=<None>
-
-
-[paste_deploy]
-
-#
-# Options defined in heat.common.config
-#
-
-# The flavor to use (string value)
-#flavor=<None>
-
-# The API paste config file to use (string value)
-#api_paste_config=api-paste.ini
-
-
-[rpc_notifier2]
-
-#
-# Options defined in heat.openstack.common.notifier.rpc_notifier2
-#
-
-# AMQP topic(s) used for openstack notifications (list value)
-#topics=notifications
-
-
-[ec2authtoken]
-
-#
-# Options defined in heat.api.aws.ec2token
-#
-
-# Authentication Endpoint URI (string value)
-#auth_uri=<None>
-
-# Allow orchestration of multiple clouds (boolean value)
-#multi_cloud=false
-
-# Allowed keystone endpoints for auth_uri when multi_cloud is
-# enabled. At least one endpoint needs to be specified. (list
-# value)
-#allowed_auth_uris=
-
-
-[heat_api_cloudwatch]
-
-#
-# Options defined in heat.common.wsgi
-#
-
-# Address to bind the server. Useful when selecting a
-# particular network interface. (string value)
-#bind_host=0.0.0.0
-
-# The port on which the server will listen. (integer value)
-#bind_port=8003
-
-# Number of backlog requests to configure the socket with
-# (integer value)
-#backlog=4096
-
-# Location of the SSL Certificate File to use for SSL mode
-# (string value)
-#cert_file=<None>
-
-# Location of the SSL Key File to use for enabling SSL mode
-# (string value)
-#key_file=<None>
-
-# Number of workers for Heat service (integer value)
-#workers=0
-
-
-[heat_api]
-
-#
-# Options defined in heat.common.wsgi
-#
-
-# Address to bind the server. Useful when selecting a
-# particular network interface. (string value)
-#bind_host=0.0.0.0
-
-# The port on which the server will listen. (integer value)
-#bind_port=8004
-
-# Number of backlog requests to configure the socket with
-# (integer value)
-#backlog=4096
-
-# Location of the SSL Certificate File to use for SSL mode
-# (string value)
-#cert_file=<None>
-
-# Location of the SSL Key File to use for enabling SSL mode
-# (string value)
-#key_file=<None>
-
-# Number of workers for Heat service (integer value)
-#workers=0
-
-
-[heat_api_cfn]
-
-#
-# Options defined in heat.common.wsgi
-#
-
-# Address to bind the server. Useful when selecting a
-# particular network interface. (string value)
-#bind_host=0.0.0.0
-
-# The port on which the server will listen. (integer value)
-#bind_port=8000
-
-# Number of backlog requests to configure the socket with
-# (integer value)
-#backlog=4096
-
-# Location of the SSL Certificate File to use for SSL mode
-# (string value)
-#cert_file=<None>
-
-# Location of the SSL Key File to use for enabling SSL mode
-# (string value)
-#key_file=<None>
-
-# Number of workers for Heat service (integer value)
-#workers=0
-
-
-[auth_password]
-
-#
-# Options defined in heat.common.config
-#
-
-# Allow orchestration of multiple clouds (boolean value)
-#multi_cloud=false
-
-# Allowed keystone endpoints for auth_uri when multi_cloud is
-# enabled. At least one endpoint needs to be specified. (list
-# value)
-#allowed_auth_uris=
-
-
-[matchmaker_ring]
-
-#
-# Options defined in heat.openstack.common.rpc.matchmaker_ring
-#
-
-# Matchmaker ring file (JSON) (string value)
-#ringfile=/etc/oslo/matchmaker_ring.json
-
-
-[matchmaker_redis]
-
-#
-# Options defined in heat.openstack.common.rpc.matchmaker_redis
-#
-
-# Host to locate redis (string value)
-#host=127.0.0.1
-
-# Use this port to connect to redis host. (integer value)
-#port=6379
-
-# Password for Redis server. (optional) (string value)
-#password=<None>
-
-
+++ /dev/null
-{
- "deny_stack_user": "not role:heat_stack_user",
- "cloudformation:ListStacks": "rule:deny_stack_user",
- "cloudformation:CreateStack": "rule:deny_stack_user",
- "cloudformation:DescribeStacks": "rule:deny_stack_user",
- "cloudformation:DeleteStack": "rule:deny_stack_user",
- "cloudformation:UpdateStack": "rule:deny_stack_user",
- "cloudformation:DescribeStackEvents": "rule:deny_stack_user",
- "cloudformation:ValidateTemplate": "rule:deny_stack_user",
- "cloudformation:GetTemplate": "rule:deny_stack_user",
- "cloudformation:EstimateTemplateCost": "rule:deny_stack_user",
- "cloudformation:DescribeStackResource": "",
- "cloudformation:DescribeStackResources": "rule:deny_stack_user",
- "cloudformation:ListStackResources": "rule:deny_stack_user",
-
- "cloudwatch:DeleteAlarms": "rule:deny_stack_user",
- "cloudwatch:DescribeAlarmHistory": "rule:deny_stack_user",
- "cloudwatch:DescribeAlarms": "rule:deny_stack_user",
- "cloudwatch:DescribeAlarmsForMetric": "rule:deny_stack_user",
- "cloudwatch:DisableAlarmActions": "rule:deny_stack_user",
- "cloudwatch:EnableAlarmActions": "rule:deny_stack_user",
- "cloudwatch:GetMetricStatistics": "rule:deny_stack_user",
- "cloudwatch:ListMetrics": "rule:deny_stack_user",
- "cloudwatch:PutMetricAlarm": "rule:deny_stack_user",
- "cloudwatch:PutMetricData": "",
- "cloudwatch:SetAlarmState": "rule:deny_stack_user"
-}
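
The `deny_stack_user` rule above means: allow the action only when the caller does not hold the `heat_stack_user` role. A minimal sketch of that semantics (illustration only, not the real policy engine):

```python
# Sketch of the "not role:heat_stack_user" rule from policy.json.
# Illustration only; Heat evaluates this via its policy engine.

def deny_stack_user(roles):
    """Return True (allow) only if the caller lacks heat_stack_user."""
    return "heat_stack_user" not in roles

deny_stack_user(["member"])           # True: normal users may call the API
deny_stack_user(["heat_stack_user"])  # False: in-instance users are denied
```

Note that `cloudformation:DescribeStackResource` and `cloudwatch:PutMetricData` map to the empty rule, so template-defined stack users may still call those two actions.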
+++ /dev/null
-HeatTemplateFormatVersion: '2012-12-12'
-Description: AWS::CloudWatch::Alarm using Ceilometer.
-Parameters:
- AlarmDescription:
- Type: String
- Default: An alarm
- EvaluationPeriods:
- Type: String
- MetricName:
- Type: String
- Namespace:
- Type: String
- Default: system/linux
- Period:
- Type: String
- ComparisonOperator:
- Type: String
- AllowedValues: [GreaterThanOrEqualToThreshold, GreaterThanThreshold,
- LessThanThreshold, LessThanOrEqualToThreshold]
- Statistic:
- Type: String
- AllowedValues: [SampleCount, Average, Sum, Minimum, Maximum]
- Threshold:
- Type: String
- Units:
- Type: String
- AllowedValues: [Seconds, Microseconds, Milliseconds, Bytes, Kilobytes,
- Megabytes, Gigabytes, Terabytes, Bits, Kilobits, Megabits,
- Gigabits, Terabits, Percent, Count, Bytes/Second,
- Kilobytes/Second, Megabytes/Second, Gigabytes/Second,
- Terabytes/Second, Bits/Second, Kilobits/Second,
- Megabits/Second, Gigabits/Second, Terabits/Second,
- Count/Second, None]
- Default: None
- AlarmActions:
- Type: CommaDelimitedList
- Default: ''
- OKActions:
- Type: CommaDelimitedList
- Default: ''
- InsufficientDataActions:
- Type: CommaDelimitedList
- Default: ''
- Dimensions:
- Type: CommaDelimitedList
-
-Mappings:
- ComparisonOperatorMap:
- LessThanOrEqualToThreshold: {Ceilometer: le}
- LessThanThreshold: {Ceilometer: lt}
- GreaterThanThreshold: {Ceilometer: gt}
- GreaterThanOrEqualToThreshold: {Ceilometer: ge}
- StatisticMap:
- SampleCount: {Ceilometer: count}
- Average: {Ceilometer: avg}
- Sum: {Ceilometer: sum}
- Minimum: {Ceilometer: min}
- Maximum: {Ceilometer: max}
-
-Resources:
- __alarm__:
- Type: OS::Ceilometer::Alarm
- Properties:
- description:
- Ref: AlarmDescription
- counter_name:
- Ref: MetricName
- period:
- Ref: Period
- evaluation_periods:
- Ref: EvaluationPeriods
- threshold:
- Ref: Threshold
- alarm_actions:
- "Fn::Split": [",", {Ref: AlarmActions}]
- ok_actions:
- "Fn::Split": [",", {Ref: OKActions}]
- insufficient_data_actions:
- "Fn::Split": [",", {Ref: InsufficientDataActions}]
- statistic:
- "Fn::FindInMap": [StatisticMap, {Ref: Statistic}, Ceilometer]
- comparison_operator:
- "Fn::FindInMap": [ComparisonOperatorMap, {Ref: ComparisonOperator}, Ceilometer]
- matching_metadata:
- "Fn::MemberListToMap": [Name, Value, {"Fn::Split": [",", {Ref: Dimensions}]}]
+++ /dev/null
-HeatTemplateFormatVersion: '2012-12-12'
-Description: 'Builtin AWS::RDS::DBInstance'
-Parameters:
- AllocatedStorage:
- Type: String
- DBInstanceClass:
- Type: String
- DBName:
- Type: String
- DBSecurityGroups:
- Type: CommaDelimitedList
- Default: ''
- Engine:
- Type: String
- AllowedValues: ['MySQL']
- MasterUsername:
- Type: String
- MasterUserPassword:
- Type: String
- Port:
- Type: String
- Default: '3306'
- KeyName:
- Type: String
- Default: ''
-
-Mappings:
- DBInstanceToInstance:
- db.m1.small: {Instance: m1.small}
- db.m1.large: {Instance: m1.large}
- db.m1.xlarge: {Instance: m1.xlarge}
- db.m2.xlarge: {Instance: m2.xlarge}
- db.m2.2xlarge: {Instance: m2.2xlarge}
- db.m2.4xlarge: {Instance: m2.4xlarge}
-
-Resources:
- ServerSecurityGroup:
- Type: AWS::EC2::SecurityGroup
- Properties:
- GroupDescription: 'Enable SSH access'
- SecurityGroupIngress:
- - IpProtocol: icmp
- FromPort: '-1'
- ToPort: '-1'
- CidrIp: '0.0.0.0/0'
- - IpProtocol: tcp
- FromPort: '22'
- ToPort : '22'
- CidrIp : '0.0.0.0/0'
- - IpProtocol: tcp
- FromPort: {Ref: Port}
- ToPort : {Ref: Port}
- CidrIp : '0.0.0.0/0'
- DatabaseInstance:
- Type: AWS::EC2::Instance
- Metadata:
- AWS::CloudFormation::Init:
- config:
- files:
- /tmp/db_setup.sql:
- content:
- 'Fn::Replace':
- - DBName: {Ref: DBName}
- MasterUserPassword: {Ref: MasterUserPassword}
- MasterUsername: {Ref: MasterUsername}
- - |
- CREATE DATABASE DBName;
- GRANT ALL PRIVILEGES ON DBName.* TO "MasterUsername"@"%"
- IDENTIFIED BY "MasterUserPassword";
- FLUSH PRIVILEGES;
- EXIT
- mode: '000644'
- owner: root
- group: root
- packages:
- yum:
- mariadb: []
- mariadb-server: []
- services:
- systemd:
- mysqld:
- enabled: true
- ensureRunning: true
- Properties:
- ImageId: F19-x86_64-cfntools
- InstanceType: {'Fn::FindInMap': [DBInstanceToInstance,
- {Ref: DBInstanceClass}, Instance]}
- KeyName: {Ref: KeyName}
- SecurityGroups: [{"Ref" : "ServerSecurityGroup"}]
- UserData:
- Fn::Base64:
- Fn::Replace:
- - 'AWS::StackName': {Ref: 'AWS::StackName'}
- 'AWS::Region': {Ref: 'AWS::Region'}
- MasterUserPassword: {Ref: MasterUserPassword}
- WaitHandle: {Ref: WaitHandle}
- - |
- #!/bin/bash -v
- #
- iptables -F
-
- # Helper function
- function error_exit
- {
- /opt/aws/bin/cfn-signal -e 1 -r \"$1\" 'WaitHandle'
- exit 1
- }
- /opt/aws/bin/cfn-init -s AWS::StackName -r DatabaseInstance --region AWS::Region || error_exit 'Failed to run cfn-init'
- # Setup MySQL root password and create a user
- mysqladmin -u root password 'MasterUserPassword'
- mysql -u root --password='MasterUserPassword' < /tmp/db_setup.sql || error_exit 'Failed to setup mysql'
-
- # Database setup completed, signal success
- /opt/aws/bin/cfn-signal -e 0 -r "MySQL server setup complete" 'WaitHandle'
-
- WaitHandle:
- Type: AWS::CloudFormation::WaitConditionHandle
- WaitCondition:
- Type: AWS::CloudFormation::WaitCondition
- DependsOn: DatabaseInstance
- Properties:
- Handle: {Ref: WaitHandle}
- Timeout: "600"
-
-Outputs:
- Endpoint.Address: {'Fn::GetAtt': [DatabaseInstance, PublicIp]}
- Endpoint.Port: {Ref: Port}
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Copyright 2013 IBM Corp.
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from heat.openstack.common import gettextutils
-
-
-gettextutils.install('heat')
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import hashlib
-import requests
-
-from heat.openstack.common import gettextutils
-from heat.api.aws.exception import HeatAPIException
-
-gettextutils.install('heat')
-
-from heat.common import wsgi
-from heat.openstack.common import jsonutils as json
-from oslo.config import cfg
-
-import webob
-from heat.api.aws import exception
-
-from heat.openstack.common import log as logging
-
-logger = logging.getLogger(__name__)
-
-
-opts = [
- cfg.StrOpt('auth_uri',
- default=None,
- help=_("Authentication Endpoint URI")),
- cfg.BoolOpt('multi_cloud',
- default=False,
- help=_('Allow orchestration of multiple clouds')),
- cfg.ListOpt('allowed_auth_uris',
- default=[],
- help=_('Allowed keystone endpoints for auth_uri when '
- 'multi_cloud is enabled. At least one endpoint needs '
- 'to be specified.'))
-]
-cfg.CONF.register_opts(opts, group='ec2authtoken')
-
-
-class EC2Token(wsgi.Middleware):
- """Authenticate an EC2 request with keystone and convert to token."""
-
- def __init__(self, app, conf):
- self.conf = conf
- self.application = app
-
- def _conf_get(self, name):
- # try config from paste-deploy first
- if name in self.conf:
- return self.conf[name]
- else:
- return cfg.CONF.ec2authtoken[name]
-
- @staticmethod
- def _conf_get_keystone_ec2_uri(auth_uri):
- if auth_uri.endswith('/'):
- return '%sec2tokens' % auth_uri
- return '%s/ec2tokens' % auth_uri
-
- def _get_signature(self, req):
- """
- Extract the signature from the request, this can be a get/post
- variable or for v4 also in a header called 'Authorization'
- - params['Signature'] == version 0,1,2,3
- - params['X-Amz-Signature'] == version 4
- - header 'Authorization' == version 4
- see http://docs.aws.amazon.com/general/latest/gr/
- sigv4-signed-request-examples.html
- """
- sig = req.params.get('Signature') or req.params.get('X-Amz-Signature')
- if sig is None and 'Authorization' in req.headers:
- auth_str = req.headers['Authorization']
- sig = auth_str.partition("Signature=")[2].split(',')[0]
-
- return sig
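
A standalone mirror of the v4 `Authorization`-header branch above: take the text after `Signature=` up to the next comma. The header value below is fabricated for illustration:

```python
# Mirrors the 'Authorization' header parsing in _get_signature above.
# The example header value is fabricated for illustration.

def extract_signature(auth_header):
    return auth_header.partition("Signature=")[2].split(',')[0]

hdr = ("AWS4-HMAC-SHA256 Credential=AKIDEXAMPLE/20130524/us-east-1/"
       "s3/aws4_request, SignedHeaders=host;range, Signature=deadbeef")
extract_signature(hdr)  # -> "deadbeef"
```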
-
- def _get_access(self, req):
- """
- Extract the access key identifier, for v 0/1/2/3 this is passed
- as the AccessKeyId parameter, for version4 it is either and
- X-Amz-Credential parameter or a Credential= field in the
- 'Authorization' header string
- """
- access = req.params.get('AWSAccessKeyId')
- if access is None:
- cred_param = req.params.get('X-Amz-Credential')
- if cred_param:
- access = cred_param.split("/")[0]
-
- if access is None and 'Authorization' in req.headers:
- auth_str = req.headers['Authorization']
- cred_str = auth_str.partition("Credential=")[2].split(',')[0]
- access = cred_str.split("/")[0]
-
- return access
-
- @webob.dec.wsgify(RequestClass=wsgi.Request)
- def __call__(self, req):
- if not self._conf_get('multi_cloud'):
- return self._authorize(req, self._conf_get('auth_uri'))
- else:
- # attempt to authorize for each configured allowed_auth_uris
- # until one is successful.
- # This is safe for the following reasons:
- # 1. AWSAccessKeyId is a randomly generated sequence
- # 2. No secret is transferred to validate a request
- last_failure = None
- for auth_uri in self._conf_get('allowed_auth_uris'):
- try:
- logger.debug("Attempt authorize on %s" % auth_uri)
- return self._authorize(req, auth_uri)
- except HeatAPIException as e:
- logger.debug("Authorize failed: %s" % e.__class__)
- last_failure = e
- raise last_failure or exception.HeatAccessDeniedError()
-
- def _authorize(self, req, auth_uri):
- # Read request signature and access id.
- # If we find X-Auth-User in the headers we ignore a key error
- # here so that we can use both authentication methods.
- # Returning here just means the user didn't supply AWS
- # authentication and we'll let the app try native keystone next.
- logger.info("Checking AWS credentials..")
-
- signature = self._get_signature(req)
- if not signature:
- if 'X-Auth-User' in req.headers:
- return self.application
- else:
- logger.info("No AWS Signature found.")
- raise exception.HeatIncompleteSignatureError()
-
- access = self._get_access(req)
- if not access:
- if 'X-Auth-User' in req.headers:
- return self.application
- else:
- logger.info("No AWSAccessKeyId/Authorization Credential")
- raise exception.HeatMissingAuthenticationTokenError()
-
- logger.info("AWS credentials found, checking against keystone.")
- # Make a copy of args for authentication and signature verification.
- auth_params = dict(req.params)
- # The 'Signature' param is not part of the authentication args
- auth_params.pop('Signature', None)
-
- # Authenticate the request.
- # AWS v4 authentication requires a hash of the body
- body_hash = hashlib.sha256(req.body).hexdigest()
- creds = {'ec2Credentials': {'access': access,
- 'signature': signature,
- 'host': req.host,
- 'verb': req.method,
- 'path': req.path,
- 'params': auth_params,
- 'headers': req.headers,
- 'body_hash': body_hash
- }}
- creds_json = json.dumps(creds)
- headers = {'Content-Type': 'application/json'}
-
- keystone_ec2_uri = self._conf_get_keystone_ec2_uri(auth_uri)
- logger.info('Authenticating with %s' % keystone_ec2_uri)
- response = requests.post(keystone_ec2_uri, data=creds_json,
- headers=headers)
- result = response.json()
- try:
- token_id = result['access']['token']['id']
- tenant = result['access']['token']['tenant']['name']
- tenant_id = result['access']['token']['tenant']['id']
- logger.info("AWS authentication successful.")
- except (AttributeError, KeyError):
- logger.info("AWS authentication failure.")
- # Try to extract the reason for failure so we can return the
- # appropriate AWS error via raising an exception
- try:
- reason = result['error']['message']
- except KeyError:
- reason = None
-
- if reason == "EC2 access key not found.":
- raise exception.HeatInvalidClientTokenIdError()
- elif reason == "EC2 signature not supplied.":
- raise exception.HeatSignatureError()
- else:
- raise exception.HeatAccessDeniedError()
-
- # Authenticated!
- ec2_creds = {'ec2Credentials': {'access': access,
- 'signature': signature}}
- req.headers['X-Auth-EC2-Creds'] = json.dumps(ec2_creds)
- req.headers['X-Auth-Token'] = token_id
- req.headers['X-Tenant-Name'] = tenant
- req.headers['X-Tenant-Id'] = tenant_id
- req.headers['X-Auth-URL'] = self._conf_get('auth_uri')
-
- metadata = result['access'].get('metadata', {})
- roles = metadata.get('roles', [])
- req.headers['X-Roles'] = ','.join(roles)
-
- return self.application
-
-
-def EC2Token_filter_factory(global_conf, **local_conf):
- """
- Factory method for paste.deploy
- """
- conf = global_conf.copy()
- conf.update(local_conf)
-
- def filter(app):
- return EC2Token(app, conf)
-
- return filter
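The v4 'Authorization' header handling in `_get_signature` and `_get_access` above can be illustrated with a small stdlib-only sketch; the helper name and the example header value are hypothetical (following the documented AWS SigV4 header format), not part of the original code:

```python
# Illustrative sketch of the SigV4 'Authorization' header parsing used by
# _get_signature()/_get_access() above; stdlib only, no heat imports.
def parse_sigv4_authorization(auth_str):
    # Header shape (per AWS SigV4 docs):
    # "AWS4-HMAC-SHA256 Credential=<access>/<date>/<region>/<service>/
    #  aws4_request, SignedHeaders=..., Signature=<hex>"
    sig = auth_str.partition("Signature=")[2].split(',')[0]
    cred = auth_str.partition("Credential=")[2].split(',')[0]
    access = cred.split("/")[0]
    return access, sig


hdr = ("AWS4-HMAC-SHA256 "
       "Credential=AKIDEXAMPLE/20130524/us-east-1/s3/aws4_request, "
       "SignedHeaders=host;x-amz-date, "
       "Signature=fe5f80f77d5fa3beca038a248ff027d0445342fe2855ddc96")
access, sig = parse_sigv4_authorization(hdr)
print(access)
```

Because only the access key identifier and signature are extracted here, and the secret never leaves the client, trying each configured `allowed_auth_uris` endpoint in turn (as `__call__` does) leaks nothing sensitive.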
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Copyright 2010 United States Government as represented by the
-# Administrator of the National Aeronautics and Space Administration.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-"""Heat API exception subclasses - maps API response errors to AWS Errors"""
-
-import webob.exc
-from heat.common import wsgi
-import heat.openstack.common.rpc.common as rpc_common
-
-
-class HeatAPIException(webob.exc.HTTPError):
- '''
- Subclass webob HTTPError so we can correctly serialize the wsgi response
- into the http response body, using the format specified by the request.
- Note this should not be used directly; instead use one of the subclasses
- defined below which map to AWS API errors
- '''
- code = 400
- title = "HeatAPIException"
- explanation = "Generic HeatAPIException, please use specific subclasses!"
- err_type = "Sender"
-
- def __init__(self, detail=None):
- '''
- Overload HTTPError constructor, so we can create a default serialized
- body. This is required because not all error responses are processed
- by the wsgi controller (i.e. auth errors, which are further up the
- paste pipeline). We serialize in XML by default (as AWS does)
- '''
- webob.exc.HTTPError.__init__(self, detail=detail)
- serializer = wsgi.XMLResponseSerializer()
- serializer.default(self, self.get_unserialized_body())
-
- def get_unserialized_body(self):
- '''
- Return a dict suitable for serialization in the wsgi controller
- This wraps the exception details in a format which maps to the
- expected format for the AWS API
- '''
- # Note the AWS response format specifies a "Code" element which is not
- # the HTTP response code, but the AWS API error code, i.e. self.title
- if self.detail:
- message = ":".join([self.explanation, self.detail])
- else:
- message = self.explanation
- return {'ErrorResponse': {'Error': {'Type': self.err_type,
- 'Code': self.title, 'Message': message}}}
-
-
-# Common Error Subclasses:
-# As defined in http://docs.amazonwebservices.com/AWSCloudFormation/
-# latest/APIReference/CommonErrors.html
-
-
-class HeatIncompleteSignatureError(HeatAPIException):
- '''
- The request signature does not conform to AWS standards
- '''
- code = 400
- title = "IncompleteSignature"
- explanation = "The request signature does not conform to AWS standards"
-
-
-class HeatInternalFailureError(HeatAPIException):
- '''
- The request processing has failed due to some unknown error
- '''
- code = 500
- title = "InternalFailure"
- explanation = "The request processing has failed due to an internal error"
- err_type = "Server"
-
-
-class HeatInvalidActionError(HeatAPIException):
- '''
- The action or operation requested is invalid
- '''
- code = 400
- title = "InvalidAction"
- explanation = "The action or operation requested is invalid"
-
-
-class HeatInvalidClientTokenIdError(HeatAPIException):
- '''
- The X.509 certificate or AWS Access Key ID provided does not exist
- '''
- code = 403
- title = "InvalidClientTokenId"
- explanation = "The certificate or AWS Key ID provided does not exist"
-
-
-class HeatInvalidParameterCombinationError(HeatAPIException):
- '''
- Parameters that must not be used together were used together
- '''
- code = 400
- title = "InvalidParameterCombination"
- explanation = "Incompatible parameters were used together"
-
-
-class HeatInvalidParameterValueError(HeatAPIException):
- '''
- A bad or out-of-range value was supplied for the input parameter
- '''
- code = 400
- title = "InvalidParameterValue"
- explanation = "A bad or out-of-range value was supplied"
-
-
-class HeatInvalidQueryParameterError(HeatAPIException):
- '''
- AWS query string is malformed, does not adhere to AWS standards
- '''
- code = 400
- title = "InvalidQueryParameter"
- explanation = "AWS query string is malformed, does not adhere to AWS spec"
-
-
-class HeatMalformedQueryStringError(HeatAPIException):
- '''
- The query string is malformed
- '''
- code = 404
- title = "MalformedQueryString"
- explanation = "The query string is malformed"
-
-
-class HeatMissingActionError(HeatAPIException):
- '''
- The request is missing an action or operation parameter
- '''
- code = 400
- title = "MissingAction"
- explanation = "The request is missing an action or operation parameter"
-
-
-class HeatMissingAuthenticationTokenError(HeatAPIException):
- '''
- Request must contain either a valid (registered) AWS Access Key ID
- or X.509 certificate
- '''
- code = 403
- title = "MissingAuthenticationToken"
- explanation = "Does not contain a valid AWS Access Key or certificate"
-
-
-class HeatMissingParameterError(HeatAPIException):
- '''
- An input parameter that is mandatory for processing the request is missing
- '''
- code = 400
- title = "MissingParameter"
- explanation = "A mandatory input parameter is missing"
-
-
-class HeatOptInRequiredError(HeatAPIException):
- '''
- The AWS Access Key ID needs a subscription for the service
- '''
- code = 403
- title = "OptInRequired"
- explanation = "The AWS Access Key ID needs a subscription for the service"
-
-
-class HeatRequestExpiredError(HeatAPIException):
- '''
- Request is past its expiry date or the request date (each with 15
- minute padding), or the request date occurs more than 15 minutes
- in the future
- '''
- code = 400
- title = "RequestExpired"
- explanation = "Request expired or more than 15mins in the future"
-
-
-class HeatServiceUnavailableError(HeatAPIException):
- '''
- The request has failed due to a temporary failure of the server
- '''
- code = 503
- title = "ServiceUnavailable"
- explanation = "Service temporarily unavailable"
- err_type = "Server"
-
-
-class HeatThrottlingError(HeatAPIException):
- '''
- Request was denied due to request throttling
- '''
- code = 400
- title = "Throttling"
- explanation = "Request was denied due to request throttling"
-
-
-class AlreadyExistsError(HeatAPIException):
- '''
- Resource with the name requested already exists
- '''
- code = 400
- title = 'AlreadyExists'
- explanation = "Resource with the name requested already exists"
-
-
-# Not documented in the AWS docs, authentication failure errors
-class HeatAccessDeniedError(HeatAPIException):
- '''
- This is the response given when authentication fails due to the
- user's IAM group memberships, meaning we deny access
- '''
- code = 403
- title = "AccessDenied"
- explanation = "User is not authorized to perform action"
-
-
-class HeatSignatureError(HeatAPIException):
- '''
- This is the response given when authentication fails due to
- a bad signature
- '''
- code = 403
- title = "SignatureDoesNotMatch"
- explanation = ("The request signature we calculated does not match the " +
- "signature you provided")
-
-
-# Heat-specific errors
-class HeatAPINotImplementedError(HeatAPIException):
- '''
- This is the response given when an API action is not yet implemented
- '''
- code = 500
- title = "APINotImplemented"
- explanation = ("The requested action is not yet implemented")
- err_type = "Server"
-
-
-def map_remote_error(ex):
- """
- Map rpc_common.RemoteError exceptions returned by the engine
- to HeatAPIException subclasses which can be used to return
- properly formatted AWS error responses
- """
- inval_param_errors = (
- 'AttributeError',
- 'ValueError',
- 'InvalidTenant',
- 'StackNotFound',
- 'ResourceNotFound',
- 'ResourceNotAvailable',
- 'ResourceTypeNotFound',
- 'PhysicalResourceNotFound',
- 'WatchRuleNotFound',
- 'StackValidationFailed',
- 'InvalidTemplateReference',
- 'UnknownUserParameter',
- 'UserParameterMissing',
- )
- denied_errors = ('Forbidden', 'NotAuthorized')
- already_exists_errors = ('StackExists',)
-
- ex_type = ex.__class__.__name__
-
- if ex_type.endswith(rpc_common._REMOTE_POSTFIX):
- ex_type = ex_type[:-len(rpc_common._REMOTE_POSTFIX)]
-
- if ex_type in inval_param_errors:
- return HeatInvalidParameterValueError(detail=str(ex.message))
- elif ex_type in denied_errors:
- return HeatAccessDeniedError(detail=str(ex.message))
- elif ex_type in already_exists_errors:
- return AlreadyExistsError(detail=str(ex.message))
- else:
- # Map everything else to internal server error for now
- return HeatInternalFailureError(detail=str(ex.message))
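The ErrorResponse body assembled by `get_unserialized_body()` above has a fixed dict shape; a minimal sketch with a hypothetical helper name (the shape and message join are copied from the method, XML serialization is left out):

```python
# Sketch of the AWS-style error body built by get_unserialized_body()
# above (dict shape only; the wsgi serializer renders it as XML).
def aws_error_body(err_type, code, explanation, detail=None):
    # "Code" carries the AWS API error code (the exception class title),
    # not the HTTP status code.
    message = ":".join([explanation, detail]) if detail else explanation
    return {'ErrorResponse': {'Error': {'Type': err_type,
                                        'Code': code,
                                        'Message': message}}}


body = aws_error_body('Sender', 'MissingParameter',
                      'A mandatory input parameter is missing', 'StackName')
```

With `detail` supplied, the message is the explanation and detail joined with a colon, matching the method above.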
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-'''
-Helper utilities related to the AWS API implementations
-'''
-
-import re
-import itertools
-from heat.api.aws import exception
-
-from heat.openstack.common import log as logging
-
-LOG = logging.getLogger(__name__)
-
-
-def format_response(action, response):
- """
- Format response from engine into API format
- """
- return {'%sResponse' % action: {'%sResult' % action: response}}
-
-
-def extract_param_pairs(params, prefix='', keyname='', valuename=''):
- """
- Extract a dictionary of user input parameters from an AWS-style
- parameter-pair encoded list
-
- In the AWS API list items appear as two key-value
- pairs (passed as query parameters) with keys of the form below:
-
- Prefix.member.1.keyname=somekey
- Prefix.member.1.keyvalue=somevalue
- Prefix.member.2.keyname=anotherkey
- Prefix.member.2.keyvalue=somevalue
-
- We reformat this into a dict here to match the heat
- engine API expected format
- """
- plist = extract_param_list(params, prefix)
- kvs = [(p[keyname], p[valuename]) for p in plist
- if keyname in p and valuename in p]
-
- return dict(kvs)
-
-
-def extract_param_list(params, prefix=''):
- """
- Extract a list-of-dicts based on parameters containing AWS style list
-
- MetricData.member.1.MetricName=buffers
- MetricData.member.1.Unit=Bytes
- MetricData.member.1.Value=231434333
- MetricData.member.2.MetricName=buffers2
- MetricData.member.2.Unit=Bytes
- MetricData.member.2.Value=12345
-
- This can be extracted by passing prefix=MetricData, resulting in a
- list containing two dicts
- """
-
- key_re = re.compile(r"%s\.member\.([0-9]+)\.(.*)" % (prefix))
-
- def get_param_data(params):
- for param_name, value in params.items():
- match = key_re.match(param_name)
- if match:
- try:
- index = int(match.group(1))
- except ValueError:
- pass
- else:
- key = match.group(2)
-
- yield (index, (key, value))
-
- # Sort and group by index
- key_func = lambda d: d[0]
- data = sorted(get_param_data(params), key=key_func)
- members = itertools.groupby(data, key_func)
-
- return [dict(kv for di, kv in m) for mi, m in members]
-
-
-def get_param_value(params, key):
- """
- Helper function; looks up an expected parameter in a parsed
- params dict and returns the result. If params does not contain
- the requested key we raise an exception of the appropriate type
- """
- try:
- return params[key]
- except KeyError:
- LOG.error("Request does not contain %s parameter!" % key)
- raise exception.HeatMissingParameterError(key)
-
-
-def reformat_dict_keys(keymap={}, inputdict={}):
- '''
- Utility function for mapping one dict format to another
- '''
- return dict([(outk, inputdict[ink]) for ink, outk in keymap.items()
- if ink in inputdict])
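The AWS list encoding handled by `extract_param_list()` above can be exercised with a self-contained sketch; the logic is mirrored from the function itself (heat's logging and exception imports are omitted):

```python
# Self-contained version of the AWS-style list extraction above:
# "Prefix.member.<N>.<key>=<value>" query params become a list of dicts,
# grouped and ordered by the member index N.
import re
import itertools


def extract_param_list(params, prefix=''):
    key_re = re.compile(r"%s\.member\.([0-9]+)\.(.*)" % prefix)

    def get_param_data(params):
        for name, value in params.items():
            match = key_re.match(name)
            if match:
                try:
                    index = int(match.group(1))
                except ValueError:
                    continue
                yield (index, (match.group(2), value))

    # Sort and group by index so each member becomes one dict
    data = sorted(get_param_data(params), key=lambda d: d[0])
    members = itertools.groupby(data, lambda d: d[0])
    return [dict(kv for _, kv in m) for _, m in members]


params = {
    'MetricData.member.1.MetricName': 'buffers',
    'MetricData.member.1.Value': '231434333',
    'MetricData.member.2.MetricName': 'buffers2',
    'MetricData.member.2.Value': '12345',
}
result = extract_param_list(params, 'MetricData')
```

Sorting before `itertools.groupby` matters: groupby only merges adjacent keys, so an unsorted params dict would otherwise split one member across groups.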
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from heat.openstack.common import gettextutils
-
-gettextutils.install('heat')
-
-
-from heat.api.middleware.version_negotiation import VersionNegotiationFilter
-from heat.api.cfn import versions
-
-
-def version_negotiation_filter(app, conf, **local_conf):
- return VersionNegotiationFilter(versions.Controller, app,
- conf, **local_conf)
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import routes
-
-from webob import Request
-
-from heat.api.cfn.v1 import stacks
-from heat.api.cfn.v1 import signal
-from heat.common import wsgi
-from heat.openstack.common import log as logging
-
-logger = logging.getLogger(__name__)
-
-
-class API(wsgi.Router):
-
- """
- WSGI router for Heat CloudFormation v1 API requests.
- """
-
- _actions = {
- 'list': 'ListStacks',
- 'create': 'CreateStack',
- 'describe': 'DescribeStacks',
- 'delete': 'DeleteStack',
- 'update': 'UpdateStack',
- 'events_list': 'DescribeStackEvents',
- 'validate_template': 'ValidateTemplate',
- 'get_template': 'GetTemplate',
- 'estimate_template_cost': 'EstimateTemplateCost',
- 'describe_stack_resource': 'DescribeStackResource',
- 'describe_stack_resources': 'DescribeStackResources',
- 'list_stack_resources': 'ListStackResources',
- }
-
- def __init__(self, conf, **local_conf):
- self.conf = conf
- mapper = routes.Mapper()
-
- stacks_resource = stacks.create_resource(conf)
-
- mapper.resource("stack", "stacks", controller=stacks_resource,
- collection={'detail': 'GET'})
-
- def conditions(action):
- api_action = self._actions[action]
-
- def action_match(environ, result):
- req = Request(environ)
- env_action = req.params.get("Action")
- return env_action == api_action
-
- return {'function': action_match}
-
- for action in self._actions:
- mapper.connect("/", controller=stacks_resource, action=action,
- conditions=conditions(action))
-
- mapper.connect("/", controller=stacks_resource, action="index")
-
- # Add controller which handles signals on resources like:
- # waitconditions and alarms.
- # This is not part of the main CFN API spec, hence handle it
- # separately via a different path
- signal_controller = signal.create_resource(conf)
- mapper.connect('/waitcondition/{arn:.*}',
- controller=signal_controller,
- action='update_waitcondition',
- conditions=dict(method=['PUT']))
- mapper.connect('/signal/{arn:.*}',
- controller=signal_controller,
- action='signal',
- conditions=dict(method=['POST']))
-
- super(API, self).__init__(mapper)
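The router above maps every CFN request to the single "/" path and selects the controller method from the `Action` query parameter. A stripped-down sketch of that dispatch idea, with routes/webob replaced by a plain dict (names are illustrative, not the original API):

```python
# Sketch of Action-parameter dispatch: the AWS-style "Action" query
# parameter, not the URL path, picks the controller method.
_actions = {
    'list': 'ListStacks',
    'create': 'CreateStack',
    'describe': 'DescribeStacks',
}


def dispatch(params):
    # Reverse-map the AWS action name to the controller method name;
    # fall back to "index" like the catch-all mapper.connect("/") route.
    env_action = params.get('Action')
    for method, api_action in _actions.items():
        if api_action == env_action:
            return method
    return 'index'


chosen = dispatch({'Action': 'CreateStack'})
```

In the real router each action gets its own `mapper.connect` call with a `conditions` function doing this comparison per request.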
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from heat.common import wsgi
-from heat.rpc import client as rpc_client
-from heat.common import identifier
-from heat.api.aws import exception
-
-
-class SignalController(object):
- def __init__(self, options):
- self.options = options
- self.engine = rpc_client.EngineClient()
-
- def update_waitcondition(self, req, body, arn):
- con = req.context
- identity = identifier.ResourceIdentifier.from_arn(arn)
- try:
- md = self.engine.metadata_update(
- con,
- stack_identity=dict(identity.stack()),
- resource_name=identity.resource_name,
- metadata=body)
- except Exception as ex:
- return exception.map_remote_error(ex)
-
- return {'resource': identity.resource_name, 'metadata': md}
-
- def signal(self, req, body, arn):
- con = req.context
- identity = identifier.ResourceIdentifier.from_arn(arn)
- try:
- md = self.engine.resource_signal(
- con,
- stack_identity=dict(identity.stack()),
- resource_name=identity.resource_name,
- details=body)
- except Exception as ex:
- return exception.map_remote_error(ex)
-
-
-def create_resource(options):
- """
- Signal resource factory method.
- """
- deserializer = wsgi.JSONRequestDeserializer()
- return wsgi.Resource(SignalController(options), deserializer)
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-"""
-Stack endpoint for Heat CloudFormation v1 API.
-"""
-
-import json
-import socket
-
-from heat.api.aws import exception
-from heat.api.aws import utils as api_utils
-from heat.common import wsgi
-from heat.common import exception as heat_exception
-from heat.rpc import client as rpc_client
-from heat.common import template_format
-from heat.rpc import api as engine_api
-from heat.common import identifier
-from heat.common import urlfetch
-from heat.common import policy
-
-from heat.openstack.common import log as logging
-from heat.openstack.common.gettextutils import _
-
-logger = logging.getLogger(__name__)
-
-
-class StackController(object):
-
- """
- WSGI controller for stacks resource in Heat CloudFormation v1 API
- Implements the API actions
- """
-
- def __init__(self, options):
- self.options = options
- self.engine_rpcapi = rpc_client.EngineClient()
- self.policy = policy.Enforcer(scope='cloudformation')
-
- def _enforce(self, req, action):
- """Authorize an action against the policy.json."""
- try:
- self.policy.enforce(req.context, action, {})
- except heat_exception.Forbidden:
- raise exception.HeatAccessDeniedError(
- detail="Action %s not allowed for user" % action)
- except Exception as ex:
- # We expect policy.enforce to either pass or raise Forbidden
- # however, if anything else happens, we want to raise
- # HeatInternalFailureError, failure to do this results in
- # the user getting a big stacktrace spew as an API response
- raise exception.HeatInternalFailureError(
- detail="Error authorizing action %s" % action)
-
- @staticmethod
- def _id_format(resp):
- """
- Format the StackId field in the response as an ARN, and process other
- IDs into the correct format.
- """
- if 'StackId' in resp:
- identity = identifier.HeatIdentifier(**resp['StackId'])
- resp['StackId'] = identity.arn()
- if 'EventId' in resp:
- identity = identifier.EventIdentifier(**resp['EventId'])
- resp['EventId'] = identity.event_id
- return resp
-
- @staticmethod
- def _extract_user_params(params):
- """
- Extract a dictionary of user input parameters for the stack
-
- In the AWS API parameters, each user parameter appears as two key-value
- pairs with keys of the form below:
-
- Parameters.member.1.ParameterKey
- Parameters.member.1.ParameterValue
- """
- return api_utils.extract_param_pairs(params,
- prefix='Parameters',
- keyname='ParameterKey',
- valuename='ParameterValue')
-
- def _get_identity(self, con, stack_name):
- """
- Generate a stack identifier from the given stack name or ARN.
-
- In the case of a stack name, the identifier will be looked up in the
- engine over RPC.
- """
- try:
- return dict(identifier.HeatIdentifier.from_arn(stack_name))
- except ValueError:
- return self.engine_rpcapi.identify_stack(con, stack_name)
-
- def list(self, req):
- """
- Implements ListStacks API action
- Lists summary information for all stacks
- """
- self._enforce(req, 'ListStacks')
-
- def format_stack_summary(s):
- """
- Reformat engine output into the AWS "StackSummary" format
- """
- # Map the engine-api format to the AWS StackSummary datatype
- keymap = {
- engine_api.STACK_CREATION_TIME: 'CreationTime',
- engine_api.STACK_UPDATED_TIME: 'LastUpdatedTime',
- engine_api.STACK_ID: 'StackId',
- engine_api.STACK_NAME: 'StackName',
- engine_api.STACK_STATUS_DATA: 'StackStatusReason',
- engine_api.STACK_TMPL_DESCRIPTION: 'TemplateDescription',
- }
-
- result = api_utils.reformat_dict_keys(keymap, s)
-
- action = s[engine_api.STACK_ACTION]
- status = s[engine_api.STACK_STATUS]
- result['StackStatus'] = '_'.join((action, status))
-
- # AWS docs indicate DeletionTime is omitted for current stacks
- # This is still TODO(unknown) in the engine; we don't keep data
- # for stacks after they are deleted
- if engine_api.STACK_DELETION_TIME in s:
- result['DeletionTime'] = s[engine_api.STACK_DELETION_TIME]
-
- return self._id_format(result)
-
- con = req.context
- try:
- stack_list = self.engine_rpcapi.list_stacks(con)
- except Exception as ex:
- return exception.map_remote_error(ex)
-
- res = {'StackSummaries': [format_stack_summary(s) for s in stack_list]}
-
- return api_utils.format_response('ListStacks', res)
-
- def describe(self, req):
- """
- Implements DescribeStacks API action
- Gets detailed information for a stack (or all stacks)
- """
- self._enforce(req, 'DescribeStacks')
-
- def format_stack_outputs(o):
- keymap = {
- engine_api.OUTPUT_DESCRIPTION: 'Description',
- engine_api.OUTPUT_KEY: 'OutputKey',
- engine_api.OUTPUT_VALUE: 'OutputValue',
- }
-
- def replacecolon(d):
- return dict((k.replace(':', '.'), v)
- for k, v in d.items())
-
- def transform(attrs):
- """
- Recursively replace all : with . in dict keys
- so that they are not interpreted as xml namespaces.
- """
- new = replacecolon(attrs)
- for key, value in new.items():
- if isinstance(value, dict):
- new[key] = transform(value)
- return new
-
- return api_utils.reformat_dict_keys(keymap, transform(o))
-
- def format_stack(s):
- """
- Reformat engine output into the AWS "Stack" format
- """
- keymap = {
- engine_api.STACK_CAPABILITIES: 'Capabilities',
- engine_api.STACK_CREATION_TIME: 'CreationTime',
- engine_api.STACK_DESCRIPTION: 'Description',
- engine_api.STACK_DISABLE_ROLLBACK: 'DisableRollback',
- engine_api.STACK_UPDATED_TIME: 'LastUpdatedTime',
- engine_api.STACK_NOTIFICATION_TOPICS: 'NotificationARNs',
- engine_api.STACK_PARAMETERS: 'Parameters',
- engine_api.STACK_ID: 'StackId',
- engine_api.STACK_NAME: 'StackName',
- engine_api.STACK_STATUS_DATA: 'StackStatusReason',
- engine_api.STACK_TIMEOUT: 'TimeoutInMinutes',
- }
-
- result = api_utils.reformat_dict_keys(keymap, s)
-
- action = s[engine_api.STACK_ACTION]
- status = s[engine_api.STACK_STATUS]
- result['StackStatus'] = '_'.join((action, status))
-
- # Reformat outputs, these are handled separately as they are
- # only present in the engine output for a completely created
- # stack
- result['Outputs'] = []
- if engine_api.STACK_OUTPUTS in s:
- for o in s[engine_api.STACK_OUTPUTS]:
- result['Outputs'].append(format_stack_outputs(o))
-
- # Reformat the Parameters dict into the AWS API format: a
- # list-of-dict with "ParameterKey": key, "ParameterValue": value
- # entries
- result['Parameters'] = [{'ParameterKey': k,
- 'ParameterValue': v}
- for (k, v) in result['Parameters'].items()]
-
- return self._id_format(result)
-
- con = req.context
- # If no StackName parameter is passed, we pass None into the engine;
- # this returns results for all stacks (visible to this user), which
- # is the behavior described in the AWS DescribeStacks API docs
- try:
- if 'StackName' in req.params:
- identity = self._get_identity(con, req.params['StackName'])
- else:
- identity = None
-
- stack_list = self.engine_rpcapi.show_stack(con, identity)
-
- except Exception as ex:
- return exception.map_remote_error(ex)
-
- res = {'Stacks': [format_stack(s) for s in stack_list]}
-
- return api_utils.format_response('DescribeStacks', res)
-
- def _get_template(self, req):
- """
- Get template file contents, either from local file or URL
- """
- if 'TemplateBody' in req.params:
- logger.debug('TemplateBody ...')
- return req.params['TemplateBody']
- elif 'TemplateUrl' in req.params:
- url = req.params['TemplateUrl']
- logger.debug('TemplateUrl %s' % url)
- try:
- return urlfetch.get(url)
- except IOError as exc:
- msg = _('Failed to fetch template: %s') % str(exc)
- raise exception.HeatInvalidParameterValueError(detail=msg)
-
- return None
-
- CREATE_OR_UPDATE_ACTION = (
- CREATE_STACK, UPDATE_STACK,
- ) = (
- "CreateStack", "UpdateStack",
- )
-
- def create(self, req):
- return self.create_or_update(req, self.CREATE_STACK)
-
- def update(self, req):
- return self.create_or_update(req, self.UPDATE_STACK)
-
- def create_or_update(self, req, action=None):
- """
- Implements CreateStack and UpdateStack API actions
- Create or update stack as defined in template file
- """
- def extract_args(params):
- """
- Extract request parameters/arguments and reformat them to match
- the engine API. FIXME: we currently only support a subset of
- the AWS defined parameters (both here and in the engine)
- """
- # TODO(shardy) : Capabilities, NotificationARNs
- keymap = {'TimeoutInMinutes': engine_api.PARAM_TIMEOUT,
- 'DisableRollback': engine_api.PARAM_DISABLE_ROLLBACK}
-
- if 'DisableRollback' in params and 'OnFailure' in params:
- msg = _('DisableRollback and OnFailure '
- 'may not be used together')
- raise exception.HeatInvalidParameterCombinationError(
- detail=msg)
-
- result = {}
- for k in keymap:
- if k in params:
- result[keymap[k]] = params[k]
-
- if 'OnFailure' in params:
- value = params['OnFailure']
- if value == 'DO_NOTHING':
- result[engine_api.PARAM_DISABLE_ROLLBACK] = 'true'
- elif value in ('ROLLBACK', 'DELETE'):
- result[engine_api.PARAM_DISABLE_ROLLBACK] = 'false'
-
- return result
-
- if action not in self.CREATE_OR_UPDATE_ACTION:
- msg = _("Unexpected action %(action)s") % ({'action': action})
- # This should not happen, so return HeatInternalFailureError
- return exception.HeatInternalFailureError(detail=msg)
-
- engine_action = {self.CREATE_STACK: self.engine_rpcapi.create_stack,
- self.UPDATE_STACK: self.engine_rpcapi.update_stack}
-
- con = req.context
-
- # Extract the stack input parameters
- stack_parms = self._extract_user_params(req.params)
-
- # Extract any additional arguments ("Request Parameters")
- create_args = extract_args(req.params)
-
- try:
- templ = self._get_template(req)
- except socket.gaierror:
- msg = _('Invalid Template URL')
- return exception.HeatInvalidParameterValueError(detail=msg)
-
- if templ is None:
- msg = _("Neither TemplateBody nor TemplateUrl was given.")
- return exception.HeatMissingParameterError(detail=msg)
-
- try:
- stack = template_format.parse(templ)
- except ValueError:
- msg = _("The Template must be a JSON or YAML document.")
- return exception.HeatInvalidParameterValueError(detail=msg)
-
- args = {'template': stack,
- 'params': stack_parms,
- 'files': {},
- 'args': create_args}
- try:
- stack_name = req.params['StackName']
- if action == self.CREATE_STACK:
- args['stack_name'] = stack_name
- else:
- args['stack_identity'] = self._get_identity(con, stack_name)
-
- result = engine_action[action](con, **args)
- except Exception as ex:
- return exception.map_remote_error(ex)
-
- try:
- identity = identifier.HeatIdentifier(**result)
- except (ValueError, TypeError):
- response = result
- else:
- response = {'StackId': identity.arn()}
-
- return api_utils.format_response(action, response)
-
- def get_template(self, req):
- """
- Implements the GetTemplate API action
- Get the template body for an existing stack
- """
- self._enforce(req, 'GetTemplate')
-
- con = req.context
- try:
- identity = self._get_identity(con, req.params['StackName'])
- templ = self.engine_rpcapi.get_template(con, identity)
- except Exception as ex:
- return exception.map_remote_error(ex)
-
- if templ is None:
-            msg = _('stack not found')
- return exception.HeatInvalidParameterValueError(detail=msg)
-
- return api_utils.format_response('GetTemplate',
- {'TemplateBody': templ})
-
- def estimate_template_cost(self, req):
- """
- Implements the EstimateTemplateCost API action
- Get the estimated monthly cost of a template
- """
- self._enforce(req, 'EstimateTemplateCost')
-
-        return api_utils.format_response(
-            'EstimateTemplateCost',
-            {'Url': 'http://en.wikipedia.org/wiki/Gratis'})
-
- def validate_template(self, req):
- """
- Implements the ValidateTemplate API action
- Validates the specified template
- """
- self._enforce(req, 'ValidateTemplate')
-
- con = req.context
- try:
- templ = self._get_template(req)
- except socket.gaierror:
- msg = _('Invalid Template URL')
- return exception.HeatInvalidParameterValueError(detail=msg)
- if templ is None:
-            msg = _("Neither TemplateBody nor TemplateUrl was given.")
- return exception.HeatMissingParameterError(detail=msg)
-
- try:
- template = template_format.parse(templ)
- except ValueError:
- msg = _("The Template must be a JSON or YAML document.")
- return exception.HeatInvalidParameterValueError(detail=msg)
-
- logger.info('validate_template')
-
- def format_validate_parameter(key, value):
- """
- Reformat engine output into the AWS "ValidateTemplate" format
- """
-
- return {
- 'ParameterKey': key,
- 'DefaultValue': value.get(engine_api.PARAM_DEFAULT, ''),
- 'Description': value.get(engine_api.PARAM_DESCRIPTION, ''),
- 'NoEcho': value.get(engine_api.PARAM_NO_ECHO, 'false')
- }
-
- try:
- res = self.engine_rpcapi.validate_template(con, template)
- if 'Error' in res:
- return api_utils.format_response('ValidateTemplate',
- res['Error'])
-
- res['Parameters'] = [format_validate_parameter(k, v)
- for k, v in res['Parameters'].items()]
- return api_utils.format_response('ValidateTemplate', res)
- except Exception as ex:
- return exception.map_remote_error(ex)
-
- def delete(self, req):
- """
- Implements the DeleteStack API action
- Deletes the specified stack
- """
- self._enforce(req, 'DeleteStack')
-
- con = req.context
- try:
- identity = self._get_identity(con, req.params['StackName'])
- res = self.engine_rpcapi.delete_stack(con, identity, cast=False)
-
- except Exception as ex:
- return exception.map_remote_error(ex)
-
- if res is None:
- return api_utils.format_response('DeleteStack', '')
- else:
- return api_utils.format_response('DeleteStack', res['Error'])
-
- def events_list(self, req):
- """
- Implements the DescribeStackEvents API action
- Returns events related to a specified stack (or all stacks)
- """
- self._enforce(req, 'DescribeStackEvents')
-
- def format_stack_event(e):
- """
- Reformat engine output into the AWS "StackEvent" format
- """
- keymap = {
- engine_api.EVENT_ID: 'EventId',
- engine_api.EVENT_RES_NAME: 'LogicalResourceId',
- engine_api.EVENT_RES_PHYSICAL_ID: 'PhysicalResourceId',
- engine_api.EVENT_RES_PROPERTIES: 'ResourceProperties',
- engine_api.EVENT_RES_STATUS_DATA: 'ResourceStatusReason',
- engine_api.EVENT_RES_TYPE: 'ResourceType',
- engine_api.EVENT_STACK_ID: 'StackId',
- engine_api.EVENT_STACK_NAME: 'StackName',
- engine_api.EVENT_TIMESTAMP: 'Timestamp',
- }
-
- result = api_utils.reformat_dict_keys(keymap, e)
- action = e[engine_api.EVENT_RES_ACTION]
- status = e[engine_api.EVENT_RES_STATUS]
- result['ResourceStatus'] = '_'.join((action, status))
-            result['ResourceProperties'] = json.dumps(
-                result['ResourceProperties'])
-
- return self._id_format(result)
-
- con = req.context
- stack_name = req.params.get('StackName', None)
- try:
- identity = stack_name and self._get_identity(con, stack_name)
- events = self.engine_rpcapi.list_events(con, identity)
- except Exception as ex:
- return exception.map_remote_error(ex)
-
- result = [format_stack_event(e) for e in events]
-
- return api_utils.format_response('DescribeStackEvents',
- {'StackEvents': result})
-
- @staticmethod
- def _resource_status(res):
- action = res[engine_api.RES_ACTION]
- status = res[engine_api.RES_STATUS]
- return '_'.join((action, status))
-
- def describe_stack_resource(self, req):
- """
- Implements the DescribeStackResource API action
- Return the details of the given resource belonging to the given stack.
- """
- self._enforce(req, 'DescribeStackResource')
-
- def format_resource_detail(r):
- """
- Reformat engine output into the AWS "StackResourceDetail" format
- """
- keymap = {
- engine_api.RES_DESCRIPTION: 'Description',
- engine_api.RES_UPDATED_TIME: 'LastUpdatedTimestamp',
- engine_api.RES_NAME: 'LogicalResourceId',
- engine_api.RES_METADATA: 'Metadata',
- engine_api.RES_PHYSICAL_ID: 'PhysicalResourceId',
- engine_api.RES_STATUS_DATA: 'ResourceStatusReason',
- engine_api.RES_TYPE: 'ResourceType',
- engine_api.RES_STACK_ID: 'StackId',
- engine_api.RES_STACK_NAME: 'StackName',
- }
-
- result = api_utils.reformat_dict_keys(keymap, r)
-
- result['ResourceStatus'] = self._resource_status(r)
-
- return self._id_format(result)
-
- con = req.context
-
- try:
- identity = self._get_identity(con, req.params['StackName'])
- resource_details = self.engine_rpcapi.describe_stack_resource(
- con,
- stack_identity=identity,
- resource_name=req.params.get('LogicalResourceId'))
-
- except Exception as ex:
- return exception.map_remote_error(ex)
-
- result = format_resource_detail(resource_details)
-
- return api_utils.format_response('DescribeStackResource',
- {'StackResourceDetail': result})
-
- def describe_stack_resources(self, req):
- """
- Implements the DescribeStackResources API action
- Return details of resources specified by the parameters.
-
- `StackName`: returns all resources belonging to the stack
- `PhysicalResourceId`: returns all resources belonging to the stack this
- resource is associated with.
-
- Only one of the parameters may be specified.
-
- Optional parameter:
-
- `LogicalResourceId`: filter the resources list by the logical resource
- id.
- """
- self._enforce(req, 'DescribeStackResources')
-
- def format_stack_resource(r):
- """
- Reformat engine output into the AWS "StackResource" format
- """
- keymap = {
- engine_api.RES_DESCRIPTION: 'Description',
- engine_api.RES_NAME: 'LogicalResourceId',
- engine_api.RES_PHYSICAL_ID: 'PhysicalResourceId',
- engine_api.RES_STATUS_DATA: 'ResourceStatusReason',
- engine_api.RES_TYPE: 'ResourceType',
- engine_api.RES_STACK_ID: 'StackId',
- engine_api.RES_STACK_NAME: 'StackName',
- engine_api.RES_UPDATED_TIME: 'Timestamp',
- }
-
- result = api_utils.reformat_dict_keys(keymap, r)
-
- result['ResourceStatus'] = self._resource_status(r)
-
- return self._id_format(result)
-
- con = req.context
- stack_name = req.params.get('StackName')
- physical_resource_id = req.params.get('PhysicalResourceId')
- if stack_name and physical_resource_id:
- msg = 'Use `StackName` or `PhysicalResourceId` but not both'
- return exception.HeatInvalidParameterCombinationError(detail=msg)
-
- try:
- if stack_name is not None:
- identity = self._get_identity(con, stack_name)
- else:
- identity = self.engine_rpcapi.find_physical_resource(
- con,
- physical_resource_id=physical_resource_id)
- resources = self.engine_rpcapi.describe_stack_resources(
- con,
- stack_identity=identity,
- resource_name=req.params.get('LogicalResourceId'))
-
- except Exception as ex:
- return exception.map_remote_error(ex)
-
- result = [format_stack_resource(r) for r in resources]
-
- return api_utils.format_response('DescribeStackResources',
- {'StackResources': result})
-
- def list_stack_resources(self, req):
- """
- Implements the ListStackResources API action
- Return summary of the resources belonging to the specified stack.
- """
- self._enforce(req, 'ListStackResources')
-
- def format_resource_summary(r):
- """
- Reformat engine output into the AWS "StackResourceSummary" format
- """
- keymap = {
- engine_api.RES_UPDATED_TIME: 'LastUpdatedTimestamp',
- engine_api.RES_NAME: 'LogicalResourceId',
- engine_api.RES_PHYSICAL_ID: 'PhysicalResourceId',
- engine_api.RES_STATUS_DATA: 'ResourceStatusReason',
- engine_api.RES_TYPE: 'ResourceType',
- }
-
- result = api_utils.reformat_dict_keys(keymap, r)
-
- result['ResourceStatus'] = self._resource_status(r)
-
- return result
-
- con = req.context
-
- try:
- identity = self._get_identity(con, req.params['StackName'])
- resources = self.engine_rpcapi.list_stack_resources(
- con,
- stack_identity=identity)
- except Exception as ex:
- return exception.map_remote_error(ex)
-
- summaries = [format_resource_summary(r) for r in resources]
-
- return api_utils.format_response('ListStackResources',
- {'StackResourceSummaries': summaries})
-
-
-def create_resource(options):
- """
- Stacks resource factory method.
- """
- deserializer = wsgi.JSONRequestDeserializer()
- return wsgi.Resource(StackController(options), deserializer)
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-"""
-Controller that returns information on the heat API versions
-"""
-
-import httplib
-import json
-
-import webob.dec
-
-
-class Controller(object):
-
- """
- A controller that produces information on the heat API versions.
- """
-
- def __init__(self, conf):
- self.conf = conf
-
- @webob.dec.wsgify
- def __call__(self, req):
- """Respond to a request for all OpenStack API versions."""
- version_objs = [
- {
- "id": "v1.0",
- "status": "CURRENT",
- "links": [
- {
- "rel": "self",
- "href": self.get_href(req)
- }]
- }]
-
- body = json.dumps(dict(versions=version_objs))
-
- response = webob.Response(request=req,
- status=httplib.MULTIPLE_CHOICES,
- content_type='application/json')
- response.body = body
-
- return response
-
- def get_href(self, req):
- return "%s/v1/" % req.host_url
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import routes
-
-from heat.openstack.common import gettextutils
-
-gettextutils.install('heat')
-
-from heat.api.cloudwatch import watch
-from heat.common import wsgi
-
-from webob import Request
-from heat.api.middleware.version_negotiation import VersionNegotiationFilter
-from heat.api.cloudwatch import versions
-
-
-from heat.openstack.common import log as logging
-
-logger = logging.getLogger(__name__)
-
-
-class API(wsgi.Router):
-
- """
- WSGI router for Heat CloudWatch API
- """
-
- _actions = {
- 'delete_alarms': 'DeleteAlarms',
- 'describe_alarm_history': 'DescribeAlarmHistory',
- 'describe_alarms': 'DescribeAlarms',
- 'describe_alarms_for_metric': 'DescribeAlarmsForMetric',
- 'disable_alarm_actions': 'DisableAlarmActions',
- 'enable_alarm_actions': 'EnableAlarmActions',
- 'get_metric_statistics': 'GetMetricStatistics',
- 'list_metrics': 'ListMetrics',
- 'put_metric_alarm': 'PutMetricAlarm',
- 'put_metric_data': 'PutMetricData',
- 'set_alarm_state': 'SetAlarmState',
- }
-
- def __init__(self, conf, **local_conf):
- self.conf = conf
- mapper = routes.Mapper()
- controller_resource = watch.create_resource(conf)
-
- def conditions(action):
- api_action = self._actions[action]
-
- def action_match(environ, result):
- req = Request(environ)
- env_action = req.params.get("Action")
- return env_action == api_action
-
- return {'function': action_match}
-
- for action in self._actions:
- mapper.connect("/", controller=controller_resource, action=action,
- conditions=conditions(action))
-
- mapper.connect("/", controller=controller_resource, action="index")
-
- super(API, self).__init__(mapper)
-
-
-def version_negotiation_filter(app, conf, **local_conf):
- return VersionNegotiationFilter(versions.Controller, app,
- conf, **local_conf)
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-"""
-Controller that returns information on the heat API versions
-"""
-
-import httplib
-import json
-
-import webob.dec
-
-
-class Controller(object):
-
- """
- A controller that produces information on the heat API versions.
- """
-
- def __init__(self, conf):
- self.conf = conf
-
- @webob.dec.wsgify
- def __call__(self, req):
- """Respond to a request for all OpenStack API versions."""
- version_objs = [
- {
- "id": "v1.0",
- "status": "CURRENT",
- "links": [
- {
- "rel": "self",
- "href": self.get_href(req)
- }]
- }]
-
- body = json.dumps(dict(versions=version_objs))
-
- response = webob.Response(request=req,
- status=httplib.MULTIPLE_CHOICES,
- content_type='application/json')
- response.body = body
-
- return response
-
- def get_href(self, req):
- return "%s/v1/" % req.host_url
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-"""
-endpoint for heat AWS-compatible CloudWatch API
-"""
-from heat.api.aws import exception
-from heat.api.aws import utils as api_utils
-from heat.common import wsgi
-from heat.common import policy
-from heat.common import exception as heat_exception
-from heat.rpc import client as rpc_client
-from heat.rpc import api as engine_api
-
-import heat.openstack.common.rpc.common as rpc_common
-from heat.openstack.common import log as logging
-
-logger = logging.getLogger(__name__)
-
-
-class WatchController(object):
-
- """
- WSGI controller for CloudWatch resource in heat API
- Implements the API actions
- """
-
- def __init__(self, options):
- self.options = options
- self.engine_rpcapi = rpc_client.EngineClient()
- self.policy = policy.Enforcer(scope='cloudwatch')
-
- def _enforce(self, req, action):
- """Authorize an action against the policy.json."""
- try:
- self.policy.enforce(req.context, action, {})
- except heat_exception.Forbidden:
-            raise exception.HeatAccessDeniedError(
-                "Action %s not allowed for user" % action)
- except Exception as ex:
- # We expect policy.enforce to either pass or raise Forbidden
- # however, if anything else happens, we want to raise
- # HeatInternalFailureError, failure to do this results in
- # the user getting a big stacktrace spew as an API response
- raise exception.HeatInternalFailureError("Error authorizing " +
- "action %s" % action)
-
- @staticmethod
- def _reformat_dimensions(dims):
- '''
- Reformat dimensions list into AWS API format
- Parameter dims is a list of dicts
- '''
- newdims = []
-        for d in dims:
-            for key, value in d.items():
-                newdims.append({'Name': key, 'Value': value})
- return newdims
-
- def delete_alarms(self, req):
- """
- Implements DeleteAlarms API action
- """
- self._enforce(req, 'DeleteAlarms')
- return exception.HeatAPINotImplementedError()
-
- def describe_alarm_history(self, req):
- """
- Implements DescribeAlarmHistory API action
- """
- self._enforce(req, 'DescribeAlarmHistory')
- return exception.HeatAPINotImplementedError()
-
- def describe_alarms(self, req):
- """
- Implements DescribeAlarms API action
- """
- self._enforce(req, 'DescribeAlarms')
-
- def format_metric_alarm(a):
- """
- Reformat engine output into the AWS "MetricAlarm" format
- """
- keymap = {
- engine_api.WATCH_ACTIONS_ENABLED: 'ActionsEnabled',
- engine_api.WATCH_ALARM_ACTIONS: 'AlarmActions',
- engine_api.WATCH_TOPIC: 'AlarmArn',
- engine_api.WATCH_UPDATED_TIME:
- 'AlarmConfigurationUpdatedTimestamp',
- engine_api.WATCH_DESCRIPTION: 'AlarmDescription',
- engine_api.WATCH_NAME: 'AlarmName',
- engine_api.WATCH_COMPARISON: 'ComparisonOperator',
- engine_api.WATCH_DIMENSIONS: 'Dimensions',
- engine_api.WATCH_PERIODS: 'EvaluationPeriods',
- engine_api.WATCH_INSUFFICIENT_ACTIONS:
- 'InsufficientDataActions',
- engine_api.WATCH_METRIC_NAME: 'MetricName',
- engine_api.WATCH_NAMESPACE: 'Namespace',
- engine_api.WATCH_OK_ACTIONS: 'OKActions',
- engine_api.WATCH_PERIOD: 'Period',
- engine_api.WATCH_STATE_REASON: 'StateReason',
- engine_api.WATCH_STATE_REASON_DATA: 'StateReasonData',
- engine_api.WATCH_STATE_UPDATED_TIME: 'StateUpdatedTimestamp',
- engine_api.WATCH_STATE_VALUE: 'StateValue',
- engine_api.WATCH_STATISTIC: 'Statistic',
- engine_api.WATCH_THRESHOLD: 'Threshold',
- engine_api.WATCH_UNIT: 'Unit'}
-
- # AWS doesn't return StackId in the main MetricAlarm
- # structure, so we add StackId as a dimension to all responses
-            a[engine_api.WATCH_DIMENSIONS].append(
-                {'StackId': a[engine_api.WATCH_STACK_ID]})
-
- # Reformat dimensions list into AWS API format
- a[engine_api.WATCH_DIMENSIONS] = self._reformat_dimensions(
- a[engine_api.WATCH_DIMENSIONS])
-
- return api_utils.reformat_dict_keys(keymap, a)
-
- con = req.context
- parms = dict(req.params)
-        name = parms.get('AlarmName')
-
- try:
- watch_list = self.engine_rpcapi.show_watch(con, watch_name=name)
- except rpc_common.RemoteError as ex:
- return exception.map_remote_error(ex)
-
- res = {'MetricAlarms': [format_metric_alarm(a)
- for a in watch_list]}
-
- result = api_utils.format_response("DescribeAlarms", res)
- return result
-
- def describe_alarms_for_metric(self, req):
- """
- Implements DescribeAlarmsForMetric API action
- """
- self._enforce(req, 'DescribeAlarmsForMetric')
- return exception.HeatAPINotImplementedError()
-
- def disable_alarm_actions(self, req):
- """
- Implements DisableAlarmActions API action
- """
- self._enforce(req, 'DisableAlarmActions')
- return exception.HeatAPINotImplementedError()
-
- def enable_alarm_actions(self, req):
- """
- Implements EnableAlarmActions API action
- """
- self._enforce(req, 'EnableAlarmActions')
- return exception.HeatAPINotImplementedError()
-
- def get_metric_statistics(self, req):
- """
- Implements GetMetricStatistics API action
- """
- self._enforce(req, 'GetMetricStatistics')
- return exception.HeatAPINotImplementedError()
-
- def list_metrics(self, req):
- """
- Implements ListMetrics API action
- Lists metric datapoints associated with a particular alarm,
- or all alarms if none specified
- """
- self._enforce(req, 'ListMetrics')
-
- def format_metric_data(d, fil={}):
- """
- Reformat engine output into the AWS "Metric" format
- Takes an optional filter dict, which is traversed
- so a metric dict is only returned if all keys match
- the filter dict
- """
- dimensions = [
- {'AlarmName': d[engine_api.WATCH_DATA_ALARM]},
- {'Timestamp': d[engine_api.WATCH_DATA_TIME]}
- ]
- for key in d[engine_api.WATCH_DATA]:
- dimensions.append({key: d[engine_api.WATCH_DATA][key]})
-
- newdims = self._reformat_dimensions(dimensions)
-
- result = {
- 'MetricName': d[engine_api.WATCH_DATA_METRIC],
- 'Dimensions': newdims,
- 'Namespace': d[engine_api.WATCH_DATA_NAMESPACE],
- }
-
- for f in fil:
- try:
- value = result[f]
- if value != fil[f]:
- # Filter criteria not met, return None
- return
- except KeyError:
- logger.warning("Invalid filter key %s, ignoring" % f)
-
- return result
-
- con = req.context
- parms = dict(req.params)
- # FIXME : Don't yet handle filtering by Dimensions
- filter_result = dict((k, v) for (k, v) in parms.iteritems() if k in
- ("MetricName", "Namespace"))
- logger.debug("filter parameters : %s" % filter_result)
-
- try:
- # Engine does not currently support query by namespace/metric
- # so we pass None/None and do any filtering locally
- null_kwargs = {'metric_namespace': None,
- 'metric_name': None}
- watch_data = self.engine_rpcapi.show_watch_metric(con,
- **null_kwargs)
- except rpc_common.RemoteError as ex:
- return exception.map_remote_error(ex)
-
- res = {'Metrics': []}
- for d in watch_data:
- metric = format_metric_data(d, filter_result)
- if metric:
- res['Metrics'].append(metric)
-
- result = api_utils.format_response("ListMetrics", res)
- return result
-
- def put_metric_alarm(self, req):
- """
- Implements PutMetricAlarm API action
- """
- self._enforce(req, 'PutMetricAlarm')
- return exception.HeatAPINotImplementedError()
-
- def put_metric_data(self, req):
- """
- Implements PutMetricData API action
- """
- self._enforce(req, 'PutMetricData')
-
- con = req.context
- parms = dict(req.params)
- namespace = api_utils.get_param_value(parms, 'Namespace')
-
- # Extract data from the request so we can pass it to the engine
- # We have to do this in two passes, because the AWS
- # query format nests the dimensions within the MetricData
- # query-parameter-list (see AWS PutMetricData docs)
- # extract_param_list gives a list-of-dict, which we then
- # need to process (each dict) for dimensions
- metric_data = api_utils.extract_param_list(parms, prefix='MetricData')
-        if not metric_data:
- logger.error("Request does not contain required MetricData")
- return exception.HeatMissingParameterError("MetricData list")
-
- watch_name = None
- dimensions = []
- for p in metric_data:
- dimension = api_utils.extract_param_pairs(p,
- prefix='Dimensions',
- keyname='Name',
- valuename='Value')
- if 'AlarmName' in dimension:
- watch_name = dimension['AlarmName']
- else:
- dimensions.append(dimension)
-
- # Extract the required data from the metric_data
- # and format dict to pass to engine
- data = {'Namespace': namespace,
- api_utils.get_param_value(metric_data[0], 'MetricName'): {
- 'Unit': api_utils.get_param_value(metric_data[0], 'Unit'),
- 'Value': api_utils.get_param_value(metric_data[0],
- 'Value'),
- 'Dimensions': dimensions}}
-
- try:
- self.engine_rpcapi.create_watch_data(con, watch_name, data)
- except rpc_common.RemoteError as ex:
- return exception.map_remote_error(ex)
-
- result = {'ResponseMetadata': None}
- return api_utils.format_response("PutMetricData", result)
-
- def set_alarm_state(self, req):
- """
- Implements SetAlarmState API action
- """
- self._enforce(req, 'SetAlarmState')
-
- # Map from AWS state names to those used in the engine
- state_map = {'OK': engine_api.WATCH_STATE_OK,
- 'ALARM': engine_api.WATCH_STATE_ALARM,
- 'INSUFFICIENT_DATA': engine_api.WATCH_STATE_NODATA}
-
- con = req.context
- parms = dict(req.params)
-
- # Get mandatory parameters
- name = api_utils.get_param_value(parms, 'AlarmName')
- state = api_utils.get_param_value(parms, 'StateValue')
-
- if state not in state_map:
- logger.error("Invalid state %s, expecting one of %s" %
- (state, state_map.keys()))
- return exception.HeatInvalidParameterValueError("Invalid state %s"
- % state)
-
- # Check for optional parameters
- # FIXME : We don't actually do anything with these in the engine yet..
-        state_reason = parms.get('StateReason')
-        state_reason_data = parms.get('StateReasonData')
-
- logger.debug("setting %s to %s" % (name, state_map[state]))
- try:
- self.engine_rpcapi.set_watch_state(con, watch_name=name,
- state=state_map[state])
- except rpc_common.RemoteError as ex:
- return exception.map_remote_error(ex)
-
- return api_utils.format_response("SetAlarmState", "")
-
-
-def create_resource(options):
- """
- Watch resource factory method.
- """
- deserializer = wsgi.JSONRequestDeserializer()
- return wsgi.Resource(WatchController(options), deserializer)
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
+++ /dev/null
-# -*- encoding: utf-8 -*-
-#
-# Copyright © 2013 Unitedstack Inc.
-#
-# Author: Jianing YANG (jianingy@unitedstack.com)
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-"""A middleware that turns exceptions into parsable strings. Inspired by
-Cinder's faultwrapper
-"""
-
-import traceback
-import webob
-from oslo.config import cfg
-
-cfg.CONF.import_opt('debug', 'heat.openstack.common.log')
-
-from heat.common import exception
-from heat.openstack.common import log as logging
-import heat.openstack.common.rpc.common as rpc_common
-
-from heat.common import wsgi
-
-logger = logging.getLogger(__name__)
-
-
-class Fault(object):
-
- def __init__(self, error):
- self.error = error
-
- @webob.dec.wsgify(RequestClass=wsgi.Request)
- def __call__(self, req):
- if req.content_type == 'application/xml':
- serializer = wsgi.XMLResponseSerializer()
- else:
- serializer = wsgi.JSONResponseSerializer()
- resp = webob.Response(request=req)
- default_webob_exc = webob.exc.HTTPInternalServerError()
- resp.status_code = self.error.get('code', default_webob_exc.code)
- serializer.default(resp, self.error)
- return resp
-
-
-class FaultWrapper(wsgi.Middleware):
- """Replace error body with something the client can parse."""
-
- error_map = {
- 'AttributeError': webob.exc.HTTPBadRequest,
- 'ValueError': webob.exc.HTTPBadRequest,
- 'StackNotFound': webob.exc.HTTPNotFound,
- 'ResourceNotFound': webob.exc.HTTPNotFound,
- 'ResourceTypeNotFound': webob.exc.HTTPNotFound,
- 'ResourceNotAvailable': webob.exc.HTTPNotFound,
- 'PhysicalResourceNotFound': webob.exc.HTTPNotFound,
- 'InvalidTenant': webob.exc.HTTPForbidden,
- 'StackExists': webob.exc.HTTPConflict,
- 'StackValidationFailed': webob.exc.HTTPBadRequest,
- 'InvalidTemplateReference': webob.exc.HTTPBadRequest,
- 'UnknownUserParameter': webob.exc.HTTPBadRequest,
- 'RevertFailed': webob.exc.HTTPInternalServerError,
- 'ServerBuildFailed': webob.exc.HTTPInternalServerError,
- 'NotSupported': webob.exc.HTTPBadRequest,
- 'MissingCredentialError': webob.exc.HTTPBadRequest,
- 'UserParameterMissing': webob.exc.HTTPBadRequest,
- 'RequestLimitExceeded': webob.exc.HTTPBadRequest,
- }
-
- def _error(self, ex):
-
- trace = None
- webob_exc = None
- if isinstance(ex, exception.HTTPExceptionDisguise):
- # An HTTP exception was disguised so it could make it here
- # let's remove the disguise and set the original HTTP exception
- if cfg.CONF.debug:
- trace = ''.join(traceback.format_tb(ex.tb))
- ex = ex.exc
- webob_exc = ex
-
- ex_type = ex.__class__.__name__
-
- if ex_type.endswith(rpc_common._REMOTE_POSTFIX):
- ex_type = ex_type[:-len(rpc_common._REMOTE_POSTFIX)]
-
- message = unicode(ex.message)
-
- if cfg.CONF.debug and not trace:
- trace = unicode(ex)
-            if '\n' in trace:
- unused, trace = trace.split('\n', 1)
- else:
- trace = traceback.format_exc()
-
- if not webob_exc:
- webob_exc = self.error_map.get(ex_type,
- webob.exc.HTTPInternalServerError)
-
- error = {
- 'code': webob_exc.code,
- 'title': webob_exc.title,
- 'explanation': webob_exc.explanation,
- 'error': {
- 'message': message,
- 'type': ex_type,
- 'traceback': trace,
- }
- }
-
- return error
-
- def process_request(self, req):
- try:
- return req.get_response(self.application)
- except Exception as exc:
- return req.get_response(Fault(self._error(exc)))
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-"""
-A filter middleware that inspects the requested URI for a version string
-and/or Accept headers and attempts to negotiate an API controller to
-return
-"""
-
-import re
-
-from heat.openstack.common import log as logging
-from heat.openstack.common.gettextutils import _
-
-from heat.common import wsgi
-
-logger = logging.getLogger(__name__)
-
-
-class VersionNegotiationFilter(wsgi.Middleware):
-
- def __init__(self, version_controller, app, conf, **local_conf):
- self.versions_app = version_controller(conf)
- self.version_uri_regex = re.compile(r"^v(\d+)\.?(\d+)?")
- self.conf = conf
- super(VersionNegotiationFilter, self).__init__(app)
-
- def process_request(self, req):
- """
- If there is a version identifier in the URI, simply
- return the correct API controller, otherwise, if we
- find an Accept: header, process it
- """
- # See if a version identifier is in the URI passed to
- # us already. If so, simply return the right version
- # API controller
- msg = _("Processing request: %(method)s %(path)s Accept: "
- "%(accept)s") % ({'method': req.method,
- 'path': req.path, 'accept': req.accept})
- logger.debug(msg)
-
- # If the request is for /versions, just return the versions container
- if req.path_info_peek() == "versions":
- return self.versions_app
-
- match = self._match_version_string(req.path_info_peek(), req)
- if match:
- major_version = req.environ['api.major_version']
- minor_version = req.environ['api.minor_version']
-
- if (major_version == 1 and minor_version == 0):
- logger.debug(_("Matched versioned URI. "
- "Version: %(major_version)d.%(minor_version)d")
- % {'major_version': major_version,
- 'minor_version': minor_version})
- # Strip the version from the path
- req.path_info_pop()
- return None
- else:
- logger.debug(_("Unknown version in versioned URI: "
- "%(major_version)d.%(minor_version)d. "
- "Returning version choices.")
- % {'major_version': major_version,
- 'minor_version': minor_version})
- return self.versions_app
-
- accept = str(req.accept)
- if accept.startswith('application/vnd.openstack.images-'):
- token_loc = len('application/vnd.openstack.images-')
- accept_version = accept[token_loc:]
- match = self._match_version_string(accept_version, req)
- if match:
- major_version = req.environ['api.major_version']
- minor_version = req.environ['api.minor_version']
- if (major_version == 1 and minor_version == 0):
- logger.debug(_("Matched versioned media type. Version: "
- "%(major_version)d.%(minor_version)d")
- % {'major_version': major_version,
- 'minor_version': minor_version})
- return None
- else:
- logger.debug(_("Unknown version in accept header: "
- "%(major_version)d.%(minor_version)d..."
- "returning version choices.")
- % {'major_version': major_version,
- 'minor_version': minor_version})
- return self.versions_app
- else:
- if req.accept not in ('*/*', ''):
- logger.debug(_("Unknown accept header: %s..."
- "returning version choices."), req.accept)
- return self.versions_app
- return None
-
- def _match_version_string(self, subject, req):
- """
- Given a subject string, tries to match a major and/or
- minor version number. If found, sets the api.major_version
- and api.minor_version environ variables.
-
- Returns True if there was a match, False otherwise.
-
- :param subject: The string to check
- :param req: Webob.Request object
- """
- match = self.version_uri_regex.match(subject)
- if match:
- major_version, minor_version = match.groups(0)
- major_version = int(major_version)
- minor_version = int(minor_version)
- req.environ['api.major_version'] = major_version
- req.environ['api.minor_version'] = minor_version
- return match is not None
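The version matching above can be sketched in isolation. This is an illustrative standalone version of the `_match_version_string` logic (without the WSGI request environ), using the same regex: a required major version, an optional dot, and an optional minor version that defaults to 0 when absent.

```python
import re

# Same pattern as VersionNegotiationFilter.version_uri_regex
version_uri_regex = re.compile(r"^v(\d+)\.?(\d+)?")

def match_version(subject):
    """Return (major, minor) if subject starts with a version, else None."""
    match = version_uri_regex.match(subject)
    if not match:
        return None
    # groups(0) substitutes 0 for an unmatched minor-version group
    major, minor = match.groups(0)
    return int(major), int(minor)

print(match_version("v1.0/stacks"))  # (1, 0)
print(match_version("v2"))           # (2, 0)
print(match_version("stacks"))       # None
```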
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from heat.api.middleware.version_negotiation import VersionNegotiationFilter
-from heat.api.middleware.fault import FaultWrapper
-from heat.api.openstack import versions
-
-
-def version_negotiation_filter(app, conf, **local_conf):
- return VersionNegotiationFilter(versions.Controller, app,
- conf, **local_conf)
-
-
-def faultwrap_filter(app, conf, **local_conf):
- return FaultWrapper(app)
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import routes
-
-from heat.api.openstack.v1 import stacks
-from heat.api.openstack.v1 import resources
-from heat.api.openstack.v1 import events
-from heat.api.openstack.v1 import actions
-from heat.common import wsgi
-
-from heat.openstack.common import log as logging
-
-logger = logging.getLogger(__name__)
-
-
-class API(wsgi.Router):
-
- """
- WSGI router for Heat v1 ReST API requests.
- """
-
- def __init__(self, conf, **local_conf):
- self.conf = conf
- mapper = routes.Mapper()
-
- stacks_resource = stacks.create_resource(conf)
-
- with mapper.submapper(controller=stacks_resource,
- path_prefix="/{tenant_id}") as stack_mapper:
- # Template handling
- stack_mapper.connect("template_validate",
- "/validate",
- action="validate_template",
- conditions={'method': 'POST'})
- stack_mapper.connect("resource_types",
- "/resource_types",
- action="list_resource_types",
- conditions={'method': 'GET'})
- stack_mapper.connect("resource_schema",
- "/resource_types/{type_name}",
- action="resource_schema",
- conditions={'method': 'GET'})
- stack_mapper.connect("generate_template",
- "/resource_types/{type_name}/template",
- action="generate_template",
- conditions={'method': 'GET'})
-
- # Stack collection
- stack_mapper.connect("stack_index",
- "/stacks",
- action="index",
- conditions={'method': 'GET'})
- stack_mapper.connect("stack_create",
- "/stacks",
- action="create",
- conditions={'method': 'POST'})
- stack_mapper.connect("stack_detail",
- "/stacks/detail",
- action="detail",
- conditions={'method': 'GET'})
-
- # Stack data
- stack_mapper.connect("stack_lookup",
- "/stacks/{stack_name}",
- action="lookup")
- # \x3A matches on a colon.
- # Routes treats : specially in its regexp
- stack_mapper.connect("stack_lookup",
- r"/stacks/{stack_name:arn\x3A.*}",
- action="lookup")
- subpaths = ['resources', 'events', 'template', 'actions']
- path = "{path:%s}" % '|'.join(subpaths)
- stack_mapper.connect("stack_lookup_subpath",
- "/stacks/{stack_name}/" + path,
- action="lookup",
- conditions={'method': 'GET'})
- stack_mapper.connect("stack_lookup_subpath_post",
- "/stacks/{stack_name}/" + path,
- action="lookup",
- conditions={'method': 'POST'})
- stack_mapper.connect("stack_show",
- "/stacks/{stack_name}/{stack_id}",
- action="show",
- conditions={'method': 'GET'})
- stack_mapper.connect("stack_template",
- "/stacks/{stack_name}/{stack_id}/template",
- action="template",
- conditions={'method': 'GET'})
-
- # Stack update/delete
- stack_mapper.connect("stack_update",
- "/stacks/{stack_name}/{stack_id}",
- action="update",
- conditions={'method': 'PUT'})
- stack_mapper.connect("stack_delete",
- "/stacks/{stack_name}/{stack_id}",
- action="delete",
- conditions={'method': 'DELETE'})
-
- # Resources
- resources_resource = resources.create_resource(conf)
- stack_path = "/{tenant_id}/stacks/{stack_name}/{stack_id}"
- with mapper.submapper(controller=resources_resource,
- path_prefix=stack_path) as res_mapper:
-
- # Resource collection
- res_mapper.connect("resource_index",
- "/resources",
- action="index",
- conditions={'method': 'GET'})
-
- # Resource data
- res_mapper.connect("resource_show",
- "/resources/{resource_name}",
- action="show",
- conditions={'method': 'GET'})
- res_mapper.connect("resource_metadata_show",
- "/resources/{resource_name}/metadata",
- action="metadata",
- conditions={'method': 'GET'})
-
- # Events
- events_resource = events.create_resource(conf)
- with mapper.submapper(controller=events_resource,
- path_prefix=stack_path) as ev_mapper:
-
- # Stack event collection
- ev_mapper.connect("event_index_stack",
- "/events",
- action="index",
- conditions={'method': 'GET'})
- # Resource event collection
- ev_mapper.connect("event_index_resource",
- "/resources/{resource_name}/events",
- action="index",
- conditions={'method': 'GET'})
-
- # Event data
- ev_mapper.connect("event_show",
- "/resources/{resource_name}/events/{event_id}",
- action="show",
- conditions={'method': 'GET'})
-
- # Actions
- actions_resource = actions.create_resource(conf)
- with mapper.submapper(controller=actions_resource,
- path_prefix=stack_path) as ac_mapper:
-
- ac_mapper.connect("action_stack",
- "/actions",
- action="action",
- conditions={'method': 'POST'})
-
- super(API, self).__init__(mapper)
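The `\x3A` escape in the ARN lookup route deserves a note: Routes gives a literal `:` special meaning in path requirements, so the pattern spells the colon as its hex escape instead. A plain-`re` sketch of the same requirement pattern (the ARN string below is an illustrative stand-in):

```python
import re

# \x3A is the hex escape for ':' — equivalent to the Routes requirement
# {stack_name:arn\x3A.*} used above to match stack names given as ARNs.
arn_re = re.compile(r"arn\x3A.*")

print(bool(arn_re.match("arn:openstack:heat::tenant:stacks/mystack/1")))  # True
print(bool(arn_re.match("mystack")))                                      # False
```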
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from webob import exc
-
-from heat.api.openstack.v1 import util
-from heat.common import wsgi
-from heat.openstack.common.gettextutils import _
-from heat.rpc import client as rpc_client
-
-
-class ActionController(object):
- """
- WSGI controller for Actions in Heat v1 API
- Implements the API for stack actions
- """
-
- ACTIONS = (SUSPEND, RESUME) = ('suspend', 'resume')
-
- def __init__(self, options):
- self.options = options
- self.engine = rpc_client.EngineClient()
-
- @util.identified_stack
- def action(self, req, identity, body={}):
- """
- Performs the specified action on a stack; the body is expected to
- contain exactly one item, whose key specifies the action
- """
-
- if len(body) < 1:
- raise exc.HTTPBadRequest(_("No action specified"))
-
- if len(body) > 1:
- raise exc.HTTPBadRequest(_("Multiple actions specified"))
-
- ac = list(body.keys())[0]
- if ac not in self.ACTIONS:
- raise exc.HTTPBadRequest(_("Invalid action %s specified") % ac)
-
- if ac == self.SUSPEND:
- res = self.engine.stack_suspend(req.context, identity)
- elif ac == self.RESUME:
- res = self.engine.stack_resume(req.context, identity)
- else:
- raise exc.HTTPInternalServerError(_("Unexpected action %s") % ac)
-
-
-def create_resource(options):
- """
- Actions resource factory method.
- """
- # TODO(zaneb) handle XML based on Content-type/Accepts
- deserializer = wsgi.JSONRequestDeserializer()
- serializer = wsgi.JSONResponseSerializer()
- return wsgi.Resource(ActionController(options), deserializer, serializer)
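The body validation in `ActionController.action` can be demonstrated standalone. This sketch raises a plain `ValueError` in place of webob's `HTTPBadRequest`, and uses `next(iter(...))` to pick the single key, which works on both Python 2 and 3 (unlike `body.keys()[0]`):

```python
# Known actions, mirroring ActionController.ACTIONS
ACTIONS = (SUSPEND, RESUME) = ('suspend', 'resume')

def validate_action_body(body):
    """Return the action name if the body holds exactly one valid action."""
    if len(body) < 1:
        raise ValueError("No action specified")
    if len(body) > 1:
        raise ValueError("Multiple actions specified")
    ac = next(iter(body))  # the single key
    if ac not in ACTIONS:
        raise ValueError("Invalid action %s specified" % ac)
    return ac

print(validate_action_body({'suspend': None}))  # suspend
```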
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import itertools
-from webob import exc
-
-from heat.api.openstack.v1 import util
-from heat.common import identifier
-from heat.common import wsgi
-from heat.openstack.common.gettextutils import _
-from heat.rpc import api as engine_api
-from heat.rpc import client as rpc_client
-
-
-summary_keys = [
- engine_api.EVENT_ID,
- engine_api.EVENT_TIMESTAMP,
- engine_api.EVENT_RES_NAME,
- engine_api.EVENT_RES_STATUS,
- engine_api.EVENT_RES_STATUS_DATA,
- engine_api.EVENT_RES_PHYSICAL_ID,
-]
-
-
-def format_event(req, event, keys=None):
- include_key = lambda k: k in keys if keys else True
-
- def transform(key, value):
- if not include_key(key):
- return
-
- if key == engine_api.EVENT_ID:
- identity = identifier.EventIdentifier(**value)
- yield ('id', identity.event_id)
- yield ('links', [util.make_link(req, identity),
- util.make_link(req, identity.resource(),
- 'resource'),
- util.make_link(req, identity.stack(),
- 'stack')])
- elif key in (engine_api.EVENT_STACK_ID, engine_api.EVENT_STACK_NAME,
- engine_api.EVENT_RES_ACTION):
- return
- elif (key == engine_api.EVENT_RES_STATUS and
- engine_api.EVENT_RES_ACTION in event):
- # To avoid breaking API compatibility, we join EVENT_RES_ACTION
- # and EVENT_RES_STATUS, so the API format doesn't expose the
- # internal split of state into action/status
- yield (key, '_'.join((event[engine_api.EVENT_RES_ACTION], value)))
- elif (key == engine_api.RES_NAME):
- yield ('logical_resource_id', value)
- yield (key, value)
- else:
- yield (key, value)
-
- return dict(itertools.chain.from_iterable(
- transform(k, v) for k, v in event.items()))
-
-
-class EventController(object):
- """
- WSGI controller for Events in Heat v1 API
- Implements the API actions
- """
-
- def __init__(self, options):
- self.options = options
- self.engine = rpc_client.EngineClient()
-
- def _event_list(self, req, identity,
- filter_func=lambda e: True, detail=False):
- events = self.engine.list_events(req.context,
- identity)
-
- keys = None if detail else summary_keys
-
- return [format_event(req, e, keys) for e in events if filter_func(e)]
-
- @util.identified_stack
- def index(self, req, identity, resource_name=None):
- """
- Lists summary information for all events
- """
-
- if resource_name is None:
- events = self._event_list(req, identity)
- else:
- res_match = lambda e: e[engine_api.EVENT_RES_NAME] == resource_name
-
- events = self._event_list(req, identity, res_match)
- if not events:
- msg = _('No events found for resource %s') % resource_name
- raise exc.HTTPNotFound(msg)
-
- return {'events': events}
-
- @util.identified_stack
- def show(self, req, identity, resource_name, event_id):
- """
- Gets detailed information for an event
- """
-
- def event_match(ev):
- identity = identifier.EventIdentifier(**ev[engine_api.EVENT_ID])
- return (ev[engine_api.EVENT_RES_NAME] == resource_name and
- identity.event_id == event_id)
-
- events = self._event_list(req, identity, event_match, True)
- if not events:
- raise exc.HTTPNotFound(_('No event %s found') % event_id)
-
- return {'event': events[0]}
-
-
-def create_resource(options):
- """
- Events resource factory method.
- """
- # TODO(zaneb) handle XML based on Content-type/Accepts
- deserializer = wsgi.JSONRequestDeserializer()
- serializer = wsgi.JSONResponseSerializer()
- return wsgi.Resource(EventController(options), deserializer, serializer)
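`format_event` (and the similar formatters for resources and stacks in this API) builds its output dict by running each key/value pair through a generator and flattening the results with `itertools.chain.from_iterable`. That lets a single input key be dropped (yield nothing), renamed, or expanded into several output entries. A minimal standalone sketch of the pattern, with illustrative keys rather than the engine_api constants:

```python
import itertools

def transform(key, value):
    if key == 'stack_id':
        return  # generator yields nothing: key is dropped
    elif key == 'resource_name':
        yield ('logical_resource_id', value)  # expose under a second name
        yield (key, value)                    # and keep the original
    else:
        yield (key, value)

event = {'stack_id': 'abc', 'resource_name': 'server', 'status': 'CREATE'}
out = dict(itertools.chain.from_iterable(
    transform(k, v) for k, v in event.items()))
print(out)
```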
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import itertools
-
-from heat.api.openstack.v1 import util
-from heat.common import wsgi
-from heat.rpc import api as engine_api
-from heat.common import identifier
-from heat.rpc import client as rpc_client
-
-
-def format_resource(req, res, keys=None):
- include_key = lambda k: k in keys if keys else True
-
- def transform(key, value):
- if not include_key(key):
- return
-
- if key == engine_api.RES_ID:
- identity = identifier.ResourceIdentifier(**value)
- yield ('links', [util.make_link(req, identity),
- util.make_link(req, identity.stack(), 'stack')])
- elif (key == engine_api.RES_STACK_NAME or
- key == engine_api.RES_STACK_ID or
- key == engine_api.RES_ACTION):
- return
- elif (key == engine_api.RES_METADATA):
- return
- elif (key == engine_api.RES_STATUS and engine_api.RES_ACTION in res):
- # To avoid breaking API compatibility, we join RES_ACTION
- # and RES_STATUS, so the API format doesn't expose the
- # internal split of state into action/status
- yield (key, '_'.join((res[engine_api.RES_ACTION], value)))
- elif (key == engine_api.RES_NAME):
- yield ('logical_resource_id', value)
- yield (key, value)
- else:
- yield (key, value)
-
- return dict(itertools.chain.from_iterable(
- transform(k, v) for k, v in res.items()))
-
-
-class ResourceController(object):
- """
- WSGI controller for Resources in Heat v1 API
- Implements the API actions
- """
-
- def __init__(self, options):
- self.options = options
- self.engine = rpc_client.EngineClient()
-
- @util.identified_stack
- def index(self, req, identity):
- """
- Lists summary information for all resources
- """
-
- res_list = self.engine.list_stack_resources(req.context,
- identity)
-
- return {'resources': [format_resource(req, res) for res in res_list]}
-
- @util.identified_stack
- def show(self, req, identity, resource_name):
- """
- Gets detailed information for a resource
- """
-
- res = self.engine.describe_stack_resource(req.context,
- identity,
- resource_name)
-
- return {'resource': format_resource(req, res)}
-
- @util.identified_stack
- def metadata(self, req, identity, resource_name):
- """
- Gets the metadata for a resource
- """
-
- res = self.engine.describe_stack_resource(req.context,
- identity,
- resource_name)
-
- return {engine_api.RES_METADATA: res[engine_api.RES_METADATA]}
-
-
-def create_resource(options):
- """
- Resources resource factory method.
- """
- # TODO(zaneb) handle XML based on Content-type/Accepts
- deserializer = wsgi.JSONRequestDeserializer()
- serializer = wsgi.JSONResponseSerializer()
- return wsgi.Resource(ResourceController(options), deserializer, serializer)
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-"""
-Stack endpoint for Heat v1 ReST API.
-"""
-
-import itertools
-from webob import exc
-
-from heat.api.openstack.v1 import util
-from heat.common import environment_format
-from heat.common import identifier
-from heat.common import template_format
-from heat.common import urlfetch
-from heat.common import wsgi
-from heat.openstack.common.gettextutils import _
-from heat.rpc import api as engine_api
-from heat.rpc import client as rpc_client
-
-from heat.openstack.common import log as logging
-
-logger = logging.getLogger(__name__)
-
-
-class InstantiationData(object):
- """
- The data accompanying a PUT or POST request to create or update a stack.
- """
-
- PARAMS = (
- PARAM_STACK_NAME,
- PARAM_TEMPLATE,
- PARAM_TEMPLATE_URL,
- PARAM_USER_PARAMS,
- PARAM_ENVIRONMENT,
- PARAM_FILES,
- ) = (
- 'stack_name',
- 'template',
- 'template_url',
- 'parameters',
- 'environment',
- 'files',
- )
-
- def __init__(self, data):
- """Initialise from the request object."""
- self.data = data
-
- @staticmethod
- def format_parse(data, data_type):
- """
- Parse the supplied data as JSON or YAML, raising the appropriate
- exception if it is in the wrong format.
- """
-
- try:
- if data_type == 'Environment':
- return environment_format.parse(data)
- else:
- return template_format.parse(data)
- except ValueError:
- err_reason = _("%s not in valid format") % data_type
- raise exc.HTTPBadRequest(err_reason)
-
- def stack_name(self):
- """
- Return the stack name.
- """
- if self.PARAM_STACK_NAME not in self.data:
- raise exc.HTTPBadRequest(_("No stack name specified"))
- return self.data[self.PARAM_STACK_NAME]
-
- def template(self):
- """
- Get template file contents, either inline or from a URL, in JSON
- or YAML format.
- """
- if self.PARAM_TEMPLATE in self.data:
- template_data = self.data[self.PARAM_TEMPLATE]
- if isinstance(template_data, dict):
- return template_data
- elif self.PARAM_TEMPLATE_URL in self.data:
- url = self.data[self.PARAM_TEMPLATE_URL]
- logger.debug('TemplateUrl %s' % url)
- try:
- template_data = urlfetch.get(url)
- except IOError as ex:
- err_reason = _('Could not retrieve template: %s') % str(ex)
- raise exc.HTTPBadRequest(err_reason)
- else:
- raise exc.HTTPBadRequest(_("No template specified"))
-
- return self.format_parse(template_data, 'Template')
-
- def environment(self):
- """
- Get the user-supplied environment for the stack in YAML format.
- If the user also supplied parameters, merge them into the
- environment's global parameters.
- """
- env = {}
- if self.PARAM_ENVIRONMENT in self.data:
- env_data = self.data[self.PARAM_ENVIRONMENT]
- if isinstance(env_data, dict):
- env = env_data
- else:
- env = self.format_parse(env_data,
- 'Environment')
-
- environment_format.default_for_missing(env)
- parameters = self.data.get(self.PARAM_USER_PARAMS, {})
- env[self.PARAM_USER_PARAMS].update(parameters)
- return env
-
- def files(self):
- return self.data.get(self.PARAM_FILES, {})
-
- def args(self):
- """
- Get any additional arguments supplied by the user.
- """
- params = self.data.items()
- return dict((k, v) for k, v in params if k not in self.PARAMS)
-
-
-def format_stack(req, stack, keys=None):
- include_key = lambda k: k in keys if keys else True
-
- def transform(key, value):
- if not include_key(key):
- return
-
- if key == engine_api.STACK_ID:
- yield ('id', value['stack_id'])
- yield ('links', [util.make_link(req, value)])
- elif key == engine_api.STACK_ACTION:
- return
- elif (key == engine_api.STACK_STATUS and
- engine_api.STACK_ACTION in stack):
- # To avoid breaking API compatibility, we join STACK_ACTION
- # and STACK_STATUS, so the API format doesn't expose the
- # internal split of state into action/status
- yield (key, '_'.join((stack[engine_api.STACK_ACTION], value)))
- else:
- # TODO(zaneb): ensure parameters can be formatted for XML
- #elif key == engine_api.STACK_PARAMETERS:
- # return key, json.dumps(value)
- yield (key, value)
-
- return dict(itertools.chain.from_iterable(
- transform(k, v) for k, v in stack.items()))
-
-
-class StackController(object):
- """
- WSGI controller for stacks resource in Heat v1 API
- Implements the API actions
- """
-
- def __init__(self, options):
- self.options = options
- self.engine = rpc_client.EngineClient()
-
- def default(self, req, **args):
- raise exc.HTTPNotFound()
-
- @util.tenant_local
- def index(self, req):
- """
- Lists summary information for all stacks
- """
-
- stacks = self.engine.list_stacks(req.context)
-
- summary_keys = (engine_api.STACK_ID,
- engine_api.STACK_NAME,
- engine_api.STACK_DESCRIPTION,
- engine_api.STACK_STATUS,
- engine_api.STACK_STATUS_DATA,
- engine_api.STACK_CREATION_TIME,
- engine_api.STACK_DELETION_TIME,
- engine_api.STACK_UPDATED_TIME)
-
- return {'stacks': [format_stack(req, s, summary_keys) for s in stacks]}
-
- @util.tenant_local
- def detail(self, req):
- """
- Lists detailed information for all stacks
- """
- stacks = self.engine.list_stacks(req.context)
-
- return {'stacks': [format_stack(req, s) for s in stacks]}
-
- @util.tenant_local
- def create(self, req, body):
- """
- Create a new stack
- """
-
- data = InstantiationData(body)
-
- result = self.engine.create_stack(req.context,
- data.stack_name(),
- data.template(),
- data.environment(),
- data.files(),
- data.args())
-
- return {'stack': format_stack(req, {engine_api.STACK_ID: result})}
-
- @util.tenant_local
- def lookup(self, req, stack_name, path='', body=None):
- """
- Redirect to the canonical URL for a stack
- """
- try:
- identity = dict(identifier.HeatIdentifier.from_arn(stack_name))
- except ValueError:
- identity = self.engine.identify_stack(req.context,
- stack_name)
-
- location = util.make_url(req, identity)
- if path:
- location = '/'.join([location, path])
-
- raise exc.HTTPFound(location=location)
-
- @util.identified_stack
- def show(self, req, identity):
- """
- Gets detailed information for a stack
- """
-
- stack_list = self.engine.show_stack(req.context,
- identity)
-
- if not stack_list:
- raise exc.HTTPInternalServerError()
-
- stack = stack_list[0]
-
- return {'stack': format_stack(req, stack)}
-
- @util.identified_stack
- def template(self, req, identity):
- """
- Get the template body for an existing stack
- """
-
- templ = self.engine.get_template(req.context,
- identity)
-
- if templ is None:
- raise exc.HTTPNotFound()
-
- # TODO(zaneb): always set Content-type to application/json
- return templ
-
- @util.identified_stack
- def update(self, req, identity, body):
- """
- Update an existing stack with a new template and/or parameters
- """
- data = InstantiationData(body)
-
- res = self.engine.update_stack(req.context,
- identity,
- data.template(),
- data.environment(),
- data.files(),
- data.args())
-
- raise exc.HTTPAccepted()
-
- @util.identified_stack
- def delete(self, req, identity):
- """
- Delete the specified stack
- """
-
- res = self.engine.delete_stack(req.context,
- identity,
- cast=False)
-
- if res is not None:
- raise exc.HTTPBadRequest(res['Error'])
-
- raise exc.HTTPNoContent()
-
- @util.tenant_local
- def validate_template(self, req, body):
- """
- Implements the ValidateTemplate API action
- Validates the specified template
- """
-
- data = InstantiationData(body)
-
- result = self.engine.validate_template(req.context,
- data.template())
-
- if 'Error' in result:
- raise exc.HTTPBadRequest(result['Error'])
-
- return result
-
- @util.tenant_local
- def list_resource_types(self, req):
- """
- Returns a list of valid resource types that may be used in a template.
- """
- return {'resource_types': self.engine.list_resource_types(req.context)}
-
- @util.tenant_local
- def resource_schema(self, req, type_name):
- """
- Returns the schema of the given resource type.
- """
- return self.engine.resource_schema(req.context, type_name)
-
- @util.tenant_local
- def generate_template(self, req, type_name):
- """
- Generates a template based on the specified type.
- """
- return self.engine.generate_template(req.context, type_name)
-
-
-class StackSerializer(wsgi.JSONResponseSerializer):
- """Handles serialization of specific controller method responses."""
-
- def _populate_response_header(self, response, location, status):
- response.status = status
- response.headers['Location'] = location.encode('utf-8')
- response.headers['Content-Type'] = 'application/json'
- return response
-
- def create(self, response, result):
- self._populate_response_header(response,
- result['stack']['links'][0]['href'],
- 201)
- response.body = self.to_json(result)
- return response
-
-
-def create_resource(options):
- """
- Stacks resource factory method.
- """
- # TODO(zaneb) handle XML based on Content-type/Accepts
- deserializer = wsgi.JSONRequestDeserializer()
- serializer = StackSerializer()
- return wsgi.Resource(StackController(options), deserializer, serializer)
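`InstantiationData.args()` passes through any request-body keys that are not among the well-known `PARAMS`, so extra engine arguments ride along with the create/update call. A standalone sketch of that filter (`timeout_mins` is an illustrative extra argument, not taken from this file):

```python
# The well-known keys, mirroring InstantiationData.PARAMS
PARAMS = ('stack_name', 'template', 'template_url',
          'parameters', 'environment', 'files')

body = {'stack_name': 'mystack', 'template': {}, 'timeout_mins': 60}

# Everything not in PARAMS is forwarded to the engine as an extra arg
extra_args = dict((k, v) for k, v in body.items() if k not in PARAMS)
print(extra_args)  # {'timeout_mins': 60}
```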
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from functools import wraps
-
-from webob import exc
-
-from heat.common import identifier
-from heat.openstack.common.gettextutils import _
-
-
-def tenant_local(handler):
- '''
- Decorator for a handler method that sets the correct tenant_id in the
- request context.
- '''
- @wraps(handler)
- def handle_stack_method(controller, req, tenant_id, **kwargs):
- req.context.tenant_id = tenant_id
- return handler(controller, req, **kwargs)
-
- return handle_stack_method
-
-
-def identified_stack(handler):
- '''
- Decorator for a handler method that passes a stack identifier in place of
- the various path components.
- '''
- @tenant_local
- @wraps(handler)
- def handle_stack_method(controller, req, stack_name, stack_id, **kwargs):
- stack_identity = identifier.HeatIdentifier(req.context.tenant_id,
- stack_name,
- stack_id)
- return handler(controller, req, dict(stack_identity), **kwargs)
-
- return handle_stack_method
-
-
-def make_url(req, identity):
- '''Return the URL for the supplied identity dictionary.'''
- try:
- stack_identity = identifier.HeatIdentifier(**identity)
- except ValueError:
- err_reason = _('Invalid Stack address')
- raise exc.HTTPInternalServerError(err_reason)
-
- return req.relative_url(stack_identity.url_path(), True)
-
-
-def make_link(req, identity, relationship='self'):
- '''Return a link structure for the supplied identity dictionary.'''
- return {'href': make_url(req, identity), 'rel': relationship}
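The `tenant_local` decorator can be exercised standalone with stand-in request/context objects (the `Fake*` classes below are hypothetical test doubles, not part of this module): it pops `tenant_id` out of the routing keyword arguments and attaches it to the request context before invoking the handler.

```python
from functools import wraps

class FakeContext(object):
    tenant_id = None

class FakeRequest(object):
    context = FakeContext()

def tenant_local(handler):
    """Set the tenant_id on the request context, as in util.tenant_local."""
    @wraps(handler)
    def handle_stack_method(controller, req, tenant_id, **kwargs):
        req.context.tenant_id = tenant_id
        return handler(controller, req, **kwargs)
    return handle_stack_method

class Controller(object):
    @tenant_local
    def index(self, req):
        # tenant_id is no longer a handler argument; read it from context
        return 'tenant is %s' % req.context.tenant_id

req = FakeRequest()
print(Controller().index(req, tenant_id='t1'))  # tenant is t1
```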
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-"""
-Controller that returns information on the heat API versions
-"""
-
-import httplib
-import json
-
-import webob.dec
-
-
-class Controller(object):
-
- """
- A controller that produces information on the heat API versions.
- """
-
- def __init__(self, conf):
- self.conf = conf
-
- @webob.dec.wsgify
- def __call__(self, req):
- """Respond to a request for all OpenStack API versions."""
- version_objs = [
- {
- "id": "v1.0",
- "status": "CURRENT",
- "links": [
- {
- "rel": "self",
- "href": self.get_href(req)
- }]
- }]
-
- body = json.dumps(dict(versions=version_objs))
-
- response = webob.Response(request=req,
- status=httplib.MULTIPLE_CHOICES,
- content_type='application/json')
- response.body = body
-
- return response
-
- def get_href(self, req):
- return "%s/v1/" % req.host_url
+++ /dev/null
-#!/bin/bash
-setenforce 0
-useradd -m @INSTANCE_USER@
-echo -e '@INSTANCE_USER@\tALL=(ALL)\tNOPASSWD: ALL' >> /etc/sudoers
-
-# Do not remove - the cloud boothook should always return success
-exit 0
+++ /dev/null
-user: @INSTANCE_USER@
-
-cloud_config_modules:
- - locale
- - set_hostname
- - timezone
- - update_etc_hosts
- - update_hostname
-
-# Capture all subprocess output into a logfile
-# Useful for troubleshooting cloud-init issues
-output: {all: '| tee -a /var/log/cloud-init-output.log'}
+++ /dev/null
-#!/usr/bin/env python
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import errno
-import datetime
-import logging
-import pkg_resources
-import os
-import subprocess
-import sys
-
-from distutils.version import LooseVersion
-
-
-VAR_PATH = '/var/lib/heat-cfntools'
-LOG = logging.getLogger('heat-provision')
-
-
-def chk_ci_version():
- v = LooseVersion(pkg_resources.get_distribution('cloud-init').version)
- return v >= LooseVersion('0.6.0')
-
-
-def init_logging():
- LOG.setLevel(logging.INFO)
- LOG.addHandler(logging.StreamHandler())
- fh = logging.FileHandler("/var/log/heat-provision.log")
- os.chmod(fh.baseFilename, 0o600)
- LOG.addHandler(fh)
-
-
-def call(args):
-
- class LogStream(object):
-
- def write(self, data):
- LOG.info(data)
-
- LOG.info('%s\n' % ' '.join(args))
- try:
- ls = LogStream()
- p = subprocess.Popen(args, stdout=subprocess.PIPE,
- stderr=subprocess.PIPE)
- data = p.communicate()
- if data:
- for x in data:
- ls.write(x)
- except OSError as ex:
- if ex.errno == errno.ENOEXEC:
- LOG.error('Userdata empty or not executable: %s\n' % str(ex))
- return os.EX_OK
- else:
- LOG.error('OS error running userdata: %s\n' % str(ex))
- return os.EX_OSERR
- except Exception as ex:
- LOG.error('Unknown error running userdata: %s\n' % str(ex))
- return os.EX_SOFTWARE
- return p.returncode
-
-
-def main():
-
- if not chk_ci_version():
- # pre 0.6.0 - user data executed via cloudinit, not this helper
- LOG.info('Unable to log provisioning, need a newer version of'
- ' cloud-init\n')
- return -1
-
- userdata_path = os.path.join(VAR_PATH, 'cfn-userdata')
- os.chmod(userdata_path, 0o700)
-
- LOG.info('Provision began: %s\n' % datetime.datetime.now())
- returncode = call([userdata_path])
- LOG.info('Provision done: %s\n' % datetime.datetime.now())
- if returncode:
- return returncode
-
-
-if __name__ == '__main__':
- init_logging()
-
- code = main()
- if code:
- LOG.error('Provision failed with exit code %s' % code)
- sys.exit(code)
-
- provision_log = os.path.join(VAR_PATH, 'provision-finished')
- # touch the file so it is timestamped with when finished
- with open(provision_log, 'a'):
- os.utime(provision_log, None)
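The `call()` helper above runs the userdata script and funnels its output into the provisioning log. The same capture-and-log pattern can be sketched in isolation (names are illustrative, not Heat's):

```python
import logging
import subprocess

LOG = logging.getLogger('provision-sketch')


def run_and_log(args):
    """Run a command, log its combined stdout/stderr, return the exit code."""
    proc = subprocess.Popen(args, stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT)
    out, _ = proc.communicate()
    if out:
        LOG.info(out.decode(errors='replace'))
    return proc.returncode
```

Merging stderr into stdout with `stderr=subprocess.STDOUT` keeps the log lines in the order the process emitted them, which the two-pipe version above cannot guarantee.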
+++ /dev/null
-#part-handler
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import datetime
-import errno
-import os
-
-
-def list_types():
- return(["text/x-cfninitdata"])
-
-
-def handle_part(data, ctype, filename, payload):
- if ctype == "__begin__":
- try:
- os.makedirs('/var/lib/heat-cfntools', 0o700)
- except OSError as e:
- if e.errno != errno.EEXIST:
- raise
- return
-
- if ctype == "__end__":
- return
-
- with open('/var/log/part-handler.log', 'a') as log:
- timestamp = datetime.datetime.now()
- log.write('%s filename:%s, ctype:%s\n' % (timestamp, filename, ctype))
-
- if ctype == 'text/x-cfninitdata':
- with open('/var/lib/heat-cfntools/%s' % filename, 'w') as f:
- f.write(payload)
-
- # TODO(sdake) hopefully temporary until users move to heat-cfntools-1.3
- with open('/var/lib/cloud/data/%s' % filename, 'w') as f:
- f.write(payload)
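The `__begin__` branch above uses the classic makedirs/EEXIST guard to create its state directory idempotently. A self-contained sketch of the idiom (on Python 3, `os.makedirs(path, exist_ok=True)` collapses this into one call):

```python
import errno
import os
import tempfile


def ensure_dir(path, mode=0o700):
    """Create path (and parents) if missing; a no-op if it already exists."""
    try:
        os.makedirs(path, mode)
    except OSError as e:
        # Only swallow "already exists"; re-raise real failures
        # such as EACCES or ENOSPC.
        if e.errno != errno.EEXIST:
            raise
```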
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-"""
- CLI interface for heat management.
-"""
-
-import sys
-
-from oslo.config import cfg
-
-from heat.db import api as db_api
-from heat.db import migration
-from heat.db import utils
-from heat.openstack.common import log
-from heat import version
-
-
-CONF = cfg.CONF
-
-
-def do_db_version():
- """Print database's current migration level."""
- print(migration.db_version())
-
-
-def do_db_sync():
- """
- Place a database under migration control and upgrade,
- creating first if necessary.
- """
- migration.db_sync(CONF.command.version)
-
-
-def purge_deleted():
- """
- Remove database records that have been previously soft deleted
- """
- utils.purge_deleted(CONF.command.age)
-
-
-def add_command_parsers(subparsers):
- parser = subparsers.add_parser('db_version')
- parser.set_defaults(func=do_db_version)
-
- parser = subparsers.add_parser('db_sync')
- parser.set_defaults(func=do_db_sync)
- parser.add_argument('version', nargs='?')
- parser.add_argument('current_version', nargs='?')
-
- parser = subparsers.add_parser('purge_deleted')
- parser.set_defaults(func=purge_deleted)
- parser.add_argument('age', nargs='?')
-
-command_opt = cfg.SubCommandOpt('command',
- title='Commands',
- help='Available commands',
- handler=add_command_parsers)
-
-
-def main():
- CONF.register_cli_opt(command_opt)
- try:
- default_config_files = cfg.find_config_files('heat', 'heat-engine')
- CONF(sys.argv[1:], project='heat', prog='heat-manage',
- version=version.version_info.version_string(),
- default_config_files=default_config_files)
- log.setup("heat")
- db_api.configure()
- except RuntimeError as e:
- sys.exit("ERROR: %s" % e)
-
- try:
- CONF.command.func()
- except Exception as e:
- sys.exit("ERROR: %s" % e)
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Copyright 2013 OpenStack Foundation
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
-# implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from keystoneclient.v2_0 import client as keystone_client
-from keystoneclient import exceptions as keystone_exceptions
-from oslo.config import cfg
-from webob.exc import HTTPBadRequest
-from webob.exc import HTTPUnauthorized
-
-from heat.openstack.common import importutils
-
-
-class KeystonePasswordAuthProtocol(object):
- """
- Alternative authentication middleware that uses username and password
- to authenticate against Keystone instead of validating existing auth token.
- The benefit being that you no longer require admin/service token to
- authenticate users.
- """
-
- def __init__(self, app, conf):
- self.app = app
- self.conf = conf
- auth_url = None
- if not cfg.CONF.auth_password.multi_cloud:
- if 'auth_uri' in self.conf:
- auth_url = self.conf['auth_uri']
- else:
- # Import auth_token to have keystone_authtoken settings setup.
- importutils.import_module(
- 'keystoneclient.middleware.auth_token')
- auth_url = cfg.CONF.keystone_authtoken['auth_uri']
- self.auth_url = auth_url
-
- def __call__(self, env, start_response):
- """Authenticate incoming request."""
- username = env.get('HTTP_X_AUTH_USER')
- password = env.get('HTTP_X_AUTH_KEY')
- # Determine tenant id from path.
- tenant = env.get('PATH_INFO').split('/')[1]
- auth_url = self.auth_url
- if cfg.CONF.auth_password.multi_cloud:
- auth_url = env.get('HTTP_X_AUTH_URL')
- error = self._validate_auth_url(env, start_response, auth_url)
- if error:
- return error
- if not tenant:
- return self._reject_request(env, start_response, auth_url)
- try:
- client = keystone_client.Client(
- username=username, password=password, tenant_id=tenant,
- auth_url=auth_url)
- except (keystone_exceptions.Unauthorized,
- keystone_exceptions.Forbidden,
- keystone_exceptions.NotFound,
- keystone_exceptions.AuthorizationFailure):
- return self._reject_request(env, start_response, auth_url)
- env['keystone.token_info'] = client.auth_ref
- env.update(self._build_user_headers(client.auth_ref, auth_url))
- return self.app(env, start_response)
-
- def _reject_request(self, env, start_response, auth_url):
- """Redirect client to auth server."""
- headers = [('WWW-Authenticate', 'Keystone uri=\'%s\'' % auth_url)]
- resp = HTTPUnauthorized('Authentication required', headers)
- return resp(env, start_response)
-
- def _build_user_headers(self, token_info, auth_url):
- """Build headers that represent authenticated user from auth token."""
- tenant_id = token_info['token']['tenant']['id']
- tenant_name = token_info['token']['tenant']['name']
- user_id = token_info['user']['id']
- user_name = token_info['user']['name']
- roles = ','.join(
- [role['name'] for role in token_info['user']['roles']])
- service_catalog = token_info['serviceCatalog']
- auth_token = token_info['token']['id']
-
- headers = {
- 'HTTP_X_IDENTITY_STATUS': 'Confirmed',
- 'HTTP_X_PROJECT_ID': tenant_id,
- 'HTTP_X_PROJECT_NAME': tenant_name,
- 'HTTP_X_USER_ID': user_id,
- 'HTTP_X_USER_NAME': user_name,
- 'HTTP_X_ROLES': roles,
- 'HTTP_X_SERVICE_CATALOG': service_catalog,
- 'HTTP_X_AUTH_TOKEN': auth_token,
- 'HTTP_X_AUTH_URL': auth_url,
- # DEPRECATED
- 'HTTP_X_USER': user_name,
- 'HTTP_X_TENANT_ID': tenant_id,
- 'HTTP_X_TENANT_NAME': tenant_name,
- 'HTTP_X_TENANT': tenant_name,
- 'HTTP_X_ROLE': roles,
- }
-
- return headers
-
- def _validate_auth_url(self, env, start_response, auth_url):
- """Validate auth_url to ensure it can be used."""
- if not auth_url:
- resp = HTTPBadRequest(_('Request missing required header '
- 'X-Auth-Url'))
- return resp(env, start_response)
- allowed = cfg.CONF.auth_password.allowed_auth_uris
- if auth_url not in allowed:
- resp = HTTPUnauthorized(_('Header X-Auth-Url "%s" not an allowed '
- 'endpoint')
- % auth_url)
- return resp(env, start_response)
- return None
-
-
-def filter_factory(global_conf, **local_conf):
- """Returns a WSGI filter app for use with paste.deploy."""
- conf = global_conf.copy()
- conf.update(local_conf)
-
- def auth_filter(app):
- return KeystonePasswordAuthProtocol(app, conf)
- return auth_filter
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Copyright 2010-2012 OpenStack Foundation
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
-# implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import logging
-from keystoneclient.middleware import auth_token
-
-LOG = logging.getLogger(__name__)
-
-
-class AuthProtocol(auth_token.AuthProtocol):
- """
- Subclass of keystoneclient auth_token middleware which also
- sets the 'X-Auth-Url' header to the value specified in the config.
- """
- def _build_user_headers(self, token_info):
- rval = super(AuthProtocol, self)._build_user_headers(token_info)
- rval['X-Auth-Url'] = self.auth_uri
- return rval
-
-
-def filter_factory(global_conf, **local_conf):
- """Returns a WSGI filter app for use with paste.deploy."""
- conf = global_conf.copy()
- conf.update(local_conf)
-
- def auth_filter(app):
- return AuthProtocol(app, conf)
- return auth_filter
-
-
-def app_factory(global_conf, **local_conf):
- conf = global_conf.copy()
- conf.update(local_conf)
- return AuthProtocol(None, conf)
+++ /dev/null
-
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-"""
-Routines for configuring Heat
-"""
-
-import logging as sys_logging
-import os
-
-from eventlet.green import socket
-from oslo.config import cfg
-
-from heat.common import wsgi
-
-from heat.openstack.common import log as logging
-from heat.openstack.common import rpc
-
-DEFAULT_PORT = 8000
-
-paste_deploy_group = cfg.OptGroup('paste_deploy')
-paste_deploy_opts = [
- cfg.StrOpt('flavor',
- help=_("The flavor to use")),
- cfg.StrOpt('api_paste_config', default="api-paste.ini",
- help=_("The API paste config file to use"))]
-
-
-service_opts = [
- cfg.IntOpt('periodic_interval',
- default=60,
- help='Seconds between running periodic tasks'),
- cfg.StrOpt('heat_metadata_server_url',
- default="",
- help='URL of the Heat metadata server'),
- cfg.StrOpt('heat_waitcondition_server_url',
- default="",
- help='URL of the Heat waitcondition server'),
- cfg.StrOpt('heat_watch_server_url',
- default="",
- help='URL of the Heat cloudwatch server'),
- cfg.StrOpt('instance_connection_is_secure',
- default="0",
- help='Instance connection to cfn/cw API via https'),
- cfg.StrOpt('instance_connection_https_validate_certificates',
- default="1",
- help='Instance connection to cfn/cw API validate certs if ssl'),
- cfg.StrOpt('heat_stack_user_role',
- default="heat_stack_user",
- help='Keystone role for heat template-defined users'),
- cfg.IntOpt('max_template_size',
- default=524288,
- help='Maximum raw byte size of any template.'),
- cfg.IntOpt('max_nested_stack_depth',
- default=3,
- help='Maximum depth allowed when using nested stacks.')]
-
-db_opts = [
- cfg.StrOpt('sql_connection',
- default='mysql://heat:heat@localhost/heat',
- help='The SQLAlchemy connection string used to connect to the '
- 'database'),
- cfg.IntOpt('sql_idle_timeout',
- default=3600,
- help='Timeout before idle SQL connections are reaped')]
-
-engine_opts = [
- cfg.StrOpt('instance_user',
- default='ec2-user',
- help='The default user for new instances'),
- cfg.StrOpt('instance_driver',
- default='heat.engine.nova',
- help='Driver to use for controlling instances'),
- cfg.ListOpt('plugin_dirs',
- default=['/usr/lib64/heat', '/usr/lib/heat'],
- help='List of directories to search for Plugins'),
- cfg.StrOpt('environment_dir',
- default='/etc/heat/environment.d',
- help='The directory to search for environment files'),
- cfg.StrOpt('deferred_auth_method',
- choices=['password', 'trusts'],
- default='password',
- help=_('Select deferred auth method, '
- 'stored password or trusts')),
- cfg.ListOpt('trusts_delegated_roles',
- default=['heat_stack_owner'],
- help=_('Subset of trustor roles to be delegated to heat')),
- cfg.IntOpt('max_resources_per_stack',
- default=1000,
- help='Maximum resources allowed per top-level stack.')]
-
-rpc_opts = [
- cfg.StrOpt('host',
- default=socket.gethostname(),
- help='Name of the engine node. '
- 'This can be an opaque identifier. '
- 'It is not necessarily a hostname, FQDN, or IP address.')]
-
-auth_password_group = cfg.OptGroup('auth_password')
-auth_password_opts = [
- cfg.BoolOpt('multi_cloud',
- default=False,
- help=_('Allow orchestration of multiple clouds')),
- cfg.ListOpt('allowed_auth_uris',
- default=[],
- help=_('Allowed keystone endpoints for auth_uri when '
- 'multi_cloud is enabled. At least one endpoint needs '
- 'to be specified.'))]
-
-cfg.CONF.register_opts(db_opts)
-cfg.CONF.register_opts(engine_opts)
-cfg.CONF.register_opts(service_opts)
-cfg.CONF.register_opts(rpc_opts)
-cfg.CONF.register_group(paste_deploy_group)
-cfg.CONF.register_opts(paste_deploy_opts, group=paste_deploy_group)
-cfg.CONF.register_group(auth_password_group)
-cfg.CONF.register_opts(auth_password_opts, group=auth_password_group)
-
-
-def rpc_set_default():
- rpc.set_defaults(control_exchange='heat')
-
-
-def _get_deployment_flavor():
- """
- Retrieve the paste_deploy.flavor config item, formatted appropriately
- for appending to the application name.
- """
- flavor = cfg.CONF.paste_deploy.flavor
- return '' if not flavor else ('-' + flavor)
-
-
-def _get_deployment_config_file():
- """
- Retrieve the deployment_config_file config item, formatted as an
- absolute pathname.
- """
- config_path = cfg.CONF.find_file(
- cfg.CONF.paste_deploy['api_paste_config'])
- if config_path is None:
- return None
-
- return os.path.abspath(config_path)
-
-
-def load_paste_app(app_name=None):
- """
- Builds and returns a WSGI app from a paste config file.
-
- We assume the last config file specified in the supplied ConfigOpts
- object is the paste config file.
-
- :param app_name: name of the application to load
-
- :raises RuntimeError when config file cannot be located or application
- cannot be loaded from config file
- """
- if app_name is None:
- app_name = cfg.CONF.prog
-
- # append the deployment flavor to the application name,
- # in order to identify the appropriate paste pipeline
- app_name += _get_deployment_flavor()
-
- conf_file = _get_deployment_config_file()
- if conf_file is None:
- raise RuntimeError("Unable to locate config file")
-
- try:
- app = wsgi.paste_deploy_app(conf_file, app_name, cfg.CONF)
-
- # Log the options used when starting if we're in debug mode...
- if cfg.CONF.debug:
- cfg.CONF.log_opt_values(logging.getLogger(app_name),
- sys_logging.DEBUG)
-
- return app
- except (LookupError, ImportError) as e:
- raise RuntimeError("Unable to load %(app_name)s from "
- "configuration file %(conf_file)s."
- "\nGot: %(e)r" % {'app_name': app_name,
- 'conf_file': conf_file,
- 'e': e})
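`_get_deployment_flavor` above turns an optional paste_deploy flavor into a suffix that selects the paste pipeline. The naming rule can be sketched on its own (function names here are illustrative):

```python
def flavor_suffix(flavor):
    """Return '' for an unset flavor, otherwise '-<flavor>'."""
    return '' if not flavor else '-' + flavor


def paste_app_name(base, flavor):
    """Compose the paste pipeline name, e.g. base 'heat-api' plus a flavor."""
    return base + flavor_suffix(flavor)
```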
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from oslo.config import cfg
-
-from heat.openstack.common import local
-from heat.common import exception
-from heat.common import wsgi
-from heat.openstack.common import context
-from heat.openstack.common import importutils
-from heat.openstack.common import uuidutils
-from heat.db import api as db_api
-
-
-def generate_request_id():
- return 'req-' + uuidutils.generate_uuid()
-
-
-class RequestContext(context.RequestContext):
- """
- Stores information about the security context under which the user
- accesses the system, as well as additional request information.
- """
-
- def __init__(self, auth_token=None, username=None, password=None,
- aws_creds=None, tenant=None,
- tenant_id=None, auth_url=None, roles=None, is_admin=False,
- read_only=False, show_deleted=False,
- owner_is_tenant=True, overwrite=True,
- trust_id=None, trustor_user_id=None,
- **kwargs):
- """
- :param overwrite: Set to False to ensure that the greenthread local
- copy of the index is not overwritten.
-
- :param kwargs: Extra arguments that might be present, but we ignore
- because they possibly came in from older rpc messages.
- """
- super(RequestContext, self).__init__(auth_token=auth_token,
- user=username, tenant=tenant,
- is_admin=is_admin,
- read_only=read_only,
- show_deleted=show_deleted,
- request_id='unused')
-
- self.username = username
- self.password = password
- self.aws_creds = aws_creds
- self.tenant_id = tenant_id
- self.auth_url = auth_url
- self.roles = roles or []
- self.owner_is_tenant = owner_is_tenant
- if overwrite or not hasattr(local.store, 'context'):
- self.update_store()
- self._session = None
- self.trust_id = trust_id
- self.trustor_user_id = trustor_user_id
-
- def update_store(self):
- local.store.context = self
-
- @property
- def session(self):
- if self._session is None:
- self._session = db_api.get_session()
- return self._session
-
- def to_dict(self):
- return {'auth_token': self.auth_token,
- 'username': self.user,
- 'password': self.password,
- 'aws_creds': self.aws_creds,
- 'tenant': self.tenant,
- 'tenant_id': self.tenant_id,
- 'trust_id': self.trust_id,
- 'trustor_user_id': self.trustor_user_id,
- 'auth_url': self.auth_url,
- 'roles': self.roles,
- 'is_admin': self.is_admin}
-
- @classmethod
- def from_dict(cls, values):
- return cls(**values)
-
- @property
- def owner(self):
- """Return the owner to correlate with an image."""
- return self.tenant if self.owner_is_tenant else self.user
-
-
-def get_admin_context(read_deleted="no"):
- return RequestContext(is_admin=True)
-
-
-class ContextMiddleware(wsgi.Middleware):
-
- opts = [cfg.BoolOpt('owner_is_tenant', default=True),
- cfg.StrOpt('admin_role', default='admin')]
-
- def __init__(self, app, conf, **local_conf):
- cfg.CONF.register_opts(self.opts)
-
- # Determine the context class to use
- self.ctxcls = RequestContext
- if 'context_class' in local_conf:
- self.ctxcls = importutils.import_class(local_conf['context_class'])
-
- super(ContextMiddleware, self).__init__(app)
-
- def make_context(self, *args, **kwargs):
- """
- Create a context with the given arguments.
- """
- kwargs.setdefault('owner_is_tenant', cfg.CONF.owner_is_tenant)
-
- return self.ctxcls(*args, **kwargs)
-
- def process_request(self, req):
- """
- Extract any authentication information in the request and
- construct an appropriate context from it.
-
- A few scenarios exist:
-
- 1. If X-Auth-Token is passed in, then consult TENANT and ROLE headers
- to determine permissions.
-
- 2. An X-Auth-Token was passed in, but the Identity-Status is not
- confirmed. For now, we just raise a NotAuthenticated exception.
-
- 3. X-Auth-Token is omitted. If we were using Keystone, then the
- tokenauth middleware would have rejected the request, so we must be
- using NoAuth. In that case, assume that is_admin=True.
- """
- headers = req.headers
-
- try:
- """
- This sets the username/password to the admin user because you
- need this information in order to perform token authentication.
- The real 'username' is the 'tenant'.
-
- We should also check here to see if X-Auth-Token is not set and
- in that case we should assign the user/pass directly as the real
- username/password and token as None. 'tenant' should still be
- the username.
- """
-
- username = None
- password = None
- aws_creds = None
-
- if headers.get('X-Auth-User') is not None:
- username = headers.get('X-Auth-User')
- password = headers.get('X-Auth-Key')
- elif headers.get('X-Auth-EC2-Creds') is not None:
- aws_creds = headers.get('X-Auth-EC2-Creds')
-
- token = headers.get('X-Auth-Token')
- tenant = headers.get('X-Tenant-Name')
- tenant_id = headers.get('X-Tenant-Id')
- auth_url = headers.get('X-Auth-Url')
- roles = headers.get('X-Roles')
- if roles is not None:
- roles = roles.split(',')
-
- except Exception:
- raise exception.NotAuthenticated()
-
- req.context = self.make_context(auth_token=token,
- tenant=tenant, tenant_id=tenant_id,
- aws_creds=aws_creds,
- username=username,
- password=password,
- auth_url=auth_url, roles=roles,
- is_admin=True)
-
-
-def ContextMiddleware_filter_factory(global_conf, **local_conf):
- """
- Factory method for paste.deploy
- """
- conf = global_conf.copy()
- conf.update(local_conf)
-
- def filter(app):
- return ContextMiddleware(app, conf)
-
- return filter
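`RequestContext` above serializes to a plain dict with `to_dict` for RPC and is rebuilt with `from_dict`, silently ignoring unknown keys from older messages. The round-trip contract can be sketched with a stripped-down stand-in (not the real class):

```python
class MiniContext(object):
    """Minimal stand-in mirroring the to_dict/from_dict round trip."""

    def __init__(self, username=None, tenant=None, roles=None, **kwargs):
        # **kwargs is swallowed, like the real class, so dicts from
        # older RPC messages with extra keys still deserialize.
        self.username = username
        self.tenant = tenant
        self.roles = roles or []

    def to_dict(self):
        return {'username': self.username,
                'tenant': self.tenant,
                'roles': self.roles}

    @classmethod
    def from_dict(cls, values):
        return cls(**values)
```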
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import base64
-from Crypto.Cipher import AES
-from os import urandom
-
-from oslo.config import cfg
-
-from heat.openstack.common import log as logging
-
-
-auth_opts = [
- cfg.StrOpt('auth_encryption_key',
- default='notgood but just long enough i think',
- help="Encryption key used for authentication info in database")
-]
-
-cfg.CONF.register_opts(auth_opts)
-
-logger = logging.getLogger(__name__)
-
-
-def encrypt(auth_info):
- if auth_info is None:
- return None
- iv = urandom(AES.block_size)
- cipher = AES.new(cfg.CONF.auth_encryption_key[:32], AES.MODE_CFB, iv)
- res = base64.b64encode(iv + cipher.encrypt(auth_info))
- return res
-
-
-def decrypt(auth_info):
- if auth_info is None:
- return None
- auth = base64.b64decode(auth_info)
- iv = auth[:AES.block_size]
- cipher = AES.new(cfg.CONF.auth_encryption_key[:32], AES.MODE_CFB, iv)
- res = cipher.decrypt(auth[AES.block_size:])
- return res
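The encrypt/decrypt pair above follows a common layout: generate a random IV, cipher the payload with the first 32 bytes of the configured key, and base64 the IV-plus-ciphertext so decrypt can recover the IV from the prefix. That layout (not the cryptography — the real code uses AES-CFB via PyCrypto) can be shown with a stdlib-only toy keystream; do not use this sketch for real secrets:

```python
import base64
import hashlib
import os

BLOCK = 16  # mirrors AES.block_size


def _keystream(key, iv, length):
    """Toy SHA-256 counter keystream (NOT AES; layout demo only)."""
    out = b''
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + iv + counter.to_bytes(8, 'big')).digest()
        counter += 1
    return out[:length]


def toy_encrypt(key, plaintext):
    """Prepend a random IV, XOR with the keystream, base64 the result."""
    iv = os.urandom(BLOCK)
    ks = _keystream(key, iv, len(plaintext))
    ct = bytes(a ^ b for a, b in zip(plaintext, ks))
    return base64.b64encode(iv + ct)


def toy_decrypt(key, token):
    """Split off the IV prefix and reverse the XOR."""
    raw = base64.b64decode(token)
    iv, ct = raw[:BLOCK], raw[BLOCK:]
    ks = _keystream(key, iv, len(ct))
    return bytes(a ^ b for a, b in zip(ct, ks))
```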
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Copyright (C) 2012, Red Hat, Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
-# implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""
-Middleware for authenticating against custom backends.
-"""
-
-import logging
-
-from heat.openstack.common import local
-from heat.rpc import client as rpc_client
-import webob.exc
-
-
-LOG = logging.getLogger(__name__)
-
-
-class AuthProtocol(object):
- def __init__(self, app, conf):
- self.conf = conf
- self.app = app
-
- def __call__(self, env, start_response):
- """
- Handle incoming request.
-
- Authenticate, and on success send the request downstream. Reject
- the request if we can't authenticate.
- """
- LOG.debug('Authenticating user token')
- context = local.store.context
- engine = rpc_client.EngineClient()
- authenticated = engine.authenticated_to_backend(context)
- if authenticated:
- return self.app(env, start_response)
- else:
- return self._reject_request(env, start_response)
-
- def _reject_request(self, env, start_response):
- """
- Reject the request as unauthorized.
-
- :param env: wsgi request environment
- :param start_response: wsgi response callback
- :returns HTTPUnauthorized http response
- """
- resp = webob.exc.HTTPUnauthorized("Backend authentication failed", [])
- return resp(env, start_response)
-
-
-def filter_factory(global_conf, **local_conf):
- conf = global_conf.copy()
- conf.update(local_conf)
-
- def auth_filter(app):
- return AuthProtocol(app, conf)
- return auth_filter
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from heat.common.template_format import yaml
-
-
-SECTIONS = (PARAMETERS, RESOURCE_REGISTRY) = \
- ('parameters', 'resource_registry')
-
-
-def parse(env_str):
- '''
- Takes a string and returns a dict containing the parsed structure.
- This includes determination of whether the string is using the
- JSON or YAML format.
- '''
- try:
- env = yaml.safe_load(env_str)
- except (yaml.scanner.ScannerError, yaml.parser.ParserError) as e:
- raise ValueError(e)
- else:
- if env is None:
- env = {}
-
- for param in env:
- if param not in SECTIONS:
- raise ValueError('environment has wrong section "%s"' % param)
-
- return env
-
-
-def default_for_missing(env):
- '''
- Checks a parsed environment for missing sections.
- '''
- for param in SECTIONS:
- if param not in env:
- env[param] = {}
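`parse()` above leans on JSON being a subset of YAML, so one `yaml.safe_load` call accepts both formats, and `default_for_missing()` fills absent sections. The same validate-and-default flow can be sketched stdlib-only (JSON input here, to avoid the PyYAML dependency):

```python
import json

SECTIONS = ('parameters', 'resource_registry')


def parse_env(env_str):
    """Parse an environment string and reject unknown top-level sections.

    The real code feeds env_str to yaml.safe_load, which also accepts
    JSON; this sketch parses JSON only.
    """
    env = json.loads(env_str) if env_str.strip() else {}
    for section in env:
        if section not in SECTIONS:
            raise ValueError('environment has wrong section "%s"' % section)
    # Equivalent of default_for_missing(): guarantee both sections exist.
    for section in SECTIONS:
        env.setdefault(section, {})
    return env
```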
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Copyright 2010 United States Government as represented by the
-# Administrator of the National Aeronautics and Space Administration.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-"""Heat exception subclasses"""
-
-import functools
-import urlparse
-import sys
-
-from heat.openstack.common import log as logging
-
-
-_FATAL_EXCEPTION_FORMAT_ERRORS = False
-
-
-logger = logging.getLogger(__name__)
-
-
-class RedirectException(Exception):
- def __init__(self, url):
- self.url = urlparse.urlparse(url)
-
-
-class KeystoneError(Exception):
- def __init__(self, code, message):
- self.code = code
- self.message = message
-
- def __str__(self):
- return "Code: %s, message: %s" % (self.code, self.message)
-
-
-def wrap_exception(notifier=None, publisher_id=None, event_type=None,
- level=None):
- """This decorator wraps a method to catch any exceptions that may
- get thrown. It logs the exception as well as optionally sending
- it to the notification system.
- """
- # TODO(sandy): Find a way to import nova.notifier.api so we don't have
- # to pass it in as a parameter. Otherwise we get a cyclic import of
- # nova.notifier.api -> nova.utils -> nova.exception :(
- # TODO(johannes): Also, it would be nice to use
- # utils.save_and_reraise_exception() without an import loop
- def inner(f):
- def wrapped(*args, **kw):
- try:
- return f(*args, **kw)
- except Exception as e:
- # Save exception since it can be clobbered during processing
- # below before we can re-raise
- exc_info = sys.exc_info()
-
- if notifier:
- payload = dict(args=args, exception=e)
- payload.update(kw)
-
- # Use a temp vars so we don't shadow
- # our outer definitions.
- temp_level = level
- if not temp_level:
- temp_level = notifier.ERROR
-
- temp_type = event_type
- if not temp_type:
- # If f has multiple decorators, they must use
- # functools.wraps to ensure the name is
- # propagated.
- temp_type = f.__name__
-
- notifier.notify(publisher_id, temp_type, temp_level,
- payload)
-
- # re-raise original exception since it may have been clobbered
- raise exc_info[0], exc_info[1], exc_info[2]
-
- return functools.wraps(f)(wrapped)
- return inner
-
-
-class HeatException(Exception):
- """Base Heat Exception
-
- To correctly use this class, inherit from it and define
- a 'message' property. That message will get printf'd
- with the keyword arguments provided to the constructor.
-
- """
- message = _("An unknown exception occurred.")
-
- def __init__(self, **kwargs):
- self.kwargs = kwargs
-
- try:
- self.message = self.message % kwargs
- except KeyError:
- exc_info = sys.exc_info()
- # kwargs doesn't match a variable in the message
- # log the issue and the kwargs
- logger.exception(_('Exception in string format operation'))
- for name, value in kwargs.iteritems():
- logger.error("%s: %s" % (name, value))
-
- if _FATAL_EXCEPTION_FORMAT_ERRORS:
- raise exc_info[0], exc_info[1], exc_info[2]
-
- def __str__(self):
- return str(self.message)
-
- def __unicode__(self):
- return unicode(self.message)
-
-
-class MissingCredentialError(HeatException):
- message = _("Missing required credential: %(required)s")
-
-
-class BadAuthStrategy(HeatException):
- message = _("Incorrect auth strategy, expected \"%(expected)s\" but "
- "received \"%(received)s\"")
-
-
-class AuthBadRequest(HeatException):
- message = _("Connect error/bad request to Auth service at URL %(url)s.")
-
-
-class AuthUrlNotFound(HeatException):
- message = _("Auth service at URL %(url)s not found.")
-
-
-class AuthorizationFailure(HeatException):
- message = _("Authorization failed.")
-
-
-class NotAuthenticated(HeatException):
- message = _("You are not authenticated.")
-
-
-class Forbidden(HeatException):
- message = _("You are not authorized to complete this action.")
-
-
-#NOTE(bcwaldon): here for backwards-compatibility, need to deprecate.
-class NotAuthorized(Forbidden):
- message = _("You are not authorized to complete this action.")
-
-
-class Invalid(HeatException):
- message = _("Data supplied was not valid: %(reason)s")
-
-
-class AuthorizationRedirect(HeatException):
- message = _("Redirecting to %(uri)s for authorization.")
-
-
-class ClientConfigurationError(HeatException):
- message = _("There was an error configuring the client.")
-
-
-class RequestUriTooLong(HeatException):
- message = _("The URI was too long.")
-
-
-class ServerError(HeatException):
- message = _("The request returned 500 Internal Server Error"
- "\n\nThe response body:\n%(body)s")
-
-
-class MaxRedirectsExceeded(HeatException):
- message = _("Maximum redirects (%(redirects)s) was exceeded.")
-
-
-class InvalidRedirect(HeatException):
- message = _("Received invalid HTTP redirect.")
-
-
-class NoServiceEndpoint(HeatException):
- message = _("Response from Keystone does not contain a Heat endpoint.")
-
-
-class RegionAmbiguity(HeatException):
- message = _("Multiple 'image' service matches for region %(region)s. This "
- "generally means that a region is required and you have not "
- "supplied one.")
-
-
-class UserParameterMissing(HeatException):
- message = _("The Parameter (%(key)s) was not provided.")
-
-
-class UnknownUserParameter(HeatException):
- message = _("The Parameter (%(key)s) was not defined in template.")
-
-
-class InvalidTemplateAttribute(HeatException):
- message = _("The Referenced Attribute (%(resource)s %(key)s)"
- " is incorrect.")
-
-
-class InvalidTemplateReference(HeatException):
- message = _("The specified reference \"%(resource)s\" (in %(key)s)"
- " is incorrect.")
-
-
-class UserKeyPairMissing(HeatException):
- message = _("The Key (%(key_name)s) could not be found.")
-
-
-class FlavorMissing(HeatException):
- message = _("The Flavor ID (%(flavor_id)s) could not be found.")
-
-
-class ImageNotFound(HeatException):
- message = _("The Image (%(image_name)s) could not be found.")
-
-
-class NoUniqueImageFound(HeatException):
- message = _("Multiple images were found with name (%(image_name)s).")
-
-
-class InvalidTenant(HeatException):
- message = _("Searching Tenant %(target)s "
- "from Tenant %(actual)s forbidden.")
-
-
-class StackNotFound(HeatException):
- message = _("The Stack (%(stack_name)s) could not be found.")
-
-
-class StackExists(HeatException):
- message = _("The Stack (%(stack_name)s) already exists.")
-
-
-class StackValidationFailed(HeatException):
- message = _("%(message)s")
-
-
-class ResourceNotFound(HeatException):
- message = _("The Resource (%(resource_name)s) could not be found "
- "in Stack %(stack_name)s.")
-
-
-class ResourceTypeNotFound(HeatException):
- message = _("The Resource Type (%(type_name)s) could not be found.")
-
-
-class ResourceNotAvailable(HeatException):
- message = _("The Resource (%(resource_name)s) is not available.")
-
-
-class PhysicalResourceNotFound(HeatException):
- message = _("The Resource (%(resource_id)s) could not be found.")
-
-
-class WatchRuleNotFound(HeatException):
- message = _("The Watch Rule (%(watch_name)s) could not be found.")
-
-
-class ResourceFailure(HeatException):
- message = _("%(exc_type)s: %(message)s")
-
- def __init__(self, exception, resource, action=None):
- if isinstance(exception, ResourceFailure):
- exception = getattr(exception, 'exc', exception)
- self.exc = exception
- self.resource = resource
- self.action = action
- exc_type = type(exception).__name__
- super(ResourceFailure, self).__init__(exc_type=exc_type,
- message=str(exception))
-
-
-class NotSupported(HeatException):
- message = _("%(feature)s is not supported.")
-
-
-class ResourcePropertyConflict(HeatException):
- message = _('Cannot define the following properties at the same time: %s.')
-
- def __init__(self, *args):
- self.message = self.message % ", ".join(args)
- super(ResourcePropertyConflict, self).__init__()
-
-
-class HTTPExceptionDisguise(Exception):
- """Disguises HTTP exceptions so they can be handled by the webob fault
- application in the wsgi pipeline.
- """
-
- def __init__(self, exception):
- self.exc = exception
- self.tb = sys.exc_info()[2]
-
-
-class EgressRuleNotAllowed(HeatException):
- message = _("Egress rules are only allowed when "
- "Neutron is used and the 'VpcId' property is set.")
-
-
-class Error(Exception):
- def __init__(self, message=None):
- super(Error, self).__init__(message)
-
-
-class NotFound(Error):
- pass
-
-
-class InvalidContentType(HeatException):
- message = "Invalid content type %(content_type)s"
-
-
-class RequestLimitExceeded(HeatException):
- message = _('Request limit exceeded: %(message)s')
-
-
-class StackResourceLimitExceeded(HeatException):
- message = _('Maximum resources per stack exceeded.')
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from heat.common import exception
-
-import eventlet
-import hashlib
-
-from keystoneclient.v2_0 import client as kc
-from keystoneclient.v3 import client as kc_v3
-from oslo.config import cfg
-
-from heat.openstack.common import importutils
-from heat.openstack.common import log as logging
-
-logger = logging.getLogger('heat.common.keystoneclient')
-
-
-class KeystoneClient(object):
- """
- Wrap keystone client so we can encapsulate logic used in resources
- Note this is intended to be initialized from a resource on a per-session
- basis, so the session context is passed in on initialization
- Also note that a copy of this is created every resource as self.keystone()
- via the code in engine/client.py, so there should not be any need to
- directly instantiate instances of this class inside resources themselves
- """
- def __init__(self, context):
- self.context = context
- # We have to maintain two clients authenticated with keystone:
- # - ec2 interface is v2.0 only
- # - trusts is v3 only
- # - passing a v2 auth_token to the v3 client won't work until lp bug
- # #1212778 is fixed
- # - passing a v3 token to the v2 client works but we have to either
- # md5sum it or use the nocatalog option to auth/tokens (not yet
- supported by keystoneclient), or we hit the v2 8192-byte size limit
- # - context.auth_url is expected to contain the v2.0 keystone endpoint
- if cfg.CONF.deferred_auth_method == 'trusts':
- # Create connection to v3 API
- self.client_v3 = self._v3_client_init()
-
- # Set context auth_token to md5sum of v3 token
- auth_token = self.client_v3.auth_ref.get('auth_token')
- self.context.auth_token = self._md5_token(auth_token)
-
- # Create the connection to the v2 API, reusing the md5-ified token
- self.client_v2 = self._v2_client_init()
- else:
- # Create the connection to the v2 API, using the context creds
- self.client_v2 = self._v2_client_init()
- self.client_v3 = None
-
- def _md5_token(self, auth_token):
- # Get the md5sum of the v3 token, which we can pass instead of the
- # actual token to avoid the 8192-byte size limit on the v2 token API
- m_enc = hashlib.md5()
- m_enc.update(auth_token)
- return m_enc.hexdigest()
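On Python 3, the token would need to be encoded before hashing; a minimal equivalent of `_md5_token`, assuming the token arrives as a text string:

```python
import hashlib


def md5_token(auth_token):
    # Hash the (potentially very large) v3 token so the result fits
    # well within the v2 API's 8192-byte token size limit.
    return hashlib.md5(auth_token.encode('utf-8')).hexdigest()
```

The hex digest is always 32 characters, regardless of how large the original token was.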
-
- def _v2_client_init(self):
- kwargs = {
- 'auth_url': self.context.auth_url
- }
- # Note check for auth_token first so we use existing token if
- # available from v3 auth
- if self.context.auth_token is not None:
- kwargs['tenant_name'] = self.context.tenant
- kwargs['token'] = self.context.auth_token
- elif self.context.password is not None:
- kwargs['username'] = self.context.username
- kwargs['password'] = self.context.password
- kwargs['tenant_name'] = self.context.tenant
- kwargs['tenant_id'] = self.context.tenant_id
- else:
- logger.error("Keystone v2 API connection failed, no password or "
- "auth_token!")
- raise exception.AuthorizationFailure()
- client_v2 = kc.Client(**kwargs)
- if not client_v2.authenticate():
- logger.error("Keystone v2 API authentication failed")
- raise exception.AuthorizationFailure()
- return client_v2
-
- @staticmethod
- def _service_admin_creds(api_version=2):
- # Import auth_token to have keystone_authtoken settings set up.
- importutils.import_module('keystoneclient.middleware.auth_token')
-
- creds = {
- 'username': cfg.CONF.keystone_authtoken.admin_user,
- 'password': cfg.CONF.keystone_authtoken.admin_password,
- }
- if api_version >= 3:
- creds['auth_url'] =\
- cfg.CONF.keystone_authtoken.auth_uri.replace('v2.0', 'v3')
- creds['project_name'] =\
- cfg.CONF.keystone_authtoken.admin_tenant_name
- else:
- creds['auth_url'] = cfg.CONF.keystone_authtoken.auth_uri
- creds['tenant_name'] =\
- cfg.CONF.keystone_authtoken.admin_tenant_name
-
- return creds
-
- def _v3_client_init(self):
- kwargs = {}
- if self.context.auth_token is not None:
- kwargs['project_name'] = self.context.tenant
- kwargs['token'] = self.context.auth_token
- kwargs['auth_url'] = self.context.auth_url.replace('v2.0', 'v3')
- kwargs['endpoint'] = kwargs['auth_url']
- elif self.context.trust_id is not None:
- # We got a trust_id, so we use the admin credentials and get a
- # Token back impersonating the trustor user
- kwargs.update(self._service_admin_creds(api_version=3))
- kwargs['trust_id'] = self.context.trust_id
- elif self.context.password is not None:
- kwargs['username'] = self.context.username
- kwargs['password'] = self.context.password
- kwargs['project_name'] = self.context.tenant
- kwargs['project_id'] = self.context.tenant_id
- kwargs['auth_url'] = self.context.auth_url.replace('v2.0', 'v3')
- kwargs['endpoint'] = kwargs['auth_url']
- else:
- logger.error("Keystone v3 API connection failed, no password or "
- "auth_token!")
- raise exception.AuthorizationFailure()
-
- client_v3 = kc_v3.Client(**kwargs)
- if not client_v3.authenticate():
- logger.error("Keystone v3 API authentication failed")
- raise exception.AuthorizationFailure()
- return client_v3
-
- def create_trust_context(self):
- """
- If cfg.CONF.deferred_auth_method is trusts, we create a
- trust using the trustor identity in the current context, with the
- trustee as the heat service user
-
- If deferred_auth_method != trusts, we do nothing
-
- If the current context already contains a trust_id, we do nothing
- """
- if cfg.CONF.deferred_auth_method != 'trusts':
- return
-
- if self.context.trust_id:
- return
-
- # We need the service admin user ID (not name), as the trustor user
- # can't lookup the ID in keystoneclient unless they're admin
- # workaround this by creating a temporary admin client connection
- # then getting the user ID from the auth_ref
- admin_creds = self._service_admin_creds()
- admin_client = kc.Client(**admin_creds)
- if not admin_client.authenticate():
- logger.error("Keystone v2 API admin authentication failed")
- raise exception.AuthorizationFailure()
-
- trustee_user_id = admin_client.auth_ref['user']['id']
- trustor_user_id = self.client_v3.auth_ref['user']['id']
- trustor_project_id = self.client_v3.auth_ref['project']['id']
- roles = cfg.CONF.trusts_delegated_roles
- trust = self.client_v3.trusts.create(trustor_user=trustor_user_id,
- trustee_user=trustee_user_id,
- project=trustor_project_id,
- impersonation=True,
- role_names=roles)
- self.context.trust_id = trust.id
- self.context.trustor_user_id = trustor_user_id
-
- def delete_trust_context(self):
- """
- If a trust_id exists in the context, we delete it
-
- """
- if not self.context.trust_id:
- return
-
- self.client_v3.trusts.delete(self.context.trust_id)
-
- self.context.trust_id = None
- self.context.trustor_user_id = None
-
- def create_stack_user(self, username, password=''):
- """
- Create a user defined as part of a stack, either via template
- or created internally by a resource. This user will be added to
- the heat_stack_user_role as defined in the config
- Returns the keystone ID of the resulting user
- """
- if len(username) > 64:
- logger.warning("Truncating the username %s to the last 64 "
- "characters." % username)
- # get the last 64 characters of the username
- username = username[-64:]
- user = self.client_v2.users.create(username,
- password,
- '%s@heat-api.org' %
- username,
- tenant_id=self.context.tenant_id,
- enabled=True)
-
- # We add the new user to a special keystone role
- # This role is designed to allow easier differentiation of the
- # heat-generated "stack users" which will generally have credentials
- # deployed on an instance (hence are implicitly untrusted)
- roles = self.client_v2.roles.list()
- stack_user_role = [r.id for r in roles
- if r.name == cfg.CONF.heat_stack_user_role]
- if len(stack_user_role) == 1:
- role_id = stack_user_role[0]
- logger.debug("Adding user %s to role %s" % (user.id, role_id))
- self.client_v2.roles.add_user_role(user.id, role_id,
- self.context.tenant_id)
- else:
- logger.error("Failed to add user %s to role %s, check role exists!"
- % (username, cfg.CONF.heat_stack_user_role))
-
- return user.id
-
- def delete_stack_user(self, user_id):
-
- user = self.client_v2.users.get(user_id)
-
- # FIXME (shardy) : need to test, do we still need this retry logic?
- # Copied from user.py, but seems like something we really shouldn't
- # need to do, no bug reference in the original comment (below)...
- # temporary hack to work around an openstack bug.
- # seems you can't delete a user first time - you have to try
- # a couple of times - go figure!
- tmo = eventlet.Timeout(10)
- status = 'WAITING'
- reason = 'Timed out trying to delete user'
- try:
- while status == 'WAITING':
- try:
- user.delete()
- status = 'DELETED'
- except Exception as ce:
- reason = str(ce)
- logger.warning("Problem deleting user %s: %s" %
- (user_id, reason))
- eventlet.sleep(1)
- except eventlet.Timeout as t:
- if t is not tmo:
- # not my timeout
- raise
- else:
- status = 'TIMEDOUT'
- finally:
- tmo.cancel()
-
- if status != 'DELETED':
- raise exception.Error(reason)
-
- def delete_ec2_keypair(self, user_id, accesskey):
- self.client_v2.ec2.delete(user_id, accesskey)
-
- def get_ec2_keypair(self, user_id):
- # We make the assumption that each user will only have one
- # ec2 keypair; it's not clear whether AWS allows multiple AccessKey
- # resources to be associated with a single User resource, but for
- # simplicity we assume one here for now
- cred = self.client_v2.ec2.list(user_id)
- if len(cred) == 0:
- return self.client_v2.ec2.create(user_id, self.context.tenant_id)
- if len(cred) == 1:
- return cred[0]
- else:
- logger.error("Unexpected number of ec2 credentials %s for %s" %
- (len(cred), user_id))
-
- def disable_stack_user(self, user_id):
- # FIXME : This won't work with the v3 keystone API
- self.client_v2.users.update_enabled(user_id, False)
-
- def enable_stack_user(self, user_id):
- # FIXME : This won't work with the v3 keystone API
- self.client_v2.users.update_enabled(user_id, True)
-
- def url_for(self, **kwargs):
- return self.client_v2.service_catalog.url_for(**kwargs)
-
- @property
- def auth_token(self):
- return self.client_v2.auth_token
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import re
-import urllib
-import urlparse
-import collections
-
-
-class HeatIdentifier(collections.Mapping):
-
- FIELDS = (
- TENANT, STACK_NAME, STACK_ID, PATH
- ) = (
- 'tenant', 'stack_name', 'stack_id', 'path'
- )
- path_re = re.compile(r'stacks/([^/]+)/([^/]+)(.*)')
-
- def __init__(self, tenant, stack_name, stack_id, path=''):
- '''
- Initialise a HeatIdentifier from a Tenant ID, Stack name, Stack ID
- and optional path. If a path is supplied and it does not begin with
- "/", a "/" will be prepended.
- '''
- if path and not path.startswith('/'):
- path = '/' + path
-
- if '/' in stack_name:
- raise ValueError('Stack name may not contain "/"')
-
- self.identity = {
- self.TENANT: tenant,
- self.STACK_NAME: stack_name,
- self.STACK_ID: str(stack_id),
- self.PATH: path,
- }
-
- @classmethod
- def from_arn(cls, arn):
- '''
- Return a new HeatIdentifier generated by parsing the supplied ARN.
- '''
- fields = arn.split(':')
- if len(fields) < 6 or fields[0].lower() != 'arn':
- raise ValueError('"%s" is not a valid ARN' % arn)
-
- id_fragment = ':'.join(fields[5:])
- path = cls.path_re.match(id_fragment)
-
- if fields[1] != 'openstack' or fields[2] != 'heat' or not path:
- raise ValueError('"%s" is not a valid Heat ARN' % arn)
-
- return cls(urllib.unquote(fields[4]),
- urllib.unquote(path.group(1)),
- urllib.unquote(path.group(2)),
- urllib.unquote(path.group(3)))
-
- @classmethod
- def from_arn_url(cls, url):
- '''
- Return a new HeatIdentifier generated by parsing the supplied URL
- The URL is expected to contain a valid arn as part of the path
- '''
- # Sanity check the URL
- urlp = urlparse.urlparse(url)
- if (urlp.scheme not in ('http', 'https') or
- not urlp.netloc or not urlp.path):
- raise ValueError('"%s" is not a valid URL' % url)
-
- # Remove any query-string and extract the ARN
- arn_url_prefix = '/arn%3Aopenstack%3Aheat%3A%3A'
- match = re.search(arn_url_prefix, urlp.path, re.IGNORECASE)
- if match is None:
- raise ValueError('"%s" is not a valid ARN URL' % url)
- # the +1 is to skip the leading /
- url_arn = urlp.path[match.start() + 1:]
- arn = urllib.unquote(url_arn)
- return cls.from_arn(arn)
-
- def arn(self):
- '''
- Return an ARN of the form:
- arn:openstack:heat::<tenant>:stacks/<stack_name>/<stack_id><path>
- '''
- return 'arn:openstack:heat::%s:%s' % (urllib.quote(self.tenant, ''),
- self._tenant_path())
-
- def arn_url_path(self):
- '''
- Return an ARN quoted correctly for use in a URL
- '''
- return '/' + urllib.quote(self.arn(), '')
-
- def url_path(self):
- '''
- Return a URL-encoded path segment of a URL in the form:
- <tenant>/stacks/<stack_name>/<stack_id><path>
- '''
- return '/'.join((urllib.quote(self.tenant, ''), self._tenant_path()))
-
- def _tenant_path(self):
- '''
- Return a URL-encoded path segment of a URL within a particular tenant,
- in the form:
- stacks/<stack_name>/<stack_id><path>
- '''
- return 'stacks/%s/%s%s' % (urllib.quote(self.stack_name, ''),
- urllib.quote(self.stack_id, ''),
- urllib.quote(self.path))
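The ARN format produced by `arn()` and consumed by `from_arn()` can be sketched standalone. A minimal round-trip using Python 3's `urllib.parse` (the py3 home of `urllib.quote`/`unquote`), with the same path regex as the class above:

```python
import re
from urllib.parse import quote, unquote

path_re = re.compile(r'stacks/([^/]+)/([^/]+)(.*)')


def make_arn(tenant, stack_name, stack_id, path=''):
    # arn:openstack:heat::<tenant>:stacks/<stack_name>/<stack_id><path>
    return 'arn:openstack:heat::%s:stacks/%s/%s%s' % (
        quote(tenant, ''), quote(stack_name, ''), quote(stack_id, ''),
        quote(path))


def parse_arn(arn):
    # Split on ':' and validate the fixed fields, exactly as from_arn()
    # does; the id fragment after the 5th colon may itself contain ':'.
    fields = arn.split(':')
    if len(fields) < 6 or fields[0].lower() != 'arn':
        raise ValueError('"%s" is not a valid ARN' % arn)
    match = path_re.match(':'.join(fields[5:]))
    if fields[1] != 'openstack' or fields[2] != 'heat' or not match:
        raise ValueError('"%s" is not a valid Heat ARN' % arn)
    return (unquote(fields[4]),) + tuple(unquote(g) for g in match.groups())
```

Quoting the tenant, stack name and stack id with an empty safe-set ensures embedded `/` or `:` characters cannot corrupt the ARN's structure.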
-
- def _path_components(self):
- '''Return a list of the path components.'''
- return self.path.lstrip('/').split('/')
-
- def __getattr__(self, attr):
- '''
- Return one of the components of the identity when accessed as an
- attribute.
- '''
- if attr not in self.FIELDS:
- raise AttributeError('Unknown attribute "%s"' % attr)
-
- return self.identity[attr]
-
- def __getitem__(self, key):
- '''Return one of the components of the identity.'''
- if key not in self.FIELDS:
- raise KeyError('Unknown attribute "%s"' % key)
-
- return self.identity[key]
-
- def __len__(self):
- '''Return the number of components in an identity.'''
- return len(self.FIELDS)
-
- def __contains__(self, key):
- return key in self.FIELDS
-
- def __iter__(self):
- return iter(self.FIELDS)
-
- def __repr__(self):
- return repr(dict(self))
-
-
-class ResourceIdentifier(HeatIdentifier):
- '''An identifier for a resource.'''
-
- RESOURCE_NAME = 'resource_name'
-
- def __init__(self, tenant, stack_name, stack_id, path,
- resource_name=None):
- '''
- Return a new Resource identifier based on the identifier components of
- the owning stack and the resource name.
- '''
- if resource_name is not None:
- if '/' in resource_name:
- raise ValueError('Resource name may not contain "/"')
- path = '/'.join([path.rstrip('/'), 'resources', resource_name])
- super(ResourceIdentifier, self).__init__(tenant,
- stack_name,
- stack_id,
- path)
-
- def __getattr__(self, attr):
- '''
- Return one of the components of the identity when accessed as an
- attribute.
- '''
-
- if attr == self.RESOURCE_NAME:
- return self._path_components()[-1]
-
- return HeatIdentifier.__getattr__(self, attr)
-
- def stack(self):
- '''
- Return a HeatIdentifier for the owning stack
- '''
- return HeatIdentifier(self.tenant, self.stack_name, self.stack_id,
- '/'.join(self._path_components()[:-2]))
-
-
-class EventIdentifier(HeatIdentifier):
- '''An identifier for an event.'''
-
- (RESOURCE_NAME, EVENT_ID) = (ResourceIdentifier.RESOURCE_NAME, 'event_id')
-
- def __init__(self, tenant, stack_name, stack_id, path,
- event_id=None):
- '''
- Return a new Event identifier based on the identifier components of
- the associated resource and the event ID.
- '''
- if event_id is not None:
- path = '/'.join([path.rstrip('/'), 'events', event_id])
- super(EventIdentifier, self).__init__(tenant,
- stack_name,
- stack_id,
- path)
-
- def __getattr__(self, attr):
- '''
- Return one of the components of the identity when accessed as an
- attribute.
- '''
-
- if attr == self.RESOURCE_NAME:
- return getattr(self.resource(), attr)
- if attr == self.EVENT_ID:
- return self._path_components()[-1]
-
- return HeatIdentifier.__getattr__(self, attr)
-
- def resource(self):
- '''
- Return a HeatIdentifier for the owning resource
- '''
- return ResourceIdentifier(self.tenant, self.stack_name, self.stack_id,
- '/'.join(self._path_components()[:-2]))
-
- def stack(self):
- '''
- Return a HeatIdentifier for the owning stack
- '''
- return self.resource().stack()
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-'''
-Utilities to dynamically load plugin modules.
-
-Modules imported this way remain accessible to static imports, regardless of
-the order in which they are imported. For modules that are not part of an
-existing package tree, use create_subpackage() to dynamically create a package
-for them before loading them.
-'''
-
-import pkgutil
-import sys
-import types
-
-from heat.openstack.common import log as logging
-from heat.openstack.common.gettextutils import _
-
-logger = logging.getLogger(__name__)
-
-
-def _module_name(*components):
- '''Assemble a fully-qualified module name from its components.'''
- return '.'.join(components)
-
-
-def create_subpackage(path, parent_package_name, subpackage_name="plugins"):
- '''
- Dynamically create a package into which to load plugins.
-
- This allows us to not include an __init__.py in the plugins directory. We
- must still create a package for plugins to go in, otherwise we get warning
- messages during import. This also provides a convenient place to store the
- path(s) to the plugins directory.
- '''
- package_name = _module_name(parent_package_name, subpackage_name)
-
- package = types.ModuleType(package_name)
- package.__path__ = [path] if isinstance(path, basestring) else list(path)
- sys.modules[package_name] = package
-
- return package
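A minimal Python 3 sketch of the same runtime-package trick (`basestring` becomes `str`); the `myapp` package and path names below are hypothetical, used only for illustration:

```python
import sys
import types


def create_subpackage(path, parent_package_name, subpackage_name='plugins'):
    # Build a package object at runtime so plugin modules found under
    # ``path`` can live in a package that has no __init__.py on disk.
    package_name = '%s.%s' % (parent_package_name, subpackage_name)
    package = types.ModuleType(package_name)
    package.__path__ = [path] if isinstance(path, str) else list(path)
    sys.modules[package_name] = package
    return package
```

Registering the module in `sys.modules` is what makes later static imports of the dynamically loaded plugins resolve to the same module objects.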
-
-
-def _import_module(importer, module_name, package):
- '''
- Import a module dynamically into the specified package, given its name and
- PEP302 Importer object (which knows the path to look in).
- '''
-
- # Duplicate copies of modules are bad, so check if this has already been
- # imported statically
- if module_name in sys.modules:
- return sys.modules[module_name]
-
- loader = importer.find_module(module_name)
- if loader is None:
- return None
-
- module = loader.load_module(module_name)
-
- # Make this accessible through the parent package for static imports
- local_name = module_name.partition(package.__name__ + '.')[2]
- module_components = local_name.split('.')
- parent = reduce(getattr, module_components[:-1], package)
- setattr(parent, module_components[-1], module)
-
- return module
-
-
-def load_modules(package, ignore_error=False):
- '''Dynamically load all modules from a given package.'''
- path = package.__path__
- pkg_prefix = package.__name__ + '.'
-
- for importer, module_name, is_package in pkgutil.walk_packages(path,
- pkg_prefix):
- try:
- module = _import_module(importer, module_name, package)
- except ImportError as ex:
- logger.error(_('Failed to import module %s') % module_name)
- if not ignore_error:
- raise
- else:
- if module is not None:
- yield module
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Copyright (c) 2011 OpenStack Foundation
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-# Based on glance/api/policy.py
-"""Policy Engine For Heat"""
-
-from oslo.config import cfg
-
-from heat.common import exception
-
-import heat.openstack.common.log as logging
-from heat.openstack.common import policy
-
-logger = logging.getLogger(__name__)
-
-
-CONF = cfg.CONF
-
-DEFAULT_RULES = {
- 'default': policy.FalseCheck(),
-}
-
-
-class Enforcer(object):
- """Responsible for loading and enforcing rules."""
-
- def __init__(self, scope='heat', exc=exception.Forbidden,
- default_rule=DEFAULT_RULES['default']):
- self.scope = scope
- self.exc = exc
- self.default_rule = default_rule
- self.enforcer = policy.Enforcer(default_rule=default_rule)
-
- def set_rules(self, rules, overwrite=True):
- """Create a new Rules object based on the provided dict of rules."""
- rules_obj = policy.Rules(rules, self.default_rule)
- self.enforcer.set_rules(rules_obj, overwrite)
-
- def load_rules(self, force_reload=False):
- """Set the rules found in the json file on disk."""
- self.enforcer.load_rules(force_reload)
-
- def _check(self, context, rule, target, exc, *args, **kwargs):
- """Verifies that the action is valid on the target in this context.
-
- :param context: Heat request context
- :param rule: String representing the action to be checked
- :param target: Dictionary representing the object of the action.
- :raises: self.exc (defaults to heat.common.exception.Forbidden)
- :returns: A non-False value if access is allowed.
- """
- do_raise = bool(exc)
- credentials = {
- 'roles': context.roles,
- 'user': context.username,
- 'tenant': context.tenant,
- }
- return self.enforcer.enforce(rule, target, credentials,
- do_raise, exc=exc, *args, **kwargs)
-
- def enforce(self, context, action, target):
- """Verifies that the action is valid on the target in this context.
-
- :param context: Heat request context
- :param action: String representing the action to be checked
- :param target: Dictionary representing the object of the action.
- :raises: self.exc (defaults to heat.common.exception.Forbidden)
- :returns: A non-False value if access is allowed.
- """
- _action = '%s:%s' % (self.scope, action)
- return self._check(context, _action, target, self.exc, action=action)
-
- def check(self, context, action, target):
- """Verifies that the action is valid on the target in this context.
-
- :param context: Heat request context
- :param action: String representing the action to be checked
- :param target: Dictionary representing the object of the action.
- :returns: A non-False value if access is allowed.
- """
- return self._check(context, action, target)
-
- def clear(self):
- self.enforcer.clear()
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-'''
-Utilities for creating short ID strings based on a random UUID. The IDs
-each comprise 12 (lower-case) alphanumeric characters.
-'''
-
-import base64
-import uuid
-
-
-def _to_byte_string(value, num_bits):
- '''
- Convert an integer to a big-endian string of bytes, with any padding
- required added at the end (i.e. after the least-significant bit).
- '''
- shifts = xrange(num_bits - 8, -8, -8)
- byte_at = lambda off: (value >> off if off >= 0 else value << -off) & 0xff
- return ''.join(chr(byte_at(offset)) for offset in shifts)
-
-
-def get_id(source_uuid):
- '''
- Derive a short (12 character) id from a random UUID.
-
- The supplied UUID must be a version 4 UUID object.
- '''
- if isinstance(source_uuid, basestring):
- source_uuid = uuid.UUID(source_uuid)
- if source_uuid.version != 4:
- raise ValueError('Invalid UUID version (%d)' % source_uuid.version)
-
- # The "time" field of a v4 UUID contains 60 random bits
- # (see RFC4122, Section 4.4)
- random_bytes = _to_byte_string(source_uuid.time, 60)
- # The first 12 characters (= 60 bits) of base32-encoded output are our data
- encoded = base64.b32encode(random_bytes)[:12]
-
- return encoded.lower()
-
-
-def generate_id():
- '''
- Generate a short (12 character), random id.
- '''
- return get_id(uuid.uuid4())
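The byte-twiddling in `_to_byte_string` can be condensed on Python 3 with `int.to_bytes`. A minimal sketch equivalent to `_to_byte_string` plus `get_id`: shifting the 60-bit value left by 4 pads it at the least-significant end, exactly as the negative final offset in `_to_byte_string` does.

```python
import base64
import uuid


def short_id(source_uuid=None):
    # Derive a 12-character id from the 60 random bits in a v4 UUID's
    # "time" field (RFC 4122, section 4.4).
    if source_uuid is None:
        source_uuid = uuid.uuid4()
    if source_uuid.version != 4:
        raise ValueError('Invalid UUID version (%d)' % source_uuid.version)
    # 60 bits << 4 = 64 bits, i.e. 8 big-endian bytes padded at the end
    random_bytes = (source_uuid.time << 4).to_bytes(8, 'big')
    # 12 base32 characters encode exactly the first 12 * 5 = 60 bits
    return base64.b32encode(random_bytes)[:12].decode('ascii').lower()
```

Because base32 uses only A-Z and 2-7, lower-casing the first 12 characters yields the 12 lower-case alphanumeric characters the module docstring promises.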
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import re
-import yaml
-import json
-
-from oslo.config import cfg
-
-from heat.common import exception
-
-cfg.CONF.import_opt('max_template_size', 'heat.common.config')
-
-HEAT_VERSIONS = (u'2012-12-12',)
-CFN_VERSIONS = (u'2010-09-09',)
-
-if hasattr(yaml, 'CSafeLoader'):
- yaml_loader = yaml.CSafeLoader
-else:
- yaml_loader = yaml.SafeLoader
-
-if hasattr(yaml, 'CSafeDumper'):
- yaml_dumper = yaml.CSafeDumper
-else:
- yaml_dumper = yaml.SafeDumper
-
-
-def _construct_yaml_str(self, node):
- # Override the default string handling function
- # to always return unicode objects
- return self.construct_scalar(node)
-yaml_loader.add_constructor(u'tag:yaml.org,2002:str', _construct_yaml_str)
-# Unquoted dates like 2013-05-23 in yaml files get loaded as objects of type
- # datetime.date, which causes problems in the API layer when processed by
-# openstack.common.jsonutils. Therefore, make unicode string out of timestamps
-# until jsonutils can handle dates.
-yaml_loader.add_constructor(u'tag:yaml.org,2002:timestamp',
- _construct_yaml_str)
-
-
-def parse(tmpl_str):
- '''
- Takes a string and returns a dict containing the parsed structure.
- This includes determination of whether the string is using the
- JSON or YAML format.
- '''
- if len(tmpl_str) > cfg.CONF.max_template_size:
- msg = _('Template exceeds maximum allowed size.')
- raise exception.RequestLimitExceeded(message=msg)
- if tmpl_str.startswith('{'):
- tpl = json.loads(tmpl_str)
- else:
- try:
- tpl = yaml.load(tmpl_str, Loader=yaml_loader)
- except (yaml.scanner.ScannerError, yaml.parser.ParserError) as e:
- raise ValueError(e)
- else:
- if tpl is None:
- tpl = {}
- if u'heat_template_version' not in tpl:
- default_for_missing(tpl, u'HeatTemplateFormatVersion',
- HEAT_VERSIONS)
- return tpl
-
-
-def default_for_missing(tpl, version_param, versions):
- '''
- Checks a parsed template for missing version and sections.
-
- This is currently only applied to YAML templates.
- '''
- # if the version is missing, implicitly use the latest one
- if version_param not in tpl:
- tpl[version_param] = versions[-1]
-
- # create empty placeholders for any of the main dict sections
- for param in (u'Parameters', u'Mappings', u'Resources', u'Outputs'):
- if param not in tpl:
- tpl[param] = {}
-
-
-def convert_json_to_yaml(json_str):
- '''Convert a string containing the AWS JSON template format
- to an equivalent string containing the Heat YAML format.
- '''
-
- global key_order
- # Replace AWS format version with Heat format version
- json_str = re.sub('"AWSTemplateFormatVersion"\s*:\s*"[^"]+"\s*,',
- '', json_str)
-
- # insert a sortable order into the key to preserve file ordering
- key_order = 0
-
- def order_key(matchobj):
- global key_order
- key = '%s"__%05d__order__%s" :' % (
- matchobj.group(1),
- key_order,
- matchobj.group(2))
- key_order = key_order + 1
- return key
- key_re = re.compile('^(\s*)"([^"]+)"\s*:', re.M)
- json_str = key_re.sub(order_key, json_str)
-
- # parse the string as json to a python structure
- tpl = yaml.load(json_str, Loader=yaml_loader)
-
- # dump python structure to yaml
- tpl["HeatTemplateFormatVersion"] = '2012-12-12'
- yml = yaml.dump(tpl, Dumper=yaml_dumper)
-
- # remove ordering from key names
- yml = re.sub('__\d*__order__', '', yml)
- return yml
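The sortable-key trick above can be demonstrated without the YAML dependency. This standalone sketch (illustrative names) tags each JSON key with its position in the file, then recovers the original ordering from the otherwise unordered parse:

```python
import json
import re

key_re = re.compile(r'^(\s*)"([^"]+)"\s*:', re.M)
state = {'order': 0}


def order_key(matchobj):
    # Embed a zero-padded, sortable counter in each key name.
    key = '%s"__%05d__order__%s":' % (
        matchobj.group(1), state['order'], matchobj.group(2))
    state['order'] += 1
    return key


src = '{\n  "Zebra": 1,\n  "Apple": 2\n}'
tagged = json.loads(key_re.sub(order_key, src))
# Sorting the tagged keys reproduces file order, not alphabetical order.
ordered = [re.sub(r'__\d+__order__', '', k) for k in sorted(tagged)]
# ordered == ['Zebra', 'Apple']
```

The zero-padded `%05d` counter is what makes a plain lexicographic sort of the mangled keys equal to the original file order.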
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-"""
-Utilities for handling ISO 8601 duration format.
-"""
-
-import re
-
-
-iso_duration_re = re.compile('PT(?:(\d+)H)?(?:(\d+)M)?(?:(\d+)S)?$')
-
-
-def parse_isoduration(duration):
- """
- Convert duration in ISO 8601 format to second(s).
-
- Year, Month, Week, and Day designators are not supported.
- Example: 'PT12H30M5S'
- """
- result = iso_duration_re.match(duration)
- if not result:
- raise ValueError('Only ISO 8601 duration format of the form '
- 'PT#H#M#S is supported.')
-
- t = 0
- t += (3600 * int(result.group(1))) if result.group(1) else 0
- t += (60 * int(result.group(2))) if result.group(2) else 0
- t += int(result.group(3)) if result.group(3) else 0
-
- return t
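The arithmetic can be checked with a self-contained restatement of the parser (the name `iso_seconds` is illustrative):

```python
import re

_DURATION = re.compile(r'PT(?:(\d+)H)?(?:(\d+)M)?(?:(\d+)S)?$')


def iso_seconds(duration):
    """Convert a PT#H#M#S ISO 8601 duration to seconds."""
    match = _DURATION.match(duration)
    if not match:
        raise ValueError('Only the PT#H#M#S form is supported.')
    hours, minutes, seconds = (int(g) if g else 0 for g in match.groups())
    return hours * 3600 + minutes * 60 + seconds


# 'PT12H30M5S' -> 12*3600 + 30*60 + 5 = 45005
```

Each designator is optional, so 'PT90S' and 'PT1H' are both valid inputs.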
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-'''
-Utility for fetching a resource (e.g. a template) from a URL.
-'''
-
-import requests
-from requests import exceptions
-import urllib2
-import urlparse
-
-from heat.openstack.common import log as logging
-from heat.openstack.common.gettextutils import _
-
-logger = logging.getLogger(__name__)
-
-
-def get(url, allowed_schemes=('http', 'https')):
- '''
- Get the data at the specified URL.
-
- The URL must use the http: or https: schemes.
- The file: scheme is also supported if you override
- the allowed_schemes argument.
- Raise an IOError if getting the data fails.
- '''
- logger.info(_('Fetching data from %s') % url)
-
- components = urlparse.urlparse(url)
-
- if components.scheme not in allowed_schemes:
- raise IOError('Invalid URL scheme %s' % components.scheme)
-
- if components.scheme == 'file':
- try:
- return urllib2.urlopen(url).read()
- except urllib2.URLError as uex:
- raise IOError('Failed to retrieve template: %s' % str(uex))
-
- try:
- resp = requests.get(url)
- resp.raise_for_status()
- return resp.text
- except exceptions.RequestException as ex:
- raise IOError('Failed to retrieve template: %s' % str(ex))
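A stdlib-only Python 3 sketch of the same scheme-whitelist pattern (the name `fetch` is illustrative; the module above uses requests for http/https):

```python
from urllib.error import URLError
from urllib.parse import urlparse
from urllib.request import urlopen


def fetch(url, allowed_schemes=('http', 'https')):
    """Fetch a resource, rejecting URL schemes not explicitly allowed."""
    scheme = urlparse(url).scheme
    if scheme not in allowed_schemes:
        raise IOError('Invalid URL scheme %s' % scheme)
    try:
        with urlopen(url) as resp:
            return resp.read()
    except URLError as ex:
        raise IOError('Failed to retrieve template: %s' % ex)
```

Callers opt in to local files by passing `allowed_schemes=('http', 'https', 'file')`, which is why the default deliberately excludes `file:`.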
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Copyright 2010 United States Government as represented by the
-# Administrator of the National Aeronautics and Space Administration.
-# Copyright 2013 IBM Corp.
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-"""
-Utility methods for working with WSGI servers
-"""
-
-import datetime
-import errno
-import json
-import logging
-import os
-import signal
-import sys
-import time
-
-import eventlet
-import eventlet.greenio
-from eventlet.green import socket
-from eventlet.green import ssl
-import eventlet.wsgi
-from lxml import etree
-from oslo.config import cfg
-from paste import deploy
-import routes
-import routes.middleware
-import webob.dec
-import webob.exc
-
-from heat.common import exception
-from heat.openstack.common import gettextutils
-from heat.openstack.common import importutils
-
-
-URL_LENGTH_LIMIT = 50000
-
-api_opts = [
- cfg.StrOpt('bind_host', default='0.0.0.0',
- help=_('Address to bind the server. Useful when '
- 'selecting a particular network interface.'),
- deprecated_group='DEFAULT'),
- cfg.IntOpt('bind_port', default=8004,
- help=_('The port on which the server will listen.'),
- deprecated_group='DEFAULT'),
- cfg.IntOpt('backlog', default=4096,
- help=_("Number of backlog requests "
- "to configure the socket with"),
- deprecated_group='DEFAULT'),
- cfg.StrOpt('cert_file', default=None,
- help=_("Location of the SSL Certificate File "
- "to use for SSL mode"),
- deprecated_group='DEFAULT'),
- cfg.StrOpt('key_file', default=None,
- help=_("Location of the SSL Key File to use "
- "for enabling SSL mode"),
- deprecated_group='DEFAULT'),
- cfg.IntOpt('workers', default=0,
- help=_("Number of workers for Heat service"),
- deprecated_group='DEFAULT'),
-]
-api_group = cfg.OptGroup('heat_api')
-cfg.CONF.register_group(api_group)
-cfg.CONF.register_opts(api_opts,
- group=api_group)
-
-api_cfn_opts = [
- cfg.StrOpt('bind_host', default='0.0.0.0',
- help=_('Address to bind the server. Useful when '
- 'selecting a particular network interface.'),
- deprecated_group='DEFAULT'),
- cfg.IntOpt('bind_port', default=8000,
- help=_('The port on which the server will listen.'),
- deprecated_group='DEFAULT'),
- cfg.IntOpt('backlog', default=4096,
- help=_("Number of backlog requests "
- "to configure the socket with"),
- deprecated_group='DEFAULT'),
- cfg.StrOpt('cert_file', default=None,
- help=_("Location of the SSL Certificate File "
- "to use for SSL mode"),
- deprecated_group='DEFAULT'),
- cfg.StrOpt('key_file', default=None,
- help=_("Location of the SSL Key File to use "
- "for enabling SSL mode"),
- deprecated_group='DEFAULT'),
- cfg.IntOpt('workers', default=0,
- help=_("Number of workers for Heat service"),
- deprecated_group='DEFAULT'),
-]
-api_cfn_group = cfg.OptGroup('heat_api_cfn')
-cfg.CONF.register_group(api_cfn_group)
-cfg.CONF.register_opts(api_cfn_opts,
- group=api_cfn_group)
-
-api_cw_opts = [
- cfg.StrOpt('bind_host', default='0.0.0.0',
- help=_('Address to bind the server. Useful when '
- 'selecting a particular network interface.'),
- deprecated_group='DEFAULT'),
- cfg.IntOpt('bind_port', default=8003,
- help=_('The port on which the server will listen.'),
- deprecated_group='DEFAULT'),
- cfg.IntOpt('backlog', default=4096,
- help=_("Number of backlog requests "
- "to configure the socket with"),
- deprecated_group='DEFAULT'),
- cfg.StrOpt('cert_file', default=None,
- help=_("Location of the SSL Certificate File "
- "to use for SSL mode"),
- deprecated_group='DEFAULT'),
- cfg.StrOpt('key_file', default=None,
- help=_("Location of the SSL Key File to use "
- "for enabling SSL mode"),
- deprecated_group='DEFAULT'),
- cfg.IntOpt('workers', default=0,
- help=_("Number of workers for Heat service"),
- deprecated_group='DEFAULT'),
-]
-api_cw_group = cfg.OptGroup('heat_api_cloudwatch')
-cfg.CONF.register_group(api_cw_group)
-cfg.CONF.register_opts(api_cw_opts,
- group=api_cw_group)
-cfg.CONF.import_opt('debug', 'heat.openstack.common.log')
-
-json_size_opt = cfg.IntOpt('max_json_body_size',
- default=1048576,
- help='Maximum raw byte size of JSON request body.'
- ' Should be larger than max_template_size.')
-cfg.CONF.register_opt(json_size_opt)
-
-
-class WritableLogger(object):
- """A thin wrapper that responds to `write` and logs."""
-
- def __init__(self, logger, level=logging.DEBUG):
- self.logger = logger
- self.level = level
-
- def write(self, msg):
- self.logger.log(self.level, msg.strip("\n"))
-
-
-def get_bind_addr(conf, default_port=None):
- """Return the host and port to bind to."""
- return (conf.bind_host, conf.bind_port or default_port)
-
-
-def get_socket(conf, default_port):
- """
- Bind socket to bind ip:port in conf
-
- note: Mostly comes from Swift with a few small changes...
-
- :param conf: a cfg.ConfigOpts object
- :param default_port: port to bind to if none is specified in conf
-
- :returns: a socket object as returned from eventlet.listen, or
- ssl.wrap_socket if conf specifies cert_file
- """
- bind_addr = get_bind_addr(conf, default_port)
-
- # TODO(jaypipes): eventlet's greened socket module does not actually
- # support IPv6 in getaddrinfo(). We need to get around this in the
- # future or monitor upstream for a fix
- address_family = [addr[0] for addr in socket.getaddrinfo(bind_addr[0],
- bind_addr[1], socket.AF_UNSPEC, socket.SOCK_STREAM)
- if addr[0] in (socket.AF_INET, socket.AF_INET6)][0]
-
- cert_file = conf.cert_file
- key_file = conf.key_file
- use_ssl = cert_file or key_file
- if use_ssl and (not cert_file or not key_file):
- raise RuntimeError(_("When running server in SSL mode, you must "
- "specify both a cert_file and key_file "
- "option value in your configuration file"))
-
- sock = None
- retry_until = time.time() + 30
- while not sock and time.time() < retry_until:
- try:
- sock = eventlet.listen(bind_addr, backlog=conf.backlog,
- family=address_family)
- if use_ssl:
- sock = ssl.wrap_socket(sock, certfile=cert_file,
- keyfile=key_file)
- except socket.error as err:
- if err.args[0] != errno.EADDRINUSE:
- raise
- eventlet.sleep(0.1)
- if not sock:
- raise RuntimeError(_("Could not bind to %(bind_addr)s "
- "after trying for 30 seconds")
- % {'bind_addr': bind_addr})
- sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
- # in my experience, sockets can hang around forever without keepalive
- sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
-
- # This option isn't available in the OS X version of eventlet
- if hasattr(socket, 'TCP_KEEPIDLE'):
- sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 600)
-
- return sock
-
-
-class Server(object):
- """Server class to manage multiple WSGI sockets and applications."""
-
- def __init__(self, threads=1000):
- self.threads = threads
- self.children = []
- self.running = True
-
- def start(self, application, conf, default_port):
- """
- Run a WSGI server with the given application.
-
- :param application: The application to run in the WSGI server
- :param conf: a cfg.ConfigOpts object
- :param default_port: Port to bind to if none is specified in conf
- """
- def kill_children(*args):
- """Kills the entire process group."""
- self.logger.error(_('SIGTERM received'))
- signal.signal(signal.SIGTERM, signal.SIG_IGN)
- self.running = False
- os.killpg(0, signal.SIGTERM)
-
- def hup(*args):
- """
- Shuts down the server, but allows running requests to complete
- """
- self.logger.error(_('SIGHUP received'))
- signal.signal(signal.SIGHUP, signal.SIG_IGN)
- self.running = False
-
- self.application = application
- self.sock = get_socket(conf, default_port)
-
- self.logger = logging.getLogger('eventlet.wsgi.server')
-
- if conf.workers == 0:
- # Useful for profiling, test, debug etc.
- self.pool = eventlet.GreenPool(size=self.threads)
- self.pool.spawn_n(self._single_run, application, self.sock)
- return
-
- self.logger.info(_("Starting %d workers") % conf.workers)
- signal.signal(signal.SIGTERM, kill_children)
- signal.signal(signal.SIGHUP, hup)
- while len(self.children) < conf.workers:
- self.run_child()
-
- def wait_on_children(self):
- while self.running:
- try:
- pid, status = os.wait()
- if os.WIFEXITED(status) or os.WIFSIGNALED(status):
- self.logger.error(_('Removing dead child %s') % pid)
- self.children.remove(pid)
- self.run_child()
- except OSError as err:
- if err.errno not in (errno.EINTR, errno.ECHILD):
- raise
- except KeyboardInterrupt:
- self.logger.info(_('Caught keyboard interrupt. Exiting.'))
- break
- eventlet.greenio.shutdown_safe(self.sock)
- self.sock.close()
- self.logger.debug(_('Exited'))
-
- def wait(self):
- """Wait until all servers have completed running."""
- try:
- if self.children:
- self.wait_on_children()
- else:
- self.pool.waitall()
- except KeyboardInterrupt:
- pass
-
- def run_child(self):
- pid = os.fork()
- if pid == 0:
- signal.signal(signal.SIGHUP, signal.SIG_DFL)
- signal.signal(signal.SIGTERM, signal.SIG_DFL)
- self.run_server()
- self.logger.info(_('Child %d exiting normally') % os.getpid())
- return
- else:
- self.logger.info(_('Started child %s') % pid)
- self.children.append(pid)
-
- def run_server(self):
- """Run a WSGI server."""
- eventlet.wsgi.HttpProtocol.default_request_version = "HTTP/1.0"
- eventlet.hubs.use_hub('poll')
- eventlet.patcher.monkey_patch(all=False, socket=True)
- self.pool = eventlet.GreenPool(size=self.threads)
- try:
- eventlet.wsgi.server(self.sock,
- self.application,
- custom_pool=self.pool,
- url_length_limit=URL_LENGTH_LIMIT,
- log=WritableLogger(self.logger),
- debug=cfg.CONF.debug)
- except socket.error as err:
- if err[0] != errno.EINVAL:
- raise
- self.pool.waitall()
-
- def _single_run(self, application, sock):
- """Start a WSGI server in a new green thread."""
- self.logger.info(_("Starting single process server"))
- eventlet.wsgi.server(sock, application,
- custom_pool=self.pool,
- url_length_limit=URL_LENGTH_LIMIT,
- log=WritableLogger(self.logger))
-
-
-class Middleware(object):
- """
- Base WSGI middleware wrapper. These classes require an application to be
- initialized that will be called next. By default the middleware will
- simply call its wrapped app, or you can override __call__ to customize its
- behavior.
- """
-
- def __init__(self, application):
- self.application = application
-
- def process_request(self, req):
- """
- Called on each request.
-
- If this returns None, the next application down the stack will be
- executed. If it returns a response then that response will be returned
- and execution will stop here.
-
- """
- return None
-
- def process_response(self, response):
- """Do whatever you'd like to the response."""
- return response
-
- @webob.dec.wsgify
- def __call__(self, req):
- response = self.process_request(req)
- if response:
- return response
- response = req.get_response(self.application)
- return self.process_response(response)
-
-
-class Debug(Middleware):
- """
- Helper class that can be inserted into any WSGI application chain
- to get information about the request and response.
- """
-
- @webob.dec.wsgify
- def __call__(self, req):
- print(("*" * 40) + " REQUEST ENVIRON")
- for key, value in req.environ.items():
- print(key, "=", value)
- print
- resp = req.get_response(self.application)
-
- print(("*" * 40) + " RESPONSE HEADERS")
- for (key, value) in resp.headers.iteritems():
- print(key, "=", value)
- print
-
- resp.app_iter = self.print_generator(resp.app_iter)
-
- return resp
-
- @staticmethod
- def print_generator(app_iter):
- """
- Iterator that prints the contents of a wrapped string iterator
- when iterated.
- """
- print(("*" * 40) + " BODY")
- for part in app_iter:
- sys.stdout.write(part)
- sys.stdout.flush()
- yield part
- print
-
-
-def debug_filter(app, conf, **local_conf):
- return Debug(app)
-
-
-class Router(object):
- """
- WSGI middleware that maps incoming requests to WSGI apps.
- """
-
- def __init__(self, mapper):
- """
- Create a router for the given routes.Mapper.
-
- Each route in `mapper` must specify a 'controller', which is a
- WSGI app to call. You'll probably want to specify an 'action' as
- well and have your controller be a wsgi.Controller, who will route
- the request to the action method.
-
- Examples:
- mapper = routes.Mapper()
- sc = ServerController()
-
- # Explicit mapping of one route to a controller+action
- mapper.connect(None, "/svrlist", controller=sc, action="list")
-
- # Actions are all implicitly defined
- mapper.resource("server", "servers", controller=sc)
-
- # Pointing to an arbitrary WSGI app. You can specify the
- # {path_info:.*} parameter so the target app can be handed just that
- # section of the URL.
- mapper.connect(None, "/v1.0/{path_info:.*}", controller=BlogApp())
- """
- self.map = mapper
- self._router = routes.middleware.RoutesMiddleware(self._dispatch,
- self.map)
-
- @webob.dec.wsgify
- def __call__(self, req):
- """
- Route the incoming request to a controller based on self.map.
- If no match, return a 404.
- """
- return self._router
-
- @staticmethod
- @webob.dec.wsgify
- def _dispatch(req):
- """
- Called by self._router after matching the incoming request to a route
- and putting the information into req.environ. Either returns 404
- or the routed WSGI app's response.
- """
- match = req.environ['wsgiorg.routing_args'][1]
- if not match:
- return webob.exc.HTTPNotFound()
- app = match['controller']
- return app
-
-
-class Request(webob.Request):
- """Add some OpenStack API-specific logic to the base webob.Request."""
-
- def best_match_content_type(self):
- """Determine the requested response content-type."""
- supported = ('application/json',)
- bm = self.accept.best_match(supported)
- return bm or 'application/json'
-
- def get_content_type(self, allowed_content_types):
- """Determine content type of the request body."""
- if "Content-Type" not in self.headers:
- raise exception.InvalidContentType(content_type=None)
-
- content_type = self.content_type
-
- if content_type not in allowed_content_types:
- raise exception.InvalidContentType(content_type=content_type)
- else:
- return content_type
-
- def best_match_language(self):
- """Determines best available locale from the Accept-Language header.
-
- :returns: the best language match or None if the 'Accept-Language'
- header was not available in the request.
- """
- if not self.accept_language:
- return None
- all_languages = gettextutils.get_available_languages('heat')
- return self.accept_language.best_match(all_languages)
-
-
-def is_json_content_type(request):
- if request.method == 'GET':
- try:
- aws_content_type = request.params.get("ContentType")
- except Exception:
- aws_content_type = None
- # respect aws_content_type when both are available
- content_type = aws_content_type or request.content_type
- else:
- content_type = request.content_type
- # bug #1887882
- # for backward compatibility with null or plain-text content types
- if not content_type or content_type.startswith('text/plain'):
- content_type = 'application/json'
- if content_type in ('JSON', 'application/json')\
- and request.body.startswith('{'):
- return True
- return False
-
-
-class JSONRequestDeserializer(object):
- def has_body(self, request):
- """
- Returns whether a Webob.Request object will possess an entity body.
-
- :param request: Webob.Request object
- """
- if request.content_length > 0 and is_json_content_type(request):
- return True
-
- return False
-
- def from_json(self, datastring):
- try:
- if len(datastring) > cfg.CONF.max_json_body_size:
- msg = _('JSON body size (%(len)s bytes) exceeds maximum '
- 'allowed size (%(limit)s bytes).') % \
- {'len': len(datastring),
- 'limit': cfg.CONF.max_json_body_size}
- raise exception.RequestLimitExceeded(message=msg)
- return json.loads(datastring)
- except ValueError as ex:
- raise webob.exc.HTTPBadRequest(str(ex))
-
- def default(self, request):
- if self.has_body(request):
- return {'body': self.from_json(request.body)}
- else:
- return {}
-
-
-class JSONResponseSerializer(object):
-
- def to_json(self, data):
- def sanitizer(obj):
- if isinstance(obj, datetime.datetime):
- return obj.isoformat()
- return obj
-
- response = json.dumps(data, default=sanitizer)
- logging.debug("JSON response : %s" % response)
- return response
-
- def default(self, response, result):
- response.content_type = 'application/json'
- response.body = self.to_json(result)
-
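The datetime sanitizer above is the standard `json.dumps` default-hook pattern; a self-contained illustration (raising for unknown types here, rather than returning the object unchanged, which would otherwise loop):

```python
import datetime
import json


def sanitizer(obj):
    # json.dumps calls this hook only for objects it cannot serialize.
    if isinstance(obj, datetime.datetime):
        return obj.isoformat()
    raise TypeError('%r is not JSON serializable' % obj)


doc = {'created': datetime.datetime(2013, 5, 23, 12, 0, 0), 'count': 1}
out = json.dumps(doc, default=sanitizer, sort_keys=True)
# -> {"count": 1, "created": "2013-05-23T12:00:00"}
```

ISO 8601 strings round-trip cleanly through JSON, which is why the serializer converts timestamps rather than rejecting them.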
-
-# Escape XML serialization for these keys, as the AWS API defines them as
-# JSON inside XML when the response format is XML.
-JSON_ONLY_KEYS = ('TemplateBody', 'Metadata')
-
-
-class XMLResponseSerializer(object):
-
- def object_to_element(self, obj, element):
- if isinstance(obj, list):
- for item in obj:
- subelement = etree.SubElement(element, "member")
- self.object_to_element(item, subelement)
- elif isinstance(obj, dict):
- for key, value in obj.items():
- subelement = etree.SubElement(element, key)
- if key in JSON_ONLY_KEYS:
- if value:
- # Need to use json.dumps for the JSON inside XML
- # otherwise quotes get mangled and json.loads breaks
- try:
- subelement.text = json.dumps(value)
- except TypeError:
- subelement.text = str(value)
- else:
- self.object_to_element(value, subelement)
- else:
- element.text = str(obj)
-
- def to_xml(self, data):
- # Assumption: the root node is a dict with a single key
- root = data.keys()[0]
- eltree = etree.Element(root)
- self.object_to_element(data.get(root), eltree)
- response = etree.tostring(eltree)
- logging.debug("XML response : %s" % response)
- return response
-
- def default(self, response, result):
- response.content_type = 'application/xml'
- response.body = self.to_xml(result)
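The recursive element builder works the same way with the stdlib ElementTree (the module above uses lxml); a minimal sketch:

```python
import xml.etree.ElementTree as etree


def object_to_element(obj, element):
    # Lists become repeated <member> children; dicts become named children;
    # everything else becomes text content.
    if isinstance(obj, list):
        for item in obj:
            object_to_element(item, etree.SubElement(element, 'member'))
    elif isinstance(obj, dict):
        for key, value in obj.items():
            object_to_element(value, etree.SubElement(element, key))
    else:
        element.text = str(obj)


root = etree.Element('Result')
object_to_element({'StackName': 'web', 'Ids': [1, 2]}, root)
xml = etree.tostring(root).decode()
```

With insertion-ordered dicts this yields `<Result><StackName>web</StackName><Ids><member>1</member><member>2</member></Ids></Result>`, matching the AWS convention of `<member>` wrappers for list items.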
-
-
-class Resource(object):
- """
- WSGI app that handles (de)serialization and controller dispatch.
-
- Reads routing information supplied by RoutesMiddleware and calls
- the requested action method upon its deserializer, controller,
- and serializer. Those three objects may implement any of the basic
- controller action methods (create, update, show, index, delete)
- along with any that may be specified in the api router. A 'default'
- method may also be implemented to be used in place of any
- non-implemented actions. Deserializer methods must accept a request
- argument and return a dictionary. Controller methods must accept a
- request argument. Additionally, they must also accept keyword
- arguments that represent the keys returned by the Deserializer. They
- may raise a webob.exc exception or return a dict, which will be
- serialized by requested content type.
- """
- def __init__(self, controller, deserializer, serializer=None):
- """
- :param controller: object that implements methods created by the routes lib
- :param deserializer: object that supports webob request deserialization
- through controller-like actions
- :param serializer: object that supports webob response serialization
- through controller-like actions
- """
- self.controller = controller
- self.deserializer = deserializer
- self.serializer = serializer
-
- @webob.dec.wsgify(RequestClass=Request)
- def __call__(self, request):
- """WSGI method that controls (de)serialization and method dispatch."""
- action_args = self.get_action_args(request.environ)
- action = action_args.pop('action', None)
-
- # From reading the boto code, and observation of real AWS API responses,
- # it seems that the AWS API ignores the Content-Type in the HTTP headers.
- # Instead it looks at a "ContentType" GET query parameter.
- # This doesn't seem to be documented in the AWS cfn API spec, but it
- # would appear that the default response serialization is XML, as
- # described in the API docs, but passing a query parameter of
- # ContentType=JSON results in a JSON serialized response...
- content_type = request.params.get("ContentType")
-
- try:
- deserialized_request = self.dispatch(self.deserializer,
- action, request)
- action_args.update(deserialized_request)
- action_result = self.dispatch(self.controller, action,
- request, **action_args)
- except TypeError as err:
- logging.error(_('Exception handling resource: %s') % str(err))
- msg = _('The server could not comply with the request since\r\n'
- 'it is either malformed or otherwise incorrect.\r\n')
- err = webob.exc.HTTPBadRequest(msg)
- http_exc = translate_exception(err, request.best_match_language())
- # NOTE(luisg): We disguise HTTP exceptions, otherwise they will be
- # treated by wsgi as responses ready to be sent back and they
- # won't make it into the pipeline app that serializes errors
- raise exception.HTTPExceptionDisguise(http_exc)
- except webob.exc.HTTPException as err:
- if isinstance(err, (webob.exc.HTTPOk, webob.exc.HTTPRedirection)):
- # Some HTTPException are actually not errors, they are
- # responses ready to be sent back to the users, so we don't
- # error log, disguise or translate those
- raise
- logging.error(_("Returning %(code)s to user: %(explanation)s"),
- {'code': err.code, 'explanation': err.explanation})
- http_exc = translate_exception(err, request.best_match_language())
- raise exception.HTTPExceptionDisguise(http_exc)
- except exception.HeatException as err:
- logging.error(_("Unexpected error occurred serving API: %s") %
- err.message)
- raise translate_exception(err, request.best_match_language())
- except Exception as err:
- logging.error(_("Unexpected error occurred serving API: %s") % err)
- raise translate_exception(err, request.best_match_language())
-
- # Here we support either passing in a serializer or detecting it
- # based on the content type.
- try:
- serializer = self.serializer
- if serializer is None:
- if content_type == "JSON":
- serializer = JSONResponseSerializer()
- else:
- serializer = XMLResponseSerializer()
-
- response = webob.Response(request=request)
- self.dispatch(serializer, action, response, action_result)
- return response
-
- # return unserializable result (typically an exception)
- except Exception:
- # Here we should get API exceptions derived from HeatAPIException
- # these implement get_unserialized_body(), which allow us to get
- # a dict containing the unserialized error response.
- # We only need to serialize for JSON content_type, as the
- # exception body is pre-serialized to the default XML in the
- # HeatAPIException constructor
- # If we get something else here (e.g a webob.exc exception),
- # this will fail, and we just return it without serializing,
- # which will not conform to the expected AWS error response format
- if content_type == "JSON":
- try:
- err_body = action_result.get_unserialized_body()
- serializer.default(action_result, err_body)
- except Exception:
- logging.warning("Unable to serialize exception response")
-
- return action_result
-
- def dispatch(self, obj, action, *args, **kwargs):
- """Find action-specific method on self and call it."""
- try:
- method = getattr(obj, action)
- except AttributeError:
- method = getattr(obj, 'default')
- return method(*args, **kwargs)
-
- def get_action_args(self, request_environment):
- """Parse dictionary created by routes library."""
- try:
- args = request_environment['wsgiorg.routing_args'][1].copy()
- except Exception:
- return {}
-
- try:
- del args['controller']
- except KeyError:
- pass
-
- try:
- del args['format']
- except KeyError:
- pass
-
- return args
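The dispatch fallback above (call the action method if present, else `default`) is easy to isolate; a sketch with illustrative names:

```python
def dispatch(obj, action, *args, **kwargs):
    """Call obj.<action> if it exists, falling back to obj.default."""
    method = getattr(obj, action, None)
    if method is None:
        method = obj.default
    return method(*args, **kwargs)


class Controller:
    def show(self, req):
        return 'show:%s' % req

    def default(self, req):
        return 'default:%s' % req


# dispatch(Controller(), 'show', 'req-1')  -> 'show:req-1'
# dispatch(Controller(), 'purge', 'req-1') -> 'default:req-1'
```

This is what lets the deserializer, controller, and serializer each implement only the actions they care about.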
-
-
-def translate_exception(exc, locale):
- """Translates all translatable elements of the given exception."""
- exc.message = gettextutils.get_localized_message(exc.message, locale)
- if isinstance(exc, webob.exc.HTTPError):
- # If the explanation is not a Message, that means that the
- # explanation is the default, generic and not translatable explanation
- # from webob.exc. Since the explanation is the error shown when the
- # exception is converted to a response, let's actually swap it with
- # message, since message is what gets passed in at construction time
- # in the API
- if not isinstance(exc.explanation, gettextutils.Message):
- exc.explanation = exc.message
- exc.detail = ''
- else:
- exc.explanation = \
- gettextutils.get_localized_message(exc.explanation, locale)
- exc.detail = gettextutils.get_localized_message(exc.detail, locale)
- return exc
-
-
-class BasePasteFactory(object):
-
- """A base class for paste app and filter factories.
-
- Sub-classes must override the KEY class attribute and provide
- a __call__ method.
- """
-
- KEY = None
-
- def __init__(self, conf):
- self.conf = conf
-
- def __call__(self, global_conf, **local_conf):
- raise NotImplementedError
-
- def _import_factory(self, local_conf):
- """Import an app/filter class.
-
- Lookup the KEY from the PasteDeploy local conf and import the
- class named there. This class can then be used as an app or
- filter factory.
-
- Note we support the <module>:<class> format.
-
- Note also that if you do e.g.
-
- key =
- value
-
- then ConfigParser returns a value with a leading newline, so
- we strip() the value before using it.
- """
- class_name = local_conf[self.KEY].replace(':', '.').strip()
- return importutils.import_class(class_name)
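To make the `<module>:<class>` handling concrete, here is a minimal stand-in for `importutils.import_class` (an assumption: the real helper imports the module part and returns the named attribute) together with the factory's `replace(':', '.')` normalisation. The `collections:OrderedDict` value is purely illustrative.

```python
import importlib


def import_class(class_name):
    # "package.module.Class" -> import the module, fetch the attribute
    module_name, _, cls_name = class_name.rpartition('.')
    return getattr(importlib.import_module(module_name), cls_name)


# Both "mod:Class" and "mod.Class" spellings work after the replace()
local_conf = {'heat.app_factory': 'collections:OrderedDict'}
class_name = local_conf['heat.app_factory'].replace(':', '.').strip()
factory = import_class(class_name)
print(factory.__name__)  # OrderedDict
```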
-
-
-class AppFactory(BasePasteFactory):
-
- """A Generic paste.deploy app factory.
-
- This requires heat.app_factory to be set to a callable which returns a
- WSGI app when invoked. The format of the name is <module>:<callable> e.g.
-
- [app:apiv1app]
- paste.app_factory = heat.common.wsgi:app_factory
- heat.app_factory = heat.api.cfn.v1:API
-
- The WSGI app constructor must accept a ConfigOpts object and a local config
- dict as its two arguments.
- """
-
- KEY = 'heat.app_factory'
-
- def __call__(self, global_conf, **local_conf):
- """The actual paste.app_factory protocol method."""
- factory = self._import_factory(local_conf)
- return factory(self.conf, **local_conf)
-
-
-class FilterFactory(AppFactory):
-
- """A Generic paste.deploy filter factory.
-
- This requires heat.filter_factory to be set to a callable which returns a
- WSGI filter when invoked. The format is <module>:<callable> e.g.
-
- [filter:cache]
- paste.filter_factory = heat.common.wsgi:filter_factory
- heat.filter_factory = heat.api.middleware.cache:CacheFilter
-
- The WSGI filter constructor must accept a WSGI app, a ConfigOpts object and
- a local config dict as its three arguments.
- """
-
- KEY = 'heat.filter_factory'
-
- def __call__(self, global_conf, **local_conf):
- """The actual paste.filter_factory protocol method."""
- factory = self._import_factory(local_conf)
-
- def filter(app):
- return factory(app, self.conf, **local_conf)
-
- return filter
-
-
-def setup_paste_factories(conf):
- """Set up the generic paste app and filter factories.
-
- Set things up so that:
-
- paste.app_factory = heat.common.wsgi:app_factory
-
- and
-
- paste.filter_factory = heat.common.wsgi:filter_factory
-
- work correctly while loading PasteDeploy configuration.
-
- The app factories are constructed at runtime to allow us to pass a
- ConfigOpts object to the WSGI classes.
-
- :param conf: a ConfigOpts object
- """
- global app_factory, filter_factory
- app_factory = AppFactory(conf)
- filter_factory = FilterFactory(conf)
-
-
-def teardown_paste_factories():
- """Reverse the effect of setup_paste_factories()."""
- global app_factory, filter_factory
- del app_factory
- del filter_factory
-
-
-def paste_deploy_app(paste_config_file, app_name, conf):
- """Load a WSGI app from a PasteDeploy configuration.
-
- Use deploy.loadapp() to load the app from the PasteDeploy configuration,
- ensuring that the supplied ConfigOpts object is passed to the app and
- filter constructors.
-
- :param paste_config_file: a PasteDeploy config file
- :param app_name: the name of the app/pipeline to load from the file
- :param conf: a ConfigOpts object to supply to the app and its filters
- :returns: the WSGI app
- """
- setup_paste_factories(conf)
- try:
- return deploy.loadapp("config:%s" % paste_config_file, name=app_name)
- finally:
- teardown_paste_factories()
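Putting the pieces together, a minimal PasteDeploy configuration that `paste_deploy_app()` could load might look like the following. The pipeline and section names are illustrative; the factory lines mirror the `AppFactory` and `FilterFactory` docstrings above.

```ini
[pipeline:heat-api-cfn]
pipeline = cache apiv1app

[app:apiv1app]
paste.app_factory = heat.common.wsgi:app_factory
heat.app_factory = heat.api.cfn.v1:API

[filter:cache]
paste.filter_factory = heat.common.wsgi:filter_factory
heat.filter_factory = heat.api.middleware.cache:CacheFilter
```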
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-'''
-Interface for database access.
-
-Usage:
-
- >>> from heat import db
- >>> db.event_get(context, event_id)
- # Event object received
-
-The underlying driver is loaded lazily, based on the 'db_backend'
-configuration option. SQLAlchemy is currently the only supported backend.
-'''
-
-from oslo.config import cfg
-
-from heat.db import utils
-
-SQL_CONNECTION = 'sqlite://'
-SQL_IDLE_TIMEOUT = 3600
-db_opts = [
- cfg.StrOpt('db_backend',
- default='sqlalchemy',
- help='The backend to use for the database.')]
-
-cfg.CONF.register_opts(db_opts)
-
-IMPL = utils.LazyPluggable('db_backend',
- sqlalchemy='heat.db.sqlalchemy.api')
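`heat.db.utils` is not shown in this diff; a `LazyPluggable` defers importing the backend module until its first attribute access. The following is a hedged sketch of that idea, not the real implementation: the real class reads the backend name from `cfg.CONF[pivot]`, whereas this sketch simply picks the sole configured backend, and the stdlib `json` module stands in for `heat.db.sqlalchemy.api`.

```python
import importlib


class LazyPluggable(object):
    """Defer the backend import until first attribute access (sketch)."""

    def __init__(self, pivot, **backends):
        self._pivot = pivot          # config option naming the backend
        self._backends = backends    # backend name -> module path
        self._backend = None

    def _get_backend(self):
        if self._backend is None:
            # The real implementation looks up cfg.CONF[self._pivot];
            # this sketch just takes the only configured backend.
            name = next(iter(self._backends))
            self._backend = importlib.import_module(self._backends[name])
        return self._backend

    def __getattr__(self, key):
        # Called only when normal lookup fails, i.e. for backend attributes
        return getattr(self._get_backend(), key)


# 'json' stands in for 'heat.db.sqlalchemy.api'
IMPL = LazyPluggable('db_backend', sqlalchemy='json')
print(IMPL.dumps([1, 2]))  # [1, 2]
```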
-
-
-cfg.CONF.import_opt('sql_connection', 'heat.common.config')
-cfg.CONF.import_opt('sql_idle_timeout', 'heat.common.config')
-
-
-def configure():
- global SQL_CONNECTION
- global SQL_IDLE_TIMEOUT
- SQL_CONNECTION = cfg.CONF.sql_connection
- SQL_IDLE_TIMEOUT = cfg.CONF.sql_idle_timeout
-
-
-def get_session():
- return IMPL.get_session()
-
-
-def raw_template_get(context, template_id):
- return IMPL.raw_template_get(context, template_id)
-
-
-def raw_template_create(context, values):
- return IMPL.raw_template_create(context, values)
-
-
-def resource_data_get(resource, key):
- return IMPL.resource_data_get(resource, key)
-
-
-def resource_data_set(resource, key, value, redact=False):
- return IMPL.resource_data_set(resource, key, value, redact=redact)
-
-
-def resource_data_get_by_key(context, resource_id, key):
- return IMPL.resource_data_get_by_key(context, resource_id, key)
-
-
-def resource_data_delete(resource, key):
- """Remove a resource_data element associated to a resource."""
- return IMPL.resource_data_delete(resource, key)
-
-
-def resource_get(context, resource_id):
- return IMPL.resource_get(context, resource_id)
-
-
-def resource_get_all(context):
- return IMPL.resource_get_all(context)
-
-
-def resource_create(context, values):
- return IMPL.resource_create(context, values)
-
-
-def resource_exchange_stacks(context, resource_id1, resource_id2):
- return IMPL.resource_exchange_stacks(context, resource_id1, resource_id2)
-
-
-def resource_get_all_by_stack(context, stack_id):
- return IMPL.resource_get_all_by_stack(context, stack_id)
-
-
-def resource_get_by_name_and_stack(context, resource_name, stack_id):
- return IMPL.resource_get_by_name_and_stack(context,
- resource_name, stack_id)
-
-
-def resource_get_by_physical_resource_id(context, physical_resource_id):
- return IMPL.resource_get_by_physical_resource_id(context,
- physical_resource_id)
-
-
-def stack_get(context, stack_id, admin=False, show_deleted=False):
- return IMPL.stack_get(context, stack_id, admin, show_deleted=show_deleted)
-
-
-def stack_get_by_name(context, stack_name, owner_id=None):
- return IMPL.stack_get_by_name(context, stack_name, owner_id=owner_id)
-
-
-def stack_get_all(context):
- return IMPL.stack_get_all(context)
-
-
-def stack_get_all_by_tenant(context):
- return IMPL.stack_get_all_by_tenant(context)
-
-
-def stack_count_all_by_tenant(context):
- return IMPL.stack_count_all_by_tenant(context)
-
-
-def stack_create(context, values):
- return IMPL.stack_create(context, values)
-
-
-def stack_update(context, stack_id, values):
- return IMPL.stack_update(context, stack_id, values)
-
-
-def stack_delete(context, stack_id):
- return IMPL.stack_delete(context, stack_id)
-
-
-def user_creds_create(context):
- return IMPL.user_creds_create(context)
-
-
-def user_creds_get(user_creds_id):
- return IMPL.user_creds_get(user_creds_id)
-
-
-def event_get(context, event_id):
- return IMPL.event_get(context, event_id)
-
-
-def event_get_all(context):
- return IMPL.event_get_all(context)
-
-
-def event_get_all_by_tenant(context):
- return IMPL.event_get_all_by_tenant(context)
-
-
-def event_get_all_by_stack(context, stack_id):
- return IMPL.event_get_all_by_stack(context, stack_id)
-
-
-def event_create(context, values):
- return IMPL.event_create(context, values)
-
-
-def watch_rule_get(context, watch_rule_id):
- return IMPL.watch_rule_get(context, watch_rule_id)
-
-
-def watch_rule_get_by_name(context, watch_rule_name):
- return IMPL.watch_rule_get_by_name(context, watch_rule_name)
-
-
-def watch_rule_get_all(context):
- return IMPL.watch_rule_get_all(context)
-
-
-def watch_rule_get_all_by_stack(context, stack_id):
- return IMPL.watch_rule_get_all_by_stack(context, stack_id)
-
-
-def watch_rule_create(context, values):
- return IMPL.watch_rule_create(context, values)
-
-
-def watch_rule_update(context, watch_id, values):
- return IMPL.watch_rule_update(context, watch_id, values)
-
-
-def watch_rule_delete(context, watch_id):
- return IMPL.watch_rule_delete(context, watch_id)
-
-
-def watch_data_create(context, values):
- return IMPL.watch_data_create(context, values)
-
-
-def watch_data_get_all(context):
- return IMPL.watch_data_get_all(context)
-
-
-def watch_data_delete(context, watch_name):
- return IMPL.watch_data_delete(context, watch_name)
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-"""Database setup and migration commands."""
-
-from heat.db import utils
-
-
-IMPL = utils.LazyPluggable('db_backend',
- sqlalchemy='heat.db.sqlalchemy.migration')
-
-INIT_VERSION = 14
-
-
-def db_sync(version=None):
- """Migrate the database to `version` or the most recent version."""
- return IMPL.db_sync(version=version)
-
-
-def db_version():
- """Display the current database version."""
- return IMPL.db_version()
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-'''Implementation of SQLAlchemy backend.'''
-from datetime import datetime
-from datetime import timedelta
-
-import sqlalchemy
-from sqlalchemy.orm.session import Session
-
-from heat.openstack.common.gettextutils import _
-
-from heat.common import crypt
-from heat.common import exception
-from heat.db.sqlalchemy import models
-from heat.db.sqlalchemy.session import get_engine
-from heat.db.sqlalchemy.session import get_session
-
-
-def model_query(context, *args):
- session = _session(context)
- query = session.query(*args)
-
- return query
-
-
-def soft_delete_aware_query(context, *args, **kwargs):
- """Stack query helper that accounts for context's `show_deleted` field.
-
- :param show_deleted: if present, overrides context's show_deleted field.
- """
-
- query = model_query(context, *args)
- show_deleted = kwargs.get('show_deleted')
-
- if not show_deleted:
- query = query.filter_by(deleted_at=None)
-
- return query
-
-
-def _session(context):
- return (context and context.session) or get_session()
-
-
-def raw_template_get(context, template_id):
- result = model_query(context, models.RawTemplate).get(template_id)
-
- if not result:
- raise exception.NotFound('raw template with id %s not found' %
- template_id)
-
- return result
-
-
-def raw_template_create(context, values):
- raw_template_ref = models.RawTemplate()
- raw_template_ref.update(values)
- raw_template_ref.save(_session(context))
- return raw_template_ref
-
-
-def resource_get(context, resource_id):
- result = model_query(context, models.Resource).get(resource_id)
-
- if not result:
- raise exception.NotFound("resource with id %s not found" % resource_id)
-
- return result
-
-
-def resource_get_by_name_and_stack(context, resource_name, stack_id):
- result = model_query(context, models.Resource).\
- filter_by(name=resource_name).\
- filter_by(stack_id=stack_id).first()
-
- return result
-
-
-def resource_get_by_physical_resource_id(context, physical_resource_id):
- results = (model_query(context, models.Resource)
- .filter_by(nova_instance=physical_resource_id)
- .all())
-
- for result in results:
- if context is None or result.stack.tenant == context.tenant_id:
- return result
-
- return None
-
-
-def resource_get_all(context):
- results = model_query(context, models.Resource).all()
-
- if not results:
- raise exception.NotFound('no resources were found')
-
- return results
-
-
-def resource_data_get(resource, key):
- """Lookup value of resource's data by key."""
- result = resource_data_get_by_key(resource.context, resource.id, key)
- return result.value
-
-
-def _encrypt(value):
- return crypt.encrypt(value.encode('utf-8'))
-
-
-def _decrypt(enc_value):
- value = crypt.decrypt(enc_value)
- if value is not None:
- return unicode(value, 'utf-8')
-
-
-def resource_data_get_by_key(context, resource_id, key):
- result = (model_query(context, models.ResourceData)
- .filter_by(resource_id=resource_id)
- .filter_by(key=key)
- .first())
- if not result:
- raise exception.NotFound('No resource data found')
- if result.redact and result.value:
- result.value = _decrypt(result.value)
- return result
-
-
-def resource_data_set(resource, key, value, redact=False):
- """Save resource's key/value pair to database."""
- if redact:
- value = _encrypt(value)
- try:
- current = resource_data_get_by_key(resource.context, resource.id, key)
- except exception.NotFound:
- current = models.ResourceData()
- current.key = key
- current.resource_id = resource.id
- current.redact = redact
- current.value = value
- current.save()
- return current
-
-
-def resource_exchange_stacks(context, resource_id1, resource_id2):
- query = model_query(context, models.Resource)
- session = query.session
- session.begin()
-
- res1 = query.get(resource_id1)
- res2 = query.get(resource_id2)
-
- res1.stack, res2.stack = res2.stack, res1.stack
-
- session.commit()
-
-
-def resource_data_delete(resource, key):
- result = resource_data_get_by_key(resource.context, resource.id, key)
- result.delete()
-
-
-def resource_create(context, values):
- resource_ref = models.Resource()
- resource_ref.update(values)
- resource_ref.save(_session(context))
- return resource_ref
-
-
-def resource_get_all_by_stack(context, stack_id):
- results = model_query(context, models.Resource).\
- filter_by(stack_id=stack_id).all()
-
- if not results:
- raise exception.NotFound("no resources for stack_id %s were found" %
- stack_id)
-
- return results
-
-
-def stack_get_by_name(context, stack_name, owner_id=None):
- query = soft_delete_aware_query(context, models.Stack).\
- filter_by(tenant=context.tenant_id).\
- filter_by(name=stack_name).\
- filter_by(owner_id=owner_id)
-
- return query.first()
-
-
-def stack_get(context, stack_id, admin=False, show_deleted=False):
- result = model_query(context, models.Stack).get(stack_id)
-
- if (result is None or
- (result.deleted_at is not None and not show_deleted)):
- return None
-
- # If the admin flag is True, we allow retrieval of a specific
- # stack without the tenant scoping
- if admin:
- return result
-
- if (result is not None and context is not None and
- result.tenant != context.tenant_id):
- return None
-
- return result
-
-
-def stack_get_all(context):
- results = soft_delete_aware_query(context, models.Stack).\
- filter_by(owner_id=None).all()
- return results
-
-
-def _query_stack_get_all_by_tenant(context):
- query = soft_delete_aware_query(context, models.Stack).\
- filter_by(owner_id=None).\
- filter_by(tenant=context.tenant_id)
- return query
-
-
-def stack_get_all_by_tenant(context):
- return _query_stack_get_all_by_tenant(context).all()
-
-
-def stack_count_all_by_tenant(context):
- return _query_stack_get_all_by_tenant(context).count()
-
-
-def stack_create(context, values):
- stack_ref = models.Stack()
- stack_ref.update(values)
- stack_ref.save(_session(context))
- return stack_ref
-
-
-def stack_update(context, stack_id, values):
- stack = stack_get(context, stack_id)
-
- if not stack:
- raise exception.NotFound('Attempt to update a stack with id: %s '
- 'that does not exist' % stack_id)
-
- stack.update(values)
- stack.save(_session(context))
-
-
-def stack_delete(context, stack_id):
- s = stack_get(context, stack_id)
- if not s:
- raise exception.NotFound('Attempt to delete a stack with id: %s '
- 'that does not exist' % stack_id)
-
- session = Session.object_session(s)
-
- for r in s.resources:
- session.delete(r)
-
- s.soft_delete(session=session)
-
- session.flush()
-
-
-def user_creds_create(context):
- values = context.to_dict()
- user_creds_ref = models.UserCreds()
- if values.get('trust_id'):
- user_creds_ref.trust_id = _encrypt(values.get('trust_id'))
- user_creds_ref.trustor_user_id = values.get('trustor_user_id')
- user_creds_ref.username = None
- user_creds_ref.password = None
- else:
- user_creds_ref.update(values)
- user_creds_ref.password = _encrypt(values['password'])
- user_creds_ref.save(_session(context))
- return user_creds_ref
-
-
-def user_creds_get(user_creds_id):
- db_result = model_query(None, models.UserCreds).get(user_creds_id)
- # Return a dict copy of db results, do not decrypt details into db_result
- # or it can be committed back to the DB in decrypted form
- result = dict(db_result)
- result['password'] = _decrypt(result['password'])
- result['trust_id'] = _decrypt(result['trust_id'])
- return result
-
-
-def event_get(context, event_id):
- result = model_query(context, models.Event).get(event_id)
-
- return result
-
-
-def event_get_all(context):
- stacks = soft_delete_aware_query(context, models.Stack)
- stack_ids = [stack.id for stack in stacks]
- results = model_query(context, models.Event).\
- filter(models.Event.stack_id.in_(stack_ids)).all()
-
- return results
-
-
-def event_get_all_by_tenant(context):
- stacks = soft_delete_aware_query(context, models.Stack).\
- filter_by(tenant=context.tenant_id).all()
- results = []
- for stack in stacks:
- results.extend(model_query(context, models.Event).
- filter_by(stack_id=stack.id).all())
-
- return results
-
-
-def event_get_all_by_stack(context, stack_id):
- results = model_query(context, models.Event).\
- filter_by(stack_id=stack_id).all()
-
- return results
-
-
-def event_create(context, values):
- event_ref = models.Event()
- event_ref.update(values)
- event_ref.save(_session(context))
- return event_ref
-
-
-def watch_rule_get(context, watch_rule_id):
- result = model_query(context, models.WatchRule).get(watch_rule_id)
- return result
-
-
-def watch_rule_get_by_name(context, watch_rule_name):
- result = model_query(context, models.WatchRule).\
- filter_by(name=watch_rule_name).first()
- return result
-
-
-def watch_rule_get_all(context):
- results = model_query(context, models.WatchRule).all()
- return results
-
-
-def watch_rule_get_all_by_stack(context, stack_id):
- results = model_query(context, models.WatchRule).\
- filter_by(stack_id=stack_id).all()
- return results
-
-
-def watch_rule_create(context, values):
- obj_ref = models.WatchRule()
- obj_ref.update(values)
- obj_ref.save(_session(context))
- return obj_ref
-
-
-def watch_rule_update(context, watch_id, values):
- wr = watch_rule_get(context, watch_id)
-
- if not wr:
- raise exception.NotFound('Attempt to update a watch with id: %s '
- 'that does not exist' % watch_id)
-
- wr.update(values)
- wr.save(_session(context))
-
-
-def watch_rule_delete(context, watch_id):
- wr = watch_rule_get(context, watch_id)
- if not wr:
- raise exception.NotFound('Attempt to delete watch_rule: %s '
- 'that does not exist' % watch_id)
-
- session = Session.object_session(wr)
-
- for d in wr.watch_data:
- session.delete(d)
-
- session.delete(wr)
- session.flush()
-
-
-def watch_data_create(context, values):
- obj_ref = models.WatchData()
- obj_ref.update(values)
- obj_ref.save(_session(context))
- return obj_ref
-
-
-def watch_data_get_all(context):
- results = model_query(context, models.WatchData).all()
- return results
-
-
-def watch_data_delete(context, watch_name):
- ds = model_query(context, models.WatchRule).\
- filter_by(name=watch_name).all()
-
- if not ds:
- raise exception.NotFound('Attempt to delete watch_data: %s '
- 'that does not exist' % watch_name)
-
- # ds is a list of results; take the session from its first element
- session = Session.object_session(ds[0])
- for d in ds:
- session.delete(d)
- session.flush()
-
-
-def purge_deleted(age):
- if age is not None:
- try:
- age = int(age)
- except ValueError:
- raise exception.Error(_("age should be an integer"))
- if age < 0:
- raise exception.Error(_("age should be a positive integer"))
- else:
- age = 90
-
- time_line = datetime.now() - timedelta(days=age)
- engine = get_engine()
- meta = sqlalchemy.MetaData()
- meta.bind = engine
-
- stack = sqlalchemy.Table('stack', meta, autoload=True)
- event = sqlalchemy.Table('event', meta, autoload=True)
- raw_template = sqlalchemy.Table('raw_template', meta, autoload=True)
- user_creds = sqlalchemy.Table('user_creds', meta, autoload=True)
-
- stmt = sqlalchemy.select([stack.c.id,
- stack.c.raw_template_id,
- stack.c.user_creds_id]).\
- where(stack.c.deleted_at < time_line)
- deleted_stacks = engine.execute(stmt)
-
- for s in deleted_stacks:
- event_del = event.delete().where(event.c.stack_id == s[0])
- engine.execute(event_del)
- stack_del = stack.delete().where(stack.c.id == s[0])
- engine.execute(stack_del)
- raw_template_del = raw_template.delete().\
- where(raw_template.c.id == s[1])
- engine.execute(raw_template_del)
- user_creds_del = user_creds.delete().where(user_creds.c.id == s[2])
- engine.execute(user_creds_del)
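`purge_deleted()` above removes rows child-first so foreign keys stay satisfied: events referencing the stack go before the stack row, which goes before the `raw_template` and `user_creds` rows it points at. A self-contained sqlite3 sketch of that ordering, with the schema reduced to the relevant columns:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('PRAGMA foreign_keys = ON')
conn.execute('CREATE TABLE raw_template (id INTEGER PRIMARY KEY)')
conn.execute('CREATE TABLE stack (id TEXT PRIMARY KEY, raw_template_id '
             'INTEGER NOT NULL REFERENCES raw_template (id))')
conn.execute('CREATE TABLE event (id INTEGER PRIMARY KEY, stack_id '
             'TEXT NOT NULL REFERENCES stack (id))')
conn.execute('INSERT INTO raw_template (id) VALUES (1)')
conn.execute("INSERT INTO stack (id, raw_template_id) VALUES ('s1', 1)")
conn.execute("INSERT INTO event (id, stack_id) VALUES (1, 's1')")

# Same order as purge_deleted(): child rows first, then the stack,
# then the rows the stack referenced.
for stack_id, tmpl_id in [('s1', 1)]:
    conn.execute('DELETE FROM event WHERE stack_id = ?', (stack_id,))
    conn.execute('DELETE FROM stack WHERE id = ?', (stack_id,))
    conn.execute('DELETE FROM raw_template WHERE id = ?', (tmpl_id,))

remaining = conn.execute('SELECT count(*) FROM stack').fetchone()[0]
print(remaining)  # 0
```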
+++ /dev/null
-#!/usr/bin/env python
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from migrate.versioning.shell import main
-import migrate.exceptions
-import ConfigParser
-
-if __name__ == '__main__':
- import os.path
- migrate_repo_path = os.path.join(os.path.dirname(__file__),
- 'migrate_repo')
-
- # Try to get the config-file value for sql_connection
- # Note we can't use openstack.common.cfg because this also insists
- # on parsing the CLI, which we don't want here
- try:
- config = ConfigParser.SafeConfigParser()
- config.readfp(open('/etc/heat/heat.conf'))
- sql_connection = config.get('DEFAULT', 'sql_connection')
- except Exception:
- sql_connection = 'mysql://heat:heat@localhost/heat'
-
- try:
- main(url=sql_connection, debug='False', repository=migrate_repo_path)
- except migrate.exceptions.DatabaseAlreadyControlledError:
- print('Database already version controlled.')
+++ /dev/null
-This is a database migration repository.
-
-More information at
-http://code.google.com/p/sqlalchemy-migrate/
+++ /dev/null
-#!/usr/bin/env python
-from migrate.versioning.shell import main
-
-if __name__ == '__main__':
- main(debug='False')
+++ /dev/null
-[db_settings]
-# Used to identify which repository this database is versioned under.
-# You can use the name of your project.
-repository_id=heat
-
-# The name of the database table used to track the schema version.
-# This name shouldn't already be used by your project.
-# If this is changed once a database is under version control, you'll need to
-# change the table name in each database too.
-version_table=migrate_version
-
-# When committing a change script, Migrate will attempt to generate the
-# sql for all supported databases; normally, if one of them fails - probably
-# because you don't have that database installed - it is ignored and the
-# commit continues, perhaps ending successfully.
-# Databases in this list MUST compile successfully during a commit, or the
-# entire commit will fail. List the databases your application will actually
-# be using to ensure your updates to that database work properly.
-# This must be a list; example: ['postgres','sqlite']
-required_dbs=[]
-
-# When creating new change scripts, Migrate will stamp the new script with
-# a version number. By default this is latest_version + 1. You can set this
-# to 'true' to tell Migrate to use the UTC timestamp instead.
-use_timestamp_numbering=False
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import sqlalchemy
-
-
-def upgrade(migrate_engine):
- meta = sqlalchemy.MetaData()
- meta.bind = migrate_engine
-
- raw_template = sqlalchemy.Table(
- 'raw_template', meta,
- sqlalchemy.Column('id', sqlalchemy.Integer, primary_key=True,
- nullable=False),
- sqlalchemy.Column('created_at', sqlalchemy.DateTime),
- sqlalchemy.Column('updated_at', sqlalchemy.DateTime),
- sqlalchemy.Column('template', sqlalchemy.Text),
- )
-
- user_creds = sqlalchemy.Table(
- 'user_creds', meta,
- sqlalchemy.Column('id', sqlalchemy.Integer,
- primary_key=True, nullable=False),
- sqlalchemy.Column('created_at', sqlalchemy.DateTime),
- sqlalchemy.Column('updated_at', sqlalchemy.DateTime),
- sqlalchemy.Column('username', sqlalchemy.String(255)),
- sqlalchemy.Column('password', sqlalchemy.String(255)),
- sqlalchemy.Column('service_user', sqlalchemy.String(255)),
- sqlalchemy.Column('service_password', sqlalchemy.String(255)),
- sqlalchemy.Column('tenant', sqlalchemy.String(1024)),
- sqlalchemy.Column('auth_url', sqlalchemy.Text),
- sqlalchemy.Column('aws_auth_url', sqlalchemy.Text),
- sqlalchemy.Column('tenant_id', sqlalchemy.String(256)),
- sqlalchemy.Column('aws_creds', sqlalchemy.Text),
- )
-
- stack = sqlalchemy.Table(
- 'stack', meta,
- sqlalchemy.Column('id', sqlalchemy.String(36),
- primary_key=True, nullable=False),
- sqlalchemy.Column('created_at', sqlalchemy.DateTime),
- sqlalchemy.Column('updated_at', sqlalchemy.DateTime),
- sqlalchemy.Column('name', sqlalchemy.String(255)),
- sqlalchemy.Column('raw_template_id',
- sqlalchemy.Integer,
- sqlalchemy.ForeignKey('raw_template.id'),
- nullable=False),
- sqlalchemy.Column('user_creds_id', sqlalchemy.Integer,
- sqlalchemy.ForeignKey('user_creds.id'),
- nullable=False),
- sqlalchemy.Column('username', sqlalchemy.String(256)),
- sqlalchemy.Column('owner_id', sqlalchemy.String(36)),
- sqlalchemy.Column('status', sqlalchemy.String(255)),
- sqlalchemy.Column('status_reason', sqlalchemy.String(255)),
- sqlalchemy.Column('parameters', sqlalchemy.Text),
- sqlalchemy.Column('timeout', sqlalchemy.Integer, nullable=False),
- sqlalchemy.Column('tenant', sqlalchemy.String(256)),
- sqlalchemy.Column('disable_rollback', sqlalchemy.Boolean,
- nullable=False),
- )
-
- resource = sqlalchemy.Table(
- 'resource', meta,
- sqlalchemy.Column('id', sqlalchemy.Integer, primary_key=True,
- nullable=False),
- sqlalchemy.Column('nova_instance', sqlalchemy.String(255)),
- sqlalchemy.Column('name', sqlalchemy.String(255)),
- sqlalchemy.Column('created_at', sqlalchemy.DateTime),
- sqlalchemy.Column('updated_at', sqlalchemy.DateTime),
- sqlalchemy.Column('state', sqlalchemy.String(255)),
- sqlalchemy.Column('state_description', sqlalchemy.String(255)),
- sqlalchemy.Column('stack_id', sqlalchemy.String(36),
- sqlalchemy.ForeignKey('stack.id'), nullable=False),
- sqlalchemy.Column('rsrc_metadata', sqlalchemy.Text),
- )
-
- event = sqlalchemy.Table(
- 'event', meta,
- sqlalchemy.Column('id', sqlalchemy.Integer,
- primary_key=True, nullable=False),
- sqlalchemy.Column('stack_id', sqlalchemy.String(36),
- sqlalchemy.ForeignKey('stack.id'), nullable=False),
- sqlalchemy.Column('created_at', sqlalchemy.DateTime),
- sqlalchemy.Column('updated_at', sqlalchemy.DateTime),
- sqlalchemy.Column('name', sqlalchemy.String(255)),
- sqlalchemy.Column('logical_resource_id', sqlalchemy.String(255)),
- sqlalchemy.Column('physical_resource_id', sqlalchemy.String(255)),
- sqlalchemy.Column('resource_status_reason', sqlalchemy.String(255)),
- sqlalchemy.Column('resource_type', sqlalchemy.String(255)),
- sqlalchemy.Column('resource_properties', sqlalchemy.PickleType),
- )
-
- watch_rule = sqlalchemy.Table(
- 'watch_rule', meta,
- sqlalchemy.Column('id', sqlalchemy.Integer, primary_key=True,
- nullable=False),
- sqlalchemy.Column('created_at', sqlalchemy.DateTime),
- sqlalchemy.Column('updated_at', sqlalchemy.DateTime),
- sqlalchemy.Column('name', sqlalchemy.String(255)),
- sqlalchemy.Column('state', sqlalchemy.String(255)),
- sqlalchemy.Column('rule', sqlalchemy.Text),
- sqlalchemy.Column('last_evaluated', sqlalchemy.DateTime),
- sqlalchemy.Column('stack_id', sqlalchemy.String(36),
- sqlalchemy.ForeignKey('stack.id'), nullable=False),
- )
-
- watch_data = sqlalchemy.Table(
- 'watch_data', meta,
- sqlalchemy.Column('id', sqlalchemy.Integer, primary_key=True,
- nullable=False),
- sqlalchemy.Column('created_at', sqlalchemy.DateTime),
- sqlalchemy.Column('updated_at', sqlalchemy.DateTime),
- sqlalchemy.Column('data', sqlalchemy.Text),
- sqlalchemy.Column('watch_rule_id', sqlalchemy.Integer,
- sqlalchemy.ForeignKey('watch_rule.id'),
- nullable=False),
- )
-
- tables = (
- raw_template,
- user_creds,
- stack,
- resource,
- event,
- watch_rule,
- watch_data,
- )
-
- for index, table in enumerate(tables):
- try:
- table.create()
- except Exception:
- # If an error occurs, drop all tables created so far to return
- # to the previously existing state.
- meta.drop_all(tables=tables[:index])
- raise
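The create-and-roll-back loop above can be sketched with the stdlib alone: track which tables were created and drop them (in reverse order) if a later creation fails. Table names mirror the migration; the failure is forced with deliberately malformed DDL.

```python
import sqlite3

conn = sqlite3.connect(':memory:')
ddl = [
    ('raw_template', 'CREATE TABLE raw_template (id INTEGER PRIMARY KEY)'),
    ('user_creds', 'CREATE TABLE user_creds (id INTEGER PRIMARY KEY)'),
    # Misspelled keyword, guaranteed to raise and trigger the rollback
    ('stack', 'CREATE TABLEE stack (id TEXT PRIMARY KEY)'),
]
created = []
try:
    for name, stmt in ddl:
        conn.execute(stmt)
        created.append(name)
except sqlite3.Error:
    # Restore the previous state: drop everything created so far.
    for name in reversed(created):
        conn.execute('DROP TABLE %s' % name)

tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")]
print(tables)  # []
```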
-
-
-def downgrade(migrate_engine):
- raise Exception('Database downgrade not supported - would drop all tables')
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import sqlalchemy
-
-
-def upgrade(migrate_engine):
- meta = sqlalchemy.MetaData()
- meta.bind = migrate_engine
-
- stack = sqlalchemy.Table('stack', meta, autoload=True)
- stack.c.timeout.alter(nullable=True)
-
-
-def downgrade(migrate_engine):
- meta = sqlalchemy.MetaData()
- meta.bind = migrate_engine
-
- stack = sqlalchemy.Table('stack', meta, autoload=True)
- stack.c.timeout.alter(nullable=False)
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import sqlalchemy
-
-
-def upgrade(migrate_engine):
- meta = sqlalchemy.MetaData()
- meta.bind = migrate_engine
-
- event = sqlalchemy.Table('event', meta, autoload=True)
- # Currently there is a 'name' column which really holds the
- # resource status, so rename it and add a separate action column.
- # The action is e.g. "CREATE"; the status is e.g. "IN_PROGRESS".
- event.c.name.alter(name='resource_status')
- sqlalchemy.Column('resource_action', sqlalchemy.String(255)).create(event)
-
-
-def downgrade(migrate_engine):
- meta = sqlalchemy.MetaData()
- meta.bind = migrate_engine
-
- event = sqlalchemy.Table('event', meta, autoload=True)
- event.c.resource_status.alter(name='name')
- event.c.resource_action.drop()
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import sqlalchemy
-from heat.openstack.common import uuidutils
-
-
-def upgrade(migrate_engine):
- meta = sqlalchemy.MetaData(bind=migrate_engine)
-
- resource = sqlalchemy.Table('resource', meta, autoload=True)
-
- resource.c.id.alter(sqlalchemy.String(36), primary_key=True,
- default=uuidutils.generate_uuid)
-
-
-def downgrade(migrate_engine):
- meta = sqlalchemy.MetaData(bind=migrate_engine)
-
- resource = sqlalchemy.Table('resource', meta, autoload=True)
-
- resource.c.id.alter(sqlalchemy.Integer, primary_key=True)
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import sqlalchemy
-
-
-def upgrade(migrate_engine):
- meta = sqlalchemy.MetaData(bind=migrate_engine)
-
- resource = sqlalchemy.Table('resource', meta, autoload=True)
- # Align the current state/state_description with the
- # action/status now used in the event table
- action = sqlalchemy.Column('action',
- sqlalchemy.String(length=255))
- action.create(resource)
- resource.c.state.alter(name='status')
- resource.c.state_description.alter(name='status_reason')
-
-
-def downgrade(migrate_engine):
- meta = sqlalchemy.MetaData(bind=migrate_engine)
-
- resource = sqlalchemy.Table('resource', meta, autoload=True)
- resource.c.action.drop()
- resource.c.status.alter(name='state')
- resource.c.status_reason.alter(name='state_description')
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import sqlalchemy
-
-
-def upgrade(migrate_engine):
- meta = sqlalchemy.MetaData(bind=migrate_engine)
-
- stack = sqlalchemy.Table('stack', meta, autoload=True)
- # Align with action/status now used in the event/resource tables
- action = sqlalchemy.Column('action',
- sqlalchemy.String(length=255))
- action.create(stack)
-
-
-def downgrade(migrate_engine):
- meta = sqlalchemy.MetaData(bind=migrate_engine)
-
- stack = sqlalchemy.Table('stack', meta, autoload=True)
- stack.c.action.drop()
+++ /dev/null
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import sqlalchemy
-
-
-def upgrade(migrate_engine):
- meta = sqlalchemy.MetaData()
- meta.bind = migrate_engine
-
- resource_data = sqlalchemy.Table(
- 'resource_data', meta,
- sqlalchemy.Column('id',
- sqlalchemy.Integer,
- primary_key=True,
- nullable=False),
- sqlalchemy.Column('created_at', sqlalchemy.DateTime),
- sqlalchemy.Column('updated_at', sqlalchemy.DateTime),
- sqlalchemy.Column('key', sqlalchemy.String(255)),
- sqlalchemy.Column('value', sqlalchemy.Text),
- sqlalchemy.Column('redact', sqlalchemy.Boolean),
- sqlalchemy.Column('resource_id',
- sqlalchemy.String(36),
- sqlalchemy.ForeignKey('resource.id'),
- nullable=False)
- )
- sqlalchemy.Table('resource', meta, autoload=True)
- resource_data.create()
-
-
-def downgrade(migrate_engine):
- meta = sqlalchemy.MetaData()
- meta.bind = migrate_engine
-
- resource_data = sqlalchemy.Table('resource_data', meta, autoload=True)
- resource_data.drop()
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import sqlalchemy
-
-
-def upgrade(migrate_engine):
- meta = sqlalchemy.MetaData(bind=migrate_engine)
-
- stack = sqlalchemy.Table('stack', meta, autoload=True)
- sqlalchemy.Column('deleted_at', sqlalchemy.DateTime).create(stack)
-
-
-def downgrade(migrate_engine):
- meta = sqlalchemy.MetaData()
- meta.bind = migrate_engine
-
- stack = sqlalchemy.Table('stack', meta, autoload=True)
- event = sqlalchemy.Table('event', meta, autoload=True)
- user_creds = sqlalchemy.Table('user_creds', meta, autoload=True)
- raw_template = sqlalchemy.Table('raw_template', meta, autoload=True)
-
- # Remove soft deleted data
- not_deleted = None
- stmt = sqlalchemy.select([stack.c.id,
- stack.c.raw_template_id,
- stack.c.user_creds_id]).\
- where(stack.c.deleted_at != not_deleted)
- deleted_stacks = migrate_engine.execute(stmt)
- for s in deleted_stacks:
- event_del = event.delete().where(event.c.stack_id == s[0])
- migrate_engine.execute(event_del)
- stack_del = stack.delete().where(stack.c.id == s[0])
- migrate_engine.execute(stack_del)
- raw_template_del = raw_template.delete().\
- where(raw_template.c.id == s[1])
- migrate_engine.execute(raw_template_del)
- user_creds_del = user_creds.delete().where(user_creds.c.id == s[2])
- migrate_engine.execute(user_creds_del)
-
- stack.c.deleted_at.drop()
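The downgrade above must purge soft-deleted rows before dropping the `deleted_at` column, since the convention is that a row with `deleted_at` set is logically gone. A stdlib sketch of that convention, with dicts standing in for table rows (the helper name is illustrative):

```python
def purge_soft_deleted(rows):
    """Return the rows that survive a purge: those whose
    deleted_at timestamp was never set."""
    return [row for row in rows if row.get('deleted_at') is None]
```

Rows with a non-null `deleted_at` are removed along with their dependent event, raw-template, and user-creds rows in the migration itself.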
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import sqlalchemy
-from sqlalchemy.dialects import mysql
-from sqlalchemy import types as sqltypes
-
-
-def upgrade(migrate_engine):
- if migrate_engine.name != 'mysql':
- return
-
- meta = sqlalchemy.MetaData(bind=migrate_engine)
- raw_template = sqlalchemy.Table('raw_template', meta, autoload=True)
- raw_template.c.template.alter(type=mysql.LONGTEXT())
-
-
-def downgrade(migrate_engine):
- if migrate_engine.name != 'mysql':
- return
-
- meta = sqlalchemy.MetaData(bind=migrate_engine)
- raw_template = sqlalchemy.Table('raw_template', meta, autoload=True)
- raw_template.c.template.alter(type=sqltypes.TEXT())
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import sqlalchemy
-
-
-def upgrade(migrate_engine):
- meta = sqlalchemy.MetaData()
- meta.bind = migrate_engine
- event = sqlalchemy.Table('event', meta, autoload=True)
- event.c.logical_resource_id.alter(name='resource_name')
-
-
-def downgrade(migrate_engine):
- meta = sqlalchemy.MetaData()
- meta.bind = migrate_engine
- event = sqlalchemy.Table('event', meta, autoload=True)
- event.c.resource_name.alter(name='logical_resource_id')
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import sqlalchemy
-
-
-def upgrade(migrate_engine):
- meta = sqlalchemy.MetaData(bind=migrate_engine)
-
- user_creds = sqlalchemy.Table('user_creds', meta, autoload=True)
-
- user_creds.c.service_user.drop()
- user_creds.c.service_password.drop()
-
-
-def downgrade(migrate_engine):
- meta = sqlalchemy.MetaData(bind=migrate_engine)
-
- user_creds = sqlalchemy.Table('user_creds', meta, autoload=True)
-
- service_user = sqlalchemy.Column('service_user',
- sqlalchemy.String(length=255))
- service_user.create(user_creds)
- service_password = sqlalchemy.Column('service_password',
- sqlalchemy.String(length=255))
- service_password.create(user_creds)
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import sqlalchemy
-
-
-def upgrade(migrate_engine):
- meta = sqlalchemy.MetaData(bind=migrate_engine)
-
- user_creds = sqlalchemy.Table('user_creds', meta, autoload=True)
-
- user_creds.c.aws_creds.drop()
- user_creds.c.aws_auth_url.drop()
-
-
-def downgrade(migrate_engine):
- meta = sqlalchemy.MetaData(bind=migrate_engine)
-
- user_creds = sqlalchemy.Table('user_creds', meta, autoload=True)
-
- aws_creds = sqlalchemy.Column('aws_creds',
- sqlalchemy.String(length=255))
- aws_creds.create(user_creds)
- aws_auth_url = sqlalchemy.Column('aws_auth_url',
- sqlalchemy.String(length=255))
- aws_auth_url.create(user_creds)
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import sqlalchemy
-
-
-def upgrade(migrate_engine):
- meta = sqlalchemy.MetaData(bind=migrate_engine)
-
- user_creds = sqlalchemy.Table('user_creds', meta, autoload=True)
-
- # Keystone IDs are 32 characters long, but the keystone DB schema
- # specifies varchar(64), so align with that here. The trust_id is
- # stored encrypted, so align with the 255 chars allowed for password.
- trustor_user_id = sqlalchemy.Column('trustor_user_id',
- sqlalchemy.String(length=64))
- trust_id = sqlalchemy.Column('trust_id', sqlalchemy.String(length=255))
- trustor_user_id.create(user_creds)
- trust_id.create(user_creds)
-
-
-def downgrade(migrate_engine):
- meta = sqlalchemy.MetaData(bind=migrate_engine)
-
- user_creds = sqlalchemy.Table('user_creds', meta, autoload=True)
- user_creds.c.trustor_user_id.drop()
- user_creds.c.trust_id.drop()
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import sqlalchemy
-from sqlalchemy.dialects import mysql
-from sqlalchemy import types as sqltypes
-
-
-def upgrade(migrate_engine):
- if migrate_engine.name != 'mysql':
- return
-
- meta = sqlalchemy.MetaData(bind=migrate_engine)
-
- stack = sqlalchemy.Table('stack', meta, autoload=True)
- stack.c.parameters.alter(type=mysql.LONGTEXT())
-
- resource = sqlalchemy.Table('resource', meta, autoload=True)
- resource.c.rsrc_metadata.alter(type=mysql.LONGTEXT())
-
- watch_rule = sqlalchemy.Table('watch_rule', meta, autoload=True)
- watch_rule.c.rule.alter(type=mysql.LONGTEXT())
-
- watch_data = sqlalchemy.Table('watch_data', meta, autoload=True)
- watch_data.c.data.alter(type=mysql.LONGTEXT())
-
-
-def downgrade(migrate_engine):
- if migrate_engine.name != 'mysql':
- return
-
- meta = sqlalchemy.MetaData(bind=migrate_engine)
-
- stack = sqlalchemy.Table('stack', meta, autoload=True)
- stack.c.parameters.alter(type=sqltypes.TEXT())
-
- resource = sqlalchemy.Table('resource', meta, autoload=True)
- resource.c.rsrc_metadata.alter(type=sqltypes.TEXT())
-
- watch_rule = sqlalchemy.Table('watch_rule', meta, autoload=True)
- watch_rule.c.rule.alter(type=sqltypes.TEXT())
-
- watch_data = sqlalchemy.Table('watch_data', meta, autoload=True)
- watch_data.c.data.alter(type=sqltypes.TEXT())
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import distutils.version as dist_version
-import os
-import sys
-
-from heat.db.sqlalchemy.session import get_engine
-from heat.db import migration
-
-import sqlalchemy
-import migrate
-from migrate.versioning import util as migrate_util
-
-from heat.openstack.common import exception
-from heat.openstack.common.gettextutils import _
-
-_REPOSITORY = None
-
-
-@migrate_util.decorator
-def patched_with_engine(f, *a, **kw):
- url = a[0]
- engine = migrate_util.construct_engine(url, **kw)
- try:
- kw['engine'] = engine
- return f(*a, **kw)
- finally:
- if isinstance(engine, migrate_util.Engine) and engine is not url:
- migrate_util.log.debug('Disposing SQLAlchemy engine %s', engine)
- engine.dispose()
-
-
- # TODO(jkoelker) When migrate 0.7.3 is released and heat depends
- # on that version or higher, this can be removed
-MIN_PKG_VERSION = dist_version.StrictVersion('0.7.3')
-if (not hasattr(migrate, '__version__') or
- dist_version.StrictVersion(migrate.__version__) < MIN_PKG_VERSION):
- migrate_util.with_engine = patched_with_engine
-
-
-# NOTE(jkoelker) Delay importing migrate until we are patched
-from migrate.versioning import api as versioning_api
-from migrate.versioning.repository import Repository
-
-try:
- from migrate.versioning import exceptions as versioning_exceptions
-except ImportError:
- try:
- from migrate import exceptions as versioning_exceptions
- except ImportError:
- sys.exit(_("python-migrate is not installed. Exiting."))
-
-
-
-def db_sync(version=None):
- if version is not None:
- try:
- version = int(version)
- except ValueError:
- raise exception.Error(_("version should be an integer"))
- current_version = db_version()
- repository = _find_migrate_repo()
- if version is None or version > current_version:
- return versioning_api.upgrade(get_engine(), repository, version)
- else:
- return versioning_api.downgrade(get_engine(), repository,
- version)
-
-
-def db_version():
- repository = _find_migrate_repo()
- try:
- return versioning_api.db_version(get_engine(), repository)
- except versioning_exceptions.DatabaseNotControlledError as exc:
- # If we aren't version controlled there may be an existing,
- # non-version controlled database present.
- meta = sqlalchemy.MetaData()
- engine = get_engine()
- meta.reflect(bind=engine)
- tables = meta.tables
- if len(tables):
- raise exc
-
- db_version_control(migration.INIT_VERSION)
- return versioning_api.db_version(get_engine(), repository)
-
-
-def db_version_control(version=None):
- repository = _find_migrate_repo()
- versioning_api.version_control(get_engine(), repository, version)
- return version
-
-
-def _find_migrate_repo():
- """Get the path for the migrate repository."""
- path = os.path.join(os.path.abspath(os.path.dirname(__file__)),
- 'migrate_repo')
- assert os.path.exists(path)
- global _REPOSITORY
- if _REPOSITORY is None:
- _REPOSITORY = Repository(path)
- return _REPOSITORY
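The decision logic in `db_sync()` above is easy to miss: a target version of `None` or anything above the current version triggers an upgrade, while anything at or below it triggers a downgrade. A pure-Python stand-in for that branch (the function name is illustrative):

```python
def sync_direction(current_version, target_version=None):
    """Mirror db_sync()'s branch: upgrade when no target is given or
    the target exceeds the current version, otherwise downgrade."""
    if target_version is None or target_version > current_version:
        return 'upgrade'
    return 'downgrade'
```

Note that requesting the current version itself falls into the downgrade branch, matching the `version > current_version` comparison in the source.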
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-"""
-SQLAlchemy models for heat data.
-"""
-
-import sqlalchemy
-
-from sqlalchemy.dialects import mysql
-from sqlalchemy.orm import relationship, backref, object_mapper
-from sqlalchemy.exc import IntegrityError
-from sqlalchemy.ext.declarative import declarative_base
-from sqlalchemy import types
-from json import dumps
-from json import loads
-from heat.openstack.common import exception
-from heat.openstack.common import uuidutils
-from heat.openstack.common import timeutils
-from heat.db.sqlalchemy.session import get_session
-from sqlalchemy.orm.session import Session
-
-BASE = declarative_base()
-
-
-class Json(types.TypeDecorator):
- impl = types.Text
-
- def load_dialect_impl(self, dialect):
- if dialect.name == 'mysql':
- return dialect.type_descriptor(mysql.LONGTEXT())
- else:
- return self.impl
-
- def process_bind_param(self, value, dialect):
- return dumps(value)
-
- def process_result_value(self, value, dialect):
- return loads(value)
-
-# TODO(leizhang) When we removed sqlalchemy 0.7 dependence
-# we can import MutableDict directly and remove ./mutable.py
-try:
- from sqlalchemy.ext.mutable import MutableDict as sa_MutableDict
- sa_MutableDict.associate_with(Json)
-except ImportError:
- from heat.db.sqlalchemy.mutable import MutableDict
- MutableDict.associate_with(Json)
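On each round trip, the `Json` type decorator above serializes Python values with `json.dumps` going into the database and deserializes with `json.loads` coming out. A stdlib sketch of just that conversion, outside SQLAlchemy (helper names are illustrative):

```python
import json

def bind_param(value):
    # What Json.process_bind_param does: Python value -> DB text.
    return json.dumps(value)

def result_value(value):
    # What Json.process_result_value does: DB text -> Python value.
    return json.loads(value)
```

The column therefore stores plain text (LONGTEXT on MySQL), and the ORM only ever sees the deserialized dict.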
-
-
-class HeatBase(object):
- """Base class for Heat Models."""
- __table_args__ = {'mysql_engine': 'InnoDB'}
- __table_initialized__ = False
- created_at = sqlalchemy.Column(sqlalchemy.DateTime,
- default=timeutils.utcnow)
- updated_at = sqlalchemy.Column(sqlalchemy.DateTime,
- onupdate=timeutils.utcnow)
-
- def save(self, session=None):
- """Save this object."""
- if not session:
- session = Session.object_session(self)
- if not session:
- session = get_session()
- session.add(self)
- try:
- session.flush()
- except IntegrityError as e:
- if str(e).endswith('is not unique'):
- raise exception.Duplicate(str(e))
- else:
- raise
-
- def expire(self, session=None, attrs=None):
- """Expire this object ()."""
- if not session:
- session = Session.object_session(self)
- if not session:
- session = get_session()
- session.expire(self, attrs)
-
- def refresh(self, session=None, attrs=None):
- """Refresh this object."""
- if not session:
- session = Session.object_session(self)
- if not session:
- session = get_session()
- session.refresh(self, attrs)
-
- def delete(self, session=None):
- """Delete this object."""
- if not session:
- session = Session.object_session(self)
- if not session:
- session = get_session()
- session.delete(self)
- session.flush()
-
- def __setitem__(self, key, value):
- setattr(self, key, value)
-
- def __getitem__(self, key):
- return getattr(self, key)
-
- def get(self, key, default=None):
- return getattr(self, key, default)
-
- def __iter__(self):
- self._i = iter(object_mapper(self).columns)
- return self
-
- def next(self):
- n = self._i.next().name
- return n, getattr(self, n)
-
- def update(self, values):
- """Make the model object behave like a dict."""
- for k, v in values.iteritems():
- setattr(self, k, v)
-
- def update_and_save(self, values, session=None):
- if not session:
- session = Session.object_session(self)
- if not session:
- session = get_session()
- session.begin()
- for k, v in values.iteritems():
- setattr(self, k, v)
- session.commit()
-
- def iteritems(self):
- """Make the model object behave like a dict.
-
- Includes attributes from joins.
- """
- local = dict(self)
- joined = dict([(k, v) for k, v in self.__dict__.iteritems()
- if not k[0] == '_'])
- local.update(joined)
- return local.iteritems()
-
-
-class SoftDelete(object):
- deleted_at = sqlalchemy.Column(sqlalchemy.DateTime)
-
- def soft_delete(self, session=None):
- """Mark this object as deleted."""
- self.update_and_save({'deleted_at': timeutils.utcnow()},
- session=session)
-
-
-class RawTemplate(BASE, HeatBase):
- """Represents an unparsed template which should be in JSON format."""
-
- __tablename__ = 'raw_template'
- id = sqlalchemy.Column(sqlalchemy.Integer, primary_key=True)
- template = sqlalchemy.Column(Json)
-
-
-class Stack(BASE, HeatBase, SoftDelete):
- """Represents a stack created by the heat engine."""
-
- __tablename__ = 'stack'
-
- id = sqlalchemy.Column(sqlalchemy.String, primary_key=True,
- default=uuidutils.generate_uuid)
- name = sqlalchemy.Column(sqlalchemy.String)
- raw_template_id = sqlalchemy.Column(
- sqlalchemy.Integer,
- sqlalchemy.ForeignKey('raw_template.id'),
- nullable=False)
- raw_template = relationship(RawTemplate, backref=backref('stack'))
- username = sqlalchemy.Column(sqlalchemy.String)
- tenant = sqlalchemy.Column(sqlalchemy.String)
- action = sqlalchemy.Column('action', sqlalchemy.String)
- status = sqlalchemy.Column('status', sqlalchemy.String)
- status_reason = sqlalchemy.Column('status_reason', sqlalchemy.String)
- parameters = sqlalchemy.Column('parameters', Json)
- user_creds_id = sqlalchemy.Column(
- sqlalchemy.Integer,
- sqlalchemy.ForeignKey('user_creds.id'),
- nullable=False)
- owner_id = sqlalchemy.Column(sqlalchemy.String, nullable=True)
- timeout = sqlalchemy.Column(sqlalchemy.Integer)
- disable_rollback = sqlalchemy.Column(sqlalchemy.Boolean)
-
-
-class UserCreds(BASE, HeatBase):
- """
- Represents user credentials and mirrors the 'context'
- handed in by wsgi.
- """
-
- __tablename__ = 'user_creds'
-
- id = sqlalchemy.Column(sqlalchemy.Integer, primary_key=True)
- username = sqlalchemy.Column(sqlalchemy.String)
- password = sqlalchemy.Column(sqlalchemy.String)
- tenant = sqlalchemy.Column(sqlalchemy.String)
- auth_url = sqlalchemy.Column(sqlalchemy.String)
- tenant_id = sqlalchemy.Column(sqlalchemy.String)
- trust_id = sqlalchemy.Column(sqlalchemy.String)
- trustor_user_id = sqlalchemy.Column(sqlalchemy.String)
- stack = relationship(Stack, backref=backref('user_creds'))
-
-
-class Event(BASE, HeatBase):
- """Represents an event generated by the heat engine."""
-
- __tablename__ = 'event'
-
- id = sqlalchemy.Column(sqlalchemy.Integer, primary_key=True)
- stack_id = sqlalchemy.Column(sqlalchemy.String,
- sqlalchemy.ForeignKey('stack.id'),
- nullable=False)
- stack = relationship(Stack, backref=backref('events'))
-
- resource_action = sqlalchemy.Column(sqlalchemy.String)
- resource_status = sqlalchemy.Column(sqlalchemy.String)
- resource_name = sqlalchemy.Column(sqlalchemy.String)
- physical_resource_id = sqlalchemy.Column(sqlalchemy.String)
- resource_status_reason = sqlalchemy.Column(sqlalchemy.String)
- resource_type = sqlalchemy.Column(sqlalchemy.String)
- resource_properties = sqlalchemy.Column(sqlalchemy.PickleType)
-
-
-class ResourceData(BASE, HeatBase):
- """Key/value store of arbitrary, resource-specific data."""
-
- __tablename__ = 'resource_data'
-
- id = sqlalchemy.Column('id',
- sqlalchemy.Integer,
- primary_key=True,
- nullable=False)
- key = sqlalchemy.Column('key', sqlalchemy.String)
- value = sqlalchemy.Column('value', sqlalchemy.String)
- redact = sqlalchemy.Column('redact', sqlalchemy.Boolean)
- resource_id = sqlalchemy.Column('resource_id',
- sqlalchemy.String,
- sqlalchemy.ForeignKey('resource.id'),
- nullable=False)
-
-
-class Resource(BASE, HeatBase):
- """Represents a resource created by the heat engine."""
-
- __tablename__ = 'resource'
-
- id = sqlalchemy.Column(sqlalchemy.String,
- primary_key=True,
- default=uuidutils.generate_uuid)
- action = sqlalchemy.Column('action', sqlalchemy.String)
- status = sqlalchemy.Column('status', sqlalchemy.String)
- name = sqlalchemy.Column('name', sqlalchemy.String, nullable=False)
- nova_instance = sqlalchemy.Column('nova_instance', sqlalchemy.String)
- status_reason = sqlalchemy.Column('status_reason', sqlalchemy.String)
- # odd name as "metadata" is reserved
- rsrc_metadata = sqlalchemy.Column('rsrc_metadata', Json)
-
- stack_id = sqlalchemy.Column(sqlalchemy.String,
- sqlalchemy.ForeignKey('stack.id'),
- nullable=False)
- stack = relationship(Stack, backref=backref('resources'))
- data = relationship(ResourceData,
- cascade="all,delete",
- backref=backref('resource'))
-
-
-class WatchRule(BASE, HeatBase):
- """Represents a watch_rule created by the heat engine."""
-
- __tablename__ = 'watch_rule'
-
- id = sqlalchemy.Column(sqlalchemy.Integer, primary_key=True)
- name = sqlalchemy.Column('name', sqlalchemy.String, nullable=False)
- rule = sqlalchemy.Column('rule', Json)
- state = sqlalchemy.Column('state', sqlalchemy.String)
- last_evaluated = sqlalchemy.Column(sqlalchemy.DateTime,
- default=timeutils.utcnow)
-
- stack_id = sqlalchemy.Column(sqlalchemy.String,
- sqlalchemy.ForeignKey('stack.id'),
- nullable=False)
- stack = relationship(Stack, backref=backref('watch_rule'))
-
-
-class WatchData(BASE, HeatBase):
- """Represents a watch_data created by the heat engine."""
-
- __tablename__ = 'watch_data'
-
- id = sqlalchemy.Column(sqlalchemy.Integer, primary_key=True)
- data = sqlalchemy.Column('data', Json)
-
- watch_rule_id = sqlalchemy.Column(
- sqlalchemy.Integer,
- sqlalchemy.ForeignKey('watch_rule.id'),
- nullable=False)
- watch_rule = relationship(WatchRule, backref=backref('watch_data'))
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# The MIT License
-#
-# ext/mutable.py
-# Copyright (C) 2005-2013 the SQLAlchemy authors
-# and contributors <see AUTHORS file>
-#
-# This module is part of SQLAlchemy and is released under
- # the MIT License: http://www.opensource.org/licenses/mit-license.php
-"""
-Submitted on behalf of a third-party: sqlalchemy
-"""
-from sqlalchemy.ext.mutable import Mutable
-
-
-class MutableDict(Mutable, dict):
- """A dictionary type that implements :class:`.Mutable`.
-
- .. versionadded:: 0.8
-
- """
-
- def __setitem__(self, key, value):
- """Detect dictionary set events and emit change events."""
- dict.__setitem__(self, key, value)
- self.changed()
-
- def __delitem__(self, key):
- """Detect dictionary del events and emit change events."""
- dict.__delitem__(self, key)
- self.changed()
-
- def clear(self):
- dict.clear(self)
- self.changed()
-
- @classmethod
- def coerce(cls, key, value):
- """Convert plain dictionary to MutableDict."""
- if not isinstance(value, MutableDict):
- if isinstance(value, dict):
- return MutableDict(value)
- return Mutable.coerce(key, value)
- else:
- return value
-
- def __getstate__(self):
- return dict(self)
-
- def __setstate__(self, state):
- self.update(state)
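The core idea behind `MutableDict` above is that every mutating operation calls a `changed()` hook so an observer (SQLAlchemy's mutable extension, in the real class) can mark the owning row dirty. A stdlib-only sketch of that pattern, with a counter standing in for the observer (the class name is illustrative):

```python
class TrackedDict(dict):
    """Dict that notifies on mutation, mirroring MutableDict's shape."""

    def __init__(self, *args, **kwargs):
        super(TrackedDict, self).__init__(*args, **kwargs)
        self.change_count = 0

    def changed(self):
        # Real MutableDict delegates to SQLAlchemy's change machinery.
        self.change_count += 1

    def __setitem__(self, key, value):
        dict.__setitem__(self, key, value)
        self.changed()

    def __delitem__(self, key):
        dict.__delitem__(self, key)
        self.changed()

    def clear(self):
        dict.clear(self)
        self.changed()
```

Without this hook, in-place mutation of a JSON column's dict would be invisible to the ORM and never flushed.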
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-"""Session Handling for SQLAlchemy backend."""
-
-import sqlalchemy.interfaces
-import sqlalchemy.orm
-import sqlalchemy.engine
-from sqlalchemy.exc import DisconnectionError
-
-from heat.openstack.common import log as logging
-
-from heat.db import api as db_api
-
-logger = logging.getLogger(__name__)
-_ENGINE = None
-_MAKER = None
-
-
-def get_session(autocommit=True, expire_on_commit=False):
- """Return a SQLAlchemy session."""
- global _MAKER
-
- if _MAKER is None:
- _MAKER = get_maker(get_engine(), autocommit, expire_on_commit)
- return _MAKER()
-
-
-class SynchronousSwitchListener(sqlalchemy.interfaces.PoolListener):
-
- """Switch sqlite connections to non-synchronous mode."""
-
- def connect(self, dbapi_con, con_record):
- dbapi_con.execute("PRAGMA synchronous = OFF")
-
-
-class MySQLPingListener(object):
-
- """
- Ensures that MySQL connections checked out of the
- pool are alive.
-
- Borrowed from:
- http://groups.google.com/group/sqlalchemy/msg/a4ce563d802c929f
- """
-
- def checkout(self, dbapi_con, con_record, con_proxy):
- try:
- dbapi_con.cursor().execute('select 1')
- except dbapi_con.OperationalError as ex:
- if ex.args[0] in (2006, 2013, 2014, 2045, 2055):
- logger.warn('MySQL server has gone away: %s', ex)
- raise DisconnectionError("Database server went away")
- else:
- raise
-
-
-def get_engine():
- """Return a SQLAlchemy engine."""
- global _ENGINE
- if _ENGINE is None:
- connection_dict = sqlalchemy.engine.url.make_url(_get_sql_connection())
- engine_args = {
- "pool_recycle": _get_sql_idle_timeout(),
- "echo": False,
- 'convert_unicode': True
- }
-
- if 'mysql' in connection_dict.drivername:
- engine_args['listeners'] = [MySQLPingListener()]
-
- _ENGINE = sqlalchemy.create_engine(_get_sql_connection(),
- **engine_args)
- return _ENGINE
-
-
-def get_maker(engine, autocommit=True, expire_on_commit=False):
- """Return a SQLAlchemy sessionmaker using the given engine."""
- ses = sqlalchemy.orm.sessionmaker(
- bind=engine,
- autocommit=autocommit,
- expire_on_commit=expire_on_commit)
- return sqlalchemy.orm.scoped_session(ses)
-
-
-def _get_sql_connection():
- return db_api.SQL_CONNECTION
-
-
-def _get_sql_idle_timeout():
- return db_api.SQL_IDLE_TIMEOUT
+++ /dev/null
-#!/usr/bin/env python
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from __future__ import print_function
-
-import sys
-
-from heat.openstack.common import gettextutils
-
-gettextutils.install('heat')
-
-from oslo.config import cfg
-from heat.openstack.common import log as logging
-from heat.db import api
-from heat.db import migration
-
-LOG = logging.getLogger(__name__)
-
-
-if __name__ == '__main__':
- print('*******************************************', file=sys.stderr)
- print('Deprecated: use heat-manage db_sync instead', file=sys.stderr)
- print('*******************************************', file=sys.stderr)
- cfg.CONF(project='heat', prog='heat-engine')
-
- api.configure()
-
- try:
- migration.db_sync()
- except Exception as exc:
- print(str(exc), file=sys.stderr)
- sys.exit(1)
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-
-class LazyPluggable(object):
- """A pluggable backend loaded lazily based on some value."""
-
- def __init__(self, pivot, **backends):
- self.__backends = backends
- self.__pivot = pivot
- self.__backend = None
-
- def __get_backend(self):
- if not self.__backend:
- backend_name = 'sqlalchemy'
- backend = self.__backends[backend_name]
- if isinstance(backend, tuple):
- name = backend[0]
- fromlist = backend[1]
- else:
- name = backend
- fromlist = backend
-
- self.__backend = __import__(name, None, None, fromlist)
- return self.__backend
-
- def __getattr__(self, key):
- backend = self.__get_backend()
- return getattr(backend, key)
-
-
-IMPL = LazyPluggable('db_backend',
- sqlalchemy='heat.db.sqlalchemy.api')
-
-
-def purge_deleted(age):
- IMPL.purge_deleted(age)
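`LazyPluggable` defers the backend import until the first attribute access, so merely importing `heat.db.api` does not pull in SQLAlchemy. A self-contained sketch of the same class, substituting the stdlib `math` module for the real `heat.db.sqlalchemy.api` backend purely for illustration:

```python
class LazyPluggable(object):
    """Import the configured backend module only on first attribute access."""

    def __init__(self, pivot, **backends):
        self.__backends = backends
        self.__pivot = pivot
        self.__backend = None

    def __get_backend(self):
        if not self.__backend:
            # The Heat version hardcodes the 'sqlalchemy' backend too.
            name = self.__backends['sqlalchemy']
            self.__backend = __import__(name)
        return self.__backend

    def __getattr__(self, key):
        # Only called for attributes not found normally, so lookups are
        # transparently forwarded to the (lazily imported) backend module.
        return getattr(self.__get_backend(), key)


# 'math' stands in for 'heat.db.sqlalchemy.api' in this sketch.
IMPL = LazyPluggable('db_backend', sqlalchemy='math')
print(IMPL.sqrt(16))  # 4.0 -- math was imported on this first access
```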
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-# -*- coding: utf-8 -*-
-
-from heat.engine import resources
-from heat.engine import properties
-from heat.openstack.common.gettextutils import _
-
-from docutils import nodes
-from sphinx.util.compat import Directive
-
-
-class resourcepages(nodes.General, nodes.Element):
- pass
-
-
-class ResourcePages(Directive):
- has_content = False
- required_arguments = 0
- optional_arguments = 1
- final_argument_whitespace = False
- option_spec = {}
-
- def run(self):
- prefix = self.arguments and self.arguments.pop() or None
- content = []
- for resource_type, resource_class in _all_resources(prefix):
- self.resource_type = resource_type
- self.resource_class = resource_class
- section = self._section(content, resource_type, '%s')
-
- self.props_schemata = properties.schemata(
- self.resource_class.properties_schema)
-
- cls_doc = resource_class.__doc__
- if cls_doc:
- para = nodes.paragraph('', cls_doc)
- section.append(para)
-
- self.contribute_properties(section)
- self.contribute_attributes(section)
-
- self.contribute_hot_syntax(section)
- self.contribute_yaml_syntax(section)
- self.contribute_json_syntax(section)
-
- return content
-
- def _section(self, parent, title, id_pattern):
- id = id_pattern % self.resource_type
- section = nodes.section(ids=[id])
- parent.append(section)
- title = nodes.title('', title)
- section.append(title)
- return section
-
- def _prop_syntax_example(self, prop):
- if not prop:
- return 'Value'
- if prop.type == properties.LIST:
- schema = lambda i: prop.schema[i] if prop.schema else None
- sub_type = [self._prop_syntax_example(schema(i))
- for i in range(2)]
- return '[%s, %s, ...]' % tuple(sub_type)
- elif prop.type == properties.MAP:
- def sub_props():
- for sub_key, sub_value in prop.schema.items():
- if sub_value.implemented:
- yield '"%s": %s' % (
- sub_key, self._prop_syntax_example(sub_value))
- return '{%s}' % (', '.join(sub_props()) if prop.schema else '...')
- else:
- return prop.type
-
- def contribute_hot_syntax(self, parent):
- section = self._section(parent, _('HOT Syntax'), '%s-hot')
- props = []
- for prop_key in sorted(self.props_schemata.keys()):
- prop = self.props_schemata[prop_key]
- if prop.implemented:
- props.append('%s: %s' % (prop_key,
- self._prop_syntax_example(prop)))
-
- template = '''heat_template_version: 2013-05-23
-...
-resources:
- ...
- the_resource:
- type: %s
- properties:
- %s''' % (self.resource_type, '\n '.join(props))
-
- block = nodes.literal_block('', template)
- section.append(block)
-
- def contribute_yaml_syntax(self, parent):
- section = self._section(parent, _('YAML Syntax'), '%s-yaml')
- props = []
- for prop_key in sorted(self.props_schemata.keys()):
- prop = self.props_schemata[prop_key]
- if prop.implemented:
- props.append('%s: %s' % (prop_key,
- self._prop_syntax_example(prop)))
-
- template = '''HeatTemplateFormatVersion: '2012-12-12'
-...
-Resources:
- ...
- TheResource:
- Type: %s
- Properties:
- %s''' % (self.resource_type, '\n '.join(props))
-
- block = nodes.literal_block('', template)
- section.append(block)
-
- def contribute_json_syntax(self, parent):
- section = self._section(parent, _('JSON Syntax'), '%s-json')
-
- props = []
- for prop_key in sorted(self.props_schemata.keys()):
- prop = self.props_schemata[prop_key]
- if prop.implemented:
- props.append('"%s": %s' % (prop_key,
- self._prop_syntax_example(prop)))
- template = '''{
- "AWSTemplateFormatVersion" : "2010-09-09",
- ...
- "Resources" : {
- "TheResource": {
- "Type": "%s",
- "Properties": {
- %s
- }
- }
- }
-}''' % (self.resource_type, ',\n '.join(props))
- block = nodes.literal_block('', template)
- section.append(block)
-
- def contribute_property(self, prop_list, prop_key, prop):
- prop_item = nodes.definition_list_item(
- '', nodes.term('', prop_key))
- prop_list.append(prop_item)
-
- prop_item.append(nodes.classifier('', prop.type))
-
- definition = nodes.definition()
- prop_item.append(definition)
-
- if not prop.implemented:
- para = nodes.inline('', _('Not implemented.'))
- warning = nodes.note('', para)
- definition.append(warning)
- return
-
- if prop.description:
- para = nodes.paragraph('', prop.description)
- definition.append(para)
-
- if prop.required:
- para = nodes.paragraph('', _('Required property.'))
- elif prop.default is not None:
- para = nodes.paragraph(
- '',
- _('Optional property, defaults to "%s".') % prop.default)
- else:
- para = nodes.paragraph('', _('Optional property.'))
- definition.append(para)
-
- for constraint in prop.constraints:
- para = nodes.paragraph('', str(constraint))
- definition.append(para)
-
- sub_schema = None
- if prop.schema and prop.type == properties.MAP:
- para = nodes.emphasis('', _('Map properties:'))
- definition.append(para)
- sub_schema = prop.schema
-
- elif prop.schema and prop.type == properties.LIST:
- para = nodes.emphasis(
- '', _('List contents:'))
- definition.append(para)
- sub_schema = prop.schema
-
- if sub_schema:
- sub_prop_list = nodes.definition_list()
- definition.append(sub_prop_list)
- for sub_prop_key in sorted(sub_schema.keys()):
- sub_prop = sub_schema[sub_prop_key]
- self.contribute_property(sub_prop_list, sub_prop_key, sub_prop)
-
- def contribute_properties(self, parent):
- if not self.props_schemata:
- return
- section = self._section(parent, _('Properties'), '%s-props')
- prop_list = nodes.definition_list()
- section.append(prop_list)
- for prop_key in sorted(self.props_schemata.keys()):
- prop = self.props_schemata[prop_key]
- self.contribute_property(prop_list, prop_key, prop)
-
- def contribute_attributes(self, parent):
- schema = self.resource_class.attributes_schema
- if not schema:
- return
- section = self._section(parent, _('Attributes'), '%s-attrs')
- prop_list = nodes.definition_list()
- section.append(prop_list)
- for prop_key in sorted(schema.keys()):
- description = schema[prop_key]
- prop_item = nodes.definition_list_item(
- '', nodes.term('', prop_key))
- prop_list.append(prop_item)
-
- definition = nodes.definition()
- prop_item.append(definition)
-
- if description:
- def_para = nodes.paragraph('', description)
- definition.append(def_para)
-
-
-def _all_resources(prefix=None):
- g_env = resources.global_env()
- all_resources = g_env.get_types()
- for resource_type in sorted(all_resources):
- resource_class = g_env.get_class(resource_type)
- if not prefix or resource_type.startswith(prefix):
- yield resource_type, resource_class
-
-
-def setup(app):
-
- resources.initialise()
- app.add_node(resourcepages)
-
- app.add_directive('resourcepages', ResourcePages)
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from heat.rpc import api
-from heat.openstack.common import timeutils
-from heat.engine import template
-
-from heat.openstack.common import log as logging
-
-logger = logging.getLogger(__name__)
-
-
-def extract_args(params):
- '''
- Extract any arguments passed as parameters through the API and return them
- as a dictionary. This allows us to filter the passed args and do type
- conversion where appropriate
- '''
- kwargs = {}
- try:
- timeout_mins = int(params.get(api.PARAM_TIMEOUT, 0))
- except (ValueError, TypeError):
- logger.exception('create timeout conversion')
- else:
- if timeout_mins > 0:
- kwargs[api.PARAM_TIMEOUT] = timeout_mins
-
- if api.PARAM_DISABLE_ROLLBACK in params:
- disable_rollback = params.get(api.PARAM_DISABLE_ROLLBACK)
- if str(disable_rollback).lower() == 'true':
- kwargs[api.PARAM_DISABLE_ROLLBACK] = True
- elif str(disable_rollback).lower() == 'false':
- kwargs[api.PARAM_DISABLE_ROLLBACK] = False
- else:
- raise ValueError("Unexpected value for parameter %s : %s" %
- (api.PARAM_DISABLE_ROLLBACK, disable_rollback))
- return kwargs
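`extract_args` above normalizes loosely-typed API parameters: the timeout must coerce to a positive integer, and the rollback flag accepts only case-insensitive `'true'`/`'false'`. A standalone sketch of the same filtering, with the `api.PARAM_*` constants inlined as plain strings for illustration:

```python
def extract_args(params):
    """Filter and type-convert stack-creation parameters (sketch)."""
    kwargs = {}
    try:
        timeout_mins = int(params.get('timeout_mins', 0))
    except (ValueError, TypeError):
        timeout_mins = 0  # the real code logs the conversion failure
    if timeout_mins > 0:
        kwargs['timeout_mins'] = timeout_mins

    if 'disable_rollback' in params:
        raw = str(params['disable_rollback']).lower()
        if raw == 'true':
            kwargs['disable_rollback'] = True
        elif raw == 'false':
            kwargs['disable_rollback'] = False
        else:
            raise ValueError('Unexpected value for parameter '
                             'disable_rollback : %s' % raw)
    return kwargs


print(extract_args({'timeout_mins': '60', 'disable_rollback': 'True'}))
# {'timeout_mins': 60, 'disable_rollback': True}
```

Note how a malformed timeout is silently dropped rather than rejected, while a malformed rollback flag raises: that asymmetry is in the original code.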
-
-
-def format_stack_outputs(stack, outputs):
- '''
- Return a representation of the given output template for the given stack
- that matches the API output expectations.
- '''
- def format_stack_output(k):
- return {api.OUTPUT_DESCRIPTION: outputs[k].get('Description',
- 'No description given'),
- api.OUTPUT_KEY: k,
- api.OUTPUT_VALUE: stack.output(k)}
-
- return [format_stack_output(key) for key in outputs]
-
-
-def format_stack(stack):
- '''
- Return a representation of the given stack that matches the API output
- expectations.
- '''
- info = {
- api.STACK_NAME: stack.name,
- api.STACK_ID: dict(stack.identifier()),
- api.STACK_CREATION_TIME: timeutils.isotime(stack.created_time),
- api.STACK_UPDATED_TIME: timeutils.isotime(stack.updated_time),
- api.STACK_NOTIFICATION_TOPICS: [], # TODO Not implemented yet
- api.STACK_PARAMETERS: stack.parameters.map(str),
- api.STACK_DESCRIPTION: stack.t[template.DESCRIPTION],
- api.STACK_TMPL_DESCRIPTION: stack.t[template.DESCRIPTION],
- api.STACK_ACTION: stack.action or '',
- api.STACK_STATUS: stack.status or '',
- api.STACK_STATUS_DATA: stack.status_reason,
- api.STACK_CAPABILITIES: [], # TODO Not implemented yet
- api.STACK_DISABLE_ROLLBACK: stack.disable_rollback,
- api.STACK_TIMEOUT: stack.timeout_mins,
- }
-
- # only show the outputs on a completely created or updated stack
- if (stack.action != stack.DELETE and stack.status == stack.COMPLETE):
- info[api.STACK_OUTPUTS] = format_stack_outputs(stack, stack.outputs)
-
- return info
-
-
-def format_stack_resource(resource, detail=True):
- '''
- Return a representation of the given resource that matches the API output
- expectations.
- '''
- last_updated_time = resource.updated_time or resource.created_time
- res = {
- api.RES_UPDATED_TIME: timeutils.isotime(last_updated_time),
- api.RES_NAME: resource.name,
- api.RES_PHYSICAL_ID: resource.resource_id or '',
- api.RES_METADATA: resource.metadata,
- api.RES_ACTION: resource.action,
- api.RES_STATUS: resource.status,
- api.RES_STATUS_DATA: resource.status_reason,
- api.RES_TYPE: resource.t['Type'],
- api.RES_ID: dict(resource.identifier()),
- api.RES_STACK_ID: dict(resource.stack.identifier()),
- api.RES_STACK_NAME: resource.stack.name,
- api.RES_REQUIRED_BY: resource.required_by(),
- }
-
- if detail:
- res[api.RES_DESCRIPTION] = resource.parsed_template('Description', '')
- res[api.RES_METADATA] = resource.metadata
-
- return res
-
-
-def format_event(event):
- stack_identifier = event.stack.identifier()
-
- result = {
- api.EVENT_ID: dict(event.identifier()),
- api.EVENT_STACK_ID: dict(stack_identifier),
- api.EVENT_STACK_NAME: stack_identifier.stack_name,
- api.EVENT_TIMESTAMP: timeutils.isotime(event.timestamp),
- api.EVENT_RES_NAME: event.resource_name,
- api.EVENT_RES_PHYSICAL_ID: event.physical_resource_id,
- api.EVENT_RES_ACTION: event.action,
- api.EVENT_RES_STATUS: event.status,
- api.EVENT_RES_STATUS_DATA: event.reason,
- api.EVENT_RES_TYPE: event.resource_type,
- api.EVENT_RES_PROPERTIES: event.resource_properties,
- }
-
- return result
-
-
-def format_watch(watch):
-
- result = {
- api.WATCH_ACTIONS_ENABLED: watch.rule.get(api.RULE_ACTIONS_ENABLED),
- api.WATCH_ALARM_ACTIONS: watch.rule.get(api.RULE_ALARM_ACTIONS),
- api.WATCH_TOPIC: watch.rule.get(api.RULE_TOPIC),
- api.WATCH_UPDATED_TIME: timeutils.isotime(watch.updated_at),
- api.WATCH_DESCRIPTION: watch.rule.get(api.RULE_DESCRIPTION),
- api.WATCH_NAME: watch.name,
- api.WATCH_COMPARISON: watch.rule.get(api.RULE_COMPARISON),
- api.WATCH_DIMENSIONS: watch.rule.get(api.RULE_DIMENSIONS) or [],
- api.WATCH_PERIODS: watch.rule.get(api.RULE_PERIODS),
- api.WATCH_INSUFFICIENT_ACTIONS:
- watch.rule.get(api.RULE_INSUFFICIENT_ACTIONS),
- api.WATCH_METRIC_NAME: watch.rule.get(api.RULE_METRIC_NAME),
- api.WATCH_NAMESPACE: watch.rule.get(api.RULE_NAMESPACE),
- api.WATCH_OK_ACTIONS: watch.rule.get(api.RULE_OK_ACTIONS),
- api.WATCH_PERIOD: watch.rule.get(api.RULE_PERIOD),
- api.WATCH_STATE_REASON: watch.rule.get(api.RULE_STATE_REASON),
- api.WATCH_STATE_REASON_DATA:
- watch.rule.get(api.RULE_STATE_REASON_DATA),
- api.WATCH_STATE_UPDATED_TIME: timeutils.isotime(
- watch.rule.get(api.RULE_STATE_UPDATED_TIME)),
- api.WATCH_STATE_VALUE: watch.state,
- api.WATCH_STATISTIC: watch.rule.get(api.RULE_STATISTIC),
- api.WATCH_THRESHOLD: watch.rule.get(api.RULE_THRESHOLD),
- api.WATCH_UNIT: watch.rule.get(api.RULE_UNIT),
- api.WATCH_STACK_ID: watch.stack_id
- }
-
- return result
-
-
-def format_watch_data(wd):
-
- # Demangle DB format data into something more easily used in the API
- # We are expecting a dict with exactly two items, Namespace and
- # a metric key
- namespace = wd.data['Namespace']
- metric = [(k, v) for k, v in wd.data.items() if k != 'Namespace']
- if len(metric) == 1:
- metric_name, metric_data = metric[0]
- else:
- logger.error("Unexpected number of keys in watch_data.data!")
- return
-
- result = {
- api.WATCH_DATA_ALARM: wd.watch_rule.name,
- api.WATCH_DATA_METRIC: metric_name,
- api.WATCH_DATA_TIME: timeutils.isotime(wd.created_at),
- api.WATCH_DATA_NAMESPACE: namespace,
- api.WATCH_DATA: metric_data
- }
-
- return result
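`format_watch_data` above demangles a DB record expected to hold exactly two keys — `'Namespace'` plus one metric key — into flat API fields. A sketch of that demangling with the `api.WATCH_DATA_*` constants replaced by plain illustrative keys:

```python
def demangle_watch_data(data):
    """Split {'Namespace': ..., <metric>: ...} into flat fields (sketch)."""
    namespace = data['Namespace']
    metric = [(k, v) for k, v in data.items() if k != 'Namespace']
    if len(metric) != 1:
        return None  # the real code logs an error and returns here
    metric_name, metric_data = metric[0]
    return {
        'namespace': namespace,
        'metric_name': metric_name,
        'data': metric_data,
    }


result = demangle_watch_data({
    'Namespace': 'system/linux',
    'CPUUtilization': {'Unit': 'Percent', 'Value': 50},
})
print(result['metric_name'])  # CPUUtilization
```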
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import collections
-
-
-class Attribute(object):
- """
- An Attribute schema.
- """
-
- (DESCRIPTION,) = ('description',)
-
- def __init__(self, attr_name, description):
- """
- Initialise with a name and description.
-
- :param attr_name: the name of the attribute
- :param description: attribute description
- """
- self.name = attr_name
- self.description = description
-
- def as_output(self, resource_name):
- """
- Return an Output schema entry for a provider template with the given
- resource name.
-
- :param resource_name: the logical name of the provider resource
- :returns: This attribute as a template 'Output' entry
- """
- return {
- "Value": '{"Fn::GetAtt": ["%s", "%s"]}' % (resource_name,
- self.name),
- "Description": self.description
- }
-
-
-class Attributes(collections.Mapping):
- """Models a collection of Resource Attributes."""
-
- def __init__(self, res_name, schema, resolver):
- self._resource_name = res_name
- self._resolver = resolver
- self._attributes = Attributes._make_attributes(schema)
-
- @staticmethod
- def _make_attributes(schema):
- return dict((n, Attribute(n, d)) for n, d in schema.items())
-
- @staticmethod
- def as_outputs(resource_name, resource_class):
- """
- :param resource_name: logical name of the resource
- :param resource_class: resource implementation class
- :returns: The attributes of the specified resource_class as a template
- Output map
- """
- schema = resource_class.attributes_schema
- attribs = Attributes._make_attributes(schema).items()
-
- return dict((n, att.as_output(resource_name)) for n, att in attribs)
-
- @staticmethod
- def schema_from_outputs(json_snippet):
- if json_snippet:
- return dict((k, v.get("Description"))
- for k, v in json_snippet.items())
- return {}
-
- def __getitem__(self, key):
- if key not in self:
- raise KeyError('%s: Invalid attribute %s' %
- (self._resource_name, key))
- return self._resolver(key)
-
- def __len__(self):
- return len(self._attributes)
-
- def __contains__(self, key):
- return key in self._attributes
-
- def __iter__(self):
- return iter(self._attributes)
-
- def __repr__(self):
- return ("Attributes for %s:\n\t" % self._resource_name +
- '\n\t'.join(self._attributes.values()))
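`Attribute.as_output` above turns each resource attribute into a template `Output` entry backed by `Fn::GetAtt`, which is how provider templates expose inner-resource attributes. A minimal standalone reproduction showing the generated structure (the resource and attribute names are made up for the example):

```python
def as_output(resource_name, attr_name, description):
    """Build the provider-template Output entry for one attribute."""
    return {
        "Value": '{"Fn::GetAtt": ["%s", "%s"]}' % (resource_name, attr_name),
        "Description": description,
    }


out = as_output('the_server', 'PublicIp', 'The public IP of the server')
print(out['Value'])  # {"Fn::GetAtt": ["the_server", "PublicIp"]}
```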
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from oslo.config import cfg
-
-from heat.openstack.common import importutils
-from heat.openstack.common import log as logging
-
-logger = logging.getLogger(__name__)
-
-
-from heat.common import heat_keystoneclient as hkc
-from novaclient import client as novaclient
-from novaclient import shell as novashell
-try:
- from swiftclient import client as swiftclient
-except ImportError:
- swiftclient = None
- logger.info('swiftclient not available')
-try:
- from neutronclient.v2_0 import client as neutronclient
-except ImportError:
- neutronclient = None
- logger.info('neutronclient not available')
-try:
- from cinderclient import client as cinderclient
-except ImportError:
- cinderclient = None
- logger.info('cinderclient not available')
-
-try:
- from ceilometerclient.v2 import client as ceilometerclient
-except ImportError:
- ceilometerclient = None
- logger.info('ceilometerclient not available')
-
-
-cloud_opts = [
- cfg.StrOpt('cloud_backend',
- default=None,
- help="Cloud module to use as a backend. Defaults to OpenStack.")
-]
-cfg.CONF.register_opts(cloud_opts)
-
-
-class OpenStackClients(object):
- '''
- Convenience class to create and cache client instances.
- '''
-
- def __init__(self, context):
- self.context = context
- self._nova = {}
- self._keystone = None
- self._swift = None
- self._neutron = None
- self._cinder = None
- self._ceilometer = None
-
- @property
- def auth_token(self):
- # if there is no auth token in the context
- # attempt to get one using the context username and password
- return self.context.auth_token or self.keystone().auth_token
-
- def keystone(self):
- if self._keystone:
- return self._keystone
-
- self._keystone = hkc.KeystoneClient(self.context)
- return self._keystone
-
- def url_for(self, **kwargs):
- return self.keystone().url_for(**kwargs)
-
- def nova(self, service_type='compute'):
- if service_type in self._nova:
- return self._nova[service_type]
-
- con = self.context
- if self.auth_token is None:
- logger.error("Nova connection failed, no auth_token!")
- return None
-
- computeshell = novashell.OpenStackComputeShell()
- extensions = computeshell._discover_extensions("1.1")
-
- args = {
- 'project_id': con.tenant,
- 'auth_url': con.auth_url,
- 'service_type': service_type,
- 'username': None,
- 'api_key': None,
- 'extensions': extensions
- }
-
- client = novaclient.Client(1.1, **args)
-
- management_url = self.url_for(service_type=service_type)
- client.client.auth_token = self.auth_token
- client.client.management_url = management_url
-
- self._nova[service_type] = client
- return client
-
- def swift(self):
- if swiftclient is None:
- return None
- if self._swift:
- return self._swift
-
- con = self.context
- if self.auth_token is None:
- logger.error("Swift connection failed, no auth_token!")
- return None
-
- args = {
- 'auth_version': '2.0',
- 'tenant_name': con.tenant,
- 'user': con.username,
- 'key': None,
- 'authurl': None,
- 'preauthtoken': self.auth_token,
- 'preauthurl': self.url_for(service_type='object-store')
- }
- self._swift = swiftclient.Connection(**args)
- return self._swift
-
- def neutron(self):
- if neutronclient is None:
- return None
- if self._neutron:
- return self._neutron
-
- con = self.context
- if self.auth_token is None:
- logger.error("Neutron connection failed, no auth_token!")
- return None
-
- args = {
- 'auth_url': con.auth_url,
- 'service_type': 'network',
- 'token': self.auth_token,
- 'endpoint_url': self.url_for(service_type='network')
- }
-
- self._neutron = neutronclient.Client(**args)
-
- return self._neutron
-
- def cinder(self):
- if cinderclient is None:
- return self.nova('volume')
- if self._cinder:
- return self._cinder
-
- con = self.context
- if self.auth_token is None:
- logger.error("Cinder connection failed, no auth_token!")
- return None
-
- args = {
- 'service_type': 'volume',
- 'auth_url': con.auth_url,
- 'project_id': con.tenant,
- 'username': None,
- 'api_key': None
- }
-
- self._cinder = cinderclient.Client('1', **args)
- management_url = self.url_for(service_type='volume')
- self._cinder.client.auth_token = self.auth_token
- self._cinder.client.management_url = management_url
-
- return self._cinder
-
- def ceilometer(self):
- if ceilometerclient is None:
- return None
- if self._ceilometer:
- return self._ceilometer
-
- if self.auth_token is None:
- logger.error("Ceilometer connection failed, no auth_token!")
- return None
- con = self.context
- args = {
- 'auth_url': con.auth_url,
- 'service_type': 'metering',
- 'project_id': con.tenant,
- 'token': lambda: self.auth_token,
- 'endpoint': self.url_for(service_type='metering'),
- }
-
- client = ceilometerclient.Client(**args)
-
- self._ceilometer = client
- return self._ceilometer
-
-
-if cfg.CONF.cloud_backend:
- cloud_backend_module = importutils.import_module(cfg.CONF.cloud_backend)
- Clients = cloud_backend_module.Clients
-else:
- Clients = OpenStackClients
-
-logger.debug('Using backend %s' % Clients)
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-
-(TYPE, PROPERTIES, SCRIPTS, RELATIONSHIPS) = (
- 'type', 'properties', 'scripts', 'relationships')
-
-(SOFTWARE_CONFIG_TYPE, HOSTED_ON, DEPENDS_ON) = (
- 'OS::Heat::SoftwareConfig', 'hosted_on', 'depends_on')
-
-
-class Component(dict):
- """
- Model for a HOT component.
- """
-
- def __init__(self, schema=None):
- super(Component, self).__init__(schema or {})
-
- @property
- def properties(self):
- return self.get(PROPERTIES, {})
-
- @property
- def type(self):
- return self.get(TYPE, SOFTWARE_CONFIG_TYPE)
-
- @property
- def scripts(self):
- return self.get(SCRIPTS, {})
-
- @property
- def relations(self):
- return self.get(RELATIONSHIPS, [])
-
- def hosted_on(self):
- for rel in self.relations:
- if HOSTED_ON in rel:
- return rel[HOSTED_ON]
- return None
-
- def depends(self):
- deps = []
- rels = self.relations
- for rel in rels:
- if DEPENDS_ON in rel:
- deps.append(rel[DEPENDS_ON])
- return deps
-
-
-class Components(dict):
- """
- Model for HOT components.
- """
-
- def __init__(self, schema):
- items = schema.iteritems()
- schema = dict(map(lambda x: (x[0], Component(x[1])), items))
- super(Components, self).__init__(schema)
-
- def depends(self):
- deps = []
- for (k, v) in self.iteritems():
- for dep in v.depends():
- if dep not in deps:
- deps.append(dep)
- return deps
-
- def filter(self, hosted):
- return map(lambda x: x[0],
- filter(lambda x: x[1].hosted_on() == hosted,
- self.iteritems()))
-
- def validate(self):
- deps = self.depends()
- for dep in deps:
- if dep not in self.iterkeys():
- raise ValueError('component %s is not defined.' % dep)
- comp = self[dep]
- if dep in comp.depends():
- raise ValueError('component %s depends on itself.' % dep)
- for (name, comp) in self.iteritems():
- cdeps = comp.depends()
- for dep in cdeps:
- if cdeps.count(dep) > 1:
- msg = 'duplicated %s in %s depends on.' % (dep, name)
- raise ValueError(msg)
- return True
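The `Components.validate` logic above checks that every `depends_on` target is a defined component and that no component depends directly on itself. The original is Python 2 (`iteritems`/`iterkeys`); here is a Python 3 sketch of the same traversal over plain dicts, with the constant names inlined:

```python
def depends(component):
    """Collect every 'depends_on' target from a component's relationships."""
    return [rel['depends_on']
            for rel in component.get('relationships', [])
            if 'depends_on' in rel]


def validate(components):
    """Reject undefined or directly self-referential dependencies (sketch)."""
    deps = {d for comp in components.values() for d in depends(comp)}
    for dep in deps:
        if dep not in components:
            raise ValueError('component %s is not defined.' % dep)
        if dep in depends(components[dep]):
            raise ValueError('component %s depends on itself.' % dep)
    return True


components = {
    'app': {'relationships': [{'depends_on': 'db'}]},
    'db': {'relationships': []},
}
print(validate(components))  # True
```

Like the original, this only catches one-step self-dependency; longer cycles are left to the dependency-graph code, which raises `CircularDependencyException` at sort time.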
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import collections
-import itertools
-
-from heat.common import exception
-
-from heat.openstack.common.gettextutils import _
-
-
-class CircularDependencyException(exception.HeatException):
- message = _("Circular Dependency Found: %(cycle)s")
-
-
-class Node(object):
- '''A node in a dependency graph.'''
-
- def __init__(self, requires=None, required_by=None):
- '''
- Initialise the node, optionally with a set of keys this node
- requires and/or a set of keys that this node is required by.
- '''
- self.require = requires and requires.copy() or set()
- self.satisfy = required_by and required_by.copy() or set()
-
- def copy(self):
- '''Return a copy of the node.'''
- return Node(self.require, self.satisfy)
-
- def reverse_copy(self):
- '''Return a copy of the node with the edge directions reversed.'''
- return Node(self.satisfy, self.require)
-
- def required_by(self, source=None):
- '''
- List the keys that require this node, and optionally add a
- new one.
- '''
- if source is not None:
- self.satisfy.add(source)
- return iter(self.satisfy)
-
- def requires(self, target):
- '''Add a key that this node requires.'''
- self.require.add(target)
-
- def __isub__(self, target):
- '''Remove a key that this node requires.'''
- self.require.remove(target)
- return self
-
- def __nonzero__(self):
- '''Return True if this node is not a leaf (it requires other nodes).'''
- return bool(self.require)
-
- def stem(self):
- '''Return True if this node is a stem (required by nothing).'''
- return not bool(self.satisfy)
-
- def disjoint(self):
- '''Return True if this node is both a leaf and a stem.'''
- return (not self) and self.stem()
-
- def __len__(self):
- '''Count the number of keys required by this node.'''
- return len(self.require)
-
- def __iter__(self):
- '''Iterate over the keys required by this node.'''
- return iter(self.require)
-
- def __str__(self):
- '''Return a human-readable string representation of the node.'''
- return '{%s}' % ', '.join(str(n) for n in self)
-
- def __repr__(self):
- '''Return a string representation of the node.'''
- return repr(self.require)
-
-
-class Graph(collections.defaultdict):
- '''A mutable mapping of objects to nodes in a dependency graph.'''
-
- def __init__(self, *args):
- super(Graph, self).__init__(Node, *args)
-
- def map(self, func):
- '''
- Return a dictionary derived from mapping the supplied function onto
- each node in the graph.
- '''
- return dict((k, func(n)) for k, n in self.items())
-
- def copy(self):
- '''Return a copy of the graph.'''
- return Graph(self.map(lambda n: n.copy()))
-
- def reverse_copy(self):
- '''Return a copy of the graph with the edges reversed.'''
- return Graph(self.map(lambda n: n.reverse_copy()))
-
- def edges(self):
- '''Return an iterator over all of the edges in the graph.'''
- def outgoing_edges(rqr, node):
- if node.disjoint():
- yield (rqr, None)
- else:
- for rqd in node:
- yield (rqr, rqd)
- return itertools.chain.from_iterable(outgoing_edges(*i)
- for i in self.iteritems())
-
- def __delitem__(self, key):
- '''Delete the node given by the specified key from the graph.'''
- node = self[key]
-
- for src in node.required_by():
- self[src] -= key
-
- return super(Graph, self).__delitem__(key)
-
- def __str__(self):
- '''Convert the graph to a human-readable string.'''
- pairs = ('%s: %s' % (str(k), str(v)) for k, v in self.iteritems())
- return '{%s}' % ', '.join(pairs)
-
- @staticmethod
- def toposort(graph):
- '''
- Return a topologically sorted iterator over a dependency graph.
-
- This is a destructive operation for the graph.
- '''
- for iteration in xrange(len(graph)):
- for key, node in graph.iteritems():
- if not node:
- yield key
- del graph[key]
- break
- else:
- # There are nodes remaining, but none without
- # dependencies: a cycle
- raise CircularDependencyException(cycle=str(graph))
-
-
-class Dependencies(object):
- '''Helper class for calculating a dependency graph.'''
-
- def __init__(self, edges=[]):
- '''
- Initialise, optionally with a list of edges, in the form of
- (requirer, required) tuples.
- '''
- self._graph = Graph()
- for e in edges:
- self += e
-
- def __iadd__(self, edge):
- '''Add another edge, in the form of a (requirer, required) tuple.'''
- requirer, required = edge
-
- if required is None:
- # Just ensure the node is created by accessing the defaultdict
- self._graph[requirer]
- else:
- self._graph[required].required_by(requirer)
- self._graph[requirer].requires(required)
-
- return self
-
- def required_by(self, last):
- '''
- List the keys that require the specified node.
- '''
- if last not in self._graph:
- raise KeyError
-
- return self._graph[last].required_by()
-
- def __getitem__(self, last):
- '''
- Return a partial dependency graph consisting only of the specified
- node and those nodes that require it.
- '''
- if last not in self._graph:
- raise KeyError
-
- def get_edges(key):
- def requirer_edges(rqr):
- # Concatenate the dependency on the current node with the
- # recursively generated list
- return itertools.chain([(rqr, key)], get_edges(rqr))
-
- # Get the edge list for each node that requires the current node
- edge_lists = itertools.imap(requirer_edges,
- self._graph[key].required_by())
- # Combine the lists into one long list
- return itertools.chain.from_iterable(edge_lists)
-
- if self._graph[last].stem():
- # Nothing requires this, so just add the node itself
- edges = [(last, None)]
- else:
- edges = get_edges(last)
-
- return Dependencies(edges)
-
- def __str__(self):
- '''
- Return a human-readable string representation of the dependency graph
- '''
- return str(self._graph)
-
- def __repr__(self):
- '''Return a string representation of the object.'''
- edge_reprs = (repr(e) for e in self._graph.edges())
- return 'Dependencies([%s])' % ', '.join(edge_reprs)
-
- def graph(self, reverse=False):
- '''Return a copy of the underlying dependency graph.'''
- if reverse:
- return self._graph.reverse_copy()
- else:
- return self._graph.copy()
-
- def __iter__(self):
- '''Return a topologically sorted iterator'''
- return Graph.toposort(self.graph())
-
- def __reversed__(self):
- '''Return a reverse topologically sorted iterator'''
- return Graph.toposort(self.graph(reverse=True))
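The deleted `Graph.toposort` above repeatedly removes leaf nodes (nodes that require nothing) until the graph is empty, raising on a cycle. A minimal standalone Python 3 sketch of the same idea (the `toposort` helper below is hypothetical, not Heat's API; edges are `(requirer, required)` tuples as in `Dependencies.__iadd__`):

```python
def toposort(edges):
    """Yield keys in dependency order from (requirer, required) edges.

    Keys that require nothing come first; if no leaf exists while
    nodes remain, the graph is cyclic (compare the
    CircularDependencyException raised by the Heat code above).
    """
    requires = {}
    for requirer, required in edges:
        # A (key, None) edge just ensures the node exists.
        requires.setdefault(requirer, set())
        if required is not None:
            requires[requirer].add(required)
            requires.setdefault(required, set())
    while requires:
        leaves = [k for k, deps in requires.items() if not deps]
        if not leaves:
            raise ValueError('Circular dependency: %r' % requires)
        for leaf in leaves:
            # Remove the leaf and unblock everything that required it.
            del requires[leaf]
            for deps in requires.values():
                deps.discard(leaf)
            yield leaf
```

Like the original, this destroys its working graph as it iterates, which is why `Dependencies.__iter__` above runs the sort over a copy.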
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import itertools
-
-from heat.openstack.common import log
-from heat.common import exception
-from heat.openstack.common.gettextutils import _
-
-
-LOG = log.getLogger(__name__)
-
-
-class ResourceInfo(object):
- """Base mapping of resource type to implementation."""
-
- def __new__(cls, registry, path, value, **kwargs):
- '''Create a new ResourceInfo of the appropriate class.'''
-
- if cls != ResourceInfo:
- # Call is already for a subclass, so pass it through
- return super(ResourceInfo, cls).__new__(cls)
-
- name = path[-1]
- if name.endswith(('.yaml', '.template')):
- # a template url for the resource "Type"
- return TemplateResourceInfo(registry, path, value)
- elif not isinstance(value, basestring):
- return ClassResourceInfo(registry, path, value)
- elif value.endswith(('.yaml', '.template')):
- # a registered template
- return TemplateResourceInfo(registry, path, value)
- elif name.endswith('*'):
- return GlobResourceInfo(registry, path, value)
- else:
- return MapResourceInfo(registry, path, value)
-
- def __init__(self, registry, path, value):
- self.registry = registry
- self.path = path
- self.name = path[-1]
- self.value = value
- self.user_resource = True
-
- def __eq__(self, other):
- return (self.path == other.path and
- self.value == other.value and
- self.user_resource == other.user_resource)
-
- def __ne__(self, other):
- return not self.__eq__(other)
-
- def __lt__(self, other):
- if self.user_resource != other.user_resource:
- # User resources must be sorted above system ones.
- return self.user_resource > other.user_resource
- if len(self.path) != len(other.path):
- # A more specific (longer) path must be sorted above a shorter one.
- return len(self.path) > len(other.path)
- return self.path < other.path
-
- def __gt__(self, other):
- return other.__lt__(self)
-
- def get_resource_info(self, resource_type=None, resource_name=None):
- return self
-
- def matches(self, resource_type):
- return False
-
- def __str__(self):
- return '[%s](User:%s) %s -> %s' % (self.description,
- self.user_resource,
- self.name, str(self.value))
-
-
-class ClassResourceInfo(ResourceInfo):
- """Store the mapping of resource name to python class implementation."""
- description = 'Plugin'
-
- def get_class(self):
- return self.value
-
-
-class TemplateResourceInfo(ResourceInfo):
- """Store the info needed to start a TemplateResource.
- """
- description = 'Template'
-
- def __init__(self, registry, path, value):
- super(TemplateResourceInfo, self).__init__(registry, path, value)
- if self.name.endswith(('.yaml', '.template')):
- self.template_name = self.name
- else:
- self.template_name = value
-
- def get_class(self):
- from heat.engine.resources import template_resource
- return template_resource.TemplateResource
-
-
-class MapResourceInfo(ResourceInfo):
- """Store the mapping of one resource type to another.
- like: OS::Networking::FloatingIp -> OS::Neutron::FloatingIp
- """
- description = 'Mapping'
-
- def get_class(self):
- return None
-
- def get_resource_info(self, resource_type=None, resource_name=None):
- return self.registry.get_resource_info(self.value, resource_name)
-
-
-class GlobResourceInfo(MapResourceInfo):
- """Store the mapping (with wild cards) of one resource type to another.
- like: OS::Networking::* -> OS::Neutron::*
- """
- description = 'Wildcard Mapping'
-
- def get_resource_info(self, resource_type=None, resource_name=None):
- orig_prefix = self.name[:-1]
- new_type = self.value[:-1] + resource_type[len(orig_prefix):]
- return self.registry.get_resource_info(new_type, resource_name)
-
- def matches(self, resource_type):
- return resource_type.startswith(self.name[:-1])
-
-
-class ResourceRegistry(object):
- """By looking at the environment, find the resource implementation."""
-
- def __init__(self, global_registry):
- self._registry = {'resources': {}}
- self.global_registry = global_registry
-
- def load(self, json_snippet):
- self._load_registry([], json_snippet)
-
- def register_class(self, resource_type, resource_class):
- ri = ResourceInfo(self, [resource_type], resource_class)
- self._register_info([resource_type], ri)
-
- def _load_registry(self, path, registry):
- for k, v in iter(registry.items()):
- if isinstance(v, dict):
- self._load_registry(path + [k], v)
- else:
- self._register_info(path + [k],
- ResourceInfo(self, path + [k], v))
-
- def _register_info(self, path, info):
- """place the new info in the correct location in the registry.
- path: a list of keys ['resources', 'my_server', 'OS::Compute::Server']
- """
- descriptive_path = '/'.join(path)
- name = path[-1]
- # create the structure if needed
- registry = self._registry
- for key in path[:-1]:
- if key not in registry:
- registry[key] = {}
- registry = registry[key]
-
- if name in registry and isinstance(registry[name], ResourceInfo):
- details = {
- 'path': descriptive_path,
- 'was': str(registry[name].value),
- 'now': str(info.value)}
- LOG.warn(_('Changing %(path)s from %(was)s to %(now)s') % details)
- else:
- LOG.info(_('Registering %(path)s -> %(value)s') % {
- 'path': descriptive_path,
- 'value': str(info.value)})
- info.user_resource = (self.global_registry is not None)
- registry[name] = info
-
- def iterable_by(self, resource_type, resource_name=None):
- if resource_type.endswith(('.yaml', '.template')):
- # resource with a Type == a template
- # we dynamically create an entry as it has not been registered.
- if resource_type not in self._registry:
- res = ResourceInfo(self, [resource_type], None)
- self._register_info([resource_type], res)
- yield self._registry[resource_type]
-
- # handle a specific resource mapping.
- if resource_name:
- impl = self._registry['resources'].get(resource_name)
- if impl and resource_type in impl:
- yield impl[resource_type]
-
- # handle: "OS::Compute::Server" -> "Rackspace::Compute::Server"
- impl = self._registry.get(resource_type)
- if impl:
- yield impl
-
- # handle: "OS::*" -> "Dreamhost::*"
- def is_a_glob(resource_type):
- return resource_type.endswith('*')
- globs = itertools.ifilter(is_a_glob, self._registry.keys())
- for glob in globs:
- if self._registry[glob].matches(resource_type):
- yield self._registry[glob]
-
- def get_resource_info(self, resource_type, resource_name=None,
- registry_type=None):
- """Find possible matches to the resource type and name.
- Chain the results from the global and user registries to find
- a match.
- """
- # use cases
- # 1) get the impl.
- # - filter_by(res_type=X), sort_by(res_name=W, is_user=True)
- # 2) in TemplateResource we need to get both the
- # TemplateClass and the ResourceClass
- # - filter_by(res_type=X, impl_type=TemplateResourceInfo),
- # sort_by(res_name=W, is_user=True)
- # - filter_by(res_type=X, impl_type=ClassResourceInfo),
- # sort_by(res_name=W, is_user=True)
- # 3) get_types() from the api
- # - filter_by(is_user=False)
- # 4) as_dict() to write to the db
- # - filter_by(is_user=True)
- if self.global_registry is not None:
- giter = self.global_registry.iterable_by(resource_type,
- resource_name)
- else:
- giter = []
-
- matches = itertools.chain(self.iterable_by(resource_type,
- resource_name),
- giter)
-
- for info in sorted(matches):
- match = info.get_resource_info(resource_type,
- resource_name)
- if registry_type is None or isinstance(match, registry_type):
- return match
-
- def get_class(self, resource_type, resource_name=None):
- info = self.get_resource_info(resource_type,
- resource_name=resource_name)
- if info is None:
- msg = "Unknown resource Type : %s" % resource_type
- raise exception.StackValidationFailed(message=msg)
- return info.get_class()
-
- def as_dict(self):
- """Return user resources in a dict format."""
- def _as_dict(level):
- tmp = {}
- for k, v in iter(level.items()):
- if isinstance(v, dict):
- tmp[k] = _as_dict(v)
- elif v.user_resource:
- tmp[k] = v.value
- return tmp
-
- return _as_dict(self._registry)
-
- def get_types(self):
- '''Return a list of valid resource types.'''
- def is_plugin(key):
- if isinstance(self._registry[key], ClassResourceInfo):
- return True
- return False
- return [k for k in self._registry if is_plugin(k)]
-
-
-SECTIONS = (PARAMETERS, RESOURCE_REGISTRY) = \
- ('parameters', 'resource_registry')
-
-
-class Environment(object):
-
- def __init__(self, env=None, user_env=True):
- """Create an Environment from a dict of varing format.
- 1) old-school flat parameters
- 2) or newer {resource_registry: bla, parameters: foo}
-
- :param env: the json environment
- :param user_env: boolean, if false then we manage python resources too.
- """
- if env is None:
- env = {}
- if user_env:
- from heat.engine import resources
- global_registry = resources.global_env().registry
- else:
- global_registry = None
-
- self.registry = ResourceRegistry(global_registry)
- self.registry.load(env.get(RESOURCE_REGISTRY, {}))
-
- if 'parameters' in env:
- self.params = env['parameters']
- else:
- self.params = dict((k, v) for (k, v) in env.iteritems()
- if k != RESOURCE_REGISTRY)
-
- def load(self, env_snippet):
- self.registry.load(env_snippet.get(RESOURCE_REGISTRY, {}))
- self.params.update(env_snippet.get('parameters', {}))
-
- def user_env_as_dict(self):
- """Get the environment as a dict, ready for storing in the db."""
- return {RESOURCE_REGISTRY: self.registry.as_dict(),
- PARAMETERS: self.params}
-
- def register_class(self, resource_type, resource_class):
- self.registry.register_class(resource_type, resource_class)
-
- def get_class(self, resource_type, resource_name=None):
- return self.registry.get_class(resource_type, resource_name)
-
- def get_types(self):
- return self.registry.get_types()
-
- def get_resource_info(self, resource_type, resource_name=None,
- registry_type=None):
- return self.registry.get_resource_info(resource_type, resource_name,
- registry_type)
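`GlobResourceInfo` above rewrites a matching resource type by swapping the wildcard prefix, e.g. `OS::Networking::*` → `OS::Neutron::*`. The prefix substitution it performs can be sketched as follows (the `map_glob_type` helper is hypothetical, written for illustration only):

```python
def map_glob_type(resource_type, pattern, replacement):
    """Rewrite resource_type when it matches a trailing-'*' pattern.

    e.g. pattern 'OS::Networking::*' with replacement 'OS::Neutron::*'
    maps 'OS::Networking::FloatingIp' to 'OS::Neutron::FloatingIp'.
    Returns None when the type does not match the pattern, mirroring
    GlobResourceInfo.matches() above.
    """
    prefix = pattern[:-1]  # strip the trailing '*'
    if not resource_type.startswith(prefix):
        return None
    # Keep the suffix after the prefix, graft it onto the replacement.
    return replacement[:-1] + resource_type[len(prefix):]
```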
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from heat.db import api as db_api
-from heat.common import exception
-from heat.common import identifier
-from heat.openstack.common import log as logging
-
-logger = logging.getLogger(__name__)
-
-
-class Event(object):
- '''Class representing a Resource state change.'''
-
- def __init__(self, context, stack, action, status, reason,
- physical_resource_id, resource_properties, resource_name,
- resource_type, timestamp=None, id=None):
- '''
- Initialise from a context, stack, and event information. The timestamp
- and database ID may also be initialised if the event is already in the
- database.
- '''
- self.context = context
- self.stack = stack
- self.action = action
- self.status = status
- self.reason = reason
- self.physical_resource_id = physical_resource_id
- self.resource_name = resource_name
- self.resource_type = resource_type
- try:
- self.resource_properties = dict(resource_properties)
- except ValueError as ex:
- self.resource_properties = {'Error': str(ex)}
- self.timestamp = timestamp
- self.id = id
-
- @classmethod
- def load(cls, context, event_id, event=None, stack=None):
- '''Retrieve an Event from the database.'''
- from heat.engine import parser
-
- ev = event if event is not None else\
- db_api.event_get(context, event_id)
- if ev is None:
- message = 'No event exists with id "%s"' % str(event_id)
- raise exception.NotFound(message)
-
- st = stack if stack is not None else\
- parser.Stack.load(context, ev.stack_id)
-
- return cls(context, st, ev.resource_action, ev.resource_status,
- ev.resource_status_reason, ev.physical_resource_id,
- ev.resource_properties, ev.resource_name,
- ev.resource_type, ev.created_at, ev.id)
-
- def store(self):
- '''Store the Event in the database.'''
- ev = {
- 'resource_name': self.resource_name,
- 'physical_resource_id': self.physical_resource_id,
- 'stack_id': self.stack.id,
- 'resource_action': self.action,
- 'resource_status': self.status,
- 'resource_status_reason': self.reason,
- 'resource_type': self.resource_type,
- 'resource_properties': self.resource_properties,
- }
-
- if self.timestamp is not None:
- ev['created_at'] = self.timestamp
-
- if self.id is not None:
- logger.warning('Duplicating event')
-
- new_ev = db_api.event_create(self.context, ev)
- self.id = new_ev.id
- return self.id
-
- def identifier(self):
- '''Return a unique identifier for the event.'''
- if self.id is None:
- return None
-
- res_id = identifier.ResourceIdentifier(
- resource_name=self.resource_name, **self.stack.identifier())
-
- return identifier.EventIdentifier(event_id=str(self.id), **res_id)
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import string
-
-from heat.common import exception
-from heat.engine import template
-from heat.engine.parameters import ParamSchema
-from heat.openstack.common import log as logging
-
-
-logger = logging.getLogger(__name__)
-
-SECTIONS = (VERSION, DESCRIPTION, PARAMETERS,
- RESOURCES, OUTPUTS, UNDEFINED) = \
- ('heat_template_version', 'description', 'parameters',
- 'resources', 'outputs', '__undefined__')
-
-_CFN_TO_HOT_SECTIONS = {template.VERSION: VERSION,
- template.DESCRIPTION: DESCRIPTION,
- template.PARAMETERS: PARAMETERS,
- template.MAPPINGS: UNDEFINED,
- template.RESOURCES: RESOURCES,
- template.OUTPUTS: OUTPUTS}
-
-
-def snake_to_camel(name):
- return ''.join([t.capitalize() for t in name.split('_')])
-
-
-class HOTemplate(template.Template):
- """
- A Heat Orchestration Template format stack template.
- """
-
- def __getitem__(self, section):
- """"Get the relevant section in the template."""
- #first translate from CFN into HOT terminology if necessary
- section = HOTemplate._translate(section, _CFN_TO_HOT_SECTIONS, section)
-
- if section not in SECTIONS:
- raise KeyError('"%s" is not a valid template section' % section)
-
- if section == VERSION:
- return self.t[section]
-
- if section == UNDEFINED:
- return {}
-
- if section == DESCRIPTION:
- default = 'No description'
- else:
- default = {}
-
- the_section = self.t.get(section, default)
-
- # In some cases (e.g. parameters), also translate each entry of
- # a section into CFN format (case, naming, etc) so the rest of the
- # engine can cope with it.
- # This is a shortcut for now and might be changed in the future.
-
- if section == PARAMETERS:
- return self._translate_parameters(the_section)
-
- if section == RESOURCES:
- return self._translate_resources(the_section)
-
- if section == OUTPUTS:
- return self._translate_outputs(the_section)
-
- return the_section
-
- @staticmethod
- def _translate(value, mapping, default=None):
- if value in mapping:
- return mapping[value]
-
- return default
-
- def _translate_constraints(self, constraints):
- param = {}
-
- def add_constraint(key, val, desc):
- cons = param.get(key, [])
- cons.append((val, desc))
- param[key] = cons
-
- def add_min_max(key, val, desc):
- minv = val.get('min')
- maxv = val.get('max')
- if minv:
- add_constraint('Min%s' % key, minv, desc)
- if maxv:
- add_constraint('Max%s' % key, maxv, desc)
-
- for constraint in constraints:
- desc = constraint.get('description')
- for key, val in constraint.iteritems():
- key = snake_to_camel(key)
- if key == 'Description':
- continue
- elif key == 'Range':
- add_min_max('Value', val, desc)
- elif key == 'Length':
- add_min_max(key, val, desc)
- else:
- add_constraint(key, val, desc)
-
- return param
-
- def _translate_parameters(self, parameters):
- """Get the parameters of the template translated into CFN format."""
- params = {}
- for name, attrs in parameters.iteritems():
- param = {}
- for key, val in attrs.iteritems():
- key = snake_to_camel(key)
- if key == 'Type':
- val = snake_to_camel(val)
- elif key == 'Constraints':
- param.update(self._translate_constraints(val))
- continue
- elif key == 'Hidden':
- key = 'NoEcho'
- param[key] = val
- if len(param) > 0:
- params[name] = param
- return params
-
- def _translate_resources(self, resources):
- """Get the resources of the template translated into CFN format."""
- HOT_TO_CFN_ATTRS = {'type': 'Type',
- 'properties': 'Properties'}
-
- cfn_resources = {}
-
- for resource_name, attrs in resources.iteritems():
- cfn_resource = {}
-
- for attr, attr_value in attrs.iteritems():
- cfn_attr = self._translate(attr, HOT_TO_CFN_ATTRS, attr)
- cfn_resource[cfn_attr] = attr_value
-
- cfn_resources[resource_name] = cfn_resource
-
- return cfn_resources
-
- def _translate_outputs(self, outputs):
- """Get the outputs of the template translated into CFN format."""
- HOT_TO_CFN_ATTRS = {'description': 'Description',
- 'value': 'Value'}
-
- cfn_outputs = {}
-
- for output_name, attrs in outputs.iteritems():
- cfn_output = {}
-
- for attr, attr_value in attrs.iteritems():
- cfn_attr = self._translate(attr, HOT_TO_CFN_ATTRS, attr)
- cfn_output[cfn_attr] = attr_value
-
- cfn_outputs[output_name] = cfn_output
-
- return cfn_outputs
-
- @staticmethod
- def resolve_param_refs(s, parameters):
- """
- Resolve constructs of the form { get_param: my_param }
- """
- def match_param_ref(key, value):
- return (key in ['get_param', 'Ref'] and
- value is not None and
- value in parameters)
-
- def handle_param_ref(ref):
- try:
- return parameters[ref]
- except (KeyError, ValueError):
- raise exception.UserParameterMissing(key=ref)
-
- return template._resolve(match_param_ref, handle_param_ref, s)
-
- @staticmethod
- def resolve_resource_refs(s, resources):
- '''
- Resolve constructs of the form { "get_resource" : "resource" }
- '''
- def match_resource_ref(key, value):
- return key in ['get_resource', 'Ref'] and value in resources
-
- def handle_resource_ref(arg):
- return resources[arg].FnGetRefId()
-
- return template._resolve(match_resource_ref, handle_resource_ref, s)
-
- @staticmethod
- def resolve_attributes(s, resources):
- """
- Resolve constructs of the form { get_attr: [my_resource, my_attr] }
- """
- def match_get_attr(key, value):
- return (key in ['get_attr', 'Fn::GetAtt'] and
- isinstance(value, list) and
- len(value) == 2 and
- None not in value and
- value[0] in resources)
-
- def handle_get_attr(args):
- resource, att = args
- try:
- r = resources[resource]
- if r.state in (
- (r.CREATE, r.IN_PROGRESS),
- (r.CREATE, r.COMPLETE),
- (r.UPDATE, r.IN_PROGRESS),
- (r.UPDATE, r.COMPLETE)):
- return r.FnGetAtt(att)
- except KeyError:
- raise exception.InvalidTemplateAttribute(resource=resource,
- key=att)
-
- return template._resolve(match_get_attr, handle_get_attr, s)
-
- @staticmethod
- def resolve_replace(s):
- """
- Resolve template string substitution via function str_replace
-
- Resolves the str_replace function of the form
-
- str_replace:
- template: <string template>
- params:
- <param dictionary>
- """
- def handle_str_replace(args):
- if not (isinstance(args, dict) or isinstance(args, list)):
- raise TypeError('Arguments to "str_replace" must be a '
- 'dictionary or a list')
-
- try:
- if isinstance(args, dict):
- text = args.get('template')
- params = args.get('params', {})
- elif isinstance(args, list):
- params, text = args
- if text is None:
- raise KeyError()
- except KeyError:
- example = ('''str_replace:
- template: This is $var1 template $var2
- params:
- var1: a
- var2: string''')
- raise KeyError('"str_replace" syntax should be %s' %
- example)
- if not hasattr(text, 'replace'):
- raise TypeError('"template" parameter must be a string')
- if not isinstance(params, dict):
- raise TypeError(
- '"params" parameter must be a dictionary')
- if isinstance(args, list):
- for key in params.iterkeys():
- value = params.get(key, '')
- text = text.replace(key, value)
- return text
-
- return string.Template(text).safe_substitute(params)
-
- match_str_replace = lambda k, v: k in ['str_replace', 'Fn::Replace']
- return template._resolve(match_str_replace,
- handle_str_replace, s)
-
- def param_schemata(self):
- params = self[PARAMETERS].iteritems()
- return dict((name, HOTParamSchema(schema)) for name, schema in params)
-
-
-class HOTParamSchema(ParamSchema):
- def do_check(self, name, val, keys):
- for key in keys:
- consts = self.get(key)
- check = self.check(key)
- if consts is None or check is None:
- continue
- for (const, desc) in consts:
- check(name, val, const, desc)
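For the dict form, `resolve_replace` above ultimately delegates to `string.Template.safe_substitute`, so `$var`-style placeholders are filled from `params` and unknown placeholders are left intact rather than raising. A small standalone illustration of those semantics (the `str_replace` function here is a sketch, not the Heat entry point):

```python
import string


def str_replace(template, params):
    """Dict-form str_replace semantics: $var placeholders in the
    template are filled from params; placeholders with no matching
    key are left untouched (safe_substitute never raises on them)."""
    if not hasattr(template, 'replace'):
        raise TypeError('"template" parameter must be a string')
    if not isinstance(params, dict):
        raise TypeError('"params" parameter must be a dictionary')
    return string.Template(template).safe_substitute(params)
```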
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import collections
-import json
-import re
-
-from heat.common import exception
-
-
-PARAMETER_KEYS = (
- TYPE, DEFAULT, NO_ECHO, ALLOWED_VALUES, ALLOWED_PATTERN,
- MAX_LENGTH, MIN_LENGTH, MAX_VALUE, MIN_VALUE,
- DESCRIPTION, CONSTRAINT_DESCRIPTION
-) = (
- 'Type', 'Default', 'NoEcho', 'AllowedValues', 'AllowedPattern',
- 'MaxLength', 'MinLength', 'MaxValue', 'MinValue',
- 'Description', 'ConstraintDescription'
-)
-PARAMETER_TYPES = (
- STRING, NUMBER, COMMA_DELIMITED_LIST, JSON
-) = (
- 'String', 'Number', 'CommaDelimitedList', 'Json'
-)
-PSEUDO_PARAMETERS = (
- PARAM_STACK_ID, PARAM_STACK_NAME, PARAM_REGION
-) = (
- 'AWS::StackId', 'AWS::StackName', 'AWS::Region'
-)
-
-
-class ParamSchema(dict):
- '''Parameter schema.'''
-
- def __init__(self, schema):
- super(ParamSchema, self).__init__(schema)
-
- def do_check(self, name, value, keys):
- for k in keys:
- check = self.check(k)
- const = self.get(k)
- if check is None or const is None:
- continue
- check(name, value, const)
-
- def constraints(self):
- ptype = self[TYPE]
- keys = {
- STRING: [ALLOWED_VALUES, ALLOWED_PATTERN, MAX_LENGTH, MIN_LENGTH],
- NUMBER: [ALLOWED_VALUES, MAX_VALUE, MIN_VALUE],
- JSON: [MAX_LENGTH, MIN_LENGTH]
- }.get(ptype)
- list_keys = {
- COMMA_DELIMITED_LIST: [ALLOWED_VALUES],
- JSON: [ALLOWED_VALUES]
- }.get(ptype)
- return (keys, list_keys)
-
- def validate(self, name, value):
- (keys, list_keys) = self.constraints()
- if keys:
- self.do_check(name, value, keys)
- if list_keys:
- values = value
- for value in values:
- self.do_check(name, value, list_keys)
-
- def raise_error(self, name, message, desc=True):
- if desc:
- message = self.get(CONSTRAINT_DESCRIPTION) or message
- raise ValueError('%s %s' % (name, message))
-
- def check_allowed_values(self, name, val, const, desc=None):
- vals = list(const)
- if val not in vals:
- err = '"%s" not in %s "%s"' % (val, ALLOWED_VALUES, vals)
- self.raise_error(name, desc or err)
-
- def check_allowed_pattern(self, name, val, p, desc=None):
- m = re.match(p, val)
- if m is None or m.end() != len(val):
- err = '"%s" does not match %s "%s"' % (val, ALLOWED_PATTERN, p)
- self.raise_error(name, desc or err)
-
- def check_max_length(self, name, val, const, desc=None):
- max_len = int(const)
- val_len = len(val)
- if val_len > max_len:
- err = 'length (%d) overflows %s (%d)' % (val_len,
- MAX_LENGTH, max_len)
- self.raise_error(name, desc or err)
-
- def check_min_length(self, name, val, const, desc=None):
- min_len = int(const)
- val_len = len(val)
- if val_len < min_len:
- err = 'length (%d) underflows %s (%d)' % (val_len,
- MIN_LENGTH, min_len)
- self.raise_error(name, desc or err)
-
- def check_max_value(self, name, val, const, desc=None):
- max_val = float(const)
- val = float(val)
- if val > max_val:
- err = '%d overflows %s %d' % (val, MAX_VALUE, max_val)
- self.raise_error(name, desc or err)
-
- def check_min_value(self, name, val, const, desc=None):
- min_val = float(const)
- val = float(val)
- if val < min_val:
- err = '%d underflows %s %d' % (val, MIN_VALUE, min_val)
- self.raise_error(name, desc or err)
-
- def check(self, const_key):
- return {ALLOWED_VALUES: self.check_allowed_values,
- ALLOWED_PATTERN: self.check_allowed_pattern,
- MAX_LENGTH: self.check_max_length,
- MIN_LENGTH: self.check_min_length,
- MAX_VALUE: self.check_max_value,
- MIN_VALUE: self.check_min_value}.get(const_key)
-
-
-class Parameter(object):
- '''A template parameter.'''
-
- def __new__(cls, name, schema, value=None, validate_value=True):
- '''Create a new Parameter of the appropriate type.'''
- if cls is not Parameter:
- return super(Parameter, cls).__new__(cls)
-
- param_type = schema[TYPE]
- if param_type == STRING:
- ParamClass = StringParam
- elif param_type == NUMBER:
- ParamClass = NumberParam
- elif param_type == COMMA_DELIMITED_LIST:
- ParamClass = CommaDelimitedListParam
- elif param_type == JSON:
- ParamClass = JsonParam
- else:
- raise ValueError('Invalid Parameter type "%s"' % param_type)
-
- return ParamClass(name, schema, value, validate_value)
-
- def __init__(self, name, schema, value=None, validate_value=True):
- '''
- Initialise the Parameter with a name, schema and optional user-supplied
- value.
- '''
- self.name = name
- self.schema = schema
- self.user_value = value
-
- if validate_value:
- if self.has_default():
- self.validate(self.default())
- if self.user_value is not None:
- self.validate(self.user_value)
- elif not self.has_default():
- raise exception.UserParameterMissing(key=self.name)
-
- def value(self):
- '''Get the parameter value, optionally sanitising it for output.'''
- if self.user_value is not None:
- return self.user_value
-
- if self.has_default():
- return self.default()
-
- raise KeyError('Missing parameter %s' % self.name)
-
- def no_echo(self):
- '''
- Return whether the parameter should be sanitised in any output to
- the user.
- '''
- return str(self.schema.get(NO_ECHO, 'false')).lower() == 'true'
-
- def description(self):
- '''Return the description of the parameter.'''
- return self.schema.get(DESCRIPTION, '')
-
- def has_default(self):
- '''Return whether the parameter has a default value.'''
- return DEFAULT in self.schema
-
- def default(self):
- '''Return the default value of the parameter.'''
- return self.schema.get(DEFAULT)
-
- def __str__(self):
- '''Return a string representation of the parameter'''
- value = self.value()
- if self.no_echo():
- return '******'
- else:
- return str(value)
-
-
-class NumberParam(Parameter):
- '''A template parameter of type "Number".'''
-
- def __int__(self):
- '''Return an integer representation of the parameter'''
- return int(self.value())
-
- def __float__(self):
- '''Return a float representation of the parameter'''
- return float(self.value())
-
- def validate(self, val):
- self.schema.validate(self.name, val)
-
-
-class StringParam(Parameter):
- '''A template parameter of type "String".'''
-
- def validate(self, val):
- self.schema.validate(self.name, val)
-
-
-class CommaDelimitedListParam(Parameter, collections.Sequence):
- '''A template parameter of type "CommaDelimitedList".'''
-
- def __init__(self, name, schema, value=None, validate_value=True):
- super(CommaDelimitedListParam, self).__init__(name, schema, value,
- validate_value)
- self.parsed = self.parse(self.user_value or self.default())
-
- def parse(self, value):
- try:
- if value:
- return value.split(',')
- except (KeyError, AttributeError) as err:
- message = 'Value must be a comma-delimited list string: %s'
- raise ValueError(message % str(err))
- return value
-
- def __len__(self):
- '''Return the length of the list.'''
- return len(self.parsed)
-
- def __getitem__(self, index):
- '''Return an item from the list.'''
- return self.parsed[index]
-
- def validate(self, val):
- parsed = self.parse(val)
- self.schema.validate(self.name, parsed)
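The parse step above splits a plain string on commas and passes falsy values through unchanged; non-string input surfaces as a ValueError. The same logic in isolation, assuming string (or empty) input:

```python
def parse_list(value):
    try:
        if value:
            return value.split(',')
    except (KeyError, AttributeError) as err:
        # e.g. a number was supplied instead of a string
        raise ValueError('Value must be a comma-delimited list string: %s'
                         % err)
    return value
```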
-
-
-class JsonParam(Parameter, collections.Mapping):
- """A template parameter who's value is valid map."""
-
- def __init__(self, name, schema, value=None, validate_value=True):
- super(JsonParam, self).__init__(name, schema, value,
- validate_value)
- self.parsed = self.parse(self.user_value or self.default())
-
- def parse(self, value):
- try:
- val = value
- if isinstance(val, collections.Mapping):
- val = json.dumps(val)
- if val:
- return json.loads(val)
- except (ValueError, TypeError) as err:
- message = 'Value must be valid JSON: %s' % str(err)
- raise ValueError(message)
- return value
-
- def value(self):
- val = super(JsonParam, self).value()
- if isinstance(val, collections.Mapping):
- try:
- val = json.dumps(val)
- self.user_value = val
- except (ValueError, TypeError) as err:
- message = 'Value must be valid JSON'
- raise ValueError("%s: %s" % (message, str(err)))
- return val
-
- def __getitem__(self, key):
- return self.parsed[key]
-
- def __iter__(self):
- return iter(self.parsed)
-
- def __len__(self):
- return len(self.parsed)
-
- def validate(self, val):
- val = self.parse(val)
- self.schema.validate(self.name, val)
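`JsonParam.parse` above accepts either a mapping (round-tripped through JSON, which also validates it) or a JSON string. A standalone Python 3 sketch of the same logic (`collections.abc.Mapping` replaces the Python 2 `collections.Mapping` used in the source):

```python
import collections.abc
import json


def parse_json(value):
    try:
        val = value
        if isinstance(val, collections.abc.Mapping):
            # Round-trip the mapping through JSON to validate it
            val = json.dumps(val)
        if val:
            return json.loads(val)
    except (ValueError, TypeError) as err:
        raise ValueError('Value must be valid JSON: %s' % err)
    return value
```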
-
-
-class Parameters(collections.Mapping):
- '''
- The parameters of a stack, with type checking, defaults &c. specified by
- the stack's template.
- '''
- def __init__(self, stack_name, tmpl, user_params={}, stack_id=None,
- validate_value=True):
- '''
- Create the parameter container for a stack from the stack name and
- template, optionally setting the user-supplied parameter values.
- '''
- def parameters():
- yield Parameter(PARAM_STACK_ID,
- ParamSchema({TYPE: STRING,
- DESCRIPTION: 'Stack ID',
- DEFAULT: str(stack_id)}))
- if stack_name is not None:
- yield Parameter(PARAM_STACK_NAME,
- ParamSchema({TYPE: STRING,
- DESCRIPTION: 'Stack Name',
- DEFAULT: stack_name}))
- yield Parameter(PARAM_REGION,
- ParamSchema({TYPE: STRING,
- DEFAULT: 'ap-southeast-1',
- ALLOWED_VALUES:
- ['us-east-1',
- 'us-west-1',
- 'us-west-2',
- 'sa-east-1',
- 'eu-west-1',
- 'ap-southeast-1',
- 'ap-northeast-1']}))
-
- schemata = self.tmpl.param_schemata().iteritems()
- for name, schema in schemata:
- value = user_params.get(name)
- yield Parameter(name, schema, value, validate_value)
-
- self.tmpl = tmpl
- self._validate(user_params)
- self.params = dict((p.name, p) for p in parameters())
-
- def __contains__(self, key):
- '''Return whether the specified parameter exists.'''
- return key in self.params
-
- def __iter__(self):
- '''Return an iterator over the parameter names.'''
- return iter(self.params)
-
- def __len__(self):
- '''Return the number of parameters defined.'''
- return len(self.params)
-
- def __getitem__(self, key):
- '''Get a parameter value.'''
- return self.params[key].value()
-
- def map(self, func, filter_func=lambda p: True):
- '''
- Map the supplied function onto each Parameter (optionally restricted
- by the filter function) and return the resulting dictionary.
- '''
- return dict((n, func(p))
- for n, p in self.params.iteritems() if filter_func(p))
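`Parameters.map` applies a function to every parameter that passes the filter and collects the results into a dict keyed by name. The same pattern in isolation, over a plain mapping:

```python
def map_params(params, func, filter_func=lambda p: True):
    # params: mapping of name -> parameter (plain values here for brevity)
    return dict((n, func(p)) for n, p in params.items() if filter_func(p))
```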
-
- def set_stack_id(self, stack_id):
- '''
- Set the AWS::StackId pseudo parameter value
- '''
- self.params[PARAM_STACK_ID].schema[DEFAULT] = stack_id
-
- def _validate(self, user_params):
- schemata = self.tmpl.param_schemata()
- for param in user_params:
- if param not in schemata:
- raise exception.UnknownUserParameter(key=param)
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import functools
-import re
-
-from heat.engine import environment
-from heat.common import exception
-from heat.engine import dependencies
-from heat.common import identifier
-from heat.engine import resource
-from heat.engine import resources
-from heat.engine import scheduler
-from heat.engine import template
-from heat.engine import timestamp
-from heat.engine import update
-from heat.engine.parameters import Parameters
-from heat.engine.template import Template
-from heat.engine.clients import Clients
-from heat.db import api as db_api
-
-from heat.openstack.common import log as logging
-from heat.openstack.common.gettextutils import _
-
-from heat.common.exception import StackValidationFailed
-
-logger = logging.getLogger(__name__)
-
-(PARAM_STACK_NAME, PARAM_REGION) = ('AWS::StackName', 'AWS::Region')
-
-
-class Stack(object):
-
- ACTIONS = (CREATE, DELETE, UPDATE, ROLLBACK, SUSPEND, RESUME
- ) = ('CREATE', 'DELETE', 'UPDATE', 'ROLLBACK', 'SUSPEND',
- 'RESUME')
-
- STATUSES = (IN_PROGRESS, FAILED, COMPLETE
- ) = ('IN_PROGRESS', 'FAILED', 'COMPLETE')
-
- created_time = timestamp.Timestamp(functools.partial(db_api.stack_get,
- show_deleted=True),
- 'created_at')
- updated_time = timestamp.Timestamp(functools.partial(db_api.stack_get,
- show_deleted=True),
- 'updated_at')
-
- _zones = None
-
- def __init__(self, context, stack_name, tmpl, env=None,
- stack_id=None, action=None, status=None,
- status_reason='', timeout_mins=60, resolve_data=True,
- disable_rollback=True, parent_resource=None, owner_id=None):
- '''
- Initialise from a context, name, Template object and (optionally)
- Environment object. The database ID may also be initialised, if the
- stack is already in the database.
- '''
-
- if owner_id is None:
- if re.match("[a-zA-Z][a-zA-Z0-9_.-]*$", stack_name) is None:
- raise ValueError(_('Invalid stack name %s,'
- ' must contain only alphanumeric or '
- '\"_-.\" characters and must start with a letter'
- ) % stack_name)
-
- self.id = stack_id
- self.owner_id = owner_id
- self.context = context
- self.clients = Clients(context)
- self.t = tmpl
- self.name = stack_name
- self.action = action
- self.status = status
- self.status_reason = status_reason
- self.timeout_mins = timeout_mins
- self.disable_rollback = disable_rollback
- self.parent_resource = parent_resource
-
- resources.initialise()
-
- self.env = env or environment.Environment({})
- self.parameters = Parameters(self.name, self.t,
- user_params=self.env.params)
-
- self._set_param_stackid()
-
- if resolve_data:
- self.outputs = self.resolve_static_data(self.t[template.OUTPUTS])
- else:
- self.outputs = {}
-
- template_resources = self.t[template.RESOURCES]
- self.resources = dict((name,
- resource.Resource(name, data, self))
- for (name, data) in template_resources.items())
-
- self.dependencies = self._get_dependencies(self.resources.itervalues())
-
- @property
- def root_stack(self):
- '''
- Return the root stack if this is nested (otherwise return self).
- '''
- if (self.parent_resource and self.parent_resource.stack):
- return self.parent_resource.stack.root_stack
- return self
-
- def total_resources(self):
- '''
- Total number of resources in a stack, including nested stacks below.
- '''
- total = 0
- for res in iter(self.resources.values()):
- if hasattr(res, 'nested') and res.nested():
- total += res.nested().total_resources()
- total += 1
- return total
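`total_resources` counts one per resource and recurses into any resource that wraps a nested stack. A self-contained sketch with stand-in classes (names are illustrative, not Heat's):

```python
class StubResource(object):
    """Resource stub; optionally wraps a nested stack."""

    def __init__(self, nested_stack=None):
        self._nested = nested_stack

    def nested(self):
        return self._nested


class StubStack(object):
    def __init__(self, resources):
        self.resources = resources

    def total_resources(self):
        total = 0
        for res in self.resources.values():
            if hasattr(res, 'nested') and res.nested():
                # Recurse into the nested stack first...
                total += res.nested().total_resources()
            # ...then count the wrapping resource itself
            total += 1
        return total
```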
-
- def _set_param_stackid(self):
- '''
- Update self.parameters with the current ARN, which is then provided
- via the Parameters class as the AWS::StackId pseudo parameter.
- '''
- # This can fail if constructor called without a valid context,
- # as it is in many tests
- try:
- stack_arn = self.identifier().arn()
- except (AttributeError, ValueError, TypeError):
- logger.warning("Unable to set parameters StackId identifier")
- else:
- self.parameters.set_stack_id(stack_arn)
-
- @staticmethod
- def _get_dependencies(resources):
- '''Return the dependency graph for a list of resources.'''
- deps = dependencies.Dependencies()
- for resource in resources:
- resource.add_dependencies(deps)
-
- return deps
-
- @classmethod
- def load(cls, context, stack_id=None, stack=None, resolve_data=True,
- parent_resource=None, show_deleted=True):
- '''Retrieve a Stack from the database.'''
- if stack is None:
- stack = db_api.stack_get(context, stack_id,
- show_deleted=show_deleted)
- if stack is None:
- message = 'No stack exists with id "%s"' % str(stack_id)
- raise exception.NotFound(message)
-
- template = Template.load(context, stack.raw_template_id)
- env = environment.Environment(stack.parameters)
- stack = cls(context, stack.name, template, env,
- stack.id, stack.action, stack.status, stack.status_reason,
- stack.timeout, resolve_data, stack.disable_rollback,
- parent_resource, owner_id=stack.owner_id)
-
- return stack
-
- def store(self, backup=False):
- '''
- Store the stack in the database and return its ID.
- If self.id is set, the existing stack is updated.
- '''
-
- s = {
- 'name': self._backup_name() if backup else self.name,
- 'raw_template_id': self.t.store(self.context),
- 'parameters': self.env.user_env_as_dict(),
- 'owner_id': self.owner_id,
- 'username': self.context.username,
- 'tenant': self.context.tenant_id,
- 'action': self.action,
- 'status': self.status,
- 'status_reason': self.status_reason,
- 'timeout': self.timeout_mins,
- 'disable_rollback': self.disable_rollback,
- }
- if self.id:
- db_api.stack_update(self.context, self.id, s)
- else:
- new_creds = db_api.user_creds_create(self.context)
- s['user_creds_id'] = new_creds.id
- new_s = db_api.stack_create(self.context, s)
- self.id = new_s.id
-
- self._set_param_stackid()
-
- return self.id
-
- def _backup_name(self):
- return '%s*' % self.name
-
- def identifier(self):
- '''
- Return an identifier for this stack.
- '''
- return identifier.HeatIdentifier(self.context.tenant_id,
- self.name, self.id)
-
- def __iter__(self):
- '''
- Return an iterator over this template's resources in the order that
- they should be started.
- '''
- return iter(self.dependencies)
-
- def __reversed__(self):
- '''
- Return an iterator over this template's resources in the order that
- they should be stopped.
- '''
- return reversed(self.dependencies)
-
- def __len__(self):
- '''Return the number of resources.'''
- return len(self.resources)
-
- def __getitem__(self, key):
- '''Get the resource with the specified name.'''
- return self.resources[key]
-
- def __setitem__(self, key, value):
- '''Set the resource with the specified name to a specific value.'''
- self.resources[key] = value
-
- def __contains__(self, key):
- '''Determine whether the stack contains the specified resource.'''
- return key in self.resources
-
- def keys(self):
- '''Return a list of resource keys for the stack.'''
- return self.resources.keys()
-
- def __str__(self):
- '''Return a human-readable string representation of the stack.'''
- return 'Stack "%s"' % self.name
-
- def resource_by_refid(self, refid):
- '''
- Return the resource in this stack with the specified
- refid, or None if not found
- '''
- for r in self.resources.values():
- if r.state in (
- (r.CREATE, r.IN_PROGRESS),
- (r.CREATE, r.COMPLETE),
- (r.UPDATE, r.IN_PROGRESS),
- (r.UPDATE, r.COMPLETE)) and r.FnGetRefId() == refid:
- return r
-
- def validate(self):
- '''
- http://docs.amazonwebservices.com/AWSCloudFormation/latest/\
- APIReference/API_ValidateTemplate.html
- '''
- # TODO(sdake) Should return line number of invalid reference
-
- # Check duplicate names between parameters and resources
- dup_names = set(self.parameters.keys()) & set(self.resources.keys())
-
- if dup_names:
- logger.debug("Duplicate names %s" % dup_names)
- raise StackValidationFailed(message="Duplicate names %s" %
- dup_names)
-
- for res in self:
- try:
- result = res.validate()
- except exception.Error as ex:
- logger.exception(ex)
- raise ex
- except Exception as ex:
- logger.exception(ex)
- raise StackValidationFailed(message=str(ex))
- if result:
- raise StackValidationFailed(message=result)
-
- def requires_deferred_auth(self):
- '''
- Returns whether this stack may need to perform API requests
- during its lifecycle using the configured deferred authentication
- method.
- '''
- return any(res.requires_deferred_auth for res in self)
-
- def state_set(self, action, status, reason):
- '''Update the stack state in the database.'''
- if action not in self.ACTIONS:
- raise ValueError("Invalid action %s" % action)
-
- if status not in self.STATUSES:
- raise ValueError("Invalid status %s" % status)
-
- self.action = action
- self.status = status
- self.status_reason = reason
-
- if self.id is None:
- return
-
- stack = db_api.stack_get(self.context, self.id)
- stack.update_and_save({'action': action,
- 'status': status,
- 'status_reason': reason})
-
- @property
- def state(self):
- '''Returns state, tuple of action, status.'''
- return (self.action, self.status)
-
- def timeout_secs(self):
- '''
- Return the stack creation timeout in seconds, or None if no timeout
- should be used.
- '''
- if self.timeout_mins is None:
- return None
-
- return self.timeout_mins * 60
-
- def create(self):
- '''
- Create the stack and all of the resources.
- '''
- def rollback():
- if not self.disable_rollback and self.state == (self.CREATE,
- self.FAILED):
- self.delete(action=self.ROLLBACK)
-
- creator = scheduler.TaskRunner(self.stack_task,
- action=self.CREATE,
- reverse=False,
- post_func=rollback)
- creator(timeout=self.timeout_secs())
-
- @scheduler.wrappertask
- def stack_task(self, action, reverse=False, post_func=None):
- '''
- A task to perform an action on the stack and all of the resources
- in forward or reverse dependency order, as specified by reverse.
- '''
- self.state_set(action, self.IN_PROGRESS,
- 'Stack %s started' % action)
-
- stack_status = self.COMPLETE
- reason = 'Stack %s completed successfully' % action.lower()
- res = None
-
- def resource_action(r):
- # Find e.g. resource.create and call it
- action_l = action.lower()
- handle = getattr(r, '%s' % action_l)
-
- return handle()
-
- action_task = scheduler.DependencyTaskGroup(self.dependencies,
- resource_action,
- reverse)
-
- try:
- yield action_task()
- except exception.ResourceFailure as ex:
- stack_status = self.FAILED
- reason = 'Resource %s failed: %s' % (action.lower(), str(ex))
- except scheduler.Timeout:
- stack_status = self.FAILED
- reason = '%s timed out' % action.title()
-
- self.state_set(action, stack_status, reason)
-
- if callable(post_func):
- post_func()
-
- def _backup_stack(self, create_if_missing=True):
- '''
- Get a Stack containing any in-progress resources from the previous
- stack state prior to an update.
- '''
- s = db_api.stack_get_by_name(self.context, self._backup_name(),
- owner_id=self.id)
- if s is not None:
- logger.debug('Loaded existing backup stack')
- return self.load(self.context, stack=s)
- elif create_if_missing:
- prev = type(self)(self.context, self.name, self.t, self.env,
- owner_id=self.id)
- prev.store(backup=True)
- logger.debug('Created new backup stack')
- return prev
- else:
- return None
-
- def update(self, newstack):
- '''
- Compare the current stack with newstack,
- and where necessary create/update/delete the resources until
- this stack aligns with newstack.
-
- Note update of existing stack resources depends on update
- being implemented in the underlying resource types
-
- Update will fail if it exceeds the specified timeout. The default is
- 60 minutes, set in the constructor.
- '''
- updater = scheduler.TaskRunner(self.update_task, newstack)
- updater()
-
- @scheduler.wrappertask
- def update_task(self, newstack, action=UPDATE):
- if action not in (self.UPDATE, self.ROLLBACK):
- logger.error("Unexpected action %s passed to update!" % action)
- self.state_set(self.UPDATE, self.FAILED,
- "Invalid action %s" % action)
- return
-
- if self.status != self.COMPLETE:
- if (action == self.ROLLBACK and
- self.state == (self.UPDATE, self.IN_PROGRESS)):
- logger.debug("Starting update rollback for %s" % self.name)
- else:
- self.state_set(action, self.FAILED,
- 'State invalid for %s' % action)
- return
-
- self.state_set(self.UPDATE, self.IN_PROGRESS,
- 'Stack %s started' % action)
-
- oldstack = Stack(self.context, self.name, self.t, self.env)
- backup_stack = self._backup_stack()
-
- try:
- update_task = update.StackUpdate(self, newstack, backup_stack,
- rollback=action == self.ROLLBACK)
- updater = scheduler.TaskRunner(update_task)
-
- self.env = newstack.env
- self.parameters = newstack.parameters
-
- try:
- updater.start(timeout=self.timeout_secs())
- yield
- while not updater.step():
- yield
- finally:
- cur_deps = self._get_dependencies(self.resources.itervalues())
- self.dependencies = cur_deps
-
- if action == self.UPDATE:
- reason = 'Stack successfully updated'
- else:
- reason = 'Stack rollback completed'
- stack_status = self.COMPLETE
-
- except scheduler.Timeout:
- stack_status = self.FAILED
- reason = 'Timed out'
- except exception.ResourceFailure as e:
- reason = str(e)
-
- stack_status = self.FAILED
- if action == self.UPDATE:
- # If rollback is enabled, we do another update, with the
- # existing template, so we roll back to the original state
- if not self.disable_rollback:
- yield self.update_task(oldstack, action=self.ROLLBACK)
- return
- else:
- logger.debug('Deleting backup stack')
- backup_stack.delete()
-
- self.state_set(action, stack_status, reason)
-
- # flip the template to the newstack values
- # Note we do this on success and failure, so the current
- # stack resources are stored, even if one is in a failed
- # state (otherwise we won't remove them on delete)
- self.t = newstack.t
- template_outputs = self.t[template.OUTPUTS]
- self.outputs = self.resolve_static_data(template_outputs)
- self.store()
-
- def delete(self, action=DELETE):
- '''
- Delete all of the resources, and then the stack itself.
- The action parameter is used to differentiate between a user
- initiated delete and an automatic stack rollback after a failed
- create, which amount to the same thing, but the states are recorded
- differently.
- '''
- if action not in (self.DELETE, self.ROLLBACK):
- logger.error("Unexpected action %s passed to delete!" % action)
- self.state_set(self.DELETE, self.FAILED,
- "Invalid action %s" % action)
- return
-
- stack_status = self.COMPLETE
- reason = 'Stack %s completed successfully' % action.lower()
- self.state_set(action, self.IN_PROGRESS, 'Stack %s started' % action)
-
- backup_stack = self._backup_stack(False)
- if backup_stack is not None:
- backup_stack.delete()
- if backup_stack.status != backup_stack.COMPLETE:
- errs = backup_stack.status_reason
- failure = 'Error deleting backup resources: %s' % errs
- self.state_set(action, self.FAILED,
- 'Failed to %s : %s' % (action, failure))
- return
-
- action_task = scheduler.DependencyTaskGroup(self.dependencies,
- resource.Resource.destroy,
- reverse=True)
- try:
- scheduler.TaskRunner(action_task)(timeout=self.timeout_secs())
- except exception.ResourceFailure as ex:
- stack_status = self.FAILED
- reason = 'Resource %s failed: %s' % (action.lower(), str(ex))
- except scheduler.Timeout:
- stack_status = self.FAILED
- reason = '%s timed out' % action.title()
-
- self.state_set(action, stack_status, reason)
- if stack_status != self.FAILED:
- db_api.stack_delete(self.context, self.id)
- self.id = None
-
- def suspend(self):
- '''
- Suspend the stack, which invokes handle_suspend for all stack resources,
- waits for all resources to become SUSPEND_COMPLETE, then declares the
- stack SUSPEND_COMPLETE.
- Note the default implementation for all resources is to do nothing
- other than move to SUSPEND_COMPLETE, so the resources must implement
- handle_suspend for this to have any effect.
- '''
- sus_task = scheduler.TaskRunner(self.stack_task,
- action=self.SUSPEND,
- reverse=True)
- sus_task(timeout=self.timeout_secs())
-
- def resume(self):
- '''
- Resume the stack, which invokes handle_resume for all stack resources,
- waits for all resources to become RESUME_COMPLETE, then declares the
- stack RESUME_COMPLETE.
- Note the default implementation for all resources is to do nothing
- other than move to RESUME_COMPLETE, so the resources must implement
- handle_resume for this to have any effect.
- '''
- sus_task = scheduler.TaskRunner(self.stack_task,
- action=self.RESUME,
- reverse=False)
- sus_task(timeout=self.timeout_secs())
-
- def output(self, key):
- '''
- Get the value of the specified stack output.
- '''
- value = self.outputs[key].get('Value', '')
- return self.resolve_runtime_data(value)
-
- def restart_resource(self, resource_name):
- '''
- Stop resource_name and all resources that depend on it, then
- start resource_name and those dependent resources again.
- '''
- deps = self.dependencies[self[resource_name]]
- failed = False
-
- for res in reversed(deps):
- try:
- scheduler.TaskRunner(res.destroy)()
- except exception.ResourceFailure as ex:
- failed = True
- logger.error('delete: %s' % str(ex))
-
- for res in deps:
- if not failed:
- try:
- res.state_reset()
- scheduler.TaskRunner(res.create)()
- except exception.ResourceFailure as ex:
- logger.exception('create')
- failed = True
- else:
- res.state_set(res.CREATE, res.FAILED,
- 'Resource restart aborted')
- # TODO(asalkeld) if any of this fails we Should
- # restart the whole stack
-
- def get_availability_zones(self):
- if self._zones is None:
- self._zones = [
- zone.zoneName for zone in
- self.clients.nova().availability_zones.list(detailed=False)]
- return self._zones
-
- def resolve_static_data(self, snippet):
- return resolve_static_data(self.t, self, self.parameters, snippet)
-
- def resolve_runtime_data(self, snippet):
- return resolve_runtime_data(self.t, self.resources, snippet)
-
-
-def resolve_static_data(template, stack, parameters, snippet):
- '''
- Resolve static parameters, map lookups, etc. in a template.
-
- Example:
-
- >>> from heat.common import template_format
- >>> template_str = '# JSON or YAML encoded template'
- >>> template = Template(template_format.parse(template_str))
- >>> parameters = Parameters('stack', template, {'KeyName': 'my_key'})
- >>> resolve_static_data(template, None, parameters, {'Ref': 'KeyName'})
- 'my_key'
- '''
- return transform(snippet,
- [functools.partial(template.resolve_param_refs,
- parameters=parameters),
- functools.partial(template.resolve_availability_zones,
- stack=stack),
- functools.partial(template.resolve_resource_facade,
- stack=stack),
- template.resolve_find_in_map,
- template.reduce_joins])
-
-
-def resolve_runtime_data(template, resources, snippet):
- return transform(snippet,
- [functools.partial(template.resolve_resource_refs,
- resources=resources),
- functools.partial(template.resolve_attributes,
- resources=resources),
- template.resolve_split,
- template.resolve_member_list_to_map,
- template.resolve_select,
- template.resolve_joins,
- template.resolve_replace,
- template.resolve_base64])
-
-
-def transform(data, transformations):
- '''
- Apply each of the transformation functions in the supplied list to the data
- in turn.
- '''
- for t in transformations:
- data = t(data)
- return data
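`transform` simply threads the snippet through each resolver in turn, so the list order in `resolve_static_data` and `resolve_runtime_data` defines resolution precedence (parameter refs before availability zones, and so on). The pipeline in isolation:

```python
def transform(data, transformations):
    # Apply each transformation to the result of the previous one
    for t in transformations:
        data = t(data)
    return data
```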
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import collections
-import re
-
-from heat.common import exception
-from heat.engine import parameters
-
-SCHEMA_KEYS = (
- REQUIRED, IMPLEMENTED, DEFAULT, TYPE, SCHEMA,
- ALLOWED_PATTERN, MIN_VALUE, MAX_VALUE, ALLOWED_VALUES,
- MIN_LENGTH, MAX_LENGTH, DESCRIPTION,
-) = (
- 'Required', 'Implemented', 'Default', 'Type', 'Schema',
- 'AllowedPattern', 'MinValue', 'MaxValue', 'AllowedValues',
- 'MinLength', 'MaxLength', 'Description',
-)
-
-SCHEMA_TYPES = (
- INTEGER,
- STRING, NUMBER, BOOLEAN,
- MAP, LIST
-) = (
- 'Integer',
- 'String', 'Number', 'Boolean',
- 'Map', 'List'
-)
-
-
-class InvalidPropertySchemaError(Exception):
- pass
-
-
-class Schema(collections.Mapping):
- """
- A Schema for a resource Property.
-
- Schema objects are serialisable to dictionaries following a superset of
- the HOT input Parameter schema using dict().
-
- Serialises to JSON in the form::
-
- {
- 'type': 'list',
- 'required': False,
- 'constraints': [
- {
- 'length': {'min': 1},
- 'description': 'List must not be empty'
- }
- ],
- 'schema': {
- '*': {
- 'type': 'string'
- }
- },
- 'description': 'An example list property.'
- }
- """
-
- KEYS = (
- TYPE, DESCRIPTION, DEFAULT, SCHEMA, REQUIRED, CONSTRAINTS,
- ) = (
- 'type', 'description', 'default', 'schema', 'required', 'constraints',
- )
-
- def __init__(self, data_type, description=None,
- default=None, schema=None,
- required=False, constraints=[],
- implemented=True):
- self._len = None
- self.type = data_type
- if self.type not in SCHEMA_TYPES:
- raise InvalidPropertySchemaError('Invalid type (%s)' % self.type)
-
- self.description = description
- self.required = required
- self.implemented = implemented
-
- if isinstance(schema, type(self)):
- if self.type != LIST:
- msg = 'Single schema valid only for %s, not %s' % (LIST,
- self.type)
- raise InvalidPropertySchemaError(msg)
-
- self.schema = AnyIndexDict(schema)
- else:
- self.schema = schema
- if self.schema is not None and self.type not in (LIST, MAP):
- msg = 'Schema valid only for %s or %s, not %s' % (LIST, MAP,
- self.type)
- raise InvalidPropertySchemaError(msg)
-
- self.constraints = constraints
- for c in constraints:
- if self.type not in c.valid_types:
- err_msg = '%s constraint invalid for %s' % (type(c).__name__,
- self.type)
- raise InvalidPropertySchemaError(err_msg)
-
- self.default = default
- if self.default is not None:
- try:
- self.validate_constraints(self.default)
- except (ValueError, TypeError) as exc:
- raise InvalidPropertySchemaError('Invalid default %s (%s)' %
- (self.default, exc))
-
- @classmethod
- def from_legacy(cls, schema_dict):
- """
- Return a new Schema object from a legacy schema dictionary.
- """
-
- # Check for fully-fledged Schema objects
- if isinstance(schema_dict, cls):
- return schema_dict
-
- unknown = [k for k in schema_dict if k not in SCHEMA_KEYS]
- if unknown:
- raise InvalidPropertySchemaError('Unknown key(s) %s' % unknown)
-
- def constraints():
- def get_num(key):
- val = schema_dict.get(key)
- if val is not None:
- val = Property.str_to_num(val)
- return val
-
- if MIN_VALUE in schema_dict or MAX_VALUE in schema_dict:
- yield Range(get_num(MIN_VALUE),
- get_num(MAX_VALUE))
- if MIN_LENGTH in schema_dict or MAX_LENGTH in schema_dict:
- yield Length(get_num(MIN_LENGTH),
- get_num(MAX_LENGTH))
- if ALLOWED_VALUES in schema_dict:
- yield AllowedValues(schema_dict[ALLOWED_VALUES])
- if ALLOWED_PATTERN in schema_dict:
- yield AllowedPattern(schema_dict[ALLOWED_PATTERN])
-
- try:
- data_type = schema_dict[TYPE]
- except KeyError:
- raise InvalidPropertySchemaError('No %s specified' % TYPE)
-
- if SCHEMA in schema_dict:
- if data_type == LIST:
- ss = cls.from_legacy(schema_dict[SCHEMA])
- elif data_type == MAP:
- schema_dicts = schema_dict[SCHEMA].items()
- ss = dict((n, cls.from_legacy(sd)) for n, sd in schema_dicts)
- else:
- raise InvalidPropertySchemaError('%s supplied for %s %s' %
- (SCHEMA, TYPE, data_type))
- else:
- ss = None
-
- return cls(data_type,
- description=schema_dict.get(DESCRIPTION),
- default=schema_dict.get(DEFAULT),
- schema=ss,
- required=schema_dict.get(REQUIRED, False),
- constraints=list(constraints()),
- implemented=schema_dict.get(IMPLEMENTED, True))
-
- @classmethod
- def from_parameter(cls, param):
- """
- Return a property Schema corresponding to a parameter.
-
- Convert a parameter schema from a provider template to a property
- Schema for the corresponding resource facade.
- """
- param_type_map = {
- parameters.STRING: STRING,
- parameters.NUMBER: NUMBER,
- parameters.COMMA_DELIMITED_LIST: LIST,
- parameters.JSON: MAP
- }
-
- def constraints():
- def get_num(key):
- val = param.get(key)
- if val is not None:
- val = Property.str_to_num(val)
- return val
-
- desc = param.get(parameters.CONSTRAINT_DESCRIPTION)
-
- if parameters.MIN_VALUE in param or parameters.MAX_VALUE in param:
- yield Range(get_num(parameters.MIN_VALUE),
- get_num(parameters.MAX_VALUE))
- if (parameters.MIN_LENGTH in param or
- parameters.MAX_LENGTH in param):
- yield Length(get_num(parameters.MIN_LENGTH),
- get_num(parameters.MAX_LENGTH))
- if parameters.ALLOWED_VALUES in param:
- yield AllowedValues(param[parameters.ALLOWED_VALUES], desc)
- if parameters.ALLOWED_PATTERN in param:
- yield AllowedPattern(param[parameters.ALLOWED_PATTERN], desc)
-
- return cls(param_type_map.get(param[parameters.TYPE], MAP),
- description=param.get(parameters.DESCRIPTION),
- required=parameters.DEFAULT not in param,
- constraints=list(constraints()))
-
- def validate_constraints(self, value):
- for constraint in self.constraints:
- constraint.validate(value)
-
- def __getitem__(self, key):
- if key == self.TYPE:
- return self.type.lower()
- elif key == self.DESCRIPTION:
- if self.description is not None:
- return self.description
- elif key == self.DEFAULT:
- if self.default is not None:
- return self.default
- elif key == self.SCHEMA:
- if self.schema is not None:
- return dict((n, dict(s)) for n, s in self.schema.items())
- elif key == self.REQUIRED:
- return self.required
- elif key == self.CONSTRAINTS:
- if self.constraints:
- return [dict(c) for c in self.constraints]
-
- raise KeyError(key)
-
- def __iter__(self):
- for k in self.KEYS:
- try:
- self[k]
- except KeyError:
- pass
- else:
- yield k
-
- def __len__(self):
- if self._len is None:
- self._len = len(list(iter(self)))
- return self._len
-
-
-class AnyIndexDict(collections.Mapping):
- """
- A Mapping that returns the same value for any integer index.
-
- Used for storing the schema for a list. When converted to a dictionary,
- it contains a single item with the key '*'.
- """
-
- ANYTHING = '*'
-
- def __init__(self, value):
- self.value = value
-
- def __getitem__(self, key):
- if key != self.ANYTHING and not isinstance(key, (int, long)):
- raise KeyError('Invalid key %s' % str(key))
-
- return self.value
-
- def __iter__(self):
- yield self.ANYTHING
-
- def __len__(self):
- return 1
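Because every integer key maps to the same value, a list schema stored in an `AnyIndexDict` answers lookups for any element index. A Python 3 transcription to demonstrate the behaviour (`collections.abc.Mapping`, and plain `int` rather than Python 2's `(int, long)`):

```python
import collections.abc


class AnyIndexDict(collections.abc.Mapping):
    """Mapping that returns the same value for any integer index."""

    ANYTHING = '*'

    def __init__(self, value):
        self.value = value

    def __getitem__(self, key):
        if key != self.ANYTHING and not isinstance(key, int):
            raise KeyError('Invalid key %s' % str(key))
        return self.value

    def __iter__(self):
        yield self.ANYTHING

    def __len__(self):
        return 1
```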
-
-
-class Constraint(collections.Mapping):
- """
- Parent class for constraints on allowable values for a Property.
-
- Constraints are serialisable to dictionaries following the HOT input
- Parameter constraints schema using dict().
- """
-
- (DESCRIPTION,) = ('description',)
-
- def __init__(self, description=None):
- self.description = description
-
- def __str__(self):
- def desc():
- if self.description:
- yield self.description
- yield self._str()
-
- return '\n'.join(desc())
-
- def validate(self, value):
- if not self._is_valid(value):
- if self.description:
- err_msg = self.description
- else:
- err_msg = self._err_msg(value)
- raise ValueError(err_msg)
-
- @classmethod
- def _name(cls):
- return '_'.join(w.lower() for w in re.findall('[A-Z]?[a-z]+',
- cls.__name__))
-
- def __getitem__(self, key):
- if key == self.DESCRIPTION:
- if self.description is None:
- raise KeyError(key)
- return self.description
-
- if key == self._name():
- return self._constraint()
-
- raise KeyError(key)
-
- def __iter__(self):
- if self.description is not None:
- yield self.DESCRIPTION
-
- yield self._name()
-
- def __len__(self):
- return 2 if self.description is not None else 1
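The `_name` classmethod above derives the serialised key from the class name; the regex splits the CamelCase name into words which are then lowercased and joined with underscores. A quick standalone illustration of that transformation:

```python
import re


def constraint_name(class_name):
    # Split CamelCase into words, lowercase them, join with underscores
    return '_'.join(w.lower() for w in re.findall('[A-Z]?[a-z]+', class_name))


assert constraint_name('Range') == 'range'
assert constraint_name('AllowedValues') == 'allowed_values'
assert constraint_name('AllowedPattern') == 'allowed_pattern'
```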
-
-
-class Range(Constraint):
- """
- Constrain values within a range.
-
- Serialises to JSON as::
-
- {
- 'range': {'min': <min>, 'max': <max>},
- 'description': <description>
- }
- """
-
- (MIN, MAX) = ('min', 'max')
-
- valid_types = (INTEGER, NUMBER)
-
- def __init__(self, min=None, max=None, description=None):
- super(Range, self).__init__(description)
- self.min = min
- self.max = max
-
- for param in (min, max):
- if not isinstance(param, (float, int, long, type(None))):
- raise InvalidPropertySchemaError('min/max must be numeric')
-
- if min is max is None:
- raise InvalidPropertySchemaError('range must have min and/or max')
-
- def _str(self):
- if self.max is None:
- fmt = _('The value must be at least %(min)s.')
- elif self.min is None:
- fmt = _('The value must be no greater than %(max)s.')
- else:
- fmt = _('The value must be in the range %(min)s to %(max)s.')
- return fmt % self._constraint()
-
- def _err_msg(self, value):
- return '%s is out of range (min: %s, max: %s)' % (value,
- self.min,
- self.max)
-
- def _is_valid(self, value):
- value = Property.str_to_num(value)
-
- if self.min is not None:
- if value < self.min:
- return False
-
- if self.max is not None:
- if value > self.max:
- return False
-
- return True
-
- def _constraint(self):
- def constraints():
- if self.min is not None:
- yield self.MIN, self.min
- if self.max is not None:
- yield self.MAX, self.max
-
- return dict(constraints())
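The serialisation contract described in the `Range` docstring can be exercised with a minimal standalone re-implementation (Python 3 `collections.abc`; the type checks and translated messages of the full class are omitted):

```python
import collections.abc


class Range(collections.abc.Mapping):
    """Minimal standalone sketch of the Range constraint above."""

    def __init__(self, min=None, max=None, description=None):
        if min is None and max is None:
            raise ValueError('range must have min and/or max')
        self.min, self.max, self.description = min, max, description

    def _constraint(self):
        # Only the bounds that were actually given are serialised
        return dict((k, v) for k, v in (('min', self.min), ('max', self.max))
                    if v is not None)

    def validate(self, value):
        if self.min is not None and value < self.min:
            raise ValueError('%s is out of range' % value)
        if self.max is not None and value > self.max:
            raise ValueError('%s is out of range' % value)

    def __getitem__(self, key):
        if key == 'description' and self.description is not None:
            return self.description
        if key == 'range':
            return self._constraint()
        raise KeyError(key)

    def __iter__(self):
        if self.description is not None:
            yield 'description'
        yield 'range'

    def __len__(self):
        return 2 if self.description is not None else 1


r = Range(min=1, max=10, description='between 1 and 10')
# dict() produces the HOT parameter-constraint form from the docstring
assert dict(r) == {'description': 'between 1 and 10',
                   'range': {'min': 1, 'max': 10}}
r.validate(5)   # in range: no exception raised
```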
-
-
-class Length(Range):
- """
- Constrain the length of values within a range.
-
- Serialises to JSON as::
-
- {
- 'length': {'min': <min>, 'max': <max>},
- 'description': <description>
- }
- """
-
- valid_types = (STRING, LIST)
-
- def __init__(self, min=None, max=None, description=None):
- super(Length, self).__init__(min, max, description)
-
- for param in (min, max):
- if not isinstance(param, (int, long, type(None))):
- msg = 'min/max length must be integral'
- raise InvalidPropertySchemaError(msg)
-
- def _str(self):
- if self.max is None:
- fmt = _('The length must be at least %(min)s.')
- elif self.min is None:
- fmt = _('The length must be no greater than %(max)s.')
- else:
- fmt = _('The length must be in the range %(min)s to %(max)s.')
- return fmt % self._constraint()
-
- def _err_msg(self, value):
- return 'length (%d) is out of range (min: %s, max: %s)' % (len(value),
- self.min,
- self.max)
-
- def _is_valid(self, value):
- return super(Length, self)._is_valid(len(value))
-
-
-class AllowedValues(Constraint):
- """
- Constrain values to a predefined set.
-
- Serialises to JSON as::
-
- {
- 'allowed_values': [<allowed1>, <allowed2>, ...],
- 'description': <description>
- }
- """
-
- valid_types = (STRING, INTEGER, NUMBER, BOOLEAN)
-
- def __init__(self, allowed, description=None):
- super(AllowedValues, self).__init__(description)
- if (not isinstance(allowed, collections.Sequence) or
- isinstance(allowed, basestring)):
- raise InvalidPropertySchemaError('AllowedValues must be a list')
- self.allowed = tuple(allowed)
-
- def _str(self):
- allowed = ', '.join(str(a) for a in self.allowed)
- return _('Allowed values: %s') % allowed
-
- def _err_msg(self, value):
- allowed = '[%s]' % ', '.join(str(a) for a in self.allowed)
- return '"%s" is not an allowed value %s' % (value, allowed)
-
- def _is_valid(self, value):
- return value in self.allowed
-
- def _constraint(self):
- return list(self.allowed)
-
-
-class AllowedPattern(Constraint):
- """
- Constrain values to a predefined regular expression pattern.
-
- Serialises to JSON as::
-
- {
- 'allowed_pattern': <pattern>,
- 'description': <description>
- }
- """
-
- valid_types = (STRING,)
-
- def __init__(self, pattern, description=None):
- super(AllowedPattern, self).__init__(description)
- self.pattern = pattern
- self.match = re.compile(pattern).match
-
- def _str(self):
- return _('Value must match pattern: %s') % self.pattern
-
- def _err_msg(self, value):
- return '"%s" does not match pattern "%s"' % (value, self.pattern)
-
- def _is_valid(self, value):
- match = self.match(value)
- return match is not None and match.end() == len(value)
-
- def _constraint(self):
- return self.pattern
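Note that `_is_valid` anchors the pattern at both ends: `re.match` only anchors at the start of the string, so the extra `match.end()` comparison is what rejects trailing characters (later Python versions provide `re.fullmatch` for the same effect). A quick illustration:

```python
import re

pattern = re.compile('[a-z]+')


def is_valid(value):
    # match() anchors only at the start of the string; comparing
    # match.end() with len(value) additionally anchors the end
    match = pattern.match(value)
    return match is not None and match.end() == len(value)


assert is_valid('abc')
assert not is_valid('abc123')   # start matches, but trailing chars remain
assert not is_valid('123abc')   # no match at the start at all
```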
-
-
-class Property(object):
-
- def __init__(self, schema, name=None):
- self.schema = Schema.from_legacy(schema)
- self.name = name
-
- def required(self):
- return self.schema.required
-
- def implemented(self):
- return self.schema.implemented
-
- def has_default(self):
- return self.schema.default is not None
-
- def default(self):
- return self.schema.default
-
- def type(self):
- return self.schema.type
-
- @staticmethod
- def str_to_num(value):
- try:
- return int(value)
- except ValueError:
- return float(value)
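`str_to_num` prefers integer conversion and falls back to float, so `'42'` stays integral while `'2.5'` (or anything `int()` rejects) becomes a float. A standalone copy for illustration:

```python
def str_to_num(value):
    """Convert a string to an int if possible, otherwise to a float."""
    try:
        return int(value)
    except ValueError:
        return float(value)


assert str_to_num('42') == 42 and isinstance(str_to_num('42'), int)
assert str_to_num('2.5') == 2.5
assert isinstance(str_to_num('1e3'), float)   # int() rejects exponent form
```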
-
- def _validate_integer(self, value):
- if value is None:
- value = self.has_default() and self.default() or 0
- if not isinstance(value, (int, long)):
- raise TypeError('value is not an integer')
- return self._validate_number(value)
-
- def _validate_number(self, value):
- if value is None:
- value = self.has_default() and self.default() or 0
-
- # the conversion raises ValueError if the value is not numeric;
- # the original (uncoerced) value is returned
- self.str_to_num(value)
-
- return value
-
- def _validate_string(self, value):
- if value is None:
- value = self.has_default() and self.default() or ''
- if not isinstance(value, basestring):
- raise ValueError('Value must be a string')
- return value
-
- def _validate_children(self, child_values, keys=None):
- if self.schema.schema is not None:
- if keys is None:
- keys = list(self.schema.schema)
- schemata = dict((k, self.schema.schema[k]) for k in keys)
- properties = Properties(schemata, dict(child_values),
- parent_name=self.name)
- return ((k, properties[k]) for k in keys)
- else:
- return child_values
-
- def _validate_map(self, value):
- if value is None:
- value = self.has_default() and self.default() or {}
- if not isinstance(value, collections.Mapping):
- raise TypeError('"%s" is not a map' % value)
-
- return dict(self._validate_children(value.iteritems()))
-
- def _validate_list(self, value):
- if value is None:
- value = self.has_default() and self.default() or []
- if (not isinstance(value, collections.Sequence) or
- isinstance(value, basestring)):
- raise TypeError('"%s" is not a list' % repr(value))
-
- return [v for i, v in self._validate_children(enumerate(value),
- range(len(value)))]
-
- def _validate_bool(self, value):
- if value is None:
- value = self.has_default() and self.default() or False
- if isinstance(value, bool):
- return value
- normalised = value.lower()
- if normalised not in ['true', 'false']:
- raise ValueError('"%s" is not a valid boolean' % value)
-
- return normalised == 'true'
-
- def _validate_data_type(self, value):
- t = self.type()
- if t == STRING:
- return self._validate_string(value)
- elif t == INTEGER:
- return self._validate_integer(value)
- elif t == NUMBER:
- return self._validate_number(value)
- elif t == MAP:
- return self._validate_map(value)
- elif t == LIST:
- return self._validate_list(value)
- elif t == BOOLEAN:
- return self._validate_bool(value)
-
- def validate_data(self, value):
- value = self._validate_data_type(value)
- self.schema.validate_constraints(value)
- return value
-
-
-def schemata(schema_dicts):
- """
- Return a dictionary of Schema objects for the given dictionary of schemata.
-
- The input schemata are converted from the legacy (dictionary-based) format
- to Schema objects where necessary.
- """
- return dict((n, Schema.from_legacy(s)) for n, s in schema_dicts.items())
-
-
-class Properties(collections.Mapping):
-
- def __init__(self, schema, data, resolver=lambda d: d, parent_name=None):
- self.props = dict((k, Property(s, k)) for k, s in schema.items())
- self.resolve = resolver
- self.data = data
- if parent_name is None:
- self.error_prefix = ''
- else:
- self.error_prefix = '%s: ' % parent_name
-
- @staticmethod
- def schema_from_params(params_snippet):
- """
- Convert a template snippet that defines parameters
- into a properties schema
-
- :param params_snippet: parameter definition from a template
- :returns: an equivalent properties schema for the specified params
- """
- if params_snippet:
- return dict((n, Schema.from_parameter(p)) for n, p
- in params_snippet.items())
- return {}
-
- def validate(self, with_value=True):
- for (key, prop) in self.props.items():
- if with_value:
- try:
- self[key]
- except ValueError as e:
- msg = "Property error: %s" % str(e)
- raise exception.StackValidationFailed(message=msg)
-
- # are there unimplemented Properties
- if not prop.implemented() and key in self.data:
- msg = "Property %s not implemented yet" % key
- raise exception.StackValidationFailed(message=msg)
-
- for key in self.data:
- if key not in self.props:
- msg = "Unknown Property %s" % key
- raise exception.StackValidationFailed(message=msg)
-
- def __getitem__(self, key):
- if key not in self:
- raise KeyError(self.error_prefix + 'Invalid Property %s' % key)
-
- prop = self.props[key]
-
- if key in self.data:
- try:
- value = self.resolve(self.data[key])
- return prop.validate_data(value)
- # the resolver function could raise any number of exceptions,
- # so handle this generically
- except Exception as e:
- raise ValueError(self.error_prefix + '%s %s' % (key, str(e)))
- elif prop.has_default():
- return prop.default()
- elif prop.required():
- raise ValueError(self.error_prefix +
- 'Property %s not assigned' % key)
-
- def __len__(self):
- return len(self.props)
-
- def __contains__(self, key):
- return key in self.props
-
- def __iter__(self):
- return iter(self.props)
-
- @staticmethod
- def _generate_input(schema, params=None, path=None):
- '''Generate an input based on a path in the schema or property
- defaults.
-
- :param schema: The schema to generate a parameter or value for.
- :param params: A dict to map a schema to a parameter path.
- :param path: Required if params != None. The params key
- to save the schema at.
- :returns: A Ref to the parameter if path != None and params != None
- :returns: The property default if params == None or path == None
- '''
- if schema.get('Implemented') is False:
- return
-
- if schema[TYPE] == LIST:
- params[path] = {parameters.TYPE: parameters.COMMA_DELIMITED_LIST}
- return {'Fn::Split': {'Ref': path}}
-
- elif schema[TYPE] == MAP:
- params[path] = {parameters.TYPE: parameters.JSON}
- return {'Ref': path}
-
- elif params is not None and path is not None:
- for prop in schema.keys():
- if prop not in parameters.PARAMETER_KEYS and prop in schema:
- del schema[prop]
- params[path] = schema
- return {'Ref': path}
- else:
- prop = Property(schema)
- return prop.has_default() and prop.default() or None
-
- @staticmethod
- def _schema_to_params_and_props(schema, params=None):
- '''Generates a default template based on the provided schema.
- ::
-
- ex: input: schema = {'foo': {'Type': 'String'}}, params = {}
- output: {'foo': {'Ref': 'foo'}},
- params = {'foo': {'Type': 'String'}}
-
- ex: input: schema = {'foo': {'Type': 'List'}, 'bar': {'Type': 'Map'}},
- params = {}
- output: {'foo': {'Fn::Split': {'Ref': 'foo'}},
- 'bar': {'Ref': 'bar'}},
- params = {'foo': {parameters.TYPE:
- parameters.COMMA_DELIMITED_LIST},
- 'bar': {parameters.TYPE: parameters.JSON}}
-
- :param schema: The schema to generate a parameter or value for.
- :param params: A dict to map a schema to a parameter path.
- :returns: A dict of properties resolved for a template's schema
- '''
- properties = {}
- for prop, nested_schema in schema.iteritems():
- properties[prop] = Properties._generate_input(nested_schema,
- params,
- prop)
- # remove properties that are not implemented
- if properties[prop] is None:
- del properties[prop]
- return properties
-
- @staticmethod
- def schema_to_parameters_and_properties(schema):
- '''Generates properties with params resolved for a resource's
- properties_schema.
-
- :param schema: A resource's properties_schema
- :returns: A tuple of params and properties dicts
- '''
- params = {}
- properties = (Properties.
- _schema_to_params_and_props(schema, params=params))
- return (params, properties)
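The docstring examples above can be reproduced with a condensed standalone sketch. The parameter type names `'CommaDelimitedList'` and `'Json'` are stand-ins for `parameters.COMMA_DELIMITED_LIST` and `parameters.JSON`, and the handling of unimplemented properties and non-parameter schema keys is omitted:

```python
def generate_input(schema, params, name):
    # List properties become comma-delimited-list parameters consumed via
    # Fn::Split; maps become JSON parameters; everything else is a Ref
    if schema['Type'] == 'List':
        params[name] = {'Type': 'CommaDelimitedList'}
        return {'Fn::Split': {'Ref': name}}
    if schema['Type'] == 'Map':
        params[name] = {'Type': 'Json'}
        return {'Ref': name}
    params[name] = schema
    return {'Ref': name}


def schema_to_params_and_props(schema):
    params = {}
    props = dict((name, generate_input(s, params, name))
                 for name, s in schema.items())
    return params, props


params, props = schema_to_params_and_props({'foo': {'Type': 'List'},
                                            'bar': {'Type': 'Map'}})
assert props == {'foo': {'Fn::Split': {'Ref': 'foo'}},
                 'bar': {'Ref': 'bar'}}
assert params == {'foo': {'Type': 'CommaDelimitedList'},
                  'bar': {'Type': 'Json'}}
```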
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import base64
-from datetime import datetime
-
-from heat.engine import event
-from heat.common import exception
-from heat.openstack.common import excutils
-from heat.db import api as db_api
-from heat.common import identifier
-from heat.common import short_id
-from heat.engine import scheduler
-from heat.engine import resources
-from heat.engine import timestamp
-# import class to avoid name collisions and ugly aliasing
-from heat.engine.attributes import Attributes
-from heat.engine.properties import Properties
-
-from heat.openstack.common import log as logging
-from heat.openstack.common.gettextutils import _
-
-logger = logging.getLogger(__name__)
-
-
-def get_types():
- '''Return an iterator over the list of valid resource types.'''
- return iter(resources.global_env().get_types())
-
-
-def get_class(resource_type, resource_name=None):
- '''Return the Resource class for a given resource type.'''
- return resources.global_env().get_class(resource_type, resource_name)
-
-
-def _register_class(resource_type, resource_class):
- resources.global_env().register_class(resource_type, resource_class)
-
-
-class UpdateReplace(Exception):
- '''
- Raised when resource update requires replacement
- '''
- _message = _("The Resource %s requires replacement.")
-
- def __init__(self, resource_name='Unknown',
- message=_("The Resource %s requires replacement.")):
- try:
- msg = message % resource_name
- except TypeError:
- msg = message
- super(UpdateReplace, self).__init__(msg)
-
-
-class Metadata(object):
- '''
- A descriptor for accessing the metadata of a resource while ensuring the
- most up-to-date data is always obtained from the database.
- '''
-
- def __get__(self, resource, resource_class):
- '''Return the metadata for the owning resource.'''
- if resource is None:
- return None
- if resource.id is None:
- return resource.parsed_template('Metadata')
- rs = db_api.resource_get(resource.stack.context, resource.id)
- rs.refresh(attrs=['rsrc_metadata'])
- return rs.rsrc_metadata
-
- def __set__(self, resource, metadata):
- '''Update the metadata for the owning resource.'''
- if resource.id is None:
- raise exception.ResourceNotAvailable(resource_name=resource.name)
- rs = db_api.resource_get(resource.stack.context, resource.id)
- rs.update_and_save({'rsrc_metadata': metadata})
-
-
-class Resource(object):
- ACTIONS = (INIT, CREATE, DELETE, UPDATE, ROLLBACK, SUSPEND, RESUME
- ) = ('INIT', 'CREATE', 'DELETE', 'UPDATE', 'ROLLBACK',
- 'SUSPEND', 'RESUME')
-
- STATUSES = (IN_PROGRESS, FAILED, COMPLETE
- ) = ('IN_PROGRESS', 'FAILED', 'COMPLETE')
-
- # If True, this resource must be created before it can be referenced.
- strict_dependency = True
-
- created_time = timestamp.Timestamp(db_api.resource_get, 'created_at')
- updated_time = timestamp.Timestamp(db_api.resource_get, 'updated_at')
-
- metadata = Metadata()
-
- # Resource implementations set this to the subset of template keys
- # which are supported for handle_update, used by update_template_diff
- update_allowed_keys = ()
-
- # Resource implementations set this to the subset of resource properties
- # supported for handle_update, used by update_template_diff_properties
- update_allowed_properties = ()
-
- # Resource implementations set this to the name: description dictionary
- # that describes the appropriate resource attributes
- attributes_schema = {}
-
- # If True, this resource may perform authenticated API requests
- # throughout its lifecycle
- requires_deferred_auth = False
-
- def __new__(cls, name, json, stack):
- '''Create a new Resource of the appropriate class for its type.'''
-
- if cls != Resource:
- # Call is already for a subclass, so pass it through
- return super(Resource, cls).__new__(cls)
-
- # Select the correct subclass to instantiate
- ResourceClass = stack.env.get_class(json['Type'],
- resource_name=name)
- return ResourceClass(name, json, stack)
-
- def __init__(self, name, json_snippet, stack):
- if '/' in name:
- raise ValueError(_('Resource name may not contain "/"'))
-
- self.stack = stack
- self.context = stack.context
- self.name = name
- self.json_snippet = json_snippet
- self.t = stack.resolve_static_data(json_snippet)
- self.properties = Properties(self.properties_schema,
- self.t.get('Properties', {}),
- self._resolve_runtime_data,
- self.name)
- self.attributes = Attributes(self.name,
- self.attributes_schema,
- self._resolve_attribute)
-
- resource = db_api.resource_get_by_name_and_stack(self.context,
- name, stack.id)
- if resource:
- self.resource_id = resource.nova_instance
- self.action = resource.action
- self.status = resource.status
- self.status_reason = resource.status_reason
- self.id = resource.id
- self.data = resource.data
- else:
- self.resource_id = None
- # if the stack is being deleted, assume we've already been deleted
- if stack.action == stack.DELETE:
- self.action = self.DELETE
- else:
- self.action = self.INIT
- self.status = self.COMPLETE
- self.status_reason = ''
- self.id = None
- self.data = []
-
- def __eq__(self, other):
- '''Allow == comparison of two resources.'''
- # For the purposes of comparison, we declare two resource objects
- # equal if their names and parsed_templates are the same
- if isinstance(other, Resource):
- return (self.name == other.name) and (
- self.parsed_template() == other.parsed_template())
- return NotImplemented
-
- def __ne__(self, other):
- '''Allow != comparison of two resources.'''
- result = self.__eq__(other)
- if result is NotImplemented:
- return result
- return not result
-
- def type(self):
- return self.t['Type']
-
- def _resolve_runtime_data(self, snippet):
- return self.stack.resolve_runtime_data(snippet)
-
- def has_interface(self, resource_type):
- """Check to see if this resource is either mapped to resource_type
- or is a "resource_type".
- """
- if self.type() == resource_type:
- return True
- ri = self.stack.env.get_resource_info(self.type(),
- self.name)
- return ri.name == resource_type
-
- def identifier(self):
- '''Return an identifier for this resource.'''
- return identifier.ResourceIdentifier(resource_name=self.name,
- **self.stack.identifier())
-
- def parsed_template(self, section=None, default={}):
- '''
- Return the parsed template data for the resource. May be limited to
- only one section of the data, in which case a default value may also
- be supplied.
- '''
- if section is None:
- template = self.t
- else:
- template = self.t.get(section, default)
- return self._resolve_runtime_data(template)
-
- def update_template_diff(self, after, before):
- '''
- Returns the difference between the before and after json snippets. If
- something has been removed in after which exists in before, it is set
- to None. Raises UpdateReplace if any of the differing keys are not in
- update_allowed_keys.
- '''
- update_allowed_set = set(self.update_allowed_keys)
-
- # Create a set containing the keys in both current and update template
- template_keys = set(before.keys())
- template_keys.update(set(after.keys()))
-
- # Create a set of keys which differ (or are missing/added)
- changed_keys_set = set([k for k in template_keys
- if before.get(k) != after.get(k)])
-
- if not changed_keys_set.issubset(update_allowed_set):
- # keys outside update_allowed_keys force replacement
- raise UpdateReplace(self.name)
-
- return dict((k, after.get(k)) for k in changed_keys_set)
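The key computation above can be condensed into a standalone sketch (a plain `ValueError` stands in for `UpdateReplace` here):

```python
def update_template_diff(before, after, update_allowed_keys):
    """Sketch of the diff logic above: keys whose values differ between
    the two snippets, raising if any fall outside the allowed set."""
    template_keys = set(before) | set(after)
    changed = set(k for k in template_keys if before.get(k) != after.get(k))
    if not changed.issubset(set(update_allowed_keys)):
        raise ValueError('replacement required: %s'
                         % sorted(changed - set(update_allowed_keys)))
    # keys removed in 'after' map to None via after.get()
    return dict((k, after.get(k)) for k in changed)


before = {'Metadata': {'a': 1}, 'Properties': {'size': 1}}
after = {'Metadata': {'a': 2}}
diff = update_template_diff(before, after, ('Metadata', 'Properties'))
assert diff == {'Metadata': {'a': 2}, 'Properties': None}
```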
-
- def update_template_diff_properties(self, after, before):
- '''
- Returns the changed Properties between the before and after json
- snippets. If a property has been removed in after which exists in
- before, it is set to None. Raises UpdateReplace if any of the modified
- properties are not in update_allowed_properties.
- '''
- update_allowed_set = set(self.update_allowed_properties)
-
- # Create a set containing the keys in both current and update template
- current_properties = before.get('Properties', {})
-
- template_properties = set(current_properties.keys())
- updated_properties = after.get('Properties', {})
- template_properties.update(set(updated_properties.keys()))
-
- # Create a set of keys which differ (or are missing/added)
- changed_properties_set = set(k for k in template_properties
- if current_properties.get(k) !=
- updated_properties.get(k))
-
- if not changed_properties_set.issubset(update_allowed_set):
- raise UpdateReplace(self.name)
-
- return dict((k, updated_properties.get(k))
- for k in changed_properties_set)
-
- def __str__(self):
- return '%s "%s"' % (self.__class__.__name__, self.name)
-
- def _add_dependencies(self, deps, path, fragment):
- if isinstance(fragment, dict):
- for key, value in fragment.items():
- if key in ('DependsOn', 'Ref', 'Fn::GetAtt', 'get_attr',
- 'get_resource'):
- if key in ('Fn::GetAtt', 'get_attr'):
- value, att = value
-
- try:
- target = self.stack.resources[value]
- except KeyError:
- raise exception.InvalidTemplateReference(
- resource=value,
- key=path)
- if key == 'DependsOn' or target.strict_dependency:
- deps += (self, target)
- else:
- self._add_dependencies(deps, '%s.%s' % (path, key), value)
- elif isinstance(fragment, list):
- for index, item in enumerate(fragment):
- self._add_dependencies(deps, '%s[%d]' % (path, index), item)
-
- def add_dependencies(self, deps):
- self._add_dependencies(deps, self.name, self.t)
- deps += (self, None)
-
- def required_by(self):
- '''
- Returns a list of names of resources which directly require this
- resource as a dependency.
- '''
- return list(
- [r.name for r in self.stack.dependencies.required_by(self)])
-
- def keystone(self):
- return self.stack.clients.keystone()
-
- def nova(self, service_type='compute'):
- return self.stack.clients.nova(service_type)
-
- def swift(self):
- return self.stack.clients.swift()
-
- def neutron(self):
- return self.stack.clients.neutron()
-
- def cinder(self):
- return self.stack.clients.cinder()
-
- def ceilometer(self):
- return self.stack.clients.ceilometer()
-
- def _do_action(self, action, pre_func=None):
- '''
- Perform a transition to a new state via a specified action.
- action should be e.g. self.CREATE or self.UPDATE; we set the
- status based on this, and the transition is handled by calling
- the corresponding handle_* and check_*_complete functions.
- Note that pre_func is an optional function reference which will
- be called before the handle_<action> function.
-
- If the resource does not declare a check_$action_complete function,
- we declare COMPLETE status as soon as the handle_$action call has
- finished. If no handle_$action function is declared, then we do
- nothing, which is useful e.g. if the resource requires no action
- for a given state transition.
- '''
- assert action in self.ACTIONS, 'Invalid action %s' % action
-
- try:
- self.state_set(action, self.IN_PROGRESS)
-
- action_l = action.lower()
- handle = getattr(self, 'handle_%s' % action_l, None)
- check = getattr(self, 'check_%s_complete' % action_l, None)
-
- if callable(pre_func):
- pre_func()
-
- handle_data = None
- if callable(handle):
- handle_data = handle()
- yield
- if callable(check):
- while not check(handle_data):
- yield
- except Exception as ex:
- logger.exception('%s : %s' % (action, str(self)))
- failure = exception.ResourceFailure(ex, self, action)
- self.state_set(action, self.FAILED, str(failure))
- raise failure
- except:
- with excutils.save_and_reraise_exception():
- try:
- self.state_set(action, self.FAILED,
- '%s aborted' % action)
- except Exception:
- logger.exception('Error marking resource as failed')
- else:
- self.state_set(action, self.COMPLETE)
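The handle/check pattern described in the docstring can be sketched as a standalone generator task; the handler names and return values here are hypothetical stand-ins for a resource plugin's methods, and the state bookkeeping and error handling of `_do_action` are omitted:

```python
def do_action_task(handlers, action):
    """Sketch of the pattern above: call handle_<action>, then yield
    until check_<action>_complete reports completion."""
    handle = handlers.get('handle_%s' % action)
    check = handlers.get('check_%s_complete' % action)
    handle_data = None
    if callable(handle):
        handle_data = handle()
        yield
    if callable(check):
        while not check(handle_data):
            yield


# Hypothetical handlers standing in for a resource plugin's methods
polls = []


def handle_create():
    return {'server': 'id-1'}        # opaque token passed to the checker


def check_create_complete(data):
    polls.append(data)
    return len(polls) >= 3           # report "ready" on the third poll


task = do_action_task({'handle_create': handle_create,
                       'check_create_complete': check_create_complete},
                      'create')
steps = sum(1 for _ in task)   # the scheduler would drive these yields
assert steps == 3              # one yield after handle, two while polling
assert polls == [{'server': 'id-1'}] * 3
```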
-
- def create(self):
- '''
- Create the resource. Subclasses should provide a handle_create() method
- to customise creation.
- '''
- action = self.CREATE
- if (self.action, self.status) != (self.INIT, self.COMPLETE):
- exc = exception.Error('State %s invalid for create'
- % str(self.state))
- raise exception.ResourceFailure(exc, self, action)
-
- logger.info('creating %s' % str(self))
-
- # Re-resolve the template, since if the resource Ref's
- # the AWS::StackId pseudo parameter, it will change after
- # the parser.Stack is stored (which is after the resources
- # are __init__'d, but before they are create()'d)
- self.t = self.stack.resolve_static_data(self.json_snippet)
- self.properties = Properties(self.properties_schema,
- self.t.get('Properties', {}),
- self._resolve_runtime_data,
- self.name)
- return self._do_action(action, self.properties.validate)
-
- def update(self, after, before=None):
- '''
- Update the resource. Subclasses should provide a handle_update()
- method to customise the update; the base-class handle_update fails
- by default.
- '''
- action = self.UPDATE
-
- if before is None:
- before = self.parsed_template()
-
- if (self.action, self.status) in ((self.CREATE, self.IN_PROGRESS),
- (self.UPDATE, self.IN_PROGRESS)):
- exc = Exception('Resource update already requested')
- raise exception.ResourceFailure(exc, self, action)
-
- logger.info('updating %s' % str(self))
-
- try:
- self.state_set(action, self.IN_PROGRESS)
- properties = Properties(self.properties_schema,
- after.get('Properties', {}),
- self._resolve_runtime_data,
- self.name)
- properties.validate()
- tmpl_diff = self.update_template_diff(after, before)
- prop_diff = self.update_template_diff_properties(after, before)
- if callable(getattr(self, 'handle_update', None)):
- handle_data = self.handle_update(after, tmpl_diff, prop_diff)
- yield
- if callable(getattr(self, 'check_update_complete', None)):
- while not self.check_update_complete(handle_data):
- yield
- except UpdateReplace:
- logger.debug("Resource %s update requires replacement" % self.name)
- raise
- except Exception as ex:
- logger.exception('update %s : %s' % (str(self), str(ex)))
- failure = exception.ResourceFailure(ex, self, action)
- self.state_set(action, self.FAILED, str(failure))
- raise failure
- else:
- self.t = self.stack.resolve_static_data(after)
- self.state_set(action, self.COMPLETE)
-
- def suspend(self):
- '''
- Suspend the resource. Subclasses should provide a handle_suspend()
- method to implement suspend
- '''
- action = self.SUSPEND
-
- # Don't try to suspend the resource unless it's in a stable state
- if (self.action == self.DELETE or self.status != self.COMPLETE):
- exc = exception.Error('State %s invalid for suspend'
- % str(self.state))
- raise exception.ResourceFailure(exc, self, action)
-
- logger.info('suspending %s' % str(self))
- return self._do_action(action)
-
- def resume(self):
- '''
- Resume the resource. Subclasses should provide a handle_resume()
- method to implement resume
- '''
- action = self.RESUME
-
- # Can't resume a resource unless it's SUSPEND_COMPLETE
- if self.state != (self.SUSPEND, self.COMPLETE):
- exc = exception.Error('State %s invalid for resume'
- % str(self.state))
- raise exception.ResourceFailure(exc, self, action)
-
- logger.info('resuming %s' % str(self))
- return self._do_action(action)
-
- def physical_resource_name(self):
- if self.id is None:
- return None
-
- return '%s-%s-%s' % (self.stack.name,
- self.name,
- short_id.get_id(self.id))
-
- def validate(self):
- logger.info('Validating %s' % str(self))
-
- self.validate_deletion_policy(self.t)
- return self.properties.validate()
-
- @classmethod
- def validate_deletion_policy(cls, template):
- deletion_policy = template.get('DeletionPolicy', 'Delete')
- if deletion_policy not in ('Delete', 'Retain', 'Snapshot'):
- msg = 'Invalid DeletionPolicy %s' % deletion_policy
- raise exception.StackValidationFailed(message=msg)
- elif deletion_policy == 'Snapshot':
- if not callable(getattr(cls, 'handle_snapshot_delete', None)):
- msg = 'Snapshot DeletionPolicy not supported'
- raise exception.StackValidationFailed(message=msg)
-
- def delete(self):
- '''
- Delete the resource. Subclasses should provide a handle_delete() method
- to customise deletion.
- '''
- action = self.DELETE
-
- if (self.action, self.status) == (self.DELETE, self.COMPLETE):
- return
- # No need to delete if the resource has never been created
- if self.action == self.INIT:
- return
-
- initial_state = self.state
-
- logger.info('deleting %s' % str(self))
-
- try:
- self.state_set(action, self.IN_PROGRESS)
-
- deletion_policy = self.t.get('DeletionPolicy', 'Delete')
- handle_data = None
- if deletion_policy == 'Delete':
- if callable(getattr(self, 'handle_delete', None)):
- handle_data = self.handle_delete()
- yield
- elif deletion_policy == 'Snapshot':
- if callable(getattr(self, 'handle_snapshot_delete', None)):
- handle_data = self.handle_snapshot_delete(initial_state)
- yield
-
- if (deletion_policy != 'Retain' and
- callable(getattr(self, 'check_delete_complete', None))):
- while not self.check_delete_complete(handle_data):
- yield
-
- except Exception as ex:
- logger.exception('Delete %s', str(self))
- failure = exception.ResourceFailure(ex, self, self.action)
- self.state_set(action, self.FAILED, str(failure))
- raise failure
- except:
- with excutils.save_and_reraise_exception():
- try:
- self.state_set(action, self.FAILED,
- 'Deletion aborted')
- except Exception:
- logger.exception('Error marking resource deletion failed')
- else:
- self.state_set(action, self.COMPLETE)
-
- @scheduler.wrappertask
- def destroy(self):
- '''
- Delete the resource and remove it from the database.
- '''
- yield self.delete()
-
- if self.id is None:
- return
-
- try:
- db_api.resource_get(self.context, self.id).delete()
- except exception.NotFound:
- # Don't fail on delete if the db entry has
- # not been created yet.
- pass
-
- self.id = None
-
- def resource_id_set(self, inst):
- self.resource_id = inst
- if self.id is not None:
- try:
- rs = db_api.resource_get(self.context, self.id)
- rs.update_and_save({'nova_instance': self.resource_id})
- except Exception as ex:
- logger.warn('db error %s' % str(ex))
-
- def _store(self):
- '''Create the resource in the database.'''
- metadata = self.metadata
- try:
- rs = {'action': self.action,
- 'status': self.status,
- 'status_reason': self.status_reason,
- 'stack_id': self.stack.id,
- 'nova_instance': self.resource_id,
- 'name': self.name,
- 'rsrc_metadata': metadata,
- 'stack_name': self.stack.name}
-
- new_rs = db_api.resource_create(self.context, rs)
- self.id = new_rs.id
-
- self.stack.updated_time = datetime.utcnow()
-
- except Exception as ex:
- logger.error('DB error %s' % str(ex))
-
- def _add_event(self, action, status, reason):
- '''Add a state change event to the database.'''
- ev = event.Event(self.context, self.stack, action, status, reason,
- self.resource_id, self.properties,
- self.name, self.type())
-
- try:
- ev.store()
- except Exception as ex:
- logger.error('DB error %s' % str(ex))
-
- def _store_or_update(self, action, status, reason):
- self.action = action
- self.status = status
- self.status_reason = reason
-
- if self.id is not None:
- try:
- rs = db_api.resource_get(self.context, self.id)
- rs.update_and_save({'action': self.action,
- 'status': self.status,
- 'status_reason': reason,
- 'stack_id': self.stack.id,
- 'nova_instance': self.resource_id})
-
- self.stack.updated_time = datetime.utcnow()
- except Exception as ex:
- logger.error('DB error %s' % str(ex))
-
- # store the resource in the DB on transition to CREATE_IN_PROGRESS;
- # all other transitions (other than to DELETE_COMPLETE)
- # are handled by the update_and_save above.
- elif (action, status) == (self.CREATE, self.IN_PROGRESS):
- self._store()
-
- def _resolve_attribute(self, name):
- """
- Default implementation; should be overridden by resources that expose
- attributes
-
- :param name: The attribute to resolve
- :returns: the resource attribute named key
- """
- # By default, no attributes resolve
- pass
-
- def state_reset(self):
- """
- Reset state to (INIT, COMPLETE)
- """
- self.action = self.INIT
- self.status = self.COMPLETE
-
- def state_set(self, action, status, reason="state changed"):
- if action not in self.ACTIONS:
- raise ValueError("Invalid action %s" % action)
-
- if status not in self.STATUSES:
- raise ValueError("Invalid status %s" % status)
-
- old_state = (self.action, self.status)
- new_state = (action, status)
- self._store_or_update(action, status, reason)
-
- if new_state != old_state:
- self._add_event(action, status, reason)
-
- @property
- def state(self):
- '''Returns state, tuple of action, status.'''
- return (self.action, self.status)
-
- def FnGetRefId(self):
- '''
- http://docs.amazonwebservices.com/AWSCloudFormation/latest/UserGuide/\
- intrinsic-function-reference-ref.html
- '''
- if self.resource_id is not None:
- return unicode(self.resource_id)
- else:
- return unicode(self.name)
-
- def FnGetAtt(self, key):
- '''
- http://docs.amazonwebservices.com/AWSCloudFormation/latest/UserGuide/\
- intrinsic-function-reference-getatt.html
- '''
- try:
- return self.attributes[key]
- except KeyError:
- raise exception.InvalidTemplateAttribute(resource=self.name,
- key=key)
-
- def FnBase64(self, data):
- '''
- http://docs.amazonwebservices.com/AWSCloudFormation/latest/UserGuide/\
- intrinsic-function-reference-base64.html
- '''
- return base64.b64encode(data)
-
- def signal(self, details=None):
- '''
- Signal the resource. Subclasses should provide a handle_signal() method
- to implement the signal; the base class raises an exception if no
- handler is implemented.
- '''
- try:
- if self.action in (self.SUSPEND, self.DELETE):
- msg = 'Cannot signal resource during %s' % self.action
- raise Exception(msg)
-
- if not callable(getattr(self, 'handle_signal', None)):
- msg = 'Resource %s is not able to receive a signal' % str(self)
- raise Exception(msg)
-
- self._add_event('signal', self.status, details)
- self.handle_signal(details)
- except Exception as ex:
- logger.exception('signal %s : %s' % (str(self), str(ex)))
- failure = exception.ResourceFailure(ex, self)
- raise failure
-
- def handle_update(self, json_snippet=None, tmpl_diff=None, prop_diff=None):
- raise UpdateReplace(self.name)
-
- def metadata_update(self, new_metadata=None):
- '''
- No-op for resources which don't explicitly override this method
- '''
- if new_metadata:
- logger.warning("Resource %s does not implement metadata update" %
- self.name)
-
- @classmethod
- def resource_to_template(cls, resource_type):
- '''
- :param resource_type: The resource type to be displayed in the template
- :returns: A template where the resource's properties_schema is mapped
- as parameters, and the resource's attributes_schema is mapped as
- outputs
- '''
- (parameters, properties) = (Properties.
- schema_to_parameters_and_properties(
- cls.properties_schema))
-
- resource_name = cls.__name__
- return {
- 'Parameters': parameters,
- 'Resources': {
- resource_name: {
- 'Type': resource_type,
- 'Properties': properties
- }
- },
- 'Outputs': Attributes.as_outputs(resource_name, cls)
- }
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import os
-import os.path
-
-from heat.common import environment_format
-from heat.openstack.common import log
-from heat.openstack.common.gettextutils import _
-from heat.engine import environment
-
-
-LOG = log.getLogger(__name__)
-
-
-def _register_resources(type_pairs):
-
- for res_name, res_class in type_pairs:
- _environment.register_class(res_name, res_class)
-
-
-def _get_module_resources(module):
- if callable(getattr(module, 'resource_mapping', None)):
- try:
- return module.resource_mapping().iteritems()
- except Exception:
- LOG.error(_('Failed to load resources from %s') % str(module))
- return []
- else:
- return []
-
-
-def _register_modules(modules):
- import itertools
-
- resource_lists = (_get_module_resources(m) for m in modules)
- _register_resources(itertools.chain.from_iterable(resource_lists))
-
-
-_environment = None
-
-
-def global_env():
- global _environment
- if _environment is None:
- initialise()
- return _environment
-
-
-def _list_environment_files(env_dir):
- try:
- return os.listdir(env_dir)
- except OSError as osex:
- LOG.error('Failed to read %s' % (env_dir))
- LOG.exception(osex)
- return []
-
-
-def _load_global_environment(env_dir):
- for env_name in _list_environment_files(env_dir):
- try:
- file_path = os.path.join(env_dir, env_name)
- with open(file_path) as env_fd:
- LOG.info('Loading %s' % file_path)
- env_body = environment_format.parse(env_fd.read())
- environment_format.default_for_missing(env_body)
- _environment.load(env_body)
- except ValueError as vex:
- LOG.error('Failed to parse %s/%s' % (env_dir, env_name))
- LOG.exception(vex)
- except IOError as ioex:
- LOG.error('Failed to read %s/%s' % (env_dir, env_name))
- LOG.exception(ioex)
-
-
-def initialise():
- global _environment
- if _environment is not None:
- return
- import sys
- from oslo.config import cfg
- from heat.common import plugin_loader
-
- _environment = environment.Environment({}, user_env=False)
- cfg.CONF.import_opt('environment_dir', 'heat.common.config')
- _load_global_environment(cfg.CONF.environment_dir)
- _register_modules(plugin_loader.load_modules(sys.modules[__name__]))
-
- cfg.CONF.import_opt('plugin_dirs', 'heat.common.config')
-
- plugin_pkg = plugin_loader.create_subpackage(cfg.CONF.plugin_dirs,
- 'heat.engine')
- _register_modules(plugin_loader.load_modules(plugin_pkg, True))
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import copy
-
-from heat.engine import resource
-from heat.engine import signal_responder
-
-from heat.common import exception
-from heat.openstack.common import log as logging
-from heat.openstack.common import timeutils
-from heat.engine.properties import Properties
-from heat.engine import properties
-from heat.engine import scheduler
-from heat.engine import stack_resource
-from heat.openstack.common.gettextutils import _
-
-logger = logging.getLogger(__name__)
-
-
-class CooldownMixin(object):
- '''
- Utility class to encapsulate Cooldown related logic which is shared
- between AutoScalingGroup and ScalingPolicy
- '''
- def _cooldown_inprogress(self):
- inprogress = False
- try:
- # Negative values don't make sense, so they are clamped to zero
- cooldown = max(0, int(self.properties['Cooldown']))
- except TypeError:
- # If not specified, it will be None, same as cooldown == 0
- cooldown = 0
-
- metadata = self.metadata
- if metadata and cooldown != 0:
- last_adjust = metadata.keys()[0]
- if not timeutils.is_older_than(last_adjust, cooldown):
- inprogress = True
- return inprogress
-
- def _cooldown_timestamp(self, reason):
- # Save resource metadata with a timestamp and reason
- # If we wanted to implement the AutoScaling API like AWS does,
- # we could maintain event history here, but since we only need
- # the latest event for cooldown, just store that for now
- metadata = {timeutils.strtime(): reason}
- self.metadata = metadata
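The mixin above gates scaling actions on a cooldown window derived from a single timestamped metadata entry. A minimal standalone sketch of the same check, using plain datetimes in place of Heat's timeutils and resource metadata (function and variable names here are illustrative, not part of Heat):

```python
from datetime import datetime, timedelta

def cooldown_in_progress(metadata, cooldown_seconds, now=None):
    """Return True if the last adjustment is still inside the cooldown window.

    metadata maps an ISO-format timestamp (the last adjustment time) to a
    reason string, mirroring the single-entry dict the mixin stores.
    """
    now = now or datetime.utcnow()
    # Negative or missing cooldowns are clamped to zero, as the mixin does.
    cooldown = max(0, int(cooldown_seconds or 0))
    if not metadata or cooldown == 0:
        return False
    last_adjust = datetime.fromisoformat(next(iter(metadata)))
    return now - last_adjust < timedelta(seconds=cooldown)

now = datetime(2024, 1, 1, 12, 0, 0)
meta = {datetime(2024, 1, 1, 11, 59, 30).isoformat(): 'ChangeInCapacity : 1'}
print(cooldown_in_progress(meta, 60, now=now))  # 30s since last adjust: True
print(cooldown_in_progress(meta, 10, now=now))  # window already expired: False
print(cooldown_in_progress({}, 60, now=now))    # no adjustment history: False
```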
-
-
-class InstanceGroup(stack_resource.StackResource):
- tags_schema = {'Key': {'Type': 'String',
- 'Required': True},
- 'Value': {'Type': 'String',
- 'Required': True}}
- properties_schema = {
- 'AvailabilityZones': {
- 'Required': True,
- 'Type': 'List',
- 'Description': _('Not Implemented.')},
- 'LaunchConfigurationName': {
- 'Required': True,
- 'Type': 'String',
- 'Description': _('Name of LaunchConfiguration resource.')},
- 'Size': {
- 'Required': True,
- 'Type': 'Number',
- 'Description': _('Desired number of instances.')},
- 'LoadBalancerNames': {
- 'Type': 'List',
- 'Description': _('List of LoadBalancer resources.')},
- 'Tags': {
- 'Type': 'List',
- 'Schema': {'Type': 'Map', 'Schema': tags_schema},
- 'Description': _('Tags to attach to this group.')}
- }
- update_allowed_keys = ('Properties', 'UpdatePolicy',)
- update_allowed_properties = ('Size', 'LaunchConfigurationName',)
- attributes_schema = {
- "InstanceList": _("A comma-delimited list of server ip addresses. "
- "(Heat extension).")
- }
- rolling_update_schema = {
- 'MinInstancesInService': properties.Schema(properties.NUMBER,
- default=0),
- 'MaxBatchSize': properties.Schema(properties.NUMBER,
- default=1),
- 'PauseTime': properties.Schema(properties.STRING,
- default='PT0S')
- }
- update_policy_schema = {
- 'RollingUpdate': properties.Schema(properties.MAP,
- schema=rolling_update_schema)
- }
-
- def __init__(self, name, json_snippet, stack):
- """
- UpdatePolicy is currently only specific to InstanceGroup and
- AutoScalingGroup. Therefore, init is overridden to parse for the
- UpdatePolicy.
- """
- super(InstanceGroup, self).__init__(name, json_snippet, stack)
- self.update_policy = Properties(self.update_policy_schema,
- self.t.get('UpdatePolicy', {}),
- parent_name=self.name)
-
- def validate(self):
- """
- Add validation for update_policy
- """
- super(InstanceGroup, self).validate()
- if self.update_policy:
- self.update_policy.validate()
-
- def get_instance_names(self):
- """Get a list of resource names of the instances in this InstanceGroup.
-
- Failed resources will be ignored.
- """
- return sorted(x.name for x in self.get_instances())
-
- def get_instances(self):
- """Get a set of all the instance resources managed by this group."""
- return [resource for resource in self.nested()
- if resource.state[1] != resource.FAILED]
-
- def handle_create(self):
- """Create a nested stack and add the initial resources to it."""
- num_instances = int(self.properties['Size'])
- initial_template = self._create_template(num_instances)
- return self.create_with_template(initial_template, {})
-
- def check_create_complete(self, task):
- """
- When stack creation is done, update the load balancer.
-
- If any instances failed to be created, delete them.
- """
- done = super(InstanceGroup, self).check_create_complete(task)
- if done:
- self._lb_reload()
- return done
-
- def handle_update(self, json_snippet, tmpl_diff, prop_diff):
- """
- If Properties has changed, update self.properties, so we
- get the new values during any subsequent adjustment.
- """
- if tmpl_diff:
- # parse update policy
- if 'UpdatePolicy' in tmpl_diff:
- self.update_policy = Properties(
- self.update_policy_schema,
- json_snippet.get('UpdatePolicy', {}),
- parent_name=self.name)
-
- if prop_diff:
- self.properties = Properties(self.properties_schema,
- json_snippet.get('Properties', {}),
- self.stack.resolve_runtime_data,
- self.name)
-
- # Get the current capacity, we may need to adjust if
- # Size has changed
- if 'Size' in prop_diff:
- inst_list = self.get_instances()
- if len(inst_list) != int(self.properties['Size']):
- self.resize(int(self.properties['Size']))
-
- def _tags(self):
- """
- Make sure that we add a tag that Ceilometer can pick up.
- These need to be prepended with 'metering.'.
- """
- tags = self.properties.get('Tags') or []
- for t in tags:
- if t['Key'].startswith('metering.'):
- # the user has added one, don't add another.
- return tags
- return tags + [{'Key': 'metering.groupname',
- 'Value': self.FnGetRefId()}]
-
- def handle_delete(self):
- return self.delete_nested()
-
- def _create_template(self, num_instances):
- """
- Create a template with a number of instance definitions based on the
- launch configuration.
- """
- conf_name = self.properties['LaunchConfigurationName']
- conf = self.stack.resource_by_refid(conf_name)
- instance_definition = copy.deepcopy(conf.t)
- instance_definition['Type'] = 'AWS::EC2::Instance'
- instance_definition['Properties']['Tags'] = self._tags()
- if self.properties.get('VPCZoneIdentifier'):
- instance_definition['Properties']['SubnetId'] = \
- self.properties['VPCZoneIdentifier'][0]
- # resolve references within the context of this stack.
- fully_parsed = self.stack.resolve_runtime_data(instance_definition)
-
- resources = {}
- for i in range(num_instances):
- resources["%s-%d" % (self.name, i)] = fully_parsed
- return {"Resources": resources}
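_create_template above stamps out one copy of the launch configuration per instance, keyed `<group>-<index>`, into a nested-stack template. A simplified standalone sketch of that fan-out, with a dict literal standing in for the parsed launch configuration (names here are illustrative):

```python
import copy

def create_group_template(group_name, num_instances, instance_definition):
    """Build a nested-stack template containing num_instances copies of the
    instance definition, named '<group_name>-<index>' like the group does."""
    resources = {}
    for i in range(num_instances):
        resources["%s-%d" % (group_name, i)] = copy.deepcopy(instance_definition)
    return {"Resources": resources}

definition = {"Type": "AWS::EC2::Instance",
              "Properties": {"ImageId": "my-image", "InstanceType": "m1.small"}}
tmpl = create_group_template("WebGroup", 3, definition)
print(sorted(tmpl["Resources"]))  # ['WebGroup-0', 'WebGroup-1', 'WebGroup-2']
```

(The real method reuses one fully-resolved snippet for every entry; the deepcopy here just keeps the sketch side-effect free.)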
-
- def resize(self, new_capacity):
- """
- Resize the instance group to the new capacity.
-
- When shrinking, the newest instances will be removed.
- """
- new_template = self._create_template(new_capacity)
- try:
- updater = self.update_with_template(new_template, {})
- updater.run_to_completion()
- self.check_update_complete(updater)
- finally:
- # Reload the LB in any case, so it's only pointing at healthy
- # nodes.
- self._lb_reload()
-
- def _lb_reload(self):
- '''
- Notify the LoadBalancer to reload its config to include
- the changes in instances we have just made.
-
- This must be done after activation (instance in ACTIVE state),
- otherwise the instances' IP addresses may not be available.
- '''
- if self.properties['LoadBalancerNames']:
- id_list = [inst.FnGetRefId() for inst in self.get_instances()]
- for lb in self.properties['LoadBalancerNames']:
- lb_resource = self.stack[lb]
- if 'Instances' in lb_resource.properties_schema:
- lb_resource.json_snippet['Properties']['Instances'] = (
- id_list)
- elif 'members' in lb_resource.properties_schema:
- lb_resource.json_snippet['Properties']['members'] = (
- id_list)
- else:
- raise exception.Error(
- "Unsupported resource '%s' in LoadBalancerNames" %
- (lb,))
- resolved_snippet = self.stack.resolve_static_data(
- lb_resource.json_snippet)
- scheduler.TaskRunner(lb_resource.update, resolved_snippet)()
-
- def FnGetRefId(self):
- return self.physical_resource_name()
-
- def _resolve_attribute(self, name):
- '''
- heat extension: "InstanceList" returns comma delimited list of server
- ip addresses.
- '''
- if name == 'InstanceList':
- ips = [inst.FnGetAtt('PublicIp')
- for inst in self.nested().resources.values()]
- if ips:
- return unicode(','.join(ips))
-
-
-class AutoScalingGroup(InstanceGroup, CooldownMixin):
- tags_schema = {'Key': {'Type': 'String',
- 'Required': True},
- 'Value': {'Type': 'String',
- 'Required': True}}
- properties_schema = {
- 'AvailabilityZones': {
- 'Required': True,
- 'Type': 'List',
- 'Description': _('Not Implemented.')},
- 'LaunchConfigurationName': {
- 'Required': True,
- 'Type': 'String',
- 'Description': _('Name of LaunchConfiguration resource.')},
- 'MaxSize': {
- 'Required': True,
- 'Type': 'String',
- 'Description': _('Maximum number of instances in the group.')},
- 'MinSize': {
- 'Required': True,
- 'Type': 'String',
- 'Description': _('Minimum number of instances in the group.')},
- 'Cooldown': {
- 'Type': 'String',
- 'Description': _('Cooldown period, in seconds.')},
- 'DesiredCapacity': {
- 'Type': 'Number',
- 'Description': _('Desired initial number of instances.')},
- 'HealthCheckGracePeriod': {
- 'Type': 'Integer',
- 'Implemented': False,
- 'Description': _('Not Implemented.')},
- 'HealthCheckType': {
- 'Type': 'String',
- 'AllowedValues': ['EC2', 'ELB'],
- 'Implemented': False,
- 'Description': _('Not Implemented.')},
- 'LoadBalancerNames': {
- 'Type': 'List',
- 'Description': _('List of LoadBalancer resources.')},
- 'VPCZoneIdentifier': {
- 'Type': 'List',
- 'Description': _('List of VPC subnet identifiers.')},
- 'Tags': {
- 'Type': 'List',
- 'Schema': {'Type': 'Map', 'Schema': tags_schema},
- 'Description': _('Tags to attach to this group.')}
- }
- rolling_update_schema = {
- 'MinInstancesInService': properties.Schema(properties.NUMBER,
- default=0),
- 'MaxBatchSize': properties.Schema(properties.NUMBER,
- default=1),
- 'PauseTime': properties.Schema(properties.STRING,
- default='PT0S')
- }
- update_policy_schema = {
- 'AutoScalingRollingUpdate': properties.Schema(
- properties.MAP, schema=rolling_update_schema)
- }
-
- # template keys and properties supported for handle_update,
- # note trailing comma is required for a single item to get a tuple
- update_allowed_keys = ('Properties', 'UpdatePolicy',)
- update_allowed_properties = ('LaunchConfigurationName',
- 'MaxSize', 'MinSize',
- 'Cooldown', 'DesiredCapacity',)
-
- def handle_create(self):
- if self.properties['DesiredCapacity']:
- num_to_create = int(self.properties['DesiredCapacity'])
- else:
- num_to_create = int(self.properties['MinSize'])
- initial_template = self._create_template(num_to_create)
- return self.create_with_template(initial_template, {})
-
- def check_create_complete(self, task):
- """Invoke the cooldown after creation succeeds."""
- done = super(AutoScalingGroup, self).check_create_complete(task)
- if done:
- self._cooldown_timestamp(
- "%s : %s" % ('ExactCapacity', len(self.get_instances())))
- return done
-
- def handle_update(self, json_snippet, tmpl_diff, prop_diff):
- """
- If Properties has changed, update self.properties, so we get the new
- values during any subsequent adjustment.
- """
- if tmpl_diff:
- # parse update policy
- if 'UpdatePolicy' in tmpl_diff:
- self.update_policy = Properties(
- self.update_policy_schema,
- json_snippet.get('UpdatePolicy', {}),
- parent_name=self.name)
-
- if prop_diff:
- self.properties = Properties(self.properties_schema,
- json_snippet.get('Properties', {}),
- self.stack.resolve_runtime_data,
- self.name)
-
- # Get the current capacity, we may need to adjust if
- # MinSize or MaxSize has changed
- capacity = len(self.get_instances())
-
- # Figure out if an adjustment is required
- new_capacity = None
- if 'MinSize' in prop_diff:
- if capacity < int(self.properties['MinSize']):
- new_capacity = int(self.properties['MinSize'])
- if 'MaxSize' in prop_diff:
- if capacity > int(self.properties['MaxSize']):
- new_capacity = int(self.properties['MaxSize'])
- if 'DesiredCapacity' in prop_diff:
- if self.properties['DesiredCapacity']:
- new_capacity = int(self.properties['DesiredCapacity'])
-
- if new_capacity is not None:
- self.adjust(new_capacity, adjustment_type='ExactCapacity')
-
- def adjust(self, adjustment, adjustment_type='ChangeInCapacity'):
- """
- Adjust the size of the scaling group if the cooldown permits.
- """
- if self._cooldown_inprogress():
- logger.info("%s NOT performing scaling adjustment, cooldown %s" %
- (self.name, self.properties['Cooldown']))
- return
-
- capacity = len(self.get_instances())
- if adjustment_type == 'ChangeInCapacity':
- new_capacity = capacity + adjustment
- elif adjustment_type == 'ExactCapacity':
- new_capacity = adjustment
- else:
- # PercentChangeInCapacity
- new_capacity = capacity + (capacity * adjustment / 100)
-
- if new_capacity > int(self.properties['MaxSize']):
- logger.warn('can not exceed %s' % self.properties['MaxSize'])
- return
- if new_capacity < int(self.properties['MinSize']):
- logger.warn('can not be less than %s' % self.properties['MinSize'])
- return
-
- if new_capacity == capacity:
- logger.debug('no change in capacity %d' % capacity)
- return
-
- result = self.resize(new_capacity)
-
- self._cooldown_timestamp("%s : %s" % (adjustment_type, adjustment))
-
- return result
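adjust() above supports three adjustment types and refuses to act when the result would leave the [MinSize, MaxSize] bounds or change nothing. The capacity arithmetic can be sketched on its own; floor division mirrors the Python 2-era integer division in the percentage branch (function name is illustrative):

```python
def new_capacity(current, adjustment, adjustment_type, min_size, max_size):
    """Compute the target capacity for a scaling adjustment, or None when
    the result is out of bounds or unchanged (no resize would be performed)."""
    if adjustment_type == 'ChangeInCapacity':
        target = current + adjustment
    elif adjustment_type == 'ExactCapacity':
        target = adjustment
    else:  # 'PercentChangeInCapacity'
        target = current + current * adjustment // 100
    if target > max_size or target < min_size or target == current:
        return None
    return target

print(new_capacity(2, 1, 'ChangeInCapacity', 1, 5))          # 3
print(new_capacity(2, 5, 'ExactCapacity', 1, 5))             # 5
print(new_capacity(4, 25, 'PercentChangeInCapacity', 1, 5))  # 5
print(new_capacity(5, 1, 'ChangeInCapacity', 1, 5))          # None (over MaxSize)
```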
-
- def _tags(self):
- """Add identifying tags to all servers in the group.
-
- This is so the Dimensions received from cfn-push-stats all include
- the groupname and stack id.
- Note: the group name must match what is returned from FnGetRefId
- """
- autoscaling_tag = [{'Key': 'AutoScalingGroupName',
- 'Value': self.FnGetRefId()}]
- return super(AutoScalingGroup, self)._tags() + autoscaling_tag
-
- def validate(self):
- res = super(AutoScalingGroup, self).validate()
- if res:
- return res
-
- # TODO(pasquier-s): once Neutron is able to assign subnets to
- # availability zones, it will be possible to specify multiple subnets.
- # For now, only one subnet can be specified. The bug #1096017 tracks
- # this issue.
- if self.properties.get('VPCZoneIdentifier') and \
- len(self.properties['VPCZoneIdentifier']) != 1:
- raise exception.NotSupported(feature=_("Anything other than one "
- "VPCZoneIdentifier"))
-
-
-class LaunchConfiguration(resource.Resource):
- tags_schema = {'Key': {'Type': 'String',
- 'Required': True},
- 'Value': {'Type': 'String',
- 'Required': True}}
- properties_schema = {
- 'ImageId': {
- 'Type': 'String',
- 'Required': True,
- 'Description': _('Glance image ID or name.')},
- 'InstanceType': {
- 'Type': 'String',
- 'Required': True,
- 'Description': _('Nova instance type (flavor).')},
- 'KeyName': {
- 'Type': 'String',
- 'Description': _('Optional Nova keypair name.')},
- 'UserData': {
- 'Type': 'String',
- 'Description': _('User data to pass to instance.')},
- 'SecurityGroups': {
- 'Type': 'List',
- 'Description': _('Security group names to assign.')},
- 'KernelId': {
- 'Type': 'String',
- 'Implemented': False,
- 'Description': _('Not Implemented.')},
- 'RamDiskId': {
- 'Type': 'String',
- 'Implemented': False,
- 'Description': _('Not Implemented.')},
- 'BlockDeviceMappings': {
- 'Type': 'String',
- 'Implemented': False,
- 'Description': _('Not Implemented.')},
- 'NovaSchedulerHints': {
- 'Type': 'List',
- 'Schema': {'Type': 'Map', 'Schema': tags_schema},
- 'Description': _('Scheduler hints to pass '
- 'to Nova (Heat extension).')},
- }
-
- def FnGetRefId(self):
- return unicode(self.physical_resource_name())
-
-
-class ScalingPolicy(signal_responder.SignalResponder, CooldownMixin):
- properties_schema = {
- 'AutoScalingGroupName': {
- 'Type': 'String',
- 'Required': True,
- 'Description': _('AutoScaling group name to apply policy to.')},
- 'ScalingAdjustment': {
- 'Type': 'Number',
- 'Required': True,
- 'Description': _('Size of adjustment.')},
- 'AdjustmentType': {
- 'Type': 'String',
- 'AllowedValues': ['ChangeInCapacity',
- 'ExactCapacity',
- 'PercentChangeInCapacity'],
- 'Required': True,
- 'Description': _('Type of adjustment (absolute or percentage).')},
- 'Cooldown': {
- 'Type': 'Number',
- 'Description': _('Cooldown period, in seconds.')},
- }
-
- update_allowed_keys = ('Properties',)
- update_allowed_properties = ('ScalingAdjustment', 'AdjustmentType',
- 'Cooldown',)
- attributes_schema = {
- "AlarmUrl": _("A signed url to handle the alarm. "
- "(Heat extension).")
- }
-
- def handle_update(self, json_snippet, tmpl_diff, prop_diff):
- """
- If Properties has changed, update self.properties, so we get the new
- values during any subsequent adjustment.
- """
- if prop_diff:
- self.properties = Properties(self.properties_schema,
- json_snippet.get('Properties', {}),
- self.stack.resolve_runtime_data,
- self.name)
-
- def handle_signal(self, details=None):
- # ceilometer sends details like this:
- # {u'alarm_id': ID, u'previous': u'ok', u'current': u'alarm',
- # u'reason': u'...'})
- # in this policy we currently assume that this gets called
- # only when there is an alarm. But the template writer can
- # put the policy in all the alarm notifiers (nodata and ok).
- #
- # our watchrule has upper case states so lower() them all.
- if details is None:
- alarm_state = 'alarm'
- else:
- alarm_state = details.get('current',
- details.get('state', 'alarm')).lower()
-
- logger.info('%s Alarm, new state %s' % (self.name, alarm_state))
-
- if alarm_state != 'alarm':
- return
- if self._cooldown_inprogress():
- logger.info("%s NOT performing scaling action, cooldown %s" %
- (self.name, self.properties['Cooldown']))
- return
-
- asgn_id = self.properties['AutoScalingGroupName']
- group = self.stack.resource_by_refid(asgn_id)
-
- logger.info('%s Alarm, adjusting Group %s by %s' %
- (self.name, group.name,
- self.properties['ScalingAdjustment']))
- group.adjust(int(self.properties['ScalingAdjustment']),
- self.properties['AdjustmentType'])
-
- self._cooldown_timestamp("%s : %s" %
- (self.properties['AdjustmentType'],
- self.properties['ScalingAdjustment']))
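handle_signal above first normalises the alarm payload: Ceilometer sends the state under 'current', older watch rules under 'state', and a missing payload defaults to 'alarm'; the watch rule's upper-case states are lower-cased. That normalisation in isolation (function name is illustrative):

```python
def alarm_state_from_details(details):
    """Extract the lower-cased alarm state from a signal payload,
    defaulting to 'alarm' when no payload or state key is present."""
    if details is None:
        return 'alarm'
    return details.get('current', details.get('state', 'alarm')).lower()

print(alarm_state_from_details(None))               # 'alarm'
print(alarm_state_from_details({'current': 'OK'}))  # 'ok'
print(alarm_state_from_details({'state': 'ALARM'})) # 'alarm'
```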
-
- def _resolve_attribute(self, name):
- '''
- heat extension: "AlarmUrl" returns the url to post to the policy
- when there is an alarm.
- '''
- if name == 'AlarmUrl' and self.resource_id is not None:
- return unicode(self._get_signed_url())
-
- def FnGetRefId(self):
- if self.resource_id is not None:
- return unicode(self._get_signed_url())
- else:
- return unicode(self.name)
-
-
-def resource_mapping():
- return {
- 'AWS::AutoScaling::LaunchConfiguration': LaunchConfiguration,
- 'AWS::AutoScaling::AutoScalingGroup': AutoScalingGroup,
- 'AWS::AutoScaling::ScalingPolicy': ScalingPolicy,
- 'OS::Heat::InstanceGroup': InstanceGroup,
- }
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from heat.common import exception
-from heat.engine import resource
-from heat.engine import watchrule
-from heat.openstack.common.gettextutils import _
-
-
-class CeilometerAlarm(resource.Resource):
-
- properties_schema = {
- 'comparison_operator': {
- 'Type': 'String',
- 'Required': True,
- 'AllowedValues': ['ge', 'gt', 'eq', 'ne', 'lt', 'le'],
- 'Description': _('Operator used to compare specified statistic '
- 'with threshold')
- },
- 'evaluation_periods': {
- 'Type': 'String',
- 'Required': True,
- 'Description': _('Number of periods to evaluate over')
- },
- 'counter_name': {
- 'Type': 'String',
- 'Required': True,
- 'Description': _('Counter name watched by the alarm')
- },
- 'period': {
- 'Type': 'String',
- 'Required': True,
- 'Description': _('Period (seconds) to evaluate over')
- },
- 'statistic': {
- 'Type': 'String',
- 'Required': True,
- 'AllowedValues': ['count', 'avg', 'sum', 'min', 'max'],
- 'Description': _('Counter statistic to evaluate')
- },
- 'threshold': {
- 'Type': 'String',
- 'Required': True,
- 'Description': _('Threshold to evaluate against')
- },
- 'alarm_actions': {
- 'Type': 'List',
- 'Description': _('A list of URLs (webhooks) to invoke when state '
- 'transitions to alarm')
- },
- 'ok_actions': {
- 'Type': 'List',
- 'Description': _('A list of URLs (webhooks) to invoke when state '
- 'transitions to ok')
- },
- 'insufficient_data_actions': {
- 'Type': 'List',
- 'Description': _('A list of URLs (webhooks) to invoke when state '
- 'transitions to insufficient-data')
- },
- 'description': {
- 'Type': 'String',
- 'Description': _('Description for the alarm')
- },
- 'enabled': {
- 'Type': 'Boolean',
- 'Default': 'true',
- 'Description': _('True if alarm evaluation/actioning is enabled')
- },
- 'repeat_actions': {
- 'Type': 'Boolean',
- 'Default': 'false',
- 'Description': _('True to trigger actions each time the threshold '
- 'is reached. By default, actions are called only '
- 'when the threshold is reached AND the alarm\'s '
- 'state has changed')
- },
- 'matching_metadata': {
- 'Type': 'Map',
- 'Description': _('Counter should match this resource metadata '
- '(key=value), in addition to the counter_name')
- }
- }
-
- update_allowed_keys = ('Properties',)
- # allow the properties that affect the watch calculation.
- # note: when using in-instance monitoring you can only change the
- # metric name if you re-configure the instance too.
- update_allowed_properties = ('comparison_operator', 'description',
- 'evaluation_periods', 'period', 'statistic',
- 'alarm_actions', 'ok_actions',
- 'insufficient_data_actions', 'threshold',
- 'enabled', 'repeat_actions')
-
- def _actions_to_urls(self, props):
- kwargs = {}
- for k, v in iter(props.items()):
- if k in ['alarm_actions', 'ok_actions',
- 'insufficient_data_actions'] and v is not None:
- kwargs[k] = []
- for act in v:
- # if the action is a resource name
- # we ask the destination resource for an alarm url.
- # the template writer should really do this in the
- # template if possible with:
- # {Fn::GetAtt: ['MyAction', 'AlarmUrl']}
- if act in self.stack:
- url = self.stack[act].FnGetAtt('AlarmUrl')
- kwargs[k].append(url)
- else:
- kwargs[k].append(act)
- else:
- kwargs[k] = v
- return kwargs
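_actions_to_urls above substitutes a resource's AlarmUrl attribute for any action entry that names a resource in the stack, and passes literal webhook URLs through untouched. A standalone sketch of the substitution, with a plain dict standing in for the stack lookup (names here are illustrative):

```python
ACTION_KEYS = ('alarm_actions', 'ok_actions', 'insufficient_data_actions')

def actions_to_urls(props, stack_alarm_urls):
    """Replace resource names in action lists with their signed alarm URLs.

    stack_alarm_urls maps resource names to their AlarmUrl attribute; any
    entry not found there is assumed to already be a webhook URL.
    """
    out = {}
    for key, value in props.items():
        if key in ACTION_KEYS and value is not None:
            out[key] = [stack_alarm_urls.get(act, act) for act in value]
        else:
            out[key] = value
    return out

urls = {'MyScaleUpPolicy': 'http://signed.example/scale-up'}
props = {'alarm_actions': ['MyScaleUpPolicy', 'http://hook.example/x'],
         'threshold': '50'}
print(actions_to_urls(props, urls))
```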
-
- def handle_create(self):
- props = self._actions_to_urls(self.parsed_template('Properties'))
- props['name'] = self.physical_resource_name()
-
- alarm = self.ceilometer().alarms.create(**props)
- self.resource_id_set(alarm.alarm_id)
-
- # the watchrule below is for backwards compatibility.
- # 1) so we don't create watch tasks unnecessarily
- # 2) to support CW stats post, we will redirect the request
- # to ceilometer.
- wr = watchrule.WatchRule(context=self.context,
- watch_name=self.physical_resource_name(),
- rule=self.parsed_template('Properties'),
- stack_id=self.stack.id)
- wr.state = wr.CEILOMETER_CONTROLLED
- wr.store()
-
- def handle_update(self, json_snippet, tmpl_diff, prop_diff):
- if prop_diff:
- kwargs = {'alarm_id': self.resource_id}
- kwargs.update(prop_diff)
- self.ceilometer().alarms.update(**self._actions_to_urls(kwargs))
-
- def handle_suspend(self):
- if self.resource_id is not None:
- self.ceilometer().alarms.update(alarm_id=self.resource_id,
- enabled=False)
-
- def handle_resume(self):
- if self.resource_id is not None:
- self.ceilometer().alarms.update(alarm_id=self.resource_id,
- enabled=True)
-
- def handle_delete(self):
- try:
- wr = watchrule.WatchRule.load(
- self.context, watch_name=self.physical_resource_name())
- wr.destroy()
- except exception.WatchRuleNotFound:
- pass
-
- if self.resource_id is not None:
- self.ceilometer().alarms.delete(self.resource_id)
-
-
-def resource_mapping():
- return {
- 'OS::Ceilometer::Alarm': CeilometerAlarm,
- }
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from heat.common import exception
-from heat.engine import watchrule
-from heat.engine import resource
-from heat.engine.properties import Properties
-
-from heat.openstack.common.gettextutils import _
-from heat.openstack.common import log as logging
-
-logger = logging.getLogger(__name__)
-
-
-class CloudWatchAlarm(resource.Resource):
- properties_schema = {
- 'ComparisonOperator': {
- 'Type': 'String',
- 'AllowedValues': ['GreaterThanOrEqualToThreshold',
- 'GreaterThanThreshold',
- 'LessThanThreshold',
- 'LessThanOrEqualToThreshold'],
- 'Description': _('Operator used to compare the specified '
- 'Statistic with Threshold.')
- },
- 'AlarmDescription': {
- 'Type': 'String',
- 'Description': _('Description for the alarm.')
- },
- 'EvaluationPeriods': {
- 'Type': 'String',
- 'Description': _('Number of periods to evaluate over.')
- },
- 'MetricName': {
- 'Type': 'String',
- 'Description': _('Metric name watched by the alarm.')
- },
- 'Namespace': {
- 'Type': 'String',
- 'Description': _('Namespace for the metric.')
- },
- 'Period': {
- 'Type': 'String',
- 'Description': _('Period (seconds) to evaluate over.')
- },
- 'Statistic': {
- 'Type': 'String',
- 'AllowedValues': ['SampleCount',
- 'Average',
- 'Sum',
- 'Minimum',
- 'Maximum'],
- 'Description': _('Metric statistic to evaluate.')
- },
- 'AlarmActions': {
- 'Type': 'List',
- 'Description': _('A list of actions to execute when state '
- 'transitions to alarm.')
- },
- 'OKActions': {
- 'Type': 'List',
- 'Description': _('A list of actions to execute when state '
- 'transitions to ok.')
- },
- 'Dimensions': {
- 'Type': 'List',
- 'Description': _('A list of dimensions (arbitrary name/value '
- 'pairs) associated with the metric.')
- },
- 'InsufficientDataActions': {
- 'Type': 'List',
- 'Description': _('A list of actions to execute when state '
- 'transitions to insufficient-data.')
- },
- 'Threshold': {
- 'Type': 'String',
- 'Description': _('Threshold to evaluate against.')
- },
- 'Units': {
- 'Type': 'String',
- 'AllowedValues': ['Seconds',
- 'Microseconds',
- 'Milliseconds',
- 'Bytes',
- 'Kilobytes',
- 'Megabytes',
- 'Gigabytes',
- 'Terabytes',
- 'Bits',
- 'Kilobits',
- 'Megabits',
- 'Gigabits',
- 'Terabits',
- 'Percent',
- 'Count',
- 'Bytes/Second',
- 'Kilobytes/Second',
- 'Megabytes/Second',
- 'Gigabytes/Second',
- 'Terabytes/Second',
- 'Bits/Second',
- 'Kilobits/Second',
- 'Megabits/Second',
- 'Gigabits/Second',
- 'Terabits/Second',
- 'Count/Second',
- None],
- 'Description': _('Unit for the metric.')
- }
- }
-
- strict_dependency = False
- update_allowed_keys = ('Properties',)
- # allow the properties that affect the watch calculation.
- # note: when using in-instance monitoring you can only change the
- # metric name if you re-configure the instance too.
- update_allowed_properties = ('ComparisonOperator', 'AlarmDescription',
- 'EvaluationPeriods', 'Period', 'Statistic',
- 'AlarmActions', 'OKActions', 'Units',
- 'InsufficientDataActions', 'Threshold')
-
- def handle_create(self):
- wr = watchrule.WatchRule(context=self.context,
- watch_name=self.physical_resource_name(),
- rule=self.parsed_template('Properties'),
- stack_id=self.stack.id)
- wr.store()
-
- def handle_update(self, json_snippet, tmpl_diff, prop_diff):
- # If Properties has changed, update self.properties, so we
- # get the new values during any subsequent adjustment
- if prop_diff:
- self.properties = Properties(self.properties_schema,
- json_snippet.get('Properties', {}),
- self.stack.resolve_runtime_data,
- self.name)
- loader = watchrule.WatchRule.load
- wr = loader(self.context,
- watch_name=self.physical_resource_name())
-
- wr.rule = self.parsed_template('Properties')
- wr.store()
-
- def handle_delete(self):
- try:
- wr = watchrule.WatchRule.load(
- self.context, watch_name=self.physical_resource_name())
- wr.destroy()
- except exception.WatchRuleNotFound:
- pass
-
- def handle_suspend(self):
- wr = watchrule.WatchRule.load(self.context,
- watch_name=self.physical_resource_name())
- wr.state_set(wr.SUSPENDED)
-
- def handle_resume(self):
- wr = watchrule.WatchRule.load(self.context,
- watch_name=self.physical_resource_name())
- # Just set to NODATA, which will be re-evaluated next periodic task
- wr.state_set(wr.NODATA)
-
- def FnGetRefId(self):
- return unicode(self.physical_resource_name())
-
- def physical_resource_name(self):
- return '%s-%s' % (self.stack.name, self.name)
-
-
-def resource_mapping():
- return {
- 'OS::Heat::CWLiteAlarm': CloudWatchAlarm,
- }
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from heat.engine import clients
-from heat.engine import resource
-from heat.engine.resources.vpc import VPC
-from heat.common import exception
-
-from heat.openstack.common import excutils
-from heat.openstack.common import log as logging
-
-logger = logging.getLogger(__name__)
-
-
-class ElasticIp(resource.Resource):
- properties_schema = {'Domain': {'Type': 'String',
- 'AllowedValues': ['vpc']},
- 'InstanceId': {'Type': 'String'}}
- attributes_schema = {
- "AllocationId": ("ID that AWS assigns to represent the allocation of "
- "the address for use with Amazon VPC. Returned only "
- "for VPC elastic IP addresses.")
- }
-
- def __init__(self, name, json_snippet, stack):
- super(ElasticIp, self).__init__(name, json_snippet, stack)
- self.ipaddress = None
-
- def _ipaddress(self):
- if self.ipaddress is None and self.resource_id is not None:
- if self.properties['Domain'] and clients.neutronclient:
- ne = clients.neutronclient.exceptions.NeutronClientException
- try:
- ips = self.neutron().show_floatingip(self.resource_id)
- except ne as e:
- if e.status_code == 404:
- logger.warn("Floating IP not found: %s" % str(e))
- else:
- self.ipaddress = ips['floatingip']['floating_ip_address']
- else:
- try:
- ips = self.nova().floating_ips.get(self.resource_id)
- except clients.novaclient.exceptions.NotFound as ex:
- logger.warn("Floating IP not found: %s" % str(ex))
- else:
- self.ipaddress = ips.ip
- return self.ipaddress or ''
-
- def handle_create(self):
- """Allocate a floating IP for the current tenant."""
- ips = None
- if self.properties['Domain'] and clients.neutronclient:
- from heat.engine.resources.internet_gateway import InternetGateway
-
- ext_net = InternetGateway.get_external_network_id(self.neutron())
- props = {'floating_network_id': ext_net}
- ips = self.neutron().create_floatingip({
- 'floatingip': props})['floatingip']
- self.ipaddress = ips['floating_ip_address']
- self.resource_id_set(ips['id'])
- logger.info('ElasticIp create %s' % str(ips))
- else:
- if self.properties['Domain']:
- raise exception.Error('Domain property cannot be set on '
- 'resource %s without Neutron available' %
- self.name)
- try:
- ips = self.nova().floating_ips.create()
- except clients.novaclient.exceptions.NotFound:
- with excutils.save_and_reraise_exception():
- msg = ("No default floating IP pool configured. "
- "Set 'default_floating_pool' in nova.conf.")
- logger.error(msg)
-
- if ips:
- self.ipaddress = ips.ip
- self.resource_id_set(ips.id)
- logger.info('ElasticIp create %s' % str(ips))
-
- if self.properties['InstanceId']:
- server = self.nova().servers.get(self.properties['InstanceId'])
- server.add_floating_ip(self._ipaddress())
-
- def handle_delete(self):
- if self.properties['InstanceId']:
- try:
- server = self.nova().servers.get(self.properties['InstanceId'])
- if server:
- server.remove_floating_ip(self._ipaddress())
- except clients.novaclient.exceptions.NotFound:
- pass
-
- # De-allocate the floating IP itself.
- if self.resource_id is not None:
- if self.properties['Domain'] and clients.neutronclient:
- ne = clients.neutronclient.exceptions.NeutronClientException
- try:
- self.neutron().delete_floatingip(self.resource_id)
- except ne as e:
- if e.status_code != 404:
- raise e
- else:
- self.nova().floating_ips.delete(self.resource_id)
-
- def FnGetRefId(self):
- return unicode(self._ipaddress())
-
- def _resolve_attribute(self, name):
- if name == 'AllocationId':
- return unicode(self.resource_id)
-
-
-class ElasticIpAssociation(resource.Resource):
- properties_schema = {'InstanceId': {'Type': 'String',
- 'Required': False},
- 'EIP': {'Type': 'String'},
- 'AllocationId': {'Type': 'String'},
- 'NetworkInterfaceId': {'Type': 'String'}}
-
- def FnGetRefId(self):
- return unicode(self.physical_resource_name())
-
- def handle_create(self):
- """Add a floating IP address to a server."""
- if self.properties['EIP'] is not None \
- and self.properties['AllocationId'] is not None:
- raise exception.ResourcePropertyConflict('EIP',
- 'AllocationId')
-
- if self.properties['EIP']:
- if not self.properties['InstanceId']:
- logger.warn('Skipping association, InstanceId not specified')
- return
- server = self.nova().servers.get(self.properties['InstanceId'])
- server.add_floating_ip(self.properties['EIP'])
- self.resource_id_set(self.properties['EIP'])
- logger.debug('ElasticIpAssociation %s.add_floating_ip(%s)' %
- (self.properties['InstanceId'],
- self.properties['EIP']))
- elif self.properties['AllocationId']:
- assert clients.neutronclient, "Neutron required for VPC operations"
- port_id = None
- port_rsrc = None
- if self.properties['NetworkInterfaceId']:
- port_id = self.properties['NetworkInterfaceId']
- port_rsrc = self.neutron().list_ports(id=port_id)['ports'][0]
- elif self.properties['InstanceId']:
- instance_id = self.properties['InstanceId']
- ports = self.neutron().list_ports(device_id=instance_id)
- port_rsrc = ports['ports'][0]
- port_id = port_rsrc['id']
- else:
- logger.warn('Skipping association, resource not specified')
- return
-
- float_id = self.properties['AllocationId']
- self.resource_id_set(float_id)
-
- # assuming only one fixed_ip
- subnet_id = port_rsrc['fixed_ips'][0]['subnet_id']
- subnets = self.neutron().list_subnets(id=subnet_id)
- subnet_rsrc = subnets['subnets'][0]
- netid = subnet_rsrc['network_id']
-
- router_id = VPC.router_for_vpc(self.neutron(), netid)['id']
- floatingip = self.neutron().show_floatingip(float_id)
- floating_net_id = floatingip['floatingip']['floating_network_id']
-
- self.neutron().add_gateway_router(
- router_id, {'network_id': floating_net_id})
-
- self.neutron().update_floatingip(
- float_id, {'floatingip': {'port_id': port_id}})
-
- def handle_delete(self):
- """Remove a floating IP address from a server or port."""
- if self.properties['EIP']:
- try:
- server = self.nova().servers.get(self.properties['InstanceId'])
- if server:
- server.remove_floating_ip(self.properties['EIP'])
- except clients.novaclient.exceptions.NotFound:
- pass
- elif self.properties['AllocationId']:
- float_id = self.properties['AllocationId']
- ne = clients.neutronclient.exceptions.NeutronClientException
- try:
- self.neutron().update_floatingip(
- float_id, {'floatingip': {'port_id': None}})
- except ne as e:
- if e.status_code != 404:
- raise e
-
-
-def resource_mapping():
- return {
- 'AWS::EC2::EIP': ElasticIp,
- 'AWS::EC2::EIPAssociation': ElasticIpAssociation,
- }
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from heat.engine import signal_responder
-from heat.engine import clients
-from heat.engine import resource
-from heat.engine import scheduler
-from heat.engine.resources import nova_utils
-from heat.engine.resources import volume
-
-from heat.common import exception
-from heat.engine.resources.network_interface import NetworkInterface
-
-from heat.openstack.common.gettextutils import _
-from heat.openstack.common import log as logging
-
-logger = logging.getLogger(__name__)
-
-
-class Restarter(signal_responder.SignalResponder):
- properties_schema = {
- 'InstanceId': {
- 'Type': 'String',
- 'Required': True,
- 'Description': _('Instance ID to be restarted.')}}
- attributes_schema = {
- "AlarmUrl": _("A signed url to handle the alarm "
- "(Heat extension).")
- }
-
- def _find_resource(self, resource_id):
- '''
- Return the resource with the specified instance ID, or None if it
- cannot be found.
- '''
- for resource in self.stack:
- if resource.resource_id == resource_id:
- return resource
- return None
-
- def handle_signal(self, details=None):
- if details is None:
- alarm_state = 'alarm'
- else:
- alarm_state = details.get('state', 'alarm').lower()
-
- logger.info('%s Alarm, new state %s' % (self.name, alarm_state))
-
- if alarm_state != 'alarm':
- return
-
- victim = self._find_resource(self.properties['InstanceId'])
- if victim is None:
- logger.info('%s Alarm, can not find instance %s' %
- (self.name, self.properties['InstanceId']))
- return
-
- logger.info('%s Alarm, restarting resource: %s' %
- (self.name, victim.name))
- self.stack.restart_resource(victim.name)
-
- def _resolve_attribute(self, name):
- '''
- heat extension: "AlarmUrl" returns the url to post to the policy
- when there is an alarm.
- '''
- if name == 'AlarmUrl' and self.resource_id is not None:
- return unicode(self._get_signed_url())
-
-
-class Instance(resource.Resource):
- # AWS does not require InstanceType but Heat does because the nova
- # create api call requires a flavor
- tags_schema = {'Key': {'Type': 'String',
- 'Required': True},
- 'Value': {'Type': 'String',
- 'Required': True}}
-
- properties_schema = {
- 'ImageId': {
- 'Type': 'String',
- 'Required': True,
- 'Description': _('Glance image ID or name.')},
- 'InstanceType': {
- 'Type': 'String',
- 'Required': True,
- 'Description': _('Nova instance type (flavor).')},
- 'KeyName': {
- 'Type': 'String',
- 'Description': _('Optional Nova keypair name.')},
- 'AvailabilityZone': {
- 'Type': 'String',
- 'Description': _('Availability zone to launch the instance in.')},
- 'DisableApiTermination': {
- 'Type': 'String',
- 'Implemented': False,
- 'Description': _('Not Implemented.')},
- 'KernelId': {
- 'Type': 'String',
- 'Implemented': False,
- 'Description': _('Not Implemented.')},
- 'Monitoring': {
- 'Type': 'Boolean',
- 'Implemented': False,
- 'Description': _('Not Implemented.')},
- 'PlacementGroupName': {
- 'Type': 'String',
- 'Implemented': False,
- 'Description': _('Not Implemented.')},
- 'PrivateIpAddress': {
- 'Type': 'String',
- 'Implemented': False,
- 'Description': _('Not Implemented.')},
- 'RamDiskId': {
- 'Type': 'String',
- 'Implemented': False,
- 'Description': _('Not Implemented.')},
- 'SecurityGroups': {
- 'Type': 'List',
- 'Description': _('Security group names to assign.')},
- 'SecurityGroupIds': {
- 'Type': 'List',
- 'Description': _('Security group IDs to assign.')},
- 'NetworkInterfaces': {
- 'Type': 'List',
- 'Description': _('Network interfaces to associate with '
- 'instance.')},
- 'SourceDestCheck': {
- 'Type': 'Boolean',
- 'Implemented': False,
- 'Description': _('Not Implemented.')},
- 'SubnetId': {
- 'Type': 'String',
- 'Description': _('Subnet ID to launch instance in.')},
- 'Tags': {
- 'Type': 'List',
- 'Schema': {'Type': 'Map', 'Schema': tags_schema},
- 'Description': _('Tags to attach to instance.')},
- 'NovaSchedulerHints': {
- 'Type': 'List',
- 'Schema': {'Type': 'Map', 'Schema': tags_schema},
- 'Description': _('Scheduler hints to pass '
- 'to Nova (Heat extension).')},
- 'Tenancy': {
- 'Type': 'String',
- 'AllowedValues': ['dedicated', 'default'],
- 'Implemented': False,
- 'Description': _('Not Implemented.')},
- 'UserData': {
- 'Type': 'String',
- 'Description': _('User data to pass to instance.')},
- 'Volumes': {
- 'Type': 'List',
- 'Description': _('Volumes to attach to instance.')}}
-
- attributes_schema = {'AvailabilityZone': _('The Availability Zone where '
- 'the specified instance is '
- 'launched.'),
- 'PrivateDnsName': _('Private DNS name of the'
- ' specified instance.'),
- 'PublicDnsName': _('Public DNS name of the specified '
- 'instance.'),
- 'PrivateIp': _('Private IP address of the specified '
- 'instance.'),
- 'PublicIp': _('Public IP address of the specified '
- 'instance.')}
-
- update_allowed_keys = ('Metadata', 'Properties')
- update_allowed_properties = ('InstanceType',)
-
- def __init__(self, name, json_snippet, stack):
- super(Instance, self).__init__(name, json_snippet, stack)
- self.ipaddress = None
- self.mime_string = None
-
- def _set_ipaddress(self, networks):
- '''
- Read the server's IP address from a list of networks provided by Nova
- '''
- # Just record the first ipaddress
- for n in networks:
- if len(networks[n]) > 0:
- self.ipaddress = networks[n][0]
- break
-
- def _ipaddress(self):
- '''
- Return the server's IP address, fetching it from Nova if necessary
- '''
- if self.ipaddress is None:
- self.ipaddress = nova_utils.server_to_ipaddress(
- self.nova(), self.resource_id)
-
- return self.ipaddress or '0.0.0.0'
-
- def _resolve_attribute(self, name):
- res = None
- if name == 'AvailabilityZone':
- res = self.properties['AvailabilityZone']
- elif name in ['PublicIp', 'PrivateIp', 'PublicDnsName',
- 'PrivateDnsName']:
- res = self._ipaddress()
-
- logger.info('%s._resolve_attribute(%s) == %s' % (self.name, name, res))
- return unicode(res) if res else None
-
- def _build_nics(self, network_interfaces,
- security_groups=None, subnet_id=None):
-
- nics = None
-
- if network_interfaces:
- unsorted_nics = []
- for entry in network_interfaces:
- nic = (entry
- if not isinstance(entry, basestring)
- else {'NetworkInterfaceId': entry,
- 'DeviceIndex': len(unsorted_nics)})
- unsorted_nics.append(nic)
- sorted_nics = sorted(unsorted_nics,
- key=lambda nic: int(nic['DeviceIndex']))
- nics = [{'port-id': nic['NetworkInterfaceId']}
- for nic in sorted_nics]
- else:
- # if SubnetId property in Instance, ensure subnet exists
- if subnet_id:
- neutronclient = self.neutron()
- network_id = NetworkInterface.network_id_from_subnet_id(
- neutronclient, subnet_id)
- # if subnet verified, create a port to use this subnet
- # if port is not created explicitly, nova will choose
- # the first subnet in the given network.
- if network_id:
- fixed_ip = {'subnet_id': subnet_id}
- props = {
- 'admin_state_up': True,
- 'network_id': network_id,
- 'fixed_ips': [fixed_ip]
- }
-
- if security_groups:
- props['security_groups'] = \
- self._get_security_groups_id(security_groups)
-
- port = neutronclient.create_port({'port': props})['port']
- nics = [{'port-id': port['id']}]
-
- return nics
-
- def _get_security_groups_id(self, security_groups):
- """Extract security_groups ids from security group list
-
- This function will be deprecated if Neutron client resolves security
- group name to id internally.
-
- Args:
- security_groups : A list contains security_groups ids or names
- Returns:
- A list of security_groups ids.
- """
- ids = []
- response = self.neutron().list_security_groups(self.resource_id)
- for item in response:
- if item['security_groups'] is not None:
- for security_group in security_groups:
- for groups in item['security_groups']:
- if groups['name'] == security_group \
- and groups['id'] not in ids:
- ids.append(groups['id'])
- elif groups['id'] == security_group \
- and groups['id'] not in ids:
- ids.append(groups['id'])
- return ids
-
- def _get_security_groups(self):
- security_groups = []
- for prop in ('SecurityGroups', 'SecurityGroupIds'):
- if self.properties.get(prop) is not None:
- for sg in self.properties.get(prop):
- security_groups.append(sg)
- if not security_groups:
- security_groups = None
- return security_groups
-
- def get_mime_string(self, userdata):
- if not self.mime_string:
- self.mime_string = nova_utils.build_userdata(self, userdata)
- return self.mime_string
-
- def handle_create(self):
- security_groups = self._get_security_groups()
-
- userdata = self.properties['UserData'] or ''
- flavor = self.properties['InstanceType']
- availability_zone = self.properties['AvailabilityZone']
-
- key_name = self.properties['KeyName']
- if key_name:
- # confirm keypair exists
- nova_utils.get_keypair(self.nova(), key_name)
-
- image_name = self.properties['ImageId']
-
- image_id = nova_utils.get_image_id(self.nova(), image_name)
-
- flavor_id = nova_utils.get_flavor_id(self.nova(), flavor)
-
- tags = {}
- if self.properties['Tags']:
- for tm in self.properties['Tags']:
- tags[tm['Key']] = tm['Value']
- else:
- tags = None
-
- scheduler_hints = {}
- if self.properties['NovaSchedulerHints']:
- for tm in self.properties['NovaSchedulerHints']:
- scheduler_hints[tm['Key']] = tm['Value']
- else:
- scheduler_hints = None
-
- nics = self._build_nics(self.properties['NetworkInterfaces'],
- security_groups=security_groups,
- subnet_id=self.properties['SubnetId'])
- server = None
- try:
- server = self.nova().servers.create(
- name=self.physical_resource_name(),
- image=image_id,
- flavor=flavor_id,
- key_name=key_name,
- security_groups=security_groups,
- userdata=self.get_mime_string(userdata),
- meta=tags,
- scheduler_hints=scheduler_hints,
- nics=nics,
- availability_zone=availability_zone)
- finally:
- # Avoid a race condition where the thread could be cancelled
- # before the ID is stored
- if server is not None:
- self.resource_id_set(server.id)
-
- return server, scheduler.TaskRunner(self._attach_volumes_task())
-
- def _attach_volumes_task(self):
- attach_tasks = (volume.VolumeAttachTask(self.stack,
- self.resource_id,
- volume_id,
- device)
- for volume_id, device in self.volumes())
- return scheduler.PollingTaskGroup(attach_tasks)
-
- def check_create_complete(self, cookie):
- return self._check_active(cookie)
-
- def _check_active(self, cookie):
- server, volume_attach = cookie
-
- if not volume_attach.started():
- if server.status != 'ACTIVE':
- server.get()
-
- # Some clouds append extra (STATUS) strings to the status
- short_server_status = server.status.split('(')[0]
- if short_server_status in nova_utils.deferred_server_statuses:
- return False
- elif server.status == 'ACTIVE':
- self._set_ipaddress(server.networks)
- volume_attach.start()
- return volume_attach.done()
- elif server.status == 'ERROR':
- fault = getattr(server, 'fault', {})
- message = fault.get('message', 'Unknown')
- code = fault.get('code', 500)
- exc = exception.Error(_("Creation of server %(server)s "
- "failed: %(message)s (%(code)s)") %
- dict(server=server.name,
- message=message,
- code=code))
- raise exc
- else:
- exc = exception.Error(_("Creation of server %(server)s failed "
- "with unknown status: %(status)s") %
- dict(server=server.name,
- status=server.status))
- raise exc
- else:
- return volume_attach.step()
-
- def volumes(self):
- """
- Return an iterator over (volume_id, device) tuples for all volumes
- that should be attached to this instance.
- """
- volumes = self.properties['Volumes']
- if volumes is None:
- return []
-
- return ((vol['VolumeId'], vol['Device']) for vol in volumes)
-
- def handle_update(self, json_snippet, tmpl_diff, prop_diff):
- if 'Metadata' in tmpl_diff:
- self.metadata = tmpl_diff['Metadata']
- if 'InstanceType' in prop_diff:
- flavor = prop_diff['InstanceType']
- flavor_id = nova_utils.get_flavor_id(self.nova(), flavor)
- server = self.nova().servers.get(self.resource_id)
- server.resize(flavor_id)
- checker = scheduler.TaskRunner(nova_utils.check_resize,
- server, flavor)
- checker.start()
- return checker
-
- def check_update_complete(self, checker):
- return checker.step() if checker is not None else True
-
- def metadata_update(self, new_metadata=None):
- '''
- Refresh the metadata if new_metadata is None
- '''
- if new_metadata is None:
- self.metadata = self.parsed_template('Metadata')
-
- def validate(self):
- '''
- Validate any of the provided params
- '''
- res = super(Instance, self).validate()
- if res:
- return res
-
- # check validity of key
- key_name = self.properties.get('KeyName', None)
- if key_name:
- nova_utils.get_keypair(self.nova(), key_name)
-
- # check validity of security groups vs. network interfaces
- security_groups = self._get_security_groups()
- if security_groups and self.properties.get('NetworkInterfaces'):
- raise exception.ResourcePropertyConflict(
- 'SecurityGroups/SecurityGroupIds',
- 'NetworkInterfaces')
-
- # make sure the image exists.
- nova_utils.get_image_id(self.nova(), self.properties['ImageId'])
-
- @scheduler.wrappertask
- def _delete_server(self, server):
- '''
- Return a co-routine that deletes the server and waits for it to
- disappear from Nova.
- '''
- yield self._detach_volumes_task()()
- server.delete()
-
- while True:
- yield
-
- try:
- server.get()
- except clients.novaclient.exceptions.NotFound:
- self.resource_id = None
- break
-
- def _detach_volumes_task(self):
- '''
- Detach volumes from the instance
- '''
- detach_tasks = (volume.VolumeDetachTask(self.stack,
- self.resource_id,
- volume_id)
- for volume_id, device in self.volumes())
- return scheduler.PollingTaskGroup(detach_tasks)
-
- def handle_delete(self):
- '''
- Delete an instance, blocking until it is disposed by OpenStack
- '''
- if self.resource_id is None:
- return
-
- try:
- server = self.nova().servers.get(self.resource_id)
- except clients.novaclient.exceptions.NotFound:
- self.resource_id = None
- return
-
- server_delete_task = scheduler.TaskRunner(self._delete_server,
- server=server)
- server_delete_task.start()
- return server_delete_task
-
- def check_delete_complete(self, server_delete_task):
- # if the resource was already deleted, server_delete_task will be None
- if server_delete_task is None:
- return True
- else:
- return server_delete_task.step()
-
- def handle_suspend(self):
- '''
- Suspend an instance - note we do not wait for the SUSPENDED state,
- this is polled for by check_suspend_complete in a similar way to the
- create logic so we can take advantage of coroutines
- '''
- if self.resource_id is None:
- raise exception.Error(_('Cannot suspend %s, resource_id not set') %
- self.name)
-
- try:
- server = self.nova().servers.get(self.resource_id)
- except clients.novaclient.exceptions.NotFound:
- raise exception.NotFound(_('Failed to find instance %s') %
- self.resource_id)
- else:
- logger.debug("suspending instance %s" % self.resource_id)
- # We want the server.suspend to happen after the volume
- # detachment has finished, so pass both tasks and the server
- suspend_runner = scheduler.TaskRunner(server.suspend)
- volumes_runner = scheduler.TaskRunner(self._detach_volumes_task())
- return server, suspend_runner, volumes_runner
-
- def check_suspend_complete(self, cookie):
- server, suspend_runner, volumes_runner = cookie
-
- if not volumes_runner.started():
- volumes_runner.start()
-
- if volumes_runner.done():
- if not suspend_runner.started():
- suspend_runner.start()
-
- if suspend_runner.done():
- if server.status == 'SUSPENDED':
- return True
-
- server.get()
- logger.debug("%s check_suspend_complete status = %s" %
- (self.name, server.status))
- if server.status in list(nova_utils.deferred_server_statuses +
- ['ACTIVE']):
- return server.status == 'SUSPENDED'
- else:
- raise exception.Error(_('nova reported unexpected '
- 'instance[%(instance)s] '
- 'status[%(status)s]') %
- {'instance': self.name,
- 'status': server.status})
- else:
- suspend_runner.step()
- else:
- return volumes_runner.step()
-
- def handle_resume(self):
- '''
- Resume an instance - note we do not wait for the ACTIVE state,
- this is polled for by check_resume_complete in a similar way to the
- create logic so we can take advantage of coroutines
- '''
- if self.resource_id is None:
- raise exception.Error(_('Cannot resume %s, resource_id not set') %
- self.name)
-
- try:
- server = self.nova().servers.get(self.resource_id)
- except clients.novaclient.exceptions.NotFound:
- raise exception.NotFound(_('Failed to find instance %s') %
- self.resource_id)
- else:
- logger.debug("resuming instance %s" % self.resource_id)
- server.resume()
- return server, scheduler.TaskRunner(self._attach_volumes_task())
-
- def check_resume_complete(self, cookie):
- return self._check_active(cookie)
-
-
-def resource_mapping():
- return {
- 'AWS::EC2::Instance': Instance,
- 'OS::Heat::HARestarter': Restarter,
- }
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from heat.engine import clients
-from heat.common import exception
-from heat.openstack.common import log as logging
-from heat.engine import resource
-
-logger = logging.getLogger(__name__)
-
-
-class InternetGateway(resource.Resource):
- tags_schema = {'Key': {'Type': 'String',
- 'Required': True},
- 'Value': {'Type': 'String',
- 'Required': True}}
-
- properties_schema = {
- 'Tags': {'Type': 'List', 'Schema': {
- 'Type': 'Map',
- 'Implemented': False,
- 'Schema': tags_schema}}
- }
-
- def handle_create(self):
- self.resource_id_set(self.physical_resource_name())
-
- def handle_delete(self):
- pass
-
- @staticmethod
- def get_external_network_id(client):
- ext_filter = {'router:external': True}
- ext_nets = client.list_networks(**ext_filter)['networks']
- if len(ext_nets) != 1:
- # TODO(sbaker) if there is more than one external network
- # add a heat configuration variable to set the ID of
- # the default one
- raise exception.Error(
- 'Expected 1 external network, found %d' % len(ext_nets))
- external_network_id = ext_nets[0]['id']
- return external_network_id
-
-
-class VPCGatewayAttachment(resource.Resource):
-
- properties_schema = {
- 'VpcId': {
- 'Type': 'String',
- 'Required': True},
- 'InternetGatewayId': {'Type': 'String'},
- 'VpnGatewayId': {
- 'Type': 'String',
- 'Implemented': False}
- }
-
- def _vpc_route_tables(self):
- for resource in self.stack.resources.itervalues():
- if (resource.has_interface('AWS::EC2::RouteTable') and
- resource.properties.get('VpcId') ==
- self.properties.get('VpcId')):
- yield resource
-
- def add_dependencies(self, deps):
- super(VPCGatewayAttachment, self).add_dependencies(deps)
- # Depend on any route table in this template with the same
- # VpcId as this VpcId.
- # All route tables must exist before gateway attachment
- # as attachment happens to routers (not VPCs)
- for route_table in self._vpc_route_tables():
- deps += (self, route_table)
-
- def handle_create(self):
- client = self.neutron()
- external_network_id = InternetGateway.get_external_network_id(client)
- for router in self._vpc_route_tables():
- client.add_gateway_router(router.resource_id, {
- 'network_id': external_network_id})
-
- def handle_delete(self):
- from neutronclient.common.exceptions import NeutronClientException
-
- client = self.neutron()
- for router in self._vpc_route_tables():
- try:
- client.remove_gateway_router(router.resource_id)
- except NeutronClientException as ex:
- if ex.status_code != 404:
- raise ex
-
-
-def resource_mapping():
- if clients.neutronclient is None:
- return {}
-
- return {
- 'AWS::EC2::InternetGateway': InternetGateway,
- 'AWS::EC2::VPCGatewayAttachment': VPCGatewayAttachment,
- }
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from heat.common import template_format
-from heat.engine import stack_resource
-from heat.engine.resources import nova_utils
-
-from heat.openstack.common import log as logging
-
-logger = logging.getLogger(__name__)
-
-lb_template = r'''
-{
- "AWSTemplateFormatVersion": "2010-09-09",
- "Description": "Built in HAProxy server",
- "Parameters" : {
- "KeyName" : {
- "Type" : "String"
- }
- },
- "Resources": {
- "latency_watcher": {
- "Type": "AWS::CloudWatch::Alarm",
- "Properties": {
- "MetricName": "Latency",
- "Namespace": "AWS/ELB",
- "Statistic": "Average",
- "Period": "60",
- "EvaluationPeriods": "1",
- "Threshold": "2",
- "AlarmActions": [],
- "ComparisonOperator": "GreaterThanThreshold"
- }
- },
- "CfnLBUser" : {
- "Type" : "AWS::IAM::User"
- },
- "CfnLBAccessKey" : {
- "Type" : "AWS::IAM::AccessKey",
- "Properties" : {
- "UserName" : {"Ref": "CfnLBUser"}
- }
- },
- "LB_instance": {
- "Type": "AWS::EC2::Instance",
- "Metadata": {
- "AWS::CloudFormation::Init": {
- "config": {
- "packages": {
- "yum": {
- "cronie" : [],
- "haproxy" : [],
- "socat" : [],
- "python-psutil" : []
- }
- },
- "services": {
- "systemd": {
- "crond" : { "enabled" : "true", "ensureRunning" : "true" }
- }
- },
- "files": {
- "/etc/cfn/cfn-credentials" : {
- "content" : { "Fn::Join" : ["", [
- "AWSAccessKeyId=", { "Ref" : "CfnLBAccessKey" }, "\n",
- "AWSSecretKey=", {"Fn::GetAtt": ["CfnLBAccessKey",
- "SecretAccessKey"]}, "\n"
- ]]},
- "mode" : "000400",
- "owner" : "root",
- "group" : "root"
- },
- "/etc/cfn/cfn-hup.conf" : {
- "content" : { "Fn::Join" : ["", [
- "[main]\n",
- "stack=", { "Ref" : "AWS::StackId" }, "\n",
- "credential-file=/etc/cfn/cfn-credentials\n",
- "region=", { "Ref" : "AWS::Region" }, "\n",
- "interval=60\n"
- ]]},
- "mode" : "000400",
- "owner" : "root",
- "group" : "root"
- },
- "/etc/cfn/hooks.conf" : {
- "content": { "Fn::Join" : ["", [
- "[cfn-init]\n",
- "triggers=post.update\n",
- "path=Resources.LB_instance.Metadata\n",
- "action=/opt/aws/bin/cfn-init -s ",
- { "Ref": "AWS::StackId" },
- " -r LB_instance ",
- " --region ", { "Ref": "AWS::Region" }, "\n",
- "runas=root\n",
- "\n",
- "[reload]\n",
- "triggers=post.update\n",
- "path=Resources.LB_instance.Metadata\n",
- "action=systemctl reload-or-restart haproxy.service\n",
- "runas=root\n"
- ]]},
- "mode" : "000400",
- "owner" : "root",
- "group" : "root"
- },
- "/etc/haproxy/haproxy.cfg": {
- "content": "",
- "mode": "000644",
- "owner": "root",
- "group": "root"
- },
- "/tmp/cfn-hup-crontab.txt" : {
- "content" : { "Fn::Join" : ["", [
- "MAIL=\"\"\n",
- "\n",
- "* * * * * /opt/aws/bin/cfn-hup -f\n",
- "* * * * * /opt/aws/bin/cfn-push-stats ",
- " --watch ", { "Ref" : "latency_watcher" }, " --haproxy\n"
- ]]},
- "mode" : "000600",
- "owner" : "root",
- "group" : "root"
- }
- }
- }
- }
- },
- "Properties": {
- "ImageId": "F17-x86_64-cfntools",
- "InstanceType": "m1.small",
- "KeyName": { "Ref": "KeyName" },
- "UserData": { "Fn::Base64": { "Fn::Join": ["", [
- "#!/bin/bash -v\n",
- "# Helper function\n",
- "function error_exit\n",
- "{\n",
- " /opt/aws/bin/cfn-signal -e 1 -r \"$1\" '",
- { "Ref" : "WaitHandle" }, "'\n",
- " exit 1\n",
- "}\n",
-
- "/opt/aws/bin/cfn-init -s ",
- { "Ref": "AWS::StackId" },
- " -r LB_instance ",
- " --region ", { "Ref": "AWS::Region" }, "\n",
- "# install cfn-hup crontab\n",
- "crontab /tmp/cfn-hup-crontab.txt\n",
-
- "# LB setup completed, signal success\n",
- "/opt/aws/bin/cfn-signal -e 0 -r \"LB server setup complete\" '",
- { "Ref" : "WaitHandle" }, "'\n"
-
- ]]}}
- }
- },
- "WaitHandle" : {
- "Type" : "AWS::CloudFormation::WaitConditionHandle"
- },
-
- "WaitCondition" : {
- "Type" : "AWS::CloudFormation::WaitCondition",
- "DependsOn" : "LB_instance",
- "Properties" : {
- "Handle" : {"Ref" : "WaitHandle"},
- "Timeout" : "600"
- }
- }
- },
-
- "Outputs": {
- "PublicIp": {
- "Value": { "Fn::GetAtt": [ "LB_instance", "PublicIp" ] },
- "Description": "instance IP"
- }
- }
-}
-'''
-
-
-#
-# TODO(asalkeld) the above inline template could be placed in an external
-# file. It is inline for now because we will probably need to implement a
-# keepalived-based LoadBalancer as well (for SSL support).
-#
-class LoadBalancer(stack_resource.StackResource):
-
- listeners_schema = {
- 'InstancePort': {
- 'Type': 'Number',
- 'Required': True,
- 'Description': _('TCP port on which the instance server is'
- ' listening.')},
- 'LoadBalancerPort': {
- 'Type': 'Number',
- 'Required': True,
- 'Description': _('The external load balancer port number.')},
- 'Protocol': {
- 'Type': 'String',
- 'Required': True,
- 'AllowedValues': ['TCP', 'HTTP'],
- 'Description': _('The load balancer transport protocol to use.')},
- 'SSLCertificateId': {
- 'Type': 'String',
- 'Implemented': False,
- 'Description': _('Not Implemented.')},
- 'PolicyNames': {
- 'Type': 'List',
- 'Implemented': False,
- 'Description': _('Not Implemented.')}
- }
- healthcheck_schema = {
- 'HealthyThreshold': {
- 'Type': 'Number',
- 'Required': True,
- 'Description': _('The number of consecutive health probe successes'
- ' required before moving the instance to the'
- ' healthy state.')},
- 'Interval': {
- 'Type': 'Number',
- 'Required': True,
- 'Description': _('The approximate interval, in seconds, between'
- ' health checks of an individual instance.')},
- 'Target': {
- 'Type': 'String',
- 'Required': True,
- 'Description': _('The port being checked.')},
- 'Timeout': {
- 'Type': 'Number',
- 'Required': True,
- 'Description': _('Health probe timeout, in seconds.')},
- 'UnhealthyThreshold': {
- 'Type': 'Number',
- 'Required': True,
- 'Description': _('The number of consecutive health probe failures'
- ' required before moving the instance to the'
- ' unhealthy state.')},
- }
-
- properties_schema = {
- 'AvailabilityZones': {
- 'Type': 'List',
- 'Required': True,
- 'Description': _('The Availability Zones in which to create the'
- ' load balancer.')},
- 'HealthCheck': {
- 'Type': 'Map',
- 'Schema': healthcheck_schema,
- 'Description': _('An application health check for the'
- ' instances.')},
- 'Instances': {
- 'Type': 'List',
- 'Description': _('The list of instance IDs that are load balanced.')},
- 'Listeners': {
- 'Type': 'List', 'Required': True,
- 'Schema': {'Type': 'Map', 'Schema': listeners_schema},
- 'Description': _('One or more listeners for this load balancer.')},
- 'AppCookieStickinessPolicy': {
- 'Type': 'String',
- 'Implemented': False,
- 'Description': _('Not Implemented.')},
- 'LBCookieStickinessPolicy': {
- 'Type': 'String',
- 'Implemented': False,
- 'Description': _('Not Implemented.')},
- 'SecurityGroups': {
- 'Type': 'String',
- 'Implemented': False,
- 'Description': _('Not Implemented.')},
- 'Subnets': {
- 'Type': 'List',
- 'Implemented': False,
- 'Description': _('Not Implemented.')}
- }
- attributes_schema = {
- "CanonicalHostedZoneName": ("The name of the hosted zone that is "
- "associated with the LoadBalancer."),
- "CanonicalHostedZoneNameID": ("The ID of the hosted zone name that is "
- "associated with the LoadBalancer."),
- "DNSName": "The DNS name for the LoadBalancer.",
- "SourceSecurityGroup.GroupName": ("The security group that you can use"
- " as part of your inbound rules for "
- "your LoadBalancer's back-end "
- "instances."),
- "SourceSecurityGroup.OwnerAlias": "Owner of the source security group."
- }
- update_allowed_keys = ('Properties',)
- update_allowed_properties = ('Instances',)
-
- def _haproxy_config(self, templ, instances):
- # initial simplifications:
- # - only one Listener
- # - only http (no tcp or ssl)
- #
- # option httpchk HEAD /check.txt HTTP/1.0
- gl = '''
- global
- daemon
- maxconn 256
- stats socket /tmp/.haproxy-stats
-
- defaults
- mode http
- timeout connect 5000ms
- timeout client 50000ms
- timeout server 50000ms
-'''
-
- listener = self.properties['Listeners'][0]
- lb_port = listener['LoadBalancerPort']
- inst_port = listener['InstancePort']
- spaces = ' '
- frontend = '''
- frontend http
- bind *:%s
-''' % (lb_port)
-
- health_chk = self.properties['HealthCheck']
- if health_chk:
- check = 'check inter %ss fall %s rise %s' % (
- health_chk['Interval'],
- health_chk['UnhealthyThreshold'],
- health_chk['HealthyThreshold'])
- timeout_check = 'timeout check %ds' % int(health_chk['Timeout'])
- else:
- check = ''
- timeout_check = ''
-
- backend = '''
- default_backend servers
-
- backend servers
- balance roundrobin
- option http-server-close
- option forwardfor
- option httpchk
- %s
-''' % timeout_check
-
- servers = []
- n = 1
- client = self.nova()
- for i in instances:
- ip = nova_utils.server_to_ipaddress(client, i) or '0.0.0.0'
- logger.debug('haproxy server:%s' % ip)
- servers.append('%sserver server%d %s:%s %s' % (spaces, n,
- ip, inst_port,
- check))
- n = n + 1
-
- return '%s%s%s%s\n' % (gl, frontend, backend, '\n'.join(servers))
-
- def handle_create(self):
- templ = template_format.parse(lb_template)
-
- if self.properties['Instances']:
- md = templ['Resources']['LB_instance']['Metadata']
- files = md['AWS::CloudFormation::Init']['config']['files']
- cfg = self._haproxy_config(templ, self.properties['Instances'])
- files['/etc/haproxy/haproxy.cfg']['content'] = cfg
-
- # If the owning stack defines KeyName, we use that key for the nested
- # template, otherwise use no key
- try:
- param = {'KeyName': self.stack.parameters['KeyName']}
- except KeyError:
- del templ['Resources']['LB_instance']['Properties']['KeyName']
- del templ['Parameters']['KeyName']
- param = {}
-
- return self.create_with_template(templ, param)
-
- def handle_update(self, json_snippet, tmpl_diff, prop_diff):
- '''
- Re-generate the Metadata and save it to the database;
- cfn-hup on the instance will then reconfigure HAProxy.
- '''
- if 'Instances' in prop_diff:
- templ = template_format.parse(lb_template)
- cfg = self._haproxy_config(templ, prop_diff['Instances'])
-
- md = self.nested()['LB_instance'].metadata
- files = md['AWS::CloudFormation::Init']['config']['files']
- files['/etc/haproxy/haproxy.cfg']['content'] = cfg
-
- self.nested()['LB_instance'].metadata = md
-
- def handle_delete(self):
- return self.delete_nested()
-
- def validate(self):
- '''
- Validate any of the provided params
- '''
- res = super(LoadBalancer, self).validate()
- if res:
- return res
-
- health_chk = self.properties['HealthCheck']
- if health_chk:
- if float(health_chk['Interval']) < float(health_chk['Timeout']):
- return {'Error':
- 'Interval must be larger than Timeout'}
-
- def FnGetRefId(self):
- return unicode(self.name)
-
- def _resolve_attribute(self, name):
- '''
- We don't really support any of these yet.
- '''
- if name == 'DNSName':
- return self.get_output('PublicIp')
- elif name in self.attributes_schema:
- # Not sure if we should return anything for the other attribs
- # since they aren't really supported in any meaningful way
- return ''
-
-
-def resource_mapping():
- return {
- 'AWS::ElasticLoadBalancing::LoadBalancer': LoadBalancer,
- }
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from heat.engine import clients
-from heat.openstack.common import log as logging
-from heat.engine import resource
-from heat.engine.resources.neutron import neutron
-
-logger = logging.getLogger(__name__)
-
-
-class NetworkInterface(resource.Resource):
- tags_schema = {'Key': {'Type': 'String',
- 'Required': True},
- 'Value': {'Type': 'String',
- 'Required': True}}
-
- properties_schema = {
- 'Description': {'Type': 'String'},
- 'GroupSet': {'Type': 'List'},
- 'PrivateIpAddress': {'Type': 'String'},
- 'SourceDestCheck': {
- 'Type': 'Boolean',
- 'Implemented': False},
- 'SubnetId': {
- 'Type': 'String',
- 'Required': True},
- 'Tags': {'Type': 'List', 'Schema': {
- 'Type': 'Map',
- 'Implemented': False,
- 'Schema': tags_schema}}
- }
-
- @staticmethod
- def network_id_from_subnet_id(neutronclient, subnet_id):
- subnet_info = neutronclient.show_subnet(subnet_id)
- return subnet_info['subnet']['network_id']
-
- def handle_create(self):
- client = self.neutron()
-
- subnet_id = self.properties['SubnetId']
- network_id = self.network_id_from_subnet_id(client, subnet_id)
-
- fixed_ip = {'subnet_id': subnet_id}
- if self.properties['PrivateIpAddress']:
- fixed_ip['ip_address'] = self.properties['PrivateIpAddress']
-
- props = {
- 'name': self.physical_resource_name(),
- 'admin_state_up': True,
- 'network_id': network_id,
- 'fixed_ips': [fixed_ip]
- }
-
- if self.properties['GroupSet']:
- sgs = neutron.NeutronResource.get_secgroup_uuids(
- self.stack, self.properties, 'GroupSet', self.name,
- self.neutron())
- props['security_groups'] = sgs
- port = client.create_port({'port': props})['port']
- self.resource_id_set(port['id'])
-
- def handle_delete(self):
- from neutronclient.common.exceptions import NeutronClientException
-
- client = self.neutron()
- try:
- client.delete_port(self.resource_id)
- except NeutronClientException as ex:
- if ex.status_code != 404:
- raise ex
-
-
-def resource_mapping():
- if clients.neutronclient is None:
- return {}
-
- return {
- 'AWS::EC2::NetworkInterface': NetworkInterface,
- }
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from heat.engine import clients
-from heat.engine.resources.neutron import neutron
-from heat.engine import scheduler
-
-if clients.neutronclient is not None:
- from neutronclient.common.exceptions import NeutronClientException
-
-from heat.openstack.common import log as logging
-
-logger = logging.getLogger(__name__)
-
-
-class Firewall(neutron.NeutronResource):
- """
- A resource for the Firewall resource in Neutron FWaaS.
- """
-
- properties_schema = {'name': {'Type': 'String'},
- 'description': {'Type': 'String'},
- 'admin_state_up': {'Type': 'Boolean',
- 'Default': True},
- 'firewall_policy_id': {'Type': 'String',
- 'Required': True}}
-
- attributes_schema = {
- 'id': 'unique identifier for the Firewall',
- 'name': 'name for the Firewall',
- 'description': 'description of the Firewall',
- 'admin_state_up': 'the administrative state of the Firewall',
- 'firewall_policy_id': 'unique identifier of the FirewallPolicy used to '
- 'create the Firewall',
- 'status': 'the status of the Firewall',
- 'tenant_id': 'Id of the tenant owning the Firewall'
- }
-
- update_allowed_keys = ('Properties',)
- update_allowed_properties = ('name', 'description', 'admin_state_up',
- 'firewall_policy_id')
-
- def _show_resource(self):
- return self.neutron().show_firewall(self.resource_id)['firewall']
-
- def handle_create(self):
- props = self.prepare_properties(
- self.properties,
- self.physical_resource_name())
- firewall = self.neutron().create_firewall({'firewall': props})[
- 'firewall']
- self.resource_id_set(firewall['id'])
-
- def handle_update(self, json_snippet, tmpl_diff, prop_diff):
- if prop_diff:
- self.neutron().update_firewall(
- self.resource_id, {'firewall': prop_diff})
-
- def handle_delete(self):
- client = self.neutron()
- try:
- client.delete_firewall(self.resource_id)
- except NeutronClientException as ex:
- if ex.status_code != 404:
- raise ex
- else:
- return scheduler.TaskRunner(self._confirm_delete)()
-
-
-class FirewallPolicy(neutron.NeutronResource):
- """
- A resource for the FirewallPolicy resource in Neutron FWaaS.
- """
-
- properties_schema = {'name': {'Type': 'String'},
- 'description': {'Type': 'String'},
- 'shared': {'Type': 'Boolean',
- 'Default': False},
- 'audited': {'Type': 'Boolean',
- 'Default': False},
- 'firewall_rules': {'Type': 'List',
- 'Required': True}}
-
- attributes_schema = {
- 'id': 'unique identifier for the FirewallPolicy',
- 'name': 'name for the FirewallPolicy',
- 'description': 'description of the FirewallPolicy',
- 'firewall_rules': 'list of FirewallRules in this FirewallPolicy',
- 'shared': 'shared status of this FirewallPolicy',
- 'audited': 'audit status of this FirewallPolicy',
- 'tenant_id': 'Id of the tenant owning the FirewallPolicy'
- }
-
- update_allowed_keys = ('Properties',)
- update_allowed_properties = ('name', 'description', 'shared',
- 'audited', 'firewall_rules')
-
- def _show_resource(self):
- return self.neutron().show_firewall_policy(self.resource_id)[
- 'firewall_policy']
-
- def handle_create(self):
- props = self.prepare_properties(
- self.properties,
- self.physical_resource_name())
- firewall_policy = self.neutron().create_firewall_policy(
- {'firewall_policy': props})['firewall_policy']
- self.resource_id_set(firewall_policy['id'])
-
- def handle_update(self, json_snippet, tmpl_diff, prop_diff):
- if prop_diff:
- self.neutron().update_firewall_policy(
- self.resource_id, {'firewall_policy': prop_diff})
-
- def handle_delete(self):
- client = self.neutron()
- try:
- client.delete_firewall_policy(self.resource_id)
- except NeutronClientException as ex:
- if ex.status_code != 404:
- raise ex
- else:
- return scheduler.TaskRunner(self._confirm_delete)()
-
-
-class FirewallRule(neutron.NeutronResource):
- """
- A resource for the FirewallRule resource in Neutron FWaaS.
- """
-
- properties_schema = {'name': {'Type': 'String'},
- 'description': {'Type': 'String'},
- 'shared': {'Type': 'Boolean',
- 'Default': False},
- 'protocol': {'Type': 'String',
- 'AllowedValues': ['tcp', 'udp', 'icmp',
- None],
- 'Default': None},
- 'ip_version': {'Type': 'String',
- 'AllowedValues': ['4', '6'],
- 'Default': '4'},
- 'source_ip_address': {'Type': 'String',
- 'Default': None},
- 'destination_ip_address': {'Type': 'String',
- 'Default': None},
- 'source_port': {'Type': 'String',
- 'Default': None},
- 'destination_port': {'Type': 'String',
- 'Default': None},
- 'action': {'Type': 'String',
- 'AllowedValues': ['allow', 'deny'],
- 'Default': 'deny'},
- 'enabled': {'Type': 'Boolean',
- 'Default': True}}
-
- attributes_schema = {
- 'id': 'unique identifier for the FirewallRule',
- 'name': 'name for the FirewallRule',
- 'description': 'description of the FirewallRule',
- 'firewall_policy_id': 'unique identifier of the FirewallPolicy to '
- 'which this FirewallRule belongs',
- 'shared': 'shared status of this FirewallRule',
- 'protocol': 'protocol value for this FirewallRule',
- 'ip_version': 'ip_version for this FirewallRule',
- 'source_ip_address': 'source ip_address for this FirewallRule',
- 'destination_ip_address': 'destination ip_address for this '
- 'FirewallRule',
- 'source_port': 'source port range for this FirewallRule',
- 'destination_port': 'destination port range for this FirewallRule',
- 'action': 'allow or deny action for this FirewallRule',
- 'enabled': 'indicates whether this FirewallRule is enabled or not',
- 'position': 'position of the rule within the FirewallPolicy',
- 'tenant_id': 'Id of the tenant owning the Firewall'
- }
-
- update_allowed_keys = ('Properties',)
- update_allowed_properties = ('name', 'description', 'shared',
- 'protocol', 'ip_version', 'source_ip_address',
- 'destination_ip_address', 'source_port',
- 'destination_port', 'action', 'enabled')
-
- def _show_resource(self):
- return self.neutron().show_firewall_rule(
- self.resource_id)['firewall_rule']
-
- def handle_create(self):
- props = self.prepare_properties(
- self.properties,
- self.physical_resource_name())
- firewall_rule = self.neutron().create_firewall_rule(
- {'firewall_rule': props})['firewall_rule']
- self.resource_id_set(firewall_rule['id'])
-
- def handle_update(self, json_snippet, tmpl_diff, prop_diff):
- if prop_diff:
- self.neutron().update_firewall_rule(
- self.resource_id, {'firewall_rule': prop_diff})
-
- def handle_delete(self):
- client = self.neutron()
- try:
- client.delete_firewall_rule(self.resource_id)
- except NeutronClientException as ex:
- if ex.status_code != 404:
- raise ex
- else:
- return scheduler.TaskRunner(self._confirm_delete)()
-
-
-def resource_mapping():
- if clients.neutronclient is None:
- return {}
-
- return {
- 'OS::Neutron::Firewall': Firewall,
- 'OS::Neutron::FirewallPolicy': FirewallPolicy,
- 'OS::Neutron::FirewallRule': FirewallRule,
- }
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from heat.engine import clients
-from heat.openstack.common import log as logging
-from heat.engine.resources.neutron import neutron
-
-if clients.neutronclient is not None:
- from neutronclient.common.exceptions import NeutronClientException
-
-logger = logging.getLogger(__name__)
-
-
-class FloatingIP(neutron.NeutronResource):
- properties_schema = {'floating_network_id': {'Type': 'String',
- 'Required': True},
- 'value_specs': {'Type': 'Map',
- 'Default': {}},
- 'port_id': {'Type': 'String'},
- 'fixed_ip_address': {'Type': 'String'}}
-
- def add_dependencies(self, deps):
- super(FloatingIP, self).add_dependencies(deps)
- # depend on any RouterGateway in this template with the same
- # network_id as this floating_network_id
- for resource in self.stack.resources.itervalues():
- if (resource.has_interface('OS::Neutron::RouterGateway') and
- resource.properties.get('network_id') ==
- self.properties.get('floating_network_id')):
- deps += (self, resource)
-
- def handle_create(self):
- props = self.prepare_properties(
- self.properties,
- self.physical_resource_name())
- fip = self.neutron().create_floatingip({
- 'floatingip': props})['floatingip']
- self.resource_id_set(fip['id'])
-
- def handle_delete(self):
- client = self.neutron()
- try:
- client.delete_floatingip(self.resource_id)
- except NeutronClientException as ex:
- if ex.status_code != 404:
- raise ex
-
- def FnGetAtt(self, key):
- try:
- attributes = self.neutron().show_floatingip(
- self.resource_id)['floatingip']
- except NeutronClientException as ex:
- logger.warn("failed to fetch resource attributes: %s" % str(ex))
- return None
- return self.handle_get_attributes(self.name, key, attributes)
-
-
-class FloatingIPAssociation(neutron.NeutronResource):
- properties_schema = {'floatingip_id': {'Type': 'String',
- 'Required': True},
- 'port_id': {'Type': 'String',
- 'Required': True},
- 'fixed_ip_address': {'Type': 'String'}}
-
- def handle_create(self):
- props = self.prepare_properties(self.properties, self.name)
-
- floatingip_id = props.pop('floatingip_id')
-
- self.neutron().update_floatingip(floatingip_id, {
- 'floatingip': props})['floatingip']
- self.resource_id_set('%s:%s' % (floatingip_id, props['port_id']))
-
- def handle_delete(self):
- if not self.resource_id:
- return
- client = self.neutron()
- (floatingip_id, port_id) = self.resource_id.split(':')
- try:
- client.update_floatingip(
- floatingip_id,
- {'floatingip': {'port_id': None}})
- except NeutronClientException as ex:
- if ex.status_code != 404:
- raise ex
-
-
-def resource_mapping():
- if clients.neutronclient is None:
- return {}
-
- return {
- 'OS::Neutron::FloatingIP': FloatingIP,
- 'OS::Neutron::FloatingIPAssociation': FloatingIPAssociation,
- }
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from heat.common import exception
-from heat.db.sqlalchemy import api as db_api
-from heat.engine import clients
-from heat.engine import resource
-from heat.engine import scheduler
-from heat.engine.resources import nova_utils
-from heat.engine.resources.neutron import neutron
-
-if clients.neutronclient is not None:
- from neutronclient.common.exceptions import NeutronClientException
-
-
-class HealthMonitor(neutron.NeutronResource):
- """
- A resource for managing health monitors for load balancers in Neutron.
- """
-
- properties_schema = {
- 'delay': {'Type': 'Integer', 'Required': True},
- 'type': {'Type': 'String', 'Required': True,
- 'AllowedValues': ['PING', 'TCP', 'HTTP', 'HTTPS']},
- 'max_retries': {'Type': 'Integer', 'Required': True},
- 'timeout': {'Type': 'Integer', 'Required': True},
- 'admin_state_up': {'Default': True, 'Type': 'Boolean'},
- 'http_method': {'Type': 'String'},
- 'expected_codes': {'Type': 'String'},
- 'url_path': {'Type': 'String'},
- }
-
- update_allowed_keys = ('Properties',)
- update_allowed_properties = ('delay', 'max_retries', 'timeout',
- 'admin_state_up', 'http_method',
- 'expected_codes', 'url_path')
-
- attributes_schema = {
- 'admin_state_up': 'the administrative state of this health monitor',
- 'delay': 'the minimum time in seconds between regular connections '
- 'of the member',
- 'expected_codes': 'the list of HTTP status codes expected in '
- 'response from the member to declare it healthy',
- 'http_method': 'the HTTP method used for requests by the monitor of '
- 'type HTTP',
- 'id': 'unique identifier for this health monitor',
- 'max_retries': 'number of permissible connection failures before '
- 'changing the member status to INACTIVE.',
- 'timeout': 'maximum number of seconds for a monitor to wait for a '
- 'connection to be established before it times out',
- 'type': 'one of predefined health monitor types',
- 'url_path': 'the HTTP path used in the HTTP request used by the '
- 'monitor to test a member health',
- 'tenant_id': 'tenant owning the health monitor',
- }
-
- def handle_create(self):
- properties = self.prepare_properties(
- self.properties,
- self.physical_resource_name())
- health_monitor = self.neutron().create_health_monitor(
- {'health_monitor': properties})['health_monitor']
- self.resource_id_set(health_monitor['id'])
-
- def _show_resource(self):
- return self.neutron().show_health_monitor(
- self.resource_id)['health_monitor']
-
- def handle_update(self, json_snippet, tmpl_diff, prop_diff):
- if prop_diff:
- self.neutron().update_health_monitor(
- self.resource_id, {'health_monitor': prop_diff})
-
- def handle_delete(self):
- try:
- self.neutron().delete_health_monitor(self.resource_id)
- except NeutronClientException as ex:
- if ex.status_code != 404:
- raise ex
- else:
- return scheduler.TaskRunner(self._confirm_delete)()
-
-
-class Pool(neutron.NeutronResource):
- """
- A resource for managing load balancer pools in Neutron.
- """
-
- vip_schema = {
- 'name': {'Type': 'String'},
- 'description': {'Type': 'String'},
- 'address': {'Type': 'String'},
- 'connection_limit': {'Type': 'Integer'},
- 'protocol_port': {'Type': 'Integer', 'Required': True},
- 'admin_state_up': {'Default': True, 'Type': 'Boolean'},
- }
-
- properties_schema = {
- 'protocol': {'Type': 'String', 'Required': True,
- 'AllowedValues': ['TCP', 'HTTP', 'HTTPS']},
- 'subnet_id': {'Type': 'String', 'Required': True},
- 'lb_method': {'Type': 'String', 'Required': True,
- 'AllowedValues': ['ROUND_ROBIN', 'LEAST_CONNECTIONS',
- 'SOURCE_IP']},
- 'name': {'Type': 'String'},
- 'description': {'Type': 'String'},
- 'admin_state_up': {'Default': True, 'Type': 'Boolean'},
- 'vip': {'Type': 'Map', 'Schema': vip_schema, 'Required': True},
- 'monitors': {'Type': 'List'},
- }
-
- update_allowed_keys = ('Properties',)
- update_allowed_properties = ('description', 'admin_state_up', 'lb_method',
- 'monitors')
-
- attributes_schema = {
- 'admin_state_up': 'the administrative state of this pool',
- 'id': 'unique identifier for this pool',
- 'name': 'friendly name of the pool',
- 'protocol': 'protocol to balance',
- 'subnet_id': 'the subnet on which the members of the pool '
- 'will be located',
- 'lb_method': 'the algorithm used to distribute load between the '
- 'members of the pool',
- 'description': 'description of the pool',
- 'tenant_id': 'tenant owning the pool',
- 'vip': 'ip of the pool',
- }
-
- def handle_create(self):
- properties = self.prepare_properties(
- self.properties,
- self.physical_resource_name())
- vip_properties = properties.pop('vip')
- monitors = properties.pop('monitors', [])
- client = self.neutron()
- pool = client.create_pool({'pool': properties})['pool']
- self.resource_id_set(pool['id'])
-
- for monitor in monitors:
- client.associate_health_monitor(
- pool['id'], {'health_monitor': {'id': monitor}})
-
- vip_arguments = self.prepare_properties(
- vip_properties,
- '%s.vip' % (self.name,))
- vip_arguments['protocol'] = self.properties['protocol']
- vip_arguments['subnet_id'] = self.properties['subnet_id']
- vip_arguments['pool_id'] = pool['id']
- vip = client.create_vip({'vip': vip_arguments})['vip']
-
- self.metadata = {'vip': vip['id']}
-
- def _show_resource(self):
- return self.neutron().show_pool(self.resource_id)['pool']
-
- def check_create_complete(self, data):
- attributes = self._show_resource()
- if attributes['status'] == 'PENDING_CREATE':
- return False
- elif attributes['status'] == 'ACTIVE':
- vip_attributes = self.neutron().show_vip(
- self.metadata['vip'])['vip']
- if vip_attributes['status'] == 'PENDING_CREATE':
- return False
- elif vip_attributes['status'] == 'ACTIVE':
- return True
- raise exception.Error(
- 'neutron reported unexpected vip resource[%s] status[%s]' %
- (vip_attributes['name'], vip_attributes['status']))
- raise exception.Error(
- 'neutron reported unexpected pool resource[%s] status[%s]' %
- (attributes['name'], attributes['status']))
-
- def handle_update(self, json_snippet, tmpl_diff, prop_diff):
- if prop_diff:
- client = self.neutron()
- monitors = set(prop_diff.pop('monitors', []))
- if monitors:
- old_monitors = set(self.t['Properties']['monitors'])
- for monitor in old_monitors - monitors:
- client.disassociate_health_monitor(
- self.resource_id, {'health_monitor': {'id': monitor}})
- for monitor in monitors - old_monitors:
- client.associate_health_monitor(
- self.resource_id, {'health_monitor': {'id': monitor}})
-
- if prop_diff:
- client.update_pool(self.resource_id, {'pool': prop_diff})
-
- def _resolve_attribute(self, name):
- if name == 'vip':
- return self.neutron().show_vip(self.metadata['vip'])['vip']
- return super(Pool, self)._resolve_attribute(name)
-
- def _confirm_vip_delete(self):
- client = self.neutron()
- while True:
- try:
- yield
- client.show_vip(self.metadata['vip'])
- except NeutronClientException as ex:
- if ex.status_code != 404:
- raise ex
- break
- self._delete_pool()
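`_confirm_vip_delete` above is a cooperative task: each `yield` hands control back to Heat's scheduler, and the loop exits once the show call raises a 404. A minimal stand-alone sketch of that polling pattern (the driver loop and `fake_show` here are illustrative stand-ins for `scheduler.TaskRunner` and the Neutron client, not Heat's actual API) might look like:

```python
def confirm_delete(show):
    # Polls until show() raises our stand-in "not found" error,
    # yielding control between attempts, as TaskRunner would.
    while True:
        try:
            yield
            show()
        except KeyError:  # stand-in for a 404 NeutronClientException
            return

calls = {'n': 0}

def fake_show():
    # Pretend the resource disappears on the third poll.
    calls['n'] += 1
    if calls['n'] >= 3:
        raise KeyError('404')

task = confirm_delete(fake_show)
steps = 0
try:
    while True:
        next(task)  # each next() resumes the generator past one yield
        steps += 1
except StopIteration:
    pass
```

After three polls the generator returns, which surfaces as `StopIteration` to the driver loop, mirroring how the real task completes once Neutron stops returning the VIP.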
-
- def _delete_pool(self):
- try:
- self.neutron().delete_pool(self.resource_id)
- except NeutronClientException as ex:
- if ex.status_code != 404:
- raise ex
- else:
- return scheduler.TaskRunner(self._confirm_delete)()
-
- def handle_delete(self):
- if self.metadata:
- try:
- self.neutron().delete_vip(self.metadata['vip'])
- except NeutronClientException as ex:
- if ex.status_code != 404:
- raise ex
- self._delete_pool()
- else:
- return scheduler.TaskRunner(self._confirm_vip_delete)()
- else:
- self._delete_pool()
-
-
-class LoadBalancer(resource.Resource):
- """
- A resource to link a neutron pool with servers.
- """
-
- properties_schema = {
- 'pool_id': {
- 'Type': 'String', 'Required': True,
- 'Description': _('The ID of the load balancing pool')},
- 'protocol_port': {
- 'Type': 'Integer', 'Required': True,
- 'Description': _('TCP port on which the member '
- 'servers are listening')},
- 'members': {
- 'Type': 'List',
- 'Description': _('The list of IDs of Nova servers to be '
- 'load balanced')},
- }
-
- update_allowed_keys = ('Properties',)
-
- update_allowed_properties = ('members',)
-
- def handle_create(self):
- pool = self.properties['pool_id']
- client = self.neutron()
- nova_client = self.nova()
- protocol_port = self.properties['protocol_port']
- for member in self.properties['members']:
- address = nova_utils.server_to_ipaddress(nova_client, member)
- lb_member = client.create_member({
- 'member': {
- 'pool_id': pool,
- 'address': address,
- 'protocol_port': protocol_port}})['member']
- db_api.resource_data_set(self, member, lb_member['id'])
-
- def handle_update(self, json_snippet, tmpl_diff, prop_diff):
- if 'members' in prop_diff:
- members = set(prop_diff['members'])
- old_members = set(self.t['Properties'].get('members', []))
- client = self.neutron()
- for member in old_members - members:
- member_id = db_api.resource_data_get(self, member)
- try:
- client.delete_member(member_id)
- except NeutronClientException as ex:
- if ex.status_code != 404:
- raise ex
- db_api.resource_data_delete(self, member)
- pool = self.properties['pool_id']
- nova_client = self.nova()
- protocol_port = self.properties['protocol_port']
- for member in members - old_members:
- address = nova_utils.server_to_ipaddress(nova_client, member)
- lb_member = client.create_member({
- 'member': {
- 'pool_id': pool,
- 'address': address,
- 'protocol_port': protocol_port}})['member']
- db_api.resource_data_set(self, member, lb_member['id'])
-
- def handle_delete(self):
- client = self.neutron()
- for member in self.properties['members']:
- member_id = db_api.resource_data_get(self, member)
- try:
- client.delete_member(member_id)
- except NeutronClientException as ex:
- if ex.status_code != 404:
- raise ex
- db_api.resource_data_delete(self, member)
-
-
-def resource_mapping():
- if clients.neutronclient is None:
- return {}
-
- return {
- 'OS::Neutron::HealthMonitor': HealthMonitor,
- 'OS::Neutron::Pool': Pool,
- 'OS::Neutron::LoadBalancer': LoadBalancer,
- }
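The guarded `resource_mapping` pattern above, returning an empty mapping when the optional client library failed to import, can be sketched in isolation. The class and mapping below are illustrative placeholders, not Heat's actual plugin API:

```python
try:
    import neutronclient  # optional dependency; may be absent
except ImportError:
    neutronclient = None


class LoadBalancer(object):
    """Illustrative placeholder for the real resource class."""


def resource_mapping():
    # When the client is unavailable, expose no resource types;
    # the engine then simply never offers them to templates.
    if neutronclient is None:
        return {}
    return {'OS::Neutron::LoadBalancer': LoadBalancer}
```

The effect is graceful degradation: a Heat deployment without python-neutronclient installed still starts, it just lacks the Neutron resource types.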
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from heat.engine import clients
-from heat.openstack.common import log as logging
-from heat.engine.resources.neutron import neutron
-from heat.engine import scheduler
-
-if clients.neutronclient is not None:
- from neutronclient.common.exceptions import NeutronClientException
-
-logger = logging.getLogger(__name__)
-
-
-class Net(neutron.NeutronResource):
- properties_schema = {'name': {'Type': 'String'},
- 'value_specs': {'Type': 'Map',
- 'Default': {}},
- 'admin_state_up': {'Default': True,
- 'Type': 'Boolean'},
- 'tenant_id': {'Type': 'String'}}
- attributes_schema = {
- "id": "the unique identifier for this network",
- "status": "the status of the network",
- "name": "the name of the network",
- "subnets": "subnets of this network",
- "admin_state_up": "the administrative status of the network",
- "tenant_id": "the tenant owning this network"
- }
-
- def handle_create(self):
- props = self.prepare_properties(
- self.properties,
- self.physical_resource_name())
- net = self.neutron().create_network({'network': props})['network']
- self.resource_id_set(net['id'])
-
- def _show_resource(self):
- return self.neutron().show_network(
- self.resource_id)['network']
-
- def check_create_complete(self, *args):
- attributes = self._show_resource()
- return self.is_built(attributes)
-
- def handle_delete(self):
- client = self.neutron()
- try:
- client.delete_network(self.resource_id)
- except NeutronClientException as ex:
- if ex.status_code != 404:
- raise ex
- else:
- return scheduler.TaskRunner(self._confirm_delete)()
-
-
-def resource_mapping():
- if clients.neutronclient is None:
- return {}
-
- return {
- 'OS::Neutron::Net': Net,
- }
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from neutronclient.common.exceptions import NeutronClientException
-
-from heat.common import exception
-from heat.engine import resource
-
-from heat.openstack.common import log as logging
-
-logger = logging.getLogger(__name__)
-
-
-class NeutronResource(resource.Resource):
-
- def validate(self):
- '''
- Validate the provided properties.
- '''
- res = super(NeutronResource, self).validate()
- if res:
- return res
- return self.validate_properties(self.properties)
-
- @staticmethod
- def validate_properties(properties):
- '''
- Validates to ensure nothing in value_specs overwrites
- any key that exists in the schema.
-
- Also ensures that shared and tenant_id are not specified
- in value_specs.
- '''
- if 'value_specs' in properties.keys():
- vs = properties.get('value_specs')
- banned_keys = set(['shared', 'tenant_id']).union(
- properties.keys())
- for k in banned_keys.intersection(vs.keys()):
- return '%s not allowed in value_specs' % k
-
- @staticmethod
- def prepare_properties(properties, name):
- '''
- Prepares the property values so that they can be passed directly to
- the Neutron call.
-
- Removes None values and value_specs, merges value_specs with the main
- values.
- '''
- props = dict((k, v) for k, v in properties.items()
- if v is not None and k != 'value_specs')
-
- if 'name' in properties.keys():
- props.setdefault('name', name)
-
- if 'value_specs' in properties.keys():
- props.update(properties.get('value_specs'))
-
- return props
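The merge rules above — drop `None` values, strip `value_specs`, default the name, then fold `value_specs` back into the top level — can be exercised as a standalone function. This is a sketch mirroring the method's behavior, not the Heat class itself:

```python
def prepare_properties(properties, name):
    # Drop None values and the value_specs key itself.
    props = {k: v for k, v in properties.items()
             if v is not None and k != 'value_specs'}
    # If the schema has a 'name' property, default it to the
    # generated physical resource name when the user left it unset.
    if 'name' in properties:
        props.setdefault('name', name)
    # Fold value_specs entries into the top-level call arguments.
    if 'value_specs' in properties:
        props.update(properties['value_specs'])
    return props


merged = prepare_properties(
    {'name': None, 'admin_state_up': True,
     'value_specs': {'shared': False}},
    'stack-net-1')
```

Note the ordering: because the unset `name` is dropped first, `setdefault` fills in the physical resource name, and `value_specs` entries win over nothing since banned keys were already rejected by `validate_properties`.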
-
- @staticmethod
- def handle_get_attributes(name, key, attributes):
- '''
- Support method for responding to FnGetAtt
- '''
- if key == 'show':
- return attributes
-
- if key in attributes.keys():
- return attributes[key]
-
- raise exception.InvalidTemplateAttribute(resource=name, key=key)
-
- @staticmethod
- def is_built(attributes):
- if attributes['status'] == 'BUILD':
- return False
- if attributes['status'] in ('ACTIVE', 'DOWN'):
- return True
- raise exception.Error(
- 'neutron reported unexpected resource[%s] status[%s]' %
- (attributes['name'], attributes['status']))
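The tri-state status check above (still building, built, or error) reduces to a small pure function. A sketch using `ValueError` in place of `exception.Error`:

```python
def is_built(status):
    # BUILD means creation is still in progress.
    if status == 'BUILD':
        return False
    # ACTIVE and DOWN both count as successfully created; DOWN just
    # means the resource's admin state is disabled, not a failure.
    if status in ('ACTIVE', 'DOWN'):
        return True
    # Anything else (e.g. ERROR) aborts the create.
    raise ValueError('unexpected status: %s' % status)
```

`check_create_complete` is polled by the scheduler, so returning `False` simply defers the check to the next tick, while the exception fails the stack operation.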
-
- def _resolve_attribute(self, name):
- try:
- attributes = self._show_resource()
- except NeutronClientException as ex:
- logger.warn("failed to fetch resource attributes: %s" % str(ex))
- return None
- return self.handle_get_attributes(self.name, name, attributes)
-
- def _confirm_delete(self):
- while True:
- try:
- yield
- self._show_resource()
- except NeutronClientException as ex:
- if ex.status_code != 404:
- raise ex
- return
-
- def FnGetRefId(self):
- return unicode(self.resource_id)
-
- @staticmethod
- def get_secgroup_uuids(stack, props, props_name, rsrc_name, client):
- '''
- Returns security group names in UUID form.
-
- Args:
- stack: stack associated with given resource
- props: properties described in the template
- props_name: name of security group property
- rsrc_name: name of the given resource
- client: reference to neutronclient
- '''
- seclist = []
- for sg in props.get(props_name):
- resource = stack.resource_by_refid(sg)
- if resource is not None:
- seclist.append(resource.resource_id)
- else:
- try:
- client.show_security_group(sg)
- seclist.append(sg)
- except NeutronClientException as e:
- if e.status_code == 404:
- raise exception.InvalidTemplateAttribute(
- resource=rsrc_name,
- key=props_name)
- else:
- raise
- return seclist
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from heat.engine import clients
-from heat.openstack.common import log as logging
-from heat.engine.resources.neutron import neutron
-from heat.engine import scheduler
-
-if clients.neutronclient is not None:
- from neutronclient.common.exceptions import NeutronClientException
-
-logger = logging.getLogger(__name__)
-
-
-class Port(neutron.NeutronResource):
-
- fixed_ip_schema = {'subnet_id': {'Type': 'String',
- 'Required': True},
- 'ip_address': {'Type': 'String'}}
-
- properties_schema = {'network_id': {'Type': 'String',
- 'Required': True},
- 'name': {'Type': 'String'},
- 'value_specs': {'Type': 'Map',
- 'Default': {}},
- 'admin_state_up': {'Default': True,
- 'Type': 'Boolean'},
- 'fixed_ips': {'Type': 'List',
- 'Schema': {'Type': 'Map',
- 'Schema': fixed_ip_schema}},
- 'mac_address': {'Type': 'String'},
- 'device_id': {'Type': 'String'},
- 'security_groups': {'Type': 'List'}}
- attributes_schema = {
- "admin_state_up": "the administrative state of this port",
- "device_id": "unique identifier for the device",
- "device_owner": "name of the network owning the port",
- "fixed_ips": "fixed ip addresses",
- "id": "the unique identifier for the port",
- "mac_address": "mac address of the port",
- "name": "friendly name of the port",
- "network_id": "unique identifier for the network owning the port",
- "security_groups": "a list of security groups for the port",
- "status": "the status of the port",
- "tenant_id": "tenant owning the port"
- }
-
- def add_dependencies(self, deps):
- super(Port, self).add_dependencies(deps)
- # Depend on any Subnet in this template that shares this
- # port's network_id. Since it is not known which subnet a
- # port will be assigned to, every subnet in a network must
- # be created before any port in that network.
- for resource in self.stack.resources.itervalues():
- if (resource.has_interface('OS::Neutron::Subnet') and
- resource.properties.get('network_id') ==
- self.properties.get('network_id')):
- deps += (self, resource)
-
- def handle_create(self):
- props = self.prepare_properties(
- self.properties,
- self.physical_resource_name())
-
- if self.properties['security_groups']:
- props['security_groups'] = self.get_secgroup_uuids(
- self.stack, self.properties, 'security_groups', self.name,
- self.neutron())
-
- port = self.neutron().create_port({'port': props})['port']
- self.resource_id_set(port['id'])
-
- def _show_resource(self):
- return self.neutron().show_port(
- self.resource_id)['port']
-
- def check_create_complete(self, *args):
- attributes = self._show_resource()
- return self.is_built(attributes)
-
- def handle_delete(self):
- client = self.neutron()
- try:
- client.delete_port(self.resource_id)
- except NeutronClientException as ex:
- if ex.status_code != 404:
- raise ex
- else:
- return scheduler.TaskRunner(self._confirm_delete)()
-
-
-def resource_mapping():
- if clients.neutronclient is None:
- return {}
-
- return {
- 'OS::Neutron::Port': Port,
- }
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from heat.engine import clients
-from heat.engine.resources.neutron import neutron
-from heat.engine import scheduler
-
-if clients.neutronclient is not None:
- from neutronclient.common.exceptions import NeutronClientException
- from neutronclient.neutron import v2_0 as neutronV20
-
-from heat.openstack.common import log as logging
-
-logger = logging.getLogger(__name__)
-
-
-class Router(neutron.NeutronResource):
- properties_schema = {'name': {'Type': 'String'},
- 'value_specs': {'Type': 'Map',
- 'Default': {}},
- 'admin_state_up': {'Type': 'Boolean',
- 'Default': True}}
- attributes_schema = {
- "status": "the status of the router",
- "external_gateway_info": "gateway network for the router",
- "name": "friendly name of the router",
- "admin_state_up": "administrative state of the router",
- "tenant_id": "tenant owning the router",
- "id": "unique identifier for the router"
- }
-
- def handle_create(self):
- props = self.prepare_properties(
- self.properties,
- self.physical_resource_name())
- router = self.neutron().create_router({'router': props})['router']
- self.resource_id_set(router['id'])
-
- def _show_resource(self):
- return self.neutron().show_router(
- self.resource_id)['router']
-
- def check_create_complete(self, *args):
- attributes = self._show_resource()
- return self.is_built(attributes)
-
- def handle_delete(self):
- client = self.neutron()
- try:
- client.delete_router(self.resource_id)
- except NeutronClientException as ex:
- if ex.status_code != 404:
- raise ex
- else:
- return scheduler.TaskRunner(self._confirm_delete)()
-
-
-class RouterInterface(neutron.NeutronResource):
- properties_schema = {'router_id': {'Type': 'String',
- 'Required': True},
- 'subnet_id': {'Type': 'String',
- 'Required': True}}
-
- def handle_create(self):
- router_id = self.properties.get('router_id')
- subnet_id = self.properties.get('subnet_id')
- self.neutron().add_interface_router(
- router_id,
- {'subnet_id': subnet_id})
- self.resource_id_set('%s:%s' % (router_id, subnet_id))
-
- def handle_delete(self):
- client = self.neutron()
- (router_id, subnet_id) = self.resource_id.split(':')
- try:
- client.remove_interface_router(
- router_id,
- {'subnet_id': subnet_id})
- except NeutronClientException as ex:
- if ex.status_code != 404:
- raise ex
-
-
-class RouterGateway(neutron.NeutronResource):
- properties_schema = {'router_id': {'Type': 'String',
- 'Required': True},
- 'network_id': {'Type': 'String',
- 'Required': True}}
-
- def add_dependencies(self, deps):
- super(RouterGateway, self).add_dependencies(deps)
- for resource in self.stack.resources.itervalues():
- # depend on any RouterInterface in this template with the same
- # router_id as this router_id
- if (resource.has_interface('OS::Neutron::RouterInterface') and
- resource.properties.get('router_id') ==
- self.properties.get('router_id')):
- deps += (self, resource)
- # depend on any subnet in this template with the same network_id
- # as this network_id, as the gateway implicitly creates a port
- # on that subnet
- elif (resource.has_interface('OS::Neutron::Subnet') and
- resource.properties.get('network_id') ==
- self.properties.get('network_id')):
- deps += (self, resource)
-
- def handle_create(self):
- router_id = self.properties.get('router_id')
- network_id = neutronV20.find_resourceid_by_name_or_id(
- self.neutron(),
- 'network',
- self.properties.get('network_id'))
- self.neutron().add_gateway_router(
- router_id,
- {'network_id': network_id})
- self.resource_id_set('%s:%s' % (router_id, network_id))
-
- def handle_delete(self):
- client = self.neutron()
- (router_id, network_id) = self.resource_id.split(':')
- try:
- client.remove_gateway_router(router_id)
- except NeutronClientException as ex:
- if ex.status_code != 404:
- raise ex
-
-
-def resource_mapping():
- if clients.neutronclient is None:
- return {}
-
- return {
- 'OS::Neutron::Router': Router,
- 'OS::Neutron::RouterInterface': RouterInterface,
- 'OS::Neutron::RouterGateway': RouterGateway,
- }
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from heat.engine import clients
-from heat.openstack.common import log as logging
-from heat.engine.resources.neutron import neutron
-from heat.engine import scheduler
-
-if clients.neutronclient is not None:
- from neutronclient.common.exceptions import NeutronClientException
-
-logger = logging.getLogger(__name__)
-
-
-class Subnet(neutron.NeutronResource):
-
- allocation_schema = {'start': {'Type': 'String',
- 'Required': True},
- 'end': {'Type': 'String',
- 'Required': True}}
-
- properties_schema = {'network_id': {'Type': 'String',
- 'Required': True},
- 'cidr': {'Type': 'String',
- 'Required': True},
- 'value_specs': {'Type': 'Map',
- 'Default': {}},
- 'name': {'Type': 'String'},
- 'ip_version': {'Type': 'Integer',
- 'AllowedValues': [4, 6],
- 'Default': 4},
- 'dns_nameservers': {'Type': 'List'},
- 'gateway_ip': {'Type': 'String'},
- 'enable_dhcp': {'Type': 'Boolean'},
- 'allocation_pools': {'Type': 'List',
- 'Schema': {
- 'Type': 'Map',
- 'Schema': allocation_schema
- }},
- 'tenant_id': {'Type': 'String'}}
- attributes_schema = {
- "name": "friendly name of the subnet",
- "network_id": "parent network of the subnet",
- "tenant_id": "tenant owning the subnet",
- "allocation_pools": "ip allocation pools and their ranges",
- "gateway_ip": "ip of the subnet's gateway",
- "ip_version": "ip version for the subnet",
- "cidr": "CIDR block notation for this subnet",
- "id": "unique identifier for this subnet",
- # dns_nameservers isn't in the api docs; is it right?
- "dns_nameservers": "list of dns nameservers",
- "enable_dhcp": ("'true' if DHCP is enabled for this subnet; 'false'"
- "otherwise")
- }
-
- def handle_create(self):
- props = self.prepare_properties(
- self.properties,
- self.physical_resource_name())
- subnet = self.neutron().create_subnet({'subnet': props})['subnet']
- self.resource_id_set(subnet['id'])
-
- def handle_delete(self):
- client = self.neutron()
- try:
- client.delete_subnet(self.resource_id)
- except NeutronClientException as ex:
- if ex.status_code != 404:
- raise ex
- else:
- return scheduler.TaskRunner(self._confirm_delete)()
-
- def _show_resource(self):
- return self.neutron().show_subnet(self.resource_id)['subnet']
-
-
-def resource_mapping():
- if clients.neutronclient is None:
- return {}
-
- return {
- 'OS::Neutron::Subnet': Subnet,
- }
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from heat.engine import clients
-from heat.engine.resources.neutron import neutron
-from heat.engine import scheduler
-
-if clients.neutronclient is not None:
- from neutronclient.common.exceptions import NeutronClientException
-
-from heat.openstack.common import log as logging
-
-logger = logging.getLogger(__name__)
-
-
-class VPNService(neutron.NeutronResource):
- """
- A resource for VPN service in Neutron.
- """
-
- properties_schema = {
- 'name': {
- 'Type': 'String',
- 'Description': _('Name for the vpn service.')
- },
- 'description': {
- 'Type': 'String',
- 'Description': _('Description for the vpn service.')
- },
- 'admin_state_up': {
- 'Type': 'Boolean',
- 'Default': True,
- 'Description': _('Administrative state for the vpn service.')
- },
- 'subnet_id': {
- 'Type': 'String',
- 'Required': True,
- 'Description': _('Unique identifier for the subnet in which the '
- 'vpn service will be created.')
- },
- 'router_id': {
- 'Type': 'String',
- 'Required': True,
- 'Description': _('Unique identifier for the router to which the '
- 'vpn service will be inserted.')
- }
- }
-
- attributes_schema = {
- 'admin_state_up': _('The administrative state of the vpn service.'),
- 'description': _('The description of the vpn service.'),
- 'id': _('The unique identifier of the vpn service.'),
- 'name': _('The name of the vpn service.'),
- 'router_id': _('The unique identifier of the router to which the vpn '
- 'service was inserted.'),
- 'status': _('The status of the vpn service.'),
- 'subnet_id': _('The unique identifier of the subnet in which the vpn '
- 'service was created.'),
- 'tenant_id': _('The unique identifier of the tenant owning the vpn '
- 'service.')
- }
-
- update_allowed_keys = ('Properties',)
-
- update_allowed_properties = ('name', 'description', 'admin_state_up',)
-
- def _show_resource(self):
- return self.neutron().show_vpnservice(self.resource_id)['vpnservice']
-
- def handle_create(self):
- props = self.prepare_properties(
- self.properties,
- self.physical_resource_name())
- vpnservice = self.neutron().create_vpnservice({'vpnservice': props})[
- 'vpnservice']
- self.resource_id_set(vpnservice['id'])
-
- def handle_update(self, json_snippet, tmpl_diff, prop_diff):
- if prop_diff:
- self.neutron().update_vpnservice(self.resource_id,
- {'vpnservice': prop_diff})
-
- def handle_delete(self):
- client = self.neutron()
- try:
- client.delete_vpnservice(self.resource_id)
- except NeutronClientException as ex:
- if ex.status_code != 404:
- raise ex
- else:
- return scheduler.TaskRunner(self._confirm_delete)()
-
-
-class IPsecSiteConnection(neutron.NeutronResource):
- """
- A resource for IPsec site connection in Neutron.
- """
-
- dpd_schema = {
- 'actions': {
- 'Type': 'String',
- 'AllowedValues': ['clear', 'disabled', 'hold', 'restart',
- 'restart-by-peer'],
- 'Default': 'hold',
- 'Description': _('Controls DPD protocol mode.')
- },
- 'interval': {
- 'Type': 'Integer',
- 'Default': 30,
- 'Description': _('Number of seconds for the DPD delay.')
- },
- 'timeout': {
- 'Type': 'Integer',
- 'Default': 120,
- 'Description': _('Number of seconds for the DPD timeout.')
- },
- }
-
- properties_schema = {
- 'name': {
- 'Type': 'String',
- 'Description': _('Name for the ipsec site connection.')
- },
- 'description': {
- 'Type': 'String',
- 'Description': _('Description for the ipsec site connection.')
- },
- 'peer_address': {
- 'Type': 'String',
- 'Required': True,
- 'Description': _('Remote branch router public IPv4 address or '
- 'IPv6 address or FQDN.')
- },
- 'peer_id': {
- 'Type': 'String',
- 'Required': True,
- 'Description': _('Remote branch router identity.')
- },
- 'peer_cidrs': {
- 'Type': 'List',
- 'Required': True,
- 'Description': _('Remote subnet(s) in CIDR format.')
- },
- 'mtu': {
- 'Type': 'Integer',
- 'Default': 1500,
- 'Description': _('Maximum transmission unit size (in bytes) for '
- 'the ipsec site connection.')
- },
- 'dpd': {
- 'Type': 'Map',
- 'Schema': dpd_schema,
- 'Description': _('Dead Peer Detection protocol configuration for '
- 'the ipsec site connection.')
- },
- 'psk': {
- 'Type': 'String',
- 'Required': True,
- 'Description': _('Pre-shared key string for the ipsec site '
- 'connection.')
- },
- 'initiator': {
- 'Type': 'String',
- 'AllowedValues': ['bi-directional', 'response-only'],
- 'Default': 'bi-directional',
- 'Description': _('Initiator state in lowercase for the ipsec site '
- 'connection.')
- },
- 'admin_state_up': {
- 'Type': 'Boolean',
- 'Default': True,
- 'Description': _('Administrative state for the ipsec site '
- 'connection.')
- },
- 'ikepolicy_id': {
- 'Type': 'String',
- 'Required': True,
- 'Description': _('Unique identifier for the ike policy associated '
- 'with the ipsec site connection.')
- },
- 'ipsecpolicy_id': {
- 'Type': 'String',
- 'Required': True,
- 'Description': _('Unique identifier for the ipsec policy '
- 'associated with the ipsec site connection.')
- },
- 'vpnservice_id': {
- 'Type': 'String',
- 'Required': True,
- 'Description': _('Unique identifier for the vpn service '
- 'associated with the ipsec site connection.')
- }
- }
-
- attributes_schema = {
- 'admin_state_up': _('The administrative state of the ipsec site '
- 'connection.'),
- 'auth_mode': _('The authentication mode of the ipsec site '
- 'connection.'),
- 'description': _('The description of the ipsec site connection.'),
- 'dpd': _('The dead peer detection protocol configuration of the ipsec '
- 'site connection.'),
- 'id': _('The unique identifier of the ipsec site connection.'),
- 'ikepolicy_id': _('The unique identifier of ike policy associated '
- 'with the ipsec site connection.'),
- 'initiator': _('The initiator of the ipsec site connection.'),
- 'ipsecpolicy_id': _('The unique identifier of ipsec policy '
- 'associated with the ipsec site connection.'),
- 'mtu': _('The maximum transmission unit size (in bytes) of the ipsec '
- 'site connection.'),
- 'name': _('The name of the ipsec site connection.'),
- 'peer_address': _('The remote branch router public IPv4 address or '
- 'IPv6 address or FQDN.'),
- 'peer_cidrs': _('The remote subnet(s) in CIDR format of the ipsec '
- 'site connection.'),
- 'peer_id': _('The remote branch router identity of the ipsec site '
- 'connection.'),
- 'psk': _('The pre-shared key string of the ipsec site connection.'),
- 'route_mode': _('The route mode of the ipsec site connection.'),
- 'status': _('The status of the ipsec site connection.'),
- 'tenant_id': _('The unique identifier of the tenant owning the ipsec '
- 'site connection.'),
- 'vpnservice_id': _('The unique identifier of vpn service associated '
- 'with the ipsec site connection.')
- }
-
- update_allowed_keys = ('Properties',)
-
- update_allowed_properties = ('name', 'description', 'admin_state_up',)
-
- def _show_resource(self):
- return self.neutron().show_ipsec_site_connection(self.resource_id)[
- 'ipsec_site_connection']
-
- def handle_create(self):
- props = self.prepare_properties(
- self.properties,
- self.physical_resource_name())
- ipsec_site_connection = self.neutron().create_ipsec_site_connection(
- {'ipsec_site_connection': props})['ipsec_site_connection']
- self.resource_id_set(ipsec_site_connection['id'])
-
- def handle_update(self, json_snippet, tmpl_diff, prop_diff):
- if prop_diff:
- self.neutron().update_ipsec_site_connection(
- self.resource_id, {'ipsec_site_connection': prop_diff})
-
- def handle_delete(self):
- client = self.neutron()
- try:
- client.delete_ipsec_site_connection(self.resource_id)
- except NeutronClientException as ex:
- if ex.status_code != 404:
- raise ex
- else:
- return scheduler.TaskRunner(self._confirm_delete)()
-
-
-class IKEPolicy(neutron.NeutronResource):
- """
- A resource for IKE policy in Neutron.
- """
-
- lifetime_schema = {
- 'units': {
- 'Type': 'String',
- 'AllowedValues': ['seconds', 'kilobytes'],
- 'Default': 'seconds',
- 'Description': _('Safety assessment lifetime units.')
- },
- 'value': {
- 'Type': 'Integer',
- 'Default': 3600,
- 'Description': _('Safety assessment lifetime value in specified '
- 'units.')
- },
- }
-
- properties_schema = {
- 'name': {
- 'Type': 'String',
- 'Description': _('Name for the ike policy.')
- },
- 'description': {
- 'Type': 'String',
- 'Description': _('Description for the ike policy.')
- },
- 'auth_algorithm': {
- 'Type': 'String',
- 'AllowedValues': ['sha1'],
- 'Default': 'sha1',
- 'Description': _('Authentication hash algorithm for the ike '
- 'policy.')
- },
- 'encryption_algorithm': {
- 'Type': 'String',
- 'AllowedValues': ['3des', 'aes-128', 'aes-192', 'aes-256'],
- 'Default': 'aes-128',
- 'Description': _('Encryption algorithm for the ike policy.')
- },
- 'phase1_negotiation_mode': {
- 'Type': 'String',
- 'AllowedValues': ['main'],
- 'Default': 'main',
- 'Description': _('Negotiation mode for the ike policy.')
- },
- 'lifetime': {
- 'Type': 'Map',
- 'Schema': lifetime_schema,
- 'Description': _('Safety assessment lifetime configuration for '
- 'the ike policy.')
- },
- 'pfs': {
- 'Type': 'String',
- 'AllowedValues': ['group2', 'group5', 'group14'],
- 'Default': 'group5',
- 'Description': _('Perfect forward secrecy in lowercase for the '
- 'ike policy.')
- },
- 'ike_version': {
- 'Type': 'String',
- 'AllowedValues': ['v1', 'v2'],
- 'Default': 'v1',
- 'Description': _('Version for the ike policy.')
- }
- }
-
- attributes_schema = {
- 'auth_algorithm': _('The authentication hash algorithm used by the ike'
- ' policy.'),
- 'description': _('The description of the ike policy.'),
- 'encryption_algorithm': _('The encryption algorithm used by the ike '
- 'policy.'),
- 'id': _('The unique identifier for the ike policy.'),
- 'ike_version': _('The version of the ike policy.'),
- 'lifetime': _('The safety assessment lifetime configuration for the '
- 'ike policy.'),
- 'name': _('The name of the ike policy.'),
- 'pfs': _('The perfect forward secrecy of the ike policy.'),
- 'phase1_negotiation_mode': _('The negotiation mode of the ike '
- 'policy.'),
- 'tenant_id': _('The unique identifier of the tenant owning the ike '
- 'policy.'),
- }
-
- update_allowed_keys = ('Properties',)
-
- update_allowed_properties = ('name', 'description',)
-
- def _show_resource(self):
- return self.neutron().show_ikepolicy(self.resource_id)['ikepolicy']
-
- def handle_create(self):
- props = self.prepare_properties(
- self.properties,
- self.physical_resource_name())
- ikepolicy = self.neutron().create_ikepolicy({'ikepolicy': props})[
- 'ikepolicy']
- self.resource_id_set(ikepolicy['id'])
-
- def handle_update(self, json_snippet, tmpl_diff, prop_diff):
- if prop_diff:
- self.neutron().update_ikepolicy(self.resource_id,
- {'ikepolicy': prop_diff})
-
- def handle_delete(self):
- client = self.neutron()
- try:
- client.delete_ikepolicy(self.resource_id)
- except NeutronClientException as ex:
- if ex.status_code != 404:
- raise
- else:
- return scheduler.TaskRunner(self._confirm_delete)()
-
-
-class IPsecPolicy(neutron.NeutronResource):
- """
- A resource for IPsec policy in Neutron.
- """
-
- lifetime_schema = {
- 'units': {
- 'Type': 'String',
- 'AllowedValues': ['seconds', 'kilobytes'],
- 'Default': 'seconds',
- 'Description': _('Safety assessment lifetime units.')
- },
- 'value': {
- 'Type': 'Integer',
- 'Default': 3600,
- 'Description': _('Safety assessment lifetime value in specified '
- 'units.')
- },
- }
-
- properties_schema = {
- 'name': {
- 'Type': 'String',
- 'Description': _('Name for the ipsec policy.')
- },
- 'description': {
- 'Type': 'String',
- 'Description': _('Description for the ipsec policy.')
- },
- 'transform_protocol': {
- 'Type': 'String',
- 'AllowedValues': ['esp', 'ah', 'ah-esp'],
- 'Default': 'esp',
- 'Description': _('Transform protocol for the ipsec policy.')
- },
- 'encapsulation_mode': {
- 'Type': 'String',
- 'AllowedValues': ['tunnel', 'transport'],
- 'Default': 'tunnel',
- 'Description': _('Encapsulation mode for the ipsec policy.')
- },
- 'auth_algorithm': {
- 'Type': 'String',
- 'AllowedValues': ['sha1'],
- 'Default': 'sha1',
- 'Description': _('Authentication hash algorithm for the ipsec '
- 'policy.')
- },
- 'encryption_algorithm': {
- 'Type': 'String',
- 'AllowedValues': ['3des', 'aes-128', 'aes-192', 'aes-256'],
- 'Default': 'aes-128',
- 'Description': _('Encryption algorithm for the ipsec policy.')
- },
- 'lifetime': {
- 'Type': 'Map',
- 'Schema': lifetime_schema,
- 'Description': _('Safety assessment lifetime configuration for '
- 'the ipsec policy.')
- },
- 'pfs': {
- 'Type': 'String',
- 'AllowedValues': ['group2', 'group5', 'group14'],
- 'Default': 'group5',
- 'Description': _('Perfect forward secrecy for the ipsec policy.')
- }
- }
-
- attributes_schema = {
- 'auth_algorithm': _('The authentication hash algorithm of the ipsec '
- 'policy.'),
- 'description': _('The description of the ipsec policy.'),
- 'encapsulation_mode': _('The encapsulation mode of the ipsec policy.'),
- 'encryption_algorithm': _('The encryption algorithm of the ipsec '
- 'policy.'),
- 'id': _('The unique identifier of this ipsec policy.'),
- 'lifetime': _('The safety assessment lifetime configuration of the '
- 'ipsec policy.'),
- 'name': _('The name of the ipsec policy.'),
- 'pfs': _('The perfect forward secrecy of the ipsec policy.'),
- 'tenant_id': _('The unique identifier of the tenant owning the '
- 'ipsec policy.'),
- 'transform_protocol': _('The transform protocol of the ipsec policy.')
- }
-
- update_allowed_keys = ('Properties',)
-
- update_allowed_properties = ('name', 'description',)
-
- def _show_resource(self):
- return self.neutron().show_ipsecpolicy(self.resource_id)['ipsecpolicy']
-
- def handle_create(self):
- props = self.prepare_properties(
- self.properties,
- self.physical_resource_name())
- ipsecpolicy = self.neutron().create_ipsecpolicy(
- {'ipsecpolicy': props})['ipsecpolicy']
- self.resource_id_set(ipsecpolicy['id'])
-
- def handle_update(self, json_snippet, tmpl_diff, prop_diff):
- if prop_diff:
- self.neutron().update_ipsecpolicy(self.resource_id,
- {'ipsecpolicy': prop_diff})
-
- def handle_delete(self):
- client = self.neutron()
- try:
- client.delete_ipsecpolicy(self.resource_id)
- except NeutronClientException as ex:
- if ex.status_code != 404:
- raise ex
- else:
- return scheduler.TaskRunner(self._confirm_delete)()
-
-
-def resource_mapping():
- if clients.neutronclient is None:
- return {}
-
- return {
- 'OS::Neutron::VPNService': VPNService,
- 'OS::Neutron::IPsecSiteConnection': IPsecSiteConnection,
- 'OS::Neutron::IKEPolicy': IKEPolicy,
- 'OS::Neutron::IPsecPolicy': IPsecPolicy,
- }
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-"""Utilities for Resources that use the Openstack Nova API."""
-
-from email.mime.multipart import MIMEMultipart
-from email.mime.text import MIMEText
-
-import json
-import os
-import pkgutil
-
-from urlparse import urlparse
-
-from oslo.config import cfg
-
-from heat.common import exception
-from heat.engine import clients
-from heat.openstack.common import log as logging
-from heat.openstack.common import uuidutils
-
-logger = logging.getLogger(__name__)
-
-
-deferred_server_statuses = ['BUILD',
- 'HARD_REBOOT',
- 'PASSWORD',
- 'REBOOT',
- 'RESCUE',
- 'RESIZE',
- 'REVERT_RESIZE',
- 'SHUTOFF',
- 'SUSPENDED',
- 'VERIFY_RESIZE']
-
-
-def get_image_id(nova_client, image_identifier):
- '''
- Return an id for the specified image name or identifier.
-
- :param nova_client: the nova client to use
- :param image_identifier: image name or a UUID-like identifier
- :returns: the id of the requested :image_identifier:
- :raises: exception.ImageNotFound, exception.NoUniqueImageFound
- '''
- image_id = None
- if uuidutils.is_uuid_like(image_identifier):
- try:
- image_id = nova_client.images.get(image_identifier).id
- except clients.novaclient.exceptions.NotFound:
- logger.info("Image %s was not found in glance"
- % image_identifier)
- raise exception.ImageNotFound(image_name=image_identifier)
- else:
- try:
- image_list = nova_client.images.list()
- except clients.novaclient.exceptions.ClientException as ex:
- raise exception.Error(
- message="Error retrieving image list from nova: %s" % str(ex))
- image_names = dict(
- (o.id, o.name)
- for o in image_list if o.name == image_identifier)
- if len(image_names) == 0:
- logger.info("Image %s was not found in glance" %
- image_identifier)
- raise exception.ImageNotFound(image_name=image_identifier)
- elif len(image_names) > 1:
- logger.info("Mulitple images %s were found in glance with name"
- % image_identifier)
- raise exception.NoUniqueImageFound(image_name=image_identifier)
- image_id = image_names.popitem()[0]
- return image_id
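`get_image_id` distinguishes three outcomes when resolving an image by name: no match, exactly one match, and an ambiguous name. A standalone sketch of that lookup logic, using plain `(id, name)` tuples in place of the nova client's image list, is:

```python
class ImageNotFound(Exception):
    """Raised when no image matches the requested name."""


class NoUniqueImageFound(Exception):
    """Raised when more than one image matches the requested name."""


def find_image_id(images, name):
    """Return the id of the image whose name matches exactly.

    `images` is an iterable of (id, name) pairs standing in for
    nova_client.images.list().
    """
    matches = [image_id for image_id, image_name in images
               if image_name == name]
    if not matches:
        raise ImageNotFound(name)
    if len(matches) > 1:
        raise NoUniqueImageFound(name)
    return matches[0]
```

The three-way split mirrors the `ImageNotFound` / `NoUniqueImageFound` exceptions raised above.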
-
-
-def get_flavor_id(nova_client, flavor):
- '''
- Get the id for the specified flavor name.
- If the specified value is flavor id, just return it.
-
- :param nova_client: the nova client to use
- :param flavor: the name of the flavor to find
- :returns: the id of :flavor:
- :raises: exception.FlavorMissing
- '''
- flavor_id = None
- flavor_list = nova_client.flavors.list()
- for o in flavor_list:
- if o.name == flavor:
- flavor_id = o.id
- break
- if o.id == flavor:
- flavor_id = o.id
- break
- if flavor_id is None:
- raise exception.FlavorMissing(flavor_id=flavor)
- return flavor_id
-
-
-def get_keypair(nova_client, key_name):
- '''
- Get the public key specified by :key_name:
-
- :param nova_client: the nova client to use
- :param key_name: the name of the key to look for
- :returns: the keypair (name, public_key) for :key_name:
- :raises: exception.UserKeyPairMissing
- '''
- for keypair in nova_client.keypairs.list():
- if keypair.name == key_name:
- return keypair
- raise exception.UserKeyPairMissing(key_name=key_name)
-
-
-def build_userdata(resource, userdata=None):
- '''
- Build multipart data blob for CloudInit which includes user-supplied
- Metadata, user data, and the required Heat in-instance configuration.
-
- :param resource: the resource implementation
- :type resource: heat.engine.Resource
- :param userdata: user data string
- :type userdata: str or None
- :returns: multipart mime as a string
- '''
-
- def make_subpart(content, filename, subtype=None):
- if subtype is None:
- subtype = os.path.splitext(filename)[0]
- msg = MIMEText(content, _subtype=subtype)
- msg.add_header('Content-Disposition', 'attachment',
- filename=filename)
- return msg
-
- def read_cloudinit_file(fn):
- data = pkgutil.get_data('heat', 'cloudinit/%s' % fn)
- data = data.replace('@INSTANCE_USER@',
- cfg.CONF.instance_user)
- return data
-
- attachments = [(read_cloudinit_file('config'), 'cloud-config'),
- (read_cloudinit_file('boothook.sh'), 'boothook.sh',
- 'cloud-boothook'),
- (read_cloudinit_file('part_handler.py'),
- 'part-handler.py'),
- (userdata, 'cfn-userdata', 'x-cfninitdata'),
- (read_cloudinit_file('loguserdata.py'),
- 'loguserdata.py', 'x-shellscript')]
-
- if 'Metadata' in resource.t:
- attachments.append((json.dumps(resource.metadata),
- 'cfn-init-data', 'x-cfninitdata'))
-
- attachments.append((cfg.CONF.heat_watch_server_url,
- 'cfn-watch-server', 'x-cfninitdata'))
-
- attachments.append((cfg.CONF.heat_metadata_server_url,
- 'cfn-metadata-server', 'x-cfninitdata'))
-
- # Create a boto config which the cfntools on the host use to know
- # where the cfn and cw API's are to be accessed
- cfn_url = urlparse(cfg.CONF.heat_metadata_server_url)
- cw_url = urlparse(cfg.CONF.heat_watch_server_url)
- is_secure = cfg.CONF.instance_connection_is_secure
- vcerts = cfg.CONF.instance_connection_https_validate_certificates
- boto_cfg = "\n".join(["[Boto]",
- "debug = 0",
- "is_secure = %s" % is_secure,
- "https_validate_certificates = %s" % vcerts,
- "cfn_region_name = heat",
- "cfn_region_endpoint = %s" %
- cfn_url.hostname,
- "cloudwatch_region_name = heat",
- "cloudwatch_region_endpoint = %s" %
- cw_url.hostname])
- attachments.append((boto_cfg,
- 'cfn-boto-cfg', 'x-cfninitdata'))
-
- subparts = [make_subpart(*args) for args in attachments]
- mime_blob = MIMEMultipart(_subparts=subparts)
-
- return mime_blob.as_string()
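`build_userdata` packages the cloud-init payload as a multipart MIME message, one attachment per config file, each tagged with a subtype that cloud-init's part handlers dispatch on. A minimal self-contained sketch of that packaging step, with made-up part contents standing in for the real cloudinit files, is:

```python
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText


def make_subpart(content, filename, subtype):
    """Wrap one attachment the way build_userdata's helper does."""
    msg = MIMEText(content, _subtype=subtype)
    msg.add_header('Content-Disposition', 'attachment', filename=filename)
    return msg


# Hypothetical stand-ins for the real cloud-init attachments.
attachments = [
    ('#cloud-config\n', 'cloud-config', 'cloud-config'),
    ('#!/bin/sh\necho hello\n', 'userdata.sh', 'x-shellscript'),
]
subparts = [make_subpart(*args) for args in attachments]
blob = MIMEMultipart(_subparts=subparts).as_string()
```

Each part's `Content-Type` (e.g. `text/x-shellscript`) is what tells cloud-init how to treat the attachment.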
-
-
-def delete_server(server):
- '''
- Return a co-routine that deletes the server and waits for it to
- disappear from Nova.
- '''
- server.delete()
-
- while True:
- yield
-
- try:
- server.get()
- except clients.novaclient.exceptions.NotFound:
- break
-
-
-def check_resize(server, flavor):
- """
- Verify that the server has been resized. If so, confirm the resize;
- otherwise, raise an error.
- """
- yield
- server.get()
- while server.status == 'RESIZE':
- yield
- server.get()
- if server.status == 'VERIFY_RESIZE':
- server.confirm_resize()
- else:
- raise exception.Error(
- _("Resizing to '%(flavor)s' failed, status '%(status)s'") %
- dict(flavor=flavor, status=server.status))
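`delete_server` and `check_resize` are both written as Heat task generators: they yield control while polling, and the scheduler resumes them until the condition resolves. The pattern, reduced to a standalone sketch driven by a fake `poll` callable, is:

```python
def wait_for_status(poll, busy, target):
    """Yield while poll() reports the busy status, then verify the
    final status, mirroring check_resize above."""
    yield
    status = poll()
    while status == busy:
        # Hand control back to the scheduler between polls.
        yield
        status = poll()
    if status != target:
        raise RuntimeError('ended in unexpected status %r' % status)


# Drive the generator with a scripted sequence of statuses.
statuses = iter(['RESIZE', 'RESIZE', 'VERIFY_RESIZE'])
runner = wait_for_status(lambda: next(statuses), 'RESIZE', 'VERIFY_RESIZE')
steps = 0
for _ in runner:
    steps += 1  # each iteration corresponds to one scheduler tick
```

In Heat the loop body is `scheduler.TaskRunner`, which interleaves sleeps between the yields.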
-
-
-def server_to_ipaddress(client, server):
- '''
- Return the server's IP address, fetching it from Nova.
- '''
- try:
- server = client.servers.get(server)
- except clients.novaclient.exceptions.NotFound as ex:
- logger.warn('Instance (%s) not found: %s' % (server, str(ex)))
- else:
- for n in server.networks:
- if len(server.networks[n]) > 0:
- return server.networks[n][0]
+++ /dev/null
- # The pyrax client library is required to work with Rackspace cloud
- # resources such as Rackspace Cloud Databases, Rackspace Cloud Servers, etc.
- # If the pyrax client library is not installed, heat-engine will not
- # register the Rackspace cloud resource providers.
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-try:
- from pyrax.exceptions import NotFound
-except ImportError:
- #Set up a fake exception for testing without pyrax
- class NotFound(Exception):
- pass
-
-from heat.openstack.common import log as logging
-from heat.openstack.common.gettextutils import _
-from heat.engine import scheduler
-from heat.engine.properties import Properties
-from heat.engine.resources.rackspace import rackspace_resource
-from heat.common import exception
-
-logger = logging.getLogger(__name__)
-
-
-class LoadbalancerBuildError(exception.HeatException):
- message = _("There was an error building the loadbalancer:%(lb_name)s.")
-
-
-class CloudLoadBalancer(rackspace_resource.RackspaceResource):
-
- protocol_values = ["DNS_TCP", "DNS_UDP", "FTP", "HTTP", "HTTPS", "IMAPS",
- "IMAPv4", "LDAP", "LDAPS", "MYSQL", "POP3", "POP3S",
- "SMTP", "TCP", "TCP_CLIENT_FIRST", "UDP", "UDP_STREAM",
- "SFTP"]
-
- algorithm_values = ["LEAST_CONNECTIONS", "RANDOM", "ROUND_ROBIN",
- "WEIGHTED_LEAST_CONNECTIONS", "WEIGHTED_ROUND_ROBIN"]
-
- nodes_schema = {
- 'address': {'Type': 'String', 'Required': False},
- 'ref': {'Type': 'String', 'Required': False},
- 'port': {'Type': 'Number', 'Required': True},
- 'condition': {'Type': 'String', 'Required': True,
- 'AllowedValues': ['ENABLED', 'DISABLED'],
- 'Default': 'ENABLED'},
- 'type': {'Type': 'String', 'Required': False,
- 'AllowedValues': ['PRIMARY', 'SECONDARY']},
- 'weight': {'Type': 'Number', 'MinValue': 1, 'MaxValue': 100}
- }
-
- access_list_schema = {
- 'address': {'Type': 'String', 'Required': True},
- 'type': {'Type': 'String', 'Required': True,
- 'AllowedValues': ['ALLOW', 'DENY']}
- }
-
- connection_logging_schema = {
- 'enabled': {'Type': 'String', 'Required': True,
- 'AllowedValues': ["true", "false"]}
- }
-
- connection_throttle_schema = {
- 'maxConnectionRate': {'Type': 'Number', 'Required': False,
- 'MinValue': 0, 'MaxValue': 100000},
- 'minConnections': {'Type': 'Number', 'Required': False, 'MinValue': 1,
- 'MaxValue': 1000},
- 'maxConnections': {'Type': 'Number', 'Required': False, 'MinValue': 1,
- 'MaxValue': 100000},
- 'rateInterval': {'Type': 'Number', 'Required': False, 'MinValue': 1,
- 'MaxValue': 3600}
- }
-
- virtualip_schema = {
- 'type': {'Type': 'String', 'Required': True,
- 'AllowedValues': ['SERVICENET', 'PUBLIC']},
- 'ipVersion': {'Type': 'String', 'Required': False,
- 'AllowedValues': ['IPV6', 'IPV4'],
- 'Default': 'IPV6'}
- }
-
- health_monitor_base_schema = {
- 'attemptsBeforeDeactivation': {'Type': 'Number', 'MinValue': 1,
- 'MaxValue': 10, 'Required': True},
- 'delay': {'Type': 'Number', 'MinValue': 1, 'MaxValue': 3600,
- 'Required': True},
- 'timeout': {'Type': 'Number', 'MinValue': 1, 'MaxValue': 300,
- 'Required': True},
- 'type': {'Type': 'String',
- 'AllowedValues': ['CONNECT', 'HTTP', 'HTTPS'],
- 'Required': True},
- 'bodyRegex': {'Type': 'String', 'Required': False},
- 'hostHeader': {'Type': 'String', 'Required': False},
- 'path': {'Type': 'String', 'Required': False},
- 'statusRegex': {'Type': 'String', 'Required': False},
- }
-
- health_monitor_connect_schema = {
- 'attemptsBeforeDeactivation': {'Type': 'Number', 'MinValue': 1,
- 'MaxValue': 10, 'Required': True},
- 'delay': {'Type': 'Number', 'MinValue': 1, 'MaxValue': 3600,
- 'Required': True},
- 'timeout': {'Type': 'Number', 'MinValue': 1, 'MaxValue': 300,
- 'Required': True},
- 'type': {'Type': 'String', 'AllowedValues': ['CONNECT'],
- 'Required': True}
- }
-
- health_monitor_http_schema = {
- 'attemptsBeforeDeactivation': {'Type': 'Number', 'Required': True,
- 'MaxValue': 10, 'MinValue': 1},
- 'bodyRegex': {'Type': 'String', 'Required': True},
- 'delay': {'Type': 'Number', 'Required': True,
- 'MaxValue': 3600, 'MinValue': 1},
- 'hostHeader': {'Type': 'String', 'Required': False},
- 'path': {'Type': 'String', 'Required': True},
- 'statusRegex': {'Type': 'String', 'Required': True},
- 'timeout': {'Type': 'Number', 'Required': True,
- 'MaxValue': 300, 'MinValue': 1},
- 'type': {'Type': 'String', 'Required': True,
- 'AllowedValues': ['HTTP', 'HTTPS']}
- }
-
- ssl_termination_base_schema = {
- "enabled": {'Type': 'Boolean', 'Required': True},
- "securePort": {'Type': 'Number', 'Required': False},
- "privatekey": {'Type': 'String', 'Required': False},
- "certificate": {'Type': 'String', 'Required': False},
- #only required if configuring intermediate ssl termination
- #add to custom validation
- "intermediateCertificate": {'Type': 'String', 'Required': False},
- #pyrax will default to false
- "secureTrafficOnly": {'Type': 'Boolean', 'Required': False}
- }
-
- ssl_termination_enabled_schema = {
- "securePort": {'Type': 'Number', 'Required': True},
- "privatekey": {'Type': 'String', 'Required': True},
- "certificate": {'Type': 'String', 'Required': True},
- "intermediateCertificate": {'Type': 'String', 'Required': False},
- "enabled": {'Type': 'Boolean', 'Required': True,
- 'AllowedValues': [True]},
- "secureTrafficOnly": {'Type': 'Boolean', 'Required': False}
- }
-
- properties_schema = {
- 'name': {'Type': 'String', 'Required': False},
- 'nodes': {'Type': 'List', 'Required': True,
- 'Schema': {'Type': 'Map', 'Schema': nodes_schema}},
- 'protocol': {'Type': 'String', 'Required': True,
- 'AllowedValues': protocol_values},
- 'accessList': {'Type': 'List', 'Required': False,
- 'Schema': {'Type': 'Map',
- 'Schema': access_list_schema}},
- 'halfClosed': {'Type': 'Boolean', 'Required': False},
- 'algorithm': {'Type': 'String', 'Required': False},
- 'connectionLogging': {'Type': 'Boolean', 'Required': False},
- 'metadata': {'Type': 'Map', 'Required': False},
- 'port': {'Type': 'Number', 'Required': True},
- 'timeout': {'Type': 'Number', 'Required': False, 'MinValue': 1,
- 'MaxValue': 120},
- 'connectionThrottle': {'Type': 'Map', 'Required': False,
- 'Schema': connection_throttle_schema},
- 'sessionPersistence': {'Type': 'String', 'Required': False,
- 'AllowedValues': ['HTTP_COOKIE', 'SOURCE_IP']},
- 'virtualIps': {'Type': 'List', 'Required': True,
- 'Schema': {'Type': 'Map', 'Schema': virtualip_schema}},
- 'contentCaching': {'Type': 'String', 'Required': False,
- 'AllowedValues': ['ENABLED', 'DISABLED']},
- 'healthMonitor': {'Type': 'Map', 'Required': False,
- 'Schema': health_monitor_base_schema},
- 'sslTermination': {'Type': 'Map', 'Required': False,
- 'Schema': ssl_termination_base_schema},
- 'errorPage': {'Type': 'String', 'Required': False}
- }
-
- attributes_schema = {
- 'PublicIp': ('Public IP address of the specified '
- 'instance.')}
-
- update_allowed_keys = ('Properties',)
- update_allowed_properties = ('nodes',)
-
- def __init__(self, name, json_snippet, stack):
- super(CloudLoadBalancer, self).__init__(name, json_snippet, stack)
- self.clb = self.cloud_lb()
-
- def _setup_properties(self, properties, function):
- """Use defined schema properties as kwargs for loadbalancer objects."""
- if properties and function:
- return [function(**item_dict) for item_dict in properties]
- elif function:
- return [function()]
-
- def _alter_properties_for_api(self):
- """The following properties have usless key/value pairs which must
- be passed into the api. Set them up to make template definition easier.
- """
- session_persistence = None
- if 'sessionPersistence' in self.properties.data:
- session_persistence = {'persistenceType':
- self.properties['sessionPersistence']}
- connection_logging = None
- if 'connectionLogging' in self.properties.data:
- connection_logging = {'enabled':
- self.properties['connectionLogging']}
- metadata = None
- if 'metadata' in self.properties.data:
- metadata = [{'key': k, 'value': v}
- for k, v in self.properties['metadata'].iteritems()]
-
- return (session_persistence, connection_logging, metadata)
-
- def _check_status(self, loadbalancer, status_list):
- """Update the loadbalancer state, check the status."""
- loadbalancer.get()
- return loadbalancer.status in status_list
-
- def _configure_post_creation(self, loadbalancer):
- """Configure all load balancer properties that must be done post
- creation.
- """
- if self.properties['accessList']:
- while not self._check_status(loadbalancer, ['ACTIVE']):
- yield
- loadbalancer.add_access_list(self.properties['accessList'])
-
- if self.properties['errorPage']:
- while not self._check_status(loadbalancer, ['ACTIVE']):
- yield
- loadbalancer.set_error_page(self.properties['errorPage'])
-
- if self.properties['sslTermination']:
- while not self._check_status(loadbalancer, ['ACTIVE']):
- yield
- loadbalancer.add_ssl_termination(
- self.properties['sslTermination']['securePort'],
- self.properties['sslTermination']['privatekey'],
- self.properties['sslTermination']['certificate'],
- intermediateCertificate=
- self.properties['sslTermination']
- ['intermediateCertificate'],
- enabled=self.properties['sslTermination']['enabled'],
- secureTrafficOnly=self.properties['sslTermination']
- ['secureTrafficOnly'])
-
- if 'contentCaching' in self.properties:
- enabled = self.properties['contentCaching'] == 'ENABLED'
- while not self._check_status(loadbalancer, ['ACTIVE']):
- yield
- loadbalancer.content_caching = enabled
-
- def handle_create(self):
- node_list = []
- for node in self.properties['nodes']:
- # resolve references to stack resource IP's
- if node.get('ref'):
- node['address'] = (self.stack
- .resource_by_refid(node['ref'])
- .FnGetAtt('PublicIp'))
- del node['ref']
- node_list.append(node)
-
- nodes = self._setup_properties(node_list, self.clb.Node)
- virtual_ips = self._setup_properties(self.properties.get('virtualIps'),
- self.clb.VirtualIP)
-
- (session_persistence, connection_logging, metadata) = \
- self._alter_properties_for_api()
-
- lb_body = {
- 'port': self.properties['port'],
- 'protocol': self.properties['protocol'],
- 'nodes': nodes,
- 'virtual_ips': virtual_ips,
- 'algorithm': self.properties.get('algorithm'),
- 'halfClosed': self.properties.get('halfClosed'),
- 'connectionThrottle': self.properties.get('connectionThrottle'),
- 'metadata': metadata,
- 'healthMonitor': self.properties.get('healthMonitor'),
- 'sessionPersistence': session_persistence,
- 'timeout': self.properties.get('timeout'),
- 'connectionLogging': connection_logging,
- }
-
- lb_name = self.properties.get('name') or self.physical_resource_name()
- logger.debug('Creating loadbalancer: %s' % {lb_name: lb_body})
- loadbalancer = self.clb.create(lb_name, **lb_body)
- self.resource_id_set(str(loadbalancer.id))
-
- post_create = scheduler.TaskRunner(self._configure_post_creation,
- loadbalancer)
- post_create(timeout=600)
- return loadbalancer
-
- def check_create_complete(self, loadbalancer):
- return self._check_status(loadbalancer, ['ACTIVE'])
-
- def handle_update(self, json_snippet, tmpl_diff, prop_diff):
- """
- Add and remove nodes specified in the prop_diff.
- """
- loadbalancer = self.clb.get(self.resource_id)
- if 'nodes' in prop_diff:
- current_nodes = loadbalancer.nodes
- #Nodes can be uniquely identified by address and port.
- #Old is a dict of all nodes the loadbalancer currently knows about.
- for node in prop_diff['nodes']:
- # resolve references to stack resource IP's
- if node.get('ref'):
- node['address'] = (self.stack
- .resource_by_refid(node['ref'])
- .FnGetAtt('PublicIp'))
- del node['ref']
- old = dict(("{0.address}{0.port}".format(node), node)
- for node in current_nodes)
- #New is a dict of the nodes the loadbalancer will know about after
- #this update.
- new = dict(("%s%s" % (node['address'], node['port']), node)
- for node in prop_diff['nodes'])
-
- old_set = set(old.keys())
- new_set = set(new.keys())
-
- deleted = old_set.difference(new_set)
- added = new_set.difference(old_set)
- updated = new_set.intersection(old_set)
-
- if len(current_nodes) + len(added) - len(deleted) < 1:
- raise ValueError("The loadbalancer:%s requires at least one "
- "node." % self.name)
- """
- Add loadbalancers in the new map that are not in the old map.
- Add before delete to avoid deleting the last node and getting in
- an invalid state.
- """
- new_nodes = [self.clb.Node(**new[lb_node])
- for lb_node in added]
- if new_nodes:
- loadbalancer.add_nodes(new_nodes)
-
- #Delete nodes in the old dict that are not in the new dict.
- for node in deleted:
- old[node].delete()
-
- #Update nodes that have been changed
- for node in updated:
- node_changed = False
- for attribute in new[node].keys():
- if new[node][attribute] != getattr(old[node], attribute):
- node_changed = True
- setattr(old[node], attribute, new[node][attribute])
- if node_changed:
- old[node].update()
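The add/delete/update bookkeeping above hinges on keying nodes by address plus port and taking set differences. A standalone sketch of that set arithmetic, using plain dicts instead of pyrax node objects, is:

```python
def diff_nodes(current, desired):
    """Partition desired vs. current nodes into added, deleted and
    retained keys, using address+port as the identity key (as
    handle_update does above)."""
    key = lambda n: '%s%s' % (n['address'], n['port'])
    old = dict((key(n), n) for n in current)
    new = dict((key(n), n) for n in desired)
    added = set(new) - set(old)      # create these nodes
    deleted = set(old) - set(new)    # remove these nodes
    retained = set(new) & set(old)   # candidates for in-place update
    return added, deleted, retained


current = [{'address': '10.0.0.1', 'port': 80},
           {'address': '10.0.0.2', 'port': 80}]
desired = [{'address': '10.0.0.2', 'port': 80},
           {'address': '10.0.0.3', 'port': 80}]
added, deleted, retained = diff_nodes(current, desired)
```

Adding before deleting, as the code above does, keeps the balancer from ever dropping to zero nodes mid-update.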
-
- def handle_delete(self):
- if self.resource_id is None:
- return
- try:
- loadbalancer = self.clb.get(self.resource_id)
- except NotFound:
- pass
- else:
- if loadbalancer.status != 'DELETED':
- loadbalancer.delete()
- self.resource_id_set(None)
-
- def _remove_none(self, property_dict):
- '''
- Remove values that may be initialized to None and would cause problems
- during schema validation.
- '''
- return dict((key, value)
- for (key, value) in property_dict.iteritems()
- if value)
-
- def validate(self):
- """
- Validate any of the provided params
- """
- res = super(CloudLoadBalancer, self).validate()
- if res:
- return res
-
- if self.properties.get('halfClosed'):
- if not (self.properties['protocol'] == 'TCP' or
- self.properties['protocol'] == 'TCP_CLIENT_FIRST'):
- return {'Error':
- 'The halfClosed property is only available for the '
- 'TCP or TCP_CLIENT_FIRST protocols'}
-
- #The health_monitor CONNECT and HTTP types require completely
- #different schemas.
- if self.properties.get('healthMonitor'):
- health_monitor = \
- self._remove_none(self.properties['healthMonitor'])
-
- if health_monitor['type'] == 'CONNECT':
- schema = CloudLoadBalancer.health_monitor_connect_schema
- else:
- schema = CloudLoadBalancer.health_monitor_http_schema
- try:
- Properties(schema,
- health_monitor,
- self.stack.resolve_runtime_data,
- self.name).validate()
- except exception.StackValidationFailed as svf:
- return {'Error': str(svf)}
-
- if self.properties.get('sslTermination'):
- ssl_termination = self._remove_none(
- self.properties['sslTermination'])
-
- if ssl_termination['enabled']:
- try:
- Properties(CloudLoadBalancer.
- ssl_termination_enabled_schema,
- ssl_termination,
- self.stack.resolve_runtime_data,
- self.name).validate()
- except exception.StackValidationFailed as svf:
- return {'Error': str(svf)}
-
- def FnGetRefId(self):
- return unicode(self.name)
-
- def _public_ip(self):
- #TODO(andrew-plunk) return list here and let caller choose ip
- for ip in self.clb.get(self.resource_id).virtual_ips:
- if ip.type == 'PUBLIC':
- return ip.address
-
- def _resolve_attribute(self, key):
- attribute_values = {
- 'PublicIp': self._public_ip()
- }
- if key not in attribute_values:
- raise exception.InvalidTemplateAttribute(resource=self.name,
- key=key)
- value = attribute_values[key]
- logger.info('%s.GetAtt(%s) == %s' % (self.name, key, value))
- return unicode(value)
-
-
-def resource_mapping():
- if rackspace_resource.PYRAX_INSTALLED:
- return {
- 'Rackspace::Cloud::LoadBalancer': CloudLoadBalancer
- }
- else:
- return {}
+++ /dev/null
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import tempfile
-
-import json
-import paramiko
-from Crypto.PublicKey import RSA
-import novaclient.exceptions as novaexception
-
-from heat.common import exception
-from heat.openstack.common import log as logging
-from heat.engine import scheduler
-from heat.engine.resources import instance
-from heat.engine.resources import nova_utils
-from heat.engine.resources.rackspace import rackspace_resource
-from heat.db.sqlalchemy import api as db_api
-
-logger = logging.getLogger(__name__)
-
-
-class CloudServer(instance.Instance):
- """Resource for Rackspace Cloud Servers."""
-
- properties_schema = {'flavor': {'Type': 'String', 'Required': True},
- 'image': {'Type': 'String', 'Required': True},
- 'user_data': {'Type': 'String'},
- 'key_name': {'Type': 'String'},
- 'Volumes': {'Type': 'List'},
- 'name': {'Type': 'String'}}
-
- attributes_schema = {'PrivateDnsName': ('Private DNS name of the specified'
- ' instance.'),
- 'PublicDnsName': ('Public DNS name of the specified '
- 'instance.'),
- 'PrivateIp': ('Private IP address of the specified '
- 'instance.'),
- 'PublicIp': ('Public IP address of the specified '
- 'instance.')}
-
- base_script = """#!/bin/bash
-
-# Install cloud-init and heat-cfntools
-%s
-# Create data source for cloud-init
-mkdir -p /var/lib/cloud/seed/nocloud-net
-mv /tmp/userdata /var/lib/cloud/seed/nocloud-net/user-data
-touch /var/lib/cloud/seed/nocloud-net/meta-data
-chmod 600 /var/lib/cloud/seed/nocloud-net/*
-
-# Run cloud-init & cfn-init
-cloud-init start || cloud-init init
-bash -x /var/lib/cloud/data/cfn-userdata > /root/cfn-userdata.log 2>&1
-"""
-
- # - Ubuntu 12.04: Verified working
- ubuntu_script = base_script % """\
-apt-get update
-apt-get install -y cloud-init python-boto python-pip gcc python-dev
-pip install heat-cfntools
-cfn-create-aws-symlinks --source /usr/local/bin
-"""
-
- # - Fedora 17: Verified working
- # - Fedora 18: Not working. selinux needs to be in "Permissive"
- # mode for cloud-init to work. It's disabled by default in the
- # Rackspace Cloud Servers image. To enable selinux, a reboot is
- # required.
- # - Fedora 19: Verified working
- fedora_script = base_script % """\
-yum install -y cloud-init python-boto python-pip gcc python-devel
-pip-python install heat-cfntools
-cfn-create-aws-symlinks
-"""
-
- # - Centos 6.4: Verified working
- centos_script = base_script % """\
-rpm -ivh http://mirror.rackspace.com/epel/6/i386/epel-release-6-8.noarch.rpm
-yum install -y cloud-init python-boto python-pip gcc python-devel \
- python-argparse
-pip-python install heat-cfntools
-"""
-
- # - RHEL 6.4: Verified working
- rhel_script = base_script % """\
-rpm -ivh http://mirror.rackspace.com/epel/6/i386/epel-release-6-8.noarch.rpm
-# The RPM DB stays locked for a few secs
-while fuser /var/lib/rpm/*; do sleep 1; done
-yum install -y cloud-init python-boto python-pip gcc python-devel \
- python-argparse
-pip-python install heat-cfntools
-cfn-create-aws-symlinks
-"""
-
- # - Debian 7: Not working (heat-cfntools patch submitted)
- # TODO(jason): Test with Debian 7 as soon as heat-cfntools patch
- # is in https://review.openstack.org/#/c/38822/
- debian_script = base_script % """\
-echo "deb http://mirror.rackspace.com/debian wheezy-backports main" >> \
- /etc/apt/sources.list
-apt-get update
-apt-get -t wheezy-backports install -y cloud-init
-apt-get install -y python-pip gcc python-dev
-pip install heat-cfntools
-"""
-
- # - Arch 2013.6: Not working (deps not in default package repos)
- # TODO(jason): Install cloud-init & other deps from third-party repos
- arch_script = base_script % """\
-pacman -S --noconfirm python-pip gcc
-"""
-
- # - Gentoo 13.2: Not working (deps not in default package repos)
- # TODO(jason): Install cloud-init & other deps from third-party repos
- gentoo_script = base_script % """\
-emerge cloud-init python-boto python-pip gcc python-devel
-"""
-
- # - OpenSUSE 12.3: Not working (deps not in default package repos)
- # TODO(jason): Install cloud-init & other deps from third-party repos
- opensuse_script = base_script % """\
-zypper --non-interactive rm patterns-openSUSE-minimal_base-conflicts
-zypper --non-interactive in cloud-init python-boto python-pip gcc python-devel
-"""
-
- # List of supported Linux distros and their corresponding config scripts
- image_scripts = {'arch': None,
- 'centos': centos_script,
- 'debian': None,
- 'fedora': fedora_script,
- 'gentoo': None,
- 'opensuse': None,
- 'rhel': rhel_script,
- 'ubuntu': ubuntu_script}
-
- # Template keys supported for handle_update. Properties not
- # listed here trigger an UpdateReplace
- update_allowed_keys = ('Metadata', 'Properties')
- update_allowed_properties = ('flavor', 'name')
-
- def __init__(self, name, json_snippet, stack):
- super(CloudServer, self).__init__(name, json_snippet, stack)
- self._private_key = None
- self._server = None
- self._distro = None
- self._public_ip = None
- self._private_ip = None
- self.rs = rackspace_resource.RackspaceResource(name,
- json_snippet,
- stack)
-
- def physical_resource_name(self):
- name = self.properties.get('name')
- if name:
- return name
-
- return super(CloudServer, self).physical_resource_name()
-
- def nova(self):
- return self.rs.nova() # Override the Instance method
-
- def cinder(self):
- return self.rs.cinder()
-
- @property
- def server(self):
- """Get the Cloud Server object."""
- if not self._server:
- logger.debug("Calling nova().servers.get()")
- self._server = self.nova().servers.get(self.resource_id)
- return self._server
-
- @property
- def distro(self):
- """Get the Linux distribution for this server."""
- if not self._distro:
- logger.debug("Calling nova().images.get()")
- image = self.nova().images.get(self.properties['image'])
- self._distro = image.metadata['os_distro']
- return self._distro
-
- @property
- def script(self):
- """Get the config script for the Cloud Server image."""
- return self.image_scripts[self.distro]
-
- @property
- def flavors(self):
- """Get the flavors from the API."""
- logger.debug("Calling nova().flavors.list()")
- return [flavor.id for flavor in self.nova().flavors.list()]
-
- @property
- def private_key(self):
- """Return the private SSH key for the resource."""
- if self._private_key:
- return self._private_key
- if self.id is not None:
- private_key = db_api.resource_data_get(self, 'private_key')
- if not private_key:
- return None
- self._private_key = private_key
- return private_key
-
- @private_key.setter
- def private_key(self, private_key):
- """Save the resource's private SSH key to the database."""
- self._private_key = private_key
- if self.id is not None:
- db_api.resource_data_set(self, 'private_key', private_key, True)
-
- def _get_ip(self, ip_type):
- """Return the IP of the Cloud Server."""
- if ip_type in self.server.addresses:
- for ip in self.server.addresses[ip_type]:
- if ip['version'] == 4:
- return ip['addr']
-
-        raise exception.Error("Could not determine the %s IP of server %s." %
-                              (ip_type, self.resource_id))
-
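The lookup above walks the per-network address lists that novaclient returns. A minimal sketch with made-up sample data (the helper name and the sample values are illustrative, not part of the resource code):

```python
# Search the novaclient `addresses` mapping for the first IPv4 entry
# of a given network, as _get_ip does above.
def first_ipv4(addresses, ip_type):
    for ip in addresses.get(ip_type, []):
        if ip['version'] == 4:
            return ip['addr']
    return None

# Rackspace servers typically report both IPv6 and IPv4 entries:
sample = {'public': [{'version': 6, 'addr': '2001:db8::1'},
                     {'version': 4, 'addr': '198.51.100.7'}],
          'private': [{'version': 4, 'addr': '10.0.0.5'}]}

print(first_ipv4(sample, 'public'))  # 198.51.100.7
```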
- @property
- def public_ip(self):
- """Return the public IP of the Cloud Server."""
- if not self._public_ip:
- self._public_ip = self._get_ip('public')
- return self._public_ip
-
- @property
- def private_ip(self):
- """Return the private IP of the Cloud Server."""
- if not self._private_ip:
- self._private_ip = self._get_ip('private')
- return self._private_ip
-
- @property
- def has_userdata(self):
-        return bool(self.properties['user_data']) or self.metadata != {}
-
- def validate(self):
- """Validate user parameters."""
- if self.properties['flavor'] not in self.flavors:
- return {'Error': "flavor not found."}
-
- # It's okay if there's no script, as long as user_data and
- # metadata are empty
- if not self.script and self.has_userdata:
- return {'Error': "user_data/metadata are not supported with %s." %
- self.properties['image']}
-
- def _run_ssh_command(self, command):
- """Run a shell command on the Cloud Server via SSH."""
- with tempfile.NamedTemporaryFile() as private_key_file:
- private_key_file.write(self.private_key)
- private_key_file.seek(0)
- ssh = paramiko.SSHClient()
- ssh.set_missing_host_key_policy(paramiko.MissingHostKeyPolicy())
- ssh.connect(self.public_ip,
- username="root",
- key_filename=private_key_file.name)
- stdin, stdout, stderr = ssh.exec_command(command)
- logger.debug(stdout.read())
- logger.debug(stderr.read())
-
- def _sftp_files(self, files):
- """Transfer files to the Cloud Server via SFTP."""
- with tempfile.NamedTemporaryFile() as private_key_file:
- private_key_file.write(self.private_key)
- private_key_file.seek(0)
- pkey = paramiko.RSAKey.from_private_key_file(private_key_file.name)
- transport = paramiko.Transport((self.public_ip, 22))
- transport.connect(hostkey=None, username="root", pkey=pkey)
- sftp = paramiko.SFTPClient.from_transport(transport)
- for remote_file in files:
- sftp_file = sftp.open(remote_file['path'], 'w')
- sftp_file.write(remote_file['data'])
- sftp_file.close()
-
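Both SSH helpers above rely on the same temp-file trick: the private key held in memory is written to a named file so a library that expects a filename (paramiko here) can read it back. A sketch of that pattern with a hypothetical consumer (`with_key_file` and the fake key material are illustrative):

```python
import tempfile

# Write in-memory key material to a named temp file, hand the filename
# to a callback, and clean up automatically on exit.
def with_key_file(key_data, fn):
    with tempfile.NamedTemporaryFile(mode='w') as key_file:
        key_file.write(key_data)
        key_file.flush()  # make the bytes visible before sharing the name
        return fn(key_file.name)

# Hypothetical consumer that just reads the file back:
result = with_key_file('FAKE-KEY-MATERIAL',
                       lambda path: open(path).read())
print(result)  # FAKE-KEY-MATERIAL
```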
- def handle_create(self):
-        """Create a Rackspace Cloud Server.
-
- Rackspace Cloud Servers does not have the metadata service
- running, so we have to transfer the user-data file to the
- server and then trigger cloud-init.
- """
- # Retrieve server creation parameters from properties
- flavor = self.properties['flavor']
-
- # Generate SSH public/private keypair
- if self._private_key is not None:
- rsa = RSA.importKey(self._private_key)
- else:
- rsa = RSA.generate(1024)
- self.private_key = rsa.exportKey()
- public_keys = [rsa.publickey().exportKey('OpenSSH')]
- if self.properties.get('key_name'):
- key_name = self.properties['key_name']
- public_keys.append(nova_utils.get_keypair(self.nova(),
- key_name).public_key)
- personality_files = {
- "/root/.ssh/authorized_keys": '\n'.join(public_keys)}
-
- # Create server
- client = self.nova().servers
- logger.debug("Calling nova().servers.create()")
- server = client.create(self.physical_resource_name(),
- self.properties['image'],
- flavor,
- files=personality_files)
-
- # Save resource ID to db
- self.resource_id_set(server.id)
-
- return server, scheduler.TaskRunner(self._attach_volumes_task())
-
- def _attach_volumes_task(self):
- tasks = (scheduler.TaskRunner(self._attach_volume, volume_id, device)
- for volume_id, device in self.volumes())
- return scheduler.PollingTaskGroup(tasks)
-
- def _attach_volume(self, volume_id, device):
- logger.debug("Calling nova().volumes.create_server_volume()")
- self.nova().volumes.create_server_volume(self.server.id,
- volume_id,
- device or None)
- yield
- volume = self.cinder().get(volume_id)
- while volume.status in ('available', 'attaching'):
- yield
- volume.get()
-
- if volume.status != 'in-use':
- raise exception.Error(volume.status)
-
- def _detach_volumes_task(self):
- tasks = (scheduler.TaskRunner(self._detach_volume, volume_id)
- for volume_id, device in self.volumes())
- return scheduler.PollingTaskGroup(tasks)
-
- def _detach_volume(self, volume_id):
- volume = self.cinder().get(volume_id)
- volume.detach()
- yield
- while volume.status in ('in-use', 'detaching'):
- yield
- volume.get()
-
- if volume.status != 'available':
- raise exception.Error(volume.status)
-
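The attach/detach coroutines above follow a cooperative polling pattern: yield to the scheduler, refresh the volume, and loop until it leaves its transitional states. A self-contained sketch with a simulated volume (the dict-based volume and its `pending` queue are stand-ins for the cinder object):

```python
# Yield until the (simulated) volume leaves its transitional states,
# then verify it reached the expected final status.
def wait_for_status(volume, done, transitional):
    while volume['status'] in transitional:
        yield  # hand control back to the task scheduler
        volume['status'] = volume['pending'].pop(0)  # simulated .get() refresh
    if volume['status'] != done:
        raise RuntimeError(volume['status'])

vol = {'status': 'detaching', 'pending': ['detaching', 'available']}
steps = list(wait_for_status(vol, 'available', ('in-use', 'detaching')))
print(vol['status'], len(steps))  # available 2
```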
- def check_create_complete(self, cookie):
- """Check if server creation is complete and handle server configs."""
- if not self._check_active(cookie):
- return False
-
- if self.has_userdata:
- # Create heat-script and userdata files on server
- raw_userdata = self.properties['user_data'] or ''
- userdata = nova_utils.build_userdata(self, raw_userdata)
-
- files = [{'path': "/tmp/userdata", 'data': userdata},
- {'path': "/root/heat-script.sh", 'data': self.script}]
- self._sftp_files(files)
-
- # Connect via SSH and run script
- cmd = "bash -ex /root/heat-script.sh > /root/heat-script.log 2>&1"
- self._run_ssh_command(cmd)
-
- return True
-
- # TODO(jason): Make this consistent with Instance and inherit
- def _delete_server(self, server):
- """Return a coroutine that deletes the Cloud Server."""
- server.delete()
- while True:
- yield
- try:
- server.get()
- if server.status == "DELETED":
- break
- elif server.status == "ERROR":
- raise exception.Error("Deletion of server %s failed." %
- server.name)
- except novaexception.NotFound:
- break
-
- def handle_update(self, json_snippet, tmpl_diff, prop_diff):
- """Try to update a Cloud Server's parameters.
-
- If the Cloud Server's Metadata or flavor changed, update the
- Cloud Server. If any other parameters changed, re-create the
- Cloud Server with the new parameters.
- """
- # If name is the only update, fail update
- if prop_diff.keys() == ['name'] and \
- tmpl_diff.keys() == ['Properties']:
- raise exception.NotSupported(feature="Cloud Server rename")
- # Other updates were successful, so don't cause update to fail
- elif 'name' in prop_diff:
- logger.info("Cloud Server rename not supported.")
-
- if 'Metadata' in tmpl_diff:
- self.metadata = json_snippet['Metadata']
- metadata_string = json.dumps(self.metadata)
-
- files = [{'path': "/var/cache/heat-cfntools/last_metadata",
- 'data': metadata_string}]
- self._sftp_files(files)
-
- command = "bash -x /var/lib/cloud/data/cfn-userdata > " + \
- "/root/cfn-userdata.log 2>&1"
- self._run_ssh_command(command)
-
- if 'flavor' in prop_diff:
- self.flavor = json_snippet['Properties']['flavor']
- self.server.resize(self.flavor)
- resize = scheduler.TaskRunner(nova_utils.check_resize,
- self.server,
- self.flavor)
- resize.start()
- return resize
-
- def _resolve_attribute(self, key):
- """Return the method that provides a given template attribute."""
-        attribute_function = {'PublicIp': self.public_ip,
-                              'PrivateIp': self.private_ip,
-                              'PublicDnsName': self.public_ip,
-                              'PrivateDnsName': self.private_ip}
- if key not in attribute_function:
- raise exception.InvalidTemplateAttribute(resource=self.name,
- key=key)
- function = attribute_function[key]
- logger.info('%s._resolve_attribute(%s) == %s'
- % (self.name, key, function))
- return unicode(function)
-
-
-# The pyrax module is required to work with the Rackspace cloud server
-# provider. If it is not installed, don't register the cloud server provider.
-def resource_mapping():
- if rackspace_resource.PYRAX_INSTALLED:
- return {'Rackspace::Cloud::Server': CloudServer}
- else:
- return {}
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-try:
- from pyrax.exceptions import NotFound
-except ImportError:
- # fake exception for testing without pyrax
- class NotFound(Exception):
- pass
-
-from heat.common import exception
-from heat.engine.resources.rackspace import rackspace_resource
-from heat.openstack.common import log as logging
-
-logger = logging.getLogger(__name__)
-
-
-class CloudDBInstance(rackspace_resource.RackspaceResource):
- '''
- Rackspace cloud database resource.
- '''
- database_schema = {
- "Character_set": {
- "Type": "String",
- "Default": "utf8",
- "Required": False
- },
- "Collate": {
- "Type": "String",
- "Default": "utf8_general_ci",
- "Required": False
- },
- "Name": {
- "Type": "String",
- "Required": True,
- "MaxLength": "64",
- "AllowedPattern": "[a-zA-Z0-9_]+[a-zA-Z0-9_@?#\s]*[a-zA-Z0-9_]+"
- }
- }
-
- user_schema = {
- "Name": {
- "Type": "String",
- "Required": True,
- "MaxLength": "16",
- "AllowedPattern": "[a-zA-Z0-9_]+[a-zA-Z0-9_@?#\s]*[a-zA-Z0-9_]+"
- },
- "Password": {
- "Type": "String",
- "Required": True,
- "AllowedPattern": "[a-zA-Z0-9_]+[a-zA-Z0-9_@?#\s]*[a-zA-Z0-9_]+"
- },
- "Host": {
- "Type": "String",
- "Default": "%"
- },
- "Databases": {
- "Type": "List",
- "Required": True
- }
- }
-
- properties_schema = {
- "InstanceName": {
- "Type": "String",
- "Required": True,
- "MaxLength": "255"
- },
-
- "FlavorRef": {
- "Type": "String",
- "Required": True
- },
-
- "VolumeSize": {
- "Type": "Number",
- "MinValue": 1,
- "MaxValue": 150,
- "Required": True
- },
-
- "Databases": {
- 'Type': 'List',
- 'Required': False,
- 'Schema': {
- 'Type': 'Map',
- 'Schema': database_schema
- }
- },
-
- "Users": {
- 'Type': 'List',
- 'Required': False,
- 'Schema': {
- 'Type': 'Map',
- 'Schema': user_schema
- }
- },
- }
-
- attributes_schema = {
- "hostname": "Hostname of the instance",
-        "href": "API endpoint reference of the instance"
- }
-
- def __init__(self, name, json_snippet, stack):
- super(CloudDBInstance, self).__init__(name, json_snippet, stack)
- self.hostname = None
- self.href = None
-
- def handle_create(self):
- '''
- Create Rackspace Cloud DB Instance.
- '''
- logger.debug("Cloud DB instance handle_create called")
- self.sqlinstancename = self.properties['InstanceName']
- self.flavor = self.properties['FlavorRef']
- self.volume = self.properties['VolumeSize']
-        self.databases = self.properties.get('Databases') or []
-        self.users = self.properties.get('Users') or []
-
- # create db instance
- logger.info("Creating Cloud DB instance %s" % self.sqlinstancename)
- instance = self.cloud_db().create(self.sqlinstancename,
- flavor=self.flavor,
- volume=self.volume)
-        if instance is not None:
-            self.resource_id_set(instance.id)
-            self.hostname = instance.hostname
-            self.href = instance.links[0]['href']
-        return instance
-
- def check_create_complete(self, instance):
- '''
- Check if cloud DB instance creation is complete.
- '''
- instance.get() # get updated attributes
- if instance.status == 'ERROR':
- instance.delete()
- raise exception.Error("Cloud DB instance creation failed.")
-
- if instance.status != 'ACTIVE':
- return False
-
- logger.info("Cloud DB instance %s created (flavor:%s, volume:%s)" %
- (self.sqlinstancename, self.flavor, self.volume))
- # create databases
- for database in self.databases:
- instance.create_database(
- database['Name'],
- character_set=database['Character_set'],
- collate=database['Collate'])
- logger.info("Database %s created on cloud DB instance %s" %
- (database['Name'], self.sqlinstancename))
-
- # add users
- dbs = []
- for user in self.users:
- if user['Databases']:
- dbs = user['Databases']
- instance.create_user(user['Name'], user['Password'], dbs)
- logger.info("Cloud database user %s created successfully" %
- (user['Name']))
- return True
-
- def handle_delete(self):
- '''
- Delete a Rackspace Cloud DB Instance.
- '''
- logger.debug("CloudDBInstance handle_delete called.")
- if self.resource_id is None:
- return
-
- try:
-            self.cloud_db().delete(self.resource_id)
- except NotFound:
- pass
- self.resource_id = None
-
- def validate(self):
- '''
- Validate any of the provided params
- '''
- res = super(CloudDBInstance, self).validate()
- if res:
- return res
-
- # check validity of user and databases
- users = self.properties.get('Users', None)
- if not users:
- return
-
- databases = self.properties.get('Databases', None)
- if not databases:
- return {'Error':
- 'Databases property is required if Users property'
- ' is provided'}
-
- for user in users:
- if not user['Databases']:
- return {'Error':
- 'Must provide access to at least one database for '
- 'user %s' % user['Name']}
-
- missing_db = [db_name for db_name in user['Databases']
- if db_name not in [db['Name'] for db in databases]]
- if missing_db:
- return {'Error':
- 'Database %s specified for user does not exist in '
- 'databases.' % missing_db}
- return
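The cross-check performed by `validate` above can be sketched as a standalone function: every database a user references must be declared in the Databases property (the helper name and sample data below are illustrative):

```python
# Map each user to the databases they reference that are not declared.
def missing_databases(users, databases):
    known = set(db['Name'] for db in databases)
    return dict((user['Name'],
                 [d for d in user['Databases'] if d not in known])
                for user in users
                if any(d not in known for d in user['Databases']))

users = [{'Name': 'alice', 'Databases': ['wiki', 'blog']},
         {'Name': 'bob', 'Databases': ['wiki']}]
databases = [{'Name': 'wiki'}]

print(missing_databases(users, databases))  # {'alice': ['blog']}
```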
-
- def _hostname(self):
- if self.hostname is None and self.resource_id is not None:
- dbinstance = self.cloud_db().get(self.resource_id)
- self.hostname = dbinstance.hostname
-
- return self.hostname
-
- def _href(self):
- if self.href is None and self.resource_id is not None:
- dbinstance = self.cloud_db().get(self.resource_id)
- self.href = self._gethref(dbinstance)
-
- return self.href
-
- def _gethref(self, dbinstance):
- if dbinstance is None or dbinstance.links is None:
- return None
-
- for link in dbinstance.links:
- if link['rel'] == 'self':
- return link['href']
-
- def _resolve_attribute(self, name):
- if name == 'hostname':
- return self._hostname()
- elif name == 'href':
- return self._href()
- else:
- return None
-
-
-# The pyrax module is required to work with the Rackspace cloud database
-# provider. If it is not installed, don't register the cloud database provider.
-def resource_mapping():
- if rackspace_resource.PYRAX_INSTALLED:
- return {
- 'Rackspace::Cloud::DBInstance': CloudDBInstance,
- }
- else:
- return {}
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-PYRAX_INSTALLED = True
-try:
- import pyrax
-except ImportError:
- PYRAX_INSTALLED = False
-
-from heat.engine import resource
-from heat.openstack.common import log as logging
-
-logger = logging.getLogger(__name__)
-
-
-class RackspaceResource(resource.Resource):
- '''
- Common base class for Rackspace Resource Providers
- '''
- properties_schema = {}
-
- def __init__(self, name, json_snippet, stack):
- super(RackspaceResource, self).__init__(name, json_snippet, stack)
- if PYRAX_INSTALLED:
- self.pyrax = pyrax
- self._cloud_db = None
- self._cloud_dns = None
- self._cloud_lb = None
- self._cloud_server = None
- self._cloud_nw = None
- self._cloud_blockstore = None
- self._authenticated = False
-
- def cloud_db(self):
- '''Rackspace cloud database client.'''
- if not self._cloud_db:
- self.__authenticate()
- self._cloud_db = self.pyrax.cloud_databases
-
- return self._cloud_db
-
- def cloud_lb(self):
- '''Rackspace cloud loadbalancer client.'''
- if not self._cloud_lb:
- self.__authenticate()
- self._cloud_lb = self.pyrax.cloud_loadbalancers
-
- return self._cloud_lb
-
- def cloud_dns(self):
- '''Rackspace cloud dns client.'''
- if not self._cloud_dns:
- self.__authenticate()
- self._cloud_dns = self.pyrax.cloud_dns
-
- return self._cloud_dns
-
- def nova(self):
- '''Rackspace cloudservers client.'''
- if not self._cloud_server:
- self.__authenticate()
- self._cloud_server = self.pyrax.cloudservers
-
- return self._cloud_server
-
- def cinder(self):
- '''Rackspace cinder client.'''
- if not self._cloud_blockstore:
- self.__authenticate()
- self._cloud_blockstore = self.pyrax.cloud_blockstorage
-
- return self._cloud_blockstore
-
- def neutron(self):
- '''Rackspace neutron client.'''
- if not self._cloud_nw:
- self.__authenticate()
- self._cloud_nw = self.pyrax.cloud_networks
-
- return self._cloud_nw
-
- def __authenticate(self):
-        # Authenticate once per resource using the auth token from the
-        # request context; subsequent calls are no-ops.
- if not self._authenticated:
- pyrax.set_setting("identity_type", "keystone")
- pyrax.set_setting("auth_endpoint", self.context.auth_url)
- pyrax.set_setting("tenant_id", self.context.tenant)
- logger.info("Authenticating with username:%s" %
- self.context.username)
- pyrax.auth_with_token(self.context.auth_token,
- tenant_id=self.context.tenant_id,
- tenant_name=self.context.tenant)
- logger.info("User %s authenticated successfully."
- % self.context.username)
- self._authenticated = True
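The accessors above share an authenticate-once, lazy-client pattern: the first client request triggers authentication, and later requests reuse the cached client. A generic sketch of that shape (class and method names are illustrative, not the resource API):

```python
# Authenticate on the first client request only; cache clients by name.
class LazyClients(object):
    def __init__(self, authenticate):
        self._authenticate = authenticate
        self._authenticated = False
        self._clients = {}

    def client(self, name, factory):
        if name not in self._clients:
            if not self._authenticated:
                self._authenticate()
                self._authenticated = True
            self._clients[name] = factory()
        return self._clients[name]

auth_calls = []
clients = LazyClients(lambda: auth_calls.append('auth'))
db1 = clients.client('db', lambda: object())
db2 = clients.client('db', lambda: object())
print(auth_calls, db1 is db2)  # ['auth'] True
```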
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from heat.engine import clients
-from heat.openstack.common import log as logging
-from heat.engine import resource
-from heat.engine.resources.neutron import neutron
-from heat.engine.resources.vpc import VPC
-
-if clients.neutronclient is not None:
- from neutronclient.common.exceptions import NeutronClientException
-
-logger = logging.getLogger(__name__)
-
-
-class RouteTable(resource.Resource):
- tags_schema = {'Key': {'Type': 'String',
- 'Required': True},
- 'Value': {'Type': 'String',
- 'Required': True}}
-
- properties_schema = {
- 'VpcId': {
- 'Type': 'String',
- 'Required': True},
- 'Tags': {'Type': 'List', 'Schema': {
- 'Type': 'Map',
- 'Implemented': False,
- 'Schema': tags_schema}}
- }
-
- def handle_create(self):
- client = self.neutron()
- props = {'name': self.physical_resource_name()}
- router = client.create_router({'router': props})['router']
- self.resource_id_set(router['id'])
-
- def check_create_complete(self, *args):
- client = self.neutron()
- attributes = client.show_router(
- self.resource_id)['router']
- if not neutron.NeutronResource.is_built(attributes):
- return False
-
- network_id = self.properties.get('VpcId')
- default_router = VPC.router_for_vpc(client, network_id)
- if default_router and default_router.get('external_gateway_info'):
- # the default router for the VPC is connected
- # to the external router, so do it for this too.
- external_network_id = default_router[
- 'external_gateway_info']['network_id']
- client.add_gateway_router(self.resource_id, {
- 'network_id': external_network_id})
- return True
-
- def handle_delete(self):
- client = self.neutron()
-
-        router_id = self.resource_id
-
-        # in case this router has been given a gateway, remove it first
-        try:
-            client.remove_gateway_router(router_id)
-        except NeutronClientException as ex:
-            if ex.status_code != 404:
-                raise
-
-        try:
-            client.delete_router(router_id)
-        except NeutronClientException as ex:
-            if ex.status_code != 404:
-                raise
-
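The delete handlers here all share a 404-tolerant pattern: NotFound from the API during teardown means the object is already gone and is not an error. A sketch with a stand-in exception class (`FakeNotFound` and `safe_delete` are illustrative, not the neutronclient API):

```python
# Stand-in for NeutronClientException with a 404 status.
class FakeNotFound(Exception):
    status_code = 404

# Run a delete callable, treating 404 as "already gone".
def safe_delete(delete_fn):
    try:
        delete_fn()
        return True   # resource deleted by this call
    except FakeNotFound as ex:
        if ex.status_code != 404:
            raise
        return False  # resource was already gone

def already_gone():
    raise FakeNotFound()

print(safe_delete(already_gone), safe_delete(lambda: None))  # False True
```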
-
-class SubnetRouteTableAssocation(resource.Resource):
-
- properties_schema = {
- 'RouteTableId': {
- 'Type': 'String',
- 'Required': True},
- 'SubnetId': {
- 'Type': 'String',
- 'Required': True}
- }
-
- def handle_create(self):
- client = self.neutron()
- subnet_id = self.properties.get('SubnetId')
-
- router_id = self.properties.get('RouteTableId')
-
-        # remove the default router association for this subnet
- try:
- previous_router = self._router_for_subnet(subnet_id)
- if previous_router:
- client.remove_interface_router(
- previous_router['id'],
- {'subnet_id': subnet_id})
- except NeutronClientException as ex:
- if ex.status_code != 404:
-                raise
-
- client.add_interface_router(
- router_id, {'subnet_id': subnet_id})
-
- def _router_for_subnet(self, subnet_id):
- client = self.neutron()
- subnet = client.show_subnet(
- subnet_id)['subnet']
- network_id = subnet['network_id']
- return VPC.router_for_vpc(client, network_id)
-
- def handle_delete(self):
- client = self.neutron()
- subnet_id = self.properties.get('SubnetId')
-
- router_id = self.properties.get('RouteTableId')
-
- try:
- client.remove_interface_router(router_id, {
- 'subnet_id': subnet_id})
- except NeutronClientException as ex:
- if ex.status_code != 404:
-                raise
-
- # add back the default router
- try:
- default_router = self._router_for_subnet(subnet_id)
- if default_router:
- client.add_interface_router(
- default_router['id'], {'subnet_id': subnet_id})
- except NeutronClientException as ex:
- if ex.status_code != 404:
-                raise
-
-
-def resource_mapping():
- if clients.neutronclient is None:
- return {}
-
- return {
- 'AWS::EC2::RouteTable': RouteTable,
- 'AWS::EC2::SubnetRouteTableAssocation': SubnetRouteTableAssocation,
- }
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from urlparse import urlparse
-
-from heat.engine import clients
-from heat.engine import resource
-from heat.openstack.common import log as logging
-
-logger = logging.getLogger(__name__)
-
-
-class S3Bucket(resource.Resource):
- website_schema = {'IndexDocument': {'Type': 'String'},
- 'ErrorDocument': {'Type': 'String'}}
- properties_schema = {'AccessControl': {
- 'Type': 'String',
- 'AllowedValues': ['Private',
- 'PublicRead',
- 'PublicReadWrite',
- 'AuthenticatedRead',
- 'BucketOwnerRead',
- 'BucketOwnerFullControl']},
- 'WebsiteConfiguration': {'Type': 'Map',
- 'Schema': website_schema}}
- attributes_schema = {
- "DomainName": "The DNS name of the specified bucket.",
- "WebsiteURL": "The website endpoint for the specified bucket."
- }
-
- def validate(self):
- '''
- Validate any of the provided params
- '''
-        # check if swiftclient is installed
- if clients.swiftclient is None:
- return {'Error':
- 'S3 services unavailable because of missing swiftclient.'}
-
- def handle_create(self):
- """Create a bucket."""
- container = self.physical_resource_name()
- headers = {}
- logger.debug('S3Bucket create container %s with headers %s' %
- (container, headers))
- if self.properties['WebsiteConfiguration'] is not None:
- sc = self.properties['WebsiteConfiguration']
- # we will assume that swift is configured for the staticweb
- # wsgi middleware
- headers['X-Container-Meta-Web-Index'] = sc['IndexDocument']
- headers['X-Container-Meta-Web-Error'] = sc['ErrorDocument']
-
- con = self.context
- ac = self.properties['AccessControl']
- tenant_username = '%s:%s' % (con.tenant, con.username)
- if ac in ('PublicRead', 'PublicReadWrite'):
- headers['X-Container-Read'] = '.r:*'
- elif ac == 'AuthenticatedRead':
- headers['X-Container-Read'] = con.tenant
- else:
- headers['X-Container-Read'] = tenant_username
-
- if ac == 'PublicReadWrite':
- headers['X-Container-Write'] = '.r:*'
- else:
- headers['X-Container-Write'] = tenant_username
-
- self.swift().put_container(container, headers)
- self.resource_id_set(container)
-
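The AccessControl handling in `handle_create` above reduces to a small mapping from the ACL value to the Swift read/write headers. A sketch of that mapping as a pure function ('demo' and 'admin' are sample tenant/user values):

```python
# Map an S3-style AccessControl value to Swift container ACL headers,
# mirroring the branches in handle_create above.
def acl_headers(acl, tenant, username):
    tenant_user = '%s:%s' % (tenant, username)
    headers = {}
    if acl in ('PublicRead', 'PublicReadWrite'):
        headers['X-Container-Read'] = '.r:*'
    elif acl == 'AuthenticatedRead':
        headers['X-Container-Read'] = tenant
    else:
        headers['X-Container-Read'] = tenant_user
    if acl == 'PublicReadWrite':
        headers['X-Container-Write'] = '.r:*'
    else:
        headers['X-Container-Write'] = tenant_user
    return headers

print(acl_headers('PublicRead', 'demo', 'admin'))
```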
- def handle_delete(self):
- """Perform specified delete policy."""
- logger.debug('S3Bucket delete container %s' % self.resource_id)
- if self.resource_id is not None:
- try:
- self.swift().delete_container(self.resource_id)
- except clients.swiftclient.ClientException as ex:
- logger.warn("Delete container failed: %s" % str(ex))
-
- def FnGetRefId(self):
- return unicode(self.resource_id)
-
- def _resolve_attribute(self, name):
- url = self.swift().get_auth()[0]
- parsed = list(urlparse(url))
- if name == 'DomainName':
- return parsed[1].split(':')[0]
- elif name == 'WebsiteURL':
- return '%s://%s%s/%s' % (parsed[0], parsed[1], parsed[2],
- self.resource_id)
-
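The attribute resolution above derives both values from the Swift auth URL. A sketch of that derivation (urllib.parse shown here for a modern interpreter; the original module imports the py2 urlparse, and the sample URL is made up):

```python
from urllib.parse import urlparse

# Derive DomainName and WebsiteURL from the Swift auth URL, as
# _resolve_attribute does above.
def bucket_attributes(auth_url, container):
    parsed = urlparse(auth_url)
    return {'DomainName': parsed.netloc.split(':')[0],
            'WebsiteURL': '%s://%s%s/%s' % (parsed.scheme, parsed.netloc,
                                            parsed.path, container)}

attrs = bucket_attributes('https://storage.example.com:443/v1/AUTH_demo',
                          'logs')
print(attrs['DomainName'])  # storage.example.com
print(attrs['WebsiteURL'])  # https://storage.example.com:443/v1/AUTH_demo/logs
```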
-
-def resource_mapping():
- if clients.swiftclient is None:
- return {}
-
- return {
- 'AWS::S3::Bucket': S3Bucket,
- }
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from heat.engine import clients
-from heat.engine import resource
-
-from heat.common import exception
-from heat.openstack.common import log as logging
-
-logger = logging.getLogger(__name__)
-
-
-class SecurityGroup(resource.Resource):
- rule_schema = {'CidrIp': {'Type': 'String'},
- 'FromPort': {'Type': 'String'},
- 'ToPort': {'Type': 'String'},
- 'IpProtocol': {'Type': 'String'},
- 'SourceSecurityGroupId': {'Type': 'String'},
- 'SourceSecurityGroupName': {'Type': 'String'},
- 'SourceSecurityGroupOwnerId': {'Type': 'String',
- 'Implemented': False}}
- properties_schema = {'GroupDescription': {'Type': 'String',
- 'Required': True},
- 'VpcId': {'Type': 'String'},
- 'SecurityGroupIngress': {'Type': 'List',
- 'Schema': {
- 'Type': 'Map',
- 'Schema': rule_schema}},
- 'SecurityGroupEgress': {'Type': 'List',
- 'Schema': {
- 'Type': 'Map',
- 'Schema': rule_schema}}}
-
- def handle_create(self):
- if self.properties['VpcId'] and clients.neutronclient is not None:
- self._handle_create_neutron()
- else:
- self._handle_create_nova()
-
- def _convert_to_neutron_rule(self, direction, sg_rule):
- return {
- 'direction': direction,
- 'ethertype': 'IPv4',
- 'remote_ip_prefix': sg_rule.get('CidrIp'),
- 'port_range_min': sg_rule.get('FromPort'),
- 'port_range_max': sg_rule.get('ToPort'),
- 'protocol': sg_rule.get('IpProtocol'),
- # Neutron understands both names and ids
- 'remote_group_id': sg_rule.get('SourceSecurityGroupId') or
- sg_rule.get('SourceSecurityGroupName'),
- 'security_group_id': self.resource_id
- }
-
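The AWS-to-Neutron field mapping above can be exercised standalone. A sketch with sample rule data ('sg-1234' is a made-up security group id):

```python
# Translate an AWS-style security group rule into the Neutron rule
# body, mirroring _convert_to_neutron_rule above.
def to_neutron_rule(direction, sg_rule, group_id):
    return {'direction': direction,
            'ethertype': 'IPv4',
            'remote_ip_prefix': sg_rule.get('CidrIp'),
            'port_range_min': sg_rule.get('FromPort'),
            'port_range_max': sg_rule.get('ToPort'),
            'protocol': sg_rule.get('IpProtocol'),
            # Neutron accepts both names and ids here
            'remote_group_id': (sg_rule.get('SourceSecurityGroupId') or
                                sg_rule.get('SourceSecurityGroupName')),
            'security_group_id': group_id}

rule = to_neutron_rule('ingress',
                       {'CidrIp': '0.0.0.0/0', 'FromPort': '22',
                        'ToPort': '22', 'IpProtocol': 'tcp'},
                       'sg-1234')
print(rule['port_range_min'], rule['remote_ip_prefix'])  # 22 0.0.0.0/0
```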
- def _handle_create_neutron(self):
- from neutronclient.common.exceptions import NeutronClientException
- client = self.neutron()
-
- sec = client.create_security_group({'security_group': {
- 'name': self.physical_resource_name(),
- 'description': self.properties['GroupDescription']}
- })['security_group']
-
- def sanitize_security_group(i):
- # Neutron only accepts positive ints
- if i.get('FromPort') is not None and int(i['FromPort']) < 0:
- i['FromPort'] = None
- if i.get('ToPort') is not None and int(i['ToPort']) < 0:
- i['ToPort'] = None
- if i.get('FromPort') is None and i.get('ToPort') is None:
- i['CidrIp'] = None
-
- self.resource_id_set(sec['id'])
- if self.properties['SecurityGroupIngress']:
- for i in self.properties['SecurityGroupIngress']:
- sanitize_security_group(i)
- try:
-                    client.create_security_group_rule({
- 'security_group_rule':
- self._convert_to_neutron_rule('ingress', i)
- })
- except NeutronClientException as ex:
- if ex.status_code == 409:
- # no worries, the rule is already there
- pass
- else:
- # unexpected error
- raise
- if self.properties['SecurityGroupEgress']:
- # Delete the default rules which allow all egress traffic
- for rule in sec['security_group_rules']:
- if rule['direction'] == 'egress':
- client.delete_security_group_rule(rule['id'])
-
- for i in self.properties['SecurityGroupEgress']:
- sanitize_security_group(i)
- try:
-                    client.create_security_group_rule({
- 'security_group_rule':
- self._convert_to_neutron_rule('egress', i)
- })
- except NeutronClientException as ex:
- if ex.status_code == 409:
- # no worries, the rule is already there
- pass
- else:
- # unexpected error
- raise
-
- def _handle_create_nova(self):
- sec = None
-
- groups = self.nova().security_groups.list()
- for group in groups:
- if group.name == self.physical_resource_name():
- sec = group
- break
-
- if not sec:
- sec = self.nova().security_groups.create(
- self.physical_resource_name(),
- self.properties['GroupDescription'])
-
- self.resource_id_set(sec.id)
- if self.properties['SecurityGroupIngress']:
- rules_client = self.nova().security_group_rules
- for i in self.properties['SecurityGroupIngress']:
- source_group_id = None
- if i.get('SourceSecurityGroupId') is not None:
- source_group_id = i['SourceSecurityGroupId']
- elif i.get('SourceSecurityGroupName') is not None:
- for group in groups:
- if group.name == i['SourceSecurityGroupName']:
- source_group_id = group.id
- break
- try:
- rule = rules_client.create(
- sec.id,
- i.get('IpProtocol'),
- i.get('FromPort'),
- i.get('ToPort'),
- i.get('CidrIp'),
- source_group_id)
- except clients.novaclient.exceptions.BadRequest as ex:
- if ex.message.find('already exists') >= 0:
- # no worries, the rule is already there
- pass
- else:
- # unexpected error
- raise
-
- def handle_delete(self):
- if self.properties['VpcId'] and clients.neutronclient is not None:
- self._handle_delete_neutron()
- else:
- self._handle_delete_nova()
-
- def _handle_delete_nova(self):
- if self.resource_id is not None:
- try:
- sec = self.nova().security_groups.get(self.resource_id)
- except clients.novaclient.exceptions.NotFound:
- pass
- else:
- for rule in sec.rules:
- try:
- self.nova().security_group_rules.delete(rule['id'])
- except clients.novaclient.exceptions.NotFound:
- pass
-
- self.nova().security_groups.delete(self.resource_id)
- self.resource_id = None
-
- def _handle_delete_neutron(self):
- from neutronclient.common.exceptions import NeutronClientException
- client = self.neutron()
-
- if self.resource_id is not None:
- try:
- sec = client.show_security_group(
- self.resource_id)['security_group']
- except NeutronClientException as ex:
- if ex.status_code != 404:
- raise
- else:
- for rule in sec['security_group_rules']:
- try:
- client.delete_security_group_rule(rule['id'])
- except NeutronClientException as ex:
- if ex.status_code != 404:
- raise
-
- try:
- client.delete_security_group(self.resource_id)
- except NeutronClientException as ex:
- if ex.status_code != 404:
- raise
- self.resource_id = None
-
- def FnGetRefId(self):
- if self.properties['VpcId']:
- return super(SecurityGroup, self).FnGetRefId()
- else:
- return self.physical_resource_name()
-
- def validate(self):
- res = super(SecurityGroup, self).validate()
- if res:
- return res
-
- if self.properties['SecurityGroupEgress'] and not(
- self.properties['VpcId'] and
- clients.neutronclient is not None):
- raise exception.EgressRuleNotAllowed()
-
-
-def resource_mapping():
- return {
- 'AWS::EC2::SecurityGroup': SecurityGroup,
- }
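The port sanitizing done in `_handle_create_neutron` can be exercised standalone; the sketch below is a direct extraction of the method-local `sanitize_security_group` helper (Neutron rejects the negative port numbers that EC2-style rules use, e.g. `-1` for "all ICMP"):

```python
def sanitize_security_group(rule):
    """Neutron only accepts non-negative ports, so EC2-style negative
    ports are cleared; a rule left with no ports loses its CIDR too."""
    if rule.get('FromPort') is not None and int(rule['FromPort']) < 0:
        rule['FromPort'] = None
    if rule.get('ToPort') is not None and int(rule['ToPort']) < 0:
        rule['ToPort'] = None
    if rule.get('FromPort') is None and rule.get('ToPort') is None:
        rule['CidrIp'] = None

# An all-ICMP rule: both ports and the CIDR end up cleared
rule = {'IpProtocol': 'icmp', 'FromPort': '-1', 'ToPort': '-1',
        'CidrIp': '0.0.0.0/0'}
sanitize_security_group(rule)
```

An ordinary TCP rule with positive ports passes through unchanged.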
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from heat.common import exception
-from heat.engine import clients
-from heat.engine import scheduler
-from heat.engine.resources import nova_utils
-from heat.engine import resource
-from heat.openstack.common.gettextutils import _
-from heat.openstack.common import log as logging
-
-logger = logging.getLogger(__name__)
-
-
-class Server(resource.Resource):
-
- block_mapping_schema = {
- 'device_name': {
- 'Type': 'String',
- 'Required': True,
- 'Description': _('A device name where the volume will be '
- 'attached in the system at /dev/device_name. '
- 'This value is typically vda')},
- 'volume_id': {
- 'Type': 'String',
- 'Description': _('The ID of the volume to boot from. Only one of '
- 'volume_id or snapshot_id should be provided')},
- 'snapshot_id': {
- 'Type': 'String',
- 'Description': _('The ID of the snapshot to create a volume '
- 'from')},
- 'volume_size': {
- 'Type': 'String',
- 'Description': _('The size of the volume, in GB. It is safe to '
- 'leave this blank and have the Compute service '
- 'infer the size')},
- 'delete_on_termination': {
- 'Type': 'Boolean',
- 'Description': _('Indicate whether the volume should be deleted '
- 'when the server is terminated')}
- }
-
- networks_schema = {
- 'uuid': {
- 'Type': 'String',
- 'Description': _('ID of network to create a port on')},
- 'fixed_ip': {
- 'Type': 'String',
- 'Description': _('Fixed IP address to specify for the port '
- 'created on the requested network')},
- 'port': {
- 'Type': 'String',
- 'Description': _('ID of an existing port to associate with '
- 'this server')},
- }
-
- properties_schema = {
- 'name': {
- 'Type': 'String',
- 'Description': _('Optional server name')},
- 'image': {
- 'Type': 'String',
- 'Description': _('The ID or name of the image to boot with')},
- 'block_device_mapping': {
- 'Type': 'List',
- 'Description': _('Block device mappings for this server'),
- 'Schema': {
- 'Type': 'Map',
- 'Schema': block_mapping_schema
- }
- },
- 'flavor': {
- 'Type': 'String',
- 'Description': _('The ID or name of the flavor to boot onto'),
- 'Required': True},
- 'flavor_update_policy': {
- 'Type': 'String',
- 'Description': _('Policy on how to apply a flavor update; either '
- 'by requesting a server resize or by replacing '
- 'the entire server'),
- 'Default': 'RESIZE',
- 'AllowedValues': ['RESIZE', 'REPLACE']},
- 'key_name': {
- 'Type': 'String',
- 'Description': _('Name of keypair to inject into the server')},
- 'availability_zone': {
- 'Type': 'String',
- 'Description': _('Name of the availability zone for server '
- 'placement')},
- 'security_groups': {
- 'Type': 'List',
- 'Description': _('List of security group names')},
- 'networks': {
- 'Type': 'List',
-            'Description': _('An ordered list of NICs to be '
-                             'added to this server, with information about '
-                             'connected networks, fixed IPs, ports etc.'),
- 'Schema': {
- 'Type': 'Map',
- 'Schema': networks_schema
- }
- },
- 'scheduler_hints': {
- 'Type': 'Map',
- 'Description': _('Arbitrary key-value pairs specified by the '
- 'client to help boot a server')},
- 'metadata': {
- 'Type': 'Map',
- 'Description': _('Arbitrary key/value metadata to store for this '
- 'server. A maximum of five entries is allowed, '
- 'and both keys and values must be 255 characters '
- 'or less')},
- 'user_data': {
- 'Type': 'String',
- 'Description': _('User data script to be executed by cloud-init')},
- 'reservation_id': {
- 'Type': 'String',
- 'Description': _('A UUID for the set of servers being requested'),
- 'Implemented': False},
- 'config_drive': {
- 'Type': 'String',
-            'Description': _('Value for config drive: either a boolean '
-                             'or a volume-id'),
- 'Implemented': False},
- # diskConfig translates to API attribute OS-DCF:diskConfig
- # hence the camel case instead of underscore to separate the words
- 'diskConfig': {
- 'Type': 'String',
- 'Description': _('Control how the disk is partitioned when the '
- 'server is created'),
- 'AllowedValues': ['AUTO', 'MANUAL']}
- }
-
- attributes_schema = {
- 'show': _('A dict of all server details as returned by the API'),
- 'addresses': _('A dict of all network addresses as returned by '
- 'the API'),
- 'networks': _('A dict of assigned network addresses of the form: '
- '{"public": [ip1, ip2...], "private": [ip3, ip4]}'),
- 'first_private_address': _('Convenience attribute to fetch the first '
- 'assigned private network address, or an '
- 'empty string if nothing has been assigned '
- 'at this time'),
- 'first_public_address': _('Convenience attribute to fetch the first '
- 'assigned public network address, or an '
- 'empty string if nothing has been assigned '
- 'at this time'),
- 'instance_name': _('AWS compatible instance name'),
- 'accessIPv4': _('The manually assigned alternative public IPv4 '
- 'address of the server'),
- 'accessIPv6': _('The manually assigned alternative public IPv6 '
- 'address of the server'),
- }
-
- update_allowed_keys = ('Metadata', 'Properties')
- update_allowed_properties = ('flavor', 'flavor_update_policy')
-
- def __init__(self, name, json_snippet, stack):
- super(Server, self).__init__(name, json_snippet, stack)
- self.mime_string = None
-
- def get_mime_string(self, userdata):
- if not self.mime_string:
- self.mime_string = nova_utils.build_userdata(self, userdata)
- return self.mime_string
-
- def handle_create(self):
- security_groups = self.properties.get('security_groups', [])
- userdata = self.properties.get('user_data', '')
- flavor = self.properties['flavor']
- availability_zone = self.properties['availability_zone']
-
- key_name = self.properties['key_name']
- if key_name:
- # confirm keypair exists
- nova_utils.get_keypair(self.nova(), key_name)
-
- image = self.properties.get('image')
- if image:
- image = nova_utils.get_image_id(self.nova(), image)
-
- flavor_id = nova_utils.get_flavor_id(self.nova(), flavor)
- instance_meta = self.properties.get('metadata')
- scheduler_hints = self.properties.get('scheduler_hints')
- nics = self._build_nics(self.properties.get('networks'))
- block_device_mapping = self._build_block_device_mapping(
- self.properties.get('block_device_mapping'))
- reservation_id = self.properties.get('reservation_id')
- config_drive = self.properties.get('config_drive')
- disk_config = self.properties.get('diskConfig')
-
- server = None
- try:
- server = self.nova().servers.create(
- name=self.physical_resource_name(),
- image=image,
- flavor=flavor_id,
- key_name=key_name,
- security_groups=security_groups,
- userdata=self.get_mime_string(userdata),
- meta=instance_meta,
- scheduler_hints=scheduler_hints,
- nics=nics,
- availability_zone=availability_zone,
- block_device_mapping=block_device_mapping,
- reservation_id=reservation_id,
- config_drive=config_drive,
- disk_config=disk_config)
- finally:
- # Avoid a race condition where the thread could be cancelled
- # before the ID is stored
- if server is not None:
- self.resource_id_set(server.id)
-
- return server
-
- def check_create_complete(self, server):
- return self._check_active(server)
-
- def _check_active(self, server):
-
- if server.status != 'ACTIVE':
- server.get()
-
- # Some clouds append extra (STATUS) strings to the status
- short_server_status = server.status.split('(')[0]
- if short_server_status in nova_utils.deferred_server_statuses:
- return False
- elif server.status == 'ACTIVE':
- return True
- elif server.status == 'ERROR':
- exc = exception.Error(_('Creation of server %s failed.') %
- server.name)
- raise exc
- else:
- exc = exception.Error(_('Creation of server %(server)s failed '
- 'with unknown status: %(status)s') %
- dict(server=server.name,
- status=server.status))
- raise exc
-
- @staticmethod
- def _build_block_device_mapping(bdm):
- if not bdm:
- return None
- bdm_dict = {}
- for mapping in bdm:
- mapping_parts = []
- if mapping.get('snapshot_id'):
- mapping_parts.append(mapping.get('snapshot_id'))
- mapping_parts.append('snap')
- else:
- mapping_parts.append(mapping.get('volume_id'))
- mapping_parts.append('')
- if (mapping.get('volume_size') or
- mapping.get('delete_on_termination')):
-
- mapping_parts.append(mapping.get('volume_size', '0'))
- if mapping.get('delete_on_termination'):
- mapping_parts.append(str(mapping.get('delete_on_termination')))
- bdm_dict[mapping.get('device_name')] = ':'.join(mapping_parts)
-
- return bdm_dict
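The per-device value that `_build_block_device_mapping` emits is a nova v1 block-device-mapping string of the form `<id>:<'snap' or ''>[:<size>[:<delete_on_termination>]]`. A minimal standalone mirror of that per-mapping logic:

```python
def build_bdm_entry(mapping):
    """Standalone sketch of the per-device string built by
    _build_block_device_mapping above (not imported from Heat)."""
    parts = []
    if mapping.get('snapshot_id'):
        # Boot from a snapshot: the 'snap' marker follows the ID
        parts += [mapping['snapshot_id'], 'snap']
    else:
        # Boot from an existing volume: empty type field
        parts += [mapping.get('volume_id'), '']
    if mapping.get('volume_size') or mapping.get('delete_on_termination'):
        parts.append(mapping.get('volume_size', '0'))
    if mapping.get('delete_on_termination'):
        parts.append(str(mapping['delete_on_termination']))
    return ':'.join(parts)

entry = build_bdm_entry({'device_name': 'vda', 'snapshot_id': 'snap-1',
                         'volume_size': '10',
                         'delete_on_termination': True})
# entry == 'snap-1:snap:10:True'
```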
-
- @staticmethod
- def _build_nics(networks):
- if not networks:
- return None
-
- nics = []
-
- for net_data in networks:
- nic_info = {}
- if net_data.get('uuid'):
- nic_info['net-id'] = net_data['uuid']
- if net_data.get('fixed_ip'):
- nic_info['v4-fixed-ip'] = net_data['fixed_ip']
- if net_data.get('port'):
- nic_info['port-id'] = net_data['port']
- nics.append(nic_info)
- return nics
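The translation in `_build_nics` is a straight key rename from the template's `networks` schema onto the nic dicts novaclient expects. A standalone sketch of the per-network step:

```python
def build_nic(net_data):
    """Sketch of the per-network translation in _build_nics: template
    keys are renamed to novaclient nic dict keys, empty values dropped."""
    key_map = {'uuid': 'net-id',
               'fixed_ip': 'v4-fixed-ip',
               'port': 'port-id'}
    return {nova_key: net_data[tmpl_key]
            for tmpl_key, nova_key in key_map.items()
            if net_data.get(tmpl_key)}

nic = build_nic({'uuid': 'net-123', 'fixed_ip': '10.0.0.5'})
# nic == {'net-id': 'net-123', 'v4-fixed-ip': '10.0.0.5'}
```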
-
- def _resolve_attribute(self, name):
- server = self.nova().servers.get(self.resource_id)
- if name == 'addresses':
- return server.addresses
- if name == 'networks':
- return server.networks
- if name == 'first_private_address':
- private = server.networks.get('private', [])
- if len(private) > 0:
- return private[0]
- return ''
- if name == 'first_public_address':
- public = server.networks.get('public', [])
- if len(public) > 0:
- return public[0]
- return ''
- if name == 'instance_name':
- return server._info.get('OS-EXT-SRV-ATTR:instance_name')
- if name == 'accessIPv4':
- return server.accessIPv4
- if name == 'accessIPv6':
- return server.accessIPv6
- if name == 'show':
- return server._info
-
- def handle_update(self, json_snippet, tmpl_diff, prop_diff):
- if 'Metadata' in tmpl_diff:
- self.metadata = tmpl_diff['Metadata']
-
- if 'flavor' in prop_diff:
-
- flavor_update_policy = (
- prop_diff.get('flavor_update_policy') or
- self.properties.get('flavor_update_policy'))
-
- if flavor_update_policy == 'REPLACE':
- raise resource.UpdateReplace(self.name)
-
- flavor = prop_diff['flavor']
- flavor_id = nova_utils.get_flavor_id(self.nova(), flavor)
- server = self.nova().servers.get(self.resource_id)
- server.resize(flavor_id)
- checker = scheduler.TaskRunner(nova_utils.check_resize,
- server, flavor)
- checker.start()
- return checker
-
- def check_update_complete(self, checker):
- return checker.step() if checker is not None else True
-
- def metadata_update(self, new_metadata=None):
- '''
- Refresh the metadata if new_metadata is None
- '''
- if new_metadata is None:
- self.metadata = self.parsed_template('Metadata')
-
- def validate(self):
- '''
- Validate any of the provided params
- '''
- super(Server, self).validate()
-
- # check validity of key
- key_name = self.properties.get('key_name', None)
- if key_name:
- nova_utils.get_keypair(self.nova(), key_name)
-
- # either volume_id or snapshot_id needs to be specified, but not both
- # for block device mapping.
- bdm = self.properties.get('block_device_mapping') or []
- bootable_vol = False
- for mapping in bdm:
-            if mapping['device_name'] == 'vda':
- bootable_vol = True
-
- if mapping.get('volume_id') and mapping.get('snapshot_id'):
- raise exception.ResourcePropertyConflict('volume_id',
- 'snapshot_id')
- if not mapping.get('volume_id') and not mapping.get('snapshot_id'):
- msg = _('Either volume_id or snapshot_id must be specified for'
- ' device mapping %s') % mapping['device_name']
- raise exception.StackValidationFailed(message=msg)
-
- # make sure the image exists if specified.
- image = self.properties.get('image', None)
- if image:
- nova_utils.get_image_id(self.nova(), image)
- elif not image and not bootable_vol:
- msg = _('Neither image nor bootable volume is specified for'
- ' instance %s') % self.name
- raise exception.StackValidationFailed(message=msg)
-
- def handle_delete(self):
- '''
- Delete a server, blocking until it is disposed by OpenStack
- '''
- if self.resource_id is None:
- return
-
- try:
- server = self.nova().servers.get(self.resource_id)
- except clients.novaclient.exceptions.NotFound:
- pass
- else:
- delete = scheduler.TaskRunner(nova_utils.delete_server, server)
- delete(wait_time=0.2)
-
- self.resource_id = None
-
- def handle_suspend(self):
- '''
-        Suspend a server. Note that we do not wait for the SUSPENDED
-        state; it is polled for by check_suspend_complete, in a similar
-        way to the create logic, so we can take advantage of coroutines.
- '''
- if self.resource_id is None:
- raise exception.Error(_('Cannot suspend %s, resource_id not set') %
- self.name)
-
- try:
- server = self.nova().servers.get(self.resource_id)
- except clients.novaclient.exceptions.NotFound:
- raise exception.NotFound(_('Failed to find server %s') %
- self.resource_id)
- else:
- logger.debug('suspending server %s' % self.resource_id)
-            # We want the server.suspend to happen after the volume
-            # detachment has finished, so pass both the suspend task
-            # and the server
- suspend_runner = scheduler.TaskRunner(server.suspend)
- return server, suspend_runner
-
- def check_suspend_complete(self, cookie):
- server, suspend_runner = cookie
-
- if not suspend_runner.started():
- suspend_runner.start()
-
- if suspend_runner.done():
- if server.status == 'SUSPENDED':
- return True
-
- server.get()
- logger.debug('%s check_suspend_complete status = %s' %
- (self.name, server.status))
- if server.status in list(nova_utils.deferred_server_statuses +
- ['ACTIVE']):
- return server.status == 'SUSPENDED'
- else:
- exc = exception.Error(_('Suspend of server %(server)s failed '
- 'with unknown status: %(status)s') %
- dict(server=server.name,
- status=server.status))
- raise exc
-
- def handle_resume(self):
- '''
-        Resume a server. Note that we do not wait for the ACTIVE state;
-        it is polled for by check_resume_complete, in a similar way to
-        the create logic, so we can take advantage of coroutines.
- '''
- if self.resource_id is None:
- raise exception.Error(_('Cannot resume %s, resource_id not set') %
- self.name)
-
- try:
- server = self.nova().servers.get(self.resource_id)
- except clients.novaclient.exceptions.NotFound:
- raise exception.NotFound(_('Failed to find server %s') %
- self.resource_id)
- else:
- logger.debug('resuming server %s' % self.resource_id)
- server.resume()
- return server
-
- def check_resume_complete(self, server):
- return self._check_active(server)
-
-
-def resource_mapping():
- return {
- 'OS::Nova::Server': Server,
- }
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from heat.common import exception
-from heat.common import template_format
-from heat.common import urlfetch
-from heat.engine.properties import Properties
-from heat.engine import stack_resource
-
-from heat.openstack.common import log as logging
-
-logger = logging.getLogger(__name__)
-
-
-(PROP_TEMPLATE_URL,
- PROP_TIMEOUT_MINS,
- PROP_PARAMETERS) = ('TemplateURL', 'TimeoutInMinutes', 'Parameters')
-
-
-class NestedStack(stack_resource.StackResource):
- '''
- A Resource representing a child stack to allow composition of templates.
- '''
-
- properties_schema = {
- PROP_TEMPLATE_URL: {
- 'Type': 'String',
- 'Required': True,
- 'Description': _('The URL of a template that specifies the stack'
- ' to be created as a resource.')},
- PROP_TIMEOUT_MINS: {
- 'Type': 'Number',
- 'Description': _('The length of time, in minutes, to wait for the'
- ' nested stack creation.')},
- PROP_PARAMETERS: {
- 'Type': 'Map',
- 'Description': _('The set of parameters passed to this nested'
- ' stack.')}}
-
- update_allowed_keys = ('Properties',)
- update_allowed_properties = (PROP_TEMPLATE_URL, PROP_TIMEOUT_MINS,
- PROP_PARAMETERS)
-
- def handle_create(self):
- template_data = urlfetch.get(self.properties[PROP_TEMPLATE_URL])
- template = template_format.parse(template_data)
-
- return self.create_with_template(template,
- self.properties[PROP_PARAMETERS],
- self.properties[PROP_TIMEOUT_MINS])
-
- def handle_delete(self):
- return self.delete_nested()
-
- def FnGetAtt(self, key):
- if key and not key.startswith('Outputs.'):
- raise exception.InvalidTemplateAttribute(resource=self.name,
- key=key)
- return self.get_output(key.partition('.')[-1])
-
- def FnGetRefId(self):
- return self.nested().identifier().arn()
-
- def handle_update(self, json_snippet, tmpl_diff, prop_diff):
- # Nested stack template may be changed even if the prop_diff is empty.
- self.properties = Properties(self.properties_schema,
- json_snippet.get('Properties', {}),
- self.stack.resolve_runtime_data,
- self.name)
-
- template_data = urlfetch.get(self.properties[PROP_TEMPLATE_URL])
- template = template_format.parse(template_data)
-
- return self.update_with_template(template,
- self.properties[PROP_PARAMETERS],
- self.properties[PROP_TIMEOUT_MINS])
-
-
-def resource_mapping():
- return {
- 'AWS::CloudFormation::Stack': NestedStack,
- }
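`NestedStack.FnGetAtt` only accepts attribute keys of the form `Outputs.<name>` and resolves them against the child stack's outputs; the key handling itself is a one-liner worth seeing in isolation (a standalone sketch, with `ValueError` standing in for Heat's `InvalidTemplateAttribute`):

```python
def nested_output_key(key):
    """Sketch of the key handling in NestedStack.FnGetAtt: reject
    anything not prefixed 'Outputs.', then strip the prefix."""
    if key and not key.startswith('Outputs.'):
        raise ValueError('invalid attribute: %s' % key)
    # partition('.') splits on the first dot only
    return key.partition('.')[-1]

# nested_output_key('Outputs.PublicIp') == 'PublicIp'
```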
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from heat.engine import clients
-from heat.common import exception
-from heat.openstack.common import log as logging
-from heat.engine import resource
-from heat.engine.resources.vpc import VPC
-
-logger = logging.getLogger(__name__)
-
-
-class Subnet(resource.Resource):
- tags_schema = {'Key': {'Type': 'String',
- 'Required': True},
- 'Value': {'Type': 'String',
- 'Required': True}}
-
- properties_schema = {
- 'AvailabilityZone': {'Type': 'String'},
- 'CidrBlock': {
- 'Type': 'String',
- 'Required': True},
- 'VpcId': {
- 'Type': 'String',
- 'Required': True},
- 'Tags': {'Type': 'List', 'Schema': {
- 'Type': 'Map',
- 'Implemented': False,
- 'Schema': tags_schema}}
- }
-
- def handle_create(self):
- client = self.neutron()
- # TODO(sbaker) Verify that this CidrBlock is within the vpc CidrBlock
- network_id = self.properties.get('VpcId')
-
- props = {
- 'network_id': network_id,
- 'cidr': self.properties.get('CidrBlock'),
- 'name': self.physical_resource_name(),
- 'ip_version': 4
- }
- subnet = client.create_subnet({'subnet': props})['subnet']
-
- router = VPC.router_for_vpc(self.neutron(), network_id)
- if router:
- client.add_interface_router(
- router['id'],
- {'subnet_id': subnet['id']})
- self.resource_id_set(subnet['id'])
-
- def handle_delete(self):
- from neutronclient.common.exceptions import NeutronClientException
-
- client = self.neutron()
- network_id = self.properties.get('VpcId')
- subnet_id = self.resource_id
-
- try:
- router = VPC.router_for_vpc(self.neutron(), network_id)
- if router:
- client.remove_interface_router(
- router['id'],
- {'subnet_id': subnet_id})
- except NeutronClientException as ex:
- if ex.status_code != 404:
- raise ex
-
- try:
- client.delete_subnet(subnet_id)
- except NeutronClientException as ex:
- if ex.status_code != 404:
- raise ex
-
- def FnGetAtt(self, key):
- if key == 'AvailabilityZone':
- return self.properties.get(key, '')
- raise exception.InvalidTemplateAttribute(resource=self.name, key=key)
-
-
-def resource_mapping():
- if clients.neutronclient is None:
- return {}
-
- return {
- 'AWS::EC2::Subnet': Subnet,
- }
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from urlparse import urlparse
-
-from heat.common import exception
-from heat.engine import resource
-from heat.openstack.common import log as logging
-from heat.engine import clients
-
-logger = logging.getLogger(__name__)
-
-
-class SwiftContainer(resource.Resource):
- properties_schema = {
- 'name': {'Type': 'String'},
- 'X-Container-Read': {'Type': 'String'},
- 'X-Container-Write': {'Type': 'String'},
- 'X-Container-Meta': {'Type': 'Map', 'Default': {}}}
-
- def validate(self):
- '''
- Validate any of the provided params
- '''
- #check if swiftclient is installed
- if clients.swiftclient is None:
- return {'Error':
- 'SwiftContainer unavailable due to missing swiftclient.'}
-
- def physical_resource_name(self):
- name = self.properties.get('name')
- if name:
- return name
-
- return super(SwiftContainer, self).physical_resource_name()
-
- @staticmethod
- def _build_meta_headers(meta_props):
- '''
- Returns a new dict where each key is prepended with:
- X-Container-Meta-
- '''
- if meta_props is None:
- return {}
- return dict(
- ('X-Container-Meta-' + k, v) for (k, v) in meta_props.items())
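Since `_build_meta_headers` is a pure static helper, it is easy to exercise on its own; this standalone copy shows how each metadata key picks up Swift's container-metadata prefix:

```python
def build_meta_headers(meta_props):
    """Standalone copy of SwiftContainer._build_meta_headers: each
    metadata key is prefixed with X-Container-Meta- for Swift."""
    if meta_props is None:
        return {}
    return dict(('X-Container-Meta-' + k, v)
                for k, v in meta_props.items())

headers = build_meta_headers({'Version': '2', 'Owner': 'heat'})
# {'X-Container-Meta-Version': '2', 'X-Container-Meta-Owner': 'heat'}
```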
-
- def handle_create(self):
- """Create a container."""
- container = self.physical_resource_name()
- headers = SwiftContainer._build_meta_headers(
- self.properties['X-Container-Meta'])
-        if 'X-Container-Read' in self.properties:
-            headers['X-Container-Read'] = self.properties['X-Container-Read']
-        if 'X-Container-Write' in self.properties:
-            headers['X-Container-Write'] = self.properties['X-Container-Write']
- logger.debug('SwiftContainer create container %s with headers %s' %
- (container, headers))
-
- self.swift().put_container(container, headers)
- self.resource_id_set(container)
-
- def handle_delete(self):
- """Perform specified delete policy."""
- logger.debug('SwiftContainer delete container %s' % self.resource_id)
- if self.resource_id is not None:
- try:
- self.swift().delete_container(self.resource_id)
- except clients.swiftclient.ClientException as ex:
- logger.warn("Delete container failed: %s" % str(ex))
-
- def FnGetRefId(self):
- return unicode(self.resource_id)
-
- def FnGetAtt(self, key):
- url, token_id = self.swift().get_auth()
- parsed = list(urlparse(url))
- if key == 'DomainName':
- return parsed[1].split(':')[0]
- elif key == 'WebsiteURL':
- return '%s://%s%s/%s' % (parsed[0], parsed[1], parsed[2],
- self.resource_id)
- elif key == 'RootURL':
- return '%s://%s%s' % (parsed[0], parsed[1], parsed[2])
- elif self.resource_id and key in (
- 'ObjectCount', 'BytesUsed', 'HeadContainer'):
- try:
- headers = self.swift().head_container(self.resource_id)
- except clients.swiftclient.ClientException as ex:
- logger.warn("Head container failed: %s" % str(ex))
- return None
- else:
- if key == 'ObjectCount':
- return headers['x-container-object-count']
- elif key == 'BytesUsed':
- return headers['x-container-bytes-used']
- elif key == 'HeadContainer':
- return headers
- else:
- raise exception.InvalidTemplateAttribute(resource=self.name,
- key=key)
-
-
-def resource_mapping():
- if clients.swiftclient is None:
- return {}
-
- return {
- 'OS::Swift::Container': SwiftContainer,
- }
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from requests import exceptions
-
-from heat.common import exception
-from heat.common import template_format
-from heat.common import urlfetch
-from heat.engine import attributes
-from heat.engine import environment
-from heat.engine import properties
-from heat.engine import stack_resource
-from heat.engine import template
-
-from heat.openstack.common import log as logging
-
-logger = logging.getLogger(__name__)
-
-
-class TemplateResource(stack_resource.StackResource):
- '''
- A resource implemented by a nested stack.
-
- This implementation passes resource properties as parameters to the nested
- stack. Outputs of the nested stack are exposed as attributes of this
- resource.
- '''
-
- def __init__(self, name, json_snippet, stack):
- self._parsed_nested = None
- self.stack = stack
- tri = stack.env.get_resource_info(
- json_snippet['Type'],
- registry_type=environment.TemplateResourceInfo)
- self.template_name = tri.template_name
- if tri.user_resource:
- self.allowed_schemes = ('http', 'https')
- else:
- self.allowed_schemes = ('http', 'https', 'file')
-
- # parse_nested can fail if the URL in the environment is bad
- # or otherwise inaccessible. Suppress the error here so the
- # stack can be deleted, and detect it at validate/create time
- try:
- tmpl = template.Template(self.parsed_nested)
- except ValueError:
- tmpl = template.Template({})
-
- self.properties_schema = (properties.Properties
- .schema_from_params(tmpl.param_schemata()))
- self.attributes_schema = (attributes.Attributes
- .schema_from_outputs(tmpl[template.OUTPUTS]))
-
- super(TemplateResource, self).__init__(name, json_snippet, stack)
-
- def _to_parameters(self):
- '''
- :return: parameter values for our nested stack based on our properties
- '''
- params = {}
- for n, v in iter(self.properties.props.items()):
- if not v.implemented():
- continue
-
- val = self.properties[n]
-
- if val is not None:
- # take a list and create a CommaDelimitedList
- if v.type() == properties.LIST:
- if len(val) == 0:
- val = ''
-                elif isinstance(val[0], dict):
-                    flattened = []
-                    for (i, item) in enumerate(val):
-                        for (k, iv) in iter(item.items()):
-                            mem_str = '.member.%d.%s=%s' % (i, k, iv)
-                            flattened.append(mem_str)
-                    val = ','.join(flattened)
- else:
- val = ','.join(val)
-
- # for MAP, the JSON param takes either a collection or string,
- # so just pass it on and let the param validate as appropriate
-
- params[n] = val
-
- return params
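The list-of-dicts case in `_to_parameters` flattens each dict into AWS-style `.member.<index>.<key>=<value>` fragments joined into one comma-delimited string; a standalone sketch of that branch:

```python
def flatten_member_list(val):
    """Sketch of the list-of-dicts branch in _to_parameters: each dict
    becomes '.member.<i>.<key>=<value>' fragments, comma-joined."""
    flattened = []
    for i, item in enumerate(val):
        for k, v in item.items():
            flattened.append('.member.%d.%s=%s' % (i, k, v))
    return ','.join(flattened)

param = flatten_member_list([{'Name': 'a'}, {'Name': 'b'}])
# param == '.member.0.Name=a,.member.1.Name=b'
```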
-
- @property
- def parsed_nested(self):
- if not self._parsed_nested:
- self._parsed_nested = template_format.parse(self.template_data)
- return self._parsed_nested
-
- @property
- def template_data(self):
- t_data = self.stack.t.files.get(self.template_name)
- if not t_data and self.template_name.endswith((".yaml", ".template")):
- try:
- t_data = urlfetch.get(self.template_name,
- allowed_schemes=self.allowed_schemes)
- except (exceptions.RequestException, IOError) as r_exc:
- raise ValueError("Could not fetch remote template '%s': %s" %
- (self.template_name, str(r_exc)))
- else:
- # TODO(Randall) Whoops, misunderstanding on my part; this
- # doesn't actually persist to the db like I thought.
- # Find a better way
- self.stack.t.files[self.template_name] = t_data
- return t_data
-
- def _validate_against_facade(self, facade_cls):
- facade_schemata = properties.schemata(facade_cls.properties_schema)
-
- for n, fs in facade_schemata.items():
- if fs.required and n not in self.properties_schema:
- msg = ("Required property %s for facade %s "
- "missing in provider") % (n, self.type())
- raise exception.StackValidationFailed(message=msg)
-
- ps = self.properties_schema.get(n)
- if (n in self.properties_schema and
- (fs.type != ps.type)):
- # Type mismatch
- msg = ("Property %s type mismatch between facade %s (%s) "
- "and provider (%s)") % (n, self.type(),
- fs.type, ps.type)
- raise exception.StackValidationFailed(message=msg)
-
- for n, ps in self.properties_schema.items():
- if ps.required and n not in facade_schemata:
- # Required property for template not present in facade
- msg = ("Provider requires property %s "
- "unknown in facade %s") % (n, self.type())
- raise exception.StackValidationFailed(message=msg)
-
- for attr in facade_cls.attributes_schema:
- if attr not in self.attributes_schema:
- msg = ("Attribute %s for facade %s "
- "missing in provider") % (attr, self.type())
- raise exception.StackValidationFailed(message=msg)
-
- def validate(self):
- try:
- td = self.template_data
- except ValueError as ex:
- msg = _("Failed to retrieve template data: %s") % str(ex)
- raise exception.StackValidationFailed(message=msg)
- cri = self.stack.env.get_resource_info(
- self.type(),
- registry_type=environment.ClassResourceInfo)
-
- # If we're using an existing resource type as a facade for this
- # template, check for compatibility between the interfaces.
- if cri is not None and not isinstance(self, cri.get_class()):
- facade_cls = cri.get_class()
- self._validate_against_facade(facade_cls)
-
- return super(TemplateResource, self).validate()
-
- def handle_create(self):
- return self.create_with_template(self.parsed_nested,
- self._to_parameters())
-
- def handle_delete(self):
- return self.delete_nested()
-
- def FnGetRefId(self):
- if not self.nested():
- return unicode(self.name)
- return self.nested().identifier().arn()
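The interface-compatibility rules enforced by `_validate_against_facade` above can be sketched in isolation. This is an illustration, not Heat code: plain dicts stand in for Heat's property schema objects, and the function/key names are assumptions for the sketch.

```python
def validate_against_facade(facade_schema, provider_schema):
    """Check that a provider template is compatible with the facade
    resource type it stands in for: every required facade property must
    exist in the provider, shared properties must agree on type, and the
    provider must not require anything the facade does not know about."""
    for name, spec in facade_schema.items():
        if spec.get('required') and name not in provider_schema:
            raise ValueError('Required property %s missing in provider'
                             % name)
        if (name in provider_schema
                and spec['type'] != provider_schema[name]['type']):
            raise ValueError('Property %s type mismatch' % name)
    # The converse check, mirroring the second loop above.
    for name, spec in provider_schema.items():
        if spec.get('required') and name not in facade_schema:
            raise ValueError('Provider requires property %s unknown in '
                             'facade' % name)
```

The real code raises `StackValidationFailed` with equivalent messages; `ValueError` keeps the sketch self-contained.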
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from heat.common import exception
-from heat.engine import clients
-from heat.engine import resource
-
-from heat.openstack.common import log as logging
-
-logger = logging.getLogger(__name__)
-
-#
-# We are ignoring Groups as keystone does not support them.
-# For now support users and accesskeys,
-# We also now support a limited heat-native Policy implementation
-#
-
-
-class User(resource.Resource):
- properties_schema = {
- 'Path': {
- 'Type': 'String',
- 'Description': _('Not Implemented.')},
- 'Groups': {
- 'Type': 'List',
- 'Description': _('Not Implemented.')},
- 'LoginProfile': {
- 'Type': 'Map',
- 'Schema': {'Password': {'Type': 'String'}},
- 'Description': _('A login profile for the user.')},
- 'Policies': {
- 'Type': 'List',
- 'Description': _('Access policies to apply to the user.')}}
-
- def _validate_policies(self, policies):
- for policy in (policies or []):
- # When we support AWS IAM style policies, we will have to accept
- # either a ref to an AWS::IAM::Policy defined in the stack, or
- # an embedded dict describing the policy directly. For now,
- # we only expect this list to contain strings, each of which
- # must map to an OS::Heat::AccessPolicy in this stack.
- # If a non-string (e.g. an embedded IAM dict policy) is passed,
- # we ignore it (rather than reject it, since we previously
- # ignored such policies and don't want to break templates
- # which previously worked).
- if not isinstance(policy, basestring):
- logger.warning("Ignoring policy %s, must be a string "
- "resource name" % policy)
- continue
-
- try:
- policy_rsrc = self.stack.resources[policy]
- except KeyError:
- logger.error("Policy %s does not exist in stack %s" %
- (policy, self.stack.name))
- return False
-
- if not callable(getattr(policy_rsrc, 'access_allowed', None)):
- logger.error("Policy %s is not an AccessPolicy resource" %
- policy)
- return False
-
- return True
-
- def handle_create(self):
- passwd = ''
- if self.properties['LoginProfile'] and \
- 'Password' in self.properties['LoginProfile']:
- passwd = self.properties['LoginProfile']['Password']
-
- if self.properties['Policies']:
- if not self._validate_policies(self.properties['Policies']):
- raise exception.InvalidTemplateAttribute(resource=self.name,
- key='Policies')
-
- uid = self.keystone().create_stack_user(self.physical_resource_name(),
- passwd)
- self.resource_id_set(uid)
-
- def handle_delete(self):
- if self.resource_id is None:
- logger.error("Cannot delete User resource before user created!")
- return
- try:
- self.keystone().delete_stack_user(self.resource_id)
- except clients.hkc.kc.exceptions.NotFound:
- pass
-
- def handle_suspend(self):
- if self.resource_id is None:
- logger.error("Cannot suspend User resource before user created!")
- return
- self.keystone().disable_stack_user(self.resource_id)
-
- def handle_resume(self):
- if self.resource_id is None:
- logger.error("Cannot resume User resource before user created!")
- return
- self.keystone().enable_stack_user(self.resource_id)
-
- def FnGetRefId(self):
- return unicode(self.physical_resource_name())
-
- def FnGetAtt(self, key):
- #TODO(asalkeld) Implement Arn attribute
- raise exception.InvalidTemplateAttribute(
- resource=self.name, key=key)
-
- def access_allowed(self, resource_name):
- policies = (self.properties['Policies'] or [])
- for policy in policies:
- if not isinstance(policy, basestring):
- logger.warning("Ignoring policy %s, must be a string "
- "resource name" % policy)
- continue
- policy_rsrc = self.stack.resources[policy]
- if not policy_rsrc.access_allowed(resource_name):
- return False
- return True
-
-
-class AccessKey(resource.Resource):
- properties_schema = {
- 'Serial': {
- 'Type': 'Integer',
- 'Implemented': False,
- 'Description': _('Not Implemented.')},
- 'UserName': {
- 'Type': 'String',
- 'Required': True,
- 'Description': _('The name of the user that the new key will'
- ' belong to.')},
- 'Status': {
- 'Type': 'String',
- 'Implemented': False,
- 'AllowedValues': ['Active', 'Inactive'],
- 'Description': _('Not Implemented.')}}
-
- def __init__(self, name, json_snippet, stack):
- super(AccessKey, self).__init__(name, json_snippet, stack)
- self._secret = None
-
- def _get_user(self):
- """
- Helper function to derive the keystone userid, which is stored in the
- resource_id of the User associated with this key. We want to avoid
- looking the name up by listing keystone users, as that requires
- admin rights in keystone, which would mean FnGetAtt (which calls
- _secret_accesskey) could not work for normal non-admin users.
- """
- # Look up the User resource by intrinsic reference (which is what
- # is passed in the UserName parameter). It would be cleaner to have
- # the User resource return its resource_id from FnGetRefId, but the
- # AWS definition of User says it returns a user name, not an ID.
- return self.stack.resource_by_refid(self.properties['UserName'])
-
- def handle_create(self):
- user = self._get_user()
- if user is None:
- raise exception.NotFound('could not find user %s' %
- self.properties['UserName'])
-
- kp = self.keystone().get_ec2_keypair(user.resource_id)
- if not kp:
- raise exception.Error("Error creating ec2 keypair for user %s" %
- user)
-
- self.resource_id_set(kp.access)
- self._secret = kp.secret
-
- def handle_delete(self):
- self._secret = None
- if self.resource_id is None:
- return
-
- user = self._get_user()
- if user is None:
- logger.warning('Error deleting %s - user not found' % str(self))
- return
- user_id = user.resource_id
- if user_id:
- try:
- self.keystone().delete_ec2_keypair(user_id, self.resource_id)
- except clients.hkc.kc.exceptions.NotFound:
- pass
-
- self.resource_id_set(None)
-
- def _secret_accesskey(self):
- '''
- Return the user's access key, fetching it from keystone if necessary
- '''
- if self._secret is None:
- if not self.resource_id:
- logger.warn('could not get secret for %s Error:%s' %
- (self.properties['UserName'],
- "resource_id not yet set"))
- else:
- try:
- user_id = self._get_user().resource_id
- kp = self.keystone().get_ec2_keypair(user_id)
- except Exception as ex:
- logger.warn('could not get secret for %s Error:%s' %
- (self.properties['UserName'],
- str(ex)))
- else:
- if kp.access == self.resource_id:
- self._secret = kp.secret
- else:
- msg = ("Unexpected ec2 keypair, for %s access %s" %
- (user_id, kp.access))
- logger.error(msg)
-
- return self._secret or '000-000-000'
-
- def FnGetAtt(self, key):
- res = None
- log_res = None
- if key == 'UserName':
- res = self.properties['UserName']
- log_res = res
- elif key == 'SecretAccessKey':
- res = self._secret_accesskey()
- log_res = "<SANITIZED>"
- else:
- raise exception.InvalidTemplateAttribute(
- resource=self.physical_resource_name(), key=key)
-
- logger.info('%s.GetAtt(%s) == %s' % (self.physical_resource_name(),
- key, log_res))
- return unicode(res)
-
- def access_allowed(self, resource_name):
- return self._get_user().access_allowed(resource_name)
-
-
-class AccessPolicy(resource.Resource):
- properties_schema = {
- 'AllowedResources': {
- 'Type': 'List',
- 'Required': True,
- 'Description': _('Resources that users are allowed to access by'
- ' the DescribeStackResource API.')}}
-
- def handle_create(self):
- resources = self.properties['AllowedResources']
- # All of the provided resource names must exist in this stack
- for resource in resources:
- if resource not in self.stack:
- logger.error("AccessPolicy resource %s not in stack" %
- resource)
- raise exception.ResourceNotFound(resource_name=resource,
- stack_name=self.stack.name)
-
- def access_allowed(self, resource_name):
- return resource_name in self.properties['AllowedResources']
-
-
-def resource_mapping():
- return {
- 'AWS::IAM::User': User,
- 'AWS::IAM::AccessKey': AccessKey,
- 'OS::Heat::AccessPolicy': AccessPolicy,
- }
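The policy chain above is easiest to see end to end: a User delegates `access_allowed` to each named AccessPolicy, and each policy simply checks membership in its whitelist. A minimal standalone sketch (class and attribute names are illustrative, not Heat code):

```python
class AccessPolicy(object):
    """Whitelist of resource names a user may describe."""
    def __init__(self, allowed_resources):
        self.allowed = list(allowed_resources)

    def access_allowed(self, resource_name):
        return resource_name in self.allowed


class User(object):
    """Grants access only if every attached policy allows it."""
    def __init__(self, policies):
        self.policies = policies

    def access_allowed(self, resource_name):
        return all(p.access_allowed(resource_name) for p in self.policies)
```

Note that, as in the Heat code above, a user with no policies at all allows everything: the check only restricts access when a policy explicitly fails to match.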
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import json
-
-from heat.openstack.common import log as logging
-from heat.openstack.common.importutils import try_import
-
-from heat.common import exception
-from heat.engine import clients
-from heat.engine import resource
-from heat.engine import scheduler
-
-volume_backups = try_import('cinderclient.v1.volume_backups')
-
-logger = logging.getLogger(__name__)
-
-
-class Volume(resource.Resource):
-
- properties_schema = {
- 'AvailabilityZone': {
- 'Type': 'String', 'Required': True,
- 'Description': _('The availability zone in which the volume '
- 'will be created')},
- 'Size': {
- 'Type': 'Number',
- 'Description': _('The size of the volume in GB')},
- 'SnapshotId': {
- 'Type': 'String',
- 'Description': _('If specified, the backup used as the source '
- 'to create the volume')},
- 'Tags': {
- 'Type': 'List',
- 'Description': _('The list of tags to associate with the volume '
- '(ignored)')}
- }
-
- _restore_property = 'SnapshotId'
-
- def _display_name(self):
- return self.physical_resource_name()
-
- def _display_description(self):
- return self.physical_resource_name()
-
- def _create_arguments(self):
- return {
- 'size': self.properties['Size'],
- 'availability_zone': self.properties['AvailabilityZone'] or None,
- }
-
- def handle_create(self):
- backup_id = self.properties.get(self._restore_property)
- cinder = self.cinder()
- if backup_id is not None:
- if volume_backups is None:
- raise exception.Error(
- '%s not supported' % self._restore_property)
- vol_id = cinder.restores.restore(backup_id).volume_id
-
- vol = cinder.volumes.get(vol_id)
- vol.update(
- display_name=self._display_name(),
- display_description=self._display_description())
- else:
- vol = cinder.volumes.create(
- display_name=self._display_name(),
- display_description=self._display_description(),
- **self._create_arguments())
- self.resource_id_set(vol.id)
-
- return vol
-
- def check_create_complete(self, vol):
- vol.get()
-
- if vol.status == 'available':
- return True
- elif vol.status == 'creating':
- return False
- elif vol.status == 'restoring-backup':
- return False
- else:
- raise exception.Error(vol.status)
-
- def _backup(self):
- backup = self.cinder().backups.create(self.resource_id)
- while backup.status == 'creating':
- yield
- backup.get()
- if backup.status != 'available':
- raise exception.Error(backup.status)
-
- @scheduler.wrappertask
- def _delete(self, backup=False):
- if self.resource_id is not None:
- try:
- vol = self.cinder().volumes.get(self.resource_id)
-
- if backup:
- yield self._backup()
- vol.get()
-
- if vol.status == 'in-use':
- logger.warn("can't delete volume when in-use")
- raise exception.Error('Volume in use')
-
- vol.delete()
- while True:
- yield
- vol.get()
- except clients.cinderclient.exceptions.NotFound:
- self.resource_id = None
-
- if volume_backups is not None:
- def handle_snapshot_delete(self, state):
- backup = state not in ((self.CREATE, self.FAILED),
- (self.UPDATE, self.FAILED))
-
- delete_task = scheduler.TaskRunner(self._delete, backup=backup)
- delete_task.start()
- return delete_task
-
- def handle_delete(self):
- delete_task = scheduler.TaskRunner(self._delete)
- delete_task.start()
- return delete_task
-
- def check_delete_complete(self, delete_task):
- return delete_task.step()
-
-
-class VolumeAttachTask(object):
- """A task for attaching a volume to a Nova server."""
-
- def __init__(self, stack, server_id, volume_id, device):
- """
- Initialise with the stack (for obtaining the clients), ID of the
- server and volume, and the device name on the server.
- """
- self.clients = stack.clients
- self.server_id = server_id
- self.volume_id = volume_id
- self.device = device
- self.attachment_id = None
-
- def __str__(self):
- """Return a human-readable string description of the task."""
- return 'Attaching Volume %s to Instance %s as %s' % (self.volume_id,
- self.server_id,
- self.device)
-
- def __repr__(self):
- """Return a brief string description of the task."""
- return '%s(%s -> %s [%s])' % (type(self).__name__,
- self.volume_id,
- self.server_id,
- self.device)
-
- def __call__(self):
- """Return a co-routine which runs the task."""
- logger.debug(str(self))
-
- va = self.clients.nova().volumes.create_server_volume(
- server_id=self.server_id,
- volume_id=self.volume_id,
- device=self.device)
- self.attachment_id = va.id
- yield
-
- vol = self.clients.cinder().volumes.get(self.volume_id)
- while vol.status == 'available' or vol.status == 'attaching':
- logger.debug('%s - volume status: %s' % (str(self), vol.status))
- yield
- vol.get()
-
- if vol.status != 'in-use':
- raise exception.Error(vol.status)
-
- logger.info('%s - complete' % str(self))
-
-
-class VolumeDetachTask(object):
- """A task for detaching a volume from a Nova server."""
-
- def __init__(self, stack, server_id, volume_id):
- """
- Initialise with the stack (for obtaining the clients), and the IDs of
- the server and volume.
- """
- self.clients = stack.clients
- self.server_id = server_id
- self.volume_id = volume_id
-
- def __str__(self):
- """Return a human-readable string description of the task."""
- return 'Detaching Volume %s from Instance %s' % (self.volume_id,
- self.server_id)
-
- def __repr__(self):
- """Return a brief string description of the task."""
- return '%s(%s -/> %s)' % (type(self).__name__,
- self.volume_id,
- self.server_id)
-
- def __call__(self):
- """Return a co-routine which runs the task."""
- logger.debug(str(self))
-
- try:
- vol = self.clients.cinder().volumes.get(self.volume_id)
- except clients.cinderclient.exceptions.NotFound:
- logger.warning('%s - volume not found' % str(self))
- return
-
- server_api = self.clients.nova().volumes
-
- try:
- server_api.delete_server_volume(self.server_id, self.volume_id)
- except clients.novaclient.exceptions.NotFound:
- logger.warning('%s - not found' % str(self))
-
- yield
-
- try:
- vol.get()
- while vol.status in ('in-use', 'detaching'):
- logger.debug('%s - volume still in use' % str(self))
- yield
-
- try:
- server_api.delete_server_volume(self.server_id,
- self.volume_id)
- except clients.novaclient.exceptions.NotFound:
- pass
- vol.get()
-
- logger.info('%s - status: %s' % (str(self), vol.status))
- if vol.status != 'available':
- raise exception.Error(vol.status)
-
- except clients.cinderclient.exceptions.NotFound:
- logger.warning('%s - volume not found' % str(self))
-
-
-class VolumeAttachment(resource.Resource):
- properties_schema = {
- 'InstanceId': {
- 'Type': 'String', 'Required': True,
- 'Description': _('The ID of the instance to which the '
- 'volume attaches')},
- 'VolumeId': {
- 'Type': 'String', 'Required': True,
- 'Description': _('The ID of the volume to be attached')},
- 'Device': {
- 'Type': 'String', 'Required': True,
- 'AllowedPattern': '/dev/vd[b-z]',
- 'Description': _('The device where the volume is exposed on '
- 'the instance')}
- }
-
- _instance_property = 'InstanceId'
- _volume_property = 'VolumeId'
- _device_property = 'Device'
-
- def handle_create(self):
- server_id = self.properties[self._instance_property]
- volume_id = self.properties[self._volume_property]
- dev = self.properties[self._device_property]
-
- attach_task = VolumeAttachTask(self.stack, server_id, volume_id, dev)
- attach_runner = scheduler.TaskRunner(attach_task)
-
- attach_runner.start()
-
- self.resource_id_set(attach_task.attachment_id)
-
- return attach_runner
-
- def check_create_complete(self, attach_runner):
- return attach_runner.step()
-
- def handle_delete(self):
- server_id = self.properties[self._instance_property]
- volume_id = self.properties[self._volume_property]
- detach_task = VolumeDetachTask(self.stack, server_id, volume_id)
- scheduler.TaskRunner(detach_task)()
-
-
-class CinderVolume(Volume):
-
- properties_schema = {
- 'availability_zone': {
- 'Type': 'String', 'Required': True,
- 'Description': _('The availability zone in which the volume '
- 'will be created')},
- 'size': {
- 'Type': 'Number',
- 'Description': _('The size of the volume in GB')},
- 'snapshot_id': {
- 'Type': 'String',
- 'Description': _('If specified, the snapshot to create the '
- 'volume from.')},
- 'backup_id': {
- 'Type': 'String',
- 'Description': _('If specified, the backup to create the '
- 'volume from')},
- 'name': {
- 'Type': 'String',
- 'Description': _('A name used to distinguish the volume')},
- 'description': {
- 'Type': 'String',
- 'Description': _('A description of the volume')},
- 'volume_type': {
- 'Type': 'String',
- 'Description': _('If specified, the type of volume to use, '
- 'mapping to a specific backend')},
- 'metadata': {
- 'Type': 'Map',
- 'Description': _('Key/value pairs to associate with the volume')},
- 'imageRef': {
- 'Type': 'String',
- 'Description': _('If specified, the image to create the volume '
- 'from')},
- 'source_volid': {
- 'Type': 'String',
- 'Description': _('If specified, the volume to use as source')}
- }
-
- _restore_property = 'backup_id'
-
- def _display_name(self):
- name = self.properties['name']
- if name:
- return name
- return super(CinderVolume, self)._display_name()
-
- def _display_description(self):
- return self.properties['description']
-
- def _create_arguments(self):
- arguments = {
- 'size': self.properties['size'],
- 'availability_zone': self.properties['availability_zone']
- }
- optionals = ['snapshot_id', 'volume_type', 'imageRef', 'source_volid',
- 'metadata']
- arguments.update((prop, self.properties[prop]) for prop in optionals
- if self.properties[prop])
- return arguments
-
- def FnGetAtt(self, key):
- if key == 'id':
- return self.resource_id
- attributes = ['availability_zone', 'size', 'snapshot_id',
- 'display_name', 'display_description', 'volume_type',
- 'metadata', 'source_volid', 'status', 'created_at',
- 'bootable']
- if key not in attributes:
- raise exception.InvalidTemplateAttribute(resource=self.name,
- key=key)
- vol = self.cinder().volumes.get(self.resource_id)
- if key == 'metadata':
- return unicode(json.dumps(vol.metadata))
- return unicode(getattr(vol, key))
-
-
-class CinderVolumeAttachment(VolumeAttachment):
-
- properties_schema = {
- 'instance_uuid': {
- 'Type': 'String', 'Required': True,
- 'Description': _('The ID of the server to which the '
- 'volume attaches')},
- 'volume_id': {
- 'Type': 'String', 'Required': True,
- 'Description': _('The ID of the volume to be attached')},
- 'mountpoint': {
- 'Type': 'String', 'Required': True,
- 'Description': _('The location where the volume is exposed on '
- 'the instance')}
- }
-
- _instance_property = 'instance_uuid'
- _volume_property = 'volume_id'
- _device_property = 'mountpoint'
-
-
-def resource_mapping():
- return {
- 'AWS::EC2::Volume': Volume,
- 'AWS::EC2::VolumeAttachment': VolumeAttachment,
- 'OS::Cinder::Volume': CinderVolume,
- 'OS::Cinder::VolumeAttachment': CinderVolumeAttachment,
- }
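The yield-based polling pattern used by `VolumeAttachTask.__call__` above, reduced to its core: the co-routine yields control back to the scheduler while the volume is in a transitional state, and raises if it settles anywhere other than `in-use`. A standalone sketch, where `get_status` is a stand-in for the Cinder status lookup:

```python
def attach_poll(get_status):
    """Yield while the volume is still 'available' or 'attaching';
    succeed on 'in-use', fail on any other terminal status."""
    status = get_status()
    while status in ('available', 'attaching'):
        yield  # hand control back to the scheduler between polls
        status = get_status()
    if status != 'in-use':
        raise RuntimeError('unexpected volume status: %s' % status)
```

The scheduler's `TaskRunner.step()` is what actually resumes the generator; driving it with `next()` in a loop shows the same behaviour.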
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from heat.common import exception
-from heat.engine import clients
-from heat.openstack.common import log as logging
-from heat.engine import resource
-from heat.engine.resources.neutron import neutron
-
-logger = logging.getLogger(__name__)
-
-
-class VPC(resource.Resource):
- tags_schema = {'Key': {'Type': 'String',
- 'Required': True},
- 'Value': {'Type': 'String',
- 'Required': True}}
-
- properties_schema = {
- 'CidrBlock': {'Type': 'String'},
- 'InstanceTenancy': {
- 'Type': 'String',
- 'AllowedValues': ['default',
- 'dedicated'],
- 'Default': 'default',
- 'Implemented': False},
- 'Tags': {'Type': 'List', 'Schema': {
- 'Type': 'Map',
- 'Implemented': False,
- 'Schema': tags_schema}}
- }
-
- def handle_create(self):
- client = self.neutron()
- # The VPC's net and router are associated by having identical names.
- net_props = {'name': self.physical_resource_name()}
- router_props = {'name': self.physical_resource_name()}
-
- net = client.create_network({'network': net_props})['network']
- client.create_router({'router': router_props})['router']
-
- self.resource_id_set(net['id'])
-
- @staticmethod
- def network_for_vpc(client, network_id):
- return client.show_network(network_id)['network']
-
- @staticmethod
- def router_for_vpc(client, network_id):
- # first get the neutron net
- net = VPC.network_for_vpc(client, network_id)
- # then find a router with the same name
- routers = client.list_routers(name=net['name'])['routers']
- if len(routers) == 0:
- # There may be no router if the net was created manually
- # instead of in another stack.
- return None
- if len(routers) > 1:
- raise exception.Error(
- _('Multiple routers found with name %s') % net['name'])
- return routers[0]
-
- def check_create_complete(self, *args):
- net = self.network_for_vpc(self.neutron(), self.resource_id)
- if not neutron.NeutronResource.is_built(net):
- return False
- router = self.router_for_vpc(self.neutron(), self.resource_id)
- return neutron.NeutronResource.is_built(router)
-
- def handle_delete(self):
- from neutronclient.common.exceptions import NeutronClientException
- client = self.neutron()
- router = self.router_for_vpc(client, self.resource_id)
- try:
- client.delete_router(router['id'])
- except NeutronClientException as ex:
- if ex.status_code != 404:
- raise ex
-
- try:
- client.delete_network(self.resource_id)
- except NeutronClientException as ex:
- if ex.status_code != 404:
- raise ex
-
-
-def resource_mapping():
- if clients.neutronclient is None:
- return {}
-
- return {
- 'AWS::EC2::VPC': VPC,
- }
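The VPC resource above keeps no direct link between its network and router; they are associated purely by sharing a name. The lookup rule in `router_for_vpc` can be sketched over plain dicts (illustrative, not Heat code):

```python
def router_for_net(routers, net_name):
    """Find the single router sharing the network's name, or None."""
    matches = [r for r in routers if r['name'] == net_name]
    if not matches:
        # The net may have been created manually, with no router.
        return None
    if len(matches) > 1:
        raise ValueError('Multiple routers found with name %s' % net_name)
    return matches[0]
```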
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import json
-
-from heat.common import exception
-from heat.common import identifier
-from heat.engine import resource
-from heat.engine import scheduler
-from heat.engine import signal_responder
-
-from heat.openstack.common import log as logging
-
-logger = logging.getLogger(__name__)
-
-
-class WaitConditionHandle(signal_responder.SignalResponder):
- '''
- The main points of this class are to:
- have no dependencies (so the instance can reference it), and
- generate a unique url (to be returned in the reference).
- cfn-signal will then POST to this url, and the associated
- WaitCondition will poll the metadata to see if it has been written to.
- '''
- properties_schema = {}
-
- def FnGetRefId(self):
- '''
- Override the default resource FnGetRefId so we return the signed URL
- '''
- if self.resource_id:
- wc = signal_responder.WAITCONDITION
- return unicode(self._get_signed_url(signal_type=wc))
- else:
- return unicode(self.name)
-
- def _metadata_format_ok(self, metadata):
- """
- Check that the format of the provided metadata is as expected.
- metadata must use the following format:
- {
- "Status" : "Status (must be SUCCESS or FAILURE)",
- "UniqueId" : "Some ID, should be unique for Count>1",
- "Data" : "Arbitrary Data",
- "Reason" : "Reason String"
- }
- """
- expected_keys = ['Data', 'Reason', 'Status', 'UniqueId']
- if sorted(metadata.keys()) == expected_keys:
- return metadata['Status'] in WAIT_STATUSES
-
- def metadata_update(self, new_metadata=None):
- '''
- Validate and update the resource metadata
- '''
- if new_metadata is None:
- return
-
- if self._metadata_format_ok(new_metadata):
- rsrc_metadata = self.metadata
- if new_metadata['UniqueId'] in rsrc_metadata:
- logger.warning("Overwriting Metadata item for UniqueId %s!" %
- new_metadata['UniqueId'])
- safe_metadata = {}
- for k in ('Data', 'Reason', 'Status'):
- safe_metadata[k] = new_metadata[k]
- # Note we can't update self.metadata directly, as it
- # is a Metadata descriptor object which only supports get/set
- rsrc_metadata.update({new_metadata['UniqueId']: safe_metadata})
- self.metadata = rsrc_metadata
- else:
- logger.error("Metadata failed validation for %s" % self.name)
- raise ValueError("Metadata format invalid")
-
- def get_status(self):
- '''
- Return a list of the Status values for the handle signals
- '''
- return [self.metadata[s]['Status']
- for s in self.metadata]
-
- def get_status_reason(self, status):
- '''
- Return the reasons associated with a particular status.
- If there is more than one handle signal matching the specified status,
- return a semicolon-delimited string containing all the reasons.
- '''
- return ';'.join([self.metadata[s]['Reason']
- for s in self.metadata
- if self.metadata[s]['Status'] == status])
-
-
-WAIT_STATUSES = (
- STATUS_FAILURE,
- STATUS_SUCCESS,
-) = (
- 'FAILURE',
- 'SUCCESS',
-)
-
-
-class WaitConditionFailure(Exception):
- def __init__(self, wait_condition, handle):
- reasons = handle.get_status_reason(STATUS_FAILURE)
- super(WaitConditionFailure, self).__init__(reasons)
-
-
-class WaitConditionTimeout(Exception):
- def __init__(self, wait_condition, handle):
- reasons = handle.get_status_reason(STATUS_SUCCESS)
- message = '%d of %d received' % (len(reasons), wait_condition.count)
- if reasons:
- message += ' - %s' % reasons
-
- super(WaitConditionTimeout, self).__init__(message)
-
-
-class WaitCondition(resource.Resource):
- properties_schema = {'Handle': {'Type': 'String',
- 'Required': True},
- 'Timeout': {'Type': 'Number',
- 'Required': True,
- 'MinValue': '1'},
- 'Count': {'Type': 'Number',
- 'MinValue': '1'}}
-
- def __init__(self, name, json_snippet, stack):
- super(WaitCondition, self).__init__(name, json_snippet, stack)
-
- self.count = int(self.t['Properties'].get('Count', '1'))
-
- def _validate_handle_url(self):
- handle_url = self.properties['Handle']
- handle_id = identifier.ResourceIdentifier.from_arn_url(handle_url)
- if handle_id.tenant != self.stack.context.tenant_id:
- raise ValueError("WaitCondition invalid Handle tenant %s" %
- handle_id.tenant)
- if handle_id.stack_name != self.stack.name:
- raise ValueError("WaitCondition invalid Handle stack %s" %
- handle_id.stack_name)
- if handle_id.stack_id != self.stack.id:
- raise ValueError("WaitCondition invalid Handle stack id %s" %
- handle_id.stack_id)
- if handle_id.resource_name not in self.stack:
- raise ValueError("WaitCondition invalid Handle %s" %
- handle_id.resource_name)
- if not isinstance(self.stack[handle_id.resource_name],
- WaitConditionHandle):
- raise ValueError("WaitCondition invalid Handle %s" %
- handle_id.resource_name)
-
- def _get_handle_resource_name(self):
- handle_url = self.properties['Handle']
- handle_id = identifier.ResourceIdentifier.from_arn_url(handle_url)
- return handle_id.resource_name
-
- def _wait(self, handle):
- while True:
- try:
- yield
- except scheduler.Timeout:
- timeout = WaitConditionTimeout(self, handle)
- logger.info('%s Timed out (%s)' % (str(self), str(timeout)))
- raise timeout
-
- handle_status = handle.get_status()
-
- if any(s != STATUS_SUCCESS for s in handle_status):
- failure = WaitConditionFailure(self, handle)
- logger.info('%s Failed (%s)' % (str(self), str(failure)))
- raise failure
-
- if len(handle_status) >= self.count:
- logger.info("%s Succeeded" % str(self))
- return
-
- def handle_create(self):
- self._validate_handle_url()
- handle_res_name = self._get_handle_resource_name()
- handle = self.stack[handle_res_name]
- self.resource_id_set(handle_res_name)
-
- runner = scheduler.TaskRunner(self._wait, handle)
- runner.start(timeout=float(self.properties['Timeout']))
- return runner
-
- def check_create_complete(self, runner):
- return runner.step()
-
- def handle_delete(self):
- if self.resource_id is None:
- return
-
- handle = self.stack[self.resource_id]
- handle.metadata = {}
-
- def FnGetAtt(self, key):
- res = {}
- handle_res_name = self._get_handle_resource_name()
- handle = self.stack[handle_res_name]
- if key == 'Data':
- meta = handle.metadata
- # Note, we can't use a dict comprehension on python 2.6, hence:
- res = dict([(k, meta[k]['Data']) for k in meta])
- else:
- raise exception.InvalidTemplateAttribute(resource=self.name,
- key=key)
-
- logger.debug('%s.GetAtt(%s) == %s' % (self.name, key, res))
- return unicode(json.dumps(res))
-
-
-def resource_mapping():
- return {
- 'AWS::CloudFormation::WaitCondition': WaitCondition,
- 'AWS::CloudFormation::WaitConditionHandle': WaitConditionHandle,
- }
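The signal payload contract enforced by `WaitConditionHandle._metadata_format_ok` above is small enough to restate standalone (an illustration, not Heat code): exactly the four expected keys, with a recognised `Status` value.

```python
WAIT_STATUSES = ('FAILURE', 'SUCCESS')

def metadata_format_ok(metadata):
    """Accept only payloads with exactly the expected keys and a
    Status of SUCCESS or FAILURE."""
    expected_keys = ['Data', 'Reason', 'Status', 'UniqueId']
    return (sorted(metadata.keys()) == expected_keys
            and metadata['Status'] in WAIT_STATUSES)
```

Note the comparison is against the *sorted* key list, so extra keys or missing keys both fail, exactly as in the resource above.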
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import eventlet
-import functools
-import itertools
-import sys
-import types
-from time import time as wallclock
-
-from heat.openstack.common import excutils
-from heat.openstack.common import log as logging
-from heat.openstack.common.gettextutils import _
-
-logger = logging.getLogger(__name__)
-
-
-# Whether TaskRunner._sleep actually does an eventlet sleep when called.
-ENABLE_SLEEP = True
-
-
-def task_description(task):
- """
- Return a human-readable string description of a task suitable for logging
- the status of the task.
- """
- name = task.__name__ if hasattr(task, '__name__') else None
- if isinstance(task, types.MethodType):
- if name is not None and hasattr(task, '__self__'):
- return '%s from %s' % (name, task.__self__)
- elif isinstance(task, types.FunctionType):
- if name is not None:
- return str(name)
- return repr(task)
-
-
-class Timeout(BaseException):
- """
- Timeout exception, raised within a task when it has exceeded its allotted
- (wallclock) running time.
-
- This allows the task to perform any necessary cleanup, as well as use a
- different exception to notify the controlling task if appropriate. If the
- task suppresses the exception altogether, it will be cancelled but the
- controlling task will not be notified of the timeout.
- """
-
- def __init__(self, task_runner, timeout):
- """
- Initialise with the TaskRunner and a timeout period in seconds.
- """
- message = _('%s Timed out') % task_runner
- super(Timeout, self).__init__(message)
-
- # Note that we don't attempt to handle leap seconds or large clock
- # jumps here. The latter are assumed to be rare and the former
- # negligible in the context of the timeout. Time zone adjustments,
- # Daylight Savings and the like *are* handled. PEP 418 adds a proper
- # monotonic clock, but only in Python 3.3.
- self._endtime = wallclock() + timeout
-
- def expired(self):
- return wallclock() > self._endtime
-
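The deadline arithmetic above can be sketched with only the standard library; `SimpleTimeout` is an illustrative name, not part of Heat:

```python
# Minimal sketch of the wallclock-deadline pattern used by Timeout.
from time import time as wallclock

class SimpleTimeout(BaseException):
    def __init__(self, timeout):
        super().__init__('Timed out after %s seconds' % timeout)
        # Record the absolute deadline once, at construction time
        self._endtime = wallclock() + timeout

    def expired(self):
        # Compare the current wallclock against the stored deadline
        return wallclock() > self._endtime

print(SimpleTimeout(3600).expired())   # deadline an hour away: False
print(SimpleTimeout(-1).expired())     # deadline already passed: True
```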
-
-class TaskRunner(object):
- """
- Wrapper for a resumable task (co-routine).
- """
-
- def __init__(self, task, *args, **kwargs):
- """
- Initialise with a task function, and arguments to be passed to it when
- it is started.
-
- The task function may be a co-routine that yields control flow between
- steps.
- """
- assert callable(task), "Task is not callable"
-
- self._task = task
- self._args = args
- self._kwargs = kwargs
- self._runner = None
- self._done = False
- self._timeout = None
- self.name = task_description(task)
-
- def __str__(self):
- """Return a human-readable string representation of the task."""
- return 'Task %s' % self.name
-
- def _sleep(self, wait_time):
- """Sleep for the specified number of seconds."""
- if ENABLE_SLEEP and wait_time is not None:
- logger.debug('%s sleeping' % str(self))
- eventlet.sleep(wait_time)
-
- def __call__(self, wait_time=1, timeout=None):
- """
- Start and run the task to completion.
-
- The task will sleep for `wait_time` seconds between steps. To avoid
- sleeping, pass `None` for `wait_time`.
- """
- self.start(timeout=timeout)
- self.run_to_completion(wait_time=wait_time)
-
- def start(self, timeout=None):
- """
- Initialise the task and run its first step.
-
- If a timeout is specified, any attempt to step the task after that
- number of seconds has elapsed will result in a Timeout being
- raised inside the task.
- """
- assert self._runner is None, "Task already started"
-
- logger.debug('%s starting' % str(self))
-
- if timeout is not None:
- self._timeout = Timeout(self, timeout)
-
- result = self._task(*self._args, **self._kwargs)
- if isinstance(result, types.GeneratorType):
- self._runner = result
- self.step()
- else:
- self._runner = False
- self._done = True
- logger.debug('%s done (not resumable)' % str(self))
-
- def step(self):
- """
- Run another step of the task, and return True if the task is complete;
- False otherwise.
- """
- if not self.done():
- assert self._runner is not None, "Task not started"
-
- if self._timeout is not None and self._timeout.expired():
- logger.info('%s timed out' % str(self))
-
- try:
- self._runner.throw(self._timeout)
- except StopIteration:
- self._done = True
- else:
- # Clean up in case task swallows exception without exiting
- self.cancel()
- else:
- logger.debug('%s running' % str(self))
-
- try:
- next(self._runner)
- except StopIteration:
- self._done = True
- logger.debug('%s complete' % str(self))
-
- return self._done
-
- def run_to_completion(self, wait_time=1):
- """
- Run the task to completion.
-
- The task will sleep for `wait_time` seconds between steps. To avoid
- sleeping, pass `None` for `wait_time`.
- """
- while not self.step():
- self._sleep(wait_time)
-
- def cancel(self):
- """Cancel the task if it is running."""
- if self.started() and not self.done():
- logger.debug('%s cancelled' % str(self))
- self._runner.close()
- self._done = True
-
- def started(self):
- """Return True if the task has been started."""
- return self._runner is not None
-
- def done(self):
- """Return True if the task is complete."""
- return self._done
-
- def __nonzero__(self):
- """Return True if there are steps remaining."""
- return not self.done()
-
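The stepping model can be sketched in ordinary Python (no eventlet, no sleeps): each step is one `next()` call on the generator, and `StopIteration` marks completion.

```python
# Standalone sketch of how TaskRunner drives a generator-based task:
# step() is a next() call, and StopIteration means the task is done.
def three_step_task():
    yield 'step 1'
    yield 'step 2'
    yield 'step 3'

def run_to_completion(task):
    runner = task()
    steps = 0
    while True:
        try:
            next(runner)       # run one step of the co-routine
        except StopIteration:
            return steps       # task is complete
        steps += 1             # a real runner would sleep here

print(run_to_completion(three_step_task))  # 3
```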
-
-def wrappertask(task):
- """
- Decorator for a task that needs to drive a subtask.
-
- This is essentially a replacement for the Python 3-only "yield from"
- keyword (PEP 380), using the "yield" keyword that is supported in
- Python 2. For example:
-
- @wrappertask
- def parent_task(self):
- self.setup()
-
- yield self.child_task()
-
- self.cleanup()
- """
-
- @functools.wraps(task)
- def wrapper(*args, **kwargs):
- parent = task(*args, **kwargs)
-
- subtask = next(parent)
-
- while True:
- try:
- if subtask is not None:
- subtask_running = True
- try:
- step = next(subtask)
- except StopIteration:
- subtask_running = False
-
- while subtask_running:
- try:
- yield step
- except GeneratorExit as exit:
- subtask.close()
- raise exit
- except:
- try:
- step = subtask.throw(*sys.exc_info())
- except StopIteration:
- subtask_running = False
- else:
- try:
- step = next(subtask)
- except StopIteration:
- subtask_running = False
- else:
- yield
- except GeneratorExit as exit:
- parent.close()
- raise exit
- except:
- subtask = parent.throw(*sys.exc_info())
- else:
- subtask = next(parent)
-
- return wrapper
-
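On Python 3 the same delegation is spelled with the built-in `yield from` (PEP 380), which forwards the subtask's steps, sent values and exceptions automatically; a minimal sketch:

```python
# "yield from" (PEP 380) does what wrappertask emulates on Python 2.
def child_task(log):
    log.append('child step')
    yield

def parent_task(log):
    log.append('setup')
    yield from child_task(log)   # delegate to the subtask
    log.append('cleanup')

log = []
for _ in parent_task(log):       # drive the co-routine to completion
    pass
print(log)  # ['setup', 'child step', 'cleanup']
```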
-
-class DependencyTaskGroup(object):
- """
- A task which manages a group of subtasks that have ordering dependencies.
- """
-
- def __init__(self, dependencies, task=lambda o: o(),
- reverse=False, name=None):
- """
- Initialise with the task dependencies and (optionally) a task to run on
- each.
-
- If no task is supplied, it is assumed that the tasks are stored
- directly in the dependency tree. If a task is supplied, the object
- stored in the dependency tree is passed as an argument.
- """
- self._runners = dict((o, TaskRunner(task, o)) for o in dependencies)
- self._graph = dependencies.graph(reverse=reverse)
-
- if name is None:
- name = '(%s) %s' % (getattr(task, '__name__',
- task_description(task)),
- str(dependencies))
- self.name = name
-
- def __repr__(self):
- """Return a string representation of the task."""
- return '%s(%s)' % (type(self).__name__, self.name)
-
- def __call__(self):
- """Return a co-routine which runs the task group."""
- try:
- while any(self._runners.itervalues()):
- for k, r in self._ready():
- r.start()
-
- yield
-
- for k, r in self._running():
- if r.step():
- del self._graph[k]
- except:
- with excutils.save_and_reraise_exception():
- for r in self._runners.itervalues():
- r.cancel()
-
- def _ready(self):
- """
- Iterate over all subtasks that are ready to start - i.e. all their
- dependencies have been satisfied but they have not yet been started.
- """
- for k, n in self._graph.iteritems():
- if not n:
- runner = self._runners[k]
- if not runner.started():
- yield k, runner
-
- def _running(self):
- """
- Iterate over all subtasks that are currently running - i.e. they have
- been started but have not yet completed.
- """
- running = lambda (k, r): k in self._graph and r.started()
- return itertools.ifilter(running, self._runners.iteritems())
-
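The scheduling loop above can be sketched with a plain dict mapping each key to the set of keys it depends on (Heat's real `Dependencies.graph()` object is richer, and the sketch assumes an acyclic graph):

```python
# Minimal sketch of the DependencyTaskGroup loop: start whatever is
# ready, remove completed nodes, and repeat until the graph is empty.
def run_in_dependency_order(graph):
    graph = {k: set(deps) for k, deps in graph.items()}
    order = []
    while graph:
        # "ready": all dependencies satisfied, not yet started
        ready = [k for k, deps in graph.items() if not deps]
        for k in ready:
            order.append(k)                # start (and here, finish) it
            del graph[k]                   # drop the completed node
        for deps in graph.values():
            deps.difference_update(ready)  # satisfy the dependents
    return order

print(run_in_dependency_order({'a': set(), 'b': {'a'}, 'c': {'a', 'b'}}))
# ['a', 'b', 'c']
```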
-
-class PollingTaskGroup(object):
- """
- A task which manages a group of subtasks.
-
- When the task is started, all of its subtasks are also started. The task
- completes when all subtasks are complete.
-
- Once started, the subtasks are assumed to be only polling for completion
- of an asynchronous operation, so no attempt is made to give them equal
- scheduling slots.
- """
-
- def __init__(self, tasks, name=None):
- """Initialise with a list of tasks."""
- self._tasks = list(tasks)
- if name is None:
- name = ', '.join(task_description(t) for t in self._tasks)
- self.name = name
-
- @staticmethod
- def _args(arg_lists):
- """Return a list containing the positional args for each subtask."""
- return zip(*arg_lists)
-
- @staticmethod
- def _kwargs(kwarg_lists):
- """Return a list containing the keyword args for each subtask."""
- keygroups = (itertools.izip(itertools.repeat(name),
- arglist)
- for name, arglist in kwarg_lists.iteritems())
- return [dict(kwargs) for kwargs in itertools.izip(*keygroups)]
-
- @classmethod
- def from_task_with_args(cls, task, *arg_lists, **kwarg_lists):
- """
- Return a new PollingTaskGroup where each subtask is identical except
- for the arguments passed to it.
-
- Each argument to use should be passed as a list (or iterable) of values
- such that one is passed in the corresponding position for each subtask.
- The number of subtasks spawned depends on the length of the argument
- lists. For example:
-
- PollingTaskGroup.from_task_with_args(my_task,
- [1, 2, 3],
- alpha=['a', 'b', 'c'])
-
- will start three TaskRunners that will run:
-
- my_task(1, alpha='a')
- my_task(2, alpha='b')
- my_task(3, alpha='c')
-
- respectively.
-
- If multiple arguments are supplied, each list should be of the same
- length. In the case of any discrepancy, the length of the shortest
- argument list will be used, and any extra arguments discarded.
- """
-
- args_list = cls._args(arg_lists)
- kwargs_list = cls._kwargs(kwarg_lists)
-
- if kwarg_lists and not arg_lists:
- args_list = [[]] * len(kwargs_list)
- elif arg_lists and not kwarg_lists:
- kwargs_list = [{}] * len(args_list)
-
- task_args = itertools.izip(args_list, kwargs_list)
- tasks = (functools.partial(task, *a, **kwa) for a, kwa in task_args)
-
- return cls(tasks, name=task_description(task))
-
- def __repr__(self):
- """Return a string representation of the task group."""
- return '%s(%s)' % (type(self).__name__, self.name)
-
- def __call__(self):
- """Return a co-routine which runs the task group."""
- runners = [TaskRunner(t) for t in self._tasks]
-
- try:
- for r in runners:
- r.start()
-
- while runners:
- yield
- runners = list(itertools.dropwhile(lambda r: r.step(),
- runners))
- except:
- with excutils.save_and_reraise_exception():
- for r in runners:
- r.cancel()
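The per-subtask argument transposition done by `_kwargs` can be sketched in Python 3 (where `izip`/`iteritems` become the builtin `zip`/`items`):

```python
# Python 3 sketch of PollingTaskGroup._kwargs: a dict of per-subtask
# value lists is transposed into one kwargs dict per subtask.
import itertools

def per_subtask_kwargs(kwarg_lists):
    keygroups = (zip(itertools.repeat(name), values)
                 for name, values in kwarg_lists.items())
    # zip(*keygroups) pairs up the i-th (name, value) of every argument
    return [dict(kwargs) for kwargs in zip(*keygroups)]

print(per_subtask_kwargs({'alpha': ['a', 'b', 'c'], 'n': [1, 2, 3]}))
# [{'alpha': 'a', 'n': 1}, {'alpha': 'b', 'n': 2}, {'alpha': 'c', 'n': 3}]
```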
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import functools
-import json
-
-from oslo.config import cfg
-import webob
-
-cfg.CONF.import_opt('max_resources_per_stack', 'heat.common.config')
-
-from heat.openstack.common import timeutils
-from heat.common import context
-from heat.db import api as db_api
-from heat.engine import api
-from heat.rpc import api as rpc_api
-from heat.engine import attributes
-from heat.engine import clients
-from heat.engine.event import Event
-from heat.engine import environment
-from heat.common import exception
-from heat.common import identifier
-from heat.common import heat_keystoneclient as hkc
-from heat.engine import parameters
-from heat.engine import parser
-from heat.engine import properties
-from heat.engine import resource
-from heat.engine import resources
-from heat.engine import template as tpl
-from heat.engine import watchrule
-
-from heat.openstack.common import log as logging
-from heat.openstack.common import threadgroup
-from heat.openstack.common.gettextutils import _
-from heat.openstack.common.rpc import service
-from heat.openstack.common import uuidutils
-
-
-logger = logging.getLogger(__name__)
-
-
-def request_context(func):
- @functools.wraps(func)
- def wrapped(self, ctx, *args, **kwargs):
- if ctx is not None and not isinstance(ctx, context.RequestContext):
- ctx = context.RequestContext.from_dict(ctx.to_dict())
- return func(self, ctx, *args, **kwargs)
- return wrapped
-
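The `request_context` pattern can be sketched in isolation: whatever context object the RPC layer hands over is normalised into a known class before the method body runs. `FakeContext` here stands in for `heat.common.context.RequestContext`.

```python
# Sketch of the request_context decorator pattern with an illustrative
# context class (not Heat's real RequestContext).
import functools

class FakeContext:
    def __init__(self, user):
        self.user = user
    def to_dict(self):
        return {'user': self.user}
    @classmethod
    def from_dict(cls, d):
        return cls(d['user'])

def request_context(func):
    @functools.wraps(func)
    def wrapped(self, ctx, *args, **kwargs):
        # Rebuild anything that is not already a FakeContext
        if ctx is not None and not isinstance(ctx, FakeContext):
            ctx = FakeContext.from_dict(ctx.to_dict())
        return func(self, ctx, *args, **kwargs)
    return wrapped

class Service:
    @request_context
    def whoami(self, ctx):
        return ctx.user

class DuckContext:   # any object with a to_dict() gets converted
    def to_dict(self):
        return {'user': 'bob'}

print(Service().whoami(FakeContext('alice')))  # alice
print(Service().whoami(DuckContext()))         # bob
```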
-
-class EngineService(service.Service):
- """
- Manages the running instances from creation to destruction.
- All the methods in here are called from the RPC backend. This is
- all done dynamically so if a call is made via RPC that does not
- have a corresponding method here, an exception will be thrown when
- it attempts to call into this class. Arguments to these methods
- are passed dynamically as keyword arguments by the RPC caller.
- """
- def __init__(self, host, topic, manager=None):
- super(EngineService, self).__init__(host, topic)
- # stg == "Stack Thread Groups"
- self.stg = {}
- resources.initialise()
-
- def _start_in_thread(self, stack_id, func, *args, **kwargs):
- if stack_id not in self.stg:
- self.stg[stack_id] = threadgroup.ThreadGroup()
- self.stg[stack_id].add_thread(func, *args, **kwargs)
-
- def _timer_in_thread(self, stack_id, func, *args, **kwargs):
- """
- Define a periodic task, to be run in a separate thread, in the stack
- threadgroups. Periodicity is cfg.CONF.periodic_interval
- """
- if stack_id not in self.stg:
- self.stg[stack_id] = threadgroup.ThreadGroup()
- self.stg[stack_id].add_timer(cfg.CONF.periodic_interval,
- func, *args, **kwargs)
-
- def _service_task(self):
- """
- This is a dummy task which gets queued on the service.Service
- threadgroup. Without it, service.Service sees nothing running,
- i.e. has nothing to wait() on, so the process exits.
- This could also be used to trigger periodic non-stack-specific
- housekeeping tasks.
- """
- pass
-
- def _start_watch_task(self, stack_id, cnxt):
- wrs = db_api.watch_rule_get_all_by_stack(cnxt,
- stack_id)
-
- now = timeutils.utcnow()
- start_watch_thread = False
- for wr in wrs:
- # reset the last_evaluated so we don't fire off alarms when
- # the engine has not been running.
- db_api.watch_rule_update(cnxt, wr.id, {'last_evaluated': now})
-
- if wr.state != rpc_api.WATCH_STATE_CEILOMETER_CONTROLLED:
- start_watch_thread = True
-
- if start_watch_thread:
- self._timer_in_thread(stack_id, self._periodic_watcher_task,
- sid=stack_id)
-
- def start(self):
- super(EngineService, self).start()
-
- # Create dummy service task, because when there is nothing queued
- # on self.tg the process exits
- self.tg.add_timer(cfg.CONF.periodic_interval,
- self._service_task)
-
- # Create a periodic_watcher_task per-stack
- admin_context = context.get_admin_context()
- stacks = db_api.stack_get_all(admin_context)
- for s in stacks:
- self._start_watch_task(s.id, admin_context)
-
- @request_context
- def identify_stack(self, cnxt, stack_name):
- """
- The identify_stack method returns the full stack identifier for a
- single, live stack given the stack name.
- arg1 -> RPC context.
- arg2 -> Name or UUID of the stack to look up.
- """
- if uuidutils.is_uuid_like(stack_name):
- s = db_api.stack_get(cnxt, stack_name, show_deleted=True)
- else:
- s = db_api.stack_get_by_name(cnxt, stack_name)
- if s:
- stack = parser.Stack.load(cnxt, stack=s)
- return dict(stack.identifier())
- else:
- raise exception.StackNotFound(stack_name=stack_name)
-
- def _get_stack(self, cnxt, stack_identity, show_deleted=False):
- identity = identifier.HeatIdentifier(**stack_identity)
-
- if identity.tenant != cnxt.tenant_id:
- raise exception.InvalidTenant(target=identity.tenant,
- actual=cnxt.tenant_id)
-
- s = db_api.stack_get(cnxt, identity.stack_id,
- show_deleted=show_deleted)
-
- if s is None:
- raise exception.StackNotFound(stack_name=identity.stack_name)
-
- if identity.path or s.name != identity.stack_name:
- raise exception.StackNotFound(stack_name=identity.stack_name)
-
- return s
-
- @request_context
- def show_stack(self, cnxt, stack_identity):
- """
- Return detailed information about one or all stacks.
- arg1 -> RPC cnxt.
- arg2 -> Name of the stack you want to show, or None to show all
- """
- if stack_identity is not None:
- stacks = [self._get_stack(cnxt, stack_identity, show_deleted=True)]
- else:
- stacks = db_api.stack_get_all_by_tenant(cnxt) or []
-
- def format_stack_detail(s):
- stack = parser.Stack.load(cnxt, stack=s)
- return api.format_stack(stack)
-
- return [format_stack_detail(s) for s in stacks]
-
- @request_context
- def list_stacks(self, cnxt):
- """
- The list_stacks method returns attributes of all stacks.
- arg1 -> RPC cnxt.
- """
-
- def format_stack_details(stacks):
- for s in stacks:
- try:
- stack = parser.Stack.load(cnxt, stack=s,
- resolve_data=False)
- except exception.NotFound:
- # The stack may have been deleted between listing
- # and formatting
- pass
- else:
- yield api.format_stack(stack)
-
- stacks = db_api.stack_get_all_by_tenant(cnxt) or []
- return list(format_stack_details(stacks))
-
- def _validate_deferred_auth_context(self, cnxt, stack):
- if cfg.CONF.deferred_auth_method != 'password':
- return
-
- if not stack.requires_deferred_auth():
- return
-
- if cnxt.username is None:
- raise exception.MissingCredentialError(required='X-Auth-User')
- if cnxt.password is None:
- raise exception.MissingCredentialError(required='X-Auth-Key')
-
- @request_context
- def create_stack(self, cnxt, stack_name, template, params, files, args):
- """
- The create_stack method creates a new stack using the template
- provided.
- Note that at this stage the template has already been fetched from the
- heat-api process if using a template-url.
- :param cnxt: RPC context.
- :param stack_name: Name of the stack you want to create.
- :param template: Template of stack you want to create.
- :param params: Stack Input Params
- :param files: Files referenced from the template
- (currently provider templates).
- :param args: Request parameters/args passed from API
- """
- logger.info('template is %s' % template)
-
- def _stack_create(stack):
- # Create the stack, and create the periodic task if successful
- stack.create()
- if stack.action == stack.CREATE and stack.status == stack.COMPLETE:
- # Schedule a periodic watcher task for this stack
- self._start_watch_task(stack.id, cnxt)
- else:
- logger.warning("Stack create failed, status %s" % stack.status)
-
- if db_api.stack_get_by_name(cnxt, stack_name):
- raise exception.StackExists(stack_name=stack_name)
-
- tmpl = parser.Template(template, files=files)
-
- if len(tmpl[tpl.RESOURCES]) > cfg.CONF.max_resources_per_stack:
- raise exception.StackResourceLimitExceeded()
-
- # Extract the common query parameters
- common_params = api.extract_args(args)
- env = environment.Environment(params)
- stack = parser.Stack(cnxt, stack_name, tmpl,
- env, **common_params)
-
- self._validate_deferred_auth_context(cnxt, stack)
-
- stack.validate()
-
- # Creates a trust and sets the trust_id and trustor_user_id in
- # the current context, before we store it in stack.store()
- # Does nothing if deferred_auth_method is 'password'
- stack.clients.keystone().create_trust_context()
-
- stack_id = stack.store()
-
- self._start_in_thread(stack_id, _stack_create, stack)
-
- return dict(stack.identifier())
-
- @request_context
- def update_stack(self, cnxt, stack_identity, template, params,
- files, args):
- """
- The update_stack method updates an existing stack based on the
- provided template and parameters.
- Note that at this stage the template has already been fetched from the
- heat-api process if using a template-url.
- :param cnxt: RPC context.
- :param stack_identity: Identity of the stack you want to update.
- :param template: Template of the updated stack.
- :param params: Stack Input Params
- :param files: Files referenced from the template
- :param args: Request parameters/args passed from API
- """
- logger.info('template is %s' % template)
-
- # Get the database representation of the existing stack
- db_stack = self._get_stack(cnxt, stack_identity)
-
- current_stack = parser.Stack.load(cnxt, stack=db_stack)
-
- # Now parse the template and any parameters for the updated
- # stack definition.
- tmpl = parser.Template(template, files=files)
- if len(tmpl[tpl.RESOURCES]) > cfg.CONF.max_resources_per_stack:
- raise exception.StackResourceLimitExceeded()
- stack_name = current_stack.name
- common_params = api.extract_args(args)
- env = environment.Environment(params)
- updated_stack = parser.Stack(cnxt, stack_name, tmpl,
- env, **common_params)
-
- self._validate_deferred_auth_context(cnxt, updated_stack)
- updated_stack.validate()
-
- self._start_in_thread(db_stack.id, current_stack.update, updated_stack)
-
- return dict(current_stack.identifier())
-
- @request_context
- def validate_template(self, cnxt, template):
- """
- The validate_template method uses the stack parser to check
- the validity of a template.
-
- arg1 -> RPC context.
- arg2 -> Template of the stack you want to validate.
- """
- logger.info('validate_template')
- if template is None:
- msg = _("No Template provided.")
- return webob.exc.HTTPBadRequest(explanation=msg)
-
- tmpl = parser.Template(template)
- tmpl_resources = tmpl.get('Resources', [])
-
- if not tmpl_resources:
- return {'Error': 'At least one Resources member must be defined.'}
-
- for res in tmpl_resources.values():
- try:
- if not res.get('Type'):
- return {'Error':
- 'Every Resource object must '
- 'contain a Type member.'}
- except AttributeError:
- type_res = type(res)
- if isinstance(res, unicode):
- type_res = "string"
- return {'Error':
- 'Resources must contain Resource. '
- 'Found a [%s] instead' % type_res}
-
- ResourceClass = resource.get_class(res['Type'])
- props = properties.Properties(ResourceClass.properties_schema,
- res.get('Properties', {}))
- try:
- ResourceClass.validate_deletion_policy(res)
- props.validate(with_value=False)
- except Exception as ex:
- return {'Error': str(ex)}
-
- tmpl_params = parser.Parameters(None, tmpl, validate_value=False)
- format_validate_parameter = lambda p: dict(p.schema)
- is_real_param = lambda p: p.name not in parameters.PSEUDO_PARAMETERS
- params = tmpl_params.map(format_validate_parameter, is_real_param)
-
- result = {
- 'Description': tmpl.get('Description', ''),
- 'Parameters': params,
- }
- return result
-
- @request_context
- def authenticated_to_backend(self, cnxt):
- """
- Verify that the credentials in the RPC context are valid for the
- current cloud backend.
- """
- return clients.Clients(cnxt).authenticated()
-
- @request_context
- def get_template(self, cnxt, stack_identity):
- """
- Get the template.
- arg1 -> RPC context.
- arg2 -> Name of the stack you want to see.
- """
- s = self._get_stack(cnxt, stack_identity, show_deleted=True)
- if s:
- return s.raw_template.template
- return None
-
- @request_context
- def delete_stack(self, cnxt, stack_identity):
- """
- The delete_stack method deletes a given stack.
- arg1 -> RPC context.
- arg2 -> Name of the stack you want to delete.
- """
- st = self._get_stack(cnxt, stack_identity)
-
- logger.info('deleting stack %s' % st.name)
-
- stack = parser.Stack.load(cnxt, stack=st)
-
- # If we created a trust, delete it
- # Note this is using the current request context, not the stored
- # context, as it seems it's not possible to delete a trust with
- # a token obtained via that trust. This means that, for now, only
- # the user who created the stack can delete it when using trusts.
- stack.clients.keystone().delete_trust_context()
-
- # Kill any pending threads by calling ThreadGroup.stop()
- if st.id in self.stg:
- self.stg[st.id].stop()
- del self.stg[st.id]
- # use the service ThreadGroup for deletes
- self.tg.add_thread(stack.delete)
- return None
-
- def list_resource_types(self, cnxt):
- """
- Get a list of supported resource types.
- arg1 -> RPC context.
- """
- return list(resource.get_types())
-
- def resource_schema(self, cnxt, type_name):
- """
- Return the schema of the specified type.
- arg1 -> RPC context.
- arg2 -> Name of the resource type to obtain the schema of.
- """
- try:
- resource_class = resource.get_class(type_name)
- except exception.StackValidationFailed:
- raise exception.ResourceTypeNotFound(type_name=type_name)
-
- def properties_schema():
- for name, schema_dict in resource_class.properties_schema.items():
- schema = properties.Schema.from_legacy(schema_dict)
- if schema.implemented:
- yield name, dict(schema)
-
- def attributes_schema():
- for schema_item in resource_class.attributes_schema.items():
- schema = attributes.Attribute(*schema_item)
- yield schema.name, {schema.DESCRIPTION: schema.description}
-
- return {
- rpc_api.RES_SCHEMA_RES_TYPE: type_name,
- rpc_api.RES_SCHEMA_PROPERTIES: dict(properties_schema()),
- rpc_api.RES_SCHEMA_ATTRIBUTES: dict(attributes_schema()),
- }
-
- def generate_template(self, cnxt, type_name):
- """
- Generate a template based on the specified type.
- arg1 -> RPC context.
- arg2 -> Name of the resource type to generate a template for.
- """
- try:
- return \
- resource.get_class(type_name).resource_to_template(type_name)
- except exception.StackValidationFailed:
- raise exception.ResourceTypeNotFound(type_name=type_name)
-
- @request_context
- def list_events(self, cnxt, stack_identity):
- """
- The list_events method lists all events associated with a given stack.
- arg1 -> RPC context.
- arg2 -> Name of the stack you want to get events for.
- """
-
- if stack_identity is not None:
- st = self._get_stack(cnxt, stack_identity, show_deleted=True)
-
- events = db_api.event_get_all_by_stack(cnxt, st.id)
- else:
- events = db_api.event_get_all_by_tenant(cnxt)
-
- stacks = {}
-
- def get_stack(stack_id):
- if stack_id not in stacks:
- stacks[stack_id] = parser.Stack.load(cnxt, stack_id)
- return stacks[stack_id]
-
- return [api.format_event(Event.load(cnxt,
- e.id, e,
- get_stack(e.stack_id)))
- for e in events]
-
- def _authorize_stack_user(self, cnxt, stack, resource_name):
- '''
- Filter access to describe_stack_resource for stack in-instance users
- - The user must map to a User resource defined in the requested stack
- - The user resource must validate OK against any Policy specified
- '''
- # We're expecting EC2 credentials because all in-instance credentials
- # are deployed as ec2 keypairs
- try:
- ec2_creds = json.loads(cnxt.aws_creds).get('ec2Credentials')
- except (TypeError, AttributeError):
- ec2_creds = None
-
- if ec2_creds:
- access_key = ec2_creds.get('access')
- # Then we look up the AccessKey resource and check the stack
- try:
- akey_rsrc = self.find_physical_resource(cnxt, access_key)
- except exception.PhysicalResourceNotFound:
- logger.warning("access_key %s not found!" % access_key)
- return False
-
- akey_rsrc_id = identifier.ResourceIdentifier(**akey_rsrc)
- if stack.identifier() == akey_rsrc_id.stack():
- # The stack matches, so check if access is allowed to this
- # resource via the AccessKey resource access_allowed()
- ak_akey_rsrc = stack[akey_rsrc_id.resource_name]
- return ak_akey_rsrc.access_allowed(resource_name)
- else:
- logger.warning("Cannot access resource from wrong stack!")
- else:
- logger.warning("Cannot access resource, invalid credentials!")
-
- return False
-
- @request_context
- def describe_stack_resource(self, cnxt, stack_identity, resource_name):
- s = self._get_stack(cnxt, stack_identity)
- stack = parser.Stack.load(cnxt, stack=s)
-
- if cfg.CONF.heat_stack_user_role in cnxt.roles:
- if not self._authorize_stack_user(cnxt, stack, resource_name):
- logger.warning("Access denied to resource %s" % resource_name)
- raise exception.Forbidden()
-
- if resource_name not in stack:
- raise exception.ResourceNotFound(resource_name=resource_name,
- stack_name=stack.name)
-
- resource = stack[resource_name]
- if resource.id is None:
- raise exception.ResourceNotAvailable(resource_name=resource_name)
-
- return api.format_stack_resource(stack[resource_name])
-
- @request_context
- def resource_signal(self, cnxt, stack_identity, resource_name, details):
- s = self._get_stack(cnxt, stack_identity)
-
- # Converting to the stored context here is not ideal, but it is
- # necessary because the keystone user associated with the
- # signal doesn't have permission to read the secret key of
- # the user associated with the cfn-credentials file
- stack_context = self._load_user_creds(s.user_creds_id)
- stack = parser.Stack.load(stack_context, stack=s)
-
- if resource_name not in stack:
- raise exception.ResourceNotFound(resource_name=resource_name,
- stack_name=stack.name)
-
- resource = stack[resource_name]
- if resource.id is None:
- raise exception.ResourceNotAvailable(resource_name=resource_name)
-
- if callable(stack[resource_name].signal):
- stack[resource_name].signal(details)
-
- @request_context
- def find_physical_resource(self, cnxt, physical_resource_id):
- """
- Return an identifier for the resource with the specified physical
- resource ID.
- arg1 -> RPC context.
- arg2 -> The physical resource ID to look up.
- """
- rs = db_api.resource_get_by_physical_resource_id(cnxt,
- physical_resource_id)
- if not rs:
- raise exception.PhysicalResourceNotFound(
- resource_id=physical_resource_id)
-
- stack = parser.Stack.load(cnxt, stack=rs.stack)
- resource = stack[rs.name]
-
- return dict(resource.identifier())
-
- @request_context
- def describe_stack_resources(self, cnxt, stack_identity, resource_name):
- s = self._get_stack(cnxt, stack_identity)
-
- stack = parser.Stack.load(cnxt, stack=s)
-
- if resource_name is not None:
- name_match = lambda r: r.name == resource_name
- else:
- name_match = lambda r: True
-
- return [api.format_stack_resource(resource)
- for resource in stack if name_match(resource)]
-
- @request_context
- def list_stack_resources(self, cnxt, stack_identity):
- s = self._get_stack(cnxt, stack_identity)
-
- stack = parser.Stack.load(cnxt, stack=s)
-
- return [api.format_stack_resource(resource, detail=False)
- for resource in stack]
-
- @request_context
- def stack_suspend(self, cnxt, stack_identity):
- '''
- Handle request to perform suspend action on a stack
- '''
- def _stack_suspend(stack):
- logger.debug("suspending stack %s" % stack.name)
- stack.suspend()
-
- s = self._get_stack(cnxt, stack_identity)
-
- stack = parser.Stack.load(cnxt, stack=s)
- self._start_in_thread(stack.id, _stack_suspend, stack)
-
- @request_context
- def stack_resume(self, cnxt, stack_identity):
- '''
- Handle request to perform a resume action on a stack
- '''
- def _stack_resume(stack):
- logger.debug("resuming stack %s" % stack.name)
- stack.resume()
-
- s = self._get_stack(cnxt, stack_identity)
-
- stack = parser.Stack.load(cnxt, stack=s)
- self._start_in_thread(stack.id, _stack_resume, stack)
-
- def _load_user_creds(self, creds_id):
- user_creds = db_api.user_creds_get(creds_id)
- stored_context = context.RequestContext.from_dict(user_creds)
- # heat_keystoneclient populates the context with an auth_token
- # either via the stored user/password or trust_id, depending
- # on how deferred_auth_method is configured in the conf file
- hkc.KeystoneClient(stored_context)
- return stored_context
-
- @request_context
- def metadata_update(self, cnxt, stack_identity,
- resource_name, metadata):
- """
- Update the metadata for the given resource.
- """
- s = self._get_stack(cnxt, stack_identity)
-
- stack = parser.Stack.load(cnxt, stack=s)
- if resource_name not in stack:
- raise exception.ResourceNotFound(resource_name=resource_name,
- stack_name=stack.name)
-
- resource = stack[resource_name]
- resource.metadata_update(new_metadata=metadata)
-
- # Converting to the stored context here is not ideal, but it is
- # necessary because the keystone user associated with the
- # WaitCondition doesn't have permission to read the secret key of
- # the user associated with the cfn-credentials file
- stack_context = self._load_user_creds(s.user_creds_id)
- refresh_stack = parser.Stack.load(stack_context, stack=s)
-
- # Refresh the metadata for all other resources, since we expect
- # resource_name to be a WaitCondition resource, and other
- # resources may refer to WaitCondition Fn::GetAtt Data, which
- # is updated here.
- for res in refresh_stack:
- if res.name != resource_name and res.id is not None:
- res.metadata_update()
-
- return resource.metadata
-
- def _periodic_watcher_task(self, sid):
- """
- Periodic task, created for each stack, which triggers watch-rule
- evaluation for all rules defined for the stack.
- :param sid: stack ID
- """
- # Retrieve the stored credentials & create context
- # Require admin=True to the stack_get to defeat tenant
- # scoping otherwise we fail to retrieve the stack
- logger.debug("Periodic watcher task for stack %s" % sid)
- admin_context = context.get_admin_context()
- stack = db_api.stack_get(admin_context, sid, admin=True)
- if not stack:
- logger.error("Unable to retrieve stack %s for periodic task" %
- sid)
- return
- stack_context = self._load_user_creds(stack.user_creds_id)
-
- # Get all watchrules for this stack and evaluate them
- try:
- wrs = db_api.watch_rule_get_all_by_stack(stack_context, sid)
- except Exception as ex:
- logger.warn('periodic_task db error (%s) %s' %
- ('watch rule removed?', str(ex)))
- return
-
- def run_alarm_action(actions, details):
- for action in actions:
- action(details=details)
-
- stk = parser.Stack.load(stack_context, stack=stack)
- for res in stk:
- res.metadata_update()
-
- for wr in wrs:
- rule = watchrule.WatchRule.load(stack_context, watch=wr)
- actions = rule.evaluate()
- if actions:
- self._start_in_thread(sid, run_alarm_action, actions,
- rule.get_details())
-
- @request_context
- def create_watch_data(self, cnxt, watch_name, stats_data):
- '''
- This can be used by both CloudWatch alarms and WaitConditions,
- treating HA service events like any other CloudWatch datapoint.
- '''
- def get_matching_watches():
- if watch_name:
- yield watchrule.WatchRule.load(cnxt, watch_name)
- else:
- for wr in db_api.watch_rule_get_all(cnxt):
- if watchrule.rule_can_use_sample(wr, stats_data):
- yield watchrule.WatchRule.load(cnxt, watch=wr)
-
- rule_run = False
- for rule in get_matching_watches():
- rule.create_watch_data(stats_data)
- rule_run = True
-
- if not rule_run:
- if watch_name is None:
- watch_name = 'Unknown'
- raise exception.WatchRuleNotFound(watch_name=watch_name)
-
- return stats_data
-
- @request_context
- def show_watch(self, cnxt, watch_name):
- '''
- The show_watch method returns the attributes of one watch/alarm
- arg1 -> RPC context.
- arg2 -> Name of the watch you want to see, or None to see all
- '''
- if watch_name:
- wrn = [watch_name]
- else:
- try:
- wrn = [w.name for w in db_api.watch_rule_get_all(cnxt)]
- except Exception as ex:
- logger.warn('show_watch (all) db error %s' % str(ex))
- return
-
- wrs = [watchrule.WatchRule.load(cnxt, w) for w in wrn]
- result = [api.format_watch(w) for w in wrs]
- return result
-
- @request_context
- def show_watch_metric(self, cnxt, metric_namespace=None, metric_name=None):
- '''
- The show_watch_metric method returns the datapoints for a metric
- arg1 -> RPC context.
- arg2 -> Name of the namespace you want to see, or None to see all
- arg3 -> Name of the metric you want to see, or None to see all
- '''
-
- # DB API and schema does not yet allow us to easily query by
- # namespace/metric, but we will want this at some point
- # for now, the API can query all metric data and filter locally
- if metric_namespace is not None or metric_name is not None:
- logger.error("Filtering by namespace/metric not yet supported")
- return
-
- try:
- wds = db_api.watch_data_get_all(cnxt)
- except Exception as ex:
- logger.warn('show_metric (all) db error %s' % str(ex))
- return
-
- result = [api.format_watch_data(w) for w in wds]
- return result
-
- @request_context
- def set_watch_state(self, cnxt, watch_name, state):
- '''
- Temporarily set the state of a given watch
- arg1 -> RPC context.
- arg2 -> Name of the watch
- arg3 -> State (must be one defined in WatchRule class)
- '''
- wr = watchrule.WatchRule.load(cnxt, watch_name)
- if wr.state == rpc_api.WATCH_STATE_CEILOMETER_CONTROLLED:
- return
- actions = wr.set_watch_state(state)
- for action in actions:
- self._start_in_thread(wr.stack_id, action)
-
- # Return the watch with the state overridden to indicate success
- # We do not update the timestamps as we are not modifying the DB
- result = api.format_watch(wr)
- result[rpc_api.WATCH_STATE_VALUE] = state
- return result
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import urllib
-import urlparse
-
-from oslo.config import cfg
-
-from keystoneclient.contrib.ec2 import utils as ec2_utils
-
-from heat.db import api as db_api
-from heat.common import exception
-from heat.engine import clients
-from heat.engine import resource
-
-from heat.openstack.common import log
-
-LOG = log.getLogger(__name__)
-SIGNAL_TYPES = (
- WAITCONDITION, SIGNAL
-) = (
- '/waitcondition', '/signal'
-)
-SIGNAL_VERB = {WAITCONDITION: 'PUT',
- SIGNAL: 'POST'}
-
-
-class SignalResponder(resource.Resource):
-
- # Anything which subclasses this may trigger authenticated
- # API operations as a consequence of handling a signal
- requires_deferred_auth = True
-
- def handle_create(self):
- # Create a keystone user so we can create a signed URL via FnGetRefId
- user_id = self.keystone().create_stack_user(
- self.physical_resource_name())
- kp = self.keystone().get_ec2_keypair(user_id)
- if not kp:
- raise exception.Error("Error creating ec2 keypair for user %s" %
- user_id)
- else:
- self.resource_id_set(user_id)
-
- def handle_delete(self):
- if self.resource_id is None:
- return
- try:
- self.keystone().delete_stack_user(self.resource_id)
- except clients.hkc.kc.exceptions.NotFound:
- pass
- try:
- db_api.resource_data_delete(self, 'ec2_signed_url')
- except exception.NotFound:
- pass
-
- def _get_signed_url(self, signal_type=SIGNAL):
- """Create a properly formatted and pre-signed URL.
-
- This uses the created user for the credentials.
-
- See http://docs.amazonwebservices.com/AWSECommerceService/latest/DG/
- restarter-signature.html
- Also see boto/auth.py::QuerySignatureV2AuthHandler
-
- :param signal_type: either WAITCONDITION or SIGNAL.
- """
- try:
- stored = db_api.resource_data_get(self, 'ec2_signed_url')
- except exception.NotFound:
- stored = None
- if stored is not None:
- return stored
-
- waitcond_url = cfg.CONF.heat_waitcondition_server_url
- signal_url = waitcond_url.replace('/waitcondition', signal_type)
- host_url = urlparse.urlparse(signal_url)
-
- path = self.identifier().arn_url_path()
- credentials = self.keystone().get_ec2_keypair(self.resource_id)
-
- # Note the WSGI spec apparently means that the webob request we end up
- # processing in the CFN API (ec2token.py) has an unquoted path, so we
- # need to calculate the signature with the path component unquoted, but
- # ensure the actual URL contains the quoted version...
- unquoted_path = urllib.unquote(host_url.path + path)
- request = {'host': host_url.netloc.lower(),
- 'verb': SIGNAL_VERB[signal_type],
- 'path': unquoted_path,
- 'params': {'SignatureMethod': 'HmacSHA256',
- 'SignatureVersion': '2',
- 'AWSAccessKeyId': credentials.access,
- 'Timestamp':
- self.created_time.strftime("%Y-%m-%dT%H:%M:%SZ")
- }}
- # Sign the request
- signer = ec2_utils.Ec2Signer(credentials.secret)
- request['params']['Signature'] = signer.generate(request)
-
- qs = urllib.urlencode(request['params'])
- url = "%s%s?%s" % (signal_url.lower(),
- path, qs)
-
- db_api.resource_data_set(self, 'ec2_signed_url', url)
- return url
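The signing step above delegates to keystoneclient's `Ec2Signer`. As a rough illustration of what AWS Signature Version 2 involves, here is a minimal self-contained sketch; the host, path and credentials are made up, and the real `Ec2Signer` canonicalization handles more edge cases:

```python
# Sketch of AWS Signature Version 2 (HmacSHA256): build a canonical
# string-to-sign from verb, host, path and sorted query params, then
# HMAC it with the secret key and base64-encode the digest.
import base64
import hashlib
import hmac
import urllib.parse


def sigv2_sign(secret, host, verb, path, params):
    # Canonical query string: params sorted by key, percent-encoded
    query = '&'.join(
        '%s=%s' % (urllib.parse.quote(k, safe='-_.~'),
                   urllib.parse.quote(str(v), safe='-_.~'))
        for k, v in sorted(params.items()))
    to_sign = '\n'.join([verb, host.lower(), path, query])
    digest = hmac.new(secret.encode(), to_sign.encode(),
                      hashlib.sha256).digest()
    return base64.b64encode(digest).decode()


# Hypothetical credentials and endpoint, for illustration only
params = {'SignatureMethod': 'HmacSHA256',
          'SignatureVersion': '2',
          'AWSAccessKeyId': 'AKIATEST',
          'Timestamp': '2013-06-01T00:00:00Z'}
params['Signature'] = sigv2_sign('topsecret', 'heat.example.com:8000',
                                 'PUT', '/v1/waitcondition/arn-path',
                                 params)
```

The signature is then appended to the query string exactly as `_get_signed_url` does above, so the receiving CFN API can recompute and compare it.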
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from oslo.config import cfg
-
-from heat.common import exception
-from heat.engine import attributes
-from heat.engine import environment
-from heat.engine import parser
-from heat.engine import resource
-from heat.engine import scheduler
-from heat.engine import template as tmpl
-
-from heat.openstack.common import log as logging
-
-logger = logging.getLogger(__name__)
-
-
-class StackResource(resource.Resource):
- '''
- An abstract Resource subclass that allows the management of an entire Stack
- as a resource in a parent stack.
- '''
-
- def __init__(self, name, json_snippet, stack):
- super(StackResource, self).__init__(name, json_snippet, stack)
- self._nested = None
- if self.stack.parent_resource:
- self.recursion_depth = (
- self.stack.parent_resource.recursion_depth + 1)
- else:
- self.recursion_depth = 0
-
- def _outputs_to_attribs(self, json_snippet):
- if not self.attributes and 'Outputs' in json_snippet:
- self.attributes_schema = (
- attributes.Attributes
- .schema_from_outputs(json_snippet.get('Outputs')))
- self.attributes = attributes.Attributes(self.name,
- self.attributes_schema,
- self._resolve_attribute)
-
- def nested(self):
- '''
- Return a Stack object representing the nested (child) stack.
- '''
- if self._nested is None and self.resource_id is not None:
- self._nested = parser.Stack.load(self.context,
- self.resource_id,
- parent_resource=self)
-
- if self._nested is None:
- raise exception.NotFound('Nested stack not found in DB')
-
- return self._nested
-
- def create_with_template(self, child_template, user_params,
- timeout_mins=None):
- '''
- Handle the creation of the nested stack from a given JSON template.
- '''
- if self.recursion_depth >= cfg.CONF.max_nested_stack_depth:
- msg = _("Recursion depth exceeds %d.") % \
- cfg.CONF.max_nested_stack_depth
- raise exception.RequestLimitExceeded(message=msg)
- template = parser.Template(child_template)
- if ((len(template[tmpl.RESOURCES]) +
- self.stack.root_stack.total_resources() >
- cfg.CONF.max_resources_per_stack)):
- raise exception.StackResourceLimitExceeded()
- self._outputs_to_attribs(child_template)
-
- # Note we disable rollback for nested stacks, since they
- # should be rolled back by the parent stack on failure
- nested = parser.Stack(self.context,
- self.physical_resource_name(),
- template,
- environment.Environment(user_params),
- timeout_mins=timeout_mins,
- disable_rollback=True,
- parent_resource=self,
- owner_id=self.stack.id)
- nested.validate()
- self._nested = nested
- nested_id = self._nested.store()
- self.resource_id_set(nested_id)
-
- stack_creator = scheduler.TaskRunner(self._nested.stack_task,
- action=self._nested.CREATE)
- stack_creator.start(timeout=self._nested.timeout_secs())
- return stack_creator
-
- def check_create_complete(self, stack_creator):
- done = stack_creator.step()
- if done:
- if self._nested.state != (self._nested.CREATE,
- self._nested.COMPLETE):
- raise exception.Error(self._nested.status_reason)
-
- return done
-
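The handle/check pairing used throughout this class (e.g. `create_with_template` plus `check_create_complete`) relies on Heat's cooperative scheduler: the handler starts a `TaskRunner` and the checker steps it until done. A simplified, self-contained sketch of that contract; this `TaskRunner` is an invented stand-in, not the real `heat.engine.scheduler` one:

```python
class TaskRunner(object):
    """Drives a generator task one step at a time (simplified stand-in
    for heat.engine.scheduler.TaskRunner)."""

    def __init__(self, task, *args, **kwargs):
        self._task = task
        self._args = args
        self._kwargs = kwargs
        self._runner = None
        self._done = False

    def start(self, timeout=None):
        # Instantiate the generator; no work happens until step()
        self._runner = self._task(*self._args, **self._kwargs)

    def step(self):
        # Advance one step; return True once the generator is exhausted
        if not self._done:
            try:
                next(self._runner)
            except StopIteration:
                self._done = True
        return self._done


progress = []


def stack_task(action):
    # Toy task: each yield is one scheduling quantum
    for phase in ('init', action, 'complete'):
        progress.append(phase)
        yield


creator = TaskRunner(stack_task, action='CREATE')
creator.start()
while not creator.step():
    pass
# progress == ['init', 'CREATE', 'complete']
```

In the real engine the `while` loop is driven by the periodic check_*_complete callbacks rather than a busy loop, which is what lets many stack operations interleave on one thread.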
- def update_with_template(self, child_template, user_params,
- timeout_mins=None):
- """Update the nested stack with the new template."""
- template = parser.Template(child_template)
- # Note that there is no call to self._outputs_to_attribs here.
- # If we have a use case for updating attributes of the resource based
- # on updated templates we should make sure it's optional because not
- # all subclasses want that behavior, since they may offer custom
- # attributes.
-
- # Note we disable rollback for nested stacks, since they
- # should be rolled back by the parent stack on failure
- stack = parser.Stack(self.context,
- self.physical_resource_name(),
- template,
- environment.Environment(user_params),
- timeout_mins=timeout_mins,
- disable_rollback=True,
- parent_resource=self,
- owner_id=self.stack.id)
- stack.validate()
-
- nested_stack = self.nested()
- if nested_stack is None:
- raise exception.Error(_('Cannot update %s, stack not created')
- % self.name)
-
- if not hasattr(type(self), 'attributes_schema'):
- self.attributes = None
- self._outputs_to_attribs(child_template)
-
- updater = scheduler.TaskRunner(nested_stack.update_task, stack)
- updater.start()
- return updater
-
- def check_update_complete(self, updater):
- if updater is None:
- return True
-
- if not updater.step():
- return False
-
- nested_stack = self.nested()
- if nested_stack.state != (nested_stack.UPDATE,
- nested_stack.COMPLETE):
- raise exception.Error("Nested stack update failed: %s" %
- nested_stack.status_reason)
- return True
-
- def delete_nested(self):
- '''
- Delete the nested stack.
- '''
- try:
- stack = self.nested()
- except exception.NotFound:
- logger.info("Stack not found to delete")
- else:
- if stack is not None:
- delete_task = scheduler.TaskRunner(stack.delete)
- delete_task.start()
- return delete_task
-
- def check_delete_complete(self, delete_task):
- if delete_task is None:
- return True
-
- done = delete_task.step()
- if done:
- nested_stack = self.nested()
- if nested_stack.state != (nested_stack.DELETE,
- nested_stack.COMPLETE):
- raise exception.Error(nested_stack.status_reason)
-
- return done
-
- def handle_suspend(self):
- stack = self.nested()
- if stack is None:
- raise exception.Error(_('Cannot suspend %s, stack not created')
- % self.name)
-
- suspend_task = scheduler.TaskRunner(self._nested.stack_task,
- action=self._nested.SUSPEND,
- reverse=True)
-
- suspend_task.start(timeout=self._nested.timeout_secs())
- return suspend_task
-
- def check_suspend_complete(self, suspend_task):
- done = suspend_task.step()
- if done:
- if self._nested.state != (self._nested.SUSPEND,
- self._nested.COMPLETE):
- raise exception.Error(self._nested.status_reason)
-
- return done
-
- def handle_resume(self):
- stack = self.nested()
- if stack is None:
- raise exception.Error(_('Cannot resume %s, stack not created')
- % self.name)
-
- resume_task = scheduler.TaskRunner(self._nested.stack_task,
- action=self._nested.RESUME,
- reverse=False)
-
- resume_task.start(timeout=self._nested.timeout_secs())
- return resume_task
-
- def check_resume_complete(self, resume_task):
- done = resume_task.step()
- if done:
- if self._nested.state != (self._nested.RESUME,
- self._nested.COMPLETE):
- raise exception.Error(self._nested.status_reason)
-
- return done
-
- def get_output(self, op):
- '''
- Return the specified Output value from the nested stack.
-
- If the output key does not exist, raise an InvalidTemplateAttribute
- exception.
- '''
- stack = self.nested()
- if stack is None:
- return None
- if op not in stack.outputs:
- raise exception.InvalidTemplateAttribute(resource=self.name,
- key=op)
- return stack.output(op)
-
- def _resolve_attribute(self, name):
- return unicode(self.get_output(name))
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import collections
-import json
-
-from heat.api.aws import utils as aws_utils
-from heat.db import api as db_api
-from heat.common import exception
-from heat.engine.parameters import ParamSchema
-
-SECTIONS = (VERSION, DESCRIPTION, MAPPINGS,
- PARAMETERS, RESOURCES, OUTPUTS) = \
- ('AWSTemplateFormatVersion', 'Description', 'Mappings',
- 'Parameters', 'Resources', 'Outputs')
-
-
-class Template(collections.Mapping):
- '''A stack template.'''
-
- def __new__(cls, template, *args, **kwargs):
- '''Create a new Template of the appropriate class.'''
-
- if cls == Template:
- if 'heat_template_version' in template:
- # deferred import of HOT module to avoid circular dependency
- # at load time
- from heat.engine import hot
- return hot.HOTemplate(template, *args, **kwargs)
-
- return super(Template, cls).__new__(cls)
-
- def __init__(self, template, template_id=None, files=None):
- '''
- Initialise the template with a JSON object, an optional template ID
- and an optional map of included files
- '''
- self.id = template_id
- self.t = template
- self.files = files or {}
- self.maps = self[MAPPINGS]
-
- @classmethod
- def load(cls, context, template_id):
- '''Retrieve a Template with the given ID from the database.'''
- t = db_api.raw_template_get(context, template_id)
- return cls(t.template, template_id)
-
- def store(self, context=None):
- '''Store the Template in the database and return its ID.'''
- if self.id is None:
- rt = {'template': self.t}
- new_rt = db_api.raw_template_create(context, rt)
- self.id = new_rt.id
- return self.id
-
- def __getitem__(self, section):
- '''Get the relevant section in the template.'''
- if section not in SECTIONS:
- raise KeyError('"%s" is not a valid template section' % section)
- if section == VERSION:
- return self.t[section]
-
- if section == DESCRIPTION:
- default = 'No description'
- else:
- default = {}
-
- return self.t.get(section, default)
-
- def __iter__(self):
- '''Return an iterator over the section names.'''
- return iter(SECTIONS)
-
- def __len__(self):
- '''Return the number of sections.'''
- return len(SECTIONS)
-
- def resolve_find_in_map(self, s):
- '''
- Resolve constructs of the form { "Fn::FindInMap" : [ "mapping",
- "key",
- "value" ] }
- '''
- def handle_find_in_map(args):
- try:
- name, key, value = args
- return self.maps[name][key][value]
- except (ValueError, TypeError) as ex:
- raise KeyError(str(ex))
-
- return _resolve(lambda k, v: k == 'Fn::FindInMap',
- handle_find_in_map, s)
-
- @staticmethod
- def resolve_availability_zones(s, stack):
- '''
- Resolve constructs of the form { "Fn::GetAZs" : "string" }
- '''
- def match_get_az(key, value):
- return (key == 'Fn::GetAZs' and
- isinstance(value, basestring))
-
- def handle_get_az(ref):
- if stack is None:
- return ['nova']
- else:
- return stack.get_availability_zones()
-
- return _resolve(match_get_az, handle_get_az, s)
-
- @staticmethod
- def resolve_param_refs(s, parameters):
- '''
- Resolve constructs of the form { "Ref" : "string" }
- '''
- def match_param_ref(key, value):
- return (key == 'Ref' and
- isinstance(value, basestring) and
- value in parameters)
-
- def handle_param_ref(ref):
- try:
- return parameters[ref]
- except (KeyError, ValueError):
- raise exception.UserParameterMissing(key=ref)
-
- return _resolve(match_param_ref, handle_param_ref, s)
-
- @staticmethod
- def resolve_resource_refs(s, resources):
- '''
- Resolve constructs of the form { "Ref" : "resource" }
- '''
- def match_resource_ref(key, value):
- return key == 'Ref' and value in resources
-
- def handle_resource_ref(arg):
- return resources[arg].FnGetRefId()
-
- return _resolve(match_resource_ref, handle_resource_ref, s)
-
- @staticmethod
- def resolve_attributes(s, resources):
- '''
- Resolve constructs of the form { "Fn::GetAtt" : [ "WebServer",
- "PublicIp" ] }
- '''
- def handle_getatt(args):
- resource, att = args
- try:
- r = resources[resource]
- if r.state in (
- (r.CREATE, r.IN_PROGRESS),
- (r.CREATE, r.COMPLETE),
- (r.UPDATE, r.IN_PROGRESS),
- (r.UPDATE, r.COMPLETE)):
- return r.FnGetAtt(att)
- except KeyError:
- raise exception.InvalidTemplateAttribute(resource=resource,
- key=att)
-
- return _resolve(lambda k, v: k == 'Fn::GetAtt', handle_getatt, s)
-
- @staticmethod
- def reduce_joins(s):
- '''
- Reduces contiguous strings in Fn::Join to a single joined string,
- e.g. the following
- { "Fn::Join" : [ " ", [ "str1", "str2", {"f": "b"}, "str3", "str4"]]}
- is reduced to
- { "Fn::Join" : [ " ", [ "str1 str2", {"f": "b"}, "str3 str4"]]}
- '''
- def handle_join(args):
- if not isinstance(args, (list, tuple)):
- raise TypeError('Arguments to "Fn::Join" must be a list')
- try:
- delim, items = args
- except ValueError as ex:
- example = '"Fn::Join" : [ " ", [ "str1", "str2"]]'
- raise ValueError('Incorrect arguments to "Fn::Join" %s: %s' %
- ('should be', example))
-
- if not isinstance(items, (list, tuple)):
- raise TypeError('Arguments to "Fn::Join" not fully resolved')
- reduced = []
- contiguous = []
- for item in items:
- if isinstance(item, (str, unicode)):
- contiguous.append(item)
- else:
- if contiguous:
- reduced.append(delim.join(contiguous))
- contiguous = []
- reduced.append(item)
- if contiguous:
- reduced.append(delim.join(contiguous))
- return {'Fn::Join': [delim, reduced]}
-
- return _resolve(lambda k, v: k == 'Fn::Join', handle_join, s)
-
- @staticmethod
- def resolve_select(s):
- '''
- Resolve constructs of the form:
- (for a list lookup)
- { "Fn::Select" : [ "2", [ "apples", "grapes", "mangoes" ] ] }
- returns "mangoes"
-
- (for a dict lookup)
- { "Fn::Select" : [ "red", {"red": "a", "flu": "b"} ] }
- returns "a"
-
- Note: can raise IndexError, KeyError, ValueError and TypeError
- '''
- def handle_select(args):
- if not isinstance(args, (list, tuple)):
- raise TypeError('Arguments to "Fn::Select" must be a list')
-
- try:
- lookup, strings = args
- except ValueError as ex:
- example = '"Fn::Select" : [ "4", [ "str1", "str2"]]'
- raise ValueError('Incorrect arguments to "Fn::Select" %s: %s' %
- ('should be', example))
-
- try:
- index = int(lookup)
- except ValueError as ex:
- index = lookup
-
- if isinstance(strings, basestring):
- # might be serialized json.
- # if not allow it to raise a ValueError
- strings = json.loads(strings)
-
- if isinstance(strings, (list, tuple)) and isinstance(index, int):
- return strings[index]
- if isinstance(strings, dict) and isinstance(index, basestring):
- return strings[index]
- if strings is None:
- return ''
-
- raise TypeError('Arguments to "Fn::Select" not fully resolved')
-
- return _resolve(lambda k, v: k == 'Fn::Select', handle_select, s)
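The dual list/dict lookup above can be exercised in isolation. This is a trimmed sketch of `handle_select`'s happy paths, with the type-checking and error branches omitted:

```python
# Fn::Select: integer-like lookups index a list, string lookups key
# into a dict; a serialized-JSON collection is decoded first.
import json


def select(args):
    lookup, collection = args
    if isinstance(collection, str):
        # might be serialized JSON; let json.loads raise if not
        collection = json.loads(collection)
    try:
        index = int(lookup)
    except ValueError:
        index = lookup
    if collection is None:
        return ''
    return collection[index]


fruit = select(["2", ["apples", "grapes", "mangoes"]])  # -> 'mangoes'
colour = select(["red", {"red": "a", "flu": "b"}])      # -> 'a'
```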
-
- @staticmethod
- def resolve_joins(s):
- '''
- Resolve constructs of the form { "Fn::Join" : [ "delim", [ "str1",
- "str2" ] ] }
- '''
- def handle_join(args):
- if not isinstance(args, (list, tuple)):
- raise TypeError('Arguments to "Fn::Join" must be a list')
-
- try:
- delim, strings = args
- except ValueError as ex:
- example = '"Fn::Join" : [ " ", [ "str1", "str2"]]'
- raise ValueError('Incorrect arguments to "Fn::Join" %s: %s' %
- ('should be', example))
-
- if not isinstance(strings, (list, tuple)):
- raise TypeError('Arguments to "Fn::Join" not fully resolved')
-
- def empty_for_none(v):
- if v is None:
- return ''
- else:
- return v
-
- return delim.join(empty_for_none(value) for value in strings)
-
- return _resolve(lambda k, v: k == 'Fn::Join', handle_join, s)
-
- @staticmethod
- def resolve_split(s):
- '''
- Split a string in Fn::Split into a list of substrings,
- e.g. the following
- { "Fn::Split" : [ ",", "str1,str2,str3,str4"]}
- is reduced to
- ["str1", "str2", "str3", "str4"]
- '''
- def handle_split(args):
- if not isinstance(args, (list, tuple)):
- raise TypeError('Arguments to "Fn::Split" must be a list')
-
- example = '"Fn::Split" : [ ",", "str1, str2"]'
- try:
- delim, strings = args
- except ValueError as ex:
- raise ValueError('Incorrect arguments to "Fn::Split" %s: %s' %
- ('should be', example))
- if not isinstance(strings, basestring):
- raise TypeError('Incorrect arguments to "Fn::Split" %s: %s' %
- ('should be', example))
- return strings.split(delim)
- return _resolve(lambda k, v: k == 'Fn::Split', handle_split, s)
-
- @staticmethod
- def resolve_replace(s):
- """
- Resolve constructs of the form:
- {"Fn::Replace": [
- {'$var1': 'foo', '%var2%': 'bar'},
- '$var1 is %var2%'
- ]}
- This is implemented using python str.replace on each key
- """
- def handle_replace(args):
- if not isinstance(args, (list, tuple)):
- raise TypeError('Arguments to "Fn::Replace" must be a list')
-
- try:
- mapping, string = args
- except ValueError as ex:
- example = ('{"Fn::Replace": '
- '[ {"$var1": "foo", "%var2%": "bar"}, '
- '"$var1 is %var2%"]}')
- raise ValueError(
- 'Incorrect arguments to "Fn::Replace" %s: %s' %
- ('should be', example))
-
- if not isinstance(mapping, dict):
- raise TypeError(
- 'Arguments to "Fn::Replace" not fully resolved')
- if not isinstance(string, basestring):
- raise TypeError(
- 'Arguments to "Fn::Replace" not fully resolved')
-
- for k, v in mapping.items():
- if v is None:
- v = ''
- string = string.replace(k, v)
- return string
-
- return _resolve(lambda k, v: k == 'Fn::Replace', handle_replace, s)
-
- @staticmethod
- def resolve_base64(s):
- '''
- Resolve constructs of the form { "Fn::Base64" : "string" }
- '''
- def handle_base64(string):
- if not isinstance(string, basestring):
- raise TypeError('Arguments to "Fn::Base64" not fully resolved')
- return string
-
- return _resolve(lambda k, v: k == 'Fn::Base64', handle_base64, s)
-
- @staticmethod
- def resolve_member_list_to_map(s):
- '''
- Resolve constructs of the form
- {'Fn::MemberListToMap': ['Name', 'Value', ['.member.0.Name=key',
- '.member.0.Value=door']]}
- The first two arguments are the names of the key and value.
- '''
-
- def handle_member_list_to_map(args):
- correct = '''
- {'Fn::MemberListToMap': ['Name', 'Value',
- ['.member.0.Name=key',
- '.member.0.Value=door']]}
- '''
- if not isinstance(args, (list, tuple)):
- raise TypeError('Wrong Arguments try: "%s"' % correct)
- if len(args) != 3:
- raise TypeError('Wrong Arguments try: "%s"' % correct)
- if not isinstance(args[0], basestring):
- raise TypeError('Wrong Arguments try: "%s"' % correct)
- if not isinstance(args[1], basestring):
- raise TypeError('Wrong Arguments try: "%s"' % correct)
- if not isinstance(args[2], (list, tuple)):
- raise TypeError('Wrong Arguments try: "%s"' % correct)
-
- partial = {}
- for item in args[2]:
- sp = item.split('=')
- partial[sp[0]] = sp[1]
- return aws_utils.extract_param_pairs(partial,
- prefix='',
- keyname=args[0],
- valuename=args[1])
-
- return _resolve(lambda k, v: k == 'Fn::MemberListToMap',
- handle_member_list_to_map, s)
-
- @staticmethod
- def resolve_resource_facade(s, stack):
- '''
- Resolve constructs of the form {'Fn::ResourceFacade': 'Metadata'}
- '''
- resource_attributes = ('Metadata', 'DeletionPolicy', 'UpdatePolicy')
-
- def handle_resource_facade(arg):
- if arg not in resource_attributes:
- raise ValueError(
- 'Incorrect arguments to "Fn::ResourceFacade" %s: %s' %
- ('should be one of', str(resource_attributes)))
- try:
- if arg == 'Metadata':
- return stack.parent_resource.metadata
- return stack.parent_resource.t[arg]
- except KeyError:
- raise KeyError('"%s" is not specified in parent resource' %
- arg)
-
- return _resolve(lambda k, v: k == 'Fn::ResourceFacade',
- handle_resource_facade,
- s)
-
- def param_schemata(self):
- parameters = self[PARAMETERS].iteritems()
- return dict((name, ParamSchema(schema)) for name, schema in parameters)
-
-
-def _resolve(match, handle, snippet):
- '''
- Resolve constructs in a snippet of a template. The supplied match function
- should return True if a particular key-value pair should be substituted,
- and the handle function should return the correct substitution when passed
- the argument list as parameters.
-
- Returns a copy of the original snippet with the substitutions performed.
- '''
- recurse = lambda s: _resolve(match, handle, s)
-
- if isinstance(snippet, dict):
- if len(snippet) == 1:
- k, v = snippet.items()[0]
- if match(k, v):
- return handle(recurse(v))
- return dict((k, recurse(v)) for k, v in snippet.items())
- elif isinstance(snippet, list):
- return [recurse(s) for s in snippet]
- return snippet
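All of the `Fn::` handlers above funnel through `_resolve`, which walks the snippet and substitutes any single-key dict whose key matches, recursing into the arguments first so nested functions resolve inner-first. A Python 3 rendering of the same traversal, paired with a join handler equivalent to `resolve_joins`:

```python
# Recursive template-function resolution: match picks which
# single-key dicts to substitute, handle computes the replacement.
def resolve(match, handle, snippet):
    recurse = lambda s: resolve(match, handle, s)
    if isinstance(snippet, dict):
        if len(snippet) == 1:
            k, v = next(iter(snippet.items()))
            if match(k, v):
                # Resolve the arguments before handling, so nested
                # constructs are substituted inner-first
                return handle(recurse(v))
        return {k: recurse(v) for k, v in snippet.items()}
    elif isinstance(snippet, list):
        return [recurse(s) for s in snippet]
    return snippet


def handle_join(args):
    delim, strings = args
    return delim.join('' if s is None else s for s in strings)


snippet = {'a': {'Fn::Join': ['-', ['x', {'Fn::Join': ['', ['y', 'z']]}]]}}
resolved = resolve(lambda k, v: k == 'Fn::Join', handle_join, snippet)
# resolved == {'a': 'x-yz'}
```

Because each `resolve_*` method above passes its own match/handle pair, the engine can apply the functions in a fixed order (parameters, then references, then joins, and so on) over the same generic walk.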
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from heat.common import exception
-
-
-class Timestamp(object):
- '''
- A descriptor for writing a timestamp to the database.
- '''
-
- def __init__(self, db_fetch, attribute):
- '''
- Initialise with a function to fetch the database representation of an
- object (given a context and ID) and the name of the attribute to
- retrieve.
- '''
- self.db_fetch = db_fetch
- self.attribute = attribute
-
- def __get__(self, obj, obj_class):
- '''
- Get timestamp for the given object and class.
- '''
- if obj is None or obj.id is None:
- return None
-
- o = self.db_fetch(obj.context, obj.id)
- return getattr(o, self.attribute)
-
- def __set__(self, obj, timestamp):
- '''Update the timestamp for the given object.'''
- if obj.id is None:
- raise exception.ResourceNotAvailable(resource_name=obj.name)
- o = self.db_fetch(obj.context, obj.id)
- o.update_and_save({self.attribute: timestamp})
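`Timestamp` is a standard Python descriptor: attribute access on the owning object is routed through `__get__`/`__set__`, which read and write the database row instead of instance state. A runnable sketch with an in-memory dict standing in for the database (`FakeRow` and `db_fetch` are invented stand-ins for the SQLAlchemy row and `heat.db.api` fetcher):

```python
import datetime

DB = {}  # stand-in for the database, keyed by object id


class FakeRow(object):
    def __init__(self):
        self.updated_at = None

    def update_and_save(self, values):
        for k, v in values.items():
            setattr(self, k, v)


def db_fetch(context, obj_id):
    return DB.setdefault(obj_id, FakeRow())


class Timestamp(object):
    """Descriptor that proxies an attribute to the stored row."""

    def __init__(self, db_fetch, attribute):
        self.db_fetch = db_fetch
        self.attribute = attribute

    def __get__(self, obj, obj_class):
        if obj is None or obj.id is None:
            return None
        row = self.db_fetch(obj.context, obj.id)
        return getattr(row, self.attribute)

    def __set__(self, obj, timestamp):
        row = self.db_fetch(obj.context, obj.id)
        row.update_and_save({self.attribute: timestamp})


class Resource(object):
    updated_time = Timestamp(db_fetch, 'updated_at')

    def __init__(self, rid):
        self.id = rid
        self.context = None


r = Resource(1)
now = datetime.datetime(2013, 5, 1, 12, 0)
r.updated_time = now          # writes through to the stored row
```

The payoff is that callers use plain attribute syntax (`resource.updated_time`) while every read and write stays consistent with the database, never a stale in-memory copy.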
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from heat.db import api as db_api
-
-from heat.engine import dependencies
-from heat.engine import resource
-from heat.engine import scheduler
-
-from heat.openstack.common import log as logging
-
-logger = logging.getLogger(__name__)
-
-
-class StackUpdate(object):
- """
- A Task to perform the update of an existing stack to a new template.
- """
-
- def __init__(self, existing_stack, new_stack, previous_stack,
- rollback=False):
- """Initialise with the existing, new and backup (previous) stacks."""
- self.existing_stack = existing_stack
- self.new_stack = new_stack
- self.previous_stack = previous_stack
-
- self.rollback = rollback
-
- self.existing_snippets = dict((r.name, r.parsed_template())
- for r in self.existing_stack)
-
- def __repr__(self):
- if self.rollback:
- return '%s Rollback' % str(self.existing_stack)
- else:
- return '%s Update' % str(self.existing_stack)
-
- @scheduler.wrappertask
- def __call__(self):
- """Return a co-routine that updates the stack."""
-
- cleanup_prev = scheduler.DependencyTaskGroup(
- self.previous_stack.dependencies,
- self._remove_backup_resource,
- reverse=True)
-
- update = scheduler.DependencyTaskGroup(self.dependencies(),
- self._resource_update)
-
- if not self.rollback:
- yield cleanup_prev()
-
- try:
- yield update()
- finally:
- prev_deps = self.previous_stack._get_dependencies(
- self.previous_stack.resources.itervalues())
- self.previous_stack.dependencies = prev_deps
-
- def _resource_update(self, res):
- if res.name in self.new_stack and self.new_stack[res.name] is res:
- return self._process_new_resource_update(res)
- else:
- return self._process_existing_resource_update(res)
-
- @scheduler.wrappertask
- def _remove_backup_resource(self, prev_res):
- if prev_res.state not in ((prev_res.INIT, prev_res.COMPLETE),
- (prev_res.DELETE, prev_res.COMPLETE)):
- logger.debug("Deleting backup resource %s" % prev_res.name)
- yield prev_res.destroy()
-
- @staticmethod
- def _exchange_stacks(existing_res, prev_res):
- db_api.resource_exchange_stacks(existing_res.stack.context,
- existing_res.id, prev_res.id)
- existing_res.stack, prev_res.stack = prev_res.stack, existing_res.stack
- existing_res.stack[existing_res.name] = existing_res
- prev_res.stack[prev_res.name] = prev_res
-
- @scheduler.wrappertask
- def _create_resource(self, new_res):
- res_name = new_res.name
-
- # Clean up previous resource
- if res_name in self.previous_stack:
- prev_res = self.previous_stack[res_name]
-
- if prev_res.state not in ((prev_res.INIT, prev_res.COMPLETE),
- (prev_res.DELETE, prev_res.COMPLETE)):
- # Swap in the backup resource if it is in a valid state,
- # instead of creating a new resource
- if prev_res.status == prev_res.COMPLETE:
- logger.debug("Swapping in backup Resource %s" % res_name)
- self._exchange_stacks(self.existing_stack[res_name],
- prev_res)
- return
-
- logger.debug("Deleting backup Resource %s" % res_name)
- yield prev_res.destroy()
-
- # Back up existing resource
- if res_name in self.existing_stack:
- logger.debug("Backing up existing Resource %s" % res_name)
- existing_res = self.existing_stack[res_name]
- existing_res.stack = self.previous_stack
- self.previous_stack[res_name] = existing_res
- existing_res.state_set(existing_res.UPDATE, existing_res.COMPLETE)
-
- new_res.stack = self.existing_stack
- self.existing_stack[res_name] = new_res
- yield new_res.create()
-
- @scheduler.wrappertask
- def _process_new_resource_update(self, new_res):
- res_name = new_res.name
-
- if res_name in self.existing_stack:
- existing_res = self.existing_stack[res_name]
- try:
- yield self._update_in_place(existing_res,
- new_res)
- except resource.UpdateReplace:
- pass
- else:
- logger.info("Resource %s for stack %s updated" %
- (res_name, self.existing_stack.name))
- return
-
- yield self._create_resource(new_res)
-
- @scheduler.wrappertask
- def _update_in_place(self, existing_res, new_res):
- # Compare resolved pre/post update resource snippets,
- # note the new resource snippet is resolved in the context
- # of the existing stack (which is the stack being updated)
- existing_snippet = self.existing_snippets[existing_res.name]
- new_snippet = self.existing_stack.resolve_runtime_data(new_res.t)
-
- if new_snippet != existing_snippet:
- yield existing_res.update(new_snippet, existing_snippet)
-
- @scheduler.wrappertask
- def _process_existing_resource_update(self, existing_res):
- res_name = existing_res.name
-
- if res_name in self.previous_stack:
- yield self._remove_backup_resource(self.previous_stack[res_name])
-
- if res_name in self.new_stack:
- new_res = self.new_stack[res_name]
- if new_res.state == (new_res.INIT, new_res.COMPLETE):
- # Already updated in-place
- return
-
- if existing_res.stack is not self.previous_stack:
- yield existing_res.destroy()
-
- if res_name not in self.new_stack:
- del self.existing_stack.resources[res_name]
-
- def dependencies(self):
- '''
- Return a Dependencies object representing the dependencies between
- update operations to move from an existing stack definition to a new
- one.
- '''
- existing_deps = self.existing_stack.dependencies
- new_deps = self.new_stack.dependencies
-
- def edges():
- # Create/update the new stack's resources in create order
- for e in new_deps.graph().edges():
- yield e
- # Destroy/cleanup the old stack's resources in delete order
- for e in existing_deps.graph(reverse=True).edges():
- yield e
- # Don't cleanup old resources until after they have been replaced
- for res in self.existing_stack:
- if res.name in self.new_stack:
- yield (res, self.new_stack[res.name])
-
- return dependencies.Dependencies(edges())
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-
-import datetime
-from heat.common import exception
-from heat.openstack.common import log as logging
-from heat.openstack.common import timeutils
-from heat.engine import timestamp
-from heat.db import api as db_api
-from heat.engine import parser
-from heat.rpc import api as rpc_api
-
-logger = logging.getLogger(__name__)
-
-
-class WatchRule(object):
- WATCH_STATES = (
- ALARM,
- NORMAL,
- NODATA,
- SUSPENDED,
- CEILOMETER_CONTROLLED,
- ) = (
- rpc_api.WATCH_STATE_ALARM,
- rpc_api.WATCH_STATE_OK,
- rpc_api.WATCH_STATE_NODATA,
- rpc_api.WATCH_STATE_SUSPENDED,
- rpc_api.WATCH_STATE_CEILOMETER_CONTROLLED,
- )
- ACTION_MAP = {ALARM: 'AlarmActions',
- NORMAL: 'OKActions',
- NODATA: 'InsufficientDataActions'}
-
- created_at = timestamp.Timestamp(db_api.watch_rule_get, 'created_at')
- updated_at = timestamp.Timestamp(db_api.watch_rule_get, 'updated_at')
-
-    def __init__(self, context, watch_name, rule, stack_id=None,
-                 state=NODATA, wid=None, watch_data=None,
-                 last_evaluated=None):
-        self.context = context
-        self.now = timeutils.utcnow()
-        self.name = watch_name
-        self.state = state
-        self.rule = rule
-        self.stack_id = stack_id
-        period = 0
-        if 'Period' in rule:
-            period = int(rule['Period'])
-        elif 'period' in rule:
-            period = int(rule['period'])
-        self.timeperiod = datetime.timedelta(seconds=period)
-        self.id = wid
-        # Avoid mutable and import-time-evaluated default arguments: default
-        # the data list to empty and the evaluation time to "now" per call.
-        self.watch_data = watch_data if watch_data is not None else []
-        self.last_evaluated = (last_evaluated if last_evaluated is not None
-                               else timeutils.utcnow())
-
- @classmethod
- def load(cls, context, watch_name=None, watch=None):
- '''
- Load the watchrule object, either by name or via an existing DB object
- '''
- if watch is None:
- try:
- watch = db_api.watch_rule_get_by_name(context, watch_name)
- except Exception as ex:
- logger.warn('WatchRule.load (%s) db error %s' %
- (watch_name, str(ex)))
- if watch is None:
- raise exception.WatchRuleNotFound(watch_name=watch_name)
- else:
- return cls(context=context,
- watch_name=watch.name,
- rule=watch.rule,
- stack_id=watch.stack_id,
- state=watch.state,
- wid=watch.id,
- watch_data=watch.watch_data,
- last_evaluated=watch.last_evaluated)
-
- def store(self):
- '''
- Store the watchrule in the database and return its ID
- If self.id is set, we update the existing rule
- '''
-
- wr_values = {
- 'name': self.name,
- 'rule': self.rule,
- 'state': self.state,
- 'stack_id': self.stack_id
- }
-
- if not self.id:
- wr = db_api.watch_rule_create(self.context, wr_values)
- self.id = wr.id
- else:
- db_api.watch_rule_update(self.context, self.id, wr_values)
-
- def destroy(self):
- '''
- Delete the watchrule from the database.
- '''
- if self.id:
- db_api.watch_rule_delete(self.context, self.id)
-
- def do_data_cmp(self, data, threshold):
- op = self.rule['ComparisonOperator']
- if op == 'GreaterThanThreshold':
- return data > threshold
- elif op == 'GreaterThanOrEqualToThreshold':
- return data >= threshold
- elif op == 'LessThanThreshold':
- return data < threshold
- elif op == 'LessThanOrEqualToThreshold':
- return data <= threshold
- else:
- return False
-
- def do_Maximum(self):
- data = 0
- have_data = False
- for d in self.watch_data:
- if d.created_at < self.now - self.timeperiod:
- continue
-            if not have_data:
-                data = float(d.data[self.rule['MetricName']]['Value'])
-                have_data = True
-            elif float(d.data[self.rule['MetricName']]['Value']) > data:
-                data = float(d.data[self.rule['MetricName']]['Value'])
-
- if not have_data:
- return self.NODATA
-
- if self.do_data_cmp(data,
- float(self.rule['Threshold'])):
- return self.ALARM
- else:
- return self.NORMAL
-
- def do_Minimum(self):
- data = 0
- have_data = False
- for d in self.watch_data:
- if d.created_at < self.now - self.timeperiod:
- continue
- if not have_data:
- data = float(d.data[self.rule['MetricName']]['Value'])
- have_data = True
- elif float(d.data[self.rule['MetricName']]['Value']) < data:
- data = float(d.data[self.rule['MetricName']]['Value'])
-
- if not have_data:
- return self.NODATA
-
- if self.do_data_cmp(data,
- float(self.rule['Threshold'])):
- return self.ALARM
- else:
- return self.NORMAL
-
- def do_SampleCount(self):
- '''
- count all samples within the specified period
- '''
- data = 0
- for d in self.watch_data:
- if d.created_at < self.now - self.timeperiod:
- continue
- data = data + 1
-
- if self.do_data_cmp(data,
- float(self.rule['Threshold'])):
- return self.ALARM
- else:
- return self.NORMAL
-
- def do_Average(self):
- data = 0
- samples = 0
- for d in self.watch_data:
- if d.created_at < self.now - self.timeperiod:
- continue
- samples = samples + 1
- data = data + float(d.data[self.rule['MetricName']]['Value'])
-
- if samples == 0:
- return self.NODATA
-
- data = data / samples
- if self.do_data_cmp(data,
- float(self.rule['Threshold'])):
- return self.ALARM
- else:
- return self.NORMAL
-
- def do_Sum(self):
- data = 0
- for d in self.watch_data:
- if d.created_at < self.now - self.timeperiod:
- logger.debug('ignoring %s' % str(d.data))
- continue
- data = data + float(d.data[self.rule['MetricName']]['Value'])
-
- if self.do_data_cmp(data,
- float(self.rule['Threshold'])):
- return self.ALARM
- else:
- return self.NORMAL
-
- def get_alarm_state(self):
- fn = getattr(self, 'do_%s' % self.rule['Statistic'])
- return fn()
-
- def evaluate(self):
- if self.state == self.SUSPENDED:
- return []
-        # has enough time progressed to run the rule?
- self.now = timeutils.utcnow()
- if self.now < (self.last_evaluated + self.timeperiod):
- return []
- return self.run_rule()
-
- def get_details(self):
- return {'alarm': self.name,
- 'state': self.state}
-
- def run_rule(self):
- new_state = self.get_alarm_state()
- actions = self.rule_actions(new_state)
- self.state = new_state
-
- self.last_evaluated = self.now
- self.store()
- return actions
-
- def rule_actions(self, new_state):
- logger.info('WATCH: stack:%s, watch_name:%s %s',
- self.stack_id, self.name, new_state)
- actions = []
- if self.ACTION_MAP[new_state] not in self.rule:
- logger.info('no action for new state %s',
- new_state)
- else:
- s = db_api.stack_get(self.context, self.stack_id)
- stack = parser.Stack.load(self.context, stack=s)
- if (stack.action != stack.DELETE
- and stack.status == stack.COMPLETE):
- for refid in self.rule[self.ACTION_MAP[new_state]]:
- actions.append(stack.resource_by_refid(refid).signal)
- else:
- logger.warning("Could not process watch state %s for stack" %
- new_state)
- return actions
-
- def _to_ceilometer(self, data):
- from heat.engine import clients
- clients = clients.Clients(self.context)
- sample = {}
- sample['counter_type'] = 'gauge'
-
- for k, d in iter(data.items()):
- if k == 'Namespace':
- continue
- sample['counter_name'] = k
- sample['counter_volume'] = d['Value']
- sample['counter_unit'] = d['Unit']
- dims = d.get('Dimensions', {})
- if isinstance(dims, list):
- dims = dims[0]
- sample['resource_metadata'] = dims
- sample['resource_id'] = dims.get('InstanceId')
- logger.debug('new sample:%s data:%s' % (k, sample))
- clients.ceilometer().samples.create(**sample)
-
- def create_watch_data(self, data):
- if self.state == self.CEILOMETER_CONTROLLED:
- # this is a short term measure for those that have cfn-push-stats
- # within their templates, but want to use Ceilometer alarms.
-
- self._to_ceilometer(data)
- return
-
- if self.state == self.SUSPENDED:
- logger.debug('Ignoring metric data for %s, SUSPENDED state'
- % self.name)
- return []
-
-        if self.rule['MetricName'] not in data:
-            # Our simplified cloudwatch implementation only expects a single
-            # Metric associated with each alarm, but some cfn-push-stats
-            # options, e.g. --haproxy, try to push multiple metrics when we
-            # actually only care about one (the one we're alarming on),
-            # so just ignore any data which doesn't contain MetricName
- logger.debug('Ignoring metric data (only accept %s) : %s' %
- (self.rule['MetricName'], data))
- return
-
- watch_data = {
- 'data': data,
- 'watch_rule_id': self.id
- }
- wd = db_api.watch_data_create(None, watch_data)
- logger.debug('new watch:%s data:%s' % (self.name, str(wd.data)))
-
- def state_set(self, state):
- '''
- Persistently store the watch state
- '''
- if state not in self.WATCH_STATES:
- raise ValueError("Invalid watch state %s" % state)
-
- self.state = state
- self.store()
-
- def set_watch_state(self, state):
- '''
- Temporarily set the watch state, returns list of functions to be
- scheduled in the stack ThreadGroup for the specified state
- '''
-
- if state not in self.WATCH_STATES:
- raise ValueError('Unknown watch state %s' % state)
-
- actions = []
- if state != self.state:
- actions = self.rule_actions(state)
- if actions:
- logger.debug("Overriding state %s for watch %s with %s" %
- (self.state, self.name, state))
- else:
- logger.warning("Unable to override state %s for watch %s" %
- (self.state, self.name))
- return actions
-
-
-def rule_can_use_sample(wr, stats_data):
-    def match_dimensions(rule, data):
-        for k, v in iter(rule.items()):
-            if k not in data:
-                return False
-            elif v != data[k]:
-                return False
-        return True
-
-    if wr.state == WatchRule.SUSPENDED:
-        return False
-    if wr.state == WatchRule.CEILOMETER_CONTROLLED:
-        metric = wr.rule['counter_name']
-        rule_dims = {}
-        for k, v in iter(wr.rule.get('matching_metadata', {}).items()):
-            name = k.split('.')[-1]
-            rule_dims[name] = v
-    else:
-        metric = wr.rule['MetricName']
-        rule_dims = dict((d['Name'], d['Value'])
-                         for d in wr.rule.get('Dimensions', []))
-
-    if metric not in stats_data:
-        return False
-
-    for k, v in iter(stats_data.items()):
-        if k == 'Namespace':
-            continue
-        if k == metric:
-            data_dims = v.get('Dimensions', {})
-            if isinstance(data_dims, list):
-                data_dims = data_dims[0]
-            if match_dimensions(rule_dims, data_dims):
- return True
- return False
+++ /dev/null
-# Translations template for heat.
-# Copyright (C) 2013 ORGANIZATION
-# This file is distributed under the same license as the heat project.
-# FIRST AUTHOR <EMAIL@ADDRESS>, 2013.
-#
-#, fuzzy
-msgid ""
-msgstr ""
-"Project-Id-Version: heat 2013.2.a430.gd264018\n"
-"Report-Msgid-Bugs-To: EMAIL@ADDRESS\n"
-"POT-Creation-Date: 2013-08-27 11:15+0800\n"
-"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
-"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
-"Language-Team: LANGUAGE <LL@li.org>\n"
-"MIME-Version: 1.0\n"
-"Content-Type: text/plain; charset=utf-8\n"
-"Content-Transfer-Encoding: 8bit\n"
-"Generated-By: Babel 0.9.6\n"
-
-#: heat/api/aws/ec2token.py:38
-msgid "Authentication Endpoint URI"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:260
-#, python-format
-msgid "Failed to fetch template: %s"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:293
-msgid "DisableRollback and OnFailure may not be used together"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:313
-#, python-format
-msgid "Unexpected action %(action)s"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:331 heat/api/cfn/v1/stacks.py:413
-msgid "Invalid Template URL"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:335 heat/api/cfn/v1/stacks.py:416
-msgid "TemplateBody or TemplateUrl were not given."
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:341 heat/api/cfn/v1/stacks.py:422
-msgid "The Template must be a JSON or YAML document."
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:383
-msgid "stack not not found"
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:49
-#, python-format
-msgid "Processing request: %(method)s %(path)s Accept: %(accept)s"
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:64
-#, python-format
-msgid "Matched versioned URI. Version: %(major_version)d.%(minor_version)d"
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:72
-#, python-format
-msgid ""
-"Unknown version in versioned URI: %(major_version)d.%(minor_version)d. "
-"Returning version choices."
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:88
-#, python-format
-msgid "Matched versioned media type. Version: %(major_version)d.%(minor_version)d"
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:94
-#, python-format
-msgid ""
-"Unknown version in accept header: "
-"%(major_version)d.%(minor_version)d...returning version choices."
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:102
-#, python-format
-msgid "Unknown accept header: %s...returning version choices."
-msgstr ""
-
-#: heat/api/openstack/v1/actions.py:43
-msgid "No action specified"
-msgstr ""
-
-#: heat/api/openstack/v1/actions.py:46
-msgid "Multiple actions specified"
-msgstr ""
-
-#: heat/api/openstack/v1/actions.py:50
-#, python-format
-msgid "Invalid action %s specified"
-msgstr ""
-
-#: heat/api/openstack/v1/actions.py:57
-#, python-format
-msgid "Unexpected action %s"
-msgstr ""
-
-#: heat/api/openstack/v1/events.py:100
-#, python-format
-msgid "No events found for resource %s"
-msgstr ""
-
-#: heat/api/openstack/v1/events.py:118
-#, python-format
-msgid "No event %s found"
-msgstr ""
-
-#: heat/api/openstack/v1/stacks.py:75
-#, python-format
-msgid "%s not in valid format"
-msgstr ""
-
-#: heat/api/openstack/v1/stacks.py:83
-msgid "No stack name specified"
-msgstr ""
-
-#: heat/api/openstack/v1/stacks.py:101
-#, python-format
-msgid "Could not retrieve template: %s"
-msgstr ""
-
-#: heat/api/openstack/v1/stacks.py:104
-msgid "No template specified"
-msgstr ""
-
-#: heat/api/openstack/v1/util.py:56
-msgid "Invalid Stack address"
-msgstr ""
-
-#: heat/common/auth_password.py:116
-msgid "Request missing required header X-Auth-Url"
-msgstr ""
-
-#: heat/common/auth_password.py:121
-#, python-format
-msgid "Header X-Auth-Url \"%s\" not an allowed endpoint"
-msgstr ""
-
-#: heat/common/config.py:37
-msgid "The flavor to use"
-msgstr ""
-
-#: heat/common/config.py:39
-msgid "The API paste config file to use"
-msgstr ""
-
-#: heat/common/config.py:102
-msgid "Allow orchestration of multiple clouds"
-msgstr ""
-
-#: heat/common/config.py:105
-msgid ""
-"Allowed keystone endpoints for auth_uri when multi_cloud is enabled. At "
-"least one endpoint needs to be specified."
-msgstr ""
-
-#: heat/common/exception.py:102
-msgid "An unknown exception occurred."
-msgstr ""
-
-#: heat/common/exception.py:113 heat/openstack/common/rpc/common.py:89
-msgid "Exception in string format operation"
-msgstr ""
-
-#: heat/common/exception.py:125
-#, python-format
-msgid "Missing required credential: %(required)s"
-msgstr ""
-
-#: heat/common/exception.py:129
-#, python-format
-msgid ""
-"Incorrect auth strategy, expected \"%(expected)s\" but received "
-"\"%(received)s\""
-msgstr ""
-
-#: heat/common/exception.py:134
-#, python-format
-msgid "Connect error/bad request to Auth service at URL %(url)s."
-msgstr ""
-
-#: heat/common/exception.py:138
-#, python-format
-msgid "Auth service at URL %(url)s not found."
-msgstr ""
-
-#: heat/common/exception.py:142
-msgid "Authorization failed."
-msgstr ""
-
-#: heat/common/exception.py:146
-msgid "You are not authenticated."
-msgstr ""
-
-#: heat/common/exception.py:150 heat/common/exception.py:155
-msgid "You are not authorized to complete this action."
-msgstr ""
-
-#: heat/common/exception.py:159
-#, python-format
-msgid "Data supplied was not valid: %(reason)s"
-msgstr ""
-
-#: heat/common/exception.py:163
-#, python-format
-msgid "Redirecting to %(uri)s for authorization."
-msgstr ""
-
-#: heat/common/exception.py:167
-msgid "There was an error configuring the client."
-msgstr ""
-
-#: heat/common/exception.py:171
-#, python-format
-msgid ""
-"The request returned a 302 Multiple Choices. This generally means that "
-"you have not included a version indicator in a request URI.\n"
-"\n"
-"The body of response returned:\n"
-"%(body)s"
-msgstr ""
-
-#: heat/common/exception.py:177
-#, python-format
-msgid ""
-"The request returned a 413 Request Entity Too Large. This generally means"
-" that rate limiting or a quota threshold was breached.\n"
-"\n"
-"The response body:\n"
-"%(body)s"
-msgstr ""
-
-#: heat/common/exception.py:188
-msgid ""
-"The request returned a 503 ServiceUnavilable. This generally occurs on "
-"service overload or other transient outage."
-msgstr ""
-
-#: heat/common/exception.py:199
-msgid "The URI was too long."
-msgstr ""
-
-#: heat/common/exception.py:203
-#, python-format
-msgid ""
-"The request returned 500 Internal Server Error\n"
-"\n"
-"The response body:\n"
-"%(body)s"
-msgstr ""
-
-#: heat/common/exception.py:208
-#, python-format
-msgid "Maximum redirects (%(redirects)s) was exceeded."
-msgstr ""
-
-#: heat/common/exception.py:212
-msgid "Received invalid HTTP redirect."
-msgstr ""
-
-#: heat/common/exception.py:216
-msgid "Response from Keystone does not contain a Heat endpoint."
-msgstr ""
-
-#: heat/common/exception.py:220
-#, python-format
-msgid ""
-"Multiple 'image' service matches for region %(region)s. This generally "
-"means that a region is required and you have not supplied one."
-msgstr ""
-
-#: heat/common/exception.py:226
-#, python-format
-msgid "The Parameter (%(key)s) was not provided."
-msgstr ""
-
-#: heat/common/exception.py:230
-#, python-format
-msgid "The Parameter (%(key)s) was not defined in template."
-msgstr ""
-
-#: heat/common/exception.py:234
-#, python-format
-msgid "The Referenced Attribute (%(resource)s %(key)s) is incorrect."
-msgstr ""
-
-#: heat/common/exception.py:239
-#, python-format
-msgid "The specified reference (%(resource)s %(key)s) is incorrect."
-msgstr ""
-
-#: heat/common/exception.py:244
-#, python-format
-msgid "The Key (%(key_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:248
-#, python-format
-msgid "The Flavor ID (%(flavor_id)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:252
-#, python-format
-msgid "The Image (%(image_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:256
-#, python-format
-msgid "Multiple images were found with name (%(image_name)s)."
-msgstr ""
-
-#: heat/common/exception.py:260
-#, python-format
-msgid "Searching Tenant %(target)s from Tenant %(actual)s forbidden."
-msgstr ""
-
-#: heat/common/exception.py:265
-#, python-format
-msgid "The Stack (%(stack_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:269
-#, python-format
-msgid "The Stack (%(stack_name)s) already exists."
-msgstr ""
-
-#: heat/common/exception.py:273
-#, python-format
-msgid "%(message)s"
-msgstr ""
-
-#: heat/common/exception.py:277
-#, python-format
-msgid ""
-"The Resource (%(resource_name)s) could not be found in Stack "
-"%(stack_name)s."
-msgstr ""
-
-#: heat/common/exception.py:282
-#, python-format
-msgid "The Resource Type (%(type_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:286
-#, python-format
-msgid "The Resource (%(resource_name)s) is not available."
-msgstr ""
-
-#: heat/common/exception.py:290
-#, python-format
-msgid "The Resource (%(resource_id)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:294
-#, python-format
-msgid "The Watch Rule (%(watch_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:298
-#, python-format
-msgid "%(exc_type)s: %(message)s"
-msgstr ""
-
-#: heat/common/exception.py:312
-#, python-format
-msgid "%(feature)s is not supported."
-msgstr ""
-
-#: heat/common/exception.py:316
-#, python-format
-msgid "Cannot define the following properties at the same time: %s."
-msgstr ""
-
-#: heat/common/exception.py:334
-msgid "Template exceeds maximum allowed size."
-msgstr ""
-
-#: heat/common/exception.py:338
-msgid ""
-"Egress rules are only allowed when Neutron is used and the 'VpcId' "
-"property is set."
-msgstr ""
-
-#: heat/common/plugin_loader.py:94
-#, python-format
-msgid "Failed to import module %s"
-msgstr ""
-
-#: heat/common/urlfetch.py:37
-#, python-format
-msgid "Fetching data from %s"
-msgstr ""
-
-#: heat/common/wsgi.py:55
-msgid ""
-"Address to bind the server. Useful when selecting a particular network "
-"interface."
-msgstr ""
-
-#: heat/common/wsgi.py:58
-msgid "The port on which the server will listen."
-msgstr ""
-
-#: heat/common/wsgi.py:65
-msgid "Number of backlog requests to configure the socket with"
-msgstr ""
-
-#: heat/common/wsgi.py:68
-msgid "Location of the SSL Certificate File to use for SSL mode"
-msgstr ""
-
-#: heat/common/wsgi.py:71
-msgid "Location of the SSL Key File to use for enabling SSL mode"
-msgstr ""
-
-#: heat/common/wsgi.py:78
-msgid "Number of workers for Heat service"
-msgstr ""
-
-#: heat/common/wsgi.py:129 heat/openstack/common/sslutils.py:61
-msgid ""
-"When running server in SSL mode, you must specify both a cert_file and "
-"key_file option value in your configuration file"
-msgstr ""
-
-#: heat/common/wsgi.py:147
-#, python-format
-msgid "Could not bind to %(bind_addr)safter trying for 30 seconds"
-msgstr ""
-
-#: heat/common/wsgi.py:179
-msgid "SIGTERM received"
-msgstr ""
-
-#: heat/common/wsgi.py:188
-msgid "SIGHUP received"
-msgstr ""
-
-#: heat/common/wsgi.py:203 heat/openstack/common/service.py:288
-#, python-format
-msgid "Starting %d workers"
-msgstr ""
-
-#: heat/common/wsgi.py:214
-#, python-format
-msgid "Removing dead child %s"
-msgstr ""
-
-#: heat/common/wsgi.py:221
-msgid "Caught keyboard interrupt. Exiting."
-msgstr ""
-
-#: heat/common/wsgi.py:225
-msgid "Exited"
-msgstr ""
-
-#: heat/common/wsgi.py:243
-#, python-format
-msgid "Child %d exiting normally"
-msgstr ""
-
-#: heat/common/wsgi.py:246
-#, python-format
-msgid "Started child %s"
-msgstr ""
-
-#: heat/common/wsgi.py:268
-msgid "Starting single process server"
-msgstr ""
-
-#: heat/common/wsgi.py:596
-#, python-format
-msgid "Exception handling resource: %s"
-msgstr ""
-
-#: heat/common/wsgi.py:597
-msgid ""
-"The server could not comply with the request since\r\n"
-"it is either malformed or otherwise incorrect.\r\n"
-msgstr ""
-
-#: heat/common/wsgi.py:611
-#, python-format
-msgid "Returning %(code)s to user: %(explanation)s"
-msgstr ""
-
-#: heat/common/wsgi.py:616
-#, python-format
-msgid "Unexpected error occurred serving API: %s"
-msgstr ""
-
-#: heat/db/sqlalchemy/api.py:393
-msgid "age should be an integer"
-msgstr ""
-
-#: heat/db/sqlalchemy/api.py:395
-msgid "age should be a positive integer"
-msgstr ""
-
-#: heat/db/sqlalchemy/migration.py:63
-msgid "python-migrate is not installed. Exiting."
-msgstr ""
-
-#: heat/db/sqlalchemy/migration.py:73
-msgid "version should be an integer"
-msgstr ""
-
-#: heat/doc/resources.py:85
-msgid "HOT Syntax"
-msgstr ""
-
-#: heat/doc/resources.py:107
-msgid "YAML Syntax"
-msgstr ""
-
-#: heat/doc/resources.py:129
-msgid "JSON Syntax"
-msgstr ""
-
-#: heat/doc/resources.py:161
-#, python-format
-msgid " from %s"
-msgstr ""
-
-#: heat/doc/resources.py:163
-#, python-format
-msgid " up to %s"
-msgstr ""
-
-#: heat/doc/resources.py:165
-#, python-format
-msgid " from length %s"
-msgstr ""
-
-#: heat/doc/resources.py:167
-#, python-format
-msgid " up to length %s"
-msgstr ""
-
-#: heat/doc/resources.py:174
-msgid "Not implemented."
-msgstr ""
-
-#: heat/doc/resources.py:185
-msgid "Required property."
-msgstr ""
-
-#: heat/doc/resources.py:189
-#, python-format
-msgid "Optional property, defaults to \"%s\"."
-msgstr ""
-
-#: heat/doc/resources.py:192
-msgid "Optional property."
-msgstr ""
-
-#: heat/doc/resources.py:196
-#, python-format
-msgid "Value must match pattern: %s"
-msgstr ""
-
-#: heat/doc/resources.py:203
-#, python-format
-msgid "Allowed values: %s"
-msgstr ""
-
-#: heat/doc/resources.py:209
-msgid "Map properties:"
-msgstr ""
-
-#: heat/doc/resources.py:215
-msgid "List contains maps with the properties:"
-msgstr ""
-
-#: heat/doc/resources.py:230
-msgid "Properties"
-msgstr ""
-
-#: heat/doc/resources.py:241
-msgid "Attributes"
-msgstr ""
-
-#: heat/engine/dependencies.py:25
-#, python-format
-msgid "Circular Dependency Found: %(cycle)s"
-msgstr ""
-
-#: heat/engine/environment.py:181
-#, python-format
-msgid "Changing %(path)s from %(was)s to %(now)s"
-msgstr ""
-
-#: heat/engine/environment.py:183
-#, python-format
-msgid "Registering %(path)s -> %(value)s"
-msgstr ""
-
-#: heat/engine/parser.py:74
-#, python-format
-msgid ""
-"Invalid stack name %s must contain only alphanumeric or \"_-.\" "
-"characters, must start with alpha"
-msgstr ""
-
-#: heat/engine/resource.py:55 heat/engine/resource.py:58
-#, python-format
-msgid "The Resource %s requires replacement."
-msgstr ""
-
-#: heat/engine/resource.py:132
-msgid "Resource name may not contain \"/\""
-msgstr ""
-
-#: heat/engine/scheduler.py:65
-#, python-format
-msgid "%s Timed out"
-msgstr ""
-
-#: heat/engine/service.py:311
-msgid "No Template provided."
-msgstr ""
-
-#: heat/engine/stack_resource.py:122
-#, python-format
-msgid "Cannot update %s, stack not created"
-msgstr ""
-
-#: heat/engine/stack_resource.py:151
-#, python-format
-msgid "Cannot suspend %s, stack not created"
-msgstr ""
-
-#: heat/engine/stack_resource.py:173
-#, python-format
-msgid "Cannot resume %s, stack not created"
-msgstr ""
-
-#: heat/engine/resources/__init__.py:39
-#, python-format
-msgid "Failed to load resources from %s"
-msgstr ""
-
-#: heat/engine/resources/instance.py:314
-#, python-format
-msgid "Build of server %(server)s failed: %(message)s (%(code)s)"
-msgstr ""
-
-#: heat/engine/resources/instance.py:321
-#, python-format
-msgid "Nova reported unexpected instance[%(name)s] status[%(status)s]"
-msgstr ""
-
-#: heat/engine/resources/instance.py:418
-#, python-format
-msgid "Cannot suspend %s, resource_id not set"
-msgstr ""
-
-#: heat/engine/resources/instance.py:424 heat/engine/resources/instance.py:478
-#, python-format
-msgid "Failed to find instance %s"
-msgstr ""
-
-#: heat/engine/resources/instance.py:455
-#, python-format
-msgid " nova reported unexpected instance[%(instance)s] status[%(status)s]"
-msgstr ""
-
-#: heat/engine/resources/instance.py:472
-#, python-format
-msgid "Cannot resume %s, resource_id not set"
-msgstr ""
-
-#: heat/engine/resources/nova_utils.py:221
-#, python-format
-msgid "Resizing to '%(flavor)s' failed, status '%(status)s'"
-msgstr ""
-
-#: heat/engine/resources/vpc.py:72
-#, python-format
-msgid "Multiple routers found with name %s"
-msgstr ""
-
-#: heat/engine/resources/neutron/loadbalancer.py:251
-msgid "The ID of the load balancing pool"
-msgstr ""
-
-#: heat/engine/resources/neutron/loadbalancer.py:254
-msgid "Port number on which the servers are running on the members"
-msgstr ""
-
-#: heat/engine/resources/neutron/loadbalancer.py:258
-msgid "The list of Nova server IDs load balanced"
-msgstr ""
-
-#: heat/engine/resources/rackspace/cloud_loadbalancer.py:33
-#, python-format
-msgid "There was an error building the loadbalancer:%(lb_name)s."
-msgstr ""
-
-#: heat/openstack/common/eventlet_backdoor.py:142
-#, python-format
-msgid "Eventlet backdoor listening on %(port)s for process %(pid)d"
-msgstr ""
-
-#: heat/openstack/common/exception.py:102
-msgid "Uncaught exception"
-msgstr ""
-
-#: heat/openstack/common/excutils.py:62
-#, python-format
-msgid "Original exception being dropped: %s"
-msgstr ""
-
-#: heat/openstack/common/excutils.py:90
-#, python-format
-msgid "Unexpected exception occurred %d time(s)... retrying."
-msgstr ""
-
-#: heat/openstack/common/fileutils.py:64
-#, python-format
-msgid "Reloading cached file %s"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:100
-#, python-format
-msgid "Could not release the acquired lock `%s`"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:166
-#, python-format
-msgid "Got semaphore \"%(lock)s\""
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:175
-#, python-format
-msgid "Attempting to grab file lock \"%(lock)s\""
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:185
-#, python-format
-msgid "Created lock path: %s"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:203
-#, python-format
-msgid "Got file lock \"%(lock)s\" at %(path)s"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:207
-#, python-format
-msgid "Released file lock \"%(lock)s\" at %(path)s"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:244
-#, python-format
-msgid "Got semaphore / lock \"%(function)s\""
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:248
-#, python-format
-msgid "Semaphore / lock released \"%(function)s\""
-msgstr ""
-
-#: heat/openstack/common/log.py:244
-#, python-format
-msgid "Deprecated: %s"
-msgstr ""
-
-#: heat/openstack/common/log.py:336
-#, python-format
-msgid "Error loading logging config %(log_config)s: %(err_msg)s"
-msgstr ""
-
-#: heat/openstack/common/log.py:386
-#, python-format
-msgid "syslog facility must be one of: %s"
-msgstr ""
-
-#: heat/openstack/common/log.py:556
-#, python-format
-msgid "Fatal call to deprecated config: %(msg)s"
-msgstr ""
-
-#: heat/openstack/common/loopingcall.py:84
-#, python-format
-msgid "task run outlasted interval by %s sec"
-msgstr ""
-
-#: heat/openstack/common/loopingcall.py:91
-msgid "in fixed duration looping call"
-msgstr ""
-
-#: heat/openstack/common/loopingcall.py:131
-#, python-format
-msgid "Dynamic looping call sleeping for %.02f seconds"
-msgstr ""
-
-#: heat/openstack/common/loopingcall.py:138
-msgid "in dynamic looping call"
-msgstr ""
-
-#: heat/openstack/common/policy.py:75
-msgid "JSON file containing policy"
-msgstr ""
-
-#: heat/openstack/common/policy.py:78
-msgid "Rule enforced when requested rule is not found"
-msgstr ""
-
-#: heat/openstack/common/policy.py:92
-#, python-format
-msgid "Policy doesn't allow %s to be performed."
-msgstr ""
-
-#: heat/openstack/common/policy.py:177
-#, python-format
-msgid "Rules must be an instance of dict or Rules, got %s instead"
-msgstr ""
-
-#: heat/openstack/common/policy.py:207
-msgid "Rules successfully reloaded"
-msgstr ""
-
-#: heat/openstack/common/policy.py:253
-#, python-format
-msgid "Rule %s will be now enforced"
-msgstr ""
-
-#: heat/openstack/common/policy.py:268
-#, python-format
-msgid "Rule [%s] doesn't exist"
-msgstr ""
-
-#: heat/openstack/common/policy.py:478
-#, python-format
-msgid "Failed to understand rule %s"
-msgstr ""
-
-#: heat/openstack/common/policy.py:488
-#, python-format
-msgid "No handler for matches of kind %s"
-msgstr ""
-
-#: heat/openstack/common/policy.py:759
-#, python-format
-msgid "Failed to understand rule %r"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:127
-#, python-format
-msgid "Got unknown keyword args to utils.execute: %r"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:142
-#, python-format
-msgid "Running cmd (subprocess): %s"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:167
-#: heat/openstack/common/processutils.py:240
-#, python-format
-msgid "Result was %s"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:179
-#, python-format
-msgid "%r failed. Retrying."
-msgstr ""
-
-#: heat/openstack/common/processutils.py:219
-#, python-format
-msgid "Running cmd (SSH): %s"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:221
-msgid "Environment not supported over SSH"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:225
-msgid "process_input not supported over SSH"
-msgstr ""
-
-#: heat/openstack/common/service.py:118 heat/openstack/common/service.py:335
-msgid "Full set of CONF:"
-msgstr ""
-
-#: heat/openstack/common/service.py:127 heat/openstack/common/service.py:217
-#, python-format
-msgid "Caught %s, exiting"
-msgstr ""
-
-#: heat/openstack/common/service.py:139
-msgid "Exception during rpc cleanup."
-msgstr ""
-
-#: heat/openstack/common/service.py:188
-msgid "Parent process has died unexpectedly, exiting"
-msgstr ""
-
-#: heat/openstack/common/service.py:223
-msgid "Unhandled exception"
-msgstr ""
-
-#: heat/openstack/common/service.py:256
-msgid "Forking too fast, sleeping"
-msgstr ""
-
-#: heat/openstack/common/service.py:278
-#, python-format
-msgid "Started child %d"
-msgstr ""
-
-#: heat/openstack/common/service.py:305
-#, python-format
-msgid "Child %(pid)d killed by signal %(sig)d"
-msgstr ""
-
-#: heat/openstack/common/service.py:309
-#, python-format
-msgid "Child %(pid)s exited with status %(code)d"
-msgstr ""
-
-#: heat/openstack/common/service.py:313
-#, python-format
-msgid "pid %d not in child list"
-msgstr ""
-
-#: heat/openstack/common/service.py:345
-#, python-format
-msgid "Caught %s, stopping children"
-msgstr ""
-
-#: heat/openstack/common/service.py:363
-#, python-format
-msgid "Waiting on %d children to exit"
-msgstr ""
-
-#: heat/openstack/common/sslutils.py:52
-#, python-format
-msgid "Unable to find cert_file : %s"
-msgstr ""
-
-#: heat/openstack/common/sslutils.py:55
-#, python-format
-msgid "Unable to find ca_file : %s"
-msgstr ""
-
-#: heat/openstack/common/sslutils.py:58
-#, python-format
-msgid "Unable to find key_file : %s"
-msgstr ""
-
-#: heat/openstack/common/sslutils.py:100
-#, python-format
-msgid "Invalid SSL version : %s"
-msgstr ""
-
-#: heat/openstack/common/db/exception.py:44
-msgid "Invalid Parameter: Unicode is not supported by the current database."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/session.py:552
-msgid "DB exception wrapped."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/session.py:614
-#, python-format
-msgid "Got mysql server has gone away: %s"
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/session.py:694
-#, python-format
-msgid "SQL connection failed. %s attempts left."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:50
-msgid "Sort key supplied was not valid."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:89
-msgid "Id not in sort_keys; is sort_keys unique?"
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:111
-msgid "Unknown sort direction, must be 'desc' or 'asc'"
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:182
-#, python-format
-msgid ""
-"Please specify column %s in col_name_col_instance param. It is required "
-"because the column has a type unsupported by sqlite."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:188
-#, python-format
-msgid ""
-"col_name_col_instance param has the wrong type of column instance for "
-"column %s. It should be an instance of sqlalchemy.Column."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:268
-#, python-format
-msgid "Deleting duplicated row with id: %(id)s from table: %(table)s"
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:290
-msgid "Unsupported id columns type"
-msgstr ""
-
-#: heat/openstack/common/notifier/api.py:129
-#, python-format
-msgid "%s not in valid priorities"
-msgstr ""
-
-#: heat/openstack/common/notifier/api.py:145
-#, python-format
-msgid ""
-"Problem '%(e)s' attempting to send to notification system. "
-"Payload=%(payload)s"
-msgstr ""
-
-#: heat/openstack/common/notifier/api.py:164
-#, python-format
-msgid "Failed to load notifier %s. These notifications will not be sent."
-msgstr ""
-
-#: heat/openstack/common/notifier/list_notifier.py:112
-#, python-format
-msgid "Problem '%(e)s' attempting to send to notification driver %(driver)s."
-msgstr ""
-
-#: heat/openstack/common/notifier/rabbit_notifier.py:27
-msgid "The rabbit_notifier is now deprecated. Please use rpc_notifier instead."
-msgstr ""
-
-#: heat/openstack/common/notifier/rpc_notifier.py:45
-#: heat/openstack/common/notifier/rpc_notifier2.py:51
-#, python-format
-msgid "Could not send notification to %(topic)s. Payload=%(message)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/__init__.py:107
-#, python-format
-msgid ""
-"An RPC is being made while holding a lock. The locks currently held are "
-"%(locks)s. This is probably a bug. Please report it. Include the "
-"following: [%(stack)s]."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:75
-msgid "Pool creating new connection"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:202
-#, python-format
-msgid "No calling threads waiting for msg_id : %(msg_id)s, message : %(data)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:205
-#, python-format
-msgid "_call_waiters: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:212
-#, python-format
-msgid ""
-"Number of call waiters is greater than warning threshold: %d. There "
-"could be a MulticallProxyWaiter leak."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:290
-#, python-format
-msgid "unpacked context: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:336
-#, python-format
-msgid "UNIQUE_ID is %s."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:408
-#, python-format
-msgid "received %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:416
-#, python-format
-msgid "no method for message: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:417
-#, python-format
-msgid "No method for message: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:445
-#: heat/openstack/common/rpc/impl_zmq.py:280
-#, python-format
-msgid "Expected exception during message handling (%s)"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:453
-#: heat/openstack/common/rpc/impl_zmq.py:286
-msgid "Exception during message handling"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:527
-#, python-format
-msgid "Making synchronous call on %s ..."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:530
-#, python-format
-msgid "MSG_ID is %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:556
-#, python-format
-msgid "Making asynchronous cast on %s..."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:565
-msgid "Making asynchronous fanout cast..."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:593
-#, python-format
-msgid "Sending %(event_type)s on %(topic)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:77
-msgid "An unknown RPC related exception occurred."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:107
-#, python-format
-msgid ""
-"Remote error: %(exc_type)s %(value)s\n"
-"%(traceback)s."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:124
-#, python-format
-msgid ""
-"Timeout while waiting on RPC response - topic: \"%(topic)s\", RPC method:"
-" \"%(method)s\" info: \"%(info)s\""
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:141
-#: heat/openstack/common/rpc/common.py:142
-#: heat/openstack/common/rpc/common.py:143
-msgid "<unknown>"
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:147
-#, python-format
-msgid "Found duplicate message(%(msg_id)s). Skipping it."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:151
-msgid "Invalid reuse of an RPC connection."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:155
-#, python-format
-msgid "Specified RPC version, %(version)s, not supported by this endpoint."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:160
-#, python-format
-msgid ""
-"Specified RPC envelope version, %(version)s, not supported by this "
-"endpoint."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:165
-#, python-format
-msgid "Specified RPC version cap, %(version_cap)s, is too low"
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:289
-#, python-format
-msgid "Returning exception %s to caller"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:162
-msgid "Failed to process message ... skipping it."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:165
-msgid "Failed to process message ... will requeue."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:502
-#, python-format
-msgid "Reconnecting to AMQP server on %(hostname)s:%(port)d"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:524
-#, python-format
-msgid "Connected to AMQP server on %(hostname)s:%(port)d"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:561
-#, python-format
-msgid ""
-"Unable to connect to AMQP server on %(hostname)s:%(port)d after "
-"%(max_retries)d tries: %(err_str)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:575
-#, python-format
-msgid ""
-"AMQP server on %(hostname)s:%(port)d is unreachable: %(err_str)s. Trying "
-"again in %(sleep_time)d seconds."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:629
-#: heat/openstack/common/rpc/impl_qpid.py:492
-#, python-format
-msgid "Failed to declare consumer for topic '%(topic)s': %(err_str)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:647
-#: heat/openstack/common/rpc/impl_qpid.py:507
-#, python-format
-msgid "Timed out waiting for RPC response: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:651
-#: heat/openstack/common/rpc/impl_qpid.py:511
-#, python-format
-msgid "Failed to consume message from queue: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:690
-#: heat/openstack/common/rpc/impl_qpid.py:546
-#, python-format
-msgid "Failed to publish message to topic '%(topic)s': %(err_str)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:161
-msgid "Failed to process message... skipping it."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:434
-#, python-format
-msgid "Unable to connect to AMQP server: %(e)s. Sleeping %(delay)s seconds"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:440
-#, python-format
-msgid "Connected to AMQP server on %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:453
-msgid "Re-established AMQP queues"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:519
-msgid "Error processing message. Skipping it."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:96
-msgid "JSON serialization failed."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:101
-#, python-format
-msgid "Deserializing: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:136
-#, python-format
-msgid "Connecting to %(addr)s with %(type)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:137
-#, python-format
-msgid "-> Subscribed to %(subscribe)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:138
-#, python-format
-msgid "-> bind: %(bind)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:146
-msgid "Could not open socket."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:158
-#, python-format
-msgid "Subscribing to %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:200
-msgid "You cannot recv on this socket."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:205
-msgid "You cannot send on this socket."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:267
-#, python-format
-msgid "Running func with context: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:305
-msgid "Sending reply"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:339
-msgid "RPC message did not include method."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:371
-msgid "Registering reactor"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:383
-msgid "In reactor registered"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:387
-msgid "Consuming socket"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:437
-#, python-format
-msgid "Creating proxy for topic: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:443
-msgid "Topic contained dangerous characters."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:475
-msgid "Topic socket file creation failed."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:481
-#, python-format
-msgid "Local per-topic backlog buffer full for topic %(topic)s. Dropping message."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:497
-#, python-format
-msgid "Required IPC directory does not exist at %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:506
-#, python-format
-msgid "Permission denied to IPC directory at %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:509
-msgid "Could not create ZeroMQ receiver daemon. Socket may already be in use."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:543
-#, python-format
-msgid "CONSUMER RECEIVED DATA: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:562
-msgid "ZMQ Envelope version unsupported or unknown."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:590
-msgid "Skipping topic registration. Already registered."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:597
-#, python-format
-msgid "Consumer is a zmq.%s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:649
-msgid "Creating payload"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:662
-msgid "Creating queue socket for reply waiter"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:675
-msgid "Sending cast"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:678
-msgid "Cast sent; waiting for reply"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:681
-#, python-format
-msgid "Received message: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:682
-msgid "Unpacking response"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:691
-msgid "Unsupported or unknown ZMQ envelope returned."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:698
-msgid "RPC Message Invalid."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:721
-#, python-format
-msgid "%(msg)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:724
-#, python-format
-msgid "Sending message(s) to: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:728
-msgid "No matchmaker results. Not casting."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:731
-msgid "No match from matchmaker."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:813
-#, python-format
-msgid "rpc_zmq_matchmaker = %(orig)s is deprecated; use %(new)s instead"
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker.py:47
-msgid "Match not found by MatchMaker."
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker.py:81
-msgid "Matchmaker does not implement registration or heartbeat."
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker.py:217
-#, python-format
-msgid "Matchmaker unregistered: %(key)s, %(host)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker.py:229
-msgid "Register before starting heartbeat."
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker_ring.py:77
-#: heat/openstack/common/rpc/matchmaker_ring.py:95
-#, python-format
-msgid "No key defining hosts for topic '%s', see ringfile"
-msgstr ""
-
-#: heat/openstack/common/rpc/service.py:49
-#, python-format
-msgid "Creating Consumer connection for Service %s"
-msgstr ""
-
-#: heat/tests/test_exception.py:24
-#, python-format
-msgid "Testing message %(text)s"
-msgstr ""
-
+++ /dev/null
-# Translations template for heat.
-# Copyright (C) 2013 ORGANIZATION
-# This file is distributed under the same license as the heat project.
-# FIRST AUTHOR <EMAIL@ADDRESS>, 2013.
-#
-#, fuzzy
-msgid ""
-msgstr ""
-"Project-Id-Version: heat 2013.2.a430.gd264018\n"
-"Report-Msgid-Bugs-To: EMAIL@ADDRESS\n"
-"POT-Creation-Date: 2013-08-27 11:15+0800\n"
-"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
-"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
-"Language-Team: LANGUAGE <LL@li.org>\n"
-"MIME-Version: 1.0\n"
-"Content-Type: text/plain; charset=utf-8\n"
-"Content-Transfer-Encoding: 8bit\n"
-"Generated-By: Babel 0.9.6\n"
-
-#: heat/api/aws/ec2token.py:38
-msgid "Authentication Endpoint URI"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:260
-#, python-format
-msgid "Failed to fetch template: %s"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:293
-msgid "DisableRollback and OnFailure may not be used together"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:313
-#, python-format
-msgid "Unexpected action %(action)s"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:331 heat/api/cfn/v1/stacks.py:413
-msgid "Invalid Template URL"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:335 heat/api/cfn/v1/stacks.py:416
-msgid "TemplateBody or TemplateUrl were not given."
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:341 heat/api/cfn/v1/stacks.py:422
-msgid "The Template must be a JSON or YAML document."
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:383
-msgid "stack not found"
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:49
-#, python-format
-msgid "Processing request: %(method)s %(path)s Accept: %(accept)s"
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:64
-#, python-format
-msgid "Matched versioned URI. Version: %(major_version)d.%(minor_version)d"
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:72
-#, python-format
-msgid ""
-"Unknown version in versioned URI: %(major_version)d.%(minor_version)d. "
-"Returning version choices."
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:88
-#, python-format
-msgid "Matched versioned media type. Version: %(major_version)d.%(minor_version)d"
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:94
-#, python-format
-msgid ""
-"Unknown version in accept header: "
-"%(major_version)d.%(minor_version)d...returning version choices."
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:102
-#, python-format
-msgid "Unknown accept header: %s...returning version choices."
-msgstr ""
-
-#: heat/api/openstack/v1/actions.py:43
-msgid "No action specified"
-msgstr ""
-
-#: heat/api/openstack/v1/actions.py:46
-msgid "Multiple actions specified"
-msgstr ""
-
-#: heat/api/openstack/v1/actions.py:50
-#, python-format
-msgid "Invalid action %s specified"
-msgstr ""
-
-#: heat/api/openstack/v1/actions.py:57
-#, python-format
-msgid "Unexpected action %s"
-msgstr ""
-
-#: heat/api/openstack/v1/events.py:100
-#, python-format
-msgid "No events found for resource %s"
-msgstr ""
-
-#: heat/api/openstack/v1/events.py:118
-#, python-format
-msgid "No event %s found"
-msgstr ""
-
-#: heat/api/openstack/v1/stacks.py:75
-#, python-format
-msgid "%s not in valid format"
-msgstr ""
-
-#: heat/api/openstack/v1/stacks.py:83
-msgid "No stack name specified"
-msgstr ""
-
-#: heat/api/openstack/v1/stacks.py:101
-#, python-format
-msgid "Could not retrieve template: %s"
-msgstr ""
-
-#: heat/api/openstack/v1/stacks.py:104
-msgid "No template specified"
-msgstr ""
-
-#: heat/api/openstack/v1/util.py:56
-msgid "Invalid Stack address"
-msgstr ""
-
-#: heat/common/auth_password.py:116
-msgid "Request missing required header X-Auth-Url"
-msgstr ""
-
-#: heat/common/auth_password.py:121
-#, python-format
-msgid "Header X-Auth-Url \"%s\" not an allowed endpoint"
-msgstr ""
-
-#: heat/common/config.py:37
-msgid "The flavor to use"
-msgstr ""
-
-#: heat/common/config.py:39
-msgid "The API paste config file to use"
-msgstr ""
-
-#: heat/common/config.py:102
-msgid "Allow orchestration of multiple clouds"
-msgstr ""
-
-#: heat/common/config.py:105
-msgid ""
-"Allowed keystone endpoints for auth_uri when multi_cloud is enabled. At "
-"least one endpoint needs to be specified."
-msgstr ""
-
-#: heat/common/exception.py:102
-msgid "An unknown exception occurred."
-msgstr ""
-
-#: heat/common/exception.py:113 heat/openstack/common/rpc/common.py:89
-msgid "Exception in string format operation"
-msgstr ""
-
-#: heat/common/exception.py:125
-#, python-format
-msgid "Missing required credential: %(required)s"
-msgstr ""
-
-#: heat/common/exception.py:129
-#, python-format
-msgid ""
-"Incorrect auth strategy, expected \"%(expected)s\" but received "
-"\"%(received)s\""
-msgstr ""
-
-#: heat/common/exception.py:134
-#, python-format
-msgid "Connect error/bad request to Auth service at URL %(url)s."
-msgstr ""
-
-#: heat/common/exception.py:138
-#, python-format
-msgid "Auth service at URL %(url)s not found."
-msgstr ""
-
-#: heat/common/exception.py:142
-msgid "Authorization failed."
-msgstr ""
-
-#: heat/common/exception.py:146
-msgid "You are not authenticated."
-msgstr ""
-
-#: heat/common/exception.py:150 heat/common/exception.py:155
-msgid "You are not authorized to complete this action."
-msgstr ""
-
-#: heat/common/exception.py:159
-#, python-format
-msgid "Data supplied was not valid: %(reason)s"
-msgstr ""
-
-#: heat/common/exception.py:163
-#, python-format
-msgid "Redirecting to %(uri)s for authorization."
-msgstr ""
-
-#: heat/common/exception.py:167
-msgid "There was an error configuring the client."
-msgstr ""
-
-#: heat/common/exception.py:171
-#, python-format
-msgid ""
-"The request returned a 302 Multiple Choices. This generally means that "
-"you have not included a version indicator in a request URI.\n"
-"\n"
-"The body of response returned:\n"
-"%(body)s"
-msgstr ""
-
-#: heat/common/exception.py:177
-#, python-format
-msgid ""
-"The request returned a 413 Request Entity Too Large. This generally means"
-" that rate limiting or a quota threshold was breached.\n"
-"\n"
-"The response body:\n"
-"%(body)s"
-msgstr ""
-
-#: heat/common/exception.py:188
-msgid ""
-"The request returned a 503 Service Unavailable. This generally occurs on "
-"service overload or other transient outage."
-msgstr ""
-
-#: heat/common/exception.py:199
-msgid "The URI was too long."
-msgstr ""
-
-#: heat/common/exception.py:203
-#, python-format
-msgid ""
-"The request returned 500 Internal Server Error\n"
-"\n"
-"The response body:\n"
-"%(body)s"
-msgstr ""
-
-#: heat/common/exception.py:208
-#, python-format
-msgid "Maximum redirects (%(redirects)s) was exceeded."
-msgstr ""
-
-#: heat/common/exception.py:212
-msgid "Received invalid HTTP redirect."
-msgstr ""
-
-#: heat/common/exception.py:216
-msgid "Response from Keystone does not contain a Heat endpoint."
-msgstr ""
-
-#: heat/common/exception.py:220
-#, python-format
-msgid ""
-"Multiple 'image' service matches for region %(region)s. This generally "
-"means that a region is required and you have not supplied one."
-msgstr ""
-
-#: heat/common/exception.py:226
-#, python-format
-msgid "The Parameter (%(key)s) was not provided."
-msgstr ""
-
-#: heat/common/exception.py:230
-#, python-format
-msgid "The Parameter (%(key)s) was not defined in template."
-msgstr ""
-
-#: heat/common/exception.py:234
-#, python-format
-msgid "The Referenced Attribute (%(resource)s %(key)s) is incorrect."
-msgstr ""
-
-#: heat/common/exception.py:239
-#, python-format
-msgid "The specified reference (%(resource)s %(key)s) is incorrect."
-msgstr ""
-
-#: heat/common/exception.py:244
-#, python-format
-msgid "The Key (%(key_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:248
-#, python-format
-msgid "The Flavor ID (%(flavor_id)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:252
-#, python-format
-msgid "The Image (%(image_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:256
-#, python-format
-msgid "Multiple images were found with name (%(image_name)s)."
-msgstr ""
-
-#: heat/common/exception.py:260
-#, python-format
-msgid "Searching Tenant %(target)s from Tenant %(actual)s forbidden."
-msgstr ""
-
-#: heat/common/exception.py:265
-#, python-format
-msgid "The Stack (%(stack_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:269
-#, python-format
-msgid "The Stack (%(stack_name)s) already exists."
-msgstr ""
-
-#: heat/common/exception.py:273
-#, python-format
-msgid "%(message)s"
-msgstr ""
-
-#: heat/common/exception.py:277
-#, python-format
-msgid ""
-"The Resource (%(resource_name)s) could not be found in Stack "
-"%(stack_name)s."
-msgstr ""
-
-#: heat/common/exception.py:282
-#, python-format
-msgid "The Resource Type (%(type_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:286
-#, python-format
-msgid "The Resource (%(resource_name)s) is not available."
-msgstr ""
-
-#: heat/common/exception.py:290
-#, python-format
-msgid "The Resource (%(resource_id)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:294
-#, python-format
-msgid "The Watch Rule (%(watch_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:298
-#, python-format
-msgid "%(exc_type)s: %(message)s"
-msgstr ""
-
-#: heat/common/exception.py:312
-#, python-format
-msgid "%(feature)s is not supported."
-msgstr ""
-
-#: heat/common/exception.py:316
-#, python-format
-msgid "Cannot define the following properties at the same time: %s."
-msgstr ""
-
-#: heat/common/exception.py:334
-msgid "Template exceeds maximum allowed size."
-msgstr ""
-
-#: heat/common/exception.py:338
-msgid ""
-"Egress rules are only allowed when Neutron is used and the 'VpcId' "
-"property is set."
-msgstr ""
-
-#: heat/common/plugin_loader.py:94
-#, python-format
-msgid "Failed to import module %s"
-msgstr ""
-
-#: heat/common/urlfetch.py:37
-#, python-format
-msgid "Fetching data from %s"
-msgstr ""
-
-#: heat/common/wsgi.py:55
-msgid ""
-"Address to bind the server. Useful when selecting a particular network "
-"interface."
-msgstr ""
-
-#: heat/common/wsgi.py:58
-msgid "The port on which the server will listen."
-msgstr ""
-
-#: heat/common/wsgi.py:65
-msgid "Number of backlog requests to configure the socket with"
-msgstr ""
-
-#: heat/common/wsgi.py:68
-msgid "Location of the SSL Certificate File to use for SSL mode"
-msgstr ""
-
-#: heat/common/wsgi.py:71
-msgid "Location of the SSL Key File to use for enabling SSL mode"
-msgstr ""
-
-#: heat/common/wsgi.py:78
-msgid "Number of workers for Heat service"
-msgstr ""
-
-#: heat/common/wsgi.py:129 heat/openstack/common/sslutils.py:61
-msgid ""
-"When running server in SSL mode, you must specify both a cert_file and "
-"key_file option value in your configuration file"
-msgstr ""
-
-#: heat/common/wsgi.py:147
-#, python-format
-msgid "Could not bind to %(bind_addr)s after trying for 30 seconds"
-msgstr ""
-
-#: heat/common/wsgi.py:179
-msgid "SIGTERM received"
-msgstr ""
-
-#: heat/common/wsgi.py:188
-msgid "SIGHUP received"
-msgstr ""
-
-#: heat/common/wsgi.py:203 heat/openstack/common/service.py:288
-#, python-format
-msgid "Starting %d workers"
-msgstr ""
-
-#: heat/common/wsgi.py:214
-#, python-format
-msgid "Removing dead child %s"
-msgstr ""
-
-#: heat/common/wsgi.py:221
-msgid "Caught keyboard interrupt. Exiting."
-msgstr ""
-
-#: heat/common/wsgi.py:225
-msgid "Exited"
-msgstr ""
-
-#: heat/common/wsgi.py:243
-#, python-format
-msgid "Child %d exiting normally"
-msgstr ""
-
-#: heat/common/wsgi.py:246
-#, python-format
-msgid "Started child %s"
-msgstr ""
-
-#: heat/common/wsgi.py:268
-msgid "Starting single process server"
-msgstr ""
-
-#: heat/common/wsgi.py:596
-#, python-format
-msgid "Exception handling resource: %s"
-msgstr ""
-
-#: heat/common/wsgi.py:597
-msgid ""
-"The server could not comply with the request since\r\n"
-"it is either malformed or otherwise incorrect.\r\n"
-msgstr ""
-
-#: heat/common/wsgi.py:611
-#, python-format
-msgid "Returning %(code)s to user: %(explanation)s"
-msgstr ""
-
-#: heat/common/wsgi.py:616
-#, python-format
-msgid "Unexpected error occurred serving API: %s"
-msgstr ""
-
-#: heat/db/sqlalchemy/api.py:393
-msgid "age should be an integer"
-msgstr ""
-
-#: heat/db/sqlalchemy/api.py:395
-msgid "age should be a positive integer"
-msgstr ""
-
-#: heat/db/sqlalchemy/migration.py:63
-msgid "python-migrate is not installed. Exiting."
-msgstr ""
-
-#: heat/db/sqlalchemy/migration.py:73
-msgid "version should be an integer"
-msgstr ""
-
-#: heat/doc/resources.py:85
-msgid "HOT Syntax"
-msgstr ""
-
-#: heat/doc/resources.py:107
-msgid "YAML Syntax"
-msgstr ""
-
-#: heat/doc/resources.py:129
-msgid "JSON Syntax"
-msgstr ""
-
-#: heat/doc/resources.py:161
-#, python-format
-msgid " from %s"
-msgstr ""
-
-#: heat/doc/resources.py:163
-#, python-format
-msgid " up to %s"
-msgstr ""
-
-#: heat/doc/resources.py:165
-#, python-format
-msgid " from length %s"
-msgstr ""
-
-#: heat/doc/resources.py:167
-#, python-format
-msgid " up to length %s"
-msgstr ""
-
-#: heat/doc/resources.py:174
-msgid "Not implemented."
-msgstr ""
-
-#: heat/doc/resources.py:185
-msgid "Required property."
-msgstr ""
-
-#: heat/doc/resources.py:189
-#, python-format
-msgid "Optional property, defaults to \"%s\"."
-msgstr ""
-
-#: heat/doc/resources.py:192
-msgid "Optional property."
-msgstr ""
-
-#: heat/doc/resources.py:196
-#, python-format
-msgid "Value must match pattern: %s"
-msgstr ""
-
-#: heat/doc/resources.py:203
-#, python-format
-msgid "Allowed values: %s"
-msgstr ""
-
-#: heat/doc/resources.py:209
-msgid "Map properties:"
-msgstr ""
-
-#: heat/doc/resources.py:215
-msgid "List contains maps with the properties:"
-msgstr ""
-
-#: heat/doc/resources.py:230
-msgid "Properties"
-msgstr ""
-
-#: heat/doc/resources.py:241
-msgid "Attributes"
-msgstr ""
-
-#: heat/engine/dependencies.py:25
-#, python-format
-msgid "Circular Dependency Found: %(cycle)s"
-msgstr ""
-
-#: heat/engine/environment.py:181
-#, python-format
-msgid "Changing %(path)s from %(was)s to %(now)s"
-msgstr ""
-
-#: heat/engine/environment.py:183
-#, python-format
-msgid "Registering %(path)s -> %(value)s"
-msgstr ""
-
-#: heat/engine/parser.py:74
-#, python-format
-msgid ""
-"Invalid stack name %s must contain only alphanumeric or \"_-.\" "
-"characters, must start with alpha"
-msgstr ""
-
-#: heat/engine/resource.py:55 heat/engine/resource.py:58
-#, python-format
-msgid "The Resource %s requires replacement."
-msgstr ""
-
-#: heat/engine/resource.py:132
-msgid "Resource name may not contain \"/\""
-msgstr ""
-
-#: heat/engine/scheduler.py:65
-#, python-format
-msgid "%s Timed out"
-msgstr ""
-
-#: heat/engine/service.py:311
-msgid "No Template provided."
-msgstr ""
-
-#: heat/engine/stack_resource.py:122
-#, python-format
-msgid "Cannot update %s, stack not created"
-msgstr ""
-
-#: heat/engine/stack_resource.py:151
-#, python-format
-msgid "Cannot suspend %s, stack not created"
-msgstr ""
-
-#: heat/engine/stack_resource.py:173
-#, python-format
-msgid "Cannot resume %s, stack not created"
-msgstr ""
-
-#: heat/engine/resources/__init__.py:39
-#, python-format
-msgid "Failed to load resources from %s"
-msgstr ""
-
-#: heat/engine/resources/instance.py:314
-#, python-format
-msgid "Build of server %(server)s failed: %(message)s (%(code)s)"
-msgstr ""
-
-#: heat/engine/resources/instance.py:321
-#, python-format
-msgid "Nova reported unexpected instance[%(name)s] status[%(status)s]"
-msgstr ""
-
-#: heat/engine/resources/instance.py:418
-#, python-format
-msgid "Cannot suspend %s, resource_id not set"
-msgstr ""
-
-#: heat/engine/resources/instance.py:424 heat/engine/resources/instance.py:478
-#, python-format
-msgid "Failed to find instance %s"
-msgstr ""
-
-#: heat/engine/resources/instance.py:455
-#, python-format
-msgid " nova reported unexpected instance[%(instance)s] status[%(status)s]"
-msgstr ""
-
-#: heat/engine/resources/instance.py:472
-#, python-format
-msgid "Cannot resume %s, resource_id not set"
-msgstr ""
-
-#: heat/engine/resources/nova_utils.py:221
-#, python-format
-msgid "Resizing to '%(flavor)s' failed, status '%(status)s'"
-msgstr ""
-
-#: heat/engine/resources/vpc.py:72
-#, python-format
-msgid "Multiple routers found with name %s"
-msgstr ""
-
-#: heat/engine/resources/neutron/loadbalancer.py:251
-msgid "The ID of the load balancing pool"
-msgstr ""
-
-#: heat/engine/resources/neutron/loadbalancer.py:254
-msgid "Port number on which the servers are running on the members"
-msgstr ""
-
-#: heat/engine/resources/neutron/loadbalancer.py:258
-msgid "The list of Nova server IDs load balanced"
-msgstr ""
-
-#: heat/engine/resources/rackspace/cloud_loadbalancer.py:33
-#, python-format
-msgid "There was an error building the loadbalancer:%(lb_name)s."
-msgstr ""
-
-#: heat/openstack/common/eventlet_backdoor.py:142
-#, python-format
-msgid "Eventlet backdoor listening on %(port)s for process %(pid)d"
-msgstr ""
-
-#: heat/openstack/common/exception.py:102
-msgid "Uncaught exception"
-msgstr ""
-
-#: heat/openstack/common/excutils.py:62
-#, python-format
-msgid "Original exception being dropped: %s"
-msgstr ""
-
-#: heat/openstack/common/excutils.py:90
-#, python-format
-msgid "Unexpected exception occurred %d time(s)... retrying."
-msgstr ""
-
-#: heat/openstack/common/fileutils.py:64
-#, python-format
-msgid "Reloading cached file %s"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:100
-#, python-format
-msgid "Could not release the acquired lock `%s`"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:166
-#, python-format
-msgid "Got semaphore \"%(lock)s\""
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:175
-#, python-format
-msgid "Attempting to grab file lock \"%(lock)s\""
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:185
-#, python-format
-msgid "Created lock path: %s"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:203
-#, python-format
-msgid "Got file lock \"%(lock)s\" at %(path)s"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:207
-#, python-format
-msgid "Released file lock \"%(lock)s\" at %(path)s"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:244
-#, python-format
-msgid "Got semaphore / lock \"%(function)s\""
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:248
-#, python-format
-msgid "Semaphore / lock released \"%(function)s\""
-msgstr ""
-
-#: heat/openstack/common/log.py:244
-#, python-format
-msgid "Deprecated: %s"
-msgstr ""
-
-#: heat/openstack/common/log.py:336
-#, python-format
-msgid "Error loading logging config %(log_config)s: %(err_msg)s"
-msgstr ""
-
-#: heat/openstack/common/log.py:386
-#, python-format
-msgid "syslog facility must be one of: %s"
-msgstr ""
-
-#: heat/openstack/common/log.py:556
-#, python-format
-msgid "Fatal call to deprecated config: %(msg)s"
-msgstr ""
-
-#: heat/openstack/common/loopingcall.py:84
-#, python-format
-msgid "task run outlasted interval by %s sec"
-msgstr ""
-
-#: heat/openstack/common/loopingcall.py:91
-msgid "in fixed duration looping call"
-msgstr ""
-
-#: heat/openstack/common/loopingcall.py:131
-#, python-format
-msgid "Dynamic looping call sleeping for %.02f seconds"
-msgstr ""
-
-#: heat/openstack/common/loopingcall.py:138
-msgid "in dynamic looping call"
-msgstr ""
-
-#: heat/openstack/common/policy.py:75
-msgid "JSON file containing policy"
-msgstr ""
-
-#: heat/openstack/common/policy.py:78
-msgid "Rule enforced when requested rule is not found"
-msgstr ""
-
-#: heat/openstack/common/policy.py:92
-#, python-format
-msgid "Policy doesn't allow %s to be performed."
-msgstr ""
-
-#: heat/openstack/common/policy.py:177
-#, python-format
-msgid "Rules must be an instance of dict or Rules, got %s instead"
-msgstr ""
-
-#: heat/openstack/common/policy.py:207
-msgid "Rules successfully reloaded"
-msgstr ""
-
-#: heat/openstack/common/policy.py:253
-#, python-format
-msgid "Rule %s will be now enforced"
-msgstr ""
-
-#: heat/openstack/common/policy.py:268
-#, python-format
-msgid "Rule [%s] doesn't exist"
-msgstr ""
-
-#: heat/openstack/common/policy.py:478
-#, python-format
-msgid "Failed to understand rule %s"
-msgstr ""
-
-#: heat/openstack/common/policy.py:488
-#, python-format
-msgid "No handler for matches of kind %s"
-msgstr ""
-
-#: heat/openstack/common/policy.py:759
-#, python-format
-msgid "Failed to understand rule %r"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:127
-#, python-format
-msgid "Got unknown keyword args to utils.execute: %r"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:142
-#, python-format
-msgid "Running cmd (subprocess): %s"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:167
-#: heat/openstack/common/processutils.py:240
-#, python-format
-msgid "Result was %s"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:179
-#, python-format
-msgid "%r failed. Retrying."
-msgstr ""
-
-#: heat/openstack/common/processutils.py:219
-#, python-format
-msgid "Running cmd (SSH): %s"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:221
-msgid "Environment not supported over SSH"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:225
-msgid "process_input not supported over SSH"
-msgstr ""
-
-#: heat/openstack/common/service.py:118 heat/openstack/common/service.py:335
-msgid "Full set of CONF:"
-msgstr ""
-
-#: heat/openstack/common/service.py:127 heat/openstack/common/service.py:217
-#, python-format
-msgid "Caught %s, exiting"
-msgstr ""
-
-#: heat/openstack/common/service.py:139
-msgid "Exception during rpc cleanup."
-msgstr ""
-
-#: heat/openstack/common/service.py:188
-msgid "Parent process has died unexpectedly, exiting"
-msgstr ""
-
-#: heat/openstack/common/service.py:223
-msgid "Unhandled exception"
-msgstr ""
-
-#: heat/openstack/common/service.py:256
-msgid "Forking too fast, sleeping"
-msgstr ""
-
-#: heat/openstack/common/service.py:278
-#, python-format
-msgid "Started child %d"
-msgstr ""
-
-#: heat/openstack/common/service.py:305
-#, python-format
-msgid "Child %(pid)d killed by signal %(sig)d"
-msgstr ""
-
-#: heat/openstack/common/service.py:309
-#, python-format
-msgid "Child %(pid)s exited with status %(code)d"
-msgstr ""
-
-#: heat/openstack/common/service.py:313
-#, python-format
-msgid "pid %d not in child list"
-msgstr ""
-
-#: heat/openstack/common/service.py:345
-#, python-format
-msgid "Caught %s, stopping children"
-msgstr ""
-
-#: heat/openstack/common/service.py:363
-#, python-format
-msgid "Waiting on %d children to exit"
-msgstr ""
-
-#: heat/openstack/common/sslutils.py:52
-#, python-format
-msgid "Unable to find cert_file : %s"
-msgstr ""
-
-#: heat/openstack/common/sslutils.py:55
-#, python-format
-msgid "Unable to find ca_file : %s"
-msgstr ""
-
-#: heat/openstack/common/sslutils.py:58
-#, python-format
-msgid "Unable to find key_file : %s"
-msgstr ""
-
-#: heat/openstack/common/sslutils.py:100
-#, python-format
-msgid "Invalid SSL version : %s"
-msgstr ""
-
-#: heat/openstack/common/db/exception.py:44
-msgid "Invalid Parameter: Unicode is not supported by the current database."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/session.py:552
-msgid "DB exception wrapped."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/session.py:614
-#, python-format
-msgid "Got mysql server has gone away: %s"
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/session.py:694
-#, python-format
-msgid "SQL connection failed. %s attempts left."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:50
-msgid "Sort key supplied was not valid."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:89
-msgid "Id not in sort_keys; is sort_keys unique?"
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:111
-msgid "Unknown sort direction, must be 'desc' or 'asc'"
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:182
-#, python-format
-msgid ""
-"Please specify column %s in col_name_col_instance param. It is required "
-"because column has unsupported type by sqlite)."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:188
-#, python-format
-msgid ""
-"col_name_col_instance param has wrong type of column instance for column "
-"%s It should be instance of sqlalchemy.Column."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:268
-#, python-format
-msgid "Deleting duplicated row with id: %(id)s from table: %(table)s"
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:290
-msgid "Unsupported id columns type"
-msgstr ""
-
-#: heat/openstack/common/notifier/api.py:129
-#, python-format
-msgid "%s not in valid priorities"
-msgstr ""
-
-#: heat/openstack/common/notifier/api.py:145
-#, python-format
-msgid ""
-"Problem '%(e)s' attempting to send to notification system. "
-"Payload=%(payload)s"
-msgstr ""
-
-#: heat/openstack/common/notifier/api.py:164
-#, python-format
-msgid "Failed to load notifier %s. These notifications will not be sent."
-msgstr ""
-
-#: heat/openstack/common/notifier/list_notifier.py:112
-#, python-format
-msgid "Problem '%(e)s' attempting to send to notification driver %(driver)s."
-msgstr ""
-
-#: heat/openstack/common/notifier/rabbit_notifier.py:27
-msgid "The rabbit_notifier is now deprecated. Please use rpc_notifier instead."
-msgstr ""
-
-#: heat/openstack/common/notifier/rpc_notifier.py:45
-#: heat/openstack/common/notifier/rpc_notifier2.py:51
-#, python-format
-msgid "Could not send notification to %(topic)s. Payload=%(message)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/__init__.py:107
-#, python-format
-msgid ""
-"A RPC is being made while holding a lock. The locks currently held are "
-"%(locks)s. This is probably a bug. Please report it. Include the "
-"following: [%(stack)s]."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:75
-msgid "Pool creating new connection"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:202
-#, python-format
-msgid "No calling threads waiting for msg_id : %(msg_id)s, message : %(data)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:205
-#, python-format
-msgid "_call_waiters: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:212
-#, python-format
-msgid ""
-"Number of call waiters is greater than warning threshhold: %d. There "
-"could be a MulticallProxyWaiter leak."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:290
-#, python-format
-msgid "unpacked context: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:336
-#, python-format
-msgid "UNIQUE_ID is %s."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:408
-#, python-format
-msgid "received %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:416
-#, python-format
-msgid "no method for message: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:417
-#, python-format
-msgid "No method for message: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:445
-#: heat/openstack/common/rpc/impl_zmq.py:280
-#, python-format
-msgid "Expected exception during message handling (%s)"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:453
-#: heat/openstack/common/rpc/impl_zmq.py:286
-msgid "Exception during message handling"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:527
-#, python-format
-msgid "Making synchronous call on %s ..."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:530
-#, python-format
-msgid "MSG_ID is %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:556
-#, python-format
-msgid "Making asynchronous cast on %s..."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:565
-msgid "Making asynchronous fanout cast..."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:593
-#, python-format
-msgid "Sending %(event_type)s on %(topic)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:77
-msgid "An unknown RPC related exception occurred."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:107
-#, python-format
-msgid ""
-"Remote error: %(exc_type)s %(value)s\n"
-"%(traceback)s."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:124
-#, python-format
-msgid ""
-"Timeout while waiting on RPC response - topic: \"%(topic)s\", RPC method:"
-" \"%(method)s\" info: \"%(info)s\""
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:141
-#: heat/openstack/common/rpc/common.py:142
-#: heat/openstack/common/rpc/common.py:143
-msgid "<unknown>"
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:147
-#, python-format
-msgid "Found duplicate message(%(msg_id)s). Skipping it."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:151
-msgid "Invalid reuse of an RPC connection."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:155
-#, python-format
-msgid "Specified RPC version, %(version)s, not supported by this endpoint."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:160
-#, python-format
-msgid ""
-"Specified RPC envelope version, %(version)s, not supported by this "
-"endpoint."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:165
-#, python-format
-msgid "Specified RPC version cap, %(version_cap)s, is too low"
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:289
-#, python-format
-msgid "Returning exception %s to caller"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:162
-msgid "Failed to process message ... skipping it."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:165
-msgid "Failed to process message ... will requeue."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:502
-#, python-format
-msgid "Reconnecting to AMQP server on %(hostname)s:%(port)d"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:524
-#, python-format
-msgid "Connected to AMQP server on %(hostname)s:%(port)d"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:561
-#, python-format
-msgid ""
-"Unable to connect to AMQP server on %(hostname)s:%(port)d after "
-"%(max_retries)d tries: %(err_str)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:575
-#, python-format
-msgid ""
-"AMQP server on %(hostname)s:%(port)d is unreachable: %(err_str)s. Trying "
-"again in %(sleep_time)d seconds."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:629
-#: heat/openstack/common/rpc/impl_qpid.py:492
-#, python-format
-msgid "Failed to declare consumer for topic '%(topic)s': %(err_str)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:647
-#: heat/openstack/common/rpc/impl_qpid.py:507
-#, python-format
-msgid "Timed out waiting for RPC response: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:651
-#: heat/openstack/common/rpc/impl_qpid.py:511
-#, python-format
-msgid "Failed to consume message from queue: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:690
-#: heat/openstack/common/rpc/impl_qpid.py:546
-#, python-format
-msgid "Failed to publish message to topic '%(topic)s': %(err_str)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:161
-msgid "Failed to process message... skipping it."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:434
-#, python-format
-msgid "Unable to connect to AMQP server: %(e)s. Sleeping %(delay)s seconds"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:440
-#, python-format
-msgid "Connected to AMQP server on %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:453
-msgid "Re-established AMQP queues"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:519
-msgid "Error processing message. Skipping it."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:96
-msgid "JSON serialization failed."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:101
-#, python-format
-msgid "Deserializing: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:136
-#, python-format
-msgid "Connecting to %(addr)s with %(type)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:137
-#, python-format
-msgid "-> Subscribed to %(subscribe)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:138
-#, python-format
-msgid "-> bind: %(bind)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:146
-msgid "Could not open socket."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:158
-#, python-format
-msgid "Subscribing to %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:200
-msgid "You cannot recv on this socket."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:205
-msgid "You cannot send on this socket."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:267
-#, python-format
-msgid "Running func with context: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:305
-msgid "Sending reply"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:339
-msgid "RPC message did not include method."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:371
-msgid "Registering reactor"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:383
-msgid "In reactor registered"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:387
-msgid "Consuming socket"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:437
-#, python-format
-msgid "Creating proxy for topic: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:443
-msgid "Topic contained dangerous characters."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:475
-msgid "Topic socket file creation failed."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:481
-#, python-format
-msgid "Local per-topic backlog buffer full for topic %(topic)s. Dropping message."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:497
-#, python-format
-msgid "Required IPC directory does not exist at %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:506
-#, python-format
-msgid "Permission denied to IPC directory at %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:509
-msgid "Could not create ZeroMQ receiver daemon. Socket may already be in use."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:543
-#, python-format
-msgid "CONSUMER RECEIVED DATA: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:562
-msgid "ZMQ Envelope version unsupported or unknown."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:590
-msgid "Skipping topic registration. Already registered."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:597
-#, python-format
-msgid "Consumer is a zmq.%s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:649
-msgid "Creating payload"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:662
-msgid "Creating queue socket for reply waiter"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:675
-msgid "Sending cast"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:678
-msgid "Cast sent; Waiting reply"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:681
-#, python-format
-msgid "Received message: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:682
-msgid "Unpacking response"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:691
-msgid "Unsupported or unknown ZMQ envelope returned."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:698
-msgid "RPC Message Invalid."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:721
-#, python-format
-msgid "%(msg)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:724
-#, python-format
-msgid "Sending message(s) to: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:728
-msgid "No matchmaker results. Not casting."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:731
-msgid "No match from matchmaker."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:813
-#, python-format
-msgid "rpc_zmq_matchmaker = %(orig)s is deprecated; use %(new)s instead"
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker.py:47
-msgid "Match not found by MatchMaker."
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker.py:81
-msgid "Matchmaker does not implement registration or heartbeat."
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker.py:217
-#, python-format
-msgid "Matchmaker unregistered: %(key)s, %(host)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker.py:229
-msgid "Register before starting heartbeat."
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker_ring.py:77
-#: heat/openstack/common/rpc/matchmaker_ring.py:95
-#, python-format
-msgid "No key defining hosts for topic '%s', see ringfile"
-msgstr ""
-
-#: heat/openstack/common/rpc/service.py:49
-#, python-format
-msgid "Creating Consumer connection for Service %s"
-msgstr ""
-
-#: heat/tests/test_exception.py:24
-#, python-format
-msgid "Testing message %(text)s"
-msgstr ""
-
+++ /dev/null
-# Translations template for heat.
-# Copyright (C) 2013 ORGANIZATION
-# This file is distributed under the same license as the heat project.
-# FIRST AUTHOR <EMAIL@ADDRESS>, 2013.
-#
-#, fuzzy
-msgid ""
-msgstr ""
-"Project-Id-Version: heat 2013.2.a430.gd264018\n"
-"Report-Msgid-Bugs-To: EMAIL@ADDRESS\n"
-"POT-Creation-Date: 2013-08-27 11:15+0800\n"
-"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
-"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
-"Language-Team: LANGUAGE <LL@li.org>\n"
-"MIME-Version: 1.0\n"
-"Content-Type: text/plain; charset=utf-8\n"
-"Content-Transfer-Encoding: 8bit\n"
-"Generated-By: Babel 0.9.6\n"
-
-#: heat/api/aws/ec2token.py:38
-msgid "Authentication Endpoint URI"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:260
-#, python-format
-msgid "Failed to fetch template: %s"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:293
-msgid "DisableRollback and OnFailure may not be used together"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:313
-#, python-format
-msgid "Unexpected action %(action)s"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:331 heat/api/cfn/v1/stacks.py:413
-msgid "Invalid Template URL"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:335 heat/api/cfn/v1/stacks.py:416
-msgid "TemplateBody or TemplateUrl were not given."
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:341 heat/api/cfn/v1/stacks.py:422
-msgid "The Template must be a JSON or YAML document."
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:383
-msgid "stack not not found"
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:49
-#, python-format
-msgid "Processing request: %(method)s %(path)s Accept: %(accept)s"
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:64
-#, python-format
-msgid "Matched versioned URI. Version: %(major_version)d.%(minor_version)d"
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:72
-#, python-format
-msgid ""
-"Unknown version in versioned URI: %(major_version)d.%(minor_version)d. "
-"Returning version choices."
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:88
-#, python-format
-msgid "Matched versioned media type. Version: %(major_version)d.%(minor_version)d"
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:94
-#, python-format
-msgid ""
-"Unknown version in accept header: "
-"%(major_version)d.%(minor_version)d...returning version choices."
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:102
-#, python-format
-msgid "Unknown accept header: %s...returning version choices."
-msgstr ""
-
-#: heat/api/openstack/v1/actions.py:43
-msgid "No action specified"
-msgstr ""
-
-#: heat/api/openstack/v1/actions.py:46
-msgid "Multiple actions specified"
-msgstr ""
-
-#: heat/api/openstack/v1/actions.py:50
-#, python-format
-msgid "Invalid action %s specified"
-msgstr ""
-
-#: heat/api/openstack/v1/actions.py:57
-#, python-format
-msgid "Unexpected action %s"
-msgstr ""
-
-#: heat/api/openstack/v1/events.py:100
-#, python-format
-msgid "No events found for resource %s"
-msgstr ""
-
-#: heat/api/openstack/v1/events.py:118
-#, python-format
-msgid "No event %s found"
-msgstr ""
-
-#: heat/api/openstack/v1/stacks.py:75
-#, python-format
-msgid "%s not in valid format"
-msgstr ""
-
-#: heat/api/openstack/v1/stacks.py:83
-msgid "No stack name specified"
-msgstr ""
-
-#: heat/api/openstack/v1/stacks.py:101
-#, python-format
-msgid "Could not retrieve template: %s"
-msgstr ""
-
-#: heat/api/openstack/v1/stacks.py:104
-msgid "No template specified"
-msgstr ""
-
-#: heat/api/openstack/v1/util.py:56
-msgid "Invalid Stack address"
-msgstr ""
-
-#: heat/common/auth_password.py:116
-msgid "Request missing required header X-Auth-Url"
-msgstr ""
-
-#: heat/common/auth_password.py:121
-#, python-format
-msgid "Header X-Auth-Url \"%s\" not an allowed endpoint"
-msgstr ""
-
-#: heat/common/config.py:37
-msgid "The flavor to use"
-msgstr ""
-
-#: heat/common/config.py:39
-msgid "The API paste config file to use"
-msgstr ""
-
-#: heat/common/config.py:102
-msgid "Allow orchestration of multiple clouds"
-msgstr ""
-
-#: heat/common/config.py:105
-msgid ""
-"Allowed keystone endpoints for auth_uri when multi_cloud is enabled. At "
-"least one endpoint needs to be specified."
-msgstr ""
-
-#: heat/common/exception.py:102
-msgid "An unknown exception occurred."
-msgstr ""
-
-#: heat/common/exception.py:113 heat/openstack/common/rpc/common.py:89
-msgid "Exception in string format operation"
-msgstr ""
-
-#: heat/common/exception.py:125
-#, python-format
-msgid "Missing required credential: %(required)s"
-msgstr ""
-
-#: heat/common/exception.py:129
-#, python-format
-msgid ""
-"Incorrect auth strategy, expected \"%(expected)s\" but received "
-"\"%(received)s\""
-msgstr ""
-
-#: heat/common/exception.py:134
-#, python-format
-msgid "Connect error/bad request to Auth service at URL %(url)s."
-msgstr ""
-
-#: heat/common/exception.py:138
-#, python-format
-msgid "Auth service at URL %(url)s not found."
-msgstr ""
-
-#: heat/common/exception.py:142
-msgid "Authorization failed."
-msgstr ""
-
-#: heat/common/exception.py:146
-msgid "You are not authenticated."
-msgstr ""
-
-#: heat/common/exception.py:150 heat/common/exception.py:155
-msgid "You are not authorized to complete this action."
-msgstr ""
-
-#: heat/common/exception.py:159
-#, python-format
-msgid "Data supplied was not valid: %(reason)s"
-msgstr ""
-
-#: heat/common/exception.py:163
-#, python-format
-msgid "Redirecting to %(uri)s for authorization."
-msgstr ""
-
-#: heat/common/exception.py:167
-msgid "There was an error configuring the client."
-msgstr ""
-
-#: heat/common/exception.py:171
-#, python-format
-msgid ""
-"The request returned a 302 Multiple Choices. This generally means that "
-"you have not included a version indicator in a request URI.\n"
-"\n"
-"The body of response returned:\n"
-"%(body)s"
-msgstr ""
-
-#: heat/common/exception.py:177
-#, python-format
-msgid ""
-"The request returned a 413 Request Entity Too Large. This generally means"
-" that rate limiting or a quota threshold was breached.\n"
-"\n"
-"The response body:\n"
-"%(body)s"
-msgstr ""
-
-#: heat/common/exception.py:188
-msgid ""
-"The request returned a 503 ServiceUnavilable. This generally occurs on "
-"service overload or other transient outage."
-msgstr ""
-
-#: heat/common/exception.py:199
-msgid "The URI was too long."
-msgstr ""
-
-#: heat/common/exception.py:203
-#, python-format
-msgid ""
-"The request returned 500 Internal Server Error\n"
-"\n"
-"The response body:\n"
-"%(body)s"
-msgstr ""
-
-#: heat/common/exception.py:208
-#, python-format
-msgid "Maximum redirects (%(redirects)s) was exceeded."
-msgstr ""
-
-#: heat/common/exception.py:212
-msgid "Received invalid HTTP redirect."
-msgstr ""
-
-#: heat/common/exception.py:216
-msgid "Response from Keystone does not contain a Heat endpoint."
-msgstr ""
-
-#: heat/common/exception.py:220
-#, python-format
-msgid ""
-"Multiple 'image' service matches for region %(region)s. This generally "
-"means that a region is required and you have not supplied one."
-msgstr ""
-
-#: heat/common/exception.py:226
-#, python-format
-msgid "The Parameter (%(key)s) was not provided."
-msgstr ""
-
-#: heat/common/exception.py:230
-#, python-format
-msgid "The Parameter (%(key)s) was not defined in template."
-msgstr ""
-
-#: heat/common/exception.py:234
-#, python-format
-msgid "The Referenced Attribute (%(resource)s %(key)s) is incorrect."
-msgstr ""
-
-#: heat/common/exception.py:239
-#, python-format
-msgid "The specified reference (%(resource)s %(key)s) is incorrect."
-msgstr ""
-
-#: heat/common/exception.py:244
-#, python-format
-msgid "The Key (%(key_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:248
-#, python-format
-msgid "The Flavor ID (%(flavor_id)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:252
-#, python-format
-msgid "The Image (%(image_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:256
-#, python-format
-msgid "Multiple images were found with name (%(image_name)s)."
-msgstr ""
-
-#: heat/common/exception.py:260
-#, python-format
-msgid "Searching Tenant %(target)s from Tenant %(actual)s forbidden."
-msgstr ""
-
-#: heat/common/exception.py:265
-#, python-format
-msgid "The Stack (%(stack_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:269
-#, python-format
-msgid "The Stack (%(stack_name)s) already exists."
-msgstr ""
-
-#: heat/common/exception.py:273
-#, python-format
-msgid "%(message)s"
-msgstr ""
-
-#: heat/common/exception.py:277
-#, python-format
-msgid ""
-"The Resource (%(resource_name)s) could not be found in Stack "
-"%(stack_name)s."
-msgstr ""
-
-#: heat/common/exception.py:282
-#, python-format
-msgid "The Resource Type (%(type_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:286
-#, python-format
-msgid "The Resource (%(resource_name)s) is not available."
-msgstr ""
-
-#: heat/common/exception.py:290
-#, python-format
-msgid "The Resource (%(resource_id)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:294
-#, python-format
-msgid "The Watch Rule (%(watch_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:298
-#, python-format
-msgid "%(exc_type)s: %(message)s"
-msgstr ""
-
-#: heat/common/exception.py:312
-#, python-format
-msgid "%(feature)s is not supported."
-msgstr ""
-
-#: heat/common/exception.py:316
-#, python-format
-msgid "Cannot define the following properties at the same time: %s."
-msgstr ""
-
-#: heat/common/exception.py:334
-msgid "Template exceeds maximum allowed size."
-msgstr ""
-
-#: heat/common/exception.py:338
-msgid ""
-"Egress rules are only allowed when Neutron is used and the 'VpcId' "
-"property is set."
-msgstr ""
-
-#: heat/common/plugin_loader.py:94
-#, python-format
-msgid "Failed to import module %s"
-msgstr ""
-
-#: heat/common/urlfetch.py:37
-#, python-format
-msgid "Fetching data from %s"
-msgstr ""
-
-#: heat/common/wsgi.py:55
-msgid ""
-"Address to bind the server. Useful when selecting a particular network "
-"interface."
-msgstr ""
-
-#: heat/common/wsgi.py:58
-msgid "The port on which the server will listen."
-msgstr ""
-
-#: heat/common/wsgi.py:65
-msgid "Number of backlog requests to configure the socket with"
-msgstr ""
-
-#: heat/common/wsgi.py:68
-msgid "Location of the SSL Certificate File to use for SSL mode"
-msgstr ""
-
-#: heat/common/wsgi.py:71
-msgid "Location of the SSL Key File to use for enabling SSL mode"
-msgstr ""
-
-#: heat/common/wsgi.py:78
-msgid "Number of workers for Heat service"
-msgstr ""
-
-#: heat/common/wsgi.py:129 heat/openstack/common/sslutils.py:61
-msgid ""
-"When running server in SSL mode, you must specify both a cert_file and "
-"key_file option value in your configuration file"
-msgstr ""
-
-#: heat/common/wsgi.py:147
-#, python-format
-msgid "Could not bind to %(bind_addr)safter trying for 30 seconds"
-msgstr ""
-
-#: heat/common/wsgi.py:179
-msgid "SIGTERM received"
-msgstr ""
-
-#: heat/common/wsgi.py:188
-msgid "SIGHUP received"
-msgstr ""
-
-#: heat/common/wsgi.py:203 heat/openstack/common/service.py:288
-#, python-format
-msgid "Starting %d workers"
-msgstr ""
-
-#: heat/common/wsgi.py:214
-#, python-format
-msgid "Removing dead child %s"
-msgstr ""
-
-#: heat/common/wsgi.py:221
-msgid "Caught keyboard interrupt. Exiting."
-msgstr ""
-
-#: heat/common/wsgi.py:225
-msgid "Exited"
-msgstr ""
-
-#: heat/common/wsgi.py:243
-#, python-format
-msgid "Child %d exiting normally"
-msgstr ""
-
-#: heat/common/wsgi.py:246
-#, python-format
-msgid "Started child %s"
-msgstr ""
-
-#: heat/common/wsgi.py:268
-msgid "Starting single process server"
-msgstr ""
-
-#: heat/common/wsgi.py:596
-#, python-format
-msgid "Exception handling resource: %s"
-msgstr ""
-
-#: heat/common/wsgi.py:597
-msgid ""
-"The server could not comply with the request since\r\n"
-"it is either malformed or otherwise incorrect.\r\n"
-msgstr ""
-
-#: heat/common/wsgi.py:611
-#, python-format
-msgid "Returning %(code)s to user: %(explanation)s"
-msgstr ""
-
-#: heat/common/wsgi.py:616
-#, python-format
-msgid "Unexpected error occurred serving API: %s"
-msgstr ""
-
-#: heat/db/sqlalchemy/api.py:393
-msgid "age should be an integer"
-msgstr ""
-
-#: heat/db/sqlalchemy/api.py:395
-msgid "age should be a positive integer"
-msgstr ""
-
-#: heat/db/sqlalchemy/migration.py:63
-msgid "python-migrate is not installed. Exiting."
-msgstr ""
-
-#: heat/db/sqlalchemy/migration.py:73
-msgid "version should be an integer"
-msgstr ""
-
-#: heat/doc/resources.py:85
-msgid "HOT Syntax"
-msgstr ""
-
-#: heat/doc/resources.py:107
-msgid "YAML Syntax"
-msgstr ""
-
-#: heat/doc/resources.py:129
-msgid "JSON Syntax"
-msgstr ""
-
-#: heat/doc/resources.py:161
-#, python-format
-msgid " from %s"
-msgstr ""
-
-#: heat/doc/resources.py:163
-#, python-format
-msgid " up to %s"
-msgstr ""
-
-#: heat/doc/resources.py:165
-#, python-format
-msgid " from length %s"
-msgstr ""
-
-#: heat/doc/resources.py:167
-#, python-format
-msgid " up to length %s"
-msgstr ""
-
-#: heat/doc/resources.py:174
-msgid "Not implemented."
-msgstr ""
-
-#: heat/doc/resources.py:185
-msgid "Required property."
-msgstr ""
-
-#: heat/doc/resources.py:189
-#, python-format
-msgid "Optional property, defaults to \"%s\"."
-msgstr ""
-
-#: heat/doc/resources.py:192
-msgid "Optional property."
-msgstr ""
-
-#: heat/doc/resources.py:196
-#, python-format
-msgid "Value must match pattern: %s"
-msgstr ""
-
-#: heat/doc/resources.py:203
-#, python-format
-msgid "Allowed values: %s"
-msgstr ""
-
-#: heat/doc/resources.py:209
-msgid "Map properties:"
-msgstr ""
-
-#: heat/doc/resources.py:215
-msgid "List contains maps with the properties:"
-msgstr ""
-
-#: heat/doc/resources.py:230
-msgid "Properties"
-msgstr ""
-
-#: heat/doc/resources.py:241
-msgid "Attributes"
-msgstr ""
-
-#: heat/engine/dependencies.py:25
-#, python-format
-msgid "Circular Dependency Found: %(cycle)s"
-msgstr ""
-
-#: heat/engine/environment.py:181
-#, python-format
-msgid "Changing %(path)s from %(was)s to %(now)s"
-msgstr ""
-
-#: heat/engine/environment.py:183
-#, python-format
-msgid "Registering %(path)s -> %(value)s"
-msgstr ""
-
-#: heat/engine/parser.py:74
-#, python-format
-msgid ""
-"Invalid stack name %s must contain only alphanumeric or \"_-.\" "
-"characters, must start with alpha"
-msgstr ""
-
-#: heat/engine/resource.py:55 heat/engine/resource.py:58
-#, python-format
-msgid "The Resource %s requires replacement."
-msgstr ""
-
-#: heat/engine/resource.py:132
-msgid "Resource name may not contain \"/\""
-msgstr ""
-
-#: heat/engine/scheduler.py:65
-#, python-format
-msgid "%s Timed out"
-msgstr ""
-
-#: heat/engine/service.py:311
-msgid "No Template provided."
-msgstr ""
-
-#: heat/engine/stack_resource.py:122
-#, python-format
-msgid "Cannot update %s, stack not created"
-msgstr ""
-
-#: heat/engine/stack_resource.py:151
-#, python-format
-msgid "Cannot suspend %s, stack not created"
-msgstr ""
-
-#: heat/engine/stack_resource.py:173
-#, python-format
-msgid "Cannot resume %s, stack not created"
-msgstr ""
-
-#: heat/engine/resources/__init__.py:39
-#, python-format
-msgid "Failed to load resources from %s"
-msgstr ""
-
-#: heat/engine/resources/instance.py:314
-#, python-format
-msgid "Build of server %(server)s failed: %(message)s (%(code)s)"
-msgstr ""
-
-#: heat/engine/resources/instance.py:321
-#, python-format
-msgid "Nova reported unexpected instance[%(name)s] status[%(status)s]"
-msgstr ""
-
-#: heat/engine/resources/instance.py:418
-#, python-format
-msgid "Cannot suspend %s, resource_id not set"
-msgstr ""
-
-#: heat/engine/resources/instance.py:424 heat/engine/resources/instance.py:478
-#, python-format
-msgid "Failed to find instance %s"
-msgstr ""
-
-#: heat/engine/resources/instance.py:455
-#, python-format
-msgid " nova reported unexpected instance[%(instance)s] status[%(status)s]"
-msgstr ""
-
-#: heat/engine/resources/instance.py:472
-#, python-format
-msgid "Cannot resume %s, resource_id not set"
-msgstr ""
-
-#: heat/engine/resources/nova_utils.py:221
-#, python-format
-msgid "Resizing to '%(flavor)s' failed, status '%(status)s'"
-msgstr ""
-
-#: heat/engine/resources/vpc.py:72
-#, python-format
-msgid "Multiple routers found with name %s"
-msgstr ""
-
-#: heat/engine/resources/neutron/loadbalancer.py:251
-msgid "The ID of the load balancing pool"
-msgstr ""
-
-#: heat/engine/resources/neutron/loadbalancer.py:254
-msgid "Port number on which the servers are running on the members"
-msgstr ""
-
-#: heat/engine/resources/neutron/loadbalancer.py:258
-msgid "The list of Nova server IDs load balanced"
-msgstr ""
-
-#: heat/engine/resources/rackspace/cloud_loadbalancer.py:33
-#, python-format
-msgid "There was an error building the loadbalancer:%(lb_name)s."
-msgstr ""
-
-#: heat/openstack/common/eventlet_backdoor.py:142
-#, python-format
-msgid "Eventlet backdoor listening on %(port)s for process %(pid)d"
-msgstr ""
-
-#: heat/openstack/common/exception.py:102
-msgid "Uncaught exception"
-msgstr ""
-
-#: heat/openstack/common/excutils.py:62
-#, python-format
-msgid "Original exception being dropped: %s"
-msgstr ""
-
-#: heat/openstack/common/excutils.py:90
-#, python-format
-msgid "Unexpected exception occurred %d time(s)... retrying."
-msgstr ""
-
-#: heat/openstack/common/fileutils.py:64
-#, python-format
-msgid "Reloading cached file %s"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:100
-#, python-format
-msgid "Could not release the acquired lock `%s`"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:166
-#, python-format
-msgid "Got semaphore \"%(lock)s\""
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:175
-#, python-format
-msgid "Attempting to grab file lock \"%(lock)s\""
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:185
-#, python-format
-msgid "Created lock path: %s"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:203
-#, python-format
-msgid "Got file lock \"%(lock)s\" at %(path)s"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:207
-#, python-format
-msgid "Released file lock \"%(lock)s\" at %(path)s"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:244
-#, python-format
-msgid "Got semaphore / lock \"%(function)s\""
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:248
-#, python-format
-msgid "Semaphore / lock released \"%(function)s\""
-msgstr ""
-
-#: heat/openstack/common/log.py:244
-#, python-format
-msgid "Deprecated: %s"
-msgstr ""
-
-#: heat/openstack/common/log.py:336
-#, python-format
-msgid "Error loading logging config %(log_config)s: %(err_msg)s"
-msgstr ""
-
-#: heat/openstack/common/log.py:386
-#, python-format
-msgid "syslog facility must be one of: %s"
-msgstr ""
-
-#: heat/openstack/common/log.py:556
-#, python-format
-msgid "Fatal call to deprecated config: %(msg)s"
-msgstr ""
-
-#: heat/openstack/common/loopingcall.py:84
-#, python-format
-msgid "task run outlasted interval by %s sec"
-msgstr ""
-
-#: heat/openstack/common/loopingcall.py:91
-msgid "in fixed duration looping call"
-msgstr ""
-
-#: heat/openstack/common/loopingcall.py:131
-#, python-format
-msgid "Dynamic looping call sleeping for %.02f seconds"
-msgstr ""
-
-#: heat/openstack/common/loopingcall.py:138
-msgid "in dynamic looping call"
-msgstr ""
-
-#: heat/openstack/common/policy.py:75
-msgid "JSON file containing policy"
-msgstr ""
-
-#: heat/openstack/common/policy.py:78
-msgid "Rule enforced when requested rule is not found"
-msgstr ""
-
-#: heat/openstack/common/policy.py:92
-#, python-format
-msgid "Policy doesn't allow %s to be performed."
-msgstr ""
-
-#: heat/openstack/common/policy.py:177
-#, python-format
-msgid "Rules must be an instance of dict or Rules, got %s instead"
-msgstr ""
-
-#: heat/openstack/common/policy.py:207
-msgid "Rules successfully reloaded"
-msgstr ""
-
-#: heat/openstack/common/policy.py:253
-#, python-format
-msgid "Rule %s will be now enforced"
-msgstr ""
-
-#: heat/openstack/common/policy.py:268
-#, python-format
-msgid "Rule [%s] doesn't exist"
-msgstr ""
-
-#: heat/openstack/common/policy.py:478
-#, python-format
-msgid "Failed to understand rule %s"
-msgstr ""
-
-#: heat/openstack/common/policy.py:488
-#, python-format
-msgid "No handler for matches of kind %s"
-msgstr ""
-
-#: heat/openstack/common/policy.py:759
-#, python-format
-msgid "Failed to understand rule %r"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:127
-#, python-format
-msgid "Got unknown keyword args to utils.execute: %r"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:142
-#, python-format
-msgid "Running cmd (subprocess): %s"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:167
-#: heat/openstack/common/processutils.py:240
-#, python-format
-msgid "Result was %s"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:179
-#, python-format
-msgid "%r failed. Retrying."
-msgstr ""
-
-#: heat/openstack/common/processutils.py:219
-#, python-format
-msgid "Running cmd (SSH): %s"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:221
-msgid "Environment not supported over SSH"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:225
-msgid "process_input not supported over SSH"
-msgstr ""
-
-#: heat/openstack/common/service.py:118 heat/openstack/common/service.py:335
-msgid "Full set of CONF:"
-msgstr ""
-
-#: heat/openstack/common/service.py:127 heat/openstack/common/service.py:217
-#, python-format
-msgid "Caught %s, exiting"
-msgstr ""
-
-#: heat/openstack/common/service.py:139
-msgid "Exception during rpc cleanup."
-msgstr ""
-
-#: heat/openstack/common/service.py:188
-msgid "Parent process has died unexpectedly, exiting"
-msgstr ""
-
-#: heat/openstack/common/service.py:223
-msgid "Unhandled exception"
-msgstr ""
-
-#: heat/openstack/common/service.py:256
-msgid "Forking too fast, sleeping"
-msgstr ""
-
-#: heat/openstack/common/service.py:278
-#, python-format
-msgid "Started child %d"
-msgstr ""
-
-#: heat/openstack/common/service.py:305
-#, python-format
-msgid "Child %(pid)d killed by signal %(sig)d"
-msgstr ""
-
-#: heat/openstack/common/service.py:309
-#, python-format
-msgid "Child %(pid)s exited with status %(code)d"
-msgstr ""
-
-#: heat/openstack/common/service.py:313
-#, python-format
-msgid "pid %d not in child list"
-msgstr ""
-
-#: heat/openstack/common/service.py:345
-#, python-format
-msgid "Caught %s, stopping children"
-msgstr ""
-
-#: heat/openstack/common/service.py:363
-#, python-format
-msgid "Waiting on %d children to exit"
-msgstr ""
-
-#: heat/openstack/common/sslutils.py:52
-#, python-format
-msgid "Unable to find cert_file : %s"
-msgstr ""
-
-#: heat/openstack/common/sslutils.py:55
-#, python-format
-msgid "Unable to find ca_file : %s"
-msgstr ""
-
-#: heat/openstack/common/sslutils.py:58
-#, python-format
-msgid "Unable to find key_file : %s"
-msgstr ""
-
-#: heat/openstack/common/sslutils.py:100
-#, python-format
-msgid "Invalid SSL version : %s"
-msgstr ""
-
-#: heat/openstack/common/db/exception.py:44
-msgid "Invalid Parameter: Unicode is not supported by the current database."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/session.py:552
-msgid "DB exception wrapped."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/session.py:614
-#, python-format
-msgid "Got mysql server has gone away: %s"
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/session.py:694
-#, python-format
-msgid "SQL connection failed. %s attempts left."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:50
-msgid "Sort key supplied was not valid."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:89
-msgid "Id not in sort_keys; is sort_keys unique?"
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:111
-msgid "Unknown sort direction, must be 'desc' or 'asc'"
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:182
-#, python-format
-msgid ""
-"Please specify column %s in col_name_col_instance param. It is required "
-"because column has unsupported type by sqlite)."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:188
-#, python-format
-msgid ""
-"col_name_col_instance param has wrong type of column instance for column "
-"%s It should be instance of sqlalchemy.Column."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:268
-#, python-format
-msgid "Deleting duplicated row with id: %(id)s from table: %(table)s"
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:290
-msgid "Unsupported id columns type"
-msgstr ""
-
-#: heat/openstack/common/notifier/api.py:129
-#, python-format
-msgid "%s not in valid priorities"
-msgstr ""
-
-#: heat/openstack/common/notifier/api.py:145
-#, python-format
-msgid ""
-"Problem '%(e)s' attempting to send to notification system. "
-"Payload=%(payload)s"
-msgstr ""
-
-#: heat/openstack/common/notifier/api.py:164
-#, python-format
-msgid "Failed to load notifier %s. These notifications will not be sent."
-msgstr ""
-
-#: heat/openstack/common/notifier/list_notifier.py:112
-#, python-format
-msgid "Problem '%(e)s' attempting to send to notification driver %(driver)s."
-msgstr ""
-
-#: heat/openstack/common/notifier/rabbit_notifier.py:27
-msgid "The rabbit_notifier is now deprecated. Please use rpc_notifier instead."
-msgstr ""
-
-#: heat/openstack/common/notifier/rpc_notifier.py:45
-#: heat/openstack/common/notifier/rpc_notifier2.py:51
-#, python-format
-msgid "Could not send notification to %(topic)s. Payload=%(message)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/__init__.py:107
-#, python-format
-msgid ""
-"A RPC is being made while holding a lock. The locks currently held are "
-"%(locks)s. This is probably a bug. Please report it. Include the "
-"following: [%(stack)s]."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:75
-msgid "Pool creating new connection"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:202
-#, python-format
-msgid "No calling threads waiting for msg_id : %(msg_id)s, message : %(data)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:205
-#, python-format
-msgid "_call_waiters: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:212
-#, python-format
-msgid ""
-"Number of call waiters is greater than warning threshhold: %d. There "
-"could be a MulticallProxyWaiter leak."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:290
-#, python-format
-msgid "unpacked context: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:336
-#, python-format
-msgid "UNIQUE_ID is %s."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:408
-#, python-format
-msgid "received %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:416
-#, python-format
-msgid "no method for message: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:417
-#, python-format
-msgid "No method for message: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:445
-#: heat/openstack/common/rpc/impl_zmq.py:280
-#, python-format
-msgid "Expected exception during message handling (%s)"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:453
-#: heat/openstack/common/rpc/impl_zmq.py:286
-msgid "Exception during message handling"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:527
-#, python-format
-msgid "Making synchronous call on %s ..."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:530
-#, python-format
-msgid "MSG_ID is %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:556
-#, python-format
-msgid "Making asynchronous cast on %s..."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:565
-msgid "Making asynchronous fanout cast..."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:593
-#, python-format
-msgid "Sending %(event_type)s on %(topic)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:77
-msgid "An unknown RPC related exception occurred."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:107
-#, python-format
-msgid ""
-"Remote error: %(exc_type)s %(value)s\n"
-"%(traceback)s."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:124
-#, python-format
-msgid ""
-"Timeout while waiting on RPC response - topic: \"%(topic)s\", RPC method:"
-" \"%(method)s\" info: \"%(info)s\""
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:141
-#: heat/openstack/common/rpc/common.py:142
-#: heat/openstack/common/rpc/common.py:143
-msgid "<unknown>"
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:147
-#, python-format
-msgid "Found duplicate message(%(msg_id)s). Skipping it."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:151
-msgid "Invalid reuse of an RPC connection."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:155
-#, python-format
-msgid "Specified RPC version, %(version)s, not supported by this endpoint."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:160
-#, python-format
-msgid ""
-"Specified RPC envelope version, %(version)s, not supported by this "
-"endpoint."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:165
-#, python-format
-msgid "Specified RPC version cap, %(version_cap)s, is too low"
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:289
-#, python-format
-msgid "Returning exception %s to caller"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:162
-msgid "Failed to process message ... skipping it."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:165
-msgid "Failed to process message ... will requeue."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:502
-#, python-format
-msgid "Reconnecting to AMQP server on %(hostname)s:%(port)d"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:524
-#, python-format
-msgid "Connected to AMQP server on %(hostname)s:%(port)d"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:561
-#, python-format
-msgid ""
-"Unable to connect to AMQP server on %(hostname)s:%(port)d after "
-"%(max_retries)d tries: %(err_str)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:575
-#, python-format
-msgid ""
-"AMQP server on %(hostname)s:%(port)d is unreachable: %(err_str)s. Trying "
-"again in %(sleep_time)d seconds."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:629
-#: heat/openstack/common/rpc/impl_qpid.py:492
-#, python-format
-msgid "Failed to declare consumer for topic '%(topic)s': %(err_str)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:647
-#: heat/openstack/common/rpc/impl_qpid.py:507
-#, python-format
-msgid "Timed out waiting for RPC response: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:651
-#: heat/openstack/common/rpc/impl_qpid.py:511
-#, python-format
-msgid "Failed to consume message from queue: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:690
-#: heat/openstack/common/rpc/impl_qpid.py:546
-#, python-format
-msgid "Failed to publish message to topic '%(topic)s': %(err_str)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:161
-msgid "Failed to process message... skipping it."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:434
-#, python-format
-msgid "Unable to connect to AMQP server: %(e)s. Sleeping %(delay)s seconds"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:440
-#, python-format
-msgid "Connected to AMQP server on %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:453
-msgid "Re-established AMQP queues"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:519
-msgid "Error processing message. Skipping it."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:96
-msgid "JSON serialization failed."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:101
-#, python-format
-msgid "Deserializing: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:136
-#, python-format
-msgid "Connecting to %(addr)s with %(type)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:137
-#, python-format
-msgid "-> Subscribed to %(subscribe)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:138
-#, python-format
-msgid "-> bind: %(bind)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:146
-msgid "Could not open socket."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:158
-#, python-format
-msgid "Subscribing to %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:200
-msgid "You cannot recv on this socket."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:205
-msgid "You cannot send on this socket."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:267
-#, python-format
-msgid "Running func with context: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:305
-msgid "Sending reply"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:339
-msgid "RPC message did not include method."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:371
-msgid "Registering reactor"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:383
-msgid "In reactor registered"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:387
-msgid "Consuming socket"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:437
-#, python-format
-msgid "Creating proxy for topic: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:443
-msgid "Topic contained dangerous characters."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:475
-msgid "Topic socket file creation failed."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:481
-#, python-format
-msgid "Local per-topic backlog buffer full for topic %(topic)s. Dropping message."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:497
-#, python-format
-msgid "Required IPC directory does not exist at %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:506
-#, python-format
-msgid "Permission denied to IPC directory at %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:509
-msgid "Could not create ZeroMQ receiver daemon. Socket may already be in use."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:543
-#, python-format
-msgid "CONSUMER RECEIVED DATA: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:562
-msgid "ZMQ Envelope version unsupported or unknown."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:590
-msgid "Skipping topic registration. Already registered."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:597
-#, python-format
-msgid "Consumer is a zmq.%s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:649
-msgid "Creating payload"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:662
-msgid "Creating queue socket for reply waiter"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:675
-msgid "Sending cast"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:678
-msgid "Cast sent; Waiting reply"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:681
-#, python-format
-msgid "Received message: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:682
-msgid "Unpacking response"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:691
-msgid "Unsupported or unknown ZMQ envelope returned."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:698
-msgid "RPC Message Invalid."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:721
-#, python-format
-msgid "%(msg)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:724
-#, python-format
-msgid "Sending message(s) to: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:728
-msgid "No matchmaker results. Not casting."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:731
-msgid "No match from matchmaker."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:813
-#, python-format
-msgid "rpc_zmq_matchmaker = %(orig)s is deprecated; use %(new)s instead"
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker.py:47
-msgid "Match not found by MatchMaker."
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker.py:81
-msgid "Matchmaker does not implement registration or heartbeat."
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker.py:217
-#, python-format
-msgid "Matchmaker unregistered: %(key)s, %(host)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker.py:229
-msgid "Register before starting heartbeat."
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker_ring.py:77
-#: heat/openstack/common/rpc/matchmaker_ring.py:95
-#, python-format
-msgid "No key defining hosts for topic '%s', see ringfile"
-msgstr ""
-
-#: heat/openstack/common/rpc/service.py:49
-#, python-format
-msgid "Creating Consumer connection for Service %s"
-msgstr ""
-
-#: heat/tests/test_exception.py:24
-#, python-format
-msgid "Testing message %(text)s"
-msgstr ""
-
+++ /dev/null
-# Translations template for heat.
-# Copyright (C) 2013 ORGANIZATION
-# This file is distributed under the same license as the heat project.
-# FIRST AUTHOR <EMAIL@ADDRESS>, 2013.
-#
-#, fuzzy
-msgid ""
-msgstr ""
-"Project-Id-Version: heat 2013.2.a430.gd264018\n"
-"Report-Msgid-Bugs-To: EMAIL@ADDRESS\n"
-"POT-Creation-Date: 2013-08-27 11:15+0800\n"
-"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
-"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
-"Language-Team: LANGUAGE <LL@li.org>\n"
-"MIME-Version: 1.0\n"
-"Content-Type: text/plain; charset=utf-8\n"
-"Content-Transfer-Encoding: 8bit\n"
-"Generated-By: Babel 0.9.6\n"
-
-#: heat/api/aws/ec2token.py:38
-msgid "Authentication Endpoint URI"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:260
-#, python-format
-msgid "Failed to fetch template: %s"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:293
-msgid "DisableRollback and OnFailure may not be used together"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:313
-#, python-format
-msgid "Unexpected action %(action)s"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:331 heat/api/cfn/v1/stacks.py:413
-msgid "Invalid Template URL"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:335 heat/api/cfn/v1/stacks.py:416
-msgid "TemplateBody or TemplateUrl were not given."
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:341 heat/api/cfn/v1/stacks.py:422
-msgid "The Template must be a JSON or YAML document."
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:383
-msgid "stack not not found"
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:49
-#, python-format
-msgid "Processing request: %(method)s %(path)s Accept: %(accept)s"
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:64
-#, python-format
-msgid "Matched versioned URI. Version: %(major_version)d.%(minor_version)d"
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:72
-#, python-format
-msgid ""
-"Unknown version in versioned URI: %(major_version)d.%(minor_version)d. "
-"Returning version choices."
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:88
-#, python-format
-msgid "Matched versioned media type. Version: %(major_version)d.%(minor_version)d"
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:94
-#, python-format
-msgid ""
-"Unknown version in accept header: "
-"%(major_version)d.%(minor_version)d...returning version choices."
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:102
-#, python-format
-msgid "Unknown accept header: %s...returning version choices."
-msgstr ""
-
-#: heat/api/openstack/v1/actions.py:43
-msgid "No action specified"
-msgstr ""
-
-#: heat/api/openstack/v1/actions.py:46
-msgid "Multiple actions specified"
-msgstr ""
-
-#: heat/api/openstack/v1/actions.py:50
-#, python-format
-msgid "Invalid action %s specified"
-msgstr ""
-
-#: heat/api/openstack/v1/actions.py:57
-#, python-format
-msgid "Unexpected action %s"
-msgstr ""
-
-#: heat/api/openstack/v1/events.py:100
-#, python-format
-msgid "No events found for resource %s"
-msgstr ""
-
-#: heat/api/openstack/v1/events.py:118
-#, python-format
-msgid "No event %s found"
-msgstr ""
-
-#: heat/api/openstack/v1/stacks.py:75
-#, python-format
-msgid "%s not in valid format"
-msgstr ""
-
-#: heat/api/openstack/v1/stacks.py:83
-msgid "No stack name specified"
-msgstr ""
-
-#: heat/api/openstack/v1/stacks.py:101
-#, python-format
-msgid "Could not retrieve template: %s"
-msgstr ""
-
-#: heat/api/openstack/v1/stacks.py:104
-msgid "No template specified"
-msgstr ""
-
-#: heat/api/openstack/v1/util.py:56
-msgid "Invalid Stack address"
-msgstr ""
-
-#: heat/common/auth_password.py:116
-msgid "Request missing required header X-Auth-Url"
-msgstr ""
-
-#: heat/common/auth_password.py:121
-#, python-format
-msgid "Header X-Auth-Url \"%s\" not an allowed endpoint"
-msgstr ""
-
-#: heat/common/config.py:37
-msgid "The flavor to use"
-msgstr ""
-
-#: heat/common/config.py:39
-msgid "The API paste config file to use"
-msgstr ""
-
-#: heat/common/config.py:102
-msgid "Allow orchestration of multiple clouds"
-msgstr ""
-
-#: heat/common/config.py:105
-msgid ""
-"Allowed keystone endpoints for auth_uri when multi_cloud is enabled. At "
-"least one endpoint needs to be specified."
-msgstr ""
-
-#: heat/common/exception.py:102
-msgid "An unknown exception occurred."
-msgstr ""
-
-#: heat/common/exception.py:113 heat/openstack/common/rpc/common.py:89
-msgid "Exception in string format operation"
-msgstr ""
-
-#: heat/common/exception.py:125
-#, python-format
-msgid "Missing required credential: %(required)s"
-msgstr ""
-
-#: heat/common/exception.py:129
-#, python-format
-msgid ""
-"Incorrect auth strategy, expected \"%(expected)s\" but received "
-"\"%(received)s\""
-msgstr ""
-
-#: heat/common/exception.py:134
-#, python-format
-msgid "Connect error/bad request to Auth service at URL %(url)s."
-msgstr ""
-
-#: heat/common/exception.py:138
-#, python-format
-msgid "Auth service at URL %(url)s not found."
-msgstr ""
-
-#: heat/common/exception.py:142
-msgid "Authorization failed."
-msgstr ""
-
-#: heat/common/exception.py:146
-msgid "You are not authenticated."
-msgstr ""
-
-#: heat/common/exception.py:150 heat/common/exception.py:155
-msgid "You are not authorized to complete this action."
-msgstr ""
-
-#: heat/common/exception.py:159
-#, python-format
-msgid "Data supplied was not valid: %(reason)s"
-msgstr ""
-
-#: heat/common/exception.py:163
-#, python-format
-msgid "Redirecting to %(uri)s for authorization."
-msgstr ""
-
-#: heat/common/exception.py:167
-msgid "There was an error configuring the client."
-msgstr ""
-
-#: heat/common/exception.py:171
-#, python-format
-msgid ""
-"The request returned a 302 Multiple Choices. This generally means that "
-"you have not included a version indicator in a request URI.\n"
-"\n"
-"The body of response returned:\n"
-"%(body)s"
-msgstr ""
-
-#: heat/common/exception.py:177
-#, python-format
-msgid ""
-"The request returned a 413 Request Entity Too Large. This generally means"
-" that rate limiting or a quota threshold was breached.\n"
-"\n"
-"The response body:\n"
-"%(body)s"
-msgstr ""
-
-#: heat/common/exception.py:188
-msgid ""
-"The request returned a 503 ServiceUnavailable. This generally occurs on "
-"service overload or other transient outage."
-msgstr ""
-
-#: heat/common/exception.py:199
-msgid "The URI was too long."
-msgstr ""
-
-#: heat/common/exception.py:203
-#, python-format
-msgid ""
-"The request returned 500 Internal Server Error\n"
-"\n"
-"The response body:\n"
-"%(body)s"
-msgstr ""
-
-#: heat/common/exception.py:208
-#, python-format
-msgid "Maximum redirects (%(redirects)s) was exceeded."
-msgstr ""
-
-#: heat/common/exception.py:212
-msgid "Received invalid HTTP redirect."
-msgstr ""
-
-#: heat/common/exception.py:216
-msgid "Response from Keystone does not contain a Heat endpoint."
-msgstr ""
-
-#: heat/common/exception.py:220
-#, python-format
-msgid ""
-"Multiple 'image' service matches for region %(region)s. This generally "
-"means that a region is required and you have not supplied one."
-msgstr ""
-
-#: heat/common/exception.py:226
-#, python-format
-msgid "The Parameter (%(key)s) was not provided."
-msgstr ""
-
-#: heat/common/exception.py:230
-#, python-format
-msgid "The Parameter (%(key)s) was not defined in template."
-msgstr ""
-
-#: heat/common/exception.py:234
-#, python-format
-msgid "The Referenced Attribute (%(resource)s %(key)s) is incorrect."
-msgstr ""
-
-#: heat/common/exception.py:239
-#, python-format
-msgid "The specified reference (%(resource)s %(key)s) is incorrect."
-msgstr ""
-
-#: heat/common/exception.py:244
-#, python-format
-msgid "The Key (%(key_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:248
-#, python-format
-msgid "The Flavor ID (%(flavor_id)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:252
-#, python-format
-msgid "The Image (%(image_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:256
-#, python-format
-msgid "Multiple images were found with name (%(image_name)s)."
-msgstr ""
-
-#: heat/common/exception.py:260
-#, python-format
-msgid "Searching Tenant %(target)s from Tenant %(actual)s forbidden."
-msgstr ""
-
-#: heat/common/exception.py:265
-#, python-format
-msgid "The Stack (%(stack_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:269
-#, python-format
-msgid "The Stack (%(stack_name)s) already exists."
-msgstr ""
-
-#: heat/common/exception.py:273
-#, python-format
-msgid "%(message)s"
-msgstr ""
-
-#: heat/common/exception.py:277
-#, python-format
-msgid ""
-"The Resource (%(resource_name)s) could not be found in Stack "
-"%(stack_name)s."
-msgstr ""
-
-#: heat/common/exception.py:282
-#, python-format
-msgid "The Resource Type (%(type_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:286
-#, python-format
-msgid "The Resource (%(resource_name)s) is not available."
-msgstr ""
-
-#: heat/common/exception.py:290
-#, python-format
-msgid "The Resource (%(resource_id)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:294
-#, python-format
-msgid "The Watch Rule (%(watch_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:298
-#, python-format
-msgid "%(exc_type)s: %(message)s"
-msgstr ""
-
-#: heat/common/exception.py:312
-#, python-format
-msgid "%(feature)s is not supported."
-msgstr ""
-
-#: heat/common/exception.py:316
-#, python-format
-msgid "Cannot define the following properties at the same time: %s."
-msgstr ""
-
-#: heat/common/exception.py:334
-msgid "Template exceeds maximum allowed size."
-msgstr ""
-
-#: heat/common/exception.py:338
-msgid ""
-"Egress rules are only allowed when Neutron is used and the 'VpcId' "
-"property is set."
-msgstr ""
-
-#: heat/common/plugin_loader.py:94
-#, python-format
-msgid "Failed to import module %s"
-msgstr ""
-
-#: heat/common/urlfetch.py:37
-#, python-format
-msgid "Fetching data from %s"
-msgstr ""
-
-#: heat/common/wsgi.py:55
-msgid ""
-"Address to bind the server. Useful when selecting a particular network "
-"interface."
-msgstr ""
-
-#: heat/common/wsgi.py:58
-msgid "The port on which the server will listen."
-msgstr ""
-
-#: heat/common/wsgi.py:65
-msgid "Number of backlog requests to configure the socket with"
-msgstr ""
-
-#: heat/common/wsgi.py:68
-msgid "Location of the SSL Certificate File to use for SSL mode"
-msgstr ""
-
-#: heat/common/wsgi.py:71
-msgid "Location of the SSL Key File to use for enabling SSL mode"
-msgstr ""
-
-#: heat/common/wsgi.py:78
-msgid "Number of workers for Heat service"
-msgstr ""
-
-#: heat/common/wsgi.py:129 heat/openstack/common/sslutils.py:61
-msgid ""
-"When running server in SSL mode, you must specify both a cert_file and "
-"key_file option value in your configuration file"
-msgstr ""
-
-#: heat/common/wsgi.py:147
-#, python-format
-msgid "Could not bind to %(bind_addr)s after trying for 30 seconds"
-msgstr ""
-
-#: heat/common/wsgi.py:179
-msgid "SIGTERM received"
-msgstr ""
-
-#: heat/common/wsgi.py:188
-msgid "SIGHUP received"
-msgstr ""
-
-#: heat/common/wsgi.py:203 heat/openstack/common/service.py:288
-#, python-format
-msgid "Starting %d workers"
-msgstr ""
-
-#: heat/common/wsgi.py:214
-#, python-format
-msgid "Removing dead child %s"
-msgstr ""
-
-#: heat/common/wsgi.py:221
-msgid "Caught keyboard interrupt. Exiting."
-msgstr ""
-
-#: heat/common/wsgi.py:225
-msgid "Exited"
-msgstr ""
-
-#: heat/common/wsgi.py:243
-#, python-format
-msgid "Child %d exiting normally"
-msgstr ""
-
-#: heat/common/wsgi.py:246
-#, python-format
-msgid "Started child %s"
-msgstr ""
-
-#: heat/common/wsgi.py:268
-msgid "Starting single process server"
-msgstr ""
-
-#: heat/common/wsgi.py:596
-#, python-format
-msgid "Exception handling resource: %s"
-msgstr ""
-
-#: heat/common/wsgi.py:597
-msgid ""
-"The server could not comply with the request since\r\n"
-"it is either malformed or otherwise incorrect.\r\n"
-msgstr ""
-
-#: heat/common/wsgi.py:611
-#, python-format
-msgid "Returning %(code)s to user: %(explanation)s"
-msgstr ""
-
-#: heat/common/wsgi.py:616
-#, python-format
-msgid "Unexpected error occurred serving API: %s"
-msgstr ""
-
-#: heat/db/sqlalchemy/api.py:393
-msgid "age should be an integer"
-msgstr ""
-
-#: heat/db/sqlalchemy/api.py:395
-msgid "age should be a positive integer"
-msgstr ""
-
-#: heat/db/sqlalchemy/migration.py:63
-msgid "python-migrate is not installed. Exiting."
-msgstr ""
-
-#: heat/db/sqlalchemy/migration.py:73
-msgid "version should be an integer"
-msgstr ""
-
-#: heat/doc/resources.py:85
-msgid "HOT Syntax"
-msgstr ""
-
-#: heat/doc/resources.py:107
-msgid "YAML Syntax"
-msgstr ""
-
-#: heat/doc/resources.py:129
-msgid "JSON Syntax"
-msgstr ""
-
-#: heat/doc/resources.py:161
-#, python-format
-msgid " from %s"
-msgstr ""
-
-#: heat/doc/resources.py:163
-#, python-format
-msgid " up to %s"
-msgstr ""
-
-#: heat/doc/resources.py:165
-#, python-format
-msgid " from length %s"
-msgstr ""
-
-#: heat/doc/resources.py:167
-#, python-format
-msgid " up to length %s"
-msgstr ""
-
-#: heat/doc/resources.py:174
-msgid "Not implemented."
-msgstr ""
-
-#: heat/doc/resources.py:185
-msgid "Required property."
-msgstr ""
-
-#: heat/doc/resources.py:189
-#, python-format
-msgid "Optional property, defaults to \"%s\"."
-msgstr ""
-
-#: heat/doc/resources.py:192
-msgid "Optional property."
-msgstr ""
-
-#: heat/doc/resources.py:196
-#, python-format
-msgid "Value must match pattern: %s"
-msgstr ""
-
-#: heat/doc/resources.py:203
-#, python-format
-msgid "Allowed values: %s"
-msgstr ""
-
-#: heat/doc/resources.py:209
-msgid "Map properties:"
-msgstr ""
-
-#: heat/doc/resources.py:215
-msgid "List contains maps with the properties:"
-msgstr ""
-
-#: heat/doc/resources.py:230
-msgid "Properties"
-msgstr ""
-
-#: heat/doc/resources.py:241
-msgid "Attributes"
-msgstr ""
-
-#: heat/engine/dependencies.py:25
-#, python-format
-msgid "Circular Dependency Found: %(cycle)s"
-msgstr ""
-
-#: heat/engine/environment.py:181
-#, python-format
-msgid "Changing %(path)s from %(was)s to %(now)s"
-msgstr ""
-
-#: heat/engine/environment.py:183
-#, python-format
-msgid "Registering %(path)s -> %(value)s"
-msgstr ""
-
-#: heat/engine/parser.py:74
-#, python-format
-msgid ""
-"Invalid stack name %s, must contain only alphanumeric or \"_-.\" "
-"characters and must start with a letter"
-msgstr ""
-
-#: heat/engine/resource.py:55 heat/engine/resource.py:58
-#, python-format
-msgid "The Resource %s requires replacement."
-msgstr ""
-
-#: heat/engine/resource.py:132
-msgid "Resource name may not contain \"/\""
-msgstr ""
-
-#: heat/engine/scheduler.py:65
-#, python-format
-msgid "%s Timed out"
-msgstr ""
-
-#: heat/engine/service.py:311
-msgid "No Template provided."
-msgstr ""
-
-#: heat/engine/stack_resource.py:122
-#, python-format
-msgid "Cannot update %s, stack not created"
-msgstr ""
-
-#: heat/engine/stack_resource.py:151
-#, python-format
-msgid "Cannot suspend %s, stack not created"
-msgstr ""
-
-#: heat/engine/stack_resource.py:173
-#, python-format
-msgid "Cannot resume %s, stack not created"
-msgstr ""
-
-#: heat/engine/resources/__init__.py:39
-#, python-format
-msgid "Failed to load resources from %s"
-msgstr ""
-
-#: heat/engine/resources/instance.py:314
-#, python-format
-msgid "Build of server %(server)s failed: %(message)s (%(code)s)"
-msgstr ""
-
-#: heat/engine/resources/instance.py:321
-#, python-format
-msgid "Nova reported unexpected instance[%(name)s] status[%(status)s]"
-msgstr ""
-
-#: heat/engine/resources/instance.py:418
-#, python-format
-msgid "Cannot suspend %s, resource_id not set"
-msgstr ""
-
-#: heat/engine/resources/instance.py:424 heat/engine/resources/instance.py:478
-#, python-format
-msgid "Failed to find instance %s"
-msgstr ""
-
-#: heat/engine/resources/instance.py:455
-#, python-format
-msgid "Nova reported unexpected instance[%(instance)s] status[%(status)s]"
-msgstr ""
-
-#: heat/engine/resources/instance.py:472
-#, python-format
-msgid "Cannot resume %s, resource_id not set"
-msgstr ""
-
-#: heat/engine/resources/nova_utils.py:221
-#, python-format
-msgid "Resizing to '%(flavor)s' failed, status '%(status)s'"
-msgstr ""
-
-#: heat/engine/resources/vpc.py:72
-#, python-format
-msgid "Multiple routers found with name %s"
-msgstr ""
-
-#: heat/engine/resources/neutron/loadbalancer.py:251
-msgid "The ID of the load balancing pool"
-msgstr ""
-
-#: heat/engine/resources/neutron/loadbalancer.py:254
-msgid "Port on which the member servers are listening"
-msgstr ""
-
-#: heat/engine/resources/neutron/loadbalancer.py:258
-msgid "The list of Nova server IDs load balanced"
-msgstr ""
-
-#: heat/engine/resources/rackspace/cloud_loadbalancer.py:33
-#, python-format
-msgid "There was an error building the loadbalancer: %(lb_name)s."
-msgstr ""
-
-#: heat/openstack/common/eventlet_backdoor.py:142
-#, python-format
-msgid "Eventlet backdoor listening on %(port)s for process %(pid)d"
-msgstr ""
-
-#: heat/openstack/common/exception.py:102
-msgid "Uncaught exception"
-msgstr ""
-
-#: heat/openstack/common/excutils.py:62
-#, python-format
-msgid "Original exception being dropped: %s"
-msgstr ""
-
-#: heat/openstack/common/excutils.py:90
-#, python-format
-msgid "Unexpected exception occurred %d time(s)... retrying."
-msgstr ""
-
-#: heat/openstack/common/fileutils.py:64
-#, python-format
-msgid "Reloading cached file %s"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:100
-#, python-format
-msgid "Could not release the acquired lock `%s`"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:166
-#, python-format
-msgid "Got semaphore \"%(lock)s\""
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:175
-#, python-format
-msgid "Attempting to grab file lock \"%(lock)s\""
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:185
-#, python-format
-msgid "Created lock path: %s"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:203
-#, python-format
-msgid "Got file lock \"%(lock)s\" at %(path)s"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:207
-#, python-format
-msgid "Released file lock \"%(lock)s\" at %(path)s"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:244
-#, python-format
-msgid "Got semaphore / lock \"%(function)s\""
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:248
-#, python-format
-msgid "Semaphore / lock released \"%(function)s\""
-msgstr ""
-
-#: heat/openstack/common/log.py:244
-#, python-format
-msgid "Deprecated: %s"
-msgstr ""
-
-#: heat/openstack/common/log.py:336
-#, python-format
-msgid "Error loading logging config %(log_config)s: %(err_msg)s"
-msgstr ""
-
-#: heat/openstack/common/log.py:386
-#, python-format
-msgid "syslog facility must be one of: %s"
-msgstr ""
-
-#: heat/openstack/common/log.py:556
-#, python-format
-msgid "Fatal call to deprecated config: %(msg)s"
-msgstr ""
-
-#: heat/openstack/common/loopingcall.py:84
-#, python-format
-msgid "task run outlasted interval by %s sec"
-msgstr ""
-
-#: heat/openstack/common/loopingcall.py:91
-msgid "in fixed duration looping call"
-msgstr ""
-
-#: heat/openstack/common/loopingcall.py:131
-#, python-format
-msgid "Dynamic looping call sleeping for %.02f seconds"
-msgstr ""
-
-#: heat/openstack/common/loopingcall.py:138
-msgid "in dynamic looping call"
-msgstr ""
-
-#: heat/openstack/common/policy.py:75
-msgid "JSON file containing policy"
-msgstr ""
-
-#: heat/openstack/common/policy.py:78
-msgid "Rule enforced when requested rule is not found"
-msgstr ""
-
-#: heat/openstack/common/policy.py:92
-#, python-format
-msgid "Policy doesn't allow %s to be performed."
-msgstr ""
-
-#: heat/openstack/common/policy.py:177
-#, python-format
-msgid "Rules must be an instance of dict or Rules, got %s instead"
-msgstr ""
-
-#: heat/openstack/common/policy.py:207
-msgid "Rules successfully reloaded"
-msgstr ""
-
-#: heat/openstack/common/policy.py:253
-#, python-format
-msgid "Rule %s will be now enforced"
-msgstr ""
-
-#: heat/openstack/common/policy.py:268
-#, python-format
-msgid "Rule [%s] doesn't exist"
-msgstr ""
-
-#: heat/openstack/common/policy.py:478
-#, python-format
-msgid "Failed to understand rule %s"
-msgstr ""
-
-#: heat/openstack/common/policy.py:488
-#, python-format
-msgid "No handler for matches of kind %s"
-msgstr ""
-
-#: heat/openstack/common/policy.py:759
-#, python-format
-msgid "Failed to understand rule %r"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:127
-#, python-format
-msgid "Got unknown keyword args to utils.execute: %r"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:142
-#, python-format
-msgid "Running cmd (subprocess): %s"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:167
-#: heat/openstack/common/processutils.py:240
-#, python-format
-msgid "Result was %s"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:179
-#, python-format
-msgid "%r failed. Retrying."
-msgstr ""
-
-#: heat/openstack/common/processutils.py:219
-#, python-format
-msgid "Running cmd (SSH): %s"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:221
-msgid "Environment not supported over SSH"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:225
-msgid "process_input not supported over SSH"
-msgstr ""
-
-#: heat/openstack/common/service.py:118 heat/openstack/common/service.py:335
-msgid "Full set of CONF:"
-msgstr ""
-
-#: heat/openstack/common/service.py:127 heat/openstack/common/service.py:217
-#, python-format
-msgid "Caught %s, exiting"
-msgstr ""
-
-#: heat/openstack/common/service.py:139
-msgid "Exception during rpc cleanup."
-msgstr ""
-
-#: heat/openstack/common/service.py:188
-msgid "Parent process has died unexpectedly, exiting"
-msgstr ""
-
-#: heat/openstack/common/service.py:223
-msgid "Unhandled exception"
-msgstr ""
-
-#: heat/openstack/common/service.py:256
-msgid "Forking too fast, sleeping"
-msgstr ""
-
-#: heat/openstack/common/service.py:278
-#, python-format
-msgid "Started child %d"
-msgstr ""
-
-#: heat/openstack/common/service.py:305
-#, python-format
-msgid "Child %(pid)d killed by signal %(sig)d"
-msgstr ""
-
-#: heat/openstack/common/service.py:309
-#, python-format
-msgid "Child %(pid)s exited with status %(code)d"
-msgstr ""
-
-#: heat/openstack/common/service.py:313
-#, python-format
-msgid "pid %d not in child list"
-msgstr ""
-
-#: heat/openstack/common/service.py:345
-#, python-format
-msgid "Caught %s, stopping children"
-msgstr ""
-
-#: heat/openstack/common/service.py:363
-#, python-format
-msgid "Waiting on %d children to exit"
-msgstr ""
-
-#: heat/openstack/common/sslutils.py:52
-#, python-format
-msgid "Unable to find cert_file: %s"
-msgstr ""
-
-#: heat/openstack/common/sslutils.py:55
-#, python-format
-msgid "Unable to find ca_file: %s"
-msgstr ""
-
-#: heat/openstack/common/sslutils.py:58
-#, python-format
-msgid "Unable to find key_file: %s"
-msgstr ""
-
-#: heat/openstack/common/sslutils.py:100
-#, python-format
-msgid "Invalid SSL version: %s"
-msgstr ""
-
-#: heat/openstack/common/db/exception.py:44
-msgid "Invalid Parameter: Unicode is not supported by the current database."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/session.py:552
-msgid "DB exception wrapped."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/session.py:614
-#, python-format
-msgid "Got mysql server has gone away: %s"
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/session.py:694
-#, python-format
-msgid "SQL connection failed. %s attempts left."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:50
-msgid "Sort key supplied was not valid."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:89
-msgid "Id not in sort_keys; is sort_keys unique?"
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:111
-msgid "Unknown sort direction, must be 'desc' or 'asc'"
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:182
-#, python-format
-msgid ""
-"Please specify column %s in col_name_col_instance param. It is required "
-"because the column has a type unsupported by sqlite."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:188
-#, python-format
-msgid ""
-"col_name_col_instance param has the wrong type of column instance for "
-"column %s. It should be an instance of sqlalchemy.Column."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:268
-#, python-format
-msgid "Deleting duplicated row with id: %(id)s from table: %(table)s"
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:290
-msgid "Unsupported id columns type"
-msgstr ""
-
-#: heat/openstack/common/notifier/api.py:129
-#, python-format
-msgid "%s not in valid priorities"
-msgstr ""
-
-#: heat/openstack/common/notifier/api.py:145
-#, python-format
-msgid ""
-"Problem '%(e)s' attempting to send to notification system. "
-"Payload=%(payload)s"
-msgstr ""
-
-#: heat/openstack/common/notifier/api.py:164
-#, python-format
-msgid "Failed to load notifier %s. These notifications will not be sent."
-msgstr ""
-
-#: heat/openstack/common/notifier/list_notifier.py:112
-#, python-format
-msgid "Problem '%(e)s' attempting to send to notification driver %(driver)s."
-msgstr ""
-
-#: heat/openstack/common/notifier/rabbit_notifier.py:27
-msgid "The rabbit_notifier is now deprecated. Please use rpc_notifier instead."
-msgstr ""
-
-#: heat/openstack/common/notifier/rpc_notifier.py:45
-#: heat/openstack/common/notifier/rpc_notifier2.py:51
-#, python-format
-msgid "Could not send notification to %(topic)s. Payload=%(message)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/__init__.py:107
-#, python-format
-msgid ""
-"An RPC call is being made while holding a lock. The locks currently held are "
-"%(locks)s. This is probably a bug. Please report it. Include the "
-"following: [%(stack)s]."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:75
-msgid "Pool creating new connection"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:202
-#, python-format
-msgid "No calling threads waiting for msg_id: %(msg_id)s, message: %(data)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:205
-#, python-format
-msgid "_call_waiters: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:212
-#, python-format
-msgid ""
-"Number of call waiters is greater than warning threshold: %d. There "
-"could be a MulticallProxyWaiter leak."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:290
-#, python-format
-msgid "unpacked context: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:336
-#, python-format
-msgid "UNIQUE_ID is %s."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:408
-#, python-format
-msgid "received %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:416
-#, python-format
-msgid "no method for message: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:417
-#, python-format
-msgid "No method for message: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:445
-#: heat/openstack/common/rpc/impl_zmq.py:280
-#, python-format
-msgid "Expected exception during message handling (%s)"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:453
-#: heat/openstack/common/rpc/impl_zmq.py:286
-msgid "Exception during message handling"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:527
-#, python-format
-msgid "Making synchronous call on %s ..."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:530
-#, python-format
-msgid "MSG_ID is %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:556
-#, python-format
-msgid "Making asynchronous cast on %s..."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:565
-msgid "Making asynchronous fanout cast..."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:593
-#, python-format
-msgid "Sending %(event_type)s on %(topic)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:77
-msgid "An unknown RPC related exception occurred."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:107
-#, python-format
-msgid ""
-"Remote error: %(exc_type)s %(value)s\n"
-"%(traceback)s."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:124
-#, python-format
-msgid ""
-"Timeout while waiting on RPC response - topic: \"%(topic)s\", RPC method:"
-" \"%(method)s\" info: \"%(info)s\""
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:141
-#: heat/openstack/common/rpc/common.py:142
-#: heat/openstack/common/rpc/common.py:143
-msgid "<unknown>"
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:147
-#, python-format
-msgid "Found duplicate message (%(msg_id)s). Skipping it."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:151
-msgid "Invalid reuse of an RPC connection."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:155
-#, python-format
-msgid "Specified RPC version, %(version)s, not supported by this endpoint."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:160
-#, python-format
-msgid ""
-"Specified RPC envelope version, %(version)s, not supported by this "
-"endpoint."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:165
-#, python-format
-msgid "Specified RPC version cap, %(version_cap)s, is too low"
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:289
-#, python-format
-msgid "Returning exception %s to caller"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:162
-msgid "Failed to process message ... skipping it."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:165
-msgid "Failed to process message ... will requeue."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:502
-#, python-format
-msgid "Reconnecting to AMQP server on %(hostname)s:%(port)d"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:524
-#, python-format
-msgid "Connected to AMQP server on %(hostname)s:%(port)d"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:561
-#, python-format
-msgid ""
-"Unable to connect to AMQP server on %(hostname)s:%(port)d after "
-"%(max_retries)d tries: %(err_str)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:575
-#, python-format
-msgid ""
-"AMQP server on %(hostname)s:%(port)d is unreachable: %(err_str)s. Trying "
-"again in %(sleep_time)d seconds."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:629
-#: heat/openstack/common/rpc/impl_qpid.py:492
-#, python-format
-msgid "Failed to declare consumer for topic '%(topic)s': %(err_str)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:647
-#: heat/openstack/common/rpc/impl_qpid.py:507
-#, python-format
-msgid "Timed out waiting for RPC response: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:651
-#: heat/openstack/common/rpc/impl_qpid.py:511
-#, python-format
-msgid "Failed to consume message from queue: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:690
-#: heat/openstack/common/rpc/impl_qpid.py:546
-#, python-format
-msgid "Failed to publish message to topic '%(topic)s': %(err_str)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:161
-msgid "Failed to process message... skipping it."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:434
-#, python-format
-msgid "Unable to connect to AMQP server: %(e)s. Sleeping %(delay)s seconds"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:440
-#, python-format
-msgid "Connected to AMQP server on %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:453
-msgid "Re-established AMQP queues"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:519
-msgid "Error processing message. Skipping it."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:96
-msgid "JSON serialization failed."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:101
-#, python-format
-msgid "Deserializing: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:136
-#, python-format
-msgid "Connecting to %(addr)s with %(type)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:137
-#, python-format
-msgid "-> Subscribed to %(subscribe)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:138
-#, python-format
-msgid "-> bind: %(bind)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:146
-msgid "Could not open socket."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:158
-#, python-format
-msgid "Subscribing to %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:200
-msgid "You cannot recv on this socket."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:205
-msgid "You cannot send on this socket."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:267
-#, python-format
-msgid "Running func with context: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:305
-msgid "Sending reply"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:339
-msgid "RPC message did not include method."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:371
-msgid "Registering reactor"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:383
-msgid "In reactor registered"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:387
-msgid "Consuming socket"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:437
-#, python-format
-msgid "Creating proxy for topic: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:443
-msgid "Topic contained dangerous characters."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:475
-msgid "Topic socket file creation failed."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:481
-#, python-format
-msgid "Local per-topic backlog buffer full for topic %(topic)s. Dropping message."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:497
-#, python-format
-msgid "Required IPC directory does not exist at %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:506
-#, python-format
-msgid "Permission denied to IPC directory at %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:509
-msgid "Could not create ZeroMQ receiver daemon. Socket may already be in use."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:543
-#, python-format
-msgid "CONSUMER RECEIVED DATA: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:562
-msgid "ZMQ Envelope version unsupported or unknown."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:590
-msgid "Skipping topic registration. Already registered."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:597
-#, python-format
-msgid "Consumer is a zmq.%s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:649
-msgid "Creating payload"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:662
-msgid "Creating queue socket for reply waiter"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:675
-msgid "Sending cast"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:678
-msgid "Cast sent; waiting for reply"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:681
-#, python-format
-msgid "Received message: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:682
-msgid "Unpacking response"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:691
-msgid "Unsupported or unknown ZMQ envelope returned."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:698
-msgid "RPC Message Invalid."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:721
-#, python-format
-msgid "%(msg)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:724
-#, python-format
-msgid "Sending message(s) to: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:728
-msgid "No matchmaker results. Not casting."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:731
-msgid "No match from matchmaker."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:813
-#, python-format
-msgid "rpc_zmq_matchmaker = %(orig)s is deprecated; use %(new)s instead"
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker.py:47
-msgid "Match not found by MatchMaker."
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker.py:81
-msgid "Matchmaker does not implement registration or heartbeat."
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker.py:217
-#, python-format
-msgid "Matchmaker unregistered: %(key)s, %(host)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker.py:229
-msgid "Register before starting heartbeat."
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker_ring.py:77
-#: heat/openstack/common/rpc/matchmaker_ring.py:95
-#, python-format
-msgid "No key defining hosts for topic '%s', see ringfile"
-msgstr ""
-
-#: heat/openstack/common/rpc/service.py:49
-#, python-format
-msgid "Creating Consumer connection for Service %s"
-msgstr ""
-
-#: heat/tests/test_exception.py:24
-#, python-format
-msgid "Testing message %(text)s"
-msgstr ""
-
+++ /dev/null
-# Translations template for heat.
-# Copyright (C) 2013 ORGANIZATION
-# This file is distributed under the same license as the heat project.
-# FIRST AUTHOR <EMAIL@ADDRESS>, 2013.
-#
-#, fuzzy
-msgid ""
-msgstr ""
-"Project-Id-Version: heat 2013.2.a430.gd264018\n"
-"Report-Msgid-Bugs-To: EMAIL@ADDRESS\n"
-"POT-Creation-Date: 2013-08-27 11:15+0800\n"
-"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
-"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
-"Language-Team: LANGUAGE <LL@li.org>\n"
-"MIME-Version: 1.0\n"
-"Content-Type: text/plain; charset=utf-8\n"
-"Content-Transfer-Encoding: 8bit\n"
-"Generated-By: Babel 0.9.6\n"
-
-#: heat/api/aws/ec2token.py:38
-msgid "Authentication Endpoint URI"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:260
-#, python-format
-msgid "Failed to fetch template: %s"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:293
-msgid "DisableRollback and OnFailure may not be used together"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:313
-#, python-format
-msgid "Unexpected action %(action)s"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:331 heat/api/cfn/v1/stacks.py:413
-msgid "Invalid Template URL"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:335 heat/api/cfn/v1/stacks.py:416
-msgid "TemplateBody or TemplateUrl was not given."
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:341 heat/api/cfn/v1/stacks.py:422
-msgid "The Template must be a JSON or YAML document."
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:383
-msgid "stack not found"
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:49
-#, python-format
-msgid "Processing request: %(method)s %(path)s Accept: %(accept)s"
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:64
-#, python-format
-msgid "Matched versioned URI. Version: %(major_version)d.%(minor_version)d"
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:72
-#, python-format
-msgid ""
-"Unknown version in versioned URI: %(major_version)d.%(minor_version)d. "
-"Returning version choices."
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:88
-#, python-format
-msgid "Matched versioned media type. Version: %(major_version)d.%(minor_version)d"
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:94
-#, python-format
-msgid ""
-"Unknown version in accept header: "
-"%(major_version)d.%(minor_version)d...returning version choices."
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:102
-#, python-format
-msgid "Unknown accept header: %s...returning version choices."
-msgstr ""
-
-#: heat/api/openstack/v1/actions.py:43
-msgid "No action specified"
-msgstr ""
-
-#: heat/api/openstack/v1/actions.py:46
-msgid "Multiple actions specified"
-msgstr ""
-
-#: heat/api/openstack/v1/actions.py:50
-#, python-format
-msgid "Invalid action %s specified"
-msgstr ""
-
-#: heat/api/openstack/v1/actions.py:57
-#, python-format
-msgid "Unexpected action %s"
-msgstr ""
-
-#: heat/api/openstack/v1/events.py:100
-#, python-format
-msgid "No events found for resource %s"
-msgstr ""
-
-#: heat/api/openstack/v1/events.py:118
-#, python-format
-msgid "No event %s found"
-msgstr ""
-
-#: heat/api/openstack/v1/stacks.py:75
-#, python-format
-msgid "%s not in valid format"
-msgstr ""
-
-#: heat/api/openstack/v1/stacks.py:83
-msgid "No stack name specified"
-msgstr ""
-
-#: heat/api/openstack/v1/stacks.py:101
-#, python-format
-msgid "Could not retrieve template: %s"
-msgstr ""
-
-#: heat/api/openstack/v1/stacks.py:104
-msgid "No template specified"
-msgstr ""
-
-#: heat/api/openstack/v1/util.py:56
-msgid "Invalid Stack address"
-msgstr ""
-
-#: heat/common/auth_password.py:116
-msgid "Request missing required header X-Auth-Url"
-msgstr ""
-
-#: heat/common/auth_password.py:121
-#, python-format
-msgid "Header X-Auth-Url \"%s\" not an allowed endpoint"
-msgstr ""
-
-#: heat/common/config.py:37
-msgid "The flavor to use"
-msgstr ""
-
-#: heat/common/config.py:39
-msgid "The API paste config file to use"
-msgstr ""
-
-#: heat/common/config.py:102
-msgid "Allow orchestration of multiple clouds"
-msgstr ""
-
-#: heat/common/config.py:105
-msgid ""
-"Allowed keystone endpoints for auth_uri when multi_cloud is enabled. At "
-"least one endpoint needs to be specified."
-msgstr ""
-
-#: heat/common/exception.py:102
-msgid "An unknown exception occurred."
-msgstr ""
-
-#: heat/common/exception.py:113 heat/openstack/common/rpc/common.py:89
-msgid "Exception in string format operation"
-msgstr ""
-
-#: heat/common/exception.py:125
-#, python-format
-msgid "Missing required credential: %(required)s"
-msgstr ""
-
-#: heat/common/exception.py:129
-#, python-format
-msgid ""
-"Incorrect auth strategy, expected \"%(expected)s\" but received "
-"\"%(received)s\""
-msgstr ""
-
-#: heat/common/exception.py:134
-#, python-format
-msgid "Connect error/bad request to Auth service at URL %(url)s."
-msgstr ""
-
-#: heat/common/exception.py:138
-#, python-format
-msgid "Auth service at URL %(url)s not found."
-msgstr ""
-
-#: heat/common/exception.py:142
-msgid "Authorization failed."
-msgstr ""
-
-#: heat/common/exception.py:146
-msgid "You are not authenticated."
-msgstr ""
-
-#: heat/common/exception.py:150 heat/common/exception.py:155
-msgid "You are not authorized to complete this action."
-msgstr ""
-
-#: heat/common/exception.py:159
-#, python-format
-msgid "Data supplied was not valid: %(reason)s"
-msgstr ""
-
-#: heat/common/exception.py:163
-#, python-format
-msgid "Redirecting to %(uri)s for authorization."
-msgstr ""
-
-#: heat/common/exception.py:167
-msgid "There was an error configuring the client."
-msgstr ""
-
-#: heat/common/exception.py:171
-#, python-format
-msgid ""
-"The request returned a 300 Multiple Choices. This generally means that "
-"you have not included a version indicator in a request URI.\n"
-"\n"
-"The body of response returned:\n"
-"%(body)s"
-msgstr ""
-
-#: heat/common/exception.py:177
-#, python-format
-msgid ""
-"The request returned a 413 Request Entity Too Large. This generally means"
-" that rate limiting or a quota threshold was breached.\n"
-"\n"
-"The response body:\n"
-"%(body)s"
-msgstr ""
-
-#: heat/common/exception.py:188
-msgid ""
-"The request returned a 503 Service Unavailable. This generally occurs on "
-"service overload or other transient outage."
-msgstr ""
-
-#: heat/common/exception.py:199
-msgid "The URI was too long."
-msgstr ""
-
-#: heat/common/exception.py:203
-#, python-format
-msgid ""
-"The request returned 500 Internal Server Error\n"
-"\n"
-"The response body:\n"
-"%(body)s"
-msgstr ""
-
-#: heat/common/exception.py:208
-#, python-format
-msgid "Maximum redirects (%(redirects)s) was exceeded."
-msgstr ""
-
-#: heat/common/exception.py:212
-msgid "Received invalid HTTP redirect."
-msgstr ""
-
-#: heat/common/exception.py:216
-msgid "Response from Keystone does not contain a Heat endpoint."
-msgstr ""
-
-#: heat/common/exception.py:220
-#, python-format
-msgid ""
-"Multiple 'image' service matches for region %(region)s. This generally "
-"means that a region is required and you have not supplied one."
-msgstr ""
-
-#: heat/common/exception.py:226
-#, python-format
-msgid "The Parameter (%(key)s) was not provided."
-msgstr ""
-
-#: heat/common/exception.py:230
-#, python-format
-msgid "The Parameter (%(key)s) was not defined in template."
-msgstr ""
-
-#: heat/common/exception.py:234
-#, python-format
-msgid "The Referenced Attribute (%(resource)s %(key)s) is incorrect."
-msgstr ""
-
-#: heat/common/exception.py:239
-#, python-format
-msgid "The specified reference (%(resource)s %(key)s) is incorrect."
-msgstr ""
-
-#: heat/common/exception.py:244
-#, python-format
-msgid "The Key (%(key_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:248
-#, python-format
-msgid "The Flavor ID (%(flavor_id)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:252
-#, python-format
-msgid "The Image (%(image_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:256
-#, python-format
-msgid "Multiple images were found with name (%(image_name)s)."
-msgstr ""
-
-#: heat/common/exception.py:260
-#, python-format
-msgid "Searching Tenant %(target)s from Tenant %(actual)s forbidden."
-msgstr ""
-
-#: heat/common/exception.py:265
-#, python-format
-msgid "The Stack (%(stack_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:269
-#, python-format
-msgid "The Stack (%(stack_name)s) already exists."
-msgstr ""
-
-#: heat/common/exception.py:273
-#, python-format
-msgid "%(message)s"
-msgstr ""
-
-#: heat/common/exception.py:277
-#, python-format
-msgid ""
-"The Resource (%(resource_name)s) could not be found in Stack "
-"%(stack_name)s."
-msgstr ""
-
-#: heat/common/exception.py:282
-#, python-format
-msgid "The Resource Type (%(type_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:286
-#, python-format
-msgid "The Resource (%(resource_name)s) is not available."
-msgstr ""
-
-#: heat/common/exception.py:290
-#, python-format
-msgid "The Resource (%(resource_id)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:294
-#, python-format
-msgid "The Watch Rule (%(watch_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:298
-#, python-format
-msgid "%(exc_type)s: %(message)s"
-msgstr ""
-
-#: heat/common/exception.py:312
-#, python-format
-msgid "%(feature)s is not supported."
-msgstr ""
-
-#: heat/common/exception.py:316
-#, python-format
-msgid "Cannot define the following properties at the same time: %s."
-msgstr ""
-
-#: heat/common/exception.py:334
-msgid "Template exceeds maximum allowed size."
-msgstr ""
-
-#: heat/common/exception.py:338
-msgid ""
-"Egress rules are only allowed when Neutron is used and the 'VpcId' "
-"property is set."
-msgstr ""
-
-#: heat/common/plugin_loader.py:94
-#, python-format
-msgid "Failed to import module %s"
-msgstr ""
-
-#: heat/common/urlfetch.py:37
-#, python-format
-msgid "Fetching data from %s"
-msgstr ""
-
-#: heat/common/wsgi.py:55
-msgid ""
-"Address to bind the server. Useful when selecting a particular network "
-"interface."
-msgstr ""
-
-#: heat/common/wsgi.py:58
-msgid "The port on which the server will listen."
-msgstr ""
-
-#: heat/common/wsgi.py:65
-msgid "Number of backlog requests to configure the socket with"
-msgstr ""
-
-#: heat/common/wsgi.py:68
-msgid "Location of the SSL Certificate File to use for SSL mode"
-msgstr ""
-
-#: heat/common/wsgi.py:71
-msgid "Location of the SSL Key File to use for enabling SSL mode"
-msgstr ""
-
-#: heat/common/wsgi.py:78
-msgid "Number of workers for Heat service"
-msgstr ""
-
-#: heat/common/wsgi.py:129 heat/openstack/common/sslutils.py:61
-msgid ""
-"When running server in SSL mode, you must specify both a cert_file and "
-"key_file option value in your configuration file"
-msgstr ""
-
-#: heat/common/wsgi.py:147
-#, python-format
-msgid "Could not bind to %(bind_addr)s after trying for 30 seconds"
-msgstr ""
-
-#: heat/common/wsgi.py:179
-msgid "SIGTERM received"
-msgstr ""
-
-#: heat/common/wsgi.py:188
-msgid "SIGHUP received"
-msgstr ""
-
-#: heat/common/wsgi.py:203 heat/openstack/common/service.py:288
-#, python-format
-msgid "Starting %d workers"
-msgstr ""
-
-#: heat/common/wsgi.py:214
-#, python-format
-msgid "Removing dead child %s"
-msgstr ""
-
-#: heat/common/wsgi.py:221
-msgid "Caught keyboard interrupt. Exiting."
-msgstr ""
-
-#: heat/common/wsgi.py:225
-msgid "Exited"
-msgstr ""
-
-#: heat/common/wsgi.py:243
-#, python-format
-msgid "Child %d exiting normally"
-msgstr ""
-
-#: heat/common/wsgi.py:246
-#, python-format
-msgid "Started child %s"
-msgstr ""
-
-#: heat/common/wsgi.py:268
-msgid "Starting single process server"
-msgstr ""
-
-#: heat/common/wsgi.py:596
-#, python-format
-msgid "Exception handling resource: %s"
-msgstr ""
-
-#: heat/common/wsgi.py:597
-msgid ""
-"The server could not comply with the request since\r\n"
-"it is either malformed or otherwise incorrect.\r\n"
-msgstr ""
-
-#: heat/common/wsgi.py:611
-#, python-format
-msgid "Returning %(code)s to user: %(explanation)s"
-msgstr ""
-
-#: heat/common/wsgi.py:616
-#, python-format
-msgid "Unexpected error occurred serving API: %s"
-msgstr ""
-
-#: heat/db/sqlalchemy/api.py:393
-msgid "age should be an integer"
-msgstr ""
-
-#: heat/db/sqlalchemy/api.py:395
-msgid "age should be a positive integer"
-msgstr ""
-
-#: heat/db/sqlalchemy/migration.py:63
-msgid "python-migrate is not installed. Exiting."
-msgstr ""
-
-#: heat/db/sqlalchemy/migration.py:73
-msgid "version should be an integer"
-msgstr ""
-
-#: heat/doc/resources.py:85
-msgid "HOT Syntax"
-msgstr ""
-
-#: heat/doc/resources.py:107
-msgid "YAML Syntax"
-msgstr ""
-
-#: heat/doc/resources.py:129
-msgid "JSON Syntax"
-msgstr ""
-
-#: heat/doc/resources.py:161
-#, python-format
-msgid " from %s"
-msgstr ""
-
-#: heat/doc/resources.py:163
-#, python-format
-msgid " up to %s"
-msgstr ""
-
-#: heat/doc/resources.py:165
-#, python-format
-msgid " from length %s"
-msgstr ""
-
-#: heat/doc/resources.py:167
-#, python-format
-msgid " up to length %s"
-msgstr ""
-
-#: heat/doc/resources.py:174
-msgid "Not implemented."
-msgstr ""
-
-#: heat/doc/resources.py:185
-msgid "Required property."
-msgstr ""
-
-#: heat/doc/resources.py:189
-#, python-format
-msgid "Optional property, defaults to \"%s\"."
-msgstr ""
-
-#: heat/doc/resources.py:192
-msgid "Optional property."
-msgstr ""
-
-#: heat/doc/resources.py:196
-#, python-format
-msgid "Value must match pattern: %s"
-msgstr ""
-
-#: heat/doc/resources.py:203
-#, python-format
-msgid "Allowed values: %s"
-msgstr ""
-
-#: heat/doc/resources.py:209
-msgid "Map properties:"
-msgstr ""
-
-#: heat/doc/resources.py:215
-msgid "List contains maps with the properties:"
-msgstr ""
-
-#: heat/doc/resources.py:230
-msgid "Properties"
-msgstr ""
-
-#: heat/doc/resources.py:241
-msgid "Attributes"
-msgstr ""
-
-#: heat/engine/dependencies.py:25
-#, python-format
-msgid "Circular Dependency Found: %(cycle)s"
-msgstr ""
-
-#: heat/engine/environment.py:181
-#, python-format
-msgid "Changing %(path)s from %(was)s to %(now)s"
-msgstr ""
-
-#: heat/engine/environment.py:183
-#, python-format
-msgid "Registering %(path)s -> %(value)s"
-msgstr ""
-
-#: heat/engine/parser.py:74
-#, python-format
-msgid ""
-"Invalid stack name %s, must contain only alphanumeric or \"_-.\" "
-"characters and must start with a letter"
-msgstr ""
-
-#: heat/engine/resource.py:55 heat/engine/resource.py:58
-#, python-format
-msgid "The Resource %s requires replacement."
-msgstr ""
-
-#: heat/engine/resource.py:132
-msgid "Resource name may not contain \"/\""
-msgstr ""
-
-#: heat/engine/scheduler.py:65
-#, python-format
-msgid "%s Timed out"
-msgstr ""
-
-#: heat/engine/service.py:311
-msgid "No Template provided."
-msgstr ""
-
-#: heat/engine/stack_resource.py:122
-#, python-format
-msgid "Cannot update %s, stack not created"
-msgstr ""
-
-#: heat/engine/stack_resource.py:151
-#, python-format
-msgid "Cannot suspend %s, stack not created"
-msgstr ""
-
-#: heat/engine/stack_resource.py:173
-#, python-format
-msgid "Cannot resume %s, stack not created"
-msgstr ""
-
-#: heat/engine/resources/__init__.py:39
-#, python-format
-msgid "Failed to load resources from %s"
-msgstr ""
-
-#: heat/engine/resources/instance.py:314
-#, python-format
-msgid "Build of server %(server)s failed: %(message)s (%(code)s)"
-msgstr ""
-
-#: heat/engine/resources/instance.py:321
-#, python-format
-msgid "Nova reported unexpected instance[%(name)s] status[%(status)s]"
-msgstr ""
-
-#: heat/engine/resources/instance.py:418
-#, python-format
-msgid "Cannot suspend %s, resource_id not set"
-msgstr ""
-
-#: heat/engine/resources/instance.py:424 heat/engine/resources/instance.py:478
-#, python-format
-msgid "Failed to find instance %s"
-msgstr ""
-
-#: heat/engine/resources/instance.py:455
-#, python-format
-msgid "Nova reported unexpected instance[%(instance)s] status[%(status)s]"
-msgstr ""
-
-#: heat/engine/resources/instance.py:472
-#, python-format
-msgid "Cannot resume %s, resource_id not set"
-msgstr ""
-
-#: heat/engine/resources/nova_utils.py:221
-#, python-format
-msgid "Resizing to '%(flavor)s' failed, status '%(status)s'"
-msgstr ""
-
-#: heat/engine/resources/vpc.py:72
-#, python-format
-msgid "Multiple routers found with name %s"
-msgstr ""
-
-#: heat/engine/resources/neutron/loadbalancer.py:251
-msgid "The ID of the load balancing pool"
-msgstr ""
-
-#: heat/engine/resources/neutron/loadbalancer.py:254
-msgid "Port number on which the member servers are listening"
-msgstr ""
-
-#: heat/engine/resources/neutron/loadbalancer.py:258
-msgid "The list of Nova server IDs load balanced"
-msgstr ""
-
-#: heat/engine/resources/rackspace/cloud_loadbalancer.py:33
-#, python-format
-msgid "There was an error building the loadbalancer: %(lb_name)s."
-msgstr ""
-
-#: heat/openstack/common/eventlet_backdoor.py:142
-#, python-format
-msgid "Eventlet backdoor listening on %(port)s for process %(pid)d"
-msgstr ""
-
-#: heat/openstack/common/exception.py:102
-msgid "Uncaught exception"
-msgstr ""
-
-#: heat/openstack/common/excutils.py:62
-#, python-format
-msgid "Original exception being dropped: %s"
-msgstr ""
-
-#: heat/openstack/common/excutils.py:90
-#, python-format
-msgid "Unexpected exception occurred %d time(s)... retrying."
-msgstr ""
-
-#: heat/openstack/common/fileutils.py:64
-#, python-format
-msgid "Reloading cached file %s"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:100
-#, python-format
-msgid "Could not release the acquired lock `%s`"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:166
-#, python-format
-msgid "Got semaphore \"%(lock)s\""
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:175
-#, python-format
-msgid "Attempting to grab file lock \"%(lock)s\""
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:185
-#, python-format
-msgid "Created lock path: %s"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:203
-#, python-format
-msgid "Got file lock \"%(lock)s\" at %(path)s"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:207
-#, python-format
-msgid "Released file lock \"%(lock)s\" at %(path)s"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:244
-#, python-format
-msgid "Got semaphore / lock \"%(function)s\""
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:248
-#, python-format
-msgid "Semaphore / lock released \"%(function)s\""
-msgstr ""
-
-#: heat/openstack/common/log.py:244
-#, python-format
-msgid "Deprecated: %s"
-msgstr ""
-
-#: heat/openstack/common/log.py:336
-#, python-format
-msgid "Error loading logging config %(log_config)s: %(err_msg)s"
-msgstr ""
-
-#: heat/openstack/common/log.py:386
-#, python-format
-msgid "syslog facility must be one of: %s"
-msgstr ""
-
-#: heat/openstack/common/log.py:556
-#, python-format
-msgid "Fatal call to deprecated config: %(msg)s"
-msgstr ""
-
-#: heat/openstack/common/loopingcall.py:84
-#, python-format
-msgid "task run outlasted interval by %s sec"
-msgstr ""
-
-#: heat/openstack/common/loopingcall.py:91
-msgid "in fixed duration looping call"
-msgstr ""
-
-#: heat/openstack/common/loopingcall.py:131
-#, python-format
-msgid "Dynamic looping call sleeping for %.02f seconds"
-msgstr ""
-
-#: heat/openstack/common/loopingcall.py:138
-msgid "in dynamic looping call"
-msgstr ""
-
-#: heat/openstack/common/policy.py:75
-msgid "JSON file containing policy"
-msgstr ""
-
-#: heat/openstack/common/policy.py:78
-msgid "Rule enforced when requested rule is not found"
-msgstr ""
-
-#: heat/openstack/common/policy.py:92
-#, python-format
-msgid "Policy doesn't allow %s to be performed."
-msgstr ""
-
-#: heat/openstack/common/policy.py:177
-#, python-format
-msgid "Rules must be an instance of dict or Rules, got %s instead"
-msgstr ""
-
-#: heat/openstack/common/policy.py:207
-msgid "Rules successfully reloaded"
-msgstr ""
-
-#: heat/openstack/common/policy.py:253
-#, python-format
-msgid "Rule %s will be now enforced"
-msgstr ""
-
-#: heat/openstack/common/policy.py:268
-#, python-format
-msgid "Rule [%s] doesn't exist"
-msgstr ""
-
-#: heat/openstack/common/policy.py:478
-#, python-format
-msgid "Failed to understand rule %s"
-msgstr ""
-
-#: heat/openstack/common/policy.py:488
-#, python-format
-msgid "No handler for matches of kind %s"
-msgstr ""
-
-#: heat/openstack/common/policy.py:759
-#, python-format
-msgid "Failed to understand rule %r"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:127
-#, python-format
-msgid "Got unknown keyword args to utils.execute: %r"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:142
-#, python-format
-msgid "Running cmd (subprocess): %s"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:167
-#: heat/openstack/common/processutils.py:240
-#, python-format
-msgid "Result was %s"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:179
-#, python-format
-msgid "%r failed. Retrying."
-msgstr ""
-
-#: heat/openstack/common/processutils.py:219
-#, python-format
-msgid "Running cmd (SSH): %s"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:221
-msgid "Environment not supported over SSH"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:225
-msgid "process_input not supported over SSH"
-msgstr ""
-
-#: heat/openstack/common/service.py:118 heat/openstack/common/service.py:335
-msgid "Full set of CONF:"
-msgstr ""
-
-#: heat/openstack/common/service.py:127 heat/openstack/common/service.py:217
-#, python-format
-msgid "Caught %s, exiting"
-msgstr ""
-
-#: heat/openstack/common/service.py:139
-msgid "Exception during rpc cleanup."
-msgstr ""
-
-#: heat/openstack/common/service.py:188
-msgid "Parent process has died unexpectedly, exiting"
-msgstr ""
-
-#: heat/openstack/common/service.py:223
-msgid "Unhandled exception"
-msgstr ""
-
-#: heat/openstack/common/service.py:256
-msgid "Forking too fast, sleeping"
-msgstr ""
-
-#: heat/openstack/common/service.py:278
-#, python-format
-msgid "Started child %d"
-msgstr ""
-
-#: heat/openstack/common/service.py:305
-#, python-format
-msgid "Child %(pid)d killed by signal %(sig)d"
-msgstr ""
-
-#: heat/openstack/common/service.py:309
-#, python-format
-msgid "Child %(pid)s exited with status %(code)d"
-msgstr ""
-
-#: heat/openstack/common/service.py:313
-#, python-format
-msgid "pid %d not in child list"
-msgstr ""
-
-#: heat/openstack/common/service.py:345
-#, python-format
-msgid "Caught %s, stopping children"
-msgstr ""
-
-#: heat/openstack/common/service.py:363
-#, python-format
-msgid "Waiting on %d children to exit"
-msgstr ""
-
-#: heat/openstack/common/sslutils.py:52
-#, python-format
-msgid "Unable to find cert_file: %s"
-msgstr ""
-
-#: heat/openstack/common/sslutils.py:55
-#, python-format
-msgid "Unable to find ca_file: %s"
-msgstr ""
-
-#: heat/openstack/common/sslutils.py:58
-#, python-format
-msgid "Unable to find key_file: %s"
-msgstr ""
-
-#: heat/openstack/common/sslutils.py:100
-#, python-format
-msgid "Invalid SSL version: %s"
-msgstr ""
-
-#: heat/openstack/common/db/exception.py:44
-msgid "Invalid Parameter: Unicode is not supported by the current database."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/session.py:552
-msgid "DB exception wrapped."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/session.py:614
-#, python-format
-msgid "Got mysql server has gone away: %s"
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/session.py:694
-#, python-format
-msgid "SQL connection failed. %s attempts left."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:50
-msgid "Sort key supplied was not valid."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:89
-msgid "Id not in sort_keys; is sort_keys unique?"
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:111
-msgid "Unknown sort direction, must be 'desc' or 'asc'"
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:182
-#, python-format
-msgid ""
-"Please specify column %s in col_name_col_instance param. It is required "
-"because the column has a type unsupported by sqlite."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:188
-#, python-format
-msgid ""
-"col_name_col_instance param has the wrong type of column instance for "
-"column %s. It should be an instance of sqlalchemy.Column."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:268
-#, python-format
-msgid "Deleting duplicated row with id: %(id)s from table: %(table)s"
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:290
-msgid "Unsupported id columns type"
-msgstr ""
-
-#: heat/openstack/common/notifier/api.py:129
-#, python-format
-msgid "%s not in valid priorities"
-msgstr ""
-
-#: heat/openstack/common/notifier/api.py:145
-#, python-format
-msgid ""
-"Problem '%(e)s' attempting to send to notification system. "
-"Payload=%(payload)s"
-msgstr ""
-
-#: heat/openstack/common/notifier/api.py:164
-#, python-format
-msgid "Failed to load notifier %s. These notifications will not be sent."
-msgstr ""
-
-#: heat/openstack/common/notifier/list_notifier.py:112
-#, python-format
-msgid "Problem '%(e)s' attempting to send to notification driver %(driver)s."
-msgstr ""
-
-#: heat/openstack/common/notifier/rabbit_notifier.py:27
-msgid "The rabbit_notifier is now deprecated. Please use rpc_notifier instead."
-msgstr ""
-
-#: heat/openstack/common/notifier/rpc_notifier.py:45
-#: heat/openstack/common/notifier/rpc_notifier2.py:51
-#, python-format
-msgid "Could not send notification to %(topic)s. Payload=%(message)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/__init__.py:107
-#, python-format
-msgid ""
-"A RPC is being made while holding a lock. The locks currently held are "
-"%(locks)s. This is probably a bug. Please report it. Include the "
-"following: [%(stack)s]."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:75
-msgid "Pool creating new connection"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:202
-#, python-format
-msgid "No calling threads waiting for msg_id: %(msg_id)s, message: %(data)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:205
-#, python-format
-msgid "_call_waiters: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:212
-#, python-format
-msgid ""
-"Number of call waiters is greater than warning threshold: %d. There "
-"could be a MulticallProxyWaiter leak."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:290
-#, python-format
-msgid "unpacked context: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:336
-#, python-format
-msgid "UNIQUE_ID is %s."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:408
-#, python-format
-msgid "received %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:416
-#, python-format
-msgid "no method for message: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:417
-#, python-format
-msgid "No method for message: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:445
-#: heat/openstack/common/rpc/impl_zmq.py:280
-#, python-format
-msgid "Expected exception during message handling (%s)"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:453
-#: heat/openstack/common/rpc/impl_zmq.py:286
-msgid "Exception during message handling"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:527
-#, python-format
-msgid "Making synchronous call on %s ..."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:530
-#, python-format
-msgid "MSG_ID is %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:556
-#, python-format
-msgid "Making asynchronous cast on %s..."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:565
-msgid "Making asynchronous fanout cast..."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:593
-#, python-format
-msgid "Sending %(event_type)s on %(topic)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:77
-msgid "An unknown RPC related exception occurred."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:107
-#, python-format
-msgid ""
-"Remote error: %(exc_type)s %(value)s\n"
-"%(traceback)s."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:124
-#, python-format
-msgid ""
-"Timeout while waiting on RPC response - topic: \"%(topic)s\", RPC method:"
-" \"%(method)s\" info: \"%(info)s\""
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:141
-#: heat/openstack/common/rpc/common.py:142
-#: heat/openstack/common/rpc/common.py:143
-msgid "<unknown>"
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:147
-#, python-format
-msgid "Found duplicate message(%(msg_id)s). Skipping it."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:151
-msgid "Invalid reuse of an RPC connection."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:155
-#, python-format
-msgid "Specified RPC version, %(version)s, not supported by this endpoint."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:160
-#, python-format
-msgid ""
-"Specified RPC envelope version, %(version)s, not supported by this "
-"endpoint."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:165
-#, python-format
-msgid "Specified RPC version cap, %(version_cap)s, is too low"
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:289
-#, python-format
-msgid "Returning exception %s to caller"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:162
-msgid "Failed to process message ... skipping it."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:165
-msgid "Failed to process message ... will requeue."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:502
-#, python-format
-msgid "Reconnecting to AMQP server on %(hostname)s:%(port)d"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:524
-#, python-format
-msgid "Connected to AMQP server on %(hostname)s:%(port)d"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:561
-#, python-format
-msgid ""
-"Unable to connect to AMQP server on %(hostname)s:%(port)d after "
-"%(max_retries)d tries: %(err_str)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:575
-#, python-format
-msgid ""
-"AMQP server on %(hostname)s:%(port)d is unreachable: %(err_str)s. Trying "
-"again in %(sleep_time)d seconds."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:629
-#: heat/openstack/common/rpc/impl_qpid.py:492
-#, python-format
-msgid "Failed to declare consumer for topic '%(topic)s': %(err_str)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:647
-#: heat/openstack/common/rpc/impl_qpid.py:507
-#, python-format
-msgid "Timed out waiting for RPC response: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:651
-#: heat/openstack/common/rpc/impl_qpid.py:511
-#, python-format
-msgid "Failed to consume message from queue: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:690
-#: heat/openstack/common/rpc/impl_qpid.py:546
-#, python-format
-msgid "Failed to publish message to topic '%(topic)s': %(err_str)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:161
-msgid "Failed to process message... skipping it."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:434
-#, python-format
-msgid "Unable to connect to AMQP server: %(e)s. Sleeping %(delay)s seconds"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:440
-#, python-format
-msgid "Connected to AMQP server on %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:453
-msgid "Re-established AMQP queues"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:519
-msgid "Error processing message. Skipping it."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:96
-msgid "JSON serialization failed."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:101
-#, python-format
-msgid "Deserializing: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:136
-#, python-format
-msgid "Connecting to %(addr)s with %(type)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:137
-#, python-format
-msgid "-> Subscribed to %(subscribe)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:138
-#, python-format
-msgid "-> bind: %(bind)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:146
-msgid "Could not open socket."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:158
-#, python-format
-msgid "Subscribing to %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:200
-msgid "You cannot recv on this socket."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:205
-msgid "You cannot send on this socket."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:267
-#, python-format
-msgid "Running func with context: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:305
-msgid "Sending reply"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:339
-msgid "RPC message did not include method."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:371
-msgid "Registering reactor"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:383
-msgid "In reactor registered"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:387
-msgid "Consuming socket"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:437
-#, python-format
-msgid "Creating proxy for topic: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:443
-msgid "Topic contained dangerous characters."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:475
-msgid "Topic socket file creation failed."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:481
-#, python-format
-msgid "Local per-topic backlog buffer full for topic %(topic)s. Dropping message."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:497
-#, python-format
-msgid "Required IPC directory does not exist at %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:506
-#, python-format
-msgid "Permission denied to IPC directory at %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:509
-msgid "Could not create ZeroMQ receiver daemon. Socket may already be in use."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:543
-#, python-format
-msgid "CONSUMER RECEIVED DATA: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:562
-msgid "ZMQ Envelope version unsupported or unknown."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:590
-msgid "Skipping topic registration. Already registered."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:597
-#, python-format
-msgid "Consumer is a zmq.%s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:649
-msgid "Creating payload"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:662
-msgid "Creating queue socket for reply waiter"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:675
-msgid "Sending cast"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:678
-msgid "Cast sent; Waiting reply"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:681
-#, python-format
-msgid "Received message: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:682
-msgid "Unpacking response"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:691
-msgid "Unsupported or unknown ZMQ envelope returned."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:698
-msgid "RPC Message Invalid."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:721
-#, python-format
-msgid "%(msg)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:724
-#, python-format
-msgid "Sending message(s) to: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:728
-msgid "No matchmaker results. Not casting."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:731
-msgid "No match from matchmaker."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:813
-#, python-format
-msgid "rpc_zmq_matchmaker = %(orig)s is deprecated; use %(new)s instead"
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker.py:47
-msgid "Match not found by MatchMaker."
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker.py:81
-msgid "Matchmaker does not implement registration or heartbeat."
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker.py:217
-#, python-format
-msgid "Matchmaker unregistered: %(key)s, %(host)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker.py:229
-msgid "Register before starting heartbeat."
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker_ring.py:77
-#: heat/openstack/common/rpc/matchmaker_ring.py:95
-#, python-format
-msgid "No key defining hosts for topic '%s', see ringfile"
-msgstr ""
-
-#: heat/openstack/common/rpc/service.py:49
-#, python-format
-msgid "Creating Consumer connection for Service %s"
-msgstr ""
-
-#: heat/tests/test_exception.py:24
-#, python-format
-msgid "Testing message %(text)s"
-msgstr ""
-
+++ /dev/null
-# Translations template for heat.
-# Copyright (C) 2013 ORGANIZATION
-# This file is distributed under the same license as the heat project.
-# FIRST AUTHOR <EMAIL@ADDRESS>, 2013.
-#
-#, fuzzy
-msgid ""
-msgstr ""
-"Project-Id-Version: heat 2013.2.a430.gd264018\n"
-"Report-Msgid-Bugs-To: EMAIL@ADDRESS\n"
-"POT-Creation-Date: 2013-08-27 11:15+0800\n"
-"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
-"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
-"Language-Team: LANGUAGE <LL@li.org>\n"
-"MIME-Version: 1.0\n"
-"Content-Type: text/plain; charset=utf-8\n"
-"Content-Transfer-Encoding: 8bit\n"
-"Generated-By: Babel 0.9.6\n"
-
-#: heat/api/aws/ec2token.py:38
-msgid "Authentication Endpoint URI"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:260
-#, python-format
-msgid "Failed to fetch template: %s"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:293
-msgid "DisableRollback and OnFailure may not be used together"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:313
-#, python-format
-msgid "Unexpected action %(action)s"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:331 heat/api/cfn/v1/stacks.py:413
-msgid "Invalid Template URL"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:335 heat/api/cfn/v1/stacks.py:416
-msgid "TemplateBody or TemplateUrl were not given."
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:341 heat/api/cfn/v1/stacks.py:422
-msgid "The Template must be a JSON or YAML document."
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:383
-msgid "stack not not found"
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:49
-#, python-format
-msgid "Processing request: %(method)s %(path)s Accept: %(accept)s"
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:64
-#, python-format
-msgid "Matched versioned URI. Version: %(major_version)d.%(minor_version)d"
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:72
-#, python-format
-msgid ""
-"Unknown version in versioned URI: %(major_version)d.%(minor_version)d. "
-"Returning version choices."
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:88
-#, python-format
-msgid "Matched versioned media type. Version: %(major_version)d.%(minor_version)d"
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:94
-#, python-format
-msgid ""
-"Unknown version in accept header: "
-"%(major_version)d.%(minor_version)d...returning version choices."
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:102
-#, python-format
-msgid "Unknown accept header: %s...returning version choices."
-msgstr ""
-
-#: heat/api/openstack/v1/actions.py:43
-msgid "No action specified"
-msgstr ""
-
-#: heat/api/openstack/v1/actions.py:46
-msgid "Multiple actions specified"
-msgstr ""
-
-#: heat/api/openstack/v1/actions.py:50
-#, python-format
-msgid "Invalid action %s specified"
-msgstr ""
-
-#: heat/api/openstack/v1/actions.py:57
-#, python-format
-msgid "Unexpected action %s"
-msgstr ""
-
-#: heat/api/openstack/v1/events.py:100
-#, python-format
-msgid "No events found for resource %s"
-msgstr ""
-
-#: heat/api/openstack/v1/events.py:118
-#, python-format
-msgid "No event %s found"
-msgstr ""
-
-#: heat/api/openstack/v1/stacks.py:75
-#, python-format
-msgid "%s not in valid format"
-msgstr ""
-
-#: heat/api/openstack/v1/stacks.py:83
-msgid "No stack name specified"
-msgstr ""
-
-#: heat/api/openstack/v1/stacks.py:101
-#, python-format
-msgid "Could not retrieve template: %s"
-msgstr ""
-
-#: heat/api/openstack/v1/stacks.py:104
-msgid "No template specified"
-msgstr ""
-
-#: heat/api/openstack/v1/util.py:56
-msgid "Invalid Stack address"
-msgstr ""
-
-#: heat/common/auth_password.py:116
-msgid "Request missing required header X-Auth-Url"
-msgstr ""
-
-#: heat/common/auth_password.py:121
-#, python-format
-msgid "Header X-Auth-Url \"%s\" not an allowed endpoint"
-msgstr ""
-
-#: heat/common/config.py:37
-msgid "The flavor to use"
-msgstr ""
-
-#: heat/common/config.py:39
-msgid "The API paste config file to use"
-msgstr ""
-
-#: heat/common/config.py:102
-msgid "Allow orchestration of multiple clouds"
-msgstr ""
-
-#: heat/common/config.py:105
-msgid ""
-"Allowed keystone endpoints for auth_uri when multi_cloud is enabled. At "
-"least one endpoint needs to be specified."
-msgstr ""
-
-#: heat/common/exception.py:102
-msgid "An unknown exception occurred."
-msgstr ""
-
-#: heat/common/exception.py:113 heat/openstack/common/rpc/common.py:89
-msgid "Exception in string format operation"
-msgstr ""
-
-#: heat/common/exception.py:125
-#, python-format
-msgid "Missing required credential: %(required)s"
-msgstr ""
-
-#: heat/common/exception.py:129
-#, python-format
-msgid ""
-"Incorrect auth strategy, expected \"%(expected)s\" but received "
-"\"%(received)s\""
-msgstr ""
-
-#: heat/common/exception.py:134
-#, python-format
-msgid "Connect error/bad request to Auth service at URL %(url)s."
-msgstr ""
-
-#: heat/common/exception.py:138
-#, python-format
-msgid "Auth service at URL %(url)s not found."
-msgstr ""
-
-#: heat/common/exception.py:142
-msgid "Authorization failed."
-msgstr ""
-
-#: heat/common/exception.py:146
-msgid "You are not authenticated."
-msgstr ""
-
-#: heat/common/exception.py:150 heat/common/exception.py:155
-msgid "You are not authorized to complete this action."
-msgstr ""
-
-#: heat/common/exception.py:159
-#, python-format
-msgid "Data supplied was not valid: %(reason)s"
-msgstr ""
-
-#: heat/common/exception.py:163
-#, python-format
-msgid "Redirecting to %(uri)s for authorization."
-msgstr ""
-
-#: heat/common/exception.py:167
-msgid "There was an error configuring the client."
-msgstr ""
-
-#: heat/common/exception.py:171
-#, python-format
-msgid ""
-"The request returned a 302 Multiple Choices. This generally means that "
-"you have not included a version indicator in a request URI.\n"
-"\n"
-"The body of response returned:\n"
-"%(body)s"
-msgstr ""
-
-#: heat/common/exception.py:177
-#, python-format
-msgid ""
-"The request returned a 413 Request Entity Too Large. This generally means"
-" that rate limiting or a quota threshold was breached.\n"
-"\n"
-"The response body:\n"
-"%(body)s"
-msgstr ""
-
-#: heat/common/exception.py:188
-msgid ""
-"The request returned a 503 ServiceUnavilable. This generally occurs on "
-"service overload or other transient outage."
-msgstr ""
-
-#: heat/common/exception.py:199
-msgid "The URI was too long."
-msgstr ""
-
-#: heat/common/exception.py:203
-#, python-format
-msgid ""
-"The request returned 500 Internal Server Error\n"
-"\n"
-"The response body:\n"
-"%(body)s"
-msgstr ""
-
-#: heat/common/exception.py:208
-#, python-format
-msgid "Maximum redirects (%(redirects)s) was exceeded."
-msgstr ""
-
-#: heat/common/exception.py:212
-msgid "Received invalid HTTP redirect."
-msgstr ""
-
-#: heat/common/exception.py:216
-msgid "Response from Keystone does not contain a Heat endpoint."
-msgstr ""
-
-#: heat/common/exception.py:220
-#, python-format
-msgid ""
-"Multiple 'image' service matches for region %(region)s. This generally "
-"means that a region is required and you have not supplied one."
-msgstr ""
-
-#: heat/common/exception.py:226
-#, python-format
-msgid "The Parameter (%(key)s) was not provided."
-msgstr ""
-
-#: heat/common/exception.py:230
-#, python-format
-msgid "The Parameter (%(key)s) was not defined in template."
-msgstr ""
-
-#: heat/common/exception.py:234
-#, python-format
-msgid "The Referenced Attribute (%(resource)s %(key)s) is incorrect."
-msgstr ""
-
-#: heat/common/exception.py:239
-#, python-format
-msgid "The specified reference (%(resource)s %(key)s) is incorrect."
-msgstr ""
-
-#: heat/common/exception.py:244
-#, python-format
-msgid "The Key (%(key_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:248
-#, python-format
-msgid "The Flavor ID (%(flavor_id)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:252
-#, python-format
-msgid "The Image (%(image_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:256
-#, python-format
-msgid "Multiple images were found with name (%(image_name)s)."
-msgstr ""
-
-#: heat/common/exception.py:260
-#, python-format
-msgid "Searching Tenant %(target)s from Tenant %(actual)s forbidden."
-msgstr ""
-
-#: heat/common/exception.py:265
-#, python-format
-msgid "The Stack (%(stack_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:269
-#, python-format
-msgid "The Stack (%(stack_name)s) already exists."
-msgstr ""
-
-#: heat/common/exception.py:273
-#, python-format
-msgid "%(message)s"
-msgstr ""
-
-#: heat/common/exception.py:277
-#, python-format
-msgid ""
-"The Resource (%(resource_name)s) could not be found in Stack "
-"%(stack_name)s."
-msgstr ""
-
-#: heat/common/exception.py:282
-#, python-format
-msgid "The Resource Type (%(type_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:286
-#, python-format
-msgid "The Resource (%(resource_name)s) is not available."
-msgstr ""
-
-#: heat/common/exception.py:290
-#, python-format
-msgid "The Resource (%(resource_id)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:294
-#, python-format
-msgid "The Watch Rule (%(watch_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:298
-#, python-format
-msgid "%(exc_type)s: %(message)s"
-msgstr ""
-
-#: heat/common/exception.py:312
-#, python-format
-msgid "%(feature)s is not supported."
-msgstr ""
-
-#: heat/common/exception.py:316
-#, python-format
-msgid "Cannot define the following properties at the same time: %s."
-msgstr ""
-
-#: heat/common/exception.py:334
-msgid "Template exceeds maximum allowed size."
-msgstr ""
-
-#: heat/common/exception.py:338
-msgid ""
-"Egress rules are only allowed when Neutron is used and the 'VpcId' "
-"property is set."
-msgstr ""
-
-#: heat/common/plugin_loader.py:94
-#, python-format
-msgid "Failed to import module %s"
-msgstr ""
-
-#: heat/common/urlfetch.py:37
-#, python-format
-msgid "Fetching data from %s"
-msgstr ""
-
-#: heat/common/wsgi.py:55
-msgid ""
-"Address to bind the server. Useful when selecting a particular network "
-"interface."
-msgstr ""
-
-#: heat/common/wsgi.py:58
-msgid "The port on which the server will listen."
-msgstr ""
-
-#: heat/common/wsgi.py:65
-msgid "Number of backlog requests to configure the socket with"
-msgstr ""
-
-#: heat/common/wsgi.py:68
-msgid "Location of the SSL Certificate File to use for SSL mode"
-msgstr ""
-
-#: heat/common/wsgi.py:71
-msgid "Location of the SSL Key File to use for enabling SSL mode"
-msgstr ""
-
-#: heat/common/wsgi.py:78
-msgid "Number of workers for Heat service"
-msgstr ""
-
-#: heat/common/wsgi.py:129 heat/openstack/common/sslutils.py:61
-msgid ""
-"When running server in SSL mode, you must specify both a cert_file and "
-"key_file option value in your configuration file"
-msgstr ""
-
-#: heat/common/wsgi.py:147
-#, python-format
-msgid "Could not bind to %(bind_addr)safter trying for 30 seconds"
-msgstr ""
-
-#: heat/common/wsgi.py:179
-msgid "SIGTERM received"
-msgstr ""
-
-#: heat/common/wsgi.py:188
-msgid "SIGHUP received"
-msgstr ""
-
-#: heat/common/wsgi.py:203 heat/openstack/common/service.py:288
-#, python-format
-msgid "Starting %d workers"
-msgstr ""
-
-#: heat/common/wsgi.py:214
-#, python-format
-msgid "Removing dead child %s"
-msgstr ""
-
-#: heat/common/wsgi.py:221
-msgid "Caught keyboard interrupt. Exiting."
-msgstr ""
-
-#: heat/common/wsgi.py:225
-msgid "Exited"
-msgstr ""
-
-#: heat/common/wsgi.py:243
-#, python-format
-msgid "Child %d exiting normally"
-msgstr ""
-
-#: heat/common/wsgi.py:246
-#, python-format
-msgid "Started child %s"
-msgstr ""
-
-#: heat/common/wsgi.py:268
-msgid "Starting single process server"
-msgstr ""
-
-#: heat/common/wsgi.py:596
-#, python-format
-msgid "Exception handling resource: %s"
-msgstr ""
-
-#: heat/common/wsgi.py:597
-msgid ""
-"The server could not comply with the request since\r\n"
-"it is either malformed or otherwise incorrect.\r\n"
-msgstr ""
-
-#: heat/common/wsgi.py:611
-#, python-format
-msgid "Returning %(code)s to user: %(explanation)s"
-msgstr ""
-
-#: heat/common/wsgi.py:616
-#, python-format
-msgid "Unexpected error occurred serving API: %s"
-msgstr ""
-
-#: heat/db/sqlalchemy/api.py:393
-msgid "age should be an integer"
-msgstr ""
-
-#: heat/db/sqlalchemy/api.py:395
-msgid "age should be a positive integer"
-msgstr ""
-
-#: heat/db/sqlalchemy/migration.py:63
-msgid "python-migrate is not installed. Exiting."
-msgstr ""
-
-#: heat/db/sqlalchemy/migration.py:73
-msgid "version should be an integer"
-msgstr ""
-
-#: heat/doc/resources.py:85
-msgid "HOT Syntax"
-msgstr ""
-
-#: heat/doc/resources.py:107
-msgid "YAML Syntax"
-msgstr ""
-
-#: heat/doc/resources.py:129
-msgid "JSON Syntax"
-msgstr ""
-
-#: heat/doc/resources.py:161
-#, python-format
-msgid " from %s"
-msgstr ""
-
-#: heat/doc/resources.py:163
-#, python-format
-msgid " up to %s"
-msgstr ""
-
-#: heat/doc/resources.py:165
-#, python-format
-msgid " from length %s"
-msgstr ""
-
-#: heat/doc/resources.py:167
-#, python-format
-msgid " up to length %s"
-msgstr ""
-
-#: heat/doc/resources.py:174
-msgid "Not implemented."
-msgstr ""
-
-#: heat/doc/resources.py:185
-msgid "Required property."
-msgstr ""
-
-#: heat/doc/resources.py:189
-#, python-format
-msgid "Optional property, defaults to \"%s\"."
-msgstr ""
-
-#: heat/doc/resources.py:192
-msgid "Optional property."
-msgstr ""
-
-#: heat/doc/resources.py:196
-#, python-format
-msgid "Value must match pattern: %s"
-msgstr ""
-
-#: heat/doc/resources.py:203
-#, python-format
-msgid "Allowed values: %s"
-msgstr ""
-
-#: heat/doc/resources.py:209
-msgid "Map properties:"
-msgstr ""
-
-#: heat/doc/resources.py:215
-msgid "List contains maps with the properties:"
-msgstr ""
-
-#: heat/doc/resources.py:230
-msgid "Properties"
-msgstr ""
-
-#: heat/doc/resources.py:241
-msgid "Attributes"
-msgstr ""
-
-#: heat/engine/dependencies.py:25
-#, python-format
-msgid "Circular Dependency Found: %(cycle)s"
-msgstr ""
-
-#: heat/engine/environment.py:181
-#, python-format
-msgid "Changing %(path)s from %(was)s to %(now)s"
-msgstr ""
-
-#: heat/engine/environment.py:183
-#, python-format
-msgid "Registering %(path)s -> %(value)s"
-msgstr ""
-
-#: heat/engine/parser.py:74
-#, python-format
-msgid ""
-"Invalid stack name %s must contain only alphanumeric or \"_-.\" "
-"characters, must start with alpha"
-msgstr ""
-
-#: heat/engine/resource.py:55 heat/engine/resource.py:58
-#, python-format
-msgid "The Resource %s requires replacement."
-msgstr ""
-
-#: heat/engine/resource.py:132
-msgid "Resource name may not contain \"/\""
-msgstr ""
-
-#: heat/engine/scheduler.py:65
-#, python-format
-msgid "%s Timed out"
-msgstr ""
-
-#: heat/engine/service.py:311
-msgid "No Template provided."
-msgstr ""
-
-#: heat/engine/stack_resource.py:122
-#, python-format
-msgid "Cannot update %s, stack not created"
-msgstr ""
-
-#: heat/engine/stack_resource.py:151
-#, python-format
-msgid "Cannot suspend %s, stack not created"
-msgstr ""
-
-#: heat/engine/stack_resource.py:173
-#, python-format
-msgid "Cannot resume %s, stack not created"
-msgstr ""
-
-#: heat/engine/resources/__init__.py:39
-#, python-format
-msgid "Failed to load resources from %s"
-msgstr ""
-
-#: heat/engine/resources/instance.py:314
-#, python-format
-msgid "Build of server %(server)s failed: %(message)s (%(code)s)"
-msgstr ""
-
-#: heat/engine/resources/instance.py:321
-#, python-format
-msgid "Nova reported unexpected instance[%(name)s] status[%(status)s]"
-msgstr ""
-
-#: heat/engine/resources/instance.py:418
-#, python-format
-msgid "Cannot suspend %s, resource_id not set"
-msgstr ""
-
-#: heat/engine/resources/instance.py:424 heat/engine/resources/instance.py:478
-#, python-format
-msgid "Failed to find instance %s"
-msgstr ""
-
-#: heat/engine/resources/instance.py:455
-#, python-format
-msgid " nova reported unexpected instance[%(instance)s] status[%(status)s]"
-msgstr ""
-
-#: heat/engine/resources/instance.py:472
-#, python-format
-msgid "Cannot resume %s, resource_id not set"
-msgstr ""
-
-#: heat/engine/resources/nova_utils.py:221
-#, python-format
-msgid "Resizing to '%(flavor)s' failed, status '%(status)s'"
-msgstr ""
-
-#: heat/engine/resources/vpc.py:72
-#, python-format
-msgid "Multiple routers found with name %s"
-msgstr ""
-
-#: heat/engine/resources/neutron/loadbalancer.py:251
-msgid "The ID of the load balancing pool"
-msgstr ""
-
-#: heat/engine/resources/neutron/loadbalancer.py:254
-msgid "Port number on which the servers are running on the members"
-msgstr ""
-
-#: heat/engine/resources/neutron/loadbalancer.py:258
-msgid "The list of Nova server IDs load balanced"
-msgstr ""
-
-#: heat/engine/resources/rackspace/cloud_loadbalancer.py:33
-#, python-format
-msgid "There was an error building the loadbalancer:%(lb_name)s."
-msgstr ""
-
-#: heat/openstack/common/eventlet_backdoor.py:142
-#, python-format
-msgid "Eventlet backdoor listening on %(port)s for process %(pid)d"
-msgstr ""
-
-#: heat/openstack/common/exception.py:102
-msgid "Uncaught exception"
-msgstr ""
-
-#: heat/openstack/common/excutils.py:62
-#, python-format
-msgid "Original exception being dropped: %s"
-msgstr ""
-
-#: heat/openstack/common/excutils.py:90
-#, python-format
-msgid "Unexpected exception occurred %d time(s)... retrying."
-msgstr ""
-
-#: heat/openstack/common/fileutils.py:64
-#, python-format
-msgid "Reloading cached file %s"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:100
-#, python-format
-msgid "Could not release the acquired lock `%s`"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:166
-#, python-format
-msgid "Got semaphore \"%(lock)s\""
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:175
-#, python-format
-msgid "Attempting to grab file lock \"%(lock)s\""
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:185
-#, python-format
-msgid "Created lock path: %s"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:203
-#, python-format
-msgid "Got file lock \"%(lock)s\" at %(path)s"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:207
-#, python-format
-msgid "Released file lock \"%(lock)s\" at %(path)s"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:244
-#, python-format
-msgid "Got semaphore / lock \"%(function)s\""
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:248
-#, python-format
-msgid "Semaphore / lock released \"%(function)s\""
-msgstr ""
-
-#: heat/openstack/common/log.py:244
-#, python-format
-msgid "Deprecated: %s"
-msgstr ""
-
-#: heat/openstack/common/log.py:336
-#, python-format
-msgid "Error loading logging config %(log_config)s: %(err_msg)s"
-msgstr ""
-
-#: heat/openstack/common/log.py:386
-#, python-format
-msgid "syslog facility must be one of: %s"
-msgstr ""
-
-#: heat/openstack/common/log.py:556
-#, python-format
-msgid "Fatal call to deprecated config: %(msg)s"
-msgstr ""
-
-#: heat/openstack/common/loopingcall.py:84
-#, python-format
-msgid "task run outlasted interval by %s sec"
-msgstr ""
-
-#: heat/openstack/common/loopingcall.py:91
-msgid "in fixed duration looping call"
-msgstr ""
-
-#: heat/openstack/common/loopingcall.py:131
-#, python-format
-msgid "Dynamic looping call sleeping for %.02f seconds"
-msgstr ""
-
-#: heat/openstack/common/loopingcall.py:138
-msgid "in dynamic looping call"
-msgstr ""
-
-#: heat/openstack/common/policy.py:75
-msgid "JSON file containing policy"
-msgstr ""
-
-#: heat/openstack/common/policy.py:78
-msgid "Rule enforced when requested rule is not found"
-msgstr ""
-
-#: heat/openstack/common/policy.py:92
-#, python-format
-msgid "Policy doesn't allow %s to be performed."
-msgstr ""
-
-#: heat/openstack/common/policy.py:177
-#, python-format
-msgid "Rules must be an instance of dict or Rules, got %s instead"
-msgstr ""
-
-#: heat/openstack/common/policy.py:207
-msgid "Rules successfully reloaded"
-msgstr ""
-
-#: heat/openstack/common/policy.py:253
-#, python-format
-msgid "Rule %s will be now enforced"
-msgstr ""
-
-#: heat/openstack/common/policy.py:268
-#, python-format
-msgid "Rule [%s] doesn't exist"
-msgstr ""
-
-#: heat/openstack/common/policy.py:478
-#, python-format
-msgid "Failed to understand rule %s"
-msgstr ""
-
-#: heat/openstack/common/policy.py:488
-#, python-format
-msgid "No handler for matches of kind %s"
-msgstr ""
-
-#: heat/openstack/common/policy.py:759
-#, python-format
-msgid "Failed to understand rule %r"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:127
-#, python-format
-msgid "Got unknown keyword args to utils.execute: %r"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:142
-#, python-format
-msgid "Running cmd (subprocess): %s"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:167
-#: heat/openstack/common/processutils.py:240
-#, python-format
-msgid "Result was %s"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:179
-#, python-format
-msgid "%r failed. Retrying."
-msgstr ""
-
-#: heat/openstack/common/processutils.py:219
-#, python-format
-msgid "Running cmd (SSH): %s"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:221
-msgid "Environment not supported over SSH"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:225
-msgid "process_input not supported over SSH"
-msgstr ""
-
-#: heat/openstack/common/service.py:118 heat/openstack/common/service.py:335
-msgid "Full set of CONF:"
-msgstr ""
-
-#: heat/openstack/common/service.py:127 heat/openstack/common/service.py:217
-#, python-format
-msgid "Caught %s, exiting"
-msgstr ""
-
-#: heat/openstack/common/service.py:139
-msgid "Exception during rpc cleanup."
-msgstr ""
-
-#: heat/openstack/common/service.py:188
-msgid "Parent process has died unexpectedly, exiting"
-msgstr ""
-
-#: heat/openstack/common/service.py:223
-msgid "Unhandled exception"
-msgstr ""
-
-#: heat/openstack/common/service.py:256
-msgid "Forking too fast, sleeping"
-msgstr ""
-
-#: heat/openstack/common/service.py:278
-#, python-format
-msgid "Started child %d"
-msgstr ""
-
-#: heat/openstack/common/service.py:305
-#, python-format
-msgid "Child %(pid)d killed by signal %(sig)d"
-msgstr ""
-
-#: heat/openstack/common/service.py:309
-#, python-format
-msgid "Child %(pid)s exited with status %(code)d"
-msgstr ""
-
-#: heat/openstack/common/service.py:313
-#, python-format
-msgid "pid %d not in child list"
-msgstr ""
-
-#: heat/openstack/common/service.py:345
-#, python-format
-msgid "Caught %s, stopping children"
-msgstr ""
-
-#: heat/openstack/common/service.py:363
-#, python-format
-msgid "Waiting on %d children to exit"
-msgstr ""
-
-#: heat/openstack/common/sslutils.py:52
-#, python-format
-msgid "Unable to find cert_file : %s"
-msgstr ""
-
-#: heat/openstack/common/sslutils.py:55
-#, python-format
-msgid "Unable to find ca_file : %s"
-msgstr ""
-
-#: heat/openstack/common/sslutils.py:58
-#, python-format
-msgid "Unable to find key_file : %s"
-msgstr ""
-
-#: heat/openstack/common/sslutils.py:100
-#, python-format
-msgid "Invalid SSL version : %s"
-msgstr ""
-
-#: heat/openstack/common/db/exception.py:44
-msgid "Invalid Parameter: Unicode is not supported by the current database."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/session.py:552
-msgid "DB exception wrapped."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/session.py:614
-#, python-format
-msgid "Got mysql server has gone away: %s"
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/session.py:694
-#, python-format
-msgid "SQL connection failed. %s attempts left."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:50
-msgid "Sort key supplied was not valid."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:89
-msgid "Id not in sort_keys; is sort_keys unique?"
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:111
-msgid "Unknown sort direction, must be 'desc' or 'asc'"
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:182
-#, python-format
-msgid ""
-"Please specify column %s in col_name_col_instance param. It is required "
-"because column has unsupported type by sqlite)."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:188
-#, python-format
-msgid ""
-"col_name_col_instance param has wrong type of column instance for column "
-"%s It should be instance of sqlalchemy.Column."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:268
-#, python-format
-msgid "Deleting duplicated row with id: %(id)s from table: %(table)s"
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:290
-msgid "Unsupported id columns type"
-msgstr ""
-
-#: heat/openstack/common/notifier/api.py:129
-#, python-format
-msgid "%s not in valid priorities"
-msgstr ""
-
-#: heat/openstack/common/notifier/api.py:145
-#, python-format
-msgid ""
-"Problem '%(e)s' attempting to send to notification system. "
-"Payload=%(payload)s"
-msgstr ""
-
-#: heat/openstack/common/notifier/api.py:164
-#, python-format
-msgid "Failed to load notifier %s. These notifications will not be sent."
-msgstr ""
-
-#: heat/openstack/common/notifier/list_notifier.py:112
-#, python-format
-msgid "Problem '%(e)s' attempting to send to notification driver %(driver)s."
-msgstr ""
-
-#: heat/openstack/common/notifier/rabbit_notifier.py:27
-msgid "The rabbit_notifier is now deprecated. Please use rpc_notifier instead."
-msgstr ""
-
-#: heat/openstack/common/notifier/rpc_notifier.py:45
-#: heat/openstack/common/notifier/rpc_notifier2.py:51
-#, python-format
-msgid "Could not send notification to %(topic)s. Payload=%(message)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/__init__.py:107
-#, python-format
-msgid ""
-"A RPC is being made while holding a lock. The locks currently held are "
-"%(locks)s. This is probably a bug. Please report it. Include the "
-"following: [%(stack)s]."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:75
-msgid "Pool creating new connection"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:202
-#, python-format
-msgid "No calling threads waiting for msg_id : %(msg_id)s, message : %(data)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:205
-#, python-format
-msgid "_call_waiters: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:212
-#, python-format
-msgid ""
-"Number of call waiters is greater than warning threshhold: %d. There "
-"could be a MulticallProxyWaiter leak."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:290
-#, python-format
-msgid "unpacked context: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:336
-#, python-format
-msgid "UNIQUE_ID is %s."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:408
-#, python-format
-msgid "received %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:416
-#, python-format
-msgid "no method for message: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:417
-#, python-format
-msgid "No method for message: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:445
-#: heat/openstack/common/rpc/impl_zmq.py:280
-#, python-format
-msgid "Expected exception during message handling (%s)"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:453
-#: heat/openstack/common/rpc/impl_zmq.py:286
-msgid "Exception during message handling"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:527
-#, python-format
-msgid "Making synchronous call on %s ..."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:530
-#, python-format
-msgid "MSG_ID is %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:556
-#, python-format
-msgid "Making asynchronous cast on %s..."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:565
-msgid "Making asynchronous fanout cast..."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:593
-#, python-format
-msgid "Sending %(event_type)s on %(topic)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:77
-msgid "An unknown RPC related exception occurred."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:107
-#, python-format
-msgid ""
-"Remote error: %(exc_type)s %(value)s\n"
-"%(traceback)s."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:124
-#, python-format
-msgid ""
-"Timeout while waiting on RPC response - topic: \"%(topic)s\", RPC method:"
-" \"%(method)s\" info: \"%(info)s\""
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:141
-#: heat/openstack/common/rpc/common.py:142
-#: heat/openstack/common/rpc/common.py:143
-msgid "<unknown>"
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:147
-#, python-format
-msgid "Found duplicate message(%(msg_id)s). Skipping it."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:151
-msgid "Invalid reuse of an RPC connection."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:155
-#, python-format
-msgid "Specified RPC version, %(version)s, not supported by this endpoint."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:160
-#, python-format
-msgid ""
-"Specified RPC envelope version, %(version)s, not supported by this "
-"endpoint."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:165
-#, python-format
-msgid "Specified RPC version cap, %(version_cap)s, is too low"
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:289
-#, python-format
-msgid "Returning exception %s to caller"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:162
-msgid "Failed to process message ... skipping it."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:165
-msgid "Failed to process message ... will requeue."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:502
-#, python-format
-msgid "Reconnecting to AMQP server on %(hostname)s:%(port)d"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:524
-#, python-format
-msgid "Connected to AMQP server on %(hostname)s:%(port)d"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:561
-#, python-format
-msgid ""
-"Unable to connect to AMQP server on %(hostname)s:%(port)d after "
-"%(max_retries)d tries: %(err_str)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:575
-#, python-format
-msgid ""
-"AMQP server on %(hostname)s:%(port)d is unreachable: %(err_str)s. Trying "
-"again in %(sleep_time)d seconds."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:629
-#: heat/openstack/common/rpc/impl_qpid.py:492
-#, python-format
-msgid "Failed to declare consumer for topic '%(topic)s': %(err_str)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:647
-#: heat/openstack/common/rpc/impl_qpid.py:507
-#, python-format
-msgid "Timed out waiting for RPC response: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:651
-#: heat/openstack/common/rpc/impl_qpid.py:511
-#, python-format
-msgid "Failed to consume message from queue: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:690
-#: heat/openstack/common/rpc/impl_qpid.py:546
-#, python-format
-msgid "Failed to publish message to topic '%(topic)s': %(err_str)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:161
-msgid "Failed to process message... skipping it."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:434
-#, python-format
-msgid "Unable to connect to AMQP server: %(e)s. Sleeping %(delay)s seconds"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:440
-#, python-format
-msgid "Connected to AMQP server on %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:453
-msgid "Re-established AMQP queues"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:519
-msgid "Error processing message. Skipping it."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:96
-msgid "JSON serialization failed."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:101
-#, python-format
-msgid "Deserializing: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:136
-#, python-format
-msgid "Connecting to %(addr)s with %(type)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:137
-#, python-format
-msgid "-> Subscribed to %(subscribe)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:138
-#, python-format
-msgid "-> bind: %(bind)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:146
-msgid "Could not open socket."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:158
-#, python-format
-msgid "Subscribing to %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:200
-msgid "You cannot recv on this socket."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:205
-msgid "You cannot send on this socket."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:267
-#, python-format
-msgid "Running func with context: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:305
-msgid "Sending reply"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:339
-msgid "RPC message did not include method."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:371
-msgid "Registering reactor"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:383
-msgid "In reactor registered"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:387
-msgid "Consuming socket"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:437
-#, python-format
-msgid "Creating proxy for topic: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:443
-msgid "Topic contained dangerous characters."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:475
-msgid "Topic socket file creation failed."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:481
-#, python-format
-msgid "Local per-topic backlog buffer full for topic %(topic)s. Dropping message."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:497
-#, python-format
-msgid "Required IPC directory does not exist at %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:506
-#, python-format
-msgid "Permission denied to IPC directory at %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:509
-msgid "Could not create ZeroMQ receiver daemon. Socket may already be in use."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:543
-#, python-format
-msgid "CONSUMER RECEIVED DATA: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:562
-msgid "ZMQ Envelope version unsupported or unknown."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:590
-msgid "Skipping topic registration. Already registered."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:597
-#, python-format
-msgid "Consumer is a zmq.%s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:649
-msgid "Creating payload"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:662
-msgid "Creating queue socket for reply waiter"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:675
-msgid "Sending cast"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:678
-msgid "Cast sent; Waiting reply"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:681
-#, python-format
-msgid "Received message: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:682
-msgid "Unpacking response"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:691
-msgid "Unsupported or unknown ZMQ envelope returned."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:698
-msgid "RPC Message Invalid."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:721
-#, python-format
-msgid "%(msg)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:724
-#, python-format
-msgid "Sending message(s) to: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:728
-msgid "No matchmaker results. Not casting."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:731
-msgid "No match from matchmaker."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:813
-#, python-format
-msgid "rpc_zmq_matchmaker = %(orig)s is deprecated; use %(new)s instead"
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker.py:47
-msgid "Match not found by MatchMaker."
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker.py:81
-msgid "Matchmaker does not implement registration or heartbeat."
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker.py:217
-#, python-format
-msgid "Matchmaker unregistered: %(key)s, %(host)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker.py:229
-msgid "Register before starting heartbeat."
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker_ring.py:77
-#: heat/openstack/common/rpc/matchmaker_ring.py:95
-#, python-format
-msgid "No key defining hosts for topic '%s', see ringfile"
-msgstr ""
-
-#: heat/openstack/common/rpc/service.py:49
-#, python-format
-msgid "Creating Consumer connection for Service %s"
-msgstr ""
-
-#: heat/tests/test_exception.py:24
-#, python-format
-msgid "Testing message %(text)s"
-msgstr ""
-
+++ /dev/null
-# Translations template for heat.
-# Copyright (C) 2013 ORGANIZATION
-# This file is distributed under the same license as the heat project.
-# FIRST AUTHOR <EMAIL@ADDRESS>, 2013.
-#
-#, fuzzy
-msgid ""
-msgstr ""
-"Project-Id-Version: heat 2013.2.a430.gd264018\n"
-"Report-Msgid-Bugs-To: EMAIL@ADDRESS\n"
-"POT-Creation-Date: 2013-08-27 11:15+0800\n"
-"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
-"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
-"Language-Team: LANGUAGE <LL@li.org>\n"
-"MIME-Version: 1.0\n"
-"Content-Type: text/plain; charset=utf-8\n"
-"Content-Transfer-Encoding: 8bit\n"
-"Generated-By: Babel 0.9.6\n"
-
-#: heat/api/aws/ec2token.py:38
-msgid "Authentication Endpoint URI"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:260
-#, python-format
-msgid "Failed to fetch template: %s"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:293
-msgid "DisableRollback and OnFailure may not be used together"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:313
-#, python-format
-msgid "Unexpected action %(action)s"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:331 heat/api/cfn/v1/stacks.py:413
-msgid "Invalid Template URL"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:335 heat/api/cfn/v1/stacks.py:416
-msgid "TemplateBody or TemplateUrl were not given."
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:341 heat/api/cfn/v1/stacks.py:422
-msgid "The Template must be a JSON or YAML document."
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:383
-msgid "stack not not found"
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:49
-#, python-format
-msgid "Processing request: %(method)s %(path)s Accept: %(accept)s"
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:64
-#, python-format
-msgid "Matched versioned URI. Version: %(major_version)d.%(minor_version)d"
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:72
-#, python-format
-msgid ""
-"Unknown version in versioned URI: %(major_version)d.%(minor_version)d. "
-"Returning version choices."
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:88
-#, python-format
-msgid "Matched versioned media type. Version: %(major_version)d.%(minor_version)d"
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:94
-#, python-format
-msgid ""
-"Unknown version in accept header: "
-"%(major_version)d.%(minor_version)d...returning version choices."
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:102
-#, python-format
-msgid "Unknown accept header: %s...returning version choices."
-msgstr ""
-
-#: heat/api/openstack/v1/actions.py:43
-msgid "No action specified"
-msgstr ""
-
-#: heat/api/openstack/v1/actions.py:46
-msgid "Multiple actions specified"
-msgstr ""
-
-#: heat/api/openstack/v1/actions.py:50
-#, python-format
-msgid "Invalid action %s specified"
-msgstr ""
-
-#: heat/api/openstack/v1/actions.py:57
-#, python-format
-msgid "Unexpected action %s"
-msgstr ""
-
-#: heat/api/openstack/v1/events.py:100
-#, python-format
-msgid "No events found for resource %s"
-msgstr ""
-
-#: heat/api/openstack/v1/events.py:118
-#, python-format
-msgid "No event %s found"
-msgstr ""
-
-#: heat/api/openstack/v1/stacks.py:75
-#, python-format
-msgid "%s not in valid format"
-msgstr ""
-
-#: heat/api/openstack/v1/stacks.py:83
-msgid "No stack name specified"
-msgstr ""
-
-#: heat/api/openstack/v1/stacks.py:101
-#, python-format
-msgid "Could not retrieve template: %s"
-msgstr ""
-
-#: heat/api/openstack/v1/stacks.py:104
-msgid "No template specified"
-msgstr ""
-
-#: heat/api/openstack/v1/util.py:56
-msgid "Invalid Stack address"
-msgstr ""
-
-#: heat/common/auth_password.py:116
-msgid "Request missing required header X-Auth-Url"
-msgstr ""
-
-#: heat/common/auth_password.py:121
-#, python-format
-msgid "Header X-Auth-Url \"%s\" not an allowed endpoint"
-msgstr ""
-
-#: heat/common/config.py:37
-msgid "The flavor to use"
-msgstr ""
-
-#: heat/common/config.py:39
-msgid "The API paste config file to use"
-msgstr ""
-
-#: heat/common/config.py:102
-msgid "Allow orchestration of multiple clouds"
-msgstr ""
-
-#: heat/common/config.py:105
-msgid ""
-"Allowed keystone endpoints for auth_uri when multi_cloud is enabled. At "
-"least one endpoint needs to be specified."
-msgstr ""
-
-#: heat/common/exception.py:102
-msgid "An unknown exception occurred."
-msgstr ""
-
-#: heat/common/exception.py:113 heat/openstack/common/rpc/common.py:89
-msgid "Exception in string format operation"
-msgstr ""
-
-#: heat/common/exception.py:125
-#, python-format
-msgid "Missing required credential: %(required)s"
-msgstr ""
-
-#: heat/common/exception.py:129
-#, python-format
-msgid ""
-"Incorrect auth strategy, expected \"%(expected)s\" but received "
-"\"%(received)s\""
-msgstr ""
-
-#: heat/common/exception.py:134
-#, python-format
-msgid "Connect error/bad request to Auth service at URL %(url)s."
-msgstr ""
-
-#: heat/common/exception.py:138
-#, python-format
-msgid "Auth service at URL %(url)s not found."
-msgstr ""
-
-#: heat/common/exception.py:142
-msgid "Authorization failed."
-msgstr ""
-
-#: heat/common/exception.py:146
-msgid "You are not authenticated."
-msgstr ""
-
-#: heat/common/exception.py:150 heat/common/exception.py:155
-msgid "You are not authorized to complete this action."
-msgstr ""
-
-#: heat/common/exception.py:159
-#, python-format
-msgid "Data supplied was not valid: %(reason)s"
-msgstr ""
-
-#: heat/common/exception.py:163
-#, python-format
-msgid "Redirecting to %(uri)s for authorization."
-msgstr ""
-
-#: heat/common/exception.py:167
-msgid "There was an error configuring the client."
-msgstr ""
-
-#: heat/common/exception.py:171
-#, python-format
-msgid ""
-"The request returned a 302 Multiple Choices. This generally means that "
-"you have not included a version indicator in a request URI.\n"
-"\n"
-"The body of response returned:\n"
-"%(body)s"
-msgstr ""
-
-#: heat/common/exception.py:177
-#, python-format
-msgid ""
-"The request returned a 413 Request Entity Too Large. This generally means"
-" that rate limiting or a quota threshold was breached.\n"
-"\n"
-"The response body:\n"
-"%(body)s"
-msgstr ""
-
-#: heat/common/exception.py:188
-msgid ""
-"The request returned a 503 ServiceUnavilable. This generally occurs on "
-"service overload or other transient outage."
-msgstr ""
-
-#: heat/common/exception.py:199
-msgid "The URI was too long."
-msgstr ""
-
-#: heat/common/exception.py:203
-#, python-format
-msgid ""
-"The request returned 500 Internal Server Error\n"
-"\n"
-"The response body:\n"
-"%(body)s"
-msgstr ""
-
-#: heat/common/exception.py:208
-#, python-format
-msgid "Maximum redirects (%(redirects)s) was exceeded."
-msgstr ""
-
-#: heat/common/exception.py:212
-msgid "Received invalid HTTP redirect."
-msgstr ""
-
-#: heat/common/exception.py:216
-msgid "Response from Keystone does not contain a Heat endpoint."
-msgstr ""
-
-#: heat/common/exception.py:220
-#, python-format
-msgid ""
-"Multiple 'image' service matches for region %(region)s. This generally "
-"means that a region is required and you have not supplied one."
-msgstr ""
-
-#: heat/common/exception.py:226
-#, python-format
-msgid "The Parameter (%(key)s) was not provided."
-msgstr ""
-
-#: heat/common/exception.py:230
-#, python-format
-msgid "The Parameter (%(key)s) was not defined in template."
-msgstr ""
-
-#: heat/common/exception.py:234
-#, python-format
-msgid "The Referenced Attribute (%(resource)s %(key)s) is incorrect."
-msgstr ""
-
-#: heat/common/exception.py:239
-#, python-format
-msgid "The specified reference (%(resource)s %(key)s) is incorrect."
-msgstr ""
-
-#: heat/common/exception.py:244
-#, python-format
-msgid "The Key (%(key_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:248
-#, python-format
-msgid "The Flavor ID (%(flavor_id)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:252
-#, python-format
-msgid "The Image (%(image_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:256
-#, python-format
-msgid "Multiple images were found with name (%(image_name)s)."
-msgstr ""
-
-#: heat/common/exception.py:260
-#, python-format
-msgid "Searching Tenant %(target)s from Tenant %(actual)s forbidden."
-msgstr ""
-
-#: heat/common/exception.py:265
-#, python-format
-msgid "The Stack (%(stack_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:269
-#, python-format
-msgid "The Stack (%(stack_name)s) already exists."
-msgstr ""
-
-#: heat/common/exception.py:273
-#, python-format
-msgid "%(message)s"
-msgstr ""
-
-#: heat/common/exception.py:277
-#, python-format
-msgid ""
-"The Resource (%(resource_name)s) could not be found in Stack "
-"%(stack_name)s."
-msgstr ""
-
-#: heat/common/exception.py:282
-#, python-format
-msgid "The Resource Type (%(type_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:286
-#, python-format
-msgid "The Resource (%(resource_name)s) is not available."
-msgstr ""
-
-#: heat/common/exception.py:290
-#, python-format
-msgid "The Resource (%(resource_id)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:294
-#, python-format
-msgid "The Watch Rule (%(watch_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:298
-#, python-format
-msgid "%(exc_type)s: %(message)s"
-msgstr ""
-
-#: heat/common/exception.py:312
-#, python-format
-msgid "%(feature)s is not supported."
-msgstr ""
-
-#: heat/common/exception.py:316
-#, python-format
-msgid "Cannot define the following properties at the same time: %s."
-msgstr ""
-
-#: heat/common/exception.py:334
-msgid "Template exceeds maximum allowed size."
-msgstr ""
-
-#: heat/common/exception.py:338
-msgid ""
-"Egress rules are only allowed when Neutron is used and the 'VpcId' "
-"property is set."
-msgstr ""
-
-#: heat/common/plugin_loader.py:94
-#, python-format
-msgid "Failed to import module %s"
-msgstr ""
-
-#: heat/common/urlfetch.py:37
-#, python-format
-msgid "Fetching data from %s"
-msgstr ""
-
-#: heat/common/wsgi.py:55
-msgid ""
-"Address to bind the server. Useful when selecting a particular network "
-"interface."
-msgstr ""
-
-#: heat/common/wsgi.py:58
-msgid "The port on which the server will listen."
-msgstr ""
-
-#: heat/common/wsgi.py:65
-msgid "Number of backlog requests to configure the socket with"
-msgstr ""
-
-#: heat/common/wsgi.py:68
-msgid "Location of the SSL Certificate File to use for SSL mode"
-msgstr ""
-
-#: heat/common/wsgi.py:71
-msgid "Location of the SSL Key File to use for enabling SSL mode"
-msgstr ""
-
-#: heat/common/wsgi.py:78
-msgid "Number of workers for Heat service"
-msgstr ""
-
-#: heat/common/wsgi.py:129 heat/openstack/common/sslutils.py:61
-msgid ""
-"When running server in SSL mode, you must specify both a cert_file and "
-"key_file option value in your configuration file"
-msgstr ""
-
-#: heat/common/wsgi.py:147
-#, python-format
-msgid "Could not bind to %(bind_addr)safter trying for 30 seconds"
-msgstr ""
-
-#: heat/common/wsgi.py:179
-msgid "SIGTERM received"
-msgstr ""
-
-#: heat/common/wsgi.py:188
-msgid "SIGHUP received"
-msgstr ""
-
-#: heat/common/wsgi.py:203 heat/openstack/common/service.py:288
-#, python-format
-msgid "Starting %d workers"
-msgstr ""
-
-#: heat/common/wsgi.py:214
-#, python-format
-msgid "Removing dead child %s"
-msgstr ""
-
-#: heat/common/wsgi.py:221
-msgid "Caught keyboard interrupt. Exiting."
-msgstr ""
-
-#: heat/common/wsgi.py:225
-msgid "Exited"
-msgstr ""
-
-#: heat/common/wsgi.py:243
-#, python-format
-msgid "Child %d exiting normally"
-msgstr ""
-
-#: heat/common/wsgi.py:246
-#, python-format
-msgid "Started child %s"
-msgstr ""
-
-#: heat/common/wsgi.py:268
-msgid "Starting single process server"
-msgstr ""
-
-#: heat/common/wsgi.py:596
-#, python-format
-msgid "Exception handling resource: %s"
-msgstr ""
-
-#: heat/common/wsgi.py:597
-msgid ""
-"The server could not comply with the request since\r\n"
-"it is either malformed or otherwise incorrect.\r\n"
-msgstr ""
-
-#: heat/common/wsgi.py:611
-#, python-format
-msgid "Returning %(code)s to user: %(explanation)s"
-msgstr ""
-
-#: heat/common/wsgi.py:616
-#, python-format
-msgid "Unexpected error occurred serving API: %s"
-msgstr ""
-
-#: heat/db/sqlalchemy/api.py:393
-msgid "age should be an integer"
-msgstr ""
-
-#: heat/db/sqlalchemy/api.py:395
-msgid "age should be a positive integer"
-msgstr ""
-
-#: heat/db/sqlalchemy/migration.py:63
-msgid "python-migrate is not installed. Exiting."
-msgstr ""
-
-#: heat/db/sqlalchemy/migration.py:73
-msgid "version should be an integer"
-msgstr ""
-
-#: heat/doc/resources.py:85
-msgid "HOT Syntax"
-msgstr ""
-
-#: heat/doc/resources.py:107
-msgid "YAML Syntax"
-msgstr ""
-
-#: heat/doc/resources.py:129
-msgid "JSON Syntax"
-msgstr ""
-
-#: heat/doc/resources.py:161
-#, python-format
-msgid " from %s"
-msgstr ""
-
-#: heat/doc/resources.py:163
-#, python-format
-msgid " up to %s"
-msgstr ""
-
-#: heat/doc/resources.py:165
-#, python-format
-msgid " from length %s"
-msgstr ""
-
-#: heat/doc/resources.py:167
-#, python-format
-msgid " up to length %s"
-msgstr ""
-
-#: heat/doc/resources.py:174
-msgid "Not implemented."
-msgstr ""
-
-#: heat/doc/resources.py:185
-msgid "Required property."
-msgstr ""
-
-#: heat/doc/resources.py:189
-#, python-format
-msgid "Optional property, defaults to \"%s\"."
-msgstr ""
-
-#: heat/doc/resources.py:192
-msgid "Optional property."
-msgstr ""
-
-#: heat/doc/resources.py:196
-#, python-format
-msgid "Value must match pattern: %s"
-msgstr ""
-
-#: heat/doc/resources.py:203
-#, python-format
-msgid "Allowed values: %s"
-msgstr ""
-
-#: heat/doc/resources.py:209
-msgid "Map properties:"
-msgstr ""
-
-#: heat/doc/resources.py:215
-msgid "List contains maps with the properties:"
-msgstr ""
-
-#: heat/doc/resources.py:230
-msgid "Properties"
-msgstr ""
-
-#: heat/doc/resources.py:241
-msgid "Attributes"
-msgstr ""
-
-#: heat/engine/dependencies.py:25
-#, python-format
-msgid "Circular Dependency Found: %(cycle)s"
-msgstr ""
-
-#: heat/engine/environment.py:181
-#, python-format
-msgid "Changing %(path)s from %(was)s to %(now)s"
-msgstr ""
-
-#: heat/engine/environment.py:183
-#, python-format
-msgid "Registering %(path)s -> %(value)s"
-msgstr ""
-
-#: heat/engine/parser.py:74
-#, python-format
-msgid ""
-"Invalid stack name %s must contain only alphanumeric or \"_-.\" "
-"characters, must start with alpha"
-msgstr ""
-
-#: heat/engine/resource.py:55 heat/engine/resource.py:58
-#, python-format
-msgid "The Resource %s requires replacement."
-msgstr ""
-
-#: heat/engine/resource.py:132
-msgid "Resource name may not contain \"/\""
-msgstr ""
-
-#: heat/engine/scheduler.py:65
-#, python-format
-msgid "%s Timed out"
-msgstr ""
-
-#: heat/engine/service.py:311
-msgid "No Template provided."
-msgstr ""
-
-#: heat/engine/stack_resource.py:122
-#, python-format
-msgid "Cannot update %s, stack not created"
-msgstr ""
-
-#: heat/engine/stack_resource.py:151
-#, python-format
-msgid "Cannot suspend %s, stack not created"
-msgstr ""
-
-#: heat/engine/stack_resource.py:173
-#, python-format
-msgid "Cannot resume %s, stack not created"
-msgstr ""
-
-#: heat/engine/resources/__init__.py:39
-#, python-format
-msgid "Failed to load resources from %s"
-msgstr ""
-
-#: heat/engine/resources/instance.py:314
-#, python-format
-msgid "Build of server %(server)s failed: %(message)s (%(code)s)"
-msgstr ""
-
-#: heat/engine/resources/instance.py:321
-#, python-format
-msgid "Nova reported unexpected instance[%(name)s] status[%(status)s]"
-msgstr ""
-
-#: heat/engine/resources/instance.py:418
-#, python-format
-msgid "Cannot suspend %s, resource_id not set"
-msgstr ""
-
-#: heat/engine/resources/instance.py:424 heat/engine/resources/instance.py:478
-#, python-format
-msgid "Failed to find instance %s"
-msgstr ""
-
-#: heat/engine/resources/instance.py:455
-#, python-format
-msgid " nova reported unexpected instance[%(instance)s] status[%(status)s]"
-msgstr ""
-
-#: heat/engine/resources/instance.py:472
-#, python-format
-msgid "Cannot resume %s, resource_id not set"
-msgstr ""
-
-#: heat/engine/resources/nova_utils.py:221
-#, python-format
-msgid "Resizing to '%(flavor)s' failed, status '%(status)s'"
-msgstr ""
-
-#: heat/engine/resources/vpc.py:72
-#, python-format
-msgid "Multiple routers found with name %s"
-msgstr ""
-
-#: heat/engine/resources/neutron/loadbalancer.py:251
-msgid "The ID of the load balancing pool"
-msgstr ""
-
-#: heat/engine/resources/neutron/loadbalancer.py:254
-msgid "Port number on which the servers are running on the members"
-msgstr ""
-
-#: heat/engine/resources/neutron/loadbalancer.py:258
-msgid "The list of Nova server IDs load balanced"
-msgstr ""
-
-#: heat/engine/resources/rackspace/cloud_loadbalancer.py:33
-#, python-format
-msgid "There was an error building the loadbalancer:%(lb_name)s."
-msgstr ""
-
-#: heat/openstack/common/eventlet_backdoor.py:142
-#, python-format
-msgid "Eventlet backdoor listening on %(port)s for process %(pid)d"
-msgstr ""
-
-#: heat/openstack/common/exception.py:102
-msgid "Uncaught exception"
-msgstr ""
-
-#: heat/openstack/common/excutils.py:62
-#, python-format
-msgid "Original exception being dropped: %s"
-msgstr ""
-
-#: heat/openstack/common/excutils.py:90
-#, python-format
-msgid "Unexpected exception occurred %d time(s)... retrying."
-msgstr ""
-
-#: heat/openstack/common/fileutils.py:64
-#, python-format
-msgid "Reloading cached file %s"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:100
-#, python-format
-msgid "Could not release the acquired lock `%s`"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:166
-#, python-format
-msgid "Got semaphore \"%(lock)s\""
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:175
-#, python-format
-msgid "Attempting to grab file lock \"%(lock)s\""
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:185
-#, python-format
-msgid "Created lock path: %s"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:203
-#, python-format
-msgid "Got file lock \"%(lock)s\" at %(path)s"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:207
-#, python-format
-msgid "Released file lock \"%(lock)s\" at %(path)s"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:244
-#, python-format
-msgid "Got semaphore / lock \"%(function)s\""
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:248
-#, python-format
-msgid "Semaphore / lock released \"%(function)s\""
-msgstr ""
-
-#: heat/openstack/common/log.py:244
-#, python-format
-msgid "Deprecated: %s"
-msgstr ""
-
-#: heat/openstack/common/log.py:336
-#, python-format
-msgid "Error loading logging config %(log_config)s: %(err_msg)s"
-msgstr ""
-
-#: heat/openstack/common/log.py:386
-#, python-format
-msgid "syslog facility must be one of: %s"
-msgstr ""
-
-#: heat/openstack/common/log.py:556
-#, python-format
-msgid "Fatal call to deprecated config: %(msg)s"
-msgstr ""
-
-#: heat/openstack/common/loopingcall.py:84
-#, python-format
-msgid "task run outlasted interval by %s sec"
-msgstr ""
-
-#: heat/openstack/common/loopingcall.py:91
-msgid "in fixed duration looping call"
-msgstr ""
-
-#: heat/openstack/common/loopingcall.py:131
-#, python-format
-msgid "Dynamic looping call sleeping for %.02f seconds"
-msgstr ""
-
-#: heat/openstack/common/loopingcall.py:138
-msgid "in dynamic looping call"
-msgstr ""
-
-#: heat/openstack/common/policy.py:75
-msgid "JSON file containing policy"
-msgstr ""
-
-#: heat/openstack/common/policy.py:78
-msgid "Rule enforced when requested rule is not found"
-msgstr ""
-
-#: heat/openstack/common/policy.py:92
-#, python-format
-msgid "Policy doesn't allow %s to be performed."
-msgstr ""
-
-#: heat/openstack/common/policy.py:177
-#, python-format
-msgid "Rules must be an instance of dict or Rules, got %s instead"
-msgstr ""
-
-#: heat/openstack/common/policy.py:207
-msgid "Rules successfully reloaded"
-msgstr ""
-
-#: heat/openstack/common/policy.py:253
-#, python-format
-msgid "Rule %s will be now enforced"
-msgstr ""
-
-#: heat/openstack/common/policy.py:268
-#, python-format
-msgid "Rule [%s] doesn't exist"
-msgstr ""
-
-#: heat/openstack/common/policy.py:478
-#, python-format
-msgid "Failed to understand rule %s"
-msgstr ""
-
-#: heat/openstack/common/policy.py:488
-#, python-format
-msgid "No handler for matches of kind %s"
-msgstr ""
-
-#: heat/openstack/common/policy.py:759
-#, python-format
-msgid "Failed to understand rule %r"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:127
-#, python-format
-msgid "Got unknown keyword args to utils.execute: %r"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:142
-#, python-format
-msgid "Running cmd (subprocess): %s"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:167
-#: heat/openstack/common/processutils.py:240
-#, python-format
-msgid "Result was %s"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:179
-#, python-format
-msgid "%r failed. Retrying."
-msgstr ""
-
-#: heat/openstack/common/processutils.py:219
-#, python-format
-msgid "Running cmd (SSH): %s"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:221
-msgid "Environment not supported over SSH"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:225
-msgid "process_input not supported over SSH"
-msgstr ""
-
-#: heat/openstack/common/service.py:118 heat/openstack/common/service.py:335
-msgid "Full set of CONF:"
-msgstr ""
-
-#: heat/openstack/common/service.py:127 heat/openstack/common/service.py:217
-#, python-format
-msgid "Caught %s, exiting"
-msgstr ""
-
-#: heat/openstack/common/service.py:139
-msgid "Exception during rpc cleanup."
-msgstr ""
-
-#: heat/openstack/common/service.py:188
-msgid "Parent process has died unexpectedly, exiting"
-msgstr ""
-
-#: heat/openstack/common/service.py:223
-msgid "Unhandled exception"
-msgstr ""
-
-#: heat/openstack/common/service.py:256
-msgid "Forking too fast, sleeping"
-msgstr ""
-
-#: heat/openstack/common/service.py:278
-#, python-format
-msgid "Started child %d"
-msgstr ""
-
-#: heat/openstack/common/service.py:305
-#, python-format
-msgid "Child %(pid)d killed by signal %(sig)d"
-msgstr ""
-
-#: heat/openstack/common/service.py:309
-#, python-format
-msgid "Child %(pid)s exited with status %(code)d"
-msgstr ""
-
-#: heat/openstack/common/service.py:313
-#, python-format
-msgid "pid %d not in child list"
-msgstr ""
-
-#: heat/openstack/common/service.py:345
-#, python-format
-msgid "Caught %s, stopping children"
-msgstr ""
-
-#: heat/openstack/common/service.py:363
-#, python-format
-msgid "Waiting on %d children to exit"
-msgstr ""
-
-#: heat/openstack/common/sslutils.py:52
-#, python-format
-msgid "Unable to find cert_file : %s"
-msgstr ""
-
-#: heat/openstack/common/sslutils.py:55
-#, python-format
-msgid "Unable to find ca_file : %s"
-msgstr ""
-
-#: heat/openstack/common/sslutils.py:58
-#, python-format
-msgid "Unable to find key_file : %s"
-msgstr ""
-
-#: heat/openstack/common/sslutils.py:100
-#, python-format
-msgid "Invalid SSL version : %s"
-msgstr ""
-
-#: heat/openstack/common/db/exception.py:44
-msgid "Invalid Parameter: Unicode is not supported by the current database."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/session.py:552
-msgid "DB exception wrapped."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/session.py:614
-#, python-format
-msgid "Got mysql server has gone away: %s"
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/session.py:694
-#, python-format
-msgid "SQL connection failed. %s attempts left."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:50
-msgid "Sort key supplied was not valid."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:89
-msgid "Id not in sort_keys; is sort_keys unique?"
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:111
-msgid "Unknown sort direction, must be 'desc' or 'asc'"
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:182
-#, python-format
-msgid ""
-"Please specify column %s in col_name_col_instance param. It is required "
-"because column has unsupported type by sqlite)."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:188
-#, python-format
-msgid ""
-"col_name_col_instance param has wrong type of column instance for column "
-"%s It should be instance of sqlalchemy.Column."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:268
-#, python-format
-msgid "Deleting duplicated row with id: %(id)s from table: %(table)s"
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:290
-msgid "Unsupported id columns type"
-msgstr ""
-
-#: heat/openstack/common/notifier/api.py:129
-#, python-format
-msgid "%s not in valid priorities"
-msgstr ""
-
-#: heat/openstack/common/notifier/api.py:145
-#, python-format
-msgid ""
-"Problem '%(e)s' attempting to send to notification system. "
-"Payload=%(payload)s"
-msgstr ""
-
-#: heat/openstack/common/notifier/api.py:164
-#, python-format
-msgid "Failed to load notifier %s. These notifications will not be sent."
-msgstr ""
-
-#: heat/openstack/common/notifier/list_notifier.py:112
-#, python-format
-msgid "Problem '%(e)s' attempting to send to notification driver %(driver)s."
-msgstr ""
-
-#: heat/openstack/common/notifier/rabbit_notifier.py:27
-msgid "The rabbit_notifier is now deprecated. Please use rpc_notifier instead."
-msgstr ""
-
-#: heat/openstack/common/notifier/rpc_notifier.py:45
-#: heat/openstack/common/notifier/rpc_notifier2.py:51
-#, python-format
-msgid "Could not send notification to %(topic)s. Payload=%(message)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/__init__.py:107
-#, python-format
-msgid ""
-"A RPC is being made while holding a lock. The locks currently held are "
-"%(locks)s. This is probably a bug. Please report it. Include the "
-"following: [%(stack)s]."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:75
-msgid "Pool creating new connection"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:202
-#, python-format
-msgid "No calling threads waiting for msg_id : %(msg_id)s, message : %(data)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:205
-#, python-format
-msgid "_call_waiters: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:212
-#, python-format
-msgid ""
-"Number of call waiters is greater than warning threshhold: %d. There "
-"could be a MulticallProxyWaiter leak."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:290
-#, python-format
-msgid "unpacked context: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:336
-#, python-format
-msgid "UNIQUE_ID is %s."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:408
-#, python-format
-msgid "received %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:416
-#, python-format
-msgid "no method for message: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:417
-#, python-format
-msgid "No method for message: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:445
-#: heat/openstack/common/rpc/impl_zmq.py:280
-#, python-format
-msgid "Expected exception during message handling (%s)"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:453
-#: heat/openstack/common/rpc/impl_zmq.py:286
-msgid "Exception during message handling"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:527
-#, python-format
-msgid "Making synchronous call on %s ..."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:530
-#, python-format
-msgid "MSG_ID is %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:556
-#, python-format
-msgid "Making asynchronous cast on %s..."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:565
-msgid "Making asynchronous fanout cast..."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:593
-#, python-format
-msgid "Sending %(event_type)s on %(topic)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:77
-msgid "An unknown RPC related exception occurred."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:107
-#, python-format
-msgid ""
-"Remote error: %(exc_type)s %(value)s\n"
-"%(traceback)s."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:124
-#, python-format
-msgid ""
-"Timeout while waiting on RPC response - topic: \"%(topic)s\", RPC method:"
-" \"%(method)s\" info: \"%(info)s\""
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:141
-#: heat/openstack/common/rpc/common.py:142
-#: heat/openstack/common/rpc/common.py:143
-msgid "<unknown>"
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:147
-#, python-format
-msgid "Found duplicate message(%(msg_id)s). Skipping it."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:151
-msgid "Invalid reuse of an RPC connection."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:155
-#, python-format
-msgid "Specified RPC version, %(version)s, not supported by this endpoint."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:160
-#, python-format
-msgid ""
-"Specified RPC envelope version, %(version)s, not supported by this "
-"endpoint."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:165
-#, python-format
-msgid "Specified RPC version cap, %(version_cap)s, is too low"
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:289
-#, python-format
-msgid "Returning exception %s to caller"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:162
-msgid "Failed to process message ... skipping it."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:165
-msgid "Failed to process message ... will requeue."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:502
-#, python-format
-msgid "Reconnecting to AMQP server on %(hostname)s:%(port)d"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:524
-#, python-format
-msgid "Connected to AMQP server on %(hostname)s:%(port)d"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:561
-#, python-format
-msgid ""
-"Unable to connect to AMQP server on %(hostname)s:%(port)d after "
-"%(max_retries)d tries: %(err_str)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:575
-#, python-format
-msgid ""
-"AMQP server on %(hostname)s:%(port)d is unreachable: %(err_str)s. Trying "
-"again in %(sleep_time)d seconds."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:629
-#: heat/openstack/common/rpc/impl_qpid.py:492
-#, python-format
-msgid "Failed to declare consumer for topic '%(topic)s': %(err_str)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:647
-#: heat/openstack/common/rpc/impl_qpid.py:507
-#, python-format
-msgid "Timed out waiting for RPC response: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:651
-#: heat/openstack/common/rpc/impl_qpid.py:511
-#, python-format
-msgid "Failed to consume message from queue: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:690
-#: heat/openstack/common/rpc/impl_qpid.py:546
-#, python-format
-msgid "Failed to publish message to topic '%(topic)s': %(err_str)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:161
-msgid "Failed to process message... skipping it."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:434
-#, python-format
-msgid "Unable to connect to AMQP server: %(e)s. Sleeping %(delay)s seconds"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:440
-#, python-format
-msgid "Connected to AMQP server on %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:453
-msgid "Re-established AMQP queues"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:519
-msgid "Error processing message. Skipping it."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:96
-msgid "JSON serialization failed."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:101
-#, python-format
-msgid "Deserializing: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:136
-#, python-format
-msgid "Connecting to %(addr)s with %(type)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:137
-#, python-format
-msgid "-> Subscribed to %(subscribe)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:138
-#, python-format
-msgid "-> bind: %(bind)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:146
-msgid "Could not open socket."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:158
-#, python-format
-msgid "Subscribing to %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:200
-msgid "You cannot recv on this socket."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:205
-msgid "You cannot send on this socket."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:267
-#, python-format
-msgid "Running func with context: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:305
-msgid "Sending reply"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:339
-msgid "RPC message did not include method."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:371
-msgid "Registering reactor"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:383
-msgid "In reactor registered"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:387
-msgid "Consuming socket"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:437
-#, python-format
-msgid "Creating proxy for topic: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:443
-msgid "Topic contained dangerous characters."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:475
-msgid "Topic socket file creation failed."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:481
-#, python-format
-msgid "Local per-topic backlog buffer full for topic %(topic)s. Dropping message."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:497
-#, python-format
-msgid "Required IPC directory does not exist at %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:506
-#, python-format
-msgid "Permission denied to IPC directory at %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:509
-msgid "Could not create ZeroMQ receiver daemon. Socket may already be in use."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:543
-#, python-format
-msgid "CONSUMER RECEIVED DATA: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:562
-msgid "ZMQ Envelope version unsupported or unknown."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:590
-msgid "Skipping topic registration. Already registered."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:597
-#, python-format
-msgid "Consumer is a zmq.%s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:649
-msgid "Creating payload"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:662
-msgid "Creating queue socket for reply waiter"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:675
-msgid "Sending cast"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:678
-msgid "Cast sent; Waiting reply"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:681
-#, python-format
-msgid "Received message: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:682
-msgid "Unpacking response"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:691
-msgid "Unsupported or unknown ZMQ envelope returned."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:698
-msgid "RPC Message Invalid."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:721
-#, python-format
-msgid "%(msg)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:724
-#, python-format
-msgid "Sending message(s) to: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:728
-msgid "No matchmaker results. Not casting."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:731
-msgid "No match from matchmaker."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:813
-#, python-format
-msgid "rpc_zmq_matchmaker = %(orig)s is deprecated; use %(new)s instead"
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker.py:47
-msgid "Match not found by MatchMaker."
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker.py:81
-msgid "Matchmaker does not implement registration or heartbeat."
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker.py:217
-#, python-format
-msgid "Matchmaker unregistered: %(key)s, %(host)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker.py:229
-msgid "Register before starting heartbeat."
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker_ring.py:77
-#: heat/openstack/common/rpc/matchmaker_ring.py:95
-#, python-format
-msgid "No key defining hosts for topic '%s', see ringfile"
-msgstr ""
-
-#: heat/openstack/common/rpc/service.py:49
-#, python-format
-msgid "Creating Consumer connection for Service %s"
-msgstr ""
-
-#: heat/tests/test_exception.py:24
-#, python-format
-msgid "Testing message %(text)s"
-msgstr ""
-
+++ /dev/null
-# Translations template for heat.
-# Copyright (C) 2013 ORGANIZATION
-# This file is distributed under the same license as the heat project.
-# FIRST AUTHOR <EMAIL@ADDRESS>, 2013.
-#
-#, fuzzy
-msgid ""
-msgstr ""
-"Project-Id-Version: heat 2013.2.a430.gd264018\n"
-"Report-Msgid-Bugs-To: EMAIL@ADDRESS\n"
-"POT-Creation-Date: 2013-08-27 11:15+0800\n"
-"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
-"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
-"Language-Team: LANGUAGE <LL@li.org>\n"
-"MIME-Version: 1.0\n"
-"Content-Type: text/plain; charset=utf-8\n"
-"Content-Transfer-Encoding: 8bit\n"
-"Generated-By: Babel 0.9.6\n"
-
-#: heat/api/aws/ec2token.py:38
-msgid "Authentication Endpoint URI"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:260
-#, python-format
-msgid "Failed to fetch template: %s"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:293
-msgid "DisableRollback and OnFailure may not be used together"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:313
-#, python-format
-msgid "Unexpected action %(action)s"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:331 heat/api/cfn/v1/stacks.py:413
-msgid "Invalid Template URL"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:335 heat/api/cfn/v1/stacks.py:416
-msgid "TemplateBody or TemplateUrl were not given."
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:341 heat/api/cfn/v1/stacks.py:422
-msgid "The Template must be a JSON or YAML document."
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:383
-msgid "stack not not found"
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:49
-#, python-format
-msgid "Processing request: %(method)s %(path)s Accept: %(accept)s"
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:64
-#, python-format
-msgid "Matched versioned URI. Version: %(major_version)d.%(minor_version)d"
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:72
-#, python-format
-msgid ""
-"Unknown version in versioned URI: %(major_version)d.%(minor_version)d. "
-"Returning version choices."
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:88
-#, python-format
-msgid "Matched versioned media type. Version: %(major_version)d.%(minor_version)d"
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:94
-#, python-format
-msgid ""
-"Unknown version in accept header: "
-"%(major_version)d.%(minor_version)d...returning version choices."
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:102
-#, python-format
-msgid "Unknown accept header: %s...returning version choices."
-msgstr ""
-
-#: heat/api/openstack/v1/actions.py:43
-msgid "No action specified"
-msgstr ""
-
-#: heat/api/openstack/v1/actions.py:46
-msgid "Multiple actions specified"
-msgstr ""
-
-#: heat/api/openstack/v1/actions.py:50
-#, python-format
-msgid "Invalid action %s specified"
-msgstr ""
-
-#: heat/api/openstack/v1/actions.py:57
-#, python-format
-msgid "Unexpected action %s"
-msgstr ""
-
-#: heat/api/openstack/v1/events.py:100
-#, python-format
-msgid "No events found for resource %s"
-msgstr ""
-
-#: heat/api/openstack/v1/events.py:118
-#, python-format
-msgid "No event %s found"
-msgstr ""
-
-#: heat/api/openstack/v1/stacks.py:75
-#, python-format
-msgid "%s not in valid format"
-msgstr ""
-
-#: heat/api/openstack/v1/stacks.py:83
-msgid "No stack name specified"
-msgstr ""
-
-#: heat/api/openstack/v1/stacks.py:101
-#, python-format
-msgid "Could not retrieve template: %s"
-msgstr ""
-
-#: heat/api/openstack/v1/stacks.py:104
-msgid "No template specified"
-msgstr ""
-
-#: heat/api/openstack/v1/util.py:56
-msgid "Invalid Stack address"
-msgstr ""
-
-#: heat/common/auth_password.py:116
-msgid "Request missing required header X-Auth-Url"
-msgstr ""
-
-#: heat/common/auth_password.py:121
-#, python-format
-msgid "Header X-Auth-Url \"%s\" not an allowed endpoint"
-msgstr ""
-
-#: heat/common/config.py:37
-msgid "The flavor to use"
-msgstr ""
-
-#: heat/common/config.py:39
-msgid "The API paste config file to use"
-msgstr ""
-
-#: heat/common/config.py:102
-msgid "Allow orchestration of multiple clouds"
-msgstr ""
-
-#: heat/common/config.py:105
-msgid ""
-"Allowed keystone endpoints for auth_uri when multi_cloud is enabled. At "
-"least one endpoint needs to be specified."
-msgstr ""
-
-#: heat/common/exception.py:102
-msgid "An unknown exception occurred."
-msgstr ""
-
-#: heat/common/exception.py:113 heat/openstack/common/rpc/common.py:89
-msgid "Exception in string format operation"
-msgstr ""
-
-#: heat/common/exception.py:125
-#, python-format
-msgid "Missing required credential: %(required)s"
-msgstr ""
-
-#: heat/common/exception.py:129
-#, python-format
-msgid ""
-"Incorrect auth strategy, expected \"%(expected)s\" but received "
-"\"%(received)s\""
-msgstr ""
-
-#: heat/common/exception.py:134
-#, python-format
-msgid "Connect error/bad request to Auth service at URL %(url)s."
-msgstr ""
-
-#: heat/common/exception.py:138
-#, python-format
-msgid "Auth service at URL %(url)s not found."
-msgstr ""
-
-#: heat/common/exception.py:142
-msgid "Authorization failed."
-msgstr ""
-
-#: heat/common/exception.py:146
-msgid "You are not authenticated."
-msgstr ""
-
-#: heat/common/exception.py:150 heat/common/exception.py:155
-msgid "You are not authorized to complete this action."
-msgstr ""
-
-#: heat/common/exception.py:159
-#, python-format
-msgid "Data supplied was not valid: %(reason)s"
-msgstr ""
-
-#: heat/common/exception.py:163
-#, python-format
-msgid "Redirecting to %(uri)s for authorization."
-msgstr ""
-
-#: heat/common/exception.py:167
-msgid "There was an error configuring the client."
-msgstr ""
-
-#: heat/common/exception.py:171
-#, python-format
-msgid ""
-"The request returned a 302 Multiple Choices. This generally means that "
-"you have not included a version indicator in a request URI.\n"
-"\n"
-"The body of response returned:\n"
-"%(body)s"
-msgstr ""
-
-#: heat/common/exception.py:177
-#, python-format
-msgid ""
-"The request returned a 413 Request Entity Too Large. This generally means"
-" that rate limiting or a quota threshold was breached.\n"
-"\n"
-"The response body:\n"
-"%(body)s"
-msgstr ""
-
-#: heat/common/exception.py:188
-msgid ""
-"The request returned a 503 ServiceUnavilable. This generally occurs on "
-"service overload or other transient outage."
-msgstr ""
-
-#: heat/common/exception.py:199
-msgid "The URI was too long."
-msgstr ""
-
-#: heat/common/exception.py:203
-#, python-format
-msgid ""
-"The request returned 500 Internal Server Error\n"
-"\n"
-"The response body:\n"
-"%(body)s"
-msgstr ""
-
-#: heat/common/exception.py:208
-#, python-format
-msgid "Maximum redirects (%(redirects)s) was exceeded."
-msgstr ""
-
-#: heat/common/exception.py:212
-msgid "Received invalid HTTP redirect."
-msgstr ""
-
-#: heat/common/exception.py:216
-msgid "Response from Keystone does not contain a Heat endpoint."
-msgstr ""
-
-#: heat/common/exception.py:220
-#, python-format
-msgid ""
-"Multiple 'image' service matches for region %(region)s. This generally "
-"means that a region is required and you have not supplied one."
-msgstr ""
-
-#: heat/common/exception.py:226
-#, python-format
-msgid "The Parameter (%(key)s) was not provided."
-msgstr ""
-
-#: heat/common/exception.py:230
-#, python-format
-msgid "The Parameter (%(key)s) was not defined in template."
-msgstr ""
-
-#: heat/common/exception.py:234
-#, python-format
-msgid "The Referenced Attribute (%(resource)s %(key)s) is incorrect."
-msgstr ""
-
-#: heat/common/exception.py:239
-#, python-format
-msgid "The specified reference (%(resource)s %(key)s) is incorrect."
-msgstr ""
-
-#: heat/common/exception.py:244
-#, python-format
-msgid "The Key (%(key_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:248
-#, python-format
-msgid "The Flavor ID (%(flavor_id)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:252
-#, python-format
-msgid "The Image (%(image_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:256
-#, python-format
-msgid "Multiple images were found with name (%(image_name)s)."
-msgstr ""
-
-#: heat/common/exception.py:260
-#, python-format
-msgid "Searching Tenant %(target)s from Tenant %(actual)s forbidden."
-msgstr ""
-
-#: heat/common/exception.py:265
-#, python-format
-msgid "The Stack (%(stack_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:269
-#, python-format
-msgid "The Stack (%(stack_name)s) already exists."
-msgstr ""
-
-#: heat/common/exception.py:273
-#, python-format
-msgid "%(message)s"
-msgstr ""
-
-#: heat/common/exception.py:277
-#, python-format
-msgid ""
-"The Resource (%(resource_name)s) could not be found in Stack "
-"%(stack_name)s."
-msgstr ""
-
-#: heat/common/exception.py:282
-#, python-format
-msgid "The Resource Type (%(type_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:286
-#, python-format
-msgid "The Resource (%(resource_name)s) is not available."
-msgstr ""
-
-#: heat/common/exception.py:290
-#, python-format
-msgid "The Resource (%(resource_id)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:294
-#, python-format
-msgid "The Watch Rule (%(watch_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:298
-#, python-format
-msgid "%(exc_type)s: %(message)s"
-msgstr ""
-
-#: heat/common/exception.py:312
-#, python-format
-msgid "%(feature)s is not supported."
-msgstr ""
-
-#: heat/common/exception.py:316
-#, python-format
-msgid "Cannot define the following properties at the same time: %s."
-msgstr ""
-
-#: heat/common/exception.py:334
-msgid "Template exceeds maximum allowed size."
-msgstr ""
-
-#: heat/common/exception.py:338
-msgid ""
-"Egress rules are only allowed when Neutron is used and the 'VpcId' "
-"property is set."
-msgstr ""
-
-#: heat/common/plugin_loader.py:94
-#, python-format
-msgid "Failed to import module %s"
-msgstr ""
-
-#: heat/common/urlfetch.py:37
-#, python-format
-msgid "Fetching data from %s"
-msgstr ""
-
-#: heat/common/wsgi.py:55
-msgid ""
-"Address to bind the server. Useful when selecting a particular network "
-"interface."
-msgstr ""
-
-#: heat/common/wsgi.py:58
-msgid "The port on which the server will listen."
-msgstr ""
-
-#: heat/common/wsgi.py:65
-msgid "Number of backlog requests to configure the socket with"
-msgstr ""
-
-#: heat/common/wsgi.py:68
-msgid "Location of the SSL Certificate File to use for SSL mode"
-msgstr ""
-
-#: heat/common/wsgi.py:71
-msgid "Location of the SSL Key File to use for enabling SSL mode"
-msgstr ""
-
-#: heat/common/wsgi.py:78
-msgid "Number of workers for Heat service"
-msgstr ""
-
-#: heat/common/wsgi.py:129 heat/openstack/common/sslutils.py:61
-msgid ""
-"When running server in SSL mode, you must specify both a cert_file and "
-"key_file option value in your configuration file"
-msgstr ""
-
-#: heat/common/wsgi.py:147
-#, python-format
-msgid "Could not bind to %(bind_addr)safter trying for 30 seconds"
-msgstr ""
-
-#: heat/common/wsgi.py:179
-msgid "SIGTERM received"
-msgstr ""
-
-#: heat/common/wsgi.py:188
-msgid "SIGHUP received"
-msgstr ""
-
-#: heat/common/wsgi.py:203 heat/openstack/common/service.py:288
-#, python-format
-msgid "Starting %d workers"
-msgstr ""
-
-#: heat/common/wsgi.py:214
-#, python-format
-msgid "Removing dead child %s"
-msgstr ""
-
-#: heat/common/wsgi.py:221
-msgid "Caught keyboard interrupt. Exiting."
-msgstr ""
-
-#: heat/common/wsgi.py:225
-msgid "Exited"
-msgstr ""
-
-#: heat/common/wsgi.py:243
-#, python-format
-msgid "Child %d exiting normally"
-msgstr ""
-
-#: heat/common/wsgi.py:246
-#, python-format
-msgid "Started child %s"
-msgstr ""
-
-#: heat/common/wsgi.py:268
-msgid "Starting single process server"
-msgstr ""
-
-#: heat/common/wsgi.py:596
-#, python-format
-msgid "Exception handling resource: %s"
-msgstr ""
-
-#: heat/common/wsgi.py:597
-msgid ""
-"The server could not comply with the request since\r\n"
-"it is either malformed or otherwise incorrect.\r\n"
-msgstr ""
-
-#: heat/common/wsgi.py:611
-#, python-format
-msgid "Returning %(code)s to user: %(explanation)s"
-msgstr ""
-
-#: heat/common/wsgi.py:616
-#, python-format
-msgid "Unexpected error occurred serving API: %s"
-msgstr ""
-
-#: heat/db/sqlalchemy/api.py:393
-msgid "age should be an integer"
-msgstr ""
-
-#: heat/db/sqlalchemy/api.py:395
-msgid "age should be a positive integer"
-msgstr ""
-
-#: heat/db/sqlalchemy/migration.py:63
-msgid "python-migrate is not installed. Exiting."
-msgstr ""
-
-#: heat/db/sqlalchemy/migration.py:73
-msgid "version should be an integer"
-msgstr ""
-
-#: heat/doc/resources.py:85
-msgid "HOT Syntax"
-msgstr ""
-
-#: heat/doc/resources.py:107
-msgid "YAML Syntax"
-msgstr ""
-
-#: heat/doc/resources.py:129
-msgid "JSON Syntax"
-msgstr ""
-
-#: heat/doc/resources.py:161
-#, python-format
-msgid " from %s"
-msgstr ""
-
-#: heat/doc/resources.py:163
-#, python-format
-msgid " up to %s"
-msgstr ""
-
-#: heat/doc/resources.py:165
-#, python-format
-msgid " from length %s"
-msgstr ""
-
-#: heat/doc/resources.py:167
-#, python-format
-msgid " up to length %s"
-msgstr ""
-
-#: heat/doc/resources.py:174
-msgid "Not implemented."
-msgstr ""
-
-#: heat/doc/resources.py:185
-msgid "Required property."
-msgstr ""
-
-#: heat/doc/resources.py:189
-#, python-format
-msgid "Optional property, defaults to \"%s\"."
-msgstr ""
-
-#: heat/doc/resources.py:192
-msgid "Optional property."
-msgstr ""
-
-#: heat/doc/resources.py:196
-#, python-format
-msgid "Value must match pattern: %s"
-msgstr ""
-
-#: heat/doc/resources.py:203
-#, python-format
-msgid "Allowed values: %s"
-msgstr ""
-
-#: heat/doc/resources.py:209
-msgid "Map properties:"
-msgstr ""
-
-#: heat/doc/resources.py:215
-msgid "List contains maps with the properties:"
-msgstr ""
-
-#: heat/doc/resources.py:230
-msgid "Properties"
-msgstr ""
-
-#: heat/doc/resources.py:241
-msgid "Attributes"
-msgstr ""
-
-#: heat/engine/dependencies.py:25
-#, python-format
-msgid "Circular Dependency Found: %(cycle)s"
-msgstr ""
-
-#: heat/engine/environment.py:181
-#, python-format
-msgid "Changing %(path)s from %(was)s to %(now)s"
-msgstr ""
-
-#: heat/engine/environment.py:183
-#, python-format
-msgid "Registering %(path)s -> %(value)s"
-msgstr ""
-
-#: heat/engine/parser.py:74
-#, python-format
-msgid ""
-"Invalid stack name %s must contain only alphanumeric or \"_-.\" "
-"characters, must start with alpha"
-msgstr ""
-
-#: heat/engine/resource.py:55 heat/engine/resource.py:58
-#, python-format
-msgid "The Resource %s requires replacement."
-msgstr ""
-
-#: heat/engine/resource.py:132
-msgid "Resource name may not contain \"/\""
-msgstr ""
-
-#: heat/engine/scheduler.py:65
-#, python-format
-msgid "%s Timed out"
-msgstr ""
-
-#: heat/engine/service.py:311
-msgid "No Template provided."
-msgstr ""
-
-#: heat/engine/stack_resource.py:122
-#, python-format
-msgid "Cannot update %s, stack not created"
-msgstr ""
-
-#: heat/engine/stack_resource.py:151
-#, python-format
-msgid "Cannot suspend %s, stack not created"
-msgstr ""
-
-#: heat/engine/stack_resource.py:173
-#, python-format
-msgid "Cannot resume %s, stack not created"
-msgstr ""
-
-#: heat/engine/resources/__init__.py:39
-#, python-format
-msgid "Failed to load resources from %s"
-msgstr ""
-
-#: heat/engine/resources/instance.py:314
-#, python-format
-msgid "Build of server %(server)s failed: %(message)s (%(code)s)"
-msgstr ""
-
-#: heat/engine/resources/instance.py:321
-#, python-format
-msgid "Nova reported unexpected instance[%(name)s] status[%(status)s]"
-msgstr ""
-
-#: heat/engine/resources/instance.py:418
-#, python-format
-msgid "Cannot suspend %s, resource_id not set"
-msgstr ""
-
-#: heat/engine/resources/instance.py:424 heat/engine/resources/instance.py:478
-#, python-format
-msgid "Failed to find instance %s"
-msgstr ""
-
-#: heat/engine/resources/instance.py:455
-#, python-format
-msgid " nova reported unexpected instance[%(instance)s] status[%(status)s]"
-msgstr ""
-
-#: heat/engine/resources/instance.py:472
-#, python-format
-msgid "Cannot resume %s, resource_id not set"
-msgstr ""
-
-#: heat/engine/resources/nova_utils.py:221
-#, python-format
-msgid "Resizing to '%(flavor)s' failed, status '%(status)s'"
-msgstr ""
-
-#: heat/engine/resources/vpc.py:72
-#, python-format
-msgid "Multiple routers found with name %s"
-msgstr ""
-
-#: heat/engine/resources/neutron/loadbalancer.py:251
-msgid "The ID of the load balancing pool"
-msgstr ""
-
-#: heat/engine/resources/neutron/loadbalancer.py:254
-msgid "Port number on which the servers are running on the members"
-msgstr ""
-
-#: heat/engine/resources/neutron/loadbalancer.py:258
-msgid "The list of Nova server IDs load balanced"
-msgstr ""
-
-#: heat/engine/resources/rackspace/cloud_loadbalancer.py:33
-#, python-format
-msgid "There was an error building the loadbalancer:%(lb_name)s."
-msgstr ""
-
-#: heat/openstack/common/eventlet_backdoor.py:142
-#, python-format
-msgid "Eventlet backdoor listening on %(port)s for process %(pid)d"
-msgstr ""
-
-#: heat/openstack/common/exception.py:102
-msgid "Uncaught exception"
-msgstr ""
-
-#: heat/openstack/common/excutils.py:62
-#, python-format
-msgid "Original exception being dropped: %s"
-msgstr ""
-
-#: heat/openstack/common/excutils.py:90
-#, python-format
-msgid "Unexpected exception occurred %d time(s)... retrying."
-msgstr ""
-
-#: heat/openstack/common/fileutils.py:64
-#, python-format
-msgid "Reloading cached file %s"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:100
-#, python-format
-msgid "Could not release the acquired lock `%s`"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:166
-#, python-format
-msgid "Got semaphore \"%(lock)s\""
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:175
-#, python-format
-msgid "Attempting to grab file lock \"%(lock)s\""
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:185
-#, python-format
-msgid "Created lock path: %s"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:203
-#, python-format
-msgid "Got file lock \"%(lock)s\" at %(path)s"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:207
-#, python-format
-msgid "Released file lock \"%(lock)s\" at %(path)s"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:244
-#, python-format
-msgid "Got semaphore / lock \"%(function)s\""
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:248
-#, python-format
-msgid "Semaphore / lock released \"%(function)s\""
-msgstr ""
-
-#: heat/openstack/common/log.py:244
-#, python-format
-msgid "Deprecated: %s"
-msgstr ""
-
-#: heat/openstack/common/log.py:336
-#, python-format
-msgid "Error loading logging config %(log_config)s: %(err_msg)s"
-msgstr ""
-
-#: heat/openstack/common/log.py:386
-#, python-format
-msgid "syslog facility must be one of: %s"
-msgstr ""
-
-#: heat/openstack/common/log.py:556
-#, python-format
-msgid "Fatal call to deprecated config: %(msg)s"
-msgstr ""
-
-#: heat/openstack/common/loopingcall.py:84
-#, python-format
-msgid "task run outlasted interval by %s sec"
-msgstr ""
-
-#: heat/openstack/common/loopingcall.py:91
-msgid "in fixed duration looping call"
-msgstr ""
-
-#: heat/openstack/common/loopingcall.py:131
-#, python-format
-msgid "Dynamic looping call sleeping for %.02f seconds"
-msgstr ""
-
-#: heat/openstack/common/loopingcall.py:138
-msgid "in dynamic looping call"
-msgstr ""
-
-#: heat/openstack/common/policy.py:75
-msgid "JSON file containing policy"
-msgstr ""
-
-#: heat/openstack/common/policy.py:78
-msgid "Rule enforced when requested rule is not found"
-msgstr ""
-
-#: heat/openstack/common/policy.py:92
-#, python-format
-msgid "Policy doesn't allow %s to be performed."
-msgstr ""
-
-#: heat/openstack/common/policy.py:177
-#, python-format
-msgid "Rules must be an instance of dict or Rules, got %s instead"
-msgstr ""
-
-#: heat/openstack/common/policy.py:207
-msgid "Rules successfully reloaded"
-msgstr ""
-
-#: heat/openstack/common/policy.py:253
-#, python-format
-msgid "Rule %s will be now enforced"
-msgstr ""
-
-#: heat/openstack/common/policy.py:268
-#, python-format
-msgid "Rule [%s] doesn't exist"
-msgstr ""
-
-#: heat/openstack/common/policy.py:478
-#, python-format
-msgid "Failed to understand rule %s"
-msgstr ""
-
-#: heat/openstack/common/policy.py:488
-#, python-format
-msgid "No handler for matches of kind %s"
-msgstr ""
-
-#: heat/openstack/common/policy.py:759
-#, python-format
-msgid "Failed to understand rule %r"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:127
-#, python-format
-msgid "Got unknown keyword args to utils.execute: %r"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:142
-#, python-format
-msgid "Running cmd (subprocess): %s"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:167
-#: heat/openstack/common/processutils.py:240
-#, python-format
-msgid "Result was %s"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:179
-#, python-format
-msgid "%r failed. Retrying."
-msgstr ""
-
-#: heat/openstack/common/processutils.py:219
-#, python-format
-msgid "Running cmd (SSH): %s"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:221
-msgid "Environment not supported over SSH"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:225
-msgid "process_input not supported over SSH"
-msgstr ""
-
-#: heat/openstack/common/service.py:118 heat/openstack/common/service.py:335
-msgid "Full set of CONF:"
-msgstr ""
-
-#: heat/openstack/common/service.py:127 heat/openstack/common/service.py:217
-#, python-format
-msgid "Caught %s, exiting"
-msgstr ""
-
-#: heat/openstack/common/service.py:139
-msgid "Exception during rpc cleanup."
-msgstr ""
-
-#: heat/openstack/common/service.py:188
-msgid "Parent process has died unexpectedly, exiting"
-msgstr ""
-
-#: heat/openstack/common/service.py:223
-msgid "Unhandled exception"
-msgstr ""
-
-#: heat/openstack/common/service.py:256
-msgid "Forking too fast, sleeping"
-msgstr ""
-
-#: heat/openstack/common/service.py:278
-#, python-format
-msgid "Started child %d"
-msgstr ""
-
-#: heat/openstack/common/service.py:305
-#, python-format
-msgid "Child %(pid)d killed by signal %(sig)d"
-msgstr ""
-
-#: heat/openstack/common/service.py:309
-#, python-format
-msgid "Child %(pid)s exited with status %(code)d"
-msgstr ""
-
-#: heat/openstack/common/service.py:313
-#, python-format
-msgid "pid %d not in child list"
-msgstr ""
-
-#: heat/openstack/common/service.py:345
-#, python-format
-msgid "Caught %s, stopping children"
-msgstr ""
-
-#: heat/openstack/common/service.py:363
-#, python-format
-msgid "Waiting on %d children to exit"
-msgstr ""
-
-#: heat/openstack/common/sslutils.py:52
-#, python-format
-msgid "Unable to find cert_file: %s"
-msgstr ""
-
-#: heat/openstack/common/sslutils.py:55
-#, python-format
-msgid "Unable to find ca_file: %s"
-msgstr ""
-
-#: heat/openstack/common/sslutils.py:58
-#, python-format
-msgid "Unable to find key_file: %s"
-msgstr ""
-
-#: heat/openstack/common/sslutils.py:100
-#, python-format
-msgid "Invalid SSL version: %s"
-msgstr ""
-
-#: heat/openstack/common/db/exception.py:44
-msgid "Invalid Parameter: Unicode is not supported by the current database."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/session.py:552
-msgid "DB exception wrapped."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/session.py:614
-#, python-format
-msgid "Got 'MySQL server has gone away': %s"
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/session.py:694
-#, python-format
-msgid "SQL connection failed. %s attempts left."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:50
-msgid "Sort key supplied was not valid."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:89
-msgid "Id not in sort_keys; is sort_keys unique?"
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:111
-msgid "Unknown sort direction, must be 'desc' or 'asc'"
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:182
-#, python-format
-msgid ""
-"Please specify column %s in the col_name_col_instance param. It is required "
-"because the column has a type unsupported by sqlite."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:188
-#, python-format
-msgid ""
-"col_name_col_instance param has the wrong type of column instance for "
-"column %s. It should be an instance of sqlalchemy.Column."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:268
-#, python-format
-msgid "Deleting duplicated row with id: %(id)s from table: %(table)s"
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:290
-msgid "Unsupported id column type"
-msgstr ""
-
-#: heat/openstack/common/notifier/api.py:129
-#, python-format
-msgid "%s not in valid priorities"
-msgstr ""
-
-#: heat/openstack/common/notifier/api.py:145
-#, python-format
-msgid ""
-"Problem '%(e)s' attempting to send to notification system. "
-"Payload=%(payload)s"
-msgstr ""
-
-#: heat/openstack/common/notifier/api.py:164
-#, python-format
-msgid "Failed to load notifier %s. These notifications will not be sent."
-msgstr ""
-
-#: heat/openstack/common/notifier/list_notifier.py:112
-#, python-format
-msgid "Problem '%(e)s' attempting to send to notification driver %(driver)s."
-msgstr ""
-
-#: heat/openstack/common/notifier/rabbit_notifier.py:27
-msgid "The rabbit_notifier is now deprecated. Please use rpc_notifier instead."
-msgstr ""
-
-#: heat/openstack/common/notifier/rpc_notifier.py:45
-#: heat/openstack/common/notifier/rpc_notifier2.py:51
-#, python-format
-msgid "Could not send notification to %(topic)s. Payload=%(message)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/__init__.py:107
-#, python-format
-msgid ""
-"An RPC call is being made while holding a lock. The locks currently held are "
-"%(locks)s. This is probably a bug. Please report it. Include the "
-"following: [%(stack)s]."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:75
-msgid "Pool creating new connection"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:202
-#, python-format
-msgid "No calling threads waiting for msg_id: %(msg_id)s, message: %(data)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:205
-#, python-format
-msgid "_call_waiters: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:212
-#, python-format
-msgid ""
-"Number of call waiters is greater than warning threshold: %d. There "
-"could be a MulticallProxyWaiter leak."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:290
-#, python-format
-msgid "unpacked context: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:336
-#, python-format
-msgid "UNIQUE_ID is %s."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:408
-#, python-format
-msgid "received %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:416
-#, python-format
-msgid "no method for message: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:417
-#, python-format
-msgid "No method for message: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:445
-#: heat/openstack/common/rpc/impl_zmq.py:280
-#, python-format
-msgid "Expected exception during message handling (%s)"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:453
-#: heat/openstack/common/rpc/impl_zmq.py:286
-msgid "Exception during message handling"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:527
-#, python-format
-msgid "Making synchronous call on %s ..."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:530
-#, python-format
-msgid "MSG_ID is %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:556
-#, python-format
-msgid "Making asynchronous cast on %s..."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:565
-msgid "Making asynchronous fanout cast..."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:593
-#, python-format
-msgid "Sending %(event_type)s on %(topic)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:77
-msgid "An unknown RPC related exception occurred."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:107
-#, python-format
-msgid ""
-"Remote error: %(exc_type)s %(value)s\n"
-"%(traceback)s."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:124
-#, python-format
-msgid ""
-"Timeout while waiting on RPC response - topic: \"%(topic)s\", RPC method:"
-" \"%(method)s\" info: \"%(info)s\""
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:141
-#: heat/openstack/common/rpc/common.py:142
-#: heat/openstack/common/rpc/common.py:143
-msgid "<unknown>"
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:147
-#, python-format
-msgid "Found duplicate message(%(msg_id)s). Skipping it."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:151
-msgid "Invalid reuse of an RPC connection."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:155
-#, python-format
-msgid "Specified RPC version, %(version)s, not supported by this endpoint."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:160
-#, python-format
-msgid ""
-"Specified RPC envelope version, %(version)s, not supported by this "
-"endpoint."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:165
-#, python-format
-msgid "Specified RPC version cap, %(version_cap)s, is too low"
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:289
-#, python-format
-msgid "Returning exception %s to caller"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:162
-msgid "Failed to process message... skipping it."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:165
-msgid "Failed to process message... will requeue."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:502
-#, python-format
-msgid "Reconnecting to AMQP server on %(hostname)s:%(port)d"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:524
-#, python-format
-msgid "Connected to AMQP server on %(hostname)s:%(port)d"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:561
-#, python-format
-msgid ""
-"Unable to connect to AMQP server on %(hostname)s:%(port)d after "
-"%(max_retries)d tries: %(err_str)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:575
-#, python-format
-msgid ""
-"AMQP server on %(hostname)s:%(port)d is unreachable: %(err_str)s. Trying "
-"again in %(sleep_time)d seconds."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:629
-#: heat/openstack/common/rpc/impl_qpid.py:492
-#, python-format
-msgid "Failed to declare consumer for topic '%(topic)s': %(err_str)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:647
-#: heat/openstack/common/rpc/impl_qpid.py:507
-#, python-format
-msgid "Timed out waiting for RPC response: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:651
-#: heat/openstack/common/rpc/impl_qpid.py:511
-#, python-format
-msgid "Failed to consume message from queue: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:690
-#: heat/openstack/common/rpc/impl_qpid.py:546
-#, python-format
-msgid "Failed to publish message to topic '%(topic)s': %(err_str)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:161
-msgid "Failed to process message... skipping it."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:434
-#, python-format
-msgid "Unable to connect to AMQP server: %(e)s. Sleeping %(delay)s seconds"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:440
-#, python-format
-msgid "Connected to AMQP server on %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:453
-msgid "Re-established AMQP queues"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:519
-msgid "Error processing message. Skipping it."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:96
-msgid "JSON serialization failed."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:101
-#, python-format
-msgid "Deserializing: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:136
-#, python-format
-msgid "Connecting to %(addr)s with %(type)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:137
-#, python-format
-msgid "-> Subscribed to %(subscribe)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:138
-#, python-format
-msgid "-> bind: %(bind)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:146
-msgid "Could not open socket."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:158
-#, python-format
-msgid "Subscribing to %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:200
-msgid "You cannot recv on this socket."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:205
-msgid "You cannot send on this socket."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:267
-#, python-format
-msgid "Running func with context: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:305
-msgid "Sending reply"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:339
-msgid "RPC message did not include method."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:371
-msgid "Registering reactor"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:383
-msgid "In reactor registered"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:387
-msgid "Consuming socket"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:437
-#, python-format
-msgid "Creating proxy for topic: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:443
-msgid "Topic contained dangerous characters."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:475
-msgid "Topic socket file creation failed."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:481
-#, python-format
-msgid "Local per-topic backlog buffer full for topic %(topic)s. Dropping message."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:497
-#, python-format
-msgid "Required IPC directory does not exist at %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:506
-#, python-format
-msgid "Permission denied to IPC directory at %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:509
-msgid "Could not create ZeroMQ receiver daemon. Socket may already be in use."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:543
-#, python-format
-msgid "CONSUMER RECEIVED DATA: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:562
-msgid "ZMQ Envelope version unsupported or unknown."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:590
-msgid "Skipping topic registration. Already registered."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:597
-#, python-format
-msgid "Consumer is a zmq.%s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:649
-msgid "Creating payload"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:662
-msgid "Creating queue socket for reply waiter"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:675
-msgid "Sending cast"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:678
-msgid "Cast sent; waiting for reply"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:681
-#, python-format
-msgid "Received message: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:682
-msgid "Unpacking response"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:691
-msgid "Unsupported or unknown ZMQ envelope returned."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:698
-msgid "RPC Message Invalid."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:721
-#, python-format
-msgid "%(msg)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:724
-#, python-format
-msgid "Sending message(s) to: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:728
-msgid "No matchmaker results. Not casting."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:731
-msgid "No match from matchmaker."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:813
-#, python-format
-msgid "rpc_zmq_matchmaker = %(orig)s is deprecated; use %(new)s instead"
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker.py:47
-msgid "Match not found by MatchMaker."
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker.py:81
-msgid "Matchmaker does not implement registration or heartbeat."
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker.py:217
-#, python-format
-msgid "Matchmaker unregistered: %(key)s, %(host)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker.py:229
-msgid "Register before starting heartbeat."
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker_ring.py:77
-#: heat/openstack/common/rpc/matchmaker_ring.py:95
-#, python-format
-msgid "No key defining hosts for topic '%s', see ringfile"
-msgstr ""
-
-#: heat/openstack/common/rpc/service.py:49
-#, python-format
-msgid "Creating Consumer connection for Service %s"
-msgstr ""
-
-#: heat/tests/test_exception.py:24
-#, python-format
-msgid "Testing message %(text)s"
-msgstr ""
-
+++ /dev/null
-# Translations template for heat.
-# Copyright (C) 2013 ORGANIZATION
-# This file is distributed under the same license as the heat project.
-# FIRST AUTHOR <EMAIL@ADDRESS>, 2013.
-#
-#, fuzzy
-msgid ""
-msgstr ""
-"Project-Id-Version: heat 2013.2.a430.gd264018\n"
-"Report-Msgid-Bugs-To: EMAIL@ADDRESS\n"
-"POT-Creation-Date: 2013-08-27 11:15+0800\n"
-"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
-"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
-"Language-Team: LANGUAGE <LL@li.org>\n"
-"MIME-Version: 1.0\n"
-"Content-Type: text/plain; charset=utf-8\n"
-"Content-Transfer-Encoding: 8bit\n"
-"Generated-By: Babel 0.9.6\n"
-
-#: heat/api/aws/ec2token.py:38
-msgid "Authentication Endpoint URI"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:260
-#, python-format
-msgid "Failed to fetch template: %s"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:293
-msgid "DisableRollback and OnFailure may not be used together"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:313
-#, python-format
-msgid "Unexpected action %(action)s"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:331 heat/api/cfn/v1/stacks.py:413
-msgid "Invalid Template URL"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:335 heat/api/cfn/v1/stacks.py:416
-msgid "Neither TemplateBody nor TemplateUrl was given."
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:341 heat/api/cfn/v1/stacks.py:422
-msgid "The Template must be a JSON or YAML document."
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:383
-msgid "stack not found"
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:49
-#, python-format
-msgid "Processing request: %(method)s %(path)s Accept: %(accept)s"
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:64
-#, python-format
-msgid "Matched versioned URI. Version: %(major_version)d.%(minor_version)d"
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:72
-#, python-format
-msgid ""
-"Unknown version in versioned URI: %(major_version)d.%(minor_version)d. "
-"Returning version choices."
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:88
-#, python-format
-msgid "Matched versioned media type. Version: %(major_version)d.%(minor_version)d"
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:94
-#, python-format
-msgid ""
-"Unknown version in accept header: "
-"%(major_version)d.%(minor_version)d...returning version choices."
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:102
-#, python-format
-msgid "Unknown accept header: %s...returning version choices."
-msgstr ""
-
-#: heat/api/openstack/v1/actions.py:43
-msgid "No action specified"
-msgstr ""
-
-#: heat/api/openstack/v1/actions.py:46
-msgid "Multiple actions specified"
-msgstr ""
-
-#: heat/api/openstack/v1/actions.py:50
-#, python-format
-msgid "Invalid action %s specified"
-msgstr ""
-
-#: heat/api/openstack/v1/actions.py:57
-#, python-format
-msgid "Unexpected action %s"
-msgstr ""
-
-#: heat/api/openstack/v1/events.py:100
-#, python-format
-msgid "No events found for resource %s"
-msgstr ""
-
-#: heat/api/openstack/v1/events.py:118
-#, python-format
-msgid "No event %s found"
-msgstr ""
-
-#: heat/api/openstack/v1/stacks.py:75
-#, python-format
-msgid "%s not in valid format"
-msgstr ""
-
-#: heat/api/openstack/v1/stacks.py:83
-msgid "No stack name specified"
-msgstr ""
-
-#: heat/api/openstack/v1/stacks.py:101
-#, python-format
-msgid "Could not retrieve template: %s"
-msgstr ""
-
-#: heat/api/openstack/v1/stacks.py:104
-msgid "No template specified"
-msgstr ""
-
-#: heat/api/openstack/v1/util.py:56
-msgid "Invalid Stack address"
-msgstr ""
-
-#: heat/common/auth_password.py:116
-msgid "Request missing required header X-Auth-Url"
-msgstr ""
-
-#: heat/common/auth_password.py:121
-#, python-format
-msgid "Header X-Auth-Url \"%s\" not an allowed endpoint"
-msgstr ""
-
-#: heat/common/config.py:37
-msgid "The flavor to use"
-msgstr ""
-
-#: heat/common/config.py:39
-msgid "The API paste config file to use"
-msgstr ""
-
-#: heat/common/config.py:102
-msgid "Allow orchestration of multiple clouds"
-msgstr ""
-
-#: heat/common/config.py:105
-msgid ""
-"Allowed keystone endpoints for auth_uri when multi_cloud is enabled. At "
-"least one endpoint needs to be specified."
-msgstr ""
-
-#: heat/common/exception.py:102
-msgid "An unknown exception occurred."
-msgstr ""
-
-#: heat/common/exception.py:113 heat/openstack/common/rpc/common.py:89
-msgid "Exception in string format operation"
-msgstr ""
-
-#: heat/common/exception.py:125
-#, python-format
-msgid "Missing required credential: %(required)s"
-msgstr ""
-
-#: heat/common/exception.py:129
-#, python-format
-msgid ""
-"Incorrect auth strategy, expected \"%(expected)s\" but received "
-"\"%(received)s\""
-msgstr ""
-
-#: heat/common/exception.py:134
-#, python-format
-msgid "Connect error/bad request to Auth service at URL %(url)s."
-msgstr ""
-
-#: heat/common/exception.py:138
-#, python-format
-msgid "Auth service at URL %(url)s not found."
-msgstr ""
-
-#: heat/common/exception.py:142
-msgid "Authorization failed."
-msgstr ""
-
-#: heat/common/exception.py:146
-msgid "You are not authenticated."
-msgstr ""
-
-#: heat/common/exception.py:150 heat/common/exception.py:155
-msgid "You are not authorized to complete this action."
-msgstr ""
-
-#: heat/common/exception.py:159
-#, python-format
-msgid "Data supplied was not valid: %(reason)s"
-msgstr ""
-
-#: heat/common/exception.py:163
-#, python-format
-msgid "Redirecting to %(uri)s for authorization."
-msgstr ""
-
-#: heat/common/exception.py:167
-msgid "There was an error configuring the client."
-msgstr ""
-
-#: heat/common/exception.py:171
-#, python-format
-msgid ""
-"The request returned a 300 Multiple Choices. This generally means that "
-"you have not included a version indicator in a request URI.\n"
-"\n"
-"The body of response returned:\n"
-"%(body)s"
-msgstr ""
-
-#: heat/common/exception.py:177
-#, python-format
-msgid ""
-"The request returned a 413 Request Entity Too Large. This generally means"
-" that rate limiting or a quota threshold was breached.\n"
-"\n"
-"The response body:\n"
-"%(body)s"
-msgstr ""
-
-#: heat/common/exception.py:188
-msgid ""
-"The request returned a 503 Service Unavailable. This generally occurs on "
-"service overload or other transient outage."
-msgstr ""
-
-#: heat/common/exception.py:199
-msgid "The URI was too long."
-msgstr ""
-
-#: heat/common/exception.py:203
-#, python-format
-msgid ""
-"The request returned 500 Internal Server Error\n"
-"\n"
-"The response body:\n"
-"%(body)s"
-msgstr ""
-
-#: heat/common/exception.py:208
-#, python-format
-msgid "Maximum redirects (%(redirects)s) was exceeded."
-msgstr ""
-
-#: heat/common/exception.py:212
-msgid "Received invalid HTTP redirect."
-msgstr ""
-
-#: heat/common/exception.py:216
-msgid "Response from Keystone does not contain a Heat endpoint."
-msgstr ""
-
-#: heat/common/exception.py:220
-#, python-format
-msgid ""
-"Multiple 'image' service matches for region %(region)s. This generally "
-"means that a region is required and you have not supplied one."
-msgstr ""
-
-#: heat/common/exception.py:226
-#, python-format
-msgid "The Parameter (%(key)s) was not provided."
-msgstr ""
-
-#: heat/common/exception.py:230
-#, python-format
-msgid "The Parameter (%(key)s) was not defined in template."
-msgstr ""
-
-#: heat/common/exception.py:234
-#, python-format
-msgid "The Referenced Attribute (%(resource)s %(key)s) is incorrect."
-msgstr ""
-
-#: heat/common/exception.py:239
-#, python-format
-msgid "The specified reference (%(resource)s %(key)s) is incorrect."
-msgstr ""
-
-#: heat/common/exception.py:244
-#, python-format
-msgid "The Key (%(key_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:248
-#, python-format
-msgid "The Flavor ID (%(flavor_id)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:252
-#, python-format
-msgid "The Image (%(image_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:256
-#, python-format
-msgid "Multiple images were found with name (%(image_name)s)."
-msgstr ""
-
-#: heat/common/exception.py:260
-#, python-format
-msgid "Searching Tenant %(target)s from Tenant %(actual)s forbidden."
-msgstr ""
-
-#: heat/common/exception.py:265
-#, python-format
-msgid "The Stack (%(stack_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:269
-#, python-format
-msgid "The Stack (%(stack_name)s) already exists."
-msgstr ""
-
-#: heat/common/exception.py:273
-#, python-format
-msgid "%(message)s"
-msgstr ""
-
-#: heat/common/exception.py:277
-#, python-format
-msgid ""
-"The Resource (%(resource_name)s) could not be found in Stack "
-"%(stack_name)s."
-msgstr ""
-
-#: heat/common/exception.py:282
-#, python-format
-msgid "The Resource Type (%(type_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:286
-#, python-format
-msgid "The Resource (%(resource_name)s) is not available."
-msgstr ""
-
-#: heat/common/exception.py:290
-#, python-format
-msgid "The Resource (%(resource_id)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:294
-#, python-format
-msgid "The Watch Rule (%(watch_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:298
-#, python-format
-msgid "%(exc_type)s: %(message)s"
-msgstr ""
-
-#: heat/common/exception.py:312
-#, python-format
-msgid "%(feature)s is not supported."
-msgstr ""
-
-#: heat/common/exception.py:316
-#, python-format
-msgid "Cannot define the following properties at the same time: %s."
-msgstr ""
-
-#: heat/common/exception.py:334
-msgid "Template exceeds maximum allowed size."
-msgstr ""
-
-#: heat/common/exception.py:338
-msgid ""
-"Egress rules are only allowed when Neutron is used and the 'VpcId' "
-"property is set."
-msgstr ""
-
-#: heat/common/plugin_loader.py:94
-#, python-format
-msgid "Failed to import module %s"
-msgstr ""
-
-#: heat/common/urlfetch.py:37
-#, python-format
-msgid "Fetching data from %s"
-msgstr ""
-
-#: heat/common/wsgi.py:55
-msgid ""
-"Address to bind the server. Useful when selecting a particular network "
-"interface."
-msgstr ""
-
-#: heat/common/wsgi.py:58
-msgid "The port on which the server will listen."
-msgstr ""
-
-#: heat/common/wsgi.py:65
-msgid "Number of backlog requests to configure the socket with"
-msgstr ""
-
-#: heat/common/wsgi.py:68
-msgid "Location of the SSL Certificate File to use for SSL mode"
-msgstr ""
-
-#: heat/common/wsgi.py:71
-msgid "Location of the SSL Key File to use for enabling SSL mode"
-msgstr ""
-
-#: heat/common/wsgi.py:78
-msgid "Number of workers for Heat service"
-msgstr ""
-
-#: heat/common/wsgi.py:129 heat/openstack/common/sslutils.py:61
-msgid ""
-"When running server in SSL mode, you must specify both a cert_file and "
-"key_file option value in your configuration file"
-msgstr ""
-
-#: heat/common/wsgi.py:147
-#, python-format
-msgid "Could not bind to %(bind_addr)s after trying for 30 seconds"
-msgstr ""
-
-#: heat/common/wsgi.py:179
-msgid "SIGTERM received"
-msgstr ""
-
-#: heat/common/wsgi.py:188
-msgid "SIGHUP received"
-msgstr ""
-
-#: heat/common/wsgi.py:203 heat/openstack/common/service.py:288
-#, python-format
-msgid "Starting %d workers"
-msgstr ""
-
-#: heat/common/wsgi.py:214
-#, python-format
-msgid "Removing dead child %s"
-msgstr ""
-
-#: heat/common/wsgi.py:221
-msgid "Caught keyboard interrupt. Exiting."
-msgstr ""
-
-#: heat/common/wsgi.py:225
-msgid "Exited"
-msgstr ""
-
-#: heat/common/wsgi.py:243
-#, python-format
-msgid "Child %d exiting normally"
-msgstr ""
-
-#: heat/common/wsgi.py:246
-#, python-format
-msgid "Started child %s"
-msgstr ""
-
-#: heat/common/wsgi.py:268
-msgid "Starting single process server"
-msgstr ""
-
-#: heat/common/wsgi.py:596
-#, python-format
-msgid "Exception handling resource: %s"
-msgstr ""
-
-#: heat/common/wsgi.py:597
-msgid ""
-"The server could not comply with the request since\r\n"
-"it is either malformed or otherwise incorrect.\r\n"
-msgstr ""
-
-#: heat/common/wsgi.py:611
-#, python-format
-msgid "Returning %(code)s to user: %(explanation)s"
-msgstr ""
-
-#: heat/common/wsgi.py:616
-#, python-format
-msgid "Unexpected error occurred serving API: %s"
-msgstr ""
-
-#: heat/db/sqlalchemy/api.py:393
-msgid "age should be an integer"
-msgstr ""
-
-#: heat/db/sqlalchemy/api.py:395
-msgid "age should be a positive integer"
-msgstr ""
-
-#: heat/db/sqlalchemy/migration.py:63
-msgid "python-migrate is not installed. Exiting."
-msgstr ""
-
-#: heat/db/sqlalchemy/migration.py:73
-msgid "version should be an integer"
-msgstr ""
-
-#: heat/doc/resources.py:85
-msgid "HOT Syntax"
-msgstr ""
-
-#: heat/doc/resources.py:107
-msgid "YAML Syntax"
-msgstr ""
-
-#: heat/doc/resources.py:129
-msgid "JSON Syntax"
-msgstr ""
-
-#: heat/doc/resources.py:161
-#, python-format
-msgid " from %s"
-msgstr ""
-
-#: heat/doc/resources.py:163
-#, python-format
-msgid " up to %s"
-msgstr ""
-
-#: heat/doc/resources.py:165
-#, python-format
-msgid " from length %s"
-msgstr ""
-
-#: heat/doc/resources.py:167
-#, python-format
-msgid " up to length %s"
-msgstr ""
-
-#: heat/doc/resources.py:174
-msgid "Not implemented."
-msgstr ""
-
-#: heat/doc/resources.py:185
-msgid "Required property."
-msgstr ""
-
-#: heat/doc/resources.py:189
-#, python-format
-msgid "Optional property, defaults to \"%s\"."
-msgstr ""
-
-#: heat/doc/resources.py:192
-msgid "Optional property."
-msgstr ""
-
-#: heat/doc/resources.py:196
-#, python-format
-msgid "Value must match pattern: %s"
-msgstr ""
-
-#: heat/doc/resources.py:203
-#, python-format
-msgid "Allowed values: %s"
-msgstr ""
-
-#: heat/doc/resources.py:209
-msgid "Map properties:"
-msgstr ""
-
-#: heat/doc/resources.py:215
-msgid "List contains maps with the properties:"
-msgstr ""
-
-#: heat/doc/resources.py:230
-msgid "Properties"
-msgstr ""
-
-#: heat/doc/resources.py:241
-msgid "Attributes"
-msgstr ""
-
-#: heat/engine/dependencies.py:25
-#, python-format
-msgid "Circular Dependency Found: %(cycle)s"
-msgstr ""
-
-#: heat/engine/environment.py:181
-#, python-format
-msgid "Changing %(path)s from %(was)s to %(now)s"
-msgstr ""
-
-#: heat/engine/environment.py:183
-#, python-format
-msgid "Registering %(path)s -> %(value)s"
-msgstr ""
-
-#: heat/engine/parser.py:74
-#, python-format
-msgid ""
-"Invalid stack name %s, must contain only alphanumeric or \"_-.\" "
-"characters and must start with a letter"
-msgstr ""
-
-#: heat/engine/resource.py:55 heat/engine/resource.py:58
-#, python-format
-msgid "The Resource %s requires replacement."
-msgstr ""
-
-#: heat/engine/resource.py:132
-msgid "Resource name may not contain \"/\""
-msgstr ""
-
-#: heat/engine/scheduler.py:65
-#, python-format
-msgid "%s Timed out"
-msgstr ""
-
-#: heat/engine/service.py:311
-msgid "No Template provided."
-msgstr ""
-
-#: heat/engine/stack_resource.py:122
-#, python-format
-msgid "Cannot update %s, stack not created"
-msgstr ""
-
-#: heat/engine/stack_resource.py:151
-#, python-format
-msgid "Cannot suspend %s, stack not created"
-msgstr ""
-
-#: heat/engine/stack_resource.py:173
-#, python-format
-msgid "Cannot resume %s, stack not created"
-msgstr ""
-
-#: heat/engine/resources/__init__.py:39
-#, python-format
-msgid "Failed to load resources from %s"
-msgstr ""
-
-#: heat/engine/resources/instance.py:314
-#, python-format
-msgid "Build of server %(server)s failed: %(message)s (%(code)s)"
-msgstr ""
-
-#: heat/engine/resources/instance.py:321
-#, python-format
-msgid "Nova reported unexpected instance[%(name)s] status[%(status)s]"
-msgstr ""
-
-#: heat/engine/resources/instance.py:418
-#, python-format
-msgid "Cannot suspend %s, resource_id not set"
-msgstr ""
-
-#: heat/engine/resources/instance.py:424 heat/engine/resources/instance.py:478
-#, python-format
-msgid "Failed to find instance %s"
-msgstr ""
-
-#: heat/engine/resources/instance.py:455
-#, python-format
-msgid "Nova reported unexpected instance[%(instance)s] status[%(status)s]"
-msgstr ""
-
-#: heat/engine/resources/instance.py:472
-#, python-format
-msgid "Cannot resume %s, resource_id not set"
-msgstr ""
-
-#: heat/engine/resources/nova_utils.py:221
-#, python-format
-msgid "Resizing to '%(flavor)s' failed, status '%(status)s'"
-msgstr ""
-
-#: heat/engine/resources/vpc.py:72
-#, python-format
-msgid "Multiple routers found with name %s"
-msgstr ""
-
-#: heat/engine/resources/neutron/loadbalancer.py:251
-msgid "The ID of the load balancing pool"
-msgstr ""
-
-#: heat/engine/resources/neutron/loadbalancer.py:254
-msgid "Port number on which the servers are running on the members"
-msgstr ""
-
-#: heat/engine/resources/neutron/loadbalancer.py:258
-msgid "The list of Nova server IDs load balanced"
-msgstr ""
-
-#: heat/engine/resources/rackspace/cloud_loadbalancer.py:33
-#, python-format
-msgid "There was an error building the loadbalancer:%(lb_name)s."
-msgstr ""
-
-#: heat/openstack/common/eventlet_backdoor.py:142
-#, python-format
-msgid "Eventlet backdoor listening on %(port)s for process %(pid)d"
-msgstr ""
-
-#: heat/openstack/common/exception.py:102
-msgid "Uncaught exception"
-msgstr ""
-
-#: heat/openstack/common/excutils.py:62
-#, python-format
-msgid "Original exception being dropped: %s"
-msgstr ""
-
-#: heat/openstack/common/excutils.py:90
-#, python-format
-msgid "Unexpected exception occurred %d time(s)... retrying."
-msgstr ""
-
-#: heat/openstack/common/fileutils.py:64
-#, python-format
-msgid "Reloading cached file %s"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:100
-#, python-format
-msgid "Could not release the acquired lock `%s`"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:166
-#, python-format
-msgid "Got semaphore \"%(lock)s\""
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:175
-#, python-format
-msgid "Attempting to grab file lock \"%(lock)s\""
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:185
-#, python-format
-msgid "Created lock path: %s"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:203
-#, python-format
-msgid "Got file lock \"%(lock)s\" at %(path)s"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:207
-#, python-format
-msgid "Released file lock \"%(lock)s\" at %(path)s"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:244
-#, python-format
-msgid "Got semaphore / lock \"%(function)s\""
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:248
-#, python-format
-msgid "Semaphore / lock released \"%(function)s\""
-msgstr ""
-
-#: heat/openstack/common/log.py:244
-#, python-format
-msgid "Deprecated: %s"
-msgstr ""
-
-#: heat/openstack/common/log.py:336
-#, python-format
-msgid "Error loading logging config %(log_config)s: %(err_msg)s"
-msgstr ""
-
-#: heat/openstack/common/log.py:386
-#, python-format
-msgid "syslog facility must be one of: %s"
-msgstr ""
-
-#: heat/openstack/common/log.py:556
-#, python-format
-msgid "Fatal call to deprecated config: %(msg)s"
-msgstr ""
-
-#: heat/openstack/common/loopingcall.py:84
-#, python-format
-msgid "task run outlasted interval by %s sec"
-msgstr ""
-
-#: heat/openstack/common/loopingcall.py:91
-msgid "in fixed duration looping call"
-msgstr ""
-
-#: heat/openstack/common/loopingcall.py:131
-#, python-format
-msgid "Dynamic looping call sleeping for %.02f seconds"
-msgstr ""
-
-#: heat/openstack/common/loopingcall.py:138
-msgid "in dynamic looping call"
-msgstr ""
-
-#: heat/openstack/common/policy.py:75
-msgid "JSON file containing policy"
-msgstr ""
-
-#: heat/openstack/common/policy.py:78
-msgid "Rule enforced when requested rule is not found"
-msgstr ""
-
-#: heat/openstack/common/policy.py:92
-#, python-format
-msgid "Policy doesn't allow %s to be performed."
-msgstr ""
-
-#: heat/openstack/common/policy.py:177
-#, python-format
-msgid "Rules must be an instance of dict or Rules, got %s instead"
-msgstr ""
-
-#: heat/openstack/common/policy.py:207
-msgid "Rules successfully reloaded"
-msgstr ""
-
-#: heat/openstack/common/policy.py:253
-#, python-format
-msgid "Rule %s will be now enforced"
-msgstr ""
-
-#: heat/openstack/common/policy.py:268
-#, python-format
-msgid "Rule [%s] doesn't exist"
-msgstr ""
-
-#: heat/openstack/common/policy.py:478
-#, python-format
-msgid "Failed to understand rule %s"
-msgstr ""
-
-#: heat/openstack/common/policy.py:488
-#, python-format
-msgid "No handler for matches of kind %s"
-msgstr ""
-
-#: heat/openstack/common/policy.py:759
-#, python-format
-msgid "Failed to understand rule %r"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:127
-#, python-format
-msgid "Got unknown keyword args to utils.execute: %r"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:142
-#, python-format
-msgid "Running cmd (subprocess): %s"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:167
-#: heat/openstack/common/processutils.py:240
-#, python-format
-msgid "Result was %s"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:179
-#, python-format
-msgid "%r failed. Retrying."
-msgstr ""
-
-#: heat/openstack/common/processutils.py:219
-#, python-format
-msgid "Running cmd (SSH): %s"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:221
-msgid "Environment not supported over SSH"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:225
-msgid "process_input not supported over SSH"
-msgstr ""
-
-#: heat/openstack/common/service.py:118 heat/openstack/common/service.py:335
-msgid "Full set of CONF:"
-msgstr ""
-
-#: heat/openstack/common/service.py:127 heat/openstack/common/service.py:217
-#, python-format
-msgid "Caught %s, exiting"
-msgstr ""
-
-#: heat/openstack/common/service.py:139
-msgid "Exception during rpc cleanup."
-msgstr ""
-
-#: heat/openstack/common/service.py:188
-msgid "Parent process has died unexpectedly, exiting"
-msgstr ""
-
-#: heat/openstack/common/service.py:223
-msgid "Unhandled exception"
-msgstr ""
-
-#: heat/openstack/common/service.py:256
-msgid "Forking too fast, sleeping"
-msgstr ""
-
-#: heat/openstack/common/service.py:278
-#, python-format
-msgid "Started child %d"
-msgstr ""
-
-#: heat/openstack/common/service.py:305
-#, python-format
-msgid "Child %(pid)d killed by signal %(sig)d"
-msgstr ""
-
-#: heat/openstack/common/service.py:309
-#, python-format
-msgid "Child %(pid)s exited with status %(code)d"
-msgstr ""
-
-#: heat/openstack/common/service.py:313
-#, python-format
-msgid "pid %d not in child list"
-msgstr ""
-
-#: heat/openstack/common/service.py:345
-#, python-format
-msgid "Caught %s, stopping children"
-msgstr ""
-
-#: heat/openstack/common/service.py:363
-#, python-format
-msgid "Waiting on %d children to exit"
-msgstr ""
-
-#: heat/openstack/common/sslutils.py:52
-#, python-format
-msgid "Unable to find cert_file : %s"
-msgstr ""
-
-#: heat/openstack/common/sslutils.py:55
-#, python-format
-msgid "Unable to find ca_file : %s"
-msgstr ""
-
-#: heat/openstack/common/sslutils.py:58
-#, python-format
-msgid "Unable to find key_file : %s"
-msgstr ""
-
-#: heat/openstack/common/sslutils.py:100
-#, python-format
-msgid "Invalid SSL version : %s"
-msgstr ""
-
-#: heat/openstack/common/db/exception.py:44
-msgid "Invalid Parameter: Unicode is not supported by the current database."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/session.py:552
-msgid "DB exception wrapped."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/session.py:614
-#, python-format
-msgid "Got mysql server has gone away: %s"
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/session.py:694
-#, python-format
-msgid "SQL connection failed. %s attempts left."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:50
-msgid "Sort key supplied was not valid."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:89
-msgid "Id not in sort_keys; is sort_keys unique?"
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:111
-msgid "Unknown sort direction, must be 'desc' or 'asc'"
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:182
-#, python-format
-msgid ""
-"Please specify column %s in col_name_col_instance param. It is required "
-"because column has unsupported type by sqlite)."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:188
-#, python-format
-msgid ""
-"col_name_col_instance param has wrong type of column instance for column "
-"%s It should be instance of sqlalchemy.Column."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:268
-#, python-format
-msgid "Deleting duplicated row with id: %(id)s from table: %(table)s"
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:290
-msgid "Unsupported id columns type"
-msgstr ""
-
-#: heat/openstack/common/notifier/api.py:129
-#, python-format
-msgid "%s not in valid priorities"
-msgstr ""
-
-#: heat/openstack/common/notifier/api.py:145
-#, python-format
-msgid ""
-"Problem '%(e)s' attempting to send to notification system. "
-"Payload=%(payload)s"
-msgstr ""
-
-#: heat/openstack/common/notifier/api.py:164
-#, python-format
-msgid "Failed to load notifier %s. These notifications will not be sent."
-msgstr ""
-
-#: heat/openstack/common/notifier/list_notifier.py:112
-#, python-format
-msgid "Problem '%(e)s' attempting to send to notification driver %(driver)s."
-msgstr ""
-
-#: heat/openstack/common/notifier/rabbit_notifier.py:27
-msgid "The rabbit_notifier is now deprecated. Please use rpc_notifier instead."
-msgstr ""
-
-#: heat/openstack/common/notifier/rpc_notifier.py:45
-#: heat/openstack/common/notifier/rpc_notifier2.py:51
-#, python-format
-msgid "Could not send notification to %(topic)s. Payload=%(message)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/__init__.py:107
-#, python-format
-msgid ""
-"A RPC is being made while holding a lock. The locks currently held are "
-"%(locks)s. This is probably a bug. Please report it. Include the "
-"following: [%(stack)s]."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:75
-msgid "Pool creating new connection"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:202
-#, python-format
-msgid "No calling threads waiting for msg_id : %(msg_id)s, message : %(data)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:205
-#, python-format
-msgid "_call_waiters: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:212
-#, python-format
-msgid ""
-"Number of call waiters is greater than warning threshhold: %d. There "
-"could be a MulticallProxyWaiter leak."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:290
-#, python-format
-msgid "unpacked context: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:336
-#, python-format
-msgid "UNIQUE_ID is %s."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:408
-#, python-format
-msgid "received %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:416
-#, python-format
-msgid "no method for message: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:417
-#, python-format
-msgid "No method for message: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:445
-#: heat/openstack/common/rpc/impl_zmq.py:280
-#, python-format
-msgid "Expected exception during message handling (%s)"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:453
-#: heat/openstack/common/rpc/impl_zmq.py:286
-msgid "Exception during message handling"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:527
-#, python-format
-msgid "Making synchronous call on %s ..."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:530
-#, python-format
-msgid "MSG_ID is %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:556
-#, python-format
-msgid "Making asynchronous cast on %s..."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:565
-msgid "Making asynchronous fanout cast..."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:593
-#, python-format
-msgid "Sending %(event_type)s on %(topic)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:77
-msgid "An unknown RPC related exception occurred."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:107
-#, python-format
-msgid ""
-"Remote error: %(exc_type)s %(value)s\n"
-"%(traceback)s."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:124
-#, python-format
-msgid ""
-"Timeout while waiting on RPC response - topic: \"%(topic)s\", RPC method:"
-" \"%(method)s\" info: \"%(info)s\""
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:141
-#: heat/openstack/common/rpc/common.py:142
-#: heat/openstack/common/rpc/common.py:143
-msgid "<unknown>"
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:147
-#, python-format
-msgid "Found duplicate message(%(msg_id)s). Skipping it."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:151
-msgid "Invalid reuse of an RPC connection."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:155
-#, python-format
-msgid "Specified RPC version, %(version)s, not supported by this endpoint."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:160
-#, python-format
-msgid ""
-"Specified RPC envelope version, %(version)s, not supported by this "
-"endpoint."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:165
-#, python-format
-msgid "Specified RPC version cap, %(version_cap)s, is too low"
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:289
-#, python-format
-msgid "Returning exception %s to caller"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:162
-msgid "Failed to process message ... skipping it."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:165
-msgid "Failed to process message ... will requeue."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:502
-#, python-format
-msgid "Reconnecting to AMQP server on %(hostname)s:%(port)d"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:524
-#, python-format
-msgid "Connected to AMQP server on %(hostname)s:%(port)d"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:561
-#, python-format
-msgid ""
-"Unable to connect to AMQP server on %(hostname)s:%(port)d after "
-"%(max_retries)d tries: %(err_str)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:575
-#, python-format
-msgid ""
-"AMQP server on %(hostname)s:%(port)d is unreachable: %(err_str)s. Trying "
-"again in %(sleep_time)d seconds."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:629
-#: heat/openstack/common/rpc/impl_qpid.py:492
-#, python-format
-msgid "Failed to declare consumer for topic '%(topic)s': %(err_str)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:647
-#: heat/openstack/common/rpc/impl_qpid.py:507
-#, python-format
-msgid "Timed out waiting for RPC response: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:651
-#: heat/openstack/common/rpc/impl_qpid.py:511
-#, python-format
-msgid "Failed to consume message from queue: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:690
-#: heat/openstack/common/rpc/impl_qpid.py:546
-#, python-format
-msgid "Failed to publish message to topic '%(topic)s': %(err_str)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:161
-msgid "Failed to process message... skipping it."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:434
-#, python-format
-msgid "Unable to connect to AMQP server: %(e)s. Sleeping %(delay)s seconds"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:440
-#, python-format
-msgid "Connected to AMQP server on %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:453
-msgid "Re-established AMQP queues"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:519
-msgid "Error processing message. Skipping it."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:96
-msgid "JSON serialization failed."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:101
-#, python-format
-msgid "Deserializing: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:136
-#, python-format
-msgid "Connecting to %(addr)s with %(type)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:137
-#, python-format
-msgid "-> Subscribed to %(subscribe)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:138
-#, python-format
-msgid "-> bind: %(bind)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:146
-msgid "Could not open socket."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:158
-#, python-format
-msgid "Subscribing to %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:200
-msgid "You cannot recv on this socket."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:205
-msgid "You cannot send on this socket."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:267
-#, python-format
-msgid "Running func with context: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:305
-msgid "Sending reply"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:339
-msgid "RPC message did not include method."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:371
-msgid "Registering reactor"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:383
-msgid "In reactor registered"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:387
-msgid "Consuming socket"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:437
-#, python-format
-msgid "Creating proxy for topic: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:443
-msgid "Topic contained dangerous characters."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:475
-msgid "Topic socket file creation failed."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:481
-#, python-format
-msgid "Local per-topic backlog buffer full for topic %(topic)s. Dropping message."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:497
-#, python-format
-msgid "Required IPC directory does not exist at %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:506
-#, python-format
-msgid "Permission denied to IPC directory at %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:509
-msgid "Could not create ZeroMQ receiver daemon. Socket may already be in use."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:543
-#, python-format
-msgid "CONSUMER RECEIVED DATA: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:562
-msgid "ZMQ Envelope version unsupported or unknown."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:590
-msgid "Skipping topic registration. Already registered."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:597
-#, python-format
-msgid "Consumer is a zmq.%s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:649
-msgid "Creating payload"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:662
-msgid "Creating queue socket for reply waiter"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:675
-msgid "Sending cast"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:678
-msgid "Cast sent; Waiting reply"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:681
-#, python-format
-msgid "Received message: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:682
-msgid "Unpacking response"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:691
-msgid "Unsupported or unknown ZMQ envelope returned."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:698
-msgid "RPC Message Invalid."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:721
-#, python-format
-msgid "%(msg)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:724
-#, python-format
-msgid "Sending message(s) to: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:728
-msgid "No matchmaker results. Not casting."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:731
-msgid "No match from matchmaker."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:813
-#, python-format
-msgid "rpc_zmq_matchmaker = %(orig)s is deprecated; use %(new)s instead"
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker.py:47
-msgid "Match not found by MatchMaker."
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker.py:81
-msgid "Matchmaker does not implement registration or heartbeat."
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker.py:217
-#, python-format
-msgid "Matchmaker unregistered: %(key)s, %(host)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker.py:229
-msgid "Register before starting heartbeat."
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker_ring.py:77
-#: heat/openstack/common/rpc/matchmaker_ring.py:95
-#, python-format
-msgid "No key defining hosts for topic '%s', see ringfile"
-msgstr ""
-
-#: heat/openstack/common/rpc/service.py:49
-#, python-format
-msgid "Creating Consumer connection for Service %s"
-msgstr ""
-
-#: heat/tests/test_exception.py:24
-#, python-format
-msgid "Testing message %(text)s"
-msgstr ""
-
+++ /dev/null
-# Translations template for heat.
-# Copyright (C) 2013 ORGANIZATION
-# This file is distributed under the same license as the heat project.
-# FIRST AUTHOR <EMAIL@ADDRESS>, 2013.
-#
-#, fuzzy
-msgid ""
-msgstr ""
-"Project-Id-Version: heat 2013.2.a430.gd264018\n"
-"Report-Msgid-Bugs-To: EMAIL@ADDRESS\n"
-"POT-Creation-Date: 2013-08-27 11:15+0800\n"
-"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
-"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
-"Language-Team: LANGUAGE <LL@li.org>\n"
-"MIME-Version: 1.0\n"
-"Content-Type: text/plain; charset=utf-8\n"
-"Content-Transfer-Encoding: 8bit\n"
-"Generated-By: Babel 0.9.6\n"
-
-#: heat/api/aws/ec2token.py:38
-msgid "Authentication Endpoint URI"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:260
-#, python-format
-msgid "Failed to fetch template: %s"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:293
-msgid "DisableRollback and OnFailure may not be used together"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:313
-#, python-format
-msgid "Unexpected action %(action)s"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:331 heat/api/cfn/v1/stacks.py:413
-msgid "Invalid Template URL"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:335 heat/api/cfn/v1/stacks.py:416
-msgid "TemplateBody or TemplateUrl were not given."
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:341 heat/api/cfn/v1/stacks.py:422
-msgid "The Template must be a JSON or YAML document."
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:383
-msgid "stack not not found"
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:49
-#, python-format
-msgid "Processing request: %(method)s %(path)s Accept: %(accept)s"
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:64
-#, python-format
-msgid "Matched versioned URI. Version: %(major_version)d.%(minor_version)d"
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:72
-#, python-format
-msgid ""
-"Unknown version in versioned URI: %(major_version)d.%(minor_version)d. "
-"Returning version choices."
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:88
-#, python-format
-msgid "Matched versioned media type. Version: %(major_version)d.%(minor_version)d"
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:94
-#, python-format
-msgid ""
-"Unknown version in accept header: "
-"%(major_version)d.%(minor_version)d...returning version choices."
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:102
-#, python-format
-msgid "Unknown accept header: %s...returning version choices."
-msgstr ""
-
-#: heat/api/openstack/v1/actions.py:43
-msgid "No action specified"
-msgstr ""
-
-#: heat/api/openstack/v1/actions.py:46
-msgid "Multiple actions specified"
-msgstr ""
-
-#: heat/api/openstack/v1/actions.py:50
-#, python-format
-msgid "Invalid action %s specified"
-msgstr ""
-
-#: heat/api/openstack/v1/actions.py:57
-#, python-format
-msgid "Unexpected action %s"
-msgstr ""
-
-#: heat/api/openstack/v1/events.py:100
-#, python-format
-msgid "No events found for resource %s"
-msgstr ""
-
-#: heat/api/openstack/v1/events.py:118
-#, python-format
-msgid "No event %s found"
-msgstr ""
-
-#: heat/api/openstack/v1/stacks.py:75
-#, python-format
-msgid "%s not in valid format"
-msgstr ""
-
-#: heat/api/openstack/v1/stacks.py:83
-msgid "No stack name specified"
-msgstr ""
-
-#: heat/api/openstack/v1/stacks.py:101
-#, python-format
-msgid "Could not retrieve template: %s"
-msgstr ""
-
-#: heat/api/openstack/v1/stacks.py:104
-msgid "No template specified"
-msgstr ""
-
-#: heat/api/openstack/v1/util.py:56
-msgid "Invalid Stack address"
-msgstr ""
-
-#: heat/common/auth_password.py:116
-msgid "Request missing required header X-Auth-Url"
-msgstr ""
-
-#: heat/common/auth_password.py:121
-#, python-format
-msgid "Header X-Auth-Url \"%s\" not an allowed endpoint"
-msgstr ""
-
-#: heat/common/config.py:37
-msgid "The flavor to use"
-msgstr ""
-
-#: heat/common/config.py:39
-msgid "The API paste config file to use"
-msgstr ""
-
-#: heat/common/config.py:102
-msgid "Allow orchestration of multiple clouds"
-msgstr ""
-
-#: heat/common/config.py:105
-msgid ""
-"Allowed keystone endpoints for auth_uri when multi_cloud is enabled. At "
-"least one endpoint needs to be specified."
-msgstr ""
-
-#: heat/common/exception.py:102
-msgid "An unknown exception occurred."
-msgstr ""
-
-#: heat/common/exception.py:113 heat/openstack/common/rpc/common.py:89
-msgid "Exception in string format operation"
-msgstr ""
-
-#: heat/common/exception.py:125
-#, python-format
-msgid "Missing required credential: %(required)s"
-msgstr ""
-
-#: heat/common/exception.py:129
-#, python-format
-msgid ""
-"Incorrect auth strategy, expected \"%(expected)s\" but received "
-"\"%(received)s\""
-msgstr ""
-
-#: heat/common/exception.py:134
-#, python-format
-msgid "Connect error/bad request to Auth service at URL %(url)s."
-msgstr ""
-
-#: heat/common/exception.py:138
-#, python-format
-msgid "Auth service at URL %(url)s not found."
-msgstr ""
-
-#: heat/common/exception.py:142
-msgid "Authorization failed."
-msgstr ""
-
-#: heat/common/exception.py:146
-msgid "You are not authenticated."
-msgstr ""
-
-#: heat/common/exception.py:150 heat/common/exception.py:155
-msgid "You are not authorized to complete this action."
-msgstr ""
-
-#: heat/common/exception.py:159
-#, python-format
-msgid "Data supplied was not valid: %(reason)s"
-msgstr ""
-
-#: heat/common/exception.py:163
-#, python-format
-msgid "Redirecting to %(uri)s for authorization."
-msgstr ""
-
-#: heat/common/exception.py:167
-msgid "There was an error configuring the client."
-msgstr ""
-
-#: heat/common/exception.py:171
-#, python-format
-msgid ""
-"The request returned a 302 Multiple Choices. This generally means that "
-"you have not included a version indicator in a request URI.\n"
-"\n"
-"The body of response returned:\n"
-"%(body)s"
-msgstr ""
-
-#: heat/common/exception.py:177
-#, python-format
-msgid ""
-"The request returned a 413 Request Entity Too Large. This generally means"
-" that rate limiting or a quota threshold was breached.\n"
-"\n"
-"The response body:\n"
-"%(body)s"
-msgstr ""
-
-#: heat/common/exception.py:188
-msgid ""
-"The request returned a 503 ServiceUnavilable. This generally occurs on "
-"service overload or other transient outage."
-msgstr ""
-
-#: heat/common/exception.py:199
-msgid "The URI was too long."
-msgstr ""
-
-#: heat/common/exception.py:203
-#, python-format
-msgid ""
-"The request returned 500 Internal Server Error\n"
-"\n"
-"The response body:\n"
-"%(body)s"
-msgstr ""
-
-#: heat/common/exception.py:208
-#, python-format
-msgid "Maximum redirects (%(redirects)s) was exceeded."
-msgstr ""
-
-#: heat/common/exception.py:212
-msgid "Received invalid HTTP redirect."
-msgstr ""
-
-#: heat/common/exception.py:216
-msgid "Response from Keystone does not contain a Heat endpoint."
-msgstr ""
-
-#: heat/common/exception.py:220
-#, python-format
-msgid ""
-"Multiple 'image' service matches for region %(region)s. This generally "
-"means that a region is required and you have not supplied one."
-msgstr ""
-
-#: heat/common/exception.py:226
-#, python-format
-msgid "The Parameter (%(key)s) was not provided."
-msgstr ""
-
-#: heat/common/exception.py:230
-#, python-format
-msgid "The Parameter (%(key)s) was not defined in template."
-msgstr ""
-
-#: heat/common/exception.py:234
-#, python-format
-msgid "The Referenced Attribute (%(resource)s %(key)s) is incorrect."
-msgstr ""
-
-#: heat/common/exception.py:239
-#, python-format
-msgid "The specified reference (%(resource)s %(key)s) is incorrect."
-msgstr ""
-
-#: heat/common/exception.py:244
-#, python-format
-msgid "The Key (%(key_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:248
-#, python-format
-msgid "The Flavor ID (%(flavor_id)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:252
-#, python-format
-msgid "The Image (%(image_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:256
-#, python-format
-msgid "Multiple images were found with name (%(image_name)s)."
-msgstr ""
-
-#: heat/common/exception.py:260
-#, python-format
-msgid "Searching Tenant %(target)s from Tenant %(actual)s forbidden."
-msgstr ""
-
-#: heat/common/exception.py:265
-#, python-format
-msgid "The Stack (%(stack_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:269
-#, python-format
-msgid "The Stack (%(stack_name)s) already exists."
-msgstr ""
-
-#: heat/common/exception.py:273
-#, python-format
-msgid "%(message)s"
-msgstr ""
-
-#: heat/common/exception.py:277
-#, python-format
-msgid ""
-"The Resource (%(resource_name)s) could not be found in Stack "
-"%(stack_name)s."
-msgstr ""
-
-#: heat/common/exception.py:282
-#, python-format
-msgid "The Resource Type (%(type_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:286
-#, python-format
-msgid "The Resource (%(resource_name)s) is not available."
-msgstr ""
-
-#: heat/common/exception.py:290
-#, python-format
-msgid "The Resource (%(resource_id)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:294
-#, python-format
-msgid "The Watch Rule (%(watch_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:298
-#, python-format
-msgid "%(exc_type)s: %(message)s"
-msgstr ""
-
-#: heat/common/exception.py:312
-#, python-format
-msgid "%(feature)s is not supported."
-msgstr ""
-
-#: heat/common/exception.py:316
-#, python-format
-msgid "Cannot define the following properties at the same time: %s."
-msgstr ""
-
-#: heat/common/exception.py:334
-msgid "Template exceeds maximum allowed size."
-msgstr ""
-
-#: heat/common/exception.py:338
-msgid ""
-"Egress rules are only allowed when Neutron is used and the 'VpcId' "
-"property is set."
-msgstr ""
-
-#: heat/common/plugin_loader.py:94
-#, python-format
-msgid "Failed to import module %s"
-msgstr ""
-
-#: heat/common/urlfetch.py:37
-#, python-format
-msgid "Fetching data from %s"
-msgstr ""
-
-#: heat/common/wsgi.py:55
-msgid ""
-"Address to bind the server. Useful when selecting a particular network "
-"interface."
-msgstr ""
-
-#: heat/common/wsgi.py:58
-msgid "The port on which the server will listen."
-msgstr ""
-
-#: heat/common/wsgi.py:65
-msgid "Number of backlog requests to configure the socket with"
-msgstr ""
-
-#: heat/common/wsgi.py:68
-msgid "Location of the SSL Certificate File to use for SSL mode"
-msgstr ""
-
-#: heat/common/wsgi.py:71
-msgid "Location of the SSL Key File to use for enabling SSL mode"
-msgstr ""
-
-#: heat/common/wsgi.py:78
-msgid "Number of workers for Heat service"
-msgstr ""
-
-#: heat/common/wsgi.py:129 heat/openstack/common/sslutils.py:61
-msgid ""
-"When running server in SSL mode, you must specify both a cert_file and "
-"key_file option value in your configuration file"
-msgstr ""
-
-#: heat/common/wsgi.py:147
-#, python-format
-msgid "Could not bind to %(bind_addr)safter trying for 30 seconds"
-msgstr ""
-
-#: heat/common/wsgi.py:179
-msgid "SIGTERM received"
-msgstr ""
-
-#: heat/common/wsgi.py:188
-msgid "SIGHUP received"
-msgstr ""
-
-#: heat/common/wsgi.py:203 heat/openstack/common/service.py:288
-#, python-format
-msgid "Starting %d workers"
-msgstr ""
-
-#: heat/common/wsgi.py:214
-#, python-format
-msgid "Removing dead child %s"
-msgstr ""
-
-#: heat/common/wsgi.py:221
-msgid "Caught keyboard interrupt. Exiting."
-msgstr ""
-
-#: heat/common/wsgi.py:225
-msgid "Exited"
-msgstr ""
-
-#: heat/common/wsgi.py:243
-#, python-format
-msgid "Child %d exiting normally"
-msgstr ""
-
-#: heat/common/wsgi.py:246
-#, python-format
-msgid "Started child %s"
-msgstr ""
-
-#: heat/common/wsgi.py:268
-msgid "Starting single process server"
-msgstr ""
-
-#: heat/common/wsgi.py:596
-#, python-format
-msgid "Exception handling resource: %s"
-msgstr ""
-
-#: heat/common/wsgi.py:597
-msgid ""
-"The server could not comply with the request since\r\n"
-"it is either malformed or otherwise incorrect.\r\n"
-msgstr ""
-
-#: heat/common/wsgi.py:611
-#, python-format
-msgid "Returning %(code)s to user: %(explanation)s"
-msgstr ""
-
-#: heat/common/wsgi.py:616
-#, python-format
-msgid "Unexpected error occurred serving API: %s"
-msgstr ""
-
-#: heat/db/sqlalchemy/api.py:393
-msgid "age should be an integer"
-msgstr ""
-
-#: heat/db/sqlalchemy/api.py:395
-msgid "age should be a positive integer"
-msgstr ""
-
-#: heat/db/sqlalchemy/migration.py:63
-msgid "python-migrate is not installed. Exiting."
-msgstr ""
-
-#: heat/db/sqlalchemy/migration.py:73
-msgid "version should be an integer"
-msgstr ""
-
-#: heat/doc/resources.py:85
-msgid "HOT Syntax"
-msgstr ""
-
-#: heat/doc/resources.py:107
-msgid "YAML Syntax"
-msgstr ""
-
-#: heat/doc/resources.py:129
-msgid "JSON Syntax"
-msgstr ""
-
-#: heat/doc/resources.py:161
-#, python-format
-msgid " from %s"
-msgstr ""
-
-#: heat/doc/resources.py:163
-#, python-format
-msgid " up to %s"
-msgstr ""
-
-#: heat/doc/resources.py:165
-#, python-format
-msgid " from length %s"
-msgstr ""
-
-#: heat/doc/resources.py:167
-#, python-format
-msgid " up to length %s"
-msgstr ""
-
-#: heat/doc/resources.py:174
-msgid "Not implemented."
-msgstr ""
-
-#: heat/doc/resources.py:185
-msgid "Required property."
-msgstr ""
-
-#: heat/doc/resources.py:189
-#, python-format
-msgid "Optional property, defaults to \"%s\"."
-msgstr ""
-
-#: heat/doc/resources.py:192
-msgid "Optional property."
-msgstr ""
-
-#: heat/doc/resources.py:196
-#, python-format
-msgid "Value must match pattern: %s"
-msgstr ""
-
-#: heat/doc/resources.py:203
-#, python-format
-msgid "Allowed values: %s"
-msgstr ""
-
-#: heat/doc/resources.py:209
-msgid "Map properties:"
-msgstr ""
-
-#: heat/doc/resources.py:215
-msgid "List contains maps with the properties:"
-msgstr ""
-
-#: heat/doc/resources.py:230
-msgid "Properties"
-msgstr ""
-
-#: heat/doc/resources.py:241
-msgid "Attributes"
-msgstr ""
-
-#: heat/engine/dependencies.py:25
-#, python-format
-msgid "Circular Dependency Found: %(cycle)s"
-msgstr ""
-
-#: heat/engine/environment.py:181
-#, python-format
-msgid "Changing %(path)s from %(was)s to %(now)s"
-msgstr ""
-
-#: heat/engine/environment.py:183
-#, python-format
-msgid "Registering %(path)s -> %(value)s"
-msgstr ""
-
-#: heat/engine/parser.py:74
-#, python-format
-msgid ""
-"Invalid stack name %s must contain only alphanumeric or \"_-.\" "
-"characters, must start with alpha"
-msgstr ""
-
-#: heat/engine/resource.py:55 heat/engine/resource.py:58
-#, python-format
-msgid "The Resource %s requires replacement."
-msgstr ""
-
-#: heat/engine/resource.py:132
-msgid "Resource name may not contain \"/\""
-msgstr ""
-
-#: heat/engine/scheduler.py:65
-#, python-format
-msgid "%s Timed out"
-msgstr ""
-
-#: heat/engine/service.py:311
-msgid "No Template provided."
-msgstr ""
-
-#: heat/engine/stack_resource.py:122
-#, python-format
-msgid "Cannot update %s, stack not created"
-msgstr ""
-
-#: heat/engine/stack_resource.py:151
-#, python-format
-msgid "Cannot suspend %s, stack not created"
-msgstr ""
-
-#: heat/engine/stack_resource.py:173
-#, python-format
-msgid "Cannot resume %s, stack not created"
-msgstr ""
-
-#: heat/engine/resources/__init__.py:39
-#, python-format
-msgid "Failed to load resources from %s"
-msgstr ""
-
-#: heat/engine/resources/instance.py:314
-#, python-format
-msgid "Build of server %(server)s failed: %(message)s (%(code)s)"
-msgstr ""
-
-#: heat/engine/resources/instance.py:321
-#, python-format
-msgid "Nova reported unexpected instance[%(name)s] status[%(status)s]"
-msgstr ""
-
-#: heat/engine/resources/instance.py:418
-#, python-format
-msgid "Cannot suspend %s, resource_id not set"
-msgstr ""
-
-#: heat/engine/resources/instance.py:424 heat/engine/resources/instance.py:478
-#, python-format
-msgid "Failed to find instance %s"
-msgstr ""
-
-#: heat/engine/resources/instance.py:455
-#, python-format
-msgid " nova reported unexpected instance[%(instance)s] status[%(status)s]"
-msgstr ""
-
-#: heat/engine/resources/instance.py:472
-#, python-format
-msgid "Cannot resume %s, resource_id not set"
-msgstr ""
-
-#: heat/engine/resources/nova_utils.py:221
-#, python-format
-msgid "Resizing to '%(flavor)s' failed, status '%(status)s'"
-msgstr ""
-
-#: heat/engine/resources/vpc.py:72
-#, python-format
-msgid "Multiple routers found with name %s"
-msgstr ""
-
-#: heat/engine/resources/neutron/loadbalancer.py:251
-msgid "The ID of the load balancing pool"
-msgstr ""
-
-#: heat/engine/resources/neutron/loadbalancer.py:254
-msgid "Port number on which the servers are running on the members"
-msgstr ""
-
-#: heat/engine/resources/neutron/loadbalancer.py:258
-msgid "The list of Nova server IDs load balanced"
-msgstr ""
-
-#: heat/engine/resources/rackspace/cloud_loadbalancer.py:33
-#, python-format
-msgid "There was an error building the loadbalancer:%(lb_name)s."
-msgstr ""
-
-#: heat/openstack/common/eventlet_backdoor.py:142
-#, python-format
-msgid "Eventlet backdoor listening on %(port)s for process %(pid)d"
-msgstr ""
-
-#: heat/openstack/common/exception.py:102
-msgid "Uncaught exception"
-msgstr ""
-
-#: heat/openstack/common/excutils.py:62
-#, python-format
-msgid "Original exception being dropped: %s"
-msgstr ""
-
-#: heat/openstack/common/excutils.py:90
-#, python-format
-msgid "Unexpected exception occurred %d time(s)... retrying."
-msgstr ""
-
-#: heat/openstack/common/fileutils.py:64
-#, python-format
-msgid "Reloading cached file %s"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:100
-#, python-format
-msgid "Could not release the acquired lock `%s`"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:166
-#, python-format
-msgid "Got semaphore \"%(lock)s\""
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:175
-#, python-format
-msgid "Attempting to grab file lock \"%(lock)s\""
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:185
-#, python-format
-msgid "Created lock path: %s"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:203
-#, python-format
-msgid "Got file lock \"%(lock)s\" at %(path)s"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:207
-#, python-format
-msgid "Released file lock \"%(lock)s\" at %(path)s"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:244
-#, python-format
-msgid "Got semaphore / lock \"%(function)s\""
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:248
-#, python-format
-msgid "Semaphore / lock released \"%(function)s\""
-msgstr ""
-
-#: heat/openstack/common/log.py:244
-#, python-format
-msgid "Deprecated: %s"
-msgstr ""
-
-#: heat/openstack/common/log.py:336
-#, python-format
-msgid "Error loading logging config %(log_config)s: %(err_msg)s"
-msgstr ""
-
-#: heat/openstack/common/log.py:386
-#, python-format
-msgid "syslog facility must be one of: %s"
-msgstr ""
-
-#: heat/openstack/common/log.py:556
-#, python-format
-msgid "Fatal call to deprecated config: %(msg)s"
-msgstr ""
-
-#: heat/openstack/common/loopingcall.py:84
-#, python-format
-msgid "task run outlasted interval by %s sec"
-msgstr ""
-
-#: heat/openstack/common/loopingcall.py:91
-msgid "in fixed duration looping call"
-msgstr ""
-
-#: heat/openstack/common/loopingcall.py:131
-#, python-format
-msgid "Dynamic looping call sleeping for %.02f seconds"
-msgstr ""
-
-#: heat/openstack/common/loopingcall.py:138
-msgid "in dynamic looping call"
-msgstr ""
-
-#: heat/openstack/common/policy.py:75
-msgid "JSON file containing policy"
-msgstr ""
-
-#: heat/openstack/common/policy.py:78
-msgid "Rule enforced when requested rule is not found"
-msgstr ""
-
-#: heat/openstack/common/policy.py:92
-#, python-format
-msgid "Policy doesn't allow %s to be performed."
-msgstr ""
-
-#: heat/openstack/common/policy.py:177
-#, python-format
-msgid "Rules must be an instance of dict or Rules, got %s instead"
-msgstr ""
-
-#: heat/openstack/common/policy.py:207
-msgid "Rules successfully reloaded"
-msgstr ""
-
-#: heat/openstack/common/policy.py:253
-#, python-format
-msgid "Rule %s will be now enforced"
-msgstr ""
-
-#: heat/openstack/common/policy.py:268
-#, python-format
-msgid "Rule [%s] doesn't exist"
-msgstr ""
-
-#: heat/openstack/common/policy.py:478
-#, python-format
-msgid "Failed to understand rule %s"
-msgstr ""
-
-#: heat/openstack/common/policy.py:488
-#, python-format
-msgid "No handler for matches of kind %s"
-msgstr ""
-
-#: heat/openstack/common/policy.py:759
-#, python-format
-msgid "Failed to understand rule %r"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:127
-#, python-format
-msgid "Got unknown keyword args to utils.execute: %r"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:142
-#, python-format
-msgid "Running cmd (subprocess): %s"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:167
-#: heat/openstack/common/processutils.py:240
-#, python-format
-msgid "Result was %s"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:179
-#, python-format
-msgid "%r failed. Retrying."
-msgstr ""
-
-#: heat/openstack/common/processutils.py:219
-#, python-format
-msgid "Running cmd (SSH): %s"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:221
-msgid "Environment not supported over SSH"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:225
-msgid "process_input not supported over SSH"
-msgstr ""
-
-#: heat/openstack/common/service.py:118 heat/openstack/common/service.py:335
-msgid "Full set of CONF:"
-msgstr ""
-
-#: heat/openstack/common/service.py:127 heat/openstack/common/service.py:217
-#, python-format
-msgid "Caught %s, exiting"
-msgstr ""
-
-#: heat/openstack/common/service.py:139
-msgid "Exception during rpc cleanup."
-msgstr ""
-
-#: heat/openstack/common/service.py:188
-msgid "Parent process has died unexpectedly, exiting"
-msgstr ""
-
-#: heat/openstack/common/service.py:223
-msgid "Unhandled exception"
-msgstr ""
-
-#: heat/openstack/common/service.py:256
-msgid "Forking too fast, sleeping"
-msgstr ""
-
-#: heat/openstack/common/service.py:278
-#, python-format
-msgid "Started child %d"
-msgstr ""
-
-#: heat/openstack/common/service.py:305
-#, python-format
-msgid "Child %(pid)d killed by signal %(sig)d"
-msgstr ""
-
-#: heat/openstack/common/service.py:309
-#, python-format
-msgid "Child %(pid)s exited with status %(code)d"
-msgstr ""
-
-#: heat/openstack/common/service.py:313
-#, python-format
-msgid "pid %d not in child list"
-msgstr ""
-
-#: heat/openstack/common/service.py:345
-#, python-format
-msgid "Caught %s, stopping children"
-msgstr ""
-
-#: heat/openstack/common/service.py:363
-#, python-format
-msgid "Waiting on %d children to exit"
-msgstr ""
-
-#: heat/openstack/common/sslutils.py:52
-#, python-format
-msgid "Unable to find cert_file : %s"
-msgstr ""
-
-#: heat/openstack/common/sslutils.py:55
-#, python-format
-msgid "Unable to find ca_file : %s"
-msgstr ""
-
-#: heat/openstack/common/sslutils.py:58
-#, python-format
-msgid "Unable to find key_file : %s"
-msgstr ""
-
-#: heat/openstack/common/sslutils.py:100
-#, python-format
-msgid "Invalid SSL version : %s"
-msgstr ""
-
-#: heat/openstack/common/db/exception.py:44
-msgid "Invalid Parameter: Unicode is not supported by the current database."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/session.py:552
-msgid "DB exception wrapped."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/session.py:614
-#, python-format
-msgid "Got mysql server has gone away: %s"
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/session.py:694
-#, python-format
-msgid "SQL connection failed. %s attempts left."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:50
-msgid "Sort key supplied was not valid."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:89
-msgid "Id not in sort_keys; is sort_keys unique?"
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:111
-msgid "Unknown sort direction, must be 'desc' or 'asc'"
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:182
-#, python-format
-msgid ""
-"Please specify column %s in col_name_col_instance param. It is required "
-"because column has unsupported type by sqlite)."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:188
-#, python-format
-msgid ""
-"col_name_col_instance param has wrong type of column instance for column "
-"%s It should be instance of sqlalchemy.Column."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:268
-#, python-format
-msgid "Deleting duplicated row with id: %(id)s from table: %(table)s"
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:290
-msgid "Unsupported id columns type"
-msgstr ""
-
-#: heat/openstack/common/notifier/api.py:129
-#, python-format
-msgid "%s not in valid priorities"
-msgstr ""
-
-#: heat/openstack/common/notifier/api.py:145
-#, python-format
-msgid ""
-"Problem '%(e)s' attempting to send to notification system. "
-"Payload=%(payload)s"
-msgstr ""
-
-#: heat/openstack/common/notifier/api.py:164
-#, python-format
-msgid "Failed to load notifier %s. These notifications will not be sent."
-msgstr ""
-
-#: heat/openstack/common/notifier/list_notifier.py:112
-#, python-format
-msgid "Problem '%(e)s' attempting to send to notification driver %(driver)s."
-msgstr ""
-
-#: heat/openstack/common/notifier/rabbit_notifier.py:27
-msgid "The rabbit_notifier is now deprecated. Please use rpc_notifier instead."
-msgstr ""
-
-#: heat/openstack/common/notifier/rpc_notifier.py:45
-#: heat/openstack/common/notifier/rpc_notifier2.py:51
-#, python-format
-msgid "Could not send notification to %(topic)s. Payload=%(message)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/__init__.py:107
-#, python-format
-msgid ""
-"A RPC is being made while holding a lock. The locks currently held are "
-"%(locks)s. This is probably a bug. Please report it. Include the "
-"following: [%(stack)s]."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:75
-msgid "Pool creating new connection"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:202
-#, python-format
-msgid "No calling threads waiting for msg_id : %(msg_id)s, message : %(data)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:205
-#, python-format
-msgid "_call_waiters: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:212
-#, python-format
-msgid ""
-"Number of call waiters is greater than warning threshhold: %d. There "
-"could be a MulticallProxyWaiter leak."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:290
-#, python-format
-msgid "unpacked context: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:336
-#, python-format
-msgid "UNIQUE_ID is %s."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:408
-#, python-format
-msgid "received %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:416
-#, python-format
-msgid "no method for message: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:417
-#, python-format
-msgid "No method for message: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:445
-#: heat/openstack/common/rpc/impl_zmq.py:280
-#, python-format
-msgid "Expected exception during message handling (%s)"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:453
-#: heat/openstack/common/rpc/impl_zmq.py:286
-msgid "Exception during message handling"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:527
-#, python-format
-msgid "Making synchronous call on %s ..."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:530
-#, python-format
-msgid "MSG_ID is %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:556
-#, python-format
-msgid "Making asynchronous cast on %s..."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:565
-msgid "Making asynchronous fanout cast..."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:593
-#, python-format
-msgid "Sending %(event_type)s on %(topic)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:77
-msgid "An unknown RPC related exception occurred."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:107
-#, python-format
-msgid ""
-"Remote error: %(exc_type)s %(value)s\n"
-"%(traceback)s."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:124
-#, python-format
-msgid ""
-"Timeout while waiting on RPC response - topic: \"%(topic)s\", RPC method:"
-" \"%(method)s\" info: \"%(info)s\""
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:141
-#: heat/openstack/common/rpc/common.py:142
-#: heat/openstack/common/rpc/common.py:143
-msgid "<unknown>"
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:147
-#, python-format
-msgid "Found duplicate message(%(msg_id)s). Skipping it."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:151
-msgid "Invalid reuse of an RPC connection."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:155
-#, python-format
-msgid "Specified RPC version, %(version)s, not supported by this endpoint."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:160
-#, python-format
-msgid ""
-"Specified RPC envelope version, %(version)s, not supported by this "
-"endpoint."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:165
-#, python-format
-msgid "Specified RPC version cap, %(version_cap)s, is too low"
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:289
-#, python-format
-msgid "Returning exception %s to caller"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:162
-msgid "Failed to process message ... skipping it."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:165
-msgid "Failed to process message ... will requeue."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:502
-#, python-format
-msgid "Reconnecting to AMQP server on %(hostname)s:%(port)d"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:524
-#, python-format
-msgid "Connected to AMQP server on %(hostname)s:%(port)d"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:561
-#, python-format
-msgid ""
-"Unable to connect to AMQP server on %(hostname)s:%(port)d after "
-"%(max_retries)d tries: %(err_str)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:575
-#, python-format
-msgid ""
-"AMQP server on %(hostname)s:%(port)d is unreachable: %(err_str)s. Trying "
-"again in %(sleep_time)d seconds."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:629
-#: heat/openstack/common/rpc/impl_qpid.py:492
-#, python-format
-msgid "Failed to declare consumer for topic '%(topic)s': %(err_str)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:647
-#: heat/openstack/common/rpc/impl_qpid.py:507
-#, python-format
-msgid "Timed out waiting for RPC response: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:651
-#: heat/openstack/common/rpc/impl_qpid.py:511
-#, python-format
-msgid "Failed to consume message from queue: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:690
-#: heat/openstack/common/rpc/impl_qpid.py:546
-#, python-format
-msgid "Failed to publish message to topic '%(topic)s': %(err_str)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:161
-msgid "Failed to process message... skipping it."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:434
-#, python-format
-msgid "Unable to connect to AMQP server: %(e)s. Sleeping %(delay)s seconds"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:440
-#, python-format
-msgid "Connected to AMQP server on %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:453
-msgid "Re-established AMQP queues"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:519
-msgid "Error processing message. Skipping it."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:96
-msgid "JSON serialization failed."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:101
-#, python-format
-msgid "Deserializing: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:136
-#, python-format
-msgid "Connecting to %(addr)s with %(type)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:137
-#, python-format
-msgid "-> Subscribed to %(subscribe)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:138
-#, python-format
-msgid "-> bind: %(bind)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:146
-msgid "Could not open socket."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:158
-#, python-format
-msgid "Subscribing to %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:200
-msgid "You cannot recv on this socket."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:205
-msgid "You cannot send on this socket."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:267
-#, python-format
-msgid "Running func with context: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:305
-msgid "Sending reply"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:339
-msgid "RPC message did not include method."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:371
-msgid "Registering reactor"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:383
-msgid "In reactor registered"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:387
-msgid "Consuming socket"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:437
-#, python-format
-msgid "Creating proxy for topic: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:443
-msgid "Topic contained dangerous characters."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:475
-msgid "Topic socket file creation failed."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:481
-#, python-format
-msgid "Local per-topic backlog buffer full for topic %(topic)s. Dropping message."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:497
-#, python-format
-msgid "Required IPC directory does not exist at %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:506
-#, python-format
-msgid "Permission denied to IPC directory at %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:509
-msgid "Could not create ZeroMQ receiver daemon. Socket may already be in use."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:543
-#, python-format
-msgid "CONSUMER RECEIVED DATA: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:562
-msgid "ZMQ Envelope version unsupported or unknown."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:590
-msgid "Skipping topic registration. Already registered."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:597
-#, python-format
-msgid "Consumer is a zmq.%s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:649
-msgid "Creating payload"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:662
-msgid "Creating queue socket for reply waiter"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:675
-msgid "Sending cast"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:678
-msgid "Cast sent; Waiting reply"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:681
-#, python-format
-msgid "Received message: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:682
-msgid "Unpacking response"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:691
-msgid "Unsupported or unknown ZMQ envelope returned."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:698
-msgid "RPC Message Invalid."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:721
-#, python-format
-msgid "%(msg)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:724
-#, python-format
-msgid "Sending message(s) to: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:728
-msgid "No matchmaker results. Not casting."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:731
-msgid "No match from matchmaker."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:813
-#, python-format
-msgid "rpc_zmq_matchmaker = %(orig)s is deprecated; use %(new)s instead"
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker.py:47
-msgid "Match not found by MatchMaker."
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker.py:81
-msgid "Matchmaker does not implement registration or heartbeat."
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker.py:217
-#, python-format
-msgid "Matchmaker unregistered: %(key)s, %(host)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker.py:229
-msgid "Register before starting heartbeat."
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker_ring.py:77
-#: heat/openstack/common/rpc/matchmaker_ring.py:95
-#, python-format
-msgid "No key defining hosts for topic '%s', see ringfile"
-msgstr ""
-
-#: heat/openstack/common/rpc/service.py:49
-#, python-format
-msgid "Creating Consumer connection for Service %s"
-msgstr ""
-
-#: heat/tests/test_exception.py:24
-#, python-format
-msgid "Testing message %(text)s"
-msgstr ""
-
+++ /dev/null
-# Translations template for heat.
-# Copyright (C) 2013 ORGANIZATION
-# This file is distributed under the same license as the heat project.
-# FIRST AUTHOR <EMAIL@ADDRESS>, 2013.
-#
-#, fuzzy
-msgid ""
-msgstr ""
-"Project-Id-Version: heat 2013.2.a430.gd264018\n"
-"Report-Msgid-Bugs-To: EMAIL@ADDRESS\n"
-"POT-Creation-Date: 2013-08-27 11:15+0800\n"
-"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
-"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
-"Language-Team: LANGUAGE <LL@li.org>\n"
-"MIME-Version: 1.0\n"
-"Content-Type: text/plain; charset=utf-8\n"
-"Content-Transfer-Encoding: 8bit\n"
-"Generated-By: Babel 0.9.6\n"
-
-#: heat/api/aws/ec2token.py:38
-msgid "Authentication Endpoint URI"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:260
-#, python-format
-msgid "Failed to fetch template: %s"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:293
-msgid "DisableRollback and OnFailure may not be used together"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:313
-#, python-format
-msgid "Unexpected action %(action)s"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:331 heat/api/cfn/v1/stacks.py:413
-msgid "Invalid Template URL"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:335 heat/api/cfn/v1/stacks.py:416
-msgid "TemplateBody or TemplateUrl were not given."
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:341 heat/api/cfn/v1/stacks.py:422
-msgid "The Template must be a JSON or YAML document."
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:383
-msgid "stack not not found"
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:49
-#, python-format
-msgid "Processing request: %(method)s %(path)s Accept: %(accept)s"
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:64
-#, python-format
-msgid "Matched versioned URI. Version: %(major_version)d.%(minor_version)d"
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:72
-#, python-format
-msgid ""
-"Unknown version in versioned URI: %(major_version)d.%(minor_version)d. "
-"Returning version choices."
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:88
-#, python-format
-msgid "Matched versioned media type. Version: %(major_version)d.%(minor_version)d"
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:94
-#, python-format
-msgid ""
-"Unknown version in accept header: "
-"%(major_version)d.%(minor_version)d...returning version choices."
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:102
-#, python-format
-msgid "Unknown accept header: %s...returning version choices."
-msgstr ""
-
-#: heat/api/openstack/v1/actions.py:43
-msgid "No action specified"
-msgstr ""
-
-#: heat/api/openstack/v1/actions.py:46
-msgid "Multiple actions specified"
-msgstr ""
-
-#: heat/api/openstack/v1/actions.py:50
-#, python-format
-msgid "Invalid action %s specified"
-msgstr ""
-
-#: heat/api/openstack/v1/actions.py:57
-#, python-format
-msgid "Unexpected action %s"
-msgstr ""
-
-#: heat/api/openstack/v1/events.py:100
-#, python-format
-msgid "No events found for resource %s"
-msgstr ""
-
-#: heat/api/openstack/v1/events.py:118
-#, python-format
-msgid "No event %s found"
-msgstr ""
-
-#: heat/api/openstack/v1/stacks.py:75
-#, python-format
-msgid "%s not in valid format"
-msgstr ""
-
-#: heat/api/openstack/v1/stacks.py:83
-msgid "No stack name specified"
-msgstr ""
-
-#: heat/api/openstack/v1/stacks.py:101
-#, python-format
-msgid "Could not retrieve template: %s"
-msgstr ""
-
-#: heat/api/openstack/v1/stacks.py:104
-msgid "No template specified"
-msgstr ""
-
-#: heat/api/openstack/v1/util.py:56
-msgid "Invalid Stack address"
-msgstr ""
-
-#: heat/common/auth_password.py:116
-msgid "Request missing required header X-Auth-Url"
-msgstr ""
-
-#: heat/common/auth_password.py:121
-#, python-format
-msgid "Header X-Auth-Url \"%s\" not an allowed endpoint"
-msgstr ""
-
-#: heat/common/config.py:37
-msgid "The flavor to use"
-msgstr ""
-
-#: heat/common/config.py:39
-msgid "The API paste config file to use"
-msgstr ""
-
-#: heat/common/config.py:102
-msgid "Allow orchestration of multiple clouds"
-msgstr ""
-
-#: heat/common/config.py:105
-msgid ""
-"Allowed keystone endpoints for auth_uri when multi_cloud is enabled. At "
-"least one endpoint needs to be specified."
-msgstr ""
-
-#: heat/common/exception.py:102
-msgid "An unknown exception occurred."
-msgstr ""
-
-#: heat/common/exception.py:113 heat/openstack/common/rpc/common.py:89
-msgid "Exception in string format operation"
-msgstr ""
-
-#: heat/common/exception.py:125
-#, python-format
-msgid "Missing required credential: %(required)s"
-msgstr ""
-
-#: heat/common/exception.py:129
-#, python-format
-msgid ""
-"Incorrect auth strategy, expected \"%(expected)s\" but received "
-"\"%(received)s\""
-msgstr ""
-
-#: heat/common/exception.py:134
-#, python-format
-msgid "Connect error/bad request to Auth service at URL %(url)s."
-msgstr ""
-
-#: heat/common/exception.py:138
-#, python-format
-msgid "Auth service at URL %(url)s not found."
-msgstr ""
-
-#: heat/common/exception.py:142
-msgid "Authorization failed."
-msgstr ""
-
-#: heat/common/exception.py:146
-msgid "You are not authenticated."
-msgstr ""
-
-#: heat/common/exception.py:150 heat/common/exception.py:155
-msgid "You are not authorized to complete this action."
-msgstr ""
-
-#: heat/common/exception.py:159
-#, python-format
-msgid "Data supplied was not valid: %(reason)s"
-msgstr ""
-
-#: heat/common/exception.py:163
-#, python-format
-msgid "Redirecting to %(uri)s for authorization."
-msgstr ""
-
-#: heat/common/exception.py:167
-msgid "There was an error configuring the client."
-msgstr ""
-
-#: heat/common/exception.py:171
-#, python-format
-msgid ""
-"The request returned a 302 Multiple Choices. This generally means that "
-"you have not included a version indicator in a request URI.\n"
-"\n"
-"The body of response returned:\n"
-"%(body)s"
-msgstr ""
-
-#: heat/common/exception.py:177
-#, python-format
-msgid ""
-"The request returned a 413 Request Entity Too Large. This generally means"
-" that rate limiting or a quota threshold was breached.\n"
-"\n"
-"The response body:\n"
-"%(body)s"
-msgstr ""
-
-#: heat/common/exception.py:188
-msgid ""
-"The request returned a 503 ServiceUnavilable. This generally occurs on "
-"service overload or other transient outage."
-msgstr ""
-
-#: heat/common/exception.py:199
-msgid "The URI was too long."
-msgstr ""
-
-#: heat/common/exception.py:203
-#, python-format
-msgid ""
-"The request returned 500 Internal Server Error\n"
-"\n"
-"The response body:\n"
-"%(body)s"
-msgstr ""
-
-#: heat/common/exception.py:208
-#, python-format
-msgid "Maximum redirects (%(redirects)s) was exceeded."
-msgstr ""
-
-#: heat/common/exception.py:212
-msgid "Received invalid HTTP redirect."
-msgstr ""
-
-#: heat/common/exception.py:216
-msgid "Response from Keystone does not contain a Heat endpoint."
-msgstr ""
-
-#: heat/common/exception.py:220
-#, python-format
-msgid ""
-"Multiple 'image' service matches for region %(region)s. This generally "
-"means that a region is required and you have not supplied one."
-msgstr ""
-
-#: heat/common/exception.py:226
-#, python-format
-msgid "The Parameter (%(key)s) was not provided."
-msgstr ""
-
-#: heat/common/exception.py:230
-#, python-format
-msgid "The Parameter (%(key)s) was not defined in template."
-msgstr ""
-
-#: heat/common/exception.py:234
-#, python-format
-msgid "The Referenced Attribute (%(resource)s %(key)s) is incorrect."
-msgstr ""
-
-#: heat/common/exception.py:239
-#, python-format
-msgid "The specified reference (%(resource)s %(key)s) is incorrect."
-msgstr ""
-
-#: heat/common/exception.py:244
-#, python-format
-msgid "The Key (%(key_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:248
-#, python-format
-msgid "The Flavor ID (%(flavor_id)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:252
-#, python-format
-msgid "The Image (%(image_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:256
-#, python-format
-msgid "Multiple images were found with name (%(image_name)s)."
-msgstr ""
-
-#: heat/common/exception.py:260
-#, python-format
-msgid "Searching Tenant %(target)s from Tenant %(actual)s forbidden."
-msgstr ""
-
-#: heat/common/exception.py:265
-#, python-format
-msgid "The Stack (%(stack_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:269
-#, python-format
-msgid "The Stack (%(stack_name)s) already exists."
-msgstr ""
-
-#: heat/common/exception.py:273
-#, python-format
-msgid "%(message)s"
-msgstr ""
-
-#: heat/common/exception.py:277
-#, python-format
-msgid ""
-"The Resource (%(resource_name)s) could not be found in Stack "
-"%(stack_name)s."
-msgstr ""
-
-#: heat/common/exception.py:282
-#, python-format
-msgid "The Resource Type (%(type_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:286
-#, python-format
-msgid "The Resource (%(resource_name)s) is not available."
-msgstr ""
-
-#: heat/common/exception.py:290
-#, python-format
-msgid "The Resource (%(resource_id)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:294
-#, python-format
-msgid "The Watch Rule (%(watch_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:298
-#, python-format
-msgid "%(exc_type)s: %(message)s"
-msgstr ""
-
-#: heat/common/exception.py:312
-#, python-format
-msgid "%(feature)s is not supported."
-msgstr ""
-
-#: heat/common/exception.py:316
-#, python-format
-msgid "Cannot define the following properties at the same time: %s."
-msgstr ""
-
-#: heat/common/exception.py:334
-msgid "Template exceeds maximum allowed size."
-msgstr ""
-
-#: heat/common/exception.py:338
-msgid ""
-"Egress rules are only allowed when Neutron is used and the 'VpcId' "
-"property is set."
-msgstr ""
-
-#: heat/common/plugin_loader.py:94
-#, python-format
-msgid "Failed to import module %s"
-msgstr ""
-
-#: heat/common/urlfetch.py:37
-#, python-format
-msgid "Fetching data from %s"
-msgstr ""
-
-#: heat/common/wsgi.py:55
-msgid ""
-"Address to bind the server. Useful when selecting a particular network "
-"interface."
-msgstr ""
-
-#: heat/common/wsgi.py:58
-msgid "The port on which the server will listen."
-msgstr ""
-
-#: heat/common/wsgi.py:65
-msgid "Number of backlog requests to configure the socket with"
-msgstr ""
-
-#: heat/common/wsgi.py:68
-msgid "Location of the SSL Certificate File to use for SSL mode"
-msgstr ""
-
-#: heat/common/wsgi.py:71
-msgid "Location of the SSL Key File to use for enabling SSL mode"
-msgstr ""
-
-#: heat/common/wsgi.py:78
-msgid "Number of workers for Heat service"
-msgstr ""
-
-#: heat/common/wsgi.py:129 heat/openstack/common/sslutils.py:61
-msgid ""
-"When running server in SSL mode, you must specify both a cert_file and "
-"key_file option value in your configuration file"
-msgstr ""
-
-#: heat/common/wsgi.py:147
-#, python-format
-msgid "Could not bind to %(bind_addr)s after trying for 30 seconds"
-msgstr ""
-
-#: heat/common/wsgi.py:179
-msgid "SIGTERM received"
-msgstr ""
-
-#: heat/common/wsgi.py:188
-msgid "SIGHUP received"
-msgstr ""
-
-#: heat/common/wsgi.py:203 heat/openstack/common/service.py:288
-#, python-format
-msgid "Starting %d workers"
-msgstr ""
-
-#: heat/common/wsgi.py:214
-#, python-format
-msgid "Removing dead child %s"
-msgstr ""
-
-#: heat/common/wsgi.py:221
-msgid "Caught keyboard interrupt. Exiting."
-msgstr ""
-
-#: heat/common/wsgi.py:225
-msgid "Exited"
-msgstr ""
-
-#: heat/common/wsgi.py:243
-#, python-format
-msgid "Child %d exiting normally"
-msgstr ""
-
-#: heat/common/wsgi.py:246
-#, python-format
-msgid "Started child %s"
-msgstr ""
-
-#: heat/common/wsgi.py:268
-msgid "Starting single process server"
-msgstr ""
-
-#: heat/common/wsgi.py:596
-#, python-format
-msgid "Exception handling resource: %s"
-msgstr ""
-
-#: heat/common/wsgi.py:597
-msgid ""
-"The server could not comply with the request since\r\n"
-"it is either malformed or otherwise incorrect.\r\n"
-msgstr ""
-
-#: heat/common/wsgi.py:611
-#, python-format
-msgid "Returning %(code)s to user: %(explanation)s"
-msgstr ""
-
-#: heat/common/wsgi.py:616
-#, python-format
-msgid "Unexpected error occurred serving API: %s"
-msgstr ""
-
-#: heat/db/sqlalchemy/api.py:393
-msgid "age should be an integer"
-msgstr ""
-
-#: heat/db/sqlalchemy/api.py:395
-msgid "age should be a positive integer"
-msgstr ""
-
-#: heat/db/sqlalchemy/migration.py:63
-msgid "python-migrate is not installed. Exiting."
-msgstr ""
-
-#: heat/db/sqlalchemy/migration.py:73
-msgid "version should be an integer"
-msgstr ""
-
-#: heat/doc/resources.py:85
-msgid "HOT Syntax"
-msgstr ""
-
-#: heat/doc/resources.py:107
-msgid "YAML Syntax"
-msgstr ""
-
-#: heat/doc/resources.py:129
-msgid "JSON Syntax"
-msgstr ""
-
-#: heat/doc/resources.py:161
-#, python-format
-msgid " from %s"
-msgstr ""
-
-#: heat/doc/resources.py:163
-#, python-format
-msgid " up to %s"
-msgstr ""
-
-#: heat/doc/resources.py:165
-#, python-format
-msgid " from length %s"
-msgstr ""
-
-#: heat/doc/resources.py:167
-#, python-format
-msgid " up to length %s"
-msgstr ""
-
-#: heat/doc/resources.py:174
-msgid "Not implemented."
-msgstr ""
-
-#: heat/doc/resources.py:185
-msgid "Required property."
-msgstr ""
-
-#: heat/doc/resources.py:189
-#, python-format
-msgid "Optional property, defaults to \"%s\"."
-msgstr ""
-
-#: heat/doc/resources.py:192
-msgid "Optional property."
-msgstr ""
-
-#: heat/doc/resources.py:196
-#, python-format
-msgid "Value must match pattern: %s"
-msgstr ""
-
-#: heat/doc/resources.py:203
-#, python-format
-msgid "Allowed values: %s"
-msgstr ""
-
-#: heat/doc/resources.py:209
-msgid "Map properties:"
-msgstr ""
-
-#: heat/doc/resources.py:215
-msgid "List contains maps with the properties:"
-msgstr ""
-
-#: heat/doc/resources.py:230
-msgid "Properties"
-msgstr ""
-
-#: heat/doc/resources.py:241
-msgid "Attributes"
-msgstr ""
-
-#: heat/engine/dependencies.py:25
-#, python-format
-msgid "Circular Dependency Found: %(cycle)s"
-msgstr ""
-
-#: heat/engine/environment.py:181
-#, python-format
-msgid "Changing %(path)s from %(was)s to %(now)s"
-msgstr ""
-
-#: heat/engine/environment.py:183
-#, python-format
-msgid "Registering %(path)s -> %(value)s"
-msgstr ""
-
-#: heat/engine/parser.py:74
-#, python-format
-msgid ""
-"Invalid stack name %s must contain only alphanumeric or \"_-.\" "
-"characters, must start with alpha"
-msgstr ""
-
-#: heat/engine/resource.py:55 heat/engine/resource.py:58
-#, python-format
-msgid "The Resource %s requires replacement."
-msgstr ""
-
-#: heat/engine/resource.py:132
-msgid "Resource name may not contain \"/\""
-msgstr ""
-
-#: heat/engine/scheduler.py:65
-#, python-format
-msgid "%s Timed out"
-msgstr ""
-
-#: heat/engine/service.py:311
-msgid "No Template provided."
-msgstr ""
-
-#: heat/engine/stack_resource.py:122
-#, python-format
-msgid "Cannot update %s, stack not created"
-msgstr ""
-
-#: heat/engine/stack_resource.py:151
-#, python-format
-msgid "Cannot suspend %s, stack not created"
-msgstr ""
-
-#: heat/engine/stack_resource.py:173
-#, python-format
-msgid "Cannot resume %s, stack not created"
-msgstr ""
-
-#: heat/engine/resources/__init__.py:39
-#, python-format
-msgid "Failed to load resources from %s"
-msgstr ""
-
-#: heat/engine/resources/instance.py:314
-#, python-format
-msgid "Build of server %(server)s failed: %(message)s (%(code)s)"
-msgstr ""
-
-#: heat/engine/resources/instance.py:321
-#, python-format
-msgid "Nova reported unexpected instance[%(name)s] status[%(status)s]"
-msgstr ""
-
-#: heat/engine/resources/instance.py:418
-#, python-format
-msgid "Cannot suspend %s, resource_id not set"
-msgstr ""
-
-#: heat/engine/resources/instance.py:424 heat/engine/resources/instance.py:478
-#, python-format
-msgid "Failed to find instance %s"
-msgstr ""
-
-#: heat/engine/resources/instance.py:455
-#, python-format
-msgid " nova reported unexpected instance[%(instance)s] status[%(status)s]"
-msgstr ""
-
-#: heat/engine/resources/instance.py:472
-#, python-format
-msgid "Cannot resume %s, resource_id not set"
-msgstr ""
-
-#: heat/engine/resources/nova_utils.py:221
-#, python-format
-msgid "Resizing to '%(flavor)s' failed, status '%(status)s'"
-msgstr ""
-
-#: heat/engine/resources/vpc.py:72
-#, python-format
-msgid "Multiple routers found with name %s"
-msgstr ""
-
-#: heat/engine/resources/neutron/loadbalancer.py:251
-msgid "The ID of the load balancing pool"
-msgstr ""
-
-#: heat/engine/resources/neutron/loadbalancer.py:254
-msgid "Port number on which the servers are running on the members"
-msgstr ""
-
-#: heat/engine/resources/neutron/loadbalancer.py:258
-msgid "The list of Nova server IDs load balanced"
-msgstr ""
-
-#: heat/engine/resources/rackspace/cloud_loadbalancer.py:33
-#, python-format
-msgid "There was an error building the loadbalancer: %(lb_name)s."
-msgstr ""
-
-#: heat/openstack/common/eventlet_backdoor.py:142
-#, python-format
-msgid "Eventlet backdoor listening on %(port)s for process %(pid)d"
-msgstr ""
-
-#: heat/openstack/common/exception.py:102
-msgid "Uncaught exception"
-msgstr ""
-
-#: heat/openstack/common/excutils.py:62
-#, python-format
-msgid "Original exception being dropped: %s"
-msgstr ""
-
-#: heat/openstack/common/excutils.py:90
-#, python-format
-msgid "Unexpected exception occurred %d time(s)... retrying."
-msgstr ""
-
-#: heat/openstack/common/fileutils.py:64
-#, python-format
-msgid "Reloading cached file %s"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:100
-#, python-format
-msgid "Could not release the acquired lock `%s`"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:166
-#, python-format
-msgid "Got semaphore \"%(lock)s\""
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:175
-#, python-format
-msgid "Attempting to grab file lock \"%(lock)s\""
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:185
-#, python-format
-msgid "Created lock path: %s"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:203
-#, python-format
-msgid "Got file lock \"%(lock)s\" at %(path)s"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:207
-#, python-format
-msgid "Released file lock \"%(lock)s\" at %(path)s"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:244
-#, python-format
-msgid "Got semaphore / lock \"%(function)s\""
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:248
-#, python-format
-msgid "Semaphore / lock released \"%(function)s\""
-msgstr ""
-
-#: heat/openstack/common/log.py:244
-#, python-format
-msgid "Deprecated: %s"
-msgstr ""
-
-#: heat/openstack/common/log.py:336
-#, python-format
-msgid "Error loading logging config %(log_config)s: %(err_msg)s"
-msgstr ""
-
-#: heat/openstack/common/log.py:386
-#, python-format
-msgid "syslog facility must be one of: %s"
-msgstr ""
-
-#: heat/openstack/common/log.py:556
-#, python-format
-msgid "Fatal call to deprecated config: %(msg)s"
-msgstr ""
-
-#: heat/openstack/common/loopingcall.py:84
-#, python-format
-msgid "task run outlasted interval by %s sec"
-msgstr ""
-
-#: heat/openstack/common/loopingcall.py:91
-msgid "in fixed duration looping call"
-msgstr ""
-
-#: heat/openstack/common/loopingcall.py:131
-#, python-format
-msgid "Dynamic looping call sleeping for %.02f seconds"
-msgstr ""
-
-#: heat/openstack/common/loopingcall.py:138
-msgid "in dynamic looping call"
-msgstr ""
-
-#: heat/openstack/common/policy.py:75
-msgid "JSON file containing policy"
-msgstr ""
-
-#: heat/openstack/common/policy.py:78
-msgid "Rule enforced when requested rule is not found"
-msgstr ""
-
-#: heat/openstack/common/policy.py:92
-#, python-format
-msgid "Policy doesn't allow %s to be performed."
-msgstr ""
-
-#: heat/openstack/common/policy.py:177
-#, python-format
-msgid "Rules must be an instance of dict or Rules, got %s instead"
-msgstr ""
-
-#: heat/openstack/common/policy.py:207
-msgid "Rules successfully reloaded"
-msgstr ""
-
-#: heat/openstack/common/policy.py:253
-#, python-format
-msgid "Rule %s will be now enforced"
-msgstr ""
-
-#: heat/openstack/common/policy.py:268
-#, python-format
-msgid "Rule [%s] doesn't exist"
-msgstr ""
-
-#: heat/openstack/common/policy.py:478
-#, python-format
-msgid "Failed to understand rule %s"
-msgstr ""
-
-#: heat/openstack/common/policy.py:488
-#, python-format
-msgid "No handler for matches of kind %s"
-msgstr ""
-
-#: heat/openstack/common/policy.py:759
-#, python-format
-msgid "Failed to understand rule %r"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:127
-#, python-format
-msgid "Got unknown keyword args to utils.execute: %r"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:142
-#, python-format
-msgid "Running cmd (subprocess): %s"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:167
-#: heat/openstack/common/processutils.py:240
-#, python-format
-msgid "Result was %s"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:179
-#, python-format
-msgid "%r failed. Retrying."
-msgstr ""
-
-#: heat/openstack/common/processutils.py:219
-#, python-format
-msgid "Running cmd (SSH): %s"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:221
-msgid "Environment not supported over SSH"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:225
-msgid "process_input not supported over SSH"
-msgstr ""
-
-#: heat/openstack/common/service.py:118 heat/openstack/common/service.py:335
-msgid "Full set of CONF:"
-msgstr ""
-
-#: heat/openstack/common/service.py:127 heat/openstack/common/service.py:217
-#, python-format
-msgid "Caught %s, exiting"
-msgstr ""
-
-#: heat/openstack/common/service.py:139
-msgid "Exception during rpc cleanup."
-msgstr ""
-
-#: heat/openstack/common/service.py:188
-msgid "Parent process has died unexpectedly, exiting"
-msgstr ""
-
-#: heat/openstack/common/service.py:223
-msgid "Unhandled exception"
-msgstr ""
-
-#: heat/openstack/common/service.py:256
-msgid "Forking too fast, sleeping"
-msgstr ""
-
-#: heat/openstack/common/service.py:278
-#, python-format
-msgid "Started child %d"
-msgstr ""
-
-#: heat/openstack/common/service.py:305
-#, python-format
-msgid "Child %(pid)d killed by signal %(sig)d"
-msgstr ""
-
-#: heat/openstack/common/service.py:309
-#, python-format
-msgid "Child %(pid)s exited with status %(code)d"
-msgstr ""
-
-#: heat/openstack/common/service.py:313
-#, python-format
-msgid "pid %d not in child list"
-msgstr ""
-
-#: heat/openstack/common/service.py:345
-#, python-format
-msgid "Caught %s, stopping children"
-msgstr ""
-
-#: heat/openstack/common/service.py:363
-#, python-format
-msgid "Waiting on %d children to exit"
-msgstr ""
-
-#: heat/openstack/common/sslutils.py:52
-#, python-format
-msgid "Unable to find cert_file : %s"
-msgstr ""
-
-#: heat/openstack/common/sslutils.py:55
-#, python-format
-msgid "Unable to find ca_file : %s"
-msgstr ""
-
-#: heat/openstack/common/sslutils.py:58
-#, python-format
-msgid "Unable to find key_file : %s"
-msgstr ""
-
-#: heat/openstack/common/sslutils.py:100
-#, python-format
-msgid "Invalid SSL version : %s"
-msgstr ""
-
-#: heat/openstack/common/db/exception.py:44
-msgid "Invalid Parameter: Unicode is not supported by the current database."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/session.py:552
-msgid "DB exception wrapped."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/session.py:614
-#, python-format
-msgid "Got mysql server has gone away: %s"
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/session.py:694
-#, python-format
-msgid "SQL connection failed. %s attempts left."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:50
-msgid "Sort key supplied was not valid."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:89
-msgid "Id not in sort_keys; is sort_keys unique?"
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:111
-msgid "Unknown sort direction, must be 'desc' or 'asc'"
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:182
-#, python-format
-msgid ""
-"Please specify column %s in col_name_col_instance param. It is required "
-"because the column has a type unsupported by sqlite."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:188
-#, python-format
-msgid ""
-"col_name_col_instance param has wrong type of column instance for column "
-"%s. It should be an instance of sqlalchemy.Column."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:268
-#, python-format
-msgid "Deleting duplicated row with id: %(id)s from table: %(table)s"
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:290
-msgid "Unsupported id columns type"
-msgstr ""
-
-#: heat/openstack/common/notifier/api.py:129
-#, python-format
-msgid "%s not in valid priorities"
-msgstr ""
-
-#: heat/openstack/common/notifier/api.py:145
-#, python-format
-msgid ""
-"Problem '%(e)s' attempting to send to notification system. "
-"Payload=%(payload)s"
-msgstr ""
-
-#: heat/openstack/common/notifier/api.py:164
-#, python-format
-msgid "Failed to load notifier %s. These notifications will not be sent."
-msgstr ""
-
-#: heat/openstack/common/notifier/list_notifier.py:112
-#, python-format
-msgid "Problem '%(e)s' attempting to send to notification driver %(driver)s."
-msgstr ""
-
-#: heat/openstack/common/notifier/rabbit_notifier.py:27
-msgid "The rabbit_notifier is now deprecated. Please use rpc_notifier instead."
-msgstr ""
-
-#: heat/openstack/common/notifier/rpc_notifier.py:45
-#: heat/openstack/common/notifier/rpc_notifier2.py:51
-#, python-format
-msgid "Could not send notification to %(topic)s. Payload=%(message)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/__init__.py:107
-#, python-format
-msgid ""
-"A RPC is being made while holding a lock. The locks currently held are "
-"%(locks)s. This is probably a bug. Please report it. Include the "
-"following: [%(stack)s]."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:75
-msgid "Pool creating new connection"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:202
-#, python-format
-msgid "No calling threads waiting for msg_id : %(msg_id)s, message : %(data)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:205
-#, python-format
-msgid "_call_waiters: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:212
-#, python-format
-msgid ""
-"Number of call waiters is greater than warning threshold: %d. There "
-"could be a MulticallProxyWaiter leak."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:290
-#, python-format
-msgid "unpacked context: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:336
-#, python-format
-msgid "UNIQUE_ID is %s."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:408
-#, python-format
-msgid "received %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:416
-#, python-format
-msgid "no method for message: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:417
-#, python-format
-msgid "No method for message: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:445
-#: heat/openstack/common/rpc/impl_zmq.py:280
-#, python-format
-msgid "Expected exception during message handling (%s)"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:453
-#: heat/openstack/common/rpc/impl_zmq.py:286
-msgid "Exception during message handling"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:527
-#, python-format
-msgid "Making synchronous call on %s ..."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:530
-#, python-format
-msgid "MSG_ID is %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:556
-#, python-format
-msgid "Making asynchronous cast on %s..."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:565
-msgid "Making asynchronous fanout cast..."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:593
-#, python-format
-msgid "Sending %(event_type)s on %(topic)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:77
-msgid "An unknown RPC related exception occurred."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:107
-#, python-format
-msgid ""
-"Remote error: %(exc_type)s %(value)s\n"
-"%(traceback)s."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:124
-#, python-format
-msgid ""
-"Timeout while waiting on RPC response - topic: \"%(topic)s\", RPC method:"
-" \"%(method)s\" info: \"%(info)s\""
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:141
-#: heat/openstack/common/rpc/common.py:142
-#: heat/openstack/common/rpc/common.py:143
-msgid "<unknown>"
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:147
-#, python-format
-msgid "Found duplicate message(%(msg_id)s). Skipping it."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:151
-msgid "Invalid reuse of an RPC connection."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:155
-#, python-format
-msgid "Specified RPC version, %(version)s, not supported by this endpoint."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:160
-#, python-format
-msgid ""
-"Specified RPC envelope version, %(version)s, not supported by this "
-"endpoint."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:165
-#, python-format
-msgid "Specified RPC version cap, %(version_cap)s, is too low"
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:289
-#, python-format
-msgid "Returning exception %s to caller"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:162
-msgid "Failed to process message ... skipping it."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:165
-msgid "Failed to process message ... will requeue."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:502
-#, python-format
-msgid "Reconnecting to AMQP server on %(hostname)s:%(port)d"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:524
-#, python-format
-msgid "Connected to AMQP server on %(hostname)s:%(port)d"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:561
-#, python-format
-msgid ""
-"Unable to connect to AMQP server on %(hostname)s:%(port)d after "
-"%(max_retries)d tries: %(err_str)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:575
-#, python-format
-msgid ""
-"AMQP server on %(hostname)s:%(port)d is unreachable: %(err_str)s. Trying "
-"again in %(sleep_time)d seconds."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:629
-#: heat/openstack/common/rpc/impl_qpid.py:492
-#, python-format
-msgid "Failed to declare consumer for topic '%(topic)s': %(err_str)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:647
-#: heat/openstack/common/rpc/impl_qpid.py:507
-#, python-format
-msgid "Timed out waiting for RPC response: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:651
-#: heat/openstack/common/rpc/impl_qpid.py:511
-#, python-format
-msgid "Failed to consume message from queue: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:690
-#: heat/openstack/common/rpc/impl_qpid.py:546
-#, python-format
-msgid "Failed to publish message to topic '%(topic)s': %(err_str)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:161
-msgid "Failed to process message... skipping it."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:434
-#, python-format
-msgid "Unable to connect to AMQP server: %(e)s. Sleeping %(delay)s seconds"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:440
-#, python-format
-msgid "Connected to AMQP server on %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:453
-msgid "Re-established AMQP queues"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:519
-msgid "Error processing message. Skipping it."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:96
-msgid "JSON serialization failed."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:101
-#, python-format
-msgid "Deserializing: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:136
-#, python-format
-msgid "Connecting to %(addr)s with %(type)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:137
-#, python-format
-msgid "-> Subscribed to %(subscribe)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:138
-#, python-format
-msgid "-> bind: %(bind)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:146
-msgid "Could not open socket."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:158
-#, python-format
-msgid "Subscribing to %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:200
-msgid "You cannot recv on this socket."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:205
-msgid "You cannot send on this socket."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:267
-#, python-format
-msgid "Running func with context: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:305
-msgid "Sending reply"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:339
-msgid "RPC message did not include method."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:371
-msgid "Registering reactor"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:383
-msgid "In reactor registered"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:387
-msgid "Consuming socket"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:437
-#, python-format
-msgid "Creating proxy for topic: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:443
-msgid "Topic contained dangerous characters."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:475
-msgid "Topic socket file creation failed."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:481
-#, python-format
-msgid "Local per-topic backlog buffer full for topic %(topic)s. Dropping message."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:497
-#, python-format
-msgid "Required IPC directory does not exist at %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:506
-#, python-format
-msgid "Permission denied to IPC directory at %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:509
-msgid "Could not create ZeroMQ receiver daemon. Socket may already be in use."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:543
-#, python-format
-msgid "CONSUMER RECEIVED DATA: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:562
-msgid "ZMQ Envelope version unsupported or unknown."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:590
-msgid "Skipping topic registration. Already registered."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:597
-#, python-format
-msgid "Consumer is a zmq.%s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:649
-msgid "Creating payload"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:662
-msgid "Creating queue socket for reply waiter"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:675
-msgid "Sending cast"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:678
-msgid "Cast sent; Waiting reply"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:681
-#, python-format
-msgid "Received message: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:682
-msgid "Unpacking response"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:691
-msgid "Unsupported or unknown ZMQ envelope returned."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:698
-msgid "RPC Message Invalid."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:721
-#, python-format
-msgid "%(msg)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:724
-#, python-format
-msgid "Sending message(s) to: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:728
-msgid "No matchmaker results. Not casting."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:731
-msgid "No match from matchmaker."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:813
-#, python-format
-msgid "rpc_zmq_matchmaker = %(orig)s is deprecated; use %(new)s instead"
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker.py:47
-msgid "Match not found by MatchMaker."
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker.py:81
-msgid "Matchmaker does not implement registration or heartbeat."
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker.py:217
-#, python-format
-msgid "Matchmaker unregistered: %(key)s, %(host)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker.py:229
-msgid "Register before starting heartbeat."
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker_ring.py:77
-#: heat/openstack/common/rpc/matchmaker_ring.py:95
-#, python-format
-msgid "No key defining hosts for topic '%s', see ringfile"
-msgstr ""
-
-#: heat/openstack/common/rpc/service.py:49
-#, python-format
-msgid "Creating Consumer connection for Service %s"
-msgstr ""
-
-#: heat/tests/test_exception.py:24
-#, python-format
-msgid "Testing message %(text)s"
-msgstr ""
-
+++ /dev/null
-# Translations template for heat.
-# Copyright (C) 2013 ORGANIZATION
-# This file is distributed under the same license as the heat project.
-# FIRST AUTHOR <EMAIL@ADDRESS>, 2013.
-#
-#, fuzzy
-msgid ""
-msgstr ""
-"Project-Id-Version: heat 2013.2.a430.gd264018\n"
-"Report-Msgid-Bugs-To: EMAIL@ADDRESS\n"
-"POT-Creation-Date: 2013-08-27 11:15+0800\n"
-"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
-"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
-"Language-Team: LANGUAGE <LL@li.org>\n"
-"MIME-Version: 1.0\n"
-"Content-Type: text/plain; charset=utf-8\n"
-"Content-Transfer-Encoding: 8bit\n"
-"Generated-By: Babel 0.9.6\n"
-
-#: heat/api/aws/ec2token.py:38
-msgid "Authentication Endpoint URI"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:260
-#, python-format
-msgid "Failed to fetch template: %s"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:293
-msgid "DisableRollback and OnFailure may not be used together"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:313
-#, python-format
-msgid "Unexpected action %(action)s"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:331 heat/api/cfn/v1/stacks.py:413
-msgid "Invalid Template URL"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:335 heat/api/cfn/v1/stacks.py:416
-msgid "TemplateBody or TemplateUrl were not given."
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:341 heat/api/cfn/v1/stacks.py:422
-msgid "The Template must be a JSON or YAML document."
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:383
-msgid "stack not found"
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:49
-#, python-format
-msgid "Processing request: %(method)s %(path)s Accept: %(accept)s"
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:64
-#, python-format
-msgid "Matched versioned URI. Version: %(major_version)d.%(minor_version)d"
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:72
-#, python-format
-msgid ""
-"Unknown version in versioned URI: %(major_version)d.%(minor_version)d. "
-"Returning version choices."
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:88
-#, python-format
-msgid "Matched versioned media type. Version: %(major_version)d.%(minor_version)d"
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:94
-#, python-format
-msgid ""
-"Unknown version in accept header: "
-"%(major_version)d.%(minor_version)d...returning version choices."
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:102
-#, python-format
-msgid "Unknown accept header: %s...returning version choices."
-msgstr ""
-
-#: heat/api/openstack/v1/actions.py:43
-msgid "No action specified"
-msgstr ""
-
-#: heat/api/openstack/v1/actions.py:46
-msgid "Multiple actions specified"
-msgstr ""
-
-#: heat/api/openstack/v1/actions.py:50
-#, python-format
-msgid "Invalid action %s specified"
-msgstr ""
-
-#: heat/api/openstack/v1/actions.py:57
-#, python-format
-msgid "Unexpected action %s"
-msgstr ""
-
-#: heat/api/openstack/v1/events.py:100
-#, python-format
-msgid "No events found for resource %s"
-msgstr ""
-
-#: heat/api/openstack/v1/events.py:118
-#, python-format
-msgid "No event %s found"
-msgstr ""
-
-#: heat/api/openstack/v1/stacks.py:75
-#, python-format
-msgid "%s not in valid format"
-msgstr ""
-
-#: heat/api/openstack/v1/stacks.py:83
-msgid "No stack name specified"
-msgstr ""
-
-#: heat/api/openstack/v1/stacks.py:101
-#, python-format
-msgid "Could not retrieve template: %s"
-msgstr ""
-
-#: heat/api/openstack/v1/stacks.py:104
-msgid "No template specified"
-msgstr ""
-
-#: heat/api/openstack/v1/util.py:56
-msgid "Invalid Stack address"
-msgstr ""
-
-#: heat/common/auth_password.py:116
-msgid "Request missing required header X-Auth-Url"
-msgstr ""
-
-#: heat/common/auth_password.py:121
-#, python-format
-msgid "Header X-Auth-Url \"%s\" not an allowed endpoint"
-msgstr ""
-
-#: heat/common/config.py:37
-msgid "The flavor to use"
-msgstr ""
-
-#: heat/common/config.py:39
-msgid "The API paste config file to use"
-msgstr ""
-
-#: heat/common/config.py:102
-msgid "Allow orchestration of multiple clouds"
-msgstr ""
-
-#: heat/common/config.py:105
-msgid ""
-"Allowed keystone endpoints for auth_uri when multi_cloud is enabled. At "
-"least one endpoint needs to be specified."
-msgstr ""
-
-#: heat/common/exception.py:102
-msgid "An unknown exception occurred."
-msgstr ""
-
-#: heat/common/exception.py:113 heat/openstack/common/rpc/common.py:89
-msgid "Exception in string format operation"
-msgstr ""
-
-#: heat/common/exception.py:125
-#, python-format
-msgid "Missing required credential: %(required)s"
-msgstr ""
-
-#: heat/common/exception.py:129
-#, python-format
-msgid ""
-"Incorrect auth strategy, expected \"%(expected)s\" but received "
-"\"%(received)s\""
-msgstr ""
-
-#: heat/common/exception.py:134
-#, python-format
-msgid "Connect error/bad request to Auth service at URL %(url)s."
-msgstr ""
-
-#: heat/common/exception.py:138
-#, python-format
-msgid "Auth service at URL %(url)s not found."
-msgstr ""
-
-#: heat/common/exception.py:142
-msgid "Authorization failed."
-msgstr ""
-
-#: heat/common/exception.py:146
-msgid "You are not authenticated."
-msgstr ""
-
-#: heat/common/exception.py:150 heat/common/exception.py:155
-msgid "You are not authorized to complete this action."
-msgstr ""
-
-#: heat/common/exception.py:159
-#, python-format
-msgid "Data supplied was not valid: %(reason)s"
-msgstr ""
-
-#: heat/common/exception.py:163
-#, python-format
-msgid "Redirecting to %(uri)s for authorization."
-msgstr ""
-
-#: heat/common/exception.py:167
-msgid "There was an error configuring the client."
-msgstr ""
-
-#: heat/common/exception.py:171
-#, python-format
-msgid ""
-"The request returned a 302 Multiple Choices. This generally means that "
-"you have not included a version indicator in a request URI.\n"
-"\n"
-"The body of response returned:\n"
-"%(body)s"
-msgstr ""
-
-#: heat/common/exception.py:177
-#, python-format
-msgid ""
-"The request returned a 413 Request Entity Too Large. This generally means"
-" that rate limiting or a quota threshold was breached.\n"
-"\n"
-"The response body:\n"
-"%(body)s"
-msgstr ""
-
-#: heat/common/exception.py:188
-msgid ""
-"The request returned a 503 ServiceUnavilable. This generally occurs on "
-"service overload or other transient outage."
-msgstr ""
-
-#: heat/common/exception.py:199
-msgid "The URI was too long."
-msgstr ""
-
-#: heat/common/exception.py:203
-#, python-format
-msgid ""
-"The request returned 500 Internal Server Error\n"
-"\n"
-"The response body:\n"
-"%(body)s"
-msgstr ""
-
-#: heat/common/exception.py:208
-#, python-format
-msgid "Maximum redirects (%(redirects)s) was exceeded."
-msgstr ""
-
-#: heat/common/exception.py:212
-msgid "Received invalid HTTP redirect."
-msgstr ""
-
-#: heat/common/exception.py:216
-msgid "Response from Keystone does not contain a Heat endpoint."
-msgstr ""
-
-#: heat/common/exception.py:220
-#, python-format
-msgid ""
-"Multiple 'image' service matches for region %(region)s. This generally "
-"means that a region is required and you have not supplied one."
-msgstr ""
-
-#: heat/common/exception.py:226
-#, python-format
-msgid "The Parameter (%(key)s) was not provided."
-msgstr ""
-
-#: heat/common/exception.py:230
-#, python-format
-msgid "The Parameter (%(key)s) was not defined in template."
-msgstr ""
-
-#: heat/common/exception.py:234
-#, python-format
-msgid "The Referenced Attribute (%(resource)s %(key)s) is incorrect."
-msgstr ""
-
-#: heat/common/exception.py:239
-#, python-format
-msgid "The specified reference (%(resource)s %(key)s) is incorrect."
-msgstr ""
-
-#: heat/common/exception.py:244
-#, python-format
-msgid "The Key (%(key_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:248
-#, python-format
-msgid "The Flavor ID (%(flavor_id)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:252
-#, python-format
-msgid "The Image (%(image_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:256
-#, python-format
-msgid "Multiple images were found with name (%(image_name)s)."
-msgstr ""
-
-#: heat/common/exception.py:260
-#, python-format
-msgid "Searching Tenant %(target)s from Tenant %(actual)s forbidden."
-msgstr ""
-
-#: heat/common/exception.py:265
-#, python-format
-msgid "The Stack (%(stack_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:269
-#, python-format
-msgid "The Stack (%(stack_name)s) already exists."
-msgstr ""
-
-#: heat/common/exception.py:273
-#, python-format
-msgid "%(message)s"
-msgstr ""
-
-#: heat/common/exception.py:277
-#, python-format
-msgid ""
-"The Resource (%(resource_name)s) could not be found in Stack "
-"%(stack_name)s."
-msgstr ""
-
-#: heat/common/exception.py:282
-#, python-format
-msgid "The Resource Type (%(type_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:286
-#, python-format
-msgid "The Resource (%(resource_name)s) is not available."
-msgstr ""
-
-#: heat/common/exception.py:290
-#, python-format
-msgid "The Resource (%(resource_id)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:294
-#, python-format
-msgid "The Watch Rule (%(watch_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:298
-#, python-format
-msgid "%(exc_type)s: %(message)s"
-msgstr ""
-
-#: heat/common/exception.py:312
-#, python-format
-msgid "%(feature)s is not supported."
-msgstr ""
-
-#: heat/common/exception.py:316
-#, python-format
-msgid "Cannot define the following properties at the same time: %s."
-msgstr ""
-
-#: heat/common/exception.py:334
-msgid "Template exceeds maximum allowed size."
-msgstr ""
-
-#: heat/common/exception.py:338
-msgid ""
-"Egress rules are only allowed when Neutron is used and the 'VpcId' "
-"property is set."
-msgstr ""
-
-#: heat/common/plugin_loader.py:94
-#, python-format
-msgid "Failed to import module %s"
-msgstr ""
-
-#: heat/common/urlfetch.py:37
-#, python-format
-msgid "Fetching data from %s"
-msgstr ""
-
-#: heat/common/wsgi.py:55
-msgid ""
-"Address to bind the server. Useful when selecting a particular network "
-"interface."
-msgstr ""
-
-#: heat/common/wsgi.py:58
-msgid "The port on which the server will listen."
-msgstr ""
-
-#: heat/common/wsgi.py:65
-msgid "Number of backlog requests to configure the socket with"
-msgstr ""
-
-#: heat/common/wsgi.py:68
-msgid "Location of the SSL Certificate File to use for SSL mode"
-msgstr ""
-
-#: heat/common/wsgi.py:71
-msgid "Location of the SSL Key File to use for enabling SSL mode"
-msgstr ""
-
-#: heat/common/wsgi.py:78
-msgid "Number of workers for Heat service"
-msgstr ""
-
-#: heat/common/wsgi.py:129 heat/openstack/common/sslutils.py:61
-msgid ""
-"When running server in SSL mode, you must specify both a cert_file and "
-"key_file option value in your configuration file"
-msgstr ""
-
-#: heat/common/wsgi.py:147
-#, python-format
-msgid "Could not bind to %(bind_addr)safter trying for 30 seconds"
-msgstr ""
-
-#: heat/common/wsgi.py:179
-msgid "SIGTERM received"
-msgstr ""
-
-#: heat/common/wsgi.py:188
-msgid "SIGHUP received"
-msgstr ""
-
-#: heat/common/wsgi.py:203 heat/openstack/common/service.py:288
-#, python-format
-msgid "Starting %d workers"
-msgstr ""
-
-#: heat/common/wsgi.py:214
-#, python-format
-msgid "Removing dead child %s"
-msgstr ""
-
-#: heat/common/wsgi.py:221
-msgid "Caught keyboard interrupt. Exiting."
-msgstr ""
-
-#: heat/common/wsgi.py:225
-msgid "Exited"
-msgstr ""
-
-#: heat/common/wsgi.py:243
-#, python-format
-msgid "Child %d exiting normally"
-msgstr ""
-
-#: heat/common/wsgi.py:246
-#, python-format
-msgid "Started child %s"
-msgstr ""
-
-#: heat/common/wsgi.py:268
-msgid "Starting single process server"
-msgstr ""
-
-#: heat/common/wsgi.py:596
-#, python-format
-msgid "Exception handling resource: %s"
-msgstr ""
-
-#: heat/common/wsgi.py:597
-msgid ""
-"The server could not comply with the request since\r\n"
-"it is either malformed or otherwise incorrect.\r\n"
-msgstr ""
-
-#: heat/common/wsgi.py:611
-#, python-format
-msgid "Returning %(code)s to user: %(explanation)s"
-msgstr ""
-
-#: heat/common/wsgi.py:616
-#, python-format
-msgid "Unexpected error occurred serving API: %s"
-msgstr ""
-
-#: heat/db/sqlalchemy/api.py:393
-msgid "age should be an integer"
-msgstr ""
-
-#: heat/db/sqlalchemy/api.py:395
-msgid "age should be a positive integer"
-msgstr ""
-
-#: heat/db/sqlalchemy/migration.py:63
-msgid "python-migrate is not installed. Exiting."
-msgstr ""
-
-#: heat/db/sqlalchemy/migration.py:73
-msgid "version should be an integer"
-msgstr ""
-
-#: heat/doc/resources.py:85
-msgid "HOT Syntax"
-msgstr ""
-
-#: heat/doc/resources.py:107
-msgid "YAML Syntax"
-msgstr ""
-
-#: heat/doc/resources.py:129
-msgid "JSON Syntax"
-msgstr ""
-
-#: heat/doc/resources.py:161
-#, python-format
-msgid " from %s"
-msgstr ""
-
-#: heat/doc/resources.py:163
-#, python-format
-msgid " up to %s"
-msgstr ""
-
-#: heat/doc/resources.py:165
-#, python-format
-msgid " from length %s"
-msgstr ""
-
-#: heat/doc/resources.py:167
-#, python-format
-msgid " up to length %s"
-msgstr ""
-
-#: heat/doc/resources.py:174
-msgid "Not implemented."
-msgstr ""
-
-#: heat/doc/resources.py:185
-msgid "Required property."
-msgstr ""
-
-#: heat/doc/resources.py:189
-#, python-format
-msgid "Optional property, defaults to \"%s\"."
-msgstr ""
-
-#: heat/doc/resources.py:192
-msgid "Optional property."
-msgstr ""
-
-#: heat/doc/resources.py:196
-#, python-format
-msgid "Value must match pattern: %s"
-msgstr ""
-
-#: heat/doc/resources.py:203
-#, python-format
-msgid "Allowed values: %s"
-msgstr ""
-
-#: heat/doc/resources.py:209
-msgid "Map properties:"
-msgstr ""
-
-#: heat/doc/resources.py:215
-msgid "List contains maps with the properties:"
-msgstr ""
-
-#: heat/doc/resources.py:230
-msgid "Properties"
-msgstr ""
-
-#: heat/doc/resources.py:241
-msgid "Attributes"
-msgstr ""
-
-#: heat/engine/dependencies.py:25
-#, python-format
-msgid "Circular Dependency Found: %(cycle)s"
-msgstr ""
-
-#: heat/engine/environment.py:181
-#, python-format
-msgid "Changing %(path)s from %(was)s to %(now)s"
-msgstr ""
-
-#: heat/engine/environment.py:183
-#, python-format
-msgid "Registering %(path)s -> %(value)s"
-msgstr ""
-
-#: heat/engine/parser.py:74
-#, python-format
-msgid ""
-"Invalid stack name %s must contain only alphanumeric or \"_-.\" "
-"characters, must start with alpha"
-msgstr ""
-
-#: heat/engine/resource.py:55 heat/engine/resource.py:58
-#, python-format
-msgid "The Resource %s requires replacement."
-msgstr ""
-
-#: heat/engine/resource.py:132
-msgid "Resource name may not contain \"/\""
-msgstr ""
-
-#: heat/engine/scheduler.py:65
-#, python-format
-msgid "%s Timed out"
-msgstr ""
-
-#: heat/engine/service.py:311
-msgid "No Template provided."
-msgstr ""
-
-#: heat/engine/stack_resource.py:122
-#, python-format
-msgid "Cannot update %s, stack not created"
-msgstr ""
-
-#: heat/engine/stack_resource.py:151
-#, python-format
-msgid "Cannot suspend %s, stack not created"
-msgstr ""
-
-#: heat/engine/stack_resource.py:173
-#, python-format
-msgid "Cannot resume %s, stack not created"
-msgstr ""
-
-#: heat/engine/resources/__init__.py:39
-#, python-format
-msgid "Failed to load resources from %s"
-msgstr ""
-
-#: heat/engine/resources/instance.py:314
-#, python-format
-msgid "Build of server %(server)s failed: %(message)s (%(code)s)"
-msgstr ""
-
-#: heat/engine/resources/instance.py:321
-#, python-format
-msgid "Nova reported unexpected instance[%(name)s] status[%(status)s]"
-msgstr ""
-
-#: heat/engine/resources/instance.py:418
-#, python-format
-msgid "Cannot suspend %s, resource_id not set"
-msgstr ""
-
-#: heat/engine/resources/instance.py:424 heat/engine/resources/instance.py:478
-#, python-format
-msgid "Failed to find instance %s"
-msgstr ""
-
-#: heat/engine/resources/instance.py:455
-#, python-format
-msgid " nova reported unexpected instance[%(instance)s] status[%(status)s]"
-msgstr ""
-
-#: heat/engine/resources/instance.py:472
-#, python-format
-msgid "Cannot resume %s, resource_id not set"
-msgstr ""
-
-#: heat/engine/resources/nova_utils.py:221
-#, python-format
-msgid "Resizing to '%(flavor)s' failed, status '%(status)s'"
-msgstr ""
-
-#: heat/engine/resources/vpc.py:72
-#, python-format
-msgid "Multiple routers found with name %s"
-msgstr ""
-
-#: heat/engine/resources/neutron/loadbalancer.py:251
-msgid "The ID of the load balancing pool"
-msgstr ""
-
-#: heat/engine/resources/neutron/loadbalancer.py:254
-msgid "Port number on which the servers are running on the members"
-msgstr ""
-
-#: heat/engine/resources/neutron/loadbalancer.py:258
-msgid "The list of Nova server IDs load balanced"
-msgstr ""
-
-#: heat/engine/resources/rackspace/cloud_loadbalancer.py:33
-#, python-format
-msgid "There was an error building the loadbalancer:%(lb_name)s."
-msgstr ""
-
-#: heat/openstack/common/eventlet_backdoor.py:142
-#, python-format
-msgid "Eventlet backdoor listening on %(port)s for process %(pid)d"
-msgstr ""
-
-#: heat/openstack/common/exception.py:102
-msgid "Uncaught exception"
-msgstr ""
-
-#: heat/openstack/common/excutils.py:62
-#, python-format
-msgid "Original exception being dropped: %s"
-msgstr ""
-
-#: heat/openstack/common/excutils.py:90
-#, python-format
-msgid "Unexpected exception occurred %d time(s)... retrying."
-msgstr ""
-
-#: heat/openstack/common/fileutils.py:64
-#, python-format
-msgid "Reloading cached file %s"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:100
-#, python-format
-msgid "Could not release the acquired lock `%s`"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:166
-#, python-format
-msgid "Got semaphore \"%(lock)s\""
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:175
-#, python-format
-msgid "Attempting to grab file lock \"%(lock)s\""
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:185
-#, python-format
-msgid "Created lock path: %s"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:203
-#, python-format
-msgid "Got file lock \"%(lock)s\" at %(path)s"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:207
-#, python-format
-msgid "Released file lock \"%(lock)s\" at %(path)s"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:244
-#, python-format
-msgid "Got semaphore / lock \"%(function)s\""
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:248
-#, python-format
-msgid "Semaphore / lock released \"%(function)s\""
-msgstr ""
-
-#: heat/openstack/common/log.py:244
-#, python-format
-msgid "Deprecated: %s"
-msgstr ""
-
-#: heat/openstack/common/log.py:336
-#, python-format
-msgid "Error loading logging config %(log_config)s: %(err_msg)s"
-msgstr ""
-
-#: heat/openstack/common/log.py:386
-#, python-format
-msgid "syslog facility must be one of: %s"
-msgstr ""
-
-#: heat/openstack/common/log.py:556
-#, python-format
-msgid "Fatal call to deprecated config: %(msg)s"
-msgstr ""
-
-#: heat/openstack/common/loopingcall.py:84
-#, python-format
-msgid "task run outlasted interval by %s sec"
-msgstr ""
-
-#: heat/openstack/common/loopingcall.py:91
-msgid "in fixed duration looping call"
-msgstr ""
-
-#: heat/openstack/common/loopingcall.py:131
-#, python-format
-msgid "Dynamic looping call sleeping for %.02f seconds"
-msgstr ""
-
-#: heat/openstack/common/loopingcall.py:138
-msgid "in dynamic looping call"
-msgstr ""
-
-#: heat/openstack/common/policy.py:75
-msgid "JSON file containing policy"
-msgstr ""
-
-#: heat/openstack/common/policy.py:78
-msgid "Rule enforced when requested rule is not found"
-msgstr ""
-
-#: heat/openstack/common/policy.py:92
-#, python-format
-msgid "Policy doesn't allow %s to be performed."
-msgstr ""
-
-#: heat/openstack/common/policy.py:177
-#, python-format
-msgid "Rules must be an instance of dict or Rules, got %s instead"
-msgstr ""
-
-#: heat/openstack/common/policy.py:207
-msgid "Rules successfully reloaded"
-msgstr ""
-
-#: heat/openstack/common/policy.py:253
-#, python-format
-msgid "Rule %s will be now enforced"
-msgstr ""
-
-#: heat/openstack/common/policy.py:268
-#, python-format
-msgid "Rule [%s] doesn't exist"
-msgstr ""
-
-#: heat/openstack/common/policy.py:478
-#, python-format
-msgid "Failed to understand rule %s"
-msgstr ""
-
-#: heat/openstack/common/policy.py:488
-#, python-format
-msgid "No handler for matches of kind %s"
-msgstr ""
-
-#: heat/openstack/common/policy.py:759
-#, python-format
-msgid "Failed to understand rule %r"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:127
-#, python-format
-msgid "Got unknown keyword args to utils.execute: %r"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:142
-#, python-format
-msgid "Running cmd (subprocess): %s"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:167
-#: heat/openstack/common/processutils.py:240
-#, python-format
-msgid "Result was %s"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:179
-#, python-format
-msgid "%r failed. Retrying."
-msgstr ""
-
-#: heat/openstack/common/processutils.py:219
-#, python-format
-msgid "Running cmd (SSH): %s"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:221
-msgid "Environment not supported over SSH"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:225
-msgid "process_input not supported over SSH"
-msgstr ""
-
-#: heat/openstack/common/service.py:118 heat/openstack/common/service.py:335
-msgid "Full set of CONF:"
-msgstr ""
-
-#: heat/openstack/common/service.py:127 heat/openstack/common/service.py:217
-#, python-format
-msgid "Caught %s, exiting"
-msgstr ""
-
-#: heat/openstack/common/service.py:139
-msgid "Exception during rpc cleanup."
-msgstr ""
-
-#: heat/openstack/common/service.py:188
-msgid "Parent process has died unexpectedly, exiting"
-msgstr ""
-
-#: heat/openstack/common/service.py:223
-msgid "Unhandled exception"
-msgstr ""
-
-#: heat/openstack/common/service.py:256
-msgid "Forking too fast, sleeping"
-msgstr ""
-
-#: heat/openstack/common/service.py:278
-#, python-format
-msgid "Started child %d"
-msgstr ""
-
-#: heat/openstack/common/service.py:305
-#, python-format
-msgid "Child %(pid)d killed by signal %(sig)d"
-msgstr ""
-
-#: heat/openstack/common/service.py:309
-#, python-format
-msgid "Child %(pid)s exited with status %(code)d"
-msgstr ""
-
-#: heat/openstack/common/service.py:313
-#, python-format
-msgid "pid %d not in child list"
-msgstr ""
-
-#: heat/openstack/common/service.py:345
-#, python-format
-msgid "Caught %s, stopping children"
-msgstr ""
-
-#: heat/openstack/common/service.py:363
-#, python-format
-msgid "Waiting on %d children to exit"
-msgstr ""
-
-#: heat/openstack/common/sslutils.py:52
-#, python-format
-msgid "Unable to find cert_file : %s"
-msgstr ""
-
-#: heat/openstack/common/sslutils.py:55
-#, python-format
-msgid "Unable to find ca_file : %s"
-msgstr ""
-
-#: heat/openstack/common/sslutils.py:58
-#, python-format
-msgid "Unable to find key_file : %s"
-msgstr ""
-
-#: heat/openstack/common/sslutils.py:100
-#, python-format
-msgid "Invalid SSL version : %s"
-msgstr ""
-
-#: heat/openstack/common/db/exception.py:44
-msgid "Invalid Parameter: Unicode is not supported by the current database."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/session.py:552
-msgid "DB exception wrapped."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/session.py:614
-#, python-format
-msgid "Got mysql server has gone away: %s"
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/session.py:694
-#, python-format
-msgid "SQL connection failed. %s attempts left."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:50
-msgid "Sort key supplied was not valid."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:89
-msgid "Id not in sort_keys; is sort_keys unique?"
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:111
-msgid "Unknown sort direction, must be 'desc' or 'asc'"
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:182
-#, python-format
-msgid ""
-"Please specify column %s in col_name_col_instance param. It is required "
-"because column has unsupported type by sqlite)."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:188
-#, python-format
-msgid ""
-"col_name_col_instance param has wrong type of column instance for column "
-"%s It should be instance of sqlalchemy.Column."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:268
-#, python-format
-msgid "Deleting duplicated row with id: %(id)s from table: %(table)s"
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:290
-msgid "Unsupported id columns type"
-msgstr ""
-
-#: heat/openstack/common/notifier/api.py:129
-#, python-format
-msgid "%s not in valid priorities"
-msgstr ""
-
-#: heat/openstack/common/notifier/api.py:145
-#, python-format
-msgid ""
-"Problem '%(e)s' attempting to send to notification system. "
-"Payload=%(payload)s"
-msgstr ""
-
-#: heat/openstack/common/notifier/api.py:164
-#, python-format
-msgid "Failed to load notifier %s. These notifications will not be sent."
-msgstr ""
-
-#: heat/openstack/common/notifier/list_notifier.py:112
-#, python-format
-msgid "Problem '%(e)s' attempting to send to notification driver %(driver)s."
-msgstr ""
-
-#: heat/openstack/common/notifier/rabbit_notifier.py:27
-msgid "The rabbit_notifier is now deprecated. Please use rpc_notifier instead."
-msgstr ""
-
-#: heat/openstack/common/notifier/rpc_notifier.py:45
-#: heat/openstack/common/notifier/rpc_notifier2.py:51
-#, python-format
-msgid "Could not send notification to %(topic)s. Payload=%(message)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/__init__.py:107
-#, python-format
-msgid ""
-"A RPC is being made while holding a lock. The locks currently held are "
-"%(locks)s. This is probably a bug. Please report it. Include the "
-"following: [%(stack)s]."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:75
-msgid "Pool creating new connection"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:202
-#, python-format
-msgid "No calling threads waiting for msg_id : %(msg_id)s, message : %(data)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:205
-#, python-format
-msgid "_call_waiters: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:212
-#, python-format
-msgid ""
-"Number of call waiters is greater than warning threshhold: %d. There "
-"could be a MulticallProxyWaiter leak."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:290
-#, python-format
-msgid "unpacked context: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:336
-#, python-format
-msgid "UNIQUE_ID is %s."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:408
-#, python-format
-msgid "received %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:416
-#, python-format
-msgid "no method for message: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:417
-#, python-format
-msgid "No method for message: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:445
-#: heat/openstack/common/rpc/impl_zmq.py:280
-#, python-format
-msgid "Expected exception during message handling (%s)"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:453
-#: heat/openstack/common/rpc/impl_zmq.py:286
-msgid "Exception during message handling"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:527
-#, python-format
-msgid "Making synchronous call on %s ..."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:530
-#, python-format
-msgid "MSG_ID is %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:556
-#, python-format
-msgid "Making asynchronous cast on %s..."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:565
-msgid "Making asynchronous fanout cast..."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:593
-#, python-format
-msgid "Sending %(event_type)s on %(topic)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:77
-msgid "An unknown RPC related exception occurred."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:107
-#, python-format
-msgid ""
-"Remote error: %(exc_type)s %(value)s\n"
-"%(traceback)s."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:124
-#, python-format
-msgid ""
-"Timeout while waiting on RPC response - topic: \"%(topic)s\", RPC method:"
-" \"%(method)s\" info: \"%(info)s\""
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:141
-#: heat/openstack/common/rpc/common.py:142
-#: heat/openstack/common/rpc/common.py:143
-msgid "<unknown>"
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:147
-#, python-format
-msgid "Found duplicate message(%(msg_id)s). Skipping it."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:151
-msgid "Invalid reuse of an RPC connection."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:155
-#, python-format
-msgid "Specified RPC version, %(version)s, not supported by this endpoint."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:160
-#, python-format
-msgid ""
-"Specified RPC envelope version, %(version)s, not supported by this "
-"endpoint."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:165
-#, python-format
-msgid "Specified RPC version cap, %(version_cap)s, is too low"
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:289
-#, python-format
-msgid "Returning exception %s to caller"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:162
-msgid "Failed to process message ... skipping it."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:165
-msgid "Failed to process message ... will requeue."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:502
-#, python-format
-msgid "Reconnecting to AMQP server on %(hostname)s:%(port)d"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:524
-#, python-format
-msgid "Connected to AMQP server on %(hostname)s:%(port)d"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:561
-#, python-format
-msgid ""
-"Unable to connect to AMQP server on %(hostname)s:%(port)d after "
-"%(max_retries)d tries: %(err_str)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:575
-#, python-format
-msgid ""
-"AMQP server on %(hostname)s:%(port)d is unreachable: %(err_str)s. Trying "
-"again in %(sleep_time)d seconds."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:629
-#: heat/openstack/common/rpc/impl_qpid.py:492
-#, python-format
-msgid "Failed to declare consumer for topic '%(topic)s': %(err_str)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:647
-#: heat/openstack/common/rpc/impl_qpid.py:507
-#, python-format
-msgid "Timed out waiting for RPC response: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:651
-#: heat/openstack/common/rpc/impl_qpid.py:511
-#, python-format
-msgid "Failed to consume message from queue: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:690
-#: heat/openstack/common/rpc/impl_qpid.py:546
-#, python-format
-msgid "Failed to publish message to topic '%(topic)s': %(err_str)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:161
-msgid "Failed to process message... skipping it."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:434
-#, python-format
-msgid "Unable to connect to AMQP server: %(e)s. Sleeping %(delay)s seconds"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:440
-#, python-format
-msgid "Connected to AMQP server on %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:453
-msgid "Re-established AMQP queues"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:519
-msgid "Error processing message. Skipping it."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:96
-msgid "JSON serialization failed."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:101
-#, python-format
-msgid "Deserializing: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:136
-#, python-format
-msgid "Connecting to %(addr)s with %(type)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:137
-#, python-format
-msgid "-> Subscribed to %(subscribe)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:138
-#, python-format
-msgid "-> bind: %(bind)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:146
-msgid "Could not open socket."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:158
-#, python-format
-msgid "Subscribing to %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:200
-msgid "You cannot recv on this socket."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:205
-msgid "You cannot send on this socket."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:267
-#, python-format
-msgid "Running func with context: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:305
-msgid "Sending reply"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:339
-msgid "RPC message did not include method."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:371
-msgid "Registering reactor"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:383
-msgid "In reactor registered"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:387
-msgid "Consuming socket"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:437
-#, python-format
-msgid "Creating proxy for topic: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:443
-msgid "Topic contained dangerous characters."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:475
-msgid "Topic socket file creation failed."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:481
-#, python-format
-msgid "Local per-topic backlog buffer full for topic %(topic)s. Dropping message."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:497
-#, python-format
-msgid "Required IPC directory does not exist at %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:506
-#, python-format
-msgid "Permission denied to IPC directory at %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:509
-msgid "Could not create ZeroMQ receiver daemon. Socket may already be in use."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:543
-#, python-format
-msgid "CONSUMER RECEIVED DATA: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:562
-msgid "ZMQ Envelope version unsupported or unknown."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:590
-msgid "Skipping topic registration. Already registered."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:597
-#, python-format
-msgid "Consumer is a zmq.%s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:649
-msgid "Creating payload"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:662
-msgid "Creating queue socket for reply waiter"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:675
-msgid "Sending cast"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:678
-msgid "Cast sent; Waiting reply"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:681
-#, python-format
-msgid "Received message: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:682
-msgid "Unpacking response"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:691
-msgid "Unsupported or unknown ZMQ envelope returned."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:698
-msgid "RPC Message Invalid."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:721
-#, python-format
-msgid "%(msg)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:724
-#, python-format
-msgid "Sending message(s) to: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:728
-msgid "No matchmaker results. Not casting."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:731
-msgid "No match from matchmaker."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:813
-#, python-format
-msgid "rpc_zmq_matchmaker = %(orig)s is deprecated; use %(new)s instead"
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker.py:47
-msgid "Match not found by MatchMaker."
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker.py:81
-msgid "Matchmaker does not implement registration or heartbeat."
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker.py:217
-#, python-format
-msgid "Matchmaker unregistered: %(key)s, %(host)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker.py:229
-msgid "Register before starting heartbeat."
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker_ring.py:77
-#: heat/openstack/common/rpc/matchmaker_ring.py:95
-#, python-format
-msgid "No key defining hosts for topic '%s', see ringfile"
-msgstr ""
-
-#: heat/openstack/common/rpc/service.py:49
-#, python-format
-msgid "Creating Consumer connection for Service %s"
-msgstr ""
-
-#: heat/tests/test_exception.py:24
-#, python-format
-msgid "Testing message %(text)s"
-msgstr ""
-
+++ /dev/null
-# Translations template for heat.
-# Copyright (C) 2013 ORGANIZATION
-# This file is distributed under the same license as the heat project.
-# FIRST AUTHOR <EMAIL@ADDRESS>, 2013.
-#
-#, fuzzy
-msgid ""
-msgstr ""
-"Project-Id-Version: heat 2013.2.a430.gd264018\n"
-"Report-Msgid-Bugs-To: EMAIL@ADDRESS\n"
-"POT-Creation-Date: 2013-08-27 11:15+0800\n"
-"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
-"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
-"Language-Team: LANGUAGE <LL@li.org>\n"
-"MIME-Version: 1.0\n"
-"Content-Type: text/plain; charset=utf-8\n"
-"Content-Transfer-Encoding: 8bit\n"
-"Generated-By: Babel 0.9.6\n"
-
-#: heat/api/aws/ec2token.py:38
-msgid "Authentication Endpoint URI"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:260
-#, python-format
-msgid "Failed to fetch template: %s"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:293
-msgid "DisableRollback and OnFailure may not be used together"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:313
-#, python-format
-msgid "Unexpected action %(action)s"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:331 heat/api/cfn/v1/stacks.py:413
-msgid "Invalid Template URL"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:335 heat/api/cfn/v1/stacks.py:416
-msgid "TemplateBody or TemplateUrl were not given."
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:341 heat/api/cfn/v1/stacks.py:422
-msgid "The Template must be a JSON or YAML document."
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:383
-msgid "stack not found"
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:49
-#, python-format
-msgid "Processing request: %(method)s %(path)s Accept: %(accept)s"
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:64
-#, python-format
-msgid "Matched versioned URI. Version: %(major_version)d.%(minor_version)d"
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:72
-#, python-format
-msgid ""
-"Unknown version in versioned URI: %(major_version)d.%(minor_version)d. "
-"Returning version choices."
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:88
-#, python-format
-msgid "Matched versioned media type. Version: %(major_version)d.%(minor_version)d"
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:94
-#, python-format
-msgid ""
-"Unknown version in accept header: "
-"%(major_version)d.%(minor_version)d...returning version choices."
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:102
-#, python-format
-msgid "Unknown accept header: %s...returning version choices."
-msgstr ""
-
-#: heat/api/openstack/v1/actions.py:43
-msgid "No action specified"
-msgstr ""
-
-#: heat/api/openstack/v1/actions.py:46
-msgid "Multiple actions specified"
-msgstr ""
-
-#: heat/api/openstack/v1/actions.py:50
-#, python-format
-msgid "Invalid action %s specified"
-msgstr ""
-
-#: heat/api/openstack/v1/actions.py:57
-#, python-format
-msgid "Unexpected action %s"
-msgstr ""
-
-#: heat/api/openstack/v1/events.py:100
-#, python-format
-msgid "No events found for resource %s"
-msgstr ""
-
-#: heat/api/openstack/v1/events.py:118
-#, python-format
-msgid "No event %s found"
-msgstr ""
-
-#: heat/api/openstack/v1/stacks.py:75
-#, python-format
-msgid "%s not in valid format"
-msgstr ""
-
-#: heat/api/openstack/v1/stacks.py:83
-msgid "No stack name specified"
-msgstr ""
-
-#: heat/api/openstack/v1/stacks.py:101
-#, python-format
-msgid "Could not retrieve template: %s"
-msgstr ""
-
-#: heat/api/openstack/v1/stacks.py:104
-msgid "No template specified"
-msgstr ""
-
-#: heat/api/openstack/v1/util.py:56
-msgid "Invalid Stack address"
-msgstr ""
-
-#: heat/common/auth_password.py:116
-msgid "Request missing required header X-Auth-Url"
-msgstr ""
-
-#: heat/common/auth_password.py:121
-#, python-format
-msgid "Header X-Auth-Url \"%s\" not an allowed endpoint"
-msgstr ""
-
-#: heat/common/config.py:37
-msgid "The flavor to use"
-msgstr ""
-
-#: heat/common/config.py:39
-msgid "The API paste config file to use"
-msgstr ""
-
-#: heat/common/config.py:102
-msgid "Allow orchestration of multiple clouds"
-msgstr ""
-
-#: heat/common/config.py:105
-msgid ""
-"Allowed keystone endpoints for auth_uri when multi_cloud is enabled. At "
-"least one endpoint needs to be specified."
-msgstr ""
-
-#: heat/common/exception.py:102
-msgid "An unknown exception occurred."
-msgstr ""
-
-#: heat/common/exception.py:113 heat/openstack/common/rpc/common.py:89
-msgid "Exception in string format operation"
-msgstr ""
-
-#: heat/common/exception.py:125
-#, python-format
-msgid "Missing required credential: %(required)s"
-msgstr ""
-
-#: heat/common/exception.py:129
-#, python-format
-msgid ""
-"Incorrect auth strategy, expected \"%(expected)s\" but received "
-"\"%(received)s\""
-msgstr ""
-
-#: heat/common/exception.py:134
-#, python-format
-msgid "Connect error/bad request to Auth service at URL %(url)s."
-msgstr ""
-
-#: heat/common/exception.py:138
-#, python-format
-msgid "Auth service at URL %(url)s not found."
-msgstr ""
-
-#: heat/common/exception.py:142
-msgid "Authorization failed."
-msgstr ""
-
-#: heat/common/exception.py:146
-msgid "You are not authenticated."
-msgstr ""
-
-#: heat/common/exception.py:150 heat/common/exception.py:155
-msgid "You are not authorized to complete this action."
-msgstr ""
-
-#: heat/common/exception.py:159
-#, python-format
-msgid "Data supplied was not valid: %(reason)s"
-msgstr ""
-
-#: heat/common/exception.py:163
-#, python-format
-msgid "Redirecting to %(uri)s for authorization."
-msgstr ""
-
-#: heat/common/exception.py:167
-msgid "There was an error configuring the client."
-msgstr ""
-
-#: heat/common/exception.py:171
-#, python-format
-msgid ""
-"The request returned a 302 Multiple Choices. This generally means that "
-"you have not included a version indicator in a request URI.\n"
-"\n"
-"The body of response returned:\n"
-"%(body)s"
-msgstr ""
-
-#: heat/common/exception.py:177
-#, python-format
-msgid ""
-"The request returned a 413 Request Entity Too Large. This generally means"
-" that rate limiting or a quota threshold was breached.\n"
-"\n"
-"The response body:\n"
-"%(body)s"
-msgstr ""
-
-#: heat/common/exception.py:188
-msgid ""
-"The request returned a 503 Service Unavailable. This generally occurs on "
-"service overload or other transient outage."
-msgstr ""
-
-#: heat/common/exception.py:199
-msgid "The URI was too long."
-msgstr ""
-
-#: heat/common/exception.py:203
-#, python-format
-msgid ""
-"The request returned 500 Internal Server Error\n"
-"\n"
-"The response body:\n"
-"%(body)s"
-msgstr ""
-
-#: heat/common/exception.py:208
-#, python-format
-msgid "Maximum redirects (%(redirects)s) was exceeded."
-msgstr ""
-
-#: heat/common/exception.py:212
-msgid "Received invalid HTTP redirect."
-msgstr ""
-
-#: heat/common/exception.py:216
-msgid "Response from Keystone does not contain a Heat endpoint."
-msgstr ""
-
-#: heat/common/exception.py:220
-#, python-format
-msgid ""
-"Multiple 'image' service matches for region %(region)s. This generally "
-"means that a region is required and you have not supplied one."
-msgstr ""
-
-#: heat/common/exception.py:226
-#, python-format
-msgid "The Parameter (%(key)s) was not provided."
-msgstr ""
-
-#: heat/common/exception.py:230
-#, python-format
-msgid "The Parameter (%(key)s) was not defined in template."
-msgstr ""
-
-#: heat/common/exception.py:234
-#, python-format
-msgid "The Referenced Attribute (%(resource)s %(key)s) is incorrect."
-msgstr ""
-
-#: heat/common/exception.py:239
-#, python-format
-msgid "The specified reference (%(resource)s %(key)s) is incorrect."
-msgstr ""
-
-#: heat/common/exception.py:244
-#, python-format
-msgid "The Key (%(key_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:248
-#, python-format
-msgid "The Flavor ID (%(flavor_id)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:252
-#, python-format
-msgid "The Image (%(image_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:256
-#, python-format
-msgid "Multiple images were found with name (%(image_name)s)."
-msgstr ""
-
-#: heat/common/exception.py:260
-#, python-format
-msgid "Searching Tenant %(target)s from Tenant %(actual)s forbidden."
-msgstr ""
-
-#: heat/common/exception.py:265
-#, python-format
-msgid "The Stack (%(stack_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:269
-#, python-format
-msgid "The Stack (%(stack_name)s) already exists."
-msgstr ""
-
-#: heat/common/exception.py:273
-#, python-format
-msgid "%(message)s"
-msgstr ""
-
-#: heat/common/exception.py:277
-#, python-format
-msgid ""
-"The Resource (%(resource_name)s) could not be found in Stack "
-"%(stack_name)s."
-msgstr ""
-
-#: heat/common/exception.py:282
-#, python-format
-msgid "The Resource Type (%(type_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:286
-#, python-format
-msgid "The Resource (%(resource_name)s) is not available."
-msgstr ""
-
-#: heat/common/exception.py:290
-#, python-format
-msgid "The Resource (%(resource_id)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:294
-#, python-format
-msgid "The Watch Rule (%(watch_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:298
-#, python-format
-msgid "%(exc_type)s: %(message)s"
-msgstr ""
-
-#: heat/common/exception.py:312
-#, python-format
-msgid "%(feature)s is not supported."
-msgstr ""
-
-#: heat/common/exception.py:316
-#, python-format
-msgid "Cannot define the following properties at the same time: %s."
-msgstr ""
-
-#: heat/common/exception.py:334
-msgid "Template exceeds maximum allowed size."
-msgstr ""
-
-#: heat/common/exception.py:338
-msgid ""
-"Egress rules are only allowed when Neutron is used and the 'VpcId' "
-"property is set."
-msgstr ""
-
-#: heat/common/plugin_loader.py:94
-#, python-format
-msgid "Failed to import module %s"
-msgstr ""
-
-#: heat/common/urlfetch.py:37
-#, python-format
-msgid "Fetching data from %s"
-msgstr ""
-
-#: heat/common/wsgi.py:55
-msgid ""
-"Address to bind the server. Useful when selecting a particular network "
-"interface."
-msgstr ""
-
-#: heat/common/wsgi.py:58
-msgid "The port on which the server will listen."
-msgstr ""
-
-#: heat/common/wsgi.py:65
-msgid "Number of backlog requests to configure the socket with"
-msgstr ""
-
-#: heat/common/wsgi.py:68
-msgid "Location of the SSL Certificate File to use for SSL mode"
-msgstr ""
-
-#: heat/common/wsgi.py:71
-msgid "Location of the SSL Key File to use for enabling SSL mode"
-msgstr ""
-
-#: heat/common/wsgi.py:78
-msgid "Number of workers for Heat service"
-msgstr ""
-
-#: heat/common/wsgi.py:129 heat/openstack/common/sslutils.py:61
-msgid ""
-"When running server in SSL mode, you must specify both a cert_file and "
-"key_file option value in your configuration file"
-msgstr ""
-
-#: heat/common/wsgi.py:147
-#, python-format
-msgid "Could not bind to %(bind_addr)s after trying for 30 seconds"
-msgstr ""
-
-#: heat/common/wsgi.py:179
-msgid "SIGTERM received"
-msgstr ""
-
-#: heat/common/wsgi.py:188
-msgid "SIGHUP received"
-msgstr ""
-
-#: heat/common/wsgi.py:203 heat/openstack/common/service.py:288
-#, python-format
-msgid "Starting %d workers"
-msgstr ""
-
-#: heat/common/wsgi.py:214
-#, python-format
-msgid "Removing dead child %s"
-msgstr ""
-
-#: heat/common/wsgi.py:221
-msgid "Caught keyboard interrupt. Exiting."
-msgstr ""
-
-#: heat/common/wsgi.py:225
-msgid "Exited"
-msgstr ""
-
-#: heat/common/wsgi.py:243
-#, python-format
-msgid "Child %d exiting normally"
-msgstr ""
-
-#: heat/common/wsgi.py:246
-#, python-format
-msgid "Started child %s"
-msgstr ""
-
-#: heat/common/wsgi.py:268
-msgid "Starting single process server"
-msgstr ""
-
-#: heat/common/wsgi.py:596
-#, python-format
-msgid "Exception handling resource: %s"
-msgstr ""
-
-#: heat/common/wsgi.py:597
-msgid ""
-"The server could not comply with the request since\r\n"
-"it is either malformed or otherwise incorrect.\r\n"
-msgstr ""
-
-#: heat/common/wsgi.py:611
-#, python-format
-msgid "Returning %(code)s to user: %(explanation)s"
-msgstr ""
-
-#: heat/common/wsgi.py:616
-#, python-format
-msgid "Unexpected error occurred serving API: %s"
-msgstr ""
-
-#: heat/db/sqlalchemy/api.py:393
-msgid "age should be an integer"
-msgstr ""
-
-#: heat/db/sqlalchemy/api.py:395
-msgid "age should be a positive integer"
-msgstr ""
-
-#: heat/db/sqlalchemy/migration.py:63
-msgid "python-migrate is not installed. Exiting."
-msgstr ""
-
-#: heat/db/sqlalchemy/migration.py:73
-msgid "version should be an integer"
-msgstr ""
-
-#: heat/doc/resources.py:85
-msgid "HOT Syntax"
-msgstr ""
-
-#: heat/doc/resources.py:107
-msgid "YAML Syntax"
-msgstr ""
-
-#: heat/doc/resources.py:129
-msgid "JSON Syntax"
-msgstr ""
-
-#: heat/doc/resources.py:161
-#, python-format
-msgid " from %s"
-msgstr ""
-
-#: heat/doc/resources.py:163
-#, python-format
-msgid " up to %s"
-msgstr ""
-
-#: heat/doc/resources.py:165
-#, python-format
-msgid " from length %s"
-msgstr ""
-
-#: heat/doc/resources.py:167
-#, python-format
-msgid " up to length %s"
-msgstr ""
-
-#: heat/doc/resources.py:174
-msgid "Not implemented."
-msgstr ""
-
-#: heat/doc/resources.py:185
-msgid "Required property."
-msgstr ""
-
-#: heat/doc/resources.py:189
-#, python-format
-msgid "Optional property, defaults to \"%s\"."
-msgstr ""
-
-#: heat/doc/resources.py:192
-msgid "Optional property."
-msgstr ""
-
-#: heat/doc/resources.py:196
-#, python-format
-msgid "Value must match pattern: %s"
-msgstr ""
-
-#: heat/doc/resources.py:203
-#, python-format
-msgid "Allowed values: %s"
-msgstr ""
-
-#: heat/doc/resources.py:209
-msgid "Map properties:"
-msgstr ""
-
-#: heat/doc/resources.py:215
-msgid "List contains maps with the properties:"
-msgstr ""
-
-#: heat/doc/resources.py:230
-msgid "Properties"
-msgstr ""
-
-#: heat/doc/resources.py:241
-msgid "Attributes"
-msgstr ""
-
-#: heat/engine/dependencies.py:25
-#, python-format
-msgid "Circular Dependency Found: %(cycle)s"
-msgstr ""
-
-#: heat/engine/environment.py:181
-#, python-format
-msgid "Changing %(path)s from %(was)s to %(now)s"
-msgstr ""
-
-#: heat/engine/environment.py:183
-#, python-format
-msgid "Registering %(path)s -> %(value)s"
-msgstr ""
-
-#: heat/engine/parser.py:74
-#, python-format
-msgid ""
-"Invalid stack name %s, must contain only alphanumeric or \"_-.\" "
-"characters and must start with an alphabetic character"
-msgstr ""
-
-#: heat/engine/resource.py:55 heat/engine/resource.py:58
-#, python-format
-msgid "The Resource %s requires replacement."
-msgstr ""
-
-#: heat/engine/resource.py:132
-msgid "Resource name may not contain \"/\""
-msgstr ""
-
-#: heat/engine/scheduler.py:65
-#, python-format
-msgid "%s Timed out"
-msgstr ""
-
-#: heat/engine/service.py:311
-msgid "No Template provided."
-msgstr ""
-
-#: heat/engine/stack_resource.py:122
-#, python-format
-msgid "Cannot update %s, stack not created"
-msgstr ""
-
-#: heat/engine/stack_resource.py:151
-#, python-format
-msgid "Cannot suspend %s, stack not created"
-msgstr ""
-
-#: heat/engine/stack_resource.py:173
-#, python-format
-msgid "Cannot resume %s, stack not created"
-msgstr ""
-
-#: heat/engine/resources/__init__.py:39
-#, python-format
-msgid "Failed to load resources from %s"
-msgstr ""
-
-#: heat/engine/resources/instance.py:314
-#, python-format
-msgid "Build of server %(server)s failed: %(message)s (%(code)s)"
-msgstr ""
-
-#: heat/engine/resources/instance.py:321
-#, python-format
-msgid "Nova reported unexpected instance[%(name)s] status[%(status)s]"
-msgstr ""
-
-#: heat/engine/resources/instance.py:418
-#, python-format
-msgid "Cannot suspend %s, resource_id not set"
-msgstr ""
-
-#: heat/engine/resources/instance.py:424 heat/engine/resources/instance.py:478
-#, python-format
-msgid "Failed to find instance %s"
-msgstr ""
-
-#: heat/engine/resources/instance.py:455
-#, python-format
-msgid " nova reported unexpected instance[%(instance)s] status[%(status)s]"
-msgstr ""
-
-#: heat/engine/resources/instance.py:472
-#, python-format
-msgid "Cannot resume %s, resource_id not set"
-msgstr ""
-
-#: heat/engine/resources/nova_utils.py:221
-#, python-format
-msgid "Resizing to '%(flavor)s' failed, status '%(status)s'"
-msgstr ""
-
-#: heat/engine/resources/vpc.py:72
-#, python-format
-msgid "Multiple routers found with name %s"
-msgstr ""
-
-#: heat/engine/resources/neutron/loadbalancer.py:251
-msgid "The ID of the load balancing pool"
-msgstr ""
-
-#: heat/engine/resources/neutron/loadbalancer.py:254
-msgid "Port number on which the servers are running on the members"
-msgstr ""
-
-#: heat/engine/resources/neutron/loadbalancer.py:258
-msgid "The list of Nova server IDs load balanced"
-msgstr ""
-
-#: heat/engine/resources/rackspace/cloud_loadbalancer.py:33
-#, python-format
-msgid "There was an error building the load balancer: %(lb_name)s."
-msgstr ""
-
-#: heat/openstack/common/eventlet_backdoor.py:142
-#, python-format
-msgid "Eventlet backdoor listening on %(port)s for process %(pid)d"
-msgstr ""
-
-#: heat/openstack/common/exception.py:102
-msgid "Uncaught exception"
-msgstr ""
-
-#: heat/openstack/common/excutils.py:62
-#, python-format
-msgid "Original exception being dropped: %s"
-msgstr ""
-
-#: heat/openstack/common/excutils.py:90
-#, python-format
-msgid "Unexpected exception occurred %d time(s)... retrying."
-msgstr ""
-
-#: heat/openstack/common/fileutils.py:64
-#, python-format
-msgid "Reloading cached file %s"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:100
-#, python-format
-msgid "Could not release the acquired lock `%s`"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:166
-#, python-format
-msgid "Got semaphore \"%(lock)s\""
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:175
-#, python-format
-msgid "Attempting to grab file lock \"%(lock)s\""
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:185
-#, python-format
-msgid "Created lock path: %s"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:203
-#, python-format
-msgid "Got file lock \"%(lock)s\" at %(path)s"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:207
-#, python-format
-msgid "Released file lock \"%(lock)s\" at %(path)s"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:244
-#, python-format
-msgid "Got semaphore / lock \"%(function)s\""
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:248
-#, python-format
-msgid "Semaphore / lock released \"%(function)s\""
-msgstr ""
-
-#: heat/openstack/common/log.py:244
-#, python-format
-msgid "Deprecated: %s"
-msgstr ""
-
-#: heat/openstack/common/log.py:336
-#, python-format
-msgid "Error loading logging config %(log_config)s: %(err_msg)s"
-msgstr ""
-
-#: heat/openstack/common/log.py:386
-#, python-format
-msgid "syslog facility must be one of: %s"
-msgstr ""
-
-#: heat/openstack/common/log.py:556
-#, python-format
-msgid "Fatal call to deprecated config: %(msg)s"
-msgstr ""
-
-#: heat/openstack/common/loopingcall.py:84
-#, python-format
-msgid "task run outlasted interval by %s sec"
-msgstr ""
-
-#: heat/openstack/common/loopingcall.py:91
-msgid "in fixed duration looping call"
-msgstr ""
-
-#: heat/openstack/common/loopingcall.py:131
-#, python-format
-msgid "Dynamic looping call sleeping for %.02f seconds"
-msgstr ""
-
-#: heat/openstack/common/loopingcall.py:138
-msgid "in dynamic looping call"
-msgstr ""
-
-#: heat/openstack/common/policy.py:75
-msgid "JSON file containing policy"
-msgstr ""
-
-#: heat/openstack/common/policy.py:78
-msgid "Rule enforced when requested rule is not found"
-msgstr ""
-
-#: heat/openstack/common/policy.py:92
-#, python-format
-msgid "Policy doesn't allow %s to be performed."
-msgstr ""
-
-#: heat/openstack/common/policy.py:177
-#, python-format
-msgid "Rules must be an instance of dict or Rules, got %s instead"
-msgstr ""
-
-#: heat/openstack/common/policy.py:207
-msgid "Rules successfully reloaded"
-msgstr ""
-
-#: heat/openstack/common/policy.py:253
-#, python-format
-msgid "Rule %s will be now enforced"
-msgstr ""
-
-#: heat/openstack/common/policy.py:268
-#, python-format
-msgid "Rule [%s] doesn't exist"
-msgstr ""
-
-#: heat/openstack/common/policy.py:478
-#, python-format
-msgid "Failed to understand rule %s"
-msgstr ""
-
-#: heat/openstack/common/policy.py:488
-#, python-format
-msgid "No handler for matches of kind %s"
-msgstr ""
-
-#: heat/openstack/common/policy.py:759
-#, python-format
-msgid "Failed to understand rule %r"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:127
-#, python-format
-msgid "Got unknown keyword args to utils.execute: %r"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:142
-#, python-format
-msgid "Running cmd (subprocess): %s"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:167
-#: heat/openstack/common/processutils.py:240
-#, python-format
-msgid "Result was %s"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:179
-#, python-format
-msgid "%r failed. Retrying."
-msgstr ""
-
-#: heat/openstack/common/processutils.py:219
-#, python-format
-msgid "Running cmd (SSH): %s"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:221
-msgid "Environment not supported over SSH"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:225
-msgid "process_input not supported over SSH"
-msgstr ""
-
-#: heat/openstack/common/service.py:118 heat/openstack/common/service.py:335
-msgid "Full set of CONF:"
-msgstr ""
-
-#: heat/openstack/common/service.py:127 heat/openstack/common/service.py:217
-#, python-format
-msgid "Caught %s, exiting"
-msgstr ""
-
-#: heat/openstack/common/service.py:139
-msgid "Exception during rpc cleanup."
-msgstr ""
-
-#: heat/openstack/common/service.py:188
-msgid "Parent process has died unexpectedly, exiting"
-msgstr ""
-
-#: heat/openstack/common/service.py:223
-msgid "Unhandled exception"
-msgstr ""
-
-#: heat/openstack/common/service.py:256
-msgid "Forking too fast, sleeping"
-msgstr ""
-
-#: heat/openstack/common/service.py:278
-#, python-format
-msgid "Started child %d"
-msgstr ""
-
-#: heat/openstack/common/service.py:305
-#, python-format
-msgid "Child %(pid)d killed by signal %(sig)d"
-msgstr ""
-
-#: heat/openstack/common/service.py:309
-#, python-format
-msgid "Child %(pid)s exited with status %(code)d"
-msgstr ""
-
-#: heat/openstack/common/service.py:313
-#, python-format
-msgid "pid %d not in child list"
-msgstr ""
-
-#: heat/openstack/common/service.py:345
-#, python-format
-msgid "Caught %s, stopping children"
-msgstr ""
-
-#: heat/openstack/common/service.py:363
-#, python-format
-msgid "Waiting on %d children to exit"
-msgstr ""
-
-#: heat/openstack/common/sslutils.py:52
-#, python-format
-msgid "Unable to find cert_file : %s"
-msgstr ""
-
-#: heat/openstack/common/sslutils.py:55
-#, python-format
-msgid "Unable to find ca_file : %s"
-msgstr ""
-
-#: heat/openstack/common/sslutils.py:58
-#, python-format
-msgid "Unable to find key_file : %s"
-msgstr ""
-
-#: heat/openstack/common/sslutils.py:100
-#, python-format
-msgid "Invalid SSL version : %s"
-msgstr ""
-
-#: heat/openstack/common/db/exception.py:44
-msgid "Invalid Parameter: Unicode is not supported by the current database."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/session.py:552
-msgid "DB exception wrapped."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/session.py:614
-#, python-format
-msgid "Got mysql server has gone away: %s"
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/session.py:694
-#, python-format
-msgid "SQL connection failed. %s attempts left."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:50
-msgid "Sort key supplied was not valid."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:89
-msgid "Id not in sort_keys; is sort_keys unique?"
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:111
-msgid "Unknown sort direction, must be 'desc' or 'asc'"
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:182
-#, python-format
-msgid ""
-"Please specify column %s in col_name_col_instance param. It is required "
-"because the column has a type unsupported by sqlite."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:188
-#, python-format
-msgid ""
-"col_name_col_instance param has wrong type of column instance for column "
-"%s. It should be an instance of sqlalchemy.Column."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:268
-#, python-format
-msgid "Deleting duplicated row with id: %(id)s from table: %(table)s"
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:290
-msgid "Unsupported id columns type"
-msgstr ""
-
-#: heat/openstack/common/notifier/api.py:129
-#, python-format
-msgid "%s not in valid priorities"
-msgstr ""
-
-#: heat/openstack/common/notifier/api.py:145
-#, python-format
-msgid ""
-"Problem '%(e)s' attempting to send to notification system. "
-"Payload=%(payload)s"
-msgstr ""
-
-#: heat/openstack/common/notifier/api.py:164
-#, python-format
-msgid "Failed to load notifier %s. These notifications will not be sent."
-msgstr ""
-
-#: heat/openstack/common/notifier/list_notifier.py:112
-#, python-format
-msgid "Problem '%(e)s' attempting to send to notification driver %(driver)s."
-msgstr ""
-
-#: heat/openstack/common/notifier/rabbit_notifier.py:27
-msgid "The rabbit_notifier is now deprecated. Please use rpc_notifier instead."
-msgstr ""
-
-#: heat/openstack/common/notifier/rpc_notifier.py:45
-#: heat/openstack/common/notifier/rpc_notifier2.py:51
-#, python-format
-msgid "Could not send notification to %(topic)s. Payload=%(message)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/__init__.py:107
-#, python-format
-msgid ""
-"A RPC is being made while holding a lock. The locks currently held are "
-"%(locks)s. This is probably a bug. Please report it. Include the "
-"following: [%(stack)s]."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:75
-msgid "Pool creating new connection"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:202
-#, python-format
-msgid "No calling threads waiting for msg_id : %(msg_id)s, message : %(data)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:205
-#, python-format
-msgid "_call_waiters: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:212
-#, python-format
-msgid ""
-"Number of call waiters is greater than warning threshold: %d. There "
-"could be a MulticallProxyWaiter leak."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:290
-#, python-format
-msgid "unpacked context: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:336
-#, python-format
-msgid "UNIQUE_ID is %s."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:408
-#, python-format
-msgid "received %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:416
-#, python-format
-msgid "no method for message: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:417
-#, python-format
-msgid "No method for message: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:445
-#: heat/openstack/common/rpc/impl_zmq.py:280
-#, python-format
-msgid "Expected exception during message handling (%s)"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:453
-#: heat/openstack/common/rpc/impl_zmq.py:286
-msgid "Exception during message handling"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:527
-#, python-format
-msgid "Making synchronous call on %s ..."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:530
-#, python-format
-msgid "MSG_ID is %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:556
-#, python-format
-msgid "Making asynchronous cast on %s..."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:565
-msgid "Making asynchronous fanout cast..."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:593
-#, python-format
-msgid "Sending %(event_type)s on %(topic)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:77
-msgid "An unknown RPC related exception occurred."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:107
-#, python-format
-msgid ""
-"Remote error: %(exc_type)s %(value)s\n"
-"%(traceback)s."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:124
-#, python-format
-msgid ""
-"Timeout while waiting on RPC response - topic: \"%(topic)s\", RPC method:"
-" \"%(method)s\" info: \"%(info)s\""
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:141
-#: heat/openstack/common/rpc/common.py:142
-#: heat/openstack/common/rpc/common.py:143
-msgid "<unknown>"
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:147
-#, python-format
-msgid "Found duplicate message(%(msg_id)s). Skipping it."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:151
-msgid "Invalid reuse of an RPC connection."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:155
-#, python-format
-msgid "Specified RPC version, %(version)s, not supported by this endpoint."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:160
-#, python-format
-msgid ""
-"Specified RPC envelope version, %(version)s, not supported by this "
-"endpoint."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:165
-#, python-format
-msgid "Specified RPC version cap, %(version_cap)s, is too low"
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:289
-#, python-format
-msgid "Returning exception %s to caller"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:162
-msgid "Failed to process message ... skipping it."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:165
-msgid "Failed to process message ... will requeue."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:502
-#, python-format
-msgid "Reconnecting to AMQP server on %(hostname)s:%(port)d"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:524
-#, python-format
-msgid "Connected to AMQP server on %(hostname)s:%(port)d"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:561
-#, python-format
-msgid ""
-"Unable to connect to AMQP server on %(hostname)s:%(port)d after "
-"%(max_retries)d tries: %(err_str)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:575
-#, python-format
-msgid ""
-"AMQP server on %(hostname)s:%(port)d is unreachable: %(err_str)s. Trying "
-"again in %(sleep_time)d seconds."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:629
-#: heat/openstack/common/rpc/impl_qpid.py:492
-#, python-format
-msgid "Failed to declare consumer for topic '%(topic)s': %(err_str)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:647
-#: heat/openstack/common/rpc/impl_qpid.py:507
-#, python-format
-msgid "Timed out waiting for RPC response: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:651
-#: heat/openstack/common/rpc/impl_qpid.py:511
-#, python-format
-msgid "Failed to consume message from queue: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:690
-#: heat/openstack/common/rpc/impl_qpid.py:546
-#, python-format
-msgid "Failed to publish message to topic '%(topic)s': %(err_str)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:161
-msgid "Failed to process message... skipping it."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:434
-#, python-format
-msgid "Unable to connect to AMQP server: %(e)s. Sleeping %(delay)s seconds"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:440
-#, python-format
-msgid "Connected to AMQP server on %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:453
-msgid "Re-established AMQP queues"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:519
-msgid "Error processing message. Skipping it."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:96
-msgid "JSON serialization failed."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:101
-#, python-format
-msgid "Deserializing: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:136
-#, python-format
-msgid "Connecting to %(addr)s with %(type)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:137
-#, python-format
-msgid "-> Subscribed to %(subscribe)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:138
-#, python-format
-msgid "-> bind: %(bind)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:146
-msgid "Could not open socket."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:158
-#, python-format
-msgid "Subscribing to %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:200
-msgid "You cannot recv on this socket."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:205
-msgid "You cannot send on this socket."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:267
-#, python-format
-msgid "Running func with context: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:305
-msgid "Sending reply"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:339
-msgid "RPC message did not include method."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:371
-msgid "Registering reactor"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:383
-msgid "In reactor registered"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:387
-msgid "Consuming socket"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:437
-#, python-format
-msgid "Creating proxy for topic: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:443
-msgid "Topic contained dangerous characters."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:475
-msgid "Topic socket file creation failed."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:481
-#, python-format
-msgid "Local per-topic backlog buffer full for topic %(topic)s. Dropping message."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:497
-#, python-format
-msgid "Required IPC directory does not exist at %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:506
-#, python-format
-msgid "Permission denied to IPC directory at %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:509
-msgid "Could not create ZeroMQ receiver daemon. Socket may already be in use."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:543
-#, python-format
-msgid "CONSUMER RECEIVED DATA: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:562
-msgid "ZMQ Envelope version unsupported or unknown."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:590
-msgid "Skipping topic registration. Already registered."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:597
-#, python-format
-msgid "Consumer is a zmq.%s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:649
-msgid "Creating payload"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:662
-msgid "Creating queue socket for reply waiter"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:675
-msgid "Sending cast"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:678
-msgid "Cast sent; waiting for reply"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:681
-#, python-format
-msgid "Received message: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:682
-msgid "Unpacking response"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:691
-msgid "Unsupported or unknown ZMQ envelope returned."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:698
-msgid "RPC Message Invalid."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:721
-#, python-format
-msgid "%(msg)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:724
-#, python-format
-msgid "Sending message(s) to: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:728
-msgid "No matchmaker results. Not casting."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:731
-msgid "No match from matchmaker."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:813
-#, python-format
-msgid "rpc_zmq_matchmaker = %(orig)s is deprecated; use %(new)s instead"
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker.py:47
-msgid "Match not found by MatchMaker."
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker.py:81
-msgid "Matchmaker does not implement registration or heartbeat."
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker.py:217
-#, python-format
-msgid "Matchmaker unregistered: %(key)s, %(host)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker.py:229
-msgid "Register before starting heartbeat."
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker_ring.py:77
-#: heat/openstack/common/rpc/matchmaker_ring.py:95
-#, python-format
-msgid "No key defining hosts for topic '%s', see ringfile"
-msgstr ""
-
-#: heat/openstack/common/rpc/service.py:49
-#, python-format
-msgid "Creating Consumer connection for Service %s"
-msgstr ""
-
-#: heat/tests/test_exception.py:24
-#, python-format
-msgid "Testing message %(text)s"
-msgstr ""
-
+++ /dev/null
-# Translations template for heat.
-# Copyright (C) 2013 ORGANIZATION
-# This file is distributed under the same license as the heat project.
-# FIRST AUTHOR <EMAIL@ADDRESS>, 2013.
-#
-#, fuzzy
-msgid ""
-msgstr ""
-"Project-Id-Version: heat 2013.2.a430.gd264018\n"
-"Report-Msgid-Bugs-To: EMAIL@ADDRESS\n"
-"POT-Creation-Date: 2013-08-27 11:15+0800\n"
-"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
-"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
-"Language-Team: LANGUAGE <LL@li.org>\n"
-"MIME-Version: 1.0\n"
-"Content-Type: text/plain; charset=utf-8\n"
-"Content-Transfer-Encoding: 8bit\n"
-"Generated-By: Babel 0.9.6\n"
-
-#: heat/api/aws/ec2token.py:38
-msgid "Authentication Endpoint URI"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:260
-#, python-format
-msgid "Failed to fetch template: %s"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:293
-msgid "DisableRollback and OnFailure may not be used together"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:313
-#, python-format
-msgid "Unexpected action %(action)s"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:331 heat/api/cfn/v1/stacks.py:413
-msgid "Invalid Template URL"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:335 heat/api/cfn/v1/stacks.py:416
-msgid "TemplateBody or TemplateUrl were not given."
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:341 heat/api/cfn/v1/stacks.py:422
-msgid "The Template must be a JSON or YAML document."
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:383
-msgid "stack not found"
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:49
-#, python-format
-msgid "Processing request: %(method)s %(path)s Accept: %(accept)s"
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:64
-#, python-format
-msgid "Matched versioned URI. Version: %(major_version)d.%(minor_version)d"
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:72
-#, python-format
-msgid ""
-"Unknown version in versioned URI: %(major_version)d.%(minor_version)d. "
-"Returning version choices."
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:88
-#, python-format
-msgid "Matched versioned media type. Version: %(major_version)d.%(minor_version)d"
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:94
-#, python-format
-msgid ""
-"Unknown version in accept header: "
-"%(major_version)d.%(minor_version)d...returning version choices."
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:102
-#, python-format
-msgid "Unknown accept header: %s...returning version choices."
-msgstr ""
-
-#: heat/api/openstack/v1/actions.py:43
-msgid "No action specified"
-msgstr ""
-
-#: heat/api/openstack/v1/actions.py:46
-msgid "Multiple actions specified"
-msgstr ""
-
-#: heat/api/openstack/v1/actions.py:50
-#, python-format
-msgid "Invalid action %s specified"
-msgstr ""
-
-#: heat/api/openstack/v1/actions.py:57
-#, python-format
-msgid "Unexpected action %s"
-msgstr ""
-
-#: heat/api/openstack/v1/events.py:100
-#, python-format
-msgid "No events found for resource %s"
-msgstr ""
-
-#: heat/api/openstack/v1/events.py:118
-#, python-format
-msgid "No event %s found"
-msgstr ""
-
-#: heat/api/openstack/v1/stacks.py:75
-#, python-format
-msgid "%s not in valid format"
-msgstr ""
-
-#: heat/api/openstack/v1/stacks.py:83
-msgid "No stack name specified"
-msgstr ""
-
-#: heat/api/openstack/v1/stacks.py:101
-#, python-format
-msgid "Could not retrieve template: %s"
-msgstr ""
-
-#: heat/api/openstack/v1/stacks.py:104
-msgid "No template specified"
-msgstr ""
-
-#: heat/api/openstack/v1/util.py:56
-msgid "Invalid Stack address"
-msgstr ""
-
-#: heat/common/auth_password.py:116
-msgid "Request missing required header X-Auth-Url"
-msgstr ""
-
-#: heat/common/auth_password.py:121
-#, python-format
-msgid "Header X-Auth-Url \"%s\" not an allowed endpoint"
-msgstr ""
-
-#: heat/common/config.py:37
-msgid "The flavor to use"
-msgstr ""
-
-#: heat/common/config.py:39
-msgid "The API paste config file to use"
-msgstr ""
-
-#: heat/common/config.py:102
-msgid "Allow orchestration of multiple clouds"
-msgstr ""
-
-#: heat/common/config.py:105
-msgid ""
-"Allowed keystone endpoints for auth_uri when multi_cloud is enabled. At "
-"least one endpoint needs to be specified."
-msgstr ""
-
-#: heat/common/exception.py:102
-msgid "An unknown exception occurred."
-msgstr ""
-
-#: heat/common/exception.py:113 heat/openstack/common/rpc/common.py:89
-msgid "Exception in string format operation"
-msgstr ""
-
-#: heat/common/exception.py:125
-#, python-format
-msgid "Missing required credential: %(required)s"
-msgstr ""
-
-#: heat/common/exception.py:129
-#, python-format
-msgid ""
-"Incorrect auth strategy, expected \"%(expected)s\" but received "
-"\"%(received)s\""
-msgstr ""
-
-#: heat/common/exception.py:134
-#, python-format
-msgid "Connect error/bad request to Auth service at URL %(url)s."
-msgstr ""
-
-#: heat/common/exception.py:138
-#, python-format
-msgid "Auth service at URL %(url)s not found."
-msgstr ""
-
-#: heat/common/exception.py:142
-msgid "Authorization failed."
-msgstr ""
-
-#: heat/common/exception.py:146
-msgid "You are not authenticated."
-msgstr ""
-
-#: heat/common/exception.py:150 heat/common/exception.py:155
-msgid "You are not authorized to complete this action."
-msgstr ""
-
-#: heat/common/exception.py:159
-#, python-format
-msgid "Data supplied was not valid: %(reason)s"
-msgstr ""
-
-#: heat/common/exception.py:163
-#, python-format
-msgid "Redirecting to %(uri)s for authorization."
-msgstr ""
-
-#: heat/common/exception.py:167
-msgid "There was an error configuring the client."
-msgstr ""
-
-#: heat/common/exception.py:171
-#, python-format
-msgid ""
-"The request returned a 300 Multiple Choices. This generally means that "
-"you have not included a version indicator in the request URI.\n"
-"\n"
-"The body of the response returned:\n"
-"%(body)s"
-msgstr ""
-
-#: heat/common/exception.py:177
-#, python-format
-msgid ""
-"The request returned a 413 Request Entity Too Large. This generally means"
-" that rate limiting or a quota threshold was breached.\n"
-"\n"
-"The response body:\n"
-"%(body)s"
-msgstr ""
-
-#: heat/common/exception.py:188
-msgid ""
-"The request returned a 503 Service Unavailable. This generally occurs on "
-"service overload or other transient outage."
-msgstr ""
-
-#: heat/common/exception.py:199
-msgid "The URI was too long."
-msgstr ""
-
-#: heat/common/exception.py:203
-#, python-format
-msgid ""
-"The request returned 500 Internal Server Error\n"
-"\n"
-"The response body:\n"
-"%(body)s"
-msgstr ""
-
-#: heat/common/exception.py:208
-#, python-format
-msgid "Maximum redirects (%(redirects)s) was exceeded."
-msgstr ""
-
-#: heat/common/exception.py:212
-msgid "Received invalid HTTP redirect."
-msgstr ""
-
-#: heat/common/exception.py:216
-msgid "Response from Keystone does not contain a Heat endpoint."
-msgstr ""
-
-#: heat/common/exception.py:220
-#, python-format
-msgid ""
-"Multiple 'image' service matches for region %(region)s. This generally "
-"means that a region is required and you have not supplied one."
-msgstr ""
-
-#: heat/common/exception.py:226
-#, python-format
-msgid "The Parameter (%(key)s) was not provided."
-msgstr ""
-
-#: heat/common/exception.py:230
-#, python-format
-msgid "The Parameter (%(key)s) was not defined in template."
-msgstr ""
-
-#: heat/common/exception.py:234
-#, python-format
-msgid "The Referenced Attribute (%(resource)s %(key)s) is incorrect."
-msgstr ""
-
-#: heat/common/exception.py:239
-#, python-format
-msgid "The specified reference (%(resource)s %(key)s) is incorrect."
-msgstr ""
-
-#: heat/common/exception.py:244
-#, python-format
-msgid "The Key (%(key_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:248
-#, python-format
-msgid "The Flavor ID (%(flavor_id)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:252
-#, python-format
-msgid "The Image (%(image_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:256
-#, python-format
-msgid "Multiple images were found with name (%(image_name)s)."
-msgstr ""
-
-#: heat/common/exception.py:260
-#, python-format
-msgid "Searching Tenant %(target)s from Tenant %(actual)s forbidden."
-msgstr ""
-
-#: heat/common/exception.py:265
-#, python-format
-msgid "The Stack (%(stack_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:269
-#, python-format
-msgid "The Stack (%(stack_name)s) already exists."
-msgstr ""
-
-#: heat/common/exception.py:273
-#, python-format
-msgid "%(message)s"
-msgstr ""
-
-#: heat/common/exception.py:277
-#, python-format
-msgid ""
-"The Resource (%(resource_name)s) could not be found in Stack "
-"%(stack_name)s."
-msgstr ""
-
-#: heat/common/exception.py:282
-#, python-format
-msgid "The Resource Type (%(type_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:286
-#, python-format
-msgid "The Resource (%(resource_name)s) is not available."
-msgstr ""
-
-#: heat/common/exception.py:290
-#, python-format
-msgid "The Resource (%(resource_id)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:294
-#, python-format
-msgid "The Watch Rule (%(watch_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:298
-#, python-format
-msgid "%(exc_type)s: %(message)s"
-msgstr ""
-
-#: heat/common/exception.py:312
-#, python-format
-msgid "%(feature)s is not supported."
-msgstr ""
-
-#: heat/common/exception.py:316
-#, python-format
-msgid "Cannot define the following properties at the same time: %s."
-msgstr ""
-
-#: heat/common/exception.py:334
-msgid "Template exceeds maximum allowed size."
-msgstr ""
-
-#: heat/common/exception.py:338
-msgid ""
-"Egress rules are only allowed when Neutron is used and the 'VpcId' "
-"property is set."
-msgstr ""
-
-#: heat/common/plugin_loader.py:94
-#, python-format
-msgid "Failed to import module %s"
-msgstr ""
-
-#: heat/common/urlfetch.py:37
-#, python-format
-msgid "Fetching data from %s"
-msgstr ""
-
-#: heat/common/wsgi.py:55
-msgid ""
-"Address to bind the server. Useful when selecting a particular network "
-"interface."
-msgstr ""
-
-#: heat/common/wsgi.py:58
-msgid "The port on which the server will listen."
-msgstr ""
-
-#: heat/common/wsgi.py:65
-msgid "Number of backlog requests to configure the socket with"
-msgstr ""
-
-#: heat/common/wsgi.py:68
-msgid "Location of the SSL Certificate File to use for SSL mode"
-msgstr ""
-
-#: heat/common/wsgi.py:71
-msgid "Location of the SSL Key File to use for enabling SSL mode"
-msgstr ""
-
-#: heat/common/wsgi.py:78
-msgid "Number of workers for Heat service"
-msgstr ""
-
-#: heat/common/wsgi.py:129 heat/openstack/common/sslutils.py:61
-msgid ""
-"When running server in SSL mode, you must specify both a cert_file and "
-"key_file option value in your configuration file"
-msgstr ""
-
-#: heat/common/wsgi.py:147
-#, python-format
-msgid "Could not bind to %(bind_addr)s after trying for 30 seconds"
-msgstr ""
-
-#: heat/common/wsgi.py:179
-msgid "SIGTERM received"
-msgstr ""
-
-#: heat/common/wsgi.py:188
-msgid "SIGHUP received"
-msgstr ""
-
-#: heat/common/wsgi.py:203 heat/openstack/common/service.py:288
-#, python-format
-msgid "Starting %d workers"
-msgstr ""
-
-#: heat/common/wsgi.py:214
-#, python-format
-msgid "Removing dead child %s"
-msgstr ""
-
-#: heat/common/wsgi.py:221
-msgid "Caught keyboard interrupt. Exiting."
-msgstr ""
-
-#: heat/common/wsgi.py:225
-msgid "Exited"
-msgstr ""
-
-#: heat/common/wsgi.py:243
-#, python-format
-msgid "Child %d exiting normally"
-msgstr ""
-
-#: heat/common/wsgi.py:246
-#, python-format
-msgid "Started child %s"
-msgstr ""
-
-#: heat/common/wsgi.py:268
-msgid "Starting single process server"
-msgstr ""
-
-#: heat/common/wsgi.py:596
-#, python-format
-msgid "Exception handling resource: %s"
-msgstr ""
-
-#: heat/common/wsgi.py:597
-msgid ""
-"The server could not comply with the request since\r\n"
-"it is either malformed or otherwise incorrect.\r\n"
-msgstr ""
-
-#: heat/common/wsgi.py:611
-#, python-format
-msgid "Returning %(code)s to user: %(explanation)s"
-msgstr ""
-
-#: heat/common/wsgi.py:616
-#, python-format
-msgid "Unexpected error occurred serving API: %s"
-msgstr ""
-
-#: heat/db/sqlalchemy/api.py:393
-msgid "age should be an integer"
-msgstr ""
-
-#: heat/db/sqlalchemy/api.py:395
-msgid "age should be a positive integer"
-msgstr ""
-
-#: heat/db/sqlalchemy/migration.py:63
-msgid "python-migrate is not installed. Exiting."
-msgstr ""
-
-#: heat/db/sqlalchemy/migration.py:73
-msgid "version should be an integer"
-msgstr ""
-
-#: heat/doc/resources.py:85
-msgid "HOT Syntax"
-msgstr ""
-
-#: heat/doc/resources.py:107
-msgid "YAML Syntax"
-msgstr ""
-
-#: heat/doc/resources.py:129
-msgid "JSON Syntax"
-msgstr ""
-
-#: heat/doc/resources.py:161
-#, python-format
-msgid " from %s"
-msgstr ""
-
-#: heat/doc/resources.py:163
-#, python-format
-msgid " up to %s"
-msgstr ""
-
-#: heat/doc/resources.py:165
-#, python-format
-msgid " from length %s"
-msgstr ""
-
-#: heat/doc/resources.py:167
-#, python-format
-msgid " up to length %s"
-msgstr ""
-
-#: heat/doc/resources.py:174
-msgid "Not implemented."
-msgstr ""
-
-#: heat/doc/resources.py:185
-msgid "Required property."
-msgstr ""
-
-#: heat/doc/resources.py:189
-#, python-format
-msgid "Optional property, defaults to \"%s\"."
-msgstr ""
-
-#: heat/doc/resources.py:192
-msgid "Optional property."
-msgstr ""
-
-#: heat/doc/resources.py:196
-#, python-format
-msgid "Value must match pattern: %s"
-msgstr ""
-
-#: heat/doc/resources.py:203
-#, python-format
-msgid "Allowed values: %s"
-msgstr ""
-
-#: heat/doc/resources.py:209
-msgid "Map properties:"
-msgstr ""
-
-#: heat/doc/resources.py:215
-msgid "List contains maps with the properties:"
-msgstr ""
-
-#: heat/doc/resources.py:230
-msgid "Properties"
-msgstr ""
-
-#: heat/doc/resources.py:241
-msgid "Attributes"
-msgstr ""
-
-#: heat/engine/dependencies.py:25
-#, python-format
-msgid "Circular Dependency Found: %(cycle)s"
-msgstr ""
-
-#: heat/engine/environment.py:181
-#, python-format
-msgid "Changing %(path)s from %(was)s to %(now)s"
-msgstr ""
-
-#: heat/engine/environment.py:183
-#, python-format
-msgid "Registering %(path)s -> %(value)s"
-msgstr ""
-
-#: heat/engine/parser.py:74
-#, python-format
-msgid ""
-"Invalid stack name %s, must contain only alphanumeric or \"_-.\" "
-"characters and must start with an alphabetic character"
-msgstr ""
-
-#: heat/engine/resource.py:55 heat/engine/resource.py:58
-#, python-format
-msgid "The Resource %s requires replacement."
-msgstr ""
-
-#: heat/engine/resource.py:132
-msgid "Resource name may not contain \"/\""
-msgstr ""
-
-#: heat/engine/scheduler.py:65
-#, python-format
-msgid "%s Timed out"
-msgstr ""
-
-#: heat/engine/service.py:311
-msgid "No Template provided."
-msgstr ""
-
-#: heat/engine/stack_resource.py:122
-#, python-format
-msgid "Cannot update %s, stack not created"
-msgstr ""
-
-#: heat/engine/stack_resource.py:151
-#, python-format
-msgid "Cannot suspend %s, stack not created"
-msgstr ""
-
-#: heat/engine/stack_resource.py:173
-#, python-format
-msgid "Cannot resume %s, stack not created"
-msgstr ""
-
-#: heat/engine/resources/__init__.py:39
-#, python-format
-msgid "Failed to load resources from %s"
-msgstr ""
-
-#: heat/engine/resources/instance.py:314
-#, python-format
-msgid "Build of server %(server)s failed: %(message)s (%(code)s)"
-msgstr ""
-
-#: heat/engine/resources/instance.py:321
-#, python-format
-msgid "Nova reported unexpected instance[%(name)s] status[%(status)s]"
-msgstr ""
-
-#: heat/engine/resources/instance.py:418
-#, python-format
-msgid "Cannot suspend %s, resource_id not set"
-msgstr ""
-
-#: heat/engine/resources/instance.py:424 heat/engine/resources/instance.py:478
-#, python-format
-msgid "Failed to find instance %s"
-msgstr ""
-
-#: heat/engine/resources/instance.py:455
-#, python-format
-msgid "Nova reported unexpected instance[%(instance)s] status[%(status)s]"
-msgstr ""
-
-#: heat/engine/resources/instance.py:472
-#, python-format
-msgid "Cannot resume %s, resource_id not set"
-msgstr ""
-
-#: heat/engine/resources/nova_utils.py:221
-#, python-format
-msgid "Resizing to '%(flavor)s' failed, status '%(status)s'"
-msgstr ""
-
-#: heat/engine/resources/vpc.py:72
-#, python-format
-msgid "Multiple routers found with name %s"
-msgstr ""
-
-#: heat/engine/resources/neutron/loadbalancer.py:251
-msgid "The ID of the load balancing pool"
-msgstr ""
-
-#: heat/engine/resources/neutron/loadbalancer.py:254
-msgid "Port number on which the member servers are running"
-msgstr ""
-
-#: heat/engine/resources/neutron/loadbalancer.py:258
-msgid "The list of Nova server IDs load balanced"
-msgstr ""
-
-#: heat/engine/resources/rackspace/cloud_loadbalancer.py:33
-#, python-format
-msgid "There was an error building the loadbalancer: %(lb_name)s."
-msgstr ""
-
-#: heat/openstack/common/eventlet_backdoor.py:142
-#, python-format
-msgid "Eventlet backdoor listening on %(port)s for process %(pid)d"
-msgstr ""
-
-#: heat/openstack/common/exception.py:102
-msgid "Uncaught exception"
-msgstr ""
-
-#: heat/openstack/common/excutils.py:62
-#, python-format
-msgid "Original exception being dropped: %s"
-msgstr ""
-
-#: heat/openstack/common/excutils.py:90
-#, python-format
-msgid "Unexpected exception occurred %d time(s)... retrying."
-msgstr ""
-
-#: heat/openstack/common/fileutils.py:64
-#, python-format
-msgid "Reloading cached file %s"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:100
-#, python-format
-msgid "Could not release the acquired lock `%s`"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:166
-#, python-format
-msgid "Got semaphore \"%(lock)s\""
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:175
-#, python-format
-msgid "Attempting to grab file lock \"%(lock)s\""
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:185
-#, python-format
-msgid "Created lock path: %s"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:203
-#, python-format
-msgid "Got file lock \"%(lock)s\" at %(path)s"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:207
-#, python-format
-msgid "Released file lock \"%(lock)s\" at %(path)s"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:244
-#, python-format
-msgid "Got semaphore / lock \"%(function)s\""
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:248
-#, python-format
-msgid "Semaphore / lock released \"%(function)s\""
-msgstr ""
-
-#: heat/openstack/common/log.py:244
-#, python-format
-msgid "Deprecated: %s"
-msgstr ""
-
-#: heat/openstack/common/log.py:336
-#, python-format
-msgid "Error loading logging config %(log_config)s: %(err_msg)s"
-msgstr ""
-
-#: heat/openstack/common/log.py:386
-#, python-format
-msgid "syslog facility must be one of: %s"
-msgstr ""
-
-#: heat/openstack/common/log.py:556
-#, python-format
-msgid "Fatal call to deprecated config: %(msg)s"
-msgstr ""
-
-#: heat/openstack/common/loopingcall.py:84
-#, python-format
-msgid "task run outlasted interval by %s sec"
-msgstr ""
-
-#: heat/openstack/common/loopingcall.py:91
-msgid "in fixed duration looping call"
-msgstr ""
-
-#: heat/openstack/common/loopingcall.py:131
-#, python-format
-msgid "Dynamic looping call sleeping for %.02f seconds"
-msgstr ""
-
-#: heat/openstack/common/loopingcall.py:138
-msgid "in dynamic looping call"
-msgstr ""
-
-#: heat/openstack/common/policy.py:75
-msgid "JSON file containing policy"
-msgstr ""
-
-#: heat/openstack/common/policy.py:78
-msgid "Rule enforced when requested rule is not found"
-msgstr ""
-
-#: heat/openstack/common/policy.py:92
-#, python-format
-msgid "Policy doesn't allow %s to be performed."
-msgstr ""
-
-#: heat/openstack/common/policy.py:177
-#, python-format
-msgid "Rules must be an instance of dict or Rules, got %s instead"
-msgstr ""
-
-#: heat/openstack/common/policy.py:207
-msgid "Rules successfully reloaded"
-msgstr ""
-
-#: heat/openstack/common/policy.py:253
-#, python-format
-msgid "Rule %s will be now enforced"
-msgstr ""
-
-#: heat/openstack/common/policy.py:268
-#, python-format
-msgid "Rule [%s] doesn't exist"
-msgstr ""
-
-#: heat/openstack/common/policy.py:478
-#, python-format
-msgid "Failed to understand rule %s"
-msgstr ""
-
-#: heat/openstack/common/policy.py:488
-#, python-format
-msgid "No handler for matches of kind %s"
-msgstr ""
-
-#: heat/openstack/common/policy.py:759
-#, python-format
-msgid "Failed to understand rule %r"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:127
-#, python-format
-msgid "Got unknown keyword args to utils.execute: %r"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:142
-#, python-format
-msgid "Running cmd (subprocess): %s"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:167
-#: heat/openstack/common/processutils.py:240
-#, python-format
-msgid "Result was %s"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:179
-#, python-format
-msgid "%r failed. Retrying."
-msgstr ""
-
-#: heat/openstack/common/processutils.py:219
-#, python-format
-msgid "Running cmd (SSH): %s"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:221
-msgid "Environment not supported over SSH"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:225
-msgid "process_input not supported over SSH"
-msgstr ""
-
-#: heat/openstack/common/service.py:118 heat/openstack/common/service.py:335
-msgid "Full set of CONF:"
-msgstr ""
-
-#: heat/openstack/common/service.py:127 heat/openstack/common/service.py:217
-#, python-format
-msgid "Caught %s, exiting"
-msgstr ""
-
-#: heat/openstack/common/service.py:139
-msgid "Exception during rpc cleanup."
-msgstr ""
-
-#: heat/openstack/common/service.py:188
-msgid "Parent process has died unexpectedly, exiting"
-msgstr ""
-
-#: heat/openstack/common/service.py:223
-msgid "Unhandled exception"
-msgstr ""
-
-#: heat/openstack/common/service.py:256
-msgid "Forking too fast, sleeping"
-msgstr ""
-
-#: heat/openstack/common/service.py:278
-#, python-format
-msgid "Started child %d"
-msgstr ""
-
-#: heat/openstack/common/service.py:305
-#, python-format
-msgid "Child %(pid)d killed by signal %(sig)d"
-msgstr ""
-
-#: heat/openstack/common/service.py:309
-#, python-format
-msgid "Child %(pid)s exited with status %(code)d"
-msgstr ""
-
-#: heat/openstack/common/service.py:313
-#, python-format
-msgid "pid %d not in child list"
-msgstr ""
-
-#: heat/openstack/common/service.py:345
-#, python-format
-msgid "Caught %s, stopping children"
-msgstr ""
-
-#: heat/openstack/common/service.py:363
-#, python-format
-msgid "Waiting on %d children to exit"
-msgstr ""
-
-#: heat/openstack/common/sslutils.py:52
-#, python-format
-msgid "Unable to find cert_file: %s"
-msgstr ""
-
-#: heat/openstack/common/sslutils.py:55
-#, python-format
-msgid "Unable to find ca_file: %s"
-msgstr ""
-
-#: heat/openstack/common/sslutils.py:58
-#, python-format
-msgid "Unable to find key_file: %s"
-msgstr ""
-
-#: heat/openstack/common/sslutils.py:100
-#, python-format
-msgid "Invalid SSL version: %s"
-msgstr ""
-
-#: heat/openstack/common/db/exception.py:44
-msgid "Invalid Parameter: Unicode is not supported by the current database."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/session.py:552
-msgid "DB exception wrapped."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/session.py:614
-#, python-format
-msgid "Got 'MySQL server has gone away': %s"
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/session.py:694
-#, python-format
-msgid "SQL connection failed. %s attempts left."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:50
-msgid "Sort key supplied was not valid."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:89
-msgid "Id not in sort_keys; is sort_keys unique?"
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:111
-msgid "Unknown sort direction, must be 'desc' or 'asc'"
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:182
-#, python-format
-msgid ""
-"Please specify column %s in the col_name_col_instance param. It is "
-"required because the column has a type unsupported by sqlite."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:188
-#, python-format
-msgid ""
-"The col_name_col_instance param has the wrong type of column instance for "
-"column %s. It should be an instance of sqlalchemy.Column."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:268
-#, python-format
-msgid "Deleting duplicated row with id: %(id)s from table: %(table)s"
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:290
-msgid "Unsupported id columns type"
-msgstr ""
-
-#: heat/openstack/common/notifier/api.py:129
-#, python-format
-msgid "%s not in valid priorities"
-msgstr ""
-
-#: heat/openstack/common/notifier/api.py:145
-#, python-format
-msgid ""
-"Problem '%(e)s' attempting to send to notification system. "
-"Payload=%(payload)s"
-msgstr ""
-
-#: heat/openstack/common/notifier/api.py:164
-#, python-format
-msgid "Failed to load notifier %s. These notifications will not be sent."
-msgstr ""
-
-#: heat/openstack/common/notifier/list_notifier.py:112
-#, python-format
-msgid "Problem '%(e)s' attempting to send to notification driver %(driver)s."
-msgstr ""
-
-#: heat/openstack/common/notifier/rabbit_notifier.py:27
-msgid "The rabbit_notifier is now deprecated. Please use rpc_notifier instead."
-msgstr ""
-
-#: heat/openstack/common/notifier/rpc_notifier.py:45
-#: heat/openstack/common/notifier/rpc_notifier2.py:51
-#, python-format
-msgid "Could not send notification to %(topic)s. Payload=%(message)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/__init__.py:107
-#, python-format
-msgid ""
-"A RPC is being made while holding a lock. The locks currently held are "
-"%(locks)s. This is probably a bug. Please report it. Include the "
-"following: [%(stack)s]."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:75
-msgid "Pool creating new connection"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:202
-#, python-format
-msgid "No calling threads waiting for msg_id : %(msg_id)s, message : %(data)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:205
-#, python-format
-msgid "_call_waiters: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:212
-#, python-format
-msgid ""
-"Number of call waiters is greater than warning threshold: %d. There "
-"could be a MulticallProxyWaiter leak."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:290
-#, python-format
-msgid "unpacked context: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:336
-#, python-format
-msgid "UNIQUE_ID is %s."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:408
-#, python-format
-msgid "received %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:416
-#, python-format
-msgid "no method for message: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:417
-#, python-format
-msgid "No method for message: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:445
-#: heat/openstack/common/rpc/impl_zmq.py:280
-#, python-format
-msgid "Expected exception during message handling (%s)"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:453
-#: heat/openstack/common/rpc/impl_zmq.py:286
-msgid "Exception during message handling"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:527
-#, python-format
-msgid "Making synchronous call on %s ..."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:530
-#, python-format
-msgid "MSG_ID is %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:556
-#, python-format
-msgid "Making asynchronous cast on %s..."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:565
-msgid "Making asynchronous fanout cast..."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:593
-#, python-format
-msgid "Sending %(event_type)s on %(topic)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:77
-msgid "An unknown RPC related exception occurred."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:107
-#, python-format
-msgid ""
-"Remote error: %(exc_type)s %(value)s\n"
-"%(traceback)s."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:124
-#, python-format
-msgid ""
-"Timeout while waiting on RPC response - topic: \"%(topic)s\", RPC method:"
-" \"%(method)s\" info: \"%(info)s\""
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:141
-#: heat/openstack/common/rpc/common.py:142
-#: heat/openstack/common/rpc/common.py:143
-msgid "<unknown>"
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:147
-#, python-format
-msgid "Found duplicate message(%(msg_id)s). Skipping it."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:151
-msgid "Invalid reuse of an RPC connection."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:155
-#, python-format
-msgid "Specified RPC version, %(version)s, not supported by this endpoint."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:160
-#, python-format
-msgid ""
-"Specified RPC envelope version, %(version)s, not supported by this "
-"endpoint."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:165
-#, python-format
-msgid "Specified RPC version cap, %(version_cap)s, is too low"
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:289
-#, python-format
-msgid "Returning exception %s to caller"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:162
-msgid "Failed to process message ... skipping it."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:165
-msgid "Failed to process message ... will requeue."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:502
-#, python-format
-msgid "Reconnecting to AMQP server on %(hostname)s:%(port)d"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:524
-#, python-format
-msgid "Connected to AMQP server on %(hostname)s:%(port)d"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:561
-#, python-format
-msgid ""
-"Unable to connect to AMQP server on %(hostname)s:%(port)d after "
-"%(max_retries)d tries: %(err_str)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:575
-#, python-format
-msgid ""
-"AMQP server on %(hostname)s:%(port)d is unreachable: %(err_str)s. Trying "
-"again in %(sleep_time)d seconds."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:629
-#: heat/openstack/common/rpc/impl_qpid.py:492
-#, python-format
-msgid "Failed to declare consumer for topic '%(topic)s': %(err_str)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:647
-#: heat/openstack/common/rpc/impl_qpid.py:507
-#, python-format
-msgid "Timed out waiting for RPC response: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:651
-#: heat/openstack/common/rpc/impl_qpid.py:511
-#, python-format
-msgid "Failed to consume message from queue: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:690
-#: heat/openstack/common/rpc/impl_qpid.py:546
-#, python-format
-msgid "Failed to publish message to topic '%(topic)s': %(err_str)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:161
-msgid "Failed to process message... skipping it."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:434
-#, python-format
-msgid "Unable to connect to AMQP server: %(e)s. Sleeping %(delay)s seconds"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:440
-#, python-format
-msgid "Connected to AMQP server on %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:453
-msgid "Re-established AMQP queues"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:519
-msgid "Error processing message. Skipping it."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:96
-msgid "JSON serialization failed."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:101
-#, python-format
-msgid "Deserializing: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:136
-#, python-format
-msgid "Connecting to %(addr)s with %(type)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:137
-#, python-format
-msgid "-> Subscribed to %(subscribe)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:138
-#, python-format
-msgid "-> bind: %(bind)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:146
-msgid "Could not open socket."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:158
-#, python-format
-msgid "Subscribing to %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:200
-msgid "You cannot recv on this socket."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:205
-msgid "You cannot send on this socket."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:267
-#, python-format
-msgid "Running func with context: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:305
-msgid "Sending reply"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:339
-msgid "RPC message did not include method."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:371
-msgid "Registering reactor"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:383
-msgid "In reactor registered"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:387
-msgid "Consuming socket"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:437
-#, python-format
-msgid "Creating proxy for topic: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:443
-msgid "Topic contained dangerous characters."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:475
-msgid "Topic socket file creation failed."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:481
-#, python-format
-msgid "Local per-topic backlog buffer full for topic %(topic)s. Dropping message."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:497
-#, python-format
-msgid "Required IPC directory does not exist at %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:506
-#, python-format
-msgid "Permission denied to IPC directory at %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:509
-msgid "Could not create ZeroMQ receiver daemon. Socket may already be in use."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:543
-#, python-format
-msgid "CONSUMER RECEIVED DATA: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:562
-msgid "ZMQ Envelope version unsupported or unknown."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:590
-msgid "Skipping topic registration. Already registered."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:597
-#, python-format
-msgid "Consumer is a zmq.%s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:649
-msgid "Creating payload"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:662
-msgid "Creating queue socket for reply waiter"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:675
-msgid "Sending cast"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:678
-msgid "Cast sent; Waiting reply"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:681
-#, python-format
-msgid "Received message: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:682
-msgid "Unpacking response"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:691
-msgid "Unsupported or unknown ZMQ envelope returned."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:698
-msgid "RPC Message Invalid."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:721
-#, python-format
-msgid "%(msg)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:724
-#, python-format
-msgid "Sending message(s) to: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:728
-msgid "No matchmaker results. Not casting."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:731
-msgid "No match from matchmaker."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:813
-#, python-format
-msgid "rpc_zmq_matchmaker = %(orig)s is deprecated; use %(new)s instead"
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker.py:47
-msgid "Match not found by MatchMaker."
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker.py:81
-msgid "Matchmaker does not implement registration or heartbeat."
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker.py:217
-#, python-format
-msgid "Matchmaker unregistered: %(key)s, %(host)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker.py:229
-msgid "Register before starting heartbeat."
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker_ring.py:77
-#: heat/openstack/common/rpc/matchmaker_ring.py:95
-#, python-format
-msgid "No key defining hosts for topic '%s', see ringfile"
-msgstr ""
-
-#: heat/openstack/common/rpc/service.py:49
-#, python-format
-msgid "Creating Consumer connection for Service %s"
-msgstr ""
-
-#: heat/tests/test_exception.py:24
-#, python-format
-msgid "Testing message %(text)s"
-msgstr ""
-
+++ /dev/null
-# Translations template for heat.
-# Copyright (C) 2013 ORGANIZATION
-# This file is distributed under the same license as the heat project.
-# FIRST AUTHOR <EMAIL@ADDRESS>, 2013.
-#
-#, fuzzy
-msgid ""
-msgstr ""
-"Project-Id-Version: heat 2013.2.a430.gd264018\n"
-"Report-Msgid-Bugs-To: EMAIL@ADDRESS\n"
-"POT-Creation-Date: 2013-08-27 11:15+0800\n"
-"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
-"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
-"Language-Team: LANGUAGE <LL@li.org>\n"
-"MIME-Version: 1.0\n"
-"Content-Type: text/plain; charset=utf-8\n"
-"Content-Transfer-Encoding: 8bit\n"
-"Generated-By: Babel 0.9.6\n"
-
-#: heat/api/aws/ec2token.py:38
-msgid "Authentication Endpoint URI"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:260
-#, python-format
-msgid "Failed to fetch template: %s"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:293
-msgid "DisableRollback and OnFailure may not be used together"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:313
-#, python-format
-msgid "Unexpected action %(action)s"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:331 heat/api/cfn/v1/stacks.py:413
-msgid "Invalid Template URL"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:335 heat/api/cfn/v1/stacks.py:416
-msgid "TemplateBody or TemplateUrl were not given."
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:341 heat/api/cfn/v1/stacks.py:422
-msgid "The Template must be a JSON or YAML document."
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:383
-msgid "stack not found"
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:49
-#, python-format
-msgid "Processing request: %(method)s %(path)s Accept: %(accept)s"
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:64
-#, python-format
-msgid "Matched versioned URI. Version: %(major_version)d.%(minor_version)d"
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:72
-#, python-format
-msgid ""
-"Unknown version in versioned URI: %(major_version)d.%(minor_version)d. "
-"Returning version choices."
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:88
-#, python-format
-msgid "Matched versioned media type. Version: %(major_version)d.%(minor_version)d"
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:94
-#, python-format
-msgid ""
-"Unknown version in accept header: "
-"%(major_version)d.%(minor_version)d...returning version choices."
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:102
-#, python-format
-msgid "Unknown accept header: %s...returning version choices."
-msgstr ""
-
-#: heat/api/openstack/v1/actions.py:43
-msgid "No action specified"
-msgstr ""
-
-#: heat/api/openstack/v1/actions.py:46
-msgid "Multiple actions specified"
-msgstr ""
-
-#: heat/api/openstack/v1/actions.py:50
-#, python-format
-msgid "Invalid action %s specified"
-msgstr ""
-
-#: heat/api/openstack/v1/actions.py:57
-#, python-format
-msgid "Unexpected action %s"
-msgstr ""
-
-#: heat/api/openstack/v1/events.py:100
-#, python-format
-msgid "No events found for resource %s"
-msgstr ""
-
-#: heat/api/openstack/v1/events.py:118
-#, python-format
-msgid "No event %s found"
-msgstr ""
-
-#: heat/api/openstack/v1/stacks.py:75
-#, python-format
-msgid "%s not in valid format"
-msgstr ""
-
-#: heat/api/openstack/v1/stacks.py:83
-msgid "No stack name specified"
-msgstr ""
-
-#: heat/api/openstack/v1/stacks.py:101
-#, python-format
-msgid "Could not retrieve template: %s"
-msgstr ""
-
-#: heat/api/openstack/v1/stacks.py:104
-msgid "No template specified"
-msgstr ""
-
-#: heat/api/openstack/v1/util.py:56
-msgid "Invalid Stack address"
-msgstr ""
-
-#: heat/common/auth_password.py:116
-msgid "Request missing required header X-Auth-Url"
-msgstr ""
-
-#: heat/common/auth_password.py:121
-#, python-format
-msgid "Header X-Auth-Url \"%s\" not an allowed endpoint"
-msgstr ""
-
-#: heat/common/config.py:37
-msgid "The flavor to use"
-msgstr ""
-
-#: heat/common/config.py:39
-msgid "The API paste config file to use"
-msgstr ""
-
-#: heat/common/config.py:102
-msgid "Allow orchestration of multiple clouds"
-msgstr ""
-
-#: heat/common/config.py:105
-msgid ""
-"Allowed keystone endpoints for auth_uri when multi_cloud is enabled. At "
-"least one endpoint needs to be specified."
-msgstr ""
-
-#: heat/common/exception.py:102
-msgid "An unknown exception occurred."
-msgstr ""
-
-#: heat/common/exception.py:113 heat/openstack/common/rpc/common.py:89
-msgid "Exception in string format operation"
-msgstr ""
-
-#: heat/common/exception.py:125
-#, python-format
-msgid "Missing required credential: %(required)s"
-msgstr ""
-
-#: heat/common/exception.py:129
-#, python-format
-msgid ""
-"Incorrect auth strategy, expected \"%(expected)s\" but received "
-"\"%(received)s\""
-msgstr ""
-
-#: heat/common/exception.py:134
-#, python-format
-msgid "Connect error/bad request to Auth service at URL %(url)s."
-msgstr ""
-
-#: heat/common/exception.py:138
-#, python-format
-msgid "Auth service at URL %(url)s not found."
-msgstr ""
-
-#: heat/common/exception.py:142
-msgid "Authorization failed."
-msgstr ""
-
-#: heat/common/exception.py:146
-msgid "You are not authenticated."
-msgstr ""
-
-#: heat/common/exception.py:150 heat/common/exception.py:155
-msgid "You are not authorized to complete this action."
-msgstr ""
-
-#: heat/common/exception.py:159
-#, python-format
-msgid "Data supplied was not valid: %(reason)s"
-msgstr ""
-
-#: heat/common/exception.py:163
-#, python-format
-msgid "Redirecting to %(uri)s for authorization."
-msgstr ""
-
-#: heat/common/exception.py:167
-msgid "There was an error configuring the client."
-msgstr ""
-
-#: heat/common/exception.py:171
-#, python-format
-msgid ""
-"The request returned a 302 Multiple Choices. This generally means that "
-"you have not included a version indicator in a request URI.\n"
-"\n"
-"The body of response returned:\n"
-"%(body)s"
-msgstr ""
-
-#: heat/common/exception.py:177
-#, python-format
-msgid ""
-"The request returned a 413 Request Entity Too Large. This generally means"
-" that rate limiting or a quota threshold was breached.\n"
-"\n"
-"The response body:\n"
-"%(body)s"
-msgstr ""
-
-#: heat/common/exception.py:188
-msgid ""
-"The request returned a 503 Service Unavailable. This generally occurs on "
-"service overload or other transient outage."
-msgstr ""
-
-#: heat/common/exception.py:199
-msgid "The URI was too long."
-msgstr ""
-
-#: heat/common/exception.py:203
-#, python-format
-msgid ""
-"The request returned 500 Internal Server Error\n"
-"\n"
-"The response body:\n"
-"%(body)s"
-msgstr ""
-
-#: heat/common/exception.py:208
-#, python-format
-msgid "Maximum redirects (%(redirects)s) was exceeded."
-msgstr ""
-
-#: heat/common/exception.py:212
-msgid "Received invalid HTTP redirect."
-msgstr ""
-
-#: heat/common/exception.py:216
-msgid "Response from Keystone does not contain a Heat endpoint."
-msgstr ""
-
-#: heat/common/exception.py:220
-#, python-format
-msgid ""
-"Multiple 'image' service matches for region %(region)s. This generally "
-"means that a region is required and you have not supplied one."
-msgstr ""
-
-#: heat/common/exception.py:226
-#, python-format
-msgid "The Parameter (%(key)s) was not provided."
-msgstr ""
-
-#: heat/common/exception.py:230
-#, python-format
-msgid "The Parameter (%(key)s) was not defined in template."
-msgstr ""
-
-#: heat/common/exception.py:234
-#, python-format
-msgid "The Referenced Attribute (%(resource)s %(key)s) is incorrect."
-msgstr ""
-
-#: heat/common/exception.py:239
-#, python-format
-msgid "The specified reference (%(resource)s %(key)s) is incorrect."
-msgstr ""
-
-#: heat/common/exception.py:244
-#, python-format
-msgid "The Key (%(key_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:248
-#, python-format
-msgid "The Flavor ID (%(flavor_id)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:252
-#, python-format
-msgid "The Image (%(image_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:256
-#, python-format
-msgid "Multiple images were found with name (%(image_name)s)."
-msgstr ""
-
-#: heat/common/exception.py:260
-#, python-format
-msgid "Searching Tenant %(target)s from Tenant %(actual)s forbidden."
-msgstr ""
-
-#: heat/common/exception.py:265
-#, python-format
-msgid "The Stack (%(stack_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:269
-#, python-format
-msgid "The Stack (%(stack_name)s) already exists."
-msgstr ""
-
-#: heat/common/exception.py:273
-#, python-format
-msgid "%(message)s"
-msgstr ""
-
-#: heat/common/exception.py:277
-#, python-format
-msgid ""
-"The Resource (%(resource_name)s) could not be found in Stack "
-"%(stack_name)s."
-msgstr ""
-
-#: heat/common/exception.py:282
-#, python-format
-msgid "The Resource Type (%(type_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:286
-#, python-format
-msgid "The Resource (%(resource_name)s) is not available."
-msgstr ""
-
-#: heat/common/exception.py:290
-#, python-format
-msgid "The Resource (%(resource_id)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:294
-#, python-format
-msgid "The Watch Rule (%(watch_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:298
-#, python-format
-msgid "%(exc_type)s: %(message)s"
-msgstr ""
-
-#: heat/common/exception.py:312
-#, python-format
-msgid "%(feature)s is not supported."
-msgstr ""
-
-#: heat/common/exception.py:316
-#, python-format
-msgid "Cannot define the following properties at the same time: %s."
-msgstr ""
-
-#: heat/common/exception.py:334
-msgid "Template exceeds maximum allowed size."
-msgstr ""
-
-#: heat/common/exception.py:338
-msgid ""
-"Egress rules are only allowed when Neutron is used and the 'VpcId' "
-"property is set."
-msgstr ""
-
-#: heat/common/plugin_loader.py:94
-#, python-format
-msgid "Failed to import module %s"
-msgstr ""
-
-#: heat/common/urlfetch.py:37
-#, python-format
-msgid "Fetching data from %s"
-msgstr ""
-
-#: heat/common/wsgi.py:55
-msgid ""
-"Address to bind the server. Useful when selecting a particular network "
-"interface."
-msgstr ""
-
-#: heat/common/wsgi.py:58
-msgid "The port on which the server will listen."
-msgstr ""
-
-#: heat/common/wsgi.py:65
-msgid "Number of backlog requests to configure the socket with"
-msgstr ""
-
-#: heat/common/wsgi.py:68
-msgid "Location of the SSL Certificate File to use for SSL mode"
-msgstr ""
-
-#: heat/common/wsgi.py:71
-msgid "Location of the SSL Key File to use for enabling SSL mode"
-msgstr ""
-
-#: heat/common/wsgi.py:78
-msgid "Number of workers for Heat service"
-msgstr ""
-
-#: heat/common/wsgi.py:129 heat/openstack/common/sslutils.py:61
-msgid ""
-"When running server in SSL mode, you must specify both a cert_file and "
-"key_file option value in your configuration file"
-msgstr ""
-
-#: heat/common/wsgi.py:147
-#, python-format
-msgid "Could not bind to %(bind_addr)s after trying for 30 seconds"
-msgstr ""
-
-#: heat/common/wsgi.py:179
-msgid "SIGTERM received"
-msgstr ""
-
-#: heat/common/wsgi.py:188
-msgid "SIGHUP received"
-msgstr ""
-
-#: heat/common/wsgi.py:203 heat/openstack/common/service.py:288
-#, python-format
-msgid "Starting %d workers"
-msgstr ""
-
-#: heat/common/wsgi.py:214
-#, python-format
-msgid "Removing dead child %s"
-msgstr ""
-
-#: heat/common/wsgi.py:221
-msgid "Caught keyboard interrupt. Exiting."
-msgstr ""
-
-#: heat/common/wsgi.py:225
-msgid "Exited"
-msgstr ""
-
-#: heat/common/wsgi.py:243
-#, python-format
-msgid "Child %d exiting normally"
-msgstr ""
-
-#: heat/common/wsgi.py:246
-#, python-format
-msgid "Started child %s"
-msgstr ""
-
-#: heat/common/wsgi.py:268
-msgid "Starting single process server"
-msgstr ""
-
-#: heat/common/wsgi.py:596
-#, python-format
-msgid "Exception handling resource: %s"
-msgstr ""
-
-#: heat/common/wsgi.py:597
-msgid ""
-"The server could not comply with the request since\r\n"
-"it is either malformed or otherwise incorrect.\r\n"
-msgstr ""
-
-#: heat/common/wsgi.py:611
-#, python-format
-msgid "Returning %(code)s to user: %(explanation)s"
-msgstr ""
-
-#: heat/common/wsgi.py:616
-#, python-format
-msgid "Unexpected error occurred serving API: %s"
-msgstr ""
-
-#: heat/db/sqlalchemy/api.py:393
-msgid "age should be an integer"
-msgstr ""
-
-#: heat/db/sqlalchemy/api.py:395
-msgid "age should be a positive integer"
-msgstr ""
-
-#: heat/db/sqlalchemy/migration.py:63
-msgid "python-migrate is not installed. Exiting."
-msgstr ""
-
-#: heat/db/sqlalchemy/migration.py:73
-msgid "version should be an integer"
-msgstr ""
-
-#: heat/doc/resources.py:85
-msgid "HOT Syntax"
-msgstr ""
-
-#: heat/doc/resources.py:107
-msgid "YAML Syntax"
-msgstr ""
-
-#: heat/doc/resources.py:129
-msgid "JSON Syntax"
-msgstr ""
-
-#: heat/doc/resources.py:161
-#, python-format
-msgid " from %s"
-msgstr ""
-
-#: heat/doc/resources.py:163
-#, python-format
-msgid " up to %s"
-msgstr ""
-
-#: heat/doc/resources.py:165
-#, python-format
-msgid " from length %s"
-msgstr ""
-
-#: heat/doc/resources.py:167
-#, python-format
-msgid " up to length %s"
-msgstr ""
-
-#: heat/doc/resources.py:174
-msgid "Not implemented."
-msgstr ""
-
-#: heat/doc/resources.py:185
-msgid "Required property."
-msgstr ""
-
-#: heat/doc/resources.py:189
-#, python-format
-msgid "Optional property, defaults to \"%s\"."
-msgstr ""
-
-#: heat/doc/resources.py:192
-msgid "Optional property."
-msgstr ""
-
-#: heat/doc/resources.py:196
-#, python-format
-msgid "Value must match pattern: %s"
-msgstr ""
-
-#: heat/doc/resources.py:203
-#, python-format
-msgid "Allowed values: %s"
-msgstr ""
-
-#: heat/doc/resources.py:209
-msgid "Map properties:"
-msgstr ""
-
-#: heat/doc/resources.py:215
-msgid "List contains maps with the properties:"
-msgstr ""
-
-#: heat/doc/resources.py:230
-msgid "Properties"
-msgstr ""
-
-#: heat/doc/resources.py:241
-msgid "Attributes"
-msgstr ""
-
-#: heat/engine/dependencies.py:25
-#, python-format
-msgid "Circular Dependency Found: %(cycle)s"
-msgstr ""
-
-#: heat/engine/environment.py:181
-#, python-format
-msgid "Changing %(path)s from %(was)s to %(now)s"
-msgstr ""
-
-#: heat/engine/environment.py:183
-#, python-format
-msgid "Registering %(path)s -> %(value)s"
-msgstr ""
-
-#: heat/engine/parser.py:74
-#, python-format
-msgid ""
-"Invalid stack name %s; it must contain only alphanumeric or \"_-.\" "
-"characters and must start with a letter"
-msgstr ""
-
-#: heat/engine/resource.py:55 heat/engine/resource.py:58
-#, python-format
-msgid "The Resource %s requires replacement."
-msgstr ""
-
-#: heat/engine/resource.py:132
-msgid "Resource name may not contain \"/\""
-msgstr ""
-
-#: heat/engine/scheduler.py:65
-#, python-format
-msgid "%s Timed out"
-msgstr ""
-
-#: heat/engine/service.py:311
-msgid "No Template provided."
-msgstr ""
-
-#: heat/engine/stack_resource.py:122
-#, python-format
-msgid "Cannot update %s, stack not created"
-msgstr ""
-
-#: heat/engine/stack_resource.py:151
-#, python-format
-msgid "Cannot suspend %s, stack not created"
-msgstr ""
-
-#: heat/engine/stack_resource.py:173
-#, python-format
-msgid "Cannot resume %s, stack not created"
-msgstr ""
-
-#: heat/engine/resources/__init__.py:39
-#, python-format
-msgid "Failed to load resources from %s"
-msgstr ""
-
-#: heat/engine/resources/instance.py:314
-#, python-format
-msgid "Build of server %(server)s failed: %(message)s (%(code)s)"
-msgstr ""
-
-#: heat/engine/resources/instance.py:321
-#, python-format
-msgid "Nova reported unexpected instance[%(name)s] status[%(status)s]"
-msgstr ""
-
-#: heat/engine/resources/instance.py:418
-#, python-format
-msgid "Cannot suspend %s, resource_id not set"
-msgstr ""
-
-#: heat/engine/resources/instance.py:424 heat/engine/resources/instance.py:478
-#, python-format
-msgid "Failed to find instance %s"
-msgstr ""
-
-#: heat/engine/resources/instance.py:455
-#, python-format
-msgid " nova reported unexpected instance[%(instance)s] status[%(status)s]"
-msgstr ""
-
-#: heat/engine/resources/instance.py:472
-#, python-format
-msgid "Cannot resume %s, resource_id not set"
-msgstr ""
-
-#: heat/engine/resources/nova_utils.py:221
-#, python-format
-msgid "Resizing to '%(flavor)s' failed, status '%(status)s'"
-msgstr ""
-
-#: heat/engine/resources/vpc.py:72
-#, python-format
-msgid "Multiple routers found with name %s"
-msgstr ""
-
-#: heat/engine/resources/neutron/loadbalancer.py:251
-msgid "The ID of the load balancing pool"
-msgstr ""
-
-#: heat/engine/resources/neutron/loadbalancer.py:254
-msgid "Port number on which the servers are running on the members"
-msgstr ""
-
-#: heat/engine/resources/neutron/loadbalancer.py:258
-msgid "The list of Nova server IDs load balanced"
-msgstr ""
-
-#: heat/engine/resources/rackspace/cloud_loadbalancer.py:33
-#, python-format
-msgid "There was an error building the loadbalancer: %(lb_name)s."
-msgstr ""
-
-#: heat/openstack/common/eventlet_backdoor.py:142
-#, python-format
-msgid "Eventlet backdoor listening on %(port)s for process %(pid)d"
-msgstr ""
-
-#: heat/openstack/common/exception.py:102
-msgid "Uncaught exception"
-msgstr ""
-
-#: heat/openstack/common/excutils.py:62
-#, python-format
-msgid "Original exception being dropped: %s"
-msgstr ""
-
-#: heat/openstack/common/excutils.py:90
-#, python-format
-msgid "Unexpected exception occurred %d time(s)... retrying."
-msgstr ""
-
-#: heat/openstack/common/fileutils.py:64
-#, python-format
-msgid "Reloading cached file %s"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:100
-#, python-format
-msgid "Could not release the acquired lock `%s`"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:166
-#, python-format
-msgid "Got semaphore \"%(lock)s\""
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:175
-#, python-format
-msgid "Attempting to grab file lock \"%(lock)s\""
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:185
-#, python-format
-msgid "Created lock path: %s"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:203
-#, python-format
-msgid "Got file lock \"%(lock)s\" at %(path)s"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:207
-#, python-format
-msgid "Released file lock \"%(lock)s\" at %(path)s"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:244
-#, python-format
-msgid "Got semaphore / lock \"%(function)s\""
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:248
-#, python-format
-msgid "Semaphore / lock released \"%(function)s\""
-msgstr ""
-
-#: heat/openstack/common/log.py:244
-#, python-format
-msgid "Deprecated: %s"
-msgstr ""
-
-#: heat/openstack/common/log.py:336
-#, python-format
-msgid "Error loading logging config %(log_config)s: %(err_msg)s"
-msgstr ""
-
-#: heat/openstack/common/log.py:386
-#, python-format
-msgid "syslog facility must be one of: %s"
-msgstr ""
-
-#: heat/openstack/common/log.py:556
-#, python-format
-msgid "Fatal call to deprecated config: %(msg)s"
-msgstr ""
-
-#: heat/openstack/common/loopingcall.py:84
-#, python-format
-msgid "task run outlasted interval by %s sec"
-msgstr ""
-
-#: heat/openstack/common/loopingcall.py:91
-msgid "in fixed duration looping call"
-msgstr ""
-
-#: heat/openstack/common/loopingcall.py:131
-#, python-format
-msgid "Dynamic looping call sleeping for %.02f seconds"
-msgstr ""
-
-#: heat/openstack/common/loopingcall.py:138
-msgid "in dynamic looping call"
-msgstr ""
-
-#: heat/openstack/common/policy.py:75
-msgid "JSON file containing policy"
-msgstr ""
-
-#: heat/openstack/common/policy.py:78
-msgid "Rule enforced when requested rule is not found"
-msgstr ""
-
-#: heat/openstack/common/policy.py:92
-#, python-format
-msgid "Policy doesn't allow %s to be performed."
-msgstr ""
-
-#: heat/openstack/common/policy.py:177
-#, python-format
-msgid "Rules must be an instance of dict or Rules, got %s instead"
-msgstr ""
-
-#: heat/openstack/common/policy.py:207
-msgid "Rules successfully reloaded"
-msgstr ""
-
-#: heat/openstack/common/policy.py:253
-#, python-format
-msgid "Rule %s will be now enforced"
-msgstr ""
-
-#: heat/openstack/common/policy.py:268
-#, python-format
-msgid "Rule [%s] doesn't exist"
-msgstr ""
-
-#: heat/openstack/common/policy.py:478
-#, python-format
-msgid "Failed to understand rule %s"
-msgstr ""
-
-#: heat/openstack/common/policy.py:488
-#, python-format
-msgid "No handler for matches of kind %s"
-msgstr ""
-
-#: heat/openstack/common/policy.py:759
-#, python-format
-msgid "Failed to understand rule %r"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:127
-#, python-format
-msgid "Got unknown keyword args to utils.execute: %r"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:142
-#, python-format
-msgid "Running cmd (subprocess): %s"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:167
-#: heat/openstack/common/processutils.py:240
-#, python-format
-msgid "Result was %s"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:179
-#, python-format
-msgid "%r failed. Retrying."
-msgstr ""
-
-#: heat/openstack/common/processutils.py:219
-#, python-format
-msgid "Running cmd (SSH): %s"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:221
-msgid "Environment not supported over SSH"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:225
-msgid "process_input not supported over SSH"
-msgstr ""
-
-#: heat/openstack/common/service.py:118 heat/openstack/common/service.py:335
-msgid "Full set of CONF:"
-msgstr ""
-
-#: heat/openstack/common/service.py:127 heat/openstack/common/service.py:217
-#, python-format
-msgid "Caught %s, exiting"
-msgstr ""
-
-#: heat/openstack/common/service.py:139
-msgid "Exception during rpc cleanup."
-msgstr ""
-
-#: heat/openstack/common/service.py:188
-msgid "Parent process has died unexpectedly, exiting"
-msgstr ""
-
-#: heat/openstack/common/service.py:223
-msgid "Unhandled exception"
-msgstr ""
-
-#: heat/openstack/common/service.py:256
-msgid "Forking too fast, sleeping"
-msgstr ""
-
-#: heat/openstack/common/service.py:278
-#, python-format
-msgid "Started child %d"
-msgstr ""
-
-#: heat/openstack/common/service.py:305
-#, python-format
-msgid "Child %(pid)d killed by signal %(sig)d"
-msgstr ""
-
-#: heat/openstack/common/service.py:309
-#, python-format
-msgid "Child %(pid)s exited with status %(code)d"
-msgstr ""
-
-#: heat/openstack/common/service.py:313
-#, python-format
-msgid "pid %d not in child list"
-msgstr ""
-
-#: heat/openstack/common/service.py:345
-#, python-format
-msgid "Caught %s, stopping children"
-msgstr ""
-
-#: heat/openstack/common/service.py:363
-#, python-format
-msgid "Waiting on %d children to exit"
-msgstr ""
-
-#: heat/openstack/common/sslutils.py:52
-#, python-format
-msgid "Unable to find cert_file : %s"
-msgstr ""
-
-#: heat/openstack/common/sslutils.py:55
-#, python-format
-msgid "Unable to find ca_file : %s"
-msgstr ""
-
-#: heat/openstack/common/sslutils.py:58
-#, python-format
-msgid "Unable to find key_file : %s"
-msgstr ""
-
-#: heat/openstack/common/sslutils.py:100
-#, python-format
-msgid "Invalid SSL version : %s"
-msgstr ""
-
-#: heat/openstack/common/db/exception.py:44
-msgid "Invalid Parameter: Unicode is not supported by the current database."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/session.py:552
-msgid "DB exception wrapped."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/session.py:614
-#, python-format
-msgid "Got mysql server has gone away: %s"
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/session.py:694
-#, python-format
-msgid "SQL connection failed. %s attempts left."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:50
-msgid "Sort key supplied was not valid."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:89
-msgid "Id not in sort_keys; is sort_keys unique?"
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:111
-msgid "Unknown sort direction, must be 'desc' or 'asc'"
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:182
-#, python-format
-msgid ""
-"Please specify column %s in col_name_col_instance param. It is required "
-"because column has unsupported type by sqlite)."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:188
-#, python-format
-msgid ""
-"col_name_col_instance param has wrong type of column instance for column "
-"%s It should be instance of sqlalchemy.Column."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:268
-#, python-format
-msgid "Deleting duplicated row with id: %(id)s from table: %(table)s"
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:290
-msgid "Unsupported id columns type"
-msgstr ""
-
-#: heat/openstack/common/notifier/api.py:129
-#, python-format
-msgid "%s not in valid priorities"
-msgstr ""
-
-#: heat/openstack/common/notifier/api.py:145
-#, python-format
-msgid ""
-"Problem '%(e)s' attempting to send to notification system. "
-"Payload=%(payload)s"
-msgstr ""
-
-#: heat/openstack/common/notifier/api.py:164
-#, python-format
-msgid "Failed to load notifier %s. These notifications will not be sent."
-msgstr ""
-
-#: heat/openstack/common/notifier/list_notifier.py:112
-#, python-format
-msgid "Problem '%(e)s' attempting to send to notification driver %(driver)s."
-msgstr ""
-
-#: heat/openstack/common/notifier/rabbit_notifier.py:27
-msgid "The rabbit_notifier is now deprecated. Please use rpc_notifier instead."
-msgstr ""
-
-#: heat/openstack/common/notifier/rpc_notifier.py:45
-#: heat/openstack/common/notifier/rpc_notifier2.py:51
-#, python-format
-msgid "Could not send notification to %(topic)s. Payload=%(message)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/__init__.py:107
-#, python-format
-msgid ""
-"A RPC is being made while holding a lock. The locks currently held are "
-"%(locks)s. This is probably a bug. Please report it. Include the "
-"following: [%(stack)s]."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:75
-msgid "Pool creating new connection"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:202
-#, python-format
-msgid "No calling threads waiting for msg_id : %(msg_id)s, message : %(data)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:205
-#, python-format
-msgid "_call_waiters: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:212
-#, python-format
-msgid ""
-"Number of call waiters is greater than warning threshhold: %d. There "
-"could be a MulticallProxyWaiter leak."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:290
-#, python-format
-msgid "unpacked context: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:336
-#, python-format
-msgid "UNIQUE_ID is %s."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:408
-#, python-format
-msgid "received %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:416
-#, python-format
-msgid "no method for message: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:417
-#, python-format
-msgid "No method for message: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:445
-#: heat/openstack/common/rpc/impl_zmq.py:280
-#, python-format
-msgid "Expected exception during message handling (%s)"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:453
-#: heat/openstack/common/rpc/impl_zmq.py:286
-msgid "Exception during message handling"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:527
-#, python-format
-msgid "Making synchronous call on %s ..."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:530
-#, python-format
-msgid "MSG_ID is %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:556
-#, python-format
-msgid "Making asynchronous cast on %s..."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:565
-msgid "Making asynchronous fanout cast..."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:593
-#, python-format
-msgid "Sending %(event_type)s on %(topic)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:77
-msgid "An unknown RPC related exception occurred."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:107
-#, python-format
-msgid ""
-"Remote error: %(exc_type)s %(value)s\n"
-"%(traceback)s."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:124
-#, python-format
-msgid ""
-"Timeout while waiting on RPC response - topic: \"%(topic)s\", RPC method:"
-" \"%(method)s\" info: \"%(info)s\""
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:141
-#: heat/openstack/common/rpc/common.py:142
-#: heat/openstack/common/rpc/common.py:143
-msgid "<unknown>"
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:147
-#, python-format
-msgid "Found duplicate message(%(msg_id)s). Skipping it."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:151
-msgid "Invalid reuse of an RPC connection."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:155
-#, python-format
-msgid "Specified RPC version, %(version)s, not supported by this endpoint."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:160
-#, python-format
-msgid ""
-"Specified RPC envelope version, %(version)s, not supported by this "
-"endpoint."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:165
-#, python-format
-msgid "Specified RPC version cap, %(version_cap)s, is too low"
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:289
-#, python-format
-msgid "Returning exception %s to caller"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:162
-msgid "Failed to process message ... skipping it."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:165
-msgid "Failed to process message ... will requeue."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:502
-#, python-format
-msgid "Reconnecting to AMQP server on %(hostname)s:%(port)d"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:524
-#, python-format
-msgid "Connected to AMQP server on %(hostname)s:%(port)d"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:561
-#, python-format
-msgid ""
-"Unable to connect to AMQP server on %(hostname)s:%(port)d after "
-"%(max_retries)d tries: %(err_str)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:575
-#, python-format
-msgid ""
-"AMQP server on %(hostname)s:%(port)d is unreachable: %(err_str)s. Trying "
-"again in %(sleep_time)d seconds."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:629
-#: heat/openstack/common/rpc/impl_qpid.py:492
-#, python-format
-msgid "Failed to declare consumer for topic '%(topic)s': %(err_str)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:647
-#: heat/openstack/common/rpc/impl_qpid.py:507
-#, python-format
-msgid "Timed out waiting for RPC response: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:651
-#: heat/openstack/common/rpc/impl_qpid.py:511
-#, python-format
-msgid "Failed to consume message from queue: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:690
-#: heat/openstack/common/rpc/impl_qpid.py:546
-#, python-format
-msgid "Failed to publish message to topic '%(topic)s': %(err_str)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:161
-msgid "Failed to process message... skipping it."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:434
-#, python-format
-msgid "Unable to connect to AMQP server: %(e)s. Sleeping %(delay)s seconds"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:440
-#, python-format
-msgid "Connected to AMQP server on %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:453
-msgid "Re-established AMQP queues"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:519
-msgid "Error processing message. Skipping it."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:96
-msgid "JSON serialization failed."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:101
-#, python-format
-msgid "Deserializing: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:136
-#, python-format
-msgid "Connecting to %(addr)s with %(type)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:137
-#, python-format
-msgid "-> Subscribed to %(subscribe)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:138
-#, python-format
-msgid "-> bind: %(bind)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:146
-msgid "Could not open socket."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:158
-#, python-format
-msgid "Subscribing to %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:200
-msgid "You cannot recv on this socket."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:205
-msgid "You cannot send on this socket."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:267
-#, python-format
-msgid "Running func with context: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:305
-msgid "Sending reply"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:339
-msgid "RPC message did not include method."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:371
-msgid "Registering reactor"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:383
-msgid "In reactor registered"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:387
-msgid "Consuming socket"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:437
-#, python-format
-msgid "Creating proxy for topic: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:443
-msgid "Topic contained dangerous characters."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:475
-msgid "Topic socket file creation failed."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:481
-#, python-format
-msgid "Local per-topic backlog buffer full for topic %(topic)s. Dropping message."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:497
-#, python-format
-msgid "Required IPC directory does not exist at %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:506
-#, python-format
-msgid "Permission denied to IPC directory at %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:509
-msgid "Could not create ZeroMQ receiver daemon. Socket may already be in use."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:543
-#, python-format
-msgid "CONSUMER RECEIVED DATA: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:562
-msgid "ZMQ Envelope version unsupported or unknown."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:590
-msgid "Skipping topic registration. Already registered."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:597
-#, python-format
-msgid "Consumer is a zmq.%s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:649
-msgid "Creating payload"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:662
-msgid "Creating queue socket for reply waiter"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:675
-msgid "Sending cast"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:678
-msgid "Cast sent; Waiting reply"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:681
-#, python-format
-msgid "Received message: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:682
-msgid "Unpacking response"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:691
-msgid "Unsupported or unknown ZMQ envelope returned."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:698
-msgid "RPC Message Invalid."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:721
-#, python-format
-msgid "%(msg)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:724
-#, python-format
-msgid "Sending message(s) to: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:728
-msgid "No matchmaker results. Not casting."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:731
-msgid "No match from matchmaker."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:813
-#, python-format
-msgid "rpc_zmq_matchmaker = %(orig)s is deprecated; use %(new)s instead"
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker.py:47
-msgid "Match not found by MatchMaker."
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker.py:81
-msgid "Matchmaker does not implement registration or heartbeat."
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker.py:217
-#, python-format
-msgid "Matchmaker unregistered: %(key)s, %(host)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker.py:229
-msgid "Register before starting heartbeat."
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker_ring.py:77
-#: heat/openstack/common/rpc/matchmaker_ring.py:95
-#, python-format
-msgid "No key defining hosts for topic '%s', see ringfile"
-msgstr ""
-
-#: heat/openstack/common/rpc/service.py:49
-#, python-format
-msgid "Creating Consumer connection for Service %s"
-msgstr ""
-
-#: heat/tests/test_exception.py:24
-#, python-format
-msgid "Testing message %(text)s"
-msgstr ""
-
+++ /dev/null
-# Translations template for heat.
-# Copyright (C) 2013 ORGANIZATION
-# This file is distributed under the same license as the heat project.
-# FIRST AUTHOR <EMAIL@ADDRESS>, 2013.
-#
-#, fuzzy
-msgid ""
-msgstr ""
-"Project-Id-Version: heat 2013.2.a430.gd264018\n"
-"Report-Msgid-Bugs-To: EMAIL@ADDRESS\n"
-"POT-Creation-Date: 2013-08-27 11:15+0800\n"
-"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
-"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
-"Language-Team: LANGUAGE <LL@li.org>\n"
-"MIME-Version: 1.0\n"
-"Content-Type: text/plain; charset=utf-8\n"
-"Content-Transfer-Encoding: 8bit\n"
-"Generated-By: Babel 0.9.6\n"
-
-#: heat/api/aws/ec2token.py:38
-msgid "Authentication Endpoint URI"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:260
-#, python-format
-msgid "Failed to fetch template: %s"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:293
-msgid "DisableRollback and OnFailure may not be used together"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:313
-#, python-format
-msgid "Unexpected action %(action)s"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:331 heat/api/cfn/v1/stacks.py:413
-msgid "Invalid Template URL"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:335 heat/api/cfn/v1/stacks.py:416
-msgid "TemplateBody or TemplateUrl were not given."
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:341 heat/api/cfn/v1/stacks.py:422
-msgid "The Template must be a JSON or YAML document."
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:383
-msgid "stack not not found"
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:49
-#, python-format
-msgid "Processing request: %(method)s %(path)s Accept: %(accept)s"
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:64
-#, python-format
-msgid "Matched versioned URI. Version: %(major_version)d.%(minor_version)d"
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:72
-#, python-format
-msgid ""
-"Unknown version in versioned URI: %(major_version)d.%(minor_version)d. "
-"Returning version choices."
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:88
-#, python-format
-msgid "Matched versioned media type. Version: %(major_version)d.%(minor_version)d"
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:94
-#, python-format
-msgid ""
-"Unknown version in accept header: "
-"%(major_version)d.%(minor_version)d...returning version choices."
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:102
-#, python-format
-msgid "Unknown accept header: %s...returning version choices."
-msgstr ""
-
-#: heat/api/openstack/v1/actions.py:43
-msgid "No action specified"
-msgstr ""
-
-#: heat/api/openstack/v1/actions.py:46
-msgid "Multiple actions specified"
-msgstr ""
-
-#: heat/api/openstack/v1/actions.py:50
-#, python-format
-msgid "Invalid action %s specified"
-msgstr ""
-
-#: heat/api/openstack/v1/actions.py:57
-#, python-format
-msgid "Unexpected action %s"
-msgstr ""
-
-#: heat/api/openstack/v1/events.py:100
-#, python-format
-msgid "No events found for resource %s"
-msgstr ""
-
-#: heat/api/openstack/v1/events.py:118
-#, python-format
-msgid "No event %s found"
-msgstr ""
-
-#: heat/api/openstack/v1/stacks.py:75
-#, python-format
-msgid "%s not in valid format"
-msgstr ""
-
-#: heat/api/openstack/v1/stacks.py:83
-msgid "No stack name specified"
-msgstr ""
-
-#: heat/api/openstack/v1/stacks.py:101
-#, python-format
-msgid "Could not retrieve template: %s"
-msgstr ""
-
-#: heat/api/openstack/v1/stacks.py:104
-msgid "No template specified"
-msgstr ""
-
-#: heat/api/openstack/v1/util.py:56
-msgid "Invalid Stack address"
-msgstr ""
-
-#: heat/common/auth_password.py:116
-msgid "Request missing required header X-Auth-Url"
-msgstr ""
-
-#: heat/common/auth_password.py:121
-#, python-format
-msgid "Header X-Auth-Url \"%s\" not an allowed endpoint"
-msgstr ""
-
-#: heat/common/config.py:37
-msgid "The flavor to use"
-msgstr ""
-
-#: heat/common/config.py:39
-msgid "The API paste config file to use"
-msgstr ""
-
-#: heat/common/config.py:102
-msgid "Allow orchestration of multiple clouds"
-msgstr ""
-
-#: heat/common/config.py:105
-msgid ""
-"Allowed keystone endpoints for auth_uri when multi_cloud is enabled. At "
-"least one endpoint needs to be specified."
-msgstr ""
-
-#: heat/common/exception.py:102
-msgid "An unknown exception occurred."
-msgstr ""
-
-#: heat/common/exception.py:113 heat/openstack/common/rpc/common.py:89
-msgid "Exception in string format operation"
-msgstr ""
-
-#: heat/common/exception.py:125
-#, python-format
-msgid "Missing required credential: %(required)s"
-msgstr ""
-
-#: heat/common/exception.py:129
-#, python-format
-msgid ""
-"Incorrect auth strategy, expected \"%(expected)s\" but received "
-"\"%(received)s\""
-msgstr ""
-
-#: heat/common/exception.py:134
-#, python-format
-msgid "Connect error/bad request to Auth service at URL %(url)s."
-msgstr ""
-
-#: heat/common/exception.py:138
-#, python-format
-msgid "Auth service at URL %(url)s not found."
-msgstr ""
-
-#: heat/common/exception.py:142
-msgid "Authorization failed."
-msgstr ""
-
-#: heat/common/exception.py:146
-msgid "You are not authenticated."
-msgstr ""
-
-#: heat/common/exception.py:150 heat/common/exception.py:155
-msgid "You are not authorized to complete this action."
-msgstr ""
-
-#: heat/common/exception.py:159
-#, python-format
-msgid "Data supplied was not valid: %(reason)s"
-msgstr ""
-
-#: heat/common/exception.py:163
-#, python-format
-msgid "Redirecting to %(uri)s for authorization."
-msgstr ""
-
-#: heat/common/exception.py:167
-msgid "There was an error configuring the client."
-msgstr ""
-
-#: heat/common/exception.py:171
-#, python-format
-msgid ""
-"The request returned a 302 Multiple Choices. This generally means that "
-"you have not included a version indicator in a request URI.\n"
-"\n"
-"The body of response returned:\n"
-"%(body)s"
-msgstr ""
-
-#: heat/common/exception.py:177
-#, python-format
-msgid ""
-"The request returned a 413 Request Entity Too Large. This generally means"
-" that rate limiting or a quota threshold was breached.\n"
-"\n"
-"The response body:\n"
-"%(body)s"
-msgstr ""
-
-#: heat/common/exception.py:188
-msgid ""
-"The request returned a 503 ServiceUnavilable. This generally occurs on "
-"service overload or other transient outage."
-msgstr ""
-
-#: heat/common/exception.py:199
-msgid "The URI was too long."
-msgstr ""
-
-#: heat/common/exception.py:203
-#, python-format
-msgid ""
-"The request returned 500 Internal Server Error\n"
-"\n"
-"The response body:\n"
-"%(body)s"
-msgstr ""
-
-#: heat/common/exception.py:208
-#, python-format
-msgid "Maximum redirects (%(redirects)s) was exceeded."
-msgstr ""
-
-#: heat/common/exception.py:212
-msgid "Received invalid HTTP redirect."
-msgstr ""
-
-#: heat/common/exception.py:216
-msgid "Response from Keystone does not contain a Heat endpoint."
-msgstr ""
-
-#: heat/common/exception.py:220
-#, python-format
-msgid ""
-"Multiple 'image' service matches for region %(region)s. This generally "
-"means that a region is required and you have not supplied one."
-msgstr ""
-
-#: heat/common/exception.py:226
-#, python-format
-msgid "The Parameter (%(key)s) was not provided."
-msgstr ""
-
-#: heat/common/exception.py:230
-#, python-format
-msgid "The Parameter (%(key)s) was not defined in template."
-msgstr ""
-
-#: heat/common/exception.py:234
-#, python-format
-msgid "The Referenced Attribute (%(resource)s %(key)s) is incorrect."
-msgstr ""
-
-#: heat/common/exception.py:239
-#, python-format
-msgid "The specified reference (%(resource)s %(key)s) is incorrect."
-msgstr ""
-
-#: heat/common/exception.py:244
-#, python-format
-msgid "The Key (%(key_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:248
-#, python-format
-msgid "The Flavor ID (%(flavor_id)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:252
-#, python-format
-msgid "The Image (%(image_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:256
-#, python-format
-msgid "Multiple images were found with name (%(image_name)s)."
-msgstr ""
-
-#: heat/common/exception.py:260
-#, python-format
-msgid "Searching Tenant %(target)s from Tenant %(actual)s forbidden."
-msgstr ""
-
-#: heat/common/exception.py:265
-#, python-format
-msgid "The Stack (%(stack_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:269
-#, python-format
-msgid "The Stack (%(stack_name)s) already exists."
-msgstr ""
-
-#: heat/common/exception.py:273
-#, python-format
-msgid "%(message)s"
-msgstr ""
-
-#: heat/common/exception.py:277
-#, python-format
-msgid ""
-"The Resource (%(resource_name)s) could not be found in Stack "
-"%(stack_name)s."
-msgstr ""
-
-#: heat/common/exception.py:282
-#, python-format
-msgid "The Resource Type (%(type_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:286
-#, python-format
-msgid "The Resource (%(resource_name)s) is not available."
-msgstr ""
-
-#: heat/common/exception.py:290
-#, python-format
-msgid "The Resource (%(resource_id)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:294
-#, python-format
-msgid "The Watch Rule (%(watch_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:298
-#, python-format
-msgid "%(exc_type)s: %(message)s"
-msgstr ""
-
-#: heat/common/exception.py:312
-#, python-format
-msgid "%(feature)s is not supported."
-msgstr ""
-
-#: heat/common/exception.py:316
-#, python-format
-msgid "Cannot define the following properties at the same time: %s."
-msgstr ""
-
-#: heat/common/exception.py:334
-msgid "Template exceeds maximum allowed size."
-msgstr ""
-
-#: heat/common/exception.py:338
-msgid ""
-"Egress rules are only allowed when Neutron is used and the 'VpcId' "
-"property is set."
-msgstr ""
-
-#: heat/common/plugin_loader.py:94
-#, python-format
-msgid "Failed to import module %s"
-msgstr ""
-
-#: heat/common/urlfetch.py:37
-#, python-format
-msgid "Fetching data from %s"
-msgstr ""
-
-#: heat/common/wsgi.py:55
-msgid ""
-"Address to bind the server. Useful when selecting a particular network "
-"interface."
-msgstr ""
-
-#: heat/common/wsgi.py:58
-msgid "The port on which the server will listen."
-msgstr ""
-
-#: heat/common/wsgi.py:65
-msgid "Number of backlog requests to configure the socket with"
-msgstr ""
-
-#: heat/common/wsgi.py:68
-msgid "Location of the SSL Certificate File to use for SSL mode"
-msgstr ""
-
-#: heat/common/wsgi.py:71
-msgid "Location of the SSL Key File to use for enabling SSL mode"
-msgstr ""
-
-#: heat/common/wsgi.py:78
-msgid "Number of workers for Heat service"
-msgstr ""
-
-#: heat/common/wsgi.py:129 heat/openstack/common/sslutils.py:61
-msgid ""
-"When running server in SSL mode, you must specify both a cert_file and "
-"key_file option value in your configuration file"
-msgstr ""
-
-#: heat/common/wsgi.py:147
-#, python-format
-msgid "Could not bind to %(bind_addr)safter trying for 30 seconds"
-msgstr ""
-
-#: heat/common/wsgi.py:179
-msgid "SIGTERM received"
-msgstr ""
-
-#: heat/common/wsgi.py:188
-msgid "SIGHUP received"
-msgstr ""
-
-#: heat/common/wsgi.py:203 heat/openstack/common/service.py:288
-#, python-format
-msgid "Starting %d workers"
-msgstr ""
-
-#: heat/common/wsgi.py:214
-#, python-format
-msgid "Removing dead child %s"
-msgstr ""
-
-#: heat/common/wsgi.py:221
-msgid "Caught keyboard interrupt. Exiting."
-msgstr ""
-
-#: heat/common/wsgi.py:225
-msgid "Exited"
-msgstr ""
-
-#: heat/common/wsgi.py:243
-#, python-format
-msgid "Child %d exiting normally"
-msgstr ""
-
-#: heat/common/wsgi.py:246
-#, python-format
-msgid "Started child %s"
-msgstr ""
-
-#: heat/common/wsgi.py:268
-msgid "Starting single process server"
-msgstr ""
-
-#: heat/common/wsgi.py:596
-#, python-format
-msgid "Exception handling resource: %s"
-msgstr ""
-
-#: heat/common/wsgi.py:597
-msgid ""
-"The server could not comply with the request since\r\n"
-"it is either malformed or otherwise incorrect.\r\n"
-msgstr ""
-
-#: heat/common/wsgi.py:611
-#, python-format
-msgid "Returning %(code)s to user: %(explanation)s"
-msgstr ""
-
-#: heat/common/wsgi.py:616
-#, python-format
-msgid "Unexpected error occurred serving API: %s"
-msgstr ""
-
-#: heat/db/sqlalchemy/api.py:393
-msgid "age should be an integer"
-msgstr ""
-
-#: heat/db/sqlalchemy/api.py:395
-msgid "age should be a positive integer"
-msgstr ""
-
-#: heat/db/sqlalchemy/migration.py:63
-msgid "python-migrate is not installed. Exiting."
-msgstr ""
-
-#: heat/db/sqlalchemy/migration.py:73
-msgid "version should be an integer"
-msgstr ""
-
-#: heat/doc/resources.py:85
-msgid "HOT Syntax"
-msgstr ""
-
-#: heat/doc/resources.py:107
-msgid "YAML Syntax"
-msgstr ""
-
-#: heat/doc/resources.py:129
-msgid "JSON Syntax"
-msgstr ""
-
-#: heat/doc/resources.py:161
-#, python-format
-msgid " from %s"
-msgstr ""
-
-#: heat/doc/resources.py:163
-#, python-format
-msgid " up to %s"
-msgstr ""
-
-#: heat/doc/resources.py:165
-#, python-format
-msgid " from length %s"
-msgstr ""
-
-#: heat/doc/resources.py:167
-#, python-format
-msgid " up to length %s"
-msgstr ""
-
-#: heat/doc/resources.py:174
-msgid "Not implemented."
-msgstr ""
-
-#: heat/doc/resources.py:185
-msgid "Required property."
-msgstr ""
-
-#: heat/doc/resources.py:189
-#, python-format
-msgid "Optional property, defaults to \"%s\"."
-msgstr ""
-
-#: heat/doc/resources.py:192
-msgid "Optional property."
-msgstr ""
-
-#: heat/doc/resources.py:196
-#, python-format
-msgid "Value must match pattern: %s"
-msgstr ""
-
-#: heat/doc/resources.py:203
-#, python-format
-msgid "Allowed values: %s"
-msgstr ""
-
-#: heat/doc/resources.py:209
-msgid "Map properties:"
-msgstr ""
-
-#: heat/doc/resources.py:215
-msgid "List contains maps with the properties:"
-msgstr ""
-
-#: heat/doc/resources.py:230
-msgid "Properties"
-msgstr ""
-
-#: heat/doc/resources.py:241
-msgid "Attributes"
-msgstr ""
-
-#: heat/engine/dependencies.py:25
-#, python-format
-msgid "Circular Dependency Found: %(cycle)s"
-msgstr ""
-
-#: heat/engine/environment.py:181
-#, python-format
-msgid "Changing %(path)s from %(was)s to %(now)s"
-msgstr ""
-
-#: heat/engine/environment.py:183
-#, python-format
-msgid "Registering %(path)s -> %(value)s"
-msgstr ""
-
-#: heat/engine/parser.py:74
-#, python-format
-msgid ""
-"Invalid stack name %s must contain only alphanumeric or \"_-.\" "
-"characters, must start with alpha"
-msgstr ""
-
-#: heat/engine/resource.py:55 heat/engine/resource.py:58
-#, python-format
-msgid "The Resource %s requires replacement."
-msgstr ""
-
-#: heat/engine/resource.py:132
-msgid "Resource name may not contain \"/\""
-msgstr ""
-
-#: heat/engine/scheduler.py:65
-#, python-format
-msgid "%s Timed out"
-msgstr ""
-
-#: heat/engine/service.py:311
-msgid "No Template provided."
-msgstr ""
-
-#: heat/engine/stack_resource.py:122
-#, python-format
-msgid "Cannot update %s, stack not created"
-msgstr ""
-
-#: heat/engine/stack_resource.py:151
-#, python-format
-msgid "Cannot suspend %s, stack not created"
-msgstr ""
-
-#: heat/engine/stack_resource.py:173
-#, python-format
-msgid "Cannot resume %s, stack not created"
-msgstr ""
-
-#: heat/engine/resources/__init__.py:39
-#, python-format
-msgid "Failed to load resources from %s"
-msgstr ""
-
-#: heat/engine/resources/instance.py:314
-#, python-format
-msgid "Build of server %(server)s failed: %(message)s (%(code)s)"
-msgstr ""
-
-#: heat/engine/resources/instance.py:321
-#, python-format
-msgid "Nova reported unexpected instance[%(name)s] status[%(status)s]"
-msgstr ""
-
-#: heat/engine/resources/instance.py:418
-#, python-format
-msgid "Cannot suspend %s, resource_id not set"
-msgstr ""
-
-#: heat/engine/resources/instance.py:424 heat/engine/resources/instance.py:478
-#, python-format
-msgid "Failed to find instance %s"
-msgstr ""
-
-#: heat/engine/resources/instance.py:455
-#, python-format
-msgid " nova reported unexpected instance[%(instance)s] status[%(status)s]"
-msgstr ""
-
-#: heat/engine/resources/instance.py:472
-#, python-format
-msgid "Cannot resume %s, resource_id not set"
-msgstr ""
-
-#: heat/engine/resources/nova_utils.py:221
-#, python-format
-msgid "Resizing to '%(flavor)s' failed, status '%(status)s'"
-msgstr ""
-
-#: heat/engine/resources/vpc.py:72
-#, python-format
-msgid "Multiple routers found with name %s"
-msgstr ""
-
-#: heat/engine/resources/neutron/loadbalancer.py:251
-msgid "The ID of the load balancing pool"
-msgstr ""
-
-#: heat/engine/resources/neutron/loadbalancer.py:254
-msgid "Port number on which the servers are running on the members"
-msgstr ""
-
-#: heat/engine/resources/neutron/loadbalancer.py:258
-msgid "The list of Nova server IDs load balanced"
-msgstr ""
-
-#: heat/engine/resources/rackspace/cloud_loadbalancer.py:33
-#, python-format
-msgid "There was an error building the loadbalancer:%(lb_name)s."
-msgstr ""
-
-#: heat/openstack/common/eventlet_backdoor.py:142
-#, python-format
-msgid "Eventlet backdoor listening on %(port)s for process %(pid)d"
-msgstr ""
-
-#: heat/openstack/common/exception.py:102
-msgid "Uncaught exception"
-msgstr ""
-
-#: heat/openstack/common/excutils.py:62
-#, python-format
-msgid "Original exception being dropped: %s"
-msgstr ""
-
-#: heat/openstack/common/excutils.py:90
-#, python-format
-msgid "Unexpected exception occurred %d time(s)... retrying."
-msgstr ""
-
-#: heat/openstack/common/fileutils.py:64
-#, python-format
-msgid "Reloading cached file %s"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:100
-#, python-format
-msgid "Could not release the acquired lock `%s`"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:166
-#, python-format
-msgid "Got semaphore \"%(lock)s\""
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:175
-#, python-format
-msgid "Attempting to grab file lock \"%(lock)s\""
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:185
-#, python-format
-msgid "Created lock path: %s"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:203
-#, python-format
-msgid "Got file lock \"%(lock)s\" at %(path)s"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:207
-#, python-format
-msgid "Released file lock \"%(lock)s\" at %(path)s"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:244
-#, python-format
-msgid "Got semaphore / lock \"%(function)s\""
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:248
-#, python-format
-msgid "Semaphore / lock released \"%(function)s\""
-msgstr ""
-
-#: heat/openstack/common/log.py:244
-#, python-format
-msgid "Deprecated: %s"
-msgstr ""
-
-#: heat/openstack/common/log.py:336
-#, python-format
-msgid "Error loading logging config %(log_config)s: %(err_msg)s"
-msgstr ""
-
-#: heat/openstack/common/log.py:386
-#, python-format
-msgid "syslog facility must be one of: %s"
-msgstr ""
-
-#: heat/openstack/common/log.py:556
-#, python-format
-msgid "Fatal call to deprecated config: %(msg)s"
-msgstr ""
-
-#: heat/openstack/common/loopingcall.py:84
-#, python-format
-msgid "task run outlasted interval by %s sec"
-msgstr ""
-
-#: heat/openstack/common/loopingcall.py:91
-msgid "in fixed duration looping call"
-msgstr ""
-
-#: heat/openstack/common/loopingcall.py:131
-#, python-format
-msgid "Dynamic looping call sleeping for %.02f seconds"
-msgstr ""
-
-#: heat/openstack/common/loopingcall.py:138
-msgid "in dynamic looping call"
-msgstr ""
-
-#: heat/openstack/common/policy.py:75
-msgid "JSON file containing policy"
-msgstr ""
-
-#: heat/openstack/common/policy.py:78
-msgid "Rule enforced when requested rule is not found"
-msgstr ""
-
-#: heat/openstack/common/policy.py:92
-#, python-format
-msgid "Policy doesn't allow %s to be performed."
-msgstr ""
-
-#: heat/openstack/common/policy.py:177
-#, python-format
-msgid "Rules must be an instance of dict or Rules, got %s instead"
-msgstr ""
-
-#: heat/openstack/common/policy.py:207
-msgid "Rules successfully reloaded"
-msgstr ""
-
-#: heat/openstack/common/policy.py:253
-#, python-format
-msgid "Rule %s will be now enforced"
-msgstr ""
-
-#: heat/openstack/common/policy.py:268
-#, python-format
-msgid "Rule [%s] doesn't exist"
-msgstr ""
-
-#: heat/openstack/common/policy.py:478
-#, python-format
-msgid "Failed to understand rule %s"
-msgstr ""
-
-#: heat/openstack/common/policy.py:488
-#, python-format
-msgid "No handler for matches of kind %s"
-msgstr ""
-
-#: heat/openstack/common/policy.py:759
-#, python-format
-msgid "Failed to understand rule %r"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:127
-#, python-format
-msgid "Got unknown keyword args to utils.execute: %r"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:142
-#, python-format
-msgid "Running cmd (subprocess): %s"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:167
-#: heat/openstack/common/processutils.py:240
-#, python-format
-msgid "Result was %s"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:179
-#, python-format
-msgid "%r failed. Retrying."
-msgstr ""
-
-#: heat/openstack/common/processutils.py:219
-#, python-format
-msgid "Running cmd (SSH): %s"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:221
-msgid "Environment not supported over SSH"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:225
-msgid "process_input not supported over SSH"
-msgstr ""
-
-#: heat/openstack/common/service.py:118 heat/openstack/common/service.py:335
-msgid "Full set of CONF:"
-msgstr ""
-
-#: heat/openstack/common/service.py:127 heat/openstack/common/service.py:217
-#, python-format
-msgid "Caught %s, exiting"
-msgstr ""
-
-#: heat/openstack/common/service.py:139
-msgid "Exception during rpc cleanup."
-msgstr ""
-
-#: heat/openstack/common/service.py:188
-msgid "Parent process has died unexpectedly, exiting"
-msgstr ""
-
-#: heat/openstack/common/service.py:223
-msgid "Unhandled exception"
-msgstr ""
-
-#: heat/openstack/common/service.py:256
-msgid "Forking too fast, sleeping"
-msgstr ""
-
-#: heat/openstack/common/service.py:278
-#, python-format
-msgid "Started child %d"
-msgstr ""
-
-#: heat/openstack/common/service.py:305
-#, python-format
-msgid "Child %(pid)d killed by signal %(sig)d"
-msgstr ""
-
-#: heat/openstack/common/service.py:309
-#, python-format
-msgid "Child %(pid)s exited with status %(code)d"
-msgstr ""
-
-#: heat/openstack/common/service.py:313
-#, python-format
-msgid "pid %d not in child list"
-msgstr ""
-
-#: heat/openstack/common/service.py:345
-#, python-format
-msgid "Caught %s, stopping children"
-msgstr ""
-
-#: heat/openstack/common/service.py:363
-#, python-format
-msgid "Waiting on %d children to exit"
-msgstr ""
-
-#: heat/openstack/common/sslutils.py:52
-#, python-format
-msgid "Unable to find cert_file : %s"
-msgstr ""
-
-#: heat/openstack/common/sslutils.py:55
-#, python-format
-msgid "Unable to find ca_file : %s"
-msgstr ""
-
-#: heat/openstack/common/sslutils.py:58
-#, python-format
-msgid "Unable to find key_file : %s"
-msgstr ""
-
-#: heat/openstack/common/sslutils.py:100
-#, python-format
-msgid "Invalid SSL version : %s"
-msgstr ""
-
-#: heat/openstack/common/db/exception.py:44
-msgid "Invalid Parameter: Unicode is not supported by the current database."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/session.py:552
-msgid "DB exception wrapped."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/session.py:614
-#, python-format
-msgid "Got mysql server has gone away: %s"
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/session.py:694
-#, python-format
-msgid "SQL connection failed. %s attempts left."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:50
-msgid "Sort key supplied was not valid."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:89
-msgid "Id not in sort_keys; is sort_keys unique?"
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:111
-msgid "Unknown sort direction, must be 'desc' or 'asc'"
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:182
-#, python-format
-msgid ""
-"Please specify column %s in col_name_col_instance param. It is required "
-"because column has unsupported type by sqlite)."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:188
-#, python-format
-msgid ""
-"col_name_col_instance param has wrong type of column instance for column "
-"%s It should be instance of sqlalchemy.Column."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:268
-#, python-format
-msgid "Deleting duplicated row with id: %(id)s from table: %(table)s"
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:290
-msgid "Unsupported id columns type"
-msgstr ""
-
-#: heat/openstack/common/notifier/api.py:129
-#, python-format
-msgid "%s not in valid priorities"
-msgstr ""
-
-#: heat/openstack/common/notifier/api.py:145
-#, python-format
-msgid ""
-"Problem '%(e)s' attempting to send to notification system. "
-"Payload=%(payload)s"
-msgstr ""
-
-#: heat/openstack/common/notifier/api.py:164
-#, python-format
-msgid "Failed to load notifier %s. These notifications will not be sent."
-msgstr ""
-
-#: heat/openstack/common/notifier/list_notifier.py:112
-#, python-format
-msgid "Problem '%(e)s' attempting to send to notification driver %(driver)s."
-msgstr ""
-
-#: heat/openstack/common/notifier/rabbit_notifier.py:27
-msgid "The rabbit_notifier is now deprecated. Please use rpc_notifier instead."
-msgstr ""
-
-#: heat/openstack/common/notifier/rpc_notifier.py:45
-#: heat/openstack/common/notifier/rpc_notifier2.py:51
-#, python-format
-msgid "Could not send notification to %(topic)s. Payload=%(message)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/__init__.py:107
-#, python-format
-msgid ""
-"A RPC is being made while holding a lock. The locks currently held are "
-"%(locks)s. This is probably a bug. Please report it. Include the "
-"following: [%(stack)s]."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:75
-msgid "Pool creating new connection"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:202
-#, python-format
-msgid "No calling threads waiting for msg_id : %(msg_id)s, message : %(data)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:205
-#, python-format
-msgid "_call_waiters: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:212
-#, python-format
-msgid ""
-"Number of call waiters is greater than warning threshhold: %d. There "
-"could be a MulticallProxyWaiter leak."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:290
-#, python-format
-msgid "unpacked context: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:336
-#, python-format
-msgid "UNIQUE_ID is %s."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:408
-#, python-format
-msgid "received %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:416
-#, python-format
-msgid "no method for message: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:417
-#, python-format
-msgid "No method for message: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:445
-#: heat/openstack/common/rpc/impl_zmq.py:280
-#, python-format
-msgid "Expected exception during message handling (%s)"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:453
-#: heat/openstack/common/rpc/impl_zmq.py:286
-msgid "Exception during message handling"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:527
-#, python-format
-msgid "Making synchronous call on %s ..."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:530
-#, python-format
-msgid "MSG_ID is %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:556
-#, python-format
-msgid "Making asynchronous cast on %s..."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:565
-msgid "Making asynchronous fanout cast..."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:593
-#, python-format
-msgid "Sending %(event_type)s on %(topic)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:77
-msgid "An unknown RPC related exception occurred."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:107
-#, python-format
-msgid ""
-"Remote error: %(exc_type)s %(value)s\n"
-"%(traceback)s."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:124
-#, python-format
-msgid ""
-"Timeout while waiting on RPC response - topic: \"%(topic)s\", RPC method:"
-" \"%(method)s\" info: \"%(info)s\""
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:141
-#: heat/openstack/common/rpc/common.py:142
-#: heat/openstack/common/rpc/common.py:143
-msgid "<unknown>"
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:147
-#, python-format
-msgid "Found duplicate message(%(msg_id)s). Skipping it."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:151
-msgid "Invalid reuse of an RPC connection."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:155
-#, python-format
-msgid "Specified RPC version, %(version)s, not supported by this endpoint."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:160
-#, python-format
-msgid ""
-"Specified RPC envelope version, %(version)s, not supported by this "
-"endpoint."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:165
-#, python-format
-msgid "Specified RPC version cap, %(version_cap)s, is too low"
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:289
-#, python-format
-msgid "Returning exception %s to caller"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:162
-msgid "Failed to process message ... skipping it."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:165
-msgid "Failed to process message ... will requeue."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:502
-#, python-format
-msgid "Reconnecting to AMQP server on %(hostname)s:%(port)d"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:524
-#, python-format
-msgid "Connected to AMQP server on %(hostname)s:%(port)d"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:561
-#, python-format
-msgid ""
-"Unable to connect to AMQP server on %(hostname)s:%(port)d after "
-"%(max_retries)d tries: %(err_str)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:575
-#, python-format
-msgid ""
-"AMQP server on %(hostname)s:%(port)d is unreachable: %(err_str)s. Trying "
-"again in %(sleep_time)d seconds."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:629
-#: heat/openstack/common/rpc/impl_qpid.py:492
-#, python-format
-msgid "Failed to declare consumer for topic '%(topic)s': %(err_str)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:647
-#: heat/openstack/common/rpc/impl_qpid.py:507
-#, python-format
-msgid "Timed out waiting for RPC response: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:651
-#: heat/openstack/common/rpc/impl_qpid.py:511
-#, python-format
-msgid "Failed to consume message from queue: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:690
-#: heat/openstack/common/rpc/impl_qpid.py:546
-#, python-format
-msgid "Failed to publish message to topic '%(topic)s': %(err_str)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:161
-msgid "Failed to process message... skipping it."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:434
-#, python-format
-msgid "Unable to connect to AMQP server: %(e)s. Sleeping %(delay)s seconds"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:440
-#, python-format
-msgid "Connected to AMQP server on %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:453
-msgid "Re-established AMQP queues"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:519
-msgid "Error processing message. Skipping it."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:96
-msgid "JSON serialization failed."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:101
-#, python-format
-msgid "Deserializing: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:136
-#, python-format
-msgid "Connecting to %(addr)s with %(type)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:137
-#, python-format
-msgid "-> Subscribed to %(subscribe)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:138
-#, python-format
-msgid "-> bind: %(bind)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:146
-msgid "Could not open socket."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:158
-#, python-format
-msgid "Subscribing to %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:200
-msgid "You cannot recv on this socket."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:205
-msgid "You cannot send on this socket."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:267
-#, python-format
-msgid "Running func with context: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:305
-msgid "Sending reply"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:339
-msgid "RPC message did not include method."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:371
-msgid "Registering reactor"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:383
-msgid "In reactor registered"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:387
-msgid "Consuming socket"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:437
-#, python-format
-msgid "Creating proxy for topic: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:443
-msgid "Topic contained dangerous characters."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:475
-msgid "Topic socket file creation failed."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:481
-#, python-format
-msgid "Local per-topic backlog buffer full for topic %(topic)s. Dropping message."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:497
-#, python-format
-msgid "Required IPC directory does not exist at %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:506
-#, python-format
-msgid "Permission denied to IPC directory at %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:509
-msgid "Could not create ZeroMQ receiver daemon. Socket may already be in use."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:543
-#, python-format
-msgid "CONSUMER RECEIVED DATA: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:562
-msgid "ZMQ Envelope version unsupported or unknown."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:590
-msgid "Skipping topic registration. Already registered."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:597
-#, python-format
-msgid "Consumer is a zmq.%s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:649
-msgid "Creating payload"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:662
-msgid "Creating queue socket for reply waiter"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:675
-msgid "Sending cast"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:678
-msgid "Cast sent; Waiting reply"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:681
-#, python-format
-msgid "Received message: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:682
-msgid "Unpacking response"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:691
-msgid "Unsupported or unknown ZMQ envelope returned."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:698
-msgid "RPC Message Invalid."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:721
-#, python-format
-msgid "%(msg)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:724
-#, python-format
-msgid "Sending message(s) to: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:728
-msgid "No matchmaker results. Not casting."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:731
-msgid "No match from matchmaker."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:813
-#, python-format
-msgid "rpc_zmq_matchmaker = %(orig)s is deprecated; use %(new)s instead"
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker.py:47
-msgid "Match not found by MatchMaker."
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker.py:81
-msgid "Matchmaker does not implement registration or heartbeat."
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker.py:217
-#, python-format
-msgid "Matchmaker unregistered: %(key)s, %(host)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker.py:229
-msgid "Register before starting heartbeat."
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker_ring.py:77
-#: heat/openstack/common/rpc/matchmaker_ring.py:95
-#, python-format
-msgid "No key defining hosts for topic '%s', see ringfile"
-msgstr ""
-
-#: heat/openstack/common/rpc/service.py:49
-#, python-format
-msgid "Creating Consumer connection for Service %s"
-msgstr ""
-
-#: heat/tests/test_exception.py:24
-#, python-format
-msgid "Testing message %(text)s"
-msgstr ""
-
+++ /dev/null
-# Translations template for heat.
-# Copyright (C) 2013 ORGANIZATION
-# This file is distributed under the same license as the heat project.
-# FIRST AUTHOR <EMAIL@ADDRESS>, 2013.
-#
-#, fuzzy
-msgid ""
-msgstr ""
-"Project-Id-Version: heat 2013.2.a430.gd264018\n"
-"Report-Msgid-Bugs-To: EMAIL@ADDRESS\n"
-"POT-Creation-Date: 2013-08-27 11:15+0800\n"
-"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
-"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
-"Language-Team: LANGUAGE <LL@li.org>\n"
-"MIME-Version: 1.0\n"
-"Content-Type: text/plain; charset=utf-8\n"
-"Content-Transfer-Encoding: 8bit\n"
-"Generated-By: Babel 0.9.6\n"
-
-#: heat/api/aws/ec2token.py:38
-msgid "Authentication Endpoint URI"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:260
-#, python-format
-msgid "Failed to fetch template: %s"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:293
-msgid "DisableRollback and OnFailure may not be used together"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:313
-#, python-format
-msgid "Unexpected action %(action)s"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:331 heat/api/cfn/v1/stacks.py:413
-msgid "Invalid Template URL"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:335 heat/api/cfn/v1/stacks.py:416
-msgid "TemplateBody or TemplateUrl were not given."
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:341 heat/api/cfn/v1/stacks.py:422
-msgid "The Template must be a JSON or YAML document."
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:383
-msgid "stack not not found"
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:49
-#, python-format
-msgid "Processing request: %(method)s %(path)s Accept: %(accept)s"
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:64
-#, python-format
-msgid "Matched versioned URI. Version: %(major_version)d.%(minor_version)d"
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:72
-#, python-format
-msgid ""
-"Unknown version in versioned URI: %(major_version)d.%(minor_version)d. "
-"Returning version choices."
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:88
-#, python-format
-msgid "Matched versioned media type. Version: %(major_version)d.%(minor_version)d"
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:94
-#, python-format
-msgid ""
-"Unknown version in accept header: "
-"%(major_version)d.%(minor_version)d...returning version choices."
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:102
-#, python-format
-msgid "Unknown accept header: %s...returning version choices."
-msgstr ""
-
-#: heat/api/openstack/v1/actions.py:43
-msgid "No action specified"
-msgstr ""
-
-#: heat/api/openstack/v1/actions.py:46
-msgid "Multiple actions specified"
-msgstr ""
-
-#: heat/api/openstack/v1/actions.py:50
-#, python-format
-msgid "Invalid action %s specified"
-msgstr ""
-
-#: heat/api/openstack/v1/actions.py:57
-#, python-format
-msgid "Unexpected action %s"
-msgstr ""
-
-#: heat/api/openstack/v1/events.py:100
-#, python-format
-msgid "No events found for resource %s"
-msgstr ""
-
-#: heat/api/openstack/v1/events.py:118
-#, python-format
-msgid "No event %s found"
-msgstr ""
-
-#: heat/api/openstack/v1/stacks.py:75
-#, python-format
-msgid "%s not in valid format"
-msgstr ""
-
-#: heat/api/openstack/v1/stacks.py:83
-msgid "No stack name specified"
-msgstr ""
-
-#: heat/api/openstack/v1/stacks.py:101
-#, python-format
-msgid "Could not retrieve template: %s"
-msgstr ""
-
-#: heat/api/openstack/v1/stacks.py:104
-msgid "No template specified"
-msgstr ""
-
-#: heat/api/openstack/v1/util.py:56
-msgid "Invalid Stack address"
-msgstr ""
-
-#: heat/common/auth_password.py:116
-msgid "Request missing required header X-Auth-Url"
-msgstr ""
-
-#: heat/common/auth_password.py:121
-#, python-format
-msgid "Header X-Auth-Url \"%s\" not an allowed endpoint"
-msgstr ""
-
-#: heat/common/config.py:37
-msgid "The flavor to use"
-msgstr ""
-
-#: heat/common/config.py:39
-msgid "The API paste config file to use"
-msgstr ""
-
-#: heat/common/config.py:102
-msgid "Allow orchestration of multiple clouds"
-msgstr ""
-
-#: heat/common/config.py:105
-msgid ""
-"Allowed keystone endpoints for auth_uri when multi_cloud is enabled. At "
-"least one endpoint needs to be specified."
-msgstr ""
-
-#: heat/common/exception.py:102
-msgid "An unknown exception occurred."
-msgstr ""
-
-#: heat/common/exception.py:113 heat/openstack/common/rpc/common.py:89
-msgid "Exception in string format operation"
-msgstr ""
-
-#: heat/common/exception.py:125
-#, python-format
-msgid "Missing required credential: %(required)s"
-msgstr ""
-
-#: heat/common/exception.py:129
-#, python-format
-msgid ""
-"Incorrect auth strategy, expected \"%(expected)s\" but received "
-"\"%(received)s\""
-msgstr ""
-
-#: heat/common/exception.py:134
-#, python-format
-msgid "Connect error/bad request to Auth service at URL %(url)s."
-msgstr ""
-
-#: heat/common/exception.py:138
-#, python-format
-msgid "Auth service at URL %(url)s not found."
-msgstr ""
-
-#: heat/common/exception.py:142
-msgid "Authorization failed."
-msgstr ""
-
-#: heat/common/exception.py:146
-msgid "You are not authenticated."
-msgstr ""
-
-#: heat/common/exception.py:150 heat/common/exception.py:155
-msgid "You are not authorized to complete this action."
-msgstr ""
-
-#: heat/common/exception.py:159
-#, python-format
-msgid "Data supplied was not valid: %(reason)s"
-msgstr ""
-
-#: heat/common/exception.py:163
-#, python-format
-msgid "Redirecting to %(uri)s for authorization."
-msgstr ""
-
-#: heat/common/exception.py:167
-msgid "There was an error configuring the client."
-msgstr ""
-
-#: heat/common/exception.py:171
-#, python-format
-msgid ""
-"The request returned a 302 Multiple Choices. This generally means that "
-"you have not included a version indicator in a request URI.\n"
-"\n"
-"The body of response returned:\n"
-"%(body)s"
-msgstr ""
-
-#: heat/common/exception.py:177
-#, python-format
-msgid ""
-"The request returned a 413 Request Entity Too Large. This generally means"
-" that rate limiting or a quota threshold was breached.\n"
-"\n"
-"The response body:\n"
-"%(body)s"
-msgstr ""
-
-#: heat/common/exception.py:188
-msgid ""
-"The request returned a 503 ServiceUnavilable. This generally occurs on "
-"service overload or other transient outage."
-msgstr ""
-
-#: heat/common/exception.py:199
-msgid "The URI was too long."
-msgstr ""
-
-#: heat/common/exception.py:203
-#, python-format
-msgid ""
-"The request returned 500 Internal Server Error\n"
-"\n"
-"The response body:\n"
-"%(body)s"
-msgstr ""
-
-#: heat/common/exception.py:208
-#, python-format
-msgid "Maximum redirects (%(redirects)s) was exceeded."
-msgstr ""
-
-#: heat/common/exception.py:212
-msgid "Received invalid HTTP redirect."
-msgstr ""
-
-#: heat/common/exception.py:216
-msgid "Response from Keystone does not contain a Heat endpoint."
-msgstr ""
-
-#: heat/common/exception.py:220
-#, python-format
-msgid ""
-"Multiple 'image' service matches for region %(region)s. This generally "
-"means that a region is required and you have not supplied one."
-msgstr ""
-
-#: heat/common/exception.py:226
-#, python-format
-msgid "The Parameter (%(key)s) was not provided."
-msgstr ""
-
-#: heat/common/exception.py:230
-#, python-format
-msgid "The Parameter (%(key)s) was not defined in template."
-msgstr ""
-
-#: heat/common/exception.py:234
-#, python-format
-msgid "The Referenced Attribute (%(resource)s %(key)s) is incorrect."
-msgstr ""
-
-#: heat/common/exception.py:239
-#, python-format
-msgid "The specified reference (%(resource)s %(key)s) is incorrect."
-msgstr ""
-
-#: heat/common/exception.py:244
-#, python-format
-msgid "The Key (%(key_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:248
-#, python-format
-msgid "The Flavor ID (%(flavor_id)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:252
-#, python-format
-msgid "The Image (%(image_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:256
-#, python-format
-msgid "Multiple images were found with name (%(image_name)s)."
-msgstr ""
-
-#: heat/common/exception.py:260
-#, python-format
-msgid "Searching Tenant %(target)s from Tenant %(actual)s forbidden."
-msgstr ""
-
-#: heat/common/exception.py:265
-#, python-format
-msgid "The Stack (%(stack_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:269
-#, python-format
-msgid "The Stack (%(stack_name)s) already exists."
-msgstr ""
-
-#: heat/common/exception.py:273
-#, python-format
-msgid "%(message)s"
-msgstr ""
-
-#: heat/common/exception.py:277
-#, python-format
-msgid ""
-"The Resource (%(resource_name)s) could not be found in Stack "
-"%(stack_name)s."
-msgstr ""
-
-#: heat/common/exception.py:282
-#, python-format
-msgid "The Resource Type (%(type_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:286
-#, python-format
-msgid "The Resource (%(resource_name)s) is not available."
-msgstr ""
-
-#: heat/common/exception.py:290
-#, python-format
-msgid "The Resource (%(resource_id)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:294
-#, python-format
-msgid "The Watch Rule (%(watch_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:298
-#, python-format
-msgid "%(exc_type)s: %(message)s"
-msgstr ""
-
-#: heat/common/exception.py:312
-#, python-format
-msgid "%(feature)s is not supported."
-msgstr ""
-
-#: heat/common/exception.py:316
-#, python-format
-msgid "Cannot define the following properties at the same time: %s."
-msgstr ""
-
-#: heat/common/exception.py:334
-msgid "Template exceeds maximum allowed size."
-msgstr ""
-
-#: heat/common/exception.py:338
-msgid ""
-"Egress rules are only allowed when Neutron is used and the 'VpcId' "
-"property is set."
-msgstr ""
-
-#: heat/common/plugin_loader.py:94
-#, python-format
-msgid "Failed to import module %s"
-msgstr ""
-
-#: heat/common/urlfetch.py:37
-#, python-format
-msgid "Fetching data from %s"
-msgstr ""
-
-#: heat/common/wsgi.py:55
-msgid ""
-"Address to bind the server. Useful when selecting a particular network "
-"interface."
-msgstr ""
-
-#: heat/common/wsgi.py:58
-msgid "The port on which the server will listen."
-msgstr ""
-
-#: heat/common/wsgi.py:65
-msgid "Number of backlog requests to configure the socket with"
-msgstr ""
-
-#: heat/common/wsgi.py:68
-msgid "Location of the SSL Certificate File to use for SSL mode"
-msgstr ""
-
-#: heat/common/wsgi.py:71
-msgid "Location of the SSL Key File to use for enabling SSL mode"
-msgstr ""
-
-#: heat/common/wsgi.py:78
-msgid "Number of workers for Heat service"
-msgstr ""
-
-#: heat/common/wsgi.py:129 heat/openstack/common/sslutils.py:61
-msgid ""
-"When running server in SSL mode, you must specify both a cert_file and "
-"key_file option value in your configuration file"
-msgstr ""
-
-#: heat/common/wsgi.py:147
-#, python-format
-msgid "Could not bind to %(bind_addr)safter trying for 30 seconds"
-msgstr ""
-
-#: heat/common/wsgi.py:179
-msgid "SIGTERM received"
-msgstr ""
-
-#: heat/common/wsgi.py:188
-msgid "SIGHUP received"
-msgstr ""
-
-#: heat/common/wsgi.py:203 heat/openstack/common/service.py:288
-#, python-format
-msgid "Starting %d workers"
-msgstr ""
-
-#: heat/common/wsgi.py:214
-#, python-format
-msgid "Removing dead child %s"
-msgstr ""
-
-#: heat/common/wsgi.py:221
-msgid "Caught keyboard interrupt. Exiting."
-msgstr ""
-
-#: heat/common/wsgi.py:225
-msgid "Exited"
-msgstr ""
-
-#: heat/common/wsgi.py:243
-#, python-format
-msgid "Child %d exiting normally"
-msgstr ""
-
-#: heat/common/wsgi.py:246
-#, python-format
-msgid "Started child %s"
-msgstr ""
-
-#: heat/common/wsgi.py:268
-msgid "Starting single process server"
-msgstr ""
-
-#: heat/common/wsgi.py:596
-#, python-format
-msgid "Exception handling resource: %s"
-msgstr ""
-
-#: heat/common/wsgi.py:597
-msgid ""
-"The server could not comply with the request since\r\n"
-"it is either malformed or otherwise incorrect.\r\n"
-msgstr ""
-
-#: heat/common/wsgi.py:611
-#, python-format
-msgid "Returning %(code)s to user: %(explanation)s"
-msgstr ""
-
-#: heat/common/wsgi.py:616
-#, python-format
-msgid "Unexpected error occurred serving API: %s"
-msgstr ""
-
-#: heat/db/sqlalchemy/api.py:393
-msgid "age should be an integer"
-msgstr ""
-
-#: heat/db/sqlalchemy/api.py:395
-msgid "age should be a positive integer"
-msgstr ""
-
-#: heat/db/sqlalchemy/migration.py:63
-msgid "python-migrate is not installed. Exiting."
-msgstr ""
-
-#: heat/db/sqlalchemy/migration.py:73
-msgid "version should be an integer"
-msgstr ""
-
-#: heat/doc/resources.py:85
-msgid "HOT Syntax"
-msgstr ""
-
-#: heat/doc/resources.py:107
-msgid "YAML Syntax"
-msgstr ""
-
-#: heat/doc/resources.py:129
-msgid "JSON Syntax"
-msgstr ""
-
-#: heat/doc/resources.py:161
-#, python-format
-msgid " from %s"
-msgstr ""
-
-#: heat/doc/resources.py:163
-#, python-format
-msgid " up to %s"
-msgstr ""
-
-#: heat/doc/resources.py:165
-#, python-format
-msgid " from length %s"
-msgstr ""
-
-#: heat/doc/resources.py:167
-#, python-format
-msgid " up to length %s"
-msgstr ""
-
-#: heat/doc/resources.py:174
-msgid "Not implemented."
-msgstr ""
-
-#: heat/doc/resources.py:185
-msgid "Required property."
-msgstr ""
-
-#: heat/doc/resources.py:189
-#, python-format
-msgid "Optional property, defaults to \"%s\"."
-msgstr ""
-
-#: heat/doc/resources.py:192
-msgid "Optional property."
-msgstr ""
-
-#: heat/doc/resources.py:196
-#, python-format
-msgid "Value must match pattern: %s"
-msgstr ""
-
-#: heat/doc/resources.py:203
-#, python-format
-msgid "Allowed values: %s"
-msgstr ""
-
-#: heat/doc/resources.py:209
-msgid "Map properties:"
-msgstr ""
-
-#: heat/doc/resources.py:215
-msgid "List contains maps with the properties:"
-msgstr ""
-
-#: heat/doc/resources.py:230
-msgid "Properties"
-msgstr ""
-
-#: heat/doc/resources.py:241
-msgid "Attributes"
-msgstr ""
-
-#: heat/engine/dependencies.py:25
-#, python-format
-msgid "Circular Dependency Found: %(cycle)s"
-msgstr ""
-
-#: heat/engine/environment.py:181
-#, python-format
-msgid "Changing %(path)s from %(was)s to %(now)s"
-msgstr ""
-
-#: heat/engine/environment.py:183
-#, python-format
-msgid "Registering %(path)s -> %(value)s"
-msgstr ""
-
-#: heat/engine/parser.py:74
-#, python-format
-msgid ""
-"Invalid stack name %s must contain only alphanumeric or \"_-.\" "
-"characters, must start with alpha"
-msgstr ""
-
-#: heat/engine/resource.py:55 heat/engine/resource.py:58
-#, python-format
-msgid "The Resource %s requires replacement."
-msgstr ""
-
-#: heat/engine/resource.py:132
-msgid "Resource name may not contain \"/\""
-msgstr ""
-
-#: heat/engine/scheduler.py:65
-#, python-format
-msgid "%s Timed out"
-msgstr ""
-
-#: heat/engine/service.py:311
-msgid "No Template provided."
-msgstr ""
-
-#: heat/engine/stack_resource.py:122
-#, python-format
-msgid "Cannot update %s, stack not created"
-msgstr ""
-
-#: heat/engine/stack_resource.py:151
-#, python-format
-msgid "Cannot suspend %s, stack not created"
-msgstr ""
-
-#: heat/engine/stack_resource.py:173
-#, python-format
-msgid "Cannot resume %s, stack not created"
-msgstr ""
-
-#: heat/engine/resources/__init__.py:39
-#, python-format
-msgid "Failed to load resources from %s"
-msgstr ""
-
-#: heat/engine/resources/instance.py:314
-#, python-format
-msgid "Build of server %(server)s failed: %(message)s (%(code)s)"
-msgstr ""
-
-#: heat/engine/resources/instance.py:321
-#, python-format
-msgid "Nova reported unexpected instance[%(name)s] status[%(status)s]"
-msgstr ""
-
-#: heat/engine/resources/instance.py:418
-#, python-format
-msgid "Cannot suspend %s, resource_id not set"
-msgstr ""
-
-#: heat/engine/resources/instance.py:424 heat/engine/resources/instance.py:478
-#, python-format
-msgid "Failed to find instance %s"
-msgstr ""
-
-#: heat/engine/resources/instance.py:455
-#, python-format
-msgid " nova reported unexpected instance[%(instance)s] status[%(status)s]"
-msgstr ""
-
-#: heat/engine/resources/instance.py:472
-#, python-format
-msgid "Cannot resume %s, resource_id not set"
-msgstr ""
-
-#: heat/engine/resources/nova_utils.py:221
-#, python-format
-msgid "Resizing to '%(flavor)s' failed, status '%(status)s'"
-msgstr ""
-
-#: heat/engine/resources/vpc.py:72
-#, python-format
-msgid "Multiple routers found with name %s"
-msgstr ""
-
-#: heat/engine/resources/neutron/loadbalancer.py:251
-msgid "The ID of the load balancing pool"
-msgstr ""
-
-#: heat/engine/resources/neutron/loadbalancer.py:254
-msgid "Port number on which the servers are running on the members"
-msgstr ""
-
-#: heat/engine/resources/neutron/loadbalancer.py:258
-msgid "The list of Nova server IDs load balanced"
-msgstr ""
-
-#: heat/engine/resources/rackspace/cloud_loadbalancer.py:33
-#, python-format
-msgid "There was an error building the loadbalancer:%(lb_name)s."
-msgstr ""
-
-#: heat/openstack/common/eventlet_backdoor.py:142
-#, python-format
-msgid "Eventlet backdoor listening on %(port)s for process %(pid)d"
-msgstr ""
-
-#: heat/openstack/common/exception.py:102
-msgid "Uncaught exception"
-msgstr ""
-
-#: heat/openstack/common/excutils.py:62
-#, python-format
-msgid "Original exception being dropped: %s"
-msgstr ""
-
-#: heat/openstack/common/excutils.py:90
-#, python-format
-msgid "Unexpected exception occurred %d time(s)... retrying."
-msgstr ""
-
-#: heat/openstack/common/fileutils.py:64
-#, python-format
-msgid "Reloading cached file %s"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:100
-#, python-format
-msgid "Could not release the acquired lock `%s`"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:166
-#, python-format
-msgid "Got semaphore \"%(lock)s\""
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:175
-#, python-format
-msgid "Attempting to grab file lock \"%(lock)s\""
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:185
-#, python-format
-msgid "Created lock path: %s"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:203
-#, python-format
-msgid "Got file lock \"%(lock)s\" at %(path)s"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:207
-#, python-format
-msgid "Released file lock \"%(lock)s\" at %(path)s"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:244
-#, python-format
-msgid "Got semaphore / lock \"%(function)s\""
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:248
-#, python-format
-msgid "Semaphore / lock released \"%(function)s\""
-msgstr ""
-
-#: heat/openstack/common/log.py:244
-#, python-format
-msgid "Deprecated: %s"
-msgstr ""
-
-#: heat/openstack/common/log.py:336
-#, python-format
-msgid "Error loading logging config %(log_config)s: %(err_msg)s"
-msgstr ""
-
-#: heat/openstack/common/log.py:386
-#, python-format
-msgid "syslog facility must be one of: %s"
-msgstr ""
-
-#: heat/openstack/common/log.py:556
-#, python-format
-msgid "Fatal call to deprecated config: %(msg)s"
-msgstr ""
-
-#: heat/openstack/common/loopingcall.py:84
-#, python-format
-msgid "task run outlasted interval by %s sec"
-msgstr ""
-
-#: heat/openstack/common/loopingcall.py:91
-msgid "in fixed duration looping call"
-msgstr ""
-
-#: heat/openstack/common/loopingcall.py:131
-#, python-format
-msgid "Dynamic looping call sleeping for %.02f seconds"
-msgstr ""
-
-#: heat/openstack/common/loopingcall.py:138
-msgid "in dynamic looping call"
-msgstr ""
-
-#: heat/openstack/common/policy.py:75
-msgid "JSON file containing policy"
-msgstr ""
-
-#: heat/openstack/common/policy.py:78
-msgid "Rule enforced when requested rule is not found"
-msgstr ""
-
-#: heat/openstack/common/policy.py:92
-#, python-format
-msgid "Policy doesn't allow %s to be performed."
-msgstr ""
-
-#: heat/openstack/common/policy.py:177
-#, python-format
-msgid "Rules must be an instance of dict or Rules, got %s instead"
-msgstr ""
-
-#: heat/openstack/common/policy.py:207
-msgid "Rules successfully reloaded"
-msgstr ""
-
-#: heat/openstack/common/policy.py:253
-#, python-format
-msgid "Rule %s will be now enforced"
-msgstr ""
-
-#: heat/openstack/common/policy.py:268
-#, python-format
-msgid "Rule [%s] doesn't exist"
-msgstr ""
-
-#: heat/openstack/common/policy.py:478
-#, python-format
-msgid "Failed to understand rule %s"
-msgstr ""
-
-#: heat/openstack/common/policy.py:488
-#, python-format
-msgid "No handler for matches of kind %s"
-msgstr ""
-
-#: heat/openstack/common/policy.py:759
-#, python-format
-msgid "Failed to understand rule %r"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:127
-#, python-format
-msgid "Got unknown keyword args to utils.execute: %r"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:142
-#, python-format
-msgid "Running cmd (subprocess): %s"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:167
-#: heat/openstack/common/processutils.py:240
-#, python-format
-msgid "Result was %s"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:179
-#, python-format
-msgid "%r failed. Retrying."
-msgstr ""
-
-#: heat/openstack/common/processutils.py:219
-#, python-format
-msgid "Running cmd (SSH): %s"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:221
-msgid "Environment not supported over SSH"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:225
-msgid "process_input not supported over SSH"
-msgstr ""
-
-#: heat/openstack/common/service.py:118 heat/openstack/common/service.py:335
-msgid "Full set of CONF:"
-msgstr ""
-
-#: heat/openstack/common/service.py:127 heat/openstack/common/service.py:217
-#, python-format
-msgid "Caught %s, exiting"
-msgstr ""
-
-#: heat/openstack/common/service.py:139
-msgid "Exception during rpc cleanup."
-msgstr ""
-
-#: heat/openstack/common/service.py:188
-msgid "Parent process has died unexpectedly, exiting"
-msgstr ""
-
-#: heat/openstack/common/service.py:223
-msgid "Unhandled exception"
-msgstr ""
-
-#: heat/openstack/common/service.py:256
-msgid "Forking too fast, sleeping"
-msgstr ""
-
-#: heat/openstack/common/service.py:278
-#, python-format
-msgid "Started child %d"
-msgstr ""
-
-#: heat/openstack/common/service.py:305
-#, python-format
-msgid "Child %(pid)d killed by signal %(sig)d"
-msgstr ""
-
-#: heat/openstack/common/service.py:309
-#, python-format
-msgid "Child %(pid)s exited with status %(code)d"
-msgstr ""
-
-#: heat/openstack/common/service.py:313
-#, python-format
-msgid "pid %d not in child list"
-msgstr ""
-
-#: heat/openstack/common/service.py:345
-#, python-format
-msgid "Caught %s, stopping children"
-msgstr ""
-
-#: heat/openstack/common/service.py:363
-#, python-format
-msgid "Waiting on %d children to exit"
-msgstr ""
-
-#: heat/openstack/common/sslutils.py:52
-#, python-format
-msgid "Unable to find cert_file : %s"
-msgstr ""
-
-#: heat/openstack/common/sslutils.py:55
-#, python-format
-msgid "Unable to find ca_file : %s"
-msgstr ""
-
-#: heat/openstack/common/sslutils.py:58
-#, python-format
-msgid "Unable to find key_file : %s"
-msgstr ""
-
-#: heat/openstack/common/sslutils.py:100
-#, python-format
-msgid "Invalid SSL version : %s"
-msgstr ""
-
-#: heat/openstack/common/db/exception.py:44
-msgid "Invalid Parameter: Unicode is not supported by the current database."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/session.py:552
-msgid "DB exception wrapped."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/session.py:614
-#, python-format
-msgid "Got mysql server has gone away: %s"
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/session.py:694
-#, python-format
-msgid "SQL connection failed. %s attempts left."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:50
-msgid "Sort key supplied was not valid."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:89
-msgid "Id not in sort_keys; is sort_keys unique?"
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:111
-msgid "Unknown sort direction, must be 'desc' or 'asc'"
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:182
-#, python-format
-msgid ""
-"Please specify column %s in col_name_col_instance param. It is required "
-"because column has unsupported type by sqlite)."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:188
-#, python-format
-msgid ""
-"col_name_col_instance param has wrong type of column instance for column "
-"%s It should be instance of sqlalchemy.Column."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:268
-#, python-format
-msgid "Deleting duplicated row with id: %(id)s from table: %(table)s"
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:290
-msgid "Unsupported id columns type"
-msgstr ""
-
-#: heat/openstack/common/notifier/api.py:129
-#, python-format
-msgid "%s not in valid priorities"
-msgstr ""
-
-#: heat/openstack/common/notifier/api.py:145
-#, python-format
-msgid ""
-"Problem '%(e)s' attempting to send to notification system. "
-"Payload=%(payload)s"
-msgstr ""
-
-#: heat/openstack/common/notifier/api.py:164
-#, python-format
-msgid "Failed to load notifier %s. These notifications will not be sent."
-msgstr ""
-
-#: heat/openstack/common/notifier/list_notifier.py:112
-#, python-format
-msgid "Problem '%(e)s' attempting to send to notification driver %(driver)s."
-msgstr ""
-
-#: heat/openstack/common/notifier/rabbit_notifier.py:27
-msgid "The rabbit_notifier is now deprecated. Please use rpc_notifier instead."
-msgstr ""
-
-#: heat/openstack/common/notifier/rpc_notifier.py:45
-#: heat/openstack/common/notifier/rpc_notifier2.py:51
-#, python-format
-msgid "Could not send notification to %(topic)s. Payload=%(message)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/__init__.py:107
-#, python-format
-msgid ""
-"A RPC is being made while holding a lock. The locks currently held are "
-"%(locks)s. This is probably a bug. Please report it. Include the "
-"following: [%(stack)s]."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:75
-msgid "Pool creating new connection"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:202
-#, python-format
-msgid "No calling threads waiting for msg_id : %(msg_id)s, message : %(data)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:205
-#, python-format
-msgid "_call_waiters: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:212
-#, python-format
-msgid ""
-"Number of call waiters is greater than warning threshhold: %d. There "
-"could be a MulticallProxyWaiter leak."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:290
-#, python-format
-msgid "unpacked context: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:336
-#, python-format
-msgid "UNIQUE_ID is %s."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:408
-#, python-format
-msgid "received %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:416
-#, python-format
-msgid "no method for message: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:417
-#, python-format
-msgid "No method for message: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:445
-#: heat/openstack/common/rpc/impl_zmq.py:280
-#, python-format
-msgid "Expected exception during message handling (%s)"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:453
-#: heat/openstack/common/rpc/impl_zmq.py:286
-msgid "Exception during message handling"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:527
-#, python-format
-msgid "Making synchronous call on %s ..."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:530
-#, python-format
-msgid "MSG_ID is %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:556
-#, python-format
-msgid "Making asynchronous cast on %s..."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:565
-msgid "Making asynchronous fanout cast..."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:593
-#, python-format
-msgid "Sending %(event_type)s on %(topic)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:77
-msgid "An unknown RPC related exception occurred."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:107
-#, python-format
-msgid ""
-"Remote error: %(exc_type)s %(value)s\n"
-"%(traceback)s."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:124
-#, python-format
-msgid ""
-"Timeout while waiting on RPC response - topic: \"%(topic)s\", RPC method:"
-" \"%(method)s\" info: \"%(info)s\""
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:141
-#: heat/openstack/common/rpc/common.py:142
-#: heat/openstack/common/rpc/common.py:143
-msgid "<unknown>"
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:147
-#, python-format
-msgid "Found duplicate message(%(msg_id)s). Skipping it."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:151
-msgid "Invalid reuse of an RPC connection."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:155
-#, python-format
-msgid "Specified RPC version, %(version)s, not supported by this endpoint."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:160
-#, python-format
-msgid ""
-"Specified RPC envelope version, %(version)s, not supported by this "
-"endpoint."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:165
-#, python-format
-msgid "Specified RPC version cap, %(version_cap)s, is too low"
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:289
-#, python-format
-msgid "Returning exception %s to caller"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:162
-msgid "Failed to process message ... skipping it."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:165
-msgid "Failed to process message ... will requeue."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:502
-#, python-format
-msgid "Reconnecting to AMQP server on %(hostname)s:%(port)d"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:524
-#, python-format
-msgid "Connected to AMQP server on %(hostname)s:%(port)d"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:561
-#, python-format
-msgid ""
-"Unable to connect to AMQP server on %(hostname)s:%(port)d after "
-"%(max_retries)d tries: %(err_str)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:575
-#, python-format
-msgid ""
-"AMQP server on %(hostname)s:%(port)d is unreachable: %(err_str)s. Trying "
-"again in %(sleep_time)d seconds."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:629
-#: heat/openstack/common/rpc/impl_qpid.py:492
-#, python-format
-msgid "Failed to declare consumer for topic '%(topic)s': %(err_str)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:647
-#: heat/openstack/common/rpc/impl_qpid.py:507
-#, python-format
-msgid "Timed out waiting for RPC response: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:651
-#: heat/openstack/common/rpc/impl_qpid.py:511
-#, python-format
-msgid "Failed to consume message from queue: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:690
-#: heat/openstack/common/rpc/impl_qpid.py:546
-#, python-format
-msgid "Failed to publish message to topic '%(topic)s': %(err_str)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:161
-msgid "Failed to process message... skipping it."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:434
-#, python-format
-msgid "Unable to connect to AMQP server: %(e)s. Sleeping %(delay)s seconds"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:440
-#, python-format
-msgid "Connected to AMQP server on %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:453
-msgid "Re-established AMQP queues"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:519
-msgid "Error processing message. Skipping it."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:96
-msgid "JSON serialization failed."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:101
-#, python-format
-msgid "Deserializing: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:136
-#, python-format
-msgid "Connecting to %(addr)s with %(type)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:137
-#, python-format
-msgid "-> Subscribed to %(subscribe)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:138
-#, python-format
-msgid "-> bind: %(bind)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:146
-msgid "Could not open socket."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:158
-#, python-format
-msgid "Subscribing to %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:200
-msgid "You cannot recv on this socket."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:205
-msgid "You cannot send on this socket."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:267
-#, python-format
-msgid "Running func with context: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:305
-msgid "Sending reply"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:339
-msgid "RPC message did not include method."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:371
-msgid "Registering reactor"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:383
-msgid "In reactor registered"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:387
-msgid "Consuming socket"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:437
-#, python-format
-msgid "Creating proxy for topic: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:443
-msgid "Topic contained dangerous characters."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:475
-msgid "Topic socket file creation failed."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:481
-#, python-format
-msgid "Local per-topic backlog buffer full for topic %(topic)s. Dropping message."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:497
-#, python-format
-msgid "Required IPC directory does not exist at %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:506
-#, python-format
-msgid "Permission denied to IPC directory at %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:509
-msgid "Could not create ZeroMQ receiver daemon. Socket may already be in use."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:543
-#, python-format
-msgid "CONSUMER RECEIVED DATA: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:562
-msgid "ZMQ Envelope version unsupported or unknown."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:590
-msgid "Skipping topic registration. Already registered."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:597
-#, python-format
-msgid "Consumer is a zmq.%s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:649
-msgid "Creating payload"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:662
-msgid "Creating queue socket for reply waiter"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:675
-msgid "Sending cast"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:678
-msgid "Cast sent; Waiting reply"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:681
-#, python-format
-msgid "Received message: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:682
-msgid "Unpacking response"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:691
-msgid "Unsupported or unknown ZMQ envelope returned."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:698
-msgid "RPC Message Invalid."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:721
-#, python-format
-msgid "%(msg)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:724
-#, python-format
-msgid "Sending message(s) to: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:728
-msgid "No matchmaker results. Not casting."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:731
-msgid "No match from matchmaker."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:813
-#, python-format
-msgid "rpc_zmq_matchmaker = %(orig)s is deprecated; use %(new)s instead"
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker.py:47
-msgid "Match not found by MatchMaker."
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker.py:81
-msgid "Matchmaker does not implement registration or heartbeat."
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker.py:217
-#, python-format
-msgid "Matchmaker unregistered: %(key)s, %(host)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker.py:229
-msgid "Register before starting heartbeat."
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker_ring.py:77
-#: heat/openstack/common/rpc/matchmaker_ring.py:95
-#, python-format
-msgid "No key defining hosts for topic '%s', see ringfile"
-msgstr ""
-
-#: heat/openstack/common/rpc/service.py:49
-#, python-format
-msgid "Creating Consumer connection for Service %s"
-msgstr ""
-
-#: heat/tests/test_exception.py:24
-#, python-format
-msgid "Testing message %(text)s"
-msgstr ""
-
+++ /dev/null
-# Translations template for heat.
-# Copyright (C) 2013 ORGANIZATION
-# This file is distributed under the same license as the heat project.
-# FIRST AUTHOR <EMAIL@ADDRESS>, 2013.
-#
-#, fuzzy
-msgid ""
-msgstr ""
-"Project-Id-Version: heat 2013.2.a430.gd264018\n"
-"Report-Msgid-Bugs-To: EMAIL@ADDRESS\n"
-"POT-Creation-Date: 2013-08-27 11:15+0800\n"
-"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
-"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
-"Language-Team: LANGUAGE <LL@li.org>\n"
-"MIME-Version: 1.0\n"
-"Content-Type: text/plain; charset=utf-8\n"
-"Content-Transfer-Encoding: 8bit\n"
-"Generated-By: Babel 0.9.6\n"
-
-#: heat/api/aws/ec2token.py:38
-msgid "Authentication Endpoint URI"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:260
-#, python-format
-msgid "Failed to fetch template: %s"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:293
-msgid "DisableRollback and OnFailure may not be used together"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:313
-#, python-format
-msgid "Unexpected action %(action)s"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:331 heat/api/cfn/v1/stacks.py:413
-msgid "Invalid Template URL"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:335 heat/api/cfn/v1/stacks.py:416
-msgid "TemplateBody or TemplateUrl were not given."
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:341 heat/api/cfn/v1/stacks.py:422
-msgid "The Template must be a JSON or YAML document."
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:383
-msgid "stack not not found"
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:49
-#, python-format
-msgid "Processing request: %(method)s %(path)s Accept: %(accept)s"
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:64
-#, python-format
-msgid "Matched versioned URI. Version: %(major_version)d.%(minor_version)d"
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:72
-#, python-format
-msgid ""
-"Unknown version in versioned URI: %(major_version)d.%(minor_version)d. "
-"Returning version choices."
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:88
-#, python-format
-msgid "Matched versioned media type. Version: %(major_version)d.%(minor_version)d"
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:94
-#, python-format
-msgid ""
-"Unknown version in accept header: "
-"%(major_version)d.%(minor_version)d...returning version choices."
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:102
-#, python-format
-msgid "Unknown accept header: %s...returning version choices."
-msgstr ""
-
-#: heat/api/openstack/v1/actions.py:43
-msgid "No action specified"
-msgstr ""
-
-#: heat/api/openstack/v1/actions.py:46
-msgid "Multiple actions specified"
-msgstr ""
-
-#: heat/api/openstack/v1/actions.py:50
-#, python-format
-msgid "Invalid action %s specified"
-msgstr ""
-
-#: heat/api/openstack/v1/actions.py:57
-#, python-format
-msgid "Unexpected action %s"
-msgstr ""
-
-#: heat/api/openstack/v1/events.py:100
-#, python-format
-msgid "No events found for resource %s"
-msgstr ""
-
-#: heat/api/openstack/v1/events.py:118
-#, python-format
-msgid "No event %s found"
-msgstr ""
-
-#: heat/api/openstack/v1/stacks.py:75
-#, python-format
-msgid "%s not in valid format"
-msgstr ""
-
-#: heat/api/openstack/v1/stacks.py:83
-msgid "No stack name specified"
-msgstr ""
-
-#: heat/api/openstack/v1/stacks.py:101
-#, python-format
-msgid "Could not retrieve template: %s"
-msgstr ""
-
-#: heat/api/openstack/v1/stacks.py:104
-msgid "No template specified"
-msgstr ""
-
-#: heat/api/openstack/v1/util.py:56
-msgid "Invalid Stack address"
-msgstr ""
-
-#: heat/common/auth_password.py:116
-msgid "Request missing required header X-Auth-Url"
-msgstr ""
-
-#: heat/common/auth_password.py:121
-#, python-format
-msgid "Header X-Auth-Url \"%s\" not an allowed endpoint"
-msgstr ""
-
-#: heat/common/config.py:37
-msgid "The flavor to use"
-msgstr ""
-
-#: heat/common/config.py:39
-msgid "The API paste config file to use"
-msgstr ""
-
-#: heat/common/config.py:102
-msgid "Allow orchestration of multiple clouds"
-msgstr ""
-
-#: heat/common/config.py:105
-msgid ""
-"Allowed keystone endpoints for auth_uri when multi_cloud is enabled. At "
-"least one endpoint needs to be specified."
-msgstr ""
-
-#: heat/common/exception.py:102
-msgid "An unknown exception occurred."
-msgstr ""
-
-#: heat/common/exception.py:113 heat/openstack/common/rpc/common.py:89
-msgid "Exception in string format operation"
-msgstr ""
-
-#: heat/common/exception.py:125
-#, python-format
-msgid "Missing required credential: %(required)s"
-msgstr ""
-
-#: heat/common/exception.py:129
-#, python-format
-msgid ""
-"Incorrect auth strategy, expected \"%(expected)s\" but received "
-"\"%(received)s\""
-msgstr ""
-
-#: heat/common/exception.py:134
-#, python-format
-msgid "Connect error/bad request to Auth service at URL %(url)s."
-msgstr ""
-
-#: heat/common/exception.py:138
-#, python-format
-msgid "Auth service at URL %(url)s not found."
-msgstr ""
-
-#: heat/common/exception.py:142
-msgid "Authorization failed."
-msgstr ""
-
-#: heat/common/exception.py:146
-msgid "You are not authenticated."
-msgstr ""
-
-#: heat/common/exception.py:150 heat/common/exception.py:155
-msgid "You are not authorized to complete this action."
-msgstr ""
-
-#: heat/common/exception.py:159
-#, python-format
-msgid "Data supplied was not valid: %(reason)s"
-msgstr ""
-
-#: heat/common/exception.py:163
-#, python-format
-msgid "Redirecting to %(uri)s for authorization."
-msgstr ""
-
-#: heat/common/exception.py:167
-msgid "There was an error configuring the client."
-msgstr ""
-
-#: heat/common/exception.py:171
-#, python-format
-msgid ""
-"The request returned a 302 Multiple Choices. This generally means that "
-"you have not included a version indicator in a request URI.\n"
-"\n"
-"The body of response returned:\n"
-"%(body)s"
-msgstr ""
-
-#: heat/common/exception.py:177
-#, python-format
-msgid ""
-"The request returned a 413 Request Entity Too Large. This generally means"
-" that rate limiting or a quota threshold was breached.\n"
-"\n"
-"The response body:\n"
-"%(body)s"
-msgstr ""
-
-#: heat/common/exception.py:188
-msgid ""
-"The request returned a 503 ServiceUnavilable. This generally occurs on "
-"service overload or other transient outage."
-msgstr ""
-
-#: heat/common/exception.py:199
-msgid "The URI was too long."
-msgstr ""
-
-#: heat/common/exception.py:203
-#, python-format
-msgid ""
-"The request returned 500 Internal Server Error\n"
-"\n"
-"The response body:\n"
-"%(body)s"
-msgstr ""
-
-#: heat/common/exception.py:208
-#, python-format
-msgid "Maximum redirects (%(redirects)s) was exceeded."
-msgstr ""
-
-#: heat/common/exception.py:212
-msgid "Received invalid HTTP redirect."
-msgstr ""
-
-#: heat/common/exception.py:216
-msgid "Response from Keystone does not contain a Heat endpoint."
-msgstr ""
-
-#: heat/common/exception.py:220
-#, python-format
-msgid ""
-"Multiple 'image' service matches for region %(region)s. This generally "
-"means that a region is required and you have not supplied one."
-msgstr ""
-
-#: heat/common/exception.py:226
-#, python-format
-msgid "The Parameter (%(key)s) was not provided."
-msgstr ""
-
-#: heat/common/exception.py:230
-#, python-format
-msgid "The Parameter (%(key)s) was not defined in template."
-msgstr ""
-
-#: heat/common/exception.py:234
-#, python-format
-msgid "The Referenced Attribute (%(resource)s %(key)s) is incorrect."
-msgstr ""
-
-#: heat/common/exception.py:239
-#, python-format
-msgid "The specified reference (%(resource)s %(key)s) is incorrect."
-msgstr ""
-
-#: heat/common/exception.py:244
-#, python-format
-msgid "The Key (%(key_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:248
-#, python-format
-msgid "The Flavor ID (%(flavor_id)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:252
-#, python-format
-msgid "The Image (%(image_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:256
-#, python-format
-msgid "Multiple images were found with name (%(image_name)s)."
-msgstr ""
-
-#: heat/common/exception.py:260
-#, python-format
-msgid "Searching Tenant %(target)s from Tenant %(actual)s forbidden."
-msgstr ""
-
-#: heat/common/exception.py:265
-#, python-format
-msgid "The Stack (%(stack_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:269
-#, python-format
-msgid "The Stack (%(stack_name)s) already exists."
-msgstr ""
-
-#: heat/common/exception.py:273
-#, python-format
-msgid "%(message)s"
-msgstr ""
-
-#: heat/common/exception.py:277
-#, python-format
-msgid ""
-"The Resource (%(resource_name)s) could not be found in Stack "
-"%(stack_name)s."
-msgstr ""
-
-#: heat/common/exception.py:282
-#, python-format
-msgid "The Resource Type (%(type_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:286
-#, python-format
-msgid "The Resource (%(resource_name)s) is not available."
-msgstr ""
-
-#: heat/common/exception.py:290
-#, python-format
-msgid "The Resource (%(resource_id)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:294
-#, python-format
-msgid "The Watch Rule (%(watch_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:298
-#, python-format
-msgid "%(exc_type)s: %(message)s"
-msgstr ""
-
-#: heat/common/exception.py:312
-#, python-format
-msgid "%(feature)s is not supported."
-msgstr ""
-
-#: heat/common/exception.py:316
-#, python-format
-msgid "Cannot define the following properties at the same time: %s."
-msgstr ""
-
-#: heat/common/exception.py:334
-msgid "Template exceeds maximum allowed size."
-msgstr ""
-
-#: heat/common/exception.py:338
-msgid ""
-"Egress rules are only allowed when Neutron is used and the 'VpcId' "
-"property is set."
-msgstr ""
-
-#: heat/common/plugin_loader.py:94
-#, python-format
-msgid "Failed to import module %s"
-msgstr ""
-
-#: heat/common/urlfetch.py:37
-#, python-format
-msgid "Fetching data from %s"
-msgstr ""
-
-#: heat/common/wsgi.py:55
-msgid ""
-"Address to bind the server. Useful when selecting a particular network "
-"interface."
-msgstr ""
-
-#: heat/common/wsgi.py:58
-msgid "The port on which the server will listen."
-msgstr ""
-
-#: heat/common/wsgi.py:65
-msgid "Number of backlog requests to configure the socket with"
-msgstr ""
-
-#: heat/common/wsgi.py:68
-msgid "Location of the SSL Certificate File to use for SSL mode"
-msgstr ""
-
-#: heat/common/wsgi.py:71
-msgid "Location of the SSL Key File to use for enabling SSL mode"
-msgstr ""
-
-#: heat/common/wsgi.py:78
-msgid "Number of workers for Heat service"
-msgstr ""
-
-#: heat/common/wsgi.py:129 heat/openstack/common/sslutils.py:61
-msgid ""
-"When running server in SSL mode, you must specify both a cert_file and "
-"key_file option value in your configuration file"
-msgstr ""
-
-#: heat/common/wsgi.py:147
-#, python-format
-msgid "Could not bind to %(bind_addr)s after trying for 30 seconds"
-msgstr ""
-
-#: heat/common/wsgi.py:179
-msgid "SIGTERM received"
-msgstr ""
-
-#: heat/common/wsgi.py:188
-msgid "SIGHUP received"
-msgstr ""
-
-#: heat/common/wsgi.py:203 heat/openstack/common/service.py:288
-#, python-format
-msgid "Starting %d workers"
-msgstr ""
-
-#: heat/common/wsgi.py:214
-#, python-format
-msgid "Removing dead child %s"
-msgstr ""
-
-#: heat/common/wsgi.py:221
-msgid "Caught keyboard interrupt. Exiting."
-msgstr ""
-
-#: heat/common/wsgi.py:225
-msgid "Exited"
-msgstr ""
-
-#: heat/common/wsgi.py:243
-#, python-format
-msgid "Child %d exiting normally"
-msgstr ""
-
-#: heat/common/wsgi.py:246
-#, python-format
-msgid "Started child %s"
-msgstr ""
-
-#: heat/common/wsgi.py:268
-msgid "Starting single process server"
-msgstr ""
-
-#: heat/common/wsgi.py:596
-#, python-format
-msgid "Exception handling resource: %s"
-msgstr ""
-
-#: heat/common/wsgi.py:597
-msgid ""
-"The server could not comply with the request since\r\n"
-"it is either malformed or otherwise incorrect.\r\n"
-msgstr ""
-
-#: heat/common/wsgi.py:611
-#, python-format
-msgid "Returning %(code)s to user: %(explanation)s"
-msgstr ""
-
-#: heat/common/wsgi.py:616
-#, python-format
-msgid "Unexpected error occurred serving API: %s"
-msgstr ""
-
-#: heat/db/sqlalchemy/api.py:393
-msgid "age should be an integer"
-msgstr ""
-
-#: heat/db/sqlalchemy/api.py:395
-msgid "age should be a positive integer"
-msgstr ""
-
-#: heat/db/sqlalchemy/migration.py:63
-msgid "python-migrate is not installed. Exiting."
-msgstr ""
-
-#: heat/db/sqlalchemy/migration.py:73
-msgid "version should be an integer"
-msgstr ""
-
-#: heat/doc/resources.py:85
-msgid "HOT Syntax"
-msgstr ""
-
-#: heat/doc/resources.py:107
-msgid "YAML Syntax"
-msgstr ""
-
-#: heat/doc/resources.py:129
-msgid "JSON Syntax"
-msgstr ""
-
-#: heat/doc/resources.py:161
-#, python-format
-msgid " from %s"
-msgstr ""
-
-#: heat/doc/resources.py:163
-#, python-format
-msgid " up to %s"
-msgstr ""
-
-#: heat/doc/resources.py:165
-#, python-format
-msgid " from length %s"
-msgstr ""
-
-#: heat/doc/resources.py:167
-#, python-format
-msgid " up to length %s"
-msgstr ""
-
-#: heat/doc/resources.py:174
-msgid "Not implemented."
-msgstr ""
-
-#: heat/doc/resources.py:185
-msgid "Required property."
-msgstr ""
-
-#: heat/doc/resources.py:189
-#, python-format
-msgid "Optional property, defaults to \"%s\"."
-msgstr ""
-
-#: heat/doc/resources.py:192
-msgid "Optional property."
-msgstr ""
-
-#: heat/doc/resources.py:196
-#, python-format
-msgid "Value must match pattern: %s"
-msgstr ""
-
-#: heat/doc/resources.py:203
-#, python-format
-msgid "Allowed values: %s"
-msgstr ""
-
-#: heat/doc/resources.py:209
-msgid "Map properties:"
-msgstr ""
-
-#: heat/doc/resources.py:215
-msgid "List contains maps with the properties:"
-msgstr ""
-
-#: heat/doc/resources.py:230
-msgid "Properties"
-msgstr ""
-
-#: heat/doc/resources.py:241
-msgid "Attributes"
-msgstr ""
-
-#: heat/engine/dependencies.py:25
-#, python-format
-msgid "Circular Dependency Found: %(cycle)s"
-msgstr ""
-
-#: heat/engine/environment.py:181
-#, python-format
-msgid "Changing %(path)s from %(was)s to %(now)s"
-msgstr ""
-
-#: heat/engine/environment.py:183
-#, python-format
-msgid "Registering %(path)s -> %(value)s"
-msgstr ""
-
-#: heat/engine/parser.py:74
-#, python-format
-msgid ""
-"Invalid stack name %s must contain only alphanumeric or \"_-.\" "
-"characters, must start with alpha"
-msgstr ""
-
-#: heat/engine/resource.py:55 heat/engine/resource.py:58
-#, python-format
-msgid "The Resource %s requires replacement."
-msgstr ""
-
-#: heat/engine/resource.py:132
-msgid "Resource name may not contain \"/\""
-msgstr ""
-
-#: heat/engine/scheduler.py:65
-#, python-format
-msgid "%s Timed out"
-msgstr ""
-
-#: heat/engine/service.py:311
-msgid "No Template provided."
-msgstr ""
-
-#: heat/engine/stack_resource.py:122
-#, python-format
-msgid "Cannot update %s, stack not created"
-msgstr ""
-
-#: heat/engine/stack_resource.py:151
-#, python-format
-msgid "Cannot suspend %s, stack not created"
-msgstr ""
-
-#: heat/engine/stack_resource.py:173
-#, python-format
-msgid "Cannot resume %s, stack not created"
-msgstr ""
-
-#: heat/engine/resources/__init__.py:39
-#, python-format
-msgid "Failed to load resources from %s"
-msgstr ""
-
-#: heat/engine/resources/instance.py:314
-#, python-format
-msgid "Build of server %(server)s failed: %(message)s (%(code)s)"
-msgstr ""
-
-#: heat/engine/resources/instance.py:321
-#, python-format
-msgid "Nova reported unexpected instance[%(name)s] status[%(status)s]"
-msgstr ""
-
-#: heat/engine/resources/instance.py:418
-#, python-format
-msgid "Cannot suspend %s, resource_id not set"
-msgstr ""
-
-#: heat/engine/resources/instance.py:424 heat/engine/resources/instance.py:478
-#, python-format
-msgid "Failed to find instance %s"
-msgstr ""
-
-#: heat/engine/resources/instance.py:455
-#, python-format
-msgid " nova reported unexpected instance[%(instance)s] status[%(status)s]"
-msgstr ""
-
-#: heat/engine/resources/instance.py:472
-#, python-format
-msgid "Cannot resume %s, resource_id not set"
-msgstr ""
-
-#: heat/engine/resources/nova_utils.py:221
-#, python-format
-msgid "Resizing to '%(flavor)s' failed, status '%(status)s'"
-msgstr ""
-
-#: heat/engine/resources/vpc.py:72
-#, python-format
-msgid "Multiple routers found with name %s"
-msgstr ""
-
-#: heat/engine/resources/neutron/loadbalancer.py:251
-msgid "The ID of the load balancing pool"
-msgstr ""
-
-#: heat/engine/resources/neutron/loadbalancer.py:254
-msgid "Port number on which the servers are running on the members"
-msgstr ""
-
-#: heat/engine/resources/neutron/loadbalancer.py:258
-msgid "The list of Nova server IDs load balanced"
-msgstr ""
-
-#: heat/engine/resources/rackspace/cloud_loadbalancer.py:33
-#, python-format
-msgid "There was an error building the loadbalancer:%(lb_name)s."
-msgstr ""
-
-#: heat/openstack/common/eventlet_backdoor.py:142
-#, python-format
-msgid "Eventlet backdoor listening on %(port)s for process %(pid)d"
-msgstr ""
-
-#: heat/openstack/common/exception.py:102
-msgid "Uncaught exception"
-msgstr ""
-
-#: heat/openstack/common/excutils.py:62
-#, python-format
-msgid "Original exception being dropped: %s"
-msgstr ""
-
-#: heat/openstack/common/excutils.py:90
-#, python-format
-msgid "Unexpected exception occurred %d time(s)... retrying."
-msgstr ""
-
-#: heat/openstack/common/fileutils.py:64
-#, python-format
-msgid "Reloading cached file %s"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:100
-#, python-format
-msgid "Could not release the acquired lock `%s`"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:166
-#, python-format
-msgid "Got semaphore \"%(lock)s\""
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:175
-#, python-format
-msgid "Attempting to grab file lock \"%(lock)s\""
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:185
-#, python-format
-msgid "Created lock path: %s"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:203
-#, python-format
-msgid "Got file lock \"%(lock)s\" at %(path)s"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:207
-#, python-format
-msgid "Released file lock \"%(lock)s\" at %(path)s"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:244
-#, python-format
-msgid "Got semaphore / lock \"%(function)s\""
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:248
-#, python-format
-msgid "Semaphore / lock released \"%(function)s\""
-msgstr ""
-
-#: heat/openstack/common/log.py:244
-#, python-format
-msgid "Deprecated: %s"
-msgstr ""
-
-#: heat/openstack/common/log.py:336
-#, python-format
-msgid "Error loading logging config %(log_config)s: %(err_msg)s"
-msgstr ""
-
-#: heat/openstack/common/log.py:386
-#, python-format
-msgid "syslog facility must be one of: %s"
-msgstr ""
-
-#: heat/openstack/common/log.py:556
-#, python-format
-msgid "Fatal call to deprecated config: %(msg)s"
-msgstr ""
-
-#: heat/openstack/common/loopingcall.py:84
-#, python-format
-msgid "task run outlasted interval by %s sec"
-msgstr ""
-
-#: heat/openstack/common/loopingcall.py:91
-msgid "in fixed duration looping call"
-msgstr ""
-
-#: heat/openstack/common/loopingcall.py:131
-#, python-format
-msgid "Dynamic looping call sleeping for %.02f seconds"
-msgstr ""
-
-#: heat/openstack/common/loopingcall.py:138
-msgid "in dynamic looping call"
-msgstr ""
-
-#: heat/openstack/common/policy.py:75
-msgid "JSON file containing policy"
-msgstr ""
-
-#: heat/openstack/common/policy.py:78
-msgid "Rule enforced when requested rule is not found"
-msgstr ""
-
-#: heat/openstack/common/policy.py:92
-#, python-format
-msgid "Policy doesn't allow %s to be performed."
-msgstr ""
-
-#: heat/openstack/common/policy.py:177
-#, python-format
-msgid "Rules must be an instance of dict or Rules, got %s instead"
-msgstr ""
-
-#: heat/openstack/common/policy.py:207
-msgid "Rules successfully reloaded"
-msgstr ""
-
-#: heat/openstack/common/policy.py:253
-#, python-format
-msgid "Rule %s will be now enforced"
-msgstr ""
-
-#: heat/openstack/common/policy.py:268
-#, python-format
-msgid "Rule [%s] doesn't exist"
-msgstr ""
-
-#: heat/openstack/common/policy.py:478
-#, python-format
-msgid "Failed to understand rule %s"
-msgstr ""
-
-#: heat/openstack/common/policy.py:488
-#, python-format
-msgid "No handler for matches of kind %s"
-msgstr ""
-
-#: heat/openstack/common/policy.py:759
-#, python-format
-msgid "Failed to understand rule %r"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:127
-#, python-format
-msgid "Got unknown keyword args to utils.execute: %r"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:142
-#, python-format
-msgid "Running cmd (subprocess): %s"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:167
-#: heat/openstack/common/processutils.py:240
-#, python-format
-msgid "Result was %s"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:179
-#, python-format
-msgid "%r failed. Retrying."
-msgstr ""
-
-#: heat/openstack/common/processutils.py:219
-#, python-format
-msgid "Running cmd (SSH): %s"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:221
-msgid "Environment not supported over SSH"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:225
-msgid "process_input not supported over SSH"
-msgstr ""
-
-#: heat/openstack/common/service.py:118 heat/openstack/common/service.py:335
-msgid "Full set of CONF:"
-msgstr ""
-
-#: heat/openstack/common/service.py:127 heat/openstack/common/service.py:217
-#, python-format
-msgid "Caught %s, exiting"
-msgstr ""
-
-#: heat/openstack/common/service.py:139
-msgid "Exception during rpc cleanup."
-msgstr ""
-
-#: heat/openstack/common/service.py:188
-msgid "Parent process has died unexpectedly, exiting"
-msgstr ""
-
-#: heat/openstack/common/service.py:223
-msgid "Unhandled exception"
-msgstr ""
-
-#: heat/openstack/common/service.py:256
-msgid "Forking too fast, sleeping"
-msgstr ""
-
-#: heat/openstack/common/service.py:278
-#, python-format
-msgid "Started child %d"
-msgstr ""
-
-#: heat/openstack/common/service.py:305
-#, python-format
-msgid "Child %(pid)d killed by signal %(sig)d"
-msgstr ""
-
-#: heat/openstack/common/service.py:309
-#, python-format
-msgid "Child %(pid)s exited with status %(code)d"
-msgstr ""
-
-#: heat/openstack/common/service.py:313
-#, python-format
-msgid "pid %d not in child list"
-msgstr ""
-
-#: heat/openstack/common/service.py:345
-#, python-format
-msgid "Caught %s, stopping children"
-msgstr ""
-
-#: heat/openstack/common/service.py:363
-#, python-format
-msgid "Waiting on %d children to exit"
-msgstr ""
-
-#: heat/openstack/common/sslutils.py:52
-#, python-format
-msgid "Unable to find cert_file : %s"
-msgstr ""
-
-#: heat/openstack/common/sslutils.py:55
-#, python-format
-msgid "Unable to find ca_file : %s"
-msgstr ""
-
-#: heat/openstack/common/sslutils.py:58
-#, python-format
-msgid "Unable to find key_file : %s"
-msgstr ""
-
-#: heat/openstack/common/sslutils.py:100
-#, python-format
-msgid "Invalid SSL version : %s"
-msgstr ""
-
-#: heat/openstack/common/db/exception.py:44
-msgid "Invalid Parameter: Unicode is not supported by the current database."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/session.py:552
-msgid "DB exception wrapped."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/session.py:614
-#, python-format
-msgid "Got mysql server has gone away: %s"
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/session.py:694
-#, python-format
-msgid "SQL connection failed. %s attempts left."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:50
-msgid "Sort key supplied was not valid."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:89
-msgid "Id not in sort_keys; is sort_keys unique?"
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:111
-msgid "Unknown sort direction, must be 'desc' or 'asc'"
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:182
-#, python-format
-msgid ""
-"Please specify column %s in col_name_col_instance param. It is required "
-"because column has unsupported type by sqlite)."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:188
-#, python-format
-msgid ""
-"col_name_col_instance param has wrong type of column instance for column "
-"%s It should be instance of sqlalchemy.Column."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:268
-#, python-format
-msgid "Deleting duplicated row with id: %(id)s from table: %(table)s"
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:290
-msgid "Unsupported id columns type"
-msgstr ""
-
-#: heat/openstack/common/notifier/api.py:129
-#, python-format
-msgid "%s not in valid priorities"
-msgstr ""
-
-#: heat/openstack/common/notifier/api.py:145
-#, python-format
-msgid ""
-"Problem '%(e)s' attempting to send to notification system. "
-"Payload=%(payload)s"
-msgstr ""
-
-#: heat/openstack/common/notifier/api.py:164
-#, python-format
-msgid "Failed to load notifier %s. These notifications will not be sent."
-msgstr ""
-
-#: heat/openstack/common/notifier/list_notifier.py:112
-#, python-format
-msgid "Problem '%(e)s' attempting to send to notification driver %(driver)s."
-msgstr ""
-
-#: heat/openstack/common/notifier/rabbit_notifier.py:27
-msgid "The rabbit_notifier is now deprecated. Please use rpc_notifier instead."
-msgstr ""
-
-#: heat/openstack/common/notifier/rpc_notifier.py:45
-#: heat/openstack/common/notifier/rpc_notifier2.py:51
-#, python-format
-msgid "Could not send notification to %(topic)s. Payload=%(message)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/__init__.py:107
-#, python-format
-msgid ""
-"A RPC is being made while holding a lock. The locks currently held are "
-"%(locks)s. This is probably a bug. Please report it. Include the "
-"following: [%(stack)s]."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:75
-msgid "Pool creating new connection"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:202
-#, python-format
-msgid "No calling threads waiting for msg_id : %(msg_id)s, message : %(data)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:205
-#, python-format
-msgid "_call_waiters: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:212
-#, python-format
-msgid ""
-"Number of call waiters is greater than warning threshold: %d. There "
-"could be a MulticallProxyWaiter leak."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:290
-#, python-format
-msgid "unpacked context: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:336
-#, python-format
-msgid "UNIQUE_ID is %s."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:408
-#, python-format
-msgid "received %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:416
-#, python-format
-msgid "no method for message: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:417
-#, python-format
-msgid "No method for message: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:445
-#: heat/openstack/common/rpc/impl_zmq.py:280
-#, python-format
-msgid "Expected exception during message handling (%s)"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:453
-#: heat/openstack/common/rpc/impl_zmq.py:286
-msgid "Exception during message handling"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:527
-#, python-format
-msgid "Making synchronous call on %s ..."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:530
-#, python-format
-msgid "MSG_ID is %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:556
-#, python-format
-msgid "Making asynchronous cast on %s..."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:565
-msgid "Making asynchronous fanout cast..."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:593
-#, python-format
-msgid "Sending %(event_type)s on %(topic)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:77
-msgid "An unknown RPC related exception occurred."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:107
-#, python-format
-msgid ""
-"Remote error: %(exc_type)s %(value)s\n"
-"%(traceback)s."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:124
-#, python-format
-msgid ""
-"Timeout while waiting on RPC response - topic: \"%(topic)s\", RPC method:"
-" \"%(method)s\" info: \"%(info)s\""
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:141
-#: heat/openstack/common/rpc/common.py:142
-#: heat/openstack/common/rpc/common.py:143
-msgid "<unknown>"
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:147
-#, python-format
-msgid "Found duplicate message(%(msg_id)s). Skipping it."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:151
-msgid "Invalid reuse of an RPC connection."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:155
-#, python-format
-msgid "Specified RPC version, %(version)s, not supported by this endpoint."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:160
-#, python-format
-msgid ""
-"Specified RPC envelope version, %(version)s, not supported by this "
-"endpoint."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:165
-#, python-format
-msgid "Specified RPC version cap, %(version_cap)s, is too low"
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:289
-#, python-format
-msgid "Returning exception %s to caller"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:162
-msgid "Failed to process message ... skipping it."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:165
-msgid "Failed to process message ... will requeue."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:502
-#, python-format
-msgid "Reconnecting to AMQP server on %(hostname)s:%(port)d"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:524
-#, python-format
-msgid "Connected to AMQP server on %(hostname)s:%(port)d"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:561
-#, python-format
-msgid ""
-"Unable to connect to AMQP server on %(hostname)s:%(port)d after "
-"%(max_retries)d tries: %(err_str)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:575
-#, python-format
-msgid ""
-"AMQP server on %(hostname)s:%(port)d is unreachable: %(err_str)s. Trying "
-"again in %(sleep_time)d seconds."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:629
-#: heat/openstack/common/rpc/impl_qpid.py:492
-#, python-format
-msgid "Failed to declare consumer for topic '%(topic)s': %(err_str)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:647
-#: heat/openstack/common/rpc/impl_qpid.py:507
-#, python-format
-msgid "Timed out waiting for RPC response: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:651
-#: heat/openstack/common/rpc/impl_qpid.py:511
-#, python-format
-msgid "Failed to consume message from queue: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:690
-#: heat/openstack/common/rpc/impl_qpid.py:546
-#, python-format
-msgid "Failed to publish message to topic '%(topic)s': %(err_str)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:161
-msgid "Failed to process message... skipping it."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:434
-#, python-format
-msgid "Unable to connect to AMQP server: %(e)s. Sleeping %(delay)s seconds"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:440
-#, python-format
-msgid "Connected to AMQP server on %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:453
-msgid "Re-established AMQP queues"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:519
-msgid "Error processing message. Skipping it."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:96
-msgid "JSON serialization failed."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:101
-#, python-format
-msgid "Deserializing: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:136
-#, python-format
-msgid "Connecting to %(addr)s with %(type)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:137
-#, python-format
-msgid "-> Subscribed to %(subscribe)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:138
-#, python-format
-msgid "-> bind: %(bind)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:146
-msgid "Could not open socket."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:158
-#, python-format
-msgid "Subscribing to %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:200
-msgid "You cannot recv on this socket."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:205
-msgid "You cannot send on this socket."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:267
-#, python-format
-msgid "Running func with context: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:305
-msgid "Sending reply"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:339
-msgid "RPC message did not include method."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:371
-msgid "Registering reactor"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:383
-msgid "In reactor registered"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:387
-msgid "Consuming socket"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:437
-#, python-format
-msgid "Creating proxy for topic: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:443
-msgid "Topic contained dangerous characters."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:475
-msgid "Topic socket file creation failed."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:481
-#, python-format
-msgid "Local per-topic backlog buffer full for topic %(topic)s. Dropping message."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:497
-#, python-format
-msgid "Required IPC directory does not exist at %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:506
-#, python-format
-msgid "Permission denied to IPC directory at %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:509
-msgid "Could not create ZeroMQ receiver daemon. Socket may already be in use."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:543
-#, python-format
-msgid "CONSUMER RECEIVED DATA: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:562
-msgid "ZMQ Envelope version unsupported or unknown."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:590
-msgid "Skipping topic registration. Already registered."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:597
-#, python-format
-msgid "Consumer is a zmq.%s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:649
-msgid "Creating payload"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:662
-msgid "Creating queue socket for reply waiter"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:675
-msgid "Sending cast"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:678
-msgid "Cast sent; Waiting reply"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:681
-#, python-format
-msgid "Received message: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:682
-msgid "Unpacking response"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:691
-msgid "Unsupported or unknown ZMQ envelope returned."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:698
-msgid "RPC Message Invalid."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:721
-#, python-format
-msgid "%(msg)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:724
-#, python-format
-msgid "Sending message(s) to: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:728
-msgid "No matchmaker results. Not casting."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:731
-msgid "No match from matchmaker."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:813
-#, python-format
-msgid "rpc_zmq_matchmaker = %(orig)s is deprecated; use %(new)s instead"
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker.py:47
-msgid "Match not found by MatchMaker."
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker.py:81
-msgid "Matchmaker does not implement registration or heartbeat."
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker.py:217
-#, python-format
-msgid "Matchmaker unregistered: %(key)s, %(host)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker.py:229
-msgid "Register before starting heartbeat."
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker_ring.py:77
-#: heat/openstack/common/rpc/matchmaker_ring.py:95
-#, python-format
-msgid "No key defining hosts for topic '%s', see ringfile"
-msgstr ""
-
-#: heat/openstack/common/rpc/service.py:49
-#, python-format
-msgid "Creating Consumer connection for Service %s"
-msgstr ""
-
-#: heat/tests/test_exception.py:24
-#, python-format
-msgid "Testing message %(text)s"
-msgstr ""
-
+++ /dev/null
-# Translations template for heat.
-# Copyright (C) 2013 ORGANIZATION
-# This file is distributed under the same license as the heat project.
-# FIRST AUTHOR <EMAIL@ADDRESS>, 2013.
-#
-#, fuzzy
-msgid ""
-msgstr ""
-"Project-Id-Version: heat 2013.2.a430.gd264018\n"
-"Report-Msgid-Bugs-To: EMAIL@ADDRESS\n"
-"POT-Creation-Date: 2013-08-27 11:15+0800\n"
-"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
-"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
-"Language-Team: LANGUAGE <LL@li.org>\n"
-"MIME-Version: 1.0\n"
-"Content-Type: text/plain; charset=utf-8\n"
-"Content-Transfer-Encoding: 8bit\n"
-"Generated-By: Babel 0.9.6\n"
-
-#: heat/api/aws/ec2token.py:38
-msgid "Authentication Endpoint URI"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:260
-#, python-format
-msgid "Failed to fetch template: %s"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:293
-msgid "DisableRollback and OnFailure may not be used together"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:313
-#, python-format
-msgid "Unexpected action %(action)s"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:331 heat/api/cfn/v1/stacks.py:413
-msgid "Invalid Template URL"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:335 heat/api/cfn/v1/stacks.py:416
-msgid "TemplateBody or TemplateUrl were not given."
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:341 heat/api/cfn/v1/stacks.py:422
-msgid "The Template must be a JSON or YAML document."
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:383
-msgid "stack not found"
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:49
-#, python-format
-msgid "Processing request: %(method)s %(path)s Accept: %(accept)s"
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:64
-#, python-format
-msgid "Matched versioned URI. Version: %(major_version)d.%(minor_version)d"
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:72
-#, python-format
-msgid ""
-"Unknown version in versioned URI: %(major_version)d.%(minor_version)d. "
-"Returning version choices."
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:88
-#, python-format
-msgid "Matched versioned media type. Version: %(major_version)d.%(minor_version)d"
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:94
-#, python-format
-msgid ""
-"Unknown version in accept header: "
-"%(major_version)d.%(minor_version)d...returning version choices."
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:102
-#, python-format
-msgid "Unknown accept header: %s...returning version choices."
-msgstr ""
-
-#: heat/api/openstack/v1/actions.py:43
-msgid "No action specified"
-msgstr ""
-
-#: heat/api/openstack/v1/actions.py:46
-msgid "Multiple actions specified"
-msgstr ""
-
-#: heat/api/openstack/v1/actions.py:50
-#, python-format
-msgid "Invalid action %s specified"
-msgstr ""
-
-#: heat/api/openstack/v1/actions.py:57
-#, python-format
-msgid "Unexpected action %s"
-msgstr ""
-
-#: heat/api/openstack/v1/events.py:100
-#, python-format
-msgid "No events found for resource %s"
-msgstr ""
-
-#: heat/api/openstack/v1/events.py:118
-#, python-format
-msgid "No event %s found"
-msgstr ""
-
-#: heat/api/openstack/v1/stacks.py:75
-#, python-format
-msgid "%s not in valid format"
-msgstr ""
-
-#: heat/api/openstack/v1/stacks.py:83
-msgid "No stack name specified"
-msgstr ""
-
-#: heat/api/openstack/v1/stacks.py:101
-#, python-format
-msgid "Could not retrieve template: %s"
-msgstr ""
-
-#: heat/api/openstack/v1/stacks.py:104
-msgid "No template specified"
-msgstr ""
-
-#: heat/api/openstack/v1/util.py:56
-msgid "Invalid Stack address"
-msgstr ""
-
-#: heat/common/auth_password.py:116
-msgid "Request missing required header X-Auth-Url"
-msgstr ""
-
-#: heat/common/auth_password.py:121
-#, python-format
-msgid "Header X-Auth-Url \"%s\" not an allowed endpoint"
-msgstr ""
-
-#: heat/common/config.py:37
-msgid "The flavor to use"
-msgstr ""
-
-#: heat/common/config.py:39
-msgid "The API paste config file to use"
-msgstr ""
-
-#: heat/common/config.py:102
-msgid "Allow orchestration of multiple clouds"
-msgstr ""
-
-#: heat/common/config.py:105
-msgid ""
-"Allowed keystone endpoints for auth_uri when multi_cloud is enabled. At "
-"least one endpoint needs to be specified."
-msgstr ""
-
-#: heat/common/exception.py:102
-msgid "An unknown exception occurred."
-msgstr ""
-
-#: heat/common/exception.py:113 heat/openstack/common/rpc/common.py:89
-msgid "Exception in string format operation"
-msgstr ""
-
-#: heat/common/exception.py:125
-#, python-format
-msgid "Missing required credential: %(required)s"
-msgstr ""
-
-#: heat/common/exception.py:129
-#, python-format
-msgid ""
-"Incorrect auth strategy, expected \"%(expected)s\" but received "
-"\"%(received)s\""
-msgstr ""
-
-#: heat/common/exception.py:134
-#, python-format
-msgid "Connect error/bad request to Auth service at URL %(url)s."
-msgstr ""
-
-#: heat/common/exception.py:138
-#, python-format
-msgid "Auth service at URL %(url)s not found."
-msgstr ""
-
-#: heat/common/exception.py:142
-msgid "Authorization failed."
-msgstr ""
-
-#: heat/common/exception.py:146
-msgid "You are not authenticated."
-msgstr ""
-
-#: heat/common/exception.py:150 heat/common/exception.py:155
-msgid "You are not authorized to complete this action."
-msgstr ""
-
-#: heat/common/exception.py:159
-#, python-format
-msgid "Data supplied was not valid: %(reason)s"
-msgstr ""
-
-#: heat/common/exception.py:163
-#, python-format
-msgid "Redirecting to %(uri)s for authorization."
-msgstr ""
-
-#: heat/common/exception.py:167
-msgid "There was an error configuring the client."
-msgstr ""
-
-#: heat/common/exception.py:171
-#, python-format
-msgid ""
-"The request returned a 300 Multiple Choices. This generally means that "
-"you have not included a version indicator in a request URI.\n"
-"\n"
-"The body of response returned:\n"
-"%(body)s"
-msgstr ""
-
-#: heat/common/exception.py:177
-#, python-format
-msgid ""
-"The request returned a 413 Request Entity Too Large. This generally means"
-" that rate limiting or a quota threshold was breached.\n"
-"\n"
-"The response body:\n"
-"%(body)s"
-msgstr ""
-
-#: heat/common/exception.py:188
-msgid ""
-"The request returned a 503 ServiceUnavilable. This generally occurs on "
-"service overload or other transient outage."
-msgstr ""
-
-#: heat/common/exception.py:199
-msgid "The URI was too long."
-msgstr ""
-
-#: heat/common/exception.py:203
-#, python-format
-msgid ""
-"The request returned 500 Internal Server Error\n"
-"\n"
-"The response body:\n"
-"%(body)s"
-msgstr ""
-
-#: heat/common/exception.py:208
-#, python-format
-msgid "Maximum redirects (%(redirects)s) was exceeded."
-msgstr ""
-
-#: heat/common/exception.py:212
-msgid "Received invalid HTTP redirect."
-msgstr ""
-
-#: heat/common/exception.py:216
-msgid "Response from Keystone does not contain a Heat endpoint."
-msgstr ""
-
-#: heat/common/exception.py:220
-#, python-format
-msgid ""
-"Multiple 'image' service matches for region %(region)s. This generally "
-"means that a region is required and you have not supplied one."
-msgstr ""
-
-#: heat/common/exception.py:226
-#, python-format
-msgid "The Parameter (%(key)s) was not provided."
-msgstr ""
-
-#: heat/common/exception.py:230
-#, python-format
-msgid "The Parameter (%(key)s) was not defined in template."
-msgstr ""
-
-#: heat/common/exception.py:234
-#, python-format
-msgid "The Referenced Attribute (%(resource)s %(key)s) is incorrect."
-msgstr ""
-
-#: heat/common/exception.py:239
-#, python-format
-msgid "The specified reference (%(resource)s %(key)s) is incorrect."
-msgstr ""
-
-#: heat/common/exception.py:244
-#, python-format
-msgid "The Key (%(key_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:248
-#, python-format
-msgid "The Flavor ID (%(flavor_id)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:252
-#, python-format
-msgid "The Image (%(image_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:256
-#, python-format
-msgid "Multiple images were found with name (%(image_name)s)."
-msgstr ""
-
-#: heat/common/exception.py:260
-#, python-format
-msgid "Searching Tenant %(target)s from Tenant %(actual)s forbidden."
-msgstr ""
-
-#: heat/common/exception.py:265
-#, python-format
-msgid "The Stack (%(stack_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:269
-#, python-format
-msgid "The Stack (%(stack_name)s) already exists."
-msgstr ""
-
-#: heat/common/exception.py:273
-#, python-format
-msgid "%(message)s"
-msgstr ""
-
-#: heat/common/exception.py:277
-#, python-format
-msgid ""
-"The Resource (%(resource_name)s) could not be found in Stack "
-"%(stack_name)s."
-msgstr ""
-
-#: heat/common/exception.py:282
-#, python-format
-msgid "The Resource Type (%(type_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:286
-#, python-format
-msgid "The Resource (%(resource_name)s) is not available."
-msgstr ""
-
-#: heat/common/exception.py:290
-#, python-format
-msgid "The Resource (%(resource_id)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:294
-#, python-format
-msgid "The Watch Rule (%(watch_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:298
-#, python-format
-msgid "%(exc_type)s: %(message)s"
-msgstr ""
-
-#: heat/common/exception.py:312
-#, python-format
-msgid "%(feature)s is not supported."
-msgstr ""
-
-#: heat/common/exception.py:316
-#, python-format
-msgid "Cannot define the following properties at the same time: %s."
-msgstr ""
-
-#: heat/common/exception.py:334
-msgid "Template exceeds maximum allowed size."
-msgstr ""
-
-#: heat/common/exception.py:338
-msgid ""
-"Egress rules are only allowed when Neutron is used and the 'VpcId' "
-"property is set."
-msgstr ""
-
-#: heat/common/plugin_loader.py:94
-#, python-format
-msgid "Failed to import module %s"
-msgstr ""
-
-#: heat/common/urlfetch.py:37
-#, python-format
-msgid "Fetching data from %s"
-msgstr ""
-
-#: heat/common/wsgi.py:55
-msgid ""
-"Address to bind the server. Useful when selecting a particular network "
-"interface."
-msgstr ""
-
-#: heat/common/wsgi.py:58
-msgid "The port on which the server will listen."
-msgstr ""
-
-#: heat/common/wsgi.py:65
-msgid "Number of backlog requests to configure the socket with"
-msgstr ""
-
-#: heat/common/wsgi.py:68
-msgid "Location of the SSL Certificate File to use for SSL mode"
-msgstr ""
-
-#: heat/common/wsgi.py:71
-msgid "Location of the SSL Key File to use for enabling SSL mode"
-msgstr ""
-
-#: heat/common/wsgi.py:78
-msgid "Number of workers for Heat service"
-msgstr ""
-
-#: heat/common/wsgi.py:129 heat/openstack/common/sslutils.py:61
-msgid ""
-"When running server in SSL mode, you must specify both a cert_file and "
-"key_file option value in your configuration file"
-msgstr ""
-
-#: heat/common/wsgi.py:147
-#, python-format
-msgid "Could not bind to %(bind_addr)safter trying for 30 seconds"
-msgstr ""
-
-#: heat/common/wsgi.py:179
-msgid "SIGTERM received"
-msgstr ""
-
-#: heat/common/wsgi.py:188
-msgid "SIGHUP received"
-msgstr ""
-
-#: heat/common/wsgi.py:203 heat/openstack/common/service.py:288
-#, python-format
-msgid "Starting %d workers"
-msgstr ""
-
-#: heat/common/wsgi.py:214
-#, python-format
-msgid "Removing dead child %s"
-msgstr ""
-
-#: heat/common/wsgi.py:221
-msgid "Caught keyboard interrupt. Exiting."
-msgstr ""
-
-#: heat/common/wsgi.py:225
-msgid "Exited"
-msgstr ""
-
-#: heat/common/wsgi.py:243
-#, python-format
-msgid "Child %d exiting normally"
-msgstr ""
-
-#: heat/common/wsgi.py:246
-#, python-format
-msgid "Started child %s"
-msgstr ""
-
-#: heat/common/wsgi.py:268
-msgid "Starting single process server"
-msgstr ""
-
-#: heat/common/wsgi.py:596
-#, python-format
-msgid "Exception handling resource: %s"
-msgstr ""
-
-#: heat/common/wsgi.py:597
-msgid ""
-"The server could not comply with the request since\r\n"
-"it is either malformed or otherwise incorrect.\r\n"
-msgstr ""
-
-#: heat/common/wsgi.py:611
-#, python-format
-msgid "Returning %(code)s to user: %(explanation)s"
-msgstr ""
-
-#: heat/common/wsgi.py:616
-#, python-format
-msgid "Unexpected error occurred serving API: %s"
-msgstr ""
-
-#: heat/db/sqlalchemy/api.py:393
-msgid "age should be an integer"
-msgstr ""
-
-#: heat/db/sqlalchemy/api.py:395
-msgid "age should be a positive integer"
-msgstr ""
-
-#: heat/db/sqlalchemy/migration.py:63
-msgid "python-migrate is not installed. Exiting."
-msgstr ""
-
-#: heat/db/sqlalchemy/migration.py:73
-msgid "version should be an integer"
-msgstr ""
-
-#: heat/doc/resources.py:85
-msgid "HOT Syntax"
-msgstr ""
-
-#: heat/doc/resources.py:107
-msgid "YAML Syntax"
-msgstr ""
-
-#: heat/doc/resources.py:129
-msgid "JSON Syntax"
-msgstr ""
-
-#: heat/doc/resources.py:161
-#, python-format
-msgid " from %s"
-msgstr ""
-
-#: heat/doc/resources.py:163
-#, python-format
-msgid " up to %s"
-msgstr ""
-
-#: heat/doc/resources.py:165
-#, python-format
-msgid " from length %s"
-msgstr ""
-
-#: heat/doc/resources.py:167
-#, python-format
-msgid " up to length %s"
-msgstr ""
-
-#: heat/doc/resources.py:174
-msgid "Not implemented."
-msgstr ""
-
-#: heat/doc/resources.py:185
-msgid "Required property."
-msgstr ""
-
-#: heat/doc/resources.py:189
-#, python-format
-msgid "Optional property, defaults to \"%s\"."
-msgstr ""
-
-#: heat/doc/resources.py:192
-msgid "Optional property."
-msgstr ""
-
-#: heat/doc/resources.py:196
-#, python-format
-msgid "Value must match pattern: %s"
-msgstr ""
-
-#: heat/doc/resources.py:203
-#, python-format
-msgid "Allowed values: %s"
-msgstr ""
-
-#: heat/doc/resources.py:209
-msgid "Map properties:"
-msgstr ""
-
-#: heat/doc/resources.py:215
-msgid "List contains maps with the properties:"
-msgstr ""
-
-#: heat/doc/resources.py:230
-msgid "Properties"
-msgstr ""
-
-#: heat/doc/resources.py:241
-msgid "Attributes"
-msgstr ""
-
-#: heat/engine/dependencies.py:25
-#, python-format
-msgid "Circular Dependency Found: %(cycle)s"
-msgstr ""
-
-#: heat/engine/environment.py:181
-#, python-format
-msgid "Changing %(path)s from %(was)s to %(now)s"
-msgstr ""
-
-#: heat/engine/environment.py:183
-#, python-format
-msgid "Registering %(path)s -> %(value)s"
-msgstr ""
-
-#: heat/engine/parser.py:74
-#, python-format
-msgid ""
-"Invalid stack name %s must contain only alphanumeric or \"_-.\" "
-"characters, must start with alpha"
-msgstr ""
-
-#: heat/engine/resource.py:55 heat/engine/resource.py:58
-#, python-format
-msgid "The Resource %s requires replacement."
-msgstr ""
-
-#: heat/engine/resource.py:132
-msgid "Resource name may not contain \"/\""
-msgstr ""
-
-#: heat/engine/scheduler.py:65
-#, python-format
-msgid "%s Timed out"
-msgstr ""
-
-#: heat/engine/service.py:311
-msgid "No Template provided."
-msgstr ""
-
-#: heat/engine/stack_resource.py:122
-#, python-format
-msgid "Cannot update %s, stack not created"
-msgstr ""
-
-#: heat/engine/stack_resource.py:151
-#, python-format
-msgid "Cannot suspend %s, stack not created"
-msgstr ""
-
-#: heat/engine/stack_resource.py:173
-#, python-format
-msgid "Cannot resume %s, stack not created"
-msgstr ""
-
-#: heat/engine/resources/__init__.py:39
-#, python-format
-msgid "Failed to load resources from %s"
-msgstr ""
-
-#: heat/engine/resources/instance.py:314
-#, python-format
-msgid "Build of server %(server)s failed: %(message)s (%(code)s)"
-msgstr ""
-
-#: heat/engine/resources/instance.py:321
-#, python-format
-msgid "Nova reported unexpected instance[%(name)s] status[%(status)s]"
-msgstr ""
-
-#: heat/engine/resources/instance.py:418
-#, python-format
-msgid "Cannot suspend %s, resource_id not set"
-msgstr ""
-
-#: heat/engine/resources/instance.py:424 heat/engine/resources/instance.py:478
-#, python-format
-msgid "Failed to find instance %s"
-msgstr ""
-
-#: heat/engine/resources/instance.py:455
-#, python-format
-msgid " nova reported unexpected instance[%(instance)s] status[%(status)s]"
-msgstr ""
-
-#: heat/engine/resources/instance.py:472
-#, python-format
-msgid "Cannot resume %s, resource_id not set"
-msgstr ""
-
-#: heat/engine/resources/nova_utils.py:221
-#, python-format
-msgid "Resizing to '%(flavor)s' failed, status '%(status)s'"
-msgstr ""
-
-#: heat/engine/resources/vpc.py:72
-#, python-format
-msgid "Multiple routers found with name %s"
-msgstr ""
-
-#: heat/engine/resources/neutron/loadbalancer.py:251
-msgid "The ID of the load balancing pool"
-msgstr ""
-
-#: heat/engine/resources/neutron/loadbalancer.py:254
-msgid "Port number on which the servers are running on the members"
-msgstr ""
-
-#: heat/engine/resources/neutron/loadbalancer.py:258
-msgid "The list of Nova server IDs load balanced"
-msgstr ""
-
-#: heat/engine/resources/rackspace/cloud_loadbalancer.py:33
-#, python-format
-msgid "There was an error building the loadbalancer:%(lb_name)s."
-msgstr ""
-
-#: heat/openstack/common/eventlet_backdoor.py:142
-#, python-format
-msgid "Eventlet backdoor listening on %(port)s for process %(pid)d"
-msgstr ""
-
-#: heat/openstack/common/exception.py:102
-msgid "Uncaught exception"
-msgstr ""
-
-#: heat/openstack/common/excutils.py:62
-#, python-format
-msgid "Original exception being dropped: %s"
-msgstr ""
-
-#: heat/openstack/common/excutils.py:90
-#, python-format
-msgid "Unexpected exception occurred %d time(s)... retrying."
-msgstr ""
-
-#: heat/openstack/common/fileutils.py:64
-#, python-format
-msgid "Reloading cached file %s"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:100
-#, python-format
-msgid "Could not release the acquired lock `%s`"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:166
-#, python-format
-msgid "Got semaphore \"%(lock)s\""
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:175
-#, python-format
-msgid "Attempting to grab file lock \"%(lock)s\""
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:185
-#, python-format
-msgid "Created lock path: %s"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:203
-#, python-format
-msgid "Got file lock \"%(lock)s\" at %(path)s"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:207
-#, python-format
-msgid "Released file lock \"%(lock)s\" at %(path)s"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:244
-#, python-format
-msgid "Got semaphore / lock \"%(function)s\""
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:248
-#, python-format
-msgid "Semaphore / lock released \"%(function)s\""
-msgstr ""
-
-#: heat/openstack/common/log.py:244
-#, python-format
-msgid "Deprecated: %s"
-msgstr ""
-
-#: heat/openstack/common/log.py:336
-#, python-format
-msgid "Error loading logging config %(log_config)s: %(err_msg)s"
-msgstr ""
-
-#: heat/openstack/common/log.py:386
-#, python-format
-msgid "syslog facility must be one of: %s"
-msgstr ""
-
-#: heat/openstack/common/log.py:556
-#, python-format
-msgid "Fatal call to deprecated config: %(msg)s"
-msgstr ""
-
-#: heat/openstack/common/loopingcall.py:84
-#, python-format
-msgid "task run outlasted interval by %s sec"
-msgstr ""
-
-#: heat/openstack/common/loopingcall.py:91
-msgid "in fixed duration looping call"
-msgstr ""
-
-#: heat/openstack/common/loopingcall.py:131
-#, python-format
-msgid "Dynamic looping call sleeping for %.02f seconds"
-msgstr ""
-
-#: heat/openstack/common/loopingcall.py:138
-msgid "in dynamic looping call"
-msgstr ""
-
-#: heat/openstack/common/policy.py:75
-msgid "JSON file containing policy"
-msgstr ""
-
-#: heat/openstack/common/policy.py:78
-msgid "Rule enforced when requested rule is not found"
-msgstr ""
-
-#: heat/openstack/common/policy.py:92
-#, python-format
-msgid "Policy doesn't allow %s to be performed."
-msgstr ""
-
-#: heat/openstack/common/policy.py:177
-#, python-format
-msgid "Rules must be an instance of dict or Rules, got %s instead"
-msgstr ""
-
-#: heat/openstack/common/policy.py:207
-msgid "Rules successfully reloaded"
-msgstr ""
-
-#: heat/openstack/common/policy.py:253
-#, python-format
-msgid "Rule %s will be now enforced"
-msgstr ""
-
-#: heat/openstack/common/policy.py:268
-#, python-format
-msgid "Rule [%s] doesn't exist"
-msgstr ""
-
-#: heat/openstack/common/policy.py:478
-#, python-format
-msgid "Failed to understand rule %s"
-msgstr ""
-
-#: heat/openstack/common/policy.py:488
-#, python-format
-msgid "No handler for matches of kind %s"
-msgstr ""
-
-#: heat/openstack/common/policy.py:759
-#, python-format
-msgid "Failed to understand rule %r"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:127
-#, python-format
-msgid "Got unknown keyword args to utils.execute: %r"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:142
-#, python-format
-msgid "Running cmd (subprocess): %s"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:167
-#: heat/openstack/common/processutils.py:240
-#, python-format
-msgid "Result was %s"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:179
-#, python-format
-msgid "%r failed. Retrying."
-msgstr ""
-
-#: heat/openstack/common/processutils.py:219
-#, python-format
-msgid "Running cmd (SSH): %s"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:221
-msgid "Environment not supported over SSH"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:225
-msgid "process_input not supported over SSH"
-msgstr ""
-
-#: heat/openstack/common/service.py:118 heat/openstack/common/service.py:335
-msgid "Full set of CONF:"
-msgstr ""
-
-#: heat/openstack/common/service.py:127 heat/openstack/common/service.py:217
-#, python-format
-msgid "Caught %s, exiting"
-msgstr ""
-
-#: heat/openstack/common/service.py:139
-msgid "Exception during rpc cleanup."
-msgstr ""
-
-#: heat/openstack/common/service.py:188
-msgid "Parent process has died unexpectedly, exiting"
-msgstr ""
-
-#: heat/openstack/common/service.py:223
-msgid "Unhandled exception"
-msgstr ""
-
-#: heat/openstack/common/service.py:256
-msgid "Forking too fast, sleeping"
-msgstr ""
-
-#: heat/openstack/common/service.py:278
-#, python-format
-msgid "Started child %d"
-msgstr ""
-
-#: heat/openstack/common/service.py:305
-#, python-format
-msgid "Child %(pid)d killed by signal %(sig)d"
-msgstr ""
-
-#: heat/openstack/common/service.py:309
-#, python-format
-msgid "Child %(pid)s exited with status %(code)d"
-msgstr ""
-
-#: heat/openstack/common/service.py:313
-#, python-format
-msgid "pid %d not in child list"
-msgstr ""
-
-#: heat/openstack/common/service.py:345
-#, python-format
-msgid "Caught %s, stopping children"
-msgstr ""
-
-#: heat/openstack/common/service.py:363
-#, python-format
-msgid "Waiting on %d children to exit"
-msgstr ""
-
-#: heat/openstack/common/sslutils.py:52
-#, python-format
-msgid "Unable to find cert_file : %s"
-msgstr ""
-
-#: heat/openstack/common/sslutils.py:55
-#, python-format
-msgid "Unable to find ca_file : %s"
-msgstr ""
-
-#: heat/openstack/common/sslutils.py:58
-#, python-format
-msgid "Unable to find key_file : %s"
-msgstr ""
-
-#: heat/openstack/common/sslutils.py:100
-#, python-format
-msgid "Invalid SSL version : %s"
-msgstr ""
-
-#: heat/openstack/common/db/exception.py:44
-msgid "Invalid Parameter: Unicode is not supported by the current database."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/session.py:552
-msgid "DB exception wrapped."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/session.py:614
-#, python-format
-msgid "Got mysql server has gone away: %s"
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/session.py:694
-#, python-format
-msgid "SQL connection failed. %s attempts left."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:50
-msgid "Sort key supplied was not valid."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:89
-msgid "Id not in sort_keys; is sort_keys unique?"
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:111
-msgid "Unknown sort direction, must be 'desc' or 'asc'"
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:182
-#, python-format
-msgid ""
-"Please specify column %s in col_name_col_instance param. It is required "
-"because column has unsupported type by sqlite)."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:188
-#, python-format
-msgid ""
-"col_name_col_instance param has wrong type of column instance for column "
-"%s It should be instance of sqlalchemy.Column."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:268
-#, python-format
-msgid "Deleting duplicated row with id: %(id)s from table: %(table)s"
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:290
-msgid "Unsupported id columns type"
-msgstr ""
-
-#: heat/openstack/common/notifier/api.py:129
-#, python-format
-msgid "%s not in valid priorities"
-msgstr ""
-
-#: heat/openstack/common/notifier/api.py:145
-#, python-format
-msgid ""
-"Problem '%(e)s' attempting to send to notification system. "
-"Payload=%(payload)s"
-msgstr ""
-
-#: heat/openstack/common/notifier/api.py:164
-#, python-format
-msgid "Failed to load notifier %s. These notifications will not be sent."
-msgstr ""
-
-#: heat/openstack/common/notifier/list_notifier.py:112
-#, python-format
-msgid "Problem '%(e)s' attempting to send to notification driver %(driver)s."
-msgstr ""
-
-#: heat/openstack/common/notifier/rabbit_notifier.py:27
-msgid "The rabbit_notifier is now deprecated. Please use rpc_notifier instead."
-msgstr ""
-
-#: heat/openstack/common/notifier/rpc_notifier.py:45
-#: heat/openstack/common/notifier/rpc_notifier2.py:51
-#, python-format
-msgid "Could not send notification to %(topic)s. Payload=%(message)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/__init__.py:107
-#, python-format
-msgid ""
-"A RPC is being made while holding a lock. The locks currently held are "
-"%(locks)s. This is probably a bug. Please report it. Include the "
-"following: [%(stack)s]."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:75
-msgid "Pool creating new connection"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:202
-#, python-format
-msgid "No calling threads waiting for msg_id : %(msg_id)s, message : %(data)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:205
-#, python-format
-msgid "_call_waiters: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:212
-#, python-format
-msgid ""
-"Number of call waiters is greater than warning threshhold: %d. There "
-"could be a MulticallProxyWaiter leak."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:290
-#, python-format
-msgid "unpacked context: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:336
-#, python-format
-msgid "UNIQUE_ID is %s."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:408
-#, python-format
-msgid "received %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:416
-#, python-format
-msgid "no method for message: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:417
-#, python-format
-msgid "No method for message: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:445
-#: heat/openstack/common/rpc/impl_zmq.py:280
-#, python-format
-msgid "Expected exception during message handling (%s)"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:453
-#: heat/openstack/common/rpc/impl_zmq.py:286
-msgid "Exception during message handling"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:527
-#, python-format
-msgid "Making synchronous call on %s ..."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:530
-#, python-format
-msgid "MSG_ID is %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:556
-#, python-format
-msgid "Making asynchronous cast on %s..."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:565
-msgid "Making asynchronous fanout cast..."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:593
-#, python-format
-msgid "Sending %(event_type)s on %(topic)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:77
-msgid "An unknown RPC related exception occurred."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:107
-#, python-format
-msgid ""
-"Remote error: %(exc_type)s %(value)s\n"
-"%(traceback)s."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:124
-#, python-format
-msgid ""
-"Timeout while waiting on RPC response - topic: \"%(topic)s\", RPC method:"
-" \"%(method)s\" info: \"%(info)s\""
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:141
-#: heat/openstack/common/rpc/common.py:142
-#: heat/openstack/common/rpc/common.py:143
-msgid "<unknown>"
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:147
-#, python-format
-msgid "Found duplicate message(%(msg_id)s). Skipping it."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:151
-msgid "Invalid reuse of an RPC connection."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:155
-#, python-format
-msgid "Specified RPC version, %(version)s, not supported by this endpoint."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:160
-#, python-format
-msgid ""
-"Specified RPC envelope version, %(version)s, not supported by this "
-"endpoint."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:165
-#, python-format
-msgid "Specified RPC version cap, %(version_cap)s, is too low"
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:289
-#, python-format
-msgid "Returning exception %s to caller"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:162
-msgid "Failed to process message ... skipping it."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:165
-msgid "Failed to process message ... will requeue."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:502
-#, python-format
-msgid "Reconnecting to AMQP server on %(hostname)s:%(port)d"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:524
-#, python-format
-msgid "Connected to AMQP server on %(hostname)s:%(port)d"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:561
-#, python-format
-msgid ""
-"Unable to connect to AMQP server on %(hostname)s:%(port)d after "
-"%(max_retries)d tries: %(err_str)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:575
-#, python-format
-msgid ""
-"AMQP server on %(hostname)s:%(port)d is unreachable: %(err_str)s. Trying "
-"again in %(sleep_time)d seconds."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:629
-#: heat/openstack/common/rpc/impl_qpid.py:492
-#, python-format
-msgid "Failed to declare consumer for topic '%(topic)s': %(err_str)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:647
-#: heat/openstack/common/rpc/impl_qpid.py:507
-#, python-format
-msgid "Timed out waiting for RPC response: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:651
-#: heat/openstack/common/rpc/impl_qpid.py:511
-#, python-format
-msgid "Failed to consume message from queue: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:690
-#: heat/openstack/common/rpc/impl_qpid.py:546
-#, python-format
-msgid "Failed to publish message to topic '%(topic)s': %(err_str)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:161
-msgid "Failed to process message... skipping it."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:434
-#, python-format
-msgid "Unable to connect to AMQP server: %(e)s. Sleeping %(delay)s seconds"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:440
-#, python-format
-msgid "Connected to AMQP server on %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:453
-msgid "Re-established AMQP queues"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:519
-msgid "Error processing message. Skipping it."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:96
-msgid "JSON serialization failed."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:101
-#, python-format
-msgid "Deserializing: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:136
-#, python-format
-msgid "Connecting to %(addr)s with %(type)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:137
-#, python-format
-msgid "-> Subscribed to %(subscribe)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:138
-#, python-format
-msgid "-> bind: %(bind)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:146
-msgid "Could not open socket."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:158
-#, python-format
-msgid "Subscribing to %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:200
-msgid "You cannot recv on this socket."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:205
-msgid "You cannot send on this socket."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:267
-#, python-format
-msgid "Running func with context: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:305
-msgid "Sending reply"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:339
-msgid "RPC message did not include method."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:371
-msgid "Registering reactor"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:383
-msgid "In reactor registered"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:387
-msgid "Consuming socket"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:437
-#, python-format
-msgid "Creating proxy for topic: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:443
-msgid "Topic contained dangerous characters."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:475
-msgid "Topic socket file creation failed."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:481
-#, python-format
-msgid "Local per-topic backlog buffer full for topic %(topic)s. Dropping message."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:497
-#, python-format
-msgid "Required IPC directory does not exist at %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:506
-#, python-format
-msgid "Permission denied to IPC directory at %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:509
-msgid "Could not create ZeroMQ receiver daemon. Socket may already be in use."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:543
-#, python-format
-msgid "CONSUMER RECEIVED DATA: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:562
-msgid "ZMQ Envelope version unsupported or unknown."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:590
-msgid "Skipping topic registration. Already registered."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:597
-#, python-format
-msgid "Consumer is a zmq.%s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:649
-msgid "Creating payload"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:662
-msgid "Creating queue socket for reply waiter"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:675
-msgid "Sending cast"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:678
-msgid "Cast sent; Waiting reply"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:681
-#, python-format
-msgid "Received message: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:682
-msgid "Unpacking response"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:691
-msgid "Unsupported or unknown ZMQ envelope returned."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:698
-msgid "RPC Message Invalid."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:721
-#, python-format
-msgid "%(msg)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:724
-#, python-format
-msgid "Sending message(s) to: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:728
-msgid "No matchmaker results. Not casting."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:731
-msgid "No match from matchmaker."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:813
-#, python-format
-msgid "rpc_zmq_matchmaker = %(orig)s is deprecated; use %(new)s instead"
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker.py:47
-msgid "Match not found by MatchMaker."
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker.py:81
-msgid "Matchmaker does not implement registration or heartbeat."
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker.py:217
-#, python-format
-msgid "Matchmaker unregistered: %(key)s, %(host)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker.py:229
-msgid "Register before starting heartbeat."
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker_ring.py:77
-#: heat/openstack/common/rpc/matchmaker_ring.py:95
-#, python-format
-msgid "No key defining hosts for topic '%s', see ringfile"
-msgstr ""
-
-#: heat/openstack/common/rpc/service.py:49
-#, python-format
-msgid "Creating Consumer connection for Service %s"
-msgstr ""
-
-#: heat/tests/test_exception.py:24
-#, python-format
-msgid "Testing message %(text)s"
-msgstr ""
-
+++ /dev/null
-# Translations template for heat.
-# Copyright (C) 2013 ORGANIZATION
-# This file is distributed under the same license as the heat project.
-# FIRST AUTHOR <EMAIL@ADDRESS>, 2013.
-#
-#, fuzzy
-msgid ""
-msgstr ""
-"Project-Id-Version: heat 2013.2.a430.gd264018\n"
-"Report-Msgid-Bugs-To: EMAIL@ADDRESS\n"
-"POT-Creation-Date: 2013-08-27 11:15+0800\n"
-"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
-"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
-"Language-Team: LANGUAGE <LL@li.org>\n"
-"MIME-Version: 1.0\n"
-"Content-Type: text/plain; charset=utf-8\n"
-"Content-Transfer-Encoding: 8bit\n"
-"Generated-By: Babel 0.9.6\n"
-
-#: heat/api/aws/ec2token.py:38
-msgid "Authentication Endpoint URI"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:260
-#, python-format
-msgid "Failed to fetch template: %s"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:293
-msgid "DisableRollback and OnFailure may not be used together"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:313
-#, python-format
-msgid "Unexpected action %(action)s"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:331 heat/api/cfn/v1/stacks.py:413
-msgid "Invalid Template URL"
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:335 heat/api/cfn/v1/stacks.py:416
-msgid "TemplateBody or TemplateUrl were not given."
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:341 heat/api/cfn/v1/stacks.py:422
-msgid "The Template must be a JSON or YAML document."
-msgstr ""
-
-#: heat/api/cfn/v1/stacks.py:383
-msgid "stack not not found"
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:49
-#, python-format
-msgid "Processing request: %(method)s %(path)s Accept: %(accept)s"
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:64
-#, python-format
-msgid "Matched versioned URI. Version: %(major_version)d.%(minor_version)d"
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:72
-#, python-format
-msgid ""
-"Unknown version in versioned URI: %(major_version)d.%(minor_version)d. "
-"Returning version choices."
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:88
-#, python-format
-msgid "Matched versioned media type. Version: %(major_version)d.%(minor_version)d"
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:94
-#, python-format
-msgid ""
-"Unknown version in accept header: "
-"%(major_version)d.%(minor_version)d...returning version choices."
-msgstr ""
-
-#: heat/api/middleware/version_negotiation.py:102
-#, python-format
-msgid "Unknown accept header: %s...returning version choices."
-msgstr ""
-
-#: heat/api/openstack/v1/actions.py:43
-msgid "No action specified"
-msgstr ""
-
-#: heat/api/openstack/v1/actions.py:46
-msgid "Multiple actions specified"
-msgstr ""
-
-#: heat/api/openstack/v1/actions.py:50
-#, python-format
-msgid "Invalid action %s specified"
-msgstr ""
-
-#: heat/api/openstack/v1/actions.py:57
-#, python-format
-msgid "Unexpected action %s"
-msgstr ""
-
-#: heat/api/openstack/v1/events.py:100
-#, python-format
-msgid "No events found for resource %s"
-msgstr ""
-
-#: heat/api/openstack/v1/events.py:118
-#, python-format
-msgid "No event %s found"
-msgstr ""
-
-#: heat/api/openstack/v1/stacks.py:75
-#, python-format
-msgid "%s not in valid format"
-msgstr ""
-
-#: heat/api/openstack/v1/stacks.py:83
-msgid "No stack name specified"
-msgstr ""
-
-#: heat/api/openstack/v1/stacks.py:101
-#, python-format
-msgid "Could not retrieve template: %s"
-msgstr ""
-
-#: heat/api/openstack/v1/stacks.py:104
-msgid "No template specified"
-msgstr ""
-
-#: heat/api/openstack/v1/util.py:56
-msgid "Invalid Stack address"
-msgstr ""
-
-#: heat/common/auth_password.py:116
-msgid "Request missing required header X-Auth-Url"
-msgstr ""
-
-#: heat/common/auth_password.py:121
-#, python-format
-msgid "Header X-Auth-Url \"%s\" not an allowed endpoint"
-msgstr ""
-
-#: heat/common/config.py:37
-msgid "The flavor to use"
-msgstr ""
-
-#: heat/common/config.py:39
-msgid "The API paste config file to use"
-msgstr ""
-
-#: heat/common/config.py:102
-msgid "Allow orchestration of multiple clouds"
-msgstr ""
-
-#: heat/common/config.py:105
-msgid ""
-"Allowed keystone endpoints for auth_uri when multi_cloud is enabled. At "
-"least one endpoint needs to be specified."
-msgstr ""
-
-#: heat/common/exception.py:102
-msgid "An unknown exception occurred."
-msgstr ""
-
-#: heat/common/exception.py:113 heat/openstack/common/rpc/common.py:89
-msgid "Exception in string format operation"
-msgstr ""
-
-#: heat/common/exception.py:125
-#, python-format
-msgid "Missing required credential: %(required)s"
-msgstr ""
-
-#: heat/common/exception.py:129
-#, python-format
-msgid ""
-"Incorrect auth strategy, expected \"%(expected)s\" but received "
-"\"%(received)s\""
-msgstr ""
-
-#: heat/common/exception.py:134
-#, python-format
-msgid "Connect error/bad request to Auth service at URL %(url)s."
-msgstr ""
-
-#: heat/common/exception.py:138
-#, python-format
-msgid "Auth service at URL %(url)s not found."
-msgstr ""
-
-#: heat/common/exception.py:142
-msgid "Authorization failed."
-msgstr ""
-
-#: heat/common/exception.py:146
-msgid "You are not authenticated."
-msgstr ""
-
-#: heat/common/exception.py:150 heat/common/exception.py:155
-msgid "You are not authorized to complete this action."
-msgstr ""
-
-#: heat/common/exception.py:159
-#, python-format
-msgid "Data supplied was not valid: %(reason)s"
-msgstr ""
-
-#: heat/common/exception.py:163
-#, python-format
-msgid "Redirecting to %(uri)s for authorization."
-msgstr ""
-
-#: heat/common/exception.py:167
-msgid "There was an error configuring the client."
-msgstr ""
-
-#: heat/common/exception.py:171
-#, python-format
-msgid ""
-"The request returned a 300 Multiple Choices. This generally means that "
-"you have not included a version indicator in a request URI.\n"
-"\n"
-"The body of response returned:\n"
-"%(body)s"
-msgstr ""
-
-#: heat/common/exception.py:177
-#, python-format
-msgid ""
-"The request returned a 413 Request Entity Too Large. This generally means"
-" that rate limiting or a quota threshold was breached.\n"
-"\n"
-"The response body:\n"
-"%(body)s"
-msgstr ""
-
-#: heat/common/exception.py:188
-msgid ""
-"The request returned a 503 Service Unavailable. This generally occurs on "
-"service overload or other transient outage."
-msgstr ""
-
-#: heat/common/exception.py:199
-msgid "The URI was too long."
-msgstr ""
-
-#: heat/common/exception.py:203
-#, python-format
-msgid ""
-"The request returned 500 Internal Server Error\n"
-"\n"
-"The response body:\n"
-"%(body)s"
-msgstr ""
-
-#: heat/common/exception.py:208
-#, python-format
-msgid "Maximum redirects (%(redirects)s) was exceeded."
-msgstr ""
-
-#: heat/common/exception.py:212
-msgid "Received invalid HTTP redirect."
-msgstr ""
-
-#: heat/common/exception.py:216
-msgid "Response from Keystone does not contain a Heat endpoint."
-msgstr ""
-
-#: heat/common/exception.py:220
-#, python-format
-msgid ""
-"Multiple 'image' service matches for region %(region)s. This generally "
-"means that a region is required and you have not supplied one."
-msgstr ""
-
-#: heat/common/exception.py:226
-#, python-format
-msgid "The Parameter (%(key)s) was not provided."
-msgstr ""
-
-#: heat/common/exception.py:230
-#, python-format
-msgid "The Parameter (%(key)s) was not defined in template."
-msgstr ""
-
-#: heat/common/exception.py:234
-#, python-format
-msgid "The Referenced Attribute (%(resource)s %(key)s) is incorrect."
-msgstr ""
-
-#: heat/common/exception.py:239
-#, python-format
-msgid "The specified reference (%(resource)s %(key)s) is incorrect."
-msgstr ""
-
-#: heat/common/exception.py:244
-#, python-format
-msgid "The Key (%(key_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:248
-#, python-format
-msgid "The Flavor ID (%(flavor_id)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:252
-#, python-format
-msgid "The Image (%(image_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:256
-#, python-format
-msgid "Multiple images were found with name (%(image_name)s)."
-msgstr ""
-
-#: heat/common/exception.py:260
-#, python-format
-msgid "Searching Tenant %(target)s from Tenant %(actual)s forbidden."
-msgstr ""
-
-#: heat/common/exception.py:265
-#, python-format
-msgid "The Stack (%(stack_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:269
-#, python-format
-msgid "The Stack (%(stack_name)s) already exists."
-msgstr ""
-
-#: heat/common/exception.py:273
-#, python-format
-msgid "%(message)s"
-msgstr ""
-
-#: heat/common/exception.py:277
-#, python-format
-msgid ""
-"The Resource (%(resource_name)s) could not be found in Stack "
-"%(stack_name)s."
-msgstr ""
-
-#: heat/common/exception.py:282
-#, python-format
-msgid "The Resource Type (%(type_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:286
-#, python-format
-msgid "The Resource (%(resource_name)s) is not available."
-msgstr ""
-
-#: heat/common/exception.py:290
-#, python-format
-msgid "The Resource (%(resource_id)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:294
-#, python-format
-msgid "The Watch Rule (%(watch_name)s) could not be found."
-msgstr ""
-
-#: heat/common/exception.py:298
-#, python-format
-msgid "%(exc_type)s: %(message)s"
-msgstr ""
-
-#: heat/common/exception.py:312
-#, python-format
-msgid "%(feature)s is not supported."
-msgstr ""
-
-#: heat/common/exception.py:316
-#, python-format
-msgid "Cannot define the following properties at the same time: %s."
-msgstr ""
-
-#: heat/common/exception.py:334
-msgid "Template exceeds maximum allowed size."
-msgstr ""
-
-#: heat/common/exception.py:338
-msgid ""
-"Egress rules are only allowed when Neutron is used and the 'VpcId' "
-"property is set."
-msgstr ""
-
-#: heat/common/plugin_loader.py:94
-#, python-format
-msgid "Failed to import module %s"
-msgstr ""
-
-#: heat/common/urlfetch.py:37
-#, python-format
-msgid "Fetching data from %s"
-msgstr ""
-
-#: heat/common/wsgi.py:55
-msgid ""
-"Address to bind the server. Useful when selecting a particular network "
-"interface."
-msgstr ""
-
-#: heat/common/wsgi.py:58
-msgid "The port on which the server will listen."
-msgstr ""
-
-#: heat/common/wsgi.py:65
-msgid "Number of backlog requests to configure the socket with"
-msgstr ""
-
-#: heat/common/wsgi.py:68
-msgid "Location of the SSL Certificate File to use for SSL mode"
-msgstr ""
-
-#: heat/common/wsgi.py:71
-msgid "Location of the SSL Key File to use for enabling SSL mode"
-msgstr ""
-
-#: heat/common/wsgi.py:78
-msgid "Number of workers for Heat service"
-msgstr ""
-
-#: heat/common/wsgi.py:129 heat/openstack/common/sslutils.py:61
-msgid ""
-"When running server in SSL mode, you must specify both a cert_file and "
-"key_file option value in your configuration file"
-msgstr ""
-
-#: heat/common/wsgi.py:147
-#, python-format
-msgid "Could not bind to %(bind_addr)s after trying for 30 seconds"
-msgstr ""
-
-#: heat/common/wsgi.py:179
-msgid "SIGTERM received"
-msgstr ""
-
-#: heat/common/wsgi.py:188
-msgid "SIGHUP received"
-msgstr ""
-
-#: heat/common/wsgi.py:203 heat/openstack/common/service.py:288
-#, python-format
-msgid "Starting %d workers"
-msgstr ""
-
-#: heat/common/wsgi.py:214
-#, python-format
-msgid "Removing dead child %s"
-msgstr ""
-
-#: heat/common/wsgi.py:221
-msgid "Caught keyboard interrupt. Exiting."
-msgstr ""
-
-#: heat/common/wsgi.py:225
-msgid "Exited"
-msgstr ""
-
-#: heat/common/wsgi.py:243
-#, python-format
-msgid "Child %d exiting normally"
-msgstr ""
-
-#: heat/common/wsgi.py:246
-#, python-format
-msgid "Started child %s"
-msgstr ""
-
-#: heat/common/wsgi.py:268
-msgid "Starting single process server"
-msgstr ""
-
-#: heat/common/wsgi.py:596
-#, python-format
-msgid "Exception handling resource: %s"
-msgstr ""
-
-#: heat/common/wsgi.py:597
-msgid ""
-"The server could not comply with the request since\r\n"
-"it is either malformed or otherwise incorrect.\r\n"
-msgstr ""
-
-#: heat/common/wsgi.py:611
-#, python-format
-msgid "Returning %(code)s to user: %(explanation)s"
-msgstr ""
-
-#: heat/common/wsgi.py:616
-#, python-format
-msgid "Unexpected error occurred serving API: %s"
-msgstr ""
-
-#: heat/db/sqlalchemy/api.py:393
-msgid "age should be an integer"
-msgstr ""
-
-#: heat/db/sqlalchemy/api.py:395
-msgid "age should be a positive integer"
-msgstr ""
-
-#: heat/db/sqlalchemy/migration.py:63
-msgid "python-migrate is not installed. Exiting."
-msgstr ""
-
-#: heat/db/sqlalchemy/migration.py:73
-msgid "version should be an integer"
-msgstr ""
-
-#: heat/doc/resources.py:85
-msgid "HOT Syntax"
-msgstr ""
-
-#: heat/doc/resources.py:107
-msgid "YAML Syntax"
-msgstr ""
-
-#: heat/doc/resources.py:129
-msgid "JSON Syntax"
-msgstr ""
-
-#: heat/doc/resources.py:161
-#, python-format
-msgid " from %s"
-msgstr ""
-
-#: heat/doc/resources.py:163
-#, python-format
-msgid " up to %s"
-msgstr ""
-
-#: heat/doc/resources.py:165
-#, python-format
-msgid " from length %s"
-msgstr ""
-
-#: heat/doc/resources.py:167
-#, python-format
-msgid " up to length %s"
-msgstr ""
-
-#: heat/doc/resources.py:174
-msgid "Not implemented."
-msgstr ""
-
-#: heat/doc/resources.py:185
-msgid "Required property."
-msgstr ""
-
-#: heat/doc/resources.py:189
-#, python-format
-msgid "Optional property, defaults to \"%s\"."
-msgstr ""
-
-#: heat/doc/resources.py:192
-msgid "Optional property."
-msgstr ""
-
-#: heat/doc/resources.py:196
-#, python-format
-msgid "Value must match pattern: %s"
-msgstr ""
-
-#: heat/doc/resources.py:203
-#, python-format
-msgid "Allowed values: %s"
-msgstr ""
-
-#: heat/doc/resources.py:209
-msgid "Map properties:"
-msgstr ""
-
-#: heat/doc/resources.py:215
-msgid "List contains maps with the properties:"
-msgstr ""
-
-#: heat/doc/resources.py:230
-msgid "Properties"
-msgstr ""
-
-#: heat/doc/resources.py:241
-msgid "Attributes"
-msgstr ""
-
-#: heat/engine/dependencies.py:25
-#, python-format
-msgid "Circular Dependency Found: %(cycle)s"
-msgstr ""
-
-#: heat/engine/environment.py:181
-#, python-format
-msgid "Changing %(path)s from %(was)s to %(now)s"
-msgstr ""
-
-#: heat/engine/environment.py:183
-#, python-format
-msgid "Registering %(path)s -> %(value)s"
-msgstr ""
-
-#: heat/engine/parser.py:74
-#, python-format
-msgid ""
-"Invalid stack name %s must contain only alphanumeric or \"_-.\" "
-"characters, must start with alpha"
-msgstr ""
-
-#: heat/engine/resource.py:55 heat/engine/resource.py:58
-#, python-format
-msgid "The Resource %s requires replacement."
-msgstr ""
-
-#: heat/engine/resource.py:132
-msgid "Resource name may not contain \"/\""
-msgstr ""
-
-#: heat/engine/scheduler.py:65
-#, python-format
-msgid "%s Timed out"
-msgstr ""
-
-#: heat/engine/service.py:311
-msgid "No Template provided."
-msgstr ""
-
-#: heat/engine/stack_resource.py:122
-#, python-format
-msgid "Cannot update %s, stack not created"
-msgstr ""
-
-#: heat/engine/stack_resource.py:151
-#, python-format
-msgid "Cannot suspend %s, stack not created"
-msgstr ""
-
-#: heat/engine/stack_resource.py:173
-#, python-format
-msgid "Cannot resume %s, stack not created"
-msgstr ""
-
-#: heat/engine/resources/__init__.py:39
-#, python-format
-msgid "Failed to load resources from %s"
-msgstr ""
-
-#: heat/engine/resources/instance.py:314
-#, python-format
-msgid "Build of server %(server)s failed: %(message)s (%(code)s)"
-msgstr ""
-
-#: heat/engine/resources/instance.py:321
-#, python-format
-msgid "Nova reported unexpected instance[%(name)s] status[%(status)s]"
-msgstr ""
-
-#: heat/engine/resources/instance.py:418
-#, python-format
-msgid "Cannot suspend %s, resource_id not set"
-msgstr ""
-
-#: heat/engine/resources/instance.py:424 heat/engine/resources/instance.py:478
-#, python-format
-msgid "Failed to find instance %s"
-msgstr ""
-
-#: heat/engine/resources/instance.py:455
-#, python-format
-msgid " nova reported unexpected instance[%(instance)s] status[%(status)s]"
-msgstr ""
-
-#: heat/engine/resources/instance.py:472
-#, python-format
-msgid "Cannot resume %s, resource_id not set"
-msgstr ""
-
-#: heat/engine/resources/nova_utils.py:221
-#, python-format
-msgid "Resizing to '%(flavor)s' failed, status '%(status)s'"
-msgstr ""
-
-#: heat/engine/resources/vpc.py:72
-#, python-format
-msgid "Multiple routers found with name %s"
-msgstr ""
-
-#: heat/engine/resources/neutron/loadbalancer.py:251
-msgid "The ID of the load balancing pool"
-msgstr ""
-
-#: heat/engine/resources/neutron/loadbalancer.py:254
-msgid "Port number on which the servers are running on the members"
-msgstr ""
-
-#: heat/engine/resources/neutron/loadbalancer.py:258
-msgid "The list of Nova server IDs load balanced"
-msgstr ""
-
-#: heat/engine/resources/rackspace/cloud_loadbalancer.py:33
-#, python-format
-msgid "There was an error building the loadbalancer:%(lb_name)s."
-msgstr ""
-
-#: heat/openstack/common/eventlet_backdoor.py:142
-#, python-format
-msgid "Eventlet backdoor listening on %(port)s for process %(pid)d"
-msgstr ""
-
-#: heat/openstack/common/exception.py:102
-msgid "Uncaught exception"
-msgstr ""
-
-#: heat/openstack/common/excutils.py:62
-#, python-format
-msgid "Original exception being dropped: %s"
-msgstr ""
-
-#: heat/openstack/common/excutils.py:90
-#, python-format
-msgid "Unexpected exception occurred %d time(s)... retrying."
-msgstr ""
-
-#: heat/openstack/common/fileutils.py:64
-#, python-format
-msgid "Reloading cached file %s"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:100
-#, python-format
-msgid "Could not release the acquired lock `%s`"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:166
-#, python-format
-msgid "Got semaphore \"%(lock)s\""
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:175
-#, python-format
-msgid "Attempting to grab file lock \"%(lock)s\""
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:185
-#, python-format
-msgid "Created lock path: %s"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:203
-#, python-format
-msgid "Got file lock \"%(lock)s\" at %(path)s"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:207
-#, python-format
-msgid "Released file lock \"%(lock)s\" at %(path)s"
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:244
-#, python-format
-msgid "Got semaphore / lock \"%(function)s\""
-msgstr ""
-
-#: heat/openstack/common/lockutils.py:248
-#, python-format
-msgid "Semaphore / lock released \"%(function)s\""
-msgstr ""
-
-#: heat/openstack/common/log.py:244
-#, python-format
-msgid "Deprecated: %s"
-msgstr ""
-
-#: heat/openstack/common/log.py:336
-#, python-format
-msgid "Error loading logging config %(log_config)s: %(err_msg)s"
-msgstr ""
-
-#: heat/openstack/common/log.py:386
-#, python-format
-msgid "syslog facility must be one of: %s"
-msgstr ""
-
-#: heat/openstack/common/log.py:556
-#, python-format
-msgid "Fatal call to deprecated config: %(msg)s"
-msgstr ""
-
-#: heat/openstack/common/loopingcall.py:84
-#, python-format
-msgid "task run outlasted interval by %s sec"
-msgstr ""
-
-#: heat/openstack/common/loopingcall.py:91
-msgid "in fixed duration looping call"
-msgstr ""
-
-#: heat/openstack/common/loopingcall.py:131
-#, python-format
-msgid "Dynamic looping call sleeping for %.02f seconds"
-msgstr ""
-
-#: heat/openstack/common/loopingcall.py:138
-msgid "in dynamic looping call"
-msgstr ""
-
-#: heat/openstack/common/policy.py:75
-msgid "JSON file containing policy"
-msgstr ""
-
-#: heat/openstack/common/policy.py:78
-msgid "Rule enforced when requested rule is not found"
-msgstr ""
-
-#: heat/openstack/common/policy.py:92
-#, python-format
-msgid "Policy doesn't allow %s to be performed."
-msgstr ""
-
-#: heat/openstack/common/policy.py:177
-#, python-format
-msgid "Rules must be an instance of dict or Rules, got %s instead"
-msgstr ""
-
-#: heat/openstack/common/policy.py:207
-msgid "Rules successfully reloaded"
-msgstr ""
-
-#: heat/openstack/common/policy.py:253
-#, python-format
-msgid "Rule %s will be now enforced"
-msgstr ""
-
-#: heat/openstack/common/policy.py:268
-#, python-format
-msgid "Rule [%s] doesn't exist"
-msgstr ""
-
-#: heat/openstack/common/policy.py:478
-#, python-format
-msgid "Failed to understand rule %s"
-msgstr ""
-
-#: heat/openstack/common/policy.py:488
-#, python-format
-msgid "No handler for matches of kind %s"
-msgstr ""
-
-#: heat/openstack/common/policy.py:759
-#, python-format
-msgid "Failed to understand rule %r"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:127
-#, python-format
-msgid "Got unknown keyword args to utils.execute: %r"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:142
-#, python-format
-msgid "Running cmd (subprocess): %s"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:167
-#: heat/openstack/common/processutils.py:240
-#, python-format
-msgid "Result was %s"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:179
-#, python-format
-msgid "%r failed. Retrying."
-msgstr ""
-
-#: heat/openstack/common/processutils.py:219
-#, python-format
-msgid "Running cmd (SSH): %s"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:221
-msgid "Environment not supported over SSH"
-msgstr ""
-
-#: heat/openstack/common/processutils.py:225
-msgid "process_input not supported over SSH"
-msgstr ""
-
-#: heat/openstack/common/service.py:118 heat/openstack/common/service.py:335
-msgid "Full set of CONF:"
-msgstr ""
-
-#: heat/openstack/common/service.py:127 heat/openstack/common/service.py:217
-#, python-format
-msgid "Caught %s, exiting"
-msgstr ""
-
-#: heat/openstack/common/service.py:139
-msgid "Exception during rpc cleanup."
-msgstr ""
-
-#: heat/openstack/common/service.py:188
-msgid "Parent process has died unexpectedly, exiting"
-msgstr ""
-
-#: heat/openstack/common/service.py:223
-msgid "Unhandled exception"
-msgstr ""
-
-#: heat/openstack/common/service.py:256
-msgid "Forking too fast, sleeping"
-msgstr ""
-
-#: heat/openstack/common/service.py:278
-#, python-format
-msgid "Started child %d"
-msgstr ""
-
-#: heat/openstack/common/service.py:305
-#, python-format
-msgid "Child %(pid)d killed by signal %(sig)d"
-msgstr ""
-
-#: heat/openstack/common/service.py:309
-#, python-format
-msgid "Child %(pid)s exited with status %(code)d"
-msgstr ""
-
-#: heat/openstack/common/service.py:313
-#, python-format
-msgid "pid %d not in child list"
-msgstr ""
-
-#: heat/openstack/common/service.py:345
-#, python-format
-msgid "Caught %s, stopping children"
-msgstr ""
-
-#: heat/openstack/common/service.py:363
-#, python-format
-msgid "Waiting on %d children to exit"
-msgstr ""
-
-#: heat/openstack/common/sslutils.py:52
-#, python-format
-msgid "Unable to find cert_file : %s"
-msgstr ""
-
-#: heat/openstack/common/sslutils.py:55
-#, python-format
-msgid "Unable to find ca_file : %s"
-msgstr ""
-
-#: heat/openstack/common/sslutils.py:58
-#, python-format
-msgid "Unable to find key_file : %s"
-msgstr ""
-
-#: heat/openstack/common/sslutils.py:100
-#, python-format
-msgid "Invalid SSL version : %s"
-msgstr ""
-
-#: heat/openstack/common/db/exception.py:44
-msgid "Invalid Parameter: Unicode is not supported by the current database."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/session.py:552
-msgid "DB exception wrapped."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/session.py:614
-#, python-format
-msgid "Got mysql server has gone away: %s"
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/session.py:694
-#, python-format
-msgid "SQL connection failed. %s attempts left."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:50
-msgid "Sort key supplied was not valid."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:89
-msgid "Id not in sort_keys; is sort_keys unique?"
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:111
-msgid "Unknown sort direction, must be 'desc' or 'asc'"
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:182
-#, python-format
-msgid ""
-"Please specify column %s in col_name_col_instance param. It is required "
-"because the column has a type unsupported by sqlite."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:188
-#, python-format
-msgid ""
-"col_name_col_instance param has wrong type of column instance for column "
-"%s. It should be an instance of sqlalchemy.Column."
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:268
-#, python-format
-msgid "Deleting duplicated row with id: %(id)s from table: %(table)s"
-msgstr ""
-
-#: heat/openstack/common/db/sqlalchemy/utils.py:290
-msgid "Unsupported id columns type"
-msgstr ""
-
-#: heat/openstack/common/notifier/api.py:129
-#, python-format
-msgid "%s not in valid priorities"
-msgstr ""
-
-#: heat/openstack/common/notifier/api.py:145
-#, python-format
-msgid ""
-"Problem '%(e)s' attempting to send to notification system. "
-"Payload=%(payload)s"
-msgstr ""
-
-#: heat/openstack/common/notifier/api.py:164
-#, python-format
-msgid "Failed to load notifier %s. These notifications will not be sent."
-msgstr ""
-
-#: heat/openstack/common/notifier/list_notifier.py:112
-#, python-format
-msgid "Problem '%(e)s' attempting to send to notification driver %(driver)s."
-msgstr ""
-
-#: heat/openstack/common/notifier/rabbit_notifier.py:27
-msgid "The rabbit_notifier is now deprecated. Please use rpc_notifier instead."
-msgstr ""
-
-#: heat/openstack/common/notifier/rpc_notifier.py:45
-#: heat/openstack/common/notifier/rpc_notifier2.py:51
-#, python-format
-msgid "Could not send notification to %(topic)s. Payload=%(message)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/__init__.py:107
-#, python-format
-msgid ""
-"A RPC is being made while holding a lock. The locks currently held are "
-"%(locks)s. This is probably a bug. Please report it. Include the "
-"following: [%(stack)s]."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:75
-msgid "Pool creating new connection"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:202
-#, python-format
-msgid "No calling threads waiting for msg_id : %(msg_id)s, message : %(data)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:205
-#, python-format
-msgid "_call_waiters: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:212
-#, python-format
-msgid ""
-"Number of call waiters is greater than warning threshold: %d. There "
-"could be a MulticallProxyWaiter leak."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:290
-#, python-format
-msgid "unpacked context: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:336
-#, python-format
-msgid "UNIQUE_ID is %s."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:408
-#, python-format
-msgid "received %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:416
-#, python-format
-msgid "no method for message: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:417
-#, python-format
-msgid "No method for message: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:445
-#: heat/openstack/common/rpc/impl_zmq.py:280
-#, python-format
-msgid "Expected exception during message handling (%s)"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:453
-#: heat/openstack/common/rpc/impl_zmq.py:286
-msgid "Exception during message handling"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:527
-#, python-format
-msgid "Making synchronous call on %s ..."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:530
-#, python-format
-msgid "MSG_ID is %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:556
-#, python-format
-msgid "Making asynchronous cast on %s..."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:565
-msgid "Making asynchronous fanout cast..."
-msgstr ""
-
-#: heat/openstack/common/rpc/amqp.py:593
-#, python-format
-msgid "Sending %(event_type)s on %(topic)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:77
-msgid "An unknown RPC related exception occurred."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:107
-#, python-format
-msgid ""
-"Remote error: %(exc_type)s %(value)s\n"
-"%(traceback)s."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:124
-#, python-format
-msgid ""
-"Timeout while waiting on RPC response - topic: \"%(topic)s\", RPC method:"
-" \"%(method)s\" info: \"%(info)s\""
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:141
-#: heat/openstack/common/rpc/common.py:142
-#: heat/openstack/common/rpc/common.py:143
-msgid "<unknown>"
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:147
-#, python-format
-msgid "Found duplicate message(%(msg_id)s). Skipping it."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:151
-msgid "Invalid reuse of an RPC connection."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:155
-#, python-format
-msgid "Specified RPC version, %(version)s, not supported by this endpoint."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:160
-#, python-format
-msgid ""
-"Specified RPC envelope version, %(version)s, not supported by this "
-"endpoint."
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:165
-#, python-format
-msgid "Specified RPC version cap, %(version_cap)s, is too low"
-msgstr ""
-
-#: heat/openstack/common/rpc/common.py:289
-#, python-format
-msgid "Returning exception %s to caller"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:162
-msgid "Failed to process message ... skipping it."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:165
-msgid "Failed to process message ... will requeue."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:502
-#, python-format
-msgid "Reconnecting to AMQP server on %(hostname)s:%(port)d"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:524
-#, python-format
-msgid "Connected to AMQP server on %(hostname)s:%(port)d"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:561
-#, python-format
-msgid ""
-"Unable to connect to AMQP server on %(hostname)s:%(port)d after "
-"%(max_retries)d tries: %(err_str)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:575
-#, python-format
-msgid ""
-"AMQP server on %(hostname)s:%(port)d is unreachable: %(err_str)s. Trying "
-"again in %(sleep_time)d seconds."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:629
-#: heat/openstack/common/rpc/impl_qpid.py:492
-#, python-format
-msgid "Failed to declare consumer for topic '%(topic)s': %(err_str)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:647
-#: heat/openstack/common/rpc/impl_qpid.py:507
-#, python-format
-msgid "Timed out waiting for RPC response: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:651
-#: heat/openstack/common/rpc/impl_qpid.py:511
-#, python-format
-msgid "Failed to consume message from queue: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_kombu.py:690
-#: heat/openstack/common/rpc/impl_qpid.py:546
-#, python-format
-msgid "Failed to publish message to topic '%(topic)s': %(err_str)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:161
-msgid "Failed to process message... skipping it."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:434
-#, python-format
-msgid "Unable to connect to AMQP server: %(e)s. Sleeping %(delay)s seconds"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:440
-#, python-format
-msgid "Connected to AMQP server on %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:453
-msgid "Re-established AMQP queues"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_qpid.py:519
-msgid "Error processing message. Skipping it."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:96
-msgid "JSON serialization failed."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:101
-#, python-format
-msgid "Deserializing: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:136
-#, python-format
-msgid "Connecting to %(addr)s with %(type)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:137
-#, python-format
-msgid "-> Subscribed to %(subscribe)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:138
-#, python-format
-msgid "-> bind: %(bind)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:146
-msgid "Could not open socket."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:158
-#, python-format
-msgid "Subscribing to %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:200
-msgid "You cannot recv on this socket."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:205
-msgid "You cannot send on this socket."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:267
-#, python-format
-msgid "Running func with context: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:305
-msgid "Sending reply"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:339
-msgid "RPC message did not include method."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:371
-msgid "Registering reactor"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:383
-msgid "In reactor registered"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:387
-msgid "Consuming socket"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:437
-#, python-format
-msgid "Creating proxy for topic: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:443
-msgid "Topic contained dangerous characters."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:475
-msgid "Topic socket file creation failed."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:481
-#, python-format
-msgid "Local per-topic backlog buffer full for topic %(topic)s. Dropping message."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:497
-#, python-format
-msgid "Required IPC directory does not exist at %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:506
-#, python-format
-msgid "Permission denied to IPC directory at %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:509
-msgid "Could not create ZeroMQ receiver daemon. Socket may already be in use."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:543
-#, python-format
-msgid "CONSUMER RECEIVED DATA: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:562
-msgid "ZMQ Envelope version unsupported or unknown."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:590
-msgid "Skipping topic registration. Already registered."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:597
-#, python-format
-msgid "Consumer is a zmq.%s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:649
-msgid "Creating payload"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:662
-msgid "Creating queue socket for reply waiter"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:675
-msgid "Sending cast"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:678
-msgid "Cast sent; Waiting reply"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:681
-#, python-format
-msgid "Received message: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:682
-msgid "Unpacking response"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:691
-msgid "Unsupported or unknown ZMQ envelope returned."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:698
-msgid "RPC Message Invalid."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:721
-#, python-format
-msgid "%(msg)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:724
-#, python-format
-msgid "Sending message(s) to: %s"
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:728
-msgid "No matchmaker results. Not casting."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:731
-msgid "No match from matchmaker."
-msgstr ""
-
-#: heat/openstack/common/rpc/impl_zmq.py:813
-#, python-format
-msgid "rpc_zmq_matchmaker = %(orig)s is deprecated; use %(new)s instead"
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker.py:47
-msgid "Match not found by MatchMaker."
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker.py:81
-msgid "Matchmaker does not implement registration or heartbeat."
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker.py:217
-#, python-format
-msgid "Matchmaker unregistered: %(key)s, %(host)s"
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker.py:229
-msgid "Register before starting heartbeat."
-msgstr ""
-
-#: heat/openstack/common/rpc/matchmaker_ring.py:77
-#: heat/openstack/common/rpc/matchmaker_ring.py:95
-#, python-format
-msgid "No key defining hosts for topic '%s', see ringfile"
-msgstr ""
-
-#: heat/openstack/common/rpc/service.py:49
-#, python-format
-msgid "Creating Consumer connection for Service %s"
-msgstr ""
-
-#: heat/tests/test_exception.py:24
-#, python-format
-msgid "Testing message %(text)s"
-msgstr ""
-
+++ /dev/null
-openstack-common
-----------------
-
-A number of modules from openstack-common are imported into this project.
-
-These modules are "incubating" in openstack-common and are kept in sync
-with the help of openstack-common's update.py script. See:
-
- http://wiki.openstack.org/CommonLibrary#Incubation
-
-The copy of the code should never be directly modified here. Please
-always update openstack-common first and then run the script to copy
-the changes across.
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Copyright 2012 SINA Corporation
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-
-"""Extracts OpenStack config option info from module(s)."""
-
-from __future__ import print_function
-
-import imp
-import os
-import re
-import socket
-import sys
-import textwrap
-
-from oslo.config import cfg
-
-from heat.openstack.common import gettextutils
-from heat.openstack.common import importutils
-
-gettextutils.install('heat')
-
-STROPT = "StrOpt"
-BOOLOPT = "BoolOpt"
-INTOPT = "IntOpt"
-FLOATOPT = "FloatOpt"
-LISTOPT = "ListOpt"
-MULTISTROPT = "MultiStrOpt"
-
-OPT_TYPES = {
- STROPT: 'string value',
- BOOLOPT: 'boolean value',
- INTOPT: 'integer value',
- FLOATOPT: 'floating point value',
- LISTOPT: 'list value',
- MULTISTROPT: 'multi valued',
-}
-
-OPTION_REGEX = re.compile(r"(%s)" % "|".join([STROPT, BOOLOPT, INTOPT,
- FLOATOPT, LISTOPT,
- MULTISTROPT]))
-
-PY_EXT = ".py"
-BASEDIR = os.path.abspath(os.path.join(os.path.dirname(__file__),
- "../../../../"))
-WORDWRAP_WIDTH = 60
-
-
-def generate(srcfiles):
- mods_by_pkg = dict()
- for filepath in srcfiles:
- pkg_name = filepath.split(os.sep)[1]
- mod_str = '.'.join(['.'.join(filepath.split(os.sep)[:-1]),
- os.path.basename(filepath).split('.')[0]])
- mods_by_pkg.setdefault(pkg_name, list()).append(mod_str)
- # NOTE(lzyeval): place top level modules before packages
- pkg_names = filter(lambda x: x.endswith(PY_EXT), mods_by_pkg.keys())
- pkg_names.sort()
- ext_names = filter(lambda x: x not in pkg_names, mods_by_pkg.keys())
- ext_names.sort()
- pkg_names.extend(ext_names)
-
- # opts_by_group is a mapping of group name to an options list
- # The options list is a list of (module, options) tuples
- opts_by_group = {'DEFAULT': []}
-
- for pkg_name in pkg_names:
- mods = mods_by_pkg.get(pkg_name)
- mods.sort()
- for mod_str in mods:
- if mod_str.endswith('.__init__'):
- mod_str = mod_str[:mod_str.rfind(".")]
-
- mod_obj = _import_module(mod_str)
- if not mod_obj:
- continue
-
- for group, opts in _list_opts(mod_obj):
- opts_by_group.setdefault(group, []).append((mod_str, opts))
-
- print_group_opts('DEFAULT', opts_by_group.pop('DEFAULT', []))
- for group, opts in opts_by_group.items():
- print_group_opts(group, opts)
-
-
-def _import_module(mod_str):
- try:
- if mod_str.startswith('bin.'):
- imp.load_source(mod_str[4:], os.path.join('bin', mod_str[4:]))
- return sys.modules[mod_str[4:]]
- else:
- return importutils.import_module(mod_str)
- except ImportError as ie:
- sys.stderr.write("%s\n" % str(ie))
- return None
- except Exception:
- return None
-
-
-def _is_in_group(opt, group):
- "Check if opt is in group."
- for key, value in group._opts.items():
- if value['opt'] == opt:
- return True
- return False
-
-
-def _guess_groups(opt, mod_obj):
- # is it in the DEFAULT group?
- if _is_in_group(opt, cfg.CONF):
- return 'DEFAULT'
-
- # what other groups is it in?
- for key, value in cfg.CONF.items():
- if isinstance(value, cfg.CONF.GroupAttr):
- if _is_in_group(opt, value._group):
- return value._group.name
-
- raise RuntimeError(
- "Unable to find group for option %s, "
- "maybe it's defined twice in the same group?"
- % opt.name
- )
-
-
-def _list_opts(obj):
- def is_opt(o):
- return (isinstance(o, cfg.Opt) and
- not isinstance(o, cfg.SubCommandOpt))
-
- opts = list()
- for attr_str in dir(obj):
- attr_obj = getattr(obj, attr_str)
- if is_opt(attr_obj):
- opts.append(attr_obj)
- elif (isinstance(attr_obj, list) and
- all(map(lambda x: is_opt(x), attr_obj))):
- opts.extend(attr_obj)
-
- ret = {}
- for opt in opts:
- ret.setdefault(_guess_groups(opt, obj), []).append(opt)
- return ret.items()
-
-
-def print_group_opts(group, opts_by_module):
- print("[%s]" % group)
- print('')
- for mod, opts in opts_by_module:
- print('#')
- print('# Options defined in %s' % mod)
- print('#')
- print('')
- for opt in opts:
- _print_opt(opt)
- print('')
-
-
-def _get_my_ip():
- try:
- csock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
- csock.connect(('8.8.8.8', 80))
- (addr, port) = csock.getsockname()
- csock.close()
- return addr
- except socket.error:
- return None
-
-
-def _sanitize_default(name, value):
- """Set up a reasonably sensible default for pybasedir, my_ip and host."""
- if value.startswith(sys.prefix):
- # NOTE(jd) Don't use os.path.join, because it is likely to think the
- # second part is an absolute pathname and therefore drop the first
- # part.
- value = os.path.normpath("/usr/" + value[len(sys.prefix):])
- elif value.startswith(BASEDIR):
- return value.replace(BASEDIR, '/usr/lib/python/site-packages')
- elif BASEDIR in value:
- return value.replace(BASEDIR, '')
- elif value == _get_my_ip():
- return '10.0.0.1'
- elif value == socket.gethostname() and 'host' in name:
- return 'heat'
- elif value.strip() != value:
- return '"%s"' % value
- return value
-
-
-def _print_opt(opt):
- opt_name, opt_default, opt_help = opt.dest, opt.default, opt.help
- if not opt_help:
- sys.stderr.write('WARNING: "%s" is missing help string.\n' % opt_name)
- opt_help = ""
- opt_type = None
- try:
- opt_type = OPTION_REGEX.search(str(type(opt))).group(0)
- except (ValueError, AttributeError) as err:
- sys.stderr.write("%s\n" % str(err))
- sys.exit(1)
- opt_help += ' (' + OPT_TYPES[opt_type] + ')'
- print('#', "\n# ".join(textwrap.wrap(opt_help, WORDWRAP_WIDTH)))
- try:
- if opt_default is None:
- print('#%s=<None>' % opt_name)
- elif opt_type == STROPT:
- assert(isinstance(opt_default, basestring))
- print('#%s=%s' % (opt_name, _sanitize_default(opt_name,
- opt_default)))
- elif opt_type == BOOLOPT:
- assert(isinstance(opt_default, bool))
- print('#%s=%s' % (opt_name, str(opt_default).lower()))
- elif opt_type == INTOPT:
- assert(isinstance(opt_default, int) and
- not isinstance(opt_default, bool))
- print('#%s=%s' % (opt_name, opt_default))
- elif opt_type == FLOATOPT:
- assert(isinstance(opt_default, float))
- print('#%s=%s' % (opt_name, opt_default))
- elif opt_type == LISTOPT:
- assert(isinstance(opt_default, list))
- print('#%s=%s' % (opt_name, ','.join(opt_default)))
- elif opt_type == MULTISTROPT:
- assert(isinstance(opt_default, list))
- if not opt_default:
- opt_default = ['']
- for default in opt_default:
- print('#%s=%s' % (opt_name, default))
- print('')
- except Exception:
- sys.stderr.write('Error in option "%s"\n' % opt_name)
- sys.exit(1)
-
-
-def main():
- if len(sys.argv) < 2:
- print("usage: %s [srcfile]...\n" % sys.argv[0])
- sys.exit(0)
- generate(sys.argv[1:])
-
-if __name__ == '__main__':
- main()
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Copyright 2011 OpenStack Foundation.
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-"""
-Simple class that stores security context information in the web request.
-
-Projects should subclass this class if they wish to enhance the request
-context or provide additional information in their specific WSGI pipeline.
-"""
-
-import itertools
-
-from heat.openstack.common import uuidutils
-
-
-def generate_request_id():
- return 'req-%s' % uuidutils.generate_uuid()
-
-
-class RequestContext(object):
-
- """Helper class to represent useful information about a request context.
-
- Stores information about the security context under which the user
- accesses the system, as well as additional request information.
- """
-
- def __init__(self, auth_token=None, user=None, tenant=None, is_admin=False,
- read_only=False, show_deleted=False, request_id=None):
- self.auth_token = auth_token
- self.user = user
- self.tenant = tenant
- self.is_admin = is_admin
- self.read_only = read_only
- self.show_deleted = show_deleted
- if not request_id:
- request_id = generate_request_id()
- self.request_id = request_id
-
- def to_dict(self):
- return {'user': self.user,
- 'tenant': self.tenant,
- 'is_admin': self.is_admin,
- 'read_only': self.read_only,
- 'show_deleted': self.show_deleted,
- 'auth_token': self.auth_token,
- 'request_id': self.request_id}
-
-
-def get_admin_context(show_deleted=False):
- context = RequestContext(None,
- tenant=None,
- is_admin=True,
- show_deleted=show_deleted)
- return context
-
-
-def get_context_from_function_and_args(function, args, kwargs):
- """Find an arg of type RequestContext and return it.
-
- This is useful in a couple of decorators where we don't
- know much about the function we're wrapping.
- """
-
- for arg in itertools.chain(kwargs.values(), args):
- if isinstance(arg, RequestContext):
- return arg
-
- return None
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Copyright 2012 Cloudscaling Group, Inc
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Copyright (c) 2013 Rackspace Hosting
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-"""Multiple DB API backend support.
-
-Supported configuration options:
-
-The following two parameters are in the 'database' group:
-`backend`: DB backend name or full module path to DB backend module.
-`use_tpool`: Enable thread pooling of DB API calls.
-
-A DB backend module should implement a method named 'get_backend' which
-takes no arguments. The method can return any object that implements DB
-API methods.
-
-*NOTE*: There are bugs in eventlet when using tpool combined with
-threading locks. The python logging module happens to use such locks. To
-work around this issue, be sure to specify thread=False with
-eventlet.monkey_patch().
-
-A bug for eventlet has been filed here:
-
-https://bitbucket.org/eventlet/eventlet/issue/137/
-"""
-import functools
-
-from oslo.config import cfg
-
-from heat.openstack.common import importutils
-from heat.openstack.common import lockutils
-
-
-db_opts = [
- cfg.StrOpt('backend',
- default='sqlalchemy',
- deprecated_name='db_backend',
- deprecated_group='DEFAULT',
- help='The backend to use for db'),
- cfg.BoolOpt('use_tpool',
- default=False,
- deprecated_name='dbapi_use_tpool',
- deprecated_group='DEFAULT',
- help='Enable the experimental use of thread pooling for '
- 'all DB API calls')
-]
-
-CONF = cfg.CONF
-CONF.register_opts(db_opts, 'database')
-
-
-class DBAPI(object):
- def __init__(self, backend_mapping=None):
- if backend_mapping is None:
- backend_mapping = {}
- self.__backend = None
- self.__backend_mapping = backend_mapping
-
- @lockutils.synchronized('dbapi_backend', 'heat-')
- def __get_backend(self):
-        """Get the actual backend. May be a module or an instance of
-        a class; it doesn't matter to us. We do this synchronized because
-        multiple greenthreads may start making DB calls almost simultaneously,
-        and eventlet can switch threads before self.__backend gets
-        assigned.
-        """
- if self.__backend:
- # Another thread assigned it
- return self.__backend
- backend_name = CONF.database.backend
- self.__use_tpool = CONF.database.use_tpool
- if self.__use_tpool:
- from eventlet import tpool
- self.__tpool = tpool
- # Import the untranslated name if we don't have a
- # mapping.
- backend_path = self.__backend_mapping.get(backend_name,
- backend_name)
- backend_mod = importutils.import_module(backend_path)
- self.__backend = backend_mod.get_backend()
- return self.__backend
-
- def __getattr__(self, key):
- backend = self.__backend or self.__get_backend()
- attr = getattr(backend, key)
- if not self.__use_tpool or not hasattr(attr, '__call__'):
- return attr
-
- def tpool_wrapper(*args, **kwargs):
- return self.__tpool.execute(attr, *args, **kwargs)
-
- functools.update_wrapper(tpool_wrapper, attr)
- return tpool_wrapper
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Copyright 2010 United States Government as represented by the
-# Administrator of the National Aeronautics and Space Administration.
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-"""DB related custom exceptions."""
-
-from heat.openstack.common.gettextutils import _ # noqa
-
-
-class DBError(Exception):
- """Wraps an implementation specific exception."""
- def __init__(self, inner_exception=None):
- self.inner_exception = inner_exception
- super(DBError, self).__init__(str(inner_exception))
-
-
-class DBDuplicateEntry(DBError):
- """Wraps an implementation specific exception."""
- def __init__(self, columns=[], inner_exception=None):
- self.columns = columns
- super(DBDuplicateEntry, self).__init__(inner_exception)
-
-
-class DBDeadlock(DBError):
- def __init__(self, inner_exception=None):
- super(DBDeadlock, self).__init__(inner_exception)
-
-
-class DBInvalidUnicodeParameter(Exception):
- message = _("Invalid Parameter: "
- "Unicode is not supported by the current database.")
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Copyright 2012 Cloudscaling Group, Inc
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
+++ /dev/null
-# coding: utf-8
-#
-# Copyright (c) 2013 OpenStack Foundation
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# Base on code in migrate/changeset/databases/sqlite.py which is under
-# the following license:
-#
-# The MIT License
-#
-# Copyright (c) 2009 Evan Rosson, Jan Dittberner, Domen Kožar
-#
-# Permission is hereby granted, free of charge, to any person obtaining a copy
-# of this software and associated documentation files (the "Software"), to deal
-# in the Software without restriction, including without limitation the rights
-# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
-# copies of the Software, and to permit persons to whom the Software is
-# furnished to do so, subject to the following conditions:
-# The above copyright notice and this permission notice shall be included in
-# all copies or substantial portions of the Software.
-#
-# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
-# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
-# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
-# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
-# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
-
-import re
-
-from migrate.changeset import ansisql
-from migrate.changeset.databases import sqlite
-from sqlalchemy.schema import UniqueConstraint
-
-
-def _get_unique_constraints(self, table):
- """Retrieve information about existing unique constraints of the table
-
- This feature is needed for _recreate_table() to work properly.
- Unfortunately, it's not available in sqlalchemy 0.7.x/0.8.x.
-
- """
-
- data = table.metadata.bind.execute(
- """SELECT sql
- FROM sqlite_master
- WHERE
- type='table' AND
- name=:table_name""",
- table_name=table.name
- ).fetchone()[0]
-
-    UNIQUE_PATTERN = r"CONSTRAINT (\w+) UNIQUE \(([^\)]+)\)"
- return [
- UniqueConstraint(
- *[getattr(table.columns, c.strip(' "')) for c in cols.split(",")],
- name=name
- )
- for name, cols in re.findall(UNIQUE_PATTERN, data)
- ]
-
-
-def _recreate_table(self, table, column=None, delta=None, omit_uniques=None):
- """Recreate the table properly
-
- Unlike the corresponding original method of sqlalchemy-migrate this one
- doesn't drop existing unique constraints when creating a new one.
-
- """
-
- table_name = self.preparer.format_table(table)
-
- # we remove all indexes so as not to have
- # problems during copy and re-create
- for index in table.indexes:
- index.drop()
-
- # reflect existing unique constraints
- for uc in self._get_unique_constraints(table):
- table.append_constraint(uc)
- # omit given unique constraints when creating a new table if required
- table.constraints = set([
- cons for cons in table.constraints
- if omit_uniques is None or cons.name not in omit_uniques
- ])
-
- self.append('ALTER TABLE %s RENAME TO migration_tmp' % table_name)
- self.execute()
-
- insertion_string = self._modify_table(table, column, delta)
-
- table.create(bind=self.connection)
- self.append(insertion_string % {'table_name': table_name})
- self.execute()
- self.append('DROP TABLE migration_tmp')
- self.execute()
-
-
-def _visit_migrate_unique_constraint(self, *p, **k):
- """Drop the given unique constraint
-
-    The corresponding original method of sqlalchemy-migrate just
-    raises a NotImplementedError.
-
- """
-
- self.recreate_table(p[0].table, omit_uniques=[p[0].name])
-
-
-def patch_migrate():
- """A workaround for SQLite's inability to alter things
-
-    SQLite's ability to alter tables is very limited (please read
-    http://www.sqlite.org/lang_altertable.html for more details).
-    E.g. one can't drop a column or a constraint in SQLite. The
- workaround for this is to recreate the original table omitting
- the corresponding constraint (or column).
-
-    The sqlalchemy-migrate library has a recreate_table() method that
-    implements this workaround, but it gets it wrong:
-
- - information about unique constraints of a table
- is not retrieved. So if you have a table with one
- unique constraint and a migration adding another one
- you will end up with a table that has only the
- latter unique constraint, and the former will be lost
-
- - dropping of unique constraints is not supported at all
-
- The proper way to fix this is to provide a pull-request to
- sqlalchemy-migrate, but the project seems to be dead. So we
- can go on with monkey-patching of the lib at least for now.
-
- """
-
- # this patch is needed to ensure that recreate_table() doesn't drop
- # existing unique constraints of the table when creating a new one
- helper_cls = sqlite.SQLiteHelper
- helper_cls.recreate_table = _recreate_table
- helper_cls._get_unique_constraints = _get_unique_constraints
-
- # this patch is needed to be able to drop existing unique constraints
- constraint_cls = sqlite.SQLiteConstraintDropper
- constraint_cls.visit_migrate_unique_constraint = \
- _visit_migrate_unique_constraint
- constraint_cls.__bases__ = (ansisql.ANSIColumnDropper,
- sqlite.SQLiteConstraintGenerator)
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Copyright (c) 2011 X.commerce, a business unit of eBay Inc.
-# Copyright 2010 United States Government as represented by the
-# Administrator of the National Aeronautics and Space Administration.
-# Copyright 2011 Piston Cloud Computing, Inc.
-# Copyright 2012 Cloudscaling Group, Inc.
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-"""
-SQLAlchemy models.
-"""
-
-import six
-
-from sqlalchemy import Column, Integer
-from sqlalchemy import DateTime
-from sqlalchemy.orm import object_mapper
-
-from heat.openstack.common.db.sqlalchemy import session as sa
-from heat.openstack.common import timeutils
-
-
-class ModelBase(object):
- """Base class for models."""
- __table_initialized__ = False
-
- def save(self, session=None):
- """Save this object."""
- if not session:
- session = sa.get_session()
-        # NOTE(boris-42): This part of code should look like:
-        #                     session.add(self)
- # session.flush()
- # But there is a bug in sqlalchemy and eventlet that
- # raises NoneType exception if there is no running
- # transaction and rollback is called. As long as
-        #                 sqlalchemy has this bug we have to create the
-        #                 transaction explicitly.
- with session.begin(subtransactions=True):
- session.add(self)
- session.flush()
-
- def __setitem__(self, key, value):
- setattr(self, key, value)
-
- def __getitem__(self, key):
- return getattr(self, key)
-
- def get(self, key, default=None):
- return getattr(self, key, default)
-
- def __iter__(self):
- columns = dict(object_mapper(self).columns).keys()
- # NOTE(russellb): Allow models to specify other keys that can be looked
- # up, beyond the actual db columns. An example would be the 'name'
- # property for an Instance.
- if hasattr(self, '_extra_keys'):
- columns.extend(self._extra_keys())
- self._i = iter(columns)
- return self
-
- def next(self):
- n = six.advance_iterator(self._i)
- return n, getattr(self, n)
-
- def update(self, values):
- """Make the model object behave like a dict."""
- for k, v in six.iteritems(values):
- setattr(self, k, v)
-
- def iteritems(self):
- """Make the model object behave like a dict.
-
- Includes attributes from joins.
- """
- local = dict(self)
- joined = dict([(k, v) for k, v in six.iteritems(self.__dict__)
- if not k[0] == '_'])
- local.update(joined)
- return local.iteritems()
-
-
-class TimestampMixin(object):
- created_at = Column(DateTime, default=timeutils.utcnow)
- updated_at = Column(DateTime, onupdate=timeutils.utcnow)
-
-
-class SoftDeleteMixin(object):
- deleted_at = Column(DateTime)
- deleted = Column(Integer, default=0)
-
- def soft_delete(self, session=None):
- """Mark this object as deleted."""
- self.deleted = self.id
- self.deleted_at = timeutils.utcnow()
- self.save(session=session)
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Copyright 2010 United States Government as represented by the
-# Administrator of the National Aeronautics and Space Administration.
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-"""Session Handling for SQLAlchemy backend.
-
-Initializing:
-
-* Call set_defaults with at least the following kwargs:
- sql_connection, sqlite_db
-
- Example:
-
- session.set_defaults(
-        sql_connection="sqlite:////var/lib/heat/sqlite.db",
- sqlite_db="/var/lib/heat/sqlite.db")
-
-Recommended ways to use sessions within this framework:
-
-* Don't use them explicitly; this is like running with AUTOCOMMIT=1.
- model_query() will implicitly use a session when called without one
- supplied. This is the ideal situation because it will allow queries
- to be automatically retried if the database connection is interrupted.
-
- Note: Automatic retry will be enabled in a future patch.
-
- It is generally fine to issue several queries in a row like this. Even though
- they may be run in separate transactions and/or separate sessions, each one
- will see the data from the prior calls. If needed, undo- or rollback-like
- functionality should be handled at a logical level. For an example, look at
- the code around quotas and reservation_rollback().
-
- Examples:
-
- def get_foo(context, foo):
- return model_query(context, models.Foo).\
- filter_by(foo=foo).\
- first()
-
- def update_foo(context, id, newfoo):
- model_query(context, models.Foo).\
- filter_by(id=id).\
- update({'foo': newfoo})
-
- def create_foo(context, values):
- foo_ref = models.Foo()
- foo_ref.update(values)
- foo_ref.save()
- return foo_ref
-
-
-* Within the scope of a single method, keeping all the reads and writes within
- the context managed by a single session. In this way, the session's __exit__
- handler will take care of calling flush() and commit() for you.
- If using this approach, you should not explicitly call flush() or commit().
- Any error within the context of the session will cause the session to emit
- a ROLLBACK. If the connection is dropped before this is possible, the
- database will implicitly rollback the transaction.
-
- Note: statements in the session scope will not be automatically retried.
-
- If you create models within the session, they need to be added, but you
- do not need to call model.save()
-
- def create_many_foo(context, foos):
- session = get_session()
- with session.begin():
- for foo in foos:
- foo_ref = models.Foo()
- foo_ref.update(foo)
- session.add(foo_ref)
-
- def update_bar(context, foo_id, newbar):
- session = get_session()
- with session.begin():
- foo_ref = model_query(context, models.Foo, session).\
- filter_by(id=foo_id).\
- first()
- model_query(context, models.Bar, session).\
- filter_by(id=foo_ref['bar_id']).\
- update({'bar': newbar})
-
- Note: update_bar is a trivially simple example of using "with session.begin".
- Whereas create_many_foo is a good example of when a transaction is needed,
- it is always best to use as few queries as possible. The two queries in
- update_bar can be better expressed using a single query which avoids
- the need for an explicit transaction. It can be expressed like so:
-
- def update_bar(context, foo_id, newbar):
- subq = model_query(context, models.Foo.id).\
- filter_by(id=foo_id).\
- limit(1).\
- subquery()
- model_query(context, models.Bar).\
- filter_by(id=subq.as_scalar()).\
- update({'bar': newbar})
-
-  For reference, this emits approximately the following SQL statement:
-
- UPDATE bar SET bar = ${newbar}
- WHERE id=(SELECT bar_id FROM foo WHERE id = ${foo_id} LIMIT 1);
-
-* Passing an active session between methods. Sessions should only be passed
- to private methods. The private method must use a subtransaction; otherwise
- SQLAlchemy will throw an error when you call session.begin() on an existing
- transaction. Public methods should not accept a session parameter and should
- not be involved in sessions within the caller's scope.
-
- Note that this incurs more overhead in SQLAlchemy than the approach
- above, due to nesting transactions, and it is not possible to
- implicitly retry failed database operations when using this approach.
-
- This also makes code somewhat more difficult to read and debug, because a
- single database transaction spans more than one method. Error handling
- becomes less clear in this situation. When this is needed for code clarity,
- it should be clearly documented.
-
- def myfunc(foo):
- session = get_session()
- with session.begin():
- # do some database things
- bar = _private_func(foo, session)
- return bar
-
- def _private_func(foo, session=None):
- if not session:
- session = get_session()
- with session.begin(subtransactions=True):
- # do some other database things
- return bar
-
-
-There are some things which it is best to avoid:
-
-* Don't keep a transaction open any longer than necessary.
-
- This means that your "with session.begin()" block should be as short
- as possible, while still containing all the related calls for that
- transaction.
-
-* Avoid "with_lockmode('UPDATE')" when possible.
-
- In MySQL/InnoDB, when a "SELECT ... FOR UPDATE" query does not match
- any rows, it will take a gap-lock. This is a form of write-lock on the
- "gap" where no rows exist, and prevents any other writes to that space.
- This can effectively prevent any INSERT into a table by locking the gap
- at the end of the index. Similar problems will occur if the SELECT FOR UPDATE
- has an overly broad WHERE clause, or doesn't properly use an index.
-
- One idea proposed at ODS Fall '12 was to use a normal SELECT to test the
- number of rows matching a query, and if only one row is returned,
- then issue the SELECT FOR UPDATE.
-
- The better long-term solution is to use INSERT .. ON DUPLICATE KEY UPDATE.
- However, this can not be done until the "deleted" columns are removed and
- proper UNIQUE constraints are added to the tables.
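The atomic-upsert idea behind INSERT .. ON DUPLICATE KEY UPDATE can be sketched with the stdlib sqlite3 module, whose INSERT OR REPLACE plays a similar role once a UNIQUE constraint exists. This is a hedged illustration only (table name and columns are invented), not the MySQL syntax the text proposes:

```python
import sqlite3

# In-memory table with the UNIQUE constraint the text says is a prerequisite.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE quota (project TEXT UNIQUE, hard_limit INTEGER)")

def upsert_quota(project, hard_limit):
    # One atomic statement: no SELECT ... FOR UPDATE, hence no gap-lock.
    # sqlite's INSERT OR REPLACE stands in here for MySQL's
    # INSERT .. ON DUPLICATE KEY UPDATE discussed above.
    conn.execute("INSERT OR REPLACE INTO quota (project, hard_limit) "
                 "VALUES (?, ?)", (project, hard_limit))

upsert_quota("demo", 10)   # insert
upsert_quota("demo", 20)   # update via the unique key
row = conn.execute("SELECT hard_limit FROM quota "
                   "WHERE project = 'demo'").fetchone()
print(row[0])  # 20
```

Either way, the write is a single statement, so no transaction spans a read-then-write window.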
-
-
-Enabling soft deletes:
-
-* To use/enable soft-deletes, the SoftDeleteMixin must be added
- to your model class. For example:
-
- class NovaBase(models.SoftDeleteMixin, models.ModelBase):
- pass
-
-
-Efficient use of soft deletes:
-
-* There are two possible ways to mark a record as deleted:
- model.soft_delete() and query.soft_delete().
-
- The model.soft_delete() method works with a single, already-fetched
- entry. query.soft_delete() issues only one db request for all entries
- that match the query.
-
-* In almost all cases you should use query.soft_delete(). Some examples:
-
- def soft_delete_bar():
- count = model_query(BarModel).find(some_condition).soft_delete()
- if count == 0:
- raise Exception("0 entries were soft deleted")
-
- def complex_soft_delete_with_synchronization_bar(session=None):
- if session is None:
- session = get_session()
- with session.begin(subtransactions=True):
- count = model_query(BarModel).\
- find(some_condition).\
- soft_delete(synchronize_session=True)
- # Here synchronize_session is required, because we
- # don't know what is going on in outer session.
- if count == 0:
- raise Exception("0 entries were soft deleted")
-
-* There is only one situation where model.soft_delete() is appropriate: when
- you fetch a single record, work with it, and mark it as deleted in the same
- transaction.
-
- def soft_delete_bar_model():
- session = get_session()
- with session.begin():
- bar_ref = model_query(BarModel).find(some_condition).first()
- # Work with bar_ref
- bar_ref.soft_delete(session=session)
-
- However, if you need to work with all entries that match a query and
- then soft delete them, you should use the query.soft_delete() method:
-
- def soft_delete_multi_models():
- session = get_session()
- with session.begin():
- query = model_query(BarModel, session=session).\
- find(some_condition)
- model_refs = query.all()
- # Work with model_refs
- query.soft_delete(synchronize_session=False)
- # synchronize_session=False should be set if there is no outer
- # session and these entries are not used after this.
-
- When working with many rows, it is very important to use query.soft_delete,
- which issues a single query. Using model.soft_delete(), as in the following
- example, is very inefficient.
-
- for bar_ref in bar_refs:
- bar_ref.soft_delete(session=session)
- # This will produce count(bar_refs) db requests.
-"""
-
-import os.path
-import re
-import time
-
-from eventlet import greenthread
-from oslo.config import cfg
-import six
-from sqlalchemy import exc as sqla_exc
-import sqlalchemy.interfaces
-from sqlalchemy.interfaces import PoolListener
-import sqlalchemy.orm
-from sqlalchemy.pool import NullPool, StaticPool
-from sqlalchemy.sql.expression import literal_column
-
-from heat.openstack.common.db import exception
-from heat.openstack.common.gettextutils import _ # noqa
-from heat.openstack.common import log as logging
-from heat.openstack.common import timeutils
-
-sqlite_db_opts = [
- cfg.StrOpt('sqlite_db',
- default='heat.sqlite',
- help='the filename to use with sqlite'),
- cfg.BoolOpt('sqlite_synchronous',
- default=True,
- help='If true, use synchronous mode for sqlite'),
-]
-
-database_opts = [
- cfg.StrOpt('connection',
- default='sqlite:///' +
- os.path.abspath(os.path.join(os.path.dirname(__file__),
- '../', '$sqlite_db')),
- help='The SQLAlchemy connection string used to connect to the '
- 'database',
- deprecated_opts=[cfg.DeprecatedOpt('sql_connection',
- group='DEFAULT'),
- cfg.DeprecatedOpt('sql_connection',
- group='DATABASE')],
- secret=True),
- cfg.StrOpt('slave_connection',
- default='',
- help='The SQLAlchemy connection string used to connect to the '
- 'slave database',
- secret=True),
- cfg.IntOpt('idle_timeout',
- default=3600,
- deprecated_opts=[cfg.DeprecatedOpt('sql_idle_timeout',
- group='DEFAULT'),
- cfg.DeprecatedOpt('sql_idle_timeout',
- group='DATABASE')],
- help='timeout before idle sql connections are reaped'),
- cfg.IntOpt('min_pool_size',
- default=1,
- deprecated_opts=[cfg.DeprecatedOpt('sql_min_pool_size',
- group='DEFAULT'),
- cfg.DeprecatedOpt('sql_min_pool_size',
- group='DATABASE')],
- help='Minimum number of SQL connections to keep open in a '
- 'pool'),
- cfg.IntOpt('max_pool_size',
- default=None,
- deprecated_opts=[cfg.DeprecatedOpt('sql_max_pool_size',
- group='DEFAULT'),
- cfg.DeprecatedOpt('sql_max_pool_size',
- group='DATABASE')],
- help='Maximum number of SQL connections to keep open in a '
- 'pool'),
- cfg.IntOpt('max_retries',
- default=10,
- deprecated_opts=[cfg.DeprecatedOpt('sql_max_retries',
- group='DEFAULT'),
- cfg.DeprecatedOpt('sql_max_retries',
- group='DATABASE')],
- help='maximum db connection retries during startup. '
- '(setting -1 implies an infinite retry count)'),
- cfg.IntOpt('retry_interval',
- default=10,
- deprecated_opts=[cfg.DeprecatedOpt('sql_retry_interval',
- group='DEFAULT'),
- cfg.DeprecatedOpt('reconnect_interval',
- group='DATABASE')],
- help='interval between retries of opening a sql connection'),
- cfg.IntOpt('max_overflow',
- default=None,
- deprecated_opts=[cfg.DeprecatedOpt('sql_max_overflow',
- group='DEFAULT'),
- cfg.DeprecatedOpt('sqlalchemy_max_overflow',
- group='DATABASE')],
- help='If set, use this value for max_overflow with sqlalchemy'),
- cfg.IntOpt('connection_debug',
- default=0,
- deprecated_opts=[cfg.DeprecatedOpt('sql_connection_debug',
- group='DEFAULT')],
- help='Verbosity of SQL debugging information. 0=None, '
- '100=Everything'),
- cfg.BoolOpt('connection_trace',
- default=False,
- deprecated_opts=[cfg.DeprecatedOpt('sql_connection_trace',
- group='DEFAULT')],
- help='Add python stack traces to SQL as comment strings'),
- cfg.IntOpt('pool_timeout',
- default=None,
- deprecated_opts=[cfg.DeprecatedOpt('sqlalchemy_pool_timeout',
- group='DATABASE')],
- help='If set, use this value for pool_timeout with sqlalchemy'),
-]
-
-CONF = cfg.CONF
-CONF.register_opts(sqlite_db_opts)
-CONF.register_opts(database_opts, 'database')
-
-LOG = logging.getLogger(__name__)
-
-_ENGINE = None
-_MAKER = None
-_SLAVE_ENGINE = None
-_SLAVE_MAKER = None
-
-
-def set_defaults(sql_connection, sqlite_db, max_pool_size=None,
- max_overflow=None, pool_timeout=None):
- """Set defaults for configuration variables."""
- cfg.set_defaults(database_opts,
- connection=sql_connection)
- cfg.set_defaults(sqlite_db_opts,
- sqlite_db=sqlite_db)
- # Update the QueuePool defaults
- if max_pool_size is not None:
- cfg.set_defaults(database_opts,
- max_pool_size=max_pool_size)
- if max_overflow is not None:
- cfg.set_defaults(database_opts,
- max_overflow=max_overflow)
- if pool_timeout is not None:
- cfg.set_defaults(database_opts,
- pool_timeout=pool_timeout)
-
-
-def cleanup():
- global _ENGINE, _MAKER
- global _SLAVE_ENGINE, _SLAVE_MAKER
-
- if _MAKER:
- _MAKER.close_all()
- _MAKER = None
- if _ENGINE:
- _ENGINE.dispose()
- _ENGINE = None
- if _SLAVE_MAKER:
- _SLAVE_MAKER.close_all()
- _SLAVE_MAKER = None
- if _SLAVE_ENGINE:
- _SLAVE_ENGINE.dispose()
- _SLAVE_ENGINE = None
-
-
-class SqliteForeignKeysListener(PoolListener):
- """Ensures that the foreign key constraints are enforced in SQLite.
-
- Foreign key constraints are disabled by default in SQLite, so they
- are enabled here for every database connection.
- """
- def connect(self, dbapi_con, con_record):
- dbapi_con.execute('pragma foreign_keys=ON')
-
-
-def get_session(autocommit=True, expire_on_commit=False,
- sqlite_fk=False, slave_session=False):
- """Return a SQLAlchemy session."""
- global _MAKER
- global _SLAVE_MAKER
- maker = _MAKER
-
- if slave_session:
- maker = _SLAVE_MAKER
-
- if maker is None:
- engine = get_engine(sqlite_fk=sqlite_fk, slave_engine=slave_session)
- maker = get_maker(engine, autocommit, expire_on_commit)
-
- if slave_session:
- _SLAVE_MAKER = maker
- else:
- _MAKER = maker
-
- session = maker()
- return session
-
-
-# note(boris-42): In current versions of DB backends unique constraint
-# violation messages follow the structure:
-#
-# sqlite:
-# 1 column - (IntegrityError) column c1 is not unique
-# N columns - (IntegrityError) column c1, c2, ..., N are not unique
-#
-# postgres:
-# 1 column - (IntegrityError) duplicate key value violates unique
-# constraint "users_c1_key"
-# N columns - (IntegrityError) duplicate key value violates unique
-# constraint "name_of_our_constraint"
-#
-# mysql:
-# 1 column - (IntegrityError) (1062, "Duplicate entry 'value_of_c1' for key
-# 'c1'")
-# N columns - (IntegrityError) (1062, "Duplicate entry 'values joined
-# with -' for key 'name_of_our_constraint'")
-_DUP_KEY_RE_DB = {
- "sqlite": re.compile(r"^.*columns?([^)]+)(is|are)\s+not\s+unique$"),
- "postgresql": re.compile(r"^.*duplicate\s+key.*\"([^\"]+)\"\s*\n.*$"),
- "mysql": re.compile(r"^.*\(1062,.*'([^\']+)'\"\)$")
-}
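The patterns above can be exercised directly against messages modeled on the comment (the sample strings are illustrative, not captured from a live server):

```python
import re

# Copies of two of the patterns from _DUP_KEY_RE_DB above.
DUP_SQLITE = re.compile(r"^.*columns?([^)]+)(is|are)\s+not\s+unique$")
DUP_MYSQL = re.compile(r"^.*\(1062,.*'([^\']+)'\"\)$")

# sqlite: the capture group holds the column list; strip and split it.
sqlite_msg = "(IntegrityError) columns c1, c2 are not unique"
sqlite_cols = DUP_SQLITE.match(sqlite_msg).group(1).strip().split(", ")
print(sqlite_cols)  # ['c1', 'c2']

# mysql: the capture group holds the key (constraint) name.
mysql_msg = "(IntegrityError) (1062, \"Duplicate entry 'x' for key 'c1'\")"
mysql_key = DUP_MYSQL.match(mysql_msg).group(1)
print(mysql_key)  # c1
```

This is exactly the post-processing _raise_if_duplicate_entry_error performs before raising DBDuplicateEntry.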
-
-
-def _raise_if_duplicate_entry_error(integrity_error, engine_name):
- """Raise exception if two entries are duplicated.
-
- A DBDuplicateEntry exception is raised if the IntegrityError wraps a
- unique constraint violation.
- """
-
- def get_columns_from_uniq_cons_or_name(columns):
- # note(vsergeyev): UniqueConstraint name convention: "uniq_t0c10c2"
- # where `t` is the table name and `c1`, `c2` are the
- # columns in the UniqueConstraint.
- uniqbase = "uniq_"
- if not columns.startswith(uniqbase):
- if engine_name == "postgresql":
- return [columns[columns.index("_") + 1:columns.rindex("_")]]
- return [columns]
- return columns[len(uniqbase):].split("0")[1:]
-
- if engine_name not in ["mysql", "sqlite", "postgresql"]:
- return
-
- # FIXME(johannes): The usage of the .message attribute has been
- # deprecated since Python 2.6. However, the exceptions raised by
- # SQLAlchemy can differ when using unicode() and accessing .message.
- # An audit across all three supported engines will be necessary to
- # ensure there are no regressions.
- m = _DUP_KEY_RE_DB[engine_name].match(integrity_error.message)
- if not m:
- return
- columns = m.group(1)
-
- if engine_name == "sqlite":
- columns = columns.strip().split(", ")
- else:
- columns = get_columns_from_uniq_cons_or_name(columns)
- raise exception.DBDuplicateEntry(columns, integrity_error)
-
-
-# NOTE(comstud): In current versions of DB backends, Deadlock violation
-# messages follow the structure:
-#
-# mysql:
-# (OperationalError) (1213, 'Deadlock found when trying to get lock; try '
-# 'restarting transaction') <query_str> <query_args>
-_DEADLOCK_RE_DB = {
- "mysql": re.compile(r"^.*\(1213, 'Deadlock.*")
-}
-
-
-def _raise_if_deadlock_error(operational_error, engine_name):
- """Raise exception on deadlock condition.
-
- Raise DBDeadlock exception if OperationalError contains a Deadlock
- condition.
- """
- regex = _DEADLOCK_RE_DB.get(engine_name)
- if regex is None:
- return
- # FIXME(johannes): The usage of the .message attribute has been
- # deprecated since Python 2.6. However, the exceptions raised by
- # SQLAlchemy can differ when using unicode() and accessing .message.
- # An audit across all three supported engines will be necessary to
- # ensure there are no regressions.
- m = regex.match(operational_error.message)
- if not m:
- return
- raise exception.DBDeadlock(operational_error)
-
-
-def _wrap_db_error(f):
- def _wrap(*args, **kwargs):
- try:
- return f(*args, **kwargs)
- except UnicodeEncodeError:
- raise exception.DBInvalidUnicodeParameter()
- # note(boris-42): We should catch unique constraint violation and
- # wrap it by our own DBDuplicateEntry exception. Unique constraint
- # violation is wrapped by IntegrityError.
- except sqla_exc.OperationalError as e:
- _raise_if_deadlock_error(e, get_engine().name)
- # NOTE(comstud): A lot of code is checking for OperationalError
- # so let's not wrap it for now.
- raise
- except sqla_exc.IntegrityError as e:
- # note(boris-42): SqlAlchemy doesn't unify errors from different
- # DBs so we must do this. Also in some tables (for example
- # instance_types) there are more than one unique constraint. This
- # means we should get names of columns, which values violate
- # unique constraint, from error message.
- _raise_if_duplicate_entry_error(e, get_engine().name)
- raise exception.DBError(e)
- except Exception as e:
- LOG.exception(_('DB exception wrapped.'))
- raise exception.DBError(e)
- _wrap.func_name = f.func_name
- return _wrap
-
-
-def get_engine(sqlite_fk=False, slave_engine=False):
- """Return a SQLAlchemy engine."""
- global _ENGINE
- global _SLAVE_ENGINE
- engine = _ENGINE
- db_uri = CONF.database.connection
-
- if slave_engine:
- engine = _SLAVE_ENGINE
- db_uri = CONF.database.slave_connection
-
- if engine is None:
- engine = create_engine(db_uri,
- sqlite_fk=sqlite_fk)
- if slave_engine:
- _SLAVE_ENGINE = engine
- else:
- _ENGINE = engine
-
- return engine
-
-
-def _synchronous_switch_listener(dbapi_conn, connection_rec):
- """Switch sqlite connections to non-synchronous mode."""
- dbapi_conn.execute("PRAGMA synchronous = OFF")
-
-
-def _add_regexp_listener(dbapi_con, con_record):
- """Add REGEXP function to sqlite connections."""
-
- def regexp(expr, item):
- reg = re.compile(expr)
- return reg.search(six.text_type(item)) is not None
- dbapi_con.create_function('regexp', 2, regexp)
-
-
-def _greenthread_yield(dbapi_con, con_record):
- """Ensure other greenthreads get a chance to be executed.
-
- Force a context switch. With common database backends (eg MySQLdb and
- sqlite), there is no implicit yield caused by network I/O since they are
- implemented by C libraries that eventlet cannot monkey patch.
- """
- greenthread.sleep(0)
-
-
-def _ping_listener(dbapi_conn, connection_rec, connection_proxy):
- """Ensures that MySQL connections checked out of the pool are alive.
-
- Borrowed from:
- http://groups.google.com/group/sqlalchemy/msg/a4ce563d802c929f
- """
- try:
- dbapi_conn.cursor().execute('select 1')
- except dbapi_conn.OperationalError as ex:
- if ex.args[0] in (2006, 2013, 2014, 2045, 2055):
- LOG.warn(_('Got mysql server has gone away: %s'), ex)
- raise sqla_exc.DisconnectionError("Database server went away")
- else:
- raise
-
-
-def _is_db_connection_error(args):
- """Return True if error in connecting to db."""
- # NOTE(adam_g): This is currently MySQL specific and needs to be extended
- # to support Postgres and others.
- conn_err_codes = ('2002', '2003', '2006')
- for err_code in conn_err_codes:
- if args.find(err_code) != -1:
- return True
- return False
-
-
-def create_engine(sql_connection, sqlite_fk=False):
- """Return a new SQLAlchemy engine."""
- # NOTE(geekinutah): At this point we could be connecting to the normal
- # db handle or the slave db handle. Things like
- # _wrap_db_error aren't going to work well if their
- # backends don't match. Let's check.
- _assert_matching_drivers()
- connection_dict = sqlalchemy.engine.url.make_url(sql_connection)
-
- engine_args = {
- "pool_recycle": CONF.database.idle_timeout,
- "echo": False,
- 'convert_unicode': True,
- }
-
- # Map our SQL debug level to SQLAlchemy's options
- if CONF.database.connection_debug >= 100:
- engine_args['echo'] = 'debug'
- elif CONF.database.connection_debug >= 50:
- engine_args['echo'] = True
-
- if "sqlite" in connection_dict.drivername:
- if sqlite_fk:
- engine_args["listeners"] = [SqliteForeignKeysListener()]
- engine_args["poolclass"] = NullPool
-
- if CONF.database.connection == "sqlite://":
- engine_args["poolclass"] = StaticPool
- engine_args["connect_args"] = {'check_same_thread': False}
- else:
- if CONF.database.max_pool_size is not None:
- engine_args['pool_size'] = CONF.database.max_pool_size
- if CONF.database.max_overflow is not None:
- engine_args['max_overflow'] = CONF.database.max_overflow
- if CONF.database.pool_timeout is not None:
- engine_args['pool_timeout'] = CONF.database.pool_timeout
-
- engine = sqlalchemy.create_engine(sql_connection, **engine_args)
-
- sqlalchemy.event.listen(engine, 'checkin', _greenthread_yield)
-
- if 'mysql' in connection_dict.drivername:
- sqlalchemy.event.listen(engine, 'checkout', _ping_listener)
- elif 'sqlite' in connection_dict.drivername:
- if not CONF.sqlite_synchronous:
- sqlalchemy.event.listen(engine, 'connect',
- _synchronous_switch_listener)
- sqlalchemy.event.listen(engine, 'connect', _add_regexp_listener)
-
- if (CONF.database.connection_trace and
- engine.dialect.dbapi.__name__ == 'MySQLdb'):
- _patch_mysqldb_with_stacktrace_comments()
-
- try:
- engine.connect()
- except sqla_exc.OperationalError as e:
- if not _is_db_connection_error(e.args[0]):
- raise
-
- remaining = CONF.database.max_retries
- if remaining == -1:
- remaining = 'infinite'
- while True:
- msg = _('SQL connection failed. %s attempts left.')
- LOG.warn(msg % remaining)
- if remaining != 'infinite':
- remaining -= 1
- time.sleep(CONF.database.retry_interval)
- try:
- engine.connect()
- break
- except sqla_exc.OperationalError as e:
- if (remaining != 'infinite' and remaining == 0) or \
- not _is_db_connection_error(e.args[0]):
- raise
- return engine
-
-
-class Query(sqlalchemy.orm.query.Query):
- """Subclass of sqlalchemy.query with soft_delete() method."""
- def soft_delete(self, synchronize_session='evaluate'):
- return self.update({'deleted': literal_column('id'),
- 'updated_at': literal_column('updated_at'),
- 'deleted_at': timeutils.utcnow()},
- synchronize_session=synchronize_session)
-
-
-class Session(sqlalchemy.orm.session.Session):
- """Custom Session class to avoid SqlAlchemy Session monkey patching."""
- @_wrap_db_error
- def query(self, *args, **kwargs):
- return super(Session, self).query(*args, **kwargs)
-
- @_wrap_db_error
- def flush(self, *args, **kwargs):
- return super(Session, self).flush(*args, **kwargs)
-
- @_wrap_db_error
- def execute(self, *args, **kwargs):
- return super(Session, self).execute(*args, **kwargs)
-
-
-def get_maker(engine, autocommit=True, expire_on_commit=False):
- """Return a SQLAlchemy sessionmaker using the given engine."""
- return sqlalchemy.orm.sessionmaker(bind=engine,
- class_=Session,
- autocommit=autocommit,
- expire_on_commit=expire_on_commit,
- query_cls=Query)
-
-
-def _patch_mysqldb_with_stacktrace_comments():
- """Adds current stack trace as a comment in queries.
-
- Patches MySQLdb.cursors.BaseCursor._do_query.
- """
- import MySQLdb.cursors
- import traceback
-
- old_mysql_do_query = MySQLdb.cursors.BaseCursor._do_query
-
- def _do_query(self, q):
- stack = ''
- for file, line, method, function in traceback.extract_stack():
- # exclude various common things from trace
- if file.endswith('session.py') and method == '_do_query':
- continue
- if file.endswith('api.py') and method == 'wrapper':
- continue
- if file.endswith('utils.py') and method == '_inner':
- continue
- if file.endswith('exception.py') and method == '_wrap':
- continue
- # db/api is just a wrapper around db/sqlalchemy/api
- if file.endswith('db/api.py'):
- continue
- # only trace inside heat
- index = file.rfind('heat')
- if index == -1:
- continue
- stack += "File:%s:%s Method:%s() Line:%s | " \
- % (file[index:], line, method, function)
-
- # strip trailing " | " from stack
- if stack:
- stack = stack[:-3]
- qq = "%s /* %s */" % (q, stack)
- else:
- qq = q
- old_mysql_do_query(self, qq)
-
- setattr(MySQLdb.cursors.BaseCursor, '_do_query', _do_query)
-
-
-def _assert_matching_drivers():
- """Make sure slave handle and normal handle have the same driver."""
- # NOTE(geekinutah): There's no use case for writing to one backend and
- # reading from another. Who knows what the future holds?
- if CONF.database.slave_connection == '':
- return
-
- normal = sqlalchemy.engine.url.make_url(CONF.database.connection)
- slave = sqlalchemy.engine.url.make_url(CONF.database.slave_connection)
- assert normal.drivername == slave.drivername
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Copyright 2010 United States Government as represented by the
-# Administrator of the National Aeronautics and Space Administration.
-# Copyright 2010-2011 OpenStack Foundation.
-# Copyright 2012 Justin Santa Barbara
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from migrate.changeset import UniqueConstraint
-import sqlalchemy
-from sqlalchemy import Boolean
-from sqlalchemy import CheckConstraint
-from sqlalchemy import Column
-from sqlalchemy.engine import reflection
-from sqlalchemy.ext.compiler import compiles
-from sqlalchemy import func
-from sqlalchemy import Index
-from sqlalchemy import Integer
-from sqlalchemy import MetaData
-from sqlalchemy.sql.expression import literal_column
-from sqlalchemy.sql.expression import UpdateBase
-from sqlalchemy.sql import select
-from sqlalchemy import String
-from sqlalchemy import Table
-from sqlalchemy.types import NullType
-
-from heat.openstack.common.gettextutils import _ # noqa
-
-from heat.openstack.common import exception
-from heat.openstack.common import log as logging
-from heat.openstack.common import timeutils
-
-
-LOG = logging.getLogger(__name__)
-
-
-class InvalidSortKey(Exception):
- message = _("Sort key supplied was not valid.")
-
-
-# copy from glance/db/sqlalchemy/api.py
-def paginate_query(query, model, limit, sort_keys, marker=None,
- sort_dir=None, sort_dirs=None):
- """Returns a query with sorting / pagination criteria added.
-
- Pagination works by requiring a unique sort_key, specified by sort_keys.
- (If sort_keys is not unique, then we risk looping through values.)
- We use the last row in the previous page as the 'marker' for pagination.
- So we must return values that follow the passed marker in the order.
- With a single-valued sort_key, this would be easy: sort_key > X.
- With a compound sort_key (k1, k2, k3), we must do this to repeat
- the lexicographical ordering:
- (k1 > X1) or (k1 == X1 && k2 > X2) or (k1 == X1 && k2 == X2 && k3 > X3)
-
- We also have to cope with different sort_directions.
-
- Typically, the id of the last row is used as the client-facing pagination
- marker, then the actual marker object must be fetched from the db and
- passed in to us as marker.
-
- :param query: the query object to which we should add paging/sorting
- :param model: the ORM model class
- :param limit: maximum number of items to return
- :param sort_keys: array of attributes by which results should be sorted
- :param marker: the last item of the previous page; we return the
- results that follow this value.
- :param sort_dir: direction in which results should be sorted (asc, desc)
- :param sort_dirs: per-column array of sort_dirs, corresponding to sort_keys
-
- :rtype: sqlalchemy.orm.query.Query
- :return: The query with sorting/pagination added.
- """
-
- if 'id' not in sort_keys:
- # TODO(justinsb): If this ever gives a false-positive, check
- # the actual primary key, rather than assuming it is 'id'
- LOG.warn(_('Id not in sort_keys; is sort_keys unique?'))
-
- assert(not (sort_dir and sort_dirs))
-
- # Default the sort direction to ascending
- if sort_dirs is None and sort_dir is None:
- sort_dir = 'asc'
-
- # Ensure a per-column sort direction
- if sort_dirs is None:
- sort_dirs = [sort_dir for _sort_key in sort_keys]
-
- assert(len(sort_dirs) == len(sort_keys))
-
- # Add sorting
- for current_sort_key, current_sort_dir in zip(sort_keys, sort_dirs):
- try:
- sort_dir_func = {
- 'asc': sqlalchemy.asc,
- 'desc': sqlalchemy.desc,
- }[current_sort_dir]
- except KeyError:
- raise ValueError(_("Unknown sort direction, "
- "must be 'desc' or 'asc'"))
- try:
- sort_key_attr = getattr(model, current_sort_key)
- except AttributeError:
- raise InvalidSortKey()
- query = query.order_by(sort_dir_func(sort_key_attr))
-
- # Add pagination
- if marker is not None:
- marker_values = []
- for sort_key in sort_keys:
- v = getattr(marker, sort_key)
- marker_values.append(v)
-
- # Build up an array of sort criteria as in the docstring
- criteria_list = []
- for i in range(0, len(sort_keys)):
- crit_attrs = []
- for j in range(0, i):
- model_attr = getattr(model, sort_keys[j])
- crit_attrs.append((model_attr == marker_values[j]))
-
- model_attr = getattr(model, sort_keys[i])
- if sort_dirs[i] == 'desc':
- crit_attrs.append((model_attr < marker_values[i]))
- else:
- crit_attrs.append((model_attr > marker_values[i]))
-
- criteria = sqlalchemy.sql.and_(*crit_attrs)
- criteria_list.append(criteria)
-
- f = sqlalchemy.sql.or_(*criteria_list)
- query = query.filter(f)
-
- if limit is not None:
- query = query.limit(limit)
-
- return query
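The compound-marker expansion from the docstring, (k1 > X1) or (k1 == X1 and k2 > X2) or ..., is just lexicographic tuple ordering. A quick pure-Python check of that equivalence, using invented sample rows and assuming ascending sort on every key:

```python
def after_marker(row, marker):
    # OR-of-ANDs expansion from the paginate_query docstring,
    # assuming ascending sort direction for every key.
    criteria = []
    for i in range(len(marker)):
        clause = [row[j] == marker[j] for j in range(i)]  # equal prefix
        clause.append(row[i] > marker[i])                 # strictly after
        criteria.append(all(clause))
    return any(criteria)

marker = (2, 5, 7)
rows = [(1, 9, 9), (2, 5, 7), (2, 5, 8), (2, 6, 0), (3, 0, 0)]
expanded = [after_marker(r, marker) for r in rows]
print(expanded)                                # [False, False, True, True, True]
print(expanded == [r > marker for r in rows])  # True: same as tuple ordering
```

Note the marker row itself is excluded, which is why pagination resumes strictly after the last row of the previous page.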
-
-
-def get_table(engine, name):
- """Returns an sqlalchemy table dynamically from db.
-
- Needed because the models don't work for us in migrations
- as models will be far out of sync with the current data.
- """
- metadata = MetaData()
- metadata.bind = engine
- return Table(name, metadata, autoload=True)
-
-
-class InsertFromSelect(UpdateBase):
- """Form the base for `INSERT INTO table (SELECT ... )` statement."""
- def __init__(self, table, select):
- self.table = table
- self.select = select
-
-
-@compiles(InsertFromSelect)
-def visit_insert_from_select(element, compiler, **kw):
- """Form the `INSERT INTO table (SELECT ... )` statement."""
- return "INSERT INTO %s %s" % (
- compiler.process(element.table, asfrom=True),
- compiler.process(element.select))
-
-
-def _get_not_supported_column(col_name_col_instance, column_name):
- try:
- column = col_name_col_instance[column_name]
- except KeyError:
- msg = _("Please specify column %s in col_name_col_instance "
- "param. It is required because the column has a type "
- "that is not supported by sqlite.")
- raise exception.OpenstackException(message=msg % column_name)
-
- if not isinstance(column, Column):
- msg = _("col_name_col_instance param has wrong type of "
- "column instance for column %s. It should be an "
- "instance of sqlalchemy.Column.")
- raise exception.OpenstackException(message=msg % column_name)
- return column
-
-
-def drop_unique_constraint(migrate_engine, table_name, uc_name, *columns,
- **col_name_col_instance):
- """Drop unique constraint from table.
-
- This method drops the UC from the table and works for mysql,
- postgresql and sqlite. In mysql and postgresql we are able to use
- the "alter table" construct.
- Sqlalchemy doesn't support some sqlite column types and replaces their
- type with NullType in metadata. We process these columns and replace
- NullType with the correct column type.
-
- :param migrate_engine: sqlalchemy engine
- :param table_name: name of the table that contains the unique constraint.
- :param uc_name: name of the unique constraint that will be dropped.
- :param columns: columns that are in the unique constraint.
- :param col_name_col_instance: contains pairs column_name=column_instance.
- column_instance is an instance of Column. These params
- are required only for columns whose types are not
- supported by sqlite. For example BigInteger.
- """
-
- meta = MetaData()
- meta.bind = migrate_engine
- t = Table(table_name, meta, autoload=True)
-
- if migrate_engine.name == "sqlite":
- override_cols = [
- _get_not_supported_column(col_name_col_instance, col.name)
- for col in t.columns
- if isinstance(col.type, NullType)
- ]
- for col in override_cols:
- t.columns.replace(col)
-
- uc = UniqueConstraint(*columns, table=t, name=uc_name)
- uc.drop()
-
-
-def drop_old_duplicate_entries_from_table(migrate_engine, table_name,
- use_soft_delete, *uc_column_names):
- """Drop all old rows having the same values for columns in uc_columns.
-
- This method drops (or marks as `deleted` if use_soft_delete is True)
- old duplicate rows from the table named `table_name`.
-
- :param migrate_engine: Sqlalchemy engine
- :param table_name: Table with duplicates
- :param use_soft_delete: If True - values will be marked as `deleted`,
- if False - values will be removed from table
- :param uc_column_names: Unique constraint columns
- """
- meta = MetaData()
- meta.bind = migrate_engine
-
- table = Table(table_name, meta, autoload=True)
- columns_for_group_by = [table.c[name] for name in uc_column_names]
-
- columns_for_select = [func.max(table.c.id)]
- columns_for_select.extend(columns_for_group_by)
-
- duplicated_rows_select = select(columns_for_select,
- group_by=columns_for_group_by,
- having=func.count(table.c.id) > 1)
-
- for row in migrate_engine.execute(duplicated_rows_select):
- # NOTE(boris-42): Do not remove row that has the biggest ID.
- delete_condition = table.c.id != row[0]
- is_none = None # workaround for pyflakes
- delete_condition &= table.c.deleted_at == is_none
- for name in uc_column_names:
- delete_condition &= table.c[name] == row[name]
-
- rows_to_delete_select = select([table.c.id]).where(delete_condition)
- for row in migrate_engine.execute(rows_to_delete_select).fetchall():
- LOG.info(_("Deleting duplicated row with id: %(id)s from table: "
- "%(table)s") % dict(id=row[0], table=table_name))
-
- if use_soft_delete:
- delete_statement = table.update().\
- where(delete_condition).\
- values({
- 'deleted': literal_column('id'),
- 'updated_at': literal_column('updated_at'),
- 'deleted_at': timeutils.utcnow()
- })
- else:
- delete_statement = table.delete().where(delete_condition)
- migrate_engine.execute(delete_statement)
-
-
-def _get_default_deleted_value(table):
- if isinstance(table.c.id.type, Integer):
- return 0
- if isinstance(table.c.id.type, String):
- return ""
- raise exception.OpenstackException(
- message=_("Unsupported id columns type"))
-
-
-def _restore_indexes_on_deleted_columns(migrate_engine, table_name, indexes):
- table = get_table(migrate_engine, table_name)
-
- insp = reflection.Inspector.from_engine(migrate_engine)
- real_indexes = insp.get_indexes(table_name)
- existing_index_names = dict(
- [(index['name'], index['column_names']) for index in real_indexes])
-
- # NOTE(boris-42): Restore indexes on `deleted` column
- for index in indexes:
- if 'deleted' not in index['column_names']:
- continue
- name = index['name']
- if name in existing_index_names:
- column_names = [table.c[c] for c in existing_index_names[name]]
- old_index = Index(name, *column_names, unique=index["unique"])
- old_index.drop(migrate_engine)
-
- column_names = [table.c[c] for c in index['column_names']]
- new_index = Index(index["name"], *column_names, unique=index["unique"])
- new_index.create(migrate_engine)
-
-
-def change_deleted_column_type_to_boolean(migrate_engine, table_name,
- **col_name_col_instance):
- if migrate_engine.name == "sqlite":
- return _change_deleted_column_type_to_boolean_sqlite(
- migrate_engine, table_name, **col_name_col_instance)
- insp = reflection.Inspector.from_engine(migrate_engine)
- indexes = insp.get_indexes(table_name)
-
- table = get_table(migrate_engine, table_name)
-
- old_deleted = Column('old_deleted', Boolean, default=False)
- old_deleted.create(table, populate_default=False)
-
- table.update().\
- where(table.c.deleted == table.c.id).\
- values(old_deleted=True).\
- execute()
-
- table.c.deleted.drop()
- table.c.old_deleted.alter(name="deleted")
-
- _restore_indexes_on_deleted_columns(migrate_engine, table_name, indexes)
-
-
-def _change_deleted_column_type_to_boolean_sqlite(migrate_engine, table_name,
- **col_name_col_instance):
- insp = reflection.Inspector.from_engine(migrate_engine)
- table = get_table(migrate_engine, table_name)
-
- columns = []
- for column in table.columns:
- column_copy = None
- if column.name != "deleted":
- if isinstance(column.type, NullType):
- column_copy = _get_not_supported_column(col_name_col_instance,
- column.name)
- else:
- column_copy = column.copy()
- else:
- column_copy = Column('deleted', Boolean, default=0)
- columns.append(column_copy)
-
- constraints = [constraint.copy() for constraint in table.constraints]
-
- meta = MetaData(bind=migrate_engine)
- new_table = Table(table_name + "__tmp__", meta,
- *(columns + constraints))
- new_table.create()
-
- indexes = []
- for index in insp.get_indexes(table_name):
- column_names = [new_table.c[c] for c in index['column_names']]
- indexes.append(Index(index["name"], *column_names,
- unique=index["unique"]))
-
- c_select = []
- for c in table.c:
- if c.name != "deleted":
- c_select.append(c)
- else:
- c_select.append(table.c.deleted == table.c.id)
-
- ins = InsertFromSelect(new_table, select(c_select))
- migrate_engine.execute(ins)
-
- table.drop()
- [index.create(migrate_engine) for index in indexes]
-
- new_table.rename(table_name)
- new_table.update().\
- where(new_table.c.deleted == new_table.c.id).\
- values(deleted=True).\
- execute()
-
-
-def change_deleted_column_type_to_id_type(migrate_engine, table_name,
- **col_name_col_instance):
- if migrate_engine.name == "sqlite":
- return _change_deleted_column_type_to_id_type_sqlite(
- migrate_engine, table_name, **col_name_col_instance)
- insp = reflection.Inspector.from_engine(migrate_engine)
- indexes = insp.get_indexes(table_name)
-
- table = get_table(migrate_engine, table_name)
-
- new_deleted = Column('new_deleted', table.c.id.type,
- default=_get_default_deleted_value(table))
- new_deleted.create(table, populate_default=True)
-
- deleted = True # workaround for pyflakes
- table.update().\
- where(table.c.deleted == deleted).\
- values(new_deleted=table.c.id).\
- execute()
- table.c.deleted.drop()
- table.c.new_deleted.alter(name="deleted")
-
- _restore_indexes_on_deleted_columns(migrate_engine, table_name, indexes)
-
-
-def _change_deleted_column_type_to_id_type_sqlite(migrate_engine, table_name,
- **col_name_col_instance):
- # NOTE(boris-42): sqlalchemy-migrate can't drop a column with check
- # constraints in a sqlite DB, and our `deleted` column has
- # 2 check constraints. So there is only one way to remove
- # these constraints:
- # 1) Create a new table with the same columns, constraints
- # and indexes (except the deleted column).
- # 2) Copy all data from old to new table.
- # 3) Drop old table.
- # 4) Rename new table to old table name.
- insp = reflection.Inspector.from_engine(migrate_engine)
- meta = MetaData(bind=migrate_engine)
- table = Table(table_name, meta, autoload=True)
- default_deleted_value = _get_default_deleted_value(table)
-
- columns = []
- for column in table.columns:
- column_copy = None
- if column.name != "deleted":
- if isinstance(column.type, NullType):
- column_copy = _get_not_supported_column(col_name_col_instance,
- column.name)
- else:
- column_copy = column.copy()
- else:
- column_copy = Column('deleted', table.c.id.type,
- default=default_deleted_value)
- columns.append(column_copy)
-
- def is_deleted_column_constraint(constraint):
- # NOTE(boris-42): There is no other way to check whether a
- # CheckConstraint is associated with the deleted column.
- if not isinstance(constraint, CheckConstraint):
- return False
- sqltext = str(constraint.sqltext)
- return (sqltext.endswith("deleted in (0, 1)") or
- sqltext.endswith("deleted IN (:deleted_1, :deleted_2)"))
-
- constraints = []
- for constraint in table.constraints:
- if not is_deleted_column_constraint(constraint):
- constraints.append(constraint.copy())
-
- new_table = Table(table_name + "__tmp__", meta,
- *(columns + constraints))
- new_table.create()
-
- indexes = []
- for index in insp.get_indexes(table_name):
- column_names = [new_table.c[c] for c in index['column_names']]
- indexes.append(Index(index["name"], *column_names,
- unique=index["unique"]))
-
- ins = InsertFromSelect(new_table, table.select())
- migrate_engine.execute(ins)
-
- table.drop()
- [index.create(migrate_engine) for index in indexes]
-
- new_table.rename(table_name)
- deleted = True # workaround for pyflakes
- new_table.update().\
- where(new_table.c.deleted == deleted).\
- values(deleted=new_table.c.id).\
- execute()
-
- # NOTE(boris-42): Fix value of deleted column: False -> "" or 0.
- deleted = False # workaround for pyflakes
- new_table.update().\
- where(new_table.c.deleted == deleted).\
- values(deleted=default_deleted_value).\
- execute()
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Copyright (c) 2012 OpenStack Foundation.
-# Administrator of the National Aeronautics and Space Administration.
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from __future__ import print_function
-
-import errno
-import gc
-import os
-import pprint
-import socket
-import sys
-import traceback
-
-import eventlet
-import eventlet.backdoor
-import greenlet
-from oslo.config import cfg
-
-from heat.openstack.common.gettextutils import _ # noqa
-from heat.openstack.common import log as logging
-
-help_for_backdoor_port = (
- "Acceptable values are 0, <port>, and <start>:<end>, where 0 results "
- "in listening on a random tcp port number; <port> results in listening "
- "on the specified port number (and not enabling backdoor if that port "
- "is in use); and <start>:<end> results in listening on the smallest "
- "unused port number within the specified range of port numbers. The "
- "chosen port is displayed in the service's log file.")
-eventlet_backdoor_opts = [
- cfg.StrOpt('backdoor_port',
- default=None,
- help="Enable eventlet backdoor. %s" % help_for_backdoor_port)
-]
-
-CONF = cfg.CONF
-CONF.register_opts(eventlet_backdoor_opts)
-LOG = logging.getLogger(__name__)
-
-
-class EventletBackdoorConfigValueError(Exception):
- def __init__(self, port_range, help_msg, ex):
- msg = ('Invalid backdoor_port configuration %(range)s: %(ex)s. '
- '%(help)s' %
- {'range': port_range, 'ex': ex, 'help': help_msg})
- super(EventletBackdoorConfigValueError, self).__init__(msg)
- self.port_range = port_range
-
-
-def _dont_use_this():
- print("Don't use this, just disconnect instead")
-
-
-def _find_objects(t):
- return filter(lambda o: isinstance(o, t), gc.get_objects())
-
-
-def _print_greenthreads():
- for i, gt in enumerate(_find_objects(greenlet.greenlet)):
- print(i, gt)
- traceback.print_stack(gt.gr_frame)
- print()
-
-
-def _print_nativethreads():
- for threadId, stack in sys._current_frames().items():
- print(threadId)
- traceback.print_stack(stack)
- print()
-
-
-def _parse_port_range(port_range):
- if ':' not in port_range:
- start, end = port_range, port_range
- else:
- start, end = port_range.split(':', 1)
- try:
- start, end = int(start), int(end)
- if end < start:
- raise ValueError
- return start, end
- except ValueError as ex:
- raise EventletBackdoorConfigValueError(port_range, ex,
- help_for_backdoor_port)
-
-
-def _listen(host, start_port, end_port, listen_func):
- try_port = start_port
- while True:
- try:
- return listen_func((host, try_port))
- except socket.error as exc:
- if (exc.errno != errno.EADDRINUSE or
- try_port >= end_port):
- raise
- try_port += 1
-
-
-def initialize_if_enabled():
- backdoor_locals = {
- 'exit': _dont_use_this, # So we don't exit the entire process
- 'quit': _dont_use_this, # So we don't exit the entire process
- 'fo': _find_objects,
- 'pgt': _print_greenthreads,
- 'pnt': _print_nativethreads,
- }
-
- if CONF.backdoor_port is None:
- return None
-
- start_port, end_port = _parse_port_range(str(CONF.backdoor_port))
-
- # NOTE(johannes): The standard sys.displayhook will print the value of
- # the last expression and set it to __builtin__._, which overwrites
- # the __builtin__._ that gettext sets. Let's switch to using pprint
- # since it won't interact poorly with gettext, and it's easier to
- # read the output too.
- def displayhook(val):
- if val is not None:
- pprint.pprint(val)
- sys.displayhook = displayhook
-
- sock = _listen('localhost', start_port, end_port, eventlet.listen)
-
- # In the case of backdoor port being zero, a port number is assigned by
- # listen(). In any case, pull the port number out here.
- port = sock.getsockname()[1]
- LOG.info(_('Eventlet backdoor listening on %(port)s for process %(pid)d') %
- {'port': port, 'pid': os.getpid()})
- eventlet.spawn_n(eventlet.backdoor.backdoor_server, sock,
- locals=backdoor_locals)
- return port
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Copyright 2011 OpenStack Foundation.
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-"""
-Exceptions common to OpenStack projects
-"""
-
-import logging
-
-_FATAL_EXCEPTION_FORMAT_ERRORS = False
-
-
-class Error(Exception):
- def __init__(self, message=None):
- super(Error, self).__init__(message)
-
-
-class ApiError(Error):
- def __init__(self, message='Unknown', code='Unknown'):
- self.message = message
- self.code = code
- super(ApiError, self).__init__('%s: %s' % (code, message))
-
-
-class NotFound(Error):
- pass
-
-
-class UnknownScheme(Error):
-
- msg = "Unknown scheme '%s' found in URI"
-
- def __init__(self, scheme):
- msg = self.__class__.msg % scheme
- super(UnknownScheme, self).__init__(msg)
-
-
-class BadStoreUri(Error):
-
- msg = "The Store URI %s was malformed. Reason: %s"
-
- def __init__(self, uri, reason):
- msg = self.__class__.msg % (uri, reason)
- super(BadStoreUri, self).__init__(msg)
-
-
-class Duplicate(Error):
- pass
-
-
-class NotAuthorized(Error):
- pass
-
-
-class NotEmpty(Error):
- pass
-
-
-class Invalid(Error):
- pass
-
-
-class BadInputError(Exception):
- """Error resulting from a client sending bad input to a server"""
- pass
-
-
-class MissingArgumentError(Error):
- pass
-
-
-class DatabaseMigrationError(Error):
- pass
-
-
-class ClientConnectionError(Exception):
- """Error resulting from a client connecting to a server"""
- pass
-
-
-def wrap_exception(f):
- def _wrap(*args, **kw):
- try:
- return f(*args, **kw)
- except Exception as e:
- if not isinstance(e, Error):
- #exc_type, exc_value, exc_traceback = sys.exc_info()
- logging.exception(_('Uncaught exception'))
- #logging.error(traceback.extract_stack(exc_traceback))
- raise Error(str(e))
- raise
- _wrap.func_name = f.func_name
- return _wrap
-
-
-class OpenstackException(Exception):
- """Base Exception class.
-
- To correctly use this class, inherit from it and define
- a 'message' property. That message will get printf'd
- with the keyword arguments provided to the constructor.
- """
- message = "An unknown exception occurred"
-
- def __init__(self, **kwargs):
- try:
- self.kwargs = kwargs
- self._error_string = self.message % kwargs
-
- except Exception as e:
- if _FATAL_EXCEPTION_FORMAT_ERRORS:
- raise e
- else:
- # at least get the core message out if something happened
- self._error_string = self.message
-
- def __str__(self):
- return self._error_string
-
-
-class MalformedRequestBody(OpenstackException):
- message = "Malformed message body: %(reason)s"
-
-
-class InvalidContentType(OpenstackException):
- message = "Invalid content type %(content_type)s"
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Copyright 2011 OpenStack Foundation.
-# Copyright 2012, Red Hat, Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-"""
-Exception related utilities.
-"""
-
-import logging
-import sys
-import time
-import traceback
-
-from heat.openstack.common.gettextutils import _ # noqa
-
-
-class save_and_reraise_exception(object):
- """Save current exception, run some code and then re-raise.
-
- In some cases the exception context can be cleared, resulting in an
- attempt to re-raise None after an exception handler is run. This
- can happen when eventlet switches greenthreads, or when code raises
- and catches an exception while running an exception handler. In both
- cases the exception context will be cleared.
-
- To work around this, we save the exception state, run handler code, and
- then re-raise the original exception. If another exception occurs, the
- saved exception is logged and the new exception is re-raised.
-
- In some cases the caller may not want to re-raise the exception, and
- for those circumstances this context provides a reraise flag that
- can be used to suppress the exception. For example:
-
- except Exception:
- with save_and_reraise_exception() as ctxt:
- decide_if_need_reraise()
- if not should_be_reraised:
- ctxt.reraise = False
- """
- def __init__(self):
- self.reraise = True
-
- def __enter__(self):
- self.type_, self.value, self.tb, = sys.exc_info()
- return self
-
- def __exit__(self, exc_type, exc_val, exc_tb):
- if exc_type is not None:
- logging.error(_('Original exception being dropped: %s'),
- traceback.format_exception(self.type_,
- self.value,
- self.tb))
- return False
- if self.reraise:
- raise self.type_, self.value, self.tb
-
-
-def forever_retry_uncaught_exceptions(infunc):
- def inner_func(*args, **kwargs):
- last_log_time = 0
- last_exc_message = None
- exc_count = 0
- while True:
- try:
- return infunc(*args, **kwargs)
- except Exception as exc:
- if exc.message == last_exc_message:
- exc_count += 1
- else:
- exc_count = 1
- # Do not log any more frequently than once a minute unless
- # the exception message changes
- cur_time = int(time.time())
- if (cur_time - last_log_time > 60 or
- exc.message != last_exc_message):
- logging.exception(
- _('Unexpected exception occurred %d time(s)... '
- 'retrying.') % exc_count)
- last_log_time = cur_time
- last_exc_message = exc.message
- exc_count = 0
- # This should be a very rare event. In case it isn't, do
- # a sleep.
- time.sleep(1)
- return inner_func
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Copyright 2011 OpenStack Foundation.
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-
-import contextlib
-import errno
-import os
-
-from heat.openstack.common import excutils
-from heat.openstack.common.gettextutils import _ # noqa
-from heat.openstack.common import log as logging
-
-LOG = logging.getLogger(__name__)
-
-_FILE_CACHE = {}
-
-
-def ensure_tree(path):
- """Create a directory (and any ancestor directories required)
-
- :param path: Directory to create
- """
- try:
- os.makedirs(path)
- except OSError as exc:
- if exc.errno == errno.EEXIST:
- if not os.path.isdir(path):
- raise
- else:
- raise
-
-
-def read_cached_file(filename, force_reload=False):
- """Read from a file if it has been modified.
-
- :param force_reload: Whether to reload the file.
- :returns: A tuple (reloaded, data), where `reloaded` indicates
- whether the data was reloaded from the file.
- """
- global _FILE_CACHE
-
- if force_reload and filename in _FILE_CACHE:
- del _FILE_CACHE[filename]
-
- reloaded = False
- mtime = os.path.getmtime(filename)
- cache_info = _FILE_CACHE.setdefault(filename, {})
-
- if not cache_info or mtime > cache_info.get('mtime', 0):
- LOG.debug(_("Reloading cached file %s") % filename)
- with open(filename) as fap:
- cache_info['data'] = fap.read()
- cache_info['mtime'] = mtime
- reloaded = True
- return (reloaded, cache_info['data'])
-
-
-def delete_if_exists(path):
- """Delete a file, but ignore file not found error.
-
- :param path: File to delete
- """
-
- try:
- os.unlink(path)
- except OSError as e:
- if e.errno == errno.ENOENT:
- return
- else:
- raise
-
-
-@contextlib.contextmanager
-def remove_path_on_error(path):
- """Protect code that wants to operate on PATH atomically.
- Any exception will cause PATH to be removed.
-
- :param path: File to work with
- """
- try:
- yield
- except Exception:
- with excutils.save_and_reraise_exception():
- delete_if_exists(path)
-
-
-def file_open(*args, **kwargs):
- """Open file
-
- see built-in file() documentation for more details
-
- Note: The reason this is kept in a separate module is to easily
- be able to provide a stub module that doesn't alter system
- state at all (for unit tests)
- """
- return file(*args, **kwargs)
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Copyright 2012 Red Hat, Inc.
-# Copyright 2013 IBM Corp.
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-"""
-gettext for openstack-common modules.
-
-Usual usage in an openstack.common module:
-
- from heat.openstack.common.gettextutils import _
-"""
-
-import copy
-import gettext
-import logging.handlers
-import os
-import re
-import UserString
-
-from babel import localedata
-import six
-
-_localedir = os.environ.get('heat'.upper() + '_LOCALEDIR')
-_t = gettext.translation('heat', localedir=_localedir, fallback=True)
-
-_AVAILABLE_LANGUAGES = []
-
-
-def _(msg):
- return _t.ugettext(msg)
-
-
-def install(domain, lazy=False):
- """Install a _() function using the given translation domain.
-
- Given a translation domain, install a _() function using gettext's
- install() function.
-
- The main difference from gettext.install() is that we allow
- overriding the default localedir (e.g. /usr/share/locale) using
- a translation-domain-specific environment variable (e.g.
- NOVA_LOCALEDIR).
-
- :param domain: the translation domain
- :param lazy: indicates whether or not to install the lazy _() function.
- The lazy _() introduces a way to do deferred translation
- of messages by installing a _ that builds Message objects,
- instead of strings, which can then be lazily translated into
- any available locale.
- """
- if lazy:
- # NOTE(mrodden): Lazy gettext functionality.
- #
- # The following introduces a deferred way to do translations on
- # messages in OpenStack. We override the standard _() function
- # and % (format string) operation to build Message objects that can
- # later be translated when we have more information.
- #
- # Also included below is an example LocaleHandler that translates
- # Messages to an associated locale, effectively allowing many logs,
- # each with their own locale.
-
- def _lazy_gettext(msg):
- """Create and return a Message object.
-
- Lazy gettext function for a given domain, it is a factory method
- for a project/module to get a lazy gettext function for its own
- translation domain (i.e. nova, glance, cinder, etc.)
-
- Message encapsulates a string so that we can translate
- it later when needed.
- """
- return Message(msg, domain)
-
- import __builtin__
- __builtin__.__dict__['_'] = _lazy_gettext
- else:
- localedir = '%s_LOCALEDIR' % domain.upper()
- gettext.install(domain,
- localedir=os.environ.get(localedir),
- unicode=True)
-
-
-class Message(UserString.UserString, object):
- """Class used to encapsulate translatable messages."""
- def __init__(self, msg, domain):
- # _msg is the gettext msgid and should never change
- self._msg = msg
- self._left_extra_msg = ''
- self._right_extra_msg = ''
- self.params = None
- self.locale = None
- self.domain = domain
-
- @property
- def data(self):
- # NOTE(mrodden): this should always resolve to a unicode string
- # that best represents the state of the message currently
-
- localedir = os.environ.get(self.domain.upper() + '_LOCALEDIR')
- if self.locale:
- lang = gettext.translation(self.domain,
- localedir=localedir,
- languages=[self.locale],
- fallback=True)
- else:
- # use system locale for translations
- lang = gettext.translation(self.domain,
- localedir=localedir,
- fallback=True)
-
- full_msg = (self._left_extra_msg +
- lang.ugettext(self._msg) +
- self._right_extra_msg)
-
- if self.params is not None:
- full_msg = full_msg % self.params
-
- return six.text_type(full_msg)
-
- def _save_dictionary_parameter(self, dict_param):
- full_msg = self.data
- # look for %(blah) fields in string;
- # ignore %% and deal with the
- # case where % is first character on the line
- keys = re.findall('(?:[^%]|^)%\((\w*)\)[a-z]', full_msg)
-
- # if we don't find any %(blah) blocks but have a %s
- if not keys and re.findall('(?:[^%]|^)%[a-z]', full_msg):
- # apparently the full dictionary is the parameter
- params = copy.deepcopy(dict_param)
- else:
- params = {}
- for key in keys:
- try:
- params[key] = copy.deepcopy(dict_param[key])
- except TypeError:
- # cast uncopyable thing to unicode string
- params[key] = unicode(dict_param[key])
-
- return params
-
- def _save_parameters(self, other):
- # we check for None later to see if
- # we actually have parameters to inject,
- # so encapsulate if our parameter is actually None
- if other is None:
- self.params = (other, )
- elif isinstance(other, dict):
- self.params = self._save_dictionary_parameter(other)
- else:
- # fallback to casting to unicode,
- # this will handle the problematic python code-like
- # objects that cannot be deep-copied
- try:
- self.params = copy.deepcopy(other)
- except TypeError:
- self.params = unicode(other)
-
- return self
-
- # overrides to be more string-like
- def __unicode__(self):
- return self.data
-
- def __str__(self):
- return self.data.encode('utf-8')
-
- def __getstate__(self):
- to_copy = ['_msg', '_right_extra_msg', '_left_extra_msg',
- 'domain', 'params', 'locale']
- new_dict = self.__dict__.fromkeys(to_copy)
- for attr in to_copy:
- new_dict[attr] = copy.deepcopy(self.__dict__[attr])
-
- return new_dict
-
- def __setstate__(self, state):
- for (k, v) in state.items():
- setattr(self, k, v)
-
- # operator overloads
- def __add__(self, other):
- copied = copy.deepcopy(self)
- copied._right_extra_msg += other.__str__()
- return copied
-
- def __radd__(self, other):
- copied = copy.deepcopy(self)
- copied._left_extra_msg += other.__str__()
- return copied
-
- def __mod__(self, other):
- # do a format string to catch and raise
- # any possible KeyErrors from missing parameters
- self.data % other
- copied = copy.deepcopy(self)
- return copied._save_parameters(other)
-
- def __mul__(self, other):
- return self.data * other
-
- def __rmul__(self, other):
- return other * self.data
-
- def __getitem__(self, key):
- return self.data[key]
-
- def __getslice__(self, start, end):
- return self.data.__getslice__(start, end)
-
- def __getattribute__(self, name):
- # NOTE(mrodden): handle lossy operations that we can't deal with yet
- # These override the UserString implementation, since UserString
- # uses our __class__ attribute to try and build a new message
- # after running the inner data string through the operation.
- # At that point, we have lost the gettext message id and can just
- # safely resolve to a string instead.
- ops = ['capitalize', 'center', 'decode', 'encode',
- 'expandtabs', 'ljust', 'lstrip', 'replace', 'rjust', 'rstrip',
- 'strip', 'swapcase', 'title', 'translate', 'upper', 'zfill']
- if name in ops:
- return getattr(self.data, name)
- else:
- return UserString.UserString.__getattribute__(self, name)
-
-
-def get_available_languages(domain):
- """Lists the available languages for the given translation domain.
-
- :param domain: the domain to get languages for
- """
- if _AVAILABLE_LANGUAGES:
- return _AVAILABLE_LANGUAGES
-
- localedir = '%s_LOCALEDIR' % domain.upper()
- find = lambda x: gettext.find(domain,
- localedir=os.environ.get(localedir),
- languages=[x])
-
- # NOTE(mrodden): en_US should always be available (and first in case
- # order matters) since our in-line message strings are en_US
- _AVAILABLE_LANGUAGES.append('en_US')
- # NOTE(luisg): Babel <1.0 used a function called list(), which was
- # renamed to locale_identifiers() in >=1.0. The requirements master list
- # requires >=0.9.6, uncapped, so defensively work with both. We can remove
- # this check when the master list updates to >=1.0 and all projects update.
- list_identifiers = (getattr(localedata, 'list', None) or
- getattr(localedata, 'locale_identifiers'))
- locale_identifiers = list_identifiers()
- for i in locale_identifiers:
- if find(i) is not None:
- _AVAILABLE_LANGUAGES.append(i)
- return _AVAILABLE_LANGUAGES
-
-
-def get_localized_message(message, user_locale):
- """Gets a localized version of the given message in the given locale."""
- if (isinstance(message, Message)):
- if user_locale:
- message.locale = user_locale
- return unicode(message)
- else:
- return message
-
-
-class LocaleHandler(logging.Handler):
- """Handler that can have a locale associated to translate Messages.
-
- A quick example of how to utilize the Message class above.
- LocaleHandler takes a locale and a target logging.Handler object
- to forward LogRecord objects to after translating the internal Message.
- """
-
- def __init__(self, locale, target):
- """Initialize a LocaleHandler
-
- :param locale: locale to use for translating messages
- :param target: logging.Handler object to forward
- LogRecord objects to after translation
- """
- logging.Handler.__init__(self)
- self.locale = locale
- self.target = target
-
- def emit(self, record):
- if isinstance(record.msg, Message):
- # set the locale and resolve to a string
- record.msg.locale = self.locale
-
- self.target.emit(record)
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Copyright 2011 OpenStack Foundation.
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-"""
-Import related utilities and helper functions.
-"""
-
-import sys
-import traceback
-
-
-def import_class(import_str):
- """Returns a class from a string including module and class."""
- mod_str, _sep, class_str = import_str.rpartition('.')
- try:
- __import__(mod_str)
- return getattr(sys.modules[mod_str], class_str)
- except (ValueError, AttributeError):
- raise ImportError('Class %s cannot be found (%s)' %
- (class_str,
- traceback.format_exception(*sys.exc_info())))
-
-
-def import_object(import_str, *args, **kwargs):
- """Import a class and return an instance of it."""
- return import_class(import_str)(*args, **kwargs)
-
-
-def import_object_ns(name_space, import_str, *args, **kwargs):
- """Tries to import object from default namespace.
-
- Imports a class and return an instance of it, first by trying
- to find the class in a default namespace, then failing back to
- a full path if not found in the default namespace.
- """
- import_value = "%s.%s" % (name_space, import_str)
- try:
- return import_class(import_value)(*args, **kwargs)
- except ImportError:
- return import_class(import_str)(*args, **kwargs)
-
-
-def import_module(import_str):
- """Import a module."""
- __import__(import_str)
- return sys.modules[import_str]
-
-
-def try_import(import_str, default=None):
- """Try to import a module and if it fails return default."""
- try:
- return import_module(import_str)
- except ImportError:
- return default
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Copyright 2010 United States Government as represented by the
-# Administrator of the National Aeronautics and Space Administration.
-# Copyright 2011 Justin Santa Barbara
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-'''
-JSON related utilities.
-
-This module provides a few things:
-
- 1) A handy function for getting an object down to something that can be
- JSON serialized. See to_primitive().
-
- 2) Wrappers around loads() and dumps(). The dumps() wrapper will
- automatically use to_primitive() for you if needed.
-
- 3) This sets up anyjson to use the loads() and dumps() wrappers if anyjson
- is available.
-'''
-
-
-import datetime
-import functools
-import inspect
-import itertools
-import json
-import types
-import xmlrpclib
-
-import netaddr
-import six
-
-from heat.openstack.common import timeutils
-
-
-_nasty_type_tests = [inspect.ismodule, inspect.isclass, inspect.ismethod,
- inspect.isfunction, inspect.isgeneratorfunction,
- inspect.isgenerator, inspect.istraceback, inspect.isframe,
- inspect.iscode, inspect.isbuiltin, inspect.isroutine,
- inspect.isabstract]
-
-_simple_types = (types.NoneType, int, basestring, bool, float, long)
-
-
-def to_primitive(value, convert_instances=False, convert_datetime=True,
- level=0, max_depth=3):
- """Convert a complex object into primitives.
-
- Handy for JSON serialization. We can optionally handle instances,
- but since this is a recursive function, we could have cyclical
- data structures.
-
- To handle cyclical data structures we could track the actual objects
- visited in a set, but not all objects are hashable. Instead we just
- track the depth of the object inspections and don't go too deep.
-
- Therefore, convert_instances=True is lossy ... be aware.
-
- """
- # handle obvious types first - order of basic types determined by running
- # full tests on nova project, resulting in the following counts:
- # 572754 <type 'NoneType'>
- # 460353 <type 'int'>
- # 379632 <type 'unicode'>
- # 274610 <type 'str'>
- # 199918 <type 'dict'>
- # 114200 <type 'datetime.datetime'>
- # 51817 <type 'bool'>
- # 26164 <type 'list'>
- # 6491 <type 'float'>
- # 283 <type 'tuple'>
- # 19 <type 'long'>
- if isinstance(value, _simple_types):
- return value
-
- if isinstance(value, datetime.datetime):
- if convert_datetime:
- return timeutils.strtime(value)
- else:
- return value
-
- # value of itertools.count doesn't get caught by nasty_type_tests
- # and results in infinite loop when list(value) is called.
- if type(value) == itertools.count:
- return six.text_type(value)
-
- # FIXME(vish): Workaround for LP bug 852095. Without this workaround,
- # tests that raise an exception in a mocked method that
- # has a @wrap_exception with a notifier will fail. If
- # we up the dependency to 0.5.4 (when it is released) we
- # can remove this workaround.
- if getattr(value, '__module__', None) == 'mox':
- return 'mock'
-
- if level > max_depth:
- return '?'
-
- # The try block may not be necessary after the class check above,
- # but just in case ...
- try:
- recursive = functools.partial(to_primitive,
- convert_instances=convert_instances,
- convert_datetime=convert_datetime,
- level=level,
- max_depth=max_depth)
- if isinstance(value, dict):
- return dict((k, recursive(v)) for k, v in value.iteritems())
- elif isinstance(value, (list, tuple)):
- return [recursive(lv) for lv in value]
-
- # It's not clear why xmlrpclib created its own DateTime type, but
- # for our purposes, make it a datetime type which is explicitly
- # handled
- if isinstance(value, xmlrpclib.DateTime):
- value = datetime.datetime(*tuple(value.timetuple())[:6])
-
- if convert_datetime and isinstance(value, datetime.datetime):
- return timeutils.strtime(value)
- elif hasattr(value, 'iteritems'):
- return recursive(dict(value.iteritems()), level=level + 1)
- elif hasattr(value, '__iter__'):
- return recursive(list(value))
- elif convert_instances and hasattr(value, '__dict__'):
- # Likely an instance of something. Watch for cycles.
- # Ignore class member vars.
- return recursive(value.__dict__, level=level + 1)
- elif isinstance(value, netaddr.IPAddress):
- return six.text_type(value)
- else:
- if any(test(value) for test in _nasty_type_tests):
- return six.text_type(value)
- return value
- except TypeError:
- # Class objects are tricky since they may define something like
- # __iter__ defined but it isn't callable as list().
- return six.text_type(value)
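The depth-limiting behaviour described in the docstring can be illustrated with a simplified standalone sketch (Python 3; datetimes are rendered with `isoformat()` here rather than `timeutils.strtime()`, and the instance, iterator, and netaddr handling is omitted):

```python
import datetime

def to_primitive_sketch(value, level=0, max_depth=3):
    # Simplified take on to_primitive(): recurse through dicts and
    # lists, stringify datetimes, and cap recursion depth with '?'.
    if value is None or isinstance(value, (int, float, bool, str)):
        return value
    if isinstance(value, datetime.datetime):
        return value.isoformat()
    if level > max_depth:
        return '?'  # too deep: give up rather than risk a cycle
    if isinstance(value, dict):
        return {k: to_primitive_sketch(v, level + 1, max_depth)
                for k, v in value.items()}
    if isinstance(value, (list, tuple)):
        return [to_primitive_sketch(v, level + 1, max_depth) for v in value]
    return str(value)

nested = {'a': {'b': {'c': {'d': {'e': 1}}}}}
flat = to_primitive_sketch(nested)
```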
-
-
-def dumps(value, default=to_primitive, **kwargs):
- return json.dumps(value, default=default, **kwargs)
-
-
-def loads(s):
- return json.loads(s)
-
-
-def load(s):
- return json.load(s)
-
-
-try:
- import anyjson
-except ImportError:
- pass
-else:
- anyjson._modules.append((__name__, 'dumps', TypeError,
- 'loads', ValueError, 'load'))
- anyjson.force_implementation(__name__)
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Copyright 2011 OpenStack Foundation.
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-"""Greenthread local storage of variables using weak references"""
-
-import weakref
-
-from eventlet import corolocal
-
-
-class WeakLocal(corolocal.local):
- def __getattribute__(self, attr):
- rval = corolocal.local.__getattribute__(self, attr)
- if rval:
- # NOTE(mikal): this bit is confusing. What is stored is a weak
- # reference, not the value itself. We therefore need to look up
- # the weak reference and return the inner value here.
- rval = rval()
- return rval
-
- def __setattr__(self, attr, value):
- value = weakref.ref(value)
- return corolocal.local.__setattr__(self, attr, value)
-
-
-# NOTE(mikal): the name "store" should be deprecated in the future
-store = WeakLocal()
-
-# A "weak" store uses weak references and allows an object to fall out of scope
-# when it falls out of scope in the code that uses the thread local storage. A
-# "strong" store will hold a reference to the object so that it never falls out
-# of scope.
-weak_store = WeakLocal()
-strong_store = corolocal.local
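The weak/strong distinction can be demonstrated with plain `weakref`, independent of eventlet (a sketch; `Holder` is a hypothetical class, needed because builtins such as `list` cannot be weakly referenced directly):

```python
import weakref

class Holder(object):
    pass

# What WeakLocal.__setattr__ stores is weakref.ref(value); what
# __getattribute__ returns is the result of calling that reference.
obj = Holder()
ref = weakref.ref(obj)
alive = ref() is obj      # True while a strong reference exists

del obj                   # drop the only strong reference
gone = ref()              # CPython frees immediately; ref() is now None
```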
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Copyright 2011 OpenStack Foundation.
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-
-import contextlib
-import errno
-import functools
-import os
-import time
-import weakref
-
-from eventlet import semaphore
-from oslo.config import cfg
-
-from heat.openstack.common import fileutils
-from heat.openstack.common.gettextutils import _ # noqa
-from heat.openstack.common import local
-from heat.openstack.common import log as logging
-
-
-LOG = logging.getLogger(__name__)
-
-
-util_opts = [
- cfg.BoolOpt('disable_process_locking', default=False,
- help='Whether to disable inter-process locks'),
- cfg.StrOpt('lock_path',
- help=('Directory to use for lock files.'))
-]
-
-
-CONF = cfg.CONF
-CONF.register_opts(util_opts)
-
-
-def set_defaults(lock_path):
- cfg.set_defaults(util_opts, lock_path=lock_path)
-
-
-class _InterProcessLock(object):
- """Lock implementation which allows multiple locks, working around
- issues like bugs.debian.org/cgi-bin/bugreport.cgi?bug=632857 and does
- not require any cleanup. Since the lock is always held on a file
- descriptor rather than outside of the process, the lock gets dropped
- automatically if the process crashes, even if __exit__ is not executed.
-
- There are no guarantees regarding usage by multiple green threads in a
- single process here. This lock works only between processes. Exclusive
- access between local threads should be achieved using the semaphores
- in the @synchronized decorator.
-
- Note these locks are released when the descriptor is closed, so it's not
- safe to close the file descriptor while another green thread holds the
- lock. Just opening and closing the lock file can break synchronisation,
- so lock files must be accessed only using this abstraction.
- """
-
- def __init__(self, name):
- self.lockfile = None
- self.fname = name
-
- def __enter__(self):
- self.lockfile = open(self.fname, 'w')
-
- while True:
- try:
- # Using non-blocking locks since green threads are not
- # patched to deal with blocking locking calls.
- # Also upon reading the MSDN docs for locking(), it seems
- # to have a laughable 10 attempts "blocking" mechanism.
- self.trylock()
- return self
- except IOError as e:
- if e.errno in (errno.EACCES, errno.EAGAIN):
- # external locks synchronise things like iptables
- # updates - give it some time to prevent busy spinning
- time.sleep(0.01)
- else:
- raise
-
- def __exit__(self, exc_type, exc_val, exc_tb):
- try:
- self.unlock()
- self.lockfile.close()
- except IOError:
- LOG.exception(_("Could not release the acquired lock `%s`"),
- self.fname)
-
- def trylock(self):
- raise NotImplementedError()
-
- def unlock(self):
- raise NotImplementedError()
-
-
-class _WindowsLock(_InterProcessLock):
- def trylock(self):
- msvcrt.locking(self.lockfile.fileno(), msvcrt.LK_NBLCK, 1)
-
- def unlock(self):
- msvcrt.locking(self.lockfile.fileno(), msvcrt.LK_UNLCK, 1)
-
-
-class _PosixLock(_InterProcessLock):
- def trylock(self):
- fcntl.lockf(self.lockfile, fcntl.LOCK_EX | fcntl.LOCK_NB)
-
- def unlock(self):
- fcntl.lockf(self.lockfile, fcntl.LOCK_UN)
-
-
-if os.name == 'nt':
- import msvcrt
- InterProcessLock = _WindowsLock
-else:
- import fcntl
- InterProcessLock = _PosixLock
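On POSIX, the `trylock()`/`unlock()` pattern reduces to a non-blocking `fcntl` lock. A sketch that uses a child process to show the contention `_InterProcessLock` retries on (POSIX-only; `fcntl` record locks do not conflict within a single process, so a second process is required to observe EACCES/EAGAIN):

```python
import fcntl
import os
import subprocess
import sys
import tempfile

path = os.path.join(tempfile.mkdtemp(), 'demo.lock')
lockfile = open(path, 'w')
# Acquire the exclusive, non-blocking lock, as _PosixLock.trylock() does.
fcntl.lockf(lockfile, fcntl.LOCK_EX | fcntl.LOCK_NB)

# A second *process* attempting the same lock fails with EACCES/EAGAIN,
# the errnos that _InterProcessLock.__enter__ sleeps and retries on.
child_src = (
    'import fcntl, sys\n'
    'f = open(sys.argv[1], "w")\n'
    'try:\n'
    '    fcntl.lockf(f, fcntl.LOCK_EX | fcntl.LOCK_NB)\n'
    '    sys.exit(0)\n'
    'except OSError:\n'
    '    sys.exit(3)\n'
)
child = subprocess.run([sys.executable, '-c', child_src, path])
contended = (child.returncode == 3)

fcntl.lockf(lockfile, fcntl.LOCK_UN)  # as _PosixLock.unlock() does
lockfile.close()
```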
-
-_semaphores = weakref.WeakValueDictionary()
-
-
-@contextlib.contextmanager
-def lock(name, lock_file_prefix=None, external=False, lock_path=None):
- """Context based lock
-
- This function yields a `semaphore.Semaphore` instance unless external is
- True, in which case, it'll yield an InterProcessLock instance.
-
- :param lock_file_prefix: The lock_file_prefix argument is used to provide
- lock files on disk with a meaningful prefix.
-
- :param external: The external keyword argument denotes whether this lock
- should work across multiple processes. This means that if two different
- workers both run a method decorated with @synchronized('mylock',
- external=True), only one of them will execute at a time.
-
- :param lock_path: The lock_path keyword argument is used to specify a
- special location for external lock files to live. If nothing is set, then
- CONF.lock_path is used as a default.
- """
- # NOTE(soren): If we ever go natively threaded, this will be racy.
- # See http://stackoverflow.com/questions/5390569/dyn
- # amically-allocating-and-destroying-mutexes
- sem = _semaphores.get(name, semaphore.Semaphore())
- if name not in _semaphores:
- # this check is not racy - we're already holding ref locally
- # so GC won't remove the item and there was no IO switch
- # (only valid in greenthreads)
- _semaphores[name] = sem
-
- with sem:
- LOG.debug(_('Got semaphore "%(lock)s"'), {'lock': name})
-
- # NOTE(mikal): I know this looks odd
- if not hasattr(local.strong_store, 'locks_held'):
- local.strong_store.locks_held = []
- local.strong_store.locks_held.append(name)
-
- try:
- if external and not CONF.disable_process_locking:
- LOG.debug(_('Attempting to grab file lock "%(lock)s"'),
- {'lock': name})
-
- # We need a copy of lock_path because it is non-local
- local_lock_path = lock_path or CONF.lock_path
- if not local_lock_path:
- raise cfg.RequiredOptError('lock_path')
-
- if not os.path.exists(local_lock_path):
- fileutils.ensure_tree(local_lock_path)
- LOG.info(_('Created lock path: %s'), local_lock_path)
-
- def add_prefix(name, prefix):
- if not prefix:
- return name
- sep = '' if prefix.endswith('-') else '-'
- return '%s%s%s' % (prefix, sep, name)
-
- # NOTE(mikal): the lock name cannot contain directory
- # separators
- lock_file_name = add_prefix(name.replace(os.sep, '_'),
- lock_file_prefix)
-
- lock_file_path = os.path.join(local_lock_path, lock_file_name)
-
- try:
- lock = InterProcessLock(lock_file_path)
- with lock as lock:
- LOG.debug(_('Got file lock "%(lock)s" at %(path)s'),
- {'lock': name, 'path': lock_file_path})
- yield lock
- finally:
- LOG.debug(_('Released file lock "%(lock)s" at %(path)s'),
- {'lock': name, 'path': lock_file_path})
- else:
- yield sem
-
- finally:
- local.strong_store.locks_held.remove(name)
-
-
-def synchronized(name, lock_file_prefix=None, external=False, lock_path=None):
- """Synchronization decorator.
-
- Decorating a method like so::
-
- @synchronized('mylock')
- def foo(self, *args):
- ...
-
- ensures that only one thread will execute the foo method at a time.
-
- Different methods can share the same lock::
-
- @synchronized('mylock')
- def foo(self, *args):
- ...
-
- @synchronized('mylock')
- def bar(self, *args):
- ...
-
- This way only one of either foo or bar can be executing at a time.
- """
-
- def wrap(f):
- @functools.wraps(f)
- def inner(*args, **kwargs):
- with lock(name, lock_file_prefix, external, lock_path):
- LOG.debug(_('Got semaphore / lock "%(function)s"'),
- {'function': f.__name__})
- return f(*args, **kwargs)
-
- LOG.debug(_('Semaphore / lock released "%(function)s"'),
- {'function': f.__name__})
- return inner
- return wrap
-
-
-def synchronized_with_prefix(lock_file_prefix):
- """Partial object generator for the synchronization decorator.
-
- Redefine @synchronized in each project like so::
-
- (in nova/utils.py)
- from nova.openstack.common import lockutils
-
- synchronized = lockutils.synchronized_with_prefix('nova-')
-
-
- (in nova/foo.py)
- from nova import utils
-
- @utils.synchronized('mylock')
- def bar(self, *args):
- ...
-
- The lock_file_prefix argument is used to provide lock files on disk with a
- meaningful prefix.
- """
-
- return functools.partial(synchronized, lock_file_prefix=lock_file_prefix)
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Copyright 2011 OpenStack Foundation.
-# Copyright 2010 United States Government as represented by the
-# Administrator of the National Aeronautics and Space Administration.
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-"""Openstack logging handler.
-
-This module adds to logging functionality by adding the option to specify
-a context object when calling the various log methods. If the context object
-is not specified, default formatting is used. Additionally, an instance uuid
-may be passed as part of the log message, which is intended to make it easier
-for admins to find messages related to a specific instance.
-
-It also allows setting of formatting information through conf.
-
-"""
-
-import inspect
-import itertools
-import logging
-import logging.config
-import logging.handlers
-import os
-import sys
-import traceback
-
-from oslo.config import cfg
-from six import moves
-
-from heat.openstack.common.gettextutils import _ # noqa
-from heat.openstack.common import importutils
-from heat.openstack.common import jsonutils
-from heat.openstack.common import local
-
-
-_DEFAULT_LOG_DATE_FORMAT = "%Y-%m-%d %H:%M:%S"
-
-common_cli_opts = [
- cfg.BoolOpt('debug',
- short='d',
- default=False,
- help='Print debugging output (set logging level to '
- 'DEBUG instead of default WARNING level).'),
- cfg.BoolOpt('verbose',
- short='v',
- default=False,
- help='Print more verbose output (set logging level to '
- 'INFO instead of default WARNING level).'),
-]
-
-logging_cli_opts = [
- cfg.StrOpt('log-config',
- metavar='PATH',
- help='If this option is specified, the logging configuration '
- 'file specified is used and overrides any other logging '
- 'options specified. Please see the Python logging module '
- 'documentation for details on logging configuration '
- 'files.'),
- cfg.StrOpt('log-format',
- default=None,
- metavar='FORMAT',
- help='DEPRECATED. '
- 'A logging.Formatter log message format string which may '
- 'use any of the available logging.LogRecord attributes. '
- 'This option is deprecated. Please use '
- 'logging_context_format_string and '
- 'logging_default_format_string instead.'),
- cfg.StrOpt('log-date-format',
- default=_DEFAULT_LOG_DATE_FORMAT,
- metavar='DATE_FORMAT',
- help='Format string for %%(asctime)s in log records. '
- 'Default: %(default)s'),
- cfg.StrOpt('log-file',
- metavar='PATH',
- deprecated_name='logfile',
- help='(Optional) Name of log file to output to. '
- 'If no default is set, logging will go to stdout.'),
- cfg.StrOpt('log-dir',
- deprecated_name='logdir',
- help='(Optional) The base directory used for relative '
- '--log-file paths'),
- cfg.BoolOpt('use-syslog',
- default=False,
- help='Use syslog for logging.'),
- cfg.StrOpt('syslog-log-facility',
- default='LOG_USER',
- help='syslog facility to receive log lines')
-]
-
-generic_log_opts = [
- cfg.BoolOpt('use_stderr',
- default=True,
- help='Log output to standard error')
-]
-
-log_opts = [
- cfg.StrOpt('logging_context_format_string',
- default='%(asctime)s.%(msecs)03d %(process)d %(levelname)s '
- '%(name)s [%(request_id)s %(user)s %(tenant)s] '
- '%(instance)s%(message)s',
- help='format string to use for log messages with context'),
- cfg.StrOpt('logging_default_format_string',
- default='%(asctime)s.%(msecs)03d %(process)d %(levelname)s '
- '%(name)s [-] %(instance)s%(message)s',
- help='format string to use for log messages without context'),
- cfg.StrOpt('logging_debug_format_suffix',
- default='%(funcName)s %(pathname)s:%(lineno)d',
- help='data to append to log format when level is DEBUG'),
- cfg.StrOpt('logging_exception_prefix',
- default='%(asctime)s.%(msecs)03d %(process)d TRACE %(name)s '
- '%(instance)s',
- help='prefix each line of exception output with this format'),
- cfg.ListOpt('default_log_levels',
- default=[
- 'amqplib=WARN',
- 'sqlalchemy=WARN',
- 'boto=WARN',
- 'suds=INFO',
- 'keystone=INFO',
- 'eventlet.wsgi.server=WARN'
- ],
- help='list of logger=LEVEL pairs'),
- cfg.BoolOpt('publish_errors',
- default=False,
- help='publish error events'),
- cfg.BoolOpt('fatal_deprecations',
- default=False,
- help='make deprecations fatal'),
-
- # NOTE(mikal): there are two options here because sometimes we are handed
- # a full instance (and could include more information), and other times we
- # are just handed a UUID for the instance.
- cfg.StrOpt('instance_format',
- default='[instance: %(uuid)s] ',
- help='If an instance is passed with the log message, format '
- 'it like this'),
- cfg.StrOpt('instance_uuid_format',
- default='[instance: %(uuid)s] ',
- help='If an instance UUID is passed with the log message, '
- 'format it like this'),
-]
-
-CONF = cfg.CONF
-CONF.register_cli_opts(common_cli_opts)
-CONF.register_cli_opts(logging_cli_opts)
-CONF.register_opts(generic_log_opts)
-CONF.register_opts(log_opts)
-
-# our new audit level
-# NOTE(jkoelker) Since we synthesized an audit level, make the logging
-# module aware of it so it acts like other levels.
-logging.AUDIT = logging.INFO + 1
-logging.addLevelName(logging.AUDIT, 'AUDIT')
-
-
-try:
- NullHandler = logging.NullHandler
-except AttributeError: # NOTE(jkoelker) NullHandler added in Python 2.7
- class NullHandler(logging.Handler):
- def handle(self, record):
- pass
-
- def emit(self, record):
- pass
-
- def createLock(self):
- self.lock = None
-
-
-def _dictify_context(context):
- if context is None:
- return None
- if not isinstance(context, dict) and getattr(context, 'to_dict', None):
- context = context.to_dict()
- return context
-
-
-def _get_binary_name():
- return os.path.basename(inspect.stack()[-1][1])
-
-
-def _get_log_file_path(binary=None):
- logfile = CONF.log_file
- logdir = CONF.log_dir
-
- if logfile and not logdir:
- return logfile
-
- if logfile and logdir:
- return os.path.join(logdir, logfile)
-
- if logdir:
- binary = binary or _get_binary_name()
- return '%s.log' % (os.path.join(logdir, binary),)
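The precedence rules in `_get_log_file_path()` are: `log_file` alone wins, `log_file` is joined under `log_dir` if both are set, otherwise `<binary>.log` under `log_dir`. A standalone restatement (the binary name is a parameter here instead of being inspected from the call stack; `'heat-engine'` is only an illustrative default):

```python
import os

def get_log_file_path(log_file, log_dir, binary='heat-engine'):
    # Same precedence as _get_log_file_path() above.
    if log_file and not log_dir:
        return log_file
    if log_file and log_dir:
        return os.path.join(log_dir, log_file)
    if log_dir:
        return '%s.log' % os.path.join(log_dir, binary)
    return None  # no file logging configured

p1 = get_log_file_path('heat.log', None)
p2 = get_log_file_path('heat.log', '/var/log/heat')
p3 = get_log_file_path(None, '/var/log/heat')
```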
-
-
-class BaseLoggerAdapter(logging.LoggerAdapter):
-
- def audit(self, msg, *args, **kwargs):
- self.log(logging.AUDIT, msg, *args, **kwargs)
-
-
-class LazyAdapter(BaseLoggerAdapter):
- def __init__(self, name='unknown', version='unknown'):
- self._logger = None
- self.extra = {}
- self.name = name
- self.version = version
-
- @property
- def logger(self):
- if not self._logger:
- self._logger = getLogger(self.name, self.version)
- return self._logger
-
-
-class ContextAdapter(BaseLoggerAdapter):
- warn = logging.LoggerAdapter.warning
-
- def __init__(self, logger, project_name, version_string):
- self.logger = logger
- self.project = project_name
- self.version = version_string
-
- @property
- def handlers(self):
- return self.logger.handlers
-
- def deprecated(self, msg, *args, **kwargs):
- stdmsg = _("Deprecated: %s") % msg
- if CONF.fatal_deprecations:
- self.critical(stdmsg, *args, **kwargs)
- raise DeprecatedConfig(msg=stdmsg)
- else:
- self.warn(stdmsg, *args, **kwargs)
-
- def process(self, msg, kwargs):
- if 'extra' not in kwargs:
- kwargs['extra'] = {}
- extra = kwargs['extra']
-
- context = kwargs.pop('context', None)
- if not context:
- context = getattr(local.store, 'context', None)
- if context:
- extra.update(_dictify_context(context))
-
- instance = kwargs.pop('instance', None)
- instance_extra = ''
- if instance:
- instance_extra = CONF.instance_format % instance
- else:
- instance_uuid = kwargs.pop('instance_uuid', None)
- if instance_uuid:
- instance_extra = (CONF.instance_uuid_format
- % {'uuid': instance_uuid})
- extra.update({'instance': instance_extra})
-
- extra.update({"project": self.project})
- extra.update({"version": self.version})
- extra['extra'] = extra.copy()
- return msg, kwargs
-
-
-class JSONFormatter(logging.Formatter):
- def __init__(self, fmt=None, datefmt=None):
- # NOTE(jkoelker) we ignore the fmt argument, but it's still there
- # since logging.config.fileConfig passes it.
- self.datefmt = datefmt
-
- def formatException(self, ei, strip_newlines=True):
- lines = traceback.format_exception(*ei)
- if strip_newlines:
- lines = [itertools.ifilter(
- lambda x: x,
- line.rstrip().splitlines()) for line in lines]
- lines = list(itertools.chain(*lines))
- return lines
-
- def format(self, record):
- message = {'message': record.getMessage(),
- 'asctime': self.formatTime(record, self.datefmt),
- 'name': record.name,
- 'msg': record.msg,
- 'args': record.args,
- 'levelname': record.levelname,
- 'levelno': record.levelno,
- 'pathname': record.pathname,
- 'filename': record.filename,
- 'module': record.module,
- 'lineno': record.lineno,
- 'funcname': record.funcName,
- 'created': record.created,
- 'msecs': record.msecs,
- 'relative_created': record.relativeCreated,
- 'thread': record.thread,
- 'thread_name': record.threadName,
- 'process_name': record.processName,
- 'process': record.process,
- 'traceback': None}
-
- if hasattr(record, 'extra'):
- message['extra'] = record.extra
-
- if record.exc_info:
- message['traceback'] = self.formatException(record.exc_info)
-
- return jsonutils.dumps(message)
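The record-to-JSON idea can be sketched with the stdlib alone (a minimal formatter carrying only three of the fields above; `jsonutils` and the traceback handling are omitted):

```python
import io
import json
import logging

class TinyJSONFormatter(logging.Formatter):
    # Emit one JSON object per record instead of a formatted string.
    def format(self, record):
        return json.dumps({'name': record.name,
                           'levelname': record.levelname,
                           'message': record.getMessage()})

buf = io.StringIO()
handler = logging.StreamHandler(buf)
handler.setFormatter(TinyJSONFormatter())

logger = logging.getLogger('json-demo')
logger.addHandler(handler)
logger.setLevel(logging.INFO)
logger.propagate = False   # keep the demo output out of the root logger

logger.info('hello %s', 'world')
parsed = json.loads(buf.getvalue())
```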
-
-
-def _create_logging_excepthook(product_name):
- def logging_excepthook(type, value, tb):
- extra = {}
- if CONF.verbose:
- extra['exc_info'] = (type, value, tb)
- getLogger(product_name).critical(str(value), **extra)
- return logging_excepthook
-
-
-class LogConfigError(Exception):
-
- message = _('Error loading logging config %(log_config)s: %(err_msg)s')
-
- def __init__(self, log_config, err_msg):
- self.log_config = log_config
- self.err_msg = err_msg
-
- def __str__(self):
- return self.message % dict(log_config=self.log_config,
- err_msg=self.err_msg)
-
-
-def _load_log_config(log_config):
- try:
- logging.config.fileConfig(log_config)
- except moves.configparser.Error as exc:
- raise LogConfigError(log_config, str(exc))
-
-
-def setup(product_name):
- """Setup logging."""
- if CONF.log_config:
- _load_log_config(CONF.log_config)
- else:
- _setup_logging_from_conf()
- sys.excepthook = _create_logging_excepthook(product_name)
-
-
-def set_defaults(logging_context_format_string):
- cfg.set_defaults(log_opts,
- logging_context_format_string=
- logging_context_format_string)
-
-
-def _find_facility_from_conf():
- facility_names = logging.handlers.SysLogHandler.facility_names
- facility = getattr(logging.handlers.SysLogHandler,
- CONF.syslog_log_facility,
- None)
-
- if facility is None and CONF.syslog_log_facility in facility_names:
- facility = facility_names.get(CONF.syslog_log_facility)
-
- if facility is None:
- valid_facilities = facility_names.keys()
- consts = ['LOG_AUTH', 'LOG_AUTHPRIV', 'LOG_CRON', 'LOG_DAEMON',
- 'LOG_FTP', 'LOG_KERN', 'LOG_LPR', 'LOG_MAIL', 'LOG_NEWS',
- 'LOG_SYSLOG', 'LOG_USER', 'LOG_UUCP',
- 'LOG_LOCAL0', 'LOG_LOCAL1', 'LOG_LOCAL2', 'LOG_LOCAL3',
- 'LOG_LOCAL4', 'LOG_LOCAL5', 'LOG_LOCAL6', 'LOG_LOCAL7']
- valid_facilities.extend(consts)
- raise TypeError(_('syslog facility must be one of: %s') %
- ', '.join("'%s'" % fac
- for fac in valid_facilities))
-
- return facility
-
-
-def _setup_logging_from_conf():
- log_root = getLogger(None).logger
- for handler in log_root.handlers:
- log_root.removeHandler(handler)
-
- if CONF.use_syslog:
- facility = _find_facility_from_conf()
- syslog = logging.handlers.SysLogHandler(address='/dev/log',
- facility=facility)
- log_root.addHandler(syslog)
-
- logpath = _get_log_file_path()
- if logpath:
- filelog = logging.handlers.WatchedFileHandler(logpath)
- log_root.addHandler(filelog)
-
- if CONF.use_stderr:
- streamlog = ColorHandler()
- log_root.addHandler(streamlog)
-
- elif not CONF.log_file:
- # pass sys.stdout as a positional argument
- # python2.6 calls the argument strm, in 2.7 it's stream
- streamlog = logging.StreamHandler(sys.stdout)
- log_root.addHandler(streamlog)
-
- if CONF.publish_errors:
- handler = importutils.import_object(
- "heat.openstack.common.log_handler.PublishErrorsHandler",
- logging.ERROR)
- log_root.addHandler(handler)
-
- datefmt = CONF.log_date_format
- for handler in log_root.handlers:
- # NOTE(alaski): CONF.log_format overrides everything currently. This
- # should be deprecated in favor of context aware formatting.
- if CONF.log_format:
- handler.setFormatter(logging.Formatter(fmt=CONF.log_format,
- datefmt=datefmt))
- log_root.info('Deprecated: log_format is now deprecated and will '
- 'be removed in the next release')
- else:
- handler.setFormatter(ContextFormatter(datefmt=datefmt))
-
- if CONF.debug:
- log_root.setLevel(logging.DEBUG)
- elif CONF.verbose:
- log_root.setLevel(logging.INFO)
- else:
- log_root.setLevel(logging.WARNING)
-
- for pair in CONF.default_log_levels:
- mod, _sep, level_name = pair.partition('=')
- level = logging.getLevelName(level_name)
- logger = logging.getLogger(mod)
- logger.setLevel(level)
-
-_loggers = {}
-
-
-def getLogger(name='unknown', version='unknown'):
- if name not in _loggers:
- _loggers[name] = ContextAdapter(logging.getLogger(name),
- name,
- version)
- return _loggers[name]
-
-
-def getLazyLogger(name='unknown', version='unknown'):
- """Returns lazy logger.
-
- Creates a pass-through logger that does not create the real logger
- until it is really needed and delegates all calls to the real logger
- once it is created.
- """
- return LazyAdapter(name, version)
-
-
-class WritableLogger(object):
- """A thin wrapper that responds to `write` and logs."""
-
- def __init__(self, logger, level=logging.INFO):
- self.logger = logger
- self.level = level
-
- def write(self, msg):
- self.logger.log(self.level, msg)
-
-
-class ContextFormatter(logging.Formatter):
- """A context.RequestContext aware formatter configured through flags.
-
- The flags used to set format strings are: logging_context_format_string
- and logging_default_format_string. You can also specify
- logging_debug_format_suffix to append extra formatting if the log level is
- debug.
-
- For information about what variables are available for the formatter see:
- http://docs.python.org/library/logging.html#formatter
-
- """
-
- def format(self, record):
- """Uses contextstring if request_id is set, otherwise default."""
- # NOTE(sdague): default the fancier formatting params
- # to an empty string so we don't throw an exception if
- # they get used
- for key in ('instance', 'color'):
- if key not in record.__dict__:
- record.__dict__[key] = ''
-
- if record.__dict__.get('request_id', None):
- self._fmt = CONF.logging_context_format_string
- else:
- self._fmt = CONF.logging_default_format_string
-
- if (record.levelno == logging.DEBUG and
- CONF.logging_debug_format_suffix):
- self._fmt += " " + CONF.logging_debug_format_suffix
-
- # Cache this on the record, Logger will respect our formatted copy
- if record.exc_info:
- record.exc_text = self.formatException(record.exc_info, record)
- return logging.Formatter.format(self, record)
-
- def formatException(self, exc_info, record=None):
- """Format exception output with CONF.logging_exception_prefix."""
- if not record:
- return logging.Formatter.formatException(self, exc_info)
-
- stringbuffer = moves.StringIO()
- traceback.print_exception(exc_info[0], exc_info[1], exc_info[2],
- None, stringbuffer)
- lines = stringbuffer.getvalue().split('\n')
- stringbuffer.close()
-
- if CONF.logging_exception_prefix.find('%(asctime)') != -1:
- record.asctime = self.formatTime(record, self.datefmt)
-
- formatted_lines = []
- for line in lines:
- pl = CONF.logging_exception_prefix % record.__dict__
- fl = '%s%s' % (pl, line)
- formatted_lines.append(fl)
- return '\n'.join(formatted_lines)
-
-
-class ColorHandler(logging.StreamHandler):
- LEVEL_COLORS = {
- logging.DEBUG: '\033[00;32m', # GREEN
- logging.INFO: '\033[00;36m', # CYAN
- logging.AUDIT: '\033[01;36m', # BOLD CYAN
- logging.WARN: '\033[01;33m', # BOLD YELLOW
- logging.ERROR: '\033[01;31m', # BOLD RED
- logging.CRITICAL: '\033[01;31m', # BOLD RED
- }
-
- def format(self, record):
- record.color = self.LEVEL_COLORS[record.levelno]
- return logging.StreamHandler.format(self, record)
-
-
-class DeprecatedConfig(Exception):
- message = _("Fatal call to deprecated config: %(msg)s")
-
- def __init__(self, msg):
- super(Exception, self).__init__(self.message % dict(msg=msg))
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Copyright 2010 United States Government as represented by the
-# Administrator of the National Aeronautics and Space Administration.
-# Copyright 2011 Justin Santa Barbara
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import sys
-
-from eventlet import event
-from eventlet import greenthread
-
-from heat.openstack.common.gettextutils import _ # noqa
-from heat.openstack.common import log as logging
-from heat.openstack.common import timeutils
-
-LOG = logging.getLogger(__name__)
-
-
-class LoopingCallDone(Exception):
- """Exception to break out and stop a LoopingCall.
-
- The poll-function passed to LoopingCall can raise this exception to
- break out of the loop normally. This is somewhat analogous to
- StopIteration.
-
- An optional return-value can be included as the argument to the exception;
- this return-value will be returned by LoopingCall.wait()
-
- """
-
- def __init__(self, retvalue=True):
- """:param retvalue: Value that LoopingCall.wait() should return."""
- self.retvalue = retvalue
-
-
-class LoopingCallBase(object):
- def __init__(self, f=None, *args, **kw):
- self.args = args
- self.kw = kw
- self.f = f
- self._running = False
- self.done = None
-
- def stop(self):
- self._running = False
-
- def wait(self):
- return self.done.wait()
-
-
-class FixedIntervalLoopingCall(LoopingCallBase):
- """A fixed interval looping call."""
-
- def start(self, interval, initial_delay=None):
- self._running = True
- done = event.Event()
-
- def _inner():
- if initial_delay:
- greenthread.sleep(initial_delay)
-
- try:
- while self._running:
- start = timeutils.utcnow()
- self.f(*self.args, **self.kw)
- end = timeutils.utcnow()
- if not self._running:
- break
- delay = interval - timeutils.delta_seconds(start, end)
- if delay <= 0:
- LOG.warn(_('task run outlasted interval by %s sec') %
- -delay)
- greenthread.sleep(delay if delay > 0 else 0)
- except LoopingCallDone as e:
- self.stop()
- done.send(e.retvalue)
- except Exception:
- LOG.exception(_('in fixed duration looping call'))
- done.send_exception(*sys.exc_info())
- return
- else:
- done.send(True)
-
- self.done = done
-
- greenthread.spawn_n(_inner)
- return self.done
-
-
-# TODO(mikal): this class name is deprecated in Havana and should be removed
-# in the I release
-LoopingCall = FixedIntervalLoopingCall
-
-
-class DynamicLoopingCall(LoopingCallBase):
- """A looping call which sleeps until the next known event.
-
- The function called should return how long to sleep for before being
- called again.
- """
-
- def start(self, initial_delay=None, periodic_interval_max=None):
- self._running = True
- done = event.Event()
-
- def _inner():
- if initial_delay:
- greenthread.sleep(initial_delay)
-
- try:
- while self._running:
- idle = self.f(*self.args, **self.kw)
- if not self._running:
- break
-
- if periodic_interval_max is not None:
- idle = min(idle, periodic_interval_max)
- LOG.debug(_('Dynamic looping call sleeping for %.02f '
- 'seconds'), idle)
- greenthread.sleep(idle)
- except LoopingCallDone as e:
- self.stop()
- done.send(e.retvalue)
- except Exception:
- LOG.exception(_('in dynamic looping call'))
- done.send_exception(*sys.exc_info())
- return
- else:
- done.send(True)
-
- self.done = done
-
- greenthread.spawn(_inner)
- return self.done
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Copyright 2012 OpenStack Foundation.
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-"""
-Network-related utilities and helper functions.
-"""
-
-import urlparse
-
-
-def parse_host_port(address, default_port=None):
- """Interpret a string as a host:port pair.
-
- An IPv6 address MUST be escaped if accompanied by a port,
- because otherwise ambiguity ensues: 2001:db8:85a3::8a2e:370:7334
- means both [2001:db8:85a3::8a2e:370:7334] and
- [2001:db8:85a3::8a2e:370]:7334.
-
- >>> parse_host_port('server01:80')
- ('server01', 80)
- >>> parse_host_port('server01')
- ('server01', None)
- >>> parse_host_port('server01', default_port=1234)
- ('server01', 1234)
- >>> parse_host_port('[::1]:80')
- ('::1', 80)
- >>> parse_host_port('[::1]')
- ('::1', None)
- >>> parse_host_port('[::1]', default_port=1234)
- ('::1', 1234)
- >>> parse_host_port('2001:db8:85a3::8a2e:370:7334', default_port=1234)
- ('2001:db8:85a3::8a2e:370:7334', 1234)
-
- """
- if address[0] == '[':
- # Escaped ipv6
- _host, _port = address[1:].split(']')
- host = _host
- if ':' in _port:
- port = _port.split(':')[1]
- else:
- port = default_port
- else:
- if address.count(':') == 1:
- host, port = address.split(':')
- else:
- # 0 means ipv4, >1 means ipv6.
- # We prohibit unescaped ipv6 addresses with port.
- host = address
- port = default_port
-
- return (host, None if port is None else int(port))
-
-
-def urlsplit(url, scheme='', allow_fragments=True):
- """Parse a URL using urlparse.urlsplit(), splitting query and fragments.
- This function papers over Python issue9374 when needed.
-
- The parameters are the same as urlparse.urlsplit.
- """
- scheme, netloc, path, query, fragment = urlparse.urlsplit(
- url, scheme, allow_fragments)
- if allow_fragments and '#' in path:
- path, fragment = path.split('#', 1)
- if '?' in path:
- path, query = path.split('?', 1)
- return urlparse.SplitResult(scheme, netloc, path, query, fragment)
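
The deleted `urlsplit()` wrapper above works around Python issue9374, where old `urlparse.urlsplit()` could leave `#` and `?` inside the path. The same papering-over logic, translated to Python 3's `urllib.parse` (an assumption; the removed module targets Python 2's `urlparse`), looks like this:

```python
from urllib.parse import SplitResult
from urllib.parse import urlsplit as _urlsplit


def split_with_fragments(url, scheme='', allow_fragments=True):
    """Sketch of the module's urlsplit() workaround: if the stdlib
    left '#' or '?' inside the path, split them out by hand."""
    scheme, netloc, path, query, fragment = _urlsplit(
        url, scheme, allow_fragments)
    if allow_fragments and '#' in path:
        path, fragment = path.split('#', 1)
    if '?' in path:
        path, query = path.split('?', 1)
    return SplitResult(scheme, netloc, path, query, fragment)
```

On a fixed interpreter the manual splits are no-ops, so the function is safe to apply unconditionally.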
+++ /dev/null
-# Copyright 2011 OpenStack Foundation.
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
+++ /dev/null
-# Copyright 2011 OpenStack Foundation.
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import socket
-import uuid
-
-from oslo.config import cfg
-
-from heat.openstack.common import context
-from heat.openstack.common.gettextutils import _ # noqa
-from heat.openstack.common import importutils
-from heat.openstack.common import jsonutils
-from heat.openstack.common import log as logging
-from heat.openstack.common import timeutils
-
-
-LOG = logging.getLogger(__name__)
-
-notifier_opts = [
- cfg.MultiStrOpt('notification_driver',
- default=[],
- help='Driver or drivers to handle sending notifications'),
- cfg.StrOpt('default_notification_level',
- default='INFO',
- help='Default notification level for outgoing notifications'),
- cfg.StrOpt('default_publisher_id',
- default=None,
- help='Default publisher_id for outgoing notifications'),
-]
-
-CONF = cfg.CONF
-CONF.register_opts(notifier_opts)
-
-WARN = 'WARN'
-INFO = 'INFO'
-ERROR = 'ERROR'
-CRITICAL = 'CRITICAL'
-DEBUG = 'DEBUG'
-
-log_levels = (DEBUG, WARN, INFO, ERROR, CRITICAL)
-
-
-class BadPriorityException(Exception):
- pass
-
-
-def notify_decorator(name, fn):
- """Decorator for notify which is used from utils.monkey_patch().
-
- :param name: name of the function
- :param fn: the function object to wrap
- :returns: function -- the decorated function
-
- """
- def wrapped_func(*args, **kwarg):
- body = {}
- body['args'] = []
- body['kwarg'] = {}
- for arg in args:
- body['args'].append(arg)
- for key in kwarg:
- body['kwarg'][key] = kwarg[key]
-
- ctxt = context.get_context_from_function_and_args(fn, args, kwarg)
- notify(ctxt,
- CONF.default_publisher_id or socket.gethostname(),
- name,
- CONF.default_notification_level,
- body)
- return fn(*args, **kwarg)
- return wrapped_func
-
-
-def publisher_id(service, host=None):
- if not host:
- try:
- host = CONF.host
- except AttributeError:
- host = CONF.default_publisher_id or socket.gethostname()
- return "%s.%s" % (service, host)
-
-
-def notify(context, publisher_id, event_type, priority, payload):
- """Sends a notification using the specified driver
-
- :param publisher_id: the source worker_type.host of the message
- :param event_type: the literal type of event (ex. Instance Creation)
- :param priority: patterned after the enumeration of Python logging
- levels in the set (DEBUG, WARN, INFO, ERROR, CRITICAL)
- :param payload: A python dictionary of attributes
-
- Outgoing message format includes the above parameters, and appends the
- following:
-
- message_id
- a UUID representing the id for this notification
-
- timestamp
- the GMT timestamp the notification was sent at
-
- The composite message will be constructed as a dictionary of the above
- attributes, which will then be sent via the transport mechanism defined
- by the driver.
-
- Message example::
-
- {'message_id': str(uuid.uuid4()),
- 'publisher_id': 'compute.host1',
- 'timestamp': timeutils.utcnow(),
- 'priority': 'WARN',
- 'event_type': 'compute.create_instance',
- 'payload': {'instance_id': 12, ... }}
-
- """
- if priority not in log_levels:
- raise BadPriorityException(
- _('%s not in valid priorities') % priority)
-
- # Ensure everything is JSON serializable.
- payload = jsonutils.to_primitive(payload, convert_instances=True)
-
- msg = dict(message_id=str(uuid.uuid4()),
- publisher_id=publisher_id,
- event_type=event_type,
- priority=priority,
- payload=payload,
- timestamp=str(timeutils.utcnow()))
-
- for driver in _get_drivers():
- try:
- driver.notify(context, msg)
- except Exception as e:
- LOG.exception(_("Problem '%(e)s' attempting to "
- "send to notification system. "
- "Payload=%(payload)s")
- % dict(e=e, payload=payload))
-
-
-_drivers = None
-
-
-def _get_drivers():
- """Instantiate, cache, and return drivers based on the CONF."""
- global _drivers
- if _drivers is None:
- _drivers = {}
- for notification_driver in CONF.notification_driver:
- try:
- driver = importutils.import_module(notification_driver)
- _drivers[notification_driver] = driver
- except ImportError:
- LOG.exception(_("Failed to load notifier %s. "
- "These notifications will not be sent.") %
- notification_driver)
- return _drivers.values()
-
-
-def _reset_drivers():
- """Used by unit tests to reset the drivers."""
- global _drivers
- _drivers = None
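
The `notify()` docstring above spells out the envelope appended to every outgoing message. A self-contained sketch of that envelope construction, using only the standard library (`build_notification` is a hypothetical name; the real code delegates timestamping and serialization to oslo's `timeutils` and `jsonutils`):

```python
import datetime
import uuid

# Mirrors the module-level priority constants.
LOG_LEVELS = ('DEBUG', 'WARN', 'INFO', 'ERROR', 'CRITICAL')


def build_notification(publisher_id, event_type, priority, payload):
    """Sketch of the message dict notify() hands to each driver."""
    if priority not in LOG_LEVELS:
        raise ValueError('%s not in valid priorities' % priority)
    return {'message_id': str(uuid.uuid4()),
            'publisher_id': publisher_id,
            'event_type': event_type,
            'priority': priority,
            'payload': payload,
            'timestamp': str(datetime.datetime.utcnow())}
```

The real `notify()` additionally runs the payload through `jsonutils.to_primitive()` so that arbitrary objects become JSON-serializable before the envelope is sent.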
+++ /dev/null
-# Copyright 2011 OpenStack LLC.
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from oslo.config import cfg
-
-from heat.openstack.common.gettextutils import _
-from heat.openstack.common import importutils
-from heat.openstack.common import log as logging
-
-
-list_notifier_drivers_opt = cfg.MultiStrOpt(
- 'list_notifier_drivers',
- default=['heat.openstack.common.notifier.no_op_notifier'],
- help='List of drivers to send notifications')
-
-CONF = cfg.CONF
-CONF.register_opt(list_notifier_drivers_opt)
-
-LOG = logging.getLogger(__name__)
-
-drivers = None
-
-
-class ImportFailureNotifier(object):
- """Noisily re-raises some exception over-and-over when notify is called."""
-
- def __init__(self, exception):
- self.exception = exception
-
- def notify(self, context, message):
- raise self.exception
-
-
-def _get_drivers():
- """Instantiates and returns drivers based on the flag values."""
- global drivers
- if drivers is None:
- drivers = []
- for notification_driver in CONF.list_notifier_drivers:
- try:
- drivers.append(importutils.import_module(notification_driver))
- except ImportError as e:
- drivers.append(ImportFailureNotifier(e))
- return drivers
-
-
-def add_driver(notification_driver):
- """Add a notification driver at runtime."""
- # Make sure the driver list is initialized.
- _get_drivers()
- if isinstance(notification_driver, basestring):
- # Load and add
- try:
- drivers.append(importutils.import_module(notification_driver))
- except ImportError as e:
- drivers.append(ImportFailureNotifier(e))
- else:
- # Driver is already loaded; just add the object.
- drivers.append(notification_driver)
-
-
-def _object_name(obj):
- name = []
- if hasattr(obj, '__module__'):
- name.append(obj.__module__)
- if hasattr(obj, '__name__'):
- name.append(obj.__name__)
- else:
- name.append(obj.__class__.__name__)
- return '.'.join(name)
-
-
-def remove_driver(notification_driver):
- """Remove a notification driver at runtime."""
- # Make sure the driver list is initialized.
- _get_drivers()
- removed = False
- if notification_driver in drivers:
- # We're removing an object. Easy.
- drivers.remove(notification_driver)
- removed = True
- else:
- # We're removing a driver by name. Search for it.
- for driver in drivers:
- if _object_name(driver) == notification_driver:
- drivers.remove(driver)
- removed = True
-
- if not removed:
- raise ValueError("Cannot remove; %s is not in list" %
- notification_driver)
-
-
-def notify(context, message):
- """Passes notification to multiple notifiers in a list."""
- for driver in _get_drivers():
- try:
- driver.notify(context, message)
- except Exception as e:
- LOG.exception(_("Problem '%(e)s' attempting to send to "
- "notification driver %(driver)s."), locals())
-
-
-def _reset_drivers():
- """Used by unit tests to reset the drivers."""
- global drivers
- drivers = None
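
The list notifier's key behavior is failure isolation: one broken driver must not stop the others from receiving the message. A minimal sketch of that fan-out loop (the name `fan_out` and the callable-driver shape are illustrative assumptions; real drivers are modules exposing `notify(context, message)`):

```python
def fan_out(drivers, context, message):
    """Sketch of list_notifier.notify(): deliver to every driver,
    collecting failures instead of letting one abort the rest."""
    errors = []
    for driver in drivers:
        try:
            driver(context, message)
        except Exception as exc:
            # The real code logs the exception and moves on.
            errors.append(exc)
    return errors
```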
+++ /dev/null
-# Copyright 2011 OpenStack Foundation.
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from oslo.config import cfg
-
-from heat.openstack.common import jsonutils
-from heat.openstack.common import log as logging
-
-
-CONF = cfg.CONF
-
-
-def notify(_context, message):
- """Notifies the recipient of the desired event given the model.
-
- Log notifications using openstack's default logging system.
- """
-
- priority = message.get('priority',
- CONF.default_notification_level)
- priority = priority.lower()
- logger = logging.getLogger(
- 'heat.openstack.common.notification.%s' %
- message['event_type'])
- getattr(logger, priority)(jsonutils.dumps(message))
+++ /dev/null
-# Copyright 2011 OpenStack Foundation.
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-
-def notify(_context, message):
- """Notifies the recipient of the desired event given the model."""
- pass
+++ /dev/null
-# Copyright 2012 Red Hat, Inc.
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-
-from heat.openstack.common.gettextutils import _
-from heat.openstack.common import log as logging
-from heat.openstack.common.notifier import rpc_notifier
-
-LOG = logging.getLogger(__name__)
-
-
-def notify(context, message):
- """Deprecated in Grizzly. Please use rpc_notifier instead."""
-
- LOG.deprecated(_("The rabbit_notifier is now deprecated."
- " Please use rpc_notifier instead."))
- rpc_notifier.notify(context, message)
+++ /dev/null
-# Copyright 2011 OpenStack Foundation.
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from oslo.config import cfg
-
-from heat.openstack.common import context as req_context
-from heat.openstack.common.gettextutils import _ # noqa
-from heat.openstack.common import log as logging
-from heat.openstack.common import rpc
-
-LOG = logging.getLogger(__name__)
-
-notification_topic_opt = cfg.ListOpt(
- 'notification_topics', default=['notifications', ],
- help='AMQP topic used for openstack notifications')
-
-CONF = cfg.CONF
-CONF.register_opt(notification_topic_opt)
-
-
-def notify(context, message):
- """Sends a notification via RPC."""
- if not context:
- context = req_context.get_admin_context()
- priority = message.get('priority',
- CONF.default_notification_level)
- priority = priority.lower()
- for topic in CONF.notification_topics:
- topic = '%s.%s' % (topic, priority)
- try:
- rpc.notify(context, topic, message)
- except Exception:
- LOG.exception(_("Could not send notification to %(topic)s. "
- "Payload=%(message)s"), locals())
+++ /dev/null
-# Copyright 2011 OpenStack Foundation.
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-'''messaging based notification driver, with message envelopes'''
-
-from oslo.config import cfg
-
-from heat.openstack.common import context as req_context
-from heat.openstack.common.gettextutils import _ # noqa
-from heat.openstack.common import log as logging
-from heat.openstack.common import rpc
-
-LOG = logging.getLogger(__name__)
-
-notification_topic_opt = cfg.ListOpt(
- 'topics', default=['notifications', ],
- help='AMQP topic(s) used for openstack notifications')
-
-opt_group = cfg.OptGroup(name='rpc_notifier2',
- title='Options for rpc_notifier2')
-
-CONF = cfg.CONF
-CONF.register_group(opt_group)
-CONF.register_opt(notification_topic_opt, opt_group)
-
-
-def notify(context, message):
- """Sends a notification via RPC."""
- if not context:
- context = req_context.get_admin_context()
- priority = message.get('priority',
- CONF.default_notification_level)
- priority = priority.lower()
- for topic in CONF.rpc_notifier2.topics:
- topic = '%s.%s' % (topic, priority)
- try:
- rpc.notify(context, topic, message, envelope=True)
- except Exception:
- LOG.exception(_("Could not send notification to %(topic)s. "
- "Payload=%(message)s"), locals())
+++ /dev/null
-# Copyright 2011 OpenStack Foundation.
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-
-NOTIFICATIONS = []
-
-
-def notify(_context, message):
- """Test notifier, stores notifications in memory for unittests."""
- NOTIFICATIONS.append(message)
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Copyright (c) 2012 OpenStack Foundation.
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-"""
-Common Policy Engine Implementation
-
-Policies can be expressed in one of two forms: A list of lists, or a
-string written in the new policy language.
-
-In the list-of-lists representation, each check inside the innermost
-list is combined as with an "and" conjunction--for that check to pass,
-all the specified checks must pass. These innermost lists are then
-combined as with an "or" conjunction. This is the original way of
-expressing policies, but there now exists a new way: the policy
-language.
-
-In the policy language, each check is specified the same way as in the
-list-of-lists representation: a simple "a:b" pair that is matched to
-the correct code to perform that check. However, conjunction
-operators are available, allowing for more expressiveness in crafting
-policies.
-
-As an example, take the following rule, expressed in the list-of-lists
-representation::
-
- [["role:admin"], ["project_id:%(project_id)s", "role:projectadmin"]]
-
-In the policy language, this becomes::
-
- role:admin or (project_id:%(project_id)s and role:projectadmin)
-
-The policy language also has the "not" operator, allowing a richer
-policy rule::
-
- project_id:%(project_id)s and not role:dunce
-
-Finally, two special policy checks should be mentioned; the policy
-check "@" will always accept an access, and the policy check "!" will
-always reject an access. (Note that if a rule is either the empty
-list ("[]") or the empty string, this is equivalent to the "@" policy
-check.) Of these, the "!" policy check is probably the most useful,
-as it allows particular rules to be explicitly disabled.
-"""
-
-import abc
-import re
-import urllib
-import urllib2
-
-from oslo.config import cfg
-import six
-
-from heat.openstack.common import fileutils
-from heat.openstack.common.gettextutils import _ # noqa
-from heat.openstack.common import jsonutils
-from heat.openstack.common import log as logging
-
-policy_opts = [
- cfg.StrOpt('policy_file',
- default='policy.json',
- help=_('JSON file containing policy')),
- cfg.StrOpt('policy_default_rule',
- default='default',
- help=_('Rule enforced when requested rule is not found')),
-]
-
-CONF = cfg.CONF
-CONF.register_opts(policy_opts)
-
-LOG = logging.getLogger(__name__)
-
-_checks = {}
-
-
-class PolicyNotAuthorized(Exception):
-
- def __init__(self, rule):
- msg = _("Policy doesn't allow %s to be performed.") % rule
- super(PolicyNotAuthorized, self).__init__(msg)
-
-
-class Rules(dict):
- """A store for rules. Handles the default_rule setting directly."""
-
- @classmethod
- def load_json(cls, data, default_rule=None):
- """Allow loading of JSON rule data."""
-
- # Suck in the JSON data and parse the rules
- rules = dict((k, parse_rule(v)) for k, v in
- jsonutils.loads(data).items())
-
- return cls(rules, default_rule)
-
- def __init__(self, rules=None, default_rule=None):
- """Initialize the Rules store."""
-
- super(Rules, self).__init__(rules or {})
- self.default_rule = default_rule
-
- def __missing__(self, key):
- """Implements the default rule handling."""
-
- if isinstance(self.default_rule, dict):
- raise KeyError(key)
-
- # If the default rule isn't actually defined, do something
- # reasonably intelligent
- if not self.default_rule or self.default_rule not in self:
- raise KeyError(key)
-
- if isinstance(self.default_rule, BaseCheck):
- return self.default_rule
- elif isinstance(self.default_rule, six.string_types):
- return self[self.default_rule]
-
- def __str__(self):
- """Dumps a string representation of the rules."""
-
- # Start by building the canonical strings for the rules
- out_rules = {}
- for key, value in self.items():
- # Use empty string for singleton TrueCheck instances
- if isinstance(value, TrueCheck):
- out_rules[key] = ''
- else:
- out_rules[key] = str(value)
-
- # Dump a pretty-printed JSON representation
- return jsonutils.dumps(out_rules, indent=4)
-
-
-class Enforcer(object):
- """Responsible for loading and enforcing rules.
-
- :param policy_file: Custom policy file to use, if none is
- specified, `CONF.policy_file` will be
- used.
- :param rules: Default dictionary / Rules to use. It will be
- considered just in the first instantiation. If
- `load_rules(True)`, `clear()` or `set_rules(True)`
- is called this will be overwritten.
- :param default_rule: Default rule to use, CONF.default_rule will
- be used if none is specified.
- """
-
- def __init__(self, policy_file=None, rules=None, default_rule=None):
- self.rules = Rules(rules, default_rule)
- self.default_rule = default_rule or CONF.policy_default_rule
-
- self.policy_path = None
- self.policy_file = policy_file or CONF.policy_file
-
- def set_rules(self, rules, overwrite=True):
- """Create a new Rules object based on the provided dict of rules.
-
- :param rules: New rules to use. It should be an instance of dict.
- :param overwrite: Whether to overwrite current rules or update them
- with the new rules.
- """
-
- if not isinstance(rules, dict):
- raise TypeError(_("Rules must be an instance of dict or Rules, "
- "got %s instead") % type(rules))
-
- if overwrite:
- self.rules = Rules(rules, self.default_rule)
- else:
- self.rules.update(rules)
-
- def clear(self):
- """Clears Enforcer rules, policy's cache and policy's path."""
- self.set_rules({})
- self.default_rule = None
- self.policy_path = None
-
- def load_rules(self, force_reload=False):
- """Loads policy_path's rules.
-
- Policy file is cached and will be reloaded if modified.
-
- :param force_reload: Whether to overwrite current rules.
- """
-
- if not self.policy_path:
- self.policy_path = self._get_policy_path()
-
- reloaded, data = fileutils.read_cached_file(self.policy_path,
- force_reload=force_reload)
- if reloaded or not self.rules:
- rules = Rules.load_json(data, self.default_rule)
- self.set_rules(rules)
- LOG.debug(_("Rules successfully reloaded"))
-
- def _get_policy_path(self):
- """Locate the policy json data file.
-
- :param policy_file: Custom policy file to locate.
-
- :returns: The policy path
-
- :raises: ConfigFilesNotFoundError if the file couldn't
- be located.
- """
- policy_file = CONF.find_file(self.policy_file)
-
- if policy_file:
- return policy_file
-
- raise cfg.ConfigFilesNotFoundError(path=CONF.policy_file)
-
- def enforce(self, rule, target, creds, do_raise=False,
- exc=None, *args, **kwargs):
- """Checks authorization of a rule against the target and credentials.
-
- :param rule: A string or BaseCheck instance specifying the rule
- to evaluate.
- :param target: As much information about the object being operated
- on as possible, as a dictionary.
- :param creds: As much information about the user performing the
- action as possible, as a dictionary.
- :param do_raise: Whether to raise an exception or not if check
- fails.
- :param exc: Class of the exception to raise if the check fails.
- Any remaining arguments passed to check() (both
- positional and keyword arguments) will be passed to
- the exception class. If not specified, PolicyNotAuthorized
- will be used.
-
- :return: Returns False if the policy does not allow the action and
- exc is not provided; otherwise, returns a value that
- evaluates to True. Note: for rules using the "case"
- expression, this True value will be the specified string
- from the expression.
- """
-
- # NOTE(flaper87): Not logging target or creds to avoid
- # potential security issues.
- LOG.debug(_("Rule %s will be now enforced") % rule)
-
- self.load_rules()
-
- # Allow the rule to be a Check tree
- if isinstance(rule, BaseCheck):
- result = rule(target, creds, self)
- elif not self.rules:
- # No rules to reference means we're going to fail closed
- result = False
- else:
- try:
- # Evaluate the rule
- result = self.rules[rule](target, creds, self)
- except KeyError:
- LOG.debug(_("Rule [%s] doesn't exist") % rule)
- # If the rule doesn't exist, fail closed
- result = False
-
- # If it is False, raise the exception if requested
- if do_raise and not result:
- if exc:
- raise exc(*args, **kwargs)
-
- raise PolicyNotAuthorized(rule)
-
- return result
-
-
-class BaseCheck(object):
- """Abstract base class for Check classes."""
-
- __metaclass__ = abc.ABCMeta
-
- @abc.abstractmethod
- def __str__(self):
- """String representation of the Check tree rooted at this node."""
-
- pass
-
- @abc.abstractmethod
- def __call__(self, target, cred, enforcer):
- """Triggers if instance of the class is called.
-
- Performs the check. Returns False to reject the access or a
- true value (not necessary True) to accept the access.
- """
-
- pass
-
-
-class FalseCheck(BaseCheck):
- """A policy check that always returns False (disallow)."""
-
- def __str__(self):
- """Return a string representation of this check."""
-
- return "!"
-
- def __call__(self, target, cred, enforcer):
- """Check the policy."""
-
- return False
-
-
-class TrueCheck(BaseCheck):
- """A policy check that always returns True (allow)."""
-
- def __str__(self):
- """Return a string representation of this check."""
-
- return "@"
-
- def __call__(self, target, cred, enforcer):
- """Check the policy."""
-
- return True
-
-
-class Check(BaseCheck):
- """A base class to allow for user-defined policy checks."""
-
- def __init__(self, kind, match):
- """Initiates Check instance.
-
- :param kind: The kind of the check, i.e., the field before the
- ':'.
- :param match: The match of the check, i.e., the field after
- the ':'.
- """
-
- self.kind = kind
- self.match = match
-
- def __str__(self):
- """Return a string representation of this check."""
-
- return "%s:%s" % (self.kind, self.match)
-
-
-class NotCheck(BaseCheck):
- """Implements the "not" logical operator.
-
- A policy check that inverts the result of another policy check.
- """
-
- def __init__(self, rule):
- """Initialize the 'not' check.
-
- :param rule: The rule to negate. Must be a Check.
- """
-
- self.rule = rule
-
- def __str__(self):
- """Return a string representation of this check."""
-
- return "not %s" % self.rule
-
- def __call__(self, target, cred, enforcer):
- """Check the policy.
-
- Returns the logical inverse of the wrapped check.
- """
-
- return not self.rule(target, cred, enforcer)
-
-
-class AndCheck(BaseCheck):
- """Implements the "and" logical operator.
-
- A policy check that requires that a list of other checks all return True.
- """
-
- def __init__(self, rules):
- """Initialize the 'and' check.
-
- :param rules: A list of rules that will be tested.
- """
-
- self.rules = rules
-
- def __str__(self):
- """Return a string representation of this check."""
-
- return "(%s)" % ' and '.join(str(r) for r in self.rules)
-
- def __call__(self, target, cred, enforcer):
- """Check the policy.
-
- Requires that all rules accept in order to return True.
- """
-
- for rule in self.rules:
- if not rule(target, cred, enforcer):
- return False
-
- return True
-
- def add_check(self, rule):
- """Adds rule to be tested.
-
- Allows addition of another rule to the list of rules that will
- be tested. Returns the AndCheck object for convenience.
- """
-
- self.rules.append(rule)
- return self
-
-
-class OrCheck(BaseCheck):
- """Implements the "or" operator.
-
- A policy check that requires that at least one of a list of other
- checks returns True.
- """
-
- def __init__(self, rules):
- """Initialize the 'or' check.
-
- :param rules: A list of rules that will be tested.
- """
-
- self.rules = rules
-
- def __str__(self):
- """Return a string representation of this check."""
-
- return "(%s)" % ' or '.join(str(r) for r in self.rules)
-
- def __call__(self, target, cred, enforcer):
- """Check the policy.
-
- Requires that at least one rule accept in order to return True.
- """
-
- for rule in self.rules:
- if rule(target, cred):
- return True
-
- return False
-
- def add_check(self, rule):
- """Adds rule to be tested.
-
- Allows addition of another rule to the list of rules that will
- be tested. Returns the OrCheck object for convenience.
- """
-
- self.rules.append(rule)
- return self
-
-
-def _parse_check(rule):
- """Parse a single base check rule into an appropriate Check object."""
-
- # Handle the special checks
- if rule == '!':
- return FalseCheck()
- elif rule == '@':
- return TrueCheck()
-
- try:
- kind, match = rule.split(':', 1)
- except Exception:
- LOG.exception(_("Failed to understand rule %s") % rule)
- # If the rule is invalid, we'll fail closed
- return FalseCheck()
-
- # Find what implements the check
- if kind in _checks:
- return _checks[kind](kind, match)
- elif None in _checks:
- return _checks[None](kind, match)
- else:
- LOG.error(_("No handler for matches of kind %s") % kind)
- return FalseCheck()
-
-
-def _parse_list_rule(rule):
- """Translates the old list-of-lists syntax into a tree of Check objects.
-
- Provided for backwards compatibility.
- """
-
- # Empty rule defaults to True
- if not rule:
- return TrueCheck()
-
- # Outer list is joined by "or"; inner list by "and"
- or_list = []
- for inner_rule in rule:
- # Elide empty inner lists
- if not inner_rule:
- continue
-
- # Handle bare strings
- if isinstance(inner_rule, basestring):
- inner_rule = [inner_rule]
-
- # Parse the inner rules into Check objects
- and_list = [_parse_check(r) for r in inner_rule]
-
- # Append the appropriate check to the or_list
- if len(and_list) == 1:
- or_list.append(and_list[0])
- else:
- or_list.append(AndCheck(and_list))
-
- # If we have only one check, omit the "or"
- if not or_list:
- return FalseCheck()
- elif len(or_list) == 1:
- return or_list[0]
-
- return OrCheck(or_list)
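The OR-of-ANDs semantics that `_parse_list_rule` translates can be illustrated with a hypothetical evaluator over plain check strings. `evaluate` and the set-of-credentials representation are assumptions for the sketch, not part of this module:

```python
def evaluate(list_rule, creds):
    """Evaluate the legacy list-of-lists syntax against a set of
    check strings: the outer list is OR-ed, each inner list AND-ed."""
    if not list_rule:
        # Empty rule defaults to True, as in _parse_list_rule
        return True
    results = []
    for inner in list_rule:
        # Elide empty inner lists
        if not inner:
            continue
        # Handle bare strings as one-element AND lists
        if isinstance(inner, str):
            inner = [inner]
        results.append(all(check in creds for check in inner))
    # No surviving inner lists means fail closed
    return any(results) if results else False


# "role:admin" OR ("role:member" AND "tenant:demo")
rule = [["role:admin"], ["role:member", "tenant:demo"]]
```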
-
-
-# Used for tokenizing the policy language
-_tokenize_re = re.compile(r'\s+')
-
-
-def _parse_tokenize(rule):
- """Tokenizer for the policy language.
-
- Tokens are split on whitespace via _tokenize_re; parentheses need
- to be handled specially, because they can appear inside a check
- string. Thankfully, parentheses that appear inside a check string
- can never occur at the very beginning or end ("%(variable)s" is
- the correct syntax).
- """
-
- for tok in _tokenize_re.split(rule):
- # Skip empty tokens
- if not tok or tok.isspace():
- continue
-
- # Handle leading parens on the token
- clean = tok.lstrip('(')
- for i in range(len(tok) - len(clean)):
- yield '(', '('
-
- # If it was only parentheses, continue
- if not clean:
- continue
- else:
- tok = clean
-
- # Handle trailing parens on the token
- clean = tok.rstrip(')')
- trail = len(tok) - len(clean)
-
- # Yield the cleaned token
- lowered = clean.lower()
- if lowered in ('and', 'or', 'not'):
- # Special tokens
- yield lowered, clean
- elif clean:
- # Not a special token, but not composed solely of ')'
- if len(tok) >= 2 and ((tok[0], tok[-1]) in
- [('"', '"'), ("'", "'")]):
- # It's a quoted string
- yield 'string', tok[1:-1]
- else:
- yield 'check', _parse_check(clean)
-
- # Yield the trailing parens
- for i in range(trail):
- yield ')', ')'
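The paren-peeling trick can be shown in a simplified standalone sketch. This version drops the and/or/not and quoted-string classification and just yields raw tokens; it is an illustration, not the real tokenizer:

```python
import re


def tokenize(rule):
    """Peel leading '(' and trailing ')' off whitespace-split tokens,
    leaving parens inside a token (e.g. "%(var)s") untouched."""
    for tok in re.split(r'\s+', rule):
        if not tok:
            continue
        # Leading parens become their own tokens
        stripped = tok.lstrip('(')
        for _ in range(len(tok) - len(stripped)):
            yield '('
        if not stripped:
            continue
        # Trailing parens are yielded after the token body
        body = stripped.rstrip(')')
        trail = len(stripped) - len(body)
        if body:
            yield body
        for _ in range(trail):
            yield ')'
```

Note that the `)` inside `%(tenant_id)s` survives, because it is neither at the start nor at the end of its token.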
-
-
-class ParseStateMeta(type):
- """Metaclass for the ParseState class.
-
- Facilitates identifying reduction methods.
- """
-
- def __new__(mcs, name, bases, cls_dict):
- """Create the class.
-
- Injects the 'reducers' list, a list of tuples matching token sequences
- to the names of the corresponding reduction methods.
- """
-
- reducers = []
-
- for key, value in cls_dict.items():
- if not hasattr(value, 'reducers'):
- continue
- for reduction in value.reducers:
- reducers.append((reduction, key))
-
- cls_dict['reducers'] = reducers
-
- return super(ParseStateMeta, mcs).__new__(mcs, name, bases, cls_dict)
-
-
-def reducer(*tokens):
- """Decorator for reduction methods.
-
- Arguments are a sequence of tokens, in order, which should trigger running
- this reduction method.
- """
-
- def decorator(func):
- # Make sure we have a list of reducer sequences
- if not hasattr(func, 'reducers'):
- func.reducers = []
-
- # Add the tokens to the list of reducer sequences
- func.reducers.append(list(tokens))
-
- return func
-
- return decorator
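The decorator's bookkeeping is easy to see in isolation: each `@reducer(...)` application appends one token pattern to the function's `reducers` list, which `ParseStateMeta` later collects. A self-contained sketch (the `handle_and` name is illustrative):

```python
def reducer(*tokens):
    """Attach a token pattern to the decorated reduction method."""
    def decorator(func):
        # Make sure we have a list of reducer sequences
        if not hasattr(func, 'reducers'):
            func.reducers = []
        # Add this pattern to the list of reducer sequences
        func.reducers.append(list(tokens))
        return func
    return decorator


@reducer('check', 'and', 'check')
@reducer('and_expr', 'and', 'check')
def handle_and(*values):
    pass
```

Stacked decorators simply accumulate patterns, so one method can serve several token sequences.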
-
-
-class ParseState(object):
- """Implement the core of parsing the policy language.
-
- Uses a greedy reduction algorithm to reduce a sequence of tokens into
- a single terminal, the value of which will be the root of the Check tree.
-
- Note: error reporting is rather lacking. The best we can get with
- this parser formulation is an overall "parse failed" error.
- Fortunately, the policy language is simple enough that this
- shouldn't be that big a problem.
- """
-
- __metaclass__ = ParseStateMeta
-
- def __init__(self):
- """Initialize the ParseState."""
-
- self.tokens = []
- self.values = []
-
- def reduce(self):
- """Perform a greedy reduction of the token stream.
-
- If a reducer method matches, it will be executed, then the
- reduce() method will be called recursively to search for any more
- possible reductions.
- """
-
- for reduction, methname in self.reducers:
- if (len(self.tokens) >= len(reduction) and
- self.tokens[-len(reduction):] == reduction):
- # Get the reduction method
- meth = getattr(self, methname)
-
- # Reduce the token stream
- results = meth(*self.values[-len(reduction):])
-
- # Update the tokens and values
- self.tokens[-len(reduction):] = [r[0] for r in results]
- self.values[-len(reduction):] = [r[1] for r in results]
-
- # Check for any more reductions
- return self.reduce()
-
- def shift(self, tok, value):
- """Adds one more token to the state. Calls reduce()."""
-
- self.tokens.append(tok)
- self.values.append(value)
-
- # Do a greedy reduce...
- self.reduce()
-
- @property
- def result(self):
- """Obtain the final result of the parse.
-
- Raises ValueError if the parse failed to reduce to a single result.
- """
-
- if len(self.values) != 1:
- raise ValueError("Could not parse rule")
- return self.values[0]
-
- @reducer('(', 'check', ')')
- @reducer('(', 'and_expr', ')')
- @reducer('(', 'or_expr', ')')
- def _wrap_check(self, _p1, check, _p2):
- """Turn parenthesized expressions into a 'check' token."""
-
- return [('check', check)]
-
- @reducer('check', 'and', 'check')
- def _make_and_expr(self, check1, _and, check2):
- """Create an 'and_expr'.
-
- Join two checks by the 'and' operator.
- """
-
- return [('and_expr', AndCheck([check1, check2]))]
-
- @reducer('and_expr', 'and', 'check')
- def _extend_and_expr(self, and_expr, _and, check):
- """Extend an 'and_expr' by adding one more check."""
-
- return [('and_expr', and_expr.add_check(check))]
-
- @reducer('check', 'or', 'check')
- def _make_or_expr(self, check1, _or, check2):
- """Create an 'or_expr'.
-
- Join two checks by the 'or' operator.
- """
-
- return [('or_expr', OrCheck([check1, check2]))]
-
- @reducer('or_expr', 'or', 'check')
- def _extend_or_expr(self, or_expr, _or, check):
- """Extend an 'or_expr' by adding one more check."""
-
- return [('or_expr', or_expr.add_check(check))]
-
- @reducer('not', 'check')
- def _make_not_expr(self, _not, check):
- """Invert the result of another check."""
-
- return [('check', NotCheck(check))]
-
-
-def _parse_text_rule(rule):
- """Parses policy to the tree.
-
- Translates a policy written in the policy language into a tree of
- Check objects.
- """
-
- # Empty rule means always accept
- if not rule:
- return TrueCheck()
-
- # Parse the token stream
- state = ParseState()
- for tok, value in _parse_tokenize(rule):
- state.shift(tok, value)
-
- try:
- return state.result
- except ValueError:
- # Couldn't parse the rule
- LOG.exception(_("Failed to understand rule %r") % rule)
-
- # Fail closed
- return FalseCheck()
-
-
-def parse_rule(rule):
- """Parses a policy rule into a tree of Check objects."""
-
- # If the rule is a string, it's in the policy language
- if isinstance(rule, basestring):
- return _parse_text_rule(rule)
- return _parse_list_rule(rule)
-
-
-def register(name, func=None):
- """Register a function or Check class as a policy check.
-
- :param name: Gives the name of the check type, e.g., 'rule',
- 'role', etc. If name is None, a default check type
- will be registered.
- :param func: If given, provides the function or class to register.
- If not given, returns a function taking one argument
- to specify the function or class to register,
- allowing use as a decorator.
- """
-
- # Perform the actual decoration by registering the function or
- # class. Returns the function or class for compliance with the
- # decorator interface.
- def decorator(func):
- _checks[name] = func
- return func
-
- # If the function or class is given, do the registration
- if func:
- return decorator(func)
-
- return decorator
-
-
-@register("rule")
-class RuleCheck(Check):
- def __call__(self, target, creds, enforcer):
- """Recursively checks credentials based on the defined rules."""
-
- try:
- return enforcer.rules[self.match](target, creds, enforcer)
- except KeyError:
- # We don't have any matching rule; fail closed
- return False
-
-
-@register("role")
-class RoleCheck(Check):
- def __call__(self, target, creds, enforcer):
- """Check that there is a matching role in the cred dict."""
-
- return self.match.lower() in [x.lower() for x in creds['roles']]
-
-
-@register('http')
-class HttpCheck(Check):
- def __call__(self, target, creds, enforcer):
- """Check http: rules by calling to a remote server.
-
- This example implementation simply verifies that the response
- is exactly 'True'.
- """
-
- url = ('http:' + self.match) % target
- data = {'target': jsonutils.dumps(target),
- 'credentials': jsonutils.dumps(creds)}
- post_data = urllib.urlencode(data)
- f = urllib2.urlopen(url, post_data)
- return f.read() == "True"
-
-
-@register(None)
-class GenericCheck(Check):
- def __call__(self, target, creds, enforcer):
- """Check an individual match.
-
- Matches look like:
-
- tenant:%(tenant_id)s
- role:compute:admin
- """
-
- # TODO(termie): do dict inspection via dot syntax
- match = self.match % target
- if self.kind in creds:
- return match == six.text_type(creds[self.kind])
- return False
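The matching step of GenericCheck can be sketched as a standalone function (the name `generic_check` is hypothetical; the real check is a class tied to the registry above):

```python
def generic_check(kind, match, target, creds):
    """Expand the %-format 'match' template from the request target,
    then compare it against the credential named by 'kind'."""
    # e.g. '%(tenant_id)s' % {'tenant_id': 'demo'} -> 'demo'
    expanded = match % target
    if kind in creds:
        # Credentials are stringified before comparison, as above
        return expanded == str(creds[kind])
    return False
```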
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Copyright 2011 OpenStack Foundation.
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-"""
-System-level utilities and helper functions.
-"""
-
-import os
-import random
-import shlex
-import signal
-
-from eventlet.green import subprocess
-from eventlet import greenthread
-
-from heat.openstack.common.gettextutils import _
-from heat.openstack.common import log as logging
-
-
-LOG = logging.getLogger(__name__)
-
-
-class InvalidArgumentError(Exception):
- def __init__(self, message=None):
- super(InvalidArgumentError, self).__init__(message)
-
-
-class UnknownArgumentError(Exception):
- def __init__(self, message=None):
- super(UnknownArgumentError, self).__init__(message)
-
-
-class ProcessExecutionError(Exception):
- def __init__(self, stdout=None, stderr=None, exit_code=None, cmd=None,
- description=None):
- self.exit_code = exit_code
- self.stderr = stderr
- self.stdout = stdout
- self.cmd = cmd
- self.description = description
-
- if description is None:
- description = "Unexpected error while running command."
- if exit_code is None:
- exit_code = '-'
- message = ("%s\nCommand: %s\nExit code: %s\nStdout: %r\nStderr: %r"
- % (description, cmd, exit_code, stdout, stderr))
- super(ProcessExecutionError, self).__init__(message)
-
-
-class NoRootWrapSpecified(Exception):
- def __init__(self, message=None):
- super(NoRootWrapSpecified, self).__init__(message)
-
-
-def _subprocess_setup():
- # Python installs a SIGPIPE handler by default. This is usually not what
- # non-Python subprocesses expect.
- signal.signal(signal.SIGPIPE, signal.SIG_DFL)
-
-
-def execute(*cmd, **kwargs):
- """
- Helper method to shell out and execute a command through subprocess with
- optional retry.
-
- :param cmd: Passed to subprocess.Popen.
- :type cmd: string
- :param process_input: Send to opened process.
- :type process_input: string
- :param check_exit_code: Single bool, int, or list of allowed exit
- codes. Defaults to [0]. Raise
- :class:`ProcessExecutionError` unless
- program exits with one of these codes.
- :type check_exit_code: boolean, int, or [int]
- :param delay_on_retry: True | False. Defaults to True. If set to True,
- wait a short amount of time before retrying.
- :type delay_on_retry: boolean
- :param attempts: How many times to retry cmd.
- :type attempts: int
- :param run_as_root: True | False. Defaults to False. If set to True,
- the command is prefixed by the command specified
- in the root_helper kwarg.
- :type run_as_root: boolean
- :param root_helper: command to prefix to commands called with
- run_as_root=True
- :type root_helper: string
- :param shell: whether or not there should be a shell used to
- execute this command. Defaults to false.
- :type shell: boolean
- :returns: (stdout, stderr) from process execution
- :raises: :class:`UnknownArgumentError` on
- receiving unknown arguments
- :raises: :class:`ProcessExecutionError`
- """
-
- process_input = kwargs.pop('process_input', None)
- check_exit_code = kwargs.pop('check_exit_code', [0])
- ignore_exit_code = False
- delay_on_retry = kwargs.pop('delay_on_retry', True)
- attempts = kwargs.pop('attempts', 1)
- run_as_root = kwargs.pop('run_as_root', False)
- root_helper = kwargs.pop('root_helper', '')
- shell = kwargs.pop('shell', False)
-
- if isinstance(check_exit_code, bool):
- ignore_exit_code = not check_exit_code
- check_exit_code = [0]
- elif isinstance(check_exit_code, int):
- check_exit_code = [check_exit_code]
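The `check_exit_code` normalization above accepts three shapes: a bool toggles checking entirely, an int becomes a one-element list, and a list is used as-is. A standalone sketch (the function name is illustrative):

```python
def normalize_exit_codes(check_exit_code):
    """Return (ignore_exit_code, allowed_codes) from a bool, int,
    or list, mirroring execute()'s normalization."""
    ignore_exit_code = False
    # bool must be tested before int, since bool is a subclass of int
    if isinstance(check_exit_code, bool):
        ignore_exit_code = not check_exit_code
        check_exit_code = [0]
    elif isinstance(check_exit_code, int):
        check_exit_code = [check_exit_code]
    return ignore_exit_code, check_exit_code
```

The bool-before-int ordering matters: `isinstance(True, int)` is also true in Python, so reversing the branches would silently treat booleans as exit codes.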
-
- if kwargs:
- raise UnknownArgumentError(_('Got unknown keyword args '
- 'to utils.execute: %r') % kwargs)
-
- if run_as_root and os.geteuid() != 0:
- if not root_helper:
- raise NoRootWrapSpecified(
- message=_('Command requested root, but did not specify a root '
- 'helper.'))
- cmd = shlex.split(root_helper) + list(cmd)
-
- cmd = map(str, cmd)
-
- while attempts > 0:
- attempts -= 1
- try:
- LOG.debug(_('Running cmd (subprocess): %s'), ' '.join(cmd))
- _PIPE = subprocess.PIPE # pylint: disable=E1101
-
- if os.name == 'nt':
- preexec_fn = None
- close_fds = False
- else:
- preexec_fn = _subprocess_setup
- close_fds = True
-
- obj = subprocess.Popen(cmd,
- stdin=_PIPE,
- stdout=_PIPE,
- stderr=_PIPE,
- close_fds=close_fds,
- preexec_fn=preexec_fn,
- shell=shell)
- result = None
- if process_input is not None:
- result = obj.communicate(process_input)
- else:
- result = obj.communicate()
- obj.stdin.close() # pylint: disable=E1101
- _returncode = obj.returncode # pylint: disable=E1101
- if _returncode:
- LOG.debug(_('Result was %s') % _returncode)
- if not ignore_exit_code and _returncode not in check_exit_code:
- (stdout, stderr) = result
- raise ProcessExecutionError(exit_code=_returncode,
- stdout=stdout,
- stderr=stderr,
- cmd=' '.join(cmd))
- return result
- except ProcessExecutionError:
- if not attempts:
- raise
- else:
- LOG.debug(_('%r failed. Retrying.'), cmd)
- if delay_on_retry:
- greenthread.sleep(random.randint(20, 200) / 100.0)
- finally:
- # NOTE(termie): this appears to be necessary to let the subprocess
- # call clean something up in between calls, without
- # it two execute calls in a row hangs the second one
- greenthread.sleep(0)
-
-
-def trycmd(*args, **kwargs):
- """
- A wrapper around execute() to more easily handle warnings and errors.
-
- Returns an (out, err) tuple of strings containing the output of
- the command's stdout and stderr. If 'err' is not empty then the
- command can be considered to have failed.
-
- :param discard_warnings: True | False. Defaults to False. If set to
- True, stderr is cleared for commands that succeed.
-
- """
- discard_warnings = kwargs.pop('discard_warnings', False)
-
- try:
- out, err = execute(*args, **kwargs)
- failed = False
- except ProcessExecutionError as exn:
- out, err = '', str(exn)
- failed = True
-
- if not failed and discard_warnings and err:
- # Handle commands that output to stderr but otherwise succeed
- err = ''
-
- return out, err
-
-
-def ssh_execute(ssh, cmd, process_input=None,
- addl_env=None, check_exit_code=True):
- LOG.debug(_('Running cmd (SSH): %s'), cmd)
- if addl_env:
- raise InvalidArgumentError(_('Environment not supported over SSH'))
-
- if process_input:
- # This is (probably) fixable if we need it...
- raise InvalidArgumentError(_('process_input not supported over SSH'))
-
- stdin_stream, stdout_stream, stderr_stream = ssh.exec_command(cmd)
- channel = stdout_stream.channel
-
- # NOTE(justinsb): This seems suspicious...
- # ...other SSH clients have buffering issues with this approach
- stdout = stdout_stream.read()
- stderr = stderr_stream.read()
- stdin_stream.close()
-
- exit_status = channel.recv_exit_status()
-
- # exit_status == -1 if no exit code was returned
- if exit_status != -1:
- LOG.debug(_('Result was %s') % exit_status)
- if check_exit_code and exit_status != 0:
- raise ProcessExecutionError(exit_code=exit_status,
- stdout=stdout,
- stderr=stderr,
- cmd=cmd)
-
- return (stdout, stderr)
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Copyright 2010 United States Government as represented by the
-# Administrator of the National Aeronautics and Space Administration.
-# All Rights Reserved.
-# Copyright 2011 Red Hat, Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-"""
-A remote procedure call (rpc) abstraction.
-
-For some wrappers that add message versioning to rpc, see:
- rpc.dispatcher
- rpc.proxy
-"""
-
-import inspect
-
-from oslo.config import cfg
-
-from heat.openstack.common.gettextutils import _ # noqa
-from heat.openstack.common import importutils
-from heat.openstack.common import local
-from heat.openstack.common import log as logging
-
-
-LOG = logging.getLogger(__name__)
-
-
-rpc_opts = [
- cfg.StrOpt('rpc_backend',
- default='%s.impl_kombu' % __package__,
- help="The messaging module to use, defaults to kombu."),
- cfg.IntOpt('rpc_thread_pool_size',
- default=64,
- help='Size of RPC thread pool'),
- cfg.IntOpt('rpc_conn_pool_size',
- default=30,
- help='Size of RPC connection pool'),
- cfg.IntOpt('rpc_response_timeout',
- default=60,
- help='Seconds to wait for a response from call or multicall'),
- cfg.IntOpt('rpc_cast_timeout',
- default=30,
- help='Seconds to wait before a cast expires (TTL). '
- 'Only supported by impl_zmq.'),
- cfg.ListOpt('allowed_rpc_exception_modules',
- default=['heat.openstack.common.exception',
- 'heat.common.exception',
- 'nova.exception',
- 'cinder.exception',
- 'exceptions',
- ],
- help='Modules of exceptions that are permitted to be recreated '
- 'upon receiving exception data from an rpc call.'),
- cfg.BoolOpt('fake_rabbit',
- default=False,
- help='If passed, use a fake RabbitMQ provider'),
- cfg.StrOpt('control_exchange',
- default='openstack',
- help='AMQP exchange to connect to if using RabbitMQ or Qpid'),
-]
-
-CONF = cfg.CONF
-CONF.register_opts(rpc_opts)
-
-
-def set_defaults(control_exchange):
- cfg.set_defaults(rpc_opts,
- control_exchange=control_exchange)
-
-
-def create_connection(new=True):
- """Create a connection to the message bus used for rpc.
-
- For some example usage of creating a connection and some consumers on that
- connection, see nova.service.
-
- :param new: Whether or not to create a new connection. A new connection
- will be created by default. If new is False, the
- implementation is free to return an existing connection from a
- pool.
-
- :returns: An instance of openstack.common.rpc.common.Connection
- """
- return _get_impl().create_connection(CONF, new=new)
-
-
-def _check_for_lock():
- if not CONF.debug:
- return None
-
- if ((hasattr(local.strong_store, 'locks_held')
- and local.strong_store.locks_held)):
- stack = ' :: '.join([frame[3] for frame in inspect.stack()])
- LOG.warn(_('An RPC call is being made while holding a lock. The locks '
- 'currently held are %(locks)s. This is probably a bug. '
- 'Please report it. Include the following: [%(stack)s].'),
- {'locks': local.strong_store.locks_held,
- 'stack': stack})
- return True
-
- return False
-
-
-def call(context, topic, msg, timeout=None, check_for_lock=False):
- """Invoke a remote method that returns something.
-
- :param context: Information that identifies the user that has made this
- request.
- :param topic: The topic to send the rpc message to. This correlates to the
- topic argument of
- openstack.common.rpc.common.Connection.create_consumer()
- and only applies when the consumer was created with
- fanout=False.
- :param msg: This is a dict in the form { "method" : "method_to_invoke",
- "args" : dict_of_kwargs }
- :param timeout: int, number of seconds to use for a response timeout.
- If set, this overrides the rpc_response_timeout option.
- :param check_for_lock: if True, a warning is emitted if an RPC call is made
- with a lock held.
-
- :returns: A dict from the remote method.
-
- :raises: openstack.common.rpc.common.Timeout if a complete response
- is not received before the timeout is reached.
- """
- if check_for_lock:
- _check_for_lock()
- return _get_impl().call(CONF, context, topic, msg, timeout)
-
-
-def cast(context, topic, msg):
- """Invoke a remote method that does not return anything.
-
- :param context: Information that identifies the user that has made this
- request.
- :param topic: The topic to send the rpc message to. This correlates to the
- topic argument of
- openstack.common.rpc.common.Connection.create_consumer()
- and only applies when the consumer was created with
- fanout=False.
- :param msg: This is a dict in the form { "method" : "method_to_invoke",
- "args" : dict_of_kwargs }
-
- :returns: None
- """
- return _get_impl().cast(CONF, context, topic, msg)
-
-
-def fanout_cast(context, topic, msg):
- """Broadcast a remote method invocation with no return.
-
- This method will get invoked on all consumers that were set up with this
- topic name and fanout=True.
-
- :param context: Information that identifies the user that has made this
- request.
- :param topic: The topic to send the rpc message to. This correlates to the
- topic argument of
- openstack.common.rpc.common.Connection.create_consumer()
- and only applies when the consumer was created with
- fanout=True.
- :param msg: This is a dict in the form { "method" : "method_to_invoke",
- "args" : dict_of_kwargs }
-
- :returns: None
- """
- return _get_impl().fanout_cast(CONF, context, topic, msg)
-
-
-def multicall(context, topic, msg, timeout=None, check_for_lock=False):
- """Invoke a remote method and get back an iterator.
-
- In this case, the remote method will be returning multiple values in
- separate messages, so the return values can be processed as they come in via
- an iterator.
-
- :param context: Information that identifies the user that has made this
- request.
- :param topic: The topic to send the rpc message to. This correlates to the
- topic argument of
- openstack.common.rpc.common.Connection.create_consumer()
- and only applies when the consumer was created with
- fanout=False.
- :param msg: This is a dict in the form { "method" : "method_to_invoke",
- "args" : dict_of_kwargs }
- :param timeout: int, number of seconds to use for a response timeout.
- If set, this overrides the rpc_response_timeout option.
- :param check_for_lock: if True, a warning is emitted if an RPC call is made
- with a lock held.
-
- :returns: An iterator. The iterator will yield a tuple (N, X) where N is
- an index that starts at 0 and increases by one for each value
- returned and X is the Nth value that was returned by the remote
- method.
-
- :raises: openstack.common.rpc.common.Timeout if a complete response
- is not received before the timeout is reached.
- """
- if check_for_lock:
- _check_for_lock()
- return _get_impl().multicall(CONF, context, topic, msg, timeout)
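The `(N, X)` contract described in the docstring can be illustrated with a fake in place of the real RPC iterator (`fake_multicall` is a stand-in; the real iterator comes from the configured backend):

```python
def fake_multicall():
    """Stand-in for a multicall() iterator: yields (index, value)
    tuples as the remote method returns values."""
    for n, value in enumerate(['a', 'b', 'c']):
        yield (n, value)


# Consumers typically only care about the values, in arrival order
results = [value for _n, value in fake_multicall()]
```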
-
-
-def notify(context, topic, msg, envelope=False):
- """Send notification event.
-
- :param context: Information that identifies the user that has made this
- request.
- :param topic: The topic to send the notification to.
- :param msg: This is a dict of content of event.
- :param envelope: Set to True to enable message envelope for notifications.
-
- :returns: None
- """
- return _get_impl().notify(cfg.CONF, context, topic, msg, envelope)
-
-
-def cleanup():
- """Clean up resoruces in use by implementation.
-
- Clean up any resources that have been allocated by the RPC implementation.
- This is typically open connections to a messaging service. This function
- would get called before an application using this API exits to allow
- connections to get torn down cleanly.
-
- :returns: None
- """
- return _get_impl().cleanup()
-
-
-def cast_to_server(context, server_params, topic, msg):
- """Invoke a remote method that does not return anything.
-
- :param context: Information that identifies the user that has made this
- request.
- :param server_params: Connection information
- :param topic: The topic to send the notification to.
- :param msg: This is a dict in the form { "method" : "method_to_invoke",
- "args" : dict_of_kwargs }
-
- :returns: None
- """
- return _get_impl().cast_to_server(CONF, context, server_params, topic,
- msg)
-
-
-def fanout_cast_to_server(context, server_params, topic, msg):
- """Broadcast to a remote method invocation with no return.
-
- :param context: Information that identifies the user that has made this
- request.
- :param server_params: Connection information
- :param topic: The topic to send the notification to.
- :param msg: This is a dict in the form { "method" : "method_to_invoke",
- "args" : dict_of_kwargs }
-
- :returns: None
- """
- return _get_impl().fanout_cast_to_server(CONF, context, server_params,
- topic, msg)
-
-
-def queue_get_for(context, topic, host):
- """Get a queue name for a given topic + host.
-
- This function only works if this naming convention is followed on the
- consumer side, as well. For example, in nova, every instance of the
- nova-foo service calls create_consumer() for two topics:
-
- foo
- foo.<host>
-
- Messages sent to the 'foo' topic are distributed to exactly one instance of
- the nova-foo service. The services are chosen in a round-robin fashion.
- Messages sent to the 'foo.<host>' topic are sent to the nova-foo service on
- <host>.
- """
- return '%s.%s' % (topic, host) if host else topic
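The naming convention is simple enough to state as a standalone sketch:

```python
def queue_get_for(topic, host):
    """'foo' addresses the shared round-robin queue; 'foo.<host>'
    addresses the service instance on a specific host."""
    return '%s.%s' % (topic, host) if host else topic
```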
-
-
-_RPCIMPL = None
-
-
-def _get_impl():
- """Delay import of rpc_backend until configuration is loaded."""
- global _RPCIMPL
- if _RPCIMPL is None:
- try:
- _RPCIMPL = importutils.import_module(CONF.rpc_backend)
- except ImportError:
- # For backwards compatibility with older nova config.
- impl = CONF.rpc_backend.replace('nova.rpc',
- 'nova.openstack.common.rpc')
- _RPCIMPL = importutils.import_module(impl)
- return _RPCIMPL
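The delayed-import pattern in `_get_impl` can be reduced to its core: import the configured backend module once, on first use, and cache it at module level. A sketch under the assumption that the backend name is passed in rather than read from `CONF`:

```python
import importlib

_IMPL = None


def get_impl(module_name):
    """Import the configured backend once and cache it."""
    global _IMPL
    if _IMPL is None:
        _IMPL = importlib.import_module(module_name)
    return _IMPL
```

Because the result is cached, later calls return the first-loaded module regardless of the argument, which is exactly why `_get_impl` is only safe to call after configuration is settled.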
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Copyright 2010 United States Government as represented by the
-# Administrator of the National Aeronautics and Space Administration.
-# All Rights Reserved.
-# Copyright 2011 - 2012, Red Hat, Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-"""
-Shared code between AMQP based openstack.common.rpc implementations.
-
- The code in this module is shared between the rpc implementations
-Specifically, this includes impl_kombu and impl_qpid. impl_carrot also uses
-AMQP, but is deprecated and predates this code.
-"""
-
-import collections
-import inspect
-import sys
-import uuid
-
-from eventlet import greenpool
-from eventlet import pools
-from eventlet import queue
-from eventlet import semaphore
-from oslo.config import cfg
-
-from heat.openstack.common import excutils
-from heat.openstack.common.gettextutils import _ # noqa
-from heat.openstack.common import local
-from heat.openstack.common import log as logging
-from heat.openstack.common.rpc import common as rpc_common
-
-
-amqp_opts = [
- cfg.BoolOpt('amqp_durable_queues',
- default=False,
- deprecated_name='rabbit_durable_queues',
- deprecated_group='DEFAULT',
- help='Use durable queues in amqp.'),
- cfg.BoolOpt('amqp_auto_delete',
- default=False,
- help='Auto-delete queues in amqp.'),
-]
-
-cfg.CONF.register_opts(amqp_opts)
-
-UNIQUE_ID = '_unique_id'
-LOG = logging.getLogger(__name__)
-
-
-class Pool(pools.Pool):
- """Class that implements a Pool of Connections."""
- def __init__(self, conf, connection_cls, *args, **kwargs):
- self.connection_cls = connection_cls
- self.conf = conf
- kwargs.setdefault("max_size", self.conf.rpc_conn_pool_size)
- kwargs.setdefault("order_as_stack", True)
- super(Pool, self).__init__(*args, **kwargs)
- self.reply_proxy = None
-
- # TODO(comstud): Timeout connections not used in a while
- def create(self):
- LOG.debug(_('Pool creating new connection'))
- return self.connection_cls(self.conf)
-
- def empty(self):
- while self.free_items:
- self.get().close()
- # Force a new connection pool to be created.
- # Note that this was added due to failing unit test cases. The issue
- # is that the above while loop gets all the cached connections from the
- # pool and closes them, but never returns them, causing a pool leak. The
- # unit tests hang waiting for an item to be returned to the pool; they
- # reach this point via the tearDown() method. At run time it is reached
- # via cleanup(), which appears only in service.py just before
- # sys.exit(), so cleanup() happens once and the leakage is not a
- # problem.
- self.connection_cls.pool = None
-
-
-_pool_create_sem = semaphore.Semaphore()
-
-
-def get_connection_pool(conf, connection_cls):
- with _pool_create_sem:
- # Make sure only one thread tries to create the connection pool.
- if not connection_cls.pool:
- connection_cls.pool = Pool(conf, connection_cls)
- return connection_cls.pool
-
-
-class ConnectionContext(rpc_common.Connection):
- """The class that is actually returned to the create_connection() caller.
-
- This is essentially a wrapper around Connection that supports 'with'.
- It can also return a new Connection, or one from a pool.
-
- The class also detects when an instance is about to be deleted, so
- Connections can be returned to the pool on exceptions and so forth,
- without making the caller responsible for catching them. Where
- possible, the connection is returned to the pool.
- """
-
- def __init__(self, conf, connection_pool, pooled=True, server_params=None):
- """Create a new connection, or get one from the pool."""
- self.connection = None
- self.conf = conf
- self.connection_pool = connection_pool
- if pooled:
- self.connection = connection_pool.get()
- else:
- self.connection = connection_pool.connection_cls(
- conf,
- server_params=server_params)
- self.pooled = pooled
-
- def __enter__(self):
- """When with ConnectionContext() is used, return self."""
- return self
-
- def _done(self):
- """If the connection came from a pool, clean it up and put it back.
- If it did not come from a pool, close it.
- """
- if self.connection:
- if self.pooled:
- # Reset the connection so it's ready for the next caller
- # to grab from the pool
- self.connection.reset()
- self.connection_pool.put(self.connection)
- else:
- try:
- self.connection.close()
- except Exception:
- pass
- self.connection = None
-
- def __exit__(self, exc_type, exc_value, tb):
- """End of 'with' statement. We're done here."""
- self._done()
-
- def __del__(self):
- """Caller is done with this connection. Make sure we cleaned up."""
- self._done()
-
- def close(self):
- """Caller is done with this connection."""
- self._done()
-
- def create_consumer(self, topic, proxy, fanout=False):
- self.connection.create_consumer(topic, proxy, fanout)
-
- def create_worker(self, topic, proxy, pool_name):
- self.connection.create_worker(topic, proxy, pool_name)
-
- def join_consumer_pool(self, callback, pool_name, topic, exchange_name,
- ack_on_error=True):
- self.connection.join_consumer_pool(callback,
- pool_name,
- topic,
- exchange_name,
- ack_on_error)
-
- def consume_in_thread(self):
- self.connection.consume_in_thread()
-
- def __getattr__(self, key):
- """Proxy all other calls to the Connection instance."""
- if self.connection:
- return getattr(self.connection, key)
- else:
- raise rpc_common.InvalidRPCConnectionReuse()
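The pooled-connection lifecycle that ConnectionContext implements — borrow on `__enter__`, return on `__exit__` even when the body raises — can be sketched against a plain list standing in for the pool. `PooledConnection` is a hypothetical name, not part of the module:

```python
class PooledConnection(object):
    """Borrow a connection on entry; return it on exit, even on error."""
    def __init__(self, pool):
        self.pool = pool
        self.conn = pool.pop()

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, tb):
        # Mirrors _done(): always hand the connection back to the pool.
        self.pool.append(self.conn)
        self.conn = None

pool = ['conn-a']
try:
    with PooledConnection(pool):
        assert pool == []          # connection checked out
        raise ValueError('boom')   # simulate a failure in the body
except ValueError:
    pass
assert pool == ['conn-a']          # returned despite the exception
```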
-
-
-class ReplyProxy(ConnectionContext):
- """Connection class for RPC replies / callbacks."""
- def __init__(self, conf, connection_pool):
- self._call_waiters = {}
- self._num_call_waiters = 0
- self._num_call_waiters_wrn_threshhold = 10
- self._reply_q = 'reply_' + uuid.uuid4().hex
- super(ReplyProxy, self).__init__(conf, connection_pool, pooled=False)
- self.declare_direct_consumer(self._reply_q, self._process_data)
- self.consume_in_thread()
-
- def _process_data(self, message_data):
- msg_id = message_data.pop('_msg_id', None)
- waiter = self._call_waiters.get(msg_id)
- if not waiter:
- LOG.warn(_('No calling threads waiting for msg_id : %(msg_id)s'
- ', message : %(data)s'), {'msg_id': msg_id,
- 'data': message_data})
- LOG.warn(_('_call_waiters: %s') % str(self._call_waiters))
- else:
- waiter.put(message_data)
-
- def add_call_waiter(self, waiter, msg_id):
- self._num_call_waiters += 1
- if self._num_call_waiters > self._num_call_waiters_wrn_threshhold:
- LOG.warn(_('Number of call waiters is greater than warning '
- 'threshold: %d. There could be a MulticallProxyWaiter '
- 'leak.') % self._num_call_waiters_wrn_threshhold)
- self._num_call_waiters_wrn_threshhold *= 2
- self._call_waiters[msg_id] = waiter
-
- def del_call_waiter(self, msg_id):
- self._num_call_waiters -= 1
- del self._call_waiters[msg_id]
-
- def get_reply_q(self):
- return self._reply_q
-
-
-def msg_reply(conf, msg_id, reply_q, connection_pool, reply=None,
- failure=None, ending=False, log_failure=True):
- """Sends a reply or an error on the channel signified by msg_id.
-
- Failure should be a sys.exc_info() tuple.
-
- """
- with ConnectionContext(conf, connection_pool) as conn:
- if failure:
- failure = rpc_common.serialize_remote_exception(failure,
- log_failure)
-
- msg = {'result': reply, 'failure': failure}
- if ending:
- msg['ending'] = True
- _add_unique_id(msg)
- # If a reply_q exists, add the msg_id to the reply and pass the
- # reply_q to direct_send() to use it as the response queue.
- # Otherwise use the msg_id for backward compatibility.
- if reply_q:
- msg['_msg_id'] = msg_id
- conn.direct_send(reply_q, rpc_common.serialize_msg(msg))
- else:
- conn.direct_send(msg_id, rpc_common.serialize_msg(msg))
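The routing decision in msg_reply() — prefer the shared reply_q and tag the message with its msg_id, otherwise fall back to a per-call queue named after the msg_id — can be isolated as a small sketch. `reply_target` is an illustrative helper, not part of the module:

```python
def reply_target(msg, msg_id, reply_q):
    # Newer callers advertise a shared reply queue; tagging the message
    # with its msg_id lets the ReplyProxy route it to the right waiter.
    if reply_q:
        msg['_msg_id'] = msg_id
        return reply_q
    # Older callers listen on a per-call queue named after the msg_id.
    return msg_id

msg = {'result': 42}
target = reply_target(msg, 'abc123', 'reply_deadbeef')
assert target == 'reply_deadbeef' and msg['_msg_id'] == 'abc123'
old_target = reply_target({'result': 42}, 'abc123', None)
assert old_target == 'abc123'
```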
-
-
-class RpcContext(rpc_common.CommonRpcContext):
- """Context that supports replying to a rpc.call."""
- def __init__(self, **kwargs):
- self.msg_id = kwargs.pop('msg_id', None)
- self.reply_q = kwargs.pop('reply_q', None)
- self.conf = kwargs.pop('conf')
- super(RpcContext, self).__init__(**kwargs)
-
- def deepcopy(self):
- values = self.to_dict()
- values['conf'] = self.conf
- values['msg_id'] = self.msg_id
- values['reply_q'] = self.reply_q
- return self.__class__(**values)
-
- def reply(self, reply=None, failure=None, ending=False,
- connection_pool=None, log_failure=True):
- if self.msg_id:
- msg_reply(self.conf, self.msg_id, self.reply_q, connection_pool,
- reply, failure, ending, log_failure)
- if ending:
- self.msg_id = None
-
-
-def unpack_context(conf, msg):
- """Unpack context from msg."""
- context_dict = {}
- for key in list(msg.keys()):
- # NOTE(vish): Some versions of python don't like unicode keys
- # in kwargs.
- key = str(key)
- if key.startswith('_context_'):
- value = msg.pop(key)
- context_dict[key[9:]] = value
- context_dict['msg_id'] = msg.pop('_msg_id', None)
- context_dict['reply_q'] = msg.pop('_reply_q', None)
- context_dict['conf'] = conf
- ctx = RpcContext.from_dict(context_dict)
- rpc_common._safe_log(LOG.debug, _('unpacked context: %s'), ctx.to_dict())
- return ctx
-
-
-def pack_context(msg, context):
- """Pack context into msg.
-
- Values for message keys need to be less than 255 chars, so we pull
- context out into a bunch of separate keys. If we want to support
- more arguments in rabbit messages, we may want to do the same
- for args at some point.
-
- """
- if isinstance(context, dict):
- context_d = dict([('_context_%s' % key, value)
- for (key, value) in context.iteritems()])
- else:
- context_d = dict([('_context_%s' % key, value)
- for (key, value) in context.to_dict().iteritems()])
-
- msg.update(context_d)
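The `_context_` key convention used by pack_context()/unpack_context() round-trips cleanly. This minimal sketch (dict-based context only; no conf, msg_id, or reply_q handling) shows the prefix being added and then stripped — note `len('_context_') == 9`, which is where the `key[9:]` slice comes from:

```python
def pack_context(msg, context):
    # Flatten the context into individual '_context_'-prefixed keys.
    msg.update(('_context_%s' % k, v) for k, v in context.items())

def unpack_context(msg):
    # Strip the prefix back off (len('_context_') == 9).
    ctx = {}
    for key in list(msg.keys()):
        if key.startswith('_context_'):
            ctx[key[9:]] = msg.pop(key)
    return ctx

msg = {'method': 'echo', 'args': {'value': 42}}
pack_context(msg, {'user_id': 'u1', 'auth_token': 'tok'})
ctx = unpack_context(msg)
assert ctx == {'user_id': 'u1', 'auth_token': 'tok'}
assert msg == {'method': 'echo', 'args': {'value': 42}}
```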
-
-
-class _MsgIdCache(object):
- """This class checks any duplicate messages."""
-
- # NOTE: This value could be made a configuration item, but it rarely
- # needs to change, so it is left static for now.
- DUP_MSG_CHECK_SIZE = 16
-
- def __init__(self, **kwargs):
- self.prev_msgids = collections.deque([],
- maxlen=self.DUP_MSG_CHECK_SIZE)
-
- def check_duplicate_message(self, message_data):
- """AMQP consumers may read same message twice when exceptions occur
- before ack is returned. This method prevents doing it.
- """
- if UNIQUE_ID in message_data:
- msg_id = message_data[UNIQUE_ID]
- if msg_id not in self.prev_msgids:
- self.prev_msgids.append(msg_id)
- else:
- raise rpc_common.DuplicateMessageError(msg_id=msg_id)
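The duplicate check relies on `collections.deque(maxlen=N)` silently evicting the oldest id once the window fills. A standalone sketch — returning a boolean instead of raising, with `MsgIdCache` as an illustrative name:

```python
import collections

class MsgIdCache(object):
    """Remember the last N unique ids so redeliveries can be spotted."""
    def __init__(self, size=16):
        self.prev_msgids = collections.deque(maxlen=size)

    def is_duplicate(self, msg_id):
        if msg_id in self.prev_msgids:
            return True
        self.prev_msgids.append(msg_id)
        return False

cache = MsgIdCache(size=2)
assert cache.is_duplicate('a') is False
assert cache.is_duplicate('a') is True    # redelivery caught
cache.is_duplicate('b')
cache.is_duplicate('c')                   # 'a' evicted by maxlen=2
assert cache.is_duplicate('a') is False   # aged out, treated as new
```

The bounded window means only recent redeliveries are caught, which matches the AMQP failure mode: a message is redelivered shortly after the consumer dies before acking.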
-
-
-def _add_unique_id(msg):
- """Add unique_id for checking duplicate messages."""
- unique_id = uuid.uuid4().hex
- msg.update({UNIQUE_ID: unique_id})
- LOG.debug(_('UNIQUE_ID is %s.') % (unique_id))
-
-
-class _ThreadPoolWithWait(object):
- """Base class for a delayed invocation manager.
-
- Used by the Connection class to start up green threads
- to handle incoming messages.
- """
-
- def __init__(self, conf, connection_pool):
- self.pool = greenpool.GreenPool(conf.rpc_thread_pool_size)
- self.connection_pool = connection_pool
- self.conf = conf
-
- def wait(self):
- """Wait for all callback threads to exit."""
- self.pool.waitall()
-
-
-class CallbackWrapper(_ThreadPoolWithWait):
- """Wraps a straight callback.
-
- Allows it to be invoked in a green thread.
- """
-
- def __init__(self, conf, callback, connection_pool,
- wait_for_consumers=False):
- """Initiates CallbackWrapper object.
-
- :param conf: cfg.CONF instance
- :param callback: a callable (probably a function)
- :param connection_pool: connection pool as returned by
- get_connection_pool()
- :param wait_for_consumers: wait for all green threads to
- complete and raise the last
- caught exception, if any.
-
- """
- super(CallbackWrapper, self).__init__(
- conf=conf,
- connection_pool=connection_pool,
- )
- self.callback = callback
- self.wait_for_consumers = wait_for_consumers
- self.exc_info = None
-
- def _wrap(self, message_data, **kwargs):
- """Wrap the callback invocation to catch exceptions.
- """
- try:
- self.callback(message_data, **kwargs)
- except Exception:
- self.exc_info = sys.exc_info()
-
- def __call__(self, message_data):
- self.exc_info = None
- self.pool.spawn_n(self._wrap, message_data)
-
- if self.wait_for_consumers:
- self.pool.waitall()
- if self.exc_info:
- raise self.exc_info[1], None, self.exc_info[2]
-
-
-class ProxyCallback(_ThreadPoolWithWait):
- """Calls methods on a proxy object based on method and args."""
-
- def __init__(self, conf, proxy, connection_pool):
- super(ProxyCallback, self).__init__(
- conf=conf,
- connection_pool=connection_pool,
- )
- self.proxy = proxy
- self.msg_id_cache = _MsgIdCache()
-
- def __call__(self, message_data):
- """Consumer callback to call a method on a proxy object.
-
- Parses the message for validity and fires off a thread to call the
- proxy object method.
-
- Message data should be a dictionary with two keys:
- method: string representing the method to call
- args: dictionary of arg: value
-
- Example: {'method': 'echo', 'args': {'value': 42}}
-
- """
- # It is important to clear the context here, because at this point
- # the previous context is stored in local.store.context
- if hasattr(local.store, 'context'):
- del local.store.context
- rpc_common._safe_log(LOG.debug, _('received %s'), message_data)
- self.msg_id_cache.check_duplicate_message(message_data)
- ctxt = unpack_context(self.conf, message_data)
- method = message_data.get('method')
- args = message_data.get('args', {})
- version = message_data.get('version')
- namespace = message_data.get('namespace')
- if not method:
- LOG.warn(_('no method for message: %s') % message_data)
- ctxt.reply(_('No method for message: %s') % message_data,
- connection_pool=self.connection_pool)
- return
- self.pool.spawn_n(self._process_data, ctxt, version, method,
- namespace, args)
-
- def _process_data(self, ctxt, version, method, namespace, args):
- """Process a message in a new thread.
-
- If the proxy object we have has a dispatch method
- (see rpc.dispatcher.RpcDispatcher), pass it the version,
- method, and args and let it dispatch as appropriate. If not, use
- the old behavior of magically calling the specified method on the
- proxy we have here.
- """
- ctxt.update_store()
- try:
- rval = self.proxy.dispatch(ctxt, version, method, namespace,
- **args)
- # Check if the result was a generator
- if inspect.isgenerator(rval):
- for x in rval:
- ctxt.reply(x, None, connection_pool=self.connection_pool)
- else:
- ctxt.reply(rval, None, connection_pool=self.connection_pool)
- # This final None tells multicall that it is done.
- ctxt.reply(ending=True, connection_pool=self.connection_pool)
- except rpc_common.ClientException as e:
- LOG.debug(_('Expected exception during message handling (%s)') %
- e._exc_info[1])
- ctxt.reply(None, e._exc_info,
- connection_pool=self.connection_pool,
- log_failure=False)
- except Exception:
- # sys.exc_info() is deleted by LOG.exception().
- exc_info = sys.exc_info()
- LOG.error(_('Exception during message handling'),
- exc_info=exc_info)
- ctxt.reply(None, exc_info, connection_pool=self.connection_pool)
-
-
-class MulticallProxyWaiter(object):
- def __init__(self, conf, msg_id, timeout, connection_pool):
- self._msg_id = msg_id
- self._timeout = timeout or conf.rpc_response_timeout
- self._reply_proxy = connection_pool.reply_proxy
- self._done = False
- self._got_ending = False
- self._conf = conf
- self._dataqueue = queue.LightQueue()
- # Add this caller to the reply proxy's call_waiters
- self._reply_proxy.add_call_waiter(self, self._msg_id)
- self.msg_id_cache = _MsgIdCache()
-
- def put(self, data):
- self._dataqueue.put(data)
-
- def done(self):
- if self._done:
- return
- self._done = True
- # Remove this caller from reply proxy's call_waiters
- self._reply_proxy.del_call_waiter(self._msg_id)
-
- def _process_data(self, data):
- result = None
- self.msg_id_cache.check_duplicate_message(data)
- if data['failure']:
- failure = data['failure']
- result = rpc_common.deserialize_remote_exception(self._conf,
- failure)
- elif data.get('ending', False):
- self._got_ending = True
- else:
- result = data['result']
- return result
-
- def __iter__(self):
- """Return a result until we get a reply with an 'ending' flag."""
- if self._done:
- raise StopIteration
- while True:
- try:
- data = self._dataqueue.get(timeout=self._timeout)
- result = self._process_data(data)
- except queue.Empty:
- self.done()
- raise rpc_common.Timeout()
- except Exception:
- with excutils.save_and_reraise_exception():
- self.done()
- if self._got_ending:
- self.done()
- raise StopIteration
- if isinstance(result, Exception):
- self.done()
- raise result
- yield result
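The consumption loop in `__iter__` — pull replies off a queue until one carries the 'ending' flag, raising on failures and timeouts — can be sketched with the stdlib queue in place of eventlet's LightQueue. `iter_replies` is a hypothetical helper, with the waiter bookkeeping omitted:

```python
import queue

def iter_replies(q, timeout=1.0):
    # Yield each result until a reply carrying the 'ending' flag arrives.
    while True:
        data = q.get(timeout=timeout)   # queue.Empty here maps to Timeout
        if data.get('ending'):
            return
        if data.get('failure'):
            raise RuntimeError(data['failure'])
        yield data['result']

q = queue.Queue()
for reply in ({'result': 1}, {'result': 2}, {'ending': True}):
    q.put(reply)
results = list(iter_replies(q))
assert results == [1, 2]
```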
-
-
-def create_connection(conf, new, connection_pool):
- """Create a connection."""
- return ConnectionContext(conf, connection_pool, pooled=not new)
-
-
-_reply_proxy_create_sem = semaphore.Semaphore()
-
-
-def multicall(conf, context, topic, msg, timeout, connection_pool):
- """Make a call that returns multiple times."""
- LOG.debug(_('Making synchronous call on %s ...'), topic)
- msg_id = uuid.uuid4().hex
- msg.update({'_msg_id': msg_id})
- LOG.debug(_('MSG_ID is %s') % (msg_id))
- _add_unique_id(msg)
- pack_context(msg, context)
-
- with _reply_proxy_create_sem:
- if not connection_pool.reply_proxy:
- connection_pool.reply_proxy = ReplyProxy(conf, connection_pool)
- msg.update({'_reply_q': connection_pool.reply_proxy.get_reply_q()})
- wait_msg = MulticallProxyWaiter(conf, msg_id, timeout, connection_pool)
- with ConnectionContext(conf, connection_pool) as conn:
- conn.topic_send(topic, rpc_common.serialize_msg(msg), timeout)
- return wait_msg
-
-
-def call(conf, context, topic, msg, timeout, connection_pool):
- """Sends a message on a topic and wait for a response."""
- rv = multicall(conf, context, topic, msg, timeout, connection_pool)
- # NOTE(vish): return the last result from the multicall
- rv = list(rv)
- if not rv:
- return
- return rv[-1]
-
-
-def cast(conf, context, topic, msg, connection_pool):
- """Sends a message on a topic without waiting for a response."""
- LOG.debug(_('Making asynchronous cast on %s...'), topic)
- _add_unique_id(msg)
- pack_context(msg, context)
- with ConnectionContext(conf, connection_pool) as conn:
- conn.topic_send(topic, rpc_common.serialize_msg(msg))
-
-
-def fanout_cast(conf, context, topic, msg, connection_pool):
- """Sends a message on a fanout exchange without waiting for a response."""
- LOG.debug(_('Making asynchronous fanout cast...'))
- _add_unique_id(msg)
- pack_context(msg, context)
- with ConnectionContext(conf, connection_pool) as conn:
- conn.fanout_send(topic, rpc_common.serialize_msg(msg))
-
-
-def cast_to_server(conf, context, server_params, topic, msg, connection_pool):
- """Sends a message on a topic to a specific server."""
- _add_unique_id(msg)
- pack_context(msg, context)
- with ConnectionContext(conf, connection_pool, pooled=False,
- server_params=server_params) as conn:
- conn.topic_send(topic, rpc_common.serialize_msg(msg))
-
-
-def fanout_cast_to_server(conf, context, server_params, topic, msg,
- connection_pool):
- """Sends a message on a fanout exchange to a specific server."""
- _add_unique_id(msg)
- pack_context(msg, context)
- with ConnectionContext(conf, connection_pool, pooled=False,
- server_params=server_params) as conn:
- conn.fanout_send(topic, rpc_common.serialize_msg(msg))
-
-
-def notify(conf, context, topic, msg, connection_pool, envelope):
- """Sends a notification event on a topic."""
- LOG.debug(_('Sending %(event_type)s on %(topic)s'),
- dict(event_type=msg.get('event_type'),
- topic=topic))
- _add_unique_id(msg)
- pack_context(msg, context)
- with ConnectionContext(conf, connection_pool) as conn:
- if envelope:
- msg = rpc_common.serialize_msg(msg)
- conn.notify_send(topic, msg)
-
-
-def cleanup(connection_pool):
- if connection_pool:
- connection_pool.empty()
-
-
-def get_control_exchange(conf):
- return conf.control_exchange
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Copyright 2010 United States Government as represented by the
-# Administrator of the National Aeronautics and Space Administration.
-# All Rights Reserved.
-# Copyright 2011 Red Hat, Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import copy
-import sys
-import traceback
-
-from oslo.config import cfg
-import six
-
-from heat.openstack.common.gettextutils import _ # noqa
-from heat.openstack.common import importutils
-from heat.openstack.common import jsonutils
-from heat.openstack.common import local
-from heat.openstack.common import log as logging
-
-
-CONF = cfg.CONF
-LOG = logging.getLogger(__name__)
-
-
-'''RPC Envelope Version.
-
-This version number applies to the top level structure of messages sent out.
-It does *not* apply to the message payload, which must be versioned
-independently. For example, when using rpc APIs, a version number is applied
-for changes to the API being exposed over rpc. This version number is handled
-in the rpc proxy and dispatcher modules.
-
-This version number applies to the message envelope that is used in the
-serialization done inside the rpc layer. See serialize_msg() and
-deserialize_msg().
-
-The current message format (version 2.0) is very simple. It is:
-
- {
- 'oslo.version': <RPC Envelope Version as a String>,
- 'oslo.message': <Application Message Payload, JSON encoded>
- }
-
-Message format version '1.0' is just considered to be the messages we sent
-without a message envelope.
-
-So, the current message envelope just includes the envelope version. It may
-eventually contain additional information, such as a signature for the message
-payload.
-
-We will JSON encode the application message payload. The message envelope,
-which includes the JSON encoded application message body, will be passed down
-to the messaging libraries as a dict.
-'''
-_RPC_ENVELOPE_VERSION = '2.0'
-
-_VERSION_KEY = 'oslo.version'
-_MESSAGE_KEY = 'oslo.message'
-
-_REMOTE_POSTFIX = '_Remote'
-
-
-class RPCException(Exception):
- msg_fmt = _("An unknown RPC related exception occurred.")
-
- def __init__(self, message=None, **kwargs):
- self.kwargs = kwargs
-
- if not message:
- try:
- message = self.msg_fmt % kwargs
-
- except Exception:
- # kwargs doesn't match a variable in the message
- # log the issue and the kwargs
- LOG.exception(_('Exception in string format operation'))
- for name, value in kwargs.iteritems():
- LOG.error("%s: %s" % (name, value))
- # at least get the core message out if something happened
- message = self.msg_fmt
-
- super(RPCException, self).__init__(message)
-
-
-class RemoteError(RPCException):
- """Signifies that a remote class has raised an exception.
-
- Contains a string representation of the type of the original exception,
- the value of the original exception, and the traceback. These are
- sent to the parent as a joined string so printing the exception
- contains all of the relevant info.
-
- """
- msg_fmt = _("Remote error: %(exc_type)s %(value)s\n%(traceback)s.")
-
- def __init__(self, exc_type=None, value=None, traceback=None):
- self.exc_type = exc_type
- self.value = value
- self.traceback = traceback
- super(RemoteError, self).__init__(exc_type=exc_type,
- value=value,
- traceback=traceback)
-
-
-class Timeout(RPCException):
- """Signifies that a timeout has occurred.
-
- This exception is raised if the rpc_response_timeout is reached while
- waiting for a response from the remote side.
- """
- msg_fmt = _('Timeout while waiting on RPC response - '
- 'topic: "%(topic)s", RPC method: "%(method)s" '
- 'info: "%(info)s"')
-
- def __init__(self, info=None, topic=None, method=None):
- """Initiates Timeout object.
-
- :param info: Extra info to convey to the user
- :param topic: The topic that the rpc call was sent to
- :param rpc_method_name: The name of the rpc method being
- called
- """
- self.info = info
- self.topic = topic
- self.method = method
- super(Timeout, self).__init__(
- None,
- info=info or _('<unknown>'),
- topic=topic or _('<unknown>'),
- method=method or _('<unknown>'))
-
-
-class DuplicateMessageError(RPCException):
- msg_fmt = _("Found duplicate message(%(msg_id)s). Skipping it.")
-
-
-class InvalidRPCConnectionReuse(RPCException):
- msg_fmt = _("Invalid reuse of an RPC connection.")
-
-
-class UnsupportedRpcVersion(RPCException):
- msg_fmt = _("Specified RPC version, %(version)s, not supported by "
- "this endpoint.")
-
-
-class UnsupportedRpcEnvelopeVersion(RPCException):
- msg_fmt = _("Specified RPC envelope version, %(version)s, "
- "not supported by this endpoint.")
-
-
-class RpcVersionCapError(RPCException):
- msg_fmt = _("Specified RPC version cap, %(version_cap)s, is too low")
-
-
-class Connection(object):
- """A connection, returned by rpc.create_connection().
-
- This class represents a connection to the message bus used for rpc.
- An instance of this class should never be created by users of the rpc API.
- Use rpc.create_connection() instead.
- """
- def close(self):
- """Close the connection.
-
- This method must be called when the connection will no longer be used.
- It will ensure that any resources associated with the connection, such
- as a network connection, are cleaned up.
- """
- raise NotImplementedError()
-
- def create_consumer(self, topic, proxy, fanout=False):
- """Create a consumer on this connection.
-
- A consumer is associated with a message queue on the backend message
- bus. The consumer will read messages from the queue, unpack them, and
- dispatch them to the proxy object. The contents of the message pulled
- off of the queue will determine which method gets called on the proxy
- object.
-
- :param topic: This is a name associated with what to consume from.
- Multiple instances of a service may consume from the same
- topic. For example, all instances of nova-compute consume
- from a queue called "compute". In that case, the
- messages will get distributed amongst the consumers in a
- round-robin fashion if fanout=False. If fanout=True,
- every consumer associated with this topic will get a
- copy of every message.
- :param proxy: The object that will handle all incoming messages.
- :param fanout: Whether or not this is a fanout topic. See the
- documentation for the topic parameter for some
- additional comments on this.
- """
- raise NotImplementedError()
-
- def create_worker(self, topic, proxy, pool_name):
- """Create a worker on this connection.
-
- A worker is like a regular consumer of messages directed to a
- topic, except that it is part of a set of such consumers (the
- "pool") which may run in parallel. Every pool of workers will
- receive a given message, but only one worker in the pool will
- be asked to process it. Load is distributed across the members
- of the pool in round-robin fashion.
-
- :param topic: This is a name associated with what to consume from.
- Multiple instances of a service may consume from the same
- topic.
- :param proxy: The object that will handle all incoming messages.
- :param pool_name: String containing the name of the pool of workers
- """
- raise NotImplementedError()
-
- def join_consumer_pool(self, callback, pool_name, topic, exchange_name):
- """Register as a member of a group of consumers.
-
- Uses the given topic from the specified exchange.
- Exactly one member of a given pool will receive each message.
-
- A message will be delivered to multiple pools, if more than
- one is created.
-
- :param callback: Callable to be invoked for each message.
- :type callback: callable accepting one argument
- :param pool_name: The name of the consumer pool.
- :type pool_name: str
- :param topic: The routing topic for desired messages.
- :type topic: str
- :param exchange_name: The name of the message exchange where
- the client should attach. Defaults to
- the configured exchange.
- :type exchange_name: str
- """
- raise NotImplementedError()
-
- def consume_in_thread(self):
- """Spawn a thread to handle incoming messages.
-
- Spawn a thread that will be responsible for handling all incoming
- messages for consumers that were set up on this connection.
-
- Message dispatching inside of this is expected to be implemented in a
- non-blocking manner. An example implementation would be having this
- thread pull messages in for all of the consumers, but utilize a thread
- pool for dispatching the messages to the proxy objects.
- """
- raise NotImplementedError()
-
-
-def _safe_log(log_func, msg, msg_data):
- """Sanitizes the msg_data field before logging."""
- SANITIZE = ['_context_auth_token', 'auth_token', 'new_pass']
-
- def _fix_passwords(d):
- """Sanitizes the password fields in the dictionary."""
- for k in d.iterkeys():
- if k.lower().find('password') != -1:
- d[k] = '<SANITIZED>'
- elif k.lower() in SANITIZE:
- d[k] = '<SANITIZED>'
- elif isinstance(d[k], dict):
- _fix_passwords(d[k])
- return d
-
- return log_func(msg, _fix_passwords(copy.deepcopy(msg_data)))
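The sanitizer recurses through nested dicts, masking both password-like keys and the explicit SANITIZE list; deep-copying first keeps the live message intact. A runnable sketch (`scrub` mirrors `_fix_passwords`, updated to Python 3 iteration):

```python
import copy

SANITIZE = ['_context_auth_token', 'auth_token', 'new_pass']

def scrub(d):
    # Recursively mask anything that looks like a credential.
    for k in list(d):
        if 'password' in k.lower() or k.lower() in SANITIZE:
            d[k] = '<SANITIZED>'
        elif isinstance(d[k], dict):
            scrub(d[k])
    return d

data = {'user': 'alice', 'admin_password': 's3cret',
        'ctx': {'auth_token': 'abc'}}
clean = scrub(copy.deepcopy(data))
assert clean == {'user': 'alice', 'admin_password': '<SANITIZED>',
                 'ctx': {'auth_token': '<SANITIZED>'}}
assert data['admin_password'] == 's3cret'  # original left untouched
```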
-
-
-def serialize_remote_exception(failure_info, log_failure=True):
- """Prepares exception data to be sent over rpc.
-
- Failure_info should be a sys.exc_info() tuple.
-
- """
- tb = traceback.format_exception(*failure_info)
- failure = failure_info[1]
- if log_failure:
- LOG.error(_("Returning exception %s to caller"),
- six.text_type(failure))
- LOG.error(tb)
-
- kwargs = {}
- if hasattr(failure, 'kwargs'):
- kwargs = failure.kwargs
-
- # NOTE(matiu): With cells, it's possible to re-raise remote, remote
- # exceptions. Let's turn it back into the original exception type.
- cls_name = str(failure.__class__.__name__)
- mod_name = str(failure.__class__.__module__)
- if (cls_name.endswith(_REMOTE_POSTFIX) and
- mod_name.endswith(_REMOTE_POSTFIX)):
- cls_name = cls_name[:-len(_REMOTE_POSTFIX)]
- mod_name = mod_name[:-len(_REMOTE_POSTFIX)]
-
- data = {
- 'class': cls_name,
- 'module': mod_name,
- 'message': six.text_type(failure),
- 'tb': tb,
- 'args': failure.args,
- 'kwargs': kwargs
- }
-
- json_data = jsonutils.dumps(data)
-
- return json_data
-
-
-def deserialize_remote_exception(conf, data):
- failure = jsonutils.loads(str(data))
-
- trace = failure.get('tb', [])
- message = failure.get('message', "") + "\n" + "\n".join(trace)
- name = failure.get('class')
- module = failure.get('module')
-
- # NOTE(ameade): We DO NOT want to allow just any module to be imported, in
- # order to prevent arbitrary code execution.
- if module not in conf.allowed_rpc_exception_modules:
- return RemoteError(name, failure.get('message'), trace)
-
- try:
- mod = importutils.import_module(module)
- klass = getattr(mod, name)
- if not issubclass(klass, Exception):
- raise TypeError("Can only deserialize Exceptions")
-
- failure = klass(*failure.get('args', []), **failure.get('kwargs', {}))
- except (AttributeError, TypeError, ImportError):
- return RemoteError(name, failure.get('message'), trace)
-
- ex_type = type(failure)
- str_override = lambda self: message
- new_ex_type = type(ex_type.__name__ + _REMOTE_POSTFIX, (ex_type,),
- {'__str__': str_override, '__unicode__': str_override})
- new_ex_type.__module__ = '%s%s' % (module, _REMOTE_POSTFIX)
- try:
- # NOTE(ameade): Dynamically create a new exception type and swap it in
- # as the new type for the exception. This only works on user defined
- # Exceptions and not core python exceptions. This is important because
- # we cannot necessarily change an exception message so we must override
- # the __str__ method.
- failure.__class__ = new_ex_type
- except TypeError:
- # NOTE(ameade): If a core exception then just add the traceback to the
- # first exception argument.
- failure.args = (message,) + failure.args[1:]
- return failure
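The type-swap trick at the end — dynamically building a `<Name>_Remote` subclass so `__str__` can carry the remote traceback while `isinstance` checks against the original type still pass — works for user-defined exceptions. A sketch with a hypothetical `MyError`:

```python
class MyError(Exception):
    pass

def make_remote(failure, remote_tb):
    # Build a '<Name>_Remote' subclass whose str() carries the traceback.
    message = '%s\n%s' % (failure, remote_tb)
    ex_type = type(failure)
    new_type = type(ex_type.__name__ + '_Remote', (ex_type,),
                    {'__str__': lambda self: message})
    try:
        failure.__class__ = new_type  # fine for user-defined exceptions
    except TypeError:
        # Core exceptions refuse the swap; fold the message into args.
        failure.args = (message,) + failure.args[1:]
    return failure

err = make_remote(MyError('boom'), 'Traceback: remote frame')
assert isinstance(err, MyError)            # original type still matches
assert 'remote frame' in str(err)          # remote traceback preserved
assert type(err).__name__ == 'MyError_Remote'
```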
-
-
-class CommonRpcContext(object):
- def __init__(self, **kwargs):
- self.values = kwargs
-
- def __getattr__(self, key):
- try:
- return self.values[key]
- except KeyError:
- raise AttributeError(key)
-
- def to_dict(self):
- return copy.deepcopy(self.values)
-
- @classmethod
- def from_dict(cls, values):
- return cls(**values)
-
- def deepcopy(self):
- return self.from_dict(self.to_dict())
-
- def update_store(self):
- local.store.context = self
-
- def elevated(self, read_deleted=None, overwrite=False):
- """Return a version of this context with admin flag set."""
- # TODO(russellb) This method is a bit of a nova-ism. It makes
- # some assumptions about the data in the request context sent
- # across rpc, while the rest of this class does not. We could get
- # rid of this if we changed the nova code that uses this to
- # convert the RpcContext back to its native RequestContext doing
- # something like nova.context.RequestContext.from_dict(ctxt.to_dict())
-
- context = self.deepcopy()
- context.values['is_admin'] = True
-
- context.values.setdefault('roles', [])
-
- if 'admin' not in context.values['roles']:
- context.values['roles'].append('admin')
-
- if read_deleted is not None:
- context.values['read_deleted'] = read_deleted
-
- return context
-
-
-class ClientException(Exception):
- """Encapsulates actual exception expected to be hit by a RPC proxy object.
-
- Merely instantiating it records the current exception information, which
- will be passed back to the RPC client without exceptional logging.
- """
- def __init__(self):
- self._exc_info = sys.exc_info()
-
-
-def catch_client_exception(exceptions, func, *args, **kwargs):
- try:
- return func(*args, **kwargs)
- except Exception as e:
- if type(e) in exceptions:
- raise ClientException()
- else:
- raise
-
-
-def client_exceptions(*exceptions):
- """Decorator for manager methods that raise expected exceptions.
-
- Marking a Manager method with this decorator allows the declaration
- of expected exceptions that the RPC layer should not consider fatal,
- and not log as if they were generated in a real error scenario. Note
- that this will cause listed exceptions to be wrapped in a
- ClientException, which is used internally by the RPC layer.
- """
- def outer(func):
- def inner(*args, **kwargs):
- return catch_client_exception(exceptions, func, *args, **kwargs)
- return inner
- return outer
-
-
-def version_is_compatible(imp_version, version):
- """Determine whether versions are compatible.
-
- :param imp_version: The version implemented
- :param version: The version requested by an incoming message.
- """
- version_parts = version.split('.')
- imp_version_parts = imp_version.split('.')
- if int(version_parts[0]) != int(imp_version_parts[0]): # Major
- return False
- if int(version_parts[1]) > int(imp_version_parts[1]): # Minor
- return False
- return True
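The Major.Minor rule above can be exercised in isolation. The sketch below restates version_is_compatible() as a standalone function (same logic, outside the module) to show which requests a server implementing version 1.3 would accept:

```python
def version_is_compatible(imp_version, version):
    """Standalone restatement of the check above: major versions must
    match exactly, and the requested minor version must not exceed
    the implemented one."""
    imp_major, imp_minor = (int(p) for p in imp_version.split('.'))
    req_major, req_minor = (int(p) for p in version.split('.'))
    return imp_major == req_major and req_minor <= imp_minor

# A server implementing 1.3:
assert version_is_compatible('1.3', '1.0')      # older minor: accepted
assert version_is_compatible('1.3', '1.3')      # exact match: accepted
assert not version_is_compatible('1.3', '1.4')  # newer minor: rejected
assert not version_is_compatible('1.3', '2.0')  # major mismatch: rejected
```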
-
-
-def serialize_msg(raw_msg):
- # NOTE(russellb) See the docstring for _RPC_ENVELOPE_VERSION for more
- # information about this format.
- msg = {_VERSION_KEY: _RPC_ENVELOPE_VERSION,
- _MESSAGE_KEY: jsonutils.dumps(raw_msg)}
-
- return msg
-
-
-def deserialize_msg(msg):
- # NOTE(russellb): Hang on to your hats, this road is about to
- # get a little bumpy.
- #
- # Robustness Principle:
- # "Be strict in what you send, liberal in what you accept."
- #
- # At this point we have to do a bit of guessing about what it
- # is we just received. Here is the set of possibilities:
- #
- # 1) We received a dict. This could be 2 things:
- #
- # a) Inspect it to see if it looks like a standard message envelope.
- # If so, great!
- #
- # b) If it doesn't look like a standard message envelope, it could either
- # be a notification, or a message from before we added a message
- # envelope (referred to as version 1.0).
- # Just return the message as-is.
- #
- # 2) It's any other non-dict type. Just return it and hope for the best.
- # This case covers return values from rpc.call() from before message
- # envelopes were used. (messages to call a method were always a dict)
-
- if not isinstance(msg, dict):
- # See #2 above.
- return msg
-
- base_envelope_keys = (_VERSION_KEY, _MESSAGE_KEY)
- if not all(key in msg for key in base_envelope_keys):
- # See #1.b above.
- return msg
-
- # At this point we think we have the message envelope
- # format we were expecting. (#1.a above)
-
- if not version_is_compatible(_RPC_ENVELOPE_VERSION, msg[_VERSION_KEY]):
- raise UnsupportedRpcEnvelopeVersion(version=msg[_VERSION_KEY])
-
- raw_msg = jsonutils.loads(msg[_MESSAGE_KEY])
-
- return raw_msg
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Copyright 2012 Red Hat, Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-"""
-Code for rpc message dispatching.
-
-Messages that come in have a version number associated with them. RPC API
-version numbers are in the form:
-
- Major.Minor
-
-For a given message with version X.Y, the receiver must be marked as able to
-handle messages of version A.B, where:
-
- A = X
-
- B >= Y
-
-The Major version number would be incremented for an almost completely new API.
-The Minor version number would be incremented for backwards compatible changes
-to an existing API. A backwards compatible change could be something like
-adding a new method, adding an argument to an existing method (but not
-requiring it), or changing the type for an existing argument (but still
-handling the old type as well).
-
-The conversion over to a versioned API must be done on both the client side and
-server side of the API at the same time. However, as the code stands today,
-there can be both versioned and unversioned APIs implemented in the same code
-base.
-
-EXAMPLES
-========
-
-Nova was the first project to use versioned rpc APIs. Consider the compute rpc
-API as an example. The client side is in nova/compute/rpcapi.py and the server
-side is in nova/compute/manager.py.
-
-
-Example 1) Adding a new method.
--------------------------------
-
-Adding a new method is a backwards compatible change. It should be added to
-nova/compute/manager.py, and RPC_API_VERSION should be bumped from X.Y to
-X.Y+1. On the client side, the new method in nova/compute/rpcapi.py should
-have a specific version specified to indicate the minimum API version that must
-be implemented for the method to be supported. For example::
-
- def get_host_uptime(self, ctxt, host):
- topic = _compute_topic(self.topic, ctxt, host, None)
- return self.call(ctxt, self.make_msg('get_host_uptime'), topic,
- version='1.1')
-
-In this case, version '1.1' is the first version that supported the
-get_host_uptime() method.
-
-
-Example 2) Adding a new parameter.
-----------------------------------
-
-Adding a new parameter to an rpc method can be made backwards compatible. The
-RPC_API_VERSION on the server side (nova/compute/manager.py) should be bumped.
-The implementation of the method must not expect the parameter to be present.::
-
- def some_remote_method(self, arg1, arg2, newarg=None):
- # The code needs to deal with newarg=None for cases
- # where an older client sends a message without it.
- pass
-
-On the client side, the same changes should be made as in example 1. The
-minimum version that supports the new parameter should be specified.
-"""
-
-from heat.openstack.common.rpc import common as rpc_common
-from heat.openstack.common.rpc import serializer as rpc_serializer
-
-
-class RpcDispatcher(object):
- """Dispatch rpc messages according to the requested API version.
-
- This class can be used as the top level 'manager' for a service. It
- contains a list of underlying managers that have an API_VERSION attribute.
- """
-
- def __init__(self, callbacks, serializer=None):
- """Initialize the rpc dispatcher.
-
- :param callbacks: List of proxy objects that are an instance
- of a class with rpc methods exposed. Each proxy
- object should have an RPC_API_VERSION attribute.
- :param serializer: The Serializer object that will be used to
- deserialize arguments before the method call and
- to serialize the result after it returns.
- """
- self.callbacks = callbacks
- if serializer is None:
- serializer = rpc_serializer.NoOpSerializer()
- self.serializer = serializer
- super(RpcDispatcher, self).__init__()
-
- def _deserialize_args(self, context, kwargs):
- """Helper method called to deserialize args before dispatch.
-
- This calls our serializer on each argument, returning a new set of
- args that have been deserialized.
-
- :param context: The request context
- :param kwargs: The arguments to be deserialized
- :returns: A new set of deserialized args
- """
- new_kwargs = dict()
- for argname, arg in kwargs.iteritems():
- new_kwargs[argname] = self.serializer.deserialize_entity(context,
- arg)
- return new_kwargs
-
- def dispatch(self, ctxt, version, method, namespace, **kwargs):
- """Dispatch a message based on a requested version.
-
- :param ctxt: The request context
- :param version: The requested API version from the incoming message
- :param method: The method requested to be called by the incoming
- message.
- :param namespace: The namespace for the requested method. If None,
- the dispatcher will look for a method on a callback
- object with no namespace set.
- :param kwargs: A dict of keyword arguments to be passed to the method.
-
- :returns: Whatever is returned by the underlying method that gets
- called.
- """
- if not version:
- version = '1.0'
-
- had_compatible = False
- for proxyobj in self.callbacks:
- # Check for namespace compatibility
- try:
- cb_namespace = proxyobj.RPC_API_NAMESPACE
- except AttributeError:
- cb_namespace = None
-
- if namespace != cb_namespace:
- continue
-
- # Check for version compatibility
- try:
- rpc_api_version = proxyobj.RPC_API_VERSION
- except AttributeError:
- rpc_api_version = '1.0'
-
- is_compatible = rpc_common.version_is_compatible(rpc_api_version,
- version)
- had_compatible = had_compatible or is_compatible
-
- if not hasattr(proxyobj, method):
- continue
- if is_compatible:
- kwargs = self._deserialize_args(ctxt, kwargs)
- result = getattr(proxyobj, method)(ctxt, **kwargs)
- return self.serializer.serialize_entity(ctxt, result)
-
- if had_compatible:
- raise AttributeError("No such RPC function '%s'" % method)
- else:
- raise rpc_common.UnsupportedRpcVersion(version=version)
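Putting the version and method checks together, RpcDispatcher.dispatch() reduces to the toy resolver below. This is a simplified sketch, not the real class: serializers and namespaces are omitted, and the helper and class names are invented for illustration.

```python
def toy_dispatch(callbacks, version, method, **kwargs):
    """Find the first callback that both implements the method and is
    version-compatible with the request, mirroring dispatch() above."""
    def compatible(imp, req):
        imp_parts = [int(p) for p in imp.split('.')]
        req_parts = [int(p) for p in req.split('.')]
        return imp_parts[0] == req_parts[0] and req_parts[1] <= imp_parts[1]

    had_compatible = False
    for cb in callbacks:
        ok = compatible(getattr(cb, 'RPC_API_VERSION', '1.0'),
                        version or '1.0')
        had_compatible = had_compatible or ok
        if not hasattr(cb, method):
            continue
        if ok:
            return getattr(cb, method)(**kwargs)
    if had_compatible:
        raise AttributeError("No such RPC function '%s'" % method)
    raise ValueError("Unsupported version %s" % version)

class EchoAPI(object):
    RPC_API_VERSION = '1.1'

    def echo(self, text):
        return text

assert toy_dispatch([EchoAPI()], '1.0', 'echo', text='hi') == 'hi'
```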
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Copyright 2011 OpenStack Foundation
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-"""Fake RPC implementation which calls proxy methods directly with no
-queues. Casts will block, but this is very useful for tests.
-"""
-
-import inspect
-# NOTE(russellb): We specifically want to use json, not our own jsonutils.
-# jsonutils has some extra logic to automatically convert objects to primitive
-# types so that they can be serialized. We want to catch all cases where
-# non-primitive types make it into this code and treat it as an error.
-import json
-import time
-
-import eventlet
-
-from heat.openstack.common.rpc import common as rpc_common
-
-CONSUMERS = {}
-
-
-class RpcContext(rpc_common.CommonRpcContext):
- def __init__(self, **kwargs):
- super(RpcContext, self).__init__(**kwargs)
- self._response = []
- self._done = False
-
- def deepcopy(self):
- values = self.to_dict()
- new_inst = self.__class__(**values)
- new_inst._response = self._response
- new_inst._done = self._done
- return new_inst
-
- def reply(self, reply=None, failure=None, ending=False):
- if ending:
- self._done = True
- if not self._done:
- self._response.append((reply, failure))
-
-
-class Consumer(object):
- def __init__(self, topic, proxy):
- self.topic = topic
- self.proxy = proxy
-
- def call(self, context, version, method, namespace, args, timeout):
- done = eventlet.event.Event()
-
- def _inner():
- ctxt = RpcContext.from_dict(context.to_dict())
- try:
- rval = self.proxy.dispatch(ctxt, version, method,
- namespace, **args)
- res = []
- # Caller might have called ctxt.reply() manually
- for (reply, failure) in ctxt._response:
- if failure:
- raise failure[0], failure[1], failure[2]
- res.append(reply)
- # If an 'ending' reply was not sent, the function's own
- # return value may still carry more data to return.
- if not ctxt._done:
- if inspect.isgenerator(rval):
- for val in rval:
- res.append(val)
- else:
- res.append(rval)
- done.send(res)
- except rpc_common.ClientException as e:
- done.send_exception(e._exc_info[1])
- except Exception as e:
- done.send_exception(e)
-
- thread = eventlet.greenthread.spawn(_inner)
-
- if timeout:
- start_time = time.time()
- while not done.ready():
- eventlet.greenthread.sleep(1)
- cur_time = time.time()
- if (cur_time - start_time) > timeout:
- thread.kill()
- raise rpc_common.Timeout()
-
- return done.wait()
-
-
-class Connection(object):
- """Connection object."""
-
- def __init__(self):
- self.consumers = []
-
- def create_consumer(self, topic, proxy, fanout=False):
- consumer = Consumer(topic, proxy)
- self.consumers.append(consumer)
- if topic not in CONSUMERS:
- CONSUMERS[topic] = []
- CONSUMERS[topic].append(consumer)
-
- def close(self):
- for consumer in self.consumers:
- CONSUMERS[consumer.topic].remove(consumer)
- self.consumers = []
-
- def consume_in_thread(self):
- pass
-
-
-def create_connection(conf, new=True):
- """Create a connection."""
- return Connection()
-
-
-def check_serialize(msg):
- """Make sure a message intended for rpc can be serialized."""
- json.dumps(msg)
-
-
-def multicall(conf, context, topic, msg, timeout=None):
- """Make a call that returns multiple times."""
-
- check_serialize(msg)
-
- method = msg.get('method')
- if not method:
- return
- args = msg.get('args', {})
- version = msg.get('version', None)
- namespace = msg.get('namespace', None)
-
- try:
- consumer = CONSUMERS[topic][0]
- except (KeyError, IndexError):
- raise rpc_common.Timeout("No consumers available")
- else:
- return consumer.call(context, version, method, namespace, args,
- timeout)
-
-
-def call(conf, context, topic, msg, timeout=None):
- """Send a message on a topic and wait for a response."""
- rv = multicall(conf, context, topic, msg, timeout)
- # NOTE(vish): return the last result from the multicall
- rv = list(rv)
- if not rv:
- return
- return rv[-1]
-
-
-def cast(conf, context, topic, msg):
- check_serialize(msg)
- try:
- call(conf, context, topic, msg)
- except Exception:
- pass
-
-
-def notify(conf, context, topic, msg, envelope):
- check_serialize(msg)
-
-
-def cleanup():
- pass
-
-
-def fanout_cast(conf, context, topic, msg):
- """Cast to all consumers of a topic."""
- check_serialize(msg)
- method = msg.get('method')
- if not method:
- return
- args = msg.get('args', {})
- version = msg.get('version', None)
- namespace = msg.get('namespace', None)
-
- for consumer in CONSUMERS.get(topic, []):
- try:
- consumer.call(context, version, method, namespace, args, None)
- except Exception:
- pass
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Copyright 2011 OpenStack Foundation
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import functools
-import itertools
-import socket
-import ssl
-import time
-import uuid
-
-import eventlet
-import greenlet
-import kombu
-import kombu.connection
-import kombu.entity
-import kombu.messaging
-from oslo.config import cfg
-
-from heat.openstack.common import excutils
-from heat.openstack.common.gettextutils import _ # noqa
-from heat.openstack.common import network_utils
-from heat.openstack.common.rpc import amqp as rpc_amqp
-from heat.openstack.common.rpc import common as rpc_common
-from heat.openstack.common import sslutils
-
-kombu_opts = [
- cfg.StrOpt('kombu_ssl_version',
- default='',
- help='SSL version to use (valid only if SSL enabled). '
- 'Valid values are TLSv1, SSLv23 and SSLv3. SSLv2 may '
- 'be available on some distributions'
- ),
- cfg.StrOpt('kombu_ssl_keyfile',
- default='',
- help='SSL key file (valid only if SSL enabled)'),
- cfg.StrOpt('kombu_ssl_certfile',
- default='',
- help='SSL cert file (valid only if SSL enabled)'),
- cfg.StrOpt('kombu_ssl_ca_certs',
- default='',
- help=('SSL certification authority file '
- '(valid only if SSL enabled)')),
- cfg.StrOpt('rabbit_host',
- default='localhost',
- help='The RabbitMQ broker address where a single node is used'),
- cfg.IntOpt('rabbit_port',
- default=5672,
- help='The RabbitMQ broker port where a single node is used'),
- cfg.ListOpt('rabbit_hosts',
- default=['$rabbit_host:$rabbit_port'],
- help='RabbitMQ HA cluster host:port pairs'),
- cfg.BoolOpt('rabbit_use_ssl',
- default=False,
- help='connect over SSL for RabbitMQ'),
- cfg.StrOpt('rabbit_userid',
- default='guest',
- help='the RabbitMQ userid'),
- cfg.StrOpt('rabbit_password',
- default='guest',
- help='the RabbitMQ password',
- secret=True),
- cfg.StrOpt('rabbit_virtual_host',
- default='/',
- help='the RabbitMQ virtual host'),
- cfg.IntOpt('rabbit_retry_interval',
- default=1,
- help='how frequently to retry connecting to RabbitMQ'),
- cfg.IntOpt('rabbit_retry_backoff',
- default=2,
- help='how long to back off between retries when connecting '
- 'to RabbitMQ'),
- cfg.IntOpt('rabbit_max_retries',
- default=0,
- help='maximum number of retries when connecting to RabbitMQ '
- '(the default of 0 implies an infinite retry count)'),
- cfg.BoolOpt('rabbit_ha_queues',
- default=False,
- help='use H/A queues in RabbitMQ (x-ha-policy: all). '
- 'You need to wipe the RabbitMQ database when '
- 'changing this option.'),
-
-]
-
-cfg.CONF.register_opts(kombu_opts)
-
-LOG = rpc_common.LOG
-
-
-def _get_queue_arguments(conf):
- """Construct the arguments for declaring a queue.
-
- If the rabbit_ha_queues option is set, we declare a mirrored queue
- as described here:
-
- http://www.rabbitmq.com/ha.html
-
- Setting x-ha-policy to all means that the queue will be mirrored
- to all nodes in the cluster.
- """
- return {'x-ha-policy': 'all'} if conf.rabbit_ha_queues else {}
-
-
-class ConsumerBase(object):
- """Consumer base class."""
-
- def __init__(self, channel, callback, tag, **kwargs):
- """Declare a queue on an amqp channel.
-
- 'channel' is the amqp channel to use
- 'callback' is the callback to call when messages are received
- 'tag' is a unique ID for the consumer on the channel
-
- queue name, exchange name, and other kombu options are
- passed in here as a dictionary.
- """
- self.callback = callback
- self.tag = str(tag)
- self.kwargs = kwargs
- self.queue = None
- self.ack_on_error = kwargs.get('ack_on_error', True)
- self.reconnect(channel)
-
- def reconnect(self, channel):
- """Re-declare the queue after a rabbit reconnect."""
- self.channel = channel
- self.kwargs['channel'] = channel
- self.queue = kombu.entity.Queue(**self.kwargs)
- self.queue.declare()
-
- def _callback_handler(self, message, callback):
- """Call callback with deserialized message.
-
- Messages that are processed without exception are ack'ed.
-
- If the message processing generates an exception, it will be
- ack'ed if ack_on_error=True. Otherwise it will be .requeue()'ed.
- """
-
- try:
- msg = rpc_common.deserialize_msg(message.payload)
- callback(msg)
- except Exception:
- if self.ack_on_error:
- LOG.exception(_("Failed to process message"
- " ... skipping it."))
- message.ack()
- else:
- LOG.exception(_("Failed to process message"
- " ... will requeue."))
- message.requeue()
- else:
- message.ack()
-
- def consume(self, *args, **kwargs):
- """Actually declare the consumer on the amqp channel. This will
- start the flow of messages from the queue. Using the
- Connection.iterconsume() iterator will process the messages,
- calling the appropriate callback.
-
- If a callback is specified in kwargs, use that. Otherwise,
- use the callback passed during __init__()
-
- If kwargs['nowait'] is False (the default), this call waits for
- the broker to confirm the consume request; with nowait=True it
- returns immediately without waiting for confirmation.
-
- """
-
- options = {'consumer_tag': self.tag}
- options['nowait'] = kwargs.get('nowait', False)
- callback = kwargs.get('callback', self.callback)
- if not callback:
- raise ValueError("No callback defined")
-
- def _callback(raw_message):
- message = self.channel.message_to_python(raw_message)
- self._callback_handler(message, callback)
-
- self.queue.consume(*args, callback=_callback, **options)
-
- def cancel(self):
- """Cancel the consuming from the queue, if it has started."""
- try:
- self.queue.cancel(self.tag)
- except KeyError as e:
- # NOTE(comstud): Kludge to get around an amqplib bug
- if str(e) != "u'%s'" % self.tag:
- raise
- self.queue = None
-
-
-class DirectConsumer(ConsumerBase):
- """Queue/consumer class for 'direct'."""
-
- def __init__(self, conf, channel, msg_id, callback, tag, **kwargs):
- """Init a 'direct' queue.
-
- 'channel' is the amqp channel to use
- 'msg_id' is the msg_id to listen on
- 'callback' is the callback to call when messages are received
- 'tag' is a unique ID for the consumer on the channel
-
- Other kombu options may be passed
- """
- # Default options
- options = {'durable': False,
- 'queue_arguments': _get_queue_arguments(conf),
- 'auto_delete': True,
- 'exclusive': False}
- options.update(kwargs)
- exchange = kombu.entity.Exchange(name=msg_id,
- type='direct',
- durable=options['durable'],
- auto_delete=options['auto_delete'])
- super(DirectConsumer, self).__init__(channel,
- callback,
- tag,
- name=msg_id,
- exchange=exchange,
- routing_key=msg_id,
- **options)
-
-
-class TopicConsumer(ConsumerBase):
- """Consumer class for 'topic'."""
-
- def __init__(self, conf, channel, topic, callback, tag, name=None,
- exchange_name=None, **kwargs):
- """Init a 'topic' queue.
-
- :param channel: the amqp channel to use
- :param topic: the topic to listen on
- :paramtype topic: str
- :param callback: the callback to call when messages are received
- :param tag: a unique ID for the consumer on the channel
- :param name: optional queue name, defaults to topic
- :paramtype name: str
-
- Other kombu options may be passed as keyword arguments
- """
- # Default options
- options = {'durable': conf.amqp_durable_queues,
- 'queue_arguments': _get_queue_arguments(conf),
- 'auto_delete': conf.amqp_auto_delete,
- 'exclusive': False}
- options.update(kwargs)
- exchange_name = exchange_name or rpc_amqp.get_control_exchange(conf)
- exchange = kombu.entity.Exchange(name=exchange_name,
- type='topic',
- durable=options['durable'],
- auto_delete=options['auto_delete'])
- super(TopicConsumer, self).__init__(channel,
- callback,
- tag,
- name=name or topic,
- exchange=exchange,
- routing_key=topic,
- **options)
-
-
-class FanoutConsumer(ConsumerBase):
- """Consumer class for 'fanout'."""
-
- def __init__(self, conf, channel, topic, callback, tag, **kwargs):
- """Init a 'fanout' queue.
-
- 'channel' is the amqp channel to use
- 'topic' is the topic to listen on
- 'callback' is the callback to call when messages are received
- 'tag' is a unique ID for the consumer on the channel
-
- Other kombu options may be passed
- """
- unique = uuid.uuid4().hex
- exchange_name = '%s_fanout' % topic
- queue_name = '%s_fanout_%s' % (topic, unique)
-
- # Default options
- options = {'durable': False,
- 'queue_arguments': _get_queue_arguments(conf),
- 'auto_delete': True,
- 'exclusive': False}
- options.update(kwargs)
- exchange = kombu.entity.Exchange(name=exchange_name, type='fanout',
- durable=options['durable'],
- auto_delete=options['auto_delete'])
- super(FanoutConsumer, self).__init__(channel, callback, tag,
- name=queue_name,
- exchange=exchange,
- routing_key=topic,
- **options)
-
-
-class Publisher(object):
- """Base Publisher class."""
-
- def __init__(self, channel, exchange_name, routing_key, **kwargs):
- """Init the Publisher class with the exchange_name, routing_key,
- and other options
- """
- self.exchange_name = exchange_name
- self.routing_key = routing_key
- self.kwargs = kwargs
- self.reconnect(channel)
-
- def reconnect(self, channel):
- """Re-establish the Producer after a rabbit reconnection."""
- self.exchange = kombu.entity.Exchange(name=self.exchange_name,
- **self.kwargs)
- self.producer = kombu.messaging.Producer(exchange=self.exchange,
- channel=channel,
- routing_key=self.routing_key)
-
- def send(self, msg, timeout=None):
- """Send a message."""
- if timeout:
- #
- # AMQP TTL is in milliseconds when set in the header.
- #
- self.producer.publish(msg, headers={'ttl': (timeout * 1000)})
- else:
- self.producer.publish(msg)
-
-
-class DirectPublisher(Publisher):
- """Publisher class for 'direct'."""
- def __init__(self, conf, channel, msg_id, **kwargs):
- """Init a 'direct' publisher.
-
- Kombu options may be passed as keyword args to override defaults
- """
-
- options = {'durable': False,
- 'auto_delete': True,
- 'exclusive': False}
- options.update(kwargs)
- super(DirectPublisher, self).__init__(channel, msg_id, msg_id,
- type='direct', **options)
-
-
-class TopicPublisher(Publisher):
- """Publisher class for 'topic'."""
- def __init__(self, conf, channel, topic, **kwargs):
- """Init a 'topic' publisher.
-
- Kombu options may be passed as keyword args to override defaults
- """
- options = {'durable': conf.amqp_durable_queues,
- 'auto_delete': conf.amqp_auto_delete,
- 'exclusive': False}
- options.update(kwargs)
- exchange_name = rpc_amqp.get_control_exchange(conf)
- super(TopicPublisher, self).__init__(channel,
- exchange_name,
- topic,
- type='topic',
- **options)
-
-
-class FanoutPublisher(Publisher):
- """Publisher class for 'fanout'."""
- def __init__(self, conf, channel, topic, **kwargs):
- """Init a 'fanout' publisher.
-
- Kombu options may be passed as keyword args to override defaults
- """
- options = {'durable': False,
- 'auto_delete': True,
- 'exclusive': False}
- options.update(kwargs)
- super(FanoutPublisher, self).__init__(channel, '%s_fanout' % topic,
- None, type='fanout', **options)
-
-
-class NotifyPublisher(TopicPublisher):
- """Publisher class for 'notify'."""
-
- def __init__(self, conf, channel, topic, **kwargs):
- self.durable = kwargs.pop('durable', conf.amqp_durable_queues)
- self.queue_arguments = _get_queue_arguments(conf)
- super(NotifyPublisher, self).__init__(conf, channel, topic, **kwargs)
-
- def reconnect(self, channel):
- super(NotifyPublisher, self).reconnect(channel)
-
- # NOTE(jerdfelt): Normally the consumer would create the queue, but
- # we do this to ensure that messages don't get dropped if the
- # consumer is started after we do
- queue = kombu.entity.Queue(channel=channel,
- exchange=self.exchange,
- durable=self.durable,
- name=self.routing_key,
- routing_key=self.routing_key,
- queue_arguments=self.queue_arguments)
- queue.declare()
-
-
-class Connection(object):
- """Connection object."""
-
- pool = None
-
- def __init__(self, conf, server_params=None):
- self.consumers = []
- self.consumer_thread = None
- self.proxy_callbacks = []
- self.conf = conf
- self.max_retries = self.conf.rabbit_max_retries
- # Try forever?
- if self.max_retries <= 0:
- self.max_retries = None
- self.interval_start = self.conf.rabbit_retry_interval
- self.interval_stepping = self.conf.rabbit_retry_backoff
- # max retry-interval = 30 seconds
- self.interval_max = 30
- self.memory_transport = False
-
- if server_params is None:
- server_params = {}
- # Keys to translate from server_params to kombu params
- server_params_to_kombu_params = {'username': 'userid'}
-
- ssl_params = self._fetch_ssl_params()
- params_list = []
- for adr in self.conf.rabbit_hosts:
- hostname, port = network_utils.parse_host_port(
- adr, default_port=self.conf.rabbit_port)
-
- params = {
- 'hostname': hostname,
- 'port': port,
- 'userid': self.conf.rabbit_userid,
- 'password': self.conf.rabbit_password,
- 'virtual_host': self.conf.rabbit_virtual_host,
- }
-
- for sp_key, value in server_params.iteritems():
- p_key = server_params_to_kombu_params.get(sp_key, sp_key)
- params[p_key] = value
-
- if self.conf.fake_rabbit:
- params['transport'] = 'memory'
- if self.conf.rabbit_use_ssl:
- params['ssl'] = ssl_params
-
- params_list.append(params)
-
- self.params_list = params_list
-
- self.memory_transport = self.conf.fake_rabbit
-
- self.connection = None
- self.reconnect()
-
- def _fetch_ssl_params(self):
- """Fetch the SSL parameters (if any) to use for the connection."""
- ssl_params = dict()
-
- # http://docs.python.org/library/ssl.html - ssl.wrap_socket
- if self.conf.kombu_ssl_version:
- ssl_params['ssl_version'] = sslutils.validate_ssl_version(
- self.conf.kombu_ssl_version)
- if self.conf.kombu_ssl_keyfile:
- ssl_params['keyfile'] = self.conf.kombu_ssl_keyfile
- if self.conf.kombu_ssl_certfile:
- ssl_params['certfile'] = self.conf.kombu_ssl_certfile
- if self.conf.kombu_ssl_ca_certs:
- ssl_params['ca_certs'] = self.conf.kombu_ssl_ca_certs
- # We might want to allow variations in the
- # future with this?
- ssl_params['cert_reqs'] = ssl.CERT_REQUIRED
-
- # Return the extended behavior or just have the default behavior
- return ssl_params or True
-
- def _connect(self, params):
- """Connect to rabbit. Re-establish any queues that may have
- been declared before if we are reconnecting. Exceptions should
- be handled by the caller.
- """
- if self.connection:
- LOG.info(_("Reconnecting to AMQP server on "
- "%(hostname)s:%(port)d") % params)
- try:
- self.connection.release()
- except self.connection_errors:
- pass
- # Setting this in case the next statement fails, though
- # it shouldn't be doing any network operations, yet.
- self.connection = None
- self.connection = kombu.connection.BrokerConnection(**params)
- self.connection_errors = self.connection.connection_errors
- if self.memory_transport:
- # Kludge to speed up tests.
- self.connection.transport.polling_interval = 0.0
- self.consumer_num = itertools.count(1)
- self.connection.connect()
- self.channel = self.connection.channel()
- # work around 'memory' transport bug in 1.1.3
- if self.memory_transport:
- self.channel._new_queue('ae.undeliver')
- for consumer in self.consumers:
- consumer.reconnect(self.channel)
- LOG.info(_('Connected to AMQP server on %(hostname)s:%(port)d') %
- params)
-
- def reconnect(self):
- """Handle reconnecting and re-establishing queues.
-
- Retries up to self.max_retries times; a value of None (set in
- __init__ when the rabbit_max_retries option is 0) means retry
- forever. Sleeps between tries, starting at self.interval_start
- seconds and backing off by self.interval_stepping seconds on
- each attempt, capped at self.interval_max.
- """
-
- attempt = 0
- while True:
- params = self.params_list[attempt % len(self.params_list)]
- attempt += 1
- try:
- self._connect(params)
- return
- except (IOError, self.connection_errors) as e:
- pass
- except Exception as e:
- # NOTE(comstud): Unfortunately it's possible for amqplib
- # to return an error not covered by its transport
- # connection_errors in the case of a timeout waiting for
- # a protocol response. (See paste link in LP888621)
- # So, we check all exceptions for 'timeout' in them
- # and try to reconnect in this case.
- if 'timeout' not in str(e):
- raise
-
- log_info = {}
- log_info['err_str'] = str(e)
- log_info['max_retries'] = self.max_retries
- log_info.update(params)
-
- if self.max_retries and attempt == self.max_retries:
- msg = _('Unable to connect to AMQP server on '
- '%(hostname)s:%(port)d after %(max_retries)d '
- 'tries: %(err_str)s') % log_info
- LOG.error(msg)
- raise rpc_common.RPCException(msg)
-
- if attempt == 1:
- sleep_time = self.interval_start or 1
- elif attempt > 1:
- sleep_time += self.interval_stepping
- if self.interval_max:
- sleep_time = min(sleep_time, self.interval_max)
-
- log_info['sleep_time'] = sleep_time
- LOG.error(_('AMQP server on %(hostname)s:%(port)d is '
- 'unreachable: %(err_str)s. Trying again in '
- '%(sleep_time)d seconds.') % log_info)
- time.sleep(sleep_time)
-
- def ensure(self, error_callback, method, *args, **kwargs):
- while True:
- try:
- return method(*args, **kwargs)
- except (self.connection_errors, socket.timeout, IOError) as e:
- if error_callback:
- error_callback(e)
- except Exception as e:
- # NOTE(comstud): Unfortunately it's possible for amqplib
- # to return an error not covered by its transport
- # connection_errors in the case of a timeout waiting for
- # a protocol response. (See paste link in LP888621)
- # So, we check all exceptions for 'timeout' in them
- # and try to reconnect in this case.
- if 'timeout' not in str(e):
- raise
- if error_callback:
- error_callback(e)
- self.reconnect()
-
- def get_channel(self):
- """Convenience call for bin/clear_rabbit_queues."""
- return self.channel
-
- def close(self):
- """Close/release this connection."""
- self.cancel_consumer_thread()
- self.wait_on_proxy_callbacks()
- self.connection.release()
- self.connection = None
-
- def reset(self):
- """Reset a connection so it can be used again."""
- self.cancel_consumer_thread()
- self.wait_on_proxy_callbacks()
- self.channel.close()
- self.channel = self.connection.channel()
- # work around 'memory' transport bug in 1.1.3
- if self.memory_transport:
- self.channel._new_queue('ae.undeliver')
- self.consumers = []
-
- def declare_consumer(self, consumer_cls, topic, callback):
- """Create a Consumer using the class that was passed in and
-        add it to our list of consumers.
- """
-
- def _connect_error(exc):
- log_info = {'topic': topic, 'err_str': str(exc)}
- LOG.error(_("Failed to declare consumer for topic '%(topic)s': "
- "%(err_str)s") % log_info)
-
- def _declare_consumer():
- consumer = consumer_cls(self.conf, self.channel, topic, callback,
- self.consumer_num.next())
- self.consumers.append(consumer)
- return consumer
-
- return self.ensure(_connect_error, _declare_consumer)
-
- def iterconsume(self, limit=None, timeout=None):
- """Return an iterator that will consume from all queues/consumers."""
-
- info = {'do_consume': True}
-
- def _error_callback(exc):
- if isinstance(exc, socket.timeout):
- LOG.debug(_('Timed out waiting for RPC response: %s') %
- str(exc))
- raise rpc_common.Timeout()
- else:
- LOG.exception(_('Failed to consume message from queue: %s') %
- str(exc))
- info['do_consume'] = True
-
- def _consume():
- if info['do_consume']:
- queues_head = self.consumers[:-1] # not fanout.
- queues_tail = self.consumers[-1] # fanout
- for queue in queues_head:
- queue.consume(nowait=True)
- queues_tail.consume(nowait=False)
- info['do_consume'] = False
- return self.connection.drain_events(timeout=timeout)
-
- for iteration in itertools.count(0):
- if limit and iteration >= limit:
- raise StopIteration
- yield self.ensure(_error_callback, _consume)
-
- def cancel_consumer_thread(self):
- """Cancel a consumer thread."""
- if self.consumer_thread is not None:
- self.consumer_thread.kill()
- try:
- self.consumer_thread.wait()
- except greenlet.GreenletExit:
- pass
- self.consumer_thread = None
-
- def wait_on_proxy_callbacks(self):
- """Wait for all proxy callback threads to exit."""
- for proxy_cb in self.proxy_callbacks:
- proxy_cb.wait()
-
- def publisher_send(self, cls, topic, msg, timeout=None, **kwargs):
- """Send to a publisher based on the publisher class."""
-
- def _error_callback(exc):
- log_info = {'topic': topic, 'err_str': str(exc)}
- LOG.exception(_("Failed to publish message to topic "
- "'%(topic)s': %(err_str)s") % log_info)
-
- def _publish():
- publisher = cls(self.conf, self.channel, topic, **kwargs)
- publisher.send(msg, timeout)
-
- self.ensure(_error_callback, _publish)
-
- def declare_direct_consumer(self, topic, callback):
- """Create a 'direct' queue.
- In nova's use, this is generally a msg_id queue used for
- responses for call/multicall
- """
- self.declare_consumer(DirectConsumer, topic, callback)
-
- def declare_topic_consumer(self, topic, callback=None, queue_name=None,
- exchange_name=None, ack_on_error=True):
- """Create a 'topic' consumer."""
- self.declare_consumer(functools.partial(TopicConsumer,
- name=queue_name,
- exchange_name=exchange_name,
- ack_on_error=ack_on_error,
- ),
- topic, callback)
-
- def declare_fanout_consumer(self, topic, callback):
- """Create a 'fanout' consumer."""
- self.declare_consumer(FanoutConsumer, topic, callback)
-
- def direct_send(self, msg_id, msg):
- """Send a 'direct' message."""
- self.publisher_send(DirectPublisher, msg_id, msg)
-
- def topic_send(self, topic, msg, timeout=None):
- """Send a 'topic' message."""
- self.publisher_send(TopicPublisher, topic, msg, timeout)
-
- def fanout_send(self, topic, msg):
- """Send a 'fanout' message."""
- self.publisher_send(FanoutPublisher, topic, msg)
-
- def notify_send(self, topic, msg, **kwargs):
- """Send a notify message on a topic."""
- self.publisher_send(NotifyPublisher, topic, msg, None, **kwargs)
-
- def consume(self, limit=None):
- """Consume from all queues/consumers."""
- it = self.iterconsume(limit=limit)
- while True:
- try:
- it.next()
- except StopIteration:
- return
-
- def consume_in_thread(self):
-        """Consume from all queues/consumers in a greenthread."""
- @excutils.forever_retry_uncaught_exceptions
- def _consumer_thread():
- try:
- self.consume()
- except greenlet.GreenletExit:
- return
- if self.consumer_thread is None:
- self.consumer_thread = eventlet.spawn(_consumer_thread)
- return self.consumer_thread
-
- def create_consumer(self, topic, proxy, fanout=False):
- """Create a consumer that calls a method in a proxy object."""
- proxy_cb = rpc_amqp.ProxyCallback(
- self.conf, proxy,
- rpc_amqp.get_connection_pool(self.conf, Connection))
- self.proxy_callbacks.append(proxy_cb)
-
- if fanout:
- self.declare_fanout_consumer(topic, proxy_cb)
- else:
- self.declare_topic_consumer(topic, proxy_cb)
-
- def create_worker(self, topic, proxy, pool_name):
- """Create a worker that calls a method in a proxy object."""
- proxy_cb = rpc_amqp.ProxyCallback(
- self.conf, proxy,
- rpc_amqp.get_connection_pool(self.conf, Connection))
- self.proxy_callbacks.append(proxy_cb)
- self.declare_topic_consumer(topic, proxy_cb, pool_name)
-
- def join_consumer_pool(self, callback, pool_name, topic,
- exchange_name=None, ack_on_error=True):
- """Register as a member of a group of consumers for a given topic from
- the specified exchange.
-
- Exactly one member of a given pool will receive each message.
-
- A message will be delivered to multiple pools, if more than
- one is created.
- """
- callback_wrapper = rpc_amqp.CallbackWrapper(
- conf=self.conf,
- callback=callback,
- connection_pool=rpc_amqp.get_connection_pool(self.conf,
- Connection),
- wait_for_consumers=not ack_on_error
- )
- self.proxy_callbacks.append(callback_wrapper)
- self.declare_topic_consumer(
- queue_name=pool_name,
- topic=topic,
- exchange_name=exchange_name,
- callback=callback_wrapper,
- ack_on_error=ack_on_error,
- )
-
-
-def create_connection(conf, new=True):
- """Create a connection."""
- return rpc_amqp.create_connection(
- conf, new,
- rpc_amqp.get_connection_pool(conf, Connection))
-
-
-def multicall(conf, context, topic, msg, timeout=None):
- """Make a call that returns multiple times."""
- return rpc_amqp.multicall(
- conf, context, topic, msg, timeout,
- rpc_amqp.get_connection_pool(conf, Connection))
-
-
-def call(conf, context, topic, msg, timeout=None):
-    """Sends a message on a topic and waits for a response."""
- return rpc_amqp.call(
- conf, context, topic, msg, timeout,
- rpc_amqp.get_connection_pool(conf, Connection))
-
-
-def cast(conf, context, topic, msg):
- """Sends a message on a topic without waiting for a response."""
- return rpc_amqp.cast(
- conf, context, topic, msg,
- rpc_amqp.get_connection_pool(conf, Connection))
-
-
-def fanout_cast(conf, context, topic, msg):
- """Sends a message on a fanout exchange without waiting for a response."""
- return rpc_amqp.fanout_cast(
- conf, context, topic, msg,
- rpc_amqp.get_connection_pool(conf, Connection))
-
-
-def cast_to_server(conf, context, server_params, topic, msg):
- """Sends a message on a topic to a specific server."""
- return rpc_amqp.cast_to_server(
- conf, context, server_params, topic, msg,
- rpc_amqp.get_connection_pool(conf, Connection))
-
-
-def fanout_cast_to_server(conf, context, server_params, topic, msg):
- """Sends a message on a fanout exchange to a specific server."""
- return rpc_amqp.fanout_cast_to_server(
- conf, context, server_params, topic, msg,
- rpc_amqp.get_connection_pool(conf, Connection))
-
-
-def notify(conf, context, topic, msg, envelope):
- """Sends a notification event on a topic."""
- return rpc_amqp.notify(
- conf, context, topic, msg,
- rpc_amqp.get_connection_pool(conf, Connection),
- envelope)
-
-
-def cleanup():
- return rpc_amqp.cleanup(Connection.pool)
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Copyright 2011 OpenStack Foundation
-# Copyright 2011 - 2012, Red Hat, Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import functools
-import itertools
-import time
-import uuid
-
-import eventlet
-import greenlet
-from oslo.config import cfg
-
-from heat.openstack.common import excutils
-from heat.openstack.common.gettextutils import _ # noqa
-from heat.openstack.common import importutils
-from heat.openstack.common import jsonutils
-from heat.openstack.common import log as logging
-from heat.openstack.common.rpc import amqp as rpc_amqp
-from heat.openstack.common.rpc import common as rpc_common
-
-qpid_codec = importutils.try_import("qpid.codec010")
-qpid_messaging = importutils.try_import("qpid.messaging")
-qpid_exceptions = importutils.try_import("qpid.messaging.exceptions")
-
-LOG = logging.getLogger(__name__)
-
-qpid_opts = [
- cfg.StrOpt('qpid_hostname',
- default='localhost',
- help='Qpid broker hostname'),
- cfg.IntOpt('qpid_port',
- default=5672,
- help='Qpid broker port'),
- cfg.ListOpt('qpid_hosts',
- default=['$qpid_hostname:$qpid_port'],
- help='Qpid HA cluster host:port pairs'),
- cfg.StrOpt('qpid_username',
- default='',
- help='Username for qpid connection'),
- cfg.StrOpt('qpid_password',
- default='',
- help='Password for qpid connection',
- secret=True),
- cfg.StrOpt('qpid_sasl_mechanisms',
- default='',
- help='Space separated list of SASL mechanisms to use for auth'),
- cfg.IntOpt('qpid_heartbeat',
- default=60,
- help='Seconds between connection keepalive heartbeats'),
- cfg.StrOpt('qpid_protocol',
- default='tcp',
- help="Transport to use, either 'tcp' or 'ssl'"),
- cfg.BoolOpt('qpid_tcp_nodelay',
- default=True,
- help='Disable Nagle algorithm'),
- # NOTE(russellb) If any additional versions are added (beyond 1 and 2),
- # this file could probably use some additional refactoring so that the
- # differences between each version are split into different classes.
- cfg.IntOpt('qpid_topology_version',
- default=1,
- help="The qpid topology version to use. Version 1 is what "
- "was originally used by impl_qpid. Version 2 includes "
- "some backwards-incompatible changes that allow broker "
- "federation to work. Users should update to version 2 "
- "when they are able to take everything down, as it "
- "requires a clean break."),
-]
-
-cfg.CONF.register_opts(qpid_opts)
-
-JSON_CONTENT_TYPE = 'application/json; charset=utf8'
-
-
-def raise_invalid_topology_version(conf):
- msg = (_("Invalid value for qpid_topology_version: %d") %
- conf.qpid_topology_version)
- LOG.error(msg)
- raise Exception(msg)
-
-
-class ConsumerBase(object):
- """Consumer base class."""
-
- def __init__(self, conf, session, callback, node_name, node_opts,
- link_name, link_opts):
- """Declare a queue on an amqp session.
-
- 'session' is the amqp session to use
- 'callback' is the callback to call when messages are received
- 'node_name' is the first part of the Qpid address string, before ';'
- 'node_opts' will be applied to the "x-declare" section of "node"
- in the address string.
- 'link_name' goes into the "name" field of the "link" in the address
- string
- 'link_opts' will be applied to the "x-declare" section of "link"
- in the address string.
- """
- self.callback = callback
- self.receiver = None
- self.session = None
-
- if conf.qpid_topology_version == 1:
- addr_opts = {
- "create": "always",
- "node": {
- "type": "topic",
- "x-declare": {
- "durable": True,
- "auto-delete": True,
- },
- },
- "link": {
- "name": link_name,
- "durable": True,
- "x-declare": {
- "durable": False,
- "auto-delete": True,
- "exclusive": False,
- },
- },
- }
- addr_opts["node"]["x-declare"].update(node_opts)
- elif conf.qpid_topology_version == 2:
- addr_opts = {
- "link": {
- "x-declare": {
- "auto-delete": True,
- },
- },
- }
- else:
-            raise_invalid_topology_version(conf)
-
- addr_opts["link"]["x-declare"].update(link_opts)
-
- self.address = "%s ; %s" % (node_name, jsonutils.dumps(addr_opts))
-
- self.connect(session)
-
- def connect(self, session):
-        """Declare the receiver on connect."""
- self._declare_receiver(session)
-
- def reconnect(self, session):
- """Re-declare the receiver after a qpid reconnect."""
- self._declare_receiver(session)
-
- def _declare_receiver(self, session):
- self.session = session
- self.receiver = session.receiver(self.address)
- self.receiver.capacity = 1
-
- def _unpack_json_msg(self, msg):
- """Load the JSON data in msg if msg.content_type indicates that it
- is necessary. Put the loaded data back into msg.content and
- update msg.content_type appropriately.
-
- A Qpid Message containing a dict will have a content_type of
- 'amqp/map', whereas one containing a string that needs to be converted
- back from JSON will have a content_type of JSON_CONTENT_TYPE.
-
- :param msg: a Qpid Message object
- :returns: None
- """
- if msg.content_type == JSON_CONTENT_TYPE:
- msg.content = jsonutils.loads(msg.content)
- msg.content_type = 'amqp/map'
-
- def consume(self):
- """Fetch the message and pass it to the callback object."""
- message = self.receiver.fetch()
- try:
- self._unpack_json_msg(message)
- msg = rpc_common.deserialize_msg(message.content)
- self.callback(msg)
- except Exception:
- LOG.exception(_("Failed to process message... skipping it."))
- finally:
- # TODO(sandy): Need support for optional ack_on_error.
- self.session.acknowledge(message)
-
- def get_receiver(self):
- return self.receiver
-
- def get_node_name(self):
- return self.address.split(';')[0]
-
-
-class DirectConsumer(ConsumerBase):
- """Queue/consumer class for 'direct'."""
-
- def __init__(self, conf, session, msg_id, callback):
- """Init a 'direct' queue.
-
- 'session' is the amqp session to use
- 'msg_id' is the msg_id to listen on
- 'callback' is the callback to call when messages are received
- """
-
- link_opts = {
- "auto-delete": conf.amqp_auto_delete,
- "exclusive": True,
- "durable": conf.amqp_durable_queues,
- }
-
- if conf.qpid_topology_version == 1:
- node_name = "%s/%s" % (msg_id, msg_id)
- node_opts = {"type": "direct"}
- elif conf.qpid_topology_version == 2:
- node_name = "amq.direct/%s" % msg_id
- node_opts = {}
- else:
-            raise_invalid_topology_version(conf)
-
- super(DirectConsumer, self).__init__(conf, session, callback,
- node_name, node_opts, msg_id,
- link_opts)
-
-
-class TopicConsumer(ConsumerBase):
- """Consumer class for 'topic'."""
-
- def __init__(self, conf, session, topic, callback, name=None,
- exchange_name=None):
- """Init a 'topic' queue.
-
- :param session: the amqp session to use
- :param topic: is the topic to listen on
- :paramtype topic: str
- :param callback: the callback to call when messages are received
- :param name: optional queue name, defaults to topic
- """
-
- exchange_name = exchange_name or rpc_amqp.get_control_exchange(conf)
- link_opts = {
- "auto-delete": conf.amqp_auto_delete,
- "durable": conf.amqp_durable_queues,
- }
-
- if conf.qpid_topology_version == 1:
- node_name = "%s/%s" % (exchange_name, topic)
- elif conf.qpid_topology_version == 2:
- node_name = "amq.topic/topic/%s/%s" % (exchange_name, topic)
- else:
-            raise_invalid_topology_version(conf)
-
- super(TopicConsumer, self).__init__(conf, session, callback, node_name,
- {}, name or topic, link_opts)
-
-
-class FanoutConsumer(ConsumerBase):
- """Consumer class for 'fanout'."""
-
- def __init__(self, conf, session, topic, callback):
- """Init a 'fanout' queue.
-
- 'session' is the amqp session to use
- 'topic' is the topic to listen on
- 'callback' is the callback to call when messages are received
- """
- self.conf = conf
-
- link_opts = {"exclusive": True}
-
- if conf.qpid_topology_version == 1:
- node_name = "%s_fanout" % topic
- node_opts = {"durable": False, "type": "fanout"}
- link_name = "%s_fanout_%s" % (topic, uuid.uuid4().hex)
- elif conf.qpid_topology_version == 2:
- node_name = "amq.topic/fanout/%s" % topic
- node_opts = {}
- link_name = ""
- else:
-            raise_invalid_topology_version(conf)
-
- super(FanoutConsumer, self).__init__(conf, session, callback,
- node_name, node_opts, link_name,
- link_opts)
-
- def reconnect(self, session):
- topic = self.get_node_name().rpartition('_fanout')[0]
- params = {
- 'session': session,
- 'topic': topic,
- 'callback': self.callback,
- }
-
- self.__init__(conf=self.conf, **params)
-
- super(FanoutConsumer, self).reconnect(session)
-
-
-class Publisher(object):
- """Base Publisher class."""
-
- def __init__(self, conf, session, node_name, node_opts=None):
- """Init the Publisher class with the exchange_name, routing_key,
- and other options
- """
- self.sender = None
- self.session = session
-
- if conf.qpid_topology_version == 1:
- addr_opts = {
- "create": "always",
- "node": {
- "type": "topic",
- "x-declare": {
- "durable": False,
- # auto-delete isn't implemented for exchanges in qpid,
- # but put in here anyway
- "auto-delete": True,
- },
- },
- }
- if node_opts:
- addr_opts["node"]["x-declare"].update(node_opts)
-
- self.address = "%s ; %s" % (node_name, jsonutils.dumps(addr_opts))
- elif conf.qpid_topology_version == 2:
- self.address = node_name
- else:
-            raise_invalid_topology_version(conf)
-
- self.reconnect(session)
-
- def reconnect(self, session):
- """Re-establish the Sender after a reconnection."""
- self.sender = session.sender(self.address)
-
- def _pack_json_msg(self, msg):
- """Qpid cannot serialize dicts containing strings longer than 65535
- characters. This function dumps the message content to a JSON
- string, which Qpid is able to handle.
-
- :param msg: May be either a Qpid Message object or a bare dict.
- :returns: A Qpid Message with its content field JSON encoded.
- """
- try:
- msg.content = jsonutils.dumps(msg.content)
- except AttributeError:
- # Need to have a Qpid message so we can set the content_type.
- msg = qpid_messaging.Message(jsonutils.dumps(msg))
- msg.content_type = JSON_CONTENT_TYPE
- return msg
-
- def send(self, msg):
- """Send a message."""
- try:
- # Check if Qpid can encode the message
- check_msg = msg
- if not hasattr(check_msg, 'content_type'):
- check_msg = qpid_messaging.Message(msg)
- content_type = check_msg.content_type
- enc, dec = qpid_messaging.message.get_codec(content_type)
- enc(check_msg.content)
- except qpid_codec.CodecException:
- # This means the message couldn't be serialized as a dict.
- msg = self._pack_json_msg(msg)
- self.sender.send(msg)
-
-
-class DirectPublisher(Publisher):
- """Publisher class for 'direct'."""
- def __init__(self, conf, session, msg_id):
- """Init a 'direct' publisher."""
-
- if conf.qpid_topology_version == 1:
- node_name = msg_id
- node_opts = {"type": "direct"}
- elif conf.qpid_topology_version == 2:
- node_name = "amq.direct/%s" % msg_id
- node_opts = {}
- else:
-            raise_invalid_topology_version(conf)
-
- super(DirectPublisher, self).__init__(conf, session, node_name,
- node_opts)
-
-
-class TopicPublisher(Publisher):
- """Publisher class for 'topic'."""
- def __init__(self, conf, session, topic):
-        """Init a 'topic' publisher.
- """
- exchange_name = rpc_amqp.get_control_exchange(conf)
-
- if conf.qpid_topology_version == 1:
- node_name = "%s/%s" % (exchange_name, topic)
- elif conf.qpid_topology_version == 2:
- node_name = "amq.topic/topic/%s/%s" % (exchange_name, topic)
- else:
-            raise_invalid_topology_version(conf)
-
- super(TopicPublisher, self).__init__(conf, session, node_name)
-
-
-class FanoutPublisher(Publisher):
- """Publisher class for 'fanout'."""
- def __init__(self, conf, session, topic):
-        """Init a 'fanout' publisher.
- """
-
- if conf.qpid_topology_version == 1:
- node_name = "%s_fanout" % topic
- node_opts = {"type": "fanout"}
- elif conf.qpid_topology_version == 2:
- node_name = "amq.topic/fanout/%s" % topic
- node_opts = {}
- else:
-            raise_invalid_topology_version(conf)
-
- super(FanoutPublisher, self).__init__(conf, session, node_name,
- node_opts)
-
-
-class NotifyPublisher(Publisher):
- """Publisher class for notifications."""
- def __init__(self, conf, session, topic):
-        """Init a 'topic' publisher.
- """
- exchange_name = rpc_amqp.get_control_exchange(conf)
- node_opts = {"durable": True}
-
- if conf.qpid_topology_version == 1:
- node_name = "%s/%s" % (exchange_name, topic)
- elif conf.qpid_topology_version == 2:
- node_name = "amq.topic/topic/%s/%s" % (exchange_name, topic)
- else:
-            raise_invalid_topology_version(conf)
-
- super(NotifyPublisher, self).__init__(conf, session, node_name,
- node_opts)
-
-
-class Connection(object):
- """Connection object."""
-
- pool = None
-
- def __init__(self, conf, server_params=None):
- if not qpid_messaging:
- raise ImportError("Failed to import qpid.messaging")
-
- self.session = None
- self.consumers = {}
- self.consumer_thread = None
- self.proxy_callbacks = []
- self.conf = conf
-
- if server_params and 'hostname' in server_params:
- # NOTE(russellb) This enables support for cast_to_server.
- server_params['qpid_hosts'] = [
- '%s:%d' % (server_params['hostname'],
- server_params.get('port', 5672))
- ]
-
- params = {
- 'qpid_hosts': self.conf.qpid_hosts,
- 'username': self.conf.qpid_username,
- 'password': self.conf.qpid_password,
- }
- params.update(server_params or {})
-
- self.brokers = params['qpid_hosts']
- self.username = params['username']
- self.password = params['password']
- self.connection_create(self.brokers[0])
- self.reconnect()
-
- def connection_create(self, broker):
- # Create the connection - this does not open the connection
- self.connection = qpid_messaging.Connection(broker)
-
- # Check if flags are set and if so set them for the connection
- # before we call open
- self.connection.username = self.username
- self.connection.password = self.password
-
- self.connection.sasl_mechanisms = self.conf.qpid_sasl_mechanisms
- # Reconnection is done by self.reconnect()
- self.connection.reconnect = False
- self.connection.heartbeat = self.conf.qpid_heartbeat
- self.connection.transport = self.conf.qpid_protocol
- self.connection.tcp_nodelay = self.conf.qpid_tcp_nodelay
-
- def _register_consumer(self, consumer):
- self.consumers[str(consumer.get_receiver())] = consumer
-
- def _lookup_consumer(self, receiver):
- return self.consumers[str(receiver)]
-
- def reconnect(self):
- """Handles reconnecting and re-establishing sessions and queues."""
- attempt = 0
- delay = 1
- while True:
- # Close the session if necessary
- if self.connection.opened():
- try:
- self.connection.close()
- except qpid_exceptions.ConnectionError:
- pass
-
- broker = self.brokers[attempt % len(self.brokers)]
- attempt += 1
-
- try:
- self.connection_create(broker)
- self.connection.open()
- except qpid_exceptions.ConnectionError as e:
- msg_dict = dict(e=e, delay=delay)
- msg = _("Unable to connect to AMQP server: %(e)s. "
- "Sleeping %(delay)s seconds") % msg_dict
- LOG.error(msg)
- time.sleep(delay)
- delay = min(2 * delay, 60)
- else:
- LOG.info(_('Connected to AMQP server on %s'), broker)
- break
-
- self.session = self.connection.session()
-
- if self.consumers:
- consumers = self.consumers
- self.consumers = {}
-
- for consumer in consumers.itervalues():
- consumer.reconnect(self.session)
- self._register_consumer(consumer)
-
- LOG.debug(_("Re-established AMQP queues"))
-
- def ensure(self, error_callback, method, *args, **kwargs):
- while True:
- try:
- return method(*args, **kwargs)
- except (qpid_exceptions.Empty,
- qpid_exceptions.ConnectionError) as e:
- if error_callback:
- error_callback(e)
- self.reconnect()
-
- def close(self):
- """Close/release this connection."""
- self.cancel_consumer_thread()
- self.wait_on_proxy_callbacks()
- try:
- self.connection.close()
- except Exception:
- # NOTE(dripton) Logging exceptions that happen during cleanup just
- # causes confusion; there's really nothing useful we can do with
- # them.
- pass
- self.connection = None
-
- def reset(self):
- """Reset a connection so it can be used again."""
- self.cancel_consumer_thread()
- self.wait_on_proxy_callbacks()
- self.session.close()
- self.session = self.connection.session()
- self.consumers = {}
-
- def declare_consumer(self, consumer_cls, topic, callback):
- """Create a Consumer using the class that was passed in and
-        add it to our list of consumers.
- """
- def _connect_error(exc):
- log_info = {'topic': topic, 'err_str': str(exc)}
- LOG.error(_("Failed to declare consumer for topic '%(topic)s': "
- "%(err_str)s") % log_info)
-
- def _declare_consumer():
- consumer = consumer_cls(self.conf, self.session, topic, callback)
- self._register_consumer(consumer)
- return consumer
-
- return self.ensure(_connect_error, _declare_consumer)
-
- def iterconsume(self, limit=None, timeout=None):
- """Return an iterator that will consume from all queues/consumers."""
-
- def _error_callback(exc):
- if isinstance(exc, qpid_exceptions.Empty):
- LOG.debug(_('Timed out waiting for RPC response: %s') %
- str(exc))
- raise rpc_common.Timeout()
- else:
- LOG.exception(_('Failed to consume message from queue: %s') %
- str(exc))
-
- def _consume():
- nxt_receiver = self.session.next_receiver(timeout=timeout)
- try:
- self._lookup_consumer(nxt_receiver).consume()
- except Exception:
- LOG.exception(_("Error processing message. Skipping it."))
-
- for iteration in itertools.count(0):
- if limit and iteration >= limit:
- raise StopIteration
- yield self.ensure(_error_callback, _consume)
-
- def cancel_consumer_thread(self):
- """Cancel a consumer thread."""
- if self.consumer_thread is not None:
- self.consumer_thread.kill()
- try:
- self.consumer_thread.wait()
- except greenlet.GreenletExit:
- pass
- self.consumer_thread = None
-
- def wait_on_proxy_callbacks(self):
- """Wait for all proxy callback threads to exit."""
- for proxy_cb in self.proxy_callbacks:
- proxy_cb.wait()
-
- def publisher_send(self, cls, topic, msg):
- """Send to a publisher based on the publisher class."""
-
- def _connect_error(exc):
- log_info = {'topic': topic, 'err_str': str(exc)}
- LOG.exception(_("Failed to publish message to topic "
- "'%(topic)s': %(err_str)s") % log_info)
-
- def _publisher_send():
- publisher = cls(self.conf, self.session, topic)
- publisher.send(msg)
-
- return self.ensure(_connect_error, _publisher_send)
-
- def declare_direct_consumer(self, topic, callback):
- """Create a 'direct' queue.
- In nova's use, this is generally a msg_id queue used for
-        responses to call/multicall.
- """
- self.declare_consumer(DirectConsumer, topic, callback)
-
- def declare_topic_consumer(self, topic, callback=None, queue_name=None,
- exchange_name=None):
- """Create a 'topic' consumer."""
- self.declare_consumer(functools.partial(TopicConsumer,
- name=queue_name,
- exchange_name=exchange_name,
- ),
- topic, callback)
-
- def declare_fanout_consumer(self, topic, callback):
- """Create a 'fanout' consumer."""
- self.declare_consumer(FanoutConsumer, topic, callback)
-
- def direct_send(self, msg_id, msg):
- """Send a 'direct' message."""
- self.publisher_send(DirectPublisher, msg_id, msg)
-
- def topic_send(self, topic, msg, timeout=None):
- """Send a 'topic' message."""
- #
- # We want to create a message with attributes, e.g. a TTL. We
- # don't really need to keep 'msg' in its JSON format any longer
- # so let's create an actual qpid message here and get some
- # value-add on the go.
- #
- # WARNING: Request timeout happens to be in the same units as
- # qpid's TTL (seconds). If this changes in the future, then this
- # will need to be altered accordingly.
- #
- qpid_message = qpid_messaging.Message(content=msg, ttl=timeout)
- self.publisher_send(TopicPublisher, topic, qpid_message)
-
- def fanout_send(self, topic, msg):
- """Send a 'fanout' message."""
- self.publisher_send(FanoutPublisher, topic, msg)
-
- def notify_send(self, topic, msg, **kwargs):
- """Send a notify message on a topic."""
- self.publisher_send(NotifyPublisher, topic, msg)
-
- def consume(self, limit=None):
- """Consume from all queues/consumers."""
- it = self.iterconsume(limit=limit)
- while True:
- try:
- it.next()
- except StopIteration:
- return
-
- def consume_in_thread(self):
-        """Consume from all queues/consumers in a greenthread."""
- @excutils.forever_retry_uncaught_exceptions
- def _consumer_thread():
- try:
- self.consume()
- except greenlet.GreenletExit:
- return
- if self.consumer_thread is None:
- self.consumer_thread = eventlet.spawn(_consumer_thread)
- return self.consumer_thread
-
- def create_consumer(self, topic, proxy, fanout=False):
- """Create a consumer that calls a method in a proxy object."""
- proxy_cb = rpc_amqp.ProxyCallback(
- self.conf, proxy,
- rpc_amqp.get_connection_pool(self.conf, Connection))
- self.proxy_callbacks.append(proxy_cb)
-
- if fanout:
- consumer = FanoutConsumer(self.conf, self.session, topic, proxy_cb)
- else:
- consumer = TopicConsumer(self.conf, self.session, topic, proxy_cb)
-
- self._register_consumer(consumer)
-
- return consumer
-
- def create_worker(self, topic, proxy, pool_name):
- """Create a worker that calls a method in a proxy object."""
- proxy_cb = rpc_amqp.ProxyCallback(
- self.conf, proxy,
- rpc_amqp.get_connection_pool(self.conf, Connection))
- self.proxy_callbacks.append(proxy_cb)
-
- consumer = TopicConsumer(self.conf, self.session, topic, proxy_cb,
- name=pool_name)
-
- self._register_consumer(consumer)
-
- return consumer
-
- def join_consumer_pool(self, callback, pool_name, topic,
- exchange_name=None, ack_on_error=True):
- """Register as a member of a group of consumers for a given topic from
- the specified exchange.
-
- Exactly one member of a given pool will receive each message.
-
- A message will be delivered to multiple pools, if more than
- one is created.
- """
- callback_wrapper = rpc_amqp.CallbackWrapper(
- conf=self.conf,
- callback=callback,
- connection_pool=rpc_amqp.get_connection_pool(self.conf,
- Connection),
- wait_for_consumers=not ack_on_error
- )
- self.proxy_callbacks.append(callback_wrapper)
-
- consumer = TopicConsumer(conf=self.conf,
- session=self.session,
- topic=topic,
- callback=callback_wrapper,
- name=pool_name,
- exchange_name=exchange_name)
-
- self._register_consumer(consumer)
- return consumer
-
-
-def create_connection(conf, new=True):
- """Create a connection."""
- return rpc_amqp.create_connection(
- conf, new,
- rpc_amqp.get_connection_pool(conf, Connection))
-
-
-def multicall(conf, context, topic, msg, timeout=None):
- """Make a call that returns multiple times."""
- return rpc_amqp.multicall(
- conf, context, topic, msg, timeout,
- rpc_amqp.get_connection_pool(conf, Connection))
-
-
-def call(conf, context, topic, msg, timeout=None):
-    """Sends a message on a topic and waits for a response."""
- return rpc_amqp.call(
- conf, context, topic, msg, timeout,
- rpc_amqp.get_connection_pool(conf, Connection))
-
-
-def cast(conf, context, topic, msg):
- """Sends a message on a topic without waiting for a response."""
- return rpc_amqp.cast(
- conf, context, topic, msg,
- rpc_amqp.get_connection_pool(conf, Connection))
-
-
-def fanout_cast(conf, context, topic, msg):
- """Sends a message on a fanout exchange without waiting for a response."""
- return rpc_amqp.fanout_cast(
- conf, context, topic, msg,
- rpc_amqp.get_connection_pool(conf, Connection))
-
-
-def cast_to_server(conf, context, server_params, topic, msg):
- """Sends a message on a topic to a specific server."""
- return rpc_amqp.cast_to_server(
- conf, context, server_params, topic, msg,
- rpc_amqp.get_connection_pool(conf, Connection))
-
-
-def fanout_cast_to_server(conf, context, server_params, topic, msg):
- """Sends a message on a fanout exchange to a specific server."""
- return rpc_amqp.fanout_cast_to_server(
- conf, context, server_params, topic, msg,
- rpc_amqp.get_connection_pool(conf, Connection))
-
-
-def notify(conf, context, topic, msg, envelope):
- """Sends a notification event on a topic."""
- return rpc_amqp.notify(conf, context, topic, msg,
- rpc_amqp.get_connection_pool(conf, Connection),
- envelope)
-
-
-def cleanup():
- return rpc_amqp.cleanup(Connection.pool)
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Copyright 2011 Cloudscaling Group, Inc
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import os
-import pprint
-import re
-import socket
-import sys
-import types
-import uuid
-
-import eventlet
-import greenlet
-from oslo.config import cfg
-
-from heat.openstack.common import excutils
-from heat.openstack.common.gettextutils import _ # noqa
-from heat.openstack.common import importutils
-from heat.openstack.common import jsonutils
-from heat.openstack.common.rpc import common as rpc_common
-
-zmq = importutils.try_import('eventlet.green.zmq')
-
-# Aliased here for convenience; these are not modified.
-pformat = pprint.pformat
-Timeout = eventlet.timeout.Timeout
-LOG = rpc_common.LOG
-RemoteError = rpc_common.RemoteError
-RPCException = rpc_common.RPCException
-
-zmq_opts = [
- cfg.StrOpt('rpc_zmq_bind_address', default='*',
- help='ZeroMQ bind address. Should be a wildcard (*), '
- 'an ethernet interface, or IP. '
- 'The "host" option should point or resolve to this '
- 'address.'),
-
- # The module.Class to use for matchmaking.
- cfg.StrOpt(
- 'rpc_zmq_matchmaker',
- default=('heat.openstack.common.rpc.'
- 'matchmaker.MatchMakerLocalhost'),
- help='MatchMaker driver',
- ),
-
- # The following port is unassigned by IANA as of 2012-05-21
- cfg.IntOpt('rpc_zmq_port', default=9501,
- help='ZeroMQ receiver listening port'),
-
- cfg.IntOpt('rpc_zmq_contexts', default=1,
- help='Number of ZeroMQ contexts, defaults to 1'),
-
- cfg.IntOpt('rpc_zmq_topic_backlog', default=None,
- help='Maximum number of ingress messages to locally buffer '
- 'per topic. Default is unlimited.'),
-
- cfg.StrOpt('rpc_zmq_ipc_dir', default='/var/run/openstack',
- help='Directory for holding IPC sockets'),
-
- cfg.StrOpt('rpc_zmq_host', default=socket.gethostname(),
- help='Name of this node. Must be a valid hostname, FQDN, or '
- 'IP address. Must match "host" option, if running Nova.')
-]
-
-
-CONF = cfg.CONF
-CONF.register_opts(zmq_opts)
-
-ZMQ_CTX = None # ZeroMQ Context, must be global.
-matchmaker = None # memoized matchmaker object
-
-
-def _serialize(data):
- """Serialization wrapper.
-
- We prefer using JSON, but it cannot encode all types.
- Raise an error if a developer passes us bad data.
- """
- try:
- return jsonutils.dumps(data, ensure_ascii=True)
- except TypeError:
- with excutils.save_and_reraise_exception():
- LOG.error(_("JSON serialization failed."))
-
-
-def _deserialize(data):
- """Deserialization wrapper."""
- LOG.debug(_("Deserializing: %s"), data)
- return jsonutils.loads(data)
-
-
-class ZmqSocket(object):
- """A tiny wrapper around ZeroMQ.
-
- Simplifies the send/recv protocol and connection management.
- Can be used as a Context (supports the 'with' statement).
- """
-
- def __init__(self, addr, zmq_type, bind=True, subscribe=None):
- self.sock = _get_ctxt().socket(zmq_type)
- self.addr = addr
- self.type = zmq_type
- self.subscriptions = []
-
- # Support failures on sending/receiving on wrong socket type.
- self.can_recv = zmq_type in (zmq.PULL, zmq.SUB)
- self.can_send = zmq_type in (zmq.PUSH, zmq.PUB)
- self.can_sub = zmq_type in (zmq.SUB, )
-
- # Support list, str, & None for subscribe arg (cast to list)
- do_sub = {
- list: subscribe,
- str: [subscribe],
- type(None): []
- }[type(subscribe)]
-
- for f in do_sub:
- self.subscribe(f)
-
- str_data = {'addr': addr, 'type': self.socket_s(),
- 'subscribe': subscribe, 'bind': bind}
-
- LOG.debug(_("Connecting to %(addr)s with %(type)s"), str_data)
- LOG.debug(_("-> Subscribed to %(subscribe)s"), str_data)
- LOG.debug(_("-> bind: %(bind)s"), str_data)
-
- try:
- if bind:
- self.sock.bind(addr)
- else:
- self.sock.connect(addr)
- except Exception:
- raise RPCException(_("Could not open socket."))
-
- def socket_s(self):
- """Get socket type as string."""
- t_enum = ('PUSH', 'PULL', 'PUB', 'SUB', 'REP', 'REQ', 'ROUTER',
- 'DEALER')
- return dict(map(lambda t: (getattr(zmq, t), t), t_enum))[self.type]
-
- def subscribe(self, msg_filter):
- """Subscribe."""
- if not self.can_sub:
- raise RPCException("Cannot subscribe on this socket.")
- LOG.debug(_("Subscribing to %s"), msg_filter)
-
- try:
- self.sock.setsockopt(zmq.SUBSCRIBE, msg_filter)
- except Exception:
- return
-
- self.subscriptions.append(msg_filter)
-
- def unsubscribe(self, msg_filter):
- """Unsubscribe."""
- if msg_filter not in self.subscriptions:
- return
- self.sock.setsockopt(zmq.UNSUBSCRIBE, msg_filter)
- self.subscriptions.remove(msg_filter)
-
- def close(self):
- if self.sock is None or self.sock.closed:
- return
-
- # We must unsubscribe, or we'll leak descriptors.
- if self.subscriptions:
- for f in self.subscriptions:
- try:
- self.sock.setsockopt(zmq.UNSUBSCRIBE, f)
- except Exception:
- pass
- self.subscriptions = []
-
- try:
- # Default is to linger
- self.sock.close()
- except Exception:
- # While this is a bad thing to happen,
- # it would be much worse if some of the code calling this
- # were to fail. For now, lets log, and later evaluate
- # if we can safely raise here.
- LOG.error("ZeroMQ socket could not be closed.")
- self.sock = None
-
- def recv(self, **kwargs):
- if not self.can_recv:
- raise RPCException(_("You cannot recv on this socket."))
- return self.sock.recv_multipart(**kwargs)
-
- def send(self, data, **kwargs):
- if not self.can_send:
- raise RPCException(_("You cannot send on this socket."))
- self.sock.send_multipart(data, **kwargs)
-
-
-class ZmqClient(object):
- """Client for ZMQ sockets."""
-
- def __init__(self, addr):
- self.outq = ZmqSocket(addr, zmq.PUSH, bind=False)
-
- def cast(self, msg_id, topic, data, envelope):
- msg_id = msg_id or 0
-
- if not envelope:
- self.outq.send(map(bytes,
- (msg_id, topic, 'cast', _serialize(data))))
- return
-
- rpc_envelope = rpc_common.serialize_msg(data[1], envelope)
- zmq_msg = reduce(lambda x, y: x + y, rpc_envelope.items())
- self.outq.send(map(bytes,
- (msg_id, topic, 'impl_zmq_v2', data[0]) + zmq_msg))
-
- def close(self):
- self.outq.close()
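For the v2 envelope path, `ZmqClient.cast` flattens the envelope dict into an alternating key/value sequence before sending it as message frames. A minimal Python 3 sketch of that flattening (the envelope contents here are illustrative; the real dict comes from `rpc_common.serialize_msg`):

```python
from functools import reduce

# Illustrative envelope; the real one is built by rpc_common.serialize_msg().
rpc_envelope = {'oslo.version': '2.0', 'oslo.message': '{"method": "ping"}'}

# Flatten {k1: v1, k2: v2} into (k1, v1, k2, v2), as ZmqClient.cast does
# with reduce over the dict's items.
zmq_msg = reduce(lambda x, y: x + y, rpc_envelope.items(), ())

# The wire frames are then (msg_id, topic, 'impl_zmq_v2', data) + zmq_msg.
frames = ('0', 'topic', 'impl_zmq_v2', '{}') + zmq_msg
```

The receiving side reverses this with `unflatten_envelope`, pairing consecutive elements back into a dict.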
-
-
-class RpcContext(rpc_common.CommonRpcContext):
- """Context that supports replying to a rpc.call."""
- def __init__(self, **kwargs):
- self.replies = []
- super(RpcContext, self).__init__(**kwargs)
-
- def deepcopy(self):
- values = self.to_dict()
- values['replies'] = self.replies
- return self.__class__(**values)
-
- def reply(self, reply=None, failure=None, ending=False):
- if ending:
- return
- self.replies.append(reply)
-
- @classmethod
- def marshal(self, ctx):
- ctx_data = ctx.to_dict()
- return _serialize(ctx_data)
-
- @classmethod
- def unmarshal(self, data):
- return RpcContext.from_dict(_deserialize(data))
-
-
-class InternalContext(object):
- """Used by ConsumerBase as a private context for '-' methods."""
-
- def __init__(self, proxy):
- self.proxy = proxy
- self.msg_waiter = None
-
- def _get_response(self, ctx, proxy, topic, data):
- """Process a curried message and cast the result to topic."""
- LOG.debug(_("Running func with context: %s"), ctx.to_dict())
- data.setdefault('version', None)
- data.setdefault('args', {})
-
- try:
- result = proxy.dispatch(
- ctx, data['version'], data['method'],
- data.get('namespace'), **data['args'])
- return ConsumerBase.normalize_reply(result, ctx.replies)
- except greenlet.GreenletExit:
- # ignore these since they are just from shutdowns
- pass
- except rpc_common.ClientException as e:
- LOG.debug(_("Expected exception during message handling (%s)") %
- e._exc_info[1])
- return {'exc':
- rpc_common.serialize_remote_exception(e._exc_info,
- log_failure=False)}
- except Exception:
- LOG.error(_("Exception during message handling"))
- return {'exc':
- rpc_common.serialize_remote_exception(sys.exc_info())}
-
- def reply(self, ctx, proxy,
- msg_id=None, context=None, topic=None, msg=None):
- """Reply to a casted call."""
- # NOTE(ewindisch): context kwarg exists for Grizzly compat.
- # this may be able to be removed earlier than
- # 'I' if ConsumerBase.process were refactored.
- if type(msg) is list:
- payload = msg[-1]
- else:
- payload = msg
-
- response = ConsumerBase.normalize_reply(
- self._get_response(ctx, proxy, topic, payload),
- ctx.replies)
-
- LOG.debug(_("Sending reply"))
- _multi_send(_cast, ctx, topic, {
- 'method': '-process_reply',
- 'args': {
- 'msg_id': msg_id, # Include for Folsom compat.
- 'response': response
- }
- }, _msg_id=msg_id)
-
-
-class ConsumerBase(object):
- """Base Consumer."""
-
- def __init__(self):
- self.private_ctx = InternalContext(None)
-
- @classmethod
- def normalize_reply(self, result, replies):
- #TODO(ewindisch): re-evaluate and document this method.
- if isinstance(result, types.GeneratorType):
- return list(result)
- elif replies:
- return replies
- else:
- return [result]
-
- def process(self, proxy, ctx, data):
- data.setdefault('version', None)
- data.setdefault('args', {})
-
- # Methods starting with '-' are processed
- # internally (not a valid public method name).
- method = data.get('method')
- if not method:
- LOG.error(_("RPC message did not include method."))
- return
-
- # Internal method
- # uses internal context for safety.
- if method == '-reply':
- self.private_ctx.reply(ctx, proxy, **data['args'])
- return
-
- proxy.dispatch(ctx, data['version'],
- data['method'], data.get('namespace'), **data['args'])
-
-
-class ZmqBaseReactor(ConsumerBase):
- """A consumer class implementing a centralized casting broker (PULL-PUSH).
-
- Used for RoundRobin requests.
- """
-
- def __init__(self, conf):
- super(ZmqBaseReactor, self).__init__()
-
- self.proxies = {}
- self.threads = []
- self.sockets = []
- self.subscribe = {}
-
- self.pool = eventlet.greenpool.GreenPool(conf.rpc_thread_pool_size)
-
- def register(self, proxy, in_addr, zmq_type_in,
- in_bind=True, subscribe=None):
-
- LOG.info(_("Registering reactor"))
-
- if zmq_type_in not in (zmq.PULL, zmq.SUB):
- raise RPCException("Bad input socktype")
-
- # Items push in.
- inq = ZmqSocket(in_addr, zmq_type_in, bind=in_bind,
- subscribe=subscribe)
-
- self.proxies[inq] = proxy
- self.sockets.append(inq)
-
- LOG.info(_("In reactor registered"))
-
- def consume_in_thread(self):
- @excutils.forever_retry_uncaught_exceptions
- def _consume(sock):
- LOG.info(_("Consuming socket"))
- while True:
- self.consume(sock)
-
- for k in self.proxies.keys():
- self.threads.append(
- self.pool.spawn(_consume, k)
- )
-
- def wait(self):
- for t in self.threads:
- t.wait()
-
- def close(self):
- for s in self.sockets:
- s.close()
-
- for t in self.threads:
- t.kill()
-
-
-class ZmqProxy(ZmqBaseReactor):
- """A consumer class implementing a topic-based proxy.
-
- Forwards to IPC sockets.
- """
-
- def __init__(self, conf):
- super(ZmqProxy, self).__init__(conf)
- pathsep = set((os.path.sep or '', os.path.altsep or '', '/', '\\'))
- self.badchars = re.compile(r'[%s]' % re.escape(''.join(pathsep)))
-
- self.topic_proxy = {}
-
- def consume(self, sock):
- ipc_dir = CONF.rpc_zmq_ipc_dir
-
- data = sock.recv(copy=False)
- topic = data[1].bytes
-
- if topic.startswith('fanout~'):
- sock_type = zmq.PUB
- topic = topic.split('.', 1)[0]
- elif topic.startswith('zmq_replies'):
- sock_type = zmq.PUB
- else:
- sock_type = zmq.PUSH
-
- if topic not in self.topic_proxy:
- def publisher(waiter):
- LOG.info(_("Creating proxy for topic: %s"), topic)
-
- try:
- # The topic is received over the network,
- # don't trust this input.
- if self.badchars.search(topic) is not None:
- emsg = _("Topic contained dangerous characters.")
- LOG.warn(emsg)
- raise RPCException(emsg)
-
- out_sock = ZmqSocket("ipc://%s/zmq_topic_%s" %
- (ipc_dir, topic),
- sock_type, bind=True)
- except RPCException:
- waiter.send_exception(*sys.exc_info())
- return
-
- self.topic_proxy[topic] = eventlet.queue.LightQueue(
- CONF.rpc_zmq_topic_backlog)
- self.sockets.append(out_sock)
-
- # It takes some time for a pub socket to open,
- # before we can have any faith in doing a send() to it.
- if sock_type == zmq.PUB:
- eventlet.sleep(.5)
-
- waiter.send(True)
-
- while True:
- data = self.topic_proxy[topic].get()
- out_sock.send(data, copy=False)
-
- wait_sock_creation = eventlet.event.Event()
- eventlet.spawn(publisher, wait_sock_creation)
-
- try:
- wait_sock_creation.wait()
- except RPCException:
- LOG.error(_("Topic socket file creation failed."))
- return
-
- try:
- self.topic_proxy[topic].put_nowait(data)
- except eventlet.queue.Full:
- LOG.error(_("Local per-topic backlog buffer full for topic "
- "%(topic)s. Dropping message.") % {'topic': topic})
-
- def consume_in_thread(self):
- """Runs the ZmqProxy service."""
- ipc_dir = CONF.rpc_zmq_ipc_dir
- consume_in = "tcp://%s:%s" % \
- (CONF.rpc_zmq_bind_address,
- CONF.rpc_zmq_port)
- consumption_proxy = InternalContext(None)
-
- try:
- os.makedirs(ipc_dir)
- except os.error:
- if not os.path.isdir(ipc_dir):
- with excutils.save_and_reraise_exception():
- LOG.error(_("Required IPC directory does not exist at"
- " %s") % (ipc_dir, ))
- try:
- self.register(consumption_proxy,
- consume_in,
- zmq.PULL)
- except zmq.ZMQError:
- if os.access(ipc_dir, os.X_OK):
- with excutils.save_and_reraise_exception():
- LOG.error(_("Permission denied to IPC directory at"
- " %s") % (ipc_dir, ))
- with excutils.save_and_reraise_exception():
- LOG.error(_("Could not create ZeroMQ receiver daemon. "
- "Socket may already be in use."))
-
- super(ZmqProxy, self).consume_in_thread()
-
-
-def unflatten_envelope(packenv):
- """Unflattens the RPC envelope.
-
- Takes a list and returns a dictionary.
- i.e. [1,2,3,4] => {1: 2, 3: 4}
- """
- i = iter(packenv)
- h = {}
- try:
- while True:
- k = i.next()
- h[k] = i.next()
- except StopIteration:
- return h
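The same unflattening can be written without the Python 2 `.next()` iterator protocol; a Python 3 sketch that round-trips with the flattening done in `ZmqClient.cast`:

```python
def unflatten_envelope(packenv):
    # Pair consecutive elements: [k1, v1, k2, v2] -> {k1: v1, k2: v2}.
    # zip() over the same iterator consumes two items per pair.
    it = iter(packenv)
    return dict(zip(it, it))

print(unflatten_envelope([1, 2, 3, 4]))  # {1: 2, 3: 4}
```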
-
-
-class ZmqReactor(ZmqBaseReactor):
- """A consumer class for receiving messages.
-
- Can also be used as a 1:1 proxy.
- """
-
- def __init__(self, conf):
- super(ZmqReactor, self).__init__(conf)
-
- def consume(self, sock):
- #TODO(ewindisch): use zero-copy (i.e. references, not copying)
- data = sock.recv()
- LOG.debug(_("CONSUMER RECEIVED DATA: %s"), data)
-
- proxy = self.proxies[sock]
-
- if data[2] == 'cast': # Legacy protocol
- packenv = data[3]
-
- ctx, msg = _deserialize(packenv)
- request = rpc_common.deserialize_msg(msg)
- ctx = RpcContext.unmarshal(ctx)
- elif data[2] == 'impl_zmq_v2':
- packenv = data[4:]
-
- msg = unflatten_envelope(packenv)
- request = rpc_common.deserialize_msg(msg)
-
- # Unmarshal only after verifying the message.
- ctx = RpcContext.unmarshal(data[3])
- else:
- LOG.error(_("ZMQ Envelope version unsupported or unknown."))
- return
-
- self.pool.spawn_n(self.process, proxy, ctx, request)
-
-
-class Connection(rpc_common.Connection):
- """Manages connections and threads."""
-
- def __init__(self, conf):
- self.topics = []
- self.reactor = ZmqReactor(conf)
-
- def create_consumer(self, topic, proxy, fanout=False):
- # Register with matchmaker.
- _get_matchmaker().register(topic, CONF.rpc_zmq_host)
-
- # Subscription scenarios
- if fanout:
- sock_type = zmq.SUB
- subscribe = ('', fanout)[type(fanout) == str]
- topic = 'fanout~' + topic.split('.', 1)[0]
- else:
- sock_type = zmq.PULL
- subscribe = None
- topic = '.'.join((topic.split('.', 1)[0], CONF.rpc_zmq_host))
-
- if topic in self.topics:
- LOG.info(_("Skipping topic registration. Already registered."))
- return
-
- # Receive messages from (local) proxy
- inaddr = "ipc://%s/zmq_topic_%s" % \
- (CONF.rpc_zmq_ipc_dir, topic)
-
- LOG.debug(_("Consumer is a zmq.%s"),
- ['PULL', 'SUB'][sock_type == zmq.SUB])
-
- self.reactor.register(proxy, inaddr, sock_type,
- subscribe=subscribe, in_bind=False)
- self.topics.append(topic)
-
- def close(self):
- _get_matchmaker().stop_heartbeat()
- for topic in self.topics:
- _get_matchmaker().unregister(topic, CONF.rpc_zmq_host)
-
- self.reactor.close()
- self.topics = []
-
- def wait(self):
- self.reactor.wait()
-
- def consume_in_thread(self):
- _get_matchmaker().start_heartbeat()
- self.reactor.consume_in_thread()
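The topic-naming rules in `create_consumer` determine which IPC socket a consumer reads from: fanout consumers collapse the topic to its base and prefix `fanout~`, while point-to-point consumers append the local host. A standalone sketch of just that naming logic (the function name is made up for illustration):

```python
def consumer_topic(topic, fanout, host):
    """Mirror the topic rewriting in Connection.create_consumer (sketch)."""
    if fanout:
        # Fanout consumers subscribe to 'fanout~<base topic>'.
        return 'fanout~' + topic.split('.', 1)[0]
    # Point-to-point consumers listen on '<base topic>.<host>'.
    return '.'.join((topic.split('.', 1)[0], host))

print(consumer_topic('compute', False, 'host1'))         # compute.host1
print(consumer_topic('compute.ignored', True, 'host1'))  # fanout~compute
```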
-
-
-def _cast(addr, context, topic, msg, timeout=None, envelope=False,
- _msg_id=None):
- timeout_cast = timeout or CONF.rpc_cast_timeout
- payload = [RpcContext.marshal(context), msg]
-
- with Timeout(timeout_cast, exception=rpc_common.Timeout):
- try:
- conn = ZmqClient(addr)
-
- # assumes cast can't return an exception
- conn.cast(_msg_id, topic, payload, envelope)
- except zmq.ZMQError:
- raise RPCException("Cast failed. ZMQ Socket Exception")
- finally:
- if 'conn' in vars():
- conn.close()
-
-
-def _call(addr, context, topic, msg, timeout=None,
- envelope=False):
- # 'timeout' is how long we wait for a response
- timeout = timeout or CONF.rpc_response_timeout
-
- # The msg_id is used to track replies.
- msg_id = uuid.uuid4().hex
-
- # Replies always come into the reply service.
- reply_topic = "zmq_replies.%s" % CONF.rpc_zmq_host
-
- LOG.debug(_("Creating payload"))
- # Curry the original request into a reply method.
- mcontext = RpcContext.marshal(context)
- payload = {
- 'method': '-reply',
- 'args': {
- 'msg_id': msg_id,
- 'topic': reply_topic,
- # TODO(ewindisch): safe to remove mcontext in I.
- 'msg': [mcontext, msg]
- }
- }
-
- LOG.debug(_("Creating queue socket for reply waiter"))
-
- # Messages arriving async.
- # TODO(ewindisch): have reply consumer with dynamic subscription mgmt
- with Timeout(timeout, exception=rpc_common.Timeout):
- try:
- msg_waiter = ZmqSocket(
- "ipc://%s/zmq_topic_zmq_replies.%s" %
- (CONF.rpc_zmq_ipc_dir,
- CONF.rpc_zmq_host),
- zmq.SUB, subscribe=msg_id, bind=False
- )
-
- LOG.debug(_("Sending cast"))
- _cast(addr, context, topic, payload, envelope)
-
- LOG.debug(_("Cast sent; Waiting reply"))
- # Blocks until receives reply
- msg = msg_waiter.recv()
- LOG.debug(_("Received message: %s"), msg)
- LOG.debug(_("Unpacking response"))
-
- if msg[2] == 'cast': # Legacy version
- raw_msg = _deserialize(msg[-1])[-1]
- elif msg[2] == 'impl_zmq_v2':
- rpc_envelope = unflatten_envelope(msg[4:])
- raw_msg = rpc_common.deserialize_msg(rpc_envelope)
- else:
- raise rpc_common.UnsupportedRpcEnvelopeVersion(
- _("Unsupported or unknown ZMQ envelope returned."))
-
- responses = raw_msg['args']['response']
- # ZMQError trumps the Timeout error.
- except zmq.ZMQError:
- raise RPCException("ZMQ Socket Error")
- except (IndexError, KeyError):
- raise RPCException(_("RPC Message Invalid."))
- finally:
- if 'msg_waiter' in vars():
- msg_waiter.close()
-
- # It seems we don't need to do all of the following,
- # but perhaps it would be useful for multicall?
- # One effect of this is that we're checking all
- # responses for Exceptions.
- for resp in responses:
- if isinstance(resp, types.DictType) and 'exc' in resp:
- raise rpc_common.deserialize_remote_exception(CONF, resp['exc'])
-
- return responses[-1]
-
-
-def _multi_send(method, context, topic, msg, timeout=None,
- envelope=False, _msg_id=None):
- """Wraps the sending of messages.
-
- Dispatches to the matchmaker and sends message to all relevant hosts.
- """
- conf = CONF
- LOG.debug(_("%(msg)s") % {'msg': ' '.join(map(pformat, (topic, msg)))})
-
- queues = _get_matchmaker().queues(topic)
- LOG.debug(_("Sending message(s) to: %s"), queues)
-
- # Don't stack if we have no matchmaker results
- if not queues:
- LOG.warn(_("No matchmaker results. Not casting."))
- # While not strictly a timeout, callers know how to handle
- # this exception and a timeout isn't too big a lie.
- raise rpc_common.Timeout(_("No match from matchmaker."))
-
- # This supports brokerless fanout (addresses > 1)
- for queue in queues:
- (_topic, ip_addr) = queue
- _addr = "tcp://%s:%s" % (ip_addr, conf.rpc_zmq_port)
-
- if method.__name__ == '_cast':
- eventlet.spawn_n(method, _addr, context,
- _topic, msg, timeout, envelope,
- _msg_id)
- return
- return method(_addr, context, _topic, msg, timeout,
- envelope)
-
-
-def create_connection(conf, new=True):
- return Connection(conf)
-
-
-def multicall(conf, *args, **kwargs):
- """Multiple calls."""
- return _multi_send(_call, *args, **kwargs)
-
-
-def call(conf, *args, **kwargs):
- """Send a message, expect a response."""
- data = _multi_send(_call, *args, **kwargs)
- return data[-1]
-
-
-def cast(conf, *args, **kwargs):
- """Send a message expecting no reply."""
- _multi_send(_cast, *args, **kwargs)
-
-
-def fanout_cast(conf, context, topic, msg, **kwargs):
- """Send a message to all listening and expect no reply."""
- # NOTE(ewindisch): fanout~ is used because it avoids splitting on .
- # and acts as a non-subtle hint to the matchmaker and ZmqProxy.
- _multi_send(_cast, context, 'fanout~' + str(topic), msg, **kwargs)
-
-
-def notify(conf, context, topic, msg, envelope):
- """Send notification event.
-
- Notifications are sent to topic-priority.
- This differs from the AMQP drivers which send to topic.priority.
- """
- # NOTE(ewindisch): dot-priority in rpc notifier does not
- # work with our assumptions.
- topic = topic.replace('.', '-')
- cast(conf, context, topic, msg, envelope=envelope)
-
-
-def cleanup():
- """Clean up resources in use by implementation."""
- global ZMQ_CTX
- if ZMQ_CTX:
- ZMQ_CTX.term()
- ZMQ_CTX = None
-
- global matchmaker
- matchmaker = None
-
-
-def _get_ctxt():
- if not zmq:
- raise ImportError("Failed to import eventlet.green.zmq")
-
- global ZMQ_CTX
- if not ZMQ_CTX:
- ZMQ_CTX = zmq.Context(CONF.rpc_zmq_contexts)
- return ZMQ_CTX
-
-
-def _get_matchmaker(*args, **kwargs):
- global matchmaker
- if not matchmaker:
- mm = CONF.rpc_zmq_matchmaker
- if mm.endswith('matchmaker.MatchMakerRing'):
- mm = mm.replace('matchmaker', 'matchmaker_ring')
- LOG.warn(_('rpc_zmq_matchmaker = %(orig)s is deprecated; use'
- ' %(new)s instead') % dict(
- orig=CONF.rpc_zmq_matchmaker, new=mm))
- matchmaker = importutils.import_object(mm, *args, **kwargs)
- return matchmaker
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Copyright 2011 Cloudscaling Group, Inc
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-"""
-The MatchMaker classes should accept a Topic or Fanout exchange key and
-return keys for direct exchanges, per (approximate) AMQP parlance.
-"""
-
-import contextlib
-
-import eventlet
-from oslo.config import cfg
-
-from heat.openstack.common.gettextutils import _ # noqa
-from heat.openstack.common import log as logging
-
-
-matchmaker_opts = [
- cfg.IntOpt('matchmaker_heartbeat_freq',
- default=300,
- help='Heartbeat frequency'),
- cfg.IntOpt('matchmaker_heartbeat_ttl',
- default=600,
- help='Heartbeat time-to-live.'),
-]
-
-CONF = cfg.CONF
-CONF.register_opts(matchmaker_opts)
-LOG = logging.getLogger(__name__)
-contextmanager = contextlib.contextmanager
-
-
-class MatchMakerException(Exception):
- """Signifies that a match could not be found."""
- message = _("Match not found by MatchMaker.")
-
-
-class Exchange(object):
- """Implements lookups.
-
- Subclass this to support hashtables, dns, etc.
- """
- def __init__(self):
- pass
-
- def run(self, key):
- raise NotImplementedError()
-
-
-class Binding(object):
- """A binding on which to perform a lookup."""
- def __init__(self):
- pass
-
- def test(self, key):
- raise NotImplementedError()
-
-
-class MatchMakerBase(object):
- """Match Maker Base Class.
-
- Build off HeartbeatMatchMakerBase if building a heartbeat-capable
- MatchMaker.
- """
- def __init__(self):
- # Array of tuples. Index [2] toggles negation, [3] is last-if-true
- self.bindings = []
-
- self.no_heartbeat_msg = _('Matchmaker does not implement '
- 'registration or heartbeat.')
-
- def register(self, key, host):
- """Register a host on a backend.
-
- Heartbeats, if applicable, may keepalive registration.
- """
- pass
-
- def ack_alive(self, key, host):
- """Acknowledge that a key.host is alive.
-
- Used internally for updating heartbeats, but may also be used
- publicly to acknowledge that a system is alive (i.e. an rpc message
- was successfully sent to the host).
- """
- pass
-
- def is_alive(self, topic, host):
- """Checks if a host is alive."""
- pass
-
- def expire(self, topic, host):
- """Explicitly expire a host's registration."""
- pass
-
- def send_heartbeats(self):
- """Send all heartbeats.
-
- Use start_heartbeat to spawn a heartbeat greenthread,
- which loops this method.
- """
- pass
-
- def unregister(self, key, host):
- """Unregister a topic."""
- pass
-
- def start_heartbeat(self):
- """Spawn heartbeat greenthread."""
- pass
-
- def stop_heartbeat(self):
- """Destroys the heartbeat greenthread."""
- pass
-
- def add_binding(self, binding, rule, last=True):
- self.bindings.append((binding, rule, False, last))
-
- #NOTE(ewindisch): kept the following method in case we implement the
- # underlying support.
- #def add_negate_binding(self, binding, rule, last=True):
- # self.bindings.append((binding, rule, True, last))
-
- def queues(self, key):
- workers = []
-
- # bit is for negate bindings - if we choose to implement it.
- # last stops processing rules if this matches.
- for (binding, exchange, bit, last) in self.bindings:
- if binding.test(key):
- workers.extend(exchange.run(key))
-
- # Support last.
- if last:
- return workers
- return workers
-
-
-class HeartbeatMatchMakerBase(MatchMakerBase):
- """Base for a heart-beat capable MatchMaker.
-
- Provides common methods for registering, unregistering, and maintaining
- heartbeats.
- """
- def __init__(self):
- self.hosts = set()
- self._heart = None
- self.host_topic = {}
-
- super(HeartbeatMatchMakerBase, self).__init__()
-
- def send_heartbeats(self):
- """Send all heartbeats.
-
- Use start_heartbeat to spawn a heartbeat greenthread,
- which loops this method.
- """
- for key, host in self.host_topic:
- self.ack_alive(key, host)
-
- def ack_alive(self, key, host):
- """Acknowledge that a host.topic is alive.
-
- Used internally for updating heartbeats, but may also be used
- publicly to acknowledge that a system is alive (i.e. an rpc message
- was successfully sent to the host).
- """
- raise NotImplementedError("Must implement ack_alive")
-
- def backend_register(self, key, host):
- """Implements registration logic.
-
- Called by register(self,key,host)
- """
- raise NotImplementedError("Must implement backend_register")
-
- def backend_unregister(self, key, key_host):
- """Implements de-registration logic.
-
- Called by unregister(self,key,host)
- """
- raise NotImplementedError("Must implement backend_unregister")
-
- def register(self, key, host):
- """Register a host on a backend.
-
- Heartbeats, if applicable, may keepalive registration.
- """
- self.hosts.add(host)
- self.host_topic[(key, host)] = host
- key_host = '.'.join((key, host))
-
- self.backend_register(key, key_host)
-
- self.ack_alive(key, host)
-
- def unregister(self, key, host):
- """Unregister a topic."""
- if (key, host) in self.host_topic:
- del self.host_topic[(key, host)]
-
- self.hosts.discard(host)
- self.backend_unregister(key, '.'.join((key, host)))
-
- LOG.info(_("Matchmaker unregistered: %(key)s, %(host)s"),
- {'key': key, 'host': host})
-
- def start_heartbeat(self):
- """Implementation of MatchMakerBase.start_heartbeat.
-
- Launches greenthread looping send_heartbeats(),
- yielding for CONF.matchmaker_heartbeat_freq seconds
- between iterations.
- """
- if not self.hosts:
- raise MatchMakerException(
- _("Register before starting heartbeat."))
-
- def do_heartbeat():
- while True:
- self.send_heartbeats()
- eventlet.sleep(CONF.matchmaker_heartbeat_freq)
-
- self._heart = eventlet.spawn(do_heartbeat)
-
- def stop_heartbeat(self):
- """Destroys the heartbeat greenthread."""
- if self._heart:
- self._heart.kill()
-
-
-class DirectBinding(Binding):
- """Specifies a host in the key via a '.' character.
-
- Although dots are used in the key, the behavior here is
- that it maps directly to a host, thus direct.
- """
- def test(self, key):
- return '.' in key
-
-
-class TopicBinding(Binding):
- """Matches a 'bare' key, one without dots.
-
- AMQP generally considers topic exchanges to be those *with* dots,
- but we deviate here in terminology, as the behavior here matches
- that of a topic exchange (whereas where there are dots, the behavior
- matches that of a direct exchange).
- """
- def test(self, key):
- return '.' not in key
-
-
-class FanoutBinding(Binding):
- """Match on fanout keys, where the key starts with the 'fanout~' string."""
- def test(self, key):
- return key.startswith('fanout~')
-
-
-class StubExchange(Exchange):
- """Exchange that does nothing."""
- def run(self, key):
- return [(key, None)]
-
-
-class LocalhostExchange(Exchange):
- """Exchange where all direct topics are local."""
- def __init__(self, host='localhost'):
- self.host = host
- super(LocalhostExchange, self).__init__()
-
- def run(self, key):
- return [('.'.join((key.split('.')[0], self.host)), self.host)]
-
-
-class DirectExchange(Exchange):
- """Exchange where all topic keys are split, sending to second half.
-
- i.e. "compute.host" sends a message to "compute.host" running on "host"
- """
- def __init__(self):
- super(DirectExchange, self).__init__()
-
- def run(self, key):
- e = key.split('.', 1)[1]
- return [(key, e)]
-
-
-class MatchMakerLocalhost(MatchMakerBase):
- """Match Maker where all bare topics resolve to localhost.
-
- Useful for testing.
- """
- def __init__(self, host='localhost'):
- super(MatchMakerLocalhost, self).__init__()
- self.add_binding(FanoutBinding(), LocalhostExchange(host))
- self.add_binding(DirectBinding(), DirectExchange())
- self.add_binding(TopicBinding(), LocalhostExchange(host))
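Putting the bindings together: a `fanout~` key and a bare topic both resolve through the localhost exchange, while a dotted key resolves directly, and the first matching binding wins (each is added with the default `last=True`). A trimmed, dependency-free sketch of that resolution order (the function name is invented for illustration):

```python
def localhost_queues(key, host='localhost'):
    """Sketch of MatchMakerLocalhost.queues() using the binding rules above."""
    if key.startswith('fanout~'):
        # FanoutBinding -> LocalhostExchange: no dot in the key, so the
        # whole key is kept and the host is appended.
        return [('.'.join((key.split('.')[0], host)), host)]
    if '.' in key:
        # DirectBinding -> DirectExchange: the part after the dot is the host.
        return [(key, key.split('.', 1)[1])]
    # TopicBinding -> LocalhostExchange: bare topics resolve to localhost.
    return [('.'.join((key, host)), host)]

print(localhost_queues('compute'))        # [('compute.localhost', 'localhost')]
print(localhost_queues('compute.host2'))  # [('compute.host2', 'host2')]
```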
-
-
-class MatchMakerStub(MatchMakerBase):
- """Match Maker where topics are untouched.
-
- Useful for testing, or for AMQP/brokered queues.
- Will not work where knowledge of hosts is required (i.e. zeromq).
- """
- def __init__(self):
- super(MatchMakerStub, self).__init__()
-
- self.add_binding(FanoutBinding(), StubExchange())
- self.add_binding(DirectBinding(), StubExchange())
- self.add_binding(TopicBinding(), StubExchange())
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Copyright 2013 Cloudscaling Group, Inc
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-"""
-The MatchMaker classes should accept a Topic or Fanout exchange key and
-return keys for direct exchanges, per (approximate) AMQP parlance.
-"""
-
-from oslo.config import cfg
-
-from heat.openstack.common import importutils
-from heat.openstack.common import log as logging
-from heat.openstack.common.rpc import matchmaker as mm_common
-
-redis = importutils.try_import('redis')
-
-
-matchmaker_redis_opts = [
- cfg.StrOpt('host',
- default='127.0.0.1',
- help='Host to locate redis'),
- cfg.IntOpt('port',
- default=6379,
- help='Use this port to connect to redis host.'),
- cfg.StrOpt('password',
- default=None,
- help='Password for Redis server. (optional)'),
-]
-
-CONF = cfg.CONF
-opt_group = cfg.OptGroup(name='matchmaker_redis',
- title='Options for Redis-based MatchMaker')
-CONF.register_group(opt_group)
-CONF.register_opts(matchmaker_redis_opts, opt_group)
-LOG = logging.getLogger(__name__)
-
-
-class RedisExchange(mm_common.Exchange):
- def __init__(self, matchmaker):
- self.matchmaker = matchmaker
- self.redis = matchmaker.redis
- super(RedisExchange, self).__init__()
-
-
-class RedisTopicExchange(RedisExchange):
- """Exchange where all topic keys are split, sending to second half.
-
- i.e. "compute.host" sends a message to "compute" running on "host"
- """
- def run(self, topic):
- while True:
- member_name = self.redis.srandmember(topic)
-
- if not member_name:
- # If this happens, there are no
- # longer any members.
- break
-
- if not self.matchmaker.is_alive(topic, member_name):
- continue
-
- host = member_name.split('.', 1)[1]
- return [(member_name, host)]
- return []
-
-
-class RedisFanoutExchange(RedisExchange):
- """Return a list of all hosts."""
- def run(self, topic):
- topic = topic.split('~', 1)[1]
- hosts = self.redis.smembers(topic)
- good_hosts = filter(
- lambda host: self.matchmaker.is_alive(topic, host), hosts)
-
- return [(x, x.split('.', 1)[1]) for x in good_hosts]
-
-
-class MatchMakerRedis(mm_common.HeartbeatMatchMakerBase):
- """MatchMaker registering and looking-up hosts with a Redis server."""
- def __init__(self):
- super(MatchMakerRedis, self).__init__()
-
- if not redis:
- raise ImportError("Failed to import module redis.")
-
- self.redis = redis.StrictRedis(
- host=CONF.matchmaker_redis.host,
- port=CONF.matchmaker_redis.port,
- password=CONF.matchmaker_redis.password)
-
- self.add_binding(mm_common.FanoutBinding(), RedisFanoutExchange(self))
- self.add_binding(mm_common.DirectBinding(), mm_common.DirectExchange())
- self.add_binding(mm_common.TopicBinding(), RedisTopicExchange(self))
-
- def ack_alive(self, key, host):
- topic = "%s.%s" % (key, host)
- if not self.redis.expire(topic, CONF.matchmaker_heartbeat_ttl):
- # If we could not update the expiration, the key
- # might have been pruned. Re-register, creating a new
- # key in Redis.
- self.register(self.topic_host[host], host)
-
- def is_alive(self, topic, host):
- if self.redis.ttl(host) == -1:
- self.expire(topic, host)
- return False
- return True
-
- def expire(self, topic, host):
- with self.redis.pipeline() as pipe:
- pipe.multi()
- pipe.delete(host)
- pipe.srem(topic, host)
- pipe.execute()
-
- def backend_register(self, key, key_host):
- with self.redis.pipeline() as pipe:
- pipe.multi()
- pipe.sadd(key, key_host)
-
- # No value is needed, we just
- # care if it exists. Sets aren't viable
- # because only keys can expire.
- pipe.set(key_host, '')
-
- pipe.execute()
-
- def backend_unregister(self, key, key_host):
- with self.redis.pipeline() as pipe:
- pipe.multi()
- pipe.srem(key, key_host)
- pipe.delete(key_host)
- pipe.execute()
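The Redis MatchMaker above keeps liveness with two structures: a set per topic holding `topic.host` members, and one key per member whose TTL is refreshed by heartbeats. An in-memory sketch of that scheme, with a made-up store standing in for Redis (assumption: the real code relies on redis-py's `expire`/`ttl`/`sadd`/`srem`):

```python
import time


class FakeLivenessStore(object):
    """In-memory stand-in for the Redis liveness scheme."""

    def __init__(self):
        self.members = {}   # topic -> set of "topic.host" names
        self.expiry = {}    # "topic.host" -> absolute expiry time

    def register(self, topic, host, ttl):
        # like backend_register() + ack_alive(): add to the topic set
        # and (re)start the member's TTL
        key_host = '%s.%s' % (topic, host)
        self.members.setdefault(topic, set()).add(key_host)
        self.expiry[key_host] = time.time() + ttl

    def is_alive(self, topic, key_host):
        # like MatchMakerRedis.is_alive(): a member whose TTL lapsed
        # is pruned from both structures, as expire() does
        if self.expiry.get(key_host, 0) <= time.time():
            self.members.get(topic, set()).discard(key_host)
            self.expiry.pop(key_host, None)
            return False
        return True
```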
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Copyright 2011-2013 Cloudscaling Group, Inc
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-"""
- The MatchMaker classes should accept a Topic or Fanout exchange key and
-return keys for direct exchanges, per (approximate) AMQP parlance.
-"""
-
-import itertools
-import json
-
-from oslo.config import cfg
-
-from heat.openstack.common.gettextutils import _ # noqa
-from heat.openstack.common import log as logging
-from heat.openstack.common.rpc import matchmaker as mm
-
-
-matchmaker_opts = [
- # Matchmaker ring file
- cfg.StrOpt('ringfile',
- deprecated_name='matchmaker_ringfile',
- deprecated_group='DEFAULT',
- default='/etc/oslo/matchmaker_ring.json',
- help='Matchmaker ring file (JSON)'),
-]
-
-CONF = cfg.CONF
-CONF.register_opts(matchmaker_opts, 'matchmaker_ring')
-LOG = logging.getLogger(__name__)
-
-
-class RingExchange(mm.Exchange):
- """Match Maker where hosts are loaded from a static JSON formatted file.
-
- __init__ takes an optional ring dictionary argument, otherwise
- loads the ringfile from CONF.matchmaker_ring.ringfile.
- """
- def __init__(self, ring=None):
- super(RingExchange, self).__init__()
-
- if ring:
- self.ring = ring
- else:
- fh = open(CONF.matchmaker_ring.ringfile, 'r')
- self.ring = json.load(fh)
- fh.close()
-
- self.ring0 = {}
- for k in self.ring.keys():
- self.ring0[k] = itertools.cycle(self.ring[k])
-
- def _ring_has(self, key):
- return key in self.ring0
-
-
-class RoundRobinRingExchange(RingExchange):
- """A Topic Exchange based on a hashmap."""
- def __init__(self, ring=None):
- super(RoundRobinRingExchange, self).__init__(ring)
-
- def run(self, key):
- if not self._ring_has(key):
- LOG.warn(
- _("No key defining hosts for topic '%s', "
- "see ringfile") % (key, )
- )
- return []
- host = next(self.ring0[key])
- return [(key + '.' + host, host)]
-
-
-class FanoutRingExchange(RingExchange):
- """Fanout Exchange based on a hashmap."""
- def __init__(self, ring=None):
- super(FanoutRingExchange, self).__init__(ring)
-
- def run(self, key):
- # Assume starts with "fanout~", strip it for lookup.
- nkey = key.split('fanout~')[1:][0]
- if not self._ring_has(nkey):
- LOG.warn(
- _("No key defining hosts for topic '%s', "
- "see ringfile") % (nkey, )
- )
- return []
- return map(lambda x: (key + '.' + x, x), self.ring[nkey])
-
-
-class MatchMakerRing(mm.MatchMakerBase):
- """Match Maker where hosts are loaded from a static hashmap."""
- def __init__(self, ring=None):
- super(MatchMakerRing, self).__init__()
- self.add_binding(mm.FanoutBinding(), FanoutRingExchange(ring))
- self.add_binding(mm.DirectBinding(), mm.DirectExchange())
- self.add_binding(mm.TopicBinding(), RoundRobinRingExchange(ring))
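The round-robin lookup in `RoundRobinRingExchange` boils down to cycling through each topic's static host list. A self-contained sketch (the ring dict is a made-up example, not a real ringfile):

```python
import itertools

# Static topic -> hosts map, as would be loaded from the JSON ringfile.
ring = {'compute': ['host1', 'host2', 'host3']}

# Like RingExchange.__init__: one endless iterator per topic.
ring0 = dict((k, itertools.cycle(v)) for k, v in ring.items())


def round_robin_run(key):
    # Unknown topics yield no hosts (the original logs a warning here).
    if key not in ring0:
        return []
    host = next(ring0[key])
    return [(key + '.' + host, host)]
```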
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Copyright 2012-2013 Red Hat, Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-"""
-A helper class for proxy objects to remote APIs.
-
-For more information about rpc API version numbers, see:
- rpc/dispatcher.py
-"""
-
-
-from heat.openstack.common import rpc
-from heat.openstack.common.rpc import common as rpc_common
-from heat.openstack.common.rpc import serializer as rpc_serializer
-
-
-class RpcProxy(object):
- """A helper class for rpc clients.
-
- This class is a wrapper around the RPC client API. It allows you to
- specify the topic and API version in a single place. This is intended to
- be used as a base class for a class that implements the client side of an
- rpc API.
- """
-
- # The default namespace, which can be overridden in a subclass.
- RPC_API_NAMESPACE = None
-
- def __init__(self, topic, default_version, version_cap=None,
- serializer=None):
- """Initialize an RpcProxy.
-
- :param topic: The topic to use for all messages.
- :param default_version: The default API version to request in all
- outgoing messages. This can be overridden on a per-message
- basis.
- :param version_cap: Optionally cap the maximum version used for sent
- messages.
- :param serializer: Optionally (de-)serialize entities with a
- provided helper.
- """
- self.topic = topic
- self.default_version = default_version
- self.version_cap = version_cap
- if serializer is None:
- serializer = rpc_serializer.NoOpSerializer()
- self.serializer = serializer
- super(RpcProxy, self).__init__()
-
- def _set_version(self, msg, vers):
- """Helper method to set the version in a message.
-
- :param msg: The message having a version added to it.
- :param vers: The version number to add to the message.
- """
- v = vers if vers else self.default_version
- if (self.version_cap and not
- rpc_common.version_is_compatible(self.version_cap, v)):
- raise rpc_common.RpcVersionCapError(version_cap=self.version_cap)
- msg['version'] = v
-
- def _get_topic(self, topic):
- """Return the topic to use for a message."""
- return topic if topic else self.topic
-
- def can_send_version(self, version):
- """Check to see if a version is compatible with the version cap."""
- return (not self.version_cap or
- rpc_common.version_is_compatible(self.version_cap, version))
-
- @staticmethod
- def make_namespaced_msg(method, namespace, **kwargs):
- return {'method': method, 'namespace': namespace, 'args': kwargs}
-
- def make_msg(self, method, **kwargs):
- return self.make_namespaced_msg(method, self.RPC_API_NAMESPACE,
- **kwargs)
-
- def _serialize_msg_args(self, context, kwargs):
- """Helper method called to serialize message arguments.
-
- This calls our serializer on each argument, returning a new
- set of args that have been serialized.
-
- :param context: The request context
- :param kwargs: The arguments to serialize
- :returns: A new set of serialized arguments
- """
- new_kwargs = dict()
- for argname, arg in kwargs.iteritems():
- new_kwargs[argname] = self.serializer.serialize_entity(context,
- arg)
- return new_kwargs
-
- def call(self, context, msg, topic=None, version=None, timeout=None):
- """rpc.call() a remote method.
-
- :param context: The request context
- :param msg: The message to send, including the method and args.
- :param topic: Override the topic for this message.
- :param version: (Optional) Override the requested API version in this
- message.
- :param timeout: (Optional) A timeout to use when waiting for the
- response. If no timeout is specified, a default timeout will be
- used that is usually sufficient.
-
- :returns: The return value from the remote method.
- """
- self._set_version(msg, version)
- msg['args'] = self._serialize_msg_args(context, msg['args'])
- real_topic = self._get_topic(topic)
- try:
- result = rpc.call(context, real_topic, msg, timeout)
- return self.serializer.deserialize_entity(context, result)
- except rpc.common.Timeout as exc:
- raise rpc.common.Timeout(
- exc.info, real_topic, msg.get('method'))
-
- def multicall(self, context, msg, topic=None, version=None, timeout=None):
- """rpc.multicall() a remote method.
-
- :param context: The request context
- :param msg: The message to send, including the method and args.
- :param topic: Override the topic for this message.
- :param version: (Optional) Override the requested API version in this
- message.
- :param timeout: (Optional) A timeout to use when waiting for the
- response. If no timeout is specified, a default timeout will be
- used that is usually sufficient.
-
- :returns: An iterator that lets you process each of the returned values
- from the remote method as they arrive.
- """
- self._set_version(msg, version)
- msg['args'] = self._serialize_msg_args(context, msg['args'])
- real_topic = self._get_topic(topic)
- try:
- result = rpc.multicall(context, real_topic, msg, timeout)
- return self.serializer.deserialize_entity(context, result)
- except rpc.common.Timeout as exc:
- raise rpc.common.Timeout(
- exc.info, real_topic, msg.get('method'))
-
- def cast(self, context, msg, topic=None, version=None):
- """rpc.cast() a remote method.
-
- :param context: The request context
- :param msg: The message to send, including the method and args.
- :param topic: Override the topic for this message.
- :param version: (Optional) Override the requested API version in this
- message.
-
- :returns: None. rpc.cast() does not wait on any return value from the
- remote method.
- """
- self._set_version(msg, version)
- msg['args'] = self._serialize_msg_args(context, msg['args'])
- rpc.cast(context, self._get_topic(topic), msg)
-
- def fanout_cast(self, context, msg, topic=None, version=None):
- """rpc.fanout_cast() a remote method.
-
- :param context: The request context
- :param msg: The message to send, including the method and args.
- :param topic: Override the topic for this message.
- :param version: (Optional) Override the requested API version in this
- message.
-
- :returns: None. rpc.fanout_cast() does not wait on any return value
- from the remote method.
- """
- self._set_version(msg, version)
- msg['args'] = self._serialize_msg_args(context, msg['args'])
- rpc.fanout_cast(context, self._get_topic(topic), msg)
-
- def cast_to_server(self, context, server_params, msg, topic=None,
- version=None):
- """rpc.cast_to_server() a remote method.
-
- :param context: The request context
- :param server_params: Server parameters. See rpc.cast_to_server() for
- details.
- :param msg: The message to send, including the method and args.
- :param topic: Override the topic for this message.
- :param version: (Optional) Override the requested API version in this
- message.
-
- :returns: None. rpc.cast_to_server() does not wait on any
- return values.
- """
- self._set_version(msg, version)
- msg['args'] = self._serialize_msg_args(context, msg['args'])
- rpc.cast_to_server(context, server_params, self._get_topic(topic), msg)
-
- def fanout_cast_to_server(self, context, server_params, msg, topic=None,
- version=None):
- """rpc.fanout_cast_to_server() a remote method.
-
- :param context: The request context
- :param server_params: Server parameters. See rpc.cast_to_server() for
- details.
- :param msg: The message to send, including the method and args.
- :param topic: Override the topic for this message.
- :param version: (Optional) Override the requested API version in this
- message.
-
- :returns: None. rpc.fanout_cast_to_server() does not wait on any
- return values.
- """
- self._set_version(msg, version)
- msg['args'] = self._serialize_msg_args(context, msg['args'])
- rpc.fanout_cast_to_server(context, server_params,
- self._get_topic(topic), msg)
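The version-cap gate in `_set_version()` can be sketched standalone. The compatibility rule used here (same major version, requested minor not above the cap's minor) mirrors the usual oslo semantics, but is an assumption rather than the imported `rpc_common.version_is_compatible` helper:

```python
def version_is_compatible(cap, version):
    # Assumed rule: majors must match and the requested minor must not
    # exceed the capped minor.
    cap_major, cap_minor = map(int, cap.split('.'))
    v_major, v_minor = map(int, version.split('.'))
    return cap_major == v_major and v_minor <= cap_minor


def set_version(msg, version, default_version='1.0', version_cap=None):
    # Like RpcProxy._set_version: fall back to the default version,
    # then refuse to send anything the cap cannot express.
    v = version if version else default_version
    if version_cap and not version_is_compatible(version_cap, v):
        raise ValueError('capped at %s, requested %s' % (version_cap, v))
    msg['version'] = v
    return msg
```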
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Copyright 2013 Red Hat, Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import base64
-import collections
-import os
-import struct
-import time
-
-import requests
-
-from oslo.config import cfg
-
-from heat.openstack.common.crypto import utils as cryptoutils
-from heat.openstack.common import jsonutils
-from heat.openstack.common import log as logging
-
-secure_message_opts = [
- cfg.BoolOpt('enabled', default=True,
- help='Whether Secure Messaging (Signing) is enabled,'
- ' defaults to enabled'),
- cfg.BoolOpt('enforced', default=False,
- help='Whether Secure Messaging (Signing) is enforced,'
- ' defaults to not enforced'),
- cfg.BoolOpt('encrypt', default=False,
- help='Whether Secure Messaging (Encryption) is enabled,'
- ' defaults to not enabled'),
- cfg.StrOpt('secret_keys_file',
- help='Path to the file containing the keys, takes precedence'
- ' over secret_key'),
- cfg.MultiStrOpt('secret_key',
- help='A list of keys: (ex: name:<base64 encoded key>),'
- ' ignored if secret_keys_file is set'),
- cfg.StrOpt('kds_endpoint',
- help='KDS endpoint (ex: http://kds.example.com:35357/v3)'),
-]
-secure_message_group = cfg.OptGroup('secure_messages',
- title='Secure Messaging options')
-
-LOG = logging.getLogger(__name__)
-
-
-class SecureMessageException(Exception):
- """Generic Exception for Secure Messages."""
-
- msg = "An unknown Secure Message related exception occurred."
-
- def __init__(self, msg=None):
- if msg is None:
- msg = self.msg
- super(SecureMessageException, self).__init__(msg)
-
-
-class SharedKeyNotFound(SecureMessageException):
- """No shared key was found and no other external authentication mechanism
- is available.
- """
-
- msg = "Shared Key for [%s] Not Found. (%s)"
-
- def __init__(self, name, errmsg):
- super(SharedKeyNotFound, self).__init__(self.msg % (name, errmsg))
-
-
-class InvalidMetadata(SecureMessageException):
- """The metadata is invalid."""
-
- msg = "Invalid metadata: %s"
-
- def __init__(self, err):
- super(InvalidMetadata, self).__init__(self.msg % err)
-
-
-class InvalidSignature(SecureMessageException):
- """Signature validation failed."""
-
- msg = "Failed to validate signature (source=%s, destination=%s)"
-
- def __init__(self, src, dst):
- super(InvalidSignature, self).__init__(self.msg % (src, dst))
-
-
-class UnknownDestinationName(SecureMessageException):
- """The Destination name is unknown to us."""
-
- msg = "Invalid destination name (%s)"
-
- def __init__(self, name):
- super(UnknownDestinationName, self).__init__(self.msg % name)
-
-
-class InvalidEncryptedTicket(SecureMessageException):
- """The Encrypted Ticket could not be successfully handled."""
-
- msg = "Invalid Ticket (source=%s, destination=%s)"
-
- def __init__(self, src, dst):
- super(InvalidEncryptedTicket, self).__init__(self.msg % (src, dst))
-
-
-class InvalidExpiredTicket(SecureMessageException):
- """The ticket received is already expired."""
-
- msg = "Expired ticket (source=%s, destination=%s)"
-
- def __init__(self, src, dst):
- super(InvalidExpiredTicket, self).__init__(self.msg % (src, dst))
-
-
-class CommunicationError(SecureMessageException):
- """The Communication with the KDS failed."""
-
- msg = "Communication Error (target=%s): %s"
-
- def __init__(self, target, errmsg):
- super(CommunicationError, self).__init__(self.msg % (target, errmsg))
-
-
-class InvalidArgument(SecureMessageException):
- """Bad initialization argument."""
-
- msg = "Invalid argument: %s"
-
- def __init__(self, errmsg):
- super(InvalidArgument, self).__init__(self.msg % errmsg)
-
-
-Ticket = collections.namedtuple('Ticket', ['skey', 'ekey', 'esek'])
-
-
-class KeyStore(object):
- """A storage class for Signing and Encryption Keys.
-
- This class creates an object that holds Generic Keys like Signing
- Keys, Encryption Keys, Encrypted SEK Tickets ...
- """
-
- def __init__(self):
- self._kvps = dict()
-
- def _get_key_name(self, source, target, ktype):
- return (source, target, ktype)
-
- def _put(self, src, dst, ktype, expiration, data):
- name = self._get_key_name(src, dst, ktype)
- self._kvps[name] = (expiration, data)
-
- def _get(self, src, dst, ktype):
- name = self._get_key_name(src, dst, ktype)
- if name in self._kvps:
- expiration, data = self._kvps[name]
- if expiration > time.time():
- return data
- else:
- del self._kvps[name]
-
- return None
-
- def clear(self):
- """Wipes the store clear of all data."""
- self._kvps.clear()
-
- def put_ticket(self, source, target, skey, ekey, esek, expiration):
- """Puts a sek pair in the cache.
-
- :param source: Client name
- :param target: Target name
- :param skey: The Signing Key
- :param ekey: The Encryption Key
- :param esek: The token encrypted with the target key
- :param expiration: Expiration time in seconds since Epoch
- """
- keys = Ticket(skey, ekey, esek)
- self._put(source, target, 'ticket', expiration, keys)
-
- def get_ticket(self, source, target):
- """Returns a Ticket (skey, ekey, esek) namedtuple for the
- source/target pair.
- """
- return self._get(source, target, 'ticket')
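The KeyStore's `_put`/`_get` pair is an expiring key-value cache: entries carry an absolute expiration and are dropped lazily on read. A minimal standalone sketch of that behaviour:

```python
import time


class ExpiringStore(object):
    """Minimal expiring key-value cache, as in KeyStore._put/_get."""

    def __init__(self):
        self._kvps = {}

    def put(self, name, data, expiration):
        # expiration is an absolute time in seconds since the Epoch
        self._kvps[name] = (expiration, data)

    def get(self, name):
        # Stale entries are deleted on access rather than by a timer.
        if name in self._kvps:
            expiration, data = self._kvps[name]
            if expiration > time.time():
                return data
            del self._kvps[name]
        return None
```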
-
-
-_KEY_STORE = KeyStore()
-
-
-class _KDSClient(object):
-
- USER_AGENT = 'oslo-incubator/rpc'
-
- def __init__(self, endpoint=None, timeout=None):
- """A KDS Client class."""
-
- self._endpoint = endpoint
- if timeout is not None:
- self.timeout = float(timeout)
- else:
- self.timeout = None
-
- def _do_get(self, url, request):
- req_kwargs = dict()
- req_kwargs['headers'] = dict()
- req_kwargs['headers']['User-Agent'] = self.USER_AGENT
- req_kwargs['headers']['Content-Type'] = 'application/json'
- req_kwargs['data'] = jsonutils.dumps({'request': request})
- if self.timeout is not None:
- req_kwargs['timeout'] = self.timeout
-
- try:
- resp = requests.get(url, **req_kwargs)
- except requests.ConnectionError as e:
- err = "Unable to establish connection. %s" % e
- raise CommunicationError(url, err)
-
- return resp
-
- def _get_reply(self, url, resp):
- if resp.text:
- try:
- body = jsonutils.loads(resp.text)
- reply = body['reply']
- except (KeyError, TypeError, ValueError):
- msg = "Failed to decode reply: %s" % resp.text
- raise CommunicationError(url, msg)
- else:
- msg = "No reply data was returned."
- raise CommunicationError(url, msg)
-
- return reply
-
- def _get_ticket(self, request, url=None, redirects=10):
- """Send an HTTP request.
-
- Wraps around 'requests' to handle redirects and common errors.
- """
- if url is None:
- if not self._endpoint:
- raise CommunicationError(url, 'Endpoint not configured')
- url = self._endpoint + '/kds/ticket'
-
- while redirects:
- resp = self._do_get(url, request)
- if resp.status_code in (301, 302, 305):
- # Redirected. Reissue the request to the new location.
- url = resp.headers['location']
- redirects -= 1
- continue
- elif resp.status_code != 200:
- msg = "Request returned failure status: %s (%s)"
- err = msg % (resp.status_code, resp.text)
- raise CommunicationError(url, err)
-
- return self._get_reply(url, resp)
-
- raise CommunicationError(url, "Too many redirections, giving up!")
-
- def get_ticket(self, source, target, crypto, key):
-
- # prepare metadata
- md = {'requestor': source,
- 'target': target,
- 'timestamp': time.time(),
- 'nonce': struct.unpack('Q', os.urandom(8))[0]}
- metadata = base64.b64encode(jsonutils.dumps(md))
-
- # sign metadata
- signature = crypto.sign(key, metadata)
-
- # HTTP request
- reply = self._get_ticket({'metadata': metadata,
- 'signature': signature})
-
- # verify reply
- signature = crypto.sign(key, (reply['metadata'] + reply['ticket']))
- if signature != reply['signature']:
- raise InvalidEncryptedTicket(md['requestor'], md['target'])
- md = jsonutils.loads(base64.b64decode(reply['metadata']))
- if ((md['source'] != source or
- md['destination'] != target or
- md['expiration'] < time.time())):
- raise InvalidEncryptedTicket(md['source'], md['destination'])
-
- # return ticket data
- tkt = jsonutils.loads(crypto.decrypt(key, reply['ticket']))
-
- return tkt, md['expiration']
-
-
- # we need to keep a global nonce, as this value should never repeat no
- # matter how many SecureMessage objects we create
-_NONCE = None
-
-
-def _get_nonce():
- """We keep a single counter per instance, as it is so huge we can't
- possibly cycle through within 1/100 of a second anyway.
- """
-
- global _NONCE
- # Lazy initialize, for now get a random value, multiply by 2^32 and
- # use it as the nonce base. The counter itself will rotate after
- # 2^32 increments.
- if _NONCE is None:
- _NONCE = [struct.unpack('I', os.urandom(4))[0], 0]
-
- # Increment counter and wrap at 2^32
- _NONCE[1] += 1
- if _NONCE[1] > 0xffffffff:
- _NONCE[1] = 0
-
- # Return base shifted left 32 bits, plus counter
- return long(_NONCE[0] << 32) + _NONCE[1]
-
-
-class SecureMessage(object):
- """A Secure Message object.
-
- This class creates a signing/encryption facility for RPC messages.
- It encapsulates all the necessary crypto primitives to insulate
- regular code from the intricacies of message authentication, validation
- and optionally encryption.
-
- :param topic: The topic name of the queue
- :param host: The server name, together with the topic it forms a unique
- name that is used to source signing keys, and verify
- incoming messages.
- :param conf: a ConfigOpts object
- :param key: (optional) explicitly pass in endpoint private key.
- If not provided it will be sourced from the service config
- :param key_store: (optional) Storage class for local caching
- :param encrypt: (defaults to False) Whether to encrypt messages
- :param enctype: (defaults to AES) Cipher to use
- :param hashtype: (defaults to SHA256) Hash function to use for signatures
- """
-
- def __init__(self, topic, host, conf, key=None, key_store=None,
- encrypt=None, enctype='AES', hashtype='SHA256'):
-
- conf.register_group(secure_message_group)
- conf.register_opts(secure_message_opts, group='secure_messages')
-
- self._name = '%s.%s' % (topic, host)
- self._key = key
- self._conf = conf.secure_messages
- self._encrypt = self._conf.encrypt if (encrypt is None) else encrypt
- self._crypto = cryptoutils.SymmetricCrypto(enctype, hashtype)
- self._hkdf = cryptoutils.HKDF(hashtype)
- self._kds = _KDSClient(self._conf.kds_endpoint)
-
- if self._key is None:
- self._key = self._init_key(topic, self._name)
- if self._key is None:
- err = "Secret Key (or key file) is missing or malformed"
- raise SharedKeyNotFound(self._name, err)
-
- self._key_store = key_store or _KEY_STORE
-
- def _init_key(self, topic, name):
- keys = None
- if self._conf.secret_keys_file:
- with open(self._conf.secret_keys_file, 'r') as f:
- keys = f.readlines()
- elif self._conf.secret_key:
- keys = self._conf.secret_key
-
- if keys is None:
- return None
-
- for k in keys:
- if k[0] == '#':
- continue
- if ':' not in k:
- break
- svc, key = k.split(':', 1)
- if svc == topic or svc == name:
- return base64.b64decode(key)
-
- return None
-
- def _split_key(self, key, size):
- sig_key = key[:size]
- enc_key = key[size:]
- return sig_key, enc_key
-
- def _decode_esek(self, key, source, target, timestamp, esek):
- """This function decrypts the esek buffer passed in and returns a
- KeyStore to be used to check and decrypt the received message.
-
- :param key: The key to use to decrypt the ticket (esek)
- :param source: The name of the source service
- :param target: The name of the target service
- :param timestamp: The incoming message timestamp
- :param esek: a base64 encoded encrypted block containing a JSON string
- """
- rkey = None
-
- try:
- s = self._crypto.decrypt(key, esek)
- j = jsonutils.loads(s)
-
- rkey = base64.b64decode(j['key'])
- expiration = j['timestamp'] + j['ttl']
- if j['timestamp'] > timestamp or timestamp > expiration:
- raise InvalidExpiredTicket(source, target)
-
- except Exception:
- raise InvalidEncryptedTicket(source, target)
-
- info = '%s,%s,%s' % (source, target, str(j['timestamp']))
-
- sek = self._hkdf.expand(rkey, info, len(key) * 2)
-
- return self._split_key(sek, len(key))
-
- def _get_ticket(self, target):
- """This function will check if we already have a SEK for the specified
- target in the cache, or will go and try to fetch a new SEK from the key
- server.
-
- :param target: The name of the target service
- """
- ticket = self._key_store.get_ticket(self._name, target)
-
- if ticket is not None:
- return ticket
-
- tkt, expiration = self._kds.get_ticket(self._name, target,
- self._crypto, self._key)
-
- self._key_store.put_ticket(self._name, target,
- base64.b64decode(tkt['skey']),
- base64.b64decode(tkt['ekey']),
- tkt['esek'], expiration)
- return self._key_store.get_ticket(self._name, target)
-
- def encode(self, version, target, json_msg):
- """This is the main encoding function.
-
- It takes a target and a message and returns a tuple consisting of a
- JSON serialized metadata object, a JSON serialized (and optionally
- encrypted) message, and a signature.
-
- :param version: the current envelope version
- :param target: The name of the target service (usually with hostname)
- :param json_msg: a serialized json message object
- """
- ticket = self._get_ticket(target)
-
- metadata = jsonutils.dumps({'source': self._name,
- 'destination': target,
- 'timestamp': time.time(),
- 'nonce': _get_nonce(),
- 'esek': ticket.esek,
- 'encryption': self._encrypt})
-
- message = json_msg
- if self._encrypt:
- message = self._crypto.encrypt(ticket.ekey, message)
-
- signature = self._crypto.sign(ticket.skey,
- version + metadata + message)
-
- return (metadata, message, signature)
-
- def decode(self, version, metadata, message, signature):
- """This is the main decoding function.
-
- It takes a version, metadata, message and signature strings and
- returns a tuple with a (decrypted) message and metadata or raises
- an exception in case of error.
-
- :param version: the current envelope version
- :param metadata: a JSON serialized object with metadata for validation
- :param message: a JSON serialized (base64 encoded encrypted) message
- :param signature: a base64 encoded signature
- """
- md = jsonutils.loads(metadata)
-
- check_args = ('source', 'destination', 'timestamp',
- 'nonce', 'esek', 'encryption')
- for arg in check_args:
- if arg not in md:
- raise InvalidMetadata('Missing metadata "%s"' % arg)
-
- if md['destination'] != self._name:
- # TODO(simo) handle group keys by checking target
- raise UnknownDestinationName(md['destination'])
-
- try:
- skey, ekey = self._decode_esek(self._key,
- md['source'], md['destination'],
- md['timestamp'], md['esek'])
- except InvalidExpiredTicket:
- raise
- except Exception:
- raise InvalidMetadata('Failed to decode ESEK for %s/%s' % (
- md['source'], md['destination']))
-
- sig = self._crypto.sign(skey, version + metadata + message)
-
- if sig != signature:
- raise InvalidSignature(md['source'], md['destination'])
-
- if md['encryption'] is True:
- msg = self._crypto.decrypt(ekey, message)
- else:
- msg = message
-
- return (md, msg)
+++ /dev/null
-# Copyright 2013 IBM Corp.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-"""Provides the definition of an RPC serialization handler"""
-
-import abc
-
-
-class Serializer(object):
- """Generic (de-)serialization definition base class."""
- __metaclass__ = abc.ABCMeta
-
- @abc.abstractmethod
- def serialize_entity(self, context, entity):
- """Serialize something to primitive form.
-
- :param context: Security context
- :param entity: Entity to be serialized
- :returns: Serialized form of entity
- """
- pass
-
- @abc.abstractmethod
- def deserialize_entity(self, context, entity):
- """Deserialize something from primitive form.
-
- :param context: Security context
- :param entity: Primitive to be deserialized
- :returns: Deserialized form of entity
- """
- pass
-
-
-class NoOpSerializer(Serializer):
- """A serializer that does nothing."""
-
- def serialize_entity(self, context, entity):
- return entity
-
- def deserialize_entity(self, context, entity):
- return entity
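A concrete serializer in the spirit of `NoOpSerializer` above, round-tripping entities through JSON. Written with Python 3 `abc.ABC` syntax purely for illustration; the original tree targets Python 2 and uses `__metaclass__`:

```python
import abc
import json


class Serializer(abc.ABC):
    """Generic (de-)serialization base, as in the module above."""

    @abc.abstractmethod
    def serialize_entity(self, context, entity):
        """Serialize something to primitive form."""

    @abc.abstractmethod
    def deserialize_entity(self, context, entity):
        """Deserialize something from primitive form."""


class JsonSerializer(Serializer):
    """Round-trips entities through JSON strings (hypothetical)."""

    def serialize_entity(self, context, entity):
        return json.dumps(entity)

    def deserialize_entity(self, context, entity):
        return json.loads(entity)
```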
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Copyright 2010 United States Government as represented by the
-# Administrator of the National Aeronautics and Space Administration.
-# All Rights Reserved.
-# Copyright 2011 Red Hat, Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from heat.openstack.common.gettextutils import _ # noqa
-from heat.openstack.common import log as logging
-from heat.openstack.common import rpc
-from heat.openstack.common.rpc import dispatcher as rpc_dispatcher
-from heat.openstack.common import service
-
-
-LOG = logging.getLogger(__name__)
-
-
-class Service(service.Service):
- """Service object for binaries running on hosts.
-
- A service enables rpc by listening to queues based on topic and host.
- """
- def __init__(self, host, topic, manager=None, serializer=None):
- super(Service, self).__init__()
- self.host = host
- self.topic = topic
- self.serializer = serializer
- if manager is None:
- self.manager = self
- else:
- self.manager = manager
-
- def start(self):
- super(Service, self).start()
-
- self.conn = rpc.create_connection(new=True)
-        LOG.debug(_("Creating Consumer connection for Service %s"),
-                  self.topic)
-
- dispatcher = rpc_dispatcher.RpcDispatcher([self.manager],
- self.serializer)
-
- # Share this same connection for these Consumers
- self.conn.create_consumer(self.topic, dispatcher, fanout=False)
-
- node_topic = '%s.%s' % (self.topic, self.host)
- self.conn.create_consumer(node_topic, dispatcher, fanout=False)
-
- self.conn.create_consumer(self.topic, dispatcher, fanout=True)
-
- # Hook to allow the manager to do other initializations after
- # the rpc connection is created.
- if callable(getattr(self.manager, 'initialize_service_hook', None)):
- self.manager.initialize_service_hook(self)
-
- # Consume from all consumers in a thread
- self.conn.consume_in_thread()
-
- def stop(self):
- # Try to shut the connection down, but if we get any sort of
-        # errors, go ahead and ignore them, as we're shutting down anyway.
- try:
- self.conn.close()
- except Exception:
- pass
- super(Service, self).stop()
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Copyright 2011 OpenStack Foundation
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import eventlet
-eventlet.monkey_patch()
-
-import contextlib
-import sys
-
-from oslo.config import cfg
-
-from heat.openstack.common import log as logging
-from heat.openstack.common import rpc
-from heat.openstack.common.rpc import impl_zmq
-
-CONF = cfg.CONF
-CONF.register_opts(rpc.rpc_opts)
-CONF.register_opts(impl_zmq.zmq_opts)
-
-
-def main():
- CONF(sys.argv[1:], project='oslo')
- logging.setup("oslo")
-
- with contextlib.closing(impl_zmq.ZmqProxy(CONF)) as reactor:
- reactor.consume_in_thread()
- reactor.wait()
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Copyright 2010 United States Government as represented by the
-# Administrator of the National Aeronautics and Space Administration.
-# Copyright 2011 Justin Santa Barbara
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-"""Generic Node base class for all workers that run on hosts."""
-
-import errno
-import os
-import random
-import signal
-import sys
-import time
-
-import eventlet
-from eventlet import event
-import logging as std_logging
-from oslo.config import cfg
-
-from heat.openstack.common import eventlet_backdoor
-from heat.openstack.common.gettextutils import _ # noqa
-from heat.openstack.common import importutils
-from heat.openstack.common import log as logging
-from heat.openstack.common import threadgroup
-
-
-rpc = importutils.try_import('heat.openstack.common.rpc')
-CONF = cfg.CONF
-LOG = logging.getLogger(__name__)
-
-
-class Launcher(object):
- """Launch one or more services and wait for them to complete."""
-
- def __init__(self):
- """Initialize the service launcher.
-
- :returns: None
-
- """
- self.services = Services()
- self.backdoor_port = eventlet_backdoor.initialize_if_enabled()
-
- def launch_service(self, service):
- """Load and start the given service.
-
- :param service: The service you would like to start.
- :returns: None
-
- """
- service.backdoor_port = self.backdoor_port
- self.services.add(service)
-
- def stop(self):
- """Stop all services which are currently running.
-
- :returns: None
-
- """
- self.services.stop()
-
- def wait(self):
- """Waits until all services have been stopped, and then returns.
-
- :returns: None
-
- """
- self.services.wait()
-
- def restart(self):
- """Reload config files and restart service.
-
- :returns: None
-
- """
- cfg.CONF.reload_config_files()
- self.services.restart()
-
-
-class SignalExit(SystemExit):
- def __init__(self, signo, exccode=1):
- super(SignalExit, self).__init__(exccode)
- self.signo = signo
-
-
-class ServiceLauncher(Launcher):
- def _handle_signal(self, signo, frame):
- # Allow the process to be killed again and die from natural causes
- signal.signal(signal.SIGTERM, signal.SIG_DFL)
- signal.signal(signal.SIGINT, signal.SIG_DFL)
- signal.signal(signal.SIGHUP, signal.SIG_DFL)
-
- raise SignalExit(signo)
-
- def handle_signal(self):
- signal.signal(signal.SIGTERM, self._handle_signal)
- signal.signal(signal.SIGINT, self._handle_signal)
- signal.signal(signal.SIGHUP, self._handle_signal)
-
- def _wait_for_exit_or_signal(self):
- status = None
- signo = 0
-
- LOG.debug(_('Full set of CONF:'))
- CONF.log_opt_values(LOG, std_logging.DEBUG)
-
- try:
- super(ServiceLauncher, self).wait()
- except SignalExit as exc:
- signame = {signal.SIGTERM: 'SIGTERM',
- signal.SIGINT: 'SIGINT',
- signal.SIGHUP: 'SIGHUP'}[exc.signo]
- LOG.info(_('Caught %s, exiting'), signame)
- status = exc.code
- signo = exc.signo
- except SystemExit as exc:
- status = exc.code
- finally:
- self.stop()
- if rpc:
- try:
- rpc.cleanup()
- except Exception:
- # We're shutting down, so it doesn't matter at this point.
- LOG.exception(_('Exception during rpc cleanup.'))
-
- return status, signo
-
- def wait(self):
- while True:
- self.handle_signal()
- status, signo = self._wait_for_exit_or_signal()
- if signo != signal.SIGHUP:
- return status
- self.restart()
-
-
-class ServiceWrapper(object):
- def __init__(self, service, workers):
- self.service = service
- self.workers = workers
- self.children = set()
- self.forktimes = []
-
-
-class ProcessLauncher(object):
- def __init__(self):
- self.children = {}
- self.sigcaught = None
- self.running = True
- rfd, self.writepipe = os.pipe()
- self.readpipe = eventlet.greenio.GreenPipe(rfd, 'r')
- self.handle_signal()
-
- def handle_signal(self):
- signal.signal(signal.SIGTERM, self._handle_signal)
- signal.signal(signal.SIGINT, self._handle_signal)
- signal.signal(signal.SIGHUP, self._handle_signal)
-
- def _handle_signal(self, signo, frame):
- self.sigcaught = signo
- self.running = False
-
- # Allow the process to be killed again and die from natural causes
- signal.signal(signal.SIGTERM, signal.SIG_DFL)
- signal.signal(signal.SIGINT, signal.SIG_DFL)
- signal.signal(signal.SIGHUP, signal.SIG_DFL)
-
- def _pipe_watcher(self):
- # This will block until the write end is closed when the parent
- # dies unexpectedly
- self.readpipe.read()
-
- LOG.info(_('Parent process has died unexpectedly, exiting'))
-
- sys.exit(1)
-
- def _child_process_handle_signal(self):
- # Setup child signal handlers differently
- def _sigterm(*args):
- signal.signal(signal.SIGTERM, signal.SIG_DFL)
- raise SignalExit(signal.SIGTERM)
-
- def _sighup(*args):
- signal.signal(signal.SIGHUP, signal.SIG_DFL)
- raise SignalExit(signal.SIGHUP)
-
- signal.signal(signal.SIGTERM, _sigterm)
- signal.signal(signal.SIGHUP, _sighup)
- # Block SIGINT and let the parent send us a SIGTERM
- signal.signal(signal.SIGINT, signal.SIG_IGN)
-
- def _child_wait_for_exit_or_signal(self, launcher):
- status = None
- signo = 0
-
- try:
- launcher.wait()
- except SignalExit as exc:
- signame = {signal.SIGTERM: 'SIGTERM',
- signal.SIGINT: 'SIGINT',
- signal.SIGHUP: 'SIGHUP'}[exc.signo]
- LOG.info(_('Caught %s, exiting'), signame)
- status = exc.code
- signo = exc.signo
- except SystemExit as exc:
- status = exc.code
- except BaseException:
- LOG.exception(_('Unhandled exception'))
- status = 2
- finally:
- launcher.stop()
-
- return status, signo
-
- def _child_process(self, service):
- self._child_process_handle_signal()
-
- # Reopen the eventlet hub to make sure we don't share an epoll
- # fd with parent and/or siblings, which would be bad
- eventlet.hubs.use_hub()
-
- # Close write to ensure only parent has it open
- os.close(self.writepipe)
- # Create greenthread to watch for parent to close pipe
- eventlet.spawn_n(self._pipe_watcher)
-
- # Reseed random number generator
- random.seed()
-
- launcher = Launcher()
- launcher.launch_service(service)
- return launcher
-
- def _start_child(self, wrap):
- if len(wrap.forktimes) > wrap.workers:
-            # Limit ourselves to one fork per second (over a period of
-            # `workers` seconds). This lets workers start up quickly
-            # while ensuring we don't keep respawning children that
-            # die instantly.
- if time.time() - wrap.forktimes[0] < wrap.workers:
- LOG.info(_('Forking too fast, sleeping'))
- time.sleep(1)
-
- wrap.forktimes.pop(0)
-
- wrap.forktimes.append(time.time())
-
- pid = os.fork()
- if pid == 0:
-            # NOTE(johannes): All exceptions are caught to ensure this
-            # doesn't fall back into the loop spawning children. It would
-            # be bad for a child to spawn more children.
- launcher = self._child_process(wrap.service)
- while True:
- self._child_process_handle_signal()
- status, signo = self._child_wait_for_exit_or_signal(launcher)
- if signo != signal.SIGHUP:
- break
- launcher.restart()
-
- os._exit(status)
-
- LOG.info(_('Started child %d'), pid)
-
- wrap.children.add(pid)
- self.children[pid] = wrap
-
- return pid
-
- def launch_service(self, service, workers=1):
- wrap = ServiceWrapper(service, workers)
-
- LOG.info(_('Starting %d workers'), wrap.workers)
- while self.running and len(wrap.children) < wrap.workers:
- self._start_child(wrap)
-
- def _wait_child(self):
- try:
- # Don't block if no child processes have exited
- pid, status = os.waitpid(0, os.WNOHANG)
- if not pid:
- return None
- except OSError as exc:
- if exc.errno not in (errno.EINTR, errno.ECHILD):
- raise
- return None
-
- if os.WIFSIGNALED(status):
- sig = os.WTERMSIG(status)
- LOG.info(_('Child %(pid)d killed by signal %(sig)d'),
- dict(pid=pid, sig=sig))
- else:
- code = os.WEXITSTATUS(status)
- LOG.info(_('Child %(pid)s exited with status %(code)d'),
- dict(pid=pid, code=code))
-
- if pid not in self.children:
- LOG.warning(_('pid %d not in child list'), pid)
- return None
-
- wrap = self.children.pop(pid)
- wrap.children.remove(pid)
- return wrap
-
- def _respawn_children(self):
- while self.running:
- wrap = self._wait_child()
- if not wrap:
- # Yield to other threads if no children have exited
- # Sleep for a short time to avoid excessive CPU usage
- # (see bug #1095346)
- eventlet.greenthread.sleep(.01)
- continue
- while self.running and len(wrap.children) < wrap.workers:
- self._start_child(wrap)
-
- def wait(self):
- """Loop waiting on children to die and respawning as necessary."""
-
- LOG.debug(_('Full set of CONF:'))
- CONF.log_opt_values(LOG, std_logging.DEBUG)
-
- while True:
- self.handle_signal()
- self._respawn_children()
- if self.sigcaught:
- signame = {signal.SIGTERM: 'SIGTERM',
- signal.SIGINT: 'SIGINT',
- signal.SIGHUP: 'SIGHUP'}[self.sigcaught]
- LOG.info(_('Caught %s, stopping children'), signame)
- if self.sigcaught != signal.SIGHUP:
- break
-
- for pid in self.children:
- os.kill(pid, signal.SIGHUP)
- self.running = True
- self.sigcaught = None
-
- for pid in self.children:
- try:
- os.kill(pid, signal.SIGTERM)
- except OSError as exc:
- if exc.errno != errno.ESRCH:
- raise
-
- # Wait for children to die
- if self.children:
- LOG.info(_('Waiting on %d children to exit'), len(self.children))
- while self.children:
- self._wait_child()
-
-
-class Service(object):
- """Service object for binaries running on hosts."""
-
- def __init__(self, threads=1000):
- self.tg = threadgroup.ThreadGroup(threads)
-
- # signal that the service is done shutting itself down:
- self._done = event.Event()
-
- def reset(self):
- # NOTE(Fengqian): docs for Event.reset() recommend against using it
- self._done = event.Event()
-
- def start(self):
- pass
-
- def stop(self):
- self.tg.stop()
- self.tg.wait()
- # Signal that service cleanup is done:
- if not self._done.ready():
- self._done.send()
-
- def wait(self):
- self._done.wait()
-
-
-class Services(object):
-
- def __init__(self):
- self.services = []
- self.tg = threadgroup.ThreadGroup()
- self.done = event.Event()
-
- def add(self, service):
- self.services.append(service)
- self.tg.add_thread(self.run_service, service, self.done)
-
- def stop(self):
- # wait for graceful shutdown of services:
- for service in self.services:
- service.stop()
- service.wait()
-
- # Each service has performed cleanup, now signal that the run_service
- # wrapper threads can now die:
- if not self.done.ready():
- self.done.send()
-
- # reap threads:
- self.tg.stop()
-
- def wait(self):
- self.tg.wait()
-
- def restart(self):
- self.stop()
- self.done = event.Event()
- for restart_service in self.services:
- restart_service.reset()
- self.tg.add_thread(self.run_service, restart_service, self.done)
-
- @staticmethod
- def run_service(service, done):
- """Service start wrapper.
-
- :param service: service to run
- :param done: event to wait on until a shutdown is triggered
- :returns: None
-
- """
- service.start()
- done.wait()
-
-
-def launch(service, workers=None):
- if workers:
- launcher = ProcessLauncher()
- launcher.launch_service(service, workers=workers)
- else:
- launcher = ServiceLauncher()
- launcher.launch_service(service)
- return launcher
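The fork throttle in `_start_child` above is the subtle part of `ProcessLauncher`. The check can be isolated as a pure function; a sketch under local names (`should_throttle` is illustrative, not part of the module above):

```python
def should_throttle(forktimes, workers, now):
    # Mirrors the condition in _start_child: once more than `workers`
    # forks are on record, throttle if the oldest recorded fork happened
    # less than `workers` seconds ago, i.e. the recent average rate
    # exceeded one fork per second.
    return len(forktimes) > workers and now - forktimes[0] < workers
```

When the check fires, `_start_child` sleeps for a second and pops the oldest timestamp before appending the new one, so healthy workers restart promptly while a crash loop is slowed to roughly one fork per second.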
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Copyright 2013 IBM Corp.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import os
-import ssl
-
-from oslo.config import cfg
-
-from heat.openstack.common.gettextutils import _ # noqa
-
-
-ssl_opts = [
- cfg.StrOpt('ca_file',
- default=None,
- help="CA certificate file to use to verify "
- "connecting clients"),
- cfg.StrOpt('cert_file',
- default=None,
- help="Certificate file to use when starting "
- "the server securely"),
- cfg.StrOpt('key_file',
- default=None,
- help="Private key file to use when starting "
- "the server securely"),
-]
-
-
-CONF = cfg.CONF
-CONF.register_opts(ssl_opts, "ssl")
-
-
-def is_enabled():
- cert_file = CONF.ssl.cert_file
- key_file = CONF.ssl.key_file
- ca_file = CONF.ssl.ca_file
- use_ssl = cert_file or key_file
-
- if cert_file and not os.path.exists(cert_file):
- raise RuntimeError(_("Unable to find cert_file : %s") % cert_file)
-
- if ca_file and not os.path.exists(ca_file):
- raise RuntimeError(_("Unable to find ca_file : %s") % ca_file)
-
- if key_file and not os.path.exists(key_file):
- raise RuntimeError(_("Unable to find key_file : %s") % key_file)
-
- if use_ssl and (not cert_file or not key_file):
- raise RuntimeError(_("When running server in SSL mode, you must "
- "specify both a cert_file and key_file "
- "option value in your configuration file"))
-
- return use_ssl
-
-
-def wrap(sock):
- ssl_kwargs = {
- 'server_side': True,
- 'certfile': CONF.ssl.cert_file,
- 'keyfile': CONF.ssl.key_file,
- 'cert_reqs': ssl.CERT_NONE,
- }
-
- if CONF.ssl.ca_file:
- ssl_kwargs['ca_certs'] = CONF.ssl.ca_file
- ssl_kwargs['cert_reqs'] = ssl.CERT_REQUIRED
-
- return ssl.wrap_socket(sock, **ssl_kwargs)
-
-
-_SSL_PROTOCOLS = {
- "tlsv1": ssl.PROTOCOL_TLSv1,
- "sslv23": ssl.PROTOCOL_SSLv23,
- "sslv3": ssl.PROTOCOL_SSLv3
-}
-
-try:
- _SSL_PROTOCOLS["sslv2"] = ssl.PROTOCOL_SSLv2
-except AttributeError:
- pass
-
-
-def validate_ssl_version(version):
- key = version.lower()
- try:
- return _SSL_PROTOCOLS[key]
- except KeyError:
- raise RuntimeError(_("Invalid SSL version : %s") % version)
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Copyright 2012 Red Hat, Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import eventlet
-from eventlet import greenpool
-from eventlet import greenthread
-
-from heat.openstack.common import log as logging
-from heat.openstack.common import loopingcall
-
-
-LOG = logging.getLogger(__name__)
-
-
-def _thread_done(gt, *args, **kwargs):
-    """Callback function to be passed to GreenThread.link() when we spawn().
-
-    Calls the :class:`ThreadGroup` to notify it that the thread is done.
-    """
- kwargs['group'].thread_done(kwargs['thread'])
-
-
-class Thread(object):
-    """Wrapper around a greenthread that holds a reference to the
-    :class:`ThreadGroup`. The Thread will notify the :class:`ThreadGroup`
-    when it has finished so that it can be removed from the threads list.
-    """
- def __init__(self, thread, group):
- self.thread = thread
- self.thread.link(_thread_done, group=group, thread=self)
-
- def stop(self):
- self.thread.kill()
-
- def wait(self):
- return self.thread.wait()
-
-
-class ThreadGroup(object):
-    """The point of the ThreadGroup class is to:
-
- * keep track of timers and greenthreads (making it easier to stop them
- when need be).
- * provide an easy API to add timers.
- """
- def __init__(self, thread_pool_size=10):
- self.pool = greenpool.GreenPool(thread_pool_size)
- self.threads = []
- self.timers = []
-
- def add_dynamic_timer(self, callback, initial_delay=None,
- periodic_interval_max=None, *args, **kwargs):
- timer = loopingcall.DynamicLoopingCall(callback, *args, **kwargs)
- timer.start(initial_delay=initial_delay,
- periodic_interval_max=periodic_interval_max)
- self.timers.append(timer)
-
- def add_timer(self, interval, callback, initial_delay=None,
- *args, **kwargs):
- pulse = loopingcall.FixedIntervalLoopingCall(callback, *args, **kwargs)
- pulse.start(interval=interval,
- initial_delay=initial_delay)
- self.timers.append(pulse)
-
- def add_thread(self, callback, *args, **kwargs):
- gt = self.pool.spawn(callback, *args, **kwargs)
- th = Thread(gt, self)
- self.threads.append(th)
-
- def thread_done(self, thread):
- self.threads.remove(thread)
-
- def stop(self):
- current = greenthread.getcurrent()
- for x in self.threads:
- if x is current:
- # don't kill the current thread.
- continue
- try:
- x.stop()
- except Exception as ex:
- LOG.exception(ex)
-
- for x in self.timers:
- try:
- x.stop()
- except Exception as ex:
- LOG.exception(ex)
- self.timers = []
-
- def wait(self):
- for x in self.timers:
- try:
- x.wait()
- except eventlet.greenlet.GreenletExit:
- pass
- except Exception as ex:
- LOG.exception(ex)
- current = greenthread.getcurrent()
- for x in self.threads:
- if x is current:
- continue
- try:
- x.wait()
- except eventlet.greenlet.GreenletExit:
- pass
- except Exception as ex:
- LOG.exception(ex)
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Copyright 2011 OpenStack Foundation.
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-"""
-Time related utilities and helper functions.
-"""
-
-import calendar
-import datetime
-
-import iso8601
-import six
-
-
-# ISO 8601 extended time format with microseconds
-_ISO8601_TIME_FORMAT_SUBSECOND = '%Y-%m-%dT%H:%M:%S.%f'
-_ISO8601_TIME_FORMAT = '%Y-%m-%dT%H:%M:%S'
-PERFECT_TIME_FORMAT = _ISO8601_TIME_FORMAT_SUBSECOND
-
-
-def isotime(at=None, subsecond=False):
- """Stringify time in ISO 8601 format."""
- if not at:
- at = utcnow()
- st = at.strftime(_ISO8601_TIME_FORMAT
- if not subsecond
- else _ISO8601_TIME_FORMAT_SUBSECOND)
- tz = at.tzinfo.tzname(None) if at.tzinfo else 'UTC'
- st += ('Z' if tz == 'UTC' else tz)
- return st
-
-
-def parse_isotime(timestr):
- """Parse time from ISO 8601 format."""
- try:
- return iso8601.parse_date(timestr)
-    except (iso8601.ParseError, TypeError) as e:
-        raise ValueError(six.text_type(e))
-
-
-def strtime(at=None, fmt=PERFECT_TIME_FORMAT):
- """Returns formatted utcnow."""
- if not at:
- at = utcnow()
- return at.strftime(fmt)
-
-
-def parse_strtime(timestr, fmt=PERFECT_TIME_FORMAT):
- """Turn a formatted time back into a datetime."""
- return datetime.datetime.strptime(timestr, fmt)
-
-
-def normalize_time(timestamp):
- """Normalize time in arbitrary timezone to UTC naive object."""
- offset = timestamp.utcoffset()
- if offset is None:
- return timestamp
- return timestamp.replace(tzinfo=None) - offset
-
-
-def is_older_than(before, seconds):
-    """Return True if before is further in the past than seconds."""
- if isinstance(before, six.string_types):
- before = parse_strtime(before).replace(tzinfo=None)
- return utcnow() - before > datetime.timedelta(seconds=seconds)
-
-
-def is_newer_than(after, seconds):
-    """Return True if after is further in the future than seconds."""
- if isinstance(after, six.string_types):
- after = parse_strtime(after).replace(tzinfo=None)
- return after - utcnow() > datetime.timedelta(seconds=seconds)
-
-
-def utcnow_ts():
- """Timestamp version of our utcnow function."""
- return calendar.timegm(utcnow().timetuple())
-
-
-def utcnow():
- """Overridable version of utils.utcnow."""
- if utcnow.override_time:
- try:
- return utcnow.override_time.pop(0)
- except AttributeError:
- return utcnow.override_time
- return datetime.datetime.utcnow()
-
-
-def iso8601_from_timestamp(timestamp):
-    """Returns an ISO 8601 formatted date from a timestamp."""
- return isotime(datetime.datetime.utcfromtimestamp(timestamp))
-
-
-utcnow.override_time = None
-
-
-def set_time_override(override_time=None):
-    """Overrides utils.utcnow.
-
-    Make it return a constant time or a list thereof, one at a time.
-
-    :param override_time: datetime instance or list thereof. If not
-                          given, defaults to the current UTC time.
-    """
-    utcnow.override_time = override_time or utcnow()
-
-
-def advance_time_delta(timedelta):
-    """Advance overridden time using a datetime.timedelta."""
-    assert utcnow.override_time is not None
-    try:
-        # A list of times: build a new list, since rebinding the loop
-        # variable would leave the original entries unchanged.
-        utcnow.override_time = [dt + timedelta
-                                for dt in utcnow.override_time]
-    except TypeError:
-        # A single datetime.
-        utcnow.override_time += timedelta
-
-
-def advance_time_seconds(seconds):
- """Advance overridden time by seconds."""
- advance_time_delta(datetime.timedelta(0, seconds))
-
-
-def clear_time_override():
- """Remove the overridden time."""
- utcnow.override_time = None
-
-
-def marshall_now(now=None):
- """Make an rpc-safe datetime with microseconds.
-
- Note: tzinfo is stripped, but not required for relative times.
- """
- if not now:
- now = utcnow()
- return dict(day=now.day, month=now.month, year=now.year, hour=now.hour,
- minute=now.minute, second=now.second,
- microsecond=now.microsecond)
-
-
-def unmarshall_time(tyme):
- """Unmarshall a datetime dict."""
- return datetime.datetime(day=tyme['day'],
- month=tyme['month'],
- year=tyme['year'],
- hour=tyme['hour'],
- minute=tyme['minute'],
- second=tyme['second'],
- microsecond=tyme['microsecond'])
-
-
-def delta_seconds(before, after):
- """Return the difference between two timing objects.
-
- Compute the difference in seconds between two date, time, or
- datetime objects (as a float, to microsecond resolution).
- """
- delta = after - before
- try:
- return delta.total_seconds()
- except AttributeError:
- return ((delta.days * 24 * 3600) + delta.seconds +
- float(delta.microseconds) / (10 ** 6))
-
-
-def is_soon(dt, window):
-    """Determines whether the given time will occur within the next
-    window seconds.
-
-    :param dt: the time to check
-    :param window: number of seconds from now within which the time is
-                   considered soon
-
-    :return: True if dt is within window seconds from now
-    """
- soon = (utcnow() + datetime.timedelta(seconds=window))
- return normalize_time(dt) <= soon
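The `utcnow` override hooks above exist so unit tests can freeze and advance time. A standalone sketch of the same pattern, with the helper re-declared locally since the `heat.openstack.common` package is assumed unavailable here:

```python
import datetime


def utcnow():
    """Overridable clock, mirroring the helper above: returns the
    override when one is set, otherwise the real current UTC time."""
    if utcnow.override_time:
        return utcnow.override_time
    return datetime.datetime.utcnow()

utcnow.override_time = None

# Freeze the clock (what set_time_override does) ...
frozen = datetime.datetime(2013, 7, 1, 0, 0, 0)
utcnow.override_time = frozen
assert utcnow() == frozen

# ... advance it (what advance_time_seconds does) ...
utcnow.override_time += datetime.timedelta(seconds=30)
assert utcnow() == frozen + datetime.timedelta(seconds=30)

# ... and restore the real clock (clear_time_override).
utcnow.override_time = None
```

Code under test that calls the module-level `utcnow()` instead of `datetime.datetime.utcnow()` directly picks up the frozen time automatically.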
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Copyright (c) 2012 Intel Corporation.
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-"""
-UUID related utilities and helper functions.
-"""
-
-import uuid
-
-
-def generate_uuid():
- return str(uuid.uuid4())
-
-
-def is_uuid_like(val):
-    """Returns True if the value looks like a UUID.
-
-    For our purposes, a UUID is a string in the canonical form:
-    aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa
-
- """
- try:
- return str(uuid.UUID(val)) == val
- except (TypeError, ValueError, AttributeError):
- return False
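The round-trip through `uuid.UUID` above means only the canonical lowercase, hyphenated form passes. Local copies of the two helpers make the behavior easy to demonstrate:

```python
import uuid


def generate_uuid():
    # Local copy of the helper above.
    return str(uuid.uuid4())


def is_uuid_like(val):
    # Local copy of the helper above: round-trips through uuid.UUID,
    # so only the canonical hyphenated string form compares equal.
    try:
        return str(uuid.UUID(val)) == val
    except (TypeError, ValueError, AttributeError):
        return False

assert is_uuid_like(generate_uuid())
assert not is_uuid_like('aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa')  # no hyphens
assert not is_uuid_like(12345)                               # not a string
```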
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-ENGINE_TOPIC = 'engine'
-
-PARAM_KEYS = (
- PARAM_TIMEOUT, PARAM_DISABLE_ROLLBACK
-) = (
- 'timeout_mins', 'disable_rollback'
-)
-
-STACK_KEYS = (
- STACK_NAME, STACK_ID,
- STACK_CREATION_TIME, STACK_UPDATED_TIME, STACK_DELETION_TIME,
- STACK_NOTIFICATION_TOPICS,
- STACK_DESCRIPTION, STACK_TMPL_DESCRIPTION,
- STACK_PARAMETERS, STACK_OUTPUTS, STACK_ACTION,
- STACK_STATUS, STACK_STATUS_DATA, STACK_CAPABILITIES,
- STACK_DISABLE_ROLLBACK, STACK_TIMEOUT,
-) = (
- 'stack_name', 'stack_identity',
- 'creation_time', 'updated_time', 'deletion_time',
- 'notification_topics',
- 'description', 'template_description',
- 'parameters', 'outputs', 'stack_action',
- 'stack_status', 'stack_status_reason', 'capabilities',
- 'disable_rollback', 'timeout_mins',
-)
-
-STACK_OUTPUT_KEYS = (
- OUTPUT_DESCRIPTION,
- OUTPUT_KEY, OUTPUT_VALUE,
-) = (
- 'description',
- 'output_key', 'output_value',
-)
-
-RES_KEYS = (
- RES_DESCRIPTION, RES_UPDATED_TIME,
- RES_NAME, RES_PHYSICAL_ID, RES_METADATA,
- RES_ACTION, RES_STATUS, RES_STATUS_DATA,
- RES_TYPE, RES_ID, RES_STACK_ID, RES_STACK_NAME,
- RES_REQUIRED_BY,
-) = (
- 'description', 'updated_time',
- 'resource_name', 'physical_resource_id', 'metadata',
- 'resource_action', 'resource_status', 'resource_status_reason',
- 'resource_type', 'resource_identity', STACK_ID, STACK_NAME,
- 'required_by',
-)
-
-RES_SCHEMA_KEYS = (
- RES_SCHEMA_RES_TYPE, RES_SCHEMA_PROPERTIES, RES_SCHEMA_ATTRIBUTES,
-) = (
- RES_TYPE, 'properties', 'attributes',
-)
-
-EVENT_KEYS = (
- EVENT_ID,
- EVENT_STACK_ID, EVENT_STACK_NAME,
- EVENT_TIMESTAMP,
- EVENT_RES_NAME, EVENT_RES_PHYSICAL_ID, EVENT_RES_ACTION,
- EVENT_RES_STATUS, EVENT_RES_STATUS_DATA, EVENT_RES_TYPE,
- EVENT_RES_PROPERTIES,
-) = (
- 'event_identity',
- STACK_ID, STACK_NAME,
- "event_time",
- RES_NAME, RES_PHYSICAL_ID, RES_ACTION,
- RES_STATUS, RES_STATUS_DATA, RES_TYPE,
- 'resource_properties',
-)
-
-# This is the representation of a watch we expose to the API via RPC
-WATCH_KEYS = (
- WATCH_ACTIONS_ENABLED, WATCH_ALARM_ACTIONS, WATCH_TOPIC,
- WATCH_UPDATED_TIME, WATCH_DESCRIPTION, WATCH_NAME,
- WATCH_COMPARISON, WATCH_DIMENSIONS, WATCH_PERIODS,
- WATCH_INSUFFICIENT_ACTIONS, WATCH_METRIC_NAME, WATCH_NAMESPACE,
- WATCH_OK_ACTIONS, WATCH_PERIOD, WATCH_STATE_REASON,
- WATCH_STATE_REASON_DATA, WATCH_STATE_UPDATED_TIME, WATCH_STATE_VALUE,
- WATCH_STATISTIC, WATCH_THRESHOLD, WATCH_UNIT, WATCH_STACK_ID,
-) = (
- 'actions_enabled', 'actions', 'topic',
- 'updated_time', 'description', 'name',
- 'comparison', 'dimensions', 'periods',
- 'insufficient_actions', 'metric_name', 'namespace',
- 'ok_actions', 'period', 'state_reason',
- 'state_reason_data', 'state_updated_time', 'state_value',
- 'statistic', 'threshold', 'unit', 'stack_id',
-)
-
-# Alternate representation of a watch rule to align with DB format
-# FIXME : These align with AWS naming for compatibility with the
-# current cfn-push-stats & metadata server, fix when we've ported
-# cfn-push-stats to use the Cloudwatch server and/or moved metric
-# collection into ceilometer, these should just be WATCH_KEYS
-# or each field should be stored separately in the DB watch_data
-# table if we stick to storing watch data in the heat DB
-WATCH_RULE_KEYS = (
- RULE_ACTIONS_ENABLED, RULE_ALARM_ACTIONS, RULE_TOPIC,
- RULE_UPDATED_TIME, RULE_DESCRIPTION, RULE_NAME,
- RULE_COMPARISON, RULE_DIMENSIONS, RULE_PERIODS,
- RULE_INSUFFICIENT_ACTIONS, RULE_METRIC_NAME, RULE_NAMESPACE,
- RULE_OK_ACTIONS, RULE_PERIOD, RULE_STATE_REASON,
- RULE_STATE_REASON_DATA, RULE_STATE_UPDATED_TIME, RULE_STATE_VALUE,
- RULE_STATISTIC, RULE_THRESHOLD, RULE_UNIT, RULE_STACK_NAME,
-) = (
- 'ActionsEnabled', 'AlarmActions', 'AlarmArn',
- 'AlarmConfigurationUpdatedTimestamp', 'AlarmDescription', 'AlarmName',
- 'ComparisonOperator', 'Dimensions', 'EvaluationPeriods',
- 'InsufficientDataActions', 'MetricName', 'Namespace',
- 'OKActions', 'Period', 'StateReason',
- 'StateReasonData', 'StateUpdatedTimestamp', 'StateValue',
- 'Statistic', 'Threshold', 'Unit', 'StackName',
-)
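The parallel-tuple assignments above can be sketched in isolation. This is a self-contained illustration (with hypothetical `DEMO_*` names, not taken verbatim from Heat) of the pattern: a single statement names each constant and also collects them all into one tuple for iteration, and `zip()` over the two parallel tuples yields the internal-to-AWS translation table the FIXME comment alludes to.

```python
# One statement both names each constant and gathers them into a tuple.
DEMO_KEYS = (
    DEMO_NAME, DEMO_METRIC, DEMO_PERIOD,
) = (
    'name', 'metric_name', 'period',
)

# Matching AWS-style names (a small subset of the pairs defined above);
# zipping the parallel tuples gives the internal -> AWS translation table.
DEMO_RULE_KEYS = ('AlarmName', 'MetricName', 'Period')
to_aws = dict(zip(DEMO_KEYS, DEMO_RULE_KEYS))
```

Because tuple unpacking and the outer assignment happen in the same statement, adding a key in one tuple but not the other fails immediately at import time, which keeps the two representations in lockstep.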
-
-WATCH_STATES = (
- WATCH_STATE_OK, WATCH_STATE_ALARM, WATCH_STATE_NODATA,
- WATCH_STATE_SUSPENDED, WATCH_STATE_CEILOMETER_CONTROLLED
-) = (
- 'NORMAL', 'ALARM', 'NODATA',
- 'SUSPENDED', 'CEILOMETER_CONTROLLED'
-)
-
-WATCH_DATA_KEYS = (
- WATCH_DATA_ALARM, WATCH_DATA_METRIC, WATCH_DATA_TIME,
- WATCH_DATA_NAMESPACE, WATCH_DATA
-) = (
- 'watch_name', 'metric_name', 'timestamp',
- 'namespace', 'data'
-)
-
-VALIDATE_PARAM_KEYS = (
- PARAM_TYPE, PARAM_DEFAULT, PARAM_NO_ECHO,
- PARAM_ALLOWED_VALUES, PARAM_ALLOWED_PATTERN, PARAM_MAX_LENGTH,
- PARAM_MIN_LENGTH, PARAM_MAX_VALUE, PARAM_MIN_VALUE,
- PARAM_DESCRIPTION, PARAM_CONSTRAINT_DESCRIPTION
-) = (
- 'Type', 'Default', 'NoEcho',
- 'AllowedValues', 'AllowedPattern', 'MaxLength',
- 'MinLength', 'MaxValue', 'MinValue',
- 'Description', 'ConstraintDescription'
-)
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Copyright 2012, Red Hat, Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-"""
-Client side of the heat engine RPC API.
-"""
-
-from heat.rpc import api
-
-import heat.openstack.common.rpc.proxy
-
-
-class EngineClient(heat.openstack.common.rpc.proxy.RpcProxy):
- '''Client side of the heat engine RPC API.
-
- API version history:
-
- 1.0 - Initial version.
- '''
-
- BASE_RPC_API_VERSION = '1.0'
-
- def __init__(self):
- super(EngineClient, self).__init__(
- topic=api.ENGINE_TOPIC,
- default_version=self.BASE_RPC_API_VERSION)
-
- def identify_stack(self, ctxt, stack_name):
- """
- The identify_stack method returns the full stack identifier for a
- single, live stack given the stack name.
-
- :param ctxt: RPC context.
- :param stack_name: Name of the stack you want to look up.
- """
- return self.call(ctxt, self.make_msg('identify_stack',
- stack_name=stack_name))
-
- def list_stacks(self, ctxt):
- """
- The list_stacks method returns the attributes of all stacks.
-
- :param ctxt: RPC context.
- """
- return self.call(ctxt, self.make_msg('list_stacks'))
-
- def show_stack(self, ctxt, stack_identity):
- """
- Return detailed information about one or all stacks.
- :param ctxt: RPC context.
- :param stack_identity: Name of the stack you want to show, or None to
- show all
- """
- return self.call(ctxt, self.make_msg('show_stack',
- stack_identity=stack_identity))
-
- def create_stack(self, ctxt, stack_name, template, params, files, args):
- """
- The create_stack method creates a new stack using the template
- provided.
- Note that at this stage the template has already been fetched from the
- heat-api process if using a template-url.
-
- :param ctxt: RPC context.
- :param stack_name: Name of the stack you want to create.
- :param template: Template of stack you want to create.
- :param params: Stack Input Params/Environment
- :param files: files referenced from the environment.
- :param args: Request parameters/args passed from API
- """
- return self.call(ctxt,
- self.make_msg('create_stack', stack_name=stack_name,
- template=template,
- params=params, files=files, args=args))
-
- def update_stack(self, ctxt, stack_identity, template, params,
- files, args):
- """
- The update_stack method updates an existing stack based on the
- provided template and parameters.
- Note that at this stage the template has already been fetched from the
- heat-api process if using a template-url.
-
- :param ctxt: RPC context.
- :param stack_identity: Name of the stack you want to update.
- :param template: Template of stack you want to update.
- :param params: Stack Input Params/Environment
- :param files: files referenced from the environment.
- :param args: Request parameters/args passed from API
- """
- return self.call(ctxt, self.make_msg('update_stack',
- stack_identity=stack_identity,
- template=template,
- params=params,
- files=files,
- args=args))
-
- def validate_template(self, ctxt, template):
- """
- The validate_template method uses the stack parser to check
- the validity of a template.
-
- :param ctxt: RPC context.
- :param template: Template to validate.
- """
- return self.call(ctxt, self.make_msg('validate_template',
- template=template))
-
- def authenticated_to_backend(self, ctxt):
- """
- Verify that the credentials in the RPC context are valid for the
- current cloud backend.
-
- :param ctxt: RPC context.
- """
- return self.call(ctxt, self.make_msg('authenticated_to_backend'))
-
- def get_template(self, ctxt, stack_identity):
- """
- Get the template.
-
- :param ctxt: RPC context.
- :param stack_identity: Name of the stack you want to see.
- """
- return self.call(ctxt, self.make_msg('get_template',
- stack_identity=stack_identity))
-
- def delete_stack(self, ctxt, stack_identity, cast=True):
- """
- The delete_stack method deletes a given stack.
-
- :param ctxt: RPC context.
- :param stack_identity: Name of the stack you want to delete.
- :param cast: cast the message or use call (default: True)
- """
- rpc_method = self.cast if cast else self.call
- return rpc_method(ctxt,
- self.make_msg('delete_stack',
- stack_identity=stack_identity))
-
- def list_resource_types(self, ctxt):
- """
- Get a list of valid resource types.
-
- :param ctxt: RPC context.
- """
- return self.call(ctxt, self.make_msg('list_resource_types'))
-
- def resource_schema(self, ctxt, type_name):
- """
- Get the schema for a resource type.
-
- :param ctxt: RPC context.
- """
- return self.call(ctxt, self.make_msg('resource_schema',
- type_name=type_name))
-
- def generate_template(self, ctxt, type_name):
- """
- Generate a template based on the specified type.
-
- :param ctxt: RPC context.
- :param type_name: The resource type name to generate a template for.
- """
- return self.call(ctxt, self.make_msg('generate_template',
- type_name=type_name))
-
- def list_events(self, ctxt, stack_identity):
- """
- The list_events method lists all events associated with a given stack.
-
- :param ctxt: RPC context.
- :param stack_identity: Name of the stack you want to get events for.
- """
- return self.call(ctxt, self.make_msg('list_events',
- stack_identity=stack_identity))
-
- def describe_stack_resource(self, ctxt, stack_identity, resource_name):
- """
- Get detailed resource information about a particular resource.
- :param ctxt: RPC context.
- :param stack_identity: Name of the stack.
- :param resource_name: Name of the resource.
- """
- return self.call(ctxt, self.make_msg('describe_stack_resource',
- stack_identity=stack_identity,
- resource_name=resource_name))
-
- def find_physical_resource(self, ctxt, physical_resource_id):
- """
- Return an identifier for the resource with the specified physical
- resource ID.
- :param ctxt: RPC context.
- :param physical_resource_id: The physical resource ID to look up.
- """
- return self.call(ctxt,
- self.make_msg(
- 'find_physical_resource',
- physical_resource_id=physical_resource_id))
-
- def describe_stack_resources(self, ctxt, stack_identity, resource_name):
- """
- Get detailed resource information about one or more resources.
- :param ctxt: RPC context.
- :param stack_identity: Name of the stack.
- :param resource_name: Name of the resource.
- """
- return self.call(ctxt, self.make_msg('describe_stack_resources',
- stack_identity=stack_identity,
- resource_name=resource_name))
-
- def list_stack_resources(self, ctxt, stack_identity):
- """
- List the resources belonging to a stack.
- :param ctxt: RPC context.
- :param stack_identity: Name of the stack.
- """
- return self.call(ctxt, self.make_msg('list_stack_resources',
- stack_identity=stack_identity))
-
- def stack_suspend(self, ctxt, stack_identity):
- return self.call(ctxt, self.make_msg('stack_suspend',
- stack_identity=stack_identity))
-
- def stack_resume(self, ctxt, stack_identity):
- return self.call(ctxt, self.make_msg('stack_resume',
- stack_identity=stack_identity))
-
- def metadata_update(self, ctxt, stack_identity, resource_name, metadata):
- """
- Update the metadata for the given resource.
- """
- return self.call(ctxt, self.make_msg('metadata_update',
- stack_identity=stack_identity,
- resource_name=resource_name,
- metadata=metadata))
-
- def resource_signal(self, ctxt, stack_identity, resource_name, details):
- """
- Generate an alarm on the resource.
- :param ctxt: RPC context.
- :param stack_identity: Name of the stack.
- :param resource_name: Name of the resource.
- :param details: the details of the signal.
- """
- return self.call(ctxt, self.make_msg('resource_signal',
- stack_identity=stack_identity,
- resource_name=resource_name,
- details=details))
-
- def create_watch_data(self, ctxt, watch_name, stats_data):
- '''
- This could be used by CloudWatch and WaitConditions, and treats HA
- service events like any other CloudWatch datapoint.
- :param ctxt: RPC context.
- :param watch_name: Name of the watch/alarm
- :param stats_data: The data to post.
- '''
- return self.call(ctxt, self.make_msg('create_watch_data',
- watch_name=watch_name,
- stats_data=stats_data))
-
- def show_watch(self, ctxt, watch_name):
- """
- The show_watch method returns the attributes of one watch
- or all watches if no watch_name is passed
-
- :param ctxt: RPC context.
- :param watch_name: Name of the watch/alarm you want to see,
- or None to see all
- """
- return self.call(ctxt, self.make_msg('show_watch',
- watch_name=watch_name))
-
- def show_watch_metric(self, ctxt, metric_namespace=None, metric_name=None):
- """
- The show_watch_metric method returns the datapoints associated
- with a specified metric, or all metrics if no metric_name is passed
-
- :param ctxt: RPC context.
- :param metric_namespace: Name of the namespace you want to see,
- or None to see all
- :param metric_name: Name of the metric you want to see,
- or None to see all
- """
- return self.call(ctxt, self.make_msg('show_watch_metric',
- metric_namespace=metric_namespace,
- metric_name=metric_name))
-
- def set_watch_state(self, ctxt, watch_name, state):
- '''
- Temporarily set the state of a given watch
- :param ctxt: RPC context.
- :param watch_name: Name of the watch
- :param state: State (must be one defined in WatchRule class)
- '''
- return self.call(ctxt, self.make_msg('set_watch_state',
- watch_name=watch_name,
- state=state))
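The `rpc_method = self.cast if cast else self.call` line in delete_stack above is a small dispatch pattern worth isolating. A minimal sketch (hypothetical `DemoClient` class and message strings, not the real RPC proxy): `call()` blocks for the engine's reply while `cast()` is fire-and-forget.

```python
class DemoClient(object):
    def call(self, msg):
        # Synchronous: wait for and return the engine's reply.
        return 'reply:%s' % msg

    def cast(self, msg):
        # Asynchronous: send and return immediately with no result.
        return None

    def delete_stack(self, stack_identity, cast=True):
        # Pick the transport once, then invoke it uniformly.
        rpc_method = self.cast if cast else self.call
        return rpc_method('delete_stack:%s' % stack_identity)
```

Selecting the bound method up front keeps the message-construction code identical for both transports, so only the delivery semantics differ.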
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-# See http://code.google.com/p/python-nose/issues/detail?id=373
-# The code below enables nosetests to work with i18n _() blocks
-import __builtin__
-setattr(__builtin__, '_', lambda x: x)
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-
-import fixtures
-import logging
-import mox
-import os
-import sys
-import testtools
-
-from oslo.config import cfg
-
-import heat.engine.scheduler as scheduler
-from heat.engine import environment
-from heat.engine import resources
-
-
-class HeatTestCase(testtools.TestCase):
-
- def setUp(self):
- super(HeatTestCase, self).setUp()
- self.m = mox.Mox()
- self.addCleanup(self.m.UnsetStubs)
- self.useFixture(fixtures.FakeLogger(level=logging.DEBUG))
- scheduler.ENABLE_SLEEP = False
-
- def enable_sleep():
- scheduler.ENABLE_SLEEP = True
-
- self.addCleanup(enable_sleep)
-
- mod_dir = os.path.dirname(sys.modules[__name__].__file__)
- project_dir = os.path.abspath(os.path.join(mod_dir, '../../'))
- env_dir = os.path.join(project_dir, 'etc', 'heat',
- 'environment.d')
-
- cfg.CONF.set_default('environment_dir', env_dir)
- self.addCleanup(cfg.CONF.reset)
-
- tri = resources.global_env().get_resource_info(
- 'AWS::RDS::DBInstance',
- registry_type=environment.TemplateResourceInfo)
- if tri is not None:
- cur_path = tri.template_name
- templ_path = os.path.join(project_dir, 'etc', 'heat', 'templates')
- if templ_path not in cur_path:
- tri.template_name = cur_path.replace('/etc/heat/templates',
- templ_path)
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-
-"""
-A fake server that "responds" to API methods with pre-canned responses.
-
-All of these responses come from the spec, so if for some reason the spec's
-wrong the tests might raise AssertionError. I've indicated in comments the
-places where actual behavior differs from the spec.
-"""
-
-
-def assert_has_keys(a_dict, required=(), optional=()):
- """Raise an assertion if a_dict has the wrong keys.
-
- :param a_dict: A dict to look for keys in.
- :param required: An iterable of keys that must be present.
- :param optional: An iterable of keys that may be present.
-
- If any key from required is missing, an AssertionError will be raised.
- If any key other than those from required + optional is present, an
- AssertionError will be raised.
- """
- keys = set(a_dict.keys())
- required = set(required)
- optional = set(optional)
- missing = required - keys
- extra = keys - (required | optional)
- if missing or extra:
- raise AssertionError(
- "Missing keys %r, with extra keys %r in %r" %
- (missing, extra, keys))
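The required/optional key check above can be demonstrated with a short usage sketch. The helper is reproduced here (condensed from the definition above) so the example is self-contained; the dict contents are illustrative only.

```python
def assert_has_keys(a_dict, required=(), optional=()):
    # Required keys must all be present; any key outside
    # required + optional is rejected.
    keys = set(a_dict.keys())
    missing = set(required) - keys
    extra = keys - (set(required) | set(optional))
    if missing or extra:
        raise AssertionError(
            "Missing keys %r, with extra keys %r in %r" %
            (missing, extra, keys))

# Required key present, optional key allowed: passes silently.
assert_has_keys({'name': 'net1', 'id': '1234'},
                required=('name',), optional=('id',))

# An unexpected key raises AssertionError.
try:
    assert_has_keys({'name': 'net1', 'bogus': True}, required=('name',))
    raised = False
except AssertionError:
    raised = True
```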
-
-
-class FakeClient(object):
-
- def assert_called(self, method, url, body=None, pos=-1):
- """
- Assert that an API method was just called.
- """
- expected = (method, url)
- called = self.client.callstack[pos][0:2]
-
- assert self.client.callstack, \
- "Expected %s %s but no calls were made." % expected
-
- assert expected == called, 'Expected %s %s; got %s %s' % \
- (expected + called)
-
- if body is not None:
- assert self.client.callstack[pos][2] == body
-
- def assert_called_anytime(self, method, url, body=None):
- """
- Assert that an API method was called at any time during the test.
- """
- expected = (method, url)
-
- assert self.client.callstack, \
- "Expected %s %s but no calls were made." % expected
-
- found = False
- for entry in self.client.callstack:
- if expected == entry[0:2]:
- found = True
- break
-
- assert found, 'Expected %s %s; got %s' % \
- (expected, self.client.callstack)
- if body is not None:
- try:
- assert entry[2] == body
- except AssertionError:
- print(entry[2])
- print("!=")
- print(body)
- raise
-
- self.client.callstack = []
-
- def clear_callstack(self):
- self.client.callstack = []
-
- def authenticate(self):
- pass
-
-
-class FakeKeystoneClient(object):
- def __init__(self, username='test_user', user_id='1234', access='4567',
- secret='8901'):
- self.username = username
- self.user_id = user_id
- self.access = access
- self.secret = secret
- self.creds = None
- self.auth_token = 'abcd1234'
-
- def create_stack_user(self, username, password=''):
- self.username = username
- return self.user_id
-
- def delete_stack_user(self, user_id):
- self.user_id = None
-
- def get_ec2_keypair(self, user_id):
- if user_id == self.user_id:
- if not self.creds:
- class FakeCred(object):
- access = self.access
- secret = self.secret
- self.creds = FakeCred()
- return self.creds
-
- def delete_ec2_keypair(self, user_id, access):
- if user_id == self.user_id and access == self.creds.access:
- self.creds = None
- else:
- raise Exception('Incorrect user_id or access')
-
- def enable_stack_user(self, user_id):
- pass
-
- def disable_stack_user(self, user_id):
- pass
-
- def url_for(self, **kwargs):
- return 'http://example.com:1234/v1'
-
- def create_trust_context(self):
- pass
-
- def delete_trust_context(self):
- pass
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from heat.engine import resource
-from heat.engine import signal_responder
-
-from heat.openstack.common import log as logging
-
-logger = logging.getLogger(__name__)
-
-
-class GenericResource(resource.Resource):
- '''
- Dummy resource for use in tests
- '''
- properties_schema = {}
- attributes_schema = {'foo': 'A generic attribute',
- 'Foo': 'Another generic attribute'}
-
- def handle_create(self):
- logger.warning('Creating generic resource (Type "%s")' % self.type())
-
- def handle_update(self, json_snippet, tmpl_diff, prop_diff):
- logger.warning('Updating generic resource (Type "%s")' % self.type())
-
- def handle_delete(self):
- logger.warning('Deleting generic resource (Type "%s")' % self.type())
-
- def _resolve_attribute(self, name):
- return self.name
-
- def handle_suspend(self):
- logger.warning('Suspending generic resource (Type "%s")' % self.type())
-
- def handle_resume(self):
- logger.warning('Resuming generic resource (Type "%s")' % self.type())
-
-
-class ResourceWithProps(GenericResource):
- properties_schema = {'Foo': {'Type': 'String'}}
-
-
-class ResourceWithRequiredProps(GenericResource):
- properties_schema = {'Foo': {'Type': 'String',
- 'Required': True}}
-
-
-class SignalResource(signal_responder.SignalResponder):
- properties_schema = {}
- attributes_schema = {'AlarmUrl': 'Get a signed webhook'}
-
- def handle_signal(self, details=None):
- logger.warning('Signaled resource (Type "%s") %s' % (self.type(),
- details))
-
- def _resolve_attribute(self, name):
- if name == 'AlarmUrl' and self.resource_id is not None:
- return unicode(self._get_signed_url())
+++ /dev/null
-{
- "deny_stack_user": "not role:heat_stack_user",
- "cloudformation:ListStacks": "rule:deny_stack_user",
- "cloudformation:CreateStack": "rule:deny_stack_user",
- "cloudformation:DescribeStacks": "rule:deny_stack_user",
- "cloudformation:DeleteStack": "rule:deny_stack_user",
- "cloudformation:UpdateStack": "rule:deny_stack_user",
- "cloudformation:DescribeStackEvents": "rule:deny_stack_user",
- "cloudformation:ValidateTemplate": "rule:deny_stack_user",
- "cloudformation:GetTemplate": "rule:deny_stack_user",
- "cloudformation:EstimateTemplateCost": "rule:deny_stack_user",
- "cloudformation:DescribeStackResource": "",
- "cloudformation:DescribeStackResources": "rule:deny_stack_user",
- "cloudformation:ListStackResources": "rule:deny_stack_user",
-
- "cloudwatch:DeleteAlarms": "rule:deny_stack_user",
- "cloudwatch:DescribeAlarmHistory": "rule:deny_stack_user",
- "cloudwatch:DescribeAlarms": "rule:deny_stack_user",
- "cloudwatch:DescribeAlarmsForMetric": "rule:deny_stack_user",
- "cloudwatch:DisableAlarmActions": "rule:deny_stack_user",
- "cloudwatch:EnableAlarmActions": "rule:deny_stack_user",
- "cloudwatch:GetMetricStatistics": "rule:deny_stack_user",
- "cloudwatch:ListMetrics": "rule:deny_stack_user",
- "cloudwatch:PutMetricAlarm": "rule:deny_stack_user",
- "cloudwatch:PutMetricData": "",
- "cloudwatch:SetAlarmState": "rule:deny_stack_user"
-}
+++ /dev/null
-{
- "cloudformation:ListStacks": "!",
- "cloudformation:CreateStack": "!",
- "cloudformation:DescribeStacks": "!",
- "cloudformation:DeleteStack": "!",
- "cloudformation:UpdateStack": "!",
- "cloudformation:DescribeStackEvents": "!",
- "cloudformation:ValidateTemplate": "!",
- "cloudformation:GetTemplate": "!",
- "cloudformation:EstimateTemplateCost": "!",
- "cloudformation:DescribeStackResource": "!",
- "cloudformation:DescribeStackResources": "!",
- "cloudformation:ListStackResources": "!"
-}
+++ /dev/null
-{
- "AWSTemplateFormatVersion" : "2010-09-09",
-
- "Description" : "Template to test Neutron resources",
-
- "Parameters" : {
-
- },
-
- "Resources" : {
- "network": {
- "Type": "OS::Neutron::Net",
- "Properties": {
- "name": "the_network"
- }
- },
- "unnamed_network": {
- "Type": "OS::Neutron::Net"
- },
- "admin_down_network": {
- "Type": "OS::Neutron::Net",
- "Properties": {
- "admin_state_up": false
- }
- },
-
- "subnet": {
- "Type": "OS::Neutron::Subnet",
- "Properties": {
- "network_id": { "Ref" : "network" },
- "ip_version": 4,
- "cidr": "10.0.3.0/24",
- "allocation_pools": [{"start": "10.0.3.20", "end": "10.0.3.150"}]
- }
- },
-
- "port": {
- "Type": "OS::Neutron::Port",
- "Properties": {
- "device_id": "d6b4d3a5-c700-476f-b609-1493dd9dadc0",
- "name": "port1",
- "network_id": { "Ref" : "network" },
- "fixed_ips": [{
- "subnet_id": { "Ref" : "subnet" },
- "ip_address": "10.0.3.21"
- }]
- }
- },
-
- "router": {
- "Type": "OS::Neutron::Router"
- },
-
- "router_interface": {
- "Type": "OS::Neutron::RouterInterface",
- "Properties": {
- "router_id": { "Ref" : "router" },
- "subnet_id": { "Ref" : "subnet" }
- }
- }
- },
- "Outputs" : {
- "the_network_status" : {
- "Value" : { "Fn::GetAtt" : [ "network", "status" ]},
- "Description" : "Status of network"
- },
- "port_device_owner" : {
- "Value" : { "Fn::GetAtt" : [ "port", "device_owner" ]},
- "Description" : "Device owner of the port"
- },
- "port_fixed_ips" : {
- "Value" : { "Fn::GetAtt" : [ "port", "fixed_ips" ]},
- "Description" : "Fixed IPs of the port"
- },
- "port_mac_address" : {
- "Value" : { "Fn::GetAtt" : [ "port", "mac_address" ]},
- "Description" : "MAC address of the port"
- },
- "port_status" : {
- "Value" : { "Fn::GetAtt" : [ "port", "status" ]},
- "Description" : "Status of the port"
- },
- "port_show" : {
- "Value" : { "Fn::GetAtt" : [ "port", "show" ]},
- "Description" : "All attributes for port"
- },
- "subnet_show" : {
- "Value" : { "Fn::GetAtt" : [ "subnet", "show" ]},
- "Description" : "All attributes for subnet"
- },
- "network_show" : {
- "Value" : { "Fn::GetAtt" : [ "network", "show" ]},
- "Description" : "All attributes for network"
- },
- "router_show" : {
- "Value" : { "Fn::GetAtt" : [ "router", "show" ]},
- "Description" : "All attributes for router"
- }
- }
-}
+++ /dev/null
-HeatTemplateFormatVersion: '2012-12-12'
-Description: Template to test Neutron resources
-Resources:
- network:
- Type: OS::Neutron::Net
- Properties: {name: the_network}
- unnamed_network:
- Type: 'OS::Neutron::Net'
- admin_down_network:
- Type: OS::Neutron::Net
- Properties: {admin_state_up: false}
- subnet:
- Type: OS::Neutron::Subnet
- Properties:
- network_id: {Ref: network}
- ip_version: 4
- cidr: 10.0.3.0/24
- allocation_pools:
- - {end: 10.0.3.150, start: 10.0.3.20}
- port:
- Type: OS::Neutron::Port
- Properties:
- device_id: d6b4d3a5-c700-476f-b609-1493dd9dadc0
- name: port1
- network_id: {Ref: network}
- fixed_ips:
- - subnet_id: {Ref: subnet}
- ip_address: 10.0.3.21
- router:
- Type: 'OS::Neutron::Router'
- router_interface:
- Type: OS::Neutron::RouterInterface
- Properties:
- router_id: {Ref: router}
- subnet_id: {Ref: subnet}
-Outputs:
- the_network_status:
- Value:
- Fn::GetAtt: [network, status]
- Description: Status of network
- port_device_owner:
- Value:
- Fn::GetAtt: [port, device_owner]
- Description: Device owner of the port
- port_fixed_ips:
- Value:
- Fn::GetAtt: [port, fixed_ips]
- Description: Fixed IPs of the port
- port_mac_address:
- Value:
- Fn::GetAtt: [port, mac_address]
- Description: MAC address of the port
- port_status:
- Value:
- Fn::GetAtt: [port, status]
- Description: Status of the port
- port_show:
- Value:
- Fn::GetAtt: [port, show]
- Description: All attributes for port
- subnet_show:
- Value:
- Fn::GetAtt: [subnet, show]
- Description: All attributes for subnet
- network_show:
- Value:
- Fn::GetAtt: [network, show]
- Description: All attributes for network
- router_show:
- Value:
- Fn::GetAtt: [router, show]
- Description: All attributes for router
+++ /dev/null
-These templates are required by test_template_format and test_provider_template
-in situations where we don't want to use a minimal template snippet. Ideally we
-want to test the maximum possible syntax to prove the format conversion works.
-
-In general, tests should not depend on these templates; inline minimal
-template snippets are preferred.
+++ /dev/null
-{
- "AWSTemplateFormatVersion" : "2010-09-09",
-
- "Description" : "AWS CloudFormation Sample Template WordPress_Single_Instance: WordPress is web software you can use to create a beautiful website or blog. This template installs a single-instance WordPress deployment using a local MySQL database to store the data.",
-
- "Parameters" : {
-
- "KeyName" : {
- "Description" : "Name of an existing EC2 KeyPair to enable SSH access to the instances",
- "Type" : "String"
- },
-
- "InstanceType" : {
- "Description" : "WebServer EC2 instance type",
- "Type" : "String",
- "Default" : "m1.large",
- "AllowedValues" : [ "t1.micro", "m1.small", "m1.large", "m1.xlarge", "m2.xlarge", "m2.2xlarge", "m2.4xlarge", "c1.medium", "c1.xlarge", "cc1.4xlarge" ],
- "ConstraintDescription" : "must be a valid EC2 instance type."
- },
-
- "DBName": {
- "Default": "wordpress",
- "Description" : "The WordPress database name",
- "Type": "String",
- "MinLength": "1",
- "MaxLength": "64",
- "AllowedPattern" : "[a-zA-Z][a-zA-Z0-9]*",
- "ConstraintDescription" : "must begin with a letter and contain only alphanumeric characters."
- },
-
- "DBUsername": {
- "Default": "admin",
- "NoEcho": "true",
- "Description" : "The WordPress database admin account username",
- "Type": "String",
- "MinLength": "1",
- "MaxLength": "16",
- "AllowedPattern" : "[a-zA-Z][a-zA-Z0-9]*",
- "ConstraintDescription" : "must begin with a letter and contain only alphanumeric characters."
- },
-
- "DBPassword": {
- "Default": "admin",
- "NoEcho": "true",
- "Description" : "The WordPress database admin account password",
- "Type": "String",
- "MinLength": "1",
- "MaxLength": "41",
- "AllowedPattern" : "[a-zA-Z0-9]*",
- "ConstraintDescription" : "must contain only alphanumeric characters."
- },
-
- "DBRootPassword": {
- "Default": "admin",
- "NoEcho": "true",
- "Description" : "Root password for MySQL",
- "Type": "String",
- "MinLength": "1",
- "MaxLength": "41",
- "AllowedPattern" : "[a-zA-Z0-9]*",
- "ConstraintDescription" : "must contain only alphanumeric characters."
- },
- "LinuxDistribution": {
- "Default": "F17",
- "Description" : "Distribution of choice",
- "Type": "String",
- "AllowedValues" : [ "F18", "F17", "U10", "RHEL-6.1", "RHEL-6.2", "RHEL-6.3" ]
- }
- },
-
- "Mappings" : {
- "AWSInstanceType2Arch" : {
- "t1.micro" : { "Arch" : "32" },
- "m1.small" : { "Arch" : "32" },
- "m1.large" : { "Arch" : "64" },
- "m1.xlarge" : { "Arch" : "64" },
- "m2.xlarge" : { "Arch" : "64" },
- "m2.2xlarge" : { "Arch" : "64" },
- "m2.4xlarge" : { "Arch" : "64" },
- "c1.medium" : { "Arch" : "32" },
- "c1.xlarge" : { "Arch" : "64" },
- "cc1.4xlarge" : { "Arch" : "64" }
- },
- "DistroArch2AMI": {
- "F18" : { "32" : "F18-i386-cfntools", "64" : "F18-x86_64-cfntools" },
- "F17" : { "32" : "F17-i386-cfntools", "64" : "F17-x86_64-cfntools" },
- "U10" : { "32" : "U10-i386-cfntools", "64" : "U10-x86_64-cfntools" },
- "RHEL-6.1" : { "32" : "rhel61-i386-cfntools", "64" : "rhel61-x86_64-cfntools" },
- "RHEL-6.2" : { "32" : "rhel62-i386-cfntools", "64" : "rhel62-x86_64-cfntools" },
- "RHEL-6.3" : { "32" : "rhel63-i386-cfntools", "64" : "rhel63-x86_64-cfntools" }
- }
- },
-
- "Resources" : {
- "WikiDatabase": {
- "Type": "AWS::EC2::Instance",
- "Metadata" : {
- "AWS::CloudFormation::Init" : {
- "config" : {
- "packages" : {
- "yum" : {
- "mysql" : [],
- "mysql-server" : [],
- "httpd" : [],
- "wordpress" : []
- }
- },
- "services" : {
- "systemd" : {
- "mysqld" : { "enabled" : "true", "ensureRunning" : "true" },
- "httpd" : { "enabled" : "true", "ensureRunning" : "true" }
- }
- }
- }
- }
- },
- "Properties": {
- "ImageId" : { "Fn::FindInMap" : [ "DistroArch2AMI", { "Ref" : "LinuxDistribution" },
- { "Fn::FindInMap" : [ "AWSInstanceType2Arch", { "Ref" : "InstanceType" }, "Arch" ] } ] },
- "InstanceType" : { "Ref" : "InstanceType" },
- "KeyName" : { "Ref" : "KeyName" },
- "UserData" : { "Fn::Base64" : { "Fn::Join" : ["", [
- "#!/bin/bash -v\n",
- "/opt/aws/bin/cfn-init\n",
- "# Setup MySQL root password and create a user\n",
- "mysqladmin -u root password '", { "Ref" : "DBRootPassword" }, "'\n",
- "cat << EOF | mysql -u root --password='", { "Ref" : "DBRootPassword" }, "'\n",
- "CREATE DATABASE ", { "Ref" : "DBName" }, ";\n",
- "GRANT ALL PRIVILEGES ON ", { "Ref" : "DBName" }, ".* TO \"", { "Ref" : "DBUsername" }, "\"@\"localhost\"\n",
- "IDENTIFIED BY \"", { "Ref" : "DBPassword" }, "\";\n",
- "FLUSH PRIVILEGES;\n",
- "EXIT\n",
- "EOF\n",
- "sed -i \"/Deny from All/d\" /etc/httpd/conf.d/wordpress.conf\n",
- "sed -i \"s/Require local/Require all granted/\" /etc/httpd/conf.d/wordpress.conf\n",
- "sed --in-place --e s/database_name_here/", { "Ref" : "DBName" }, "/ --e s/username_here/", { "Ref" : "DBUsername" }, "/ --e s/password_here/", { "Ref" : "DBPassword" }, "/ /usr/share/wordpress/wp-config.php\n",
- "systemctl restart httpd.service\n"
- ]]}}
- }
- }
- },
-
- "Outputs" : {
- "WebsiteURL" : {
- "Value" : { "Fn::Join" : ["", ["http://", { "Fn::GetAtt" : [ "WikiDatabase", "PublicIp" ]}, "/wordpress"]] },
- "Description" : "URL for Wordpress wiki"
- }
- }
-}
+++ /dev/null
-HeatTemplateFormatVersion: '2012-12-12'
-Description: 'AWS CloudFormation Sample Template WordPress_Single_Instance:
- WordPress is web software you can use to create a beautiful website or blog. This
- template installs a single-instance WordPress deployment using a local MySQL database
- to store the data.'
-Parameters:
- KeyName: {Description: Name of an existing EC2 KeyPair
- to enable SSH access to the instances, Type: String}
- InstanceType:
- Description: WebServer EC2 instance type
- Type: String
- Default: m1.large
- AllowedValues: [t1.micro, m1.small, m1.large, m1.xlarge, m2.xlarge,
- m2.2xlarge, m2.4xlarge, c1.medium, c1.xlarge, cc1.4xlarge]
- ConstraintDescription: must be a valid EC2 instance type.
- DBName: {Default: wordpress, Description: The
- WordPress database name, Type: String, MinLength: '1',
- MaxLength: '64', AllowedPattern: '[a-zA-Z][a-zA-Z0-9]*',
- ConstraintDescription: must begin with a letter and contain only
- alphanumeric characters.}
- DBUsername: {Default: admin, NoEcho: 'true',
- Description: The WordPress database admin account username, Type: String,
- MinLength: '1', MaxLength: '16', AllowedPattern: '[a-zA-Z][a-zA-Z0-9]*',
- ConstraintDescription: must begin with a letter and contain only
- alphanumeric characters.}
- DBPassword: {Default: admin, NoEcho: 'true',
- Description: The WordPress database admin account password, Type: String,
- MinLength: '1', MaxLength: '41', AllowedPattern: '[a-zA-Z0-9]*',
- ConstraintDescription: must contain only alphanumeric characters.}
- DBRootPassword: {Default: admin, NoEcho: 'true',
- Description: Root password for MySQL, Type: String,
- MinLength: '1', MaxLength: '41', AllowedPattern: '[a-zA-Z0-9]*',
- ConstraintDescription: must contain only alphanumeric characters.}
- LinuxDistribution:
- Default: F17
- Description: Distribution of choice
- Type: String
- AllowedValues: [F18, F17, U10, RHEL-6.1, RHEL-6.2, RHEL-6.3]
-Mappings:
- AWSInstanceType2Arch:
- t1.micro: {Arch: '32'}
- m1.small: {Arch: '32'}
- m1.large: {Arch: '64'}
- m1.xlarge: {Arch: '64'}
- m2.xlarge: {Arch: '64'}
- m2.2xlarge: {Arch: '64'}
- m2.4xlarge: {Arch: '64'}
- c1.medium: {Arch: '32'}
- c1.xlarge: {Arch: '64'}
- cc1.4xlarge: {Arch: '64'}
- DistroArch2AMI:
- F18: {'32': F18-i386-cfntools, '64': F18-x86_64-cfntools}
- F17: {'32': F17-i386-cfntools, '64': F17-x86_64-cfntools}
- U10: {'32': U10-i386-cfntools, '64': U10-x86_64-cfntools}
- RHEL-6.1: {'32': rhel61-i386-cfntools, '64': rhel61-x86_64-cfntools}
- RHEL-6.2: {'32': rhel62-i386-cfntools, '64': rhel62-x86_64-cfntools}
- RHEL-6.3: {'32': rhel63-i386-cfntools, '64': rhel63-x86_64-cfntools}
-Resources:
- WikiDatabase:
- Type: AWS::EC2::Instance
- Metadata:
- AWS::CloudFormation::Init:
- config:
- packages:
- yum:
- mysql: []
- mysql-server: []
- httpd: []
- wordpress: []
- services:
- systemd:
- mysqld: {enabled: 'true', ensureRunning: 'true'}
- httpd: {enabled: 'true', ensureRunning: 'true'}
- Properties:
- ImageId:
- Fn::FindInMap:
- - DistroArch2AMI
- - {Ref: LinuxDistribution}
- - Fn::FindInMap:
- - AWSInstanceType2Arch
- - {Ref: InstanceType}
- - Arch
- InstanceType: {Ref: InstanceType}
- KeyName: {Ref: KeyName}
- UserData:
- Fn::Base64:
- Fn::Join:
- - ''
- - - '#!/bin/bash -v
-
- '
- - '/opt/aws/bin/cfn-init
-
- '
- - '# Setup MySQL root password and create a user
-
- '
- - mysqladmin -u root password '
- - {Ref: DBRootPassword}
- - '''
-
- '
- - cat << EOF | mysql -u root --password='
- - {Ref: DBRootPassword}
- - '''
-
- '
- - 'CREATE DATABASE '
- - {Ref: DBName}
- - ';
-
- '
- - 'GRANT ALL PRIVILEGES ON '
- - {Ref: DBName}
- - .* TO "
- - {Ref: DBUsername}
- - '"@"localhost"
-
- '
- - IDENTIFIED BY "
- - {Ref: DBPassword}
- - '";
-
- '
- - 'FLUSH PRIVILEGES;
-
- '
- - 'EXIT
-
- '
- - 'EOF
-
- '
- - 'sed -i "/Deny from All/d" /etc/httpd/conf.d/wordpress.conf
-
- '
- - 'sed -i "s/Require local/Require all granted/" /etc/httpd/conf.d/wordpress.conf
-
- '
- - sed --in-place --e s/database_name_here/
- - {Ref: DBName}
- - / --e s/username_here/
- - {Ref: DBUsername}
- - / --e s/password_here/
- - {Ref: DBPassword}
- - '/ /usr/share/wordpress/wp-config.php
-
- '
- - 'systemctl restart httpd.service
-
- '
-Outputs:
- WebsiteURL:
- Value:
- Fn::Join:
- - ''
- - - http://
- - Fn::GetAtt: [WikiDatabase, PublicIp]
- - /wordpress
- Description: URL for Wordpress wiki
-
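The YAML template above selects its ImageId with a two-level `Fn::FindInMap`: the instance type is first mapped to an architecture via `AWSInstanceType2Arch`, and the (distribution, architecture) pair is then mapped to an image name via `DistroArch2AMI`. A hedged sketch of that lookup, using a subset of the mappings above:

```python
# Subset of the template's Mappings section, as plain dicts.
AWS_INSTANCE_TYPE_2_ARCH = {"m1.small": {"Arch": "32"}, "m1.large": {"Arch": "64"}}
DISTRO_ARCH_2_AMI = {"F17": {"32": "F17-i386-cfntools", "64": "F17-x86_64-cfntools"}}

def find_in_map(mapping, top_key, second_key):
    """Equivalent of Fn::FindInMap: [MapName, TopLevelKey, SecondLevelKey]."""
    return mapping[top_key][second_key]

# Nested lookup, as in the ImageId property above:
arch = find_in_map(AWS_INSTANCE_TYPE_2_ARCH, "m1.large", "Arch")  # "64"
image = find_in_map(DISTRO_ARCH_2_AMI, "F17", arch)  # "F17-x86_64-cfntools"
```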
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-
-from heat.tests.common import HeatTestCase
-from heat.api.aws import utils as api_utils
-
-
-class AWSCommonTest(HeatTestCase):
- '''
- Tests the api/aws common components
- '''
- # The tests
- def test_format_response(self):
- response = api_utils.format_response("Foo", "Bar")
- expected = {'FooResponse': {'FooResult': 'Bar'}}
- self.assertEqual(response, expected)
-
- def test_params_extract(self):
- p = {'Parameters.member.1.ParameterKey': 'foo',
- 'Parameters.member.1.ParameterValue': 'bar',
- 'Parameters.member.2.ParameterKey': 'blarg',
- 'Parameters.member.2.ParameterValue': 'wibble'}
- params = api_utils.extract_param_pairs(p, prefix='Parameters',
- keyname='ParameterKey',
- valuename='ParameterValue')
- self.assertEqual(len(params), 2)
- self.assertTrue('foo' in params)
- self.assertEqual(params['foo'], 'bar')
- self.assertTrue('blarg' in params)
- self.assertEqual(params['blarg'], 'wibble')
-
- def test_params_extract_dots(self):
- p = {'Parameters.member.1.1.ParameterKey': 'foo',
- 'Parameters.member.1.1.ParameterValue': 'bar',
- 'Parameters.member.2.1.ParameterKey': 'blarg',
- 'Parameters.member.2.1.ParameterValue': 'wibble'}
- params = api_utils.extract_param_pairs(p, prefix='Parameters',
- keyname='ParameterKey',
- valuename='ParameterValue')
- self.assertFalse(params)
-
- def test_params_extract_garbage(self):
- p = {'Parameters.member.1.ParameterKey': 'foo',
- 'Parameters.member.1.ParameterValue': 'bar',
- 'Foo.1.ParameterKey': 'blarg',
- 'Foo.1.ParameterValue': 'wibble'}
- params = api_utils.extract_param_pairs(p, prefix='Parameters',
- keyname='ParameterKey',
- valuename='ParameterValue')
- self.assertEqual(len(params), 1)
- self.assertTrue('foo' in params)
- self.assertEqual(params['foo'], 'bar')
-
- def test_params_extract_garbage_prefix(self):
- p = {'prefixParameters.member.Foo.Bar.ParameterKey': 'foo',
- 'Parameters.member.Foo.Bar.ParameterValue': 'bar'}
- params = api_utils.extract_param_pairs(p, prefix='Parameters',
- keyname='ParameterKey',
- valuename='ParameterValue')
- self.assertFalse(params)
-
- def test_params_extract_garbage_suffix(self):
- p = {'Parameters.member.1.ParameterKeysuffix': 'foo',
- 'Parameters.member.1.ParameterValue': 'bar'}
- params = api_utils.extract_param_pairs(p, prefix='Parameters',
- keyname='ParameterKey',
- valuename='ParameterValue')
- self.assertFalse(params)
-
- def test_extract_param_list(self):
- p = {'MetricData.member.1.MetricName': 'foo',
- 'MetricData.member.1.Unit': 'Bytes',
- 'MetricData.member.1.Value': 234333}
- params = api_utils.extract_param_list(p, prefix='MetricData')
- self.assertEqual(len(params), 1)
- self.assertTrue('MetricName' in params[0])
- self.assertTrue('Unit' in params[0])
- self.assertTrue('Value' in params[0])
- self.assertEqual(params[0]['MetricName'], 'foo')
- self.assertEqual(params[0]['Unit'], 'Bytes')
- self.assertEqual(params[0]['Value'], 234333)
-
- def test_extract_param_list_garbage_prefix(self):
- p = {'AMetricData.member.1.MetricName': 'foo',
- 'MetricData.member.1.Unit': 'Bytes',
- 'MetricData.member.1.Value': 234333}
- params = api_utils.extract_param_list(p, prefix='MetricData')
- self.assertEqual(len(params), 1)
- self.assertTrue('MetricName' not in params[0])
- self.assertTrue('Unit' in params[0])
- self.assertTrue('Value' in params[0])
- self.assertEqual(params[0]['Unit'], 'Bytes')
- self.assertEqual(params[0]['Value'], 234333)
-
- def test_extract_param_list_garbage_prefix2(self):
- p = {'AMetricData.member.1.MetricName': 'foo',
- 'BMetricData.member.1.Unit': 'Bytes',
- 'CMetricData.member.1.Value': 234333}
- params = api_utils.extract_param_list(p, prefix='MetricData')
- self.assertEqual(len(params), 0)
-
- def test_extract_param_list_garbage_suffix(self):
- p = {'MetricData.member.1.AMetricName': 'foo',
- 'MetricData.member.1.Unit': 'Bytes',
- 'MetricData.member.1.Value': 234333}
- params = api_utils.extract_param_list(p, prefix='MetricData')
- self.assertEqual(len(params), 1)
- self.assertTrue('MetricName' not in params[0])
- self.assertTrue('Unit' in params[0])
- self.assertTrue('Value' in params[0])
- self.assertEqual(params[0]['Unit'], 'Bytes')
- self.assertEqual(params[0]['Value'], 234333)
-
- def test_extract_param_list_multiple(self):
- p = {'MetricData.member.1.MetricName': 'foo',
- 'MetricData.member.1.Unit': 'Bytes',
- 'MetricData.member.1.Value': 234333,
- 'MetricData.member.2.MetricName': 'foo2',
- 'MetricData.member.2.Unit': 'Bytes',
- 'MetricData.member.2.Value': 12345}
- params = api_utils.extract_param_list(p, prefix='MetricData')
- self.assertEqual(len(params), 2)
- self.assertTrue('MetricName' in params[0])
- self.assertTrue('MetricName' in params[1])
- self.assertEqual(params[0]['MetricName'], 'foo')
- self.assertEqual(params[0]['Unit'], 'Bytes')
- self.assertEqual(params[0]['Value'], 234333)
- self.assertEqual(params[1]['MetricName'], 'foo2')
- self.assertEqual(params[1]['Unit'], 'Bytes')
- self.assertEqual(params[1]['Value'], 12345)
-
- def test_extract_param_list_multiple_missing(self):
- # Handle case where there is an empty list item
- p = {'MetricData.member.1.MetricName': 'foo',
- 'MetricData.member.1.Unit': 'Bytes',
- 'MetricData.member.1.Value': 234333,
- 'MetricData.member.3.MetricName': 'foo2',
- 'MetricData.member.3.Unit': 'Bytes',
- 'MetricData.member.3.Value': 12345}
- params = api_utils.extract_param_list(p, prefix='MetricData')
- self.assertEqual(len(params), 2)
- self.assertTrue('MetricName' in params[0])
- self.assertTrue('MetricName' in params[1])
- self.assertEqual(params[0]['MetricName'], 'foo')
- self.assertEqual(params[0]['Unit'], 'Bytes')
- self.assertEqual(params[0]['Value'], 234333)
- self.assertEqual(params[1]['MetricName'], 'foo2')
- self.assertEqual(params[1]['Unit'], 'Bytes')
- self.assertEqual(params[1]['Value'], 12345)
-
- def test_extract_param_list_badindex(self):
- p = {'MetricData.member.xyz.MetricName': 'foo',
- 'MetricData.member.$!&^.Unit': 'Bytes',
- 'MetricData.member.+.Value': 234333,
- 'MetricData.member.--.MetricName': 'foo2',
- 'MetricData.member._3.Unit': 'Bytes',
- 'MetricData.member.-1000.Value': 12345}
- params = api_utils.extract_param_list(p, prefix='MetricData')
- self.assertEqual(len(params), 0)
-
- def test_reformat_dict_keys(self):
- keymap = {"foo": "bar"}
- data = {"foo": 123}
- expected = {"bar": 123}
- result = api_utils.reformat_dict_keys(keymap, data)
- self.assertEqual(result, expected)
-
- def test_reformat_dict_keys_missing(self):
- keymap = {"foo": "bar", "foo2": "bar2"}
- data = {"foo": 123}
- expected = {"bar": 123}
- result = api_utils.reformat_dict_keys(keymap, data)
- self.assertEqual(result, expected)
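The tests above pin down the contract of `extract_param_pairs`: only keys of the exact form `<prefix>.member.<index>.<keyname>` / `<prefix>.member.<index>.<valuename>` are paired up, and keys with a garbled prefix, suffix, or extra dotted segments are ignored. A hedged re-implementation sketch (not Heat's actual code) of that behaviour:

```python
import re

def extract_param_pairs(params, prefix, keyname, valuename):
    """Pair up AWS query-style <prefix>.member.<N>.<keyname>/<valuename> keys."""
    key_re = re.compile(
        r"^%s\.member\.(\d+)\.%s$" % (re.escape(prefix), re.escape(keyname)))
    result = {}
    for pkey, pval in params.items():
        match = key_re.match(pkey)
        if match:
            value_key = "%s.member.%s.%s" % (prefix, match.group(1), valuename)
            # Only emit a pair when the matching value key is present too.
            if value_key in params:
                result[pval] = params[value_key]
    return result

p = {"Parameters.member.1.ParameterKey": "foo",
     "Parameters.member.1.ParameterValue": "bar"}
pairs = extract_param_pairs(p, "Parameters", "ParameterKey", "ParameterValue")
# pairs == {"foo": "bar"}
```

Because the index group is anchored as a single run of digits, malformed keys such as `Parameters.member.1.1.ParameterKey` fail to match, which is the behaviour `test_params_extract_dots` asserts.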
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import json
-import os
-
-from oslo.config import cfg
-
-from heat.common import exception as heat_exception
-from heat.common import identifier
-from heat.common import policy
-from heat.openstack.common import rpc
-from heat.common.wsgi import Request
-from heat.rpc import api as rpc_api
-from heat.api.aws import exception
-import heat.api.cfn.v1.stacks as stacks
-from heat.tests.common import HeatTestCase
-from heat.tests import utils
-
-policy_path = os.path.dirname(os.path.realpath(__file__)) + "/policy/"
-
-
-class CfnStackControllerTest(HeatTestCase):
- '''
- Tests the API class which acts as the WSGI controller,
- the endpoint processing API requests after they are routed
- '''
-
- def _dummy_GET_request(self, params={}):
- # Mangle the params dict into a query string
- qs = "&".join(["=".join([k, str(params[k])]) for k in params])
- environ = {'REQUEST_METHOD': 'GET', 'QUERY_STRING': qs}
- req = Request(environ)
- req.context = utils.dummy_context()
- return req
-
- # The tests
- def test_stackid_addprefix(self):
- self.m.ReplayAll()
-
- response = self.controller._id_format({
- 'StackName': 'Foo',
- 'StackId': {
- u'tenant': u't',
- u'stack_name': u'Foo',
- u'stack_id': u'123',
- u'path': u''
- }
- })
- expected = {'StackName': 'Foo',
- 'StackId': 'arn:openstack:heat::t:stacks/Foo/123'}
- self.assertEqual(response, expected)
- self.m.VerifyAll()
-
- def test_enforce_default(self):
- self.m.ReplayAll()
- params = {'Action': 'ListStacks'}
- dummy_req = self._dummy_GET_request(params)
- self.controller.policy.policy_path = None
- response = self.controller._enforce(dummy_req, 'ListStacks')
- self.assertEqual(response, None)
- self.m.VerifyAll()
-
- def test_enforce_denied(self):
- self.m.ReplayAll()
- params = {'Action': 'ListStacks'}
- dummy_req = self._dummy_GET_request(params)
- dummy_req.context.roles = ['heat_stack_user']
- self.controller.policy.policy_path = (policy_path +
- 'deny_stack_user.json')
- self.assertRaises(exception.HeatAccessDeniedError,
- self.controller._enforce, dummy_req, 'ListStacks')
- self.m.VerifyAll()
-
- def test_enforce_ise(self):
- params = {'Action': 'ListStacks'}
- dummy_req = self._dummy_GET_request(params)
- dummy_req.context.roles = ['heat_stack_user']
-
- self.m.StubOutWithMock(policy.Enforcer, 'enforce')
- policy.Enforcer.enforce(dummy_req.context, 'ListStacks', {}
- ).AndRaise(AttributeError)
- self.m.ReplayAll()
-
- self.controller.policy.policy_path = (policy_path +
- 'deny_stack_user.json')
- self.assertRaises(exception.HeatInternalFailureError,
- self.controller._enforce, dummy_req, 'ListStacks')
- self.m.VerifyAll()
-
- def test_list(self):
- # Format a dummy GET request to pass into the WSGI handler
- params = {'Action': 'ListStacks'}
- dummy_req = self._dummy_GET_request(params)
-
- # Stub out the RPC call to the engine with a pre-canned response
- engine_resp = [{u'stack_identity': {u'tenant': u't',
- u'stack_name': u'wordpress',
- u'stack_id': u'1',
- u'path': u''},
- u'updated_time': u'2012-07-09T09:13:11Z',
- u'template_description': u'blah',
- u'stack_status_reason': u'Stack successfully created',
- u'creation_time': u'2012-07-09T09:12:45Z',
- u'stack_name': u'wordpress',
- u'stack_action': u'CREATE',
- u'stack_status': u'COMPLETE'}]
- self.m.StubOutWithMock(rpc, 'call')
- rpc.call(dummy_req.context, self.topic,
- {'namespace': None,
- 'method': 'list_stacks',
- 'args': {},
- 'version': self.api_version},
- None).AndReturn(engine_resp)
-
- self.m.ReplayAll()
-
- # Call the list controller function and compare the response
- result = self.controller.list(dummy_req)
- expected = {'ListStacksResponse': {'ListStacksResult':
- {'StackSummaries':
- [{u'StackId': u'arn:openstack:heat::t:stacks/wordpress/1',
- u'LastUpdatedTime': u'2012-07-09T09:13:11Z',
- u'TemplateDescription': u'blah',
- u'StackStatusReason': u'Stack successfully created',
- u'CreationTime': u'2012-07-09T09:12:45Z',
- u'StackName': u'wordpress',
- u'StackStatus': u'CREATE_COMPLETE'}]}}}
- self.assertEqual(result, expected)
- self.m.VerifyAll()
-
- def test_list_rmt_aterr(self):
- params = {'Action': 'ListStacks'}
- dummy_req = self._dummy_GET_request(params)
-
- # Insert an engine RPC error and ensure we map correctly to the
- # heat exception type
- self.m.StubOutWithMock(rpc, 'call')
- rpc.call(dummy_req.context, self.topic,
- {'namespace': None,
- 'method': 'list_stacks',
- 'args': {},
- 'version': self.api_version},
- None).AndRaise(AttributeError())
-
- self.m.ReplayAll()
-
- # Call the list controller function and compare the response
- result = self.controller.list(dummy_req)
- self.assertEqual(type(result),
- exception.HeatInvalidParameterValueError)
- self.m.VerifyAll()
-
- def test_list_rmt_interr(self):
- params = {'Action': 'ListStacks'}
- dummy_req = self._dummy_GET_request(params)
-
- # Insert an engine RPC error and ensure we map correctly to the
- # heat exception type
- self.m.StubOutWithMock(rpc, 'call')
- rpc.call(dummy_req.context, self.topic,
- {'namespace': None,
- 'method': 'list_stacks',
- 'args': {},
- 'version': self.api_version},
- None).AndRaise(Exception())
-
- self.m.ReplayAll()
-
- # Call the list controller function and compare the response
- result = self.controller.list(dummy_req)
- self.assertEqual(type(result), exception.HeatInternalFailureError)
- self.m.VerifyAll()
-
- def test_describe(self):
- # Format a dummy GET request to pass into the WSGI handler
- stack_name = u"wordpress"
- identity = dict(identifier.HeatIdentifier('t', stack_name, '6'))
- params = {'Action': 'DescribeStacks', 'StackName': stack_name}
- dummy_req = self._dummy_GET_request(params)
-
- # Stub out the RPC call to the engine with a pre-canned response
- # Note the engine returns a load of keys we don't actually use
- # so this is a subset of the real response format
- engine_resp = [{u'stack_identity':
- {u'tenant': u't',
- u'stack_name': u'wordpress',
- u'stack_id': u'6',
- u'path': u''},
- u'updated_time': u'2012-07-09T09:13:11Z',
- u'parameters': {u'DBUsername': u'admin',
- u'LinuxDistribution': u'F17',
- u'InstanceType': u'm1.large',
- u'DBRootPassword': u'admin',
- u'DBPassword': u'admin',
- u'DBName': u'wordpress'},
- u'outputs':
- [{u'output_key': u'WebsiteURL',
- u'description': u'URL for Wordpress wiki',
- u'output_value': u'http://10.0.0.8/wordpress'}],
- u'stack_status_reason': u'Stack successfully created',
- u'creation_time': u'2012-07-09T09:12:45Z',
- u'stack_name': u'wordpress',
- u'notification_topics': [],
- u'stack_action': u'CREATE',
- u'stack_status': u'COMPLETE',
- u'description': u'blah',
- u'disable_rollback': 'true',
- u'timeout_mins': 60,
- u'capabilities': []}]
-
- self.m.StubOutWithMock(rpc, 'call')
- rpc.call(dummy_req.context, self.topic,
- {'namespace': None,
- 'method': 'identify_stack',
- 'args': {'stack_name': stack_name},
- 'version': self.api_version}, None).AndReturn(identity)
- rpc.call(dummy_req.context, self.topic,
- {'namespace': None,
- 'method': 'show_stack',
- 'args': {'stack_identity': identity},
- 'version': self.api_version}, None).AndReturn(engine_resp)
-
- self.m.ReplayAll()
-
- # Call the list controller function and compare the response
- response = self.controller.describe(dummy_req)
-
- expected = {'DescribeStacksResponse':
- {'DescribeStacksResult':
- {'Stacks':
- [{'StackId': u'arn:openstack:heat::t:stacks/wordpress/6',
- 'StackStatusReason': u'Stack successfully created',
- 'Description': u'blah',
- 'Parameters':
- [{'ParameterValue': u'admin',
- 'ParameterKey': u'DBUsername'},
- {'ParameterValue': u'F17',
- 'ParameterKey': u'LinuxDistribution'},
- {'ParameterValue': u'm1.large',
- 'ParameterKey': u'InstanceType'},
- {'ParameterValue': u'admin',
- 'ParameterKey': u'DBRootPassword'},
- {'ParameterValue': u'admin',
- 'ParameterKey': u'DBPassword'},
- {'ParameterValue': u'wordpress',
- 'ParameterKey': u'DBName'}],
- 'Outputs':
- [{'OutputKey': u'WebsiteURL',
- 'OutputValue': u'http://10.0.0.8/wordpress',
- 'Description': u'URL for Wordpress wiki'}],
- 'TimeoutInMinutes': 60,
- 'CreationTime': u'2012-07-09T09:12:45Z',
- 'Capabilities': [],
- 'StackName': u'wordpress',
- 'NotificationARNs': [],
- 'StackStatus': u'CREATE_COMPLETE',
- 'DisableRollback': 'true',
- 'LastUpdatedTime': u'2012-07-09T09:13:11Z'}]}}}
-
- self.assertEqual(response, expected)
- self.m.VerifyAll()
-
- def test_describe_arn(self):
- # Format a dummy GET request to pass into the WSGI handler
- stack_name = u"wordpress"
- stack_identifier = identifier.HeatIdentifier('t', stack_name, '6')
- identity = dict(stack_identifier)
- params = {'Action': 'DescribeStacks',
- 'StackName': stack_identifier.arn()}
- dummy_req = self._dummy_GET_request(params)
-
- # Stub out the RPC call to the engine with a pre-canned response
- # Note the engine returns a load of keys we don't actually use
- # so this is a subset of the real response format
- engine_resp = [{u'stack_identity': {u'tenant': u't',
- u'stack_name': u'wordpress',
- u'stack_id': u'6',
- u'path': u''},
- u'updated_time': u'2012-07-09T09:13:11Z',
- u'parameters': {u'DBUsername': u'admin',
- u'LinuxDistribution': u'F17',
- u'InstanceType': u'm1.large',
- u'DBRootPassword': u'admin',
- u'DBPassword': u'admin',
- u'DBName': u'wordpress'},
- u'outputs':
- [{u'output_key': u'WebsiteURL',
- u'description': u'URL for Wordpress wiki',
- u'output_value': u'http://10.0.0.8/wordpress'}],
- u'stack_status_reason': u'Stack successfully created',
- u'creation_time': u'2012-07-09T09:12:45Z',
- u'stack_name': u'wordpress',
- u'notification_topics': [],
- u'stack_action': u'CREATE',
- u'stack_status': u'COMPLETE',
- u'description': u'blah',
- u'disable_rollback': 'true',
- u'timeout_mins': 60,
- u'capabilities': []}]
-
- self.m.StubOutWithMock(rpc, 'call')
- rpc.call(dummy_req.context, self.topic,
- {'namespace': None,
- 'method': 'show_stack',
- 'args': {'stack_identity': identity},
- 'version': self.api_version}, None).AndReturn(engine_resp)
-
- self.m.ReplayAll()
-
- # Call the list controller function and compare the response
- response = self.controller.describe(dummy_req)
-
- expected = {'DescribeStacksResponse':
- {'DescribeStacksResult':
- {'Stacks':
- [{'StackId': u'arn:openstack:heat::t:stacks/wordpress/6',
- 'StackStatusReason': u'Stack successfully created',
- 'Description': u'blah',
- 'Parameters':
- [{'ParameterValue': u'admin',
- 'ParameterKey': u'DBUsername'},
- {'ParameterValue': u'F17',
- 'ParameterKey': u'LinuxDistribution'},
- {'ParameterValue': u'm1.large',
- 'ParameterKey': u'InstanceType'},
- {'ParameterValue': u'admin',
- 'ParameterKey': u'DBRootPassword'},
- {'ParameterValue': u'admin',
- 'ParameterKey': u'DBPassword'},
- {'ParameterValue': u'wordpress',
- 'ParameterKey': u'DBName'}],
- 'Outputs':
- [{'OutputKey': u'WebsiteURL',
- 'OutputValue': u'http://10.0.0.8/wordpress',
- 'Description': u'URL for Wordpress wiki'}],
- 'TimeoutInMinutes': 60,
- 'CreationTime': u'2012-07-09T09:12:45Z',
- 'Capabilities': [],
- 'StackName': u'wordpress',
- 'NotificationARNs': [],
- 'StackStatus': u'CREATE_COMPLETE',
- 'DisableRollback': 'true',
- 'LastUpdatedTime': u'2012-07-09T09:13:11Z'}]}}}
-
- self.assertEqual(response, expected)
- self.m.VerifyAll()
-
- def test_describe_arn_invalidtenant(self):
- # Format a dummy GET request to pass into the WSGI handler
- stack_name = u"wordpress"
- stack_identifier = identifier.HeatIdentifier('wibble', stack_name, '6')
- identity = dict(stack_identifier)
- params = {'Action': 'DescribeStacks',
- 'StackName': stack_identifier.arn()}
- dummy_req = self._dummy_GET_request(params)
-
- self.m.StubOutWithMock(rpc, 'call')
- rpc.call(dummy_req.context, self.topic,
- {'namespace': None,
- 'method': 'show_stack',
- 'args': {'stack_identity': identity},
- 'version': self.api_version},
- None).AndRaise(heat_exception.InvalidTenant())
-
- self.m.ReplayAll()
-
- result = self.controller.describe(dummy_req)
- self.assertEqual(type(result),
- exception.HeatInvalidParameterValueError)
- self.m.VerifyAll()
-
- def test_describe_aterr(self):
- stack_name = "wordpress"
- identity = dict(identifier.HeatIdentifier('t', stack_name, '6'))
- params = {'Action': 'DescribeStacks', 'StackName': stack_name}
- dummy_req = self._dummy_GET_request(params)
-
- # Insert an engine RPC error and ensure we map correctly to the
- # heat exception type
- self.m.StubOutWithMock(rpc, 'call')
- rpc.call(dummy_req.context, self.topic,
- {'namespace': None,
- 'method': 'identify_stack',
- 'args': {'stack_name': stack_name},
- 'version': self.api_version}, None).AndReturn(identity)
- rpc.call(dummy_req.context, self.topic,
- {'namespace': None,
- 'method': 'show_stack',
- 'args': {'stack_identity': identity},
- 'version': self.api_version}, None
- ).AndRaise(AttributeError())
-
- self.m.ReplayAll()
-
- result = self.controller.describe(dummy_req)
- self.assertEqual(type(result),
- exception.HeatInvalidParameterValueError)
- self.m.VerifyAll()
-
- def test_describe_bad_name(self):
- stack_name = "wibble"
- params = {'Action': 'DescribeStacks', 'StackName': stack_name}
- dummy_req = self._dummy_GET_request(params)
-
- # Insert an engine RPC error and ensure we map correctly to the
- # heat exception type
- self.m.StubOutWithMock(rpc, 'call')
- rpc.call(dummy_req.context, self.topic,
- {'namespace': None,
- 'method': 'identify_stack',
- 'args': {'stack_name': stack_name},
- 'version': self.api_version}, None
- ).AndRaise(heat_exception.StackNotFound())
-
- self.m.ReplayAll()
-
- result = self.controller.describe(dummy_req)
- self.assertEqual(type(result),
- exception.HeatInvalidParameterValueError)
- self.m.VerifyAll()
-
- def test_get_template_int_body(self):
- '''Test the internal _get_template function.'''
- params = {'TemplateBody': "abcdef"}
- dummy_req = self._dummy_GET_request(params)
- result = self.controller._get_template(dummy_req)
- expected = "abcdef"
- self.assertEqual(result, expected)
-
- # TODO(shardy) : test the _get_template TemplateUrl case
-
- def test_create(self):
- # Format a dummy request
- stack_name = "wordpress"
- template = {u'Foo': u'bar'}
- json_template = json.dumps(template)
- params = {'Action': 'CreateStack', 'StackName': stack_name,
- 'TemplateBody': '%s' % json_template,
- 'TimeoutInMinutes': 30,
- 'DisableRollback': 'true',
- 'Parameters.member.1.ParameterKey': 'InstanceType',
- 'Parameters.member.1.ParameterValue': 'm1.xlarge'}
- engine_parms = {u'InstanceType': u'm1.xlarge'}
- engine_args = {'timeout_mins': u'30', 'disable_rollback': 'true'}
- dummy_req = self._dummy_GET_request(params)
-
- # Stub out the RPC call to the engine with a pre-canned response
- engine_resp = {u'tenant': u't',
- u'stack_name': u'wordpress',
- u'stack_id': u'1',
- u'path': u''}
-
- self.m.StubOutWithMock(rpc, 'call')
- rpc.call(dummy_req.context, self.topic,
- {'namespace': None,
- 'method': 'create_stack',
- 'args': {'stack_name': stack_name,
- 'template': template,
- 'params': engine_parms,
- 'files': {},
- 'args': engine_args},
- 'version': self.api_version}, None).AndReturn(engine_resp)
-
- self.m.ReplayAll()
-
- response = self.controller.create(dummy_req)
-
- expected = {
- 'CreateStackResponse': {
- 'CreateStackResult': {
- u'StackId': u'arn:openstack:heat::t:stacks/wordpress/1'
- }
- }
- }
-
- self.assertEqual(response, expected)
- self.m.VerifyAll()
-
- def test_create_rollback(self):
- # Format a dummy request
- stack_name = "wordpress"
- template = {u'Foo': u'bar'}
- json_template = json.dumps(template)
- params = {'Action': 'CreateStack', 'StackName': stack_name,
- 'TemplateBody': '%s' % json_template,
- 'TimeoutInMinutes': 30,
- 'DisableRollback': 'false',
- 'Parameters.member.1.ParameterKey': 'InstanceType',
- 'Parameters.member.1.ParameterValue': 'm1.xlarge'}
- engine_parms = {u'InstanceType': u'm1.xlarge'}
- engine_args = {'timeout_mins': u'30', 'disable_rollback': 'false'}
- dummy_req = self._dummy_GET_request(params)
-
- # Stub out the RPC call to the engine with a pre-canned response
- engine_resp = {u'tenant': u't',
- u'stack_name': u'wordpress',
- u'stack_id': u'1',
- u'path': u''}
-
- self.m.StubOutWithMock(rpc, 'call')
- rpc.call(dummy_req.context, self.topic,
- {'namespace': None,
- 'method': 'create_stack',
- 'args': {'stack_name': stack_name,
- 'template': template,
- 'params': engine_parms,
- 'files': {},
- 'args': engine_args},
- 'version': self.api_version}, None).AndReturn(engine_resp)
-
- self.m.ReplayAll()
-
- response = self.controller.create(dummy_req)
-
- expected = {
- 'CreateStackResponse': {
- 'CreateStackResult': {
- u'StackId': u'arn:openstack:heat::t:stacks/wordpress/1'
- }
- }
- }
-
- self.assertEqual(response, expected)
- self.m.VerifyAll()
-
- def test_create_onfailure_true(self):
- # Format a dummy request
- stack_name = "wordpress"
- template = {u'Foo': u'bar'}
- json_template = json.dumps(template)
- params = {'Action': 'CreateStack', 'StackName': stack_name,
- 'TemplateBody': '%s' % json_template,
- 'TimeoutInMinutes': 30,
- 'OnFailure': 'DO_NOTHING',
- 'Parameters.member.1.ParameterKey': 'InstanceType',
- 'Parameters.member.1.ParameterValue': 'm1.xlarge'}
- engine_parms = {u'InstanceType': u'm1.xlarge'}
- engine_args = {'timeout_mins': u'30', 'disable_rollback': 'true'}
- dummy_req = self._dummy_GET_request(params)
-
- # Stub out the RPC call to the engine with a pre-canned response
- engine_resp = {u'tenant': u't',
- u'stack_name': u'wordpress',
- u'stack_id': u'1',
- u'path': u''}
-
- self.m.StubOutWithMock(rpc, 'call')
- rpc.call(dummy_req.context, self.topic,
- {'namespace': None,
- 'method': 'create_stack',
- 'args': {'stack_name': stack_name,
- 'template': template,
- 'params': engine_parms,
- 'files': {},
- 'args': engine_args},
- 'version': self.api_version}, None).AndReturn(engine_resp)
-
- self.m.ReplayAll()
-
- response = self.controller.create(dummy_req)
-
- expected = {
- 'CreateStackResponse': {
- 'CreateStackResult': {
- u'StackId': u'arn:openstack:heat::t:stacks/wordpress/1'
- }
- }
- }
-
- self.assertEqual(response, expected)
- self.m.VerifyAll()
-
- def test_create_onfailure_false_delete(self):
- # Format a dummy request
- stack_name = "wordpress"
- template = {u'Foo': u'bar'}
- json_template = json.dumps(template)
- params = {'Action': 'CreateStack', 'StackName': stack_name,
- 'TemplateBody': '%s' % json_template,
- 'TimeoutInMinutes': 30,
- 'OnFailure': 'DELETE',
- 'Parameters.member.1.ParameterKey': 'InstanceType',
- 'Parameters.member.1.ParameterValue': 'm1.xlarge'}
- engine_parms = {u'InstanceType': u'm1.xlarge'}
- engine_args = {'timeout_mins': u'30', 'disable_rollback': 'false'}
- dummy_req = self._dummy_GET_request(params)
-
- # Stub out the RPC call to the engine with a pre-canned response
- engine_resp = {u'tenant': u't',
- u'stack_name': u'wordpress',
- u'stack_id': u'1',
- u'path': u''}
-
- self.m.StubOutWithMock(rpc, 'call')
- rpc.call(dummy_req.context, self.topic,
- {'namespace': None,
- 'method': 'create_stack',
- 'args': {'stack_name': stack_name,
- 'template': template,
- 'params': engine_parms,
- 'files': {},
- 'args': engine_args},
- 'version': self.api_version}, None).AndReturn(engine_resp)
-
- self.m.ReplayAll()
-
- response = self.controller.create(dummy_req)
-
- expected = {
- 'CreateStackResponse': {
- 'CreateStackResult': {
- u'StackId': u'arn:openstack:heat::t:stacks/wordpress/1'
- }
- }
- }
-
- self.assertEqual(response, expected)
- self.m.VerifyAll()
-
- def test_create_onfailure_false_rollback(self):
- # Format a dummy request
- stack_name = "wordpress"
- template = {u'Foo': u'bar'}
- json_template = json.dumps(template)
- params = {'Action': 'CreateStack', 'StackName': stack_name,
- 'TemplateBody': '%s' % json_template,
- 'TimeoutInMinutes': 30,
- 'OnFailure': 'ROLLBACK',
- 'Parameters.member.1.ParameterKey': 'InstanceType',
- 'Parameters.member.1.ParameterValue': 'm1.xlarge'}
- engine_parms = {u'InstanceType': u'm1.xlarge'}
- engine_args = {'timeout_mins': u'30', 'disable_rollback': 'false'}
- dummy_req = self._dummy_GET_request(params)
-
- # Stub out the RPC call to the engine with a pre-canned response
- engine_resp = {u'tenant': u't',
- u'stack_name': u'wordpress',
- u'stack_id': u'1',
- u'path': u''}
-
- self.m.StubOutWithMock(rpc, 'call')
- rpc.call(dummy_req.context, self.topic,
- {'namespace': None,
- 'method': 'create_stack',
- 'args': {'stack_name': stack_name,
- 'template': template,
- 'params': engine_parms,
- 'files': {},
- 'args': engine_args},
- 'version': self.api_version}, None).AndReturn(engine_resp)
-
- self.m.ReplayAll()
-
- response = self.controller.create(dummy_req)
-
- expected = {
- 'CreateStackResponse': {
- 'CreateStackResult': {
- u'StackId': u'arn:openstack:heat::t:stacks/wordpress/1'
- }
- }
- }
-
- self.assertEqual(response, expected)
- self.m.VerifyAll()
-
- def test_create_onfailure_err(self):
- # Format a dummy request
- stack_name = "wordpress"
- template = {u'Foo': u'bar'}
- json_template = json.dumps(template)
- params = {'Action': 'CreateStack', 'StackName': stack_name,
- 'TemplateBody': '%s' % json_template,
- 'TimeoutInMinutes': 30,
- 'DisableRollback': 'true',
- 'OnFailure': 'DO_NOTHING',
- 'Parameters.member.1.ParameterKey': 'InstanceType',
- 'Parameters.member.1.ParameterValue': 'm1.xlarge'}
- engine_parms = {u'InstanceType': u'm1.xlarge'}
- engine_args = {'timeout_mins': u'30', 'disable_rollback': 'false'}
- dummy_req = self._dummy_GET_request(params)
-
- self.assertRaises(exception.HeatInvalidParameterCombinationError,
- self.controller.create, dummy_req)
-
- def test_create_err_no_template(self):
- # Format a dummy request with a missing template field
- stack_name = "wordpress"
- params = {'Action': 'CreateStack', 'StackName': stack_name}
- dummy_req = self._dummy_GET_request(params)
-
- result = self.controller.create(dummy_req)
- self.assertEqual(type(result), exception.HeatMissingParameterError)
-
- def test_create_err_inval_template(self):
- # Format a dummy request with an invalid TemplateBody
- stack_name = "wordpress"
- json_template = "!$%**_+}@~?"
- params = {'Action': 'CreateStack', 'StackName': stack_name,
- 'TemplateBody': '%s' % json_template}
- dummy_req = self._dummy_GET_request(params)
-
- result = self.controller.create(dummy_req)
- self.assertEqual(type(result),
- exception.HeatInvalidParameterValueError)
-
- def test_create_err_rpcerr(self):
- # Format a dummy request
- stack_name = "wordpress"
- template = {u'Foo': u'bar'}
- json_template = json.dumps(template)
- params = {'Action': 'CreateStack', 'StackName': stack_name,
- 'TemplateBody': '%s' % json_template,
- 'TimeoutInMinutes': 30,
- 'Parameters.member.1.ParameterKey': 'InstanceType',
- 'Parameters.member.1.ParameterValue': 'm1.xlarge'}
- engine_parms = {u'InstanceType': u'm1.xlarge'}
- engine_args = {'timeout_mins': u'30'}
- dummy_req = self._dummy_GET_request(params)
-
- # Insert an engine RPC error and ensure we map correctly to the
- # heat exception type
- self.m.StubOutWithMock(rpc, 'call')
-
- rpc.call(dummy_req.context, self.topic,
- {'namespace': None,
- 'method': 'create_stack',
- 'args': {'stack_name': stack_name,
- 'template': template,
- 'params': engine_parms,
- 'files': {},
- 'args': engine_args},
- 'version': self.api_version}, None
- ).AndRaise(AttributeError())
- rpc.call(dummy_req.context, self.topic,
- {'namespace': None,
- 'method': 'create_stack',
- 'args': {'stack_name': stack_name,
- 'template': template,
- 'params': engine_parms,
- 'files': {},
- 'args': engine_args},
- 'version': self.api_version}, None
- ).AndRaise(heat_exception.UnknownUserParameter())
- rpc.call(dummy_req.context, self.topic,
- {'namespace': None,
- 'method': 'create_stack',
- 'args': {'stack_name': stack_name,
- 'template': template,
- 'params': engine_parms,
- 'files': {},
- 'args': engine_args},
- 'version': self.api_version}, None
- ).AndRaise(heat_exception.UserParameterMissing())
-
- self.m.ReplayAll()
-
- result = self.controller.create(dummy_req)
-
- self.assertEqual(type(result),
- exception.HeatInvalidParameterValueError)
-
- result = self.controller.create(dummy_req)
-
- self.assertEqual(type(result),
- exception.HeatInvalidParameterValueError)
-
- result = self.controller.create(dummy_req)
-
- self.assertEqual(type(result),
- exception.HeatInvalidParameterValueError)
-
- self.m.VerifyAll()
-
- def test_create_err_exists(self):
- # Format a dummy request
- stack_name = "wordpress"
- template = {u'Foo': u'bar'}
- json_template = json.dumps(template)
- params = {'Action': 'CreateStack', 'StackName': stack_name,
- 'TemplateBody': '%s' % json_template,
- 'TimeoutInMinutes': 30,
- 'Parameters.member.1.ParameterKey': 'InstanceType',
- 'Parameters.member.1.ParameterValue': 'm1.xlarge'}
- engine_parms = {u'InstanceType': u'm1.xlarge'}
- engine_args = {'timeout_mins': u'30'}
- dummy_req = self._dummy_GET_request(params)
-
- # Insert an engine RPC error and ensure we map correctly to the
- # heat exception type
- self.m.StubOutWithMock(rpc, 'call')
-
- rpc.call(dummy_req.context, self.topic,
- {'namespace': None,
- 'method': 'create_stack',
- 'args': {'stack_name': stack_name,
- 'template': template,
- 'params': engine_parms,
- 'files': {},
- 'args': engine_args},
- 'version': self.api_version}, None
- ).AndRaise(heat_exception.StackExists())
-
- self.m.ReplayAll()
-
- result = self.controller.create(dummy_req)
-
- self.assertEqual(type(result),
- exception.AlreadyExistsError)
- self.m.VerifyAll()
-
- def test_create_err_engine(self):
- # Format a dummy request
- stack_name = "wordpress"
- template = {u'Foo': u'bar'}
- json_template = json.dumps(template)
- params = {'Action': 'CreateStack', 'StackName': stack_name,
- 'TemplateBody': '%s' % json_template,
- 'TimeoutInMinutes': 30,
- 'Parameters.member.1.ParameterKey': 'InstanceType',
- 'Parameters.member.1.ParameterValue': 'm1.xlarge'}
- engine_parms = {u'InstanceType': u'm1.xlarge'}
- engine_args = {'timeout_mins': u'30'}
- dummy_req = self._dummy_GET_request(params)
-
- # Stub out the RPC call to the engine with a pre-canned response
- self.m.StubOutWithMock(rpc, 'call')
-
- rpc.call(dummy_req.context, self.topic,
- {'namespace': None,
- 'method': 'create_stack',
- 'args': {'stack_name': stack_name,
- 'template': template,
- 'params': engine_parms,
- 'files': {},
- 'args': engine_args},
- 'version': self.api_version}, None).AndRaise(
- heat_exception.StackValidationFailed(
- message='Something went wrong'))
-
- self.m.ReplayAll()
-
- result = self.controller.create(dummy_req)
-
- self.assertEqual(type(result),
- exception.HeatInvalidParameterValueError)
- self.m.VerifyAll()
-
- def test_update(self):
- # Format a dummy request
- stack_name = "wordpress"
- template = {u'Foo': u'bar'}
- json_template = json.dumps(template)
- params = {'Action': 'UpdateStack', 'StackName': stack_name,
- 'TemplateBody': '%s' % json_template,
- 'Parameters.member.1.ParameterKey': 'InstanceType',
- 'Parameters.member.1.ParameterValue': 'm1.xlarge'}
- engine_parms = {u'InstanceType': u'm1.xlarge'}
- engine_args = {}
- dummy_req = self._dummy_GET_request(params)
-
- # Stub out the RPC call to the engine with a pre-canned response
- identity = dict(identifier.HeatIdentifier('t', stack_name, '1'))
-
- self.m.StubOutWithMock(rpc, 'call')
- rpc.call(dummy_req.context, self.topic,
- {'namespace': None,
- 'method': 'identify_stack',
- 'args': {'stack_name': stack_name},
- 'version': self.api_version}, None).AndReturn(identity)
-
- rpc.call(dummy_req.context, self.topic,
- {'namespace': None,
- 'method': 'update_stack',
- 'args': {'stack_identity': identity,
- 'template': template,
- 'params': engine_parms,
- 'files': {},
- 'args': engine_args},
- 'version': self.api_version},
- None).AndReturn(identity)
-
- self.m.ReplayAll()
-
- response = self.controller.update(dummy_req)
-
- expected = {
- 'UpdateStackResponse': {
- 'UpdateStackResult': {
- u'StackId': u'arn:openstack:heat::t:stacks/wordpress/1'
- }
- }
- }
-
- self.assertEqual(response, expected)
- self.m.VerifyAll()
-
- def test_update_bad_name(self):
- stack_name = "wibble"
- template = {u'Foo': u'bar'}
- json_template = json.dumps(template)
- params = {'Action': 'UpdateStack', 'StackName': stack_name,
- 'TemplateBody': '%s' % json_template,
- 'Parameters.member.1.ParameterKey': 'InstanceType',
- 'Parameters.member.1.ParameterValue': 'm1.xlarge'}
- dummy_req = self._dummy_GET_request(params)
-
- # Insert an engine RPC error and ensure we map correctly to the
- # heat exception type
- self.m.StubOutWithMock(rpc, 'call')
- rpc.call(dummy_req.context, self.topic,
- {'namespace': None,
- 'method': 'identify_stack',
- 'args': {'stack_name': stack_name},
- 'version': self.api_version}, None
- ).AndRaise(heat_exception.StackNotFound())
-
- self.m.ReplayAll()
-
- result = self.controller.update(dummy_req)
- self.assertEqual(type(result),
- exception.HeatInvalidParameterValueError)
- self.m.VerifyAll()
-
- def test_create_or_update_err(self):
- result = self.controller.create_or_update(req={}, action="dsdgfdf")
- self.assertEqual(type(result), exception.HeatInternalFailureError)
-
- def test_get_template(self):
- # Format a dummy request
- stack_name = "wordpress"
- identity = dict(identifier.HeatIdentifier('t', stack_name, '6'))
- template = {u'Foo': u'bar'}
- params = {'Action': 'GetTemplate', 'StackName': stack_name}
- dummy_req = self._dummy_GET_request(params)
-
- # Stub out the RPC call to the engine with a pre-canned response
- engine_resp = template
-
- self.m.StubOutWithMock(rpc, 'call')
- rpc.call(dummy_req.context, self.topic,
- {'namespace': None,
- 'method': 'identify_stack',
- 'args': {'stack_name': stack_name},
- 'version': self.api_version}, None).AndReturn(identity)
- rpc.call(dummy_req.context, self.topic,
- {'namespace': None,
- 'method': 'get_template',
- 'args': {'stack_identity': identity},
- 'version': self.api_version}, None).AndReturn(engine_resp)
-
- self.m.ReplayAll()
-
- response = self.controller.get_template(dummy_req)
-
- expected = {'GetTemplateResponse':
- {'GetTemplateResult':
- {'TemplateBody': template}}}
-
- self.assertEqual(response, expected)
- self.m.VerifyAll()
-
- def test_get_template_err_rpcerr(self):
- stack_name = "wordpress"
- identity = dict(identifier.HeatIdentifier('t', stack_name, '6'))
- template = {u'Foo': u'bar'}
- params = {'Action': 'GetTemplate', 'StackName': stack_name}
- dummy_req = self._dummy_GET_request(params)
-
- # Insert an engine RPC error and ensure we map correctly to the
- # heat exception type
- self.m.StubOutWithMock(rpc, 'call')
- rpc.call(dummy_req.context, self.topic,
- {'namespace': None,
- 'method': 'identify_stack',
- 'args': {'stack_name': stack_name},
- 'version': self.api_version}, None).AndReturn(identity)
- rpc.call(dummy_req.context, self.topic,
- {'namespace': None,
- 'method': 'get_template',
- 'args': {'stack_identity': identity},
- 'version': self.api_version}, None
- ).AndRaise(AttributeError())
-
- self.m.ReplayAll()
-
- result = self.controller.get_template(dummy_req)
-
- self.assertEqual(type(result),
- exception.HeatInvalidParameterValueError)
- self.m.VerifyAll()
-
- def test_get_template_bad_name(self):
- stack_name = "wibble"
- params = {'Action': 'GetTemplate', 'StackName': stack_name}
- dummy_req = self._dummy_GET_request(params)
-
- # Insert an engine RPC error and ensure we map correctly to the
- # heat exception type
- self.m.StubOutWithMock(rpc, 'call')
- rpc.call(dummy_req.context, self.topic,
- {'namespace': None,
- 'method': 'identify_stack',
- 'args': {'stack_name': stack_name},
- 'version': self.api_version}, None
- ).AndRaise(heat_exception.StackNotFound())
-
- self.m.ReplayAll()
-
- result = self.controller.get_template(dummy_req)
- self.assertEqual(type(result),
- exception.HeatInvalidParameterValueError)
- self.m.VerifyAll()
-
- def test_get_template_err_none(self):
- stack_name = "wordpress"
- identity = dict(identifier.HeatIdentifier('t', stack_name, '6'))
- template = {u'Foo': u'bar'}
- params = {'Action': 'GetTemplate', 'StackName': stack_name}
- dummy_req = self._dummy_GET_request(params)
-
-        # Stub out the RPC call to the engine to return None;
-        # this tests the "no such stack" error path
- engine_resp = None
-
- self.m.StubOutWithMock(rpc, 'call')
- rpc.call(dummy_req.context, self.topic,
- {'namespace': None,
- 'method': 'identify_stack',
- 'args': {'stack_name': stack_name},
- 'version': self.api_version}, None).AndReturn(identity)
- rpc.call(dummy_req.context, self.topic,
- {'namespace': None,
- 'method': 'get_template',
- 'args': {'stack_identity': identity},
- 'version': self.api_version}, None).AndReturn(engine_resp)
-
- self.m.ReplayAll()
-
- result = self.controller.get_template(dummy_req)
-
- self.assertEqual(type(result),
- exception.HeatInvalidParameterValueError)
- self.m.VerifyAll()
-
- def test_validate_err_no_template(self):
- # Format a dummy request with a missing template field
- stack_name = "wordpress"
- params = {'Action': 'ValidateTemplate'}
- dummy_req = self._dummy_GET_request(params)
-
- result = self.controller.validate_template(dummy_req)
- self.assertEqual(type(result), exception.HeatMissingParameterError)
-
- def test_validate_err_inval_template(self):
- # Format a dummy request with an invalid TemplateBody
- json_template = "!$%**_+}@~?"
- params = {'Action': 'ValidateTemplate',
- 'TemplateBody': '%s' % json_template}
- dummy_req = self._dummy_GET_request(params)
-
- result = self.controller.validate_template(dummy_req)
- self.assertEqual(type(result),
- exception.HeatInvalidParameterValueError)
-
- def test_bad_resources_in_template(self):
- # Format a dummy request
- json_template = {
- 'template': {
- 'AWSTemplateFormatVersion': '2010-09-09',
- 'Resources': {
-                'Type': 'AWS::EC2::Instance',
- },
- }
- }
- params = {'Action': 'ValidateTemplate',
- 'TemplateBody': '%s' % json.dumps(json_template)}
- response = {'Error': 'Resources must contain Resource. '
- 'Found a [string] instead'}
- dummy_req = self._dummy_GET_request(params)
-
- # Stub out the RPC call to the engine with a pre-canned response
- self.m.StubOutWithMock(rpc, 'call')
- rpc.call(dummy_req.context, self.topic,
- {'namespace': None,
- 'method': 'validate_template',
- 'args': {'template': json_template},
- 'version': self.api_version}, None).AndReturn(response)
- self.m.ReplayAll()
-
- response = self.controller.validate_template(dummy_req)
-
- expected = {'ValidateTemplateResponse':
- {'ValidateTemplateResult':
- 'Resources must contain Resource. '
- 'Found a [string] instead'}}
- self.assertEqual(expected, response)
- self.m.VerifyAll()
-
- def test_delete(self):
- # Format a dummy request
- stack_name = "wordpress"
- identity = dict(identifier.HeatIdentifier('t', stack_name, '1'))
- params = {'Action': 'DeleteStack', 'StackName': stack_name}
- dummy_req = self._dummy_GET_request(params)
-
- # Stub out the RPC call to the engine with a pre-canned response
- self.m.StubOutWithMock(rpc, 'call')
- rpc.call(dummy_req.context, self.topic,
- {'namespace': None,
- 'method': 'identify_stack',
- 'args': {'stack_name': stack_name},
- 'version': self.api_version}, None).AndReturn(identity)
- # Engine returns None when delete successful
- rpc.call(dummy_req.context, self.topic,
- {'namespace': None,
- 'method': 'delete_stack',
- 'args': {'stack_identity': identity},
- 'version': self.api_version}, None).AndReturn(None)
-
- self.m.ReplayAll()
-
- response = self.controller.delete(dummy_req)
-
- expected = {'DeleteStackResponse': {'DeleteStackResult': ''}}
-
- self.assertEqual(response, expected)
- self.m.VerifyAll()
-
- def test_delete_err_rpcerr(self):
- stack_name = "wordpress"
- identity = dict(identifier.HeatIdentifier('t', stack_name, '1'))
- params = {'Action': 'DeleteStack', 'StackName': stack_name}
- dummy_req = self._dummy_GET_request(params)
-
- # Stub out the RPC call to the engine with a pre-canned response
- self.m.StubOutWithMock(rpc, 'call')
- rpc.call(dummy_req.context, self.topic,
- {'namespace': None,
- 'method': 'identify_stack',
- 'args': {'stack_name': stack_name},
- 'version': self.api_version}, None).AndReturn(identity)
-
- # Insert an engine RPC error and ensure we map correctly to the
- # heat exception type
- rpc.call(dummy_req.context, self.topic,
- {'namespace': None,
- 'method': 'delete_stack',
- 'args': {'stack_identity': identity},
- 'version': self.api_version}, None
- ).AndRaise(AttributeError())
-
- self.m.ReplayAll()
-
- result = self.controller.delete(dummy_req)
-
- self.assertEqual(type(result),
- exception.HeatInvalidParameterValueError)
- self.m.VerifyAll()
-
- def test_delete_bad_name(self):
- stack_name = "wibble"
- params = {'Action': 'DeleteStack', 'StackName': stack_name}
- dummy_req = self._dummy_GET_request(params)
-
- # Insert an engine RPC error and ensure we map correctly to the
- # heat exception type
- self.m.StubOutWithMock(rpc, 'call')
- rpc.call(dummy_req.context, self.topic,
- {'namespace': None,
- 'method': 'identify_stack',
- 'args': {'stack_name': stack_name},
- 'version': self.api_version}, None
- ).AndRaise(heat_exception.StackNotFound())
-
- self.m.ReplayAll()
-
- result = self.controller.delete(dummy_req)
- self.assertEqual(type(result),
- exception.HeatInvalidParameterValueError)
- self.m.VerifyAll()
-
- def test_events_list(self):
- # Format a dummy request
- stack_name = "wordpress"
- identity = dict(identifier.HeatIdentifier('t', stack_name, '6'))
- params = {'Action': 'DescribeStackEvents', 'StackName': stack_name}
- dummy_req = self._dummy_GET_request(params)
-
- # Stub out the RPC call to the engine with a pre-canned response
- engine_resp = [{u'stack_name': u'wordpress',
- u'event_time': u'2012-07-23T13:05:39Z',
- u'stack_identity': {u'tenant': u't',
- u'stack_name': u'wordpress',
- u'stack_id': u'6',
- u'path': u''},
- u'resource_name': u'WikiDatabase',
- u'resource_status_reason': u'state changed',
- u'event_identity':
- {u'tenant': u't',
- u'stack_name': u'wordpress',
- u'stack_id': u'6',
- u'path': u'/resources/WikiDatabase/events/42'},
- u'resource_action': u'TEST',
- u'resource_status': u'IN_PROGRESS',
- u'physical_resource_id': None,
- u'resource_properties': {u'UserData': u'blah'},
- u'resource_type': u'AWS::EC2::Instance'}]
-
- self.m.StubOutWithMock(rpc, 'call')
- rpc.call(dummy_req.context, self.topic,
- {'namespace': None,
- 'method': 'identify_stack',
- 'args': {'stack_name': stack_name},
- 'version': self.api_version}, None).AndReturn(identity)
- rpc.call(dummy_req.context, self.topic,
- {'namespace': None,
- 'method': 'list_events',
- 'args': {'stack_identity': identity},
- 'version': self.api_version}, None).AndReturn(engine_resp)
-
- self.m.ReplayAll()
-
- response = self.controller.events_list(dummy_req)
-
- expected = {'DescribeStackEventsResponse':
- {'DescribeStackEventsResult':
- {'StackEvents':
- [{'EventId': u'42',
- 'StackId': u'arn:openstack:heat::t:stacks/wordpress/6',
- 'ResourceStatus': u'TEST_IN_PROGRESS',
- 'ResourceType': u'AWS::EC2::Instance',
- 'Timestamp': u'2012-07-23T13:05:39Z',
- 'StackName': u'wordpress',
- 'ResourceProperties':
- json.dumps({u'UserData': u'blah'}),
- 'PhysicalResourceId': None,
- 'ResourceStatusReason': u'state changed',
- 'LogicalResourceId': u'WikiDatabase'}]}}}
-
- self.assertEqual(response, expected)
- self.m.VerifyAll()
-
- def test_events_list_err_rpcerr(self):
- stack_name = "wordpress"
- identity = dict(identifier.HeatIdentifier('t', stack_name, '6'))
- params = {'Action': 'DescribeStackEvents', 'StackName': stack_name}
- dummy_req = self._dummy_GET_request(params)
-
- # Insert an engine RPC error and ensure we map correctly to the
- # heat exception type
- self.m.StubOutWithMock(rpc, 'call')
- rpc.call(dummy_req.context, self.topic,
- {'namespace': None,
- 'method': 'identify_stack',
- 'args': {'stack_name': stack_name},
- 'version': self.api_version}, None).AndReturn(identity)
- rpc.call(dummy_req.context, self.topic,
- {'namespace': None,
- 'method': 'list_events',
- 'args': {'stack_identity': identity},
- 'version': self.api_version}, None
- ).AndRaise(Exception())
-
- self.m.ReplayAll()
-
- result = self.controller.events_list(dummy_req)
-
- self.assertEqual(type(result), exception.HeatInternalFailureError)
- self.m.VerifyAll()
-
- def test_events_list_bad_name(self):
- stack_name = "wibble"
- params = {'Action': 'DescribeStackEvents', 'StackName': stack_name}
- dummy_req = self._dummy_GET_request(params)
-
- # Insert an engine RPC error and ensure we map correctly to the
- # heat exception type
- self.m.StubOutWithMock(rpc, 'call')
- rpc.call(dummy_req.context, self.topic,
- {'namespace': None,
- 'method': 'identify_stack',
- 'args': {'stack_name': stack_name},
- 'version': self.api_version}, None
- ).AndRaise(heat_exception.StackNotFound())
-
- self.m.ReplayAll()
-
- result = self.controller.events_list(dummy_req)
- self.assertEqual(type(result),
- exception.HeatInvalidParameterValueError)
- self.m.VerifyAll()
-
- def test_describe_stack_resource(self):
- # Format a dummy request
- stack_name = "wordpress"
- identity = dict(identifier.HeatIdentifier('t', stack_name, '6'))
- params = {'Action': 'DescribeStackResource',
- 'StackName': stack_name,
- 'LogicalResourceId': "WikiDatabase"}
- dummy_req = self._dummy_GET_request(params)
-
- # Stub out the RPC call to the engine with a pre-canned response
- engine_resp = {u'description': u'',
- u'resource_identity': {
- u'tenant': u't',
- u'stack_name': u'wordpress',
- u'stack_id': u'6',
- u'path': u'resources/WikiDatabase'
- },
- u'stack_name': u'wordpress',
- u'resource_name': u'WikiDatabase',
- u'resource_status_reason': None,
- u'updated_time': u'2012-07-23T13:06:00Z',
- u'stack_identity': {u'tenant': u't',
- u'stack_name': u'wordpress',
- u'stack_id': u'6',
- u'path': u''},
- u'resource_action': u'CREATE',
- u'resource_status': u'COMPLETE',
- u'physical_resource_id':
- u'a3455d8c-9f88-404d-a85b-5315293e67de',
- u'resource_type': u'AWS::EC2::Instance',
- u'metadata': {u'wordpress': []}}
-
- self.m.StubOutWithMock(rpc, 'call')
- rpc.call(dummy_req.context, self.topic,
- {'namespace': None,
- 'method': 'identify_stack',
- 'args': {'stack_name': stack_name},
- 'version': self.api_version}, None).AndReturn(identity)
- args = {
- 'stack_identity': identity,
- 'resource_name': dummy_req.params.get('LogicalResourceId'),
- }
- rpc.call(dummy_req.context, self.topic,
- {'namespace': None,
- 'method': 'describe_stack_resource',
- 'args': args,
- 'version': self.api_version}, None).AndReturn(engine_resp)
-
- self.m.ReplayAll()
-
- response = self.controller.describe_stack_resource(dummy_req)
-
- expected = {'DescribeStackResourceResponse':
- {'DescribeStackResourceResult':
- {'StackResourceDetail':
- {'StackId': u'arn:openstack:heat::t:stacks/wordpress/6',
- 'ResourceStatus': u'CREATE_COMPLETE',
- 'Description': u'',
- 'ResourceType': u'AWS::EC2::Instance',
- 'ResourceStatusReason': None,
- 'LastUpdatedTimestamp': u'2012-07-23T13:06:00Z',
- 'StackName': u'wordpress',
- 'PhysicalResourceId':
- u'a3455d8c-9f88-404d-a85b-5315293e67de',
- 'Metadata': {u'wordpress': []},
- 'LogicalResourceId': u'WikiDatabase'}}}}
-
- self.assertEqual(response, expected)
- self.m.VerifyAll()
-
- def test_describe_stack_resource_nonexistent_stack(self):
- # Format a dummy request
- stack_name = "wibble"
- identity = dict(identifier.HeatIdentifier('t', stack_name, '6'))
- params = {'Action': 'DescribeStackResource',
- 'StackName': stack_name,
- 'LogicalResourceId': "WikiDatabase"}
- dummy_req = self._dummy_GET_request(params)
-
- # Stub out the RPC call to the engine with a pre-canned response
- self.m.StubOutWithMock(rpc, 'call')
- rpc.call(dummy_req.context, self.topic,
- {'namespace': None,
- 'method': 'identify_stack',
- 'args': {'stack_name': stack_name},
- 'version': self.api_version},
- None).AndRaise(heat_exception.StackNotFound())
-
- self.m.ReplayAll()
-
- result = self.controller.describe_stack_resource(dummy_req)
- self.assertEqual(type(result),
- exception.HeatInvalidParameterValueError)
- self.m.VerifyAll()
-
- def test_describe_stack_resource_nonexistent(self):
- # Format a dummy request
- stack_name = "wordpress"
- identity = dict(identifier.HeatIdentifier('t', stack_name, '6'))
- params = {'Action': 'DescribeStackResource',
- 'StackName': stack_name,
- 'LogicalResourceId': "wibble"}
- dummy_req = self._dummy_GET_request(params)
-
- # Stub out the RPC call to the engine with a pre-canned response
- self.m.StubOutWithMock(rpc, 'call')
- rpc.call(dummy_req.context, self.topic,
- {'namespace': None,
- 'method': 'identify_stack',
- 'args': {'stack_name': stack_name},
- 'version': self.api_version}, None).AndReturn(identity)
- args = {
- 'stack_identity': identity,
- 'resource_name': dummy_req.params.get('LogicalResourceId'),
- }
- rpc.call(dummy_req.context, self.topic,
- {'namespace': None,
- 'method': 'describe_stack_resource',
- 'args': args,
- 'version': self.api_version},
- None).AndRaise(heat_exception.ResourceNotFound())
-
- self.m.ReplayAll()
-
- result = self.controller.describe_stack_resource(dummy_req)
- self.assertEqual(type(result),
- exception.HeatInvalidParameterValueError)
- self.m.VerifyAll()
-
- def test_describe_stack_resources(self):
- # Format a dummy request
- stack_name = "wordpress"
- identity = dict(identifier.HeatIdentifier('t', stack_name, '6'))
- params = {'Action': 'DescribeStackResources',
- 'StackName': stack_name,
- 'LogicalResourceId': "WikiDatabase"}
- dummy_req = self._dummy_GET_request(params)
-
- # Stub out the RPC call to the engine with a pre-canned response
- engine_resp = [{u'description': u'',
- u'resource_identity': {
- u'tenant': u't',
- u'stack_name': u'wordpress',
- u'stack_id': u'6',
- u'path': u'resources/WikiDatabase'
- },
- u'stack_name': u'wordpress',
- u'resource_name': u'WikiDatabase',
- u'resource_status_reason': None,
- u'updated_time': u'2012-07-23T13:06:00Z',
- u'stack_identity': {u'tenant': u't',
- u'stack_name': u'wordpress',
- u'stack_id': u'6',
- u'path': u''},
- u'resource_action': u'CREATE',
- u'resource_status': u'COMPLETE',
- u'physical_resource_id':
- u'a3455d8c-9f88-404d-a85b-5315293e67de',
- u'resource_type': u'AWS::EC2::Instance',
-                        u'metadata': {u'ensureRunning': u'true'}}]
-
- self.m.StubOutWithMock(rpc, 'call')
- rpc.call(dummy_req.context, self.topic,
- {'namespace': None,
- 'method': 'identify_stack',
- 'args': {'stack_name': stack_name},
- 'version': self.api_version}, None).AndReturn(identity)
- args = {
- 'stack_identity': identity,
- 'resource_name': dummy_req.params.get('LogicalResourceId'),
- }
- rpc.call(dummy_req.context, self.topic,
- {'namespace': None,
- 'method': 'describe_stack_resources',
- 'args': args,
- 'version': self.api_version}, None).AndReturn(engine_resp)
-
- self.m.ReplayAll()
-
- response = self.controller.describe_stack_resources(dummy_req)
-
- expected = {'DescribeStackResourcesResponse':
- {'DescribeStackResourcesResult':
- {'StackResources':
- [{'StackId': u'arn:openstack:heat::t:stacks/wordpress/6',
- 'ResourceStatus': u'CREATE_COMPLETE',
- 'Description': u'',
- 'ResourceType': u'AWS::EC2::Instance',
- 'Timestamp': u'2012-07-23T13:06:00Z',
- 'ResourceStatusReason': None,
- 'StackName': u'wordpress',
- 'PhysicalResourceId':
- u'a3455d8c-9f88-404d-a85b-5315293e67de',
- 'LogicalResourceId': u'WikiDatabase'}]}}}
-
- self.assertEqual(response, expected)
- self.m.VerifyAll()
-
- def test_describe_stack_resources_bad_name(self):
- stack_name = "wibble"
- params = {'Action': 'DescribeStackResources',
- 'StackName': stack_name,
- 'LogicalResourceId': "WikiDatabase"}
- dummy_req = self._dummy_GET_request(params)
-
- # Insert an engine RPC error and ensure we map correctly to the
- # heat exception type
- self.m.StubOutWithMock(rpc, 'call')
- rpc.call(dummy_req.context, self.topic,
- {'namespace': None,
- 'method': 'identify_stack',
- 'args': {'stack_name': stack_name},
- 'version': self.api_version}, None
- ).AndRaise(heat_exception.StackNotFound())
-
- self.m.ReplayAll()
-
- result = self.controller.describe_stack_resources(dummy_req)
- self.assertEqual(type(result),
- exception.HeatInvalidParameterValueError)
- self.m.VerifyAll()
-
- def test_describe_stack_resources_physical(self):
- # Format a dummy request
- stack_name = "wordpress"
- identity = dict(identifier.HeatIdentifier('t', stack_name, '6'))
- params = {'Action': 'DescribeStackResources',
- 'LogicalResourceId': "WikiDatabase",
- 'PhysicalResourceId': 'a3455d8c-9f88-404d-a85b-5315293e67de'}
- dummy_req = self._dummy_GET_request(params)
-
- # Stub out the RPC call to the engine with a pre-canned response
- engine_resp = [{u'description': u'',
- u'resource_identity': {
- u'tenant': u't',
- u'stack_name': u'wordpress',
- u'stack_id': u'6',
- u'path': u'resources/WikiDatabase'
- },
- u'stack_name': u'wordpress',
- u'resource_name': u'WikiDatabase',
- u'resource_status_reason': None,
- u'updated_time': u'2012-07-23T13:06:00Z',
- u'stack_identity': {u'tenant': u't',
- u'stack_name': u'wordpress',
- u'stack_id': u'6',
- u'path': u''},
- u'resource_action': u'CREATE',
- u'resource_status': u'COMPLETE',
- u'physical_resource_id':
- u'a3455d8c-9f88-404d-a85b-5315293e67de',
- u'resource_type': u'AWS::EC2::Instance',
-                        u'metadata': {u'ensureRunning': u'true'}}]
-
- self.m.StubOutWithMock(rpc, 'call')
- rpc.call(dummy_req.context, self.topic,
- {'namespace': None,
- 'method': 'find_physical_resource',
- 'args': {'physical_resource_id':
- 'a3455d8c-9f88-404d-a85b-5315293e67de'},
- 'version': self.api_version}, None).AndReturn(identity)
- args = {
- 'stack_identity': identity,
- 'resource_name': dummy_req.params.get('LogicalResourceId'),
- }
- rpc.call(dummy_req.context, self.topic,
- {'namespace': None,
- 'method': 'describe_stack_resources',
- 'args': args,
- 'version': self.api_version}, None).AndReturn(engine_resp)
-
- self.m.ReplayAll()
-
- response = self.controller.describe_stack_resources(dummy_req)
-
- expected = {'DescribeStackResourcesResponse':
- {'DescribeStackResourcesResult':
- {'StackResources':
- [{'StackId': u'arn:openstack:heat::t:stacks/wordpress/6',
- 'ResourceStatus': u'CREATE_COMPLETE',
- 'Description': u'',
- 'ResourceType': u'AWS::EC2::Instance',
- 'Timestamp': u'2012-07-23T13:06:00Z',
- 'ResourceStatusReason': None,
- 'StackName': u'wordpress',
- 'PhysicalResourceId':
- u'a3455d8c-9f88-404d-a85b-5315293e67de',
- 'LogicalResourceId': u'WikiDatabase'}]}}}
-
- self.assertEqual(response, expected)
- self.m.VerifyAll()
-
- def test_describe_stack_resources_physical_not_found(self):
- # Format a dummy request
- stack_name = "wordpress"
- identity = dict(identifier.HeatIdentifier('t', stack_name, '6'))
- params = {'Action': 'DescribeStackResources',
- 'LogicalResourceId': "WikiDatabase",
- 'PhysicalResourceId': 'aaaaaaaa-9f88-404d-cccc-ffffffffffff'}
- dummy_req = self._dummy_GET_request(params)
-
- # Stub out the RPC call to the engine with a pre-canned response
- self.m.StubOutWithMock(rpc, 'call')
- rpc.call(dummy_req.context, self.topic,
- {'namespace': None,
- 'method': 'find_physical_resource',
- 'args': {'physical_resource_id':
- 'aaaaaaaa-9f88-404d-cccc-ffffffffffff'},
- 'version': self.api_version},
- None).AndRaise(
- heat_exception.PhysicalResourceNotFound())
-
- self.m.ReplayAll()
-
- response = self.controller.describe_stack_resources(dummy_req)
-
- self.assertEqual(type(response),
- exception.HeatInvalidParameterValueError)
- self.m.VerifyAll()
-
- def test_describe_stack_resources_err_inval(self):
-        # Format a dummy request containing both StackName and
-        # PhysicalResourceId, which is an invalid combination and should
-        # result in a HeatInvalidParameterCombinationError
- stack_name = "wordpress"
- params = {'Action': 'DescribeStackResources',
- 'StackName': stack_name,
- 'PhysicalResourceId': "123456"}
- dummy_req = self._dummy_GET_request(params)
- ret = self.controller.describe_stack_resources(dummy_req)
- self.assertEqual(type(ret),
- exception.HeatInvalidParameterCombinationError)
- self.m.VerifyAll()
-
- def test_list_stack_resources(self):
- # Format a dummy request
- stack_name = "wordpress"
- identity = dict(identifier.HeatIdentifier('t', stack_name, '6'))
- params = {'Action': 'ListStackResources',
- 'StackName': stack_name}
- dummy_req = self._dummy_GET_request(params)
-
- # Stub out the RPC call to the engine with a pre-canned response
- engine_resp = [{u'resource_identity':
- {u'tenant': u't',
- u'stack_name': u'wordpress',
- u'stack_id': u'6',
- u'path': u'/resources/WikiDatabase'},
- u'stack_name': u'wordpress',
- u'resource_name': u'WikiDatabase',
- u'resource_status_reason': None,
- u'updated_time': u'2012-07-23T13:06:00Z',
- u'stack_identity': {u'tenant': u't',
- u'stack_name': u'wordpress',
- u'stack_id': u'6',
- u'path': u''},
- u'resource_action': u'CREATE',
- u'resource_status': u'COMPLETE',
- u'physical_resource_id':
- u'a3455d8c-9f88-404d-a85b-5315293e67de',
- u'resource_type': u'AWS::EC2::Instance'}]
-
- self.m.StubOutWithMock(rpc, 'call')
- rpc.call(dummy_req.context, self.topic,
- {'namespace': None,
- 'method': 'identify_stack',
- 'args': {'stack_name': stack_name},
- 'version': self.api_version}, None).AndReturn(identity)
- rpc.call(dummy_req.context, self.topic,
- {'namespace': None,
- 'method': 'list_stack_resources',
- 'args': {'stack_identity': identity},
- 'version': self.api_version}, None).AndReturn(engine_resp)
-
- self.m.ReplayAll()
-
- response = self.controller.list_stack_resources(dummy_req)
-
- expected = {'ListStackResourcesResponse': {'ListStackResourcesResult':
- {'StackResourceSummaries':
- [{'ResourceStatus': u'CREATE_COMPLETE',
- 'ResourceType': u'AWS::EC2::Instance',
- 'ResourceStatusReason': None,
- 'LastUpdatedTimestamp': u'2012-07-23T13:06:00Z',
- 'PhysicalResourceId':
- u'a3455d8c-9f88-404d-a85b-5315293e67de',
- 'LogicalResourceId': u'WikiDatabase'}]}}}
-
- self.assertEqual(response, expected)
- self.m.VerifyAll()
-
- def test_list_stack_resources_bad_name(self):
- stack_name = "wibble"
- params = {'Action': 'ListStackResources',
- 'StackName': stack_name}
- dummy_req = self._dummy_GET_request(params)
-
- # Insert an engine RPC error and ensure we map correctly to the
- # heat exception type
- self.m.StubOutWithMock(rpc, 'call')
- rpc.call(dummy_req.context, self.topic,
- {'namespace': None,
- 'method': 'identify_stack',
- 'args': {'stack_name': stack_name},
- 'version': self.api_version}, None
- ).AndRaise(heat_exception.StackNotFound())
-
- self.m.ReplayAll()
-
- result = self.controller.list_stack_resources(dummy_req)
- self.assertEqual(type(result),
- exception.HeatInvalidParameterValueError)
- self.m.VerifyAll()
-
- def setUp(self):
- super(CfnStackControllerTest, self).setUp()
-
- opts = [
- cfg.StrOpt('config_dir', default=policy_path),
- cfg.StrOpt('config_file', default='foo'),
- cfg.StrOpt('project', default='heat'),
- ]
- cfg.CONF.register_opts(opts)
- cfg.CONF.set_default('host', 'host')
- self.topic = rpc_api.ENGINE_TOPIC
- self.api_version = '1.0'
-
- # Create WSGI controller instance
- class DummyConfig():
- bind_port = 8000
- cfgopts = DummyConfig()
- self.controller = stacks.StackController(options=cfgopts)
- self.controller.policy.enforcer.policy_path = (policy_path +
- 'deny_stack_user.json')
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import os
-
-from oslo.config import cfg
-
-from heat.common import policy
-from heat.openstack.common import rpc
-from heat.common.wsgi import Request
-from heat.api.aws import exception
-import heat.api.cloudwatch.watch as watches
-from heat.rpc import api as engine_api
-from heat.tests.common import HeatTestCase
-from heat.tests import utils
-
-
-class WatchControllerTest(HeatTestCase):
- '''
- Tests the API class that acts as the WSGI controller, i.e. the
- endpoint which processes API requests after they are routed
- '''
-
- def _dummy_GET_request(self, params=None):
- # Mangle the params dict into a query string (avoid a mutable
- # default argument, which would be shared between calls)
- params = params or {}
- qs = "&".join(["=".join([k, str(params[k])]) for k in params])
- environ = {'REQUEST_METHOD': 'GET', 'QUERY_STRING': qs}
- req = Request(environ)
- req.context = utils.dummy_context()
- return req
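The helper above builds dummy GET requests by joining the params dict into a raw query string. A standalone sketch of that mangling (note that nothing is URL-encoded, which is why the tests stick to simple parameter values; `MaxRecords` here is just an illustrative key):

```python
# Sketch of the query-string mangling used by _dummy_GET_request: join
# key=value pairs with '&', coercing each value to str. Nothing is
# URL-encoded, so only simple parameter values round-trip safely.
params = {'Action': 'ListMetrics', 'MaxRecords': 10}
qs = "&".join(["=".join([k, str(params[k])]) for k in params])

# Dict iteration order is arbitrary on Python 2, so compare sorted pairs.
assert sorted(qs.split("&")) == ['Action=ListMetrics', 'MaxRecords=10']
```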
-
- # The tests
- def test_reformat_dimensions(self):
-
- dims = [{'StackId': u'21617058-781e-4262-97ab-5f9df371ee52',
- 'Foo': 'bar'}]
- self.assertEqual([{'Name': 'StackId',
- 'Value': u'21617058-781e-4262-97ab-5f9df371ee52'},
- {'Name': 'Foo', 'Value': 'bar'}],
- self.controller._reformat_dimensions(dims)
- )
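The expected value shows the transformation under test: a list of plain key/value dicts becomes a list of AWS-style Name/Value pairs. A hypothetical re-implementation, assuming the real `_reformat_dimensions` in the watch controller behaves the same way (output order follows dict iteration order):

```python
def reformat_dimensions(dims):
    # Flatten each {key: value} dict into AWS-style
    # [{'Name': key, 'Value': value}] entries.
    return [{'Name': k, 'Value': v}
            for d in dims
            for k, v in d.items()]

# A single-key input avoids any dependence on dict iteration order.
assert reformat_dimensions([{'Foo': 'bar'}]) == [
    {'Name': 'Foo', 'Value': 'bar'}]
```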
-
- def test_enforce_default(self):
- self.m.ReplayAll()
- params = {'Action': 'ListMetrics'}
- dummy_req = self._dummy_GET_request(params)
- self.controller.policy.policy_path = None
- response = self.controller._enforce(dummy_req, 'ListMetrics')
- self.assertEqual(response, None)
- self.m.VerifyAll()
-
- def test_enforce_denied(self):
- self.m.ReplayAll()
- params = {'Action': 'ListMetrics'}
- dummy_req = self._dummy_GET_request(params)
- dummy_req.context.roles = ['heat_stack_user']
- self.controller.policy.policy_path = (self.policy_path +
- 'deny_stack_user.json')
- self.assertRaises(exception.HeatAccessDeniedError,
- self.controller._enforce, dummy_req, 'ListMetrics')
- self.m.VerifyAll()
-
- def test_enforce_ise(self):
- params = {'Action': 'ListMetrics'}
- dummy_req = self._dummy_GET_request(params)
- dummy_req.context.roles = ['heat_stack_user']
-
- self.m.StubOutWithMock(policy.Enforcer, 'enforce')
- policy.Enforcer.enforce(dummy_req.context, 'ListMetrics', {}
- ).AndRaise(AttributeError)
- self.m.ReplayAll()
-
- self.controller.policy.policy_path = (self.policy_path +
- 'deny_stack_user.json')
- self.assertRaises(exception.HeatInternalFailureError,
- self.controller._enforce, dummy_req, 'ListMetrics')
- self.m.VerifyAll()
-
- def test_delete(self):
- # Not yet implemented; should return HeatAPINotImplementedError
- params = {'Action': 'DeleteAlarms'}
- dummy_req = self._dummy_GET_request(params)
- result = self.controller.delete_alarms(dummy_req)
- self.assertEqual(exception.HeatAPINotImplementedError, type(result))
-
- def test_describe_alarm_history(self):
- # Not yet implemented; should return HeatAPINotImplementedError
- params = {'Action': 'DescribeAlarmHistory'}
- dummy_req = self._dummy_GET_request(params)
- result = self.controller.describe_alarm_history(dummy_req)
- self.assertEqual(exception.HeatAPINotImplementedError, type(result))
-
- def test_describe_all(self):
- watch_name = None # Get all watches
-
- # Format a dummy GET request to pass into the WSGI handler
- params = {'Action': 'DescribeAlarms'}
- dummy_req = self._dummy_GET_request(params)
-
- # Stub out the RPC call to the engine with a pre-canned response
- engine_resp = [{u'state_updated_time': u'2012-08-30T14:13:21Z',
- u'stack_id': u'21617058-781e-4262-97ab-5f9df371ee52',
- u'period': u'300',
- u'actions': [u'WebServerRestartPolicy'],
- u'topic': None,
- u'periods': u'1',
- u'statistic': u'SampleCount',
- u'threshold': u'2',
- u'unit': None,
- u'state_reason': None,
- u'dimensions': [],
- u'namespace': u'system/linux',
- u'state_value': u'NORMAL',
- u'ok_actions': None,
- u'description': u'Restart the WikiDatabase',
- u'actions_enabled': None,
- u'state_reason_data': None,
- u'insufficient_actions': None,
- u'metric_name': u'ServiceFailure',
- u'comparison': u'GreaterThanThreshold',
- u'name': u'HttpFailureAlarm',
- u'updated_time': u'2012-08-30T14:10:46Z'}]
-
- self.m.StubOutWithMock(rpc, 'call')
- rpc.call(dummy_req.context, self.topic,
- {'namespace': None,
- 'args': {'watch_name': watch_name},
- 'method': 'show_watch',
- 'version': self.api_version},
- None).AndReturn(engine_resp)
-
- self.m.ReplayAll()
-
- expected = {'DescribeAlarmsResponse': {'DescribeAlarmsResult':
- {'MetricAlarms': [
- {'EvaluationPeriods': u'1',
- 'StateReasonData': None,
- 'AlarmArn': None,
- 'StateUpdatedTimestamp': u'2012-08-30T14:13:21Z',
- 'AlarmConfigurationUpdatedTimestamp':
- u'2012-08-30T14:10:46Z',
- 'AlarmActions': [u'WebServerRestartPolicy'],
- 'Threshold': u'2',
- 'AlarmDescription': u'Restart the WikiDatabase',
- 'Namespace': u'system/linux',
- 'Period': u'300',
- 'StateValue': u'NORMAL',
- 'ComparisonOperator': u'GreaterThanThreshold',
- 'AlarmName': u'HttpFailureAlarm',
- 'Unit': None,
- 'Statistic': u'SampleCount',
- 'StateReason': None,
- 'InsufficientDataActions': None,
- 'OKActions': None,
- 'MetricName': u'ServiceFailure',
- 'ActionsEnabled': None,
- 'Dimensions':
- [{'Name': 'StackId',
- 'Value': u'21617058-781e-4262-97ab-5f9df371ee52'}]
- }]}}}
-
- # Call the list controller function and compare the response
- self.assertEqual(expected, self.controller.describe_alarms(dummy_req))
-
- def test_describe_alarms_for_metric(self):
- # Not yet implemented; should return HeatAPINotImplementedError
- params = {'Action': 'DescribeAlarmsForMetric'}
- dummy_req = self._dummy_GET_request(params)
- result = self.controller.describe_alarms_for_metric(dummy_req)
- self.assertEqual(exception.HeatAPINotImplementedError, type(result))
-
- def test_disable_alarm_actions(self):
- # Not yet implemented; should return HeatAPINotImplementedError
- params = {'Action': 'DisableAlarmActions'}
- dummy_req = self._dummy_GET_request(params)
- result = self.controller.disable_alarm_actions(dummy_req)
- self.assertEqual(exception.HeatAPINotImplementedError, type(result))
-
- def test_enable_alarm_actions(self):
- # Not yet implemented; should return HeatAPINotImplementedError
- params = {'Action': 'EnableAlarmActions'}
- dummy_req = self._dummy_GET_request(params)
- result = self.controller.enable_alarm_actions(dummy_req)
- self.assertEqual(exception.HeatAPINotImplementedError, type(result))
-
- def test_get_metric_statistics(self):
- # Not yet implemented; should return HeatAPINotImplementedError
- params = {'Action': 'GetMetricStatistics'}
- dummy_req = self._dummy_GET_request(params)
- result = self.controller.get_metric_statistics(dummy_req)
- self.assertEqual(exception.HeatAPINotImplementedError, type(result))
-
- def test_list_metrics_all(self):
- params = {'Action': 'ListMetrics'}
- dummy_req = self._dummy_GET_request(params)
-
- # Stub out the RPC call to the engine with a pre-canned response
- # We dummy three different metrics and namespaces to test
- # filtering by parameter
- engine_resp = [{u'timestamp': u'2012-08-30T15:09:02Z',
- u'watch_name': u'HttpFailureAlarm',
- u'namespace': u'system/linux',
- u'metric_name': u'ServiceFailure',
- u'data': {u'Units': u'Counter', u'Value': 1}},
-
- {u'timestamp': u'2012-08-30T15:10:03Z',
- u'watch_name': u'HttpFailureAlarm2',
- u'namespace': u'system/linux2',
- u'metric_name': u'ServiceFailure2',
- u'data': {u'Units': u'Counter', u'Value': 1}},
-
- {u'timestamp': u'2012-08-30T15:16:03Z',
- u'watch_name': u'HttpFailureAlar3m',
- u'namespace': u'system/linux3',
- u'metric_name': u'ServiceFailure3',
- u'data': {u'Units': u'Counter', u'Value': 1}}]
-
- self.m.StubOutWithMock(rpc, 'call')
- # The current engine implementation filters in the API: we pass
- # None/None for namespace/watch_name, which returns all metric
- # data that we then post-process in the API
- rpc.call(dummy_req.context, self.topic,
- {'namespace': None,
- 'args': {'metric_namespace': None, 'metric_name': None},
- 'method': 'show_watch_metric',
- 'version': self.api_version},
- None).AndReturn(engine_resp)
-
- self.m.ReplayAll()
-
- expected = {'ListMetricsResponse':
- {'ListMetricsResult':
- {'Metrics': [{'Namespace': u'system/linux',
- 'Dimensions':
- [{'Name': 'AlarmName',
- 'Value': u'HttpFailureAlarm'},
- {'Name': 'Timestamp',
- 'Value': u'2012-08-30T15:09:02Z'},
- {'Name': u'Units',
- 'Value': u'Counter'},
- {'Name': u'Value',
- 'Value': 1}],
- 'MetricName': u'ServiceFailure'},
- {'Namespace': u'system/linux2',
- 'Dimensions':
- [{'Name': 'AlarmName',
- 'Value': u'HttpFailureAlarm2'},
- {'Name': 'Timestamp',
- 'Value': u'2012-08-30T15:10:03Z'},
- {'Name': u'Units',
- 'Value': u'Counter'},
- {'Name': u'Value',
- 'Value': 1}],
- 'MetricName': u'ServiceFailure2'},
- {'Namespace': u'system/linux3',
- 'Dimensions':
- [{'Name': 'AlarmName',
- 'Value': u'HttpFailureAlar3m'},
- {'Name': 'Timestamp',
- 'Value': u'2012-08-30T15:16:03Z'},
- {'Name': u'Units',
- 'Value': u'Counter'},
- {'Name': u'Value',
- 'Value': 1}],
- 'MetricName': u'ServiceFailure3'}]}}}
-
- # First pass, no query parameter filtering; we should get all three
- self.assertEqual(expected, self.controller.list_metrics(dummy_req))
-
- def test_list_metrics_filter_name(self):
-
- # Add a MetricName filter, so we should only get one of the three
- params = {'Action': 'ListMetrics',
- 'MetricName': 'ServiceFailure'}
- dummy_req = self._dummy_GET_request(params)
-
- # Stub out the RPC call to the engine with a pre-canned response
- # We dummy three different metrics and namespaces to test
- # filtering by parameter
- engine_resp = [{u'timestamp': u'2012-08-30T15:09:02Z',
- u'watch_name': u'HttpFailureAlarm',
- u'namespace': u'system/linux',
- u'metric_name': u'ServiceFailure',
- u'data': {u'Units': u'Counter', u'Value': 1}},
-
- {u'timestamp': u'2012-08-30T15:10:03Z',
- u'watch_name': u'HttpFailureAlarm2',
- u'namespace': u'system/linux2',
- u'metric_name': u'ServiceFailure2',
- u'data': {u'Units': u'Counter', u'Value': 1}},
-
- {u'timestamp': u'2012-08-30T15:16:03Z',
- u'watch_name': u'HttpFailureAlar3m',
- u'namespace': u'system/linux3',
- u'metric_name': u'ServiceFailure3',
- u'data': {u'Units': u'Counter', u'Value': 1}}]
-
- self.m.StubOutWithMock(rpc, 'call')
- # The current engine implementation filters in the API: we pass
- # None/None for namespace/watch_name, which returns all metric
- # data that we then post-process in the API
- rpc.call(dummy_req.context, self.topic, {'args':
- {'metric_namespace': None,
- 'metric_name': None},
- 'namespace': None,
- 'method': 'show_watch_metric',
- 'version': self.api_version},
- None).AndReturn(engine_resp)
-
- self.m.ReplayAll()
-
- expected = {'ListMetricsResponse':
- {'ListMetricsResult':
- {'Metrics':
- [{'Namespace': u'system/linux',
- 'Dimensions':
- [{'Name': 'AlarmName',
- 'Value': u'HttpFailureAlarm'},
- {'Name': 'Timestamp',
- 'Value': u'2012-08-30T15:09:02Z'},
- {'Name': u'Units',
- 'Value': u'Counter'},
- {'Name': u'Value',
- 'Value': 1}],
- 'MetricName': u'ServiceFailure'}]}}}
- # With the MetricName filter applied we should get only the first
- self.assertEqual(expected, self.controller.list_metrics(dummy_req))
-
- def test_list_metrics_filter_namespace(self):
-
- # Add a Namespace filter and change the engine response so
- # we should get two responses
- params = {'Action': 'ListMetrics',
- 'Namespace': 'atestnamespace/foo'}
- dummy_req = self._dummy_GET_request(params)
-
- # Stub out the RPC call to the engine with a pre-canned response
- # We dummy three different metrics and namespaces to test
- # filtering by parameter
- engine_resp = [{u'timestamp': u'2012-08-30T15:09:02Z',
- u'watch_name': u'HttpFailureAlarm',
- u'namespace': u'atestnamespace/foo',
- u'metric_name': u'ServiceFailure',
- u'data': {u'Units': u'Counter', u'Value': 1}},
-
- {u'timestamp': u'2012-08-30T15:10:03Z',
- u'watch_name': u'HttpFailureAlarm2',
- u'namespace': u'atestnamespace/foo',
- u'metric_name': u'ServiceFailure2',
- u'data': {u'Units': u'Counter', u'Value': 1}},
-
- {u'timestamp': u'2012-08-30T15:16:03Z',
- u'watch_name': u'HttpFailureAlar3m',
- u'namespace': u'system/linux3',
- u'metric_name': u'ServiceFailure3',
- u'data': {u'Units': u'Counter', u'Value': 1}}]
-
- self.m.StubOutWithMock(rpc, 'call')
- # The current engine implementation filters in the API: we pass
- # None/None for namespace/watch_name, which returns all metric
- # data that we then post-process in the API
- rpc.call(dummy_req.context, self.topic,
- {'args': {'metric_namespace': None, 'metric_name': None},
- 'namespace': None,
- 'method': 'show_watch_metric',
- 'version': self.api_version},
- None).AndReturn(engine_resp)
-
- self.m.ReplayAll()
-
- expected = {'ListMetricsResponse':
- {'ListMetricsResult':
- {'Metrics':
- [{'Namespace': u'atestnamespace/foo',
- 'Dimensions':
- [{'Name': 'AlarmName',
- 'Value': u'HttpFailureAlarm'},
- {'Name': 'Timestamp',
- 'Value': u'2012-08-30T15:09:02Z'},
- {'Name': u'Units',
- 'Value': u'Counter'},
- {'Name': u'Value',
- 'Value': 1}],
- 'MetricName': u'ServiceFailure'},
- {'Namespace': u'atestnamespace/foo',
- 'Dimensions':
- [{'Name': 'AlarmName',
- 'Value': u'HttpFailureAlarm2'},
- {'Name': 'Timestamp',
- 'Value': u'2012-08-30T15:10:03Z'},
- {'Name': u'Units',
- 'Value': u'Counter'},
- {'Name': u'Value',
- 'Value': 1}],
- 'MetricName': u'ServiceFailure2'}]}}}
- self.assertEqual(expected, self.controller.list_metrics(dummy_req))
-
- def test_put_metric_alarm(self):
- # Not yet implemented; should return HeatAPINotImplementedError
- params = {'Action': 'PutMetricAlarm'}
- dummy_req = self._dummy_GET_request(params)
- result = self.controller.put_metric_alarm(dummy_req)
- self.assertEqual(type(result), exception.HeatAPINotImplementedError)
-
- def test_put_metric_data(self):
-
- params = {u'Namespace': u'system/linux',
- u'MetricData.member.1.Unit': u'Count',
- u'MetricData.member.1.Value': u'1',
- u'MetricData.member.1.MetricName': u'ServiceFailure',
- u'MetricData.member.1.Dimensions.member.1.Name':
- u'AlarmName',
- u'MetricData.member.1.Dimensions.member.1.Value':
- u'HttpFailureAlarm',
- u'Action': u'PutMetricData'}
-
- dummy_req = self._dummy_GET_request(params)
-
- # Stub out the RPC call to verify the engine call parameters
- engine_resp = {}
-
- self.m.StubOutWithMock(rpc, 'call')
- rpc.call(dummy_req.context, self.topic,
- {'args':
- {'stats_data':
- {'Namespace': u'system/linux',
- u'ServiceFailure':
- {'Value': u'1',
- 'Unit': u'Count',
- 'Dimensions': []}},
- 'watch_name': u'HttpFailureAlarm'},
- 'namespace': None,
- 'method': 'create_watch_data',
- 'version': self.api_version},
- None).AndReturn(engine_resp)
-
- self.m.ReplayAll()
-
- expected = {'PutMetricDataResponse': {'PutMetricDataResult':
- {'ResponseMetadata': None}}}
- self.assertEqual(expected, self.controller.put_metric_data(dummy_req))
-
- def test_set_alarm_state(self):
- state_map = {'OK': engine_api.WATCH_STATE_OK,
- 'ALARM': engine_api.WATCH_STATE_ALARM,
- 'INSUFFICIENT_DATA': engine_api.WATCH_STATE_NODATA}
-
- for state in state_map.keys():
- params = {u'StateValue': state,
- u'StateReason': u'',
- u'AlarmName': u'HttpFailureAlarm',
- u'Action': u'SetAlarmState'}
-
- dummy_req = self._dummy_GET_request(params)
-
- # Stub out the RPC call to verify the engine call parameters
- # The real engine response is the same as show_watch but with
- # the state overridden; since the API doesn't currently use the
- # response, the stub passes nothing back
- engine_resp = {}
-
- self.m.StubOutWithMock(rpc, 'call')
- rpc.call(dummy_req.context, self.topic,
- {'args':
- {'state': state_map[state],
- 'watch_name': u'HttpFailureAlarm'},
- 'namespace': None,
- 'method': 'set_watch_state',
- 'version': self.api_version},
- None).AndReturn(engine_resp)
-
- self.m.ReplayAll()
-
- expected = {'SetAlarmStateResponse': {'SetAlarmStateResult': ''}}
- self.assertEqual(expected,
- self.controller.set_alarm_state(dummy_req))
-
- self.m.UnsetStubs()
- self.m.VerifyAll()
-
- def test_set_alarm_state_badstate(self):
- params = {u'StateValue': "baaaaad",
- u'StateReason': u'',
- u'AlarmName': u'HttpFailureAlarm',
- u'Action': u'SetAlarmState'}
- dummy_req = self._dummy_GET_request(params)
-
- # should raise HeatInvalidParameterValueError
- result = self.controller.set_alarm_state(dummy_req)
- self.assertEqual(exception.HeatInvalidParameterValueError,
- type(result))
-
- def setUp(self):
- super(WatchControllerTest, self).setUp()
- self.path = os.path.dirname(os.path.realpath(__file__))
- self.policy_path = self.path + "/policy/"
- opts = [
- cfg.StrOpt('config_dir', default=self.policy_path),
- cfg.StrOpt('config_file', default='foo'),
- cfg.StrOpt('project', default='heat'),
- ]
- cfg.CONF.register_opts(opts)
- cfg.CONF.set_default('host', 'host')
- self.topic = engine_api.ENGINE_TOPIC
- self.api_version = '1.0'
-
- # Create WSGI controller instance
- class DummyConfig():
- bind_port = 8003
- cfgopts = DummyConfig()
- self.controller = watches.WatchController(options=cfgopts)
- self.controller.policy.enforcer.policy_path = (self.policy_path +
- 'deny_stack_user.json')
- self.addCleanup(self.m.VerifyAll)
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-
-from heat.tests.common import HeatTestCase
-
-import requests
-import json
-from oslo.config import cfg
-
-from heat.api.aws import exception
-from heat.common.wsgi import Request
-from heat.api.aws import ec2token
-
-
-class Ec2TokenTest(HeatTestCase):
- '''
- Tests the Ec2Token middleware
- '''
-
- def setUp(self):
- super(Ec2TokenTest, self).setUp()
- self.m.StubOutWithMock(requests, 'post')
-
- def _dummy_GET_request(self, params=None, environ=None):
- # Mangle the params dict into a query string. Build fresh dicts
- # rather than using mutable default arguments, which are shared
- # between calls (the original default environ was mutated below)
- params = params or {}
- environ = environ or {}
- qs = "&".join(["=".join([k, str(params[k])]) for k in params])
- environ.update({'REQUEST_METHOD': 'GET', 'QUERY_STRING': qs})
- req = Request(environ)
- return req
-
- def test_conf_get_paste(self):
- dummy_conf = {'auth_uri': 'http://192.0.2.9/v2.0'}
- ec2 = ec2token.EC2Token(app=None, conf=dummy_conf)
- self.assertEqual(ec2._conf_get('auth_uri'), 'http://192.0.2.9/v2.0')
- self.assertEqual(
- ec2._conf_get_keystone_ec2_uri('http://192.0.2.9/v2.0'),
- 'http://192.0.2.9/v2.0/ec2tokens')
-
- def test_conf_get_opts(self):
- cfg.CONF.set_default('auth_uri', 'http://192.0.2.9/v2.0/',
- group='ec2authtoken')
- ec2 = ec2token.EC2Token(app=None, conf={})
- self.assertEqual(ec2._conf_get('auth_uri'), 'http://192.0.2.9/v2.0/')
- self.assertEqual(
- ec2._conf_get_keystone_ec2_uri('http://192.0.2.9/v2.0/'),
- 'http://192.0.2.9/v2.0/ec2tokens')
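Both tests expect `/ec2tokens` appended whether or not `auth_uri` carries a trailing slash. A hypothetical sketch of that join (the real logic lives in `ec2token._conf_get_keystone_ec2_uri`; this is just the behavior the assertions pin down):

```python
def keystone_ec2_uri(auth_uri):
    # Append the ec2tokens path, tolerating an optional trailing slash
    # on the configured keystone auth_uri.
    return auth_uri.rstrip('/') + '/ec2tokens'

assert keystone_ec2_uri('http://192.0.2.9/v2.0') == \
    'http://192.0.2.9/v2.0/ec2tokens'
assert keystone_ec2_uri('http://192.0.2.9/v2.0/') == \
    'http://192.0.2.9/v2.0/ec2tokens'
```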
-
- def test_get_signature_param_old(self):
- params = {'Signature': 'foo'}
- dummy_req = self._dummy_GET_request(params)
- ec2 = ec2token.EC2Token(app=None, conf={})
- self.assertEqual(ec2._get_signature(dummy_req), 'foo')
-
- def test_get_signature_param_new(self):
- params = {'X-Amz-Signature': 'foo'}
- dummy_req = self._dummy_GET_request(params)
- ec2 = ec2token.EC2Token(app=None, conf={})
- self.assertEqual(ec2._get_signature(dummy_req), 'foo')
-
- def test_get_signature_header_space(self):
- req_env = {'HTTP_AUTHORIZATION':
- ('Authorization: foo Credential=foo/bar, '
- 'SignedHeaders=content-type;host;x-amz-date, '
- 'Signature=xyz')}
- dummy_req = self._dummy_GET_request(environ=req_env)
- ec2 = ec2token.EC2Token(app=None, conf={})
- self.assertEqual(ec2._get_signature(dummy_req), 'xyz')
-
- def test_get_signature_header_notlast(self):
- req_env = {'HTTP_AUTHORIZATION':
- ('Authorization: foo Credential=foo/bar, '
- 'Signature=xyz,'
- 'SignedHeaders=content-type;host;x-amz-date ')}
- dummy_req = self._dummy_GET_request(environ=req_env)
- ec2 = ec2token.EC2Token(app=None, conf={})
- self.assertEqual(ec2._get_signature(dummy_req), 'xyz')
-
- def test_get_signature_header_nospace(self):
- req_env = {'HTTP_AUTHORIZATION':
- ('Authorization: foo Credential=foo/bar,'
- 'SignedHeaders=content-type;host;x-amz-date,'
- 'Signature=xyz')}
- dummy_req = self._dummy_GET_request(environ=req_env)
- ec2 = ec2token.EC2Token(app=None, conf={})
- self.assertEqual(ec2._get_signature(dummy_req), 'xyz')
-
- def test_get_access_param_old(self):
- params = {'AWSAccessKeyId': 'foo'}
- dummy_req = self._dummy_GET_request(params)
- ec2 = ec2token.EC2Token(app=None, conf={})
- self.assertEqual(ec2._get_access(dummy_req), 'foo')
-
- def test_get_access_param_new(self):
- params = {'X-Amz-Credential': 'foo/bar'}
- dummy_req = self._dummy_GET_request(params)
- ec2 = ec2token.EC2Token(app=None, conf={})
- self.assertEqual(ec2._get_access(dummy_req), 'foo')
-
- def test_get_access_header_space(self):
- req_env = {'HTTP_AUTHORIZATION':
- ('Authorization: foo Credential=foo/bar, '
- 'SignedHeaders=content-type;host;x-amz-date, '
- 'Signature=xyz')}
- dummy_req = self._dummy_GET_request(environ=req_env)
- ec2 = ec2token.EC2Token(app=None, conf={})
- self.assertEqual(ec2._get_access(dummy_req), 'foo')
-
- def test_get_access_header_nospace(self):
- req_env = {'HTTP_AUTHORIZATION':
- ('Authorization: foo Credential=foo/bar,'
- 'SignedHeaders=content-type;host;x-amz-date,'
- 'Signature=xyz')}
- dummy_req = self._dummy_GET_request(environ=req_env)
- ec2 = ec2token.EC2Token(app=None, conf={})
- self.assertEqual(ec2._get_access(dummy_req), 'foo')
-
- def test_get_access_header_last(self):
- req_env = {'HTTP_AUTHORIZATION':
- ('Authorization: foo '
- 'SignedHeaders=content-type;host;x-amz-date,'
- 'Signature=xyz,Credential=foo/bar')}
- dummy_req = self._dummy_GET_request(environ=req_env)
- ec2 = ec2token.EC2Token(app=None, conf={})
- self.assertEqual(ec2._get_access(dummy_req), 'foo')
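The `_get_signature`/`_get_access` tests above all probe the same behavior: pulling fields out of an AWS V4-style Authorization header regardless of spacing after commas or field position. A hypothetical parser covering those cases (the real extraction lives in the ec2token middleware; this sketch only mirrors what the assertions require):

```python
def parse_auth(header):
    # Collect key=value fields from a V4-style Authorization header,
    # tolerating optional spaces after commas and any field ordering.
    fields = {}
    for part in header.split(','):
        part = part.strip()
        if not part:
            continue
        # Drop any leading 'Authorization: foo ' prefix words.
        kv = part.split()[-1]
        if '=' in kv:
            key, value = kv.split('=', 1)
            fields[key] = value
    # The access key is the Credential up to the first '/'.
    access = fields.get('Credential', '').split('/')[0] or None
    return access, fields.get('Signature')

hdr = ('Authorization: foo Credential=foo/bar,'
       'SignedHeaders=content-type;host;x-amz-date,Signature=xyz')
assert parse_auth(hdr) == ('foo', 'xyz')
```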
-
- def test_call_x_auth_user(self):
- req_env = {'HTTP_X_AUTH_USER': 'foo'}
- dummy_req = self._dummy_GET_request(environ=req_env)
- ec2 = ec2token.EC2Token(app='xyz', conf={})
- self.assertEqual(ec2.__call__(dummy_req), 'xyz')
-
- def test_call_auth_nosig(self):
- req_env = {'HTTP_AUTHORIZATION':
- ('Authorization: foo Credential=foo/bar, '
- 'SignedHeaders=content-type;host;x-amz-date')}
- dummy_req = self._dummy_GET_request(environ=req_env)
- ec2 = ec2token.EC2Token(app='xyz', conf={})
- self.assertRaises(exception.HeatIncompleteSignatureError,
- ec2.__call__, dummy_req)
-
- def test_call_auth_nouser(self):
- req_env = {'HTTP_AUTHORIZATION':
- ('Authorization: foo '
- 'SignedHeaders=content-type;host;x-amz-date,'
- 'Signature=xyz')}
- dummy_req = self._dummy_GET_request(environ=req_env)
- ec2 = ec2token.EC2Token(app='xyz', conf={})
- self.assertRaises(exception.HeatMissingAuthenticationTokenError,
- ec2.__call__, dummy_req)
-
- def test_call_auth_noaccess(self):
- # If there's no access key in params or header, but there is a
- # Signature, we expect HeatMissingAuthenticationTokenError
- params = {'Signature': 'foo'}
- dummy_req = self._dummy_GET_request(params)
- ec2 = ec2token.EC2Token(app='xyz', conf={})
- self.assertRaises(exception.HeatMissingAuthenticationTokenError,
- ec2.__call__, dummy_req)
-
- def test_call_x_auth_nouser_x_auth_user(self):
- req_env = {'HTTP_X_AUTH_USER': 'foo',
- 'HTTP_AUTHORIZATION':
- ('Authorization: foo '
- 'SignedHeaders=content-type;host;x-amz-date,'
- 'Signature=xyz')}
- dummy_req = self._dummy_GET_request(environ=req_env)
- ec2 = ec2token.EC2Token(app='xyz', conf={})
- self.assertEqual(ec2.__call__(dummy_req), 'xyz')
-
- def _stub_http_connection(self, headers={}, params={}, response=None,
- req_url='http://123:5000/v2.0/ec2tokens'):
-
- class DummyHTTPResponse(object):
- text = response
-
- def json(self):
- return json.loads(self.text)
-
- body_hash = ('e3b0c44298fc1c149afbf4c8996fb9'
- '2427ae41e4649b934ca495991b7852b855')
- req_creds = json.dumps({"ec2Credentials":
- {"access": "foo",
- "headers": headers,
- "host": "heat:8000",
- "verb": "GET",
- "params": params,
- "signature": "xyz",
- "path": "/v1",
- "body_hash": body_hash}})
- req_headers = {'Content-Type': 'application/json'}
- requests.post(req_url, data=req_creds,
- headers=req_headers).AndReturn(DummyHTTPResponse())
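The hard-coded `body_hash` above is the SHA-256 digest of an empty request body (these dummy requests are GETs with no payload), which can be verified directly:

```python
import hashlib

# SHA-256 of an empty byte string: the body_hash sent to keystone for a
# GET request with no payload.
empty_body_hash = hashlib.sha256(b'').hexdigest()
assert empty_body_hash == ('e3b0c44298fc1c149afbf4c8996fb9'
                           '2427ae41e4649b934ca495991b7852b855')
```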
-
- def test_call_ok(self):
- dummy_conf = {'auth_uri': 'http://123:5000/v2.0'}
- ec2 = ec2token.EC2Token(app='woot', conf=dummy_conf)
-
- auth_str = ('Authorization: foo Credential=foo/bar, '
- 'SignedHeaders=content-type;host;x-amz-date, '
- 'Signature=xyz')
- req_env = {'SERVER_NAME': 'heat',
- 'SERVER_PORT': '8000',
- 'PATH_INFO': '/v1',
- 'HTTP_AUTHORIZATION': auth_str}
- dummy_req = self._dummy_GET_request(environ=req_env)
-
- ok_resp = json.dumps({'access': {'token': {
- 'id': 123,
- 'tenant': {'name': 'tenant', 'id': 'abcd1234'}}}})
- self._stub_http_connection(headers={'Authorization': auth_str},
- response=ok_resp)
- self.m.ReplayAll()
- self.assertEqual(ec2.__call__(dummy_req), 'woot')
-
- self.assertEqual('tenant', dummy_req.headers['X-Tenant-Name'])
- self.assertEqual('abcd1234', dummy_req.headers['X-Tenant-Id'])
- self.m.VerifyAll()
-
- def test_call_ok_roles(self):
- dummy_conf = {'auth_uri': 'http://123:5000/v2.0'}
- ec2 = ec2token.EC2Token(app='woot', conf=dummy_conf)
-
- auth_str = ('Authorization: foo Credential=foo/bar, '
- 'SignedHeaders=content-type;host;x-amz-date, '
- 'Signature=xyz')
- req_env = {'SERVER_NAME': 'heat',
- 'SERVER_PORT': '8000',
- 'PATH_INFO': '/v1',
- 'HTTP_AUTHORIZATION': auth_str}
- dummy_req = self._dummy_GET_request(environ=req_env)
-
- ok_resp = json.dumps({'access': {
- 'token': {
- 'id': 123,
- 'tenant': {'name': 'tenant', 'id': 'abcd1234'}
- },
- 'metadata': {'roles': ['aa', 'bb', 'cc']}}})
- self._stub_http_connection(headers={'Authorization': auth_str},
- response=ok_resp)
- self.m.ReplayAll()
- self.assertEqual(ec2.__call__(dummy_req), 'woot')
-
- self.assertEqual('aa,bb,cc', dummy_req.headers['X-Roles'])
- self.m.VerifyAll()
-
- def test_call_err_tokenid(self):
- dummy_conf = {'auth_uri': 'http://123:5000/v2.0/'}
- ec2 = ec2token.EC2Token(app='woot', conf=dummy_conf)
-
- auth_str = ('Authorization: foo Credential=foo/bar, '
- 'SignedHeaders=content-type;host;x-amz-date, '
- 'Signature=xyz')
- req_env = {'SERVER_NAME': 'heat',
- 'SERVER_PORT': '8000',
- 'PATH_INFO': '/v1',
- 'HTTP_AUTHORIZATION': auth_str}
- dummy_req = self._dummy_GET_request(environ=req_env)
-
- err_msg = "EC2 access key not found."
- err_resp = json.dumps({'error': {'message': err_msg}})
- self._stub_http_connection(headers={'Authorization': auth_str},
- response=err_resp)
- self.m.ReplayAll()
- self.assertRaises(exception.HeatInvalidClientTokenIdError,
- ec2.__call__, dummy_req)
-
- self.m.VerifyAll()
-
- def test_call_err_signature(self):
- dummy_conf = {'auth_uri': 'http://123:5000/v2.0'}
- ec2 = ec2token.EC2Token(app='woot', conf=dummy_conf)
-
- auth_str = ('Authorization: foo Credential=foo/bar, '
- 'SignedHeaders=content-type;host;x-amz-date, '
- 'Signature=xyz')
- req_env = {'SERVER_NAME': 'heat',
- 'SERVER_PORT': '8000',
- 'PATH_INFO': '/v1',
- 'HTTP_AUTHORIZATION': auth_str}
- dummy_req = self._dummy_GET_request(environ=req_env)
-
- err_msg = "EC2 signature not supplied."
- err_resp = json.dumps({'error': {'message': err_msg}})
- self._stub_http_connection(headers={'Authorization': auth_str},
- response=err_resp)
- self.m.ReplayAll()
- self.assertRaises(exception.HeatSignatureError,
- ec2.__call__, dummy_req)
-
- self.m.VerifyAll()
-
- def test_call_err_denied(self):
- dummy_conf = {'auth_uri': 'http://123:5000/v2.0'}
- ec2 = ec2token.EC2Token(app='woot', conf=dummy_conf)
-
- auth_str = ('Authorization: foo Credential=foo/bar, '
- 'SignedHeaders=content-type;host;x-amz-date, '
- 'Signature=xyz')
- req_env = {'SERVER_NAME': 'heat',
- 'SERVER_PORT': '8000',
- 'PATH_INFO': '/v1',
- 'HTTP_AUTHORIZATION': auth_str}
- dummy_req = self._dummy_GET_request(environ=req_env)
-
- err_resp = json.dumps({})
- self._stub_http_connection(headers={'Authorization': auth_str},
- response=err_resp)
- self.m.ReplayAll()
- self.assertRaises(exception.HeatAccessDeniedError,
- ec2.__call__, dummy_req)
-
- self.m.VerifyAll()
-
- def test_call_ok_v2(self):
- dummy_conf = {'auth_uri': 'http://123:5000/v2.0'}
- ec2 = ec2token.EC2Token(app='woot', conf=dummy_conf)
- params = {'AWSAccessKeyId': 'foo', 'Signature': 'xyz'}
- req_env = {'SERVER_NAME': 'heat',
- 'SERVER_PORT': '8000',
- 'PATH_INFO': '/v1'}
- dummy_req = self._dummy_GET_request(params, req_env)
-
- ok_resp = json.dumps({'access': {'metadata': {}, 'token': {
- 'id': 123,
- 'tenant': {'name': 'tenant', 'id': 'abcd1234'}}}})
- self._stub_http_connection(response=ok_resp,
- params={'AWSAccessKeyId': 'foo'})
- self.m.ReplayAll()
- self.assertEqual(ec2.__call__(dummy_req), 'woot')
-
- self.m.VerifyAll()
-
- def test_call_ok_multicloud(self):
- dummy_conf = {
- 'allowed_auth_uris': [
- 'http://123:5000/v2.0', 'http://456:5000/v2.0'],
- 'multi_cloud': True
- }
- ec2 = ec2token.EC2Token(app='woot', conf=dummy_conf)
- params = {'AWSAccessKeyId': 'foo', 'Signature': 'xyz'}
- req_env = {'SERVER_NAME': 'heat',
- 'SERVER_PORT': '8000',
- 'PATH_INFO': '/v1'}
- dummy_req = self._dummy_GET_request(params, req_env)
-
- ok_resp = json.dumps({'access': {'metadata': {}, 'token': {
- 'id': 123,
- 'tenant': {'name': 'tenant', 'id': 'abcd1234'}}}})
- err_msg = "EC2 access key not found."
- err_resp = json.dumps({'error': {'message': err_msg}})
-
- # first request fails
- self._stub_http_connection(
- req_url='http://123:5000/v2.0/ec2tokens',
- response=err_resp,
- params={'AWSAccessKeyId': 'foo'})
-
- # second request passes
- self._stub_http_connection(
- req_url='http://456:5000/v2.0/ec2tokens',
- response=ok_resp,
- params={'AWSAccessKeyId': 'foo'})
-
- self.m.ReplayAll()
- self.assertEqual(ec2.__call__(dummy_req), 'woot')
-
- self.m.VerifyAll()
-
- def test_call_err_multicloud(self):
- dummy_conf = {
- 'allowed_auth_uris': [
- 'http://123:5000/v2.0', 'http://456:5000/v2.0'],
- 'multi_cloud': True
- }
- ec2 = ec2token.EC2Token(app='woot', conf=dummy_conf)
- params = {'AWSAccessKeyId': 'foo', 'Signature': 'xyz'}
- req_env = {'SERVER_NAME': 'heat',
- 'SERVER_PORT': '8000',
- 'PATH_INFO': '/v1'}
- dummy_req = self._dummy_GET_request(params, req_env)
-
- err_resp1 = json.dumps({})
-
- err_msg2 = "EC2 access key not found."
- err_resp2 = json.dumps({'error': {'message': err_msg2}})
-
- # first request fails with HeatAccessDeniedError
- self._stub_http_connection(
- req_url='http://123:5000/v2.0/ec2tokens',
- response=err_resp1,
- params={'AWSAccessKeyId': 'foo'})
-
- # second request fails with HeatInvalidClientTokenIdError
- self._stub_http_connection(
- req_url='http://456:5000/v2.0/ec2tokens',
- response=err_resp2,
- params={'AWSAccessKeyId': 'foo'})
-
- self.m.ReplayAll()
- # raised error matches last failure
- self.assertRaises(exception.HeatInvalidClientTokenIdError,
- ec2.__call__, dummy_req)
-
- self.m.VerifyAll()
-
- def test_call_err_multicloud_none_allowed(self):
- dummy_conf = {
- 'allowed_auth_uris': [],
- 'multi_cloud': True
- }
- ec2 = ec2token.EC2Token(app='woot', conf=dummy_conf)
- params = {'AWSAccessKeyId': 'foo', 'Signature': 'xyz'}
- req_env = {'SERVER_NAME': 'heat',
- 'SERVER_PORT': '8000',
- 'PATH_INFO': '/v1'}
- dummy_req = self._dummy_GET_request(params, req_env)
-
- self.m.ReplayAll()
- self.assertRaises(exception.HeatAccessDeniedError,
- ec2.__call__, dummy_req)
-
- self.m.VerifyAll()
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import json
-import mock
-
-from oslo.config import cfg
-import webob.exc
-
-from heat.common import identifier
-from heat.openstack.common import rpc
-
-from heat.common import exception as heat_exc
-from heat.common.wsgi import Request
-from heat.common import urlfetch
-from heat.openstack.common.rpc import common as rpc_common
-from heat.rpc import api as rpc_api
-from heat.tests.common import HeatTestCase
-
-import heat.api.openstack.v1 as api_v1
-import heat.api.openstack.v1.stacks as stacks
-import heat.api.openstack.v1.resources as resources
-import heat.api.openstack.v1.events as events
-import heat.api.openstack.v1.actions as actions
-from heat.tests import utils
-
-import heat.api.middleware.fault as fault
-
-
-def request_with_middleware(middleware, func, req, *args, **kwargs):
-
- @webob.dec.wsgify
- def _app(req):
- return func(req, *args, **kwargs)
-
- resp = middleware(_app).process_request(req)
- return resp
-
-
-def to_remote_error(error):
- """Converts the given exception to the one with the _Remote suffix."""
- exc_info = (type(error), error, None)
- serialized = rpc_common.serialize_remote_exception(exc_info)
- remote_error = rpc_common.deserialize_remote_exception(cfg.CONF,
- serialized)
- return remote_error
-
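The `to_remote_error` helper above leans on oslo's RPC serialization round-trip. As an aside for readers skimming the removed tests, the observable effect — an exception type whose name gains a `_Remote` suffix while remaining catchable as the original type — can be sketched stand-alone. This is an illustrative approximation, not oslo's actual implementation:

```python
# Hypothetical sketch of the "_Remote" wrapping behaviour these tests rely on:
# dynamically derive a subclass whose name carries the _Remote suffix, so
# isinstance() checks against the original exception type still succeed.
def to_remote(error):
    remote_name = type(error).__name__ + "_Remote"
    remote_type = type(remote_name, (type(error),), {})
    return remote_type(*error.args)

err = to_remote(AttributeError("missing attribute"))
print(type(err).__name__)  # AttributeError_Remote
```

Because the derived type subclasses the original, `except AttributeError:` blocks (and `assertRaises`) keep working against the remote variant.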
-
-class InstantiationDataTest(HeatTestCase):
-
- def test_format_parse(self):
- data = {"key1": ["val1[0]", "val1[1]"], "key2": "val2"}
- json_repr = '{ "key1": [ "val1[0]", "val1[1]" ], "key2": "val2" }'
- parsed = stacks.InstantiationData.format_parse(json_repr, 'foo')
- self.assertEqual(parsed, data)
-
- def test_format_parse_invalid(self):
- self.assertRaises(webob.exc.HTTPBadRequest,
- stacks.InstantiationData.format_parse,
- '!@#$%^¬ json', 'Garbage')
-
- def test_stack_name(self):
- body = {'stack_name': 'wibble'}
- data = stacks.InstantiationData(body)
- self.assertEqual(data.stack_name(), 'wibble')
-
- def test_stack_name_missing(self):
- body = {'not the stack_name': 'wibble'}
- data = stacks.InstantiationData(body)
- self.assertRaises(webob.exc.HTTPBadRequest, data.stack_name)
-
- def test_template_inline(self):
- template = {'foo': 'bar', 'blarg': 'wibble'}
- body = {'template': template}
- data = stacks.InstantiationData(body)
- self.assertEqual(data.template(), template)
-
- def test_template_string_json(self):
- template = '{"foo": "bar", "blarg": "wibble"}'
- body = {'template': template}
- data = stacks.InstantiationData(body)
- self.assertEqual(data.template(), json.loads(template))
-
- def test_template_string_yaml(self):
- template = '''foo: bar
-blarg: wibble
-'''
- parsed = {u'HeatTemplateFormatVersion': u'2012-12-12',
- u'Mappings': {},
- u'Outputs': {},
- u'Parameters': {},
- u'Resources': {},
- u'blarg': u'wibble',
- u'foo': u'bar'}
-
- body = {'template': template}
- data = stacks.InstantiationData(body)
- self.assertEqual(data.template(), parsed)
-
- def test_template_url(self):
- template = {'foo': 'bar', 'blarg': 'wibble'}
- url = 'http://example.com/template'
- body = {'template_url': url}
- data = stacks.InstantiationData(body)
-
- self.m.StubOutWithMock(urlfetch, 'get')
- urlfetch.get(url).AndReturn(json.dumps(template))
- self.m.ReplayAll()
-
- self.assertEqual(data.template(), template)
- self.m.VerifyAll()
-
- def test_template_priority(self):
- template = {'foo': 'bar', 'blarg': 'wibble'}
- url = 'http://example.com/template'
- body = {'template': template, 'template_url': url}
- data = stacks.InstantiationData(body)
-
- self.m.StubOutWithMock(urlfetch, 'get')
- self.m.ReplayAll()
-
- self.assertEqual(data.template(), template)
- self.m.VerifyAll()
-
- def test_template_missing(self):
- template = {'foo': 'bar', 'blarg': 'wibble'}
- body = {'not the template': template}
- data = stacks.InstantiationData(body)
- self.assertRaises(webob.exc.HTTPBadRequest, data.template)
-
- def test_parameters(self):
- params = {'foo': 'bar', 'blarg': 'wibble'}
- body = {'parameters': params,
- 'resource_registry': {}}
- data = stacks.InstantiationData(body)
- self.assertEqual(data.environment(), body)
-
- def test_environment_only_params(self):
- env = {'parameters': {'foo': 'bar', 'blarg': 'wibble'}}
- body = {'environment': env}
- data = stacks.InstantiationData(body)
- self.assertEqual(data.environment(), env)
-
- def test_environment_and_parameters(self):
- body = {'parameters': {'foo': 'bar'},
- 'environment': {'parameters': {'blarg': 'wibble'}}}
- expect = {'parameters': {'blarg': 'wibble',
- 'foo': 'bar'},
- 'resource_registry': {}}
- data = stacks.InstantiationData(body)
- self.assertEqual(data.environment(), expect)
-
- def test_parameters_override_environment(self):
- # This tests that the CLI parameters override any
- # matching parameters in the environment.
- body = {'parameters': {'foo': 'bar',
- 'tester': 'Yes'},
- 'environment': {'parameters': {'blarg': 'wibble',
- 'tester': 'fail'}}}
- expect = {'parameters': {'blarg': 'wibble',
- 'foo': 'bar',
- 'tester': 'Yes'},
- 'resource_registry': {}}
- data = stacks.InstantiationData(body)
- self.assertEqual(data.environment(), expect)
-
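The override test above pins down the merge precedence: directly supplied parameters win over same-named keys in the environment, and a `resource_registry` default is filled in. A minimal stand-alone sketch of that precedence (a hypothetical re-implementation for illustration, not Heat's `InstantiationData.environment`):

```python
# Illustrative merge mirroring the precedence exercised by the test above:
# body['parameters'] overrides environment['parameters'] key-by-key.
def merged_environment(body):
    env = dict(body.get('environment', {}))
    params = dict(env.get('parameters', {}))
    params.update(body.get('parameters', {}))  # direct params take precedence
    env['parameters'] = params
    env.setdefault('resource_registry', {})
    return env

body = {'parameters': {'foo': 'bar', 'tester': 'Yes'},
        'environment': {'parameters': {'blarg': 'wibble', 'tester': 'fail'}}}
print(merged_environment(body))
```

The `dict.update` call is what encodes the precedence: keys from the environment are laid down first, then overwritten by the direct parameters.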
- def test_environment_bad_format(self):
- env = {'somethingnotsupported': {'blarg': 'wibble'}}
- body = {'environment': json.dumps(env)}
- data = stacks.InstantiationData(body)
- self.assertRaises(webob.exc.HTTPBadRequest, data.environment)
-
- def test_environment_missing(self):
- env = {'foo': 'bar', 'blarg': 'wibble'}
- body = {'not the environment': env}
- data = stacks.InstantiationData(body)
- self.assertEqual(data.environment(),
- {'parameters': {},
- 'resource_registry': {}})
-
- def test_args(self):
- body = {
- 'parameters': {},
- 'environment': {},
- 'stack_name': 'foo',
- 'template': {},
- 'template_url': 'http://example.com/',
- 'timeout_mins': 60,
- }
- data = stacks.InstantiationData(body)
- self.assertEqual(data.args(), {'timeout_mins': 60})
-
-
-class ControllerTest(object):
- """
- Common utilities for testing API Controllers.
- """
-
- def __init__(self, *args, **kwargs):
- super(ControllerTest, self).__init__(*args, **kwargs)
-
- cfg.CONF.set_default('host', 'host')
- self.topic = rpc_api.ENGINE_TOPIC
- self.api_version = '1.0'
- self.tenant = 't'
-
- def _environ(self, path):
- return {
- 'SERVER_NAME': 'heat.example.com',
- 'SERVER_PORT': '8004',
- 'SCRIPT_NAME': '/v1',
- 'PATH_INFO': '/%s' % self.tenant + path,
- 'wsgi.url_scheme': 'http',
- }
-
- def _simple_request(self, path, method='GET'):
- environ = self._environ(path)
- environ['REQUEST_METHOD'] = method
-
- req = Request(environ)
- req.context = utils.dummy_context('api_test_user', self.tenant)
- return req
-
- def _get(self, path):
- return self._simple_request(path)
-
- def _delete(self, path):
- return self._simple_request(path, method='DELETE')
-
- def _data_request(self, path, data, content_type='application/json',
- method='POST'):
- environ = self._environ(path)
- environ['REQUEST_METHOD'] = method
-
- req = Request(environ)
- req.context = utils.dummy_context('api_test_user', self.tenant)
- req.body = data
- return req
-
- def _post(self, path, data, content_type='application/json'):
- return self._data_request(path, data, content_type)
-
- def _put(self, path, data, content_type='application/json'):
- return self._data_request(path, data, content_type, method='PUT')
-
- def _url(self, id):
- host = 'heat.example.com:8004'
- path = '/v1/%(tenant)s/stacks/%(stack_name)s/%(stack_id)s%(path)s' % id
- return 'http://%s%s' % (host, path)
-
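The `_url` helper builds the expected stack link via `%`-formatting against the identifier's mapping interface. A self-contained sketch of the same construction, using a plain dict in place of `HeatIdentifier` (names here are assumptions for illustration):

```python
# Hypothetical stand-alone version of the _url() test helper: format the
# stack path from an identifier mapping, then prepend scheme and host.
def stack_url(identity, host='heat.example.com:8004'):
    path = ('/v1/%(tenant)s/stacks/%(stack_name)s'
            '/%(stack_id)s%(path)s' % identity)
    return 'http://%s%s' % (host, path)

ident = {'tenant': 't', 'stack_name': 'wordpress', 'stack_id': '1', 'path': ''}
print(stack_url(ident))  # http://heat.example.com:8004/v1/t/stacks/wordpress/1
```

Any dict-like object exposing `tenant`, `stack_name`, `stack_id`, and `path` keys works here, which is why the tests can pass a `HeatIdentifier` straight in.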
-
-class StackControllerTest(ControllerTest, HeatTestCase):
- """
- Tests the API class that acts as the WSGI controller:
- the endpoint that processes API requests after they are routed.
- """
-
- def setUp(self):
- super(StackControllerTest, self).setUp()
- # Create WSGI controller instance
-
- class DummyConfig(object):
- bind_port = 8004
-
- cfgopts = DummyConfig()
- self.controller = stacks.StackController(options=cfgopts)
-
- def test_index(self):
- req = self._get('/stacks')
-
- identity = identifier.HeatIdentifier(self.tenant, 'wordpress', '1')
-
- engine_resp = [
- {
- u'stack_identity': dict(identity),
- u'updated_time': u'2012-07-09T09:13:11Z',
- u'template_description': u'blah',
- u'description': u'blah',
- u'stack_status_reason': u'Stack successfully created',
- u'creation_time': u'2012-07-09T09:12:45Z',
- u'stack_name': identity.stack_name,
- u'stack_action': u'CREATE',
- u'stack_status': u'COMPLETE',
- u'parameters': {},
- u'outputs': [],
- u'notification_topics': [],
- u'capabilities': [],
- u'disable_rollback': True,
- u'timeout_mins': 60,
- }
- ]
- self.m.StubOutWithMock(rpc, 'call')
- rpc.call(req.context, self.topic,
- {'namespace': None,
- 'method': 'list_stacks',
- 'args': {},
- 'version': self.api_version},
- None).AndReturn(engine_resp)
- self.m.ReplayAll()
-
- result = self.controller.index(req, tenant_id=identity.tenant)
-
- expected = {
- 'stacks': [
- {
- 'links': [{"href": self._url(identity),
- "rel": "self"}],
- 'id': '1',
- u'updated_time': u'2012-07-09T09:13:11Z',
- u'description': u'blah',
- u'stack_status_reason': u'Stack successfully created',
- u'creation_time': u'2012-07-09T09:12:45Z',
- u'stack_name': u'wordpress',
- u'stack_status': u'CREATE_COMPLETE'
- }
- ]
- }
- self.assertEqual(result, expected)
- self.m.VerifyAll()
-
- def test_detail(self):
- req = self._get('/stacks/detail')
-
- identity = identifier.HeatIdentifier(self.tenant, 'wordpress', '1')
-
- engine_resp = [
- {
- u'stack_identity': dict(identity),
- u'updated_time': u'2012-07-09T09:13:11Z',
- u'template_description': u'blah',
- u'description': u'blah',
- u'stack_status_reason': u'Stack successfully created',
- u'creation_time': u'2012-07-09T09:12:45Z',
- u'stack_name': identity.stack_name,
- u'stack_action': u'CREATE',
- u'stack_status': u'COMPLETE',
- u'parameters': {'foo': 'bar'},
- u'outputs': ['key', 'value'],
- u'notification_topics': [],
- u'capabilities': [],
- u'disable_rollback': True,
- u'timeout_mins': 60,
- }
- ]
- self.m.StubOutWithMock(rpc, 'call')
- rpc.call(req.context, self.topic,
- {'namespace': None,
- 'method': 'list_stacks',
- 'args': {},
- 'version': self.api_version},
- None).AndReturn(engine_resp)
- self.m.ReplayAll()
-
- result = self.controller.detail(req, tenant_id=identity.tenant)
-
- expected = {
- 'stacks': [
- {
- 'links': [{"href": self._url(identity),
- "rel": "self"}],
- 'id': '1',
- u'updated_time': u'2012-07-09T09:13:11Z',
- u'template_description': u'blah',
- u'description': u'blah',
- u'stack_status_reason': u'Stack successfully created',
- u'creation_time': u'2012-07-09T09:12:45Z',
- u'stack_name': identity.stack_name,
- u'stack_status': u'CREATE_COMPLETE',
- u'parameters': {'foo': 'bar'},
- u'outputs': ['key', 'value'],
- u'notification_topics': [],
- u'capabilities': [],
- u'disable_rollback': True,
- u'timeout_mins': 60,
- }
- ]
- }
-
- self.assertEqual(result, expected)
- self.m.VerifyAll()
-
- def test_index_rmt_aterr(self):
- req = self._get('/stacks')
-
- self.m.StubOutWithMock(rpc, 'call')
- rpc.call(req.context, self.topic,
- {'namespace': None,
- 'method': 'list_stacks',
- 'args': {},
- 'version': self.api_version},
- None).AndRaise(to_remote_error(AttributeError()))
- self.m.ReplayAll()
-
- resp = request_with_middleware(fault.FaultWrapper,
- self.controller.index,
- req, tenant_id=self.tenant)
-
- self.assertEqual(resp.json['code'], 400)
- self.assertEqual(resp.json['error']['type'], 'AttributeError')
- self.m.VerifyAll()
-
- def test_index_rmt_interr(self):
- req = self._get('/stacks')
-
- self.m.StubOutWithMock(rpc, 'call')
- rpc.call(req.context, self.topic,
- {'namespace': None,
- 'method': 'list_stacks',
- 'args': {},
- 'version': self.api_version},
- None).AndRaise(to_remote_error(Exception()))
- self.m.ReplayAll()
-
- resp = request_with_middleware(fault.FaultWrapper,
- self.controller.index,
- req, tenant_id=self.tenant)
-
- self.assertEqual(resp.json['code'], 500)
- self.assertEqual(resp.json['error']['type'], 'Exception')
- self.m.VerifyAll()
-
- def test_create(self):
- identity = identifier.HeatIdentifier(self.tenant, 'wordpress', '1')
- template = {u'Foo': u'bar'}
- json_template = json.dumps(template)
- parameters = {u'InstanceType': u'm1.xlarge'}
- body = {'template': template,
- 'stack_name': identity.stack_name,
- 'parameters': parameters,
- 'timeout_mins': 30}
-
- req = self._post('/stacks', json.dumps(body))
-
- self.m.StubOutWithMock(rpc, 'call')
- rpc.call(req.context, self.topic,
- {'namespace': None,
- 'method': 'create_stack',
- 'args': {'stack_name': identity.stack_name,
- 'template': template,
- 'params': {'parameters': parameters,
- 'resource_registry': {}},
- 'files': {},
- 'args': {'timeout_mins': 30}},
- 'version': self.api_version},
- None).AndReturn(dict(identity))
- self.m.ReplayAll()
-
- response = self.controller.create(req,
- tenant_id=identity.tenant,
- body=body)
-
- expected = {'stack':
- {'id': '1',
- 'links': [{'href': self._url(identity), 'rel': 'self'}]}}
- self.assertEqual(response, expected)
-
- self.m.VerifyAll()
-
- def test_create_with_files(self):
- identity = identifier.HeatIdentifier(self.tenant, 'wordpress', '1')
- template = {u'Foo': u'bar'}
- json_template = json.dumps(template)
- parameters = {u'InstanceType': u'm1.xlarge'}
- body = {'template': template,
- 'stack_name': identity.stack_name,
- 'parameters': parameters,
- 'files': {'my.yaml': 'This is the file contents.'},
- 'timeout_mins': 30}
-
- req = self._post('/stacks', json.dumps(body))
-
- self.m.StubOutWithMock(rpc, 'call')
- rpc.call(req.context, self.topic,
- {'namespace': None,
- 'method': 'create_stack',
- 'args': {'stack_name': identity.stack_name,
- 'template': template,
- 'params': {'parameters': parameters,
- 'resource_registry': {}},
- 'files': {'my.yaml': 'This is the file contents.'},
- 'args': {'timeout_mins': 30}},
- 'version': self.api_version},
- None).AndReturn(dict(identity))
- self.m.ReplayAll()
-
- result = self.controller.create(req,
- tenant_id=identity.tenant,
- body=body)
- expected = {'stack':
- {'id': '1',
- 'links': [{'href': self._url(identity), 'rel': 'self'}]}}
- self.assertEqual(result, expected)
-
- self.m.VerifyAll()
-
- def test_create_err_rpcerr(self):
- stack_name = "wordpress"
- template = {u'Foo': u'bar'}
- parameters = {u'InstanceType': u'm1.xlarge'}
- json_template = json.dumps(template)
- body = {'template': template,
- 'stack_name': stack_name,
- 'parameters': parameters,
- 'timeout_mins': 30}
-
- req = self._post('/stacks', json.dumps(body))
-
- unknown_parameter = heat_exc.UnknownUserParameter(key='a')
- missing_parameter = heat_exc.UserParameterMissing(key='a')
- self.m.StubOutWithMock(rpc, 'call')
- rpc.call(req.context, self.topic,
- {'namespace': None,
- 'method': 'create_stack',
- 'args': {'stack_name': stack_name,
- 'template': template,
- 'params': {'parameters': parameters,
- 'resource_registry': {}},
- 'files': {},
- 'args': {'timeout_mins': 30}},
- 'version': self.api_version},
- None).AndRaise(to_remote_error(AttributeError()))
- rpc.call(req.context, self.topic,
- {'namespace': None,
- 'method': 'create_stack',
- 'args': {'stack_name': stack_name,
- 'template': template,
- 'params': {'parameters': parameters,
- 'resource_registry': {}},
- 'files': {},
- 'args': {'timeout_mins': 30}},
- 'version': self.api_version},
- None).AndRaise(to_remote_error(unknown_parameter))
- rpc.call(req.context, self.topic,
- {'namespace': None,
- 'method': 'create_stack',
- 'args': {'stack_name': stack_name,
- 'template': template,
- 'params': {'parameters': parameters,
- 'resource_registry': {}},
- 'files': {},
- 'args': {'timeout_mins': 30}},
- 'version': self.api_version},
- None).AndRaise(to_remote_error(missing_parameter))
- self.m.ReplayAll()
- resp = request_with_middleware(fault.FaultWrapper,
- self.controller.create,
- req, tenant_id=self.tenant, body=body)
-
- self.assertEqual(resp.json['code'], 400)
- self.assertEqual(resp.json['error']['type'], 'AttributeError')
-
- resp = request_with_middleware(fault.FaultWrapper,
- self.controller.create,
- req, tenant_id=self.tenant, body=body)
-
- self.assertEqual(resp.json['code'], 400)
- self.assertEqual(resp.json['error']['type'], 'UnknownUserParameter')
-
- resp = request_with_middleware(fault.FaultWrapper,
- self.controller.create,
- req, tenant_id=self.tenant, body=body)
-
- self.assertEqual(resp.json['code'], 400)
- self.assertEqual(resp.json['error']['type'], 'UserParameterMissing')
- self.m.VerifyAll()
-
- def test_create_err_existing(self):
- stack_name = "wordpress"
- template = {u'Foo': u'bar'}
- parameters = {u'InstanceType': u'm1.xlarge'}
- json_template = json.dumps(template)
- body = {'template': template,
- 'stack_name': stack_name,
- 'parameters': parameters,
- 'timeout_mins': 30}
-
- req = self._post('/stacks', json.dumps(body))
-
- error = heat_exc.StackExists(stack_name='s')
- self.m.StubOutWithMock(rpc, 'call')
- rpc.call(req.context, self.topic,
- {'namespace': None,
- 'method': 'create_stack',
- 'args': {'stack_name': stack_name,
- 'template': template,
- 'params': {'parameters': parameters,
- 'resource_registry': {}},
- 'files': {},
- 'args': {'timeout_mins': 30}},
- 'version': self.api_version},
- None).AndRaise(to_remote_error(error))
- self.m.ReplayAll()
-
- resp = request_with_middleware(fault.FaultWrapper,
- self.controller.create,
- req, tenant_id=self.tenant, body=body)
-
- self.assertEqual(resp.json['code'], 409)
- self.assertEqual(resp.json['error']['type'], 'StackExists')
- self.m.VerifyAll()
-
- def test_create_err_engine(self):
- stack_name = "wordpress"
- template = {u'Foo': u'bar'}
- parameters = {u'InstanceType': u'm1.xlarge'}
- json_template = json.dumps(template)
- body = {'template': template,
- 'stack_name': stack_name,
- 'parameters': parameters,
- 'timeout_mins': 30}
-
- req = self._post('/stacks', json.dumps(body))
-
- error = heat_exc.StackValidationFailed(message='')
- self.m.StubOutWithMock(rpc, 'call')
- rpc.call(req.context, self.topic,
- {'namespace': None,
- 'method': 'create_stack',
- 'args': {'stack_name': stack_name,
- 'template': template,
- 'params': {'parameters': parameters,
- 'resource_registry': {}},
- 'files': {},
- 'args': {'timeout_mins': 30}},
- 'version': self.api_version},
- None).AndRaise(to_remote_error(error))
- self.m.ReplayAll()
-
- resp = request_with_middleware(fault.FaultWrapper,
- self.controller.create,
- req, tenant_id=self.tenant, body=body)
-
- self.assertEqual(resp.json['code'], 400)
- self.assertEqual(resp.json['error']['type'], 'StackValidationFailed')
- self.m.VerifyAll()
-
- def test_create_err_stack_bad_request(self):
- cfg.CONF.set_override('debug', True)
- template = {u'Foo': u'bar'}
- parameters = {u'InstanceType': u'm1.xlarge'}
- body = {'template': template,
- 'parameters': parameters,
- 'timeout_mins': 30}
-
- req = self._post('/stacks', json.dumps(body))
-
- error = heat_exc.HTTPExceptionDisguise(webob.exc.HTTPBadRequest())
- self.controller.create = mock.MagicMock(side_effect=error)
-
- resp = request_with_middleware(fault.FaultWrapper,
- self.controller.create, req, body)
-
- # When HTTP disguised exceptions reach the fault app, they are
- # converted into regular responses, just like non-HTTP exceptions
- self.assertEqual(resp.json['code'], 400)
- self.assertEqual(resp.json['error']['type'], 'HTTPBadRequest')
- self.assertIsNotNone(resp.json['error']['traceback'])
-
- def test_lookup(self):
- identity = identifier.HeatIdentifier(self.tenant, 'wordpress', '1')
-
- req = self._get('/stacks/%(stack_name)s' % identity)
-
- self.m.StubOutWithMock(rpc, 'call')
- rpc.call(req.context, self.topic,
- {'namespace': None,
- 'method': 'identify_stack',
- 'args': {'stack_name': identity.stack_name},
- 'version': self.api_version},
- None).AndReturn(identity)
-
- self.m.ReplayAll()
-
- try:
- result = self.controller.lookup(req, tenant_id=identity.tenant,
- stack_name=identity.stack_name)
- except webob.exc.HTTPFound as found:
- self.assertEqual(found.location, self._url(identity))
- else:
- self.fail('No redirect generated')
- self.m.VerifyAll()
-
- def test_lookup_arn(self):
- identity = identifier.HeatIdentifier(self.tenant, 'wordpress', '1')
-
- req = self._get('/stacks%s' % identity.arn_url_path())
-
- self.m.ReplayAll()
-
- try:
- result = self.controller.lookup(req, tenant_id=identity.tenant,
- stack_name=identity.arn())
- except webob.exc.HTTPFound as found:
- self.assertEqual(found.location, self._url(identity))
- else:
- self.fail('No redirect generated')
- self.m.VerifyAll()
-
- def test_lookup_nonexistent(self):
- stack_name = 'wibble'
-
- req = self._get('/stacks/%(stack_name)s' % {
- 'stack_name': stack_name})
-
- error = heat_exc.StackNotFound(stack_name='a')
- self.m.StubOutWithMock(rpc, 'call')
- rpc.call(req.context, self.topic,
- {'namespace': None,
- 'method': 'identify_stack',
- 'args': {'stack_name': stack_name},
- 'version': self.api_version},
- None).AndRaise(to_remote_error(error))
- self.m.ReplayAll()
-
- resp = request_with_middleware(fault.FaultWrapper,
- self.controller.lookup,
- req, tenant_id=self.tenant,
- stack_name=stack_name)
-
- self.assertEqual(resp.json['code'], 404)
- self.assertEqual(resp.json['error']['type'], 'StackNotFound')
- self.m.VerifyAll()
-
- def test_lookup_resource(self):
- identity = identifier.HeatIdentifier(self.tenant, 'wordpress', '1')
-
- req = self._get('/stacks/%(stack_name)s/resources' % identity)
-
- self.m.StubOutWithMock(rpc, 'call')
- rpc.call(req.context, self.topic,
- {'namespace': None,
- 'method': 'identify_stack',
- 'args': {'stack_name': identity.stack_name},
- 'version': self.api_version},
- None).AndReturn(identity)
-
- self.m.ReplayAll()
-
- try:
- result = self.controller.lookup(req, tenant_id=identity.tenant,
- stack_name=identity.stack_name,
- path='resources')
- except webob.exc.HTTPFound as found:
- self.assertEqual(found.location,
- self._url(identity) + '/resources')
- else:
- self.fail('No redirect generated')
- self.m.VerifyAll()
-
- def test_lookup_resource_nonexistent(self):
- stack_name = 'wibble'
-
- req = self._get('/stacks/%(stack_name)s/resources' % {
- 'stack_name': stack_name})
-
- error = heat_exc.StackNotFound(stack_name='a')
- self.m.StubOutWithMock(rpc, 'call')
- rpc.call(req.context, self.topic,
- {'namespace': None,
- 'method': 'identify_stack',
- 'args': {'stack_name': stack_name},
- 'version': self.api_version},
- None).AndRaise(to_remote_error(error))
- self.m.ReplayAll()
-
- resp = request_with_middleware(fault.FaultWrapper,
- self.controller.lookup,
- req, tenant_id=self.tenant,
- stack_name=stack_name,
- path='resources')
-
- self.assertEqual(resp.json['code'], 404)
- self.assertEqual(resp.json['error']['type'], 'StackNotFound')
- self.m.VerifyAll()
-
- def test_show(self):
- identity = identifier.HeatIdentifier(self.tenant, 'wordpress', '6')
-
- req = self._get('/stacks/%(stack_name)s/%(stack_id)s' % identity)
-
- parameters = {u'DBUsername': u'admin',
- u'LinuxDistribution': u'F17',
- u'InstanceType': u'm1.large',
- u'DBRootPassword': u'admin',
- u'DBPassword': u'admin',
- u'DBName': u'wordpress'}
- outputs = [{u'output_key': u'WebsiteURL',
- u'description': u'URL for Wordpress wiki',
- u'output_value': u'http://10.0.0.8/wordpress'}]
-
- engine_resp = [
- {
- u'stack_identity': dict(identity),
- u'updated_time': u'2012-07-09T09:13:11Z',
- u'parameters': parameters,
- u'outputs': outputs,
- u'stack_status_reason': u'Stack successfully created',
- u'creation_time': u'2012-07-09T09:12:45Z',
- u'stack_name': identity.stack_name,
- u'notification_topics': [],
- u'stack_action': u'CREATE',
- u'stack_status': u'COMPLETE',
- u'description': u'blah',
- u'disable_rollback': True,
- u'timeout_mins': 60,
- u'capabilities': [],
- }
- ]
- self.m.StubOutWithMock(rpc, 'call')
- rpc.call(req.context, self.topic,
- {'namespace': None,
- 'method': 'show_stack',
- 'args': {'stack_identity': dict(identity)},
- 'version': self.api_version},
- None).AndReturn(engine_resp)
- self.m.ReplayAll()
-
- response = self.controller.show(req,
- tenant_id=identity.tenant,
- stack_name=identity.stack_name,
- stack_id=identity.stack_id)
-
- expected = {
- 'stack': {
- 'links': [{"href": self._url(identity),
- "rel": "self"}],
- 'id': '6',
- u'updated_time': u'2012-07-09T09:13:11Z',
- u'parameters': parameters,
- u'outputs': outputs,
- u'description': u'blah',
- u'stack_status_reason': u'Stack successfully created',
- u'creation_time': u'2012-07-09T09:12:45Z',
- u'stack_name': identity.stack_name,
- u'stack_status': u'CREATE_COMPLETE',
- u'capabilities': [],
- u'notification_topics': [],
- u'disable_rollback': True,
- u'timeout_mins': 60,
- }
- }
- self.assertEqual(response, expected)
- self.m.VerifyAll()
-
- def test_show_notfound(self):
- identity = identifier.HeatIdentifier(self.tenant, 'wibble', '6')
-
- req = self._get('/stacks/%(stack_name)s/%(stack_id)s' % identity)
-
- error = heat_exc.StackNotFound(stack_name='a')
- self.m.StubOutWithMock(rpc, 'call')
- rpc.call(req.context, self.topic,
- {'namespace': None,
- 'method': 'show_stack',
- 'args': {'stack_identity': dict(identity)},
- 'version': self.api_version},
- None).AndRaise(to_remote_error(error))
- self.m.ReplayAll()
-
- resp = request_with_middleware(fault.FaultWrapper,
- self.controller.show,
- req, tenant_id=identity.tenant,
- stack_name=identity.stack_name,
- stack_id=identity.stack_id)
-
- self.assertEqual(resp.json['code'], 404)
- self.assertEqual(resp.json['error']['type'], 'StackNotFound')
- self.m.VerifyAll()
-
- def test_show_invalidtenant(self):
- identity = identifier.HeatIdentifier('wibble', 'wordpress', '6')
-
- req = self._get('/stacks/%(stack_name)s/%(stack_id)s' % identity)
-
- error = heat_exc.InvalidTenant(target='a', actual='b')
- self.m.StubOutWithMock(rpc, 'call')
- rpc.call(req.context, self.topic,
- {'namespace': None,
- 'method': 'show_stack',
- 'args': {'stack_identity': dict(identity)},
- 'version': self.api_version},
- None).AndRaise(to_remote_error(error))
- self.m.ReplayAll()
-
- resp = request_with_middleware(fault.FaultWrapper,
- self.controller.show,
- req, tenant_id=identity.tenant,
- stack_name=identity.stack_name,
- stack_id=identity.stack_id)
-
- self.assertEqual(resp.json['code'], 403)
- self.assertEqual(resp.json['error']['type'], 'InvalidTenant')
- self.m.VerifyAll()
-
- def test_get_template(self):
- identity = identifier.HeatIdentifier(self.tenant, 'wordpress', '6')
- req = self._get('/stacks/%(stack_name)s/%(stack_id)s' % identity)
- template = {u'Foo': u'bar'}
-
- self.m.StubOutWithMock(rpc, 'call')
- rpc.call(req.context, self.topic,
- {'namespace': None,
- 'method': 'get_template',
- 'args': {'stack_identity': dict(identity)},
- 'version': self.api_version},
- None).AndReturn(template)
- self.m.ReplayAll()
-
- response = self.controller.template(req, tenant_id=identity.tenant,
- stack_name=identity.stack_name,
- stack_id=identity.stack_id)
-
- self.assertEqual(response, template)
- self.m.VerifyAll()
-
- def test_get_template_err_notfound(self):
- identity = identifier.HeatIdentifier(self.tenant, 'wordpress', '6')
- req = self._get('/stacks/%(stack_name)s/%(stack_id)s' % identity)
-
- error = heat_exc.StackNotFound(stack_name='a')
- self.m.StubOutWithMock(rpc, 'call')
- rpc.call(req.context, self.topic,
- {'namespace': None,
- 'method': 'get_template',
- 'args': {'stack_identity': dict(identity)},
- 'version': self.api_version},
- None).AndRaise(to_remote_error(error))
-
- self.m.ReplayAll()
-
- resp = request_with_middleware(fault.FaultWrapper,
- self.controller.template,
- req, tenant_id=identity.tenant,
- stack_name=identity.stack_name,
- stack_id=identity.stack_id)
-
- self.assertEqual(resp.json['code'], 404)
- self.assertEqual(resp.json['error']['type'], 'StackNotFound')
- self.m.VerifyAll()
-
- def test_update(self):
- identity = identifier.HeatIdentifier(self.tenant, 'wordpress', '6')
- stack_name = u'wordpress'
- stack_id = u'6'
- template = {u'Foo': u'bar'}
- json_template = json.dumps(template)
- parameters = {u'InstanceType': u'm1.xlarge'}
- body = {'template': template,
- 'parameters': parameters,
- 'files': {},
- 'timeout_mins': 30}
-
- req = self._put('/stacks/%(stack_name)s/%(stack_id)s' % identity,
- json.dumps(body))
-
- self.m.StubOutWithMock(rpc, 'call')
- rpc.call(req.context, self.topic,
- {'namespace': None,
- 'method': 'update_stack',
- 'args': {'stack_identity': dict(identity),
- 'template': template,
- 'params': {'parameters': parameters,
- 'resource_registry': {}},
- 'files': {},
- 'args': {'timeout_mins': 30}},
- 'version': self.api_version},
- None).AndReturn(dict(identity))
- self.m.ReplayAll()
-
- self.assertRaises(webob.exc.HTTPAccepted,
- self.controller.update,
- req, tenant_id=identity.tenant,
- stack_name=identity.stack_name,
- stack_id=identity.stack_id,
- body=body)
- self.m.VerifyAll()
-
- def test_update_bad_name(self):
- identity = identifier.HeatIdentifier(self.tenant, 'wibble', '6')
- template = {u'Foo': u'bar'}
- json_template = json.dumps(template)
- parameters = {u'InstanceType': u'm1.xlarge'}
- body = {'template': template,
- 'parameters': parameters,
- 'files': {},
- 'timeout_mins': 30}
-
- req = self._put('/stacks/%(stack_name)s/%(stack_id)s' % identity,
- json.dumps(body))
-
- error = heat_exc.StackNotFound(stack_name='a')
- self.m.StubOutWithMock(rpc, 'call')
- rpc.call(req.context, self.topic,
- {'namespace': None,
- 'method': 'update_stack',
- 'args': {'stack_identity': dict(identity),
- 'template': template,
- 'params': {u'parameters': parameters,
- u'resource_registry': {}},
- 'files': {},
- 'args': {'timeout_mins': 30}},
- 'version': self.api_version},
- None).AndRaise(to_remote_error(error))
- self.m.ReplayAll()
-
- resp = request_with_middleware(fault.FaultWrapper,
- self.controller.update,
- req, tenant_id=identity.tenant,
- stack_name=identity.stack_name,
- stack_id=identity.stack_id,
- body=body)
-
- self.assertEqual(resp.json['code'], 404)
- self.assertEqual(resp.json['error']['type'], 'StackNotFound')
- self.m.VerifyAll()
-
- def test_delete(self):
- identity = identifier.HeatIdentifier(self.tenant, 'wordpress', '6')
-
- req = self._delete('/stacks/%(stack_name)s/%(stack_id)s' % identity)
-
- self.m.StubOutWithMock(rpc, 'call')
- # Engine returns None when the delete succeeds
- rpc.call(req.context, self.topic,
- {'namespace': None,
- 'method': 'delete_stack',
- 'args': {'stack_identity': dict(identity)},
- 'version': self.api_version},
- None).AndReturn(None)
- self.m.ReplayAll()
-
- self.assertRaises(webob.exc.HTTPNoContent,
- self.controller.delete,
- req, tenant_id=identity.tenant,
- stack_name=identity.stack_name,
- stack_id=identity.stack_id)
- self.m.VerifyAll()
-
- def test_delete_bad_name(self):
- identity = identifier.HeatIdentifier(self.tenant, 'wibble', '6')
-
- req = self._delete('/stacks/%(stack_name)s/%(stack_id)s' % identity)
-
- error = heat_exc.StackNotFound(stack_name='a')
- self.m.StubOutWithMock(rpc, 'call')
- # Engine returns None when the delete succeeds
- rpc.call(req.context, self.topic,
- {'namespace': None,
- 'method': 'delete_stack',
- 'args': {'stack_identity': dict(identity)},
- 'version': self.api_version},
- None).AndRaise(to_remote_error(error))
- self.m.ReplayAll()
-
- resp = request_with_middleware(fault.FaultWrapper,
- self.controller.delete,
- req, tenant_id=identity.tenant,
- stack_name=identity.stack_name,
- stack_id=identity.stack_id)
-
- self.assertEqual(resp.json['code'], 404)
- self.assertEqual(resp.json['error']['type'], 'StackNotFound')
- self.m.VerifyAll()
-
- def test_validate_template(self):
- template = {u'Foo': u'bar'}
- body = {'template': template}
-
- req = self._post('/validate', json.dumps(body))
-
- engine_response = {
- u'Description': u'blah',
- u'Parameters': [
- {
- u'NoEcho': u'false',
- u'ParameterKey': u'InstanceType',
- u'Description': u'Instance type'
- }
- ]
- }
-
- self.m.StubOutWithMock(rpc, 'call')
- rpc.call(req.context, self.topic,
- {'namespace': None,
- 'method': 'validate_template',
- 'args': {'template': template},
- 'version': self.api_version},
- None).AndReturn(engine_response)
- self.m.ReplayAll()
-
- response = self.controller.validate_template(req,
- tenant_id=self.tenant,
- body=body)
- self.assertEqual(response, engine_response)
- self.m.VerifyAll()
-
- def test_validate_template_error(self):
- template = {u'Foo': u'bar'}
- body = {'template': template}
-
- req = self._post('/validate', json.dumps(body))
-
- self.m.StubOutWithMock(rpc, 'call')
- rpc.call(req.context, self.topic,
- {'namespace': None,
- 'method': 'validate_template',
- 'args': {'template': template},
- 'version': self.api_version},
- None).AndReturn({'Error': 'fubar'})
- self.m.ReplayAll()
-
- self.assertRaises(webob.exc.HTTPBadRequest,
- self.controller.validate_template,
- req, tenant_id=self.tenant, body=body)
- self.m.VerifyAll()
-
- def test_list_resource_types(self):
- req = self._get('/resource_types')
-
- engine_response = ['AWS::EC2::Instance',
- 'AWS::EC2::EIP',
- 'AWS::EC2::EIPAssociation']
-
- self.m.StubOutWithMock(rpc, 'call')
- rpc.call(req.context, self.topic,
- {'namespace': None,
- 'method': 'list_resource_types',
- 'args': {},
- 'version': self.api_version},
- None).AndReturn(engine_response)
- self.m.ReplayAll()
- response = self.controller.list_resource_types(req,
- tenant_id=self.tenant)
- self.assertEqual(response, {'resource_types': engine_response})
- self.m.VerifyAll()
-
- def test_list_resource_types_error(self):
- req = self._get('/resource_types')
-
- error = heat_exc.ServerError(body='')
- self.m.StubOutWithMock(rpc, 'call')
- rpc.call(req.context, self.topic,
- {'namespace': None,
- 'method': 'list_resource_types',
- 'args': {},
- 'version': self.api_version},
- None).AndRaise(to_remote_error(error))
- self.m.ReplayAll()
-
- resp = request_with_middleware(fault.FaultWrapper,
- self.controller.list_resource_types,
- req, tenant_id=self.tenant)
- self.assertEqual(resp.json['code'], 500)
- self.assertEqual(resp.json['error']['type'], 'ServerError')
- self.m.VerifyAll()
-
- def test_resource_schema(self):
- req = self._get('/resource_types/ResourceWithProps')
- type_name = 'ResourceWithProps'
-
- engine_response = {
- 'resource_type': type_name,
- 'properties': {
- 'Foo': {'type': 'string', 'required': False},
- },
- 'attributes': {
- 'foo': {'description': 'A generic attribute'},
- 'Foo': {'description': 'Another generic attribute'},
- },
- }
- self.m.StubOutWithMock(rpc, 'call')
- rpc.call(req.context, self.topic,
- {'namespace': None,
- 'method': 'resource_schema',
- 'args': {'type_name': type_name},
- 'version': self.api_version},
- None).AndReturn(engine_response)
- self.m.ReplayAll()
- response = self.controller.resource_schema(req,
- tenant_id=self.tenant,
- type_name=type_name)
- self.assertEqual(response, engine_response)
- self.m.VerifyAll()
-
- def test_resource_schema_nonexist(self):
- req = self._get('/resource_types/BogusResourceType')
- type_name = 'BogusResourceType'
-
- error = heat_exc.ResourceTypeNotFound(type_name='BogusResourceType')
- self.m.StubOutWithMock(rpc, 'call')
- rpc.call(req.context, self.topic,
- {'namespace': None,
- 'method': 'resource_schema',
- 'args': {'type_name': type_name},
- 'version': self.api_version},
- None).AndRaise(to_remote_error(error))
- self.m.ReplayAll()
-
- resp = request_with_middleware(fault.FaultWrapper,
- self.controller.resource_schema,
- req, tenant_id=self.tenant,
- type_name=type_name)
- self.assertEqual(resp.json['code'], 404)
- self.assertEqual(resp.json['error']['type'], 'ResourceTypeNotFound')
- self.m.VerifyAll()
-
- def test_generate_template(self):
- req = self._get('/resource_types/TEST_TYPE/template')
-
- engine_response = {'Type': 'TEST_TYPE'}
-
- self.m.StubOutWithMock(rpc, 'call')
- rpc.call(req.context, self.topic,
- {'namespace': None,
- 'method': 'generate_template',
- 'args': {'type_name': 'TEST_TYPE'},
- 'version': self.api_version},
- None).AndReturn(engine_response)
- self.m.ReplayAll()
- self.controller.generate_template(req, tenant_id=self.tenant,
- type_name='TEST_TYPE')
- self.m.VerifyAll()
-
- def test_generate_template_not_found(self):
- req = self._get('/resource_types/NOT_FOUND/template')
-
- error = heat_exc.ResourceTypeNotFound(type_name='a')
- self.m.StubOutWithMock(rpc, 'call')
- rpc.call(req.context, self.topic,
- {'namespace': None,
- 'method': 'generate_template',
- 'args': {'type_name': 'NOT_FOUND'},
- 'version': self.api_version},
- None).AndRaise(to_remote_error(error))
- self.m.ReplayAll()
- resp = request_with_middleware(fault.FaultWrapper,
- self.controller.generate_template,
- req, tenant_id=self.tenant,
- type_name='NOT_FOUND')
- self.assertEqual(resp.json['code'], 404)
- self.assertEqual(resp.json['error']['type'], 'ResourceTypeNotFound')
- self.m.VerifyAll()
-
-
-class StackSerializerTest(HeatTestCase):
-
- def setUp(self):
- super(StackSerializerTest, self).setUp()
- self.serializer = stacks.StackSerializer()
-
- def test_serialize_create(self):
- result = {'stack':
- {'id': '1',
- 'links': [{'href': 'location', "rel": "self"}]}}
- response = webob.Response()
- response = self.serializer.create(response, result)
- self.assertEqual(response.status_int, 201)
- self.assertEqual(response.headers['Location'], 'location')
- self.assertEqual(response.headers['Content-Type'], 'application/json')
-
-
-class ResourceControllerTest(ControllerTest, HeatTestCase):
- '''
- Tests the API class that acts as the WSGI controller:
- the endpoint that processes API requests after they are routed.
- '''
-
- def setUp(self):
- super(ResourceControllerTest, self).setUp()
- # Create WSGI controller instance
-
- class DummyConfig(object):
- bind_port = 8004
-
- cfgopts = DummyConfig()
- self.controller = resources.ResourceController(options=cfgopts)
-
- def test_index(self):
- res_name = 'WikiDatabase'
- stack_identity = identifier.HeatIdentifier(self.tenant,
- 'wordpress', '1')
- res_identity = identifier.ResourceIdentifier(resource_name=res_name,
- **stack_identity)
-
- req = self._get(stack_identity._tenant_path() + '/resources')
-
- engine_resp = [
- {
- u'resource_identity': dict(res_identity),
- u'stack_name': stack_identity.stack_name,
- u'resource_name': res_name,
- u'resource_status_reason': None,
- u'updated_time': u'2012-07-23T13:06:00Z',
- u'stack_identity': dict(stack_identity),
- u'resource_action': u'CREATE',
- u'resource_status': u'COMPLETE',
- u'physical_resource_id':
- u'a3455d8c-9f88-404d-a85b-5315293e67de',
- u'resource_type': u'AWS::EC2::Instance',
- }
- ]
- self.m.StubOutWithMock(rpc, 'call')
- rpc.call(req.context, self.topic,
- {'namespace': None,
- 'method': 'list_stack_resources',
- 'args': {'stack_identity': stack_identity},
- 'version': self.api_version},
- None).AndReturn(engine_resp)
- self.m.ReplayAll()
-
- result = self.controller.index(req, tenant_id=self.tenant,
- stack_name=stack_identity.stack_name,
- stack_id=stack_identity.stack_id)
-
- expected = {
- 'resources': [{'links': [{'href': self._url(res_identity),
- 'rel': 'self'},
- {'href': self._url(stack_identity),
- 'rel': 'stack'}],
- u'resource_name': res_name,
- u'logical_resource_id': res_name,
- u'resource_status_reason': None,
- u'updated_time': u'2012-07-23T13:06:00Z',
- u'resource_status': u'CREATE_COMPLETE',
- u'physical_resource_id':
- u'a3455d8c-9f88-404d-a85b-5315293e67de',
- u'resource_type': u'AWS::EC2::Instance'}]}
-
- self.assertEqual(result, expected)
- self.m.VerifyAll()
-
- def test_index_nonexist(self):
- stack_identity = identifier.HeatIdentifier(self.tenant,
- 'rubbish', '1')
-
- req = self._get(stack_identity._tenant_path() + '/resources')
-
- error = heat_exc.StackNotFound(stack_name='a')
- self.m.StubOutWithMock(rpc, 'call')
- rpc.call(req.context, self.topic,
- {'namespace': None,
- 'method': 'list_stack_resources',
- 'args': {'stack_identity': stack_identity},
- 'version': self.api_version},
- None).AndRaise(to_remote_error(error))
- self.m.ReplayAll()
-
- resp = request_with_middleware(fault.FaultWrapper,
- self.controller.index,
- req, tenant_id=self.tenant,
- stack_name=stack_identity.stack_name,
- stack_id=stack_identity.stack_id)
-
- self.assertEqual(resp.json['code'], 404)
- self.assertEqual(resp.json['error']['type'], 'StackNotFound')
- self.m.VerifyAll()
-
- def test_show(self):
- res_name = 'WikiDatabase'
- stack_identity = identifier.HeatIdentifier(self.tenant,
- 'wordpress', '6')
- res_identity = identifier.ResourceIdentifier(resource_name=res_name,
- **stack_identity)
-
- req = self._get(stack_identity._tenant_path())
-
- engine_resp = {
- u'description': u'',
- u'resource_identity': dict(res_identity),
- u'stack_name': stack_identity.stack_name,
- u'resource_name': res_name,
- u'resource_status_reason': None,
- u'updated_time': u'2012-07-23T13:06:00Z',
- u'stack_identity': dict(stack_identity),
- u'resource_action': u'CREATE',
- u'resource_status': u'COMPLETE',
- u'physical_resource_id':
- u'a3455d8c-9f88-404d-a85b-5315293e67de',
- u'resource_type': u'AWS::EC2::Instance',
- u'metadata': {u'ensureRunning': u'true'}
- }
- self.m.StubOutWithMock(rpc, 'call')
- rpc.call(req.context, self.topic,
- {'namespace': None,
- 'method': 'describe_stack_resource',
- 'args': {'stack_identity': stack_identity,
- 'resource_name': res_name},
- 'version': self.api_version},
- None).AndReturn(engine_resp)
- self.m.ReplayAll()
-
- result = self.controller.show(req, tenant_id=self.tenant,
- stack_name=stack_identity.stack_name,
- stack_id=stack_identity.stack_id,
- resource_name=res_name)
-
- expected = {
- 'resource': {
- 'links': [
- {'href': self._url(res_identity), 'rel': 'self'},
- {'href': self._url(stack_identity), 'rel': 'stack'},
- ],
- u'description': u'',
- u'resource_name': res_name,
- u'logical_resource_id': res_name,
- u'resource_status_reason': None,
- u'updated_time': u'2012-07-23T13:06:00Z',
- u'resource_status': u'CREATE_COMPLETE',
- u'physical_resource_id':
- u'a3455d8c-9f88-404d-a85b-5315293e67de',
- u'resource_type': u'AWS::EC2::Instance',
- }
- }
-
- self.assertEqual(result, expected)
- self.m.VerifyAll()
-
- def test_show_nonexist(self):
- res_name = 'WikiDatabase'
- stack_identity = identifier.HeatIdentifier(self.tenant,
- 'rubbish', '1')
- res_identity = identifier.ResourceIdentifier(resource_name=res_name,
- **stack_identity)
-
- req = self._get(res_identity._tenant_path())
-
- error = heat_exc.StackNotFound(stack_name='a')
- self.m.StubOutWithMock(rpc, 'call')
- rpc.call(req.context, self.topic,
- {'namespace': None,
- 'method': 'describe_stack_resource',
- 'args': {'stack_identity': stack_identity,
- 'resource_name': res_name},
- 'version': self.api_version},
- None).AndRaise(to_remote_error(error))
- self.m.ReplayAll()
-
- resp = request_with_middleware(fault.FaultWrapper,
- self.controller.show,
- req, tenant_id=self.tenant,
- stack_name=stack_identity.stack_name,
- stack_id=stack_identity.stack_id,
- resource_name=res_name)
-
- self.assertEqual(resp.json['code'], 404)
- self.assertEqual(resp.json['error']['type'], 'StackNotFound')
- self.m.VerifyAll()
-
- def test_show_nonexist_resource(self):
- res_name = 'Wibble'
- stack_identity = identifier.HeatIdentifier(self.tenant,
- 'wordpress', '1')
- res_identity = identifier.ResourceIdentifier(resource_name=res_name,
- **stack_identity)
-
- req = self._get(res_identity._tenant_path())
-
- error = heat_exc.ResourceNotFound(stack_name='a', resource_name='b')
- self.m.StubOutWithMock(rpc, 'call')
- rpc.call(req.context, self.topic,
- {'namespace': None,
- 'method': 'describe_stack_resource',
- 'args': {'stack_identity': stack_identity,
- 'resource_name': res_name},
- 'version': self.api_version},
- None).AndRaise(to_remote_error(error))
- self.m.ReplayAll()
-
- resp = request_with_middleware(fault.FaultWrapper,
- self.controller.show,
- req, tenant_id=self.tenant,
- stack_name=stack_identity.stack_name,
- stack_id=stack_identity.stack_id,
- resource_name=res_name)
-
- self.assertEqual(resp.json['code'], 404)
- self.assertEqual(resp.json['error']['type'], 'ResourceNotFound')
- self.m.VerifyAll()
-
- def test_show_uncreated_resource(self):
- res_name = 'WikiDatabase'
- stack_identity = identifier.HeatIdentifier(self.tenant,
- 'wordpress', '1')
- res_identity = identifier.ResourceIdentifier(resource_name=res_name,
- **stack_identity)
-
- req = self._get(res_identity._tenant_path())
-
- error = heat_exc.ResourceNotAvailable(resource_name='')
- self.m.StubOutWithMock(rpc, 'call')
- rpc.call(req.context, self.topic,
- {'namespace': None,
- 'method': 'describe_stack_resource',
- 'args': {'stack_identity': stack_identity,
- 'resource_name': res_name},
- 'version': self.api_version},
- None).AndRaise(to_remote_error(error))
- self.m.ReplayAll()
-
- resp = request_with_middleware(fault.FaultWrapper,
- self.controller.show,
- req, tenant_id=self.tenant,
- stack_name=stack_identity.stack_name,
- stack_id=stack_identity.stack_id,
- resource_name=res_name)
-
- self.assertEqual(resp.json['code'], 404)
- self.assertEqual(resp.json['error']['type'], 'ResourceNotAvailable')
- self.m.VerifyAll()
-
- def test_metadata_show(self):
- res_name = 'WikiDatabase'
- stack_identity = identifier.HeatIdentifier(self.tenant,
- 'wordpress', '6')
- res_identity = identifier.ResourceIdentifier(resource_name=res_name,
- **stack_identity)
-
- req = self._get(stack_identity._tenant_path())
-
- engine_resp = {
- u'description': u'',
- u'resource_identity': dict(res_identity),
- u'stack_name': stack_identity.stack_name,
- u'resource_name': res_name,
- u'resource_status_reason': None,
- u'updated_time': u'2012-07-23T13:06:00Z',
- u'stack_identity': dict(stack_identity),
- u'resource_action': u'CREATE',
- u'resource_status': u'COMPLETE',
- u'physical_resource_id':
- u'a3455d8c-9f88-404d-a85b-5315293e67de',
- u'resource_type': u'AWS::EC2::Instance',
- u'metadata': {u'ensureRunning': u'true'}
- }
- self.m.StubOutWithMock(rpc, 'call')
- rpc.call(req.context, self.topic,
- {'namespace': None,
- 'method': 'describe_stack_resource',
- 'args': {'stack_identity': stack_identity,
- 'resource_name': res_name},
- 'version': self.api_version},
- None).AndReturn(engine_resp)
- self.m.ReplayAll()
-
- result = self.controller.metadata(req, tenant_id=self.tenant,
- stack_name=stack_identity.stack_name,
- stack_id=stack_identity.stack_id,
- resource_name=res_name)
-
- expected = {'metadata': {u'ensureRunning': u'true'}}
-
- self.assertEqual(result, expected)
- self.m.VerifyAll()
-
- def test_metadata_show_nonexist(self):
- res_name = 'WikiDatabase'
- stack_identity = identifier.HeatIdentifier(self.tenant,
- 'rubbish', '1')
- res_identity = identifier.ResourceIdentifier(resource_name=res_name,
- **stack_identity)
-
- req = self._get(res_identity._tenant_path() + '/metadata')
-
- error = heat_exc.StackNotFound(stack_name='a')
- self.m.StubOutWithMock(rpc, 'call')
- rpc.call(req.context, self.topic,
- {'namespace': None,
- 'method': 'describe_stack_resource',
- 'args': {'stack_identity': stack_identity,
- 'resource_name': res_name},
- 'version': self.api_version},
- None).AndRaise(to_remote_error(error))
- self.m.ReplayAll()
-
- resp = request_with_middleware(fault.FaultWrapper,
- self.controller.metadata,
- req, tenant_id=self.tenant,
- stack_name=stack_identity.stack_name,
- stack_id=stack_identity.stack_id,
- resource_name=res_name)
-
- self.assertEqual(resp.json['code'], 404)
- self.assertEqual(resp.json['error']['type'], 'StackNotFound')
- self.m.VerifyAll()
-
- def test_metadata_show_nonexist_resource(self):
- res_name = 'wibble'
- stack_identity = identifier.HeatIdentifier(self.tenant,
- 'wordpress', '1')
- res_identity = identifier.ResourceIdentifier(resource_name=res_name,
- **stack_identity)
-
- req = self._get(res_identity._tenant_path() + '/metadata')
-
- error = heat_exc.ResourceNotFound(stack_name='a', resource_name='b')
- self.m.StubOutWithMock(rpc, 'call')
- rpc.call(req.context, self.topic,
- {'namespace': None,
- 'method': 'describe_stack_resource',
- 'args': {'stack_identity': stack_identity,
- 'resource_name': res_name},
- 'version': self.api_version},
- None).AndRaise(to_remote_error(error))
- self.m.ReplayAll()
-
- resp = request_with_middleware(fault.FaultWrapper,
- self.controller.metadata,
- req, tenant_id=self.tenant,
- stack_name=stack_identity.stack_name,
- stack_id=stack_identity.stack_id,
- resource_name=res_name)
-
- self.assertEqual(resp.json['code'], 404)
- self.assertEqual(resp.json['error']['type'], 'ResourceNotFound')
- self.m.VerifyAll()
-
-
-class EventControllerTest(ControllerTest, HeatTestCase):
- '''
- Tests the API class that acts as the WSGI controller:
- the endpoint that processes API requests after they are routed.
- '''
-
- def setUp(self):
- super(EventControllerTest, self).setUp()
- # Create WSGI controller instance
-
- class DummyConfig(object):
- bind_port = 8004
-
- cfgopts = DummyConfig()
- self.controller = events.EventController(options=cfgopts)
-
- def test_resource_index(self):
- event_id = '42'
- res_name = 'WikiDatabase'
- stack_identity = identifier.HeatIdentifier(self.tenant,
- 'wordpress', '6')
- res_identity = identifier.ResourceIdentifier(resource_name=res_name,
- **stack_identity)
- ev_identity = identifier.EventIdentifier(event_id=event_id,
- **res_identity)
-
- req = self._get(stack_identity._tenant_path() +
- '/resources/' + res_name + '/events')
-
- engine_resp = [
- {
- u'stack_name': u'wordpress',
- u'event_time': u'2012-07-23T13:05:39Z',
- u'stack_identity': dict(stack_identity),
- u'resource_name': res_name,
- u'resource_status_reason': u'state changed',
- u'event_identity': dict(ev_identity),
- u'resource_action': u'CREATE',
- u'resource_status': u'IN_PROGRESS',
- u'physical_resource_id': None,
- u'resource_properties': {u'UserData': u'blah'},
- u'resource_type': u'AWS::EC2::Instance',
- },
- {
- u'stack_name': u'wordpress',
- u'event_time': u'2012-07-23T13:05:39Z',
- u'stack_identity': dict(stack_identity),
- u'resource_name': 'SomeOtherResource',
- u'logical_resource_id': 'SomeOtherResource',
- u'resource_status_reason': u'state changed',
- u'event_identity': dict(ev_identity),
- u'resource_action': u'CREATE',
- u'resource_status': u'IN_PROGRESS',
- u'physical_resource_id': None,
- u'resource_properties': {u'UserData': u'blah'},
- u'resource_type': u'AWS::EC2::Instance',
- }
- ]
- self.m.StubOutWithMock(rpc, 'call')
- rpc.call(req.context, self.topic,
- {'namespace': None,
- 'method': 'list_events',
- 'args': {'stack_identity': stack_identity},
- 'version': self.api_version},
- None).AndReturn(engine_resp)
- self.m.ReplayAll()
-
- result = self.controller.index(req, tenant_id=self.tenant,
- stack_name=stack_identity.stack_name,
- stack_id=stack_identity.stack_id,
- resource_name=res_name)
-
- expected = {
- 'events': [
- {
- 'id': event_id,
- 'links': [
- {'href': self._url(ev_identity), 'rel': 'self'},
- {'href': self._url(res_identity), 'rel': 'resource'},
- {'href': self._url(stack_identity), 'rel': 'stack'},
- ],
- u'resource_name': res_name,
- u'logical_resource_id': res_name,
- u'resource_status_reason': u'state changed',
- u'event_time': u'2012-07-23T13:05:39Z',
- u'resource_status': u'CREATE_IN_PROGRESS',
- u'physical_resource_id': None,
- }
- ]
- }
-
- self.assertEqual(result, expected)
- self.m.VerifyAll()
-
- def test_stack_index(self):
- event_id = '42'
- res_name = 'WikiDatabase'
- stack_identity = identifier.HeatIdentifier(self.tenant,
- 'wordpress', '6')
- res_identity = identifier.ResourceIdentifier(resource_name=res_name,
- **stack_identity)
- ev_identity = identifier.EventIdentifier(event_id=event_id,
- **res_identity)
-
- req = self._get(stack_identity._tenant_path() + '/events')
-
- engine_resp = [
- {
- u'stack_name': u'wordpress',
- u'event_time': u'2012-07-23T13:05:39Z',
- u'stack_identity': dict(stack_identity),
- u'resource_name': res_name,
- u'resource_status_reason': u'state changed',
- u'event_identity': dict(ev_identity),
- u'resource_action': u'CREATE',
- u'resource_status': u'IN_PROGRESS',
- u'physical_resource_id': None,
- u'resource_properties': {u'UserData': u'blah'},
- u'resource_type': u'AWS::EC2::Instance',
- }
- ]
- self.m.StubOutWithMock(rpc, 'call')
- rpc.call(req.context, self.topic,
- {'namespace': None,
- 'method': 'list_events',
- 'args': {'stack_identity': stack_identity},
- 'version': self.api_version},
- None).AndReturn(engine_resp)
- self.m.ReplayAll()
-
- result = self.controller.index(req, tenant_id=self.tenant,
- stack_name=stack_identity.stack_name,
- stack_id=stack_identity.stack_id)
-
- expected = {
- 'events': [
- {
- 'id': event_id,
- 'links': [
- {'href': self._url(ev_identity), 'rel': 'self'},
- {'href': self._url(res_identity), 'rel': 'resource'},
- {'href': self._url(stack_identity), 'rel': 'stack'},
- ],
- u'resource_name': res_name,
- u'logical_resource_id': res_name,
- u'resource_status_reason': u'state changed',
- u'event_time': u'2012-07-23T13:05:39Z',
- u'resource_status': u'CREATE_IN_PROGRESS',
- u'physical_resource_id': None,
- }
- ]
- }
-
- self.assertEqual(result, expected)
- self.m.VerifyAll()
-
- def test_index_stack_nonexist(self):
- stack_identity = identifier.HeatIdentifier(self.tenant,
- 'wibble', '6')
-
- req = self._get(stack_identity._tenant_path() + '/events')
-
- error = heat_exc.StackNotFound(stack_name='a')
- self.m.StubOutWithMock(rpc, 'call')
- rpc.call(req.context, self.topic,
- {'namespace': None,
- 'method': 'list_events',
- 'args': {'stack_identity': stack_identity},
- 'version': self.api_version},
- None).AndRaise(to_remote_error(error))
- self.m.ReplayAll()
-
- resp = request_with_middleware(fault.FaultWrapper,
- self.controller.index,
- req, tenant_id=self.tenant,
- stack_name=stack_identity.stack_name,
- stack_id=stack_identity.stack_id)
-
- self.assertEqual(resp.json['code'], 404)
- self.assertEqual(resp.json['error']['type'], 'StackNotFound')
- self.m.VerifyAll()
-
- def test_index_resource_nonexist(self):
- event_id = '42'
- res_name = 'WikiDatabase'
- stack_identity = identifier.HeatIdentifier(self.tenant,
- 'wordpress', '6')
- res_identity = identifier.ResourceIdentifier(resource_name=res_name,
- **stack_identity)
- ev_identity = identifier.EventIdentifier(event_id=event_id,
- **res_identity)
-
- req = self._get(stack_identity._tenant_path() +
- '/resources/' + res_name + '/events')
-
- engine_resp = [
- {
- u'stack_name': u'wordpress',
- u'event_time': u'2012-07-23T13:05:39Z',
- u'stack_identity': dict(stack_identity),
- u'resource_name': 'SomeOtherResource',
- u'resource_status_reason': u'state changed',
- u'event_identity': dict(ev_identity),
- u'resource_action': u'CREATE',
- u'resource_status': u'IN_PROGRESS',
- u'physical_resource_id': None,
- u'resource_properties': {u'UserData': u'blah'},
- u'resource_type': u'AWS::EC2::Instance',
- }
- ]
- self.m.StubOutWithMock(rpc, 'call')
- rpc.call(req.context, self.topic,
- {'namespace': None,
- 'method': 'list_events',
- 'args': {'stack_identity': stack_identity},
- 'version': self.api_version},
- None).AndReturn(engine_resp)
- self.m.ReplayAll()
-
- self.assertRaises(webob.exc.HTTPNotFound,
- self.controller.index,
- req, tenant_id=self.tenant,
- stack_name=stack_identity.stack_name,
- stack_id=stack_identity.stack_id,
- resource_name=res_name)
- self.m.VerifyAll()
-
- def test_show(self):
- event_id = '42'
- res_name = 'WikiDatabase'
- stack_identity = identifier.HeatIdentifier(self.tenant,
- 'wordpress', '6')
- res_identity = identifier.ResourceIdentifier(resource_name=res_name,
- **stack_identity)
- ev1_identity = identifier.EventIdentifier(event_id='41',
- **res_identity)
- ev_identity = identifier.EventIdentifier(event_id=event_id,
- **res_identity)
-
- req = self._get(stack_identity._tenant_path() +
- '/resources/' + res_name + '/events/' + event_id)
-
- engine_resp = [
- {
- u'stack_name': u'wordpress',
- u'event_time': u'2012-07-23T13:05:39Z',
- u'stack_identity': dict(stack_identity),
- u'resource_name': res_name,
- u'resource_status_reason': u'state changed',
- u'event_identity': dict(ev1_identity),
- u'resource_action': u'CREATE',
- u'resource_status': u'IN_PROGRESS',
- u'physical_resource_id': None,
- u'resource_properties': {u'UserData': u'blah'},
- u'resource_type': u'AWS::EC2::Instance',
- },
- {
- u'stack_name': u'wordpress',
- u'event_time': u'2012-07-23T13:06:00Z',
- u'stack_identity': dict(stack_identity),
- u'resource_name': res_name,
- u'resource_status_reason': u'state changed',
- u'event_identity': dict(ev_identity),
- u'resource_action': u'CREATE',
- u'resource_status': u'COMPLETE',
- u'physical_resource_id':
- u'a3455d8c-9f88-404d-a85b-5315293e67de',
- u'resource_properties': {u'UserData': u'blah'},
- u'resource_type': u'AWS::EC2::Instance',
- }
- ]
- self.m.StubOutWithMock(rpc, 'call')
- rpc.call(req.context, self.topic,
- {'namespace': None,
- 'method': 'list_events',
- 'args': {'stack_identity': stack_identity},
- 'version': self.api_version},
- None).AndReturn(engine_resp)
- self.m.ReplayAll()
-
- result = self.controller.show(req, tenant_id=self.tenant,
- stack_name=stack_identity.stack_name,
- stack_id=stack_identity.stack_id,
- resource_name=res_name,
- event_id=event_id)
-
- expected = {
- 'event': {
- 'id': event_id,
- 'links': [
- {'href': self._url(ev_identity), 'rel': 'self'},
- {'href': self._url(res_identity), 'rel': 'resource'},
- {'href': self._url(stack_identity), 'rel': 'stack'},
- ],
- u'resource_name': res_name,
- u'logical_resource_id': res_name,
- u'resource_status_reason': u'state changed',
- u'event_time': u'2012-07-23T13:06:00Z',
- u'resource_status': u'CREATE_COMPLETE',
- u'physical_resource_id':
- u'a3455d8c-9f88-404d-a85b-5315293e67de',
- u'resource_type': u'AWS::EC2::Instance',
- u'resource_properties': {u'UserData': u'blah'},
- }
- }
-
- self.assertEqual(result, expected)
- self.m.VerifyAll()
-
- def test_show_nonexist(self):
- event_id = '42'
- res_name = 'WikiDatabase'
- stack_identity = identifier.HeatIdentifier(self.tenant,
- 'wordpress', '6')
- res_identity = identifier.ResourceIdentifier(resource_name=res_name,
- **stack_identity)
- ev_identity = identifier.EventIdentifier(event_id='41',
- **res_identity)
-
- req = self._get(stack_identity._tenant_path() +
- '/resources/' + res_name + '/events/' + event_id)
-
- engine_resp = [
- {
- u'stack_name': u'wordpress',
- u'event_time': u'2012-07-23T13:05:39Z',
- u'stack_identity': dict(stack_identity),
- u'resource_name': res_name,
- u'resource_status_reason': u'state changed',
- u'event_identity': dict(ev_identity),
- u'resource_action': u'CREATE',
- u'resource_status': u'IN_PROGRESS',
- u'physical_resource_id': None,
- u'resource_properties': {u'UserData': u'blah'},
- u'resource_type': u'AWS::EC2::Instance',
- }
- ]
- self.m.StubOutWithMock(rpc, 'call')
- rpc.call(req.context, self.topic,
- {'namespace': None,
- 'method': 'list_events',
- 'args': {'stack_identity': stack_identity},
- 'version': self.api_version},
- None).AndReturn(engine_resp)
- self.m.ReplayAll()
-
- self.assertRaises(webob.exc.HTTPNotFound,
- self.controller.show,
- req, tenant_id=self.tenant,
- stack_name=stack_identity.stack_name,
- stack_id=stack_identity.stack_id,
- resource_name=res_name, event_id=event_id)
- self.m.VerifyAll()
-
- def test_show_bad_resource(self):
- event_id = '42'
- res_name = 'WikiDatabase'
- stack_identity = identifier.HeatIdentifier(self.tenant,
- 'wordpress', '6')
- res_identity = identifier.ResourceIdentifier(resource_name=res_name,
- **stack_identity)
- ev_identity = identifier.EventIdentifier(event_id='41',
- **res_identity)
-
- req = self._get(stack_identity._tenant_path() +
- '/resources/' + res_name + '/events/' + event_id)
-
- engine_resp = [
- {
- u'stack_name': u'wordpress',
- u'event_time': u'2012-07-23T13:05:39Z',
- u'stack_identity': dict(stack_identity),
- u'resource_name': 'SomeOtherResourceName',
- u'resource_status_reason': u'state changed',
- u'event_identity': dict(ev_identity),
- u'resource_action': u'CREATE',
- u'resource_status': u'IN_PROGRESS',
- u'physical_resource_id': None,
- u'resource_properties': {u'UserData': u'blah'},
- u'resource_type': u'AWS::EC2::Instance',
- }
- ]
- self.m.StubOutWithMock(rpc, 'call')
- rpc.call(req.context, self.topic,
- {'namespace': None,
- 'method': 'list_events',
- 'args': {'stack_identity': stack_identity},
- 'version': self.api_version},
- None).AndReturn(engine_resp)
- self.m.ReplayAll()
-
- self.assertRaises(webob.exc.HTTPNotFound,
- self.controller.show,
- req, tenant_id=self.tenant,
- stack_name=stack_identity.stack_name,
- stack_id=stack_identity.stack_id,
- resource_name=res_name, event_id=event_id)
- self.m.VerifyAll()
-
- def test_show_stack_nonexist(self):
- event_id = '42'
- res_name = 'WikiDatabase'
- stack_identity = identifier.HeatIdentifier(self.tenant,
- 'wibble', '6')
-
- req = self._get(stack_identity._tenant_path() +
- '/resources/' + res_name + '/events/' + event_id)
-
- error = heat_exc.StackNotFound(stack_name='a')
- self.m.StubOutWithMock(rpc, 'call')
- rpc.call(req.context, self.topic,
- {'namespace': None,
- 'method': 'list_events',
- 'args': {'stack_identity': stack_identity},
- 'version': self.api_version},
- None).AndRaise(to_remote_error(error))
- self.m.ReplayAll()
-
- resp = request_with_middleware(fault.FaultWrapper,
- self.controller.show,
- req, tenant_id=self.tenant,
- stack_name=stack_identity.stack_name,
- stack_id=stack_identity.stack_id,
- resource_name=res_name,
- event_id=event_id)
-
- self.assertEqual(resp.json['code'], 404)
- self.assertEqual(resp.json['error']['type'], 'StackNotFound')
- self.m.VerifyAll()
-
-
-class RoutesTest(HeatTestCase):
-
- def assertRoute(self, mapper, path, method, action, controller, params={}):
- route = mapper.match(path, {'REQUEST_METHOD': method})
- self.assertIsNotNone(route)
- self.assertEqual(action, route['action'])
- self.assertEqual(
- controller, route['controller'].controller.__class__.__name__)
-        del route['action']
-        del route['controller']
- self.assertEqual(params, route)
-
- def setUp(self):
- super(RoutesTest, self).setUp()
- self.m = api_v1.API({}).map
-
- def test_template_handling(self):
- self.assertRoute(
- self.m,
- '/aaaa/resource_types',
- 'GET',
- 'list_resource_types',
- 'StackController',
- {
- 'tenant_id': 'aaaa',
- })
-
- self.assertRoute(
- self.m,
- '/aaaa/resource_types/test_type',
- 'GET',
- 'resource_schema',
- 'StackController',
- {
- 'tenant_id': 'aaaa',
- 'type_name': 'test_type'
- })
-
- self.assertRoute(
- self.m,
- '/aaaa/resource_types/test_type/template',
- 'GET',
- 'generate_template',
- 'StackController',
- {
- 'tenant_id': 'aaaa',
- 'type_name': 'test_type'
- })
-
- self.assertRoute(
- self.m,
- '/aaaa/validate',
- 'POST',
- 'validate_template',
- 'StackController',
- {
- 'tenant_id': 'aaaa'
- })
-
- def test_stack_collection(self):
- self.assertRoute(
- self.m,
- '/aaaa/stacks',
- 'GET',
- 'index',
- 'StackController',
- {
- 'tenant_id': 'aaaa'
- })
- self.assertRoute(
- self.m,
- '/aaaa/stacks',
- 'POST',
- 'create',
- 'StackController',
- {
- 'tenant_id': 'aaaa'
- })
- self.assertRoute(
- self.m,
- '/aaaa/stacks/detail',
- 'GET',
- 'detail',
- 'StackController',
- {
- 'tenant_id': 'aaaa'
- })
-
- def test_stack_data(self):
- self.assertRoute(
- self.m,
- '/aaaa/stacks/teststack',
- 'GET',
- 'lookup',
- 'StackController',
- {
- 'tenant_id': 'aaaa',
- 'stack_name': 'teststack'
- })
- self.assertRoute(
- self.m,
- '/aaaa/stacks/arn:openstack:heat::6548ab64fbda49deb188851a3b7d8c8b'
- ':stacks/stack-1411-06/1c5d9bb2-3464-45e2-a728-26dfa4e1d34a',
- 'GET',
- 'lookup',
- 'StackController',
- {
- 'tenant_id': 'aaaa',
- 'stack_name': 'arn:openstack:heat:'
- ':6548ab64fbda49deb188851a3b7d8c8b:stacks/stack-1411-06/'
- '1c5d9bb2-3464-45e2-a728-26dfa4e1d34a'
- })
-
- self.assertRoute(
- self.m,
- '/aaaa/stacks/teststack/resources',
- 'GET',
- 'lookup',
- 'StackController',
- {
- 'tenant_id': 'aaaa',
- 'stack_name': 'teststack',
- 'path': 'resources'
- })
- self.assertRoute(
- self.m,
- '/aaaa/stacks/teststack/events',
- 'GET',
- 'lookup',
- 'StackController',
- {
- 'tenant_id': 'aaaa',
- 'stack_name': 'teststack',
- 'path': 'events'
- })
- self.assertRoute(
- self.m,
- '/aaaa/stacks/teststack/bbbb',
- 'GET',
- 'show',
- 'StackController',
- {
- 'tenant_id': 'aaaa',
- 'stack_name': 'teststack',
- 'stack_id': 'bbbb',
- })
-
- def test_stack_data_template(self):
- self.assertRoute(
- self.m,
- '/aaaa/stacks/teststack/bbbb/template',
- 'GET',
- 'template',
- 'StackController',
- {
- 'tenant_id': 'aaaa',
- 'stack_name': 'teststack',
- 'stack_id': 'bbbb',
- })
- self.assertRoute(
- self.m,
- '/aaaa/stacks/teststack/template',
- 'GET',
- 'lookup',
- 'StackController',
- {
- 'tenant_id': 'aaaa',
- 'stack_name': 'teststack',
- 'path': 'template'
- })
-
- def test_stack_post_actions(self):
- self.assertRoute(
- self.m,
- '/aaaa/stacks/teststack/bbbb/actions',
- 'POST',
- 'action',
- 'ActionController',
- {
- 'tenant_id': 'aaaa',
- 'stack_name': 'teststack',
- 'stack_id': 'bbbb',
- })
-
- def test_stack_post_actions_lookup_redirect(self):
- self.assertRoute(
- self.m,
- '/aaaa/stacks/teststack/actions',
- 'POST',
- 'lookup',
- 'StackController',
- {
- 'tenant_id': 'aaaa',
- 'stack_name': 'teststack',
- 'path': 'actions'
- })
-
- def test_stack_update_delete(self):
- self.assertRoute(
- self.m,
- '/aaaa/stacks/teststack/bbbb',
- 'PUT',
- 'update',
- 'StackController',
- {
- 'tenant_id': 'aaaa',
- 'stack_name': 'teststack',
- 'stack_id': 'bbbb',
- })
- self.assertRoute(
- self.m,
- '/aaaa/stacks/teststack/bbbb',
- 'DELETE',
- 'delete',
- 'StackController',
- {
- 'tenant_id': 'aaaa',
- 'stack_name': 'teststack',
- 'stack_id': 'bbbb',
- })
-
- def test_resources(self):
- self.assertRoute(
- self.m,
- '/aaaa/stacks/teststack/bbbb/resources',
- 'GET',
- 'index',
- 'ResourceController',
- {
- 'tenant_id': 'aaaa',
- 'stack_name': 'teststack',
- 'stack_id': 'bbbb'
- })
- self.assertRoute(
- self.m,
- '/aaaa/stacks/teststack/bbbb/resources/cccc',
- 'GET',
- 'show',
- 'ResourceController',
- {
- 'tenant_id': 'aaaa',
- 'stack_name': 'teststack',
- 'stack_id': 'bbbb',
- 'resource_name': 'cccc'
- })
- self.assertRoute(
- self.m,
- '/aaaa/stacks/teststack/bbbb/resources/cccc/metadata',
- 'GET',
- 'metadata',
- 'ResourceController',
- {
- 'tenant_id': 'aaaa',
- 'stack_name': 'teststack',
- 'stack_id': 'bbbb',
- 'resource_name': 'cccc'
- })
-
- def test_events(self):
- self.assertRoute(
- self.m,
- '/aaaa/stacks/teststack/bbbb/events',
- 'GET',
- 'index',
- 'EventController',
- {
- 'tenant_id': 'aaaa',
- 'stack_name': 'teststack',
- 'stack_id': 'bbbb'
- })
- self.assertRoute(
- self.m,
- '/aaaa/stacks/teststack/bbbb/resources/cccc/events',
- 'GET',
- 'index',
- 'EventController',
- {
- 'tenant_id': 'aaaa',
- 'stack_name': 'teststack',
- 'stack_id': 'bbbb',
- 'resource_name': 'cccc'
- })
- self.assertRoute(
- self.m,
- '/aaaa/stacks/teststack/bbbb/resources/cccc/events/dddd',
- 'GET',
- 'show',
- 'EventController',
- {
- 'tenant_id': 'aaaa',
- 'stack_name': 'teststack',
- 'stack_id': 'bbbb',
- 'resource_name': 'cccc',
- 'event_id': 'dddd'
- })
-
-
-class ActionControllerTest(ControllerTest, HeatTestCase):
- '''
- Tests the API class which acts as the WSGI controller,
- the endpoint processing API requests after they are routed
- '''
-
- def setUp(self):
- super(ActionControllerTest, self).setUp()
- # Create WSGI controller instance
-
- class DummyConfig():
- bind_port = 8004
-
- cfgopts = DummyConfig()
- self.controller = actions.ActionController(options=cfgopts)
-
- def test_action_suspend(self):
- res_name = 'WikiDatabase'
- stack_identity = identifier.HeatIdentifier(self.tenant,
- 'wordpress', '1')
- body = {'suspend': None}
- req = self._post(stack_identity._tenant_path() + '/actions',
- data=json.dumps(body))
-
- self.m.StubOutWithMock(rpc, 'call')
- rpc.call(req.context, self.topic,
- {'namespace': None,
- 'method': 'stack_suspend',
- 'args': {'stack_identity': stack_identity},
- 'version': self.api_version},
- None).AndReturn(None)
- self.m.ReplayAll()
-
- result = self.controller.action(req, tenant_id=self.tenant,
- stack_name=stack_identity.stack_name,
- stack_id=stack_identity.stack_id,
- body=body)
- self.assertEqual(result, None)
- self.m.VerifyAll()
-
- def test_action_resume(self):
- res_name = 'WikiDatabase'
- stack_identity = identifier.HeatIdentifier(self.tenant,
- 'wordpress', '1')
- body = {'resume': None}
- req = self._post(stack_identity._tenant_path() + '/actions',
- data=json.dumps(body))
-
- self.m.StubOutWithMock(rpc, 'call')
- rpc.call(req.context, self.topic,
- {'namespace': None,
- 'method': 'stack_resume',
- 'args': {'stack_identity': stack_identity},
- 'version': self.api_version},
- None).AndReturn(None)
- self.m.ReplayAll()
-
- result = self.controller.action(req, tenant_id=self.tenant,
- stack_name=stack_identity.stack_name,
- stack_id=stack_identity.stack_id,
- body=body)
- self.assertEqual(result, None)
- self.m.VerifyAll()
-
- def test_action_badaction(self):
- res_name = 'WikiDatabase'
- stack_identity = identifier.HeatIdentifier(self.tenant,
- 'wordpress', '1')
- body = {'notallowed': None}
- req = self._post(stack_identity._tenant_path() + '/actions',
- data=json.dumps(body))
-
- self.m.ReplayAll()
-
- self.assertRaises(webob.exc.HTTPBadRequest, self.controller.action,
- req, tenant_id=self.tenant,
- stack_name=stack_identity.stack_name,
- stack_id=stack_identity.stack_id,
- body=body)
- self.m.VerifyAll()
-
- def test_action_badaction_empty(self):
- res_name = 'WikiDatabase'
- stack_identity = identifier.HeatIdentifier(self.tenant,
- 'wordpress', '1')
- body = {}
- req = self._post(stack_identity._tenant_path() + '/actions',
- data=json.dumps(body))
-
- self.m.ReplayAll()
-
- self.assertRaises(webob.exc.HTTPBadRequest, self.controller.action,
- req, tenant_id=self.tenant,
- stack_name=stack_identity.stack_name,
- stack_id=stack_identity.stack_id,
- body=body)
- self.m.VerifyAll()
-
- def test_action_badaction_multiple(self):
- res_name = 'WikiDatabase'
- stack_identity = identifier.HeatIdentifier(self.tenant,
- 'wordpress', '1')
- body = {'one': None, 'two': None}
- req = self._post(stack_identity._tenant_path() + '/actions',
- data=json.dumps(body))
-
- self.m.ReplayAll()
-
- self.assertRaises(webob.exc.HTTPBadRequest, self.controller.action,
- req, tenant_id=self.tenant,
- stack_name=stack_identity.stack_name,
- stack_id=stack_identity.stack_id,
- body=body)
- self.m.VerifyAll()
-
- def test_action_rmt_aterr(self):
- res_name = 'WikiDatabase'
- stack_identity = identifier.HeatIdentifier(self.tenant,
- 'wordpress', '1')
- body = {'suspend': None}
- req = self._post(stack_identity._tenant_path() + '/actions',
- data=json.dumps(body))
-
- self.m.StubOutWithMock(rpc, 'call')
- rpc.call(req.context, self.topic,
- {'namespace': None,
- 'method': 'stack_suspend',
- 'args': {'stack_identity': stack_identity},
- 'version': self.api_version},
- None).AndRaise(to_remote_error(AttributeError()))
- self.m.ReplayAll()
-
- resp = request_with_middleware(fault.FaultWrapper,
- self.controller.action,
- req, tenant_id=self.tenant,
- stack_name=stack_identity.stack_name,
- stack_id=stack_identity.stack_id,
- body=body)
-
- self.assertEqual(resp.json['code'], 400)
- self.assertEqual(resp.json['error']['type'], 'AttributeError')
- self.m.VerifyAll()
-
- def test_action_badaction_ise(self):
- res_name = 'WikiDatabase'
- stack_identity = identifier.HeatIdentifier(self.tenant,
- 'wordpress', '1')
- body = {'oops': None}
- req = self._post(stack_identity._tenant_path() + '/actions',
- data=json.dumps(body))
-
- self.m.ReplayAll()
-
- self.controller.ACTIONS = (SUSPEND, NEW) = ('suspend', 'oops')
-
- self.assertRaises(webob.exc.HTTPInternalServerError,
- self.controller.action,
- req, tenant_id=self.tenant,
- stack_name=stack_identity.stack_name,
- stack_id=stack_identity.stack_id,
- body=body)
- self.m.VerifyAll()
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from heat.engine import attributes
-from heat.tests import common
-
-
-class AttributeTest(common.HeatTestCase):
- """Test the Attribute class."""
-
- def test_as_output(self):
- """Test that Attribute looks right when viewed as an Output."""
- expected = {
- "Value": '{"Fn::GetAtt": ["test_resource", "test1"]}',
- "Description": "The first test attribute"
- }
- attr = attributes.Attribute("test1", "The first test attribute")
- self.assertEqual(expected, attr.as_output("test_resource"))
-
-
-class AttributesTest(common.HeatTestCase):
- """Test the Attributes class."""
-
- attributes_schema = {
- "test1": "Test attrib 1",
- "test2": "Test attrib 2",
- "test3": "Test attrib 3"
- }
-
- def setUp(self):
- super(AttributesTest, self).setUp()
- self.addCleanup(self.m.VerifyAll)
-
- def test_get_attribute(self):
- """Test that we get the attribute values we expect."""
- test_resolver = lambda x: "value1"
- self.m.ReplayAll()
- attribs = attributes.Attributes('test resource',
- self.attributes_schema,
- test_resolver)
- self.assertEqual("value1", attribs['test1'])
-
- def test_get_attribute_none(self):
- """Test that we get the attribute values we expect."""
- test_resolver = lambda x: None
- self.m.ReplayAll()
- attribs = attributes.Attributes('test resource',
- self.attributes_schema,
- test_resolver)
- self.assertEqual(None, attribs['test1'])
-
- def test_get_attribute_nonexist(self):
- """Test that we get the attribute values we expect."""
- test_resolver = lambda x: "value1"
- self.m.ReplayAll()
- attribs = attributes.Attributes('test resource',
- self.attributes_schema,
- test_resolver)
- self.assertRaises(KeyError, attribs.__getitem__, 'not there')
-
- def test_as_outputs(self):
- """Test that Output format works as expected."""
- expected = {
- "test1": {
- "Value": '{"Fn::GetAtt": ["test_resource", "test1"]}',
- "Description": "Test attrib 1"
- },
- "test2": {
- "Value": '{"Fn::GetAtt": ["test_resource", "test2"]}',
- "Description": "Test attrib 2"
- },
- "test3": {
- "Value": '{"Fn::GetAtt": ["test_resource", "test3"]}',
- "Description": "Test attrib 3"
- }
- }
- MyTestResourceClass = self.m.CreateMockAnything()
- MyTestResourceClass.attributes_schema = {
- "test1": "Test attrib 1",
- "test2": "Test attrib 2",
- "test3": "Test attrib 3"
- }
- self.m.ReplayAll()
- self.assertEqual(
- expected,
- attributes.Attributes.as_outputs("test_resource",
- MyTestResourceClass))
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Copyright 2013 OpenStack Foundation
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
-# implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from keystoneclient.v2_0 import client as keystone_client
-from keystoneclient.exceptions import Unauthorized
-from oslo.config import cfg
-import webob
-
-from heat.common.auth_password import KeystonePasswordAuthProtocol
-from heat.tests.common import HeatTestCase
-
-EXPECTED_V2_DEFAULT_ENV_RESPONSE = {
- 'HTTP_X_IDENTITY_STATUS': 'Confirmed',
- 'HTTP_X_TENANT_ID': 'tenant_id1',
- 'HTTP_X_TENANT_NAME': 'tenant_name1',
- 'HTTP_X_USER_ID': 'user_id1',
- 'HTTP_X_USER_NAME': 'user_name1',
- 'HTTP_X_ROLES': 'role1,role2',
- 'HTTP_X_USER': 'user_name1', # deprecated (diablo-compat)
- 'HTTP_X_TENANT': 'tenant_name1', # deprecated (diablo-compat)
- 'HTTP_X_ROLE': 'role1,role2', # deprecated (diablo-compat)
-}
-
-TOKEN_RESPONSE = {
- 'token': {
- 'id': 'lalalalalala',
- 'expires': '2020-01-01T00:00:10.000123Z',
- 'tenant': {
- 'id': 'tenant_id1',
- 'name': 'tenant_name1',
- },
- },
- 'user': {
- 'id': 'user_id1',
- 'name': 'user_name1',
- 'roles': [
- {'name': 'role1'},
- {'name': 'role2'},
- ],
- },
- 'serviceCatalog': {}
-}
-
-
-class FakeApp(object):
- """This represents a WSGI app protected by our auth middleware."""
-
- def __init__(self, expected_env=None):
- expected_env = expected_env or {}
- self.expected_env = dict(EXPECTED_V2_DEFAULT_ENV_RESPONSE)
- self.expected_env.update(expected_env)
-
- def __call__(self, env, start_response):
- """Assert that expected environment is present when finally called."""
- for k, v in self.expected_env.items():
- assert env[k] == v, '%s != %s' % (env[k], v)
- resp = webob.Response()
- resp.body = 'SUCCESS'
- return resp(env, start_response)
-
-
-class KeystonePasswordAuthProtocolTest(HeatTestCase):
-
- def setUp(self):
- super(KeystonePasswordAuthProtocolTest, self).setUp()
- self.config = {'auth_uri': 'http://keystone.test.com:5000'}
- self.app = FakeApp(
- expected_env={'HTTP_X_AUTH_URL': self.config['auth_uri']})
- self.middleware = KeystonePasswordAuthProtocol(self.app, self.config)
-
- def _start_fake_response(self, status, headers):
- self.response_status = int(status.split(' ', 1)[0])
- self.response_headers = dict(headers)
-
- def test_valid_request(self):
- self.m.StubOutClassWithMocks(keystone_client, 'Client')
- mock_client = keystone_client.Client(
- username='user_name1', password='goodpassword',
- tenant_id='tenant_id1', auth_url=self.config['auth_uri'])
- mock_client.auth_ref = TOKEN_RESPONSE
- self.m.ReplayAll()
- req = webob.Request.blank('/tenant_id1/')
- req.headers['X_AUTH_USER'] = 'user_name1'
- req.headers['X_AUTH_KEY'] = 'goodpassword'
- self.middleware(req.environ, self._start_fake_response)
- self.m.VerifyAll()
-
- def test_request_with_bad_credentials(self):
- self.m.StubOutWithMock(
- keystone_client, 'Client', use_mock_anything=True)
- mock_client = keystone_client.Client(
- username='user_name1', password='badpassword',
- tenant_id='tenant_id1', auth_url=self.config['auth_uri'])
- mock_client.AndRaise(Unauthorized(401))
- self.m.ReplayAll()
- req = webob.Request.blank('/tenant_id1/')
- req.headers['X_AUTH_USER'] = 'user_name1'
- req.headers['X_AUTH_KEY'] = 'badpassword'
- self.middleware(req.environ, self._start_fake_response)
- self.m.VerifyAll()
- self.assertEqual(self.response_status, 401)
-
- def test_request_with_no_tenant_in_url_or_auth_headers(self):
- req = webob.Request.blank('/')
- self.middleware(req.environ, self._start_fake_response)
- self.assertEqual(self.response_status, 401)
-
- def test_multi_cloud(self):
- allowed_auth_uris = ['http://multicloud.test.com:5000/v2.0']
- cfg.CONF.set_override('multi_cloud', True, group='auth_password')
- auth_url = 'http://multicloud.test.com:5000/v2.0'
- cfg.CONF.set_override('allowed_auth_uris',
- allowed_auth_uris,
- group='auth_password')
- self.app = FakeApp(
- expected_env={'HTTP_X_AUTH_URL': auth_url})
- self.middleware = KeystonePasswordAuthProtocol(self.app, self.config)
-
- self.m.StubOutClassWithMocks(keystone_client, 'Client')
- mock_client = keystone_client.Client(
- username='user_name1', password='goodpassword',
- tenant_id='tenant_id1', auth_url=auth_url)
- mock_client.auth_ref = TOKEN_RESPONSE
- self.m.ReplayAll()
- req = webob.Request.blank('/tenant_id1/')
- req.headers['X_AUTH_USER'] = 'user_name1'
- req.headers['X_AUTH_KEY'] = 'goodpassword'
- req.headers['X_AUTH_URL'] = auth_url
- self.middleware(req.environ, self._start_fake_response)
- self.m.VerifyAll()
-
- def test_multi_cloud_empty_allowed_uris(self):
- cfg.CONF.set_override('multi_cloud', True, group='auth_password')
- auth_url = 'http://multicloud.test.com:5000/v2.0'
- cfg.CONF.set_override('allowed_auth_uris',
- [],
- group='auth_password')
- req = webob.Request.blank('/tenant_id1/')
- req.headers['X_AUTH_USER'] = 'user_name1'
- req.headers['X_AUTH_KEY'] = 'goodpassword'
- req.headers['X_AUTH_URL'] = auth_url
- self.middleware(req.environ, self._start_fake_response)
- self.assertEqual(self.response_status, 401)
-
- def test_multi_cloud_target_not_allowed(self):
- cfg.CONF.set_override('multi_cloud', True, group='auth_password')
- auth_url = 'http://multicloud.test.com:5000/v2.0'
- cfg.CONF.set_override('allowed_auth_uris',
- ['http://some.other.url:5000/v2.0'],
- group='auth_password')
- req = webob.Request.blank('/tenant_id1/')
- req.headers['X_AUTH_USER'] = 'user_name1'
- req.headers['X_AUTH_KEY'] = 'goodpassword'
- req.headers['X_AUTH_URL'] = auth_url
- self.middleware(req.environ, self._start_fake_response)
- self.assertEqual(self.response_status, 401)
-
- def test_multi_cloud_no_auth_url(self):
- cfg.CONF.set_override('multi_cloud', True, group='auth_password')
- req = webob.Request.blank('/tenant_id1/')
- req.headers['X_AUTH_USER'] = 'user_name1'
- req.headers['X_AUTH_KEY'] = 'goodpassword'
-        self.middleware(req.environ, self._start_fake_response)
- self.assertEqual(self.response_status, 400)
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import datetime
-import copy
-
-import mox
-
-from testtools import skipIf
-
-from oslo.config import cfg
-
-from heat.common import template_format
-from heat.common import exception
-from heat.engine.resources import autoscaling as asc
-from heat.engine.resources import loadbalancer
-from heat.engine.resources import instance
-from heat.engine.resources.neutron import loadbalancer as neutron_lb
-from heat.engine import parser
-from heat.engine import resource
-from heat.engine import scheduler
-from heat.engine.resource import Metadata
-from heat.openstack.common import timeutils
-from heat.openstack.common.importutils import try_import
-from heat.tests.common import HeatTestCase
-from heat.tests import fakes
-from heat.tests import utils
-
-neutronclient = try_import('neutronclient.v2_0.client')
-
-
-as_template = '''
-{
- "AWSTemplateFormatVersion" : "2010-09-09",
- "Description" : "AutoScaling Test",
- "Parameters" : {
- "ImageId": {"Type": "String"},
- "KeyName": {"Type": "String"}
- },
- "Resources" : {
- "WebServerGroup" : {
- "Type" : "AWS::AutoScaling::AutoScalingGroup",
- "Properties" : {
- "AvailabilityZones" : ["nova"],
- "LaunchConfigurationName" : { "Ref" : "LaunchConfig" },
- "MinSize" : "1",
- "MaxSize" : "5",
- "LoadBalancerNames" : [ { "Ref" : "ElasticLoadBalancer" } ]
- }
- },
- "WebServerScaleUpPolicy" : {
- "Type" : "AWS::AutoScaling::ScalingPolicy",
- "Properties" : {
- "AdjustmentType" : "ChangeInCapacity",
- "AutoScalingGroupName" : { "Ref" : "WebServerGroup" },
- "Cooldown" : "60",
- "ScalingAdjustment" : "1"
- }
- },
- "WebServerScaleDownPolicy" : {
- "Type" : "AWS::AutoScaling::ScalingPolicy",
- "Properties" : {
- "AdjustmentType" : "ChangeInCapacity",
- "AutoScalingGroupName" : { "Ref" : "WebServerGroup" },
- "Cooldown" : "60",
- "ScalingAdjustment" : "-1"
- }
- },
- "ElasticLoadBalancer" : {
- "Type" : "AWS::ElasticLoadBalancing::LoadBalancer",
- "Properties" : {
- "AvailabilityZones" : ["nova"],
- "Listeners" : [ {
- "LoadBalancerPort" : "80",
- "InstancePort" : "80",
- "Protocol" : "HTTP"
- }]
- }
- },
- "LaunchConfig" : {
- "Type" : "AWS::AutoScaling::LaunchConfiguration",
- "Properties": {
- "ImageId" : {"Ref": "ImageId"},
-        "InstanceType" : "bar"
- }
- }
- }
-}
-'''
-
-
-class AutoScalingTest(HeatTestCase):
- dummy_instance_id = 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa'
- params = {'KeyName': 'test', 'ImageId': 'foo'}
-
- def setUp(self):
- super(AutoScalingTest, self).setUp()
- utils.setup_dummy_db()
- cfg.CONF.set_default('heat_waitcondition_server_url',
- 'http://server.test:8000/v1/waitcondition')
- self.fc = fakes.FakeKeystoneClient()
-
- def create_scaling_group(self, t, stack, resource_name):
- # create the launch configuration resource
- conf = stack.resources['LaunchConfig']
- self.assertEqual(None, conf.validate())
- scheduler.TaskRunner(conf.create)()
- self.assertEqual((conf.CREATE, conf.COMPLETE), conf.state)
-
- # create the group resource
- rsrc = stack.resources[resource_name]
- self.assertEqual(None, rsrc.validate())
- scheduler.TaskRunner(rsrc.create)()
- self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state)
- return rsrc
-
- def create_scaling_policy(self, t, stack, resource_name):
- rsrc = stack.resources[resource_name]
- self.assertEqual(None, rsrc.validate())
- scheduler.TaskRunner(rsrc.create)()
- self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state)
- return rsrc
-
- def _stub_validate(self):
- self.m.StubOutWithMock(parser.Stack, 'validate')
- parser.Stack.validate().MultipleTimes()
-
- def _stub_create(self, num):
- self._stub_validate()
- self.m.StubOutWithMock(instance.Instance, 'handle_create')
- self.m.StubOutWithMock(instance.Instance, 'check_create_complete')
- cookie = object()
- for x in range(num):
- instance.Instance.handle_create().AndReturn(cookie)
- instance.Instance.check_create_complete(cookie).AndReturn(False)
- instance.Instance.check_create_complete(
- cookie).MultipleTimes().AndReturn(True)
-
- def _stub_lb_reload(self, num, unset=True, nochange=False):
- expected_list = [self.dummy_instance_id] * num
- if unset:
- self.m.VerifyAll()
- self.m.UnsetStubs()
- if num > 0:
- self.m.StubOutWithMock(instance.Instance, 'FnGetRefId')
- instance.Instance.FnGetRefId().MultipleTimes().AndReturn(
- self.dummy_instance_id)
-
- self.m.StubOutWithMock(loadbalancer.LoadBalancer, 'handle_update')
- if nochange:
- loadbalancer.LoadBalancer.handle_update(
- mox.IgnoreArg(), mox.IgnoreArg(), {}).AndReturn(None)
- else:
- loadbalancer.LoadBalancer.handle_update(
- mox.IgnoreArg(), mox.IgnoreArg(),
- {'Instances': expected_list}).AndReturn(None)
-
- def _stub_meta_expected(self, now, data, nmeta=1):
- # Stop time at now
- self.m.StubOutWithMock(timeutils, 'utcnow')
- timeutils.utcnow().MultipleTimes().AndReturn(now)
-
- # Then set a stub to ensure the metadata update is as
- # expected based on the timestamp and data
- self.m.StubOutWithMock(Metadata, '__set__')
- expected = {timeutils.strtime(now): data}
- # Note for ScalingPolicy, we expect to get a metadata
- # update for the policy and autoscaling group, so pass nmeta=2
- for x in range(nmeta):
- Metadata.__set__(mox.IgnoreArg(), expected).AndReturn(None)
-
- def test_scaling_delete_empty(self):
- t = template_format.parse(as_template)
- properties = t['Resources']['WebServerGroup']['Properties']
- properties['MinSize'] = '0'
- properties['MaxSize'] = '0'
- stack = utils.parse_stack(t, params=self.params)
- self._stub_lb_reload(0)
- self.m.ReplayAll()
- rsrc = self.create_scaling_group(t, stack, 'WebServerGroup')
- self.assertEqual(None, rsrc.FnGetAtt("InstanceList"))
-
- rsrc.delete()
- self.m.VerifyAll()
-
- def test_scaling_adjust_down_empty(self):
- t = template_format.parse(as_template)
- properties = t['Resources']['WebServerGroup']['Properties']
- properties['MinSize'] = '1'
- properties['MaxSize'] = '1'
- stack = utils.parse_stack(t, params=self.params)
-
- self._stub_lb_reload(1)
- now = timeutils.utcnow()
- self._stub_meta_expected(now, 'ExactCapacity : 1')
- self._stub_create(1)
- self.m.ReplayAll()
-
- rsrc = self.create_scaling_group(t, stack, 'WebServerGroup')
- self.assertEqual(['WebServerGroup-0'], rsrc.get_instance_names())
-
- # Reduce the min size to 0, should complete without adjusting
- update_snippet = copy.deepcopy(rsrc.parsed_template())
- update_snippet['Properties']['MinSize'] = '0'
- scheduler.TaskRunner(rsrc.update, update_snippet)()
- self.assertEqual(['WebServerGroup-0'], rsrc.get_instance_names())
-
- # trigger adjustment to reduce to 0, there should be no more instances
- self._stub_lb_reload(0)
- self._stub_meta_expected(now, 'ChangeInCapacity : -1')
- self.m.ReplayAll()
- rsrc.adjust(-1)
- self.assertEqual([], rsrc.get_instance_names())
-
- rsrc.delete()
- self.m.VerifyAll()
-
- def test_scaling_group_update_replace(self):
- t = template_format.parse(as_template)
- stack = utils.parse_stack(t, params=self.params)
-
- self._stub_lb_reload(1)
- now = timeutils.utcnow()
- self._stub_meta_expected(now, 'ExactCapacity : 1')
- self._stub_create(1)
- self.m.ReplayAll()
- rsrc = self.create_scaling_group(t, stack, 'WebServerGroup')
-
- self.assertEqual(utils.PhysName(stack.name, rsrc.name),
- rsrc.FnGetRefId())
- self.assertEqual(['WebServerGroup-0'], rsrc.get_instance_names())
- update_snippet = copy.deepcopy(rsrc.parsed_template())
- update_snippet['Properties']['AvailabilityZones'] = ['foo']
- updater = scheduler.TaskRunner(rsrc.update, update_snippet)
- self.assertRaises(resource.UpdateReplace, updater)
-
- rsrc.delete()
- self.m.VerifyAll()
-
- def test_scaling_group_suspend(self):
- t = template_format.parse(as_template)
- stack = utils.parse_stack(t, params=self.params)
-
- self._stub_lb_reload(1)
- now = timeutils.utcnow()
- self._stub_meta_expected(now, 'ExactCapacity : 1')
- self._stub_create(1)
- self.m.ReplayAll()
- rsrc = self.create_scaling_group(t, stack, 'WebServerGroup')
- self.assertEqual(utils.PhysName(stack.name, rsrc.name),
- rsrc.FnGetRefId())
- self.assertEqual(['WebServerGroup-0'], rsrc.get_instance_names())
- self.assertEqual(rsrc.state, (rsrc.CREATE, rsrc.COMPLETE))
-
- self.m.VerifyAll()
- self.m.UnsetStubs()
-
- self.m.StubOutWithMock(instance.Instance, 'handle_suspend')
- self.m.StubOutWithMock(instance.Instance, 'check_suspend_complete')
- inst_cookie = (object(), object(), object())
- instance.Instance.handle_suspend().AndReturn(inst_cookie)
- instance.Instance.check_suspend_complete(inst_cookie).AndReturn(False)
- instance.Instance.check_suspend_complete(inst_cookie).AndReturn(True)
- self.m.ReplayAll()
-
- scheduler.TaskRunner(rsrc.suspend)()
- self.assertEqual(rsrc.state, (rsrc.SUSPEND, rsrc.COMPLETE))
-
- rsrc.delete()
- self.m.VerifyAll()
-
- def test_scaling_group_resume(self):
- t = template_format.parse(as_template)
- stack = utils.parse_stack(t, params=self.params)
-
- self._stub_lb_reload(1)
- now = timeutils.utcnow()
- self._stub_meta_expected(now, 'ExactCapacity : 1')
- self._stub_create(1)
- self.m.ReplayAll()
- rsrc = self.create_scaling_group(t, stack, 'WebServerGroup')
- self.assertEqual(utils.PhysName(stack.name, rsrc.name),
- rsrc.FnGetRefId())
- self.assertEqual(['WebServerGroup-0'], rsrc.get_instance_names())
- self.assertEqual(rsrc.state, (rsrc.CREATE, rsrc.COMPLETE))
-
- self.m.VerifyAll()
- self.m.UnsetStubs()
-
- self.m.StubOutWithMock(instance.Instance, 'handle_resume')
- self.m.StubOutWithMock(instance.Instance, 'check_resume_complete')
- inst_cookie = (object(), object(), object())
- instance.Instance.handle_resume().AndReturn(inst_cookie)
- instance.Instance.check_resume_complete(inst_cookie).AndReturn(False)
- instance.Instance.check_resume_complete(inst_cookie).AndReturn(True)
- self.m.ReplayAll()
-
- rsrc.state_set(rsrc.SUSPEND, rsrc.COMPLETE)
- for i in rsrc.nested().resources.values():
- i.state_set(rsrc.SUSPEND, rsrc.COMPLETE)
-
- scheduler.TaskRunner(rsrc.resume)()
- self.assertEqual(rsrc.state, (rsrc.RESUME, rsrc.COMPLETE))
-
- rsrc.delete()
- self.m.VerifyAll()
-
- def test_scaling_group_suspend_multiple(self):
- t = template_format.parse(as_template)
- properties = t['Resources']['WebServerGroup']['Properties']
- properties['DesiredCapacity'] = '2'
- stack = utils.parse_stack(t, params=self.params)
-
- self._stub_lb_reload(2)
- now = timeutils.utcnow()
- self._stub_meta_expected(now, 'ExactCapacity : 2')
- self._stub_create(2)
- self.m.ReplayAll()
- rsrc = self.create_scaling_group(t, stack, 'WebServerGroup')
- self.assertEqual(utils.PhysName(stack.name, rsrc.name),
- rsrc.FnGetRefId())
- self.assertEqual(['WebServerGroup-0', 'WebServerGroup-1'],
- rsrc.get_instance_names())
- self.assertEqual(rsrc.state, (rsrc.CREATE, rsrc.COMPLETE))
-
- self.m.VerifyAll()
- self.m.UnsetStubs()
-
- self.m.StubOutWithMock(instance.Instance, 'handle_suspend')
- self.m.StubOutWithMock(instance.Instance, 'check_suspend_complete')
- inst_cookie1 = ('foo1', 'foo2', 'foo3')
- inst_cookie2 = ('bar1', 'bar2', 'bar3')
- instance.Instance.handle_suspend().InAnyOrder().AndReturn(inst_cookie1)
- instance.Instance.handle_suspend().InAnyOrder().AndReturn(inst_cookie2)
- instance.Instance.check_suspend_complete(inst_cookie1).InAnyOrder(
- ).AndReturn(True)
- instance.Instance.check_suspend_complete(inst_cookie2).InAnyOrder(
- ).AndReturn(True)
- self.m.ReplayAll()
-
- scheduler.TaskRunner(rsrc.suspend)()
- self.assertEqual(rsrc.state, (rsrc.SUSPEND, rsrc.COMPLETE))
-
- rsrc.delete()
- self.m.VerifyAll()
-
- def test_scaling_group_resume_multiple(self):
- t = template_format.parse(as_template)
- properties = t['Resources']['WebServerGroup']['Properties']
- properties['DesiredCapacity'] = '2'
- stack = utils.parse_stack(t, params=self.params)
-
- self._stub_lb_reload(2)
- now = timeutils.utcnow()
- self._stub_meta_expected(now, 'ExactCapacity : 2')
- self._stub_create(2)
- self.m.ReplayAll()
- rsrc = self.create_scaling_group(t, stack, 'WebServerGroup')
- self.assertEqual(utils.PhysName(stack.name, rsrc.name),
- rsrc.FnGetRefId())
- self.assertEqual(['WebServerGroup-0', 'WebServerGroup-1'],
- rsrc.get_instance_names())
- self.assertEqual(rsrc.state, (rsrc.CREATE, rsrc.COMPLETE))
-
- self.m.VerifyAll()
- self.m.UnsetStubs()
-
- self.m.StubOutWithMock(instance.Instance, 'handle_resume')
- self.m.StubOutWithMock(instance.Instance, 'check_resume_complete')
- inst_cookie1 = ('foo1', 'foo2', 'foo3')
- inst_cookie2 = ('bar1', 'bar2', 'bar3')
- instance.Instance.handle_resume().InAnyOrder().AndReturn(inst_cookie1)
- instance.Instance.handle_resume().InAnyOrder().AndReturn(inst_cookie2)
- instance.Instance.check_resume_complete(inst_cookie1).InAnyOrder(
- ).AndReturn(True)
- instance.Instance.check_resume_complete(inst_cookie2).InAnyOrder(
- ).AndReturn(True)
- self.m.ReplayAll()
-
- rsrc.state_set(rsrc.SUSPEND, rsrc.COMPLETE)
- for i in rsrc.nested().resources.values():
- i.state_set(rsrc.SUSPEND, rsrc.COMPLETE)
-
- scheduler.TaskRunner(rsrc.resume)()
- self.assertEqual(rsrc.state, (rsrc.RESUME, rsrc.COMPLETE))
-
- rsrc.delete()
- self.m.VerifyAll()
-
- def test_scaling_group_suspend_fail(self):
- t = template_format.parse(as_template)
- stack = utils.parse_stack(t, params=self.params)
-
- self._stub_lb_reload(1)
- now = timeutils.utcnow()
- self._stub_meta_expected(now, 'ExactCapacity : 1')
- self._stub_create(1)
- self.m.ReplayAll()
- rsrc = self.create_scaling_group(t, stack, 'WebServerGroup')
- self.assertEqual(utils.PhysName(stack.name, rsrc.name),
- rsrc.FnGetRefId())
- self.assertEqual(['WebServerGroup-0'], rsrc.get_instance_names())
- self.assertEqual(rsrc.state, (rsrc.CREATE, rsrc.COMPLETE))
-
- self.m.VerifyAll()
- self.m.UnsetStubs()
-
- self.m.StubOutWithMock(instance.Instance, 'handle_suspend')
- self.m.StubOutWithMock(instance.Instance, 'check_suspend_complete')
- instance.Instance.handle_suspend().AndRaise(Exception('oops'))
- self.m.ReplayAll()
-
- sus_task = scheduler.TaskRunner(rsrc.suspend)
- self.assertRaises(exception.ResourceFailure, sus_task, ())
- self.assertEqual(rsrc.state, (rsrc.SUSPEND, rsrc.FAILED))
- self.assertEqual(rsrc.status_reason,
- 'Error: Resource suspend failed: Exception: oops')
-
- rsrc.delete()
- self.m.VerifyAll()
-
- def test_scaling_group_resume_fail(self):
- t = template_format.parse(as_template)
- stack = utils.parse_stack(t, params=self.params)
-
- self._stub_lb_reload(1)
- now = timeutils.utcnow()
- self._stub_meta_expected(now, 'ExactCapacity : 1')
- self._stub_create(1)
- self.m.ReplayAll()
- rsrc = self.create_scaling_group(t, stack, 'WebServerGroup')
- self.assertEqual(utils.PhysName(stack.name, rsrc.name),
- rsrc.FnGetRefId())
- self.assertEqual(['WebServerGroup-0'], rsrc.get_instance_names())
- self.assertEqual(rsrc.state, (rsrc.CREATE, rsrc.COMPLETE))
-
- self.m.VerifyAll()
- self.m.UnsetStubs()
-
- self.m.StubOutWithMock(instance.Instance, 'handle_resume')
- self.m.StubOutWithMock(instance.Instance, 'check_resume_complete')
- instance.Instance.handle_resume().AndRaise(Exception('oops'))
- self.m.ReplayAll()
-
- rsrc.state_set(rsrc.SUSPEND, rsrc.COMPLETE)
- for i in rsrc.nested().resources.values():
- i.state_set(rsrc.SUSPEND, rsrc.COMPLETE)
-
- sus_task = scheduler.TaskRunner(rsrc.resume)
- self.assertRaises(exception.ResourceFailure, sus_task, ())
- self.assertEqual(rsrc.state, (rsrc.RESUME, rsrc.FAILED))
- self.assertEqual(rsrc.status_reason,
- 'Error: Resource resume failed: Exception: oops')
-
- rsrc.delete()
- self.m.VerifyAll()
-
- def test_scaling_group_create_error(self):
- t = template_format.parse(as_template)
- stack = utils.parse_stack(t, params=self.params)
-
- self._stub_validate()
- self.m.StubOutWithMock(instance.Instance, 'handle_create')
- self.m.StubOutWithMock(instance.Instance, 'check_create_complete')
- instance.Instance.handle_create().AndRaise(Exception)
-
- self.m.ReplayAll()
-
- conf = stack.resources['LaunchConfig']
- self.assertEqual(None, conf.validate())
- scheduler.TaskRunner(conf.create)()
- self.assertEqual((conf.CREATE, conf.COMPLETE), conf.state)
-
- rsrc = stack.resources['WebServerGroup']
- self.assertEqual(None, rsrc.validate())
- self.assertRaises(exception.ResourceFailure,
- scheduler.TaskRunner(rsrc.create))
- self.assertEqual((rsrc.CREATE, rsrc.FAILED), rsrc.state)
-
- self.assertEqual([], rsrc.get_instance_names())
-
- self.m.VerifyAll()
-
- def test_scaling_group_update_ok_maxsize(self):
- t = template_format.parse(as_template)
- properties = t['Resources']['WebServerGroup']['Properties']
- properties['MinSize'] = '1'
- properties['MaxSize'] = '3'
- stack = utils.parse_stack(t, params=self.params)
-
- self._stub_lb_reload(1)
- now = timeutils.utcnow()
- self._stub_meta_expected(now, 'ExactCapacity : 1')
- self._stub_create(1)
- self.m.ReplayAll()
- rsrc = self.create_scaling_group(t, stack, 'WebServerGroup')
- self.assertEqual(['WebServerGroup-0'], rsrc.get_instance_names())
-
- # Reduce the max size to 2, should complete without adjusting
- update_snippet = copy.deepcopy(rsrc.parsed_template())
- update_snippet['Properties']['MaxSize'] = '2'
- scheduler.TaskRunner(rsrc.update, update_snippet)()
- self.assertEqual(['WebServerGroup-0'], rsrc.get_instance_names())
-
- self.assertEqual('2', rsrc.properties['MaxSize'])
-
- rsrc.delete()
- self.m.VerifyAll()
-
- def test_scaling_group_update_ok_minsize(self):
- t = template_format.parse(as_template)
- properties = t['Resources']['WebServerGroup']['Properties']
- properties['MinSize'] = '1'
- properties['MaxSize'] = '3'
- stack = utils.parse_stack(t, params=self.params)
-
- self._stub_lb_reload(1)
- now = timeutils.utcnow()
- self._stub_meta_expected(now, 'ExactCapacity : 1')
- self._stub_create(1)
- self.m.ReplayAll()
- rsrc = self.create_scaling_group(t, stack, 'WebServerGroup')
- self.assertEqual(['WebServerGroup-0'], rsrc.get_instance_names())
-
- # Increase min size to 2, should trigger an ExactCapacity adjust
- self._stub_lb_reload(2)
- self._stub_meta_expected(now, 'ExactCapacity : 2')
- self._stub_create(1)
- self.m.ReplayAll()
-
- update_snippet = copy.deepcopy(rsrc.parsed_template())
- update_snippet['Properties']['MinSize'] = '2'
- scheduler.TaskRunner(rsrc.update, update_snippet)()
- self.assertEqual(['WebServerGroup-0', 'WebServerGroup-1'],
- rsrc.get_instance_names())
- self.assertEqual('2', rsrc.properties['MinSize'])
-
- rsrc.delete()
- self.m.VerifyAll()
-
- def test_scaling_group_update_ok_desired(self):
- t = template_format.parse(as_template)
- properties = t['Resources']['WebServerGroup']['Properties']
- properties['MinSize'] = '1'
- properties['MaxSize'] = '3'
- stack = utils.parse_stack(t, params=self.params)
-
- self._stub_lb_reload(1)
- now = timeutils.utcnow()
- self._stub_meta_expected(now, 'ExactCapacity : 1')
- self._stub_create(1)
- self.m.ReplayAll()
- rsrc = self.create_scaling_group(t, stack, 'WebServerGroup')
- self.assertEqual(['WebServerGroup-0'], rsrc.get_instance_names())
-
- # Increase min size to 2 via DesiredCapacity, should adjust
- self._stub_lb_reload(2)
- self._stub_meta_expected(now, 'ExactCapacity : 2')
- self._stub_create(1)
- self.m.ReplayAll()
-
- update_snippet = copy.deepcopy(rsrc.parsed_template())
- update_snippet['Properties']['DesiredCapacity'] = '2'
- scheduler.TaskRunner(rsrc.update, update_snippet)()
- self.assertEqual(['WebServerGroup-0', 'WebServerGroup-1'],
- rsrc.get_instance_names())
-
- self.assertEqual('2', rsrc.properties['DesiredCapacity'])
-
- rsrc.delete()
- self.m.VerifyAll()
-
- def test_scaling_group_update_ok_desired_remove(self):
- t = template_format.parse(as_template)
- properties = t['Resources']['WebServerGroup']['Properties']
- properties['DesiredCapacity'] = '2'
- stack = utils.parse_stack(t, params=self.params)
-
- self._stub_lb_reload(2)
- now = timeutils.utcnow()
- self._stub_meta_expected(now, 'ExactCapacity : 2')
- self._stub_create(2)
- self.m.ReplayAll()
- rsrc = self.create_scaling_group(t, stack, 'WebServerGroup')
- self.assertEqual(['WebServerGroup-0', 'WebServerGroup-1'],
- rsrc.get_instance_names())
-
- # Remove DesiredCapacity from the updated template; this should
- # have no effect, since it's an optional parameter
- update_snippet = copy.deepcopy(rsrc.parsed_template())
- del update_snippet['Properties']['DesiredCapacity']
- scheduler.TaskRunner(rsrc.update, update_snippet)()
- self.assertEqual(['WebServerGroup-0', 'WebServerGroup-1'],
- rsrc.get_instance_names())
-
- self.assertEqual(None, rsrc.properties['DesiredCapacity'])
-
- rsrc.delete()
- self.m.VerifyAll()
-
- def test_scaling_group_update_ok_cooldown(self):
- t = template_format.parse(as_template)
- properties = t['Resources']['WebServerGroup']['Properties']
- properties['Cooldown'] = '60'
- stack = utils.parse_stack(t, params=self.params)
-
- self._stub_lb_reload(1)
- now = timeutils.utcnow()
- self._stub_meta_expected(now, 'ExactCapacity : 1')
- self._stub_create(1)
- self.m.ReplayAll()
- rsrc = self.create_scaling_group(t, stack, 'WebServerGroup')
-
- self.assertEqual(utils.PhysName(stack.name, rsrc.name),
- rsrc.FnGetRefId())
- self.assertEqual(['WebServerGroup-0'], rsrc.get_instance_names())
- update_snippet = copy.deepcopy(rsrc.parsed_template())
- update_snippet['Properties']['Cooldown'] = '61'
- scheduler.TaskRunner(rsrc.update, update_snippet)()
- self.assertEqual('61', rsrc.properties['Cooldown'])
-
- rsrc.delete()
- self.m.VerifyAll()
-
- def test_lb_reload_static_resolve(self):
- t = template_format.parse(as_template)
- properties = t['Resources']['ElasticLoadBalancer']['Properties']
- properties['AvailabilityZones'] = {'Fn::GetAZs': ''}
-
- self.m.StubOutWithMock(parser.Stack, 'get_availability_zones')
- parser.Stack.get_availability_zones().MultipleTimes().AndReturn(
- ['abc', 'xyz'])
-
- # Check that the Fn::GetAZs is correctly resolved
- expected = {u'Type': u'AWS::ElasticLoadBalancing::LoadBalancer',
- u'Properties': {'Instances': ['WebServerGroup-0'],
- u'Listeners': [{u'InstancePort': u'80',
- u'LoadBalancerPort': u'80',
- u'Protocol': u'HTTP'}],
- u'AvailabilityZones': ['abc', 'xyz']}}
-
- now = timeutils.utcnow()
- self._stub_meta_expected(now, 'ExactCapacity : 1')
- self._stub_create(1)
- self.m.ReplayAll()
- stack = utils.parse_stack(t, params=self.params)
-
- lb = stack['ElasticLoadBalancer']
- self.m.StubOutWithMock(lb, 'handle_update')
- lb.handle_update(expected,
- mox.IgnoreArg(),
- mox.IgnoreArg()).AndReturn(None)
- self.m.ReplayAll()
-
- rsrc = self.create_scaling_group(t, stack, 'WebServerGroup')
- self.assertEqual(utils.PhysName(stack.name, rsrc.name),
- rsrc.FnGetRefId())
- self.assertEqual(['WebServerGroup-0'], rsrc.get_instance_names())
- update_snippet = copy.deepcopy(rsrc.parsed_template())
- update_snippet['Properties']['Cooldown'] = '61'
- scheduler.TaskRunner(rsrc.update, update_snippet)()
-
- rsrc.delete()
- self.m.VerifyAll()
-
- @skipIf(neutronclient is None, 'neutronclient unavailable')
- def test_lb_reload_members(self):
- t = template_format.parse(as_template)
- t['Resources']['ElasticLoadBalancer'] = {
- 'Type': 'OS::Neutron::LoadBalancer',
- 'Properties': {
- 'protocol_port': 8080,
- 'pool_id': 'pool123'
- }
- }
-
- expected = {
- 'Type': 'OS::Neutron::LoadBalancer',
- 'Properties': {
- 'protocol_port': 8080,
- 'pool_id': 'pool123',
- 'members': [u'WebServerGroup-0']}
- }
- self.m.StubOutWithMock(neutron_lb.LoadBalancer, 'handle_update')
- neutron_lb.LoadBalancer.handle_update(expected,
- mox.IgnoreArg(),
- mox.IgnoreArg()).AndReturn(None)
-
- now = timeutils.utcnow()
- self._stub_meta_expected(now, 'ExactCapacity : 1')
- self._stub_create(1)
- self.m.ReplayAll()
- stack = utils.parse_stack(t, params=self.params)
- self.create_scaling_group(t, stack, 'WebServerGroup')
-
- self.m.VerifyAll()
-
- @skipIf(neutronclient is None, 'neutronclient unavailable')
- def test_lb_reload_invalid_resource(self):
- t = template_format.parse(as_template)
- t['Resources']['ElasticLoadBalancer'] = {
- 'Type': 'AWS::EC2::Volume',
- 'Properties': {
- 'AvailabilityZone': 'nova'
- }
- }
-
- self._stub_create(1)
- self.m.ReplayAll()
- stack = utils.parse_stack(t, params=self.params)
- error = self.assertRaises(
- exception.ResourceFailure,
- self.create_scaling_group, t, stack, 'WebServerGroup')
- self.assertEqual(
- "Error: Unsupported resource 'ElasticLoadBalancer' in "
- "LoadBalancerNames",
- str(error))
-
- self.m.VerifyAll()
-
- def test_scaling_group_adjust(self):
- t = template_format.parse(as_template)
- stack = utils.parse_stack(t, params=self.params)
-
- # start with 3
- properties = t['Resources']['WebServerGroup']['Properties']
- properties['DesiredCapacity'] = '3'
- self._stub_lb_reload(3)
- now = timeutils.utcnow()
- self._stub_meta_expected(now, 'ExactCapacity : 3')
- self._stub_create(3)
- self.m.ReplayAll()
- rsrc = self.create_scaling_group(t, stack, 'WebServerGroup')
- self.assertEqual(['WebServerGroup-0', 'WebServerGroup-1',
- 'WebServerGroup-2'],
- rsrc.get_instance_names())
-
- # reduce to 1
- self._stub_lb_reload(1)
- self._stub_validate()
- self._stub_meta_expected(now, 'ChangeInCapacity : -2')
- self.m.ReplayAll()
- rsrc.adjust(-2)
- self.assertEqual(['WebServerGroup-0'], rsrc.get_instance_names())
-
- # raise to 3
- self._stub_lb_reload(3)
- self._stub_meta_expected(now, 'ChangeInCapacity : 2')
- self._stub_create(2)
- self.m.ReplayAll()
- rsrc.adjust(2)
- self.assertEqual(['WebServerGroup-0', 'WebServerGroup-1',
- 'WebServerGroup-2'],
- rsrc.get_instance_names())
-
- # set to 2
- self._stub_lb_reload(2)
- self._stub_validate()
- self._stub_meta_expected(now, 'ExactCapacity : 2')
- self.m.ReplayAll()
- rsrc.adjust(2, 'ExactCapacity')
- self.assertEqual(['WebServerGroup-0', 'WebServerGroup-1'],
- rsrc.get_instance_names())
- self.m.VerifyAll()
-
- def test_scaling_group_scale_up_failure(self):
- t = template_format.parse(as_template)
- stack = utils.parse_stack(t, params=self.params)
-
- # Create initial group
- self._stub_lb_reload(1)
- now = timeutils.utcnow()
- self._stub_meta_expected(now, 'ExactCapacity : 1')
- self._stub_create(1)
- self.m.ReplayAll()
- rsrc = self.create_scaling_group(t, stack, 'WebServerGroup')
- self.assertEqual(['WebServerGroup-0'], rsrc.get_instance_names())
- self.m.VerifyAll()
- self.m.UnsetStubs()
-
- # Scale up by 1 instance with resource failure
- self.m.StubOutWithMock(instance.Instance, 'handle_create')
- instance.Instance.handle_create().AndRaise(exception.Error())
- self._stub_lb_reload(1, unset=False, nochange=True)
- self._stub_validate()
- self.m.ReplayAll()
-
- self.assertRaises(exception.Error, rsrc.adjust, 1)
- self.assertEqual(['WebServerGroup-0'], rsrc.get_instance_names())
-
- self.m.VerifyAll()
-
- def test_scaling_group_nochange(self):
- t = template_format.parse(as_template)
- stack = utils.parse_stack(t, params=self.params)
-
- # Create initial group, 2 instances
- properties = t['Resources']['WebServerGroup']['Properties']
- properties['DesiredCapacity'] = '2'
- self._stub_lb_reload(2)
- now = timeutils.utcnow()
- self._stub_meta_expected(now, 'ExactCapacity : 2')
- self._stub_create(2)
- self.m.ReplayAll()
- rsrc = self.create_scaling_group(t, stack, 'WebServerGroup')
- stack.resources['WebServerGroup'] = rsrc
- self.assertEqual(['WebServerGroup-0', 'WebServerGroup-1'],
- rsrc.get_instance_names())
-
- # raise above the max
- rsrc.adjust(4)
- self.assertEqual(['WebServerGroup-0', 'WebServerGroup-1'],
- rsrc.get_instance_names())
-
- # lower below the min
- rsrc.adjust(-2)
- self.assertEqual(['WebServerGroup-0', 'WebServerGroup-1'],
- rsrc.get_instance_names())
-
- # no change
- rsrc.adjust(0)
- self.assertEqual(['WebServerGroup-0', 'WebServerGroup-1'],
- rsrc.get_instance_names())
- rsrc.delete()
- self.m.VerifyAll()
-
- def test_scaling_group_percent(self):
- t = template_format.parse(as_template)
- stack = utils.parse_stack(t, params=self.params)
-
- # Create initial group, 2 instances
- properties = t['Resources']['WebServerGroup']['Properties']
- properties['DesiredCapacity'] = '2'
- self._stub_lb_reload(2)
- self._stub_create(2)
- now = timeutils.utcnow()
- self._stub_meta_expected(now, 'ExactCapacity : 2')
- self.m.ReplayAll()
- rsrc = self.create_scaling_group(t, stack, 'WebServerGroup')
- stack.resources['WebServerGroup'] = rsrc
- self.assertEqual(['WebServerGroup-0', 'WebServerGroup-1'],
- rsrc.get_instance_names())
-
- # reduce by 50%
- self._stub_lb_reload(1)
- self._stub_meta_expected(now, 'PercentChangeInCapacity : -50')
- self._stub_validate()
- self.m.ReplayAll()
- rsrc.adjust(-50, 'PercentChangeInCapacity')
- self.assertEqual(['WebServerGroup-0'],
- rsrc.get_instance_names())
-
- # raise by 200%
- self._stub_lb_reload(3)
- self._stub_meta_expected(now, 'PercentChangeInCapacity : 200')
- self._stub_create(2)
- self.m.ReplayAll()
- rsrc.adjust(200, 'PercentChangeInCapacity')
- self.assertEqual(['WebServerGroup-0', 'WebServerGroup-1',
- 'WebServerGroup-2'],
- rsrc.get_instance_names())
-
- rsrc.delete()
-
- def test_scaling_group_cooldown_toosoon(self):
- t = template_format.parse(as_template)
- stack = utils.parse_stack(t, params=self.params)
-
- # Create initial group, 2 instances, Cooldown 60s
- properties = t['Resources']['WebServerGroup']['Properties']
- properties['DesiredCapacity'] = '2'
- properties['Cooldown'] = '60'
- self._stub_lb_reload(2)
- now = timeutils.utcnow()
- self._stub_meta_expected(now, 'ExactCapacity : 2')
- self._stub_create(2)
- self.m.ReplayAll()
- rsrc = self.create_scaling_group(t, stack, 'WebServerGroup')
- stack.resources['WebServerGroup'] = rsrc
- self.assertEqual(['WebServerGroup-0', 'WebServerGroup-1'],
- rsrc.get_instance_names())
-
- # reduce by 50%
- self._stub_lb_reload(1)
- self._stub_validate()
- self._stub_meta_expected(now, 'PercentChangeInCapacity : -50')
- self.m.ReplayAll()
- rsrc.adjust(-50, 'PercentChangeInCapacity')
- self.assertEqual(['WebServerGroup-0'],
- rsrc.get_instance_names())
-
- # Now move time forward 10 seconds - Cooldown in the template is 60,
- # so this should not update the policy metadata, and the
- # scaling group instances should be unchanged.
- # Note we have to stub Metadata.__get__ since the group's metadata
- # isn't stored in the DB (because the stack hasn't really been created)
- previous_meta = {timeutils.strtime(now):
- 'PercentChangeInCapacity : -50'}
-
- self.m.VerifyAll()
- self.m.UnsetStubs()
-
- now = now + datetime.timedelta(seconds=10)
- self.m.StubOutWithMock(timeutils, 'utcnow')
- timeutils.utcnow().MultipleTimes().AndReturn(now)
-
- self.m.StubOutWithMock(Metadata, '__get__')
- Metadata.__get__(mox.IgnoreArg(), rsrc, mox.IgnoreArg()
- ).AndReturn(previous_meta)
-
- self.m.ReplayAll()
-
- # raise by 200%, too soon for Cooldown so there should be no change
- rsrc.adjust(200, 'PercentChangeInCapacity')
- self.assertEqual(['WebServerGroup-0'], rsrc.get_instance_names())
-
- rsrc.delete()
-
- def test_scaling_group_cooldown_ok(self):
- t = template_format.parse(as_template)
- stack = utils.parse_stack(t, params=self.params)
-
- # Create initial group, 2 instances, Cooldown 60s
- properties = t['Resources']['WebServerGroup']['Properties']
- properties['DesiredCapacity'] = '2'
- properties['Cooldown'] = '60'
- self._stub_lb_reload(2)
- self._stub_create(2)
- now = timeutils.utcnow()
- self._stub_meta_expected(now, 'ExactCapacity : 2')
- self.m.ReplayAll()
- rsrc = self.create_scaling_group(t, stack, 'WebServerGroup')
- stack.resources['WebServerGroup'] = rsrc
- self.assertEqual(['WebServerGroup-0', 'WebServerGroup-1'],
- rsrc.get_instance_names())
-
- # reduce by 50%
- self._stub_lb_reload(1)
- self._stub_validate()
- self._stub_meta_expected(now, 'PercentChangeInCapacity : -50')
- self.m.ReplayAll()
- rsrc.adjust(-50, 'PercentChangeInCapacity')
- self.assertEqual(['WebServerGroup-0'],
- rsrc.get_instance_names())
-
- # Now move time forward 61 seconds - Cooldown in the template is 60,
- # so this should update the policy metadata, and the
- # scaling group instances should be updated
- previous_meta = {timeutils.strtime(now):
- 'PercentChangeInCapacity : -50'}
-
- self.m.VerifyAll()
- self.m.UnsetStubs()
-
- now = now + datetime.timedelta(seconds=61)
-
- self.m.StubOutWithMock(Metadata, '__get__')
- Metadata.__get__(mox.IgnoreArg(), rsrc, mox.IgnoreArg()
- ).AndReturn(previous_meta)
-
- # stub for the metadata accesses while creating the two instances
- Metadata.__get__(mox.IgnoreArg(), mox.IgnoreArg(), mox.IgnoreArg())
- Metadata.__get__(mox.IgnoreArg(), mox.IgnoreArg(), mox.IgnoreArg())
-
- # raise by 200%, should work
- self._stub_lb_reload(3, unset=False)
- self._stub_create(2)
- self._stub_meta_expected(now, 'PercentChangeInCapacity : 200')
- self.m.ReplayAll()
- rsrc.adjust(200, 'PercentChangeInCapacity')
- self.assertEqual(['WebServerGroup-0', 'WebServerGroup-1',
- 'WebServerGroup-2'],
- rsrc.get_instance_names())
-
- rsrc.delete()
-
- def test_scaling_group_cooldown_zero(self):
- t = template_format.parse(as_template)
- stack = utils.parse_stack(t, params=self.params)
-
- # Create initial group, 2 instances, Cooldown 0
- properties = t['Resources']['WebServerGroup']['Properties']
- properties['DesiredCapacity'] = '2'
- properties['Cooldown'] = '0'
- self._stub_lb_reload(2)
- now = timeutils.utcnow()
- self._stub_meta_expected(now, 'ExactCapacity : 2')
- self._stub_create(2)
- self.m.ReplayAll()
- rsrc = self.create_scaling_group(t, stack, 'WebServerGroup')
- stack.resources['WebServerGroup'] = rsrc
- self.assertEqual(['WebServerGroup-0', 'WebServerGroup-1'],
- rsrc.get_instance_names())
-
- # reduce by 50%
- self._stub_lb_reload(1)
- self._stub_meta_expected(now, 'PercentChangeInCapacity : -50')
- self._stub_validate()
- self.m.ReplayAll()
- rsrc.adjust(-50, 'PercentChangeInCapacity')
- self.assertEqual(['WebServerGroup-0'],
- rsrc.get_instance_names())
-
- # Don't move time, since cooldown is zero, it should work
- previous_meta = {timeutils.strtime(now):
- 'PercentChangeInCapacity : -50'}
-
- self.m.VerifyAll()
- self.m.UnsetStubs()
-
- self.m.StubOutWithMock(Metadata, '__get__')
- Metadata.__get__(mox.IgnoreArg(), rsrc, mox.IgnoreArg()
- ).AndReturn(previous_meta)
-
- # stub for the metadata accesses while creating the two instances
- Metadata.__get__(mox.IgnoreArg(), mox.IgnoreArg(), mox.IgnoreArg())
- Metadata.__get__(mox.IgnoreArg(), mox.IgnoreArg(), mox.IgnoreArg())
- # raise by 200%, should work
-
- self._stub_lb_reload(3, unset=False)
- self._stub_meta_expected(now, 'PercentChangeInCapacity : 200')
- self._stub_create(2)
- self.m.ReplayAll()
- rsrc.adjust(200, 'PercentChangeInCapacity')
- self.assertEqual(['WebServerGroup-0', 'WebServerGroup-1',
- 'WebServerGroup-2'],
- rsrc.get_instance_names())
-
- rsrc.delete()
- self.m.VerifyAll()
-
- def test_scaling_policy_up(self):
- t = template_format.parse(as_template)
- stack = utils.parse_stack(t, params=self.params)
-
- # Create initial group
- self._stub_lb_reload(1)
- now = timeutils.utcnow()
- self._stub_meta_expected(now, 'ExactCapacity : 1')
- self._stub_create(1)
-
- self.m.ReplayAll()
- rsrc = self.create_scaling_group(t, stack, 'WebServerGroup')
- stack.resources['WebServerGroup'] = rsrc
- self.assertEqual(['WebServerGroup-0'], rsrc.get_instance_names())
-
- # Scale up one
- self._stub_lb_reload(2)
- self._stub_meta_expected(now, 'ChangeInCapacity : 1', 2)
- self._stub_create(1)
-
- self.m.StubOutWithMock(asc.ScalingPolicy, 'keystone')
- asc.ScalingPolicy.keystone().MultipleTimes().AndReturn(
- self.fc)
-
- self.m.ReplayAll()
- up_policy = self.create_scaling_policy(t, stack,
- 'WebServerScaleUpPolicy')
-
- alarm_url = up_policy.FnGetAtt('AlarmUrl')
- self.assertNotEqual(None, alarm_url)
- up_policy.signal()
- self.assertEqual(['WebServerGroup-0', 'WebServerGroup-1'],
- rsrc.get_instance_names())
-
- rsrc.delete()
- self.m.VerifyAll()
-
- def test_scaling_policy_down(self):
- t = template_format.parse(as_template)
- stack = utils.parse_stack(t, params=self.params)
-
- # Create initial group, 2 instances
- properties = t['Resources']['WebServerGroup']['Properties']
- properties['DesiredCapacity'] = '2'
- self._stub_lb_reload(2)
- now = timeutils.utcnow()
- self._stub_meta_expected(now, 'ExactCapacity : 2')
- self._stub_create(2)
- self.m.ReplayAll()
- rsrc = self.create_scaling_group(t, stack, 'WebServerGroup')
- stack.resources['WebServerGroup'] = rsrc
- self.assertEqual(['WebServerGroup-0', 'WebServerGroup-1'],
- rsrc.get_instance_names())
-
- # Scale down one
- self._stub_lb_reload(1)
- self._stub_validate()
- self._stub_meta_expected(now, 'ChangeInCapacity : -1', 2)
-
- self.m.StubOutWithMock(asc.ScalingPolicy, 'keystone')
- asc.ScalingPolicy.keystone().MultipleTimes().AndReturn(
- self.fc)
-
- self.m.ReplayAll()
- down_policy = self.create_scaling_policy(t, stack,
- 'WebServerScaleDownPolicy')
- down_policy.signal()
- self.assertEqual(['WebServerGroup-0'], rsrc.get_instance_names())
-
- rsrc.delete()
- self.m.VerifyAll()
-
- def test_scaling_policy_cooldown_toosoon(self):
- t = template_format.parse(as_template)
- stack = utils.parse_stack(t, params=self.params)
-
- # Create initial group
- self._stub_lb_reload(1)
- now = timeutils.utcnow()
- self._stub_meta_expected(now, 'ExactCapacity : 1')
- self._stub_create(1)
- self.m.ReplayAll()
- rsrc = self.create_scaling_group(t, stack, 'WebServerGroup')
- stack.resources['WebServerGroup'] = rsrc
- self.assertEqual(['WebServerGroup-0'], rsrc.get_instance_names())
-
- # Scale up one
- self._stub_lb_reload(2)
- self._stub_meta_expected(now, 'ChangeInCapacity : 1', 2)
- self._stub_create(1)
-
- self.m.StubOutWithMock(asc.ScalingPolicy, 'keystone')
- asc.ScalingPolicy.keystone().MultipleTimes().AndReturn(
- self.fc)
-
- self.m.ReplayAll()
- up_policy = self.create_scaling_policy(t, stack,
- 'WebServerScaleUpPolicy')
- up_policy.signal()
- self.assertEqual(['WebServerGroup-0', 'WebServerGroup-1'],
- rsrc.get_instance_names())
-
- # Now move time forward 10 seconds - Cooldown in the template is 60,
- # so this should not update the policy metadata, and the
- # scaling group instances should be unchanged.
- # Note we have to stub Metadata.__get__ since up_policy's metadata
- # isn't stored in the DB (because the stack hasn't really been created)
- previous_meta = {timeutils.strtime(now): 'ChangeInCapacity : 1'}
-
- self.m.VerifyAll()
- self.m.UnsetStubs()
-
- now = now + datetime.timedelta(seconds=10)
- self.m.StubOutWithMock(timeutils, 'utcnow')
- timeutils.utcnow().MultipleTimes().AndReturn(now)
-
- self.m.StubOutWithMock(Metadata, '__get__')
- Metadata.__get__(mox.IgnoreArg(), up_policy, mox.IgnoreArg()
- ).AndReturn(previous_meta)
-
- self.m.ReplayAll()
- up_policy.signal()
- self.assertEqual(['WebServerGroup-0', 'WebServerGroup-1'],
- rsrc.get_instance_names())
-
- rsrc.delete()
- self.m.VerifyAll()
-
- def test_scaling_policy_cooldown_ok(self):
- t = template_format.parse(as_template)
- stack = utils.parse_stack(t, params=self.params)
-
- # Create initial group
- self._stub_lb_reload(1)
- now = timeutils.utcnow()
- self._stub_meta_expected(now, 'ExactCapacity : 1')
- self._stub_create(1)
- self.m.ReplayAll()
- rsrc = self.create_scaling_group(t, stack, 'WebServerGroup')
- stack.resources['WebServerGroup'] = rsrc
- self.assertEqual(['WebServerGroup-0'], rsrc.get_instance_names())
-
- # Scale up one
- self._stub_lb_reload(2)
- self._stub_meta_expected(now, 'ChangeInCapacity : 1', 2)
- self._stub_create(1)
-
- self.m.StubOutWithMock(asc.ScalingPolicy, 'keystone')
- asc.ScalingPolicy.keystone().MultipleTimes().AndReturn(
- self.fc)
-
- self.m.ReplayAll()
- up_policy = self.create_scaling_policy(t, stack,
- 'WebServerScaleUpPolicy')
- up_policy.signal()
- self.assertEqual(['WebServerGroup-0', 'WebServerGroup-1'],
- rsrc.get_instance_names())
-
- # Now move time forward 61 seconds - Cooldown in the template is 60,
- # so this should trigger a scale-up
- previous_meta = {timeutils.strtime(now): 'ChangeInCapacity : 1'}
- self.m.VerifyAll()
- self.m.UnsetStubs()
-
- self.m.StubOutWithMock(Metadata, '__get__')
- Metadata.__get__(mox.IgnoreArg(), up_policy, mox.IgnoreArg()
- ).AndReturn(previous_meta)
- Metadata.__get__(mox.IgnoreArg(), rsrc, mox.IgnoreArg()
- ).AndReturn(previous_meta)
-
- # stub for the metadata accesses while creating the additional instance
- Metadata.__get__(mox.IgnoreArg(), mox.IgnoreArg(), mox.IgnoreArg())
-
- now = now + datetime.timedelta(seconds=61)
- self._stub_lb_reload(3, unset=False)
- self._stub_meta_expected(now, 'ChangeInCapacity : 1', 2)
- self._stub_create(1)
-
- self.m.ReplayAll()
- up_policy.signal()
- self.assertEqual(['WebServerGroup-0', 'WebServerGroup-1',
- 'WebServerGroup-2'],
- rsrc.get_instance_names())
-
- rsrc.delete()
- self.m.VerifyAll()
-
- def test_scaling_policy_cooldown_zero(self):
- t = template_format.parse(as_template)
- stack = utils.parse_stack(t, params=self.params)
-
- # Create initial group
- self._stub_lb_reload(1)
- now = timeutils.utcnow()
- self._stub_meta_expected(now, 'ExactCapacity : 1')
- self._stub_create(1)
- self.m.ReplayAll()
- rsrc = self.create_scaling_group(t, stack, 'WebServerGroup')
- stack.resources['WebServerGroup'] = rsrc
- self.assertEqual(['WebServerGroup-0'], rsrc.get_instance_names())
-
- # Create the scaling policy (with Cooldown=0) and scale up one
- properties = t['Resources']['WebServerScaleUpPolicy']['Properties']
- properties['Cooldown'] = '0'
- self._stub_lb_reload(2)
- self._stub_meta_expected(now, 'ChangeInCapacity : 1', 2)
- self._stub_create(1)
-
- self.m.StubOutWithMock(asc.ScalingPolicy, 'keystone')
- asc.ScalingPolicy.keystone().MultipleTimes().AndReturn(
- self.fc)
-
- self.m.ReplayAll()
- up_policy = self.create_scaling_policy(t, stack,
- 'WebServerScaleUpPolicy')
- up_policy.signal()
- self.assertEqual(['WebServerGroup-0', 'WebServerGroup-1'],
- rsrc.get_instance_names())
-
- # Now trigger another scale-up without changing time, should work
- previous_meta = {timeutils.strtime(now): 'ChangeInCapacity : 1'}
- self.m.VerifyAll()
- self.m.UnsetStubs()
-
- self.m.StubOutWithMock(Metadata, '__get__')
- Metadata.__get__(mox.IgnoreArg(), up_policy, mox.IgnoreArg()
- ).AndReturn(previous_meta)
- Metadata.__get__(mox.IgnoreArg(), rsrc, mox.IgnoreArg()
- ).AndReturn(previous_meta)
-
- # stub for the metadata accesses while creating the additional instance
- Metadata.__get__(mox.IgnoreArg(), mox.IgnoreArg(), mox.IgnoreArg())
-
- self._stub_lb_reload(3, unset=False)
- self._stub_meta_expected(now, 'ChangeInCapacity : 1', 2)
- self._stub_create(1)
-
- self.m.ReplayAll()
- up_policy.signal()
- self.assertEqual(['WebServerGroup-0', 'WebServerGroup-1',
- 'WebServerGroup-2'],
- rsrc.get_instance_names())
-
- rsrc.delete()
- self.m.VerifyAll()
-
- def test_scaling_policy_cooldown_none(self):
- t = template_format.parse(as_template)
- stack = utils.parse_stack(t, params=self.params)
-
- # Create initial group
- self._stub_lb_reload(1)
- now = timeutils.utcnow()
- self._stub_meta_expected(now, 'ExactCapacity : 1')
- self._stub_create(1)
- self.m.ReplayAll()
- rsrc = self.create_scaling_group(t, stack, 'WebServerGroup')
- stack.resources['WebServerGroup'] = rsrc
- self.assertEqual(['WebServerGroup-0'], rsrc.get_instance_names())
-
- # Create the scaling policy with no Cooldown property; it should
- # behave the same as when Cooldown == 0
- properties = t['Resources']['WebServerScaleUpPolicy']['Properties']
- del properties['Cooldown']
- self._stub_lb_reload(2)
- now = timeutils.utcnow()
- self._stub_meta_expected(now, 'ChangeInCapacity : 1', 2)
- self._stub_create(1)
-
- self.m.StubOutWithMock(asc.ScalingPolicy, 'keystone')
- asc.ScalingPolicy.keystone().MultipleTimes().AndReturn(
- self.fc)
-
- self.m.ReplayAll()
- up_policy = self.create_scaling_policy(t, stack,
- 'WebServerScaleUpPolicy')
- up_policy.signal()
- self.assertEqual(['WebServerGroup-0', 'WebServerGroup-1'],
- rsrc.get_instance_names())
-
- # Now trigger another scale-up without changing time, should work
- previous_meta = {timeutils.strtime(now): 'ChangeInCapacity : 1'}
- self.m.VerifyAll()
- self.m.UnsetStubs()
-
- self.m.StubOutWithMock(Metadata, '__get__')
- Metadata.__get__(mox.IgnoreArg(), up_policy, mox.IgnoreArg()
- ).AndReturn(previous_meta)
- Metadata.__get__(mox.IgnoreArg(), rsrc, mox.IgnoreArg()
- ).AndReturn(previous_meta)
-
-        #stub for the metadata accesses while creating the additional instance
- Metadata.__get__(mox.IgnoreArg(), mox.IgnoreArg(), mox.IgnoreArg())
-
- self._stub_lb_reload(3, unset=False)
- self._stub_meta_expected(now, 'ChangeInCapacity : 1', 2)
- self._stub_create(1)
-
- self.m.ReplayAll()
- up_policy.signal()
- self.assertEqual(['WebServerGroup-0', 'WebServerGroup-1',
- 'WebServerGroup-2'],
- rsrc.get_instance_names())
-
- rsrc.delete()
- self.m.VerifyAll()
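The cooldown behaviour these tests exercise (a missing or zero `Cooldown` never blocks a second scale-up, while an elapsed interval longer than the cooldown permits one) can be sketched as follows. The function name and signature are illustrative, not Heat's actual implementation:

```python
import datetime


def cooldown_allows(last_adjust, cooldown_seconds, now):
    """Return True if a new scaling adjustment may proceed.

    A Cooldown of 0, or an absent Cooldown property (None), never
    blocks scaling, which is what the cooldown_zero/cooldown_none
    tests above assert.
    """
    if not cooldown_seconds:
        return True
    return (now - last_adjust).total_seconds() > cooldown_seconds
```

With a 61-second gap, a 60-second cooldown allows another adjustment but a 120-second one does not.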
-
- def test_scaling_policy_update(self):
- t = template_format.parse(as_template)
- stack = utils.parse_stack(t, params=self.params)
-
- # Create initial group
- self._stub_lb_reload(1)
- now = timeutils.utcnow()
- self._stub_meta_expected(now, 'ExactCapacity : 1')
- self._stub_create(1)
-
- self.m.StubOutWithMock(asc.ScalingPolicy, 'keystone')
- asc.ScalingPolicy.keystone().MultipleTimes().AndReturn(
- self.fc)
-
- self.m.ReplayAll()
- rsrc = self.create_scaling_group(t, stack, 'WebServerGroup')
- stack.resources['WebServerGroup'] = rsrc
- self.assertEqual(['WebServerGroup-0'], rsrc.get_instance_names())
-
- # Create initial scaling policy
- up_policy = self.create_scaling_policy(t, stack,
- 'WebServerScaleUpPolicy')
-
- # Scale up one
- self._stub_lb_reload(2)
- self._stub_meta_expected(now, 'ChangeInCapacity : 1', 2)
- self._stub_create(1)
-
- self.m.StubOutWithMock(asc.ScalingPolicy, 'keystone')
- asc.ScalingPolicy.keystone().MultipleTimes().AndReturn(
- self.fc)
-
- self.m.ReplayAll()
-
- # Trigger alarm
- up_policy.signal()
- self.assertEqual(['WebServerGroup-0', 'WebServerGroup-1'],
- rsrc.get_instance_names())
-
- # Update scaling policy
- update_snippet = copy.deepcopy(up_policy.parsed_template())
- update_snippet['Properties']['ScalingAdjustment'] = '2'
- scheduler.TaskRunner(up_policy.update, update_snippet)()
- self.assertEqual('2',
- up_policy.properties['ScalingAdjustment'])
-
- # Now move time on 61 seconds - Cooldown in template is 60
- # so this should trigger a scale-up
- previous_meta = {timeutils.strtime(now): 'ChangeInCapacity : 1'}
- self.m.VerifyAll()
- self.m.UnsetStubs()
-
- self.m.StubOutWithMock(Metadata, '__get__')
-
- Metadata.__get__(mox.IgnoreArg(), up_policy, mox.IgnoreArg()
- ).AndReturn(previous_meta)
- Metadata.__get__(mox.IgnoreArg(), rsrc, mox.IgnoreArg()
- ).AndReturn(previous_meta)
-
- #stub for the metadata accesses while creating the two instances
- Metadata.__get__(mox.IgnoreArg(), mox.IgnoreArg(), mox.IgnoreArg())
- Metadata.__get__(mox.IgnoreArg(), mox.IgnoreArg(), mox.IgnoreArg())
-
- now = now + datetime.timedelta(seconds=61)
-
- self._stub_lb_reload(4, unset=False)
- self._stub_meta_expected(now, 'ChangeInCapacity : 2', 2)
- self._stub_create(2)
- self.m.ReplayAll()
-
- # Trigger alarm
- up_policy.signal()
- self.assertEqual(['WebServerGroup-0', 'WebServerGroup-1',
- 'WebServerGroup-2', 'WebServerGroup-3'],
- rsrc.get_instance_names())
-
- rsrc.delete()
- self.m.VerifyAll()
-
- def test_vpc_zone_identifier(self):
- t = template_format.parse(as_template)
- properties = t['Resources']['WebServerGroup']['Properties']
- properties['VPCZoneIdentifier'] = ['xxxx']
-
- stack = utils.parse_stack(t, params=self.params)
-
- self._stub_lb_reload(1)
- now = timeutils.utcnow()
- self._stub_meta_expected(now, 'ExactCapacity : 1')
- self._stub_create(1)
- self.m.ReplayAll()
-
- rsrc = self.create_scaling_group(t, stack, 'WebServerGroup')
- instances = rsrc.get_instances()
- self.assertEqual(1, len(instances))
- self.assertEqual('xxxx', instances[0].properties['SubnetId'])
-
- rsrc.delete()
- self.m.VerifyAll()
-
- def test_invalid_vpc_zone_identifier(self):
- t = template_format.parse(as_template)
- properties = t['Resources']['WebServerGroup']['Properties']
- properties['VPCZoneIdentifier'] = ['xxxx', 'yyyy']
-
- stack = utils.parse_stack(t, params=self.params)
-
- self.assertRaises(exception.NotSupported, self.create_scaling_group, t,
- stack, 'WebServerGroup')
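Taken together, the last two tests pin down a simple rule: the group accepts exactly one subnet in `VPCZoneIdentifier` and feeds it to each instance's `SubnetId`, raising `NotSupported` otherwise. A hedged sketch of that rule (the helper name is illustrative, and `ValueError` stands in for Heat's exception class):

```python
def subnet_from_vpc_zone_identifier(subnets):
    # Heat's AWS autoscaling group supports a single subnet here; more
    # than one is rejected, as test_invalid_vpc_zone_identifier expects.
    if len(subnets) != 1:
        raise ValueError('VPCZoneIdentifier must contain exactly one subnet')
    return subnets[0]
```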
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import re
-
-from heat.common import exception
-from heat.common import template_format
-from heat.engine.resources import instance
-from heat.engine import parser
-from heat.tests.common import HeatTestCase
-from heat.tests.utils import setup_dummy_db
-from heat.tests import utils
-
-
-asg_tmpl_without_updt_policy = '''
-{
- "AWSTemplateFormatVersion" : "2010-09-09",
- "Description" : "Template to create autoscaling group.",
- "Parameters" : {},
- "Resources" : {
- "WebServerGroup" : {
- "Type" : "AWS::AutoScaling::AutoScalingGroup",
- "Properties" : {
- "AvailabilityZones" : ["nova"],
- "LaunchConfigurationName" : { "Ref" : "LaunchConfig" },
- "MinSize" : "1",
- "MaxSize" : "10"
- }
- },
- "LaunchConfig" : {
- "Type" : "AWS::AutoScaling::LaunchConfiguration",
- "Properties": {
- "ImageId" : "foo",
- "InstanceType" : "m1.medium",
- "KeyName" : "test",
- "SecurityGroups" : [ "sg-1" ],
- "UserData" : "jsconfig data"
- }
- }
- }
-}
-'''
-
-asg_tmpl_with_bad_updt_policy = '''
-{
- "AWSTemplateFormatVersion" : "2010-09-09",
- "Description" : "Template to create autoscaling group.",
- "Parameters" : {},
- "Resources" : {
- "WebServerGroup" : {
- "UpdatePolicy": {
- "foo": {
- }
- },
- "Type" : "AWS::AutoScaling::AutoScalingGroup",
- "Properties" : {
- "AvailabilityZones" : ["nova"],
- "LaunchConfigurationName" : { "Ref" : "LaunchConfig" },
- "MinSize" : "1",
- "MaxSize" : "10"
- }
- },
- "LaunchConfig" : {
- "Type" : "AWS::AutoScaling::LaunchConfiguration",
- "Properties": {
- "ImageId" : "foo",
- "InstanceType" : "m1.medium",
- "KeyName" : "test",
- "SecurityGroups" : [ "sg-1" ],
- "UserData" : "jsconfig data"
- }
- }
- }
-}
-'''
-
-asg_tmpl_with_default_updt_policy = '''
-{
- "AWSTemplateFormatVersion" : "2010-09-09",
- "Description" : "Template to create autoscaling group.",
- "Parameters" : {},
- "Resources" : {
- "WebServerGroup" : {
- "UpdatePolicy" : {
- "AutoScalingRollingUpdate" : {
- }
- },
- "Type" : "AWS::AutoScaling::AutoScalingGroup",
- "Properties" : {
- "AvailabilityZones" : ["nova"],
- "LaunchConfigurationName" : { "Ref" : "LaunchConfig" },
- "MinSize" : "1",
- "MaxSize" : "10"
- }
- },
- "LaunchConfig" : {
- "Type" : "AWS::AutoScaling::LaunchConfiguration",
- "Properties": {
- "ImageId" : "foo",
- "InstanceType" : "m1.medium",
- "KeyName" : "test",
- "SecurityGroups" : [ "sg-1" ],
- "UserData" : "jsconfig data"
- }
- }
- }
-}
-'''
-
-asg_tmpl_with_updt_policy_1 = '''
-{
- "AWSTemplateFormatVersion" : "2010-09-09",
- "Description" : "Template to create autoscaling group.",
- "Parameters" : {},
- "Resources" : {
- "WebServerGroup" : {
- "UpdatePolicy" : {
- "AutoScalingRollingUpdate" : {
- "MinInstancesInService" : "1",
- "MaxBatchSize" : "3",
- "PauseTime" : "PT30S"
- }
- },
- "Type" : "AWS::AutoScaling::AutoScalingGroup",
- "Properties" : {
- "AvailabilityZones" : ["nova"],
- "LaunchConfigurationName" : { "Ref" : "LaunchConfig" },
- "MinSize" : "1",
- "MaxSize" : "10"
- }
- },
- "LaunchConfig" : {
- "Type" : "AWS::AutoScaling::LaunchConfiguration",
- "Properties": {
- "ImageId" : "foo",
- "InstanceType" : "m1.medium",
- "KeyName" : "test",
- "SecurityGroups" : [ "sg-1" ],
- "UserData" : "jsconfig data"
- }
- }
- }
-}
-'''
-
-asg_tmpl_with_updt_policy_2 = '''
-{
- "AWSTemplateFormatVersion" : "2010-09-09",
- "Description" : "Template to create autoscaling group.",
- "Parameters" : {},
- "Resources" : {
- "WebServerGroup" : {
- "UpdatePolicy" : {
- "AutoScalingRollingUpdate" : {
- "MinInstancesInService" : "1",
- "MaxBatchSize" : "5",
- "PauseTime" : "PT30S"
- }
- },
- "Type" : "AWS::AutoScaling::AutoScalingGroup",
- "Properties" : {
- "AvailabilityZones" : ["nova"],
- "LaunchConfigurationName" : { "Ref" : "LaunchConfig" },
- "MinSize" : "1",
- "MaxSize" : "10"
- }
- },
- "LaunchConfig" : {
- "Type" : "AWS::AutoScaling::LaunchConfiguration",
- "Properties": {
- "ImageId" : "foo",
- "InstanceType" : "m1.large",
- "KeyName" : "test",
- "SecurityGroups" : [ "sg-1" ],
- "UserData" : "jsconfig data"
- }
- }
- }
-}
-'''
-
-
-class InstanceGroupTest(HeatTestCase):
- def setUp(self):
- super(InstanceGroupTest, self).setUp()
- setup_dummy_db()
-
- def _stub_create(self, num, instance_class=instance.Instance):
- """
-        Expect creation of C{num} Instances.
-
- :param instance_class: The resource class to expect to be created
- instead of instance.Instance.
- """
-
- self.m.StubOutWithMock(parser.Stack, 'validate')
- parser.Stack.validate()
-
- self.m.StubOutWithMock(instance_class, 'handle_create')
- self.m.StubOutWithMock(instance_class, 'check_create_complete')
- cookie = object()
- for x in range(num):
- instance_class.handle_create().AndReturn(cookie)
- instance_class.check_create_complete(cookie).AndReturn(False)
- instance_class.check_create_complete(
- cookie).MultipleTimes().AndReturn(True)
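`_stub_create` records an expectation sequence in mox: each instance's `check_create_complete` returns False once, then True thereafter. The same record-then-verify idea, shown with the standard library's `unittest.mock` as a rough modern equivalent (an illustration, not the mox API used above; `side_effect` approximates mox's `MultipleTimes` with a finite list):

```python
from unittest import mock


class Instance(object):
    def handle_create(self):
        raise NotImplementedError

    def check_create_complete(self, cookie):
        raise NotImplementedError


inst = Instance()
with mock.patch.object(Instance, 'handle_create', return_value='cookie'), \
        mock.patch.object(Instance, 'check_create_complete',
                          side_effect=[False, True, True]):
    cookie = inst.handle_create()
    results = [inst.check_create_complete(cookie) for _ in range(3)]
# results == [False, True, True], matching the AndReturn sequence above
```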
-
- def get_launch_conf_name(self, stack, ig_name):
- return stack.resources[ig_name].properties['LaunchConfigurationName']
-
- def test_parse_without_update_policy(self):
- tmpl = template_format.parse(asg_tmpl_without_updt_policy)
- stack = utils.parse_stack(tmpl)
- grp = stack.resources['WebServerGroup']
- self.assertFalse(grp.update_policy['AutoScalingRollingUpdate'])
-
- def test_parse_with_update_policy(self):
- tmpl = template_format.parse(asg_tmpl_with_updt_policy_1)
- stack = utils.parse_stack(tmpl)
- grp = stack.resources['WebServerGroup']
- self.assertTrue(grp.update_policy)
-        self.assertEqual(1, len(grp.update_policy))
-        self.assertIn('AutoScalingRollingUpdate', grp.update_policy)
- policy = grp.update_policy['AutoScalingRollingUpdate']
- self.assertTrue(policy and len(policy) > 0)
- self.assertEqual(int(policy['MinInstancesInService']), 1)
- self.assertEqual(int(policy['MaxBatchSize']), 3)
- self.assertEqual(policy['PauseTime'], 'PT30S')
-
- def test_parse_with_default_update_policy(self):
- tmpl = template_format.parse(asg_tmpl_with_default_updt_policy)
- stack = utils.parse_stack(tmpl)
- grp = stack.resources['WebServerGroup']
- self.assertTrue(grp.update_policy)
-        self.assertEqual(1, len(grp.update_policy))
-        self.assertIn('AutoScalingRollingUpdate', grp.update_policy)
- policy = grp.update_policy['AutoScalingRollingUpdate']
- self.assertTrue(policy and len(policy) > 0)
- self.assertEqual(int(policy['MinInstancesInService']), 0)
- self.assertEqual(int(policy['MaxBatchSize']), 1)
- self.assertEqual(policy['PauseTime'], 'PT0S')
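`PauseTime` values like `'PT30S'` (and the `'PT0S'` default asserted above) are ISO 8601 durations. A minimal parser covering only the minute/second forms used in these templates, as an illustration rather than Heat's actual parsing code:

```python
import re


def parse_pause_time(value):
    # Handles only the PT[nM][nS] subset seen in UpdatePolicy templates,
    # e.g. 'PT30S' -> 30 and 'PT1M30S' -> 90; a full ISO 8601 parser
    # would also cover hours, days, and so on.
    match = re.match(r'^PT(?:(\d+)M)?(?:(\d+)S)?$', value)
    if match is None:
        raise ValueError('not a PT duration: %r' % value)
    minutes, seconds = (int(g) if g else 0 for g in match.groups())
    return minutes * 60 + seconds
```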
-
- def test_parse_with_bad_update_policy(self):
- tmpl = template_format.parse(asg_tmpl_with_bad_updt_policy)
- stack = utils.parse_stack(tmpl)
- self.assertRaises(exception.StackValidationFailed, stack.validate)
-
- def validate_update_policy_diff(self, current, updated):
-
- # load current stack
- current_tmpl = template_format.parse(current)
- current_stack = utils.parse_stack(current_tmpl)
-
- # get the json snippet for the current InstanceGroup resource
- current_grp = current_stack.resources['WebServerGroup']
- current_snippets = dict((r.name, r.parsed_template())
- for r in current_stack)
- current_grp_json = current_snippets[current_grp.name]
-
- # load the updated stack
- updated_tmpl = template_format.parse(updated)
- updated_stack = utils.parse_stack(updated_tmpl)
-
- # get the updated json snippet for the InstanceGroup resource in the
- # context of the current stack
- updated_grp = updated_stack.resources['WebServerGroup']
- updated_grp_json = current_stack.resolve_runtime_data(updated_grp.t)
-
- # identify the template difference
- tmpl_diff = updated_grp.update_template_diff(
- updated_grp_json, current_grp_json)
- updated_policy = (updated_grp.t['UpdatePolicy']
- if 'UpdatePolicy' in updated_grp.t else None)
- expected = {u'UpdatePolicy': updated_policy}
- self.assertEqual(tmpl_diff, expected)
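Conceptually, `update_template_diff` here reduces to a key-wise dict comparison in which a removed section shows up as None (the `{u'UpdatePolicy': None}` case in test_update_policy_removed). A hypothetical sketch of that behaviour, not Heat's actual method:

```python
def template_diff(current, updated):
    # Keys added or changed map to their new value; keys removed map
    # to None, so callers can distinguish 'removed' from 'unchanged'.
    changed = {}
    for key in set(current) | set(updated):
        if current.get(key) != updated.get(key):
            changed[key] = updated.get(key)
    return changed
```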
-
- def test_update_policy_added(self):
- self.validate_update_policy_diff(asg_tmpl_without_updt_policy,
- asg_tmpl_with_updt_policy_1)
-
- def test_update_policy_updated(self):
- self.validate_update_policy_diff(asg_tmpl_with_updt_policy_1,
- asg_tmpl_with_updt_policy_2)
-
- def test_update_policy_removed(self):
- self.validate_update_policy_diff(asg_tmpl_with_updt_policy_1,
- asg_tmpl_without_updt_policy)
-
- def test_autoscaling_group_update(self):
-
- # setup stack from the initial template
- tmpl = template_format.parse(asg_tmpl_with_updt_policy_1)
- stack = utils.parse_stack(tmpl)
- nested = stack.resources['WebServerGroup'].nested()
-
- # test stack create
- # test the number of instance creation
- # test that physical resource name of launch configuration is used
- size = int(stack.resources['WebServerGroup'].properties['MinSize'])
- self._stub_create(size)
- self.m.ReplayAll()
- stack.create()
- self.m.VerifyAll()
- self.assertEqual(stack.state, ('CREATE', 'COMPLETE'))
- conf = stack.resources['LaunchConfig']
- conf_name_pattern = '%s-LaunchConfig-[a-zA-Z0-9]+$' % stack.name
- regex_pattern = re.compile(conf_name_pattern)
- self.assertTrue(regex_pattern.match(conf.FnGetRefId()))
- nested = stack.resources['WebServerGroup'].nested()
-        self.assertEqual(len(nested.resources), size)
-
- # test stack update
- # test that update policy is updated
- # test that launch configuration is replaced
- current_grp = stack.resources['WebServerGroup']
-        self.assertIn('AutoScalingRollingUpdate', current_grp.update_policy)
- current_policy = current_grp.update_policy['AutoScalingRollingUpdate']
- self.assertTrue(current_policy and len(current_policy) > 0)
- self.assertEqual(int(current_policy['MaxBatchSize']), 3)
- conf_name = self.get_launch_conf_name(stack, 'WebServerGroup')
- updated_tmpl = template_format.parse(asg_tmpl_with_updt_policy_2)
- updated_stack = utils.parse_stack(updated_tmpl)
- stack.update(updated_stack)
- self.assertEqual(stack.state, ('UPDATE', 'COMPLETE'))
- updated_grp = stack.resources['WebServerGroup']
-        self.assertIn('AutoScalingRollingUpdate', updated_grp.update_policy)
- updated_policy = updated_grp.update_policy['AutoScalingRollingUpdate']
- self.assertTrue(updated_policy and len(updated_policy) > 0)
- self.assertEqual(int(updated_policy['MaxBatchSize']), 5)
- updated_conf_name = self.get_launch_conf_name(stack, 'WebServerGroup')
- self.assertNotEqual(conf_name, updated_conf_name)
-
- def test_autoscaling_group_update_policy_removed(self):
-
- # setup stack from the initial template
- tmpl = template_format.parse(asg_tmpl_with_updt_policy_1)
- stack = utils.parse_stack(tmpl)
- nested = stack.resources['WebServerGroup'].nested()
-
- # test stack create
- # test the number of instance creation
- # test that physical resource name of launch configuration is used
- size = int(stack.resources['WebServerGroup'].properties['MinSize'])
- self._stub_create(size)
- self.m.ReplayAll()
- stack.create()
- self.m.VerifyAll()
- self.assertEqual(stack.state, ('CREATE', 'COMPLETE'))
- conf = stack.resources['LaunchConfig']
- conf_name_pattern = '%s-LaunchConfig-[a-zA-Z0-9]+$' % stack.name
- regex_pattern = re.compile(conf_name_pattern)
- self.assertTrue(regex_pattern.match(conf.FnGetRefId()))
- nested = stack.resources['WebServerGroup'].nested()
-        self.assertEqual(len(nested.resources), size)
-
- # test stack update
- # test that update policy is removed
- current_grp = stack.resources['WebServerGroup']
-        self.assertIn('AutoScalingRollingUpdate', current_grp.update_policy)
- current_policy = current_grp.update_policy['AutoScalingRollingUpdate']
- self.assertTrue(current_policy and len(current_policy) > 0)
- self.assertEqual(int(current_policy['MaxBatchSize']), 3)
- updated_tmpl = template_format.parse(asg_tmpl_without_updt_policy)
- updated_stack = utils.parse_stack(updated_tmpl)
- stack.update(updated_stack)
- self.assertEqual(stack.state, ('UPDATE', 'COMPLETE'))
- updated_grp = stack.resources['WebServerGroup']
- self.assertFalse(updated_grp.update_policy['AutoScalingRollingUpdate'])
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-
-import copy
-import json
-import mox
-import testtools
-
-from oslo.config import cfg
-
-from heat.tests import fakes
-from heat.tests import generic_resource
-from heat.tests.common import HeatTestCase
-from heat.tests import utils
-
-from heat.common import template_format
-
-from heat.openstack.common.importutils import try_import
-
-from heat.engine import parser
-from heat.engine import resource
-from heat.engine import scheduler
-from heat.engine.resources.ceilometer import alarm
-
-ceilometerclient = try_import('ceilometerclient.v2')
-
-alarm_template = '''
-{
- "AWSTemplateFormatVersion" : "2010-09-09",
- "Description" : "Alarm Test",
- "Parameters" : {},
- "Resources" : {
- "MEMAlarmHigh": {
- "Type": "OS::Ceilometer::Alarm",
- "Properties": {
- "description": "Scale-up if MEM > 50% for 1 minute",
- "counter_name": "MemoryUtilization",
- "statistic": "avg",
- "period": "60",
- "evaluation_periods": "1",
- "threshold": "50",
- "alarm_actions": [],
- "matching_metadata": {},
- "comparison_operator": "gt"
- }
- },
- "signal_handler" : {
- "Type" : "SignalResourceType"
- }
- }
-}
-'''
-
-
-class FakeCeilometerAlarm(object):
- alarm_id = 'foo'
-
-
-class FakeCeilometerAlarms(object):
- def create(self, **kwargs):
- pass
-
- def update(self, **kwargs):
- pass
-
- def delete(self, alarm_id):
- pass
-
-
-class FakeCeilometerClient(object):
- alarms = FakeCeilometerAlarms()
-
-
-class CeilometerAlarmTest(HeatTestCase):
- def setUp(self):
- super(CeilometerAlarmTest, self).setUp()
- utils.setup_dummy_db()
-
- resource._register_class('SignalResourceType',
- generic_resource.SignalResource)
-
- cfg.CONF.set_default('heat_waitcondition_server_url',
- 'http://server.test:8000/v1/waitcondition')
-
- self.fc = fakes.FakeKeystoneClient()
- self.fa = FakeCeilometerClient()
-
-    # Note: tests creating a stack should be decorated with
-    # @stack_delete_after to ensure the stack is properly cleaned up.
- def create_stack(self, template=None):
- if template is None:
- template = alarm_template
- temp = template_format.parse(template)
- template = parser.Template(temp)
- ctx = utils.dummy_context()
- ctx.tenant_id = 'test_tenant'
- stack = parser.Stack(ctx, utils.random_name(), template,
- disable_rollback=True)
- stack.store()
-
- self.m.StubOutWithMock(resource.Resource, 'keystone')
- resource.Resource.keystone().MultipleTimes().AndReturn(
- self.fc)
-
- self.m.StubOutWithMock(alarm.CeilometerAlarm, 'ceilometer')
- alarm.CeilometerAlarm.ceilometer().MultipleTimes().AndReturn(
- self.fa)
-
- al = copy.deepcopy(temp['Resources']['MEMAlarmHigh']['Properties'])
- al['description'] = mox.IgnoreArg()
- al['name'] = mox.IgnoreArg()
- al['alarm_actions'] = mox.IgnoreArg()
- self.m.StubOutWithMock(self.fa.alarms, 'create')
- self.fa.alarms.create(**al).AndReturn(FakeCeilometerAlarm())
- return stack
-
- @testtools.skipIf(ceilometerclient is None, 'ceilometerclient unavailable')
- @utils.stack_delete_after
- def test_mem_alarm_high_update_no_replace(self):
- '''
- Make sure that we can change the update-able properties
-        without replacing the Alarm resource.
- '''
- #short circuit the alarm's references
- t = template_format.parse(alarm_template)
- properties = t['Resources']['MEMAlarmHigh']['Properties']
- properties['alarm_actions'] = ['signal_handler']
- properties['matching_metadata'] = {'a': 'v'}
-
- self.stack = self.create_stack(template=json.dumps(t))
- self.m.StubOutWithMock(self.fa.alarms, 'update')
- al2 = {}
- for k in alarm.CeilometerAlarm.update_allowed_properties:
- al2[k] = mox.IgnoreArg()
- al2['alarm_id'] = mox.IgnoreArg()
- self.fa.alarms.update(**al2).AndReturn(None)
-
- self.m.ReplayAll()
- self.stack.create()
- rsrc = self.stack['MEMAlarmHigh']
-
- snippet = copy.deepcopy(rsrc.parsed_template())
- snippet['Properties']['comparison_operator'] = 'lt'
- snippet['Properties']['description'] = 'fruity'
- snippet['Properties']['evaluation_periods'] = '2'
- snippet['Properties']['period'] = '90'
- snippet['Properties']['enabled'] = 'true'
- snippet['Properties']['repeat_actions'] = True
- snippet['Properties']['statistic'] = 'max'
- snippet['Properties']['threshold'] = '39'
- snippet['Properties']['insufficient_data_actions'] = []
- snippet['Properties']['alarm_actions'] = []
- snippet['Properties']['ok_actions'] = ['signal_handler']
-
- scheduler.TaskRunner(rsrc.update, snippet)()
-
- self.m.VerifyAll()
-
- @testtools.skipIf(ceilometerclient is None, 'ceilometerclient unavailable')
- @utils.stack_delete_after
- def test_mem_alarm_high_update_replace(self):
- '''
- Make sure that the Alarm resource IS replaced when non-update-able
- properties are changed.
- '''
- t = template_format.parse(alarm_template)
- properties = t['Resources']['MEMAlarmHigh']['Properties']
- properties['alarm_actions'] = ['signal_handler']
- properties['matching_metadata'] = {'a': 'v'}
-
- self.stack = self.create_stack(template=json.dumps(t))
-
- self.m.ReplayAll()
- self.stack.create()
- rsrc = self.stack['MEMAlarmHigh']
-
- snippet = copy.deepcopy(rsrc.parsed_template())
- snippet['Properties']['counter_name'] = 'temp'
-
- updater = scheduler.TaskRunner(rsrc.update, snippet)
- self.assertRaises(resource.UpdateReplace, updater)
-
- self.m.VerifyAll()
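The two tests above hinge on whether every changed property is update-allowed: changing `counter_name` forces `UpdateReplace`, while the properties modified in the no-replace test can be updated in place. The decision reduces to a set check; the property names below are assumptions drawn from the test's snippet, not a definitive list of `CeilometerAlarm.update_allowed_properties`:

```python
UPDATE_ALLOWED_PROPERTIES = frozenset([
    'comparison_operator', 'description', 'evaluation_periods', 'period',
    'enabled', 'repeat_actions', 'statistic', 'threshold',
    'insufficient_data_actions', 'alarm_actions', 'ok_actions',
])


def needs_replacement(changed_properties):
    # Replace the resource if any changed property is not update-allowed;
    # otherwise update it in place.
    return not set(changed_properties) <= UPDATE_ALLOWED_PROPERTIES
```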
-
- @testtools.skipIf(ceilometerclient is None, 'ceilometerclient unavailable')
- @utils.stack_delete_after
- def test_mem_alarm_suspend_resume(self):
- """
- Make sure that the Alarm resource gets disabled on suspend
- and reenabled on resume.
- """
- self.stack = self.create_stack()
-
- self.m.StubOutWithMock(self.fa.alarms, 'update')
- al_suspend = {'alarm_id': mox.IgnoreArg(),
- 'enabled': False}
- self.fa.alarms.update(**al_suspend).AndReturn(None)
- al_resume = {'alarm_id': mox.IgnoreArg(),
- 'enabled': True}
- self.fa.alarms.update(**al_resume).AndReturn(None)
- self.m.ReplayAll()
-
- self.stack.create()
- rsrc = self.stack['MEMAlarmHigh']
- scheduler.TaskRunner(rsrc.suspend)()
- self.assertEqual((rsrc.SUSPEND, rsrc.COMPLETE), rsrc.state)
- scheduler.TaskRunner(rsrc.resume)()
- self.assertEqual((rsrc.RESUME, rsrc.COMPLETE), rsrc.state)
-
- self.m.VerifyAll()
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-
-import copy
-import json
-import uuid
-
-from heat.common import template_format
-from heat.engine import scheduler
-from heat.engine import resource
-from heat.engine.resources.rackspace import cloud_loadbalancer as lb
-from heat.tests.common import HeatTestCase
-from heat.tests import utils
-
-# The following fakes are for pyrax
-
-
-class FakeClient(object):
- user_agent = "Fake"
- USER_AGENT = "Fake"
-
-
-class FakeManager(object):
- api = FakeClient()
-
- def list(self):
- pass
-
- def get(self, item):
- pass
-
- def delete(self, item):
- pass
-
- def create(self, *args, **kwargs):
- pass
-
- def find(self, *args, **kwargs):
- pass
-
-    def action(self, item, action_type, body=None):
- pass
-
-
-class FakeLoadBalancerManager(object):
- def __init__(self, api=None, *args, **kwargs):
- pass
-
- def set_content_caching(self, *args, **kwargs):
- pass
-
-
-class FakeNode(object):
- def __init__(self, address="0.0.0.0", port=80, condition=None, weight=None,
- status=None, parent=None, type=None, id=None):
- self.address = address
- self.port = port
- self.condition = condition
- self.weight = weight
- self.status = status
- self.parent = parent
- self.type = type
- self.id = id
-
- def __eq__(self, other):
- return self.__dict__ == other.__dict__
-
- def __ne__(self, other):
- return not self.__eq__(other)
-
-
-class FakeVirtualIP(object):
- def __init__(self, address=None, port=None, condition=None,
- ipVersion=None, type=None):
- self.address = address
- self.port = port
- self.condition = condition
- self.ipVersion = ipVersion
- self.type = type
-
- def __eq__(self, other):
- return self.__dict__ == other.__dict__
-
- def __ne__(self, other):
- return not self.__eq__(other)
-
-
-class FakeLoadBalancerClient(object):
- def __init__(self, *args, **kwargs):
- self.Node = FakeNode
- self.VirtualIP = FakeVirtualIP
-
- def get(*args, **kwargs):
- pass
-
- def create(*args, **kwargs):
- pass
-
-
-class FakeLoadBalancer(object):
- def __init__(self, name=None, info=None, *args, **kwargs):
- name = name or uuid.uuid4()
- info = info or {"fake": "fake"}
- self.id = uuid.uuid4()
- self.manager = FakeLoadBalancerManager()
- self.Node = FakeNode
- self.VirtualIP = FakeVirtualIP
- self.nodes = []
-
- def get(*args, **kwargs):
- pass
-
- def add_nodes(*args, **kwargs):
- pass
-
- def add_ssl_termination(*args, **kwargs):
- pass
-
- def set_error_page(*args, **kwargs):
- pass
-
- def add_access_list(*args, **kwargs):
- pass
-
-
-class LoadBalancerWithFakeClient(lb.CloudLoadBalancer):
- def cloud_lb(self):
- return FakeLoadBalancerClient()
-
-
-def override_resource():
- return {
- 'Rackspace::Cloud::LoadBalancer': LoadBalancerWithFakeClient
- }
-
-
-class LoadBalancerTest(HeatTestCase):
-
- def setUp(self):
- super(LoadBalancerTest, self).setUp()
-
- self.lb_template = {
- "AWSTemplateFormatVersion": "2010-09-09",
- "Description": "fawef",
- "Resources": {
- self._get_lb_resource_name(): {
- "Type": "Rackspace::Cloud::LoadBalancer",
- "Properties": {
- "name": "test-clb",
- "nodes": [{"address": "166.78.103.141", "port": 80,
- "condition": "ENABLED"}],
- "protocol": "HTTP",
- "port": 80,
- "virtualIps": [
- {"type": "PUBLIC", "ipVersion": "IPV6"}],
- "algorithm": 'LEAST_CONNECTIONS',
- "connectionThrottle": {'maxConnectionRate': 1000},
- 'timeout': 110,
- 'contentCaching': 'DISABLED'
- }
- }
- }
- }
-
- self.lb_name = 'test-clb'
- self.expected_body = {
- "nodes": [FakeNode(address=u"166.78.103.141", port=80,
- condition=u"ENABLED")],
- "protocol": u'HTTP',
- "port": 80,
- "virtual_ips": [FakeVirtualIP(type=u"PUBLIC", ipVersion=u"IPV6")],
- "halfClosed": None,
- "algorithm": u'LEAST_CONNECTIONS',
- "connectionThrottle": {'maxConnectionRate': 1000,
- 'maxConnections': None,
- 'rateInterval': None,
- 'minConnections': None},
- "connectionLogging": None,
- "healthMonitor": None,
- "metadata": None,
- "sessionPersistence": None,
- "timeout": 110
- }
-
- lb.resource_mapping = override_resource
- utils.setup_dummy_db()
- resource._register_class("Rackspace::Cloud::LoadBalancer",
- LoadBalancerWithFakeClient)
-
- def _get_lb_resource_name(self):
- return "lb-" + str(uuid.uuid4())
-
- def __getattribute__(self, name):
- if name == 'expected_body' or name == 'lb_template':
- return copy.deepcopy(super(LoadBalancerTest, self)
- .__getattribute__(name))
- return super(LoadBalancerTest, self).__getattribute__(name)
-
- def _mock_create(self, t, stack, resource_name, lb_name, lb_body):
- rsrc = LoadBalancerWithFakeClient(resource_name,
- t['Resources'][resource_name],
- stack)
- self.m.StubOutWithMock(rsrc.clb, 'create')
- fake_loadbalancer = FakeLoadBalancer(name=lb_name)
- rsrc.clb.create(lb_name, **lb_body).AndReturn(fake_loadbalancer)
- return (rsrc, fake_loadbalancer)
-
- def _get_first_resource_name(self, templ):
-        return next(iter(templ['Resources']))
-
- def _mock_loadbalancer(self, lb_template, expected_name, expected_body):
- t = template_format.parse(json.dumps(lb_template))
- s = utils.parse_stack(t, stack_name=utils.random_name())
-
-        resource_name = self._get_first_resource_name(lb_template)
-        rsrc, fake_loadbalancer = self._mock_create(t, s, resource_name,
-                                                    expected_name,
-                                                    expected_body)
- self.m.StubOutWithMock(fake_loadbalancer, 'get')
- fake_loadbalancer.get().MultipleTimes().AndReturn(None)
-
- fake_loadbalancer.status = 'ACTIVE'
-
- return (rsrc, fake_loadbalancer)
-
- def _set_template(self, templ, **kwargs):
- for k, v in kwargs.iteritems():
- templ['Resources'][self._get_first_resource_name(templ)][
- 'Properties'][k] = v
- return templ
-
- def _set_expected(self, expected, **kwargs):
- for k, v in kwargs.iteritems():
- expected[k] = v
- return expected
-
- def test_alter_properties(self):
- #test alter properties functions
- template = self._set_template(self.lb_template,
- sessionPersistence='HTTP_COOKIE',
- connectionLogging=True,
- metadata={'yolo': 'heeyyy_gurl'})
-
-        expected = self._set_expected(
-            self.expected_body,
-            sessionPersistence={'persistenceType': 'HTTP_COOKIE'},
-            connectionLogging={'enabled': True},
-            metadata=[{'key': 'yolo', 'value': 'heeyyy_gurl'}])
-
- rsrc, fake_loadbalancer = self._mock_loadbalancer(template,
- self.lb_name,
- expected)
-
- self.m.ReplayAll()
- scheduler.TaskRunner(rsrc.create)()
- self.m.VerifyAll()
-
- def test_validate_half_closed(self):
- #test failure (invalid protocol)
- template = self._set_template(self.lb_template, halfClosed=True)
- expected = self._set_expected(self.expected_body, halfClosed=True)
- rsrc, fake_loadbalancer = self._mock_loadbalancer(template,
- self.lb_name,
- expected)
- self.assertEqual(rsrc.validate(), {
- 'Error':
- 'The halfClosed property is only available for the '
- 'TCP or TCP_CLIENT_FIRST protocols'})
-
- #test TCP protocol
- template = self._set_template(template, protocol='TCP')
- expected = self._set_expected(expected, protocol='TCP')
- rsrc, fake_loadbalancer = self._mock_loadbalancer(template,
- self.lb_name,
- expected)
-        self.assertIsNone(rsrc.validate())
-
- #test TCP_CLIENT_FIRST protocol
- template = self._set_template(template,
- protocol='TCP_CLIENT_FIRST')
- expected = self._set_expected(expected,
- protocol='TCP_CLIENT_FIRST')
- rsrc, fake_loadbalancer = self._mock_loadbalancer(template,
- self.lb_name,
- expected)
-        self.assertIsNone(rsrc.validate())
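The validation exercised here can be sketched as a simple property check: `halfClosed` only makes sense for raw TCP traffic. This mirrors the rule the test asserts, not the resource's actual `validate()` implementation:

```python
def validate_half_closed(properties):
    # Any protocol other than the TCP variants yields the error dict
    # the test expects; None means the combination is valid.
    protocol_ok = properties.get('protocol') in ('TCP', 'TCP_CLIENT_FIRST')
    if properties.get('halfClosed') and not protocol_ok:
        return {'Error': 'The halfClosed property is only available for '
                         'the TCP or TCP_CLIENT_FIRST protocols'}
    return None
```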
-
- def test_validate_health_monitor(self):
- #test connect success
- health_monitor = {
- 'type': 'CONNECT',
- 'attemptsBeforeDeactivation': 1,
- 'delay': 1,
- 'timeout': 1
- }
- template = self._set_template(self.lb_template,
- healthMonitor=health_monitor)
- expected = self._set_expected(self.expected_body,
- healthMonitor=health_monitor)
- rsrc, fake_loadbalancer = self._mock_loadbalancer(template,
- self.lb_name,
- expected)
-
-        self.assertIsNone(rsrc.validate())
-
- #test connect failure
- #bodyRegex is only valid for type 'HTTP(S)'
- health_monitor['bodyRegex'] = 'dfawefawe'
- template = self._set_template(template,
- healthMonitor=health_monitor)
- expected = self._set_expected(expected,
- healthMonitor=health_monitor)
- rsrc, fake_loadbalancer = self._mock_loadbalancer(template,
- self.lb_name,
- expected)
- self.assertEqual(rsrc.validate(),
- {'Error': 'Unknown Property bodyRegex'})
-
- #test http fields
- health_monitor['type'] = 'HTTP'
- health_monitor['bodyRegex'] = 'bodyRegex'
- health_monitor['statusRegex'] = 'statusRegex'
- health_monitor['hostHeader'] = 'hostHeader'
- health_monitor['path'] = 'path'
-
- template = self._set_template(template,
- healthMonitor=health_monitor)
- expected = self._set_expected(expected,
- healthMonitor=health_monitor)
- rsrc, fake_loadbalancer = self._mock_loadbalancer(template,
- self.lb_name,
- expected)
- self.assertEqual(rsrc.validate(), None)
-
- def test_validate_ssl_termination(self):
- ssl_termination = {
- 'enabled': True,
- 'privatekey': 'ewfawe',
- 'certificate': 'dfaewfwef',
- 'intermediateCertificate': 'fwaefawe',
- 'secureTrafficOnly': True
- }
-
- #test ssl termination enabled without required fields failure
- template = self._set_template(self.lb_template,
- sslTermination=ssl_termination)
- expected = self._set_expected(self.expected_body,
- sslTermination=ssl_termination)
- rsrc, fake_loadbalancer = self._mock_loadbalancer(template,
- self.lb_name,
- expected)
- self.assertEqual(rsrc.validate(),
- {'Error':
- 'Property error : %s: Property securePort not '
- 'assigned' % rsrc.name})
-
- ssl_termination['securePort'] = 443
- template = self._set_template(template,
- sslTermination=ssl_termination)
- expected = self._set_expected(expected,
- sslTermination=ssl_termination)
- rsrc, fake_loadbalancer = self._mock_loadbalancer(template,
- self.lb_name,
- expected)
- self.assertEqual(rsrc.validate(), None)
-
- def test_post_creation_access_list(self):
- access_list = [{"address": '192.168.1.1/0',
- 'type': 'ALLOW'},
- {'address': '172.165.3.43',
- 'type': 'DENY'}]
-
- template = self._set_template(self.lb_template,
- accessList=access_list)
- rsrc, fake_loadbalancer = self._mock_loadbalancer(template,
- self.lb_name,
- self.expected_body)
- self.m.StubOutWithMock(fake_loadbalancer, 'add_access_list')
- fake_loadbalancer.add_access_list(access_list)
-
- self.m.ReplayAll()
- scheduler.TaskRunner(rsrc.create)()
- self.m.VerifyAll()
-
- def test_post_creation_error_page(self):
- error_page = "REALLY BIG ERROR"
-
- template = self._set_template(self.lb_template,
- errorPage=error_page)
- rsrc, fake_loadbalancer = self._mock_loadbalancer(template,
- self.lb_name,
- self.expected_body)
- self.m.StubOutWithMock(fake_loadbalancer, 'set_error_page')
- fake_loadbalancer.set_error_page(error_page)
-
- self.m.ReplayAll()
- scheduler.TaskRunner(rsrc.create)()
- self.m.VerifyAll()
-
- def test_post_creation_ssl_termination(self):
- ssl_termination = {
- 'securePort': 443,
- 'privatekey': 'afwefawe',
- 'certificate': 'fawefwea',
- 'intermediateCertificate': "intermediate_certificate",
- 'enabled': True,
- 'secureTrafficOnly': False
- }
-
- template = self._set_template(self.lb_template,
- sslTermination=ssl_termination)
- rsrc, fake_loadbalancer = self._mock_loadbalancer(template,
- self.lb_name,
- self.expected_body)
- self.m.StubOutWithMock(fake_loadbalancer, 'add_ssl_termination')
- fake_loadbalancer.add_ssl_termination(
- ssl_termination['securePort'],
- ssl_termination['privatekey'],
- ssl_termination['certificate'],
- intermediateCertificate=ssl_termination['intermediateCertificate'],
- enabled=ssl_termination['enabled'],
- secureTrafficOnly=ssl_termination['secureTrafficOnly'])
-
- self.m.ReplayAll()
- scheduler.TaskRunner(rsrc.create)()
- self.m.VerifyAll()
-
- def test_post_creation_content_caching(self):
- template = self._set_template(self.lb_template,
- contentCaching='ENABLED')
- rsrc, fake_loadbalancer = self._mock_loadbalancer(template,
- self.lb_name,
- self.expected_body)
- self.m.ReplayAll()
- scheduler.TaskRunner(rsrc.create)()
- self.m.VerifyAll()
-
- def test_update_add_node_by_ref(self):
- added_node = {'nodes': [
- {"address": "166.78.103.141", "port": 80, "condition": "ENABLED"},
- {"ref": "TEST_NODE_REF", "port": 80, "condition": "ENABLED"}]}
- expected_ip = '172.168.1.4'
- rsrc, fake_loadbalancer = self._mock_loadbalancer(self.lb_template,
- self.lb_name,
- self.expected_body)
- fake_loadbalancer.nodes = self.expected_body['nodes']
- self.m.ReplayAll()
- scheduler.TaskRunner(rsrc.create)()
- self.m.VerifyAll()
-
- self.m.StubOutWithMock(rsrc.clb, 'get')
- rsrc.clb.get(rsrc.resource_id).AndReturn(fake_loadbalancer)
-
- self.m.StubOutWithMock(rsrc.stack, 'resource_by_refid')
-
- class FakeFn(object):
- def FnGetAtt(self, attr):
- return expected_ip
-
- rsrc.stack.resource_by_refid('TEST_NODE_REF').AndReturn(FakeFn())
-
- self.m.StubOutWithMock(fake_loadbalancer, 'add_nodes')
- fake_loadbalancer.add_nodes([
- fake_loadbalancer.Node(address=expected_ip,
- port=80,
- condition='ENABLED')])
-
- self.m.ReplayAll()
- rsrc.handle_update({}, {}, added_node)
- self.m.VerifyAll()
-
- def test_update_add_node_by_address(self):
- expected_ip = '172.168.1.4'
- added_node = {'nodes': [
- {"address": "166.78.103.141", "port": 80, "condition": "ENABLED"},
- {"address": expected_ip, "port": 80, "condition": "ENABLED"}]}
- rsrc, fake_loadbalancer = self._mock_loadbalancer(self.lb_template,
- self.lb_name,
- self.expected_body)
- fake_loadbalancer.nodes = self.expected_body['nodes']
- self.m.ReplayAll()
- scheduler.TaskRunner(rsrc.create)()
- self.m.VerifyAll()
-
- self.m.StubOutWithMock(rsrc.clb, 'get')
- rsrc.clb.get(rsrc.resource_id).AndReturn(fake_loadbalancer)
-
- self.m.StubOutWithMock(fake_loadbalancer, 'add_nodes')
- fake_loadbalancer.add_nodes([
- fake_loadbalancer.Node(address=expected_ip,
- port=80,
- condition='ENABLED')])
-
- self.m.ReplayAll()
- rsrc.handle_update({}, {}, added_node)
- self.m.VerifyAll()
-
- def test_update_delete_node_failed(self):
- deleted_node = {'nodes': []}
- rsrc, fake_loadbalancer = self._mock_loadbalancer(self.lb_template,
- self.lb_name,
- self.expected_body)
- fake_loadbalancer.nodes = self.expected_body['nodes']
- self.m.ReplayAll()
- scheduler.TaskRunner(rsrc.create)()
- self.m.VerifyAll()
-
- self.m.StubOutWithMock(rsrc.clb, 'get')
- rsrc.clb.get(rsrc.resource_id).AndReturn(fake_loadbalancer)
-
- self.m.ReplayAll()
- self.assertRaises(ValueError, rsrc.handle_update, {}, {}, deleted_node)
- self.m.VerifyAll()
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-
-from heat.common import template_format
-from heat.engine import parser
-from heat.engine import environment
-from heat.engine import resource
-from heat.engine.resources.rackspace import clouddatabase
-from heat.openstack.common import uuidutils
-from heat.tests.common import HeatTestCase
-from heat.tests import utils
-
-
-wp_template = '''
-{
- "AWSTemplateFormatVersion" : "2010-09-09",
- "Description" : "MYSQL instance running on Rackspace cloud",
- "Parameters" : {
- "FlavorRef": {
- "Description" : "Flavor reference",
- "Type": "String"
- },
- "VolumeSize": {
- "Description" : "The volume size",
- "Type": "Number",
- "MinValue" : "1",
- "MaxValue" : "1024"
- },
- "InstanceName": {
- "Description" : "The database instance name",
- "Type": "String"
- }
- },
- "Resources" : {
- "MySqlCloudDB": {
- "Type": "Rackspace::Cloud::DBInstance",
- "Properties" : {
- "InstanceName" : {"Ref": "InstanceName"},
- "FlavorRef" : {"Ref": "FlavorRef"},
-        "VolumeSize" : {"Ref": "VolumeSize"},
- "Users" : [{"name":"testuser", "password":"testpass123"}] ,
- "Databases" : [{"name":"testdbonetwo"}]
- }
- }
- }
-
-}
-'''
-
-
-class FakeDBInstance(object):
- def __init__(self):
- self.id = 12345
- self.hostname = "testhost"
- self.links = \
- [{"href": "https://adga23dd432a.rackspacecloud.com/132345245"}]
- self.resource_id = 12345
-
-
-class CloudDBInstanceTest(HeatTestCase):
- def setUp(self):
- super(CloudDBInstanceTest, self).setUp()
- utils.setup_dummy_db()
-        # The test environment may not have the pyrax client library
-        # installed, and without pyrax the resource class is not registered.
-        # So register the resource class explicitly for unit testing.
- resource._register_class("Rackspace::Cloud::DBInstance",
- clouddatabase.CloudDBInstance)
-
- def _setup_test_clouddbinstance(self, name, inject_property_error=False):
- stack_name = '%s_stack' % name
- t = template_format.parse(wp_template)
- template = parser.Template(t)
- stack = parser.Stack(utils.dummy_context(),
- stack_name,
- template,
- environment.Environment({'InstanceName': 'Test',
- 'FlavorRef': '1GB',
- 'VolumeSize': '30'}),
- stack_id=uuidutils.generate_uuid())
-
- if inject_property_error:
- # database name given in users list is not a valid database
- t['Resources']['MySqlCloudDB']['Properties']['Databases'] = \
- [{"Name": "onedb"}]
- t['Resources']['MySqlCloudDB']['Properties']['Users'] = \
- [{"Name": "testuser",
- "Password": "pass",
- "Databases": ["invaliddb"]}]
- else:
- t['Resources']['MySqlCloudDB']['Properties']['Databases'] = \
- [{"Name": "validdb"}]
- t['Resources']['MySqlCloudDB']['Properties']['Users'] = \
- [{"Name": "testuser",
- "Password": "pass",
- "Databases": ["validdb"]}]
-
- instance = clouddatabase.CloudDBInstance(
- '%s_name' % name,
- t['Resources']['MySqlCloudDB'],
- stack)
- instance.resource_id = 1234
- self.m.StubOutWithMock(instance, 'cloud_db')
- return instance
-
- def test_clouddbinstance(self):
- instance = self._setup_test_clouddbinstance('dbinstance')
- self.assertEqual(instance.hostname, None)
- self.assertEqual(instance.href, None)
-
- def test_clouddbinstance_create(self):
- instance = self._setup_test_clouddbinstance('dbinstance_create')
- fake_client = self.m.CreateMockAnything()
- instance.cloud_db().AndReturn(fake_client)
- fakedbinstance = FakeDBInstance()
- fake_client.create('Test',
- flavor='1GB',
- volume='30').AndReturn(fakedbinstance)
- self.m.ReplayAll()
- instance.handle_create()
- expected_hostname = fakedbinstance.hostname
- expected_href = fakedbinstance.links[0]['href']
- self.assertEqual(instance._resolve_attribute('hostname'),
- expected_hostname)
- self.assertEqual(instance._resolve_attribute('href'), expected_href)
- self.m.VerifyAll()
-
- def test_clouddbinstance_delete_resource_notfound(self):
- instance = self._setup_test_clouddbinstance('dbinstance_delete')
- instance.resource_id = None
- self.m.ReplayAll()
- instance.handle_delete()
- self.m.VerifyAll()
-
- def test_attribute_not_found(self):
- instance = self._setup_test_clouddbinstance('dbinstance_create')
- fake_client = self.m.CreateMockAnything()
- instance.cloud_db().AndReturn(fake_client)
- fakedbinstance = FakeDBInstance()
- fake_client.create('Test',
- flavor='1GB',
- volume='30').AndReturn(fakedbinstance)
- self.m.ReplayAll()
- instance.handle_create()
- self.assertEqual(instance._resolve_attribute('invalid-attrib'), None)
- self.m.VerifyAll()
-
- def test_clouddbinstance_delete(self):
- instance = self._setup_test_clouddbinstance('dbinstance_delete')
- fake_client = self.m.CreateMockAnything()
-        instance.cloud_db().AndReturn(fake_client)
- fake_client.delete(1234).AndReturn(None)
- self.m.ReplayAll()
- instance.handle_delete()
- self.m.VerifyAll()
-
- def test_clouddbinstance_param_validation_success(self):
- instance = self._setup_test_clouddbinstance(
- 'dbinstance_params',
- inject_property_error=False)
- self.m.ReplayAll()
- ret = instance.validate()
- self.assertEqual(ret, None)
- self.m.VerifyAll()
-
- def test_clouddbinstance_param_validation_fail(self):
- instance = self._setup_test_clouddbinstance('dbinstance_params',
- inject_property_error=True)
- self.m.ReplayAll()
- ret = instance.validate()
- self.assertTrue('Error' in ret)
- self.m.VerifyAll()
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from mock import patch
-
-from heat.common import exception
-from heat.common import template_format
-from heat.tests import common
-from heat.tests import utils
-from heat.engine import scheduler
-from heat.engine import watchrule
-
-
-AWS_CloudWatch_Alarm = '''
-HeatTemplateFormatVersion: '2012-12-12'
-Description: Template which tests alarms
-Resources:
- test_me:
- Type: AWS::CloudWatch::Alarm
- Properties:
- MetricName: cpu_util
- Namespace: AWS/EC2
- Statistic: Average
- Period: '60'
- EvaluationPeriods: '1'
- Threshold: '50'
- ComparisonOperator: GreaterThanThreshold
-'''
-
-
-class CloudWatchAlarmTest(common.HeatTestCase):
-
- def setUp(self):
- super(CloudWatchAlarmTest, self).setUp()
- utils.setup_dummy_db()
- self.ctx = utils.dummy_context()
-
- def parse_stack(self):
- t = template_format.parse(AWS_CloudWatch_Alarm)
- self.stack = utils.parse_stack(t)
- return self.stack
-
- @utils.stack_delete_after
- def test_resource_create_good(self):
- s = self.parse_stack()
- self.assertEqual(None, scheduler.TaskRunner(s['test_me'].create)())
-
- @utils.stack_delete_after
- def test_resource_create_failed(self):
- s = self.parse_stack()
- with patch.object(watchrule.WatchRule, 'store') as bad_store:
- bad_store.side_effect = KeyError('any random failure')
- task_func = scheduler.TaskRunner(s['test_me'].create)
- self.assertRaises(exception.ResourceFailure, task_func)
-
- @utils.stack_delete_after
- def test_resource_delete_good(self):
- s = self.parse_stack()
- self.assertEqual(None, scheduler.TaskRunner(s['test_me'].create)())
- self.assertEqual(None, scheduler.TaskRunner(s['test_me'].delete)())
-
- @utils.stack_delete_after
- @utils.wr_delete_after
- def test_resource_delete_notfound(self):
- # if a resource is not found, handle_delete() should not raise
- # an exception.
- s = self.parse_stack()
- self.assertEqual(None, scheduler.TaskRunner(s['test_me'].create)())
- res_name = self.stack['test_me'].physical_resource_name()
- self.wr = watchrule.WatchRule.load(self.ctx,
- watch_name=res_name)
-
- with patch.object(watchrule.WatchRule, 'destroy') as bad_destroy:
- bad_destroy.side_effect = exception.WatchRuleNotFound
- self.assertEqual(None, scheduler.TaskRunner(s['test_me'].delete)())
+++ /dev/null
-# Copyright 2012 OpenStack Foundation
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import os.path
-
-from oslo.config import cfg
-
-from heat.common import policy
-from heat.common import exception
-from heat.openstack.common import policy as base_policy
-from heat.tests.common import HeatTestCase
-from heat.tests import utils
-
-policy_path = os.path.dirname(os.path.realpath(__file__)) + "/policy/"
-
-
-class TestPolicyEnforcer(HeatTestCase):
- cfn_actions = ("ListStacks", "CreateStack", "DescribeStacks",
- "DeleteStack", "UpdateStack", "DescribeStackEvents",
- "ValidateTemplate", "GetTemplate",
- "EstimateTemplateCost", "DescribeStackResource",
- "DescribeStackResources")
-
- cw_actions = ("DeleteAlarms", "DescribeAlarmHistory", "DescribeAlarms",
- "DescribeAlarmsForMetric", "DisableAlarmActions",
- "EnableAlarmActions", "GetMetricStatistics", "ListMetrics",
- "PutMetricAlarm", "PutMetricData", "SetAlarmState")
-
- def setUp(self):
- super(TestPolicyEnforcer, self).setUp()
- opts = [
- cfg.StrOpt('config_dir', default=policy_path),
- cfg.StrOpt('config_file', default='foo'),
- cfg.StrOpt('project', default='heat'),
- ]
- cfg.CONF.register_opts(opts)
-
- def test_policy_cfn_default(self):
- pf = policy_path + 'deny_stack_user.json'
- self.m.StubOutWithMock(base_policy.Enforcer, '_get_policy_path')
- base_policy.Enforcer._get_policy_path().MultipleTimes().AndReturn(pf)
- self.m.ReplayAll()
-
- enforcer = policy.Enforcer(scope='cloudformation')
-
- ctx = utils.dummy_context(roles=[])
- for action in self.cfn_actions:
- # Everything should be allowed
- enforcer.enforce(ctx, action, {})
-
- def test_policy_cfn_notallowed(self):
- pf = policy_path + 'notallowed.json'
- self.m.StubOutWithMock(base_policy.Enforcer, '_get_policy_path')
- base_policy.Enforcer._get_policy_path().MultipleTimes().AndReturn(pf)
- self.m.ReplayAll()
-
- enforcer = policy.Enforcer(scope='cloudformation')
-
- ctx = utils.dummy_context(roles=[])
- for action in self.cfn_actions:
- # Everything should raise the default exception.Forbidden
- self.assertRaises(exception.Forbidden, enforcer.enforce, ctx,
- action, {})
- self.m.VerifyAll()
-
- def test_policy_cfn_deny_stack_user(self):
- pf = policy_path + 'deny_stack_user.json'
- self.m.StubOutWithMock(base_policy.Enforcer, '_get_policy_path')
- base_policy.Enforcer._get_policy_path().MultipleTimes().AndReturn(pf)
- self.m.ReplayAll()
-
- enforcer = policy.Enforcer(scope='cloudformation')
-
- ctx = utils.dummy_context(roles=['heat_stack_user'])
- for action in self.cfn_actions:
- # Everything apart from DescribeStackResource should be Forbidden
- if action == "DescribeStackResource":
- enforcer.enforce(ctx, action, {})
- else:
- self.assertRaises(exception.Forbidden, enforcer.enforce, ctx,
- action, {})
- self.m.VerifyAll()
-
- def test_policy_cfn_allow_non_stack_user(self):
- pf = policy_path + 'deny_stack_user.json'
- self.m.StubOutWithMock(base_policy.Enforcer, '_get_policy_path')
- base_policy.Enforcer._get_policy_path().MultipleTimes().AndReturn(pf)
- self.m.ReplayAll()
-
- enforcer = policy.Enforcer(scope='cloudformation')
-
- ctx = utils.dummy_context(roles=['not_a_stack_user'])
- for action in self.cfn_actions:
- # Everything should be allowed
- enforcer.enforce(ctx, action, {})
- self.m.VerifyAll()
-
- def test_policy_cw_deny_stack_user(self):
- pf = policy_path + 'deny_stack_user.json'
- self.m.StubOutWithMock(base_policy.Enforcer, '_get_policy_path')
- base_policy.Enforcer._get_policy_path().MultipleTimes().AndReturn(pf)
- self.m.ReplayAll()
-
- enforcer = policy.Enforcer(scope='cloudwatch')
-
- ctx = utils.dummy_context(roles=['heat_stack_user'])
- for action in self.cw_actions:
- # Everything apart from PutMetricData should be Forbidden
- if action == "PutMetricData":
- enforcer.enforce(ctx, action, {})
- else:
- self.assertRaises(exception.Forbidden, enforcer.enforce, ctx,
- action, {})
- self.m.VerifyAll()
-
- def test_policy_cw_allow_non_stack_user(self):
- pf = policy_path + 'deny_stack_user.json'
- self.m.StubOutWithMock(base_policy.Enforcer, '_get_policy_path')
- base_policy.Enforcer._get_policy_path().MultipleTimes().AndReturn(pf)
- self.m.ReplayAll()
-
- enforcer = policy.Enforcer(scope='cloudwatch')
-
- ctx = utils.dummy_context(roles=['not_a_stack_user'])
- for action in self.cw_actions:
- # Everything should be allowed
- enforcer.enforce(ctx, action, {})
- self.m.VerifyAll()
-
- def test_clear(self):
- pf = policy_path + 'deny_stack_user.json'
- self.m.StubOutWithMock(base_policy.Enforcer, '_get_policy_path')
- base_policy.Enforcer._get_policy_path().MultipleTimes().AndReturn(pf)
- self.m.ReplayAll()
-
- enforcer = policy.Enforcer()
- enforcer.load_rules(force_reload=True)
- enforcer.clear()
- self.assertEqual(enforcer.enforcer.rules, {})
- self.m.VerifyAll()
-
- def test_set_rules_overwrite_true(self):
- pf = policy_path + 'deny_stack_user.json'
- self.m.StubOutWithMock(base_policy.Enforcer, '_get_policy_path')
- base_policy.Enforcer._get_policy_path().MultipleTimes().AndReturn(pf)
- self.m.ReplayAll()
-
- enforcer = policy.Enforcer()
- enforcer.load_rules(True)
- enforcer.set_rules({'test_heat_rule': 1}, True)
- self.assertEqual(enforcer.enforcer.rules, {'test_heat_rule': 1})
-
- def test_set_rules_overwrite_false(self):
- pf = policy_path + 'deny_stack_user.json'
- self.m.StubOutWithMock(base_policy.Enforcer, '_get_policy_path')
- base_policy.Enforcer._get_policy_path().MultipleTimes().AndReturn(pf)
- self.m.ReplayAll()
-
- enforcer = policy.Enforcer()
- enforcer.load_rules(True)
- enforcer.set_rules({'test_heat_rule': 1}, False)
- self.assertIn('test_heat_rule', enforcer.enforcer.rules)
-
- def test_load_rules_force_reload_true(self):
- pf = policy_path + 'deny_stack_user.json'
- self.m.StubOutWithMock(base_policy.Enforcer, '_get_policy_path')
- base_policy.Enforcer._get_policy_path().MultipleTimes().AndReturn(pf)
- self.m.ReplayAll()
-
- enforcer = policy.Enforcer()
- enforcer.set_rules({'test_heat_rule': 'test'})
- enforcer.load_rules(True)
- self.assertNotIn({'test_heat_rule': 'test'}, enforcer.enforcer.rules)
-
- def test_load_rules_force_reload_false(self):
- pf = policy_path + 'deny_stack_user.json'
- self.m.StubOutWithMock(base_policy.Enforcer, '_get_policy_path')
- base_policy.Enforcer._get_policy_path().MultipleTimes().AndReturn(pf)
- self.m.ReplayAll()
-
- enforcer = policy.Enforcer()
- enforcer.load_rules(True)
- enforcer.set_rules({'test_heat_rule': 'test'})
- enforcer.load_rules(False)
- self.assertIn('test_heat_rule', enforcer.enforcer.rules)
-
- def test_default_rule(self):
- pf = policy_path + 'deny_stack_user.json'
- self.m.StubOutWithMock(base_policy.Enforcer, '_get_policy_path')
- base_policy.Enforcer._get_policy_path().MultipleTimes().AndReturn(pf)
- self.m.ReplayAll()
-
- ctx = utils.dummy_context(roles=['not_a_stack_user'])
- default_rule = base_policy.FalseCheck()
- enforcer = policy.Enforcer(scope='cloudformation',
- exc=None, default_rule=default_rule)
- action = 'no_such_action'
- self.assertEqual(enforcer.enforce(ctx, action, {}), False)
- self.m.VerifyAll()
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from heat.engine.components import Component
-from heat.engine.components import Components
-from heat.tests.common import HeatTestCase
-
-
-class ComponentTest(HeatTestCase):
-
- def test_init(self):
- comp = Component()
- self.assertEqual(comp.type, 'OS::Heat::SoftwareConfig')
- self.assertEqual(comp.properties, {})
- self.assertEqual(comp.scripts, {})
- self.assertEqual(comp.relations, [])
- self.assertEqual(comp.hosted_on(), None)
- self.assertEqual(comp.depends(), [])
-
- def test_hosted_on(self):
- schema = {
- 'relationships': [
- {'hosted_on': 'wordpress'}
- ]
- }
- comp = Component(schema)
- self.assertEqual(comp.hosted_on(), 'wordpress')
-
- def test_depends(self):
- schema = {
- 'relationships': [
- {'depends_on': 'config_mysql'}
- ]
- }
- comp = Component(schema)
- self.assertEqual(comp.depends(), ['config_mysql'])
-
- comp['relationships'].append({'depends_on': 'config_wordpress'})
- self.assertEqual(comp.depends(),
- ['config_mysql', 'config_wordpress'])
-
-
-class ComponentsTest(HeatTestCase):
-
- def test_init(self):
- schema = {}
- comps = Components(schema)
- self.assertEqual(0, len(comps))
-
- schema['config_mysql'] = {}
- comps = Components(schema)
- self.assertEqual(1, len(comps))
- comp = comps['config_mysql']
- self.assertIsInstance(comp, Component)
-
- def test_depends(self):
- schema = {
- 'install_mysql': {
- },
- 'config_mysql': {
- 'relationships': [
- {'depends_on': 'install_mysql'}
- ]
- },
- 'start_mysql': {
- 'relationships': [
- {'depends_on': 'config_mysql'}
- ]
- }
- }
- comps = Components(schema)
- self.assertEqual(3, len(comps))
- deps = comps.depends()
- self.assertEqual(2, len(deps))
- self.assertIn('install_mysql', deps)
- self.assertIn('config_mysql', deps)
-
- def test_multi_depends(self):
- schema = {
- 'install_mysql': {
- },
- 'config_mysql': {
- 'relationships': [
- {'depends_on': 'install_mysql'}
- ]
- },
- 'start_mysql': {
- 'relationships': [
- {'depends_on': 'config_mysql'}
- ]
- },
- 'install_wordpress': {},
- 'config_wordpress': {
- 'relationships': [
- {'depends_on': 'install_wordpress'}
- ]
- },
- 'start_wordpress': {
- 'relationships': [
- {'depends_on': 'config_wordpress'},
- {'depends_on': 'start_mysql'}
- ]
- }
- }
- comps = Components(schema)
- deps = comps.depends()
- self.assertEqual(5, len(deps))
- self.assertNotIn('start_wordpress', deps)
- self.assertIn('install_wordpress', deps)
- self.assertIn('config_wordpress', deps)
- self.assertIn('start_mysql', deps)
- self.assertIn('config_mysql', deps)
- self.assertIn('install_mysql', deps)
-
- def test_filter(self):
- schema = {
- 'install_mysql': {
- 'relationships': [
- {'hosted_on': 'mysql'}
- ]
- },
- 'config_mysql': {
- 'relationships': [
- {'hosted_on': 'mysql'},
- {'depends_on': 'install_mysql'}
- ]
- },
- 'start_mysql': {
- 'relationships': [
- {'hosted_on': 'mysql'},
- {'depends_on': 'config_mysql'}
- ]
- },
- 'install_wordpress': {
- 'relationships': [
- {'hosted_on': 'wordpress'}
- ]
- },
- 'config_wordpress': {
- 'relationships': [
- {'hosted_on': 'wordpress'},
- {'depends_on': 'install_wordpress'}
- ]
- },
- 'start_wordpress': {
- 'relationships': [
- {'hosted_on': 'wordpress'},
- {'depends_on': 'config_wordpress'},
- {'depends_on': 'start_mysql'}
- ]
- }
- }
-
- comps = Components(schema)
- names = comps.filter('mysql')
- self.assertEqual(3, len(names))
- self.assertIn('config_mysql', names)
- self.assertIn('install_mysql', names)
- self.assertIn('start_mysql', names)
-
- names = comps.filter('wordpress')
- self.assertEqual(3, len(names))
- self.assertIn('config_wordpress', names)
- self.assertIn('install_wordpress', names)
- self.assertIn('start_wordpress', names)
-
- def test_validate(self):
- schema = {'install_mysql': {}}
- comps = Components(schema)
- self.assertTrue(comps.validate())
-
- schema = {
- 'config_mysql': {
- 'relationships': [
- {'depends_on': 'config_mysql'}
- ]
- }
- }
- comps = Components(schema)
- err = self.assertRaises(ValueError, comps.validate)
- self.assertIn('component config_mysql depends on itself.', str(err))
-
- schema = {
- 'config_mysql': {
- 'relationships': [
- {'depends_on': 'install_mysql'}
- ]
- }
- }
- comps = Components(schema)
- err = self.assertRaises(ValueError, comps.validate)
- self.assertIn('component install_mysql is not defined.', str(err))
-
- schema = {
- 'install_mysql': {
- },
- 'config_mysql': {
- 'relationships': [
- {'depends_on': 'install_mysql'},
- {'depends_on': 'install_mysql'}
- ]
- }
- }
- comps = Components(schema)
- err = self.assertRaises(ValueError, comps.validate)
- self.assertIn('duplicated install_mysql in config_mysql depends on.',
- str(err))
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-
-import copy
-
-from heat.common import template_format
-from heat.engine.resources import cloud_watch
-from heat.engine import resource
-from heat.engine import watchrule
-from heat.engine import scheduler
-from heat.tests.common import HeatTestCase
-from heat.tests import utils
-
-
-alarm_template = '''
-{
- "AWSTemplateFormatVersion" : "2010-09-09",
- "Description" : "Alarm Test",
- "Parameters" : {},
- "Resources" : {
- "MEMAlarmHigh": {
- "Type": "AWS::CloudWatch::Alarm",
- "Properties": {
- "AlarmDescription": "Scale-up if MEM > 50% for 1 minute",
- "MetricName": "MemoryUtilization",
- "Namespace": "system/linux",
- "Statistic": "Average",
- "Period": "60",
- "EvaluationPeriods": "1",
- "Threshold": "50",
- "AlarmActions": [],
- "Dimensions": [],
- "ComparisonOperator": "GreaterThanThreshold"
- }
- }
- }
-}
-'''
-
-
-class CloudWatchAlarmTest(HeatTestCase):
- def setUp(self):
- super(CloudWatchAlarmTest, self).setUp()
- utils.setup_dummy_db()
-
- def create_alarm(self, t, stack, resource_name):
- rsrc = cloud_watch.CloudWatchAlarm(resource_name,
- t['Resources'][resource_name],
- stack)
- self.assertEqual(None, rsrc.validate())
- scheduler.TaskRunner(rsrc.create)()
- self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state)
- return rsrc
-
- def test_mem_alarm_high_update_no_replace(self):
- '''
- Make sure that we can change the update-able properties
- without replacing the Alarm rsrc.
- '''
- t = template_format.parse(alarm_template)
-
- #short circuit the alarm's references
- properties = t['Resources']['MEMAlarmHigh']['Properties']
- properties['AlarmActions'] = ['a']
- properties['Dimensions'] = [{'a': 'v'}]
-
- stack = utils.parse_stack(t)
- # the watch rule needs a valid stack_id
- stack.store()
-
- self.m.ReplayAll()
- rsrc = self.create_alarm(t, stack, 'MEMAlarmHigh')
- snippet = copy.deepcopy(rsrc.parsed_template())
- snippet['Properties']['ComparisonOperator'] = 'LessThanThreshold'
- snippet['Properties']['AlarmDescription'] = 'fruity'
- snippet['Properties']['EvaluationPeriods'] = '2'
- snippet['Properties']['Period'] = '90'
- snippet['Properties']['Statistic'] = 'Maximum'
- snippet['Properties']['Threshold'] = '39'
-
- scheduler.TaskRunner(rsrc.update, snippet)()
-
- scheduler.TaskRunner(rsrc.delete)()
- self.m.VerifyAll()
-
- def test_mem_alarm_high_update_replace(self):
- '''
- Make sure that the Alarm resource IS replaced when non-update-able
- properties are changed.
- '''
- t = template_format.parse(alarm_template)
-
- #short circuit the alarm's references
- properties = t['Resources']['MEMAlarmHigh']['Properties']
- properties['AlarmActions'] = ['a']
- properties['Dimensions'] = [{'a': 'v'}]
-
- stack = utils.parse_stack(t)
- # the watch rule needs a valid stack_id
- stack.store()
-
- self.m.ReplayAll()
- rsrc = self.create_alarm(t, stack, 'MEMAlarmHigh')
- snippet = copy.deepcopy(rsrc.parsed_template())
- snippet['Properties']['MetricName'] = 'temp'
-
- updater = scheduler.TaskRunner(rsrc.update, snippet)
- self.assertRaises(resource.UpdateReplace, updater)
-
- scheduler.TaskRunner(rsrc.delete)()
- self.m.VerifyAll()
-
- def test_suspend_resume(self):
- t = template_format.parse(alarm_template)
- stack = utils.parse_stack(t)
- # the watch rule needs a valid stack_id
- stack.store()
-
- self.m.ReplayAll()
- rsrc = self.create_alarm(t, stack, 'MEMAlarmHigh')
- scheduler.TaskRunner(rsrc.suspend)()
- self.assertEqual(rsrc.state, (rsrc.SUSPEND, rsrc.COMPLETE))
-
- wr = watchrule.WatchRule.load(
- None, watch_name="test_stack-MEMAlarmHigh")
-
- self.assertEqual(wr.state, watchrule.WatchRule.SUSPENDED)
-
- scheduler.TaskRunner(rsrc.resume)()
- self.assertEqual(rsrc.state, (rsrc.RESUME, rsrc.COMPLETE))
-
- wr = watchrule.WatchRule.load(
- None, watch_name="test_stack-MEMAlarmHigh")
-
- self.assertEqual(wr.state, watchrule.WatchRule.NODATA)
-
- scheduler.TaskRunner(rsrc.delete)()
- self.m.VerifyAll()
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from heat.common import template_format
-from heat.engine import resource
-from heat.tests.common import HeatTestCase
-from heat.tests import utils
-from heat.engine import parser
-
-
-rds_template = '''
-{
- "AWSTemplateFormatVersion" : "2010-09-09",
- "Description" : "RDS Test",
- "Parameters" : {
- "KeyName" : {
- "Description" : "KeyName",
- "Type" : "String",
- "Default" : "test"
- }
- },
- "Resources" : {
- "DatabaseServer": {
- "Type": "AWS::RDS::DBInstance",
- "Properties": {
- "DBName" : "wordpress",
- "Engine" : "MySQL",
- "MasterUsername" : "admin",
- "DBInstanceClass" : "db.m1.small",
- "DBSecurityGroups" : [],
- "AllocatedStorage" : "5",
- "MasterUserPassword": "admin"
- }
- }
- }
-}
-'''
-
-
-class DBInstance(resource.Resource):
- """This is copied from the old DBInstance
- to verify the schema of the new TemplateResource.
- """
- properties_schema = {
- 'DBSnapshotIdentifier': {'Type': 'String',
- 'Implemented': False},
- 'AllocatedStorage': {'Type': 'String',
- 'Required': True},
- 'AvailabilityZone': {'Type': 'String',
- 'Implemented': False},
- 'BackupRetentionPeriod': {'Type': 'String',
- 'Implemented': False},
- 'DBInstanceClass': {'Type': 'String',
- 'Required': True},
- 'DBName': {'Type': 'String',
- 'Required': False},
- 'DBParameterGroupName': {'Type': 'String',
- 'Implemented': False},
- 'DBSecurityGroups': {'Type': 'List',
- 'Required': False, 'Default': []},
- 'DBSubnetGroupName': {'Type': 'String',
- 'Implemented': False},
- 'Engine': {'Type': 'String',
- 'AllowedValues': ['MySQL'],
- 'Required': True},
- 'EngineVersion': {'Type': 'String',
- 'Implemented': False},
- 'LicenseModel': {'Type': 'String',
- 'Implemented': False},
- 'MasterUsername': {'Type': 'String',
- 'Required': True},
- 'MasterUserPassword': {'Type': 'String',
- 'Required': True},
- 'Port': {'Type': 'String',
- 'Default': '3306',
- 'Required': False},
- 'PreferredBackupWindow': {'Type': 'String',
- 'Implemented': False},
- 'PreferredMaintenanceWindow': {'Type': 'String',
- 'Implemented': False},
- 'MultiAZ': {'Type': 'Boolean',
- 'Implemented': False},
- }
-
- # We only support a couple of the attributes right now
- attributes_schema = {
- "Endpoint.Address": "Connection endpoint for the database.",
- "Endpoint.Port": ("The port number on which the database accepts "
- "connections.")
- }
-
-
-class DBInstanceTest(HeatTestCase):
- def setUp(self):
- super(DBInstanceTest, self).setUp()
- utils.setup_dummy_db()
-
- def test_dbinstance(self):
- """test that the Template is parsable and
- publishes the correct properties.
- """
- templ = parser.Template(template_format.parse(rds_template))
- stack = parser.Stack(utils.dummy_context(), 'test_stack',
- templ)
-
- res = stack['DatabaseServer']
- self.assertEqual(None, res._validate_against_facade(DBInstance))
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-
-import testtools
-
-from heat.engine.dependencies import Dependencies
-from heat.engine.dependencies import CircularDependencyException
-
-
-class dependenciesTest(testtools.TestCase):
-
- def _dep_test(self, func, checkorder, deps):
- nodes = set.union(*[set(e) for e in deps])
-
- d = Dependencies(deps)
- order = list(func(d))
-
- for n in nodes:
- self.assertTrue(n in order, '"%s" is not in the sequence' % n)
- self.assertEqual(order.count(n), 1)
-
- self.assertEqual(len(order), len(nodes))
-
- for l, f in deps:
- checkorder(order.index(f), order.index(l))
-
- def _dep_test_fwd(self, *deps):
- def assertLess(a, b):
- self.assertTrue(a < b,
- '"%s" is not less than "%s"' % (str(a), str(b)))
- self._dep_test(iter, assertLess, deps)
-
- def _dep_test_rev(self, *deps):
- def assertGreater(a, b):
- self.assertTrue(a > b,
- '"%s" is not greater than "%s"' % (str(a), str(b)))
- self._dep_test(reversed, assertGreater, deps)
-
- def test_edges(self):
- input_edges = [('1', None), ('2', '3'), ('2', '4')]
- dp = Dependencies(input_edges)
- self.assertEqual(set(dp.graph().edges()), set(input_edges))
-
- def test_repr(self):
- dp = Dependencies([('1', None), ('2', '3'), ('2', '4')])
- s = "Dependencies([('1', None), ('2', '3'), ('2', '4')])"
- self.assertEqual(repr(dp), s)
-
- def test_single_node(self):
- d = Dependencies([('only', None)])
- l = list(iter(d))
- self.assertEqual(len(l), 1)
- self.assertEqual(l[0], 'only')
-
- def test_disjoint(self):
- d = Dependencies([('1', None), ('2', None)])
- l = list(iter(d))
- self.assertEqual(len(l), 2)
- self.assertTrue('1' in l)
- self.assertTrue('2' in l)
-
- def test_single_fwd(self):
- self._dep_test_fwd(('second', 'first'))
-
- def test_single_rev(self):
- self._dep_test_rev(('second', 'first'))
-
- def test_chain_fwd(self):
- self._dep_test_fwd(('third', 'second'), ('second', 'first'))
-
- def test_chain_rev(self):
- self._dep_test_rev(('third', 'second'), ('second', 'first'))
-
- def test_diamond_fwd(self):
- self._dep_test_fwd(('last', 'mid1'), ('last', 'mid2'),
- ('mid1', 'first'), ('mid2', 'first'))
-
- def test_diamond_rev(self):
- self._dep_test_rev(('last', 'mid1'), ('last', 'mid2'),
- ('mid1', 'first'), ('mid2', 'first'))
-
- def test_complex_fwd(self):
- self._dep_test_fwd(('last', 'mid1'), ('last', 'mid2'),
- ('mid1', 'mid3'), ('mid1', 'first'),
- ('mid3', 'first'), ('mid2', 'first'))
-
- def test_complex_rev(self):
- self._dep_test_rev(('last', 'mid1'), ('last', 'mid2'),
- ('mid1', 'mid3'), ('mid1', 'first'),
- ('mid3', 'first'), ('mid2', 'first'))
-
- def test_many_edges_fwd(self):
- self._dep_test_fwd(('last', 'e1'), ('last', 'mid1'), ('last', 'mid2'),
- ('mid1', 'e2'), ('mid1', 'mid3'),
- ('mid2', 'mid3'),
- ('mid3', 'e3'))
-
- def test_many_edges_rev(self):
- self._dep_test_rev(('last', 'e1'), ('last', 'mid1'), ('last', 'mid2'),
- ('mid1', 'e2'), ('mid1', 'mid3'),
- ('mid2', 'mid3'),
- ('mid3', 'e3'))
-
- def test_dbldiamond_fwd(self):
- self._dep_test_fwd(('last', 'a1'), ('last', 'a2'),
- ('a1', 'b1'), ('a2', 'b1'), ('a2', 'b2'),
- ('b1', 'first'), ('b2', 'first'))
-
- def test_dbldiamond_rev(self):
- self._dep_test_rev(('last', 'a1'), ('last', 'a2'),
- ('a1', 'b1'), ('a2', 'b1'), ('a2', 'b2'),
- ('b1', 'first'), ('b2', 'first'))
-
- def test_circular_fwd(self):
- d = Dependencies([('first', 'second'),
- ('second', 'third'),
- ('third', 'first')])
- self.assertRaises(CircularDependencyException, list, iter(d))
-
- def test_circular_rev(self):
- d = Dependencies([('first', 'second'),
- ('second', 'third'),
- ('third', 'first')])
- self.assertRaises(CircularDependencyException, list, reversed(d))
-
- def test_self_ref(self):
- d = Dependencies([('node', 'node')])
- self.assertRaises(CircularDependencyException, list, iter(d))
-
- def test_complex_circular_fwd(self):
- d = Dependencies([('last', 'e1'), ('last', 'mid1'), ('last', 'mid2'),
- ('mid1', 'e2'), ('mid1', 'mid3'),
- ('mid2', 'mid3'),
- ('mid3', 'e3'),
- ('e3', 'mid1')])
- self.assertRaises(CircularDependencyException, list, iter(d))
-
- def test_complex_circular_rev(self):
- d = Dependencies([('last', 'e1'), ('last', 'mid1'), ('last', 'mid2'),
- ('mid1', 'e2'), ('mid1', 'mid3'),
- ('mid2', 'mid3'),
- ('mid3', 'e3'),
- ('e3', 'mid1')])
- self.assertRaises(CircularDependencyException, list, reversed(d))
-
- def test_noexist_partial(self):
- d = Dependencies([('foo', 'bar')])
- get = lambda i: d[i]
- self.assertRaises(KeyError, get, 'baz')
-
- def test_single_partial(self):
- d = Dependencies([('last', 'first')])
- p = d['last']
- l = list(iter(p))
- self.assertEqual(len(l), 1)
- self.assertEqual(l[0], 'last')
-
- def test_simple_partial(self):
- d = Dependencies([('last', 'middle'), ('middle', 'first')])
- p = d['middle']
- order = list(iter(p))
- self.assertEqual(len(order), 2)
- for n in ('last', 'middle'):
- self.assertTrue(n in order,
- "'%s' not found in dependency order" % n)
- self.assertTrue(order.index('last') > order.index('middle'))
-
- def test_simple_multilevel_partial(self):
- d = Dependencies([('last', 'middle'),
- ('middle', 'target'),
- ('target', 'first')])
- p = d['target']
- order = list(iter(p))
- self.assertEqual(len(order), 3)
- for n in ('last', 'middle', 'target'):
- self.assertTrue(n in order,
- "'%s' not found in dependency order" % n)
-
- def test_complex_partial(self):
- d = Dependencies([('last', 'e1'), ('last', 'mid1'), ('last', 'mid2'),
- ('mid1', 'e2'), ('mid1', 'mid3'),
- ('mid2', 'mid3'),
- ('mid3', 'e3')])
- p = d['mid3']
- order = list(iter(p))
- self.assertEqual(len(order), 4)
- for n in ('last', 'mid1', 'mid2', 'mid3'):
- self.assertTrue(n in order,
- "'%s' not found in dependency order" % n)
-
- def test_required_by(self):
- d = Dependencies([('last', 'e1'), ('last', 'mid1'), ('last', 'mid2'),
- ('mid1', 'e2'), ('mid1', 'mid3'),
- ('mid2', 'mid3'),
- ('mid3', 'e3')])
-
- self.assertEqual(0, len(list(d.required_by('last'))))
-
- required_by = list(d.required_by('mid3'))
- self.assertEqual(len(required_by), 2)
- for n in ('mid1', 'mid2'):
- self.assertTrue(n in required_by,
- "'%s' not found in required_by" % n)
-
- required_by = list(d.required_by('e2'))
- self.assertEqual(len(required_by), 1)
- self.assertTrue('mid1' in required_by,
- "'%s' not found in required_by" % 'mid1')
-
- self.assertRaises(KeyError, d.required_by, 'foo')
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from testtools import skipIf
-
-from heat.common import exception
-from heat.common import template_format
-from heat.engine.resources import eip
-from heat.engine import clients
-from heat.engine import resource
-from heat.engine import scheduler
-from heat.engine import parser
-from heat.tests.common import HeatTestCase
-from heat.tests.v1_1 import fakes
-from heat.tests import fakes as fakec
-from heat.tests import utils
-
-
-eip_template = '''
-{
- "AWSTemplateFormatVersion" : "2010-09-09",
- "Description" : "EIP Test",
- "Parameters" : {},
- "Resources" : {
- "IPAddress" : {
- "Type" : "AWS::EC2::EIP",
- "Properties" : {
- "InstanceId" : { "Ref" : "WebServer" }
- }
- },
- "WebServer": {
- "Type": "AWS::EC2::Instance"
- }
- }
-}
-'''
-
-eip_template_ipassoc = '''
-{
- "AWSTemplateFormatVersion" : "2010-09-09",
- "Description" : "EIP Test",
- "Parameters" : {},
- "Resources" : {
- "IPAddress" : {
- "Type" : "AWS::EC2::EIP"
- },
- "IPAssoc" : {
- "Type" : "AWS::EC2::EIPAssociation",
- "Properties" : {
- "InstanceId" : { "Ref" : "WebServer" },
- "EIP" : { "Ref" : "IPAddress" }
- }
- },
- "WebServer": {
- "Type": "AWS::EC2::Instance"
- }
- }
-}
-'''
-
-eip_template_ipassoc2 = '''
-{
- "AWSTemplateFormatVersion" : "2010-09-09",
- "Description" : "EIP Test",
- "Parameters" : {},
- "Resources" : {
- "the_eip" : {
- "Type" : "AWS::EC2::EIP",
- "Properties" : {
- "Domain": "vpc"
- }
- },
- "IPAssoc" : {
- "Type" : "AWS::EC2::EIPAssociation",
- "Properties" : {
- "AllocationId" : "fc68ea2c-b60b-4b4f-bd82-94ec81110766",
- "NetworkInterfaceId" : { "Ref" : "the_nic" }
- }
- },
- "the_vpc" : {
- "Type" : "AWS::EC2::VPC",
- "Properties" : {
- "CidrBlock" : "10.0.0.0/16"
- }
- },
- "the_subnet" : {
- "Type" : "AWS::EC2::Subnet",
- "Properties" : {
- "CidrBlock" : "10.0.0.0/24",
- "VpcId" : { "Ref" : "the_vpc" }
- }
- },
- "the_nic" : {
- "Type" : "AWS::EC2::NetworkInterface",
- "Properties" : {
- "PrivateIpAddress": "10.0.0.100",
- "SubnetId": { "Ref": "the_subnet" }
- }
- }
- }
-}
-'''
-
-
-def force_networking(mode):
- if mode == 'nova':
- force_networking.client = clients.neutronclient
- clients.neutronclient = None
- if mode == 'neutron':
- clients.neutronclient = force_networking.client
-force_networking.client = None
-
-
-class EIPTest(HeatTestCase):
- def setUp(self):
- # force Nova, will test Neutron below
- force_networking('nova')
- super(EIPTest, self).setUp()
- self.fc = fakes.FakeClient()
- self.m.StubOutWithMock(eip.ElasticIp, 'nova')
- self.m.StubOutWithMock(eip.ElasticIpAssociation, 'nova')
- self.m.StubOutWithMock(self.fc.servers, 'get')
- utils.setup_dummy_db()
-
- def tearDown(self):
- super(EIPTest, self).tearDown()
- force_networking('neutron')
-
- def create_eip(self, t, stack, resource_name):
- rsrc = eip.ElasticIp(resource_name,
- t['Resources'][resource_name],
- stack)
- self.assertEqual(None, rsrc.validate())
- scheduler.TaskRunner(rsrc.create)()
- self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state)
- return rsrc
-
- def create_association(self, t, stack, resource_name):
- rsrc = eip.ElasticIpAssociation(resource_name,
- t['Resources'][resource_name],
- stack)
- self.assertEqual(None, rsrc.validate())
- scheduler.TaskRunner(rsrc.create)()
- self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state)
- return rsrc
-
- def test_eip(self):
- eip.ElasticIp.nova().MultipleTimes().AndReturn(self.fc)
- self.fc.servers.get('WebServer').AndReturn(self.fc.servers.list()[0])
- self.fc.servers.get('WebServer')
-
- self.m.ReplayAll()
-
- t = template_format.parse(eip_template)
- stack = utils.parse_stack(t)
-
- rsrc = self.create_eip(t, stack, 'IPAddress')
-
- try:
- self.assertEqual('11.0.0.1', rsrc.FnGetRefId())
- rsrc.refid = None
- self.assertEqual('11.0.0.1', rsrc.FnGetRefId())
-
- self.assertEqual('1', rsrc.FnGetAtt('AllocationId'))
-
- self.assertRaises(resource.UpdateReplace,
- rsrc.handle_update, {}, {}, {})
-
- self.assertRaises(exception.InvalidTemplateAttribute,
- rsrc.FnGetAtt, 'Foo')
-
- finally:
- scheduler.TaskRunner(rsrc.destroy)()
-
- self.m.VerifyAll()
-
- def test_association_eip(self):
- eip.ElasticIp.nova().AndReturn(self.fc)
- eip.ElasticIp.nova().AndReturn(self.fc)
-
- self.m.ReplayAll()
-
- t = template_format.parse(eip_template_ipassoc)
- stack = utils.parse_stack(t)
-
- rsrc = self.create_eip(t, stack, 'IPAddress')
- association = self.create_association(t, stack, 'IPAssoc')
-
- # TODO(sbaker), figure out why this is an empty string
- #self.assertEqual('', association.FnGetRefId())
-
- scheduler.TaskRunner(association.delete)()
- scheduler.TaskRunner(rsrc.delete)()
-
- self.m.VerifyAll()
-
- def test_eip_with_exception(self):
- self.m.StubOutWithMock(self.fc.floating_ips, 'create')
- eip.ElasticIp.nova().MultipleTimes().AndReturn(self.fc)
- self.fc.floating_ips.create().AndRaise(
- clients.novaclient.exceptions.NotFound('fake_failure'))
- self.m.ReplayAll()
-
- t = template_format.parse(eip_template)
- stack = utils.parse_stack(t)
- resource_name = 'IPAddress'
- rsrc = eip.ElasticIp(resource_name,
- t['Resources'][resource_name],
- stack)
-
- self.assertRaises(clients.novaclient.exceptions.NotFound,
- rsrc.handle_create)
- self.m.VerifyAll()
-
-
-class AllocTest(HeatTestCase):
-
- @skipIf(clients.neutronclient is None, 'neutronclient unavailable')
- def setUp(self):
- super(AllocTest, self).setUp()
-
- self.fc = fakes.FakeClient()
- self.m.StubOutWithMock(eip.ElasticIp, 'nova')
- self.m.StubOutWithMock(eip.ElasticIpAssociation, 'nova')
- self.m.StubOutWithMock(self.fc.servers, 'get')
-
- self.m.StubOutWithMock(parser.Stack, 'resource_by_refid')
- self.m.StubOutWithMock(clients.neutronclient.Client,
- 'create_floatingip')
- self.m.StubOutWithMock(clients.neutronclient.Client,
- 'show_floatingip')
- self.m.StubOutWithMock(clients.neutronclient.Client,
- 'update_floatingip')
- self.m.StubOutWithMock(clients.neutronclient.Client,
- 'delete_floatingip')
- self.m.StubOutWithMock(clients.neutronclient.Client,
- 'add_gateway_router')
- self.m.StubOutWithMock(clients.neutronclient.Client, 'list_networks')
- self.m.StubOutWithMock(clients.neutronclient.Client, 'list_ports')
- self.m.StubOutWithMock(clients.neutronclient.Client, 'list_subnets')
- self.m.StubOutWithMock(clients.neutronclient.Client, 'show_network')
- self.m.StubOutWithMock(clients.neutronclient.Client, 'list_routers')
- self.m.StubOutWithMock(clients.neutronclient.Client,
- 'remove_gateway_router')
- self.m.StubOutWithMock(clients.OpenStackClients, 'keystone')
-
- utils.setup_dummy_db()
-
- def mock_show_network(self):
- vpc_name = utils.PhysName('test_stack', 'the_vpc')
- clients.neutronclient.Client.show_network(
- 'aaaa-netid'
- ).AndReturn({"network": {
- "status": "BUILD",
- "subnets": [],
- "name": vpc_name,
- "admin_state_up": False,
- "shared": False,
- "tenant_id": "c1210485b2424d48804aad5d39c61b8f",
- "id": "aaaa-netid"
- }})
-
- def create_eip(self, t, stack, resource_name):
- rsrc = eip.ElasticIp(resource_name,
- t['Resources'][resource_name],
- stack)
- self.assertEqual(None, rsrc.validate())
- scheduler.TaskRunner(rsrc.create)()
- self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state)
- return rsrc
-
- def create_association(self, t, stack, resource_name):
- rsrc = eip.ElasticIpAssociation(resource_name,
- t['Resources'][resource_name],
- stack)
- self.assertEqual(None, rsrc.validate())
- scheduler.TaskRunner(rsrc.create)()
- self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state)
- return rsrc
-
- def mock_update_floatingip(self, port='the_nic'):
- clients.neutronclient.Client.update_floatingip(
- 'fc68ea2c-b60b-4b4f-bd82-94ec81110766',
- {'floatingip': {'port_id': port}}).AndReturn(None)
-
- def mock_create_gateway_attachment(self):
- clients.neutronclient.Client.add_gateway_router(
- 'bbbb', {'network_id': 'eeee'}).AndReturn(None)
-
- def mock_create_floatingip(self):
- clients.neutronclient.Client.list_networks(
- **{'router:external': True}).AndReturn({'networks': [{
- 'status': 'ACTIVE',
- 'subnets': [],
- 'name': 'nova',
- 'router:external': True,
- 'tenant_id': 'c1210485b2424d48804aad5d39c61b8f',
- 'admin_state_up': True,
- 'shared': True,
- 'id': 'eeee'
- }]})
-
- clients.neutronclient.Client.create_floatingip({
- 'floatingip': {'floating_network_id': u'eeee'}
- }).AndReturn({'floatingip': {
- "status": "ACTIVE",
- "id": "fc68ea2c-b60b-4b4f-bd82-94ec81110766",
- "floating_ip_address": "192.168.9.3"
- }})
-
- def mock_show_floatingip(self, refid):
- clients.neutronclient.Client.show_floatingip(
- refid,
- ).AndReturn({'floatingip': {
- 'router_id': None,
- 'tenant_id': 'e936e6cd3e0b48dcb9ff853a8f253257',
- 'floating_network_id': 'eeee',
- 'fixed_ip_address': None,
- 'floating_ip_address': '172.24.4.227',
- 'port_id': None,
- 'id': 'ffff'
- }})
-
- def mock_delete_floatingip(self):
- id = 'fc68ea2c-b60b-4b4f-bd82-94ec81110766'
- clients.neutronclient.Client.delete_floatingip(id).AndReturn(None)
-
- def mock_list_ports(self):
- clients.neutronclient.Client.list_ports(id='the_nic').AndReturn(
- {"ports": [{
- "status": "DOWN",
- "binding:host_id": "null",
- "name": "wp-NIC-yu7fc7l4g5p6",
- "admin_state_up": True,
- "network_id": "22c26451-cf27-4d48-9031-51f5e397b84e",
- "tenant_id": "ecf538ec1729478fa1f97f1bf4fdcf7b",
- "binding:vif_type": "ovs",
- "device_owner": "",
- "binding:capabilities": {"port_filter": True},
- "mac_address": "fa:16:3e:62:2d:4f",
- "fixed_ips": [{"subnet_id": "mysubnetid-70ec",
- "ip_address": "192.168.9.2"}],
- "id": "a000228d-b40b-4124-8394-a4082ae1b76b",
- "security_groups": ["5c6f529d-3186-4c36-84c0-af28b8daac7b"],
- "device_id": ""
- }]})
-
- def mock_list_subnets(self):
- clients.neutronclient.Client.list_subnets(
- id='mysubnetid-70ec').AndReturn(
- {'subnets': [{
- u'name': u'wp-Subnet-pyjm7bvoi4xw',
- u'enable_dhcp': True,
- u'network_id': u'aaaa-netid',
- u'tenant_id': u'ecf538ec1729478fa1f97f1bf4fdcf7b',
- u'dns_nameservers': [],
- u'allocation_pools': [{u'start': u'192.168.9.2',
- u'end': u'192.168.9.254'}],
- u'host_routes': [],
- u'ip_version': 4,
- u'gateway_ip': u'192.168.9.1',
- u'cidr': u'192.168.9.0/24',
- u'id': u'2c339ccd-734a-4acc-9f64-6f0dfe427e2d'
- }]})
-
- def mock_router_for_vpc(self):
- vpc_name = utils.PhysName('test_stack', 'the_vpc')
- clients.neutronclient.Client.list_routers(name=vpc_name).AndReturn({
- "routers": [{
- "status": "ACTIVE",
- "external_gateway_info": {
- "network_id": "zzzz",
- "enable_snat": True},
- "name": vpc_name,
- "admin_state_up": True,
- "tenant_id": "3e21026f2dc94372b105808c0e721661",
- "routes": [],
- "id": "bbbb"
- }]
- })
-
- def mock_keystone(self):
- clients.OpenStackClients.keystone().AndReturn(
- fakec.FakeKeystoneClient())
-
- def test_neutron_eip(self):
- eip.ElasticIp.nova().MultipleTimes().AndReturn(self.fc)
- self.fc.servers.get('WebServer').AndReturn(self.fc.servers.list()[0])
- self.fc.servers.get('WebServer')
-
- self.m.ReplayAll()
-
- t = template_format.parse(eip_template)
- stack = utils.parse_stack(t)
-
- rsrc = self.create_eip(t, stack, 'IPAddress')
-
- try:
- self.assertEqual('11.0.0.1', rsrc.FnGetRefId())
- rsrc.refid = None
- self.assertEqual('11.0.0.1', rsrc.FnGetRefId())
-
- self.assertEqual('1', rsrc.FnGetAtt('AllocationId'))
-
- self.assertRaises(resource.UpdateReplace,
- rsrc.handle_update, {}, {}, {})
-
- self.assertRaises(exception.InvalidTemplateAttribute,
- rsrc.FnGetAtt, 'Foo')
-
- finally:
- scheduler.TaskRunner(rsrc.destroy)()
-
- self.m.VerifyAll()
-
- def test_association_allocationid(self):
- self.mock_keystone()
- self.mock_create_gateway_attachment()
- self.mock_show_network()
- self.mock_router_for_vpc()
-
- self.mock_create_floatingip()
- self.mock_list_ports()
- self.mock_list_subnets()
-
- self.mock_show_floatingip('fc68ea2c-b60b-4b4f-bd82-94ec81110766')
- self.mock_update_floatingip()
-
- self.mock_update_floatingip(port=None)
- self.mock_delete_floatingip()
-
- self.m.ReplayAll()
-
- t = template_format.parse(eip_template_ipassoc2)
- stack = utils.parse_stack(t)
-
- rsrc = self.create_eip(t, stack, 'the_eip')
- association = self.create_association(t, stack, 'IPAssoc')
-
- scheduler.TaskRunner(association.delete)()
- scheduler.TaskRunner(rsrc.delete)()
-
- self.m.VerifyAll()
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import heat.engine.api as api
-from heat.engine import parser
-from heat.engine import resource
-from heat.openstack.common import uuidutils
-from heat.rpc import api as rpc_api
-from heat.tests.common import HeatTestCase
-from heat.tests import generic_resource as generic_rsrc
-from heat.tests import utils
-
-
-class EngineApiTest(HeatTestCase):
- def test_timeout_extract(self):
- p = {'timeout_mins': '5'}
- args = api.extract_args(p)
- self.assertEqual(args['timeout_mins'], 5)
-
- def test_timeout_extract_zero(self):
- p = {'timeout_mins': '0'}
- args = api.extract_args(p)
- self.assertTrue('timeout_mins' not in args)
-
- def test_timeout_extract_garbage(self):
- p = {'timeout_mins': 'wibble'}
- args = api.extract_args(p)
- self.assertTrue('timeout_mins' not in args)
-
- def test_timeout_extract_none(self):
- p = {'timeout_mins': None}
- args = api.extract_args(p)
- self.assertTrue('timeout_mins' not in args)
-
- def test_timeout_extract_not_present(self):
- args = api.extract_args({})
- self.assertTrue('timeout_mins' not in args)
-
- def test_disable_rollback_extract_true(self):
- args = api.extract_args({'disable_rollback': True})
- self.assertTrue('disable_rollback' in args)
- self.assertTrue(args.get('disable_rollback'))
-
- args = api.extract_args({'disable_rollback': 'True'})
- self.assertTrue('disable_rollback' in args)
- self.assertTrue(args.get('disable_rollback'))
-
- args = api.extract_args({'disable_rollback': 'true'})
- self.assertTrue('disable_rollback' in args)
- self.assertTrue(args.get('disable_rollback'))
-
- def test_disable_rollback_extract_false(self):
- args = api.extract_args({'disable_rollback': False})
- self.assertTrue('disable_rollback' in args)
- self.assertFalse(args.get('disable_rollback'))
-
- args = api.extract_args({'disable_rollback': 'False'})
- self.assertTrue('disable_rollback' in args)
- self.assertFalse(args.get('disable_rollback'))
-
- args = api.extract_args({'disable_rollback': 'false'})
- self.assertTrue('disable_rollback' in args)
- self.assertFalse(args.get('disable_rollback'))
-
- def test_disable_rollback_extract_bad(self):
- self.assertRaises(ValueError, api.extract_args,
- {'disable_rollback': 'bad'})
-
-
-class FormatTest(HeatTestCase):
-
- def setUp(self):
- super(FormatTest, self).setUp()
- utils.setup_dummy_db()
-
- template = parser.Template({
- 'Resources': {
- 'generic1': {'Type': 'GenericResourceType'},
- 'generic2': {
- 'Type': 'GenericResourceType',
- 'DependsOn': 'generic1'}
- }
- })
- resource._register_class('GenericResourceType',
- generic_rsrc.GenericResource)
- self.stack = parser.Stack(utils.dummy_context(), 'test_stack',
- template, stack_id=uuidutils.generate_uuid())
-
- def test_format_stack_resource(self):
- res = self.stack['generic1']
-
- resource_keys = set((
- rpc_api.RES_UPDATED_TIME,
- rpc_api.RES_NAME,
- rpc_api.RES_PHYSICAL_ID,
- rpc_api.RES_METADATA,
- rpc_api.RES_ACTION,
- rpc_api.RES_STATUS,
- rpc_api.RES_STATUS_DATA,
- rpc_api.RES_TYPE,
- rpc_api.RES_ID,
- rpc_api.RES_STACK_ID,
- rpc_api.RES_STACK_NAME,
- rpc_api.RES_REQUIRED_BY))
-
- resource_details_keys = resource_keys.union(set(
- (rpc_api.RES_DESCRIPTION, rpc_api.RES_METADATA)))
-
- formatted = api.format_stack_resource(res, True)
- self.assertEqual(resource_details_keys, set(formatted.keys()))
-
- formatted = api.format_stack_resource(res, False)
- self.assertEqual(resource_keys, set(formatted.keys()))
-
- def test_format_stack_resource_required_by(self):
- res1 = api.format_stack_resource(self.stack['generic1'])
- res2 = api.format_stack_resource(self.stack['generic2'])
- self.assertEqual(res1['required_by'], ['generic2'])
- self.assertEqual(res2['required_by'], [])
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-
-import functools
-import json
-import sys
-
-import mox
-from testtools import matchers
-
-from oslo.config import cfg
-
-from heat.engine import environment
-from heat.common import heat_keystoneclient as hkc
-from heat.common import exception
-from heat.tests.v1_1 import fakes
-import heat.rpc.api as engine_api
-import heat.db.api as db_api
-from heat.common import identifier
-from heat.common import template_format
-from heat.engine import parser
-from heat.engine.resource import _register_class
-from heat.engine import service
-from heat.engine.properties import Properties
-from heat.engine import resource as res
-from heat.engine.resources import instance as instances
-from heat.engine.resources import nova_utils
-from heat.engine import resource as rsrs
-from heat.engine import watchrule
-from heat.openstack.common import threadgroup
-from heat.tests.common import HeatTestCase
-from heat.tests import generic_resource as generic_rsrc
-from heat.tests import utils
-
-
-wp_template = '''
-{
- "AWSTemplateFormatVersion" : "2010-09-09",
- "Description" : "WordPress",
- "Parameters" : {
- "KeyName" : {
- "Description" : "KeyName",
- "Type" : "String",
- "Default" : "test"
- }
- },
- "Resources" : {
- "WebServer": {
- "Type": "AWS::EC2::Instance",
- "Properties": {
- "ImageId" : "F17-x86_64-gold",
- "InstanceType" : "m1.large",
- "KeyName" : "test",
- "UserData" : "wordpress"
- }
- }
- }
-}
-'''
-
-alarm_template = '''
-{
- "AWSTemplateFormatVersion" : "2010-09-09",
- "Description" : "alarming",
- "Resources" : {
- "service_alarm": {
- "Type": "AWS::CloudWatch::Alarm",
- "Properties": {
- "EvaluationPeriods": "1",
- "AlarmActions": [],
- "AlarmDescription": "do the thing",
- "Namespace": "dev/null",
- "Period": "300",
- "ComparisonOperator": "GreaterThanThreshold",
- "Statistic": "SampleCount",
- "Threshold": "2",
- "MetricName": "ServiceFailure"
- }
- }
- }
-}
-'''
-
-policy_template = '''
-{
- "AWSTemplateFormatVersion" : "2010-09-09",
- "Description" : "alarming",
- "Resources" : {
- "WebServerScaleDownPolicy" : {
- "Type" : "AWS::AutoScaling::ScalingPolicy",
- "Properties" : {
- "AdjustmentType" : "ChangeInCapacity",
- "AutoScalingGroupName" : "",
- "Cooldown" : "60",
- "ScalingAdjustment" : "-1"
- }
- }
- }
-}
-'''
-
-
-def get_wordpress_stack(stack_name, ctx):
- t = template_format.parse(wp_template)
- template = parser.Template(t)
- stack = parser.Stack(ctx, stack_name, template,
- environment.Environment({'KeyName': 'test'}))
- return stack
-
-
-def get_stack(stack_name, ctx, template):
- t = template_format.parse(template)
- template = parser.Template(t)
- stack = parser.Stack(ctx, stack_name, template)
- return stack
-
-
-def setup_mocks(mocks, stack):
- fc = fakes.FakeClient()
- mocks.StubOutWithMock(instances.Instance, 'nova')
- instances.Instance.nova().MultipleTimes().AndReturn(fc)
-
- instance = stack.resources['WebServer']
- user_data = instance.properties['UserData']
- server_userdata = nova_utils.build_userdata(instance, user_data)
- instance.mime_string = server_userdata
- mocks.StubOutWithMock(fc.servers, 'create')
- fc.servers.create(image=744, flavor=3, key_name='test',
- name=utils.PhysName(stack.name, 'WebServer'),
- security_groups=None,
- userdata=server_userdata, scheduler_hints=None,
- meta=None, nics=None,
- availability_zone=None).AndReturn(
- fc.servers.list()[-1])
- return fc
-
-
-def setup_stack(stack_name, ctx, create_res=True):
- stack = get_wordpress_stack(stack_name, ctx)
- stack.store()
- if create_res:
- m = mox.Mox()
- setup_mocks(m, stack)
- m.ReplayAll()
- stack.create()
- m.UnsetStubs()
- return stack
-
-
-def clean_up_stack(stack, delete_res=True):
- if delete_res:
- m = mox.Mox()
- fc = fakes.FakeClient()
- m.StubOutWithMock(instances.Instance, 'nova')
- instances.Instance.nova().MultipleTimes().AndReturn(fc)
- m.StubOutWithMock(fc.client, 'get_servers_9999')
- get = fc.client.get_servers_9999
- get().AndRaise(service.clients.novaclient.exceptions.NotFound(404))
- m.ReplayAll()
- stack.delete()
- if delete_res:
- m.UnsetStubs()
-
-
-def stack_context(stack_name, create_res=True):
- """
- Decorator which creates a stack by using the test case's context and
- deletes it afterwards to ensure tests clean up their stacks regardless
- of test success/failure
- """
- def stack_delete(test_fn):
- @functools.wraps(test_fn)
- def wrapped_test(test_case, *args, **kwargs):
- def create_stack():
- ctx = getattr(test_case, 'ctx', None)
- if ctx is not None:
- stack = setup_stack(stack_name, ctx, create_res)
- setattr(test_case, 'stack', stack)
-
- def delete_stack():
- stack = getattr(test_case, 'stack', None)
- if stack is not None and stack.id is not None:
- clean_up_stack(stack, delete_res=create_res)
-
- create_stack()
- try:
- test_fn(test_case, *args, **kwargs)
- except:
- exc_class, exc_val, exc_tb = sys.exc_info()
- try:
- delete_stack()
- finally:
- raise exc_class, exc_val, exc_tb
- else:
- delete_stack()
-
- return wrapped_test
- return stack_delete
-
-
-class DummyThreadGroup(object):
- def __init__(self):
- self.threads = []
-
- def add_timer(self, interval, callback, initial_delay=None,
- *args, **kwargs):
- self.threads.append(callback)
-
- def add_thread(self, callback, *args, **kwargs):
- self.threads.append(callback)
-
- def stop(self):
- pass
-
- def wait(self):
- pass
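`DummyThreadGroup` above is a synchronous test double for the eventlet thread group: it records callbacks instead of spawning green threads, so tests stay deterministic. A sketch of how such a double can be driven, with a hypothetical `run_all` helper that is not part of the original class:

```python
class RecordingThreadGroup(object):
    """Test double: collect callbacks instead of spawning threads."""
    def __init__(self):
        self.threads = []

    def add_thread(self, callback, *args, **kwargs):
        # Record the work item rather than running it concurrently.
        self.threads.append((callback, args, kwargs))

    def run_all(self):
        # Hypothetical helper: execute the recorded callbacks
        # synchronously, in submission order.
        for callback, args, kwargs in self.threads:
            callback(*args, **kwargs)


results = []
tg = RecordingThreadGroup()
tg.add_thread(results.append, 1)
tg.add_thread(results.append, 2)
tg.run_all()
assert results == [1, 2]
```

The tests in this file never need `run_all`; stubbing `threadgroup.ThreadGroup()` to return the dummy is enough to keep `create_stack`/`update_stack` from launching real threads.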
-
-
-class StackCreateTest(HeatTestCase):
- def setUp(self):
- super(StackCreateTest, self).setUp()
- utils.setup_dummy_db()
-
- def test_wordpress_single_instance_stack_create(self):
- stack = get_wordpress_stack('test_stack', utils.dummy_context())
- setup_mocks(self.m, stack)
- self.m.ReplayAll()
- stack.store()
- stack.create()
-
- self.assertNotEqual(stack.resources['WebServer'], None)
- self.assertTrue(stack.resources['WebServer'].resource_id > 0)
- self.assertNotEqual(stack.resources['WebServer'].ipaddress, '0.0.0.0')
-
- def test_wordpress_single_instance_stack_delete(self):
- ctx = utils.dummy_context()
- stack = get_wordpress_stack('test_stack', ctx)
- fc = setup_mocks(self.m, stack)
- self.m.ReplayAll()
- stack_id = stack.store()
- stack.create()
-
- db_s = db_api.stack_get(ctx, stack_id)
- self.assertNotEqual(db_s, None)
-
- self.assertNotEqual(stack.resources['WebServer'], None)
- self.assertTrue(stack.resources['WebServer'].resource_id > 0)
-
- self.m.StubOutWithMock(fc.client, 'get_servers_9999')
- get = fc.client.get_servers_9999
- get().AndRaise(service.clients.novaclient.exceptions.NotFound(404))
- mox.Replay(get)
- stack.delete()
-
- rsrc = stack.resources['WebServer']
- self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state)
- self.assertEqual((stack.DELETE, stack.COMPLETE), stack.state)
- self.assertEqual(None, db_api.stack_get(ctx, stack_id))
- self.assertEqual('DELETE', db_s.action)
- self.assertEqual('COMPLETE', db_s.status)
-
-
-class StackServiceCreateUpdateDeleteTest(HeatTestCase):
-
- def setUp(self):
- super(StackServiceCreateUpdateDeleteTest, self).setUp()
- utils.setup_dummy_db()
- self.ctx = utils.dummy_context()
-
- self.man = service.EngineService('a-host', 'a-topic')
-
- def test_stack_create(self):
- stack_name = 'service_create_test_stack'
- params = {'foo': 'bar'}
- template = '{ "Template": "data" }'
-
- stack = get_wordpress_stack(stack_name, self.ctx)
-
- self.m.StubOutWithMock(parser, 'Template')
- self.m.StubOutWithMock(environment, 'Environment')
- self.m.StubOutWithMock(parser, 'Stack')
-
- parser.Template(template, files=None).AndReturn(stack.t)
- environment.Environment(params).AndReturn(stack.env)
- parser.Stack(self.ctx, stack.name,
- stack.t, stack.env).AndReturn(stack)
-
- self.m.StubOutWithMock(stack, 'validate')
- stack.validate().AndReturn(None)
-
- self.m.StubOutClassWithMocks(hkc.kc, "Client")
- mock_ks_client = hkc.kc.Client(
- auth_url=mox.IgnoreArg(),
- tenant_name='test_tenant',
- token='abcd1234')
- mock_ks_client.authenticate().AndReturn(True)
-
- self.m.StubOutWithMock(hkc.KeystoneClient, 'create_trust_context')
- hkc.KeystoneClient.create_trust_context().AndReturn(None)
-
- self.m.StubOutWithMock(threadgroup, 'ThreadGroup')
- threadgroup.ThreadGroup().AndReturn(DummyThreadGroup())
-
- self.m.ReplayAll()
-
- result = self.man.create_stack(self.ctx, stack_name,
- template, params, None, {})
- self.assertEqual(stack.identifier(), result)
- self.assertTrue(isinstance(result, dict))
- self.assertTrue(result['stack_id'])
- self.m.VerifyAll()
-
- def test_stack_create_verify_err(self):
- stack_name = 'service_create_verify_err_test_stack'
- params = {'foo': 'bar'}
- template = '{ "Template": "data" }'
-
- stack = get_wordpress_stack(stack_name, self.ctx)
-
- self.m.StubOutWithMock(parser, 'Template')
- self.m.StubOutWithMock(environment, 'Environment')
- self.m.StubOutWithMock(parser, 'Stack')
-
- parser.Template(template, files=None).AndReturn(stack.t)
- environment.Environment(params).AndReturn(stack.env)
- parser.Stack(self.ctx, stack.name,
- stack.t,
- stack.env).AndReturn(stack)
-
- self.m.StubOutWithMock(stack, 'validate')
- stack.validate().AndRaise(exception.StackValidationFailed(
- message='fubar'))
-
- self.m.ReplayAll()
-
- self.assertRaises(
- exception.StackValidationFailed,
- self.man.create_stack,
- self.ctx, stack_name,
- template, params, None, {})
- self.m.VerifyAll()
-
- def test_stack_create_invalid_stack_name(self):
- stack_name = 'service_create/test_stack'
- stack = get_wordpress_stack('test_stack', self.ctx)
-
- self.assertRaises(ValueError,
- self.man.create_stack,
- self.ctx, stack_name, stack.t, {}, None, {})
-
- def test_stack_create_invalid_resource_name(self):
- stack_name = 'service_create_test_stack_invalid_res'
- stack = get_wordpress_stack(stack_name, self.ctx)
- tmpl = dict(stack.t)
- tmpl['Resources']['Web/Server'] = tmpl['Resources']['WebServer']
- del tmpl['Resources']['WebServer']
-
- self.assertRaises(ValueError,
- self.man.create_stack,
- self.ctx, stack_name,
- stack.t, {}, None, {})
-
- def test_stack_create_no_credentials(self):
- stack_name = 'test_stack_create_no_credentials'
- params = {'foo': 'bar'}
- template = '{ "Template": "data" }'
-
- stack = get_wordpress_stack(stack_name, self.ctx)
- # force check for credentials on create
- stack.resources['WebServer'].requires_deferred_auth = True
-
- self.m.StubOutWithMock(parser, 'Template')
- self.m.StubOutWithMock(environment, 'Environment')
- self.m.StubOutWithMock(parser, 'Stack')
-
- ctx_no_pwd = utils.dummy_context(password=None)
- ctx_no_user = utils.dummy_context(user=None)
-
- parser.Template(template, files=None).AndReturn(stack.t)
- environment.Environment(params).AndReturn(stack.env)
- parser.Stack(ctx_no_pwd, stack.name,
- stack.t, stack.env).AndReturn(stack)
-
- parser.Template(template, files=None).AndReturn(stack.t)
- environment.Environment(params).AndReturn(stack.env)
- parser.Stack(ctx_no_user, stack.name,
- stack.t, stack.env).AndReturn(stack)
-
- self.m.ReplayAll()
-
- ex = self.assertRaises(exception.MissingCredentialError,
- self.man.create_stack,
- ctx_no_pwd, stack_name,
- template, params, None, {})
- self.assertEqual(
- 'Missing required credential: X-Auth-Key', ex.message)
-
- ex = self.assertRaises(exception.MissingCredentialError,
- self.man.create_stack,
- ctx_no_user, stack_name,
- template, params, None, {})
- self.assertEqual(
- 'Missing required credential: X-Auth-User', ex.message)
-
- def test_stack_create_total_resources_equals_max(self):
- stack_name = 'service_create_stack_total_resources_equals_max'
- params = {}
- res._register_class('GenericResourceType',
- generic_rsrc.GenericResource)
- tpl = {'Resources': {
- 'A': {'Type': 'GenericResourceType'},
- 'B': {'Type': 'GenericResourceType'},
- 'C': {'Type': 'GenericResourceType'}}}
-
- template = parser.Template(tpl)
- stack = parser.Stack(self.ctx, stack_name, template,
- environment.Environment({}))
-
- self.m.StubOutWithMock(parser, 'Template')
- self.m.StubOutWithMock(environment, 'Environment')
- self.m.StubOutWithMock(parser, 'Stack')
-
- parser.Template(template, files=None).AndReturn(stack.t)
- environment.Environment(params).AndReturn(stack.env)
- parser.Stack(self.ctx, stack.name,
- stack.t,
- stack.env).AndReturn(stack)
-
- self.m.StubOutClassWithMocks(hkc.kc, "Client")
- mock_ks_client = hkc.kc.Client(
- auth_url=mox.IgnoreArg(),
- tenant_name='test_tenant',
- token='abcd1234')
- mock_ks_client.authenticate().AndReturn(True)
-
- self.m.StubOutWithMock(hkc.KeystoneClient, 'create_trust_context')
- hkc.KeystoneClient.create_trust_context().AndReturn(None)
-
- self.m.ReplayAll()
-
- cfg.CONF.set_override('max_resources_per_stack', 3)
-
- result = self.man.create_stack(self.ctx, stack_name, template, params,
- None, {})
- self.m.VerifyAll()
- self.assertEqual(stack.identifier(), result)
- self.assertEqual(3, stack.total_resources())
-
- def test_stack_create_total_resources_exceeds_max(self):
- stack_name = 'service_create_stack_total_resources_exceeds_max'
- params = {}
- res._register_class('GenericResourceType',
- generic_rsrc.GenericResource)
- tpl = {'Resources': {
- 'A': {'Type': 'GenericResourceType'},
- 'B': {'Type': 'GenericResourceType'},
- 'C': {'Type': 'GenericResourceType'}}}
- template = parser.Template(tpl)
- cfg.CONF.set_override('max_resources_per_stack', 2)
- self.assertRaises(exception.StackResourceLimitExceeded,
- self.man.create_stack, self.ctx, stack_name,
- template, params, None, {})
-
- def test_stack_validate(self):
- stack_name = 'service_create_test_validate'
- stack = get_wordpress_stack(stack_name, self.ctx)
- setup_mocks(self.m, stack)
-
- template = dict(stack.t)
- template['Parameters']['KeyName']['Default'] = 'test'
- resource = stack['WebServer']
-
- self.m.ReplayAll()
-
- resource.properties = Properties(
- resource.properties_schema,
- {
- 'ImageId': 'CentOS 5.2',
- 'KeyName': 'test',
- 'InstanceType': 'm1.large'
- })
- stack.validate()
-
- resource.properties = Properties(
- resource.properties_schema,
- {
- 'KeyName': 'test',
- 'InstanceType': 'm1.large'
- })
- self.assertRaises(exception.StackValidationFailed, stack.validate)
-
- def test_stack_delete(self):
- stack_name = 'service_delete_test_stack'
- stack = get_wordpress_stack(stack_name, self.ctx)
- sid = stack.store()
-
- s = db_api.stack_get(self.ctx, sid)
- self.m.StubOutWithMock(parser.Stack, 'load')
-
- parser.Stack.load(self.ctx, stack=s).AndReturn(stack)
-
- self.m.StubOutClassWithMocks(hkc.kc, "Client")
- mock_ks_client = hkc.kc.Client(
- auth_url=mox.IgnoreArg(),
- tenant_name='test_tenant',
- token='abcd1234')
- mock_ks_client.authenticate().AndReturn(True)
-
- self.m.StubOutWithMock(hkc.KeystoneClient, 'delete_trust_context')
- hkc.KeystoneClient.delete_trust_context().AndReturn(None)
-
- self.man.tg = DummyThreadGroup()
-
- self.m.ReplayAll()
-
- self.assertEqual(None,
- self.man.delete_stack(self.ctx, stack.identifier()))
- self.m.VerifyAll()
-
- def test_stack_delete_nonexist(self):
- stack_name = 'service_delete_nonexist_test_stack'
- stack = get_wordpress_stack(stack_name, self.ctx)
-
- self.m.ReplayAll()
-
- self.assertRaises(exception.StackNotFound,
- self.man.delete_stack,
- self.ctx, stack.identifier())
- self.m.VerifyAll()
-
- def test_stack_update(self):
- stack_name = 'service_update_test_stack'
- params = {'foo': 'bar'}
- template = '{ "Template": "data" }'
-
- old_stack = get_wordpress_stack(stack_name, self.ctx)
- sid = old_stack.store()
- s = db_api.stack_get(self.ctx, sid)
-
- stack = get_wordpress_stack(stack_name, self.ctx)
-
- self.m.StubOutWithMock(parser, 'Stack')
- self.m.StubOutWithMock(parser.Stack, 'load')
- parser.Stack.load(self.ctx, stack=s).AndReturn(old_stack)
-
- self.m.StubOutWithMock(parser, 'Template')
- self.m.StubOutWithMock(environment, 'Environment')
-
- parser.Template(template, files=None).AndReturn(stack.t)
- environment.Environment(params).AndReturn(stack.env)
- parser.Stack(self.ctx, stack.name,
- stack.t, stack.env).AndReturn(stack)
-
- self.m.StubOutWithMock(stack, 'validate')
- stack.validate().AndReturn(None)
-
- self.m.StubOutWithMock(threadgroup, 'ThreadGroup')
- threadgroup.ThreadGroup().AndReturn(DummyThreadGroup())
-
- self.m.ReplayAll()
-
- result = self.man.update_stack(self.ctx, old_stack.identifier(),
- template, params, None, {})
- self.assertEqual(old_stack.identifier(), result)
- self.assertTrue(isinstance(result, dict))
- self.assertTrue(result['stack_id'])
- self.m.VerifyAll()
-
- def test_stack_update_equals(self):
- stack_name = 'test_stack_update_equals_resource_limit'
- params = {}
- res._register_class('GenericResourceType',
- generic_rsrc.GenericResource)
- tpl = {'Resources': {
- 'A': {'Type': 'GenericResourceType'},
- 'B': {'Type': 'GenericResourceType'},
- 'C': {'Type': 'GenericResourceType'}}}
-
- template = parser.Template(tpl)
-
- old_stack = parser.Stack(self.ctx, stack_name, template)
- sid = old_stack.store()
- s = db_api.stack_get(self.ctx, sid)
-
- stack = parser.Stack(self.ctx, stack_name, template)
-
- self.m.StubOutWithMock(parser, 'Stack')
- self.m.StubOutWithMock(parser.Stack, 'load')
- parser.Stack.load(self.ctx, stack=s).AndReturn(old_stack)
-
- self.m.StubOutWithMock(parser, 'Template')
- self.m.StubOutWithMock(environment, 'Environment')
-
- parser.Template(template, files=None).AndReturn(stack.t)
- environment.Environment(params).AndReturn(stack.env)
- parser.Stack(self.ctx, stack.name,
- stack.t, stack.env).AndReturn(stack)
-
- self.m.StubOutWithMock(stack, 'validate')
- stack.validate().AndReturn(None)
-
- self.m.StubOutWithMock(threadgroup, 'ThreadGroup')
- threadgroup.ThreadGroup().AndReturn(DummyThreadGroup())
-
- self.m.ReplayAll()
-
- cfg.CONF.set_override('max_resources_per_stack', 3)
-
- result = self.man.update_stack(self.ctx, old_stack.identifier(),
- template, params, None, {})
- self.assertEqual(old_stack.identifier(), result)
- self.assertTrue(isinstance(result, dict))
- self.assertTrue(result['stack_id'])
- self.assertEqual(3, old_stack.root_stack.total_resources())
- self.m.VerifyAll()
-
- def test_stack_update_exceeds_resource_limit(self):
- stack_name = 'test_stack_update_exceeds_resource_limit'
- params = {}
- res._register_class('GenericResourceType',
- generic_rsrc.GenericResource)
- tpl = {'Resources': {
- 'A': {'Type': 'GenericResourceType'},
- 'B': {'Type': 'GenericResourceType'},
- 'C': {'Type': 'GenericResourceType'}}}
-
- template = parser.Template(tpl)
-
- old_stack = parser.Stack(self.ctx, stack_name, template)
- sid = old_stack.store()
- s = db_api.stack_get(self.ctx, sid)
-
- cfg.CONF.set_override('max_resources_per_stack', 2)
-
- self.assertRaises(exception.StackResourceLimitExceeded,
- self.man.update_stack, self.ctx,
- old_stack.identifier(), template, params, None, {})
-
- def test_stack_update_verify_err(self):
- stack_name = 'service_update_verify_err_test_stack'
- params = {'foo': 'bar'}
- template = '{ "Template": "data" }'
-
- old_stack = get_wordpress_stack(stack_name, self.ctx)
- sid = old_stack.store()
- s = db_api.stack_get(self.ctx, sid)
-
- stack = get_wordpress_stack(stack_name, self.ctx)
-
- self.m.StubOutWithMock(parser, 'Stack')
- self.m.StubOutWithMock(parser.Stack, 'load')
- parser.Stack.load(self.ctx, stack=s).AndReturn(old_stack)
-
- self.m.StubOutWithMock(parser, 'Template')
- self.m.StubOutWithMock(environment, 'Environment')
-
- parser.Template(template, files=None).AndReturn(stack.t)
- environment.Environment(params).AndReturn(stack.env)
- parser.Stack(self.ctx, stack.name,
- stack.t, stack.env).AndReturn(stack)
-
- self.m.StubOutWithMock(stack, 'validate')
- stack.validate().AndRaise(exception.StackValidationFailed(
- message='fubar'))
-
- self.m.ReplayAll()
-
- self.assertRaises(
- exception.StackValidationFailed,
- self.man.update_stack,
- self.ctx, old_stack.identifier(),
- template, params, None, {})
- self.m.VerifyAll()
-
- def test_stack_update_nonexist(self):
- stack_name = 'service_update_nonexist_test_stack'
- params = {'foo': 'bar'}
- template = '{ "Template": "data" }'
- stack = get_wordpress_stack(stack_name, self.ctx)
-
- self.m.ReplayAll()
-
- self.assertRaises(exception.StackNotFound,
- self.man.update_stack,
- self.ctx, stack.identifier(), template, params,
- None, {})
- self.m.VerifyAll()
-
- def test_stack_update_no_credentials(self):
- stack_name = 'test_stack_update_no_credentials'
- params = {'foo': 'bar'}
- template = '{ "Template": "data" }'
-
- old_stack = get_wordpress_stack(stack_name, self.ctx)
- # force check for credentials on update
- old_stack.resources['WebServer'].requires_deferred_auth = True
-
- sid = old_stack.store()
- s = db_api.stack_get(self.ctx, sid)
-
- self.ctx = utils.dummy_context(password=None)
-
- self.m.StubOutWithMock(parser, 'Stack')
- self.m.StubOutWithMock(parser.Stack, 'load')
- self.m.StubOutWithMock(parser, 'Template')
- self.m.StubOutWithMock(environment, 'Environment')
-
- parser.Stack.load(self.ctx, stack=s).AndReturn(old_stack)
-
- parser.Template(template, files=None).AndReturn(old_stack.t)
- environment.Environment(params).AndReturn(old_stack.env)
- parser.Stack(self.ctx, old_stack.name,
- old_stack.t, old_stack.env).AndReturn(old_stack)
-
- self.m.ReplayAll()
-
- ex = self.assertRaises(exception.MissingCredentialError,
- self.man.update_stack, self.ctx,
- old_stack.identifier(),
- template, params, None, {})
-
- self.assertEqual(
- 'Missing required credential: X-Auth-Key', ex.message)
-
- self.m.VerifyAll()
-
- def test_validate_deferred_auth_context_trusts(self):
- stack = get_wordpress_stack('test_deferred_auth', self.ctx)
- stack.resources['WebServer'].requires_deferred_auth = True
- ctx = utils.dummy_context(user=None, password=None)
- cfg.CONF.set_default('deferred_auth_method', 'trusts')
-
- # using trusts, no username or password required
- self.man._validate_deferred_auth_context(ctx, stack)
-
- def test_validate_deferred_auth_context_not_required(self):
- stack = get_wordpress_stack('test_deferred_auth', self.ctx)
- stack.resources['WebServer'].requires_deferred_auth = False
- ctx = utils.dummy_context(user=None, password=None)
- cfg.CONF.set_default('deferred_auth_method', 'password')
-
- # stack performs no deferred operations, so no username or
- # password required
- self.man._validate_deferred_auth_context(ctx, stack)
-
- def test_validate_deferred_auth_context_missing_credentials(self):
- stack = get_wordpress_stack('test_deferred_auth', self.ctx)
- stack.resources['WebServer'].requires_deferred_auth = True
- cfg.CONF.set_default('deferred_auth_method', 'password')
-
- # missing username
- ctx = utils.dummy_context(user=None)
- ex = self.assertRaises(exception.MissingCredentialError,
- self.man._validate_deferred_auth_context,
- ctx, stack)
- self.assertEqual(
- 'Missing required credential: X-Auth-User', ex.message)
-
- # missing password
- ctx = utils.dummy_context(password=None)
- ex = self.assertRaises(exception.MissingCredentialError,
- self.man._validate_deferred_auth_context,
- ctx, stack)
- self.assertEqual(
- 'Missing required credential: X-Auth-Key', ex.message)
-
-
-class StackServiceSuspendResumeTest(HeatTestCase):
-
- def setUp(self):
- super(StackServiceSuspendResumeTest, self).setUp()
- utils.setup_dummy_db()
- self.ctx = utils.dummy_context()
-
- self.man = service.EngineService('a-host', 'a-topic')
-
- def test_stack_suspend(self):
- stack_name = 'service_suspend_test_stack'
- stack = get_wordpress_stack(stack_name, self.ctx)
- sid = stack.store()
- s = db_api.stack_get(self.ctx, sid)
-
- self.m.StubOutWithMock(parser.Stack, 'load')
- parser.Stack.load(self.ctx, stack=s).AndReturn(stack)
-
- self.m.StubOutWithMock(service.EngineService, '_start_in_thread')
- service.EngineService._start_in_thread(sid,
- mox.IgnoreArg(),
- stack).AndReturn(None)
- self.m.ReplayAll()
-
- result = self.man.stack_suspend(self.ctx, stack.identifier())
- self.assertEqual(None, result)
-
- self.m.VerifyAll()
-
- @stack_context('service_resume_test_stack', False)
- def test_stack_resume(self):
- self.m.StubOutWithMock(parser.Stack, 'load')
- parser.Stack.load(self.ctx,
- stack=mox.IgnoreArg()).AndReturn(self.stack)
-
- self.m.StubOutWithMock(service.EngineService, '_start_in_thread')
- service.EngineService._start_in_thread(self.stack.id,
- mox.IgnoreArg(),
- self.stack).AndReturn(None)
- self.m.ReplayAll()
-
- result = self.man.stack_resume(self.ctx, self.stack.identifier())
- self.assertEqual(None, result)
- self.m.VerifyAll()
-
- def test_stack_suspend_nonexist(self):
- stack_name = 'service_suspend_nonexist_test_stack'
- stack = get_wordpress_stack(stack_name, self.ctx)
-
- self.m.ReplayAll()
-
- self.assertRaises(exception.StackNotFound,
- self.man.stack_suspend, self.ctx, stack.identifier())
- self.m.VerifyAll()
-
- def test_stack_resume_nonexist(self):
- stack_name = 'service_resume_nonexist_test_stack'
- stack = get_wordpress_stack(stack_name, self.ctx)
-
- self.m.ReplayAll()
-
- self.assertRaises(exception.StackNotFound,
- self.man.stack_resume, self.ctx, stack.identifier())
- self.m.VerifyAll()
-
-
-class StackServiceTest(HeatTestCase):
-
- def setUp(self):
- super(StackServiceTest, self).setUp()
-
- self.ctx = utils.dummy_context(tenant_id='stack_service_test_tenant')
- self.eng = service.EngineService('a-host', 'a-topic')
- cfg.CONF.set_default('heat_stack_user_role', 'stack_user_role')
- _register_class('ResourceWithPropsType',
- generic_rsrc.ResourceWithProps)
-
- utils.setup_dummy_db()
-
- @stack_context('service_identify_test_stack', False)
- def test_stack_identify(self):
- self.m.StubOutWithMock(parser.Stack, 'load')
- parser.Stack.load(self.ctx,
- stack=mox.IgnoreArg()).AndReturn(self.stack)
-
- self.m.ReplayAll()
- identity = self.eng.identify_stack(self.ctx, self.stack.name)
- self.assertEqual(self.stack.identifier(), identity)
-
- self.m.VerifyAll()
-
- @stack_context('service_identify_uuid_test_stack', False)
- def test_stack_identify_uuid(self):
- self.m.StubOutWithMock(parser.Stack, 'load')
- parser.Stack.load(self.ctx,
- stack=mox.IgnoreArg()).AndReturn(self.stack)
-
- self.m.ReplayAll()
- identity = self.eng.identify_stack(self.ctx, self.stack.id)
- self.assertEqual(self.stack.identifier(), identity)
-
- self.m.VerifyAll()
-
- def test_stack_identify_nonexist(self):
- self.assertRaises(exception.StackNotFound, self.eng.identify_stack,
- self.ctx, 'wibble')
-
- @stack_context('service_create_existing_test_stack', False)
- def test_stack_create_existing(self):
- self.assertRaises(exception.StackExists, self.eng.create_stack,
- self.ctx, self.stack.name, self.stack.t, {},
- None, {})
-
- @stack_context('service_name_tenants_test_stack', False)
- def test_stack_by_name_tenants(self):
- self.assertEqual(self.stack.id,
- db_api.stack_get_by_name(self.ctx,
- self.stack.name).id)
- ctx2 = utils.dummy_context(tenant_id='stack_service_test_tenant2')
- self.assertEqual(None, db_api.stack_get_by_name(ctx2, self.stack.name))
-
- @stack_context('service_event_list_test_stack')
- def test_stack_event_list(self):
- self.m.StubOutWithMock(service.EngineService, '_get_stack')
- s = db_api.stack_get(self.ctx, self.stack.id)
- service.EngineService._get_stack(self.ctx,
- self.stack.identifier(),
- show_deleted=True).AndReturn(s)
- self.m.ReplayAll()
-
- events = self.eng.list_events(self.ctx, self.stack.identifier())
-
- self.assertEqual(2, len(events))
- for ev in events:
- self.assertTrue('event_identity' in ev)
- self.assertEqual(dict, type(ev['event_identity']))
- self.assertTrue(ev['event_identity']['path'].rsplit('/', 1)[1])
-
- self.assertTrue('resource_name' in ev)
- self.assertEqual('WebServer', ev['resource_name'])
-
- self.assertTrue('physical_resource_id' in ev)
-
- self.assertTrue('resource_properties' in ev)
- # The user data field is long, but it mentions 'wordpress'
- # several times, so this check is reliable.
- user_data = ev['resource_properties']['UserData']
- self.assertNotEqual(user_data.find('wordpress'), -1)
- self.assertEqual('F17-x86_64-gold',
- ev['resource_properties']['ImageId'])
- self.assertEqual('m1.large',
- ev['resource_properties']['InstanceType'])
-
- self.assertEqual('CREATE', ev['resource_action'])
- self.assertTrue(ev['resource_status'] in ('IN_PROGRESS',
- 'COMPLETE'))
-
- self.assertTrue('resource_status_reason' in ev)
- self.assertEqual('state changed', ev['resource_status_reason'])
-
- self.assertTrue('resource_type' in ev)
- self.assertEqual('AWS::EC2::Instance', ev['resource_type'])
-
- self.assertTrue('stack_identity' in ev)
-
- self.assertTrue('stack_name' in ev)
- self.assertEqual(self.stack.name, ev['stack_name'])
-
- self.assertTrue('event_time' in ev)
-
- self.m.VerifyAll()
-
- @stack_context('service_event_list_deleted_resource_test_stack')
- def test_stack_event_list_deleted_resource(self):
- rsrs._register_class('GenericResourceType',
- generic_rsrc.GenericResource)
-
- def run(stack_id, func, *args):
- func(*args)
- self.eng._start_in_thread = run
-
- new_tmpl = {'Resources': {'AResource': {'Type':
- 'GenericResourceType'}}}
-
- self.m.StubOutWithMock(instances.Instance, 'handle_delete')
- instances.Instance.handle_delete()
-
- self.m.ReplayAll()
-
- result = self.eng.update_stack(self.ctx, self.stack.identifier(),
- new_tmpl, None, None, {})
-
- # The self.stack reference needs to be updated. Since the underlying
- # stack is updated in update_stack, the original reference is now
- # pointing to an orphaned stack object.
- self.stack = parser.Stack.load(self.ctx, stack_id=result['stack_id'])
-
- self.assertEqual(self.stack.identifier(), result)
- self.assertTrue(isinstance(result, dict))
- self.assertTrue(result['stack_id'])
- events = self.eng.list_events(self.ctx, self.stack.identifier())
-
- self.assertEqual(6, len(events))
-
- for ev in events:
- self.assertIn('event_identity', ev)
- self.assertEqual(dict, type(ev['event_identity']))
- self.assertTrue(ev['event_identity']['path'].rsplit('/', 1)[1])
-
- self.assertIn('resource_name', ev)
- self.assertIn('physical_resource_id', ev)
- self.assertIn('resource_properties', ev)
- self.assertIn('resource_status_reason', ev)
-
- self.assertIn(ev['resource_action'], ('CREATE', 'DELETE'))
- self.assertIn(ev['resource_status'], ('IN_PROGRESS', 'COMPLETE'))
-
- self.assertIn('resource_type', ev)
- self.assertIn(ev['resource_type'], ('AWS::EC2::Instance',
- 'GenericResourceType'))
-
- self.assertIn('stack_identity', ev)
-
- self.assertIn('stack_name', ev)
- self.assertEqual(self.stack.name, ev['stack_name'])
-
- self.assertIn('event_time', ev)
-
- self.m.VerifyAll()
-
- @stack_context('service_event_list_test_stack')
- def test_stack_event_list_by_tenant(self):
- events = self.eng.list_events(self.ctx, None)
-
- self.assertEqual(2, len(events))
- for ev in events:
- self.assertIn('event_identity', ev)
- self.assertThat(ev['event_identity'], matchers.IsInstance(dict))
- self.assertTrue(ev['event_identity']['path'].rsplit('/', 1)[1])
-
- self.assertTrue('resource_name' in ev)
- self.assertEqual('WebServer', ev['resource_name'])
-
- self.assertTrue('physical_resource_id' in ev)
-
- self.assertTrue('resource_properties' in ev)
- # The user data field is long, but it mentions 'wordpress'
- # several times, so this check is reliable.
- user_data = ev['resource_properties']['UserData']
- self.assertIn('wordpress', user_data)
- self.assertEqual('F17-x86_64-gold',
- ev['resource_properties']['ImageId'])
- self.assertEqual('m1.large',
- ev['resource_properties']['InstanceType'])
-
- self.assertEqual('CREATE', ev['resource_action'])
- self.assertIn(ev['resource_status'], ('IN_PROGRESS', 'COMPLETE'))
-
- self.assertIn('resource_status_reason', ev)
- self.assertEqual('state changed', ev['resource_status_reason'])
-
- self.assertIn('resource_type', ev)
- self.assertEqual('AWS::EC2::Instance', ev['resource_type'])
-
- self.assertIn('stack_identity', ev)
-
- self.assertIn('stack_name', ev)
- self.assertEqual(self.stack.name, ev['stack_name'])
-
- self.assertIn('event_time', ev)
-
- self.m.VerifyAll()
-
- @stack_context('service_list_all_test_stack')
- def test_stack_list_all(self):
- self.m.StubOutWithMock(parser.Stack, 'load')
- parser.Stack.load(self.ctx, stack=mox.IgnoreArg(), resolve_data=False)\
- .AndReturn(self.stack)
-
- self.m.ReplayAll()
- sl = self.eng.list_stacks(self.ctx)
-
- self.assertEqual(1, len(sl))
- for s in sl:
- self.assertTrue('creation_time' in s)
- self.assertTrue('updated_time' in s)
- self.assertTrue('stack_identity' in s)
- self.assertNotEqual(s['stack_identity'], None)
- self.assertTrue('stack_name' in s)
- self.assertEqual(self.stack.name, s['stack_name'])
- self.assertTrue('stack_status' in s)
- self.assertTrue('stack_status_reason' in s)
- self.assertTrue('description' in s)
- self.assertNotEqual(s['description'].find('WordPress'), -1)
-
- self.m.VerifyAll()
-
- def test_stack_describe_nonexistent(self):
- non_exist_identifier = identifier.HeatIdentifier(
- self.ctx.tenant_id, 'wibble',
- '18d06e2e-44d3-4bef-9fbf-52480d604b02')
-
- self.m.StubOutWithMock(service.EngineService, '_get_stack')
- service.EngineService._get_stack(
- self.ctx, non_exist_identifier,
- show_deleted=True).AndRaise(exception.StackNotFound)
- self.m.ReplayAll()
-
- self.assertRaises(exception.StackNotFound,
- self.eng.show_stack,
- self.ctx, non_exist_identifier)
- self.m.VerifyAll()
-
- def test_stack_describe_bad_tenant(self):
- non_exist_identifier = identifier.HeatIdentifier(
- 'wibble', 'wibble',
- '18d06e2e-44d3-4bef-9fbf-52480d604b02')
-
- self.m.StubOutWithMock(service.EngineService, '_get_stack')
- service.EngineService._get_stack(
- self.ctx, non_exist_identifier,
- show_deleted=True).AndRaise(exception.InvalidTenant)
- self.m.ReplayAll()
-
- self.assertRaises(exception.InvalidTenant,
- self.eng.show_stack,
- self.ctx, non_exist_identifier)
-
- self.m.VerifyAll()
-
- @stack_context('service_describe_test_stack', False)
- def test_stack_describe(self):
- self.m.StubOutWithMock(service.EngineService, '_get_stack')
- s = db_api.stack_get(self.ctx, self.stack.id)
- service.EngineService._get_stack(self.ctx,
- self.stack.identifier(),
- show_deleted=True).AndReturn(s)
- self.m.ReplayAll()
-
- sl = self.eng.show_stack(self.ctx, self.stack.identifier())
-
- self.assertEqual(1, len(sl))
-
- s = sl[0]
- self.assertTrue('creation_time' in s)
- self.assertTrue('updated_time' in s)
- self.assertTrue('stack_identity' in s)
- self.assertNotEqual(s['stack_identity'], None)
- self.assertTrue('stack_name' in s)
- self.assertEqual(self.stack.name, s['stack_name'])
- self.assertTrue('stack_status' in s)
- self.assertTrue('stack_status_reason' in s)
- self.assertTrue('description' in s)
- self.assertNotEqual(s['description'].find('WordPress'), -1)
- self.assertTrue('parameters' in s)
-
- self.m.VerifyAll()
-
- @stack_context('service_describe_all_test_stack', False)
- def test_stack_describe_all(self):
- sl = self.eng.show_stack(self.ctx, None)
-
- self.assertEqual(1, len(sl))
-
- s = sl[0]
- self.assertTrue('creation_time' in s)
- self.assertTrue('updated_time' in s)
- self.assertTrue('stack_identity' in s)
- self.assertNotEqual(s['stack_identity'], None)
- self.assertTrue('stack_name' in s)
- self.assertEqual(self.stack.name, s['stack_name'])
- self.assertTrue('stack_status' in s)
- self.assertTrue('stack_status_reason' in s)
- self.assertTrue('description' in s)
- self.assertNotEqual(s['description'].find('WordPress'), -1)
- self.assertTrue('parameters' in s)
-
- def test_list_resource_types(self):
- resources = self.eng.list_resource_types(self.ctx)
- self.assertTrue(isinstance(resources, list))
- self.assertTrue('AWS::EC2::Instance' in resources)
-
- def test_resource_schema(self):
- type_name = 'ResourceWithPropsType'
- expected = {
- 'resource_type': type_name,
- 'properties': {
- 'Foo': {
- 'type': 'string',
- 'required': False,
- },
- },
- 'attributes': {
- 'foo': {'description': 'A generic attribute'},
- 'Foo': {'description': 'Another generic attribute'},
- },
- }
-
- schema = self.eng.resource_schema(self.ctx, type_name=type_name)
- self.assertEqual(expected, schema)
-
- def test_resource_schema_nonexist(self):
- self.assertRaises(exception.ResourceTypeNotFound,
- self.eng.resource_schema,
- self.ctx, type_name='Bogus')
-
- @stack_context('service_stack_resource_describe__test_stack')
- def test_stack_resource_describe(self):
- self.m.StubOutWithMock(parser.Stack, 'load')
- parser.Stack.load(self.ctx,
- stack=mox.IgnoreArg()).AndReturn(self.stack)
- self.m.ReplayAll()
-
- r = self.eng.describe_stack_resource(self.ctx, self.stack.identifier(),
- 'WebServer')
-
- self.assertTrue('resource_identity' in r)
- self.assertTrue('description' in r)
- self.assertTrue('updated_time' in r)
- self.assertTrue('stack_identity' in r)
- self.assertNotEqual(r['stack_identity'], None)
- self.assertTrue('stack_name' in r)
- self.assertEqual(self.stack.name, r['stack_name'])
- self.assertTrue('metadata' in r)
- self.assertTrue('resource_status' in r)
- self.assertTrue('resource_status_reason' in r)
- self.assertTrue('resource_type' in r)
- self.assertTrue('physical_resource_id' in r)
- self.assertTrue('resource_name' in r)
- self.assertEqual('WebServer', r['resource_name'])
-
- self.m.VerifyAll()
-
- def test_stack_resource_describe_nonexist_stack(self):
- non_exist_identifier = identifier.HeatIdentifier(
- self.ctx.tenant_id,
- 'wibble',
- '18d06e2e-44d3-4bef-9fbf-52480d604b02')
-
- self.m.StubOutWithMock(service.EngineService, '_get_stack')
- service.EngineService._get_stack(
- self.ctx, non_exist_identifier).AndRaise(exception.StackNotFound)
- self.m.ReplayAll()
-
- self.assertRaises(exception.StackNotFound,
- self.eng.describe_stack_resource,
- self.ctx, non_exist_identifier, 'WebServer')
-
- self.m.VerifyAll()
-
- @stack_context('service_resource_describe_nonexist_test_stack')
- def test_stack_resource_describe_nonexist_resource(self):
- self.m.StubOutWithMock(parser.Stack, 'load')
- parser.Stack.load(self.ctx,
- stack=mox.IgnoreArg()).AndReturn(self.stack)
-
- self.m.ReplayAll()
- self.assertRaises(exception.ResourceNotFound,
- self.eng.describe_stack_resource,
- self.ctx, self.stack.identifier(), 'foo')
-
- self.m.VerifyAll()
-
- @stack_context('service_resource_describe_user_deny_test_stack')
- def test_stack_resource_describe_stack_user_deny(self):
- self.ctx.roles = [cfg.CONF.heat_stack_user_role]
- self.m.StubOutWithMock(service.EngineService, '_authorize_stack_user')
- service.EngineService._authorize_stack_user(self.ctx, mox.IgnoreArg(),
- 'foo').AndReturn(False)
- self.m.ReplayAll()
-
- self.assertRaises(exception.Forbidden,
- self.eng.describe_stack_resource,
- self.ctx, self.stack.identifier(), 'foo')
-
- self.m.VerifyAll()
-
- @stack_context('service_authorize_stack_user_nocreds_test_stack')
- def test_stack_authorize_stack_user_nocreds(self):
- self.assertFalse(self.eng._authorize_stack_user(self.ctx,
- self.stack,
- 'foo'))
-
- @stack_context('service_authorize_user_attribute_error_test_stack')
- def test_stack_authorize_stack_user_attribute_error(self):
- self.m.StubOutWithMock(json, 'loads')
- json.loads(None).AndRaise(AttributeError)
- self.m.ReplayAll()
- self.assertFalse(self.eng._authorize_stack_user(self.ctx,
- self.stack,
- 'foo'))
- self.m.VerifyAll()
-
- @stack_context('service_authorize_stack_user_type_error_test_stack')
- def test_stack_authorize_stack_user_type_error(self):
- self.m.StubOutWithMock(json, 'loads')
- json.loads(mox.IgnoreArg()).AndRaise(TypeError)
- self.m.ReplayAll()
-
- self.assertFalse(self.eng._authorize_stack_user(self.ctx,
- self.stack,
- 'foo'))
-
- self.m.VerifyAll()
-
- @stack_context('service_resources_describe_test_stack')
- def test_stack_resources_describe(self):
- self.m.StubOutWithMock(parser.Stack, 'load')
- parser.Stack.load(self.ctx,
- stack=mox.IgnoreArg()).AndReturn(self.stack)
- self.m.ReplayAll()
-
- resources = self.eng.describe_stack_resources(self.ctx,
- self.stack.identifier(),
- 'WebServer')
-
- self.assertEqual(1, len(resources))
- r = resources[0]
- self.assertTrue('resource_identity' in r)
- self.assertTrue('description' in r)
- self.assertTrue('updated_time' in r)
- self.assertTrue('stack_identity' in r)
- self.assertNotEqual(r['stack_identity'], None)
- self.assertTrue('stack_name' in r)
- self.assertEqual(self.stack.name, r['stack_name'])
- self.assertTrue('resource_status' in r)
- self.assertTrue('resource_status_reason' in r)
- self.assertTrue('resource_type' in r)
- self.assertTrue('physical_resource_id' in r)
- self.assertTrue('resource_name' in r)
- self.assertEqual('WebServer', r['resource_name'])
-
- self.m.VerifyAll()
-
- @stack_context('service_resources_describe_no_filter_test_stack')
- def test_stack_resources_describe_no_filter(self):
- self.m.StubOutWithMock(parser.Stack, 'load')
- parser.Stack.load(self.ctx,
- stack=mox.IgnoreArg()).AndReturn(self.stack)
- self.m.ReplayAll()
-
- resources = self.eng.describe_stack_resources(self.ctx,
- self.stack.identifier(),
- None)
-
- self.assertEqual(1, len(resources))
- r = resources[0]
- self.assertTrue('resource_name' in r)
- self.assertEqual('WebServer', r['resource_name'])
-
- self.m.VerifyAll()
-
- def test_stack_resources_describe_bad_lookup(self):
- self.m.StubOutWithMock(service.EngineService, '_get_stack')
- service.EngineService._get_stack(
- self.ctx, None).AndRaise(TypeError)
- self.m.ReplayAll()
-
- self.assertRaises(TypeError,
- self.eng.describe_stack_resources,
- self.ctx, None, 'WebServer')
- self.m.VerifyAll()
-
- def test_stack_resources_describe_nonexist_stack(self):
- non_exist_identifier = identifier.HeatIdentifier(
- self.ctx.tenant_id, 'wibble',
- '18d06e2e-44d3-4bef-9fbf-52480d604b02')
-
- self.assertRaises(exception.StackNotFound,
- self.eng.describe_stack_resources,
- self.ctx, non_exist_identifier, 'WebServer')
-
- @stack_context('service_find_physical_resource_test_stack')
- def test_find_physical_resource(self):
- resources = self.eng.describe_stack_resources(self.ctx,
- self.stack.identifier(),
- None)
- phys_id = resources[0]['physical_resource_id']
-
- result = self.eng.find_physical_resource(self.ctx, phys_id)
- self.assertTrue(isinstance(result, dict))
- resource_identity = identifier.ResourceIdentifier(**result)
- self.assertEqual(self.stack.identifier(), resource_identity.stack())
- self.assertEqual('WebServer', resource_identity.resource_name)
-
- def test_find_physical_resource_nonexist(self):
- self.assertRaises(exception.PhysicalResourceNotFound,
- self.eng.find_physical_resource,
- self.ctx, 'foo')
-
- @stack_context('service_resources_list_test_stack')
- def test_stack_resources_list(self):
- self.m.StubOutWithMock(parser.Stack, 'load')
- parser.Stack.load(self.ctx,
- stack=mox.IgnoreArg()).AndReturn(self.stack)
- self.m.ReplayAll()
-
- resources = self.eng.list_stack_resources(self.ctx,
- self.stack.identifier())
-
- self.assertEqual(1, len(resources))
- r = resources[0]
- self.assertTrue('resource_identity' in r)
- self.assertTrue('updated_time' in r)
- self.assertTrue('physical_resource_id' in r)
- self.assertTrue('resource_name' in r)
- self.assertEqual('WebServer', r['resource_name'])
- self.assertTrue('resource_status' in r)
- self.assertTrue('resource_status_reason' in r)
- self.assertTrue('resource_type' in r)
-
- self.m.VerifyAll()
-
- def test_stack_resources_list_nonexist_stack(self):
- non_exist_identifier = identifier.HeatIdentifier(
- self.ctx.tenant_id, 'wibble',
- '18d06e2e-44d3-4bef-9fbf-52480d604b02')
-
- self.m.StubOutWithMock(service.EngineService, '_get_stack')
- service.EngineService._get_stack(
- self.ctx, non_exist_identifier).AndRaise(exception.StackNotFound)
- self.m.ReplayAll()
-
- self.assertRaises(exception.StackNotFound,
- self.eng.list_stack_resources,
- self.ctx, non_exist_identifier)
-
- self.m.VerifyAll()
-
- def test_signal_reception(self):
- stack = get_stack('signal_reception',
- self.ctx,
- policy_template)
- self.stack = stack
- self.m.ReplayAll()
- stack.store()
- stack.create()
- test_data = {'food': 'yum'}
-
- self.m.StubOutWithMock(service.EngineService, '_get_stack')
- s = db_api.stack_get(self.ctx, self.stack.id)
- service.EngineService._get_stack(self.ctx,
- self.stack.identifier()).AndReturn(s)
-
- self.m.StubOutWithMock(service.EngineService, '_load_user_creds')
- service.EngineService._load_user_creds(
- mox.IgnoreArg()).AndReturn(self.ctx)
-
- self.m.StubOutWithMock(rsrs.Resource, 'signal')
- rsrs.Resource.signal(mox.IgnoreArg()).AndReturn(None)
- self.m.ReplayAll()
-
- result = self.eng.resource_signal(self.ctx,
- dict(self.stack.identifier()),
- 'WebServerScaleDownPolicy',
- test_data)
- self.m.VerifyAll()
- self.stack.delete()
-
- def test_signal_reception_no_resource(self):
- stack = get_stack('signal_reception_no_resource',
- self.ctx,
- policy_template)
- self.stack = stack
- self.m.ReplayAll()
- stack.store()
- stack.create()
- test_data = {'food': 'yum'}
-
- self.m.StubOutWithMock(service.EngineService, '_get_stack')
- s = db_api.stack_get(self.ctx, self.stack.id)
- service.EngineService._get_stack(self.ctx,
- self.stack.identifier()).AndReturn(s)
-
- self.m.StubOutWithMock(service.EngineService, '_load_user_creds')
- service.EngineService._load_user_creds(
- mox.IgnoreArg()).AndReturn(self.ctx)
- self.m.ReplayAll()
-
- self.assertRaises(exception.ResourceNotFound,
- self.eng.resource_signal, self.ctx,
- dict(self.stack.identifier()),
- 'resource_does_not_exist',
- test_data)
- self.m.VerifyAll()
- self.stack.delete()
-
- @stack_context('service_metadata_test_stack')
- def test_metadata(self):
- test_metadata = {'foo': 'bar', 'baz': 'quux', 'blarg': 'wibble'}
- pre_update_meta = self.stack['WebServer'].metadata
-
- self.m.StubOutWithMock(service.EngineService, '_get_stack')
- s = db_api.stack_get(self.ctx, self.stack.id)
- service.EngineService._get_stack(self.ctx,
- self.stack.identifier()).AndReturn(s)
- self.m.StubOutWithMock(instances.Instance, 'metadata_update')
- instances.Instance.metadata_update(new_metadata=test_metadata)
- self.m.StubOutWithMock(service.EngineService, '_load_user_creds')
- service.EngineService._load_user_creds(
- mox.IgnoreArg()).AndReturn(self.ctx)
- self.m.ReplayAll()
-
- result = self.eng.metadata_update(self.ctx,
- dict(self.stack.identifier()),
- 'WebServer', test_metadata)
- # metadata_update is a no-op for all resources except
- # WaitConditionHandle so we don't expect this to have changed
- self.assertEqual(pre_update_meta, result)
-
- self.m.VerifyAll()
-
- def test_metadata_err_stack(self):
- non_exist_identifier = identifier.HeatIdentifier(
- self.ctx.tenant_id, 'wibble',
- '18d06e2e-44d3-4bef-9fbf-52480d604b02')
-
- self.m.StubOutWithMock(service.EngineService, '_get_stack')
- service.EngineService._get_stack(
- self.ctx, non_exist_identifier).AndRaise(exception.StackNotFound)
- self.m.ReplayAll()
-
- test_metadata = {'foo': 'bar', 'baz': 'quux', 'blarg': 'wibble'}
- self.assertRaises(exception.StackNotFound,
- self.eng.metadata_update,
- self.ctx, non_exist_identifier,
- 'WebServer', test_metadata)
- self.m.VerifyAll()
-
- @stack_context('service_metadata_err_resource_test_stack', False)
- def test_metadata_err_resource(self):
- self.m.StubOutWithMock(parser.Stack, 'load')
- parser.Stack.load(self.ctx,
- stack=mox.IgnoreArg()).AndReturn(self.stack)
- self.m.ReplayAll()
-
- test_metadata = {'foo': 'bar', 'baz': 'quux', 'blarg': 'wibble'}
- self.assertRaises(exception.ResourceNotFound,
- self.eng.metadata_update,
- self.ctx, dict(self.stack.identifier()),
- 'NooServer', test_metadata)
-
- self.m.VerifyAll()
-
- @stack_context('periodic_watch_task_not_created')
- def test_periodic_watch_task_not_created(self):
- self.eng.stg[self.stack.id] = DummyThreadGroup()
- self.eng._start_watch_task(self.stack.id, self.ctx)
- self.assertEqual([], self.eng.stg[self.stack.id].threads)
-
- def test_periodic_watch_task_created(self):
- stack = get_stack('period_watch_task_created',
- utils.dummy_context(),
- alarm_template)
- self.stack = stack
- self.m.ReplayAll()
- stack.store()
- stack.create()
- self.eng.stg[stack.id] = DummyThreadGroup()
- self.eng._start_watch_task(stack.id, self.ctx)
- self.assertEqual([self.eng._periodic_watcher_task],
- self.eng.stg[stack.id].threads)
- self.stack.delete()
-
- @stack_context('service_show_watch_test_stack', False)
- @utils.wr_delete_after
- def test_show_watch(self):
- # Insert two dummy watch rules into the DB
- rule = {u'EvaluationPeriods': u'1',
- u'AlarmActions': [u'WebServerRestartPolicy'],
- u'AlarmDescription': u'Restart the WikiDatabase',
- u'Namespace': u'system/linux',
- u'Period': u'300',
- u'ComparisonOperator': u'GreaterThanThreshold',
- u'Statistic': u'SampleCount',
- u'Threshold': u'2',
- u'MetricName': u'ServiceFailure'}
- self.wr = []
- self.wr.append(watchrule.WatchRule(context=self.ctx,
- watch_name='HttpFailureAlarm',
- rule=rule,
- watch_data=[],
- stack_id=self.stack.id,
- state='NORMAL'))
- self.wr[0].store()
-
- self.wr.append(watchrule.WatchRule(context=self.ctx,
- watch_name='AnotherWatch',
- rule=rule,
- watch_data=[],
- stack_id=self.stack.id,
- state='NORMAL'))
- self.wr[1].store()
-
- # watch_name=None should return both watches
- result = self.eng.show_watch(self.ctx, watch_name=None)
- self.assertEqual(2, len(result))
-
- # watch_name="HttpFailureAlarm" should return only one
- result = self.eng.show_watch(self.ctx, watch_name="HttpFailureAlarm")
- self.assertEqual(1, len(result))
-
- self.assertRaises(exception.WatchRuleNotFound,
- self.eng.show_watch,
- self.ctx, watch_name="nonexistent")
-
- # Check the response has all keys defined in the engine API
- for key in engine_api.WATCH_KEYS:
- self.assertTrue(key in result[0])
-
- @stack_context('service_show_watch_metric_test_stack', False)
- @utils.wr_delete_after
- def test_show_watch_metric(self):
- # Insert dummy watch rule into the DB
- rule = {u'EvaluationPeriods': u'1',
- u'AlarmActions': [u'WebServerRestartPolicy'],
- u'AlarmDescription': u'Restart the WikiDatabase',
- u'Namespace': u'system/linux',
- u'Period': u'300',
- u'ComparisonOperator': u'GreaterThanThreshold',
- u'Statistic': u'SampleCount',
- u'Threshold': u'2',
- u'MetricName': u'ServiceFailure'}
- self.wr = watchrule.WatchRule(context=self.ctx,
- watch_name='HttpFailureAlarm',
- rule=rule,
- watch_data=[],
- stack_id=self.stack.id,
- state='NORMAL')
- self.wr.store()
-
- # And add a metric datapoint
- watch = db_api.watch_rule_get_by_name(self.ctx, "HttpFailureAlarm")
- self.assertNotEqual(watch, None)
- values = {'watch_rule_id': watch.id,
- 'data': {u'Namespace': u'system/linux',
- u'ServiceFailure': {
- u'Units': u'Counter', u'Value': 1}}}
- watch = db_api.watch_data_create(self.ctx, values)
-
- # Check there is one result returned
- result = self.eng.show_watch_metric(self.ctx,
- metric_namespace=None,
- metric_name=None)
- self.assertEqual(1, len(result))
-
- # Create another metric datapoint and check we get two
- watch = db_api.watch_data_create(self.ctx, values)
- result = self.eng.show_watch_metric(self.ctx,
- metric_namespace=None,
- metric_name=None)
- self.assertEqual(2, len(result))
-
- # Check the response has all keys defined in the engine API
- for key in engine_api.WATCH_DATA_KEYS:
- self.assertTrue(key in result[0])
-
- @stack_context('service_show_watch_state_test_stack')
- @utils.wr_delete_after
- def test_set_watch_state(self):
- # Insert dummy watch rule into the DB
- rule = {u'EvaluationPeriods': u'1',
- u'AlarmActions': [u'WebServerRestartPolicy'],
- u'AlarmDescription': u'Restart the WikiDatabase',
- u'Namespace': u'system/linux',
- u'Period': u'300',
- u'ComparisonOperator': u'GreaterThanThreshold',
- u'Statistic': u'SampleCount',
- u'Threshold': u'2',
- u'MetricName': u'ServiceFailure'}
- self.wr = watchrule.WatchRule(context=self.ctx,
- watch_name='OverrideAlarm',
- rule=rule,
- watch_data=[],
- stack_id=self.stack.id,
- state='NORMAL')
- self.wr.store()
-
- class DummyAction(object):
- signal = "dummyfoo"
-
- dummy_action = DummyAction()
- self.m.StubOutWithMock(parser.Stack, 'resource_by_refid')
- parser.Stack.resource_by_refid(
- 'WebServerRestartPolicy').AndReturn(dummy_action)
-
- # Replace the real stack threadgroup with a dummy one, so we can
- # check the function returned on ALARM is correctly scheduled
- self.eng.stg[self.stack.id] = DummyThreadGroup()
-
- self.m.ReplayAll()
-
- state = watchrule.WatchRule.NODATA
- result = self.eng.set_watch_state(self.ctx,
- watch_name="OverrideAlarm",
- state=state)
- self.assertEqual(state, result[engine_api.WATCH_STATE_VALUE])
- self.assertEqual([], self.eng.stg[self.stack.id].threads)
-
- state = watchrule.WatchRule.NORMAL
- result = self.eng.set_watch_state(self.ctx,
- watch_name="OverrideAlarm",
- state=state)
- self.assertEqual(state, result[engine_api.WATCH_STATE_VALUE])
- self.assertEqual([], self.eng.stg[self.stack.id].threads)
-
- state = watchrule.WatchRule.ALARM
- result = self.eng.set_watch_state(self.ctx,
- watch_name="OverrideAlarm",
- state=state)
- self.assertEqual(state, result[engine_api.WATCH_STATE_VALUE])
- self.assertEqual([DummyAction.signal],
- self.eng.stg[self.stack.id].threads)
-
- self.m.VerifyAll()
-
- @stack_context('service_show_watch_state_badstate_test_stack')
- @utils.wr_delete_after
- def test_set_watch_state_badstate(self):
- # Insert dummy watch rule into the DB
- rule = {u'EvaluationPeriods': u'1',
- u'AlarmActions': [u'WebServerRestartPolicy'],
- u'AlarmDescription': u'Restart the WikiDatabase',
- u'Namespace': u'system/linux',
- u'Period': u'300',
- u'ComparisonOperator': u'GreaterThanThreshold',
- u'Statistic': u'SampleCount',
- u'Threshold': u'2',
- u'MetricName': u'ServiceFailure'}
- self.wr = watchrule.WatchRule(context=self.ctx,
- watch_name='OverrideAlarm2',
- rule=rule,
- watch_data=[],
- stack_id=self.stack.id,
- state='NORMAL')
- self.wr.store()
-
- self.m.StubOutWithMock(watchrule.WatchRule, 'set_watch_state')
- for state in ["HGJHGJHG", "1234", "!\*(&%"]:
- watchrule.WatchRule.set_watch_state(state)\
- .InAnyOrder().AndRaise(ValueError)
- self.m.ReplayAll()
-
- for state in ["HGJHGJHG", "1234", "!\*(&%"]:
- self.assertRaises(ValueError,
- self.eng.set_watch_state,
- self.ctx, watch_name="OverrideAlarm2",
- state=state)
-
- self.m.VerifyAll()
-
- def test_set_watch_state_noexist(self):
- state = watchrule.WatchRule.ALARM # State valid
-
- self.m.StubOutWithMock(watchrule.WatchRule, 'load')
- watchrule.WatchRule.load(self.ctx, "nonexistent")\
- .AndRaise(exception.WatchRuleNotFound)
- self.m.ReplayAll()
-
- self.assertRaises(exception.WatchRuleNotFound,
- self.eng.set_watch_state,
- self.ctx, watch_name="nonexistent", state=state)
- self.m.VerifyAll()
-
- def test_stack_list_all_empty(self):
- sl = self.eng.list_stacks(self.ctx)
-
- self.assertEqual(0, len(sl))
-
- def test_stack_describe_all_empty(self):
- sl = self.eng.show_stack(self.ctx, None)
-
- self.assertEqual(0, len(sl))
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import mock
-
-from heat.engine import environment
-from heat.engine import resources
-
-from heat.tests import generic_resource
-from heat.tests import common
-
-
-class EnvironmentTest(common.HeatTestCase):
- def setUp(self):
- super(EnvironmentTest, self).setUp()
- self.g_env = resources.global_env()
-
- def test_load_old_parameters(self):
- old = {u'a': u'ff', u'b': u'ss'}
- expected = {u'parameters': old,
- u'resource_registry': {u'resources': {}}}
- env = environment.Environment(old)
- self.assertEqual(expected, env.user_env_as_dict())
-
- def test_load_new_env(self):
- new_env = {u'parameters': {u'a': u'ff', u'b': u'ss'},
- u'resource_registry': {u'OS::Food': u'fruity.yaml',
- u'resources': {}}}
- env = environment.Environment(new_env)
- self.assertEqual(new_env, env.user_env_as_dict())
-
- def test_global_registry(self):
- self.g_env.register_class('CloudX::Compute::Server',
- generic_resource.GenericResource)
- new_env = {u'parameters': {u'a': u'ff', u'b': u'ss'},
- u'resource_registry': {u'OS::*': 'CloudX::*'}}
- env = environment.Environment(new_env)
- self.assertEqual('CloudX::Compute::Server',
- env.get_resource_info('OS::Compute::Server',
- 'my_db_server').name)
-
- def test_map_one_resource_type(self):
- new_env = {u'parameters': {u'a': u'ff', u'b': u'ss'},
- u'resource_registry': {u'resources':
- {u'my_db_server':
- {u'OS::DBInstance': 'db.yaml'}}}}
- env = environment.Environment(new_env)
-
- info = env.get_resource_info('OS::DBInstance', 'my_db_server')
- self.assertEqual('db.yaml', info.value)
-
- def test_map_all_resources_of_type(self):
- self.g_env.register_class('OS::Nova::FloatingIP',
- generic_resource.GenericResource)
-
- new_env = {u'parameters': {u'a': u'ff', u'b': u'ss'},
- u'resource_registry':
- {u'OS::Networking::FloatingIP': 'OS::Nova::FloatingIP',
- u'OS::Loadbalancer': 'lb.yaml'}}
-
- env = environment.Environment(new_env)
- self.assertEqual('OS::Nova::FloatingIP',
- env.get_resource_info('OS::Networking::FloatingIP',
- 'my_fip').name)
-
- def test_resource_sort_order_len(self):
- new_env = {u'resource_registry': {u'resources': {u'my_fip': {
- u'OS::Networking::FloatingIP': 'ip.yaml'}}},
- u'OS::Networking::FloatingIP': 'OS::Nova::FloatingIP'}
-
- env = environment.Environment(new_env)
- self.assertEqual('ip.yaml',
- env.get_resource_info('OS::Networking::FloatingIP',
- 'my_fip').value)
-
- def test_env_load(self):
- new_env = {u'resource_registry': {u'resources': {u'my_fip': {
- u'OS::Networking::FloatingIP': 'ip.yaml'}}}}
-
- env = environment.Environment()
- self.assertEqual(None,
- env.get_resource_info('OS::Networking::FloatingIP',
- 'my_fip'))
-
- env.load(new_env)
- self.assertEqual('ip.yaml',
- env.get_resource_info('OS::Networking::FloatingIP',
- 'my_fip').value)
-
-
-class GlobalEnvLoadingTest(common.HeatTestCase):
-
- def test_happy_path(self):
- list_dir = 'heat.engine.resources._list_environment_files'
- with mock.patch(list_dir) as m_ldir:
- m_ldir.return_value = ['a.yaml']
- env_dir = '/etc_etc/heat/environment.d'
- env_content = '{"resource_registry": {}}'
-
- with mock.patch('heat.engine.resources.open',
- mock.mock_open(read_data=env_content),
- create=True) as m_open:
- resources._load_global_environment(env_dir)
-
- m_ldir.assert_called_once_with(env_dir)
- m_open.assert_called_once_with('%s/a.yaml' % env_dir)
-
- def test_empty_env_dir(self):
- list_dir = 'heat.engine.resources._list_environment_files'
- with mock.patch(list_dir) as m_ldir:
- m_ldir.return_value = []
- env_dir = '/etc_etc/heat/environment.d'
- resources._load_global_environment(env_dir)
-
- m_ldir.assert_called_once_with(env_dir)
-
- def test_continue_on_ioerror(self):
- """assert we get all files processed even if there are
- processing exceptions.
- """
- list_dir = 'heat.engine.resources._list_environment_files'
- with mock.patch(list_dir) as m_ldir:
- m_ldir.return_value = ['a.yaml', 'b.yaml']
- env_dir = '/etc_etc/heat/environment.d'
- env_content = '{}'
-
- with mock.patch('heat.engine.resources.open',
- mock.mock_open(read_data=env_content),
- create=True) as m_open:
- m_open.side_effect = IOError
- resources._load_global_environment(env_dir)
-
- m_ldir.assert_called_once_with(env_dir)
- expected = [mock.call('%s/a.yaml' % env_dir),
- mock.call('%s/b.yaml' % env_dir)]
- self.assertEqual(expected, m_open.call_args_list)
-
- def test_continue_on_parse_error(self):
- """assert we get all files processed even if there are
- processing exceptions.
- """
- list_dir = 'heat.engine.resources._list_environment_files'
- with mock.patch(list_dir) as m_ldir:
- m_ldir.return_value = ['a.yaml', 'b.yaml']
- env_dir = '/etc_etc/heat/environment.d'
- env_content = '{@$%#$%'
-
- with mock.patch('heat.engine.resources.open',
- mock.mock_open(read_data=env_content),
- create=True) as m_open:
- resources._load_global_environment(env_dir)
-
- m_ldir.assert_called_once_with(env_dir)
- expected = [mock.call('%s/a.yaml' % env_dir),
- mock.call('%s/b.yaml' % env_dir)]
- self.assertEqual(expected, m_open.call_args_list)
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from heat.common import environment_format
-from heat.tests import common
-
-
-class YamlEnvironmentTest(common.HeatTestCase):
-
- def test_minimal_yaml(self):
- yaml1 = ''
- yaml2 = '''
-parameters: {}
-resource_registry: {}
-'''
- tpl1 = environment_format.parse(yaml1)
- environment_format.default_for_missing(tpl1)
- tpl2 = environment_format.parse(yaml2)
- self.assertEqual(tpl1, tpl2)
-
- def test_wrong_sections(self):
- env = '''
-parameters: {}
-resource_regis: {}
-'''
- self.assertRaises(ValueError, environment_format.parse, env)
-
- def test_bad_yaml(self):
- env = '''
-parameters: }
-'''
- self.assertRaises(ValueError, environment_format.parse, env)
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-
-import heat.db.api as db_api
-from heat.engine import parser
-from heat.engine import resource
-from heat.engine import template
-from heat.engine import event
-
-from heat.tests.common import HeatTestCase
-from heat.tests import generic_resource as generic_rsrc
-from heat.tests import utils
-
-
-tmpl = {
- 'Resources': {
- 'EventTestResource': {
- 'Type': 'ResourceWithRequiredProps',
- 'Properties': {'Foo': 'goo'}
- }
- }
-}
-
-
-class EventTest(HeatTestCase):
-
- def setUp(self):
- super(EventTest, self).setUp()
- self.username = 'event_test_user'
-
- utils.setup_dummy_db()
- self.ctx = utils.dummy_context()
-
- self.m.ReplayAll()
-
- resource._register_class('ResourceWithRequiredProps',
- generic_rsrc.ResourceWithRequiredProps)
-
- self.stack = parser.Stack(self.ctx, 'event_load_test_stack',
- template.Template(tmpl))
- self.stack.store()
-
- self.resource = self.stack['EventTestResource']
- self.resource._store()
- self.addCleanup(db_api.stack_delete, self.ctx, self.stack.id)
-
- def test_load(self):
- self.resource.resource_id_set('resource_physical_id')
-
- e = event.Event(self.ctx, self.stack, 'TEST', 'IN_PROGRESS', 'Testing',
- 'wibble', self.resource.properties,
- self.resource.name, self.resource.type())
-
- e.store()
- self.assertNotEqual(e.id, None)
-
- loaded_e = event.Event.load(self.ctx, e.id)
-
- self.assertEqual(self.stack.id, loaded_e.stack.id)
- self.assertEqual(self.resource.name, loaded_e.resource_name)
- self.assertEqual('wibble', loaded_e.physical_resource_id)
- self.assertEqual('TEST', loaded_e.action)
- self.assertEqual('IN_PROGRESS', loaded_e.status)
- self.assertEqual('Testing', loaded_e.reason)
- self.assertNotEqual(None, loaded_e.timestamp)
- self.assertEqual({'Foo': 'goo'}, loaded_e.resource_properties)
-
- def test_load_given_stack_event(self):
- self.resource.resource_id_set('resource_physical_id')
-
- e = event.Event(self.ctx, self.stack, 'TEST', 'IN_PROGRESS', 'Testing',
- 'wibble', self.resource.properties,
- self.resource.name, self.resource.type())
-
- e.store()
- self.assertNotEqual(e.id, None)
-
- ev = db_api.event_get(self.ctx, e.id)
-
- loaded_e = event.Event.load(self.ctx, e.id, stack=self.stack, event=ev)
-
- self.assertEqual(self.stack.id, loaded_e.stack.id)
- self.assertEqual(self.resource.name, loaded_e.resource_name)
- self.assertEqual('wibble', loaded_e.physical_resource_id)
- self.assertEqual('TEST', loaded_e.action)
- self.assertEqual('IN_PROGRESS', loaded_e.status)
- self.assertEqual('Testing', loaded_e.reason)
- self.assertNotEqual(None, loaded_e.timestamp)
- self.assertEqual({'Foo': 'goo'}, loaded_e.resource_properties)
-
- def test_identifier(self):
- e = event.Event(self.ctx, self.stack, 'TEST', 'IN_PROGRESS', 'Testing',
- 'wibble', self.resource.properties,
- self.resource.name, self.resource.type())
-
- eid = e.store()
- expected_identifier = {
- 'stack_name': self.stack.name,
- 'stack_id': self.stack.id,
- 'tenant': self.ctx.tenant_id,
- 'path': '/resources/EventTestResource/events/%s' % str(eid)
- }
- self.assertEqual(expected_identifier, e.identifier())
-
- def test_badprop(self):
- tmpl = {'Type': 'ResourceWithRequiredProps',
- 'Properties': {'Foo': False}}
- rname = 'bad_resource'
- res = generic_rsrc.ResourceWithRequiredProps(rname, tmpl, self.stack)
- e = event.Event(self.ctx, self.stack, 'TEST', 'IN_PROGRESS', 'Testing',
- 'wibble', res.properties, res.name, res.type())
- self.assertTrue('Error' in e.resource_properties)
+++ /dev/null
-# Copyright 2012 OpenStack Foundation
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-
-import fixtures
-
-from heat.common import exception
-from heat.tests import common
-
-
-class TestException(exception.HeatException):
- message = _("Testing message %(text)s")
-
-
-class TestHeatException(common.HeatTestCase):
-
- def test_fatal_exception_error(self):
- self.useFixture(fixtures.MonkeyPatch(
- 'heat.common.exception._FATAL_EXCEPTION_FORMAT_ERRORS',
- True))
- self.assertRaises(KeyError, TestException)
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from heat.common import exception as heat_exc
-from heat.openstack.common.rpc import common as rpc_common
-from heat.tests.common import HeatTestCase
-from oslo.config import cfg
-
-import heat.api.middleware.fault as fault
-
-
-class FaultMiddlewareTest(HeatTestCase):
-
- def test_openstack_exception_with_kwargs(self):
- wrapper = fault.FaultWrapper(None)
- msg = wrapper._error(heat_exc.StackNotFound(stack_name='a'))
- expected = {'code': 404,
- 'error': {'message': 'The Stack (a) could not be found.',
- 'traceback': None,
- 'type': 'StackNotFound'},
- 'explanation': 'The resource could not be found.',
- 'title': 'Not Found'}
- self.assertEqual(msg, expected)
-
- def test_openstack_exception_without_kwargs(self):
- wrapper = fault.FaultWrapper(None)
- msg = wrapper._error(heat_exc.NoServiceEndpoint())
- expected = {'code': 500,
- 'error': {'message': 'Response from Keystone does '
- 'not contain a Heat endpoint.',
- 'traceback': None,
- 'type': 'NoServiceEndpoint'},
- 'explanation': 'The server has either erred or is '
- 'incapable of performing the requested '
- 'operation.',
- 'title': 'Internal Server Error'}
- self.assertEqual(msg, expected)
-
- def test_exception_with_non_ascii_chars(self):
- # We set debug to true to test the code path for serializing traces too
- cfg.CONF.set_override('debug', True)
- msg = u'Error with non-ascii chars \x80'
-
- class TestException(heat_exc.HeatException):
- message = msg
-
- wrapper = fault.FaultWrapper(None)
- msg = wrapper._error(TestException())
- expected = {'code': 500,
- 'error': {'message': u'Error with non-ascii chars \x80',
- 'traceback': 'None\n',
- 'type': 'TestException'},
- 'explanation': ('The server has either erred or is '
- 'incapable of performing the requested '
- 'operation.'),
- 'title': 'Internal Server Error'}
- self.assertEqual(msg, expected)
-
- def test_remote_exception(self):
- # We want tracebacks
- cfg.CONF.set_override('debug', True)
- error = heat_exc.StackNotFound(stack_name='a')
- exc_info = (type(error), error, None)
- serialized = rpc_common.serialize_remote_exception(exc_info)
- remote_error = rpc_common.deserialize_remote_exception(cfg.CONF,
- serialized)
- wrapper = fault.FaultWrapper(None)
- msg = wrapper._error(remote_error)
- expected_message, expected_traceback = str(remote_error).split('\n', 1)
- expected = {'code': 404,
- 'error': {'message': expected_message,
- 'traceback': expected_traceback,
- 'type': 'StackNotFound'},
- 'explanation': 'The resource could not be found.',
- 'title': 'Not Found'}
- self.assertEqual(msg, expected)
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import mox
-
-from oslo.config import cfg
-
-from heat.common import exception
-from heat.common import heat_keystoneclient
-from heat.tests.common import HeatTestCase
-from heat.tests import utils
-
-from heat.openstack.common import importutils
-
-
-class KeystoneClientTest(HeatTestCase):
- """Test cases for heat.common.heat_keystoneclient."""
-
- def setUp(self):
- super(KeystoneClientTest, self).setUp()
-
- # Import auth_token to have keystone_authtoken settings setup.
- importutils.import_module('keystoneclient.middleware.auth_token')
-
- dummy_url = 'http://server.test:5000/v2.0'
- cfg.CONF.set_override('auth_uri', dummy_url,
- group='keystone_authtoken')
- cfg.CONF.set_override('admin_user', 'heat',
- group='keystone_authtoken')
- cfg.CONF.set_override('admin_password', 'verybadpass',
- group='keystone_authtoken')
- cfg.CONF.set_override('admin_tenant_name', 'service',
- group='keystone_authtoken')
- self.addCleanup(self.m.VerifyAll)
-
- def _stubs_v2(self, method='token', auth_ok=True):
- self.m.StubOutClassWithMocks(heat_keystoneclient.kc, "Client")
- if method == 'token':
- self.mock_ks_client = heat_keystoneclient.kc.Client(
- auth_url=mox.IgnoreArg(),
- tenant_name='test_tenant',
- token='abcd1234')
- self.mock_ks_client.authenticate().AndReturn(auth_ok)
- elif method == 'password':
- self.mock_ks_client = heat_keystoneclient.kc.Client(
- auth_url=mox.IgnoreArg(),
- tenant_name='test_tenant',
- tenant_id='test_tenant_id',
- username='test_username',
- password='password')
- self.mock_ks_client.authenticate().AndReturn(auth_ok)
-
- def _stubs_v3(self, method='token', auth_ok=True):
- self.m.StubOutClassWithMocks(heat_keystoneclient.kc, "Client")
- self.m.StubOutClassWithMocks(heat_keystoneclient.kc_v3, "Client")
-
- if method == 'token':
- self.mock_ks_v3_client = heat_keystoneclient.kc_v3.Client(
- token='abcd1234', project_name='test_tenant',
- auth_url='http://server.test:5000/v3',
- endpoint='http://server.test:5000/v3')
- elif method == 'password':
- self.mock_ks_v3_client = heat_keystoneclient.kc_v3.Client(
- username='test_username',
- password='password',
- project_name='test_tenant',
- project_id='test_tenant_id',
- auth_url='http://server.test:5000/v3',
- endpoint='http://server.test:5000/v3')
- elif method == 'trust':
- self.mock_ks_v3_client = heat_keystoneclient.kc_v3.Client(
- username='heat',
- password='verybadpass',
- project_name='service',
- auth_url='http://server.test:5000/v3',
- trust_id='atrust123')
-
- self.mock_ks_v3_client.authenticate().AndReturn(auth_ok)
- if auth_ok:
- self.mock_ks_v3_client.auth_ref = self.m.CreateMockAnything()
- self.mock_ks_v3_client.auth_ref.get('auth_token').AndReturn(
- 'av3token')
- self.mock_ks_client = heat_keystoneclient.kc.Client(
- auth_url=mox.IgnoreArg(),
- tenant_name='test_tenant',
- token='4b97cc1b2454e137ee2e8261e115bbe8')
- self.mock_ks_client.authenticate().AndReturn(auth_ok)
-
- def test_username_length(self):
- """Test that user names >64 characters are properly truncated."""
-
- self._stubs_v2()
-
- # a >64 character user name and the expected version
- long_user_name = 'U' * 64 + 'S'
- good_user_name = long_user_name[-64:]
- # mock keystone client user functions
- self.mock_ks_client.users = self.m.CreateMockAnything()
- mock_user = self.m.CreateMockAnything()
- # when keystone is called, the name should have been truncated
- # to the last 64 characters of the long name
- (self.mock_ks_client.users.create(good_user_name, 'password',
- mox.IgnoreArg(), enabled=True,
- tenant_id=mox.IgnoreArg())
- .AndReturn(mock_user))
- # mock out the call to roles; will send an error log message but does
- # not raise an exception
- self.mock_ks_client.roles = self.m.CreateMockAnything()
- self.mock_ks_client.roles.list().AndReturn([])
- self.m.ReplayAll()
- # call create_stack_user with a long user name.
- # the cleanup VerifyAll should verify that though we passed
- # long_user_name, keystone was actually called with a truncated
- # user name
- heat_ks_client = heat_keystoneclient.KeystoneClient(
- utils.dummy_context())
- heat_ks_client.create_stack_user(long_user_name, password='password')
-
- def test_init_v2_password(self):
-
- """Test creating the client without trusts, user/password context."""
-
- self._stubs_v2(method='password')
- self.m.ReplayAll()
-
- ctx = utils.dummy_context()
- ctx.auth_token = None
- heat_ks_client = heat_keystoneclient.KeystoneClient(ctx)
- self.assertIsNotNone(heat_ks_client.client_v2)
- self.assertIsNone(heat_ks_client.client_v3)
-
- def test_init_v2_bad_nocreds(self):
-
- """Test creating the client without trusts, no credentials."""
-
- ctx = utils.dummy_context()
- ctx.auth_token = None
- ctx.username = None
- ctx.password = None
- self.assertRaises(exception.AuthorizationFailure,
- heat_keystoneclient.KeystoneClient, ctx)
-
- def test_init_v2_bad_denied(self):
-
- """Test creating the client without trusts, auth failure."""
-
- self._stubs_v2(method='password', auth_ok=False)
- self.m.ReplayAll()
-
- ctx = utils.dummy_context()
- ctx.auth_token = None
- self.assertRaises(exception.AuthorizationFailure,
- heat_keystoneclient.KeystoneClient, ctx)
-
- def test_init_v3_token(self):
-
- """Test creating the client with trusts, token auth."""
-
- cfg.CONF.set_override('deferred_auth_method', 'trusts')
-
- self._stubs_v3()
- self.m.ReplayAll()
-
- ctx = utils.dummy_context()
- ctx.username = None
- ctx.password = None
- heat_ks_client = heat_keystoneclient.KeystoneClient(ctx)
- self.assertIsNotNone(heat_ks_client.client_v2)
- self.assertIsNotNone(heat_ks_client.client_v3)
-
- def test_init_v3_password(self):
-
- """Test creating the client with trusts, password auth."""
-
- cfg.CONF.set_override('deferred_auth_method', 'trusts')
-
- self._stubs_v3(method='password')
- self.m.ReplayAll()
-
- ctx = utils.dummy_context()
- ctx.auth_token = None
- ctx.trust_id = None
- heat_ks_client = heat_keystoneclient.KeystoneClient(ctx)
- self.assertIsNotNone(heat_ks_client.client_v2)
- self.assertIsNotNone(heat_ks_client.client_v3)
-
- def test_init_v3_bad_nocreds(self):
-
- """Test creating the client with trusts, no credentials."""
-
- cfg.CONF.set_override('deferred_auth_method', 'trusts')
-
- ctx = utils.dummy_context()
- ctx.auth_token = None
- ctx.trust_id = None
- ctx.username = None
- ctx.password = None
- self.assertRaises(exception.AuthorizationFailure,
- heat_keystoneclient.KeystoneClient, ctx)
-
- def test_init_v3_bad_denied(self):
-
- """Test creating the client with trusts, auth failure."""
-
- cfg.CONF.set_override('deferred_auth_method', 'trusts')
-
- self._stubs_v3(method='password', auth_ok=False)
- self.m.ReplayAll()
-
- ctx = utils.dummy_context()
- ctx.auth_token = None
- ctx.trust_id = None
- self.assertRaises(exception.AuthorizationFailure,
- heat_keystoneclient.KeystoneClient, ctx)
-
- def test_create_trust_context_notrust(self):
-
- """Test create_trust_context with trusts disabled."""
-
- self._stubs_v2(method='password')
- self.m.ReplayAll()
-
- ctx = utils.dummy_context()
- ctx.auth_token = None
- ctx.trust_id = None
- heat_ks_client = heat_keystoneclient.KeystoneClient(ctx)
- self.assertIsNone(heat_ks_client.create_trust_context())
-
- def test_create_trust_context_trust_id(self):
-
- """Test create_trust_context with existing trust_id."""
-
- cfg.CONF.set_override('deferred_auth_method', 'trusts')
-
- self._stubs_v3()
- self.m.ReplayAll()
-
- ctx = utils.dummy_context()
- heat_ks_client = heat_keystoneclient.KeystoneClient(ctx)
- self.assertIsNone(heat_ks_client.create_trust_context())
-
- def test_create_trust_context_trust_create(self):
-
- """Test create_trust_context when creating a new trust."""
-
- cfg.CONF.set_override('deferred_auth_method', 'trusts')
-
- class MockTrust(object):
- id = 'atrust123'
-
- self._stubs_v3()
- mock_admin_client = heat_keystoneclient.kc.Client(
- auth_url=mox.IgnoreArg(),
- username='heat',
- password='verybadpass',
- tenant_name='service')
- mock_admin_client.authenticate().AndReturn(True)
- mock_admin_client.auth_ref = self.m.CreateMockAnything()
- mock_admin_client.auth_ref.__getitem__('user').AndReturn(
- {'id': '1234'})
- self.mock_ks_v3_client.auth_ref.__getitem__('user').AndReturn(
- {'id': '5678'})
- self.mock_ks_v3_client.auth_ref.__getitem__('project').AndReturn(
- {'id': '42'})
- self.mock_ks_v3_client.trusts = self.m.CreateMockAnything()
- self.mock_ks_v3_client.trusts.create(
- trustor_user='5678',
- trustee_user='1234',
- project='42',
- impersonation=True,
- role_names=['heat_stack_owner']).AndReturn(MockTrust())
-
- self.m.ReplayAll()
-
- ctx = utils.dummy_context()
- ctx.trust_id = None
- heat_ks_client = heat_keystoneclient.KeystoneClient(ctx)
- self.assertIsNone(heat_ks_client.create_trust_context())
- self.assertEqual(ctx.trust_id, 'atrust123')
- self.assertEqual(ctx.trustor_user_id, '5678')
-
- def test_create_trust_context_denied(self):
-
- """Test create_trust_context when creating admin auth fails."""
-
- cfg.CONF.set_override('deferred_auth_method', 'trusts')
-
- self._stubs_v3()
- mock_admin_client = heat_keystoneclient.kc.Client(
- auth_url=mox.IgnoreArg(),
- username='heat',
- password='verybadpass',
- tenant_name='service')
- mock_admin_client.authenticate().AndReturn(False)
- self.m.ReplayAll()
-
- ctx = utils.dummy_context()
- ctx.trust_id = None
- heat_ks_client = heat_keystoneclient.KeystoneClient(ctx)
- self.assertRaises(exception.AuthorizationFailure,
- heat_ks_client.create_trust_context)
-
- def test_trust_init(self):
-
- """Test consuming a trust when initializing."""
-
- cfg.CONF.set_override('deferred_auth_method', 'trusts')
-
- self._stubs_v3(method='trust')
- self.m.ReplayAll()
-
- ctx = utils.dummy_context()
- ctx.username = None
- ctx.password = None
- ctx.auth_token = None
- heat_keystoneclient.KeystoneClient(ctx)
-
- def test_delete_trust_context(self):
-
- """Test delete_trust_context when deleting trust."""
-
- cfg.CONF.set_override('deferred_auth_method', 'trusts')
-
- self._stubs_v3()
- self.mock_ks_v3_client.trusts = self.m.CreateMockAnything()
- self.mock_ks_v3_client.trusts.delete('atrust123').AndReturn(None)
-
- self.m.ReplayAll()
- ctx = utils.dummy_context()
- heat_ks_client = heat_keystoneclient.KeystoneClient(ctx)
- self.assertIsNone(heat_ks_client.delete_trust_context())
-
- def test_delete_trust_context_notrust(self):
-
- """Test delete_trust_context no trust_id specified."""
-
- cfg.CONF.set_override('deferred_auth_method', 'trusts')
-
- self._stubs_v3()
- self.m.ReplayAll()
- ctx = utils.dummy_context()
- ctx.trust_id = None
- heat_ks_client = heat_keystoneclient.KeystoneClient(ctx)
- self.assertIsNone(heat_ks_client.delete_trust_context())
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-from heat.common import template_format
-from heat.common import exception
-from heat.engine import parser
-from heat.engine import hot
-from heat.engine import parameters
-from heat.engine import template
-
-from heat.tests.common import HeatTestCase
-from heat.tests import test_parser
-from heat.tests import utils
-
-
-hot_tpl_empty = template_format.parse('''
-heat_template_version: 2013-05-23
-''')
-
-
-class HOTemplateTest(HeatTestCase):
- """Test processing of HOT templates."""
-
- def test_defaults(self):
- """Test default content behavior of HOT template."""
-
- tmpl = parser.Template(hot_tpl_empty)
- # check if we get the right class
- self.assertIsInstance(tmpl, hot.HOTemplate)
- # test that getting an invalid section raises KeyError
- self.assertRaises(KeyError, tmpl.__getitem__, 'foobar')
-
- # test defaults for valid sections
- self.assertEqual(tmpl[hot.VERSION], '2013-05-23')
- self.assertEqual(tmpl[hot.DESCRIPTION], 'No description')
- self.assertEqual(tmpl[hot.PARAMETERS], {})
- self.assertEqual(tmpl[hot.RESOURCES], {})
- self.assertEqual(tmpl[hot.OUTPUTS], {})
-
- def test_translate_parameters(self):
- """Test translation of parameters into internal engine format."""
-
- hot_tpl = template_format.parse('''
- heat_template_version: 2013-05-23
- parameters:
- param1:
- description: foo
- type: string
- default: boo
- ''')
-
- expected = {'param1': {'Description': 'foo',
- 'Type': 'String',
- 'Default': 'boo'}}
-
- tmpl = parser.Template(hot_tpl)
- self.assertEqual(tmpl[hot.PARAMETERS], expected)
-
- def test_translate_parameters_unsupported_type(self):
- """Test translation of parameters into internal engine format
-
- This tests that parameters with a type not yet supported by the
- engine are still parsed.
- """
-
- hot_tpl = template_format.parse('''
- heat_template_version: 2013-05-23
- parameters:
- param1:
- description: foo
- type: unsupported_type
- ''')
-
- expected = {'param1': {'Description': 'foo',
- 'Type': 'UnsupportedType'}}
-
- tmpl = parser.Template(hot_tpl)
- self.assertEqual(tmpl[hot.PARAMETERS], expected)
-
- def test_translate_parameters_length_range(self):
- hot_tpl = template_format.parse('''
- heat_template_version: 2013-05-23
- parameters:
- wait_time:
- description: application wait time
- type: number
- default: 150
- constraints:
- - range: { min: 120, max: 600}
- description: min value 120 seconds, max value 600 seconds
- key_name:
- description: Name of an existing EC2 KeyPair
- type: string
- default: heat_key
- constraints:
- - length: {min: 1, max: 32}
- description: length should be between 1 and 32
- ''')
-
- expected = {
- 'wait_time': {
- 'Description': 'application wait time',
- 'Type': 'Number',
- 'Default': 150,
- 'MaxValue': [
- (600, 'min value 120 seconds, max value 600 seconds')],
- 'MinValue': [
- (120, 'min value 120 seconds, max value 600 seconds')]
- },
- 'key_name': {
- 'Description': 'Name of an existing EC2 KeyPair',
- 'Type': 'String',
- 'Default': 'heat_key',
- 'MaxLength': [(32, u'length should be between 1 and 32')],
- 'MinLength': [(1, u'length should be between 1 and 32')]
- }}
-
- tmpl = parser.Template(hot_tpl)
- self.assertEqual(expected, tmpl[hot.PARAMETERS])
-
- def test_translate_parameters_allowed_values(self):
- hot_tpl = template_format.parse('''
- heat_template_version: 2013-05-23
- parameters:
- instance_type:
- description: instance type
- type: string
- default: m1.small
- constraints:
- - allowed_values: ["m1.tiny",
- "m1.small",
- "m1.medium", "m1.large", "m1.xlarge"]
- description: must be a valid EC2 instance type.
- ''')
- expected = {
- 'instance_type': {
- 'Description': 'instance type',
- 'Type': 'String',
- 'Default': 'm1.small',
- 'AllowedValues': [(["m1.tiny",
- "m1.small",
- "m1.medium",
- "m1.large",
- "m1.xlarge"],
- 'must be a valid EC2 instance type.')]}}
-
- tmpl = parser.Template(hot_tpl)
- self.assertEqual(expected, tmpl[hot.PARAMETERS])
-
- def test_translate_parameters_allowed_patterns(self):
- hot_tpl = template_format.parse('''
- heat_template_version: 2013-05-23
- parameters:
- db_name:
- description: The WordPress database name
- type: string
- default: wordpress
- constraints:
- - length: { min: 1, max: 64 }
- description: string length should be between 1 and 64
- - allowed_pattern: "[a-zA-Z]+"
- description: Value must consist of characters only
- - allowed_pattern: "[a-z]+[a-zA-Z]*"
- description: Value must start with a lowercase character
- ''')
- expected = {
- 'db_name': {
- 'Description': 'The WordPress database name',
- 'Type': 'String',
- 'Default': 'wordpress',
- 'MinLength': [(1, 'string length should be between 1 and 64')],
- 'MaxLength': [(64, 'string length should be between 1 and 64')],
- 'AllowedPattern': [
- ('[a-zA-Z]+',
- 'Value must consist of characters only'),
- ('[a-z]+[a-zA-Z]*',
- 'Value must start with a lowercase character')]}}
- tmpl = parser.Template(hot_tpl)
- self.assertEqual(expected, tmpl[hot.PARAMETERS])
-
- def test_translate_parameters_hidden(self):
- hot_tpl = template_format.parse('''
- heat_template_version: 2013-05-23
- parameters:
- user_roles:
- description: User roles
- type: comma_delimited_list
- default: guest,newhire
- hidden: TRUE
- ''')
- expected = {
- 'user_roles': {
- 'Description': 'User roles',
- 'Type': 'CommaDelimitedList',
- 'Default': 'guest,newhire',
- 'NoEcho': True
- }}
-
- tmpl = parser.Template(hot_tpl)
- self.assertEqual(expected, tmpl[hot.PARAMETERS])
-
- def test_translate_resources(self):
- """Test translation of resources into internal engine format."""
-
- hot_tpl = template_format.parse('''
- heat_template_version: 2013-05-23
- resources:
- resource1:
- type: AWS::EC2::Instance
- properties:
- property1: value1
- ''')
-
- expected = {'resource1': {'Type': 'AWS::EC2::Instance',
- 'Properties': {'property1': 'value1'}}}
-
- tmpl = parser.Template(hot_tpl)
- self.assertEqual(tmpl[hot.RESOURCES], expected)
-
- def test_translate_outputs(self):
- """Test translation of outputs into internal engine format."""
-
- hot_tpl = template_format.parse('''
- heat_template_version: 2013-05-23
- outputs:
- output1:
- description: output1
- value: value1
- ''')
-
- expected = {'output1': {'Description': 'output1', 'Value': 'value1'}}
-
- tmpl = parser.Template(hot_tpl)
- self.assertEqual(tmpl[hot.OUTPUTS], expected)
-
- def test_param_refs(self):
- """Test if parameter references work."""
- params = {'foo': 'bar', 'blarg': 'wibble'}
- snippet = {'properties': {'key1': {'get_param': 'foo'},
- 'key2': {'get_param': 'blarg'}}}
- snippet_resolved = {'properties': {'key1': 'bar',
- 'key2': 'wibble'}}
- tmpl = parser.Template(hot_tpl_empty)
- self.assertEqual(tmpl.resolve_param_refs(snippet, params),
- snippet_resolved)
- snippet = {'properties': {'key1': {'Ref': 'foo'},
- 'key2': {'Ref': 'blarg'}}}
- snippet_resolved = {'properties': {'key1': 'bar',
- 'key2': 'wibble'}}
- tmpl = parser.Template(hot_tpl_empty)
- self.assertEqual(snippet_resolved,
- tmpl.resolve_param_refs(snippet, params))
-
- def test_str_replace(self):
- """Test str_replace function."""
-
- snippet = {'str_replace': {'template': 'Template $var1 string $var2',
- 'params': {'var1': 'foo', 'var2': 'bar'}}}
- snippet_resolved = 'Template foo string bar'
-
- tmpl = parser.Template(hot_tpl_empty)
-
- self.assertEqual(snippet_resolved,
- tmpl.resolve_replace(snippet))
-
- def test_str_fn_replace(self):
- """Test Fn:Replace function."""
-
- snippet = {'Fn::Replace': [{'$var1': 'foo', '$var2': 'bar'},
- 'Template $var1 string $var2']}
- snippet_resolved = 'Template foo string bar'
-
- tmpl = parser.Template(hot_tpl_empty)
-
- self.assertEqual(tmpl.resolve_replace(snippet), snippet_resolved)
-
- def test_str_replace_syntax(self):
- """
- Test str_replace function syntax.
-
- Pass wrong syntax (array instead of dictionary) to function and
- validate that we get a TypeError.
- """
-
- snippet = {'str_replace': [{'template': 'Template $var1 string $var2'},
- {'params': {'var1': 'foo', 'var2': 'bar'}}]}
-
- tmpl = parser.Template(hot_tpl_empty)
-
- self.assertRaises(TypeError, tmpl.resolve_replace, snippet)
-
- def test_str_replace_invalid_param_keys(self):
- """
- Test str_replace function parameter keys.
-
- Pass wrong parameters to function and verify that we get
- a KeyError.
- """
-
- snippet = {'str_replace': {'tmpl': 'Template $var1 string $var2',
- 'params': {'var1': 'foo', 'var2': 'bar'}}}
-
- tmpl = parser.Template(hot_tpl_empty)
-
- self.assertRaises(KeyError, tmpl.resolve_replace, snippet)
-
- snippet = {'str_replace': {'tmpl': 'Template $var1 string $var2',
- 'parms': {'var1': 'foo', 'var2': 'bar'}}}
-
- self.assertRaises(KeyError, tmpl.resolve_replace, snippet)
-
- def test_str_replace_invalid_param_types(self):
- """
- Test str_replace function parameter values.
-
- Pass parameter values of wrong type to function and verify that we get
- a TypeError.
- """
-
- snippet = {'str_replace': {'template': 12345,
- 'params': {'var1': 'foo', 'var2': 'bar'}}}
-
- tmpl = parser.Template(hot_tpl_empty)
-
- self.assertRaises(TypeError, tmpl.resolve_replace, snippet)
-
- snippet = {'str_replace': {'template': 'Template $var1 string $var2',
- 'params': ['var1', 'foo', 'var2', 'bar']}}
-
- self.assertRaises(TypeError, tmpl.resolve_replace, snippet)
-
-
-class StackTest(test_parser.StackTest):
- """Test stack function when stack was created from HOT template."""
-
- @utils.stack_delete_after
- def test_get_attr(self):
- """Test resolution of get_attr occurrences in HOT template."""
-
- hot_tpl = template_format.parse('''
- heat_template_version: 2013-05-23
- resources:
- resource1:
- type: GenericResourceType
- ''')
-
- self.stack = parser.Stack(self.ctx, 'test_get_attr',
- template.Template(hot_tpl))
- self.stack.store()
- self.stack.create()
- self.assertEqual(self.stack.state,
- (parser.Stack.CREATE, parser.Stack.COMPLETE))
-
- snippet = {'Value': {'get_attr': ['resource1', 'foo']}}
- resolved = hot.HOTemplate.resolve_attributes(snippet, self.stack)
- # GenericResourceType has an attribute 'foo' which yields the resource
- # name.
- self.assertEqual(resolved, {'Value': 'resource1'})
- # test invalid reference
- self.assertRaises(exception.InvalidTemplateAttribute,
- hot.HOTemplate.resolve_attributes,
- {'Value': {'get_attr': ['resource1', 'NotThere']}},
- self.stack)
-
- snippet = {'Value': {'Fn::GetAtt': ['resource1', 'foo']}}
- resolved = hot.HOTemplate.resolve_attributes(snippet, self.stack)
- self.assertEqual({'Value': 'resource1'}, resolved)
-
- @utils.stack_delete_after
- def test_get_resource(self):
- """Test resolution of get_resource occurrences in HOT template."""
-
- hot_tpl = template_format.parse('''
- heat_template_version: 2013-05-23
- resources:
- resource1:
- type: GenericResourceType
- ''')
-
- self.stack = parser.Stack(self.ctx, 'test_get_resource',
- template.Template(hot_tpl))
- self.stack.store()
- self.stack.create()
- self.assertEqual(self.stack.state,
- (parser.Stack.CREATE, parser.Stack.COMPLETE))
-
- snippet = {'value': {'get_resource': 'resource1'}}
- resolved = hot.HOTemplate.resolve_resource_refs(snippet, self.stack)
- self.assertEqual(resolved, {'value': 'resource1'})
-
-
-class HOTParamValidatorTest(HeatTestCase):
- "Test HOTParamValidator"
-
- def test_multiple_constraint_descriptions(self):
- len_desc = 'string length should be between 8 and 16'
- pattern_desc1 = 'Value must consist of characters only'
- pattern_desc2 = 'Value must start with a lowercase character'
- param = {
- 'db_name': {
- 'Description': 'The WordPress database name',
- 'Type': 'String',
- 'Default': 'wordpress',
- 'MinLength': [(8, len_desc)],
- 'MaxLength': [(16, len_desc)],
- 'AllowedPattern': [
- ('[a-zA-Z]+', pattern_desc1),
- ('[a-z]+[a-zA-Z]*', pattern_desc2)]}}
-
- name = 'db_name'
- schema = param['db_name']
-
- def v(value):
- hot.HOTParamSchema(schema).do_check(name, value,
- [parameters.ALLOWED_VALUES,
- parameters.ALLOWED_PATTERN,
- parameters.MAX_LENGTH,
- parameters.MIN_LENGTH])
- return True
-
- value = 'wp'
- err = self.assertRaises(ValueError, v, value)
- self.assertIn(len_desc, str(err))
-
- value = 'abcdefghijklmnopq'
- err = self.assertRaises(ValueError, v, value)
- self.assertIn(len_desc, str(err))
-
- value = 'abcdefgh1'
- err = self.assertRaises(ValueError, v, value)
- self.assertIn(pattern_desc1, str(err))
-
- value = 'Abcdefghi'
- err = self.assertRaises(ValueError, v, value)
- self.assertIn(pattern_desc2, str(err))
-
- value = 'abcdefghi'
- self.assertTrue(v(value))
-
- value = 'abcdefghI'
- self.assertTrue(v(value))
-
- def test_hot_template_validate_param(self):
- len_desc = 'string length should be between 8 and 16'
- pattern_desc1 = 'Value must consist of characters only'
- pattern_desc2 = 'Value must start with a lowercase character'
- hot_tpl = template_format.parse('''
- heat_template_version: 2013-05-23
- parameters:
- db_name:
- description: The WordPress database name
- type: string
- default: wordpress
- constraints:
- - length: { min: 8, max: 16 }
- description: %s
- - allowed_pattern: "[a-zA-Z]+"
- description: %s
- - allowed_pattern: "[a-z]+[a-zA-Z]*"
- description: %s
- ''' % (len_desc, pattern_desc1, pattern_desc2))
- tmpl = parser.Template(hot_tpl)
-
- def run_parameters(value):
- parameters.Parameters("stack_testit", tmpl, {'db_name': value})
- return True
-
- value = 'wp'
- err = self.assertRaises(ValueError, run_parameters, value)
- self.assertIn(len_desc, str(err))
-
- value = 'abcdefghijklmnopq'
- err = self.assertRaises(ValueError, run_parameters, value)
- self.assertIn(len_desc, str(err))
-
- value = 'abcdefgh1'
- err = self.assertRaises(ValueError, run_parameters, value)
- self.assertIn(pattern_desc1, str(err))
-
- value = 'Abcdefghi'
- err = self.assertRaises(ValueError, run_parameters, value)
- self.assertIn(pattern_desc2, str(err))
-
- value = 'abcdefghi'
- self.assertTrue(run_parameters(value))
-
- value = 'abcdefghI'
- self.assertTrue(run_parameters(value))
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-
-import testtools
-
-from heat.common import identifier
-
-
-class IdentifierTest(testtools.TestCase):
- url_prefix = 'http://1.2.3.4/foo/'
-
- def test_attrs(self):
- hi = identifier.HeatIdentifier('t', 's', 'i', 'p')
- self.assertEqual(hi.tenant, 't')
- self.assertEqual(hi.stack_name, 's')
- self.assertEqual(hi.stack_id, 'i')
- self.assertEqual(hi.path, '/p')
-
- def test_path_default(self):
- hi = identifier.HeatIdentifier('t', 's', 'i')
- self.assertEqual(hi.path, '')
-
- def test_items(self):
- hi = identifier.HeatIdentifier('t', 's', 'i', 'p')
- self.assertEqual(hi['tenant'], 't')
- self.assertEqual(hi['stack_name'], 's')
- self.assertEqual(hi['stack_id'], 'i')
- self.assertEqual(hi['path'], '/p')
-
- def test_invalid_attr(self):
- hi = identifier.HeatIdentifier('t', 's', 'i', 'p')
- hi.identity['foo'] = 'bar'
- self.assertRaises(AttributeError, getattr, hi, 'foo')
-
- def test_invalid_item(self):
- hi = identifier.HeatIdentifier('t', 's', 'i', 'p')
- hi.identity['foo'] = 'bar'
- self.assertRaises(KeyError, lambda o, k: o[k], hi, 'foo')
-
- def test_arn(self):
- hi = identifier.HeatIdentifier('t', 's', 'i', 'p')
- self.assertEqual(hi.arn(), 'arn:openstack:heat::t:stacks/s/i/p')
-
- def test_arn_url(self):
- hi = identifier.HeatIdentifier('t', 's', 'i', 'p')
- self.assertEqual(hi.arn_url_path(),
- '/arn%3Aopenstack%3Aheat%3A%3At%3Astacks%2Fs%2Fi%2Fp')
-
- def test_arn_id_int(self):
- hi = identifier.HeatIdentifier('t', 's', 42, 'p')
- self.assertEqual(hi.arn(), 'arn:openstack:heat::t:stacks/s/42/p')
-
- def test_arn_parse(self):
- arn = 'arn:openstack:heat::t:stacks/s/i/p'
- hi = identifier.HeatIdentifier.from_arn(arn)
- self.assertEqual(hi.tenant, 't')
- self.assertEqual(hi.stack_name, 's')
- self.assertEqual(hi.stack_id, 'i')
- self.assertEqual(hi.path, '/p')
-
- def test_arn_url_parse(self):
- url = self.url_prefix + 'arn%3Aopenstack%3Aheat%3A%3At%3Astacks/s/i/p'
- hi = identifier.HeatIdentifier.from_arn_url(url)
- self.assertEqual(hi.tenant, 't')
- self.assertEqual(hi.stack_name, 's')
- self.assertEqual(hi.stack_id, 'i')
- self.assertEqual(hi.path, '/p')
-
- def test_arn_parse_path_default(self):
- arn = 'arn:openstack:heat::t:stacks/s/i'
- hi = identifier.HeatIdentifier.from_arn(arn)
- self.assertEqual(hi.tenant, 't')
- self.assertEqual(hi.stack_name, 's')
- self.assertEqual(hi.stack_id, 'i')
- self.assertEqual(hi.path, '')
-
- def test_arn_url_parse_default(self):
- url = self.url_prefix + 'arn%3Aopenstack%3Aheat%3A%3At%3Astacks/s/i'
- hi = identifier.HeatIdentifier.from_arn_url(url)
- self.assertEqual(hi.tenant, 't')
- self.assertEqual(hi.stack_name, 's')
- self.assertEqual(hi.stack_id, 'i')
- self.assertEqual(hi.path, '')
-
- def test_arn_parse_upper(self):
- arn = 'ARN:openstack:heat::t:stacks/s/i/p'
- hi = identifier.HeatIdentifier.from_arn(arn)
- self.assertEqual(hi.stack_name, 's')
- self.assertEqual(hi.stack_id, 'i')
- self.assertEqual(hi.path, '/p')
-
- def test_arn_url_parse_upper(self):
- url = self.url_prefix + 'ARN%3Aopenstack%3Aheat%3A%3At%3Astacks/s/i/p'
- hi = identifier.HeatIdentifier.from_arn_url(url)
- self.assertEqual(hi.tenant, 't')
- self.assertEqual(hi.stack_name, 's')
- self.assertEqual(hi.stack_id, 'i')
- self.assertEqual(hi.path, '/p')
-
- def test_arn_url_parse_qs(self):
- url = self.url_prefix +\
- 'arn%3Aopenstack%3Aheat%3A%3At%3Astacks/s/i/p?foo=bar'
- hi = identifier.HeatIdentifier.from_arn_url(url)
- self.assertEqual(hi.tenant, 't')
- self.assertEqual(hi.stack_name, 's')
- self.assertEqual(hi.stack_id, 'i')
- self.assertEqual(hi.path, '/p')
-
- def test_arn_parse_arn_invalid(self):
- arn = 'urn:openstack:heat::t:stacks/s/i'
- self.assertRaises(ValueError, identifier.HeatIdentifier.from_arn, arn)
-
- def test_arn_url_parse_arn_invalid(self):
- url = self.url_prefix + 'urn:openstack:heat::t:stacks/s/i/p'
- self.assertRaises(ValueError,
- identifier.HeatIdentifier.from_arn_url, url)
-
- def test_arn_parse_os_invalid(self):
- arn = 'arn:aws:heat::t:stacks/s/i'
- self.assertRaises(ValueError, identifier.HeatIdentifier.from_arn, arn)
-
- def test_arn_url_parse_os_invalid(self):
- url = self.url_prefix + 'arn:aws:heat::t:stacks/s/i/p'
- self.assertRaises(ValueError,
- identifier.HeatIdentifier.from_arn_url, url)
-
- def test_arn_parse_heat_invalid(self):
- arn = 'arn:openstack:cool::t:stacks/s/i'
- self.assertRaises(ValueError, identifier.HeatIdentifier.from_arn, arn)
-
- def test_arn_url_parse_heat_invalid(self):
- url = self.url_prefix + 'arn:openstack:cool::t:stacks/s/i/p'
- self.assertRaises(ValueError,
- identifier.HeatIdentifier.from_arn_url, url)
-
- def test_arn_parse_stacks_invalid(self):
- arn = 'arn:openstack:heat::t:sticks/s/i'
- self.assertRaises(ValueError, identifier.HeatIdentifier.from_arn, arn)
-
- def test_arn_url_parse_stacks_invalid(self):
- url = self.url_prefix + 'arn%3Aopenstack%3Aheat%3A%3At%3Asticks/s/i/p'
- self.assertRaises(ValueError,
- identifier.HeatIdentifier.from_arn_url, url)
-
- def test_arn_parse_missing_field(self):
- arn = 'arn:openstack:heat::t:stacks/s'
- self.assertRaises(ValueError, identifier.HeatIdentifier.from_arn, arn)
-
- def test_arn_url_parse_missing_field(self):
- url = self.url_prefix + 'arn%3Aopenstack%3Aheat%3A%3At%3Astacks/s/'
- self.assertRaises(ValueError,
- identifier.HeatIdentifier.from_arn_url, url)
-
- def test_arn_parse_empty_field(self):
- arn = 'arn:openstack:heat::t:stacks//i'
- self.assertRaises(ValueError, identifier.HeatIdentifier.from_arn, arn)
-
- def test_arn_url_parse_empty_field(self):
- url = self.url_prefix + 'arn%3Aopenstack%3Aheat%3A%3At%3Astacks//i'
- self.assertRaises(ValueError,
- identifier.HeatIdentifier.from_arn_url, url)
-
- def test_arn_url_parse_leading_char(self):
- url = self.url_prefix + 'Aarn%3Aopenstack%3Aheat%3A%3At%3Astacks/s/i/p'
- self.assertRaises(ValueError,
- identifier.HeatIdentifier.from_arn_url, url)
-
- def test_arn_url_parse_leading_space(self):
- url = self.url_prefix + ' arn%3Aopenstack%3Aheat%3A%3At%3Astacks/s/i/p'
- self.assertRaises(ValueError,
- identifier.HeatIdentifier.from_arn_url, url)
-
- def test_arn_url_parse_badurl_proto(self):
- url = 'htt://1.2.3.4/foo/arn%3Aopenstack%3Aheat%3A%3At%3Astacks/s/i/p'
- self.assertRaises(ValueError,
- identifier.HeatIdentifier.from_arn_url, url)
-
- def test_arn_url_parse_badurl_host(self):
- url = 'http:///foo/arn%3Aopenstack%3Aheat%3A%3At%3Astacks/s/i/p'
- self.assertRaises(ValueError,
- identifier.HeatIdentifier.from_arn_url, url)
-
- def test_arn_round_trip(self):
- hii = identifier.HeatIdentifier('t', 's', 'i', 'p')
- hio = identifier.HeatIdentifier.from_arn(hii.arn())
- self.assertEqual(hio.tenant, hii.tenant)
- self.assertEqual(hio.stack_name, hii.stack_name)
- self.assertEqual(hio.stack_id, hii.stack_id)
- self.assertEqual(hio.path, hii.path)
-
- def test_arn_parse_round_trip(self):
- arn = 'arn:openstack:heat::t:stacks/s/i/p'
- hi = identifier.HeatIdentifier.from_arn(arn)
- self.assertEqual(hi.arn(), arn)
-
- def test_arn_url_parse_round_trip(self):
- arn = '/arn%3Aopenstack%3Aheat%3A%3At%3Astacks%2Fs%2Fi%2Fp'
- url = 'http://1.2.3.4/foo' + arn
- hi = identifier.HeatIdentifier.from_arn_url(url)
- self.assertEqual(hi.arn_url_path(), arn)
-
- def test_dict_round_trip(self):
- hii = identifier.HeatIdentifier('t', 's', 'i', 'p')
- hio = identifier.HeatIdentifier(**dict(hii))
- self.assertEqual(hio.tenant, hii.tenant)
- self.assertEqual(hio.stack_name, hii.stack_name)
- self.assertEqual(hio.stack_id, hii.stack_id)
- self.assertEqual(hio.path, hii.path)
-
- def test_url_path(self):
- hi = identifier.HeatIdentifier('t', 's', 'i', 'p')
- self.assertEqual(hi.url_path(), 't/stacks/s/i/p')
-
- def test_url_path_default(self):
- hi = identifier.HeatIdentifier('t', 's', 'i')
- self.assertEqual(hi.url_path(), 't/stacks/s/i')
-
- def test_tenant_escape(self):
- hi = identifier.HeatIdentifier(':/', 's', 'i')
- self.assertEqual(hi.tenant, ':/')
- self.assertEqual(hi.url_path(), '%3A%2F/stacks/s/i')
- self.assertEqual(hi.arn(), 'arn:openstack:heat::%3A%2F:stacks/s/i')
-
- def test_name_escape(self):
- hi = identifier.HeatIdentifier('t', ':%', 'i')
- self.assertEqual(hi.stack_name, ':%')
- self.assertEqual(hi.url_path(), 't/stacks/%3A%25/i')
- self.assertEqual(hi.arn(), 'arn:openstack:heat::t:stacks/%3A%25/i')
-
- def test_id_escape(self):
- hi = identifier.HeatIdentifier('t', 's', ':/')
- self.assertEqual(hi.stack_id, ':/')
- self.assertEqual(hi.url_path(), 't/stacks/s/%3A%2F')
- self.assertEqual(hi.arn(), 'arn:openstack:heat::t:stacks/s/%3A%2F')
-
- def test_path_escape(self):
- hi = identifier.HeatIdentifier('t', 's', 'i', ':/')
- self.assertEqual(hi.path, '/:/')
- self.assertEqual(hi.url_path(), 't/stacks/s/i/%3A/')
- self.assertEqual(hi.arn(), 'arn:openstack:heat::t:stacks/s/i/%3A/')
-
- def test_tenant_decode(self):
- arn = 'arn:openstack:heat::%3A%2F:stacks/s/i'
- hi = identifier.HeatIdentifier.from_arn(arn)
- self.assertEqual(hi.tenant, ':/')
-
- def test_url_tenant_decode(self):
- enc_arn = 'arn%3Aopenstack%3Aheat%3A%3A%253A%252F%3Astacks%2Fs%2Fi'
- url = self.url_prefix + enc_arn
- hi = identifier.HeatIdentifier.from_arn_url(url)
- self.assertEqual(hi.tenant, ':/')
-
- def test_name_decode(self):
- arn = 'arn:openstack:heat::t:stacks/%3A%25/i'
- hi = identifier.HeatIdentifier.from_arn(arn)
- self.assertEqual(hi.stack_name, ':%')
-
- def test_url_name_decode(self):
- enc_arn = 'arn%3Aopenstack%3Aheat%3A%3At%3Astacks%2F%253A%2525%2Fi'
- url = self.url_prefix + enc_arn
- hi = identifier.HeatIdentifier.from_arn_url(url)
- self.assertEqual(hi.stack_name, ':%')
-
- def test_id_decode(self):
- arn = 'arn:openstack:heat::t:stacks/s/%3A%2F'
- hi = identifier.HeatIdentifier.from_arn(arn)
- self.assertEqual(hi.stack_id, ':/')
-
- def test_url_id_decode(self):
- enc_arn = 'arn%3Aopenstack%3Aheat%3A%3At%3Astacks%2Fs%2F%253A%252F'
- url = self.url_prefix + enc_arn
- hi = identifier.HeatIdentifier.from_arn_url(url)
- self.assertEqual(hi.stack_id, ':/')
-
- def test_path_decode(self):
- arn = 'arn:openstack:heat::t:stacks/s/i/%3A%2F'
- hi = identifier.HeatIdentifier.from_arn(arn)
- self.assertEqual(hi.path, '/:/')
-
- def test_url_path_decode(self):
- enc_arn = 'arn%3Aopenstack%3Aheat%3A%3At%3Astacks%2Fs%2Fi%2F%253A%252F'
- url = self.url_prefix + enc_arn
- hi = identifier.HeatIdentifier.from_arn_url(url)
- self.assertEqual(hi.path, '/:/')
-
- def test_arn_escape_decode_round_trip(self):
- hii = identifier.HeatIdentifier(':/', ':%', ':/', ':/')
- hio = identifier.HeatIdentifier.from_arn(hii.arn())
- self.assertEqual(hio.tenant, hii.tenant)
- self.assertEqual(hio.stack_name, hii.stack_name)
- self.assertEqual(hio.stack_id, hii.stack_id)
- self.assertEqual(hio.path, hii.path)
-
- def test_arn_decode_escape_round_trip(self):
- arn = 'arn:openstack:heat::%3A%2F:stacks/%3A%25/%3A%2F/%3A/'
- hi = identifier.HeatIdentifier.from_arn(arn)
- self.assertEqual(hi.arn(), arn)
-
- def test_arn_url_decode_escape_round_trip(self):
- enc_arn = "".join(['arn%3Aopenstack%3Aheat%3A%3A%253A%252F%3A',
- 'stacks%2F%253A%2525%2F%253A%252F%2F%253A'])
- url = self.url_prefix + enc_arn
- hi = identifier.HeatIdentifier.from_arn_url(url)
- hi2 = identifier.HeatIdentifier.from_arn_url(self.url_prefix +
- hi.arn_url_path())
- self.assertEqual(hi, hi2)
-
- def test_stack_name_slash(self):
- self.assertRaises(ValueError, identifier.HeatIdentifier,
- 't', 's/s', 'i', 'p')
-
- def test_equal(self):
- hi1 = identifier.HeatIdentifier('t', 's', 'i', 'p')
- hi2 = identifier.HeatIdentifier('t', 's', 'i', 'p')
- self.assertTrue(hi1 == hi2)
-
- def test_equal_dict(self):
- hi = identifier.HeatIdentifier('t', 's', 'i', 'p')
- self.assertTrue(hi == dict(hi))
- self.assertTrue(dict(hi) == hi)
-
- def test_not_equal(self):
- hi1 = identifier.HeatIdentifier('t', 's', 'i', 'p')
- hi2 = identifier.HeatIdentifier('t', 's', 'i', 'q')
- self.assertFalse(hi1 == hi2)
- self.assertFalse(hi2 == hi1)
-
- def test_not_equal_dict(self):
- hi1 = identifier.HeatIdentifier('t', 's', 'i', 'p')
- hi2 = identifier.HeatIdentifier('t', 's', 'i', 'q')
- self.assertFalse(hi1 == dict(hi2))
- self.assertFalse(dict(hi1) == hi2)
- self.assertFalse(hi1 == {'tenant': 't',
- 'stack_name': 's',
- 'stack_id': 'i'})
- self.assertFalse({'tenant': 't',
- 'stack_name': 's',
- 'stack_id': 'i'} == hi1)
-
- def test_path_components(self):
- hi = identifier.HeatIdentifier('t', 's', 'i', 'p1/p2/p3')
- self.assertEqual(hi._path_components(), ['p1', 'p2', 'p3'])
-
-
-class ResourceIdentifierTest(testtools.TestCase):
- def test_resource_init_no_path(self):
- si = identifier.HeatIdentifier('t', 's', 'i')
- ri = identifier.ResourceIdentifier(resource_name='r', **si)
- self.assertEqual(ri.path, '/resources/r')
-
- def test_resource_init_path(self):
- si = identifier.HeatIdentifier('t', 's', 'i')
- pi = identifier.ResourceIdentifier(resource_name='p', **si)
- ri = identifier.ResourceIdentifier(resource_name='r', **pi)
- self.assertEqual(ri.path, '/resources/p/resources/r')
-
- def test_resource_init_from_dict(self):
- hi = identifier.HeatIdentifier('t', 's', 'i', '/resources/r')
- ri = identifier.ResourceIdentifier(**hi)
- self.assertEqual(ri, hi)
-
- def test_resource_stack(self):
- si = identifier.HeatIdentifier('t', 's', 'i')
- ri = identifier.ResourceIdentifier(resource_name='r', **si)
- self.assertEqual(ri.stack(), si)
-
- def test_resource_id(self):
- ri = identifier.ResourceIdentifier('t', 's', 'i', '', 'r')
- self.assertEqual(ri.resource_name, 'r')
-
- def test_resource_name_slash(self):
- self.assertRaises(ValueError, identifier.ResourceIdentifier,
- 't', 's', 'i', 'p', 'r/r')
-
-
-class EventIdentifierTest(testtools.TestCase):
- def test_event_init(self):
- si = identifier.HeatIdentifier('t', 's', 'i')
- pi = identifier.ResourceIdentifier(resource_name='p', **si)
- ei = identifier.EventIdentifier(event_id='e', **pi)
- self.assertEqual(ei.path, '/resources/p/events/e')
-
- def test_event_init_from_dict(self):
- hi = identifier.HeatIdentifier('t', 's', 'i', '/resources/p/events/42')
- ei = identifier.EventIdentifier(**hi)
- self.assertEqual(ei, hi)
-
- def test_event_stack(self):
- si = identifier.HeatIdentifier('t', 's', 'i')
- pi = identifier.ResourceIdentifier(resource_name='r', **si)
- ei = identifier.EventIdentifier(event_id='e', **pi)
- self.assertEqual(ei.stack(), si)
-
- def test_event_resource(self):
- si = identifier.HeatIdentifier('t', 's', 'i')
- pi = identifier.ResourceIdentifier(resource_name='r', **si)
- ei = identifier.EventIdentifier(event_id='e', **pi)
- self.assertEqual(ei.resource(), pi)
-
- def test_resource_name(self):
- ei = identifier.EventIdentifier('t', 's', 'i', '/resources/p', 'e')
- self.assertEqual(ei.resource_name, 'p')
-
- def test_event_id(self):
- ei = identifier.EventIdentifier('t', 's', 'i', '/resources/p', 'e')
- self.assertEqual(ei.event_id, 'e')
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import copy
-
-import mox
-
-from heat.engine import environment
-from heat.tests.v1_1 import fakes
-from heat.common import exception
-from heat.common import template_format
-from heat.engine import parser
-from heat.engine import resource
-from heat.engine import scheduler
-from heat.engine.resources import instance as instances
-from heat.engine.resources import network_interface
-from heat.engine.resources import nova_utils
-from heat.openstack.common import uuidutils
-from heat.tests.common import HeatTestCase
-from heat.tests import utils
-
-from neutronclient.v2_0 import client as neutronclient
-
-
-wp_template = '''
-{
- "AWSTemplateFormatVersion" : "2010-09-09",
- "Description" : "WordPress",
- "Parameters" : {
- "KeyName" : {
- "Description" : "KeyName",
- "Type" : "String",
- "Default" : "test"
- }
- },
- "Resources" : {
- "WebServer": {
- "Type": "AWS::EC2::Instance",
- "Properties": {
- "ImageId" : "F17-x86_64-gold",
- "InstanceType" : "m1.large",
- "KeyName" : "test",
- "UserData" : "wordpress"
- }
- }
- }
-}
-'''
-
-
-class InstancesTest(HeatTestCase):
- def setUp(self):
- super(InstancesTest, self).setUp()
- self.fc = fakes.FakeClient()
- utils.setup_dummy_db()
-
- def _setup_test_stack(self, stack_name):
- t = template_format.parse(wp_template)
- template = parser.Template(t)
- stack = parser.Stack(utils.dummy_context(), stack_name, template,
- environment.Environment({'KeyName': 'test'}),
- stack_id=uuidutils.generate_uuid())
- return (t, stack)
-
- def _setup_test_instance(self, return_server, name, image_id=None):
- stack_name = '%s_stack' % name
- (t, stack) = self._setup_test_stack(stack_name)
-
- t['Resources']['WebServer']['Properties']['ImageId'] = \
- image_id or 'CentOS 5.2'
- t['Resources']['WebServer']['Properties']['InstanceType'] = \
- '256 MB Server'
- instance = instances.Instance('%s_name' % name,
- t['Resources']['WebServer'], stack)
-
- self.m.StubOutWithMock(instance, 'nova')
- instance.nova().MultipleTimes().AndReturn(self.fc)
-
- instance.t = instance.stack.resolve_runtime_data(instance.t)
-
- # need to resolve the template functions
- server_userdata = nova_utils.build_userdata(
- instance,
- instance.t['Properties']['UserData'])
- instance.mime_string = server_userdata
- self.m.StubOutWithMock(self.fc.servers, 'create')
- self.fc.servers.create(
- image=1, flavor=1, key_name='test',
- name=utils.PhysName(stack_name, instance.name),
- security_groups=None,
- userdata=server_userdata, scheduler_hints=None,
- meta=None, nics=None, availability_zone=None).AndReturn(
- return_server)
-
- return instance
-
- def _create_test_instance(self, return_server, name):
- instance = self._setup_test_instance(return_server, name)
- self.m.ReplayAll()
- scheduler.TaskRunner(instance.create)()
- return instance
-
- def test_instance_create(self):
- return_server = self.fc.servers.list()[1]
- instance = self._create_test_instance(return_server,
- 'test_instance_create')
- # this makes sure the auto increment worked on instance creation
- self.assertTrue(instance.id > 0)
-
- expected_ip = return_server.networks['public'][0]
- self.assertEqual(instance.FnGetAtt('PublicIp'), expected_ip)
- self.assertEqual(instance.FnGetAtt('PrivateIp'), expected_ip)
- self.assertEqual(instance.FnGetAtt('PublicDnsName'), expected_ip)
- self.assertEqual(instance.FnGetAtt('PrivateDnsName'), expected_ip)
-
- self.m.VerifyAll()
-
- def test_instance_create_with_image_id(self):
- return_server = self.fc.servers.list()[1]
- instance = self._setup_test_instance(return_server,
- 'test_instance_create_image_id',
- image_id='1')
- self.m.StubOutWithMock(uuidutils, "is_uuid_like")
- uuidutils.is_uuid_like('1').AndReturn(True)
-
- self.m.ReplayAll()
- scheduler.TaskRunner(instance.create)()
-
- # this makes sure the auto increment worked on instance creation
- self.assertTrue(instance.id > 0)
-
- expected_ip = return_server.networks['public'][0]
- self.assertEqual(instance.FnGetAtt('PublicIp'), expected_ip)
- self.assertEqual(instance.FnGetAtt('PrivateIp'), expected_ip)
- self.assertEqual(instance.FnGetAtt('PublicDnsName'), expected_ip)
- self.assertEqual(instance.FnGetAtt('PrivateDnsName'), expected_ip)
-
- self.m.VerifyAll()
-
- def test_instance_create_image_name_err(self):
- stack_name = 'test_instance_create_image_name_err_stack'
- (t, stack) = self._setup_test_stack(stack_name)
-
- # create an instance with a non-existent image name
- t['Resources']['WebServer']['Properties']['ImageId'] = 'Slackware'
- instance = instances.Instance('instance_create_image_err',
- t['Resources']['WebServer'], stack)
-
- self.m.StubOutWithMock(instance, 'nova')
- instance.nova().MultipleTimes().AndReturn(self.fc)
- self.m.ReplayAll()
-
- self.assertRaises(exception.ImageNotFound, instance.handle_create)
-
- self.m.VerifyAll()
-
- def test_instance_create_duplicate_image_name_err(self):
- stack_name = 'test_instance_create_image_name_err_stack'
- (t, stack) = self._setup_test_stack(stack_name)
-
- # create an instance with a non-unique image name
- t['Resources']['WebServer']['Properties']['ImageId'] = 'CentOS 5.2'
- instance = instances.Instance('instance_create_image_err',
- t['Resources']['WebServer'], stack)
-
- self.m.StubOutWithMock(instance, 'nova')
- instance.nova().MultipleTimes().AndReturn(self.fc)
- self.m.StubOutWithMock(self.fc.client, "get_images_detail")
- self.fc.client.get_images_detail().AndReturn((
- 200, {'images': [{'id': 1, 'name': 'CentOS 5.2'},
- {'id': 4, 'name': 'CentOS 5.2'}]}))
- self.m.ReplayAll()
-
- self.assertRaises(exception.NoUniqueImageFound, instance.handle_create)
-
- self.m.VerifyAll()
-
- def test_instance_create_image_id_err(self):
- stack_name = 'test_instance_create_image_id_err_stack'
- (t, stack) = self._setup_test_stack(stack_name)
-
- # create an instance with a non-existent image ID
- t['Resources']['WebServer']['Properties']['ImageId'] = '1'
- instance = instances.Instance('instance_create_image_err',
- t['Resources']['WebServer'], stack)
-
- self.m.StubOutWithMock(instance, 'nova')
- instance.nova().MultipleTimes().AndReturn(self.fc)
- self.m.StubOutWithMock(uuidutils, "is_uuid_like")
- uuidutils.is_uuid_like('1').AndReturn(True)
- self.m.StubOutWithMock(self.fc.client, "get_images_1")
- self.fc.client.get_images_1().AndRaise(
- instances.clients.novaclient.exceptions.NotFound(404))
- self.m.ReplayAll()
-
- self.assertRaises(exception.ImageNotFound, instance.handle_create)
-
- self.m.VerifyAll()
-
- class FakeVolumeAttach:
- def started(self):
- return False
-
- def test_instance_create_unexpected_status(self):
- return_server = self.fc.servers.list()[1]
- instance = self._create_test_instance(return_server,
- 'test_instance_create')
- return_server.get = lambda: None
- return_server.status = 'BOGUS'
- self.assertRaises(exception.Error,
- instance.check_create_complete,
- (return_server, self.FakeVolumeAttach()))
-
- def test_instance_create_error_status(self):
- return_server = self.fc.servers.list()[1]
- instance = self._create_test_instance(return_server,
- 'test_instance_create')
- return_server.status = 'ERROR'
- return_server.fault = {
- 'message': 'NoValidHost',
- 'code': 500,
- 'created': '2013-08-14T03:12:10Z'
- }
- self.m.StubOutWithMock(return_server, 'get')
- return_server.get()
- self.m.ReplayAll()
-
- self.assertRaises(exception.Error,
- instance.check_create_complete,
- (return_server, self.FakeVolumeAttach()))
-
- self.m.VerifyAll()
-
- def test_instance_create_error_no_fault(self):
- return_server = self.fc.servers.list()[1]
- instance = self._create_test_instance(return_server,
- 'test_instance_create')
- return_server.status = 'ERROR'
-
- self.m.StubOutWithMock(return_server, 'get')
- return_server.get()
- self.m.ReplayAll()
-
- try:
- instance.check_create_complete(
- (return_server, self.FakeVolumeAttach()))
- except exception.Error as e:
- self.assertEqual(
- 'Creation of server sample-server2 failed: Unknown (500)',
- str(e))
- else:
- self.fail('Error not raised')
-
- self.m.VerifyAll()
-
- def test_instance_validate(self):
- stack_name = 'test_instance_validate_stack'
- (t, stack) = self._setup_test_stack(stack_name)
-
- # create an instance with a non-existent image ID
- t['Resources']['WebServer']['Properties']['ImageId'] = '1'
- instance = instances.Instance('instance_create_image_err',
- t['Resources']['WebServer'], stack)
-
- self.m.StubOutWithMock(instance, 'nova')
- instance.nova().MultipleTimes().AndReturn(self.fc)
-
- self.m.StubOutWithMock(uuidutils, "is_uuid_like")
- uuidutils.is_uuid_like('1').AndReturn(True)
- self.m.ReplayAll()
-
- self.assertEqual(instance.validate(), None)
-
- self.m.VerifyAll()
-
- def test_instance_create_delete(self):
- return_server = self.fc.servers.list()[1]
- instance = self._create_test_instance(return_server,
- 'test_instance_create_delete')
- instance.resource_id = 1234
-
- # this makes sure the auto increment worked on instance creation
- self.assertTrue(instance.id > 0)
-
- self.m.StubOutWithMock(self.fc.client, 'get_servers_1234')
- get = self.fc.client.get_servers_1234
- get().AndRaise(instances.clients.novaclient.exceptions.NotFound(404))
- mox.Replay(get)
-
- scheduler.TaskRunner(instance.delete)()
- self.assertTrue(instance.resource_id is None)
- self.assertEqual(instance.state, (instance.DELETE, instance.COMPLETE))
- self.m.VerifyAll()
-
- def test_instance_update_metadata(self):
- return_server = self.fc.servers.list()[1]
- instance = self._create_test_instance(return_server,
- 'test_instance_update')
-
- update_template = copy.deepcopy(instance.t)
- update_template['Metadata'] = {'test': 123}
- scheduler.TaskRunner(instance.update, update_template)()
- self.assertEqual(instance.metadata, {'test': 123})
-
- def test_instance_update_instance_type(self):
- """
- Instance.handle_update supports changing the InstanceType, and makes
- the change making a resize API call against Nova.
- """
- return_server = self.fc.servers.list()[1]
- return_server.id = 1234
- instance = self._create_test_instance(return_server,
- 'test_instance_update')
-
- update_template = copy.deepcopy(instance.t)
- update_template['Properties']['InstanceType'] = 'm1.small'
-
- self.m.StubOutWithMock(self.fc.servers, 'get')
- self.fc.servers.get(1234).AndReturn(return_server)
-
- def activate_status(server):
- server.status = 'VERIFY_RESIZE'
- return_server.get = activate_status.__get__(return_server)
-
- self.m.StubOutWithMock(self.fc.client, 'post_servers_1234_action')
- self.fc.client.post_servers_1234_action(
- body={'resize': {'flavorRef': 2}}).AndReturn((202, None))
- self.fc.client.post_servers_1234_action(
- body={'confirmResize': None}).AndReturn((202, None))
- self.m.ReplayAll()
-
- scheduler.TaskRunner(instance.update, update_template)()
- self.assertEqual(instance.state, (instance.UPDATE, instance.COMPLETE))
- self.m.VerifyAll()
-
- def test_instance_update_instance_type_failed(self):
- """
- If the status after a resize is not VERIFY_RESIZE, it means the resize
- call failed, so we raise an explicit error.
- """
- return_server = self.fc.servers.list()[1]
- return_server.id = 1234
- instance = self._create_test_instance(return_server,
- 'test_instance_update')
-
- update_template = copy.deepcopy(instance.t)
- update_template['Properties']['InstanceType'] = 'm1.small'
-
- self.m.StubOutWithMock(self.fc.servers, 'get')
- self.fc.servers.get(1234).AndReturn(return_server)
-
- def activate_status(server):
- server.status = 'ACTIVE'
- return_server.get = activate_status.__get__(return_server)
-
- self.m.StubOutWithMock(self.fc.client, 'post_servers_1234_action')
- self.fc.client.post_servers_1234_action(
- body={'resize': {'flavorRef': 2}}).AndReturn((202, None))
- self.m.ReplayAll()
-
- updater = scheduler.TaskRunner(instance.update, update_template)
- error = self.assertRaises(exception.ResourceFailure, updater)
- self.assertEqual(
- "Error: Resizing to 'm1.small' failed, status 'ACTIVE'",
- str(error))
- self.assertEqual(instance.state, (instance.UPDATE, instance.FAILED))
- self.m.VerifyAll()
-
- def test_instance_update_replace(self):
- return_server = self.fc.servers.list()[1]
- instance = self._create_test_instance(return_server,
- 'test_instance_update')
-
- update_template = copy.deepcopy(instance.t)
- update_template['Notallowed'] = {'test': 123}
- updater = scheduler.TaskRunner(instance.update, update_template)
- self.assertRaises(resource.UpdateReplace, updater)
-
- def test_instance_update_properties(self):
- return_server = self.fc.servers.list()[1]
- instance = self._create_test_instance(return_server,
- 'test_instance_update')
-
- update_template = copy.deepcopy(instance.t)
- update_template['Properties']['KeyName'] = 'mustreplace'
- updater = scheduler.TaskRunner(instance.update, update_template)
- self.assertRaises(resource.UpdateReplace, updater)
-
- def test_instance_status_build(self):
- return_server = self.fc.servers.list()[0]
- instance = self._setup_test_instance(return_server,
- 'test_instance_status_build')
- instance.resource_id = 1234
-
- # Bind a fake get method that Instance.check_create_complete will call
- def activate_status(server):
- server.status = 'ACTIVE'
- return_server.get = activate_status.__get__(return_server)
- self.m.ReplayAll()
-
- scheduler.TaskRunner(instance.create)()
- self.assertEqual(instance.state, (instance.CREATE, instance.COMPLETE))
-
- def test_instance_status_suspend_immediate(self):
- return_server = self.fc.servers.list()[1]
- instance = self._create_test_instance(return_server,
- 'test_instance_suspend')
-
- instance.resource_id = 1234
- self.m.ReplayAll()
-
- # Override the get_servers_1234 handler status to SUSPENDED
- d = {'server': self.fc.client.get_servers_detail()[1]['servers'][0]}
- d['server']['status'] = 'SUSPENDED'
- self.m.StubOutWithMock(self.fc.client, 'get_servers_1234')
- get = self.fc.client.get_servers_1234
- get().AndReturn((200, d))
- mox.Replay(get)
-
- scheduler.TaskRunner(instance.suspend)()
- self.assertEqual(instance.state, (instance.SUSPEND, instance.COMPLETE))
-
- self.m.VerifyAll()
-
- def test_instance_status_resume_immediate(self):
- return_server = self.fc.servers.list()[1]
- instance = self._create_test_instance(return_server,
- 'test_instance_resume')
-
- instance.resource_id = 1234
- self.m.ReplayAll()
-
- # Override the get_servers_1234 handler status to ACTIVE
- d = {'server': self.fc.client.get_servers_detail()[1]['servers'][0]}
- d['server']['status'] = 'ACTIVE'
- self.m.StubOutWithMock(self.fc.client, 'get_servers_1234')
- get = self.fc.client.get_servers_1234
- get().AndReturn((200, d))
- mox.Replay(get)
- instance.state_set(instance.SUSPEND, instance.COMPLETE)
-
- scheduler.TaskRunner(instance.resume)()
- self.assertEqual(instance.state, (instance.RESUME, instance.COMPLETE))
-
- self.m.VerifyAll()
-
- def test_instance_status_suspend_wait(self):
- return_server = self.fc.servers.list()[1]
- instance = self._create_test_instance(return_server,
- 'test_instance_suspend')
-
- instance.resource_id = 1234
- self.m.ReplayAll()
-
- # Override the get_servers_1234 handler status to SUSPENDED, but
- # return the ACTIVE state first (twice, so we sleep)
- d1 = {'server': self.fc.client.get_servers_detail()[1]['servers'][0]}
- d2 = copy.deepcopy(d1)
- d1['server']['status'] = 'ACTIVE'
- d2['server']['status'] = 'SUSPENDED'
- self.m.StubOutWithMock(self.fc.client, 'get_servers_1234')
- get = self.fc.client.get_servers_1234
- get().AndReturn((200, d1))
- get().AndReturn((200, d1))
- get().AndReturn((200, d2))
- self.m.ReplayAll()
-
- scheduler.TaskRunner(instance.suspend)()
- self.assertEqual(instance.state, (instance.SUSPEND, instance.COMPLETE))
-
- self.m.VerifyAll()
-
- def test_instance_status_resume_wait(self):
- return_server = self.fc.servers.list()[1]
- instance = self._create_test_instance(return_server,
- 'test_instance_resume')
-
- instance.resource_id = 1234
- self.m.ReplayAll()
-
- # Override the get_servers_1234 handler status to ACTIVE, but
- # return the SUSPENDED state first (twice, so we sleep)
- d1 = {'server': self.fc.client.get_servers_detail()[1]['servers'][0]}
- d2 = copy.deepcopy(d1)
- d1['server']['status'] = 'SUSPENDED'
- d2['server']['status'] = 'ACTIVE'
- self.m.StubOutWithMock(self.fc.client, 'get_servers_1234')
- get = self.fc.client.get_servers_1234
- get().AndReturn((200, d1))
- get().AndReturn((200, d1))
- get().AndReturn((200, d2))
- self.m.ReplayAll()
-
- instance.state_set(instance.SUSPEND, instance.COMPLETE)
-
- scheduler.TaskRunner(instance.resume)()
- self.assertEqual(instance.state, (instance.RESUME, instance.COMPLETE))
-
- self.m.VerifyAll()
-
- def test_instance_suspend_volumes_step(self):
- return_server = self.fc.servers.list()[1]
- instance = self._create_test_instance(return_server,
- 'test_instance_suspend')
-
- instance.resource_id = 1234
- self.m.ReplayAll()
-
- # Override the get_servers_1234 handler status to SUSPENDED
- d = {'server': self.fc.client.get_servers_detail()[1]['servers'][0]}
- d['server']['status'] = 'SUSPENDED'
-
- # Return a dummy PollingTaskGroup to make check_suspend_complete step
- def dummy_detach():
- yield
- dummy_tg = scheduler.PollingTaskGroup([dummy_detach, dummy_detach])
- self.m.StubOutWithMock(instance, '_detach_volumes_task')
- instance._detach_volumes_task().AndReturn(dummy_tg)
-
- self.m.StubOutWithMock(self.fc.client, 'get_servers_1234')
- get = self.fc.client.get_servers_1234
- get().AndReturn((200, d))
- self.m.ReplayAll()
-
- scheduler.TaskRunner(instance.suspend)()
- self.assertEqual(instance.state, (instance.SUSPEND, instance.COMPLETE))
-
- self.m.VerifyAll()
-
- def test_instance_resume_volumes_step(self):
- return_server = self.fc.servers.list()[1]
- instance = self._create_test_instance(return_server,
- 'test_instance_resume')
-
- instance.resource_id = 1234
- self.m.ReplayAll()
-
- # Override the get_servers_1234 handler status to ACTIVE
- d = {'server': self.fc.client.get_servers_detail()[1]['servers'][0]}
- d['server']['status'] = 'ACTIVE'
-
- # Return a dummy PollingTaskGroup to make check_resume_complete step
- def dummy_attach():
- yield
- dummy_tg = scheduler.PollingTaskGroup([dummy_attach, dummy_attach])
- self.m.StubOutWithMock(instance, '_attach_volumes_task')
- instance._attach_volumes_task().AndReturn(dummy_tg)
-
- self.m.StubOutWithMock(self.fc.client, 'get_servers_1234')
- get = self.fc.client.get_servers_1234
- get().AndReturn((200, d))
-
- self.m.ReplayAll()
-
- instance.state_set(instance.SUSPEND, instance.COMPLETE)
-
- scheduler.TaskRunner(instance.resume)()
- self.assertEqual(instance.state, (instance.RESUME, instance.COMPLETE))
-
- self.m.VerifyAll()
-
- def test_instance_status_build_spawning(self):
- self._test_instance_status_not_build_active('BUILD(SPAWNING)')
-
- def test_instance_status_hard_reboot(self):
- self._test_instance_status_not_build_active('HARD_REBOOT')
-
- def test_instance_status_password(self):
- self._test_instance_status_not_build_active('PASSWORD')
-
- def test_instance_status_reboot(self):
- self._test_instance_status_not_build_active('REBOOT')
-
- def test_instance_status_rescue(self):
- self._test_instance_status_not_build_active('RESCUE')
-
- def test_instance_status_resize(self):
- self._test_instance_status_not_build_active('RESIZE')
-
- def test_instance_status_revert_resize(self):
- self._test_instance_status_not_build_active('REVERT_RESIZE')
-
- def test_instance_status_shutoff(self):
- self._test_instance_status_not_build_active('SHUTOFF')
-
- def test_instance_status_suspended(self):
- self._test_instance_status_not_build_active('SUSPENDED')
-
- def test_instance_status_verify_resize(self):
- self._test_instance_status_not_build_active('VERIFY_RESIZE')
-
- def _test_instance_status_not_build_active(self, uncommon_status):
- return_server = self.fc.servers.list()[0]
- instance = self._setup_test_instance(return_server,
- 'test_instance_status_build')
- instance.resource_id = 1234
-
- # Bind a fake get method that Instance.check_create_complete will call
- def activate_status(server):
- if hasattr(server, '_test_check_iterations'):
- server._test_check_iterations += 1
- else:
- server._test_check_iterations = 1
- if server._test_check_iterations == 1:
- server.status = uncommon_status
- if server._test_check_iterations > 2:
- server.status = 'ACTIVE'
- return_server.get = activate_status.__get__(return_server)
- self.m.ReplayAll()
-
- scheduler.TaskRunner(instance.create)()
- self.assertEqual(instance.state, (instance.CREATE, instance.COMPLETE))
-
- self.m.VerifyAll()
-
- def test_build_nics(self):
- return_server = self.fc.servers.list()[1]
- instance = self._create_test_instance(return_server,
- 'test_build_nics')
-
- self.assertEqual(None, instance._build_nics([]))
- self.assertEqual(None, instance._build_nics(None))
- self.assertEqual([
- {'port-id': 'id3'}, {'port-id': 'id1'}, {'port-id': 'id2'}],
- instance._build_nics([
- 'id3', 'id1', 'id2']))
- self.assertEqual([
- {'port-id': 'id1'},
- {'port-id': 'id2'},
- {'port-id': 'id3'}], instance._build_nics([
- {'NetworkInterfaceId': 'id3', 'DeviceIndex': '3'},
- {'NetworkInterfaceId': 'id1', 'DeviceIndex': '1'},
- {'NetworkInterfaceId': 'id2', 'DeviceIndex': 2},
- ]))
- self.assertEqual([
- {'port-id': 'id1'},
- {'port-id': 'id2'},
- {'port-id': 'id3'},
- {'port-id': 'id4'},
- {'port-id': 'id5'}
- ], instance._build_nics([
- {'NetworkInterfaceId': 'id3', 'DeviceIndex': '3'},
- {'NetworkInterfaceId': 'id1', 'DeviceIndex': '1'},
- {'NetworkInterfaceId': 'id2', 'DeviceIndex': 2},
- 'id4',
- 'id5'
- ]))
-
- def test_build_nics_with_security_groups(self):
- """
- Test that the security groups defined in the heat template can be
- associated with a newly created port.
- """
- return_server = self.fc.servers.list()[1]
- instance = self._create_test_instance(return_server,
- 'test_build_nics')
-
- security_groups = ['security_group_1']
- self._test_security_groups(instance, security_groups)
-
- security_groups = ['fake_id_1']
- self._test_security_groups(instance, security_groups)
-
- security_groups = ['security_group_1', 'security_group_1']
- self._test_security_groups(instance, security_groups)
-
- security_groups = ['fake_id_1', 'fake_id_1']
- self._test_security_groups(instance, security_groups)
-
- security_groups = ['security_group_1', 'fake_id_1']
- self._test_security_groups(instance, security_groups)
-
- security_groups = ['security_group_1', 'fake_id_2']
- self._test_security_groups(instance, security_groups, sg='two')
-
- security_groups = ['wrong_group_id']
- self._test_security_groups(instance, security_groups, sg='zero')
-
- security_groups = ['wrong_group_id', 'fake_id_1']
- self._test_security_groups(instance, security_groups)
-
- security_groups = ['wrong_group_name']
- self._test_security_groups(instance, security_groups, sg='zero')
-
- security_groups = ['wrong_group_name', 'security_group_1']
- self._test_security_groups(instance, security_groups)
-
- def _test_security_groups(self, instance, security_groups, sg='one'):
- fake_groups_list, props = self._get_fake_properties(sg)
-
- def generate_sg_list():
- yield fake_groups_list
-
- nclient = neutronclient.Client()
- self.m.StubOutWithMock(instance, 'neutron')
- instance.neutron().MultipleTimes().AndReturn(nclient)
-
- self.m.StubOutWithMock(neutronclient.Client, 'list_security_groups')
- neutronclient.Client.list_security_groups(
- instance.resource_id).AndReturn(generate_sg_list())
-
- net_interface = network_interface.NetworkInterface
- self.m.StubOutWithMock(net_interface, 'network_id_from_subnet_id')
- net_interface.network_id_from_subnet_id(
- nclient,
- 'fake_subnet_id').MultipleTimes().AndReturn('fake_network_id')
-
- self.m.StubOutWithMock(neutronclient.Client, 'create_port')
- neutronclient.Client.create_port(
- {'port': props}).MultipleTimes().AndReturn(
- {'port': {'id': 'fake_port_id'}})
-
- self.m.ReplayAll()
-
- self.assertEqual(
- [{'port-id': 'fake_port_id'}],
- instance._build_nics(None,
- security_groups=security_groups,
- subnet_id='fake_subnet_id'))
-
- self.m.VerifyAll()
- self.m.UnsetStubs()
-
- def _get_fake_properties(self, sg='one'):
- fake_groups_list = {
- 'security_groups': [
- {
- 'id': 'fake_id_1',
- 'name': 'security_group_1',
- 'security_group_rules': [],
- 'description': 'no protocol'
- },
- {
- 'id': 'fake_id_2',
- 'name': 'security_group_2',
- 'security_group_rules': [],
- 'description': 'no protocol'
- }
- ]
- }
-
- fixed_ip = {'subnet_id': 'fake_subnet_id'}
- props = {
- 'admin_state_up': True,
- 'network_id': 'fake_network_id',
- 'fixed_ips': [fixed_ip],
- 'security_groups': ['fake_id_1']
- }
-
- if sg == 'zero':
- props['security_groups'] = []
- elif sg == 'one':
- props['security_groups'] = ['fake_id_1']
- elif sg == 'two':
- props['security_groups'] = ['fake_id_1', 'fake_id_2']
-
- return fake_groups_list, props
-
- def test_instance_without_ip_address(self):
- return_server = self.fc.servers.list()[3]
- instance = self._create_test_instance(return_server,
- 'test_without_ip_address')
-
- self.assertEqual(instance.FnGetAtt('PrivateIp'), '0.0.0.0')
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import copy
-
-from heat.common import exception
-from heat.common import template_format
-from heat.engine.resources import instance
-from heat.engine import resource
-from heat.engine import resources
-from heat.engine import scheduler
-from heat.engine import parser
-from heat.tests.common import HeatTestCase
-from heat.tests import utils
-
-
-ig_template = '''
-{
- "AWSTemplateFormatVersion" : "2010-09-09",
- "Description" : "Template to create multiple instances.",
- "Parameters" : {},
- "Resources" : {
- "JobServerGroup" : {
- "Type" : "OS::Heat::InstanceGroup",
- "Properties" : {
- "LaunchConfigurationName" : { "Ref" : "JobServerConfig" },
- "Size" : "1",
- "AvailabilityZones" : ["nova"]
- }
- },
-
- "JobServerConfig" : {
- "Type" : "AWS::AutoScaling::LaunchConfiguration",
- "Properties": {
- "ImageId" : "foo",
- "InstanceType" : "m1.large",
- "KeyName" : "test",
- "SecurityGroups" : [ "sg-1" ],
- "UserData" : "jsconfig data"
- }
- }
- }
-}
-'''
-
-
-class InstanceGroupTest(HeatTestCase):
- def setUp(self):
- super(InstanceGroupTest, self).setUp()
- utils.setup_dummy_db()
-
- def _stub_create(self, num, instance_class=instance.Instance):
- """
- Expect creation of C{num} Instances.
-
- :param instance_class: The resource class to expect to be created
- instead of instance.Instance.
- """
- self.m.StubOutWithMock(parser.Stack, 'validate')
- parser.Stack.validate()
-
- self.m.StubOutWithMock(instance_class, 'handle_create')
- self.m.StubOutWithMock(instance_class, 'check_create_complete')
- cookie = object()
-
- for x in range(num):
- instance_class.handle_create().AndReturn(cookie)
- instance_class.check_create_complete(cookie).AndReturn(False)
- instance_class.check_create_complete(
- cookie).MultipleTimes().AndReturn(True)
-
- def create_resource(self, t, stack, resource_name):
- # subsequent resources may need to reference previously created resources;
- # use the stack's resource objects instead of instantiating new ones
- rsrc = stack.resources[resource_name]
- self.assertEqual(None, rsrc.validate())
- scheduler.TaskRunner(rsrc.create)()
- self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state)
- return rsrc
-
- def test_instance_group(self):
-
- t = template_format.parse(ig_template)
- stack = utils.parse_stack(t)
-
- # start with min then delete
- self._stub_create(1)
- self.m.StubOutWithMock(instance.Instance, 'FnGetAtt')
- instance.Instance.FnGetAtt('PublicIp').AndReturn('1.2.3.4')
-
- self.m.ReplayAll()
- conf = self.create_resource(t, stack, 'JobServerConfig')
- rsrc = self.create_resource(t, stack, 'JobServerGroup')
- self.assertEqual(utils.PhysName(stack.name, rsrc.name),
- rsrc.FnGetRefId())
- self.assertEqual('1.2.3.4', rsrc.FnGetAtt('InstanceList'))
-
- nested = rsrc.nested()
- self.assertEqual(nested.id, rsrc.resource_id)
-
- rsrc.delete()
- self.m.VerifyAll()
-
- def test_instance_group_custom_resource(self):
- """
- If AWS::EC2::Instance is overridden, InstanceGroup will automatically
- use that overridden resource type.
- """
- # resources may need to be initialised if this is the first test run.
- resources.initialise()
-
- class MyInstance(instance.Instance):
- """A customized Instance resource."""
-
- original_instance = resource.get_class("AWS::EC2::Instance")
- resource._register_class("AWS::EC2::Instance", MyInstance)
- self.addCleanup(resource._register_class, "AWS::EC2::Instance",
- original_instance)
-
- t = template_format.parse(ig_template)
- stack = utils.parse_stack(t)
- self._stub_create(1, instance_class=MyInstance)
-
- self.m.ReplayAll()
- conf = self.create_resource(t, stack, 'JobServerConfig')
- rsrc = self.create_resource(t, stack, 'JobServerGroup')
- self.assertEqual(utils.PhysName(stack.name, rsrc.name),
- rsrc.FnGetRefId())
- rsrc.delete()
- self.m.VerifyAll()
-
- def test_missing_image(self):
-
- t = template_format.parse(ig_template)
- stack = utils.parse_stack(t)
-
- conf = self.create_resource(t, stack, 'JobServerConfig')
- rsrc = stack.resources['JobServerGroup']
-
- self.m.StubOutWithMock(instance.Instance, 'handle_create')
- not_found = exception.ImageNotFound(image_name='bla')
- instance.Instance.handle_create().AndRaise(not_found)
- self.m.StubOutWithMock(parser.Stack, 'validate')
- parser.Stack.validate()
-
- self.m.ReplayAll()
-
- create = scheduler.TaskRunner(rsrc.create)
- self.assertRaises(exception.ResourceFailure, create)
- self.assertEqual((rsrc.CREATE, rsrc.FAILED), rsrc.state)
-
- self.m.VerifyAll()
-
- def test_handle_update_size(self):
- t = template_format.parse(ig_template)
- properties = t['Resources']['JobServerGroup']['Properties']
- properties['Size'] = '2'
- stack = utils.parse_stack(t)
-
- self._stub_create(2)
- self.m.ReplayAll()
- conf = self.create_resource(t, stack, 'JobServerConfig')
- rsrc = self.create_resource(t, stack, 'JobServerGroup')
-
- self.m.VerifyAll()
- self.m.UnsetStubs()
-
- # Increase the group size to 5
- self._stub_create(3)
- self.m.StubOutWithMock(instance.Instance, 'FnGetAtt')
- instance.Instance.FnGetAtt('PublicIp').AndReturn('10.0.0.2')
- instance.Instance.FnGetAtt('PublicIp').AndReturn('10.0.0.3')
- instance.Instance.FnGetAtt('PublicIp').AndReturn('10.0.0.4')
- instance.Instance.FnGetAtt('PublicIp').AndReturn('10.0.0.5')
- instance.Instance.FnGetAtt('PublicIp').AndReturn('10.0.0.6')
-
- self.m.ReplayAll()
-
- update_snippet = copy.deepcopy(rsrc.parsed_template())
- update_snippet['Properties']['Size'] = '5'
- tmpl_diff = {'Properties': {'Size': '5'}}
- prop_diff = {'Size': '5'}
- self.assertEqual(None, rsrc.handle_update(update_snippet, tmpl_diff,
- prop_diff))
- self.assertEqual('10.0.0.2,10.0.0.3,10.0.0.4,10.0.0.5,10.0.0.6',
- rsrc.FnGetAtt('InstanceList'))
-
- rsrc.delete()
- self.m.VerifyAll()
-
- def test_create_error(self):
- """
- If a resource in an instance group fails to be created, the instance
- group itself will fail and the broken inner resource will remain.
- """
- t = template_format.parse(ig_template)
- stack = utils.parse_stack(t)
-
- self.m.StubOutWithMock(parser.Stack, 'validate')
- parser.Stack.validate()
- self.m.StubOutWithMock(instance.Instance, 'handle_create')
- instance.Instance.handle_create().AndRaise(Exception)
-
- self.m.ReplayAll()
- conf = self.create_resource(t, stack, 'JobServerConfig')
- self.assertRaises(
- exception.ResourceFailure,
- self.create_resource, t, stack, 'JobServerGroup')
-
- rsrc = stack.resources['JobServerGroup']
- self.assertEqual((rsrc.CREATE, rsrc.FAILED), rsrc.state)
-
- # The failed inner resource remains
- child_resource = rsrc.nested().resources['JobServerGroup-0']
- self.assertEqual((child_resource.CREATE, child_resource.FAILED),
- child_resource.state)
-
- self.m.VerifyAll()
-
- def test_update_error(self):
- """
- If a resource in an instance group fails to be created during an
- update, the instance group itself will fail and the broken inner
- resource will remain.
- """
- t = template_format.parse(ig_template)
- stack = utils.parse_stack(t)
-
- self._stub_create(1)
- self.m.ReplayAll()
- conf = self.create_resource(t, stack, 'JobServerConfig')
- rsrc = self.create_resource(t, stack, 'JobServerGroup')
-
- self.m.VerifyAll()
- self.m.UnsetStubs()
-
- self.m.StubOutWithMock(parser.Stack, 'validate')
- parser.Stack.validate()
- self.m.StubOutWithMock(instance.Instance, 'handle_create')
- instance.Instance.handle_create().AndRaise(Exception)
-
- self.m.ReplayAll()
-
- update_snippet = copy.deepcopy(rsrc.parsed_template())
- update_snippet['Properties']['Size'] = '2'
- tmpl_diff = {'Properties': {'Size': '2'}}
- prop_diff = {'Size': '2'}
- updater = scheduler.TaskRunner(rsrc.update, update_snippet)
- self.assertRaises(exception.ResourceFailure, updater)
-
- self.assertEqual((rsrc.UPDATE, rsrc.FAILED), rsrc.state)
-
- # The failed inner resource remains
- child_resource = rsrc.nested().resources['JobServerGroup-1']
- self.assertEqual((child_resource.CREATE, child_resource.FAILED),
- child_resource.state)
-
- self.m.VerifyAll()
-
- def test_update_fail_badkey(self):
- t = template_format.parse(ig_template)
- properties = t['Resources']['JobServerGroup']['Properties']
- properties['Size'] = '2'
- stack = utils.parse_stack(t)
-
- self._stub_create(2)
- self.m.ReplayAll()
- conf = self.create_resource(t, stack, 'JobServerConfig')
- rsrc = self.create_resource(t, stack, 'JobServerGroup')
-
- self.m.ReplayAll()
-
- update_snippet = copy.deepcopy(rsrc.parsed_template())
- update_snippet['Metadata'] = 'notallowedforupdate'
- updater = scheduler.TaskRunner(rsrc.update, update_snippet)
- self.assertRaises(resource.UpdateReplace, updater)
-
- rsrc.delete()
- self.m.VerifyAll()
-
- def test_update_fail_badprop(self):
- t = template_format.parse(ig_template)
- properties = t['Resources']['JobServerGroup']['Properties']
- properties['Size'] = '2'
- stack = utils.parse_stack(t)
-
- self._stub_create(2)
- self.m.ReplayAll()
- conf = self.create_resource(t, stack, 'JobServerConfig')
- rsrc = self.create_resource(t, stack, 'JobServerGroup')
-
- self.m.ReplayAll()
-
- update_snippet = copy.deepcopy(rsrc.parsed_template())
- update_snippet['Properties']['AvailabilityZones'] = ['wibble']
- updater = scheduler.TaskRunner(rsrc.update, update_snippet)
- self.assertRaises(resource.UpdateReplace, updater)
-
- rsrc.delete()
- self.m.VerifyAll()
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import re
-
-from heat.common import exception
-from heat.common import template_format
-from heat.engine.resources import instance
-from heat.engine import parser
-from heat.tests.common import HeatTestCase
-from heat.tests.utils import setup_dummy_db
-from heat.tests import utils
-
-
-ig_tmpl_without_updt_policy = '''
-{
- "AWSTemplateFormatVersion" : "2010-09-09",
- "Description" : "Template to create multiple instances.",
- "Parameters" : {},
- "Resources" : {
- "JobServerGroup" : {
- "Type" : "OS::Heat::InstanceGroup",
- "Properties" : {
- "LaunchConfigurationName" : { "Ref" : "JobServerConfig" },
- "Size" : "8",
- "AvailabilityZones" : ["nova"]
- }
- },
- "JobServerConfig" : {
- "Type" : "AWS::AutoScaling::LaunchConfiguration",
- "Properties": {
- "ImageId" : "foo",
- "InstanceType" : "m1.medium",
- "KeyName" : "test",
- "SecurityGroups" : [ "sg-1" ],
- "UserData" : "jsconfig data"
- }
- }
- }
-}
-'''
-
-ig_tmpl_with_bad_updt_policy = '''
-{
- "AWSTemplateFormatVersion" : "2010-09-09",
- "Description" : "Template to create multiple instances.",
- "Parameters" : {},
- "Resources" : {
- "JobServerGroup" : {
- "UpdatePolicy" : {
- "RollingUpdate": "foo"
- },
- "Type" : "OS::Heat::InstanceGroup",
- "Properties" : {
- "LaunchConfigurationName" : { "Ref" : "JobServerConfig" },
- "Size" : "8",
- "AvailabilityZones" : ["nova"]
- }
- },
- "JobServerConfig" : {
- "Type" : "AWS::AutoScaling::LaunchConfiguration",
- "Properties": {
- "ImageId" : "foo",
- "InstanceType" : "m1.medium",
- "KeyName" : "test",
- "SecurityGroups" : [ "sg-1" ],
- "UserData" : "jsconfig data"
- }
- }
- }
-}
-'''
-
-ig_tmpl_with_default_updt_policy = '''
-{
- "AWSTemplateFormatVersion" : "2010-09-09",
- "Description" : "Template to create multiple instances.",
- "Parameters" : {},
- "Resources" : {
- "JobServerGroup" : {
- "UpdatePolicy" : {
- "RollingUpdate" : {
- }
- },
- "Type" : "OS::Heat::InstanceGroup",
- "Properties" : {
- "LaunchConfigurationName" : { "Ref" : "JobServerConfig" },
- "Size" : "8",
- "AvailabilityZones" : ["nova"]
- }
- },
- "JobServerConfig" : {
- "Type" : "AWS::AutoScaling::LaunchConfiguration",
- "Properties": {
- "ImageId" : "foo",
- "InstanceType" : "m1.medium",
- "KeyName" : "test",
- "SecurityGroups" : [ "sg-1" ],
- "UserData" : "jsconfig data"
- }
- }
- }
-}
-'''
-
-ig_tmpl_with_updt_policy_1 = '''
-{
- "AWSTemplateFormatVersion" : "2010-09-09",
- "Description" : "Template to create multiple instances.",
- "Parameters" : {},
- "Resources" : {
- "JobServerGroup" : {
- "UpdatePolicy" : {
- "RollingUpdate" : {
- "MinInstancesInService" : "1",
- "MaxBatchSize" : "3",
- "PauseTime" : "PT30S"
- }
- },
- "Type" : "OS::Heat::InstanceGroup",
- "Properties" : {
- "LaunchConfigurationName" : { "Ref" : "JobServerConfig" },
- "Size" : "8",
- "AvailabilityZones" : ["nova"]
- }
- },
- "JobServerConfig" : {
- "Type" : "AWS::AutoScaling::LaunchConfiguration",
- "Properties": {
- "ImageId" : "foo",
- "InstanceType" : "m1.medium",
- "KeyName" : "test",
- "SecurityGroups" : [ "sg-1" ],
- "UserData" : "jsconfig data"
- }
- }
- }
-}
-'''
-
-ig_tmpl_with_updt_policy_2 = '''
-{
- "AWSTemplateFormatVersion" : "2010-09-09",
- "Description" : "Template to create multiple instances.",
- "Parameters" : {},
- "Resources" : {
- "JobServerGroup" : {
- "UpdatePolicy" : {
- "RollingUpdate" : {
- "MinInstancesInService" : "1",
- "MaxBatchSize" : "5",
- "PauseTime" : "PT30S"
- }
- },
- "Type" : "OS::Heat::InstanceGroup",
- "Properties" : {
- "LaunchConfigurationName" : { "Ref" : "JobServerConfig" },
- "Size" : "8",
- "AvailabilityZones" : ["nova"]
- }
- },
- "JobServerConfig" : {
- "Type" : "AWS::AutoScaling::LaunchConfiguration",
- "Properties": {
- "ImageId" : "foo",
- "InstanceType" : "m1.large",
- "KeyName" : "test",
- "SecurityGroups" : [ "sg-1" ],
- "UserData" : "jsconfig data"
- }
- }
- }
-}
-'''
-
-
-class InstanceGroupTest(HeatTestCase):
- def setUp(self):
- super(InstanceGroupTest, self).setUp()
- setup_dummy_db()
-
- def _stub_create(self, num, instance_class=instance.Instance):
- """
- Expect creation of C{num} Instances.
-
- :param instance_class: The resource class to expect to be created
- instead of instance.Instance.
- """
-
- self.m.StubOutWithMock(parser.Stack, 'validate')
- parser.Stack.validate()
-
- self.m.StubOutWithMock(instance_class, 'handle_create')
- self.m.StubOutWithMock(instance_class, 'check_create_complete')
- cookie = object()
- for x in range(num):
- instance_class.handle_create().AndReturn(cookie)
- instance_class.check_create_complete(cookie).AndReturn(False)
- instance_class.check_create_complete(
- cookie).MultipleTimes().AndReturn(True)
-
- def get_launch_conf_name(self, stack, ig_name):
- return stack.resources[ig_name].properties['LaunchConfigurationName']
-
- def test_parse_without_update_policy(self):
- tmpl = template_format.parse(ig_tmpl_without_updt_policy)
- stack = utils.parse_stack(tmpl)
- grp = stack.resources['JobServerGroup']
- self.assertFalse(grp.update_policy['RollingUpdate'])
-
- def test_parse_with_update_policy(self):
- tmpl = template_format.parse(ig_tmpl_with_updt_policy_1)
- stack = utils.parse_stack(tmpl)
- grp = stack.resources['JobServerGroup']
- self.assertTrue(grp.update_policy)
- self.assertTrue(len(grp.update_policy) == 1)
- self.assertTrue('RollingUpdate' in grp.update_policy)
- policy = grp.update_policy['RollingUpdate']
- self.assertTrue(policy and len(policy) > 0)
- self.assertEqual(int(policy['MinInstancesInService']), 1)
- self.assertEqual(int(policy['MaxBatchSize']), 3)
- self.assertEqual(policy['PauseTime'], 'PT30S')
-
- def test_parse_with_default_update_policy(self):
- tmpl = template_format.parse(ig_tmpl_with_default_updt_policy)
- stack = utils.parse_stack(tmpl)
- grp = stack.resources['JobServerGroup']
- self.assertTrue(grp.update_policy)
- self.assertTrue(len(grp.update_policy) == 1)
- self.assertTrue('RollingUpdate' in grp.update_policy)
- policy = grp.update_policy['RollingUpdate']
- self.assertTrue(policy and len(policy) > 0)
- self.assertEqual(int(policy['MinInstancesInService']), 0)
- self.assertEqual(int(policy['MaxBatchSize']), 1)
- self.assertEqual(policy['PauseTime'], 'PT0S')
-
- def test_parse_with_bad_update_policy(self):
- tmpl = template_format.parse(ig_tmpl_with_bad_updt_policy)
- stack = utils.parse_stack(tmpl)
- self.assertRaises(exception.StackValidationFailed, stack.validate)
-
- def validate_update_policy_diff(self, current, updated):
-
- # load current stack
- current_tmpl = template_format.parse(current)
- current_stack = utils.parse_stack(current_tmpl)
-
- # get the json snippet for the current InstanceGroup resource
- current_grp = current_stack.resources['JobServerGroup']
- current_snippets = dict((r.name, r.parsed_template())
- for r in current_stack)
- current_grp_json = current_snippets[current_grp.name]
-
- # load the updated stack
- updated_tmpl = template_format.parse(updated)
- updated_stack = utils.parse_stack(updated_tmpl)
-
- # get the updated json snippet for the InstanceGroup resource in the
- # context of the current stack
- updated_grp = updated_stack.resources['JobServerGroup']
- updated_grp_json = current_stack.resolve_runtime_data(updated_grp.t)
-
- # identify the template difference
- tmpl_diff = updated_grp.update_template_diff(
- updated_grp_json, current_grp_json)
- updated_policy = (updated_grp.t['UpdatePolicy']
- if 'UpdatePolicy' in updated_grp.t else None)
- expected = {u'UpdatePolicy': updated_policy}
- self.assertEqual(tmpl_diff, expected)
-
- def test_update_policy_added(self):
- self.validate_update_policy_diff(ig_tmpl_without_updt_policy,
- ig_tmpl_with_updt_policy_1)
-
- def test_update_policy_updated(self):
- self.validate_update_policy_diff(ig_tmpl_with_updt_policy_1,
- ig_tmpl_with_updt_policy_2)
-
- def test_update_policy_removed(self):
- self.validate_update_policy_diff(ig_tmpl_with_updt_policy_1,
- ig_tmpl_without_updt_policy)
-
- def test_instance_group_update(self):
-
- # setup stack from the initial template
- tmpl = template_format.parse(ig_tmpl_with_updt_policy_1)
- stack = utils.parse_stack(tmpl)
- nested = stack.resources['JobServerGroup'].nested()
-
- # test stack create
- # test the number of instance creation
- # test that physical resource name of launch configuration is used
- size = int(stack.resources['JobServerGroup'].properties['Size'])
- self._stub_create(size)
- self.m.ReplayAll()
- stack.create()
- self.m.VerifyAll()
- self.assertEqual(stack.state, ('CREATE', 'COMPLETE'))
- conf = stack.resources['JobServerConfig']
- conf_name_pattern = '%s-JobServerConfig-[a-zA-Z0-9]+$' % stack.name
- regex_pattern = re.compile(conf_name_pattern)
- self.assertTrue(regex_pattern.match(conf.FnGetRefId()))
- nested = stack.resources['JobServerGroup'].nested()
- self.assertEqual(len(nested.resources), size)
-
- # test stack update
- # test that update policy is updated
- # test that launch configuration is replaced
- current_grp = stack.resources['JobServerGroup']
- self.assertTrue('RollingUpdate' in current_grp.update_policy)
- current_policy = current_grp.update_policy['RollingUpdate']
- self.assertTrue(current_policy and len(current_policy) > 0)
- self.assertEqual(int(current_policy['MaxBatchSize']), 3)
- conf_name = self.get_launch_conf_name(stack, 'JobServerGroup')
- updated_tmpl = template_format.parse(ig_tmpl_with_updt_policy_2)
- updated_stack = utils.parse_stack(updated_tmpl)
- stack.update(updated_stack)
- self.assertEqual(stack.state, ('UPDATE', 'COMPLETE'))
- updated_grp = stack.resources['JobServerGroup']
- self.assertTrue('RollingUpdate' in updated_grp.update_policy)
- updated_policy = updated_grp.update_policy['RollingUpdate']
- self.assertTrue(updated_policy and len(updated_policy) > 0)
- self.assertEqual(int(updated_policy['MaxBatchSize']), 5)
- updated_conf_name = self.get_launch_conf_name(stack, 'JobServerGroup')
- self.assertNotEqual(conf_name, updated_conf_name)
-
- def test_instance_group_update_policy_removed(self):
-
- # setup stack from the initial template
- tmpl = template_format.parse(ig_tmpl_with_updt_policy_1)
- stack = utils.parse_stack(tmpl)
- nested = stack.resources['JobServerGroup'].nested()
-
- # test stack create
- # test the number of instance creation
- # test that physical resource name of launch configuration is used
- size = int(stack.resources['JobServerGroup'].properties['Size'])
- self._stub_create(size)
- self.m.ReplayAll()
- stack.create()
- self.m.VerifyAll()
- self.assertEqual(stack.state, ('CREATE', 'COMPLETE'))
- conf = stack.resources['JobServerConfig']
- conf_name_pattern = '%s-JobServerConfig-[a-zA-Z0-9]+$' % stack.name
- regex_pattern = re.compile(conf_name_pattern)
- self.assertTrue(regex_pattern.match(conf.FnGetRefId()))
- nested = stack.resources['JobServerGroup'].nested()
- self.assertEqual(len(nested.resources), size)
-
- # test stack update
- # test that update policy is removed
- current_grp = stack.resources['JobServerGroup']
- self.assertTrue('RollingUpdate' in current_grp.update_policy)
- current_policy = current_grp.update_policy['RollingUpdate']
- self.assertTrue(current_policy and len(current_policy) > 0)
- self.assertEqual(int(current_policy['MaxBatchSize']), 3)
- updated_tmpl = template_format.parse(ig_tmpl_without_updt_policy)
- updated_stack = utils.parse_stack(updated_tmpl)
- stack.update(updated_stack)
- self.assertEqual(stack.state, ('UPDATE', 'COMPLETE'))
- updated_grp = stack.resources['JobServerGroup']
- self.assertFalse(updated_grp.update_policy['RollingUpdate'])
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-
-from heat.engine import environment
-from heat.tests.v1_1 import fakes
-from heat.engine.resources import instance as instances
-from heat.engine.resources import network_interface as network_interfaces
-from heat.engine.resources import nova_utils
-from heat.common import template_format
-from heat.engine import parser
-from heat.engine import scheduler
-from heat.openstack.common import uuidutils
-from heat.tests.common import HeatTestCase
-from heat.tests import utils
-
-
-wp_template = '''
-{
- "AWSTemplateFormatVersion" : "2010-09-09",
- "Description" : "WordPress",
- "Parameters" : {
- "KeyName" : {
- "Description" : "KeyName",
- "Type" : "String",
- "Default" : "test"
- },
- "InstanceType": {
- "Type": "String",
- "Description": "EC2 instance type",
- "Default": "m1.small",
- "AllowedValues": [ "m1.small", "m1.large" ]
- },
- "SubnetId": {
- "Type" : "String",
- "Description" : "SubnetId of an existing subnet in your VPC"
- }
- },
- "Resources" : {
- "WebServer": {
- "Type": "AWS::EC2::Instance",
- "Properties": {
- "ImageId" : "F17-x86_64-gold",
- "InstanceType" : { "Ref" : "InstanceType" },
- "SubnetId" : { "Ref" : "SubnetId" },
- "KeyName" : { "Ref" : "KeyName" },
- "UserData" : "wordpress"
- }
- }
- }
-}
-'''
-
-
-wp_template_with_nic = '''
-{
- "AWSTemplateFormatVersion" : "2010-09-09",
- "Description" : "WordPress",
- "Parameters" : {
- "KeyName" : {
- "Description" : "KeyName",
- "Type" : "String",
- "Default" : "test"
- },
- "InstanceType": {
- "Type": "String",
- "Description": "EC2 instance type",
- "Default": "m1.small",
- "AllowedValues": [ "m1.small", "m1.large" ]
- },
- "SubnetId": {
- "Type" : "String",
- "Description" : "SubnetId of an existing subnet in your VPC"
- }
- },
- "Resources" : {
-
- "nic1": {
- "Type": "AWS::EC2::NetworkInterface",
- "Properties": {
- "SubnetId": { "Ref": "SubnetId" }
- }
- },
-
- "WebServer": {
- "Type": "AWS::EC2::Instance",
- "Properties": {
- "ImageId" : "F17-x86_64-gold",
- "InstanceType" : { "Ref" : "InstanceType" },
- "NetworkInterfaces": [ { "NetworkInterfaceId" : {"Ref": "nic1"},
- "DeviceIndex" : "0" } ],
- "KeyName" : { "Ref" : "KeyName" },
- "UserData" : "wordpress"
- }
- }
- }
-}
-'''
-
-
-class FakeNeutron(object):
-
- def show_subnet(self, subnet, **_params):
- return {
- 'subnet': {
- 'name': 'name',
- 'network_id': 'fc68ea2c-b60b-4b4f-bd82-94ec81110766',
- 'tenant_id': 'c1210485b2424d48804aad5d39c61b8f',
- 'allocation_pools': [{'start': '10.10.0.2',
- 'end': '10.10.0.254'}],
- 'gateway_ip': '10.10.0.1',
- 'ip_version': 4,
- 'cidr': '10.10.0.0/24',
- 'id': '4156c7a5-e8c4-4aff-a6e1-8f3c7bc83861',
- 'enable_dhcp': False,
- }}
-
- def create_port(self, body=None):
- return {
- 'port': {
- 'admin_state_up': True,
- 'device_id': '',
- 'device_owner': '',
- 'fixed_ips': [{
- 'ip_address': '10.0.3.3',
- 'subnet_id': '4156c7a5-e8c4-4aff-a6e1-8f3c7bc83861'}],
- 'id': '64d913c1-bcb1-42d2-8f0a-9593dbcaf251',
- 'mac_address': 'fa:16:3e:25:32:5d',
- 'name': '',
- 'network_id': 'fc68ea2c-b60b-4b4f-bd82-94ec81110766',
- 'status': 'ACTIVE',
- 'tenant_id': 'c1210485b2424d48804aad5d39c61b8f'
- }}
-
-
- class InstancesTest(HeatTestCase):
- def setUp(self):
- super(InstancesTest, self).setUp()
- self.fc = fakes.FakeClient()
- utils.setup_dummy_db()
-
- def _create_test_instance(self, return_server, name):
- stack_name = '%s_stack' % name
- t = template_format.parse(wp_template)
- template = parser.Template(t)
- kwargs = {'KeyName': 'test',
- 'InstanceType': 'm1.large',
- 'SubnetId': '4156c7a5-e8c4-4aff-a6e1-8f3c7bc83861'}
- stack = parser.Stack(utils.dummy_context(), stack_name, template,
- environment.Environment(kwargs),
- stack_id=uuidutils.generate_uuid())
-
- t['Resources']['WebServer']['Properties']['ImageId'] = 'CentOS 5.2'
- instance = instances.Instance('%s_name' % name,
- t['Resources']['WebServer'], stack)
-
- self.m.StubOutWithMock(instance, 'nova')
- instance.nova().MultipleTimes().AndReturn(self.fc)
-
- self.m.StubOutWithMock(instance, 'neutron')
- instance.neutron().MultipleTimes().AndReturn(FakeNeutron())
-
- instance.t = instance.stack.resolve_runtime_data(instance.t)
-
- # need to resolve the template functions
- server_userdata = nova_utils.build_userdata(
- instance,
- instance.t['Properties']['UserData'])
- instance.mime_string = server_userdata
- self.m.StubOutWithMock(self.fc.servers, 'create')
- self.fc.servers.create(
- image=1, flavor=3, key_name='test',
- name=utils.PhysName(stack_name, instance.name),
- security_groups=None,
- userdata=server_userdata, scheduler_hints=None, meta=None,
- nics=[{'port-id': '64d913c1-bcb1-42d2-8f0a-9593dbcaf251'}],
- availability_zone=None).AndReturn(
- return_server)
- self.m.ReplayAll()
-
- scheduler.TaskRunner(instance.create)()
- return instance
-
- def _create_test_instance_with_nic(self, return_server, name):
- stack_name = '%s_stack' % name
- t = template_format.parse(wp_template_with_nic)
- template = parser.Template(t)
- kwargs = {'KeyName': 'test',
- 'InstanceType': 'm1.large',
- 'SubnetId': '4156c7a5-e8c4-4aff-a6e1-8f3c7bc83861'}
- stack = parser.Stack(utils.dummy_context(), stack_name, template,
- environment.Environment(kwargs),
- stack_id=uuidutils.generate_uuid())
-
- t['Resources']['WebServer']['Properties']['ImageId'] = 'CentOS 5.2'
-
- nic = network_interfaces.NetworkInterface('%s_nic' % name,
- t['Resources']['nic1'],
- stack)
-
- instance = instances.Instance('%s_name' % name,
- t['Resources']['WebServer'], stack)
-
- self.m.StubOutWithMock(nic, 'neutron')
- nic.neutron().MultipleTimes().AndReturn(FakeNeutron())
-
- self.m.StubOutWithMock(instance, 'nova')
- instance.nova().MultipleTimes().AndReturn(self.fc)
-
- nic.t = nic.stack.resolve_runtime_data(nic.t)
- instance.t = instance.stack.resolve_runtime_data(instance.t)
-
- # need to resolve the template functions
- server_userdata = nova_utils.build_userdata(
- instance,
- instance.t['Properties']['UserData'])
- instance.mime_string = server_userdata
- self.m.StubOutWithMock(self.fc.servers, 'create')
- self.fc.servers.create(
- image=1, flavor=3, key_name='test',
- name=utils.PhysName(stack_name, instance.name),
- security_groups=None,
- userdata=server_userdata, scheduler_hints=None, meta=None,
- nics=[{'port-id': '64d913c1-bcb1-42d2-8f0a-9593dbcaf251'}],
- availability_zone=None).AndReturn(
- return_server)
- self.m.ReplayAll()
-
- # create network interface
- scheduler.TaskRunner(nic.create)()
- stack.resources["nic1"] = nic
-
- scheduler.TaskRunner(instance.create)()
- return instance
-
- def test_instance_create(self):
- return_server = self.fc.servers.list()[1]
- instance = self._create_test_instance(return_server,
- 'test_instance_create')
- # this makes sure the auto increment worked on instance creation
- self.assertTrue(instance.id > 0)
-
- expected_ip = return_server.networks['public'][0]
- self.assertEqual(instance.FnGetAtt('PublicIp'), expected_ip)
- self.assertEqual(instance.FnGetAtt('PrivateIp'), expected_ip)
- self.assertEqual(instance.FnGetAtt('PrivateDnsName'), expected_ip)
- self.assertEqual(instance.FnGetAtt('PublicDnsName'), expected_ip)
-
- self.m.VerifyAll()
-
- def test_instance_create_with_nic(self):
- return_server = self.fc.servers.list()[1]
- instance = self._create_test_instance_with_nic(
- return_server, 'test_instance_create_with_network_interface')
-
- # this makes sure the auto increment worked on instance creation
- self.assertTrue(instance.id > 0)
-
- expected_ip = return_server.networks['public'][0]
- self.assertEqual(instance.FnGetAtt('PublicIp'), expected_ip)
- self.assertEqual(instance.FnGetAtt('PrivateIp'), expected_ip)
- self.assertEqual(instance.FnGetAtt('PrivateDnsName'), expected_ip)
- self.assertEqual(instance.FnGetAtt('PublicDnsName'), expected_ip)
-
- self.m.VerifyAll()
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from heat.common import timeutils as util
-from heat.tests.common import HeatTestCase
-from heat.tests import utils
-
-
-class ISO8601UtilityTest(HeatTestCase):
-
- def setUp(self):
- super(ISO8601UtilityTest, self).setUp()
- utils.setup_dummy_db()
-
- def test_valid_durations(self):
- self.assertEqual(util.parse_isoduration('PT'), 0)
- self.assertEqual(util.parse_isoduration('PT1H'), 3600)
- self.assertEqual(util.parse_isoduration('PT2M'), 120)
- self.assertEqual(util.parse_isoduration('PT3S'), 3)
- self.assertEqual(util.parse_isoduration('PT1H5M'), 3900)
- self.assertEqual(util.parse_isoduration('PT1H5S'), 3605)
- self.assertEqual(util.parse_isoduration('PT5M3S'), 303)
- self.assertEqual(util.parse_isoduration('PT1H5M3S'), 3903)
- self.assertEqual(util.parse_isoduration('PT24H'), 24 * 3600)
-
- def test_invalid_durations(self):
- self.assertRaises(ValueError, util.parse_isoduration, 'P1Y')
- self.assertRaises(ValueError, util.parse_isoduration, 'P1DT12H')
- self.assertRaises(ValueError, util.parse_isoduration, 'PT1Y1D')
- self.assertRaises(ValueError, util.parse_isoduration, 'PTAH1M0S')
- self.assertRaises(ValueError, util.parse_isoduration, 'PT1HBM0S')
- self.assertRaises(ValueError, util.parse_isoduration, 'PT1H1MCS')
- self.assertRaises(ValueError, util.parse_isoduration, 'PT1H1H')
- self.assertRaises(ValueError, util.parse_isoduration, 'PT1MM')
- self.assertRaises(ValueError, util.parse_isoduration, 'PT1S0S')
- self.assertRaises(ValueError, util.parse_isoduration, 'ABCDEFGH')
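The deleted tests above fully characterize `parse_isoduration`: only the `PT[nH][nM][nS]` subset of ISO 8601 durations is accepted, and anything with years, days, repeated fields, or non-digits raises `ValueError`. A minimal sketch satisfying those assertions (an editorial reconstruction, not the actual Heat implementation) could look like:

```python
import re

# Only the time portion PT[nH][nM][nS] is accepted; each field is optional
# but may appear at most once and only in H, M, S order.
_ISO_DURATION = re.compile(r'^PT(?:(\d+)H)?(?:(\d+)M)?(?:(\d+)S)?$')


def parse_isoduration(duration):
    """Return the duration in seconds, or raise ValueError if invalid."""
    match = _ISO_DURATION.match(duration)
    if match is None:
        raise ValueError('Invalid ISO 8601 duration: %s' % duration)
    hours, minutes, seconds = (int(g) if g else 0 for g in match.groups())
    return hours * 3600 + minutes * 60 + seconds
```

Note that `'PT'` matches with all groups empty and yields 0, exactly as the first assertion in `test_valid_durations` expects.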
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-
-import mox
-import re
-
-from oslo.config import cfg
-from heat.common import exception
-from heat.common import template_format
-from heat.engine import clients
-from heat.engine import scheduler
-from heat.engine.resources import instance
-from heat.engine.resources import user
-from heat.engine.resources import loadbalancer as lb
-from heat.engine.resources import wait_condition as wc
-from heat.engine.resource import Metadata
-from heat.tests.common import HeatTestCase
-from heat.tests import utils
-from heat.tests.v1_1 import fakes
-from heat.tests import fakes as test_fakes
-
-
-lb_template = '''
-{
- "AWSTemplateFormatVersion": "2010-09-09",
- "Description": "LB Template",
- "Parameters" : {
- "KeyName" : {
- "Description" : "KeyName",
- "Type" : "String",
- "Default" : "test"
- }
- },
- "Resources": {
- "WikiServerOne": {
- "Type": "AWS::EC2::Instance",
- "Properties": {
- "ImageId": "F17-x86_64-gold",
- "InstanceType" : "m1.large",
- "KeyName" : "test",
- "UserData" : "some data"
- }
- },
- "LoadBalancer" : {
- "Type" : "AWS::ElasticLoadBalancing::LoadBalancer",
- "Properties" : {
- "AvailabilityZones" : ["nova"],
- "Instances" : [{"Ref": "WikiServerOne"}],
- "Listeners" : [ {
- "LoadBalancerPort" : "80",
- "InstancePort" : "80",
- "Protocol" : "HTTP"
- }]
- }
- }
- }
-}
-'''
-
-lb_template_nokey = '''
-{
- "AWSTemplateFormatVersion": "2010-09-09",
- "Description": "LB Template",
- "Resources": {
- "WikiServerOne": {
- "Type": "AWS::EC2::Instance",
- "Properties": {
- "ImageId": "F17-x86_64-gold",
- "InstanceType" : "m1.large",
- "UserData" : "some data"
- }
- },
- "LoadBalancer" : {
- "Type" : "AWS::ElasticLoadBalancing::LoadBalancer",
- "Properties" : {
- "AvailabilityZones" : ["nova"],
- "Instances" : [{"Ref": "WikiServerOne"}],
- "Listeners" : [ {
- "LoadBalancerPort" : "80",
- "InstancePort" : "80",
- "Protocol" : "HTTP"
- }]
- }
- }
- }
-}
-'''
-
-
-class LoadBalancerTest(HeatTestCase):
- def setUp(self):
- super(LoadBalancerTest, self).setUp()
- self.fc = fakes.FakeClient()
- self.m.StubOutWithMock(clients.OpenStackClients, 'nova')
- self.m.StubOutWithMock(self.fc.servers, 'create')
- self.m.StubOutWithMock(Metadata, '__set__')
- self.fkc = test_fakes.FakeKeystoneClient(
- username='test_stack.CfnLBUser')
-
- cfg.CONF.set_default('heat_waitcondition_server_url',
- 'http://server.test:8000/v1/waitcondition')
- utils.setup_dummy_db()
-
- def create_loadbalancer(self, t, stack, resource_name):
- rsrc = lb.LoadBalancer(resource_name,
- t['Resources'][resource_name],
- stack)
- self.assertEqual(None, rsrc.validate())
- scheduler.TaskRunner(rsrc.create)()
- self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state)
- return rsrc
-
- def _create_stubs(self, key_name='test', stub_meta=True):
-
- self.m.StubOutWithMock(user.User, 'keystone')
- user.User.keystone().AndReturn(self.fkc)
- self.m.StubOutWithMock(user.AccessKey, 'keystone')
- user.AccessKey.keystone().AndReturn(self.fkc)
-
- self.m.StubOutWithMock(wc.WaitConditionHandle, 'keystone')
- wc.WaitConditionHandle.keystone().MultipleTimes().AndReturn(self.fkc)
-
- server_name = utils.PhysName(utils.PhysName('test_stack',
- 'LoadBalancer'),
- 'LB_instance')
- clients.OpenStackClients.nova(
- "compute").MultipleTimes().AndReturn(self.fc)
- self.fc.servers.create(
- flavor=2, image=745, key_name=key_name,
- meta=None, nics=None, name=server_name,
- scheduler_hints=None, userdata=mox.IgnoreArg(),
- security_groups=None, availability_zone=None).AndReturn(
- self.fc.servers.list()[1])
- if stub_meta:
- Metadata.__set__(mox.IgnoreArg(),
- mox.IgnoreArg()).AndReturn(None)
-
- self.m.StubOutWithMock(wc.WaitConditionHandle, 'get_status')
- wc.WaitConditionHandle.get_status().AndReturn(['SUCCESS'])
-
- def test_loadbalancer(self):
- self._create_stubs()
-
- self.m.ReplayAll()
-
- t = template_format.parse(lb_template)
- s = utils.parse_stack(t)
- s.store()
-
- rsrc = self.create_loadbalancer(t, s, 'LoadBalancer')
-
- hc = {
- 'Target': 'HTTP:80/',
- 'HealthyThreshold': '3',
- 'UnhealthyThreshold': '5',
- 'Interval': '30',
- 'Timeout': '5'}
- rsrc.t['Properties']['HealthCheck'] = hc
- self.assertEqual(None, rsrc.validate())
-
- hc['Timeout'] = 35
- self.assertEqual(
- {'Error': 'Interval must be larger than Timeout'},
- rsrc.validate())
- hc['Timeout'] = 5
-
- self.assertEqual('LoadBalancer', rsrc.FnGetRefId())
-
- templ = template_format.parse(lb.lb_template)
- ha_cfg = rsrc._haproxy_config(templ, rsrc.properties['Instances'])
- self.assertRegexpMatches(ha_cfg, 'bind \*:80')
- self.assertRegexpMatches(ha_cfg, 'server server1 1\.2\.3\.4:80 '
- 'check inter 30s fall 5 rise 3')
- self.assertRegexpMatches(ha_cfg, 'timeout check 5s')
-
- id_list = []
- for inst_name in ['WikiServerOne1', 'WikiServerOne2']:
- inst = instance.Instance(inst_name,
- s.t['Resources']['WikiServerOne'],
- s)
- id_list.append(inst.FnGetRefId())
-
- rsrc.handle_update(rsrc.json_snippet, {}, {'Instances': id_list})
-
- self.assertEqual('4.5.6.7', rsrc.FnGetAtt('DNSName'))
- self.assertEqual('', rsrc.FnGetAtt('SourceSecurityGroup.GroupName'))
-
- self.assertRaises(exception.InvalidTemplateAttribute,
- rsrc.FnGetAtt, 'Foo')
-
- self.assertEqual(None, rsrc.handle_update({}, {}, {}))
-
- self.m.VerifyAll()
-
- def test_loadbalancer_nokey(self):
- self._create_stubs(key_name=None, stub_meta=False)
- self.m.ReplayAll()
-
- t = template_format.parse(lb_template_nokey)
- s = utils.parse_stack(t)
- s.store()
-
- rsrc = self.create_loadbalancer(t, s, 'LoadBalancer')
- self.m.VerifyAll()
-
- def assertRegexpMatches(self, text, expected_regexp, msg=None):
- """Fail the test unless the text matches the regular expression."""
- if isinstance(expected_regexp, basestring):
- expected_regexp = re.compile(expected_regexp)
- if not expected_regexp.search(text):
- msg = msg or "Regexp didn't match"
- msg = '%s: %r not found in %r' % (msg,
- expected_regexp.pattern, text)
- raise self.failureException(msg)
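The `test_loadbalancer` case above asserts one validation rule for the `HealthCheck` property: a `Timeout` larger than the `Interval` yields `{'Error': 'Interval must be larger than Timeout'}`, otherwise validation passes. A hedged sketch of that rule (the function name `validate_health_check` is hypothetical; the real check lives inside the resource's `validate`):

```python
def validate_health_check(health_check):
    """Return an error dict if Timeout exceeds Interval, else None.

    Values arrive as strings from the template, so compare numerically.
    """
    if health_check is None:
        return None
    if float(health_check['Timeout']) > float(health_check['Interval']):
        return {'Error': 'Interval must be larger than Timeout'}
    return None
```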
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import errno
-import mox
-import os
-import pkg_resources
-import subprocess
-
-from heat.cloudinit import loguserdata
-from heat.tests.common import HeatTestCase
-
-
-class FakeCiVersion(object):
- def __init__(self, version=None):
- self.version = version
-
-
-class FakePOpen(object):
- def __init__(self, returncode=0):
- self.returncode = returncode
-
- def wait(self):
- pass
-
- def communicate(self, input=None):
- pass
-
-
-class LoguserdataTest(HeatTestCase):
-
- def setUp(self):
- super(LoguserdataTest, self).setUp()
- self.m.StubOutWithMock(pkg_resources, 'get_distribution')
- self.m.StubOutWithMock(subprocess, 'Popen')
- self.m.StubOutWithMock(os, 'chmod')
-
- def test_ci_version(self):
- # too old versions
- pkg_resources.get_distribution('cloud-init').AndReturn(
- FakeCiVersion('0.5.0'))
- pkg_resources.get_distribution('cloud-init').AndReturn(
- FakeCiVersion('0.5.9'))
-
- # new enough versions
- pkg_resources.get_distribution('cloud-init').AndReturn(
- FakeCiVersion('0.6.0'))
- pkg_resources.get_distribution('cloud-init').AndReturn(
- FakeCiVersion('0.7.0'))
- pkg_resources.get_distribution('cloud-init').AndReturn(
- FakeCiVersion('1.0'))
- pkg_resources.get_distribution('cloud-init').AndReturn(
- FakeCiVersion('2.0'))
-
- self.m.ReplayAll()
-
- self.assertFalse(loguserdata.chk_ci_version())
- self.assertFalse(loguserdata.chk_ci_version())
-
- self.assertTrue(loguserdata.chk_ci_version())
- self.assertTrue(loguserdata.chk_ci_version())
- self.assertTrue(loguserdata.chk_ci_version())
- self.assertTrue(loguserdata.chk_ci_version())
-
- self.m.VerifyAll()
-
- def test_call(self):
- subprocess.Popen(
- ['echo', 'hi'],
- stderr=mox.IgnoreArg(),
- stdout=mox.IgnoreArg()).AndReturn(FakePOpen(0))
-
- self.m.ReplayAll()
- self.assertEqual(0, loguserdata.call(['echo', 'hi']))
- self.m.VerifyAll()
-
- def test_main(self):
-
- pkg_resources.get_distribution('cloud-init').AndReturn(
- FakeCiVersion('0.7.0'))
-
- os.chmod('/var/lib/heat-cfntools/cfn-userdata', 0o700).AndReturn(None)
- subprocess.Popen(
- ['/var/lib/heat-cfntools/cfn-userdata'],
- stderr=mox.IgnoreArg(),
- stdout=mox.IgnoreArg()).AndReturn(FakePOpen(0))
-
- self.m.ReplayAll()
- loguserdata.main()
- self.m.VerifyAll()
-
- def test_main_script_empty(self):
-
- pkg_resources.get_distribution('cloud-init').AndReturn(
- FakeCiVersion('0.7.0'))
-
- os.chmod('/var/lib/heat-cfntools/cfn-userdata', 0o700).AndReturn(None)
- subprocess.Popen(
- ['/var/lib/heat-cfntools/cfn-userdata'],
- stderr=mox.IgnoreArg(),
- stdout=mox.IgnoreArg()).AndRaise(
- OSError(errno.ENOEXEC, "empty script"))
-
- self.m.ReplayAll()
- self.assertEqual(None, loguserdata.main())
-
- self.m.VerifyAll()
-
- def test_main_os_error(self):
-
- pkg_resources.get_distribution('cloud-init').AndReturn(
- FakeCiVersion('0.7.0'))
-
- os.chmod('/var/lib/heat-cfntools/cfn-userdata', 0o700).AndReturn(None)
- subprocess.Popen(
- ['/var/lib/heat-cfntools/cfn-userdata'],
- stderr=mox.IgnoreArg(),
- stdout=mox.IgnoreArg()).AndRaise(
- OSError(errno.ENOENT, "no such file"))
-
- self.m.ReplayAll()
- self.assertEqual(os.EX_OSERR, loguserdata.main())
-
- self.m.VerifyAll()
-
- def test_main_error_other(self):
- pkg_resources.get_distribution('cloud-init').AndReturn(
- FakeCiVersion('0.7.0'))
- os.chmod('/var/lib/heat-cfntools/cfn-userdata', 0o700).AndReturn(None)
- subprocess.Popen(
- ['/var/lib/heat-cfntools/cfn-userdata'],
- stderr=mox.IgnoreArg(),
- stdout=mox.IgnoreArg()).AndRaise(IOError("read failed"))
-
- self.m.ReplayAll()
- self.assertEqual(os.EX_SOFTWARE, loguserdata.main())
- self.m.VerifyAll()
-
- def test_main_fails(self):
-
- #fail on ci version
- pkg_resources.get_distribution('cloud-init').AndReturn(
- FakeCiVersion('0.5.0'))
- #fail on execute cfn-userdata
- pkg_resources.get_distribution('cloud-init').AndReturn(
- FakeCiVersion('0.7.0'))
-
- os.chmod('/var/lib/heat-cfntools/cfn-userdata', 0o700).AndReturn(None)
- subprocess.Popen(
- ['/var/lib/heat-cfntools/cfn-userdata'],
- stderr=mox.IgnoreArg(),
- stdout=mox.IgnoreArg()).AndReturn(FakePOpen(-2))
-
- self.m.ReplayAll()
- self.assertEqual(-1, loguserdata.main())
- self.assertEqual(-2, loguserdata.main())
- self.m.VerifyAll()
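`test_ci_version` above pins the boundary for `chk_ci_version`: cloud-init 0.5.x is too old, 0.6.0 and later is acceptable. The real function queries `pkg_resources`; the comparison itself can be sketched as a pure helper (the name `version_is_supported` is hypothetical):

```python
MIN_CLOUD_INIT = (0, 6, 0)  # versions below 0.6.0 are rejected


def version_is_supported(version_string):
    """True when a dotted version string meets the cloud-init minimum."""
    parts = tuple(int(p) for p in version_string.split('.'))
    return parts >= MIN_CLOUD_INIT
```

Tuple comparison handles versions of differing length, so `'1.0'` (two components) still compares greater than `(0, 6, 0)`.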
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-
-import mox
-
-from oslo.config import cfg
-from heat.tests import fakes
-from heat.tests.common import HeatTestCase
-from heat.tests import utils
-
-from heat.engine import environment
-from heat.common import identifier
-from heat.common import template_format
-from heat.engine import parser
-from heat.engine import scheduler
-from heat.engine import service
-from heat.engine.resources import instance
-from heat.engine.resources import wait_condition as wc
-
-
-test_template_metadata = '''
-{
- "AWSTemplateFormatVersion" : "2010-09-09",
- "Description" : "",
- "Parameters" : {
- "KeyName" : {"Type" : "String", "Default": "mine" },
- },
- "Resources" : {
- "S1": {
- "Type": "AWS::EC2::Instance",
- "Metadata" : {
- "AWS::CloudFormation::Init" : {
- "config" : {
- "files" : {
- "/tmp/random_file" : {
- "content" : { "Fn::Join" : ["", [
- "s2-ip=", {"Fn::GetAtt": ["S2", "PublicIp"]}
- ]]},
- "mode" : "000400",
- "owner" : "root",
- "group" : "root"
- }
- }
- }
- }
- },
- "Properties": {
- "ImageId" : "a",
- "InstanceType" : "m1.large",
- "KeyName" : { "Ref" : "KeyName" },
- "UserData" : "#!/bin/bash -v\n"
- }
- },
- "S2": {
- "Type": "AWS::EC2::Instance",
- "Properties": {
- "ImageId" : "a",
- "InstanceType" : "m1.large",
- "KeyName" : { "Ref" : "KeyName" },
- "UserData" : "#!/bin/bash -v\n"
- }
- }
- }
-}
-'''
-
-test_template_waitcondition = '''
-{
- "AWSTemplateFormatVersion" : "2010-09-09",
- "Description" : "Just a WaitCondition.",
- "Parameters" : {
- "KeyName" : {"Type" : "String", "Default": "mine" },
- },
- "Resources" : {
- "WH" : {
- "Type" : "AWS::CloudFormation::WaitConditionHandle"
- },
- "S1": {
- "Type": "AWS::EC2::Instance",
- "Properties": {
- "ImageId" : "a",
- "InstanceType" : "m1.large",
- "KeyName" : { "Ref" : "KeyName" },
- "UserData" : { "Fn::Join" : [ "", [ "#!/bin/bash -v\n",
- "echo ",
- { "Ref" : "WH" },
- "\n" ] ] }
- }
- },
- "WC" : {
- "Type" : "AWS::CloudFormation::WaitCondition",
- "DependsOn": "S1",
- "Properties" : {
- "Handle" : {"Ref" : "WH"},
- "Timeout" : "5"
- }
- },
- "S2": {
- "Type": "AWS::EC2::Instance",
- "Metadata" : {
- "test" : {"Fn::GetAtt": ["WC", "Data"]}
- },
- "Properties": {
- "ImageId" : "a",
- "InstanceType" : "m1.large",
- "KeyName" : { "Ref" : "KeyName" },
- "UserData" : "#!/bin/bash -v\n"
- }
- }
- }
-}
-'''
-
-
-class MetadataRefreshTest(HeatTestCase):
- '''
- The point of this test is to confirm that metadata gets updated
- when FnGetAtt() returns something different.
- '''
- def setUp(self):
- super(MetadataRefreshTest, self).setUp()
- self.fc = fakes.FakeKeystoneClient()
- utils.setup_dummy_db()
-
- # Note tests creating a stack should be decorated with @stack_delete_after
- # to ensure the stack is properly cleaned up
- def create_stack(self, stack_name='test_stack', params={}):
- temp = template_format.parse(test_template_metadata)
- template = parser.Template(temp)
- ctx = utils.dummy_context()
- stack = parser.Stack(ctx, stack_name, template,
- environment.Environment(params),
- disable_rollback=True)
-
- self.stack_id = stack.store()
-
- self.m.StubOutWithMock(instance.Instance, 'handle_create')
- self.m.StubOutWithMock(instance.Instance, 'check_create_complete')
- for cookie in (object(), object()):
- instance.Instance.handle_create().AndReturn(cookie)
- create_complete = instance.Instance.check_create_complete(cookie)
- create_complete.InAnyOrder().AndReturn(True)
- self.m.StubOutWithMock(instance.Instance, 'FnGetAtt')
-
- return stack
-
- @utils.stack_delete_after
- def test_FnGetAtt(self):
- self.stack = self.create_stack()
-
- instance.Instance.FnGetAtt('PublicIp').AndReturn('1.2.3.5')
-
- # called by metadata_update()
- instance.Instance.FnGetAtt('PublicIp').AndReturn('10.0.0.5')
-
- self.m.ReplayAll()
- self.stack.create()
-
- self.assertEqual(self.stack.state,
- (self.stack.CREATE, self.stack.COMPLETE))
-
- s1 = self.stack.resources['S1']
- s2 = self.stack.resources['S2']
- files = s1.metadata['AWS::CloudFormation::Init']['config']['files']
- cont = files['/tmp/random_file']['content']
- self.assertEqual((s2.CREATE, s2.COMPLETE), s2.state)
- self.assertEqual(cont, 's2-ip=1.2.3.5')
-
- s1.metadata_update()
- s2.metadata_update()
- files = s1.metadata['AWS::CloudFormation::Init']['config']['files']
- cont = files['/tmp/random_file']['content']
- self.assertEqual(cont, 's2-ip=10.0.0.5')
-
- self.m.VerifyAll()
-
-
-class WaitCondMetadataUpdateTest(HeatTestCase):
- def setUp(self):
- super(WaitCondMetadataUpdateTest, self).setUp()
- utils.setup_dummy_db()
- self.fc = fakes.FakeKeystoneClient()
- self.man = service.EngineService('a-host', 'a-topic')
- cfg.CONF.set_default('heat_waitcondition_server_url',
- 'http://server.test:8000/v1/waitcondition')
-
- # Note tests creating a stack should be decorated with @stack_delete_after
- # to ensure the stack is properly cleaned up
- def create_stack(self, stack_name='test_stack'):
- temp = template_format.parse(test_template_waitcondition)
- template = parser.Template(temp)
- ctx = utils.dummy_context()
- stack = parser.Stack(ctx, stack_name, template, disable_rollback=True)
-
- self.stack_id = stack.store()
-
- self.m.StubOutWithMock(instance.Instance, 'handle_create')
- self.m.StubOutWithMock(instance.Instance, 'check_create_complete')
- for cookie in (object(), object()):
- instance.Instance.handle_create().AndReturn(cookie)
- instance.Instance.check_create_complete(cookie).AndReturn(True)
-
- self.m.StubOutWithMock(wc.WaitConditionHandle, 'keystone')
- wc.WaitConditionHandle.keystone().MultipleTimes().AndReturn(self.fc)
-
- id = identifier.ResourceIdentifier('test_tenant_id', stack.name,
- stack.id, '', 'WH')
- self.m.StubOutWithMock(wc.WaitConditionHandle, 'identifier')
- wc.WaitConditionHandle.identifier().MultipleTimes().AndReturn(id)
-
- self.m.StubOutWithMock(scheduler.TaskRunner, '_sleep')
- self.m.StubOutWithMock(service.EngineService, '_load_user_creds')
- service.EngineService._load_user_creds(
- mox.IgnoreArg()).MultipleTimes().AndReturn(ctx)
-
- return stack
-
- @utils.stack_delete_after
- def test_wait_meta(self):
- '''
- 1 create stack
- 2 assert empty instance metadata
- 3 service.metadata_update()
- 4 assert valid waitcond metadata
- 5 assert valid instance metadata
- '''
-
- self.stack = self.create_stack()
-
- watch = self.stack['WC']
- inst = self.stack['S2']
-
- def check_empty(sleep_time):
- self.assertEqual(watch.FnGetAtt('Data'), '{}')
- self.assertEqual(inst.metadata['test'], None)
-
- def update_metadata(id, data, reason):
- self.man.metadata_update(utils.dummy_context(),
- dict(self.stack.identifier()),
- 'WH',
- {'Data': data, 'Reason': reason,
- 'Status': 'SUCCESS', 'UniqueId': id})
-
- def post_success(sleep_time):
- update_metadata('123', 'foo', 'bar')
-
- scheduler.TaskRunner._sleep(mox.IsA(int)).WithSideEffects(check_empty)
- scheduler.TaskRunner._sleep(mox.IsA(int)).WithSideEffects(post_success)
- scheduler.TaskRunner._sleep(mox.IsA(int)).MultipleTimes().AndReturn(
- None)
-
- self.m.ReplayAll()
- self.stack.create()
-
- self.assertEqual(self.stack.state,
- (self.stack.CREATE, self.stack.COMPLETE))
-
- self.assertEqual(watch.FnGetAtt('Data'), '{"123": "foo"}')
- self.assertEqual(inst.metadata['test'], '{"123": "foo"}')
-
- update_metadata('456', 'blarg', 'wibble')
- self.assertEqual(watch.FnGetAtt('Data'),
- '{"123": "foo", "456": "blarg"}')
- self.assertEqual(inst.metadata['test'],
- '{"123": "foo", "456": "blarg"}')
-
- self.m.VerifyAll()
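The `test_wait_meta` flow above shows how wait-condition signals accumulate: each POST carries a `UniqueId` and `Data`, and `FnGetAtt('Data')` serializes the collected mapping as JSON, which is why two signals produce `'{"123": "foo", "456": "blarg"}'`. A minimal sketch of that accumulation (the class name `WaitConditionData` is hypothetical):

```python
import json


class WaitConditionData(object):
    """Collects signal payloads keyed by UniqueId, as the test observes."""

    def __init__(self):
        self._signals = {}

    def signal(self, unique_id, data):
        # A repeated UniqueId overwrites its earlier payload.
        self._signals[unique_id] = data

    def data(self):
        # Deterministic key order matches the JSON the assertions compare.
        return json.dumps(self._signals, sort_keys=True)
```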
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-
-import copy
-
-from oslo.config import cfg
-
-cfg.CONF.import_opt('max_resources_per_stack', 'heat.common.config')
-
-from heat.common import exception
-from heat.common import template_format
-from heat.common import urlfetch
-from heat.db import api as db_api
-from heat.engine import parser
-from heat.engine import resource
-from heat.engine import scheduler
-from heat.tests import generic_resource as generic_rsrc
-from heat.tests import utils
-from heat.tests.common import HeatTestCase
-
-
-class NestedStackTest(HeatTestCase):
- test_template = '''
-HeatTemplateFormatVersion: '2012-12-12'
-Resources:
- the_nested:
- Type: AWS::CloudFormation::Stack
- Properties:
- TemplateURL: https://server.test/the.template
- Parameters:
- KeyName: foo
-'''
-
- nested_template = '''
-HeatTemplateFormatVersion: '2012-12-12'
-Parameters:
- KeyName:
- Type: String
-Outputs:
- Foo:
- Value: bar
-'''
-
- update_template = '''
-HeatTemplateFormatVersion: '2012-12-12'
-Parameters:
- KeyName:
- Type: String
-Outputs:
- Bar:
- Value: foo
-'''
-
- def setUp(self):
- super(NestedStackTest, self).setUp()
- self.m.StubOutWithMock(urlfetch, 'get')
- utils.setup_dummy_db()
-
- def create_stack(self, template):
- t = template_format.parse(template)
- stack = self.parse_stack(t)
- stack.create()
- self.assertEqual(stack.state, (stack.CREATE, stack.COMPLETE))
- return stack
-
- def parse_stack(self, t):
- ctx = utils.dummy_context('test_username', 'aaaa', 'password')
- stack_name = 'test_stack'
- tmpl = parser.Template(t)
- stack = parser.Stack(ctx, stack_name, tmpl)
- stack.store()
- return stack
-
- def test_nested_stack_create(self):
- urlfetch.get('https://server.test/the.template').MultipleTimes().\
- AndReturn(self.nested_template)
- self.m.ReplayAll()
-
- stack = self.create_stack(self.test_template)
- rsrc = stack['the_nested']
- nested_name = utils.PhysName(stack.name, 'the_nested')
- self.assertEqual(nested_name, rsrc.physical_resource_name())
- arn_prefix = ('arn:openstack:heat::aaaa:stacks/%s/' %
- rsrc.physical_resource_name())
- self.assertTrue(rsrc.FnGetRefId().startswith(arn_prefix))
-
- self.assertEqual('bar', rsrc.FnGetAtt('Outputs.Foo'))
- self.assertRaises(
- exception.InvalidTemplateAttribute, rsrc.FnGetAtt, 'Foo')
- self.assertRaises(
- exception.InvalidTemplateAttribute, rsrc.FnGetAtt, 'Outputs.Bar')
- self.assertRaises(
- exception.InvalidTemplateAttribute, rsrc.FnGetAtt, 'Bar')
-
- rsrc.delete()
- self.assertTrue(rsrc.FnGetRefId().startswith(arn_prefix))
-
- self.m.VerifyAll()
-
- def test_nested_stack_create_exceeds_resource_limit(self):
- cfg.CONF.set_override('max_resources_per_stack', 1)
- resource._register_class('GenericResource',
- generic_rsrc.GenericResource)
- urlfetch.get('https://server.test/the.template').MultipleTimes().\
- AndReturn('''
-HeatTemplateFormatVersion: '2012-12-12'
-Parameters:
- KeyName:
- Type: String
-Resources:
- NestedResource:
- Type: GenericResource
-Outputs:
- Foo:
- Value: bar
-''')
- self.m.ReplayAll()
-
- t = template_format.parse(self.test_template)
- stack = self.parse_stack(t)
- stack.create()
- self.assertEqual(stack.state, (stack.CREATE, stack.FAILED))
- self.assertIn('Maximum resources per stack exceeded',
- stack.status_reason)
-
- self.m.VerifyAll()
-
- def test_nested_stack_create_equals_resource_limit(self):
- cfg.CONF.set_override('max_resources_per_stack', 2)
- resource._register_class('GenericResource',
- generic_rsrc.GenericResource)
- urlfetch.get('https://server.test/the.template').MultipleTimes().\
- AndReturn('''
-HeatTemplateFormatVersion: '2012-12-12'
-Parameters:
- KeyName:
- Type: String
-Resources:
- NestedResource:
- Type: GenericResource
-Outputs:
- Foo:
- Value: bar
-''')
- self.m.ReplayAll()
-
- t = template_format.parse(self.test_template)
- stack = self.parse_stack(t)
- stack.create()
- self.assertEqual(stack.state, (stack.CREATE, stack.COMPLETE))
- self.assertIn('NestedResource',
- stack.resources['the_nested'].nested().resources)
-
- self.m.VerifyAll()
-
- def test_nested_stack_update(self):
- urlfetch.get('https://server.test/the.template').MultipleTimes().\
- AndReturn(self.nested_template)
- urlfetch.get('https://server.test/new.template').MultipleTimes().\
- AndReturn(self.update_template)
-
- self.m.ReplayAll()
-
- stack = self.create_stack(self.test_template)
- rsrc = stack['the_nested']
-
- original_nested_id = rsrc.resource_id
- t = template_format.parse(self.test_template)
- new_res = copy.deepcopy(t['Resources']['the_nested'])
- new_res['Properties']['TemplateURL'] = (
- 'https://server.test/new.template')
- prop_diff = {'TemplateURL': 'https://server.test/new.template'}
- updater = rsrc.handle_update(new_res, {}, prop_diff)
- updater.run_to_completion()
- self.assertEqual(True, rsrc.check_update_complete(updater))
-
- # Expect the physical resource name staying the same after update,
- # so that the nested was actually updated instead of replaced.
- self.assertEqual(original_nested_id, rsrc.resource_id)
- db_nested = db_api.stack_get(stack.context,
- rsrc.resource_id)
- # Owner_id should be preserved during the update process.
- self.assertEqual(stack.id, db_nested.owner_id)
-
- self.assertEqual('foo', rsrc.FnGetAtt('Outputs.Bar'))
- self.assertRaises(
- exception.InvalidTemplateAttribute, rsrc.FnGetAtt, 'Foo')
- self.assertRaises(
- exception.InvalidTemplateAttribute, rsrc.FnGetAtt, 'Outputs.Foo')
- self.assertRaises(
- exception.InvalidTemplateAttribute, rsrc.FnGetAtt, 'Bar')
-
- rsrc.delete()
-
- self.m.VerifyAll()
-
- def test_nested_stack_suspend_resume(self):
- urlfetch.get('https://server.test/the.template').AndReturn(
- self.nested_template)
- self.m.ReplayAll()
-
- stack = self.create_stack(self.test_template)
- rsrc = stack['the_nested']
-
- scheduler.TaskRunner(rsrc.suspend)()
- self.assertEqual(rsrc.state, (rsrc.SUSPEND, rsrc.COMPLETE))
-
- scheduler.TaskRunner(rsrc.resume)()
- self.assertEqual(rsrc.state, (rsrc.RESUME, rsrc.COMPLETE))
-
- rsrc.delete()
- self.m.VerifyAll()
-
- def test_nested_stack_three_deep(self):
- root_template = '''
-HeatTemplateFormatVersion: '2012-12-12'
-Resources:
- Nested:
- Type: AWS::CloudFormation::Stack
- Properties:
- TemplateURL: 'https://server.test/depth1.template'
-'''
- depth1_template = '''
-HeatTemplateFormatVersion: '2012-12-12'
-Resources:
- Nested:
- Type: AWS::CloudFormation::Stack
- Properties:
- TemplateURL: 'https://server.test/depth2.template'
-'''
- depth2_template = '''
-HeatTemplateFormatVersion: '2012-12-12'
-Resources:
- Nested:
- Type: AWS::CloudFormation::Stack
- Properties:
- TemplateURL: 'https://server.test/depth3.template'
- Parameters:
- KeyName: foo
-'''
- urlfetch.get(
- 'https://server.test/depth1.template').AndReturn(
- depth1_template)
- urlfetch.get(
- 'https://server.test/depth2.template').AndReturn(
- depth2_template)
- urlfetch.get(
- 'https://server.test/depth3.template').AndReturn(
- self.nested_template)
- self.m.ReplayAll()
- self.create_stack(root_template)
- self.m.VerifyAll()
-
- def test_nested_stack_four_deep(self):
- root_template = '''
-HeatTemplateFormatVersion: '2012-12-12'
-Resources:
- Nested:
- Type: AWS::CloudFormation::Stack
- Properties:
- TemplateURL: 'https://server.test/depth1.template'
-'''
- depth1_template = '''
-HeatTemplateFormatVersion: '2012-12-12'
-Resources:
- Nested:
- Type: AWS::CloudFormation::Stack
- Properties:
- TemplateURL: 'https://server.test/depth2.template'
-'''
- depth2_template = '''
-HeatTemplateFormatVersion: '2012-12-12'
-Resources:
- Nested:
- Type: AWS::CloudFormation::Stack
- Properties:
- TemplateURL: 'https://server.test/depth3.template'
-'''
- depth3_template = '''
-HeatTemplateFormatVersion: '2012-12-12'
-Resources:
- Nested:
- Type: AWS::CloudFormation::Stack
- Properties:
- TemplateURL: 'https://server.test/depth4.template'
- Parameters:
- KeyName: foo
-'''
- urlfetch.get(
- 'https://server.test/depth1.template').AndReturn(
- depth1_template)
- urlfetch.get(
- 'https://server.test/depth2.template').AndReturn(
- depth2_template)
- urlfetch.get(
- 'https://server.test/depth3.template').AndReturn(
- depth3_template)
- urlfetch.get(
- 'https://server.test/depth4.template').AndReturn(
- self.nested_template)
- self.m.ReplayAll()
- t = template_format.parse(root_template)
- stack = self.parse_stack(t)
- stack.create()
- self.assertEqual((stack.CREATE, stack.FAILED), stack.state)
- self.assertIn('Recursion depth exceeds', stack.status_reason)
- self.m.VerifyAll()
-
- def test_nested_stack_four_wide(self):
- root_template = '''
-HeatTemplateFormatVersion: '2012-12-12'
-Resources:
- Nested:
- Type: AWS::CloudFormation::Stack
- Properties:
- TemplateURL: 'https://server.test/depth1.template'
- Parameters:
- KeyName: foo
- Nested2:
- Type: AWS::CloudFormation::Stack
- Properties:
- TemplateURL: 'https://server.test/depth2.template'
- Parameters:
- KeyName: foo
- Nested3:
- Type: AWS::CloudFormation::Stack
- Properties:
- TemplateURL: 'https://server.test/depth3.template'
- Parameters:
- KeyName: foo
- Nested4:
- Type: AWS::CloudFormation::Stack
- Properties:
- TemplateURL: 'https://server.test/depth4.template'
- Parameters:
- KeyName: foo
-'''
- urlfetch.get(
- 'https://server.test/depth1.template').InAnyOrder().AndReturn(
- self.nested_template)
- urlfetch.get(
- 'https://server.test/depth2.template').InAnyOrder().AndReturn(
- self.nested_template)
- urlfetch.get(
- 'https://server.test/depth3.template').InAnyOrder().AndReturn(
- self.nested_template)
- urlfetch.get(
- 'https://server.test/depth4.template').InAnyOrder().AndReturn(
- self.nested_template)
- self.m.ReplayAll()
- self.create_stack(root_template)
- self.m.VerifyAll()
-
- def test_nested_stack_infinite_recursion(self):
- template = '''
-HeatTemplateFormatVersion: '2012-12-12'
-Resources:
- Nested:
- Type: AWS::CloudFormation::Stack
- Properties:
- TemplateURL: 'https://server.test/the.template'
-'''
- urlfetch.get(
- 'https://server.test/the.template').MultipleTimes().AndReturn(
- template)
- self.m.ReplayAll()
- t = template_format.parse(template)
- stack = self.parse_stack(t)
- stack.create()
-        self.assertEqual((stack.CREATE, stack.FAILED), stack.state)
- self.assertIn('Recursion depth exceeds', stack.status_reason)
- self.m.VerifyAll()
-
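The three-deep, four-deep and infinite-recursion tests above all exercise an engine-side limit on how deeply `AWS::CloudFormation::Stack` resources may nest. A minimal standalone sketch of such a guard (the `MAX_NESTED_DEPTH` constant, `Stack` class and method names here are hypothetical, not Heat's actual implementation):

```python
# Hypothetical sketch of a nesting-depth guard: each nested stack
# records its parent, depth is the length of the ownership chain,
# and creating one level past the limit fails with the same kind of
# "Recursion depth exceeds" error the tests assert on.

MAX_NESTED_DEPTH = 3


class RecursionDepthExceeded(Exception):
    pass


class Stack(object):
    def __init__(self, parent=None):
        self.parent = parent

    @property
    def depth(self):
        # Walk up the ownership chain to count nesting levels.
        return 0 if self.parent is None else self.parent.depth + 1

    def create_nested(self):
        if self.depth + 1 > MAX_NESTED_DEPTH:
            raise RecursionDepthExceeded(
                'Recursion depth exceeds %d' % MAX_NESTED_DEPTH)
        return Stack(parent=self)
```

With a limit of 3, three levels below the root succeed and a fourth raises, mirroring why `test_nested_stack_three_deep` passes while `test_nested_stack_four_deep` ends in `CREATE, FAILED`.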
-
-class ResDataResource(generic_rsrc.GenericResource):
- def handle_create(self):
- db_api.resource_data_set(self, "test", 'A secret value', True)
-
-
-class ResDataNestedStackTest(NestedStackTest):
-
- nested_template = '''
-HeatTemplateFormatVersion: "2012-12-12"
-Parameters:
- KeyName:
- Type: String
-Resources:
- nested_res:
- Type: "res.data.resource"
-Outputs:
- Foo:
- Value: bar
-'''
-
- def setUp(self):
- resource._register_class("res.data.resource", ResDataResource)
- super(ResDataNestedStackTest, self).setUp()
-
- def test_res_data_delete(self):
- urlfetch.get('https://server.test/the.template').AndReturn(
- self.nested_template)
- self.m.ReplayAll()
- stack = self.create_stack(self.test_template)
- res = stack['the_nested'].nested()['nested_res']
- stack.delete()
-        self.assertEqual((stack.DELETE, stack.COMPLETE), stack.state)
- self.assertRaises(exception.NotFound, db_api.resource_data_get, res,
- 'test')
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import mox
-
-from testtools import skipIf
-
-from heat.engine import clients
-from heat.common import exception
-from heat.common import template_format
-from heat.engine import properties
-from heat.engine import resource
-from heat.engine import scheduler
-from heat.engine.resources.neutron import net
-from heat.engine.resources.neutron import subnet
-from heat.engine.resources.neutron import router
-from heat.engine.resources.neutron.neutron import NeutronResource as qr
-from heat.openstack.common.importutils import try_import
-from heat.tests.common import HeatTestCase
-from heat.tests import fakes
-from heat.tests import utils
-
-neutronclient = try_import('neutronclient.v2_0.client')
-qe = try_import('neutronclient.common.exceptions')
-
-neutron_template = '''
-{
- "AWSTemplateFormatVersion" : "2010-09-09",
- "Description" : "Template to test Neutron resources",
- "Parameters" : {},
- "Resources" : {
- "network": {
- "Type": "OS::Neutron::Net",
- "Properties": {
- "name": "the_network",
- "tenant_id": "c1210485b2424d48804aad5d39c61b8f"
- }
- },
- "unnamed_network": {
- "Type": "OS::Neutron::Net"
- },
- "admin_down_network": {
- "Type": "OS::Neutron::Net",
- "Properties": {
- "admin_state_up": false
- }
- },
- "subnet": {
- "Type": "OS::Neutron::Subnet",
- "Properties": {
- "network_id": { "Ref" : "network" },
- "tenant_id": "c1210485b2424d48804aad5d39c61b8f",
- "ip_version": 4,
- "cidr": "10.0.3.0/24",
- "allocation_pools": [{"start": "10.0.3.20", "end": "10.0.3.150"}],
- "dns_nameservers": ["8.8.8.8"]
- }
- },
- "port": {
- "Type": "OS::Neutron::Port",
- "Properties": {
- "device_id": "d6b4d3a5-c700-476f-b609-1493dd9dadc0",
- "name": "port1",
- "network_id": { "Ref" : "network" },
- "fixed_ips": [{
- "subnet_id": { "Ref" : "subnet" },
- "ip_address": "10.0.3.21"
- }]
- }
- },
- "port2": {
- "Type": "OS::Neutron::Port",
- "Properties": {
- "name": "port2",
- "network_id": { "Ref" : "network" }
- }
- },
- "router": {
- "Type": "OS::Neutron::Router"
- },
- "router_interface": {
- "Type": "OS::Neutron::RouterInterface",
- "Properties": {
- "router_id": { "Ref" : "router" },
- "subnet_id": { "Ref" : "subnet" }
- }
- },
- "gateway": {
- "Type": "OS::Neutron::RouterGateway",
- "Properties": {
- "router_id": { "Ref" : "router" },
- "network_id": { "Ref" : "network" }
- }
- }
- }
-}
-'''
-
-neutron_floating_template = '''
-{
- "AWSTemplateFormatVersion" : "2010-09-09",
- "Description" : "Template to test Neutron resources",
- "Parameters" : {},
- "Resources" : {
- "port_floating": {
- "Type": "OS::Neutron::Port",
- "Properties": {
- "network_id": "xyz1234",
- "fixed_ips": [{
- "subnet_id": "12.12.12.0",
- "ip_address": "10.0.0.10"
- }]
- }
- },
- "floating_ip": {
- "Type": "OS::Neutron::FloatingIP",
- "Properties": {
-        "floating_network_id": "abcd1234"
- }
- },
- "floating_ip_assoc": {
- "Type": "OS::Neutron::FloatingIPAssociation",
- "Properties": {
- "floatingip_id": { "Ref" : "floating_ip" },
- "port_id": { "Ref" : "port_floating" }
- }
- },
- "router": {
- "Type": "OS::Neutron::Router"
- },
- "gateway": {
- "Type": "OS::Neutron::RouterGateway",
- "Properties": {
- "router_id": { "Ref" : "router" },
- "network_id": "abcd1234"
- }
- }
- }
-}
-'''
-
-
-class NeutronTest(HeatTestCase):
-
- def test_validate_properties(self):
- vs = {'router:external': True}
- data = {'admin_state_up': False,
- 'value_specs': vs}
- p = properties.Properties(net.Net.properties_schema, data)
- self.assertEqual(None, qr.validate_properties(p))
-
- vs['shared'] = True
- self.assertEqual('shared not allowed in value_specs',
- qr.validate_properties(p))
- vs.pop('shared')
-
- vs['name'] = 'foo'
- self.assertEqual('name not allowed in value_specs',
- qr.validate_properties(p))
- vs.pop('name')
-
- vs['tenant_id'] = '1234'
- self.assertEqual('tenant_id not allowed in value_specs',
- qr.validate_properties(p))
- vs.pop('tenant_id')
-
- vs['foo'] = '1234'
- self.assertEqual(None, qr.validate_properties(p))
-
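The `value_specs` validation exercised above rejects any key that duplicates a top-level property. A hypothetical standalone version of that check (not Heat's real `NeutronResource.validate_properties`, which works on a `Properties` object):

```python
# Hypothetical sketch of the value_specs check: certain property
# names must be set as top-level properties, not smuggled in via the
# value_specs free-form dict. Returns an error string, or None when
# the specs are acceptable.

BANNED_IN_VALUE_SPECS = ('shared', 'name', 'tenant_id')


def validate_value_specs(properties):
    specs = properties.get('value_specs', {})
    for key in BANNED_IN_VALUE_SPECS:
        if key in specs:
            return '%s not allowed in value_specs' % key
    return None
```

Unknown keys such as `router:external` or `foo` pass through untouched, which is exactly what the first and last assertions in `test_validate_properties` check.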
- def test_prepare_properties(self):
- data = {'admin_state_up': False,
- 'value_specs': {'router:external': True}}
- p = properties.Properties(net.Net.properties_schema, data)
- props = qr.prepare_properties(p, 'resource_name')
- self.assertEqual({'name': 'resource_name',
- 'router:external': True,
- 'admin_state_up': False}, props)
-
- def test_is_built(self):
- self.assertTrue(qr.is_built({
- 'name': 'the_net',
- 'status': 'ACTIVE'
- }))
- self.assertTrue(qr.is_built({
- 'name': 'the_net',
- 'status': 'DOWN'
- }))
- self.assertFalse(qr.is_built({
- 'name': 'the_net',
- 'status': 'BUILD'
- }))
- self.assertRaises(exception.Error, qr.is_built, {
- 'name': 'the_net',
- 'status': 'FROBULATING'
- })
-
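The `is_built` semantics documented by the assertions above can be sketched as a hypothetical standalone function (Heat's real implementation raises `heat.common.exception.Error` instead of the local exception used here):

```python
# Hypothetical sketch of the status check test_is_built exercises:
# ACTIVE and DOWN both count as built, BUILD means still in
# progress, and any unrecognized status is an error.

class ResourceStatusError(Exception):
    """Raised for a status the check does not recognize."""


def is_built(attributes):
    status = attributes['status']
    if status in ('ACTIVE', 'DOWN'):
        return True
    if status == 'BUILD':
        return False
    raise ResourceStatusError('Unknown status: %s' % status)
```

Treating `DOWN` as built matters because a Neutron network with `admin_state_up: false` reports `DOWN` while being perfectly healthy.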
-
-@skipIf(neutronclient is None, 'neutronclient unavailable')
-class NeutronNetTest(HeatTestCase):
-
- def setUp(self):
- super(NeutronNetTest, self).setUp()
- self.m.StubOutWithMock(neutronclient.Client, 'create_network')
- self.m.StubOutWithMock(neutronclient.Client, 'delete_network')
- self.m.StubOutWithMock(neutronclient.Client, 'show_network')
- self.m.StubOutWithMock(clients.OpenStackClients, 'keystone')
- utils.setup_dummy_db()
-
- def create_net(self, t, stack, resource_name):
- rsrc = net.Net('test_net', t['Resources'][resource_name], stack)
- scheduler.TaskRunner(rsrc.create)()
- self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state)
- return rsrc
-
- def test_net(self):
- clients.OpenStackClients.keystone().AndReturn(
- fakes.FakeKeystoneClient())
- neutronclient.Client.create_network({
- 'network': {
- 'name': u'the_network',
- 'admin_state_up': True,
- 'tenant_id': 'c1210485b2424d48804aad5d39c61b8f'}
- }).AndReturn({"network": {
- "status": "BUILD",
- "subnets": [],
- "name": "name",
- "admin_state_up": False,
- "shared": False,
- "tenant_id": "c1210485b2424d48804aad5d39c61b8f",
- "id": "fc68ea2c-b60b-4b4f-bd82-94ec81110766"
- }})
-
- neutronclient.Client.show_network(
- 'fc68ea2c-b60b-4b4f-bd82-94ec81110766'
- ).AndReturn({"network": {
- "status": "BUILD",
- "subnets": [],
- "name": "name",
- "admin_state_up": False,
- "shared": False,
- "tenant_id": "c1210485b2424d48804aad5d39c61b8f",
- "id": "fc68ea2c-b60b-4b4f-bd82-94ec81110766"
- }})
-
- neutronclient.Client.show_network(
- 'fc68ea2c-b60b-4b4f-bd82-94ec81110766'
- ).AndReturn({"network": {
- "status": "ACTIVE",
- "subnets": [],
- "name": "name",
- "admin_state_up": False,
- "shared": False,
- "tenant_id": "c1210485b2424d48804aad5d39c61b8f",
- "id": "fc68ea2c-b60b-4b4f-bd82-94ec81110766"
- }})
-
- neutronclient.Client.show_network(
- 'fc68ea2c-b60b-4b4f-bd82-94ec81110766'
- ).AndRaise(qe.NeutronClientException(status_code=404))
-
- neutronclient.Client.show_network(
- 'fc68ea2c-b60b-4b4f-bd82-94ec81110766'
- ).AndReturn({"network": {
- "status": "ACTIVE",
- "subnets": [],
- "name": "name",
- "admin_state_up": False,
- "shared": False,
- "tenant_id": "c1210485b2424d48804aad5d39c61b8f",
- "id": "fc68ea2c-b60b-4b4f-bd82-94ec81110766"
- }})
-
- neutronclient.Client.show_network(
- 'fc68ea2c-b60b-4b4f-bd82-94ec81110766'
- ).AndReturn({"network": {
- "status": "ACTIVE",
- "subnets": [],
- "name": "name",
- "admin_state_up": False,
- "shared": False,
- "tenant_id": "c1210485b2424d48804aad5d39c61b8f",
- "id": "fc68ea2c-b60b-4b4f-bd82-94ec81110766"
- }})
-
- neutronclient.Client.delete_network(
- 'fc68ea2c-b60b-4b4f-bd82-94ec81110766'
- ).AndReturn(None)
-
- neutronclient.Client.show_network(
- 'fc68ea2c-b60b-4b4f-bd82-94ec81110766'
- ).AndRaise(qe.NeutronClientException(status_code=404))
-
- neutronclient.Client.delete_network(
- 'fc68ea2c-b60b-4b4f-bd82-94ec81110766'
- ).AndRaise(qe.NeutronClientException(status_code=404))
-
- self.m.ReplayAll()
- t = template_format.parse(neutron_template)
- stack = utils.parse_stack(t)
- rsrc = self.create_net(t, stack, 'network')
-
- # assert the implicit dependency between the gateway and the interface
- deps = stack.dependencies[stack['router_interface']]
- self.assertIn(stack['gateway'], deps)
-
- # assert the implicit dependency between the gateway and the subnet
- deps = stack.dependencies[stack['subnet']]
- self.assertIn(stack['gateway'], deps)
-
- rsrc.validate()
-
- ref_id = rsrc.FnGetRefId()
- self.assertEqual('fc68ea2c-b60b-4b4f-bd82-94ec81110766', ref_id)
-
- self.assertEqual(None, rsrc.FnGetAtt('status'))
- self.assertEqual('ACTIVE', rsrc.FnGetAtt('status'))
-        self.assertRaises(exception.InvalidTemplateAttribute,
-                          rsrc.FnGetAtt, 'Foo')
-
- self.assertEqual('fc68ea2c-b60b-4b4f-bd82-94ec81110766',
- rsrc.FnGetAtt('id'))
-
- self.assertRaises(resource.UpdateReplace,
- rsrc.handle_update, {}, {}, {})
-
- scheduler.TaskRunner(rsrc.delete)()
- rsrc.state_set(rsrc.CREATE, rsrc.COMPLETE, 'to delete again')
- scheduler.TaskRunner(rsrc.delete)()
- self.m.VerifyAll()
-
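`test_net` above follows mox's record/replay/verify discipline: stub the client methods in `setUp`, script each expected call with `AndReturn`/`AndRaise` (here, `show_network` polling through BUILD, then ACTIVE, then a 404), call `ReplayAll`, exercise the resource, then `VerifyAll`. A rough stdlib analogue of that scripted-polling flow, using `unittest.mock` rather than mox (the `FakeNeutronClient` class is hypothetical):

```python
from unittest import mock


# Hypothetical client standing in for neutronclient; the test always
# replaces show_network with a scripted mock, mirroring mox's
# StubOutWithMock + AndReturn sequence.
class FakeNeutronClient(object):
    def show_network(self, network_id):
        raise NotImplementedError('always stubbed in tests')


client = FakeNeutronClient()
with mock.patch.object(FakeNeutronClient, 'show_network') as show:
    # First poll reports BUILD, second reports ACTIVE -- the same
    # shape test_net scripts with consecutive AndReturn calls.
    show.side_effect = [
        {'network': {'status': 'BUILD'}},
        {'network': {'status': 'ACTIVE'}},
    ]
    statuses = [client.show_network('fc68ea2c')['network']['status']
                for _ in range(2)]

assert statuses == ['BUILD', 'ACTIVE']
show.assert_called_with('fc68ea2c')
```

`side_effect` plays the role of mox's ordered expectations, and the final `assert_called_with` is the verification step that `VerifyAll` performs for every recorded call.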
-
-@skipIf(neutronclient is None, 'neutronclient unavailable')
-class NeutronSubnetTest(HeatTestCase):
-
- def setUp(self):
- super(NeutronSubnetTest, self).setUp()
- self.m.StubOutWithMock(neutronclient.Client, 'create_subnet')
- self.m.StubOutWithMock(neutronclient.Client, 'delete_subnet')
- self.m.StubOutWithMock(neutronclient.Client, 'show_subnet')
- self.m.StubOutWithMock(clients.OpenStackClients, 'keystone')
- utils.setup_dummy_db()
-
- def create_subnet(self, t, stack, resource_name):
- rsrc = subnet.Subnet('test_subnet', t['Resources'][resource_name],
- stack)
- scheduler.TaskRunner(rsrc.create)()
- self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state)
- return rsrc
-
- def test_subnet(self):
-
- clients.OpenStackClients.keystone().AndReturn(
- fakes.FakeKeystoneClient())
- neutronclient.Client.create_subnet({
- 'subnet': {
- 'name': utils.PhysName('test_stack', 'test_subnet'),
- 'network_id': u'None',
- 'dns_nameservers': [u'8.8.8.8'],
- 'allocation_pools': [
- {'start': u'10.0.3.20', 'end': u'10.0.3.150'}],
- 'ip_version': 4,
- 'cidr': u'10.0.3.0/24',
- 'tenant_id': 'c1210485b2424d48804aad5d39c61b8f'
- }
- }).AndReturn({
- "subnet": {
- "allocation_pools": [
- {"start": "10.0.3.20", "end": "10.0.3.150"}],
- "cidr": "10.0.3.0/24",
- "dns_nameservers": ["8.8.8.8"],
- "enable_dhcp": True,
- "gateway_ip": "10.0.3.1",
- "id": "91e47a57-7508-46fe-afc9-fc454e8580e1",
- "ip_version": 4,
- "name": "name",
- "network_id": "fc68ea2c-b60b-4b4f-bd82-94ec81110766",
- "tenant_id": "c1210485b2424d48804aad5d39c61b8f"
- }
- })
- neutronclient.Client.show_subnet(
- '91e47a57-7508-46fe-afc9-fc454e8580e1').AndRaise(
- qe.NeutronClientException(status_code=404))
- sn = {
- "subnet": {
- "name": "name",
- "network_id": "fc68ea2c-b60b-4b4f-bd82-94ec81110766",
- "tenant_id": "c1210485b2424d48804aad5d39c61b8f",
- "allocation_pools": [
- {"start": "10.0.3.20", "end": "10.0.3.150"}],
- "gateway_ip": "10.0.3.1",
- "ip_version": 4,
- "cidr": "10.0.3.0/24",
- "dns_nameservers": ["8.8.8.8"],
- "id": "91e47a57-7508-46fe-afc9-fc454e8580e1",
- "enable_dhcp": True,
- }
- }
- neutronclient.Client.show_subnet(
- '91e47a57-7508-46fe-afc9-fc454e8580e1').AndReturn(sn)
- neutronclient.Client.show_subnet(
- '91e47a57-7508-46fe-afc9-fc454e8580e1').AndReturn(sn)
- neutronclient.Client.show_subnet(
- '91e47a57-7508-46fe-afc9-fc454e8580e1').AndReturn(sn)
-
- neutronclient.Client.delete_subnet(
- '91e47a57-7508-46fe-afc9-fc454e8580e1'
- ).AndReturn(None)
-
- neutronclient.Client.show_subnet(
- '91e47a57-7508-46fe-afc9-fc454e8580e1'
- ).AndRaise(qe.NeutronClientException(status_code=404))
-
- neutronclient.Client.delete_subnet(
- '91e47a57-7508-46fe-afc9-fc454e8580e1'
- ).AndRaise(qe.NeutronClientException(status_code=404))
-
- self.m.ReplayAll()
- t = template_format.parse(neutron_template)
- stack = utils.parse_stack(t)
- rsrc = self.create_subnet(t, stack, 'subnet')
-
- rsrc.validate()
-
- ref_id = rsrc.FnGetRefId()
- self.assertEqual('91e47a57-7508-46fe-afc9-fc454e8580e1', ref_id)
- self.assertEqual(None,
- rsrc.FnGetAtt('network_id'))
- self.assertEqual('fc68ea2c-b60b-4b4f-bd82-94ec81110766',
- rsrc.FnGetAtt('network_id'))
- self.assertEqual('8.8.8.8', rsrc.FnGetAtt('dns_nameservers')[0])
- self.assertEqual('91e47a57-7508-46fe-afc9-fc454e8580e1',
- rsrc.FnGetAtt('id'))
-
- # assert the dependency (implicit or explicit) between the ports
- # and the subnet
- self.assertIn(stack['port'], stack.dependencies[stack['subnet']])
- self.assertIn(stack['port2'], stack.dependencies[stack['subnet']])
-
- self.assertRaises(resource.UpdateReplace,
- rsrc.handle_update, {}, {}, {})
-
-        self.assertEqual(None, scheduler.TaskRunner(rsrc.delete)())
-        rsrc.state_set(rsrc.CREATE, rsrc.COMPLETE, 'to delete again')
-        self.assertEqual(None, scheduler.TaskRunner(rsrc.delete)())
- self.m.VerifyAll()
-
- def test_subnet_disable_dhcp(self):
-
- clients.OpenStackClients.keystone().AndReturn(
- fakes.FakeKeystoneClient())
- neutronclient.Client.create_subnet({
- 'subnet': {
- 'name': utils.PhysName('test_stack', 'test_subnet'),
- 'network_id': u'None',
- 'dns_nameservers': [u'8.8.8.8'],
- 'allocation_pools': [
- {'start': u'10.0.3.20', 'end': u'10.0.3.150'}],
- 'ip_version': 4,
- 'enable_dhcp': False,
- 'cidr': u'10.0.3.0/24',
- 'tenant_id': 'c1210485b2424d48804aad5d39c61b8f'
- }
- }).AndReturn({
- "subnet": {
- "allocation_pools": [
- {"start": "10.0.3.20", "end": "10.0.3.150"}],
- "cidr": "10.0.3.0/24",
- "dns_nameservers": ["8.8.8.8"],
- "enable_dhcp": False,
- "gateway_ip": "10.0.3.1",
- "id": "91e47a57-7508-46fe-afc9-fc454e8580e1",
- "ip_version": 4,
- "name": "name",
- "network_id": "fc68ea2c-b60b-4b4f-bd82-94ec81110766",
- "tenant_id": "c1210485b2424d48804aad5d39c61b8f"
- }
- })
-
- neutronclient.Client.show_subnet(
- '91e47a57-7508-46fe-afc9-fc454e8580e1').AndReturn({
- "subnet": {
- "name": "name",
- "network_id": "fc68ea2c-b60b-4b4f-bd82-94ec81110766",
- "tenant_id": "c1210485b2424d48804aad5d39c61b8f",
- "allocation_pools": [
- {"start": "10.0.3.20", "end": "10.0.3.150"}],
- "gateway_ip": "10.0.3.1",
- "ip_version": 4,
- "cidr": "10.0.3.0/24",
- "dns_nameservers": ["8.8.8.8"],
- "id": "91e47a57-7508-46fe-afc9-fc454e8580e1",
- "enable_dhcp": False,
- }
- })
-
- neutronclient.Client.delete_subnet(
- '91e47a57-7508-46fe-afc9-fc454e8580e1'
- ).AndReturn(None)
-
- neutronclient.Client.show_subnet(
- '91e47a57-7508-46fe-afc9-fc454e8580e1'
- ).AndRaise(qe.NeutronClientException(status_code=404))
-
- self.m.ReplayAll()
- t = template_format.parse(neutron_template)
- t['Resources']['subnet']['Properties']['enable_dhcp'] = 'False'
- stack = utils.parse_stack(t)
- rsrc = self.create_subnet(t, stack, 'subnet')
-
- rsrc.validate()
-
- ref_id = rsrc.FnGetRefId()
- self.assertEqual('91e47a57-7508-46fe-afc9-fc454e8580e1', ref_id)
- self.assertEqual(False, rsrc.FnGetAtt('enable_dhcp'))
- scheduler.TaskRunner(rsrc.delete)()
- self.m.VerifyAll()
-
-
-@skipIf(neutronclient is None, 'neutronclient unavailable')
-class NeutronRouterTest(HeatTestCase):
- @skipIf(router.neutronV20 is None, "Missing Neutron v2_0")
- def setUp(self):
- super(NeutronRouterTest, self).setUp()
- self.m.StubOutWithMock(neutronclient.Client, 'create_router')
- self.m.StubOutWithMock(neutronclient.Client, 'delete_router')
- self.m.StubOutWithMock(neutronclient.Client, 'show_router')
- self.m.StubOutWithMock(neutronclient.Client, 'add_interface_router')
- self.m.StubOutWithMock(neutronclient.Client, 'remove_interface_router')
- self.m.StubOutWithMock(neutronclient.Client, 'add_gateway_router')
- self.m.StubOutWithMock(neutronclient.Client, 'remove_gateway_router')
- self.m.StubOutWithMock(router.neutronV20,
- 'find_resourceid_by_name_or_id')
- self.m.StubOutWithMock(clients.OpenStackClients, 'keystone')
- utils.setup_dummy_db()
-
- def create_router(self, t, stack, resource_name):
- rsrc = router.Router('router', t['Resources'][resource_name], stack)
- scheduler.TaskRunner(rsrc.create)()
- self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state)
- return rsrc
-
-    def create_router_interface(self, t, stack, resource_name,
-                                properties=None):
-        t['Resources'][resource_name]['Properties'] = properties or {}
- rsrc = router.RouterInterface(
- 'router_interface',
- t['Resources'][resource_name],
- stack)
- scheduler.TaskRunner(rsrc.create)()
- self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state)
- return rsrc
-
-    def create_gateway_router(self, t, stack, resource_name,
-                              properties=None):
-        t['Resources'][resource_name]['Properties'] = properties or {}
- rsrc = router.RouterGateway(
- 'gateway',
- t['Resources'][resource_name],
- stack)
- scheduler.TaskRunner(rsrc.create)()
- self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state)
- return rsrc
-
- def test_router(self):
- clients.OpenStackClients.keystone().AndReturn(
- fakes.FakeKeystoneClient())
- neutronclient.Client.create_router({
- 'router': {
- 'name': utils.PhysName('test_stack', 'router'),
- 'admin_state_up': True,
- }
- }).AndReturn({
- "router": {
- "status": "BUILD",
- "external_gateway_info": None,
- "name": utils.PhysName('test_stack', 'router'),
- "admin_state_up": True,
- "tenant_id": "3e21026f2dc94372b105808c0e721661",
- "id": "3e46229d-8fce-4733-819a-b5fe630550f8"
- }
- })
- neutronclient.Client.show_router(
- '3e46229d-8fce-4733-819a-b5fe630550f8').AndReturn({
- "router": {
- "status": "BUILD",
- "external_gateway_info": None,
- "name": utils.PhysName('test_stack', 'router'),
- "admin_state_up": True,
- "tenant_id": "3e21026f2dc94372b105808c0e721661",
- "routes": [],
- "id": "3e46229d-8fce-4733-819a-b5fe630550f8"
- }
- })
- neutronclient.Client.show_router(
- '3e46229d-8fce-4733-819a-b5fe630550f8').AndReturn({
- "router": {
- "status": "ACTIVE",
- "external_gateway_info": None,
- "name": utils.PhysName('test_stack', 'router'),
- "admin_state_up": True,
- "tenant_id": "3e21026f2dc94372b105808c0e721661",
- "routes": [],
- "id": "3e46229d-8fce-4733-819a-b5fe630550f8"
- }
- })
-
- neutronclient.Client.show_router(
- '3e46229d-8fce-4733-819a-b5fe630550f8').AndRaise(
- qe.NeutronClientException(status_code=404))
- neutronclient.Client.show_router(
- '3e46229d-8fce-4733-819a-b5fe630550f8').AndReturn({
- "router": {
- "status": "ACTIVE",
- "external_gateway_info": None,
- "name": utils.PhysName('test_stack', 'router'),
- "admin_state_up": True,
- "tenant_id": "3e21026f2dc94372b105808c0e721661",
- "routes": [],
- "id": "3e46229d-8fce-4733-819a-b5fe630550f8"
- }
- })
- neutronclient.Client.show_router(
- '3e46229d-8fce-4733-819a-b5fe630550f8').AndReturn({
- "router": {
- "status": "ACTIVE",
- "external_gateway_info": None,
- "name": utils.PhysName('test_stack', 'router'),
- "admin_state_up": True,
- "tenant_id": "3e21026f2dc94372b105808c0e721661",
- "routes": [],
- "id": "3e46229d-8fce-4733-819a-b5fe630550f8"
- }
- })
-
- neutronclient.Client.delete_router(
- '3e46229d-8fce-4733-819a-b5fe630550f8'
- ).AndReturn(None)
-
- neutronclient.Client.show_router(
- '3e46229d-8fce-4733-819a-b5fe630550f8'
- ).AndRaise(qe.NeutronClientException(status_code=404))
-
- neutronclient.Client.delete_router(
- '3e46229d-8fce-4733-819a-b5fe630550f8'
- ).AndRaise(qe.NeutronClientException(status_code=404))
-
- self.m.ReplayAll()
- t = template_format.parse(neutron_template)
- stack = utils.parse_stack(t)
- rsrc = self.create_router(t, stack, 'router')
-
- rsrc.validate()
-
- ref_id = rsrc.FnGetRefId()
- self.assertEqual('3e46229d-8fce-4733-819a-b5fe630550f8', ref_id)
- self.assertEqual(None,
- rsrc.FnGetAtt('tenant_id'))
- self.assertEqual('3e21026f2dc94372b105808c0e721661',
- rsrc.FnGetAtt('tenant_id'))
- self.assertEqual('3e46229d-8fce-4733-819a-b5fe630550f8',
- rsrc.FnGetAtt('id'))
-
- self.assertRaises(resource.UpdateReplace,
- rsrc.handle_update, {}, {}, {})
-
-        self.assertEqual(None, scheduler.TaskRunner(rsrc.delete)())
-        rsrc.state_set(rsrc.CREATE, rsrc.COMPLETE, 'to delete again')
-        self.assertEqual(None, scheduler.TaskRunner(rsrc.delete)())
- self.m.VerifyAll()
-
- def test_router_interface(self):
- clients.OpenStackClients.keystone().AndReturn(
- fakes.FakeKeystoneClient())
- neutronclient.Client.add_interface_router(
- '3e46229d-8fce-4733-819a-b5fe630550f8',
- {'subnet_id': '91e47a57-7508-46fe-afc9-fc454e8580e1'}
- ).AndReturn(None)
- neutronclient.Client.remove_interface_router(
- '3e46229d-8fce-4733-819a-b5fe630550f8',
- {'subnet_id': '91e47a57-7508-46fe-afc9-fc454e8580e1'}
- ).AndReturn(None)
- neutronclient.Client.remove_interface_router(
- '3e46229d-8fce-4733-819a-b5fe630550f8',
- {'subnet_id': '91e47a57-7508-46fe-afc9-fc454e8580e1'}
- ).AndRaise(qe.NeutronClientException(status_code=404))
- self.m.ReplayAll()
- t = template_format.parse(neutron_template)
- stack = utils.parse_stack(t)
-
- rsrc = self.create_router_interface(
- t, stack, 'router_interface', properties={
- 'router_id': '3e46229d-8fce-4733-819a-b5fe630550f8',
- 'subnet_id': '91e47a57-7508-46fe-afc9-fc454e8580e1'
- })
-
- scheduler.TaskRunner(rsrc.delete)()
- rsrc.state_set(rsrc.CREATE, rsrc.COMPLETE, 'to delete again')
- scheduler.TaskRunner(rsrc.delete)()
- self.m.VerifyAll()
-
- def test_gateway_router(self):
- clients.OpenStackClients.keystone().AndReturn(
- fakes.FakeKeystoneClient())
- router.neutronV20.find_resourceid_by_name_or_id(
- mox.IsA(neutronclient.Client),
- 'network',
- 'fc68ea2c-b60b-4b4f-bd82-94ec81110766'
- ).AndReturn('fc68ea2c-b60b-4b4f-bd82-94ec81110766')
- neutronclient.Client.add_gateway_router(
- '3e46229d-8fce-4733-819a-b5fe630550f8',
- {'network_id': 'fc68ea2c-b60b-4b4f-bd82-94ec81110766'}
- ).AndReturn(None)
- neutronclient.Client.remove_gateway_router(
- '3e46229d-8fce-4733-819a-b5fe630550f8'
- ).AndReturn(None)
- neutronclient.Client.remove_gateway_router(
- '3e46229d-8fce-4733-819a-b5fe630550f8'
- ).AndRaise(qe.NeutronClientException(status_code=404))
- self.m.ReplayAll()
- t = template_format.parse(neutron_template)
- stack = utils.parse_stack(t)
-
- rsrc = self.create_gateway_router(
- t, stack, 'gateway', properties={
- 'router_id': '3e46229d-8fce-4733-819a-b5fe630550f8',
- 'network_id': 'fc68ea2c-b60b-4b4f-bd82-94ec81110766'
- })
-
- scheduler.TaskRunner(rsrc.delete)()
- rsrc.state_set(rsrc.CREATE, rsrc.COMPLETE, 'to delete again')
- scheduler.TaskRunner(rsrc.delete)()
- self.m.VerifyAll()
-
-
-@skipIf(neutronclient is None, 'neutronclient unavailable')
-class NeutronFloatingIPTest(HeatTestCase):
- @skipIf(net.clients.neutronclient is None, "Missing Neutron Client")
- def setUp(self):
- super(NeutronFloatingIPTest, self).setUp()
- self.m.StubOutWithMock(neutronclient.Client, 'create_floatingip')
- self.m.StubOutWithMock(neutronclient.Client, 'delete_floatingip')
- self.m.StubOutWithMock(neutronclient.Client, 'show_floatingip')
- self.m.StubOutWithMock(neutronclient.Client, 'update_floatingip')
- self.m.StubOutWithMock(neutronclient.Client, 'create_port')
- self.m.StubOutWithMock(neutronclient.Client, 'delete_port')
- self.m.StubOutWithMock(neutronclient.Client, 'show_port')
- self.m.StubOutWithMock(clients.OpenStackClients, 'keystone')
- utils.setup_dummy_db()
-
- def test_floating_ip(self):
-
- clients.OpenStackClients.keystone().AndReturn(
- fakes.FakeKeystoneClient())
- neutronclient.Client.create_floatingip({
- 'floatingip': {'floating_network_id': u'abcd1234'}
- }).AndReturn({'floatingip': {
- "status": "ACTIVE",
- "id": "fc68ea2c-b60b-4b4f-bd82-94ec81110766"
- }})
-
- neutronclient.Client.show_floatingip(
- 'fc68ea2c-b60b-4b4f-bd82-94ec81110766'
- ).AndRaise(qe.NeutronClientException(status_code=404))
- neutronclient.Client.show_floatingip(
- 'fc68ea2c-b60b-4b4f-bd82-94ec81110766'
- ).MultipleTimes().AndReturn({'floatingip': {
- "status": "ACTIVE",
- "id": "fc68ea2c-b60b-4b4f-bd82-94ec81110766"
- }})
-
- neutronclient.Client.delete_floatingip(
- 'fc68ea2c-b60b-4b4f-bd82-94ec81110766').AndReturn(None)
- neutronclient.Client.delete_floatingip(
- 'fc68ea2c-b60b-4b4f-bd82-94ec81110766').AndRaise(
- qe.NeutronClientException(status_code=404))
- self.m.ReplayAll()
-
- t = template_format.parse(neutron_floating_template)
- stack = utils.parse_stack(t)
-
- # assert the implicit dependency between the floating_ip
- # and the gateway
- deps = stack.dependencies[stack['gateway']]
- self.assertIn(stack['floating_ip'], deps)
-
- fip = stack['floating_ip']
- scheduler.TaskRunner(fip.create)()
- self.assertEqual((fip.CREATE, fip.COMPLETE), fip.state)
- fip.validate()
-
- fip_id = fip.FnGetRefId()
- self.assertEqual('fc68ea2c-b60b-4b4f-bd82-94ec81110766', fip_id)
-
- self.assertEqual(None, fip.FnGetAtt('status'))
- self.assertEqual('ACTIVE', fip.FnGetAtt('status'))
-        self.assertRaises(exception.InvalidTemplateAttribute,
-                          fip.FnGetAtt, 'Foo')
-
- self.assertEqual('fc68ea2c-b60b-4b4f-bd82-94ec81110766',
- fip.FnGetAtt('id'))
- self.assertRaises(resource.UpdateReplace,
- fip.handle_update, {}, {}, {})
- scheduler.TaskRunner(fip.delete)()
- fip.state_set(fip.CREATE, fip.COMPLETE, 'to delete again')
- scheduler.TaskRunner(fip.delete)()
-
- self.m.VerifyAll()
-
- def test_port(self):
-
- clients.OpenStackClients.keystone().AndReturn(
- fakes.FakeKeystoneClient())
- neutronclient.Client.create_port({'port': {
- 'network_id': u'xyz1234',
- 'fixed_ips': [
- {'subnet_id': u'12.12.12.0', 'ip_address': u'10.0.0.10'}
- ],
- 'name': utils.PhysName('test_stack', 'port_floating'),
- 'admin_state_up': True}}
- ).AndReturn({'port': {
- "status": "BUILD",
- "id": "fc68ea2c-b60b-4b4f-bd82-94ec81110766"
- }})
- neutronclient.Client.show_port(
- 'fc68ea2c-b60b-4b4f-bd82-94ec81110766'
- ).AndReturn({'port': {
- "status": "BUILD",
- "id": "fc68ea2c-b60b-4b4f-bd82-94ec81110766"
- }})
- neutronclient.Client.show_port(
- 'fc68ea2c-b60b-4b4f-bd82-94ec81110766'
- ).AndReturn({'port': {
- "status": "ACTIVE",
- "id": "fc68ea2c-b60b-4b4f-bd82-94ec81110766"
- }})
- neutronclient.Client.show_port(
- 'fc68ea2c-b60b-4b4f-bd82-94ec81110766'
- ).AndRaise(qe.NeutronClientException(status_code=404))
- neutronclient.Client.show_port(
- 'fc68ea2c-b60b-4b4f-bd82-94ec81110766'
- ).MultipleTimes().AndReturn({'port': {
- "status": "ACTIVE",
- "id": "fc68ea2c-b60b-4b4f-bd82-94ec81110766"
- }})
-
- self.m.ReplayAll()
-
- t = template_format.parse(neutron_floating_template)
- stack = utils.parse_stack(t)
-
- p = stack['port_floating']
- scheduler.TaskRunner(p.create)()
- self.assertEqual((p.CREATE, p.COMPLETE), p.state)
- p.validate()
-
- port_id = p.FnGetRefId()
- self.assertEqual('fc68ea2c-b60b-4b4f-bd82-94ec81110766', port_id)
-
- self.assertEqual(None, p.FnGetAtt('status'))
- self.assertEqual('ACTIVE', p.FnGetAtt('status'))
-        self.assertRaises(exception.InvalidTemplateAttribute,
-                          p.FnGetAtt, 'Foo')
-
- self.assertEqual('fc68ea2c-b60b-4b4f-bd82-94ec81110766',
- p.FnGetAtt('id'))
-
- self.assertRaises(resource.UpdateReplace,
- p.handle_update, {}, {}, {})
-
- self.m.VerifyAll()
-
- def test_floatip_port(self):
-
- clients.OpenStackClients.keystone().AndReturn(
- fakes.FakeKeystoneClient())
- neutronclient.Client.create_floatingip({
- 'floatingip': {'floating_network_id': u'abcd1234'}
- }).AndReturn({'floatingip': {
- "status": "ACTIVE",
- "id": "fc68ea2c-b60b-4b4f-bd82-94ec81110766"
- }})
-
- neutronclient.Client.create_port({'port': {
- 'network_id': u'xyz1234',
- 'fixed_ips': [
- {'subnet_id': u'12.12.12.0', 'ip_address': u'10.0.0.10'}
- ],
- 'name': utils.PhysName('test_stack', 'port_floating'),
- 'admin_state_up': True}}
- ).AndReturn({'port': {
- "status": "BUILD",
- "id": "fc68ea2c-b60b-4b4f-bd82-94ec81110766"
- }})
- neutronclient.Client.show_port(
- 'fc68ea2c-b60b-4b4f-bd82-94ec81110766'
- ).AndReturn({'port': {
- "status": "ACTIVE",
- "id": "fc68ea2c-b60b-4b4f-bd82-94ec81110766"
- }})
- neutronclient.Client.update_floatingip(
- 'fc68ea2c-b60b-4b4f-bd82-94ec81110766',
- {
- 'floatingip': {
- 'port_id': u'fc68ea2c-b60b-4b4f-bd82-94ec81110766'}}
- ).AndReturn({'floatingip': {
- "status": "ACTIVE",
- "id": "fc68ea2c-b60b-4b4f-bd82-94ec81110766"
- }})
-
- neutronclient.Client.update_floatingip(
- 'fc68ea2c-b60b-4b4f-bd82-94ec81110766',
- {'floatingip': {
- 'port_id': None
- }}).AndReturn(None)
-
- neutronclient.Client.delete_port(
- 'fc68ea2c-b60b-4b4f-bd82-94ec81110766'
- ).AndReturn(None)
-
- neutronclient.Client.show_port(
- 'fc68ea2c-b60b-4b4f-bd82-94ec81110766'
- ).AndRaise(qe.NeutronClientException(status_code=404))
-
- neutronclient.Client.delete_floatingip(
- 'fc68ea2c-b60b-4b4f-bd82-94ec81110766'
- ).AndReturn(None)
-
- neutronclient.Client.update_floatingip(
- 'fc68ea2c-b60b-4b4f-bd82-94ec81110766',
- {'floatingip': {
- 'port_id': None
- }}).AndRaise(qe.NeutronClientException(status_code=404))
-
- neutronclient.Client.delete_port(
- 'fc68ea2c-b60b-4b4f-bd82-94ec81110766'
- ).AndRaise(qe.NeutronClientException(status_code=404))
-
- neutronclient.Client.delete_floatingip(
- 'fc68ea2c-b60b-4b4f-bd82-94ec81110766'
- ).AndRaise(qe.NeutronClientException(status_code=404))
-
- self.m.ReplayAll()
-
- t = template_format.parse(neutron_floating_template)
- stack = utils.parse_stack(t)
-
- fip = stack['floating_ip']
- scheduler.TaskRunner(fip.create)()
- self.assertEqual((fip.CREATE, fip.COMPLETE), fip.state)
-
- p = stack['port_floating']
- scheduler.TaskRunner(p.create)()
- self.assertEqual((p.CREATE, p.COMPLETE), p.state)
-
- fipa = stack['floating_ip_assoc']
- scheduler.TaskRunner(fipa.create)()
- self.assertEqual((fipa.CREATE, fipa.COMPLETE), fipa.state)
-
- fipa.validate()
-
- fipa_id = fipa.FnGetRefId()
- fip_id = fip.FnGetRefId()
- port_id = p.FnGetRefId()
- self.assertEqual('%s:%s' % (fip_id, port_id), fipa_id)
- self.assertRaises(resource.UpdateReplace,
- fipa.handle_update, {}, {}, {})
-
- scheduler.TaskRunner(fipa.delete)()
- scheduler.TaskRunner(p.delete)()
- scheduler.TaskRunner(fip.delete)()
-
- fipa.state_set(fipa.CREATE, fipa.COMPLETE, 'to delete again')
- fip.state_set(fip.CREATE, fip.COMPLETE, 'to delete again')
- p.state_set(p.CREATE, p.COMPLETE, 'to delete again')
-
- scheduler.TaskRunner(fipa.delete)()
- self.assertEqual(None, scheduler.TaskRunner(p.delete)())
- scheduler.TaskRunner(fip.delete)()
-
- self.m.VerifyAll()
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import copy
-
-from testtools import skipIf
-
-from heat.common import exception
-from heat.common import template_format
-from heat.engine import clients
-from heat.engine import scheduler
-from heat.engine.resources.neutron import firewall
-from heat.openstack.common.importutils import try_import
-from heat.tests import fakes
-from heat.tests import utils
-from heat.tests.common import HeatTestCase
-
-neutronclient = try_import('neutronclient.v2_0.client')
-
-firewall_template = '''
-{
- "AWSTemplateFormatVersion" : "2010-09-09",
- "Description" : "Template to test neutron firewall resource",
- "Parameters" : {},
- "Resources" : {
- "firewall": {
- "Type": "OS::Neutron::Firewall",
- "Properties": {
- "name": "test-firewall",
- "firewall_policy_id": "policy-id",
- "admin_state_up": True,
- }
- }
- }
-}
-'''
-
-firewall_policy_template = '''
-{
- "AWSTemplateFormatVersion" : "2010-09-09",
- "Description" : "Template to test neutron firewall policy resource",
- "Parameters" : {},
- "Resources" : {
- "firewall_policy": {
- "Type": "OS::Neutron::FirewallPolicy",
- "Properties": {
- "name": "test-firewall-policy",
- "shared": True,
- "audited": True,
- "firewall_rules": ['rule-id-1', 'rule-id-2'],
- }
- }
- }
-}
-'''
-
-firewall_rule_template = '''
-{
- "AWSTemplateFormatVersion" : "2010-09-09",
- "Description" : "Template to test neutron firewall rule resource",
- "Parameters" : {},
- "Resources" : {
- "firewall_rule": {
- "Type": "OS::Neutron::FirewallRule",
- "Properties": {
- "name": "test-firewall-rule",
- "shared": True,
- "protocol": "tcp",
- "action": "allow",
- "enabled": True,
- "ip_version": "4",
- }
- }
- }
-}
-'''
-
-
-@skipIf(neutronclient is None, 'neutronclient unavailable')
-class FirewallTest(HeatTestCase):
-
- def setUp(self):
- super(FirewallTest, self).setUp()
- self.m.StubOutWithMock(neutronclient.Client, 'create_firewall')
- self.m.StubOutWithMock(neutronclient.Client, 'delete_firewall')
- self.m.StubOutWithMock(neutronclient.Client, 'show_firewall')
- self.m.StubOutWithMock(neutronclient.Client, 'update_firewall')
- self.m.StubOutWithMock(clients.OpenStackClients, 'keystone')
- utils.setup_dummy_db()
-
- def create_firewall(self):
- clients.OpenStackClients.keystone().AndReturn(
- fakes.FakeKeystoneClient())
- neutronclient.Client.create_firewall({
- 'firewall': {
- 'name': 'test-firewall', 'admin_state_up': True,
- 'firewall_policy_id': 'policy-id'}}
- ).AndReturn({'firewall': {'id': '5678'}})
-
- snippet = template_format.parse(firewall_template)
- stack = utils.parse_stack(snippet)
- return firewall.Firewall(
- 'firewall', snippet['Resources']['firewall'], stack)
-
- def test_create(self):
- rsrc = self.create_firewall()
- self.m.ReplayAll()
- scheduler.TaskRunner(rsrc.create)()
- self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state)
- self.m.VerifyAll()
-
- def test_create_failed(self):
- clients.OpenStackClients.keystone().AndReturn(
- fakes.FakeKeystoneClient())
- neutronclient.Client.create_firewall({
- 'firewall': {
- 'name': 'test-firewall', 'admin_state_up': True,
- 'firewall_policy_id': 'policy-id'}}
- ).AndRaise(firewall.NeutronClientException())
- self.m.ReplayAll()
-
- snippet = template_format.parse(firewall_template)
- stack = utils.parse_stack(snippet)
- rsrc = firewall.Firewall(
- 'firewall', snippet['Resources']['firewall'], stack)
-
- error = self.assertRaises(exception.ResourceFailure,
- scheduler.TaskRunner(rsrc.create))
- self.assertEqual(
- 'NeutronClientException: An unknown exception occurred.',
- str(error))
- self.assertEqual((rsrc.CREATE, rsrc.FAILED), rsrc.state)
- self.m.VerifyAll()
-
- def test_delete(self):
- neutronclient.Client.delete_firewall('5678')
- neutronclient.Client.show_firewall('5678').AndRaise(
- firewall.NeutronClientException(status_code=404))
-
- rsrc = self.create_firewall()
- self.m.ReplayAll()
- scheduler.TaskRunner(rsrc.create)()
- scheduler.TaskRunner(rsrc.delete)()
- self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state)
- self.m.VerifyAll()
-
- def test_delete_already_gone(self):
- neutronclient.Client.delete_firewall('5678').AndRaise(
- firewall.NeutronClientException(status_code=404))
-
- rsrc = self.create_firewall()
- self.m.ReplayAll()
- scheduler.TaskRunner(rsrc.create)()
- scheduler.TaskRunner(rsrc.delete)()
- self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state)
- self.m.VerifyAll()
-
- def test_delete_failed(self):
- neutronclient.Client.delete_firewall('5678').AndRaise(
- firewall.NeutronClientException(status_code=400))
-
- rsrc = self.create_firewall()
- self.m.ReplayAll()
- scheduler.TaskRunner(rsrc.create)()
- error = self.assertRaises(exception.ResourceFailure,
- scheduler.TaskRunner(rsrc.delete))
- self.assertEqual(
- 'NeutronClientException: An unknown exception occurred.',
- str(error))
- self.assertEqual((rsrc.DELETE, rsrc.FAILED), rsrc.state)
- self.m.VerifyAll()
-
- def test_attribute(self):
- rsrc = self.create_firewall()
- neutronclient.Client.show_firewall('5678').MultipleTimes(
- ).AndReturn(
- {'firewall': {'admin_state_up': True,
- 'firewall_policy_id': 'policy-id'}})
- self.m.ReplayAll()
- scheduler.TaskRunner(rsrc.create)()
- self.assertEqual(True, rsrc.FnGetAtt('admin_state_up'))
- self.assertEqual('policy-id', rsrc.FnGetAtt('firewall_policy_id'))
- self.m.VerifyAll()
-
- def test_attribute_failed(self):
- rsrc = self.create_firewall()
- self.m.ReplayAll()
- scheduler.TaskRunner(rsrc.create)()
- error = self.assertRaises(exception.InvalidTemplateAttribute,
- rsrc.FnGetAtt, 'subnet_id')
- self.assertEqual(
- 'The Referenced Attribute (firewall subnet_id) is '
- 'incorrect.', str(error))
- self.m.VerifyAll()
-
- def test_update(self):
- rsrc = self.create_firewall()
- neutronclient.Client.update_firewall(
- '5678', {'firewall': {'admin_state_up': False}})
- self.m.ReplayAll()
- scheduler.TaskRunner(rsrc.create)()
-
- update_template = copy.deepcopy(rsrc.t)
- update_template['Properties']['admin_state_up'] = False
- scheduler.TaskRunner(rsrc.update, update_template)()
-
- self.m.VerifyAll()
-
-
-@skipIf(neutronclient is None, 'neutronclient unavailable')
-class FirewallPolicyTest(HeatTestCase):
-
- def setUp(self):
- super(FirewallPolicyTest, self).setUp()
- self.m.StubOutWithMock(neutronclient.Client, 'create_firewall_policy')
- self.m.StubOutWithMock(neutronclient.Client, 'delete_firewall_policy')
- self.m.StubOutWithMock(neutronclient.Client, 'show_firewall_policy')
- self.m.StubOutWithMock(neutronclient.Client, 'update_firewall_policy')
- self.m.StubOutWithMock(clients.OpenStackClients, 'keystone')
- utils.setup_dummy_db()
-
- def create_firewall_policy(self):
- clients.OpenStackClients.keystone().AndReturn(
- fakes.FakeKeystoneClient())
- neutronclient.Client.create_firewall_policy({
- 'firewall_policy': {
- 'name': 'test-firewall-policy', 'shared': True,
- 'audited': True, 'firewall_rules': ['rule-id-1', 'rule-id-2']}}
- ).AndReturn({'firewall_policy': {'id': '5678'}})
-
- snippet = template_format.parse(firewall_policy_template)
- stack = utils.parse_stack(snippet)
- return firewall.FirewallPolicy(
- 'firewall_policy', snippet['Resources']['firewall_policy'], stack)
-
- def test_create(self):
- rsrc = self.create_firewall_policy()
- self.m.ReplayAll()
- scheduler.TaskRunner(rsrc.create)()
- self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state)
- self.m.VerifyAll()
-
- def test_create_failed(self):
- clients.OpenStackClients.keystone().AndReturn(
- fakes.FakeKeystoneClient())
- neutronclient.Client.create_firewall_policy({
- 'firewall_policy': {
- 'name': 'test-firewall-policy', 'shared': True,
- 'audited': True, 'firewall_rules': ['rule-id-1', 'rule-id-2']}}
- ).AndRaise(firewall.NeutronClientException())
- self.m.ReplayAll()
-
- snippet = template_format.parse(firewall_policy_template)
- stack = utils.parse_stack(snippet)
- rsrc = firewall.FirewallPolicy(
- 'firewall_policy', snippet['Resources']['firewall_policy'], stack)
-
- error = self.assertRaises(exception.ResourceFailure,
- scheduler.TaskRunner(rsrc.create))
- self.assertEqual(
- 'NeutronClientException: An unknown exception occurred.',
- str(error))
- self.assertEqual((rsrc.CREATE, rsrc.FAILED), rsrc.state)
- self.m.VerifyAll()
-
- def test_delete(self):
- neutronclient.Client.delete_firewall_policy('5678')
- neutronclient.Client.show_firewall_policy('5678').AndRaise(
- firewall.NeutronClientException(status_code=404))
-
- rsrc = self.create_firewall_policy()
- self.m.ReplayAll()
- scheduler.TaskRunner(rsrc.create)()
- scheduler.TaskRunner(rsrc.delete)()
- self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state)
- self.m.VerifyAll()
-
- def test_delete_already_gone(self):
- neutronclient.Client.delete_firewall_policy('5678').AndRaise(
- firewall.NeutronClientException(status_code=404))
-
- rsrc = self.create_firewall_policy()
- self.m.ReplayAll()
- scheduler.TaskRunner(rsrc.create)()
- scheduler.TaskRunner(rsrc.delete)()
- self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state)
- self.m.VerifyAll()
-
- def test_delete_failed(self):
- neutronclient.Client.delete_firewall_policy('5678').AndRaise(
- firewall.NeutronClientException(status_code=400))
-
- rsrc = self.create_firewall_policy()
- self.m.ReplayAll()
- scheduler.TaskRunner(rsrc.create)()
- error = self.assertRaises(exception.ResourceFailure,
- scheduler.TaskRunner(rsrc.delete))
- self.assertEqual(
- 'NeutronClientException: An unknown exception occurred.',
- str(error))
- self.assertEqual((rsrc.DELETE, rsrc.FAILED), rsrc.state)
- self.m.VerifyAll()
-
- def test_attribute(self):
- rsrc = self.create_firewall_policy()
- neutronclient.Client.show_firewall_policy('5678').MultipleTimes(
- ).AndReturn(
- {'firewall_policy': {'audited': True, 'shared': True}})
- self.m.ReplayAll()
- scheduler.TaskRunner(rsrc.create)()
- self.assertEqual(True, rsrc.FnGetAtt('audited'))
- self.assertEqual(True, rsrc.FnGetAtt('shared'))
- self.m.VerifyAll()
-
- def test_attribute_failed(self):
- rsrc = self.create_firewall_policy()
- self.m.ReplayAll()
- scheduler.TaskRunner(rsrc.create)()
- error = self.assertRaises(exception.InvalidTemplateAttribute,
- rsrc.FnGetAtt, 'subnet_id')
- self.assertEqual(
- 'The Referenced Attribute (firewall_policy subnet_id) is '
- 'incorrect.', str(error))
- self.m.VerifyAll()
-
- def test_update(self):
- rsrc = self.create_firewall_policy()
- neutronclient.Client.update_firewall_policy(
- '5678', {'firewall_policy': {'firewall_rules': ['3', '4']}})
- self.m.ReplayAll()
- scheduler.TaskRunner(rsrc.create)()
-
- update_template = copy.deepcopy(rsrc.t)
- update_template['Properties']['firewall_rules'] = ['3', '4']
- scheduler.TaskRunner(rsrc.update, update_template)()
-
- self.m.VerifyAll()
-
-
-@skipIf(neutronclient is None, 'neutronclient unavailable')
-class FirewallRuleTest(HeatTestCase):
-
- def setUp(self):
- super(FirewallRuleTest, self).setUp()
- self.m.StubOutWithMock(neutronclient.Client, 'create_firewall_rule')
- self.m.StubOutWithMock(neutronclient.Client, 'delete_firewall_rule')
- self.m.StubOutWithMock(neutronclient.Client, 'show_firewall_rule')
- self.m.StubOutWithMock(neutronclient.Client, 'update_firewall_rule')
- self.m.StubOutWithMock(clients.OpenStackClients, 'keystone')
- utils.setup_dummy_db()
-
- def create_firewall_rule(self):
- clients.OpenStackClients.keystone().AndReturn(
- fakes.FakeKeystoneClient())
- neutronclient.Client.create_firewall_rule({
- 'firewall_rule': {
- 'name': 'test-firewall-rule', 'shared': True,
- 'action': 'allow', 'protocol': 'tcp', 'enabled': True,
- 'ip_version': "4"}}
- ).AndReturn({'firewall_rule': {'id': '5678'}})
-
- snippet = template_format.parse(firewall_rule_template)
- stack = utils.parse_stack(snippet)
- return firewall.FirewallRule(
- 'firewall_rule', snippet['Resources']['firewall_rule'], stack)
-
- def test_create(self):
- rsrc = self.create_firewall_rule()
- self.m.ReplayAll()
- scheduler.TaskRunner(rsrc.create)()
- self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state)
- self.m.VerifyAll()
-
- def test_create_failed(self):
- clients.OpenStackClients.keystone().AndReturn(
- fakes.FakeKeystoneClient())
- neutronclient.Client.create_firewall_rule({
- 'firewall_rule': {
- 'name': 'test-firewall-rule', 'shared': True,
- 'action': 'allow', 'protocol': 'tcp', 'enabled': True,
- 'ip_version': "4"}}
- ).AndRaise(firewall.NeutronClientException())
- self.m.ReplayAll()
-
- snippet = template_format.parse(firewall_rule_template)
- stack = utils.parse_stack(snippet)
- rsrc = firewall.FirewallRule(
- 'firewall_rule', snippet['Resources']['firewall_rule'], stack)
-
- error = self.assertRaises(exception.ResourceFailure,
- scheduler.TaskRunner(rsrc.create))
- self.assertEqual(
- 'NeutronClientException: An unknown exception occurred.',
- str(error))
- self.assertEqual((rsrc.CREATE, rsrc.FAILED), rsrc.state)
- self.m.VerifyAll()
-
- def test_delete(self):
- neutronclient.Client.delete_firewall_rule('5678')
- neutronclient.Client.show_firewall_rule('5678').AndRaise(
- firewall.NeutronClientException(status_code=404))
-
- rsrc = self.create_firewall_rule()
- self.m.ReplayAll()
- scheduler.TaskRunner(rsrc.create)()
- scheduler.TaskRunner(rsrc.delete)()
- self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state)
- self.m.VerifyAll()
-
- def test_delete_already_gone(self):
- neutronclient.Client.delete_firewall_rule('5678').AndRaise(
- firewall.NeutronClientException(status_code=404))
-
- rsrc = self.create_firewall_rule()
- self.m.ReplayAll()
- scheduler.TaskRunner(rsrc.create)()
- scheduler.TaskRunner(rsrc.delete)()
- self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state)
- self.m.VerifyAll()
-
- def test_delete_failed(self):
- neutronclient.Client.delete_firewall_rule('5678').AndRaise(
- firewall.NeutronClientException(status_code=400))
-
- rsrc = self.create_firewall_rule()
- self.m.ReplayAll()
- scheduler.TaskRunner(rsrc.create)()
- error = self.assertRaises(exception.ResourceFailure,
- scheduler.TaskRunner(rsrc.delete))
- self.assertEqual(
- 'NeutronClientException: An unknown exception occurred.',
- str(error))
- self.assertEqual((rsrc.DELETE, rsrc.FAILED), rsrc.state)
- self.m.VerifyAll()
-
- def test_attribute(self):
- rsrc = self.create_firewall_rule()
- neutronclient.Client.show_firewall_rule('5678').MultipleTimes(
- ).AndReturn(
- {'firewall_rule': {'protocol': 'tcp', 'shared': True}})
- self.m.ReplayAll()
- scheduler.TaskRunner(rsrc.create)()
- self.assertEqual('tcp', rsrc.FnGetAtt('protocol'))
- self.assertEqual(True, rsrc.FnGetAtt('shared'))
- self.m.VerifyAll()
-
- def test_attribute_failed(self):
- rsrc = self.create_firewall_rule()
- self.m.ReplayAll()
- scheduler.TaskRunner(rsrc.create)()
- error = self.assertRaises(exception.InvalidTemplateAttribute,
- rsrc.FnGetAtt, 'subnet_id')
- self.assertEqual(
- 'The Referenced Attribute (firewall_rule subnet_id) is '
- 'incorrect.', str(error))
- self.m.VerifyAll()
-
- def test_update(self):
- rsrc = self.create_firewall_rule()
- neutronclient.Client.update_firewall_rule(
- '5678', {'firewall_rule': {'protocol': 'icmp'}})
- self.m.ReplayAll()
- scheduler.TaskRunner(rsrc.create)()
-
- update_template = copy.deepcopy(rsrc.t)
- update_template['Properties']['protocol'] = 'icmp'
- scheduler.TaskRunner(rsrc.update, update_template)()
-
- self.m.VerifyAll()
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import copy
-
-from testtools import skipIf
-
-from heat.common import exception
-from heat.common import template_format
-from heat.engine import clients
-from heat.engine import scheduler
-from heat.engine.resources.neutron import loadbalancer
-from heat.openstack.common.importutils import try_import
-from heat.tests import fakes
-from heat.tests import utils
-from heat.tests.common import HeatTestCase
-from heat.tests.v1_1 import fakes as nova_fakes
-
-neutronclient = try_import('neutronclient.v2_0.client')
-
-health_monitor_template = '''
-{
- "AWSTemplateFormatVersion" : "2010-09-09",
- "Description" : "Template to test load balancer resources",
- "Parameters" : {},
- "Resources" : {
- "monitor": {
- "Type": "OS::Neutron::HealthMonitor",
- "Properties": {
- "type": "HTTP",
- "delay": 3,
- "max_retries": 5,
- "timeout": 10
- }
- }
- }
-}
-'''
-
-pool_template = '''
-{
- "AWSTemplateFormatVersion" : "2010-09-09",
- "Description" : "Template to test load balancer resources",
- "Parameters" : {},
- "Resources" : {
- "pool": {
- "Type": "OS::Neutron::Pool",
- "Properties": {
- "protocol": "HTTP",
- "subnet_id": "sub123",
- "lb_method": "ROUND_ROBIN",
- "vip": {
- "protocol_port": 80
- }
- }
- }
- }
-}
-'''
-
-lb_template = '''
-{
- "AWSTemplateFormatVersion" : "2010-09-09",
- "Description" : "Template to test load balancer resources",
- "Parameters" : {},
- "Resources" : {
- "lb": {
- "Type": "OS::Neutron::LoadBalancer",
- "Properties": {
- "protocol_port": 8080,
- "pool_id": "pool123",
- "members": ["1234"]
- }
- }
- }
-}
-'''
-
-
-@skipIf(neutronclient is None, 'neutronclient unavailable')
-class HealthMonitorTest(HeatTestCase):
-
- def setUp(self):
- super(HealthMonitorTest, self).setUp()
- self.m.StubOutWithMock(neutronclient.Client, 'create_health_monitor')
- self.m.StubOutWithMock(neutronclient.Client, 'delete_health_monitor')
- self.m.StubOutWithMock(neutronclient.Client, 'show_health_monitor')
- self.m.StubOutWithMock(neutronclient.Client, 'update_health_monitor')
- self.m.StubOutWithMock(clients.OpenStackClients, 'keystone')
- utils.setup_dummy_db()
-
- def create_health_monitor(self):
- clients.OpenStackClients.keystone().AndReturn(
- fakes.FakeKeystoneClient())
- neutronclient.Client.create_health_monitor({
- 'health_monitor': {
- 'delay': 3, 'max_retries': 5, 'type': u'HTTP',
- 'timeout': 10, 'admin_state_up': True}}
- ).AndReturn({'health_monitor': {'id': '5678'}})
-
- snippet = template_format.parse(health_monitor_template)
- stack = utils.parse_stack(snippet)
- return loadbalancer.HealthMonitor(
- 'monitor', snippet['Resources']['monitor'], stack)
-
- def test_create(self):
- rsrc = self.create_health_monitor()
- self.m.ReplayAll()
- scheduler.TaskRunner(rsrc.create)()
- self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state)
- self.m.VerifyAll()
-
- def test_create_failed(self):
- clients.OpenStackClients.keystone().AndReturn(
- fakes.FakeKeystoneClient())
- neutronclient.Client.create_health_monitor({
- 'health_monitor': {
- 'delay': 3, 'max_retries': 5, 'type': u'HTTP',
- 'timeout': 10, 'admin_state_up': True}}
- ).AndRaise(loadbalancer.NeutronClientException())
- self.m.ReplayAll()
-
- snippet = template_format.parse(health_monitor_template)
- stack = utils.parse_stack(snippet)
- rsrc = loadbalancer.HealthMonitor(
- 'monitor', snippet['Resources']['monitor'], stack)
- error = self.assertRaises(exception.ResourceFailure,
- scheduler.TaskRunner(rsrc.create))
- self.assertEqual(
- 'NeutronClientException: An unknown exception occurred.',
- str(error))
- self.assertEqual((rsrc.CREATE, rsrc.FAILED), rsrc.state)
- self.m.VerifyAll()
-
- def test_delete(self):
- neutronclient.Client.delete_health_monitor('5678')
- neutronclient.Client.show_health_monitor('5678').AndRaise(
- loadbalancer.NeutronClientException(status_code=404))
-
- rsrc = self.create_health_monitor()
- self.m.ReplayAll()
- scheduler.TaskRunner(rsrc.create)()
- scheduler.TaskRunner(rsrc.delete)()
- self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state)
- self.m.VerifyAll()
-
- def test_delete_already_gone(self):
- neutronclient.Client.delete_health_monitor('5678').AndRaise(
- loadbalancer.NeutronClientException(status_code=404))
-
- rsrc = self.create_health_monitor()
- self.m.ReplayAll()
- scheduler.TaskRunner(rsrc.create)()
- scheduler.TaskRunner(rsrc.delete)()
- self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state)
- self.m.VerifyAll()
-
- def test_delete_failed(self):
- neutronclient.Client.delete_health_monitor('5678').AndRaise(
- loadbalancer.NeutronClientException(status_code=400))
-
- rsrc = self.create_health_monitor()
- self.m.ReplayAll()
- scheduler.TaskRunner(rsrc.create)()
- error = self.assertRaises(exception.ResourceFailure,
- scheduler.TaskRunner(rsrc.delete))
- self.assertEqual(
- 'NeutronClientException: An unknown exception occurred.',
- str(error))
- self.assertEqual((rsrc.DELETE, rsrc.FAILED), rsrc.state)
- self.m.VerifyAll()
-
- def test_attribute(self):
- rsrc = self.create_health_monitor()
- neutronclient.Client.show_health_monitor('5678').MultipleTimes(
- ).AndReturn(
- {'health_monitor': {'admin_state_up': True, 'delay': 3}})
- self.m.ReplayAll()
- scheduler.TaskRunner(rsrc.create)()
- self.assertEqual(True, rsrc.FnGetAtt('admin_state_up'))
- self.assertEqual(3, rsrc.FnGetAtt('delay'))
- self.m.VerifyAll()
-
- def test_attribute_failed(self):
- rsrc = self.create_health_monitor()
- self.m.ReplayAll()
- scheduler.TaskRunner(rsrc.create)()
- error = self.assertRaises(exception.InvalidTemplateAttribute,
- rsrc.FnGetAtt, 'subnet_id')
- self.assertEqual(
- 'The Referenced Attribute (monitor subnet_id) is incorrect.',
- str(error))
- self.m.VerifyAll()
-
- def test_update(self):
- rsrc = self.create_health_monitor()
- neutronclient.Client.update_health_monitor(
- '5678', {'health_monitor': {'delay': 10}})
- self.m.ReplayAll()
- scheduler.TaskRunner(rsrc.create)()
-
- update_template = copy.deepcopy(rsrc.t)
- update_template['Properties']['delay'] = 10
- scheduler.TaskRunner(rsrc.update, update_template)()
-
- self.m.VerifyAll()
-
-
-@skipIf(neutronclient is None, 'neutronclient unavailable')
-class PoolTest(HeatTestCase):
-
- def setUp(self):
- super(PoolTest, self).setUp()
- self.m.StubOutWithMock(neutronclient.Client, 'create_pool')
- self.m.StubOutWithMock(neutronclient.Client, 'delete_pool')
- self.m.StubOutWithMock(neutronclient.Client, 'show_pool')
- self.m.StubOutWithMock(neutronclient.Client, 'update_pool')
- self.m.StubOutWithMock(neutronclient.Client,
- 'associate_health_monitor')
- self.m.StubOutWithMock(neutronclient.Client,
- 'disassociate_health_monitor')
- self.m.StubOutWithMock(neutronclient.Client, 'create_vip')
- self.m.StubOutWithMock(neutronclient.Client, 'delete_vip')
- self.m.StubOutWithMock(neutronclient.Client, 'show_vip')
- self.m.StubOutWithMock(clients.OpenStackClients, 'keystone')
- utils.setup_dummy_db()
-
- def create_pool(self):
- clients.OpenStackClients.keystone().AndReturn(
- fakes.FakeKeystoneClient())
- neutronclient.Client.create_pool({
- 'pool': {
- 'subnet_id': 'sub123', 'protocol': u'HTTP',
- 'name': utils.PhysName('test_stack', 'pool'),
- 'lb_method': 'ROUND_ROBIN', 'admin_state_up': True}}
- ).AndReturn({'pool': {'id': '5678'}})
- neutronclient.Client.create_vip({
- 'vip': {
- 'protocol': u'HTTP', 'name': 'pool.vip',
- 'admin_state_up': True, 'subnet_id': u'sub123',
- 'pool_id': '5678', 'protocol_port': 80}}
- ).AndReturn({'vip': {'id': 'xyz'}})
- neutronclient.Client.show_pool('5678').AndReturn(
- {'pool': {'status': 'ACTIVE'}})
- neutronclient.Client.show_vip('xyz').AndReturn(
- {'vip': {'status': 'ACTIVE'}})
-
- snippet = template_format.parse(pool_template)
- stack = utils.parse_stack(snippet)
- return loadbalancer.Pool(
- 'pool', snippet['Resources']['pool'], stack)
-
- def test_create(self):
- rsrc = self.create_pool()
- self.m.ReplayAll()
- scheduler.TaskRunner(rsrc.create)()
- self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state)
- self.m.VerifyAll()
-
- def test_create_pending(self):
- clients.OpenStackClients.keystone().AndReturn(
- fakes.FakeKeystoneClient())
- neutronclient.Client.create_pool({
- 'pool': {
- 'subnet_id': 'sub123', 'protocol': u'HTTP',
- 'name': utils.PhysName('test_stack', 'pool'),
- 'lb_method': 'ROUND_ROBIN', 'admin_state_up': True}}
- ).AndReturn({'pool': {'id': '5678'}})
- neutronclient.Client.create_vip({
- 'vip': {
- 'protocol': u'HTTP', 'name': 'pool.vip',
- 'admin_state_up': True, 'subnet_id': u'sub123',
- 'pool_id': '5678', 'protocol_port': 80}}
- ).AndReturn({'vip': {'id': 'xyz'}})
- neutronclient.Client.show_pool('5678').AndReturn(
- {'pool': {'status': 'PENDING_CREATE'}})
- neutronclient.Client.show_pool('5678').MultipleTimes().AndReturn(
- {'pool': {'status': 'ACTIVE'}})
- neutronclient.Client.show_vip('xyz').AndReturn(
- {'vip': {'status': 'PENDING_CREATE'}})
- neutronclient.Client.show_vip('xyz').AndReturn(
- {'vip': {'status': 'ACTIVE'}})
-
- snippet = template_format.parse(pool_template)
- stack = utils.parse_stack(snippet)
- rsrc = loadbalancer.Pool(
- 'pool', snippet['Resources']['pool'], stack)
- self.m.ReplayAll()
- scheduler.TaskRunner(rsrc.create)()
- self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state)
- self.m.VerifyAll()
-
- def test_create_failed_unexpected_status(self):
- clients.OpenStackClients.keystone().AndReturn(
- fakes.FakeKeystoneClient())
- neutronclient.Client.create_pool({
- 'pool': {
- 'subnet_id': 'sub123', 'protocol': u'HTTP',
- 'name': utils.PhysName('test_stack', 'pool'),
- 'lb_method': 'ROUND_ROBIN', 'admin_state_up': True}}
- ).AndReturn({'pool': {'id': '5678'}})
- neutronclient.Client.create_vip({
- 'vip': {
- 'protocol': u'HTTP', 'name': 'pool.vip',
- 'admin_state_up': True, 'subnet_id': u'sub123',
- 'pool_id': '5678', 'protocol_port': 80}}
- ).AndReturn({'vip': {'id': 'xyz'}})
- neutronclient.Client.show_pool('5678').AndReturn(
- {'pool': {'status': 'ERROR', 'name': '5678'}})
-
- snippet = template_format.parse(pool_template)
- stack = utils.parse_stack(snippet)
- rsrc = loadbalancer.Pool(
- 'pool', snippet['Resources']['pool'], stack)
- self.m.ReplayAll()
- error = self.assertRaises(exception.ResourceFailure,
- scheduler.TaskRunner(rsrc.create))
- self.assertEqual(
- 'Error: neutron report unexpected pool '
- 'resource[5678] status[ERROR]',
- str(error))
- self.assertEqual((rsrc.CREATE, rsrc.FAILED), rsrc.state)
- self.m.VerifyAll()
-
- def test_create_failed_unexpected_vip_status(self):
- clients.OpenStackClients.keystone().AndReturn(
- fakes.FakeKeystoneClient())
- neutronclient.Client.create_pool({
- 'pool': {
- 'subnet_id': 'sub123', 'protocol': u'HTTP',
- 'name': utils.PhysName('test_stack', 'pool'),
- 'lb_method': 'ROUND_ROBIN', 'admin_state_up': True}}
- ).AndReturn({'pool': {'id': '5678'}})
- neutronclient.Client.create_vip({
- 'vip': {
- 'protocol': u'HTTP', 'name': 'pool.vip',
- 'admin_state_up': True, 'subnet_id': u'sub123',
- 'pool_id': '5678', 'protocol_port': 80}}
- ).AndReturn({'vip': {'id': 'xyz'}})
- neutronclient.Client.show_pool('5678').MultipleTimes().AndReturn(
- {'pool': {'status': 'ACTIVE'}})
- neutronclient.Client.show_vip('xyz').AndReturn(
- {'vip': {'status': 'ERROR', 'name': 'xyz'}})
-
- snippet = template_format.parse(pool_template)
- stack = utils.parse_stack(snippet)
- rsrc = loadbalancer.Pool(
- 'pool', snippet['Resources']['pool'], stack)
- self.m.ReplayAll()
- error = self.assertRaises(exception.ResourceFailure,
- scheduler.TaskRunner(rsrc.create))
- self.assertEqual(
- 'Error: neutron reported unexpected vip '
- 'resource[xyz] status[ERROR]',
- str(error))
- self.assertEqual((rsrc.CREATE, rsrc.FAILED), rsrc.state)
- self.m.VerifyAll()
-
- def test_create_failed(self):
- clients.OpenStackClients.keystone().AndReturn(
- fakes.FakeKeystoneClient())
- neutronclient.Client.create_pool({
- 'pool': {
- 'subnet_id': 'sub123', 'protocol': u'HTTP',
- 'name': utils.PhysName('test_stack', 'pool'),
- 'lb_method': 'ROUND_ROBIN', 'admin_state_up': True}}
- ).AndRaise(loadbalancer.NeutronClientException())
- self.m.ReplayAll()
-
- snippet = template_format.parse(pool_template)
- stack = utils.parse_stack(snippet)
- rsrc = loadbalancer.Pool(
- 'pool', snippet['Resources']['pool'], stack)
- error = self.assertRaises(exception.ResourceFailure,
- scheduler.TaskRunner(rsrc.create))
- self.assertEqual(
- 'NeutronClientException: An unknown exception occurred.',
- str(error))
- self.assertEqual((rsrc.CREATE, rsrc.FAILED), rsrc.state)
- self.m.VerifyAll()
-
- def test_delete(self):
- rsrc = self.create_pool()
- neutronclient.Client.delete_vip('xyz')
- neutronclient.Client.show_vip('xyz').AndRaise(
- loadbalancer.NeutronClientException(status_code=404))
- neutronclient.Client.delete_pool('5678')
- neutronclient.Client.show_pool('5678').AndRaise(
- loadbalancer.NeutronClientException(status_code=404))
- self.m.ReplayAll()
- scheduler.TaskRunner(rsrc.create)()
- scheduler.TaskRunner(rsrc.delete)()
- self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state)
- self.m.VerifyAll()
-
- def test_delete_already_gone(self):
- neutronclient.Client.delete_vip('xyz').AndRaise(
- loadbalancer.NeutronClientException(status_code=404))
- neutronclient.Client.delete_pool('5678').AndRaise(
- loadbalancer.NeutronClientException(status_code=404))
-
- rsrc = self.create_pool()
- self.m.ReplayAll()
- scheduler.TaskRunner(rsrc.create)()
- scheduler.TaskRunner(rsrc.delete)()
- self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state)
- self.m.VerifyAll()
-
- def test_delete_vip_failed(self):
- neutronclient.Client.delete_vip('xyz').AndRaise(
- loadbalancer.NeutronClientException(status_code=400))
-
- rsrc = self.create_pool()
- self.m.ReplayAll()
- scheduler.TaskRunner(rsrc.create)()
- error = self.assertRaises(exception.ResourceFailure,
- scheduler.TaskRunner(rsrc.delete))
- self.assertEqual(
- 'NeutronClientException: An unknown exception occurred.',
- str(error))
- self.assertEqual((rsrc.DELETE, rsrc.FAILED), rsrc.state)
- self.m.VerifyAll()
-
- def test_delete_failed(self):
- neutronclient.Client.delete_vip('xyz').AndRaise(
- loadbalancer.NeutronClientException(status_code=404))
- neutronclient.Client.delete_pool('5678').AndRaise(
- loadbalancer.NeutronClientException(status_code=400))
-
- rsrc = self.create_pool()
- self.m.ReplayAll()
- scheduler.TaskRunner(rsrc.create)()
- error = self.assertRaises(exception.ResourceFailure,
- scheduler.TaskRunner(rsrc.delete))
- self.assertEqual(
- 'NeutronClientException: An unknown exception occurred.',
- str(error))
- self.assertEqual((rsrc.DELETE, rsrc.FAILED), rsrc.state)
- self.m.VerifyAll()
-
- def test_attribute(self):
- rsrc = self.create_pool()
- neutronclient.Client.show_pool('5678').MultipleTimes(
- ).AndReturn(
- {'pool': {'admin_state_up': True, 'lb_method': 'ROUND_ROBIN'}})
- self.m.ReplayAll()
- scheduler.TaskRunner(rsrc.create)()
- self.assertEqual(True, rsrc.FnGetAtt('admin_state_up'))
- self.assertEqual('ROUND_ROBIN', rsrc.FnGetAtt('lb_method'))
- self.m.VerifyAll()
-
- def test_vip_attribute(self):
- rsrc = self.create_pool()
- neutronclient.Client.show_vip('xyz').AndReturn(
- {'vip': {'address': '10.0.0.3', 'name': 'xyz'}})
- self.m.ReplayAll()
- scheduler.TaskRunner(rsrc.create)()
- self.assertEqual({'address': '10.0.0.3', 'name': 'xyz'},
- rsrc.FnGetAtt('vip'))
- self.m.VerifyAll()
-
- def test_attribute_failed(self):
- rsrc = self.create_pool()
- self.m.ReplayAll()
- scheduler.TaskRunner(rsrc.create)()
- error = self.assertRaises(exception.InvalidTemplateAttribute,
- rsrc.FnGetAtt, 'net_id')
- self.assertEqual(
- 'The Referenced Attribute (pool net_id) is incorrect.',
- str(error))
- self.m.VerifyAll()
-
- def test_update(self):
- rsrc = self.create_pool()
- neutronclient.Client.update_pool(
- '5678', {'pool': {'admin_state_up': False}})
- self.m.ReplayAll()
- scheduler.TaskRunner(rsrc.create)()
-
- update_template = copy.deepcopy(rsrc.t)
- update_template['Properties']['admin_state_up'] = False
- scheduler.TaskRunner(rsrc.update, update_template)()
-
- self.m.VerifyAll()
-
- def test_update_monitors(self):
- clients.OpenStackClients.keystone().AndReturn(
- fakes.FakeKeystoneClient())
- neutronclient.Client.create_pool({
- 'pool': {
- 'subnet_id': 'sub123', 'protocol': u'HTTP',
- 'name': utils.PhysName('test_stack', 'pool'),
- 'lb_method': 'ROUND_ROBIN', 'admin_state_up': True}}
- ).AndReturn({'pool': {'id': '5678'}})
- neutronclient.Client.associate_health_monitor(
- '5678', {'health_monitor': {'id': 'mon123'}})
- neutronclient.Client.associate_health_monitor(
- '5678', {'health_monitor': {'id': 'mon456'}})
- neutronclient.Client.create_vip({
- 'vip': {
- 'protocol': u'HTTP', 'name': 'pool.vip',
- 'admin_state_up': True, 'subnet_id': u'sub123',
- 'pool_id': '5678', 'protocol_port': 80}}
- ).AndReturn({'vip': {'id': 'xyz'}})
- neutronclient.Client.show_pool('5678').AndReturn(
- {'pool': {'status': 'ACTIVE'}})
- neutronclient.Client.show_vip('xyz').AndReturn(
- {'vip': {'status': 'ACTIVE'}})
- neutronclient.Client.disassociate_health_monitor(
- '5678', {'health_monitor': {'id': 'mon456'}})
- neutronclient.Client.associate_health_monitor(
- '5678', {'health_monitor': {'id': 'mon789'}})
-
- snippet = template_format.parse(pool_template)
- stack = utils.parse_stack(snippet)
- snippet['Resources']['pool']['Properties']['monitors'] = [
- 'mon123', 'mon456']
- rsrc = loadbalancer.Pool(
- 'pool', snippet['Resources']['pool'], stack)
- self.m.ReplayAll()
- scheduler.TaskRunner(rsrc.create)()
-
- update_template = copy.deepcopy(rsrc.t)
- update_template['Properties']['monitors'] = ['mon123', 'mon789']
- scheduler.TaskRunner(rsrc.update, update_template)()
-
- self.m.VerifyAll()
-
-
-@skipIf(neutronclient is None, 'neutronclient unavailable')
-class LoadBalancerTest(HeatTestCase):
-
- def setUp(self):
- super(LoadBalancerTest, self).setUp()
- self.fc = nova_fakes.FakeClient()
- self.m.StubOutWithMock(neutronclient.Client, 'create_member')
- self.m.StubOutWithMock(neutronclient.Client, 'delete_member')
- self.m.StubOutWithMock(clients.OpenStackClients, 'keystone')
- self.m.StubOutWithMock(clients.OpenStackClients, 'nova')
- utils.setup_dummy_db()
-
- def create_load_balancer(self):
- clients.OpenStackClients.keystone().AndReturn(
- fakes.FakeKeystoneClient())
- clients.OpenStackClients.nova("compute").MultipleTimes().AndReturn(
- self.fc)
- neutronclient.Client.create_member({
- 'member': {
- 'pool_id': 'pool123', 'protocol_port': 8080,
- 'address': '1.2.3.4'}}
- ).AndReturn({'member': {'id': 'member5678'}})
- snippet = template_format.parse(lb_template)
- stack = utils.parse_stack(snippet)
- return loadbalancer.LoadBalancer(
- 'lb', snippet['Resources']['lb'], stack)
-
- def test_create(self):
- rsrc = self.create_load_balancer()
-
- self.m.ReplayAll()
- scheduler.TaskRunner(rsrc.create)()
- self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state)
- self.m.VerifyAll()
-
- def test_update(self):
- rsrc = self.create_load_balancer()
- neutronclient.Client.delete_member(u'member5678')
- neutronclient.Client.create_member({
- 'member': {
- 'pool_id': 'pool123', 'protocol_port': 8080,
- 'address': '4.5.6.7'}}
- ).AndReturn({'member': {'id': 'memberxyz'}})
-
- self.m.ReplayAll()
- scheduler.TaskRunner(rsrc.create)()
-
- update_template = copy.deepcopy(rsrc.t)
- update_template['Properties']['members'] = ['5678']
-
- scheduler.TaskRunner(rsrc.update, update_template)()
- self.m.VerifyAll()
-
- def test_update_missing_member(self):
- rsrc = self.create_load_balancer()
- neutronclient.Client.delete_member(u'member5678').AndRaise(
- loadbalancer.NeutronClientException(status_code=404))
-
- self.m.ReplayAll()
- scheduler.TaskRunner(rsrc.create)()
-
- update_template = copy.deepcopy(rsrc.t)
- update_template['Properties']['members'] = []
-
- scheduler.TaskRunner(rsrc.update, update_template)()
- self.assertEqual((rsrc.UPDATE, rsrc.COMPLETE), rsrc.state)
- self.m.VerifyAll()
-
- def test_delete(self):
- rsrc = self.create_load_balancer()
- neutronclient.Client.delete_member(u'member5678')
-
- self.m.ReplayAll()
- scheduler.TaskRunner(rsrc.create)()
- scheduler.TaskRunner(rsrc.delete)()
- self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state)
- self.m.VerifyAll()
-
- def test_delete_missing_member(self):
- rsrc = self.create_load_balancer()
- neutronclient.Client.delete_member(u'member5678').AndRaise(
- loadbalancer.NeutronClientException(status_code=404))
-
- self.m.ReplayAll()
- scheduler.TaskRunner(rsrc.create)()
- scheduler.TaskRunner(rsrc.delete)()
- self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state)
- self.m.VerifyAll()
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import copy
-
-from testtools import skipIf
-
-from heat.common import exception
-from heat.common import template_format
-from heat.engine import clients
-from heat.engine import scheduler
-from heat.engine.resources.neutron import vpnservice
-from heat.openstack.common.importutils import try_import
-from heat.tests import fakes
-from heat.tests import utils
-from heat.tests.common import HeatTestCase
-
-
-neutronclient = try_import('neutronclient.v2_0.client')
-
-vpnservice_template = '''
-{
- "AWSTemplateFormatVersion" : "2010-09-09",
- "Description" : "Template to test VPN service resource",
- "Parameters" : {},
- "Resources" : {
- "VPNService" : {
- "Type" : "OS::Neutron::VPNService",
- "Properties" : {
- "name" : "VPNService",
- "description" : "My new VPN service",
- "admin_state_up" : true,
- "router_id" : "rou123",
- "subnet_id" : "sub123"
- }
- }
- }
-}
-'''
-
-ipsec_site_connection_template = '''
-{
- "AWSTemplateFormatVersion" : "2010-09-09",
- "Description" : "Template to test IPsec site connection resource",
- "Parameters" : {},
- "Resources" : {
- "IPsecSiteConnection" : {
- "Type" : "OS::Neutron::IPsecSiteConnection",
- "Properties" : {
- "name" : "IPsecSiteConnection",
- "description" : "My new VPN connection",
- "peer_address" : "172.24.4.233",
- "peer_id" : "172.24.4.233",
- "peer_cidrs" : [ "10.2.0.0/24" ],
- "mtu" : 1500,
- "dpd" : {
- "actions" : "hold",
- "interval" : 30,
- "timeout" : 120
- },
- "psk" : "secret",
- "initiator" : "bi-directional",
- "admin_state_up" : true,
- "ikepolicy_id" : "ike123",
- "ipsecpolicy_id" : "ips123",
- "vpnservice_id" : "vpn123"
- }
- }
- }
-}
-'''
-
-ikepolicy_template = '''
-{
- "AWSTemplateFormatVersion" : "2010-09-09",
- "Description" : "Template to test IKE policy resource",
- "Parameters" : {},
- "Resources" : {
- "IKEPolicy" : {
- "Type" : "OS::Neutron::IKEPolicy",
- "Properties" : {
- "name" : "IKEPolicy",
- "description" : "My new IKE policy",
- "auth_algorithm" : "sha1",
- "encryption_algorithm" : "3des",
- "phase1_negotiation_mode" : "main",
- "lifetime" : {
- "units" : "seconds",
- "value" : 3600
- },
- "pfs" : "group5",
- "ike_version" : "v1"
- }
- }
- }
-}
-'''
-
-ipsecpolicy_template = '''
-{
- "AWSTemplateFormatVersion" : "2010-09-09",
- "Description" : "Template to test IPsec policy resource",
- "Parameters" : {},
- "Resources" : {
- "IPsecPolicy" : {
- "Type" : "OS::Neutron::IPsecPolicy",
- "Properties" : {
- "name" : "IPsecPolicy",
- "description" : "My new IPsec policy",
- "transform_protocol": "esp",
- "encapsulation_mode" : "tunnel",
- "auth_algorithm" : "sha1",
- "encryption_algorithm" : "3des",
- "lifetime" : {
- "units" : "seconds",
- "value" : 3600
- },
- "pfs" : "group5"
- }
- }
- }
-}
-'''
-
-
-@skipIf(neutronclient is None, 'neutronclient unavailable')
-class VPNServiceTest(HeatTestCase):
-
- VPN_SERVICE_CONF = {
- 'vpnservice': {
- 'name': 'VPNService',
- 'description': 'My new VPN service',
- 'admin_state_up': True,
- 'router_id': 'rou123',
- 'subnet_id': 'sub123'
- }
- }
-
- def setUp(self):
- super(VPNServiceTest, self).setUp()
- self.m.StubOutWithMock(neutronclient.Client, 'create_vpnservice')
- self.m.StubOutWithMock(neutronclient.Client, 'delete_vpnservice')
- self.m.StubOutWithMock(neutronclient.Client, 'show_vpnservice')
- self.m.StubOutWithMock(neutronclient.Client, 'update_vpnservice')
- self.m.StubOutWithMock(clients.OpenStackClients, 'keystone')
- utils.setup_dummy_db()
-
- def create_vpnservice(self):
- clients.OpenStackClients.keystone().AndReturn(
- fakes.FakeKeystoneClient())
- neutronclient.Client.create_vpnservice(
- self.VPN_SERVICE_CONF).AndReturn({'vpnservice': {'id': 'vpn123'}})
- snippet = template_format.parse(vpnservice_template)
- self.stack = utils.parse_stack(snippet)
- return vpnservice.VPNService('vpnservice',
- snippet['Resources']['VPNService'],
- self.stack)
-
- @utils.stack_delete_after
- def test_create(self):
- rsrc = self.create_vpnservice()
- self.m.ReplayAll()
- scheduler.TaskRunner(rsrc.create)()
- self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state)
- self.m.VerifyAll()
-
- @utils.stack_delete_after
- def test_create_failed(self):
- clients.OpenStackClients.keystone().AndReturn(
- fakes.FakeKeystoneClient())
- neutronclient.Client.create_vpnservice(self.VPN_SERVICE_CONF).AndRaise(
- vpnservice.NeutronClientException())
- self.m.ReplayAll()
- snippet = template_format.parse(vpnservice_template)
- self.stack = utils.parse_stack(snippet)
- rsrc = vpnservice.VPNService('vpnservice',
- snippet['Resources']['VPNService'],
- self.stack)
- error = self.assertRaises(exception.ResourceFailure,
- scheduler.TaskRunner(rsrc.create))
- self.assertEqual(
- 'NeutronClientException: An unknown exception occurred.',
- str(error))
- self.assertEqual((rsrc.CREATE, rsrc.FAILED), rsrc.state)
- self.m.VerifyAll()
-
- @utils.stack_delete_after
- def test_delete(self):
- neutronclient.Client.delete_vpnservice('vpn123')
- neutronclient.Client.show_vpnservice('vpn123').AndRaise(
- vpnservice.NeutronClientException(status_code=404))
- rsrc = self.create_vpnservice()
- self.m.ReplayAll()
- scheduler.TaskRunner(rsrc.create)()
- scheduler.TaskRunner(rsrc.delete)()
- self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state)
- self.m.VerifyAll()
-
- @utils.stack_delete_after
- def test_delete_already_gone(self):
- neutronclient.Client.delete_vpnservice('vpn123').AndRaise(
- vpnservice.NeutronClientException(status_code=404))
- rsrc = self.create_vpnservice()
- self.m.ReplayAll()
- scheduler.TaskRunner(rsrc.create)()
- scheduler.TaskRunner(rsrc.delete)()
- self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state)
- self.m.VerifyAll()
-
- @utils.stack_delete_after
- def test_delete_failed(self):
- neutronclient.Client.delete_vpnservice('vpn123').AndRaise(
- vpnservice.NeutronClientException(status_code=400))
- rsrc = self.create_vpnservice()
- self.m.ReplayAll()
- scheduler.TaskRunner(rsrc.create)()
- error = self.assertRaises(exception.ResourceFailure,
- scheduler.TaskRunner(rsrc.delete))
- self.assertEqual(
- 'NeutronClientException: An unknown exception occurred.',
- str(error))
- self.assertEqual((rsrc.DELETE, rsrc.FAILED), rsrc.state)
- self.m.VerifyAll()
-
- @utils.stack_delete_after
- def test_attribute(self):
- rsrc = self.create_vpnservice()
- neutronclient.Client.show_vpnservice('vpn123').MultipleTimes(
- ).AndReturn(self.VPN_SERVICE_CONF)
- self.m.ReplayAll()
- scheduler.TaskRunner(rsrc.create)()
- self.assertEqual('VPNService', rsrc.FnGetAtt('name'))
- self.assertEqual('My new VPN service', rsrc.FnGetAtt('description'))
- self.assertEqual(True, rsrc.FnGetAtt('admin_state_up'))
- self.assertEqual('rou123', rsrc.FnGetAtt('router_id'))
- self.assertEqual('sub123', rsrc.FnGetAtt('subnet_id'))
- self.m.VerifyAll()
-
- @utils.stack_delete_after
- def test_attribute_failed(self):
- rsrc = self.create_vpnservice()
- self.m.ReplayAll()
- scheduler.TaskRunner(rsrc.create)()
- error = self.assertRaises(exception.InvalidTemplateAttribute,
- rsrc.FnGetAtt, 'non-existent_property')
- self.assertEqual(
- 'The Referenced Attribute (vpnservice non-existent_property) is '
- 'incorrect.',
- str(error))
- self.m.VerifyAll()
-
- @utils.stack_delete_after
- def test_update(self):
- rsrc = self.create_vpnservice()
- neutronclient.Client.update_vpnservice(
- 'vpn123', {'vpnservice': {'admin_state_up': False}})
- self.m.ReplayAll()
- scheduler.TaskRunner(rsrc.create)()
- update_template = copy.deepcopy(rsrc.t)
- update_template['Properties']['admin_state_up'] = False
- scheduler.TaskRunner(rsrc.update, update_template)()
- self.m.VerifyAll()
-
-
-@skipIf(neutronclient is None, 'neutronclient unavailable')
-class IPsecSiteConnectionTest(HeatTestCase):
-
- IPSEC_SITE_CONNECTION_CONF = {
- 'ipsec_site_connection': {
- 'name': 'IPsecSiteConnection',
- 'description': 'My new VPN connection',
- 'peer_address': '172.24.4.233',
- 'peer_id': '172.24.4.233',
- 'peer_cidrs': ['10.2.0.0/24'],
- 'mtu': 1500,
- 'dpd': {
- 'actions': 'hold',
- 'interval': 30,
- 'timeout': 120
- },
- 'psk': 'secret',
- 'initiator': 'bi-directional',
- 'admin_state_up': True,
- 'ikepolicy_id': 'ike123',
- 'ipsecpolicy_id': 'ips123',
- 'vpnservice_id': 'vpn123'
- }
- }
-
- def setUp(self):
- super(IPsecSiteConnectionTest, self).setUp()
- self.m.StubOutWithMock(neutronclient.Client,
- 'create_ipsec_site_connection')
- self.m.StubOutWithMock(neutronclient.Client,
- 'delete_ipsec_site_connection')
- self.m.StubOutWithMock(neutronclient.Client,
- 'show_ipsec_site_connection')
- self.m.StubOutWithMock(neutronclient.Client,
- 'update_ipsec_site_connection')
- self.m.StubOutWithMock(clients.OpenStackClients, 'keystone')
- utils.setup_dummy_db()
-
- def create_ipsec_site_connection(self):
- clients.OpenStackClients.keystone().AndReturn(
- fakes.FakeKeystoneClient())
- neutronclient.Client.create_ipsec_site_connection(
- self.IPSEC_SITE_CONNECTION_CONF).AndReturn(
- {'ipsec_site_connection': {'id': 'con123'}})
- snippet = template_format.parse(ipsec_site_connection_template)
- self.stack = utils.parse_stack(snippet)
- return vpnservice.IPsecSiteConnection(
- 'ipsec_site_connection',
- snippet['Resources']['IPsecSiteConnection'],
- self.stack)
-
- @utils.stack_delete_after
- def test_create(self):
- rsrc = self.create_ipsec_site_connection()
- self.m.ReplayAll()
- scheduler.TaskRunner(rsrc.create)()
- self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state)
- self.m.VerifyAll()
-
- @utils.stack_delete_after
- def test_create_failed(self):
- clients.OpenStackClients.keystone().AndReturn(
- fakes.FakeKeystoneClient())
- neutronclient.Client.create_ipsec_site_connection(
- self.IPSEC_SITE_CONNECTION_CONF).AndRaise(
- vpnservice.NeutronClientException())
- self.m.ReplayAll()
- snippet = template_format.parse(ipsec_site_connection_template)
- self.stack = utils.parse_stack(snippet)
- rsrc = vpnservice.IPsecSiteConnection(
- 'ipsec_site_connection',
- snippet['Resources']['IPsecSiteConnection'],
- self.stack)
- error = self.assertRaises(exception.ResourceFailure,
- scheduler.TaskRunner(rsrc.create))
- self.assertEqual(
- 'NeutronClientException: An unknown exception occurred.',
- str(error))
- self.assertEqual((rsrc.CREATE, rsrc.FAILED), rsrc.state)
- self.m.VerifyAll()
-
- @utils.stack_delete_after
- def test_delete(self):
- neutronclient.Client.delete_ipsec_site_connection('con123')
- neutronclient.Client.show_ipsec_site_connection('con123').AndRaise(
- vpnservice.NeutronClientException(status_code=404))
- rsrc = self.create_ipsec_site_connection()
- self.m.ReplayAll()
- scheduler.TaskRunner(rsrc.create)()
- scheduler.TaskRunner(rsrc.delete)()
- self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state)
- self.m.VerifyAll()
-
- @utils.stack_delete_after
- def test_delete_already_gone(self):
- neutronclient.Client.delete_ipsec_site_connection('con123').AndRaise(
- vpnservice.NeutronClientException(status_code=404))
- rsrc = self.create_ipsec_site_connection()
- self.m.ReplayAll()
- scheduler.TaskRunner(rsrc.create)()
- scheduler.TaskRunner(rsrc.delete)()
- self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state)
- self.m.VerifyAll()
-
- @utils.stack_delete_after
- def test_delete_failed(self):
- neutronclient.Client.delete_ipsec_site_connection('con123').AndRaise(
- vpnservice.NeutronClientException(status_code=400))
- rsrc = self.create_ipsec_site_connection()
- self.m.ReplayAll()
- scheduler.TaskRunner(rsrc.create)()
- error = self.assertRaises(exception.ResourceFailure,
- scheduler.TaskRunner(rsrc.delete))
- self.assertEqual(
- 'NeutronClientException: An unknown exception occurred.',
- str(error))
- self.assertEqual((rsrc.DELETE, rsrc.FAILED), rsrc.state)
- self.m.VerifyAll()
-
- @utils.stack_delete_after
- def test_attribute(self):
- rsrc = self.create_ipsec_site_connection()
- neutronclient.Client.show_ipsec_site_connection(
- 'con123').MultipleTimes().AndReturn(
- self.IPSEC_SITE_CONNECTION_CONF)
- self.m.ReplayAll()
- scheduler.TaskRunner(rsrc.create)()
- self.assertEqual('IPsecSiteConnection', rsrc.FnGetAtt('name'))
- self.assertEqual('My new VPN connection', rsrc.FnGetAtt('description'))
- self.assertEqual('172.24.4.233', rsrc.FnGetAtt('peer_address'))
- self.assertEqual('172.24.4.233', rsrc.FnGetAtt('peer_id'))
- self.assertEqual(['10.2.0.0/24'], rsrc.FnGetAtt('peer_cidrs'))
- self.assertEqual('hold', rsrc.FnGetAtt('dpd')['actions'])
- self.assertEqual(30, rsrc.FnGetAtt('dpd')['interval'])
- self.assertEqual(120, rsrc.FnGetAtt('dpd')['timeout'])
- self.assertEqual('secret', rsrc.FnGetAtt('psk'))
- self.assertEqual('bi-directional', rsrc.FnGetAtt('initiator'))
- self.assertEqual(True, rsrc.FnGetAtt('admin_state_up'))
- self.assertEqual('ike123', rsrc.FnGetAtt('ikepolicy_id'))
- self.assertEqual('ips123', rsrc.FnGetAtt('ipsecpolicy_id'))
- self.assertEqual('vpn123', rsrc.FnGetAtt('vpnservice_id'))
- self.m.VerifyAll()
-
- @utils.stack_delete_after
- def test_attribute_failed(self):
- rsrc = self.create_ipsec_site_connection()
- self.m.ReplayAll()
- scheduler.TaskRunner(rsrc.create)()
- error = self.assertRaises(exception.InvalidTemplateAttribute,
- rsrc.FnGetAtt, 'non-existent_property')
- self.assertEqual(
- 'The Referenced Attribute (ipsec_site_connection '
- 'non-existent_property) is incorrect.',
- str(error))
- self.m.VerifyAll()
-
- @utils.stack_delete_after
- def test_update(self):
- rsrc = self.create_ipsec_site_connection()
- neutronclient.Client.update_ipsec_site_connection(
- 'con123', {'ipsec_site_connection': {'admin_state_up': False}})
- self.m.ReplayAll()
- scheduler.TaskRunner(rsrc.create)()
- update_template = copy.deepcopy(rsrc.t)
- update_template['Properties']['admin_state_up'] = False
- scheduler.TaskRunner(rsrc.update, update_template)()
- self.m.VerifyAll()
-
-
-@skipIf(neutronclient is None, 'neutronclient unavailable')
-class IKEPolicyTest(HeatTestCase):
-
- IKE_POLICY_CONF = {
- 'ikepolicy': {
- 'name': 'IKEPolicy',
- 'description': 'My new IKE policy',
- 'auth_algorithm': 'sha1',
- 'encryption_algorithm': '3des',
- 'phase1_negotiation_mode': 'main',
- 'lifetime': {
- 'units': 'seconds',
- 'value': 3600
- },
- 'pfs': 'group5',
- 'ike_version': 'v1'
- }
- }
-
- def setUp(self):
- super(IKEPolicyTest, self).setUp()
- self.m.StubOutWithMock(neutronclient.Client, 'create_ikepolicy')
- self.m.StubOutWithMock(neutronclient.Client, 'delete_ikepolicy')
- self.m.StubOutWithMock(neutronclient.Client, 'show_ikepolicy')
- self.m.StubOutWithMock(neutronclient.Client, 'update_ikepolicy')
- self.m.StubOutWithMock(clients.OpenStackClients, 'keystone')
- utils.setup_dummy_db()
-
- def create_ikepolicy(self):
- clients.OpenStackClients.keystone().AndReturn(
- fakes.FakeKeystoneClient())
- neutronclient.Client.create_ikepolicy(
- self.IKE_POLICY_CONF).AndReturn(
- {'ikepolicy': {'id': 'ike123'}})
- snippet = template_format.parse(ikepolicy_template)
- self.stack = utils.parse_stack(snippet)
- return vpnservice.IKEPolicy('ikepolicy',
- snippet['Resources']['IKEPolicy'],
- self.stack)
-
- @utils.stack_delete_after
- def test_create(self):
- rsrc = self.create_ikepolicy()
- self.m.ReplayAll()
- scheduler.TaskRunner(rsrc.create)()
- self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state)
- self.m.VerifyAll()
-
- @utils.stack_delete_after
- def test_create_failed(self):
- clients.OpenStackClients.keystone().AndReturn(
- fakes.FakeKeystoneClient())
- neutronclient.Client.create_ikepolicy(
- self.IKE_POLICY_CONF).AndRaise(
- vpnservice.NeutronClientException())
- self.m.ReplayAll()
- snippet = template_format.parse(ikepolicy_template)
- self.stack = utils.parse_stack(snippet)
- rsrc = vpnservice.IKEPolicy(
- 'ikepolicy',
- snippet['Resources']['IKEPolicy'],
- self.stack)
- error = self.assertRaises(exception.ResourceFailure,
- scheduler.TaskRunner(rsrc.create))
- self.assertEqual(
- 'NeutronClientException: An unknown exception occurred.',
- str(error))
- self.assertEqual((rsrc.CREATE, rsrc.FAILED), rsrc.state)
- self.m.VerifyAll()
-
- @utils.stack_delete_after
- def test_delete(self):
- neutronclient.Client.delete_ikepolicy('ike123')
- neutronclient.Client.show_ikepolicy('ike123').AndRaise(
- vpnservice.NeutronClientException(status_code=404))
- rsrc = self.create_ikepolicy()
- self.m.ReplayAll()
- scheduler.TaskRunner(rsrc.create)()
- scheduler.TaskRunner(rsrc.delete)()
- self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state)
- self.m.VerifyAll()
-
- @utils.stack_delete_after
- def test_delete_already_gone(self):
- neutronclient.Client.delete_ikepolicy('ike123').AndRaise(
- vpnservice.NeutronClientException(status_code=404))
- rsrc = self.create_ikepolicy()
- self.m.ReplayAll()
- scheduler.TaskRunner(rsrc.create)()
- scheduler.TaskRunner(rsrc.delete)()
- self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state)
- self.m.VerifyAll()
-
- @utils.stack_delete_after
- def test_delete_failed(self):
- neutronclient.Client.delete_ikepolicy('ike123').AndRaise(
- vpnservice.NeutronClientException(status_code=400))
- rsrc = self.create_ikepolicy()
- self.m.ReplayAll()
- scheduler.TaskRunner(rsrc.create)()
- error = self.assertRaises(exception.ResourceFailure,
- scheduler.TaskRunner(rsrc.delete))
- self.assertEqual(
- 'NeutronClientException: An unknown exception occurred.',
- str(error))
- self.assertEqual((rsrc.DELETE, rsrc.FAILED), rsrc.state)
- self.m.VerifyAll()
-
- @utils.stack_delete_after
- def test_attribute(self):
- rsrc = self.create_ikepolicy()
- neutronclient.Client.show_ikepolicy(
- 'ike123').MultipleTimes().AndReturn(self.IKE_POLICY_CONF)
- self.m.ReplayAll()
- scheduler.TaskRunner(rsrc.create)()
- self.assertEqual('IKEPolicy', rsrc.FnGetAtt('name'))
- self.assertEqual('My new IKE policy', rsrc.FnGetAtt('description'))
- self.assertEqual('sha1', rsrc.FnGetAtt('auth_algorithm'))
- self.assertEqual('3des', rsrc.FnGetAtt('encryption_algorithm'))
- self.assertEqual('main', rsrc.FnGetAtt('phase1_negotiation_mode'))
- self.assertEqual('seconds', rsrc.FnGetAtt('lifetime')['units'])
- self.assertEqual(3600, rsrc.FnGetAtt('lifetime')['value'])
- self.assertEqual('group5', rsrc.FnGetAtt('pfs'))
- self.assertEqual('v1', rsrc.FnGetAtt('ike_version'))
- self.m.VerifyAll()
-
- @utils.stack_delete_after
- def test_attribute_failed(self):
- rsrc = self.create_ikepolicy()
- self.m.ReplayAll()
- scheduler.TaskRunner(rsrc.create)()
- error = self.assertRaises(exception.InvalidTemplateAttribute,
- rsrc.FnGetAtt, 'non-existent_property')
- self.assertEqual(
- 'The Referenced Attribute (ikepolicy non-existent_property) is '
- 'incorrect.',
- str(error))
- self.m.VerifyAll()
-
- @utils.stack_delete_after
- def test_update(self):
- rsrc = self.create_ikepolicy()
- neutronclient.Client.update_ikepolicy('ike123',
- {'ikepolicy': {
- 'name': 'New IKEPolicy'}})
- self.m.ReplayAll()
- scheduler.TaskRunner(rsrc.create)()
- update_template = copy.deepcopy(rsrc.t)
- update_template['Properties']['name'] = 'New IKEPolicy'
- scheduler.TaskRunner(rsrc.update, update_template)()
- self.m.VerifyAll()
-
-
-@skipIf(neutronclient is None, 'neutronclient unavailable')
-class IPsecPolicyTest(HeatTestCase):
-
- IPSEC_POLICY_CONF = {
- 'ipsecpolicy': {
- 'name': 'IPsecPolicy',
- 'description': 'My new IPsec policy',
- 'transform_protocol': 'esp',
- 'encapsulation_mode': 'tunnel',
- 'auth_algorithm': 'sha1',
- 'encryption_algorithm': '3des',
- 'lifetime': {
- 'units': 'seconds',
- 'value': 3600
- },
- 'pfs': 'group5'
- }
- }
-
- def setUp(self):
- super(IPsecPolicyTest, self).setUp()
- self.m.StubOutWithMock(neutronclient.Client, 'create_ipsecpolicy')
- self.m.StubOutWithMock(neutronclient.Client, 'delete_ipsecpolicy')
- self.m.StubOutWithMock(neutronclient.Client, 'show_ipsecpolicy')
- self.m.StubOutWithMock(neutronclient.Client, 'update_ipsecpolicy')
- self.m.StubOutWithMock(clients.OpenStackClients, 'keystone')
- utils.setup_dummy_db()
-
- def create_ipsecpolicy(self):
- clients.OpenStackClients.keystone().AndReturn(
- fakes.FakeKeystoneClient())
- neutronclient.Client.create_ipsecpolicy(
- self.IPSEC_POLICY_CONF).AndReturn(
- {'ipsecpolicy': {'id': 'ips123'}})
- snippet = template_format.parse(ipsecpolicy_template)
- self.stack = utils.parse_stack(snippet)
- return vpnservice.IPsecPolicy('ipsecpolicy',
- snippet['Resources']['IPsecPolicy'],
- self.stack)
-
- @utils.stack_delete_after
- def test_create(self):
- rsrc = self.create_ipsecpolicy()
- self.m.ReplayAll()
- scheduler.TaskRunner(rsrc.create)()
- self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state)
- self.m.VerifyAll()
-
- @utils.stack_delete_after
- def test_create_failed(self):
- clients.OpenStackClients.keystone().AndReturn(
- fakes.FakeKeystoneClient())
- neutronclient.Client.create_ipsecpolicy(
- self.IPSEC_POLICY_CONF).AndRaise(
- vpnservice.NeutronClientException())
- self.m.ReplayAll()
- snippet = template_format.parse(ipsecpolicy_template)
- self.stack = utils.parse_stack(snippet)
- rsrc = vpnservice.IPsecPolicy(
- 'ipsecpolicy',
- snippet['Resources']['IPsecPolicy'],
- self.stack)
- error = self.assertRaises(exception.ResourceFailure,
- scheduler.TaskRunner(rsrc.create))
- self.assertEqual(
- 'NeutronClientException: An unknown exception occurred.',
- str(error))
- self.assertEqual((rsrc.CREATE, rsrc.FAILED), rsrc.state)
- self.m.VerifyAll()
-
- @utils.stack_delete_after
- def test_delete(self):
- neutronclient.Client.delete_ipsecpolicy('ips123')
- neutronclient.Client.show_ipsecpolicy('ips123').AndRaise(
- vpnservice.NeutronClientException(status_code=404))
- rsrc = self.create_ipsecpolicy()
- self.m.ReplayAll()
- scheduler.TaskRunner(rsrc.create)()
- scheduler.TaskRunner(rsrc.delete)()
- self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state)
- self.m.VerifyAll()
-
- @utils.stack_delete_after
- def test_delete_already_gone(self):
- neutronclient.Client.delete_ipsecpolicy('ips123').AndRaise(
- vpnservice.NeutronClientException(status_code=404))
- rsrc = self.create_ipsecpolicy()
- self.m.ReplayAll()
- scheduler.TaskRunner(rsrc.create)()
- scheduler.TaskRunner(rsrc.delete)()
- self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state)
- self.m.VerifyAll()
-
- @utils.stack_delete_after
- def test_delete_failed(self):
- neutronclient.Client.delete_ipsecpolicy('ips123').AndRaise(
- vpnservice.NeutronClientException(status_code=400))
- rsrc = self.create_ipsecpolicy()
- self.m.ReplayAll()
- scheduler.TaskRunner(rsrc.create)()
- error = self.assertRaises(exception.ResourceFailure,
- scheduler.TaskRunner(rsrc.delete))
- self.assertEqual(
- 'NeutronClientException: An unknown exception occurred.',
- str(error))
- self.assertEqual((rsrc.DELETE, rsrc.FAILED), rsrc.state)
- self.m.VerifyAll()
-
- @utils.stack_delete_after
- def test_attribute(self):
- rsrc = self.create_ipsecpolicy()
- neutronclient.Client.show_ipsecpolicy(
- 'ips123').MultipleTimes().AndReturn(self.IPSEC_POLICY_CONF)
- self.m.ReplayAll()
- scheduler.TaskRunner(rsrc.create)()
- self.assertEqual('IPsecPolicy', rsrc.FnGetAtt('name'))
- self.assertEqual('My new IPsec policy', rsrc.FnGetAtt('description'))
- self.assertEqual('esp', rsrc.FnGetAtt('transform_protocol'))
- self.assertEqual('tunnel', rsrc.FnGetAtt('encapsulation_mode'))
- self.assertEqual('sha1', rsrc.FnGetAtt('auth_algorithm'))
- self.assertEqual('3des', rsrc.FnGetAtt('encryption_algorithm'))
- self.assertEqual('seconds', rsrc.FnGetAtt('lifetime')['units'])
- self.assertEqual(3600, rsrc.FnGetAtt('lifetime')['value'])
- self.assertEqual('group5', rsrc.FnGetAtt('pfs'))
- self.m.VerifyAll()
-
- @utils.stack_delete_after
- def test_attribute_failed(self):
- rsrc = self.create_ipsecpolicy()
- self.m.ReplayAll()
- scheduler.TaskRunner(rsrc.create)()
- error = self.assertRaises(exception.InvalidTemplateAttribute,
- rsrc.FnGetAtt, 'non-existent_property')
- self.assertEqual(
- 'The Referenced Attribute (ipsecpolicy non-existent_property) is '
- 'incorrect.',
- str(error))
- self.m.VerifyAll()
-
- @utils.stack_delete_after
- def test_update(self):
- rsrc = self.create_ipsecpolicy()
- neutronclient.Client.update_ipsecpolicy(
- 'ips123',
- {'ipsecpolicy': {'name': 'New IPsecPolicy'}})
- self.m.ReplayAll()
- scheduler.TaskRunner(rsrc.create)()
- update_template = copy.deepcopy(rsrc.t)
- update_template['Properties']['name'] = 'New IPsecPolicy'
- scheduler.TaskRunner(rsrc.update, update_template)()
- self.m.VerifyAll()
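The deleted tests above follow mox's record/replay/verify cycle, and the two delete tests encode a convention worth keeping in mind: a 404 from Neutron during delete counts as success (the resource is already gone), while any other error fails the resource. A minimal sketch of that convention using `unittest.mock` (`delete_policy` and `FakeNotFound` are illustrative stand-ins, not Heat's API):

```python
from unittest import mock


class FakeNotFound(Exception):
    status_code = 404


def delete_policy(client, policy_id):
    """Delete a policy, treating 'already gone' (404) as success."""
    try:
        client.delete_ipsecpolicy(policy_id)
    except Exception as exc:
        if getattr(exc, 'status_code', None) == 404:
            return 'COMPLETE'   # resource already absent: not an error
        raise                   # any other failure propagates (DELETE, FAILED)
    return 'COMPLETE'


client = mock.Mock()
client.delete_ipsecpolicy.side_effect = FakeNotFound()
assert delete_policy(client, 'ips123') == 'COMPLETE'
client.delete_ipsecpolicy.assert_called_once_with('ips123')
```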
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from heat.tests.v1_1 import fakes
-from heat.engine.resources import instance as instances
-from heat.engine.resources import nova_utils
-from heat.common import template_format
-from heat.engine import scheduler
-from heat.tests.common import HeatTestCase
-from heat.tests import utils
-
-
-nokey_template = '''
-{
- "AWSTemplateFormatVersion" : "2010-09-09",
- "Description" : "NoKey Test",
- "Parameters" : {},
- "Resources" : {
- "WebServer": {
- "Type": "AWS::EC2::Instance",
- "Properties": {
- "ImageId" : "foo",
- "InstanceType" : "m1.large",
- "UserData" : "some data"
- }
- }
- }
-}
-'''
-
-
-class nokeyTest(HeatTestCase):
- def setUp(self):
- super(nokeyTest, self).setUp()
- self.fc = fakes.FakeClient()
- utils.setup_dummy_db()
-
- def test_nokey_create(self):
-
- stack_name = 'instance_create_test_nokey_stack'
- t = template_format.parse(nokey_template)
- stack = utils.parse_stack(t, stack_name=stack_name)
-
- t['Resources']['WebServer']['Properties']['ImageId'] = 'CentOS 5.2'
- t['Resources']['WebServer']['Properties']['InstanceType'] = \
- '256 MB Server'
- instance = instances.Instance('create_instance_name',
- t['Resources']['WebServer'], stack)
-
- self.m.StubOutWithMock(instance, 'nova')
- instance.nova().MultipleTimes().AndReturn(self.fc)
-
- instance.t = instance.stack.resolve_runtime_data(instance.t)
-
- # need to resolve the template functions
- server_userdata = nova_utils.build_userdata(
- instance,
- instance.t['Properties']['UserData'])
- instance.mime_string = server_userdata
- self.m.StubOutWithMock(self.fc.servers, 'create')
- self.fc.servers.create(
- image=1, flavor=1, key_name=None,
- name=utils.PhysName(stack_name, instance.name),
- security_groups=None,
- userdata=server_userdata, scheduler_hints=None,
- meta=None, nics=None, availability_zone=None).AndReturn(
- self.fc.servers.list()[1])
- self.m.ReplayAll()
-
- scheduler.TaskRunner(instance.create)()
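The `test_nokey_create` case above ultimately asserts that when the template supplies no `KeyName`, the `servers.create` call receives `key_name=None`. The same check reads more directly with `unittest.mock` call assertions (a sketch; `create_server` is a hypothetical helper, not Heat code):

```python
from unittest import mock


def create_server(servers, name, image, flavor, key_name=None):
    # Forward the (possibly absent) key pair straight through to the client.
    return servers.create(name=name, image=image, flavor=flavor,
                          key_name=key_name)


servers = mock.Mock()
create_server(servers, 'web', image=1, flavor=1)
# No KeyName property was given, so the client must see key_name=None.
servers.create.assert_called_once_with(name='web', image=1, flavor=1,
                                       key_name=None)
```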
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-"""Tests for :module:'heat.engine.resources.nova_utls'."""
-
-import uuid
-
-from heat.common import exception
-from heat.engine.resources import nova_utils
-from heat.tests.common import HeatTestCase
-
-
-class NovaUtilsTests(HeatTestCase):
- """
- Basic tests for the helper methods in
-    :mod:`heat.engine.resources.nova_utils`.
- """
-
- def setUp(self):
- super(NovaUtilsTests, self).setUp()
- self.nova_client = self.m.CreateMockAnything()
-
- def test_get_image_id(self):
- """Tests the get_image_id function."""
- my_image = self.m.CreateMockAnything()
- img_id = str(uuid.uuid4())
- img_name = 'myfakeimage'
- my_image.id = img_id
- my_image.name = img_name
- self.nova_client.images = self.m.CreateMockAnything()
- self.nova_client.images.get(img_id).AndReturn(my_image)
- self.nova_client.images.list().MultipleTimes().AndReturn([my_image])
- self.m.ReplayAll()
- self.assertEqual(img_id, nova_utils.get_image_id(self.nova_client,
- img_id))
- self.assertEqual(img_id, nova_utils.get_image_id(self.nova_client,
- 'myfakeimage'))
- self.assertRaises(exception.ImageNotFound, nova_utils.get_image_id,
- self.nova_client, 'noimage')
- self.m.VerifyAll()
-
- def test_get_flavor_id(self):
- """Tests the get_flavor_id function."""
- flav_id = str(uuid.uuid4())
- flav_name = 'X-Large'
- my_flavor = self.m.CreateMockAnything()
- my_flavor.name = flav_name
- my_flavor.id = flav_id
- self.nova_client.flavors = self.m.CreateMockAnything()
- self.nova_client.flavors.list().MultipleTimes().AndReturn([my_flavor])
- self.m.ReplayAll()
- self.assertEqual(flav_id, nova_utils.get_flavor_id(self.nova_client,
- flav_name))
- self.assertEqual(flav_id, nova_utils.get_flavor_id(self.nova_client,
- flav_id))
- self.assertRaises(exception.FlavorMissing, nova_utils.get_flavor_id,
- self.nova_client, 'noflavor')
- self.m.VerifyAll()
-
- def test_get_keypair(self):
- """Tests the get_keypair function."""
- my_pub_key = 'a cool public key string'
- my_key_name = 'mykey'
- my_key = self.m.CreateMockAnything()
- my_key.public_key = my_pub_key
- my_key.name = my_key_name
- self.nova_client.keypairs = self.m.CreateMockAnything()
- self.nova_client.keypairs.list().MultipleTimes().AndReturn([my_key])
- self.m.ReplayAll()
- self.assertEqual(my_key, nova_utils.get_keypair(self.nova_client,
- my_key_name))
- self.assertRaises(exception.UserKeyPairMissing, nova_utils.get_keypair,
- self.nova_client, 'notakey')
- self.m.VerifyAll()
-
- def test_build_userdata(self):
- """Tests the build_userdata function."""
- resource = self.m.CreateMockAnything()
- resource.t = {}
- self.m.StubOutWithMock(nova_utils.cfg, 'CONF')
- cnf = nova_utils.cfg.CONF
- cnf.instance_user = 'testuser'
- cnf.heat_metadata_server_url = 'http://server.test:123'
- cnf.heat_watch_server_url = 'http://server.test:345'
- cnf.instance_connection_is_secure = False
- cnf.instance_connection_https_validate_certificates = False
- self.m.ReplayAll()
- data = nova_utils.build_userdata(resource)
- self.assertTrue("Content-Type: text/cloud-config;" in data)
- self.assertTrue("Content-Type: text/cloud-boothook;" in data)
- self.assertTrue("Content-Type: text/part-handler;" in data)
- self.assertTrue("Content-Type: text/x-cfninitdata;" in data)
- self.assertTrue("Content-Type: text/x-shellscript;" in data)
- self.assertTrue("http://server.test:345" in data)
- self.assertTrue("http://server.test:123" in data)
- self.assertTrue("[Boto]" in data)
- self.assertTrue('testuser' in data)
- self.m.VerifyAll()
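The `get_image_id`/`get_flavor_id` tests above exercise the same name-or-id resolution pattern: the helper accepts either an exact ID or a unique human-readable name and raises a specific exception when neither matches. A simplified sketch of that pattern (the client shape and `ImageNotFound` here are stand-ins, not the real Heat exceptions or novaclient API):

```python
class ImageNotFound(Exception):
    pass


def get_image_id(client, image_identifier):
    """Return the image ID matching either an exact ID or a name."""
    for image in client.list():
        if image_identifier in (image['id'], image['name']):
            return image['id']
    raise ImageNotFound(image_identifier)


class FakeImages:
    def list(self):
        return [{'id': 'abc-123', 'name': 'myfakeimage'}]


assert get_image_id(FakeImages(), 'abc-123') == 'abc-123'
assert get_image_id(FakeImages(), 'myfakeimage') == 'abc-123'
```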
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-
-import testtools
-import json
-
-from heat.common import exception
-from heat.engine import parameters
-from heat.engine import template
-
-
-class ParameterTest(testtools.TestCase):
-
- def new_parameter(self, name, schema, value=None,
- validate_value=True):
- tmpl = template.Template({template.PARAMETERS: {name:
- schema}})
- schema = tmpl.param_schemata()[name]
- return parameters.Parameter(name, schema, value,
- validate_value)
-
- def test_new_string(self):
- p = self.new_parameter('p', {'Type': 'String'}, validate_value=False)
- self.assertTrue(isinstance(p, parameters.StringParam))
-
- def test_new_number(self):
- p = self.new_parameter('p', {'Type': 'Number'}, validate_value=False)
- self.assertTrue(isinstance(p, parameters.NumberParam))
-
- def test_new_list(self):
- p = self.new_parameter('p', {'Type': 'CommaDelimitedList'},
- validate_value=False)
- self.assertTrue(isinstance(p, parameters.CommaDelimitedListParam))
-
- def test_new_json(self):
- p = self.new_parameter('p', {'Type': 'Json'}, validate_value=False)
- self.assertTrue(isinstance(p, parameters.JsonParam))
-
- def test_new_bad_type(self):
- self.assertRaises(ValueError, self.new_parameter, 'p',
- {'Type': 'List'})
-
- def test_new_no_type(self):
- self.assertRaises(KeyError, self.new_parameter,
- 'p', {'Default': 'blarg'})
-
- def test_default_no_override(self):
- p = self.new_parameter('defaulted', {'Type': 'String',
- 'Default': 'blarg'})
- self.assertTrue(p.has_default())
- self.assertEqual(p.default(), 'blarg')
- self.assertEqual(p.value(), 'blarg')
-
- def test_default_override(self):
- p = self.new_parameter('defaulted',
- {'Type': 'String',
- 'Default': 'blarg'},
- 'wibble')
- self.assertTrue(p.has_default())
- self.assertEqual(p.default(), 'blarg')
- self.assertEqual(p.value(), 'wibble')
-
- def test_default_invalid(self):
- schema = {'Type': 'String',
- 'AllowedValues': ['foo'],
- 'ConstraintDescription': 'wibble',
- 'Default': 'bar'}
- try:
- self.new_parameter('p', schema, 'foo')
- except ValueError as ve:
- msg = str(ve)
- self.assertNotEqual(msg.find('wibble'), -1)
- else:
- self.fail('ValueError not raised')
-
- def test_no_echo_true(self):
- p = self.new_parameter('anechoic',
- {'Type': 'String',
- 'NoEcho': 'true'},
- 'wibble')
- self.assertTrue(p.no_echo())
- self.assertNotEqual(str(p), 'wibble')
-
- def test_no_echo_true_caps(self):
- p = self.new_parameter('anechoic',
- {'Type': 'String',
- 'NoEcho': 'TrUe'},
- 'wibble')
- self.assertTrue(p.no_echo())
- self.assertNotEqual(str(p), 'wibble')
-
- def test_no_echo_false(self):
- p = self.new_parameter('echoic',
- {'Type': 'String',
- 'NoEcho': 'false'},
- 'wibble')
- self.assertFalse(p.no_echo())
- self.assertEqual(str(p), 'wibble')
-
- def test_description(self):
- description = 'Description of the parameter'
- p = self.new_parameter('p', {'Type': 'String',
- 'Description': description},
- validate_value=False)
- self.assertEqual(p.description(), description)
-
- def test_no_description(self):
- p = self.new_parameter('p', {'Type': 'String'}, validate_value=False)
- self.assertEqual(p.description(), '')
-
- def test_string_len_good(self):
- schema = {'Type': 'String',
- 'MinLength': '3',
- 'MaxLength': '3'}
- p = self.new_parameter('p', schema, 'foo')
- self.assertEqual(p.value(), 'foo')
-
- def test_string_underflow(self):
- schema = {'Type': 'String',
- 'ConstraintDescription': 'wibble',
- 'MinLength': '4'}
- try:
- self.new_parameter('p', schema, 'foo')
- except ValueError as ve:
- msg = str(ve)
- self.assertNotEqual(msg.find('wibble'), -1)
- else:
- self.fail('ValueError not raised')
-
- def test_string_overflow(self):
- schema = {'Type': 'String',
- 'ConstraintDescription': 'wibble',
- 'MaxLength': '2'}
- try:
- self.new_parameter('p', schema, 'foo')
- except ValueError as ve:
- msg = str(ve)
- self.assertNotEqual(msg.find('wibble'), -1)
- else:
- self.fail('ValueError not raised')
-
- def test_string_pattern_good(self):
- schema = {'Type': 'String',
- 'AllowedPattern': '[a-z]*'}
- p = self.new_parameter('p', schema, 'foo')
- self.assertEqual(p.value(), 'foo')
-
- def test_string_pattern_bad_prefix(self):
- schema = {'Type': 'String',
- 'ConstraintDescription': 'wibble',
- 'AllowedPattern': '[a-z]*'}
- try:
- self.new_parameter('p', schema, '1foo')
- except ValueError as ve:
- msg = str(ve)
- self.assertNotEqual(msg.find('wibble'), -1)
- else:
- self.fail('ValueError not raised')
-
- def test_string_pattern_bad_suffix(self):
- schema = {'Type': 'String',
- 'ConstraintDescription': 'wibble',
- 'AllowedPattern': '[a-z]*'}
- try:
- self.new_parameter('p', schema, 'foo1')
- except ValueError as ve:
- msg = str(ve)
- self.assertNotEqual(msg.find('wibble'), -1)
- else:
- self.fail('ValueError not raised')
-
- def test_string_value_list_good(self):
- schema = {'Type': 'String',
- 'AllowedValues': ['foo', 'bar', 'baz']}
- p = self.new_parameter('p', schema, 'bar')
- self.assertEqual(p.value(), 'bar')
-
- def test_string_value_list_bad(self):
- schema = {'Type': 'String',
- 'ConstraintDescription': 'wibble',
- 'AllowedValues': ['foo', 'bar', 'baz']}
- try:
- self.new_parameter('p', schema, 'blarg')
- except ValueError as ve:
- msg = str(ve)
- self.assertNotEqual(msg.find('wibble'), -1)
- else:
- self.fail('ValueError not raised')
-
- def test_number_int_good(self):
- schema = {'Type': 'Number',
- 'MinValue': '3',
- 'MaxValue': '3'}
- p = self.new_parameter('p', schema, '3')
- self.assertEqual(p.value(), '3')
-
- def test_number_float_good(self):
- schema = {'Type': 'Number',
- 'MinValue': '3.0',
- 'MaxValue': '3.0'}
- p = self.new_parameter('p', schema, '3.0')
- self.assertEqual(p.value(), '3.0')
-
- def test_number_low(self):
- schema = {'Type': 'Number',
- 'ConstraintDescription': 'wibble',
- 'MinValue': '4'}
- try:
- self.new_parameter('p', schema, '3')
- except ValueError as ve:
- msg = str(ve)
- self.assertNotEqual(msg.find('wibble'), -1)
- else:
- self.fail('ValueError not raised')
-
- def test_number_high(self):
- schema = {'Type': 'Number',
- 'ConstraintDescription': 'wibble',
- 'MaxValue': '2'}
- try:
- self.new_parameter('p', schema, '3')
- except ValueError as ve:
- msg = str(ve)
- self.assertNotEqual(msg.find('wibble'), -1)
- else:
- self.fail('ValueError not raised')
-
- def test_number_value_list_good(self):
- schema = {'Type': 'Number',
- 'AllowedValues': ['1', '3', '5']}
- p = self.new_parameter('p', schema, '5')
- self.assertEqual(p.value(), '5')
-
- def test_number_value_list_bad(self):
- schema = {'Type': 'Number',
- 'ConstraintDescription': 'wibble',
- 'AllowedValues': ['1', '3', '5']}
- try:
- self.new_parameter('p', schema, '2')
- except ValueError as ve:
- msg = str(ve)
- self.assertNotEqual(msg.find('wibble'), -1)
- else:
- self.fail('ValueError not raised')
-
- def test_list_value_list_good(self):
- schema = {'Type': 'CommaDelimitedList',
- 'AllowedValues': ['foo', 'bar', 'baz']}
- p = self.new_parameter('p', schema, 'baz,foo,bar')
- self.assertEqual(p.value(), 'baz,foo,bar')
-
- def test_list_value_list_bad(self):
- schema = {'Type': 'CommaDelimitedList',
- 'ConstraintDescription': 'wibble',
- 'AllowedValues': ['foo', 'bar', 'baz']}
- try:
- self.new_parameter('p', schema, 'foo,baz,blarg')
- except ValueError as ve:
- msg = str(ve)
- self.assertNotEqual(msg.find('wibble'), -1)
- else:
- self.fail('ValueError not raised')
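The string and number cases above all check the same behaviour: each schema constraint (`MinLength`, `MaxLength`, `AllowedValues`, `AllowedPattern`) rejects out-of-range values with a `ValueError` carrying the `ConstraintDescription`. A condensed sketch of that validation logic (simplified; the real handling lives in `heat.engine.parameters`):

```python
import re


def validate_string(value, schema):
    """Check a string value against a CFN-style parameter schema."""
    desc = schema.get('ConstraintDescription', 'constraint violated')
    if len(value) < int(schema.get('MinLength', 0)):
        raise ValueError(desc)
    if len(value) > int(schema.get('MaxLength', len(value))):
        raise ValueError(desc)
    allowed = schema.get('AllowedValues')
    if allowed is not None and value not in allowed:
        raise ValueError(desc)
    pattern = schema.get('AllowedPattern')
    # The pattern must match the whole value, not just a prefix.
    if pattern is not None and not re.match('^%s$' % pattern, value):
        raise ValueError(desc)
    return value


assert validate_string('foo', {'MinLength': '3', 'MaxLength': '3'}) == 'foo'
try:
    validate_string('foo1', {'AllowedPattern': '[a-z]*',
                             'ConstraintDescription': 'wibble'})
except ValueError as ve:
    assert 'wibble' in str(ve)
```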
-
- def test_map_value(self):
-        '''Happy path for value that's already a map.'''
- schema = {'Type': 'Json'}
- val = {"foo": "bar", "items": [1, 2, 3]}
- val_s = json.dumps(val)
- p = self.new_parameter('p', schema, val)
- self.assertEqual(val_s, p.value())
- self.assertEqual(val, p.parsed)
-
- def test_map_value_bad(self):
- '''Map value is not JSON parsable.'''
- schema = {'Type': 'Json',
- 'ConstraintDescription': 'wibble'}
- val = {"foo": "bar", "not_json": len}
- try:
- self.new_parameter('p', schema, val)
- except ValueError as verr:
- self.assertIn('Value must be valid JSON', str(verr))
- else:
- self.fail("Value error not raised")
-
- def test_map_value_parse(self):
- '''Happy path for value that's a string.'''
- schema = {'Type': 'Json'}
- val = {"foo": "bar", "items": [1, 2, 3]}
- val_s = json.dumps(val)
- p = self.new_parameter('p', schema, val_s)
- self.assertEqual(val_s, p.value())
- self.assertEqual(val, p.parsed)
-
- def test_map_value_bad_parse(self):
- '''Test value error for unparsable string value.'''
- schema = {'Type': 'Json',
- 'ConstraintDescription': 'wibble'}
- val = "I am not a map"
- try:
- self.new_parameter('p', schema, val)
- except ValueError as verr:
- self.assertIn('Value must be valid JSON', str(verr))
- else:
- self.fail("Value error not raised")
-
- def test_map_values_good(self):
- '''Happy path for map keys.'''
- schema = {'Type': 'Json',
- 'AllowedValues': ["foo", "bar", "baz"]}
- val = {"foo": "bar", "baz": [1, 2, 3]}
- val_s = json.dumps(val)
- p = self.new_parameter('p', schema, val_s)
- self.assertEqual(val_s, p.value())
- self.assertEqual(val, p.parsed)
-
- def test_map_values_bad(self):
- '''Test failure of invalid map keys.'''
- schema = {'Type': 'Json',
- 'AllowedValues': ["foo", "bar", "baz"]}
- val = {"foo": "bar", "items": [1, 2, 3]}
- try:
- self.new_parameter('p', schema, val)
- except ValueError as verr:
- self.assertIn("items", str(verr))
- else:
- self.fail("Value error not raised")
-
- def test_map_underrun(self):
- '''Test map length under MIN_LEN.'''
- schema = {'Type': 'Json',
- 'MinLength': 3}
- val = {"foo": "bar", "items": [1, 2, 3]}
- try:
- self.new_parameter('p', schema, val)
- except ValueError as verr:
- self.assertIn('underflows', str(verr))
- else:
- self.fail("Value error not raised")
-
- def test_map_overrun(self):
- '''Test map length over MAX_LEN.'''
- schema = {'Type': 'Json',
- 'MaxLength': 1}
- val = {"foo": "bar", "items": [1, 2, 3]}
- try:
- self.new_parameter('p', schema, val)
- except ValueError as verr:
- self.assertIn('overflows', str(verr))
- else:
- self.fail("Value error not raised")
-
- def test_missing_param(self):
- '''Test missing user parameter.'''
- self.assertRaises(exception.UserParameterMissing,
- self.new_parameter, 'p',
- {'Type': 'String'})
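The `Json` parameter tests above cover both input shapes: a dict is serialized for `value()` while keeping the parsed form, a string is parsed, and anything that cannot round-trip through JSON raises. A simplified stand-in for that behaviour (not the real `heat.engine.parameters.JsonParam`):

```python
import json


def parse_json_param(value):
    """Return (serialized, parsed) for a dict or JSON-string value."""
    if isinstance(value, str):
        try:
            return value, json.loads(value)
        except ValueError:
            raise ValueError('Value must be valid JSON')
    try:
        return json.dumps(value), value
    except TypeError:
        # e.g. a dict containing a non-serializable object
        raise ValueError('Value must be valid JSON')


val = {"foo": "bar", "items": [1, 2, 3]}
serialized, parsed = parse_json_param(val)
assert parsed == val
assert json.loads(serialized) == val
```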
-
-
-params_schema = json.loads('''{
- "Parameters" : {
- "User" : { "Type": "String" },
- "Defaulted" : {
- "Type": "String",
- "Default": "foobar"
- }
- }
-}''')
-
-
-class ParametersTest(testtools.TestCase):
- def new_parameters(self, stack_name, tmpl, user_params={}, stack_id=None,
- validate_value=True):
- tmpl = template.Template(tmpl)
- return parameters.Parameters(stack_name, tmpl, user_params, stack_id,
- validate_value)
-
- def test_pseudo_params(self):
- params = self.new_parameters('test_stack', {"Parameters": {}})
-
- self.assertEqual(params['AWS::StackName'], 'test_stack')
- self.assertEqual(params['AWS::StackId'], 'None')
- self.assertTrue('AWS::Region' in params)
-
- def test_pseudo_param_stackid(self):
- params = self.new_parameters('test_stack', {'Parameters': {}},
- stack_id='123::foo')
-
- self.assertEqual(params['AWS::StackId'], '123::foo')
- params.set_stack_id('456::bar')
- self.assertEqual(params['AWS::StackId'], '456::bar')
-
- def test_schema_invariance(self):
- params1 = self.new_parameters('test', params_schema,
- {'User': 'foo',
- 'Defaulted': 'wibble'})
- self.assertEqual(params1['Defaulted'], 'wibble')
-
- params2 = self.new_parameters('test', params_schema, {'User': 'foo'})
- self.assertEqual(params2['Defaulted'], 'foobar')
-
- def test_to_dict(self):
- template = {'Parameters': {'Foo': {'Type': 'String'},
- 'Bar': {'Type': 'Number', 'Default': '42'}}}
- params = self.new_parameters('test_params', template, {'Foo': 'foo'})
-
- as_dict = dict(params)
- self.assertEqual(as_dict['Foo'], 'foo')
- self.assertEqual(as_dict['Bar'], '42')
- self.assertEqual(as_dict['AWS::StackName'], 'test_params')
- self.assertTrue('AWS::Region' in as_dict)
-
- def test_map(self):
- template = {'Parameters': {'Foo': {'Type': 'String'},
- 'Bar': {'Type': 'Number', 'Default': '42'}}}
- params = self.new_parameters('test_params', template, {'Foo': 'foo'})
-
- expected = {'Foo': False,
- 'Bar': True,
- 'AWS::Region': True,
- 'AWS::StackId': True,
- 'AWS::StackName': True}
-
- self.assertEqual(params.map(lambda p: p.has_default()), expected)
-
- def test_map_str(self):
- template = {'Parameters': {'Foo': {'Type': 'String'},
- 'Bar': {'Type': 'Number'}}}
- params = self.new_parameters('test_params', template,
- {'Foo': 'foo', 'Bar': 42})
-
- expected = {'Foo': 'foo',
- 'Bar': '42',
- 'AWS::Region': 'ap-southeast-1',
- 'AWS::StackId': 'None',
- 'AWS::StackName': 'test_params'}
-
- self.assertEqual(params.map(str), expected)
-
- def test_unknown_params(self):
- user_params = {'Foo': 'wibble'}
- self.assertRaises(exception.UnknownUserParameter,
- self.new_parameters,
- 'test',
- params_schema,
- user_params)
-
- def test_missing_params(self):
- user_params = {}
- self.assertRaises(exception.UserParameterMissing,
- self.new_parameters,
- 'test',
- params_schema,
- user_params)
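The `ParametersTest` cases above show that pseudo parameters (`AWS::StackName`, `AWS::StackId`, `AWS::Region`) are injected alongside user-supplied values. A minimal sketch of that merge (`build_params` is hypothetical; the hard-coded region mirrors the `'ap-southeast-1'` expected in `test_map_str`):

```python
def build_params(stack_name, user_params, stack_id=None):
    """Combine pseudo parameters with user-supplied parameters."""
    params = {'AWS::StackName': stack_name,
              'AWS::StackId': str(stack_id),   # 'None' when no id yet
              'AWS::Region': 'ap-southeast-1'}
    params.update(user_params)
    return params


params = build_params('test_stack', {})
assert params['AWS::StackName'] == 'test_stack'
assert params['AWS::StackId'] == 'None'
assert 'AWS::Region' in params
```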
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import json
-import time
-
-from heat.engine import environment
-from heat.common import exception
-from heat.common import template_format
-from heat.common import urlfetch
-from heat.engine import clients
-from heat.engine import resource
-from heat.engine import parser
-from heat.engine import parameters
-from heat.engine import scheduler
-from heat.engine import template
-
-from heat.tests.fakes import FakeKeystoneClient
-from heat.tests.common import HeatTestCase
-from heat.tests import utils
-from heat.tests.v1_1 import fakes
-from heat.tests import generic_resource as generic_rsrc
-
-import heat.db.api as db_api
-
-
-def join(raw):
- return parser.Template.resolve_joins(raw)
-
-
-class ParserTest(HeatTestCase):
-
- def test_list(self):
- raw = ['foo', 'bar', 'baz']
- parsed = join(raw)
- for i in xrange(len(raw)):
- self.assertEqual(parsed[i], raw[i])
- self.assertTrue(parsed is not raw)
-
- def test_dict(self):
- raw = {'foo': 'bar', 'blarg': 'wibble'}
- parsed = join(raw)
- for k in raw:
- self.assertEqual(parsed[k], raw[k])
- self.assertTrue(parsed is not raw)
-
- def test_dict_list(self):
- raw = {'foo': ['bar', 'baz'], 'blarg': 'wibble'}
- parsed = join(raw)
- self.assertEqual(parsed['blarg'], raw['blarg'])
- for i in xrange(len(raw['foo'])):
- self.assertEqual(parsed['foo'][i], raw['foo'][i])
- self.assertTrue(parsed is not raw)
- self.assertTrue(parsed['foo'] is not raw['foo'])
-
- def test_list_dict(self):
- raw = [{'foo': 'bar', 'blarg': 'wibble'}, 'baz', 'quux']
- parsed = join(raw)
- for i in xrange(1, len(raw)):
- self.assertEqual(parsed[i], raw[i])
- for k in raw[0]:
- self.assertEqual(parsed[0][k], raw[0][k])
- self.assertTrue(parsed is not raw)
- self.assertTrue(parsed[0] is not raw[0])
-
- def test_join(self):
- raw = {'Fn::Join': [' ', ['foo', 'bar', 'baz']]}
- self.assertEqual(join(raw), 'foo bar baz')
-
- def test_join_none(self):
- raw = {'Fn::Join': [' ', ['foo', None, 'baz']]}
- self.assertEqual(join(raw), 'foo baz')
-
- def test_join_list(self):
- raw = [{'Fn::Join': [' ', ['foo', 'bar', 'baz']]}, 'blarg', 'wibble']
- parsed = join(raw)
- self.assertEqual(parsed[0], 'foo bar baz')
- for i in xrange(1, len(raw)):
- self.assertEqual(parsed[i], raw[i])
- self.assertTrue(parsed is not raw)
-
- def test_join_dict_val(self):
- raw = {'quux': {'Fn::Join': [' ', ['foo', 'bar', 'baz']]},
- 'blarg': 'wibble'}
- parsed = join(raw)
- self.assertEqual(parsed['quux'], 'foo bar baz')
- self.assertEqual(parsed['blarg'], raw['blarg'])
- self.assertTrue(parsed is not raw)
-
- def test_join_recursive(self):
- raw = {'Fn::Join': ['\n', [{'Fn::Join':
- [' ', ['foo', 'bar']]}, 'baz']]}
- self.assertEqual(join(raw), 'foo bar\nbaz')
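The `ParserTest` cases above pin down `Fn::Join` resolution: traversal copies lists and dicts, inner joins resolve before outer ones, and `None` entries are dropped. A minimal stand-in for `parser.Template.resolve_joins` (simplified; the real version also type-checks its arguments):

```python
def resolve_joins(snippet):
    """Recursively resolve Fn::Join in a template snippet."""
    if isinstance(snippet, dict):
        if 'Fn::Join' in snippet:
            delim, items = snippet['Fn::Join']
            resolved = [resolve_joins(i) for i in items]
            # None entries (e.g. from unresolvable refs) are skipped.
            return delim.join(s for s in resolved if s is not None)
        return dict((k, resolve_joins(v)) for k, v in snippet.items())
    if isinstance(snippet, list):
        return [resolve_joins(i) for i in snippet]
    return snippet


raw = {'Fn::Join': ['\n', [{'Fn::Join': [' ', ['foo', 'bar']]}, 'baz']]}
assert resolve_joins(raw) == 'foo bar\nbaz'
assert resolve_joins({'Fn::Join': [' ', ['foo', None, 'baz']]}) == 'foo baz'
```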
-
-
-mapping_template = template_format.parse('''{
- "Mappings" : {
- "ValidMapping" : {
- "TestKey" : { "TestValue" : "wibble" }
- },
- "InvalidMapping" : {
- "ValueList" : [ "foo", "bar" ],
- "ValueString" : "baz"
- },
- "MapList": [ "foo", { "bar" : "baz" } ],
- "MapString": "foobar"
- }
-}''')
-
-
-class TemplateTest(HeatTestCase):
-
- def setUp(self):
- super(TemplateTest, self).setUp()
- self.ctx = utils.dummy_context()
-
- def test_defaults(self):
- empty = parser.Template({})
- try:
- empty[template.VERSION]
- except KeyError:
- pass
- else:
- self.fail('Expected KeyError for version not present')
- self.assertEqual(empty[template.DESCRIPTION], 'No description')
- self.assertEqual(empty[template.MAPPINGS], {})
- self.assertEqual(empty[template.PARAMETERS], {})
- self.assertEqual(empty[template.RESOURCES], {})
- self.assertEqual(empty[template.OUTPUTS], {})
-
- def test_invalid_template(self):
- scanner_error = '''
-1
-Mappings:
- ValidMapping:
- TestKey: TestValue
-'''
- parser_error = '''
-Mappings:
- ValidMapping:
- TestKey: {TestKey1: "Value1" TestKey2: "Value2"}
-'''
-
- self.assertRaises(ValueError, template_format.parse, scanner_error)
- self.assertRaises(ValueError, template_format.parse, parser_error)
-
- def test_invalid_section(self):
- tmpl = parser.Template({'Foo': ['Bar']})
- try:
- tmpl['Foo']
- except KeyError:
- pass
- else:
- self.fail('Expected KeyError for invalid template key')
-
- def test_find_in_map(self):
- tmpl = parser.Template(mapping_template)
- find = {'Fn::FindInMap': ["ValidMapping", "TestKey", "TestValue"]}
- self.assertEqual(tmpl.resolve_find_in_map(find), "wibble")
-
- def test_find_in_invalid_map(self):
- tmpl = parser.Template(mapping_template)
- finds = ({'Fn::FindInMap': ["InvalidMapping", "ValueList", "foo"]},
- {'Fn::FindInMap': ["InvalidMapping", "ValueString", "baz"]},
- {'Fn::FindInMap': ["MapList", "foo", "bar"]},
- {'Fn::FindInMap': ["MapString", "foo", "bar"]})
-
- for find in finds:
- self.assertRaises(KeyError, tmpl.resolve_find_in_map, find)
-
- def test_bad_find_in_map(self):
- tmpl = parser.Template(mapping_template)
- finds = ({'Fn::FindInMap': "String"},
- {'Fn::FindInMap': {"Dict": "String"}},
- {'Fn::FindInMap': ["ShortList", "foo"]},
- {'Fn::FindInMap': ["ReallyShortList"]})
-
- for find in finds:
- self.assertRaises(KeyError, tmpl.resolve_find_in_map, find)
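Both `FindInMap` failure tests above expect `KeyError` whether the arguments are malformed or the mapping levels are not nested dicts. A simplified stand-in for `resolve_find_in_map` showing that three-level `[map, key, value]` lookup:

```python
def resolve_find_in_map(find, mappings):
    """Resolve Fn::FindInMap against a template Mappings section."""
    args = find['Fn::FindInMap']
    if not isinstance(args, list) or len(args) != 3:
        raise KeyError(str(args))   # too few args, or not a list at all
    mapping, key, value = args
    try:
        return mappings[mapping][key][value]
    except TypeError:
        # A level was a string or list instead of a dict.
        raise KeyError(str(args))


mappings = {'ValidMapping': {'TestKey': {'TestValue': 'wibble'}}}
find = {'Fn::FindInMap': ['ValidMapping', 'TestKey', 'TestValue']}
assert resolve_find_in_map(find, mappings) == 'wibble'
```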
-
- def test_param_refs(self):
- params = {'foo': 'bar', 'blarg': 'wibble'}
- p_snippet = {"Ref": "foo"}
- self.assertEqual(parser.Template.resolve_param_refs(p_snippet, params),
- "bar")
-
- def test_param_refs_resource(self):
- params = {'foo': 'bar', 'blarg': 'wibble'}
- r_snippet = {"Ref": "baz"}
- self.assertEqual(parser.Template.resolve_param_refs(r_snippet, params),
- r_snippet)
-
- def test_param_ref_missing(self):
- tmpl = {'Parameters': {'foo': {'Type': 'String', 'Required': True}}}
- tmpl = parser.Template(tmpl)
- params = parameters.Parameters('test', tmpl, validate_value=False)
- snippet = {"Ref": "foo"}
- self.assertRaises(exception.UserParameterMissing,
- parser.Template.resolve_param_refs,
- snippet, params)
-
- def test_resource_refs(self):
- resources = {'foo': self.m.CreateMock(resource.Resource),
- 'blarg': self.m.CreateMock(resource.Resource)}
- resources['foo'].FnGetRefId().AndReturn('bar')
- self.m.ReplayAll()
-
- r_snippet = {"Ref": "foo"}
- self.assertEqual(parser.Template.resolve_resource_refs(r_snippet,
- resources),
- "bar")
- self.m.VerifyAll()
-
- def test_resource_refs_param(self):
- resources = {'foo': 'bar', 'blarg': 'wibble'}
- p_snippet = {"Ref": "baz"}
- self.assertEqual(parser.Template.resolve_resource_refs(p_snippet,
- resources),
- p_snippet)
-
- def test_select_from_list(self):
- data = {"Fn::Select": ["1", ["foo", "bar"]]}
- self.assertEqual(parser.Template.resolve_select(data), "bar")
-
- def test_select_from_list_not_int(self):
- data = {"Fn::Select": ["one", ["foo", "bar"]]}
- self.assertRaises(TypeError, parser.Template.resolve_select,
- data)
-
- def test_select_from_list_out_of_bound(self):
- data = {"Fn::Select": ["3", ["foo", "bar"]]}
- self.assertRaises(IndexError, parser.Template.resolve_select,
- data)
-
- def test_select_from_dict(self):
- data = {"Fn::Select": ["red", {"red": "robin", "re": "foo"}]}
- self.assertEqual(parser.Template.resolve_select(data), "robin")
-
- def test_select_from_none(self):
- data = {"Fn::Select": ["red", None]}
- self.assertEqual(parser.Template.resolve_select(data), "")
-
- def test_select_from_dict_not_str(self):
- data = {"Fn::Select": ["1", {"red": "robin", "re": "foo"}]}
- self.assertRaises(TypeError, parser.Template.resolve_select,
- data)
-
- def test_select_from_dict_not_existing(self):
- data = {"Fn::Select": ["green", {"red": "robin", "re": "foo"}]}
- self.assertRaises(KeyError, parser.Template.resolve_select,
- data)
-
- def test_select_from_serialized_json_map(self):
- js = json.dumps({"red": "robin", "re": "foo"})
- data = {"Fn::Select": ["re", js]}
- self.assertEqual(parser.Template.resolve_select(data), "foo")
-
- def test_select_from_serialized_json_list(self):
- js = json.dumps(["foo", "fee", "fum"])
- data = {"Fn::Select": ["0", js]}
- self.assertEqual(parser.Template.resolve_select(data), "foo")
-
- def test_select_from_serialized_json_wrong(self):
- js = "this is really not serialized json"
- data = {"Fn::Select": ["not", js]}
- self.assertRaises(ValueError, parser.Template.resolve_select,
- data)
-
- def test_select_wrong_num_args(self):
- join0 = {"Fn::Select": []}
- self.assertRaises(ValueError, parser.Template.resolve_select,
- join0)
- join1 = {"Fn::Select": ["4"]}
- self.assertRaises(ValueError, parser.Template.resolve_select,
- join1)
- join3 = {"Fn::Select": ["foo", {"foo": "bar"}, ""]}
- self.assertRaises(ValueError, parser.Template.resolve_select,
- join3)
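The `Fn::Select` tests above cover four container shapes: a list indexed by a numeric string, a dict keyed by name, a serialized JSON string that is parsed first, and `None`, which maps to the empty string. A simplified sketch of those semantics (not the real `resolve_select`; its exact exception types differ in some corner cases):

```python
import json


def resolve_select(data):
    """Resolve a {'Fn::Select': [selector, container]} snippet."""
    # Unpacking enforces exactly two arguments; [] or ['4'] raise ValueError.
    selector, container = data['Fn::Select']
    if container is None:
        return ''                          # missing container -> empty string
    if isinstance(container, str):
        container = json.loads(container)  # ValueError if not valid JSON
    if isinstance(container, list):
        return container[int(selector)]    # bad/out-of-range index raises
    return container[selector]             # missing dict key -> KeyError


assert resolve_select({'Fn::Select': ['1', ['foo', 'bar']]}) == 'bar'
assert resolve_select({'Fn::Select': ['red', {'red': 'robin'}]}) == 'robin'
assert resolve_select({'Fn::Select': ['red', None]}) == ''
```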
-
- def test_join_reduce(self):
- join = {"Fn::Join": [" ", ["foo", "bar", "baz", {'Ref': 'baz'},
- "bink", "bonk"]]}
- self.assertEqual(
- parser.Template.reduce_joins(join),
- {"Fn::Join": [" ", ["foo bar baz", {'Ref': 'baz'}, "bink bonk"]]})
-
- join = {"Fn::Join": [" ", ["foo", {'Ref': 'baz'},
- "bink"]]}
- self.assertEqual(
- parser.Template.reduce_joins(join),
- {"Fn::Join": [" ", ["foo", {'Ref': 'baz'}, "bink"]]})
-
- join = {"Fn::Join": [" ", [{'Ref': 'baz'}]]}
- self.assertEqual(
- parser.Template.reduce_joins(join),
- {"Fn::Join": [" ", [{'Ref': 'baz'}]]})
-
- def test_join(self):
- join = {"Fn::Join": [" ", ["foo", "bar"]]}
- self.assertEqual(parser.Template.resolve_joins(join), "foo bar")
-
- def test_join_string(self):
- join = {"Fn::Join": [" ", "foo"]}
- self.assertRaises(TypeError, parser.Template.resolve_joins,
- join)
-
- def test_join_dict(self):
- join = {"Fn::Join": [" ", {"foo": "bar"}]}
- self.assertRaises(TypeError, parser.Template.resolve_joins,
- join)
-
- def test_join_wrong_num_args(self):
- join0 = {"Fn::Join": []}
- self.assertRaises(ValueError, parser.Template.resolve_joins,
- join0)
- join1 = {"Fn::Join": [" "]}
- self.assertRaises(ValueError, parser.Template.resolve_joins,
- join1)
- join3 = {"Fn::Join": [" ", {"foo": "bar"}, ""]}
- self.assertRaises(ValueError, parser.Template.resolve_joins,
- join3)
-
- def test_join_string_nodelim(self):
- join1 = {"Fn::Join": "o"}
- self.assertRaises(TypeError, parser.Template.resolve_joins,
- join1)
- join2 = {"Fn::Join": "oh"}
- self.assertRaises(TypeError, parser.Template.resolve_joins,
- join2)
- join3 = {"Fn::Join": "ohh"}
- self.assertRaises(TypeError, parser.Template.resolve_joins,
- join3)
-
- def test_join_dict_nodelim(self):
- join1 = {"Fn::Join": {"foo": "bar"}}
- self.assertRaises(TypeError, parser.Template.resolve_joins,
- join1)
- join2 = {"Fn::Join": {"foo": "bar", "blarg": "wibble"}}
- self.assertRaises(TypeError, parser.Template.resolve_joins,
- join2)
- join3 = {"Fn::Join": {"foo": "bar", "blarg": "wibble", "baz": "quux"}}
- self.assertRaises(TypeError, parser.Template.resolve_joins,
- join3)
-
- def test_split_ok(self):
- data = {"Fn::Split": [";", "foo; bar; achoo"]}
- self.assertEqual(parser.Template.resolve_split(data),
- ['foo', ' bar', ' achoo'])
-
- def test_split_no_delim_in_str(self):
- data = {"Fn::Split": [";", "foo, bar, achoo"]}
- self.assertEqual(parser.Template.resolve_split(data),
- ['foo, bar, achoo'])
-
- def test_split_no_delim(self):
- data = {"Fn::Split": ["foo, bar, achoo"]}
- self.assertRaises(ValueError, parser.Template.resolve_split, data)
-
- def test_split_no_list(self):
- data = {"Fn::Split": "foo, bar, achoo"}
- self.assertRaises(TypeError, parser.Template.resolve_split, data)
-
- def test_base64(self):
- snippet = {"Fn::Base64": "foobar"}
- # For now, the Base64 function just returns the original text, and
- # does not convert to base64 (see issue #133)
- self.assertEqual(parser.Template.resolve_base64(snippet), "foobar")
-
- def test_base64_list(self):
- list_snippet = {"Fn::Base64": ["foobar"]}
- self.assertRaises(TypeError, parser.Template.resolve_base64,
- list_snippet)
-
- def test_base64_dict(self):
- dict_snippet = {"Fn::Base64": {"foo": "bar"}}
- self.assertRaises(TypeError, parser.Template.resolve_base64,
- dict_snippet)
-
- def test_get_azs(self):
- snippet = {"Fn::GetAZs": ""}
- self.assertEqual(
- parser.Template.resolve_availability_zones(snippet, None),
- ["nova"])
-
- def test_get_azs_with_stack(self):
- snippet = {"Fn::GetAZs": ""}
- stack = parser.Stack(self.ctx, 'test_stack', parser.Template({}))
- self.m.StubOutWithMock(clients.OpenStackClients, 'nova')
- fc = fakes.FakeClient()
- clients.OpenStackClients.nova().MultipleTimes().AndReturn(fc)
- self.m.ReplayAll()
- self.assertEqual(
- parser.Template.resolve_availability_zones(snippet, stack),
- ["nova1"])
-
- def test_replace(self):
- snippet = {"Fn::Replace": [
- {'$var1': 'foo', '%var2%': 'bar'},
- '$var1 is %var2%'
- ]}
- self.assertEqual(
- parser.Template.resolve_replace(snippet),
- 'foo is bar')
-
- def test_replace_list_mapping(self):
- snippet = {"Fn::Replace": [
- ['var1', 'foo', 'var2', 'bar'],
- '$var1 is ${var2}'
- ]}
- self.assertRaises(TypeError, parser.Template.resolve_replace,
- snippet)
-
- def test_replace_dict(self):
- snippet = {"Fn::Replace": {}}
- self.assertRaises(TypeError, parser.Template.resolve_replace,
- snippet)
-
- def test_replace_missing_template(self):
- snippet = {"Fn::Replace": [['var1', 'foo', 'var2', 'bar']]}
- self.assertRaises(ValueError, parser.Template.resolve_replace,
- snippet)
-
- def test_replace_none_template(self):
- snippet = {"Fn::Replace": [['var1', 'foo', 'var2', 'bar'], None]}
- self.assertRaises(TypeError, parser.Template.resolve_replace,
- snippet)
-
- def test_replace_list_string(self):
- snippet = {"Fn::Replace": [
- {'var1': 'foo', 'var2': 'bar'},
- ['$var1 is ${var2}']
- ]}
- self.assertRaises(TypeError, parser.Template.resolve_replace,
- snippet)
-
- def test_replace_none_values(self):
- snippet = {"Fn::Replace": [
- {'$var1': None, '${var2}': None},
- '"$var1" is "${var2}"'
- ]}
- self.assertEqual(
- parser.Template.resolve_replace(snippet),
- '"" is ""')
-
- def test_replace_missing_key(self):
- snippet = {"Fn::Replace": [
- {'$var1': 'foo', 'var2': 'bar'},
- '"$var1" is "${var3}"'
- ]}
- self.assertEqual(
- parser.Template.resolve_replace(snippet),
- '"foo" is "${var3}"')
-
- def test_member_list2map_good(self):
- snippet = {"Fn::MemberListToMap": [
- 'Name', 'Value', ['.member.0.Name=metric',
- '.member.0.Value=cpu',
- '.member.1.Name=size',
- '.member.1.Value=56']]}
- self.assertEqual(
- {'metric': 'cpu', 'size': '56'},
- parser.Template.resolve_member_list_to_map(snippet))
-
- def test_member_list2map_good2(self):
- snippet = {"Fn::MemberListToMap": [
- 'Key', 'Value', ['.member.2.Key=metric',
- '.member.2.Value=cpu',
- '.member.5.Key=size',
- '.member.5.Value=56']]}
- self.assertEqual(
- {'metric': 'cpu', 'size': '56'},
- parser.Template.resolve_member_list_to_map(snippet))
-
- def test_member_list2map_no_key_or_val(self):
- snippet = {"Fn::MemberListToMap": [
- 'Key', ['.member.2.Key=metric',
- '.member.2.Value=cpu',
- '.member.5.Key=size',
- '.member.5.Value=56']]}
- self.assertRaises(TypeError,
- parser.Template.resolve_member_list_to_map,
- snippet)
-
- def test_member_list2map_no_list(self):
- snippet = {"Fn::MemberListToMap": [
- 'Key', '.member.2.Key=metric']}
- self.assertRaises(TypeError,
- parser.Template.resolve_member_list_to_map,
- snippet)
-
- def test_member_list2map_not_string(self):
- snippet = {"Fn::MemberListToMap": [
- 'Name', ['Value'], ['.member.0.Name=metric',
- '.member.0.Value=cpu',
- '.member.1.Name=size',
- '.member.1.Value=56']]}
- self.assertRaises(TypeError,
- parser.Template.resolve_member_list_to_map,
- snippet)
-
- def test_resource_facade(self):
- metadata_snippet = {'Fn::ResourceFacade': 'Metadata'}
- deletion_policy_snippet = {'Fn::ResourceFacade': 'DeletionPolicy'}
- update_policy_snippet = {'Fn::ResourceFacade': 'UpdatePolicy'}
-
- class DummyClass(object):
- pass
- parent_resource = DummyClass()
- parent_resource.metadata = '{"foo": "bar"}'
- parent_resource.t = {'DeletionPolicy': 'Retain',
- 'UpdatePolicy': '{"foo": "bar"}'}
- parent_resource.stack = parser.Stack(self.ctx, 'toplevel_stack',
- parser.Template({}))
- stack = parser.Stack(self.ctx, 'test_stack',
- parser.Template({}),
- parent_resource=parent_resource)
- self.assertEqual(
- parser.Template.resolve_resource_facade(metadata_snippet, stack),
- '{"foo": "bar"}')
- self.assertEqual(
- parser.Template.resolve_resource_facade(deletion_policy_snippet,
- stack), 'Retain')
- self.assertEqual(
- parser.Template.resolve_resource_facade(update_policy_snippet,
- stack), '{"foo": "bar"}')
-
- def test_resource_facade_invalid_arg(self):
- snippet = {'Fn::ResourceFacade': 'wibble'}
- stack = parser.Stack(self.ctx, 'test_stack', parser.Template({}))
- self.assertRaises(ValueError,
- parser.Template.resolve_resource_facade,
- snippet,
- stack)
-
- def test_resource_facade_missing_key(self):
- snippet = {'Fn::ResourceFacade': 'DeletionPolicy'}
-
- class DummyClass(object):
- pass
- parent_resource = DummyClass()
- parent_resource.metadata = '{"foo": "bar"}'
- parent_resource.t = {}
- parent_resource.stack = parser.Stack(self.ctx, 'toplevel_stack',
- parser.Template({}))
- stack = parser.Stack(self.ctx, 'test_stack',
- parser.Template({}),
- parent_resource=parent_resource)
- self.assertRaises(KeyError,
- parser.Template.resolve_resource_facade,
- snippet,
- stack)
-
-
-class StackTest(HeatTestCase):
- def setUp(self):
- super(StackTest, self).setUp()
-
- self.username = 'parser_stack_test_user'
-
- utils.setup_dummy_db()
- self.ctx = utils.dummy_context()
-
- resource._register_class('GenericResourceType',
- generic_rsrc.GenericResource)
- resource._register_class('ResourceWithPropsType',
- generic_rsrc.ResourceWithProps)
-
- self.m.ReplayAll()
-
- def test_state_defaults(self):
- stack = parser.Stack(self.ctx, 'test_stack', parser.Template({}))
- self.assertEqual(stack.state, (None, None))
- self.assertEqual(stack.status_reason, '')
-
- def test_no_auth_token(self):
- ctx = utils.dummy_context()
- ctx.auth_token = None
- self.m.StubOutWithMock(clients.OpenStackClients, 'keystone')
- clients.OpenStackClients.keystone().AndReturn(FakeKeystoneClient())
-
- self.m.ReplayAll()
- stack = parser.Stack(ctx, 'test_stack', parser.Template({}))
- self.assertEqual('abcd1234', stack.clients.auth_token)
-
- self.m.VerifyAll()
-
- def test_state(self):
- stack = parser.Stack(self.ctx, 'test_stack', parser.Template({}),
- action=parser.Stack.CREATE,
- status=parser.Stack.IN_PROGRESS)
- self.assertEqual(stack.state,
- (parser.Stack.CREATE, parser.Stack.IN_PROGRESS))
- stack.state_set(parser.Stack.CREATE, parser.Stack.COMPLETE, 'test')
- self.assertEqual(stack.state,
- (parser.Stack.CREATE, parser.Stack.COMPLETE))
- stack.state_set(parser.Stack.DELETE, parser.Stack.COMPLETE, 'test')
- self.assertEqual(stack.state,
- (parser.Stack.DELETE, parser.Stack.COMPLETE))
-
- def test_state_bad(self):
- stack = parser.Stack(self.ctx, 'test_stack', parser.Template({}),
- action=parser.Stack.CREATE,
- status=parser.Stack.IN_PROGRESS)
- self.assertEqual(stack.state,
- (parser.Stack.CREATE, parser.Stack.IN_PROGRESS))
- self.assertRaises(ValueError, stack.state_set,
- 'baad', parser.Stack.COMPLETE, 'test')
- self.assertRaises(ValueError, stack.state_set,
- parser.Stack.CREATE, 'oops', 'test')
-
- def test_status_reason(self):
- stack = parser.Stack(self.ctx, 'test_stack', parser.Template({}),
- status_reason='quux')
- self.assertEqual(stack.status_reason, 'quux')
- stack.state_set(parser.Stack.CREATE, parser.Stack.IN_PROGRESS,
- 'wibble')
- self.assertEqual(stack.status_reason, 'wibble')
-
- def test_load_nonexistent_id(self):
- self.assertRaises(exception.NotFound, parser.Stack.load,
- None, -1)
-
- def test_total_resources_empty(self):
- stack = parser.Stack(self.ctx, 'test_stack', parser.Template({}),
- status_reason='flimflam')
- self.assertEqual(0, stack.total_resources())
-
- def test_total_resources_generic(self):
- tpl = {'Resources':
- {'A': {'Type': 'GenericResourceType'}}}
- stack = parser.Stack(self.ctx, 'test_stack', parser.Template(tpl),
- status_reason='blarg')
- self.assertEqual(1, stack.total_resources())
-
- def _setup_nested(self, name):
- nested_tpl = ('{"Resources":{'
- '"A": {"Type": "GenericResourceType"},'
- '"B": {"Type": "GenericResourceType"}}}')
- tpl = {'Resources':
- {'A': {'Type': 'AWS::CloudFormation::Stack',
- 'Properties':
- {'TemplateURL': 'http://server.test/nested.json'}},
- 'B': {'Type': 'GenericResourceType'}}}
- self.m.StubOutWithMock(urlfetch, 'get')
- urlfetch.get('http://server.test/nested.json').AndReturn(nested_tpl)
- self.m.ReplayAll()
- self.stack = parser.Stack(self.ctx, 'test_stack', parser.Template(tpl),
- status_reason=name)
- self.stack.store()
- self.stack.create()
-
- @utils.stack_delete_after
- def test_total_resources_nested(self):
- self._setup_nested('zyzzyx')
- self.assertEqual(4, self.stack.total_resources())
- self.assertNotEqual(None, self.stack.resources['A'].nested())
- self.assertEqual(
- 2, self.stack.resources['A'].nested().total_resources())
- self.assertEqual(
- 4,
- self.stack.resources['A'].nested().root_stack.total_resources())
-
- @utils.stack_delete_after
- def test_root_stack(self):
- self._setup_nested('toor')
- self.assertEqual(self.stack, self.stack.root_stack)
- self.assertNotEqual(None, self.stack.resources['A'].nested())
- self.assertEqual(
- self.stack, self.stack.resources['A'].nested().root_stack)
-
- @utils.stack_delete_after
- def test_load_parent_resource(self):
- self.stack = parser.Stack(self.ctx, 'load_parent_resource',
- parser.Template({}))
- self.stack.store()
- stack = db_api.stack_get(self.ctx, self.stack.id)
-
- t = template.Template.load(self.ctx, stack.raw_template_id)
- self.m.StubOutWithMock(template.Template, 'load')
- template.Template.load(self.ctx, stack.raw_template_id).AndReturn(t)
-
- env = environment.Environment(stack.parameters)
- self.m.StubOutWithMock(environment, 'Environment')
- environment.Environment(stack.parameters).AndReturn(env)
-
- self.m.StubOutWithMock(parser.Stack, '__init__')
- parser.Stack.__init__(self.ctx, stack.name, t, env, stack.id,
- stack.action, stack.status, stack.status_reason,
- stack.timeout, True, stack.disable_rollback,
- 'parent', owner_id=None)
-
- self.m.ReplayAll()
- parser.Stack.load(self.ctx, stack_id=self.stack.id,
- parent_resource='parent')
-
- self.m.VerifyAll()
-
- # Note: tests that create a stack should be decorated with
- # @utils.stack_delete_after to ensure self.stack is properly cleaned up
- @utils.stack_delete_after
- def test_identifier(self):
- self.stack = parser.Stack(self.ctx, 'identifier_test',
- parser.Template({}))
- self.stack.store()
- identifier = self.stack.identifier()
- self.assertEqual(identifier.tenant, self.ctx.tenant_id)
- self.assertEqual(identifier.stack_name, 'identifier_test')
- self.assertTrue(identifier.stack_id)
- self.assertFalse(identifier.path)
-
- @utils.stack_delete_after
- def test_set_param_id(self):
- self.stack = parser.Stack(self.ctx, 'param_arn_test',
- parser.Template({}))
- exp_prefix = ('arn:openstack:heat::test_tenant_id'
- ':stacks/param_arn_test/')
- self.assertEqual(self.stack.parameters['AWS::StackId'],
- exp_prefix + 'None')
- self.stack.store()
- identifier = self.stack.identifier()
- self.assertEqual(self.stack.parameters['AWS::StackId'],
- exp_prefix + self.stack.id)
- self.assertEqual(self.stack.parameters['AWS::StackId'],
- identifier.arn())
- self.m.VerifyAll()
-
- @utils.stack_delete_after
- def test_load_param_id(self):
- self.stack = parser.Stack(self.ctx, 'param_load_arn_test',
- parser.Template({}))
- self.stack.store()
- identifier = self.stack.identifier()
- self.assertEqual(self.stack.parameters['AWS::StackId'],
- identifier.arn())
-
- newstack = parser.Stack.load(self.ctx, stack_id=self.stack.id)
- self.assertEqual(newstack.parameters['AWS::StackId'], identifier.arn())
-
- @utils.stack_delete_after
- def test_created_time(self):
- self.stack = parser.Stack(self.ctx, 'creation_time_test',
- parser.Template({}))
- self.assertEqual(self.stack.created_time, None)
- self.stack.store()
- self.assertNotEqual(self.stack.created_time, None)
-
- @utils.stack_delete_after
- def test_updated_time(self):
- self.stack = parser.Stack(self.ctx, 'update_time_test',
- parser.Template({}))
- self.assertEqual(self.stack.updated_time, None)
- self.stack.store()
- stored_time = self.stack.updated_time
- self.stack.state_set(self.stack.CREATE, self.stack.IN_PROGRESS, 'test')
- self.assertNotEqual(self.stack.updated_time, None)
- self.assertNotEqual(self.stack.updated_time, stored_time)
-
- @utils.stack_delete_after
- def test_delete(self):
- self.stack = parser.Stack(self.ctx, 'delete_test',
- parser.Template({}))
- stack_id = self.stack.store()
-
- db_s = db_api.stack_get(self.ctx, stack_id)
- self.assertNotEqual(db_s, None)
-
- self.stack.delete()
-
- db_s = db_api.stack_get(self.ctx, stack_id)
- self.assertEqual(db_s, None)
- self.assertEqual(self.stack.state,
- (parser.Stack.DELETE, parser.Stack.COMPLETE))
-
- @utils.stack_delete_after
- def test_suspend_resume(self):
- self.m.ReplayAll()
- tmpl = {'Resources': {'AResource': {'Type': 'GenericResourceType'}}}
- self.stack = parser.Stack(self.ctx, 'suspend_test',
- parser.Template(tmpl))
- stack_id = self.stack.store()
- self.stack.create()
- self.assertEqual(self.stack.state,
- (self.stack.CREATE, self.stack.COMPLETE))
-
- self.stack.suspend()
-
- self.assertEqual(self.stack.state,
- (self.stack.SUSPEND, self.stack.COMPLETE))
-
- self.stack.resume()
-
- self.assertEqual(self.stack.state,
- (self.stack.RESUME, self.stack.COMPLETE))
-
- self.m.VerifyAll()
-
- @utils.stack_delete_after
- def test_suspend_fail(self):
- tmpl = {'Resources': {'AResource': {'Type': 'GenericResourceType'}}}
- self.m.StubOutWithMock(generic_rsrc.GenericResource, 'handle_suspend')
- exc = Exception('foo')
- generic_rsrc.GenericResource.handle_suspend().AndRaise(exc)
- self.m.ReplayAll()
-
- self.stack = parser.Stack(self.ctx, 'suspend_test_fail',
- parser.Template(tmpl))
-
- stack_id = self.stack.store()
- self.stack.create()
- self.assertEqual(self.stack.state,
- (self.stack.CREATE, self.stack.COMPLETE))
-
- self.stack.suspend()
-
- self.assertEqual(self.stack.state,
- (self.stack.SUSPEND, self.stack.FAILED))
- self.assertEqual(self.stack.status_reason,
- 'Resource suspend failed: Exception: foo')
- self.m.VerifyAll()
-
- @utils.stack_delete_after
- def test_resume_fail(self):
- tmpl = {'Resources': {'AResource': {'Type': 'GenericResourceType'}}}
- self.m.StubOutWithMock(generic_rsrc.GenericResource, 'handle_resume')
- generic_rsrc.GenericResource.handle_resume().AndRaise(Exception('foo'))
- self.m.ReplayAll()
-
- self.stack = parser.Stack(self.ctx, 'resume_test_fail',
- parser.Template(tmpl))
-
- stack_id = self.stack.store()
- self.stack.create()
- self.assertEqual(self.stack.state,
- (self.stack.CREATE, self.stack.COMPLETE))
-
- self.stack.suspend()
-
- self.assertEqual(self.stack.state,
- (self.stack.SUSPEND, self.stack.COMPLETE))
-
- self.stack.resume()
-
- self.assertEqual(self.stack.state,
- (self.stack.RESUME, self.stack.FAILED))
- self.assertEqual(self.stack.status_reason,
- 'Resource resume failed: Exception: foo')
- self.m.VerifyAll()
-
- @utils.stack_delete_after
- def test_suspend_timeout(self):
- tmpl = {'Resources': {'AResource': {'Type': 'GenericResourceType'}}}
- self.m.StubOutWithMock(generic_rsrc.GenericResource, 'handle_suspend')
- exc = scheduler.Timeout('foo', 0)
- generic_rsrc.GenericResource.handle_suspend().AndRaise(exc)
- self.m.ReplayAll()
-
- self.stack = parser.Stack(self.ctx, 'suspend_test_fail_timeout',
- parser.Template(tmpl))
-
- stack_id = self.stack.store()
- self.stack.create()
- self.assertEqual(self.stack.state,
- (self.stack.CREATE, self.stack.COMPLETE))
-
- self.stack.suspend()
-
- self.assertEqual(self.stack.state,
- (self.stack.SUSPEND, self.stack.FAILED))
- self.assertEqual(self.stack.status_reason, 'Suspend timed out')
- self.m.VerifyAll()
-
- @utils.stack_delete_after
- def test_resume_timeout(self):
- tmpl = {'Resources': {'AResource': {'Type': 'GenericResourceType'}}}
- self.m.StubOutWithMock(generic_rsrc.GenericResource, 'handle_resume')
- exc = scheduler.Timeout('foo', 0)
- generic_rsrc.GenericResource.handle_resume().AndRaise(exc)
- self.m.ReplayAll()
-
- self.stack = parser.Stack(self.ctx, 'resume_test_fail_timeout',
- parser.Template(tmpl))
-
- stack_id = self.stack.store()
- self.stack.create()
- self.assertEqual(self.stack.state,
- (self.stack.CREATE, self.stack.COMPLETE))
-
- self.stack.suspend()
-
- self.assertEqual(self.stack.state,
- (self.stack.SUSPEND, self.stack.COMPLETE))
-
- self.stack.resume()
-
- self.assertEqual(self.stack.state,
- (self.stack.RESUME, self.stack.FAILED))
-
- self.assertEqual(self.stack.status_reason, 'Resume timed out')
- self.m.VerifyAll()
-
- @utils.stack_delete_after
- def test_delete_rollback(self):
- self.stack = parser.Stack(self.ctx, 'delete_rollback_test',
- parser.Template({}), disable_rollback=False)
- stack_id = self.stack.store()
-
- db_s = db_api.stack_get(self.ctx, stack_id)
- self.assertNotEqual(db_s, None)
-
- self.stack.delete(action=self.stack.ROLLBACK)
-
- db_s = db_api.stack_get(self.ctx, stack_id)
- self.assertEqual(db_s, None)
- self.assertEqual(self.stack.state,
- (parser.Stack.ROLLBACK, parser.Stack.COMPLETE))
-
- @utils.stack_delete_after
- def test_delete_badaction(self):
- self.stack = parser.Stack(self.ctx, 'delete_badaction_test',
- parser.Template({}))
- stack_id = self.stack.store()
-
- db_s = db_api.stack_get(self.ctx, stack_id)
- self.assertNotEqual(db_s, None)
-
- self.stack.delete(action="wibble")
-
- db_s = db_api.stack_get(self.ctx, stack_id)
- self.assertNotEqual(db_s, None)
- self.assertEqual(self.stack.state,
- (parser.Stack.DELETE, parser.Stack.FAILED))
-
- @utils.stack_delete_after
- def test_update_badstate(self):
- self.stack = parser.Stack(self.ctx, 'test_stack', parser.Template({}),
- action=parser.Stack.CREATE,
- status=parser.Stack.FAILED)
- stack_id = self.stack.store()
- self.assertEqual(self.stack.state,
- (parser.Stack.CREATE, parser.Stack.FAILED))
- self.stack.update({})
- self.assertEqual(self.stack.state,
- (parser.Stack.UPDATE, parser.Stack.FAILED))
-
- @utils.stack_delete_after
- def test_resource_by_refid(self):
- tmpl = {'Resources': {'AResource': {'Type': 'GenericResourceType'}}}
-
- self.stack = parser.Stack(self.ctx, 'resource_by_refid_stack',
- template.Template(tmpl))
- self.stack.store()
- self.stack.create()
- self.assertEqual(self.stack.state,
- (parser.Stack.CREATE, parser.Stack.COMPLETE))
- self.assertTrue('AResource' in self.stack)
- rsrc = self.stack['AResource']
- rsrc.resource_id_set('aaaa')
- self.assertNotEqual(None, rsrc)
- self.assertEqual(rsrc, self.stack.resource_by_refid('aaaa'))
-
- rsrc.state_set(rsrc.DELETE, rsrc.IN_PROGRESS)
- try:
- self.assertEqual(None, self.stack.resource_by_refid('aaaa'))
- self.assertEqual(None, self.stack.resource_by_refid('bbbb'))
- finally:
- rsrc.state_set(rsrc.CREATE, rsrc.COMPLETE)
-
- @utils.stack_delete_after
- def test_update_add(self):
- tmpl = {'Resources': {'AResource': {'Type': 'GenericResourceType'}}}
-
- self.stack = parser.Stack(self.ctx, 'update_test_stack',
- template.Template(tmpl))
- self.stack.store()
- self.stack.create()
- self.assertEqual(self.stack.state,
- (parser.Stack.CREATE, parser.Stack.COMPLETE))
-
- tmpl2 = {'Resources': {
- 'AResource': {'Type': 'GenericResourceType'},
- 'BResource': {'Type': 'GenericResourceType'}}}
- updated_stack = parser.Stack(self.ctx, 'updated_stack',
- template.Template(tmpl2))
- self.stack.update(updated_stack)
- self.assertEqual(self.stack.state,
- (parser.Stack.UPDATE, parser.Stack.COMPLETE))
- self.assertTrue('BResource' in self.stack)
-
- @utils.stack_delete_after
- def test_update_remove(self):
- tmpl = {'Resources': {
- 'AResource': {'Type': 'GenericResourceType'},
- 'BResource': {'Type': 'GenericResourceType'}}}
-
- self.stack = parser.Stack(self.ctx, 'update_test_stack',
- template.Template(tmpl))
- self.stack.store()
- self.stack.create()
- self.assertEqual(self.stack.state,
- (parser.Stack.CREATE, parser.Stack.COMPLETE))
-
- tmpl2 = {'Resources': {'AResource': {'Type': 'GenericResourceType'}}}
-
- updated_stack = parser.Stack(self.ctx, 'updated_stack',
- template.Template(tmpl2))
- self.stack.update(updated_stack)
- self.assertEqual(self.stack.state,
- (parser.Stack.UPDATE, parser.Stack.COMPLETE))
- self.assertFalse('BResource' in self.stack)
-
- @utils.stack_delete_after
- def test_update_description(self):
- tmpl = {'Description': 'ATemplate',
- 'Resources': {'AResource': {'Type': 'GenericResourceType'}}}
-
- self.stack = parser.Stack(self.ctx, 'update_test_stack',
- template.Template(tmpl))
- self.stack.store()
- self.stack.create()
- self.assertEqual(self.stack.state,
- (parser.Stack.CREATE, parser.Stack.COMPLETE))
-
- tmpl2 = {'Description': 'BTemplate',
- 'Resources': {'AResource': {'Type': 'GenericResourceType'}}}
-
- updated_stack = parser.Stack(self.ctx, 'updated_stack',
- template.Template(tmpl2))
- self.stack.update(updated_stack)
- self.assertEqual(self.stack.state,
- (parser.Stack.UPDATE, parser.Stack.COMPLETE))
- self.assertEqual(self.stack.t[template.DESCRIPTION], 'BTemplate')
-
- @utils.stack_delete_after
- def test_update_modify_ok_replace(self):
- tmpl = {'Resources': {'AResource': {'Type': 'ResourceWithPropsType',
- 'Properties': {'Foo': 'abc'}}}}
-
- self.stack = parser.Stack(self.ctx, 'update_test_stack',
- template.Template(tmpl))
- self.stack.store()
- self.stack.create()
- self.assertEqual(self.stack.state,
- (parser.Stack.CREATE, parser.Stack.COMPLETE))
-
- tmpl2 = {'Resources': {'AResource': {'Type': 'ResourceWithPropsType',
- 'Properties': {'Foo': 'xyz'}}}}
-
- updated_stack = parser.Stack(self.ctx, 'updated_stack',
- template.Template(tmpl2))
-
- # Calls to GenericResource.handle_update will raise
- # resource.UpdateReplace because we've not specified the modified
- # key/property in update_allowed_keys/update_allowed_properties
- self.m.ReplayAll()
-
- self.stack.update(updated_stack)
- self.assertEqual(self.stack.state,
- (parser.Stack.UPDATE, parser.Stack.COMPLETE))
- self.assertEqual(self.stack['AResource'].properties['Foo'], 'xyz')
- self.m.VerifyAll()
-
- @utils.stack_delete_after
- def test_update_modify_update_failed(self):
- tmpl = {'Resources': {'AResource': {'Type': 'ResourceWithPropsType',
- 'Properties': {'Foo': 'abc'}}}}
-
- self.stack = parser.Stack(self.ctx, 'update_test_stack',
- template.Template(tmpl),
- disable_rollback=True)
- self.stack.store()
- self.stack.create()
- self.assertEqual(self.stack.state,
- (parser.Stack.CREATE, parser.Stack.COMPLETE))
-
- res = self.stack['AResource']
- res.update_allowed_keys = ('Properties',)
- res.update_allowed_properties = ('Foo',)
-
- tmpl2 = {'Resources': {'AResource': {'Type': 'ResourceWithPropsType',
- 'Properties': {'Foo': 'xyz'}}}}
-
- updated_stack = parser.Stack(self.ctx, 'updated_stack',
- template.Template(tmpl2))
-
- # patch in a dummy handle_update
- self.m.StubOutWithMock(generic_rsrc.ResourceWithProps, 'handle_update')
- tmpl_diff = {'Properties': {'Foo': 'xyz'}}
- prop_diff = {'Foo': 'xyz'}
- generic_rsrc.ResourceWithProps.handle_update(
- tmpl2['Resources']['AResource'], tmpl_diff,
- prop_diff).AndRaise(Exception("Foo"))
- self.m.ReplayAll()
-
- self.stack.update(updated_stack)
- self.assertEqual(self.stack.state,
- (parser.Stack.UPDATE, parser.Stack.FAILED))
- self.m.VerifyAll()
-
- @utils.stack_delete_after
- def test_update_modify_replace_failed_delete(self):
- tmpl = {'Resources': {'AResource': {'Type': 'ResourceWithPropsType',
- 'Properties': {'Foo': 'abc'}}}}
-
- self.stack = parser.Stack(self.ctx, 'update_test_stack',
- template.Template(tmpl),
- disable_rollback=True)
- self.stack.store()
- self.stack.create()
- self.assertEqual(self.stack.state,
- (parser.Stack.CREATE, parser.Stack.COMPLETE))
-
- tmpl2 = {'Resources': {'AResource': {'Type': 'ResourceWithPropsType',
- 'Properties': {'Foo': 'xyz'}}}}
-
- updated_stack = parser.Stack(self.ctx, 'updated_stack',
- template.Template(tmpl2))
-
- # Calls to GenericResource.handle_update will raise
- # resource.UpdateReplace because we've not specified the modified
- # key/property in update_allowed_keys/update_allowed_properties
-
- # make the update fail deleting the existing resource
- self.m.StubOutWithMock(generic_rsrc.ResourceWithProps, 'handle_delete')
- generic_rsrc.ResourceWithProps.handle_delete().AndRaise(Exception)
- self.m.ReplayAll()
-
- self.stack.update(updated_stack)
- self.assertEqual(self.stack.state,
- (parser.Stack.UPDATE, parser.Stack.FAILED))
- self.m.VerifyAll()
- # Unset here so destroy() is not stubbed for stack.delete cleanup
- self.m.UnsetStubs()
-
- @utils.stack_delete_after
- def test_update_modify_replace_failed_create(self):
- tmpl = {'Resources': {'AResource': {'Type': 'ResourceWithPropsType',
- 'Properties': {'Foo': 'abc'}}}}
-
- self.stack = parser.Stack(self.ctx, 'update_test_stack',
- template.Template(tmpl),
- disable_rollback=True)
- self.stack.store()
- self.stack.create()
- self.assertEqual(self.stack.state,
- (parser.Stack.CREATE, parser.Stack.COMPLETE))
-
- tmpl2 = {'Resources': {'AResource': {'Type': 'ResourceWithPropsType',
- 'Properties': {'Foo': 'xyz'}}}}
-
- updated_stack = parser.Stack(self.ctx, 'updated_stack',
- template.Template(tmpl2))
-
- # Calls to GenericResource.handle_update will raise
- # resource.UpdateReplace because we've not specified the modified
- # key/property in update_allowed_keys/update_allowed_properties
-
- # patch in a dummy handle_create making the replace fail creating
- self.m.StubOutWithMock(generic_rsrc.ResourceWithProps, 'handle_create')
- generic_rsrc.ResourceWithProps.handle_create().AndRaise(Exception)
- self.m.ReplayAll()
-
- self.stack.update(updated_stack)
- self.assertEqual(self.stack.state,
- (parser.Stack.UPDATE, parser.Stack.FAILED))
- self.m.VerifyAll()
-
- @utils.stack_delete_after
- def test_update_add_failed_create(self):
- tmpl = {'Resources': {'AResource': {'Type': 'GenericResourceType'}}}
-
- self.stack = parser.Stack(self.ctx, 'update_test_stack',
- template.Template(tmpl))
- self.stack.store()
- self.stack.create()
- self.assertEqual(self.stack.state,
- (parser.Stack.CREATE, parser.Stack.COMPLETE))
-
- tmpl2 = {'Resources': {
- 'AResource': {'Type': 'GenericResourceType'},
- 'BResource': {'Type': 'GenericResourceType'}}}
- updated_stack = parser.Stack(self.ctx, 'updated_stack',
- template.Template(tmpl2))
-
- # patch in a dummy handle_create making BResource fail creating
- self.m.StubOutWithMock(generic_rsrc.GenericResource, 'handle_create')
- generic_rsrc.GenericResource.handle_create().AndRaise(Exception)
- self.m.ReplayAll()
-
- self.stack.update(updated_stack)
- self.assertEqual(self.stack.state,
- (parser.Stack.UPDATE, parser.Stack.FAILED))
- self.assertTrue('BResource' in self.stack)
-
- # Reload the stack from the DB and prove that it contains the failed
- # resource (to ensure it will be deleted on stack delete)
- re_stack = parser.Stack.load(self.ctx, stack_id=self.stack.id)
- self.assertTrue('BResource' in re_stack)
- self.m.VerifyAll()
-
- @utils.stack_delete_after
- def test_update_rollback(self):
- tmpl = {'Resources': {'AResource': {'Type': 'ResourceWithPropsType',
- 'Properties': {'Foo': 'abc'}}}}
-
- self.stack = parser.Stack(self.ctx, 'update_test_stack',
- template.Template(tmpl),
- disable_rollback=False)
- self.stack.store()
- self.stack.create()
- self.assertEqual(self.stack.state,
- (parser.Stack.CREATE, parser.Stack.COMPLETE))
-
- tmpl2 = {'Resources': {'AResource': {'Type': 'ResourceWithPropsType',
- 'Properties': {'Foo': 'xyz'}}}}
-
- updated_stack = parser.Stack(self.ctx, 'updated_stack',
- template.Template(tmpl2))
-
- # Calls to GenericResource.handle_update will raise
- # resource.UpdateReplace because we've not specified the modified
- # key/property in update_allowed_keys/update_allowed_properties
-
- # patch in a dummy handle_create making the replace fail when creating
- # the replacement rsrc
- self.m.StubOutWithMock(generic_rsrc.ResourceWithProps, 'handle_create')
- generic_rsrc.ResourceWithProps.handle_create().AndRaise(Exception)
- self.m.ReplayAll()
-
- self.stack.update(updated_stack)
- self.assertEqual(self.stack.state,
- (parser.Stack.ROLLBACK, parser.Stack.COMPLETE))
- self.assertEqual(self.stack['AResource'].properties['Foo'], 'abc')
- self.m.VerifyAll()
-
- @utils.stack_delete_after
- def test_update_rollback_fail(self):
- tmpl = {'Resources': {'AResource': {'Type': 'ResourceWithPropsType',
- 'Properties': {'Foo': 'abc'}}}}
-
- self.stack = parser.Stack(self.ctx, 'update_test_stack',
- template.Template(tmpl),
- disable_rollback=False)
- self.stack.store()
- self.stack.create()
- self.assertEqual(self.stack.state,
- (parser.Stack.CREATE, parser.Stack.COMPLETE))
-
- tmpl2 = {'Resources': {'AResource': {'Type': 'ResourceWithPropsType',
- 'Properties': {'Foo': 'xyz'}}}}
-
- updated_stack = parser.Stack(self.ctx, 'updated_stack',
- template.Template(tmpl2))
-
- # Calls to GenericResource.handle_update will raise
- # resource.UpdateReplace because we've not specified the modified
- # key/property in update_allowed_keys/update_allowed_properties
-
- # patch in a dummy handle_create making the replace fail when creating
- # the replacement rsrc, and again on the second call (rollback)
- self.m.StubOutWithMock(generic_rsrc.ResourceWithProps, 'handle_create')
- self.m.StubOutWithMock(generic_rsrc.ResourceWithProps, 'handle_delete')
- generic_rsrc.ResourceWithProps.handle_create().AndRaise(Exception)
- generic_rsrc.ResourceWithProps.handle_delete().AndRaise(Exception)
- self.m.ReplayAll()
-
- self.stack.update(updated_stack)
- self.assertEqual(self.stack.state,
- (parser.Stack.ROLLBACK, parser.Stack.FAILED))
- self.m.VerifyAll()
-
- @utils.stack_delete_after
- def test_update_rollback_add(self):
- tmpl = {'Resources': {'AResource': {'Type': 'GenericResourceType'}}}
-
- self.stack = parser.Stack(self.ctx, 'update_test_stack',
- template.Template(tmpl),
- disable_rollback=False)
- self.stack.store()
- self.stack.create()
- self.assertEqual(self.stack.state,
- (parser.Stack.CREATE, parser.Stack.COMPLETE))
-
- tmpl2 = {'Resources': {
- 'AResource': {'Type': 'GenericResourceType'},
- 'BResource': {'Type': 'GenericResourceType'}}}
-
- updated_stack = parser.Stack(self.ctx, 'updated_stack',
- template.Template(tmpl2))
-
- # patch in a dummy handle_create making the added resource (BResource)
- # fail creating; rollback should then remove it
- self.m.StubOutWithMock(generic_rsrc.GenericResource, 'handle_create')
- generic_rsrc.GenericResource.handle_create().AndRaise(Exception)
- self.m.ReplayAll()
-
- self.stack.update(updated_stack)
- self.assertEqual(self.stack.state,
- (parser.Stack.ROLLBACK, parser.Stack.COMPLETE))
- self.assertFalse('BResource' in self.stack)
- self.m.VerifyAll()
-
- @utils.stack_delete_after
- def test_update_rollback_remove(self):
- tmpl = {'Resources': {
- 'AResource': {'Type': 'GenericResourceType'},
- 'BResource': {'Type': 'ResourceWithPropsType'}}}
-
- self.stack = parser.Stack(self.ctx, 'update_test_stack',
- template.Template(tmpl),
- disable_rollback=False)
- self.stack.store()
- self.stack.create()
- self.assertEqual(self.stack.state,
- (parser.Stack.CREATE, parser.Stack.COMPLETE))
-
- tmpl2 = {'Resources': {'AResource': {'Type': 'GenericResourceType'}}}
-
- updated_stack = parser.Stack(self.ctx, 'updated_stack',
- template.Template(tmpl2))
-
- # patch in a dummy delete making the destroy fail
- self.m.StubOutWithMock(generic_rsrc.ResourceWithProps, 'handle_delete')
- generic_rsrc.ResourceWithProps.handle_delete().AndRaise(Exception)
- self.m.ReplayAll()
-
- self.stack.update(updated_stack)
- self.assertEqual(self.stack.state,
- (parser.Stack.ROLLBACK, parser.Stack.COMPLETE))
- self.assertTrue('BResource' in self.stack)
- self.m.VerifyAll()
- # Unset here so delete() is not stubbed for stack.delete cleanup
- self.m.UnsetStubs()
-
- @utils.stack_delete_after
- def test_update_rollback_replace(self):
- tmpl = {'Resources': {
- 'AResource': {'Type': 'ResourceWithPropsType',
- 'Properties': {'Foo': 'foo'}}}}
-
- self.stack = parser.Stack(self.ctx, 'update_test_stack',
- template.Template(tmpl),
- disable_rollback=False)
- self.stack.store()
- self.stack.create()
- self.assertEqual(self.stack.state,
- (parser.Stack.CREATE, parser.Stack.COMPLETE))
-
- tmpl2 = {'Resources': {'AResource': {'Type': 'ResourceWithPropsType',
- 'Properties': {'Foo': 'bar'}}}}
-
- updated_stack = parser.Stack(self.ctx, 'updated_stack',
- template.Template(tmpl2))
-
- # patch handle_delete so deleting the replaced resource fails,
- # while the subsequent rollback deletes succeed
- self.m.StubOutWithMock(generic_rsrc.ResourceWithProps, 'handle_delete')
- generic_rsrc.ResourceWithProps.handle_delete().AndRaise(Exception)
- generic_rsrc.ResourceWithProps.handle_delete().AndReturn(None)
- generic_rsrc.ResourceWithProps.handle_delete().AndReturn(None)
- self.m.ReplayAll()
-
- self.stack.update(updated_stack)
- self.assertEqual(self.stack.state,
- (parser.Stack.ROLLBACK, parser.Stack.COMPLETE))
- self.m.VerifyAll()
- # Unset here so delete() is not stubbed for stack.delete cleanup
- self.m.UnsetStubs()
-
- @utils.stack_delete_after
- def test_update_replace_by_reference(self):
- '''
- assertion:
- changes in dynamic attributes, due to other resources being updated,
- are not ignored and can cause dependent resources to be updated.
- '''
- tmpl = {'Resources': {
- 'AResource': {'Type': 'ResourceWithPropsType',
- 'Properties': {'Foo': 'abc'}},
- 'BResource': {'Type': 'ResourceWithPropsType',
- 'Properties': {
- 'Foo': {'Ref': 'AResource'}}}}}
- tmpl2 = {'Resources': {
- 'AResource': {'Type': 'ResourceWithPropsType',
- 'Properties': {'Foo': 'smelly'}},
- 'BResource': {'Type': 'ResourceWithPropsType',
- 'Properties': {
- 'Foo': {'Ref': 'AResource'}}}}}
-
- self.stack = parser.Stack(self.ctx, 'update_test_stack',
- template.Template(tmpl))
-
- self.m.ReplayAll()
-
- self.stack.store()
- self.stack.create()
- self.m.VerifyAll()
- self.assertEqual(self.stack.state,
- (parser.Stack.CREATE, parser.Stack.COMPLETE))
- self.assertEqual(self.stack['AResource'].properties['Foo'], 'abc')
- self.assertEqual(self.stack['BResource'].properties['Foo'],
- 'AResource')
-
- # Calls to ResourceWithProps.handle_update will raise
- # resource.UpdateReplace because we've not specified the modified
- # key/property in update_allowed_keys/update_allowed_properties
-
- self.m.StubOutWithMock(generic_rsrc.ResourceWithProps, 'FnGetRefId')
- generic_rsrc.ResourceWithProps.FnGetRefId().AndReturn(
- 'AResource')
- generic_rsrc.ResourceWithProps.FnGetRefId().MultipleTimes().AndReturn(
- 'inst-007')
- self.m.ReplayAll()
-
- updated_stack = parser.Stack(self.ctx, 'updated_stack',
- template.Template(tmpl2))
- self.stack.update(updated_stack)
- self.assertEqual(self.stack.state,
- (parser.Stack.UPDATE, parser.Stack.COMPLETE))
- self.assertEqual(self.stack['AResource'].properties['Foo'], 'smelly')
- self.assertEqual(self.stack['BResource'].properties['Foo'], 'inst-007')
- self.m.VerifyAll()
-
- @utils.stack_delete_after
- def test_update_by_reference_and_rollback_1(self):
- '''
- assertion:
- check that rollback still works with dynamic metadata;
- this test makes creation of the first replacement instance fail
- '''
- tmpl = {'Resources': {
- 'AResource': {'Type': 'ResourceWithPropsType',
- 'Properties': {'Foo': 'abc'}},
- 'BResource': {'Type': 'ResourceWithPropsType',
- 'Properties': {
- 'Foo': {'Ref': 'AResource'}}}}}
- tmpl2 = {'Resources': {
- 'AResource': {'Type': 'ResourceWithPropsType',
- 'Properties': {'Foo': 'smelly'}},
- 'BResource': {'Type': 'ResourceWithPropsType',
- 'Properties': {
- 'Foo': {'Ref': 'AResource'}}}}}
-
- self.stack = parser.Stack(self.ctx, 'update_test_stack',
- template.Template(tmpl),
- disable_rollback=False)
-
- self.m.ReplayAll()
-
- self.stack.store()
- self.stack.create()
- self.m.VerifyAll()
-
- self.assertEqual(self.stack.state,
- (parser.Stack.CREATE, parser.Stack.COMPLETE))
- self.assertEqual(self.stack['AResource'].properties['Foo'], 'abc')
- self.assertEqual(self.stack['BResource'].properties['Foo'],
- 'AResource')
-
- self.m.StubOutWithMock(generic_rsrc.ResourceWithProps, 'FnGetRefId')
- self.m.StubOutWithMock(generic_rsrc.ResourceWithProps, 'handle_create')
-
- # Calls to ResourceWithProps.handle_update will raise
- # resource.UpdateReplace because we've not specified the modified
- # key/property in update_allowed_keys/update_allowed_properties
-
- generic_rsrc.ResourceWithProps.FnGetRefId().MultipleTimes().AndReturn(
- 'AResource')
-
- # mock to make the replace fail when creating the replacement resource
- generic_rsrc.ResourceWithProps.handle_create().AndRaise(Exception)
-
- self.m.ReplayAll()
-
- updated_stack = parser.Stack(self.ctx, 'updated_stack',
- template.Template(tmpl2),
- disable_rollback=False)
- self.stack.update(updated_stack)
- self.assertEqual(self.stack.state,
- (parser.Stack.ROLLBACK, parser.Stack.COMPLETE))
- self.assertEqual(self.stack['AResource'].properties['Foo'], 'abc')
-
- self.m.VerifyAll()
-
- @utils.stack_delete_after
- def test_update_by_reference_and_rollback_2(self):
- '''
- assertion:
- check that rollback still works with dynamic metadata;
- this test makes creation of the second replacement instance fail
- '''
-
- class ResourceTypeA(generic_rsrc.ResourceWithProps):
- count = 0
-
- def handle_create(self):
- ResourceTypeA.count += 1
- self.resource_id_set('%s%d' % (self.name, self.count))
-
- resource._register_class('ResourceTypeA', ResourceTypeA)
-
- tmpl = {'Resources': {
- 'AResource': {'Type': 'ResourceTypeA',
- 'Properties': {'Foo': 'abc'}},
- 'BResource': {'Type': 'ResourceWithPropsType',
- 'Properties': {
- 'Foo': {'Ref': 'AResource'}}}}}
- tmpl2 = {'Resources': {
- 'AResource': {'Type': 'ResourceTypeA',
- 'Properties': {'Foo': 'smelly'}},
- 'BResource': {'Type': 'ResourceWithPropsType',
- 'Properties': {
- 'Foo': {'Ref': 'AResource'}}}}}
-
- self.stack = parser.Stack(self.ctx, 'update_test_stack',
- template.Template(tmpl),
- disable_rollback=False)
-
- self.m.ReplayAll()
-
- self.stack.store()
- self.stack.create()
- self.m.VerifyAll()
-
- self.assertEqual(self.stack.state,
- (parser.Stack.CREATE, parser.Stack.COMPLETE))
- self.assertEqual(self.stack['AResource'].properties['Foo'], 'abc')
- self.assertEqual(self.stack['BResource'].properties['Foo'],
- 'AResource1')
-
- self.m.StubOutWithMock(generic_rsrc.ResourceWithProps, 'handle_create')
-
- # Calls to ResourceWithProps.handle_update will raise
- # resource.UpdateReplace because we've not specified the modified
- # key/property in update_allowed_keys/update_allowed_properties
-
- # mock to make the replace fail when creating the second
- # replacement resource
- generic_rsrc.ResourceWithProps.handle_create().AndRaise(Exception)
-
- self.m.ReplayAll()
-
- updated_stack = parser.Stack(self.ctx, 'updated_stack',
- template.Template(tmpl2),
- disable_rollback=False)
- self.stack.update(updated_stack)
- self.assertEqual(self.stack.state,
- (parser.Stack.ROLLBACK, parser.Stack.COMPLETE))
- self.assertEqual(self.stack['AResource'].properties['Foo'], 'abc')
- self.assertEqual(self.stack['BResource'].properties['Foo'],
- 'AResource1')
-
- self.m.VerifyAll()
-
- @utils.stack_delete_after
- def test_update_replace_parameters(self):
- '''
- assertion:
- changes in static environment parameters
- are not ignored and can cause dependent resources to be updated.
- '''
- tmpl = {'Parameters': {'AParam': {'Type': 'String'}},
- 'Resources': {
- 'AResource': {'Type': 'ResourceWithPropsType',
- 'Properties': {'Foo': {'Ref': 'AParam'}}}}}
-
- env1 = {'parameters': {'AParam': 'abc'}}
- env2 = {'parameters': {'AParam': 'smelly'}}
- self.stack = parser.Stack(self.ctx, 'update_test_stack',
- template.Template(tmpl),
- environment.Environment(env1))
-
- self.stack.store()
- self.stack.create()
- self.assertEqual(self.stack.state,
- (parser.Stack.CREATE, parser.Stack.COMPLETE))
- self.assertEqual(self.stack['AResource'].properties['Foo'], 'abc')
-
- updated_stack = parser.Stack(self.ctx, 'updated_stack',
- template.Template(tmpl),
- environment.Environment(env2))
- self.stack.update(updated_stack)
- self.assertEqual(self.stack.state,
- (parser.Stack.UPDATE, parser.Stack.COMPLETE))
- self.assertEqual(self.stack['AResource'].properties['Foo'], 'smelly')
-
- def test_stack_create_timeout(self):
- self.m.StubOutWithMock(scheduler.DependencyTaskGroup, '__call__')
- self.m.StubOutWithMock(scheduler, 'wallclock')
-
- stack = parser.Stack(self.ctx, 's', parser.Template({}))
-
- def dummy_task():
- while True:
- yield
-
- start_time = time.time()
- scheduler.wallclock().AndReturn(start_time)
- scheduler.wallclock().AndReturn(start_time + 1)
- scheduler.DependencyTaskGroup.__call__().AndReturn(dummy_task())
- scheduler.wallclock().AndReturn(start_time + stack.timeout_secs() + 1)
-
- self.m.ReplayAll()
-
- stack.create()
-
- self.assertEqual(stack.state,
- (parser.Stack.CREATE, parser.Stack.FAILED))
- self.assertEqual(stack.status_reason, 'Create timed out')
-
- self.m.VerifyAll()
-
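The timeout test above drives the scheduler with a stubbed `wallclock`. The underlying pattern (an injectable clock checked against a deadline between task steps) can be sketched on its own; the names below are illustrative stand-ins, not Heat's actual scheduler API:

```python
import time


def run_with_timeout(task, timeout_secs, wallclock=time.time):
    """Drive a generator task until it finishes or the deadline passes.

    The clock is injectable so tests can simulate the passage of time
    without stubbing the real time module.
    """
    start = wallclock()
    for _ in task:
        # Check the deadline after each step the task yields.
        if wallclock() - start > timeout_secs:
            return 'TIMEOUT'
    return 'COMPLETE'
```

This mirrors what the test does with mox: the first `wallclock()` call establishes the start time, and a later call returning `start + timeout + 1` forces the timeout branch.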
- def test_stack_name_valid(self):
- stack = parser.Stack(self.ctx, 's', parser.Template({}))
- stack = parser.Stack(self.ctx, 'stack123', parser.Template({}))
- stack = parser.Stack(self.ctx, 'test.stack', parser.Template({}))
- stack = parser.Stack(self.ctx, 'test_stack', parser.Template({}))
- stack = parser.Stack(self.ctx, 'TEST', parser.Template({}))
- stack = parser.Stack(self.ctx, 'test-stack', parser.Template({}))
-
- def test_stack_name_invalid(self):
- self.assertRaises(ValueError, parser.Stack, self.ctx, '_foo',
- parser.Template({}))
- self.assertRaises(ValueError, parser.Stack, self.ctx, '1bad',
- parser.Template({}))
- self.assertRaises(ValueError, parser.Stack, self.ctx, '.kcats',
- parser.Template({}))
- self.assertRaises(ValueError, parser.Stack, self.ctx, 'test stack',
- parser.Template({}))
- self.assertRaises(ValueError, parser.Stack, self.ctx, ' teststack',
- parser.Template({}))
- self.assertRaises(ValueError, parser.Stack, self.ctx, '^-^',
- parser.Template({}))
- self.assertRaises(ValueError, parser.Stack, self.ctx, '\"stack\"',
- parser.Template({}))
- self.assertRaises(ValueError, parser.Stack, self.ctx, '1234',
- parser.Template({}))
- self.assertRaises(ValueError, parser.Stack, self.ctx, 'cat|dog',
- parser.Template({}))
- self.assertRaises(ValueError, parser.Stack, self.ctx, '$(foo)',
- parser.Template({}))
- self.assertRaises(ValueError, parser.Stack, self.ctx, 'test/stack',
- parser.Template({}))
- self.assertRaises(ValueError, parser.Stack, self.ctx, 'test\\stack',
- parser.Template({}))
- self.assertRaises(ValueError, parser.Stack, self.ctx, 'test::stack',
- parser.Template({}))
- self.assertRaises(ValueError, parser.Stack, self.ctx, 'test;stack',
- parser.Template({}))
- self.assertRaises(ValueError, parser.Stack, self.ctx, 'test~stack',
- parser.Template({}))
- self.assertRaises(ValueError, parser.Stack, self.ctx, '#test',
- parser.Template({}))
-
- @utils.stack_delete_after
- def test_resource_state_get_att(self):
- tmpl = {
- 'Resources': {'AResource': {'Type': 'GenericResourceType'}},
- 'Outputs': {'TestOutput': {'Value': {
- 'Fn::GetAtt': ['AResource', 'Foo']}}
- }
- }
-
- self.stack = parser.Stack(self.ctx, 'resource_state_get_att',
- template.Template(tmpl))
- self.stack.store()
- self.stack.create()
- self.assertEqual(self.stack.state,
- (parser.Stack.CREATE, parser.Stack.COMPLETE))
- self.assertTrue('AResource' in self.stack)
- rsrc = self.stack['AResource']
- rsrc.resource_id_set('aaaa')
- self.assertEqual('AResource', rsrc.FnGetAtt('Foo'))
-
- for action, status in (
- (rsrc.CREATE, rsrc.IN_PROGRESS),
- (rsrc.CREATE, rsrc.COMPLETE),
- (rsrc.UPDATE, rsrc.IN_PROGRESS),
- (rsrc.UPDATE, rsrc.COMPLETE)):
- rsrc.state_set(action, status)
- self.assertEqual('AResource', self.stack.output('TestOutput'))
- for action, status in (
- (rsrc.CREATE, rsrc.FAILED),
- (rsrc.DELETE, rsrc.IN_PROGRESS),
- (rsrc.DELETE, rsrc.FAILED),
- (rsrc.DELETE, rsrc.COMPLETE),
- (rsrc.UPDATE, rsrc.FAILED)):
- rsrc.state_set(action, status)
- self.assertEqual(None, self.stack.output('TestOutput'))
-
- @utils.stack_delete_after
- def test_resource_required_by(self):
- tmpl = {'Resources': {'AResource': {'Type': 'GenericResourceType'},
- 'BResource': {'Type': 'GenericResourceType',
- 'DependsOn': 'AResource'},
- 'CResource': {'Type': 'GenericResourceType',
- 'DependsOn': 'BResource'},
- 'DResource': {'Type': 'GenericResourceType',
- 'DependsOn': 'BResource'}}}
-
- self.stack = parser.Stack(self.ctx, 'depends_test_stack',
- template.Template(tmpl))
- self.stack.store()
- self.stack.create()
- self.assertEqual(self.stack.state,
- (parser.Stack.CREATE, parser.Stack.COMPLETE))
-
- self.assertEqual(['BResource'],
- self.stack['AResource'].required_by())
- self.assertEqual([],
- self.stack['CResource'].required_by())
- required_by = self.stack['BResource'].required_by()
- self.assertEqual(2, len(required_by))
- for r in ['CResource', 'DResource']:
- self.assertIn(r, required_by)
-
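The `required_by()` behaviour asserted above is a reverse lookup over `DependsOn` declarations: a resource is required by every resource that depends on it. A standalone sketch, using plain dicts in place of Heat's parsed resource objects:

```python
def required_by(resources, name):
    """Return the resources that declare a direct DependsOn on `name`.

    `resources` maps resource names to their template snippets, e.g.
    {'BResource': {'Type': '...', 'DependsOn': 'AResource'}}.
    """
    return [r for r, snippet in resources.items()
            if snippet.get('DependsOn') == name]
```

As in the test, the leaf resources (`CResource`, `DResource`) are required by nothing, while `BResource` is required by both of them.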
- @utils.stack_delete_after
- def test_store_saves_owner(self):
- """
- The stack's owner_id attribute is saved to the database when the
- stack is stored.
- """
- self.stack = parser.Stack(
- self.ctx, 'owner_stack', template.Template({}))
- stack_ownee = parser.Stack(
- self.ctx, 'ownee_stack', template.Template({}),
- owner_id=self.stack.id)
- stack_ownee.store()
- db_stack = db_api.stack_get(self.ctx, stack_ownee.id)
- self.assertEqual(db_stack.owner_id, self.stack.id)
-
- @utils.stack_delete_after
- def test_store_saves_creds(self):
- """
- A user_creds entry is created on the first stack store.
- """
- self.stack = parser.Stack(
- self.ctx, 'creds_stack', template.Template({}))
- self.stack.store()
-
- # The store should've created a user_creds row and set user_creds_id
- db_stack = db_api.stack_get(self.ctx, self.stack.id)
- user_creds_id = db_stack.user_creds_id
- self.assertIsNotNone(user_creds_id)
-
- # Store again; the user_creds_id must not change
- self.stack.store()
- db_stack = db_api.stack_get(self.ctx, self.stack.id)
- self.assertEqual(user_creds_id, db_stack.user_creds_id)
-
- @utils.stack_delete_after
- def test_load_honors_owner(self):
- """
- Loading a stack from the database will set the owner_id of the
- resultant stack appropriately.
- """
- self.stack = parser.Stack(
- self.ctx, 'owner_stack', template.Template({}))
- stack_ownee = parser.Stack(
- self.ctx, 'ownee_stack', template.Template({}),
- owner_id=self.stack.id)
- stack_ownee.store()
-
- saved_stack = parser.Stack.load(self.ctx, stack_id=stack_ownee.id)
- self.assertEqual(saved_stack.owner_id, self.stack.id)
-
- @utils.stack_delete_after
- def test_requires_deferred_auth(self):
- tmpl = {'Resources': {'AResource': {'Type': 'GenericResourceType'},
- 'BResource': {'Type': 'GenericResourceType'},
- 'CResource': {'Type': 'GenericResourceType'}}}
-
- self.stack = parser.Stack(self.ctx, 'update_test_stack',
- template.Template(tmpl),
- disable_rollback=False)
-
- self.assertFalse(self.stack.requires_deferred_auth())
-
- self.stack['CResource'].requires_deferred_auth = True
- self.assertTrue(self.stack.requires_deferred_auth())
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-
-import pkgutil
-import sys
-import testtools
-
-import heat.engine
-from heat.common import plugin_loader
-
-
-class PluginLoaderTest(testtools.TestCase):
- def test_module_name(self):
- self.assertEqual(plugin_loader._module_name('foo.bar', 'blarg.wibble'),
- 'foo.bar.blarg.wibble')
-
- def test_create_subpackage_single_path(self):
- pkg_name = 'heat.engine.test_single_path'
- self.assertFalse(pkg_name in sys.modules)
- pkg = plugin_loader.create_subpackage('/tmp',
- 'heat.engine',
- 'test_single_path')
- self.assertTrue(pkg_name in sys.modules)
- self.assertEqual(sys.modules[pkg_name], pkg)
- self.assertEqual(pkg.__path__, ['/tmp'])
- self.assertEqual(pkg.__name__, pkg_name)
-
- def test_create_subpackage_path_list(self):
- path_list = ['/tmp']
- pkg_name = 'heat.engine.test_path_list'
- self.assertFalse(pkg_name in sys.modules)
- pkg = plugin_loader.create_subpackage('/tmp',
- 'heat.engine',
- 'test_path_list')
- self.assertTrue(pkg_name in sys.modules)
- self.assertEqual(sys.modules[pkg_name], pkg)
- self.assertEqual(pkg.__path__, path_list)
- self.assertFalse(pkg.__path__ is path_list)
- self.assertEqual(pkg.__name__, pkg_name)
-
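The two tests above pin down `create_subpackage`'s contract: it registers a new package under the parent's namespace in `sys.modules`, accepts either a single path or a list, and copies the path list rather than aliasing it. A minimal sketch of a helper with that behaviour, based only on what these tests assert (not Heat's actual implementation):

```python
import sys
import types


def create_subpackage(path, parent_package_name, subpackage_name):
    """Create and register a synthetic subpackage for plugin loading."""
    # Accept a single path string or a list of paths.
    if isinstance(path, str):
        path = [path]

    pkg_name = '%s.%s' % (parent_package_name, subpackage_name)
    pkg = types.ModuleType(pkg_name)
    # Copy the list so later mutation of the caller's list has no effect,
    # matching the `pkg.__path__ is not path_list` assertion above.
    pkg.__path__ = list(path)
    sys.modules[pkg_name] = pkg
    return pkg
```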
- def test_import_module_existing(self):
- import heat.engine.service
- existing = heat.engine.service
- importer = pkgutil.ImpImporter(heat.engine.__path__[0])
- loaded = plugin_loader._import_module(importer,
- 'heat.engine.service',
- heat.engine)
- self.assertTrue(loaded is existing)
-
- def test_import_module_garbage(self):
- importer = pkgutil.ImpImporter(heat.engine.__path__[0])
- self.assertEqual(plugin_loader._import_module(importer,
- 'wibble',
- heat.engine),
- None)
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-
-import testtools
-
-from heat.engine import parameters
-from heat.engine import properties
-from heat.engine import resources
-from heat.common import exception
-
-
-class SchemaTest(testtools.TestCase):
- def test_range_schema(self):
- d = {'range': {'min': 5, 'max': 10}, 'description': 'a range'}
- r = properties.Range(5, 10, description='a range')
- self.assertEqual(d, dict(r))
-
- def test_range_min_schema(self):
- d = {'range': {'min': 5}, 'description': 'a range'}
- r = properties.Range(min=5, description='a range')
- self.assertEqual(d, dict(r))
-
- def test_range_max_schema(self):
- d = {'range': {'max': 10}, 'description': 'a range'}
- r = properties.Range(max=10, description='a range')
- self.assertEqual(d, dict(r))
-
- def test_length_schema(self):
- d = {'length': {'min': 5, 'max': 10}, 'description': 'a length range'}
- r = properties.Length(5, 10, description='a length range')
- self.assertEqual(d, dict(r))
-
- def test_length_min_schema(self):
- d = {'length': {'min': 5}, 'description': 'a length range'}
- r = properties.Length(min=5, description='a length range')
- self.assertEqual(d, dict(r))
-
- def test_length_max_schema(self):
- d = {'length': {'max': 10}, 'description': 'a length range'}
- r = properties.Length(max=10, description='a length range')
- self.assertEqual(d, dict(r))
-
- def test_allowed_values_schema(self):
- d = {'allowed_values': ['foo', 'bar'], 'description': 'allowed values'}
- r = properties.AllowedValues(['foo', 'bar'],
- description='allowed values')
- self.assertEqual(d, dict(r))
-
- def test_allowed_pattern_schema(self):
- d = {'allowed_pattern': '[A-Za-z0-9]', 'description': 'alphanumeric'}
- r = properties.AllowedPattern('[A-Za-z0-9]',
- description='alphanumeric')
- self.assertEqual(d, dict(r))
-
- def test_range_validate(self):
- r = properties.Range(min=5, max=5, description='a range')
- r.validate(5)
-
- def test_range_min_fail(self):
- r = properties.Range(min=5, description='a range')
- self.assertRaises(ValueError, r.validate, 4)
-
- def test_range_max_fail(self):
- r = properties.Range(max=5, description='a range')
- self.assertRaises(ValueError, r.validate, 6)
-
- def test_length_validate(self):
- l = properties.Length(min=5, max=5, description='a range')
- l.validate('abcde')
-
- def test_length_min_fail(self):
- l = properties.Length(min=5, description='a range')
- self.assertRaises(ValueError, l.validate, 'abcd')
-
- def test_length_max_fail(self):
- l = properties.Length(max=5, description='a range')
- self.assertRaises(ValueError, l.validate, 'abcdef')
-
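The schema tests above rely on constraints being convertible with `dict(...)` and validating via `validate()`. One way such a constraint class can support both is by implementing the mapping protocol over its serialized form. A hypothetical stand-in (not Heat's `properties.Range`) that satisfies the same `dict(...)` and `validate` expectations:

```python
from collections.abc import Mapping


class Range(Mapping):
    """A numeric range constraint that serializes via dict(...)."""

    def __init__(self, min=None, max=None, description=None):
        self.min = min
        self.max = max
        self.description = description

    def _as_dict(self):
        # Omit unset bounds, matching the min-only/max-only test cases.
        limits = {k: v for k, v in (('min', self.min), ('max', self.max))
                  if v is not None}
        d = {'range': limits}
        if self.description is not None:
            d['description'] = self.description
        return d

    # Mapping protocol: dict(Range(...)) iterates keys and indexes them.
    def __getitem__(self, key):
        return self._as_dict()[key]

    def __iter__(self):
        return iter(self._as_dict())

    def __len__(self):
        return len(self._as_dict())

    def validate(self, value):
        if self.min is not None and value < self.min:
            raise ValueError('%s is below the minimum %s' % (value, self.min))
        if self.max is not None and value > self.max:
            raise ValueError('%s is above the maximum %s' % (value, self.max))
```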
- def test_schema_all(self):
- d = {
- 'type': 'string',
- 'description': 'A string',
- 'default': 'wibble',
- 'required': True,
- 'constraints': [
- {'length': {'min': 4, 'max': 8}},
- ]
- }
- s = properties.Schema(properties.STRING, 'A string',
- default='wibble', required=True,
- constraints=[properties.Length(4, 8)])
- self.assertEqual(d, dict(s))
-
- def test_schema_list_schema(self):
- d = {
- 'type': 'list',
- 'description': 'A list',
- 'schema': {
- '*': {
- 'type': 'string',
- 'description': 'A string',
- 'default': 'wibble',
- 'required': True,
- 'constraints': [
- {'length': {'min': 4, 'max': 8}},
- ]
- }
- },
- 'required': False,
- }
- s = properties.Schema(properties.STRING, 'A string',
- default='wibble', required=True,
- constraints=[properties.Length(4, 8)])
- l = properties.Schema(properties.LIST, 'A list',
- schema=s)
- self.assertEqual(d, dict(l))
-
- def test_schema_map_schema(self):
- d = {
- 'type': 'map',
- 'description': 'A map',
- 'schema': {
- 'Foo': {
- 'type': 'string',
- 'description': 'A string',
- 'default': 'wibble',
- 'required': True,
- 'constraints': [
- {'length': {'min': 4, 'max': 8}},
- ]
- }
- },
- 'required': False,
- }
- s = properties.Schema(properties.STRING, 'A string',
- default='wibble', required=True,
- constraints=[properties.Length(4, 8)])
- m = properties.Schema(properties.MAP, 'A map',
- schema={'Foo': s})
- self.assertEqual(d, dict(m))
-
- def test_schema_nested_schema(self):
- d = {
- 'type': 'list',
- 'description': 'A list',
- 'schema': {
- '*': {
- 'type': 'map',
- 'description': 'A map',
- 'schema': {
- 'Foo': {
- 'type': 'string',
- 'description': 'A string',
- 'default': 'wibble',
- 'required': True,
- 'constraints': [
- {'length': {'min': 4, 'max': 8}},
- ]
- }
- },
- 'required': False,
- }
- },
- 'required': False,
- }
- s = properties.Schema(properties.STRING, 'A string',
- default='wibble', required=True,
- constraints=[properties.Length(4, 8)])
- m = properties.Schema(properties.MAP, 'A map',
- schema={'Foo': s})
- l = properties.Schema(properties.LIST, 'A list',
- schema=m)
- self.assertEqual(d, dict(l))
-
- def test_all_resource_schemata(self):
- for resource_type in resources.global_env().get_types():
- for schema in getattr(resource_type,
- 'properties_schema',
- {}).itervalues():
- properties.Schema.from_legacy(schema)
-
- def test_invalid_type(self):
- self.assertRaises(properties.InvalidPropertySchemaError,
- properties.Schema,
- 'Fish')
-
- def test_schema_invalid_type(self):
- self.assertRaises(properties.InvalidPropertySchemaError,
- properties.Schema,
- 'String',
- schema=properties.Schema('String'))
-
- def test_range_invalid_type(self):
- self.assertRaises(properties.InvalidPropertySchemaError,
- properties.Schema,
- 'String',
- constraints=[properties.Range(1, 10)])
-
- def test_length_invalid_type(self):
- self.assertRaises(properties.InvalidPropertySchemaError,
- properties.Schema,
- 'Integer',
- constraints=[properties.Length(1, 10)])
-
- def test_allowed_pattern_invalid_type(self):
- self.assertRaises(properties.InvalidPropertySchemaError,
- properties.Schema,
- 'Integer',
- constraints=[properties.AllowedPattern('[0-9]*')])
-
- def test_range_vals_invalid_type(self):
- self.assertRaises(properties.InvalidPropertySchemaError,
- properties.Range, '1', 10)
- self.assertRaises(properties.InvalidPropertySchemaError,
- properties.Range, 1, '10')
-
- def test_length_vals_invalid_type(self):
- self.assertRaises(properties.InvalidPropertySchemaError,
- properties.Length, '1', 10)
- self.assertRaises(properties.InvalidPropertySchemaError,
- properties.Length, 1, '10')
-
- def test_from_legacy_idempotency(self):
- s = properties.Schema(properties.STRING)
- self.assertTrue(properties.Schema.from_legacy(s) is s)
-
- def test_from_legacy_minimal_string(self):
- s = properties.Schema.from_legacy({
- 'Type': 'String',
- })
- self.assertEqual(properties.STRING, s.type)
- self.assertEqual(None, s.description)
- self.assertEqual(None, s.default)
- self.assertFalse(s.required)
- self.assertEqual(0, len(s.constraints))
- self.assertTrue(s.implemented)
-
- def test_from_legacy_string(self):
- s = properties.Schema.from_legacy({
- 'Type': 'String',
- 'Description': 'a string',
- 'Default': 'wibble',
- 'Required': True,
- 'Implemented': False,
- 'MinLength': 4,
- 'MaxLength': 8,
- 'AllowedValues': ['blarg', 'wibble'],
- 'AllowedPattern': '[a-z]*',
- })
- self.assertEqual(properties.STRING, s.type)
- self.assertEqual('a string', s.description)
- self.assertEqual('wibble', s.default)
- self.assertTrue(s.required)
- self.assertEqual(3, len(s.constraints))
- self.assertFalse(s.implemented)
-
- def test_from_legacy_min_length(self):
- s = properties.Schema.from_legacy({
- 'Type': 'String',
- 'MinLength': 4,
- })
- self.assertEqual(1, len(s.constraints))
- c = s.constraints[0]
- self.assertEqual(properties.Length, type(c))
- self.assertEqual(4, c.min)
- self.assertEqual(None, c.max)
-
- def test_from_legacy_max_length(self):
- s = properties.Schema.from_legacy({
- 'Type': 'String',
- 'MaxLength': 8,
- })
- self.assertEqual(1, len(s.constraints))
- c = s.constraints[0]
- self.assertEqual(properties.Length, type(c))
- self.assertEqual(None, c.min)
- self.assertEqual(8, c.max)
-
- def test_from_legacy_minmax_length(self):
- s = properties.Schema.from_legacy({
- 'Type': 'String',
- 'MinLength': 4,
- 'MaxLength': 8,
- })
- self.assertEqual(1, len(s.constraints))
- c = s.constraints[0]
- self.assertEqual(properties.Length, type(c))
- self.assertEqual(4, c.min)
- self.assertEqual(8, c.max)
-
- def test_from_legacy_minmax_string_length(self):
- s = properties.Schema.from_legacy({
- 'Type': 'String',
- 'MinLength': '4',
- 'MaxLength': '8',
- })
- self.assertEqual(1, len(s.constraints))
- c = s.constraints[0]
- self.assertEqual(properties.Length, type(c))
- self.assertEqual(4, c.min)
- self.assertEqual(8, c.max)
-
- def test_from_legacy_min_value(self):
- s = properties.Schema.from_legacy({
- 'Type': 'Integer',
- 'MinValue': 4,
- })
- self.assertEqual(1, len(s.constraints))
- c = s.constraints[0]
- self.assertEqual(properties.Range, type(c))
- self.assertEqual(4, c.min)
- self.assertEqual(None, c.max)
-
- def test_from_legacy_max_value(self):
- s = properties.Schema.from_legacy({
- 'Type': 'Integer',
- 'MaxValue': 8,
- })
- self.assertEqual(1, len(s.constraints))
- c = s.constraints[0]
- self.assertEqual(properties.Range, type(c))
- self.assertEqual(None, c.min)
- self.assertEqual(8, c.max)
-
- def test_from_legacy_minmax_value(self):
- s = properties.Schema.from_legacy({
- 'Type': 'Integer',
- 'MinValue': 4,
- 'MaxValue': 8,
- })
- self.assertEqual(1, len(s.constraints))
- c = s.constraints[0]
- self.assertEqual(properties.Range, type(c))
- self.assertEqual(4, c.min)
- self.assertEqual(8, c.max)
-
- def test_from_legacy_minmax_string_value(self):
- s = properties.Schema.from_legacy({
- 'Type': 'Integer',
- 'MinValue': '4',
- 'MaxValue': '8',
- })
- self.assertEqual(1, len(s.constraints))
- c = s.constraints[0]
- self.assertEqual(properties.Range, type(c))
- self.assertEqual(4, c.min)
- self.assertEqual(8, c.max)
-
- def test_from_legacy_allowed_values(self):
- s = properties.Schema.from_legacy({
- 'Type': 'String',
- 'AllowedValues': ['blarg', 'wibble'],
- })
- self.assertEqual(1, len(s.constraints))
- c = s.constraints[0]
- self.assertEqual(properties.AllowedValues, type(c))
- self.assertEqual(('blarg', 'wibble'), c.allowed)
-
- def test_from_legacy_allowed_pattern(self):
- s = properties.Schema.from_legacy({
- 'Type': 'String',
- 'AllowedPattern': '[a-z]*',
- })
- self.assertEqual(1, len(s.constraints))
- c = s.constraints[0]
- self.assertEqual(properties.AllowedPattern, type(c))
- self.assertEqual('[a-z]*', c.pattern)
-
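The `from_legacy` tests above exercise a consistent translation: `MinLength`/`MaxLength` become a single length constraint, `MinValue`/`MaxValue` a single range constraint, and string bounds are coerced to integers. A sketch of that translation step in isolation; the function name and tuple representation are illustrative, not Heat's API:

```python
def constraints_from_legacy(schema):
    """Translate legacy CFN-style constraint keys into constraint tuples."""
    constraints = []

    def bound(key):
        # Legacy templates may give bounds as strings, e.g. '4'.
        v = schema.get(key)
        return int(v) if v is not None else None

    if 'MinLength' in schema or 'MaxLength' in schema:
        constraints.append(('length', bound('MinLength'), bound('MaxLength')))
    if 'MinValue' in schema or 'MaxValue' in schema:
        constraints.append(('range', bound('MinValue'), bound('MaxValue')))
    if 'AllowedValues' in schema:
        constraints.append(('allowed_values', tuple(schema['AllowedValues'])))
    if 'AllowedPattern' in schema:
        constraints.append(('allowed_pattern', schema['AllowedPattern']))
    return constraints
```

Note that, as the min/max test pairs above show, a min and max on the same axis collapse into one constraint rather than two.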
- def test_from_legacy_list(self):
- l = properties.Schema.from_legacy({
- 'Type': 'List',
- 'Default': ['wibble'],
- 'Schema': {
- 'Type': 'String',
- 'Default': 'wibble',
- 'MaxLength': 8,
- }
- })
- self.assertEqual(properties.LIST, l.type)
- self.assertEqual(['wibble'], l.default)
-
- ss = l.schema[0]
- self.assertEqual(properties.STRING, ss.type)
- self.assertEqual('wibble', ss.default)
-
- def test_from_legacy_map(self):
- l = properties.Schema.from_legacy({
- 'Type': 'Map',
- 'Schema': {
- 'foo': {
- 'Type': 'String',
- 'Default': 'wibble',
- }
- }
- })
- self.assertEqual(properties.MAP, l.type)
-
- ss = l.schema['foo']
- self.assertEqual(properties.STRING, ss.type)
- self.assertEqual('wibble', ss.default)
-
- def test_from_legacy_invalid_key(self):
- self.assertRaises(properties.InvalidPropertySchemaError,
- properties.Schema.from_legacy,
- {'Type': 'String', 'Foo': 'Bar'})
-
- def test_from_string_param(self):
- description = "WebServer EC2 instance type"
- allowed_values = ["t1.micro", "m1.small", "m1.large", "m1.xlarge",
- "m2.xlarge", "m2.2xlarge", "m2.4xlarge",
- "c1.medium", "c1.xlarge", "cc1.4xlarge"]
- constraint_desc = "Must be a valid EC2 instance type."
- param = parameters.ParamSchema({
- "Type": "String",
- "Description": description,
- "Default": "m1.large",
- "AllowedValues": allowed_values,
- "ConstraintDescription": constraint_desc,
- })
-
- schema = properties.Schema.from_parameter(param)
-
- self.assertEqual(properties.STRING, schema.type)
- self.assertEqual(description, schema.description)
- self.assertEqual(None, schema.default)
- self.assertFalse(schema.required)
- self.assertEqual(1, len(schema.constraints))
-
- allowed_constraint = schema.constraints[0]
-
- self.assertEqual(tuple(allowed_values), allowed_constraint.allowed)
- self.assertEqual(constraint_desc, allowed_constraint.description)
-
- def test_from_string_allowed_pattern(self):
- description = "WebServer EC2 instance type"
- allowed_pattern = "[A-Za-z0-9]*"
- constraint_desc = "Must contain only alphanumeric characters."
- param = parameters.ParamSchema({
- "Type": "String",
- "Description": description,
- "Default": "m1.large",
- "AllowedPattern": allowed_pattern,
- "ConstraintDescription": constraint_desc,
- })
-
- schema = properties.Schema.from_parameter(param)
-
- self.assertEqual(properties.STRING, schema.type)
- self.assertEqual(description, schema.description)
- self.assertEqual(None, schema.default)
- self.assertFalse(schema.required)
- self.assertEqual(1, len(schema.constraints))
-
- allowed_constraint = schema.constraints[0]
-
- self.assertEqual(allowed_pattern, allowed_constraint.pattern)
- self.assertEqual(constraint_desc, allowed_constraint.description)
-
- def test_from_string_multi_constraints(self):
- description = "WebServer EC2 instance type"
- allowed_pattern = "[A-Za-z0-9]*"
- constraint_desc = "Must contain only alphanumeric characters."
- param = parameters.ParamSchema({
- "Type": "String",
- "Description": description,
- "Default": "m1.large",
- "MinLength": "7",
- "AllowedPattern": allowed_pattern,
- "ConstraintDescription": constraint_desc,
- })
-
- schema = properties.Schema.from_parameter(param)
-
- self.assertEqual(properties.STRING, schema.type)
- self.assertEqual(description, schema.description)
- self.assertEqual(None, schema.default)
- self.assertFalse(schema.required)
- self.assertEqual(2, len(schema.constraints))
-
- len_constraint = schema.constraints[0]
- allowed_constraint = schema.constraints[1]
-
- self.assertEqual(7, len_constraint.min)
- self.assertEqual(None, len_constraint.max)
- self.assertEqual(allowed_pattern, allowed_constraint.pattern)
- self.assertEqual(constraint_desc, allowed_constraint.description)
-
- def test_from_param_string_min_len(self):
- param = parameters.ParamSchema({
- "Description": "WebServer EC2 instance type",
- "Type": "String",
- "Default": "m1.large",
- "MinLength": "7",
- })
- schema = properties.Schema.from_parameter(param)
-
- self.assertFalse(schema.required)
- self.assertEqual(1, len(schema.constraints))
-
- len_constraint = schema.constraints[0]
-
- self.assertEqual(7, len_constraint.min)
- self.assertEqual(None, len_constraint.max)
-
- def test_from_param_string_max_len(self):
- param = parameters.ParamSchema({
- "Description": "WebServer EC2 instance type",
- "Type": "String",
- "Default": "m1.large",
- "MaxLength": "11",
- })
- schema = properties.Schema.from_parameter(param)
-
- self.assertFalse(schema.required)
- self.assertEqual(1, len(schema.constraints))
-
- len_constraint = schema.constraints[0]
-
- self.assertEqual(None, len_constraint.min)
- self.assertEqual(11, len_constraint.max)
-
- def test_from_param_string_min_max_len(self):
- param = parameters.ParamSchema({
- "Description": "WebServer EC2 instance type",
- "Type": "String",
- "Default": "m1.large",
- "MinLength": "7",
- "MaxLength": "11",
- })
- schema = properties.Schema.from_parameter(param)
-
- self.assertFalse(schema.required)
- self.assertEqual(1, len(schema.constraints))
-
- len_constraint = schema.constraints[0]
-
- self.assertEqual(7, len_constraint.min)
- self.assertEqual(11, len_constraint.max)
-
- def test_from_param_no_default(self):
- param = parameters.ParamSchema({
- "Description": "WebServer EC2 instance type",
- "Type": "String",
- })
- schema = properties.Schema.from_parameter(param)
-
- self.assertTrue(schema.required)
- self.assertEqual(None, schema.default)
- self.assertEqual(0, len(schema.constraints))
-
- def test_from_number_param_min(self):
- default = "42"
- param = parameters.ParamSchema({
- "Type": "Number",
- "Default": default,
- "MinValue": "10",
- })
-
- schema = properties.Schema.from_parameter(param)
-
- self.assertEqual(properties.NUMBER, schema.type)
- self.assertEqual(None, schema.default)
- self.assertFalse(schema.required)
- self.assertEqual(1, len(schema.constraints))
-
- value_constraint = schema.constraints[0]
-
- self.assertEqual(10, value_constraint.min)
- self.assertEqual(None, value_constraint.max)
-
- def test_from_number_param_max(self):
- default = "42"
- param = parameters.ParamSchema({
- "Type": "Number",
- "Default": default,
- "MaxValue": "100",
- })
-
- schema = properties.Schema.from_parameter(param)
-
- self.assertEqual(properties.NUMBER, schema.type)
- self.assertEqual(None, schema.default)
- self.assertFalse(schema.required)
- self.assertEqual(1, len(schema.constraints))
-
- value_constraint = schema.constraints[0]
-
- self.assertEqual(None, value_constraint.min)
- self.assertEqual(100, value_constraint.max)
-
- def test_from_number_param_min_max(self):
- default = "42"
- param = parameters.ParamSchema({
- "Type": "Number",
- "Default": default,
- "MinValue": "10",
- "MaxValue": "100",
- })
-
- schema = properties.Schema.from_parameter(param)
-
- self.assertEqual(properties.NUMBER, schema.type)
- self.assertEqual(None, schema.default)
- self.assertFalse(schema.required)
- self.assertEqual(1, len(schema.constraints))
-
- value_constraint = schema.constraints[0]
-
- self.assertEqual(10, value_constraint.min)
- self.assertEqual(100, value_constraint.max)
-
- def test_from_number_param_allowed_vals(self):
- default = "42"
- constraint_desc = "The quick brown fox jumps over the lazy dog."
- param = parameters.ParamSchema({
- "Type": "Number",
- "Default": default,
- "AllowedValues": ["10", "42", "100"],
- "ConstraintDescription": constraint_desc,
- })
-
- schema = properties.Schema.from_parameter(param)
-
- self.assertEqual(properties.NUMBER, schema.type)
- self.assertEqual(None, schema.default)
- self.assertFalse(schema.required)
- self.assertEqual(1, len(schema.constraints))
-
- allowed_constraint = schema.constraints[0]
-
- self.assertEqual(('10', '42', '100'), allowed_constraint.allowed)
- self.assertEqual(constraint_desc, allowed_constraint.description)
-
- def test_from_list_param(self):
- param = parameters.ParamSchema({
- "Type": "CommaDelimitedList",
- "Default": "foo,bar,baz"
- })
-
- schema = properties.Schema.from_parameter(param)
-
- self.assertEqual(properties.LIST, schema.type)
- self.assertEqual(None, schema.default)
- self.assertFalse(schema.required)
-
- def test_from_json_param(self):
- param = parameters.ParamSchema({
- "Type": "Json",
- "Default": {"foo": "bar", "blarg": "wibble"}
- })
-
- schema = properties.Schema.from_parameter(param)
-
- self.assertEqual(properties.MAP, schema.type)
- self.assertEqual(None, schema.default)
- self.assertFalse(schema.required)
-
-
-class PropertyTest(testtools.TestCase):
- def test_required_default(self):
- p = properties.Property({'Type': 'String'})
- self.assertFalse(p.required())
-
- def test_required_false(self):
- p = properties.Property({'Type': 'String', 'Required': False})
- self.assertFalse(p.required())
-
- def test_required_true(self):
- p = properties.Property({'Type': 'String', 'Required': True})
- self.assertTrue(p.required())
-
- def test_implemented_default(self):
- p = properties.Property({'Type': 'String'})
- self.assertTrue(p.implemented())
-
- def test_implemented_false(self):
- p = properties.Property({'Type': 'String', 'Implemented': False})
- self.assertFalse(p.implemented())
-
- def test_implemented_true(self):
- p = properties.Property({'Type': 'String', 'Implemented': True})
- self.assertTrue(p.implemented())
-
- def test_no_default(self):
- p = properties.Property({'Type': 'String'})
- self.assertFalse(p.has_default())
-
- def test_default(self):
- p = properties.Property({'Type': 'String', 'Default': 'wibble'})
- self.assertEqual(p.default(), 'wibble')
-
- def test_type(self):
- p = properties.Property({'Type': 'String'})
- self.assertEqual(p.type(), 'String')
-
- def test_bad_type(self):
- self.assertRaises(properties.InvalidPropertySchemaError,
- properties.Property, {'Type': 'Fish'})
-
- def test_bad_key(self):
- self.assertRaises(properties.InvalidPropertySchemaError,
- properties.Property,
- {'Type': 'String', 'Foo': 'Bar'})
-
- def test_string_pattern_good(self):
- schema = {'Type': 'String',
- 'AllowedPattern': '[a-z]*'}
- p = properties.Property(schema)
- self.assertEqual(p.validate_data('foo'), 'foo')
-
- def test_string_pattern_bad_prefix(self):
- schema = {'Type': 'String',
- 'AllowedPattern': '[a-z]*'}
- p = properties.Property(schema)
- self.assertRaises(ValueError, p.validate_data, '1foo')
-
- def test_string_pattern_bad_suffix(self):
- schema = {'Type': 'String',
- 'AllowedPattern': '[a-z]*'}
- p = properties.Property(schema)
- self.assertRaises(ValueError, p.validate_data, 'foo1')
-
- def test_string_value_list_good(self):
- schema = {'Type': 'String',
- 'AllowedValues': ['foo', 'bar', 'baz']}
- p = properties.Property(schema)
- self.assertEqual(p.validate_data('bar'), 'bar')
-
- def test_string_value_list_bad(self):
- schema = {'Type': 'String',
- 'AllowedValues': ['foo', 'bar', 'baz']}
- p = properties.Property(schema)
- self.assertRaises(ValueError, p.validate_data, 'blarg')
-
- def test_string_maxlength_good(self):
- schema = {'Type': 'String',
- 'MaxLength': '5'}
- p = properties.Property(schema)
- self.assertEqual(p.validate_data('abcd'), 'abcd')
-
- def test_string_exceeded_maxlength(self):
- schema = {'Type': 'String',
- 'MaxLength': '5'}
- p = properties.Property(schema)
- self.assertRaises(ValueError, p.validate_data, 'abcdef')
-
- def test_string_length_in_range(self):
- schema = {'Type': 'String',
- 'MinLength': '5',
- 'MaxLength': '10'}
- p = properties.Property(schema)
- self.assertEqual(p.validate_data('abcdef'), 'abcdef')
-
- def test_string_minlength_good(self):
- schema = {'Type': 'String',
- 'MinLength': '5'}
- p = properties.Property(schema)
- self.assertEqual(p.validate_data('abcde'), 'abcde')
-
- def test_string_smaller_than_minlength(self):
- schema = {'Type': 'String',
- 'MinLength': '5'}
- p = properties.Property(schema)
- self.assertRaises(ValueError, p.validate_data, 'abcd')
-
- def test_int_good(self):
- schema = {'Type': 'Integer',
- 'MinValue': 3,
- 'MaxValue': 3}
- p = properties.Property(schema)
- self.assertEqual(p.validate_data(3), 3)
-
- def test_int_bad(self):
- schema = {'Type': 'Integer'}
- p = properties.Property(schema)
- self.assertRaises(TypeError, p.validate_data, '3')
-
- def test_integer_low(self):
- schema = {'Type': 'Integer',
- 'MinValue': 4}
- p = properties.Property(schema)
- self.assertRaises(ValueError, p.validate_data, 3)
-
- def test_integer_high(self):
- schema = {'Type': 'Integer',
- 'MaxValue': 2}
- p = properties.Property(schema)
- self.assertRaises(ValueError, p.validate_data, 3)
-
- def test_integer_value_list_good(self):
- schema = {'Type': 'Integer',
- 'AllowedValues': [1, 3, 5]}
- p = properties.Property(schema)
- self.assertEqual(p.validate_data(5), 5)
-
- def test_integer_value_list_bad(self):
- schema = {'Type': 'Integer',
- 'AllowedValues': [1, 3, 5]}
- p = properties.Property(schema)
- self.assertRaises(ValueError, p.validate_data, 2)
-
- def test_number_good(self):
- schema = {'Type': 'Number',
- 'MinValue': '3',
- 'MaxValue': '3'}
- p = properties.Property(schema)
- self.assertEqual(p.validate_data('3'), '3')
-
- def test_number_value_list_good(self):
- schema = {'Type': 'Number',
- 'AllowedValues': ['1', '3', '5']}
- p = properties.Property(schema)
- self.assertEqual(p.validate_data('5'), '5')
-
- def test_number_value_list_bad(self):
- schema = {'Type': 'Number',
- 'AllowedValues': ['1', '3', '5']}
- p = properties.Property(schema)
- self.assertRaises(ValueError, p.validate_data, '2')
-
- def test_number_low(self):
- schema = {'Type': 'Number',
- 'MinValue': '4'}
- p = properties.Property(schema)
- self.assertRaises(ValueError, p.validate_data, '3')
-
- def test_number_high(self):
- schema = {'Type': 'Number',
- 'MaxValue': '2'}
- p = properties.Property(schema)
- self.assertRaises(ValueError, p.validate_data, '3')
-
- def test_boolean_true(self):
- p = properties.Property({'Type': 'Boolean'})
- self.assertEqual(p.validate_data('True'), True)
- self.assertEqual(p.validate_data('true'), True)
- self.assertEqual(p.validate_data(True), True)
-
- def test_boolean_false(self):
- p = properties.Property({'Type': 'Boolean'})
- self.assertEqual(p.validate_data('False'), False)
- self.assertEqual(p.validate_data('false'), False)
- self.assertEqual(p.validate_data(False), False)
-
- def test_boolean_invalid(self):
- p = properties.Property({'Type': 'Boolean'})
- self.assertRaises(ValueError, p.validate_data, 'fish')
-
- def test_list_string(self):
- p = properties.Property({'Type': 'List'})
- self.assertRaises(TypeError, p.validate_data, 'foo')
-
- def test_list_good(self):
- p = properties.Property({'Type': 'List'})
- self.assertEqual(p.validate_data(['foo', 'bar']), ['foo', 'bar'])
-
- def test_list_dict(self):
- p = properties.Property({'Type': 'List'})
- self.assertRaises(TypeError, p.validate_data, {'foo': 'bar'})
-
- def test_list_maxlength_good(self):
- schema = {'Type': 'List',
- 'MaxLength': '3'}
- p = properties.Property(schema)
- self.assertEqual(p.validate_data(['1', '2']), ['1', '2'])
-
- def test_list_exceeded_maxlength(self):
- schema = {'Type': 'List',
- 'MaxLength': '2'}
- p = properties.Property(schema)
- self.assertRaises(ValueError, p.validate_data, ['1', '2', '3'])
-
- def test_list_length_in_range(self):
- schema = {'Type': 'List',
- 'MinLength': '2',
- 'MaxLength': '4'}
- p = properties.Property(schema)
- self.assertEqual(p.validate_data(['1', '2', '3']), ['1', '2', '3'])
-
- def test_list_minlength_good(self):
- schema = {'Type': 'List',
- 'MinLength': '3'}
- p = properties.Property(schema)
- self.assertEqual(p.validate_data(['1', '2', '3']), ['1', '2', '3'])
-
- def test_list_smaller_than_minlength(self):
- schema = {'Type': 'List',
- 'MinLength': '4'}
- p = properties.Property(schema)
- self.assertRaises(ValueError, p.validate_data, ['1', '2', '3'])
-
- def test_map_string(self):
- p = properties.Property({'Type': 'Map'})
- self.assertRaises(TypeError, p.validate_data, 'foo')
-
- def test_map_list(self):
- p = properties.Property({'Type': 'Map'})
- self.assertRaises(TypeError, p.validate_data, ['foo'])
-
- def test_map_schema_good(self):
- map_schema = {'valid': {'Type': 'Boolean'}}
- p = properties.Property({'Type': 'Map', 'Schema': map_schema})
- self.assertEqual(p.validate_data({'valid': 'TRUE'}), {'valid': True})
-
- def test_map_schema_bad_data(self):
- map_schema = {'valid': {'Type': 'Boolean'}}
- p = properties.Property({'Type': 'Map', 'Schema': map_schema})
- self.assertRaises(ValueError, p.validate_data, {'valid': 'fish'})
-
- def test_map_schema_missing_data(self):
- map_schema = {'valid': {'Type': 'Boolean'}}
- p = properties.Property({'Type': 'Map', 'Schema': map_schema})
- self.assertEqual(p.validate_data({}), {'valid': None})
-
- def test_map_schema_missing_required_data(self):
- map_schema = {'valid': {'Type': 'Boolean', 'Required': True}}
- p = properties.Property({'Type': 'Map', 'Schema': map_schema})
- self.assertRaises(ValueError, p.validate_data, {})
-
- def test_list_schema_good(self):
- map_schema = {'valid': {'Type': 'Boolean'}}
- list_schema = {'Type': 'Map', 'Schema': map_schema}
- p = properties.Property({'Type': 'List', 'Schema': list_schema})
- self.assertEqual(p.validate_data(
- [{'valid': 'TRUE'}, {'valid': 'False'}]),
- [{'valid': True}, {'valid': False}])
-
- def test_list_schema_bad_data(self):
- map_schema = {'valid': {'Type': 'Boolean'}}
- list_schema = {'Type': 'Map', 'Schema': map_schema}
- p = properties.Property({'Type': 'List', 'Schema': list_schema})
- self.assertRaises(ValueError, p.validate_data, [{'valid': 'True'},
- {'valid': 'fish'}])
-
- def test_list_schema_int_good(self):
- list_schema = {'Type': 'Integer'}
- p = properties.Property({'Type': 'List', 'Schema': list_schema})
- self.assertEqual(p.validate_data([1, 2, 3]), [1, 2, 3])
-
- def test_list_schema_int_bad_data(self):
- list_schema = {'Type': 'Integer'}
- p = properties.Property({'Type': 'List', 'Schema': list_schema})
- self.assertRaises(ValueError, p.validate_data, [42, 'fish'])
-
-
-class PropertiesTest(testtools.TestCase):
- def setUp(self):
- super(PropertiesTest, self).setUp()
- schema = {
- 'int': {'Type': 'Integer'},
- 'string': {'Type': 'String'},
- 'required_int': {'Type': 'Integer', 'Required': True},
- 'bad_int': {'Type': 'Integer'},
- 'missing': {'Type': 'Integer'},
- 'defaulted': {'Type': 'Integer', 'Default': 1},
- 'default_override': {'Type': 'Integer', 'Default': 1},
- }
- data = {
- 'int': 21,
- 'string': 'foo',
- 'bad_int': 'foo',
- 'default_override': 21,
- }
- double = lambda d: d * 2
- self.props = properties.Properties(schema, data, double, 'wibble')
-
- def test_integer_good(self):
- self.assertEqual(self.props['int'], 42)
-
- def test_string_good(self):
- self.assertEqual(self.props['string'], 'foofoo')
-
- def test_missing_required(self):
- self.assertRaises(ValueError, self.props.get, 'required_int')
-
- def test_integer_bad(self):
- self.assertRaises(ValueError, self.props.get, 'bad_int')
-
- def test_missing(self):
- self.assertEqual(self.props['missing'], None)
-
- def test_default(self):
- self.assertEqual(self.props['defaulted'], 1)
-
- def test_default_override(self):
- self.assertEqual(self.props['default_override'], 42)
-
- def test_bad_key(self):
- self.assertEqual(self.props.get('foo', 'wibble'), 'wibble')
-
- def test_none_string(self):
- schema = {'foo': {'Type': 'String'}}
- props = properties.Properties(schema, {'foo': None})
- self.assertEqual('', props['foo'])
-
- def test_none_integer(self):
- schema = {'foo': {'Type': 'Integer'}}
- props = properties.Properties(schema, {'foo': None})
- self.assertEqual(0, props['foo'])
-
- def test_none_number(self):
- schema = {'foo': {'Type': 'Number'}}
- props = properties.Properties(schema, {'foo': None})
- self.assertEqual(0, props['foo'])
-
- def test_none_boolean(self):
- schema = {'foo': {'Type': 'Boolean'}}
- props = properties.Properties(schema, {'foo': None})
- self.assertEqual(False, props['foo'])
-
- def test_none_map(self):
- schema = {'foo': {'Type': 'Map'}}
- props = properties.Properties(schema, {'foo': None})
- self.assertEqual({}, props['foo'])
-
- def test_none_list(self):
- schema = {'foo': {'Type': 'List'}}
- props = properties.Properties(schema, {'foo': None})
- self.assertEqual([], props['foo'])
-
- def test_none_default_string(self):
- schema = {'foo': {'Type': 'String', 'Default': 'bar'}}
- props = properties.Properties(schema, {'foo': None})
- self.assertEqual('bar', props['foo'])
-
- def test_none_default_integer(self):
- schema = {'foo': {'Type': 'Integer', 'Default': 42}}
- props = properties.Properties(schema, {'foo': None})
- self.assertEqual(42, props['foo'])
-
- schema = {'foo': {'Type': 'Integer', 'Default': 0}}
- props = properties.Properties(schema, {'foo': None})
- self.assertEqual(0, props['foo'])
-
- schema = {'foo': {'Type': 'Integer', 'Default': -273}}
- props = properties.Properties(schema, {'foo': None})
- self.assertEqual(-273, props['foo'])
-
- def test_none_default_number(self):
- schema = {'foo': {'Type': 'Number', 'Default': 42.0}}
- props = properties.Properties(schema, {'foo': None})
- self.assertEqual(42.0, props['foo'])
-
- schema = {'foo': {'Type': 'Number', 'Default': 0.0}}
- props = properties.Properties(schema, {'foo': None})
- self.assertEqual(0.0, props['foo'])
-
- schema = {'foo': {'Type': 'Number', 'Default': -273.15}}
- props = properties.Properties(schema, {'foo': None})
- self.assertEqual(-273.15, props['foo'])
-
- def test_none_default_boolean(self):
- schema = {'foo': {'Type': 'Boolean', 'Default': True}}
- props = properties.Properties(schema, {'foo': None})
- self.assertEqual(True, props['foo'])
-
- def test_none_default_map(self):
- schema = {'foo': {'Type': 'Map', 'Default': {'bar': 'baz'}}}
- props = properties.Properties(schema, {'foo': None})
- self.assertEqual({'bar': 'baz'}, props['foo'])
-
- def test_none_default_list(self):
- schema = {'foo': {'Type': 'List', 'Default': ['one', 'two']}}
- props = properties.Properties(schema, {'foo': None})
- self.assertEqual(['one', 'two'], props['foo'])
-
- def test_bad_resolver(self):
- schema = {'foo': {'Type': 'String', 'Default': 'bar'}}
-
- def bad_resolver(prop):
- raise Exception('resolution failed!')
-
- props = properties.Properties(schema, {'foo': 'baz'}, bad_resolver)
- err = self.assertRaises(ValueError, props.get, 'foo')
- self.assertEqual('foo resolution failed!', str(err))
-
- def test_schema_from_params(self):
- params_snippet = {
- "DBUsername": {
- "Type": "String",
- "Description": "The WordPress database admin account username",
- "Default": "admin",
- "MinLength": "1",
- "AllowedPattern": "[a-zA-Z][a-zA-Z0-9]*",
- "NoEcho": "true",
- "MaxLength": "16",
- "ConstraintDescription": ("must begin with a letter and "
- "contain only alphanumeric "
- "characters.")
- },
- "KeyName": {
- "Type": "String",
- "Description": ("Name of an existing EC2 KeyPair to enable "
- "SSH access to the instances")
- },
- "LinuxDistribution": {
- "Default": "F17",
- "Type": "String",
- "Description": "Distribution of choice",
- "AllowedValues": [
- "F18",
- "F17",
- "U10",
- "RHEL-6.1",
- "RHEL-6.2",
- "RHEL-6.3"
- ]
- },
- "DBPassword": {
- "Type": "String",
- "Description": "The WordPress database admin account password",
- "Default": "admin",
- "MinLength": "1",
- "AllowedPattern": "[a-zA-Z0-9]*",
- "NoEcho": "true",
- "MaxLength": "41",
- "ConstraintDescription": ("must contain only alphanumeric "
- "characters.")
- },
- "DBName": {
- "AllowedPattern": "[a-zA-Z][a-zA-Z0-9]*",
- "Type": "String",
- "Description": "The WordPress database name",
- "MaxLength": "64",
- "Default": "wordpress",
- "MinLength": "1",
- "ConstraintDescription": ("must begin with a letter and "
- "contain only alphanumeric "
- "characters.")
- },
- "InstanceType": {
- "Default": "m1.large",
- "Type": "String",
- "ConstraintDescription": "must be a valid EC2 instance type.",
- "Description": "WebServer EC2 instance type",
- "AllowedValues": [
- "t1.micro",
- "m1.small",
- "m1.large",
- "m1.xlarge",
- "m2.xlarge",
- "m2.2xlarge",
- "m2.4xlarge",
- "c1.medium",
- "c1.xlarge",
- "cc1.4xlarge"
- ]
- },
- "DBRootPassword": {
- "Type": "String",
- "Description": "Root password for MySQL",
- "Default": "admin",
- "MinLength": "1",
- "AllowedPattern": "[a-zA-Z0-9]*",
- "NoEcho": "true",
- "MaxLength": "41",
- "ConstraintDescription": ("must contain only alphanumeric "
- "characters.")
- }
- }
- expected = {
- "DBUsername": {
- "type": "string",
- "description": "The WordPress database admin account username",
- "required": False,
- "constraints": [
- {"length": {"min": 1, "max": 16}},
- {"allowed_pattern": "[a-zA-Z][a-zA-Z0-9]*",
- "description": "must begin with a letter and contain "
- "only alphanumeric characters."},
- ]
- },
- "LinuxDistribution": {
- "type": "string",
- "description": "Distribution of choice",
- "required": False,
- "constraints": [
- {"allowed_values": ["F18", "F17", "U10",
- "RHEL-6.1", "RHEL-6.2", "RHEL-6.3"]}
- ]
- },
- "InstanceType": {
- "type": "string",
- "description": "WebServer EC2 instance type",
- "required": False,
- "constraints": [
- {"allowed_values": ["t1.micro",
- "m1.small",
- "m1.large",
- "m1.xlarge",
- "m2.xlarge",
- "m2.2xlarge",
- "m2.4xlarge",
- "c1.medium",
- "c1.xlarge",
- "cc1.4xlarge"],
- "description": "must be a valid EC2 instance type."},
- ]
- },
- "DBRootPassword": {
- "type": "string",
- "description": "Root password for MySQL",
- "required": False,
- "constraints": [
- {"length": {"min": 1, "max": 41}},
- {"allowed_pattern": "[a-zA-Z0-9]*",
- "description": "must contain only alphanumeric "
- "characters."},
- ]
- },
- "KeyName": {
- "type": "string",
- "description": ("Name of an existing EC2 KeyPair to enable "
- "SSH access to the instances"),
- "required": True,
- },
- "DBPassword": {
- "type": "string",
- "description": "The WordPress database admin account password",
- "required": False,
- "constraints": [
- {"length": {"min": 1, "max": 41}},
- {"allowed_pattern": "[a-zA-Z0-9]*",
- "description": "must contain only alphanumeric "
- "characters."},
- ]
- },
- "DBName": {
- "type": "string",
- "description": "The WordPress database name",
- "required": False,
- "constraints": [
- {"length": {"min": 1, "max": 64}},
- {"allowed_pattern": "[a-zA-Z][a-zA-Z0-9]*",
- "description": "must begin with a letter and contain "
- "only alphanumeric characters."},
- ]
- },
- }
- params = dict((n, parameters.ParamSchema(s)) for n, s
- in params_snippet.items())
- props_schemata = properties.Properties.schema_from_params(params)
-
- self.assertEqual(expected,
- dict((n, dict(s)) for n, s in props_schemata.items()))
-
-
-class PropertiesValidationTest(testtools.TestCase):
- def test_required(self):
- schema = {'foo': {'Type': 'String', 'Required': True}}
- props = properties.Properties(schema, {'foo': 'bar'})
- self.assertEqual(props.validate(), None)
-
- def test_missing_required(self):
- schema = {'foo': {'Type': 'String', 'Required': True}}
- props = properties.Properties(schema, {})
- self.assertRaises(exception.StackValidationFailed, props.validate)
-
- def test_missing_unimplemented(self):
- schema = {'foo': {'Type': 'String', 'Implemented': False}}
- props = properties.Properties(schema, {})
- self.assertEqual(props.validate(), None)
-
- def test_present_unimplemented(self):
- schema = {'foo': {'Type': 'String', 'Implemented': False}}
- props = properties.Properties(schema, {'foo': 'bar'})
- self.assertRaises(exception.StackValidationFailed, props.validate)
-
- def test_missing(self):
- schema = {'foo': {'Type': 'String'}}
- props = properties.Properties(schema, {})
- self.assertEqual(props.validate(), None)
-
- def test_bad_data(self):
- schema = {'foo': {'Type': 'String'}}
- props = properties.Properties(schema, {'foo': 42})
- self.assertRaises(exception.StackValidationFailed, props.validate)
-
- def test_unknown_typo(self):
- schema = {'foo': {'Type': 'String'}}
- props = properties.Properties(schema, {'food': 42})
- self.assertRaises(exception.StackValidationFailed, props.validate)
-
- def test_none_string(self):
- schema = {'foo': {'Type': 'String'}}
- props = properties.Properties(schema, {'foo': None})
- self.assertEqual(props.validate(), None)
-
- def test_none_integer(self):
- schema = {'foo': {'Type': 'Integer'}}
- props = properties.Properties(schema, {'foo': None})
- self.assertEqual(props.validate(), None)
-
- def test_none_number(self):
- schema = {'foo': {'Type': 'Number'}}
- props = properties.Properties(schema, {'foo': None})
- self.assertEqual(props.validate(), None)
-
- def test_none_boolean(self):
- schema = {'foo': {'Type': 'Boolean'}}
- props = properties.Properties(schema, {'foo': None})
- self.assertEqual(props.validate(), None)
-
- def test_none_map(self):
- schema = {'foo': {'Type': 'Map'}}
- props = properties.Properties(schema, {'foo': None})
- self.assertEqual(props.validate(), None)
-
- def test_none_list(self):
- schema = {'foo': {'Type': 'List'}}
- props = properties.Properties(schema, {'foo': None})
- self.assertEqual(props.validate(), None)
-
- def test_none_default_string(self):
- schema = {'foo': {'Type': 'String', 'Default': 'bar'}}
- props = properties.Properties(schema, {'foo': None})
- self.assertEqual(props.validate(), None)
-
- def test_none_default_integer(self):
- schema = {'foo': {'Type': 'Integer', 'Default': 42}}
- props = properties.Properties(schema, {'foo': None})
- self.assertEqual(props.validate(), None)
-
- def test_none_default_number(self):
- schema = {'foo': {'Type': 'Number', 'Default': 42.0}}
- props = properties.Properties(schema, {'foo': None})
- self.assertEqual(props.validate(), None)
-
- def test_none_default_boolean(self):
- schema = {'foo': {'Type': 'Boolean', 'Default': True}}
- props = properties.Properties(schema, {'foo': None})
- self.assertEqual(props.validate(), None)
-
- def test_none_default_map(self):
- schema = {'foo': {'Type': 'Map', 'Default': {'bar': 'baz'}}}
- props = properties.Properties(schema, {'foo': None})
- self.assertEqual(props.validate(), None)
-
- def test_none_default_list(self):
- schema = {'foo': {'Type': 'List', 'Default': ['one', 'two']}}
- props = properties.Properties(schema, {'foo': None})
- self.assertEqual(props.validate(), None)
-
- def test_schema_to_template_nested_map_map_schema(self):
- nested_schema = {'Key': {'Type': 'String',
- 'Required': True},
- 'Value': {'Type': 'String',
- 'Required': True,
- 'Default': 'fewaf'}}
- schema = {'foo': {'Type': 'Map', 'Schema': {'Type': 'Map',
- 'Schema': nested_schema}}}
-
- prop_expected = {'foo': {'Ref': 'foo'}}
- param_expected = {'foo': {'Type': 'Json'}}
- (parameters, props) = \
- properties.Properties.schema_to_parameters_and_properties(schema)
- self.assertEqual(param_expected, parameters)
- self.assertEqual(prop_expected, props)
-
- def test_schema_to_template_nested_map_list_map_schema(self):
- key_schema = {'bar': {'Type': 'Number'}}
- nested_schema = {'Key': {'Type': 'Map', 'Schema': {'Type': 'Map',
- 'Schema': key_schema}},
- 'Value': {'Type': 'String',
- 'Required': True}}
- schema = {'foo': {'Type': 'List', 'Schema': {'Type': 'Map',
- 'Schema': nested_schema}}}
-
- prop_expected = {'foo': {'Fn::Split': {'Ref': 'foo'}}}
- param_expected = {'foo': {'Type': 'CommaDelimitedList'}}
- (parameters, props) = \
- properties.Properties.schema_to_parameters_and_properties(schema)
- self.assertEqual(param_expected, parameters)
- self.assertEqual(prop_expected, props)
-
- def test_schema_invalid_parameters_stripped(self):
- schema = {'foo': {'Type': 'String',
- 'Required': True,
- 'Implemented': True}}
-
- prop_expected = {'foo': {'Ref': 'foo'}}
- param_expected = {'foo': {'Type': 'String'}}
-
- (parameters, props) = \
- properties.Properties.schema_to_parameters_and_properties(schema)
- self.assertEqual(param_expected, parameters)
- self.assertEqual(prop_expected, props)
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import os
-import json
-
-from heat.common import exception
-from heat.common import urlfetch
-from heat.common import template_format
-
-from heat.engine import environment
-from heat.engine import parser
-from heat.engine import properties
-from heat.engine import resource
-from heat.engine import resources
-from heat.engine.resources import template_resource
-
-from heat.openstack.common import uuidutils
-
-from heat.tests import generic_resource as generic_rsrc
-from heat.tests.common import HeatTestCase
-from heat.tests import utils
-
-
-class MyCloudResource(generic_rsrc.GenericResource):
- pass
-
-
-class ProviderTemplateTest(HeatTestCase):
- def setUp(self):
- super(ProviderTemplateTest, self).setUp()
- utils.setup_dummy_db()
- resource._register_class('OS::ResourceType',
- generic_rsrc.GenericResource)
- resource._register_class('myCloud::ResourceType',
- MyCloudResource)
-
- def test_get_os_empty_registry(self):
- # assertion: with an empty environment we get the correct
- # default class.
- env_str = {'resource_registry': {}}
- env = environment.Environment(env_str)
- cls = env.get_class('OS::ResourceType', 'fred')
- self.assertEqual(generic_rsrc.GenericResource, cls)
-
- def test_get_mine_global_map(self):
- # assertion: with a global rule we get the "mycloud" class.
- env_str = {'resource_registry': {"OS::*": "myCloud::*"}}
- env = environment.Environment(env_str)
- cls = env.get_class('OS::ResourceType', 'fred')
- self.assertEqual(MyCloudResource, cls)
-
- def test_get_mine_type_map(self):
-        # assertion: with a type-specific mapping we get the "mycloud" class.
- env_str = {'resource_registry': {
- "OS::ResourceType": "myCloud::ResourceType"}}
- env = environment.Environment(env_str)
- cls = env.get_class('OS::ResourceType', 'fred')
- self.assertEqual(MyCloudResource, cls)
-
- def test_get_mine_resource_map(self):
-        # assertion: with a per-resource mapping we get the "mycloud" class.
- env_str = {'resource_registry': {'resources': {'fred': {
- "OS::ResourceType": "myCloud::ResourceType"}}}}
- env = environment.Environment(env_str)
- cls = env.get_class('OS::ResourceType', 'fred')
- self.assertEqual(MyCloudResource, cls)
-
- def test_get_os_no_match(self):
- # assertion: make sure 'fred' doesn't match 'jerry'.
- env_str = {'resource_registry': {'resources': {'jerry': {
- "OS::ResourceType": "myCloud::ResourceType"}}}}
- env = environment.Environment(env_str)
- cls = env.get_class('OS::ResourceType', 'fred')
- self.assertEqual(generic_rsrc.GenericResource, cls)
-
- def test_to_parameters(self):
- """Tests property conversion to parameter values."""
- provider = {
- 'Parameters': {
- 'Foo': {'Type': 'String'},
- 'AList': {'Type': 'CommaDelimitedList'},
- 'ListEmpty': {'Type': 'CommaDelimitedList'},
- 'ANum': {'Type': 'Number'},
- 'AMap': {'Type': 'Json'},
- },
- 'Outputs': {
- 'Foo': {'Value': 'bar'},
- },
- }
-
- files = {'test_resource.template': json.dumps(provider)}
-
- class DummyResource(object):
- attributes_schema = {"Foo": "A test attribute"}
- properties_schema = {
- "Foo": {"Type": "String"},
- "AList": {"Type": "List"},
- "ListEmpty": {"Type": "List"},
- "ANum": {"Type": "Number"},
- "AMap": {"Type": "Map"}
- }
-
- env = environment.Environment()
- resource._register_class('DummyResource', DummyResource)
- env.load({'resource_registry':
- {'DummyResource': 'test_resource.template'}})
- stack = parser.Stack(utils.dummy_context(), 'test_stack',
- parser.Template({}, files=files), env=env,
- stack_id=uuidutils.generate_uuid())
-
- map_prop_val = {
- "key1": "val1",
- "key2": ["lval1", "lval2", "lval3"],
- "key3": {
- "key4": 4,
- "key5": False
- }
- }
- json_snippet = {
- "Type": "DummyResource",
- "Properties": {
- "Foo": "Bar",
- "AList": ["one", "two", "three"],
- "ListEmpty": [],
- "ANum": 5,
- "AMap": map_prop_val
- }
- }
- temp_res = template_resource.TemplateResource('test_t_res',
- json_snippet, stack)
- temp_res.validate()
- converted_params = temp_res._to_parameters()
- self.assertTrue(converted_params)
- for key in DummyResource.properties_schema:
- self.assertIn(key, converted_params)
- # verify String conversion
- self.assertEqual("Bar", converted_params.get("Foo"))
- # verify List conversion
- self.assertEqual(",".join(json_snippet.get("Properties",
- {}).get("AList",
- [])),
- converted_params.get("AList"))
- # verify Number conversion
- self.assertEqual(5, converted_params.get("ANum"))
- # verify Map conversion
- self.assertEqual(map_prop_val, converted_params.get("AMap"))
-
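`test_to_parameters` fixes the conversion contract: strings and numbers pass through, `List` properties are joined into `CommaDelimitedList` strings (an empty list becomes an empty string), and `Map` properties are handed over intact as `Json` parameter values. A minimal sketch of that conversion (hypothetical helper, not the real `_to_parameters`):

```python
def props_to_params(schema, properties):
    """Convert resource properties to template parameter values.

    Lists become comma-delimited strings; strings, numbers and maps
    (Json parameters) pass through unchanged.
    """
    params = {}
    for name, defn in schema.items():
        value = properties.get(name)
        if value is None:
            continue
        if defn.get('Type') == 'List':
            # "ListEmpty": [] joins to "" just as the test asserts.
            params[name] = ','.join(str(item) for item in value)
        else:
            params[name] = value
    return params
```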
- def test_attributes_extra(self):
- provider = {
- 'Outputs': {
- 'Foo': {'Value': 'bar'},
- 'Blarg': {'Value': 'wibble'},
- },
- }
- files = {'test_resource.template': json.dumps(provider)}
-
- class DummyResource(object):
- properties_schema = {}
- attributes_schema = {"Foo": "A test attribute"}
-
- env = environment.Environment()
- resource._register_class('DummyResource', DummyResource)
- env.load({'resource_registry':
- {'DummyResource': 'test_resource.template'}})
- stack = parser.Stack(utils.dummy_context(), 'test_stack',
- parser.Template({}, files=files), env=env,
- stack_id=uuidutils.generate_uuid())
-
- json_snippet = {
- "Type": "DummyResource",
- }
-
- temp_res = template_resource.TemplateResource('test_t_res',
- json_snippet, stack)
- self.assertIsNone(temp_res.validate())
-
- def test_attributes_missing(self):
- provider = {
- 'Outputs': {
- 'Blarg': {'Value': 'wibble'},
- },
- }
- files = {'test_resource.template': json.dumps(provider)}
-
- class DummyResource(object):
- properties_schema = {}
- attributes_schema = {"Foo": "A test attribute"}
-
- json_snippet = {
- "Type": "DummyResource",
- }
-
- env = environment.Environment()
- resource._register_class('DummyResource', DummyResource)
- env.load({'resource_registry':
- {'DummyResource': 'test_resource.template'}})
- stack = parser.Stack(utils.dummy_context(), 'test_stack',
- parser.Template({}, files=files), env=env,
- stack_id=uuidutils.generate_uuid())
-
- temp_res = template_resource.TemplateResource('test_t_res',
- json_snippet, stack)
- self.assertRaises(exception.StackValidationFailed,
- temp_res.validate)
-
- def test_properties_normal(self):
- provider = {
- 'Parameters': {
- 'Foo': {'Type': 'String'},
- 'Blarg': {'Type': 'String', 'Default': 'wibble'},
- },
- }
- files = {'test_resource.template': json.dumps(provider)}
-
- class DummyResource(object):
- properties_schema = {"Foo": properties.Schema(properties.STRING,
- required=True)}
- attributes_schema = {}
-
- json_snippet = {
- "Type": "DummyResource",
- "Properties": {
- "Foo": "bar",
- },
- }
-
- env = environment.Environment()
- resource._register_class('DummyResource', DummyResource)
- env.load({'resource_registry':
- {'DummyResource': 'test_resource.template'}})
- stack = parser.Stack(utils.dummy_context(), 'test_stack',
- parser.Template({}, files=files), env=env,
- stack_id=uuidutils.generate_uuid())
-
- temp_res = template_resource.TemplateResource('test_t_res',
- json_snippet, stack)
- self.assertIsNone(temp_res.validate())
-
- def test_properties_missing(self):
- provider = {
- 'Parameters': {
- 'Blarg': {'Type': 'String', 'Default': 'wibble'},
- },
- }
- files = {'test_resource.template': json.dumps(provider)}
-
- class DummyResource(object):
- properties_schema = {"Foo": properties.Schema(properties.STRING,
- required=True)}
- attributes_schema = {}
-
- json_snippet = {
- "Type": "DummyResource",
- }
-
- env = environment.Environment()
- resource._register_class('DummyResource', DummyResource)
- env.load({'resource_registry':
- {'DummyResource': 'test_resource.template'}})
- stack = parser.Stack(utils.dummy_context(), 'test_stack',
- parser.Template({}, files=files), env=env,
- stack_id=uuidutils.generate_uuid())
-
- temp_res = template_resource.TemplateResource('test_t_res',
- json_snippet, stack)
- self.assertRaises(exception.StackValidationFailed,
- temp_res.validate)
-
- def test_properties_extra_required(self):
- provider = {
- 'Parameters': {
- 'Blarg': {'Type': 'String'},
- },
- }
- files = {'test_resource.template': json.dumps(provider)}
-
- class DummyResource(object):
- properties_schema = {}
- attributes_schema = {}
-
- json_snippet = {
- "Type": "DummyResource",
- "Properties": {
- "Blarg": "wibble",
- },
- }
-
- env = environment.Environment()
- resource._register_class('DummyResource', DummyResource)
- env.load({'resource_registry':
- {'DummyResource': 'test_resource.template'}})
- stack = parser.Stack(utils.dummy_context(), 'test_stack',
- parser.Template({}, files=files), env=env,
- stack_id=uuidutils.generate_uuid())
-
- temp_res = template_resource.TemplateResource('test_t_res',
- json_snippet, stack)
- self.assertRaises(exception.StackValidationFailed,
- temp_res.validate)
-
- def test_properties_type_mismatch(self):
- provider = {
- 'Parameters': {
- 'Foo': {'Type': 'String'},
- },
- }
- files = {'test_resource.template': json.dumps(provider)}
-
- class DummyResource(object):
- properties_schema = {"Foo": properties.Schema(properties.MAP)}
- attributes_schema = {}
-
- json_snippet = {
- "Type": "DummyResource",
- "Properties": {
- "Foo": "bar",
- },
- }
-
- env = environment.Environment()
- resource._register_class('DummyResource', DummyResource)
- env.load({'resource_registry':
- {'DummyResource': 'test_resource.template'}})
- stack = parser.Stack(utils.dummy_context(), 'test_stack',
- parser.Template({}, files=files), env=env,
- stack_id=uuidutils.generate_uuid())
-
- temp_res = template_resource.TemplateResource('test_t_res',
- json_snippet, stack)
- self.assertRaises(exception.StackValidationFailed,
- temp_res.validate)
-
- def test_get_template_resource(self):
- # assertion: if the name matches {.yaml|.template} we get the
- # TemplateResource class.
- env_str = {'resource_registry': {'resources': {'fred': {
- "OS::ResourceType": "some_magic.yaml"}}}}
- env = environment.Environment(env_str)
- cls = env.get_class('OS::ResourceType', 'fred')
- self.assertEqual(cls, template_resource.TemplateResource)
-
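`test_get_template_resource` relies on the registry treating any mapping value with a template-like file suffix as a provider template rather than a class name. Roughly (hypothetical helper name):

```python
def is_template_ref(value):
    # Registry values ending in .yaml or .template select the generic
    # TemplateResource class instead of a registered Python class.
    return isinstance(value, str) and value.endswith(('.yaml', '.template'))
```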
- def test_template_as_resource(self):
- """
- Test that the resulting resource has the right prop and attrib schema.
-
- Note that this test requires the Wordpress_Single_Instance.yaml
- template in the templates directory since we want to test using a
- non-trivial template.
- """
- test_templ_name = "WordPress_Single_Instance.yaml"
- path = os.path.join(os.path.dirname(os.path.realpath(__file__)),
- 'templates', test_templ_name)
- # check if it's in the directory list vs. exists to work around
- # case-insensitive file systems
- self.assertIn(test_templ_name, os.listdir(os.path.dirname(path)))
- with open(path) as test_templ_file:
- test_templ = test_templ_file.read()
- self.assertTrue(test_templ, "Empty test template")
- self.m.StubOutWithMock(urlfetch, "get")
- urlfetch.get(test_templ_name,
- allowed_schemes=('http', 'https')).AndReturn(test_templ)
- parsed_test_templ = template_format.parse(test_templ)
- self.m.ReplayAll()
- json_snippet = {
- "Type": test_templ_name,
- "Properties": {
- "KeyName": "mykeyname",
- "DBName": "wordpress1",
- "DBUsername": "wpdbuser",
- "DBPassword": "wpdbpass",
- "DBRootPassword": "wpdbrootpass",
- "LinuxDistribution": "U10"
- }
- }
- stack = parser.Stack(None, 'test_stack', parser.Template({}),
- stack_id=uuidutils.generate_uuid())
- templ_resource = resource.Resource("test_templ_resource", json_snippet,
- stack)
- self.m.VerifyAll()
- self.assertIsInstance(templ_resource,
- template_resource.TemplateResource)
- for prop in parsed_test_templ.get("Parameters", {}):
- self.assertIn(prop, templ_resource.properties)
- for attrib in parsed_test_templ.get("Outputs", {}):
- self.assertIn(attrib, templ_resource.attributes)
- for k, v in json_snippet.get("Properties").items():
- self.assertEqual(v, templ_resource.properties[k])
-
- def test_system_template_retrieve_by_file(self):
- # make sure that a TemplateResource defined in the global environment
- # can be created and the template retrieved using the "file:"
- # scheme.
- g_env = resources.global_env()
- test_templ_name = 'file:///etc/heatr/frodo.yaml'
- g_env.load({'resource_registry':
- {'Test::Frodo': test_templ_name}})
- stack = parser.Stack(utils.dummy_context(), 'test_stack',
- parser.Template({}),
- stack_id=uuidutils.generate_uuid())
-
- minimal_temp = json.dumps({'Parameters': {}, 'Resources': {}})
- self.m.StubOutWithMock(urlfetch, "get")
- urlfetch.get(test_templ_name,
- allowed_schemes=('http', 'https',
- 'file')).AndReturn(minimal_temp)
- self.m.ReplayAll()
-
- temp_res = template_resource.TemplateResource('test_t_res',
- {"Type": 'Test::Frodo'},
- stack)
- self.assertIsNone(temp_res.validate())
- self.m.VerifyAll()
-
- def test_user_template_not_retrieved_by_file(self):
- # make sure that a TemplateResource defined in the user environment
- # can NOT be retrieved using the "file:" scheme; validation should fail
- env = environment.Environment()
- test_templ_name = 'file:///etc/heatr/flippy.yaml'
- env.load({'resource_registry':
- {'Test::Flippy': test_templ_name}})
- stack = parser.Stack(utils.dummy_context(), 'test_stack',
- parser.Template({}), env=env,
- stack_id=uuidutils.generate_uuid())
-
- temp_res = template_resource.TemplateResource('test_t_res',
- {"Type": 'Test::Flippy'},
- stack)
-
- self.assertRaises(exception.StackValidationFailed, temp_res.validate)
-
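The pair of `file:` tests above encode a security boundary: `file://` template URLs are honoured only for entries from the operator-controlled global environment, never from a user-supplied one (which could otherwise read arbitrary files from the engine host). The scheme selection amounts to (sketch, assumed function name):

```python
def allowed_schemes(registered_in_global_env):
    # file:// access is reserved for operator-controlled (global)
    # registry entries; user environments get http/https only.
    if registered_in_global_env:
        return ('http', 'https', 'file')
    return ('http', 'https')
```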
- def test_system_template_retrieve_fail(self):
- # make sure that a TemplateResource defined in the global environment
- # fails gracefully if the template file specified is inaccessible
- # we should be able to create the TemplateResource object, but
- # validation should fail when the second attempt to access it is
- # made in validate()
- g_env = resources.global_env()
- test_templ_name = 'file:///etc/heatr/frodo.yaml'
- g_env.load({'resource_registry':
- {'Test::Frodo': test_templ_name}})
- stack = parser.Stack(utils.dummy_context(), 'test_stack',
- parser.Template({}),
- stack_id=uuidutils.generate_uuid())
-
- self.m.StubOutWithMock(urlfetch, "get")
- urlfetch.get(test_templ_name,
- allowed_schemes=('http', 'https',
- 'file')).AndRaise(IOError)
- urlfetch.get(test_templ_name,
- allowed_schemes=('http', 'https',
- 'file')).AndRaise(IOError)
- self.m.ReplayAll()
-
- temp_res = template_resource.TemplateResource('test_t_res',
- {"Type": 'Test::Frodo'},
- stack)
- self.assertRaises(exception.StackValidationFailed, temp_res.validate)
- self.m.VerifyAll()
-
- def test_user_template_retrieve_fail(self):
- # make sure that a TemplateResource defined in the user environment
- # fails gracefully if the template file specified is inaccessible
- # we should be able to create the TemplateResource object, but
- # validation should fail when the second attempt to access it is
- # made in validate()
- env = environment.Environment()
- test_templ_name = 'http://heatr/noexist.yaml'
- env.load({'resource_registry':
- {'Test::Flippy': test_templ_name}})
- stack = parser.Stack(utils.dummy_context(), 'test_stack',
- parser.Template({}), env=env,
- stack_id=uuidutils.generate_uuid())
-
- self.m.StubOutWithMock(urlfetch, "get")
- urlfetch.get(test_templ_name,
- allowed_schemes=('http', 'https')).AndRaise(IOError)
- urlfetch.get(test_templ_name,
- allowed_schemes=('http', 'https')).AndRaise(IOError)
- self.m.ReplayAll()
-
- temp_res = template_resource.TemplateResource('test_t_res',
- {"Type": 'Test::Flippy'},
- stack)
- self.assertRaises(exception.StackValidationFailed, temp_res.validate)
- self.m.VerifyAll()
+++ /dev/null
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import copy
-
-import mox
-import paramiko
-import novaclient
-
-from heat.db import api as db_api
-from heat.tests.v1_1 import fakes
-from heat.common import template_format
-from heat.common import exception
-from heat.engine import parser
-from heat.engine import resource
-from heat.engine import scheduler
-from heat.engine import environment
-from heat.engine.resources.rackspace import cloud_server
-from heat.engine.resources.rackspace import rackspace_resource
-from heat.openstack.common import uuidutils
-from heat.tests.common import HeatTestCase
-from heat.tests import utils
-
-
-wp_template = '''
-{
- "AWSTemplateFormatVersion" : "2010-09-09",
- "Description" : "WordPress",
- "Parameters" : {
- "flavor" : {
- "Description" : "Rackspace Cloud Server flavor",
- "Type" : "String",
- "Default" : "2",
- "AllowedValues" : [ "2", "3", "4", "5", "6", "7", "8" ],
- "ConstraintDescription" : "must be a valid Rackspace Cloud Server flavor"
- }
- },
- "Resources" : {
- "WebServer": {
- "Type": "Rackspace::Cloud::Server",
- "Properties": {
- "image" : "Fedora 17 (Beefy Miracle)",
- "flavor" : "2",
- "user_data" : "wordpress"
- }
- }
- }
-}
-'''
-
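`wp_template`'s `flavor` parameter exercises `AllowedValues`; validating such a parameter reduces to a membership check that reports the `ConstraintDescription` on failure. A hypothetical minimal version, mirroring the `{'Error': ...}` style the validate() tests below expect:

```python
def validate_param(param_def, value):
    # Return None when valid, else the constraint message.
    allowed = param_def.get('AllowedValues')
    if allowed is not None and value not in allowed:
        return param_def.get('ConstraintDescription', 'value not allowed')
    return None
```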
-rsa_key = """-----BEGIN RSA PRIVATE KEY-----
-MIICWwIBAAKBgQDibWGom/83F2xYfVylBZhUbREiVlw42X7afUuHzNJuh/5EyhXQ
-BmBHjVGL1mxZY4GoISrxIkW1jVmTXbm8FknIlS3jxEOC+xF3IkLBtmZEkFVLOUCv
-Fpru1xThFS0L/pRttiTWLm+dsjboCV4qtg/+y30O0RJ5AAFgGkoVs8idrQIDAQAB
-AoGAQU/7037r5yBCiGPgzVkHz5KGVrlCcMOL68ood0uFh4yCs6T3FcJBE2KYGxYG
-uuIRDEZE9LlGElBrfi6S3MYxEbewITK9Li1cr8K0fJlIbg5PI1MxwiTXzG7i0f8Y
-trtZjo/fs8XNSS4xlGWCUgtiNXvLS6wxyDGGbqeh1BmETgECQQDmoPJ3h5kuZguA
-o7B+iTaKXqyWPf0ImsZ0UQYBgnEWTaZEh8W0015jP55mndALWA9pmhHJm+BC/Hfe
-Kp6jtVyxAkEA+1YctDe62u5pXU/GK8UfDJwi4m1VxUfASrlxh+ALag9knwe6Dlev
-EKKIe8R6HZs2zavaJs6dddxHRcIi8rXfvQJAW6octOVwPMDSUY69140x4E1Ay3ZX
-29OojRKnEHKIABVcwGA2dGiOW2Qt0RtoVRnrBk32Q+twdy9hdSv7YZX0AQJAVDaj
-QYNW2Zp+tWRQa0QORkRer+2gioyjEqaWMsfQK0ZjGaIWJk4c+37qKkZIAHmMYFeP
-recW/XHEc8w7t4VXJQJAevSyciBfFcWMZTwlqq8wXNMCRLJt5CxvO4gSO+hPNrDe
-gDZkz7KcZC7TkO0NYVRssA6/84mCqx6QHpKaYNG9kg==
------END RSA PRIVATE KEY-----
-"""
-
-
-class RackspaceCloudServerTest(HeatTestCase):
- def setUp(self):
- super(RackspaceCloudServerTest, self).setUp()
- self.fc = fakes.FakeClient()
- utils.setup_dummy_db()
- # Test environment may not have pyrax client library installed and if
- # pyrax is not installed resource class would not be registered.
- # So register resource provider class explicitly for unit testing.
- resource._register_class("Rackspace::Cloud::Server",
- cloud_server.CloudServer)
-
- f2 = self.m.CreateMockAnything()
- f2.id = '2'
- f3 = self.m.CreateMockAnything()
- f3.id = '3'
- f4 = self.m.CreateMockAnything()
- f4.id = '4'
- f5 = self.m.CreateMockAnything()
- f5.id = '5'
- f6 = self.m.CreateMockAnything()
- f6.id = '6'
- f7 = self.m.CreateMockAnything()
- f7.id = '7'
- f8 = self.m.CreateMockAnything()
- f8.id = '8'
- self.flavors = [f2, f3, f4, f5, f6, f7, f8]
-
- def _setup_test_stack(self, stack_name):
- t = template_format.parse(wp_template)
- template = parser.Template(t)
- stack = parser.Stack(utils.dummy_context(), stack_name, template,
- environment.Environment({'flavor': '2'}),
- stack_id=uuidutils.generate_uuid())
- return (t, stack)
-
- def _mock_ssh_sftp(self):
- # SSH
- self.m.StubOutWithMock(paramiko, "SSHClient")
- self.m.StubOutWithMock(paramiko, "MissingHostKeyPolicy")
- ssh = self.m.CreateMockAnything()
- paramiko.SSHClient().AndReturn(ssh)
- paramiko.MissingHostKeyPolicy()
- ssh.set_missing_host_key_policy(None)
- ssh.connect(mox.IgnoreArg(),
- key_filename=mox.IgnoreArg(),
- username='root')
- stdin = self.m.CreateMockAnything()
- stdout = self.m.CreateMockAnything()
- stderr = self.m.CreateMockAnything()
- stdout.read().AndReturn("stdout")
- stderr.read().AndReturn("stderr")
- ssh.exec_command(mox.IgnoreArg()).AndReturn((stdin, stdout, stderr))
-
- # SFTP
- self.m.StubOutWithMock(paramiko, "Transport")
- transport = self.m.CreateMockAnything()
- paramiko.Transport((mox.IgnoreArg(), 22)).AndReturn(transport)
- transport.connect(hostkey=None, username="root", pkey=mox.IgnoreArg())
- sftp = self.m.CreateMockAnything()
- self.m.StubOutWithMock(paramiko, "SFTPClient")
- paramiko.SFTPClient.from_transport(transport).AndReturn(sftp)
- sftp_file = self.m.CreateMockAnything()
- sftp.open(mox.IgnoreArg(), 'w').AndReturn(sftp_file)
- sftp_file.write(mox.IgnoreArg())
- sftp_file.close()
- sftp_file = self.m.CreateMockAnything()
- sftp.open(mox.IgnoreArg(), 'w').AndReturn(sftp_file)
- sftp_file.write(mox.IgnoreArg())
- sftp_file.close()
-
- def _setup_test_cs(self, return_server, name):
- stack_name = '%s_stack' % name
- (t, stack) = self._setup_test_stack(stack_name)
-
- cs_name = 'Fedora 17 (Beefy Miracle)'
- t['Resources']['WebServer']['Properties']['image'] = '1'
- t['Resources']['WebServer']['Properties']['flavor'] = '2'
-
- cs = cloud_server.CloudServer('%s_name' % name,
- t['Resources']['WebServer'], stack)
- cs._private_key = rsa_key
- cs.t = cs.stack.resolve_runtime_data(cs.t)
-
- flavor = t['Resources']['WebServer']['Properties']['flavor']
-
- self.m.StubOutWithMock(self.fc.servers, 'create')
- self.fc.servers.create(utils.PhysName(stack_name, cs.name),
- "1", flavor,
- files=mox.IgnoreArg()).AndReturn(return_server)
- return_server.adminPass = "foobar"
-
- self.m.StubOutWithMock(cloud_server.CloudServer, 'script')
- cloud_server.CloudServer.script = "foobar"
-
- self.m.StubOutWithMock(rackspace_resource.RackspaceResource, "nova")
- rackspace_resource.RackspaceResource.nova().MultipleTimes()\
- .AndReturn(self.fc)
-
- self._mock_ssh_sftp()
- return cs
-
- def _create_test_cs(self, return_server, name):
- cs = self._setup_test_cs(return_server, name)
-
- self.m.ReplayAll()
- scheduler.TaskRunner(cs.create)()
- return cs
-
- def _update_test_cs(self, return_server, name):
- self._mock_ssh_sftp()
- self.m.StubOutWithMock(rackspace_resource.RackspaceResource, "nova")
- rackspace_resource.RackspaceResource.nova().MultipleTimes()\
- .AndReturn(self.fc)
-
- def test_cs_create(self):
- return_server = self.fc.servers.list()[1]
- cs = self._create_test_cs(return_server, 'test_cs_create')
- # this makes sure the auto increment worked on cloud server creation
- self.assertTrue(cs.id > 0)
-
- expected_public = return_server.networks['public'][0]
- expected_private = return_server.networks['private'][0]
- self.assertEqual(cs.FnGetAtt('PublicIp'), expected_public)
- self.assertEqual(cs.FnGetAtt('PrivateIp'), expected_private)
- self.assertEqual(cs.FnGetAtt('PublicDnsName'), expected_public)
- self.assertEqual(cs.FnGetAtt('PrivateDnsName'), expected_public)
-
- self.m.VerifyAll()
-
- def test_cs_create_with_image_name(self):
- return_server = self.fc.servers.list()[1]
- cs = self._setup_test_cs(return_server, 'test_cs_create_image_id')
-
- self.m.ReplayAll()
- scheduler.TaskRunner(cs.create)()
-
- # this makes sure the auto increment worked on cloud server creation
- self.assertTrue(cs.id > 0)
-
- expected_public = return_server.networks['public'][0]
- expected_private = return_server.networks['private'][0]
- self.assertEqual(cs.FnGetAtt('PublicIp'), expected_public)
- self.assertEqual(cs.FnGetAtt('PrivateIp'), expected_private)
- self.assertEqual(cs.FnGetAtt('PublicDnsName'), expected_public)
- self.assertEqual(cs.FnGetAtt('PrivateDnsName'), expected_public)
- self.assertRaises(exception.InvalidTemplateAttribute,
- cs.FnGetAtt, 'foo')
- self.m.VerifyAll()
-
- def test_cs_create_image_name_err(self):
- stack_name = 'test_cs_create_image_name_err_stack'
- (t, stack) = self._setup_test_stack(stack_name)
-
- # create a cloud server with a non-existent image name
- t['Resources']['WebServer']['Properties']['image'] = 'Slackware'
-
- # Mock flavors
- self.m.StubOutWithMock(cloud_server.CloudServer, "flavors")
- cloud_server.CloudServer.flavors.__contains__('2').AndReturn(True)
- cloud_server.CloudServer.script = None
- self.m.ReplayAll()
-
- cs = cloud_server.CloudServer('cs_create_image_err',
- t['Resources']['WebServer'], stack)
-
- self.assertEqual({'Error': "user_data/metadata are not supported with "
- "Slackware."},
- cs.validate())
- self.m.VerifyAll()
-
- def test_cs_create_image_name_okay(self):
- stack_name = 'test_cs_create_image_name_okay_stack'
- (t, stack) = self._setup_test_stack(stack_name)
-
- # create a cloud server with a non-existent image name
- t['Resources']['WebServer']['Properties']['image'] = 'Slackware'
- t['Resources']['WebServer']['Properties']['user_data'] = ''
-
- # Mock flavors
- self.m.StubOutWithMock(cloud_server.CloudServer, "flavors")
- cloud_server.CloudServer.flavors.__contains__('2').AndReturn(True)
- cloud_server.CloudServer.script = None
- self.m.ReplayAll()
-
- cs = cloud_server.CloudServer('cs_create_image_err',
- t['Resources']['WebServer'], stack)
-
- self.assertIsNone(cs.validate())
- self.m.VerifyAll()
-
- def test_cs_create_flavor_err(self):
- """validate() should throw an if the flavor is invalid."""
- stack_name = 'test_cs_create_flavor_err_stack'
- (t, stack) = self._setup_test_stack(stack_name)
-
- # create a cloud server with a non-existent flavor
- t['Resources']['WebServer']['Properties']['flavor'] = '1'
-
- # Mock flavors
- self.m.StubOutWithMock(cloud_server.CloudServer, "flavors")
- flavors = ['2', '3', '4', '5', '6', '7', '8']
- cloud_server.CloudServer.flavors = flavors
- self.m.ReplayAll()
-
- cs = cloud_server.CloudServer('cs_create_flavor_err',
- t['Resources']['WebServer'], stack)
-
- self.assertEqual({'Error': "flavor not found."}, cs.validate())
-
- self.m.VerifyAll()
-
- def test_cs_create_delete(self):
- return_server = self.fc.servers.list()[1]
- cs = self._create_test_cs(return_server,
- 'test_cs_create_delete')
- cs.resource_id = 1234
-
- # this makes sure the auto-increment worked on cloud server creation
- self.assertTrue(cs.id > 0)
-
- self.m.StubOutWithMock(self.fc.client, 'get_servers_1234')
- get = self.fc.client.get_servers_1234
- get().AndRaise(novaclient.exceptions.NotFound(404))
- mox.Replay(get)
-
- scheduler.TaskRunner(cs.delete)()
- self.assertIsNone(cs.resource_id)
- self.assertEqual(cs.state, (cs.DELETE, cs.COMPLETE))
- self.m.VerifyAll()
-
- def test_cs_update_metadata(self):
- return_server = self.fc.servers.list()[1]
- cs = self._create_test_cs(return_server, 'test_cs_metadata_update')
- self.m.UnsetStubs()
- self._update_test_cs(return_server, 'test_cs_metadata_update')
- self.m.ReplayAll()
- update_template = copy.deepcopy(cs.t)
- update_template['Metadata'] = {'test': 123}
- scheduler.TaskRunner(cs.update, update_template)()
- self.assertEqual(cs.metadata, {'test': 123})
-
- def test_cs_update_replace(self):
- return_server = self.fc.servers.list()[1]
- cs = self._create_test_cs(return_server, 'test_cs_update')
-
- update_template = copy.deepcopy(cs.t)
- update_template['Notallowed'] = {'test': 123}
- updater = scheduler.TaskRunner(cs.update, update_template)
- self.assertRaises(resource.UpdateReplace, updater)
-
- def test_cs_update_properties(self):
- return_server = self.fc.servers.list()[1]
- cs = self._create_test_cs(return_server, 'test_cs_update')
-
- update_template = copy.deepcopy(cs.t)
- update_template['Properties']['user_data'] = 'mustreplace'
- updater = scheduler.TaskRunner(cs.update, update_template)
- self.assertRaises(resource.UpdateReplace, updater)
-
- def test_cs_status_build(self):
- return_server = self.fc.servers.list()[0]
- cs = self._setup_test_cs(return_server, 'test_cs_status_build')
- cs.resource_id = 1234
-
- # Bind fake get method which cs.check_create_complete will call
- def activate_status(server):
- server.status = 'ACTIVE'
- return_server.get = activate_status.__get__(return_server)
- self.m.ReplayAll()
-
- scheduler.TaskRunner(cs.create)()
- self.assertEqual(cs.state, (cs.CREATE, cs.COMPLETE))
-
- def test_cs_status_hard_reboot(self):
- self._test_cs_status_not_build_active('HARD_REBOOT')
-
- def test_cs_status_password(self):
- self._test_cs_status_not_build_active('PASSWORD')
-
- def test_cs_status_reboot(self):
- self._test_cs_status_not_build_active('REBOOT')
-
- def test_cs_status_rescue(self):
- self._test_cs_status_not_build_active('RESCUE')
-
- def test_cs_status_resize(self):
- self._test_cs_status_not_build_active('RESIZE')
-
- def test_cs_status_revert_resize(self):
- self._test_cs_status_not_build_active('REVERT_RESIZE')
-
- def test_cs_status_shutoff(self):
- self._test_cs_status_not_build_active('SHUTOFF')
-
- def test_cs_status_suspended(self):
- self._test_cs_status_not_build_active('SUSPENDED')
-
- def test_cs_status_verify_resize(self):
- self._test_cs_status_not_build_active('VERIFY_RESIZE')
-
- def _test_cs_status_not_build_active(self, uncommon_status):
- return_server = self.fc.servers.list()[0]
- cs = self._setup_test_cs(return_server, 'test_cs_status_build')
- cs.resource_id = 1234
-
- # Bind fake get method which cs.check_create_complete will call
- def activate_status(server):
- if hasattr(server, '_test_check_iterations'):
- server._test_check_iterations += 1
- else:
- server._test_check_iterations = 1
- if server._test_check_iterations == 1:
- server.status = uncommon_status
- if server._test_check_iterations > 2:
- server.status = 'ACTIVE'
- return_server.get = activate_status.__get__(return_server)
- self.m.ReplayAll()
-
- scheduler.TaskRunner(cs.create)()
- self.assertEqual(cs.state, (cs.CREATE, cs.COMPLETE))
-
- self.m.VerifyAll()
-
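`_test_cs_status_not_build_active` steps the fake server through a transient status before `ACTIVE`, i.e. it exercises a poll-until-active loop: any non-ACTIVE, non-error status (REBOOT, RESIZE, SHUTOFF, ...) just means "keep waiting". The pattern, reduced to a sketch (assumed helper name, not Heat's actual `check_create_complete`):

```python
def poll_until_active(get_status, max_polls=10):
    """Poll a status callable until it reports ACTIVE.

    Transient statuses mean "not done yet"; ERROR aborts.
    Returns the number of polls it took.
    """
    for attempt in range(1, max_polls + 1):
        status = get_status()
        if status == 'ACTIVE':
            return attempt
        if status == 'ERROR':
            raise RuntimeError('server went to ERROR')
    raise RuntimeError('timed out waiting for ACTIVE')
```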
- def mock_get_ip(self, cs):
- self.m.UnsetStubs()
- self.m.StubOutWithMock(cloud_server.CloudServer, "server")
- cloud_server.CloudServer.server = cs
- self.m.ReplayAll()
-
- def test_cs_get_ip(self):
- stack_name = 'test_cs_get_ip_err'
- (t, stack) = self._setup_test_stack(stack_name)
- cs = cloud_server.CloudServer('cs_create_image_err',
- t['Resources']['WebServer'],
- stack)
- cs.addresses = {'public': [{'version': 4, 'addr': '4.5.6.7'},
- {'version': 6, 'addr': 'fake:ip::6'}],
- 'private': [{'version': 4, 'addr': '10.13.12.13'}]}
- self.mock_get_ip(cs)
- self.assertEqual(cs.public_ip, '4.5.6.7')
- self.mock_get_ip(cs)
- self.assertEqual(cs.private_ip, '10.13.12.13')
-
- cs.addresses = {'public': [],
- 'private': []}
- self.mock_get_ip(cs)
- self.assertRaises(exception.Error, cs._get_ip, 'public')
-
- def test_private_key(self):
- stack_name = 'test_private_key'
- (t, stack) = self._setup_test_stack(stack_name)
- cs = cloud_server.CloudServer('cs_private_key',
- t['Resources']['WebServer'],
- stack)
-
- # This gives the fake cloud server an id and created_time attribute
- cs._store_or_update(cs.CREATE, cs.IN_PROGRESS, 'test_store')
-
- cs.private_key = 'fake private key'
- rs = db_api.resource_get_by_name_and_stack(None,
- 'cs_private_key',
- stack.id)
- encrypted_key = rs.data[0]['value']
- self.assertNotEqual(encrypted_key, "fake private key")
- decrypted_key = cs.private_key
- self.assertEqual(decrypted_key, "fake private key")
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import itertools
-
-from heat.common import exception
-from heat.engine import dependencies
-from heat.engine import parser
-from heat.engine import resource
-from heat.engine import scheduler
-from heat.engine import template
-from heat.engine import environment
-from heat.openstack.common import uuidutils
-import heat.db.api as db_api
-
-from heat.tests import generic_resource as generic_rsrc
-from heat.tests.common import HeatTestCase
-from heat.tests import utils
-
-
-class ResourceTest(HeatTestCase):
- def setUp(self):
- super(ResourceTest, self).setUp()
- utils.setup_dummy_db()
-
- resource._register_class('GenericResourceType',
- generic_rsrc.GenericResource)
-
- env = environment.Environment()
- env.load({u'resource_registry':
- {u'OS::Test::GenericResource': u'GenericResourceType'}})
-
- self.stack = parser.Stack(utils.dummy_context(), 'test_stack',
- parser.Template({}), env=env,
- stack_id=uuidutils.generate_uuid())
-
- def test_get_class_ok(self):
- cls = resource.get_class('GenericResourceType')
- self.assertEqual(cls, generic_rsrc.GenericResource)
-
- def test_get_class_noexist(self):
- self.assertRaises(exception.StackValidationFailed, resource.get_class,
- 'NoExistResourceType')
-
- def test_resource_new_ok(self):
- snippet = {'Type': 'GenericResourceType'}
- res = resource.Resource('aresource', snippet, self.stack)
- self.assertIsInstance(res, generic_rsrc.GenericResource)
-
- def test_resource_new_err(self):
- snippet = {'Type': 'NoExistResourceType'}
- self.assertRaises(exception.StackValidationFailed,
- resource.Resource, 'aresource', snippet, self.stack)
-
- def test_state_defaults(self):
- tmpl = {'Type': 'Foo'}
- res = generic_rsrc.GenericResource('test_res_def', tmpl, self.stack)
- self.assertEqual(res.state, (res.INIT, res.COMPLETE))
- self.assertEqual(res.status_reason, '')
-
- def test_state_set(self):
- tmpl = {'Type': 'Foo'}
- res = generic_rsrc.GenericResource('test_resource', tmpl, self.stack)
- res.state_set(res.CREATE, res.COMPLETE, 'wibble')
- self.assertEqual(res.action, res.CREATE)
- self.assertEqual(res.status, res.COMPLETE)
- self.assertEqual(res.state, (res.CREATE, res.COMPLETE))
- self.assertEqual(res.status_reason, 'wibble')
-
- def test_state_set_invalid(self):
- tmpl = {'Type': 'Foo'}
- res = generic_rsrc.GenericResource('test_resource', tmpl, self.stack)
- self.assertRaises(ValueError, res.state_set, 'foo', 'bla')
- self.assertRaises(ValueError, res.state_set, 'foo', res.COMPLETE)
- self.assertRaises(ValueError, res.state_set, res.CREATE, 'bla')
-
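The `state_set` tests treat resource state as an `(action, status)` tuple drawn from a fixed vocabulary, raising `ValueError` for anything else. A minimal model of that validation (the action/status lists are assumed from what the tests use, not exhaustive):

```python
ACTIONS = ('INIT', 'CREATE', 'DELETE', 'UPDATE', 'ROLLBACK',
           'SUSPEND', 'RESUME')
STATUSES = ('IN_PROGRESS', 'COMPLETE', 'FAILED')

def state_set(action, status):
    # Reject unknown actions/statuses, as test_state_set_invalid expects.
    if action not in ACTIONS:
        raise ValueError('invalid action %r' % action)
    if status not in STATUSES:
        raise ValueError('invalid status %r' % status)
    return (action, status)
```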
- def test_state_del_stack(self):
- tmpl = {'Type': 'Foo'}
- self.stack.action = self.stack.DELETE
- self.stack.status = self.stack.IN_PROGRESS
- res = generic_rsrc.GenericResource('test_resource', tmpl, self.stack)
- self.assertEqual(res.DELETE, res.action)
- self.assertEqual(res.COMPLETE, res.status)
-
- def test_type(self):
- tmpl = {'Type': 'Foo'}
- res = generic_rsrc.GenericResource('test_resource', tmpl, self.stack)
- self.assertEqual(res.type(), 'Foo')
-
- def test_has_interface_direct_match(self):
- tmpl = {'Type': 'GenericResourceType'}
- res = generic_rsrc.GenericResource('test_resource', tmpl, self.stack)
- self.assertTrue(res.has_interface('GenericResourceType'))
-
- def test_has_interface_no_match(self):
- tmpl = {'Type': 'GenericResourceType'}
- res = generic_rsrc.GenericResource('test_resource', tmpl, self.stack)
- self.assertFalse(res.has_interface('LookingForAnotherType'))
-
- def test_has_interface_mapping(self):
- tmpl = {'Type': 'OS::Test::GenericResource'}
- res = generic_rsrc.GenericResource('test_resource', tmpl, self.stack)
- self.assertTrue(res.has_interface('GenericResourceType'))
-
- def test_created_time(self):
- tmpl = {'Type': 'Foo'}
- res = generic_rsrc.GenericResource('test_res_new', tmpl, self.stack)
- self.assertEqual(res.created_time, None)
- res._store()
- self.assertNotEqual(res.created_time, None)
-
- def test_updated_time(self):
- tmpl = {'Type': 'Foo'}
- res = generic_rsrc.GenericResource('test_res_upd', tmpl, self.stack)
- res._store()
- stored_time = res.updated_time
- res.state_set(res.CREATE, res.IN_PROGRESS, 'testing')
- self.assertNotEqual(res.updated_time, None)
- self.assertNotEqual(res.updated_time, stored_time)
-
- def test_store_or_update(self):
- tmpl = {'Type': 'Foo'}
- res = generic_rsrc.GenericResource('test_res_upd', tmpl, self.stack)
- res._store_or_update(res.CREATE, res.IN_PROGRESS, 'test_store')
- self.assertNotEqual(None, res.id)
- self.assertEqual(res.action, res.CREATE)
- self.assertEqual(res.status, res.IN_PROGRESS)
- self.assertEqual(res.status_reason, 'test_store')
-
- db_res = db_api.resource_get(None, res.id)
- self.assertEqual(db_res.action, res.CREATE)
- self.assertEqual(db_res.status, res.IN_PROGRESS)
- self.assertEqual(db_res.status_reason, 'test_store')
-
- res._store_or_update(res.CREATE, res.COMPLETE, 'test_update')
- self.assertEqual(res.action, res.CREATE)
- self.assertEqual(res.status, res.COMPLETE)
- self.assertEqual(res.status_reason, 'test_update')
- self.assertEqual(db_res.action, res.CREATE)
- self.assertEqual(db_res.status, res.COMPLETE)
- self.assertEqual(db_res.status_reason, 'test_update')
-
- def test_parsed_template(self):
- tmpl = {
- 'Type': 'Foo',
- 'foo': {'Fn::Join': [' ', ['bar', 'baz', 'quux']]}
- }
- res = generic_rsrc.GenericResource('test_resource', tmpl, self.stack)
-
- parsed_tmpl = res.parsed_template()
- self.assertEqual(parsed_tmpl['Type'], 'Foo')
- self.assertEqual(parsed_tmpl['foo'], 'bar baz quux')
-
- self.assertEqual(res.parsed_template('foo'), 'bar baz quux')
- self.assertEqual(res.parsed_template('foo', 'bar'), 'bar baz quux')
-
- def test_parsed_template_default(self):
- tmpl = {'Type': 'Foo'}
- res = generic_rsrc.GenericResource('test_resource', tmpl, self.stack)
- self.assertEqual(res.parsed_template('foo'), {})
- self.assertEqual(res.parsed_template('foo', 'bar'), 'bar')
-
- def test_metadata_default(self):
- tmpl = {'Type': 'Foo'}
- res = generic_rsrc.GenericResource('test_resource', tmpl, self.stack)
- self.assertEqual(res.metadata, {})
-
- def test_equals_different_stacks(self):
- tmpl1 = {'Type': 'Foo'}
- tmpl2 = {'Type': 'Foo'}
- tmpl3 = {'Type': 'Bar'}
- stack2 = parser.Stack(utils.dummy_context(), 'test_stack',
- parser.Template({}), stack_id=-1)
- res1 = generic_rsrc.GenericResource('test_resource', tmpl1, self.stack)
- res2 = generic_rsrc.GenericResource('test_resource', tmpl2, stack2)
- res3 = generic_rsrc.GenericResource('test_resource2', tmpl3, stack2)
-
- self.assertEqual(res1, res2)
- self.assertNotEqual(res1, res3)
-
- def test_equals_names(self):
- tmpl1 = {'Type': 'Foo'}
- tmpl2 = {'Type': 'Foo'}
- res1 = generic_rsrc.GenericResource('test_resource1',
- tmpl1, self.stack)
- res2 = generic_rsrc.GenericResource('test_resource2', tmpl2,
- self.stack)
-
- self.assertNotEqual(res1, res2)
-
- def test_update_template_diff_empty(self):
- tmpl = {'Type': 'Foo'}
- update_snippet = {}
- res = generic_rsrc.GenericResource('test_resource', tmpl, self.stack)
- self.assertRaises(resource.UpdateReplace, res.update_template_diff,
- update_snippet, tmpl)
-
- def test_update_template_diff_changed_notallowed(self):
- tmpl = {'Type': 'Foo'}
- update_snippet = {'Type': 'Bar'}
- res = generic_rsrc.GenericResource('test_resource', tmpl, self.stack)
- self.assertRaises(resource.UpdateReplace, res.update_template_diff,
- update_snippet, tmpl)
-
- def test_update_template_diff_changed_modified(self):
- tmpl = {'Type': 'Foo', 'Metadata': {'foo': 123}}
- update_snippet = {'Type': 'Foo', 'Metadata': {'foo': 456}}
- res = generic_rsrc.GenericResource('test_resource', tmpl, self.stack)
- res.update_allowed_keys = ('Metadata',)
- diff = res.update_template_diff(update_snippet, tmpl)
- self.assertEqual(diff, {'Metadata': {'foo': 456}})
-
- def test_update_template_diff_changed_add(self):
- tmpl = {'Type': 'Foo'}
- update_snippet = {'Type': 'Foo', 'Metadata': {'foo': 123}}
- res = generic_rsrc.GenericResource('test_resource', tmpl, self.stack)
- res.update_allowed_keys = ('Metadata',)
- diff = res.update_template_diff(update_snippet, tmpl)
- self.assertEqual(diff, {'Metadata': {'foo': 123}})
-
- def test_update_template_diff_changed_remove(self):
- tmpl = {'Type': 'Foo', 'Metadata': {'foo': 123}}
- update_snippet = {'Type': 'Foo'}
- res = generic_rsrc.GenericResource('test_resource', tmpl, self.stack)
- res.update_allowed_keys = ('Metadata',)
- diff = res.update_template_diff(update_snippet, tmpl)
- self.assertEqual(diff, {'Metadata': None})
-
- def test_update_template_diff_properties_none(self):
- tmpl = {'Type': 'Foo'}
- update_snippet = {'Type': 'Foo'}
- res = generic_rsrc.GenericResource('test_resource', tmpl, self.stack)
- diff = res.update_template_diff_properties(update_snippet, tmpl)
- self.assertEqual(diff, {})
-
- def test_update_template_diff_properties_added(self):
- tmpl = {'Type': 'Foo'}
- update_snippet = {'Type': 'Foo', 'Properties': {'Bar': 123}}
- res = generic_rsrc.GenericResource('test_resource', tmpl, self.stack)
- res.update_allowed_properties = ('Bar',)
- diff = res.update_template_diff_properties(update_snippet, tmpl)
- self.assertEqual(diff, {'Bar': 123})
-
- def test_update_template_diff_properties_removed(self):
- tmpl = {'Type': 'Foo', 'Properties': {'Bar': 123}}
- update_snippet = {'Type': 'Foo', 'Properties': {}}
- res = generic_rsrc.GenericResource('test_resource', tmpl, self.stack)
- res.update_allowed_properties = ('Bar',)
- diff = res.update_template_diff_properties(update_snippet, tmpl)
- self.assertEqual(diff, {'Bar': None})
-
- def test_update_template_diff_properties_changed(self):
- tmpl = {'Type': 'Foo', 'Properties': {'Bar': 123}}
- update_snippet = {'Type': 'Foo', 'Properties': {'Bar': 456}}
- res = generic_rsrc.GenericResource('test_resource', tmpl, self.stack)
- res.update_allowed_properties = ('Bar',)
- diff = res.update_template_diff_properties(update_snippet, tmpl)
- self.assertEqual(diff, {'Bar': 456})
-
- def test_update_template_diff_properties_notallowed(self):
- tmpl = {'Type': 'Foo', 'Properties': {'Bar': 123}}
- update_snippet = {'Type': 'Foo', 'Properties': {'Bar': 456}}
- res = generic_rsrc.GenericResource('test_resource', tmpl, self.stack)
- res.update_allowed_properties = ('Cat',)
- self.assertRaises(resource.UpdateReplace,
- res.update_template_diff_properties,
- update_snippet, tmpl)
-
- def test_resource(self):
- tmpl = {'Type': 'GenericResourceType', 'Properties': {'Foo': 'abc'}}
- res = generic_rsrc.ResourceWithProps('test_resource', tmpl, self.stack)
- scheduler.TaskRunner(res.create)()
- self.assertEqual((res.CREATE, res.COMPLETE), res.state)
-
- def test_create_fail_missing_req_prop(self):
- tmpl = {'Type': 'GenericResourceType', 'Properties': {}}
- rname = 'test_resource'
- res = generic_rsrc.ResourceWithRequiredProps(rname, tmpl, self.stack)
-
- estr = 'Property error : test_resource: Property Foo not assigned'
- create = scheduler.TaskRunner(res.create)
- self.assertRaises(exception.ResourceFailure, create)
- self.assertEqual((res.CREATE, res.FAILED), res.state)
-
- def test_create_fail_prop_typo(self):
- tmpl = {'Type': 'GenericResourceType', 'Properties': {'Food': 'abc'}}
- rname = 'test_resource'
- res = generic_rsrc.ResourceWithProps(rname, tmpl, self.stack)
-
- estr = 'Property error : test_resource: Property Foo not assigned'
- create = scheduler.TaskRunner(res.create)
- self.assertRaises(exception.ResourceFailure, create)
- self.assertEqual((res.CREATE, res.FAILED), res.state)
-
- def test_create_fail_metadata_parse_error(self):
- tmpl = {'Type': 'GenericResourceType', 'Properties': {},
- 'Metadata': {"Fn::GetAtt": ["ResourceA", "abc"]}}
- rname = 'test_resource'
- res = generic_rsrc.ResourceWithProps(rname, tmpl, self.stack)
-
- create = scheduler.TaskRunner(res.create)
- self.assertRaises(exception.ResourceFailure, create)
- self.assertEqual((res.CREATE, res.FAILED), res.state)
-
- def test_create_resource_after_destroy(self):
- tmpl = {'Type': 'GenericResourceType'}
- rname = 'test_res_id_none'
- res = generic_rsrc.ResourceWithProps(rname, tmpl, self.stack)
- res.id = 'test_res_id'
- (res.action, res.status) = (res.INIT, res.DELETE)
- self.assertRaises(exception.ResourceFailure, res.create)
- scheduler.TaskRunner(res.destroy)()
- res.state_reset()
- scheduler.TaskRunner(res.create)()
- self.assertEqual((res.CREATE, res.COMPLETE), res.state)
-
- def test_update_ok(self):
- tmpl = {'Type': 'GenericResourceType', 'Properties': {'Foo': 'abc'}}
- res = generic_rsrc.ResourceWithProps('test_resource', tmpl, self.stack)
- res.update_allowed_keys = ('Properties',)
- res.update_allowed_properties = ('Foo',)
- scheduler.TaskRunner(res.create)()
- self.assertEqual((res.CREATE, res.COMPLETE), res.state)
-
- utmpl = {'Type': 'GenericResourceType', 'Properties': {'Foo': 'xyz'}}
- tmpl_diff = {'Properties': {'Foo': 'xyz'}}
- prop_diff = {'Foo': 'xyz'}
- self.m.StubOutWithMock(generic_rsrc.ResourceWithProps, 'handle_update')
- generic_rsrc.ResourceWithProps.handle_update(
- utmpl, tmpl_diff, prop_diff).AndReturn(None)
- self.m.ReplayAll()
-
- scheduler.TaskRunner(res.update, utmpl)()
- self.assertEqual((res.UPDATE, res.COMPLETE), res.state)
- self.m.VerifyAll()
-
- def test_update_replace(self):
- tmpl = {'Type': 'GenericResourceType', 'Properties': {'Foo': 'abc'}}
- res = generic_rsrc.ResourceWithProps('test_resource', tmpl, self.stack)
- res.update_allowed_keys = ('Properties',)
- res.update_allowed_properties = ('Foo',)
- scheduler.TaskRunner(res.create)()
- self.assertEqual((res.CREATE, res.COMPLETE), res.state)
-
- utmpl = {'Type': 'GenericResourceType', 'Properties': {'Foo': 'xyz'}}
- self.m.StubOutWithMock(generic_rsrc.ResourceWithProps, 'handle_update')
- tmpl_diff = {'Properties': {'Foo': 'xyz'}}
- prop_diff = {'Foo': 'xyz'}
- generic_rsrc.ResourceWithProps.handle_update(
- utmpl, tmpl_diff, prop_diff).AndRaise(resource.UpdateReplace())
- self.m.ReplayAll()
- # should be re-raised so parser.Stack can handle replacement
- updater = scheduler.TaskRunner(res.update, utmpl)
- self.assertRaises(resource.UpdateReplace, updater)
- self.m.VerifyAll()
-
- def test_update_fail_missing_req_prop(self):
- tmpl = {'Type': 'GenericResourceType', 'Properties': {'Foo': 'abc'}}
- res = generic_rsrc.ResourceWithRequiredProps('test_resource',
- tmpl, self.stack)
- res.update_allowed_keys = ('Properties',)
- res.update_allowed_properties = ('Foo',)
- scheduler.TaskRunner(res.create)()
- self.assertEqual((res.CREATE, res.COMPLETE), res.state)
-
- utmpl = {'Type': 'GenericResourceType', 'Properties': {}}
-
- updater = scheduler.TaskRunner(res.update, utmpl)
- self.assertRaises(exception.ResourceFailure, updater)
- self.assertEqual((res.UPDATE, res.FAILED), res.state)
-
- def test_update_fail_prop_typo(self):
- tmpl = {'Type': 'GenericResourceType', 'Properties': {'Foo': 'abc'}}
- res = generic_rsrc.ResourceWithProps('test_resource', tmpl, self.stack)
- res.update_allowed_keys = ('Properties',)
- res.update_allowed_properties = ('Foo',)
- scheduler.TaskRunner(res.create)()
- self.assertEqual((res.CREATE, res.COMPLETE), res.state)
-
- utmpl = {'Type': 'GenericResourceType', 'Properties': {'Food': 'xyz'}}
-
- updater = scheduler.TaskRunner(res.update, utmpl)
- self.assertRaises(exception.ResourceFailure, updater)
- self.assertEqual((res.UPDATE, res.FAILED), res.state)
-
- def test_update_not_implemented(self):
- tmpl = {'Type': 'GenericResourceType', 'Properties': {'Foo': 'abc'}}
- res = generic_rsrc.ResourceWithProps('test_resource', tmpl, self.stack)
- res.update_allowed_keys = ('Properties',)
- res.update_allowed_properties = ('Foo',)
- scheduler.TaskRunner(res.create)()
- self.assertEqual((res.CREATE, res.COMPLETE), res.state)
-
- utmpl = {'Type': 'GenericResourceType', 'Properties': {'Foo': 'xyz'}}
- tmpl_diff = {'Properties': {'Foo': 'xyz'}}
- prop_diff = {'Foo': 'xyz'}
- self.m.StubOutWithMock(generic_rsrc.ResourceWithProps, 'handle_update')
- generic_rsrc.ResourceWithProps.handle_update(
- utmpl, tmpl_diff, prop_diff).AndRaise(NotImplementedError())
- self.m.ReplayAll()
- updater = scheduler.TaskRunner(res.update, utmpl)
- self.assertRaises(exception.ResourceFailure, updater)
- self.assertEqual((res.UPDATE, res.FAILED), res.state)
- self.m.VerifyAll()
-
- def test_suspend_resume_ok(self):
- tmpl = {'Type': 'GenericResourceType', 'Properties': {'Foo': 'abc'}}
- res = generic_rsrc.ResourceWithProps('test_resource', tmpl, self.stack)
- res.update_allowed_keys = ('Properties',)
- res.update_allowed_properties = ('Foo',)
- scheduler.TaskRunner(res.create)()
- self.assertEqual((res.CREATE, res.COMPLETE), res.state)
- scheduler.TaskRunner(res.suspend)()
- self.assertEqual((res.SUSPEND, res.COMPLETE), res.state)
- scheduler.TaskRunner(res.resume)()
- self.assertEqual((res.RESUME, res.COMPLETE), res.state)
-
- def test_suspend_fail_inprogress(self):
- tmpl = {'Type': 'GenericResourceType', 'Properties': {'Foo': 'abc'}}
- res = generic_rsrc.ResourceWithProps('test_resource', tmpl, self.stack)
- scheduler.TaskRunner(res.create)()
- self.assertEqual((res.CREATE, res.COMPLETE), res.state)
-
- res.state_set(res.CREATE, res.IN_PROGRESS)
- suspend = scheduler.TaskRunner(res.suspend)
- self.assertRaises(exception.ResourceFailure, suspend)
-
- res.state_set(res.UPDATE, res.IN_PROGRESS)
- suspend = scheduler.TaskRunner(res.suspend)
- self.assertRaises(exception.ResourceFailure, suspend)
-
- res.state_set(res.DELETE, res.IN_PROGRESS)
- suspend = scheduler.TaskRunner(res.suspend)
- self.assertRaises(exception.ResourceFailure, suspend)
-
- def test_resume_fail_not_suspend_complete(self):
- tmpl = {'Type': 'GenericResourceType', 'Properties': {'Foo': 'abc'}}
- res = generic_rsrc.ResourceWithProps('test_resource', tmpl, self.stack)
- scheduler.TaskRunner(res.create)()
- self.assertEqual((res.CREATE, res.COMPLETE), res.state)
-
- non_suspended_states = [s for s in
- itertools.product(res.ACTIONS, res.STATUSES)
- if s != (res.SUSPEND, res.COMPLETE)]
- for state in non_suspended_states:
- res.state_set(*state)
- resume = scheduler.TaskRunner(res.resume)
- self.assertRaises(exception.ResourceFailure, resume)
-
- def test_suspend_fail_exception(self):
- tmpl = {'Type': 'GenericResourceType', 'Properties': {'Foo': 'abc'}}
- res = generic_rsrc.ResourceWithProps('test_resource', tmpl, self.stack)
- scheduler.TaskRunner(res.create)()
- self.assertEqual((res.CREATE, res.COMPLETE), res.state)
-
- self.m.StubOutWithMock(generic_rsrc.ResourceWithProps,
- 'handle_suspend')
- generic_rsrc.ResourceWithProps.handle_suspend().AndRaise(Exception())
- self.m.ReplayAll()
-
- suspend = scheduler.TaskRunner(res.suspend)
- self.assertRaises(exception.ResourceFailure, suspend)
- self.assertEqual((res.SUSPEND, res.FAILED), res.state)
-
- def test_resume_fail_exception(self):
- tmpl = {'Type': 'GenericResourceType', 'Properties': {'Foo': 'abc'}}
- res = generic_rsrc.ResourceWithProps('test_resource', tmpl, self.stack)
- scheduler.TaskRunner(res.create)()
- self.assertEqual((res.CREATE, res.COMPLETE), res.state)
-
- self.m.StubOutWithMock(generic_rsrc.ResourceWithProps, 'handle_resume')
- generic_rsrc.ResourceWithProps.handle_resume().AndRaise(Exception())
- self.m.ReplayAll()
-
- res.state_set(res.SUSPEND, res.COMPLETE)
-
- resume = scheduler.TaskRunner(res.resume)
- self.assertRaises(exception.ResourceFailure, resume)
- self.assertEqual((res.RESUME, res.FAILED), res.state)
-
- def test_resource_class_to_template(self):
-
- class TestResource(resource.Resource):
- list_schema = {'wont_show_up': {'Type': 'Number'}}
- map_schema = {'will_show_up': {'Type': 'Integer'}}
-
- properties_schema = {
- 'name': {'Type': 'String'},
- 'bool': {'Type': 'Boolean'},
- 'implemented': {'Type': 'String',
- 'Implemented': True,
- 'AllowedPattern': '.*',
- 'MaxLength': 7,
- 'MinLength': 2,
- 'Required': True},
- 'not_implemented': {'Type': 'String',
- 'Implemented': False},
- 'number': {'Type': 'Number',
- 'MaxValue': 77,
- 'MinValue': 41,
- 'Default': 42},
- 'list': {'Type': 'List', 'Schema': {'Type': 'Map',
- 'Schema': list_schema}},
- 'map': {'Type': 'Map', 'Schema': {'Type': 'Map',
- 'Schema': map_schema}},
- }
-
- attributes_schema = {
- 'output1': 'output1_desc',
- 'output2': 'output2_desc'
- }
-
- expected_template = {
- 'Parameters': {
- 'name': {'Type': 'String'},
- 'bool': {'Type': 'Boolean'},
- 'implemented': {
- 'Type': 'String',
- 'AllowedPattern': '.*',
- 'MaxLength': 7,
- 'MinLength': 2
- },
- 'number': {'Type': 'Number',
- 'MaxValue': 77,
- 'MinValue': 41,
- 'Default': 42},
- 'list': {'Type': 'CommaDelimitedList'},
- 'map': {'Type': 'Json'}
- },
- 'Resources': {
- 'TestResource': {
- 'Type': 'Test::Resource::resource',
- 'Properties': {
- 'name': {'Ref': 'name'},
- 'bool': {'Ref': 'bool'},
- 'implemented': {'Ref': 'implemented'},
- 'number': {'Ref': 'number'},
- 'list': {'Fn::Split': {'Ref': 'list'}},
- 'map': {'Ref': 'map'}
- }
- }
- },
- 'Outputs': {
- 'output1': {
- 'Description': 'output1_desc',
- 'Value': '{"Fn::GetAtt": ["TestResource", "output1"]}'
- },
- 'output2': {
- 'Description': 'output2_desc',
- 'Value': '{"Fn::GetAtt": ["TestResource", "output2"]}'
- }
- }
- }
- self.assertEqual(expected_template,
- TestResource.resource_to_template(
- 'Test::Resource::resource')
- )
-
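The state tests above (`test_state_defaults`, `test_state_set`, `test_state_set_invalid`) exercise Heat's `(action, status)` state pair. A minimal standalone sketch of that pattern, assuming a simplified `MiniResource` class (not Heat's actual implementation), looks like this:

```python
# Minimal sketch of the (action, status) state pair the tests
# above exercise. MiniResource is illustrative only.
class MiniResource:
    # Chained assignment both builds the tuple and binds each
    # member as a class attribute (the idiom Heat uses).
    ACTIONS = (INIT, CREATE, DELETE, UPDATE, SUSPEND, RESUME) = (
        'INIT', 'CREATE', 'DELETE', 'UPDATE', 'SUSPEND', 'RESUME')
    STATUSES = (IN_PROGRESS, FAILED, COMPLETE) = (
        'IN_PROGRESS', 'FAILED', 'COMPLETE')

    def __init__(self):
        # New resources start in (INIT, COMPLETE) with no reason,
        # matching test_state_defaults.
        self.action = self.INIT
        self.status = self.COMPLETE
        self.status_reason = ''

    @property
    def state(self):
        return (self.action, self.status)

    def state_set(self, action, status, reason=''):
        # Reject values outside either enumeration, as
        # test_state_set_invalid expects.
        if action not in self.ACTIONS:
            raise ValueError('Invalid action %s' % action)
        if status not in self.STATUSES:
            raise ValueError('Invalid status %s' % status)
        self.action = action
        self.status = status
        self.status_reason = reason
```

Storing the state as a validated pair rather than a single string is what lets tests compare against tuples like `(res.CREATE, res.COMPLETE)` directly.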
-
-class ResourceDependenciesTest(HeatTestCase):
- def setUp(self):
- super(ResourceDependenciesTest, self).setUp()
- utils.setup_dummy_db()
-
- resource._register_class('GenericResourceType',
- generic_rsrc.GenericResource)
- resource._register_class('ResourceWithPropsType',
- generic_rsrc.ResourceWithProps)
-
- self.deps = dependencies.Dependencies()
-
- def test_no_deps(self):
- tmpl = template.Template({
- 'Resources': {
- 'foo': {'Type': 'GenericResourceType'},
- }
- })
- stack = parser.Stack(None, 'test', tmpl)
-
- res = stack['foo']
- res.add_dependencies(self.deps)
- graph = self.deps.graph()
-
- self.assertIn(res, graph)
-
- def test_ref(self):
- tmpl = template.Template({
- 'Resources': {
- 'foo': {'Type': 'GenericResourceType'},
- 'bar': {
- 'Type': 'ResourceWithPropsType',
- 'Properties': {
- 'Foo': {'Ref': 'foo'},
- }
- }
- }
- })
- stack = parser.Stack(None, 'test', tmpl)
-
- res = stack['bar']
- res.add_dependencies(self.deps)
- graph = self.deps.graph()
-
- self.assertIn(res, graph)
- self.assertIn(stack['foo'], graph[res])
-
- def test_hot_ref(self):
- '''Test that HOT get_resource creates dependencies.'''
- tmpl = template.Template({
- 'heat_template_version': '2013-05-23',
- 'resources': {
- 'foo': {'type': 'GenericResourceType'},
- 'bar': {
- 'type': 'ResourceWithPropsType',
- 'properties': {
- 'Foo': {'get_resource': 'foo'},
- }
- }
- }
- })
- stack = parser.Stack(None, 'test', tmpl)
-
- res = stack['bar']
- res.add_dependencies(self.deps)
- graph = self.deps.graph()
-
- self.assertIn(res, graph)
- self.assertIn(stack['foo'], graph[res])
-
- def test_ref_nested_dict(self):
- tmpl = template.Template({
- 'Resources': {
- 'foo': {'Type': 'GenericResourceType'},
- 'bar': {
- 'Type': 'ResourceWithPropsType',
- 'Properties': {
- 'Foo': {'Fn::Base64': {'Ref': 'foo'}},
- }
- }
- }
- })
- stack = parser.Stack(None, 'test', tmpl)
-
- res = stack['bar']
- res.add_dependencies(self.deps)
- graph = self.deps.graph()
-
- self.assertIn(res, graph)
- self.assertIn(stack['foo'], graph[res])
-
- def test_hot_ref_nested_dict(self):
- tmpl = template.Template({
- 'heat_template_version': '2013-05-23',
- 'resources': {
- 'foo': {'type': 'GenericResourceType'},
- 'bar': {
- 'type': 'ResourceWithPropsType',
- 'properties': {
- 'Foo': {'Fn::Base64': {'get_resource': 'foo'}},
- }
- }
- }
- })
- stack = parser.Stack(None, 'test', tmpl)
-
- res = stack['bar']
- res.add_dependencies(self.deps)
- graph = self.deps.graph()
-
- self.assertIn(res, graph)
- self.assertIn(stack['foo'], graph[res])
-
- def test_ref_nested_deep(self):
- tmpl = template.Template({
- 'Resources': {
- 'foo': {'Type': 'GenericResourceType'},
- 'bar': {
- 'Type': 'ResourceWithPropsType',
- 'Properties': {
- 'Foo': {'Fn::Join': [",", ["blarg",
- {'Ref': 'foo'},
- "wibble"]]},
- }
- }
- }
- })
- stack = parser.Stack(None, 'test', tmpl)
-
- res = stack['bar']
- res.add_dependencies(self.deps)
- graph = self.deps.graph()
-
- self.assertIn(res, graph)
- self.assertIn(stack['foo'], graph[res])
-
- def test_hot_ref_nested_deep(self):
- tmpl = template.Template({
- 'heat_template_version': '2013-05-23',
- 'resources': {
- 'foo': {'type': 'GenericResourceType'},
- 'bar': {
- 'type': 'ResourceWithPropsType',
- 'properties': {
- 'Foo': {'Fn::Join': [",", ["blarg",
- {'get_resource': 'foo'},
- "wibble"]]},
- }
- }
- }
- })
- stack = parser.Stack(None, 'test', tmpl)
-
- res = stack['bar']
- res.add_dependencies(self.deps)
- graph = self.deps.graph()
-
- self.assertIn(res, graph)
- self.assertIn(stack['foo'], graph[res])
-
- def test_ref_fail(self):
- tmpl = template.Template({
- 'Resources': {
- 'foo': {'Type': 'GenericResourceType'},
- 'bar': {
- 'Type': 'ResourceWithPropsType',
- 'Properties': {
- 'Foo': {'Ref': 'baz'},
- }
- }
- }
- })
- ex = self.assertRaises(exception.InvalidTemplateReference,
- parser.Stack,
- None, 'test', tmpl)
- self.assertIn('"baz" (in bar.Properties.Foo)', str(ex))
-
- def test_hot_ref_fail(self):
- tmpl = template.Template({
- 'heat_template_version': '2013-05-23',
- 'resources': {
- 'foo': {'type': 'GenericResourceType'},
- 'bar': {
- 'type': 'ResourceWithPropsType',
- 'properties': {
- 'Foo': {'get_resource': 'baz'},
- }
- }
- }
- })
- ex = self.assertRaises(exception.InvalidTemplateReference,
- parser.Stack,
- None, 'test', tmpl)
- self.assertIn('"baz" (in bar.Properties.Foo)', str(ex))
-
- def test_getatt(self):
- tmpl = template.Template({
- 'Resources': {
- 'foo': {'Type': 'GenericResourceType'},
- 'bar': {
- 'Type': 'ResourceWithPropsType',
- 'Properties': {
- 'Foo': {'Fn::GetAtt': ['foo', 'bar']},
- }
- }
- }
- })
- stack = parser.Stack(None, 'test', tmpl)
-
- res = stack['bar']
- res.add_dependencies(self.deps)
- graph = self.deps.graph()
-
- self.assertIn(res, graph)
- self.assertIn(stack['foo'], graph[res])
-
- def test_hot_getatt(self):
- tmpl = template.Template({
- 'heat_template_version': '2013-05-23',
- 'resources': {
- 'foo': {'type': 'GenericResourceType'},
- 'bar': {
- 'type': 'ResourceWithPropsType',
- 'properties': {
- 'Foo': {'get_attr': ['foo', 'bar']},
- }
- }
- }
- })
- stack = parser.Stack(None, 'test', tmpl)
-
- res = stack['bar']
- res.add_dependencies(self.deps)
- graph = self.deps.graph()
-
- self.assertIn(res, graph)
- self.assertIn(stack['foo'], graph[res])
-
- def test_getatt_nested_dict(self):
- tmpl = template.Template({
- 'Resources': {
- 'foo': {'Type': 'GenericResourceType'},
- 'bar': {
- 'Type': 'ResourceWithPropsType',
- 'Properties': {
- 'Foo': {'Fn::Base64': {'Fn::GetAtt': ['foo', 'bar']}},
- }
- }
- }
- })
- stack = parser.Stack(None, 'test', tmpl)
-
- res = stack['bar']
- res.add_dependencies(self.deps)
- graph = self.deps.graph()
-
- self.assertIn(res, graph)
- self.assertIn(stack['foo'], graph[res])
-
- def test_hot_getatt_nested_dict(self):
- tmpl = template.Template({
- 'heat_template_version': '2013-05-23',
- 'resources': {
- 'foo': {'type': 'GenericResourceType'},
- 'bar': {
- 'type': 'ResourceWithPropsType',
- 'properties': {
- 'Foo': {'Fn::Base64': {'get_attr': ['foo', 'bar']}},
- }
- }
- }
- })
- stack = parser.Stack(None, 'test', tmpl)
-
- res = stack['bar']
- res.add_dependencies(self.deps)
- graph = self.deps.graph()
-
- self.assertIn(res, graph)
- self.assertIn(stack['foo'], graph[res])
-
- def test_getatt_nested_deep(self):
- tmpl = template.Template({
- 'Resources': {
- 'foo': {'Type': 'GenericResourceType'},
- 'bar': {
- 'Type': 'ResourceWithPropsType',
- 'Properties': {
- 'Foo': {'Fn::Join': [",", ["blarg",
- {'Fn::GetAtt': ['foo',
- 'bar']},
- "wibble"]]},
- }
- }
- }
- })
- stack = parser.Stack(None, 'test', tmpl)
-
- res = stack['bar']
- res.add_dependencies(self.deps)
- graph = self.deps.graph()
-
- self.assertIn(res, graph)
- self.assertIn(stack['foo'], graph[res])
-
- def test_hot_getatt_nested_deep(self):
- tmpl = template.Template({
- 'heat_template_version': '2013-05-23',
- 'resources': {
- 'foo': {'type': 'GenericResourceType'},
- 'bar': {
- 'type': 'ResourceWithPropsType',
- 'properties': {
- 'Foo': {'Fn::Join': [",", ["blarg",
- {'get_attr': ['foo',
- 'bar']},
- "wibble"]]},
- }
- }
- }
- })
- stack = parser.Stack(None, 'test', tmpl)
-
- res = stack['bar']
- res.add_dependencies(self.deps)
- graph = self.deps.graph()
-
- self.assertIn(res, graph)
- self.assertIn(stack['foo'], graph[res])
-
- def test_getatt_fail(self):
- tmpl = template.Template({
- 'Resources': {
- 'foo': {'Type': 'GenericResourceType'},
- 'bar': {
- 'Type': 'ResourceWithPropsType',
- 'Properties': {
- 'Foo': {'Fn::GetAtt': ['baz', 'bar']},
- }
- }
- }
- })
- ex = self.assertRaises(exception.InvalidTemplateReference,
- parser.Stack,
- None, 'test', tmpl)
- self.assertIn('"baz" (in bar.Properties.Foo)', str(ex))
-
- def test_hot_getatt_fail(self):
- tmpl = template.Template({
- 'heat_template_version': '2013-05-23',
- 'resources': {
- 'foo': {'type': 'GenericResourceType'},
- 'bar': {
- 'type': 'ResourceWithPropsType',
- 'properties': {
- 'Foo': {'get_attr': ['baz', 'bar']},
- }
- }
- }
- })
- ex = self.assertRaises(exception.InvalidTemplateReference,
- parser.Stack,
- None, 'test', tmpl)
- self.assertIn('"baz" (in bar.Properties.Foo)', str(ex))
-
- def test_getatt_fail_nested_deep(self):
- tmpl = template.Template({
- 'Resources': {
- 'foo': {'Type': 'GenericResourceType'},
- 'bar': {
- 'Type': 'ResourceWithPropsType',
- 'Properties': {
- 'Foo': {'Fn::Join': [",", ["blarg",
- {'Fn::GetAtt': ['foo',
- 'bar']},
- "wibble",
- {'Fn::GetAtt': ['baz',
- 'bar']}]]},
- }
- }
- }
- })
- ex = self.assertRaises(exception.InvalidTemplateReference,
- parser.Stack,
- None, 'test', tmpl)
- self.assertIn('"baz" (in bar.Properties.Foo.Fn::Join[1][3])', str(ex))
-
- def test_hot_getatt_fail_nested_deep(self):
- tmpl = template.Template({
- 'heat_template_version': '2013-05-23',
- 'resources': {
- 'foo': {'type': 'GenericResourceType'},
- 'bar': {
- 'type': 'ResourceWithPropsType',
- 'properties': {
- 'Foo': {'Fn::Join': [",", ["blarg",
- {'get_attr': ['foo',
- 'bar']},
- "wibble",
- {'get_attr': ['baz',
- 'bar']}]]},
- }
- }
- }
- })
- ex = self.assertRaises(exception.InvalidTemplateReference,
- parser.Stack,
- None, 'test', tmpl)
- self.assertIn('"baz" (in bar.Properties.Foo.Fn::Join[1][3])', str(ex))
-
- def test_dependson(self):
- tmpl = template.Template({
- 'Resources': {
- 'foo': {'Type': 'GenericResourceType'},
- 'bar': {
- 'Type': 'GenericResourceType',
- 'DependsOn': 'foo',
- }
- }
- })
- stack = parser.Stack(None, 'test', tmpl)
-
- res = stack['bar']
- res.add_dependencies(self.deps)
- graph = self.deps.graph()
-
- self.assertIn(res, graph)
- self.assertIn(stack['foo'], graph[res])
-
- def test_dependson_fail(self):
- tmpl = template.Template({
- 'Resources': {
- 'foo': {
- 'Type': 'GenericResourceType',
- 'DependsOn': 'wibble',
- }
- }
- })
- ex = self.assertRaises(exception.InvalidTemplateReference,
- parser.Stack,
- None, 'test', tmpl)
- self.assertIn('"wibble" (in foo)', str(ex))
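The dependency tests above all follow the same shape: a resource that references another (via `Ref`, `get_resource`, `Fn::GetAtt`, `get_attr`, or `DependsOn`) must appear in the graph with an edge to the resource it requires. A toy sketch of that graph structure (illustrative only, not Heat's `dependencies` module):

```python
# Illustrative resource -> requirement graph of the kind the
# tests above assert on with assertIn(res, graph) and
# assertIn(stack['foo'], graph[res]).
class Dependencies:
    def __init__(self):
        self._deps = {}

    def add(self, dependent, required=None):
        # Every node appears in the graph even with no edges
        # (see test_no_deps); edges point at requirements.
        edges = self._deps.setdefault(dependent, set())
        if required is not None:
            self._deps.setdefault(required, set())
            edges.add(required)

    def graph(self):
        # Return a copy so callers cannot mutate internal state.
        return {k: set(v) for k, v in self._deps.items()}


deps = Dependencies()
deps.add('bar', 'foo')   # e.g. bar's Properties contain {'Ref': 'foo'}
graph = deps.graph()
```

The failure-path tests (`test_ref_fail`, `test_dependson_fail`, and friends) cover the complementary case: a reference to a name with no node raises `InvalidTemplateReference` at stack-parse time, before any graph is built.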
-
-
-class MetadataTest(HeatTestCase):
- def setUp(self):
- super(MetadataTest, self).setUp()
- tmpl = {
- 'Type': 'Foo',
- 'Metadata': {'Test': 'Initial metadata'}
- }
- utils.setup_dummy_db()
- self.stack = parser.Stack(utils.dummy_context(),
- 'test_stack', parser.Template({}))
- self.stack.store()
- self.res = generic_rsrc.GenericResource('metadata_resource',
- tmpl, self.stack)
- scheduler.TaskRunner(self.res.create)()
- self.addCleanup(self.stack.delete)
-
- def test_read_initial(self):
- self.assertEqual(self.res.metadata, {'Test': 'Initial metadata'})
-
- def test_write(self):
- test_data = {'Test': 'Newly-written data'}
- self.res.metadata = test_data
- self.assertEqual(self.res.metadata, test_data)
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Copyright 2012, Red Hat, Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-"""
-Unit Tests for heat.rpc.client
-"""
-
-
-from oslo.config import cfg
-import stubout
-import testtools
-
-from heat.common import identifier
-from heat.rpc import api as rpc_api
-from heat.rpc import client as rpc_client
-from heat.openstack.common import rpc
-from heat.tests import utils
-
-
-class EngineRpcAPITestCase(testtools.TestCase):
-
- def setUp(self):
- super(EngineRpcAPITestCase, self).setUp()
- self.context = utils.dummy_context()
- cfg.CONF.set_default('rpc_backend',
- 'heat.openstack.common.rpc.impl_fake')
- cfg.CONF.set_default('verbose', True)
- cfg.CONF.set_default('host', 'host')
-
- self.stubs = stubout.StubOutForTesting()
- self.identity = dict(identifier.HeatIdentifier('engine_test_tenant',
- '6',
- 'wordpress'))
-
- def _test_engine_api(self, method, rpc_method, **kwargs):
- ctxt = utils.dummy_context()
- if 'rpcapi_class' in kwargs:
- rpcapi_class = kwargs['rpcapi_class']
- del kwargs['rpcapi_class']
- else:
- rpcapi_class = rpc_client.EngineClient
- rpcapi = rpcapi_class()
- expected_retval = 'foo' if method == 'call' else None
-
- expected_version = kwargs.pop('version', rpcapi.BASE_RPC_API_VERSION)
- expected_msg = rpcapi.make_msg(method, **kwargs)
-
- expected_msg['version'] = expected_version
- expected_topic = rpc_api.ENGINE_TOPIC
-
- cast_and_call = ['delete_stack']
- if rpc_method == 'call' and method in cast_and_call:
- kwargs['cast'] = False
-
- self.fake_args = None
- self.fake_kwargs = None
-
- def _fake_rpc_method(*args, **kwargs):
- self.fake_args = args
- self.fake_kwargs = kwargs
- if expected_retval:
- return expected_retval
-
- self.stubs.Set(rpc, rpc_method, _fake_rpc_method)
-
- retval = getattr(rpcapi, method)(ctxt, **kwargs)
-
- self.assertEqual(retval, expected_retval)
- expected_args = [ctxt, expected_topic, expected_msg]
- for arg, expected_arg in zip(self.fake_args, expected_args):
- self.assertEqual(arg, expected_arg)
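`_test_engine_api` above follows a stub-and-capture pattern: replace the RPC transport function, invoke the client method, then compare the captured `(context, topic, message)` triple against what the client should have sent. A self-contained sketch of the same pattern, with illustrative names (`fake_call`, `Client`) rather than Heat's actual API:

```python
# Stub-and-capture: swap out the transport, record what the
# client hands it, and return a canned value for 'call' methods.
captured = {}


def fake_call(ctxt, topic, msg):
    captured['args'] = (ctxt, topic, msg)
    return 'foo'  # canned return, like expected_retval above


class Client:
    """Toy RPC client that serializes a method call into a message."""

    def identify_stack(self, ctxt, stack_name):
        msg = {'method': 'identify_stack',
               'args': {'stack_name': stack_name}}
        return fake_call(ctxt, 'engine', msg)


ret = Client().identify_stack({'user': 'demo'}, 'wordpress')
```

Because the transport is stubbed, the test verifies message construction and routing without any real RPC backend, which is exactly why the `setUp` above selects the fake `rpc_backend`.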
-
- def test_authenticated_to_backend(self):
- self._test_engine_api('authenticated_to_backend', 'call')
-
- def test_list_stacks(self):
- self._test_engine_api('list_stacks', 'call')
-
- def test_identify_stack(self):
- self._test_engine_api('identify_stack', 'call',
- stack_name='wordpress')
-
- def test_show_stack(self):
- self._test_engine_api('show_stack', 'call', stack_identity='wordpress')
-
- def test_create_stack(self):
- self._test_engine_api('create_stack', 'call', stack_name='wordpress',
- template={u'Foo': u'bar'},
- params={u'InstanceType': u'm1.xlarge'},
- files={u'a_file': u'the contents'},
- args={'timeout_mins': u'30'})
-
- def test_update_stack(self):
- self._test_engine_api('update_stack', 'call',
- stack_identity=self.identity,
- template={u'Foo': u'bar'},
- params={u'InstanceType': u'm1.xlarge'},
- files={},
- args={})
-
- def test_get_template(self):
- self._test_engine_api('get_template', 'call',
- stack_identity=self.identity)
-
- def test_delete_stack_cast(self):
- self._test_engine_api('delete_stack', 'cast',
- stack_identity=self.identity)
-
- def test_delete_stack_call(self):
- self._test_engine_api('delete_stack', 'call',
- stack_identity=self.identity)
-
- def test_validate_template(self):
- self._test_engine_api('validate_template', 'call',
- template={u'Foo': u'bar'})
-
- def test_list_resource_types(self):
- self._test_engine_api('list_resource_types', 'call')
-
- def test_resource_schema(self):
- self._test_engine_api('resource_schema', 'call', type_name="TYPE")
-
- def test_generate_template(self):
- self._test_engine_api('generate_template', 'call', type_name="TYPE")
-
- def test_list_events(self):
- self._test_engine_api('list_events', 'call',
- stack_identity=self.identity)
-
- def test_describe_stack_resource(self):
- self._test_engine_api('describe_stack_resource', 'call',
- stack_identity=self.identity,
- resource_name='LogicalResourceId')
-
- def test_find_physical_resource(self):
- self._test_engine_api('find_physical_resource', 'call',
- physical_resource_id=u'404d-a85b-5315293e67de')
-
- def test_describe_stack_resources(self):
- self._test_engine_api('describe_stack_resources', 'call',
- stack_identity=self.identity,
- resource_name=u'WikiDatabase')
-
- def test_list_stack_resources(self):
- self._test_engine_api('list_stack_resources', 'call',
- stack_identity=self.identity)
-
- def test_stack_suspend(self):
- self._test_engine_api('stack_suspend', 'call',
- stack_identity=self.identity)
-
- def test_stack_resume(self):
- self._test_engine_api('stack_resume', 'call',
- stack_identity=self.identity)
-
- def test_metadata_update(self):
- self._test_engine_api('metadata_update', 'call',
- stack_identity=self.identity,
- resource_name='LogicalResourceId',
- metadata={u'wordpress': []})
-
- def test_resource_signal(self):
- self._test_engine_api('resource_signal', 'call',
- stack_identity=self.identity,
- resource_name='LogicalResourceId',
- details={u'wordpress': []})
-
- def test_create_watch_data(self):
- self._test_engine_api('create_watch_data', 'call',
- watch_name='watch1',
- stats_data={})
-
- def test_show_watch(self):
- self._test_engine_api('show_watch', 'call',
- watch_name='watch1')
-
- def test_show_watch_metric(self):
- self._test_engine_api('show_watch_metric', 'call',
- metric_namespace=None, metric_name=None)
-
- def test_set_watch_state(self):
- self._test_engine_api('set_watch_state', 'call',
- watch_name='watch1', state="xyz")
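
The deleted `_test_engine_api` helper above works by replacing the RPC module's `call`/`cast` with a fake that records its arguments and returns a canned value, then asserting on both. A minimal standalone sketch of that stub-and-record pattern (class and argument names here are hypothetical, not Heat's API):

```python
class RecordingStub:
    """Stand-in for an RPC method: records the call, returns a canned value."""

    def __init__(self, retval=None):
        self.retval = retval
        self.args = None
        self.kwargs = None

    def __call__(self, *args, **kwargs):
        self.args = args
        self.kwargs = kwargs
        return self.retval


# The test installs the stub where the real rpc.call would be looked up,
# invokes the client method, then compares the recorded call.
fake_call = RecordingStub(retval='foo')
result = fake_call('ctxt', 'engine', {'method': 'list_stacks', 'args': {}})
```

The helper then compares `retval` against the expected return and each recorded positional argument against the expected context, topic, and message, which is exactly what the `zip(self.fake_args, expected_args)` loop above does.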
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-
-from testtools import skipIf
-
-from heat.common import exception
-from heat.common import template_format
-from heat.openstack.common.importutils import try_import
-from heat.engine.resources import s3
-from heat.engine import resource
-from heat.engine import clients
-from heat.engine import scheduler
-from heat.tests.common import HeatTestCase
-from heat.tests import utils
-from heat.tests import fakes
-
-swiftclient = try_import('swiftclient.client')
-
-swift_template = '''
-{
- "AWSTemplateFormatVersion" : "2010-09-09",
- "Description" : "Template to test S3 Bucket resources",
- "Resources" : {
- "S3BucketWebsite" : {
- "Type" : "AWS::S3::Bucket",
- "DeletionPolicy" : "Delete",
- "Properties" : {
- "AccessControl" : "PublicRead",
- "WebsiteConfiguration" : {
- "IndexDocument" : "index.html",
- "ErrorDocument" : "error.html"
- }
- }
- },
- "SwiftContainer": {
- "Type": "OS::Swift::Container",
- "Properties": {
- "S3Bucket": {"Ref" : "S3Bucket"},
- }
- },
- "S3Bucket" : {
- "Type" : "AWS::S3::Bucket",
- "Properties" : {
- "AccessControl" : "Private"
- }
- }
- }
-}
-'''
-
-
-class s3Test(HeatTestCase):
- @skipIf(swiftclient is None, 'unable to import swiftclient')
- def setUp(self):
- super(s3Test, self).setUp()
- self.m.CreateMock(swiftclient.Connection)
- self.m.StubOutWithMock(swiftclient.Connection, 'put_container')
- self.m.StubOutWithMock(swiftclient.Connection, 'delete_container')
- self.m.StubOutWithMock(swiftclient.Connection, 'get_auth')
- self.m.StubOutWithMock(clients.OpenStackClients, 'keystone')
-
- utils.setup_dummy_db()
-
- def create_resource(self, t, stack, resource_name):
- rsrc = s3.S3Bucket('test_resource',
- t['Resources'][resource_name],
- stack)
- scheduler.TaskRunner(rsrc.create)()
- self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state)
- return rsrc
-
- def test_attributes(self):
- clients.OpenStackClients.keystone().AndReturn(
- fakes.FakeKeystoneClient())
- container_name = utils.PhysName('test_stack', 'test_resource')
- swiftclient.Connection.put_container(
- container_name,
- {'X-Container-Write': 'test_tenant:test_username',
- 'X-Container-Read': 'test_tenant:test_username'}
- ).AndReturn(None)
- swiftclient.Connection.get_auth().MultipleTimes().AndReturn(
- ('http://server.test:8080/v_2', None))
- swiftclient.Connection.delete_container(container_name).AndReturn(None)
-
- self.m.ReplayAll()
- t = template_format.parse(swift_template)
- stack = utils.parse_stack(t)
- rsrc = self.create_resource(t, stack, 'S3Bucket')
-
- ref_id = rsrc.FnGetRefId()
- self.assertEqual(container_name, ref_id)
-
- self.assertEqual('server.test', rsrc.FnGetAtt('DomainName'))
- url = 'http://server.test:8080/v_2/%s' % ref_id
-
- self.assertEqual(url, rsrc.FnGetAtt('WebsiteURL'))
-
- try:
- rsrc.FnGetAtt('Foo')
- raise Exception('Expected InvalidTemplateAttribute')
- except exception.InvalidTemplateAttribute:
- pass
-
- self.assertRaises(resource.UpdateReplace,
- rsrc.handle_update, {}, {}, {})
-
- scheduler.TaskRunner(rsrc.delete)()
- self.m.VerifyAll()
-
- def test_public_read(self):
- clients.OpenStackClients.keystone().AndReturn(
- fakes.FakeKeystoneClient())
- container_name = utils.PhysName('test_stack', 'test_resource')
- swiftclient.Connection.put_container(
- utils.PhysName('test_stack', 'test_resource'),
- {'X-Container-Write': 'test_tenant:test_username',
- 'X-Container-Read': '.r:*'}).AndReturn(None)
- swiftclient.Connection.delete_container(
- container_name).AndReturn(None)
-
- self.m.ReplayAll()
- t = template_format.parse(swift_template)
- properties = t['Resources']['S3Bucket']['Properties']
- properties['AccessControl'] = 'PublicRead'
- stack = utils.parse_stack(t)
- rsrc = self.create_resource(t, stack, 'S3Bucket')
- scheduler.TaskRunner(rsrc.delete)()
- self.m.VerifyAll()
-
- def test_public_read_write(self):
- clients.OpenStackClients.keystone().AndReturn(
- fakes.FakeKeystoneClient())
- container_name = utils.PhysName('test_stack', 'test_resource')
- swiftclient.Connection.put_container(
- container_name,
- {'X-Container-Write': '.r:*',
- 'X-Container-Read': '.r:*'}).AndReturn(None)
- swiftclient.Connection.delete_container(
- container_name).AndReturn(None)
-
- self.m.ReplayAll()
- t = template_format.parse(swift_template)
- properties = t['Resources']['S3Bucket']['Properties']
- properties['AccessControl'] = 'PublicReadWrite'
- stack = utils.parse_stack(t)
- rsrc = self.create_resource(t, stack, 'S3Bucket')
- scheduler.TaskRunner(rsrc.delete)()
- self.m.VerifyAll()
-
- def test_authenticated_read(self):
- clients.OpenStackClients.keystone().AndReturn(
- fakes.FakeKeystoneClient())
- container_name = utils.PhysName('test_stack', 'test_resource')
- swiftclient.Connection.put_container(
- container_name,
- {'X-Container-Write': 'test_tenant:test_username',
- 'X-Container-Read': 'test_tenant'}).AndReturn(None)
- swiftclient.Connection.delete_container(container_name).AndReturn(None)
-
- self.m.ReplayAll()
- t = template_format.parse(swift_template)
- properties = t['Resources']['S3Bucket']['Properties']
- properties['AccessControl'] = 'AuthenticatedRead'
- stack = utils.parse_stack(t)
- rsrc = self.create_resource(t, stack, 'S3Bucket')
- scheduler.TaskRunner(rsrc.delete)()
- self.m.VerifyAll()
-
- def test_website(self):
- clients.OpenStackClients.keystone().AndReturn(
- fakes.FakeKeystoneClient())
- container_name = utils.PhysName('test_stack', 'test_resource')
- swiftclient.Connection.put_container(
- container_name,
- {'X-Container-Meta-Web-Error': 'error.html',
- 'X-Container-Meta-Web-Index': 'index.html',
- 'X-Container-Write': 'test_tenant:test_username',
- 'X-Container-Read': '.r:*'}).AndReturn(None)
- swiftclient.Connection.delete_container(container_name).AndReturn(None)
-
- self.m.ReplayAll()
- t = template_format.parse(swift_template)
- stack = utils.parse_stack(t)
- rsrc = self.create_resource(t, stack, 'S3BucketWebsite')
- scheduler.TaskRunner(rsrc.delete)()
- self.m.VerifyAll()
-
- def test_delete_exception(self):
- clients.OpenStackClients.keystone().AndReturn(
- fakes.FakeKeystoneClient())
- container_name = utils.PhysName('test_stack', 'test_resource')
- swiftclient.Connection.put_container(
- container_name,
- {'X-Container-Write': 'test_tenant:test_username',
- 'X-Container-Read': 'test_tenant:test_username'}).AndReturn(None)
- swiftclient.Connection.delete_container(container_name).AndRaise(
- swiftclient.ClientException('Test delete failure'))
-
- self.m.ReplayAll()
- t = template_format.parse(swift_template)
- stack = utils.parse_stack(t)
- rsrc = self.create_resource(t, stack, 'S3Bucket')
- scheduler.TaskRunner(rsrc.delete)()
-
- self.m.VerifyAll()
-
- def test_delete_retain(self):
-
- clients.OpenStackClients.keystone().AndReturn(
- fakes.FakeKeystoneClient())
- # first run, with retain policy
- swiftclient.Connection.put_container(
- utils.PhysName('test_stack', 'test_resource'),
- {'X-Container-Write': 'test_tenant:test_username',
- 'X-Container-Read': 'test_tenant:test_username'}).AndReturn(None)
-
- self.m.ReplayAll()
- t = template_format.parse(swift_template)
-
- bucket = t['Resources']['S3Bucket']
- bucket['DeletionPolicy'] = 'Retain'
- stack = utils.parse_stack(t)
- rsrc = self.create_resource(t, stack, 'S3Bucket')
- scheduler.TaskRunner(rsrc.delete)()
- self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state)
-
- self.m.VerifyAll()
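
The deleted tests above use the mox record/replay/verify workflow: expectations such as `put_container(...).AndReturn(None)` are recorded, `ReplayAll()` arms them, and `VerifyAll()` checks they all fired. The same shape maps onto `unittest.mock` roughly as follows; a sketch covering only the swiftclient calls the tests exercise, not Heat's actual resource code:

```python
from unittest import mock

# Configure canned return values, analogous to AndReturn(...)
conn = mock.Mock()
conn.put_container.return_value = None
conn.get_auth.return_value = ('http://server.test:8080/v_2', None)
conn.delete_container.return_value = None

# The code under test would create and later delete the container:
conn.put_container('container', {'X-Container-Read': '.r:*'})
conn.delete_container('container')

# Verification step, analogous to self.m.VerifyAll():
conn.put_container.assert_called_once_with('container',
                                           {'X-Container-Read': '.r:*'})
conn.delete_container.assert_called_once_with('container')
```

The main behavioral difference is that mox enforces call order by default during replay, whereas `unittest.mock` verifies calls after the fact.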
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import mox
-
-import contextlib
-import eventlet
-
-from heat.engine import dependencies
-from heat.engine import scheduler
-
-
-class DummyTask(object):
- def __init__(self, num_steps=3):
- self.num_steps = num_steps
-
- def __call__(self, *args, **kwargs):
- for i in range(1, self.num_steps + 1):
- self.do_step(i, *args, **kwargs)
- yield
-
- def do_step(self, step_num, *args, **kwargs):
- print(self, step_num)
-
-
-class PollingTaskGroupTest(mox.MoxTestBase):
-
- def test_group(self):
- tasks = [DummyTask() for i in range(3)]
- for t in tasks:
- self.mox.StubOutWithMock(t, 'do_step')
-
- self.mox.StubOutWithMock(scheduler.TaskRunner, '_sleep')
-
- for t in tasks:
- t.do_step(1).AndReturn(None)
- for t in tasks:
- scheduler.TaskRunner._sleep(mox.IsA(int)).AndReturn(None)
- t.do_step(2).AndReturn(None)
- scheduler.TaskRunner._sleep(mox.IsA(int)).AndReturn(None)
- t.do_step(3).AndReturn(None)
-
- self.mox.ReplayAll()
-
- tg = scheduler.PollingTaskGroup(tasks)
- scheduler.TaskRunner(tg)()
-
- def test_kwargs(self):
- input_kwargs = {'i': [0, 1, 2],
- 'i2': [0, 1, 4]}
-
- output_kwargs = scheduler.PollingTaskGroup._kwargs(input_kwargs)
-
- expected_kwargs = [{'i': 0, 'i2': 0},
- {'i': 1, 'i2': 1},
- {'i': 2, 'i2': 4}]
-
- self.assertEqual(list(output_kwargs), expected_kwargs)
-
- def test_kwargs_short(self):
- input_kwargs = {'i': [0, 1, 2],
- 'i2': [0]}
-
- output_kwargs = scheduler.PollingTaskGroup._kwargs(input_kwargs)
-
- expected_kwargs = [{'i': 0, 'i2': 0}]
-
- self.assertEqual(list(output_kwargs), expected_kwargs)
-
- def test_no_kwargs(self):
- output_kwargs = scheduler.PollingTaskGroup._kwargs({})
- self.assertEqual(list(output_kwargs), [])
-
- def test_args(self):
- input_args = ([0, 1, 2],
- [0, 1, 4])
-
- output_args = scheduler.PollingTaskGroup._args(input_args)
-
- expected_args = [(0, 0), (1, 1), (2, 4)]
-
- self.assertEqual(list(output_args), expected_args)
-
- def test_args_short(self):
- input_args = ([0, 1, 2],
- [0])
-
- output_args = scheduler.PollingTaskGroup._args(input_args)
-
- expected_args = [(0, 0)]
-
- self.assertEqual(list(output_args), expected_args)
-
- def test_no_args(self):
- output_args = scheduler.PollingTaskGroup._args([])
- self.assertEqual(list(output_args), [])
-
- @contextlib.contextmanager
- def _args_test(self, *arg_lists, **kwarg_lists):
- dummy = DummyTask(1)
-
- tg = scheduler.PollingTaskGroup.from_task_with_args(dummy,
- *arg_lists,
- **kwarg_lists)
-
- self.mox.StubOutWithMock(dummy, 'do_step')
- yield dummy
-
- self.mox.ReplayAll()
- scheduler.TaskRunner(tg)(wait_time=None)
- self.mox.VerifyAll()
-
- def test_with_all_args(self):
- with self._args_test([0, 1, 2], [0, 1, 8],
- i=[0, 1, 2], i2=[0, 1, 4]) as dummy:
- for i in range(3):
- dummy.do_step(1, i, i * i * i, i=i, i2=i * i)
-
- def test_with_short_args(self):
- with self._args_test([0, 1, 2], [0, 1],
- i=[0, 1, 2], i2=[0, 1, 4]) as dummy:
- for i in range(2):
- dummy.do_step(1, i, i * i, i=i, i2=i * i)
-
- def test_with_short_kwargs(self):
- with self._args_test([0, 1, 2], [0, 1, 8],
- i=[0, 1], i2=[0, 1, 4]) as dummy:
- for i in range(2):
- dummy.do_step(1, i, i * i, i=i, i2=i * i)
-
- def test_with_empty_args(self):
- with self._args_test([],
- i=[0, 1, 2], i2=[0, 1, 4]) as dummy:
- pass
-
- def test_with_empty_kwargs(self):
- with self._args_test([0, 1, 2], [0, 1, 8],
- i=[]) as dummy:
- pass
-
- def test_with_no_args(self):
- with self._args_test(i=[0, 1, 2], i2=[0, 1, 4]) as dummy:
- for i in range(3):
- dummy.do_step(1, i=i, i2=i * i)
-
- def test_with_no_kwargs(self):
- with self._args_test([0, 1, 2], [0, 1, 4]) as dummy:
- for i in range(3):
- dummy.do_step(1, i, i * i)
-
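
The `_args`/`_kwargs` tests above pin down plain `zip` semantics: parallel argument lists are combined element-wise and truncated to the shortest input. A standalone sketch of that behavior (helper name hypothetical):

```python
def group_args(arg_lists):
    """Combine parallel argument lists element-wise, truncating to the
    shortest list, mirroring the behavior asserted by the tests."""
    return list(zip(*arg_lists))


full = group_args([[0, 1, 2], [0, 1, 4]])   # every list same length
short = group_args([[0, 1, 2], [0]])        # truncated to shortest
empty = group_args([])                      # no argument lists at all
```

This is why `test_args_short` expects a single tuple and `test_no_args` expects an empty result.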
-
-class DependencyTaskGroupTest(mox.MoxTestBase):
-
- @contextlib.contextmanager
- def _dep_test(self, *edges):
- dummy = DummyTask(getattr(self, 'steps', 3))
-
- deps = dependencies.Dependencies(edges)
-
- tg = scheduler.DependencyTaskGroup(deps, dummy)
-
- self.mox.StubOutWithMock(dummy, 'do_step')
-
- yield dummy
-
- self.mox.ReplayAll()
- scheduler.TaskRunner(tg)(wait_time=None)
- self.mox.VerifyAll()
-
- def test_no_steps(self):
- self.steps = 0
- self.mox.StubOutWithMock(scheduler.TaskRunner, '_sleep')
- with self._dep_test(('second', 'first')) as dummy:
- scheduler.TaskRunner._sleep(None).AndReturn(None)
-
- def test_single_node(self):
- with self._dep_test(('only', None)) as dummy:
- dummy.do_step(1, 'only').AndReturn(None)
- dummy.do_step(2, 'only').AndReturn(None)
- dummy.do_step(3, 'only').AndReturn(None)
-
- def test_disjoint(self):
- with self._dep_test(('1', None), ('2', None)) as dummy:
- dummy.do_step(1, '1').InAnyOrder('1')
- dummy.do_step(1, '2').InAnyOrder('1')
- dummy.do_step(2, '1').InAnyOrder('2')
- dummy.do_step(2, '2').InAnyOrder('2')
- dummy.do_step(3, '1').InAnyOrder('3')
- dummy.do_step(3, '2').InAnyOrder('3')
-
- def test_single_fwd(self):
- with self._dep_test(('second', 'first')) as dummy:
- dummy.do_step(1, 'first').AndReturn(None)
- dummy.do_step(2, 'first').AndReturn(None)
- dummy.do_step(3, 'first').AndReturn(None)
- dummy.do_step(1, 'second').AndReturn(None)
- dummy.do_step(2, 'second').AndReturn(None)
- dummy.do_step(3, 'second').AndReturn(None)
-
- def test_chain_fwd(self):
- with self._dep_test(('third', 'second'),
- ('second', 'first')) as dummy:
- dummy.do_step(1, 'first').AndReturn(None)
- dummy.do_step(2, 'first').AndReturn(None)
- dummy.do_step(3, 'first').AndReturn(None)
- dummy.do_step(1, 'second').AndReturn(None)
- dummy.do_step(2, 'second').AndReturn(None)
- dummy.do_step(3, 'second').AndReturn(None)
- dummy.do_step(1, 'third').AndReturn(None)
- dummy.do_step(2, 'third').AndReturn(None)
- dummy.do_step(3, 'third').AndReturn(None)
-
- def test_diamond_fwd(self):
- with self._dep_test(('last', 'mid1'), ('last', 'mid2'),
- ('mid1', 'first'), ('mid2', 'first')) as dummy:
- dummy.do_step(1, 'first').AndReturn(None)
- dummy.do_step(2, 'first').AndReturn(None)
- dummy.do_step(3, 'first').AndReturn(None)
- dummy.do_step(1, 'mid1').InAnyOrder('1')
- dummy.do_step(1, 'mid2').InAnyOrder('1')
- dummy.do_step(2, 'mid1').InAnyOrder('2')
- dummy.do_step(2, 'mid2').InAnyOrder('2')
- dummy.do_step(3, 'mid1').InAnyOrder('3')
- dummy.do_step(3, 'mid2').InAnyOrder('3')
- dummy.do_step(1, 'last').AndReturn(None)
- dummy.do_step(2, 'last').AndReturn(None)
- dummy.do_step(3, 'last').AndReturn(None)
-
- def test_complex_fwd(self):
- with self._dep_test(('last', 'mid1'), ('last', 'mid2'),
- ('mid1', 'mid3'), ('mid1', 'first'),
- ('mid3', 'first'), ('mid2', 'first')) as dummy:
- dummy.do_step(1, 'first').AndReturn(None)
- dummy.do_step(2, 'first').AndReturn(None)
- dummy.do_step(3, 'first').AndReturn(None)
- dummy.do_step(1, 'mid2').InAnyOrder('1')
- dummy.do_step(1, 'mid3').InAnyOrder('1')
- dummy.do_step(2, 'mid2').InAnyOrder('2')
- dummy.do_step(2, 'mid3').InAnyOrder('2')
- dummy.do_step(3, 'mid2').InAnyOrder('3')
- dummy.do_step(3, 'mid3').InAnyOrder('3')
- dummy.do_step(1, 'mid1').AndReturn(None)
- dummy.do_step(2, 'mid1').AndReturn(None)
- dummy.do_step(3, 'mid1').AndReturn(None)
- dummy.do_step(1, 'last').AndReturn(None)
- dummy.do_step(2, 'last').AndReturn(None)
- dummy.do_step(3, 'last').AndReturn(None)
-
- def test_many_edges_fwd(self):
- with self._dep_test(('last', 'e1'), ('last', 'mid1'), ('last', 'mid2'),
- ('mid1', 'e2'), ('mid1', 'mid3'),
- ('mid2', 'mid3'),
- ('mid3', 'e3')) as dummy:
- dummy.do_step(1, 'e1').InAnyOrder('1edges')
- dummy.do_step(1, 'e2').InAnyOrder('1edges')
- dummy.do_step(1, 'e3').InAnyOrder('1edges')
- dummy.do_step(2, 'e1').InAnyOrder('2edges')
- dummy.do_step(2, 'e2').InAnyOrder('2edges')
- dummy.do_step(2, 'e3').InAnyOrder('2edges')
- dummy.do_step(3, 'e1').InAnyOrder('3edges')
- dummy.do_step(3, 'e2').InAnyOrder('3edges')
- dummy.do_step(3, 'e3').InAnyOrder('3edges')
- dummy.do_step(1, 'mid3').AndReturn(None)
- dummy.do_step(2, 'mid3').AndReturn(None)
- dummy.do_step(3, 'mid3').AndReturn(None)
- dummy.do_step(1, 'mid2').InAnyOrder('1mid')
- dummy.do_step(1, 'mid1').InAnyOrder('1mid')
- dummy.do_step(2, 'mid2').InAnyOrder('2mid')
- dummy.do_step(2, 'mid1').InAnyOrder('2mid')
- dummy.do_step(3, 'mid2').InAnyOrder('3mid')
- dummy.do_step(3, 'mid1').InAnyOrder('3mid')
- dummy.do_step(1, 'last').AndReturn(None)
- dummy.do_step(2, 'last').AndReturn(None)
- dummy.do_step(3, 'last').AndReturn(None)
-
- def test_dbldiamond_fwd(self):
- with self._dep_test(('last', 'a1'), ('last', 'a2'),
- ('a1', 'b1'), ('a2', 'b1'), ('a2', 'b2'),
- ('b1', 'first'), ('b2', 'first')) as dummy:
- dummy.do_step(1, 'first').AndReturn(None)
- dummy.do_step(2, 'first').AndReturn(None)
- dummy.do_step(3, 'first').AndReturn(None)
- dummy.do_step(1, 'b1').InAnyOrder('1b')
- dummy.do_step(1, 'b2').InAnyOrder('1b')
- dummy.do_step(2, 'b1').InAnyOrder('2b')
- dummy.do_step(2, 'b2').InAnyOrder('2b')
- dummy.do_step(3, 'b1').InAnyOrder('3b')
- dummy.do_step(3, 'b2').InAnyOrder('3b')
- dummy.do_step(1, 'a1').InAnyOrder('1a')
- dummy.do_step(1, 'a2').InAnyOrder('1a')
- dummy.do_step(2, 'a1').InAnyOrder('2a')
- dummy.do_step(2, 'a2').InAnyOrder('2a')
- dummy.do_step(3, 'a1').InAnyOrder('3a')
- dummy.do_step(3, 'a2').InAnyOrder('3a')
- dummy.do_step(1, 'last').AndReturn(None)
- dummy.do_step(2, 'last').AndReturn(None)
- dummy.do_step(3, 'last').AndReturn(None)
-
- def test_circular_deps(self):
- d = dependencies.Dependencies([('first', 'second'),
- ('second', 'third'),
- ('third', 'first')])
- self.assertRaises(dependencies.CircularDependencyException,
- scheduler.DependencyTaskGroup, d)
-
-
-class TaskTest(mox.MoxTestBase):
-
- def test_run(self):
- task = DummyTask()
- self.mox.StubOutWithMock(task, 'do_step')
- self.mox.StubOutWithMock(scheduler.TaskRunner, '_sleep')
-
- task.do_step(1).AndReturn(None)
- scheduler.TaskRunner._sleep(1).AndReturn(None)
- task.do_step(2).AndReturn(None)
- scheduler.TaskRunner._sleep(1).AndReturn(None)
- task.do_step(3).AndReturn(None)
-
- self.mox.ReplayAll()
-
- scheduler.TaskRunner(task)()
-
- def test_run_wait_time(self):
- task = DummyTask()
- self.mox.StubOutWithMock(task, 'do_step')
- self.mox.StubOutWithMock(scheduler.TaskRunner, '_sleep')
-
- task.do_step(1).AndReturn(None)
- scheduler.TaskRunner._sleep(42).AndReturn(None)
- task.do_step(2).AndReturn(None)
- scheduler.TaskRunner._sleep(42).AndReturn(None)
- task.do_step(3).AndReturn(None)
-
- self.mox.ReplayAll()
-
- scheduler.TaskRunner(task)(wait_time=42)
-
- def test_start_run(self):
- task = DummyTask()
- self.mox.StubOutWithMock(task, 'do_step')
- self.mox.StubOutWithMock(scheduler.TaskRunner, '_sleep')
-
- task.do_step(1).AndReturn(None)
- scheduler.TaskRunner._sleep(1).AndReturn(None)
- task.do_step(2).AndReturn(None)
- scheduler.TaskRunner._sleep(1).AndReturn(None)
- task.do_step(3).AndReturn(None)
-
- self.mox.ReplayAll()
-
- runner = scheduler.TaskRunner(task)
- runner.start()
- runner.run_to_completion()
-
- def test_start_run_wait_time(self):
- task = DummyTask()
- self.mox.StubOutWithMock(task, 'do_step')
- self.mox.StubOutWithMock(scheduler.TaskRunner, '_sleep')
-
- task.do_step(1).AndReturn(None)
- scheduler.TaskRunner._sleep(24).AndReturn(None)
- task.do_step(2).AndReturn(None)
- scheduler.TaskRunner._sleep(24).AndReturn(None)
- task.do_step(3).AndReturn(None)
-
- self.mox.ReplayAll()
-
- runner = scheduler.TaskRunner(task)
- runner.start()
- runner.run_to_completion(wait_time=24)
-
- def test_sleep(self):
- sleep_time = 42
- self.mox.StubOutWithMock(eventlet, 'sleep')
- eventlet.sleep(sleep_time).MultipleTimes().AndReturn(None)
-
- self.mox.ReplayAll()
-
- runner = scheduler.TaskRunner(DummyTask())
- runner(wait_time=sleep_time)
-
- def test_sleep_zero(self):
- self.mox.StubOutWithMock(eventlet, 'sleep')
- eventlet.sleep(0).MultipleTimes().AndReturn(None)
-
- self.mox.ReplayAll()
-
- runner = scheduler.TaskRunner(DummyTask())
- runner(wait_time=0)
-
- def test_sleep_none(self):
- self.mox.StubOutWithMock(eventlet, 'sleep')
- self.mox.ReplayAll()
-
- runner = scheduler.TaskRunner(DummyTask())
- runner(wait_time=None)
-
- def test_args(self):
- args = ['foo', 'bar']
- kwargs = {'baz': 'quux', 'blarg': 'wibble'}
-
- self.mox.StubOutWithMock(DummyTask, '__call__')
- task = DummyTask()
-
- task(*args, **kwargs)
-
- self.mox.ReplayAll()
-
- runner = scheduler.TaskRunner(task, *args, **kwargs)
- runner(wait_time=None)
-
- def test_non_callable(self):
- self.assertRaises(AssertionError, scheduler.TaskRunner, object())
-
- def test_stepping(self):
- task = DummyTask()
- self.mox.StubOutWithMock(task, 'do_step')
- self.mox.StubOutWithMock(scheduler.TaskRunner, '_sleep')
-
- task.do_step(1).AndReturn(None)
- task.do_step(2).AndReturn(None)
- task.do_step(3).AndReturn(None)
-
- self.mox.ReplayAll()
-
- runner = scheduler.TaskRunner(task)
- runner.start()
-
- self.assertFalse(runner.step())
- self.assertTrue(runner)
- self.assertFalse(runner.step())
- self.assertTrue(runner.step())
- self.assertFalse(runner)
-
- def test_start_no_steps(self):
- task = DummyTask(0)
- self.mox.StubOutWithMock(task, 'do_step')
- self.mox.StubOutWithMock(scheduler.TaskRunner, '_sleep')
-
- self.mox.ReplayAll()
-
- runner = scheduler.TaskRunner(task)
- runner.start()
-
- self.assertTrue(runner.done())
- self.assertTrue(runner.step())
-
- def test_start_only(self):
- task = DummyTask()
- self.mox.StubOutWithMock(task, 'do_step')
- self.mox.StubOutWithMock(scheduler.TaskRunner, '_sleep')
-
- task.do_step(1).AndReturn(None)
-
- self.mox.ReplayAll()
-
- runner = scheduler.TaskRunner(task)
-
- self.assertFalse(runner.started())
- runner.start()
- self.assertTrue(runner.started())
-
- def test_double_start(self):
- runner = scheduler.TaskRunner(DummyTask())
-
- runner.start()
- self.assertRaises(AssertionError, runner.start)
-
- def test_call_double_start(self):
- runner = scheduler.TaskRunner(DummyTask())
-
- runner(wait_time=None)
- self.assertRaises(AssertionError, runner.start)
-
- def test_start_function(self):
- def task():
- pass
-
- runner = scheduler.TaskRunner(task)
-
- runner.start()
- self.assertTrue(runner.started())
- self.assertTrue(runner.done())
- self.assertTrue(runner.step())
-
- def test_repeated_done(self):
- task = DummyTask(0)
- self.mox.StubOutWithMock(task, 'do_step')
- self.mox.StubOutWithMock(scheduler.TaskRunner, '_sleep')
-
- self.mox.ReplayAll()
-
- runner = scheduler.TaskRunner(task)
-
- runner.start()
- self.assertTrue(runner.step())
- self.assertTrue(runner.step())
-
- def test_timeout(self):
- st = scheduler.wallclock()
-
- def task():
- while True:
- yield
-
- self.mox.StubOutWithMock(scheduler, 'wallclock')
- scheduler.wallclock().AndReturn(st)
- scheduler.wallclock().AndReturn(st + 0.5)
- scheduler.wallclock().AndReturn(st + 1.5)
-
- self.mox.ReplayAll()
-
- runner = scheduler.TaskRunner(task)
-
- runner.start(timeout=1)
- self.assertTrue(runner)
- self.assertRaises(scheduler.Timeout, runner.step)
-
- self.mox.VerifyAll()
-
- def test_timeout_return(self):
- st = scheduler.wallclock()
-
- def task():
- while True:
- try:
- yield
- except scheduler.Timeout:
- return
-
- self.mox.StubOutWithMock(scheduler, 'wallclock')
- scheduler.wallclock().AndReturn(st)
- scheduler.wallclock().AndReturn(st + 0.5)
- scheduler.wallclock().AndReturn(st + 1.5)
-
- self.mox.ReplayAll()
-
- runner = scheduler.TaskRunner(task)
-
- runner.start(timeout=1)
- self.assertTrue(runner)
- self.assertTrue(runner.step())
- self.assertFalse(runner)
-
- self.mox.VerifyAll()
-
- def test_timeout_swallowed(self):
- st = scheduler.wallclock()
-
- def task():
- while True:
- try:
- yield
- except scheduler.Timeout:
- yield
- self.fail('Task still running')
-
- self.mox.StubOutWithMock(scheduler, 'wallclock')
- scheduler.wallclock().AndReturn(st)
- scheduler.wallclock().AndReturn(st + 0.5)
- scheduler.wallclock().AndReturn(st + 1.5)
-
- self.mox.ReplayAll()
-
- runner = scheduler.TaskRunner(task)
-
- runner.start(timeout=1)
- self.assertTrue(runner)
- self.assertTrue(runner.step())
- self.assertFalse(runner)
- self.assertTrue(runner.step())
-
- self.mox.VerifyAll()
-
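
The `TaskTest` cases above exercise a stepping contract: `step()` advances the wrapped generator once and reports completion, while the runner itself is truthy only while the task is still running. A minimal sketch of that contract (assumed semantics, not Heat's `TaskRunner`):

```python
class MiniRunner:
    """Minimal cooperative-task runner sketch: step() advances the
    generator one yield at a time and returns True once it is
    exhausted; bool(runner) is True while the task is still running."""

    def __init__(self, task):
        self._gen = task()
        self._done = False

    def step(self):
        if not self._done:
            try:
                next(self._gen)
            except StopIteration:
                self._done = True
        return self._done

    def __bool__(self):
        return not self._done


def three_steps():
    for _ in range(3):
        yield  # one unit of work per step


runner = MiniRunner(three_steps)
```

Driving it mirrors `test_stepping`: intermediate steps report `False`, the final step reports `True`, and truthiness flips once the task finishes.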
-
-class DescriptionTest(mox.MoxTestBase):
- def test_func(self):
- def f():
- pass
-
- self.assertEqual(scheduler.task_description(f), 'f')
-
- def test_lambda(self):
- l = lambda: None
-
- self.assertEqual(scheduler.task_description(l), '<lambda>')
-
- def test_method(self):
- class C(object):
- def __str__(self):
- return 'C "o"'
-
- def __repr__(self):
- return 'o'
-
- def m(self):
- pass
-
- self.assertEqual(scheduler.task_description(C().m), 'm from C "o"')
-
- def test_object(self):
- class C(object):
- def __str__(self):
- return 'C "o"'
-
- def __repr__(self):
- return 'o'
-
- def __call__(self):
- pass
-
- self.assertEqual(scheduler.task_description(C()), 'o')
-
-
-class WrapperTaskTest(mox.MoxTestBase):
-
- def test_wrap(self):
- child_tasks = [DummyTask() for i in range(3)]
-
- @scheduler.wrappertask
- def task():
- for child_task in child_tasks:
- yield child_task()
-
- yield
-
- for child_task in child_tasks:
- self.mox.StubOutWithMock(child_task, 'do_step')
- self.mox.StubOutWithMock(scheduler.TaskRunner, '_sleep')
-
- for child_task in child_tasks:
- child_task.do_step(1).AndReturn(None)
- scheduler.TaskRunner._sleep(mox.IsA(int)).AndReturn(None)
- child_task.do_step(2).AndReturn(None)
- scheduler.TaskRunner._sleep(mox.IsA(int)).AndReturn(None)
- child_task.do_step(3).AndReturn(None)
- scheduler.TaskRunner._sleep(mox.IsA(int)).AndReturn(None)
-
- self.mox.ReplayAll()
-
- scheduler.TaskRunner(task)()
-
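
The `test_wrap` case above relies on `scheduler.wrappertask` delegating to child generators: when the decorated task yields a sub-generator, the child runs to completion before the parent resumes. A sketch of those assumed semantics using Python 3's `yield from` (which also forwards `throw()` and `close()`, the behavior the exception and cancellation tests below depend on); this is illustrative, not Heat's implementation:

```python
import types


def wrappertask(task):
    """Decorator sketch: yielded sub-generators are driven to
    completion via delegation; plain yields pass through unchanged."""
    def wrapper(*args, **kwargs):
        for step in task(*args, **kwargs):
            if isinstance(step, types.GeneratorType):
                yield from step  # run the child task to completion
            else:
                yield step
    return wrapper


def child():
    yield 'a'
    yield 'b'


@wrappertask
def parent():
    yield child()  # child runs fully here
    yield 'c'      # then the parent continues


result = list(parent())
```

Heat's original implementation predates `yield from` and forwards `send`/`throw`/`close` to the child by hand, which is what these tests verify step by step.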
- def test_child_exception(self):
- class MyException(Exception):
- pass
-
- def child_task():
- yield
-
- raise MyException()
-
- @scheduler.wrappertask
- def parent_task():
- try:
- yield child_task()
- except MyException:
- raise
- else:
- self.fail('No exception raised in parent_task')
-
- task = parent_task()
- task.next()
- self.assertRaises(MyException, task.next)
-
- def test_child_exception_exit(self):
- class MyException(Exception):
- pass
-
- def child_task():
- yield
-
- raise MyException()
-
- @scheduler.wrappertask
- def parent_task():
- try:
- yield child_task()
- except MyException:
- return
- else:
- self.fail('No exception raised in parent_task')
-
- task = parent_task()
- task.next()
- self.assertRaises(StopIteration, task.next)
-
- def test_child_exception_swallow(self):
- class MyException(Exception):
- pass
-
- def child_task():
- yield
-
- raise MyException()
-
- @scheduler.wrappertask
- def parent_task():
- try:
- yield child_task()
- except MyException:
- yield
- else:
- self.fail('No exception raised in parent_task')
-
- yield
-
- task = parent_task()
- task.next()
- task.next()
-
- def test_child_exception_swallow_next(self):
- class MyException(Exception):
- pass
-
- def child_task():
- yield
-
- raise MyException()
-
- dummy = DummyTask()
-
- @scheduler.wrappertask
- def parent_task():
- try:
- yield child_task()
- except MyException:
- pass
- else:
- self.fail('No exception raised in parent_task')
-
- yield dummy()
-
- task = parent_task()
- task.next()
-
- self.mox.StubOutWithMock(dummy, 'do_step')
- for i in range(1, dummy.num_steps + 1):
- dummy.do_step(i).AndReturn(None)
- self.mox.ReplayAll()
-
- for i in range(1, dummy.num_steps + 1):
- task.next()
- self.assertRaises(StopIteration, task.next)
-
- def test_thrown_exception_swallow_next(self):
- class MyException(Exception):
- pass
-
- dummy = DummyTask()
-
- @scheduler.wrappertask
- def child_task():
- try:
- yield
- except MyException:
- yield dummy()
- else:
- self.fail('No exception raised in child_task')
-
- @scheduler.wrappertask
- def parent_task():
- yield child_task()
-
- task = parent_task()
-
- self.mox.StubOutWithMock(dummy, 'do_step')
- for i in range(1, dummy.num_steps + 1):
- dummy.do_step(i).AndReturn(None)
- self.mox.ReplayAll()
-
- next(task)
- task.throw(MyException)
-
- for i in range(2, dummy.num_steps + 1):
- task.next()
- self.assertRaises(StopIteration, task.next)
-
- def test_thrown_exception_raise(self):
- class MyException(Exception):
- pass
-
- dummy = DummyTask()
-
- @scheduler.wrappertask
- def child_task():
- try:
- yield
- except MyException:
- raise
- else:
- self.fail('No exception raised in child_task')
-
- @scheduler.wrappertask
- def parent_task():
- try:
- yield child_task()
- except MyException:
- yield dummy()
-
- task = parent_task()
-
- self.mox.StubOutWithMock(dummy, 'do_step')
- for i in range(1, dummy.num_steps + 1):
- dummy.do_step(i).AndReturn(None)
- self.mox.ReplayAll()
-
- next(task)
- task.throw(MyException)
-
- for i in range(2, dummy.num_steps + 1):
- task.next()
- self.assertRaises(StopIteration, task.next)
-
- def test_thrown_exception_exit(self):
- class MyException(Exception):
- pass
-
- dummy = DummyTask()
-
- @scheduler.wrappertask
- def child_task():
- try:
- yield
- except MyException:
- return
- else:
- self.fail('No exception raised in child_task')
-
- @scheduler.wrappertask
- def parent_task():
- yield child_task()
- yield dummy()
-
- task = parent_task()
-
- self.mox.StubOutWithMock(dummy, 'do_step')
- for i in range(1, dummy.num_steps + 1):
- dummy.do_step(i).AndReturn(None)
- self.mox.ReplayAll()
-
- next(task)
- task.throw(MyException)
-
- for i in range(2, dummy.num_steps + 1):
- task.next()
- self.assertRaises(StopIteration, task.next)
-
- def test_parent_exception(self):
- class MyException(Exception):
- pass
-
- def child_task():
- yield
-
- @scheduler.wrappertask
- def parent_task():
- yield child_task()
- raise MyException()
-
- task = parent_task()
- task.next()
- self.assertRaises(MyException, task.next)
-
- def test_parent_throw(self):
- class MyException(Exception):
- pass
-
- @scheduler.wrappertask
- def parent_task():
- try:
- yield DummyTask()()
- except MyException:
- raise
- else:
- self.fail('No exception raised in parent_task')
-
- task = parent_task()
- task.next()
- self.assertRaises(MyException, task.throw, MyException())
-
- def test_parent_throw_exit(self):
- class MyException(Exception):
- pass
-
- @scheduler.wrappertask
- def parent_task():
- try:
- yield DummyTask()()
- except MyException:
- return
- else:
- self.fail('No exception raised in parent_task')
-
- task = parent_task()
- task.next()
- self.assertRaises(StopIteration, task.throw, MyException())
-
- def test_parent_cancel(self):
- @scheduler.wrappertask
- def parent_task():
- try:
- yield
- except GeneratorExit:
- raise
- else:
- self.fail('parent_task not closed')
-
- task = parent_task()
- task.next()
- task.close()
-
- def test_parent_cancel_exit(self):
- @scheduler.wrappertask
- def parent_task():
- try:
- yield
- except GeneratorExit:
- return
- else:
- self.fail('parent_task not closed')
-
- task = parent_task()
- task.next()
- task.close()
-
- def test_cancel(self):
- def child_task():
- try:
- yield
- except GeneratorExit:
- raise
- else:
- self.fail('child_task not closed')
-
- @scheduler.wrappertask
- def parent_task():
- try:
- yield child_task()
- except GeneratorExit:
- raise
- else:
- self.fail('parent_task not closed')
-
- task = parent_task()
- task.next()
- task.close()
-
- def test_cancel_exit(self):
- def child_task():
- try:
- yield
- except GeneratorExit:
- return
- else:
- self.fail('child_task not closed')
-
- @scheduler.wrappertask
- def parent_task():
- try:
- yield child_task()
- except GeneratorExit:
- raise
- else:
- self.fail('parent_task not closed')
-
- task = parent_task()
- task.next()
- task.close()
-
- def test_cancel_parent_exit(self):
- def child_task():
- try:
- yield
- except GeneratorExit:
- return
- else:
- self.fail('child_task not closed')
-
- @scheduler.wrappertask
- def parent_task():
- try:
- yield child_task()
- except GeneratorExit:
- return
- else:
- self.fail('parent_task not closed')
-
- task = parent_task()
- task.next()
- task.close()
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import collections
-
-from heat.engine import clients
-from heat.common import exception
-from heat.common import template_format
-from heat.engine import parser
-from heat.engine import resource
-from heat.engine import scheduler
-from heat.tests.common import HeatTestCase
-from heat.tests.fakes import FakeKeystoneClient
-from heat.tests.v1_1 import fakes
-from heat.tests import utils
-
-from novaclient.v1_1 import security_groups as nova_sg
-from novaclient.v1_1 import security_group_rules as nova_sgr
-from neutronclient.common.exceptions import NeutronClientException
-from neutronclient.v2_0 import client as neutronclient
-
-NovaSG = collections.namedtuple('NovaSG',
- ' '.join([
- 'name',
- 'id',
- 'rules',
- 'description',
- ]))
-
-
-class SecurityGroupTest(HeatTestCase):
-
- test_template_nova = '''
-HeatTemplateFormatVersion: '2012-12-12'
-Resources:
- the_sg:
- Type: AWS::EC2::SecurityGroup
- Properties:
- GroupDescription: HTTP and SSH access
- SecurityGroupIngress:
- - IpProtocol: tcp
- FromPort: "22"
- ToPort: "22"
- CidrIp: 0.0.0.0/0
- - IpProtocol: tcp
- FromPort : "80"
- ToPort : "80"
- CidrIp : 0.0.0.0/0
- - IpProtocol: tcp
- SourceSecurityGroupName: test
- - IpProtocol: icmp
- SourceSecurityGroupId: "1"
-'''
-
- test_template_nova_with_egress = '''
-HeatTemplateFormatVersion: '2012-12-12'
-Resources:
- the_sg:
- Type: AWS::EC2::SecurityGroup
- Properties:
- GroupDescription: HTTP and SSH access
- SecurityGroupEgress:
- - IpProtocol: tcp
- FromPort: "22"
- ToPort: "22"
- CidrIp: 0.0.0.0/0
-'''
-
- test_template_neutron = '''
-HeatTemplateFormatVersion: '2012-12-12'
-Resources:
- the_sg:
- Type: AWS::EC2::SecurityGroup
- Properties:
- GroupDescription: HTTP and SSH access
- VpcId: aaaa
- SecurityGroupIngress:
- - IpProtocol: tcp
- FromPort: "22"
- ToPort: "22"
- CidrIp: 0.0.0.0/0
- - IpProtocol: tcp
- FromPort : "80"
- ToPort : "80"
- CidrIp : 0.0.0.0/0
- - IpProtocol: tcp
- SourceSecurityGroupId: wwww
- SecurityGroupEgress:
- - IpProtocol: tcp
- FromPort: "22"
- ToPort: "22"
- CidrIp: 10.0.1.0/24
- - SourceSecurityGroupName: xxxx
-'''
-
- def setUp(self):
- super(SecurityGroupTest, self).setUp()
- self.fc = fakes.FakeClient()
- self.m.StubOutWithMock(clients.OpenStackClients, 'nova')
- self.m.StubOutWithMock(clients.OpenStackClients, 'keystone')
- self.m.StubOutWithMock(nova_sgr.SecurityGroupRuleManager, 'create')
- self.m.StubOutWithMock(nova_sgr.SecurityGroupRuleManager, 'delete')
- self.m.StubOutWithMock(nova_sg.SecurityGroupManager, 'create')
- self.m.StubOutWithMock(nova_sg.SecurityGroupManager, 'delete')
- self.m.StubOutWithMock(nova_sg.SecurityGroupManager, 'get')
- self.m.StubOutWithMock(nova_sg.SecurityGroupManager, 'list')
- utils.setup_dummy_db()
- self.m.StubOutWithMock(neutronclient.Client, 'create_security_group')
- self.m.StubOutWithMock(
- neutronclient.Client, 'create_security_group_rule')
- self.m.StubOutWithMock(neutronclient.Client, 'show_security_group')
- self.m.StubOutWithMock(
- neutronclient.Client, 'delete_security_group_rule')
- self.m.StubOutWithMock(neutronclient.Client, 'delete_security_group')
-
- def create_stack(self, template):
- t = template_format.parse(template)
- self.stack = self.parse_stack(t)
- self.assertEqual(None, self.stack.create())
- return self.stack
-
- def parse_stack(self, t):
- stack_name = 'test_stack'
- tmpl = parser.Template(t)
- stack = parser.Stack(utils.dummy_context(), stack_name, tmpl)
- stack.store()
- return stack
-
- def assertResourceState(self, rsrc, ref_id, metadata={}):
- self.assertEqual(None, rsrc.validate())
- self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state)
- self.assertEqual(ref_id, rsrc.FnGetRefId())
- self.assertEqual(metadata, dict(rsrc.metadata))
-
- @utils.stack_delete_after
- def test_security_group_nova(self):
- #create script
- clients.OpenStackClients.nova('compute').AndReturn(self.fc)
- nova_sg.SecurityGroupManager.list().AndReturn([NovaSG(
- id=1,
- name='test',
- description='FAKE_SECURITY_GROUP',
- rules=[],
- )])
- clients.OpenStackClients.nova('compute').AndReturn(self.fc)
- sg_name = utils.PhysName('test_stack', 'the_sg')
- nova_sg.SecurityGroupManager.create(
- sg_name,
- 'HTTP and SSH access').AndReturn(NovaSG(
- id=2,
- name=sg_name,
- description='HTTP and SSH access',
- rules=[]))
-
- clients.OpenStackClients.nova('compute').AndReturn(self.fc)
- nova_sgr.SecurityGroupRuleManager.create(
- 2, 'tcp', '22', '22', '0.0.0.0/0', None).AndReturn(None)
- nova_sgr.SecurityGroupRuleManager.create(
- 2, 'tcp', '80', '80', '0.0.0.0/0', None).AndReturn(None)
- nova_sgr.SecurityGroupRuleManager.create(
- 2, 'tcp', None, None, None, 1).AndReturn(None)
- nova_sgr.SecurityGroupRuleManager.create(
- 2, 'icmp', None, None, None, '1').AndReturn(None)
-
- # delete script
- clients.OpenStackClients.nova('compute').AndReturn(self.fc)
- nova_sg.SecurityGroupManager.get(2).AndReturn(NovaSG(
- id=2,
- name=sg_name,
- description='HTTP and SSH access',
- rules=[{
- "from_port": '22',
- "group": {},
- "ip_protocol": "tcp",
- "to_port": '22',
- "parent_group_id": 2,
- "ip_range": {
- "cidr": "0.0.0.0/0"
- },
- 'id': 130
- }, {
- 'from_port': '80',
- 'group': {},
- 'ip_protocol': 'tcp',
- 'to_port': '80',
- 'parent_group_id': 2,
- 'ip_range': {
- 'cidr': '0.0.0.0/0'
- },
- 'id': 131
- }, {
- 'from_port': None,
- 'group': {
- 'tenant_id': 'f18ca530cc05425e8bac0a5ff92f7e88',
- 'name': 'test'
- },
- 'ip_protocol': 'tcp',
- 'to_port': None,
- 'parent_group_id': 2,
- 'ip_range': {},
- 'id': 132
- }, {
- 'from_port': None,
- 'group': {
- 'tenant_id': 'f18ca530cc05425e8bac0a5ff92f7e88',
- 'name': 'test'
- },
- 'ip_protocol': 'icmp',
- 'to_port': None,
- 'parent_group_id': 2,
- 'ip_range': {},
- 'id': 133
- }]
- ))
- clients.OpenStackClients.nova('compute').AndReturn(self.fc)
- nova_sgr.SecurityGroupRuleManager.delete(130).AndReturn(None)
- clients.OpenStackClients.nova('compute').AndReturn(self.fc)
- nova_sgr.SecurityGroupRuleManager.delete(131).AndReturn(None)
- clients.OpenStackClients.nova('compute').AndReturn(self.fc)
- nova_sgr.SecurityGroupRuleManager.delete(132).AndReturn(None)
- clients.OpenStackClients.nova('compute').AndReturn(self.fc)
- nova_sgr.SecurityGroupRuleManager.delete(133).AndReturn(None)
- clients.OpenStackClients.nova('compute').AndReturn(self.fc)
- nova_sg.SecurityGroupManager.delete(2).AndReturn(None)
-
- self.m.ReplayAll()
- stack = self.create_stack(self.test_template_nova)
-
- sg = stack['the_sg']
- self.assertRaises(resource.UpdateReplace, sg.handle_update, {}, {}, {})
-
- self.assertResourceState(sg, utils.PhysName('test_stack', 'the_sg'))
-
- stack.delete()
- self.m.VerifyAll()
-
- @utils.stack_delete_after
- def test_security_group_nova_exception(self):
- #create script
- clients.OpenStackClients.nova('compute').AndReturn(self.fc)
- sg_name = utils.PhysName('test_stack', 'the_sg')
- nova_sg.SecurityGroupManager.list().AndReturn([
- NovaSG(
- id=2,
- name=sg_name,
- description='HTTP and SSH access',
- rules=[],
- ),
- NovaSG(
- id=1,
- name='test',
- description='FAKE_SECURITY_GROUP',
- rules=[],
- )
- ])
-
- clients.OpenStackClients.nova('compute').AndReturn(self.fc)
- nova_sgr.SecurityGroupRuleManager.create(
- 2, 'tcp', '22', '22', '0.0.0.0/0', None).AndRaise(
- clients.novaclient.exceptions.BadRequest(
- 400, 'Rule already exists'))
- nova_sgr.SecurityGroupRuleManager.create(
- 2, 'tcp', '80', '80', '0.0.0.0/0', None).AndReturn(
- clients.novaclient.exceptions.BadRequest(
- 400, 'Rule already exists'))
- nova_sgr.SecurityGroupRuleManager.create(
- 2, 'tcp', None, None, None, 1).AndReturn(
- clients.novaclient.exceptions.BadRequest(
- 400, 'Rule already exists'))
- nova_sgr.SecurityGroupRuleManager.create(
- 2, 'icmp', None, None, None, '1').AndReturn(
- clients.novaclient.exceptions.BadRequest(
- 400, 'Rule already exists'))
-
- # delete script
- clients.OpenStackClients.nova('compute').AndReturn(self.fc)
- nova_sg.SecurityGroupManager.get(2).AndReturn(NovaSG(
- id=2,
- name=sg_name,
- description='HTTP and SSH access',
- rules=[{
- "from_port": '22',
- "group": {},
- "ip_protocol": "tcp",
- "to_port": '22',
- "parent_group_id": 2,
- "ip_range": {
- "cidr": "0.0.0.0/0"
- },
- 'id': 130
- }, {
- 'from_port': '80',
- 'group': {},
- 'ip_protocol': 'tcp',
- 'to_port': '80',
- 'parent_group_id': 2,
- 'ip_range': {
- 'cidr': '0.0.0.0/0'
- },
- 'id': 131
- }, {
- 'from_port': None,
- 'group': {
- 'tenant_id': 'f18ca530cc05425e8bac0a5ff92f7e88',
- 'name': 'test'
- },
- 'ip_protocol': 'tcp',
- 'to_port': None,
- 'parent_group_id': 2,
- 'ip_range': {},
- 'id': 132
- }, {
- 'from_port': None,
- 'group': {
- 'tenant_id': 'f18ca530cc05425e8bac0a5ff92f7e88',
- 'name': 'test'
- },
- 'ip_protocol': 'icmp',
- 'to_port': None,
- 'parent_group_id': 2,
- 'ip_range': {},
- 'id': 133
- }]
- ))
- clients.OpenStackClients.nova('compute').AndReturn(self.fc)
- nova_sgr.SecurityGroupRuleManager.delete(130).AndRaise(
- clients.novaclient.exceptions.NotFound('goneburger'))
- clients.OpenStackClients.nova('compute').AndReturn(self.fc)
- nova_sgr.SecurityGroupRuleManager.delete(131).AndRaise(
- clients.novaclient.exceptions.NotFound('goneburger'))
- clients.OpenStackClients.nova('compute').AndReturn(self.fc)
- nova_sgr.SecurityGroupRuleManager.delete(132).AndRaise(
- clients.novaclient.exceptions.NotFound('goneburger'))
- clients.OpenStackClients.nova('compute').AndReturn(self.fc)
- nova_sgr.SecurityGroupRuleManager.delete(133).AndRaise(
- clients.novaclient.exceptions.NotFound('goneburger'))
- clients.OpenStackClients.nova('compute').AndReturn(self.fc)
- nova_sg.SecurityGroupManager.delete(2).AndReturn(None)
-
- clients.OpenStackClients.nova('compute').AndReturn(self.fc)
- nova_sg.SecurityGroupManager.get(2).AndRaise(
- clients.novaclient.exceptions.NotFound('goneburger'))
-
- self.m.ReplayAll()
- stack = self.create_stack(self.test_template_nova)
-
- sg = stack['the_sg']
- self.assertRaises(resource.UpdateReplace, sg.handle_update, {}, {}, {})
-
- self.assertResourceState(sg, utils.PhysName('test_stack', 'the_sg'))
-
- scheduler.TaskRunner(sg.delete)()
-
- sg.state_set(sg.CREATE, sg.COMPLETE, 'to delete again')
- sg.resource_id = 2
- stack.delete()
-
- self.m.VerifyAll()
-
- def test_security_group_nova_with_egress_rules(self):
- t = template_format.parse(self.test_template_nova_with_egress)
- stack = self.parse_stack(t)
-
- sg = stack['the_sg']
- self.assertRaises(exception.EgressRuleNotAllowed, sg.validate)
-
- @utils.stack_delete_after
- def test_security_group_neutron(self):
- #create script
- clients.OpenStackClients.keystone().AndReturn(
- FakeKeystoneClient())
- sg_name = utils.PhysName('test_stack', 'the_sg')
- neutronclient.Client.create_security_group({
- 'security_group': {
- 'name': sg_name,
- 'description': 'HTTP and SSH access'
- }
- }).AndReturn({
- 'security_group': {
- 'tenant_id': 'f18ca530cc05425e8bac0a5ff92f7e88',
- 'name': sg_name,
- 'description': 'HTTP and SSH access',
- 'security_group_rules': [{
- "direction": "egress",
- "ethertype": "IPv4",
- "id": "aaaa-1",
- "port_range_max": None,
- "port_range_min": None,
- "protocol": None,
- "remote_group_id": None,
- "remote_ip_prefix": None,
- "security_group_id": "aaaa",
- "tenant_id": "f18ca530cc05425e8bac0a5ff92f7e88"
- }, {
- "direction": "egress",
- "ethertype": "IPv6",
- "id": "aaaa-2",
- "port_range_max": None,
- "port_range_min": None,
- "protocol": None,
- "remote_group_id": None,
- "remote_ip_prefix": None,
- "security_group_id": "aaaa",
- "tenant_id": "f18ca530cc05425e8bac0a5ff92f7e88"
- }],
- 'id': 'aaaa'
- }
- })
-
- neutronclient.Client.create_security_group_rule({
- 'security_group_rule': {
- 'direction': 'ingress',
- 'remote_group_id': None,
- 'remote_ip_prefix': '0.0.0.0/0',
- 'port_range_min': '22',
- 'ethertype': 'IPv4',
- 'port_range_max': '22',
- 'protocol': 'tcp',
- 'security_group_id': 'aaaa'
- }
- }).AndReturn({
- 'security_group_rule': {
- 'direction': 'ingress',
- 'remote_group_id': None,
- 'remote_ip_prefix': '0.0.0.0/0',
- 'port_range_min': '22',
- 'ethertype': 'IPv4',
- 'port_range_max': '22',
- 'protocol': 'tcp',
- 'security_group_id': 'aaaa',
- 'id': 'bbbb'
- }
- })
- neutronclient.Client.create_security_group_rule({
- 'security_group_rule': {
- 'direction': 'ingress',
- 'remote_group_id': None,
- 'remote_ip_prefix': '0.0.0.0/0',
- 'port_range_min': '80',
- 'ethertype': 'IPv4',
- 'port_range_max': '80',
- 'protocol': 'tcp',
- 'security_group_id': 'aaaa'
- }
- }).AndReturn({
- 'security_group_rule': {
- 'direction': 'ingress',
- 'remote_group_id': None,
- 'remote_ip_prefix': '0.0.0.0/0',
- 'port_range_min': '80',
- 'ethertype': 'IPv4',
- 'port_range_max': '80',
- 'protocol': 'tcp',
- 'security_group_id': 'aaaa',
- 'id': 'cccc'
- }
- })
- neutronclient.Client.create_security_group_rule({
- 'security_group_rule': {
- 'direction': 'ingress',
- 'remote_group_id': 'wwww',
- 'remote_ip_prefix': None,
- 'port_range_min': None,
- 'ethertype': 'IPv4',
- 'port_range_max': None,
- 'protocol': 'tcp',
- 'security_group_id': 'aaaa'
- }
- }).AndReturn({
- 'security_group_rule': {
- 'direction': 'ingress',
- 'remote_group_id': 'wwww',
- 'remote_ip_prefix': None,
- 'port_range_min': None,
- 'ethertype': 'IPv4',
- 'port_range_max': None,
- 'protocol': 'tcp',
- 'security_group_id': 'aaaa',
- 'id': 'dddd'
- }
- })
- neutronclient.Client.delete_security_group_rule('aaaa-1').AndReturn(
- None)
- neutronclient.Client.delete_security_group_rule('aaaa-2').AndReturn(
- None)
- neutronclient.Client.create_security_group_rule({
- 'security_group_rule': {
- 'direction': 'egress',
- 'remote_group_id': None,
- 'remote_ip_prefix': '10.0.1.0/24',
- 'port_range_min': '22',
- 'ethertype': 'IPv4',
- 'port_range_max': '22',
- 'protocol': 'tcp',
- 'security_group_id': 'aaaa'
- }
- }).AndReturn({
- 'security_group_rule': {
- 'direction': 'egress',
- 'remote_group_id': None,
- 'remote_ip_prefix': '10.0.1.0/24',
- 'port_range_min': '22',
- 'ethertype': 'IPv4',
- 'port_range_max': '22',
- 'protocol': 'tcp',
- 'security_group_id': 'aaaa',
- 'id': 'eeee'
- }
- })
- neutronclient.Client.create_security_group_rule({
- 'security_group_rule': {
- 'direction': 'egress',
- 'remote_group_id': 'xxxx',
- 'remote_ip_prefix': None,
- 'port_range_min': None,
- 'ethertype': 'IPv4',
- 'port_range_max': None,
- 'protocol': None,
- 'security_group_id': 'aaaa'
- }
- }).AndReturn({
- 'security_group_rule': {
- 'direction': 'egress',
- 'remote_group_id': 'xxxx',
- 'remote_ip_prefix': None,
- 'port_range_min': None,
- 'ethertype': 'IPv4',
- 'port_range_max': None,
- 'protocol': None,
- 'security_group_id': 'aaaa',
- 'id': 'ffff'
- }
- })
-
- # delete script
- neutronclient.Client.show_security_group('aaaa').AndReturn({
- 'security_group': {
- 'tenant_id': 'f18ca530cc05425e8bac0a5ff92f7e88',
- 'name': 'sc1',
- 'description': '',
- 'security_group_rules': [{
- 'direction': 'ingress',
- 'protocol': 'tcp',
- 'port_range_max': '22',
- 'id': 'bbbb',
- 'ethertype': 'IPv4',
- 'security_group_id': 'aaaa',
- 'remote_group_id': None,
- 'remote_ip_prefix': '0.0.0.0/0',
- 'tenant_id': 'f18ca530cc05425e8bac0a5ff92f7e88',
- 'port_range_min': '22'
- }, {
- 'direction': 'ingress',
- 'protocol': 'tcp',
- 'port_range_max': '80',
- 'id': 'cccc',
- 'ethertype': 'IPv4',
- 'security_group_id': 'aaaa',
- 'remote_group_id': None,
- 'remote_ip_prefix': '0.0.0.0/0',
- 'tenant_id': 'f18ca530cc05425e8bac0a5ff92f7e88',
- 'port_range_min': '80'
- }, {
- 'direction': 'ingress',
- 'protocol': 'tcp',
- 'port_range_max': None,
- 'id': 'dddd',
- 'ethertype': 'IPv4',
- 'security_group_id': 'aaaa',
- 'remote_group_id': 'wwww',
- 'remote_ip_prefix': None,
- 'tenant_id': 'f18ca530cc05425e8bac0a5ff92f7e88',
- 'port_range_min': None
- }, {
- 'direction': 'egress',
- 'protocol': 'tcp',
- 'port_range_max': '22',
- 'id': 'eeee',
- 'ethertype': 'IPv4',
- 'security_group_id': 'aaaa',
- 'remote_group_id': None,
- 'remote_ip_prefix': '10.0.1.0/24',
- 'tenant_id': 'f18ca530cc05425e8bac0a5ff92f7e88',
- 'port_range_min': '22'
- }, {
- 'direction': 'egress',
- 'protocol': None,
- 'port_range_max': None,
- 'id': 'ffff',
- 'ethertype': 'IPv4',
- 'security_group_id': 'aaaa',
- 'remote_group_id': None,
- 'remote_ip_prefix': None,
- 'tenant_id': 'f18ca530cc05425e8bac0a5ff92f7e88',
- 'port_range_min': None
- }],
- 'id': 'aaaa'}})
- neutronclient.Client.delete_security_group_rule('bbbb').AndReturn(None)
- neutronclient.Client.delete_security_group_rule('cccc').AndReturn(None)
- neutronclient.Client.delete_security_group_rule('dddd').AndReturn(None)
- neutronclient.Client.delete_security_group_rule('eeee').AndReturn(None)
- neutronclient.Client.delete_security_group_rule('ffff').AndReturn(None)
- neutronclient.Client.delete_security_group('aaaa').AndReturn(None)
-
- self.m.ReplayAll()
- stack = self.create_stack(self.test_template_neutron)
-
- sg = stack['the_sg']
- self.assertRaises(resource.UpdateReplace, sg.handle_update, {}, {}, {})
-
- self.assertResourceState(sg, 'aaaa')
-
- stack.delete()
- self.m.VerifyAll()
-
- @utils.stack_delete_after
- def test_security_group_neutron_exception(self):
- #create script
- clients.OpenStackClients.keystone().AndReturn(
- FakeKeystoneClient())
- sg_name = utils.PhysName('test_stack', 'the_sg')
- neutronclient.Client.create_security_group({
- 'security_group': {
- 'name': sg_name,
- 'description': 'HTTP and SSH access'
- }
- }).AndReturn({
- 'security_group': {
- 'tenant_id': 'f18ca530cc05425e8bac0a5ff92f7e88',
- 'name': sg_name,
- 'description': 'HTTP and SSH access',
- 'security_group_rules': [],
- 'id': 'aaaa'
- }
- })
-
- neutronclient.Client.create_security_group_rule({
- 'security_group_rule': {
- 'direction': 'ingress',
- 'remote_group_id': None,
- 'remote_ip_prefix': '0.0.0.0/0',
- 'port_range_min': '22',
- 'ethertype': 'IPv4',
- 'port_range_max': '22',
- 'protocol': 'tcp',
- 'security_group_id': 'aaaa'
- }
- }).AndRaise(
- NeutronClientException(status_code=409))
- neutronclient.Client.create_security_group_rule({
- 'security_group_rule': {
- 'direction': 'ingress',
- 'remote_group_id': None,
- 'remote_ip_prefix': '0.0.0.0/0',
- 'port_range_min': '80',
- 'ethertype': 'IPv4',
- 'port_range_max': '80',
- 'protocol': 'tcp',
- 'security_group_id': 'aaaa'
- }
- }).AndRaise(
- NeutronClientException(status_code=409))
- neutronclient.Client.create_security_group_rule({
- 'security_group_rule': {
- 'direction': 'ingress',
- 'remote_group_id': 'wwww',
- 'remote_ip_prefix': None,
- 'port_range_min': None,
- 'ethertype': 'IPv4',
- 'port_range_max': None,
- 'protocol': 'tcp',
- 'security_group_id': 'aaaa'
- }
- }).AndRaise(
- NeutronClientException(status_code=409))
- neutronclient.Client.create_security_group_rule({
- 'security_group_rule': {
- 'direction': 'egress',
- 'remote_group_id': None,
- 'remote_ip_prefix': '10.0.1.0/24',
- 'port_range_min': '22',
- 'ethertype': 'IPv4',
- 'port_range_max': '22',
- 'protocol': 'tcp',
- 'security_group_id': 'aaaa'
- }
- }).AndRaise(
- NeutronClientException(status_code=409))
- neutronclient.Client.create_security_group_rule({
- 'security_group_rule': {
- 'direction': 'egress',
- 'remote_group_id': 'xxxx',
- 'remote_ip_prefix': None,
- 'port_range_min': None,
- 'ethertype': 'IPv4',
- 'port_range_max': None,
- 'protocol': None,
- 'security_group_id': 'aaaa'
- }
- }).AndRaise(
- NeutronClientException(status_code=409))
-
- # delete script
- neutronclient.Client.show_security_group('aaaa').AndReturn({
- 'security_group': {
- 'tenant_id': 'f18ca530cc05425e8bac0a5ff92f7e88',
- 'name': 'sc1',
- 'description': '',
- 'security_group_rules': [{
- 'direction': 'ingress',
- 'protocol': 'tcp',
- 'port_range_max': '22',
- 'id': 'bbbb',
- 'ethertype': 'IPv4',
- 'security_group_id': 'aaaa',
- 'remote_group_id': None,
- 'remote_ip_prefix': '0.0.0.0/0',
- 'tenant_id': 'f18ca530cc05425e8bac0a5ff92f7e88',
- 'port_range_min': '22'
- }, {
- 'direction': 'ingress',
- 'protocol': 'tcp',
- 'port_range_max': '80',
- 'id': 'cccc',
- 'ethertype': 'IPv4',
- 'security_group_id': 'aaaa',
- 'remote_group_id': None,
- 'remote_ip_prefix': '0.0.0.0/0',
- 'tenant_id': 'f18ca530cc05425e8bac0a5ff92f7e88',
- 'port_range_min': '80'
- }, {
- 'direction': 'ingress',
- 'protocol': 'tcp',
- 'port_range_max': None,
- 'id': 'dddd',
- 'ethertype': 'IPv4',
- 'security_group_id': 'aaaa',
- 'remote_group_id': 'wwww',
- 'remote_ip_prefix': None,
- 'tenant_id': 'f18ca530cc05425e8bac0a5ff92f7e88',
- 'port_range_min': None
- }, {
- 'direction': 'egress',
- 'protocol': 'tcp',
- 'port_range_max': '22',
- 'id': 'eeee',
- 'ethertype': 'IPv4',
- 'security_group_id': 'aaaa',
- 'remote_group_id': None,
- 'remote_ip_prefix': '10.0.1.0/24',
- 'tenant_id': 'f18ca530cc05425e8bac0a5ff92f7e88',
- 'port_range_min': '22'
- }, {
- 'direction': 'egress',
- 'protocol': None,
- 'port_range_max': None,
- 'id': 'ffff',
- 'ethertype': 'IPv4',
- 'security_group_id': 'aaaa',
- 'remote_group_id': None,
- 'remote_ip_prefix': None,
- 'tenant_id': 'f18ca530cc05425e8bac0a5ff92f7e88',
- 'port_range_min': None
- }],
- 'id': 'aaaa'}})
- neutronclient.Client.delete_security_group_rule('bbbb').AndRaise(
- NeutronClientException(status_code=404))
- neutronclient.Client.delete_security_group_rule('cccc').AndRaise(
- NeutronClientException(status_code=404))
- neutronclient.Client.delete_security_group_rule('dddd').AndRaise(
- NeutronClientException(status_code=404))
- neutronclient.Client.delete_security_group_rule('eeee').AndRaise(
- NeutronClientException(status_code=404))
- neutronclient.Client.delete_security_group_rule('ffff').AndRaise(
- NeutronClientException(status_code=404))
- neutronclient.Client.delete_security_group('aaaa').AndRaise(
- NeutronClientException(status_code=404))
-
- neutronclient.Client.show_security_group('aaaa').AndRaise(
- NeutronClientException(status_code=404))
-
- self.m.ReplayAll()
- stack = self.create_stack(self.test_template_neutron)
-
- sg = stack['the_sg']
- self.assertRaises(resource.UpdateReplace, sg.handle_update, {}, {}, {})
-
- self.assertResourceState(sg, 'aaaa')
-
- scheduler.TaskRunner(sg.delete)()
-
- sg.state_set(sg.CREATE, sg.COMPLETE, 'to delete again')
- sg.resource_id = 'aaaa'
- stack.delete()
-
- self.m.VerifyAll()
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import copy
-
-import mox
-
-from heat.engine import environment
-from heat.tests.v1_1 import fakes
-from heat.common import exception
-from heat.common import template_format
-from heat.engine import parser
-from heat.engine import resource
-from heat.engine import scheduler
-from heat.engine.resources import server as servers
-from heat.openstack.common import uuidutils
-from heat.tests.common import HeatTestCase
-from heat.tests import utils
-
-
-wp_template = '''
-{
- "AWSTemplateFormatVersion" : "2010-09-09",
- "Description" : "WordPress",
- "Parameters" : {
- "key_name" : {
- "Description" : "key_name",
- "Type" : "String",
- "Default" : "test"
- }
- },
- "Resources" : {
- "WebServer": {
- "Type": "OS::Nova::Server",
- "Properties": {
- "image" : "F17-x86_64-gold",
- "flavor" : "m1.large",
- "key_name" : "test",
- "user_data" : "wordpress"
- }
- }
- }
-}
-'''
-
-
-class ServersTest(HeatTestCase):
- def setUp(self):
- super(ServersTest, self).setUp()
- self.fc = fakes.FakeClient()
- utils.setup_dummy_db()
-
- def _setup_test_stack(self, stack_name):
- t = template_format.parse(wp_template)
- template = parser.Template(t)
- stack = parser.Stack(utils.dummy_context(), stack_name, template,
- environment.Environment({'key_name': 'test'}),
- stack_id=uuidutils.generate_uuid())
- return (t, stack)
-
- def _setup_test_server(self, return_server, name, image_id=None):
- stack_name = '%s_stack' % name
- (t, stack) = self._setup_test_stack(stack_name)
-
- t['Resources']['WebServer']['Properties']['image'] = \
- image_id or 'CentOS 5.2'
- t['Resources']['WebServer']['Properties']['flavor'] = \
- '256 MB Server'
- server = servers.Server('%s_name' % name,
- t['Resources']['WebServer'], stack)
-
- self.m.StubOutWithMock(server, 'nova')
- server.nova().MultipleTimes().AndReturn(self.fc)
-
- server.t = server.stack.resolve_runtime_data(server.t)
-
- # need to resolve the template functions
- #server_userdata = nova_utils.build_userdata(
- # server,
- # server.t['Properties']['user_data'])
- #server.mime_string = server_userdata
- self.m.StubOutWithMock(self.fc.servers, 'create')
- self.fc.servers.create(
- image=1, flavor=1, key_name='test',
- name=utils.PhysName(stack_name, server.name),
- security_groups=None,
- userdata=mox.IgnoreArg(), scheduler_hints=None,
- meta=None, nics=None, availability_zone=None,
- block_device_mapping=None, config_drive=None,
- disk_config=None, reservation_id=None).AndReturn(
- return_server)
-
- return server
-
- def _create_test_server(self, return_server, name):
- server = self._setup_test_server(return_server, name)
- self.m.ReplayAll()
- scheduler.TaskRunner(server.create)()
- return server
-
- def test_server_create(self):
- return_server = self.fc.servers.list()[1]
- server = self._create_test_server(return_server,
- 'test_server_create')
- # this makes sure the auto increment worked on server creation
- self.assertTrue(server.id > 0)
-
- public_ip = return_server.networks['public'][0]
- self.assertEqual(
- server.FnGetAtt('addresses')['public'][0]['addr'], public_ip)
- self.assertEqual(
- server.FnGetAtt('networks')['public'][0], public_ip)
- self.assertEqual(
- server.FnGetAtt('first_public_address'), public_ip)
-
- private_ip = return_server.networks['private'][0]
- self.assertEqual(
- server.FnGetAtt('addresses')['private'][0]['addr'], private_ip)
- self.assertEqual(
- server.FnGetAtt('networks')['private'][0], private_ip)
- self.assertEqual(
- server.FnGetAtt('first_private_address'), private_ip)
-
- self.assertEqual(return_server._info, server.FnGetAtt('show'))
- self.assertEqual('sample-server2', server.FnGetAtt('instance_name'))
- self.assertEqual('192.0.2.0', server.FnGetAtt('accessIPv4'))
- self.assertEqual('::babe:4317:0A83', server.FnGetAtt('accessIPv6'))
- self.m.VerifyAll()
-
- def test_server_create_with_image_id(self):
- return_server = self.fc.servers.list()[1]
- server = self._setup_test_server(return_server,
- 'test_server_create_image_id',
- image_id='1')
- self.m.StubOutWithMock(uuidutils, "is_uuid_like")
- uuidutils.is_uuid_like('1').AndReturn(True)
-
- self.m.ReplayAll()
- scheduler.TaskRunner(server.create)()
-
- # this makes sure the auto increment worked on server creation
- self.assertTrue(server.id > 0)
-
- public_ip = return_server.networks['public'][0]
- self.assertEqual(
- server.FnGetAtt('addresses')['public'][0]['addr'], public_ip)
- self.assertEqual(
- server.FnGetAtt('networks')['public'][0], public_ip)
- self.assertEqual(
- server.FnGetAtt('first_public_address'), public_ip)
-
- private_ip = return_server.networks['private'][0]
- self.assertEqual(
- server.FnGetAtt('addresses')['private'][0]['addr'], private_ip)
- self.assertEqual(
- server.FnGetAtt('networks')['private'][0], private_ip)
- self.assertEqual(
- server.FnGetAtt('first_private_address'), private_ip)
-
- self.m.VerifyAll()
-
- def test_server_create_image_name_err(self):
- stack_name = 'test_server_create_image_name_err_stack'
- (t, stack) = self._setup_test_stack(stack_name)
-
- # create an server with non exist image name
- t['Resources']['WebServer']['Properties']['image'] = 'Slackware'
- server = servers.Server('server_create_image_err',
- t['Resources']['WebServer'], stack)
-
- self.m.StubOutWithMock(server, 'nova')
- server.nova().MultipleTimes().AndReturn(self.fc)
- self.m.ReplayAll()
-
- self.assertRaises(exception.ImageNotFound, server.handle_create)
-
- self.m.VerifyAll()
-
- def test_server_create_duplicate_image_name_err(self):
- stack_name = 'test_server_create_image_name_err_stack'
- (t, stack) = self._setup_test_stack(stack_name)
-
- # create an server with a non unique image name
- t['Resources']['WebServer']['Properties']['image'] = 'CentOS 5.2'
- server = servers.Server('server_create_image_err',
- t['Resources']['WebServer'], stack)
-
- self.m.StubOutWithMock(server, 'nova')
- server.nova().MultipleTimes().AndReturn(self.fc)
- self.m.StubOutWithMock(self.fc.client, "get_images_detail")
- self.fc.client.get_images_detail().AndReturn((
- 200, {'images': [{'id': 1, 'name': 'CentOS 5.2'},
- {'id': 4, 'name': 'CentOS 5.2'}]}))
- self.m.ReplayAll()
-
- self.assertRaises(exception.NoUniqueImageFound, server.handle_create)
-
- self.m.VerifyAll()
-
- def test_server_create_image_id_err(self):
- stack_name = 'test_server_create_image_id_err_stack'
- (t, stack) = self._setup_test_stack(stack_name)
-
- # create a server with a nonexistent image ID
- t['Resources']['WebServer']['Properties']['image'] = '1'
- server = servers.Server('server_create_image_err',
- t['Resources']['WebServer'], stack)
-
- self.m.StubOutWithMock(server, 'nova')
- server.nova().MultipleTimes().AndReturn(self.fc)
- self.m.StubOutWithMock(uuidutils, "is_uuid_like")
- uuidutils.is_uuid_like('1').AndReturn(True)
- self.m.StubOutWithMock(self.fc.client, "get_images_1")
- self.fc.client.get_images_1().AndRaise(
- servers.clients.novaclient.exceptions.NotFound(404))
- self.m.ReplayAll()
-
- self.assertRaises(exception.ImageNotFound, server.handle_create)
-
- self.m.VerifyAll()
-
- def test_server_create_unexpected_status(self):
- return_server = self.fc.servers.list()[1]
- server = self._create_test_server(return_server,
- 'test_server_create')
- return_server.get = lambda: None
- return_server.status = 'BOGUS'
- self.assertRaises(exception.Error,
- server.check_create_complete,
- return_server)
-
- def test_server_create_error_status(self):
- return_server = self.fc.servers.list()[1]
- server = self._create_test_server(return_server,
- 'test_server_create')
- return_server.status = 'ERROR'
- return_server.fault = {
- 'message': 'NoValidHost',
- 'code': 500,
- 'created': '2013-08-14T03:12:10Z'
- }
- self.m.StubOutWithMock(return_server, 'get')
- return_server.get()
- self.m.ReplayAll()
-
- self.assertRaises(exception.Error,
- server.check_create_complete,
- return_server)
-
- self.m.VerifyAll()
-
- def test_server_validate(self):
- stack_name = 'test_server_validate_stack'
- (t, stack) = self._setup_test_stack(stack_name)
-
- # create a server with a nonexistent image ID
- t['Resources']['WebServer']['Properties']['image'] = '1'
- server = servers.Server('server_create_image_err',
- t['Resources']['WebServer'], stack)
-
- self.m.StubOutWithMock(server, 'nova')
- server.nova().MultipleTimes().AndReturn(self.fc)
-
- self.m.StubOutWithMock(uuidutils, "is_uuid_like")
- uuidutils.is_uuid_like('1').AndReturn(True)
- self.m.ReplayAll()
-
- self.assertEqual(server.validate(), None)
-
- self.m.VerifyAll()
-
- def test_server_validate_delete_policy(self):
- stack_name = 'test_server_validate_stack'
- (t, stack) = self._setup_test_stack(stack_name)
-
- # set an invalid DeletionPolicy on the server
- t['Resources']['WebServer']['DeletionPolicy'] = 'SelfDestruct'
- server = servers.Server('server_create_image_err',
- t['Resources']['WebServer'], stack)
-
- self.m.ReplayAll()
-
- ex = self.assertRaises(exception.StackValidationFailed,
- server.validate)
- self.assertEqual('Invalid DeletionPolicy SelfDestruct',
- str(ex))
-
- self.m.VerifyAll()
-
- def test_server_delete(self):
- return_server = self.fc.servers.list()[1]
- server = self._create_test_server(return_server,
- 'test_server_create_delete')
- server.resource_id = 1234
-
- # this makes sure the auto increment worked on server creation
- self.assertTrue(server.id > 0)
-
- server_get = self.fc.client.get_servers_1234()
- self.m.StubOutWithMock(self.fc.client, 'get_servers_1234')
- get = self.fc.client.get_servers_1234
- get().AndReturn(server_get)
- get().AndRaise(servers.clients.novaclient.exceptions.NotFound(404))
- mox.Replay(get)
- self.m.ReplayAll()
-
- scheduler.TaskRunner(server.delete)()
- self.assertTrue(server.resource_id is None)
- self.assertEqual(server.state, (server.DELETE, server.COMPLETE))
- self.m.VerifyAll()
-
- def test_server_delete_notfound(self):
- return_server = self.fc.servers.list()[1]
- server = self._create_test_server(return_server,
- 'test_server_create_delete')
- server.resource_id = 1234
-
- # this makes sure the auto increment worked on server creation
- self.assertTrue(server.id > 0)
-
- self.m.StubOutWithMock(self.fc.client, 'get_servers_1234')
- get = self.fc.client.get_servers_1234
- get().AndRaise(servers.clients.novaclient.exceptions.NotFound(404))
- mox.Replay(get)
-
- scheduler.TaskRunner(server.delete)()
- self.assertTrue(server.resource_id is None)
- self.assertEqual(server.state, (server.DELETE, server.COMPLETE))
- self.m.VerifyAll()
-
- server.state_set(server.CREATE, server.COMPLETE, 'to delete again')
- scheduler.TaskRunner(server.delete)()
- self.assertEqual(server.state, (server.DELETE, server.COMPLETE))
- self.m.VerifyAll()
-
- def test_server_update_metadata(self):
- return_server = self.fc.servers.list()[1]
- server = self._create_test_server(return_server,
- 'test_server_update')
-
- update_template = copy.deepcopy(server.t)
- update_template['Metadata'] = {'test': 123}
- scheduler.TaskRunner(server.update, update_template)()
- self.assertEqual(server.metadata, {'test': 123})
-
- server.t['Metadata'] = {'test': 456}
- server.metadata_update()
- self.assertEqual(server.metadata, {'test': 456})
-
- def test_server_update_server_flavor(self):
- """
- Server.handle_update supports changing the flavor, making the
- change via a resize API call against Nova.
- """
- return_server = self.fc.servers.list()[1]
- return_server.id = 1234
- server = self._create_test_server(return_server,
- 'test_server_update')
-
- update_template = copy.deepcopy(server.t)
- update_template['Properties']['flavor'] = 'm1.small'
-
- self.m.StubOutWithMock(self.fc.servers, 'get')
- self.fc.servers.get(1234).AndReturn(return_server)
-
- def activate_status(server):
- server.status = 'VERIFY_RESIZE'
- return_server.get = activate_status.__get__(return_server)
-
- self.m.StubOutWithMock(self.fc.client, 'post_servers_1234_action')
- self.fc.client.post_servers_1234_action(
- body={'resize': {'flavorRef': 2}}).AndReturn((202, None))
- self.fc.client.post_servers_1234_action(
- body={'confirmResize': None}).AndReturn((202, None))
- self.m.ReplayAll()
-
- scheduler.TaskRunner(server.update, update_template)()
- self.assertEqual(server.state, (server.UPDATE, server.COMPLETE))
- self.m.VerifyAll()
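The flow mocked above (resize call, poll until nova reports VERIFY_RESIZE, then confirm) can be sketched as a standalone helper. This is a hypothetical illustration of the polling loop, not Heat's actual `handle_update` code, and it assumes a novaclient-style `client.servers` interface:

```python
def resize_server(client, server_id, flavor_ref, wait=lambda: None):
    """Resize a server and confirm once it reaches VERIFY_RESIZE.

    `client` is assumed to expose novaclient-style servers.resize()
    and servers.get(); the returned server object must offer
    confirm_resize(). Any status other than RESIZE/VERIFY_RESIZE is
    treated as a failed resize.
    """
    client.servers.resize(server_id, flavor_ref)
    while True:
        server = client.servers.get(server_id)
        if server.status == 'VERIFY_RESIZE':
            server.confirm_resize()
            return
        if server.status != 'RESIZE':
            # e.g. ACTIVE here means the resize silently failed
            raise RuntimeError("Resizing to %r failed, status %r"
                               % (flavor_ref, server.status))
        wait()  # hook for sleeping between polls
```

The failure branch mirrors what `test_server_update_server_flavor_failed` asserts: an unexpected post-resize status raises rather than being treated as success.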
-
- def test_server_update_server_flavor_failed(self):
- """
- If the status after a resize is not VERIFY_RESIZE, it means the resize
- call failed, so we raise an explicit error.
- """
- return_server = self.fc.servers.list()[1]
- return_server.id = 1234
- server = self._create_test_server(return_server,
- 'test_server_update')
-
- update_template = copy.deepcopy(server.t)
- update_template['Properties']['flavor'] = 'm1.small'
-
- self.m.StubOutWithMock(self.fc.servers, 'get')
- self.fc.servers.get(1234).AndReturn(return_server)
-
- def activate_status(server):
- server.status = 'ACTIVE'
- return_server.get = activate_status.__get__(return_server)
-
- self.m.StubOutWithMock(self.fc.client, 'post_servers_1234_action')
- self.fc.client.post_servers_1234_action(
- body={'resize': {'flavorRef': 2}}).AndReturn((202, None))
- self.m.ReplayAll()
-
- updater = scheduler.TaskRunner(server.update, update_template)
- error = self.assertRaises(exception.ResourceFailure, updater)
- self.assertEqual(
- "Error: Resizing to 'm1.small' failed, status 'ACTIVE'",
- str(error))
- self.assertEqual(server.state, (server.UPDATE, server.FAILED))
- self.m.VerifyAll()
-
- def test_server_update_server_flavor_replace(self):
- stack_name = 'test_server_update_flavor_replace'
- (t, stack) = self._setup_test_stack(stack_name)
-
- t['Resources']['WebServer']['Properties'][
- 'flavor_update_policy'] = 'REPLACE'
- server = servers.Server('server_server_update_flavor_replace',
- t['Resources']['WebServer'], stack)
-
- update_template = copy.deepcopy(server.t)
- update_template['Properties']['flavor'] = 'm1.smigish'
- updater = scheduler.TaskRunner(server.update, update_template)
- self.assertRaises(resource.UpdateReplace, updater)
-
- def test_server_update_server_flavor_policy_update(self):
- stack_name = 'test_server_update_flavor_replace'
- (t, stack) = self._setup_test_stack(stack_name)
-
- server = servers.Server('server_server_update_flavor_replace',
- t['Resources']['WebServer'], stack)
-
- update_template = copy.deepcopy(server.t)
- # confirm that when flavor_update_policy is changed during
- # the update, the updated policy governs how the flavor
- # change is applied
- update_template['Properties']['flavor_update_policy'] = 'REPLACE'
- update_template['Properties']['flavor'] = 'm1.smigish'
- updater = scheduler.TaskRunner(server.update, update_template)
- self.assertRaises(resource.UpdateReplace, updater)
-
- def test_server_update_replace(self):
- return_server = self.fc.servers.list()[1]
- server = self._create_test_server(return_server,
- 'test_server_update')
-
- update_template = copy.deepcopy(server.t)
- update_template['Notallowed'] = {'test': 123}
- updater = scheduler.TaskRunner(server.update, update_template)
- self.assertRaises(resource.UpdateReplace, updater)
-
- def test_server_update_properties(self):
- return_server = self.fc.servers.list()[1]
- server = self._create_test_server(return_server,
- 'test_server_update')
-
- update_template = copy.deepcopy(server.t)
- update_template['Properties']['key_name'] = 'mustreplace'
- updater = scheduler.TaskRunner(server.update, update_template)
- self.assertRaises(resource.UpdateReplace, updater)
-
- def test_server_status_build(self):
- return_server = self.fc.servers.list()[0]
- server = self._setup_test_server(return_server,
- 'test_server_status_build')
- server.resource_id = 1234
-
- # Bind fake get method which Server.check_create_complete will call
- def activate_status(server):
- server.status = 'ACTIVE'
- return_server.get = activate_status.__get__(return_server)
- self.m.ReplayAll()
-
- scheduler.TaskRunner(server.create)()
- self.assertEqual(server.state, (server.CREATE, server.COMPLETE))
-
- def test_server_status_suspend_no_resource_id(self):
- return_server = self.fc.servers.list()[1]
- server = self._create_test_server(return_server,
- 'test_server_suspend')
-
- server.resource_id = None
- self.m.ReplayAll()
-
- ex = self.assertRaises(exception.ResourceFailure,
- scheduler.TaskRunner(server.suspend))
- self.assertEqual('Error: Cannot suspend test_server_suspend_name, '
- 'resource_id not set',
- str(ex))
- self.assertEqual(server.state, (server.SUSPEND, server.FAILED))
-
- self.m.VerifyAll()
-
- def test_server_status_suspend_not_found(self):
- return_server = self.fc.servers.list()[1]
- server = self._create_test_server(return_server,
- 'test_server_suspend')
-
- server.resource_id = 1234
- self.m.StubOutWithMock(self.fc.client, 'get_servers_1234')
- get = self.fc.client.get_servers_1234
- get().AndRaise(servers.clients.novaclient.exceptions.NotFound(404))
- mox.Replay(get)
- self.m.ReplayAll()
-
- ex = self.assertRaises(exception.ResourceFailure,
- scheduler.TaskRunner(server.suspend))
- self.assertEqual('NotFound: Failed to find server 1234',
- str(ex))
- self.assertEqual(server.state, (server.SUSPEND, server.FAILED))
-
- self.m.VerifyAll()
-
- def test_server_status_suspend_immediate(self):
- return_server = self.fc.servers.list()[1]
- server = self._create_test_server(return_server,
- 'test_server_suspend')
-
- server.resource_id = 1234
- self.m.ReplayAll()
-
- # Override the get_servers_1234 handler status to SUSPENDED
- d = {'server': self.fc.client.get_servers_detail()[1]['servers'][0]}
- d['server']['status'] = 'SUSPENDED'
- self.m.StubOutWithMock(self.fc.client, 'get_servers_1234')
- get = self.fc.client.get_servers_1234
- get().AndReturn((200, d))
- mox.Replay(get)
-
- scheduler.TaskRunner(server.suspend)()
- self.assertEqual(server.state, (server.SUSPEND, server.COMPLETE))
-
- self.m.VerifyAll()
-
- def test_server_status_resume_immediate(self):
- return_server = self.fc.servers.list()[1]
- server = self._create_test_server(return_server,
- 'test_server_resume')
-
- server.resource_id = 1234
- self.m.ReplayAll()
-
- # Override the get_servers_1234 handler status to ACTIVE
- d = {'server': self.fc.client.get_servers_detail()[1]['servers'][0]}
- d['server']['status'] = 'ACTIVE'
- self.m.StubOutWithMock(self.fc.client, 'get_servers_1234')
- get = self.fc.client.get_servers_1234
- get().AndReturn((200, d))
- mox.Replay(get)
- server.state_set(server.SUSPEND, server.COMPLETE)
-
- scheduler.TaskRunner(server.resume)()
- self.assertEqual(server.state, (server.RESUME, server.COMPLETE))
-
- self.m.VerifyAll()
-
- def test_server_status_suspend_wait(self):
- return_server = self.fc.servers.list()[1]
- server = self._create_test_server(return_server,
- 'test_server_suspend')
-
- server.resource_id = 1234
- self.m.ReplayAll()
-
- # Override the get_servers_1234 handler status to SUSPENDED, but
- # return the ACTIVE state first (twice, so we sleep)
- d1 = {'server': self.fc.client.get_servers_detail()[1]['servers'][0]}
- d2 = copy.deepcopy(d1)
- d1['server']['status'] = 'ACTIVE'
- d2['server']['status'] = 'SUSPENDED'
- self.m.StubOutWithMock(self.fc.client, 'get_servers_1234')
- get = self.fc.client.get_servers_1234
- get().AndReturn((200, d1))
- get().AndReturn((200, d1))
- get().AndReturn((200, d2))
- self.m.ReplayAll()
-
- scheduler.TaskRunner(server.suspend)()
- self.assertEqual(server.state, (server.SUSPEND, server.COMPLETE))
-
- self.m.VerifyAll()
-
- def test_server_status_suspend_unknown_status(self):
- return_server = self.fc.servers.list()[1]
- server = self._create_test_server(return_server,
- 'test_server_suspend')
-
- server.resource_id = 1234
- self.m.ReplayAll()
-
- # Override the get_servers_1234 handler status to an unknown
- # value, but return the ACTIVE state first (twice, so we sleep)
- d1 = {'server': self.fc.client.get_servers_detail()[1]['servers'][0]}
- d2 = copy.deepcopy(d1)
- d1['server']['status'] = 'ACTIVE'
- d2['server']['status'] = 'TRANSMOGRIFIED'
- self.m.StubOutWithMock(self.fc.client, 'get_servers_1234')
- get = self.fc.client.get_servers_1234
- get().AndReturn((200, d1))
- get().AndReturn((200, d1))
- get().AndReturn((200, d2))
- self.m.ReplayAll()
-
- ex = self.assertRaises(exception.ResourceFailure,
- scheduler.TaskRunner(server.suspend))
- self.assertEqual('Error: Suspend of server sample-server failed '
- 'with unknown status: TRANSMOGRIFIED',
- str(ex))
- self.assertEqual(server.state, (server.SUSPEND, server.FAILED))
-
- self.m.VerifyAll()
-
- def test_server_status_resume_wait(self):
- return_server = self.fc.servers.list()[1]
- server = self._create_test_server(return_server,
- 'test_server_resume')
-
- server.resource_id = 1234
- self.m.ReplayAll()
-
- # Override the get_servers_1234 handler status to ACTIVE, but
- # return the SUSPENDED state first (twice, so we sleep)
- d1 = {'server': self.fc.client.get_servers_detail()[1]['servers'][0]}
- d2 = copy.deepcopy(d1)
- d1['server']['status'] = 'SUSPENDED'
- d2['server']['status'] = 'ACTIVE'
- self.m.StubOutWithMock(self.fc.client, 'get_servers_1234')
- get = self.fc.client.get_servers_1234
- get().AndReturn((200, d1))
- get().AndReturn((200, d1))
- get().AndReturn((200, d2))
- self.m.ReplayAll()
-
- server.state_set(server.SUSPEND, server.COMPLETE)
-
- scheduler.TaskRunner(server.resume)()
- self.assertEqual(server.state, (server.RESUME, server.COMPLETE))
-
- self.m.VerifyAll()
-
- def test_server_status_resume_no_resource_id(self):
- return_server = self.fc.servers.list()[1]
- server = self._create_test_server(return_server,
- 'test_server_suspend')
-
- server.resource_id = None
- self.m.ReplayAll()
-
- server.state_set(server.SUSPEND, server.COMPLETE)
- ex = self.assertRaises(exception.ResourceFailure,
- scheduler.TaskRunner(server.resume))
- self.assertEqual('Error: Cannot resume test_server_suspend_name, '
- 'resource_id not set',
- str(ex))
- self.assertEqual(server.state, (server.RESUME, server.FAILED))
-
- self.m.VerifyAll()
-
- def test_server_status_resume_not_found(self):
- return_server = self.fc.servers.list()[1]
- server = self._create_test_server(return_server,
- 'test_server_resume')
-
- server.resource_id = 1234
- self.m.ReplayAll()
-
- # Stub get_servers_1234 to raise NotFound, simulating a server
- # that has disappeared before the resume
- self.m.StubOutWithMock(self.fc.client, 'get_servers_1234')
- get = self.fc.client.get_servers_1234
- get().AndRaise(servers.clients.novaclient.exceptions.NotFound(404))
- self.m.ReplayAll()
-
- server.state_set(server.SUSPEND, server.COMPLETE)
-
- ex = self.assertRaises(exception.ResourceFailure,
- scheduler.TaskRunner(server.resume))
- self.assertEqual('NotFound: Failed to find server 1234',
- str(ex))
- self.assertEqual(server.state, (server.RESUME, server.FAILED))
-
- self.m.VerifyAll()
-
- def test_server_status_build_spawning(self):
- self._test_server_status_not_build_active('BUILD(SPAWNING)')
-
- def test_server_status_hard_reboot(self):
- self._test_server_status_not_build_active('HARD_REBOOT')
-
- def test_server_status_password(self):
- self._test_server_status_not_build_active('PASSWORD')
-
- def test_server_status_reboot(self):
- self._test_server_status_not_build_active('REBOOT')
-
- def test_server_status_rescue(self):
- self._test_server_status_not_build_active('RESCUE')
-
- def test_server_status_resize(self):
- self._test_server_status_not_build_active('RESIZE')
-
- def test_server_status_revert_resize(self):
- self._test_server_status_not_build_active('REVERT_RESIZE')
-
- def test_server_status_shutoff(self):
- self._test_server_status_not_build_active('SHUTOFF')
-
- def test_server_status_suspended(self):
- self._test_server_status_not_build_active('SUSPENDED')
-
- def test_server_status_verify_resize(self):
- self._test_server_status_not_build_active('VERIFY_RESIZE')
-
- def _test_server_status_not_build_active(self, uncommon_status):
- return_server = self.fc.servers.list()[0]
- server = self._setup_test_server(return_server,
- 'test_server_status_build')
- server.resource_id = 1234
-
- check_iterations = [0]
-
- # Bind fake get method which Server.check_create_complete will call
- def activate_status(server):
- check_iterations[0] += 1
- if check_iterations[0] == 1:
- server.status = uncommon_status
- if check_iterations[0] > 2:
- server.status = 'ACTIVE'
- return_server.get = activate_status.__get__(return_server)
- self.m.ReplayAll()
-
- scheduler.TaskRunner(server.create)()
- self.assertEqual(server.state, (server.CREATE, server.COMPLETE))
-
- self.m.VerifyAll()
-
- def test_build_nics(self):
- self.assertEqual(None, servers.Server._build_nics([]))
- self.assertEqual(None, servers.Server._build_nics(None))
- self.assertEqual([
- {'net-id': '1234abcd'},
- {'v4-fixed-ip': '192.0.2.0'},
- {'port-id': 'aaaabbbb'}
- ], servers.Server._build_nics([
- {'uuid': '1234abcd'},
- {'fixed_ip': '192.0.2.0'},
- {'port': 'aaaabbbb'}
- ]))
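The key translation exercised here (template `uuid`/`fixed_ip`/`port` keys into novaclient's `net-id`/`v4-fixed-ip`/`port-id`) can be sketched as a standalone function. This is a minimal reconstruction from the assertions above, not the actual `Server._build_nics` source:

```python
def build_nics(networks):
    """Translate template network definitions into novaclient nic dicts.

    Returns None for an empty or missing list, matching the test above.
    """
    if not networks:
        return None

    nics = []
    for net in networks:
        nic = {}
        if net.get('uuid'):
            nic['net-id'] = net['uuid']
        if net.get('fixed_ip'):
            nic['v4-fixed-ip'] = net['fixed_ip']
        if net.get('port'):
            nic['port-id'] = net['port']
        nics.append(nic)
    return nics
```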
-
- def test_server_without_ip_address(self):
- return_server = self.fc.servers.list()[3]
- server = self._create_test_server(return_server,
- 'test_without_ip_address')
-
- self.assertEqual(server.FnGetAtt('addresses'), {'empty_net': []})
- self.assertEqual(server.FnGetAtt('networks'), {'empty_net': []})
- self.assertEqual(server.FnGetAtt('first_private_address'), '')
- self.assertEqual(server.FnGetAtt('first_public_address'), '')
-
- def test_build_block_device_mapping(self):
- self.assertEqual(
- None, servers.Server._build_block_device_mapping([]))
- self.assertEqual(
- None, servers.Server._build_block_device_mapping(None))
-
- self.assertEqual({
- 'vda': '1234:',
- 'vdb': '1234:snap',
- }, servers.Server._build_block_device_mapping([
- {'device_name': 'vda', 'volume_id': '1234'},
- {'device_name': 'vdb', 'snapshot_id': '1234'},
- ]))
-
- self.assertEqual({
- 'vdc': '1234::10',
- 'vdd': '1234:snap:0:True'
- }, servers.Server._build_block_device_mapping([
- {
- 'device_name': 'vdc',
- 'volume_id': '1234',
- 'volume_size': '10'
- },
- {
- 'device_name': 'vdd',
- 'snapshot_id': '1234',
- 'delete_on_termination': True
- }
- ]))
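The expected values above follow nova's colon-separated block device mapping string, `<id>:<'snap' or ''>:<size>:<delete_on_termination>`, with trailing empty fields dropped. A plausible reconstruction of the helper from those assertions (an assumption, not the real `Server._build_block_device_mapping` source):

```python
def build_block_device_mapping(bdm_list):
    """Build nova's 'device: id:type:size:delete' mapping strings.

    Returns None for an empty or missing list; a size of 0 is filled
    in when delete_on_termination is given without a volume_size,
    matching the test expectations above.
    """
    if not bdm_list:
        return None

    bdm_dict = {}
    for mapping in bdm_list:
        parts = []
        if mapping.get('snapshot_id'):
            parts.append(mapping['snapshot_id'])
            parts.append('snap')
        else:
            parts.append(mapping['volume_id'])
            parts.append('')

        if mapping.get('volume_size') or mapping.get('delete_on_termination'):
            parts.append(str(mapping.get('volume_size', '0')))
        if mapping.get('delete_on_termination'):
            parts.append(str(mapping['delete_on_termination']))

        bdm_dict[mapping['device_name']] = ':'.join(parts)
    return bdm_dict
```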
-
- def test_validate_conflict_block_device_mapping_props(self):
- stack_name = 'test_validate_conflict_block_device_mapping_props'
- (t, stack) = self._setup_test_stack(stack_name)
-
- bdm = [{'device_name': 'vdb', 'snapshot_id': '1234',
- 'volume_id': '1234'}]
- t['Resources']['WebServer']['Properties']['block_device_mapping'] = bdm
- server = servers.Server('server_create_image_err',
- t['Resources']['WebServer'], stack)
- self.m.StubOutWithMock(server, 'nova')
- server.nova().MultipleTimes().AndReturn(self.fc)
- self.m.ReplayAll()
-
- self.assertRaises(exception.ResourcePropertyConflict, server.validate)
- self.m.VerifyAll()
-
- def test_validate_insufficient_block_device_mapping_props(self):
- stack_name = 'test_validate_insufficient_block_device_mapping_props'
- (t, stack) = self._setup_test_stack(stack_name)
-
- bdm = [{'device_name': 'vdb', 'volume_size': '1',
- 'delete_on_termination': True}]
- t['Resources']['WebServer']['Properties']['block_device_mapping'] = bdm
- server = servers.Server('server_create_image_err',
- t['Resources']['WebServer'], stack)
- self.m.StubOutWithMock(server, 'nova')
- server.nova().MultipleTimes().AndReturn(self.fc)
- self.m.ReplayAll()
-
- ex = self.assertRaises(exception.StackValidationFailed,
- server.validate)
- msg = 'Either volume_id or snapshot_id must be specified for device' +\
- ' mapping vdb'
- self.assertEqual(msg, str(ex))
-
- self.m.VerifyAll()
-
- def test_validate_without_image_or_bootable_volume(self):
- stack_name = 'test_validate_without_image_or_bootable_volume'
- (t, stack) = self._setup_test_stack(stack_name)
-
- del t['Resources']['WebServer']['Properties']['image']
- bdm = [{'device_name': 'vdb', 'volume_id': '1234'}]
- t['Resources']['WebServer']['Properties']['block_device_mapping'] = bdm
- server = servers.Server('server_create_image_err',
- t['Resources']['WebServer'], stack)
- self.m.StubOutWithMock(server, 'nova')
- server.nova().MultipleTimes().AndReturn(self.fc)
- self.m.ReplayAll()
-
- ex = self.assertRaises(exception.StackValidationFailed,
- server.validate)
- msg = 'Neither image nor bootable volume is specified for instance %s'\
- % server.name
- self.assertEqual(msg, str(ex))
-
- self.m.VerifyAll()
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-import mox
-
-from heat.engine import environment
-from heat.tests.v1_1 import fakes
-from heat.engine.resources import instance as instances
-from heat.engine.resources import nova_utils
-from heat.common import template_format
-from heat.engine import parser
-from heat.engine import scheduler
-from heat.openstack.common import uuidutils
-from heat.tests.common import HeatTestCase
-from heat.tests import utils
-
-
-instance_template = '''
-{
- "AWSTemplateFormatVersion" : "2010-09-09",
- "Description" : "WordPress",
- "Parameters" : {
- "KeyName" : {
- "Description" : "KeyName",
- "Type" : "String",
- "Default" : "test"
- }
- },
- "Resources" : {
- "WebServer": {
- "Type": "AWS::EC2::Instance",
- "Properties": {
- "ImageId" : "CentOS 5.2",
- "InstanceType" : "256 MB Server",
- "KeyName" : "test",
- "UserData" : "wordpress"
- }
- }
- }
-}
-'''
-
-group_template = '''
-{
- "AWSTemplateFormatVersion" : "2010-09-09",
- "Description" : "WordPress",
- "Parameters" : {
- "KeyName" : {
- "Description" : "KeyName",
- "Type" : "String",
- "Default" : "test"
- }
- },
- "Resources" : {
- "Config": {
- "Type": "AWS::AutoScaling::LaunchConfiguration",
- "Properties": {
- "ImageId" : "CentOS 5.2",
- "InstanceType" : "256 MB Server",
- "KeyName" : "test",
- "UserData" : "wordpress"
- }
- },
-
- "WebServer": {
- "Type": "OS::Heat::InstanceGroup",
- "Properties": {
- "AvailabilityZones" : ["nova"],
- "LaunchConfigurationName": { "Ref": "Config" },
- "Size" : "1"
- }
- }
- }
-}
-'''
-
-autoscaling_template = '''
-{
- "AWSTemplateFormatVersion" : "2010-09-09",
- "Description" : "WordPress",
- "Parameters" : {
- "KeyName" : {
- "Description" : "KeyName",
- "Type" : "String",
- "Default" : "test"
- }
- },
- "Resources" : {
- "Config": {
- "Type": "AWS::AutoScaling::LaunchConfiguration",
- "Properties": {
- "ImageId" : "CentOS 5.2",
- "InstanceType" : "256 MB Server",
- "KeyName" : "test",
- "UserData" : "wordpress"
- }
- },
-
- "WebServer": {
- "Type": "AWS::AutoScaling::AutoScalingGroup",
- "Properties": {
- "AvailabilityZones" : ["nova"],
- "LaunchConfigurationName": { "Ref": "Config" },
- "MinSize" : "1",
- "MaxSize" : "2",
- "Tags" : [{"Key" : "foo", "Value" : "42"}],
- }
- }
- }
-}
-'''
-
-
-class ServerTagsTest(HeatTestCase):
- def setUp(self):
- super(ServerTagsTest, self).setUp()
- self.fc = fakes.FakeClient()
- utils.setup_dummy_db()
-
- def _setup_test_instance(self, intags=None, nova_tags=None):
- stack_name = 'tag_test'
- t = template_format.parse(instance_template)
- template = parser.Template(t)
- stack = parser.Stack(utils.dummy_context(), stack_name, template,
- environment.Environment({'KeyName': 'test'}),
- stack_id=uuidutils.generate_uuid())
-
- t['Resources']['WebServer']['Properties']['Tags'] = intags
- instance = instances.Instance(stack_name,
- t['Resources']['WebServer'], stack)
-
- self.m.StubOutWithMock(instance, 'nova')
- instance.nova().MultipleTimes().AndReturn(self.fc)
-
- instance.t = instance.stack.resolve_runtime_data(instance.t)
-
- # need to resolve the template functions
- server_userdata = nova_utils.build_userdata(
- instance,
- instance.t['Properties']['UserData'])
- instance.mime_string = server_userdata
- self.m.StubOutWithMock(self.fc.servers, 'create')
- self.fc.servers.create(
- image=1, flavor=1, key_name='test',
- name=utils.PhysName(stack_name, instance.name),
- security_groups=None,
- userdata=server_userdata, scheduler_hints=None,
- meta=nova_tags, nics=None, availability_zone=None).AndReturn(
- self.fc.servers.list()[1])
-
- return instance
-
- def test_instance_tags(self):
- tags = [{'Key': 'Food', 'Value': 'yum'}]
- metadata = dict((tm['Key'], tm['Value']) for tm in tags)
-
- instance = self._setup_test_instance(intags=tags, nova_tags=metadata)
- self.m.ReplayAll()
- scheduler.TaskRunner(instance.create)()
- # we are just using mox to verify that the tags get through to the
- # nova call.
- self.m.VerifyAll()
-
- def _setup_test_group(self, intags=None, nova_tags=None):
- stack_name = 'tag_test'
- t = template_format.parse(group_template)
- template = parser.Template(t)
- stack = parser.Stack(utils.dummy_context(), stack_name, template,
- environment.Environment({'KeyName': 'test'}),
- stack_id=uuidutils.generate_uuid())
-
- t['Resources']['WebServer']['Properties']['Tags'] = intags
-
- # create the launch configuration
- conf = stack.resources['Config']
- self.assertEqual(None, conf.validate())
- scheduler.TaskRunner(conf.create)()
- self.assertEqual((conf.CREATE, conf.COMPLETE), conf.state)
-
- group = stack.resources['WebServer']
-
- nova_tags['metering.groupname'] = utils.PhysName(stack.name,
- group.name)
-
- self.m.StubOutWithMock(instances.Instance, 'nova')
- instances.Instance.nova().MultipleTimes().AndReturn(self.fc)
-
- group.t = group.stack.resolve_runtime_data(group.t)
-
- # need to resolve the template functions
- self.m.StubOutWithMock(self.fc.servers, 'create')
- self.fc.servers.create(
- image=1, flavor=1, key_name='test',
- name=mox.IgnoreArg(),
- security_groups=None,
- userdata=mox.IgnoreArg(), scheduler_hints=None,
- meta=nova_tags, nics=None, availability_zone=None).AndReturn(
- self.fc.servers.list()[1])
-
- return group
-
- def test_group_tags(self):
- tags = [{'Key': 'Food', 'Value': 'yum'}]
- metadata = dict((tm['Key'], tm['Value']) for tm in tags)
- group = self._setup_test_group(intags=tags, nova_tags=metadata)
- self.m.ReplayAll()
- scheduler.TaskRunner(group.create)()
- # we are just using mox to verify that the tags get through to the
- # nova call.
- self.m.VerifyAll()
-
- def _setup_test_group_autoscaling(self, intags=None, nova_tags=None):
- stack_name = 'tag_as_name'
- t = template_format.parse(autoscaling_template)
- template = parser.Template(t)
- stack = parser.Stack(utils.dummy_context(), stack_name, template,
- environment.Environment({'KeyName': 'test'}),
- stack_id=uuidutils.generate_uuid())
- t['Resources']['WebServer']['Properties']['Tags'] += intags
-
- # create the launch configuration
- conf = stack.resources['Config']
- self.assertEqual(None, conf.validate())
- scheduler.TaskRunner(conf.create)()
- self.assertEqual((conf.CREATE, conf.COMPLETE), conf.state)
- group = stack.resources['WebServer']
-
- group_refid = utils.PhysName(stack.name, group.name)
-
- nova_tags['metering.groupname'] = group_refid
- nova_tags['AutoScalingGroupName'] = group_refid
-
- self.m.StubOutWithMock(group, '_cooldown_timestamp')
- group._cooldown_timestamp(mox.IgnoreArg()).AndReturn(None)
-
- self.m.StubOutWithMock(instances.Instance, 'nova')
- instances.Instance.nova().MultipleTimes().AndReturn(self.fc)
-
- group.t = group.stack.resolve_runtime_data(group.t)
-
- # need to resolve the template functions
- self.m.StubOutWithMock(self.fc.servers, 'create')
- self.fc.servers.create(
- image=1, flavor=1, key_name='test',
- name=mox.IgnoreArg(),
- security_groups=None,
- userdata=mox.IgnoreArg(), scheduler_hints=None,
- meta=nova_tags, nics=None, availability_zone=None).AndReturn(
- self.fc.servers.list()[1])
-
- return group
-
- def test_as_group_tags(self):
- tags = [{'Key': 'Food', 'Value': 'yum'}, {'Key': 'foo', 'Value': '42'}]
- metadata = dict((tm['Key'], tm['Value']) for tm in tags)
- group = self._setup_test_group_autoscaling(intags=[tags[0]],
- nova_tags=metadata)
- self.m.ReplayAll()
- scheduler.TaskRunner(group.create)()
- # we are just using mox to verify that the tags get through to the
- # nova call.
- self.m.VerifyAll()
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-
- import uuid
-
- import testtools
-
- from heat.common import short_id
-
-
-class ShortIdTest(testtools.TestCase):
-
- def test_byte_string_8(self):
- self.assertEqual(short_id._to_byte_string(0xab, 8), '\xab')
- self.assertEqual(short_id._to_byte_string(0x05, 8), '\x05')
-
- def test_byte_string_16(self):
- self.assertEqual(short_id._to_byte_string(0xabcd, 16), '\xab\xcd')
- self.assertEqual(short_id._to_byte_string(0xabc, 16), '\x0a\xbc')
-
- def test_byte_string_12(self):
- self.assertEqual(short_id._to_byte_string(0xabc, 12), '\xab\xc0')
- self.assertEqual(short_id._to_byte_string(0x0ab, 12), '\x0a\xb0')
-
- def test_byte_string_60(self):
- val = 0x111111111111111
- byte_string = short_id._to_byte_string(val, 60)
- self.assertEqual(byte_string, '\x11\x11\x11\x11\x11\x11\x11\x10')
-
- def test_get_id_string(self):
- id = short_id.get_id('11111111-1111-4111-bfff-ffffffffffff')
- self.assertEqual(id, 'ceirceirceir')
-
- def test_get_id_uuid_1(self):
- source = uuid.UUID('11111111-1111-4111-bfff-ffffffffffff')
- self.assertEqual(source.time, 0x111111111111111)
- self.assertEqual(short_id.get_id(source), 'ceirceirceir')
-
- def test_get_id_uuid_f(self):
- source = uuid.UUID('ffffffff-ffff-4fff-8000-000000000000')
- self.assertEqual(short_id.get_id(source), '777777777777')
-
- def test_get_id_uuid_0(self):
- source = uuid.UUID('00000000-0000-4000-bfff-ffffffffffff')
- self.assertEqual(short_id.get_id(source), 'aaaaaaaaaaaa')
-
- def test_get_id_uuid_endianness(self):
- source = uuid.UUID('ffffffff-00ff-4000-aaaa-aaaaaaaaaaaa')
- self.assertEqual(short_id.get_id(source), 'aaaa77777777')
-
- def test_get_id_uuid1(self):
- source = uuid.uuid1()
- self.assertRaises(ValueError, short_id.get_id, source)
-
- def test_generate_ids(self):
- allowed_chars = 'abcdefghijklmnopqrstuvwxyz234567'
- ids = [short_id.generate_id() for i in range(25)]
-
- for id in ids:
- self.assertEqual(len(id), 12)
- self.assertFalse(id.translate(None, allowed_chars))
- self.assertEqual(ids.count(id), 1)
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import datetime
-
-from oslo.config import cfg
-
-from heat.tests import generic_resource
-from heat.tests import fakes
-from heat.tests.common import HeatTestCase
-from heat.tests import utils
-
-from heat.common import exception
-from heat.common import template_format
-
-from heat.engine import parser
-from heat.engine import resource
-from heat.engine import signal_responder as sr
-
-
-test_template_signal = '''
-{
- "AWSTemplateFormatVersion" : "2010-09-09",
- "Description" : "Just a test.",
- "Parameters" : {},
- "Resources" : {
- "signal_handler" : {"Type" : "SignalResourceType"},
- "resource_X" : {"Type" : "GenericResourceType"}
- },
- "Outputs": {
- "signed_url": {"Fn::GetAtt": ["signal_handler", "AlarmUrl"]}
- }
-}
-'''
-
-
-class SignalTest(HeatTestCase):
-
- def setUp(self):
- super(SignalTest, self).setUp()
- utils.setup_dummy_db()
-
- resource._register_class('SignalResourceType',
- generic_resource.SignalResource)
- resource._register_class('GenericResourceType',
- generic_resource.GenericResource)
-
- cfg.CONF.set_default('heat_waitcondition_server_url',
- 'http://server.test:8000/v1/waitcondition')
-
- self.stack_id = 'STACKABCD1234'
- self.fc = fakes.FakeKeystoneClient()
-
- def tearDown(self):
- super(SignalTest, self).tearDown()
- utils.reset_dummy_db()
-
- # Note: tests that create a stack should be decorated with
- # @stack_delete_after to ensure the stack is properly cleaned up.
- def create_stack(self, stack_name='test_stack', stub=True):
- temp = template_format.parse(test_template_signal)
- template = parser.Template(temp)
- ctx = utils.dummy_context()
- ctx.tenant_id = 'test_tenant'
- stack = parser.Stack(ctx, stack_name, template,
- disable_rollback=True)
-
- # Stub out the stack ID so we have a known value
- with utils.UUIDStub(self.stack_id):
- stack.store()
-
- if stub:
- self.m.StubOutWithMock(sr.SignalResponder, 'keystone')
- sr.SignalResponder.keystone().MultipleTimes().AndReturn(
- self.fc)
-
- self.m.ReplayAll()
-
- return stack
-
- @utils.stack_delete_after
- def test_FnGetAtt_Alarm_Url(self):
- self.stack = self.create_stack()
-
- self.m.ReplayAll()
- self.stack.create()
-
- rsrc = self.stack.resources['signal_handler']
- created_time = datetime.datetime(2012, 11, 29, 13, 49, 37)
- rsrc.created_time = created_time
- self.assertEqual(rsrc.state, (rsrc.CREATE, rsrc.COMPLETE))
-
- expected_url = "".join([
- 'http://server.test:8000/v1/signal/',
- 'arn%3Aopenstack%3Aheat%3A%3Atest_tenant%3Astacks%2F',
- 'test_stack%2FSTACKABCD1234%2Fresources%2F',
- 'signal_handler?',
- 'Timestamp=2012-11-29T13%3A49%3A37Z&',
- 'SignatureMethod=HmacSHA256&',
- 'AWSAccessKeyId=4567&',
- 'SignatureVersion=2&',
- 'Signature=',
- 'VW4NyvRO4WhQdsQ4rxl5JMUr0AlefHN6OLsRz9oZyls%3D'])
-
- self.assertEqual(expected_url, rsrc.FnGetAtt('AlarmUrl'))
- self.m.VerifyAll()
-
- @utils.stack_delete_after
- def test_FnGetAtt_Alarm_Url_is_cached(self):
- self.stack = self.create_stack()
-
- self.m.ReplayAll()
- self.stack.create()
-
- rsrc = self.stack.resources['signal_handler']
- self.assertEqual(rsrc.state, (rsrc.CREATE, rsrc.COMPLETE))
-
- first_url = rsrc.FnGetAtt('AlarmUrl')
- second_url = rsrc.FnGetAtt('AlarmUrl')
- self.assertEqual(first_url, second_url)
- self.m.VerifyAll()
-
- @utils.stack_delete_after
- def test_FnGetAtt_delete_not_found(self):
- self.stack = self.create_stack()
-
- self.m.ReplayAll()
- self.stack.create()
-
- rsrc = self.stack.resources['signal_handler']
- self.assertEqual(rsrc.state, (rsrc.CREATE, rsrc.COMPLETE))
-
- rsrc.delete()
- rsrc.resource_id = 'not-none'
- rsrc.delete()
-
- self.m.VerifyAll()
-
- @utils.stack_delete_after
- def test_signal(self):
- test_d = {'Data': 'foo', 'Reason': 'bar',
- 'Status': 'SUCCESS', 'UniqueId': '123'}
-
- self.stack = self.create_stack()
-
- # to confirm we get a call to handle_signal
- self.m.StubOutWithMock(generic_resource.SignalResource,
- 'handle_signal')
- generic_resource.SignalResource.handle_signal(test_d).AndReturn(None)
-
- self.m.ReplayAll()
- self.stack.create()
-
- rsrc = self.stack.resources['signal_handler']
- self.assertEqual(rsrc.state, (rsrc.CREATE, rsrc.COMPLETE))
- self.assertTrue(rsrc.requires_deferred_auth)
-
- rsrc.signal(details=test_d)
-
- self.m.VerifyAll()
-
- @utils.stack_delete_after
- def test_signal_wrong_resource(self):
- # assert that we get the correct exception when calling a
- # resource.signal() that does not have a handle_signal()
- self.stack = self.create_stack()
-
- self.m.ReplayAll()
- self.stack.create()
-
- rsrc = self.stack.resources['resource_X']
- self.assertEqual(rsrc.state, (rsrc.CREATE, rsrc.COMPLETE))
-
- err_metadata = {'Data': 'foo', 'Status': 'SUCCESS', 'UniqueId': '123'}
- self.assertRaises(exception.ResourceFailure, rsrc.signal,
- details=err_metadata)
-
- self.m.VerifyAll()
-
- @utils.stack_delete_after
- def test_signal_reception_wrong_state(self):
- # assert that we get the correct exception when calling
- # resource.signal() on a resource undergoing a destructive action.
- self.stack = self.create_stack()
-
- self.m.ReplayAll()
- self.stack.create()
-
- rsrc = self.stack.resources['signal_handler']
- self.assertEqual(rsrc.state, (rsrc.CREATE, rsrc.COMPLETE))
- # manually override the action to DELETE
- rsrc.action = rsrc.DELETE
-
- err_metadata = {'Data': 'foo', 'Status': 'SUCCESS', 'UniqueId': '123'}
- self.assertRaises(exception.ResourceFailure, rsrc.signal,
- details=err_metadata)
-
- self.m.VerifyAll()
-
- @utils.stack_delete_after
- def test_signal_reception_failed_call(self):
- # assert that we get the correct exception from resource.signal()
- # when resource.handle_signal() raises an exception.
- self.stack = self.create_stack()
-
- test_d = {'Data': 'foo', 'Reason': 'bar',
- 'Status': 'SUCCESS', 'UniqueId': '123'}
-
- # to confirm we get a call to handle_signal
- self.m.StubOutWithMock(generic_resource.SignalResource,
- 'handle_signal')
- generic_resource.SignalResource.handle_signal(test_d).AndRaise(
- ValueError)
-
- self.m.ReplayAll()
- self.stack.create()
-
- rsrc = self.stack.resources['signal_handler']
- self.assertEqual(rsrc.state, (rsrc.CREATE, rsrc.COMPLETE))
-
- self.assertRaises(exception.ResourceFailure,
- rsrc.signal, details=test_d)
-
- self.m.VerifyAll()
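The `expected_url` in `test_FnGetAtt_Alarm_Url` above is an EC2-style Signature Version 2 pre-signed URL: sorted, URL-encoded query parameters are joined into a canonical string, HMAC-SHA256 signed, and the base64 signature is appended. A hedged sketch of that scheme (illustrative endpoint, path, and parameter names; not Heat's actual `SignalResponder` code, and the secret key behind the test's fixed signature is part of the keystone fake):

```python
import base64
import hashlib
import hmac
import urllib.parse


def v2_signed_url(endpoint, path, params, secret_key):
    """Build an EC2-style Signature Version 2 pre-signed URL (sketch)."""
    host = urllib.parse.urlparse(endpoint).netloc
    # Canonical query string: parameters sorted by name, strictly URL-encoded.
    query = '&'.join(
        '%s=%s' % (k, urllib.parse.quote(str(params[k]), safe=''))
        for k in sorted(params))
    # String to sign: method, host, path, and canonical query, newline-joined.
    string_to_sign = '\n'.join(['GET', host, path, query])
    digest = hmac.new(secret_key.encode('utf-8'),
                      string_to_sign.encode('utf-8'),
                      hashlib.sha256).digest()
    signature = urllib.parse.quote(base64.b64encode(digest), safe='')
    return '%s%s?%s&Signature=%s' % (endpoint, path, query, signature)
```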
+++ /dev/null
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import mox
-
-from heat.db.sqlalchemy import api as db_api
-from heat.engine import environment
-from heat.tests.v1_1 import fakes
-from heat.engine.resource import Resource
-from heat.common import exception
-from heat.common import template_format
-from heat.engine.resources import instance as instances
-from heat.engine import parser
-from heat.engine import scheduler
-from heat.openstack.common import uuidutils
-from heat.tests.common import HeatTestCase
-from heat.tests import utils
-
-
-from heat.engine.clients import novaclient
-
-wp_template = '''
-{
- "AWSTemplateFormatVersion" : "2010-09-09",
- "Description" : "WordPress",
- "Parameters" : {
- "KeyName" : {
- "Description" : "KeyName",
- "Type" : "String",
- "Default" : "test"
- }
- },
- "Resources" : {
- "WebServer": {
- "Type": "AWS::EC2::Instance",
- "Properties": {
- "ImageId" : "F17-x86_64-gold",
- "InstanceType" : "m1.large",
- "KeyName" : "test",
- "UserData" : "wordpress"
- }
- }
- }
-}
-'''
-
-UUIDs = (UUID1, UUID2) = sorted([uuidutils.generate_uuid() for x in range(2)])
-
-
-class MyResource(Resource):
- properties_schema = {
- 'ServerName': {'Type': 'String', 'Required': True},
- 'Flavor': {'Type': 'String', 'Required': True},
- 'ImageName': {'Type': 'String', 'Required': True},
- 'UserData': {'Type': 'String'},
- 'PublicKey': {'Type': 'String'}
- }
-
- @property
- def my_secret(self):
- return db_api.resource_data_get(self, 'my_secret')
-
- @my_secret.setter
- def my_secret(self, my_secret):
- db_api.resource_data_set(self, 'my_secret', my_secret, True)
-
-
-class SqlAlchemyTest(HeatTestCase):
- def setUp(self):
- super(SqlAlchemyTest, self).setUp()
- self.fc = fakes.FakeClient()
- utils.setup_dummy_db()
- utils.reset_dummy_db()
- self.ctx = utils.dummy_context()
-
- def tearDown(self):
- super(SqlAlchemyTest, self).tearDown()
-
- def _setup_test_stack(self, stack_name, stack_id=None):
- t = template_format.parse(wp_template)
- template = parser.Template(t)
- stack_id = stack_id or uuidutils.generate_uuid()
- stack = parser.Stack(self.ctx, stack_name, template,
- environment.Environment({'KeyName': 'test'}))
- with utils.UUIDStub(stack_id):
- stack.store()
- return (t, stack)
-
- def _mock_create(self, mocks):
- fc = fakes.FakeClient()
- mocks.StubOutWithMock(instances.Instance, 'nova')
- instances.Instance.nova().MultipleTimes().AndReturn(fc)
-
- mocks.StubOutWithMock(fc.servers, 'create')
- fc.servers.create(image=744, flavor=3, key_name='test',
- name=mox.IgnoreArg(),
- security_groups=None,
- userdata=mox.IgnoreArg(), scheduler_hints=None,
- meta=None, nics=None,
- availability_zone=None).MultipleTimes().AndReturn(
- fc.servers.list()[-1])
- return fc
-
- def _mock_delete(self, mocks):
- fc = fakes.FakeClient()
- mocks.StubOutWithMock(instances.Instance, 'nova')
- instances.Instance.nova().MultipleTimes().AndReturn(fc)
-
- mocks.StubOutWithMock(fc.client, 'get_servers_9999')
- get = fc.client.get_servers_9999
- get().MultipleTimes().AndRaise(novaclient.exceptions.NotFound(404))
-
- def test_encryption(self):
- stack_name = 'test_encryption'
- (t, stack) = self._setup_test_stack(stack_name)
- cs = MyResource('cs_encryption',
- t['Resources']['WebServer'],
- stack)
-
- # This gives the fake cloud server an id and created_time attribute
- cs._store_or_update(cs.CREATE, cs.IN_PROGRESS, 'test_store')
-
- cs.my_secret = 'fake secret'
- rs = db_api.resource_get_by_name_and_stack(None,
- 'cs_encryption',
- stack.id)
- encrypted_key = rs.data[0]['value']
- self.assertNotEqual(encrypted_key, "fake secret")
- decrypted_key = cs.my_secret
- self.assertEqual(decrypted_key, "fake secret")
- scheduler.TaskRunner(cs.destroy)()
-
- def test_resource_data_delete(self):
- stack = self._setup_test_stack('stack', UUID1)[1]
- self._mock_create(self.m)
- self.m.ReplayAll()
- stack.create()
- rsrc = stack.resources['WebServer']
- db_api.resource_data_set(rsrc, 'test', 'test_data')
- self.assertEqual('test_data', db_api.resource_data_get(rsrc, 'test'))
- db_api.resource_data_delete(rsrc, 'test')
- self.assertRaises(exception.NotFound,
- db_api.resource_data_get, rsrc, 'test')
-
- def test_stack_get_by_name(self):
- stack = self._setup_test_stack('stack', UUID1)[1]
-
- st = db_api.stack_get_by_name(self.ctx, 'stack')
- self.assertEqual(UUID1, st.id)
-
- stack.delete()
-
- st = db_api.stack_get_by_name(self.ctx, 'stack')
- self.assertIsNone(st)
-
- def test_stack_get(self):
- stack = self._setup_test_stack('stack', UUID1)[1]
-
- st = db_api.stack_get(self.ctx, UUID1, show_deleted=False)
- self.assertEqual(UUID1, st.id)
-
- stack.delete()
- st = db_api.stack_get(self.ctx, UUID1, show_deleted=False)
- self.assertIsNone(st)
-
- st = db_api.stack_get(self.ctx, UUID1, show_deleted=True)
- self.assertEqual(UUID1, st.id)
-
- def test_stack_get_all(self):
- stacks = [self._setup_test_stack('stack', x)[1] for x in UUIDs]
-
- st_db = db_api.stack_get_all(self.ctx)
- self.assertEqual(2, len(st_db))
-
- stacks[0].delete()
- st_db = db_api.stack_get_all(self.ctx)
- self.assertEqual(1, len(st_db))
-
- stacks[1].delete()
- st_db = db_api.stack_get_all(self.ctx)
- self.assertEqual(0, len(st_db))
-
- def test_stack_get_all_by_tenant(self):
- stacks = [self._setup_test_stack('stack', x)[1] for x in UUIDs]
-
- st_db = db_api.stack_get_all_by_tenant(self.ctx)
- self.assertEqual(2, len(st_db))
-
- stacks[0].delete()
- st_db = db_api.stack_get_all_by_tenant(self.ctx)
- self.assertEqual(1, len(st_db))
-
- stacks[1].delete()
- st_db = db_api.stack_get_all_by_tenant(self.ctx)
- self.assertEqual(0, len(st_db))
-
- def test_stack_count_all_by_tenant(self):
- stacks = [self._setup_test_stack('stack', x)[1] for x in UUIDs]
-
- st_db = db_api.stack_count_all_by_tenant(self.ctx)
- self.assertEqual(2, st_db)
-
- stacks[0].delete()
- st_db = db_api.stack_count_all_by_tenant(self.ctx)
- self.assertEqual(1, st_db)
-
- stacks[1].delete()
- st_db = db_api.stack_count_all_by_tenant(self.ctx)
- self.assertEqual(0, st_db)
-
- def test_event_get_all_by_stack(self):
- stack = self._setup_test_stack('stack', UUID1)[1]
-
- self._mock_create(self.m)
- self.m.ReplayAll()
- stack.create()
- self.m.UnsetStubs()
-
- events = db_api.event_get_all_by_stack(self.ctx, UUID1)
- self.assertEqual(2, len(events))
-
- self._mock_delete(self.m)
- self.m.ReplayAll()
- stack.delete()
-
- events = db_api.event_get_all_by_stack(self.ctx, UUID1)
- self.assertEqual(4, len(events))
-
- self.m.VerifyAll()
-
- def test_event_get_all_by_tenant(self):
- stacks = [self._setup_test_stack('stack', x)[1] for x in UUIDs]
-
- self._mock_create(self.m)
- self.m.ReplayAll()
- [s.create() for s in stacks]
- self.m.UnsetStubs()
-
- events = db_api.event_get_all_by_tenant(self.ctx)
- self.assertEqual(4, len(events))
-
- self._mock_delete(self.m)
- self.m.ReplayAll()
- [s.delete() for s in stacks]
-
- events = db_api.event_get_all_by_tenant(self.ctx)
- self.assertEqual(0, len(events))
-
- self.m.VerifyAll()
-
- def test_event_get_all(self):
- stacks = [self._setup_test_stack('stack', x)[1] for x in UUIDs]
-
- self._mock_create(self.m)
- self.m.ReplayAll()
- [s.create() for s in stacks]
- self.m.UnsetStubs()
-
- events = db_api.event_get_all(self.ctx)
- self.assertEqual(4, len(events))
-
- self._mock_delete(self.m)
- self.m.ReplayAll()
- stacks[0].delete()
-
- events = db_api.event_get_all(self.ctx)
- self.assertEqual(2, len(events))
-
- self.m.VerifyAll()
-
- def test_user_creds_password(self):
- self.ctx.trust_id = None
- db_creds = db_api.user_creds_create(self.ctx)
- load_creds = db_api.user_creds_get(db_creds.id)
-
- self.assertEqual(load_creds.get('username'), 'test_username')
- self.assertEqual(load_creds.get('password'), 'password')
- self.assertEqual(load_creds.get('tenant'), 'test_tenant')
- self.assertEqual(load_creds.get('tenant_id'), 'test_tenant_id')
- self.assertIsNotNone(load_creds.get('created_at'))
- self.assertIsNone(load_creds.get('updated_at'))
- self.assertEqual(load_creds.get('auth_url'),
- 'http://server.test:5000/v2.0')
- self.assertIsNone(load_creds.get('trust_id'))
- self.assertIsNone(load_creds.get('trustor_user_id'))
-
- def test_user_creds_trust(self):
- self.ctx.username = None
- self.ctx.password = None
- self.ctx.trust_id = 'atrust123'
- self.ctx.trustor_user_id = 'atrustor123'
- db_creds = db_api.user_creds_create(self.ctx)
- load_creds = db_api.user_creds_get(db_creds.id)
-
- self.assertIsNone(load_creds.get('username'))
- self.assertIsNone(load_creds.get('password'))
- self.assertIsNone(load_creds.get('tenant'))
- self.assertIsNone(load_creds.get('tenant_id'))
- self.assertIsNotNone(load_creds.get('created_at'))
- self.assertIsNone(load_creds.get('updated_at'))
- self.assertIsNone(load_creds.get('auth_url'))
- self.assertEqual(load_creds.get('trust_id'), 'atrust123')
- self.assertEqual(load_creds.get('trustor_user_id'), 'atrustor123')
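The `stack_get` tests above rely on soft-delete semantics: `stack.delete()` flags the row rather than removing it, so a lookup with `show_deleted=True` can still retrieve it. A minimal in-memory sketch of that pattern (illustrative names, not Heat's SQLAlchemy DB API):

```python
class StackStore:
    """In-memory store demonstrating soft-delete lookups."""

    def __init__(self):
        self._rows = {}  # stack id -> row dict

    def create(self, stack_id):
        self._rows[stack_id] = {'id': stack_id, 'deleted': False}

    def delete(self, stack_id):
        # Soft delete: keep the row, just mark it deleted.
        self._rows[stack_id]['deleted'] = True

    def get(self, stack_id, show_deleted=False):
        # Deleted rows are hidden unless the caller opts in.
        row = self._rows.get(stack_id)
        if row is None or (row['deleted'] and not show_deleted):
            return None
        return row
```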
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-
-import mox
-
-from heat.common import template_format
-from heat.common import exception
-from heat.engine import environment
-from heat.engine import parser
-from heat.engine import resource
-from heat.engine import scheduler
-from heat.engine import stack_resource
-from heat.engine import template
-from heat.openstack.common import uuidutils
-from heat.tests.common import HeatTestCase
-from heat.tests import generic_resource as generic_rsrc
-from heat.tests import utils
-
-
-ws_res_snippet = {"Type": "some_magic_type",
- "metadata": {
- "key": "value",
- "some": "more stuff"}}
-
-param_template = '''
-{
- "Parameters" : {
- "KeyName" : {
- "Description" : "KeyName",
- "Type" : "String",
- "Default" : "test"
- }
- },
- "Resources" : {
- "WebServer": {
- "Type": "GenericResource",
- "Properties": {}
- }
- }
-}
-'''
-
-
-simple_template = '''
-{
- "Parameters" : {},
- "Resources" : {
- "WebServer": {
- "Type": "GenericResource",
- "Properties": {}
- }
- }
-}
-'''
-
-
-class MyStackResource(stack_resource.StackResource,
- generic_rsrc.GenericResource):
- def physical_resource_name(self):
- return "cb2f2b28-a663-4683-802c-4b40c916e1ff"
-
- def set_template(self, nested_template, params):
- self.nested_template = nested_template
- self.nested_params = params
-
- def handle_create(self):
- return self.create_with_template(self.nested_template,
- self.nested_params)
-
- def handle_delete(self):
- self.delete_nested()
-
-
-class StackResourceTest(HeatTestCase):
-
- def setUp(self):
- super(StackResourceTest, self).setUp()
- utils.setup_dummy_db()
- resource._register_class('some_magic_type',
- MyStackResource)
- resource._register_class('GenericResource',
- generic_rsrc.GenericResource)
- t = parser.Template({template.RESOURCES:
- {"provider_resource": ws_res_snippet}})
- self.parent_stack = parser.Stack(utils.dummy_context(), 'test_stack',
- t, stack_id=uuidutils.generate_uuid())
- self.parent_resource = MyStackResource('test',
- ws_res_snippet,
- self.parent_stack)
- self.templ = template_format.parse(param_template)
- self.simple_template = template_format.parse(simple_template)
-
- @utils.stack_delete_after
- def test_create_with_template_ok(self):
- self.parent_resource.create_with_template(self.templ,
- {"KeyName": "key"})
- self.stack = self.parent_resource.nested()
-
- self.assertEqual(self.parent_resource, self.stack.parent_resource)
- self.assertEqual("cb2f2b28-a663-4683-802c-4b40c916e1ff",
- self.stack.name)
- self.assertEqual(self.templ, self.stack.t.t)
- self.assertEqual(self.stack.id, self.parent_resource.resource_id)
-
- @utils.stack_delete_after
- def test_create_with_template_validates(self):
- """
- Creating a stack with a template validates the created stack, so that
- an invalid template will cause an error to be raised.
- """
- # Make a parameter key with the same name as the resource to cause a
- # simple validation error
- template = self.simple_template.copy()
- template['Parameters']['WebServer'] = {'Type': 'String'}
- self.assertRaises(
- exception.StackValidationFailed,
- self.parent_resource.create_with_template,
- template, {'WebServer': 'foo'})
-
- @utils.stack_delete_after
- def test_update_with_template_validates(self):
- """Updating a stack with a template validates the created stack."""
- create_result = self.parent_resource.create_with_template(
- self.simple_template, {})
- while not create_result.step():
- pass
-
- template = self.simple_template.copy()
- template['Parameters']['WebServer'] = {'Type': 'String'}
- self.assertRaises(
- exception.StackValidationFailed,
- self.parent_resource.update_with_template,
- template, {'WebServer': 'foo'})
-
- @utils.stack_delete_after
- def test_update_with_template_ok(self):
- """
- The update_with_template method updates the nested stack with the
- given template and user parameters.
- """
- create_result = self.parent_resource.create_with_template(
- self.simple_template, {})
- while not create_result.step():
- pass
- self.stack = self.parent_resource.nested()
-
- new_templ = self.simple_template.copy()
- inst_snippet = new_templ["Resources"]["WebServer"].copy()
- new_templ["Resources"]["WebServer2"] = inst_snippet
- updater = self.parent_resource.update_with_template(
- new_templ, {})
- updater.run_to_completion()
- self.assertEqual(True,
- self.parent_resource.check_update_complete(updater))
- self.assertEqual(self.stack.state, ('UPDATE', 'COMPLETE'))
- self.assertEqual(set(self.stack.resources.keys()),
- set(["WebServer", "WebServer2"]))
-
- # The stack's owner_id is maintained.
- saved_stack = parser.Stack.load(
- self.parent_stack.context, self.stack.id)
- self.assertEqual(saved_stack.owner_id, self.parent_stack.id)
-
- @utils.stack_delete_after
- def test_update_with_template_state_err(self):
- """
- check_update_complete should raise an error when the update task is
- done but the nested stack is in the (UPDATE, FAILED) state.
- """
- create_creator = self.parent_resource.create_with_template(
- self.simple_template, {})
- create_creator.run_to_completion()
- self.stack = self.parent_resource.nested()
-
- new_templ = self.simple_template.copy()
- inst_snippet = new_templ["Resources"]["WebServer"].copy()
- new_templ["Resources"]["WebServer2"] = inst_snippet
-
- def update_task():
- yield
- self.stack.state_set(parser.Stack.UPDATE, parser.Stack.FAILED, '')
-
- self.m.StubOutWithMock(self.stack, 'update_task')
- self.stack.update_task(mox.IgnoreArg()).AndReturn(update_task())
- self.m.ReplayAll()
-
- updater = self.parent_resource.update_with_template(new_templ, {})
- updater.run_to_completion()
- self.assertEqual((self.stack.UPDATE, self.stack.FAILED),
- self.stack.state)
- ex = self.assertRaises(exception.Error,
- self.parent_resource.check_update_complete,
- updater)
- self.assertEqual('Nested stack update failed: ', str(ex))
-
- self.m.VerifyAll()
-
- @utils.stack_delete_after
- def test_load_nested_ok(self):
- self.parent_resource.create_with_template(self.templ,
- {"KeyName": "key"})
- self.stack = self.parent_resource.nested()
-
- self.parent_resource._nested = None
- self.m.StubOutWithMock(parser.Stack, 'load')
- parser.Stack.load(self.parent_resource.context,
- self.parent_resource.resource_id,
- parent_resource=self.parent_resource).AndReturn('s')
- self.m.ReplayAll()
-
- self.parent_resource.nested()
- self.m.VerifyAll()
-
- @utils.stack_delete_after
- def test_load_nested_non_exist(self):
- self.parent_resource.create_with_template(self.templ,
- {"KeyName": "key"})
- self.stack = self.parent_resource.nested()
-
- self.parent_resource._nested = None
- self.m.StubOutWithMock(parser.Stack, 'load')
- parser.Stack.load(self.parent_resource.context,
- self.parent_resource.resource_id,
- parent_resource=self.parent_resource)
- self.m.ReplayAll()
-
- self.assertRaises(exception.NotFound, self.parent_resource.nested)
- self.m.VerifyAll()
-
- def test_delete_nested_ok(self):
- nested = self.m.CreateMockAnything()
- self.m.StubOutWithMock(stack_resource.StackResource, 'nested')
- stack_resource.StackResource.nested().AndReturn(nested)
- nested.delete()
- self.m.ReplayAll()
-
- self.parent_resource.delete_nested()
- self.m.VerifyAll()
-
- def test_get_output_ok(self):
- nested = self.m.CreateMockAnything()
- self.m.StubOutWithMock(stack_resource.StackResource, 'nested')
- stack_resource.StackResource.nested().AndReturn(nested)
- nested.outputs = {"key": "value"}
- nested.output('key').AndReturn("value")
- self.m.ReplayAll()
-
- self.assertEqual("value", self.parent_resource.get_output("key"))
-
- self.m.VerifyAll()
-
- def test_get_output_key_not_found(self):
- nested = self.m.CreateMockAnything()
- self.m.StubOutWithMock(stack_resource.StackResource, 'nested')
- stack_resource.StackResource.nested().AndReturn(nested)
- nested.outputs = {}
- self.m.ReplayAll()
-
- self.assertRaises(exception.InvalidTemplateAttribute,
- self.parent_resource.get_output,
- "key")
-
- self.m.VerifyAll()
-
- @utils.stack_delete_after
- def test_create_complete_state_err(self):
- """
- check_create_complete should raise an error when the create task is
- done but the nested stack is not in the (CREATE, COMPLETE) state.
- """
- del self.templ['Resources']['WebServer']
- self.parent_resource.set_template(self.templ, {"KeyName": "test"})
-
- ctx = self.parent_resource.context
- phy_id = "cb2f2b28-a663-4683-802c-4b40c916e1ff"
- templ = parser.Template(self.templ)
- env = environment.Environment({"KeyName": "test"})
- self.stack = parser.Stack(ctx, phy_id, templ, env, timeout_mins=None,
- disable_rollback=True,
- parent_resource=self.parent_resource)
-
- self.m.StubOutWithMock(parser, 'Template')
- parser.Template(self.templ).AndReturn(templ)
-
- self.m.StubOutWithMock(environment, 'Environment')
- environment.Environment({"KeyName": "test"}).AndReturn(env)
-
- self.m.StubOutWithMock(parser, 'Stack')
- parser.Stack(ctx, phy_id, templ, env, timeout_mins=None,
- disable_rollback=True,
- parent_resource=self.parent_resource,
- owner_id=self.parent_stack.id)\
- .AndReturn(self.stack)
-
- st_set = self.stack.state_set
- self.m.StubOutWithMock(self.stack, 'state_set')
- self.stack.state_set(parser.Stack.CREATE, parser.Stack.IN_PROGRESS,
- "Stack CREATE started").WithSideEffects(st_set)
-
- self.stack.state_set(parser.Stack.CREATE, parser.Stack.COMPLETE,
- "Stack create completed successfully")
- self.m.ReplayAll()
-
- self.assertRaises(exception.ResourceFailure,
- scheduler.TaskRunner(self.parent_resource.create))
- self.assertEqual(('CREATE', 'FAILED'), self.parent_resource.state)
- self.assertEqual(('Error: Stack CREATE started'),
- self.parent_resource.status_reason)
-
- self.m.VerifyAll()
- # Restore state_set to let clean up proceed
- self.stack.state_set = st_set
-
- @utils.stack_delete_after
- def test_suspend_complete_state_err(self):
- """
- check_suspend_complete should raise an error when the suspend task is
- done but the nested stack is not in the (SUSPEND, COMPLETE) state.
- """
- del self.templ['Resources']['WebServer']
- self.parent_resource.set_template(self.templ, {"KeyName": "test"})
- scheduler.TaskRunner(self.parent_resource.create)()
- self.stack = self.parent_resource.nested()
-
- st_set = self.stack.state_set
- self.m.StubOutWithMock(self.stack, 'state_set')
- self.stack.state_set(parser.Stack.SUSPEND, parser.Stack.IN_PROGRESS,
- "Stack SUSPEND started").WithSideEffects(st_set)
-
- self.stack.state_set(parser.Stack.SUSPEND, parser.Stack.COMPLETE,
- "Stack suspend completed successfully")
- self.m.ReplayAll()
-
- self.assertRaises(exception.ResourceFailure,
- scheduler.TaskRunner(self.parent_resource.suspend))
- self.assertEqual(('SUSPEND', 'FAILED'), self.parent_resource.state)
- self.assertEqual(('Error: Stack SUSPEND started'),
- self.parent_resource.status_reason)
-
- self.m.VerifyAll()
- # Restore state_set to let clean up proceed
- self.stack.state_set = st_set
-
- @utils.stack_delete_after
- def test_resume_complete_state_err(self):
- """
- check_resume_complete should raise an error when the resume task is
- done but the nested stack is not in the (RESUME, COMPLETE) state.
- """
- del self.templ['Resources']['WebServer']
- self.parent_resource.set_template(self.templ, {"KeyName": "test"})
- scheduler.TaskRunner(self.parent_resource.create)()
- self.stack = self.parent_resource.nested()
-
- scheduler.TaskRunner(self.parent_resource.suspend)()
-
- st_set = self.stack.state_set
- self.m.StubOutWithMock(self.stack, 'state_set')
- self.stack.state_set(parser.Stack.RESUME, parser.Stack.IN_PROGRESS,
- "Stack RESUME started").WithSideEffects(st_set)
-
- self.stack.state_set(parser.Stack.RESUME, parser.Stack.COMPLETE,
- "Stack resume completed successfully")
- self.m.ReplayAll()
-
- self.assertRaises(exception.ResourceFailure,
- scheduler.TaskRunner(self.parent_resource.resume))
- self.assertEqual(('RESUME', 'FAILED'), self.parent_resource.state)
- self.assertEqual(('Error: Stack RESUME started'),
- self.parent_resource.status_reason)
-
- self.m.VerifyAll()
- # Restore state_set to let clean up proceed
- self.stack.state_set = st_set
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-
-import mox
-
-from testtools import skipIf
-
-from heat.common import template_format
-from heat.openstack.common.importutils import try_import
-from heat.engine.resources import swift
-from heat.engine import clients
-from heat.engine import resource
-from heat.engine import scheduler
-from heat.tests.common import HeatTestCase
-from heat.tests import fakes
-from heat.tests import utils
-
-swiftclient = try_import('swiftclient.client')
-
-swift_template = '''
-{
- "AWSTemplateFormatVersion" : "2010-09-09",
- "Description" : "Template to test OS::Swift::Container resources",
- "Resources" : {
- "SwiftContainerWebsite" : {
- "Type" : "OS::Swift::Container",
- "DeletionPolicy" : "Delete",
- "Properties" : {
- "X-Container-Read" : ".r:*",
- "X-Container-Meta" : {
- "Web-Index" : "index.html",
- "Web-Error" : "error.html"
- }
- }
- },
- "S3Bucket" : {
- "Type" : "AWS::S3::Bucket",
- "Properties" : {
- "SwiftContainer" : {"Ref" : "SwiftContainer"}
- }
- },
- "SwiftContainer" : {
- "Type" : "OS::Swift::Container",
- "Properties" : {
- }
- }
- }
-}
-'''
-
-
-class swiftTest(HeatTestCase):
- @skipIf(swiftclient is None, 'unable to import swiftclient')
- def setUp(self):
- super(swiftTest, self).setUp()
- self.m.CreateMock(swiftclient.Connection)
- self.m.StubOutWithMock(swiftclient.Connection, 'put_container')
- self.m.StubOutWithMock(swiftclient.Connection, 'delete_container')
- self.m.StubOutWithMock(swiftclient.Connection, 'head_container')
- self.m.StubOutWithMock(swiftclient.Connection, 'get_auth')
- self.m.StubOutWithMock(clients.OpenStackClients, 'keystone')
-
- utils.setup_dummy_db()
-
- def create_resource(self, t, stack, resource_name):
- rsrc = swift.SwiftContainer(
- 'test_resource',
- t['Resources'][resource_name],
- stack)
- scheduler.TaskRunner(rsrc.create)()
- self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state)
- return rsrc
-
- def test_create_container_name(self):
- self.m.ReplayAll()
- t = template_format.parse(swift_template)
- t['Resources']['SwiftContainer']['Properties']['name'] = 'the_name'
- stack = utils.parse_stack(t)
- rsrc = swift.SwiftContainer(
- 'test_resource',
- t['Resources']['SwiftContainer'],
- stack)
-
- self.assertEqual('the_name', rsrc.physical_resource_name())
-
- def test_build_meta_headers(self):
- self.m.UnsetStubs()
- self.assertEqual({}, swift.SwiftContainer._build_meta_headers({}))
- self.assertEqual({}, swift.SwiftContainer._build_meta_headers(None))
- meta = {
- 'X-Container-Meta-Web-Index': 'index.html',
- 'X-Container-Meta-Web-Error': 'error.html'
- }
- self.assertEqual(meta, swift.SwiftContainer._build_meta_headers({
- "Web-Index": "index.html",
- "Web-Error": "error.html"
- }))
-
- def test_attributes(self):
- headers = {
- "content-length": "0",
- "x-container-object-count": "82",
- "x-container-write": "None",
- "accept-ranges": "bytes",
- "x-trans-id": "tx08ea48ef2fa24e6da3d2f5c188fd938b",
- "date": "Wed, 23 Jan 2013 22:48:05 GMT",
- "x-timestamp": "1358980499.84298",
- "x-container-read": ".r:*",
- "x-container-bytes-used": "17680980",
- "content-type": "text/plain; charset=utf-8"}
-
- clients.OpenStackClients.keystone().AndReturn(
- fakes.FakeKeystoneClient())
- container_name = utils.PhysName('test_stack', 'test_resource')
- swiftclient.Connection.put_container(
- container_name,
- {'X-Container-Write': None,
- 'X-Container-Read': None}
- ).AndReturn(None)
- swiftclient.Connection.get_auth().MultipleTimes().AndReturn(
- ('http://server.test:8080/v_2', None))
- swiftclient.Connection.head_container(
- mox.IgnoreArg()).MultipleTimes().AndReturn(headers)
- swiftclient.Connection.delete_container(container_name).AndReturn(None)
-
- self.m.ReplayAll()
- t = template_format.parse(swift_template)
- stack = utils.parse_stack(t)
- rsrc = self.create_resource(t, stack, 'SwiftContainer')
-
- ref_id = rsrc.FnGetRefId()
- self.assertEqual(container_name, ref_id)
-
- self.assertEqual('server.test', rsrc.FnGetAtt('DomainName'))
- url = 'http://server.test:8080/v_2/%s' % ref_id
-
- self.assertEqual(url, rsrc.FnGetAtt('WebsiteURL'))
- self.assertEqual('82', rsrc.FnGetAtt('ObjectCount'))
- self.assertEqual('17680980', rsrc.FnGetAtt('BytesUsed'))
- self.assertEqual(headers, rsrc.FnGetAtt('HeadContainer'))
-
-        self.assertRaises(swift.exception.InvalidTemplateAttribute,
-                          rsrc.FnGetAtt, 'Foo')
-
- self.assertRaises(resource.UpdateReplace,
- rsrc.handle_update, {}, {}, {})
-
- scheduler.TaskRunner(rsrc.delete)()
- self.m.VerifyAll()
-
- def test_public_read(self):
- clients.OpenStackClients.keystone().AndReturn(
- fakes.FakeKeystoneClient())
- container_name = utils.PhysName('test_stack', 'test_resource')
- swiftclient.Connection.put_container(
- container_name,
- {'X-Container-Write': None,
- 'X-Container-Read': '.r:*'}).AndReturn(None)
- swiftclient.Connection.delete_container(container_name).AndReturn(None)
-
- self.m.ReplayAll()
- t = template_format.parse(swift_template)
- properties = t['Resources']['SwiftContainer']['Properties']
- properties['X-Container-Read'] = '.r:*'
- stack = utils.parse_stack(t)
- rsrc = self.create_resource(t, stack, 'SwiftContainer')
- scheduler.TaskRunner(rsrc.delete)()
- self.m.VerifyAll()
-
- def test_public_read_write(self):
- clients.OpenStackClients.keystone().AndReturn(
- fakes.FakeKeystoneClient())
- container_name = utils.PhysName('test_stack', 'test_resource')
- swiftclient.Connection.put_container(
- container_name,
- {'X-Container-Write': '.r:*',
- 'X-Container-Read': '.r:*'}).AndReturn(None)
- swiftclient.Connection.delete_container(container_name).AndReturn(None)
-
- self.m.ReplayAll()
- t = template_format.parse(swift_template)
- properties = t['Resources']['SwiftContainer']['Properties']
- properties['X-Container-Read'] = '.r:*'
- properties['X-Container-Write'] = '.r:*'
- stack = utils.parse_stack(t)
- rsrc = self.create_resource(t, stack, 'SwiftContainer')
- scheduler.TaskRunner(rsrc.delete)()
- self.m.VerifyAll()
-
- def test_website(self):
- clients.OpenStackClients.keystone().AndReturn(
- fakes.FakeKeystoneClient())
- container_name = utils.PhysName('test_stack', 'test_resource')
- swiftclient.Connection.put_container(
- container_name,
- {'X-Container-Meta-Web-Error': 'error.html',
- 'X-Container-Meta-Web-Index': 'index.html',
- 'X-Container-Write': None,
- 'X-Container-Read': '.r:*'}).AndReturn(None)
- swiftclient.Connection.delete_container(container_name).AndReturn(None)
-
- self.m.ReplayAll()
- t = template_format.parse(swift_template)
- stack = utils.parse_stack(t)
- rsrc = self.create_resource(t, stack, 'SwiftContainerWebsite')
- scheduler.TaskRunner(rsrc.delete)()
- self.m.VerifyAll()
-
- def test_delete_exception(self):
- clients.OpenStackClients.keystone().AndReturn(
- fakes.FakeKeystoneClient())
- container_name = utils.PhysName('test_stack', 'test_resource')
- swiftclient.Connection.put_container(
- container_name,
- {'X-Container-Write': None,
- 'X-Container-Read': None}).AndReturn(None)
- swiftclient.Connection.delete_container(container_name).AndRaise(
- swiftclient.ClientException('Test delete failure'))
-
- self.m.ReplayAll()
- t = template_format.parse(swift_template)
- stack = utils.parse_stack(t)
- rsrc = self.create_resource(t, stack, 'SwiftContainer')
- scheduler.TaskRunner(rsrc.delete)()
-
- self.m.VerifyAll()
-
- def test_delete_retain(self):
-
- clients.OpenStackClients.keystone().AndReturn(
- fakes.FakeKeystoneClient())
- # first run, with retain policy
- swiftclient.Connection.put_container(
- utils.PhysName('test_stack', 'test_resource'),
- {'X-Container-Write': None,
- 'X-Container-Read': None}).AndReturn(None)
-
- self.m.ReplayAll()
- t = template_format.parse(swift_template)
-
- container = t['Resources']['SwiftContainer']
- container['DeletionPolicy'] = 'Retain'
- stack = utils.parse_stack(t)
- rsrc = self.create_resource(t, stack, 'SwiftContainer')
- scheduler.TaskRunner(rsrc.delete)()
- self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state)
-
- self.m.VerifyAll()
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from testtools import skipIf
-import os
-import yaml
-
-from heat.engine import clients
-from heat.common import config
-from heat.common import exception
-from heat.common import template_format
-from heat.tests.common import HeatTestCase
-from heat.tests import utils
-
-
-class JsonToYamlTest(HeatTestCase):
-
- def setUp(self):
- super(JsonToYamlTest, self).setUp()
- self.expected_test_count = 2
- self.longMessage = True
- self.maxDiff = None
-
- def test_convert_all_templates(self):
- path = os.path.join(os.path.dirname(os.path.realpath(__file__)),
- 'templates')
-
- template_test_count = 0
- for (json_str,
- yml_str,
- file_name) in self.convert_all_json_to_yaml(path):
-
- self.compare_json_vs_yaml(json_str, yml_str, file_name)
- template_test_count += 1
- if template_test_count >= self.expected_test_count:
- break
-
- self.assertTrue(template_test_count >= self.expected_test_count,
- 'Expected at least %d templates to be tested, not %d' %
- (self.expected_test_count, template_test_count))
-
- def compare_json_vs_yaml(self, json_str, yml_str, file_name):
- yml = template_format.parse(yml_str)
-
- self.assertEqual(u'2012-12-12', yml[u'HeatTemplateFormatVersion'],
- file_name)
- self.assertFalse(u'AWSTemplateFormatVersion' in yml, file_name)
- del(yml[u'HeatTemplateFormatVersion'])
-
- jsn = template_format.parse(json_str)
- template_format.default_for_missing(jsn, 'AWSTemplateFormatVersion',
- template_format.CFN_VERSIONS)
-
- if u'AWSTemplateFormatVersion' in jsn:
- del(jsn[u'AWSTemplateFormatVersion'])
-
- self.assertEqual(yml, jsn, file_name)
-
- def convert_all_json_to_yaml(self, dirpath):
- for path in os.listdir(dirpath):
- if not path.endswith('.template') and not path.endswith('.json'):
- continue
-            with open(os.path.join(dirpath, path), 'r') as f:
-                json_str = f.read()
-
-            yml_str = template_format.convert_json_to_yaml(json_str)
-            yield (json_str, yml_str, f.name)
-
-
-class YamlMinimalTest(HeatTestCase):
-
- def test_minimal_yaml(self):
- yaml1 = ''
- yaml2 = '''HeatTemplateFormatVersion: '2012-12-12'
-Parameters: {}
-Mappings: {}
-Resources: {}
-Outputs: {}
-'''
- tpl1 = template_format.parse(yaml1)
- tpl2 = template_format.parse(yaml2)
- self.assertEqual(tpl1, tpl2)
-
- def test_long_yaml(self):
-        template = {'HeatTemplateFormatVersion': '2012-12-12'}
- config.cfg.CONF.set_override('max_template_size', 1024)
- template['Resources'] = ['a'] * (config.cfg.CONF.max_template_size / 3)
- limit = config.cfg.CONF.max_template_size
- long_yaml = yaml.safe_dump(template)
- self.assertTrue(len(long_yaml) > limit)
- ex = self.assertRaises(exception.RequestLimitExceeded,
- template_format.parse, long_yaml)
- msg = 'Request limit exceeded: Template exceeds maximum allowed size.'
- self.assertEqual(msg, str(ex))
-
-
-class JsonYamlResolvedCompareTest(HeatTestCase):
-
- def setUp(self):
- super(JsonYamlResolvedCompareTest, self).setUp()
- self.longMessage = True
- self.maxDiff = None
- utils.setup_dummy_db()
-
- def load_template(self, file_name):
- filepath = os.path.join(os.path.dirname(os.path.realpath(__file__)),
- 'templates', file_name)
- f = open(filepath)
- t = template_format.parse(f.read())
- f.close()
- return t
-
- def compare_stacks(self, json_file, yaml_file, parameters):
- t1 = self.load_template(json_file)
- template_format.default_for_missing(t1, 'AWSTemplateFormatVersion',
- template_format.CFN_VERSIONS)
- del(t1[u'AWSTemplateFormatVersion'])
-
- t2 = self.load_template(yaml_file)
- del(t2[u'HeatTemplateFormatVersion'])
-
- stack1 = utils.parse_stack(t1, parameters)
- stack2 = utils.parse_stack(t2, parameters)
-
- # compare resources separately so that resolved static data
- # is compared
- t1nr = dict(stack1.t.t)
- del(t1nr['Resources'])
-
- t2nr = dict(stack2.t.t)
- del(t2nr['Resources'])
- self.assertEqual(t1nr, t2nr)
-
- self.assertEqual(set(stack1.resources.keys()),
- set(stack2.resources.keys()))
- for key in stack1.resources:
- self.assertEqual(stack1.resources[key].t, stack2.resources[key].t)
-
- @skipIf(clients.neutronclient is None, 'neutronclient unavailable')
- def test_neutron_resolved(self):
- self.compare_stacks('Neutron.template', 'Neutron.yaml', {})
-
- def test_wordpress_resolved(self):
- self.compare_stacks('WordPress_Single_Instance.template',
- 'WordPress_Single_Instance.yaml',
- {'KeyName': 'test'})
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import requests
-from requests import exceptions
-import urllib2
-import cStringIO
-
-from heat.common import urlfetch
-from heat.tests.common import HeatTestCase
-
-
-class Response(object):
- def __init__(self, buf=''):
- self._text = buf
-
- @property
- def text(self):
- return self._text
-
- def raise_for_status(self):
- pass
-
-
-class UrlFetchTest(HeatTestCase):
- def setUp(self):
- super(UrlFetchTest, self).setUp()
- self.m.StubOutWithMock(requests, 'get')
-
- def test_file_scheme_default_behaviour(self):
- self.m.ReplayAll()
- self.assertRaises(IOError, urlfetch.get, 'file:///etc/profile')
- self.m.VerifyAll()
-
- def test_file_scheme_supported(self):
- data = '{ "foo": "bar" }'
- url = 'file:///etc/profile'
-
- self.m.StubOutWithMock(urllib2, 'urlopen')
- urllib2.urlopen(url).AndReturn(cStringIO.StringIO(data))
- self.m.ReplayAll()
-
- self.assertEqual(data, urlfetch.get(url, allowed_schemes=['file']))
- self.m.VerifyAll()
-
- def test_file_scheme_failure(self):
- url = 'file:///etc/profile'
-
- self.m.StubOutWithMock(urllib2, 'urlopen')
- urllib2.urlopen(url).AndRaise(urllib2.URLError('oops'))
- self.m.ReplayAll()
-
- self.assertRaises(IOError, urlfetch.get, url, allowed_schemes=['file'])
- self.m.VerifyAll()
-
- def test_http_scheme(self):
- url = 'http://example.com/template'
- data = '{ "foo": "bar" }'
-
- requests.get(url).AndReturn(Response(data))
- self.m.ReplayAll()
-
-        self.assertEqual(data, urlfetch.get(url))
- self.m.VerifyAll()
-
- def test_https_scheme(self):
- url = 'https://example.com/template'
- data = '{ "foo": "bar" }'
-
- requests.get(url).AndReturn(Response(data))
- self.m.ReplayAll()
-
-        self.assertEqual(data, urlfetch.get(url))
- self.m.VerifyAll()
-
- def test_http_error(self):
- url = 'http://example.com/template'
-
- requests.get(url).AndRaise(exceptions.HTTPError())
- self.m.ReplayAll()
-
- self.assertRaises(IOError, urlfetch.get, url)
- self.m.VerifyAll()
-
- def test_non_exist_url(self):
- url = 'http://non-exist.com/template'
-
- requests.get(url).AndRaise(exceptions.Timeout())
- self.m.ReplayAll()
-
- self.assertRaises(IOError, urlfetch.get, url)
- self.m.VerifyAll()
-
- def test_garbage(self):
- self.m.ReplayAll()
- self.assertRaises(IOError, urlfetch.get, 'wibble')
- self.m.VerifyAll()
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-
-from oslo.config import cfg
-
-from heat.common import exception
-from heat.common import template_format
-from heat.engine import resource
-from heat.engine import scheduler
-from heat.engine.resources import user
-from heat.tests.common import HeatTestCase
-from heat.tests import fakes
-from heat.tests import utils
-
-import keystoneclient.exceptions
-
-user_template = '''
-{
- "AWSTemplateFormatVersion" : "2010-09-09",
- "Description" : "Just a User",
- "Parameters" : {},
- "Resources" : {
- "CfnUser" : {
- "Type" : "AWS::IAM::User"
- }
- }
-}
-'''
-
-user_accesskey_template = '''
-{
- "AWSTemplateFormatVersion" : "2010-09-09",
- "Description" : "Just a User",
- "Parameters" : {},
- "Resources" : {
- "CfnUser" : {
- "Type" : "AWS::IAM::User"
- },
-
- "HostKeys" : {
- "Type" : "AWS::IAM::AccessKey",
- "Properties" : {
- "UserName" : {"Ref": "CfnUser"}
- }
- }
- }
-}
-'''
-
-
-user_policy_template = '''
-{
- "AWSTemplateFormatVersion" : "2010-09-09",
- "Description" : "Just a User",
- "Parameters" : {},
- "Resources" : {
- "CfnUser" : {
- "Type" : "AWS::IAM::User",
- "Properties" : {
- "Policies" : [ { "Ref": "WebServerAccessPolicy"} ]
- }
- },
- "WebServerAccessPolicy" : {
- "Type" : "OS::Heat::AccessPolicy",
- "Properties" : {
- "AllowedResources" : [ "WikiDatabase" ]
- }
- },
- "WikiDatabase" : {
-      "Type" : "AWS::EC2::Instance"
- }
- }
-}
-'''
-
-
-class UserPolicyTestCase(HeatTestCase):
- def setUp(self):
- super(UserPolicyTestCase, self).setUp()
- username = utils.PhysName('test_stack', 'CfnUser')
- self.fc = fakes.FakeKeystoneClient(username=username)
- cfg.CONF.set_default('heat_stack_user_role', 'stack_user_role')
- utils.setup_dummy_db()
-
-
-class UserTest(UserPolicyTestCase):
-
- def create_user(self, t, stack, resource_name):
- rsrc = user.User(resource_name,
- t['Resources'][resource_name],
- stack)
- self.assertEqual(None, rsrc.validate())
- scheduler.TaskRunner(rsrc.create)()
- self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state)
- return rsrc
-
- def test_user(self):
-
- self.m.StubOutWithMock(user.User, 'keystone')
- user.User.keystone().MultipleTimes().AndReturn(self.fc)
-
- self.m.ReplayAll()
-
- t = template_format.parse(user_template)
- stack = utils.parse_stack(t)
-
- rsrc = self.create_user(t, stack, 'CfnUser')
- self.assertEqual(self.fc.user_id, rsrc.resource_id)
- self.assertEqual(utils.PhysName('test_stack', 'CfnUser'),
- rsrc.FnGetRefId())
-
- self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state)
- self.assertRaises(resource.UpdateReplace,
- rsrc.handle_update, {}, {}, {})
-
- self.assertEqual(None, rsrc.handle_suspend())
- self.assertEqual(None, rsrc.handle_resume())
-
- rsrc.resource_id = None
- scheduler.TaskRunner(rsrc.delete)()
- self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state)
-
- rsrc.resource_id = self.fc.access
- rsrc.state_set(rsrc.CREATE, rsrc.COMPLETE)
- self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state)
-
- scheduler.TaskRunner(rsrc.delete)()
- self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state)
-
- rsrc.state_set(rsrc.CREATE, rsrc.COMPLETE)
- self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state)
-
- scheduler.TaskRunner(rsrc.delete)()
- self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state)
- self.m.VerifyAll()
-
- def test_user_validate_policies(self):
-
- self.m.StubOutWithMock(user.User, 'keystone')
- user.User.keystone().MultipleTimes().AndReturn(self.fc)
-
- self.m.ReplayAll()
-
- t = template_format.parse(user_policy_template)
- stack = utils.parse_stack(t)
-
- rsrc = self.create_user(t, stack, 'CfnUser')
- self.assertEqual(self.fc.user_id, rsrc.resource_id)
- self.assertEqual(utils.PhysName('test_stack', 'CfnUser'),
- rsrc.FnGetRefId())
- self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state)
-
- self.assertEqual([u'WebServerAccessPolicy'],
- rsrc.properties['Policies'])
-
- # OK
- self.assertTrue(
- rsrc._validate_policies([u'WebServerAccessPolicy']))
-
- # Resource name doesn't exist in the stack
- self.assertFalse(rsrc._validate_policies([u'NoExistAccessPolicy']))
-
-        # Resource name refers to a resource of the wrong type
- self.assertFalse(rsrc._validate_policies([u'NoExistAccessPolicy',
- u'WikiDatabase']))
-
- # Wrong type (AWS embedded policy format, not yet supported)
- dict_policy = {"PolicyName": "AccessForCFNInit",
- "PolicyDocument":
- {"Statement": [{"Effect": "Allow",
- "Action":
- "cloudformation:DescribeStackResource",
- "Resource": "*"}]}}
-
- # However we should just ignore it to avoid breaking existing templates
- self.assertTrue(rsrc._validate_policies([dict_policy]))
-
- self.m.VerifyAll()
-
- def test_user_create_bad_policies(self):
- self.m.ReplayAll()
-
- t = template_format.parse(user_policy_template)
- t['Resources']['CfnUser']['Properties']['Policies'] = ['NoExistBad']
- stack = utils.parse_stack(t)
- resource_name = 'CfnUser'
- rsrc = user.User(resource_name,
- t['Resources'][resource_name],
- stack)
- self.assertRaises(exception.InvalidTemplateAttribute,
- rsrc.handle_create)
- self.m.VerifyAll()
-
- def test_user_access_allowed(self):
-
- self.m.StubOutWithMock(user.User, 'keystone')
- user.User.keystone().MultipleTimes().AndReturn(self.fc)
-
- self.m.StubOutWithMock(user.AccessPolicy, 'access_allowed')
- user.AccessPolicy.access_allowed('a_resource').AndReturn(True)
- user.AccessPolicy.access_allowed('b_resource').AndReturn(False)
-
- self.m.ReplayAll()
-
- t = template_format.parse(user_policy_template)
- stack = utils.parse_stack(t)
-
- rsrc = self.create_user(t, stack, 'CfnUser')
- self.assertEqual(self.fc.user_id, rsrc.resource_id)
- self.assertEqual(utils.PhysName('test_stack', 'CfnUser'),
- rsrc.FnGetRefId())
- self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state)
-
- self.assertTrue(rsrc.access_allowed('a_resource'))
- self.assertFalse(rsrc.access_allowed('b_resource'))
- self.m.VerifyAll()
-
- def test_user_access_allowed_ignorepolicy(self):
-
- self.m.StubOutWithMock(user.User, 'keystone')
- user.User.keystone().MultipleTimes().AndReturn(self.fc)
-
- self.m.StubOutWithMock(user.AccessPolicy, 'access_allowed')
- user.AccessPolicy.access_allowed('a_resource').AndReturn(True)
- user.AccessPolicy.access_allowed('b_resource').AndReturn(False)
-
- self.m.ReplayAll()
-
- t = template_format.parse(user_policy_template)
- t['Resources']['CfnUser']['Properties']['Policies'] = [
- 'WebServerAccessPolicy', {'an_ignored': 'policy'}]
- stack = utils.parse_stack(t)
-
- rsrc = self.create_user(t, stack, 'CfnUser')
- self.assertEqual(self.fc.user_id, rsrc.resource_id)
- self.assertEqual(utils.PhysName('test_stack', 'CfnUser'),
- rsrc.FnGetRefId())
- self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state)
-
- self.assertTrue(rsrc.access_allowed('a_resource'))
- self.assertFalse(rsrc.access_allowed('b_resource'))
- self.m.VerifyAll()
-
-
-class AccessKeyTest(UserPolicyTestCase):
-
- def create_access_key(self, t, stack, resource_name):
- rsrc = user.AccessKey(resource_name,
- t['Resources'][resource_name],
- stack)
- self.assertEqual(None, rsrc.validate())
- scheduler.TaskRunner(rsrc.create)()
- self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state)
- return rsrc
-
- def create_user(self, t, stack, resource_name):
- rsrc = stack[resource_name]
- self.assertEqual(None, rsrc.validate())
- scheduler.TaskRunner(rsrc.create)()
- self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state)
- return rsrc
-
- def test_access_key(self):
- self.m.StubOutWithMock(user.AccessKey, 'keystone')
- self.m.StubOutWithMock(user.User, 'keystone')
- user.AccessKey.keystone().MultipleTimes().AndReturn(self.fc)
- user.User.keystone().MultipleTimes().AndReturn(self.fc)
-
- self.m.ReplayAll()
-
- t = template_format.parse(user_accesskey_template)
-
- stack = utils.parse_stack(t)
-
- self.create_user(t, stack, 'CfnUser')
- rsrc = self.create_access_key(t, stack, 'HostKeys')
-
- self.assertRaises(resource.UpdateReplace,
- rsrc.handle_update, {}, {}, {})
- self.assertEqual(self.fc.access,
- rsrc.resource_id)
-
- self.assertEqual(self.fc.secret,
- rsrc._secret)
-
- self.assertEqual(utils.PhysName(stack.name, 'CfnUser'),
- rsrc.FnGetAtt('UserName'))
- rsrc._secret = None
-        self.assertEqual(self.fc.secret,
-                         rsrc.FnGetAtt('SecretAccessKey'))
-
- self.assertRaises(exception.InvalidTemplateAttribute,
- rsrc.FnGetAtt, 'Foo')
- scheduler.TaskRunner(rsrc.delete)()
- self.m.VerifyAll()
-
- def test_access_key_deleted(self):
- self.m.StubOutWithMock(user.AccessKey, 'keystone')
- self.m.StubOutWithMock(user.User, 'keystone')
- user.AccessKey.keystone().MultipleTimes().AndReturn(self.fc)
- user.User.keystone().MultipleTimes().AndReturn(self.fc)
-
- self.m.ReplayAll()
-
- t = template_format.parse(user_accesskey_template)
- stack = utils.parse_stack(t)
-
- self.create_user(t, stack, 'CfnUser')
- rsrc = self.create_access_key(t, stack, 'HostKeys')
- self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state)
-
- self.m.StubOutWithMock(self.fc, 'delete_ec2_keypair')
- NotFound = keystoneclient.exceptions.NotFound
- self.fc.delete_ec2_keypair(self.fc.user_id,
- rsrc.resource_id).AndRaise(NotFound('Gone'))
- self.m.ReplayAll()
- scheduler.TaskRunner(rsrc.delete)()
-
- self.m.VerifyAll()
-
- def test_access_key_no_user(self):
- self.m.ReplayAll()
-
- t = template_format.parse(user_accesskey_template)
-        # Set the resource property UserName to an unknown user
-        t['Resources']['HostKeys']['Properties']['UserName'] = 'NonExistent'
- stack = utils.parse_stack(t)
- stack.resources['CfnUser'].resource_id = self.fc.user_id
-
- rsrc = user.AccessKey('HostKeys',
- t['Resources']['HostKeys'],
- stack)
- create = scheduler.TaskRunner(rsrc.create)
- self.assertRaises(exception.ResourceFailure, create)
- self.assertEqual((rsrc.CREATE, rsrc.FAILED), rsrc.state)
-
- scheduler.TaskRunner(rsrc.delete)()
- self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state)
-
- self.m.VerifyAll()
-
-
-class AccessPolicyTest(UserPolicyTestCase):
-
- def test_accesspolicy_create_ok(self):
- t = template_format.parse(user_policy_template)
- stack = utils.parse_stack(t)
-
- resource_name = 'WebServerAccessPolicy'
- rsrc = user.AccessPolicy(resource_name,
- t['Resources'][resource_name],
- stack)
- scheduler.TaskRunner(rsrc.create)()
- self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state)
-
- def test_accesspolicy_create_ok_empty(self):
- t = template_format.parse(user_policy_template)
- resource_name = 'WebServerAccessPolicy'
- t['Resources'][resource_name]['Properties']['AllowedResources'] = []
- stack = utils.parse_stack(t)
-
- rsrc = user.AccessPolicy(resource_name,
- t['Resources'][resource_name],
- stack)
- scheduler.TaskRunner(rsrc.create)()
- self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state)
-
- def test_accesspolicy_create_err_notfound(self):
- t = template_format.parse(user_policy_template)
- resource_name = 'WebServerAccessPolicy'
- t['Resources'][resource_name]['Properties']['AllowedResources'] = [
- 'NoExistResource']
- stack = utils.parse_stack(t)
-
- rsrc = user.AccessPolicy(resource_name,
- t['Resources'][resource_name],
- stack)
- self.assertRaises(exception.ResourceNotFound, rsrc.handle_create)
-
- def test_accesspolicy_update(self):
- t = template_format.parse(user_policy_template)
- resource_name = 'WebServerAccessPolicy'
- stack = utils.parse_stack(t)
-
- rsrc = user.AccessPolicy(resource_name,
- t['Resources'][resource_name],
- stack)
- self.assertRaises(resource.UpdateReplace,
- rsrc.handle_update, {}, {}, {})
-
- def test_accesspolicy_access_allowed(self):
- t = template_format.parse(user_policy_template)
- resource_name = 'WebServerAccessPolicy'
- stack = utils.parse_stack(t)
-
- rsrc = user.AccessPolicy(resource_name,
- t['Resources'][resource_name],
- stack)
- self.assertTrue(rsrc.access_allowed('WikiDatabase'))
- self.assertFalse(rsrc.access_allowed('NotWikiDatabase'))
- self.assertFalse(rsrc.access_allowed(None))
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from testtools import skipIf
-
-from heat.engine import clients
-from heat.engine import environment
-from heat.tests.v1_1 import fakes
-from heat.common import exception
-from heat.common import template_format
-from heat.engine import resources
-from heat.engine.resources import instance as instances
-from heat.engine import service
-from heat.openstack.common.importutils import try_import
-import heat.db.api as db_api
-from heat.engine import parser
-from heat.tests.common import HeatTestCase
-from heat.tests import utils
-
-test_template_volumeattach = '''
-{
- "AWSTemplateFormatVersion" : "2010-09-09",
- "Description" : "test.",
- "Resources" : {
- "WikiDatabase": {
- "Type": "AWS::EC2::Instance",
- "DeletionPolicy": "Delete",
- "Properties": {
- "ImageId": "image_name",
- "InstanceType": "m1.large",
- "KeyName": "test_KeyName"
- }
- },
- "DataVolume" : {
- "Type" : "AWS::EC2::Volume",
- "Properties" : {
- "Size" : "6",
- "AvailabilityZone" : "nova"
- }
- },
- "MountPoint" : {
- "Type" : "AWS::EC2::VolumeAttachment",
- "Properties" : {
- "InstanceId" : { "Ref" : "WikiDatabase" },
- "VolumeId" : { "Ref" : "DataVolume" },
- "Device" : "/dev/%s"
- }
- }
- }
-}
-'''
-
-test_template_ref = '''
-{
- "AWSTemplateFormatVersion" : "2010-09-09",
- "Description" : "test.",
- "Parameters" : {
-
- "KeyName" : {
-''' + \
-    '"Description" : "Name of an existing EC2 ' + \
- 'KeyPair to enable SSH access to the instances",' + \
- '''
- "Type" : "String"
- }
- },
-
- "Resources" : {
- "WikiDatabase": {
- "Type": "AWS::EC2::Instance",
- "Properties": {
- "ImageId": "image_name",
- "InstanceType": "m1.large",
- "KeyName": { "Ref" : "KeyName" }
- }
- },
- "DataVolume" : {
- "Type" : "AWS::EC2::Volume",
- "Properties" : {
- "Size" : "6",
- "AvailabilityZone" : "nova"
- }
- },
- "MountPoint" : {
- "Type" : "AWS::EC2::VolumeAttachment",
- "Properties" : {
- "InstanceId" : { "Ref" : "%s" },
- "VolumeId" : { "Ref" : "DataVolume" },
- "Device" : "/dev/vdb"
- }
- }
- }
- }
- '''
-test_template_findinmap_valid = '''
-{
- "AWSTemplateFormatVersion" : "2010-09-09",
- "Description" : "test.",
- "Parameters" : {
- "KeyName" : {
-''' + \
-    '"Description" : "Name of an existing EC2 KeyPair to ' + \
- 'enable SSH access to the instances",' + \
- '''
- "Type" : "String"
- }
- },
-
- "Resources" : {
- "WikiDatabase": {
- "Type": "AWS::EC2::Instance",
- "Properties": {
- "ImageId": "image_name",
- "InstanceType": "m1.large",
- "KeyName": { "Ref" : "KeyName" }
- }
- },
- "DataVolume" : {
- "Type" : "AWS::EC2::Volume",
- "Properties" : {
- "Size" : "6",
- "AvailabilityZone" : "nova"
- }
- },
-
- "MountPoint" : {
- "Type" : "AWS::EC2::VolumeAttachment",
- "Properties" : {
- "InstanceId" : { "Ref" : "WikiDatabase" },
- "VolumeId" : { "Ref" : "DataVolume" },
- "Device" : "/dev/vdb"
- }
- }
- }
- }
- '''
-test_template_findinmap_invalid = '''
-{
- "AWSTemplateFormatVersion" : "2010-09-09",
- "Description" : "test.",
- "Parameters" : {
-
- "KeyName" : {
-''' + \
- '"Description" : "Name of an existing EC2 KeyPair to enable SSH ' + \
- 'access to the instances",' + \
- ''' "Type" : "String"
- }
- },
-
- "Mappings" : {
- "AWSInstanceType2Arch" : {
- "t1.micro" : { "Arch" : "64" },
- "m1.small" : { "Arch" : "64" },
- "m1.medium" : { "Arch" : "64" },
- "m1.large" : { "Arch" : "64" },
- "m1.xlarge" : { "Arch" : "64" },
- "m2.xlarge" : { "Arch" : "64" },
- "m2.2xlarge" : { "Arch" : "64" },
- "m2.4xlarge" : { "Arch" : "64" },
- "c1.medium" : { "Arch" : "64" },
- "c1.xlarge" : { "Arch" : "64" },
- "cc1.4xlarge" : { "Arch" : "64HVM" },
- "cc2.8xlarge" : { "Arch" : "64HVM" },
- "cg1.4xlarge" : { "Arch" : "64HVM" }
- }
- },
- "Resources" : {
- "WikiDatabase": {
- "Type": "AWS::EC2::Instance",
- "Properties": {
- ''' + \
- '"ImageId" : { "Fn::FindInMap" : [ "DistroArch2AMI", { "Ref" : ' + \
- '"LinuxDistribution" },' + \
- '{ "Fn::FindInMap" : [ "AWSInstanceType2Arch", { "Ref" : ' + \
- '"InstanceType" }, "Arch" ] } ] },' + \
- '''
- "InstanceType": "m1.large",
- "KeyName": { "Ref" : "KeyName"}
- }
- },
- "DataVolume" : {
- "Type" : "AWS::EC2::Volume",
- "Properties" : {
- "Size" : "6",
- "AvailabilityZone" : "nova"
- }
- },
-
- "MountPoint" : {
- "Type" : "AWS::EC2::VolumeAttachment",
- "Properties" : {
- "InstanceId" : { "Ref" : "WikiDatabase" },
- "VolumeId" : { "Ref" : "DataVolume" },
- "Device" : "/dev/vdb"
- }
- }
- }
-}
-'''
-
-test_template_invalid_resources = '''
-{
- "AWSTemplateFormatVersion" : "2010-09-09",
- "Description" : "AWS CloudFormation Sample Template for xyz.",
- "Parameters" : {
- "InstanceType" : {
- "Description" : "Defined instance type",
- "Type" : "String",
- "Default" : "node.ee",
- "AllowedValues" : ["node.ee", "node.apache", "node.api"],
- "ConstraintDescription" : "must be a valid instance type."
- }
- },
- "Resources" : {
- "Type" : "AWS::EC2::Instance",
- "Metadata" : {
- },
- "Properties" : {
- "ImageId" : { "Ref" : "centos-6.4-20130701-0" },
- "InstanceType" : { "Ref" : "InstanceType" }
- }
- }
-}
-'''
-
-test_template_invalid_property = '''
-{
- "AWSTemplateFormatVersion" : "2010-09-09",
- "Description" : "test.",
- "Parameters" : {
-
- "KeyName" : {
-''' + \
- '"Description" : "Name of an existing EC2' + \
- 'KeyPair to enable SSH access to the instances",' + \
- '''
- "Type" : "String"
- }
- },
-
- "Resources" : {
- "WikiDatabase": {
- "Type": "AWS::EC2::Instance",
- "Properties": {
- "ImageId": "image_name",
- "InstanceType": "m1.large",
- "KeyName": { "Ref" : "KeyName" },
- "UnknownProperty": "unknown"
- }
- }
- }
- }
- '''
-
-test_template_unimplemented_property = '''
-{
- "AWSTemplateFormatVersion" : "2010-09-09",
- "Description" : "test.",
- "Parameters" : {
-
- "KeyName" : {
-''' + \
- '"Description" : "Name of an existing EC2' + \
- 'KeyPair to enable SSH access to the instances",' + \
- '''
- "Type" : "String"
- }
- },
-
- "Resources" : {
- "WikiDatabase": {
- "Type": "AWS::EC2::Instance",
- "Properties": {
- "ImageId": "image_name",
- "InstanceType": "m1.large",
- "KeyName": { "Ref" : "KeyName" },
- "SourceDestCheck": "false"
- }
- }
- }
- }
- '''
-
-test_template_invalid_deletion_policy = '''
-{
- "AWSTemplateFormatVersion" : "2010-09-09",
- "Description" : "test.",
- "Parameters" : {
-
- "KeyName" : {
-''' + \
- '"Description" : "Name of an existing EC2' + \
- 'KeyPair to enable SSH access to the instances",' + \
- '''
- "Type" : "String"
- }
- },
-
- "Resources" : {
- "WikiDatabase": {
- "Type": "AWS::EC2::Instance",
- "DeletionPolicy": "Destroy",
- "Properties": {
- "ImageId": "image_name",
- "InstanceType": "m1.large",
- "KeyName": { "Ref" : "KeyName" }
- }
- }
- }
- }
- '''
-
-test_template_snapshot_deletion_policy = '''
-{
- "AWSTemplateFormatVersion" : "2010-09-09",
- "Description" : "test.",
- "Parameters" : {
-
- "KeyName" : {
-''' + \
- '"Description" : "Name of an existing EC2' + \
- 'KeyPair to enable SSH access to the instances",' + \
- '''
- "Type" : "String"
- }
- },
-
- "Resources" : {
- "WikiDatabase": {
- "Type": "AWS::EC2::Instance",
- "DeletionPolicy": "Snapshot",
- "Properties": {
- "ImageId": "image_name",
- "InstanceType": "m1.large",
- "KeyName": { "Ref" : "KeyName" }
- }
- }
- }
- }
- '''
-
-test_template_volume_snapshot = '''
-{
- "AWSTemplateFormatVersion" : "2010-09-09",
- "Description" : "test.",
- "Resources" : {
- "DataVolume" : {
- "Type" : "AWS::EC2::Volume",
- "DeletionPolicy": "Snapshot",
- "Properties" : {
- "Size" : "6",
- "AvailabilityZone" : "nova"
- }
- }
- }
-}
-'''
-
-test_unregistered_key = '''
-{
- "AWSTemplateFormatVersion" : "2010-09-09",
- "Description" : "test.",
- "Parameters" : {
-
- "KeyName" : {
-''' + \
- '"Description" : "Name of an existing EC2' + \
- 'KeyPair to enable SSH access to the instances",' + \
- '''
- "Type" : "String"
- }
- },
-
- "Resources" : {
- "Instance": {
- "Type": "AWS::EC2::Instance",
- "Properties": {
- "ImageId": "image_name",
- "InstanceType": "m1.large",
- "KeyName": { "Ref" : "KeyName" }
- }
- }
- }
- }
- '''
-
-test_template_image = '''
-{
- "AWSTemplateFormatVersion" : "2010-09-09",
- "Description" : "test.",
- "Parameters" : {
-
- "KeyName" : {
-''' + \
- '"Description" : "Name of an existing EC2' + \
- 'KeyPair to enable SSH access to the instances",' + \
- '''
- "Type" : "String"
- }
- },
-
- "Resources" : {
- "Instance": {
- "Type": "AWS::EC2::Instance",
- "Properties": {
- "ImageId": "image_name",
- "InstanceType": "m1.large",
- "KeyName": { "Ref" : "KeyName" }
- }
- }
- }
- }
- '''
-
-test_template_invalid_secgroups = '''
-{
- "AWSTemplateFormatVersion" : "2010-09-09",
- "Description" : "test.",
- "Parameters" : {
-
- "KeyName" : {
-''' + \
- '"Description" : "Name of an existing EC2' + \
- 'KeyPair to enable SSH access to the instances",' + \
- '''
- "Type" : "String"
- }
- },
-
- "Resources" : {
- "Instance": {
- "Type": "AWS::EC2::Instance",
- "Properties": {
- "ImageId": "image_name",
- "InstanceType": "m1.large",
- "KeyName": { "Ref" : "KeyName" },
- "SecurityGroups": [ "default" ],
- "NetworkInterfaces": [ "mgmt", "data" ]
- }
- }
- }
- }
- '''
-
-test_template_invalid_secgroupids = '''
-{
- "AWSTemplateFormatVersion" : "2010-09-09",
- "Description" : "test.",
- "Parameters" : {
-
- "KeyName" : {
-''' + \
- '"Description" : "Name of an existing EC2' + \
- 'KeyPair to enable SSH access to the instances",' + \
- '''
- "Type" : "String"
- }
- },
-
- "Resources" : {
- "Instance": {
- "Type": "AWS::EC2::Instance",
- "Properties": {
- "ImageId": "image_name",
- "InstanceType": "m1.large",
- "KeyName": { "Ref" : "KeyName" },
- "SecurityGroupIds": [ "default" ],
- "NetworkInterfaces": [ "mgmt", "data" ]
- }
- }
- }
- }
- '''
-
-test_template_nova_client_exception = '''
-{
- "AWSTemplateFormatVersion" : "2010-09-09",
- "Description" : "test.",
- "Resources" : {
- "Instance": {
- "Type": "AWS::EC2::Instance",
- "DeletionPolicy": "Delete",
- "Properties": {
- "ImageId": "image_name",
- "InstanceType": "m1.large"
- }
- }
- }
-}
-'''
-
-test_template_unique_logical_name = '''
-{
- "AWSTemplateFormatVersion" : "2010-09-09",
- "Description" : "test.",
- "Parameters" : {
-
- "KeyName" : {
-''' + \
- '"Description" : "Name of an existing EC2' + \
- 'KeyPair to enable SSH access to the instances",' + \
- '''
- "Type" : "String"
- },
- "AName" : {
-''' + \
- '"Description" : "Name of an existing EC2' + \
- 'KeyPair to enable SSH access to the instances",' + \
- '''
- "Type" : "String"
- }
- },
-
- "Resources" : {
- "AName": {
- "Type": "AWS::EC2::Instance",
- "Properties": {
- "ImageId": "image_name",
- "InstanceType": "m1.large",
- "KeyName": { "Ref" : "KeyName" },
- "NetworkInterfaces": [ "mgmt", "data" ]
- }
- }
- }
- }
- '''
-
-
-class validateTest(HeatTestCase):
- def setUp(self):
- super(validateTest, self).setUp()
- resources.initialise()
- self.fc = fakes.FakeClient()
- utils.setup_dummy_db()
- self.ctx = utils.dummy_context()
-
- def test_validate_volumeattach_valid(self):
- t = template_format.parse(test_template_volumeattach % 'vdq')
- stack = parser.Stack(self.ctx, 'test_stack', parser.Template(t))
-
- self.m.StubOutWithMock(db_api, 'resource_get_by_name_and_stack')
- db_api.resource_get_by_name_and_stack(None, 'test_resource_name',
- stack).AndReturn(None)
-
- self.m.ReplayAll()
- volumeattach = stack.resources['MountPoint']
-        self.assertIsNone(volumeattach.validate())
-
- def test_validate_volumeattach_invalid(self):
- t = template_format.parse(test_template_volumeattach % 'sda')
- stack = parser.Stack(self.ctx, 'test_stack', parser.Template(t))
-
- self.m.StubOutWithMock(db_api, 'resource_get_by_name_and_stack')
- db_api.resource_get_by_name_and_stack(None, 'test_resource_name',
- stack).AndReturn(None)
-
- self.m.ReplayAll()
- volumeattach = stack.resources['MountPoint']
- self.assertRaises(exception.StackValidationFailed,
- volumeattach.validate)
-
- def test_validate_ref_valid(self):
- t = template_format.parse(test_template_ref % 'WikiDatabase')
-
- self.m.StubOutWithMock(instances.Instance, 'nova')
- instances.Instance.nova().AndReturn(self.fc)
- self.m.ReplayAll()
-
- engine = service.EngineService('a', 't')
- res = dict(engine.validate_template(None, t))
- self.assertEqual(res['Description'], 'test.')
-
- def test_validate_hot_valid(self):
- t = template_format.parse(
- """
- heat_template_version: 2013-05-23
- description: test.
- resources:
- my_instance:
- type: AWS::EC2::Instance
- """)
- self.m.StubOutWithMock(instances.Instance, 'nova')
- instances.Instance.nova().AndReturn(self.fc)
- self.m.ReplayAll()
-
- engine = service.EngineService('a', 't')
- res = dict(engine.validate_template(None, t))
- self.assertEqual(res['Description'], 'test.')
-
- def test_validate_ref_invalid(self):
- t = template_format.parse(test_template_ref % 'WikiDatabasez')
-
- self.m.StubOutWithMock(instances.Instance, 'nova')
- instances.Instance.nova().AndReturn(self.fc)
- self.m.ReplayAll()
-
- engine = service.EngineService('a', 't')
- res = dict(engine.validate_template(None, t))
- self.assertNotEqual(res['Description'], 'Successfully validated')
-
- def test_validate_findinmap_valid(self):
- t = template_format.parse(test_template_findinmap_valid)
-
- self.m.StubOutWithMock(instances.Instance, 'nova')
- instances.Instance.nova().AndReturn(self.fc)
- self.m.ReplayAll()
-
- engine = service.EngineService('a', 't')
- res = dict(engine.validate_template(None, t))
- self.assertEqual(res['Description'], 'test.')
-
- def test_validate_findinmap_invalid(self):
- t = template_format.parse(test_template_findinmap_invalid)
-
- self.m.StubOutWithMock(instances.Instance, 'nova')
- instances.Instance.nova().AndReturn(self.fc)
- self.m.ReplayAll()
-
- engine = service.EngineService('a', 't')
- res = dict(engine.validate_template(None, t))
- self.assertNotEqual(res['Description'], 'Successfully validated')
-
- def test_validate_parameters(self):
- t = template_format.parse(test_template_ref % 'WikiDatabase')
-
- self.m.StubOutWithMock(instances.Instance, 'nova')
- instances.Instance.nova().AndReturn(self.fc)
- self.m.ReplayAll()
-
- engine = service.EngineService('a', 't')
- res = dict(engine.validate_template(None, t))
- self.assertEqual(res['Parameters'], {'KeyName': {
- 'Type': 'String',
- 'Description': 'Name of an existing EC2KeyPair to enable SSH '
- 'access to the instances'}})
-
- def test_validate_properties(self):
- t = template_format.parse(test_template_invalid_property)
- self.m.StubOutWithMock(instances.Instance, 'nova')
- instances.Instance.nova().AndReturn(self.fc)
- self.m.ReplayAll()
-
- engine = service.EngineService('a', 't')
- res = dict(engine.validate_template(None, t))
- self.assertEqual(res, {'Error': 'Unknown Property UnknownProperty'})
-
- def test_invalid_resources(self):
- t = template_format.parse(test_template_invalid_resources)
- self.m.StubOutWithMock(instances.Instance, 'nova')
- instances.Instance.nova().AndReturn(self.fc)
- self.m.ReplayAll()
-
- engine = service.EngineService('a', 't')
- res = dict(engine.validate_template(None, t))
- self.assertEqual({'Error': 'Resources must contain Resource. '
- 'Found a [string] instead'},
- res)
-
- def test_unimplemented_property(self):
- t = template_format.parse(test_template_unimplemented_property)
- self.m.StubOutWithMock(instances.Instance, 'nova')
- instances.Instance.nova().AndReturn(self.fc)
- self.m.ReplayAll()
-
- engine = service.EngineService('a', 't')
- res = dict(engine.validate_template(None, t))
- self.assertEqual(
- res,
- {'Error': 'Property SourceDestCheck not implemented yet'})
-
- def test_invalid_deletion_policy(self):
- t = template_format.parse(test_template_invalid_deletion_policy)
- self.m.StubOutWithMock(instances.Instance, 'nova')
- instances.Instance.nova().AndReturn(self.fc)
- self.m.ReplayAll()
-
- engine = service.EngineService('a', 't')
- res = dict(engine.validate_template(None, t))
- self.assertEqual(res, {'Error': 'Invalid DeletionPolicy Destroy'})
-
- def test_snapshot_deletion_policy(self):
- t = template_format.parse(test_template_snapshot_deletion_policy)
- self.m.StubOutWithMock(instances.Instance, 'nova')
- instances.Instance.nova().AndReturn(self.fc)
- self.m.ReplayAll()
-
- engine = service.EngineService('a', 't')
- res = dict(engine.validate_template(None, t))
- self.assertEqual(
- res, {'Error': 'Snapshot DeletionPolicy not supported'})
-
- @skipIf(try_import('cinderclient.v1.volume_backups') is None,
- 'unable to import volume_backups')
- def test_volume_snapshot_deletion_policy(self):
- t = template_format.parse(test_template_volume_snapshot)
- self.m.StubOutWithMock(instances.Instance, 'nova')
- instances.Instance.nova().AndReturn(self.fc)
- self.m.ReplayAll()
-
- engine = service.EngineService('a', 't')
- res = dict(engine.validate_template(None, t))
- self.assertEqual(res, {'Description': u'test.', 'Parameters': {}})
-
- def test_unregistered_key(self):
- t = template_format.parse(test_unregistered_key)
- template = parser.Template(t)
- params = {'KeyName': 'not_registered'}
- stack = parser.Stack(self.ctx, 'test_stack', template,
- environment.Environment(params))
-
- self.m.StubOutWithMock(instances.Instance, 'nova')
- instances.Instance.nova().AndReturn(self.fc)
- instances.Instance.nova().AndReturn(self.fc)
- self.m.ReplayAll()
-
- resource = stack.resources['Instance']
- self.assertRaises(exception.UserKeyPairMissing, resource.validate)
-
- def test_unregistered_image(self):
- t = template_format.parse(test_template_image)
- template = parser.Template(t)
-
- stack = parser.Stack(self.ctx, 'test_stack', template,
- environment.Environment({'KeyName': 'test'}))
-
- self.m.StubOutWithMock(instances.Instance, 'nova')
- instances.Instance.nova().AndReturn(self.fc)
- instances.Instance.nova().AndReturn(self.fc)
- self.m.ReplayAll()
-
- resource = stack.resources['Instance']
- self.assertRaises(exception.ImageNotFound, resource.validate)
-
- self.m.VerifyAll()
-
- def test_duplicated_image(self):
- t = template_format.parse(test_template_image)
- template = parser.Template(t)
-
- stack = parser.Stack(self.ctx, 'test_stack', template,
- environment.Environment({'KeyName': 'test'}))
-
- class image_type(object):
-
- def __init__(self, id, name):
- self.id = id
- self.name = name
-
- image_list = [image_type(id='768b5464-3df5-4abf-be33-63b60f8b99d0',
- name='image_name'),
- image_type(id='a57384f5-690f-48e1-bf46-c4291e6c887e',
- name='image_name')]
-
- self.m.StubOutWithMock(self.fc.images, 'list')
- self.fc.images.list().AndReturn(image_list)
-
- self.m.StubOutWithMock(instances.Instance, 'nova')
- instances.Instance.nova().AndReturn(self.fc)
- instances.Instance.nova().AndReturn(self.fc)
- self.m.ReplayAll()
-
- resource = stack.resources['Instance']
- self.assertRaises(exception.NoUniqueImageFound, resource.validate)
-
- self.m.VerifyAll()
-
- def test_invalid_security_groups_with_nics(self):
- t = template_format.parse(test_template_invalid_secgroups)
- template = parser.Template(t)
- stack = parser.Stack(self.ctx, 'test_stack', template,
- environment.Environment({'KeyName': 'test'}))
-
- self.m.StubOutWithMock(instances.Instance, 'nova')
- instances.Instance.nova().AndReturn(self.fc)
- self.m.ReplayAll()
-
- resource = stack.resources['Instance']
- self.assertRaises(exception.ResourcePropertyConflict,
- resource.validate)
-
- def test_invalid_security_group_ids_with_nics(self):
- t = template_format.parse(test_template_invalid_secgroupids)
- template = parser.Template(t)
- stack = parser.Stack(self.ctx, 'test_stack', template,
- environment.Environment({'KeyName': 'test'}))
-
- self.m.StubOutWithMock(instances.Instance, 'nova')
- instances.Instance.nova().AndReturn(self.fc)
- self.m.ReplayAll()
-
- resource = stack.resources['Instance']
- self.assertRaises(exception.ResourcePropertyConflict,
- resource.validate)
-
- def test_client_exception_from_nova_client(self):
- t = template_format.parse(test_template_nova_client_exception)
- template = parser.Template(t)
- stack = parser.Stack(self.ctx, 'test_stack', template)
-
- self.m.StubOutWithMock(self.fc.images, 'list')
- self.fc.images.list()\
- .AndRaise(clients.novaclient.exceptions.ClientException(500))
- self.m.StubOutWithMock(instances.Instance, 'nova')
- instances.Instance.nova().AndReturn(self.fc)
- self.m.ReplayAll()
-
- self.assertRaises(exception.Error, stack.validate)
- self.m.VerifyAll()
-
- def test_validate_unique_logical_name(self):
- t = template_format.parse(test_template_unique_logical_name)
- template = parser.Template(t)
- stack = parser.Stack(self.ctx, 'test_stack', template,
- environment.Environment({'AName': 'test',
- 'KeyName': 'test'}))
-
- self.assertRaises(exception.StackValidationFailed, stack.validate)
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-
-import json
-
-from testtools import skipIf
-
-from heat.common import exception
-from heat.common import template_format
-from heat.engine import scheduler
-from heat.engine.resources import instance
-from heat.engine.resources import volume as vol
-from heat.engine import clients
-from heat.engine import resource
-from heat.openstack.common.importutils import try_import
-from heat.tests.common import HeatTestCase
-from heat.tests.v1_1 import fakes
-from heat.tests import utils
-
-from cinderclient.v1 import client as cinderclient
-
-
-volume_backups = try_import('cinderclient.v1.volume_backups')
-
-volume_template = '''
-{
- "AWSTemplateFormatVersion" : "2010-09-09",
- "Description" : "Volume Test",
- "Parameters" : {},
- "Resources" : {
- "WikiDatabase": {
- "Type": "AWS::EC2::Instance",
- "Properties": {
- "ImageId" : "foo",
- "InstanceType" : "m1.large",
- "KeyName" : "test",
- "UserData" : "some data"
- }
- },
- "DataVolume" : {
- "Type" : "AWS::EC2::Volume",
- "Properties" : {
- "Size" : "1",
- "AvailabilityZone" : {"Fn::GetAtt": ["WikiDatabase",
- "AvailabilityZone"]},
- "Tags" : [{ "Key" : "Usage", "Value" : "Wiki Data Volume" }]
- }
- },
- "MountPoint" : {
- "Type" : "AWS::EC2::VolumeAttachment",
- "Properties" : {
- "InstanceId" : { "Ref" : "WikiDatabase" },
- "VolumeId" : { "Ref" : "DataVolume" },
- "Device" : "/dev/vdc"
- }
- }
- }
-}
-'''
-
-
-class VolumeTest(HeatTestCase):
- def setUp(self):
- super(VolumeTest, self).setUp()
- self.fc = fakes.FakeClient()
- self.cinder_fc = cinderclient.Client('username', 'password')
- self.m.StubOutWithMock(clients.OpenStackClients, 'cinder')
- self.m.StubOutWithMock(clients.OpenStackClients, 'nova')
- self.m.StubOutWithMock(self.cinder_fc.volumes, 'create')
- self.m.StubOutWithMock(self.cinder_fc.volumes, 'get')
- self.m.StubOutWithMock(self.cinder_fc.volumes, 'delete')
- self.m.StubOutWithMock(self.fc.volumes, 'create_server_volume')
- self.m.StubOutWithMock(self.fc.volumes, 'delete_server_volume')
- utils.setup_dummy_db()
-
- def create_volume(self, t, stack, resource_name):
- data = t['Resources'][resource_name]
- data['Properties']['AvailabilityZone'] = 'nova'
- rsrc = vol.Volume(resource_name, data, stack)
-        self.assertIsNone(rsrc.validate())
- scheduler.TaskRunner(rsrc.create)()
- self.assertEqual(rsrc.state, (rsrc.CREATE, rsrc.COMPLETE))
- return rsrc
-
- def create_attachment(self, t, stack, resource_name):
- rsrc = vol.VolumeAttachment(resource_name,
- t['Resources'][resource_name],
- stack)
-        self.assertIsNone(rsrc.validate())
- scheduler.TaskRunner(rsrc.create)()
- self.assertEqual(rsrc.state, (rsrc.CREATE, rsrc.COMPLETE))
- return rsrc
-
- def _mock_create_volume(self, fv, stack_name):
- clients.OpenStackClients.cinder().MultipleTimes().AndReturn(
- self.cinder_fc)
- vol_name = utils.PhysName(stack_name, 'DataVolume')
- self.cinder_fc.volumes.create(
- size=u'1', availability_zone='nova',
- display_description=vol_name,
- display_name=vol_name).AndReturn(fv)
-
- def _stubout_delete_volume(self, fv):
- self.m.StubOutWithMock(fv, 'delete')
- fv.delete().AndReturn(True)
- self.m.StubOutWithMock(fv, 'get')
- fv.get().AndReturn(None)
- fv.get().AndRaise(
- clients.cinderclient.exceptions.NotFound('Not found'))
- self.m.ReplayAll()
-
- def _mock_create_server_volume_script(self, fva):
- clients.OpenStackClients.nova().MultipleTimes().AndReturn(self.fc)
- self.fc.volumes.create_server_volume(
- device=u'/dev/vdc', server_id=u'WikiDatabase',
- volume_id=u'vol-123').AndReturn(fva)
- self.cinder_fc.volumes.get('vol-123').AndReturn(fva)
-
- def test_volume(self):
- fv = FakeVolume('creating', 'available')
- stack_name = 'test_volume_stack'
-
- # create script
- self._mock_create_volume(fv, stack_name)
-
- # delete script
- self.cinder_fc.volumes.get('vol-123').AndReturn(fv)
-
- self.cinder_fc.volumes.get('vol-123').AndReturn(fv)
- self.m.ReplayAll()
-
- t = template_format.parse(volume_template)
- stack = utils.parse_stack(t, stack_name=stack_name)
-
- rsrc = self.create_volume(t, stack, 'DataVolume')
- self.assertEqual(fv.status, 'available')
-
- self.assertRaises(resource.UpdateReplace,
- rsrc.handle_update, {}, {}, {})
-
- fv.status = 'in-use'
- self.assertRaises(exception.ResourceFailure,
- scheduler.TaskRunner(rsrc.destroy))
-
- self._stubout_delete_volume(fv)
- fv.status = 'available'
- scheduler.TaskRunner(rsrc.destroy)()
-
- # Test when volume already deleted
- rsrc.state_set(rsrc.CREATE, rsrc.COMPLETE)
- scheduler.TaskRunner(rsrc.destroy)()
-
- self.m.VerifyAll()
-
- def test_volume_default_az(self):
- fv = FakeVolume('creating', 'available')
- stack_name = 'test_volume_stack'
-
- # create script
- self.m.StubOutWithMock(instance.Instance, 'handle_create')
- self.m.StubOutWithMock(instance.Instance, 'check_create_complete')
- self.m.StubOutWithMock(vol.VolumeAttachment, 'handle_create')
- self.m.StubOutWithMock(vol.VolumeAttachment, 'check_create_complete')
- instance.Instance.handle_create().AndReturn(None)
- instance.Instance.check_create_complete(None).AndReturn(True)
- clients.OpenStackClients.cinder().MultipleTimes().AndReturn(
- self.cinder_fc)
- vol_name = utils.PhysName(stack_name, 'DataVolume')
- self.cinder_fc.volumes.create(
- size=u'1', availability_zone=None,
- display_description=vol_name,
- display_name=vol_name).AndReturn(fv)
- vol.VolumeAttachment.handle_create().AndReturn(None)
- vol.VolumeAttachment.check_create_complete(None).AndReturn(True)
-
- # delete script
- self.m.StubOutWithMock(instance.Instance, 'handle_delete')
- self.m.StubOutWithMock(vol.VolumeAttachment, 'handle_delete')
- instance.Instance.handle_delete().AndReturn(None)
- self.cinder_fc.volumes.get('vol-123').AndRaise(
- clients.cinderclient.exceptions.NotFound('Not found'))
- vol.VolumeAttachment.handle_delete().AndReturn(None)
- self.m.ReplayAll()
-
- t = template_format.parse(volume_template)
- stack = utils.parse_stack(t, stack_name=stack_name)
-
- rsrc = stack['DataVolume']
-        self.assertIsNone(rsrc.validate())
- scheduler.TaskRunner(stack.create)()
- self.assertEqual(rsrc.state, (rsrc.CREATE, rsrc.COMPLETE))
-
- scheduler.TaskRunner(stack.delete)()
-
- self.m.VerifyAll()
-
- def test_volume_create_error(self):
- fv = FakeVolume('creating', 'error')
- stack_name = 'test_volume_create_error_stack'
-
- self._mock_create_volume(fv, stack_name)
-
- self.m.ReplayAll()
-
- t = template_format.parse(volume_template)
- t['Resources']['DataVolume']['Properties']['AvailabilityZone'] = 'nova'
- stack = utils.parse_stack(t, stack_name=stack_name)
-
- rsrc = vol.Volume('DataVolume',
- t['Resources']['DataVolume'],
- stack)
- create = scheduler.TaskRunner(rsrc.create)
- self.assertRaises(exception.ResourceFailure, create)
-
- self.m.VerifyAll()
-
- def test_volume_attachment_error(self):
- fv = FakeVolume('creating', 'available')
- fva = FakeVolume('attaching', 'error')
- stack_name = 'test_volume_attach_error_stack'
-
- self._mock_create_volume(fv, stack_name)
-
- self._mock_create_server_volume_script(fva)
-
- self.m.ReplayAll()
-
- t = template_format.parse(volume_template)
- t['Resources']['DataVolume']['Properties']['AvailabilityZone'] = 'nova'
- stack = utils.parse_stack(t, stack_name=stack_name)
-
- scheduler.TaskRunner(stack['DataVolume'].create)()
- self.assertEqual(fv.status, 'available')
- rsrc = vol.VolumeAttachment('MountPoint',
- t['Resources']['MountPoint'],
- stack)
- create = scheduler.TaskRunner(rsrc.create)
- self.assertRaises(exception.ResourceFailure, create)
-
- self.m.VerifyAll()
-
- def test_volume_attachment(self):
- fv = FakeVolume('creating', 'available')
- fva = FakeVolume('attaching', 'in-use')
- stack_name = 'test_volume_attach_stack'
-
- self._mock_create_volume(fv, stack_name)
-
- self._mock_create_server_volume_script(fva)
-
- # delete script
- fva = FakeVolume('in-use', 'available')
- self.fc.volumes.delete_server_volume('WikiDatabase',
- 'vol-123').AndReturn(None)
- self.cinder_fc.volumes.get('vol-123').AndReturn(fva)
-
- self.m.ReplayAll()
-
- t = template_format.parse(volume_template)
- t['Resources']['DataVolume']['Properties']['AvailabilityZone'] = 'nova'
- stack = utils.parse_stack(t, stack_name=stack_name)
-
- scheduler.TaskRunner(stack['DataVolume'].create)()
- self.assertEqual(fv.status, 'available')
- rsrc = self.create_attachment(t, stack, 'MountPoint')
-
- self.assertRaises(resource.UpdateReplace,
- rsrc.handle_update, {}, {}, {})
-
- scheduler.TaskRunner(rsrc.delete)()
-
- self.m.VerifyAll()
-
- def test_volume_detachment_err(self):
- fv = FakeVolume('creating', 'available')
- fva = FakeVolume('in-use', 'available')
- stack_name = 'test_volume_detach_stack'
-
- self._mock_create_volume(fv, stack_name)
-
- self._mock_create_server_volume_script(fva)
-
- # delete script
-        fva = FakeVolume('in-use', 'available')
- self.m.StubOutWithMock(fva, 'get')
- fva.get().MultipleTimes()
- fva.status = "in-use"
-
- self.cinder_fc.volumes.get('vol-123').AndReturn(fva)
-
- self.fc.volumes.delete_server_volume(
- 'WikiDatabase', 'vol-123').AndRaise(
- clients.novaclient.exceptions.NotFound('Not found'))
-
- self.fc.volumes.delete_server_volume(
- 'WikiDatabase', 'vol-123').AndRaise(
- clients.novaclient.exceptions.NotFound('Not found'))
-
- self.fc.volumes.delete_server_volume(
- 'WikiDatabase', 'vol-123').AndRaise(
- clients.cinderclient.exceptions.NotFound('Not found'))
-
- self.m.ReplayAll()
-
- t = template_format.parse(volume_template)
- t['Resources']['DataVolume']['Properties']['AvailabilityZone'] = 'nova'
- stack = utils.parse_stack(t, stack_name=stack_name)
-
- scheduler.TaskRunner(stack['DataVolume'].create)()
- self.assertEqual(fv.status, 'available')
- rsrc = self.create_attachment(t, stack, 'MountPoint')
-
- self.assertRaises(resource.UpdateReplace,
- rsrc.handle_update, {}, {}, {})
-
- scheduler.TaskRunner(rsrc.delete)()
-
- self.m.VerifyAll()
-
- def test_volume_detach_non_exist(self):
- fv = FakeVolume('creating', 'available')
- fva = FakeVolume('in-use', 'available')
- stack_name = 'test_volume_detach_stack'
-
- self._mock_create_volume(fv, stack_name)
-
- self._mock_create_server_volume_script(fva)
-
- # delete script
- self.cinder_fc.volumes.get('vol-123').AndRaise(
- clients.cinderclient.exceptions.NotFound('Not found'))
-
- self.m.ReplayAll()
-
- t = template_format.parse(volume_template)
- t['Resources']['DataVolume']['Properties']['AvailabilityZone'] = 'nova'
- stack = utils.parse_stack(t, stack_name=stack_name)
-
- scheduler.TaskRunner(stack['DataVolume'].create)()
- rsrc = self.create_attachment(t, stack, 'MountPoint')
-
- scheduler.TaskRunner(rsrc.delete)()
-
- self.m.VerifyAll()
-
- def test_volume_detach_with_latency(self):
- fv = FakeVolume('creating', 'available')
- fva = FakeVolume('attaching', 'in-use')
- stack_name = 'test_volume_attach_stack'
-
- self._mock_create_volume(fv, stack_name)
-
- self._mock_create_server_volume_script(fva)
-
- # delete script
- volume_detach_cycle = 'in-use', 'detaching', 'available'
- fva = FakeLatencyVolume(life_cycle=volume_detach_cycle)
- self.fc.volumes.delete_server_volume(
- 'WikiDatabase', 'vol-123').MultipleTimes().AndReturn(None)
- self.cinder_fc.volumes.get('vol-123').AndReturn(fva)
-
- self.m.ReplayAll()
-
- t = template_format.parse(volume_template)
- t['Resources']['DataVolume']['Properties']['AvailabilityZone'] = 'nova'
- stack = utils.parse_stack(t, stack_name=stack_name)
-
- scheduler.TaskRunner(stack['DataVolume'].create)()
- self.assertEqual(fv.status, 'available')
- rsrc = self.create_attachment(t, stack, 'MountPoint')
-
- scheduler.TaskRunner(rsrc.delete)()
-
- self.m.VerifyAll()
-
- def test_volume_detach_with_error(self):
- fv = FakeVolume('creating', 'available')
- fva = FakeVolume('attaching', 'in-use')
- stack_name = 'test_volume_attach_stack'
-
- self._mock_create_volume(fv, stack_name)
-
- self._mock_create_server_volume_script(fva)
-
- # delete script
- fva = FakeVolume('in-use', 'error')
- self.fc.volumes.delete_server_volume('WikiDatabase',
- 'vol-123').AndReturn(None)
- self.cinder_fc.volumes.get('vol-123').AndReturn(fva)
-
- self.m.ReplayAll()
-
- t = template_format.parse(volume_template)
- t['Resources']['DataVolume']['Properties']['AvailabilityZone'] = 'nova'
- stack = utils.parse_stack(t, stack_name=stack_name)
-
- scheduler.TaskRunner(stack['DataVolume'].create)()
- self.assertEqual(fv.status, 'available')
- rsrc = self.create_attachment(t, stack, 'MountPoint')
- detach_task = scheduler.TaskRunner(rsrc.delete)
-
- self.assertRaises(exception.ResourceFailure, detach_task)
-
- self.m.VerifyAll()
-
- @skipIf(volume_backups is None, 'unable to import volume_backups')
- def test_snapshot(self):
- stack_name = 'test_volume_stack'
- fv = FakeVolume('creating', 'available')
- fb = FakeBackup('creating', 'available')
-
- self._mock_create_volume(fv, stack_name)
-
- # snapshot script
- self.m.StubOutWithMock(self.cinder_fc.backups, 'create')
- self.cinder_fc.backups.create('vol-123').AndReturn(fb)
- self.cinder_fc.volumes.get('vol-123').AndReturn(fv)
-
- self.m.ReplayAll()
-
- t = template_format.parse(volume_template)
- t['Resources']['DataVolume']['DeletionPolicy'] = 'Snapshot'
- stack = utils.parse_stack(t, stack_name=stack_name)
-
- rsrc = self.create_volume(t, stack, 'DataVolume')
-
- self._stubout_delete_volume(fv)
- scheduler.TaskRunner(rsrc.destroy)()
-
- self.m.VerifyAll()
-
- @skipIf(volume_backups is None, 'unable to import volume_backups')
- def test_snapshot_error(self):
- stack_name = 'test_volume_stack'
- fv = FakeVolume('creating', 'available')
- fb = FakeBackup('creating', 'error')
-
- self._mock_create_volume(fv, stack_name)
-
- # snapshot script
- self.cinder_fc.volumes.get('vol-123').AndReturn(fv)
- self.m.StubOutWithMock(self.cinder_fc.backups, 'create')
- self.cinder_fc.backups.create('vol-123').AndReturn(fb)
- self.m.ReplayAll()
-
- t = template_format.parse(volume_template)
- t['Resources']['DataVolume']['DeletionPolicy'] = 'Snapshot'
- stack = utils.parse_stack(t, stack_name=stack_name)
-
- rsrc = self.create_volume(t, stack, 'DataVolume')
-
- self.assertRaises(exception.ResourceFailure,
- scheduler.TaskRunner(rsrc.destroy))
-
- self.m.VerifyAll()
-
- @skipIf(volume_backups is None, 'unable to import volume_backups')
- def test_snapshot_no_volume(self):
- stack_name = 'test_volume_stack'
- fv = FakeVolume('creating', 'error')
-
- self._mock_create_volume(fv, stack_name)
-
- self.cinder_fc.volumes.get('vol-123').AndReturn(fv)
-
- self.m.ReplayAll()
-
- t = template_format.parse(volume_template)
- t['Resources']['DataVolume']['DeletionPolicy'] = 'Snapshot'
- t['Resources']['DataVolume']['Properties']['AvailabilityZone'] = 'nova'
- stack = utils.parse_stack(t, stack_name=stack_name)
- rsrc = vol.Volume('DataVolume',
- t['Resources']['DataVolume'],
- stack)
-
- create = scheduler.TaskRunner(rsrc.create)
- self.assertRaises(exception.ResourceFailure, create)
-
- self._stubout_delete_volume(fv)
- scheduler.TaskRunner(rsrc.destroy)()
-
- self.m.VerifyAll()
-
- @skipIf(volume_backups is None, 'unable to import volume_backups')
- def test_create_from_snapshot(self):
- stack_name = 'test_volume_stack'
- fv = FakeVolumeFromBackup('restoring-backup', 'available')
- fvbr = FakeBackupRestore('vol-123')
-
- # create script
- clients.OpenStackClients.cinder().MultipleTimes().AndReturn(
- self.cinder_fc)
- self.m.StubOutWithMock(self.cinder_fc.restores, 'restore')
- self.cinder_fc.restores.restore('backup-123').AndReturn(fvbr)
- self.cinder_fc.volumes.get('vol-123').AndReturn(fv)
- self.m.StubOutWithMock(fv, 'update')
- vol_name = utils.PhysName(stack_name, 'DataVolume')
- fv.update(
- display_description=vol_name,
- display_name=vol_name)
-
- self.m.ReplayAll()
-
- t = template_format.parse(volume_template)
- t['Resources']['DataVolume']['Properties']['SnapshotId'] = 'backup-123'
- stack = utils.parse_stack(t, stack_name=stack_name)
-
- self.create_volume(t, stack, 'DataVolume')
- self.assertEqual(fv.status, 'available')
-
- self.m.VerifyAll()
-
- @skipIf(volume_backups is None, 'unable to import volume_backups')
- def test_create_from_snapshot_error(self):
- stack_name = 'test_volume_stack'
- fv = FakeVolumeFromBackup('restoring-backup', 'error')
- fvbr = FakeBackupRestore('vol-123')
-
- # create script
- clients.OpenStackClients.cinder().MultipleTimes().AndReturn(
- self.cinder_fc)
- self.m.StubOutWithMock(self.cinder_fc.restores, 'restore')
- self.cinder_fc.restores.restore('backup-123').AndReturn(fvbr)
- self.cinder_fc.volumes.get('vol-123').AndReturn(fv)
- self.m.StubOutWithMock(fv, 'update')
- vol_name = utils.PhysName(stack_name, 'DataVolume')
- fv.update(
- display_description=vol_name,
- display_name=vol_name)
-
- self.m.ReplayAll()
-
- t = template_format.parse(volume_template)
- t['Resources']['DataVolume']['Properties']['SnapshotId'] = 'backup-123'
- t['Resources']['DataVolume']['Properties']['AvailabilityZone'] = 'nova'
- stack = utils.parse_stack(t, stack_name=stack_name)
-
- rsrc = vol.Volume('DataVolume',
- t['Resources']['DataVolume'],
- stack)
- create = scheduler.TaskRunner(rsrc.create)
- self.assertRaises(exception.ResourceFailure, create)
-
- self.m.VerifyAll()
-
- def test_cinder_create(self):
- fv = FakeVolume('creating', 'available')
- stack_name = 'test_volume_stack'
-
- clients.OpenStackClients.cinder().MultipleTimes().AndReturn(
- self.cinder_fc)
- self.cinder_fc.volumes.create(
- size=u'1', availability_zone='nova',
- display_description='CustomDescription',
- display_name='CustomName',
- imageRef='Image1',
- snapshot_id='snap-123',
- metadata={'key': 'value'},
- source_volid='vol-012',
- volume_type='lvm').AndReturn(fv)
-
- self.m.ReplayAll()
-
- t = template_format.parse(volume_template)
- t['Resources']['DataVolume']['Properties'] = {
- 'size': '1',
- 'availability_zone': 'nova',
- 'name': 'CustomName',
- 'description': 'CustomDescription',
- 'volume_type': 'lvm',
- 'metadata': {'key': 'value'},
-            # Note that specifying all these arguments doesn't work in
-            # practice, as they are conflicting, but we just want to check
-            # that they are sent to the backend.
- 'imageRef': 'Image1',
- 'snapshot_id': 'snap-123',
- 'source_volid': 'vol-012',
- }
- stack = utils.parse_stack(t, stack_name=stack_name)
-
- rsrc = vol.CinderVolume('DataVolume',
- t['Resources']['DataVolume'],
- stack)
- self.assertEqual(rsrc.validate(), None)
- scheduler.TaskRunner(rsrc.create)()
- self.assertEqual(rsrc.state, (rsrc.CREATE, rsrc.COMPLETE))
- self.assertEqual(fv.status, 'available')
-
- self.m.VerifyAll()
-
- def test_cinder_default(self):
- fv = FakeVolume('creating', 'available')
- stack_name = 'test_volume_stack'
-
- clients.OpenStackClients.cinder().MultipleTimes().AndReturn(
- self.cinder_fc)
- vol_name = utils.PhysName(stack_name, 'DataVolume')
- self.cinder_fc.volumes.create(
- size=u'1', availability_zone='nova',
- display_description=None,
- display_name=vol_name).AndReturn(fv)
-
- self.m.ReplayAll()
-
- t = template_format.parse(volume_template)
- t['Resources']['DataVolume']['Properties'] = {
- 'size': '1',
- 'availability_zone': 'nova',
- }
- stack = utils.parse_stack(t, stack_name=stack_name)
-
- rsrc = vol.CinderVolume('DataVolume',
- t['Resources']['DataVolume'],
- stack)
- self.assertEqual(rsrc.validate(), None)
- scheduler.TaskRunner(rsrc.create)()
- self.assertEqual(rsrc.state, (rsrc.CREATE, rsrc.COMPLETE))
- self.assertEqual(fv.status, 'available')
-
- self.m.VerifyAll()
-
- def test_cinder_fn_getatt(self):
- fv = FakeVolume('creating', 'available', availability_zone='zone1',
- size=1, snapshot_id='snap-123', display_name='name',
- display_description='desc', volume_type='lvm',
- metadata={'key': 'value'}, source_volid=None,
- status='available', bootable=False,
- created_at='2013-02-25T02:40:21.000000')
- stack_name = 'test_volume_stack'
-
- clients.OpenStackClients.cinder().MultipleTimes().AndReturn(
- self.cinder_fc)
- vol_name = utils.PhysName(stack_name, 'DataVolume')
- self.cinder_fc.volumes.create(
- size=u'1', availability_zone='nova',
- display_description=None,
- display_name=vol_name).AndReturn(fv)
-
- self.cinder_fc.volumes.get('vol-123').MultipleTimes().AndReturn(fv)
-
- self.m.ReplayAll()
-
- t = template_format.parse(volume_template)
- t['Resources']['DataVolume']['Properties'] = {
- 'size': '1',
- 'availability_zone': 'nova',
- }
- stack = utils.parse_stack(t, stack_name=stack_name)
-
- rsrc = vol.CinderVolume('DataVolume',
- t['Resources']['DataVolume'],
- stack)
- scheduler.TaskRunner(rsrc.create)()
- self.assertEqual(u'vol-123', rsrc.FnGetAtt('id'))
- self.assertEqual(u'zone1', rsrc.FnGetAtt('availability_zone'))
- self.assertEqual(u'1', rsrc.FnGetAtt('size'))
- self.assertEqual(u'snap-123', rsrc.FnGetAtt('snapshot_id'))
- self.assertEqual(u'name', rsrc.FnGetAtt('display_name'))
- self.assertEqual(u'desc', rsrc.FnGetAtt('display_description'))
- self.assertEqual(u'lvm', rsrc.FnGetAtt('volume_type'))
- self.assertEqual(json.dumps({'key': 'value'}),
- rsrc.FnGetAtt('metadata'))
- self.assertEqual(u'None', rsrc.FnGetAtt('source_volid'))
- self.assertEqual(u'available', rsrc.FnGetAtt('status'))
- self.assertEqual(u'2013-02-25T02:40:21.000000',
- rsrc.FnGetAtt('created_at'))
- self.assertEqual(u'False', rsrc.FnGetAtt('bootable'))
- error = self.assertRaises(exception.InvalidTemplateAttribute,
- rsrc.FnGetAtt, 'unknown')
- self.assertEqual(
- 'The Referenced Attribute (DataVolume unknown) is incorrect.',
- str(error))
-
- self.m.VerifyAll()
-
- def test_cinder_attachment(self):
- fv = FakeVolume('creating', 'available')
- fva = FakeVolume('attaching', 'in-use')
- stack_name = 'test_volume_attach_stack'
-
- self._mock_create_volume(fv, stack_name)
-
- self._mock_create_server_volume_script(fva)
-
- # delete script
- fva = FakeVolume('in-use', 'available')
- self.fc.volumes.delete_server_volume('WikiDatabase',
- 'vol-123').AndReturn(None)
- self.cinder_fc.volumes.get('vol-123').AndReturn(fva)
-
- self.m.ReplayAll()
-
- t = template_format.parse(volume_template)
- t['Resources']['DataVolume']['Properties']['AvailabilityZone'] = 'nova'
- t['Resources']['MountPoint']['Properties'] = {
- 'instance_uuid': {'Ref': 'WikiDatabase'},
- 'volume_id': {'Ref': 'DataVolume'},
- 'mountpoint': '/dev/vdc'
- }
- stack = utils.parse_stack(t, stack_name=stack_name)
-
- scheduler.TaskRunner(stack['DataVolume'].create)()
- self.assertEqual(fv.status, 'available')
- rsrc = vol.CinderVolumeAttachment('MountPoint',
- t['Resources']['MountPoint'],
- stack)
- self.assertEqual(rsrc.validate(), None)
- scheduler.TaskRunner(rsrc.create)()
- self.assertEqual(rsrc.state, (rsrc.CREATE, rsrc.COMPLETE))
-
- self.assertRaises(resource.UpdateReplace, rsrc.handle_update,
- {}, {}, {})
-
- scheduler.TaskRunner(rsrc.delete)()
-
- self.m.VerifyAll()
-
-
-class FakeVolume(object):
- status = 'attaching'
- id = 'vol-123'
-
- def __init__(self, initial_status, final_status, **attrs):
- self.status = initial_status
- self.final_status = final_status
- for key, value in attrs.iteritems():
- setattr(self, key, value)
-
- def get(self):
- self.status = self.final_status
-
- def update(self, **kw):
- pass
-
- def delete(self):
- pass
-
-
-class FakeLatencyVolume(object):
- status = 'attaching'
- id = 'vol-123'
-
- def __init__(self, life_cycle=('creating', 'available'), **attrs):
- if not isinstance(life_cycle, tuple):
-            raise exception.Error('life_cycle needs to be a tuple.')
- if not len(life_cycle):
- raise exception.Error('life_cycle should not be an empty tuple.')
- self.life_cycle = iter(life_cycle)
- self.status = next(self.life_cycle)
- for key, value in attrs.iteritems():
- setattr(self, key, value)
-
- def get(self):
- self.status = next(self.life_cycle)
-
- def update(self, **kw):
- pass
-
-
-class FakeBackup(FakeVolume):
- status = 'creating'
- id = 'backup-123'
-
-
-class FakeBackupRestore(object):
- volume_id = 'vol-123'
-
- def __init__(self, volume_id):
- self.volume_id = volume_id
-
-
-class FakeVolumeFromBackup(FakeVolume):
- status = 'restoring-backup'
- get_call_count = 0
-
- def get(self):
-        # Allow get to be called once without changing the status.
-        # This is to allow the check_create_complete method to
-        # check the initial status.
- if self.get_call_count < 1:
- self.get_call_count += 1
- else:
- self.status = self.final_status
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from testtools import skipIf
-
-from heat.common import exception
-from heat.common import template_format
-from heat.engine import parser
-from heat.engine import clients
-from heat.engine import resource
-from heat.engine import scheduler
-from heat.tests.common import HeatTestCase
-from heat.tests import fakes
-from heat.tests import utils
-
-try:
- from neutronclient.common.exceptions import NeutronClientException
- from neutronclient.v2_0 import client as neutronclient
-except ImportError:
- neutronclient = None
-
-
-class VPCTestBase(HeatTestCase):
-
-    @skipIf(neutronclient is None, 'neutronclient unavailable')
- def setUp(self):
- super(VPCTestBase, self).setUp()
- utils.setup_dummy_db()
- self.m.StubOutWithMock(neutronclient.Client, 'add_interface_router')
- self.m.StubOutWithMock(neutronclient.Client, 'add_gateway_router')
- self.m.StubOutWithMock(neutronclient.Client, 'create_network')
- self.m.StubOutWithMock(neutronclient.Client, 'create_port')
- self.m.StubOutWithMock(neutronclient.Client, 'create_router')
- self.m.StubOutWithMock(neutronclient.Client, 'create_subnet')
- self.m.StubOutWithMock(neutronclient.Client, 'delete_network')
- self.m.StubOutWithMock(neutronclient.Client, 'delete_port')
- self.m.StubOutWithMock(neutronclient.Client, 'delete_router')
- self.m.StubOutWithMock(neutronclient.Client, 'delete_subnet')
- self.m.StubOutWithMock(neutronclient.Client, 'list_networks')
- self.m.StubOutWithMock(neutronclient.Client, 'list_routers')
- self.m.StubOutWithMock(neutronclient.Client, 'remove_gateway_router')
- self.m.StubOutWithMock(neutronclient.Client, 'remove_interface_router')
- self.m.StubOutWithMock(neutronclient.Client, 'show_subnet')
- self.m.StubOutWithMock(neutronclient.Client, 'show_network')
- self.m.StubOutWithMock(neutronclient.Client, 'show_router')
- self.m.StubOutWithMock(neutronclient.Client, 'create_security_group')
- self.m.StubOutWithMock(neutronclient.Client, 'show_security_group')
- self.m.StubOutWithMock(neutronclient.Client, 'delete_security_group')
- self.m.StubOutWithMock(
- neutronclient.Client, 'create_security_group_rule')
- self.m.StubOutWithMock(
- neutronclient.Client, 'delete_security_group_rule')
- self.m.StubOutWithMock(clients.OpenStackClients, 'keystone')
-
- def create_stack(self, template):
- t = template_format.parse(template)
- stack = self.parse_stack(t)
- self.assertEqual(None, stack.create())
- return stack
-
- def parse_stack(self, t):
- stack_name = 'test_stack'
- tmpl = parser.Template(t)
- stack = parser.Stack(utils.dummy_context(), stack_name, tmpl)
- stack.store()
- return stack
-
- def mock_keystone(self):
- clients.OpenStackClients.keystone().AndReturn(
- fakes.FakeKeystoneClient())
-
- def mock_create_network(self):
- self.vpc_name = utils.PhysName('test_stack', 'the_vpc')
- neutronclient.Client.create_network(
- {
- 'network': {'name': self.vpc_name}
- }).AndReturn({'network': {
- 'status': 'BUILD',
- 'subnets': [],
- 'name': 'name',
- 'admin_state_up': True,
- 'shared': False,
- 'tenant_id': 'c1210485b2424d48804aad5d39c61b8f',
- 'id': 'aaaa'
- }})
- neutronclient.Client.show_network(
- 'aaaa'
- ).AndReturn({"network": {
- "status": "BUILD",
- "subnets": [],
- "name": self.vpc_name,
- "admin_state_up": False,
- "shared": False,
- "tenant_id": "c1210485b2424d48804aad5d39c61b8f",
- "id": "aaaa"
- }})
-
- neutronclient.Client.show_network(
- 'aaaa'
- ).MultipleTimes().AndReturn({"network": {
- "status": "ACTIVE",
- "subnets": [],
- "name": self.vpc_name,
- "admin_state_up": False,
- "shared": False,
- "tenant_id": "c1210485b2424d48804aad5d39c61b8f",
- "id": "aaaa"
- }})
- neutronclient.Client.create_router(
- {'router': {'name': self.vpc_name}}).AndReturn({
- 'router': {
- 'status': 'BUILD',
- 'name': self.vpc_name,
- 'admin_state_up': True,
- 'tenant_id': 'c1210485b2424d48804aad5d39c61b8f',
- 'id': 'bbbb'
- }})
- neutronclient.Client.list_routers(name=self.vpc_name).AndReturn({
- "routers": [{
- "status": "BUILD",
- "external_gateway_info": None,
- "name": self.vpc_name,
- "admin_state_up": True,
- "tenant_id": "3e21026f2dc94372b105808c0e721661",
- "routes": [],
- "id": "bbbb"
- }]
- })
- self.mock_router_for_vpc()
-
- def mock_create_subnet(self):
- self.subnet_name = utils.PhysName('test_stack', 'the_subnet')
- neutronclient.Client.create_subnet(
- {'subnet': {
- 'network_id': u'aaaa',
- 'cidr': u'10.0.0.0/24',
- 'ip_version': 4,
- 'name': self.subnet_name}}).AndReturn({
- 'subnet': {
- 'status': 'ACTIVE',
- 'name': self.subnet_name,
- 'admin_state_up': True,
- 'tenant_id': 'c1210485b2424d48804aad5d39c61b8f',
- 'id': 'cccc'}})
- self.mock_router_for_vpc()
- neutronclient.Client.add_interface_router(
- u'bbbb',
- {'subnet_id': 'cccc'}).AndReturn(None)
-
- def mock_show_subnet(self):
- neutronclient.Client.show_subnet('cccc').AndReturn({
- 'subnet': {
- 'name': self.subnet_name,
- 'network_id': 'aaaa',
- 'tenant_id': 'c1210485b2424d48804aad5d39c61b8f',
- 'allocation_pools': [{'start': '10.0.0.2',
- 'end': '10.0.0.254'}],
- 'gateway_ip': '10.0.0.1',
- 'ip_version': 4,
- 'cidr': '10.0.0.0/24',
- 'id': 'cccc',
- 'enable_dhcp': False,
- }})
-
- def mock_create_security_group(self):
- self.sg_name = utils.PhysName('test_stack', 'the_sg')
- neutronclient.Client.create_security_group({
- 'security_group': {
- 'name': self.sg_name,
- 'description': 'SSH access'
- }
- }).AndReturn({
- 'security_group': {
- 'tenant_id': 'c1210485b2424d48804aad5d39c61b8f',
- 'name': self.sg_name,
- 'description': 'SSH access',
- 'security_group_rules': [],
- 'id': 'eeee'
- }
- })
-
- neutronclient.Client.create_security_group_rule({
- 'security_group_rule': {
- 'direction': 'ingress',
- 'remote_group_id': None,
- 'remote_ip_prefix': '0.0.0.0/0',
- 'port_range_min': '22',
- 'ethertype': 'IPv4',
- 'port_range_max': '22',
- 'protocol': 'tcp',
- 'security_group_id': 'eeee'
- }
- }).AndReturn({
- 'security_group_rule': {
- 'direction': 'ingress',
- 'remote_group_id': None,
- 'remote_ip_prefix': '0.0.0.0/0',
- 'port_range_min': '22',
- 'ethertype': 'IPv4',
- 'port_range_max': '22',
- 'protocol': 'tcp',
- 'security_group_id': 'eeee',
- 'id': 'bbbb'
- }
- })
-
- def mock_show_security_group(self, group='eeee'):
- sg_name = utils.PhysName('test_stack', 'the_sg')
- if group == 'eeee':
- neutronclient.Client.show_security_group(group).AndReturn({
- 'security_group': {
- 'tenant_id': 'c1210485b2424d48804aad5d39c61b8f',
- 'name': sg_name,
- 'description': '',
- 'security_group_rules': [{
- 'direction': 'ingress',
- 'protocol': 'tcp',
- 'port_range_max': '22',
- 'id': 'bbbb',
- 'ethertype': 'IPv4',
- 'security_group_id': 'eeee',
- 'remote_group_id': None,
- 'remote_ip_prefix': '0.0.0.0/0',
- 'tenant_id': 'c1210485b2424d48804aad5d39c61b8f',
- 'port_range_min': '22'
- }],
- 'id': 'eeee'}})
- elif group == 'INVALID-NO-REF':
- neutronclient.Client.show_security_group(group).AndRaise(
- NeutronClientException(status_code=404))
- elif group == 'RaiseException':
- neutronclient.Client.show_security_group('eeee').AndRaise(
- NeutronClientException(status_code=403))
-
- def mock_delete_security_group(self):
- self.mock_show_security_group()
- neutronclient.Client.delete_security_group_rule('bbbb').AndReturn(None)
- neutronclient.Client.delete_security_group('eeee').AndReturn(None)
-
- def mock_router_for_vpc(self):
- neutronclient.Client.list_routers(name=self.vpc_name).AndReturn({
- "routers": [{
- "status": "ACTIVE",
- "external_gateway_info": {
- "network_id": "zzzz",
- "enable_snat": True},
- "name": self.vpc_name,
- "admin_state_up": True,
- "tenant_id": "3e21026f2dc94372b105808c0e721661",
- "routes": [],
- "id": "bbbb"
- }]
- })
-
- def mock_delete_network(self):
- self.mock_router_for_vpc()
- neutronclient.Client.delete_router('bbbb').AndReturn(None)
- neutronclient.Client.delete_network('aaaa').AndReturn(None)
-
- def mock_delete_subnet(self):
- self.mock_router_for_vpc()
- neutronclient.Client.remove_interface_router(
- u'bbbb',
- {'subnet_id': 'cccc'}).AndReturn(None)
- neutronclient.Client.delete_subnet('cccc').AndReturn(None)
-
- def mock_create_route_table(self):
- self.rt_name = utils.PhysName('test_stack', 'the_route_table')
- neutronclient.Client.create_router({
- 'router': {'name': self.rt_name}}).AndReturn({
- 'router': {
- 'status': 'BUILD',
- 'name': self.rt_name,
- 'admin_state_up': True,
- 'tenant_id': 'c1210485b2424d48804aad5d39c61b8f',
- 'id': 'ffff'
- }
- })
- neutronclient.Client.show_router('ffff').AndReturn({
- 'router': {
- 'status': 'BUILD',
- 'name': self.rt_name,
- 'admin_state_up': True,
- 'tenant_id': 'c1210485b2424d48804aad5d39c61b8f',
- 'id': 'ffff'
- }
- })
- neutronclient.Client.show_router('ffff').AndReturn({
- 'router': {
- 'status': 'ACTIVE',
- 'name': self.rt_name,
- 'admin_state_up': True,
- 'tenant_id': 'c1210485b2424d48804aad5d39c61b8f',
- 'id': 'ffff'
- }
- })
- self.mock_router_for_vpc()
- neutronclient.Client.add_gateway_router(
- 'ffff', {'network_id': 'zzzz'}).AndReturn(None)
-
- def mock_create_association(self):
- self.mock_show_subnet()
- self.mock_router_for_vpc()
- neutronclient.Client.remove_interface_router(
- 'bbbb',
- {'subnet_id': u'cccc'}).AndReturn(None)
- neutronclient.Client.add_interface_router(
- u'ffff',
- {'subnet_id': 'cccc'}).AndReturn(None)
-
- def mock_delete_association(self):
- self.mock_show_subnet()
- self.mock_router_for_vpc()
- neutronclient.Client.remove_interface_router(
- 'ffff',
- {'subnet_id': u'cccc'}).AndReturn(None)
- neutronclient.Client.add_interface_router(
- u'bbbb',
- {'subnet_id': 'cccc'}).AndReturn(None)
-
- def mock_delete_route_table(self):
- neutronclient.Client.delete_router('ffff').AndReturn(None)
- neutronclient.Client.remove_gateway_router('ffff').AndReturn(None)
-
- def assertResourceState(self, resource, ref_id):
- self.assertEqual(None, resource.validate())
- self.assertEqual((resource.CREATE, resource.COMPLETE), resource.state)
- self.assertEqual(ref_id, resource.FnGetRefId())
-
- def mock_rsrc_by_refid(self, sg):
- parser.Stack.resource_by_refid(sg).AndReturn(None)
-
-
-class VPCTest(VPCTestBase):
-
- test_template = '''
-HeatTemplateFormatVersion: '2012-12-12'
-Resources:
- the_vpc:
- Type: AWS::EC2::VPC
- Properties: {CidrBlock: '10.0.0.0/16'}
-'''
-
- def test_vpc(self):
- self.mock_keystone()
- self.mock_create_network()
- self.mock_delete_network()
- self.m.ReplayAll()
-
- stack = self.create_stack(self.test_template)
- vpc = stack['the_vpc']
- self.assertResourceState(vpc, 'aaaa')
- self.assertRaises(resource.UpdateReplace,
- vpc.handle_update, {}, {}, {})
-
- scheduler.TaskRunner(vpc.delete)()
- self.m.VerifyAll()
-
-
-class SubnetTest(VPCTestBase):
-
- test_template = '''
-HeatTemplateFormatVersion: '2012-12-12'
-Resources:
- the_vpc:
- Type: AWS::EC2::VPC
- Properties: {CidrBlock: '10.0.0.0/16'}
- the_subnet:
- Type: AWS::EC2::Subnet
- Properties:
- CidrBlock: 10.0.0.0/24
- VpcId: {Ref: the_vpc}
- AvailabilityZone: moon
-'''
-
- def test_subnet(self):
- self.mock_keystone()
- self.mock_create_network()
- self.mock_create_subnet()
- self.mock_delete_subnet()
- self.mock_delete_network()
-
- # mock delete subnet which is already deleted
- self.mock_router_for_vpc()
- neutronclient.Client.remove_interface_router(
- u'bbbb',
- {'subnet_id': 'cccc'}).AndRaise(
- NeutronClientException(status_code=404))
- neutronclient.Client.delete_subnet('cccc').AndRaise(
- NeutronClientException(status_code=404))
-
- self.m.ReplayAll()
- stack = self.create_stack(self.test_template)
-
- subnet = stack['the_subnet']
- self.assertResourceState(subnet, 'cccc')
-
- self.assertRaises(resource.UpdateReplace,
- subnet.handle_update, {}, {}, {})
- self.assertRaises(
- exception.InvalidTemplateAttribute,
- subnet.FnGetAtt,
- 'Foo')
-
- self.assertEqual('moon', subnet.FnGetAtt('AvailabilityZone'))
-
- scheduler.TaskRunner(subnet.delete)()
- subnet.state_set(subnet.CREATE, subnet.COMPLETE, 'to delete again')
- scheduler.TaskRunner(subnet.delete)()
- scheduler.TaskRunner(stack['the_vpc'].delete)()
- self.m.VerifyAll()
-
-
-class NetworkInterfaceTest(VPCTestBase):
-
- test_template = '''
-HeatTemplateFormatVersion: '2012-12-12'
-Resources:
- the_sg:
- Type: AWS::EC2::SecurityGroup
- Properties:
- VpcId: {Ref: the_vpc}
- GroupDescription: SSH access
- SecurityGroupIngress:
- - IpProtocol: tcp
- FromPort: "22"
- ToPort: "22"
- CidrIp: 0.0.0.0/0
- the_vpc:
- Type: AWS::EC2::VPC
- Properties: {CidrBlock: '10.0.0.0/16'}
- the_subnet:
- Type: AWS::EC2::Subnet
- Properties:
- CidrBlock: 10.0.0.0/24
- VpcId: {Ref: the_vpc}
- AvailabilityZone: moon
- the_nic:
- Type: AWS::EC2::NetworkInterface
- Properties:
- PrivateIpAddress: 10.0.0.100
- SubnetId: {Ref: the_subnet}
- GroupSet:
- - Ref: the_sg
-'''
-
- test_template_no_groupset = '''
-HeatTemplateFormatVersion: '2012-12-12'
-Resources:
- the_vpc:
- Type: AWS::EC2::VPC
- Properties: {CidrBlock: '10.0.0.0/16'}
- the_subnet:
- Type: AWS::EC2::Subnet
- Properties:
- CidrBlock: 10.0.0.0/24
- VpcId: {Ref: the_vpc}
- AvailabilityZone: moon
- the_nic:
- Type: AWS::EC2::NetworkInterface
- Properties:
- PrivateIpAddress: 10.0.0.100
- SubnetId: {Ref: the_subnet}
-'''
-
- test_template_error = '''
-HeatTemplateFormatVersion: '2012-12-12'
-Resources:
- the_sg:
- Type: AWS::EC2::SecurityGroup
- Properties:
- VpcId: {Ref: the_vpc}
- GroupDescription: SSH access
- SecurityGroupIngress:
- - IpProtocol: tcp
- FromPort: "22"
- ToPort: "22"
- CidrIp: 0.0.0.0/0
- the_vpc:
- Type: AWS::EC2::VPC
- Properties: {CidrBlock: '10.0.0.0/16'}
- the_subnet:
- Type: AWS::EC2::Subnet
- Properties:
- CidrBlock: 10.0.0.0/24
- VpcId: {Ref: the_vpc}
- AvailabilityZone: moon
- the_nic:
- Type: AWS::EC2::NetworkInterface
- Properties:
- PrivateIpAddress: 10.0.0.100
- SubnetId: {Ref: the_subnet}
- GroupSet:
- - Ref: INVALID-REF-IN-TEMPLATE
-'''
-
- test_template_error_no_ref = '''
-HeatTemplateFormatVersion: '2012-12-12'
-Resources:
- the_vpc:
- Type: AWS::EC2::VPC
- Properties: {CidrBlock: '10.0.0.0/16'}
- the_subnet:
- Type: AWS::EC2::Subnet
- Properties:
- CidrBlock: 10.0.0.0/24
- VpcId: {Ref: the_vpc}
- AvailabilityZone: moon
- the_nic:
- Type: AWS::EC2::NetworkInterface
- Properties:
- PrivateIpAddress: 10.0.0.100
- SubnetId: {Ref: the_subnet}
- GroupSet:
- - INVALID-NO-REF
-'''
-
- def mock_create_network_interface(self, security_groups=['eeee']):
- self.nic_name = utils.PhysName('test_stack', 'the_nic')
- port = {'network_id': 'aaaa',
- 'fixed_ips': [{
- 'subnet_id': u'cccc',
- 'ip_address': u'10.0.0.100'
- }],
- 'name': self.nic_name,
- 'admin_state_up': True}
- if security_groups:
- port['security_groups'] = security_groups
-
- neutronclient.Client.create_port({'port': port}).AndReturn({
- 'port': {
- 'admin_state_up': True,
- 'device_id': '',
- 'device_owner': '',
- 'fixed_ips': [
- {
- 'ip_address': '10.0.0.100',
- 'subnet_id': 'cccc'
- }
- ],
- 'id': 'dddd',
- 'mac_address': 'fa:16:3e:25:32:5d',
- 'name': self.nic_name,
- 'network_id': 'aaaa',
- 'status': 'ACTIVE',
- 'tenant_id': 'c1210485b2424d48804aad5d39c61b8f'
- }
- })
-
- def mock_delete_network_interface(self):
- neutronclient.Client.delete_port('dddd').AndReturn(None)
-
- def test_network_interface(self):
- self.mock_keystone()
- self.mock_create_security_group()
- self.mock_create_network()
- self.mock_create_subnet()
- self.mock_show_subnet()
- self.mock_create_network_interface()
- self.mock_delete_network_interface()
- self.mock_delete_subnet()
- self.mock_delete_network()
- self.mock_delete_security_group()
-
- self.m.ReplayAll()
-
- stack = self.create_stack(self.test_template)
- try:
- self.assertEqual((stack.CREATE, stack.COMPLETE), stack.state)
- rsrc = stack['the_nic']
- self.assertResourceState(rsrc, 'dddd')
-
- self.assertRaises(resource.UpdateReplace,
- rsrc.handle_update, {}, {}, {})
-
- finally:
- scheduler.TaskRunner(stack.delete)()
-
- self.m.VerifyAll()
-
- def test_network_interface_existing_groupset(self):
- self.m.StubOutWithMock(parser.Stack, 'resource_by_refid')
- self.mock_rsrc_by_refid(sg='eeee')
-
- self.mock_keystone()
- self.mock_create_security_group()
- self.mock_create_network()
- self.mock_create_subnet()
- self.mock_show_subnet()
- self.mock_create_network_interface()
- self.mock_show_security_group()
- self.mock_delete_network_interface()
- self.mock_delete_subnet()
- self.mock_delete_network()
- self.mock_delete_security_group()
-
- self.m.ReplayAll()
-
- stack = self.create_stack(self.test_template)
- try:
- self.assertEqual((stack.CREATE, stack.COMPLETE), stack.state)
- rsrc = stack['the_nic']
- self.assertResourceState(rsrc, 'dddd')
-
- self.assertRaises(resource.UpdateReplace,
- rsrc.handle_update, {}, {}, {})
-
- finally:
- stack.delete()
-
- self.m.VerifyAll()
-
- def test_network_interface_no_groupset(self):
- self.mock_keystone()
- self.mock_create_network()
- self.mock_create_subnet()
- self.mock_show_subnet()
- self.mock_create_network_interface(security_groups=None)
- self.mock_delete_network_interface()
- self.mock_delete_subnet()
- self.mock_delete_network()
-
- self.m.ReplayAll()
-
- stack = self.create_stack(self.test_template_no_groupset)
- stack.delete()
-
- self.m.VerifyAll()
-
- def test_network_interface_exception(self):
- self.m.StubOutWithMock(parser.Stack, 'resource_by_refid')
- self.mock_rsrc_by_refid(sg='eeee')
-
- self.mock_keystone()
- self.mock_create_security_group()
- self.mock_create_network()
- self.mock_create_subnet()
- self.mock_show_subnet()
- self.mock_show_security_group(group='RaiseException')
-
- self.m.ReplayAll()
-
- try:
- stack = self.create_stack(self.test_template)
- rsrc = stack['the_nic']
- self.assertEqual((rsrc.CREATE, rsrc.FAILED), rsrc.state)
- reason = rsrc.status_reason
- self.assertTrue(reason.startswith('NeutronClientException:'))
- finally:
- stack.delete()
-
- self.m.VerifyAll()
-
- def test_network_interface_error(self):
- real_exception = self.assertRaises(
- exception.InvalidTemplateReference,
- self.create_stack,
- self.test_template_error)
- expected_exception = exception.InvalidTemplateReference(
- resource='INVALID-REF-IN-TEMPLATE',
- key='the_nic.Properties.GroupSet[0]')
-
- self.assertEqual(str(expected_exception), str(real_exception))
-
- def test_network_interface_error_no_ref(self):
- self.mock_keystone()
- self.mock_create_network()
- self.mock_create_subnet()
- self.mock_show_subnet()
- self.mock_show_security_group(group='INVALID-NO-REF')
- self.mock_delete_subnet()
- neutronclient.Client.delete_port(None).AndReturn(None)
- self.mock_delete_network()
-
- self.m.ReplayAll()
-
- stack = self.create_stack(self.test_template_error_no_ref)
- try:
- self.assertEqual((stack.CREATE, stack.FAILED), stack.state)
- rsrc = stack['the_nic']
- self.assertEqual((rsrc.CREATE, rsrc.FAILED), rsrc.state)
- reason = rsrc.status_reason
- self.assertTrue(reason.startswith('InvalidTemplateAttribute:'))
- finally:
- scheduler.TaskRunner(stack.delete)()
-
- self.m.VerifyAll()
-
-
-class InternetGatewayTest(VPCTestBase):
-
- test_template = '''
-HeatTemplateFormatVersion: '2012-12-12'
-Resources:
- the_gateway:
- Type: AWS::EC2::InternetGateway
- the_vpc:
- Type: AWS::EC2::VPC
- Properties:
- CidrBlock: '10.0.0.0/16'
- the_subnet:
- Type: AWS::EC2::Subnet
- Properties:
- CidrBlock: 10.0.0.0/24
- VpcId: {Ref: the_vpc}
- AvailabilityZone: moon
- the_attachment:
- Type: AWS::EC2::VPCGatewayAttachment
- Properties:
- VpcId: {Ref: the_vpc}
- InternetGatewayId: {Ref: the_gateway}
- the_route_table:
- Type: AWS::EC2::RouteTable
- Properties:
- VpcId: {Ref: the_vpc}
- the_association:
- Type: AWS::EC2::SubnetRouteTableAssocation
- Properties:
- RouteTableId: {Ref: the_route_table}
- SubnetId: {Ref: the_subnet}
-'''
-
- def mock_create_internet_gateway(self):
- neutronclient.Client.list_networks(
- **{'router:external': True}).AndReturn({'networks': [{
- 'status': 'ACTIVE',
- 'subnets': [],
- 'name': 'nova',
- 'router:external': True,
- 'tenant_id': 'c1210485b2424d48804aad5d39c61b8f',
- 'admin_state_up': True,
- 'shared': True,
- 'id': 'eeee'
- }]})
-
- def mock_create_gateway_attachment(self):
- neutronclient.Client.add_gateway_router(
- 'ffff', {'network_id': 'eeee'}).AndReturn(None)
-
- def mock_delete_gateway_attachment(self):
- neutronclient.Client.remove_gateway_router('ffff').AndReturn(None)
-
- def test_internet_gateway(self):
- self.mock_keystone()
- self.mock_create_internet_gateway()
- self.mock_create_network()
- self.mock_create_subnet()
- self.mock_create_route_table()
- self.mock_create_association()
- self.mock_create_gateway_attachment()
- self.mock_delete_gateway_attachment()
- self.mock_delete_association()
- self.mock_delete_route_table()
- self.mock_delete_subnet()
- self.mock_delete_network()
-
- self.m.ReplayAll()
-
- stack = self.create_stack(self.test_template)
-
- gateway = stack['the_gateway']
- self.assertResourceState(gateway, gateway.physical_resource_name())
- self.assertRaises(resource.UpdateReplace, gateway.handle_update,
- {}, {}, {})
-
- attachment = stack['the_attachment']
- self.assertResourceState(attachment, 'the_attachment')
- self.assertRaises(resource.UpdateReplace,
- attachment.handle_update, {}, {}, {})
-
- route_table = stack['the_route_table']
- self.assertEqual([route_table], list(attachment._vpc_route_tables()))
-
- stack.delete()
- self.m.VerifyAll()
-
-
-class RouteTableTest(VPCTestBase):
-
- test_template = '''
-HeatTemplateFormatVersion: '2012-12-12'
-Resources:
- the_vpc:
- Type: AWS::EC2::VPC
- Properties:
- CidrBlock: '10.0.0.0/16'
- the_subnet:
- Type: AWS::EC2::Subnet
- Properties:
- CidrBlock: 10.0.0.0/24
- VpcId: {Ref: the_vpc}
- AvailabilityZone: moon
- the_route_table:
- Type: AWS::EC2::RouteTable
- Properties:
- VpcId: {Ref: the_vpc}
- the_association:
- Type: AWS::EC2::SubnetRouteTableAssocation
- Properties:
- RouteTableId: {Ref: the_route_table}
- SubnetId: {Ref: the_subnet}
-'''
-
- def test_route_table(self):
- self.mock_keystone()
- self.mock_create_network()
- self.mock_create_subnet()
- self.mock_create_route_table()
- self.mock_create_association()
- self.mock_delete_association()
- self.mock_delete_route_table()
- self.mock_delete_subnet()
- self.mock_delete_network()
-
- self.m.ReplayAll()
-
- stack = self.create_stack(self.test_template)
-
- route_table = stack['the_route_table']
- self.assertResourceState(route_table, 'ffff')
- self.assertRaises(
- resource.UpdateReplace,
- route_table.handle_update, {}, {}, {})
-
- association = stack['the_association']
- self.assertResourceState(association, 'the_association')
- self.assertRaises(
- resource.UpdateReplace,
- association.handle_update, {}, {}, {})
-
- scheduler.TaskRunner(association.delete)()
- scheduler.TaskRunner(route_table.delete)()
-
- stack.delete()
- self.m.VerifyAll()
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import datetime
-import time
-import json
-
-from oslo.config import cfg
-
-from heat.openstack.common import uuidutils
-
-from heat.tests.common import HeatTestCase
-from heat.tests import fakes
-from heat.tests import utils
-
-import heat.db.api as db_api
-from heat.common import template_format
-from heat.common import identifier
-from heat.engine import environment
-from heat.engine import parser
-from heat.engine import resource
-from heat.engine import scheduler
-from heat.engine.resources import wait_condition as wc
-
-test_template_waitcondition = '''
-{
- "AWSTemplateFormatVersion" : "2010-09-09",
- "Description" : "Just a WaitCondition.",
- "Parameters" : {},
- "Resources" : {
- "WaitHandle" : {
- "Type" : "AWS::CloudFormation::WaitConditionHandle"
- },
- "WaitForTheHandle" : {
- "Type" : "AWS::CloudFormation::WaitCondition",
- "Properties" : {
- "Handle" : {"Ref" : "WaitHandle"},
- "Timeout" : "5"
- }
- }
- }
-}
-'''
-
-test_template_wc_count = '''
-{
- "AWSTemplateFormatVersion" : "2010-09-09",
- "Description" : "Just a WaitCondition.",
- "Parameters" : {},
- "Resources" : {
- "WaitHandle" : {
- "Type" : "AWS::CloudFormation::WaitConditionHandle"
- },
- "WaitForTheHandle" : {
- "Type" : "AWS::CloudFormation::WaitCondition",
- "Properties" : {
- "Handle" : {"Ref" : "WaitHandle"},
- "Timeout" : "5",
- "Count" : "3"
- }
- }
- }
-}
-'''
-
-
-class WaitConditionTest(HeatTestCase):
-
- def setUp(self):
- super(WaitConditionTest, self).setUp()
- utils.setup_dummy_db()
- self.m.StubOutWithMock(wc.WaitConditionHandle,
- 'get_status')
-
- cfg.CONF.set_default('heat_waitcondition_server_url',
- 'http://server.test:8000/v1/waitcondition')
-
- self.fc = fakes.FakeKeystoneClient()
-
- def tearDown(self):
- super(WaitConditionTest, self).tearDown()
- utils.reset_dummy_db()
-
- # Note: tests that create a stack should be decorated with
- # @stack_delete_after to ensure the stack is properly cleaned up
- def create_stack(self, stack_id=None,
- template=test_template_waitcondition, params={},
- stub=True):
- temp = template_format.parse(template)
- template = parser.Template(temp)
- ctx = utils.dummy_context(tenant_id='test_tenant')
- stack = parser.Stack(ctx, 'test_stack', template,
- environment.Environment(params),
- disable_rollback=True)
-
- # Stub out the stack ID so we have a known value
- if stack_id is None:
- stack_id = uuidutils.generate_uuid()
-
- self.stack_id = stack_id
- with utils.UUIDStub(self.stack_id):
- stack.store()
-
- if stub:
- self.m.StubOutWithMock(wc.WaitConditionHandle, 'keystone')
- wc.WaitConditionHandle.keystone().MultipleTimes().AndReturn(
- self.fc)
-
- id = identifier.ResourceIdentifier('test_tenant', stack.name,
- stack.id, '', 'WaitHandle')
- self.m.StubOutWithMock(wc.WaitConditionHandle, 'identifier')
- wc.WaitConditionHandle.identifier().MultipleTimes().AndReturn(id)
-
- return stack
-
- @utils.stack_delete_after
- def test_post_success_to_handle(self):
- self.stack = self.create_stack()
- wc.WaitConditionHandle.get_status().AndReturn([])
- wc.WaitConditionHandle.get_status().AndReturn([])
- wc.WaitConditionHandle.get_status().AndReturn(['SUCCESS'])
-
- self.m.ReplayAll()
-
- self.stack.create()
-
- rsrc = self.stack.resources['WaitForTheHandle']
- self.assertEqual(rsrc.state,
- (rsrc.CREATE, rsrc.COMPLETE))
-
- r = db_api.resource_get_by_name_and_stack(None, 'WaitHandle',
- self.stack.id)
- self.assertEqual(r.name, 'WaitHandle')
- self.m.VerifyAll()
-
- @utils.stack_delete_after
- def test_post_failure_to_handle(self):
- self.stack = self.create_stack()
- wc.WaitConditionHandle.get_status().AndReturn([])
- wc.WaitConditionHandle.get_status().AndReturn([])
- wc.WaitConditionHandle.get_status().AndReturn(['FAILURE'])
-
- self.m.ReplayAll()
-
- self.stack.create()
-
- rsrc = self.stack.resources['WaitForTheHandle']
- self.assertEqual(rsrc.state, (rsrc.CREATE, rsrc.FAILED))
- reason = rsrc.status_reason
- self.assertTrue(reason.startswith('WaitConditionFailure:'))
-
- r = db_api.resource_get_by_name_and_stack(None, 'WaitHandle',
- self.stack.id)
- self.assertEqual(r.name, 'WaitHandle')
- self.m.VerifyAll()
-
- @utils.stack_delete_after
- def test_post_success_to_handle_count(self):
- self.stack = self.create_stack(template=test_template_wc_count)
- wc.WaitConditionHandle.get_status().AndReturn([])
- wc.WaitConditionHandle.get_status().AndReturn(['SUCCESS'])
- wc.WaitConditionHandle.get_status().AndReturn(['SUCCESS', 'SUCCESS'])
- wc.WaitConditionHandle.get_status().AndReturn(['SUCCESS', 'SUCCESS',
- 'SUCCESS'])
-
- self.m.ReplayAll()
-
- self.stack.create()
-
- rsrc = self.stack.resources['WaitForTheHandle']
- self.assertEqual(rsrc.state,
- (rsrc.CREATE, rsrc.COMPLETE))
-
- r = db_api.resource_get_by_name_and_stack(None, 'WaitHandle',
- self.stack.id)
- self.assertEqual(r.name, 'WaitHandle')
- self.m.VerifyAll()
-
- @utils.stack_delete_after
- def test_post_failure_to_handle_count(self):
- self.stack = self.create_stack(template=test_template_wc_count)
- wc.WaitConditionHandle.get_status().AndReturn([])
- wc.WaitConditionHandle.get_status().AndReturn(['SUCCESS'])
- wc.WaitConditionHandle.get_status().AndReturn(['SUCCESS', 'FAILURE'])
-
- self.m.ReplayAll()
-
- self.stack.create()
-
- rsrc = self.stack.resources['WaitForTheHandle']
- self.assertEqual(rsrc.state, (rsrc.CREATE, rsrc.FAILED))
- reason = rsrc.status_reason
- self.assertTrue(reason.startswith('WaitConditionFailure:'))
-
- r = db_api.resource_get_by_name_and_stack(None, 'WaitHandle',
- self.stack.id)
- self.assertEqual(r.name, 'WaitHandle')
- self.m.VerifyAll()
-
- @utils.stack_delete_after
- def test_timeout(self):
- st = time.time()
-
- self.stack = self.create_stack()
-
- # Avoid the stack create exercising the timeout code at the same time
- self.m.StubOutWithMock(self.stack, 'timeout_secs')
- self.stack.timeout_secs().MultipleTimes().AndReturn(None)
-
- self.m.StubOutWithMock(scheduler, 'wallclock')
-
- scheduler.wallclock().AndReturn(st)
- scheduler.wallclock().AndReturn(st + 0.001)
- scheduler.wallclock().AndReturn(st + 0.1)
- wc.WaitConditionHandle.get_status().AndReturn([])
- scheduler.wallclock().AndReturn(st + 4.1)
- wc.WaitConditionHandle.get_status().AndReturn([])
- scheduler.wallclock().AndReturn(st + 5.1)
-
- self.m.ReplayAll()
-
- self.stack.create()
-
- rsrc = self.stack.resources['WaitForTheHandle']
-
- self.assertEqual(rsrc.state, (rsrc.CREATE, rsrc.FAILED))
- reason = rsrc.status_reason
- self.assertTrue(reason.startswith('WaitConditionTimeout:'))
-
- self.assertRaises(resource.UpdateReplace,
- rsrc.handle_update, {}, {}, {})
- self.m.VerifyAll()
-
- @utils.stack_delete_after
- def test_FnGetAtt(self):
- self.stack = self.create_stack()
- wc.WaitConditionHandle.get_status().AndReturn(['SUCCESS'])
-
- self.m.ReplayAll()
- self.stack.create()
-
- rsrc = self.stack.resources['WaitForTheHandle']
- self.assertEqual(rsrc.state, (rsrc.CREATE, rsrc.COMPLETE))
-
- wc_att = rsrc.FnGetAtt('Data')
- self.assertEqual(wc_att, unicode({}))
-
- handle = self.stack.resources['WaitHandle']
- self.assertEqual(handle.state, (rsrc.CREATE, rsrc.COMPLETE))
-
- test_metadata = {'Data': 'foo', 'Reason': 'bar',
- 'Status': 'SUCCESS', 'UniqueId': '123'}
- handle.metadata_update(new_metadata=test_metadata)
- wc_att = rsrc.FnGetAtt('Data')
- self.assertEqual(wc_att, '{"123": "foo"}')
-
- test_metadata = {'Data': 'dog', 'Reason': 'cat',
- 'Status': 'SUCCESS', 'UniqueId': '456'}
- handle.metadata_update(new_metadata=test_metadata)
- wc_att = rsrc.FnGetAtt('Data')
- self.assertEqual(wc_att, u'{"123": "foo", "456": "dog"}')
- self.m.VerifyAll()
-
- @utils.stack_delete_after
- def test_validate_handle_url_bad_stackid(self):
- self.m.ReplayAll()
-
- stack_id = 'STACK_HUBSID_1234'
- t = json.loads(test_template_waitcondition)
- badhandle = ("http://server.test:8000/v1/waitcondition/" +
- "arn%3Aopenstack%3Aheat%3A%3Atest_tenant" +
- "%3Astacks%2Ftest_stack%2F" +
- "bad1" +
- "%2Fresources%2FWaitHandle")
- t['Resources']['WaitForTheHandle']['Properties']['Handle'] = badhandle
- self.stack = self.create_stack(template=json.dumps(t), stub=False,
- stack_id=stack_id)
-
- rsrc = self.stack.resources['WaitForTheHandle']
- self.assertRaises(ValueError, rsrc.handle_create)
-
- self.m.VerifyAll()
-
- @utils.stack_delete_after
- def test_validate_handle_url_bad_stackname(self):
- self.m.ReplayAll()
-
- stack_id = 'STACKABCD1234'
- t = json.loads(test_template_waitcondition)
- badhandle = ("http://server.test:8000/v1/waitcondition/" +
- "arn%3Aopenstack%3Aheat%3A%3Atest_tenant" +
- "%3Astacks%2FBAD_stack%2F" +
- stack_id + "%2Fresources%2FWaitHandle")
- t['Resources']['WaitForTheHandle']['Properties']['Handle'] = badhandle
- self.stack = self.create_stack(template=json.dumps(t), stub=False,
- stack_id=stack_id)
-
- rsrc = self.stack.resources['WaitForTheHandle']
- self.assertRaises(ValueError, rsrc.handle_create)
-
- self.m.VerifyAll()
-
- @utils.stack_delete_after
- def test_validate_handle_url_bad_tenant(self):
- self.m.ReplayAll()
-
- stack_id = 'STACKABCD1234'
- t = json.loads(test_template_waitcondition)
- badhandle = ("http://server.test:8000/v1/waitcondition/" +
- "arn%3Aopenstack%3Aheat%3A%3ABAD_tenant" +
- "%3Astacks%2Ftest_stack%2F" +
- stack_id + "%2Fresources%2FWaitHandle")
- t['Resources']['WaitForTheHandle']['Properties']['Handle'] = badhandle
- self.stack = self.create_stack(stack_id=stack_id,
- template=json.dumps(t), stub=False)
-
- rsrc = self.stack.resources['WaitForTheHandle']
- self.assertRaises(ValueError, rsrc.handle_create)
-
- self.m.VerifyAll()
-
- @utils.stack_delete_after
- def test_validate_handle_url_bad_resource(self):
- self.m.ReplayAll()
-
- stack_id = 'STACK_HUBR_1234'
- t = json.loads(test_template_waitcondition)
- badhandle = ("http://server.test:8000/v1/waitcondition/" +
- "arn%3Aopenstack%3Aheat%3A%3Atest_tenant" +
- "%3Astacks%2Ftest_stack%2F" +
- stack_id + "%2Fresources%2FBADHandle")
- t['Resources']['WaitForTheHandle']['Properties']['Handle'] = badhandle
- self.stack = self.create_stack(stack_id=stack_id,
- template=json.dumps(t), stub=False)
-
- rsrc = self.stack.resources['WaitForTheHandle']
- self.assertRaises(ValueError, rsrc.handle_create)
-
- self.m.VerifyAll()
-
- @utils.stack_delete_after
- def test_validate_handle_url_bad_resource_type(self):
- self.m.ReplayAll()
- stack_id = 'STACKABCD1234'
- t = json.loads(test_template_waitcondition)
- badhandle = ("http://server.test:8000/v1/waitcondition/" +
- "arn%3Aopenstack%3Aheat%3A%3Atest_tenant" +
- "%3Astacks%2Ftest_stack%2F" +
- stack_id + "%2Fresources%2FWaitForTheHandle")
- t['Resources']['WaitForTheHandle']['Properties']['Handle'] = badhandle
- self.stack = self.create_stack(stack_id=stack_id,
- template=json.dumps(t), stub=False)
-
- rsrc = self.stack.resources['WaitForTheHandle']
- self.assertRaises(ValueError, rsrc.handle_create)
-
- self.m.VerifyAll()
-
-
-class WaitConditionHandleTest(HeatTestCase):
- def setUp(self):
- super(WaitConditionHandleTest, self).setUp()
- cfg.CONF.set_default('heat_waitcondition_server_url',
- 'http://server.test:8000/v1/waitcondition')
-
- self.fc = fakes.FakeKeystoneClient()
- utils.setup_dummy_db()
-
- def tearDown(self):
- super(WaitConditionHandleTest, self).tearDown()
- utils.reset_dummy_db()
-
- def create_stack(self, stack_name=None, stack_id=None):
- temp = template_format.parse(test_template_waitcondition)
- template = parser.Template(temp)
- ctx = utils.dummy_context(tenant_id='test_tenant')
- if stack_name is None:
- stack_name = utils.random_name()
- stack = parser.Stack(ctx, stack_name, template,
- disable_rollback=True)
- # Stub out the UUID for this test, so we can get an expected signature
- if stack_id is not None:
- with utils.UUIDStub(stack_id):
- stack.store()
- else:
- stack.store()
- self.stack_id = stack.id
-
- # Stub the waitcondition status so the stack goes to CREATE_COMPLETE
- self.m.StubOutWithMock(wc.WaitConditionHandle, 'get_status')
- wc.WaitConditionHandle.get_status().AndReturn(['SUCCESS'])
-
- # Stub keystone() with fake client
- self.m.StubOutWithMock(wc.WaitConditionHandle, 'keystone')
- wc.WaitConditionHandle.keystone().MultipleTimes().AndReturn(self.fc)
-
- id = identifier.ResourceIdentifier('test_tenant', stack.name,
- stack.id, '', 'WaitHandle')
- self.m.StubOutWithMock(wc.WaitConditionHandle, 'identifier')
- wc.WaitConditionHandle.identifier().MultipleTimes().AndReturn(id)
-
- self.m.ReplayAll()
- stack.create()
-
- return stack
-
- @utils.stack_delete_after
- def test_handle(self):
- stack_id = 'STACKABCD1234'
- stack_name = 'test_stack2'
- created_time = datetime.datetime(2012, 11, 29, 13, 49, 37)
- self.stack = self.create_stack(stack_id=stack_id,
- stack_name=stack_name)
-
- rsrc = self.stack.resources['WaitHandle']
- # clear the url
- db_api.resource_data_set(rsrc, 'ec2_signed_url', None, False)
-
- rsrc.created_time = created_time
- self.assertEqual(rsrc.state, (rsrc.CREATE, rsrc.COMPLETE))
-
- expected_url = "".join([
- 'http://server.test:8000/v1/waitcondition/',
- 'arn%3Aopenstack%3Aheat%3A%3Atest_tenant%3Astacks%2F',
- 'test_stack2%2F', stack_id, '%2Fresources%2F',
- 'WaitHandle?',
- 'Timestamp=2012-11-29T13%3A49%3A37Z&',
- 'SignatureMethod=HmacSHA256&',
- 'AWSAccessKeyId=4567&',
- 'SignatureVersion=2&',
- 'Signature=',
- 'fHyt3XFnHq8%2FSwYaVcHdJka1hz6jdK5mHtgbo8OOKbQ%3D'])
-
- self.assertEqual(unicode(expected_url), rsrc.FnGetRefId())
-
- self.assertRaises(resource.UpdateReplace,
- rsrc.handle_update, {}, {}, {})
- self.m.VerifyAll()
-
- @utils.stack_delete_after
- def test_metadata_update(self):
- self.stack = self.create_stack()
- rsrc = self.stack.resources['WaitHandle']
- self.assertEqual(rsrc.state, (rsrc.CREATE, rsrc.COMPLETE))
-
- test_metadata = {'Data': 'foo', 'Reason': 'bar',
- 'Status': 'SUCCESS', 'UniqueId': '123'}
- rsrc.metadata_update(new_metadata=test_metadata)
- handle_metadata = {u'123': {u'Data': u'foo',
- u'Reason': u'bar',
- u'Status': u'SUCCESS'}}
- self.assertEqual(handle_metadata, rsrc.metadata)
- self.m.VerifyAll()
-
- @utils.stack_delete_after
- def test_metadata_update_invalid(self):
- self.stack = self.create_stack()
- rsrc = self.stack.resources['WaitHandle']
- self.assertEqual(rsrc.state, (rsrc.CREATE, rsrc.COMPLETE))
-
- # metadata_update should raise a ValueError if the metadata
- # is missing any of the expected keys
- err_metadata = {'Data': 'foo', 'Status': 'SUCCESS', 'UniqueId': '123'}
- self.assertRaises(ValueError, rsrc.metadata_update,
- new_metadata=err_metadata)
-
- err_metadata = {'Data': 'foo', 'Reason': 'bar', 'UniqueId': '1234'}
- self.assertRaises(ValueError, rsrc.metadata_update,
- new_metadata=err_metadata)
-
- err_metadata = {'data': 'foo', 'reason': 'bar',
- 'status': 'SUCCESS', 'uniqueid': '1234'}
- self.assertRaises(ValueError, rsrc.metadata_update,
- new_metadata=err_metadata)
-
- # Also any Status other than SUCCESS or FAILURE should be rejected
- err_metadata = {'Data': 'foo', 'Reason': 'bar',
- 'Status': 'UCCESS', 'UniqueId': '123'}
- self.assertRaises(ValueError, rsrc.metadata_update,
- new_metadata=err_metadata)
- err_metadata = {'Data': 'foo', 'Reason': 'bar',
- 'Status': 'wibble', 'UniqueId': '123'}
- self.assertRaises(ValueError, rsrc.metadata_update,
- new_metadata=err_metadata)
- err_metadata = {'Data': 'foo', 'Reason': 'bar',
- 'Status': 'success', 'UniqueId': '123'}
- self.assertRaises(ValueError, rsrc.metadata_update,
- new_metadata=err_metadata)
- err_metadata = {'Data': 'foo', 'Reason': 'bar',
- 'Status': 'FAIL', 'UniqueId': '123'}
- self.assertRaises(ValueError, rsrc.metadata_update,
- new_metadata=err_metadata)
- self.m.VerifyAll()
-
- @utils.stack_delete_after
- def test_get_status(self):
- self.stack = self.create_stack()
- rsrc = self.stack.resources['WaitHandle']
- self.assertEqual(rsrc.state, (rsrc.CREATE, rsrc.COMPLETE))
-
- # Unset all stubs; we no longer want get_status stubbed
- self.m.VerifyAll()
- self.m.UnsetStubs()
-
- self.assertEqual(rsrc.get_status(), [])
-
- test_metadata = {'Data': 'foo', 'Reason': 'bar',
- 'Status': 'SUCCESS', 'UniqueId': '123'}
- rsrc.metadata_update(new_metadata=test_metadata)
- self.assertEqual(rsrc.get_status(), ['SUCCESS'])
-
- test_metadata = {'Data': 'foo', 'Reason': 'bar',
- 'Status': 'SUCCESS', 'UniqueId': '456'}
- rsrc.metadata_update(new_metadata=test_metadata)
- self.assertEqual(rsrc.get_status(), ['SUCCESS', 'SUCCESS'])
-
- # Re-stub keystone() with the fake client, or the stack delete fails
- self.m.StubOutWithMock(wc.WaitConditionHandle, 'keystone')
- wc.WaitConditionHandle.keystone().MultipleTimes().AndReturn(self.fc)
- self.m.ReplayAll()
-
- @utils.stack_delete_after
- def test_get_status_reason(self):
- self.stack = self.create_stack()
- rsrc = self.stack.resources['WaitHandle']
- self.assertEqual(rsrc.state, (rsrc.CREATE, rsrc.COMPLETE))
-
- test_metadata = {'Data': 'foo', 'Reason': 'bar',
- 'Status': 'SUCCESS', 'UniqueId': '123'}
- rsrc.metadata_update(new_metadata=test_metadata)
- self.assertEqual('bar', rsrc.get_status_reason('SUCCESS'))
-
- test_metadata = {'Data': 'dog', 'Reason': 'cat',
- 'Status': 'SUCCESS', 'UniqueId': '456'}
- rsrc.metadata_update(new_metadata=test_metadata)
- self.assertEqual('bar;cat', rsrc.get_status_reason('SUCCESS'))
-
- test_metadata = {'Data': 'boo', 'Reason': 'hoo',
- 'Status': 'FAILURE', 'UniqueId': '789'}
- rsrc.metadata_update(new_metadata=test_metadata)
- self.assertEqual('hoo', rsrc.get_status_reason('FAILURE'))
- self.m.VerifyAll()
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-
-import datetime
-import mox
-import heat.db.api as db_api
-
-from heat.common import exception
-from heat.openstack.common import timeutils
-from heat.engine import watchrule
-from heat.engine import parser
-from heat.tests.common import HeatTestCase
-from heat.tests import utils
-
-
-class WatchData(object):
- def __init__(self, data, created_at):
- self.created_at = created_at
- self.data = {'test_metric': {'Value': data,
- 'Unit': 'Count'}}
-
-
-class DummyAction(object):
- signal = "DummyAction"
-
-
-class WatchRuleTest(HeatTestCase):
- stack_id = None
-
- @classmethod
- def setUpDatabase(cls):
- if cls.stack_id is not None:
- return
- # Create a dummy stack in the DB as WatchRule instances
- # must be associated with a stack
- utils.setup_dummy_db()
- ctx = utils.dummy_context()
- ctx.auth_token = 'abcd1234'
- empty_tmpl = {"template": {}}
- tmpl = parser.Template(empty_tmpl)
- stack_name = 'dummystack'
- dummy_stack = parser.Stack(ctx, stack_name, tmpl)
- dummy_stack.state_set(dummy_stack.CREATE, dummy_stack.COMPLETE,
- 'Testing')
- dummy_stack.store()
-
- cls.stack_id = dummy_stack.id
-
- def setUp(self):
- super(WatchRuleTest, self).setUp()
- self.setUpDatabase()
- self.username = 'watchrule_test_user'
-
- self.ctx = utils.dummy_context()
- self.ctx.auth_token = 'abcd1234'
-
- self.m.ReplayAll()
-
- def _action_set_stubs(self, now, action_expected=True):
- # Setup stubs for the action tests
- self.m.StubOutWithMock(timeutils, 'utcnow')
- timeutils.utcnow().MultipleTimes().AndReturn(now)
-
- if action_expected:
- dummy_action = DummyAction()
- self.m.StubOutWithMock(parser.Stack, 'resource_by_refid')
- parser.Stack.resource_by_refid(mox.IgnoreArg()).\
- MultipleTimes().AndReturn(dummy_action)
-
- self.m.ReplayAll()
-
- def test_minimum(self):
- rule = {'EvaluationPeriods': '1',
- 'MetricName': 'test_metric',
- 'Period': '300',
- 'Statistic': 'Minimum',
- 'ComparisonOperator': 'LessThanOrEqualToThreshold',
- 'Threshold': '50'}
-
- now = timeutils.utcnow()
- last = now - datetime.timedelta(seconds=320)
- data = [WatchData(77, now - datetime.timedelta(seconds=100))]
- data.append(WatchData(53, now - datetime.timedelta(seconds=150)))
-
- # all > 50 -> NORMAL
- self.wr = watchrule.WatchRule(context=self.ctx,
- watch_name="testwatch",
- rule=rule,
- watch_data=data,
- stack_id=self.stack_id,
- last_evaluated=last)
- new_state = self.wr.get_alarm_state()
- self.assertEqual(new_state, 'NORMAL')
-
- data.append(WatchData(25, now - datetime.timedelta(seconds=250)))
- self.wr = watchrule.WatchRule(context=self.ctx,
- watch_name="testwatch",
- rule=rule,
- watch_data=data,
- stack_id=self.stack_id,
- last_evaluated=last)
- new_state = self.wr.get_alarm_state()
- self.assertEqual(new_state, 'ALARM')
-
- def test_maximum(self):
- rule = {'EvaluationPeriods': '1',
- 'MetricName': 'test_metric',
- 'Period': '300',
- 'Statistic': 'Maximum',
- 'ComparisonOperator': 'GreaterThanOrEqualToThreshold',
- 'Threshold': '30'}
-
- now = timeutils.utcnow()
- last = now - datetime.timedelta(seconds=320)
- data = [WatchData(7, now - datetime.timedelta(seconds=100))]
- data.append(WatchData(23, now - datetime.timedelta(seconds=150)))
-
- # all < 30 -> NORMAL
- self.wr = watchrule.WatchRule(context=self.ctx,
- watch_name="testwatch",
- rule=rule,
- watch_data=data,
- stack_id=self.stack_id,
- last_evaluated=last)
- self.wr.now = now
- new_state = self.wr.get_alarm_state()
- self.assertEqual(new_state, 'NORMAL')
-
- data.append(WatchData(35, now - datetime.timedelta(seconds=150)))
- self.wr = watchrule.WatchRule(context=self.ctx,
- watch_name="testwatch",
- rule=rule,
- watch_data=data,
- stack_id=self.stack_id,
- last_evaluated=last)
- self.wr.now = now
- new_state = self.wr.get_alarm_state()
- self.assertEqual(new_state, 'ALARM')
-
- def test_samplecount(self):
-
- rule = {'EvaluationPeriods': '1',
- 'MetricName': 'test_metric',
- 'Period': '300',
- 'Statistic': 'SampleCount',
- 'ComparisonOperator': 'GreaterThanOrEqualToThreshold',
- 'Threshold': '3'}
-
- now = timeutils.utcnow()
- last = now - datetime.timedelta(seconds=320)
- data = [WatchData(1, now - datetime.timedelta(seconds=100))]
- data.append(WatchData(1, now - datetime.timedelta(seconds=150)))
-
- # only 2 samples -> NORMAL
- self.wr = watchrule.WatchRule(context=self.ctx,
- watch_name="testwatch",
- rule=rule,
- watch_data=data,
- stack_id=self.stack_id,
- last_evaluated=last)
- self.wr.now = now
- new_state = self.wr.get_alarm_state()
- self.assertEqual(new_state, 'NORMAL')
-
- # only 3 samples -> ALARM
- data.append(WatchData(1, now - datetime.timedelta(seconds=200)))
- self.wr = watchrule.WatchRule(context=self.ctx,
- watch_name="testwatch",
- rule=rule,
- watch_data=data,
- stack_id=self.stack_id,
- last_evaluated=last)
- self.wr.now = now
- new_state = self.wr.get_alarm_state()
- self.assertEqual(new_state, 'ALARM')
-
- # only 3 samples (one old) -> NORMAL
- data.pop(0)
- data.append(WatchData(1, now - datetime.timedelta(seconds=400)))
- self.wr = watchrule.WatchRule(context=self.ctx,
- watch_name="testwatch",
- rule=rule,
- watch_data=data,
- stack_id=self.stack_id,
- last_evaluated=last)
- self.wr.now = now
- new_state = self.wr.get_alarm_state()
- self.assertEqual(new_state, 'NORMAL')
-
- def test_sum(self):
- rule = {'EvaluationPeriods': '1',
- 'MetricName': 'test_metric',
- 'Period': '300',
- 'Statistic': 'Sum',
- 'ComparisonOperator': 'GreaterThanOrEqualToThreshold',
- 'Threshold': '100'}
-
- now = timeutils.utcnow()
- last = now - datetime.timedelta(seconds=320)
- data = [WatchData(17, now - datetime.timedelta(seconds=100))]
- data.append(WatchData(23, now - datetime.timedelta(seconds=150)))
-
- # sum (40) < 100 -> NORMAL
- self.wr = watchrule.WatchRule(context=self.ctx,
- watch_name="testwatch",
- rule=rule,
- watch_data=data,
- stack_id=self.stack_id,
- last_evaluated=last)
- self.wr.now = now
- new_state = self.wr.get_alarm_state()
- self.assertEqual(new_state, 'NORMAL')
-
- # sum > 100 -> ALARM
- data.append(WatchData(85, now - datetime.timedelta(seconds=150)))
- self.wr = watchrule.WatchRule(context=self.ctx,
- watch_name="testwatch",
- rule=rule,
- watch_data=data,
- stack_id=self.stack_id,
- last_evaluated=last)
- self.wr.now = now
- new_state = self.wr.get_alarm_state()
- self.assertEqual(new_state, 'ALARM')
-
- def test_ave(self):
- rule = {'EvaluationPeriods': '1',
- 'MetricName': 'test_metric',
- 'Period': '300',
- 'Statistic': 'Average',
- 'ComparisonOperator': 'GreaterThanThreshold',
- 'Threshold': '100'}
-
- now = timeutils.utcnow()
- last = now - datetime.timedelta(seconds=320)
- data = [WatchData(117, now - datetime.timedelta(seconds=100))]
- data.append(WatchData(23, now - datetime.timedelta(seconds=150)))
-
- # average (70) <= 100 -> NORMAL
- self.wr = watchrule.WatchRule(context=self.ctx,
- watch_name="testwatch",
- rule=rule,
- watch_data=data,
- stack_id=self.stack_id,
- last_evaluated=last)
- self.wr.now = now
- new_state = self.wr.get_alarm_state()
- self.assertEqual(new_state, 'NORMAL')
-
- # adding 195 raises the average above 100 -> ALARM
- data.append(WatchData(195, now - datetime.timedelta(seconds=250)))
- self.wr = watchrule.WatchRule(context=self.ctx,
- watch_name="testwatch",
- rule=rule,
- watch_data=data,
- stack_id=self.stack_id,
- last_evaluated=last)
- self.wr.now = now
- new_state = self.wr.get_alarm_state()
- self.assertEqual(new_state, 'ALARM')
-
- @utils.wr_delete_after
- def test_load(self):
- # Insert two dummy watch rules into the DB
- rule = {u'EvaluationPeriods': u'1',
- u'AlarmActions': [u'WebServerRestartPolicy'],
- u'AlarmDescription': u'Restart the WikiDatabase',
- u'Namespace': u'system/linux',
- u'Period': u'300',
- u'ComparisonOperator': u'GreaterThanThreshold',
- u'Statistic': u'SampleCount',
- u'Threshold': u'2',
- u'MetricName': u'ServiceFailure'}
- self.wr = []
- self.wr.append(watchrule.WatchRule(context=self.ctx,
- watch_name='HttpFailureAlarm',
- rule=rule,
- watch_data=[],
- stack_id=self.stack_id,
- state='NORMAL'))
- self.wr[0].store()
-
- self.wr.append(watchrule.WatchRule(context=self.ctx,
- watch_name='AnotherWatch',
- rule=rule,
- watch_data=[],
- stack_id=self.stack_id,
- state='NORMAL'))
- self.wr[1].store()
-
- # Then use WatchRule.load() to retrieve each by name
- # and check that the object properties match the data above
- for wn in ('HttpFailureAlarm', 'AnotherWatch'):
- wr = watchrule.WatchRule.load(self.ctx, wn)
- self.assertEqual(type(wr), watchrule.WatchRule)
- self.assertEqual(wr.name, wn)
- self.assertEqual(wr.state, 'NORMAL')
- self.assertEqual(wr.rule, rule)
- self.assertEqual(wr.timeperiod, datetime.timedelta(
- seconds=int(rule['Period'])))
-
- @utils.wr_delete_after
- def test_store(self):
- rule = {u'EvaluationPeriods': u'1',
- u'AlarmActions': [u'WebServerRestartPolicy'],
- u'AlarmDescription': u'Restart the WikiDatabase',
- u'Namespace': u'system/linux',
- u'Period': u'300',
- u'ComparisonOperator': u'GreaterThanThreshold',
- u'Statistic': u'SampleCount',
- u'Threshold': u'2',
- u'MetricName': u'ServiceFailure'}
- self.wr = watchrule.WatchRule(context=self.ctx, watch_name='storetest',
- stack_id=self.stack_id, rule=rule)
- self.wr.store()
-
- dbwr = db_api.watch_rule_get_by_name(self.ctx, 'storetest')
- self.assertNotEqual(dbwr, None)
- self.assertEqual(dbwr.name, 'storetest')
- self.assertEqual(dbwr.state, watchrule.WatchRule.NODATA)
- self.assertEqual(dbwr.rule, rule)
-
- @utils.wr_delete_after
- def test_evaluate(self):
- rule = {'EvaluationPeriods': '1',
- 'MetricName': 'test_metric',
- 'Period': '300',
- 'Statistic': 'Maximum',
- 'ComparisonOperator': 'GreaterThanOrEqualToThreshold',
- 'Threshold': '30'}
-
- now = timeutils.utcnow()
- self.m.StubOutWithMock(timeutils, 'utcnow')
- timeutils.utcnow().MultipleTimes().AndReturn(now)
- self.m.ReplayAll()
-
- # It's not time to evaluate, so should stay NODATA
- last = now - datetime.timedelta(seconds=299)
- data = WatchData(25, now - datetime.timedelta(seconds=150))
- self.wr = watchrule.WatchRule(context=self.ctx,
- watch_name="testwatch",
- rule=rule,
- watch_data=[data],
- stack_id=self.stack_id,
- last_evaluated=last)
-
- actions = self.wr.evaluate()
- self.assertEqual(self.wr.state, 'NODATA')
- self.assertEqual(actions, [])
-
- # now - last == Period, so should set NORMAL
- last = now - datetime.timedelta(seconds=300)
- data = WatchData(25, now - datetime.timedelta(seconds=150))
- self.wr = watchrule.WatchRule(context=self.ctx,
- watch_name="testwatch",
- rule=rule,
- watch_data=[data],
- stack_id=self.stack_id,
- last_evaluated=last)
-
- actions = self.wr.evaluate()
- self.assertEqual(self.wr.state, 'NORMAL')
- self.assertEqual(self.wr.last_evaluated, now)
- self.assertEqual(actions, [])
-
- # Now data breaches Threshold, so should set ALARM
- last = now - datetime.timedelta(seconds=300)
- data = WatchData(35, now - datetime.timedelta(seconds=150))
- self.wr = watchrule.WatchRule(context=self.ctx,
- watch_name="testwatch",
- rule=rule,
- watch_data=[data],
- stack_id=self.stack_id,
- last_evaluated=last)
-
- actions = self.wr.evaluate()
- self.assertEqual(self.wr.state, 'ALARM')
- self.assertEqual(self.wr.last_evaluated, now)
- self.assertEqual(actions, [])
-
- @utils.wr_delete_after
- def test_evaluate_suspend(self):
- rule = {'EvaluationPeriods': '1',
- 'MetricName': 'test_metric',
- 'Period': '300',
- 'Statistic': 'Maximum',
- 'ComparisonOperator': 'GreaterThanOrEqualToThreshold',
- 'Threshold': '30'}
-
- now = timeutils.utcnow()
- self.m.StubOutWithMock(timeutils, 'utcnow')
- timeutils.utcnow().MultipleTimes().AndReturn(now)
- self.m.ReplayAll()
-
- # Now data breaches Threshold, but we're suspended
- last = now - datetime.timedelta(seconds=300)
- data = WatchData(35, now - datetime.timedelta(seconds=150))
- self.wr = watchrule.WatchRule(context=self.ctx,
- watch_name="testwatch",
- rule=rule,
- watch_data=[data],
- stack_id=self.stack_id,
- last_evaluated=last)
-
- self.wr.state_set(self.wr.SUSPENDED)
-
- actions = self.wr.evaluate()
- self.assertEqual(self.wr.state, self.wr.SUSPENDED)
- self.assertEqual(actions, [])
-
- @utils.wr_delete_after
- def test_rule_actions_alarm_normal(self):
- rule = {'EvaluationPeriods': '1',
- 'MetricName': 'test_metric',
- 'AlarmActions': ['DummyAction'],
- 'Period': '300',
- 'Statistic': 'Maximum',
- 'ComparisonOperator': 'GreaterThanOrEqualToThreshold',
- 'Threshold': '30'}
-
- now = timeutils.utcnow()
- self._action_set_stubs(now, action_expected=False)
-
- # Set data so rule evaluates to NORMAL state
- last = now - datetime.timedelta(seconds=300)
- data = WatchData(25, now - datetime.timedelta(seconds=150))
- self.wr = watchrule.WatchRule(context=self.ctx,
- watch_name="testwatch",
- rule=rule,
- watch_data=[data],
- stack_id=self.stack_id,
- last_evaluated=last)
-
- actions = self.wr.evaluate()
- self.assertEqual(self.wr.state, 'NORMAL')
- self.assertEqual(actions, [])
- self.m.VerifyAll()
-
- @utils.wr_delete_after
- def test_rule_actions_alarm_alarm(self):
- rule = {'EvaluationPeriods': '1',
- 'MetricName': 'test_metric',
- 'AlarmActions': ['DummyAction'],
- 'Period': '300',
- 'Statistic': 'Maximum',
- 'ComparisonOperator': 'GreaterThanOrEqualToThreshold',
- 'Threshold': '30'}
-
- now = timeutils.utcnow()
- self._action_set_stubs(now)
-
- # Set data so rule evaluates to ALARM state
- last = now - datetime.timedelta(seconds=300)
- data = WatchData(35, now - datetime.timedelta(seconds=150))
- self.wr = watchrule.WatchRule(context=self.ctx,
- watch_name="testwatch",
- rule=rule,
- watch_data=[data],
- stack_id=self.stack_id,
- last_evaluated=last)
-
- actions = self.wr.evaluate()
- self.assertEqual(self.wr.state, 'ALARM')
- self.assertEqual(actions, ['DummyAction'])
-
- # re-set last_evaluated so the rule will be evaluated again.
- last = now - datetime.timedelta(seconds=300)
- self.wr.last_evaluated = last
- actions = self.wr.evaluate()
- self.assertEqual(self.wr.state, 'ALARM')
- self.assertEqual(actions, ['DummyAction'])
- self.m.VerifyAll()
-
- @utils.wr_delete_after
- def test_rule_actions_alarm_two_actions(self):
- rule = {'EvaluationPeriods': '1',
- 'MetricName': 'test_metric',
- 'AlarmActions': ['DummyAction', 'AnotherDummyAction'],
- 'Period': '300',
- 'Statistic': 'Maximum',
- 'ComparisonOperator': 'GreaterThanOrEqualToThreshold',
- 'Threshold': '30'}
-
- now = timeutils.utcnow()
- self._action_set_stubs(now)
-
- # Set data so rule evaluates to ALARM state
- last = now - datetime.timedelta(seconds=300)
- data = WatchData(35, now - datetime.timedelta(seconds=150))
- self.wr = watchrule.WatchRule(context=self.ctx,
- watch_name="testwatch",
- rule=rule,
- watch_data=[data],
- stack_id=self.stack_id,
- last_evaluated=last)
-
- actions = self.wr.evaluate()
- self.assertEqual(self.wr.state, 'ALARM')
- # every configured action resolves to the same stubbed DummyAction,
- # hence two identical entries rather than the raw AlarmActions list
- self.assertEqual(actions, ['DummyAction', 'DummyAction'])
- self.m.VerifyAll()
-
- @utils.wr_delete_after
- def test_rule_actions_ok_alarm(self):
- rule = {'EvaluationPeriods': '1',
- 'MetricName': 'test_metric',
- 'OKActions': ['DummyAction'],
- 'Period': '300',
- 'Statistic': 'Maximum',
- 'ComparisonOperator': 'GreaterThanOrEqualToThreshold',
- 'Threshold': '30'}
-
- now = timeutils.utcnow()
- self._action_set_stubs(now, action_expected=False)
-
- # On creation the rule evaluates to NODATA state
- last = now - datetime.timedelta(seconds=300)
- self.wr = watchrule.WatchRule(context=self.ctx,
- watch_name="testwatch",
- rule=rule,
- watch_data=[],
- stack_id=self.stack_id,
- last_evaluated=last)
-
- actions = self.wr.evaluate()
- self.assertEqual(self.wr.state, 'NODATA')
- self.assertEqual(actions, [])
-
- # Move time forward and add data below threshold so we transition from
- # NODATA -> NORMAL; evaluate() should then output 'DummyAction'
- now = now + datetime.timedelta(seconds=300)
- self.m.VerifyAll()
- self.m.UnsetStubs()
- self._action_set_stubs(now)
-
- data = WatchData(25, now - datetime.timedelta(seconds=150))
- self.wr.watch_data = [data]
-
- actions = self.wr.evaluate()
- self.assertEqual(self.wr.state, 'NORMAL')
- self.assertEqual(actions, ['DummyAction'])
- self.m.VerifyAll()
-
- @utils.wr_delete_after
- def test_rule_actions_nodata(self):
- rule = {'EvaluationPeriods': '1',
- 'MetricName': 'test_metric',
- 'InsufficientDataActions': ['DummyAction'],
- 'Period': '300',
- 'Statistic': 'Maximum',
- 'ComparisonOperator': 'GreaterThanOrEqualToThreshold',
- 'Threshold': '30'}
-
- now = timeutils.utcnow()
- self._action_set_stubs(now, action_expected=False)
-
- # Set data so rule evaluates to ALARM state
- last = now - datetime.timedelta(seconds=300)
- data = WatchData(35, now - datetime.timedelta(seconds=150))
- self.wr = watchrule.WatchRule(context=self.ctx,
- watch_name="testwatch",
- rule=rule,
- watch_data=[data],
- stack_id=self.stack_id,
- last_evaluated=last)
-
- actions = self.wr.evaluate()
- self.assertEqual(self.wr.state, 'ALARM')
- self.assertEqual(actions, [])
-
- # Move time forward and don't add data so we transition from
- # ALARM -> NODATA; evaluate() should then output 'DummyAction'
- now = now + datetime.timedelta(seconds=300)
- self.m.VerifyAll()
- self.m.UnsetStubs()
- self._action_set_stubs(now)
-
- actions = self.wr.evaluate()
- self.assertEqual(self.wr.state, 'NODATA')
- self.assertEqual(actions, ['DummyAction'])
- self.m.VerifyAll()
-
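The watch rules exercised above compare an aggregated metric value against `Threshold` using `ComparisonOperator`. As a standalone sketch of that comparison (helper names here are hypothetical, not Heat's actual API):

```python
import operator

# Map the CloudWatch-style operator names used in the rules above to
# Python comparison functions.
OPERATORS = {
    'GreaterThanThreshold': operator.gt,
    'GreaterThanOrEqualToThreshold': operator.ge,
    'LessThanThreshold': operator.lt,
    'LessThanOrEqualToThreshold': operator.le,
}


def alarm_state(rule, value):
    """Return 'ALARM' or 'NORMAL' for an aggregated sample value."""
    compare = OPERATORS[rule['ComparisonOperator']]
    return 'ALARM' if compare(value, float(rule['Threshold'])) else 'NORMAL'


rule = {'ComparisonOperator': 'GreaterThanOrEqualToThreshold',
        'Threshold': '30'}
print(alarm_state(rule, 35))  # ALARM, as in test_rule_actions_alarm_alarm
print(alarm_state(rule, 25))  # NORMAL, as in test_rule_actions_alarm_normal
```

This mirrors why a value of 35 trips the alarm and 25 does not in the tests above; the real evaluation also aggregates samples per `Statistic` and `Period` first.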
- @utils.wr_delete_after
- def test_create_watch_data(self):
- rule = {u'EvaluationPeriods': u'1',
- u'AlarmDescription': u'test alarm',
- u'Period': u'300',
- u'ComparisonOperator': u'GreaterThanThreshold',
- u'Statistic': u'SampleCount',
- u'Threshold': u'2',
- u'MetricName': u'CreateDataMetric'}
- self.wr = watchrule.WatchRule(context=self.ctx,
- watch_name='create_data_test',
- stack_id=self.stack_id, rule=rule)
-
- self.wr.store()
-
- data = {u'CreateDataMetric': {"Unit": "Counter",
- "Value": "1",
- "Dimensions": []}}
- self.wr.create_watch_data(data)
-
- dbwr = db_api.watch_rule_get_by_name(self.ctx, 'create_data_test')
- self.assertEqual(dbwr.watch_data[0].data, data)
-
- # Note, would be good to write another datapoint and check it
- # but sqlite seems to not interpret the backreference correctly
- # so dbwr.watch_data is always a list containing only the latest
- # datapoint. In non-test use on mysql this is not the case, we
- # correctly get a list of all datapoints where watch_rule_id ==
- # watch_rule.id, so leave it as a single-datapoint test for now.
-
- @utils.wr_delete_after
- def test_create_watch_data_suspended(self):
- rule = {u'EvaluationPeriods': u'1',
- u'AlarmDescription': u'test alarm',
- u'Period': u'300',
- u'ComparisonOperator': u'GreaterThanThreshold',
- u'Statistic': u'SampleCount',
- u'Threshold': u'2',
- u'MetricName': u'CreateDataMetric'}
- self.wr = watchrule.WatchRule(context=self.ctx,
- watch_name='create_data_test',
- stack_id=self.stack_id, rule=rule,
- state=watchrule.WatchRule.SUSPENDED)
-
- self.wr.store()
-
- data = {u'CreateDataMetric': {"Unit": "Counter",
- "Value": "1",
- "Dimensions": []}}
- self.wr.create_watch_data(data)
-
- dbwr = db_api.watch_rule_get_by_name(self.ctx, 'create_data_test')
- self.assertEqual(dbwr.watch_data, [])
-
- @utils.wr_delete_after
- def test_create_watch_data_match(self):
- rule = {u'EvaluationPeriods': u'1',
- u'AlarmDescription': u'test alarm',
- u'Period': u'300',
- u'ComparisonOperator': u'GreaterThanThreshold',
- u'Statistic': u'SampleCount',
- u'Threshold': u'2',
- u'Dimensions': [{u'Name': 'AutoScalingGroupName',
- u'Value': 'group_x'}],
- u'MetricName': u'CreateDataMetric'}
- self.wr = watchrule.WatchRule(context=self.ctx,
- watch_name='create_data_test',
- stack_id=self.stack_id, rule=rule)
- self.wr.store()
-
- data = {u'CreateDataMetric': {"Unit": "Counter",
- "Value": "1",
- "Dimensions": [{u'AutoScalingGroupName':
- u'group_x'}]}}
- self.assertTrue(watchrule.rule_can_use_sample(self.wr, data))
-
- @utils.wr_delete_after
- def test_create_watch_data_match_2(self):
- rule = {u'EvaluationPeriods': u'1',
- u'AlarmDescription': u'test alarm',
- u'Period': u'300',
- u'ComparisonOperator': u'GreaterThanThreshold',
- u'Statistic': u'SampleCount',
- u'Threshold': u'2',
- u'Dimensions': [{u'Name': 'AutoScalingGroupName',
- u'Value': 'group_x'}],
- u'MetricName': u'CreateDataMetric'}
- self.wr = watchrule.WatchRule(context=self.ctx,
- watch_name='create_data_test',
- stack_id=self.stack_id, rule=rule)
- self.wr.store()
-
- data = {u'not_interesting': {"Unit": "Counter",
- "Value": "1",
- "Dimensions": [
- {u'AutoScalingGroupName':
- u'group_x'}]},
- u'CreateDataMetric': {"Unit": "Counter",
- "Value": "1",
- "Dimensions": [
- {u'AutoScalingGroupName':
- u'group_x'}]}}
- self.assertTrue(watchrule.rule_can_use_sample(self.wr, data))
-
- def test_create_watch_data_match_3(self):
- rule = {u'EvaluationPeriods': u'1',
- u'AlarmDescription': u'test alarm',
- u'Period': u'300',
- u'ComparisonOperator': u'GreaterThanThreshold',
- u'Statistic': u'SampleCount',
- u'Threshold': u'2',
- u'Dimensions': [{u'Name': 'AutoScalingGroupName',
- u'Value': 'group_x'}],
- u'MetricName': u'CreateDataMetric'}
- self.wr = watchrule.WatchRule(context=self.ctx,
- watch_name='create_data_test',
- stack_id=self.stack_id, rule=rule)
- self.wr.store()
-
- # NB: a dict literal cannot hold two u'CreateDataMetric' keys, so
- # only this (matching) sample reaches rule_can_use_sample
- data = {u'CreateDataMetric': {"Unit": "Counter",
- "Value": "1",
- "Dimensions": [
- {u'AutoScalingGroupName':
- u'group_x'}]}}
- self.assertTrue(watchrule.rule_can_use_sample(self.wr, data))
-
- def test_create_watch_data_not_match_metric(self):
- rule = {u'EvaluationPeriods': u'1',
- u'AlarmDescription': u'test alarm',
- u'Period': u'300',
- u'ComparisonOperator': u'GreaterThanThreshold',
- u'Statistic': u'SampleCount',
- u'Threshold': u'2',
- u'Dimensions': [{u'Name': 'AutoScalingGroupName',
- u'Value': 'group_x'}],
- u'MetricName': u'CreateDataMetric'}
- self.wr = watchrule.WatchRule(context=self.ctx,
- watch_name='create_data_test',
- stack_id=self.stack_id, rule=rule)
- self.wr.store()
-
- data = {u'not_this': {"Unit": "Counter",
- "Value": "1",
- "Dimensions": [
- {u'AutoScalingGroupName':
- u'group_x'}]},
- u'nor_this': {"Unit": "Counter",
- "Value": "1",
- "Dimensions": [
- {u'AutoScalingGroupName':
- u'group_x'}]}}
- self.assertFalse(watchrule.rule_can_use_sample(self.wr, data))
-
- def test_create_watch_data_not_match_dimensions(self):
- rule = {u'EvaluationPeriods': u'1',
- u'AlarmDescription': u'test alarm',
- u'Period': u'300',
- u'ComparisonOperator': u'GreaterThanThreshold',
- u'Statistic': u'SampleCount',
- u'Threshold': u'2',
- u'Dimensions': [{u'Name': 'AutoScalingGroupName',
- u'Value': 'group_x'}],
- u'MetricName': u'CreateDataMetric'}
- self.wr = watchrule.WatchRule(context=self.ctx,
- watch_name='create_data_test',
- stack_id=self.stack_id, rule=rule)
- self.wr.store()
-
- # NB: a dict literal cannot hold two u'CreateDataMetric' keys, so
- # only this sample (whose dimension key does not match) reaches
- # rule_can_use_sample
- data = {u'CreateDataMetric': {"Unit": "Counter",
- "Value": "1",
- "Dimensions": [
- {u'wrong_key':
- u'group_x'}]}}
- self.assertFalse(watchrule.rule_can_use_sample(self.wr, data))
-
- def test_destroy(self):
- rule = {'EvaluationPeriods': '1',
- 'MetricName': 'test_metric',
- 'AlarmActions': ['DummyAction'],
- 'Period': '300',
- 'Statistic': 'Maximum',
- 'ComparisonOperator': 'GreaterThanOrEqualToThreshold',
- 'Threshold': '30'}
-
- last = timeutils.utcnow()
- self.wr = watchrule.WatchRule(context=self.ctx,
- watch_name="testwatch_destroy",
- rule=rule,
- watch_data=[],
- stack_id=self.stack_id,
- last_evaluated=last)
-
- self.wr.store()
-
- check = watchrule.WatchRule.load(context=self.ctx,
- watch_name="testwatch_destroy")
- self.assertIsInstance(check, watchrule.WatchRule)
-
- self.wr.destroy()
- self.assertRaises(exception.WatchRuleNotFound,
- watchrule.WatchRule.load, context=self.ctx,
- watch_name="testwatch_destroy")
-
- def test_state_set(self):
- rule = {'EvaluationPeriods': '1',
- 'MetricName': 'test_metric',
- 'AlarmActions': ['DummyAction'],
- 'Period': '300',
- 'Statistic': 'Maximum',
- 'ComparisonOperator': 'GreaterThanOrEqualToThreshold',
- 'Threshold': '30'}
-
- last = timeutils.utcnow()
- watcher = watchrule.WatchRule(context=self.ctx,
- watch_name="testwatch_set_state",
- rule=rule,
- watch_data=[],
- stack_id=self.stack_id,
- last_evaluated=last)
-
- watcher.state_set(watcher.SUSPENDED)
- self.assertEqual(watcher.state, watcher.SUSPENDED)
-
- check = watchrule.WatchRule.load(context=self.ctx,
- watch_name="testwatch_set_state")
- self.assertEqual(check.state, watchrule.WatchRule.SUSPENDED)
-
- def test_set_watch_state(self):
- rule = {'EvaluationPeriods': '1',
- 'MetricName': 'test_metric',
- 'AlarmActions': ['DummyAction'],
- 'Period': '300',
- 'Statistic': 'Maximum',
- 'ComparisonOperator': 'GreaterThanOrEqualToThreshold',
- 'Threshold': '30'}
-
- now = timeutils.utcnow()
- self._action_set_stubs(now)
-
- # No watch data; states are driven directly via set_watch_state
- last = now - datetime.timedelta(seconds=200)
- self.wr = watchrule.WatchRule(context=self.ctx,
- watch_name="testwatch",
- rule=rule,
- watch_data=[],
- stack_id=self.stack_id,
- last_evaluated=last)
-
- actions = self.wr.set_watch_state(watchrule.WatchRule.NODATA)
- self.assertEqual(actions, [])
-
- actions = self.wr.set_watch_state(watchrule.WatchRule.NORMAL)
- self.assertEqual(actions, [])
-
- actions = self.wr.set_watch_state(watchrule.WatchRule.ALARM)
- self.assertEqual(actions, ['DummyAction'])
- self.m.VerifyAll()
-
- def test_set_watch_state_invalid(self):
- rule = {'EvaluationPeriods': '1',
- 'MetricName': 'test_metric',
- 'AlarmActions': ['DummyAction'],
- 'Period': '300',
- 'Statistic': 'Maximum',
- 'ComparisonOperator': 'GreaterThanOrEqualToThreshold',
- 'Threshold': '30'}
-
- now = timeutils.utcnow()
-
- last = now - datetime.timedelta(seconds=200)
- self.wr = watchrule.WatchRule(context=self.ctx,
- watch_name="testwatch",
- rule=rule,
- watch_data=[],
- stack_id=self.stack_id,
- last_evaluated=last)
-
- self.assertRaises(ValueError, self.wr.set_watch_state, None)
-
- self.assertRaises(ValueError, self.wr.set_watch_state, "BADSTATE")
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Copyright 2010-2011 OpenStack Foundation
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-
-import datetime
-import json
-from oslo.config import cfg
-import stubout
-import webob
-
-from heat.common import exception
-from heat.common import wsgi
-from heat.tests.common import HeatTestCase
-
-
-class RequestTest(HeatTestCase):
-
- def setUp(self):
- self.stubs = stubout.StubOutForTesting()
- super(RequestTest, self).setUp()
-
- def test_content_type_missing(self):
- request = wsgi.Request.blank('/tests/123')
- self.assertRaises(exception.InvalidContentType,
- request.get_content_type, ('application/xml'))
-
- def test_content_type_unsupported(self):
- request = wsgi.Request.blank('/tests/123')
- request.headers["Content-Type"] = "text/html"
- self.assertRaises(exception.InvalidContentType,
- request.get_content_type, ('application/xml'))
-
- def test_content_type_with_charset(self):
- request = wsgi.Request.blank('/tests/123')
- request.headers["Content-Type"] = "application/json; charset=UTF-8"
- result = request.get_content_type(('application/json'))
- self.assertEqual(result, "application/json")
-
- def test_content_type_from_accept_xml(self):
- request = wsgi.Request.blank('/tests/123')
- request.headers["Accept"] = "application/xml"
- result = request.best_match_content_type()
- self.assertEqual(result, "application/json")
-
- def test_content_type_from_accept_json(self):
- request = wsgi.Request.blank('/tests/123')
- request.headers["Accept"] = "application/json"
- result = request.best_match_content_type()
- self.assertEqual(result, "application/json")
-
- def test_content_type_from_accept_xml_json(self):
- request = wsgi.Request.blank('/tests/123')
- request.headers["Accept"] = "application/xml, application/json"
- result = request.best_match_content_type()
- self.assertEqual(result, "application/json")
-
- def test_content_type_from_accept_json_xml_quality(self):
- request = wsgi.Request.blank('/tests/123')
- request.headers["Accept"] = ("application/json; q=0.3, "
- "application/xml; q=0.9")
- result = request.best_match_content_type()
- self.assertEqual(result, "application/json")
-
- def test_content_type_accept_default(self):
- request = wsgi.Request.blank('/tests/123.unsupported')
- request.headers["Accept"] = "application/unsupported1"
- result = request.best_match_content_type()
- self.assertEqual(result, "application/json")
-
- def test_best_match_language(self):
- # Test that we are actually invoking language negotiation via webob
- request = wsgi.Request.blank('/')
- accepted = 'unknown-lang'
- request.headers = {'Accept-Language': accepted}
-
- def fake_best_match(self, offers, default_match=None):
- # Best match on an unknown locale returns None
- return None
-
- self.stubs.SmartSet(request.accept_language,
- 'best_match', fake_best_match)
-
- self.assertEqual(request.best_match_language(), None)
-
- # If Accept-Language is missing or empty, match should be None
- request.headers = {'Accept-Language': ''}
- self.assertEqual(request.best_match_language(), None)
- request.headers.pop('Accept-Language')
- self.assertEqual(request.best_match_language(), None)
-
-
-class ResourceTest(HeatTestCase):
-
- def setUp(self):
- self.stubs = stubout.StubOutForTesting()
- super(ResourceTest, self).setUp()
-
- def test_get_action_args(self):
- env = {
- 'wsgiorg.routing_args': [
- None,
- {
- 'controller': None,
- 'format': None,
- 'action': 'update',
- 'id': 12,
- },
- ],
- }
-
- expected = {'action': 'update', 'id': 12}
- actual = wsgi.Resource(None, None, None).get_action_args(env)
-
- self.assertEqual(actual, expected)
-
- def test_get_action_args_invalid_index(self):
- env = {'wsgiorg.routing_args': []}
- expected = {}
- actual = wsgi.Resource(None, None, None).get_action_args(env)
- self.assertEqual(actual, expected)
-
- def test_get_action_args_del_controller_error(self):
- actions = {'format': None,
- 'action': 'update',
- 'id': 12}
- env = {'wsgiorg.routing_args': [None, actions]}
- expected = {'action': 'update', 'id': 12}
- actual = wsgi.Resource(None, None, None).get_action_args(env)
- self.assertEqual(actual, expected)
-
- def test_get_action_args_del_format_error(self):
- actions = {'action': 'update', 'id': 12}
- env = {'wsgiorg.routing_args': [None, actions]}
- expected = {'action': 'update', 'id': 12}
- actual = wsgi.Resource(None, None, None).get_action_args(env)
- self.assertEqual(actual, expected)
-
- def test_dispatch(self):
- class Controller(object):
- def index(self, shirt, pants=None):
- return (shirt, pants)
-
- resource = wsgi.Resource(None, None, None)
- actual = resource.dispatch(Controller(), 'index', 'on', pants='off')
- expected = ('on', 'off')
- self.assertEqual(actual, expected)
-
- def test_dispatch_default(self):
- class Controller(object):
- def default(self, shirt, pants=None):
- return (shirt, pants)
-
- resource = wsgi.Resource(None, None, None)
- actual = resource.dispatch(Controller(), 'index', 'on', pants='off')
- expected = ('on', 'off')
- self.assertEqual(actual, expected)
-
- def test_dispatch_no_default(self):
- class Controller(object):
- def show(self, shirt, pants=None):
- return (shirt, pants)
-
- resource = wsgi.Resource(None, None, None)
- self.assertRaises(AttributeError, resource.dispatch, Controller(),
- 'index', 'on', pants='off')
-
- def test_resource_call_error_handle(self):
- class Controller(object):
- def delete(self, req, identity):
- return (req, identity)
-
- actions = {'action': 'delete', 'id': 12, 'body': 'data'}
- env = {'wsgiorg.routing_args': [None, actions]}
- request = wsgi.Request.blank('/tests/123', environ=env)
- request.body = '{"foo" : "value"}'
- resource = wsgi.Resource(Controller(),
- wsgi.JSONRequestDeserializer(),
- None)
- # The Resource does not raise webob.HTTPExceptions directly, since
- # wsgi would treat them as responses and end the request flow.
- # Instead they are wrapped so they can reach the fault application,
- # where they are converted to a nice JSON/XML response
- e = self.assertRaises(exception.HTTPExceptionDisguise,
- resource, request)
- self.assertIsInstance(e.exc, webob.exc.HTTPBadRequest)
-
- def test_resource_call_error_handle_localized(self):
- class Controller(object):
- def delete(self, req, identity):
- return (req, identity)
-
- actions = {'action': 'delete', 'id': 12, 'body': 'data'}
- env = {'wsgiorg.routing_args': [None, actions]}
- request = wsgi.Request.blank('/tests/123', environ=env)
- request.body = '{"foo" : "value"}'
- message_es = "No Encontrado"
- translated_ex = webob.exc.HTTPBadRequest(message_es)
-
- resource = wsgi.Resource(Controller(),
- wsgi.JSONRequestDeserializer(),
- None)
-
- def fake_translate_exception(ex, locale):
- return translated_ex
-
- self.stubs.SmartSet(wsgi,
- 'translate_exception', fake_translate_exception)
-
- e = self.assertRaises(exception.HTTPExceptionDisguise,
- resource, request)
- self.assertEqual(message_es, e.exc.message)
- self.m.VerifyAll()
-
-
-class JSONResponseSerializerTest(HeatTestCase):
-
- def test_to_json(self):
- fixture = {"key": "value"}
- expected = '{"key": "value"}'
- actual = wsgi.JSONResponseSerializer().to_json(fixture)
- self.assertEqual(actual, expected)
-
- def test_to_json_with_date_format_value(self):
- fixture = {"date": datetime.datetime(1, 3, 8, 2)}
- expected = '{"date": "0001-03-08T02:00:00"}'
- actual = wsgi.JSONResponseSerializer().to_json(fixture)
- self.assertEqual(actual, expected)
-
- def test_to_json_with_more_deep_format(self):
- fixture = {"is_public": True, "name": [{"name1": "test"}]}
- expected = '{"is_public": true, "name": [{"name1": "test"}]}'
- actual = wsgi.JSONResponseSerializer().to_json(fixture)
- self.assertEqual(actual, expected)
-
- def test_default(self):
- fixture = {"key": "value"}
- response = webob.Response()
- wsgi.JSONResponseSerializer().default(response, fixture)
- self.assertEqual(response.status_int, 200)
- content_types = filter(lambda h: h[0] == 'Content-Type',
- response.headerlist)
- self.assertEqual(len(content_types), 1)
- self.assertEqual(response.content_type, 'application/json')
- self.assertEqual(response.body, '{"key": "value"}')
-
-
-class JSONRequestDeserializerTest(HeatTestCase):
-
- def test_has_body_no_content_length(self):
- request = wsgi.Request.blank('/')
- request.method = 'POST'
- request.body = 'asdf'
- request.headers.pop('Content-Length')
- request.headers['Content-Type'] = 'application/json'
- self.assertFalse(wsgi.JSONRequestDeserializer().has_body(request))
-
- def test_has_body_zero_content_length(self):
- request = wsgi.Request.blank('/')
- request.method = 'POST'
- request.body = 'asdf'
- request.headers['Content-Length'] = 0
- request.headers['Content-Type'] = 'application/json'
- self.assertFalse(wsgi.JSONRequestDeserializer().has_body(request))
-
- def test_has_body_has_content_length_no_content_type(self):
- request = wsgi.Request.blank('/')
- request.method = 'POST'
- request.body = '{"key": "value"}'
- self.assertTrue('Content-Length' in request.headers)
- self.assertTrue(wsgi.JSONRequestDeserializer().has_body(request))
-
- def test_has_body_has_content_length_plain_content_type(self):
- request = wsgi.Request.blank('/')
- request.method = 'POST'
- request.body = '{"key": "value"}'
- self.assertTrue('Content-Length' in request.headers)
- request.headers['Content-Type'] = 'text/plain'
- self.assertTrue(wsgi.JSONRequestDeserializer().has_body(request))
-
- def test_has_body_has_content_type_malformed(self):
- request = wsgi.Request.blank('/')
- request.method = 'POST'
- request.body = 'asdf'
- self.assertTrue('Content-Length' in request.headers)
- request.headers['Content-Type'] = 'application/json'
- self.assertFalse(wsgi.JSONRequestDeserializer().has_body(request))
-
- def test_has_body_has_content_type(self):
- request = wsgi.Request.blank('/')
- request.method = 'POST'
- request.body = '{"key": "value"}'
- self.assertTrue('Content-Length' in request.headers)
- request.headers['Content-Type'] = 'application/json'
- self.assertTrue(wsgi.JSONRequestDeserializer().has_body(request))
-
- def test_has_body_has_wrong_content_type(self):
- request = wsgi.Request.blank('/')
- request.method = 'POST'
- request.body = '{"key": "value"}'
- self.assertTrue('Content-Length' in request.headers)
- request.headers['Content-Type'] = 'application/xml'
- self.assertFalse(wsgi.JSONRequestDeserializer().has_body(request))
-
- def test_has_body_has_aws_content_type_only(self):
- request = wsgi.Request.blank('/?ContentType=JSON')
- request.method = 'GET'
- request.body = '{"key": "value"}'
- self.assertTrue('Content-Length' in request.headers)
- self.assertTrue(wsgi.JSONRequestDeserializer().has_body(request))
-
- def test_has_body_respect_aws_content_type(self):
- request = wsgi.Request.blank('/?ContentType=JSON')
- request.method = 'GET'
- request.body = '{"key": "value"}'
- self.assertTrue('Content-Length' in request.headers)
- request.headers['Content-Type'] = 'application/xml'
- self.assertTrue(wsgi.JSONRequestDeserializer().has_body(request))
-
- def test_has_body_content_type_with_get(self):
- request = wsgi.Request.blank('/')
- request.method = 'GET'
- request.body = '{"key": "value"}'
- self.assertTrue('Content-Length' in request.headers)
- self.assertTrue(wsgi.JSONRequestDeserializer().has_body(request))
-
- def test_no_body_no_content_length(self):
- request = wsgi.Request.blank('/')
- self.assertFalse(wsgi.JSONRequestDeserializer().has_body(request))
-
- def test_from_json(self):
- fixture = '{"key": "value"}'
- expected = {"key": "value"}
- actual = wsgi.JSONRequestDeserializer().from_json(fixture)
- self.assertEqual(actual, expected)
-
- def test_from_json_malformed(self):
- fixture = 'kjasdklfjsklajf'
- self.assertRaises(webob.exc.HTTPBadRequest,
- wsgi.JSONRequestDeserializer().from_json, fixture)
-
- def test_default_no_body(self):
- request = wsgi.Request.blank('/')
- actual = wsgi.JSONRequestDeserializer().default(request)
- expected = {}
- self.assertEqual(actual, expected)
-
- def test_default_with_body(self):
- request = wsgi.Request.blank('/')
- request.method = 'POST'
- request.body = '{"key": "value"}'
- actual = wsgi.JSONRequestDeserializer().default(request)
- expected = {"body": {"key": "value"}}
- self.assertEqual(actual, expected)
-
- def test_default_with_get_with_body(self):
- request = wsgi.Request.blank('/')
- request.method = 'GET'
- request.body = '{"key": "value"}'
- actual = wsgi.JSONRequestDeserializer().default(request)
- expected = {"body": {"key": "value"}}
- self.assertEqual(actual, expected)
-
- def test_default_with_get_with_body_with_aws(self):
- request = wsgi.Request.blank('/?ContentType=JSON')
- request.method = 'GET'
- request.body = '{"key": "value"}'
- actual = wsgi.JSONRequestDeserializer().default(request)
- expected = {"body": {"key": "value"}}
- self.assertEqual(actual, expected)
-
- def test_from_json_exceeds_max_json_mb(self):
- cfg.CONF.set_override('max_json_body_size', 10)
- body = json.dumps(['a'] * cfg.CONF.max_json_body_size)
- self.assertTrue(len(body) > cfg.CONF.max_json_body_size)
- error = self.assertRaises(exception.RequestLimitExceeded,
- wsgi.JSONRequestDeserializer().from_json,
- body)
- msg = 'Request limit exceeded: JSON body size ' + \
- '(%s bytes) exceeds maximum allowed size (%s bytes).' % \
- (len(body), cfg.CONF.max_json_body_size)
- self.assertEqual(msg, str(error))
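test_from_json_exceeds_max_json_mb above expects the deserializer to reject oversized bodies before parsing. A minimal standalone sketch of such a guard (the function name and hard-coded limit are assumptions for illustration, not wsgi's actual implementation):

```python
import json

MAX_JSON_BODY_SIZE = 10  # bytes; the real limit comes from cfg.CONF


def from_json(datastring):
    # Check the raw size before parsing, so a huge payload is
    # rejected without paying the cost of decoding it.
    if len(datastring) > MAX_JSON_BODY_SIZE:
        raise ValueError('JSON body size (%s bytes) exceeds maximum '
                         'allowed size (%s bytes).'
                         % (len(datastring), MAX_JSON_BODY_SIZE))
    return json.loads(datastring)


print(from_json('{"k": 1}'))
```

Checking `len()` before `json.loads()` is the point of the test: the limit must apply to the raw body, not to the parsed result.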
+++ /dev/null
-Heat testing
-------------
-
-All tests are to be placed in the heat/tests directory. The directory
-is organized by test type (unit, functional, etc.). Within each type
-directory one may create another directory for additional test files as
-well as a separate __init__.py, which should be blank.
-
-An example directory structure illustrating the above:
-
-heat/tests
-|-- examples
-| |-- __init__.py
-| |-- test1.py
-| |-- test2.py
-| |-- test3.py
-|-- __init__.py
-`-- unit
- |-- __init__.py
- |-- test_template_convert.py
-
-If a given test shares no requirements (variables or common routines)
-with other tests, it does not need its own subdirectory under the
-test type.
-
-Implementing a test
--------------------
-
-Testrepository - http://pypi.python.org/pypi/testrepository is used to
-find and run tests, parallelize their runs, and record timing/results.
-
-If new dependencies are introduced upon the development of a test, the
-tools/test-requires file needs to be updated so that the virtual
-environment will be able to successfully execute all tests.
-
-Running the tests
------------------
-
-During development, the simplest way to run tests is to invoke testr
-directly.
-
-$ testr run
-
-To run the tests in a clean virtual env in the same manner as the
-OpenStack testing infrastructure does, use tox.
-
-$ tox -epy27 # test suite on python 2.7
-$ tox -epy26 # test suite on python 2.6
-$ tox -epep8 # run full source code checker
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import functools
-import random
-import string
-import sys
-import uuid
-
-import sqlalchemy
-
-from heat.common import context
-from heat.common import exception
-from heat.engine import environment
-from heat.engine import parser
-
-from heat.db.sqlalchemy.session import get_engine
-from heat.db import migration
-
-
-class UUIDStub(object):
- def __init__(self, value):
- self.value = value
-
- def __enter__(self):
- self.uuid4 = uuid.uuid4
- uuid_stub = lambda: self.value
- uuid.uuid4 = uuid_stub
-
- def __exit__(self, *exc_info):
- uuid.uuid4 = self.uuid4
-
-
-def random_name():
- return ''.join(random.choice(string.ascii_uppercase)
- for x in range(10))
-
-
-def stack_delete_after(test_fn):
- """
- Decorator which calls self.stack.delete() on the test class to
- ensure tests clean up their stacks regardless of test success/failure.
- """
- @functools.wraps(test_fn)
- def wrapped_test(test_case, *args, **kwargs):
- def delete_stack():
- stack = getattr(test_case, 'stack', None)
- if stack is not None and stack.id is not None:
- stack.delete()
-
- try:
- test_fn(test_case, *args, **kwargs)
- except:
- exc_class, exc_val, exc_tb = sys.exc_info()
- try:
- delete_stack()
- finally:
- raise exc_class, exc_val, exc_tb
- else:
- delete_stack()
-
- return wrapped_test
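The same run-then-always-clean-up pattern can be sketched with try/finally, which in Python 3 replaces the three-expression raise used above. The decorator and attribute names here are illustrative only, and the behavior differs slightly if cleanup itself raises (its exception replaces the test's):

```python
import functools


def cleanup_after(test_fn):
    """Run the test, then always attempt cleanup; any test failure
    propagates after cleanup has been attempted."""
    @functools.wraps(test_fn)
    def wrapped(test_case, *args, **kwargs):
        try:
            return test_fn(test_case, *args, **kwargs)
        finally:
            cleanup = getattr(test_case, 'cleanup', None)
            if cleanup is not None:
                cleanup()
    return wrapped


class FakeCase(object):
    def __init__(self):
        self.cleaned = False

    def cleanup(self):
        self.cleaned = True

    @cleanup_after
    def test_something(self):
        return 'ok'


case = FakeCase()
case.test_something()
print(case.cleaned)  # True
```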
-
-
-def wr_delete_after(test_fn):
- """
- Decorator which calls self.wr.destroy() on the test class to ensure
- tests clean up their watch rules regardless of test success/failure.
- Used by tests which create watchrule objects directly.
- self.wr can be either a single watchrule or a list of watchrules.
- """
- @functools.wraps(test_fn)
- def wrapped_test(test_case, *args, **kwargs):
-
- def delete_wrs():
- wr = getattr(test_case, 'wr', None)
- try:
- for w in wr:
- delete_wr(w)
- except TypeError:
- delete_wr(wr)
-
- def delete_wr(w):
- if w.id is not None:
- try:
- w.destroy()
- except exception.NotFound:
- pass
- try:
- test_fn(test_case, *args, **kwargs)
- except:
- exc_class, exc_val, exc_tb = sys.exc_info()
- try:
- delete_wrs()
- finally:
- raise exc_class, exc_val, exc_tb
- else:
- delete_wrs()
-
- return wrapped_test
-
-
-def setup_dummy_db():
- migration.db_sync()
- engine = get_engine()
- engine.connect()
-
-
-def reset_dummy_db():
- engine = get_engine()
- meta = sqlalchemy.MetaData()
- meta.reflect(bind=engine)
-
- for table in reversed(meta.sorted_tables):
- if table.name == 'migrate_version':
- continue
- engine.execute(table.delete())
-
-
-def dummy_context(user='test_username', tenant_id='test_tenant_id',
- password='password', roles=None):
- # avoid a shared mutable default argument
- roles = roles or []
- return context.RequestContext.from_dict({
- 'tenant_id': tenant_id,
- 'tenant': 'test_tenant',
- 'username': user,
- 'password': password,
- 'roles': roles,
- 'trust_id': 'atrust123',
- 'auth_url': 'http://server.test:5000/v2.0',
- 'auth_token': 'abcd1234'
- })
-
-
- def parse_stack(t, params=None, stack_name='test_stack', stack_id=None):
- ctx = dummy_context()
- template = parser.Template(t)
- stack = parser.Stack(ctx, stack_name, template,
- environment.Environment(params or {}), stack_id)
- stack.store()
-
- return stack
-
-
-class PhysName(object):
- def __init__(self, stack_name, resource_name):
- self.stack_name = stack_name
- self.resource_name = resource_name
-
- def __eq__(self, physical_name):
- try:
- stack, res, short_id = str(physical_name).rsplit('-', 2)
- except ValueError:
- return False
-
- if self.stack_name != stack or self.resource_name != res:
- return False
-
- if len(short_id) != 12:
- return False
-
- return True
-
- def __ne__(self, physical_name):
- return not self.__eq__(physical_name)
-
- def __repr__(self):
- return '%s-%s-%s' % (self.stack_name,
- self.resource_name,
- 'x' * 12)
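`PhysName` is an equality matcher: Heat generates physical resource names of the form `<stack>-<resource>-<12-char short id>`, and the matcher accepts any name with the right prefix and a suffix of the right length. A self-contained sketch of the matching behaviour (the resource name and short ids below are invented):

```python
class PhysName(object):
    """Matcher for generated physical resource names of the form
    <stack_name>-<resource_name>-<12-char short id>."""
    def __init__(self, stack_name, resource_name):
        self.stack_name = stack_name
        self.resource_name = resource_name

    def __eq__(self, physical_name):
        try:
            stack, res, short_id = str(physical_name).rsplit('-', 2)
        except ValueError:
            return False
        return (stack == self.stack_name and
                res == self.resource_name and
                len(short_id) == 12)

    def __ne__(self, other):
        return not self.__eq__(other)


matcher = PhysName('test_stack', 'WebServer')
assert matcher == 'test_stack-WebServer-abcdef123456'   # any 12-char id
assert matcher != 'test_stack-WebServer-short'          # wrong id length
assert matcher != 'other_stack-WebServer-abcdef123456'  # wrong stack
```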
+++ /dev/null
-# Copyright (c) 2011 X.commerce, a business unit of eBay Inc.
-# Copyright 2011 OpenStack Foundation
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import httplib2
-import urlparse
-
-from novaclient import client as base_client
-from novaclient.v1_1 import client
-from heat.tests import fakes
-
-
-class FakeClient(fakes.FakeClient, client.Client):
-
- def __init__(self, *args, **kwargs):
- client.Client.__init__(self, 'username', 'password',
- 'project_id', 'auth_url')
- self.client = FakeHTTPClient(**kwargs)
-
-
-class FakeHTTPClient(base_client.HTTPClient):
-
- def __init__(self, **kwargs):
- self.username = 'username'
- self.password = 'password'
- self.auth_url = 'auth_url'
- self.callstack = []
-
- def _cs_request(self, url, method, **kwargs):
- # Check that certain things are called correctly
- if method in ['GET', 'DELETE']:
- assert 'body' not in kwargs
- elif method == 'PUT':
- assert 'body' in kwargs
-
- # Call the method
- args = urlparse.parse_qsl(urlparse.urlparse(url)[4])
- kwargs.update(args)
- munged_url = url.rsplit('?', 1)[0]
- munged_url = munged_url.strip('/').replace('/', '_').replace('.', '_')
- munged_url = munged_url.replace('-', '_')
-
- callback = "%s_%s" % (method.lower(), munged_url)
-
- if not hasattr(self, callback):
- raise AssertionError('Called unknown API method: %s %s, '
- 'expected fakes method name: %s' %
- (method, url, callback))
-
- # Note the call
- self.callstack.append((method, url, kwargs.get('body', None)))
-
- status, body = getattr(self, callback)(**kwargs)
- if hasattr(status, 'items'):
- return httplib2.Response(status), body
- else:
- return httplib2.Response({"status": status}), body
-
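The dispatch in `_cs_request` works by munging the request URL into a Python method name, e.g. `GET /servers/1234` becomes `get_servers_1234`, so each fake endpoint is just a method on the class. A standalone sketch of the same munging, written for Python 3 (`urllib.parse` in place of the Python 2 `urlparse` module used above):

```python
from urllib.parse import urlparse, parse_qsl


def munge(method, url):
    """Map an HTTP call onto a fake method name, as FakeHTTPClient does:
    strip the query string, turn '/', '.' and '-' into '_', and prefix
    with the lowercased HTTP verb."""
    query = dict(parse_qsl(urlparse(url).query))
    path = url.rsplit('?', 1)[0]
    path = path.strip('/').replace('/', '_').replace('.', '_')
    path = path.replace('-', '_')
    return '%s_%s' % (method.lower(), path), query


name, args = munge('GET', '/servers/1234/os-virtual-interfaces')
assert name == 'get_servers_1234_os_virtual_interfaces'

name, args = munge('GET', '/os-simple-tenant-usage?start=2012-01-01')
assert name == 'get_os_simple_tenant_usage'
assert args == {'start': '2012-01-01'}
```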
- #
- # Limits
- #
-
- def get_limits(self, **kw):
- return (200, {"limits": {
- "rate": [{"uri": "*",
- "regex": ".*",
- "limit": [
- {"value": 10,
- "verb": "POST",
- "remaining": 2,
- "unit": "MINUTE",
- "next-available": "2011-12-15T22:42:45Z"},
- {"value": 10,
- "verb": "PUT",
- "remaining": 2,
- "unit": "MINUTE",
- "next-available": "2011-12-15T22:42:45Z"},
- {"value": 100,
- "verb": "DELETE",
- "remaining": 100,
- "unit": "MINUTE",
- "next-available": "2011-12-15T22:42:45Z"}]},
- {"uri": "*/servers",
- "regex": "^/servers",
- "limit": [{"verb": "POST",
- "value": 25,
- "remaining": 24,
- "unit": "DAY",
- "next-available":
- "2011-12-15T22:42:45Z"}]}],
- "absolute": {"maxTotalRAMSize": 51200,
- "maxServerMeta": 5,
- "maxImageMeta": 5,
- "maxPersonality": 5,
- "maxPersonalitySize": 10240}}})
-
- #
- # Servers
- #
-
- def get_servers(self, **kw):
- return (200, {"servers": [
- {'id': 1234, 'name': 'sample-server'},
- {'id': 5678, 'name': 'sample-server2'},
- {'id': 9101, 'name': 'hard-reboot'},
- {'id': 9102, 'name': 'server-with-no-ip'},
- {'id': 9999, 'name': 'sample-server3'}
- ]})
-
- def get_servers_detail(self, **kw):
- return (200, {"servers": [{"id": 1234,
- "name": "sample-server",
- "OS-EXT-SRV-ATTR:instance_name":
- "sample-server",
- "image": {"id": 2,
- "name": "sample image"},
- "flavor": {"id": 1,
- "name": "256 MB Server"},
- "hostId":
- "e4d909c290d0fb1ca068ffaddf22cbd0",
- "status": "BUILD",
- "progress": 60,
- "addresses": {"public": [{"version": 4,
- "addr":
- "1.2.3.4"},
- {"version": 4,
- "addr":
- "5.6.7.8"}],
- "private": [{"version": 4,
- "addr": "10.11.12.13"}]},
- "accessIPv4": "",
- "accessIPv6": "",
- "metadata": {"Server Label": "Web Head 1",
- "Image Version": "2.1"}},
- {"id": 5678,
- "name": "sample-server2",
- "OS-EXT-SRV-ATTR:instance_name":
- "sample-server2",
- "image": {"id": 2,
- "name": "sample image"},
- "flavor": {"id": 1,
- "name": "256 MB Server"},
- "hostId":
- "9e107d9d372bb6826bd81d3542a419d6",
- "status": "ACTIVE",
- "accessIPv4": "192.0.2.0",
- "accessIPv6": "::babe:4317:0A83",
- "addresses": {"public": [{"version": 4,
- "addr":
- "4.5.6.7"},
- {"version": 4,
- "addr":
- "5.6.9.8"}],
- "private": [{"version": 4,
- "addr": "10.13.12.13"}]},
- "metadata": {"Server Label": "DB 1"}},
- {"id": 9101,
- "name": "hard-reboot",
- "OS-EXT-SRV-ATTR:instance_name":
- "hard-reboot",
- "image": {"id": 2,
- "name": "sample image"},
- "flavor": {"id": 1,
- "name": "256 MB Server"},
- "hostId":
- "9e44d8d435c43dd8d96bb63ed995605f",
- "status": "HARD_REBOOT",
- "accessIPv4": "",
- "accessIPv6": "",
- "addresses": {"public": [{"version": 4,
- "addr":
- "172.17.1.2"},
- {"version": 4,
- "addr":
- "10.20.30.40"}],
- "private": [{"version": 4,
- "addr": "10.13.12.13"}]},
- "metadata": {"Server Label": "DB 1"}},
- {"id": 9102,
- "name": "server-with-no-ip",
- "OS-EXT-SRV-ATTR:instance_name":
- "server-with-no-ip",
- "image": {"id": 2,
- "name": "sample image"},
- "flavor": {"id": 1,
- "name": "256 MB Server"},
- "hostId":
- "c1365ba78c624df9b2ff446515a682f5",
- "status": "ACTIVE",
- "accessIPv4": "",
- "accessIPv6": "",
- "addresses": {
- "empty_net": []},
- "metadata": {"Server Label": "DB 1"}},
- {"id": 9999,
- "name": "sample-server3",
- "OS-EXT-SRV-ATTR:instance_name":
- "sample-server3",
- "image": {"id": 3,
- "name": "sample image"},
- "flavor": {"id": 3,
- "name": "m1.large"},
- "hostId":
- "9e107d9d372bb6826bd81d3542a419d6",
- "status": "ACTIVE",
- "accessIPv4": "",
- "accessIPv6": "",
- "addresses": {
- "public": [{"version": 4,
- "addr": "4.5.6.7"},
- {"version": 4,
- "addr": "5.6.9.8"}],
- "private": [{"version": 4,
- "addr": "10.13.12.13"}]},
- "metadata": {"Server Label": "DB 1"}}]})
-
- def post_servers(self, body, **kw):
- assert body.keys() == ['server']
- fakes.assert_has_keys(body['server'],
- required=['name', 'imageRef', 'flavorRef'],
- optional=['metadata', 'personality'])
- if 'personality' in body['server']:
- for pfile in body['server']['personality']:
- fakes.assert_has_keys(pfile, required=['path', 'contents'])
- return (202, self.get_servers_1234()[1])
-
- def get_servers_1234(self, **kw):
- r = {'server': self.get_servers_detail()[1]['servers'][0]}
- return (200, r)
-
- def get_servers_WikiServerOne(self, **kw):
- r = {'server': self.get_servers_detail()[1]['servers'][0]}
- return (200, r)
-
- def get_servers_WikiServerOne1(self, **kw):
- r = {'server': self.get_servers_detail()[1]['servers'][0]}
- return (200, r)
-
- def get_servers_WikiServerOne2(self, **kw):
- r = {'server': self.get_servers_detail()[1]['servers'][3]}
- return (200, r)
-
- def get_servers_5678(self, **kw):
- r = {'server': self.get_servers_detail()[1]['servers'][1]}
- return (200, r)
-
- def put_servers_1234(self, body, **kw):
- assert body.keys() == ['server']
- fakes.assert_has_keys(body['server'], optional=['name', 'adminPass'])
- return (204, None)
-
- def delete_servers_1234(self, **kw):
- return (202, None)
-
- def delete_servers_1234_metadata_test_key(self, **kw):
- return (204, None)
-
- def delete_servers_1234_metadata_key1(self, **kw):
- return (204, None)
-
- def delete_servers_1234_metadata_key2(self, **kw):
- return (204, None)
-
- def delete_servers_5678(self, **kw):
- return (202, None)
-
- def delete_servers_5678_metadata_test_key(self, **kw):
- return (204, None)
-
- def delete_servers_5678_metadata_key1(self, **kw):
- return (204, None)
-
- def delete_servers_5678_metadata_key2(self, **kw):
- return (204, None)
-
- def get_servers_9999(self, **kw):
- r = {'server': self.get_servers_detail()[1]['servers'][0]}
- return (200, r)
-
- def put_servers_9999(self, body, **kw):
- assert body.keys() == ['server']
- fakes.assert_has_keys(body['server'], optional=['name', 'adminPass'])
- return (204, None)
-
- def delete_servers_9999(self, **kw):
- return (202, None)
-
- def delete_servers_9999_metadata_test_key(self, **kw):
- return (204, None)
-
- def delete_servers_9999_metadata_key1(self, **kw):
- return (204, None)
-
- def delete_servers_9999_metadata_key2(self, **kw):
- return (204, None)
-
- def post_servers_9999_metadata(self, **kw):
- return (204, {'metadata': {'test_key': 'test_value'}})
-
- def get_servers_9999_diagnostics(self, **kw):
- return (200, 'Fake diagnostics')
-
- def get_servers_9102(self, **kw):
- r = {'server': self.get_servers_detail()[1]['servers'][3]}
- return (200, r)
-
- def get_servers_1234_actions(self, **kw):
- return (200, {'actions': [{'action': 'rebuild',
- 'error': None,
- 'created_at': '2011-12-30 11:45:36'},
- {'action': 'reboot',
- 'error': 'Failed!',
- 'created_at': '2011-12-30 11:40:29'}]})
-
- #
- # Server Addresses
- #
-
- def get_servers_1234_ips(self, **kw):
- return (200, {'addresses':
- self.get_servers_1234()[1]['server']['addresses']})
-
- def get_servers_1234_ips_public(self, **kw):
- return (200, {'public':
- self.get_servers_1234_ips()[1]['addresses']['public']})
-
- def get_servers_1234_ips_private(self, **kw):
- return (200, {'private':
- self.get_servers_1234_ips()[1]['addresses']['private']})
-
- def delete_servers_1234_ips_public_1_2_3_4(self, **kw):
- return (202, None)
-
- #
- # Server actions
- #
-
- def post_servers_1234_action(self, body, **kw):
- _body = None
- resp = 202
- assert len(body.keys()) == 1
- action = body.keys()[0]
- if action == 'reboot':
- assert body[action].keys() == ['type']
- assert body[action]['type'] in ['HARD', 'SOFT']
- elif action == 'rebuild':
- keys = body[action].keys()
- if 'adminPass' in keys:
- keys.remove('adminPass')
- assert keys == ['imageRef']
- _body = self.get_servers_1234()[1]
- elif action == 'resize':
- assert body[action].keys() == ['flavorRef']
- elif action == 'confirmResize':
- assert body[action] is None
- # This one method returns a different response code
- return (204, None)
- elif action == 'revertResize':
- assert body[action] is None
- elif action == 'migrate':
- assert body[action] is None
- elif action == 'rescue':
- assert body[action] is None
- elif action == 'unrescue':
- assert body[action] is None
- elif action == 'lock':
- assert body[action] is None
- elif action == 'unlock':
- assert body[action] is None
- elif action == 'suspend':
- assert body[action] is None
- elif action == 'resume':
- assert body[action] is None
- elif action == 'addFixedIp':
- assert body[action].keys() == ['networkId']
- elif action == 'removeFixedIp':
- assert body[action].keys() == ['address']
- elif action == 'addFloatingIp':
- assert body[action].keys() == ['address']
- elif action == 'removeFloatingIp':
- assert body[action].keys() == ['address']
- elif action == 'createImage':
- assert set(body[action].keys()) == set(['name', 'metadata'])
- resp = dict(status=202, location="http://blah/images/456")
- elif action == 'changePassword':
- assert body[action].keys() == ['adminPass']
- elif action == 'os-getConsoleOutput':
- assert body[action].keys() == ['length']
- return (202, {'output': 'foo'})
- elif action == 'os-getVNCConsole':
- assert body[action].keys() == ['type']
- elif action == 'os-migrateLive':
- assert set(body[action].keys()) == set(['host',
- 'block_migration',
- 'disk_over_commit'])
- else:
- raise AssertionError("Unexpected server action: %s" % action)
- return (resp, _body)
-
- #
- # Cloudpipe
- #
-
- def get_os_cloudpipe(self, **kw):
- return (200, {'cloudpipes': [{'project_id': 1}]})
-
- def post_os_cloudpipe(self, **ks):
- return (202, {'instance_id': '9d5824aa-20e6-4b9f-b967-76a699fc51fd'})
-
- #
- # Flavors
- #
-
- def get_flavors(self, **kw):
- return (200, {'flavors': [
- {'id': 1, 'name': '256 MB Server'},
- {'id': 2, 'name': 'm1.small'},
- {'id': 3, 'name': 'm1.large'}
- ]})
-
- def get_flavors_detail(self, **kw):
- return (200, {'flavors': [
- {'id': 1, 'name': '256 MB Server', 'ram': 256, 'disk': 10,
- 'OS-FLV-EXT-DATA:ephemeral': 10},
- {'id': 2, 'name': 'm1.small', 'ram': 512, 'disk': 20,
- 'OS-FLV-EXT-DATA:ephemeral': 20},
- {'id': 3, 'name': 'm1.large', 'ram': 512, 'disk': 20,
- 'OS-FLV-EXT-DATA:ephemeral': 30}
- ]})
-
- def get_flavors_1(self, **kw):
- return (200, {'flavor': self.get_flavors_detail()[1]['flavors'][0]})
-
- def get_flavors_2(self, **kw):
- return (200, {'flavor': self.get_flavors_detail()[1]['flavors'][1]})
-
- def get_flavors_3(self, **kw):
- # Diablo has no ephemeral
- return (200, {'flavor': {'id': 3, 'name': '256 MB Server',
- 'ram': 256, 'disk': 10}})
-
- def delete_flavors_flavordelete(self, **kw):
- return (202, None)
-
- def post_flavors(self, body, **kw):
- return (202, {'flavor': self.get_flavors_detail()[1]['flavors'][0]})
-
- #
- # Floating ips
- #
-
- def get_os_floating_ip_pools(self):
- return (200, {'floating_ip_pools': [{'name': 'foo'}, {'name': 'bar'}]})
-
- def get_os_floating_ips(self, **kw):
- return (200, {'floating_ips': [
- {'id': 1, 'fixed_ip': '10.0.0.1', 'ip': '11.0.0.1'},
- {'id': 2, 'fixed_ip': '10.0.0.2', 'ip': '11.0.0.2'},
- ]})
-
- def get_os_floating_ips_1(self, **kw):
- return (200, {'floating_ip': {'id': 1,
- 'fixed_ip': '10.0.0.1',
- 'ip': '11.0.0.1'}})
-
- def post_os_floating_ips(self, body, **kw):
- return (202, self.get_os_floating_ips_1()[1])
-
- def delete_os_floating_ips_1(self, **kw):
- return (204, None)
-
- def get_os_floating_ip_dns(self, **kw):
- return (205, {'domain_entries':
- [{'domain': 'example.org'},
- {'domain': 'example.com'}]})
-
- def get_os_floating_ip_dns_testdomain_entries(self, **kw):
- if kw.get('ip'):
- return (205, {'dns_entries':
- [{'dns_entry': {'ip': kw.get('ip'),
- 'name': "host1",
- 'type': "A",
- 'domain': 'testdomain'}},
- {'dns_entry': {'ip': kw.get('ip'),
- 'name': "host2",
- 'type': "A",
- 'domain': 'testdomain'}}]})
- else:
- return (404, None)
-
- def get_os_floating_ip_dns_testdomain_entries_testname(self, **kw):
- return (205, {'dns_entry': {'ip': "10.10.10.10",
- 'name': 'testname',
- 'type': "A",
- 'domain': 'testdomain'}})
-
- def put_os_floating_ip_dns_testdomain(self, body, **kw):
- if body['domain_entry']['scope'] == 'private':
- fakes.assert_has_keys(body['domain_entry'],
- required=['availability_zone', 'scope'])
- elif body['domain_entry']['scope'] == 'public':
- fakes.assert_has_keys(body['domain_entry'],
- required=['project', 'scope'])
-
- else:
- fakes.assert_has_keys(body['domain_entry'],
- required=['project', 'scope'])
- return (205, None)
-
- def put_os_floating_ip_dns_testdomain_entries_testname(self, body, **kw):
- fakes.assert_has_keys(body['dns_entry'],
- required=['ip', 'dns_type'])
- return (205, None)
-
- def delete_os_floating_ip_dns_testdomain(self, **kw):
- return (200, None)
-
- def delete_os_floating_ip_dns_testdomain_entries_testname(self, **kw):
- return (200, None)
-
- #
- # Images
- #
- def get_images(self, **kw):
- return (200, {'images': [{'id': 1, 'name': 'CentOS 5.2'},
- {'id': 2, 'name': 'My Server Backup'},
- {'id': 3, 'name': 'F17-x86_64-gold'},
- {'id': 4, 'name': 'F17-x86_64-cfntools'}]})
-
- def get_images_detail(self, **kw):
- return (200, {'images': [{'id': 1,
- 'name': 'CentOS 5.2',
- "updated": "2010-10-10T12:00:00Z",
- "created": "2010-08-10T12:00:00Z",
- "status": "ACTIVE",
- "metadata": {"test_key": "test_value"},
- "links": {}},
- {"id": 743,
- "name": "My Server Backup",
- "serverId": 1234,
- "updated": "2010-10-10T12:00:00Z",
- "created": "2010-08-10T12:00:00Z",
- "status": "SAVING",
- "progress": 80,
- "links": {}},
- {"id": 744,
- "name": "F17-x86_64-gold",
- "serverId": 9999,
- "updated": "2010-10-10T12:00:00Z",
- "created": "2010-08-10T12:00:00Z",
- "status": "SAVING",
- "progress": 80,
- "links": {}},
- {"id": 745,
- "name": "F17-x86_64-cfntools",
- "serverId": 9998,
- "updated": "2010-10-10T12:00:00Z",
- "created": "2010-08-10T12:00:00Z",
- "status": "SAVING",
- "progress": 80,
- "links": {}}]})
-
- def get_images_1(self, **kw):
- return (200, {'image': self.get_images_detail()[1]['images'][0]})
-
- def get_images_2(self, **kw):
- return (200, {'image': self.get_images_detail()[1]['images'][1]})
-
- def post_images(self, body, **kw):
- assert body.keys() == ['image']
- fakes.assert_has_keys(body['image'], required=['serverId', 'name'])
- return (202, self.get_images_1()[1])
-
- def post_images_1_metadata(self, body, **kw):
- assert body.keys() == ['metadata']
- fakes.assert_has_keys(body['metadata'],
- required=['test_key'])
- return (200, {'metadata': self.get_images_1()[1]['image']['metadata']})
-
- def delete_images_1(self, **kw):
- return (204, None)
-
- def delete_images_1_metadata_test_key(self, **kw):
- return (204, None)
-
- #
- # Keypairs
- #
- def get_os_keypairs(self, *kw):
- return (200, {"keypairs": [{'fingerprint': 'FAKE_KEYPAIR',
- 'name': 'test'}]})
-
- def delete_os_keypairs_test(self, **kw):
- return (202, None)
-
- def post_os_keypairs(self, body, **kw):
- assert body.keys() == ['keypair']
- fakes.assert_has_keys(body['keypair'],
- required=['name'])
- r = {'keypair': self.get_os_keypairs()[1]['keypairs'][0]}
- return (202, r)
-
- #
- # Virtual Interfaces
- #
- def get_servers_1234_os_virtual_interfaces(self, **kw):
- return (200, {"virtual_interfaces": [
- {'id': 'fakeid', 'mac_address': 'fakemac'}
- ]})
-
- #
- # Quotas
- #
-
- def get_os_quota_sets_test(self, **kw):
- return (200, {'quota_set': {
- 'tenant_id': 'test',
- 'metadata_items': [],
- 'injected_file_content_bytes': 1,
- 'volumes': 1,
- 'gigabytes': 1,
- 'ram': 1,
- 'floating_ips': 1,
- 'instances': 1,
- 'injected_files': 1,
- 'cores': 1}})
-
- def get_os_quota_sets_test_defaults(self):
- return (200, {'quota_set': {
- 'tenant_id': 'test',
- 'metadata_items': [],
- 'injected_file_content_bytes': 1,
- 'volumes': 1,
- 'gigabytes': 1,
- 'ram': 1,
- 'floating_ips': 1,
- 'instances': 1,
- 'injected_files': 1,
- 'cores': 1}})
-
- def put_os_quota_sets_test(self, body, **kw):
- assert body.keys() == ['quota_set']
- fakes.assert_has_keys(body['quota_set'],
- required=['tenant_id'])
- return (200, {'quota_set': {
- 'tenant_id': 'test',
- 'metadata_items': [],
- 'injected_file_content_bytes': 1,
- 'volumes': 2,
- 'gigabytes': 1,
- 'ram': 1,
- 'floating_ips': 1,
- 'instances': 1,
- 'injected_files': 1,
- 'cores': 1}})
-
- #
- # Quota Classes
- #
-
- def get_os_quota_class_sets_test(self, **kw):
- return (200, {'quota_class_set': {
- 'class_name': 'test',
- 'metadata_items': [],
- 'injected_file_content_bytes': 1,
- 'volumes': 1,
- 'gigabytes': 1,
- 'ram': 1,
- 'floating_ips': 1,
- 'instances': 1,
- 'injected_files': 1,
- 'cores': 1}})
-
- def put_os_quota_class_sets_test(self, body, **kw):
- assert body.keys() == ['quota_class_set']
- fakes.assert_has_keys(body['quota_class_set'],
- required=['class_name'])
- return (200, {'quota_class_set': {
- 'class_name': 'test',
- 'metadata_items': [],
- 'injected_file_content_bytes': 1,
- 'volumes': 2,
- 'gigabytes': 1,
- 'ram': 1,
- 'floating_ips': 1,
- 'instances': 1,
- 'injected_files': 1,
- 'cores': 1}})
-
- #
- # Security Groups
- #
- def get_os_security_groups(self, **kw):
- return (200, {"security_groups": [{'id': 1,
- 'name': 'test',
- 'description':
- 'FAKE_SECURITY_GROUP'}]})
-
- def get_os_security_groups_1(self, **kw):
- return (200, {"security_group": {'id': 1,
- 'name': 'test',
- 'description':
- 'FAKE_SECURITY_GROUP'}})
-
- def delete_os_security_groups_1(self, **kw):
- return (202, None)
-
- def post_os_security_groups(self, body, **kw):
- assert body.keys() == ['security_group']
- fakes.assert_has_keys(body['security_group'],
- required=['name', 'description'])
- r = {'security_group':
- self.get_os_security_groups()[1]['security_groups'][0]}
- return (202, r)
-
- #
- # Security Group Rules
- #
- def get_os_security_group_rules(self, **kw):
- return (200, {"security_group_rules": [{'id': 1,
- 'parent_group_id': 1,
- 'group_id': 2,
- 'ip_protocol': 'TCP',
- 'from_port': '22',
- 'to_port': 22,
- 'cidr': '10.0.0.0/8'}]})
-
- def delete_os_security_group_rules_1(self, **kw):
- return (202, None)
-
- def post_os_security_group_rules(self, body, **kw):
- assert body.keys() == ['security_group_rule']
- fakes.assert_has_keys(body['security_group_rule'],
- required=['parent_group_id'],
- optional=['group_id', 'ip_protocol', 'from_port',
- 'to_port', 'cidr'])
- r = {'security_group_rule':
- self.get_os_security_group_rules()[1]['security_group_rules'][0]}
- return (202, r)
-
- #
- # Tenant Usage
- #
- def get_os_simple_tenant_usage(self, **kw):
- return (200, {u'tenant_usages': [{
- u'total_memory_mb_usage': 25451.762807466665,
- u'total_vcpus_usage': 49.71047423333333,
- u'total_hours': 49.71047423333333,
- u'tenant_id': u'7b0a1d73f8fb41718f3343c207597869',
- u'stop': u'2012-01-22 19:48:41.750722',
- u'server_usages': [{
- u'hours': 49.71047423333333,
- u'uptime': 27035, u'local_gb': 0, u'ended_at': None,
- u'name': u'f15image1',
- u'tenant_id': u'7b0a1d73f8fb41718f3343c207597869',
- u'vcpus': 1, u'memory_mb': 512, u'state': u'active',
- u'flavor': u'm1.tiny',
- u'started_at': u'2012-01-20 18:06:06.479998'}],
- u'start': u'2011-12-25 19:48:41.750687',
- u'total_local_gb_usage': 0.0}]})
-
- def get_os_simple_tenant_usage_tenantfoo(self, **kw):
- return (200, {u'tenant_usage': {
- u'total_memory_mb_usage': 25451.762807466665,
- u'total_vcpus_usage': 49.71047423333333,
- u'total_hours': 49.71047423333333,
- u'tenant_id': u'7b0a1d73f8fb41718f3343c207597869',
- u'stop': u'2012-01-22 19:48:41.750722',
- u'server_usages': [{
- u'hours': 49.71047423333333,
- u'uptime': 27035, u'local_gb': 0, u'ended_at': None,
- u'name': u'f15image1',
- u'tenant_id': u'7b0a1d73f8fb41718f3343c207597869',
- u'vcpus': 1, u'memory_mb': 512, u'state': u'active',
- u'flavor': u'm1.tiny',
- u'started_at': u'2012-01-20 18:06:06.479998'}],
- u'start': u'2011-12-25 19:48:41.750687',
- u'total_local_gb_usage': 0.0}})
-
- #
- # Certificates
- #
- def get_os_certificates_root(self, **kw):
- return (200, {'certificate': {'private_key': None, 'data': 'foo'}})
-
- def post_os_certificates(self, **kw):
- return (200, {'certificate': {'private_key': 'foo', 'data': 'bar'}})
-
- #
- # Aggregates
- #
- def get_os_aggregates(self, *kw):
- return (200, {"aggregates": [
- {'id': '1',
- 'name': 'test',
- 'availability_zone': 'nova1'},
- {'id': '2',
- 'name': 'test2',
- 'availability_zone': 'nova1'},
- ]})
-
- def _return_aggregate(self):
- r = {'aggregate': self.get_os_aggregates()[1]['aggregates'][0]}
- return (200, r)
-
- def get_os_aggregates_1(self, **kw):
- return self._return_aggregate()
-
- def post_os_aggregates(self, body, **kw):
- return self._return_aggregate()
-
- def put_os_aggregates_1(self, body, **kw):
- return self._return_aggregate()
-
- def put_os_aggregates_2(self, body, **kw):
- return self._return_aggregate()
-
- def post_os_aggregates_1_action(self, body, **kw):
- return self._return_aggregate()
-
- def post_os_aggregates_2_action(self, body, **kw):
- return self._return_aggregate()
-
- def delete_os_aggregates_1(self, **kw):
- return (202, None)
-
- #
- # Hosts
- #
- def get_os_hosts_host(self, *kw):
- return (200, {'host':
- [{'resource': {'project': '(total)', 'host': 'dummy',
- 'cpu': 16, 'memory_mb': 32234, 'disk_gb': 128}},
- {'resource': {'project': '(used_now)', 'host': 'dummy',
- 'cpu': 1, 'memory_mb': 2075, 'disk_gb': 45}},
- {'resource': {'project': '(used_max)', 'host': 'dummy',
- 'cpu': 1, 'memory_mb': 2048, 'disk_gb': 30}},
- {'resource': {'project': 'admin', 'host': 'dummy',
- 'cpu': 1, 'memory_mb': 2048, 'disk_gb': 30}}]})
-
- def get_os_hosts_sample_host(self, *kw):
- return (200, {'host': [{'resource': {'host': 'sample_host'}}], })
-
- def put_os_hosts_sample_host_1(self, body, **kw):
- return (200, {'host': 'sample-host_1',
- 'status': 'enabled'})
-
- def put_os_hosts_sample_host_2(self, body, **kw):
- return (200, {'host': 'sample-host_2',
- 'maintenance_mode': 'on_maintenance'})
-
- def put_os_hosts_sample_host_3(self, body, **kw):
- return (200, {'host': 'sample-host_3',
- 'status': 'enabled',
- 'maintenance_mode': 'on_maintenance'})
-
- def get_os_hosts_sample_host_startup(self, **kw):
- return (200, {'host': 'sample_host',
- 'power_action': 'startup'})
-
- def get_os_hosts_sample_host_reboot(self, **kw):
- return (200, {'host': 'sample_host',
- 'power_action': 'reboot'})
-
- def get_os_hosts_sample_host_shutdown(self, **kw):
- return (200, {'host': 'sample_host',
- 'power_action': 'shutdown'})
-
- def put_os_hosts_sample_host(self, body, **kw):
- result = {'host': 'dummy'}
- result.update(body)
- return (200, result)
-
- def get_os_availability_zone(self, *kw):
- return (200, {"availabilityZoneInfo": [{'zoneName': 'nova1'}]})
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-
-import pbr.version
-
-version_info = pbr.version.VersionInfo('heat')
+++ /dev/null
-#!/bin/bash
-
-if [[ $EUID -ne 0 ]]; then
- echo "This script must be run as root" >&2
- exit 1
-fi
-
-# Install prefix for config files (e.g. "/usr/local").
-# Leave empty to install into /etc
-CONF_PREFIX=""
-LOG_DIR=/var/log/heat
-
-
-install -d $LOG_DIR
-
-detect_rabbit() {
- PKG_CMD="rpm -q"
- RABBIT_PKG="rabbitmq-server"
- QPID_PKG="qpid-cpp-server"
-
- # Detect OS type
- # Ubuntu has an lsb_release command which allows us to detect if it is Ubuntu
- if lsb_release -i 2>/dev/null | grep -iq ubuntu
- then
- PKG_CMD="dpkg -s"
- QPID_PKG="qpidd"
- fi
- if $PKG_CMD $RABBIT_PKG > /dev/null 2>&1
- then
- if ! $PKG_CMD $QPID_PKG > /dev/null 2>&1
- then
- return 0
- fi
- fi
- return 1
-}
-
-# Determine whether the given option is present in the INI file
-# ini_has_option config-file section option
-function ini_has_option() {
- local file=$1
- local section=$2
- local option=$3
- local line
- line=$(sed -ne "/^\[$section\]/,/^\[.*\]/ { /^$option[ \t]*=/ p; }" "$file")
- [ -n "$line" ]
-}
-
-# Set an option in an INI file
-# iniset config-file section option value
-function iniset() {
- local file=$1
- local section=$2
- local option=$3
- local value=$4
- if ! grep -q "^\[$section\]" "$file"; then
- # Add section at the end
- echo -e "\n[$section]" >>"$file"
- fi
- if ! ini_has_option "$file" "$section" "$option"; then
- # Add it
- sed -i -e "/^\[$section\]/ a\\
-$option = $value
-" "$file"
- else
- # Replace it
- sed -i -e "/^\[$section\]/,/^\[.*\]/ s|^\($option[ \t]*=[ \t]*\).*$|\1$value|" "$file"
- fi
-}
-
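The `iniset` helper is idempotent: it appends the section if it is missing, inserts the option under the section if absent, and otherwise rewrites the existing value in place. A rough Python equivalent of those semantics (the real script does this with `sed`); this is a sketch for illustration, not a drop-in replacement:

```python
def iniset(lines, section, option, value):
    """Set section/option=value in a list of INI-file lines, mirroring
    the shell iniset(): add the section if absent, add the option if
    absent, otherwise replace the existing value in place."""
    header = '[%s]' % section
    if header not in lines:
        lines += ['', header]
    start = lines.index(header)
    # Find the end of this section (next header or end of file)
    end = len(lines)
    for i in range(start + 1, len(lines)):
        if lines[i].startswith('['):
            end = i
            break
    for i in range(start + 1, end):
        if lines[i].split('=')[0].strip() == option:
            lines[i] = '%s = %s' % (option, value)  # replace in place
            return lines
    lines.insert(start + 1, '%s = %s' % (option, value))  # add option
    return lines


conf = ['[DEFAULT]', 'log_dir = /var/log/heat']
iniset(conf, 'DEFAULT', 'log_dir', '/tmp/heat')      # replace
iniset(conf, 'DEFAULT', 'rabbit_password', 'guest')  # add option
iniset(conf, 'ssl', 'enable', 'False')               # add section
assert conf == ['[DEFAULT]',
                'rabbit_password = guest',
                'log_dir = /tmp/heat',
                '',
                '[ssl]',
                'enable = False']
```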
-basic_configuration() {
- conf_path=$1
- if echo $conf_path | grep ".conf$" >/dev/null 2>&1
- then
- iniset $conf_path DEFAULT auth_encryption_key `hexdump -n 16 -v -e '/1 "%02x"' /dev/random`
- iniset $conf_path DEFAULT db_backend heat.db.sqlalchemy.api
- iniset $conf_path DEFAULT sql_connection "mysql://heat:heat@localhost/heat"
-
- BRIDGE_IP=127.0.0.1
- iniset $conf_path DEFAULT heat_metadata_server_url "http://${BRIDGE_IP}:8000/"
- iniset $conf_path DEFAULT heat_waitcondition_server_url "http://${BRIDGE_IP}:8000/v1/waitcondition/"
- iniset $conf_path DEFAULT heat_watch_server_url "http://${BRIDGE_IP}:8003/"
-
- if detect_rabbit
- then
- echo "rabbitmq detected, configuring $conf_path for rabbit" >&2
- iniset $conf_path DEFAULT rpc_backend heat.openstack.common.rpc.impl_kombu
- iniset $conf_path DEFAULT rabbit_password guest
- else
- echo "qpid detected, configuring $conf_path for qpid" >&2
- iniset $conf_path DEFAULT rpc_backend heat.openstack.common.rpc.impl_qpid
- fi
- fi
-}
-
-install_dir() {
- local dir=$1
- local prefix=$2
-
- for fn in $(ls $dir); do
- f=$dir/$fn
- target=$prefix/$f
- if [ $fn = 'heat.conf.sample' ]; then
- target=$prefix/$dir/heat.conf
- fi
- if [ -d $f ]; then
- [ -d $target ] || install -d $target
- install_dir $f $prefix
- elif [ -f $target ]; then
- echo "NOT replacing existing config file $target" >&2
- diff -u $target $f
- else
- echo "Installing $fn in $prefix/$dir" >&2
- install -m 664 $f $target
- if [ $fn = 'heat.conf.sample' ]; then
- basic_configuration $target
- fi
- fi
- done
-}
-
-install_dir etc $CONF_PREFIX
-
-python setup.py install >/dev/null
-rm -rf build heat.egg-info
+++ /dev/null
-[DEFAULT]
-
-# The list of modules to copy from openstack-common
-module=db
-module=db.sqlalchemy
-module=eventlet_backdoor
-module=exception
-module=excutils
-module=gettextutils
-module=importutils
-module=install_venv_common
-module=jsonutils
-module=local
-module=log
-module=loopingcall
-module=network_utils
-module=notifier
-module=patch_tox_venv
-module=policy
-module=redhat-eventlet.patch
-module=rpc
-module=service
-module=threadgroup
-module=timeutils
-module=uuidutils
-module=config
-
-# The base module to hold the copy of openstack.common
-base=heat
+++ /dev/null
-[Messages Control]
-# W0511: TODOs in code comments are fine.
-# W0142: *args and **kwargs are fine.
-# W0622: Redefining id is fine.
-disable-msg=W0511,W0142,W0622
-
-[Basic]
-# Variable names can be 1 to 31 characters long, with lowercase and underscores
-variable-rgx=[a-z_][a-z0-9_]{0,30}$
-
-# Argument names can be 2 to 31 characters long, with lowercase and underscores
-argument-rgx=[a-z_][a-z0-9_]{1,30}$
-
-# Method names should be at least 3 characters long
-# and be lowercase with underscores
-method-rgx=[a-z_][a-z0-9_]{2,50}$
-
-# Module names matching nova-* are ok (files in bin/)
-module-rgx=(([a-z_][a-z0-9_]*)|([A-Z][a-zA-Z0-9]+)|(nova-[a-z0-9_-]+))$
-
-# Don't require docstrings on tests.
-no-docstring-rgx=((__.*__)|([tT]est.*)|setUp|tearDown)$
-
-# Exclude variable names that conflict with debugger
-bad-names=c
-[Design]
-max-public-methods=100
-min-public-methods=0
-max-args=6
+++ /dev/null
-pbr>=0.5.21,<1.0
-pycrypto>=2.6
-eventlet>=0.13.0
-greenlet>=0.3.2
-httplib2
-iso8601>=0.1.4
-kombu>=2.4.8
-argparse
-lxml>=2.3
-netaddr
-six
-sqlalchemy-migrate>=0.7.2
-python-novaclient>=2.12.0
-PasteDeploy>=1.5.0
-Routes>=1.12.3
-SQLAlchemy>=0.7.8,<=0.7.99
-WebOb>=1.2.3,<1.3
-python-keystoneclient>=0.3.2
-python-swiftclient>=1.2
-python-neutronclient>=2.3.0,<3
-python-ceilometerclient>=1.0.3
-python-cinderclient>=1.0.4
-PyYAML>=3.1.0
-paramiko>=1.8.0
-Babel>=0.9.6
--f http://tarballs.openstack.org/oslo.config/oslo.config-1.2.0a3.tar.gz#egg=oslo.config-1.2.0a3
-oslo.config>=1.2.0a3
--- /dev/null
+[DEFAULT]
+sql_connection = mysql://heat:heat@localhost/heat
+heat_metadata_server_url = http://127.0.0.1:8000
+heat_waitcondition_server_url = http://127.0.0.1:8000/v1/waitcondition
+heat_watch_server_url = http://127.0.0.1:8003
+db_backend = heat.db.sqlalchemy.api
+log_dir = /var/log/heat
+rpc_backend = heat.openstack.common.rpc.impl_qpid
+
+[keystone_authtoken]
+admin_tenant_name = %SERVICE_TENANT_NAME%
+admin_user = %SERVICE_USER%
+admin_password = %SERVICE_PASSWORD%
+auth_host = 127.0.0.1
+auth_port = 35357
+auth_protocol = http
+auth_uri = http://127.0.0.1:5000/v2.0
+signing_dir = /tmp/keystone-signing-heat
+
+[ssl]
+
+[database]
+
+[paste_deploy]
+api_paste_config = /usr/share/heat/api-paste-dist.ini
+
+[rpc_notifier2]
+
+[ec2authtoken]
+
+[heat_api_cloudwatch]
+
+[heat_api]
+
+[heat_api_cfn]
+
+[auth_password]
+
+[matchmaker_ring]
+
+[matchmaker_redis]
--- /dev/null
+diff --git a/heat/common/__init__.py b/heat/common/__init__.py
+index e8e4035..0db8b7e 100644
+--- a/heat/common/__init__.py
++++ b/heat/common/__init__.py
+@@ -12,3 +12,36 @@
+ # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ # License for the specific language governing permissions and limitations
+ # under the License.
++
++import sys
++import pkg_resources
++
++# If there is a conflicting non-egg module,
++# i.e. an older standard system module installed,
++# then replace it with this requirement
++def replace_dist(requirement):
++ try:
++ return pkg_resources.require(requirement)
++ except pkg_resources.VersionConflict:
++ e = sys.exc_info()[1]
++ dist = e.args[0]
++ req = e.args[1]
++ if dist.key == req.key and not dist.location.endswith('.egg'):
++ del pkg_resources.working_set.by_key[dist.key]
++ # We assume there is no need to adjust sys.path
++ # and the associated pkg_resources.working_set.entries
++ try:
++ return pkg_resources.require(requirement)
++ except pkg_resources.DistributionNotFound:
++ print 'Not found, skipping', requirement
++ return {}
++
++replace_dist("WebOb >= 1.0")
++replace_dist("SQLAlchemy >= 0.6.3")
++replace_dist("Routes >= 1.12.3")
++replace_dist("PasteDeploy >= 1.5")
++# This hack is needed because replace_dist() results in
++# the standard paste module path being at the start of __path__.
++# TODO: See if we can get pkg_resources to do the right thing directly
++import paste
++paste.__path__.insert(0, paste.__path__.pop(-1))
--- /dev/null
+/var/log/heat/*.log {
+ weekly
+ rotate 4
+ missingok
+ compress
+ minsize 100k
+}
--- /dev/null
+#!/bin/sh
+#
+# openstack-heat-api-cfn - OpenStack Heat CFN-compatible API Service
+#
+# chkconfig: - 98 02
+# description: Provides an AWS CloudFormation-compatible interface to the Heat
+# Engine for starting AWS CloudFormation templates on OpenStack.
+#
+
+### BEGIN INIT INFO
+# Provides:
+# Required-Start: $remote_fs $network $syslog
+# Required-Stop: $remote_fs $syslog
+# Default-Stop: 0 1 6
+# Short-Description: OpenStack Heat CFN-compatible API Server
+# Description: The Heat API provides an OpenStack-native
+# REST API to the Heat Engine for starting
+# AWS CloudFormation templates on OpenStack.
+### END INIT INFO
+
+. /etc/rc.d/init.d/functions
+
+prog=heat-api-cfn
+exec="/usr/bin/$prog"
+config="/etc/heat/heat.conf"
+pidfile="/var/run/heat/$prog.pid"
+
+[ -e /etc/sysconfig/$prog ] && . /etc/sysconfig/$prog
+
+lockfile=/var/lock/subsys/openstack-heat-api-cfn
+
+start() {
+ [ -x $exec ] || exit 5
+ [ -f $config ] || exit 6
+ echo -n $"Starting $prog: "
+ daemon --user heat --pidfile $pidfile "$exec &>/dev/null & echo \$! > $pidfile"
+ retval=$?
+ echo
+ [ $retval -eq 0 ] && touch $lockfile
+ return $retval
+}
+
+stop() {
+ echo -n $"Stopping $prog: "
+ killproc -p $pidfile $prog
+ retval=$?
+ echo
+ [ $retval -eq 0 ] && rm -f $lockfile
+ return $retval
+}
+
+restart() {
+ stop
+ start
+}
+
+reload() {
+ restart
+}
+
+force_reload() {
+ restart
+}
+
+rh_status() {
+ status -p $pidfile $prog
+}
+
+rh_status_q() {
+ rh_status >/dev/null 2>&1
+}
+
+
+case "$1" in
+ start)
+ rh_status_q && exit 0
+ $1
+ ;;
+ stop)
+ rh_status_q || exit 0
+ $1
+ ;;
+ restart)
+ $1
+ ;;
+ reload)
+ rh_status_q || exit 7
+ $1
+ ;;
+ force-reload)
+ force_reload
+ ;;
+ status)
+ rh_status
+ ;;
+ condrestart|try-restart)
+ rh_status_q || exit 0
+ restart
+ ;;
+ *)
+ echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload}"
+ exit 2
+esac
+exit $?
--- /dev/null
+#!/bin/sh
+#
+# openstack-heat-api-cloudwatch - OpenStack Heat CloudWatch API Service
+#
+# chkconfig: - 98 02
+# description: The Heat CloudWatch API provides an AWS CloudWatch-compatible
+# interface to the Heat Engine.
+#
+### BEGIN INIT INFO
+# Provides:
+# Required-Start: $remote_fs $network $syslog
+# Required-Stop: $remote_fs $syslog
+# Default-Stop: 0 1 6
+# Short-Description: OpenStack Heat CloudWatch API Server
+# Description: The Heat CloudWatch API provides an AWS CloudWatch-compatible
+# interface to the Heat Engine.
+### END INIT INFO
+
+. /etc/rc.d/init.d/functions
+
+prog=heat-api-cloudwatch
+exec="/usr/bin/$prog"
+config="/etc/heat/heat.conf"
+pidfile="/var/run/heat/$prog.pid"
+
+[ -e /etc/sysconfig/$prog ] && . /etc/sysconfig/$prog
+
+lockfile=/var/lock/subsys/openstack-heat-api-cloudwatch
+
+start() {
+ [ -x $exec ] || exit 5
+ [ -f $config ] || exit 6
+ echo -n $"Starting $prog: "
+ daemon --user heat --pidfile $pidfile "$exec &>/dev/null & echo \$! > $pidfile"
+ retval=$?
+ echo
+ [ $retval -eq 0 ] && touch $lockfile
+ return $retval
+}
+
+stop() {
+ echo -n $"Stopping $prog: "
+ killproc -p $pidfile $prog
+ retval=$?
+ echo
+ [ $retval -eq 0 ] && rm -f $lockfile
+ return $retval
+}
+
+restart() {
+ stop
+ start
+}
+
+reload() {
+ restart
+}
+
+force_reload() {
+ restart
+}
+
+rh_status() {
+ status -p $pidfile $prog
+}
+
+rh_status_q() {
+ rh_status >/dev/null 2>&1
+}
+
+
+case "$1" in
+ start)
+ rh_status_q && exit 0
+ $1
+ ;;
+ stop)
+ rh_status_q || exit 0
+ $1
+ ;;
+ restart)
+ $1
+ ;;
+ reload)
+ rh_status_q || exit 7
+ $1
+ ;;
+ force-reload)
+ force_reload
+ ;;
+ status)
+ rh_status
+ ;;
+ condrestart|try-restart)
+ rh_status_q || exit 0
+ restart
+ ;;
+ *)
+ echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload}"
+ exit 2
+esac
+exit $?
--- /dev/null
+#!/bin/sh
+#
+# openstack-heat-api - OpenStack Heat API Service
+#
+# chkconfig: - 98 02
+# description: The Heat API provides an OpenStack-native
+# REST API to the Heat Engine for starting
+# AWS CloudFormation templates on OpenStack.
+
+### BEGIN INIT INFO
+# Provides:
+# Required-Start: $remote_fs $network $syslog
+# Required-Stop: $remote_fs $syslog
+# Default-Stop: 0 1 6
+# Short-Description: OpenStack Heat API Server
+# Description: The Heat API provides an OpenStack-native
+# REST API to the Heat Engine for starting
+# AWS CloudFormation templates on OpenStack.
+### END INIT INFO
+
+. /etc/rc.d/init.d/functions
+
+prog=heat-api
+exec="/usr/bin/$prog"
+config="/etc/heat/heat.conf"
+pidfile="/var/run/heat/$prog.pid"
+
+[ -e /etc/sysconfig/$prog ] && . /etc/sysconfig/$prog
+
+lockfile=/var/lock/subsys/openstack-heat-api
+
+start() {
+ [ -x $exec ] || exit 5
+ [ -f $config ] || exit 6
+ echo -n $"Starting $prog: "
+ daemon --user heat --pidfile $pidfile "$exec &>/dev/null & echo \$! > $pidfile"
+ retval=$?
+ echo
+ [ $retval -eq 0 ] && touch $lockfile
+ return $retval
+}
+
+stop() {
+ echo -n $"Stopping $prog: "
+ killproc -p $pidfile $prog
+ retval=$?
+ echo
+ [ $retval -eq 0 ] && rm -f $lockfile
+ return $retval
+}
+
+restart() {
+ stop
+ start
+}
+
+reload() {
+ restart
+}
+
+force_reload() {
+ restart
+}
+
+rh_status() {
+ status -p $pidfile $prog
+}
+
+rh_status_q() {
+ rh_status >/dev/null 2>&1
+}
+
+
+case "$1" in
+ start)
+ rh_status_q && exit 0
+ $1
+ ;;
+ stop)
+ rh_status_q || exit 0
+ $1
+ ;;
+ restart)
+ $1
+ ;;
+ reload)
+ rh_status_q || exit 7
+ $1
+ ;;
+ force-reload)
+ force_reload
+ ;;
+ status)
+ rh_status
+ ;;
+ condrestart|try-restart)
+ rh_status_q || exit 0
+ restart
+ ;;
+ *)
+ echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload}"
+ exit 2
+esac
+exit $?
--- /dev/null
+#!/bin/sh
+#
+# openstack-heat-engine - OpenStack Heat Engine Service
+#
+# chkconfig: - 98 02
+# description: The Heat Engine provides an OpenStack API for starting AWS
+# CloudFormation templates on OpenStack.
+#
+### BEGIN INIT INFO
+# Provides:
+# Required-Start: $remote_fs $network $syslog
+# Required-Stop: $remote_fs $syslog
+# Default-Stop: 0 1 6
+# Short-Description: OpenStack Heat Engine Server
+# Description: The Heat Engine provides an OpenStack API for starting AWS
+# CloudFormation templates on OpenStack.
+### END INIT INFO
+
+. /etc/rc.d/init.d/functions
+
+prog=heat-engine
+exec="/usr/bin/$prog"
+config="/etc/heat/heat.conf"
+pidfile="/var/run/heat/$prog.pid"
+
+[ -e /etc/sysconfig/$prog ] && . /etc/sysconfig/$prog
+
+lockfile=/var/lock/subsys/openstack-heat-engine
+
+start() {
+ [ -x $exec ] || exit 5
+ [ -f $config ] || exit 6
+ echo -n $"Starting $prog: "
+ daemon --user heat --pidfile $pidfile "$exec &>/dev/null & echo \$! > $pidfile"
+ retval=$?
+ echo
+ [ $retval -eq 0 ] && touch $lockfile
+ return $retval
+}
+
+stop() {
+ echo -n $"Stopping $prog: "
+ killproc -p $pidfile $prog
+ retval=$?
+ echo
+ [ $retval -eq 0 ] && rm -f $lockfile
+ return $retval
+}
+
+restart() {
+ stop
+ start
+}
+
+reload() {
+ restart
+}
+
+force_reload() {
+ restart
+}
+
+rh_status() {
+ status -p $pidfile $prog
+}
+
+rh_status_q() {
+ rh_status >/dev/null 2>&1
+}
+
+
+case "$1" in
+ start)
+ rh_status_q && exit 0
+ $1
+ ;;
+ stop)
+ rh_status_q || exit 0
+ $1
+ ;;
+ restart)
+ $1
+ ;;
+ reload)
+ rh_status_q || exit 7
+ $1
+ ;;
+ force-reload)
+ force_reload
+ ;;
+ status)
+ rh_status
+ ;;
+ condrestart|try-restart)
+ rh_status_q || exit 0
+ restart
+ ;;
+ *)
+ echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload}"
+ exit 2
+esac
+exit $?
--- /dev/null
+diff --git a/heat/version.py b/heat/version.py
+index a717606..8ea213a 100644
+--- a/heat/version.py
++++ b/heat/version.py
+@@ -14,6 +14,14 @@
+ # under the License.
+
+
+-import pbr.version
++class VersionInfo(object):
++ release = 'REDHATHEATRELEASE'
++ version = 'REDHATHEATVERSION'
+
+-version_info = pbr.version.VersionInfo('heat')
++ def version_string(self):
++ return self.version
++
++ def release_string(self):
++ return self.release
++
++version_info = VersionInfo()
--- /dev/null
+From 4510d7e7f427ac4b0770832108641a5048624332 Mon Sep 17 00:00:00 2001
+From: Jeff Peeler <jpeeler@redhat.com>
+Date: Wed, 8 May 2013 12:27:35 -0400
+Subject: [PATCH] Switch to using M2Crypto
+
+This patch uses M2Crypto instead of PyCrypto to perform encryption
+and decryption of user authentication information.
+
+---
+diff --git a/heat/common/crypt.py b/heat/common/crypt.py
+index 81b4c92..01fa353 100644
+--- a/heat/common/crypt.py
++++ b/heat/common/crypt.py
+@@ -14,7 +14,7 @@
+ # under the License.
+
+ import base64
+-from Crypto.Cipher import AES
++import M2Crypto
+ from os import urandom
+
+ from oslo.config import cfg
+@@ -36,9 +36,11 @@ logger = logging.getLogger(__name__)
+ def encrypt(auth_info):
+ if auth_info is None:
+ return None
+- iv = urandom(AES.block_size)
+- cipher = AES.new(cfg.CONF.auth_encryption_key[:32], AES.MODE_CFB, iv)
+- res = base64.b64encode(iv + cipher.encrypt(auth_info))
++ iv = urandom(16)
++ cipher = M2Crypto.EVP.Cipher(alg='aes_128_cbc',
++ key=cfg.CONF.auth_encryption_key[:32], iv=iv,
++ op=1) # 1 is encrypt
++ res = base64.b64encode(iv + cipher.update(auth_info) + cipher.final())
+ return res
+
+
+@@ -46,7 +48,9 @@ def decrypt(auth_info):
+ if auth_info is None:
+ return None
+ auth = base64.b64decode(auth_info)
+- iv = auth[:AES.block_size]
+- cipher = AES.new(cfg.CONF.auth_encryption_key[:32], AES.MODE_CFB, iv)
+- res = cipher.decrypt(auth[AES.block_size:])
++ iv = auth[:16]
++ cipher = M2Crypto.EVP.Cipher(alg='aes_128_cbc',
++ key=cfg.CONF.auth_encryption_key[:32], iv=iv,
++ op=0) # 0 is decrypt
++ res = cipher.update(auth[16:]) + cipher.final()
+ return res
+diff --git a/tools/pip-requires b/tools/pip-requires
+index 06795f9..091168c 100644
+--- a/requirements.txt
++++ b/requirements.txt
+@@ -1,5 +1,5 @@
+ pbr>=0.5.21,<1.0
+-pycrypto>=2.6
++M2Crypto
+ eventlet>=0.13.0
+ greenlet>=0.3.2
+ httplib2
+--
+1.8.1.4
+
--- /dev/null
+%global release_name havana
+%global release_letter rc
+%global milestone 2
+%global full_release heat-%{version}
+
+%global with_doc %{!?_without_doc:1}%{?_without_doc:0}
+
+Name: openstack-heat
+Summary: OpenStack Orchestration (heat)
+Version: 2013.2
+Release: 1.0%{?dist}
+License: ASL 2.0
+Group: System Environment/Base
+URL: http://www.openstack.org
+Source0: https://launchpad.net/heat/%{release_name}/%{version}/+download/heat-%{version}.tar.gz
+Obsoletes: heat < 7-9
+Provides: heat
+
+Source1: heat.logrotate
+Source2: openstack-heat-api.init
+Source3: openstack-heat-api-cfn.init
+Source4: openstack-heat-engine.init
+Source5: openstack-heat-api-cloudwatch.init
+Source20: heat-dist.conf
+
+Patch0: switch-to-using-m2crypto.patch
+Patch1: remove-pbr-runtime-dependency.patch
+# EPEL specific patch, not upstream
+Patch100: heat-newdeps.patch
+
+BuildArch: noarch
+BuildRequires: git
+BuildRequires: python2-devel
+BuildRequires: python-setuptools
+BuildRequires: python-oslo-sphinx
+BuildRequires: python-argparse
+BuildRequires: python-eventlet
+BuildRequires: python-greenlet
+BuildRequires: python-httplib2
+BuildRequires: python-iso8601
+BuildRequires: python-kombu
+BuildRequires: python-lxml
+BuildRequires: python-netaddr
+BuildRequires: python-memcached
+BuildRequires: python-migrate
+BuildRequires: python-qpid
+BuildRequires: python-six
+BuildRequires: PyYAML
+BuildRequires: m2crypto
+BuildRequires: python-paramiko
+BuildRequires: python-sphinx10
+# These are required to build due to the requirements check added
+BuildRequires: python-paste-deploy1.5
+BuildRequires: python-routes1.12
+BuildRequires: python-sqlalchemy0.7
+BuildRequires: python-webob
+
+BuildRequires: python-pbr
+BuildRequires: python-d2to1
+%if 0%{?with_doc}
+BuildRequires: python-oslo-config
+BuildRequires: python-cinderclient
+BuildRequires: python-keystoneclient
+BuildRequires: python-novaclient
+BuildRequires: python-neutronclient
+BuildRequires: python-swiftclient
+%endif
+
+Requires: %{name}-common = %{version}-%{release}
+Requires: %{name}-engine = %{version}-%{release}
+Requires: %{name}-api = %{version}-%{release}
+Requires: %{name}-api-cfn = %{version}-%{release}
+Requires: %{name}-api-cloudwatch = %{version}-%{release}
+
+%prep
+%setup -q -n %{full_release}
+%patch0 -p1
+%patch1 -p1
+%patch100 -p1
+
+sed -i s/REDHATHEATVERSION/%{version}/ heat/version.py
+sed -i s/REDHATHEATRELEASE/%{release}/ heat/version.py
+
+# Remove the requirements file so that pbr hooks don't add it
+# to distutils requires_dist config
+rm -rf {test-,}requirements.txt tools/{pip,test}-requires
+
+echo '
+#
+# Options to be passed to keystoneclient.auth_token middleware
+# NOTE: These options are not defined in heat but in keystoneclient
+#
+[keystone_authtoken]
+
+# the name of the admin tenant (string value)
+#admin_tenant_name=
+
+# the keystone admin username (string value)
+#admin_user=
+
+# the keystone admin password (string value)
+#admin_password=
+
+# the keystone host (string value)
+#auth_host=
+
+# the keystone port (integer value)
+#auth_port=
+
+# protocol to be used for auth requests http/https (string value)
+#auth_protocol=
+
+#auth_uri=
+
+# signing_dir is configurable, but the default behavior of the authtoken
+# middleware should be sufficient. It will create a temporary directory
+# in the home directory for the user the heat process is running as.
+#signing_dir=/var/lib/heat/keystone-signing
+' >> etc/heat/heat.conf.sample
+
+# Programmatically update defaults in sample config
+# which is installed at /etc/heat/heat.conf
+# TODO: Make this more robust
+# Note it only edits the first occurrence, so it assumes a section ordering in the sample
+# and also doesn't support multi-valued variables.
+while read name eq value; do
+ test "$name" && test "$value" || continue
+ sed -i "0,/^# *$name=/{s!^# *$name=.*!#$name=$value!}" etc/heat/heat.conf.sample
+done < %{SOURCE20}
+
+%build
+%{__python} setup.py build
+
+%install
+%{__python} setup.py install -O1 --skip-build --root=%{buildroot}
+sed -i -e '/^#!/,1 d' %{buildroot}/%{python_sitelib}/heat/db/sqlalchemy/manage.py
+sed -i -e '/^#!/,1 d' %{buildroot}/%{python_sitelib}/heat/db/sqlalchemy/migrate_repo/manage.py
+mkdir -p %{buildroot}/var/log/heat/
+mkdir -p %{buildroot}/var/run/heat/
+install -p -D -m 644 %{SOURCE1} %{buildroot}%{_sysconfdir}/logrotate.d/openstack-heat
+
+# install init scripts
+install -p -D -m 755 %{SOURCE2} %{buildroot}%{_initrddir}/openstack-heat-api
+install -p -D -m 755 %{SOURCE3} %{buildroot}%{_initrddir}/openstack-heat-api-cfn
+install -p -D -m 755 %{SOURCE4} %{buildroot}%{_initrddir}/openstack-heat-engine
+install -p -D -m 755 %{SOURCE5} %{buildroot}%{_initrddir}/openstack-heat-api-cloudwatch
+
+mkdir -p %{buildroot}/var/lib/heat/
+mkdir -p %{buildroot}/etc/heat/
+
+%if 0%{?with_doc}
+export PYTHONPATH="$( pwd ):$PYTHONPATH"
+pushd doc
+sphinx-1.0-build -b html -d build/doctrees source build/html
+sphinx-1.0-build -b man -d build/doctrees source build/man
+
+mkdir -p %{buildroot}%{_mandir}/man1
+install -p -D -m 644 build/man/*.1 %{buildroot}%{_mandir}/man1/
+popd
+%endif
+
+rm -rf %{buildroot}/var/lib/heat/.dummy
+rm -f %{buildroot}/usr/bin/cinder-keystone-setup
+rm -rf %{buildroot}/%{python_sitelib}/heat/tests
+
+install -p -D -m 640 %{_builddir}/%{full_release}/etc/heat/heat.conf.sample %{buildroot}/%{_sysconfdir}/heat/heat.conf
+install -p -D -m 640 %{SOURCE20} %{buildroot}%{_datadir}/heat/heat-dist.conf
+install -p -D -m 640 %{_builddir}/%{full_release}/etc/heat/api-paste.ini %{buildroot}/%{_datadir}/heat/api-paste-dist.ini
+install -p -D -m 640 etc/heat/policy.json %{buildroot}/%{_sysconfdir}/heat
+
+# TODO: move this to setup.cfg
+cp -vr etc/heat/templates %{buildroot}/%{_sysconfdir}/heat
+cp -vr etc/heat/environment.d %{buildroot}/%{_sysconfdir}/heat
+
+%description
+Heat provides AWS CloudFormation and CloudWatch functionality for OpenStack.
+
+
+%package common
+Summary: Heat common
+Group: System Environment/Base
+
+Requires: python-argparse
+Requires: python-eventlet
+Requires: python-greenlet
+Requires: python-httplib2
+Requires: python-iso8601
+Requires: python-kombu
+Requires: python-lxml
+Requires: python-netaddr
+Requires: python-cinderclient
+Requires: python-keystoneclient >= 0.3.1
+Requires: python-memcached
+Requires: python-novaclient
+Requires: python-oslo-config >= 1:1.2.0
+Requires: python-neutronclient
+Requires: python-swiftclient
+Requires: python-migrate
+Requires: python-qpid
+Requires: python-six
+Requires: PyYAML
+Requires: m2crypto
+Requires: python-anyjson
+Requires: python-paramiko
+Requires: python-heatclient
+Requires: python-babel
+
+Requires: python-paste-deploy1.5
+Requires: python-routes1.12
+Requires: python-sqlalchemy0.7
+Requires: python-webob1.2
+
+Requires(pre): shadow-utils
+
+%description common
+Components common to all OpenStack Heat services
+
+%files common
+%doc LICENSE
+%{_bindir}/heat-manage
+%{_bindir}/heat-db-setup
+%{_bindir}/heat-keystone-setup
+%{python_sitelib}/heat*
+%attr(-, root, heat) %{_datadir}/heat/heat-dist.conf
+%attr(-, root, heat) %{_datadir}/heat/api-paste-dist.ini
+%dir %attr(0755,heat,root) %{_localstatedir}/log/heat
+%dir %attr(0755,heat,root) %{_localstatedir}/run/heat
+%dir %attr(0755,heat,root) %{_sharedstatedir}/heat
+%dir %attr(0755,heat,root) %{_sysconfdir}/heat
+%config(noreplace) %{_sysconfdir}/logrotate.d/openstack-heat
+%config(noreplace) %attr(-, root, heat) %{_sysconfdir}/heat/heat.conf
+%config(noreplace) %attr(-, root, heat) %{_sysconfdir}/heat/policy.json
+%config(noreplace) %attr(-,root,heat) %{_sysconfdir}/heat/environment.d/*
+%config(noreplace) %attr(-,root,heat) %{_sysconfdir}/heat/templates/*
+%if 0%{?with_doc}
+%{_mandir}/man1/heat-db-setup.1.gz
+%{_mandir}/man1/heat-keystone-setup.1.gz
+%endif
+
+%pre common
+# 187:187 for heat - rhbz#845078
+getent group heat >/dev/null || groupadd -r --gid 187 heat
+getent passwd heat >/dev/null || \
+useradd --uid 187 -r -g heat -d %{_sharedstatedir}/heat -s /sbin/nologin \
+-c "OpenStack Heat Daemons" heat
+exit 0
+
+%package engine
+Summary: The Heat engine
+Group: System Environment/Base
+
+Requires: %{name}-common = %{version}-%{release}
+
+Requires(post): chkconfig
+Requires(preun): chkconfig
+Requires(preun): initscripts
+Requires(postun): initscripts
+
+%description engine
+OpenStack API for starting CloudFormation templates on OpenStack
+
+%files engine
+%doc README.rst LICENSE
+%if 0%{?with_doc}
+%doc doc/build/html/man/heat-engine.html
+%endif
+%{_bindir}/heat-engine
+%{_initrddir}/openstack-heat-engine
+%if 0%{?with_doc}
+%{_mandir}/man1/heat-engine.1.gz
+%endif
+
+%post engine
+/sbin/chkconfig --add openstack-heat-engine
+
+%preun engine
+if [ $1 -eq 0 ]; then
+ /sbin/service openstack-heat-engine stop >/dev/null 2>&1
+ /sbin/chkconfig --del openstack-heat-engine
+fi
+
+%postun engine
+if [ $1 -ge 1 ]; then
+ /sbin/service openstack-heat-engine condrestart >/dev/null 2>&1 || :
+fi
+
+
+%package api
+Summary: The Heat API
+Group: System Environment/Base
+
+Requires: %{name}-common = %{version}-%{release}
+
+Requires(post): chkconfig
+Requires(preun): chkconfig
+Requires(preun): initscripts
+Requires(postun): initscripts
+
+%description api
+OpenStack-native REST API to the Heat Engine
+
+%files api
+%doc README.rst LICENSE
+%if 0%{?with_doc}
+%doc doc/build/html/man/heat-api.html
+%endif
+%{_bindir}/heat-api
+%{_initrddir}/openstack-heat-api
+%if 0%{?with_doc}
+%{_mandir}/man1/heat-api.1.gz
+%endif
+
+%post api
+/sbin/chkconfig --add openstack-heat-api
+
+%preun api
+if [ $1 -eq 0 ]; then
+ /sbin/service openstack-heat-api stop >/dev/null 2>&1
+ /sbin/chkconfig --del openstack-heat-api
+fi
+
+%postun api
+if [ $1 -ge 1 ]; then
+ /sbin/service openstack-heat-api condrestart >/dev/null 2>&1 || :
+fi
+
+
+%package api-cfn
+Summary: Heat CloudFormation API
+Group: System Environment/Base
+
+Requires: %{name}-common = %{version}-%{release}
+
+Requires(post): chkconfig
+Requires(preun): chkconfig
+Requires(preun): initscripts
+Requires(postun): initscripts
+
+%description api-cfn
+AWS CloudFormation-compatible API to the Heat Engine
+
+%files api-cfn
+%doc README.rst LICENSE
+%if 0%{?with_doc}
+%doc doc/build/html/man/heat-api-cfn.html
+%endif
+%{_bindir}/heat-api-cfn
+%{_initrddir}/openstack-heat-api-cfn
+%if 0%{?with_doc}
+%{_mandir}/man1/heat-api-cfn.1.gz
+%endif
+
+%post api-cfn
+/sbin/chkconfig --add openstack-heat-api-cfn
+
+%preun api-cfn
+if [ $1 -eq 0 ]; then
+ /sbin/service openstack-heat-api-cfn stop >/dev/null 2>&1
+ /sbin/chkconfig --del openstack-heat-api-cfn
+fi
+
+%postun api-cfn
+if [ $1 -ge 1 ]; then
+ /sbin/service openstack-heat-api-cfn condrestart >/dev/null 2>&1 || :
+fi
+
+
+%package api-cloudwatch
+Summary: Heat CloudWatch API
+Group: System Environment/Base
+
+Requires: %{name}-common = %{version}-%{release}
+
+Requires(post): chkconfig
+Requires(preun): chkconfig
+Requires(preun): initscripts
+Requires(postun): initscripts
+
+%description api-cloudwatch
+AWS CloudWatch-compatible API to the Heat Engine
+
+%files api-cloudwatch
+%doc README.rst LICENSE
+%if 0%{?with_doc}
+%doc doc/build/html/man/heat-api-cloudwatch.html
+%endif
+%{_bindir}/heat-api-cloudwatch
+%{_initrddir}/openstack-heat-api-cloudwatch
+%if 0%{?with_doc}
+%{_mandir}/man1/heat-api-cloudwatch.1.gz
+%endif
+
+%post api-cloudwatch
+/sbin/chkconfig --add openstack-heat-api-cloudwatch
+
+%preun api-cloudwatch
+if [ $1 -eq 0 ]; then
+ /sbin/service openstack-heat-api-cloudwatch stop >/dev/null 2>&1
+ /sbin/chkconfig --del openstack-heat-api-cloudwatch
+fi
+
+%postun api-cloudwatch
+if [ $1 -ge 1 ]; then
+ /sbin/service openstack-heat-api-cloudwatch condrestart >/dev/null 2>&1 || :
+fi
+
+
+%changelog
+* Thu Oct 17 2013 Jeff Peeler <jpeeler@redhat.com> 2013.2-1
+- update to havana final
+
+* Mon Oct 14 2013 Jeff Peeler <jpeeler@redhat.com> 2013.2-0.9.rc2
+- rebase to havana-rc2
+
+* Thu Oct 3 2013 Jeff Peeler <jpeeler@redhat.com> 2013.2-0.9.rc1
+- update to rc1
+- exclude doc builds if with_doc 0
+
+* Mon Sep 23 2013 Jeff Peeler <jpeeler@redhat.com> 2013.2-0.9.b3
+- changed webob requires to 1.0 -> 1.2
+
+* Thu Sep 19 2013 Jeff Peeler <jpeeler@redhat.com> 2013.2-0.8.b3
+- fix the python-oslo-config dependency to cater for epoch
+- add api-paste-dist.ini to /usr/share/heat
+
+* Tue Sep 17 2013 Jeff Peeler <jpeeler@redhat.com> 2013.2-0.7.b3
+- Depend on python-oslo-config >= 1.2 so it is upgraded automatically
+- Distribute dist defaults in heat-dist.conf separate to user heat.conf (rhbz 1008560)
+
+* Wed Sep 11 2013 Jeff Peeler <jpeeler@redhat.com> 2013.2-0.6.b3
+- fix init scripts (rhbz 1006868)
+- added python-babel
+- remove runtime pbr dependency (rhbz 1006911)
+
+* Mon Sep 9 2013 Jeff Peeler <jpeeler@redhat.com> 2013.2-0.5.b3
+- rebase to havana-3
+- remove tests from common
+- remove cli package and move heat-manage into common
+- added requires for python-heatclient
+- remove python-boto as boto has been moved to another repo
+- remove heat-cfn bash completion
+- add /var/run/heat directory
+
+* Tue Jul 30 2013 Pádraig Brady <pbrady@redhat.com> 2013.2-0.4.b2
+- avoid python runtime dependency management
+
+* Mon Jul 22 2013 Jeff Peeler <jpeeler@redhat.com> 2013.2-0.3.b2
+- rebase to havana-2
+
+* Mon Jun 10 2013 Jeff Peeler <jpeeler@redhat.com> 2013.2-0.2.b1
+- fix m2crypto patch
+
+* Tue Jun 4 2013 Jeff Peeler <jpeeler@redhat.com> 2013.2-0.1.b1
+- rebase to havana-1
+- consolidate api-paste files into one file in common
+- removed runner.py as it is no longer present
+- added heat-manage
+- added new buildrequires pbr and d2to1
+
+* Tue May 28 2013 Jeff Peeler <jpeeler@redhat.com> 2013.1-1.3
+- bumped obsoletes for f18 rebuilds of the old heat package
+- added missing policy.json file (rhbz#965549)
+
+* Wed May 8 2013 Jeff Peeler <jpeeler@redhat.com> 2013.1-1.2
+- removed python-crypto require
+
+* Wed May 8 2013 Jeff Peeler <jpeeler@redhat.com> 2013.1-1.1
+- re-added m2crypto patch (rhbz960165)
+
+* Mon Apr 8 2013 Jeff Peeler <jpeeler@redhat.com> 2013.1-1.0
+- update to grizzly final
+
+* Tue Apr 2 2013 Jeff Peeler <jpeeler@redhat.com> 2013.1-1.0.rc2
+- added epel patch for dependencies
+- added new buildrequires due to above patch
+- changed requires to more recent versions
+
+* Mon Apr 1 2013 Jeff Peeler <jpeeler@redhat.com> 2013.1-0.9.rc2
+- add /var/run/heat so PID directory has correct permissions
+
+* Thu Mar 28 2013 Jeff Peeler <jpeeler@redhat.com> 2013.1-0.8.rc2
+- bump to rc2
+
+* Wed Mar 27 2013 Jeff Peeler <jpeeler@redhat.com> 2013.1-0.8.rc1
+- converted systemd scripts to sysvinit
+- changed buildrequires from python-sphinx to python-sphinx10
+
+* Thu Mar 21 2013 Steven Dake <sdake@redhat.com> 2013.1-0.7.rc1
+- Add all dependencies required
+- Remove buildrequires of python-glanceclient
+
+* Wed Mar 20 2013 Jeff Peeler <jpeeler@redhat.com> 2013.1-0.6.rc1
+- Updated URL
+- Added version for Obsoletes
+- Removed dev suffix in builddir
+- Added missing man pages
+
+* Mon Mar 11 2013 Steven Dake <sdake@redhat.com> 2013.1-0.5.g3
+- Assign heat user with 167:167
+- Rename packages from *-api to api-*
+- Rename clients to cli
+- change user/gid to heat from openstack-heat
+- use shared state dir macro for shared state
+- Add /etc/heat dir to owned directory list
+- set proper uid/gid for files
+- set proper read/write/execute bits
+
+* Thu Dec 20 2012 Jeff Peeler <jpeeler@redhat.com> 2013.1-2
+- split into subpackages
+
+* Fri Dec 14 2012 Steve Baker <sbaker@redhat.com> 2013.1-1
+- rebase to 2013.1
+- expunge heat-metadata
+- generate man pages and html developer docs with sphinx
+
+* Tue Oct 23 2012 Zane Bitter <zbitter@redhat.com> 7-1
+- rebase to v7
+- add heat-api daemon (OpenStack-native API)
+
+* Fri Sep 21 2012 Jeff Peeler <jpeeler@redhat.com> 6-5
+- update m2crypto patch (Fedora)
+- fix user/group install permissions
+
+* Tue Sep 18 2012 Steven Dake <sdake@redhat.com> 6-4
+- update to new v6 binary names in heat
+
+* Tue Aug 21 2012 Jeff Peeler <jpeeler@redhat.com> 6-3
+- updated systemd scriptlets
+
+* Tue Aug 7 2012 Jeff Peeler <jpeeler@redhat.com> 6-2
+- change user/group ids to openstack-heat
+
+* Wed Aug 1 2012 Jeff Peeler <jpeeler@redhat.com> 6-1
+- create heat user and change file permissions
+- set systemd scripts to run as heat user
+
+* Fri Jul 27 2012 Ian Main <imain@redhat.com> - 5-1
+- added m2crypto patch.
+- bumped version for new release.
+- added boto.cfg to sysconfigdir
+
+* Tue Jul 24 2012 Jeff Peeler <jpeeler@redhat.com> - 4-5
+- added LICENSE to docs
+- added dist tag
+- added heat directory to files section
+- removed unnecessary defattr
+
+* Tue Jul 24 2012 Jeff Peeler <jpeeler@redhat.com> - 4-4
+- remove pycrypto requires
+
+* Fri Jul 20 2012 Jeff Peeler <jpeeler@redhat.com> - 4-3
+- change python-devel to python2-devel
+
+* Wed Jul 11 2012 Jeff Peeler <jpeeler@redhat.com> - 4-2
+- add necessary requires
+- removed shebang line for scripts not requiring executable permissions
+- add logrotate, removes all rpmlint warnings except for python-httplib2
+- remove buildroot tag since everything since F10 has a default buildroot
+- remove clean section as it is not required as of F13
+- add systemd unit files
+- change source URL to download location which doesn't require a SHA
+
+* Fri Jun 8 2012 Steven Dake <sdake@redhat.com> - 4-1
+- removed jeos from packaging since that comes from another repository
+- compressed all separate packages into one package
+- removed setup options which were producing incorrect results
+- replaced python with {__python}
+- added a br on python-devel
+- added a --skip-build to the install step
+- added percent-dir for directories
+- fixed most rpmlint warnings/errors
+
+* Mon Apr 16 2012 Chris Alfonso <calfonso@redhat.com> - 3-1
+- initial openstack package log
+++ /dev/null
-#!/bin/bash
-
-BASE_DIR=`dirname $0`
-
-function usage {
- echo "Usage: $0 [OPTION]..."
- echo "Run Heat's test suite(s)"
- echo ""
- echo " -V, --virtual-env Use virtualenv. Install automatically if not present."
- echo " (Default is to run tests in local environment)"
- echo " -F, --force Force a clean re-build of the virtual environment. Useful when dependencies have been added."
- echo " -f, --func Functional tests have been removed."
- echo " -u, --unit Run unit tests (default when nothing specified)"
- echo " -p, --pep8 Run pep8 tests"
- echo " --all Run pep8 and unit tests"
- echo " -c, --coverage Generate coverage report"
- echo " -d, --debug Run tests with testtools instead of testr. This allows you to use the debugger."
- echo " -h, --help Print this usage message"
- exit
-}
-
-# must not assign -a as an option, needed for selecting custom attributes
-no_venv=1
-function process_option {
- case "$1" in
- -V|--virtual-env) no_venv=0;;
- -F|--force) force=1;;
- -f|--func) test_func=1;;
- -u|--unit) test_unit=1;;
- -p|--pep8) test_pep8=1;;
- --all) test_unit=1; test_pep8=1;;
- -c|--coverage) coverage=1;;
- -d|--debug) debug=1;;
- -h|--help) usage;;
- *) args="$args $1"; test_unit=1;;
- esac
-}
-
-venv=.venv
-with_venv=tools/with_venv.sh
-wrapper=""
-debug=0
-
-function run_tests {
- echo 'Running tests'
- # Remove any extraneous DB migrations
- find heat/db/sqlalchemy/migrate_repo/versions/ -name '*.pyc' -delete
-
- if [ $debug -eq 1 ]; then
- echo "Debugging..."
- if [ "$args" = "" ]; then
- # Default to running all tests if specific test is not
- # provided.
- testrargs="discover ./heat/tests"
- fi
- ${wrapper} python -m testtools.run $args $testrargs
-
- # Short circuit because all of the testr and coverage stuff
- # below does not make sense when running testtools.run for
- # debugging purposes.
- return $?
- fi
-
- # Just run the test suites in current environment
- if [ -n "$args" ] ; then
- args="-t $args"
- fi
- python setup.py testr --slowest $args
-}
-
-function run_pep8 {
- echo "Running flake8..."
- bash -c "${wrapper} flake8"
-}
-
-# run unit tests with pep8 when no arguments are specified
-# otherwise process CLI options
-if [[ $# == 0 ]]; then
- test_pep8=1
- test_unit=1
-else
- for arg in "$@"; do
- process_option $arg
- done
-fi
-
-if [ "$no_venv" == 0 ]
-then
- # Remove the virtual environment if --force used
- if [ "$force" == 1 ]; then
- echo "Cleaning virtualenv..."
- rm -rf ${venv}
- fi
- if [ -e ${venv} ]; then
- wrapper="${with_venv}"
- else
- # Automatically install the virtualenv
- python tools/install_venv.py
- wrapper="${with_venv}"
- fi
-fi
-
-result=0
-
-# If functional or unit tests have been selected, run them
-if [ "$test_unit" == 1 ] || [ "$debug" == 1 ] ; then
- run_tests
- result=$?
-fi
-
-# Run pep8 if it was selected
-if [ "$test_pep8" == 1 ]; then
- run_pep8
-fi
-
-# Generate coverage report
-if [ "$coverage" == 1 ]; then
- echo "Generating coverage report in ./cover"
- python setup.py testr --coverage --slowest
- coverage report -m
-fi
-
-exit $result
+++ /dev/null
-[metadata]
-name = heat
-version = 2013.2
-summary = OpenStack Orchestration
-description-file =
- README.rst
-author = OpenStack
-author-email = openstack-dev@lists.openstack.org
-home-page = http://www.openstack.org/
-classifier =
- Environment :: OpenStack
- Intended Audience :: Information Technology
- Intended Audience :: System Administrators
- License :: OSI Approved :: Apache Software License
- Operating System :: POSIX :: Linux
- Programming Language :: Python
- Programming Language :: Python :: 2
- Programming Language :: Python :: 2.7
- Programming Language :: Python :: 2.6
-
-[files]
-packages =
- heat
-scripts =
- bin/cinder-keystone-setup
- bin/heat-api
- bin/heat-api-cfn
- bin/heat-api-cloudwatch
- bin/heat-db-setup
- bin/heat-engine
- bin/heat-keystone-setup
- bin/heat-manage
-
-[global]
-setup-hooks =
- pbr.hooks.setup_hook
-
-[compile_catalog]
-directory = heat/locale
-domain = heat
-
-[update_catalog]
-domain = heat
-output_dir = heat/locale
-input_file = heat/locale/heat.pot
-
-[extract_messages]
-keywords = _ gettext ngettext l_ lazy_gettext
-mapping_file = babel.cfg
-output_file = heat/locale/heat.pot
-
-[build_sphinx]
-all_files = 1
-build-dir = doc/build
-source-dir = doc/source
+++ /dev/null
-#!/usr/bin/env python
-# Copyright (c) 2013 Hewlett-Packard Development Company, L.P.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
-# implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-# THIS FILE IS MANAGED BY THE GLOBAL REQUIREMENTS REPO - DO NOT EDIT
-import setuptools
-
-setuptools.setup(
- setup_requires=['pbr>=0.5.21,<1.0'],
- pbr=True)
+++ /dev/null
-# Install bounded pep8/pyflakes first, then let flake8 install
-pep8==1.4.5
-pyflakes==0.7.2
-flake8==2.0
-hacking>=0.5.6,<0.7
-
-coverage>=3.6
-discover
-mock>=1.0
-mox>=0.5.3
-testtools>=0.9.32
-testrepository>=0.0.17
-python-glanceclient>=0.9.0
-sphinx>=1.1.2
-oslo.sphinx
+++ /dev/null
-Files in this directory are general developer tools and examples of how
-to perform certain development tasks.
-
-If you're running on F16, make sure you first enable the preview yum repository
-http://fedoraproject.org/wiki/Getting_started_with_OpenStack_on_Fedora_17#Preview_Repository_for_Fedora_16
-
------
-Tools
------
-
-+ experimental_ssh_eventlet.py
- - Example of using ssh inside python with eventlets.
-
-+ openstack - WARNING COULD DESTROY DATA
- - Installs, uninstalls, starts, and stops OpenStack from the RPM repository
- on F16 or F17. This provides a consistent, painless way for developers
- to deploy OpenStack.
-
- THIS IS NOT INTENDED FOR END USERS AS IT COULD PERMANENTLY DESTROY DATA
- AND DOES NOT PROPERLY SECURE THE INSTALLATION.
-
- note: This tool does not create a network. Creating a network depends
- on the specific environment, but an example network creation:
-
-sudo nova-manage network create demonet 10.0.0.0/24 1 256 --bridge=demonetbr0
-
-+ heat-keystone-setup
- - This tool configures keystone for use with the heat service the first
- time openstack is installed.
-
-+ heat-db-drop
- - This script drops the heat database from MySQL, for use when developer
- data has been corrupted or when erasing heat.
-
-+ glance-jeos-add-from-github.sh
- - Register all JEOS images from github prebuilt repositories.
- This takes about 1 hour on a typical wireless connection.
+++ /dev/null
-#!/usr/bin/env python
-
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import sys
-import os
-import re
-from heat.common import template_format
-
-def main():
- path = sys.argv[1]
- if os.path.isdir(path):
- convert_directory(path)
- elif os.path.isfile(path):
- convert_file(path)
- else:
- print('File or directory not valid: %s' % path)
-
-def convert_file(path):
- with open(path, 'r') as f:
- print(template_format.convert_json_to_yaml(f.read()))
-
-def convert_directory(dirpath):
- for path in os.listdir(dirpath):
- if not path.endswith('.template') and not path.endswith('.json'):
- continue
- yamlpath = re.sub(r'\..*$', '.yaml', path)
- print('Writing to %s' % yamlpath)
- with open(os.path.join(dirpath, path), 'r') as f:
- yml = template_format.convert_json_to_yaml(f.read())
- with open(os.path.join(dirpath, yamlpath), 'w') as out:
- out.write(yml)
-
-if __name__ == '__main__':
- main()
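The filename substitution in `convert_directory()` replaces everything from the first dot onward with `.yaml`; a small self-contained sketch of that rename (the helper name here is illustrative, not part of the tool):

```python
import re

def to_yaml_name(path):
    """Mirror convert_directory()'s rename: everything from the first
    dot to the end of the name is replaced with '.yaml'."""
    return re.sub(r'\..*$', '.yaml', path)

print(to_yaml_name('WordPress.template'))  # WordPress.yaml
print(to_yaml_name('stack.json'))          # stack.yaml
```

Note that a name with several dots collapses to its first stem, e.g. `multi.part.json` becomes `multi.yaml`.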
+++ /dev/null
-#!/bin/sh
-TMPFILE=`mktemp`
-trap "rm -f ${TMPFILE}" EXIT
-tools/conf/generate_sample.sh "${TMPFILE}"
-if ! diff "${TMPFILE}" etc/heat/heat.conf.sample
-then
- echo "E: heat.conf.sample is not up to date, please run tools/conf/generate_sample.sh"
- exit 42
-fi
+++ /dev/null
-#!/usr/bin/env bash
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Copyright 2012 SINA Corporation
-# All Rights Reserved.
-# Author: Zhongyue Luo <lzyeval@gmail.com>
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-FILES=$(find heat -type f -name "*.py" ! -path "heat/tests/*" -exec \
- grep -l "Opt(" {} \; | sort -u)
-
-DEST=${1:-etc/heat/heat.conf.sample}
-
-PYTHONPATH=./:${PYTHONPATH} \
- python $(dirname "$0")/../../heat/openstack/common/config/generator.py ${FILES} > $DEST
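The `find`/`grep -l "Opt("` pipeline above selects every non-test Python source file that declares an option; an equivalent sketch in Python (a hypothetical helper, not part of the repo):

```python
import os

def find_opt_files(basedir, package):
    """Mirror the find/grep pipeline: collect every .py file under
    basedir/package that mentions 'Opt(', skipping tests directories."""
    matches = []
    for root, _dirs, files in os.walk(os.path.join(basedir, package)):
        if 'tests' in root.split(os.sep):
            continue  # equivalent of: ! -path "*/tests/*"
        for name in files:
            if not name.endswith('.py'):
                continue
            path = os.path.join(root, name)
            with open(path) as src:
                if 'Opt(' in src.read():
                    matches.append(os.path.relpath(path, basedir))
    return sorted(matches)  # the shell version pipes through `sort -u`
```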
+++ /dev/null
-#!/usr/bin/env bash
-
-print_hint() {
- echo "Try \`${0##*/} --help' for more information." >&2
-}
-
-PARSED_OPTIONS=$(getopt -n "${0##*/}" -o hb:p:o: \
- --long help,base-dir:,package-name:,output-dir: -- "$@")
-
-if [ $? != 0 ] ; then print_hint ; exit 1 ; fi
-
-eval set -- "$PARSED_OPTIONS"
-
-while true; do
- case "$1" in
- -h|--help)
- echo "${0##*/} [options]"
- echo ""
- echo "options:"
- echo "-h, --help show brief help"
- echo "-b, --base-dir=DIR Project base directory (required)"
- echo "-p, --package-name=NAME Project package name"
- echo "-o, --output-dir=DIR File output directory"
- exit 0
- ;;
- -b|--base-dir)
- shift
- BASEDIR=`echo $1 | sed -e 's/\/*$//g'`
- shift
- ;;
- -p|--package-name)
- shift
- PACKAGENAME=$1
- shift
- ;;
- -o|--output-dir)
- shift
- OUTPUTDIR=`echo $1 | sed -e 's/\/*$//g'`
- shift
- ;;
- --)
- break
- ;;
- esac
-done
-
-if [ -z "$BASEDIR" ] || ! [ -d "$BASEDIR" ]
-then
- echo "${0##*/}: missing project base directory" >&2 ; print_hint ; exit 1
-fi
-
-PACKAGENAME=${PACKAGENAME:-${BASEDIR##*/}}
-
-OUTPUTDIR=${OUTPUTDIR:-$BASEDIR/etc}
-if ! [ -d "$OUTPUTDIR" ]
-then
- echo "${0##*/}: cannot access \`$OUTPUTDIR': No such file or directory" >&2
- exit 1
-fi
-
-BASEDIRESC=`echo $BASEDIR | sed -e 's/\//\\\\\//g'`
-FILES=$(find $BASEDIR/$PACKAGENAME -type f -name "*.py" ! -path "*/tests/*" \
- -exec grep -l "Opt(" {} + | sed -e "s/^$BASEDIRESC\///g" | sort -u)
-
-export EVENTLET_NO_GREENDNS=yes
-
-MODULEPATH=heat.openstack.common.config.generator
-OUTPUTFILE=$OUTPUTDIR/$PACKAGENAME.conf.sample
-python -m $MODULEPATH $FILES > $OUTPUTFILE
+++ /dev/null
-#!/usr/bin/python
-
-import eventlet
-from eventlet.green import socket
-import libssh2
-import os
-import random
-
-
-def monitor(hostname, username, id):
-
- print('%s %s %d' % (hostname, username, id))
- sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
- sock.connect((hostname, 22))
-
- session = libssh2.Session()
- started = False
- while not started:
- try:
- session.startup(sock)
- started = True
- except Exception:
- eventlet.sleep(1)
- session.userauth_publickey_fromfile(
- username,
- os.path.expanduser('~/.ssh/id_rsa.pub'),
- os.path.expanduser('~/.ssh/id_rsa'),
- '')
-
- while True:
- sl = random.randint(1, 20)
- eventlet.sleep(sl)
- channel = session.channel()
- channel.execute('uname -a')
-
- stdout = []
- #stderr = []
-
- while not channel.eof:
- data = channel.read(1024)
- if data:
- stdout.append(data)
-
- #data = channel.read(1024, libssh2.STDERR)
- #if data:
- # stderr.append(data)
-
- print('%d %d %s' % (id, sl, ''.join(stdout)))
- #print ''.join(stderr)
-
-
-pool = eventlet.GreenPool()
-i = 1
-while True:
- pool.spawn_n(monitor, '192.168.122.238', 'root', i)
- i = i + 1
- if i > 800:
- break
-
-pool.waitall()
+++ /dev/null
-#!/bin/bash
-# Downloads JEOS images from github and installs them in glance
-
-DISK_FORMAT="qcow2"
-INDEX_URL="https://github.com/heat-api/prebuilt-jeos-images/downloads"
-DOWNLOAD_URL="http://cloud.github.com/downloads/heat-api/prebuilt-jeos-images"
-IMAGES=$(curl -s ${INDEX_URL} | grep 'href="/downloads/heat-api/prebuilt-jeos-images' 2>/dev/null | grep ${DISK_FORMAT} | cut -d">" -f2 | cut -d"<" -f1)
-
-for i in ${IMAGES}
-do
- NAME=$(echo $i | sed "s/\.${DISK_FORMAT}//")
- echo "Downloading and registering $i with OpenStack glance as ${NAME}"
- if glance index | grep -q "\s${NAME}\s"
- then
- echo "WARNING : ${NAME} already exists, skipping"
- else
- echo "Downloading from ${DOWNLOAD_URL}/$i"
- glance add name=${NAME} is_public=true disk_format=${DISK_FORMAT} container_format=bare copy_from="${DOWNLOAD_URL}/$i"
- fi
-done
+++ /dev/null
-#!/bin/bash
-
-DATABASE=heat
-
-while [ $# -gt 0 ]; do
- case $1 in
- heat|nova|keystone|glance)
- DATABASE=$1
- ;;
- -r|--rootpw)
- shift
- MYSQL_ROOT_PW_ARG="--password=$1"
- ;;
- *)
- ;;
- esac
- shift
-done
-
-if [ ! "${MYSQL_ROOT_PW_ARG+defined}" ] ; then
- printf "Please enter the password for the 'root' MySQL user: "
- read -s MYSQL_ROOT_PW
- MYSQL_ROOT_PW_ARG="--password=${MYSQL_ROOT_PW}"
- echo
-fi
-
-cat << EOF | mysql -u root ${MYSQL_ROOT_PW_ARG}
-DROP USER '${DATABASE}'@'localhost';
-DROP USER '${DATABASE}'@'%';
-DROP DATABASE ${DATABASE};
-flush privileges;
-EOF
-
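The `${MYSQL_ROOT_PW_ARG+defined}` test above relies on bash's `${VAR+word}` expansion, which distinguishes an unset variable from one set to the empty string; a minimal sketch of the idiom:

```shell
#!/bin/bash
# ${VAR+word} expands to "word" only when VAR is set -- even if set to
# the empty string -- so "never provided" and "provided empty" differ.
unset PW
if [ ! "${PW+defined}" ]; then
    echo "PW is unset"          # heat-db-drop prompts for a password here
fi

PW=""
if [ "${PW+defined}" ]; then
    echo "PW is set (empty)"    # an explicit empty password: no prompt
fi
```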
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Copyright 2010 United States Government as represented by the
-# Administrator of the National Aeronautics and Space Administration.
-# All Rights Reserved.
-#
-# Copyright 2010 OpenStack Foundation
-# Copyright 2013 IBM Corp.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import os
-import sys
-
-import install_venv_common as install_venv
-
-
-def print_help(venv, root):
- help = """
- Heat development environment setup is complete.
-
- Heat development uses virtualenv to track and manage Python dependencies
- while in development and testing.
-
- To activate the Heat virtualenv for the extent of your current shell
- session you can run:
-
- $ source %s/bin/activate
-
- Or, if you prefer, you can run commands in the virtualenv on a case by case
- basis by running:
-
- $ %s/tools/with_venv.sh <your command>
-
- Also, make test will automatically use the virtualenv.
- """
- print(help % (venv, root))
-
-
-def main(argv):
- root = os.path.dirname(os.path.dirname(os.path.realpath(__file__)))
-
- if os.environ.get('tools_path'):
- root = os.environ['tools_path']
- venv = os.path.join(root, '.venv')
- if os.environ.get('venv'):
- venv = os.environ['venv']
-
- pip_requires = os.path.join(root, 'requirements.txt')
- test_requires = os.path.join(root, 'test-requirements.txt')
- py_version = "python%s.%s" % (sys.version_info[0], sys.version_info[1])
- project = 'Heat'
- install = install_venv.InstallVenv(root, venv, pip_requires, test_requires,
- py_version, project)
- options = install.parse_args(argv)
- install.check_python_version()
- install.check_dependencies()
- install.create_virtualenv(no_site_packages=options.no_site_packages)
- install.install_dependencies()
- install.post_process()
- print_help(venv, root)
-
-if __name__ == '__main__':
- main(sys.argv)
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Copyright 2013 OpenStack Foundation
-# Copyright 2013 IBM Corp.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-"""Provides methods needed by installation script for OpenStack development
-virtual environments.
-
-Since this script is used to bootstrap a virtualenv from the system's Python
-environment, it should be kept strictly compatible with Python 2.6.
-
-Synced in from openstack-common
-"""
-
-from __future__ import print_function
-
-import optparse
-import os
-import subprocess
-import sys
-
-
-class InstallVenv(object):
-
- def __init__(self, root, venv, requirements,
- test_requirements, py_version,
- project):
- self.root = root
- self.venv = venv
- self.requirements = requirements
- self.test_requirements = test_requirements
- self.py_version = py_version
- self.project = project
-
- def die(self, message, *args):
- print(message % args, file=sys.stderr)
- sys.exit(1)
-
- def check_python_version(self):
- if sys.version_info < (2, 6):
- self.die("Need Python Version >= 2.6")
-
- def run_command_with_code(self, cmd, redirect_output=True,
- check_exit_code=True):
- """Runs a command in an out-of-process shell.
-
- Returns the output of that command. Working directory is self.root.
- """
- if redirect_output:
- stdout = subprocess.PIPE
- else:
- stdout = None
-
- proc = subprocess.Popen(cmd, cwd=self.root, stdout=stdout)
- output = proc.communicate()[0]
- if check_exit_code and proc.returncode != 0:
- self.die('Command "%s" failed.\n%s', ' '.join(cmd), output)
- return (output, proc.returncode)
-
- def run_command(self, cmd, redirect_output=True, check_exit_code=True):
- return self.run_command_with_code(cmd, redirect_output,
- check_exit_code)[0]
-
- def get_distro(self):
- if (os.path.exists('/etc/fedora-release') or
- os.path.exists('/etc/redhat-release')):
- return Fedora(
- self.root, self.venv, self.requirements,
- self.test_requirements, self.py_version, self.project)
- else:
- return Distro(
- self.root, self.venv, self.requirements,
- self.test_requirements, self.py_version, self.project)
-
- def check_dependencies(self):
- self.get_distro().install_virtualenv()
-
- def create_virtualenv(self, no_site_packages=True):
- """Creates the virtual environment and installs PIP.
-
- Creates the virtual environment and installs PIP only into the
- virtual environment.
- """
- if not os.path.isdir(self.venv):
- print('Creating venv...', end=' ')
- if no_site_packages:
- self.run_command(['virtualenv', '-q', '--no-site-packages',
- self.venv])
- else:
- self.run_command(['virtualenv', '-q', self.venv])
- print('done.')
- else:
- print("venv already exists...")
-
- def pip_install(self, *args):
- self.run_command(['tools/with_venv.sh',
- 'pip', 'install', '--upgrade'] + list(args),
- redirect_output=False)
-
- def install_dependencies(self):
- print('Installing dependencies with pip (this can take a while)...')
-
- # First things first, make sure our venv has the latest pip and
- # setuptools.
- self.pip_install('pip>=1.3')
- self.pip_install('setuptools')
-
- self.pip_install('-r', self.requirements)
- self.pip_install('-r', self.test_requirements)
-
- def post_process(self):
- self.get_distro().post_process()
-
- def parse_args(self, argv):
- """Parses command-line arguments."""
- parser = optparse.OptionParser()
- parser.add_option('-n', '--no-site-packages',
- action='store_true',
- help="Do not inherit packages from global Python "
- "install")
- return parser.parse_args(argv[1:])[0]
-
-
-class Distro(InstallVenv):
-
- def check_cmd(self, cmd):
- return bool(self.run_command(['which', cmd],
- check_exit_code=False).strip())
-
- def install_virtualenv(self):
- if self.check_cmd('virtualenv'):
- return
-
- if self.check_cmd('easy_install'):
- print('Installing virtualenv via easy_install...', end=' ')
- if self.run_command(['easy_install', 'virtualenv']):
- print('Succeeded')
- return
- else:
- print('Failed')
-
- self.die('ERROR: virtualenv not found.\n\n%s development'
- ' requires virtualenv, please install it using your'
- ' favorite package management tool' % self.project)
-
- def post_process(self):
- """Any distribution-specific post-processing gets done here.
-
- In particular, this is useful for applying patches to code inside
- the venv.
- """
- pass
-
-
-class Fedora(Distro):
- """This covers all Fedora-based distributions.
-
- Includes: Fedora, RHEL, CentOS, Scientific Linux
- """
-
- def check_pkg(self, pkg):
- return self.run_command_with_code(['rpm', '-q', pkg],
- check_exit_code=False)[1] == 0
-
- def apply_patch(self, originalfile, patchfile):
- self.run_command(['patch', '-N', originalfile, patchfile],
- check_exit_code=False)
-
- def install_virtualenv(self):
- if self.check_cmd('virtualenv'):
- return
-
- if not self.check_pkg('python-virtualenv'):
- self.die("Please install 'python-virtualenv'.")
-
- super(Fedora, self).install_virtualenv()
-
- def post_process(self):
- """Workaround for a bug in eventlet.
-
- This currently affects RHEL6.1, but the fix can safely be
- applied to all RHEL and Fedora distributions.
-
- This can be removed when the fix is applied upstream.
-
- Nova: https://bugs.launchpad.net/nova/+bug/884915
- Upstream: https://bitbucket.org/eventlet/eventlet/issue/89
- RHEL: https://bugzilla.redhat.com/958868
- """
-
- # Install "patch" program if it's not there
- if not self.check_pkg('patch'):
- self.die("Please install 'patch'.")
-
- # Apply the eventlet patch
- self.apply_patch(os.path.join(self.venv, 'lib', self.py_version,
- 'site-packages',
- 'eventlet/green/subprocess.py'),
- 'contrib/redhat-eventlet.patch')
+++ /dev/null
-#!/bin/bash
-
-TOOLS_DIR=`dirname $0`
-HEAT_DIR="$TOOLS_DIR/.."
-
-clean() {
- $TOOLS_DIR/uninstall-heat -y -r ""
-}
-
-error() {
- echo "Failed :("
-}
-
-run() {
- bash -c "$($TOOLS_DIR/rst2script.sed $HEAT_DIR/doc/source/getting_started/on_fedora.rst)" || error
-}
-
-case $1 in
- clean|run)
- $1
- ;;
- *)
- clean
- run
- ;;
-esac
+++ /dev/null
-#!/bin/bash
-echo "WARNING: This script now makes no modifications to the current flavors"
-nova flavor-list
+++ /dev/null
-#!/bin/bash
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-#
-# Copyright (C) 2012, Red Hat, Inc.
-# Angus Salkeld <asalkeld@redhat.com>
-# Steven Dake <sdake@redhat.com>
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-# sleep 1 = systemd's definition of start is different from mine
-
-BASE_DIR=`dirname $0`
-
-if [[ $1 = '--with-cinder' ]]; then
- with_cinder=1
-else
- with_cinder=0
-fi
-
-FEDORA_VERSION=$(awk '{print $3}' /etc/fedora-release)
-if [[ $FEDORA_VERSION -gt 17 || $with_cinder -eq 1 ]]; then
- VOLUME_SERVICE="openstack-cinder"
- VOLUME_PATH="/var/lib/cinder/cinder-volumes.img"
- VOLUME_NAME="cinder-volumes"
-else
- VOLUME_SERVICE="openstack-nova"
- VOLUME_PATH="/var/lib/nova/nova-volumes.img"
- VOLUME_NAME="nova-volumes"
-fi
-
-action=$1
-if [ -z "$action" ]
-then
- echo "openstack [start|stop|restart|install|erase|status]"
- echo
- echo "This tool is designed to control OpenStack on a Fedora 16+ system"
- echo
- echo "start - Starts OpenStack"
- echo "stop - Stops OpenStack"
- echo "restart - Restart OpenStack"
- echo "install - Installs a fresh OpenStack system with Keystone from RPM repositories"
- echo "erase - permanently destroys an existing installation of OpenStack"
- echo "status - show service status for all required OpenStack services"
-fi
-
-os_dist='unknown'
-function check_os_version() {
- #determine which OpenStack version we'll be running
- os_version=$(rpm -q openstack-nova-common --qf='%{VERSION}')
- case $os_version in
- 2012.1|2012.1.*) os_dist='essex';;
- 2012.2|2012.2.*) os_dist='folsom';;
- 2013.1|2013.1.*) os_dist='grizzly';;
- *) echo 'Could not detect installed OpenStack version';;
- esac
- echo "Detected os_version $os_version os_dist=$os_dist"
-}
-
-function os_status() {
- for service in ${OS_SERVICES[@]}
- do
- output=$(systemctl show "$service.service" --property=ActiveState)
- running=(${output//=/ }) #ActiveState=active
- echo "$service ${running[1]}" | awk '{ printf "%-40s %s\n", $1, $2}'
- done
-}
-
-OS_STATUS="OK"
-function os_check_status() {
- # If a service is not running, we try again up to MAX_TRIES times
- MAX_TRIES=5
- for service in ${OS_SERVICES[@]}
- do
- attempts=0
- while [[ ${attempts} -lt ${MAX_TRIES} ]]
- do
- attempts=$((${attempts} + 1))
- output=$(systemctl show "$service.service" --property=ActiveState)
- running=${output#ActiveState=} #ActiveState=active
- if [[ ${running} != "active" ]]
- then
- echo "Service ${service} does not seem to be running, waiting 1s ${attempts}/${MAX_TRIES}"
- OS_STATUS="FAIL ${service} : ${running}"
- sleep 1
- else
- echo "${service} ${running}" | awk '{ printf "%-40s %s\n", $1, $2}'
- OS_STATUS="OK"
- break
- fi
- done
-
- # If we get here and OS_STATUS != OK then we return as something failed
- if [[ ${OS_STATUS} != "OK" ]]
- then
- echo "Service ${service} has failed to start, check logs for errors"
- break
- fi
- done
-}
-
-function os_start() {
- action=start
- sudo systemctl $action qpidd.service mysqld.service
- sleep 1
- sudo systemctl $action openstack-keystone.service tgtd.service
- sleep 1
- for svc in api registry
- do
- sudo systemctl $action openstack-glance-$svc.service
- done
-
- if ! sudo vgs | grep -q $VOLUME_NAME; then
- sudo vgcreate $VOLUME_NAME $(sudo losetup --show -f $VOLUME_PATH)
- fi
-
- for svc in api cert objectstore compute scheduler
- do
- sudo systemctl $action openstack-nova-$svc.service
- done
-
- sudo systemctl $action $VOLUME_SERVICE-volume.service
- if [[ $VOLUME_SERVICE == "openstack-cinder" ]]; then
- for svc in api scheduler
- do
- sudo systemctl $action $VOLUME_SERVICE-$svc.service
- done
- fi
-
- # This must be started after openstack-nova-cert due to an SELinux
- # policy problem. See https://bugzilla.redhat.com/show_bug.cgi?id=857747
- sleep 2
- sudo systemctl $action openstack-nova-network.service
-
- # conductor service is new for grizzly
- if [ "$os_dist" = 'grizzly' ]; then
- sudo systemctl $action openstack-nova-conductor.service
- fi
-}
-
-function os_stop() {
- action=stop
- sudo systemctl $action openstack-keystone.service tgtd.service
- for svc in api objectstore compute network scheduler cert
- do
- sudo systemctl $action openstack-nova-$svc.service
- done
-
- sudo systemctl $action $VOLUME_SERVICE-volume.service
- if [[ $VOLUME_SERVICE == "openstack-cinder" ]]; then
- for svc in api scheduler
- do
- sudo systemctl $action $VOLUME_SERVICE-$svc.service
- done
- fi
-
- for svc in api registry
- do
- sudo systemctl $action openstack-glance-$svc.service
- done
-}
-
-function os_restart() {
- action=restart
- os_stop
- sleep 1
- os_start
-}
-
-function os_erase() {
- for net in `sudo nova-manage network list |
- awk '/^[[:digit:]]/ { print $9 }'`
- do
- sudo nova-manage network delete --uuid $net
- done
-
- os_stop
- sleep 1
-
- # Kill dnsmasq processes
- if [ -n "$(find /var/lib/nova/networks -name '*.pid')" ]; then
- sudo kill `cat /var/lib/nova/networks/*.pid`
- fi
-
- sudo rm -f /var/lib/libvirt/qemu/save/instance-000*
- sudo rm -f /var/lib/libvirt/qemu/instance-000*
- sudo yum -q -y erase python-glance* python-nova* python-keystone* openstack-swift* openstack-dashboard
-
- sudo systemctl start mysqld.service
- sleep 1
-
- sudo openstack-db --yes --service nova --drop $*
- sudo openstack-db --yes --service glance --drop $*
- sudo openstack-db --yes --service keystone --drop $*
- if [[ $VOLUME_SERVICE == "openstack-cinder" ]]; then
- sudo openstack-db --yes --service cinder --drop $*
- fi
-
- sudo yum -q -y erase openstack-utils
-
- sudo vgchange -an $VOLUME_NAME
- sudo losetup -d /dev/loop0
- sudo rm -f $VOLUME_PATH
- sudo rm -rf /etc/{glance,nova,swift,keystone,openstack-dashboard,cinder} /var/lib/{glance,nova,swift,keystone,cinder} /var/log/{glance,nova,swift,keystone,cinder} /var/run/{glance,nova,swift,keystone,cinder}
- rm -f $HOME/.openstack/keystonerc
-}
-
-function os_install() {
- sudo yum -q -y groupinstall Virtualization
- sudo yum -q -y install openstack-utils openstack-nova openstack-glance openstack-keystone openstack-dashboard scsi-target-utils qpid-cpp-server qpid-cpp-server-daemon $VOLUME_SERVICE
-
- check_os_version
-
- sudo dd if=/dev/zero of=$VOLUME_PATH bs=1M seek=20k count=0
- # Note if mysql-server is not installed, this will fail
- # and openstack-db --yes will install/start it
- sudo systemctl start mysqld.service
- sudo systemctl enable libvirtd.service
- sudo systemctl start libvirtd.service
- sleep 1
- # Configure the databases
- sudo openstack-db --yes --service nova --init $*
- sudo openstack-db --yes --service glance --init $*
- sudo openstack-db --yes --service keystone --init $*
- if [[ $VOLUME_SERVICE == "openstack-cinder" ]]; then
- sudo openstack-db --yes --service cinder --init $*
- fi
-
- # Create a keystone RC file
- # Note that keystone bug #1073291 "fix" removes a chunk of stuff from
- # /usr/share/openstack-keystone/sample_data.sh (called from
- # openstack-keystone-sample-data below), so for grizzly we have to align
- # our RC file with the hardcoded values, because the password variables
- # are now ignored, also the admin tenant is no longer created.
- # I raised bug #1166182 to potentially fix this, but for now workaround:
- mkdir -p $HOME/.openstack
- if [ "$os_dist" = 'grizzly' ]; then
- cat > $HOME/.openstack/keystonerc <<EOF
-export ADMIN_TOKEN=$(openssl rand -hex 10)
-export OS_USERNAME=admin
-export OS_PASSWORD=secrete
-export OS_TENANT_NAME=demo
-export OS_AUTH_URL=http://127.0.0.1:5000/v2.0/
-export OS_AUTH_STRATEGY=keystone
-EOF
- else
- cat > $HOME/.openstack/keystonerc <<EOF
-export ADMIN_TOKEN=$(openssl rand -hex 10)
-export OS_USERNAME=admin
-export OS_PASSWORD=verybadpass
-export OS_TENANT_NAME=admin
-export OS_AUTH_URL=http://127.0.0.1:5000/v2.0/
-export OS_AUTH_STRATEGY=keystone
-EOF
- fi
- # Install keystone catalog
- source $HOME/.openstack/keystonerc
- sudo openstack-config --set /etc/keystone/keystone.conf DEFAULT admin_token $ADMIN_TOKEN
-
- # Need to setup pki on grizzly or you get "Unable to sign token." error
- if [ "$os_dist" = 'grizzly' ]; then
- keystone-manage pki_setup
- chown -R keystone /etc/keystone/ssl/
- fi
-
- sudo systemctl start openstack-keystone.service
- sleep 1
-
- sudo ADMIN_PASSWORD=$OS_PASSWORD SERVICE_PASSWORD=servicepass openstack-keystone-sample-data
-
- # Configure nova to use keystone
- sudo openstack-config --set /etc/nova/api-paste.ini filter:authtoken admin_tenant_name service
- sudo openstack-config --set /etc/nova/api-paste.ini filter:authtoken admin_user nova
- if [ "$os_dist" = 'grizzly' ]; then
- sudo openstack-config --set /etc/nova/api-paste.ini filter:authtoken admin_password nova
- else
- sudo openstack-config --set /etc/nova/api-paste.ini filter:authtoken admin_password servicepass
- fi
- sudo openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
- # Configure glance to use keystone
- sudo openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor keystone
- sudo openstack-config --set /etc/glance/glance-registry.conf paste_deploy flavor keystone
- sudo openstack-config --set /etc/glance/glance-api-paste.ini filter:authtoken admin_tenant_name service
- sudo openstack-config --set /etc/glance/glance-api-paste.ini filter:authtoken admin_user glance
- if [ "$os_dist" = 'grizzly' ]; then
- sudo openstack-config --set /etc/glance/glance-api-paste.ini filter:authtoken admin_password glance
- else
- sudo openstack-config --set /etc/glance/glance-api-paste.ini filter:authtoken admin_password servicepass
- fi
- sudo openstack-config --set /etc/glance/glance-registry-paste.ini filter:authtoken admin_tenant_name service
- sudo openstack-config --set /etc/glance/glance-registry-paste.ini filter:authtoken admin_user glance
- sudo openstack-config --set /etc/glance/glance-registry-paste.ini filter:authtoken admin_password servicepass
-
- if [[ $VOLUME_SERVICE == "openstack-cinder" ]]; then
- sudo openstack-config --set /etc/cinder/api-paste.ini filter:authtoken admin_tenant_name service
- sudo openstack-config --set /etc/cinder/api-paste.ini filter:authtoken admin_user cinder
- sudo openstack-config --set /etc/cinder/api-paste.ini filter:authtoken admin_password servicepass
- sudo openstack-config --set /etc/cinder/cinder.conf DEFAULT auth_strategy keystone
- # must disable implicit osapi_volume
- sudo openstack-config --set /etc/nova/nova.conf DEFAULT enabled_apis ec2,osapi_compute,metadata
- sudo openstack-config --set /etc/nova/nova.conf DEFAULT volume_api_class nova.volume.cinder.API
- fi
-
- os_stop
- sleep 1
- sudo rm -rf /var/log/{glance,nova,swift,keystone,cinder}/*
- os_start
- sleep 1
- if [[ $VOLUME_SERVICE == "openstack-cinder" ]]; then
- ${BASE_DIR}/../bin/cinder-keystone-setup
- fi
- echo "Installation Complete."
-
- echo "Checking all expected services are running"
- os_check_status
- if [[ ${OS_STATUS} != "OK" ]]
- then
- echo "Service failed to start : ${OS_STATUS}, cannot continue"
- exit 1
- fi
-
- echo "Testing nova and glance. If any errors are displayed, the install failed..."
- # Create additional flavors required by heat templates
- ${BASE_DIR}/nova_create_flavors.sh
- nova flavor-list
- glance index
- echo
- echo "note: This tool does not create a network. Creating a network"
- echo "depends on your environment. An example network create operation:"
- echo
- echo " sudo nova-manage network create demonet 10.0.0.0/24 1 256 --bridge=demonetbr0"
- echo
- echo -e "The network range here should *not* be one used on your existing physical\nnetwork."
- echo "It should be a range dedicated for the network that OpenStack will configure."
- echo "If 10.0.0.0/24 clashes with your local network, pick another range."
-
-}
-
-OS_SERVICES=(qpidd mysqld openstack-keystone tgtd openstack-glance-api openstack-glance-registry openstack-nova-api openstack-nova-objectstore openstack-nova-compute openstack-nova-network openstack-nova-scheduler openstack-nova-cert $VOLUME_SERVICE-volume)
-if [[ $VOLUME_SERVICE == "openstack-cinder" ]]; then
- OS_SERVICES+=($VOLUME_SERVICE-api $VOLUME_SERVICE-scheduler)
-fi
-
-check_os_version
-if [ "$os_dist" = 'grizzly' ]; then
- OS_SERVICES+=(openstack-nova-conductor)
-fi
-
-case $action in
- "")
- ;;
- start)
- os_start
- ;;
- stop)
- os_stop
- ;;
- restart)
- os_restart
- ;;
- erase)
- shift
- os_erase $*
- ;;
- install)
- shift
- os_install $*
- ;;
- status)
- os_status
- ;;
- *)
- echo "The action \"$action\" is not supported."
- ;;
-esac
-
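The tool above builds its service list conditionally: a base bash array plus extra entries appended only when the cinder volume service is selected. A minimal standalone sketch of that pattern (the service names and `VOLUME_SERVICE` value here are illustrative, not taken from a live system):

```shell
#!/bin/bash
# Sketch of the conditional service-list construction used by the tool:
# start from a base array, append feature-specific entries when enabled.
VOLUME_SERVICE="openstack-cinder"
SERVICES=(mysqld openstack-keystone ${VOLUME_SERVICE}-volume)
if [[ $VOLUME_SERVICE == "openstack-cinder" ]]; then
    # cinder splits api/scheduler into separate services
    SERVICES+=(${VOLUME_SERVICE}-api ${VOLUME_SERVICE}-scheduler)
fi
echo "${#SERVICES[@]} services: ${SERVICES[*]}"
```

The `+=()` append keeps each entry as a distinct array element, so later `for service in "${SERVICES[@]}"` loops iterate one service at a time even if a name ever contained spaces.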
+++ /dev/null
-#!/bin/bash
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-# Copyright (C) 2012, Red Hat, Inc.
-# Steven Hardy <shardy@redhat.com>
-# Based on tools/openstack
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-BASE_DIR=`dirname $0`
-action=$1
-if [ -z "$action" ]
-then
- echo "openstack [start|stop|restart|install|erase|status]"
- echo
- echo "This tool is designed to control OpenStack on an Ubuntu 12.XX system"
- echo "It installs a minimal openstack environment in order to demonstrate heat functionality"
- echo "Note swift and the horizon dashboard are not installed"
- echo
- echo "start - Starts OpenStack"
- echo "stop - Stops OpenStack"
- echo "restart - Restarts OpenStack"
- echo "install - Installs a fresh OpenStack system with Keystone from repositories"
- echo "erase - Permanently destroys an existing installation of OpenStack"
- echo "status - Shows the status of the OpenStack services"
-fi
-
-BASE_SERVICES="tgt mysql"
-OS_SERVICES="nova-compute nova-cert nova-vncproxy nova-objectstore keystone nova-api glance-registry nova-network nova-scheduler nova-volume glance-api"
-
-OS_REQ_PACKAGES="bridge-utils rabbitmq-server memcached python-memcache kvm libvirt-bin tgt open-iscsi open-iscsi-utils mysql-server python-mysqldb"
-OS_GLANCE_PACKAGES="glance glance-api glance-client glance-common glance-registry python-glance"
-OS_KEYSTONE_PACKAGES="keystone python-keystone python-keystoneclient"
-OS_NOVA_PACKAGES="nova-api nova-cert nova-common nova-compute nova-compute-kvm nova-doc nova-network nova-objectstore nova-scheduler nova-vncproxy nova-volume python-nova python-novaclient"
-
-INSTALL_CMD="sudo aptitude -y install"
-UNINSTALL_CMD="sudo aptitude -y purge"
-
-os_status () {
- for service in ${BASE_SERVICES}
- do
- output=$(initctl status ${service} | awk '{print $2}' | sed "s/,$//")
- echo "$service ${output}" | awk '{ printf "%-40s %s\n", $1, $2}'
- done
- for service in ${OS_SERVICES}
- do
- output=$(initctl status ${service} | awk '{print $2}' | sed "s/,$//")
- echo "$service ${output}" | awk '{ printf "%-40s %s\n", $1, $2}'
- done
-}
-
-OS_STATUS="OK"
-os_check_status () {
- # If a service is not running, we try again up to MAX_TRIES times
- MAX_TRIES=5
- for service in ${OS_SERVICES}
- do
- attempts=0
- while [[ ${attempts} -lt ${MAX_TRIES} ]]
- do
- attempts=$((${attempts} + 1))
- output=$(initctl status $service)
- running=$(echo "${output}" | cut -d"/" -f2 | sed "s/,.*$//")
- if [[ ${running} != "running" ]]
- then
- echo "Service ${service} does not seem to be running, waiting 1s ${attempts}/${MAX_TRIES}"
- OS_STATUS="FAIL ${service} : ${running}"
- sleep 1
- else
- echo "${service} ${running}" | awk '{ printf "%-40s %s\n", $1, $2}'
- OS_STATUS="OK"
- break
- fi
- done
-
- # If we get here and OS_STATUS != OK then we return as something failed
- if [[ ${OS_STATUS} != "OK" ]]
- then
- echo "Service ${service} has failed to start, check logs for errors"
- break
- fi
- done
-}
-
-DBPW=""
-db_get_pw() {
- # If we're passed a -r rootpw argument, use it,
- # Otherwise, prompt for a password
- # Note we use the service name for the password, which is the default
- # behavior of the Fedora openstack-utils openstack-db script
- # Obviously this is only suitable for a development/demo installation
- if [ $# -eq 2 ]
- then
- if [ "$1" = "-r" ]
- then
- DBPW=$2
- fi
- else
- echo "Enter mysql root password"
- read -s DBPW
- fi
-}
-
-db_setup () {
- db_get_pw $@
- for APP in nova glance keystone
- do
- MYSQL_APP_PW=${APP}
- echo "Creating '$APP' database." >&2
- mysql -u root --password=${DBPW} <<EOF
-CREATE DATABASE $APP;
-CREATE USER '$APP'@'localhost' IDENTIFIED BY '${MYSQL_APP_PW}';
-CREATE USER '$APP'@'%' IDENTIFIED BY '${MYSQL_APP_PW}';
-GRANT ALL ON $APP.* TO '$APP'@'localhost';
-GRANT ALL ON $APP.* TO '$APP'@'%';
-flush privileges;
-EOF
- done
-}
-
-db_drop () {
- db_get_pw $@
- for APP in nova glance keystone
- do
- MYSQL_APP_PW=${APP}
- echo "Dropping '$APP' database." >&2
- mysql -u root --password=${DBPW} <<EOF
-DROP USER '$APP'@'localhost';
-DROP USER '$APP'@'%';
-DROP DATABASE $APP;
-flush privileges;
-EOF
- done
-}
-
-os_start () {
- sudo rabbitmqctl start
- sleep 1
- if ! sudo vgs | grep -q nova-volumes
- then
- sudo vgcreate nova-volumes $(sudo losetup --show -f /var/lib/nova/nova-volumes.img)
- fi
-
- for service in ${BASE_SERVICES}
- do
- initctl start ${service}
- done
- for service in ${OS_SERVICES}
- do
- initctl start ${service}
- done
-}
-
-os_stop () {
- action=stop
- for service in ${OS_SERVICES}
- do
- echo "initctl stop ${service}"
- initctl stop ${service}
- done
-}
-
-os_restart () {
- action=restart
- os_stop
- sleep 1
- os_start
-}
-
-os_erase () {
- for net in `sudo nova-manage network list |
- awk '/^[[:digit:]]/ { print $9 }'`
- do
- sudo nova-manage network delete --uuid $net
- done
-
- os_stop
- sleep 1
-
- # Kill dnsmasq processes
- if [ -n "$(find /var/lib/nova/networks -name '*.pid' 2>/dev/null)" ]; then
- sudo kill `cat /var/lib/nova/networks/*.pid`
- fi
-
- sudo rm -f /var/lib/libvirt/qemu/save/instance-000*
- sudo rm -f /var/lib/libvirt/qemu/instance-000*
- echo "$UNINSTALL_CMD $OS_GLANCE_PACKAGES" >&2
- $UNINSTALL_CMD $OS_GLANCE_PACKAGES
- echo "$UNINSTALL_CMD $OS_NOVA_PACKAGES" >&2
- $UNINSTALL_CMD $OS_NOVA_PACKAGES
- echo "$UNINSTALL_CMD $OS_KEYSTONE_PACKAGES" >&2
- $UNINSTALL_CMD $OS_KEYSTONE_PACKAGES
-
- db_drop $@
-
- sudo vgchange -an nova-volumes
- # detach whichever loop device backs the image (not necessarily loop0)
- sudo losetup -j /var/lib/nova/nova-volumes.img | cut -d: -f1 | xargs -r sudo losetup -d
- sudo rm -f /var/lib/nova/nova-volumes.img
- sudo rm -rf /etc/{glance,nova,swift,keystone,openstack-dashboard} /var/lib/{glance,nova,swift,keystone} /var/log/{glance,nova,swift,keystone} /var/run/{glance,nova,swift,keystone}
- rm -f $HOME/.openstack/.keystonerc
-}
-
-get_id () {
- echo `$@ | awk '/ id / { print $4 }'`
-}
-
-os_install () {
-
- echo "Installing prerequisite packages" >&2
- ${INSTALL_CMD} ${OS_REQ_PACKAGES}
- echo "Installing Glance packages" >&2
- ${INSTALL_CMD} ${OS_GLANCE_PACKAGES}
- echo "Installing Keystone packages" >&2
- ${INSTALL_CMD} ${OS_KEYSTONE_PACKAGES}
- echo "Installing Nova packages" >&2
- ${INSTALL_CMD} ${OS_NOVA_PACKAGES}
-
- echo "Setting up /var/lib/nova/nova-volumes.img" >&2
- sudo dd if=/dev/zero of=/var/lib/nova/nova-volumes.img bs=1M seek=20k count=0
- # Configure the databases
- db_setup $*
-
- # Create a keystone RC file
- mkdir -p $HOME/.openstack
- TOKEN=$(openssl rand -hex 10)
- cat > $HOME/.openstack/keystonerc <<EOF
-export ADMIN_TOKEN=$TOKEN
-export OS_USERNAME=admin
-export OS_PASSWORD=verybadpass
-export OS_TENANT_NAME=admin
-export OS_AUTH_URL=http://127.0.0.1:5000/v2.0/
-export OS_AUTH_STRATEGY=keystone
-export ADMIN_PASSWORD=$OS_PASSWORD
-EOF
- # Configure keystone
- source $HOME/.openstack/keystonerc
- sudo cp /etc/keystone/keystone.conf /etc/keystone/keystone.conf.bak
- sudo -E sed -i "s/^admin_token = ADMIN/admin_token = $ADMIN_TOKEN/" /etc/keystone/keystone.conf
- sudo sed -i "s/^connection =.*$/connection = mysql:\/\/keystone:keystone@127.0.0.1\/keystone/" /etc/keystone/keystone.conf
- sudo sed -i "/^\[catalog\]/a template_file = /etc/keystone/default_catalog.templates" /etc/keystone/keystone.conf
- keystone-manage db_sync
- sudo initctl restart keystone
- sleep 5
-
- # Create keystone sample data (users/roles/tenants)
- ENABLE_ENDPOINTS=1 /usr/share/keystone/sample_data.sh
-
- # Configure nova to use keystone
- SERVICE_PASSWORD=$OS_PASSWORD
- sudo sed -i "s/^admin_tenant_name = %SERVICE_TENANT_NAME%/admin_tenant_name = service/" /etc/nova/api-paste.ini
- sudo sed -i "s/admin_user = %SERVICE_USER%/admin_user = nova/" /etc/nova/api-paste.ini
- sudo sed -i "s/admin_password = %SERVICE_PASSWORD%/admin_password = $SERVICE_PASSWORD/" /etc/nova/api-paste.ini
- # "sudo echo ... >> file" redirects as the invoking user; use tee -a instead
- echo "--auth_strategy = keystone" | sudo tee -a /etc/nova/nova.conf > /dev/null
- echo "--sql_connection=mysql://nova:nova@127.0.0.1/nova" | sudo tee -a /etc/nova/nova.conf > /dev/null
- nova-manage db sync
-
- # Configure glance to use keystone
- sudo sed -i "s/^admin_tenant_name = %SERVICE_TENANT_NAME%/admin_tenant_name = service/" /etc/glance/glance-api-paste.ini
- sudo sed -i "s/admin_user = %SERVICE_USER%/admin_user = glance/" /etc/glance/glance-api-paste.ini
- sudo sed -i "s/admin_password = %SERVICE_PASSWORD%/admin_password = $SERVICE_PASSWORD/" /etc/glance/glance-api-paste.ini
- sudo sed -i "s/^admin_tenant_name = %SERVICE_TENANT_NAME%/admin_tenant_name = service/" /etc/glance/glance-registry-paste.ini
- sudo sed -i "s/admin_user = %SERVICE_USER%/admin_user = glance/" /etc/glance/glance-registry-paste.ini
- sudo sed -i "s/admin_password = %SERVICE_PASSWORD%/admin_password = $SERVICE_PASSWORD/" /etc/glance/glance-registry-paste.ini
- sudo sed -i "s/^sql_connection =.*$/sql_connection = mysql:\/\/glance:glance@127.0.0.1\/glance/" /etc/glance/glance-registry.conf
- sudo tee -a /etc/glance/glance-registry.conf > /dev/null << EOF
-[paste_deploy]
-flavor = keystone
-EOF
- sudo tee -a /etc/glance/glance-api.conf > /dev/null << EOF
-[paste_deploy]
-flavor = keystone
-EOF
- glance-manage version_control 0
- glance-manage db_sync
-
- # Restart all services and clear logfiles
- os_stop
- sleep 1
- sudo rm -rf /var/log/{glance,nova,swift,keystone}/*
- os_start
- sleep 1
- echo "Installation Complete."
-
- echo "Checking all expected services are running"
- os_check_status
- if [[ ${OS_STATUS} != "OK" ]]
- then
- echo "Service failed to start : ${OS_STATUS}, cannot continue"
- exit 1
- fi
-
- echo "Testing nova and glance. If any errors are displayed, the install failed..."
- # Create additional flavors required by heat templates
- ${BASE_DIR}/nova_create_flavors.sh
- nova flavor-list
- glance index
- echo
- echo "note: This tool does not create a network. Creating a network"
- echo "depends on your environment. An example network create operation:"
- echo
- echo " sudo nova-manage network create --label=demonet --fixed_range_v4=10.0.0.0/24 --bridge=demonetbr0 --bridge_interface=eth0"
- echo
- echo "The network range here should *not* be one used on your existing physical network."
- echo "It should be a range dedicated for the network that OpenStack will configure."
- echo "If 10.0.0.0/24 clashes with your local network, pick another range."
- echo "--bridge_interface argument should be a physical device to be bridged, eth0 in this example"
-}
-
-case $action in
- "")
- ;;
- start)
- os_start
- ;;
- stop)
- os_stop
- ;;
- restart)
- os_restart
- ;;
- erase)
- shift
- os_erase $*
- ;;
- install)
- shift
- os_install $*
- ;;
- status)
- os_status
- ;;
- *)
- echo "The action \"$action\" is not supported."
- ;;
-esac
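The `os_check_status` function above polls each service up to `MAX_TRIES` times before declaring failure. A standalone sketch of that retry pattern, with a hypothetical `is_running` stand-in for the real `initctl status` query (the flag-file check is purely illustrative):

```shell
#!/bin/bash
# Sketch of the os_check_status retry loop; note the numeric -lt comparison
# (inside [[ ]], a bare "<" would compare strings lexicographically).
is_running() { [ -f "$flag" ]; }   # hypothetical: the real script queries initctl
flag=$(mktemp)                     # flag present == "service is running"
MAX_TRIES=5
attempts=0
STATUS="FAIL"
while [ "$attempts" -lt "$MAX_TRIES" ]; do
    attempts=$((attempts + 1))
    if is_running; then
        STATUS="OK"
        break
    fi
    sleep 1                        # back off before re-checking
done
rm -f "$flag"
echo "$STATUS after $attempts attempt(s)"
```

Because the loop breaks on the first success, a healthy service costs one check; only a service that never comes up pays the full `MAX_TRIES * sleep` delay.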
+++ /dev/null
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Copyright 2013 Red Hat, Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import os
-import sys
-
-import install_venv_common as install_venv # noqa
-
-
-def first_file(file_list):
- for candidate in file_list:
- if os.path.exists(candidate):
- return candidate
-
-
-def main(argv):
- root = os.path.dirname(os.path.dirname(os.path.realpath(__file__)))
-
- venv = os.environ['VIRTUAL_ENV']
-
- pip_requires = first_file([
- os.path.join(root, 'requirements.txt'),
- os.path.join(root, 'tools', 'pip-requires'),
- ])
- test_requires = first_file([
- os.path.join(root, 'test-requirements.txt'),
- os.path.join(root, 'tools', 'test-requires'),
- ])
- py_version = "python%s.%s" % (sys.version_info[0], sys.version_info[1])
- project = 'heat'
- install = install_venv.InstallVenv(root, venv, pip_requires, test_requires,
- py_version, project)
- #NOTE(dprince): For Tox we only run post_process (which patches files, etc)
- install.post_process()
-
-if __name__ == '__main__':
- main(sys.argv)
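The `first_file` helper above returns the first existing path from a candidate list, so the tool works with either `requirements.txt` or the older `tools/pip-requires`. An illustrative shell analogue of the same idea (paths here are made up for the demonstration):

```shell
#!/bin/bash
# Shell analogue of the Python first_file helper: print the first candidate
# path that exists and succeed; fail if none exist.
first_file() {
    for candidate in "$@"; do
        if [ -e "$candidate" ]; then
            echo "$candidate"
            return 0
        fi
    done
    return 1
}

probe=$(mktemp)                                  # a file guaranteed to exist
found=$(first_file /no/such/requirements.txt "$probe")
rm -f "$probe"
echo "picked: $found"
```

The earlier candidates take priority, matching the Python version's preference for `requirements.txt` over the legacy location.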
+++ /dev/null
-#!/bin/sed -nrf
-
-# Skip over the ".." line that starts a comment block.
-/^\.{2}[[:space:]]*$/ n
-
-# Loop through the block until a non-indented line is found.
-# Append indented lines to the hold space.
-: indent
-/^ {4}/ {
- s/^ {4}//
- H
- $ b endblock
- n
- b indent
-}
-
-# Loop through to the end of the block.
-# Clear the hold space if unindented lines are present.
-: nonindent
-/^[[:space:]]*$/! {
- x
- s/.*//
- x
- $ d
- n
- b nonindent
-}
-
-# Print the contents of the hold space (if any) and clear it.
-: endblock
-s/.*//
-x
-s/^\n//
-/./ {
-p
-a \
-
-}
+++ /dev/null
-#!/usr/bin/python
-
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-"""Generate a list of all possible state transitions.
-
-Useful as a starting point for documentation.
-"""
-
-from heat.engine import resource
-
-actions = resource.Resource.ACTIONS
-stack_statuses = resource.Resource.STATUSES
-engine_statuses = ("Alive", "Dead")
-
-print("""\
-| Orig action | Stack status | Engine status | New action | Behavior |
-|-------------+--------------+---------------+------------+------------------|\
-""")
-
-for orig_action in actions:
- for stack_status in stack_statuses:
- for new_action in actions:
- if stack_status == resource.Resource.IN_PROGRESS:
- for engine_status in engine_statuses:
- print("| %11s | %12s | %13s | %10s | |" \
- % (orig_action, stack_status, engine_status,
- new_action))
- else:
- print("| %11s | %12s | %13s | %10s | |" \
- % (orig_action, stack_status, "NA", new_action))
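The generator above enumerates the cross product of actions, statuses, and new actions to seed a documentation table. A shell sketch of the same enumeration with illustrative action/status names (the real lists come from `heat.engine.resource`):

```shell
#!/bin/bash
# Sketch of the cross-product table generation: one row per
# (orig action, status, new action) combination.
actions="CREATE DELETE"
statuses="IN_PROGRESS COMPLETE"
rows=0
for orig in $actions; do
    for status in $statuses; do
        for new in $actions; do
            printf "| %11s | %12s | %10s |\n" "$orig" "$status" "$new"
            rows=$((rows + 1))
        done
    done
done
echo "$rows rows"
```

With 2 actions and 2 statuses this yields 2 x 2 x 2 = 8 rows; the real script additionally expands `IN_PROGRESS` rows once per engine status.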
+++ /dev/null
-# this file lists dependencies required for the testing of heat (RPM version)
-python-setuptools
-
-python-coverage
-python-mox
-python-testrepository
-python-nose
-#openstack.nose_plugin>=0.7
-python-pep8
-python-sphinx
-python-paramiko
+++ /dev/null
-#!/bin/bash
-
-TOOLS_DIR=`dirname $0`
-if [ "$1" = "-y" ] || [ "$1" = "--yes" ]; then
- SKIP_ASK=y
- shift
-fi
-
-user_wants() {
- if [ "$SKIP_ASK" = "y" ]; then return 0; fi
-
- while true; do
- read -n 1 -p "$1 " cont
- echo 1>&2
- case $cont in
- y|Y)
- return 0
- ;;
- n|N)
- return 1
- ;;
- *)
- ;;
- esac
- done
-}
-
-HEAT_PIDS=`pgrep '^heat-(api|api-cfn|engine|metadata)$'`
-if [ -n "$HEAT_PIDS" ]; then
- echo 'The following Heat processes are running:' 1>&2
- ps $HEAT_PIDS 1>&2
- if user_wants 'Kill them?'; then
- sudo kill $HEAT_PIDS
- fi
- echo 1>&2
-fi
-
-if user_wants 'Drop Heat database tables?'; then
- $TOOLS_DIR/heat-db-drop $*
- echo 1>&2
-fi
-
-if user_wants 'Erase OpenStack installation?'; then
- $TOOLS_DIR/openstack erase $*
- echo 1>&2
-fi
-
-HEAT_EGGS=`python -c 'import sys; print "\n".join(sys.path)' | grep '/heat-[^/]*\.egg$'`
-if [ -n "$HEAT_EGGS" ]; then
- echo 'The following Heat installations were found:' 1>&2
- echo "$HEAT_EGGS" 1>&2
- if user_wants 'Delete them?'; then
- sudo rm -rf $HEAT_EGGS
- fi
- echo 1>&2
-fi
-
-if user_wants 'Delete Heat binaries?'; then
- BIN_PATH=/usr/bin
- sudo rm -f $BIN_PATH/heat-db-setup
- sudo rm -f $BIN_PATH/heat-metadata
- sudo rm -f $BIN_PATH/heat-api
- sudo rm -f $BIN_PATH/heat-api-cfn
- sudo rm -f $BIN_PATH/heat-engine
-
- echo 1>&2
-fi
-
-if user_wants 'Delete Heat configuration?'; then
- sudo rm -rf /etc/heat
- echo 1>&2
-fi
-
-if user_wants 'Delete Heat logs?'; then
- sudo rm -rf /var/log/heat
- echo 1>&2
-fi
-
+++ /dev/null
-#!/bin/bash
-TOOLS=`dirname $0`
-VENV=$TOOLS/../.venv
-source $VENV/bin/activate && "$@"
+++ /dev/null
-[tox]
-envlist = py26,py27,pep8
-
-[testenv]
-setenv = VIRTUAL_ENV={envdir}
-deps = -r{toxinidir}/requirements.txt
- -r{toxinidir}/test-requirements.txt
-commands =
- python tools/patch_tox_venv.py
- python setup.py testr --slowest --testr-args='{posargs}'
-
-[testenv:pep8]
-commands =
- flake8 heat bin/heat-api bin/heat-api-cfn bin/heat-api-cloudwatch bin/heat-engine bin/heat-manage
- {toxinidir}/tools/conf/check_uptodate.sh
-
-[testenv:venv]
-commands = {posargs}
-
-[testenv:cover]
-setenv = VIRTUAL_ENV={envdir}
-commands =
- python tools/patch_tox_venv.py
- python setup.py testr --coverage
-
-[testenv:docs]
-deps = -r{toxinidir}/requirements.txt
- -r{toxinidir}/test-requirements.txt
- sphinxcontrib-httpdomain
-commands = python setup.py build_sphinx
-
-[flake8]
-# F841 local variable 'json_template' is assigned to but never used
-# H201 no 'except:' at least use 'except Exception:'
-# H302 import only modules.'bla..' does not import a module
-# H306 imports not in alphabetical order
-# H404 multi line docstring should start with a summary
-ignore = F841,H201,H302,H306,H404
-show-source = true
-builtins = _
-exclude=.venv,.git,.tox,dist,*openstack/common*,*lib/python*,*egg,tools,build
+++ /dev/null
-#!/bin/bash
-
-if [ $EUID -ne 0 ]; then
- echo "This script must be run as root."
- exit 1
-fi
-
-type -P pip-python &> /dev/null && have_pip_python=1 || have_pip_python=0
-if [ $have_pip_python -eq 1 ]; then
- pip-python uninstall -y heat
- exit
-fi
-
-type -P pip &> /dev/null && have_pip=1 || have_pip=0
-if [ $have_pip -eq 1 ]; then
- pip uninstall -y heat
- exit
-fi
-
-echo "pip not found. Install the pip package (probably python-pip) or run
-'easy_install pip', then rerun $0"
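The uninstall script above probes for `pip-python` and then `pip` with `type -P`, which succeeds only when the name resolves to an executable on `PATH`. A minimal sketch of that detection idiom (`sh` stands in for a tool certain to exist, `no-such-tool-xyz` for one that does not):

```shell
#!/bin/bash
# Sketch of the type -P tool-detection idiom: set a have_* flag per tool.
type -P sh &> /dev/null && have_sh=1 || have_sh=0
type -P no-such-tool-xyz &> /dev/null && have_missing=1 || have_missing=0
echo "sh=$have_sh missing=$have_missing"
```

Unlike `which`, `type -P` is a bash builtin, so the probe costs no extra process and behaves consistently across distributions.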