+++ /dev/null
-..
- Copyright 2012 OpenStack, LLC
- Copyright 2012 Citrix Systems, Inc.
- All Rights Reserved.
-
- Licensed under the Apache License, Version 2.0 (the "License"); you may
- not use this file except in compliance with the License. You may obtain
- a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
- WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
- License for the specific language governing permissions and limitations
- under the License.
-
-Host Aggregates
-===============
-
-This extension introduces the concept of aggregate into Cinder. Host aggregates are different from zones and availability zones: while the former allows the partition of Cinder deployments into logical groups for load balancing and instance distribution, the latter are for providing some form of physical isolation and redundancy from other availability zones (e.g. by using separate power supply and network gears). Availability zones do not necessarily mean geographic distribution whereas zones usually do. Host aggregates can be regarded as a mechanism to further partitioning an availability zone, i.e. into multiple groups of hosts that share common resources like storage and network. This enables a finer level of granularity in which to structure an entire OpenStack deployment. Aggregates allows higher availability of a single guest instance within an availability zone, it enables advanced VM placement strategies, and more importantly it enables hosts' zero-downtime upgrades (for example, via VM live migration across members of the aggregate, thus causing no disruption to guest instances).
-
-You can use this extension when you have multiple Compute nodes installed (only XenServer/XCP via xenapi driver is currently supported), and you want to leverage the capabilities of the underlying hypervisor resource pools. For example, you want to enable VM live migration (i.e. VM migration within the pool) or enable host maintenance with zero-downtime for guest instances. Please, note that VM migration across pools (i.e. storage migration) is not yet supported in XenServer/XCP, but will be added when available. Bear in mind that the two migration techniques are not mutually exclusive and can be used in combination for a higher level of flexibility in your cloud management.
-
-Design
-=======
-
-The OSAPI Admin API will be extended to support the following operations:
-
- * Aggregates
-
- * list aggregates: returns a list of all the host-aggregates (optionally filtered by availability zone)
- * create aggregate: creates an aggregate, takes a friendly name, etc. returns an id
- * show aggregate: shows the details of an aggregate (id, name, availability_zone, hosts and metadata)
- * update aggregate: updates the name and availability zone of an aggregate
- * set metadata: sets the metadata on an aggregate to the values supplied
- * delete aggregate: deletes an aggregate, it fails if the aggregate is not empty
- * add host: adds a host to the aggregate
- * remove host: removes a host from the aggregate, it fails if the host is not disabled or
-
- * Hosts
-
- * start host maintenance (or evacuate-host): disallow a host to serve API requests and migrate instances to other hosts of the aggregate
- * stop host maintenance: (or rebalance-host): put the host back into operational mode, migrating instances back onto that host
-
-Using the Cinder CLI
-==================
-
-Using the cinder command you can create, delete and manage aggregates. The following section outlines the list of available commands.
-
-Usage
------
-
-::
-
- * aggregate-list Print a list of all aggregates.
- * aggregate-create <name> <availability_zone> Create a new aggregate with the specified details.
- * aggregate-delete <id> Delete the aggregate by its id.
- * aggregate-details <id> Show details of the specified aggregate.
- * aggregate-add-host <id> <host> Add the host to the specified aggregate.
- * aggregate-remove-host <id> <host> Remove the specified host from the specfied aggregate.
- * aggregate-set-metadata <id> <key=value> [<key=value> ...] Update the metadata associated with the aggregate.
- * aggregate-update <id> <name> [<availability_zone>] Update the aggregate's name and optionally availablity zone.
-
- * host-update --maintenance [enable | disable] Put/resume host into/from maintenance.
..
Copyright 2010-2011 United States Government as represented by the
- Administrator of the National Aeronautics and Space Administration.
+ Administrator of the National Aeronautics and Space Administration.
All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may
:undoc-members:
:show-inheritance:
-The :mod:`backup_schedules` Module
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-.. automodule:: cinder.api.openstack.backup_schedules
- :noindex:
- :members:
- :undoc-members:
- :show-inheritance:
-
-The :mod:`faults` Module
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-.. automodule:: cinder.api.openstack.faults
- :noindex:
- :members:
- :undoc-members:
- :show-inheritance:
-
-The :mod:`flavors` Module
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-.. automodule:: cinder.api.openstack.flavors
- :noindex:
- :members:
- :undoc-members:
- :show-inheritance:
-
-The :mod:`images` Module
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-.. automodule:: cinder.api.openstack.images
- :noindex:
- :members:
- :undoc-members:
- :show-inheritance:
-
-The :mod:`servers` Module
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-.. automodule:: cinder.api.openstack.servers
- :noindex:
- :members:
- :undoc-members:
- :show-inheritance:
-
-The :mod:`sharedipgroups` Module
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-.. automodule:: cinder.api.openstack.sharedipgroups
- :noindex:
- :members:
- :undoc-members:
- :show-inheritance:
-
EC2 API
-------
:undoc-members:
:show-inheritance:
-The :mod:`apirequest` Module
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-.. automodule:: cinder.api.ec2.apirequest
- :noindex:
- :members:
- :undoc-members:
- :show-inheritance:
-
The :mod:`cloud` Module
~~~~~~~~~~~~~~~~~~~~~~~
:undoc-members:
:show-inheritance:
-The :mod:`images` Module
-~~~~~~~~~~~~~~~~~~~~~~~~
-
-.. automodule:: cinder.api.ec2.images
- :noindex:
- :members:
- :undoc-members:
- :show-inheritance:
-
The :mod:`metadatarequesthandler` Module
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
:members:
:undoc-members:
:show-inheritance:
-
-The :mod:`test_flavors` Module
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-.. automodule:: cinder.tests.api.openstack.test_flavors
- :noindex:
- :members:
- :undoc-members:
- :show-inheritance:
-
-The :mod:`test_images` Module
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-.. automodule:: cinder.tests.api.openstack.test_images
- :noindex:
- :members:
- :undoc-members:
- :show-inheritance:
-
-The :mod:`test_servers` Module
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-.. automodule:: cinder.tests.api.openstack.test_servers
- :noindex:
- :members:
- :undoc-members:
- :show-inheritance:
-
-The :mod:`test_sharedipgroups` Module
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-.. automodule:: cinder.tests.api.openstack.test_sharedipgroups
- :noindex:
- :members:
- :undoc-members:
- :show-inheritance:
-
Cinder System Architecture
========================
-Cinder is built on a shared-nothing, messaging-based architecture. All of the major cinder components can be run on multiple servers. This means that most component to component communication must go via message queue. In order to avoid blocking each component while waiting for a response, we use deferred objects, with a callback that gets triggered when a response is received.
+The Cinder Block Storage Service is intended to be run on one or more nodes.
-Cinder recently moved to using a sql-based central database that is shared by all components in the system. The amount and depth of the data fits into a sql database quite well. For small deployments this seems like an optimal solution. For larger deployments, and especially if security is a concern, cinder will be moving towards multiple data stores with some kind of aggregation system.
+Cinder uses a central SQL database that is shared by all Cinder services in the system. The amount and depth of the data fit into a SQL database quite well. For small deployments this works well. For larger deployments, and especially if security is a concern, Cinder will be moving towards multiple data stores with some form of aggregation system.
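+
+As an illustrative sketch, every Cinder service is pointed at the same database through a SQLAlchemy-style connection string in its configuration file (shown here with the ``sql_connection`` option; the host, credentials and database name are placeholders, not shipped defaults)::
+
+    # cinder.conf -- all Cinder services share this database
+    sql_connection = mysql://cinder:CINDER_DBPASS@dbhost.example.com/cinder
+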
Components
----------
-Below you will find a helpful explanation of the different components.
+Below you will find a brief explanation of the different components; a short end-to-end example follows the component list.
::
- /- ( LDAP )
- [ Auth Manager ] ---
- | \- ( DB )
- |
- | [ scheduler ] - [ volume ] - ( ATAoE/iSCSI )
- | /
- [ Web Dashboard ] -> [ api ] -- < AMQP > ------ [ network ] - ( Flat/Vlan )
- | \
- < HTTP > [ scheduler ] - [ compute ] - ( libvirt/xen )
- | |
- [ objectstore ] < - retrieves images
+ /- ( LDAP )
+ [ Auth Manager ] ---
+ | \- ( DB )
+ |
+ |
+ cinderclient |
+ / \ |
+ [ Web Dashboard ]- -[ api ] -- < AMQP > -- [ scheduler ] -- [ volume ] -- ( iSCSI )
+ \ / |
+ novaclient |
+ |
+ |
+ |
+ < REST >
+
* DB: sql database for data storage. Used by all components (LINKS NOT SHOWN)
* Web Dashboard: potential external component that talks to the api
-* api: component that receives http requests, converts commands and communicates with other components via the queue or http (in the case of objectstore)
+* api: component that receives HTTP requests, converts commands, and communicates with other components via the queue or HTTP
* Auth Manager: component responsible for users/projects/and roles. Can backend to DB or LDAP. This is not a separate binary, but rather a python class that is used by most components in the system.
-* objectstore: http server that replicates s3 api and allows storage and retrieval of images
-* scheduler: decides which host gets each vm and volume
+* scheduler: decides which host gets each volume
* volume: manages dynamically attachable block devices.
-* network: manages ip forwarding, bridges, and vlans
-* compute: manages communication with hypervisor and virtual machines.
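+
+As a rough end-to-end illustration of how these components interact, the ``cinder`` command line client below talks HTTP to the api service, which casts a message over AMQP to the scheduler; the scheduler picks a volume host, and that host creates the logical volume and exports it over iSCSI. The volume name and size are arbitrary examples::
+
+    # ask the api service for a new 1 GB volume
+    cinder create --display-name demo-volume 1
+
+    # poll until the chosen volume host reports the volume as "available"
+    cinder list
+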
:show-inheritance:
-The :mod:`cinder.auth.ldapdriver` Driver
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-.. automodule:: cinder.auth.ldapdriver
- :noindex:
- :members:
- :undoc-members:
- :show-inheritance:
-
-The :mod:`cinder.auth.dbdriver` Driver
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-.. automodule:: cinder.auth.dbdriver
- :noindex:
- :members:
- :undoc-members:
- :show-inheritance:
-
-
Tests
-----
+++ /dev/null
-..
- Copyright 2010-2011 United States Government as represented by the
- Administrator of the National Aeronautics and Space Administration.
- All Rights Reserved.
-
- Licensed under the Apache License, Version 2.0 (the "License"); you may
- not use this file except in compliance with the License. You may obtain
- a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
- WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
- License for the specific language governing permissions and limitations
- under the License.
-
-
-.. _cloudpipe:
-
-Cloudpipe -- Per Project Vpns
-=============================
-
-Cloudpipe is a method for connecting end users to their project instances in vlan mode.
-
-
-Overview
---------
-
-The support code for cloudpipe implements admin commands (via cinder-manage) to automatically create a vm for a project that allows users to vpn into the private network of their project. Access to this vpn is provided through a public port on the network host for the project. This allows users to have free access to the virtual machines in their project without exposing those machines to the public internet.
-
-
-Cloudpipe Image
----------------
-
-The cloudpipe image is basically just a linux instance with openvpn installed. It needs a simple script to grab user data from the metadata server, b64 decode it into a zip file, and run the autorun.sh script from inside the zip. The autorun script will configure and run openvpn to run using the data from cinder.
-
-It is also useful to have a cron script that will periodically redownload the metadata and copy the new crl. This will keep revoked users from connecting and will disconnect any users that are connected with revoked certificates when their connection is renegotiated (every hour).
-
-
-Creating a Cloudpipe Image
---------------------------
-
-Making a cloudpipe image is relatively easy.
-
-# install openvpn on a base ubuntu image.
-# set up a server.conf.template in /etc/openvpn/
-
-.. literalinclude:: server.conf.template
- :language: bash
- :linenos:
-
-# set up.sh in /etc/openvpn/
-
-.. literalinclude:: up.sh
- :language: bash
- :linenos:
-
-# set down.sh in /etc/openvpn/
-
-.. literalinclude:: down.sh
- :language: bash
- :linenos:
-
-# download and run the payload on boot from /etc/rc.local
-
-.. literalinclude:: rc.local
- :language: bash
- :linenos:
-
-# setup /etc/network/interfaces
-
-.. literalinclude:: interfaces
- :language: bash
- :linenos:
-
-# register the image and set the image id in your flagfile::
-
- --vpn_image_id=ami-xxxxxxxx
-
-# you should set a few other flags to make vpns work properly::
-
- --use_project_ca
- --cnt_vpn_clients=5
-
-
-Cloudpipe Launch
-----------------
-
-When you use cinder-manage to launch a cloudpipe for a user, it goes through the following process:
-
-#. creates a keypair called <project_id>-vpn and saves it in the keys directory
-#. creates a security group <project_id>-vpn and opens up 1194 and icmp
-#. creates a cert and private key for the vpn instance and saves it in the CA/projects/<project_id>/ directory
-#. zips up the info and puts it b64 encoded as user data
-#. launches an m1.tiny instance with the above settings using the flag-specified vpn image
-
-
-Vpn Access
-----------
-
-In vlan networking mode, the second ip in each private network is reserved for the cloudpipe instance. This gives a consistent ip to the instance so that cinder-network can create forwarding rules for access from the outside world. The network for each project is given a specific high-numbered port on the public ip of the network host. This port is automatically forwarded to 1194 on the vpn instance.
-
-If specific high numbered ports do not work for your users, you can always allocate and associate a public ip to the instance, and then change the vpn_public_ip and vpn_public_port in the database. This will be turned into a cinder-manage command or a flag soon.
-
-
-Certificates and Revocation
----------------------------
-
-If the use_project_ca flag is set (required to for cloudpipes to work securely), then each project has its own ca. This ca is used to sign the certificate for the vpn, and is also passed to the user for bundling images. When a certificate is revoked using cinder-manage, a new Certificate Revocation List (crl) is generated. As long as cloudpipe has an updated crl, it will block revoked users from connecting to the vpn.
-
-The userdata for cloudpipe isn't currently updated when certs are revoked, so it is necessary to restart the cloudpipe instance if a user's credentials are revoked.
-
-
-Restarting Cloudpipe VPN
-------------------------
-
-You can reboot a cloudpipe vpn through the api if something goes wrong (using euca-reboot-instances for example), but if you generate a new crl, you will have to terminate it and start it again using cinder-manage vpn run. The cloudpipe instance always gets the first ip in the subnet and it can take up to 10 minutes for the ip to be recovered. If you try to start the new vpn instance too soon, the instance will fail to start because of a NoMoreAddresses error. If you can't wait 10 minutes, you can manually update the ip with something like the following (use the right ip for the project)::
-
- euca-terminate-instances <instance_id>
- mysql cinder -e "update fixed_ips set allocated=0, leased=0, instance_id=NULL where fixed_ip='10.0.0.2'"
-
-You also will need to terminate the dnsmasq running for the user (make sure you use the right pid file)::
-
- sudo kill `cat /var/lib/cinder/br100.pid`
-
-Now you should be able to re-run the vpn::
-
- cinder-manage vpn run <project_id>
-
-
-Logging into Cloudpipe VPN
---------------------------
-
-The keypair that was used to launch the cloudpipe instance should be in the keys/<project_id> folder. You can use this key to log into the cloudpipe instance for debugging purposes.
-
-
-The :mod:`cinder.cloudpipe.pipelib` Module
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-.. automodule:: cinder.cloudpipe.pipelib
- :noindex:
- :members:
- :undoc-members:
- :show-inheritance:
-
-
-The :mod:`cinder.api.cloudpipe` Module
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-.. automodule:: cinder.api.cloudpipe
- :noindex:
- :members:
- :undoc-members:
- :show-inheritance:
-
-
-The :mod:`cinder.crypto` Module
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-.. automodule:: cinder.crypto
- :noindex:
- :members:
- :undoc-members:
- :show-inheritance:
-
..
Copyright 2010-2011 United States Government as represented by the
- Administrator of the National Aeronautics and Space Administration.
+ Administrator of the National Aeronautics and Space Administration.
All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may
+++ /dev/null
-..
- Copyright 2010-2011 United States Government as represented by the
- Administrator of the National Aeronautics and Space Administration.
- All Rights Reserved.
-
- Licensed under the Apache License, Version 2.0 (the "License"); you may
- not use this file except in compliance with the License. You may obtain
- a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
- WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
- License for the specific language governing permissions and limitations
- under the License.
-
-Glance Integration - The Future of File Storage
-===============================================
-
-The :mod:`cinder.image.service` Module
-------------------------------------
-
-.. automodule:: cinder.image.service
- :noindex:
- :members:
- :undoc-members:
- :show-inheritance:
architecture
threading
il8n
- filter_scheduler
- multinic
rpc
Other Resources
database
volume
xensmvolume
- compute
- network
auth
api
scheduler
+++ /dev/null
-# This file describes the network interfaces available on your system
-# and how to activate them. For more information, see interfaces(5).
-
-# The loopback network interface
-auto lo
-iface lo inet loopback
-
-# The primary network interface
-auto eth0
-iface eth0 inet manual
- up ifconfig $IFACE 0.0.0.0 up
- down ifconfig $IFACE down
-
-auto br0
-iface br0 inet dhcp
- bridge_ports eth0
-
+++ /dev/null
-MultiNic
-========
-
-What is it
-----------
-
-Multinic allows an instance to have more than one vif connected to it. Each vif is representative of a separate network with its own IP block.
-
-Managers
---------
-
-Each of the network managers are designed to run independently of the compute manager. They expose a common API for the compute manager to call to determine and configure the network(s) for an instance. Direct calls to either the network api or especially the DB should be avoided by the virt layers.
-
-On startup a manager looks in the networks table for networks it is assigned and configures itself to support that network. Using the periodic task, they will claim new networks that have no host set. Only one network per network-host will be claimed at a time. This allows for psuedo-loadbalancing if there are multiple network-hosts running.
-
-Flat Manager
-------------
-
- .. image:: /images/multinic_flat.png
-
-The Flat manager is most similar to a traditional switched network environment. It assumes that the IP routing, DNS, DHCP (possibly) and bridge creation is handled by something else. That is it makes no attempt to configure any of this. It does keep track of a range of IPs for the instances that are connected to the network to be allocated.
-
-Each instance will get a fixed IP from each network's pool. The guest operating system may be configured to gather this information through an agent or by the hypervisor injecting the files, or it may ignore it completely and come up with only a layer 2 connection.
-
-Flat manager requires at least one cinder-network process running that will listen to the API queue and respond to queries. It does not need to sit on any of the networks but it does keep track of the IPs it hands out to instances.
-
-FlatDHCP Manager
-----------------
-
- .. image:: /images/multinic_dhcp.png
-
-FlatDHCP manager builds on the the Flat manager adding dnsmask (DNS and DHCP) and radvd (Router Advertisement) servers on the bridge for that network. The services run on the host that is assigned to that network. The FlatDHCP manager will create its bridge as specified when the network was created on the network-host when the network host starts up or when a new network gets allocated to that host. Compute nodes will also create the bridges as necessary and connect instance VIFs to them.
-
-VLAN Manager
-------------
-
- .. image:: /images/multinic_vlan.png
-
-The VLAN manager sets up forwarding to/from a cloudpipe instance in addition to providing dnsmask (DNS and DHCP) and radvd (Router Advertisement) services for each network. The manager will create its bridge as specified when the network was created on the network-host when the network host starts up or when a new network gets allocated to that host. Compute nodes will also create the bridges as necessary and connect instance VIFs to them.
+++ /dev/null
-..
- Copyright 2010-2011 United States Government as represented by the
- Administrator of the National Aeronautics and Space Administration.
- All Rights Reserved.
-
- Licensed under the Apache License, Version 2.0 (the "License"); you may
- not use this file except in compliance with the License. You may obtain
- a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
- WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
- License for the specific language governing permissions and limitations
- under the License.
-
-Networking
-==========
-
-.. todo::
-
- * document hardware specific commands (maybe in admin guide?) (todd)
- * document a map between flags and managers/backends (todd)
-
-
-The :mod:`cinder.network.manager` Module
---------------------------------------
-
-.. automodule:: cinder.network.manager
- :noindex:
- :members:
- :undoc-members:
- :show-inheritance:
-
-The :mod:`cinder.network.linux_net` Driver
-----------------------------------------
-
-.. automodule:: cinder.network.linux_net
- :noindex:
- :members:
- :undoc-members:
- :show-inheritance:
-
-Tests
------
-
-The :mod:`network_unittest` Module
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-.. automodule:: cinder.tests.network_unittest
- :noindex:
- :members:
- :undoc-members:
- :show-inheritance:
-
-
-Legacy docs
------------
-
-The cinder networking components manage private networks, public IP addressing, VPN connectivity, and firewall rules.
-
-Components
-----------
-There are several key components:
-
-* NetworkController (Manages address and vlan allocation)
-* RoutingNode (NATs public IPs to private IPs, and enforces firewall rules)
-* AddressingNode (runs DHCP services for private networks)
-* BridgingNode (a subclass of the basic cinder ComputeNode)
-* TunnelingNode (provides VPN connectivity)
-
-Component Diagram
------------------
-
-Overview::
-
- (PUBLIC INTERNET)
- | \
- / \ / \
- [RoutingNode] ... [RN] [TunnelingNode] ... [TN]
- | \ / | |
- | < AMQP > | |
- [AddressingNode]-- (VLAN) ... | (VLAN)... (VLAN) --- [AddressingNode]
- \ | \ /
- / \ / \ / \ / \
- [BridgingNode] ... [BridgingNode]
-
-
- [NetworkController] ... [NetworkController]
- \ /
- < AMQP >
- |
- / \
- [CloudController]...[CloudController]
-
-While this diagram may not make this entirely clear, nodes and controllers communicate exclusively across the message bus (AMQP, currently).
-
-State Model
------------
-Network State consists of the following facts:
-
-* VLAN assignment (to a project)
-* Private Subnet assignment (to a security group) in a VLAN
-* Private IP assignments (to running instances)
-* Public IP allocations (to a project)
-* Public IP associations (to a private IP / running instance)
-
-While copies of this state exist in many places (expressed in IPTables rule chains, DHCP hosts files, etc), the controllers rely only on the distributed "fact engine" for state, queried over RPC (currently AMQP). The NetworkController inserts most records into this datastore (allocating addresses, etc) - however, individual nodes update state e.g. when running instances crash.
-
-The Public Traffic Path
------------------------
-
-Public Traffic::
-
- (PUBLIC INTERNET)
- |
- <NAT> <-- [RoutingNode]
- |
- [AddressingNode] --> |
- ( VLAN )
- | <-- [BridgingNode]
- |
- <RUNNING INSTANCE>
-
-The RoutingNode is currently implemented using IPTables rules, which implement both NATing of public IP addresses, and the appropriate firewall chains. We are also looking at using Netomata / Clusto to manage NATting within a switch or router, and/or to manage firewall rules within a hardware firewall appliance.
-
-Similarly, the AddressingNode currently manages running DNSMasq instances for DHCP services. However, we could run an internal DHCP server (using Scapy ala Clusto), or even switch to static addressing by inserting the private address into the disk image the same way we insert the SSH keys. (See compute for more details).
..
Copyright 2010-2011 United States Government as represented by the
- Administrator of the National Aeronautics and Space Administration.
+ Administrator of the National Aeronautics and Space Administration.
All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may
:show-inheritance:
-The :mod:`cinder.scheduler.chance` Driver
----------------------------------------
-
-.. automodule:: cinder.scheduler.chance
- :noindex:
- :members:
- :undoc-members:
- :show-inheritance:
-
-
The :mod:`cinder.scheduler.simple` Driver
---------------------------------------
..
Copyright 2010-2011 United States Government as represented by the
- Administrator of the National Aeronautics and Space Administration.
+ Administrator of the National Aeronautics and Space Administration.
All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may
Old Docs
--------
-Cinder uses ata-over-ethernet (AoE) to export storage volumes from multiple storage nodes. These AoE exports are attached (using libvirt) directly to running instances.
+Cinder uses iSCSI to export storage volumes from multiple storage nodes. These iSCSI exports are attached (using libvirt) directly to running instances.
Cinder volumes are exported over the primary system VLAN (usually VLAN 1), and not over individual VLANs.
-AoE exports are numbered according to a "shelf and blade" syntax. In order to avoid collisions, we currently perform an AoE-discover of existing exports, and then grab the next unused number. (This obviously has race condition problems, and should be replaced by allocating a shelf-id to each storage node.)
-
-The underlying volumes are LVM logical volumes, created on demand within a single large volume group.
+By default, the underlying volumes are LVM logical volumes, created on demand within a single large volume group.
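+
+Roughly speaking, creating and exporting a volume amounts to carving a logical volume out of the shared volume group and publishing it as an iSCSI target. The shell sketch below is only an approximation of what the volume driver does; the volume group name ``cinder-volumes``, the volume name and the IQN are hypothetical::
+
+    # carve a 1 GB logical volume out of the shared volume group
+    lvcreate --size 1G --name volume-00000001 cinder-volumes
+
+    # publish it as an iSCSI target backed by that logical volume
+    tgtadm --lld iscsi --op new --mode target --tid 1 \
+        --targetname iqn.2010-10.org.openstack:volume-00000001
+    tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 \
+        --backing-store /dev/cinder-volumes/volume-00000001
+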
tree. Additional draft and project documentation on Cinder and other components of OpenStack can
be found on the `OpenStack wiki`_. Cloud administrators, refer to `docs.openstack.org`_.
-Cinder is an upcoming project, developer documentation is in progress...
-
.. _`OpenStack wiki`: http://wiki.openstack.org
.. _`docs.openstack.org`: http://docs.openstack.org
+
+
+Developer Docs
+==============
+
+.. toctree::
+ :maxdepth: 1
+
+ devref/index
+
+API Extensions
+==============
+
+Go to http://api.openstack.org for information about Cinder API extensions.
+
+Outstanding Documentation Tasks
+===============================
+
+.. todolist::
+
+Indices and tables
+==================
+
+* :ref:`genindex`
+* :ref:`modindex`
+* :ref:`search`