Kevin Benton [Thu, 11 Jun 2015 08:04:24 +0000 (01:04 -0700)]
Initial pecan structure
This patch is the initial work for the pecan refactor.
* Adds pecan as a requirement
* Adds a simple API server named 'neutron-dev-server' for use
when neutron server is not deployed in a web server
* Wraps the app with the openstack request ID middleware
* Adds a basic V2 controller that breaks out requests by method
* Adds functional tests to ensure request ID is set and requests
are properly sent to the V2 controller.
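As a hedged sketch of the wiring described above (names like
V2Controller and setup_app are illustrative, not the actual Neutron
code), assuming pecan and oslo.middleware:

    import pecan
    from oslo_middleware import request_id


    class V2Controller(object):
        # The real controller breaks requests out per HTTP method;
        # this stub only serves an index to show the dispatch point.
        @pecan.expose('json')
        def index(self):
            return {'resources': []}


    def setup_app():
        app = pecan.make_app(V2Controller())
        # oslo's middleware attaches an 'openstack.request_id' to each
        # request, which the functional tests can then assert on.
        return request_id.RequestId(app)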
When a network with a dhcp_enabled subnet is scheduled on a dhcp
agent, the dhcp driver creates the network:dhcp port for the subnet.
However, the port info is not updated in the dhcp agent's internal
cache. Subsequently, if the user deletes the network:dhcp port, the
port is properly deleted on the server side (i.e., in the database),
but when the port_delete_end notification reaches the dhcp agent, the
agent simply ignores it because the port entry is missing from its
cache. This patch fixes the issue by updating the dhcp agent's cache
when the dhcp driver creates the network:dhcp port for the subnets.
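A self-contained sketch of the behavior being fixed; NetworkCache
and DhcpAgentSketch here are toy stand-ins for the agent's real code:

    class NetworkCache(object):
        def __init__(self):
            self.ports = {}

        def put_port(self, port):
            self.ports[port['id']] = port

        def pop_port(self, port_id):
            return self.ports.pop(port_id, None)


    class DhcpAgentSketch(object):
        def __init__(self):
            self.cache = NetworkCache()

        def create_dhcp_port(self, port):
            # The previously missing step: record the freshly created
            # network:dhcp port in the local cache, not just server-side.
            self.cache.put_port(port)
            return port

        def port_delete_end(self, port_id):
            if self.cache.pop_port(port_id) is None:
                return  # unknown port: the notification is ignored
            # ...reconfigure dnsmasq for the affected network here


    agent = DhcpAgentSketch()
    agent.create_dhcp_port({'id': 'p1', 'device_owner': 'network:dhcp'})
    agent.port_delete_end('p1')  # now handled instead of being ignored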
Currently, Neutron has no DNS server prioritization for subnets.
Generally speaking, it is useful to keep the order of DNS
nameservers consistent. Add a new column named 'order' to the
'dnsnameservers' table and insert nameservers into the DB one by one.
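A hedged sketch of the schema change, assuming a SQLAlchemy model
along the lines of Neutron's (column sizes are illustrative):

    import sqlalchemy as sa
    from sqlalchemy.ext import declarative

    Base = declarative.declarative_base()


    class DNSNameServer(Base):
        __tablename__ = 'dnsnameservers'
        address = sa.Column(sa.String(128), primary_key=True)
        subnet_id = sa.Column(sa.String(36), primary_key=True)
        # New: preserves the order in which the user supplied the
        # nameservers.
        order = sa.Column(sa.Integer, nullable=False, server_default='0')


    def add_nameservers(session, subnet_id, addresses):
        # Insert one by one, assigning increasing 'order' values.
        for idx, addr in enumerate(addresses):
            session.add(DNSNameServer(address=addr, subnet_id=subnet_id,
                                      order=idx))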
Introduce a 'plural_name' attribute in the BaseResource
class in the neutron.quota.resource module. This makes it
possible to specify the plural name of a resource when it
is registered in the quota registry.
This will come in handy for managing reservations and also
removes the need for passing the collection name when
counting a resource.
Also, the 'resources' parameter of the _count_resource
routine has been renamed to collection_name, as the previous
name was misleading: it suggested the parameter held a
collection, whereas it is simply the name of a collection.
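A hedged sketch of the new attribute; the real BaseResource in
neutron.quota.resource may differ in detail:

    class BaseResource(object):
        def __init__(self, name, flag=None, plural_name=None):
            self.name = name
            self.flag = flag
            # Fall back to naive pluralization when no plural is given,
            # so callers no longer pass collection names around.
            self.plural_name = plural_name or '%ss' % name


    # e.g. BaseResource('policy', plural_name='policies').plural_name
    # yields 'policies' rather than the naive 'policys'.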
This patch introduces application logic support for tracking
Neutron resource usage data, thus enabling a different
way of enforcing quota limits, one which relies on records
tracking resource usage for each tenant.
When these records go "out of sync" with actual resource usage,
the quota usage table entry is marked "dirty" and is
resynchronized the next time a resource count is requested.
Changes in resource utilization are detected using
SQLAlchemy events.
This patch however does not automatically enable resource usage
tracking for any plugin. Plugins must explicitly declare for which
resources they wish to enable tracking. To this aim, this patch
provides the @tracked_resources decorator.
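Plugin opt-in might look like the following hedged sketch, assuming
the resource_registry module layout this series introduces; the
model names are illustrative:

    from neutron.db import models_v2
    from neutron.quota import resource_registry


    @resource_registry.tracked_resources(network=models_v2.Network,
                                         subnet=models_v2.Subnet,
                                         port=models_v2.Port)
    class MyPlugin(object):
        """Only the resources declared above get DB-level tracking."""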
As some operators might prefer not to use resource usage tracking,
this patch adds the track_quota_usage configuration option. If set
to False, DB-level usage tracking will not be performed, and
resources will be counted as before, i.e., by invoking a plugin
method.
Usage tracking is performed using a new resource type,
TrackedResource, which can work side by side with CountableResource,
the one Neutron has used so far. To this aim, a ResourceRegistry
class has been introduced for registering and managing resources.
Most of the resource management code previously embedded in the
QuotaEngine class is moved to ResourceRegistry as part of
this patch.
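A self-contained sketch of the dirty-marking idea; this
TrackedResource is a toy stand-in for the class the patch adds:

    from sqlalchemy import event


    class TrackedResource(object):
        def __init__(self, name, model):
            self.name = name
            self.model = model
            self._dirty_tenants = set()
            # SQLAlchemy mapper events fire whenever an instance of
            # the tracked model is inserted or deleted.
            event.listen(model, 'after_insert', self._mark_dirty)
            event.listen(model, 'after_delete', self._mark_dirty)

        def _mark_dirty(self, mapper, connection, target):
            self._dirty_tenants.add(target.tenant_id)

        def count(self, session, tenant_id):
            if tenant_id in self._dirty_tenants:
                # Out of sync: recount from the DB and clear the flag.
                self._dirty_tenants.discard(tenant_id)
                return session.query(self.model).filter_by(
                    tenant_id=tenant_id).count()
            # ...otherwise serve the stored usage record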
This patch simply renames quota modules introducing two
new packages:
1) neutron.quota
2) neutron.db.quota
This paves the way for adding more quota related modules.
Unit test paths are changed accordingly, as are references
to the moved modules.
This patch however preserves the ability to import
neutron.db.quota_db in order to avoid breaking repositories which
use neutron as a requirement.
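The preserved import path can be a one-line shim; whether the moved
module is named 'driver' is an assumption in this sketch:

    # neutron/db/quota_db.py -- compatibility shim for external repos
    from neutron.db.quota.driver import *  # noqa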
In Python 3, dhcp.LOG.process is not called when calling dhcp.LOG.debug. Since
what we really want to know is whether dhcp.LOG.debug is called, tweak the test
a little.
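A hedged sketch of the tweak, assuming a mock-based test; the logged
message is illustrative:

    import mock

    from neutron.agent.linux import dhcp


    def test_emits_debug_log():
        # Under Python 3, LOG.process is not on the debug() path, so
        # assert on LOG.debug itself instead.
        with mock.patch.object(dhcp.LOG, 'debug') as mock_debug:
            dhcp.LOG.debug('sample message %s', 'x')
            assert mock_debug.called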
The current coding, raising DBReferenceError directly, seems to
confuse sqlalchemy a bit. Instead, the test now actually causes the
error by tweaking the row being inserted.
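A self-contained sketch of the approach on plain SQLAlchemy: point a
foreign key at a missing parent so the database itself raises the
reference error (oslo.db would wrap it as DBReferenceError):

    import sqlalchemy as sa
    from sqlalchemy import event, orm
    from sqlalchemy.ext import declarative

    Base = declarative.declarative_base()


    class Parent(Base):
        __tablename__ = 'parents'
        id = sa.Column(sa.Integer, primary_key=True)


    class Child(Base):
        __tablename__ = 'children'
        id = sa.Column(sa.Integer, primary_key=True)
        parent_id = sa.Column(sa.Integer, sa.ForeignKey('parents.id'))


    engine = sa.create_engine('sqlite://')


    @event.listens_for(engine, 'connect')
    def _fk_pragma(dbapi_conn, record):
        dbapi_conn.execute('pragma foreign_keys=on')  # SQLite opt-in


    Base.metadata.create_all(engine)
    session = orm.Session(bind=engine)
    session.add(Child(id=1, parent_id=42))  # no such parent row
    session.flush()  # raises IntegrityError from the DB itself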
Fix KeyError: 'L3_ROUTER_NAT' in l3 scheduler functional test
The issue was introduced by commit with change ID:
Ic4df299bac83c80abf0890a81d2dd36d15993b33
In Neutron-speak we define the 'core plugin' as something like ML2, while
service plugins examples are L3, VPNaaS, etc. The code was initializing
the L3 service plugin as the core plugin, which is unexpected behavior.
The code will now use the DB core plugin base class as the core plugin,
and not initialize the service plugins. The tests will manually and
locally instantiate an L3 service plugin instance and use it.
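A hedged sketch of the resulting test arrangement; the class names
are illustrative, while the base classes are Neutron's:

    from neutron.db import db_base_plugin_v2
    from neutron.db import l3_db


    class FakeCorePlugin(db_base_plugin_v2.NeutronDbPluginV2):
        """Plays the core plugin role; no L3 logic mixed in."""


    class FakeL3Plugin(l3_db.L3_NAT_db_mixin):
        """Instantiated directly by the test, not via the service
        plugin registry."""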
This patch introduces database support for tracking Neutron
resource usage data. A single DB model class tracks usage
info for all neutron resources.
The patch also provides a simple API for managing resource
usage info, as well as unit tests providing coverage for
this API.
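A hedged sketch of what such a single model might look like; the
actual columns may differ:

    import sqlalchemy as sa
    from sqlalchemy.ext import declarative

    Base = declarative.declarative_base()


    class QuotaUsage(Base):
        """One row per (resource, tenant) pair, for every resource."""
        __tablename__ = 'quotausages'
        resource = sa.Column(sa.String(255), primary_key=True)
        tenant_id = sa.Column(sa.String(255), primary_key=True,
                              index=True)
        # 'dirty' flags rows whose in_use count needs a resync.
        dirty = sa.Column(sa.Boolean, nullable=False,
                          server_default=sa.sql.false())
        in_use = sa.Column(sa.Integer, nullable=False,
                           server_default='0')
        reserved = sa.Column(sa.Integer, nullable=False,
                             server_default='0')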
This patch also makes a slight change to the ContextBase
class, adding the ability to explicitly set is_advsvc at
initialization time. While this probably makes no difference
for practical use of the context class, it simplifies
development of DB-only unit tests.
rossella [Wed, 14 Jan 2015 14:11:36 +0000 (15:11 +0100)]
L2 agent RPC add new RPC calls
This patch introduces two new RPC calls:
* update_device_list
* get_devices_details_list_and_failed_devices
update_device_up and update_device_down are the RPC calls
used by the L2 agent to notify the plugin that
a device is up or down. These calls accept only one device.
update_device_list accepts a list of devices that will
be put up or down and returns a list of devices for which
the operation failed.
get_devices_details_list_and_failed_devices is a new RPC
call that returns a list of devices details and a list of
devices for which it was not possible to retrieve the
details.
Thanks to these new calls, the agent will be able to update
the status of several devices using one RPC call
and to retry the operation only for the devices that failed
instead of performing a full resync.
This will be implemented in another patch.
This patch introduces the new calls only for the OVS agent.
A following patch will take care of the other L2 agents.
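A self-contained sketch of the batched semantics; the return keys
and the set_status callback are illustrative stand-ins for the
plugin-side call:

    def update_device_list(devices_up, devices_down, set_status):
        # One call covers many devices; failures are collected and
        # returned so the agent can retry just those devices.
        failed_up, failed_down = [], []
        for dev in devices_up:
            try:
                set_status(dev, 'up')
            except Exception:
                failed_up.append(dev)
        for dev in devices_down:
            try:
                set_status(dev, 'down')
            except Exception:
                failed_down.append(dev)
        return {'devices_up': [d for d in devices_up
                               if d not in failed_up],
                'failed_devices_up': failed_up,
                'devices_down': [d for d in devices_down
                                 if d not in failed_down],
                'failed_devices_down': failed_down}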
This patch ensures log_opt_values is invoked only once, thus
avoiding annoying multiple dumps of option values.
Note that if rpc_workers is greater than 0, a second dump will
be logged. This patch does not address this, as the REST and RPC
servers are going to be separated as part of the feature/pecan
work.
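A hedged sketch of a run-once guard, assuming oslo.config's
log_opt_values; the function name is illustrative:

    import logging

    _OPTS_LOGGED = False


    def log_opt_values_once(conf, logger):
        # Dump option values at most once per process.
        global _OPTS_LOGGED
        if not _OPTS_LOGGED:
            conf.log_opt_values(logger, logging.DEBUG)
            _OPTS_LOGGED = True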
Pavel Bondar [Thu, 14 May 2015 11:06:08 +0000 (14:06 +0300)]
Add Pluggable IPAM Backend Part 2
Introduces new Pluggable IPAM backend.
IP/subnet allocation calls are sent to the IPAM driver.
Calls to the IPAM driver are treated as calls to a third-party
environment, so if any action fails, a rollback action is invoked.
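A self-contained sketch of the rollback pattern around third-party
driver calls; the names are illustrative of the approach, not the
exact Neutron code:

    def allocate_ips(driver, requests):
        allocated = []
        try:
            for request in requests:
                allocated.append(driver.allocate(request))
            return allocated
        except Exception:
            # The driver is external: on any failure, undo whatever
            # already succeeded before re-raising.
            for ip in reversed(allocated):
                driver.deallocate(ip)
            raise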
Removes the associate_neutron_subnet step from the interface and the
reference driver. It is no longer needed because the foreign key
relationship between the IPAM subnet and the neutron subnet was
removed, so an IPAM subnet can store the id of a neutron subnet
that has not been created yet.
For now only the reference IPAM driver is available.
test_ipam_pluggable_backend is temporarily disabled in
gate-neutron-python34, because the current patch executes parts of
test_db_base_plugin_v2, which is not py34 compatible yet.
It might be re-enabled once 204791 is merged.
Fix handling of port-range-min 0 in secgroup RPC and agent
For the TCP/UDP protocols, the valid port range is 0 to 65535, so
for a security group rule its valid range is also 0 to 65535. This
patch makes two changes:
1. If a security group rule's port_range_min is 0, the l2 agent now
gets the real port_range_min value 0 when it fetches the rule for a
device via RPC.
2. For IptablesFirewallDriver, if the port range is [0, xxxx], the
l2 agent also needs to add this rule to iptables.
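The heart of the fix is the classic falsy-zero pitfall, shown in
this self-contained sketch (the function name is illustrative):

    def build_port_match(port_range_min, port_range_max):
        # The buggy form was `if port_range_min:`, which silently
        # drops rules whose minimum port is 0. Compare against None.
        if port_range_min is not None and port_range_max is not None:
            return '-m multiport --dports %s:%s' % (port_range_min,
                                                    port_range_max)
        return ''


    assert build_port_match(0, 1024)  # previously produced no match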
This patch adds the address scope API tests and
also returns the address scope resource attribute map
from the get_extended_resources() function, which was
missed in the previous patch.
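A hedged sketch of the omission being fixed, following the usual
Neutron extension pattern; the attribute map contents here are
illustrative:

    RESOURCE_ATTRIBUTE_MAP = {
        'address_scopes': {
            'id': {'allow_post': False, 'is_visible': True},
            'name': {'allow_post': True, 'is_visible': True},
        },
    }


    class Address_scope(object):
        """Sketch of the extension descriptor class."""

        def get_extended_resources(self, version):
            # The previously missing piece: hand back the attr map so
            # the address scope resources are actually exposed.
            if version == '2.0':
                return RESOURCE_ATTRIBUTE_MAP
            return {}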