Support subnetpool association to an address scope
This patch supports the following (see the sketch below):
- create a subnetpool with an address scope
- update a subnetpool:
  - to associate it with an address scope
  - to change the association to another address scope
  - to remove the association with the address scope
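A hedged sketch of the API surface these operations map to, assuming
the subnetpool resource exposes an address_scope_id attribute (the
token and ids below are placeholders):

    import requests

    BASE = 'http://controller:9696/v2.0'
    token = '<keystone token>'         # placeholder
    scope_id = '<address scope uuid>'  # placeholder
    pool_id = '<subnetpool uuid>'      # placeholder
    HEADERS = {'X-Auth-Token': token}

    # create a subnetpool already associated with an address scope
    requests.post(BASE + '/subnetpools', headers=HEADERS, json={
        'subnetpool': {'name': 'pool1',
                       'prefixes': ['10.10.0.0/16'],
                       'address_scope_id': scope_id}})

    # change the association by passing another scope's id,
    # or remove it by passing None
    requests.put(BASE + '/subnetpools/' + pool_id, headers=HEADERS,
                 json={'subnetpool': {'address_scope_id': None}})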
Enable setting the DNS and DHCP log of dnsmasq for the DHCP agent
Add a new configuration option named 'dnsmasq_base_log_dir'
in dhcp_agent.ini.
This option should be the base directory for the log file, for
example:
dnsmasq_base_log_dir=/tmp
The DNS and DHCP log will then be written to
"/tmp/aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa/dhcp_dns_log".
The directory will be created if the given path doesn't exist.
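A minimal sketch of how the option could translate into dnsmasq
arguments; --log-queries, --log-dhcp and --log-facility are standard
dnsmasq flags, while the helper name is illustrative:

    import os

    def build_dnsmasq_log_args(network_id, base_log_dir):
        # one log directory per network, created on demand
        log_dir = os.path.join(base_log_dir, network_id)
        if not os.path.exists(log_dir):
            os.makedirs(log_dir)
        log_file = os.path.join(log_dir, 'dhcp_dns_log')
        return ['--log-queries', '--log-dhcp',
                '--log-facility=%s' % log_file]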
Move 1c844d1677f7 expand migration to appropriate branch
Commit cb60d0bb4e0cc0cba68f59fdf5f4e89d6ec52950 added an expand
migration. However, it indicated a contract migration as its
ancestor (2a16083502f3). As a result, the migration was treated
by alembic as a contract one, even though it was defined in the
'expand' directory.
This patch 'rebases' this migration on top of the expand branch
HEAD. This might impact trunk chasers who have already executed
migration 1c844d1677f7 as part of the contract branch.
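In alembic terms the fix is just a change of ancestor; a sketch, with
the expand branch HEAD left as a placeholder:

    # revision identifiers, used by Alembic.
    revision = '1c844d1677f7'
    # before: down_revision = '2a16083502f3' (a contract revision,
    # which pulled this migration into the contract branch)
    down_revision = '<expand branch HEAD>'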
OVSBridge was inheriting db_list from BaseOVS, which was
returning information about all the ports on the machine,
not only the ones belonging to the bridge.
The OVSNeutronAgent was using that method with the assumption
that ports were filtered by bridge.
To avoid confusion, this patch adds a new method to OVSBridge,
get_ports_attributes, to query the info for all the ports
belonging to the bridge.
db_list is removed from BaseOVS since that method is already
available in ovsdb/api.py.
ovs_lib methods that use db_list are refactored accordingly.
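A minimal sketch of the new method, assuming the ovsdb transaction
API exposed by ovsdb/api.py:

    def get_ports_attributes(self, table, columns=None, ports=None):
        # restrict the query to this bridge's ports instead of every
        # port on the machine
        port_names = ports or self.get_port_name_list()
        return self.ovsdb.db_list(
            table, port_names, columns=columns).execute(check_error=True)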
Sam Betts [Thu, 16 Jul 2015 13:08:59 +0000 (14:08 +0100)]
Fix order of calls in update_port
https://review.openstack.org/#/c/196908 introduced some extra
DVR-related logic into update_port; however, it changed the ordering
of some of the calls. This patch ensures that the calls that need to
happen before precommit happen in the right place.
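For reference, a simplified sketch of the intended ordering in ML2's
update_port (DVR-specific calls omitted; _make_port_context stands in
for the real context construction):

    with session.begin(subtransactions=True):
        # everything the mechanism drivers must observe has to be in
        # place before the precommit phase runs
        updated_port = super(Ml2Plugin, self).update_port(context, id, port)
        mech_context = self._make_port_context(context, updated_port)
        self.mechanism_manager.update_port_precommit(mech_context)
    # postcommit only runs after the transaction has been committed
    self.mechanism_manager.update_port_postcommit(mech_context)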
Kevin Benton [Tue, 28 Jul 2015 23:15:34 +0000 (16:15 -0700)]
Initialize port_info dict as blank in OVS agent
The first assignment of port_info was from the scan_ports function,
which could raise an exception and leave port_info unbound for the
port stats scan below.
This patch just initializes port_info as an empty dict so the port
stats scan always has an input.
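The fix in sketch form (the surrounding names come from the agent's
rpc_loop and are illustrative here):

    port_info = {}  # safe default for the stats scan below
    try:
        port_info = self.scan_ports(registered_ports, updated_ports)
    except Exception:
        LOG.exception("Error while scanning ports")
        sync = True
    # whatever happened above, port_info is now always a dict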
Cedric Brandily [Thu, 28 May 2015 16:35:17 +0000 (18:35 +0200)]
Ensure non-overlapping cidrs in subnetpools with galera
This change enables galera support in _lock_subnetpool[1]. It uses an
update to prevent two transactions performing concurrent subnet
allocation in the same subnetpool from both succeeding: the two
transactions conflict because they update the same row, so the db
(including a Galera multi-writer cluster) will discard the later
transaction, and Controller.create[2] will catch and retry the
"discarded" allocation.
This change adds a "hash" attribute to the "subnetpools" table to
enable the previous update.
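A sketch of the scheme, assuming the new 'hash' column (the helper
and model names are illustrative):

    from oslo_db import exception as db_exc
    from oslo_utils import uuidutils

    from neutron.db import models_v2

    def _lock_subnetpool(session, pool_id, current_hash):
        # both allocations UPDATE the same row; under Galera the later
        # writer fails certification, and the caller retries
        count = (session.query(models_v2.SubnetPool).
                 filter_by(id=pool_id, hash=current_hash).
                 update({'hash': uuidutils.generate_uuid()}))
        if not count:
            raise db_exc.RetryRequest(RuntimeError('subnetpool changed'))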
John Kasperski [Thu, 30 Jul 2015 04:52:01 +0000 (23:52 -0500)]
Prevent update alloc pool over existing gateway ip
The gateway IP for a subnet is not allowed to be listed in the
allocation pool for that subnet. This restriction is checked and
enforced at subnet-create time.
During subnet-update, it is only partially checked: an exception is
returned if the update request tries to place the gateway IP in an
existing allocation pool, or if both the gateway and the allocation
pool are being changed and the gateway is located in the new pool.
If only the allocation pool is being updated, however, no check is
made that the new allocation pool does not contain the existing
gateway IP. This patch adds that missing check.
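A hedged sketch of the added check (the exception already exists in
neutron.common.exceptions; the helper name is an assumption):

    import netaddr

    from neutron.common import exceptions as exc

    def _validate_gw_out_of_pools(gateway_ip, pools):
        for pool in pools:
            pool_range = netaddr.IPRange(pool['start'], pool['end'])
            if netaddr.IPAddress(gateway_ip) in pool_range:
                raise exc.GatewayConflictWithAllocationPools(
                    pool=pool_range, ip_address=gateway_ip)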
- Make the SR-IOV agent set spoof checking on VFs
according to the port_security_enabled attribute of the port
(see the sketch below).
- Extend the VF management sanity check to probe spoof checking
capability.
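Spoof checking on a VF is toggled through iproute2; a minimal sketch
of what the agent's VF management layer would run:

    import subprocess

    def set_vf_spoofcheck(pf_device, vf_index, enabled):
        # 'ip link set <pf> vf <n> spoofchk on|off' is the standard
        # iproute2 interface for VF spoof checking
        state = 'on' if enabled else 'off'
        subprocess.check_call(['ip', 'link', 'set', pf_device,
                               'vf', str(vf_index), 'spoofchk', state])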
When a network with a dhcp_enabled subnet is scheduled on a DHCP
agent, the DHCP driver creates the network:dhcp port for the subnet.
However, the port info is not updated in the DHCP agent's internal
cache. Subsequently, if the user deletes the network:dhcp port, the
port is properly deleted on the server side (i.e., in the database),
but when the port_delete_end notification is sent to the DHCP agent,
the agent simply ignores it because the port entry is missing from
its cache. This patch fixes the issue by updating the DHCP agent's
cache when the DHCP driver creates the network:dhcp port for the
subnets.
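A minimal sketch of the fix at the point where the driver obtains the
port (the call site is illustrative; put_port is the cache update
primitive):

    port = self.plugin_rpc.create_dhcp_port({'port': port_body})
    if port:
        # mirror the new network:dhcp port into the agent cache so a
        # later port_delete_end notification finds a matching entry
        self.cache.put_port(port)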
Currently, Neutron does not prioritize DNS servers for subnets.
Generally speaking, it is useful to keep the order of DNS
nameservers consistent. Add a new column named 'order' to the
'dnsnameservers' table and insert nameservers into the DB one by
one.
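A hedged sketch of the ordered model (column types and sizes are
assumptions):

    import sqlalchemy as sa

    from neutron.db import model_base

    class DNSNameServer(model_base.BASEV2):
        """Internal representation of a DNS nameserver, now ordered."""
        __tablename__ = 'dnsnameservers'
        address = sa.Column(sa.String(128), primary_key=True)
        subnet_id = sa.Column(sa.String(36),
                              sa.ForeignKey('subnets.id',
                                            ondelete='CASCADE'),
                              primary_key=True)
        # new: position of this server in the subnet's list
        order = sa.Column(sa.Integer, nullable=False, server_default='0')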
John Nielsen [Wed, 22 Jul 2015 18:43:04 +0000 (12:43 -0600)]
Extend vxlan_group option to allow a range of group addresses
If vxlan_group is specified in CIDR notation, it is interpreted as a
range of group addresses. VXLAN VNIs are mapped to group addresses in
a many-to-one round robin fashion, or one-to-one if a large enough
range is provided. Since VNIs are 24 bits, a /8 such as 239.0.0.0/8
allows each VNI to use a unique multicast group. (239.0.0.0/8 also
happens to be the "site-local" multicast range.)
With multiple VNIs on a single multicast group, it is likely that
VTEPs will unnecessarily receive broadcast, unknown-unicast and
multicast (BUM) datagrams for VNIs in which they do not participate.
Using a range of groups mitigates or eliminates this issue. It is
thus an alternative to the l2_population extension and driver for
environments where both multicast and linuxbridge are used.
The default setting is unchanged, but the comments in the ini file
suggest 239.0.0.0/8 as an alternative. Administrators are free to use
any valid multicast range that can be expressed in CIDR notation, and
should choose a size and starting address that make sense for their
environment.
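A sketch of the many-to-one round-robin mapping described above (the
function name is illustrative):

    import netaddr

    def vxlan_group_for_vni(vni, group_cidr):
        net = netaddr.IPNetwork(group_cidr)
        # wrap the 24-bit VNI into the available group addresses; with
        # a /8 there are 2**24 groups, so every VNI gets its own one
        return str(net.network + (vni % net.size))

    # vxlan_group_for_vni(1, '239.0.0.0/8')  -> '239.0.0.1'
    # vxlan_group_for_vni(5, '239.0.0.0/30') -> '239.0.0.1'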
Enable resource usage tracking for reference plugins.
Specify which resources should be tracked when initializing
the ML2 and l3_router plugins. This enables usage tracking
for the following resources (see the sketch after this list):
- Networks
- Ports
- Subnets
- Subnet pools
- Security groups
- Security group rules
- Routers
- Floating IPs
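A hedged sketch of how a plugin opts in, assuming the
resource_registry interface introduced by this series:

    from neutron.db import models_v2
    from neutron.quota import resource_registry

    @resource_registry.tracked_resources(
        network=models_v2.Network, port=models_v2.Port,
        subnet=models_v2.Subnet, subnetpool=models_v2.SubnetPool)
    class Ml2Plugin(object):  # real base classes elided for brevity
        pass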
Introduce a 'plural_name' attribute in the BaseResource
class in the neutron.quota.resource module. This makes it
possible to specify the plural name of a resource when it is
entered in the quota registry.
This will come in handy for managing reservations and also
removes the need to pass the collection name when
counting a resource.
Also, the name of the 'resources' parameter of the
_count_resource routine has been changed to collection_name,
as the previous name was misleading: it suggested the parameter
was a collection, whereas it is simply the name of a collection.
This patch introduces application logic support for tracking
Neutron resource usage data, thus enabling a different way of
enforcing quota limits, which now relies on records tracking
resource usage for each tenant.
When these records go "out-of-sync" with actual resource usage,
the quota usage table entry is marked "dirty", and it will be
resynchronized the next time a resource count is requested.
Changes in resource utilization are detected using SQLAlchemy
events.
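A hedged sketch of dirty-marking via SQLAlchemy events (the handler
and helper names are illustrative):

    from sqlalchemy import event

    from neutron.db import models_v2

    def _mark_dirty(mapper, connection, target):
        # counts are now stale for this tenant/resource pair; flag the
        # usage record so the next count resynchronizes it
        set_quota_usage_dirty(connection, 'port', target.tenant_id)

    event.listen(models_v2.Port, 'after_insert', _mark_dirty)
    event.listen(models_v2.Port, 'after_delete', _mark_dirty)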
This patch however does not automatically enable resource usage
tracking for any plugin. Plugins must explicitly declare for which
resources they wish to enable tracking. To this end, this patch
provides the @tracked_resources decorator.
As some operators might wish not to use resource usage tracking,
this patch adds the track_quota_usage configuration option. If set
to False, DB-level usage tracking will not be performed, and
resources will be counted as before, i.e., by invoking a plugin
method.
Usage tracking is performed using a new resource type,
TrackedResource, which can work side by side with CountableResource,
the one Neutron has used so far. To this end, a ResourceRegistry
class has been introduced for registering and managing resources.
Most of the resource management code previously embedded in the
QuotaEngine class is moved to ResourceRegistry as part of this
patch.
This patch simply renames quota modules, introducing two
new packages:
1) neutron.quota
2) neutron.db.quota
This paves the way for adding more quota related modules.
Unit tests paths are changed accordingly, as well as references
to moved modules.
This patch however preserves the ability to import
neutron.db.quota_db in order to avoid breaking repositories which
use neutron as a requirement.
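A sketch of the compatibility shim keeping the old path importable
(the re-exported name is an assumption):

    # neutron/db/quota_db.py -- old location, kept for compatibility
    from neutron.db.quota.driver import DbQuotaDriver  # noqa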
In Python 3, dhcp.LOG.process is not called when calling dhcp.LOG.debug. Since
what we really want to know is whether dhcp.LOG.debug is called, tweak the test
a little.
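A sketch of the tweak, asserting on what we actually care about:

    import mock

    # patch LOG.debug itself instead of LOG.process, which the
    # Python 3 logging path may never invoke
    with mock.patch.object(dhcp.LOG, 'debug') as debug:
        run_code_under_test()  # placeholder for the tested call
        self.assertTrue(debug.called)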
The current coding, raising DBReferenceError directly, seems to
confuse sqlalchemy a bit. Instead, actually cause the error by
tweaking the row being inserted.
Fix KeyError: 'L3_ROUTER_NAT' in l3 scheduler functional test
The issue was introduced by commit with change ID:
Ic4df299bac83c80abf0890a81d2dd36d15993b33
In Neutron-speak we define the 'core plugin' as something like ML2,
while service plugin examples are L3, VPNaaS, etc. The code was
initializing the L3 service plugin as the core plugin, which is
unexpected behavior.
The code will now use the DB core plugin base class as the core
plugin, and not initialize the service plugins. The tests will
manually and locally instantiate an L3 service plugin instance and
use it.
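The failing lookup, for reference (module paths as in Liberty-era
Neutron):

    from neutron import manager
    from neutron.plugins.common import constants as service_constants

    # raised KeyError when the L3 plugin had been registered as the
    # core plugin instead of as a service plugin
    l3_plugin = manager.NeutronManager.get_service_plugins()[
        service_constants.L3_ROUTER_NAT]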
This patch introduces database support for tracking Neutron
resource usage data. A single DB model class tracks usage
info for all neutron resources.
The patch also provides a simple API for managing resource
usage info, as well as unit tests providing coverage for
this API.
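A hedged sketch of the single model class (column details are
assumptions):

    import sqlalchemy as sa

    from neutron.db import model_base

    class QuotaUsage(model_base.BASEV2):
        """Tracks in-use and reserved amounts per tenant and resource."""
        __tablename__ = 'quotausages'
        tenant_id = sa.Column(sa.String(255), primary_key=True,
                              index=True)
        resource = sa.Column(sa.String(255), primary_key=True)
        dirty = sa.Column(sa.Boolean, nullable=False,
                          server_default=sa.sql.false())
        in_use = sa.Column(sa.Integer, nullable=False,
                           server_default='0')
        reserved = sa.Column(sa.Integer, nullable=False,
                             server_default='0')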
This patch also makes a slight change to the ContextBase
class, adding the ability to explicitly set is_advsvc at
initialization time. While this probably makes no difference
for practical use of the context class, it simplifies
development of DB-only unit tests.
rossella [Wed, 14 Jan 2015 14:11:36 +0000 (15:11 +0100)]
L2 agent RPC add new RPC calls
This patch introduces two new RPC calls:
* update_device_list
* get_devices_details_list_and_failed_devices
update_device_up and update_device_down are the RPC calls
used by the L2 agent to notify the plugin that
a device is up or down. These calls accept only one device.
update_device_list accepts a list of devices that will
be put up or down and returns a list of devices for which
the operation failed.
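A hedged sketch of the agent-side call (the RPC version and return
shape are assumptions):

    def update_device_list(self, context, devices_up, devices_down,
                           agent_id, host):
        cctxt = self.client.prepare(version='1.5')
        # returns {'devices_up': [...], 'failed_devices_up': [...],
        #          'devices_down': [...], 'failed_devices_down': [...]}
        return cctxt.call(context, 'update_device_list',
                          devices_up=devices_up,
                          devices_down=devices_down,
                          agent_id=agent_id, host=host)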
get_devices_details_list_and_failed_devices is a new RPC
call that returns a list of devices details and a list of
devices for which it was not possible to retrieve the
details.
Thanks to these new calls, the agent will be able to update
the status of several devices using one RPC call
and to retry the operation only for the devices that failed,
instead of performing a full resync.
This will be implemented in another patch.
This patch introduces the new calls only for the OVS agent.
A following patch will take care of the other L2 agents.