From: Carl Baldwin
Date: Fri, 5 Jun 2015 22:23:50 +0000 (+0000)
Subject: Add address scope to ports in RPC response to L3 agent
X-Git-Url: https://review.fuel-infra.org/gitweb?a=commitdiff_plain;h=c7d0d2e5692b7a66599885f961035bca59c62402;p=openstack-build%2Fneutron-build.git

Add address scope to ports in RPC response to L3 agent

The L3 agent needs to know the address scope of each port of each router
it sets up in order to enforce isolation between scopes.

This commit adds a devref for the address scopes and subnet pools features.

Change-Id: I6a7b3708fadefff1919d70ab1b8bc345b3fbe81c
Partially-Implements: blueprint address-scopes
---

diff --git a/doc/source/devref/address_scopes.rst b/doc/source/devref/address_scopes.rst
new file mode 100644
index 000000000..711864191
--- /dev/null
+++ b/doc/source/devref/address_scopes.rst
@@ -0,0 +1,154 @@
+Subnet Pools and Address Scopes
+===============================
+
+This page discusses subnet pools and address scopes.
+
+Subnet Pools
+------------
+
+Learn about subnet pools by watching the summit talk given in Vancouver [#]_.
+
+.. [#] http://www.youtube.com/watch?v=QqP8yBUUXBM&t=6m12s
+
+Subnet pools were added in Kilo. They are relatively simple. A SubnetPool has
+any number of SubnetPoolPrefix objects associated with it. These prefixes are in
+CIDR format. Each CIDR is a piece of the address space that is available for
+allocation.
+
+Subnet pools support IPv6 just as well as IPv4.
+
+The Subnet model object now has a subnetpool_id attribute whose default is null
+for backward compatibility. The subnetpool_id attribute stores the UUID of the
+subnet pool that acted as the source for the address range of a particular
+subnet.
+
+When creating a subnet, the subnetpool_id can optionally be specified. If it
+is, the 'cidr' field is not required. If 'cidr' is specified, it will be
+allocated from the pool, assuming the pool includes it and hasn't already
+allocated any part of it.
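As an aside, the "honor a specific CIDR if the pool contains it and no part of it is already allocated" rule can be sketched with Python's standard library ``ipaddress`` module. This is an illustrative sketch only, not Neutron's actual allocator; the ``can_allocate`` helper name is made up:

```python
import ipaddress


def can_allocate(pool_prefixes, allocations, requested_cidr):
    """Return True if requested_cidr can be honored by the pool.

    The requested CIDR must fall entirely within one of the pool's
    prefixes and must not overlap any existing allocation.
    """
    req = ipaddress.ip_network(requested_cidr)
    # The request must be contained in at least one pool prefix.
    in_pool = any(req.subnet_of(ipaddress.ip_network(p))
                  for p in pool_prefixes)
    # No part of the request may already be allocated.
    overlaps = any(req.overlaps(ipaddress.ip_network(a))
                   for a in allocations)
    return in_pool and not overlaps
```

For example, with a pool of 10.0.0.0/16 and an existing allocation of 10.0.1.0/24, a request for 10.0.2.0/24 succeeds, while a request for 10.0.1.128/25 fails because part of it is already allocated.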
+If 'cidr' is left out, then the prefixlen attribute
+can be specified. If it is not, the default prefix length will be taken from
+the subnet pool. Think of it this way: the allocation logic always needs to
+know the size of the subnet desired. It can pull it from a specific CIDR, a
+prefixlen, or the default. A specific CIDR is optional and the allocation will
+try to honor it if provided. The request will fail if it can't honor it.
+
+Subnet pools do not allow overlap of subnets.
+
+Subnet Pool Quotas
+~~~~~~~~~~~~~~~~~~
+
+A quota mechanism was provided for subnet pools. It is different from other
+quota mechanisms in Neutron because it doesn't count instances of first-class
+objects. Instead, it counts how much of the address space is used.
+
+For IPv4, it made reasonable sense to count quota in terms of individual
+addresses. So, if you're allowed exactly one /24, your quota should be set to
+256. Three /26s would be 192. This mechanism encourages more efficient use of
+the IPv4 space, which will be increasingly important when working with globally
+routable addresses.
+
+For IPv6, the smallest viable subnet in Neutron is a /64. There is no reason
+to allocate a subnet of any other size for use on a Neutron network. It would
+look pretty funny to set a quota of 18446744073709551616 to allow one /64
+subnet. To avoid this, we count IPv6 quota in terms of /64s. So, a quota of 3
+allows three /64 subnets. When we need to allocate something smaller in the
+future, we will need to ensure that the code can handle non-integer quota
+consumption.
+
+Allocation
+~~~~~~~~~~
+
+Allocation is done in a way that aims to minimize fragmentation of the pool.
+The relevant code is here [#]_. First, the available prefixes are computed
+using a set difference: pool - allocations. The result is compacted [#]_ and
+then sorted by size. The subnet is then allocated from the smallest available
+prefix that is large enough to accommodate the request.
+
+..
+.. [#] neutron/ipam/subnet_alloc.py (_allocate_any_subnet)
+.. [#] http://pythonhosted.org/netaddr/api.html#netaddr.IPSet.compact
+
+Address Scopes
+--------------
+
+Before subnet pools or address scopes, it was impossible to tell if a network
+address was routable in a certain context because the address was given
+explicitly on subnet create and wasn't validated against any other addresses.
+Address scopes are meant to solve this by putting control over the address
+space in the hands of an authority: the address scope owner. It makes use of
+the already existing SubnetPool concept for allocation.
+
+Address scopes are "the thing within which address overlap is not allowed" and
+thus provide more flexible control as well as decoupling of address overlap
+from tenancy.
+
+Prior to the Mitaka release, there was implicitly only a single 'shared'
+address scope. Arbitrary address overlap was allowed, making it pretty much a
+"free for all". To make things seem somewhat sane, normal tenants are not able
+to use routers to cross-plug networks from different tenants, and NAT was used
+between internal networks and external networks. It was almost as if each
+tenant had a private address scope.
+
+The problem is that this model cannot support use cases where NAT is not
+desired or supported (e.g. IPv6) or where we want to allow different tenants
+to cross-plug their networks.
+
+An AddressScope covers only one address family, but address scopes work
+equally well for IPv4 and IPv6.
+
+Routing
+~~~~~~~
+
+The reference implementation honors address scopes. Within an address scope,
+addresses route freely (barring any firewall rules or other external
+restrictions). Between scopes, routing is prevented unless address translation
+is used. Future patches will expand on this.
+
+.. TODO (Carl) Implement NAT for floating ips crossing scopes
+..
+.. TODO (Carl) Implement SNAT for crossing scopes
+
+RPC
+~~~
+
+The L3 agent in the reference implementation needs to know the address scope
+for each port on each router in order to map ingress traffic correctly.
+
+Each subnet from the same address family on a network is required to be from
+the same subnet pool. Therefore, the address scope will also be the same. If
+this were not the case, it would be more difficult to match ingress traffic on
+a port with the appropriate scope. It may be counter-intuitive, but L3 address
+scopes need to be anchored to some sort of non-L3 thing (e.g. an L2 interface)
+in the topology in order to determine the scope of ingress traffic. For now,
+we use ports/networks. In the future, we may be able to distinguish by
+something else, such as the remote MAC address.
+
+The address scope id is set on each port in a dict under the 'address_scopes'
+attribute. The scope is distinct per address family. If the attribute does
+not appear, it is assumed to be null for both families. A value of null means
+that the addresses are in the "implicit" address scope, which holds all
+addresses that don't have an explicit one. All subnets that existed in Neutron
+before address scopes were introduced fall into this scope.
+
+Here is an example of how the JSON will look in the context of a router port::
+
+    "address_scopes": {
+        "4": "d010a0ea-660e-4df4-86ca-ae2ed96da5c1",
+        "6": null
+    },
+
+Model
+~~~~~
+
+The model for subnet pools and address scopes can be found in
+neutron/db/models_v2.py and neutron/db/address_scope_db.py. This document
+won't go over all of the details. It is worth noting how they relate to
+existing Neutron objects.
+The existing Neutron subnet now optionally
+references a single subnet pool::
+
+  +----------------+        +------------------+        +--------------+
+  | Subnet         |        | SubnetPool       |        | AddressScope |
+  +----------------+        +------------------+        +--------------+
+  | subnet_pool_id +------> | address_scope_id +------> |              |
+  |                |        |                  |        |              |
+  |                |        |                  |        |              |
+  |                |        |                  |        |              |
+  +----------------+        +------------------+        +--------------+
diff --git a/doc/source/devref/index.rst b/doc/source/devref/index.rst
index 516f62b71..8c4718ad5 100644
--- a/doc/source/devref/index.rst
+++ b/doc/source/devref/index.rst
@@ -72,6 +72,7 @@ Neutron Internals
    upgrade
    i18n
    instrumentation
+   address_scopes
 
 Testing
 -------
diff --git a/neutron/db/l3_db.py b/neutron/db/l3_db.py
index 8a051c74d..0276bed0d 100644
--- a/neutron/db/l3_db.py
+++ b/neutron/db/l3_db.py
@@ -1218,12 +1218,25 @@ class L3_NAT_dbonly_mixin(l3.RouterPluginBase):
         if not network_ids:
             return {}
 
-        filters = {'network_id': [id for id in network_ids]}
+        query = context.session.query(models_v2.Subnet,
+                                      models_v2.SubnetPool.address_scope_id)
+        query = query.outerjoin(
+            models_v2.SubnetPool,
+            models_v2.Subnet.subnetpool_id == models_v2.SubnetPool.id)
+        query = query.filter(models_v2.Subnet.network_id.in_(network_ids))
+
         fields = ['id', 'cidr', 'gateway_ip', 'dns_nameservers',
                   'network_id', 'ipv6_ra_mode', 'subnetpool_id']
 
+        def make_subnet_dict_with_scope(row):
+            subnet_db, address_scope_id = row
+            subnet = self._core_plugin._make_subnet_dict(
+                subnet_db, fields, context=context)
+            subnet['address_scope_id'] = address_scope_id
+            return subnet
+
         subnets_by_network = dict((id, []) for id in network_ids)
-        for subnet in self._core_plugin.get_subnets(context, filters, fields):
+        for subnet in (make_subnet_dict_with_scope(row) for row in query):
            subnets_by_network[subnet['network_id']].append(subnet)
         return subnets_by_network
 
@@ -1242,7 +1255,15 @@ class L3_NAT_dbonly_mixin(l3.RouterPluginBase):
 
             port['subnets'] = []
             port['extra_subnets'] = []
+            port['address_scopes'] = {l3_constants.IP_VERSION_4: None,
+                                      l3_constants.IP_VERSION_6: None}
+
+            scopes = {}
             for subnet in subnets_by_network[port['network_id']]:
+                scope = subnet['address_scope_id']
+                cidr = netaddr.IPNetwork(subnet['cidr'])
+                scopes[cidr.version] = scope
+
                 # If this subnet is used by the port (has a matching entry
                 # in the port's fixed_ips), then add this subnet to the
                 # port's subnets list, and populate the fixed_ips entry
@@ -1256,14 +1277,15 @@ class L3_NAT_dbonly_mixin(l3.RouterPluginBase):
                 for fixed_ip in port['fixed_ips']:
                     if fixed_ip['subnet_id'] == subnet['id']:
                         port['subnets'].append(subnet_info)
-                        prefixlen = netaddr.IPNetwork(
-                            subnet['cidr']).prefixlen
+                        prefixlen = cidr.prefixlen
                         fixed_ip['prefixlen'] = prefixlen
                         break
                 else:
                     # This subnet is not used by the port.
                     port['extra_subnets'].append(subnet_info)
 
+            port['address_scopes'].update(scopes)
+
     def _process_floating_ips(self, context, routers_dict, floating_ips):
         for floating_ip in floating_ips:
             router = routers_dict.get(floating_ip['router_id'])
diff --git a/neutron/tests/unit/db/test_l3_db.py b/neutron/tests/unit/db/test_l3_db.py
index e97842240..cd71a2169 100644
--- a/neutron/tests/unit/db/test_l3_db.py
+++ b/neutron/tests/unit/db/test_l3_db.py
@@ -40,20 +40,28 @@ class TestL3_NAT_dbonly_mixin(base.BaseTestCase):
 
     def test__get_subnets_by_network_no_query(self):
         """Basic test that no query is performed if no Ports are passed"""
+        context = mock.Mock()
         with mock.patch.object(manager.NeutronManager, 'get_plugin') as get_p:
-            self.db._get_subnets_by_network_list(mock.sentinel.context, [])
-        self.assertFalse(get_p().get_subnets.called)
+            self.db._get_subnets_by_network_list(context, [])
+        self.assertFalse(context.session.query.called)
+        self.assertFalse(get_p.called)
 
     def test__get_subnets_by_network(self):
         """Basic test that the right query is called"""
-        network_ids = ['a', 'b']
+        context = mock.MagicMock()
+        query = context.session.query().outerjoin().filter()
+        query.__iter__.return_value = [(mock.sentinel.subnet_db,
+                                        mock.sentinel.address_scope_id)]
+
         with mock.patch.object(manager.NeutronManager, 'get_plugin') as get_p:
-            self.db._get_subnets_by_network_list(
-                mock.sentinel.context, network_ids)
-        get_p().get_subnets.assert_called_once_with(
-            mock.sentinel.context,
-            {'network_id': network_ids},
-            mock.ANY)
+            get_p()._make_subnet_dict.return_value = {
+                'network_id': mock.sentinel.network_id}
+            subnets = self.db._get_subnets_by_network_list(
+                context, [mock.sentinel.network_id])
+        self.assertEqual({
+            mock.sentinel.network_id: [{
+                'address_scope_id': mock.sentinel.address_scope_id,
+                'network_id': mock.sentinel.network_id}]}, subnets)
 
     def test__populate_ports_for_subnets_none(self):
         """Basic test that the method runs correctly with no ports"""
@@ -70,16 +78,21 @@
             'gateway_ip': mock.sentinel.gateway_ip,
             'dns_nameservers': mock.sentinel.dns_nameservers,
             'ipv6_ra_mode': mock.sentinel.ipv6_ra_mode,
-            'subnetpool_id': mock.sentinel.subnetpool_id}
+            'subnetpool_id': mock.sentinel.subnetpool_id,
+            'address_scope_id': mock.sentinel.address_scope_id}
         get_subnets_by_network.return_value = {'net_id': [subnet]}
 
         ports = [{'network_id': 'net_id',
                   'id': 'port_id',
                   'fixed_ips': [{'subnet_id': mock.sentinel.subnet_id}]}]
         self.db._populate_subnets_for_ports(mock.sentinel.context, ports)
+        keys = ('id', 'cidr', 'gateway_ip', 'ipv6_ra_mode', 'subnetpool_id',
+                'dns_nameservers')
+        address_scopes = {4: None, 6: mock.sentinel.address_scope_id}
         self.assertEqual([{'extra_subnets': [],
                            'fixed_ips': [{'subnet_id': mock.sentinel.subnet_id,
                                          'prefixlen': 64}],
                           'id': 'port_id',
                           'network_id': 'net_id',
-                          'subnets': [subnet]}], ports)
+                          'subnets': [{k: subnet[k] for k in keys}],
+                          'address_scopes': address_scopes}], ports)