Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.

Convention for heading levels in Neutron devref::

   =======  Heading 0 (reserved for the title in a document)
   -------  Heading 1
   ~~~~~~~  Heading 2
   +++++++  Heading 3
   '''''''  Heading 4

(Avoid deeper levels because they do not render well.)

Testing Neutron
===============

Why Should You Care
-------------------

There are two ways to approach testing:

1) Write unit tests because they're required to get your patch merged.
   This typically involves mock heavy tests that assert that your code is as
   written.
2) Put as much thought into your testing strategy as you do into the rest
   of your code. Use different layers of testing as appropriate to provide
   high *quality* coverage. Are you touching an agent? Test it against an
   actual system! Are you adding a new API? Test it for race conditions
   against a real database! Are you adding a new cross-cutting feature?
   Test that it does what it's supposed to do when run on a real cloud!

Do you feel the need to verify your change manually? If so, the next few
sections attempt to guide you through Neutron's different test infrastructures
to help you make intelligent decisions and best exploit Neutron's test
offerings.

We will talk about three classes of tests: unit, functional and integration.
Each respective category typically targets a larger scope of code. Other than
that broad categorization, here are a few more characteristics:

* Unit tests - Should be able to run on your laptop, directly following a
  'git clone' of the project. The underlying system must not be mutated;
  mocks can be used to achieve this. A unit test typically targets a function
  or class.
* Functional tests - Run against a pre-configured environment
  (tools/configure_for_func_testing.sh). Typically test a component
  such as an agent using no mocks.
* Integration tests - Run against a running cloud, often target the API level,
  but also 'scenarios' or 'user stories'. You may find such tests under
  tests/api, tests/fullstack and in the Tempest and Rally projects.

Tests in the Neutron tree are typically organized by the testing infrastructure
used, and not by the scope of the test. For example, many tests under the
'unit' directory invoke an API call and assert that the expected output was
received. The scope of such a test is the entire Neutron server stack,
and clearly not a specific function such as in a typical unit test.

Testing Frameworks
------------------

The different frameworks are listed below. The intent is to list the
capabilities of each testing framework to help the reader understand when
each tool should be used. Remember that when adding code that touches many
areas of Neutron, each area should be tested with the appropriate framework.
Overlap between different test layers is often desirable and encouraged.

Unit Tests
~~~~~~~~~~

Unit tests (neutron/tests/unit/) are meant to cover as much code as
possible. They are designed to test the various pieces of the Neutron tree to
make sure any new changes don't break existing functionality. Unit tests have
no requirements for, nor make changes to, the system they are running on. They
use an in-memory sqlite database to test DB interaction.

At the start of each test run:

* RPC listeners are mocked away.
* The fake Oslo messaging driver is used.

At the end of each test run:

* Mocks are automatically reverted.
* The in-memory database is cleared of content, but its schema is maintained.
* The global Oslo configuration object is reset.

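
The automatic revert of mocks can be illustrated with stdlib tooling alone. The sketch below is not Neutron's actual base test class; it only demonstrates the same pattern, in which a patch registered with 'addCleanup' is undone whether the test passes or fails (the `CONF` dict here is a hypothetical stand-in for the global Oslo configuration object):

```python
import unittest
from unittest import mock

CONF = {'debug': False}  # hypothetical stand-in for the global Oslo config

def get_debug():
    return CONF['debug']

class PatchedTest(unittest.TestCase):
    def setUp(self):
        super().setUp()
        # A patch started in setUp and stopped via addCleanup is reverted
        # on success *and* failure, which is how a test framework can
        # guarantee that "mocks are automatically reverted".
        patcher = mock.patch.dict(CONF, {'debug': True})
        patcher.start()
        self.addCleanup(patcher.stop)

    def test_sees_patched_config(self):
        self.assertTrue(get_debug())

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(PatchedTest))
assert result.wasSuccessful()
assert CONF['debug'] is False  # global state was restored after the run
```

The same mechanism generalizes to any global state the test framework resets, such as the configuration object and the in-memory database mentioned above.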
The unit testing framework can be used to effectively test database
interaction. For example, distributed routers allocate a MAC address for every
host running an OVS agent. One of DVR's DB mixins implements a method that
lists all host MAC addresses. Its test looks like this:

.. code-block:: python

    def test_get_dvr_mac_address_list(self):
        self._create_dvr_mac_entry('host_1', 'mac_1')
        self._create_dvr_mac_entry('host_2', 'mac_2')
        mac_list = self.mixin.get_dvr_mac_address_list(self.ctx)
        self.assertEqual(2, len(mac_list))

It inserts two new host MAC addresses, invokes the method under test and
asserts its output. The test has many things going for it:

* It targets the method under test correctly, not taking on a larger scope.
* It does not use mocks to assert that methods were called; it simply
  invokes the method and asserts its output (in this case, that the list
  method returns two records).

This is made possible by the fact that the method was built to be testable -
it has clear inputs and outputs with no side effects.

Functional Tests
~~~~~~~~~~~~~~~~

Functional tests (neutron/tests/functional/) are intended to
validate actual system interaction. Mocks should be used sparingly,
if at all. Care should be taken to ensure that existing system
resources are not modified and that resources created in tests are
properly cleaned up both on test success and failure.

Let's examine the benefits of the functional testing framework.
Neutron offers a library called 'ip_lib' that wraps around the 'ip' binary.
One of its methods is called 'device_exists' which accepts a device name
and a namespace and returns True if the device exists in the given namespace.
It's easy to build a test that targets the method directly, and such a test
would be considered a 'unit' test. However, what framework should such a test
use? A test using the unit testing framework could not mutate state on the
system, and so could not actually create a device and assert that it now
exists. Such a test would look roughly like this:

* It would mock 'execute', a method that executes shell commands against the
  system, to return an IP device named 'foo'.
* It would then assert that when 'device_exists' is called with 'foo', it
  returns True, but when called with a different device name it returns False.
* It would most likely assert that 'execute' was called with the expected
  'ip' command.

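
A sketch of such a mock-heavy unit test follows. The 'execute' and 'device_exists' definitions below are simplified stand-ins for illustration only, not ip_lib's actual code:

```python
from unittest import mock

def execute(cmd):
    """Stand-in for the shell helper; unit tests must mock it out."""
    raise AssertionError("unit tests must not touch the system")

def device_exists(device, namespace=None):
    """Simplified stand-in: report whether 'ip link show'-style
    output mentions the device."""
    return device in execute(['ip', 'link', 'show'])

# The mocked 'execute' pretends a device named 'foo' exists.
with mock.patch(__name__ + '.execute',
                return_value='42: foo: <UP>') as execute_mock:
    assert device_exists('foo') is True   # mocked output mentions 'foo'
    assert device_exists('bar') is False  # ...but not 'bar'
    execute_mock.assert_called_with(['ip', 'link', 'show'])
```

Note how the test knows the exact command the implementation runs; that coupling is precisely the fragility discussed next.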
The value of such a test is arguable. Remember that new tests are not free;
they need to be maintained. Code is often refactored, reimplemented and
optimized.

* There are other ways to find out if a device exists (such as
  by looking at '/sys/class/net'), and in such a case the test would have
  to be rewritten.
* Methods are mocked using their name. When methods are renamed, moved or
  removed, their mocks must be updated. This slows down development for
  little benefit.
* Most importantly, the test does not assert the behavior of the method. It
  merely asserts that the code is as written.

When adding a functional test for 'device_exists', several framework level
methods were added. These methods may now be used by other tests as well.
One such method creates a virtual device in a namespace,
and ensures that both the namespace and the device are cleaned up at the
end of the test run regardless of success or failure using the 'addCleanup'
method. The test generates details for a temporary device, asserts that
a device by that name does not exist, creates that device, asserts that
it now exists, deletes it, and asserts that it no longer exists.
Such a test avoids all three issues mentioned above that would arise had it
been written using the unit testing framework.

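
The create/assert/delete/assert flow, with 'addCleanup' guaranteeing cleanup on success and failure alike, can be sketched with a resource that is safe to create on any machine. Here a temporary directory stands in for the namespace/device pair; this is illustrative only, the real test manipulates ip devices:

```python
import os
import shutil
import tempfile
import unittest
import uuid

class ResourceLifecycleTest(unittest.TestCase):
    """Mirrors the functional test flow with a directory as the resource."""

    def test_resource_lifecycle(self):
        # Generate details for a temporary resource, assert it doesn't exist.
        path = os.path.join(tempfile.gettempdir(),
                            'fake-device-' + uuid.uuid4().hex)
        self.assertFalse(os.path.exists(path))
        # Create it, registering cleanup that runs even if the test fails.
        os.mkdir(path)
        self.addCleanup(shutil.rmtree, path, ignore_errors=True)
        self.assertTrue(os.path.exists(path))
        # Delete it and assert it is gone.
        os.rmdir(path)
        self.assertFalse(os.path.exists(path))

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(ResourceLifecycleTest))
assert result.wasSuccessful()
```

The 'addCleanup' registration is the important part: it makes the test safe to fail at any step without leaking the resource.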
Functional tests are also used to target larger scope, such as agents.
Many good examples exist: see the OVS, L3 and DHCP agent functional tests.
Such tests target a top level agent method and assert that the system
interaction that was supposed to be performed was indeed performed.
For example, to test the DHCP agent's top level method that accepts network
attributes and configures dnsmasq for that network, the test:

* Instantiates an instance of the DHCP agent class (but does not start its
  process).
* Calls its top level function with prepared data.
* Creates a temporary namespace and device, and calls 'dhclient' from that
  namespace.
* Asserts that the device successfully obtained the expected IP address.

Fullstack Tests
~~~~~~~~~~~~~~~

The idea behind "fullstack" testing is to fill a gap between unit + functional
tests and Tempest. Tempest tests are expensive to run, and target black box API
tests exclusively. Tempest requires an OpenStack deployment to be run against,
which can be difficult to configure and set up. Full stack testing addresses
these issues by taking care of the deployment itself, according to the topology
that the test requires. Developers further benefit from full stack testing as
it can sufficiently simulate a real environment and provide a rapidly
reproducible way to verify code while you're still writing it.

Full stack tests set up their own Neutron processes (server & agents). They
assume a working RabbitMQ and MySQL server before the run starts. Instructions
on how to run fullstack tests on a VM are available below.

Each test defines its own topology (what and how many servers and agents should
be running).

Since the test runs on the machine itself, full stack testing enables
"white box" testing. This means that you can, for example, create a router
through the API and then assert that a namespace was created for it.

Full stack tests run in the Neutron tree with Neutron resources alone. You
may use the Neutron API (the Neutron server is set to NOAUTH so that Keystone
is out of the picture). VMs may be simulated with a container-like class:
neutron.tests.fullstack.resources.machine.FakeFullstackMachine.
An example of its usage may be found at:
neutron/tests/fullstack/test_connectivity.py.

Full stack testing can simulate multi node testing by starting an agent
multiple times. Specifically, each node would have its own copy of the
OVS/DHCP/L3 agents, all configured with the same "host" value. Each OVS agent
is connected to its own pair of br-int/br-ex, and those bridges are then
interconnected.

.. image:: images/fullstack_multinode_simulation.png

Segmentation at the database layer is guaranteed by creating a database
per test. The messaging layer achieves segmentation by utilizing a RabbitMQ
feature called 'vhosts'. In short, just like a MySQL server serves multiple
databases, so can a RabbitMQ server serve multiple messaging domains.
Exchanges and queues in one 'vhost' are segmented from those in another
'vhost'.

Full stack testing is a good fit when:

1) You'd like to test the interaction between Neutron components (server
   and agents) and have already tested each component in isolation via unit or
   functional tests. You should have many unit tests, fewer tests to test
   a component and even fewer to test their interaction. Edge cases should
   not be tested with full stack testing.
2) You'd like to increase coverage by testing features that require multi node
   testing such as l2pop, L3 HA and DVR.
3) You'd like to test agent restarts. We've found bugs in the OVS, DHCP and
   L3 agents and haven't found an effective way to test these scenarios. Full
   stack testing can help here as the full stack infrastructure can restart an
   agent during the test.

Neutron offers a Quality of Service API, initially offering bandwidth
capping at the port level. In the reference implementation, it does this by
utilizing an OVS feature.
neutron.tests.fullstack.test_qos.TestQoSWithOvsAgent.test_qos_policy_rule_lifecycle
is a positive example of how the fullstack testing infrastructure should be
used. It creates a network, subnet, QoS policy & rule and a port utilizing that
policy. It then asserts that the expected bandwidth limitation is present on
the OVS bridge connected to that port. The test is a true integration test, in
the sense that it invokes the API and then asserts that Neutron interacted
with the hypervisor appropriately.

API Tests
~~~~~~~~~

API tests (neutron/tests/api/) are intended to ensure the function
and stability of the Neutron API. As much as possible, changes to
this path should not be made at the same time as changes to the code
to limit the potential for introducing backwards-incompatible changes,
although the same patch that introduces a new API should include an API
test.

Since API tests target a deployed Neutron daemon that is not test-managed,
they should not depend on controlling the runtime configuration
of the target daemon. API tests should be black-box - no assumptions should
be made about implementation. Only the contract defined by Neutron's REST API
should be validated, and all interaction with the daemon should be via
the REST API.

neutron/tests/api was copied from the Tempest project. The Tempest networking
API directory was frozen and any new tests belong to the Neutron repository.

Development Process
-------------------

It is expected that any new changes that are proposed for merge
come with tests for that feature or code area. Any bug
fixes that are submitted must also have tests to prove that they stay
fixed! In addition, before proposing for merge, all of the
current tests should be passing.

Structure of the Unit Test Tree
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The structure of the unit test tree should match the structure of the
code tree, e.g. ::

 - target module: neutron.agent.utils

 - test module: neutron.tests.unit.agent.test_utils

Unit test modules should have the same path under neutron/tests/unit/
as the module they target has under neutron/, and their name should be
the name of the target module prefixed by `test_`. This requirement
is intended to make it easier for developers to find the unit tests
for a given module.

Similarly, when a test module targets a package, that module's name
should be the name of the package prefixed by `test_` with the same
path as when a test targets a module, e.g. ::

 - target package: neutron.ipam

 - test module: neutron.tests.unit.test_ipam

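
The convention can be captured in a small helper. This function is hypothetical and for illustration only; the authoritative check is the script shipped in the tree:

```python
def expected_test_module(target):
    """Return the unit test module name the convention above expects for a
    target module or package, e.g.
    neutron.agent.utils -> neutron.tests.unit.agent.test_utils."""
    parts = target.split('.')
    assert parts[0] == 'neutron', 'convention applies to the neutron namespace'
    # Keep the intermediate path, prefix the final component with 'test_'.
    return '.'.join(['neutron', 'tests', 'unit']
                    + parts[1:-1] + ['test_' + parts[-1]])

assert expected_test_module('neutron.agent.utils') == \
    'neutron.tests.unit.agent.test_utils'
assert expected_test_module('neutron.ipam') == 'neutron.tests.unit.test_ipam'
```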
The following command can be used to validate whether the unit test
tree is structured according to the above requirements: ::

    ./tools/check_unit_test_structure.sh

Where appropriate, exceptions can be added to the above script. If
code is not part of the Neutron namespace, for example, it's probably
reasonable to exclude its unit tests from the check.

Running Tests
-------------

There are three mechanisms for running tests: run_tests.sh, tox,
and nose2. Before submitting a patch for review you should always
ensure all tests pass; a tox run is triggered by the Jenkins gate
executed on Gerrit for each patch pushed for review.

With these mechanisms you can either run the tests in the standard
environment or create a virtual environment to run them in.

By default after running all of the tests, any pep8 errors
found in the tree will be reported.

You can use the `run_tests.sh` script in the root source directory to execute
tests in a virtualenv::

    ./run_tests.sh

You can use `nose2`_ to run individual tests, as well as use it for debugging
portions of your code::

    source .venv/bin/activate
    pip install nose2
    nose2

There are disadvantages to running nose2 - the tests are run sequentially, so
race condition bugs will not be triggered, and the full test suite will
take significantly longer than tox & testr. The upside is that testr has
some rough edges when it comes to diagnosing errors and failures, and there is
no easy way to set a breakpoint in the Neutron code and enter an
interactive debugging session while using testr.

Note that nose2's predecessor, `nose`_, does not understand the
`load_tests protocol`_ introduced in Python 2.7. This limitation will result in
errors being reported for modules that depend on load_tests
(usually due to use of `testscenarios`_). nose, therefore, is not supported,
while nose2 is.

.. _nose2: http://nose2.readthedocs.org/en/latest/index.html
.. _nose: https://nose.readthedocs.org/en/latest/index.html
.. _load_tests protocol: https://docs.python.org/2/library/unittest.html#load-tests-protocol
.. _testscenarios: https://pypi.python.org/pypi/testscenarios/

Neutron, like other OpenStack projects, uses `tox`_ for managing the virtual
environments for running test cases. It uses `Testr`_ for managing the running
of the test cases.

Tox handles the creation of a series of `virtualenvs`_ that target specific
versions of Python.

Testr handles the parallel execution of series of test cases as well as
the tracking of long-running tests and other things.

For more information on the standard Tox-based test infrastructure used by
OpenStack and how to do some common test/debugging procedures with Testr,
see this wiki page:

    https://wiki.openstack.org/wiki/Testr

.. _Testr: https://wiki.openstack.org/wiki/Testr
.. _tox: http://tox.readthedocs.org/en/latest/
.. _virtualenvs: https://pypi.python.org/pypi/virtualenv

Running pep8 and unit tests is as easy as executing this in the root
directory of the Neutron source code::

    tox

Since pep8 includes running pylint on all files, it can take quite some time
to run. To restrict the pylint check to only the files altered by the latest
patch changes::

    tox -e pep8 HEAD~1

To run only the unit tests::

    tox -e py27

To run functional tests that do not require sudo privileges or
specific-system dependencies::

    tox -e functional

To run all the functional tests, including those requiring sudo
privileges and system-specific dependencies, the procedure defined by
tools/configure_for_func_testing.sh should be followed.

IMPORTANT: configure_for_func_testing.sh relies on DevStack to perform
extensive modification to the underlying host. Execution of the
script requires sudo privileges and it is recommended that the
following commands be invoked only on a clean and disposable VM.
A VM that has had DevStack previously installed on it is also fine. ::

    git clone https://git.openstack.org/openstack-dev/devstack ../devstack
    ./tools/configure_for_func_testing.sh ../devstack -i
    tox -e dsvm-functional

The '-i' option is optional and instructs the script to use DevStack
to install and configure all of Neutron's package dependencies. It is
not necessary to provide this option if DevStack has already been used
to deploy Neutron to the target host.

To run all the full-stack tests, you may use: ::

    tox -e dsvm-fullstack

Since full-stack tests often require the same resources and
dependencies as the functional tests, using the configuration script
tools/configure_for_func_testing.sh is advised (as described above).
When running full-stack tests on a clean VM for the first time, we
advise running ./stack.sh successfully first to make sure all of Neutron's
dependencies are met. Full-stack based Neutron daemons produce logs in a
sub-folder of /tmp/dsvm-fullstack-logs (for example, a test named
"test_example" will produce logs in /tmp/dsvm-fullstack-logs/test_example/),
so that is a good place to look if your test is failing.
The fullstack test suite assumes that the 240.0.0.0/4 (Class E) range in the
root namespace of the test machine is available for its use.

To run the api tests, deploy Tempest and Neutron with DevStack and
then run the following command: ::

    tox -e api

If tempest.conf cannot be found at the default location used by
DevStack (/opt/stack/tempest/etc) it may be necessary to set
TEMPEST_CONFIG_DIR before invoking tox: ::

    export TEMPEST_CONFIG_DIR=[path to dir containing tempest.conf]
    tox -e api

Running Individual Tests
~~~~~~~~~~~~~~~~~~~~~~~~

For running individual test modules, cases or tests, you just need to pass
the dot-separated path you want as an argument.

For example, the following would run only a single test or test case::

    $ ./run_tests.sh neutron.tests.unit.test_manager
    $ ./run_tests.sh neutron.tests.unit.test_manager.NeutronManagerTestCase
    $ ./run_tests.sh neutron.tests.unit.test_manager.NeutronManagerTestCase.test_service_plugin_is_loaded

Or with tox::

    $ tox -e py27 neutron.tests.unit.test_manager
    $ tox -e py27 neutron.tests.unit.test_manager.NeutronManagerTestCase
    $ tox -e py27 neutron.tests.unit.test_manager.NeutronManagerTestCase.test_service_plugin_is_loaded

If you want to pass other arguments to ostestr, you can do the following::

    $ tox -e py27 -- --regex neutron.tests.unit.test_manager --serial

Coverage
--------

Neutron has a fast growing code base and there are plenty of areas that
need better coverage.

To get a grasp of the areas where tests are needed, you can check
current unit tests coverage by running::

    $ ./run_tests.sh -c

Since the coverage command can only show unit test coverage, a coverage
document is maintained that shows test coverage per area of code in:
doc/source/devref/testing_coverage.rst. You could also rely on Zuul
logs, which are generated post-merge (not every project builds coverage
results). To access them, do the following:

* Check out the latest `merge commit <https://review.openstack.org/gitweb?p=openstack/neutron.git;a=search;s=Jenkins;st=author>`_
* Go to: http://logs.openstack.org/<first-2-digits-of-sha1>/<sha1>/post/neutron-coverage/.
* `Spec <https://review.openstack.org/#/c/221494/>`_ is a work in progress to
  provide a better landing page.

Debugging
---------

By default, calls to pdb.set_trace() will be ignored when tests
are run. For pdb statements to work, invoke run_tests as follows::

    $ ./run_tests.sh -d [test module path]

It's possible to debug tests in a tox environment::

    $ tox -e venv -- python -m testtools.run [test module path]

Tox-created virtual environments (venv's) can also be activated
after a tox run and reused for debugging::

    $ tox -e venv
    $ . .tox/venv/bin/activate
    $ python -m testtools.run [test module path]

Tox packages and installs the Neutron source tree in a given venv
on every invocation, but if modifications need to be made between
invocations (e.g. adding more pdb statements), it is recommended
that the source tree be installed in the venv in editable mode::

    # run this only after activating the venv
    $ pip install --editable .

Editable mode ensures that changes made to the source tree are
automatically reflected in the venv, and that such changes are not
overwritten during the next tox run.

Post-mortem Debugging
~~~~~~~~~~~~~~~~~~~~~

Setting OS_POST_MORTEM_DEBUGGER in the shell environment will ensure
that the debugger's post_mortem() method will be invoked on test failure::

    $ OS_POST_MORTEM_DEBUGGER=pdb ./run_tests.sh -d [test module path]

Supported debuggers are pdb and pudb. Pudb is a full-screen, console-based
visual debugger for Python which lets you inspect variables, the stack,
and breakpoints in a very visual way, keeping a high degree of compatibility
with pdb [#pudb]_::

    $ ./.venv/bin/pip install pudb

    $ OS_POST_MORTEM_DEBUGGER=pudb ./run_tests.sh -d [test module path]

.. [#pudb] PUDB debugger:
   https://pypi.python.org/pypi/pudb