last modified: 17 March 2007
The Linux-USB kernel code is complex enough to need some focused testing efforts, and this web page tries to give an overview of the key ones. There are basically two things to test: hosts, such as desktop PCs or other "USB masters"; and peripherals (devices, gadgets, or whatever you want to call them) that act as "USB slaves" and implement the function the USB host is accessing. (On some hardware Linux supports OTG, which uses both host and gadget APIs and adds mode negotiation support that needs testing.) Each of those has at least two layers of device drivers to test.
Although most of these tests will be of interest to folks debugging, developing, or maintaining USB system software such as controller drivers or device firmware, some may be useful to sysadmins or end users who suspect they have flaky USB hardware.
If you're bringing up new USB hardware with Linux, perhaps with embedded or at least non-PCI controllers, you may find it hard to debug without hardware tools such as a USB protocol sniffer or a logic analyser. As is the case with more general-purpose JTAG tools, higher-end tools (such as those from LeCroy, which bought CATC) can often be rented. At this writing, Total Phase has the most affordable tools (including a relatively new high speed analyser), with software that runs natively on Linux. There are probably a dozen or more other vendors; look around.
You're likely to be interested in this if you're maintaining a USB Host Controller Driver (HCD), especially if it's one that's not widely available on PCI hardware; or if you're using Linux as a host when testing some kinds of product. (The same tests can be used from a known-working Linux host to test a USB Device Controller driver.) Such tests help serve as driver regression tests, so they're good to have as arrows in the test case quiver of a Linux distributor. These are also the tests that might be helpful in turning up hardware problems with some USB configurations. You may even notice interesting performance characteristics.
Other than the old usbstress 0.3 software (which wasn't widely used), the primary effort here started with the 2.5.42 kernel release. It consists of the host-side "usbtest" kernel driver, the user-mode "testusb" program that drives it, and test firmware (or a gadget driver) running on the peripheral being talked to; each is described below.
Assuming you have a recent Linux kernel (such as 2.6.12), you will already have the kernel source code for the tests, so the main question is how to get a device to test with. The simplest solution for most people will involve ordering a specialized PCI card and using it on a Linux PC; see below.
You won't notice issues with class or vendor-specific functionality with the kind of test setup described here, or with some of the less-mainstream Linux-USB APIs. Certain traffic patterns won't be covered at all, and there's not much testing for isochronous transfer modes. In other words, don't forget to test with real off-the-shelf peripherals and their Linux drivers too.
At this writing, there are many peripherals known to work with this testing software. The "bulk sink" and "bulk source" functionality is also supported by most peripheral firmware development kits, as are "iso sink" and "iso source" (for hardware that supports isochronous transfers).
The first type of peripheral is anything using a full speed Cypress EZ-USB chip, like some of the Keyspan serial adapters. The peripheral can use GPL'd firmware written by Martin Diehl instead of whatever it might normally run. (The source for that is in the firmware/ezusb/testing area in CVS for the Linux-Hotplug project; Cypress development kits include similar drivers.) Many products (notably, many types of serial adapters) use those chips internally and rely on device drivers (or fxload) to download the firmware; you can disable their "official" device drivers and then use them for testing. Store the test firmware (which doesn't currently support iso transfers) in the /etc/hotplug/usb directory so that you can download it with 'fxload' from http://linux-hotplug.sourceforge.net. Then enable the usbtest kernel driver and install this /etc/hotplug/usb/usbtest driver setup script; after that you should be able to run these tests with very little trouble, either in a formal "test this now" mode or as background tasks in parallel with other activity (including other USB activity).
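To make the firmware-download step concrete, here is a sketch of a hotplug-style setup script. The firmware path and default device node are placeholder names (not the actual ones used by the linux-hotplug project's script), and the command is echoed rather than run:

```shell
#!/bin/sh
# Hypothetical hotplug-style setup step for the EZ-USB test firmware.
# FIRMWARE and the default DEVICE below are placeholders, not real paths.
FIRMWARE=/etc/hotplug/usb/test-fx.ihx
DEVICE=${DEVICE:-/proc/bus/usb/001/002}   # hotplug sets $DEVICE itself

# "-t fx" tells fxload the chip is an EZ-USB FX part; after the
# download, the device renumerates and usbtest can bind to it.
FXLOAD_CMD="fxload -t fx -I $FIRMWARE -D $DEVICE"
echo "$FXLOAD_CMD"    # dry run; use eval "$FXLOAD_CMD" on real hardware
```

On a real setup the script would run under hotplug at enumeration time, so the download happens automatically whenever the device is plugged in.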
The second type of peripheral uses the Linux-USB Gadget driver framework API. That's standard in current Linux 2.4 and 2.6 kernels. It includes a Gadget Zero driver. To use it, you need a hardware-specific driver to make your USB controller implement that API. High speed USB peripherals can work, as well as full and low speed ones. There's a user-mode version of that driver, which optionally supports testing for a variety of isochronous transfer rates. (The collection of supported hardware is beginning to grow. In fact, testing the hardware-specific controller driver relies on gadget zero and host side tests like these.)
This is by far the simplest option. Two ways to run high speed peripherals on Linux are: (a) a PC with a spare PCI slot, plus a Net2280EVB card (made by PLX, available from various online sources for around $US 105) running most Linux 2.6 kernels; or, if you can work with non-x86 embedded Linux environments using buildroot, (b) an ATNGW100 board (from Atmel, a small but complete Linux-capable board available from distributors like Digi-Key for about $US 89) running Linux 2.6.23+.
Hey! Intel sells official USB2 compliance testing devices, at around $100 each; details are in a PDF. Maybe you can help make these work with "usbtest" or related code.
The testusb.c program just issues ioctls to perform the tests implemented by the kernel driver. It can generate a variety of transfer patterns; you should make sure to test both regular streaming and mixes of transfer sizes (including short transfers), maybe by using this test.sh script. Run testusb like this:
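A minimal wrapper in the spirit of that test.sh script might look like the sketch below; the device path, iteration count, and size list are illustrative, not the script's actual values, and commands are echoed rather than run:

```shell
#!/bin/sh
# Sweep packet sizes so both full-size and short transfers get tested.
# Commands are echoed (dry run); drop the "echo" to actually run them.
DEVICE=${DEVICE:-/proc/bus/usb/001/002}
for size in 512 256 64 13 1; do     # odd sizes force short transfers
    CMD="testusb -D $DEVICE -c 200 -s $size"
    echo "$CMD"
done
```

Mixing sizes like this matters because short-transfer completion takes different code paths in HCDs than full-packet streaming does.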
[root@krypton misc]# testusb
must specify '-a' or '-D dev'
usage: testusb [-a] [-D dev] [-n] [-c iterations] [-s packetsize] [-g sglen]
[root@krypton misc]#
Use 'testusb -a' to test all recognized devices in parallel (one thread per device). Here's output from a test run (with an old usbtest driver) on a uniprocessor, for two high speed FX2 devices: one with firmware for bulk IN transfers, the other with firmware for bulk OUT. That's with lots of I/O parallelism, so these would likely be good SMP test modes too:
[root@krypton misc]# usbtree
/: Bus 01.Port 1: Dev 1, Class=root_hub, Driver=ehci-hcd/5p, 480M
    |__ Port 1: Dev 2, If 0, Class=vend., Driver=usbtest, 480M
    |__ Port 2: Dev 3, If 0, Class=vend., Driver=usbtest, 480M
[root@krypton misc]#
[root@krypton misc]# testusb -a
unknown speed   /proc/bus/usb/001/003
unknown speed   /proc/bus/usb/001/002
/proc/bus/usb/001/002 test 0 took 0.000011 sec
/proc/bus/usb/001/003 test 0 took 0.000006 sec
/proc/bus/usb/001/003 test 1 took 0.201934 sec
/proc/bus/usb/001/002 test 2 took 0.226852 sec
/proc/bus/usb/001/003 test 3 took 0.211918 sec
/proc/bus/usb/001/002 test 4 took 0.222404 sec
/proc/bus/usb/001/003 test 5 took 2.137454 sec
/proc/bus/usb/001/002 test 6 took 2.133821 sec
/proc/bus/usb/001/003 test 7 took 2.125387 sec
/proc/bus/usb/001/002 test 8 took 2.115402 sec
[root@krypton misc]#
What did the tests do? UTSL (Use The Source, Luke); current versions add at least control message tests (covering many "chapter 9" spec behaviors, plus unlink testing) and isochronous transfer support. Here's a summary from a different test run, with more tests, using firmware that doesn't support the ISO transfer tests (#15 and #16):
[root@krypton misc]# dmesg | tail -15
usbtest 2-2.4:3.0: TEST 0: NOP
usbtest 2-2.4:3.0: TEST 1: write 512 bytes 1000 times
usbtest 2-2.4:3.0: TEST 2: read 512 bytes 1000 times
usbtest 2-2.4:3.0: TEST 3: write/512 0..512 bytes 1000 times
usbtest 2-2.4:3.0: TEST 4: read/512 0..512 bytes 1000 times
usbtest 2-2.4:3.0: TEST 5: write 1000 sglists 32 entries of 512 bytes
usbtest 2-2.4:3.0: TEST 6: read 1000 sglists 32 entries of 512 bytes
usbtest 2-2.4:3.0: TEST 7: write/512 1000 sglists 32 entries 0..512 bytes
usbtest 2-2.4:3.0: TEST 8: read/512 1000 sglists 32 entries 0..512 bytes
usbtest 2-2.4:3.0: TEST 9: ch9 (subset) control tests, 1000 times
usbtest 2-2.4:3.0: TEST 10: queue 32 control calls, 1000 times
usbtest 2-2.4:3.0: TEST 11: unlink 1000 reads of 512
usbtest 2-2.4:3.0: TEST 12: unlink 1000 writes of 512
usbtest 2-2.4:3.0: TEST 13: set/clear 1000 halts
usbtest 2-2.4:3.0: TEST 14: 1000 ep0out, 0..255 vary 1
[root@krypton misc]#
On the host side, there are two types of test output. One is the results of the command line invocations; that's easily captured with the Linux "script" command. The other is the driver output, which is captured by the "syslog" daemon given an appropriate "syslog.conf" setup. (On some versions of this driver you may need to modify the driver by hand to re-enable this output.)
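For example, a syslog.conf line along these lines captures the kernel's debug-level messages, which is where the usbtest driver's output appears (the log file name here is just an example):

```
# route kernel debug-level messages (including usbtest output) to a file
kern.debug    /var/log/usbtest.log
```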
Tests #11 and #12 aren't very interesting from the perspective of the peripherals, but they cover some tricky code paths within HCDs and usbcore. Notice that those tests don't yet handle device disconnect/reconnect (do those by hand, at awkward spots including mid-test), suspend/resume, or reset, but they do cover significant portions of the rest of the Linux-USB host side API.
You can use module options to make the "usbtest" driver bind to any USB peripheral that enumerates and then the "testusb" program can talk to it with "test 9" and "test 10". Those are chapter 9 tests (control traffic) that every USB device should be able to pass. If even those simple tests don't work, you'll have found a bug in either that peripheral's firmware, some hardware component, or in Linux (probably the HCD, which can often be changed with an inexpensive PCI card). Those two tests could help system administrators track down some types of USB problems.
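A sketch of that binding step is below. The "vendor" and "product" parameter names match my understanding of usbtest's module options (treat them as assumptions), and the IDs shown are the ones commonly used by Gadget Zero, standing in for your device's. Commands are echoed rather than run:

```shell
#!/bin/sh
# Bind usbtest to an arbitrary device by ID, then run the ch9 tests.
# "vendor"/"product" are assumed usbtest module option names; the IDs
# below are Gadget Zero's, used purely as an example.
VID=0x0525
PID=0xa4a0
MODPROBE_CMD="modprobe usbtest vendor=$VID product=$PID"
TEST_CMD="testusb -a -c 100"       # the run includes tests 9 and 10
echo "$MODPROBE_CMD"               # dry run; drop the echoes to execute
echo "$TEST_CMD"
```

If only tests 9 and 10 fail while enumeration works, suspect the peripheral's control-request handling first.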
Test 10 has been particularly effective at shaking out low-level controller and driver bugs on both host and peripheral/gadget sides. It issues many back-to-back control transfers, and induces faults such as protocol stalls; so it's exposed races, fault handling bugs, and various annoying combinations of the two. Likewise the scatterlist tests have been good at doing the same thing for bulk transfers, for much the same reasons.
Test #14 can't use the default "testusb" parameters; you'll need to drive it using parameters such as those in the test script. That's also more interesting for peripheral controller testing, since it covers the "control-OUT" type transfers that are essential for supporting RNDIS connections to MS-Windows. The test itself will only work on devices which support some testing-only control messages. (Such as by using "gadget zero" to test the underlying peripheral controller driver, or the Intel test device mentioned above.)
As of 2.5.44, the three main HCDs (EHCI, OHCI, UHCI) seemed to pass those basic tests on at least some basic hardware configurations, on runs of a few hundred iterations. That's clearly a good milestone, but it certainly shouldn't be the last one! (Some host controllers have run these tests for weeks without significant problems; and since 2.5.44, more test cases have been added.)
When you're implementing a USB peripheral by embedding Linux and using the Gadget driver framework, do all the testing outlined here. Gadget drivers are written to a hardware-neutral API, which can support both generic (class style) and vendor-specific functionality. The controller driver implements that API, and you'll drive different parts of that implementation using different gadget drivers along with different host side software.
Your peripheral should certainly pass the www.usb.org USBCV tests. These take about five minutes to run.
These are the same tests described earlier for use in HCD testing. The difference is that here the "known good" component is the Linux-USB host, rather than the peripheral (which is what the host is testing). Most Linux PCs should work just fine as the test driver. Run them for quick sanity tests, and as overnight stress loads. Leave them running all week while you do other things with your boards, too; you might turn up something interesting, like an unexpected interaction between different SOC components. If in doubt, use an OHCI controller on your Linux host; that's been used most often for such testing (so it's least likely to hide problems). Also, be sure to use a very recent Linux kernel; bugs in the test code do show up sometimes, and older kernels won't have the fixes.
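An overnight run can be as simple as a loop that stops on the first failure and keeps a log. In the sketch below a harmless echo stands in for the real testusb invocation, and the pass bound is tiny so it can be dry-run anywhere:

```shell
#!/bin/sh
# Repeat the whole suite, stop on the first failure, keep a log.
# "echo testusb -a" stands in for the real command in this dry run.
LOG=${LOG:-usbtest-stress.log}
: > "$LOG"
PASSES=0
MAX=3                                 # use a much larger bound overnight
while [ "$PASSES" -lt "$MAX" ]; do
    if ! echo "testusb -a" >> "$LOG" 2>&1; then
        echo "failed after $PASSES passes"
        break
    fi
    PASSES=$((PASSES + 1))
done
echo "completed $PASSES passes"
```

Stopping at the first failure preserves the interesting kernel log state instead of burying it under later iterations.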
Note that once these tests work, you can use the test.sh script (described earlier) as an unattended scripted test. You should be able to run it for weeks and never see an error.
These control tests cover more than USBCV, notably testing fault handling and, only with gadget zero, control-OUT transfers.
The test script linked above includes useful test parameters. Most of these tests can be run in modes where they verify that data matches some specific pattern. You should use that test script to make sure that enough of the interesting boundary cases are covered by your tests.
You may wish to make use of the "mod63" data pattern tests. These don't work with all packet sizes, so you'll need to set them up by hand, but they are good ways to help catch problems like accidentally duplicated packets or buffers.
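The reason mod63 is effective: 63 is odd while packet sizes are powers of two, so the first byte of each packet differs from packet to packet, and a duplicated or dropped packet can't reproduce the expected data. A tiny sketch, assuming the pattern is byte[i] = i mod 63 (my reading of usbtest's pattern generator):

```shell
#!/bin/sh
# First byte of each 64-byte packet under a buf[i] = i % 63 pattern:
# packets start at offsets 0, 64, 128, ... and those bytes all differ,
# so a repeated or missing packet shows up as a data mismatch.
mod63_byte () { echo $(( $1 % 63 )); }

mod63_byte 0      # packet 0 starts with value 0
mod63_byte 64     # packet 1 starts with value 1
mod63_byte 128    # packet 2 starts with value 2
```

Compare that with a simple incrementing-byte pattern, which repeats every 256 bytes and so lines up exactly with many packet sizes, hiding duplication.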
As a rule, interrupt transfers are handled the same as bulk transfers; they shouldn't need much separate testing. However, you may want to enable the "usbtest" and/or "g_zero" module options which let you test those transfers.
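For example (the "force_interrupt" parameter name is my understanding of usbtest's option for this, so treat it as an assumption; the command is echoed rather than run):

```shell
#!/bin/sh
# Re-run the bulk-style tests against interrupt endpoints.  The
# "force_interrupt" parameter name is an assumption about usbtest.
CMD="modprobe usbtest force_interrupt=1"
echo "$CMD"     # dry run; then re-run testusb as before
```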
At this writing, no generally available gadget drivers require isochronous transfer support.
Once Gadget Zero is basically working, you should also start Ethernet-style testing, using the g_ether gadget driver. That normally uses CDC Ethernet to talk to hosts, and will cover important code paths that "usbtest" won't reach. Specifically, transfers go in both directions concurrently; they use queue depths greater than one; and the rates at which requests enter and leave the queues vary considerably more. (Races will show up a lot more readily!)
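A minimal manual sequence might look like the sketch below; the interface name and addresses are assumptions about your setup, and all commands are echoed rather than run:

```shell
#!/bin/sh
# Dry-run sketch of g_ether testing; "usb0" and the addresses are
# assumptions about your configuration.
GADGET_CMD="modprobe g_ether"                 # on the peripheral
HOST_UP_CMD="ifconfig usb0 192.168.7.1 up"    # on the host
# flood pings keep transfers queued deeply in both directions at once
PING_CMD="ping -f -s 1024 192.168.7.2"
echo "$GADGET_CMD"
echo "$HOST_UP_CMD"
echo "$PING_CMD"
```

Flood pings are a crude but effective load; NFS mounts or bulk file copies over the link stress the queues in different ways and are worth adding.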
With these instructions, you should use both Linux and MS-Windows as the USB host. Be sure to enable the RNDIS option for g_ether.
As with Gadget Zero, once this works for short periods you should ensure that it works reliably for days at a time. All tests should run for at least 24 hours without errors.
Don't forget connect/disconnect testing; do it in the middle of those bulk/control/iso operations, and be sure all pending transactions are properly cleaned up. There's also "softconnect" testing to be done on systems that provide software control over the D+ (or D-) pullup used to signal device disconnect: when you "rmmod" a gadget driver while the peripheral is cabled to the host, the host should normally see the device disconnect. Likewise, when you modprobe the gadget driver, the host should immediately detect a new peripheral and enumerate it. Halting or rebooting your Linux peripheral should also disconnect it from the host.
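A driver-cycling loop on the peripheral side can exercise those softconnect paths; in this dry-run sketch the commands are echoed, and on real hardware the host should log exactly one disconnect and one enumeration per cycle:

```shell
#!/bin/sh
# Cycle the gadget driver to exercise softconnect paths; watch the
# host's "dmesg" for clean disconnect/enumerate pairs each time.
i=0
while [ "$i" -lt 5 ]; do
    echo "rmmod g_zero"        # dry run; host should see a disconnect
    echo "modprobe g_zero"     # host should enumerate a new device
    i=$((i + 1))               # insert a real delay between cycles
done
echo "cycled $i times"
```

Varying the delay between cycles (including making it very short) helps catch races between disconnect cleanup and the next enumeration.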
Other gadget drivers are also available for testing, but once your controller works well with those two drivers it's much less likely you'll find significant bugs that aren't related to the particular gadget driver you're using.
If your system can operate in OTG mode, or if it's a development board that configures its single USB port in either host or peripheral roles, you should add a few basic hardware tests to your suite. In particular, only host-only, non-OTG, non-SRP configurations should ever provide VBUS power by default; VBUS switching bugs are easy to introduce on dual-role boards, so you should test this. If you have a device with an LED that lights when VBUS is powered, it's easy to use in manual testing to check whether VBUS power is on or off.
If your peripheral supports USB OTG, run through all of the OPT tests. These tests are detailed in the OTG Compliance Plan for the USB 2.0 Specification. That document is available from www.usb.org, which also makes the OPT test equipment available. There are several dozen of these tests, covering your device in both the "A" role (default host) and the "B" role (default peripheral); so make sure both host and peripheral side stacks work well before you start running these tests.
Your peripheral might also implement just a subset of OTG, such as the SRP protocol. You can test just that portion of the OTG stack, too.
You're likely to be interested in this if you're developing a USB peripheral, and want to make it work well with Linux hosts. You can implement the peripheral by embedding Linux, or with some other OS.
There are several levels of testing that a Linux host can perform on your peripheral. The basic test is whether Linux-USB can enumerate your peripheral and parse its descriptors; that's basic plugfest-style testing, which all non-defective peripherals should support. (Look at /proc/bus/usb/devices after Linux enumerates the device, and verify that the descriptors are displayed correctly.) There are also some chapter 9 tests that your peripheral should handle; if it handles the analogous USB-IF tests (from a Windows host), it should pass these with little trouble. For full function support you must make sure Linux host side applications can use your peripheral, through some kind of device driver(s). (User mode drivers can sometimes work here.)
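A quick host-side descriptor check can be sketched like this; the vendor:product ID shown is the one commonly used by Gadget Zero, standing in for your device's, and the commands are echoed rather than run:

```shell
#!/bin/sh
# Dry-run sketch of basic enumeration checks from a Linux host.
ID=0525:a4a0                           # placeholder vendor:product ID
echo "cat /proc/bus/usb/devices"       # shows the parsed descriptors
echo "lsusb -v -d $ID"                 # full descriptor dump for one ID
```

Comparing the lsusb dump against your firmware's descriptor tables catches length-field and string-descriptor mistakes early.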
The http://linux-usb-test.sourceforge.net documentation is a resource for interoperability testing that you may find helpful.
You should do all such basic testing with all major Linux host configurations: the three primary types of host controller (EHCI, OHCI, and UHCI) with their drivers, and with both USB 1.1 and USB 2.0 hubs (with transaction translators). (Even in the 2.6.3 kernel, there are still situations where the UHCI driver behaves differently from other HCDs.)
You can't do thorough testing without both kinds of external hub, and you probably need an add-on PCI host controller card to make sure you have the other kind of USB 1.1 controller (OHCI or UHCI) and/or a USB 2.0 controller (EHCI). Or at least borrow the use of a system with such hardware, if you don't want to own it yourself. Although it's a goal to minimize differences in how the different USB host controllers behave on Linux, they can't all report the same status codes given the same errors.
High-speed capable peripherals must be tested both at high speed, connecting to EHCI directly or through a USB 2.0 hub, and at full speed. Most other peripherals run only at full speed (or sometimes low speed), so they won't need as much testing.
Full-speed (or low speed) tests connect peripherals in one of two ways: directly to a USB 1.1 host controller (OHCI or UHCI), or through the transaction translator in a USB 2.0 hub attached to an EHCI controller.
The "testusb" program gives you access to two basic kinds of "chapter 9" tests. Test 9 just makes sure a number of required operations are handled correctly; no peripheral should ever fail it. Test 10 is reasonably aggressive, and tests things like queuing, protocol stalls, short reads, and handling of consecutive faults (where it's easy for peripherals and hosts to misbehave). Of course, if you're testing a peripheral you'll also want to be sure it passes the USB-IF "USBCV" tests (after paying a MSFT license tax, since that software only runs on Windows hosts). Any peripheral running under Linux should pass all of those tests, regardless of how much additional Linux integration is done.
The more interesting level is whether your Linux-using customers can use your USB peripheral through host-side applications. Some peripherals can work through 'usbfs' with user mode device drivers; those peripherals tend to rely only on simple half-duplex protocols. (Some very useful USB-IF class specifications are half duplex...) Otherwise you'll need some kernel device driver. The Linux kernel community strongly prefers GPL'd device drivers, which can safely be merged into kernel distributions. Closed-source drivers are undesirable, and can't usually be bugfixed. If you feel you must close your source, do it in user mode applications.
If your peripheral works with Linux, presumably you'll have a set of application level tests to verify higher level functionality. Here's a short checklist of other things you need to support from those host side drivers:
Of course, there are also the usual sort of driver portability issues. You can often expect other people in the Linux community to help with those issues, if your software is clean and portable. Once you have the basics working (plug/unplug, all host controllers, and your driver functionality meeting current application requirements), users should be able to submit patches for the rest.