There are two approaches for using DPDK acceleration with Open vSwitch. One is the Open vSwitch fork from Intel, called dpdk-ovs; the other is implemented directly in Open vSwitch, with a different approach, also from Intel.
VirtualBox preparations
To run Open vSwitch with DPDK I used a virtual machine (VirtualBox), because the NIC I had on my laptop was not supported. I created three virtual NICs for my VM: one behind NAT, used to ssh into the VM from the host, and two in host-only mode, to be used for testing.
If you happen to go down this road too, then it’s good advice to do a few things before starting the test application.
First, I would recommend configuring two virtual processors on the virtual machine, which makes it possible to use most of the DPDK test apps, like testpmd. I would also recommend reserving about 4 GB of RAM to make sure DPDK works properly.
Then, to configure a network interface in host-only mode you need to create a host-only adapter (this applies to VirtualBox). Here is a link on how to do that: http://ubuntuforums.org/showthread.php?t=1873650
Then you need to make sure that the interface is assigned an IP address. DHCP should work, and you should be able to ping the guest from the host, but you may need to configure a DHCP server on the virtual network yourself if VirtualBox doesn’t do that automatically.
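If a DHCP server is missing, one can be added from the host with VBoxManage. A minimal sketch, assuming the vboxnet0 adapter and the 192.168.56.0/24 subnet used below; the `cfg` wrapper is my own addition, and it just prints the commands when VBoxManage is not installed:

```shell
#!/bin/bash
# Illustrative helper: run VBoxManage if present, otherwise show what
# would be executed (so the sketch can be read without VirtualBox).
cfg() {
    if command -v VBoxManage >/dev/null 2>&1; then
        VBoxManage "$@"
    else
        echo "would run: VBoxManage $*"
    fi
}

# Give the host side of the host-only adapter a fixed address ...
cfg hostonlyif ipconfig vboxnet0 --ip 192.168.56.1 --netmask 255.255.255.0

# ... and attach a DHCP server handing out leases on that subnet.
cfg dhcpserver add --ifname vboxnet0 --ip 192.168.56.1 --netmask 255.255.255.0 \
    --lowerip 192.168.56.100 --upperip 192.168.56.200 --enable
```

The address range is only an example; pick one that matches your host-only network.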
Ping the two guest interfaces in order to populate the ARP table:
ping 192.168.56.101
ping 192.168.57.101
Find the ARP entries and make them persistent:
arp -n | grep 192.168.56.101
sudo arp -s 192.168.56.101 08:00:27:20:88:10
arp -n | grep 192.168.56.101
# now you should see flags CM on your arp entry
192.168.56.101 ether 08:00:27:20:88:10 CM vboxnet0
arp -n | grep 192.168.57.101
sudo arp -s 192.168.57.101 08:00:27:85:40:f6
arp -n | grep 192.168.57.101
# now you should see flags CM on your arp entry
192.168.57.101 ether 08:00:27:85:40:f6 CM vboxnet1
Installing DPDK
There is good information from Intel on DPDK: quickstarts, howtos etc. For the sake of simplicity, here is a selection of the useful information needed when starting with DPDK for the first time: http://dpdk.org/doc
Getting the code
Simply:

git clone git://dpdk.org/dpdk
Note: At the moment I’m writing this, Open vSwitch doesn’t compile with the latest version of DPDK, so I went with version 1.7.0rc0. Unfortunately there is no tag for it, so you will have to:
git checkout 536ba2d8a867ecf4331673d4c45475e552d57e27 -b version-1.7
Hardware requirements
DPDK works only on a select range of Intel devices. It seems that most of the x86 NICs should work just fine, but I couldn’t get it working on my HP ProBook 6560b laptop, which contains an Intel 82540EM Gigabit Ethernet Controller (rev 02).
To check if your NIC is supported first you need to get the device id and then look it up in the list of supported devices.
lspci lists all the PCI devices; look for your network card and note the PCI vendor and device ID in the square brackets after the type string. For example:
# lspci -nn | grep Ethernet
00:19.0 Ethernet controller [0200]: Intel Corporation Ethernet Connection I217-LM [8086:153a] (rev 05)
The list of known and supported devices can be found in the dpdk repository, under:
lib/librte_eal/common/include/rte_pci_dev_ids.h
...
#define E1000_DEV_ID_PCH_LPT_I217_LM 0x153A
...
If the device is supported then there will be a subsequent line declaring it as supported, for instance:
RTE_PCI_DEV_ID_DECL_EM(PCI_VENDOR_ID_INTEL, E1000_DEV_ID_82540EM)
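Putting the two steps together, here is a small sketch that pulls the device ID out of an `lspci -nn` line so it can be grepped for in the header above. The helper function name and the sample line are illustrative only:

```shell
#!/bin/bash
# Extract the device half of the "[8086:xxxx]" vendor:device pair from
# one line of `lspci -nn` output (Intel vendor ID 8086 assumed).
devid_from_lspci() {
    echo "$1" | grep -o '\[8086:[0-9a-f]\{4\}\]' | cut -d: -f2 | tr -d ']'
}

line="00:19.0 Ethernet controller [0200]: Intel Corporation Ethernet Connection I217-LM [8086:153a] (rev 05)"
devid=$(devid_from_lspci "$line")
echo "device id: $devid"    # prints "device id: 153a"

# Then look it up in the header (case-insensitively, it uses 0x153A there):
# grep -i "0x$devid" $DPDK/lib/librte_eal/common/include/rte_pci_dev_ids.h
```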
If you have no supported NICs then you can always try a hypervisor (VirtualBox for instance) that can emulate one of the supported NICs (like the one above for example).
Compiling DPDK
Before even trying to compile Open vSwitch you need the DPDK code built as one single combined library. See INSTALL.DPDK for details. Basically you need to open $DPDK/config/defconfig_x86_64-default-linuxapp-gcc and change
CONFIG_RTE_BUILD_COMBINE_LIBS=n
to
CONFIG_RTE_BUILD_COMBINE_LIBS=y
Then compile:
make install T=x86_64-default-linuxapp-gcc
Bind interfaces to DPDK
DPDK uses a specialized kernel module to allow userspace applications to control the network card.
cd $DPDK
sudo modprobe uio
sudo insmod x86_64-default-linuxapp-gcc/kmod/igb_uio.ko
Check the binding status of your network devices:
./tools/igb_uio_bind.py --status
You should get something like this:
Network devices using IGB_UIO driver
====================================

Network devices using kernel driver
===================================
0000:00:03.0 '82540EM Gigabit Ethernet Controller' if=eth0 drv=e1000 unused=igb_uio *Active*
0000:00:08.0 '82540EM Gigabit Ethernet Controller' if=eth1 drv=e1000 unused=igb_uio *Active*
0000:00:09.0 '82540EM Gigabit Ethernet Controller' if=eth2 drv=e1000 unused=igb_uio *Active*

Other network devices
=====================
To bind an “active” NIC (one that is being used by Linux already) you will need to force it:
sudo ./tools/igb_uio_bind.py --force --bind=igb_uio eth1
Do the same for the other interface
sudo ./tools/igb_uio_bind.py --force --bind=igb_uio eth2
You can bind the interface back to its original driver with the same tool, i.e.
sudo ./tools/igb_uio_bind.py --force --bind=e1000 eth1
Quick test of DPDK using l2fwd
cd examples/l2fwd
make RTE_SDK=$DPDK RTE_TARGET=x86_64-default-linuxapp-gcc
To run it you will also need to reserve huge pages and mount hugetlbfs.
sudo sysctl -w vm.nr_hugepages=320
sudo mkdir -p /mnt/huge
sudo mount -t hugetlbfs hugetlbfs /mnt/huge
cd examples/l2fwd/build/
sudo ./l2fwd -c 0x3 -n 4 -- -p 0x3 -T 1
You can also specify the number of queues per lcore, but the default value of 1 is fine. The -p 0x3 option is important: it is a bit mask selecting the DPDK ‘ports’ (i.e. network devices) that will be used, in this case ports 0 and 1. The -T 1 parameter makes the application refresh its counters once per second.
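Since the port mask comes up in every DPDK application, here is the arithmetic spelled out; this is pure shell, nothing DPDK-specific, with bit N of the mask enabling DPDK port N:

```shell
# Build the mask for ports 0 and 1: set bits 0 and 1.
mask=$(( (1 << 0) | (1 << 1) ))
printf -- '-p 0x%x\n' "$mask"    # prints "-p 0x3"

# For ports 0 and 2 the mask would instead be:
printf -- '0x%x\n' $(( (1 << 0) | (1 << 2) ))    # prints "0x5"
```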
Now to test that it actually works we will use Wireshark and good old ping.
Start Wireshark on the host and make it capture traffic on the second host-only network interface (in my case this was vboxnet1).
From the shell run:
ping 192.168.56.101 -i 0.2
Wireshark should now show packets coming from 192.168.56.1 with destination 192.168.56.101. The counters in the l2fwd application should also update once a second.

Then close the l2fwd application and observe that Wireshark no longer receives packets.
Other DPDK examples
There are other examples included; I tried testpmd just out of curiosity. Detailed information about running them can be found at: http://www.intel.com/content/dam/www/public/us/en/documents/guides/intel-dpdk-sample-applications-user-guide.pdf
Installing Openvswitch with DPDK
After installing and running DPDK successfully you can start working on getting Open vSwitch up and running with DPDK. I used commit b596218aa8acafd64a4c7d1c3e761f00e50c0c53 and it worked for me.
Compiling OVS with DPDK
Official information can be found here: http://git.openvswitch.org/cgi-bin/gitweb.cgi?p=openvswitch;a=blob_plain;f=INSTALL.DPDK;hb=HEAD
I followed the exact same steps, but I also added -ldl to LIBS, since my x86_64 Ubuntu didn’t link with the default options.
cd openvswitch
./boot.sh
./configure --with-dpdk=$DPDK LIBS=-ldl
make
sudo make install
Database initialization
To manually initialize the database you need to do:
sudo mkdir -p /usr/local/etc/openvswitch
sudo ovsdb-tool create /usr/local/etc/openvswitch/conf.db vswitchd/vswitch.ovsschema
Preparing DPDK
For the DPDK part you need to configure huge pages, insert the kernel modules (uio and igb_uio) and bind the network interfaces to igb_uio. These steps need to be done once after every boot.
sudo sysctl -w vm.nr_hugepages=2000
sudo mkdir -p /mnt/huge
sudo mount -t hugetlbfs hugetlbfs /mnt/huge
sudo modprobe uio
cd $DPDK
sudo insmod x86_64-default-linuxapp-gcc/kmod/igb_uio.ko
sudo ./tools/igb_uio_bind.py --force --bind=igb_uio eth1
sudo ./tools/igb_uio_bind.py --force --bind=igb_uio eth2
Permanent startup configuration
Here is how to set up huge pages and have the kernel modules loaded at startup. I haven’t looked for a ready-made solution to bind the wanted interfaces to DPDK on startup, but there should be a way to do that.
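One possible approach is a small script run from something like /etc/rc.local. This is only a sketch, not a tested init script; it reuses the paths and interface names from this article, and when the bind tool is not present it just prints what it would do:

```shell
#!/bin/bash
# Sketch: bind a fixed list of interfaces to igb_uio at boot.
# Adjust DPDK and the interface names for your system.

DPDK=${DPDK:-/usr/src/dpdk}
BIND="$DPDK/tools/igb_uio_bind.py"

bind_all() {
    # Bind each named interface; fall back to printing the command when
    # the bind tool is missing (e.g. when inspecting this script).
    for ifc in "$@"; do
        if [ -x "$BIND" ]; then
            "$BIND" --force --bind=igb_uio "$ifc"
        else
            echo "would run: $BIND --force --bind=igb_uio $ifc"
        fi
    done
}

modprobe uio 2>/dev/null
insmod "$DPDK/x86_64-default-linuxapp-gcc/kmod/igb_uio.ko" 2>/dev/null
bind_all eth1 eth2
```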
To have hugetlbfs mounted by default at startup you need to add an entry to /etc/fstab:
hugetlbfs /mnt/huge hugetlbfs rw,mode=0777 0 0
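The huge page reservation itself can also be made persistent. A sketch, assuming the page count used above, added to /etc/sysctl.conf (or a file under /etc/sysctl.d/):

```
# reserve the huge pages at boot instead of running sysctl -w by hand
vm.nr_hugepages = 2000
```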
To have the igb_uio.ko module loaded at boot time you need to make it known to modprobe. One good option is to symlink it into the kernel modules tree and rebuild the module dependency list:
sudo ln -s $DPDK/x86_64-default-linuxapp-gcc/kmod/igb_uio.ko /lib/modules/`uname -r`/kernel/drivers/uio/igb_uio.ko
sudo depmod -a
Then you need to add both uio and igb_uio to /etc/modules so that they are loaded at boot time.
My Ubuntu 13.10 installation failed to start the desktop manager properly when one of the cards didn’t get an IP address from DHCP. If that happens to you and you still want the desktop manager, just log on to console 1 (Ctrl + Alt + F1) and run:
sudo service lightdm restart
However disabling the X server altogether should result in better performance overall.
Running openvswitch with DPDK
To manually run openvswitch you must do these each time:
sudo ovsdb-server --remote=punix:/usr/local/var/run/openvswitch/db.sock \
    --remote=db:Open_vSwitch,Open_vSwitch,manager_options \
    --private-key=db:Open_vSwitch,SSL,private_key \
    --certificate=db:Open_vSwitch,SSL,certificate \
    --bootstrap-ca-cert=db:Open_vSwitch,SSL,ca_cert \
    --pidfile --detach
sudo ovs-vsctl --no-wait init
sudo ovs-vswitchd --dpdk -c 0x3 -n 4 -- unix:/usr/local/var/run/openvswitch/db.sock \
    --log-file=/usr/local/var/log/openvswitch/ovs-vswitchd.log --pidfile --detach
Bridge configuration
You need to add a bridge with datapath type netdev in order to make OVS run with DPDK. This means that all datapath switching will be done in userspace.
You may get an error like “ovs-vsctl: Error detected while setting up ‘ovsbr0’. See ovs-vswitchd log for details.” if you don’t have the openvswitch kernel module inserted. The bridge will still be created, so you can ignore the warning and continue.
sudo ovs-vsctl add-br ovsbr0
sudo ovs-vsctl set bridge ovsbr0 datapath_type=netdev
sudo ovs-vsctl add-port ovsbr0 dpdk0 -- set Interface dpdk0 type=dpdk
sudo ovs-vsctl add-port ovsbr0 dpdk1 -- set Interface dpdk1 type=dpdk
Testing with ping and tcpdump
A simple test to see that packets arrive at the bridge could be the following. First add an internal port to the bridge and bring it up:
sudo ovs-vsctl add-port ovsbr0 br0p1 -- set Interface br0p1 type=internal
sudo ifconfig br0p1 up
Then, on the guest machine, start tcpdump:
sudo tcpdump -i br0p1
From the host, try to ping the guest. With the virtual machine configuration described, you can ping something inside the subnet of one of the host-only interfaces. Linux will route the request to the host-only adapter, VirtualBox will relay it, and it will be broadcast in the virtual switch, reaching the br0p1 port as well.
ping 192.168.56.102
Testing with mirroring port
The generic form, on a bridge br0 with ports eth1 and eth2, is:

sudo ovs-vsctl -- set Bridge br0 mirrors=@m \
    -- --id=@eth1 get Port eth1 -- --id=@eth2 get Port eth2 \
    -- --id=@m create Mirror name=mymirror select-dst-port=@eth2 select-src-port=@eth1 output-port=@eth2

Applied to the DPDK ports on our bridge:

sudo ovs-vsctl -- set Bridge ovsbr0 mirrors=@m \
    -- --id=@dpdk0 get Port dpdk0 -- --id=@dpdk1 get Port dpdk1 \
    -- --id=@m create Mirror name=mymirror select-dst-port=@dpdk1 select-src-port=@dpdk0 output-port=@dpdk1
Note: this article is reproduced from https://wiki.linaro.org/LNG/Engineering/OVSDPDKOnUbuntu.