This post covers how to build a simple, cheap, but still powerful traffic ‘mangler’ :) It can control bandwidth, latency, loss and error-injection levels quite granularly, without having to re-plumb or reconfigure the network you are trying to test.

To do this we’ll use two nice features that can be found on pretty much all Linux distros (this post was made with Fedora but should work for most major distributions):

- Traffic Control (tc)

- Ethernet Bridge Administration (brctl)

Configuring this setup as a transparent layer 2 bridge was pretty key for me, as my lab was implementing MPLS-TE/TP; setting up my testing machine with routed interfaces would have been a bit more of a pain (plus the routers would need reconfiguring every time the test rig was connected or disconnected).

Basic topology:

This is basically how it’s implemented: just a laptop (or any Linux PC) with two NICs. For my simple setup it was a Fedora laptop with two cheap USB-C Ethernet dongles (£10 each on Amazon :D)

Setup the Bridge

The first thing, following the physical patching, is to create the bridge on the laptop. After this step the routers should see the laptop transparently and all connectivity between them should come “up”: basic IP connectivity, OSPF, MPLS etc. should all work, as the laptop only examines and forwards the Ethernet frames that encapsulate all of these.
The basic config to create the bridge is as below

sudo ip addr flush dev enp3s0f3u1
sudo ip addr flush dev enp3s0f3u4
sudo brctl addbr TestBr
sudo brctl addif TestBr enp3s0f3u1 enp3s0f3u4
sudo ip link set dev TestBr up

Reviewing what we just did above (note that the device names will need to be substituted for your own; they can be seen in the output of ip link or other tools): first we flush the interfaces to ensure they are nice and clean of IPs etc., next we create a new bridge and add both physical interfaces to it, and finally we bring the bridge up.
You should then confirm IP connectivity between your routers or devices is working as expected before going further.
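On newer distros brctl may not be installed (it has largely been superseded by iproute2); the same bridge can be built with ip alone. A sketch, reusing the device names from my lab (yours will differ):

```shell
# Equivalent bridge setup using only iproute2 (no brctl needed)
sudo ip link add name TestBr type bridge
sudo ip link set dev enp3s0f3u1 master TestBr
sudo ip link set dev enp3s0f3u4 master TestBr
sudo ip link set dev TestBr up

# Confirm both ports have joined the bridge
bridge link show
```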

“mangling” traffic

Now that we have a laptop inline between two devices (two routers in my case), we can start to inject whatever scenarios or issues we are looking to test. The best place to check for any specific scenario you’d like to replicate is the traffic control (tc) man page. Some of the most common testing examples are below though :)
Before going further, it’s worth noting that most of the tc settings control the outbound direction only, e.g. applying a latency deviation will only apply in the outbound direction of a single NIC (this was very desirable in my own testing for building up asymmetrical delay!). You can create symmetrical delays by simply configuring the same delay on both NICs.
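For example, a symmetrical 200ms round trip can be built by applying the same egress delay to both bridge ports; a sketch using my device names:

```shell
# 100ms egress delay on each NIC gives ~200ms round-trip through the bridge
sudo tc qdisc add dev enp3s0f3u1 root netem delay 100ms
sudo tc qdisc add dev enp3s0f3u4 root netem delay 100ms
```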

Add 200ms delay (to one NIC so in one direction only)

sudo tc qdisc add dev enp3s0f3u1 root netem delay 200ms

Change the delay to 500ms (if you already have a value configured you need to “change”, not “add”) in one direction:

sudo tc qdisc change dev enp3s0f3u1 root netem delay 500ms

Variable delay of 500ms +/- 100ms

sudo tc qdisc change dev enp3s0f3u1 root netem delay 500ms 100ms
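netem can also draw the jitter from a distribution table rather than a flat spread; a sketch of the same 500ms ± 100ms with a normal distribution:

```shell
# Same mean delay and deviation, but variation follows a normal
# distribution instead of the default uniform spread
sudo tc qdisc change dev enp3s0f3u1 root netem delay 500ms 100ms distribution normal
```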

Remove delay
Note the syntax to remove settings (the delete keyword removes the whole qdisc from the interface):

sudo tc qdisc delete dev enp3s0f3u1 root
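At any point you can confirm what is currently applied to an interface (no root needed for read-only output):

```shell
# Show the qdisc currently attached to the interface
tc qdisc show dev enp3s0f3u1
```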


The loss percentage can be very low (again quite important for my own testing), with the lowest configurable value being 0.0000000232%. Loss simply discards that percentage of frames.

Straight 10% loss of all packets

sudo tc qdisc add dev enp3s0f3u1 root netem loss 10%

1% loss

sudo tc qdisc add dev enp3s0f3u1 root netem loss 1%
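netem loss also takes an optional correlation value, which makes each drop more likely to follow the previous one — handy for approximating bursty loss rather than purely random drops. A sketch:

```shell
# Drop 1% of frames, with each drop decision 25% correlated with the
# previous one, giving burstier loss than independent random drops
sudo tc qdisc add dev enp3s0f3u1 root netem loss 1% 25%
```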


Corruption introduces a single bit error, at a random offset within the frame, into the configured percentage of frames.

Introduce single bit errors into 1% of frames

sudo tc qdisc add dev enp3s0f3u1 root netem corrupt 1%


Here we can duplicate a percentage of frames, which is quite handy for testing some less ‘well rounded’ applications :)

Duplicating 1% of frames

sudo tc qdisc add dev enp3s0f3u1 root netem duplicate 1%

Bandwidth restriction/limiting

Here we can configure the available egress bandwidth (so, again, only in one direction in our setup unless you apply it to both interfaces).

Set the egress bandwidth to 512kb/s

sudo tc qdisc add dev enp3s0f3u1 root tbf rate 512kbit burst 32kbit latency 50ms

tbf: use the token bucket filter to shape traffic rates
rate: sustained maximum rate
burst: maximum allowed burst
latency: maximum time a packet may wait in the queue before being dropped (tbf requires either latency or limit)
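As a rule of thumb, the burst must be at least the rate in bytes per second divided by the kernel timer frequency, or tbf cannot release tokens fast enough to sustain the configured rate. A quick sketch of that calculation (the 100 Hz tick is an assumption about the kernel; on tickless or high-resolution kernels this is only a rough lower bound):

```shell
# Minimum tbf burst (bytes) = bytes-per-second / timer frequency
rate_bits=512000   # 512 kbit/s
hz=100             # assumed kernel timer frequency
min_burst=$(( rate_bits / 8 / hz ))
echo "$min_burst"  # minimum burst in bytes → 640
```

So for 512kbit/s the 32kbit (4000 byte) burst above is comfortably over the ~640-byte floor.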


All these filters and features can generally be applied in parallel, allowing very granular scenarios to be tested :) Depending on the level of detail you need, I would also recommend baselining your measurements after physically installing the test rig and enabling the bridge, as just having it in-line will introduce some new delay etc. This was roughly +1ms in my own lab, which was fairly significant, but because it was captured we could account for it :)
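Several netem options can even be combined in a single qdisc on one interface; a sketch (values are illustrative, device name from my lab):

```shell
# Delay, jitter, loss and duplication applied together in one netem qdisc
sudo tc qdisc add dev enp3s0f3u1 root netem delay 200ms 50ms loss 0.5% duplicate 0.1%
```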