Basic distributed switching
The basic distributed switching in OpenDaylight is provided by the L2Switch project, which provides layer 2 switch functionality. It is built on top of the OpenFlowPlugin project, using its capabilities to connect to and interact with OpenFlow switches.
The L2Switch project has the following features/components:
- Packet handler: Decodes the incoming packets, and dispatches them appropriately. It defines a packet lifecycle in three stages:
- Decode
- Modify
- Transmit
- Loop remover: Detects loops in the network and removes them.
- ARP handler: Handles ARP packets provided by the packet handler.
- Address tracker: Gathers MAC and IP addresses from network entities.
- Host tracker: Tracks hosts' locations in the network.
- L2Switch main: Installs flows on the switches present in the network.
Getting ready
This recipe requires an OpenFlow switch. If you don't have any, you can use a Mininet-VM with OvS installed.
You can download Mininet-VM images from https://github.com/mininet/mininet/wiki/Mininet-VM-Images. Any version should work.
This recipe will be presented using a Mininet-VM with OvS 2.0.2.
How to do it...
Perform the following steps:
- Start your OpenDaylight distribution using the karaf script. Using this script will give you access to the Karaf CLI:
$ ./bin/karaf
- Install the user-facing feature responsible for pulling in all dependencies needed to enable basic distributed switching:
opendaylight-user@root>feature:install odl-l2switch-switch-ui
It might take a few minutes to complete the installation.
- Creating a network using Mininet:
- Log in to Mininet-VM using:
Username: mininet
Password: mininet
- Clean current Mininet state:
If you're using the same instance as before, you will want to clear its state. We previously created one bridge, br0, so let's delete it:
mininet@mininet-vm:~$ sudo ovs-vsctl del-br br0
- Create the topology:
In order to do so, use the following command:
mininet@mininet-vm:~$ sudo mn --controller=remote,ip=${CONTROLLER_IP} --topo=linear,3 --switch ovsk,protocols=OpenFlow13
Using this command will create a virtual network of three switches that will connect to the controller specified by ${CONTROLLER_IP}. The command will also set up links between the switches and hosts.
We will end up with three OpenFlow nodes in the opendaylight-inventory:
- Type: GET
- Headers: Authorization: Basic YWRtaW46YWRtaW4=
- URL: http://localhost:8080/restconf/operational/opendaylight-inventory:nodes
This request will return the following:
--[cut]--
{
  "id": "openflow:1",
  --[cut]--
},
{
  "id": "openflow:2",
  --[cut]--
},
{
  "id": "openflow:3",
  --[cut]--
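The Authorization header used throughout this recipe is simply the Base64 encoding of OpenDaylight's default admin:admin credentials. As an illustrative sketch (assuming the controller runs on localhost with the default credentials and port, as in this recipe), the same inventory query could be issued from Python:

```python
import base64
import json
import urllib.request

# OpenDaylight's default credentials are admin/admin; base64-encoding
# them yields the "YWRtaW46YWRtaW4=" value used in the headers above.
token = base64.b64encode(b"admin:admin").decode("ascii")
print(token)

# Sketch of the inventory query. It needs a running controller, so it is
# left commented out; the URL matches this recipe's setup:
# url = "http://localhost:8080/restconf/operational/opendaylight-inventory:nodes"
# req = urllib.request.Request(url, headers={"Authorization": "Basic " + token})
# with urllib.request.urlopen(req) as resp:
#     nodes = json.load(resp)["nodes"]["node"]
#     print(sorted(n["id"] for n in nodes))
```

With the three-switch linear topology above, the commented-out query would list openflow:1 through openflow:3.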
- Generate network traffic using mininet.
Between two hosts, using ping:
mininet> h1 ping h2
The preceding command will cause host1 (h1) to ping host2 (h2), and we can see that h1 is able to reach h2.
Between all hosts:
mininet> pingall
The pingall command will make all hosts ping all the other hosts.
- Checking address observations.
This is done thanks to the address tracker, which observes address tuples on a switch's ports (node-connector).
This information is present in the OpenFlow node connector and can be retrieved using the following request (for openflow:2, which is switch 2):
- Type: GET
- Headers: Authorization: Basic YWRtaW46YWRtaW4=
- URL: http://localhost:8080/restconf/operational/opendaylight-inventory:nodes/node/openflow:2/node-connector/openflow:2:1
This request will return the following:
{
  "nodes": {
    "node": [
      {
        "id": "openflow:2",
        "node-connector": [
          {
            "id": "openflow:2:1",
            --[cut]--
            "address-tracker:addresses": [
              {
                "id": 0,
                "first-seen": 1462650320161,
                "mac": "7a:e4:ba:4d:bc:35",
                "last-seen": 1462650320161,
                "ip": "10.0.0.2"
              }
            ]
          },
          --[cut]--
This result means the host with the MAC address 7a:e4:ba:4d:bc:35 has sent a packet to switch 2, and that port 1 of switch 2 handled the incoming packet.
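Assuming the response has the shape shown above, a short Python sketch can extract the learned MAC-to-IP mappings from the address tracker data (the sample below is a trimmed copy of this recipe's response; your addresses will differ):

```python
import json

# Trimmed node-connector entry copied from the response above.
response = json.loads("""
{
  "node-connector": [
    {
      "id": "openflow:2:1",
      "address-tracker:addresses": [
        {
          "id": 0,
          "first-seen": 1462650320161,
          "mac": "7a:e4:ba:4d:bc:35",
          "last-seen": 1462650320161,
          "ip": "10.0.0.2"
        }
      ]
    }
  ]
}
""")

# Collect the address tuples the address tracker has observed per port.
mappings = {}
for connector in response["node-connector"]:
    for addr in connector.get("address-tracker:addresses", []):
        mappings[addr["mac"]] = (connector["id"], addr["ip"])
        print(f"{connector['id']}: {addr['mac']} -> {addr['ip']}")
```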
- Checking the host address and attachment point to the node/switch:
- Type: GET
- Headers: Authorization: Basic YWRtaW46YWRtaW4=
- URL: http://localhost:8080/restconf/operational/network-topology:network-topology/topology/flow:1/
This will return the following:
--[cut]--
<node>
  <node-id>host:c2:5f:c0:14:f3:1d</node-id>
  <termination-point>
    <tp-id>host:c2:5f:c0:14:f3:1d</tp-id>
  </termination-point>
  <attachment-points>
    <tp-id>openflow:3:1</tp-id>
    <corresponding-tp>host:c2:5f:c0:14:f3:1d</corresponding-tp>
    <active>true</active>
  </attachment-points>
  <addresses>
    <id>2</id>
    <mac>c2:5f:c0:14:f3:1d</mac>
    <last-seen>1462650434613</last-seen>
    <ip>10.0.0.3</ip>
    <first-seen>1462650434613</first-seen>
  </addresses>
  <id>c2:5f:c0:14:f3:1d</id>
</node>
--[cut]--
addresses contains the mapping between the MAC address and the IP address, and attachment-points defines the mapping between the MAC address and the switch port.
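Assuming the XML shape shown above, the host's addresses and attachment point can be extracted with a few lines of Python (the sample is trimmed from this recipe's response; your identifiers will differ):

```python
import xml.etree.ElementTree as ET

# Trimmed <node> entry copied from the topology response above.
node = ET.fromstring("""
<node>
  <node-id>host:c2:5f:c0:14:f3:1d</node-id>
  <attachment-points>
    <tp-id>openflow:3:1</tp-id>
    <active>true</active>
  </attachment-points>
  <addresses>
    <mac>c2:5f:c0:14:f3:1d</mac>
    <ip>10.0.0.3</ip>
  </addresses>
</node>
""")

# addresses gives the MAC-to-IP mapping; attachment-points gives the
# switch port the host is connected to.
mac = node.findtext("addresses/mac")
ip = node.findtext("addresses/ip")
port = node.findtext("attachment-points/tp-id")
print(f"host {mac} ({ip}) is attached at {port}")
```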
- Checking the spanning tree protocol status for each link.
The spanning tree protocol status can be either forwarding, meaning packets are flowing on an active link, or discarding, indicating packets are not sent as the link is inactive.
To check the link status, send this request:
- Type: GET
- Headers: Authorization: Basic YWRtaW46YWRtaW4=
- URL: http://localhost:8080/restconf/operational/opendaylight-inventory:nodes/node/openflow:2/node-connector/openflow:2:2
This will return the following:
{
  "node-connector": [
    {
      "id": "openflow:2:2",
      --[cut]--
      "stp-status-aware-node-connector:status": "forwarding",
      "opendaylight-port-statistics:flow-capable-node-connector-statistics": {}
    }
  ]
}
In this case, all packets coming in port 2 of switch 2 will be forwarded on the established link.
- Checking created links.
To check the created links, we send the same request as in step 6, but focus on a different part of the response:
- Type: GET
- Headers: Authorization: Basic YWRtaW46YWRtaW4=
- URL: http://localhost:8080/restconf/operational/network-topology:network-topology/topology/flow:1/
The different part this time is the following:
--[cut]--
<link>
  <link-id>host:7a:e4:ba:4d:bc:35/openflow:2:1</link-id>
  <source>
    <source-tp>host:7a:e4:ba:4d:bc:35</source-tp>
    <source-node>host:7a:e4:ba:4d:bc:35</source-node>
  </source>
  <destination>
    <dest-node>openflow:2</dest-node>
    <dest-tp>openflow:2:1</dest-tp>
  </destination>
</link>
<link>
  <link-id>openflow:3:1/host:c2:5f:c0:14:f3:1d</link-id>
  <source>
    <source-tp>openflow:3:1</source-tp>
    <source-node>openflow:3</source-node>
  </source>
  <destination>
    <dest-node>host:c2:5f:c0:14:f3:1d</dest-node>
    <dest-tp>host:c2:5f:c0:14:f3:1d</dest-tp>
  </destination>
</link>
--[cut]--
These entries represent the links established while setting up the topology earlier; each one provides its source node, destination node, and termination points.
How it works...
L2Switch leverages the OpenFlowPlugin project, which provides the basic communication channel between OpenFlow-capable switches and OpenDaylight. Layer 2 discovery is handled by an ARP listener/responder; using it, OpenDaylight learns and tracks network entity addresses. Finally, using graph algorithms, L2Switch detects the shortest path and removes loops within the network.
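As a rough illustration of the loop-removal idea (a conceptual sketch, not the actual L2Switch implementation), one can compute a spanning tree of the switch graph and mark every link outside the tree as discarding, which is why some ports report a "discarding" STP status:

```python
from collections import deque

# Hypothetical switch graph: a triangle of three switches, which
# contains exactly one loop.
links = [("openflow:1", "openflow:2"),
         ("openflow:2", "openflow:3"),
         ("openflow:3", "openflow:1")]

# Build an undirected adjacency map.
adj = {}
for a, b in links:
    adj.setdefault(a, set()).add(b)
    adj.setdefault(b, set()).add(a)

# BFS from an arbitrary root to build a spanning tree.
root = "openflow:1"
parent = {root: None}
queue = deque([root])
tree_edges = set()
while queue:
    n = queue.popleft()
    for m in adj[n]:
        if m not in parent:
            parent[m] = n
            tree_edges.add(frozenset((n, m)))
            queue.append(m)

# Links outside the spanning tree are put into the "discarding" state;
# tree links keep "forwarding", so every host stays reachable loop-free.
status = {}
for a, b in links:
    status[(a, b)] = "forwarding" if frozenset((a, b)) in tree_edges else "discarding"
    print(f"{a} <-> {b}: {status[(a, b)]}")
```

In this triangle, the link between openflow:2 and openflow:3 ends up discarding, breaking the single loop while keeping all switches connected.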
There's more...
It is possible to change the basic configuration of the L2Switch components to tailor their behavior.
Configuring L2Switch
We have presented L2Switch usage with the default configuration.
To change the configuration, here are the steps to follow:
- Execute the first two steps mentioned previously.
- Stop OpenDaylight:
opendaylight-user@root>logout
- Navigate to $ODL_ROOT/etc/opendaylight/karaf/.
- Open the configuration file you want to modify.
- Perform your modification.
Note
Do not edit the configuration files carelessly; change only what is needed, based on the link provided at the beginning of this tip, or else you could break functionality.
- Save the file and re-execute the steps mentioned in the How to do it section.
The new configuration should now be applied.