The objective of OpenDaylight clustering is to have a set of nodes providing fault-tolerant, decentralized, peer-to-peer membership with no single point of failure. From a networking perspective, clustering means having a group of compute nodes work together to achieve a common function or objective.
Perform the following steps:
The repository mentioned in the Getting ready section provides a Vagrantfile that spawns VMs with the following network characteristics:
- Adapter 1: NAT
- Adapter 2: Bridged to en0: Wi-Fi (AirPort)
- Static IP address: 192.168.50.15X (X being the number of the node)
- Adapter type: paravirtualized
These are the steps to follow:
$ git clone https://github.com/adetalhouet/cluster-nodes.git
$ cd cluster-nodes
$ export NUM_OF_NODES=3
$ vagrant up
After a few minutes, to make sure the VMs are correctly running, execute the following command in the cluster-nodes folder:
$ vagrant status
Current machine states:
node-1 running (virtualbox)
node-2 running (virtualbox)
node-3 running (virtualbox)
This environment represents multiple VMs. The VMs are all listed above with their current state. For more information about a specific VM, run vagrant status NAME.
The credentials of the VMs are:
- User: vagrant
- Password: vagrant
We now have three VMs available at the following IP addresses:
192.168.50.151
192.168.50.152
192.168.50.153
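Before moving on, you can optionally SSH into one of the nodes to confirm it is reachable; this is just a sanity check and assumes the bridged network described earlier is up on your host:
$ ssh [email protected]
Enter the vagrant password when prompted, or simply run vagrant ssh node-1 from the cluster-nodes folder.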
- Prepare the cluster deployment.
In order to deploy the cluster, we will use the cluster-deployer script provided by OpenDaylight:
$ git clone https://git.opendaylight.org/gerrit/integration/test.git
$ cd test/tools/clustering/cluster-deployer/
You will need the following information:
- Your VMs/containers IP addresses: 192.168.50.151, 192.168.50.152, 192.168.50.153
- Their credentials (must be the same for all the VMs/containers): vagrant/vagrant
- The path to the distribution to deploy: $ODL_ROOT (an example export is shown after the template listing that follows)
- The cluster's configuration files located under the templates/multi-node-test directory:
$ cd templates/multi-node-test/
$ ls -1
akka.conf.template
jolokia.xml.template
module-shards.conf.template
modules.conf.template
org.apache.karaf.features.cfg.template
org.apache.karaf.management.cfg.template
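$ODL_ROOT is just the path to the OpenDaylight Karaf distribution archive you want to deploy. A hedged example, assuming the ZIP was downloaded to your home directory (adjust the path to wherever your archive actually lives):
$ export ODL_ROOT=~/distribution-karaf-0.4.0-Beryllium.zip
Note that the deploy command later in this recipe passes the path directly through --distribution, so this export is only illustrative.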
We are currently located in the cluster-deployer folder:
$ pwd
test/tools/clustering/cluster-deployer
We need to create a temp folder, so the deployment script can put some temporary files in there:
$ mkdir temp
Your directory tree should look like this:
$ tree
.
├── cluster-nodes
├── distribution-karaf-0.4.0-Beryllium.zip
└── test
└── tools
└── clustering
└── cluster-deployer
├── deploy.py
├── kill_controller.sh
├── remote_host.py
├── remote_host.pyc
├── restart.py
├── temp
└── templates
└── multi-node-test
Now let's deploy the cluster using this command:
$ python deploy.py --clean --distribution=../../../../distribution-karaf-0.4.0-Beryllium.zip --rootdir=/home/vagrant --hosts=192.168.50.151,192.168.50.152,192.168.50.153 --user=vagrant --password=vagrant --template=multi-node-test
If the process went fine, you should see logs similar to the ones available at the following link while deploying:
https://github.com/jgoodyear/OpenDaylightCookbook/tree/master/chapter1/chapter1-recipe8
Let's use Jolokia to read the cluster nodes' datastores.
Let's request, on node 1 (located at 192.168.50.151), the network-topology shard of its config datastore:
- Header: Authorization: Basic YWRtaW46YWRtaW4=
- URL: http://192.168.50.151:8181/jolokia/read/org.opendaylight.controller:Category=Shards,name=member-1-shard-topology-config,type=DistributedConfigDatastore
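If you prefer the command line over a REST client, the same request can be sent with curl; admin/admin are the credentials encoded in the Authorization header above:
$ curl -u admin:admin "http://192.168.50.151:8181/jolokia/read/org.opendaylight.controller:Category=Shards,name=member-1-shard-topology-config,type=DistributedConfigDatastore"
The response should look similar to the following: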
{
"request": {
"mbean": "org.opendaylight.controller:Category=Shards,name=member-1-shard-topology-config,type=DistributedConfigDatastore",
"type": "read"
},
"status": 200,
"timestamp": 1462739174,
"value": {
--[cut]--
"FollowerInfo": [
{
"active": true,
"id": "member-2-shard-topology-config",
"matchIndex": -1,
"nextIndex": 0,
"timeSinceLastActivity": "00:00:00.066"
},
{
"active": true,
"id": "member-3-shard-topology-config",
"matchIndex": -1,
"nextIndex": 0,
"timeSinceLastActivity": "00:00:00.067"
}
],
--[cut]--
"Leader": "member-1-shard-topology-config",
"PeerAddresses": "member-2-shard-topology-config: akka.tcp://[email protected]:2550/user/shardmanager-config/member-2-shard-topology-config, member-3-shard-topology-config: akka.tcp://[email protected]:2550/user/shardmanager-config/member-3-shard-topology-config",
"RaftState": "Leader",
--[cut]--
"ShardName": "member-1-shard-topology-config",
"VotedFor": "member-1-shard-topology-config",
--[cut]--
}
}
The result presents a lot of interesting information, such as the leader of the requested shard, which can be seen under Leader. We can also see the current state (under active) of the followers for this particular shard, each represented by its id. Finally, it provides the addresses of the peers. These addresses are in the Akka domain, as Akka is the toolkit used to wire the nodes together within the cluster.
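The same naming convention is normally used for the operational datastore: shard names end in -operational and the MBean type is DistributedOperationalDatastore. A hedged example, assuming that convention holds for your release, to read the operational network-topology shard on node 1:
$ curl -u admin:admin "http://192.168.50.151:8181/jolokia/read/org.opendaylight.controller:Category=Shards,name=member-1-shard-topology-operational,type=DistributedOperationalDatastore"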
By requesting the same shard on another peer, you would see similar information. For instance, for node 2, located at 192.168.50.152:
- Header: Authorization: Basic YWRtaW46YWRtaW4=
- URL: http://192.168.50.152:8181/jolokia/read/org.opendaylight.controller:Category=Shards,name=member-2-shard-topology-config,type=DistributedConfigDatastore
Note
Make sure to update the digit after member- in the shard name, as it should match the node you're requesting:
{
"request": {
"mbean": "org.opendaylight.controller:Category=Shards,name=member-2-shard-topology-config,type=DistributedConfigDatastore",
"type": "read"
},
"status": 200,
"timestamp": 1462739791,
"value": {
--[cut]--
"Leader": "member-1-shard-topology-config",
"PeerAddresses": "member-1-shard-topology-config: akka.tcp://[email protected]:2550/user/shardmanager-config/member-1-shard-topology-config, member-3-shard-topology-config: akka.tcp://[email protected]:2550/user/shardmanager-config/member-3-shard-topology-config",
"RaftState": "Follower",
--[cut]--
"ShardName": "member-2-shard-topology-config",
"VotedFor": "member-1-shard-topology-config",
--[cut]--
}
}
We can see the peers of this shard, as well as the fact that this node voted for node 1 (the VotedFor field) to be elected as the shard leader.
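As a final check, you can loop over all three members to confirm they agree on the leader of this shard. The use of jq to extract the Leader field is an assumption (it is not part of this recipe), but the URLs follow exactly the pattern used above:
$ for i in 1 2 3; do curl -s -u admin:admin "http://192.168.50.15$i:8181/jolokia/read/org.opendaylight.controller:Category=Shards,name=member-$i-shard-topology-config,type=DistributedConfigDatastore" | jq -r '.value.Leader'; done
All three nodes should report member-1-shard-topology-config as the leader.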