Mounting a NETCONF device
The OpenDaylight component responsible for connecting remote NETCONF devices is called the NETCONF southbound plugin, aka the netconf-connector. Creating an instance of the netconf-connector will connect a NETCONF device. The NETCONF device will be seen as a mount point in the MD-SAL, exposing the device's configuration and operational data stores and its capabilities. These mount points allow applications and remote users (over RESTCONF) to interact with the mounted devices.
The netconf-connector currently supports RFC 6241, RFC 5277, and RFC 6022.
The following recipe will explain how to connect a NETCONF device to OpenDaylight.
Getting ready
This recipe requires a NETCONF device. If you don't have any, you can use the NETCONF test tool provided by OpenDaylight, which can be downloaded from the OpenDaylight Nexus repository.
How to do it...
Perform the following steps:
- Start the OpenDaylight Karaf distribution using the karaf script. Using this script will give you access to the Karaf CLI:
$ ./bin/karaf
- Install the user-facing feature responsible for pulling in all dependencies needed to connect a NETCONF device:
opendaylight-user@root>feature:install odl-netconf-topology odl-restconf
It might take a minute or so to complete the installation.
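If you want to confirm that the installation succeeded, you can list the installed features from the same Karaf CLI (a quick, optional check):
opendaylight-user@root>feature:list -i | grep netconf
Both odl-netconf-topology and odl-restconf should appear in the output.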
- Start your NETCONF device.
If you want to use the NETCONF test tool, it is time to simulate a NETCONF device using the following command:
$ java -jar netconf-testtool-1.0.1-Beryllium-SR4-executable.jar --device-count 1
This will simulate one device that will be bound to port 17830.
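If you want to make sure the simulated device is listening, you can open a NETCONF session to it directly over SSH; the test tool should accept the admin/admin credentials used later in this recipe:
$ ssh admin@127.0.0.1 -p 17830 -s netconf
The device should answer with its hello-message listing its capabilities; press Ctrl + C to close the session.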
- Configure a new netconf-connector.
Send the following request using RESTCONF:
- Type: PUT
- URL: http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/new-netconf-device
By looking closer at the URL, you will notice that the last part is new-netconf-device. This must match the node-id that we will define in the payload.
- Headers:
Accept: application/xml
Content-Type: application/xml
Authorization: Basic YWRtaW46YWRtaW4=
- Payload:
<node xmlns="urn:TBD:params:xml:ns:yang:network-topology">
    <node-id>new-netconf-device</node-id>
    <host xmlns="urn:opendaylight:netconf-node-topology">127.0.0.1</host>
    <port xmlns="urn:opendaylight:netconf-node-topology">17830</port>
    <username xmlns="urn:opendaylight:netconf-node-topology">admin</username>
    <password xmlns="urn:opendaylight:netconf-node-topology">admin</password>
    <tcp-only xmlns="urn:opendaylight:netconf-node-topology">false</tcp-only>
</node>
- Let's have a closer look at this payload:
- node-id: Defines the name of the netconf-connector.
- host: Defines the IP address of the NETCONF device.
- port: Defines the port for the NETCONF session.
- username: Defines the username of the NETCONF session. This should be provided by the NETCONF device configuration.
- password: Defines the password of the NETCONF session. As for the username, this should be provided by the NETCONF device configuration.
- tcp-only: Defines whether the NETCONF session should use plain TCP or SSH. If set to true, it will use TCP.
Note
This is the default configuration of the netconf-connector; it actually has more configurable elements that we will look at later.
Once you have completed the request, send it. This will spawn a new netconf-connector that connects to the NETCONF device at the provided IP address and port using the provided credentials.
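If you prefer the command line to a REST client, the same request can be sent with curl. The following is a minimal sketch assuming the default admin/admin credentials and that the payload above has been saved to a local file named new-netconf-device.xml (a filename chosen here for illustration):
$ curl -u admin:admin -X PUT \
    -H "Content-Type: application/xml" -H "Accept: application/xml" \
    -d @new-netconf-device.xml \
    http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/new-netconf-device
A successful creation should return HTTP 201 Created (or 200 OK if you are replacing an existing configuration).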
- Verify that the netconf-connector has correctly been pushed and get information about the connected NETCONF device.
First, you could look at the log to see if any errors occurred. If no error has occurred, you will see the following:
2016-05-07 11:37:42,470 | INFO | sing-executor-11 | NetconfDevice | 253 - org.opendaylight.netconf.sal-netconf-connector - 1.3.0.Beryllium | RemoteDevice{new-netconf-device}: Netconf connector initialized successfully
Once the new netconf-connector is created, some useful metadata is written into the MD-SAL's operational data store under the network-topology subtree. To retrieve this information, you should send the following request:
- Type: GET
- Headers:
Authorization: Basic YWRtaW46YWRtaW4=
- URL: http://localhost:8181/restconf/operational/network-topology:network-topology/topology/topology-netconf/node/new-netconf-device
We're using new-netconf-device as the node-id because this is the name we assigned to the netconf-connector in a previous step.
This request will provide information about the connection status and device capabilities. The device capabilities are all the YANG models the NETCONF device provided in its hello-message, which were used to create the schema context.
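The same verification can be done from the command line; this sketch again assumes the default admin/admin credentials:
$ curl -u admin:admin http://localhost:8181/restconf/operational/network-topology:network-topology/topology/topology-netconf/node/new-netconf-device
The response should report the connection status of the device along with the list of capabilities it advertised.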
- More configuration for the netconf-connector.
As mentioned previously, the netconf-connector contains various configuration elements. Those fields are non-mandatory and have default values; if you do not wish to override any of these values, simply do not provide them:
- schema-cache-directory: This corresponds to the destination schema repository for YANG files downloaded from the NETCONF device. By default, those schemas are saved in the cache directory ($ODL_ROOT/cache/schema). Using this configuration will define where to save the downloaded schemas relative to the cache directory. For instance, if you assigned new-schema-cache, schemas related to this device would be located under $ODL_ROOT/cache/new-schema-cache/.
- reconnect-on-changed-schema: If set to true, the connector will auto disconnect/reconnect when schemas are changed in the remote device. The netconf-connector will subscribe to base NETCONF notifications and listen for netconf-capability-change notifications. The default value is false.
- connection-timeout-millis: Timeout in milliseconds after which the connection must be established. The default value is 20000 milliseconds.
- default-request-timeout-millis: Timeout for blocking operations within transactions. Once this timer is reached, if the request is not yet finished, it will be canceled. The default value is 60000 milliseconds.
- max-connection-attempts: Maximum number of connection attempts. Non-positive or null values are interpreted as infinity. The default value is 0, which means it will retry forever.
- between-attempts-timeout-millis: Initial timeout in milliseconds between connection attempts. This will be multiplied by the sleep-factor for every new attempt. The default value is 2000 milliseconds.
- sleep-factor: Back-off factor used to increase the delay between connection attempts. The default value is 1.5.
- keepalive-delay: The netconf-connector sends keep-alive RPCs while the session is idle to ensure session connectivity. This delay specifies the timeout between keep-alive RPCs in seconds. Providing a value of 0 will disable this mechanism. The default value is 120 seconds.
Using this configuration, your payload would look like this:
<node xmlns="urn:TBD:params:xml:ns:yang:network-topology">
    <node-id>new-netconf-device</node-id>
    <host xmlns="urn:opendaylight:netconf-node-topology">127.0.0.1</host>
    <port xmlns="urn:opendaylight:netconf-node-topology">17830</port>
    <username xmlns="urn:opendaylight:netconf-node-topology">admin</username>
    <password xmlns="urn:opendaylight:netconf-node-topology">admin</password>
    <tcp-only xmlns="urn:opendaylight:netconf-node-topology">false</tcp-only>
    <schema-cache-directory xmlns="urn:opendaylight:netconf-node-topology">new_netconf_device_cache</schema-cache-directory>
    <reconnect-on-changed-schema xmlns="urn:opendaylight:netconf-node-topology">false</reconnect-on-changed-schema>
    <connection-timeout-millis xmlns="urn:opendaylight:netconf-node-topology">20000</connection-timeout-millis>
    <default-request-timeout-millis xmlns="urn:opendaylight:netconf-node-topology">60000</default-request-timeout-millis>
    <max-connection-attempts xmlns="urn:opendaylight:netconf-node-topology">0</max-connection-attempts>
    <between-attempts-timeout-millis xmlns="urn:opendaylight:netconf-node-topology">2000</between-attempts-timeout-millis>
    <sleep-factor xmlns="urn:opendaylight:netconf-node-topology">1.5</sleep-factor>
    <keepalive-delay xmlns="urn:opendaylight:netconf-node-topology">120</keepalive-delay>
</node>
How it works...
Once the request to connect a new NETCONF device is sent, OpenDaylight will set up the communication channel used for managing and interacting with the device. At first, the remote NETCONF device will send its hello-message defining all of the capabilities it has. Based on this, the netconf-connector will download all the YANG files provided by the device. All those YANG files will define the schema context of the device.
At the end of the process, some exposed capabilities might end up as unavailable, for two possible reasons:
- The NETCONF device provided a capability in its hello-message, but hasn't provided the schema.
- OpenDaylight failed to mount a given schema due to YANG violation(s).
OpenDaylight parses YANG models as per RFC 6020; if a schema does not respect the RFC, it could end up as an unavailable-capability.
If you encounter one of these situations, looking at the logs will pinpoint the reason for such a failure.
There's more...
Once the NETCONF device is connected, all its capabilities are available through the mount point. View it as a pass-through directly to the NETCONF device.
GET data store
To see the data contained in the device data store, use the following request:
- Type: GET
- Headers:
Authorization: Basic YWRtaW46YWRtaW4=
- URL: http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/new-netconf-device/yang-ext:mount/
Adding yang-ext:mount/ to the URL will access the mount point created for new-netconf-device. This will show the configuration data store. If you want to see the operational one, replace config with operational in the URL.
If your device defines a YANG model, you can access its data using the following request:
- Type: GET
- Headers:
Authorization: Basic YWRtaW46YWRtaW4=
- URL: http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/new-netconf-device/yang-ext:mount/<module>:<container>
The <module> represents a schema defining the <container>. The <container> can be either a list or a container. It is not possible to access a single leaf. You can access containers/lists within containers/lists. The last part of the URL would look like this:
.../yang-ext:mount/<module>:<container>/<sub-container>
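As a concrete illustration, most NETCONF devices (including the test tool) support the standard ietf-netconf-monitoring model, so its netconf-state container can usually be read from the operational data store of the mount point. This is only an example; the modules available depend entirely on your device:
$ curl -u admin:admin http://localhost:8181/restconf/operational/network-topology:network-topology/topology/topology-netconf/node/new-netconf-device/yang-ext:mount/ietf-netconf-monitoring:netconf-state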
Invoking RPC
In order to invoke an RPC on the remote device, you should use the following request:
- Type: POST
- Headers:
Accept: application/xml
Content-Type: application/xml
Authorization: Basic YWRtaW46YWRtaW4=
- URL: http://localhost:8181/restconf/operations/network-topology:network-topology/topology/topology-netconf/node/new-netconf-device/yang-ext:mount/<module>:<operation>
This URL is accessing the mount point of new-netconf-device, and through this mount point we're accessing the <module> to call its <operation>. The <module> represents a schema defining the RPC and <operation> represents the RPC to call.
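With curl, such a request would have the following shape; <module> and <operation> are placeholders, and rpc-input.xml is a hypothetical file containing the RPC's input element (omit the -d option entirely if the RPC takes no input):
$ curl -u admin:admin -X POST \
    -H "Content-Type: application/xml" -H "Accept: application/xml" \
    -d @rpc-input.xml \
    "http://localhost:8181/restconf/operations/network-topology:network-topology/topology/topology-netconf/node/new-netconf-device/yang-ext:mount/<module>:<operation>"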
Deleting a netconf-connector
Removing a netconf-connector will drop the NETCONF session and all resources will be cleaned up. To perform such an operation, use the following request:
- Type: DELETE
- Headers:
Authorization: Basic YWRtaW46YWRtaW4=
- URL: http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/new-netconf-device
By looking closer at the URL, you can see that we are removing the NETCONF node with node-id new-netconf-device.
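A minimal curl equivalent, again assuming the default admin/admin credentials, would be:
$ curl -u admin:admin -X DELETE \
    http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/new-netconf-device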