Automating Citrix ADC/NetScaler Virtual Server Monitoring End-To-End with NITRO API

Automation is a great way to manage repeatable tasks: scripts handle repetitive work quickly and without error, which leads to faster response times and shorter outage windows. Automation also frees up your team to innovate.


This is especially valuable if you manage mission-critical applications. Applications that receive high traffic each day need some form of load balancing across servers, and Citrix ADC/NetScaler is one option for balancing traffic between multiple backend servers. It does this by presenting one "Virtual IP Address" that consumers use to access the site. Behind the virtual IP address, the NetScaler uses various types of logic to spread the workload evenly across the backend servers. This allows you to scale out backend services, rather than scale up.


Adding a load balancer also increases complexity within the environment: it is another layer to troubleshoot when it comes to outages. With a more complex environment, outages can consume hours of troubleshooting just to find the root cause. Valuable time can be spent simply identifying whether the ADC, a Load Balanced vServer, or a Content Switched vServer is responsible for a specific outage. Many times the ADC is not the cause; it is often a change or failure on the applications/servers bound to the service groups.


After watching this occur repeatedly with many of our customers, we decided to find a more efficient way to identify possible root causes.


Built into each ADC/NetScaler is a REST API called the NITRO API. It can be found on the Documentation tab after logging in, where you will find materials on how to use the NITRO API as well as a client to test and build requests. The NITRO API lets us run checks or set values on the ADC/NetScaler from a script, rather than logging into the CLI or GUI.


Throughout this blog, we will discuss how we leveraged the Citrix ADC NITRO API to enumerate the ADC resources, namely:

  • LB vServer Names

  • Service Groups Names

  • Backend Servers

  • Backend Server States

  • Monitors

  • HTTP Requests for testing with a 200 “OK” response expected

We use this information to test the services end-to-end. This gives us an accurate view of what is happening in the environment for every object, and allows us to pinpoint exactly where an issue or error exists.


Development Process Overview

The first step I took in building the automated workflow was writing some simple pseudocode: identify the steps required to complete the process before jumping in and writing the script. The following steps were used to gather all the information needed to check monitor and backend server status:

  • Understand Business Case

- The purpose of creating a script to check vServer status is to reduce the time needed to troubleshoot and resolve issues

- The output provides detailed information about each resource to all stakeholders during an outage and can effectively centralize communication

  • Build Use-Case

- Connect

- Login

- Get list of all Load Balanced (LB) vServers

- For each LB vServer, enumerate the Service Group

- For each Service Group, enumerate the bound servers

- Get the backend server IP, Port, Name, and Current State

- For each Service Group, enumerate the bound LB Monitors

- Get any custom HTTP Request strings

- Test each backend server to ensure the Monitor(s) bound are functioning as expected

  • Once the workflow/pseudocode was designed, I needed to build and configure the lab to prototype the solution for testing

  • This required exploring the Citrix NITRO API, a tool built into every NetScaler. It is used by third-party management tools to interface with your Citrix ADC/NetScalers. The NITRO API has many capabilities and is well documented; checking server states only scratches the surface of what is possible.

  • During the exploration I used the built-in NITRO API Client to complete testing.

  • Now, I was ready to build the script in Python.

  • Once built, I needed to test the script

  • Finally, document and post the script
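The use-case steps above boil down to a nested traversal: vServer to Service Group to member server, probing each member with every bound monitor. A minimal sketch of that loop structure follows; the fetch callables are hypothetical stand-ins for the NITRO API calls, injected so the walk can be exercised without a live ADC:

```python
def check_vservers(get_lbvservers, get_servicegroups, get_members,
                   get_monitors, probe):
    """Walk vServer -> Service Group -> member, probing each member
    with every bound monitor. All callables are injected placeholders
    for the real NITRO API lookups."""
    results = []
    for vserver in get_lbvservers():
        for group in get_servicegroups(vserver):
            monitors = get_monitors(group)
            for member in get_members(group):
                for monitor in monitors:
                    results.append({
                        "vserver": vserver,
                        "servicegroup": group,
                        "server": member["name"],
                        "state": member["state"],
                        "monitor": monitor,
                        "probe_ok": probe(member, monitor),
                    })
    return results

# Tiny in-memory demo mirroring the lab objects
demo = check_vservers(
    get_lbvservers=lambda: ["VS-LB-TEST1"],
    get_servicegroups=lambda vs: ["SVG-TEST1"],
    get_members=lambda sg: [{"name": "server01",
                             "ip": "192.168.99.110", "state": "UP"}],
    get_monitors=lambda sg: ["MON-TEST1"],
    probe=lambda member, monitor: True,
)
```

Keeping the lookups injectable also made it easy to test each stage of the script in isolation.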

Lab Overview

This script was built and tested in my lab environment. The lab consists of a few simple components as shown in the diagram below:

  • Workstation: Ubuntu 20.04.1 LTS

  • Python 3.8

  • Libraries

- json: load the JSON responses and parse out the needed data

- requests: form the GET requests sent to the ADC

- sys: output formatting

- collections: parse through the arrays of data received from the ADC

- urllib3: suppress SSL certificate warnings when connecting to the ADC or backend servers

- csv: open, write, and close the CSV file

- getpass: hide password input when connecting to the ADC

  • NetScaler (Hosted on VMware ESXi)

- VM: NetScaler 12.1 Build 51.19.nc

  • VM (Hosted on VMware ESXi)

- JSONPlaceHolder docker image (3x)
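Taken together, the library list above translates to the following import block. The lab ADC presents a self-signed certificate, so urllib3 is used here only to silence the warnings that requests would otherwise print on every unverified call:

```python
import csv          # open, write, and close the CSV report
import getpass      # hide password input when connecting to the ADC
import json         # parse the JSON responses for the needed data
import sys          # output formatting
import collections  # parse through arrays of data from the ADC

import requests     # form the GET requests sent to the ADC
import urllib3      # suppress SSL certificate warnings in the lab

# Certificate verification is skipped against the lab's self-signed
# certificate, so suppress the InsecureRequestWarning noise up front.
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
```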


The listing below gives you a general idea of the minimal configuration needed on the NetScaler. It can be used to set up your own lab environment. The configuration includes the following:

  • Enabling the Load Balancing Feature

  • Setting the hostname and Subnet IP Address (SNIP)

  • Creating some backend servers

  • Creating (3) load balancing vServers

  • Creating (3) Service Groups

  • Creating a monitor

  • Binding the backend servers and monitor to the Service Groups, and the Service Groups to the LB vServers


NetScaler Configuration (Base Config)

set ns config -IPAddress 192.168.99.50 -netmask 255.255.255.0
enable ns feature WL LB CH
set ns hostName NS
add ns ip 192.168.99.51 255.255.255.0 -vServer DISABLED
add server server01 192.168.99.110
add server server02 192.168.99.111
add server server03 192.168.99.112
add serviceGroup SVG-TEST1 HTTP -maxClient 0 -maxReq 0 -cip ENABLED X-Forwarded-For -usip NO -useproxyport YES -cltTimeout 180 -svrTimeout 360 -CKA NO -TCPB NO -CMP YES
add serviceGroup SVG-TEST2 HTTP -maxClient 0 -maxReq 0 -cip ENABLED X-Forwarded-For -usip NO -useproxyport YES -cltTimeout 180 -svrTimeout 360 -CKA NO -TCPB NO -CMP YES
add serviceGroup SVG-TEST3 HTTP -maxClient 0 -maxReq 0 -cip ENABLED X-Forwarded-For -usip NO -useproxyport YES -cltTimeout 180 -svrTimeout 360 -CKA NO -TCPB NO -CMP YES
add lb vserver VS-LB-TEST1 HTTP 192.168.99.100 80 -persistenceType NONE -cltTimeout 180
add lb vserver VS-LB-TEST2 HTTP 192.168.99.101 80 -persistenceType NONE -cltTimeout 180
add lb vserver VS-LB-TEST3 HTTP 192.168.99.102 80 -persistenceType NONE -cltTimeout 180
bind lb vserver VS-LB-TEST1 SVG-TEST1
bind lb vserver VS-LB-TEST2 SVG-TEST2
bind lb vserver VS-LB-TEST3 SVG-TEST3
add dns nameServer 192.168.99.2
add dns nameServer 8.8.8.8
add lb monitor MON-TEST1 HTTP -respCode 200 -httpRequest "GET /posts"
bind serviceGroup SVG-TEST1 server02 80
bind serviceGroup SVG-TEST1 server03 80
bind serviceGroup SVG-TEST1 server01 80
bind serviceGroup SVG-TEST1 -monitorName MON-TEST1
bind serviceGroup SVG-TEST2 server02 80
bind serviceGroup SVG-TEST2 server03 80
bind serviceGroup SVG-TEST2 server01 80


Exploring the Citrix NITRO API with Citrix Developer Docs

Before starting the script, I wanted to get an idea of what is possible with the NITRO API. Citrix's Developer Docs are a great resource, with documentation for each of the commands. The docs lay out examples that show the request verbs (GET, PUT, DELETE, etc.), syntax, and payloads.


NOTE: There were some examples on GitHub as well, but they seemed a bit too contrived for my needs.

Citrix Developer Docs


The Developer Docs you use might differ slightly for your environment. The link referenced above is for 12.1, the NetScaler build in my lab.

Figure 1: Citrix Developer Docs

The documentation provides detailed explanations for each endpoint and the parameters that can be sent, along with their payloads. I was able to quickly navigate the reference guide to find my starting point for the script: enumerating all LB vServers. The URL required to do this can be found in Figure 2 below.

Figure 2: Load Balanced vServers get (all)
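A minimal sketch of that first call, using the NSIP from the lab base config. The NITRO response is a JSON object whose "lbvserver" key holds the array of vServer objects; the parsing is split out so it can be exercised without a live ADC:

```python
import requests
import urllib3

# Lab ADC uses a self-signed certificate; suppress the warnings.
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

def parse_lbvserver_names(payload):
    """Pull the vServer names out of a /nitro/v1/config/lbvserver response."""
    return [vs["name"] for vs in payload.get("lbvserver", [])]

def get_lbvserver_names(nsip, user, password):
    """GET /nitro/v1/config/lbvserver and return all LB vServer names."""
    resp = requests.get(
        "https://{0}/nitro/v1/config/lbvserver".format(nsip),
        headers={"X-NITRO-USER": user, "X-NITRO-PASS": password},
        verify=False,  # self-signed certificate in the lab
    )
    resp.raise_for_status()
    return parse_lbvserver_names(resp.json())

# Shape of the payload, trimmed to the field we use:
sample = {"lbvserver": [{"name": "VS-LB-TEST1"},
                        {"name": "VS-LB-TEST2"},
                        {"name": "VS-LB-TEST3"}]}
```

Against the lab, `get_lbvserver_names("192.168.99.50", user, password)` returns the three VS-LB-TEST vServers created in the base config.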

Continuing with the pseudocode, we need to figure out how to find the Service Group bindings for each of the LB vServers listed above. This can be accomplished by using the "/nitro/v1/config/lbvserver_servicegroup_binding" URL. (Figure 3)

Figure 3: Load Balancer Service Group Binding
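Following the same pattern, the Service Groups bound to a given vServer come back under the "lbvserver_servicegroup_binding" key. A sketch, again with the parsing split out for testing (the "servicegroupname" field follows the NITRO docs):

```python
import requests

def parse_servicegroup_bindings(payload):
    """Extract Service Group names from an
    lbvserver_servicegroup_binding response."""
    return [b["servicegroupname"]
            for b in payload.get("lbvserver_servicegroup_binding", [])]

def get_servicegroups(nsip, user, password, vserver):
    """GET the Service Group bindings for one LB vServer."""
    resp = requests.get(
        "https://{0}/nitro/v1/config/lbvserver_servicegroup_binding/{1}"
        .format(nsip, vserver),
        headers={"X-NITRO-USER": user, "X-NITRO-PASS": password},
        verify=False,  # self-signed certificate in the lab
    )
    resp.raise_for_status()
    return parse_servicegroup_bindings(resp.json())

# Trimmed shape of the response for VS-LB-TEST1 in the lab:
sample = {"lbvserver_servicegroup_binding": [
    {"name": "VS-LB-TEST1", "servicegroupname": "SVG-TEST1"}]}
```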

The process continues until we have all the following objects:

  • Load Balanced vServer

  • Service Group Binding

  • Service Group Member Servers

  • Service Group Monitor bindings

Figure 4: Service Group Member Bindings
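Once the member servers and monitors are collected, the last step of the pseudocode is replaying each monitor's HTTP request against each backend server and checking for the expected 200 "OK". A sketch of that probe, with the request-string parsing split out; the monitor string "GET /posts" is the one from the lab config, and the plain-HTTP URL assumes the lab's HTTP service groups:

```python
import requests

def split_http_request(http_request):
    """Split a monitor httpRequest string like 'GET /posts'
    into (method, path), defaulting the path to '/'."""
    method, _, path = http_request.partition(" ")
    return method.upper(), path or "/"

def probe_member(ip, port, http_request, expected_code=200):
    """Replay a monitor's HTTP request against one backend server
    and report whether the expected response code came back."""
    method, path = split_http_request(http_request)
    resp = requests.request(
        method,
        "http://{0}:{1}{2}".format(ip, port, path),
        timeout=5,
    )
    return resp.status_code == expected_code
```

In the lab, `probe_member("192.168.99.110", 80, "GET /posts")` reproduces exactly what MON-TEST1 checks, so a failing probe here points at the backend rather than the ADC.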