In part 2, we went over the installation of the RD Gateway, RD Web Access and RD Connection Broker roles. It wasn’t difficult at all, as the majority of the installation was wizard driven, with us only having to tell it which servers would hold the roles. We then bound a public SSL certificate to the gateway and web access roles and, finally, created a session collection for a user group that consisted of two session host servers. Here in part 3, we’re going to install and configure a pair of Netscaler 12 VPX appliances so that they work in high availability mode. We’ll then create our load balancing virtual server for the connection broker role so that when we configure that role for HA in the next article, everything will be ready to go. Let’s begin!
Part 3: Installation of Netscaler HA pair and Connection Broker LB Server
Netscaler 12 VPX High Availability
Before we can configure load balancing for our connection broker role, we first need to set up the Netscaler VPX. Without the Netscaler, the default load balancing method used by the connection brokers is DNS round robin. Most system administrators understand that round robin DNS is not true load balancing at all, as there is no logic or algorithm behind the method. It works in lab environments, and even in production to a certain extent depending on what you are trying to accomplish, but since we’re setting up a Netscaler VPX to front our RDS farm anyway, why not take advantage of it and have it load balance the connection broker role as well?
The Netscaler VPX is a free download; all you need is a free Citrix account. Once you’ve created one and signed in, you can download the Netscaler 12 VPX Freemium edition from here.
Once you have that downloaded, import the OVF into your vSphere environment. Remember to import it twice because we’ll be setting up a pair of them in HA mode.
We’ll first configure our primary Netscaler node. In the console after it boots, give it an IP address for management purposes; this IP is known as the Netscaler IP (NSIP). Afterwards, choose option #4 to save the configuration and begin the Netscaler initialization.
Next, log in to the web GUI by typing the IP address of the Netscaler into a browser. By default, both the username and the password are ‘nsroot’. We’ll now configure the subnet IP address (SNIP), hostname, DNS and time zone for our primary Netscaler. Reboot the Netscaler when asked.
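For reference, the same initial configuration can also be done from the Netscaler CLI (over SSH or the console). The addresses, hostname and time zone below are placeholders from my hypothetical lab; substitute your own values:

```shell
# Set the NSIP (management IP) -- example addresses only
set ns config -IPAddress 192.168.1.10 -netmask 255.255.255.0

# Add a subnet IP (SNIP) for back-end communication
add ns ip 192.168.1.11 255.255.255.0 -type SNIP

# Hostname, DNS and time zone
set ns hostName ns01
add dns nameServer 192.168.1.5
set ns param -timezone "GMT-05:00-EST-America/New_York"

# Persist the configuration before rebooting
save ns config
```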
At this point, go ahead and perform the same steps on your secondary Netscaler VPX, making sure to give it a different NSIP as well as a different hostname. Reboot the VM when the initial configuration is done. Head back over to your primary node, go to System –> High Availability and click the Add button. Enter the information for your secondary node and hit the Create button.
Back in the High Availability screen, you should then see both nodes as being in the UP state. If your primary node suddenly became the secondary, simply right-click on it and select ‘Force Failover’. This will immediately switch the nodes back.
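If you prefer the CLI, the HA pair can be built with a couple of commands. The NSIPs below are placeholders for my lab; the first command runs on the primary (pointing at the secondary’s NSIP), the second on the secondary (pointing at the primary’s NSIP):

```shell
# On the primary node (secondary's NSIP is 192.168.1.20 in this example)
add HA node 1 192.168.1.20

# On the secondary node (primary's NSIP is 192.168.1.10)
add HA node 1 192.168.1.10

# Verify both nodes show as UP, and swap roles back if needed
show HA node
force HA failover
save ns config
```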
And…..just like that, we have configured ourselves a Netscaler HA pair! Pretty simple, right? From now on, the two nodes will perform background synchronization, and if the primary goes down, the secondary will immediately pick up the slack. We will test this once we have everything configured later on. Like I mentioned in part 1, there’s really no reason why you wouldn’t want to configure this HA pair in a lab considering that it’s so dead simple to perform.
Creating Connection Broker Virtual Server
We will now create our load balancing virtual server in the Netscaler to be used by the connection broker servers. First, we create the two server objects that represent the two actual back-end servers hosting our connection broker roles. For me, these will be rdws01.lab.local and rdws02.lab.local. In the Netscaler, drill down to Traffic Management –> Load Balancing. Right-click the yellow exclamation mark and select ‘Enable Feature’. Then select Servers and click the Add button. Give each of your servers a descriptive name along with its corresponding IP address.

One important note: with the Netscaler, changes you make WILL NOT be saved after a reboot unless you save the running configuration! This can be done by simply clicking the little floppy diskette icon in the upper right corner. This can be irritating for some, but you’ll get used to it. So remember, changes you make will not survive a reboot until you hit that save button!
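The equivalent CLI commands for enabling the load balancing feature and creating the two server objects look roughly like this (the server names and IP addresses are assumptions from my hypothetical lab):

```shell
# Enable the load balancing feature
enable ns feature LB

# Create the two back-end server objects for the connection brokers
add server rdws01 192.168.1.31
add server rdws02 192.168.1.32

# Don't forget: persist the running configuration
save ns config
```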
Now we create load balancing services for the two servers and tell the Netscaler how to monitor them to see if they are up. Head to the Services tab and hit Add. The connection broker uses RDP over port 3389 when communicating with the gateway server, so that’s the type of service we’ll be creating here. Give the service a name and choose one of the existing servers you created above. For the protocol, choose RDP with port 3389. Repeat the same steps to create the service for your other server. You’ll notice that your second server’s service has a state of DOWN in the Services page afterwards. This is expected because, remember, we’ve only installed the connection broker role on the first server, not the second! Therefore, port 3389 isn’t open on the second server at the moment. We’ll revisit this page once the role has been installed on the second server.
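From the CLI, the two RDP services would be created along these lines (the service names are just my own naming convention):

```shell
# One RDP service per connection broker server, on port 3389
add service svc_rdws01_rdp rdws01 RDP 3389
add service svc_rdws02_rdp rdws02 RDP 3389

# Check the state of each service (expect the second to be DOWN for now)
show service svc_rdws01_rdp
show service svc_rdws02_rdp
```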
We are now ready to create our LB virtual server. Head to the Virtual Servers tab and hit the Add button. Give it a name, select the RDP protocol, enter the IP address to be used for the HA connection broker role with port 3389 and hit Create. Finally, we need to bind the two services we just created to this virtual server. Click the “No Load Balancing Virtual Server Service Binding” link, select both of the services you’ve created and hit the Bind button. Your virtual server has now been created! Notice that although one of the services is down, our LB virtual server is still up and running! It knows not to send requests to the downed server. This right here demonstrates how a load balancer such as the Netscaler can use logic to determine whether or not to send traffic to a back-end server, rather than relying on DNS round robin. Of course, this is just a really simple scenario; you can create much more complex rules and policies to have the Netscaler do your bidding.
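The virtual server and its service bindings can likewise be created from the CLI. The VIP of 192.168.1.50 and the names below are placeholders for my lab:

```shell
# RDP load balancing virtual server on the HA connection broker VIP
add lb vserver vs_rdcb RDP 192.168.1.50 3389

# Bind both back-end services to the virtual server
bind lb vserver vs_rdcb svc_rdws01_rdp
bind lb vserver vs_rdcb svc_rdws02_rdp

# Confirm the vserver is UP even with one service DOWN
show lb vserver vs_rdcb
save ns config
```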
Finally, our last step is to create a DNS entry mapping the IP of our newly created virtual server to the name we will be using as the connection broker HA name. For my lab, I’ll be using rdcb.lab.local. If you then ping the hostname, it should resolve to the IP of the LB virtual server we’ve just created on the Netscaler, and you should get replies back.
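If your zone lives on a Windows DNS server, the A record can also be added from an elevated command prompt with dnscmd (the DNS server name, zone and VIP below are assumptions for this lab):

```shell
REM Add an A record mapping rdcb.lab.local to the LB virtual server's VIP
dnscmd dns01 /RecordAdd lab.local rdcb A 192.168.1.50

REM Verify the record resolves to the VIP
nslookup rdcb.lab.local
```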
Before wrapping up, log back in to your secondary Netscaler. You’ll be presented with a message stating that you are logging into the secondary node and that you should not make any changes to this node! What you want to verify is that all the servers, services and the load balancing virtual server we’ve created above on the primary node have been successfully synced to this secondary. You should find all of them in the same places as on the primary node. Once you confirm this, the synchronization process is working and the secondary Netscaler node is ready to take over should the primary fail or reboot.

At this point, our Netscaler has been prepped to provide the load balancing feature for our connection brokers. We will finally get to configure that piece in part 4, as well as create the actual load balancing virtual server that our external clients will connect to! We are actually very close to the finish line, and by that I mean being able to test our RDS farm externally.