Netscaler Access Gateway issue

Randomly seeing “Unable to launch application. Contact your help desk with the following information: Cannot connect to the Citrix XenApp server. SSL Error 43: The proxy denied access to …. port 1494” when using two or more PVS-based STA servers running from the same image.


So this is one of those annoying error messages that can mean a great many things.

  • Access Gateway licenses might not be installed
  • Mismatched STAs configured on the Access Gateway and Web Interfaces
  • Firewall issues between the gateway and the XenApp server you are trying to reach

However, in a recent case all of those factors checked out, and the problem would appear randomly. Usually if any of the above issues are present, it is an all-or-nothing game: it is either totally broken every time, or it works every time. In my case it was intermittent. You would click on your app in Web Interface; sometimes you got the above message and, after a few retries, it would connect fine. Other times it would connect with no message at all.

So, to baseline the configuration:

  • 2x NetScaler 10.0 running Access Gateway
  • 2x XenApp 6.5 HRUP1 servers providing zone data collection for the farm and the STA service
  • KEY POINT: Both STA servers are provisioned from the same PVS vDisk

What is happening is best displayed in the NetScaler config for the Access Gateway virtual server. Go to the Published Applications tab and look at the STA identifiers.


Now it is important to note this pic is from AFTER I fixed the issue. When the issue was occurring, both STA identifiers were identical. Essentially, the Access Gateway expects each identifier to be unique, and it gets confused when two STA servers respond with the same identifier. The relevant (and fairly new) Citrix Limited Release KB article is here.

The hotfix side of the KB corrects an issue with the XenApp Server Configuration tool when preparing a server for imaging and provisioning: it was not guaranteeing that each provisioned server received a unique STA ID. The second part of the fix is simply to set the Citrix XML Service to delayed start, to ensure it comes up after the NIC does.
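The delayed-start half of the fix can be applied from an elevated command prompt. This is a sketch: "CtxHttp" is the usual short name for the Citrix XML Service, but confirm the name on your own image first.

```
:: Confirm the Citrix XML Service's short name (often CtxHttp):
sc query state= all | findstr /i "ctx"

:: Set it to delayed automatic start (note the space after start=):
sc config CtxHttp start= delayed-auto
```

Bake this into the vDisk so every streamed server picks it up.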

So a simple fix for an odd and random problem.

vCenter Accidental lockout! Read-only pitfalls

Recently while working with a customer, we had a single host that was completely read-only for their domain login.

The symptom: buttons and controls they normally had access to (everything to do with editing) were grayed out.

(Most screenshots of vCenter will be from the Web Client from here on out because, like it or not, it is going to be the only choice soon.)


Since all permissions were supposed to be set at the top level of vCenter, and were assigned to a group, this was puzzling. A quick look at that host's Permissions tab, and we find our culprit.


So after some googling I found the explanation here.

The core of the issue is that a read-only permission set directly on an object overrides an administrator permission inherited from above. If you set it at the top level, you can even lock every administrator out of vCenter in one go (depending on how you set up permissions to begin with).

The article is accurate in how to correct the situation, but since a lot of admins I run across are nervous around SQL, I thought a walk-through video might be helpful.

In essence, to fix the issue you just need to update a single table in the vCenter database, dbo.VPX_ACCESS. However, being a good administrator, you are going to want to back up your database before editing it directly :)
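For those comfortable at a query window, the change boils down to something like the following, run via sqlcmd against the vCenter database. This is a sketch: the server name, database name (VCDB), and group name are placeholders, and the assumption here is that ROLE_ID -1 maps to the built-in Administrator role. Inspect the table contents before updating anything.

```
:: Back up the database first!  Then inspect the permissions table:
sqlcmd -S SQLSERVER -d VCDB -Q "SELECT * FROM dbo.VPX_ACCESS"

:: Flip your group back to the Administrator role (ROLE_ID -1 is the
:: built-in Administrator role; 'DOMAIN\VIAdmins' is a placeholder):
sqlcmd -S SQLSERVER -d VCDB -Q "UPDATE dbo.VPX_ACCESS SET ROLE_ID = -1 WHERE PRINCIPAL = 'DOMAIN\VIAdmins'"
```

Restart the VMware VirtualCenter Server service afterwards so the change is picked up.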

Below is a short video walking through editing the table and restoring your access.

Where are the logs?!

Starting to work on some “IT Essentials” training videos for folks in our NOC. In this section we will focus on finding logs for various technologies. Logs are GOLD! Always find the logs!

In this installment we have Windows and SQL logs covered. Apologies as I learn the best way to record these kinds of videos. I can only promise they will get better ;-)

Searching Windows Logs
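For quick reference alongside the video, here are a couple of command-line starting points for Windows event logs; the XPath filter shown is just one example (Level=2 means errors):

```
:: List all available event logs:
wevtutil el

:: Show the 20 most recent errors from the System log as readable text:
wevtutil qe System /c:20 /rd:true /f:text /q:"*[System[(Level=2)]]"
```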

Searching SQL Logs
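Likewise, the SQL Server error log can be searched without leaving a query window. A sketch using sp_readerrorlog's documented parameters (log number, log type, search string); the server name is a placeholder:

```
:: Read the current SQL Server error log (0), filtering for "error"
:: (second parameter: 1 = SQL error log, 2 = SQL Agent log):
sqlcmd -S SQLSERVER -Q "EXEC sp_readerrorlog 0, 1, 'error'"
```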

Exchange 2010 SP1 Stretched DAG node

Just completed an interesting deployment and I thought I would capture a few key points of interest.

The scenario is extending an existing Exchange Database Availability Group to include a copy in an offsite datacenter connected over a 10 Mb WAN with 50 ms of latency to the local DAG HA pair. Note that this particular scenario does not include a DAG pair in the DR datacenter, just a single node, so we will not be configuring a separate witness file share for the DR side.

First off, this KB article from Microsoft goes straight to the heart of the configuration, so if you are looking to go in depth, just click and start reading.

After going through it, I think I can boil the process down to the following key points:

  • When adding new nodes to an existing DAG cluster, especially in another datacenter, server builds may vary. It is exceedingly important that the DAG members match each other in NIC count: they all have to have the same number of NICs visible to the OS.
  • If you are a straight Exchange admin who does not normally work with network routing, this process does require some work with NETSH if you have broken your replication network out from your MAPI network (and you should be doing that!!). Since a given server should only have one default gateway (on the MAPI network), your replication network won't have a default gateway; NETSH static routes establish reachability for your replication network.
    • Syntax is NETSH interface ipv4 add route <remote replication subnet> “REPLICATION NIC NAME” <local replication network default gateway>
      • netsh interface ipv4 add route “DAG REPLICATION”
    • This gets run on every DAG node so they can talk to the other replication subnets
  • Unless you are in an unusual situation with a flat network covering both your primary and DR sites, you will be working with two different subnets for your MAPI network. For each MAPI NIC you have in a unique subnet, you must also have a unique DAG virtual IP assigned in that network, and you must add that DAG IP to the DAG in the Exchange Management Console (pictured below)



  • Add your DAG node from the Exchange management console, not the failover cluster manager.
  • Reboot the new DAG member after adding it to the DAG and make sure to restart an existing Exchange management console session.
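The static-route step above can be sketched as follows. The subnet and gateway here are placeholders for your own replication networks; “DAG REPLICATION” is the replication NIC name from the syntax above.

```
:: Run on every DAG node.  10.2.20.0/24 stands in for the remote
:: replication subnet, 10.1.20.1 for the local replication gateway --
:: substitute your own values.
netsh interface ipv4 add route 10.2.20.0/24 "DAG REPLICATION" 10.1.20.1

:: Verify the route was added:
netsh interface ipv4 show route
```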

That’s it! Pretty easy process once you have gone through it once, and very handy for replicating your Exchange data to a safe location. Just be sure your latency is not over 500 ms, and expect that large mail databases are going to take a while to seed. You have ZERO control over throttling the seed process from Exchange, so if you have to throttle it, you will need to involve your network team and throttle it at the firewall.


PCoIP and USB Mic misbehaving with “Follow-Me” desktop


I’ve been working on “Follow-me” desktop solutions lately, especially VMware View. Working through different workflows and use cases, I ran into a pretty vexing peripheral issue. As it stands, the PCoIP protocol has an issue with how it handles bi-directional audio through a USB microphone.

One of the new features of recent PCoIP releases is support for connecting isochronous USB devices. This type of device has a bandwidth guarantee associated with it and is essentially given privileged status. Things like microphones! You have to have a great connection, or your voice input comes out sounding all garbled and scratchy when put over the network (connecting your physical desktop to your virtual desktop). This actually works quite well, and I have to give PCoIP props for how well it performs. A good guide to the basics of getting things rolling is found here.

However, one particular use case makes apparent a big problem right now with PCoIP and USB microphones, the “Follow-me” desktop. Normal behavior for USB redirection goes along the lines of….

  1. Disconnect device from host
  2. Reconnect device to target virtual desktop

This is the virtual USB hub included with the View client making this happen. When you disconnect your physical computer from your View desktop, the following steps are supposed to happen:

  1. Disconnect device from target virtual desktop
  2. Reconnect device to host

This is all because only one system can “own” a USB device at a time. However, that last step is not happening for USB microphones when connecting via PCoIP. Oddly enough, the process works correctly for RDP connections, but RDP does not work well with some apps commonly used with a USB mic, like Dragon NaturallySpeaking.

What you will observe follows this pattern….

  1. First connection from a “fresh” host PC to a Virtual desktop ends with the microphone working.
  2. The user logs off of their View desktop
  3. Any user who then tries to log on from that PC will be unable to connect the microphone. You will see an error similar to “Cannot connect <device name>. It may be in use by another application”.
  4. If you unplug the USB microphone and plug it back in to the physical PC, you will see the USB composite device in a disabled state in device manager.

There is a VMware KB regarding this issue here that describes in great detail what error messages you might see, and describes their current official workaround.

The core of the solution, if you can call it that right now, is to remove the disabled USB Composite Device corresponding to your mic from Device Manager, while it is plugged in, after it has “failed” and will no longer connect to View desktops. This does reset the device so that it can successfully work with another virtual desktop, but it is far too involved for an average user to perform on a near-constant basis.

If you want to script this to take a lot of the pain out, Microsoft actually has a nifty command-line utility that lets you manipulate Device Manager. DevCon is the name, and it can be a lot of fun… but it can have… ahh… unfortunate side effects if you aren’t careful.

Regardless, if used correctly, you can fire off DevCon to remove the specific VID/PID of the USB mic from the host PC (specifically the USB Composite Device). However, end users would still need to physically unplug the mic and plug it back in before the device is fully reset and ready to connect to another virtual session.
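A sketch of what that scripted step could look like. The VID/PID shown is a placeholder, so list your devices first and substitute your mic's actual hardware ID:

```
:: List USB devices so you can find the mic's VID/PID:
devcon find "USB\VID_*"

:: Remove the stale composite device (placeholder IDs shown):
devcon remove "USB\VID_047F&PID_C025"
```

Be careful with wildcards in the remove pattern; matching more devices than you intended is exactly the kind of unfortunate side effect mentioned above.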

Feelers are out to Teradici and VMware to see if there is a more elegant solution to be found. It sure would be nice to be able to treat USB microphones as equal citizens in the mobile virtual world!