by Brendan Griffin on 1/21/2010 1:24 PM
I recently investigated an Explorer View performance issue that a customer was experiencing: end users were reporting that connecting to SharePoint via Explorer View was taking forever (well, a few minutes). Some basic testing confirmed that it was taking in excess of three minutes to connect using Explorer View – painfully slow, and understandably end users were getting frustrated.
To give you some background on the configuration of the farm: it was running MOSS 2007 and had two load-balanced WFEs. Load balancing was being performed by a hardware device rather than WNLB.
The first thing I always like to do with performance issues is to take load balancing out of the equation. This was fairly simple to do: by updating the hosts file on a client machine to map the hostname of the SharePoint site directly to the IP address of one of the WFEs, I forced the client to connect directly to that WFE.
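For reference, the hosts entry looked something like this (the hostname and IP address below are placeholders, not the customer's actual values):

```
# C:\Windows\System32\drivers\etc\hosts
# Map the SharePoint site's hostname directly to one WFE,
# bypassing the load balancer for testing.
10.0.0.11    sharepoint.contoso.com
```

Remember to remove the entry once you've finished testing, otherwise that client will keep bypassing the load balancer.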
I then tried to connect using Explorer View; this time it was lightning fast and the performance problem had gone away – bingo, I'd found the problem! Obviously the customer couldn't disable the load balancer to fix the issue, so I had to find a solution that kept it in place. The next thing I did was run Network Monitor from a test machine to capture network traffic whilst making an Explorer View connection, to see what was happening at the network layer. The network trace was very interesting (well, as interesting as a network trace will ever be).
What I discovered was that the client machine was making a request to the IP address of the load balancer on port 445 and then 139 (the ports used for file sharing). It tried to make a connection on these ports a total of six times before eventually giving up and connecting to SharePoint on port 80, which is what it should have done from the start :)
Each time a connection to these ports failed, the client doubled the wait before the next attempt: it tried at 3 seconds into the trace, then at 6, 12, 24, 48 and 96 seconds, before finally giving up at 192 seconds and connecting on port 80.
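The doubling pattern can be sketched as follows – a simple illustration of the schedule observed in the trace, not the actual Windows client code; the starting offset and retry count are taken from the capture above:

```python
def backoff_schedule(first_attempt=3, retries=5):
    """Return the attempt times (in seconds) and the give-up time,
    assuming each wait doubles, as observed in the network trace."""
    attempts = [first_attempt]
    for _ in range(retries):
        attempts.append(attempts[-1] * 2)  # wait doubles after each failure
    give_up = attempts[-1] * 2             # client finally falls back to port 80
    return attempts, give_up

attempts, give_up = backoff_schedule()
print(attempts, give_up)  # [3, 6, 12, 24, 48, 96] 192
```

That final figure of 192 seconds lines up neatly with the "in excess of three minutes" the end users were reporting.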
As it turns out, when Explorer View is initiated it attempts to connect to the WFE on ports 139 and 445; if this fails – for example, if the port isn't open or is blocked by a firewall – it backs off and tries again. In this particular case it retried a total of five times (six attempts in all). You may be wondering how the load balancer ties into this. Well, the load balancer had been configured to listen only on port 80 and distribute that traffic to the WFEs, which is perfectly reasonable and expected, but it meant that connections made to any other port were automatically discarded. The fix is to configure the load balancer to also listen on ports 139 and 445 and distribute this traffic to the WFEs (as it currently does for port 80 traffic); this ensures that the initial connection the client makes to the IP address is successful. An alternative solution is to block ICMP requests (ping) to the load-balanced IP address.
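Once the load balancer has been reconfigured, you can verify from a client that the relevant ports are reachable with a quick TCP probe – a diagnostic sketch, with `sharepoint.contoso.com` as a placeholder for your own load-balanced address:

```python
import socket

def check_port(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, or unresolvable
        return False

for port in (80, 139, 445):
    state = "open" if check_port("sharepoint.contoso.com", port) else "closed/filtered"
    print(f"port {port}: {state}")
```

If 80 shows as open but 139 and 445 show as closed/filtered, you are in exactly the situation described above and Explorer View will sit in its retry loop before rendering.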
To summarise, when a client uses Explorer View it does the following:
1. It sends an ICMP (ping) request to the IP address of the server (or load balancer, if one is used). If this fails it goes straight to step 3; if it succeeds it goes to step 2.
2. It attempts a connection on ports 139 and/or 445 (Explorer View doesn't actually use these ports). If this fails it retries multiple times before going to step 3 – hence the delay in rendering Explorer View.
3. It makes a connection to the IP address of the server (or load balancer, if one is used) on port 80 (or whichever port the Web application is configured to listen on) and Explorer View is rendered on the client.
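The three steps above can be sketched as a simple decision flow. The probe functions here are hypothetical stand-ins injected for illustration, so the logic can be exercised without a network:

```python
def explorer_view_connect(ping, probe_smb, connect_http, retries=5):
    """Model of the connection sequence: ICMP first, then the SMB ports
    (with retries), and finally the HTTP connection that actually
    renders Explorer View."""
    if ping():                            # step 1: ICMP echo request
        for _ in range(1 + retries):      # step 2: initial SMB try + retries
            if probe_smb():
                break                     # port answered -- no further delay
    return connect_http()                 # step 3: always ends up on port 80

# When ping succeeds but ports 139/445 are dropped, the client burns
# six attempts before falling back to HTTP:
attempts = {"smb": 0}
def dead_smb():
    attempts["smb"] += 1
    return False

result = explorer_view_connect(lambda: True, dead_smb, lambda: "rendered")
print(result, attempts["smb"])  # rendered 6
```

Note that blocking ICMP short-circuits the whole sequence: if the ping fails, the SMB probes are skipped entirely and the client goes straight to port 80, which is why that is a viable alternative fix.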