It occurs to me that I had better fix that NFS issue I am having. Why? With five servers clustered and only three of them able to mount the NFS datastore holding the VMs, is there a chance of DRS moving a VM onto a server that cannot talk to the VM’s NFS origin? I do not think so, but if it can, things would fail. If it cannot, then my cluster is only as good as three servers, not five.
My strategy: mount at the command line on one of the ESX servers first to test. If that fails, unmount the same NFS share from one of the other servers and try to remount it from within VIM. That should tell me quite a lot about what is going on, I hope. The vmknics on four of the servers (two that can mount and two that cannot) are on the same subnet, which differs from the subnet the NFS export is on. So why can two mount, but the other two not? They fail instantly, so it is not a timeout. The firewalls are all off for now, so they are not part of the issue either.
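The command-line test might look something like this on an ESX 3.x host. The NFS server IP, export path, and datastore label below are placeholders, not my actual environment:

```shell
# List the vmknics to confirm which subnet each host's VMkernel port is on
esxcfg-vmknic -l

# Check VMkernel-level reachability to the NFS server (IP is a placeholder)
vmkping 10.0.1.50

# Attempt the mount by hand; an instant failure here points at routing or
# export permissions rather than a timeout
esxcfg-nas -a -o 10.0.1.50 -s /vol/vmstore nfs-datastore1

# Confirm what the host currently has mounted
esxcfg-nas -l
```

If `vmkping` fails but a plain `ping` from the service console works, that would finger the vmknic/subnet setup rather than the NFS server itself.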
And of course, dig through the logs on each of the servers – /var/log/messages, /var/log/vmkernel, /var/log/vmksummary, and /var/log/vmkwarning at a minimum.
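Assuming the logs sit under /var/log/ as on a stock ESX host, a quick filter for NFS-related entries is all the digging usually takes. The sample log line below is fabricated for illustration; on a real host you would grep the actual files:

```shell
# On a real host: grep -i nfs /var/log/vmkernel /var/log/vmkwarning
# Here a fabricated sample line stands in for the real vmkernel log.
printf 'vmkernel: 0:00:12:34.567 cpu1: WARNING: NFS: 982: Connect failed for client\n' > /tmp/vmkernel.sample
grep -i 'nfs' /tmp/vmkernel.sample
```

A case-insensitive grep matters because the vmkernel tags NFS messages in mixed case depending on the module emitting them.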
My task list has otherwise been eradicated in the past week (YES!) – outside of NFS, all that really remains is for me to build a golden master of Windows Server 2003, and maybe fork some application templates (DHCP, DNS, print, AD, web, SQL, FTP, etc.) off of it. Cake, right?