Disk2vhd v1.4

How funky is this

Source: http://technet.microsoft.com/en-nz/sysinternals/ee656415%28en-us%29.aspx

Disk2vhd is a utility that creates VHD (Virtual Hard Disk – Microsoft’s Virtual Machine disk format) versions of physical disks for use in Microsoft Virtual PC or Microsoft Hyper-V virtual machines (VMs). The difference between Disk2vhd and other physical-to-virtual tools is that you can run Disk2vhd on a system that’s online. Disk2vhd uses Windows’ Volume Snapshot capability, introduced in Windows XP, to create consistent point-in-time snapshots of the volumes you want to include in a conversion. You can even have Disk2vhd create the VHDs on local volumes, including ones being converted (though performance is better when the VHD is on a different disk from the ones being converted).
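Disk2vhd can also be driven from the command line; a minimal sketch (the output path here is just an example):

```shell
:: Usage: disk2vhd <[drive: [drive:]...]|[*]> <vhdfile>
:: Snapshot every volume on the system into a single VHD on D:
disk2vhd * d:\vhd\snapshot.vhd
```

Because the copy is taken from a VSS snapshot, the source volumes stay online while the VHD is being written.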

Problems upgrading #VMWare ESX 4.0 to 4.0 Update 1

This is a bit scary

Source: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1016070

When attempting to upgrade ESX 4.0 to ESX 4.0 Update 1 (U1), you may experience these symptoms:

  • Upgrade operation may fail or hang and can result in an incomplete installation
  • Upon reboot, the host that was being upgraded may be left in an inconsistent state and may display a purple diagnostic screen with the following error:
    COS Panic: Int3 @ mp_register_ioapic
Who is affected
  1. Customers using VMware vSphere 4 upgrading to ESX 4.0 U1 on HP ProLiant systems with a supported version of the HP Insight Management Agents running.
  2. Customers running rpm commands on systems from any vendor while upgrading to ESX 4.0 U1.

#VMware #ESX 4.0 Update 1 and VMware vCenter Server 4.0 Update 1

Thanks Virtual Commander for the links: http://www.vmware.com/support/pubs/vs_pages/vsp_pubs_esx40_u1_vc40_u1.html



What’s new?

The following information provides highlights of some of the enhancements available in this release of VMware ESX:

VMware View 4.0 support – This release adds support for VMware View 4.0, a solution built specifically for delivering desktops as a managed service from the protocol to the platform.

Windows 7 and Windows 2008 R2 support – This release adds support for 32-bit and 64-bit versions of Windows 7, as well as 64-bit Windows 2008 R2, as guest OS platforms. In addition, the vSphere Client is now supported and can be installed on a Windows 7 platform.

Enhanced Clustering Support for Microsoft Windows – Microsoft Cluster Server (MSCS) for Windows 2000 and 2003, and Windows Server 2008 Failover Clustering, is now supported on a VMware High Availability (HA) and Distributed Resource Scheduler (DRS) cluster in a limited configuration. HA and DRS functionality can be effectively disabled for individual MSCS virtual machines, as opposed to disabling HA and DRS on the entire ESX/ESXi host.

Enhanced VMware Paravirtualized SCSI Support – Support for boot disk devices attached to a Paravirtualized SCSI (PVSCSI) adapter has been added for Windows 2003 and 2008 guest operating systems. Floppy disk images containing the driver are also available for use during Windows installation (press F6 to install additional drivers during setup). The floppy images can be found in the /vmimages/floppies/ folder.

Improved vNetwork Distributed Switch Performance – Several performance and usability issues have been resolved, resulting in the following:

  • Improved performance when making configuration changes to a vNetwork Distributed Switch (vDS) instance when the ESX/ESXi host is under a heavy load
  • Improved performance when adding or removing an ESX/ESXi host to or from a vDS instance

Increase in vCPU per Core Limit – The limit on vCPUs per core has been increased from 20 to 25. This change raises the supported limit only; it does not include any additional performance optimizations. Raising the limit allows users more flexibility to configure systems based on specific workloads and to get the most advantage from increasingly faster processors. The achievable number of vCPUs per core depends on the workload and specifics of the hardware.

Xeon Processor 3400 Series – Support for the Xeon processor 3400 series has been added.
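To put the new limit in context, the supported ceiling scales with physical core count; a quick back-of-the-envelope calculation (the 8-core host is just an example):

```python
# Supported vCPUs-per-core limit raised from 20 to 25 in ESX 4.0 U1.
OLD_LIMIT = 20
NEW_LIMIT = 25

def max_supported_vcpus(physical_cores: int, per_core_limit: int = NEW_LIMIT) -> int:
    """Upper bound on vCPUs the host will support (a support limit, not a performance guarantee)."""
    return physical_cores * per_core_limit

# Example: a dual-socket quad-core host (8 physical cores)
print(max_supported_vcpus(8, OLD_LIMIT))  # 160 under ESX 4.0
print(max_supported_vcpus(8))             # 200 under ESX 4.0 U1
```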

Resolved Issues – In addition, this release delivers a number of bug fixes, which are documented in the Resolved Issues section.

VMware vSphere 4: Exchange Server on NFS, iSCSI, and Fibre Channel

Thanks to Ash for pointing me at this: http://www.vmware.com/files/pdf/vsphere_perf_exchange-storage-protocols.pdf

Microsoft Exchange Server 2007 is a storage-intensive workload used by a large number of IT organizations to provide email for their users. In this paper, a large installation of 16,000 Exchange users was configured across eight virtual machines (VMs) on a single VMware vSphere 4 server. The performance of this configuration was measured using the Fibre Channel, iSCSI, and NFS storage protocols. The results show that each protocol achieved great performance, with Fibre Channel leading the way and iSCSI and NFS following closely behind.

VMware ESX, Windows 2008 and a sluggish mouse

You know how, when you install Windows 2003 on ESX and put the VMware Tools on, you get the annoying prompt to set Hardware acceleration to Full?

Well, when you install Windows 2008 you don’t get that prompt, and the mouse is really sluggish.

Thanks to Ash for pointing out that you still need to set the Hardware acceleration to Full yourself.

So clear the desktop, right-click and choose Personalize.  Click Display Settings, Advanced Settings, Troubleshooting, and then move the slider all the way to Full.  Click OK.

On all the servers I have tried this on, it wanted a reboot afterwards!

Moving from VMWare Server to Hyper-V

How hard do you think this would be?  Well, damn hard in fact.  So here is the deal: I have a Mac Pro running Windows 2008 and VMware Server.  I had an issue with my graphics card, so I moved my VMware stuff to the Tosh laptop.

So my graphics card is now okay, and I removed VMware Server and installed Hyper-V.  My plan was to migrate this blog (win2k3), two DCs (both win2k3, one 64-bit) and Exchange 2007 (win2k3 64-bit).

The basic principles are:

  1. Remove VMWare Tools
  2. Convert the VMWare VMDK file to VHD format using Vmdk2Vhd (http://vmtoolkit.com)
  3. Start it up in Hyper-V

Easy, hey … yeah right.  The blog moved over with no problems.  The DCs converted, but then blue-screened in a loop with a 0x0000007B – “Inaccessible Boot Device” error. It appears that this is because my VMs used SCSI virtual disks and had no Primary IDE Channel, while Hyper-V assumed that my converted disk was IDE and located on the Primary IDE Channel. DOH!  So after a lot of googling I found a solution.

  1. Add a new IDE disk drive of any size to your VM.  Make sure that you select “Adapter: IDE 0 Device: 0” under “Virtual Device Node” while creating the new disk
  2. Boot up the VM and check that the new drive is available and that in device manager you can see a primary IDE channel
  3. Power off the VM and convert the disks (excluding the one you added above) using Vmdk2Vhd.

So now my VMs could boot and Windows would start.  Now for the next problem.  As I converted the Domain Controllers something funky was happening.  When I tried to install the Integration Services it would barf with a “System-Level Error Occurred While Verifying Trust” error.  The NIC wasn’t discovered, so I was thinking CRL (certificate revocation list), but none of the magic I tried before would work.

That’s when I found the good ole Legacy Network Adapter.  So I shut down the VM, added the legacy NIC and started it back up.  The 32-bit VMs were sweet: the drivers installed and I could install the Integration Services.  But I couldn’t find the 64-bit drivers anywhere. So more googling, and I found a post that said: “On a healthy Windows Vista machine (x64 or x86, I tested on Vista SP1 x86), navigate to %windir%\system32\DriverStore\FileRepository, and copy the directory named dc21x4vm.inf_7d8c6569 to a CD (be sure to include hidden files – 4 files total), or other media that you can access from the x64 VM. Then use Device Manager to update the NIC driver in the VM.”

As it goes I had a 64-bit machine handy, so I created an ISO with the drivers in, installed them, the NIC worked, and I managed to install the Integration Services. So … the full steps are:

  1. Take note of the server’s IP configuration! Very important, as you will lose it
  2. Remove VMWare Tools
  3. Check to see if the VM has a Primary IDE Channel
  4. Shut VM down
    1. If no Primary IDE Channel add a new IDE disk drive of any size to your VM.  Make sure that you select “Adapter: IDE 0 Device: 0” under “Virtual Device Node” while creating the new disk
    2. Boot up the VM and check that the new drive is available and that in device manager you can see a primary IDE channel
    3. Power off the VM
  5. Convert the VMware VMDK file to VHD format using Vmdk2Vhd (http://vmtoolkit.com)
  6. Create a new VM on Hyper-V and attach the converted disks.
  7. Power up in Hyper-V
    1. If the server cycles a Blue Screen, check above (Primary IDE Channel)
  8. Install Integration Services
    1. If it fails with a “System-Level Error Occurred While Verifying Trust” error, shut the VM down, add a Legacy Network Adapter and restart
    2. If the NIC driver is not automagically found, get hold of %windir%\system32\DriverStore\FileRepository\dc21x4vm*, create an ISO and install the drivers.
    3. Try Installing Integration Services again
    4. Remember to remove the Legacy Network Adapter once the Integration Services are working correctly
  9. Reboot
  10. Done. All should be okay.  You will need to reconfigure the network IPs, but that is it
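For step 8.2, the driver copy and ISO build can be scripted; a sketch assuming oscdimg from the Windows AIK is installed and C:\nicdriver is used as a staging folder (the staging path is my own example, not from the original post):

```shell
:: Copy the NIC driver folder from a healthy Vista machine,
:: including the hidden files (/h); /i treats the target as a directory
xcopy "%windir%\system32\DriverStore\FileRepository\dc21x4vm.inf_7d8c6569" C:\nicdriver\ /h /i

:: Build an ISO from the staging folder to mount in the x64 VM
:: (-n allows long file names)
oscdimg -n C:\nicdriver C:\nicdriver.iso
```

Mount C:\nicdriver.iso in the VM's DVD drive, then point Device Manager's update-driver wizard at it.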

And finally .. thanks to these two links for pointing me in the right direction:

Phew, glad that is over!

Hyper-V, SCVMM and Windows 2008

This is kinda interesting: we migrated from a VMware Server proof of concept to Hyper-V (by rebuilding it from scratch).  As part of this we used System Center Virtual Machine Manager (SCVMM) templates.

So everything was going fine, but since the tail end of last week strange things started to happen.  Nothing was logged in any event log, but it started with weird authentication issues and general strange behaviour of Windows 2008.

After using Netmon and turning on debug logging for Netlogon, it looked like a Kerberos problem, so Neil did some digging.  First he created a 64-bit Windows 2008 server from media and it all worked okay.  Then he templated it, built another server from the template, and it had the same problems as before :-|  He tried a few more and they all had the same problems, whereas a clean media build worked fine.

Don’t ask me; dunno what the hell is going on with SCVMM, but it looks like some kind of undocumented feature, or a step that is required before you template.

So, note to self: if you’re using SCVMM, don’t template Windows 2008, just build it from media.