
How to create a Microsoft Hyper-V cluster

Prerequisites

We will need:

  • An iSCSI-enabled network-attached storage device, like a QNAP.
  • Two PCs/servers with the same number of NICs (4+ NICs needed), the same amount of RAM and the same CPU type.
  • A working Active Directory domain on Windows Server 2012 R2.

For proper Hyper-V operation you will need n+1 NICs, where n is the number of VMs hosted on the hypervisor.

For the cluster we are about to build we need 3 NICs on each node of the cluster, plus n NICs for the n VMs we are about to host.

Make a good sketch of your solution, so that NIC IPs and configuration are handy at all times during installation or troubleshooting.


On the sketch above you will see that each of the nodes (NODE03/NODE04) has 3 NICs configured (the remaining NICs are attached to virtual switches in Hyper-V and therefore play no part in this sketch):

  • 1x NIC on each node connected to the network switch (that’s the interface we will use for joining the PC/server to our domain). In our scenario NODE03 has the IP 10.124.10.3 and NODE04 has 10.124.10.4.
  • 1x NIC on each node connected to the RSO/NAS (directly or via a switch). In our scenario NODE03 has the IP 172.16.0.1 and NODE04 has 172.16.0.2.
  • 1x NIC on each node connected to the other node (no need for a crossover cable if auto-MDIX is supported by your NICs; it’s standard nowadays). We call this the heartbeat cable: through it each cluster node gets the status of its partner node. In our scenario NODE03 has the IP 192.168.1.1 and NODE04 has 192.168.1.2.
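The addressing above can also be set from an elevated command prompt instead of the GUI. A sketch for NODE03 follows; the adapter names (Production, RSO, Heartbeat), the /24 masks and the 10.124.10.1 gateway are our assumptions from the sketch, so adjust them to your own plan (and repeat on NODE04 with its .4/.2/.2 addresses):

```
rem Domain-facing NIC: full IP configuration, including the default gateway
netsh interface ipv4 set address name="Production" static 10.124.10.3 255.255.255.0 10.124.10.1
rem Storage NIC: IP only, no gateway
netsh interface ipv4 set address name="RSO" static 172.16.0.1 255.255.255.0
rem Heartbeat NIC: IP only, no gateway
netsh interface ipv4 set address name="Heartbeat" static 192.168.1.1 255.255.255.0
```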

Join all on the same domain

Ensure all nodes (Hyper-V servers) and the QNAP are joined to the same Active Directory domain.
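If you prefer the command line, a node can be joined to the domain with netdom; a sketch, where the domain name and account are placeholders for your own:

```
rem Join this server to the domain and reboot 10 seconds later
netdom join %COMPUTERNAME% /domain:yourdomain.local /userd:yourdomain\administrator /passwordd:* /reboot:10
```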

Organise and name your nodes NICs

Configure the network cards (NICs) for the failover cluster: rename all network cards and make the names identical on both servers, so you save yourself questions later when resources move automatically between nodes. Be very careful to identify the physical location of each NIC.

a. Rename all Network cards


b. Rename the Domain Network NIC as Production and deselect unnecessary protocols and features

Whether to enable or disable IPv6 is up to the installer; proceed according to your internal network specs. In our scenario, IPv6 was unchecked.

Make sure “Register this connection’s addresses in DNS” is checked.

At the WINS tab, enable the “NetBIOS over TCP/IP” option.


c. Rename the storage NICs as RSO and deselect unnecessary protocols and features

IPv6 was deselected in our scenario in order to avoid IPv6 communication failures.

d. Rename the heartbeat NICs as Heartbeat and deselect unnecessary protocols and features

Uncheck IPv6.

Watch it now! Assign the heartbeat IPs with no gateways and no DNS servers.

Make sure “Register this connection’s addresses in DNS” is NOT checked,

and make sure NetBIOS over TCP/IP is NOT enabled!
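The same heartbeat hardening can be sketched from an elevated prompt. The adapter name Heartbeat is ours, and the WMI index below is a placeholder; NetBIOS is set per adapter through WMI (SetTcpipNetbios value 2 = disable), so check the index that wmic reports for your heartbeat NIC first:

```
rem No DNS servers at all, and do not register this connection in DNS
netsh interface ipv4 set dnsservers name="Heartbeat" source=static address=none register=none
rem Find the WMI index of the heartbeat adapter, then disable NetBIOS over TCP/IP on it
wmic nicconfig get index,description
wmic nicconfig where index=12 call SetTcpipNetbios 2
```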

e. Set the Network Priority (arrange binding order)

Open your network connections through Network and Internet -> Network Connections, then click Advanced -> Advanced Settings.

Arrange the adapter binding order as follows:

  1. Production
  2. Storage
  3. Heartbeat

This is very important to how each node responds and reacts to network requests. If you omit this step, latencies in cluster behaviour related to network access or interoperability with other network resources may occur.

Configure NAS/RSO

We assume you have already configured your RAID. (The best results we have achieved on COTS systems are with RAID 10 and RAID 6.) In our scenario we used a QNAP with a 5-HDD RAID 6 array.

a. Configure Shared Storage (iSCSI Target)

Fire up your iSCSI configuration wizard and enable the iSCSI Target Service on its default port, 3260.

Through the iSCSI storage’s configuration wizard, select “Create an iSCSI target with a mapped LUN (Logical Unit Number)”.

VERY IMPORTANT!

“Target Name” and “Target Alias” should be Quorum. That’s the most important shared storage resource of the cluster, since the cluster configuration is exchanged between nodes through it.

Make sure you check “Enable clustering access to the iSCSI target from multiple initiators”; this avoids the data corruption that occurs with simultaneous iSCSI connections, and prepares this part of the storage for CSVFS.

Don’t use CHAP authentication unless needed.

Don’t use more than 1 GB for the Quorum, since you will never exceed it.

Allocate space from your storage pool. For performance purposes we select “Allocate space from a storage pool as an iSCSI LUN”; the disk space is pre-allocated, which also makes your cluster storage safer in case data rapidly fills the rest of the free space.

Repeat the above steps two more times, once for each of the following names:

  1. ClusterDisk1, with allocated space as preferred
  2. ClusterDisk2, with allocated space as preferred

You need at least one cluster disk; if you need more resources, prepare more.

At the iSCSI target list you will see the iSCSI targets you just created. Quorum and the cluster disks should appear as “Ready” after the initialization of the iSCSI storage.

After finishing with our NAS configuration, we proceed with NODES.

b. Connect to iSCSI targets from both Nodes

Both nodes must be connected to our storage using the iSCSI Initiator (through the Server Manager tools).

From Server Manager, select Tools > iSCSI Initiator. A message will come up informing you that the iSCSI Initiator service will start automatically the next time Windows loads.

On the Discovery tab, hit “Discover Portal”.

Be careful to enter the IP of the RSO interface that belongs to the nodes’ RSO network, e.g. 172.16.0.xxx in our example.

Discovery should find the IP address and port of your iSCSI target (make sure your cluster nodes’ RSO NICs and the iSCSI RSO are on the same switch or VLAN).

Following that, through the Targets tab, you should be able to see your disks (including Quorum) as “Inactive”.

Proceed to connect them: go back to the Targets tab, hit Refresh, and when the list is populated hit Connect.

Do the above on both nodes
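The portal discovery and login can also be done with the built-in iscsicli tool; a sketch, where 172.16.0.10 stands in for your QNAP’s RSO address and the IQN is a placeholder for whatever ListTargets reports for your Quorum target:

```
rem Add the NAS as a target portal and list the targets it exposes
iscsicli QAddTargetPortal 172.16.0.10
iscsicli ListTargets
rem Log in to a target (repeat per target, on both nodes)
iscsicli QLoginTarget iqn.2004-04.com.qnap:ts-459:iscsi.quorum
```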

c. Initialize disks

On the first node open Disk Management console. Right click on each of the new hard disks appearing and select online.


Initialize the disk

and create a new simple volume.

Assign the drive letter Q for Quorum; we don’t care what you put on the rest.


Format it as NTFS and name it Quorum


Proceed with the same process for ClusterDisk1 and ClusterDisk2; put whatever drive letters you like.

Launch Disk Management on the second node and bring the already-prepared disks online.

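Bringing the Quorum disk online and formatting it can also be scripted with diskpart; a sketch, assuming the Quorum LUN shows up as disk 1 (check the output of `list disk` first):

```
rem quorum.txt - run with: diskpart /s quorum.txt
select disk 1
online disk
attributes disk clear readonly
create partition primary
format fs=ntfs label=Quorum quick
assign letter=Q
```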

HYPER-V installation on both Nodes

Through the Manage tab, select “Add Roles and Features”.


From Server Roles, select the Hyper-V role and proceed.


Include management tools and proceed adding the feature.

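The role can also be enabled without the wizard; a sketch using DISM from an elevated prompt (a restart is required afterwards):

```
rem Enable the Hyper-V role and all of its sub-features
dism /online /enable-feature /featurename:Microsoft-Hyper-V /all
```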

Create Virtual Switch (Production) on both Nodes


On your Hyper-V Manager console select the Virtual Switch Manager action on the right.


Create a new virtual network switch of type External. Make sure you don’t select ANY of your RSO or Heartbeat NICs!


Name the Virtual switch, assign appropriate NIC and check the option “Allow management operating system to share this network adapter”.

Do the same on both nodes.
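If you’d rather script the switch, the same external switch can be created from cmd by calling into PowerShell’s Hyper-V module; Production is our NIC name, so adjust it to yours:

```
rem Create an external virtual switch bound to the Production NIC,
rem shared with the management OS (same as the checkbox in the GUI)
powershell -Command "New-VMSwitch -Name 'Production' -NetAdapterName 'Production' -AllowManagementOS $true"
```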

Install Failover Cluster Roles Features on both Nodes

Through “Add Roles and Features”, we proceed to “Features”.


Select the “Failover Clustering” and proceed.


Do the same on both nodes.
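A scripted equivalent for both nodes, calling PowerShell from cmd (Install-WindowsFeature is available on Windows Server 2012 R2):

```
rem Install the Failover Clustering feature together with its management tools
powershell -Command "Install-WindowsFeature Failover-Clustering -IncludeManagementTools"
```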

Validate the cluster configuration

Pick one of the two nodes and run the cluster validation wizard.

The steps shown below validate the cluster’s failover configuration.

Once all nodes are “Validated”, we can proceed to create the failover cluster.
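The validation wizard has a command-line twin; a sketch, run from either node:

```
rem Run the full cluster validation against both nodes and produce an HTML report
powershell -Command "Test-Cluster -Node NODE03,NODE04"
```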


Create the Hyper-V Failover Cluster


We proceed to create the cluster through Failover Cluster Manager.

Make sure all required servers have been selected (separated by a comma “,”).


Provide the cluster name, and verify that the addresses are correct for each network that is part of the failover cluster.
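Cluster creation can likewise be scripted; the cluster name HVCLUSTER01 and the management address 10.124.10.5 below are placeholders for your own values:

```
rem Create the failover cluster from both nodes with a static management IP
powershell -Command "New-Cluster -Name HVCLUSTER01 -Node NODE03,NODE04 -StaticAddress 10.124.10.5"
```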


Your cluster creation has completed; review the summary again.

Rename Cluster Networks for easy understanding and mapping to the physical node NICs.


Through Failover Cluster Manager, we configure networks’ names and communication permissions.

Specifically, at Heartbeat network we ONLY allow cluster network communication.

At production Network, we allow cluster network communication AND also allow clients to connect through.

At Storage network we DO NOT allow any cluster network communications.

The above steps also give us the chance to check once again that the subnets have been assigned correctly.


Enable Cluster Shared Volumes


Following the cluster network configuration, we are ready to ADD the storage disks to our cluster.

Through Failover Cluster Manager -> Storage -> Disks, we should see our cluster disks marked as “AVAILABLE STORAGE”. Selecting them one by one, we add them to “Cluster Shared Volumes”.

WE DO NOT TAKE ANY ACTION ON QUORUM DISK!


At the end of process all added disks should be marked as “Assigned to Cluster Shared Volume”.
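Adding the disks to Cluster Shared Volumes can also be scripted; the resource names below are the defaults the wizard usually assigns, so check the output of Get-ClusterResource first:

```
rem List the cluster resources, then promote the data disks (never the quorum disk) to CSV
powershell -Command "Get-ClusterResource"
powershell -Command "Add-ClusterSharedVolume -Name 'Cluster Disk 1','Cluster Disk 2'"
```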

Create a VM and Configure for High Availability or make an Existing VM Highly Available

Test the failover cluster by shutting down the node that holds the VM resources. If you see the VMs moving to the other node, you are ready to start serving clients. Further tests should be made regarding the VMs’ functionality.

Written and tested by Creative People Team, Costantinos Koptis, Andreas Lavazos and Chrysostomos Psaroudakis


Exchange 2003 failing to start: restoring while keeping the latest mailbox store, using eseutil /r /i

I just came across a failing-to-boot VM running Exchange 2003.

Thank God:
I had a backup of the VM, and
I had separated the Exchange mailboxes from the system and put them on another attached VHD.

When I tried to attach the system VHD to the host, a failure came up… I don’t even remember the exact error message, since I was rather frustrated! Fortunately the mailbox VHD attached successfully, somehow luckily! So the mailboxes were intact!

I restored both VHDs from backup, keeping in mind that if I replaced the mailbox VHD, users would lose at least one day of mail (don’t even want to think about it).

Therefore I kept the present mailbox VHD, while restoring the system VHD from the previous night’s backup.

Booted the VM and all services came up…with no problem…Come on, be a sport, can’t be that easy!

Well, after launching Exchange System Manager, the Mailbox Store and Public Folder Store could not be mounted!

Ok…fingers crossed and we fire up the restore process.

Navigate to your logs folder (the one the E00xxx files are in, e.g. x:\mdbdata\).

Copy in x:\mdbdata the
eseutil.exe
and
ese.dll

that you will find in c:\program files\Exchsrvr\bin

These files need to be in the same folder as the logs (just to make the process easier, without losing time with paths).

Fire up command prompt and type

x:\mdbdata\eseutil /r e00 /i

and hit Enter. (Here e00 is the log file base name; adjust it if yours differs.)

The /i switch will ignore mismatched/missing database attachments.

The process of regeneration of Exchange databases will start and you may monitor it by refreshing your application log.

Be patient; the more E00 logs you have, the more time it will take.

After eseutil has finished its job go ahead and manually mount your Mailbox and Public Folder store.

Till next time:) Goodnight!

see more on http://www.creativepeople.gr

Windows 2008 server Backup with HyperV role fails

Event id 521 or others on W2k8 event log.
The backup operation that started at ‘‎xxxxxxxxxxxxxxxxxxxxxxxxx’ has failed because the Volume Shadow Copy Service operation to create a shadow copy of the volumes being backed up failed with following error code ‘2155348129’. Please review the event details for a solution, and then rerun the backup operation once the issue is resolved.
Step 1.
Fire up cmd on the W2k8 server and check the VSS writers:
vssadmin list writers
All writers should look like:
Writer name: ‘xxxxxxxxxxxxxxxxxxxxxxxxxxxx’
   Writer Id: {xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx}
   Writer Instance Id: {xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx}
   State: [1] Stable
   Last error: No error
If all your writers have state [1] Stable and last error “No error”, then the problem is deeper and comes from your virtual machines. If not, first check why VSS is failing and consult your Event Viewer for more details before going on.
There are 2 possible reasons for this behavior:
  1. Either there is no space left on at least one of your VMs. Note that for VSS to work correctly, free space should be at least 1 GB for VHDs of 100 GB.
  2. Or at least one VSS writer fails on one of your VMs. Go back to step 1 and execute vssadmin list writers on every VM you have; at least one will fail. On the one that fails (I have personally seen it on a W2k3 Exchange VM where the Exchange Writer was in error), make a batch file containing the commands below (I have put in some pauses so you can see how the procedure works out). Save the batch as whatever-you-like.bat and WATCH IT!!! Before re-registering VSS and the required DLLs you have to switch your path to windows\system32!!! If you don’t, VSS will not actually run, and when you run vssadmin list writers nothing will come up. So pay attention!!!

Step 1:
Follow the info at http://support.microsoft.com/kb/940349
Download and install the update.
Restart.

Step 2:
Open cmd and run vssadmin list writers
No error should appear.
Check if backup works. (It likely won’t; instead of running the whole W2k8 hypervisor backup, try backing up the current VM with its own NTBackup: try the system state or any folder.)
If it does not work, proceed to step 3:

Step 3:
Open Regedit.
Locate and export the following key:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\EventSystem\{26c409cc-ae86-11d1-b616-00805fc79216}\Subscriptions
Export the key somewhere so you can get it back if something goes wrong.
Now delete the key (no worries, Windows will recreate it).

Step 4:
Restart each of the following services, in this exact order:
– COM+ Event System
– COM+ System Application
– Microsoft Software Shadow Copy Provider
– Volume Shadow Copy
If one of the services is stopped, change its startup type to Automatic and start the service.

Step 5:
Open cmd and run vssadmin list writers
If errors still show up… go to the last resort, step 6.

Step 6:
Watch it: you have to switch your path to windows\system32, otherwise you will see no writers in vssadmin.
Make a batch file copy-pasting the following:

rem Stop the VSS services before re-registering their components
net stop vss
net stop swprv
pause
rem The registrations MUST be run from windows\system32
cd /d %windir%\system32
pause
rem Re-register the COM and VSS components
regsvr32 ole32.dll
regsvr32 oleaut32.dll
regsvr32 vss_ps.dll
vssvc /register
regsvr32 /i swprv.dll
regsvr32 /i eventcls.dll
regsvr32 es.dll
regsvr32 stdprov.dll
regsvr32 vssui.dll
regsvr32 msxml.dll
regsvr32 msxml3.dll
regsvr32 msxml4.dll
pause
rem Start the services again and list the writers
net start vss
net start swprv
pause
vssadmin list writers
pause
The batch will list the VSS writers at the end. Check that all of them are Stable, then RESTART!!!!

Go back to your HyperV and try to back up.

You can check whether the backup runs at any time by running the VM’s NTBackup; you don’t need to run the whole hypervisor backup.
Hope it saves you some time.
Best regards,

Creativepeople.gr

Alternative ways… on how to convert a physical HDD to a VHD

Apart from the usual Disk2vhd tools that we use regularly, what about turning an already configured and running HDD into a virtual one?
I came across a case where the machine was lost, but the hard drive had survived.
After a few surface checks, the disk appeared to have some bad clusters/corrupted files (the OS was Red Hat Enterprise Linux 5.3 Server, x86).
1. I took the hard drive and put it in a working Win7 machine (x86/x64 does not matter).
2. Created a VHD and attached it on the Win7 machine from the Disk Management MMC console.
3. Fired up a Hiren’s BootCD and used a partitioning tool to copy the Linux partitions to the VHD. This actually failed (I had to overcome the warnings) because the HDD was unreadable in some areas, but it should generally work on a healthy HDD.
4. Fired up Norton Ghost from the Hiren’s BootCD and made a local image of the hard drive.
5. Expanded the image onto the previously made VHD.
6. Moved the VHD to my W2k8 R2 Hyper-V and it worked like a charm! (This will not work on non-Linux VHDs; I’m sure you are aware of that.)
However, while I was creating the virtual machine on my W2k8 R2 Hyper-V with SP1, I saw the following option:
“Copy the contents of the specified physical drives…”
This means that if I plug the HDD into my Hyper-V host and mount it, I can copy its contents onto a brand-new VHD…
I don’t have the time to test it right now, but if someone does, just drop a line and share your experience.
Got to get back to my lab, because my feet have left already… :)