Monthly Archives: February 2016

How to create a Microsoft Hyper-V cluster

Prerequisites

We will need:

  • An iSCSI-enabled Network Attached Storage (NAS), such as a QNAP.
  • Two PCs/servers with the same number of NICs (4+ NICs needed), the same amount of RAM and the same CPU type.
  • A working Active Directory domain on Windows Server 2012 R2.

For proper Hyper-V operation you will need n+1 NICs, where n is the number of VMs hosted on the hypervisor.

For the cluster we are about to build we need 3 NICs on each node, plus n NICs for the n VMs we are about to host.

Make a good sketch of your solution, so that you have the NIC IPs/configuration handy at all times during installation and troubleshooting.

[Sketch: network layout of NODE03 and NODE04 with Production, RSO/Storage and Heartbeat NICs]

In the sketch above you will see that each node (NODE03/NODE04) has 3 NICs configured (the remaining NICs are bound to virtual switches in Hyper-V and therefore play no part in this sketch):

  • 1x NIC on each node connected to the network switch (that's the interface we will use to join the PC/server to our domain). In our scenario NODE03 has the IP 10.124.10.3 and NODE04 10.124.10.4.
  • 1x NIC on each node connected to the RSO/NAS (directly or via a switch). In our scenario NODE03 has the IP 172.16.0.1 and NODE04 172.16.0.2.
  • 1x NIC on each node connected to the other node (no cross cable is needed if auto MDI-X is available on your NICs; it's a standard nowadays). We call this the heartbeat cable, through which each cluster node gets the status of its partner node. In our scenario NODE03 has the IP 192.168.1.1 and NODE04 192.168.1.2.

Join all on the same domain

Ensure all nodes (Hyper-V servers) and the QNAP are joined to the same Active Directory domain.

Organise and name your nodes' NICs

Configure the NICs for the failover cluster: rename all network cards and make the names identical on both servers, to spare yourself resource auto-move questions later. Be very careful when identifying the physical location of each NIC.

a. Rename all Network cards

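If you prefer to script this step, a minimal PowerShell sketch follows; the "Ethernet"/"Ethernet 2"/"Ethernet 3" names are assumptions about the default adapter names on your servers (and the guide uses RSO/Storage interchangeably for the NAS-facing NIC), so match them to the physical ports first.

# Minimal sketch - run on both nodes; the default adapter names below are assumptions, verify them first:
Get-NetAdapter | Format-Table Name, InterfaceDescription, MacAddress
Rename-NetAdapter -Name "Ethernet"   -NewName "Production"   # NIC connected to the LAN switch
Rename-NetAdapter -Name "Ethernet 2" -NewName "Storage"      # NIC connected to the RSO/NAS
Rename-NetAdapter -Name "Ethernet 3" -NewName "Heartbeat"    # NIC connected to the partner node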

b. Rename the Domain Network NIC as Production and deselect unnecessary protocols and features

Whether to enable or disable IPv6 is up to the installer; proceed according to your internal network specs. In our scenario IPv6 is unchecked.

Make sure "Register this connection's addresses in DNS" is checked.

On the WINS tab, enable the "NetBIOS over TCP/IP" option.

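If you want to set the Production NIC from PowerShell instead of the GUI, a minimal sketch using NODE03's example address follows; the gateway and DNS server addresses are assumptions, so adjust them to your own network.

# Minimal sketch for NODE03's Production NIC - the gateway and DNS server addresses are assumptions:
New-NetIPAddress -InterfaceAlias "Production" -IPAddress 10.124.10.3 -PrefixLength 24 -DefaultGateway 10.124.10.254
Set-DnsClientServerAddress -InterfaceAlias "Production" -ServerAddresses 10.124.10.1
Set-DnsClient -InterfaceAlias "Production" -RegisterThisConnectionsAddress $true   # keep DNS registration on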

c. Configure the RSO NICs as RSO and deselect unnecessary protocols and features

In our scenario IPv6 was deselected in order to avoid IPv6 communication failures.

d. Configure the Heartbeat NICs as Heartbeat and deselect unnecessary protocols and features

Uncheck IPv6


Watch out here! Configure the heartbeat IPs with no gateways and no DNS servers.


Make sure "Register this connection's addresses in DNS" is NOT checked,


and make sure NetBIOS over TCP/IP is NOT checked!


Your Heartbeat NIC properties should look like this

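The same settings can be scripted; a minimal sketch for NODE03's heartbeat NIC is shown below (note there is deliberately no -DefaultGateway and no DNS server), assuming the adapter was renamed "Heartbeat" as above.

# Minimal sketch for NODE03's Heartbeat NIC - no gateway, no DNS servers, no DNS registration:
New-NetIPAddress -InterfaceAlias "Heartbeat" -IPAddress 192.168.1.1 -PrefixLength 24
Set-DnsClient -InterfaceAlias "Heartbeat" -RegisterThisConnectionsAddress $false
Disable-NetAdapterBinding -Name "Heartbeat" -ComponentID ms_tcpip6          # uncheck IPv6
# Disable NetBIOS over TCP/IP (2 = disabled) on this adapter via WMI:
$hb = Get-WmiObject Win32_NetworkAdapterConfiguration | Where-Object { $_.IPAddress -contains "192.168.1.1" }
$hb.SetTcpipNetbios(2) | Out-Null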

e. Set the Network Priority (arrange binding order)

Open your network connections (Network and Internet -> Network Connections) and click Advanced > Advanced Settings.


Arrange the adapter binding order as follows:

  1. Production
  2. Storage
  3. Heartbeat

This is very important to how each node responds and reacts to network requests. If you omit this step, latencies may occur in cluster behaviour related to network access or interoperability with other network resources.


Configure NAS/RSO

We assume you have already configured your RAID (the best results we have achieved on COTS systems are with RAID 10 and RAID 6). In our scenario we used a QNAP with a 5-HDD RAID 6 array.

a. Configure Shared Storage (iSCSI Target)

Fire up your iSCSI configuration wizard and enable the iSCSI Target Service on its default port, 3260.


Through the iSCSI Storage Configuration Wizard, select to create a new iSCSI target with a mapped LUN (Logical Unit Number).


VERY IMPORTANT!

“Target Name” and “Target Alias” should be Quorum.

Clustering access to iSCSI target from multiple initiators must be “Enabled”.

Name it Quorum. This is the most important shared storage resource of the cluster, since the cluster configuration is exchanged between the nodes through it.

Make sure you check "Enable clustering access to the iSCSI target from multiple initiators" in order to avoid the data corruption that can occur with simultaneous iSCSI connections, and to prepare this part of the storage for CSVFS.


Don't use CHAP authentication unless needed.

Don't allocate more than 1 GB for the Quorum, since you will never exceed it.

Allocate space from your storage pool.

For performance purposes we select "Allocate space from a storage pool as an iSCSI LUN". The disk space is also pre-allocated, which makes your cluster storage more resilient if data is rapidly written to the rest of the pool's free space.


Repeat the above steps two more times, once for each of the following names:

  1. ClusterDisk1, with allocated space as preferred
  2. ClusterDisk2, with allocated space as preferred

You need at least one cluster disk; if you need more resources, prepare more.

At the iSCSI target list you will see the iSCSI targets you just created.


Quorum and Cluster Disks should appear as “Ready” after the initialization of the iSCSI storage.

After finishing the NAS configuration, we proceed with the nodes.

b. Connect to iSCSI targets from both Nodes

Both nodes must be connected to our storage using the iSCSI Initiator (through Server Manager's Tools menu).

From Server Manager, select Tools > iSCSI Initiator. A message will come up informing you that the iSCSI Initiator service will start automatically the next time Windows loads.


On the Discovery tab, hit "Discover Portal".


Be careful to enter the IP of the RSO/NAS that belongs to the nodes' RSO network, e.g. 172.16.0.xxx in our example.


Discovery should find the IP address and port of your iSCSI target (make sure your cluster nodes' RSO NICs and the iSCSI RSO are on the same switch or VLAN).


Following that, through the Targets tab, you should be able to see your disks (including the Quorum) as "Inactive".

Proceed to connect them. Go back to the Targets tab, hit Refresh and, when the list is populated, hit Connect.


Do the above on both nodes
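The same connection can be made from PowerShell; a minimal sketch follows, where the portal address 172.16.0.10 is a hypothetical address for the NAS on the storage network (use your QNAP's actual RSO IP).

# Minimal sketch - run on both nodes; 172.16.0.10 is a hypothetical NAS address on the storage network:
Set-Service -Name MSiSCSI -StartupType Automatic
Start-Service -Name MSiSCSI
New-IscsiTargetPortal -TargetPortalAddress 172.16.0.10
Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true    # connects Quorum, ClusterDisk1, ClusterDisk2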

c. Initialize disks

On the first node, open the Disk Management console. Right-click on each of the new hard disks that appear and select Online.


Initialize the disk and create a new simple volume.

Assign the drive letter Q to the Quorum; the letters you assign to the rest do not matter.


Format it as NTFS and name it Quorum


Proceed with the same process for ClusterDisk1 and ClusterDisk2, assigning whatever drive letters you like. At the end of the process you will see the result below.

Launch Disk Management on the second node and bring the already-prepared disks online.

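For reference, the same preparation can be scripted on the first node; a minimal sketch is below, where the disk numbers are assumptions - always confirm them with Get-Disk before initializing anything.

# Minimal sketch - run on the FIRST node only; disk numbers are assumptions, check Get-Disk first:
Get-Disk | Format-Table Number, FriendlyName, Size, OperationalStatus
Set-Disk -Number 1 -IsOffline $false
Initialize-Disk -Number 1 -PartitionStyle GPT
New-Partition -DiskNumber 1 -UseMaximumSize -DriveLetter Q |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "Quorum"
# On the second node, only bring the disks online - do NOT re-initialize or re-format them.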

HYPER-V installation on both Nodes

Through the Manage tab, select "Add Roles and Features".


From Server Roles, select the Hyper-V role and proceed.


Include the management tools and proceed to add the role.

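The same role can also be added from PowerShell; a minimal sketch (the server reboots as part of the installation):

# Minimal sketch - run on both nodes; a restart is required for the Hyper-V role:
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart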

Create Virtual Switch (Production) on both Nodes


On your Hyper-V Manager console select the Virtual Switch Manager action on the right.


Create a new virtual network switch of type External. Make sure you don't select ANY of your RSO or Heartbeat NICs!


Name the virtual switch, assign the appropriate NIC and check the option "Allow management operating system to share this network adapter".

Do the same on both nodes.
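For reference, the equivalent PowerShell on each node, assuming the physical NIC was renamed "Production" earlier in this guide:

# Minimal sketch - the switch name "Production" matches this guide's naming convention:
New-VMSwitch -Name "Production" -NetAdapterName "Production" -AllowManagementOS $true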

Install the Failover Clustering feature on both Nodes

Through "Add Roles and Features" we proceed to "Features".


Select "Failover Clustering" and proceed.


Do the same on both nodes.
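Or, from PowerShell on each node:

# Minimal sketch - installs the Failover Clustering feature plus its management tools:
Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools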

Validate the cluster configuration

Pick one of the two nodes and run the cluster validation wizard.

The steps shown below validate the cluster's failover configuration.

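The same validation can be run from PowerShell on either node; a minimal sketch:

# Minimal sketch - validates storage, network and system configuration for both nodes:
Test-Cluster -Node NODE03, NODE04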

Once all nodes are "Validated", we can proceed to create the failover cluster.


Create the Hyper-V Failover Cluster


We proceed to create the cluster through Failover Cluster Manager.

Make sure all the required servers have been selected (separated by commas).


Provide the cluster name and verify that the addresses are correct for each network that is part of the failover cluster.



Your cluster creation has completed; review the summary once more.
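For reference, the cluster can also be created from PowerShell; in the sketch below the cluster name HVCLUSTER01 and the management address 10.124.10.5 are assumptions, so substitute your own.

# Minimal sketch - cluster name and static address are assumptions for this example:
New-Cluster -Name HVCLUSTER01 -Node NODE03, NODE04 -StaticAddress 10.124.10.5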

Rename Cluster Networks for easy understanding and mapping to the physical node NICs.


Through Failover Cluster Manager, we configure networks’ names and communication permissions.

Specifically, at Heartbeat network we ONLY allow cluster network communication.

At production Network, we allow cluster network communication AND also allow clients to connect through.

At Storage network we DO NOT allow any cluster network communications.

These steps also give us the chance to check once again that the subnets have been assigned correctly.

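The same permissions can be set from PowerShell, assuming the cluster networks were renamed Production/Storage/Heartbeat as described above:

# Minimal sketch - Role values: 0 = no cluster communication, 1 = cluster only, 3 = cluster and client:
(Get-ClusterNetwork -Name "Heartbeat").Role  = 1
(Get-ClusterNetwork -Name "Production").Role = 3
(Get-ClusterNetwork -Name "Storage").Role    = 0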

Enable Cluster Shared Volumes


Following the cluster network configuration, we are ready to add the storage disks to our cluster.

Through Failover Cluster Manager -> Storage -> Disks, we should see our cluster disks marked as "Available Storage". Selecting them one by one, we proceed to add them to "Cluster Shared Volumes".

WE DO NOT TAKE ANY ACTION ON QUORUM DISK!


At the end of the process, all added disks should be marked as "Assigned to Cluster Shared Volume".
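A minimal PowerShell sketch for this step follows; the "Cluster Disk N" resource names are assumptions - list the resources first and make sure you do not touch the quorum/witness disk.

# Minimal sketch - list the disk resources first, then add only the data disks to CSV:
Get-ClusterResource | Format-Table Name, ResourceType, OwnerGroup, State
Add-ClusterSharedVolume -Name "Cluster Disk 2"   # resource names are assumptions - NOT the quorum disk
Add-ClusterSharedVolume -Name "Cluster Disk 3"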

Create a VM and Configure for High Availability or make an Existing VM Highly Available

Test the failover cluster by shutting down the node hosting the VM resources. If you see the VMs moving to the other node, you are ready to start serving clients. Further tests should be made regarding the VMs' functionality.
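For an existing VM whose files already live on a CSV path (C:\ClusterStorage\...), a minimal sketch for making it highly available and testing a planned move is shown below; the VM name "TestVM" is an assumption.

# Minimal sketch - "TestVM" is a hypothetical VM stored on a Cluster Shared Volume:
Add-ClusterVirtualMachineRole -VMName "TestVM"
# Planned failover test: move the VM role to the other node.
Move-ClusterVirtualMachineRole -Name "TestVM" -Node NODE04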

Written and tested by Creative People Team, Costantinos Koptis, Andreas Lavazos and Chrysostomos Psaroudakis


DEFENCO ACRITAS Mini Heli UAV

DEFENCO ACRITAS Mini Heli UAV, featuring CBRNE sensors from NCSR Demokritos, IR and Day Hitachi Lens cams, FPV, Auto Pilot. Prototype for the #ACRITAS project. #UAV #Helicopter #CBRNE #BorderSecurity. Special thanks to our partners Defenco, SATWAYS, Demokritos, KEMEA, NOA, EMC, HAI, University of Aegean. #CreativePeople.gr

Walkthrough: Installing an Intel Ethernet Card on Server 2012 – 2012 R2

In many cases you will find it impossible to install Ethernet cards manufactured by Intel or Intel Gigabit on a machine running Windows Server 2012 / 2012 R2.

The driver installation stops because the driver software cannot detect any Intel Ethernet cards. The problem exists because the Intel installer tries to install the driver automatically, when it should let the operating system handle the installation process. In addition, a list of devices excluded from installation has been added, which causes the driver installation to freeze.

The reason this happens is not known; most likely it is intended to prevent operators from installing drivers that may not work on certain setups.

Intel Card Installation Error


For the installation to succeed, we need to take several actions that force the installation process to complete.

Below we present, in detail, the steps that must be followed.

Disable Driver Signing & Enable Test Mode

Run a Command Prompt with administrator rights and issue the following commands.

bcdedit -set loadoptions DISABLE_INTEGRITY_CHECKS

bcdedit -set TESTSIGNING ON

Once the commands have been executed, reboot the machine.

The operating system now allows unsigned device drivers to be installed.

Preparing the Custom Driver for installation

First, download your device's driver from the manufacturer's site, unless you already have it.

Then extract the driver files somewhere on the machine. In our example the driver is for the Intel 82579V Gigabit NIC and it was extracted to C:\temp\Intel_LAN_V17.1.50.0_Win8_Beta\PRO1000\Winx64\NDIS63.

All the .inf files that contain the device's hardware IDs must be opened and edited.

So first we need to find the device's IDs by going to Device Manager and selecting the malfunctioning device. On the Details tab we select Hardware IDs:

Device Manager -> Details -> Hardware IDs

In this case 'VEN_8086&DEV_1503' will be used. On your system the IDs will most likely be different.


Now that we have both the vendor and the hardware IDs, we can search the folder where we extracted the driver earlier for the files that contain those hardware IDs.

In PowerShell, issue the following command, after first navigating to the folder where the driver was extracted.

Get-ChildItem -Recurse | Select-String -Pattern "the Hardware ID" | group path | select name


This command returns all the .inf files, across the entire driver folder and its subfolders, that reference the specific hardware ID.

At this point we can narrow down the candidate files, since we know whether our operating system is 32-bit or 64-bit. In our example the operating system is 64-bit, so we will focus on the Winx64 folders.


The table below lists how the driver (NDIS) versions map to operating systems.

Version  | Desktop OS  | Server OS
NDIS 6.0 | Vista       | *
NDIS 6.1 | Vista SP1   | Server 2008
NDIS 6.2 | Windows 7   | Server 2008 R2
NDIS 6.3 | Windows 8   | Server 2012
NDIS 6.4 | Windows 8.1 | Server 2012 R2

Based on the above, the version we need for our example is NDIS63.


So our search is narrowed down to a single file, e1c63x64.inf.

The box below shows the changes that must be made and incorporated into e1c63x64.inf so that the driver installation can complete.

;**  Unless otherwise agreed by Intel in writing, you may not remove or      **
;**  alter this notice or any other notice embedded in Materials by Intel    **
;**  or Intel's suppliers or licensors in any way.                           **
;******************************************************************************
;
;******************************************************************************
; e1c63x64.INF (Intel 64 bit extension Platform Only,
; Windows 8 64 bit extension)
;
; Intel(R) Gigabit Network connections
;******************************************************************************
;

[Version]
Signature   = "$Windows NT$"
Class       = Net
ClassGUID   = {4d36e972-e325-11ce-bfc1-08002be10318}
Provider    = %Intel%
CatalogFile = e1c63x64.cat
DriverVer   = 03/29/2012,12.1.10.0

[Manufacturer]
%Intel%     = Intel, NTamd64.6.2, NTamd64.6.2.1

[ControlFlags]
;ExcludeFromSelect = \
;    PCI\VEN_8086&DEV_1502,\
;    PCI\VEN_8086&DEV_1503

[Intel]

[Intel.NTamd64.6.2.1]
; DisplayName                   Section              DeviceID
; -----------                   -------              --------
%E1502NC.DeviceDesc%            = E1502.6.2.1,       PCI\VEN_8086&DEV_1502
%E1502NC.DeviceDesc%            = E1502.6.2.1,       PCI\VEN_8086&DEV_1502&SUBSYS_00011179
%E1502NC.DeviceDesc%            = E1502.6.2.1,       PCI\VEN_8086&DEV_1502&SUBSYS_00021179
%E1502NC.DeviceDesc%            = E1502.6.2.1,       PCI\VEN_8086&DEV_1502&SUBSYS_80001025
%E1503NC.DeviceDesc%            = E1503.6.2.1,       PCI\VEN_8086&DEV_1503
%E1503NC.DeviceDesc%            = E1503.6.2.1,       PCI\VEN_8086&DEV_1503&SUBSYS_00011179
%E1503NC.DeviceDesc%            = E1503.6.2.1,       PCI\VEN_8086&DEV_1503&SUBSYS_00021179
%E1503NC.DeviceDesc%            = E1503.6.2.1,       PCI\VEN_8086&DEV_1503&SUBSYS_80001025
%E1503NC.DeviceDesc%            = E1503.6.2.1,       PCI\VEN_8086&DEV_1503&SUBSYS_04911025

[Intel.NTamd64.6.2]
; DisplayName                   Section        DeviceID
; -----------                   -------        --------
%E1502NC.DeviceDesc%            = E1502,       PCI\VEN_8086&DEV_1502
%E1502NC.DeviceDesc%            = E1502,       PCI\VEN_8086&DEV_1502&SUBSYS_00011179
%E1502NC.DeviceDesc%            = E1502,       PCI\VEN_8086&DEV_1502&SUBSYS_00021179
%E1502NC.DeviceDesc%            = E1502,       PCI\VEN_8086&DEV_1502&SUBSYS_80001025
%E1503NC.DeviceDesc%            = E1503.6.2.1,       PCI\VEN_8086&DEV_1503
%E1503NC.DeviceDesc%            = E1503.6.2.1,       PCI\VEN_8086&DEV_1503&SUBSYS_00011179
%E1503NC.DeviceDesc%            = E1503.6.2.1,       PCI\VEN_8086&DEV_1503&SUBSYS_00021179
%E1503NC.DeviceDesc%            = E1503.6.2.1,       PCI\VEN_8086&DEV_1503&SUBSYS_80001025
%E1503NC.DeviceDesc%            = E1503.6.2.1,       PCI\VEN_8086&DEV_1503&SUBSYS_04911025

;===============================================================================
;                WINDOWS 8 for 64-bit EXTENDED PLATFORMS
;
;===============================================================================

Essentially, you delete the 3 lines in the [ControlFlags] section (shown commented out above), add your hardware IDs in the version sections further down, and save the .inf file.

Driver Installation

Next, run the driver setup, which will complete this time.
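If you prefer to stage the modified driver package directly instead of re-running the vendor setup, a minimal alternative sketch with pnputil is shown below, assuming the extraction path used earlier in this example:

# Alternative sketch (assumption: the edited INF sits in the folder extracted earlier in this example)
pnputil -i -a C:\temp\Intel_LAN_V17.1.50.0_Win8_Beta\PRO1000\Winx64\NDIS63\e1c63x64.inf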


Enable Driver Signing & Disable Test Mode

Once the installation is complete, we must restore the driver signing & test mode settings.

Run a Command Prompt with administrator rights and issue the following commands.

bcdedit -set loadoptions ENABLE_INTEGRITY_CHECKS

bcdedit -set TESTSIGNING OFF

Written and executed by Andreas Lavazos@CreativePeople

Copyright Creative People ©2016

 
