I had an issue today with four new fibre cables: LC to LC OM3 multimode, the correct type for short-distance fibre storage. All good, they should work… but they didn’t. No light was coming up on the port, so maybe I had the wrong cable?
I checked and the cable was indeed correct, but it seems that, at random and without cause, your fibre cable can come “crossed over”. This is easy to pick once you’re on to it: the A and B sides of the connector should be the same colour. As per the above picture, my cable was crossed. This is easy to fix, as each strand can be removed from the clip and swapped over to match.
There is a bug with vSphere 4.1 host profiles: profiles built from a reference host that was joined to a Windows domain using Authentication Services cannot be successfully applied. The host profile prompts you for credentials to authenticate and gets stuck, and Next in the host profile wizard no longer works. I also noticed that if you select options in the displayed host profile settings you may receive the error “an internal error occurred in the vSphere Client. Details: The given key was not present in the dictionary.”
According to this Communities thread it is indeed a bug and will be fixed in future releases.
In the meantime you can disable the domain join setting by editing the host profile: under “Authentication Services/Active Directory Configuration/Domain Name”, change “Configure a fixed domain name” to “Host not joined to any domain”. Alternatively, remove the reference host from the domain and update the profile from that reference host.
So after completing the racking and cabling, next comes the initialization and configuration of the NS-120. The Celerra Startup Assistant (CSA) needs to be run from a workstation on the same VLAN/subnet; this will allow you to configure the IP addresses, passwords and licensing details of the Control Station and the Storage Processors of the backend CX.
The default password for both the ‘root’ and ‘nasadmin’ accounts is ‘nasadmin’, but this can be changed during initialization. I found that both accounts are present on the Celerra, but only ‘nasadmin’ is configured on the CX by the initialization wizard.
The Unisphere and CX registration tools cannot be used, as the CX is connected to the LAN via bridging over the Celerra’s network interfaces. Once the Celerra is initialized you should register it and the CX using the supplied Celerra RegWiz utility.
Once initialized, you can connect to the Celerra and/or the CX via Unisphere using the IP address you specified. The NS-120 came configured from the factory with the first five disks in a RAID group and the sixth disk as a hot spare. There are also eight LUNs, all presented to the Celerra blade as follows.
This includes two unused 919 GB LUNs (Celerra-16_d7 and Celerra-17_d8) that could be used for file storage such as CIFS or NFS. These two LUNs can be safely removed, and the free space, approximately 1.8 TB (this system had 600 GB FC drives, so it may differ depending on the size of the disks), can be used for block (FC) if you wish. These LUNs should not be used for high-I/O workloads.
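The reclaimed space can be sanity-checked with some quick arithmetic. A minimal sketch; the 919 GB per-LUN figure is from this system’s 600 GB FC drives and will differ with other disk sizes:

```python
# Space reclaimed by deleting the two unused Celerra LUNs (d7 and d8).
# Figures are from this NS-120 with 600 GB FC drives; other disk
# sizes will give different LUN sizes.
lun_size_gb = 919   # size of each unused LUN (Celerra-16_d7, Celerra-17_d8)
unused_luns = 2

reclaimed_gb = lun_size_gb * unused_luns
reclaimed_tb = reclaimed_gb / 1000  # decimal TB, as vendors report capacity

print(f"Reclaimed: {reclaimed_gb} GB (~{reclaimed_tb:.1f} TB) for block use")
```

Which confirms the two LUNs together free roughly 1.8 TB for block storage.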
The other LUNs are where the DART and FLARE operating systems for the storage system itself live. Here are the volumes that are created by default; again you can see d7 and d8, which can be removed if you are using block.
This NS-120 is the FC model; it has four FC ports per Storage Processor and is only licensed for CIFS. I will be deleting the d7 and d8 LUNs and creating a smaller LUN that will be used for CIFS from the Celerra. The rest of the storage will be carved up and presented as block from the CX4 directly to VMware and Windows hosts.
Racked my first NS-120 today; it arrives all together in one carton weighing over 100 kilos. It’s packed together with the rails, enclosed in a wooden crate. I got my hopes up that I could simply crack open the crate and somehow lift it straight into the rack, but unfortunately it can’t be done.
You must remove each of the five components individually then remove the rails from within the crate.
Each rail is one single unit that is easily connected to the rack.
Once the rails are in the rack you must re-rack each of the components in the same order as they were delivered. The installation guide in the accessory box is very good and details the component order and how each is correctly racked.
Top: DAE (Disk Array Enclosure)
Second from top: CS (Control Station)
Middle: SPS (Standby Power Supply)
Second from bottom: SPE (Storage Processor Enclosure)
Bottom: Blade Enclosure
These are the front and back views of the unit once it’s fully racked; this is a small SAN that only has one additional DAE.
Rear view after cabling the unit. Cabling is very straightforward, as it is the same as previous EMC storage systems. I didn’t get a chance to tidy up the cables just yet; that will happen once I cable up the hosts and fibre switches tomorrow.
Update: Microsoft has changed its support stance, see the article below.
I agree that using the Exchange DAG functionality within 2010 is the best high availability solution you can implement to protect your mailboxes and provide the best uptime available. VMware HA alone does not provide an alternative solution; therefore, due to the Microsoft support requirements, you must disable HA for mailbox VMs within a DAG by setting the VMware cluster HA restart priority to Disabled.
The MS article bases its case against using VMware HA for DAG mailbox servers on the DAG being a better application-aware HA solution and on the additional costs of VMware HA. Now, as stated above, I agree on DAGs being a better solution, but costs? Every version of vSphere (except the free version) licenses an unlimited number of VMs for HA. If a customer already has vSphere then there are no additional costs. The only other requirement for VMware HA that could be perceived to have a cost associated with it is shared storage.
The primary advantage of shared storage is also lost when using DAS: after a host failure, the data stored on that host cannot be used until the failure is resolved. You might say, fine, my database copies will activate on another DAG member. True, but the surviving DAG member will be required to run more databases until the failed member is restored. Depending on the load, the size of your hosts and the number of users, it is more than likely that users will notice slower performance during this time. Whereas with an Exchange DAG using shared storage, another host can bring those database copies back online quickly and the databases can be redistributed back across two DAG members.
Therefore I do see benefits in continuing to use shared storage and do not believe cost is a significant hurdle to using a DAG.
Shared storage can be anything from iSCSI from an Openfiler server, to NFS or Fibre Channel. Storage is a major component when designing and deploying MS Exchange 2010, with or without DAGs. Exchange 2010 single instance storage is gone; each database copy you plan to have increases the amount of storage required, plus you must have a restore volume. Therefore, depending on the level of protection required, it is common to need up to two or three times the storage you would normally require without DAGs. This is clearly why MS pushes JBOD and cheaper DAS instead of RAID Fibre Channel, so that your Exchange 2010 project doesn’t break the budget. Generally though, shared storage prices have reduced considerably, and a lot more companies now have a SAN or NAS with SAS and SATA for either their current physical or virtual environment. If you do have existing equipment, DAS can in fact be more expensive, as it must be managed separately, is no cheaper to purchase, and may require additional or different backup technologies.
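To illustrate the sizing point, here is a minimal sketch of how database copies and a restore volume multiply the raw capacity an Exchange 2010 DAG design needs. All figures are hypothetical examples, not from any real deployment:

```python
# Rough Exchange 2010 DAG storage estimate (hypothetical figures).
# Without single instance storage, each database copy needs its own
# full set of capacity, and a restore volume is needed on top.
db_size_gb = 500         # total size of all mailbox databases (example)
copies = 3               # copies kept in the DAG: 1 active + 2 passive (example)
restore_volume_gb = 200  # scratch space for database restores (example)

total_gb = db_size_gb * copies + restore_volume_gb
multiplier = total_gb / db_size_gb

print(f"Raw capacity needed: {total_gb} GB "
      f"(~{multiplier:.1f}x a single-copy deployment)")
```

With these example numbers a three-copy DAG needs over three times the raw capacity of a single-copy deployment, which is exactly the two-to-three-times figure above.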
I think this is another Microsoft blogging blunder, much like the old blog wars over memory oversubscription. It is also clearer than ever before that Microsoft simply do not understand storage and the changes that have occurred across the IT industry, driven by companies virtualizing their workloads.