Archive for April, 2011

Uptime records and Cisco switch dominos

April 29, 2011

Yesterday I came across the following ESX hosts; note the top host with 1525 days of uptime. I have never seen an uptime this long myself. I doubt it's a world record for an x86 server, but seeing as it's still running now, it's heading in the right direction.

OS: ESX 2.5.1 build 14182
Hardware: Dell PE 2850

Every now and again you come across really stupid things, and the following is a prime example of inexperience and sheer stupidity.

Categories: VMware

My history of using VMware Workstation

April 19, 2011

I was lucky enough to be invited by VMware to participate in the VMware Workstation (WS) 8 beta program. Being a long-time user of Workstation, I jumped at the chance to see the upcoming features of the next version. Unfortunately, part of the NDA for participating in the beta is that I can't discuss the beta itself, but it did make me think back to when I first used WS.

I first used WS3 back in 2002, when a co-worker asked me to come over and check out this new application he was using to set up demo environments. He had set up a demo Exchange 2000 environment on a stack of servers running Workstation in a lab at his home. Somehow he had managed to convince his Mrs to dedicate a room to the lab and put up with the loud noise and big power bills it generated. This was back when CPU, RAM and disk were very limited and expensive, so it wasn't really possible to run more than one VM on even the more powerful desktop machines; the only real option was to use a server. I was very impressed and saw the possibilities immediately, so I did my first Windows 2000 install in Workstation, but quickly found that I simply didn't have the hardware to run the guests properly. Unlike my friend, I didn't have a stack of servers at home ready to go, and my computer at the time could barely run Windows 2000 itself.

Fast forward to 2006 and I got myself a VMTN subscription, which licensed you to use Workstation and ESX for personal, test and demo purposes only. Discontinuing the VMTN subscription and blocking IT professionals from licensing the products at home for training was one of the worst decisions VMware ever made. This is when I first really started to use Workstation; by then it was possible to run two or more guests on even mid-range computers, as long as they had enough RAM. This is also when VMware replaced GSX with VMware Server and made it free!

Since that time I have continued to use Workstation at home on my laptops. I also have a small lab with ESX installed, but I still prefer to run my test ESX hosts in Workstation, which continues to get new features before the other VMware products.

Ever wondered why, when vSphere 4 was released, the VM hardware version jumped from 4 to 7? Well, at that time VMware also released WS7; ESX 3.5 used VM hardware version 4, and with the release of vSphere VMware merged the hardware versions across products. This is also what enabled WS7 and later releases to run ESX as a supported guest. Xtravirt did publish a solution to allow ESX 3.5 to run on WS6/6.5, but it required you to modify the vmx file and trick Workstation into running ESX as a Red Hat guest.

Categories: VMware

Data Domain introduction training notes

April 15, 2011

SISL (Stream-Informed Segment Layout)

Leverages the continued advancement of CPU performance to add direct benefit to system throughput scalability.

Other deduplication technologies require additional disk drives or “spindles” to achieve the throughput speeds needed for efficient deduplication. Ironically, these other hybrid technologies that mandate the use of more disk drives require more storage, time and cost to achieve a similar, yet fundamentally inferior result.

Data Domain SISL Technology Provides Many Unique Advantages
99% of duplicate data segments are identified in RAM, inline, before storing to disk.
Block data transfers with related segments and fingerprints are stored together, so large groups are written or read at once.
Efficient disk access minimizes disk seeks to enable increased performance and minimizes the number of large capacity, cost-efficient SATA disks needed to deliver high throughput.
Minimal spindle count reduces the amount of total physical storage needed, along with associated storage management.
In SISL, Data Domain has developed a proven architecture that uses deduplication to achieve high throughput with economical storage hardware. Over time, this will allow the continued scaling of CPUs to add direct benefit to system scalability in the form of additional throughput while minimizing the storage footprint.

Deduplication

File-level dedupe (not efficient)
Segment-based dedupe (fixed segment size)
Variable segment size (not fixed; see the chunking sketch below)
Inline and post-process (post-process must first land the data on disk)

Typical dedupe ratios:
First full backup: 2-4x
First week of backups: 7-10x
Second Friday's full backup: 50-60x
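The fixed versus variable segment point is easiest to see in code. Below is a toy Python sketch (not Data Domain's actual algorithm) of content-defined chunking: segment boundaries are picked from the data itself, so inserting a few bytes near the start of a stream only disturbs the first segment or two instead of shifting every fixed-size block and breaking all of the fingerprints. The 4-12KB sizes mirror the SISL notes below; the function names and the SHA-1 sliding window are made up for illustration.

# Toy content-defined chunking: boundaries come from the data itself, so an
# insertion only disturbs nearby segments rather than every fixed-size block.
# Illustration only, not Data Domain's algorithm.
import hashlib

def chunk(data: bytes, min_size=4096, max_size=12288, window=16, mask=0x0FFF):
    """Yield variable-sized segments of roughly 4-12KB."""
    start = 0
    n = len(data)
    while start < n:
        i = min(start + min_size, n)           # never emit a segment below min_size
        while i < n and i - start < max_size:
            # hash of the trailing window decides where to cut
            h = int.from_bytes(hashlib.sha1(data[i - window:i]).digest()[:4], "big")
            if h & mask == 0:                   # boundary hit (on average every ~4KB)
                break
            i += 1
        yield data[start:i]
        start = i

def fingerprints(data: bytes):
    """One SHA-1 fingerprint per segment."""
    return [hashlib.sha1(seg).hexdigest() for seg in chunk(data)]

if __name__ == "__main__":
    import os
    base = os.urandom(200_000)
    shifted = b"XYZ" + base                     # 3 bytes inserted at the front
    a, b = fingerprints(base), fingerprints(shifted)
    print("segments:", len(a), "unchanged fingerprints:", len(set(a) & set(b)))

Running it shows that almost all fingerprints survive the 3-byte insertion, which is exactly why variable segments dedupe shifted data so much better than fixed-size blocks.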

SISL

Up to 99% of duplicate segments are identified inline in RAM
Related segments are grouped in RAM before being written out to disk

The data stream comes into RAM
It is sliced into segments of 4-12KB
A fingerprint is computed for each segment
Segment fingerprints are compared to find duplicates

A summary vector is used to test fingerprints in memory
Segment localities keep related data together
Unique segments are stored into containers (a toy sketch of this flow follows)
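A minimal sketch of that flow, assuming a Bloom-filter-style bit array as a stand-in for the summary vector and a plain dict as the fingerprint index; the names, sizes and container handling are invented for illustration and are not EMC/Data Domain's implementation:

# Toy SISL-style ingest: fingerprint each segment, answer most duplicate checks
# from a compact in-memory "summary vector", and pack unique segments into
# containers that would be written out in large sequential chunks.
import hashlib

class SummaryVector:
    """Bloom-filter-like membership test: 'definitely new' or 'maybe seen'."""
    def __init__(self, bits=1 << 20):
        self.bits = bits
        self.array = bytearray(bits // 8)

    def _positions(self, fp: str):
        for salt in (b"a", b"b", b"c"):
            h = int.from_bytes(hashlib.sha1(salt + fp.encode()).digest()[:8], "big")
            yield h % self.bits

    def maybe_seen(self, fp: str) -> bool:
        return all(self.array[p // 8] >> (p % 8) & 1 for p in self._positions(fp))

    def add(self, fp: str):
        for p in self._positions(fp):
            self.array[p // 8] |= 1 << (p % 8)

CONTAINER_SIZE = 64            # segments per container (tiny, for illustration)

def ingest(segments, summary, index, containers):
    """index maps fingerprint -> (container id, slot); containers hold segments."""
    current = []
    for seg in segments:
        fp = hashlib.sha1(seg).hexdigest()
        # the summary vector answers from RAM; only a "maybe" needs the full index
        if summary.maybe_seen(fp) and fp in index:
            continue                            # duplicate segment, nothing to store
        summary.add(fp)
        current.append(seg)
        index[fp] = (len(containers), len(current) - 1)
        if len(current) == CONTAINER_SIZE:      # container full: one large write
            containers.append(current)
            current = []
    if current:
        containers.append(current)

if __name__ == "__main__":
    import os
    summary, index, containers = SummaryVector(), {}, []
    segs = [os.urandom(8192) for _ in range(200)]
    ingest(segs + segs, summary, index, containers)   # second pass is all duplicates
    print(len(index), "unique segments in", len(containers), "containers")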

DIA (Data Invulnerability Architecture)

Defense against integrity issues

End-to-end data verification
- data is read back and verified after it is written

Self-healing file system
- actively re-verifies data on disk

Other protections:
RAID 6
NVRAM for fast restarts
Snapshots

Data Domain Replication

Source to Destination

a replication license is required on both systems

Replication types:

Collection – full system mirror; changes are made only on the source; the destination is read-only

Directory – replication at the directory level; any system can be a source or a destination; the destination must have enough post-compression capacity for the maximum expected size; CIFS and NFS are both OK but must use separate directories

Pool – VTL pools; works like directory replication

Replication pair = context

Replication streams:

Model Source Destination

DD140, DD610 15 20
DD630 30 20
DD670 60 90
DD860 90 90
DD890 135 270

Replication Topologies

One to one: a single source replicates to a single destination
Bi-directional: the same pair of systems replicate to each other (src to dst and dst to src)
Many to one: several sources replicate to a single destination
One to many: a single source replicates to several destinations
Cascaded: source to primary destination to secondary destination

Data Domain Supported Protocols

FC: VTL
Ethernet: DD Boost, NFS, CIFS, NDMP

All of these run on top of DD OS, which implements DDFS over the dedupe storage layer

Data Paths

Ethernet – CIFS/NFS
Ethernet – replication
FC – VTL

Data Domain FS

/ddvar – the administration file system
NFS: /ddvar
CIFS: \ddvar

This contains the DD system core and log files
- can't be renamed or deleted
- not all directories are accessible
- available data streams change per DD OS version and DD model

MTrees storage file system (DD OS 5.0 and later)

backup
NFS: /backup
CIFS: \backup

/data/col1/backup – the default MTree (can't be deleted or renamed)
/data/col1/<mtree> – additional MTrees

MTrees – you can add up to 14 MTrees under /data/col1/

You can manage each MTree separately (compression rates etc.)

DD Products

DLH (data-less head) – the controller

Speed with DD Boost (expects 10GbE)
Speed with other protocols (NFS, CIFS or VTL)
Logical capacity = total protected data, counting the effect of dedupe
Usable capacity = physical storage space

ES20 Expansion shelf, 16 drives

Models supporting external storage only:

DD690, DD860, DD880, DD890, DD Archiver and GDA
(have 4 internal disks for DDOS, boot and logs)

Models supporting internal storage only:

DD610 and DD630 (7 disks expandable to 12)
DD140 branch office (fixed 5 drives, RAID 5 only)

DD800 Series

DD890

Dual socket, six core 2.8GHz
96GB RAM
Two 1GB NVRAM cards
Four 1TB disks
Two Quad-port SAS cards (up to 12 ES20’s)
dual path exp shelf connectivity

DD860

Dual socket, quad core
36GB RAM, expandable to 72GB
One 1GB NVRAM card
Four 1TB disks
Two Quad-port SAS cards (up to 12 ES20’s)
dual path exp shelf connectivity

DD600 Series

DD670

Single socket, quad core
96GB RAM
Two 1GB NVRAM cards
12 1TB disks
Two Quad-port SAS cards (up to 12 ES20’s)
Up to two 32TB expansion shelves, or
Up to four 16TB expansion shelves

DD140 Remote Office Appliance

3 disks, RAID 5
2 Ethernet ports
1 NVRAM card

Data Domain Archiver

A larger tier of storage behind a standard DD controller
One controller
Up to 24 ES20 32TB expansion shelves
570TB usable storage
30x logical data capacity

Data migration

The active tier receives the data
Based on the data movement schedule, data is moved from the active tier to the first archive unit in the archive tier
DIA checks the files after they are moved
Data is removed from the active tier only once DIA verifies it (sketched below)
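A rough sketch of that verify-then-delete pattern (and of DIA's read-after-write idea): copy the file to the archive tier, read the copy back and compare checksums, and only then reclaim the active-tier space. The paths and the plain SHA-256 checksum are made up for illustration; this is not how DD OS is implemented internally.

# Toy verify-before-delete migration from an "active" directory to an "archive"
# directory: the source is only removed once the copy has been read back and
# verified against the source checksum.
import hashlib
import shutil
from pathlib import Path

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for block in iter(lambda: f.read(1 << 20), b""):
            h.update(block)
    return h.hexdigest()

def migrate(active_file: Path, archive_dir: Path) -> Path:
    """Copy active_file into archive_dir, verify the copy, then delete the source."""
    archive_dir.mkdir(parents=True, exist_ok=True)
    target = archive_dir / active_file.name
    source_sum = sha256_of(active_file)
    shutil.copy2(active_file, target)           # move the data to the archive unit
    if sha256_of(target) != source_sum:         # read back and verify after writing
        target.unlink(missing_ok=True)
        raise IOError(f"verification failed for {active_file}, source kept")
    active_file.unlink()                        # only now reclaim active-tier space
    return target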

Archive Tier sealing
one or multiple shelves can be configured as an archive unit
An archive unit is automatically sealed when it fills up
data is not written into a sealed unit but files can be deleted

DD Archiver Hardware

DD860
72GB RAM
One NVRAM card 1GB
Three quad-port SAS cards
One to 24 ES20 shelves
With 24 shelves, all 12 SAS ports are used, dual-pathed
Two 1Gb Ethernet ports
Plus optional 1Gb or 10Gb NICs

DD Archiver replication

Controller to Controller replication

Global Deduplication Array (GDA)

Largest system

750 TB usable

2 DD890 controllers

NetWorker supports DD Boost and DD VTL

Either VTL or DD Boost can be used, not both

Data Domain System Management

DD Enterprise Manager

DD Management Framework (CLI)

IPMI power management (status, off, on, power cycle)

SOL (Serial over LAN)

Data Domain Software licenses

DD Boost
VTL
VTL with IBM i
DD Encryption
DD Retention Lock

Hardware and Capacity Licenses

Expanded storage, 7 to 12 disks (DD610 or DD630)
GDA
DD Archiver
Capacity Active 1 shelf
Capacity Archive 1 shelf

DD Boost

1. Improved throughput for retaining data (OST)
2. Backup server-controlled replication
3. Backup server awareness of replicas

DSP (Distributed Segment Processing)

Backup server:
1. Segments the data
2. Fingerprints the segments
4. Compresses the unique segments

DD system:
3. Filters the fingerprints (tells the backup server which segments it doesn't already have)
5. Writes the unique segments

The numbering shows the order in which the steps run across the two sides (a toy sketch of the split follows).
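A toy sketch of that 1-5 split, assuming a simple in-memory dict as the DD-side segment store, fixed-size segments and zlib compression to keep it short; the class and function names are invented and this is not the DD Boost API:

# Toy distributed segment processing: the backup server segments and fingerprints
# the stream (1, 2), the DD system filters the fingerprints (3), the backup server
# compresses only the missing segments (4), and the DD system writes them (5).
import hashlib
import zlib

SEG = 8192  # bytes per segment (fixed size here just for brevity)

class ToyDDSystem:
    def __init__(self):
        self.store = {}                                   # fingerprint -> compressed segment

    def filter_fingerprints(self, fps):                   # step 3: which segments are new?
        return [fp for fp in fps if fp not in self.store]

    def write(self, compressed_segments):                 # step 5: store unique segments
        self.store.update(compressed_segments)

def backup(data: bytes, dd: ToyDDSystem):
    segments = [data[i:i + SEG] for i in range(0, len(data), SEG)]        # step 1
    fps = [hashlib.sha1(s).hexdigest() for s in segments]                 # step 2
    missing = set(dd.filter_fingerprints(fps))                            # step 3 (on the DD)
    payload = {fp: zlib.compress(seg)                                     # step 4
               for fp, seg in zip(fps, segments) if fp in missing}
    dd.write(payload)                                                     # step 5 (on the DD)
    return len(segments), len(payload)

if __name__ == "__main__":
    dd = ToyDDSystem()
    data = b"0123456789abcdef" * 100_000
    print(backup(data, dd))        # first run stores the unique segments
    print(backup(data, dd))        # second run sends almost nothing over the wire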

DD Boost enables Advanced Load Balancing and Link Failover

DD VTL

DD systems support backups over the SAN and LAN
The backup application manages all data movement to and from the DD system
The backup application manages physical tape creation
DD replication software manages virtual tape replication
DD Enterprise Manager is used to configure and manage tape emulation

Up to 64 virtual tape libraries per DD system
LTO-1 to LTO-3 emulation
Up to 256 VTL drives per system on single-node systems
VTL slots, with virtual tapes up to 800GB

NDMP Tape Server support for NAS backup

DD Replicator

async IP replication

supports SSL encryption
minimal performance impact

DD Encryption

Types:

Data-in-flight (as data is transported)
Data-at-rest (data stored encrypted)

Challenges

Encrypt before dedupe (encryption randomizes the data, so duplicates can no longer be detected; illustrated below)
Encrypt after dedupe (requires extra hardware)
Integrated dedupe and encryption
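A toy illustration of why the ordering matters, using a throwaway SHA-256 keystream as a stand-in cipher (not real crypto, and not how DD Encryption is implemented): encrypting each copy with a fresh IV before dedupe makes identical segments look different, so fingerprint matching finds nothing, while fingerprinting the plaintext first keeps the duplicate visible.

# Toy demo of the encrypt-before-dedupe problem: two encryptions of the same
# segment with different IVs produce different ciphertext and therefore
# different fingerprints. The "cipher" is a throwaway SHA-256 keystream.
import hashlib
import os

def keystream_encrypt(key: bytes, iv: bytes, data: bytes) -> bytes:
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + iv + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

def fingerprint(segment: bytes) -> str:
    return hashlib.sha1(segment).hexdigest()

key = os.urandom(32)
segment = b"the same 8KB of backup data" * 300

# Encrypt-before-dedupe: two copies of the same segment, two different IVs.
c1 = keystream_encrypt(key, os.urandom(16), segment)
c2 = keystream_encrypt(key, os.urandom(16), segment)
print("duplicate found after encryption:", fingerprint(c1) == fingerprint(c2))    # False

# Dedupe first (or integrated, as in DD inline encryption): fingerprints are
# taken on the plaintext segments, so the duplicate is still found.
print("duplicate found before encryption:", fingerprint(segment) == fingerprint(segment))  # True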

DD Inline Encryption

Data is encrypted inline as it is written; SISL is used to keep performance up; no extra hardware is needed

DD Retention Lock

Electronic Data Shredding
Enforced Retention for active archiving

policy based, file, database, email

Categories: Data Domain, EMC

Cisco MDS 9000 and VSANs

April 6, 2011

Fibre Channel storage networks primarily use two different ways to partition fabrics.

1. Physically separated fabrics, using two or more fabric switches where the two halves (fabric A and fabric B) are not interconnected and each fabric runs a single zoneset.

2. Logically separated fabrics, using one or more fabric switches that are interconnected but use VSANs and multiple zonesets to partition the fabric.

Cisco's VSANs are the storage equivalent of VLANs in Ethernet switches and routers.

Just like VLAN 1 on Ethernet switches, VSAN 1, also known as the default VSAN, is typically used for communication, management, or testing purposes. It is recommended that you do not use VSAN 1 for your production environment.

VSAN’s are created and stored in the VSAN database:

show vsan membership

The following steps are required to set up a VSAN (a sample CLI session is sketched after the list).

1. Create a VSAN
2. Add interfaces to the VSAN
3. Configure the interfaces
4. Enable the interfaces
5. Connect the cables to the interfaces
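Roughly, that sequence looks like the following on an MDS switch; the VSAN ID, name and interface are placeholders, and the quick configuration guide linked below has the authoritative syntax:

switch# config terminal
switch(config)# vsan database
switch(config-vsan-db)# vsan 100 name Production_A
switch(config-vsan-db)# vsan 100 interface fc1/1
switch(config-vsan-db)# exit
switch(config)# interface fc1/1
switch(config-if)# no shutdown
switch(config-if)# end
switch# show vsan membership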

VSAN trunking enables interconnect ports to transmit and receive frames for more than one VSAN over a single physical interface, using the enhanced ISL (EISL) frame format.

Trunked E_Ports become TE_Ports; each TE_Port has an associated allowed-VSAN trunk list, and by default VSANs 1-4093 are allowed (adjusting the list is sketched below).

In combination with PortChannels, Fibre Channel switches can be interconnected to act as one large aggregated fabric that is still partitioned, so changes and modifications to one zoneset do not affect another.
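Adjusting the allowed-VSAN list on a trunking interface looks roughly like this; the interface and VSAN IDs are placeholders:

switch(config)# interface fc1/12
switch(config-if)# switchport mode E
switch(config-if)# switchport trunk mode on
switch(config-if)# switchport trunk allowed vsan 100
switch(config-if)# switchport trunk allowed vsan add 200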

Reference: http://www.cisco.com/en/US/docs/storage/san_switches/mds9000/sw/san-os/quick/guide/qcg_vin.html

Categories: Fibre Channel