
System Info:

  • Coraid chassis
  • 2 GB RAM
  • 16 SATA disks for storage pool
  • RAID-1 SATA disk for boot (disk on chip?)
  • Nexenta Core 2 OS


Creating zpools

3x5x74GB raidz1 = 818 GB usable storage

zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 raidz c0t5d0 c0t6d0 c0t7d0 c1t0d0 c1t1d0 raidz c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0

2x7x74GB raidz + spare = 811 GB usable

zpool create -f tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0 raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 spare c0t7d0

3x5x74GB raidz2 = 609 GB usable storage

zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 raidz2 c0t5d0 c0t6d0 c0t7d0 c1t0d0 c1t1d0 raidz2 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0

2x7x74GB raidz2 = 677 GB usable storage + one spare

zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0 raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 spare c0t7d0

5x3x74GB raidz1 = 675 GB usable storage (more fault tolerant: one disk can fail in each of the five vdevs; similar performance but less usable space)

zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 raidz c0t3d0 c0t4d0 c0t5d0 raidz c0t6d0 c0t7d0 c1t0d0 raidz c1t1d0 c1t2d0 c1t3d0 raidz c1t4d0 c1t5d0 c1t6d0
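
Whichever layout is chosen, it is worth confirming the vdev structure and the capacity ZFS actually reports before putting data on the pool:

zpool status tank
zpool list tank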

Drive Replacement

  • Place the new drive in a tray and install it in the server. /var/adm/messages should log the new drive being detected.
  • Connect and configure the drive (the attachment point, sata0/2 in this example, can be found with cfgadm -al):
cfgadm -c connect sata0/2
cfgadm -c configure sata0/2
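
After cfgadm reports the disk configured, ZFS still needs to resilver onto it. A minimal sketch, assuming the replaced disk was c0t2d0 in pool tank (substitute the actual device):

zpool replace tank c0t2d0
zpool status tank

zpool status shows the resilver progress and reports when the pool is healthy again.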

Set up replication using AVS

  • Create bitmap volumes on both the primary and secondary servers, one per replicated device; 1-2 GB each is enough.
zfs create -V 1024M rpool/bitmap00
zfs create -V 1024M rpool/bitmap01
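
The zvols appear under /dev/zvol/rdsk/, which is the path the sndradm commands below expect; a quick sanity check:

zfs list -t volume -r rpool
ls -l /dev/zvol/rdsk/rpool/bitmap00 /dev/zvol/rdsk/rpool/bitmap01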

Enable replica sets

1. Log in to the primary host cluster1 as the superuser.

2. Enable the volume sets:

cluster1# sndradm -nE cluster1 /dev/rdsk/c2d1s0 /dev/zvol/rdsk/rpool/bitmap00 cluster2 /dev/rdsk/c2d1s0 /dev/zvol/rdsk/rpool/bitmap00 ip async g primary
cluster1# sndradm -nE cluster1 /dev/rdsk/c3d0s0 /dev/zvol/rdsk/rpool/bitmap01 cluster2 /dev/rdsk/c3d0s0 /dev/zvol/rdsk/rpool/bitmap01 ip async g primary

3. Log in to the secondary host cluster2 as the superuser.

4. Enable the volume sets:

cluster2# sndradm -nE cluster1 /dev/rdsk/c2d1s0 /dev/zvol/rdsk/rpool/bitmap00 cluster2 /dev/rdsk/c2d1s0 /dev/zvol/rdsk/rpool/bitmap00 ip async g primary
cluster2# sndradm -nE cluster1 /dev/rdsk/c3d0s0 /dev/zvol/rdsk/rpool/bitmap01 cluster2 /dev/rdsk/c3d0s0 /dev/zvol/rdsk/rpool/bitmap01 ip async g primary
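
Before synchronizing, the enabled sets can be checked on either host with sndradm -P, which lists each configured set and its current state:

cluster1# sndradm -P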

Create ZFS storage pool

zpool create primary mirror c2d1 c3d0

Synchronize volume set

1. Log in to the primary host cluster1 as the superuser.

2. Make sure the secondary volumes are not in use (the replicated pool must not be imported on cluster2). The primary pool can stay imported.

3. Start a full synchronization (primary to secondary) of each volume set:

cluster1# sndradm -m cluster2:/dev/rdsk/c2d1s0
cluster1# sndradm -m cluster2:/dev/rdsk/c3d0s0

4. Check the synchronization progress:

cluster1# dsstat -m sndr

To Do

  • Automated failover: place the remote-mirror sets in logging mode, import the pool on the secondary, and bring up the service IP (manual steps sketched below)
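
A sketch of the manual failover steps the automation would wrap, run on the secondary host; the interface name (e1000g0) and service address (192.168.10.50/24) are placeholders, not taken from this setup:

cluster2# sndradm -g primary -l
cluster2# zpool import -f primary
cluster2# ifconfig e1000g0 addif 192.168.10.50 netmask 255.255.255.0 up

The first command drops all sets in the "primary" group into logging mode so the secondary volumes become writable, the second force-imports the replicated pool, and the third plumbs the service IP.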
