Back to the main page

First time configuration for StorEdge 3510

So you are now a happy owner of an SE3510 FC array!? 

Wait, there is a lot of work ahead, because a new 3510 ships with a configuration you don't want; it's not usable in a production environment. 
It comes preconfigured as RAID 0 mapped to LUN 0 with no spare drives (wow, how scary it would be to run that configuration in production). 

Note: if you are reconfiguring a 3510 that had a previously working configuration, take some kind of snapshot of it first. 
You never know; you may need it again if something goes wrong.  
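If the Sun StorEdge CLI package is installed on the host, one way to capture the current configuration is a sketch like the following. The device path is just an example (it matches the LUN path seen later in this article); your array's device path will differ, and the exact sccli invocation may vary by package version.

```shell
# Hypothetical sketch: dump the array's current configuration to a text file
# with sccli before reconfiguring. The device path below is an example only.
sccli /dev/rdsk/c2t40d0s2 "show configuration" > se3510-config-backup.txt
```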

The main menu is intuitive and you'll be able to navigate easily.  

If the array has an IP address, access it with telnet (exit the session with "Ctrl + ]" and then "quit"). Even better, use a Cyclades console appliance (great product, check it out), so you can reach the array's console remotely from your desk; this makes my life much easier.
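A typical telnet session to the array's management port looks something like this (the IP address here is made up for the example):

```shell
# Connect to the array's Ethernet management interface (example IP).
telnet 192.168.1.100
# The firmware menu appears in the terminal. When you are done,
# press Ctrl + ] to drop to the telnet prompt, then type:
#   telnet> quit
```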

Welcome to the old-fashioned main menu.

Okay, let's do some exercise and learn how to configure the 3510 from scratch. 

Go to 'View and Edit Drives'.

There are seven 73 GB FC drives in the array. 

Look at the bottom of the screen for instructions/commands. 



We'll create a RAID 10 array out of six drives (this is a Logical Drive). 

But first, let's add a Global Spare Drive, which serves as a spare for any logical drive in the array. 

Well, in this case we have only one logical drive, so creating a Local Spare Drive would also work; when there are several logical drives, a local spare serves only one specific logical drive.  

Anyway, remember: this is just an example for learning and reference. 
 


Okay, we have a Global Spare drive. 

Let's add the Logical drive, as I said before. 

Go to 'View and Edit Logical Drives'.

The system has FC drives. 
 


Select RAID 1. 



Select the remaining six drives to form the RAID 1 set. 

Note: the list has no RAID 10 option (mirroring plus striping). 
But if you use four or more disks for RAID 1, the end result is effectively RAID 10. 
It is important to know that the number of disks must be even, not odd.  
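As a quick sanity check on the capacity: mirrored pairs mean the usable space is half the raw space. The per-drive size below uses the ~68.37 GB that these "73 GB" drives actually present (as the format output later in this article shows), which is why the resulting LUN comes out near 204 GB rather than 219 GB.

```shell
# Sanity check for this example's RAID 10 layout: six drives mirrored in
# pairs, so usable capacity is half the raw capacity.
awk 'BEGIN {
    disks = 6; disk_gb = 68.37
    if (disks % 2 != 0) {
        print "RAID 10 needs an even number of drives"
        exit 1
    }
    printf "raw: %.2f GB, usable: %.2f GB\n", disks * disk_gb, disks / 2 * disk_gb
}'
# prints: raw: 410.22 GB, usable: 205.11 GB
```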



Pressing Escape shows additional info. 



Pressing Escape again asks you to confirm creation of the logical drive.



Then wait for initialization to finish. 



Now we have logical drive.



The logical drive can be partitioned. 



The StorEdge 3510 also supports Logical Volumes; a logical volume is a collection of two or more logical drives. Since we have only one logical drive, we are not using logical volumes. 

Still, here are some tips about logical volumes:

1. For FC, a logical volume can have a maximum of 32 partitions. 
2. The host sees a partition of a logical volume as a single physical disk. 

About channels:

Channels 0 and 1 are for host connections.
Channels 2 and 3 connect additional 3510 expansion units (drive channels). 
Channels 4 and 5 are for host connections but can also be used as drive channels. 



At this point, the host cannot really use this storage yet (no LUN is mapped): 
# devfsadm

# cfgadm -al
Ap_Id                          Type         Receptacle   Occupant     Condition
c0                             scsi-bus     connected    configured   unknown
c0::dsk/c0t0d0                 CD-ROM       connected    configured   unknown
c1                             scsi-bus     connected    configured   unknown
c1::dsk/c1t0d0                 disk         connected    configured   unknown
c1::dsk/c1t1d0                 disk         connected    configured   unknown
c2                             fc-private   connected    configured   unknown
c2::216000c0ff89cacc           ESI          connected    configured   unknown
c2::216000c0ff99cacc           ESI          connected    configured   unknown

# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
       0. c1t0d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
          /pci@7c0/pci@0/pci@1/pci@0,2/LSILogic,sas@2/sd@0,0
       1. c1t1d0 <FUJITSU-MAV2073RCSUN72G-0301-68.37GB>
          /pci@7c0/pci@0/pci@1/pci@0,2/LSILogic,sas@2/sd@1,0
       2. c2t40d0 <drive type unknown> 
          /pci@7c0/pci@0/pci@1/pci@0,2/SUNW,qlc@1/fp@0,0/ssd@w216000c0ff89cacc,0

# luxadm probe
No Network Array enclosures found in /dev/es
Found Fibre Channel device:
  Node WWN:206000c0ff09cacc  Device Type:Disk device
    Logical Path:/dev/rdsk/c2t40d0s2

# luxadm display /dev/rdsk/c2t40d0s2
DEVICE PROPERTIES for: /dev/rdsk/c2t40d0s2
  Vendor:               SUN
  Product ID:           StorEdge 3510
  Revision:             421F
  Serial Num:           09CACC00000000FFFFFFFF
  Device Type:          SES device
  Path(s):
  /dev/rdsk/c2t40d0s2
/devices/pci@7c0/pci@0/pci@1/pci@0,2/SUNW,qlc@1/fp@0,0/ssd@w216000c0ff89cacc,0:c,raw
    LUN path port WWN:          216000c0ff89cacc
    Host controller port WWN:   210000e08b9116d4
    Path status:                Not Ready 

Now we need to map a partition to a LUN under the host's SCSI ID. (A LUN is a partition of a logical drive; in this case we'll have only one LUN, since we have only one logical drive.)

Select 'view and edit Host luns' - channel - logical drive.
Select LUN 0 - logical drive - partition 0.
Map the host LUN.

And we see some different results:
# format
Searching for disks...done
c2t40d0: configured with capacity of 204.34GB
AVAILABLE DISK SELECTIONS:
       0. c1t0d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
          /pci@7c0/pci@0/pci@1/pci@0,2/LSILogic,sas@2/sd@0,0
       1. c1t1d0 <FUJITSU-MAV2073RCSUN72G-0301-68.37GB>
          /pci@7c0/pci@0/pci@1/pci@0,2/LSILogic,sas@2/sd@1,0
       2. c2t40d0  <SUN-StorEdge3510-421F cyl 52723 alt 2 hd 64 sec 127> 
          /pci@7c0/pci@0/pci@1/pci@0,2/SUNW,qlc@1/fp@0,0/ssd@w216000c0ff89cacc,0

# cfgadm -al
Ap_Id                          Type         Receptacle   Occupant     Condition
c0                             scsi-bus     connected    configured   unknown
c0::dsk/c0t0d0                 CD-ROM       connected    configured   unknown
c1                             scsi-bus     connected    configured   unknown
c1::dsk/c1t0d0                 disk         connected    configured   unknown
c1::dsk/c1t1d0                 disk         connected    configured   unknown
c2                             fc-private   connected    configured   unknown
c2::216000c0ff89cacc           disk         connected    configured   unknown
c2::216000c0ff99cacc           ESI          connected    configured   unknown

# luxadm display /dev/rdsk/c2t40d0s2
DEVICE PROPERTIES for disk: /dev/rdsk/c2t40d0s2
  Vendor:               SUN
  Product ID:           StorEdge 3510
  Revision:             421F
  Serial Num:           09CACC5FDB326400
  Unformatted capacity: 209253.000 MBytes
  Write Cache:          Enabled
  Read Cache:           Enabled
    Minimum prefetch:   0x0
    Maximum prefetch:   0xffff
  Device Type:          Disk device
  Path(s):
  /dev/rdsk/c2t40d0s2
  /devices/pci@7c0/pci@0/pci@1/pci@0,2/SUNW,qlc@1/fp@0,0/ssd@w216000c0ff89cacc,0:c,raw
    LUN path port WWN:          216000c0ff89cacc
    Host controller port WWN:   210000e08b9116d4
    Path status:                O.K.

# luxadm -e dump_map /devices/pci@7c0/pci@0/pci@1/pci@0,2/SUNW,qlc@1/fp@0,0:devctl
Pos AL_PA ID Hard_Addr Port WWN         Node WWN         Type
0     1   7d    0      210000e08b9116d4 200000e08b9116d4 0x1f (Unknown Type,Host Bus Adapter)
1     a6  29    a6     216000c0ff99cacc 206000c0ff09cacc 0xd  (SES device)
2     a7  28    a7     216000c0ff89cacc 206000c0ff09cacc 0x0  (Disk device)

Check the info of the host FC HBA:
# fcinfo hba-port
HBA Port WWN: 210000e08b9116d4
        OS Device Name: /dev/cfg/c2
        Manufacturer: QLogic Corp.
        Model: QLA2340
        Firmware Version: 3.03.27
        FCode/BIOS Version:  fcode: 1.13;
        Serial Number: not available
        Driver Name: qlc
        Driver Version: 20081115-2.29
        Type: L-port
        State: online
        Supported Speeds: 1Gb 2Gb
        Current Speed: 2Gb
        Node WWN: 200000e08b9116d4 (on the host)

Check the info of the FC target (the 3510):
# fcinfo remote-port -p 210000e08b9116d4
Remote Port WWN: 216000c0ff99cacc
        Active FC4 Types:
        SCSI Target: yes
        Node WWN: 206000c0ff09cacc
Remote Port WWN: 216000c0ff89cacc
        Active FC4 Types:
        SCSI Target: yes
        Node WWN: 206000c0ff09cacc

For example, let's create a zpool consisting of only this 3510 logical drive and run a simple write/read test:
# zpool create -f pool1 c2t40d0

# zfs list
NAME    USED  AVAIL  REFER  MOUNTPOINT
pool1   106K   201G    18K  /pool1

/pool1 # foreach i (`seq 1 3`)
foreach? echo ----- write -----
foreach? /usr/local/bin/dd if=/dev/zero of=${i} bs=128k count=81920
foreach? echo
foreach? echo --- read ----
foreach? /usr/local/bin/dd if=${i} of=/dev/null  bs=128k count=81920
foreach? echo
foreach? end
----- write -----
81920+0 records in
81920+0 records out
10737418240 bytes (11 GB) copied, 120.208 s, 89.3 MB/s

--- read ----
81920+0 records in
81920+0 records out
10737418240 bytes (11 GB) copied, 60.6986 s, 177 MB/s

----- write -----
81920+0 records in
81920+0 records out
10737418240 bytes (11 GB) copied, 86.9232 s, 124 MB/s

--- read ----
81920+0 records in
81920+0 records out
10737418240 bytes (11 GB) copied, 68.5545 s, 157 MB/s

----- write -----
81920+0 records in
81920+0 records out
10737418240 bytes (11 GB) copied, 87.9937 s, 122 MB/s

--- read ----
81920+0 records in
81920+0 records out
10737418240 bytes (11 GB) copied, 68.277 s, 157 MB/s
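As a cross-check of the numbers dd reports (here using the first write pass), dd's rate is simply bytes divided by elapsed seconds, in decimal megabytes:

```shell
# Cross-check dd's reported rate for the first write pass above:
# 10737418240 bytes in 120.208 seconds, expressed in decimal MB/s.
awk 'BEGIN { printf "%.1f MB/s\n", 10737418240 / 120.208 / 1000000 }'
# prints: 89.3 MB/s
```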