Buffalo NAS-Central Forums

Welcome to the Linkstation Wiki community

All times are UTC+01:00




Post new topic  Reply to topic  [ 2 posts ] 
PostPosted: Sat Mar 22, 2014 11:43 pm 
Offline
Total Newbie

Joined: Fri Mar 21, 2014 6:36 pm
Posts: 1
I have successfully installed Debian Squeeze (thanks to the excellent guide by Vollemelk), but I'm having trouble recreating the RAID arrays. I tried to remove /dev/md2 from RAID0 and recreate it as RAID1, but ended up losing network access to my NAS three times (steady blue light, no remote access), so I assume something failed along the way. I didn't touch the other partitions. Can someone experienced help me with this, please? This is the current status of my filesystems -

Both sda and sdb are exactly the same.

PARTED -
Code:
Model: ATA SAMSUNG HD103SI (scsi)
Disk /dev/sdb: 1953525168s
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start        End          Size         File system  Name     Flags
        34s          2047s        2014s        Free Space
 1      2048s        2002943s     2000896s     ext3         primary
 2      2002944s     12003327s    10000384s                 primary
 3      12003328s    12005375s    2048s                     primary
 4      12005376s    12007423s    2048s                     primary
 5      12007424s    14008319s    2000896s                  primary
 6      14008320s    1937508319s  1923500000s               primary
        1937508320s  1953525134s  16016815s    Free Space

FSTAB -
Code:
# /etc/fstab: static file system information.
#
# file system   mount point     type    options                 dump pass
/dev/md1        /               ext3    defaults,noatime        0    1
/dev/md2        /home           ext3    defaults,noatime        0    2
/dev/md0        /boot           ext3    rw,nosuid,nodev         0    2
/dev/md10       none            swap    sw                      0    0
proc            /proc           proc    defaults                0    0

MDSTAT -
Code:
Personalities : [linear] [raid0] [raid1]
md1 : active raid1 sda2[0] sdb2[1]
      4999156 blocks super 1.2 [2/2] [UU]

md10 : active raid1 sda5[0] sdb5[1]
      1000436 blocks super 1.2 [2/2] [UU]

md2 : active raid0 sda6[0] sdb6[1]
      1923496960 blocks super 1.2 512k chunks

md0 : active raid1 sda1[0] sdb1[1]
      1000384 blocks [2/2] [UU]

unused devices: <none>

I roughly used these commands the last 3 times:
Code:
umount /home
mdadm --stop /dev/md2
mdadm --create --level=1 --raid-devices=2 /dev/sda6 /dev/sdb6

I also read Vollemelk's guide on rebuilding RAID - http://pastebin.com/pWWrkzQB
But it seems he already has one of the drives set up and starts from there.

What is the best way to go about this?


PostPosted: Sat May 24, 2014 8:01 pm 
Offline
Newbie

Joined: Wed Dec 12, 2012 10:25 am
Posts: 72
Missed this post, and I haven't been here much lately. Are you still having issues?

What is the goal of your setup? Do you want RAID1 or RAID0? I recommend using the stock firmware and configuring your drives through the Buffalo web interface.

And the array rebuild is something that I have to explain ;)

Let me quote my own guide first:
Quote:
# Here is the trick. It takes around 980 minutes to sync RAID1. If I had to wait 980 minutes each time I
# bricked my NAS, it would've taken me YEARS. So, we are going to interrupt the sync process:
# Make sure the RAID is configured (The orange LED is blinking indicating it is rebuilding the array)
# * Turn off the NAS using the switch on the back
# * Remove drive 2
# * Turn on the NAS using the powerswitch
# The NAS will boot, the red error light will blink and the red bay light will turn on. This is good,
# once we have the NAS up and running we will use mdadm to add the second drive back to the array and
# let it rebuild.
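
To spell out that last step: once the NAS is up with the degraded array, re-adding the second drive's member partition is a single mdadm command. A rough sketch (md2 and sdb6 here are just examples taken from this thread - match them against your own /proc/mdstat first):
Code:
# check which array is degraded ([U_] means a member is missing)
cat /proc/mdstat

# add the second drive's partition back; mdadm starts the resync automatically
mdadm /dev/md2 --add /dev/sdb6

# follow the rebuild progress
cat /proc/mdstat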

I bricked my NAS perhaps 40-50 times before I finally got the whole thing up and running. Each time the NAS crashed, I took out the drives, formatted them, put them back, flashed the stock firmware, and used the interface to configure the RAID array again. It takes 980 minutes to rebuild the array! I didn't want to wait for that.

So one time, the NAS failed yet again and I was facing the whole 980-minute wait before I could start over. I did some research and found out that I could start an array degraded. That way it wouldn't need to resync the drives (there was only one drive), but the /dev/md0 device would still be created. From then on I could try it 4-5 times per evening.
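
For anyone wondering how starting an array degraded works: mdadm accepts the keyword missing in place of an absent member, so you can create a two-device RAID1 with only one drive present. A sketch, using /dev/md0 and /dev/sda1 as example names:
Code:
# create a 2-device RAID1 with one member missing - no resync needed
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 missing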

Naturally, once everything worked I wanted the RAID capabilities back. This is where you start reading the rebuild-array guide. You correctly noted that I already start off with one drive set up, which is why :)

Of course, you can install Debian with any configuration you want and change it afterwards.

edit: I'm no mdadm expert, but have a look at this: http://www.ducea.com/2009/03/08/mdadm-cheat-sheet/

The way I see things, you completely stop the array with --stop. If you look at step 3 of that cheat sheet, you have to --fail a drive before you can --remove it from an array. After you have done that, you can use parted to repartition that drive. Then use mdadm to create a degraded array on it. Once the new RAID is up and running, stop and delete the old array, add its drive to the new array, and let it sync. Don't forget to update fstab, or you can start over ;)
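
Roughly, for the md2 case in this thread, that could look like the sketch below. This is untested - double-check every device name against your own /proc/mdstat before running anything, and note that going from RAID0 to RAID1 destroys everything on /home (a two-disk RAID0 can't survive losing a member anyway), so back it up first:
Code:
# stop the old RAID0 array and wipe its member metadata
umount /home
mdadm --stop /dev/md2
mdadm --zero-superblock /dev/sda6 /dev/sdb6

# create a degraded RAID1 and put a filesystem on it
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda6 missing
mkfs.ext3 /dev/md2

# add the second partition and let it sync
mdadm /dev/md2 --add /dev/sdb6

# persist the array definition, then double-check /etc/fstab
mdadm --detail --scan >> /etc/mdadm/mdadm.conf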

