Buffalo NAS-Central Forums

PostPosted: Thu Dec 30, 2010 5:34 am 
Newbie

Joined: Wed Nov 05, 2008 11:01 pm
Posts: 6
In case this helps someone out, I'm documenting my experience recovering from a RAID error. I initially thought I would need to swap out a disk, but it turns out I didn't have to.
Model: LS-WSGL, 250 GB, configuration: RAID 1

I noticed that my LS Mini showed a red LED that repeatedly flashed four times. According to the manual, this error code corresponds to a "fan error" - but there's no fan?!? On the device's web page there was a message stating there was a RAID error (I forget the exact wording), with no other information. I ran the disk check, which didn't return anything.

Fortunately I had already installed acp_commander, so I was able to telnet in as root and poke around, with much help from Google.
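
For reference, opening the root shell with acp_commander usually goes something like this (the IP address is just an example and the flags are from memory - check the tool's built-in help before relying on them):

Code:
java -jar acp_commander.jar -t 192.168.11.150 -o    # open telnet and blank the root password
telnet 192.168.11.150                               # then log in as root

Once telnetted in, here's what I found: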

Code:
root@LS_NAS:~# mount
rootfs on / type rootfs (rw)
/dev/root on / type xfs (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw)
/dev/ram1 on /mnt/ram type tmpfs (rw)
/dev/md0 on /boot type ext3 (rw,data=ordered)
usbfs on /proc/bus/usb type usbfs (rw)
/dev/md2 on /mnt/array1 type xfs (rw,noatime)
root@LS_NAS:~#

root@LS_NAS:~# mdadm -D /dev/md2
 /dev/md2:
        Version : 00.90.03
  Creation Time : Sun Oct 19 10:57:00 2008
     Raid Level : raid1
     Array Size : 236332096 (225.38 GiB 242.00 GB)
    Device Size : 236332096 (225.38 GiB 242.00 GB)
   Raid Devices : 2
  Total Devices : 1
Preferred Minor : 2
    Persistence : Superblock is persistent

    Update Time : Wed Dec 29 11:39:38 2010
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           UUID : 33fa2a64:3f042a04:58c689ac:ae1b8046
         Events : 0.252220

    Number   Major   Minor   RaidDevice State
       0       0        0        0      removed
       1       8       22        1      active sync   /dev/sdb6

root@LS_NAS:~# cat /proc/mdstat
Personalities : [raid0] [raid1]
md2 : active raid1 sdb6[1]
      236332096 blocks [2/1] [_U]
     
md1 : active raid1 sda2[0] sdb2[1]
      5004160 blocks [2/2] [UU]
     
md10 : active raid1 sda5[0] sdb5[1]
      1003904 blocks [2/2] [UU]
     
md0 : active raid1 sda1[0] sdb1[1]
      1003904 blocks [2/2] [UU]
     
unused devices: <none>


In short, one of the partitions had failed (the main one containing all the data, /dev/sda6) and had been kicked out of the array.
I called the Buffalo support line, which was quite unhelpful. Regarding the LED code they just said it could be a controller issue and recommended re-formatting my disks. I thought about pulling and swapping the failed drive myself and asked for the part number, but they weren't able to provide even that information.

Then I realized that, per the /proc/mdstat output, the other partitions on the suspect disk (sda) were still active in their arrays, which made me think the disk and controller were fine and only that one partition had been dropped.
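
A quick way to sanity-check that reasoning (not something I did at the time, so treat it as a suggestion) is to look at the md superblock on the dropped partition and compare its UUID with the array's:

Code:
mdadm --examine /dev/sda6    # should still show a raid1 superblock with the array UUID
mdadm --detail /dev/md2      # the UUID here should match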

I ran:
badblocks /dev/sda6
to search for bad blocks on the partition. It took a while, but no errors were reported.
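
By default badblocks only does a read-only scan. If you want more feedback, or a more thorough (but still non-destructive) test, something like the following should work - double-check the flags on your version first, and never use the destructive -w mode on a partition that holds data:

Code:
badblocks -s -v /dev/sda6       # read-only scan with progress and verbose output
badblocks -n -s -v /dev/sda6    # non-destructive read-write test (much slower)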

Next:
mdadm --manage /dev/md2 --add /dev/sda6
to put the partition back into the array. The array automatically started rebuilding, which completed successfully after about 3 hours.
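
While the rebuild is running you can keep an eye on it with the same tools as above, for example:

Code:
cat /proc/mdstat           # shows a progress bar and an estimated time to completion
mdadm --detail /dev/md2    # "State" should go from "clean, degraded, recovering" to "clean"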

Now the LED error code is gone and no error is reported on the web page. Problem solved!? (at least, for now)

One thing that does bother me is why this happened in the first place. Looking back through /var/log/messages, I saw a bunch of core driver errors from about a week ago, followed by messages about the sda partitions being kicked out of their arrays. I'm not exactly sure what the core driver error means, but maybe it's time to update the firmware.
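
If anyone wants to dig into their own logs the same way, this is roughly what I'd run (smartctl is only there if smartmontools happens to be installed, which it may not be on the stock firmware):

Code:
grep -iE 'sda|raid|md2' /var/log/messages    # find the original I/O errors and the kick-out messages
smartctl -a /dev/sda                         # drive SMART health, if smartctl is available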

Comments welcome, thanks.


PostPosted: Mon Mar 26, 2012 5:39 pm 
Total Newbie

Joined: Mon Mar 26, 2012 5:12 pm
Posts: 1
Thanks for this post. I had the exact same problem and this fixed it.
I don't know why Buffalo would not provide the same help, but I'm glad someone did.


PostPosted: Tue Jun 25, 2013 11:34 am 
Moderator

Joined: Mon Apr 26, 2010 10:24 am
Posts: 2730
Quote:
The failure of just one drive will result in all data in an array being lost


???

Use RAID 6 and you can lose two drives without losing any data.
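
For what it's worth, on a box with four or more data partitions and mdadm, a RAID 6 array is created roughly like this (device names are only an example - the two-bay LS Mini in this thread can't do RAID 6):

Code:
mdadm --create /dev/md2 --level=6 --raid-devices=4 /dev/sd[abcd]6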

