In case this helps someone out, I'm documenting my experience recovering from a RAID error. I initially thought I would need to swap out a disk, but it turns out I didn't have to.
Model LS-WSGL 250GB, RAID1 configuration
I noticed that my LS mini showed a red LED that repeatedly flashed 4 times. According to the manual, this error code corresponds to a "fan error" - but there's no fan?!? When I accessed the device's web page, I saw a message stating there was a RAID error (I forget the exact wording), with no other information. I ran the disk check, which didn't return anything.
Fortunately I had already installed acp_commander, so I was able to telnet in as root and poke around, with much help from Google.
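For anyone who hasn't used it: acp_commander is a small Java tool that talks to the LinkStation over the network and can open a root telnet session. If I remember correctly the invocation is something along these lines (the IP is just an example and the flags are from memory, so check the tool's own help output):
java -jar acp_commander.jar -t 192.168.1.50 -o
telnet 192.168.1.50
Once logged in as root, here is what mount showed: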
rootfs on / type rootfs (rw)
/dev/root on / type xfs (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw)
/dev/ram1 on /mnt/ram type tmpfs (rw)
/dev/md0 on /boot type ext3 (rw,data=ordered)
usbfs on /proc/bus/usb type usbfs (rw)
/dev/md2 on /mnt/array1 type xfs (rw,noatime)
root@LS_NAS:~# mdadm -D /dev/md2
Version : 00.90.03
Creation Time : Sun Oct 19 10:57:00 2008
Raid Level : raid1
Array Size : 236332096 (225.38 GiB 242.00 GB)
Device Size : 236332096 (225.38 GiB 242.00 GB)
Raid Devices : 2
Total Devices : 1
Preferred Minor : 2
Persistence : Superblock is persistent
Update Time : Wed Dec 29 11:39:38 2010
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0
UUID : 33fa2a64:3f042a04:58c689ac:ae1b8046
Events : 0.252220
Number Major Minor RaidDevice State
0 0 0 0 removed
1 8 22 1 active sync /dev/sdb6
root@LS_NAS:~# cat /proc/mdstat
Personalities : [raid0] [raid1]
md2 : active raid1 sdb6
236332096 blocks [2/1] [_U]
md1 : active raid1 sda2 sdb2
5004160 blocks [2/2] [UU]
md10 : active raid1 sda5 sdb5
1003904 blocks [2/2] [UU]
md0 : active raid1 sda1 sdb1
1003904 blocks [2/2] [UU]
unused devices: <none>
In short, one of the partitions had failed (the main one containing all the data, /dev/sda6) and it had been kicked out of its array (md2).
I called the Buffalo support line, which was quite unhelpful. Regarding the LED code, they just said it could be a controller issue and recommended reformatting my disks. I thought about pulling and swapping the failed drive myself and asked for the part number, but they weren't even able to provide that.
Then I realized that, per the /proc/mdstat output, the other partitions on the suspect disk (sda) were still active in their arrays, which made me think the disk and controller were fine and that just the one partition had gone bad.
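(Side note: if smartmontools happens to be available on the box - I'm not sure it ships with the stock firmware - the drive's own SMART health can be sanity-checked with something like
smartctl -H /dev/sda
before deciding whether the disk itself is at fault.)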
I then ran a read-only scan for failed blocks on the partition. It took a while, but no errors were reported.
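For reference, a non-destructive read-only scan of a partition looks something like this (badblocks ships with e2fsprogs; -s shows progress, -v prints any bad blocks as they are found):
badblocks -sv /dev/sda6
Read-only is the default mode, so it's safe to run on a partition that still holds data.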
Then I ran
mdadm --manage /dev/md2 --add /dev/sda6
to put the partition back into the array. The array automatically started a recovery (resync) of the re-added partition, which took about 3 hours and completed successfully.
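While the recovery was running, its progress could be followed in /proc/mdstat; running
cat /proc/mdstat
every so often shows a progress bar and an estimated finish time for md2.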
Now the LED error code is gone and there is no error reported on the webpage. Problem solved!? (at least, for now)
One thing that does bother me is why this happened in the first place. Looking back through /var/log/messages, I saw a bunch of core driver errors that occurred a week earlier, followed by messages about the sda partitions being kicked out of their arrays. I'm not exactly sure what the core driver error means, but maybe it's time to update the firmware.
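If you want to check your own logs for the same thing, something like
grep -iE 'raid|md|sda' /var/log/messages
should pull out the relevant lines (adjust the pattern as needed, and note the log may have been rotated in the meantime).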
Comments welcome, thanks.