Buffalo NAS-Central Forums

PostPosted: Tue Dec 20, 2016 7:25 pm
eric1024 (Newbie)
Joined: Sun Dec 18, 2016 8:12 pm
Posts: 9
I have a LinkStation Quad LS-QVL setup as follows:

3 x 3TB Western Digital Red NAS drives.

Disk 1 & Disk 2 are in a RAID-1 array (mirrored): "Array 1".
Disk 3 is a stand-alone volume.

I had a power failure the other day, and when I turned my LS-QVL back on, one HDD (Disk 1) was showing the red LED and its status in the web interface is: Error.
The Storage page shows:

Disks
Disk 1: Error
Disk 2: Array 1
Disk 3: Normal (RMM available)
Disk 4: Removed

RAID Array
Array 1: Not Configured
Array 2: Not Configured


I shut down the LS-QVL, removed Disk 1, attached it to my PC, and booted up. I ran CrystalDiskInfo, and it reports the Health Status of the drive as Good.
Is there any way to see more detail on why the NAS is reporting an Error for the drive?
This drive is still under factory warranty, and if there's a problem I want to RMA it and replace it, but I'm concerned that since the SMART status shows Good, it won't be determined to be bad and won't be replaced.
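
For reference, more detail can usually be pulled with smartmontools if the drive is attached to a Linux system (such as the Knoppix environment suggested later in this thread). A minimal sketch, where /dev/sdX is a placeholder for whatever device the disk shows up as:

Code:
# full SMART report: attributes, error log, and self-test history
smartctl -a /dev/sdX

# run a long (full-surface) self-test, then read its result once it finishes
smartctl -t long /dev/sdX
smartctl -l selftest /dev/sdX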

Edit: Here's the CrystalDiskInfo info:

[screenshot: CrystalDiskInfo report for the removed drive]


Last edited by eric1024 on Tue Dec 20, 2016 8:03 pm, edited 1 time in total.

PostPosted: Tue Dec 20, 2016 7:59 pm
oxygen8 (Moderator)
Joined: Mon Apr 26, 2010 10:24 am
Posts: 2717
A power failure with a RAID can cause effects like this.

I recommend you create a backup first; after that, format Disk 1 and rebuild the array.

With luck you will not need the backup.

For the future, think about a UPS such as the APC BE700G-GR.


PostPosted: Tue Dec 20, 2016 8:09 pm
eric1024 (Newbie)
Joined: Sun Dec 18, 2016 8:12 pm
Posts: 9
What type of backup do you suggest? I cannot access the data on Disk2 since Array 1 is not active due to the failed Disk1.

I actually ordered another drive (I originally wanted 4 anyway, 2 x RAID 1 arrays), so I figured maybe I could: remove Disk #1, insert the new disk, rebuild the array, and then try to use the old Disk#1 in slot 4 and change Disk 3 to a RAID-1 Array with Disk 4. I'm more concerned with the data on Array1 than Disk3, so if the drive reporting Error does fail, I'd rather it happen on the new Array with Disk3.

When you mention formatting Drive 1, do you mean outside of the Buffalo utility? The web interface doesn't give me the option of formatting that disk; the only option active is Remove Disk.

Yeah, I definitely need to get a UPS; I'll find a good one and order that soon also.


PostPosted: Tue Dec 20, 2016 9:00 pm
oxygen8 (Moderator)
Joined: Mon Apr 26, 2010 10:24 am
Posts: 2717
Quote:
I cannot access the data on Disk2 since Array 1 is not active


This is not normal for RAID 1.
Do not rebuild without a backup.

Quote:
When you mention formatting drive 1, do you mean outside of the Buffalo utility?


No.

Quote:
Remove Disk.

That is the right way.


I think the firmware will delete the old RAID.

Try to read all the data from the removed GOOD drive.

Start your PC with Knoppix and the RAID drive connected.

Boot from this CD:
Knoppix boot CD
http://www.knopper.net/knoppix-mirrors/


Then send these commands to the console:

Code:
# install RAID, XFS, SMART, and file-manager tools in the live session
apt-get install mdadm xfsprogs smartmontools mc

# list every block device and partition the kernel can see
cat /proc/partitions


Post the output.

Code:
# show the state of all software-RAID arrays
cat /proc/mdstat

Post the output.



Code:
# assemble the data partitions (partition 6 on each Buffalo disk) into one array;
# adjust sd[bcde]6 to whatever /proc/partitions reported for your disks
mdadm --assemble /dev/md0 /dev/sd[bcde]6
# note: with a member missing, the array may refuse to start;
# "mdadm --run /dev/md0" starts it degraded

# check and repair the XFS filesystem on the assembled array
xfs_repair /dev/md0

# mount it and browse the files with Midnight Commander
mkdir /tmp/test
mount /dev/md0 /tmp/test
mc /tmp/test


PostPosted: Tue Dec 20, 2016 10:46 pm
eric1024 (Newbie)
Joined: Sun Dec 18, 2016 8:12 pm
Posts: 9
oxygen8 wrote:
Quote:
I cannot access the data on Disk2 since Array 1 is not active


This is not normal for RAID 1.
Do not rebuild without a backup.



Yeah, I didn't think so either; I thought it was strange that I couldn't access it.

oxygen8 wrote:
Quote:
Remove Disk.

That is the right way.

I think the firmware will delete the old RAID.


Are you saying that if I remove Disk 1, the firmware will delete the old RAID? I certainly don't want that to happen.

oxygen8 wrote:
Try to read all the data from the removed GOOD drive.

Start your PC with Knoppix and the RAID drive connected.

Boot from this CD:
Knoppix boot CD
http://www.knopper.net/knoppix-mirrors/



I have already enabled SSH/root access on this NAS, so I should be able to do all of this from the NAS shell, correct? (Maybe everything except the mc command, at least.)

oxygen8 wrote:
Then send these commands to the console:

Code:
apt-get install mdadm xfsprogs smartmontools mc

cat /proc/partitions


Post the output.



Code:
root@LS-QVL1C0:~# cat /proc/partitions
major minor  #blocks  name

  31        0        512 mtdblock0
  31        1      16384 mtdblock1
  31        2     499712 mtdblock2
   8       32 2930266584 sdc
   8       33    1000448 sdc1
   8       34    5000192 sdc2
   8       35       1024 sdc3
   8       36       1024 sdc4
   8       37    1000448 sdc5
   8       38 2914875392 sdc6
  31        3       8192 mtdblock3
   8       16 2930266584 sdb
   8       17    1000448 sdb1
   8       18    5000192 sdb2
   8       19       1024 sdb3
   8       20       1024 sdb4
   8       21    1000448 sdb5
   8       22 2914875392 sdb6
   8        0 2930266584 sda
   8        1    1000448 sda1
   8        2    5000192 sda2
   8        3       1024 sda3
   8        4       1024 sda4
   8        5    1000448 sda5
   8        6 2914875392 sda6
   9        0    1000384 md0
   9       10    1000436 md10
   9        1    4999156 md1
   9       23 2914874232 md23


oxygen8 wrote:
Code:
cat /proc/mdstat

Post the output.


Code:
root@LS-QVL1C0:~# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid6] [raid5] [raid4]
md2 : inactive sdb6[1](S)
      2914874368 blocks super 1.2

md23 : active raid1 sdc6[0]
      2914874232 blocks super 1.2 [2/1] [U_]

md1 : active raid1 sdb2[1] sdc2[2]
      4999156 blocks super 1.2 [4/2] [_UU_]

md10 : active raid1 sdb5[1] sdc5[2]
      1000436 blocks super 1.2 [4/2] [_UU_]

md0 : active raid1 sdb1[1] sdc1[2]
      1000384 blocks [4/2] [_UU_]



oxygen8 wrote:
Code:
mdadm --assemble /dev/md0 /dev/sd[bcde]6

xfs_repair /dev/md0

mkdir /tmp/test

mount /dev/md0 /tmp/test

mc /tmp/test


I assume I shouldn't run this as-is on the NAS? I'm not sure which devices correspond to which drives/RAID configurations.
I would have thought Disk 1 = sda, Disk 2 = sdb, Disk 3 = sdc; however, from the output of /proc/mdstat it appears that Array 1 is set up with sdb and sdc (unless I'm reading it wrong).
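
One way to pin down the mapping, sketched here on the assumption that smartctl is available on the NAS shell (it may need to be installed): match each device's serial number against the sticker on the physical drive in each slot.

Code:
# print model and serial for each disk so it can be matched to the
# label on the physical drive
for d in /dev/sda /dev/sdb /dev/sdc; do
    echo "== $d =="
    smartctl -i "$d" | grep -E 'Model|Serial'
done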


PostPosted: Tue Dec 20, 2016 11:00 pm
oxygen8 (Moderator)
Joined: Mon Apr 26, 2010 10:24 am
Posts: 2717
The NAS is not a PC booted with Knoppix.

Quote:
I have already enabled SSH/root access on this NAS, so I should be able to do all of this from the NAS shell, correct?



No; on the NAS, the Buffalo scripts do that work themselves.

Your data is still on:
Quote:
md2 : inactive sdb6[1](S)
2914874368 blocks super 1.2


The (S) means spare.


That is not how I would do it.
You can try your way with a clone of the original drive.


PostPosted: Tue Dec 20, 2016 11:07 pm
eric1024 (Newbie)
Joined: Sun Dec 18, 2016 8:12 pm
Posts: 9
oxygen8 wrote:
/dev/md23 is your data drive



Code:
mkdir /tmp/test

mount /dev/md23 /tmp/test

mc /tmp/test


Actually, it looks like md23 is the Disk 3 "array", which is the one I'm able to access (it's already mounted and accessible).

Code:
root@LS-QVL1C0:/var/log# df -h
Filesystem                Size      Used Available Use% Mounted on
/dev/md1                  4.7G      1.6G      2.8G  37% /
udev                     10.0M    160.0k      9.8M   2% /dev
/dev/ram1                15.0M    184.0k     14.8M   1% /mnt/ram
/dev/md0                969.2M     29.3M    939.9M   3% /boot
/dev/md23                 2.7T    511.3G      2.2T  18% /mnt/disk3


I'm guessing the inactive md2 (from the mdstat output) is Array 1, which lists only sdb6 as a member since sda is not functioning properly.
I had two separate volumes on here: one was a RAID-1 array with Disk 1 & Disk 2, and the second was just a single disk (Disk 3).
What I need to be able to do is either restore the array and rebuild Disk 1, or at least back up the data from Disk 2 and rebuild the array from scratch.
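
For the backup route, a minimal sketch of forcing the half-present array active and mounting it read-only so the data can be copied off; the device names come from the mdstat output above, and as oxygen8 advises, this is safest done against a clone:

Code:
# stop the inactive remnant, reassemble it degraded from the surviving
# member (--run starts it despite the missing disk), and mount read-only
mdadm --stop /dev/md2
mdadm --assemble --run /dev/md2 /dev/sdb6
mkdir -p /mnt/rescue
mount -o ro /dev/md2 /mnt/rescue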


PostPosted: Tue Dec 20, 2016 11:08 pm
eric1024 (Newbie)
Joined: Sun Dec 18, 2016 8:12 pm
Posts: 9
oxygen8 wrote:
The NAS is not a PC booted with Knoppix.


No, I ran those on the NAS (LS-QVL) itself:

Quote:
I have already enabled SSH/root access on this NAS, so I should be able to do all of this from the NAS shell, correct? (Maybe everything except the mc command, at least.)


PostPosted: Tue Dec 20, 2016 11:21 pm
oxygen8 (Moderator)
Joined: Mon Apr 26, 2010 10:24 am
Posts: 2717
Quote:
That is not how I would do it.
You can try your way with a clone of the original drive.


PostPosted: Tue Dec 20, 2016 11:31 pm
eric1024 (Newbie)
Joined: Sun Dec 18, 2016 8:12 pm
Posts: 9
oxygen8 wrote:
That is not how I would do it.
You can try your way with a clone of the original drive.


Okay, well, I'll download and burn the Knoppix CD, disconnect all of my own HDDs from my PC, connect Disk 2, and then run your commands; that'll take some time.
What is the end result of your method? Will xfs_repair repair the good Disk 2 and allow it to function properly on its own (without Disk 1) when put back in the NAS?
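
For what it's worth, xfs_repair has a no-modify mode, so the check can be previewed before anything is written. A minimal sketch, assuming the array has been assembled as /dev/md0 per the earlier commands:

Code:
# -n = no-modify: report what would be repaired without touching the disk
xfs_repair -n /dev/md0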


PostPosted: Tue Dec 20, 2016 11:54 pm
oxygen8 (Moderator)
Joined: Mon Apr 26, 2010 10:24 am
Posts: 2717
Quote:
when put back in the NAS?


You will get an error for Disk 1.
Disk 2 is now a spare and not part of an active RAID 1.

I see no way forward with the crashed Buffalo firmware.

The wrong settings are stored in /etc/melco/diskinfo.

You can try to modify this, if you have a backup.


PostPosted: Wed Dec 21, 2016 2:39 am
eric1024 (Newbie)
Joined: Sun Dec 18, 2016 8:12 pm
Posts: 9
oxygen8 wrote:
Quote:
when put back in the NAS?


You will get an error for Disk 1.
Disk 2 is now a spare and not part of an active RAID 1.

I see no way forward with the crashed Buffalo firmware.

The wrong settings are stored in /etc/melco/diskinfo.

You can try to modify this, if you have a backup.



What makes you say /etc/melco/diskinfo is wrong? What is wrong and what needs to be changed?
Here are the current contents:

Code:
root@LS-QVL1C0:~# cat /etc/melco/diskinfo
array1=off
array1_dev=md2
array2=off
array2_dev=
disk1=degrade
disk1_dev=
disk2=array1
disk2_dev=
disk3=normal
disk3_dev=md23
disk4=remove
disk4_dev=
usb_disk1=""
usb_disk2=""


PostPosted: Wed Dec 21, 2016 7:01 am
oxygen8 (Moderator)
Joined: Mon Apr 26, 2010 10:24 am
Posts: 2717
Quote:
array1=off

should be array1=on

Quote:
disk1=degrade

should be disk1=array1
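
A minimal sketch of applying those two changes from the NAS shell, assuming the firmware's sed supports in-place editing; keeping a copy first means the original file can always be restored:

Code:
# back up the config, then apply the two changes described above
cp /etc/melco/diskinfo /etc/melco/diskinfo.bak
sed -i -e 's/^array1=off/array1=on/' \
       -e 's/^disk1=degrade/disk1=array1/' /etc/melco/diskinfo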


PostPosted: Wed Dec 21, 2016 7:38 am
eric1024 (Newbie)
Joined: Sun Dec 18, 2016 8:12 pm
Posts: 9
oxygen8 wrote:
Quote:
array1=off

should be array1=on

Quote:
disk1=degrade

should be disk1=array1


Do you think that if I change those manually and reboot the NAS, it might just start working?

Interesting results from running mdadm --examine on sda6 and sdb6:

Code:
root@LS-QVL1C0:/mnt# mdadm --examine /dev/sda6
/dev/sda6:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 935f8ccc:423d853f:ed0ecdcc:2da5db16
           Name : LS-QVL1C0:2  (local to host LS-QVL1C0)
  Creation Time : Fri May  1 00:35:37 2015
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 5829748736 (2779.84 GiB 2984.83 GB)
     Array Size : 5829748464 (2779.84 GiB 2984.83 GB)
  Used Dev Size : 5829748464 (2779.84 GiB 2984.83 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 0239bccb:d26280ee:ff95aae6:dfd3f29c

    Update Time : Sat Dec 17 16:29:08 2016
       Checksum : cf210825 - correct
         Events : 19


   Device Role : Active device 0
   Array State : AA ('A' == active, '.' == missing)



Code:
root@LS-QVL1C0:/mnt# mdadm --examine /dev/sdb6
/dev/sdb6:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 935f8ccc:423d853f:ed0ecdcc:2da5db16
           Name : LS-QVL1C0:2  (local to host LS-QVL1C0)
  Creation Time : Fri May  1 00:35:37 2015
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 5829748736 (2779.84 GiB 2984.83 GB)
     Array Size : 5829748464 (2779.84 GiB 2984.83 GB)
  Used Dev Size : 5829748464 (2779.84 GiB 2984.83 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 0fe10680:8de487b2:7c6e7985:11c1699c

    Update Time : Sun Dec 18 12:41:22 2016
       Checksum : e5bb13f0 - correct
         Events : 107


   Device Role : Active device 1
   Array State : .A ('A' == active, '.' == missing)


Before I make any changes on the NAS, I'll be making a clone of at least one of the drives (likely Disk 2).
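
A minimal sketch of one way to make that clone, assuming GNU ddrescue (which ships on Knoppix); SOURCE and TARGET are placeholders and must be double-checked, since reversing them destroys the good disk:

Code:
# clone the good disk onto an equal-or-larger spare; the map file lets
# an interrupted copy resume where it left off
ddrescue -f /dev/sdSOURCE /dev/sdTARGET /root/clone.map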


PostPosted: Wed Dec 21, 2016 7:54 am
oxygen8 (Moderator)
Joined: Mon Apr 26, 2010 10:24 am
Posts: 2717
This looks good: both partitions carry the same Array UUID, and the higher event count on sdb6 (107 vs. 19) marks it as the member with the most recent data.
Try it with the clones, or with one clone.

