Buffalo NAS-Central Forums

Welcome to the Linkstation Wiki community

All times are UTC+01:00




PostPosted: Tue Jun 28, 2016 12:15 pm 
Offline
Newbie

Joined: Tue Jun 28, 2016 11:57 am
Posts: 15
My LS-QVL runs RAID 5 with 3 x 4 TB HDDs. Disk 1 was reported with status Error, and after that none of the shared folders could be accessed. When I logged in to the control panel, I found that RAID 5 had changed to Array 1 and the storage was Not Available.
I replaced disk 1 with a new one (not yet formatted), but the status is still Error and all the other options are greyed out.
What should I do now? Please help!

Image


PostPosted: Tue Jun 28, 2016 12:18 pm 
Offline
Newbie

Joined: Tue Jun 28, 2016 11:57 am
Posts: 15
I did the System > Restore/Erase > Restore Factory Defaults without any success.
Thanks


PostPosted: Tue Jun 28, 2016 3:26 pm 
Offline
Moderator

Joined: Mon Apr 26, 2010 10:24 am
Posts: 2698
You have to format the new drive. Use slot 2.

Then you can build a new array and restore your backup.


PostPosted: Tue Jun 28, 2016 3:32 pm 
Offline
Newbie

Joined: Tue Jun 28, 2016 11:57 am
Posts: 15
oxygen8 wrote:
You have to format the new drive. Use slot 2.

Then you can build a new array and restore your backup.


My slot 2 has had a problem since I hot-plugged a drive into that slot some months ago. Since then, the slot has not recognized any HDD.


PostPosted: Tue Jun 28, 2016 4:37 pm 
Offline
Newbie

Joined: Tue Jun 28, 2016 11:57 am
Posts: 15
I thought that slot 2 of my NAS could not recognize any HDD, since I had tried it many times in the past. However, this time it works, for no apparent reason.

I added the new HDD to slot 2 and formatted it as XFS.

Image

On the RAID Array tab, it asks me to select the disk whose data should be preserved. Only disk 2 is selectable.

Image

After selecting disk 2, nothing happens. I cannot choose Rebuild the Array, and Create RAID 1 while retaining data (RMM) is greyed out as well.

Image

Please help!


PostPosted: Tue Jun 28, 2016 7:38 pm 
Offline
Moderator

Joined: Mon Apr 26, 2010 10:24 am
Posts: 2698
You have to delete the damaged RAID first.


PostPosted: Wed Jun 29, 2016 2:45 am 
Offline
Newbie

Joined: Tue Jun 28, 2016 11:57 am
Posts: 15
oxygen8 wrote:
You have to delete the damaged RAID first.

How can I delete the damaged RAID?
Is the data kept? Thanks.


PostPosted: Wed Jun 29, 2016 6:15 am 
Offline
Moderator

Joined: Mon Apr 26, 2010 10:24 am
Posts: 2698
All your data is lost!!!

You can try to recover with the three old drives,
but I think it is not possible.

0. Please post the output of

Code:
cat /etc/melco/diskinfo

and

Code:
cat /proc/mdstat


1. Put in all old drives and start your LinkStation.

2. Run the hardware test:
Code:
smartctl -d marvell -t long /dev/sda
smartctl -d marvell -t long /dev/sdb
smartctl -d marvell -t long /dev/sdc
smartctl -d marvell -t long /dev/sdd


This is one example of the output:

Quote:
root@QVL:~# smartctl -d marvell -t long /dev/sda
smartctl version 5.37 [arm-none-linux-gnueabi] Copyright (C) 2002-6 Bruce Allen
Home page is http://smartmontools.sourceforge.net/

=== START OF OFFLINE IMMEDIATE AND SELF-TEST SECTION ===
Sending command: "Execute SMART Extended self-test routine immediately in off-line mode".
Drive command "Execute SMART Extended self-test routine immediately in off-line mode" successful.
Testing has begun.
Please wait 255 minutes for test to complete.
Test will complete after Wed Jun 29 11:25:40 2016

Use smartctl -X to abort test.


3. "Please wait 255 minutes for test to complete."
Do it

4. after this time control the rersults
Code:
smartctl -d marvell -a /dev/sda |grep "Extended offline"
smartctl -d marvell -a /dev/sdb |grep "Extended offline"
smartctl -d marvell -a /dev/sdc |grep "Extended offline"
smartctl -d marvell -a /dev/sdd |grep "Extended offline"



This is an example for a good drive:
Code:
SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Extended offline    Completed without error       00%       240         -


PostPosted: Wed Jun 29, 2016 9:20 am 
Offline
Newbie

Joined: Tue Jun 28, 2016 11:57 am
Posts: 15
OMG... I need to find another way before doing that, because the data cannot be lost...

How can I remove the "Error" message on disk 1? I put the new HDD into slot 1 but still get this message. A few days ago slot 1 still worked smoothly, and now it does not recognize any HDD. Can anyone please help!!!

Would it work if I borrowed another QVL and put the HDDs into slots 1, 3 and 4 as before, to get the RAID array working?

Thanks so much!


PostPosted: Wed Jun 29, 2016 10:23 am 
Offline
Moderator

Joined: Mon Apr 26, 2010 10:24 am
Posts: 2698
The RAID superblock is on your disks;
a new QVL will not help you.
You can try to assemble the old disks with mdadm, along the lines of the sketch below.
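
A rough sketch of what that could look like from a rescue shell (the device names and the data partition number 6 are assumptions based on the typical LS-QVL layout seen later in this thread; verify them with mdadm --examine before doing anything):

Code:
# look at the RAID superblocks on the data partitions (partition 6 on a LS-QVL)
mdadm --examine /dev/sda6 /dev/sdb6 /dev/sdc6

# try to start the old array from whatever members are still usable
# (add --force only if it refuses because of out-of-date event counts)
mdadm --assemble --run /dev/md2 /dev/sda6 /dev/sdb6 /dev/sdc6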

By the way,
on LinkStations you are supposed to use 4 drives for RAID 5.
I have also built a RAID 5 with 3 drives and 2/3 of the capacity,
and everything looked fine.

But now we can see that the firmware is not able to repair a RAID 5 with only 2 running drives,
which is a great pity.


If you really need the data:

Buy a second drive, clone both old good drives with dd, and let us try to assemble the array from these copies; see the example below.

To reduce the risk of writing to your original drives, do not use those drives in the LinkStation.

You need a PC with a CD-ROM drive, a Knoppix boot CD and a free SATA slot.
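
The cloning itself could look roughly like this from a Knoppix shell (the device names sdX/sdY are only placeholders; check with fdisk -l which disk is the original and which is the empty target, because swapping them destroys the source):

Code:
# copy the whole original disk onto the new, empty disk, sector by sector
# /dev/sdX = original (source), /dev/sdY = new drive (target) -- placeholders, verify first!
dd if=/dev/sdX of=/dev/sdY bs=1M conv=noerror,sync
# if the source already has read errors, GNU ddrescue is the safer tool for this step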



Do you use WD Green drives?


PostPosted: Wed Jun 29, 2016 10:26 am 
Offline
Moderator

Joined: Mon Apr 26, 2010 10:24 am
Posts: 2698
To remove the error message, you can edit /etc/melco/diskinfo.

My German how-to:
http://forum.nas-hilfe.de/buffalo-technology-nas-anleitungen/raid-eine-festplatte-kann-nicht-formatiert-werden-t2547.html

Do not use this with your original drives!
Use clones.
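
Roughly, the edit amounts to resetting the status line of the affected slot (this is only a sketch based on the diskinfo entries posted later in this thread; the exact value in your file may differ, so follow the how-to):

Code:
vi /etc/melco/diskinfo
# change the line for the affected slot, e.g. from
#   disk1=error        (or whatever error status is shown)
# back to
#   disk1=normal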


PostPosted: Sat Jul 02, 2016 5:50 am 
Offline
Newbie

Joined: Tue Jun 28, 2016 11:57 am
Posts: 15
Thank you so much for your help, oxygen8!!!

I need to get 3 new HDDs to make clone disks first, then I can try your advice above.

For now I am doing nothing, to avoid losing data...


PostPosted: Sat Jul 02, 2016 7:29 am 
Offline
Moderator

Joined: Mon Apr 26, 2010 10:24 am
Posts: 2698
You need only 2 new drives.
In a RAID 5 with 3 drives, one drive is only redundancy;
I do not need that drive for the recovery attempt.
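
(For context, assuming the 3 x 4 TB setup from the first post: RAID 5 with 3 drives gives roughly 2 x 4 TB = 8 TB of usable space, and any two of the three drives are enough to reconstruct the data, which is why the third drive does not need to be cloned.)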


PostPosted: Sat Jul 02, 2016 8:55 am 
Offline
Newbie

Joined: Tue Jun 28, 2016 11:57 am
Posts: 15
oxygen8 wrote:
To remove the error message, you can edit /etc/melco/diskinfo.

My German how-to:
http://forum.nas-hilfe.de/buffalo-technology-nas-anleitungen/raid-eine-festplatte-kann-nicht-formatiert-werden-t2547.html

Do not use this with your original drives!
Use clones.


Thanks for your help!!!

I followed your guidance and was able to remove the Error status of disk 1; slot 1 is now working normally. However, there is another issue: I cannot Rebuild the Array, it only offers Create Raid Array.

When I insert the HDD into slot 1, the Raid Array option works.

Image

It lets me choose 2 out of 3 HDDs to Create Raid Array.

Image

If I select all 3 drives, then Create Raid Array becomes greyed out. How can I find Rebuild the Array? Thanks.

Image


PostPosted: Sat Jul 02, 2016 9:19 am 
Offline
Newbie

Joined: Tue Jun 28, 2016 11:57 am
Posts: 15
oxygen8 wrote:
All your data is lost!!!

You can try to recover with the three old drives,
but I think it is not possible.

0. Please post the output of

Code:
cat /etc/melco/diskinfo

and

Code:
cat /proc/mdstat





Code:
cat /etc/melco/diskinfo


Authenticate EnOneCmd... OK
Authenticate with admin pw... OK
array1=off
array1_dev=md2
array2=off
array2_dev=
disk1=normal
disk1_dev=md22
disk2=normal
disk2_dev=md22
disk3=array1
disk3_dev=
disk4=array1
disk4_dev=
usb_disk1=
usb_disk2=


Code:
cat /proc/mdstat


Authenticate EnOneCmd... OK
Authenticate with admin pw... OK
Personalities : [linear] [raid0] [raid1] [raid6] [raid5] [raid4]
md2 : inactive sdb6[1](S)
2914874368 blocks super 1.2

md22 : active raid1 sda6[0]
2914874232 blocks super 1.2 [2/1] [U_]

md1 : active raid1 sdb2[4] sda2[5]
4999156 blocks super 1.2 [4/2] [UU__]

md10 : active raid1 sdb5[4] sda5[5]
1000436 blocks super 1.2 [4/2] [UU__]

md0 : active raid1 sdb1[1] sda1[0]
1000384 blocks [4/2] [UU__]

unused devices: <none>


Last edited by zerocool09 on Sat Jul 02, 2016 10:15 am, edited 1 time in total.
