Buffalo NAS-Central Forums

PostPosted: Mon Dec 28, 2015 8:59 am 
Offline
Newbie

Joined: Mon Dec 28, 2015 8:56 am
Posts: 7
Hi,

I've just received this error message and lost access to all my files: E14 Cannot mount RAID array 1.

Steps taken:
- Check disk on array 1 - no result
- Reboot - no result
- Firmware downgrade and re-upgrade - no result
- HDD 1 removed and re-added - no result
- mdadm check:
root@LS-WXL75C:~# mdadm -Q --detail /dev/md*
mdadm: /dev/md does not appear to be an md device
/dev/md0:
Version : 0.90
Creation Time : Wed Oct 21 22:46:27 2015
Raid Level : raid1
Array Size : 999872 (976.60 MiB 1023.87 MB)
Used Dev Size : 999872 (976.60 MiB 1023.87 MB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 0
Persistence : Superblock is persistent

Update Time : Mon Dec 28 16:13:39 2015
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0

UUID : 303bce10:ccbc083b:7ed5cf53:07eda886
Events : 0.61

Number Major Minor RaidDevice State
0 8 1 0 active sync /dev/sda1
1 8 17 1 active sync /dev/sdb1
/dev/md1:
Version : 0.90
Creation Time : Wed Oct 21 22:46:28 2015
Raid Level : raid1
Array Size : 4999936 (4.77 GiB 5.12 GB)
Used Dev Size : 4999936 (4.77 GiB 5.12 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 1
Persistence : Superblock is persistent

Update Time : Mon Dec 28 16:20:32 2015
State : active
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0

UUID : 42fe5f87:c983d945:6e5c12cc:df2e1f75
Events : 0.1011

Number Major Minor RaidDevice State
0 8 2 0 active sync /dev/sda2
1 8 18 1 active sync /dev/sdb2
/dev/md10:
Version : 0.90
Creation Time : Wed Oct 21 22:46:28 2015
Raid Level : raid1
Array Size : 999872 (976.60 MiB 1023.87 MB)
Used Dev Size : 999872 (976.60 MiB 1023.87 MB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 10
Persistence : Superblock is persistent

Update Time : Mon Dec 28 16:18:16 2015
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0

UUID : cc027bcb:4df6a8a3:c364bfde:1ab1a9dd
Events : 0.71

Number Major Minor RaidDevice State
0 8 5 0 active sync /dev/sda5
1 8 21 1 active sync /dev/sdb5
/dev/md2:
Version : 1.2
Creation Time : Wed Oct 21 23:21:12 2015
Raid Level : raid1
Array Size : 961748840 (917.20 GiB 984.83 GB)
Used Dev Size : 961748840 (917.20 GiB 984.83 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent

Update Time : Mon Dec 28 16:15:52 2015
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0

Name : LS-WXL75C:2 (local to host LS-WXL75C)
UUID : 28dcb6df:64f4e68c:052fc07e:c12207a9
Events : 177

Number Major Minor RaidDevice State
0 8 22 0 active sync /dev/sdb6
2 8 6 1 active sync /dev/sda6


Can someone please help? It has all my photos, which I don't want to lose :(
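
A side note on that output: the first line, "mdadm: /dev/md does not appear to be an md device", is harmless; the shell glob /dev/md* also matches an entry that is not an array. A sketch (not from the thread) that queries the arrays explicitly instead:
Code:
# query each array by name to avoid the stray message from the glob
mdadm --detail /dev/md0 /dev/md1 /dev/md2 /dev/md10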


PostPosted: Mon Dec 28, 2015 9:10 am 
Online
Moderator

Joined: Mon Apr 26, 2010 10:24 am
Posts: 2692
Post the output of:
Code:
cat /proc/mdstat


PostPosted: Mon Dec 28, 2015 9:24 am 
Offline
Newbie

Joined: Mon Dec 28, 2015 8:56 am
Posts: 7
root@LS-WXL75C:~# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid6] [raid5] [raid4]
md2 : active raid1 sda6[2]
961748840 blocks super 1.2 [2/1] [_U]

md1 : active raid1 sda2[0]
4999936 blocks [2/1] [U_]

md10 : active raid1 sda5[0]
999872 blocks [2/1] [U_]

md0 : active raid1 sda1[0]
999872 blocks [2/1] [U_]

unused devices: <none>
root@LS-WXL75C:~#


PostPosted: Mon Dec 28, 2015 9:25 am 
Offline
Newbie

Joined: Mon Dec 28, 2015 8:56 am
Posts: 7
I'll replug the second HDD and repost the output.

It's rebuilding the RAID now; I'll get the output once it's completed.

edit:

Rebuilding is in progress, but meanwhile here is the output:
root@LS-WXL75C:~# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid6] [raid5] [raid4]
md2 : active raid1 sdb6[3] sda6[2]
961748840 blocks super 1.2 [2/1] [_U]
[>....................] recovery = 0.9% (8752448/961748840) finish=327.5min speed=48488K/sec

md1 : active raid1 sdb2[1] sda2[0]
4999936 blocks [2/2] [UU]

md10 : active raid1 sdb5[1] sda5[0]
999872 blocks [2/2] [UU]

md0 : active raid1 sdb1[1] sda1[0]
999872 blocks [2/2] [UU]

unused devices: <none>
root@LS-WXL75C:~#
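
A hedged way to keep an eye on the resync without retyping the command, using nothing beyond a standard shell:
Code:
# re-print /proc/mdstat every 60 seconds while a recovery is listed
while grep -q recovery /proc/mdstat; do cat /proc/mdstat; sleep 60; done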


PostPosted: Mon Dec 28, 2015 10:27 am 
Online
Moderator

Joined: Mon Apr 26, 2010 10:24 am
Posts: 2692
Quote:
I'll replug the second HDD and repost the output


Be aware: you are using RAID. If you remove a drive, that drive will be marked as damaged. You have removed the first drive and then the second drive, so it is now possible that you rebuild a RAID with two drives marked as damaged.

This new RAID will be empty.

Good luck
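
For reference, a less risky alternative to physically pulling and replugging a disk is to re-add its partitions to the degraded arrays from the shell. A sketch, assuming the second disk shows up as /dev/sdb, as it does in this thread:
Code:
# md resyncs the re-added members against the surviving copies on /dev/sda;
# if --re-add is refused (metadata too old), --add triggers a full resync instead
mdadm /dev/md0 --re-add /dev/sdb1
mdadm /dev/md1 --re-add /dev/sdb2
mdadm /dev/md10 --re-add /dev/sdb5
mdadm /dev/md2 --re-add /dev/sdb6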


PostPosted: Mon Dec 28, 2015 10:28 am 
Online
Moderator

Joined: Mon Apr 26, 2010 10:24 am
Posts: 2692
It is possible that you can mount a degraded RAID while it is recovering.

Post the output of:
Code:
mount
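
If mounting during the recovery is attempted, a read-only mount is the safer variant; a sketch, assuming the /mnt/array1 mount point already exists:
Code:
# read-only keeps the filesystem untouched while the array resyncs
mount -o ro /dev/md2 /mnt/array1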


PostPosted: Mon Dec 28, 2015 1:16 pm 
Offline
Newbie

Joined: Mon Dec 28, 2015 8:56 am
Posts: 7
It's still rebuilding the RAID:

root@LS-WXL75C:~# mount
rootfs on / type rootfs (rw)
/dev/root on / type ext3 (rw,relatime,errors=continue,barrier=0,data=writeback)
proc on /proc type proc (rw,relatime)
sysfs on /sys type sysfs (rw,relatime)
udev on /dev type tmpfs (rw,nosuid,relatime,size=10240k,mode=755)
devpts on /dev/pts type devpts (rw,relatime,gid=4,mode=620)
/dev/ram1 on /mnt/ram type tmpfs (rw,relatime,size=15360k)
/dev/md0 on /boot type ext3 (rw,relatime,errors=continue,barrier=1,data=writeback)
usbfs on /proc/bus/usb type usbfs (rw,relatime)
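
Note that /dev/md2, the data array, is missing from that list; only the system partitions are mounted. A quick way to check just the md devices:
Code:
# prints nothing if no md array is mounted
mount | grep /dev/md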


PostPosted: Mon Dec 28, 2015 2:58 pm 
Offline
Newbie

Joined: Mon Dec 28, 2015 8:56 am
Posts: 7
Unfortunately the error remained the same after rebuilding the RAID:

root@LS-WXL75C:~# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid6] [raid5] [raid4]
md2 : active raid1 sdb6[3] sda6[2]
961748840 blocks super 1.2 [2/2] [UU]

md1 : active raid1 sdb2[1] sda2[0]
4999936 blocks [2/2] [UU]

md10 : active raid1 sdb5[1] sda5[0]
999872 blocks [2/2] [UU]

md0 : active raid1 sdb1[1] sda1[0]
999872 blocks [2/2] [UU]

unused devices: <none>
root@LS-WXL75C:~# mount
rootfs on / type rootfs (rw)
/dev/root on / type ext3 (rw,relatime,errors=continue,barrier=0,data=writeback)
proc on /proc type proc (rw,relatime)
sysfs on /sys type sysfs (rw,relatime)
udev on /dev type tmpfs (rw,nosuid,relatime,size=10240k,mode=755)
devpts on /dev/pts type devpts (rw,relatime,gid=4,mode=620)
/dev/ram1 on /mnt/ram type tmpfs (rw,relatime,size=15360k)
/dev/md0 on /boot type ext3 (rw,relatime,errors=continue,barrier=1,data=writeback)
usbfs on /proc/bus/usb type usbfs (rw,relatime)
root@LS-WXL75C:~#


Any ideas, please?
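
Since all the arrays now report [2/2] [UU], the RAID itself looks healthy, which points at the filesystem on /dev/md2 rather than the mirror. A hedged next step is to check why the kernel rejected the mount:
Code:
# the kernel log usually states the reason for a failed mount
dmesg | tail -n 30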


PostPosted: Mon Dec 28, 2015 3:19 pm 
Online
Moderator

Joined: Mon Apr 26, 2010 10:24 am
Posts: 2692
Try to mount the filesystem:
Code:
mount /dev/md2 /mnt/array1


PostPosted: Mon Dec 28, 2015 3:38 pm 
Offline
Newbie

Joined: Mon Dec 28, 2015 8:56 am
Posts: 7
It doesn't work :(
root@LS-WXL75C:~# mount /dev/md2 /mnt/array1
mount: mounting /dev/md2 on /mnt/array1 failed: Structure needs cleaning
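
"Structure needs cleaning" is the kernel's generic message for filesystem metadata corruption; on this box the data partition is XFS (see the repair below). A non-destructive first look is possible with a dry run (a sketch, not from the thread):
Code:
# no-modify mode: report what xfs_repair would change without writing anything
xfs_repair -n /dev/md2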


PostPosted: Mon Dec 28, 2015 4:32 pm 
Online
Moderator

Joined: Mon Apr 26, 2010 10:24 am
Posts: 2692
Clear the log:

Code:
xfs_repair -L /dev/md2


Then mount it.


PostPosted: Mon Dec 28, 2015 4:42 pm 
Offline
Newbie

Joined: Mon Dec 28, 2015 8:56 am
Posts: 7
You did it, mate!!! :)

I can't thank you enough, it just worked :)

root@LS-WXL75C:~# xfs_repair -L /dev/md2
Phase 1 - find and verify superblock...
Not enough RAM available for repair to enable prefetching.
This will be _slow_.
You need at least 204MB RAM to run with prefetching enabled.
Phase 2 - using internal log
- zero log...
ALERT: The filesystem has valuable metadata changes in a log which is being
destroyed because the -L option was used.
- scan filesystem freespace and inode maps...
block (11,692025-692025) multiply claimed by cnt space tree, state - 2
block (11,199784-199784) multiply claimed by cnt space tree, state - 2
agf_freeblks 2930004, counted 2930000 in ag 11
agi unlinked bucket 27 is 539 in ag 22 (inode=2952790555)
sb_ifree 1083, counted 1067
sb_fdblocks 174272837, counted 169033092
- found root inode chunk
Phase 3 - for each AG...
- scan and clear agi unlinked lists...
- process known inodes and perform inode discovery...
- agno = 0
- agno = 1
- agno = 2
- agno = 3
- agno = 4
- agno = 5
- agno = 6
- agno = 7
- agno = 8
- agno = 9
- agno = 10
- agno = 11
data fork in ino 1476466251 claims free block 96892896
data fork in ino 1476466251 claims free block 96892897
correcting bt key (was 279996, now 279920) in inode 1476466251
data fork, btree block 95002326
data fork in ino 1476466251 claims free block 96888651
data fork in ino 1476466251 claims free block 96888652
data fork in ino 1476466251 claims free block 96892848
data fork in ino 1476466251 claims free block 96892849
data fork in ino 1476466251 claims free block 96893228
data fork in ino 1476466251 claims free block 96893229
data fork in ino 1476466251 claims free block 96893000
data fork in ino 1476466251 claims free block 96893001
data fork in ino 1476466251 claims free block 96893104
data fork in ino 1476466251 claims free block 96893105
data fork in ino 1476466251 claims free block 96893172
data fork in ino 1476466251 claims free block 96893173
data fork in ino 1476466251 claims free block 96893332
data fork in ino 1476466251 claims free block 96893333
data fork in ino 1476466251 claims free block 96893448
data fork in ino 1476466251 claims free block 96893449
data fork in ino 1476466251 claims free block 96893488
data fork in ino 1476466251 claims free block 96893489
data fork in ino 1476466251 claims free block 96893496
data fork in ino 1476466251 claims free block 96893497
data fork in ino 1476466251 claims free block 96893516
data fork in ino 1476466251 claims free block 96893517
data fork in ino 1476466251 claims free block 96893536
data fork in ino 1476466251 claims free block 96893537
data fork in ino 1476466251 claims free block 96893552
data fork in ino 1476466251 claims free block 96893553
data fork in ino 1476466251 claims free block 96893564
data fork in ino 1476466251 claims free block 96893565
data fork in ino 1476466251 claims free block 96893576
data fork in ino 1476466251 claims free block 96893577
data fork in ino 1476466251 claims free block 96894436
data fork in ino 1476466251 claims free block 96894437
correcting bt key (was 712664, now 712628) in inode 1476466251
data fork, btree block 95002326
correcting bt key (was 764354, now 766736) in inode 1476466251
data fork, btree block 95002326
bad fwd (right) sibling pointer (saw 92475221 parent block says 95933735)
in inode 1476466251 (data fork) bmap btree block 95702333
bad data fork in inode 1476466251
cleared inode 1476466251
- agno = 12
- agno = 13
- agno = 14
- agno = 15
- agno = 16
- agno = 17
- agno = 18
- agno = 19
- agno = 20
- agno = 21
- agno = 22
- agno = 23
- agno = 24
- agno = 25
- agno = 26
- agno = 27
- agno = 28
- agno = 29
- agno = 30
- agno = 31
- process newly discovered inodes...
Phase 4 - check for duplicate blocks...
- setting up duplicate extent list...
- check for inodes claiming duplicate blocks...
- agno = 0
- agno = 1
- agno = 2
- agno = 3
- agno = 4
- agno = 5
- agno = 6
- agno = 7
- agno = 8
- agno = 9
- agno = 10
- agno = 11
entry "File1" in shortform directory 1476466250 references free inode 1476466251
junking entry "File1" in directory inode 1476466250
- agno = 12
- agno = 13
- agno = 14
- agno = 15
- agno = 16
- agno = 17
- agno = 18
- agno = 19
- agno = 20
- agno = 21
- agno = 22
- agno = 23
- agno = 24
- agno = 25
- agno = 26
- agno = 27
- agno = 28
- agno = 29
- agno = 30
- agno = 31
Phase 5 - rebuild AG headers and trees...
- reset superblock...
Phase 6 - check inode connectivity...
- resetting contents of realtime bitmap and summary inodes
- traversing filesystem ...
- traversal finished ...
- moving disconnected inodes to lost+found ...
disconnected inode 2952790555, moving to lost+found
Phase 7 - verify and correct link counts...
Note - quota info will be regenerated on next quota mount.
done
root@LS-WXL75C:~# mount /dev/md2 /mnt/array1
root@LS-WXL75C:~#



Thanks again!! I copied all the output; it might help somebody resolve the same issue :)
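
One caveat from the repair log above: one inode was cleared, the directory entry "File1" was junked, and a disconnected inode was moved to lost+found, so a small amount of data may have been orphaned. A hedged check after mounting:
Code:
# anything xfs_repair could not reconnect lands here, named by inode number
ls -l /mnt/array1/lost+found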


PostPosted: Tue May 17, 2016 5:31 pm 
Offline
Total Newbie

Joined: Tue May 17, 2016 5:26 pm
Posts: 4
Hi, I seem to have a very similar issue to the one you dealt with. What is the password to SSH into the NAS? That way I can follow the steps outlined in this thread.

Thanks!


PostPosted: Tue May 17, 2016 5:45 pm 
Online
Moderator

Joined: Mon Apr 26, 2010 10:24 am
Posts: 2692
You have to set a password while opening SSH.

See step 2.5 of
http://forum.nas-hilfe.de/buffalo-technology-nas-anleitungen/ssh-freischalten-mit-2-klicks-t2468.html
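
Once SSH is enabled via that guide and a root password has been set, the NAS can be reached with a normal SSH client. A generic sketch; the address is a placeholder for the LinkStation's IP:
Code:
# log in as root with the password chosen in the guide
ssh root@192.168.1.100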


PostPosted: Tue May 17, 2016 10:14 pm 
Offline
Total Newbie

Joined: Tue May 17, 2016 5:26 pm
Posts: 4
Unfortunately, ACP Commander doesn't work for me. I keep getting a communication error using the default port 22936.

ACP Commander v1.5.6.0 started
Sending discovery request...
Communication timeout, please try again!

