Buffalo NAS-Central Forums

PostPosted: Sun Sep 01, 2013 8:16 pm 

Joined: Sun Jul 07, 2013 9:59 pm
Posts: 17
Thanks, VolleMelk, for the updates, files, tutorials, and the trial-and-error time you invested.

Has anyone tried these patches with the 3.10 (long-term support) kernel? I'm wondering if I should start with the 3.8.3 (end-of-life) kernel or go straight to the LTS 3.10.x.


PostPosted: Mon Sep 02, 2013 5:35 pm 

Joined: Sun Jul 07, 2013 9:59 pm
Posts: 17
If anyone has thoughts about the above, or indeed about Wheezy vs. Squeeze, I'm eager to hear them...

I tried the instructions for the first time ==> No Boot.

I'm pretty certain there are instructions missing from "Debian on LS-WVL - Pastebin" after line 305. At that point we are still inside the chroot, never having exited, and the two initrd files have not been copied to ~/ of the non-chroot file system.

So I exited and copied them myself. I then followed Step 5 and wound up with a flashing blue power light, i.e. no successful boot.
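Concretely, what I did after line 305 was roughly this (the chroot path is my assumption, not from the guide; adjust it to your own mount points):
Code:
# leave the chroot first
exit
# then copy the two initrd images into the home directory of the
# outer (non-chroot) system; /mnt/chroot is an assumed path
cp /mnt/chroot/root/initrd.buffalo* ~/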

I happened to notice that http://buffalo.nas-central.org/wiki/Debian_Squeeze_on_LS-WXL, in the section "Editing /etc/fstab", says: "the LS has only a single drive attached, change /dev/md2 to either /dev/sda6 or /dev/sdb6 depending on which slot the drive is inserted." This corresponds to line 177 in VolleMelk's guide.

I can't find where VolleMelk says to put the second HDD back, so my first attempt was with only HDD 1 (sda) inserted. I subsequently tried to boot with both drives, and even with the two drives swapped. No love.

I'm gonna retrace my steps. I may try using sda in the fstab instead of md2, but I need to read and think more on this.
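If I do go the single-drive route, my understanding is the /etc/fstab change would look something like this (the mount options are my guess, not taken from either guide):
Code:
# RAID root, as in VolleMelk's guide:
/dev/md2     /    ext3    defaults    0 1
# single-drive alternative for slot 1 (use /dev/sdb6 for slot 2):
#/dev/sda6    /    ext3    defaults    0 1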

Anyone else give it a try? Success? I'd like to know.


PostPosted: Thu Sep 05, 2013 8:30 pm 

Joined: Wed Dec 12, 2012 10:25 am
Posts: 72
mickleby wrote:
Thanks, Vollemelk, for the updates, files, tutorials, and the trial-and-error time you invested.

Has anyone tried these patched with the 3.10 (Long-term support) kernel? I'm wondering if I should start with the 3.8.3 (end of life) kernel or use the LTS 3.10.x.

You're welcome, I see you're having trouble. Let's see what we can do.

I have not tried the new LTS kernel. If you're really dying to get it (as I should be too, but hey ;) ), I suggest you first get your NAS up and running with the older kernel. That way you know everything works. Afterwards you can upgrade the kernel, or redo everything (or both, if the upgrade fails ;) )

mickleby wrote:
If anyone has thoughts about above, or indeed about wheezy vs squeeze, I'm eager to consider them...

I know someone who managed to get Wheezy running. He first installed Squeeze and then did a distribution upgrade, I believe.
edit: See here for Wheezy. You could read the scripts, or just use them ;) viewtopic.php?f=77&t=24575&start=30#p162909

mickleby wrote:
I tried the instructions for the first time ==> No Boot.

I'm pretty certain there are missing instructions in "Debian on LS-WVL - Pastebin" after line 305. We are still in chroot, having not exit'ed. And the 2 initrd files have not been copied to ~/ of the "un-chroot" file system.

So, I exited and copied. I followed Step 5, and wound up with flashing blue power light. I.e., no successful boot.

Looks like you are right! If one follows the guide in chronological order, there is indeed an exit missing. Weird how I could've missed it... I used the guide myself in its work-in-progress state, only as a reference. I've added the exit, thanks!

mickleby wrote:
I happened to notice http://buffalo.nas-central.org/wiki/Debian_Squeeze_on_LS-WXL in section Editing /etc/fstab says, "the LS has only a single drive attached, change /dev/md2 to either /dev/sda6 or /dev/sdb6 depending on which slot the drive is inserted." This is line 177 in VolleMelk's guide.

It is kind of misleading. If you are using /dev/sd* directly, you are not using RAID. In my guide I only use RAID1, which is nothing more than two drives mirrored. It's perfectly possible to create a RAID array (/dev/md*) with a single drive, but the array then runs degraded. I used a degraded array so that after each attempt that ended in a bricked NAS I wouldn't have to wait for the array to rebuild. It's explained on lines 94-109 of the 'Debian on LS-WVL' pastebin guide.
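For reference, creating such a degraded single-member RAID1 boils down to something like this (the device names are only an example, not lifted from the guide):
Code:
# create a two-slot RAID1 with the second member deliberately missing
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda6 missing
# later, once everything boots, add the second drive and let it resync
mdadm --add /dev/md2 /dev/sdb6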

mickleby wrote:
I don't find in Vollemelk where he says to replace the second hdd, so my first attempt was with only hhd 1 (sda) inserted. I did subsequently try to boot with both drives, even with both drives swapped. No love.

I'm gonna retrace my steps. I may try using sda in the fstab instead of md2, but I need to read and think more on this.

Anyone else give it a try? Success? I'd like to know.

Others seem to have used my guide with success, so I am sure you can do the same. Let's do some troubleshooting. I assume the NAS is back on stock firmware; can you confirm this? Did you perform a TFTP recovery, and with Buffalo's firmware or Shonk's?

If so, after you've regained SSH root access, could you please post the results of the commands below? I think your partition layout is different, and that the initrd tries to boot from the wrong device/partition (be it RAID or a single drive). Do you want a RAID configuration, RAID1 or RAID0? Or perhaps two independent drives?

Please post:
Code:
fdisk -l
cat /proc/mdstat
cat /etc/fstab
mount

And last but not least, please start 'parted' and enter these commands:
Code:
print
use /dev/sdb
print

It's possible that you will get an error in parted, depending on whether you have both drives inserted or the left drive missing. That doesn't matter.


PostPosted: Sun Sep 08, 2013 7:17 pm 

Joined: Sun Jul 07, 2013 9:59 pm
Posts: 17
VolleMelk

Hey, it's great to see you have time to baby-step me through this process. You rock!

Quote:
I assume the NAS is back on stock firmware (can you confirm this? Did you perform a TFTP recovery? With buffalo firmware or Shonk's firmware?)


I'm using Shonk's firmware. Is it 1.64.1?

I had some difficulty getting back there. The disk I wasn't using, Disk 2, was grayed out in the LS web interface; i.e. the web interface wouldn't let me check, format, remove, or rediscover the disk I had removed per line 104 of your instructions. The hardware showed red on Disk 2 and gave I12 flashes. It was successfully copying the boot partition files over to Disk 2 (I verified this; it was one-way, Disk 1 -> Disk 2). I deleted the partitions, I deleted the disk label... it simply wasn't interested in formatting the "new" disk. Ultimately I went to Restore/Erase in the web interface, which did the trick.

During this struggle, I found I can boot with the old kernel's initrd.buffalo.old and also with initrd.buffalo.final (line 312), but NOT with the initrd.buffalo (line 311) created as an intermediate/temporary image in the guide. (I found this by sequentially renaming each of them to 'initrd.buffalo' on Disk 1.)
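In other words, roughly this, done on the boot partition of Disk 1 (paths assumed):
Code:
cd /boot
cp initrd.buffalo.old initrd.buffalo     # boots
cp initrd.buffalo.final initrd.buffalo   # boots
# the intermediate image from line 311, copied over initrd.buffalo, does NOT boot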

Quote:
I think your partition layout is different, and that the initrd tries to boot from the wrong device/partition (be it raid or single drive). Do you want a RAID configuration? RAID1, RAID0? Or do you perhaps want two different drives?


I want to follow your Guide, so I have restored the RAID1.

Code:
root@Nas:~# fdisk -l

Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks  Id System
/dev/sda1               1      121602   976762583+ ee EFI GPT

Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks  Id System
/dev/sdb1               1      121602   976762583+ ee EFI GPT
root@Nas:~# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid6] [raid5] [raid4]
md2 : active raid1 sdb6[1] sda6[0]
      961748840 blocks super 1.2 [2/2] [UU]
      [=>...................]  resync =  6.6% (64361472/961748840) finish=304.4min speed=49116K/sec
     
md1 : active raid1 sda2[0] sdb2[2]
      4999156 blocks super 1.2 [2/2] [UU]
     
md10 : active raid1 sda5[3] sdb5[2]
      1000436 blocks super 1.2 [2/2] [UU]
     
md0 : active raid1 sda1[0] sdb1[1]
      1000384 blocks [2/2] [UU]
     
unused devices: <none>
root@Nas:~# cat /etc/fstab
/dev/root   /      ext3   defaults   1 1
proc      /proc      proc   defaults   0 0
devpts         /dev/pts    devpts     gid=4,mode=620 0    0

root@Nas:~# mount
rootfs on / type rootfs (rw)
/dev/root on / type ext3 (rw,relatime,errors=continue,barrier=0,data=writeback)
proc on /proc type proc (rw,relatime)
sysfs on /sys type sysfs (rw,relatime)
udev on /dev type tmpfs (rw,nosuid,relatime,size=10240k,mode=755)
devpts on /dev/pts type devpts (rw,relatime,gid=4,mode=620)
/dev/ram1 on /mnt/ram type tmpfs (rw,relatime,size=15360k)
/dev/md0 on /boot type ext3 (rw,relatime,errors=continue,barrier=1,data=writeback)
usbfs on /proc/bus/usb type usbfs (rw,relatime)
/dev/md2 on /mnt/array1 type xfs (rw,noatime,usrquota,grpquota)
root@Nas:~#


Quote:
print
use /dev/sdb
print

Here I think you meant "select /dev/sdb", and that the first print would default to /dev/sda:

Code:
root@Nas:~# parted
GNU Parted 1.8.8
Using /dev/sda
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) print                                                           
Model:  WD10EADS-11M2B2 (scsi)
Disk /dev/sda: 1000GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start   End     Size    File system  Name     Flags
 1      1049kB  1026MB  1024MB  ext3         primary       
 2      1026MB  6146MB  5120MB  ext3         primary       
 3      6146MB  6147MB  1049kB  ext2         primary       
 4      6147MB  6148MB  1049kB  ext2         primary       
 5      6148MB  7172MB  1024MB  linux-swap   primary       
 6      7172MB  992GB   985GB                primary       

(parted) select /dev/sdb                                                 
Using /dev/sdb
(parted) print                                                           
Model:  WD10EADS-11M2B1 (scsi)
Disk /dev/sdb: 1000GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start   End     Size    File system  Name     Flags
 1      1049kB  1026MB  1024MB  ext3         primary       
 2      1026MB  6146MB  5120MB               primary       
 3      6146MB  6147MB  1049kB               primary       
 4      6147MB  6148MB  1049kB               primary       
 5      6148MB  7172MB  1024MB               primary       
 6      7172MB  992GB   985GB                primary       

(parted) use /dev/sdb                                                     
  check NUMBER                             do a simple check on the file system
  cp [FROM-DEVICE] FROM-NUMBER TO-NUMBER   copy file system to another partition
  help [COMMAND]                           print general help, or help on
        COMMAND
  mklabel,mktable LABEL-TYPE               create a new disklabel (partition
        table)
  mkfs NUMBER FS-TYPE                      make a FS-TYPE file system on
        partititon NUMBER
  mkpart PART-TYPE [FS-TYPE] START END     make a partition
  mkpartfs PART-TYPE FS-TYPE START END     make a partition with a file system
  move NUMBER START END                    move partition NUMBER
  name NUMBER NAME                         name partition NUMBER as NAME
  print [devices|free|list,all|NUMBER]     display the partition table,
        available devices, free space, all found partitions, or a particular
        partition
  quit                                     exit program
  rescue START END                         rescue a lost partition near START
        and END
  resize NUMBER START END                  resize partition NUMBER and its file
        system
  rm NUMBER                                delete partition NUMBER
  select DEVICE                            choose the device to edit
  set NUMBER FLAG STATE                    change the FLAG on partition NUMBER
  toggle [NUMBER [FLAG]]                   toggle the state of FLAG on partition
        NUMBER
  unit UNIT                                set the default unit to UNIT
  version                                  display the version number and
        copyright information of GNU Parted
(parted) quit                                                             
root@Nas:~#


This all seems correct. I am curious about something in mdstat:
Quote:
root@Nas:~# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid6] [raid5] [raid4]
md2 : active raid1 sdb6[1] sda6[0]
961748840 blocks super 1.2 [2/2] [UU]
[=>...................] resync = 6.6% (64361472/961748840) finish=304.4min speed=49116K/sec

md1 : active raid1 sda2[0] sdb2[2]
4999156 blocks super 1.2 [2/2] [UU]

md10 : active raid1 sda5[3] sdb5[2]
1000436 blocks super 1.2 [2/2] [UU]

md0 : active raid1 sda1[0] sdb1[1]
1000384 blocks [2/2] [UU]

unused devices: <none>

What do these bracketed numbers mean, and why do they differ between arrays? Anyway, this is fresh from the Restore, so I assume it is "normal". Also, why does only md2 list sdb6[1] before sda6[0], while all the other arrays list sda first?

Finally, as the resync message shows, the RAID1 array is still being rebuilt.


PostPosted: Tue Sep 10, 2013 6:23 am 

Joined: Sun Jul 07, 2013 9:59 pm
Posts: 17
I followed the other guide http://buffalo.nas-central.org/wiki/Debian_Squeeze_on_LS-WXL and I got the SAME results.

This is good and bad. It's always good when errors can be reproduced; it's bad for obvious reasons. It's possible I misread something in the guides; for example, there are a few places where I squint and wonder, "Is that an O or a 0?"

BUT! Because initrd.buffalo.final boots and initrd.buffalo doesn't... I'm thinking there must be something in linuxrc; it must be that the 0x902 setting isn't working.

Code:
# use /dev/md1 as root
# echo "0x901" > /proc/sys/kernel/real-root-dev
# use /dev/md2 as root
echo "0x902" > /proc/sys/kernel/real-root-dev


PostPosted: Sun Sep 15, 2013 7:47 pm 

Joined: Sun Jul 07, 2013 9:59 pm
Posts: 17
Somehow the rootfs partition, md1/sda2, appeared to be deleted. I'm not sure if this happened at first boot with the updated initrd. Anyway, I had to recover from a brick. That also resolved the strange ordering of md2, so I'm going to start again from the beginning.

-- edit 21-09-13
No. What actually happened: md relocated the filesystem because it used this partition as a member of a version 1.2 RAID array (with v1.2 metadata at the head of the partition, the filesystem no longer starts at the partition boundary, so it merely looks deleted). There was probably nothing wrong at all.


Last edited by mickleby on Sun Sep 22, 2013 2:57 am, edited 1 time in total.

PostPosted: Mon Sep 16, 2013 12:39 am 

Joined: Sun Jul 07, 2013 9:59 pm
Posts: 17
Yes, again the rootfs gets whacked and I get an endless flashing blue light. Not the "flashing blue until it fails to boot, then black, then retry" pattern, but the same endless flashing blue that happens when there is no system on the root partition.

I'm thinking mdadm is mangling the partitions/filesystem somehow. The LS can work just fine; I shut it down, and when I examine the drive on a different system I see the filesystem on sda2 as ext3, even though the LS apparently considers it a member of md1. However, after I boot with the new initrd, the filesystem is changed: my Ubuntu box describes the partitions as RAID members. I can successfully mount sda1, the /boot partition, but if I try to mount sda2 it complains about a bad superblock and won't mount. If I try to mount it or sda6 as XFS, I get "bad magic number, SB validate failed".
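A quick way to check what a partition actually holds from the rescue box (my own commands, not from any guide):
Code:
# does it look like a filesystem or like a RAID member?
file -s /dev/sda2
blkid /dev/sda2
# if it is an md member, show the superblock version and data offset
mdadm --examine /dev/sda2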

I do deviate from the guide here: http://ftp.us.debian.org/debian/pool/ma ... 35_all.deb is no longer available, so I've been using http://ftp.us.debian.org/debian/pool/ma ... 53_all.deb. Should I try http://ftp.us.debian.org/debian/pool/ma ... e1_all.deb, or look elsewhere for debootstrap_1.0.35_all.deb?

Also, I could try using a single disk. Maybe after I succeed in transitioning to Debian I can install mdadm; apparently one can create a RAID array with a member "missing" and add that member after the fact.

--- edit 21-09-13
During the intermediate step, using the sda6 partition for the filesystem with the initial initrd.buffalo (initrd.buffalo.initial), the power light never stopped blinking. Because I wasn't getting a DHCP lease, I couldn't see that the system was in fact behaving quite normally. Oy!

Re the deviation: I ultimately used the same version of debootstrap that VolleMelk used. I found it easily via Google; I think Ubuntu still hosts it. Regardless, I now doubt this was an issue at all.


Last edited by mickleby on Sun Sep 22, 2013 3:03 am, edited 1 time in total.

PostPosted: Sun Sep 22, 2013 12:04 am 

Joined: Sun Jul 07, 2013 9:59 pm
Posts: 17
Big breakthrough!

All this time my problem was DHCP. My router had an old reservation that was preventing the lease, and because I couldn't log in, I thought the boot was failing. WRONG! :-) I mounted the drive from another system (after I worked out how to mount an md v1.2 member, http://forum.buffalo.nas-central.org/viewtopic.php?f=77&t=27617) and found the logs under /var/log... The rest was straightforward!
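For anyone hitting the same wall: a v1.2 member can't simply be mounted; you assemble the (degraded) array first and mount that. Roughly this (device names are an example):
Code:
# force-start the array even though only one member is present
mdadm --assemble --run /dev/md1 /dev/sda2
# then mount read-only and go hunting in /var/log
mount -o ro /dev/md1 /mnt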

So... like the n00b I am... I'm finished with the debootstrap and ready to jump into building the kernel (once I catch my breath)!


PostPosted: Sun Sep 22, 2013 2:37 am 

Joined: Sun Jul 07, 2013 9:59 pm
Posts: 17
w00t

The kernel build guide was clear and straightforward! No hiccups whatsoever.

Great job, VolleMelk, and thank you for your time and your guide!


PostPosted: Sun Sep 21, 2014 10:57 pm 

Joined: Sun Sep 21, 2014 3:23 pm
Posts: 6
A year later! :p I can't for the life of me get my kernel to load. I think it compiles fine, and I get a nice, calm, reassuring steady blue light when I reboot: no flashing, nothing red. On the down side, I can't SSH into the box or find it on my router, which kind of... sucks... for a NAS device. I saw that VolleMelk had this problem at one point in his process. I'm on brick #342 at this point; does anyone have pointers on solving this? I've also tried compiling a later kernel with http://hajduk.one.pl/Buffalo%20LS-WVL%20kernel%203.11.10/ (and actually an even later version from the same website, but that page seems to be down today). That method gives me no joy whatsoever.

I've got an LS-WVL that came with two 3 TB drives, so it has no NAND. I don't suppose it's possible to find/install a kernel with LSUpdater? :)

