Thank you for your answer. I'm glad to have you listening on the other side.
I know the feeling. I hope I can help you.
On lines 178-187, the fstab file you mentioned: can you remember what content you entered before you rebooted? Can you confirm you actually want your NAS in RAID0? If you want to run it differently, it's easiest to change with the Buffalo web interface.
No, no. My mistake. I want RAID1. I'll explain the whole procedure below.
OK, got it.
What firmware are you running? I am surprised to read that your stock firmware is using ext3 instead of xfs. As far as I know, xfs is the default on the LS-WVL.
OK, there might be something here; I hope you can spot it for me...
After bricking my NAS, I formatted the disks on another computer with GParted, as suggested by
kenatonline in post 4 of viewtopic.php?f=77&t=24581
Actually, the partitions I have now are copied from your suggestion at viewtopic.php?f=50&t=26775&start=15#p161745
I configured both disks exactly the same.
Then I reinstalled the firmware 164mod1a, downloaded from the site you suggested, on disk 1.
I also followed kenatonline here.
Next, I rebooted the NAS and ran LSUpdater.exe under Wine on my Unix machine.
After that I had a new working NAS with 1.64 firmware, RAID0 and root access.
However, I want to install Debian in a RAID1 configuration. Here I started following your guide.
In Step 1 you initiate the RAID1 setup, interrupt it to skip the 980-minute wait, and restart the
NAS with disk 2 removed. I am in that situation right now: I've done lines 102-104 of your guide (http://pastebin.com/H6rEZ5ge)
and I am ready to start Step 3.
That explains it! I have updated my guide on TFTP for the LS-WVL here: http://pastebin.com/5sY24CTQ
My procedure is slightly different from what you described: I only delete the partitions, I do not recreate them. I do this with both drives, re-insert both drives into the NAS and boot. The NAS will then boot into EM mode. If your next try also results in a bricked NAS, could you use my guide to unbrick it? I am curious whether the Buffalo firmware (Shonk's mod1) formats the drive as xfs (I am fairly certain it does).
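By the way, you can answer the xfs question directly on your current install. A quick sketch (assuming the data array is /dev/md2 as on my box; blkid may be absent from the stock busybox, so a /proc/mounts fallback is included):

```shell
#!/bin/sh
# Print the filesystem type of the data array.
# /dev/md2 is an assumption taken from my own layout.
blkid /dev/md2 2>/dev/null
# Fallback if blkid is missing: field 3 of each /proc/mounts line
# is the filesystem type the kernel mounted it as.
grep md2 /proc/mounts | awk '{print $3}'
```

If it prints xfs while your fstab says ext3 (or vice versa), we have found the conflict.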
Now, to the files:
Can you also give me the output of the mdadm.conf file (with cat)?
root@Nas:~# cat /etc/mdadm.conf
ARRAY /dev/md0 UUID=efa653c5:5643d352:b78fdf7e:1eae21f5
ARRAY /dev/md/1 metadata=1.2 UUID=ed9ac047:86b49608:2380fe0b:2169e16a name=LS-WVL-EM44E:1
ARRAY /dev/md/10 metadata=1.2 UUID=4768c187:c02d02c3:77e273b0:5e26b024 name=LS-WVL-EM44E:10
ARRAY /dev/md/2 metadata=1.2 UUID=7d5ccedc:af32d432:27aa3914:17a032ff name=Nas:2
Same arrays as mine. Should not be a problem here.
The file /etc/mdadm/mdadm.conf contains just one line:
Buffalo modified /etc/mdadm/mdadm.conf; we are copying /etc/mdadm.conf (I double-checked it just now). This is good.
root@Nas:~# cat /etc/fstab
/dev/root / ext3 defaults 1 1
proc /proc proc defaults 0 0
devpts /dev/pts devpts gid=4,mode=620 0 0
root@Nas:~# ls -l /dev/root /dev/md1
brw-r----- 1 root disk 9, 1 May 2 2013 /dev/md1
lrwxrwxrwx 1 root root 3 May 2 2013 /dev/root -> md1
I see that your boot partition is not mounted anywhere.
Also, please cat /etc/mdstat
My NAS does not have that file:
cat: can't open '/etc/mdstat': No such file or directory
Can you locate the file with find?
find / -name mdstat
(The kernel's RAID status normally lives at /proc/mdstat, so find should turn it up there.)
I do hope you find out what I'm doing wrong.
I hope so. Some things that come to mind:
* As stated before, your boot partition is not mounted anywhere according to /etc/fstab. Maybe the Buffalo software mounts it, but can you check the directory contents of /boot/ on your (stock) NAS? The guide copies files from it in preparation for the Debian installation, so I assume it's accessible. Also important: in Step 5 (Installation) we replace boot files (initrd). If the boot partition is not mounted at /boot/, we can't replace them. Please post the output of mount so we can check.
* The fstab file you create for Debian: you told me that you formatted /dev/md2 as ext3, but the fstab file says xfs. The kernel may well refuse the mount because of that mismatch. You told me you are now on fresh stock firmware and your partitions are formatted as ext3. If that is still the case, replace everything in my guide that mentions xfs with ext3 and give it a go. The worst that can happen is that you brick it.
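For the ext3 route, the Debian fstab from the guide would then look roughly like this (a sketch assuming /dev/md1 as root and /dev/md2 as the data array, matching the partitioning above; the /mnt/data mount point is just an illustrative name):

```
/dev/md1  /          ext3    defaults,noatime  0 1
/dev/md2  /mnt/data  ext3    defaults,noatime  0 2
proc      /proc      proc    defaults          0 0
devpts    /dev/pts   devpts  gid=4,mode=620    0 0
```

Adjust the devices and mount points if yours differ; the key change is ext3 in the third field wherever my guide says xfs.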
* Are you using DHCP? In your DHCP server, do you have your NAS set to a fixed IP? I recommend this; that way you cannot 'lose' your NAS. I have updated Step 5: Installation with this:
# In this guide, you will reboot three times. From here on, the LEDs will not function anymore. Your blue
# LED will continue to blink rapidly as if the NAS is still booting. This is normal. The only way to check
# whether your NAS is done booting is by pinging it. Make sure you have your NAS set to a fixed IP address
# on DHCP (or configured so in /etc/network/interfaces) and continue to ping it. On linux/mac:
# ping x.x.x.x
# On Windows, use -t so that ping continues past the default four attempts:
# ping x.x.x.x -t
# Abort ping with CTRL+C
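On the first bullet: a minimal sketch for checking whether the boot partition is actually mounted, by scanning /proc/mounts (assuming a POSIX shell, as on both the stock firmware and Debian; the BOOT_DIR variable is just for illustration):

```shell
#!/bin/sh
# Check whether a directory is a mount point by scanning /proc/mounts.
# The second space-separated field of each /proc/mounts line is the
# mount point, hence the spaces inside the grep pattern.
BOOT_DIR=/boot
if grep -q " $BOOT_DIR " /proc/mounts; then
    echo "$BOOT_DIR is mounted"
else
    echo "$BOOT_DIR is NOT mounted"
fi
```

If it reports NOT mounted, the initrd replacement in Step 5 would land on the root filesystem instead of the real boot partition.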
I hope this helps you to the next phase! I will check in again tomorrow around lunch, and after dinner.